I’m attempting to fine-tune the MERT-v0 audio model for a downstream audio effect classification task, which is a multi-label classification problem. I have a dataset of raw audio files along with the audio effects that were applied to each one. I think that part is fine, but I’m trying to add genre and loudness as additional inputs. The inputs to the model would then be:
- Audio (5-second clips resampled to 16 kHz) [floats]
- Genre [string]
- Loudness [float]
And the output should be:
- Audio Effect(s) [int labels for now, mapped back to strings later]
From MERT, my understanding is that the hidden states have shape [13, 768]: 13 layers (the embedding output plus 12 transformer layers), each with 768 feature dimensions.
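To make the shapes concrete, here’s a minimal sketch of how I’m stacking the hidden states, with dummy tensors standing in for the real MERT outputs (the sizes are placeholders, not values from the model):

```python
import torch

# MERT-v0 returns 13 hidden states (embedding output + 12 transformer
# layers), each of shape [batch, time, 768]. Dummy tensors stand in
# for the real model outputs here; 374 time steps is a placeholder.
batch, time_steps, dim = 2, 374, 768
hidden_states = [torch.randn(batch, time_steps, dim) for _ in range(13)]

# Stacking on a new leading dim gives [13, batch, time, 768] ...
stacked = torch.stack(hidden_states)
print(stacked.shape)  # torch.Size([13, 2, 374, 768])

# ... and averaging over time gives [13, batch, 768] per clip,
# which matches the [13, 2, 768] shape in the error below.
per_layer = stacked.mean(dim=2)
print(per_layer.shape)  # torch.Size([13, 2, 768])
```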
My Google Colab code is here.
My issue is that I’m running into a matrix-dimension error in the final training step. Everything else passes, but as soon as I start training I get this error:

```
RuntimeError: Given groups=1, weight of size [1, 13, 1], expected input[13, 2, 768] to have 13 channels, but got 2 channels instead
```
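From reading the PyTorch `nn.Conv1d` docs, I think the mismatch is that `Conv1d` expects channels in dim 1, i.e. `[batch, 13, 768]`, while my tensor is `[13, batch, 768]`. A minimal reproduction, plus the `permute` that seems to fix it (this is my own sketch, not code from the Colab):

```python
import torch
import torch.nn as nn

# A Conv1d weight of size [1, 13, 1] means: 13 input channels, 1 output
# channel, kernel size 1. It expects input shaped [batch, channels=13, length].
conv = nn.Conv1d(in_channels=13, out_channels=1, kernel_size=1)

x = torch.randn(13, 2, 768)  # [layers, batch, features] -- layer dim in the wrong slot

try:
    conv(x)  # PyTorch reads this as batch=13, channels=2 -> channel mismatch
except RuntimeError as e:
    print(e)  # "... expected input[13, 2, 768] to have 13 channels, but got 2 ..."

# Moving the layer dimension into the channel slot fixes it:
out = conv(x.permute(1, 0, 2))  # [2, 13, 768] -> [2, 1, 768]
print(out.shape)  # torch.Size([2, 1, 768])
```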
If someone has some cycles, I could use the help! I’m very new to this area of ML/DL and would appreciate any references to learn more too! Thanks for your help!
I’ve tried changing the model architecture, the matrix sizes, etc., but I’m at a loss for how to achieve this. Every time I change the architecture or a matrix size, another dimension error (or something along those lines) comes up.
When I convert my inputs into tensors, they have these sizes:
- Audio [1, 80000]
- Genre [1, 1]
- Loudness [1, 1]
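In case it helps to see what I’m aiming for, here’s a rough sketch of the kind of fusion head I’m trying to build: the genre string mapped to an embedding index, loudness concatenated onto the pooled audio features, and one logit per effect for multi-label output. All the names and sizes here (`FusionHead`, `NUM_GENRES`, etc.) are my own placeholders, not anything from MERT:

```python
import torch
import torch.nn as nn

NUM_GENRES, NUM_EFFECTS = 10, 5  # placeholder vocabulary sizes

class FusionHead(nn.Module):
    """Combine pooled MERT features with genre + loudness.
    Multi-label output -> one logit per effect (train with BCEWithLogitsLoss)."""
    def __init__(self, audio_dim=768, genre_dim=16):
        super().__init__()
        self.genre_emb = nn.Embedding(NUM_GENRES, genre_dim)
        self.classifier = nn.Linear(audio_dim + genre_dim + 1, NUM_EFFECTS)

    def forward(self, audio_feats, genre_ids, loudness):
        # audio_feats: [batch, 768], genre_ids: [batch], loudness: [batch, 1]
        g = self.genre_emb(genre_ids)                       # [batch, 16]
        x = torch.cat([audio_feats, g, loudness], dim=1)    # [batch, 785]
        return self.classifier(x)                           # [batch, NUM_EFFECTS]

head = FusionHead()
logits = head(torch.randn(2, 768), torch.tensor([3, 7]), torch.randn(2, 1))
print(logits.shape)  # torch.Size([2, 5])
```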
Any help is appreciated!