Seq2seq trainer doesn’t execute after UserWarning about dimension
I am trying to do machine translation from Hindi to Sanskrit using the NLLB model, but I get the warning below and the training does not progress:
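For context, a minimal sketch of the kind of NLLB fine-tuning setup the question describes; the checkpoint name, the toy sentence pair, and the column names are assumptions for illustration, not the original code:

```python
# Hedged sketch of an NLLB Hindi→Sanskrit Seq2SeqTrainer setup.
# Checkpoint, toy data, and column names are assumed, not the poster's code.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/nllb-200-distilled-600M"
# NLLB uses FLORES-200 codes: hin_Deva for Hindi, san_Deva for Sanskrit.
tokenizer = AutoTokenizer.from_pretrained(
    checkpoint, src_lang="hin_Deva", tgt_lang="san_Deva"
)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Tiny toy parallel corpus purely for illustration.
raw = Dataset.from_dict({
    "hi": ["यह एक उदाहरण वाक्य है।"],
    "sa": ["इदम् उदाहरणवाक्यम् अस्ति।"],
})

def preprocess(batch):
    # text_target routes the Sanskrit side through the tokenizer's
    # target-language settings and produces the labels in one call.
    return tokenizer(
        batch["hi"], text_target=batch["sa"],
        max_length=128, truncation=True,
    )

tokenized = raw.map(preprocess, batched=True, remove_columns=["hi", "sa"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="nllb-hi-sa"),
    train_dataset=tokenized,
    # The collator pads inputs and labels per batch; with a plain list
    # collator instead, mismatched label shapes often surface as warnings.
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```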
Debugging help: “prefetch_factor option could only be specified in multiprocessing.let num_workers > 0 to enable multiprocessing”
I’m trying to train a transformers model. Everything runs smoothly (loading the data, initializing the Trainer class, and so on) up until the trainer.train() call, at which point I get this error:
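The message itself comes from PyTorch’s DataLoader rather than from transformers: prefetch_factor is rejected whenever num_workers is 0, since there are no worker processes to prefetch into. A minimal sketch of the constraint and the Trainer-side knobs (the TrainingArguments values here are illustrative assumptions):

```python
# Hedged sketch: reproduce and avoid the DataLoader constraint behind
# the quoted error, then the assumed equivalent Trainer settings.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import TrainingArguments

ds = TensorDataset(torch.arange(10))

# This combination raises the ValueError quoted in the title:
# DataLoader(ds, num_workers=0, prefetch_factor=2)

# Either leave prefetch_factor unset for single-process loading...
single = DataLoader(ds, num_workers=0)

# ...or enable multiprocessing so prefetching has workers to feed.
multi = DataLoader(ds, num_workers=2, prefetch_factor=2)

# When the Trainer builds the loaders, the corresponding knobs live on
# TrainingArguments; dataloader_prefetch_factor is assumed to be
# available (it exists in recent transformers releases).
args = TrainingArguments(
    output_dir="out",
    dataloader_num_workers=2,      # > 0 enables multiprocessing
    dataloader_prefetch_factor=2,  # only valid together with workers > 0
)
```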
LLaVA fine-tuning: “The input provided to the model are wrong. The number of image tokens is 0 while the number of image given to the model is 8”
I’m trying to fine-tune LLaVA on a custom dataset, following the code presented here: https://colab.research.google.com/drive/10NLrfBKgt9ntPoQYQ24rEVWU-2rr1xf1#scrollTo=4ycDwt9G1RWN. I’ve been debugging and adding print statements at each step, but I’m not sure what I’m doing wrong to get this error:
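A sketch of the usual cause of this count mismatch, assuming the llava-hf/llava-1.5-7b-hf processor rather than the notebook’s exact setup: every image passed to the processor needs a matching <image> placeholder in the prompt text, otherwise the model sees 0 image tokens while pixel_values still carries the images.

```python
# Hedged sketch: the <image> placeholder must appear in the text once
# per image, or LLaVA reports 0 image tokens vs. N images.
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
image = Image.open(
    requests.get(
        "http://images.cocodataset.org/val2017/000000039769.jpg",
        stream=True,
    ).raw
)

# No <image> placeholder: downstream, the model counts 0 image tokens
# while receiving 1 image, which is exactly the reported mismatch.
bad_prompt = "USER: Describe the picture. ASSISTANT:"

# One <image> per image keeps the two counts consistent.
good_prompt = "USER: <image>\nDescribe the picture. ASSISTANT:"

inputs = processor(text=good_prompt, images=image, return_tensors="pt")
```

The same rule applies inside a fine-tuning collator: with a batch of 8 images, each text in the batch needs its own <image> placeholder, otherwise the token count stays at 0 while 8 images are handed to the model.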