Basically, I started off with Llama since I was told to use meta-llama (any model on Hugging Face), but due to my laptop's limited specs, meta-llama never even entered the training phase. I also tried Meta's 1-billion-parameter model, and it still wasn't feasible.
I have a .txt file containing the raw data (not arranged in question:answer pairs), so I wrote a script that converts it into a JSON file where the data is arranged in question:answer pairs. My goal was then to pick a specific model and fine-tune it on this data. My dataset is not vast; it is small or medium at best. So I tried a bunch of models. GPT-2 etc. gave me awful responses, so I tried other models like ALBERT, TinyBERT, and roberta-large, but the issue is that while some of the questions do get properly answered by the bot, a huge chunk of them fail horribly. Some of the answers are just repetitive or incomplete.
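For reference, my conversion script is essentially the following sketch. Note that the `Q:`/`A:` line format shown here is just an illustrative assumption; the regex has to be adapted to however the raw .txt is actually laid out.

```python
import json
import re

def txt_to_qa_pairs(raw_text):
    """Parse raw text with alternating 'Q: ...' / 'A: ...' lines into QA pairs.

    NOTE: the 'Q:'/'A:' prefix format is an assumption for illustration;
    adjust the regex to match the real layout of the source .txt file.
    """
    pairs = []
    question = None
    for line in raw_text.splitlines():
        line = line.strip()
        match = re.match(r"^(Q|A):\s*(.*)$", line, re.IGNORECASE)
        if not match:
            continue  # skip blank lines and anything that isn't a Q/A line
        tag, body = match.group(1).upper(), match.group(2)
        if tag == "Q":
            question = body
        elif tag == "A" and question is not None:
            pairs.append({"question": question, "answer": body})
            question = None
    return pairs

raw = "Q: What is the capital of France?\nA: Paris.\nQ: Who wrote Hamlet?\nA: Shakespeare."
pairs = txt_to_qa_pairs(raw)
print(json.dumps(pairs, indent=2))
```

The resulting JSON list of `{"question": ..., "answer": ...}` objects is what I then feed to the fine-tuning script.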
I tried several methods to make it more efficient (SBERT, TF-IDF, etc.) and even tried modifying the fine-tuning script, but to no avail. I am very new to all of this, so I was wondering how to proceed. Thanks for your time.
I tried several models to train my chatbot; almost all of them are failing.
I expect my chatbot to be properly trained using the JSON file that contains the question:answer pairs, though the data may not be very large.
My laptop has some limitations too (specs, e.g. low RAM: 8 GB).