Hope this finds you well.
I’m fine-tuning an instruct model (Mistral 7B) on a 500-row dataset with instruction, input, and output (explanation) columns. During training, my prompt consisted of the instruction and the input. All instructions in the dataset are identical, and every output follows the same format: step-by-step analysis, strategy summary, and conclusion. Once fine-tuned, the model explains the input quite well, but the moment I ask it to do something else it does not adhere to the prompt, especially when the instruction is different but the input has the same format as before.
Example –
training dataset -
instruction = analyse the following passage and explain the key point of the last sentence
input = my_passage
output = expected_output # has sentence-by-sentence analysis, summary, key_point
This works brilliantly if I use the model on the same task, but if I do the following –
instruction = analyse the following passage and provide me a summary.
input = my_passage # same as my previous input
The model’s output still follows the same format – sentence-by-sentence analysis, summary, key_point – even when I explicitly instruct it not to generate the key_point section.
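For context, this is roughly how I render each training row into Mistral’s instruct template before tokenisation (a simplified sketch – the helper and field names are just illustrative):

```python
# Simplified sketch of my prompt formatting (helper and field names are illustrative).
def format_row(row):
    prompt = f"[INST]{row['instruction']}\n{row['input']}[/INST]"
    # Every target has the same three sections:
    # sentence-by-sentence analysis, summary, key_point.
    return prompt + row["output"]
```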
I was reading the Orca 2 paper from Microsoft, which mentions that when we train small models we are really teaching the model how to think for a particular task – slow reasoning, direct answer, recall-then-generate, etc. Since I have only one type of task/prompt and every row of my dataset has the same output format, is that why my model behaves this way? One other thing I noticed – if I ask it something completely different, like:
[INST]Assume you are a geography expert. You will be provided with the name of a country. What's the capital of the country?
Australia[/INST]
it answers the question, but includes an analysis section and a conclusion as well. It seems evident that the model has learned to think in only one way.
My question is: should I add a few different tasks and prompts to diversify the dataset and fix this behaviour? Secondly, in SFTTrainer we don’t specify the loss function it will use; a few articles I read suggested it is cross-entropy loss, which makes sense, but is there proper documentation for this? (I’ve sketched my current understanding below.) Lastly, what would you suggest for gaining in-depth knowledge of the various parameters – training arguments, generation settings like top_p, and config options like lora_alpha or the bnb_config fields – other than the Hugging Face documentation, which I have already been through?
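On the loss question, here is a minimal sketch of my current understanding (not taken from the TRL docs): SFTTrainer ultimately just calls the model’s forward pass with labels, and transformers causal-LM models compute token-level cross-entropy internally whenever labels are passed. Using a tiny model so the demo actually runs:

```python
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny model purely for the demo; the same mechanism applies to Mistral 7B.
tok = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")

ids = tok("What's the capital of Australia?", return_tensors="pt").input_ids
out = model(input_ids=ids, labels=ids)  # passing labels triggers the built-in loss

# Reproduce the built-in loss by hand: predict token t+1 from tokens <= t.
logits = out.logits[:, :-1, :]   # drop the final position
targets = ids[:, 1:]             # drop the first token
manual = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
print(out.loss.item(), manual.item())  # should match up to float error
```

And for the parameter question, these are the kinds of knobs I mean (the values are just what I’m currently using, not recommendations – note that lora_alpha belongs to LoraConfig and top_p is a generation-time setting, not a training one):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantise base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantisation
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the matmuls
    bnb_4bit_use_double_quant=True,         # also quantise the quantisation constants
)

peft_config = LoraConfig(
    r=16,            # rank of the LoRA update matrices
    lora_alpha=32,   # scaling factor: the update is scaled by lora_alpha / r
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
# top_p is nucleus sampling at generation time, e.g. model.generate(..., top_p=0.9)
```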
Thanks a lot in advance and best wishes.