I’m trying to fine-tune open-source LLMs; for now let’s stick with the Mistral-7B-Instruct model.
My task is as follows: I have emails that represent “price requests” for shipments, sent by our clients.
In the emails, the clients tell us the pickup address, the shipper, the consignee, etc.
My initial idea was to train several adapters using DoRA, each trained to extract a different entity from the email.
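For reference, the adapters are set up roughly like this with PEFT (a sketch; the rank, alpha, and target modules here are illustrative, not my exact values):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# One adapter per entity type; use_dora=True turns the LoRA adapter into DoRA.
dora_config = LoraConfig(
    r=16,            # illustrative rank
    lora_alpha=32,   # illustrative scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_dora=True,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, dora_config)
```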
My dataset was created as follows: for each example I have the email, and an annotation of the form “Based on the email, I’ve found this [ENTITY]: entity_here”.
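So a single training example looks roughly like this (the email text and system message wording are illustrative):

```python
email_text = "Hello, we'd like a quote for a shipment from ..."  # illustrative

example = {
    "messages": [
        {"role": "system", "content": "Extract the requested entity from the client's email."},
        {"role": "user", "content": email_text},
        {"role": "assistant", "content": "Based on the email, I've found this [PICKUP_ADDRESS]: 12 Example St., Springfield"},
    ]
}
```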
I’ve created a system message and a chat_template so the dataset is formatted in a way Mistral will accept, using this chat template:
"{%- for message in messages %}"
"{%- if message['role'] == 'system' -%}"
"{{- '<s>' + message['content'] -}}"
"{%- else -%}"
"{%- if message['role'] == 'user' -%}"
"{{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}}"
"{%- else -%}"
"{{-'' + message['content'] + '</s>' -}}"
"{%- endif -%}"
"{%- endif -%}"
"{%- endfor -%}"
"{%- if add_generation_prompt -%}"
"{{-''-}}"
"{%- endif -%}"
Now to the problem. The model seems to learn what it needs to extract: it generates decent answers in the same format as the assistant turns it was trained on. The problem is that after it generates the answer, it keeps generating additional text about the email that is irrelevant to the task, e.g. “please contact us in…”.
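In case it’s relevant, here is roughly how I generate, plus a sanity check on whether the literal `</s>` in the rendered text actually tokenizes to the real EOS id (both are sketches continuing from the snippets above, not my exact code):

```python
# Sanity check: does the literal '</s>' map to the real EOS token id?
ids = tokenizer("</s>", add_special_tokens=False).input_ids
print(ids, [tokenizer.eos_token_id])  # if these differ, the model never sees a true EOS during training

# How I generate (settings are representative, not exact):
prompt = tokenizer.apply_chat_template(
    example["messages"][:2],  # system + user only
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
output_ids = model.generate(**inputs, max_new_tokens=64, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:]))
```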
When I fine-tune GPT-3.5 on the same task, for example, the model extracts exactly what I need, which suggests to me that I’m doing something wrong.
Does anyone have suggestions as to where I went wrong?