I’m using the Nous-Hermes-2-Mixtral-8x7B-DPO model via the Hugging Face Transformers library within a RAG (Retrieval-Augmented Generation) application that I’m deploying with LangChain. Most of my documents are web pages that were scraped and converted to markdown for indexing. However, I’m running into an issue where a significant number of queries that request table generation hit the max-tokens limit, even with generous values such as 1,000 or 2,000 tokens.
After some debugging, I discovered that many of these tables start generating correctly but then the model keeps emitting " " (spaces) until it reaches the max-tokens limit.
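For reference, this is roughly the check I used to confirm the pattern on the decoded outputs (a minimal sketch; the toy string below is just shaped like what I observe):

```python
def trailing_whitespace_stats(generated_text: str) -> tuple[int, int]:
    """Return (useful_chars, trailing_whitespace_chars) for a decoded output."""
    stripped = generated_text.rstrip()
    return len(stripped), len(generated_text) - len(stripped)

# Toy example shaped like the outputs I observe: a short table, then padding.
useful, padding = trailing_whitespace_stats("| A | B |\n| 1 | 2 |" + " " * 500)
print(useful, padding)  # -> 19 500
```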
Question
Is this behavior indicative of a common type of hallucination, or is it something unusual? I tried increasing the repetition penalty, but I noticed that the model then struggled with other issues, and the quality of the responses deteriorated.
Unfortunately, I cannot share the exact prompt or the model’s response, since that would reveal data about the client I am working for. However, it is similar to the following:
Prompt:
System: You are an AI assistant, an expert in delivering concise, accurate, up-to-date recommendations, product/service comparisons [Continues...] \n\nYour task is to provide very concise recommendations, comparisons, and answers to questions related to your area of expertise. Do NOT provide any information that is not directly related to the question asked. If you do not know the answer to a question, you should politely respond saying so. Always base your answer on the provided context documents [Continues...] \n--------------\n CONTEXT DOCUMENTS: [HERE ARE crawled web pages in markdown format].
Parameters:
- temperature: 0.2
- repetition_penalty: 1.0
- max_tokens: 1000
Response:
"| Column 1 | Column 2 | Column 3 |n|-----------|----------|----------|n| Row 1 | Data 1 | Data 2 |n| Row 2 | Data 3 | Data 4 |n| Row 3 | Data 5 | Data 6 |n| Row 4 | Data 7 | Data 8 | ...(empty spaces until reach max-tokens limit)"
An important point is that the context documents can themselves contain markdown tables, and I wonder if that could be affecting the model in some way. To isolate the problem to Mixtral, I tried to replicate the error outside the RAG setup, without the context documents: in 100 calls to Mixtral requesting a markdown table of 3 columns by 25 rows, every table was generated without problems.
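That test was essentially the following (a sketch; `generate_reply` is a hypothetical wrapper around the same `model.generate` call as above, just without any context documents in the prompt):

```python
# 100 standalone calls, no RAG context, asking for a fixed-size table.
# generate_reply() is a hypothetical wrapper around the model.generate call above.
prompt = "Generate a markdown table with 3 columns and 25 rows of sample data."
degenerated = 0
for _ in range(100):
    reply = generate_reply(prompt)
    if len(reply) - len(reply.rstrip()) > 50:  # long run of trailing whitespace
        degenerated += 1
print(f"{degenerated}/100 replies degenerated into trailing whitespace")  # got 0/100
```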
Maybe my context documents are guiding the LLM in the wrong direction?
I would appreciate any comments or guidance in the right direction. Thanks in advance!!