I’m currently encountering an issue while attempting to retrieve chat responses from multiple indexes in my project. Here’s a brief overview of the situation:
Problem: Despite conducting thorough research, I haven’t found a suitable solution for fetching chat responses from multiple indexes simultaneously.
Objective: My goal is to efficiently collect chat responses from various indexes within my application to enhance the user experience.
Below is a snippet of the code I’m currently using to handle chat requests and create indexes:
async def handle_chat_request(request: Request, data: ChatData) -> StreamingResponse:
    if data.config.model in OLLAMA_MODELS:
        return await _ollama_chat(request, data)
    elif data.config.model in OPENAI_MODELS:
        return await _openai_chat(request, data)
    else:
        raise HTTPException(status_code=400, detail="Invalid Model Name.")
async def _openai_chat(request: Request, data: ChatData) -> StreamingResponse:
    print("Received an OpenAI chat request:", request, data)
    Settings.llm = OpenAI(model=data.config.model, temperature=0.2)
    Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
    return await _chat_stream(request, data)
async def _chat_stream(request: Request, data: ChatData) -> StreamingResponse:
    try:
        index = _get_or_create_index(data.config.model, data.datasource)
        chat_engine = index.as_chat_engine()
        response = chat_engine.stream_chat(data.message, [])

        async def event_generator():
            for token in response.response_gen:
                if await request.is_disconnected():
                    break
                yield convert_sse(token)
            yield convert_sse({"done": True})

        return StreamingResponse(
            event_generator(),
            media_type="text/event-stream",
        )
    except Exception as e:
        full_exception = traceback.format_exc()
        logger.error(f"{data.config.model} chat error: {e}\n{40 * '~'}\n{full_exception}")
        raise HTTPException(status_code=500, detail="Internal Server Error")
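For context, the `_get_or_create_index` helper referenced above is essentially a per-(model, datasource) cache so each index is only built once per process. Here is a library-agnostic sketch of that caching pattern; the `build` callable and cache names are placeholders of mine, not LlamaIndex API:

```python
from typing import Any, Callable, Dict, Tuple

# In-process cache so each (model, datasource) pair builds its index only once.
_INDEX_CACHE: Dict[Tuple[str, str], Any] = {}

def get_or_create_index(
    model: str,
    datasource: str,
    build: Callable[[], Any],  # e.g. lambda: VectorStoreIndex.from_documents(...)
) -> Any:
    """Return the cached index for (model, datasource), building it on first use."""
    key = (model, datasource)
    if key not in _INDEX_CACHE:
        _INDEX_CACHE[key] = build()
    return _INDEX_CACHE[key]
```

In the real helper, `build` would load a persisted index from storage if one exists and otherwise construct and persist a new one.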
Initially, I attempted to combine documents from multiple sources into a single index. However, this hurt the accuracy of the responses, so I need to refactor. Below is the code for that initial approach:
def generate_web_index(website_url: str):
    return BeautifulSoupWebReader().load_data(urls=[website_url])

document1 = SimpleDirectoryReader(ds_data_dir).load_data()
document2 = generate_web_index("https://github.com")
index = VectorStoreIndex.from_documents(
    document1 + document2,
    show_progress=True,
    storage_context=StorageContext.from_defaults(),
    embed_model=Settings.embed_model,
)
index.storage_context.persist(ds_storage_dir)
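One direction I am considering instead of merging all documents into one index is to keep one index per source and query each separately, then combine the per-source answers. Here is a minimal stand-in sketch of that dispatch pattern; `StubIndex` and the canned answers are placeholders standing in for real `VectorStoreIndex` objects, not actual LlamaIndex code:

```python
from typing import Any, Dict

def query_all_indexes(question: str, indexes: Dict[str, Any]) -> Dict[str, str]:
    """Query each named index separately and collect the answer per source."""
    answers: Dict[str, str] = {}
    for name, index in indexes.items():
        engine = index.as_query_engine()  # same method a real index would expose
        answers[name] = str(engine.query(question))
    return answers

# Stub standing in for a real index, so the dispatch logic can be exercised alone.
class StubIndex:
    def __init__(self, canned: str):
        self._canned = canned

    def as_query_engine(self):
        return self

    def query(self, question: str) -> str:
        return f"{self._canned}: {question}"

results = query_all_indexes("what changed?", {
    "docs": StubIndex("docs-answer"),
    "web": StubIndex("web-answer"),
})
```

Is this separate-indexes-plus-combine approach reasonable, or is there a better built-in way to route a chat query across indexes?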
I’m seeking advice on how to improve this approach and handle multiple indexes effectively. Any insights or suggestions would be greatly appreciated.
Thank you.