Tag Archive for pythonlangchainchromadb

RAG module – Batch size exceeds maximum batch size

I am using Google Gen AI because it is free (suggestions for a better free option are welcome), together with the Google Gen AI embedding model.
I am trying to split a file into chunks, embed the chunks with GoogleGenAIEmbedding, and store them in a Chroma vector DB on my local machine. With a smaller data set this works well, but when I try adding a bigger set of files it stops and gives me an error.
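Chroma rejects a single insert that exceeds its maximum batch size, so a common workaround is to add the chunked documents in several smaller batches. A minimal sketch of the batching itself, independent of Chroma (the 5000 limit and the `vectordb` / `split_docs` names below are assumptions for illustration, not taken from the question):

from itertools import islice

def batched(items, batch_size):
    """Yield successive lists of at most batch_size items."""
    it = iter(items)
    while True:
        chunk = list(islice(it, batch_size))
        if not chunk:
            return
        yield chunk

# Hypothetical usage with a Chroma vector store:
# for chunk in batched(split_docs, 5000):
#     vectordb.add_documents(chunk)

The actual limit depends on the installed Chroma version; newer clients expose it as a property on the client, so reading it at runtime is safer than hard-coding a number.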

getting error while using runnables in langchain with chromadb and ConversationalRetrievalChain

from langchain.chains import VectorDBQA
from langchain.llms import OpenAI

# Now we can load the persisted database from disk, and use it as normal.
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)

from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0.8),
    vectordb.as_retriever(search_kwargs={"k": 3}),
    memory=memory,
)

from langchain_core.runnables import ConfigurableFieldSpec
from langchain_community.chat_message_histories import ChatMessageHistory
[…]
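The truncated snippet is heading toward wiring a per-session ChatMessageHistory into the chain via runnables. Independent of LangChain, the underlying pattern is a factory that returns one history object per session id. A library-free sketch of that pattern (all class and function names here are illustrative stand-ins, not LangChain's API):

class InMemoryHistory:
    """Minimal stand-in for a chat message history: an append-only message list."""
    def __init__(self):
        self.messages = []

    def add_message(self, role, content):
        self.messages.append({"role": role, "content": content})

_store = {}

def get_session_history(session_id):
    """Return the history for this session, creating it on first use."""
    if session_id not in _store:
        _store[session_id] = InMemoryHistory()
    return _store[session_id]

In LangChain terms, a factory like this is what gets passed to RunnableWithMessageHistory, with ConfigurableFieldSpec declaring `session_id` as the configurable field; the factory must return the same history object for the same session id on every call.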