I want to create a separate index for each city, but I get a `KeyError` about a node id. This is my code; the error occurs at the `retrieve` line:
```python
nodes_list = []
for i in range(len(documents)):
    nodes_list.append(splitter.get_nodes_from_documents([documents[i]], show_progress=True))

for i in range(len(nodes_list)):
    vector_store = FaissVectorStore(faiss_index=faiss_index)
    storage_context = StorageContext.from_defaults(vector_store=vector_store)
    index = VectorStoreIndex(nodes=nodes_list[i], storage_context=storage_context)
    index.storage_context.persist(persist_dir=f'/content/chunked-province-sep1/province{i}')
```
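One thing I suspect: the same `faiss_index` object is reused for every province, so each later `persist` call snapshots an index that already contains the earlier provinces' vectors. A minimal sketch of what I mean (`FlatIndex` is a stand-in for `faiss.IndexFlatL2` so it runs without faiss installed; the sizes are made up):

```python
# Stand-in for faiss.IndexFlatL2: only tracks how many vectors were added.
class FlatIndex:
    def __init__(self):
        self.ntotal = 0          # like faiss's ntotal
    def add(self, n_vectors):
        self.ntotal += n_vectors

# Reusing ONE index across the loop (what my code does): each province's
# persisted snapshot also contains all earlier provinces' vectors.
shared = FlatIndex()
snapshot_sizes = []
for n in [5, 5, 5]:              # three provinces, 5 nodes each
    shared.add(n)
    snapshot_sizes.append(shared.ntotal)
print(snapshot_sizes)            # [5, 10, 15]

# Creating a FRESH index per province keeps each snapshot self-contained:
fresh_sizes = []
for n in [5, 5, 5]:
    idx = FlatIndex()            # would be a new faiss.IndexFlatL2(d) per iteration
    idx.add(n)
    fresh_sizes.append(idx.ntotal)
print(fresh_sizes)               # [5, 5, 5]
```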
Then I load one province's index and retrieve:

```python
persist_dir = '/content/chunked-province-sep1/province3'
vector_store = FaissVectorStore.from_persist_dir(persist_dir)
storage_context = StorageContext.from_defaults(vector_store=vector_store, persist_dir=persist_dir)
index = load_index_from_storage(storage_context=storage_context)

retriever = VectorIndexRetriever(index=index, similarity_top_k=5)
response = retriever.retrieve('somestring')  # <------ error at this line

sum_response = []
for i in range(5):
    print('Response:', response[i].text, '\n')
    print('Score:', response[i].score)
    print('-' * 50, '\n\n')
    sum_response.append(response[i].text)
context = response[0].text
```
The error does not occur for province0, but every other province raises it:
```
KeyError                                  Traceback (most recent call last)
<ipython-input-105-bfd8906da473> in <cell line: 7>()
      5 # retriever = SummaryIndexLLMRetriever(index=index)
      6
----> 7 response = retriever.retrieve(llmGenForRetrieve)
      8 sum_response = []
      9 for i in range(5):

6 frames
/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/retrievers/retriever.py in <listcomp>(.0)
    142         assert isinstance(self._index.index_struct, IndexDict)
    143         node_ids = [
--> 144             self._index.index_struct.nodes_dict[idx] for idx in query_result.ids
    145         ]
    146         nodes = self._docstore.get_nodes(node_ids)

KeyError: '8'
```
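If the shared `faiss_index` is the cause, I think the `KeyError: '8'` follows directly: the persisted FAISS snapshot for a later province can return vector positions from earlier provinces, but that province's `nodes_dict` only maps its own node ids. A plain-Python illustration (the id layout is my assumption, not taken from llama_index internals):

```python
# Assumed layout: the shared index grows across provinces, so province i's
# persisted snapshot contains ids 0..(cumulative total - 1), while its
# nodes_dict only maps the ids added during iteration i.
counts = [3, 3, 3]        # nodes per province (illustrative)
snapshots = []            # (ids in persisted snapshot, that province's nodes_dict)
total = 0
for n in counts:
    nodes_dict = {str(total + j): f"node-{total + j}" for j in range(n)}
    total += n
    snapshot_ids = [str(j) for j in range(total)]  # everything added so far
    snapshots.append((snapshot_ids, nodes_dict))

# province0: every id its snapshot can return is in its nodes_dict -> works
ids0, dict0 = snapshots[0]
print(all(i in dict0 for i in ids0))          # True

# province1: its snapshot still holds province0's vectors, whose ids are
# missing from its nodes_dict -> nodes_dict[idx] raises KeyError on retrieve
ids1, dict1 = snapshots[1]
print([i for i in ids1 if i not in dict1])    # ['0', '1', '2']
```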
I tried restarting the Colab kernel, but it didn't help.