LlamaIndex parallel ingestion fails with a token-limit-exceeded error when using an externally deployed embedding model
I am using the test code below to try parallel LlamaIndex ingestion. The code is taken from this example in the docs: https://docs.llamaindex.ai/en/stable/examples/ingestion/parallel_execution_ingestion_pipeline/