I’m currently starting to learn about LLMs and RAG using open source software.
I have LocalAI installed and running on my server, and I can reach it at 0.0.0.0:8000.
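LocalAI itself seems to answer fine; a quick check against its OpenAI-compatible model listing works (no API key is set on my instance):

```python
import requests

# Sanity check that LocalAI answers on its OpenAI-compatible API
resp = requests.get("http://0.0.0.0:8000/v1/models")
print(resp.json())
```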
Now I want to use it to run the multilingual-e5-large-instruct model.
I tried to install it via Python using the transformers library, but now it's running at 0.0.0.0:8000 without any connection to LocalAI. I suspect the installation process must be different for the model to become available inside LocalAI, but unfortunately I don't understand how to do that. Can anyone please point me in the right direction?
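For context, this is roughly the embedding code I ran (reconstructed from memory, so details may be slightly off; it follows the usage example on the model card). I then put a small web server around it, which is the thing now sitting on 0.0.0.0:8000 separately from LocalAI:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "intfloat/multilingual-e5-large-instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def average_pool(last_hidden_states, attention_mask):
    # zero out padding tokens, then average over the sequence
    hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

texts = ["Instruct: Given a web search query, retrieve relevant passages\nQuery: what is RAG?"]
batch = tokenizer(texts, max_length=512, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)
embeddings = F.normalize(average_pool(outputs.last_hidden_state, batch["attention_mask"]), p=2, dim=1)
print(embeddings.shape)  # (1, 1024)
```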
Ultimately, I want to use this embedding model in AnythingLLM alongside a large Llama 3 model served through Ollama.
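To make the goal concrete: once the model is available in LocalAI, I'd expect to query it through LocalAI's OpenAI-compatible /v1/embeddings endpoint, something like this (the model name below is just my guess for how it would be registered):

```python
import requests

# "multilingual-e5-large-instruct" is an assumed registration name;
# the real name depends on how the model gets installed in LocalAI.
payload = {
    "model": "multilingual-e5-large-instruct",
    "input": "query: how do I set up RAG?",
}
resp = requests.post("http://0.0.0.0:8000/v1/embeddings", json=payload)
print(resp.json()["data"][0]["embedding"][:5])  # first few dimensions
```

As far as I understand, AnythingLLM could then point its embedding provider at that same endpoint.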
Thanks!