I have downloaded a .gguf file for Llama 3 and want to run the model locally. I have tried LM Studio, Ollama with locallm, and Open WebUI, but I haven't had success with any of them. Does anyone know how to do this?
Current problems:
- Ollama and locallm work well, but only when you pull and download LLMs through their CLI; I couldn't find a way to import a local .gguf file.
- LM Studio seems like it could solve my problem, but for reasons I can't figure out it doesn't work.
- Note: I want to run on CPU only (see the sketch below for the kind of setup I'm after).
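
For reference, the kind of thing I'm hoping to end up with is a minimal llama-cpp-python script like this sketch. The model filename and thread count are placeholders for my setup, and I haven't confirmed this works on my machine yet:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path and n_threads below are placeholders for my setup.
from llama_cpp import Llama

llm = Llama(
    model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder: local .gguf file
    n_ctx=2048,       # context window size
    n_threads=8,      # CPU threads; adjust to the machine
    n_gpu_layers=0,   # 0 = run entirely on CPU, nothing offloaded to GPU
)

output = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(output["choices"][0]["text"])
```

If there's a way to get the equivalent working through LM Studio, Ollama, or Open WebUI with a locally downloaded file, that would work for me too.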