I am new to Hugging Face and am building a RAG application, so I wanted to ask: can I use a Hugging Face large language model (LLM) free of cost for my task of creating a chatbot with a RAG chain?
If yes, could anyone outline the method / steps for doing so?
Yes you can, provided you either find somewhere to host it for free or host it yourself. If this is only about testing a proof of concept without the need for public availability and you don't have access to any decent hardware, you can use something like a Google Colab Jupyter notebook.
In the code, you can use one model from Hugging Face as your embedding model for chunk/document retrieval, and a generative model from Hugging Face to take the retrieval-augmented prompt and generate an answer.
Check out this example application to see how one could do that in Python using LangChain: https://github.com/EliasK93/BGE-M3-and-Gemma-2-for-retrieval-augmented-generation
If you search online you will find many other tutorials and examples as well.
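As a rough sketch of the two-model setup described above: retrieval boils down to embedding your chunks once, then ranking them by cosine similarity against the embedded query. The model name in the comments is an assumption (any Hugging Face embedding model works), not a recommendation:

```python
import numpy as np

def top_k_by_cosine(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> list[int]:
    """Return indices of the k document vectors most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity of each chunk to the query
    return np.argsort(scores)[::-1][:k].tolist()

# With a Hugging Face embedding model (weights download on first use), e.g.:
#   from sentence_transformers import SentenceTransformer
#   embedder = SentenceTransformer("BAAI/bge-m3")
#   doc_vecs = embedder.encode(chunks)
#   query_vec = embedder.encode(question)
#   context = [chunks[i] for i in top_k_by_cosine(query_vec, doc_vecs)]
```

The retrieved chunks are then pasted into the prompt you send to the generative model.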
I am assuming you want to consume an LLM API via Hugging Face. With a free account, you can use it up to a limit, but eventually you will be rate limited. And a free account is not recommended for production, obviously.
From their page:
The Inference API is free to use, and rate limited. If you need an inference solution for production, check out our Inference Endpoints service. With Inference Endpoints, you can easily deploy any machine learning model on dedicated and fully managed infrastructure. Select the cloud, region, compute instance, autoscaling range and security level to match your model, latency, throughput, and compliance needs.
Models and token pricing:
https://huggingface.co/docs/api-inference/en/index
https://huggingface.co/spaces/philschmid/llm-pricing
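To consume a hosted model through the free (rate-limited) Inference API, a minimal sketch might look like the following. The model name is an assumption, and `HF_TOKEN` must hold an access token from your (free) account:

```python
import os

def build_rag_prompt(question: str, contexts: list[str]) -> str:
    """Assemble a retrieval-augmented prompt from the question and retrieved chunks."""
    joined = "\n".join(f"- {c}" for c in contexts)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # Requires the huggingface_hub package and network access; the model
    # choice here is a placeholder for any hosted text-generation model.
    from huggingface_hub import InferenceClient

    client = InferenceClient(
        model="mistralai/Mistral-7B-Instruct-v0.2",
        token=os.environ["HF_TOKEN"],
    )
    prompt = build_rag_prompt(
        "What is the capital of France?",
        ["Paris is the capital of France."],
    )
    print(client.text_generation(prompt, max_new_tokens=100))
```

Once you hit the free-tier rate limit, the same client code works against a paid Inference Endpoint by pointing it at the endpoint URL instead.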
You can run a large language model (LLM) for free if you have a GPU. For example, with a 14 GB GPU, you can easily run a model from the LLaMA family, though you may need to apply quantization. If you don’t have a high-capacity GPU, you can still run a smaller model like T5 with quantization on a more affordable GPU, but be aware that the quality will be significantly lower.
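To make that memory trade-off concrete, a quick back-of-the-envelope helper (counting only the weights and ignoring activation/overhead memory, which is a simplifying assumption) shows why quantization matters on a 14 GB card:

```python
def approx_weight_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate GPU memory needed just for the model weights, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# A 7B-parameter model at 16-bit precision just fits a 14 GB GPU:
print(approx_weight_gb(7e9, 16))  # 14.0
# Quantized to 4 bits, the same weights need a quarter of that:
print(approx_weight_gb(7e9, 4))   # 3.5

# Loading with 4-bit quantization via transformers (requires the
# bitsandbytes package and a CUDA GPU; the model name is a placeholder):
#   from transformers import AutoModelForCausalLM, BitsAndBytesConfig
#   model = AutoModelForCausalLM.from_pretrained(
#       "meta-llama/Llama-2-7b-hf",
#       quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#   )
```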
Hugging Face offers several services for hosting models, datasets, compute for training, and so on. Hugging Face does not offer an LLM of its own.