I am working on a prototype using the langchainrb gem. I am using the Assistant module to implement a basic RAG architecture.
Everything works, and now I would like to customize the model configuration.
In the documentation there is no clear way of setting up the model. In my case, I would like to use OpenAI with the following settings (the equivalent raw API call is sketched after the list):
- temperature: 0.1
- model: gpt-4o
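This is a minimal sketch of the raw ruby-openai request I am trying to express through langchainrb, just to show what I mean: model and temperature are per-request parameters there, not client configuration (the prompt is a placeholder):

```ruby
require "openai"

# The raw ruby-openai call I want to reproduce via langchainrb.
# The message content is a placeholder; model and temperature are
# the settings I actually care about.
client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])

response = client.chat(
  parameters: {
    model: "gpt-4o",
    temperature: 0.1,
    messages: [{ role: "user", content: "Hello" }]
  }
)
puts response.dig("choices", 0, "message", "content")
```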
In the README, there is a mention of using llm_options.
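Guessing from that mention, this was my first attempt; whether temperature is even a valid key here is part of my question:

```ruby
require "langchain"

# My guess based on the README's llm_options mention. It looks like
# llm_options is handed to the underlying ruby-openai client, so possibly
# only client-level keys apply; putting temperature here is my assumption:
llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  llm_options: { temperature: 0.1 } # does this have any effect?
)
```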
If I go to the Langchain::LLM::OpenAI module documentation:
- https://rubydoc.info/gems/langchainrb/Langchain/LLM/OpenAI

it says to check the ruby-openai client configuration here:
- https://github.com/alexrudall/ruby-openai/blob/main/lib/openai/client.rb#L5-L13

But there is no mention of temperature there, for example. Also, the options in the example from the Langchain::LLM::OpenAI documentation are completely different:
```ruby
# ruby-openai client options:
CONFIG_KEYS = %i[
  api_type
  api_version
  access_token
  log_errors
  organization_id
  uri_base
  request_timeout
  extra_headers
].freeze
```
```ruby
# Example from the Langchain::LLM::OpenAI class documentation:
{
  n: 1,
  temperature: 0.0,
  chat_completion_model_name: "gpt-3.5-turbo",
  embeddings_model_name: "text-embedding-3-small"
}.freeze
```
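Given that example, my best guess is that generation settings belong in default_options rather than llm_options. A sketch of what I would expect to write, with key names copied from the documentation example (untested; substituting gpt-4o for the documented model is my change):

```ruby
require "langchain"

# Sketch: passing generation settings via default_options, using the key
# names from the documentation example above. Whether this is the intended
# hook is exactly what I am asking:
llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  default_options: {
    temperature: 0.1,
    chat_completion_model_name: "gpt-4o"
  }
)
```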
- Langchain.rb version: 0.13.4