Configure LLM
Configure LLM API
Before creating the RAG, configure the Language Model (LLM) you want to use. This is found under the AI Config section on the ⚙ Settings page.

Type: Choose between Generative or Extractive.
Deployment Type: Select from OpenAI, AzureOpenAI, Hosted, VertexAI, AnyScale, Hugging Face, or AWS Bedrock.
Model: Specify the language model from your chosen deployment type.
Name: Give your model a name.
URL: For non-OpenAI language models (other than Bedrock), provide the endpoint URL.
API Key: Provide the API key for the language model (not required for Bedrock).
AWS credentials: For Bedrock, enter the AWS region, access key, and secret.
Temperature: Set the LLM temperature (higher values produce more varied output, lower values more deterministic output).
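The fields above can be sketched as a small validation routine. This is a minimal, hypothetical illustration: the dict layout, function name, and per-deployment rules are assumptions drawn from the list above, not a real API of the product.

```python
# Hypothetical sketch of the LLM configuration described above.
# Field names mirror the settings list; the layout and validation
# rules are illustrative assumptions, not the product's actual API.

DEPLOYMENT_TYPES = {
    "OpenAI", "AzureOpenAI", "Hosted", "VertexAI",
    "AnyScale", "Hugging Face", "AWS Bedrock",
}

def validate_llm_config(cfg: dict) -> list:
    """Return a list of problems with an LLM config; empty means valid."""
    errors = []
    if cfg.get("type") not in {"Generative", "Extractive"}:
        errors.append("type must be Generative or Extractive")
    deployment = cfg.get("deployment_type")
    if deployment not in DEPLOYMENT_TYPES:
        errors.append("unknown deployment type: %r" % deployment)
    if deployment == "AWS Bedrock":
        # Bedrock takes AWS region, key, and secret instead of URL/API key.
        for field in ("aws_region", "aws_key", "aws_secret"):
            if not cfg.get(field):
                errors.append("%s is required for Bedrock" % field)
    elif deployment != "OpenAI":
        # Non-OpenAI, non-Bedrock deployments need an endpoint URL and API key.
        if not cfg.get("url"):
            errors.append("url is required for non-OpenAI deployments")
        if not cfg.get("api_key"):
            errors.append("api_key is required")
    return errors

# Example config (all values are placeholders).
example = {
    "type": "Generative",
    "deployment_type": "AzureOpenAI",
    "model": "gpt-4o",
    "name": "my-azure-model",
    "url": "https://example.openai.azure.com",
    "api_key": "sk-...",
    "temperature": 0.2,
}
print(validate_llm_config(example))  # prints []
```

A check like this makes the conditional requirements explicit: URL and API key apply only to non-OpenAI, non-Bedrock deployments, while Bedrock needs AWS credentials instead.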