Configure LLMs
Last updated
Before creating the QnA, configure the Language Model (LLM) you want to use. This setting is found on the AI Applications page.
Type: Choose between Generative and Extractive.
Deployment Type: Select from OpenAI, AzureOpenAI, Hosted, VertexAI, AnyScale, Hugging Face, or AWS Bedrock.
Model: Specify the language model to use from your chosen deployment type.
Name: Name your model.
For non-OpenAI language models (other than Bedrock):
URL: Provide the endpoint URL of the language model.
API Key: Provide the API key for the language model.
For Bedrock, enter the AWS region, access key, and secret key.
Temperature: The LLM temperature, which controls the randomness of generated output; lower values produce more deterministic responses.
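To make the field requirements above concrete, here is a minimal sketch of how such a configuration could be represented and validated. All field names, deployment-type values, and the validation function itself are illustrative assumptions for this sketch, not the product's actual API:

```python
# Hypothetical sketch of the LLM configuration described above.
# Field names ("type", "deployment_type", ...) are illustrative only.

DEPLOYMENT_TYPES = {
    "OpenAI", "AzureOpenAI", "Hosted", "VertexAI",
    "AnyScale", "HuggingFace", "AWSBedrock",
}

def validate_llm_config(config: dict) -> list:
    """Return a list of validation errors for an LLM configuration."""
    errors = []
    if config.get("type") not in ("Generative", "Extractive"):
        errors.append("type must be Generative or Extractive")
    dep = config.get("deployment_type")
    if dep not in DEPLOYMENT_TYPES:
        errors.append("unknown deployment type: %r" % dep)
    if not config.get("model"):
        errors.append("model is required")
    if not config.get("name"):
        errors.append("name is required")
    if dep == "AWSBedrock":
        # Bedrock authenticates with an AWS region plus a key pair
        # instead of an endpoint URL and API key.
        for field in ("aws_region", "aws_access_key", "aws_secret_key"):
            if not config.get(field):
                errors.append("%s is required for Bedrock" % field)
    elif dep is not None and dep != "OpenAI":
        # Non-OpenAI deployments (other than Bedrock) need URL + API key.
        for field in ("url", "api_key"):
            if not config.get(field):
                errors.append("%s is required for %s" % (field, dep))
    temp = config.get("temperature", 0.0)
    if not 0.0 <= temp <= 2.0:
        errors.append("temperature should typically be between 0 and 2")
    return errors

# Example: a hosted deployment configuration.
cfg = {
    "type": "Generative",
    "deployment_type": "Hosted",
    "model": "my-hosted-model",      # placeholder model name
    "name": "Support QnA LLM",
    "url": "https://example.com/v1/generate",
    "api_key": "sk-...",
    "temperature": 0.2,
}
print(validate_llm_config(cfg))  # → []
```

The branch structure mirrors the rules above: Bedrock requires AWS credentials, other non-OpenAI deployments require a URL and API key, and OpenAI needs neither extra field.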