
Amazon Bedrock Hosted Embedding Model



To set up a connection to Amazon Bedrock, you'll need to provide a few key details. Once configured, Dataworkz can generate vector embeddings using models hosted on Amazon Bedrock. Follow the steps below to configure your connection.

Required Input Fields

When configuring your Amazon Bedrock connection, the following fields are required:

  1. Name
     Field Type: Text
     Description: Enter a unique name for this Bedrock connection configuration. This helps you identify the connection later when managing multiple configurations.

  2. Embedding Provider
     Field Type: Dropdown
     Description: Select the embedding provider to use, for example Amazon Bedrock.

  3. Embedding Model
     Field Type: Dropdown
     Description: Choose the embedding model you would like to use for processing data within the Bedrock framework.

  4. Dimension
     Field Type: Dropdown
     Description: Select the embedding dimension, i.e. the length of the output vector. This value is typically determined by the model you choose; for example, amazon.titan-embed-text-v1 produces 1536-dimensional vectors.

  5. Access Type
     Field Type: Dropdown
     Description: Choose the type of access you require for the Bedrock connection. Options typically include:

     • API Key: Authenticate with an AWS access key and secret key pair (the Key and Secret fields below).

     • IAM Role: Use an IAM (Identity and Access Management) role for secure access control.

  6. Key
     Field Type: Text
     Description: Provide your AWS Access Key. It is used to authenticate API requests to Amazon Bedrock. Make sure to handle your keys securely.

  7. Secret
     Field Type: Text
     Description: Enter your AWS Secret Key. It is used together with the Access Key to make authorized API calls.

  8. Region
     Field Type: Dropdown or Text
     Description: Select the AWS region where your Amazon Bedrock service is hosted, such as "us-east-1" or "us-west-2". Ensure that the selected region corresponds to the location of your AWS services.
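Behind this form, a Bedrock embedding call exchanges a small JSON payload. A minimal stdlib-only sketch of how a request body for amazon.titan-embed-text-v1 could be built and a response parsed — the payload shapes follow Bedrock's documented Titan format, but verify them against the current AWS documentation; the sample response below is fabricated for illustration, not a real API result:

```python
import json

def build_titan_request(text: str) -> str:
    # amazon.titan-embed-text-v1 expects a JSON body with a single
    # "inputText" field (per AWS Bedrock's documented request format).
    return json.dumps({"inputText": text})

def parse_titan_response(body: str) -> list:
    # The response carries the vector under the "embedding" key.
    return json.loads(body)["embedding"]

# Simulated response for illustration -- a real call to this model
# returns 1536 floats, matching the Dimension field above.
sample = json.dumps({"embedding": [0.1] * 1536, "inputTextTokenCount": 3})
vector = parse_titan_response(sample)
print(len(vector))  # 1536
```

The dimension you select in the form should match the vector length the chosen model actually returns.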


Example Configuration

Here’s an example of what the configuration could look like when filled in:

Field                 Value Example
Name                  My_Bedrock_Connection
Embedding Provider    Amazon Bedrock
Embedding Model       amazon.titan-embed-text-v1
Dimension             1536
Access Type           API Key
Key                   AKIAEXAMPLEKEY
Secret                EXAMPLESECRETKEY
Region                us-east-1
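A filled-in configuration like the one above can be thought of as a simple record with every required field present. A hedged sketch of such a check — the field names mirror this page's table, not an actual Dataworkz API:

```python
# Hypothetical representation of the connection form on this page;
# field names mirror the table above, not a Dataworkz API.
REQUIRED_FIELDS = {
    "Name", "Embedding Provider", "Embedding Model",
    "Dimension", "Access Type", "Key", "Secret", "Region",
}

def missing_fields(config: dict) -> set:
    # Report any required field that is absent or left blank.
    return {f for f in REQUIRED_FIELDS if not config.get(f)}

config = {
    "Name": "My_Bedrock_Connection",
    "Embedding Provider": "Amazon Bedrock",
    "Embedding Model": "amazon.titan-embed-text-v1",
    "Dimension": "1536",
    "Access Type": "API Key",
    "Key": "AKIAEXAMPLEKEY",
    "Secret": "EXAMPLESECRETKEY",
    "Region": "us-east-1",
}

print(missing_fields(config))  # set() -> nothing missing
```

A blank Key or Secret is the most common cause of a failed connection test, so checking for empty values as well as missing keys is deliberate here.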


Next Steps

After filling in the required fields, click Test Connection to verify the credentials, then Save to store the configuration. You can then select this connection when creating vector embeddings in Dataworkz.