This page is a practical walkthrough for building, testing, and publishing Agents in Dataworkz. It takes you from a blank slate to a fully operational Agent, with guidance at each step on what to configure.
Step 1 — Create an Agent
From the AI Agents dashboard, click Create Agent.
Name: Choose a clear, descriptive name. For example, support-order-tracker.
Description: Write a short summary of what the Agent does. Example: “Answers questions about order tracking and refunds.”
Tags / Owner: Optional fields to help organize or assign responsibility.
Access controls: Decide who can edit or run the Agent.
Tip: Use naming conventions such as domain-purpose-environment (e.g., finance-expense-qa).
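The naming convention above can be checked mechanically. A minimal sketch, assuming lowercase, hyphen-separated segments (the validator is illustrative, not something Dataworkz enforces):

```python
import re

# Hypothetical helper: validate the domain-purpose-environment naming
# convention suggested above (lowercase alphanumeric segments, hyphen-separated).
NAME_PATTERN = re.compile(r"^[a-z0-9]+-[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_agent_name(name: str) -> bool:
    """Return True if the name follows the domain-purpose-environment style."""
    return bool(NAME_PATTERN.match(name))

print(is_valid_agent_name("finance-expense-qa"))   # True
print(is_valid_agent_name("Finance Expense QA"))   # False
```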
Step 2 — Define the Persona
The Persona is the Agent’s voice, scope, and boundaries. It acts as a system prompt for the LLM.
What to include:
Role: Who the Agent is (e.g., “You are a support assistant”).
Tone: How responses should sound (e.g., concise, polite, factual).
Boundaries: Topics to avoid or escalate (e.g., “Do not give legal advice”).
Format: Any output requirements (tables, markdown, JSON).
Example Persona:
You are a customer-support assistant. Always confirm the order ID in replies. Keep answers concise and polite. If asked about billing disputes, request the order number and escalate if unclear. Do not invent data.
Note: Keep persona instructions short and test often. Long prompts can confuse the LLM.
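Because the Persona acts as a system prompt, it is typically prepended to every turn of the conversation. A minimal sketch using the example Persona from this page and the common role/content message convention (the structure shown is illustrative, not the Dataworkz API):

```python
# The persona text is the example from this page; build_messages is a
# hypothetical helper showing how a persona becomes the system message.
PERSONA = (
    "You are a customer-support assistant. Always confirm the order ID in "
    "replies. Keep answers concise and polite. If asked about billing "
    "disputes, request the order number and escalate if unclear. "
    "Do not invent data."
)

def build_messages(user_query: str) -> list[dict]:
    """Prepend the persona as the system message for every turn."""
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": user_query},
    ]

msgs = build_messages("Where is my order 1042?")
print(msgs[0]["role"])  # system
```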
Step 3 — Add Scenarios
Scenarios let the Agent focus on specific user intents. Each scenario narrows the scope and determines which tools are available.
For each scenario, configure:
Title: Short and intent-focused (e.g., Track Orders).
Description: Explain the intent and expected outcomes.
Tools: Select from the repository (only the tools relevant to this intent).
Example queries: Add 5–15 user utterances (e.g., “Where is my order?”).
Failure messages: Define fallback responses.
Scenario Example:
Title: Refund Status
Description: Handles refund inquiries. Uses FetchRefundPolicy and GetRefundStatus tools.
Example queries: “When will my refund arrive?” “Am I eligible for a refund on this order?”
Failure message: “I couldn’t confirm your refund. Please contact support.”
Tip: The more examples you add, the more reliably the classifier will route queries to this scenario.
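To see why more example queries help, consider routing as a nearest-match problem: the classifier picks the scenario whose examples best resemble the incoming query. A sketch approximating that idea with simple string similarity (the real Dataworkz classifier is a model, not this heuristic):

```python
from difflib import SequenceMatcher

# Scenario names and example utterances come from this page; the routing
# heuristic below is illustrative only.
SCENARIOS = {
    "Track Orders": ["Where is my order?", "Has my package shipped?"],
    "Refund Status": [
        "When will my refund arrive?",
        "Am I eligible for a refund on this order?",
    ],
}

def route(query: str) -> str:
    """Return the scenario whose example queries best match the input."""
    def best_score(examples: list[str]) -> float:
        return max(
            SequenceMatcher(None, query.lower(), ex.lower()).ratio()
            for ex in examples
        )
    return max(SCENARIOS, key=lambda name: best_score(SCENARIOS[name]))

print(route("When will my refund arrive?"))  # Refund Status
```

Adding more examples per scenario raises the chance that a paraphrased query lands close to at least one of them.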
Step 4 — Attach Tools and Configure Parameters
Once scenarios are in place, connect them to tools from the Tools Repository.
Map scenario inputs to tool parameters.
Confirm parameter types (string, integer, etc.).
Provide default values for optional inputs.
Document outputs so the Agent can use them downstream.
Example Mapping (Scenario Input → Tool Parameter):
customerId → fetch_orders.customer_id
orderId → fetch_shipping.order_id
Warning: Never hardcode API keys or secrets as normal parameters. Use secure configuration.
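The mapping above can be sketched as a small lookup that groups scenario inputs into per-tool argument dicts. The tool and parameter names come from the example; the helper itself is illustrative, not a Dataworkz API:

```python
# Hypothetical scenario-input -> (tool, parameter) lookup from the example.
PARAM_MAP = {
    "customerId": ("fetch_orders", "customer_id"),
    "orderId": ("fetch_shipping", "order_id"),
}

def to_tool_args(scenario_inputs: dict) -> dict:
    """Group scenario inputs into per-tool argument dicts."""
    tool_args: dict[str, dict] = {}
    for key, value in scenario_inputs.items():
        tool, param = PARAM_MAP[key]
        tool_args.setdefault(tool, {})[param] = value
    return tool_args

print(to_tool_args({"customerId": "C-77", "orderId": "O-1042"}))
# {'fetch_orders': {'customer_id': 'C-77'}, 'fetch_shipping': {'order_id': 'O-1042'}}
```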
Step 5 — Configure Variables
Variables make an Agent more flexible and reduce repetition.
Agent-level variables: Shared across all scenarios (e.g., defaultCurrency = USD).
Scenario-level variables: Scoped to one scenario.
Runtime/session variables: Set during a conversation.
Examples:
ingestName = "orders_data" (string)
maxResults = 25 (integer)
Tip: Use variables for dataset names, default regions, or limits so the Agent doesn’t repeatedly ask users for the same info.
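The three variable scopes behave like layered lookups: session values shadow scenario values, which shadow agent-level defaults. A sketch of that precedence (the layering shown is an assumption about typical behavior, not the documented Dataworkz resolution order):

```python
from collections import ChainMap

# Example values from this page; maxResults is overridden mid-conversation.
agent_vars = {"defaultCurrency": "USD", "maxResults": 25}
scenario_vars = {"ingestName": "orders_data"}
session_vars = {"maxResults": 10}  # set during the conversation

# First map wins on key collisions: session > scenario > agent.
resolved = ChainMap(session_vars, scenario_vars, agent_vars)
print(resolved["maxResults"])       # 10  (session wins)
print(resolved["defaultCurrency"])  # USD (agent-level fallback)
```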
Step 6 — Define Failure Messages
Graceful failure improves user trust. Define clear fallback responses for:
No scenario match: “I’m not sure I can help with that. Try asking about orders or refunds.”
Missing input: “Please provide your order number so I can check its status.”
Tool errors: “I couldn’t fetch your order details. Try again later.”
Advanced: You can route failures to escalation flows (e.g., handover to a human agent).
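The fallback behavior above amounts to a lookup keyed by failure type, with an optional escalation hook. A minimal sketch using the example messages; the failure-type names and the escalation callback are hypothetical:

```python
# Fallback messages taken from the examples on this page.
FAILURE_MESSAGES = {
    "no_scenario_match": "I'm not sure I can help with that. "
                         "Try asking about orders or refunds.",
    "missing_input": "Please provide your order number so I can check its status.",
    "tool_error": "I couldn't fetch your order details. Try again later.",
}

def fallback(kind: str, escalate=None) -> str:
    """Return the fallback message; optionally trigger an escalation flow."""
    if escalate is not None:
        escalate(kind)  # e.g., hand over to a human agent
    return FAILURE_MESSAGES.get(kind, FAILURE_MESSAGES["no_scenario_match"])

print(fallback("tool_error"))
```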
Step 7 — Test in Run Mode
Use Run Mode to simulate conversations before going live.
Open Run Mode and start a test chat.
Try happy-path queries (where all inputs are provided).