Create & Manage Agents — Step by Step Guide
This page is a practical walkthrough for building, testing, and publishing Agents in Dataworkz. It takes you from a blank slate to a fully operational Agent, with guidance at each step on what to configure and why.
Step 1 — Create an Agent
From the AI Agents dashboard, click Create Agent.
Name: Choose a clear, descriptive name. For example, support-order-tracker.
Description: Write a short summary of what the Agent does. Example: “Answers questions about order tracking and refunds.”
Tags / Owner: Optional fields to help organize or assign responsibility.
Access controls: Decide who can edit or run the Agent.
Tip: Use naming conventions such as domain-purpose-environment (e.g., finance-expense-qa).
Step 2 — Define the Persona
The Persona is the Agent’s voice, scope, and boundaries. It acts as a system prompt for the LLM.
What to include:
Role: Who the Agent is (e.g., “You are a support assistant”).
Tone: How responses should sound (e.g., concise, polite, factual).
Boundaries: Topics to avoid or escalate (e.g., “Do not give legal advice”).
Format: Any output requirements (tables, markdown, JSON).
Example Persona:
You are a customer-support assistant. Always confirm the order ID in replies. Keep answers concise and polite. If asked about billing disputes, request the order number and escalate if unclear. Do not invent data.
Note: Keep persona instructions short and test often. Long prompts can confuse the LLM.
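Conceptually, the persona functions as the system message that accompanies every LLM call. The following Python sketch is purely illustrative (it is not the Dataworkz API); the build_messages helper is a hypothetical name showing how a persona and a user query might be combined into a chat request:

```python
# Illustrative sketch (not the Dataworkz API): the persona acts as the
# system message that accompanies every LLM call.
PERSONA = (
    "You are a customer-support assistant. Always confirm the order ID "
    "in replies. Keep answers concise and polite. If asked about billing "
    "disputes, request the order number and escalate if unclear. "
    "Do not invent data."
)

def build_messages(user_query: str) -> list[dict]:
    """Prepend the persona as the system message for each turn."""
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": user_query},
    ]

print(build_messages("Where is my order 12345?"))
```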
Step 3 — Add Scenarios
Scenarios let the Agent focus on specific user intents. Each scenario narrows the scope and determines which tools are available.
For each scenario, configure:
Title: Short and intent-focused (e.g., Track Orders).
Description: Explain the intent and expected outcomes.
Tools: Select from the repository (only the tools relevant to this intent).
Example queries: Add 5–15 user utterances (e.g., “Where is my order?”).
Failure messages: Define fallback responses.
Scenario Example:
Title: Refund Status
Description: Handles refund inquiries. Uses the FetchRefundPolicy and GetRefundStatus tools.
Example queries: “When will my refund arrive?” “Am I eligible for a refund on this order?”
Failure message: “I couldn’t confirm your refund. Please contact support.”
Tip: The more examples you add, the more reliably the classifier will route queries to this scenario.
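To build intuition for why more examples help, here is a minimal, hypothetical routing sketch. It is not Dataworkz's actual classifier (which may use embeddings or an LLM); it simply scores an incoming query against each scenario's example queries and falls back when nothing matches confidently:

```python
# Hypothetical routing sketch, not Dataworkz's actual classifier: score a
# query against each scenario's example queries and fall back when no
# scenario matches confidently. More examples -> more reliable routing.
from difflib import SequenceMatcher

SCENARIOS = {
    "Track Orders": ["Where is my order?", "Has my package shipped?"],
    "Refund Status": [
        "When will my refund arrive?",
        "Am I eligible for a refund on this order?",
    ],
}
FALLBACK = "I'm not sure I can help with that. Try asking about orders or refunds."

def route(query: str, threshold: float = 0.5) -> str:
    best_scenario, best_score = FALLBACK, 0.0
    for scenario, examples in SCENARIOS.items():
        for example in examples:
            score = SequenceMatcher(None, query.lower(), example.lower()).ratio()
            if score > best_score:
                best_scenario, best_score = scenario, score
    return best_scenario if best_score >= threshold else FALLBACK

print(route("where's my order"))  # close to a Track Orders example
print(route("tell me a joke"))    # no good match -> fallback message
```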
Step 4 — Attach Tools and Configure Parameters
Once scenarios are in place, connect them to tools from the Tools Repository.
Map scenario inputs to tool parameters.
Confirm parameter types (string, integer, etc.).
Provide default values for optional inputs.
Document outputs so the Agent can use them downstream.
Example Mapping:
customerId → fetch_orders.customer_id
orderId → fetch_shipping.order_id
Warning: Never hardcode API keys or secrets as normal parameters. Use secure configuration.
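As a mental model for the mapping above, the sketch below binds scenario inputs to tool parameters with type checks and defaults. The PARAM_MAP structure and bind_params helper are hypothetical illustrations, not platform code; only the tool and field names come from the example mapping:

```python
# Hypothetical helper, not platform code: bind scenario inputs to tool
# parameters, applying defaults and rejecting wrongly typed values.
PARAM_MAP = {
    # tool name -> {tool parameter: (scenario input, type, default)}
    "fetch_orders":   {"customer_id": ("customerId", str, None)},
    "fetch_shipping": {"order_id":    ("orderId",    str, None)},
}

def bind_params(tool: str, scenario_inputs: dict) -> dict:
    bound = {}
    for param, (source, expected_type, default) in PARAM_MAP[tool].items():
        value = scenario_inputs.get(source, default)
        if value is None:  # a None default marks the input as required
            raise ValueError(f"Missing required input: {source}")
        if not isinstance(value, expected_type):
            raise TypeError(f"{source} must be a {expected_type.__name__}")
        bound[param] = value
    return bound

print(bind_params("fetch_orders", {"customerId": "C-1042"}))
# {'customer_id': 'C-1042'}
```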
Step 5 — Configure Variables
Variables make an Agent more flexible and reduce repetition.
Agent-level variables: Shared across all scenarios (e.g., defaultCurrency = USD).
Scenario-level variables: Scoped to one scenario.
Runtime/session variables: Set during a conversation.
Examples:
ingestName = "orders_data" (string)
maxResults = 25 (integer)
Tip: Use variables for dataset names, default regions, or limits so the Agent doesn’t repeatedly ask users for the same info.
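Conceptually, the scopes resolve from most specific to least specific: runtime/session values override scenario values, which override agent-level defaults. A small Python sketch of that lookup order (illustrative only, not the Dataworkz implementation):

```python
# Illustrative scoping sketch, not the Dataworkz implementation: session
# values override scenario values, which override agent-level defaults.
from collections import ChainMap

agent_vars    = {"defaultCurrency": "USD", "maxResults": 25}
scenario_vars = {"ingestName": "orders_data"}
session_vars  = {"maxResults": 10}  # set during the conversation

# ChainMap searches left to right, so the most specific scope wins.
variables = ChainMap(session_vars, scenario_vars, agent_vars)

print(variables["maxResults"])       # 10  (session overrides the agent default)
print(variables["defaultCurrency"])  # USD (falls back to the agent level)
```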
Step 6 — Define Failure Messages
Graceful failure improves user trust. Define clear fallback responses for:
No scenario match: “I’m not sure I can help with that. Try asking about orders or refunds.”
Missing input: “Please provide your order number so I can check its status.”
Tool errors: “I couldn’t fetch your order details. Try again later.”
Advanced: You can route failures to escalation flows (e.g., handover to a human agent).
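One way to picture this is a lookup from failure type to fallback response. In the sketch below, the messages are the ones defined above, while the failure-type keys and the respond_to_failure helper are hypothetical:

```python
# Hypothetical lookup from failure type to fallback response; the keys and
# helper are illustrative, the messages are the ones defined above.
FAILURE_MESSAGES = {
    "no_match":      "I'm not sure I can help with that. Try asking about orders or refunds.",
    "missing_input": "Please provide your order number so I can check its status.",
    "tool_error":    "I couldn't fetch your order details. Try again later.",
}

def respond_to_failure(kind: str) -> str:
    # Unknown failure types fall back to the generic no-match message.
    return FAILURE_MESSAGES.get(kind, FAILURE_MESSAGES["no_match"])

print(respond_to_failure("tool_error"))
```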
Step 7 — Test in Run Mode
Use Run Mode to simulate conversations before going live.
Open Run Mode and start a test chat.
Try happy-path queries (where all inputs are provided).
Try edge cases (missing inputs, vague wording, invalid values).
Review probe logs to see how tools were invoked.
Check for:
Correct scenario selection.
Accurate parameter mapping.
Expected outputs.
Proper failure handling.
Tip: Build a test script with 10–15 representative queries and rerun it after each major change.
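A test script along the lines of this tip can be as simple as a table of queries and expected scenarios. In this hypothetical sketch, run_agent is a stand-in for however you invoke the Agent (Run Mode, an API call, etc.) and is assumed to return the scenario that was selected:

```python
# Hypothetical regression script: run_agent stands in for however you
# invoke the Agent and is assumed to return the selected scenario.
TEST_CASES = [
    ("Where is my order?",          "Track Orders"),
    ("When will my refund arrive?", "Refund Status"),
    ("asdf qwerty",                 "fallback"),
    # ...extend to 10-15 representative queries, including edge cases
]

def run_suite(run_agent) -> None:
    passed = 0
    for query, expected in TEST_CASES:
        actual = run_agent(query)
        ok = actual == expected
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {query!r} -> {actual} (expected {expected})")
    print(f"{passed}/{len(TEST_CASES)} passed")
```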
Step 8 — Publish and Monitor
When testing is complete, publish the Agent.
Pre-publish checklist:
All scenarios tested.
Failure messages defined.
Secrets stored securely.
Access controls set.
Post-publish monitoring:
Track tool call success rates.
Monitor latency and error logs.
Review user queries regularly to refine scenarios.
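If you export tool-call logs, success rates and latency can be summarized with a few lines of code. The sketch below is a generic illustration, not Dataworkz's built-in monitoring:

```python
# Generic monitoring sketch, not Dataworkz's built-in dashboards: summarize
# tool-call success rates and latency from your own logs.
from collections import defaultdict
from statistics import mean

calls = defaultdict(list)  # tool name -> list of (succeeded, latency_ms)

def record(tool: str, succeeded: bool, latency_ms: float) -> None:
    calls[tool].append((succeeded, latency_ms))

def report() -> None:
    for tool, events in calls.items():
        rate = sum(ok for ok, _ in events) / len(events)
        avg = mean(latency for _, latency in events)
        print(f"{tool}: {rate:.0%} success, avg {avg:.0f} ms")

record("fetch_orders", True, 120)
record("fetch_orders", False, 2300)
report()  # fetch_orders: 50% success, avg 1210 ms
```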
Maintenance best practices:
Update scenarios as business rules change.
Add new examples to improve intent classification.
Version Agent configurations and roll back if necessary.
Common Pitfalls to Avoid
Overly broad scenarios → split into smaller, focused ones.
Vague persona instructions → keep concise and specific.
Missing input validation → enforce correct data types.
Hardcoded secrets → always use secure storage.
Lack of monitoring → set up logs and review them regularly.
Quick Checklist
Before publishing, ensure:
Persona and scenarios are defined and tested.
Failure messages cover no-match, missing-input, and tool-error cases.
Tool parameters are mapped with correct types.
Secrets are stored securely, never as parameters.
Access controls are set.
Monitoring is in place.
Summary
Building an Agent in Dataworkz is a structured process:
Create the Agent.
Define its persona.
Add scenarios for different intents.
Attach tools and configure parameters.
Use variables for flexibility.
Define graceful failure responses.
Test thoroughly in Run Mode.
Publish and monitor in production.
With this workflow, you can design Agents that are reliable, secure, and aligned with real-world user needs.