Response History
Users can view past queries, AI responses, feedback, performance metrics (groundedness, relevance), and backend analytics (response times, errors) for deeper insights into app performance.
You can access the Response History in two places within the AI application:
In the Analytics Tab (Main Screen):
On the main AI application screen, look for the Analytics tab, represented by a small bar chart icon next to your created AI application.
Clicking on this tab provides an in-depth view of your query and response history, including:
Feedback: Thumbs up, thumbs down, or neutral feedback for each response.
Response Quality Metrics: Analytics for response groundedness, context relevance, and answer relevance.
Retrieval Probe View: Insights into the backend processes, showing how many milliseconds each step takes to complete and identifying any errors that may have occurred.
From the Search/Query Screen:
On this screen, you can open the Response History by clicking the small square icon (resembling a window with a vertical line about a quarter of the way across), located at the top left, just beneath the "Q&A" label.
This opens the Response History directly from the query interface, giving you quick access to previously processed queries and their results.
The Response History includes the following:
1. Query and Response Logs
Queries: A list of all queries you’ve submitted.
Responses: The AI’s responses to those queries, including the raw output.
Feedback: You can see how you rated each response (thumbs up, thumbs down, neutral).
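The exact record format isn't exposed in the UI, but conceptually each history entry bundles these three pieces. Here is a minimal sketch of such a record; the class and field names are illustrative, not the application's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HistoryEntry:
    """One Response History row (illustrative schema, not the app's actual one)."""
    query: str                      # the query you submitted
    response: str                   # the AI's raw output
    feedback: Optional[str] = None  # "up", "down", "neutral", or None if unrated

entries = [
    HistoryEntry("What is our refund policy?", "Refunds are issued within 30 days...", "up"),
    HistoryEntry("Summarize Q3 results", "Q3 revenue grew by...", "down"),
]
```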
2. Feedback Analysis
You can filter and sort responses based on feedback. For example, you might want to review only the queries that received thumbs down to assess areas for improvement (see the sketch after the list below).
Feedback Types:
Thumbs Up: Indicates satisfaction with the answer.
Thumbs Down: Indicates dissatisfaction with the response, making it easy to revisit answers that went wrong.
Neutral: Indicates that the response was neither particularly good nor bad.
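As a concrete illustration of that filtering and sorting, here is a short sketch over hypothetical exported history records (the dict fields and feedback values are assumptions, not the app's export format):

```python
# Hypothetical history records; field names are illustrative.
history = [
    {"query": "What is our refund policy?", "feedback": "up"},
    {"query": "Summarize Q3 results", "feedback": "down"},
    {"query": "List open tickets", "feedback": "neutral"},
]

# Filter: keep only thumbs-down responses for review.
needs_review = [h for h in history if h["feedback"] == "down"]

# Sort: surface negative feedback first.
order = {"down": 0, "neutral": 1, "up": 2}
by_severity = sorted(history, key=lambda h: order[h["feedback"]])
```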
3. Performance Metrics
The system provides performance analytics for each response to help you assess the quality of the AI's output:
Response Groundedness: Indicates the level of factual accuracy in the response.
Context Relevance: Measures how well the response aligns with the context of the query.
Answer Relevance: Assesses whether the response is relevant and appropriate to the user's question.
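The UI presents each of these as a score per response. Assuming a 0-to-1 scale (the actual scale may differ), here is a sketch of flagging weak responses from exported scores; the threshold and field names are hypothetical:

```python
# Illustrative metric scores for one response; the app's actual scale may differ.
metrics = {
    "groundedness": 0.92,       # factual support for the answer
    "context_relevance": 0.55,  # fit between retrieved context and the query
    "answer_relevance": 0.88,   # fit between the answer and the question
}

THRESHOLD = 0.7  # hypothetical cutoff for "needs attention"

weak = {name: score for name, score in metrics.items() if score < THRESHOLD}
if weak:
    print(f"Metrics below {THRESHOLD}: {weak}")
    # -> Metrics below 0.7: {'context_relevance': 0.55}
```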
4. Retrieval Probe View
This view provides detailed backend information about how the AI processes each query:
Time Metrics: Shows the time taken for each step in the query processing pipeline (in milliseconds), so you can identify any potential delays or bottlenecks.
Error Tracking: If any errors occurred during response generation, you’ll see an error message, which can help you troubleshoot and improve future queries.
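Per-step timing like this is typically produced by wrapping each pipeline stage with a timer and recording any exception. The application's actual instrumentation isn't documented here, but the following sketch shows the general technique with hypothetical stage functions:

```python
import time

def probe(step_name, fn, *args, **kwargs):
    """Run one pipeline step, recording its duration in ms and any error."""
    start = time.perf_counter()
    try:
        result, error = fn(*args, **kwargs), None
    except Exception as exc:
        result, error = None, str(exc)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, {"step": step_name, "ms": round(elapsed_ms, 1), "error": error}

# Hypothetical pipeline stages, for illustration only.
def retrieve(query): return ["doc1", "doc2"]
def generate(docs): return "answer text"

trace = []
docs, record = probe("retrieval", retrieve, "my query")
trace.append(record)
answer, record = probe("generation", generate, docs)
trace.append(record)
# trace now resembles what the Retrieval Probe View displays:
# [{'step': 'retrieval', 'ms': 0.0, 'error': None}, {'step': 'generation', ...}]
```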
View Response Performance:
Click on any query in the Response History to view detailed analytics about the response.
Use the provided metrics to understand the quality of the AI's response, including the factors influencing its relevance and accuracy.
Review any feedback provided to identify patterns, such as consistent issues with context or relevance.
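One way to spot such patterns is to check whether thumbs-down responses coincide with low scores on a particular metric. A sketch over hypothetical exported records (the field names and 0-to-1 scale are assumptions):

```python
# Hypothetical records pairing feedback with a metric score.
records = [
    {"feedback": "down", "context_relevance": 0.41},
    {"feedback": "down", "context_relevance": 0.38},
    {"feedback": "up",   "context_relevance": 0.90},
]

down_scores = [r["context_relevance"] for r in records if r["feedback"] == "down"]
avg = sum(down_scores) / len(down_scores)
print(f"Avg context relevance on thumbs-down responses: {avg:.2f}")
# A low average here points at retrieval, not generation, as the weak spot.
```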
Identify and Troubleshoot Errors:
The Retrieval Probe View shows you how long each step in the response pipeline took. If there are any delays or issues, they will be highlighted.
If an error is present, you can use the time logs to pinpoint exactly where in the process the issue occurred and take appropriate action to fix it.
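Given a trace like the one the Retrieval Probe View displays, finding the bottleneck or the failing step is a simple scan. A sketch, assuming the step timings are available as a list of records with illustrative field names:

```python
# Hypothetical trace resembling the Retrieval Probe View's data.
trace = [
    {"step": "retrieval", "ms": 42.0, "error": None},
    {"step": "reranking", "ms": 310.5, "error": None},
    {"step": "generation", "ms": 1204.8, "error": "context window exceeded"},
]

# The slowest step is the likely bottleneck.
bottleneck = max(trace, key=lambda r: r["ms"])
print(f"Slowest step: {bottleneck['step']} ({bottleneck['ms']} ms)")

# The first step with an error is where the pipeline broke.
failed = next((r for r in trace if r["error"]), None)
if failed:
    print(f"Failed at: {failed['step']} -> {failed['error']}")
```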
Optimize Query Quality:
By reviewing the response history and feedback data, you can fine-tune your queries for better results. For example, if a certain type of question frequently gets poor feedback, you might adjust the way you phrase your queries.
The performance metrics can also give you a better idea of which aspects of the AI need improvement: grounding the answers in fact, maintaining context, or ensuring the answers are relevant to the query.
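One concrete way to act on this data is to group feedback by query pattern and look for question types that consistently rate poorly. A sketch using hypothetical exported history records (the grouping key, the query's first word, is just a stand-in for whatever categorization fits your data):

```python
from collections import Counter

# Hypothetical exported history records.
history = [
    {"query": "Compare plans A and B", "feedback": "down"},
    {"query": "Compare vendors X and Y", "feedback": "down"},
    {"query": "Summarize the onboarding doc", "feedback": "up"},
]

# Count thumbs-down by the query's leading word as a rough "question type".
downs = Counter(h["query"].split()[0].lower() for h in history if h["feedback"] == "down")
print(downs.most_common(1))  # -> [('compare', 2)]: comparison queries may need rephrasing
```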