Configuring your RAG system to send questions to Validate


To configure your RAG system to log questions and answers to Tonic Validate, add a call to the Validate log function wherever your RAG system answers a question from a user.

The call to log includes:

  • The identifier of the Validate production monitoring project to send the question to

  • The text of the question from the user to the RAG system

  • The answer that the RAG system provided to the user

  • The context that the RAG system used to answer the question

from tonic_validate import ValidateMonitorer

monitorer = ValidateMonitorer()

# Log a single question-answer exchange to the monitoring project
monitorer.log(
    project_id="<project identifier>",
    question="<question text>",
    answer="<answer>",
    context_list=["<context used to answer the question>"]
)
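In practice, the call to log sits next to the code that produces the answer. The sketch below shows one way to wire it into an answer path; query_rag is a hypothetical placeholder for your own RAG pipeline, assumed to return the generated answer and the list of retrieved context strings. Only monitorer.log comes from the Validate SDK.

from tonic_validate import ValidateMonitorer

monitorer = ValidateMonitorer()

def answer_and_log(question: str) -> str:
    # query_rag is a placeholder for your own RAG pipeline; it is assumed
    # to return the answer text and the retrieved context strings.
    answer, context_list = query_rag(question)

    # Log the exchange to the Validate production monitoring project.
    monitorer.log(
        project_id="<project identifier>",
        question=question,
        answer=answer,
        context_list=context_list,
    )
    return answer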

As your RAG system sends questions to the Validate production monitoring project, by default Validate generates the following metric scores for each question:

  • Answer consistency

  • Retrieval precision

  • Augmentation precision

You can also request additional metrics. For information about the available metrics, go to RAG metrics reference.