The Tonic Validate application and Tonic Validate SDK (tonic_validate) allow you to measure how well your RAG LLM system performs.
Validate calculates rigorous LLM-assisted RAG evaluation metrics. You can also use tonic_validate to compute RAG metrics outside the context of a Validate project.
Validate also provides an integration with Ragas, tonic_ragas_logger, that allows you to visualize Ragas evaluation results in Validate.
Start your Tonic Validate account
Sign up for a Validate account.
Create an API key and project.
Set up the Tonic Validate SDK
Install the SDK. Provide your Validate and OpenAI API keys.
Quickstart example
Use tonic_validate to log RAG metrics to a project.
Types of RAG metrics
RAG metrics measure the quality of RAG LLM responses.
Create and manage benchmarks
A benchmark is a set of questions, optionally with expected responses.
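To make the structure concrete, here is a minimal stdlib-only sketch of what a benchmark holds. The `BenchmarkItem` class and field names are illustrative assumptions, not the SDK's actual types:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BenchmarkItem:
    """One benchmark entry: a question, optionally paired with an expected response."""
    question: str
    expected_answer: Optional[str] = None  # hypothetical field name

# A benchmark is a set of such items.
benchmark = [
    BenchmarkItem("What does the Tonic Validate SDK measure?",
                  "How well a RAG LLM system performs."),
    BenchmarkItem("Which metrics does Validate calculate?"),  # no expected response
]

# Items without an expected response can still be scored by metrics
# that do not require a reference answer.
with_reference = [item for item in benchmark if item.expected_answer is not None]
print(len(benchmark), len(with_reference))
```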
Create and manage projects
A project consists of a set of runs.
Start a new run
Start a new Tonic Validate run to calculate metrics for RAG LLM answers to questions.
View run results
Review the average metric scores for the run and how the scores are distributed across the questions.
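The averaging step can be sketched as follows. The per-question scores, metric names, and score scales below are illustrative placeholders, not Validate's actual output format:

```python
from statistics import mean

# Hypothetical per-question scores for one run.
run_scores = [
    {"answer_similarity": 4.0, "retrieval_precision": 1.0},
    {"answer_similarity": 3.0, "retrieval_precision": 0.5},
    {"answer_similarity": 5.0, "retrieval_precision": 1.0},
]

# The run-level view averages each metric across all questions.
averages = {
    metric: mean(scores[metric] for scores in run_scores)
    for metric in run_scores[0]
}
print(averages)
```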
End-to-end example with LlamaIndex
Demonstrates an end-to-end Tonic Validate flow.