Quickstart example - create a run and log responses
For an existing Validate development project, the following example creates a run and logs the responses to the UI.
When the RAG system response and retrieved context are logged to the Tonic Validate application, the RAG metrics are calculated using calls to OpenAI.
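Because scoring makes calls to OpenAI, an OpenAI API key must be available before the scorer runs. A minimal setup sketch, assuming the standard OPENAI_API_KEY environment variable is the one the scorer reads:

```shell
# Make an OpenAI API key available to the scorer.
# Assumption: the key is read from the standard OPENAI_API_KEY variable.
export OPENAI_API_KEY="your-openai-api-key"
```

Replace the placeholder value with your own OpenAI API key before running the example below.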
from tonic_validate import ValidateScorer, ValidateApi, Benchmark

# Function to simulate getting a response and context from your LLM
# Replace this with your actual function call
def get_rag_response(question):
    return {
        "llm_answer": "Paris",
        "llm_context_list": ["Paris is the capital of France."]
    }

benchmark = Benchmark(
    questions=["What is the capital of France?"],
    answers=["Paris"]
)

# Score the responses for each question and answer pair
scorer = ValidateScorer()
run = scorer.score(benchmark, get_rag_response)

# Upload the run to the UI
validate_api = ValidateApi("your-api-key")
validate_api.upload_run("your-project-id", run)