How Structural uses AI
AI features are currently only available on the United States deployment of Structural Cloud and on self-hosted instances that enable AI-enhanced features. They are not yet available on the European deployment of Structural Cloud.
Current AI features in Structural
Structural currently has the following AI-enhanced features:
LLM-based sensitivity detection. Sends the following information to an LLM to help identify columns that contain sensitive values:
Database schema
Sample source data values. You can opt to exclude the source data values.
Value creation for the Custom Categorical generator. Allows users to provide a prompt for the LLM to use to create the available values. For example: 1,000 common pharmaceuticals.
LLM that Structural uses
Structural AI features use the Microsoft Azure OpenAI Service.
How information is used by OpenAI
Microsoft stores queries and responses for 30 days for abuse monitoring, per their policy. While we have explicitly disabled the Responses API's option to save conversation history for application use, the platform's independent 30-day logging for security and compliance still applies.
Your input prompts and generated outputs are never used to train the underlying models. The LLM only has access to the input prompts that Structural sends. Potentially sensitive information, such as a database schema or data values, is never shared unless you explicitly include it in a prompt, which we strongly discourage.
For answers to more detailed questions, reach out to our support team.
Enabling AI-enhanced features on a self-hosted instance
To enable AI-enhanced features on their Structural instance, self-hosted customers must provide an API key for an LLM provider.
To configure the connection to the LLM provider, use the following environment settings. You can add these settings manually to the Environment Settings list on Structural Settings.
Overview of the AI-enhanced sensitivity detection flow
When AI-enhanced features are enabled, the sensitivity detection process sends prompts to your LLM provider.

Identifying the LLM provider
First, identify the LLM provider to use for Structural AI-enhanced features:
TONIC_LLM_PROVIDER - The LLM provider to use. By default, this is not set, and the AI-enhanced features are not available. Structural supports any OpenAI provider that is compatible with the Responses API. The available value is OpenAI.
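For example, on a self-hosted instance, the provider selection is a single environment setting. A minimal sketch (OpenAI is the only supported value; the exact file format depends on how you deploy Structural):

```
# Select the LLM provider for Structural AI-enhanced features.
# Unset by default; when unset, AI-enhanced features are unavailable.
TONIC_LLM_PROVIDER=OpenAI
```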
Providing connection details for OpenAI
When TONIC_LLM_PROVIDER is OpenAI, to provide the required connection information:
TONIC_OPENAI_ENDPOINT - The OpenAI endpoint URL to use for AI-enhanced features. Should include the endpoint base URL for the OpenAI API, without the version. Default: https://api.openai.com. Example URL for Microsoft Foundry: https://my-foundry-instance.ai.azure.com/openai
TONIC_OPENAI_API_KEY - The API key for OpenAI.
Note that if you use a hosted OpenAI deployment such as Microsoft Foundry, Structural requires the following model deployments. The deployment name must match the model name.
gpt-4.1-mini
gpt-4.1
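Putting these together, a self-hosted configuration that points Structural at a hosted OpenAI deployment might look like the following sketch. The endpoint URL and API key shown here are placeholders, not real values:

```
# LLM provider selection (see above).
TONIC_LLM_PROVIDER=OpenAI

# Base URL for the OpenAI API, without the version segment.
# Omit this to use the default, https://api.openai.com.
TONIC_OPENAI_ENDPOINT=https://my-foundry-instance.ai.azure.com/openai

# API key for the OpenAI provider (placeholder value).
TONIC_OPENAI_API_KEY=sk-placeholder-key
```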
Managing the use of AI-enhanced features
Setting whether a workspace uses LLM-based sensitivity detection
On Structural Cloud, and on self-hosted instances that enable AI-enhanced features, LLM-based sensitivity detection is enabled by default.
You can disable LLM-based sensitivity detection within individual workspaces. To do this, in the advanced workspace overrides, set the Tonic LLM Enable Enhanced Recommendations setting to false.
Excluding sample data from LLM-based sensitivity detection
By default, LLM-based sensitivity detection sends both the database schema and sample source data values to the LLM.
To exclude the sample source data and send only the schema, set the environment setting TONIC_LLM_ENABLE_ENHANCED_RECOMMENDATIONS_SAMPLE_DATA to false.
Self-hosted instances can set this from the Environment Settings tab on Structural Settings.
Cloud organizations can set this from the Organization Settings tab on Structural Settings.
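For example, to keep LLM-based sensitivity detection enabled while sending only the schema, a self-hosted instance might add the following to its environment settings (a sketch using the setting documented above):

```
# Exclude sample source data values from LLM-based sensitivity detection.
# Only the database schema is sent to the LLM.
TONIC_LLM_ENABLE_ENHANCED_RECOMMENDATIONS_SAMPLE_DATA=false
```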