
Webhooks

Last updated 4 months ago


Required license: Professional or Enterprise

Tonic Structural allows you to set up webhooks to fire HTTP POST requests when a data generation or upsert job completes successfully, fails, or is canceled.

Webhooks are only supported for data generation jobs and, if upsert is enabled, for upsert jobs. You cannot trigger a webhook after other types of jobs, such as sensitivity scans.

Webhooks enable Structural to integrate more seamlessly into your workflow. These requests can pass information about the data generation job, and can be used to trigger actions in other systems.

One common use of the Structural webhooks feature is to post a message to a Slack channel.

Child workspaces never inherit the webhooks configuration from their parent workspace. Child workspaces always have their own webhooks.

Confirming the notifications endpoint location

Webhooks require access to the Structural notifications server. The notifications server URL and port are set as the value of the TONIC_NOTIFICATIONS_URL environment setting.

On a Docker deployment, the default value is https://tonic_notifications:7001. On a Kubernetes deployment that uses Structural's provided Helm chart, the default value is https://tonic-notifications:7001.

If the notifications server location on your instance does not match the default value, then you must update the value of TONIC_NOTIFICATIONS_URL to match.
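For example, the override is a single line in the Structural environment configuration (such as a Docker Compose .env file or your Kubernetes environment values). The hostname below is a hypothetical example, not a value from your instance:

```shell
# Hypothetical override of the notifications endpoint. Replace the hostname
# with the actual location of your tonic notifications service, and keep the
# port that the service actually listens on.
TONIC_NOTIFICATIONS_URL=https://tonic-notifications.example.internal:7001
```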

Required information for a webhook

Before you create a webhook, make sure that you have the required information.

Webhook URL

Each webhook requires a webhook URL. This is the URL that receives the webhook message.

The URL cannot resolve to a private IPv4 range.

Header values

Check whether the webhook requires any header values.

For example, an application might require:

  • A Content-Type header. For example, Content-Type: application/json

  • The version of an API to use. This might be needed to send an API call to perform an action based on the job status. For example, Accept: application/vnd.pagerduty+json;version=2

  • Authorization for a third-party service. For example, Authorization: Bearer <token value>
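Collected as a header map, the examples above come together as follows (a sketch in Python; the bearer token is a placeholder that you obtain from the third-party service, and the Accept value follows the PagerDuty example):

```python
# Example webhook request headers. The Authorization token is a placeholder;
# the Accept header pins an API version, as in the PagerDuty example above.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/vnd.pagerduty+json;version=2",
    "Authorization": "Bearer <token value>",
}
```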

Message properties

By default, the webhook message contains the workspace identifier and name, the job identifier, and the job status.

You must also determine whether your application requires any other properties.

For example, for a Slack notification webhook, you must provide a text property that contains the text of the Slack notification.

Displaying the list of webhooks

You manage webhooks from the Post-Job Actions view. To display the Post-Job Actions view, either:

  • On the workspace management view, in the workspace navigation bar, click Post-Job Actions.

  • On the Workspaces view, from the dropdown menu in the Name column, select Post-Job Actions.

On the Post-Job Actions view, the Webhooks list contains the list of webhooks.

For each webhook, the list contains:

  • The name of the webhook.

  • The job statuses that trigger the webhook.

  • The webhook URL.

  • The user who created the webhook.

  • The date and time when the webhook was most recently updated.

Creating and editing webhooks

Required workspace permission: Configure post-job scripts and webhooks

Creating a webhook

To create a webhook, in the Webhooks panel, click Create Webhook.

On the webhook configuration dialog, you can set up, preview, and test the webhook.

To save the webhook, click Save. The webhook is added to the Webhooks list.

Editing a webhook

To edit a webhook:

  1. In the Webhooks list, click the edit icon for the webhook.

  2. On the webhook configuration dialog, update the webhook configuration.

  3. Click Save.

Webhook configuration settings

Configuring the webhook settings and headers

On the Settings & Headers tab, you set most of the webhook configuration, except for the message body.

  1. In the Webhook Name field, provide a name for the webhook.

  2. In the Webhook URL field, provide the URL to send the webhook request to.

  3. By default, a webhook requires SSL certificate validation. To bypass the validation and trust the server certificate, check Trust the Server Certificate (bypass SSL certificate validation). Use this option only if the server has a trusted self-signed certificate.

  4. Under Trigger Events, select the data generation job events that trigger the webhook. The webhook can be triggered when a job succeeds, fails, or is canceled. To trigger the webhook in response to an event, check the event checkbox. For example, to trigger the webhook when a job is canceled, check the Job Cancelled checkbox.

  5. The header list always contains a Content-Type header. The default value is application/json. You cannot delete the Content-Type header, but you can change the value.

  6. To add custom header values for the webhook request:

    1. To add a header row, click Add Header.

    2. In the Header Name field for each header, provide the header name.

    3. In the Header Value field for each header, provide the header value.

    4. To remove a header row, click its delete icon.

Customizing the webhook request body

On the Message Body tab, you can customize the body of the request. The message body is sent as a JSON payload that consists of a set of keys and values.

For each property, the Property Name field contains the key, and the Property Value field contains the value.

Default properties

By default, the message body contains the following properties. The values are variables that are replaced by the actual values for the triggering event. You can use these variables in the values of your custom properties.

  • jobId - The identifier of the job. To include the job ID in a custom property value, use the {jobId} variable.

  • jobStatus - The status of the job. To include the job status in a custom property value, use the {jobStatus} variable.

  • jobType - The type of job (data generation or upsert). To include the job type in a custom property value, use the {jobType} variable.

  • workspaceId - The identifier of the workspace. To include the workspace ID in a custom property value, use the {workspaceId} variable.

  • workspaceName - The name of the workspace. To include the workspace name in a custom property value, use the {workspaceName} variable.

Adding properties

You can also add other properties to the message body that are needed for the particular webhook. For example, for a Slack notification webhook, you provide a text property that contains the text of the notification.

To add a property:

  1. Click Add Property.

  2. In the Property Name field, provide the key name.

  3. In the Property Value field, provide the value. You can include the default variables in the value. The following example of a text value for a Slack notification includes the job type, job identifier, workspace name, and job status: {jobType} job {jobId} for workspace {workspaceName} completed with a status of {jobStatus}.
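The {variable} placeholders behave like simple string substitution. As a rough sketch of how the rendered body comes together (the sample event values are invented; Structural performs the actual substitution when the job event fires):

```python
# Default properties plus a custom Slack-style "text" property.
# The braces are Structural's variable syntax; Python's str.format
# happens to substitute them the same way, which makes an easy sketch.
template = {
    "jobId": "{jobId}",
    "jobStatus": "{jobStatus}",
    "jobType": "{jobType}",
    "workspaceId": "{workspaceId}",
    "workspaceName": "{workspaceName}",
    "text": "{jobType} job {jobId} for workspace {workspaceName} "
            "completed with a status of {jobStatus}",
}

# Invented sample values for a triggering event.
event = {
    "jobId": "0b7e6f2a",
    "jobStatus": "Completed",
    "jobType": "Generation",
    "workspaceId": "d41d8cd9",
    "workspaceName": "Demo",
}

body = {key: value.format(**event) for key, value in template.items()}
```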

Removing properties

To remove a property, click its delete icon.

Previewing the request JSON

The Preview tab contains a preview of the JSON body of the request. In the preview, the variables are replaced by sample values.

To copy the JSON to the clipboard, click Copy to clipboard. You can then, for example, use the copied JSON to test the webhook request in another tool such as Postman.
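As an alternative to Postman, a few lines of Python's standard library can act as a local receiver for manually replaying the copied JSON. This is a sketch for inspecting the request shape by hand; Structural itself does not accept webhook URLs on private addresses, so it cannot call a loopback receiver directly:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # JSON bodies of incoming POST requests, in arrival order

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # suppress per-request console logging

def start_receiver(port=0):
    """Start the receiver on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Start it with `server = start_receiver()`, then POST the copied preview JSON to `http://127.0.0.1:<port>/` with curl or Postman and inspect `received`.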

Testing the webhook

From the webhook configuration dialog, you can send a test request. The test request includes the configured headers and message body. The message body uses sample values for the variables.

To send a test request, click Test Webhook.

Deleting a webhook

Required workspace permission: Configure post-job scripts and webhooks

To delete a webhook:

  1. In the Webhooks list, click the delete icon for the webhook.

  2. On the confirmation dialog, click Delete.

Enabling and disabling webhooks

Required workspace permission: Configure post-job scripts and webhooks

You use the toggle at the left of each webhook to determine whether the webhook is enabled.

When the toggle is in the on position, the webhook is enabled. It is triggered by the selected generation job statuses.

When the toggle is in the off position, the webhook is not enabled, and is not triggered by the selected generation job statuses.

The application that you send the webhook to should provide information about how to obtain the URL. For example, for information on how to generate the webhook URL for a Slack notification, go to Sending messages using Incoming Webhooks in the Slack documentation.
