After you select the connector type, you configure:
Where to find the source data
Where to write the data generation output
For data connectors that connect to a database, the Source Settings section provides connection information for the source database.
You cannot change the source data configuration for a child workspace.
For information about the source connection fields for a specific data connector, go to the workspace configuration topic for that connector type.
For data connectors that support upsert, the workspace configuration includes an Upsert section to allow you to enable and configure upsert. Upsert adds and updates rows in the destination database, but keeps all other existing rows intact.
If you enable upsert, then you cannot write output to an Ephemeral database or to a container repository. You must write the output to a destination database.
For more information, go to Enabling and configuring upsert.
For data connectors that connect to a database, the Destination Settings section provides information about where and how Structural writes the output data from data generation.
Depending on the data connector type, you might be able to write to either:
Destination database - Writes the output data to a destination database on a database server.
Ephemeral snapshot - Writes the output data to a Tonic Ephemeral user snapshot.
Container repository - Writes the output data to a data volume in a container repository.
When you write the output to a destination database, the destination database must be of the same type as the source database.
Structural does not create the destination database. It must exist before you generate data.
In Destination Settings, you provide the connection information for the destination database. For information about the destination database connection fields for a specific data connector, go to the workspace configuration topic for that connector type.
If available, the Copy Settings from Source option allows you to copy the source connection details to the destination database, if both databases are in the same location. Structural does not copy the connection password.
If Ephemeral supports your workspace database type, then you can write the destination data to a snapshot in Ephemeral. For data larger than 10 GB, this option is recommended instead of writing to a container repository.
From Ephemeral, you can use the snapshot to start new Ephemeral databases.
For more information, go to Writing output to Tonic Ephemeral.
Some data connectors allow you to write the transformed data to a data volume in a container repository instead of to a database server.
You can use the resulting data volume to create a database in Tonic Ephemeral. However, if you plan to use the data to start an Ephemeral database and the data is larger than 10 GB, then we recommend that you write the data to an Ephemeral user snapshot instead.
For more information, go to Writing output to a container repository.
When you provide connection details for a database server, Structural provides a Test Connection button to test the connection, and verify that Structural can use the connection details to connect to the database. Structural uses the connection details to try to reach the database, and indicates whether it succeeded or failed. We strongly recommend that you test the connections.
The environment setting TONIC_TEST_CONNECTION_TIMEOUT_IN_SECONDS determines the number of seconds before a connection test times out. You can configure this setting from the Environment Settings tab on Structural Settings. By default, the connection test times out after 15 seconds.
A file connector workspace uses files as its source data and produces transformed versions of those files as its output.
For file connector workspaces, the File Location section indicates where the source files are obtained from - either a local file system or a cloud storage solution (Amazon S3 or Google Cloud Storage).
When the files come from cloud storage, the Output Location section indicates where to write the transformed files. You must also provide the cloud storage connection credentials.
For more information, go to Configuring the file connector storage type and output options.
Tonic Ephemeral is a separate Tonic.ai product that allows you to create temporary databases to use for testing and demos. For more information about Ephemeral, go to the Ephemeral documentation.
Identification and connection type
Settings to identify the workspace and to select the data connector.
Data connection settings
Connect to source and destination databases or, for the file connector, local or cloud storage files.
Data generation settings
Block data generation on schema changes. Enable cross-run consistency.
Enable and configure upsert
Add new destination records and update changed destination records. Ignore other unchanged destination records.
Write output to Tonic Ephemeral
Use the data generation output to create an Ephemeral user snapshot.
Write output to a container repository
Use the data generation output to populate a container data volume.
Only available for PostgreSQL, MySQL, SQL Server, and Oracle.
Not compatible with upsert.
Not compatible with Preserve Destination or Incremental table modes.
Tonic Ephemeral is a separate Tonic.ai product that allows you to create temporary databases to use for testing and demos. For more information about Ephemeral, go to the Ephemeral documentation.
If Ephemeral supports your workspace database type, then you can write the destination data to a snapshot in Ephemeral. You can then use the snapshot to start Ephemeral databases.
To write the transformed data to Ephemeral, under Destination Settings, click Ephemeral Database.
Structural can write the data snapshot to either Ephemeral Cloud or to a self-hosted instance of Ephemeral. By default, Structural writes the data snapshot to Ephemeral Cloud.
All workspaces on the same self-hosted Structural instance or in the same Structural Cloud organization must write to the same instance of Ephemeral. When you change the Ephemeral output configuration in one workspace, it is automatically changed in other workspaces that write to Ephemeral.
For Ephemeral Cloud, Structural writes the snapshot to the account for the user who runs the data generation job. If that user has an Ephemeral account on Ephemeral Cloud, then Structural uses that account. If the user does not have an account, then Structural creates a two-week Ephemeral free trial account for the user.
Note that if you are on a self-hosted instance of Ephemeral, then you must always provide an Ephemeral API key.
To write a snapshot to Ephemeral Cloud:
Click Tonic Ephemeral cloud.
If you are on a self-hosted instance of Structural:
In the API Key field, provide an Ephemeral API key from your Ephemeral account.
To test the connection, click Test Connection.
To write the snapshot to a self-hosted instance of Ephemeral:
Click Tonic Ephemeral self-hosted.
In the API Key field, provide an Ephemeral API key from your Ephemeral account. Structural writes the snapshot to the Ephemeral account that is associated with the API key.
In the Tonic Ephemeral URL field, provide the URL to your self-hosted Ephemeral instance.
To test the connection, click Test Connection.
For Oracle, you select the base image to use to create the data snapshot.
If you are writing to Ephemeral Cloud, then you must use the Oracle 23c base image that comes with Ephemeral. This image has the following limitations:
A maximum of 12 GB of user data
A maximum of 2 CPU cores and 2 GB of RAM
If you are writing to a self-hosted instance of Ephemeral, then you can also select a custom image that you created in Ephemeral.
If you do not configure any advanced settings, then:
The snapshot uses the same name as the workspace, and has no description.
The snapshot size allocation is determined by the source data size.
Structural discards the temporary Ephemeral database that is created during the data generation.
To change any of these settings, click Advanced settings.
By default, the snapshot name uses the workspace name.
When you run data generation, if a snapshot with the same name already exists in Ephemeral, then Structural overwrites that snapshot with the new snapshot.
Under Advanced settings:
In the Snapshot name field, provide the name of the snapshot. The snapshot name can use the following placeholder values to help identify the snapshot:
{workspaceName} - Inserts the name of the workspace.
{workspaceId} - Inserts the identifier of the workspace.
{jobId} - Inserts the identifier of the data generation job that created the snapshot.
{timestamp} - Inserts the timestamp when the snapshot was created.
Including the job ID or timestamp ensures that a data generation job does not overwrite a previous snapshot.
Optionally, in the Snapshot description field, provide a longer description of the snapshot.
By default, the resources used for the snapshot are based on the size of the source data.
For source data that is 25 GB or less, Nano is used.
For source data larger than 25 GB, Micro is used.
To select a specific option:
Toggle Custom pod resources to the on position.
From the dropdown list, select the option to use for the combination of vCPUs and memory:
Nano - 0.125 vCPU with 0.5 GB RAM
Micro - 0.5 vCPU with 2 GB RAM
Small - 1 vCPU with 4 GB RAM
Medium - 2 vCPU with 8 GB RAM
Large - 4 vCPU with 16 GB RAM
By default, the Ephemeral size allocation for the snapshot is based on the size of the source data.
To instead provide a custom data size allocation, under Advanced settings:
Toggle Custom data size allocation to the on position.
In the field, enter the size allocation in gigabytes.
When Structural creates the Ephemeral snapshot, it creates a temporary Ephemeral database.
By default, Structural deletes that database when the data generation is complete.
To instead keep the database, under Advanced settings, toggle Keep database active in Tonic Ephemeral after data generation to the on position.
For a MySQL or PostgreSQL workspace, you can provide a customization file that helps to ensure that the temporary Ephemeral database is configured correctly.
To provide the customization details:
Toggle Use custom configuration to the on position.
In the text area, paste the contents of the customization file.
For information about how to create and manage custom images for Oracle, go to the Ephemeral documentation.
Most workspaces that connect to a database have a Block data generation if schema changes detected toggle. The setting is usually in the Source Settings section.
By default, the option is turned off. When the option is off, Structural only blocks data generation when there are conflicting schema changes. Structural does not block data generation when there are non-conflicting schema changes.
If this option is turned on, then Structural blocks data generation if it detects any changes at all to the schema, until you resolve the schema changes. For more information, go to Viewing and resolving schema changes.
For generators where consistency is enabled, a statistics seed enables consistency across data generation runs. The Structural-wide statistics seed value ensures consistency across both data generation runs and workspaces.
You use the Override Statistics Seed setting to override the Structural-wide statistics seed value. For workspaces that connect to a database, the setting is under Destination Settings. For a file connector workspace, the setting is under Output Location.
You can either disable consistency across data generations, or provide a seed value for the workspace. The workspace seed value ensures consistency across data generation runs for that workspace, and across other workspaces that have the same seed value.
For details about using seed values to ensure consistency across data generation runs and databases, go to Enabling consistency across runs or multiple databases.
Required license: Professional or Enterprise
Not compatible with writing output to a container repository or a Tonic Ephemeral snapshot.
By default, Tonic Structural data generation replaces the existing destination database with the transformed data from the current job.
Upsert adds and updates rows in the destination database, but keeps all of the other existing rows intact. For example, you might have a standard set of test records that you do not want to replace every time you generate data in Structural.
If you enable upsert, then you cannot write the destination data to a container repository or to a Tonic Ephemeral snapshot. You must write the data to a database server.
Upsert is currently only supported for the following data connectors:
MySQL
Oracle
PostgreSQL
SQL Server
For an overview of upsert, you can also view the video tutorial.
When upsert is enabled, the data generation job writes the generated data to an intermediate database. Structural then runs the upsert job to write the new and updated records to the destination database.
The destination database must already exist. Structural cannot run an upsert job to an empty destination database.
The upsert job adds and updates records based on the primary keys.
If the primary key for a record already exists in the destination database, the upsert job updates the record.
If the primary key for a record does not exist in the destination database, the upsert job inserts a new row.
To only update or insert records that Structural creates based on source records, and ignore other records that are already in the destination database, ensure that the primary keys for each set of records operate on different ranges. For example, allocate the integer range 1-1000 for existing destination database records that you add manually. Then ensure that the source database records, and by extension the records that Structural creates during data generation, use a different range.
Also note that when upsert is enabled, the Truncate table mode does not actually truncate the destination table. Instead, it works more like Preserve Destination table mode, which preserves existing records in the destination table.
To enable upsert, in the Upsert section of the workspace details, toggle Enable Upsert to the on position.
When you enable upsert for a workspace, you are prompted to configure the upsert processing and provide the connection details for the intermediate database.
When you enable upsert, Structural displays the following settings to configure the upsert process.
Disable Triggers
Indicates whether to disable any user-defined triggers before the upsert job runs. This prevents duplicate rows from being added to the destination database. By default, this is enabled.
Automatically Start Upsert After Successful Data Generation
Indicates whether to immediately run the upsert job after the initial data generation to the intermediate database. By default, this is enabled. If you turn this off, then after the initial data generation, you must start the upsert job manually.
Persist Conflicting Data Tables
Indicates whether to preserve the temporary tables that contain rows that the upsert job could not process because of unique constraint conflicts, as well as rows that have foreign keys to those rows. By default, this is disabled. Structural only keeps the applicable temporary tables from the most recent upsert job.
Warn on Mismatched Constraints
Indicates whether to treat mismatched foreign key and unique constraints between the source and destination databases as warnings instead of errors, so that the upsert job does not fail. By default, this is disabled.
Required license: Enterprise
The intermediate database must have the same schema as the destination database. If the schemas do not match, then the upsert process fails.
To ensure that schema changes are automatically reflected in the intermediate database, you can connect the workspace to your own database migration script or tool. Structural then runs the migration script or tool whenever you run upsert data generation.
When you start an upsert data generation job:
If migration is enabled, Structural calls the endpoint to start the migration.
Structural cannot start the upsert data generation until the migration completes successfully. It regularly calls the status check endpoint to check whether the migration is complete.
When the migration is complete, Structural starts the upsert data generation.
Required. Structural calls this endpoint, at the URL that you provide, to start the migration process.
The request includes:
Any custom parameter values that you add.
The connection information for the intermediate database.
The request uses the following format:
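The documented format is not reproduced here. The following is only an illustrative sketch, assuming a JSON request body; every property name shown (parameters, database, and the connection fields) is a hypothetical placeholder rather than the actual contract.

```json
{
  "_comment": "Hypothetical illustration only - actual field names and structure may differ",
  "parameters": {
    "environment": "staging"
  },
  "database": {
    "server": "intermediate-db.example.com",
    "port": 5432,
    "database": "intermediate_db",
    "username": "structural_user",
    "password": "********"
  }
}
```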
The response contains the identifier of the migration task.
The response uses the following format:
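The documented format is not reproduced here either; the sketch below simply illustrates a JSON response that returns a task identifier, using a hypothetical field name.

```json
{
  "_comment": "Hypothetical illustration only",
  "taskId": "2f0c9a4e-5b7d-4d0a-9c1e-example"
}
```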
Required. Structural calls this endpoint to check the current status of the migration process.
The request includes the task identifier that was returned when the migration process started. The request URL must be able to pass the task identifier as either a path or query parameter.
The response provides the current status of the migration task. The possible status values are:
Unknown
Queued
Running
Canceled
Completed
Failed
The response uses the following format:
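The documented format is not reproduced here. The following sketch assumes a JSON response that echoes the task identifier and one of the status values listed above; the field names are hypothetical placeholders.

```json
{
  "_comment": "Hypothetical illustration only",
  "taskId": "2f0c9a4e-5b7d-4d0a-9c1e-example",
  "status": "Running"
}
```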
Optional. Structural calls this endpoint to retrieve the log entries for the migration process. It adds the migration logs to the upsert logs.
The request includes the task identifier that was returned when the migration process started. The request URL must be able to pass the task identifier as either a path or query parameter.
The response body should be text/plain and contains the raw logs.
Optional. Structural calls this endpoint to cancel the migration process.
The request includes the task identifier that was returned when the migration process started. The request URL must be able to pass the task identifier as either a path or query parameter.
To enable the migration process, toggle Enable Migration Service to the on position.
When you enable the migration process, you must configure the POST Start Schema Changes and GET Status of Schema Change endpoints. You can optionally configure the GET Schema Change Logs and DELETE Cancel Schema Changes endpoints.
To configure the endpoints:
To configure the POST Start Schema Changes endpoint:
In the URL field, provide the URL of the migration script.
Optionally, in the Parameters field, provide any additional parameter values that your migration scripts need.
To configure the GET Status of Schema Change endpoint, in the URL field, provide the URL for the status check. The URL must include an {id} placeholder, which is used to pass the identifier that is returned from the Start Schema Changes endpoint.
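For example, a status check URL that passes the identifier as a path parameter might look like the following; the host and path here are hypothetical.

```
https://migrations.example.com/api/schema-changes/{id}/status
```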
To configure the GET Schema Change Logs endpoint, in the URL field, provide the URL to use to retrieve the logs. The URL must include an {id} placeholder, which is used to pass the identifier that is returned from the Start Schema Changes endpoint.
To configure the DELETE Cancel Schema Changes endpoint, in the URL field, provide the URL to use for the cancellation. The URL must include an {id} placeholder, which is used to pass the identifier that is returned from the Start Schema Changes endpoint.
When you enable upsert, you must provide the connection information for the intermediate database.
For details, go to the workspace configuration information for the data connector.
Every workspace includes the following settings to identify the workspace and to select the type of data connector.
All workspaces have the following fields that identify the workspace:
In the Workspace name field, enter the name of the workspace.
In the Workspace description field, provide a brief description of the workspace. The description can contain up to 200 characters.
In the Tags field, provide a comma-separated list of tags to assign to the workspace. For more information on managing tags, go to Assigning tags to a workspace.
Under Connection Type, select the type of data connector to use for the workspace data. You cannot change the connection type on a child workspace.
The Basic and Professional licenses limit the number and type of data connectors you can use.
A Basic instance can only use one data connector type, which can be either PostgreSQL or MySQL. After you create your first workspace, any subsequent workspaces must use the same data connector type.
A Professional instance can use up to two different data connector types, which can be any type other than Oracle or Db2 for LUW. After you create workspaces that use two different data connector types, any subsequent workspaces must use one of those data connector types.
If you don't see the database that you want to connect to, or you want to have different database types for your source and destination database, contact support@tonic.ai.
When you select a connector type, Structural updates the view to display the connection fields used for that connector type. The specific fields vary based on the connector type.
Requires Kubernetes.
For self-hosted Docker deployments, you can install and configure a separate Kubernetes cluster to use. For more information, go to Setting up a Kubernetes cluster to use to write output data to container artifacts.
For information about required Kubernetes permissions, go to Required access to write destination data to container artifacts.
Not compatible with upsert.
Not compatible with Preserve Destination or Incremental table modes.
Only supported for PostgreSQL, MySQL, and SQL Server.
You can configure a workspace to write destination data to a container repository instead of to a database server.
When Structural writes data generation output to a repository, it writes the destination data to a container volume. From the list of container artifacts, you can copy the volume digest, and download a Docker Compose file that provides connection settings for the database on the volume. Structural generates the Compose file when you make the request to download it. For more information about getting access to the container artifacts, go to Viewing and downloading container artifacts.
You can also use the data volume to start a Tonic Ephemeral database. However, if the data is larger than 10 GB, we recommend that you write the data to an Ephemeral user snapshot instead. For information about writing to an Ephemeral snapshot, go to Writing output to Tonic Ephemeral.
For an overview of writing destination data to container artifacts, you can also view the video tutorial.
Under Destination Settings, to indicate to write the destination data to container artifacts, click Container Repository.
For a Structural instance that is deployed on Docker, unless you set up a separate Kubernetes cluster, the Container Repository option is hidden.
You can switch between writing to a database server and writing to a container repository at any time. Structural preserves the configuration details for both options. When you run data generation, it uses the currently selected option for the workspace.
From the Database Image dropdown list, select the image to use to create the container artifacts.
Select an image version that is compatible with the version of the database that is used in the workspace.
For a MySQL workspace, you can provide a customization file that helps to ensure that the temporary destination database is configured correctly.
To provide the customization details:
Toggle Use customization to the on position.
In the text area, paste the contents of the customization file.
To provide the location where Structural publishes the container artifacts:
In the Registry field, type the path to the container registry where Structural publishes the data volume.
In the Repository Path field, provide the path within the registry where Structural publishes the data volume.
You next provide the credentials that Structural uses to read from and write to the registry.
When you provide the registry, Structural detects whether the registry is from Amazon Elastic Container Registry (Amazon ECR), Google Artifact Registry (GAR), or a different container solution.
It displays the appropriate fields based on the registry type.
For a registry other than an Amazon ECR or a GAR registry, the credentials can be either a username and access token, or a secret.
The option to use a secret is not available on Structural Cloud.
In general, the credentials must be for a user that has read and write permissions for the registry.
The secret is the name of a Kubernetes secret that lives on the pod that the Structural worker runs on. The secret type must be kubernetes.io/dockerconfigjson. The Kubernetes documentation provides information on how to create a registry credentials secret.
To use a username and access token:
Click Access token.
In the Username field, provide the username.
In the Access Token field, provide the access token.
To use a secret:
Click Secret name.
In the Secret Name field, provide the name of the secret.
For Azure Container Registry (ACR), the provided credentials must be for a service principal that has sufficient permissions on the registry.
For Structural, the service principal must at least have the permissions that are associated with the AcrPush role.
Structural only supports Google Artifact Registry (GAR). It does not support Google Container Registry (GCR).
For a GAR registry, you upload a service account file, which is a JSON file that contains credentials that provide access to Google Cloud Platform (GCP).
The associated service account must have the Artifact Registry Writer role.
For Service Account File, to search for and select the file, click Browse.
For an Amazon ECR registry, you can either:
Provide the AWS access and secret key that is associated with the IAM user that will connect to the registry
Provide an assumed role
(Self-hosted only) Use the credentials configured in the Structural environment settings TONIC_AWS_ACCESS_KEY_ID and TONIC_AWS_SECRET_ACCESS_KEY.
(Self-hosted only) If Structural is deployed in Amazon Elastic Kubernetes Service (Amazon EKS), then you can use the AWS credentials that live on the EC2 instance.
To provide an AWS access key and secret key:
Click Access Keys.
In the Access Key field, enter an AWS access key that is associated with an IAM user or role.
In the Secret Key field, enter the secret key that is associated with the access key.
To provide an assumed role:
Click Assume Role.
In the Role ARN field, provide the Amazon Resource Name (ARN) for the role.
In the Session Name field, provide the role session name.
If you do not provide a session name, then Structural automatically generates a default unique value. The generated value begins with TonicStructural.
In the Duration (in seconds) field, provide the maximum length in seconds of the session.
The default is 3600, indicating that the session can be active for up to 1 hour.
The provided value must be less than the maximum session duration that is allowed for the role.
For the assumed role, Structural generates the external ID that is used in the assume role request. Your role’s trust policy must be configured to condition on your unique external ID.
Here is an example trust policy:
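The original example is not reproduced here. The following sketch shows the typical shape of such a trust policy; the principal ARN and the external ID value are placeholders, and the external ID condition must match the value that Structural generates for the assume role request.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowStructuralAssumeRoleWithExternalId",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/structural-worker"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "EXTERNAL-ID-GENERATED-BY-STRUCTURAL"
        }
      }
    }
  ]
}
```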
On a self-hosted instance, to use the credentials configured in the environment settings, click Environment Variables.
On a self-hosted instance, to use the AWS credentials from the EC2 instance, click Instance Profile.
The IAM user must have permission to list, push, and pull images from the registry. The following example policy includes the required permissions.
For additional security, a repository name filter allows you to limit access to only the repositories that are used in Structural. You need to make sure that the repositories that you create for Structural match the filter.
For example, you could prefix Structural repository names with tonic-. In the policy, you include a filter based on the tonic- prefix:
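The original example policy is not reproduced here. The sketch below shows the general shape of an IAM policy that allows pushing and pulling images while limiting repository access with a tonic- name filter; the exact action list that Structural requires may differ, and the account ID is a placeholder.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRegistryAuth",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Sid": "AllowPushPullTonicRepositories",
      "Effect": "Allow",
      "Action": [
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:*:123456789012:repository/tonic-*"
    }
  ]
}
```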
In the Tags field, provide the tag values to apply to the container artifacts. You can also change the tag configuration for individual data generation jobs.
Use commas to separate the tags.
A tag cannot contain spaces. Structural provides the following built-in values for you to use in tags:
{workspaceId} - The identifier of the workspace.
{workspaceName} - The name of the workspace.
{timestamp} - The timestamp when the data generation job that created the artifact completed.
{jobId} - The identifier of the data generation job that created the artifact.
For example, the following creates a tag that contains the workspace name, job identifier, and timestamp:
{workspaceName}_{jobId}_{timestamp}
To also tag the artifacts as latest, check the Tag as "latest" in your repository checkbox.
You can also optionally configure custom resource values for the Kubernetes pods. You can specify the ephemeral storage, memory, and CPU millicores.
To provide custom resources:
Toggle Set custom pod resources to the on position.
Under Storage Size:
In the field, provide the number of megabytes or gigabytes of storage.
From the dropdown list, select the unit to use.
The storage can be between 32 MB and 25 GB.
Under Memory Size:
In the field, provide the number of megabytes or gigabytes of RAM.
From the dropdown list, select the unit to use.
The memory can be between 512 MB and 4 GB.
Under Processor Size:
In the field, provide the number of millicores.
From the dropdown list, select the unit.
The processor size can be between 250m and 1000m.
Only available for PostgreSQL and SQL Server. Not available for MySQL.
In the Custom Database Name field, provide the name to use for the destination database.
If you do not provide a custom database name, then the destination database uses the same name as the source database.
In the Custom Password field, provide the password for the destination database user.
If you do not provide a password, then Structural generates a password.
If your Kubernetes nodes are configured with taints, then on a self-hosted instance, you can configure the tolerations that enable the datapacker pods to be scheduled on the nodes. The datapacker pod hosts the temporary database that Structural uses during the data generation.
For an overview of taints and tolerations, go to the Kubernetes documentation.
To configure the tolerations, you configure the following environment settings. You can add these settings to the Environment Settings list on Structural Settings.
CONTAINERIZATION_POD_NODE_TOLERATION_KEY - The toleration key value to apply to the datapacker pods. This setting is required. If you do not configure this setting, then Structural ignores the other settings.
CONTAINERIZATION_POD_NODE_TOLERATION_VALUES - A comma-separated list of toleration values to apply to the datapacker pods.
CONTAINERIZATION_POD_NODE_TOLERATION_EFFECT - The toleration effect to apply to the datapacker pods.
CONTAINERIZATION_POD_NODE_TOLERATION_OPERATOR - The toleration operator to apply to the datapacker pods.
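For example, if your nodes carry a taint such as dedicated=tonic:NoSchedule, settings along the following lines would allow the datapacker pods to tolerate it. The key and value here are illustrative; use the key, values, effect (NoSchedule, PreferNoSchedule, or NoExecute), and operator (Equal or Exists) that match your own node taints.

```
# Illustrative values only - match these to your node taints
CONTAINERIZATION_POD_NODE_TOLERATION_KEY=dedicated
CONTAINERIZATION_POD_NODE_TOLERATION_VALUES=tonic
CONTAINERIZATION_POD_NODE_TOLERATION_EFFECT=NoSchedule
CONTAINERIZATION_POD_NODE_TOLERATION_OPERATOR=Equal
```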