Structural data generation workflow
Tonic Structural data generation combines sensitive data detection and data transformation to create safe, secure, and compliant datasets.
The Structural data generation workflow involves the following steps:
To get started, you create a workspace. When you create a workspace, you identify the type of source data, such as PostgreSQL or MySQL, and establish the connections to the source database and the destination location. The source database contains the original data that you want to synthesize. The destination location is where Structural stores the synthesized data. It might be a database, a storage location, a container repository, or an Ephemeral database snapshot.
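For example, a PostgreSQL workspace needs enough connection detail to read from the source and write to the destination. The sketch below shows a plausible shape for that configuration; the field names are illustrative assumptions, not the actual Structural workspace settings or API.

```python
# Illustrative only: a hypothetical shape for workspace connection settings.
# Field names are assumptions, not Tonic Structural's actual configuration schema.
workspace = {
    "name": "customers-dev",
    "source_type": "PostgreSQL",
    "source": {
        "host": "source-db.internal",
        "port": 5432,
        "database": "customers",
        "username": "readonly_user",
        "password": "********",
    },
    "destination": {
        "host": "dest-db.internal",
        "port": 5432,
        "database": "customers_synth",
        "username": "writer_user",
        "password": "********",
    },
}
```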
Next, you run a sensitivity scan. The sensitivity scan identifies columns that contain sensitive data. These columns need to be protected by a generator.
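Conceptually, a sensitivity scan inspects column names (and, in practice, values) for patterns associated with personal data. The sketch below illustrates only the idea with a simple name-based check; it is not Structural's detection logic.

```python
import re

# Simplified illustration of sensitivity detection based on column names only.
SENSITIVE_NAME_PATTERNS = [r"email", r"phone", r"ssn", r"name", r"birth|dob"]

def looks_sensitive(column_name: str) -> bool:
    """Return True when the column name matches a known sensitive pattern."""
    return any(re.search(p, column_name, re.IGNORECASE) for p in SENSITIVE_NAME_PATTERNS)

columns = ["id", "email", "created_at", "phone_number", "order_total"]
print([c for c in columns if looks_sensitive(c)])  # ['email', 'phone_number']
```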
Based on the sensitivity scan results, you configure the data generation. The configuration includes:

The table mode controls the number of rows and columns that are copied to the destination database (see the first sketch after these configuration items).
You can adjust the initial sensitivity assignments. For example, you can mark as sensitive additional columns that the initial scan did not identify.
To protect the data in a column, especially a sensitive column, you assign a generator to it. The generator replaces the source value with a different value in the destination database. For example, the generator might scramble the characters or assign a random value of the same type.
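To make the table-mode idea concrete, the first sketch shows how a mode choice might determine how many rows reach the destination. The mode names and selection logic are illustrative assumptions, not Structural's actual table modes.

```python
# Illustrative only: hypothetical table modes and their effect on copied rows.
def rows_to_copy(rows, mode):
    if mode == "copy_all":      # copy every source row
        return rows
    if mode == "truncate":      # create the table but copy no rows
        return []
    if mode == "sample_10pct":  # copy a deterministic 10% subset
        return rows[::10]
    raise ValueError(f"unknown table mode: {mode}")

source_rows = list(range(1000))
print(len(rows_to_copy(source_rows, "copy_all")))      # 1000
print(len(rows_to_copy(source_rows, "truncate")))      # 0
print(len(rows_to_copy(source_rows, "sample_10pct")))  # 100
```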
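The second sketch illustrates what a generator does to an individual value: one variant scrambles the characters, another substitutes a stable replacement of the same type. Both are conceptual examples, not Structural's built-in generators.

```python
import hashlib
import random

def scramble(value: str) -> str:
    """Shuffle the characters of the source value."""
    chars = list(value)
    random.shuffle(chars)
    return "".join(chars)

def consistent_email(value: str) -> str:
    """Map an email to a fake address of the same type; the same input
    always produces the same output."""
    digest = hashlib.sha256(value.encode()).hexdigest()[:10]
    return f"user_{digest}@example.com"

print(scramble("Jane Smith"))             # e.g. 'tSih eJanm'
print(consistent_email("jane@acme.com"))  # stable fake address
```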
After you complete the configuration, you run the data generation job. The data generation job uses the configured table modes and generators to transform the data from the source database and write the transformed data to the destination location. You can track the job progress and view the job results.
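At a high level, a generation job applies the table mode to select rows, runs each column's generator, and writes the transformed rows to the destination. The sketch below shows that flow with simplified, assumed names; it is not how Structural is implemented.

```python
# Conceptual flow of a generation job: select rows per the table mode, then
# run each column's generator; unconfigured columns pass through unchanged.
def run_generation(rows, table_mode, generators):
    selected = rows if table_mode == "copy_all" else []  # simplified table mode
    return [
        {column: generators.get(column, lambda v: v)(value)
         for column, value in row.items()}
        for row in selected
    ]

source_rows = [{"id": 1, "email": "jane@acme.com", "order_total": 19.99}]
generators = {"email": lambda v: "user_0001@example.com"}
print(run_generation(source_rows, "copy_all", generators))
# [{'id': 1, 'email': 'user_0001@example.com', 'order_total': 19.99}]
```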