Structural data generation workflow

Tonic Structural data generation combines sensitive data detection and data transformation to create safe, secure, and compliant datasets.

The Structural data generation workflow involves the following steps:

*Overview diagram of the Tonic Structural data generation workflow*

You can also view this video overview of the Structural data generation workflow.

  1. To get started, you create a data generation workspace. In the workspace, you identify the type of source data, such as PostgreSQL or MySQL, and establish the connections to the source database and the destination location. The source database contains the original data that you want to synthesize. The destination location, where Structural stores the synthesized data, might be a database, a storage location, a container repository, or an Ephemeral database.

  2. Next, you analyze the results of the initial sensitivity scan. The sensitivity scan identifies columns that contain sensitive data and therefore need to be protected by a generator.

  3. Based on the sensitivity scan results, you configure the data generation. The configuration includes:

    • Assigning table modes to tables. The table mode controls the number of rows and columns that are copied to the destination location.

    • Indicating column sensitivity. You can adjust the initial sensitivity assignments. For example, you can mark as sensitive additional columns that the initial scan did not flag.

    • Assigning and configuring column generators. To protect the data in a column, especially a sensitive column, you assign a generator to it. The generator replaces the source value with a different value in the destination database. For example, the generator might scramble the characters or assign a random value of the same type.

  4. After you complete the configuration, you run the data generation job. The data generation job uses the configured table modes and generators to transform the data from the source database and write the transformed data to the destination location. You can track the job progress and view the job results.
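To picture what the sensitivity scan in step 2 does, here is a toy sketch in Python that flags columns whose sample values match patterns for common sensitive data types. This is an illustration only: Tonic Structural's actual detection logic is built into the product and is more sophisticated; every name below is hypothetical.

```python
import re

# Hypothetical patterns for common sensitive data types.
# A real scan also considers column names, data types, and more.
PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "ssn": re.compile(r"\d{3}-\d{2}-\d{4}"),
}

def scan_columns(columns):
    """Return the names of columns whose sample values match a sensitive-data pattern."""
    sensitive = set()
    for name, samples in columns.items():
        for value in samples:
            if any(p.search(str(value)) for p in PATTERNS.values()):
                sensitive.add(name)
                break
    return sensitive

columns = {
    "email": ["ada@example.com", "bob@example.com"],
    "quantity": [3, 17],
}
print(scan_columns(columns))  # → {'email'}
```

Columns flagged this way are the ones you would review in step 2 and protect with generators in step 3.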
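Steps 3 and 4 can be sketched as applying a configured generator to each sensitive column of each row, leaving other columns unchanged. Again, this is a minimal, hypothetical sketch: Structural's real generators are built into the product and configured through the application, not hand-written, and all names here are invented for illustration.

```python
import random
import string

def scramble(value, rng):
    """A 'scramble'-style generator: replace each digit with a random digit and
    each letter with a random letter, preserving length and punctuation."""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(rng.choice(string.digits))
        elif ch.isalpha():
            out.append(rng.choice(string.ascii_lowercase))
        else:
            out.append(ch)
    return "".join(out)

def run_generation(rows, generators, seed=0):
    """Toy data generation job: apply each column's configured generator.
    Columns without a generator pass through unchanged."""
    rng = random.Random(seed)  # seeded for repeatable output
    return [
        {col: generators.get(col, lambda v, r: v)(val, rng)
         for col, val in row.items()}
        for row in rows
    ]

rows = [{"name": "Ada Lovelace", "phone": "555-0100", "qty": 3}]
safe_rows = run_generation(rows, {"name": scramble, "phone": scramble})
```

In this sketch, `name` and `phone` are transformed while `qty` is copied as-is, mirroring how a real job writes a mix of protected and passthrough columns to the destination.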
