Getting started with the Structural free trial


If you want to set up an account in an existing Tonic Structural Cloud or self-hosted organization, go to Creating a new account in an existing organization.

About the Structural free trial

The Structural 14-day free trial allows you to explore and experiment in Structural Cloud before you decide whether to purchase Structural.

When you sign up for a free trial, Structural automatically creates a sample workspace for you to use. You can also create a workspace that uses your own database or files.

The free trial provides tools to introduce you to Structural and to guide you through configuring and completing a data generation.

Structural tracks and displays the amount of time remaining in your free trial. You can also request a demonstration and contact support.

When the free trial period ends, you can continue to use Structural to configure workspaces, but you can no longer generate data or train models. Contact Tonic.ai to discuss purchasing a Structural license, or select the option to start a Structural Cloud pay-as-you-go subscription.

Signing up for the free trial

To start a new free trial of Structural:

  1. Go to app.tonic.ai.

  2. Click Create Account.

On the Create your account dialog, use one of the following options to create the account:

  • To use a corporate Google email address to create the account, click Create account using Google.

  • To create a new Structural account:

    1. Enter your email address. You cannot use a public email address for a free trial account.

    2. Create and confirm a Structural password.

    3. Click Create Account.

Structural sends an activation link to your email address.

After you activate your account and log in, Structural prompts you to select the use case that best matches why you are exploring Structural.

If none of the provided use cases fits, use the Other option to tell us about your use case.

After you select a use case, click Next. The Create Your Workspace panel displays.

Determining whether to use your own data

When you sign up for a free trial, Structural provides access to a sample PostgreSQL workspace that you can use to explore how to configure and run data generation.

You can also choose to create a workspace that uses your own data, either from local files or from a database.

On the Create your workspace panel:

  • To use the sample workspace, click Use a sample workspace, then click Next. Structural displays Privacy Hub, which summarizes the protection status for the source data. It also displays the Getting Started Guide panel and the quick start checklist.

  • To create a workspace that uses local files as the source data, click Upload files, then click Next. Go to Uploading files.

  • To create a workspace that uses your own database, click Bring your own data, then click Next. Go to Connecting to a database.

Uploading files

The Upload files option creates a local files workspace. The source data consists of groups of files selected from a local file system. The files in a file group must have the same type and structure. Each file group becomes a "table" in the source data.

For other workspaces that you create during the free trial, you can also create a file connector workspace that uses files from cloud storage (Amazon S3 or Google Cloud Storage).

After you select Upload files and click Next, you are prompted to provide a name for the workspace.

In the field provided, enter the name to use for the workspace, then click Next.

Structural displays the File Groups view, where you can set up the file groups for the workspace. It also displays the Getting Started Guide panel, with links to resources to help you get started.

After you create at least one file group, you can start to use the other Structural features and functions.

Connecting to a database

If you connect to your own data, then you must allowlist the Structural static IP addresses. For more information, go to the FAQ "I allowlist access to my database. What are your static IP addresses?"

Provide a name for your workspace

If you choose to create a workspace with your own data, then the first step is to provide a name for the workspace.

In the field provided, enter the name to use for your first workspace, then click Next.

The Invite others to Tonic panel displays.

Invite other users to Structural and your workspace

Under Invite others to Tonic, you can optionally invite other users with the same corporate email domain to start their own Structural free trial. The users that you invite are able to view and edit your workspace.

For example, you might want to invite other users if you don't have access to the connection information for the source data. You can invite a user who does have access. They can then update the workspace configuration to add the connection details.

To continue without inviting other users, click Skip this step.

To invite users:

  1. For each user to invite, enter the email address, then press Enter. The email addresses must have the same corporate email domain as your email address.

  2. After you create the list of users to invite, click Next.

The Add source data connection view displays.

Supported databases for free trial workspaces

The final step in creating the workspace is to provide the source data to use for your workspace.

Structural provides data connectors that allow you to connect to an existing database. Each data connector allows you to connect to a specific type of database. Structural supports several types of application databases, data warehouses, and Spark data solutions.

For the first workspace that you create using the free trial wizard, you can choose:

  • Google BigQuery

  • MongoDB

  • MySQL

  • PostgreSQL

  • Snowflake on AWS

  • Snowflake on Azure

  • SQL Server

  • Yugabyte

For subsequent workspaces that you create from the Workspaces view, you can also choose Databricks, Salesforce, and Amazon DynamoDB.

Selecting the database type

To connect to an existing database, on the Add source data connection panel, click the data connector to use, then click Add connection details.

The panel also includes a Local files option, which creates a local files file connector workspace, the same as the Upload files option.

Use the connection details fields to provide the connection information for your source data. The specific fields depend on the type of data connector that you select.

After you provide the connection details, to test the connection, click Test Connection.
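If the connection test fails, it can help to rule out basic network reachability from outside Structural. Below is a minimal sketch; the host and port are placeholder values, not values from this guide:

```python
import socket

# Placeholder values; substitute your own source database host and port.
HOST = "db.example.internal"
PORT = 5432  # for example, the default PostgreSQL port

try:
    # Attempt a plain TCP connection to verify that the database host
    # is reachable on the expected port.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"TCP connection to {HOST}:{PORT} succeeded")
except OSError as err:
    # Common causes: firewall rules, a missing IP allowlist entry,
    # or the wrong port.
    print(f"Cannot reach {HOST}:{PORT}: {err}")
```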

To save your workspace, click Save.

Structural displays Privacy Hub, which summarizes the protection status for the source data. It also displays the Getting Started Guide panel, with links to resources to help you get started.

Free trial resources

The Structural free trial includes several resources to introduce you to Structural and to guide you through the tasks for your first data generation.

Getting Started Guide panel

The Getting Started Guide panel provides access to Structural information and support resources.

The Getting Started Guide panel displays automatically when you first start the free trial. To display the Getting Started Guide panel manually, in the Structural heading, click Getting Started.

The Getting Started Guide panel provides links to Structural instructional videos and this Structural documentation. It also contains links to request a Structural demo, contact Tonic.ai support, and purchase a Structural Cloud pay-as-you-go subscription.

Quick start checklist

For each free trial workspace, Structural provides access to a workspace checklist.

The checklist displays automatically at the bottom left of the workspace management view. To hide the checklist, click the minimize icon. To display it again, click the checklist icon.

The checklist provides a basic list of tasks to perform in order to complete a Structural data generation.

Each checklist task is linked to the Structural location where you can complete that task. Structural automatically detects and marks when a task is completed.

The checklist tasks differ slightly based on the type of workspace.

Checklist for database-based workspaces

For workspaces that are connected to a database, including the sample PostgreSQL workspace and workspaces that you connect to your own data, the checklist contains:

  1. Connect a source database - Set the connection to the source database. In most cases, you set the source connection when you create the workspace. When you click this step, Structural navigates to the Source Settings section of the workspace details view.

  2. Connect to destination database - Set the location where Structural writes the transformed data. When you click this step, Structural navigates to the Destination Settings section of the workspace details view.

  3. Apply generators to modify dataset - Configure how Structural transforms at least one column in the source data. When you click this step:

    • If there are available generator recommendations, then Structural navigates to Privacy Hub and displays the generator recommendations panel.

    • If there are no available generator recommendations, then Structural navigates to Database View.

  4. Generate data - Run the data generation to produce the destination data. When you click this item, Structural navigates to the Confirm Generation panel.

Checklist for local file workspaces

For workspaces that use data from local files, the checklist contains:

  1. Create a file group - Create a file group with files that you upload from a local file system. Each file group becomes a table in the workspace. When you click this step, Structural navigates to the File Groups view for the workspace.

  2. Apply generators to modify dataset - Configure how Structural transforms at least one column in the source files. When you click this step:

    • If there are available generator recommendations, then Structural navigates to Privacy Hub and displays the generator recommendations panel.

    • If there are no available generator recommendations, then Structural navigates to Database View.

  3. Generate data - Run the data generation to produce transformed versions of the source files. When you click this step, Structural navigates to the Confirm Generation panel.

  4. Download your dataset - Download the transformed files from the Structural application database.

Checklist for cloud storage file workspaces

For workspaces that use data from files in cloud storage (Amazon S3 or Google Cloud Storage), the checklist contains:

  1. Configure output location - Configure the cloud storage location where Structural writes the transformed files. When you click this step, Structural navigates to the Output location section of the workspace details view.

  2. Create a file group - Create a file group that contains files selected from cloud storage. When you click this step, Structural navigates to the File Groups view for the workspace.

  3. Apply generators to modify dataset - Configure how Structural transforms at least one column in the source data. When you click this step:

    • If there are available generator recommendations, then Structural navigates to Privacy Hub and displays the generator recommendations panel.

    • If there are no available generator recommendations, then Structural navigates to Database View.

  4. Generate data - Run the data generation to produce transformed versions of the source files. When you click this step, Structural navigates to the Confirm Generation panel.

Next step hints

In addition to the workspace checklists, Structural uses next step hints to help guide you through the workspace configuration and data generation.

When a next step hint is available, it displays as an animated marker next to the suggested next action.

When you hover over the highlighted action, Structural displays a help text popup that explains the recommended action.

When you click the highlighted action, the hint is removed, and the next hint is displayed.

Creating a file group

For a file connector workspace, to identify the source data, you create file groups. A file group is a set of files of the same type and with the same structure. Each file group becomes a table in the workspace. For CSV files, each column becomes a table column. For XML and JSON file groups, the table contains a single XML or JSON column.
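As a conceptual illustration of how a file group maps to a table (this sketch uses the pandas library and hypothetical file names; it is not how Structural itself processes files):

```python
import pandas as pd

# Two hypothetical CSV files with identical columns, as required for
# files that belong to the same file group.
customers_jan = pd.read_csv("customers_jan.csv")  # columns: id, name, email
customers_feb = pd.read_csv("customers_feb.csv")  # columns: id, name, email

# Conceptually, the file group becomes a single table that combines
# the rows from every file in the group.
customers_table = pd.concat([customers_jan, customers_feb], ignore_index=True)
print(customers_table.head())
```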

On the File Groups view, click Create File Group.

Uploading local files

For a file connector workspace that uses local files, you can either drag and drop files from your local file system to the file group, or search for and select files to add. For more information, go to Managing file groups in a file connector workspace.

Selecting files from cloud storage

For a file connector workspace that uses cloud storage, you select the files to include in the file group. For more information, go to Managing file groups in a file connector workspace.

Configuring file delimiters and settings

For files that contain CSV content, you configure the delimiters and other file settings. For more information, go to Managing file groups in a file connector workspace.
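To illustrate what the delimiter setting controls (again a sketch with a hypothetical file name, not Structural's parser), Python's csv module can detect a delimiter from a sample of the file:

```python
import csv

# Hypothetical sample file; replace with one of your own CSV files.
with open("customers_jan.csv", newline="") as f:
    sample = f.read(2048)
    # Sniffer guesses the delimiter from the candidates listed here.
    dialect = csv.Sniffer().sniff(sample, delimiters=",;\t|")
    f.seek(0)
    reader = csv.reader(f, dialect)
    header = next(reader)

print(f"Detected delimiter: {dialect.delimiter!r}")
print(f"Columns: {header}")
```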

Assigning a generator

To get value out of the data generation process, you assign generators to the data columns.

A generator indicates how to transform the data in a column. For example, for a column that contains a name value, you might assign the Name generator, which indicates how to generate a replacement name in the generation output.
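To make the idea concrete, here is a rough sketch of what a name-replacement generator does, using the third-party Faker library purely for illustration; this is not Structural's implementation:

```python
from faker import Faker  # third-party library: pip install faker

fake = Faker()
Faker.seed(42)  # seed the example so that it is reproducible

# A toy "Name generator": each source name is replaced with a realistic
# fake name in the output.
source_names = ["Alice Johnson", "Bob Smith", "Alice Johnson"]

# Caching the replacement for each source value mimics a consistency
# option, where the same input always maps to the same output.
replacements: dict[str, str] = {}
masked = []
for name in source_names:
    if name not in replacements:
        replacements[name] = fake.name()
    masked.append(replacements[name])

print(masked)  # both "Alice Johnson" entries get the same fake name
```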

Applying all recommendations

For sensitive columns that Structural detects, Structural can also provide a recommended generator configuration.

When there are recommendations available, Privacy Hub displays a link to review all of the recommendations.

The Recommended Generators by Sensitivity Type panel displays a list of sensitive columns that Structural detected, along with the suggested generators to apply.

After reviewing, to apply all of the suggested generators, click Apply All. For more information about using this panel, go to Reviewing and applying recommended generators.

You can also choose to apply an individual generator manually. You can do this from Privacy Hub, Database View, or Table View.

Selecting a generator

To display Database View, on the workspace management view, click Database View.

On Database View, in the column list, the Applied Generator column lists the currently assigned generator for each column. For a new workspace, the columns are all assigned the Passthrough generator. The Passthrough generator simply passes the source value through to the destination data without masking it.

Click a column that is marked as Passthrough and that is not marked as sensitive. For example, in the sample workspace, click the customers.Last_Transaction column. The column configuration panel displays. To select a generator, click the generator dropdown. The list contains the generators that can be assigned to the column based on its data type. For customers.Last_Transaction, the Timestamp Shift generator is a good option.
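As a rough sketch of what a timestamp-shifting transformation does (the 30-day window below is an assumption for illustration, not Structural's default):

```python
import random
from datetime import datetime, timedelta

random.seed(7)  # seed the example so that it is reproducible

def shift_timestamp(ts: datetime, max_days: int = 30) -> datetime:
    """Shift a timestamp by a random number of days within +/- max_days."""
    offset_days = random.randint(-max_days, max_days)
    return ts + timedelta(days=offset_days)

original = datetime(2024, 3, 15, 10, 30)
print(shift_timestamp(original))  # a nearby, but different, timestamp
```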

Assigning a recommended generator

For Passthrough columns that Structural identified as containing sensitive data, the Applied Generator column displays an icon to indicate that there is a recommended generator.

In Database View, click one of those columns. For example, in the sample workspace, the customers.Email column is marked as containing an email address.

For customers.Email, click the generator dropdown. Instead of the column configuration panel, you see a panel that indicates the recommended generator. For customers.Email, the recommended generator is Email. To assign the Email generator, click Apply. The column configuration panel displays with the generator assigned.

Configuring the destination location

To run a data generation, Structural must have a destination for the transformed data.

For a local files workspace, Structural saves the transformed files to the application database.

For workspaces that use data from a database, and for workspaces that use cloud storage files, you configure where Structural writes the output data.

Available output options

The destination location for data generation output can be one of the following:

  • For database-based data connectors, you can write the transformed data to a destination database.

  • If the data connector supports Tonic Ephemeral, then the default option is to write the output data to Ephemeral.

  • For some Structural data connectors, Structural can write the transformed data to a data volume in a container repository.

  • For file connector workspaces that transform files from cloud storage (Amazon S3 or Google Cloud Storage), you configure the cloud storage location where Structural writes the transformed files.

Displaying the current destination configuration

To display the destination configuration for the workspace:

  1. Click the Workspace Settings tab.

  2. Scroll to the Destination Settings section or, for a file connector workspace that uses cloud storage files, scroll to the Output location section.

Confirming or changing the destination configuration

Ephemeral snapshot

For data connectors that Ephemeral supports, the default option is to write the output to Ephemeral.

For the Ephemeral option, the default configuration is:

  • Structural writes the output to Ephemeral Cloud. If you do not have an Ephemeral Cloud account, then we create an Ephemeral free trial account for you. If your organization has a self-hosted Ephemeral instance, then you can choose to write the output to that instance. Note that all workspaces in the same organization or for the same self-hosted Structural instance must use the same Ephemeral instance.

  • Structural uses the output data to create an Ephemeral user snapshot. You can use the user snapshot to create Ephemeral databases.

  • When Structural creates the user snapshot in Ephemeral, it creates a temporary Ephemeral database to use as the basis for the user snapshot. There is an option to keep that temporary database. For a free trial workspace, this option is enabled by default. The database expires after 48 hours.

For details about how to configure Structural to write output to Ephemeral, go to Writing output to Tonic Ephemeral. For more information about Ephemeral, go to the Ephemeral documentation.

Destination database

To write the data to a destination database, click Database Server. Structural displays the configuration fields for the destination database.

For information on how to configure the destination information for a specific data connector, go to the workspace configuration information for that data connector. The data connector summary contains a list of the available data connectors, and provides a link to the documentation for each data connector.

Container repository

To write the data to a data volume in a container repository, click Container Repository. Structural displays the configuration fields to select a base image and provide the details about the repository.

For more information, go to Writing output to a container repository.

Cloud storage files output location

For a file connector workspace that uses files from cloud storage (Amazon S3 or Google Cloud Storage), you configure the cloud storage output location where Structural writes the transformed files. The configuration includes the required credentials to use.

For more information, go to Configuring the file connector storage type and output options.

Running data generation

After you complete the workspace and generator configuration, you can run your first data generation.

The data generation process uses the assigned generators to transform the source data. It writes the transformed data to the configured destination location.

For a local files workspace, it writes the files to the Structural application database.

Starting the generation

The Generate Data option is at the top right of the Structural heading.

When you click Generate Data, Structural displays the Confirm Generation panel.

The Confirm Generation panel provides access to the current destination configuration, along with other advanced generation options such as subsetting and upsert.

It also indicates if there are any issues that prevent you from starting the data generation. For example, if the workspace does not have a configured destination, then Structural cannot run the data generation.

To start the data generation, click Run Generation. For more information about running data generation, go to Running data generation jobs.
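You can also start a generation from a script through the Structural API (go to About the Structural API). The sketch below uses Python's requests library; the endpoint path, query parameter, and Authorization header format are assumptions for illustration, so confirm them against the Structural API reference:

```python
import requests

# Placeholder values; go to Getting an API token and Getting the
# workspace ID for how to obtain the real ones.
BASE_URL = "https://app.tonic.ai"
API_TOKEN = "your-api-token"
WORKSPACE_ID = "your-workspace-id"

# Hypothetical route and parameter names; check the Structural API
# reference for the actual endpoint.
response = requests.post(
    f"{BASE_URL}/api/GenerateData/start",
    params={"workspaceId": WORKSPACE_ID},
    headers={"Authorization": f"Apikey {API_TOKEN}"},
)
response.raise_for_status()
print("Data generation job started:", response.json())
```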

For a new Tonic Ephemeral account, the first time that you run data generation, you also receive an activation email message for the account.

Viewing the job details and connecting to an Ephemeral database

To view the job status and details:

  1. Click Jobs.

  2. In the list, click the data generation job.

For a data generation that writes the output to an Ephemeral database, the Data Available in Tonic Ephemeral panel provides access to the database connection information.

To display the connection details, click Connecting to your database.

The connection details include the database location and credentials. Each field contains a copy icon to allow you to copy the value.
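For example, for a PostgreSQL-based Ephemeral database, you could connect from Python. This sketch assumes the psycopg2 driver and placeholder credentials; substitute the values from the connection details panel:

```python
import psycopg2  # third-party driver: pip install psycopg2-binary

# Placeholder values; copy the real ones from the
# Data Available in Tonic Ephemeral panel.
conn = psycopg2.connect(
    host="ephemeral.example.tonic.ai",
    port=5432,
    dbname="your_database",
    user="your_user",
    password="your_password",
)

with conn, conn.cursor() as cur:
    # Quick smoke test: list a few tables in the de-identified database.
    cur.execute(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_schema = 'public' LIMIT 5"
    )
    for (table_name,) in cur.fetchall():
        print(table_name)

conn.close()
```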

Next steps for free trial users

The first time that you complete all of the steps in a checklist, Structural displays a panel with options to chat with our sales team, schedule a demo, or purchase a subscription.

If your free trial has expired, you can reach out to us through the in-app chat or by email to request an extension.

You can also continue to get to know Structural and experiment with other Structural features, such as subsetting, or use composite generators to mask more complex values such as JSON or XML.
