Configuring the file connector storage type and output options


On the workspace details view for a file connector workspace, you:

  • Identify the type of storage. After you add a file group to the workspace, you cannot change the storage type.

  • Indicate where to write the transformed files.

  • If needed, provide credentials to access the cloud storage.

Identifying the type of storage

On the workspace creation view:

  1. Under Connection Type, under File/Blob Storage, click Files.

  2. Select the type of file storage where the source files are located.

    • To choose files from Amazon S3, click Amazon S3.

    • To choose files from MinIO, make sure that the TONIC_AWS_S3_OVERRIDE_URL environment setting points to your MinIO endpoint, then click Amazon S3.

    • To choose files from GCS, click Google Cloud Storage.

    • To upload files from a local file system, click Local Filesystem.

    • To choose files from a local file mount, click File Mount. The file mount option is not available on Structural Cloud. If you configured a single file mount path, then that path is used. You cannot specify the path. Otherwise, in the Source File Mount Path field, provide the file mount path where the source files are located. The file mount path must be accessible by the container that runs the Structural application.

    After you add a file group to the workspace, you cannot change the storage type.

Selecting the location for the transformed files

Local files

When the source files come from a local file system, Tonic Structural writes the output files to the large file store in the Structural application database. You can then download the most recently generated files.

Cloud storage

For cloud storage workspaces, in the Output location field, provide the path to the folder where Structural writes the transformed files.

File mount

For files that come from a local file mount, you can write the output files to one of the following:

  • An S3 bucket

  • Google Cloud Storage

  • A file mount

S3 bucket or Google Cloud Storage

For S3 buckets and Google Cloud Storage, in the Output location field, provide the path to the folder where Structural writes the transformed files.

File mount - single file mount path

For a file mount, if you configured a single file mount path, then in the Output location field, provide the location within the file mount where Structural writes the transformed files.

File mount - no single file mount path

If you did not configure a single file mount path:

  1. By default, the files are written to the same file mount path where the source files are located. To use a different file mount path:

    1. Toggle Set different mount for output to the on position.

    2. In the Destination File Mount Path field, provide the file mount path. The file mount path must be accessible by the container that runs the Structural application.

  2. In the Output location field, provide the location within the file mount where Structural writes the transformed files.

Providing credentials to access AWS

For a file connector workspace that writes files to Amazon S3, under AWS Credentials, you configure how Structural obtains the credentials to connect to Amazon S3.

Selecting the type of credentials to use

Under AWS Credentials, click the type of credentials to use. The options are:

  • Environment - Only available on self-hosted instances. Indicates to use either of the following (see the conceptual sketch after this list):

    • The credentials for the IAM role on the host machine.

    • The credentials set in the following environment settings:

      • TONIC_AWS_ACCESS_KEY_ID - An AWS access key that is associated with an IAM user or role.

      • TONIC_AWS_SECRET_ACCESS_KEY - The secret key that is associated with the access key.

      • TONIC_AWS_REGION - The AWS Region to send the authentication request to.

  • Assumed role - Indicates to use the specified assumed role.

  • User credentials - Indicates to use the provided user credentials.
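
To illustrate the Environment option, here is a minimal conceptual sketch in Python using boto3. This is not Structural's actual implementation; it only shows the resolution order described above, with the TONIC_-prefixed environment settings taking precedence and boto3's default credential chain (which includes the host IAM role) as the fallback.

import os

import boto3

# Conceptual sketch: prefer the TONIC_-prefixed environment settings when set;
# otherwise fall back to boto3's default credential chain, which includes the
# IAM role on the host machine.
access_key = os.environ.get("TONIC_AWS_ACCESS_KEY_ID")
secret_key = os.environ.get("TONIC_AWS_SECRET_ACCESS_KEY")
region = os.environ.get("TONIC_AWS_REGION")

if access_key and secret_key:
    session = boto3.Session(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name=region,
    )
else:
    # No explicit keys: the default chain resolves the host IAM role.
    session = boto3.Session(region_name=region)

s3 = session.client("s3")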

Providing an assumed role

To provide an assumed role, click Assume role, then:

  1. In the Role ARN field, provide the Amazon Resource Name (ARN) for the role.

  2. In the Session Name field, provide the role session name. If you do not provide a session name, then Structural automatically generates a default unique value. The generated value begins with TonicStructural.

  3. In the Duration (in seconds) field, provide the maximum length in seconds of the session. The default is 3600, indicating that the session can be active for up to 1 hour. The provided value must be less than the maximum session duration that is allowed for the role.

  4. By default, Structural uses the same assumed role to both retrieve the source files and write the output files. To provide a different assumed role for the output location:

    1. Toggle Set different credentials for output to the on position.

    2. In the Role ARN field, provide the ARN for the role.

    3. In the Session Name field, provide the role session name. If you do not provide a session name, then Structural automatically generates a default unique value. The generated value begins with TonicStructural.

    4. In the Duration (in seconds) field, provide the maximum length in seconds of the session. The default is 3600, indicating that the session can be active for up to 1 hour. The provided value must be less than the maximum session duration that is allowed for the role.

For each assumed role, Structural generates the external ID that is used in the assume role request. Your role’s trust policy must be configured to condition on your unique external ID.

Here is an example trust policy:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {
      "AWS": "<originating-account-id>"
    },
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringEquals": {
        "sts:ExternalId": "<external-id>"
      }
    }
  }
}
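
To sanity-check a trust policy like the one above before you configure the workspace, you can reproduce the assume-role request yourself. The sketch below uses boto3; the role ARN is a hypothetical placeholder, and the external ID must be the value that Structural generated for the workspace.

import boto3

sts = boto3.client("sts")

# Hypothetical values: substitute your own role ARN and the external ID
# that Structural generated for this workspace.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/StructuralFileConnector",
    RoleSessionName="TonicStructuralTest",
    ExternalId="<external-id>",
    DurationSeconds=3600,  # must not exceed the role's maximum session duration
)

credentials = response["Credentials"]
print("Temporary credentials expire at:", credentials["Expiration"])

If the call fails with an access denied error, check that the sts:ExternalId condition in the trust policy matches the external ID that Structural generated.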

Providing the AWS credentials

To provide the credentials, under AWS Credentials:

  1. In the AWS Access Key field, enter the AWS access key that is associated with an IAM user or role.

  2. In the AWS Secret Key field, enter the secret key that is associated with the access key.

  3. From the AWS Region dropdown list, select the AWS Region to send the authentication request to.

  4. By default, Structural uses the same AWS credentials to both retrieve the source files and write the output files. To provide different AWS credentials for the output location:

    1. Toggle Set different credentials for output to the on position.

    2. In the AWS Access Key field, enter the AWS access key that is associated with an IAM user or role.

    3. In the AWS Secret Key field, enter the secret key that is associated with the access key.

    4. From the AWS Region dropdown list, select the AWS Region to send the authentication request to.

  5. In the AWS Session Token field, you can optionally provide a session token for a temporary set of credentials. You can provide a session token regardless of whether you use the same or different credentials for the source and output.
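
The values entered in these fields are standard AWS credentials, so you can verify them outside of Structural with any S3 client. A minimal sketch, assuming boto3 and a hypothetical bucket name and Region:

import boto3

# The same fields as the form above. The session token is optional and is
# only needed for a temporary set of credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id="<aws-access-key>",
    aws_secret_access_key="<aws-secret-key>",
    aws_session_token=None,  # or a temporary session token
    region_name="us-east-1",  # hypothetical Region
)

# List a few objects in a hypothetical bucket to confirm access.
response = s3.list_objects_v2(Bucket="my-output-bucket", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])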

Providing credentials to access Google Cloud Storage

To write files to a folder in Google Cloud Storage, you must provide Google Cloud Platform credentials in the workspace configuration.

Under GCP Credentials:

  1. For Service Account File, select the service account file (JSON file) for the source files.

  2. In the GCP Project ID field, provide the identifier of the project that contains the source files.
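
To confirm that a service account file and project ID are valid before you enter them, you can build a client from the same inputs. This sketch uses the google-cloud-storage Python library; the file path, project ID, and bucket name are placeholders.

from google.cloud import storage

# Build a client from the same service account file and project ID that you
# provide in the workspace configuration (placeholder values shown).
client = storage.Client.from_service_account_json(
    "service-account.json",
    project="my-gcp-project",
)

# List a few objects from a hypothetical bucket to verify access.
for blob in client.list_blobs("my-source-bucket", max_results=5):
    print(blob.name)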

Providing credentials to access MinIO

When the TONIC_AWS_S3_OVERRIDE_URL environment setting points to a MinIO endpoint, then when you select Amazon S3 as the source, you create a MinIO workspace.

Under AWS Credentials, you provide the MinIO credentials. The MinIO credentials consist of an access key and a secret key.

To provide the credentials, you can either:

  • (Self-hosted only) Use the credentials set in the following environment settings:

    • TONIC_AWS_ACCESS_KEY_ID - A MinIO access key

    • TONIC_AWS_SECRET_ACCESS_KEY - The secret key that is associated with the access key

  • Provide the access key and secret key manually

To use the credentials from the environment settings, under AWS Credentials, click Environment.

To provide the credentials manually:

  1. Under AWS Credentials, click User credentials.

  2. In the AWS Access Key field, enter the MinIO access key.

  3. In the AWS Secret Key field, enter the secret key that is associated with the access key.

  4. By default, Structural uses the same credentials to both retrieve the source files and write the output files. To provide different MinIO credentials for the output location:

    1. Toggle Set different credentials for output to the on position.

    2. In the AWS Access Key field, enter the MinIO access key.

    3. In the AWS Secret Key field, enter the secret key that is associated with the access key.
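
Because MinIO implements the S3 API, you can exercise the same credentials with any S3 client pointed at the MinIO endpoint. A minimal sketch, assuming boto3 and a placeholder endpoint URL:

import boto3

# Point an S3 client at the MinIO endpoint - the same URL that the
# TONIC_AWS_S3_OVERRIDE_URL environment setting is set to (placeholder shown).
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.example.com:9000",
    aws_access_key_id="<minio-access-key>",
    aws_secret_access_key="<minio-secret-key>",
)

# List the available buckets to confirm that the credentials work.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])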
