Configuring Snowflake on AWS workspace data connections

From May through November 2025, Snowflake is phasing out support for using a simple username and password for authentication. For more information, go to their blog post.

We strongly recommend that you use key pair authentication for all database connections.

In the workspace configuration, under Connection Type, select Snowflake.

In the Source Settings section, under Snowflake Type, click AWS.

Connecting to the source database

In the Source Settings section, provide the details for the connection to the source database.

Providing the connection details

To connect to the source database, you can either:

  • Populate the connection fields.

  • Use a connection string.

You can also use key pair authentication instead of a password.

Populating the connection fields

By default, Use connection string is off, and you provide the connection values in the individual fields:

  1. In the Server field, provide the server where the database is located. You must provide the full path to the server. The https:// prefix is optional, so the format of the server value can be either:

    • <account>.<region>.snowflakecomputing.com

    • https://<account>.<region>.snowflakecomputing.com

    For example: abc123456.us-east-1.snowflakecomputing.com or https://abc123456.us-east-1.snowflakecomputing.com

  2. In the Database field, provide the name of the database.

  3. In the Username field, provide the username for the account to use to connect to the database.

  4. For the user password, you can either specify the password manually, or you can select a secret name from a secrets manager. The selected secret must store a password. The secrets manager option only displays if at least one secrets manager is configured. For information about configuring the available secrets managers, go to Configuring secrets managers for database connections. To enter the password manually:

    1. Click Provide Password.

    2. In the password field, enter the password.

    To use a secret name from a secrets manager:

    1. Click Use Secrets Manager.

    2. From the secrets manager dropdown list, select the secrets manager. Structural connects to the secrets manager and retrieves a list of available secret names.

    3. From the secret name dropdown list, select the secret name.

Using a connection string

To use a connection string to connect to the source database:

  1. Toggle Use connection string to the on position.

  2. In the Connection String field, provide the connection string.

  3. For the password, you can either specify the password manually, or you can select a secret name from a secrets manager. The selected secret must store a password. The secrets manager option only displays if at least one secrets manager is configured. For information about configuring the available secrets managers, go to Configuring secrets managers for database connections. To enter the password manually:

    1. Click Provide Password.

    2. In the password field, enter the password.

    To use a secret name from a secrets manager:

    1. Click Use Secrets Manager.

    2. From the secrets manager dropdown list, select the secrets manager. Structural connects to the secrets manager and retrieves a list of available secret names.

    3. From the secret name dropdown list, select the secret name.

The connection string uses the following format:

account=<account>;host=<account>.<region>.snowflakecomputing.com;user=<username>;db=<database>
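
For example, with hypothetical values:

account=abc123456;host=abc123456.us-east-1.snowflakecomputing.com;user=TONIC_USER;db=SOURCE_DB

Before you enter the details in Structural, you can optionally confirm that the account, user, and database are valid from a machine that can reach Snowflake. A minimal sketch using the snowflake-connector-python package (all values are hypothetical):

import snowflake.connector

# Hypothetical values for illustration; match them to the values that
# you plan to enter in the workspace configuration.
conn = snowflake.connector.connect(
    account="abc123456.us-east-1",
    user="TONIC_USER",
    password="...",
    database="SOURCE_DB",
)

# A trivial query that succeeds only if authentication and the database
# context are valid.
print(conn.cursor().execute("SELECT CURRENT_DATABASE()").fetchone())
conn.close()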

Using key pair authentication for the connection

Instead of providing a password, you can use key pair authentication.

To do this:

  1. Toggle Use Key Pair Authentication to the on position.

  2. Expand the Key Pair Authentication Settings.

  3. For RSA Private Key, click Browse, then select the key file.

  4. If the key is encrypted, then in the Encrypted Key Passphrase field, provide the passphrase to use to decrypt the key.
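
Snowflake key pair authentication expects an RSA private key in PKCS#8 PEM format, and the matching public key must be registered on the Snowflake user (for example, with ALTER USER ... SET RSA_PUBLIC_KEY = '<key body>'). As an illustration only, a minimal sketch that generates an encrypted key file with the Python cryptography package (the file name and passphrase are hypothetical):

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key and write it out as an encrypted PKCS#8
# PEM file, the format that Snowflake key pair authentication expects.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
with open("rsa_key.p8", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"my-passphrase"),
    ))

# The public half, to register on the Snowflake user.
print(key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
).decode())

You would then select rsa_key.p8 as the RSA Private Key in Structural and enter the passphrase in the Encrypted Key Passphrase field.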

Indicating whether to trust the server certificate

To trust the server certificate and ignore the certificate authority's revocation list, toggle Trust Server Certificate to the on position.

This option can be useful when your Tonic Structural instance cannot connect to the certificate authority.

Enabling a proxy connection

You can use a proxy server to connect to the source database.

When you use a connection string to connect to the source database, Structural automatically adds the configured proxy connection parameters to the connection string.

If you manually include proxy connection parameters in the connection string, and also configure the proxy connection settings, the connection string will have duplicate proxy connection parameters.

We recommend that you use the configuration fields to enable the proxy connection, and do not include proxy connection parameters in the connection string.

To use a proxy server to connect to the source database:

  1. Toggle Enable proxy connection to the on position.

  2. In the Proxy Host field, provide the host name for the proxy connection.

  3. In the Proxy Port field, provide the port for the proxy connection.

  4. Optionally, in the Proxy User field, provide the name of the user for the proxy connection.

  5. If you provide a proxy user, then in the Proxy Password field, provide the password for the specified user.

  6. Optionally, in the Non-Proxy Hosts field, provide the list of hosts to connect to directly, bypassing the proxy server.

Use a pipe symbol (|) to separate the host names. For example, host1|host2|host3.

You can also use an asterisk (*) as a wildcard. For example, to connect directly to all hosts with host names that start with myhost, use myhost*.

Limiting the included schemas

By default, the workspace includes all of the schemas in the source database. To specify a list of schemas to either include or exclude:

  1. Toggle Limit Schemas to the on position.

  2. From the filter option dropdown list, select whether to include or exclude the listed schemas.

  3. In the text field, provide the list of schemas to either include or exclude. Use commas or semicolons to separate the schema names.

Do not exclude schemas that are referred to by included schemas, unless you create those schemas manually outside of Structural.

Testing the source connection

To test the connection to the source database, click Test Source Connection.

Blocking data generation on all schema changes

By default, data generation is not blocked for schema changes that do not conflict with your workspace configuration.

To block data generation when there are any schema changes, regardless of whether they conflict with your workspace configuration, toggle Block data generation on schema changes to the on position.

Enabling Lambda processing

The default data generation process for Snowflake on AWS cannot scale to extremely large volumes of data. For volumes of hundreds of gigabytes or larger, you must use Lambda-based processing.

To enable Lambda processing, toggle Enable Lambda Generation to the on position.

Connecting to the destination database

In the Destination Settings section, you specify the connection information for the destination database.

If the destination database is in the same location as the source database, then you can copy the connection and authentication details from the source database. The copied details include the proxy connection configuration.

If the destination database is in a different location, then you can either:

  • Populate the connection fields.

  • Use a connection string.

You can also use key pair authentication instead of a password.

Copying the source database connection details

To copy the connection details from the source database:

  1. Click Copy Settings from Source.

  2. For the user password, you can either specify the password manually, or you can select a secret name from a secrets manager. The selected secret must store a password. The secrets manager option only displays if at least one secrets manager is configured. For information about configuring the available secrets managers, go to Configuring secrets managers for database connections. To enter the password manually:

    1. Click Provide Password.

    2. In the password field, enter the password.

    To use a secret name from a secrets manager:

    1. Click Use Secrets Manager.

    2. From the secrets manager dropdown list, select the secrets manager. Structural connects to the secrets manager and retrieves a list of available secret names.

    3. From the secret name dropdown list, select the secret name.

  3. To test the connection to the destination database, click Test Destination Connection.

Providing destination database connection details

If you do not copy the details from the source database, then you can either populate the connection fields or use a connection string.

Populating the connection fields

By default, Use connection string is off, and you provide the connection values in the individual fields:

  1. In the Server field, provide the server where the database is located. You must provide the full path to the server. The https:// prefix is optional, so the format of the server value can be either:

    • <account>.<region>.snowflakecomputing.com

    • https://<account>.<region>.snowflakecomputing.com

    For example: abc123456.us-east-1.snowflakecomputing.com or https://abc123456.us-east-1.snowflakecomputing.com

  2. In the Database field, provide the name of the database.

  3. In the Username field, provide the username for the account to use to connect to the database.

  4. For the user password, you can either specify the password manually, or you can select a secret name from a secrets manager. The selected secret must store a password. The secrets manager option only displays if at least one secrets manager is configured. For information about configuring the available secrets managers, go to Configuring secrets managers for database connections. To enter the password manually:

    1. Click Provide Password.

    2. In the password field, enter the password.

    To use a secret name from a secrets manager:

    1. Click Use Secrets Manager.

    2. From the secrets manager dropdown list, select the secrets manager. Structural connects to the secrets manager and retrieves a list of available secret names.

    3. From the secret name dropdown list, select the secret name.

Using a connection string

To use a connection string to connect to the destination database:

  1. Toggle Use connection string to the on position.

  2. In the Connection String field, provide the connection string.

  3. For the password, you can either specify the password manually, or you can select a secret name from a secrets manager. The selected secret must store a password. The secrets manager option only displays if at least one secrets manager is configured. For information about configuring the available secrets managers, go to Configuring secrets managers for database connections. To enter the password manually:

    1. Click Provide Password.

    2. In the password field, enter the password.

    To use a secret name from a secrets manager:

    1. Click Use Secrets Manager.

    2. From the secrets manager dropdown list, select the secrets manager. Structural connects to the secrets manager and retrieves a list of available secret names.

    3. From the secret name dropdown list, select the secret name.

The connection string uses the following format:

account=<account>;host=<account>.<region>.snowflakecomputing.com;user=<username>;db=<database>

Using key pair authentication for the connection

Instead of providing a password, you can use key pair authentication.

To do this:

  1. Toggle Use Key Pair Authentication to the on position.

  2. Expand the Key Pair Authentication Settings.

  3. For RSA Private Key, click Browse, then select the key file.

  4. If the key is encrypted, then in the Encrypted Key Passphrase field, provide the passphrase to use to decrypt the key.

Testing the destination database connection

To test the connection to the destination database, click Test Destination Connection.

Indicating whether to trust the server certificate

To trust the server certificate and ignore the certificate authority's revocation list, toggle Trust Server Certificate to the on position.

This option can be useful when your Structural instance cannot connect to the certificate authority.

Enabling a proxy connection

You can use a proxy server to connect to the destination database.

When you use a connection string to connect to the destination database, Structural adds the proxy connection parameters to the connection string.

If you manually include proxy connection parameters in the connection string, and also configure the proxy connection settings, the connection string will have duplicate proxy connection parameters.

We recommend that you use the configuration fields to enable the proxy connection, and do not include proxy connection parameters in the connection string.

To enable and configure the proxy connection:

  1. Toggle Enable proxy connection to the on position.

  2. In the Proxy Host field, provide the host name for the proxy connection.

  3. In the Proxy Port field, provide the port for the proxy connection.

  4. Optionally, in the Proxy User field, provide the name of the user for the proxy connection.

  5. If you provide a proxy user, then in the Proxy Password field, provide the password for the specified user.

  6. Optionally, in the Non-Proxy Hosts field, provide the list of hosts to connect to directly, bypassing the proxy server.

Use a pipe symbol (|) to separate the host names. For example, host1|host2|host3.

You can also use an asterisk (*) as a wildcard. For example, to connect directly to all hosts whose host names start with myhost, use myhost*.

Setting the storage location for temporary files

During data generation, Structural uses temporary CSV files to load and unload Snowflake tables.

For Lambda processing, you specify a single S3 bucket path.

If you do not use Lambda processing, then you can either:

  • Use external stages instead of S3 buckets.

  • Provide separate paths for the source and destination files.

Setting the type of storage to use

By default, the temporary files are stored in S3 buckets.

To instead use external stages, toggle Use External Stage to the on position.

The Use External Stage toggle does not display if Enable Lambda Generation is on.

Enabling separate paths for source and destination files

By default, you provide a single S3 bucket path or external stage. Within that path:

  • Structural copies the files that contain the source data into an input folder.

  • After it applies the generators, Structural copies the files that contain the destination data into an output folder.
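
For illustration, if the single S3 bucket path were s3://my-tonic-bucket/tonic (a hypothetical value), then the source files would be copied to s3://my-tonic-bucket/tonic/input and the de-identified destination files to s3://my-tonic-bucket/tonic/output.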

To instead provide separate paths for the source and destination files, toggle Use Separate Destination Location to the on position.

The Use Separate Destination Location toggle does not display if Enable Lambda Generation is on.

Setting S3 bucket locations

If Use Separate Destination Location is off, then in the S3 Bucket Path field, enter the path to the S3 bucket.

If Use Separate Destination Location is on, then:

  1. In the Source S3 Bucket field, enter the path to the S3 bucket to use for the source files.

  2. In the Destination S3 Bucket field, enter the path to the S3 bucket to use for the destination files.

Setting external stage locations

If Use External Stage is on, then you provide external stage locations instead of S3 buckets. For each stage, the format is:

<database>.<schema>.<stage>

Where:

  • <database> is the name of the database where the stage is located.

  • <schema> is the name of the schema that contains the stage.

  • <stage> is the name of the stage.
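
For example, a stage named TONIC_STAGE in the PUBLIC schema of the SOURCE_DB database is entered as SOURCE_DB.PUBLIC.TONIC_STAGE. If the stage does not exist yet, a Snowflake administrator can create it ahead of time; a minimal sketch using the Python connector (all names and credentials are hypothetical):

import snowflake.connector

# Hypothetical connection details; the role must be allowed to create
# stages in the target schema.
conn = snowflake.connector.connect(
    account="abc123456.us-east-1",
    user="ADMIN_USER",
    password="...",
    database="SOURCE_DB",
)

# Create an external stage that points at the S3 location that holds
# the temporary files.
conn.cursor().execute(
    "CREATE STAGE IF NOT EXISTS SOURCE_DB.PUBLIC.TONIC_STAGE "
    "URL = 's3://my-tonic-bucket/stage/' "
    "CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')"
)
conn.close()

The value to enter in Structural is then SOURCE_DB.PUBLIC.TONIC_STAGE.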

If Use Separate Destination Location is off, then in the Source Snowflake External Stage Name field, enter the external stage.

If Use Separate Destination Location is on, then:

  1. In the Source Snowflake External Stage Name field, enter the external stage to use for the source files.

  2. In the Destination Snowflake External Stage Name field, enter the external stage to use for the destination files.

Providing AWS credentials for the storage locations

For each S3 bucket or external stage, you can optionally provide specific AWS credentials.

If you do not provide credentials in the workspace configuration, then Structural uses one of the following:

  • The credentials set in the following environment settings:

    • TONIC_AWS_ACCESS_KEY_ID - An AWS access key that is associated with an IAM user or role.

    • TONIC_AWS_SECRET_ACCESS_KEY - The secret key that is associated with the access key.

    • TONIC_AWS_REGION - The AWS Region to send the authentication request to.

  • The credentials for the IAM role on the host machine.

  • The credentials in a credentials file.

To provide the credentials:

  1. For the S3 bucket or external stage, click AWS Credentials to display the credentials fields.

  2. In the AWS Access Key field, enter the AWS access key that is associated with an IAM user or role.

  3. In the AWS Secret Key field, enter the secret key that is associated with the access key.

  4. From the AWS Region dropdown list, select the AWS Region to send the authentication request to.
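
To sanity-check the credentials before you save the workspace configuration, you can confirm that they can reach the bucket that holds the temporary files. A minimal sketch using boto3 (the bucket name, key, and Region are hypothetical):

import boto3

# Hypothetical values; use the same access key, secret key, and Region
# that you enter in the workspace configuration.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="us-east-1",
)

# Raises a ClientError if the credentials cannot access the bucket.
s3.head_bucket(Bucket="my-tonic-bucket")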

If you enable Lambda processing, make sure that you grant the required permissions to the IAM role, and complete the other required configuration.