Generator hints and tips


These hints and tips can help you to choose generators and address some specific use cases.

Recommended generators for specific types of data

Names

Tonic Structural provides several options to de-identify names of individuals. The method that you select depends on the specific use case, including the required realism of the output and the privacy requirements.

Here are a few possible generator options and how and why you might use them.

  • Name generator. Randomly returns a name from a dictionary of primarily Westernized names, unrelated to the original value. Can provide complete privacy, unless you use consistency. The output is realistic because the returned values are real names.

  • Categorical generator. Shuffles all of the values in the field, but preserves the overall frequency of the values. It ensures that the output contains realistic-looking names, and that the output uses the names from the original data set. This can be beneficial if the original data contains, for example, names that are common to a particular region and that should be maintained. When you use this generator with the Differential Privacy option, it ensures that the output is secure from re-identification. However, if the source data set is small or each name is highly unique, Structural might not allow you to use this option.

  • Custom Categorical generator. Allows you to provide your own dictionary of values. These values are included in the output at the same frequency that the original values occur in the source data.

  • Character Scramble generator. Randomly replaces characters with other characters. The output does not provide realistic names, but it provides a high level of privacy that prevents recovery of the original data. It does preserve whitespace, punctuation (such as hyphenated names), and capitalization. Because it is a character-level replacement, it preserves the length of the input string.

  • Character Substitution generator. Similar to Character Scramble, but uses a single character mapping throughout the generated data. This reduces the privacy level, but ensures consistency and uniqueness. This generator also supports additional Unicode blocks, so that the output characters more closely match the input. This can be helpful if the input includes names with characters outside of the basic Latin (a-z, A-Z) range.
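
To make the scramble-versus-substitution distinction concrete, here is a minimal Python sketch. This is not Structural code; the character pools, seeding, and mapping logic are illustrative assumptions.

import random
import string

random.seed(42)  # fix the run's randomness so the substitution mapping is stable

# Character Scramble style: each character is replaced independently, so the
# same input can scramble differently on every call.
def scramble(value: str) -> str:
    return "".join(
        random.choice(string.ascii_lowercase) if c.islower()
        else random.choice(string.ascii_uppercase) if c.isupper()
        else c  # preserve whitespace, punctuation, and digits
        for c in value
    )

# Character Substitution style: one fixed one-to-one mapping is used for all
# values, so the same input always produces the same output.
_shuffled = "".join(random.sample(string.ascii_lowercase, 26))
_mapping = str.maketrans(
    string.ascii_lowercase + string.ascii_uppercase,
    _shuffled + _shuffled.upper(),
)

def substitute(value: str) -> str:
    return value.translate(_mapping)

print(scramble("Anne-Marie"), scramble("Anne-Marie"))      # usually two different outputs
print(substitute("Anne-Marie"), substitute("Anne-Marie"))  # always identical outputs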

Dates, events, timestamps

Rows of data often have multiple date or timestamp fields that have a logical dependency, such as START_DATE and END_DATE.

In this case, a randomly generated date is not viable, because it could produce a nonsensical output where events occur chronologically out of order.

The following generator options handle these scenarios:

  • Timestamp Shift generator (with consistency). To solve the problem described above, ensure that two or more timestamps are shifted by the same amount rather than independently of each other. The key is to use the consistency option. For example, a row of data represents an individual who is identified by a primary key of PERSON_ID. The row also contains START_DATE and END_DATE columns. You can apply a timestamp shift to the START_DATE and END_DATE columns within a desired range, and make both columns consistent to PERSON_ID. Whenever the generator encounters the same PERSON_ID value, it shifts the dates by the same amount (see the sketch after this list).

  • Event Timestamps generator. You can apply the Event Timestamps generator to multiple date columns on the same table. You can link them to follow the underlying distribution of dates. For more information, go to the blog post Simulating event pipelines for fun and profit (and for testing too).

  • Date Truncation generator. This generator can sometimes address the described problem. You can configure this generator to truncate the input to the year, month, day, hour, minute, or second. It guarantees that a secondary event does not occur before a primary event. However, truncation might cause both events to become the same date value or timestamp. Whether you can use this generator for this purpose depends on the typical time separation between the two events relative to the truncation option, and whether truncation provides an adequate level of privacy for the particular use case.
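
As a rough Python sketch of the consistency behavior (illustrative only; this is not how Structural derives its shifts), deriving the offset deterministically from PERSON_ID guarantees that both date columns move together:

import hashlib
from datetime import date, timedelta

def consistent_shift(person_id: str, value: date, max_days: int = 90) -> date:
    # Derive a deterministic offset in [-max_days, +max_days] from PERSON_ID.
    # Every date shifted for the same person moves by the same amount, so
    # START_DATE stays before END_DATE and the duration is preserved.
    digest = hashlib.sha256(person_id.encode()).digest()
    offset = int.from_bytes(digest[:4], "big") % (2 * max_days + 1) - max_days
    return value + timedelta(days=offset)

row = {"PERSON_ID": "p-123",
       "START_DATE": date(2023, 1, 10),
       "END_DATE": date(2023, 2, 1)}
start = consistent_shift(row["PERSON_ID"], row["START_DATE"])
end = consistent_shift(row["PERSON_ID"], row["END_DATE"])
assert end - start == row["END_DATE"] - row["START_DATE"]  # gap preserved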

Free text

Free text refers to text fields in the source database that might come from an "uncontrolled" source such as user text entry. In these cases, any record might or might not contain sensitive information.

Some possible examples include:

  • Notes from a doctor or healthcare provider that contain Protected Health Information (PHI).

  • Other personally identifiable information, such as a Social Security number or telephone number, that a user enters into an open-ended text entry form.

Structural provides several suitable options. The method that you select depends on the specific use case, including the required realism of the output and any privacy requirements.

Here are a few generator options for free text fields, with information on how and why you might use them.

  • Character Scramble generator. Randomly replaces characters with other characters. The output does not contain meaningful text, but it provides a high level of privacy that prevents recovery of the original data. The Character Scramble generator does preserve whitespace, punctuation, and capitalization. Because it is a character-level replacement, it preserves the length of the input string.

  • Regex Mask generator. Uses regular expressions to parse strings, and then replaces specified substrings with the output of selected generators. The parts of the string to replace are specified in unnamed top-level capture groups. The Regex Mask generator can preserve more of the realism of the underlying text, but introduces privacy risks: any sensitive information that does not conform to a known and configured pattern is not captured and replaced. As an example of matching specific formats, a configuration that includes the following two patterns would replace both telephone numbers that use the ###-###-#### format and SSNs that use the ###-##-#### format, but leave the surrounding text unmodified:

    SSN: ([0-9]{3}-[0-9]{2}-[0-9]{4})
    Telephone Number: ([0-9]{3}-[0-9]{3}-[0-9]{4})

    You can configure multiple regular expression patterns to handle all known or expected sensitive information formats. You cannot use this method to replace values that a regular expression cannot reliably identify, such as names within free text. When you use this option, make sure to enable Replace all matches for each pattern. A sketch of this pattern-based approach follows this list.

  • Constant, Custom Categorical, and Null generators. Each of these options provides the highest level of privacy, because they completely remove or replace the original text. You might use each one for different reasons:

    • Null: If the field is nullable and the use case does not require any data in the field, you can use the Null generator to replace the values with NULL.

    • Constant: Allows you to provide a fixed value to replace all of the source values. For example, you could provide a "Lorem ipsum" string or other dummy value that is appropriate for your data set.

    • Custom Categorical: Similar to the Constant generator, it replaces the original value with a fixed value. To increase the cardinality of the output, you enter a list of possible values. The values are randomly used on the output records.
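
Here is a minimal Python sketch of the pattern-based approach, using the two patterns above. The replacement strings stand in for generator output; Structural's actual replacement logic is not shown.

import re

# Each configured pattern is paired with a stand-in for generator output.
patterns = [
    (re.compile(r"[0-9]{3}-[0-9]{2}-[0-9]{4}"), "000-00-0000"),   # SSN
    (re.compile(r"[0-9]{3}-[0-9]{3}-[0-9]{4}"), "555-555-0100"),  # phone
]

def mask_free_text(text: str) -> str:
    # "Replace all matches": every occurrence of every pattern is replaced;
    # the surrounding free text is left unmodified. Anything that matches no
    # pattern (for example, a name) passes through, which is the privacy risk
    # noted above.
    for pattern, replacement in patterns:
        text = pattern.sub(replacement, text)
    return text

print(mask_free_text("Call 415-555-0188 about SSN 123-45-6789."))
# Call 555-555-0100 about SSN 000-00-0000.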

Maintaining empty values

Most Structural generators preserve NULL values that are in the data. However, they do not automatically preserve empty values, such as empty strings.

To make sure that any empty values stay empty in the destination database:

  1. Assign the Conditional generator to the column.

  2. For the default generator, select the generator to apply to the non-empty values.

  3. Create a condition to look for empty values. You can either:

    • Use the regular expression comparison against the regular expression whitespace value (\s*).

    • Use the = operator and leave the value empty, or empty except for a single space.

    If you are not sure which characters the empty strings use, the regular expression option is more flexible. However, it is less efficient. Both checks are sketched after these steps.

  4. For the empty value condition, set the generator to Passthrough.
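
A small Python sketch of the two comparison options (illustrative only, not Structural's condition engine):

import re

WHITESPACE_ONLY = re.compile(r"\s*")

def is_empty_regex(value: str) -> bool:
    # Regex approach: matches "", " ", "\t\t", and so on. More flexible than
    # an exact comparison, but slower because every value runs through the regex.
    return WHITESPACE_ONLY.fullmatch(value) is not None

def is_empty_exact(value: str) -> bool:
    # Equality approach: only catches the exact strings that you list.
    return value in ("", " ")

for v in ("", " ", "\t", "Alice"):
    print(repr(v), is_empty_regex(v), is_empty_exact(v))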

Path expressions to replace all text values

You sometimes might want to apply the same generator to all of the text values in a JSON, HTML, or XML value. For example, you might want to apply the Character Scramble generator to all of the text. Instead of creating a separate path expression for each path, you can use one or two path expressions that capture all of the values.

For the Array JSON Mask or JSON Mask generator, the path expression $..* captures all of the text values. You can then select the generator to apply to the values.

For the HTML Mask and XML Mask generators, you create two path expressions:

  • //text() gets all of the text nodes.

  • //@* gets all of the attribute values.

You apply the generator to each expression. The sketch after this section shows what these two wildcard expressions select.

Sub-generators are applied sequentially, so you can apply the wildcard paths in addition to more specific paths and generators. For example, one path expression references a specific name or address and uses the Name or Address generator, while the wildcard path expressions use the Character Scramble generator to mask any unknown fields in the document that could contain sensitive information. As another example, you might assign the Passthrough generator to specific known fields that never contain sensitive information.
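
For illustration, this Python sketch uses lxml (an assumption for the example, not part of Structural) to show what the two wildcard expressions select:

from lxml import etree

doc = etree.fromstring(
    "<div title='Patient record'><p>Jane <b>Doe</b></p></div>"
)

print(doc.xpath("//text()"))  # ['Jane ', 'Doe']      -- every text node
print(doc.xpath("//@*"))      # ['Patient record']    -- every attribute value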

Path expressions for XML with namespaces

When your XML includes namespaces, to include the namespaces in a path expression, specify each element as:

*[name()='namespace:elementName']

For example, for the following XML:

<ns0:Message xmlns:ns0=".">
    <ns0:Payload>
        <ns1:Customer xmlns:ns1=".">
            <ns1:name>
                Josh
            </ns1:name>
        </ns1:Customer>
    </ns0:Payload>
</ns0:Message>

A working XPath to mask the name value is:

/*[name()='ns0:Message']/*[name()='ns0:Payload']/*[name()='ns1:Customer']/*[name()='ns1:name']
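
To sanity-check such an expression, you can evaluate it outside of Structural. This Python sketch uses lxml (an assumption for the example) against the XML above:

from lxml import etree

xml = b"""<ns0:Message xmlns:ns0=".">
    <ns0:Payload>
        <ns1:Customer xmlns:ns1=".">
            <ns1:name>Josh</ns1:name>
        </ns1:Customer>
    </ns0:Payload>
</ns0:Message>"""

tree = etree.fromstring(xml)
path = ("/*[name()='ns0:Message']/*[name()='ns0:Payload']"
        "/*[name()='ns1:Customer']/*[name()='ns1:name']")
print([el.text for el in tree.xpath(path)])  # ['Josh']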

Passing through default minimum and maximum date values

You might sometimes set default date values to the absolute minimum and maximum values that are allowed by the database. For example, for SQL Server, these values are January 1, 1753 and December 31, 9999.

When you assign the Timestamp Shift generator, the minimum value cannot be shifted backward and the maximum value cannot be shifted forward.

To skip those default values and shift the other values:

  1. Assign the Conditional generator to the column.

  2. For the default generator, select the Timestamp Shift generator.

  3. Create conditions to look for the minimum or maximum values.

  4. For those conditions, set the generator to Passthrough.

The sketch below illustrates this conditional passthrough.
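A minimal Python sketch of the resulting behavior, using the SQL Server limits above as the sentinel values (illustrative, not Structural code):

from datetime import datetime, timedelta

# SQL Server's representable range; these values are often used as
# "no date" sentinels.
SQLSERVER_MIN = datetime(1753, 1, 1)
SQLSERVER_MAX = datetime(9999, 12, 31)

def shift_unless_sentinel(value: datetime, days: int) -> datetime:
    # Conditional logic: pass the sentinels through unchanged, and apply
    # the timestamp shift to everything else.
    if value in (SQLSERVER_MIN, SQLSERVER_MAX):
        return value
    return value + timedelta(days=days)

print(shift_unless_sentinel(SQLSERVER_MIN, 30))         # 1753-01-01 (passthrough)
print(shift_unless_sentinel(datetime(2020, 6, 1), 30))  # 2020-07-01 (shifted)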

Use Regex Mask to add values

You might sometimes want to add values that are the output of one generator to the results of the transformation by another generator. For example, you use Character Scramble to mask a username. You might also want to prefix the value with a fixed constant value, or append a sequential integer.

To accomplish this:

  1. Apply the Regex Mask generator to the column.

  2. In addition to the capture groups that are specific to your data:

    • Use (^) as a capture group for a prefix.

    • Use ($) as a capture group for a suffix.

    • Use () as an empty group at any point in the regular expression pattern.

  3. Apply the relevant generators to each capture group.

So to implement the example above (prefix with a constant, scramble the value, append a sequential integer), you provide the expression (^)(.*)()($).

This produces four capture groups:

  • Group 0 is for the prefix. You assign the Constant generator and provide the value to use as the prefix.

  • Group 1 captures all of the original values. You assign the Character Scramble generator.

  • Group 2 is the empty group. You can assign the Constant generator to provide a value to insert at that point.

  • Group 3 is for the suffix. You assign the Sequential Integer generator.
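
A rough Python sketch of the composition that the four groups describe; the generator behaviors are stood in by simple illustrative functions:

import re
from itertools import count

# The expression from the text: a prefix anchor, the whole value, an empty
# group, and a suffix anchor. (Python numbers these groups 1-4; the text's
# group 0-3 numbering corresponds to groups 1-4 here.)
PATTERN = re.compile(r"(^)(.*)()($)")
_counter = count(1)

def scramble(value: str) -> str:
    # Stand-in for the Character Scramble generator's output.
    return "".join("x" if c.isalnum() else c for c in value)

def mask_username(value: str) -> str:
    m = PATTERN.match(value)
    return (
        "user-"                  # prefix group (^): Constant generator
        + scramble(m.group(2))   # value group (.*): Character Scramble
        + ""                     # empty group (): optional Constant insert
        + f"-{next(_counter)}"   # suffix group ($): Sequential Integer
    )

print(mask_username("jsmith42"))  # user-xxxxxxxx-1
print(mask_username("adoe"))      # user-xxxx-2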

Aligning email addresses to names

A table that contains user data might include both name and email address columns. If a user's email address is based on their name, then in the destination data, you might want to also tie the email addresses to the names.

For example, your email addresses might use the format firstName.lastName@mycompany.com. In the source data, the email address for John Smith is John.Smith@mycompany.com. In the destination data, assuming John Smith is replaced by Michael Jones, you want the email address to be Michael.Jones@mycompany.com.

At a high level, to line up name and email address columns:

  1. Assign the Name generator to the name fields. Make the Name generator consistent with an identifier column.

  2. Assign the Regex Mask generator to the email address field.

  3. Create a regular expression that extracts the name portion of the email address to capture groups. The specific expression varies based on the email address format.

  4. Assign the Name generator to each name capture group. Make the Name generator consistent with the same identifier column.

In this example, the source data contains userId, firstName, lastName, and emailAddress fields, and the email address is firstName.lastName@mycompany.com.

[Diagram that shows the name and email address column configuration]

To ensure that the destination data email addresses are aligned to the destination data names:

  1. For the firstName field, assign the Name generator, configured to produce a first name. Make the generator consistent with the userId column.

  2. For the lastName field, assign the Name generator, configured to produce a last name. Make the generator consistent with the userId column.

  3. For the emailAddress field, assign the Regex Mask generator. Use the following regular expression to extract the parts of the email address to capture groups: ([a-zA-Z]+)\.([a-zA-Z]+)@(.*)

     [Regex Mask configuration for an email address that includes name values]

  4. For the first name and last name capture groups:

    • Assign the Name generator, configured to produce the first and last names.

    • Make the generator consistent with the userId column.
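
A minimal Python sketch of the overall idea. The name pools, hashing, and helper names are illustrative assumptions, not Structural's implementation; the point is that keying both the name fields and the email rewrite to userId keeps them aligned.

import hashlib
import re

FIRST = ["Michael", "Sarah", "David", "Emma"]
LAST = ["Jones", "Chen", "Patel", "Garcia"]

def consistent_choice(key: str, pool: list) -> str:
    # Hashing the key stands in for "consistent with userId": the same key
    # always selects the same replacement value.
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return pool[seed % len(pool)]

def mask_row(row: dict) -> dict:
    first = consistent_choice(row["userId"] + ":first", FIRST)
    last = consistent_choice(row["userId"] + ":last", LAST)
    # Regex Mask stand-in: rebuild the email from the same generated names,
    # keeping the domain capture group unchanged.
    email = re.sub(r"([a-zA-Z]+)\.([a-zA-Z]+)@(.*)",
                   lambda m: f"{first}.{last}@{m.group(3)}",
                   row["emailAddress"])
    return {**row, "firstName": first, "lastName": last, "emailAddress": email}

row = {"userId": "u-1001", "firstName": "John", "lastName": "Smith",
       "emailAddress": "John.Smith@mycompany.com"}
print(mask_row(row))  # the name fields and the email's name portion change together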
