Viewing and adding foreign keys

Foreign keys define relationships between tables. The value of a foreign key column in a table is the primary key of a row from a different table.

For example, a transactions table includes a customer_id column. The value of customer_id is a primary key value from the id column in the customers table.

A table can also have composite foreign keys that consist of multiple columns.

Tonic Structural uses foreign keys when it generates subsets and when it applies generators to primary keys or foreign keys.

During data generation, when generators are assigned to primary key columns, Structural ensures that the foreign keys are synchronized with the primary keys.

When Structural creates a subset, it uses foreign keys to identify the related tables and rows to include in the subset.

Often, foreign key relationships are defined in the source database. When relationships are missing, or cannot be defined in the source database, Structural provides a virtual foreign key tool that lets you add the missing foreign keys and ensure that all relationships are maintained. Structural only uses these virtual foreign keys during the data generation process. It does not write the virtual foreign keys to the destination database.

From Foreign Keys view, you can:

  • View the current foreign keys (all license plans).

  • Add virtual foreign keys (Professional and Enterprise plans only).

To display Foreign Keys view:

  • On the workspace management view, in the workspace navigation bar, click Foreign Keys.

  • On Workspaces view, from the dropdown menu in the Name column, select Foreign Keys.

Viewing the current foreign keys

On Foreign Keys view, the View Foreign Key Relationships tab contains the list of foreign keys in the source database.

For each foreign key:

  • Foreign Key contains the names of the columns (tableName.columnName) that contain the foreign key values.

  • Primary Key contains the name of the column (tableName.columnName) that contains the primary key value that is used to populate the foreign key column.

Identifying virtual foreign keys

Virtual foreign keys that you added are displayed with a checkbox.

You can delete those keys. You cannot delete keys that are defined in the source database.

Filtering the foreign keys

You can filter the foreign keys by the name of the foreign key column or the primary key column.

In the filter field, begin to type text that is in the column name. As you type, Structural filters the list.

Sorting the foreign keys

You can sort the foreign keys by the name of the foreign key column or primary key column.

To sort the list:

  1. Click the Sort dropdown for the column that you want to use to sort the list.

  2. On the sort panel, click the sort order to use.

Adding virtual foreign keys

Required license: Professional or Enterprise

Required workspace permission: Configure virtual foreign keys

Structural allows you to add virtual foreign keys to your source database. You would use this feature to add a specific foreign key that is missing, or if your source database does not use foreign keys.

You can either:

  • Add the foreign keys one at a time from the Add Foreign Key Relationships tab.

  • Upload a JSON file that contains the foreign keys.

If your database uses polymorphic keys (typically if you have a Ruby on Rails application), then you must use the JSON file upload to configure those keys.

Adding virtual foreign keys from Add Foreign Key Relationships

You can configure virtual foreign keys from the Add Foreign Key Relationships tab.

You cannot configure polymorphic keys here. Polymorphic keys must be uploaded from a JSON file.

To add virtual foreign keys to your source database:

  1. Under Select Foreign Keys, check the checkboxes to identify the foreign key fields. These are the fields that contain a value that is a primary key from another table.
     The Select Foreign Keys list contains the columns that are not already configured as foreign key columns. The top level of the list displays the unique column names, without the table name. Next to each column name is the number of times that it appears in the source database. You can use the sort dropdown list to sort the list either by the column name or by the number of times that the column appears.
     When you expand a column name, Structural displays the list of columns that have that name, in tableName.columnName format. For example, a database has a customer_id column in both the sales and customers tables. In the Select Foreign Keys list, the top-level entry is customer_id. Under customer_id are entries for sales.customer_id and customers.customer_id.

  2. As you select and deselect columns, they are added to or removed from the Foreign Key Preview list. Under Create New Foreign Key, Structural also updates the number of keys to add. From Foreign Key Preview, to remove a selected column, click its delete icon. This performs the same function as unchecking the checkbox in the Select Foreign Keys list.

  3. From the Select Primary Key dropdown list, select the column that provides the values for the selected foreign key columns.

  4. To create the virtual foreign keys, click Create n foreign keys. n is the number of keys that are created, based on the number of foreign key columns that you selected.

Uploading a JSON file of virtual foreign keys

You can upload a JSON file that contains the virtual foreign keys. For example, you can create a JSON file that you can use to populate virtual foreign keys in multiple workspaces that have the same source data structure.

If you already configured virtual foreign keys, then the uploaded virtual foreign keys replace the existing ones.

The virtual foreign key JSON also allows you to add polymorphic keys. You cannot add polymorphic keys from the Add Foreign Key Relationships tab.

On Foreign Keys view, to upload a foreign key file:

  1. Click Upload Foreign Key JSON. If you already have virtual foreign keys configured, then the button is Update Foreign Key JSON.

  2. On the upload dialog, to search for and select the file, click Browse.

  3. After you select the file, click Upload.

The uploaded keys are added to the View Foreign Key Relationships list. Those keys replace any existing virtual foreign keys.

JSON format for the foreign key file

The foreign key JSON is an array of foreign key entries. Here is an example of a foreign key file that contains a single entry:

[
  {
    "fk_schema": "public",
    "fk_table": "paystubs",
    "fk_columns": ["employee_id"],
    "target_schema": "public",
    "target_table": "employees",
    "target_columns": ["id"]
  }
]

To illustrate the field values, we'll use the following example, which reflects the example entry above.

A paystubs table lists the pay stubs that were issued to employees. paystubs contains an employee_id field.

employee_id identifies the employee that received the pay stub. employee_id is a foreign key. It contains the value of the id field in employees, which is the primary key field for the employees table.

Both paystubs and employees are in the public schema.

In the foreign keys JSON, each entry contains the following fields.

  • fk_schema - The name of the schema for the table that contains the foreign key. For our example, fk_schema is public.

  • fk_table - The name of the table that contains the foreign key. For our example, fk_table is paystubs.

  • fk_columns - An array that contains the names of the foreign key columns. In our example, the fk_columns array contains a single value, employee_id.

  • target_schema - The name of the schema for the table that contains the referenced primary key. In our example, target_schema is public.

  • target_table - The name of the table that contains the referenced primary key. In our example, target_table is employees.

  • target_columns - An array that contains the names of the primary key columns. In our example, the target_columns array contains a single value, id.
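If you generate the file from another source, such as a list of known relationships, a short script can help keep the entries in the expected shape. The following is a minimal Python sketch, not part of Structural, that builds the example entry above and writes it to a file; the helper function and file name are illustrative.

import json

REQUIRED_FIELDS = (
    "fk_schema", "fk_table", "fk_columns",
    "target_schema", "target_table", "target_columns",
)

def make_entry(fk_schema, fk_table, fk_columns,
               target_schema, target_table, target_columns):
    # Build one entry in the format described above.
    entry = {
        "fk_schema": fk_schema,
        "fk_table": fk_table,
        "fk_columns": list(fk_columns),
        "target_schema": target_schema,
        "target_table": target_table,
        "target_columns": list(target_columns),
    }
    # Every field must have a value.
    missing = [field for field in REQUIRED_FIELDS if not entry[field]]
    if missing:
        raise ValueError(f"Missing values for: {missing}")
    return entry

# The paystubs -> employees example from this page.
foreign_keys = [
    make_entry("public", "paystubs", ["employee_id"],
               "public", "employees", ["id"]),
]

# foreign_keys.json is an illustrative file name.
with open("foreign_keys.json", "w") as f:
    json.dump(foreign_keys, f, indent=2)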

JSON format for composite keys

The ability to provide multiple columns in fk_columns and target_columns is used to support composite foreign keys.

fk_columns and target_columns must contain the same number of columns. The corresponding columns must be in the same order in both arrays.

For example, a sales table contains sales_person_id and sales_manager_id, which refer to the id and manager_id columns in the employees table.

In the JSON:

  • fk_table is sales, and fk_columns is [sales_person_id, sales_manager_id].

  • target_table is employees, and target_columns is [id, manager_id].

The entry for this example would look like:

[
  {
    "fk_schema": "public",
    "fk_table": "sales",
    "fk_columns": ["sales_person_id","sales_manager_id"],
    "target_schema": "public",
    "target_table": "employees",
    "target_columns": ["id","manager_id"]
  }
]
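Because the arrays are matched by position, a quick check before upload can catch entries where the column counts do not line up. The following is a minimal Python sketch under that assumption; the file name is illustrative, and entries that use polymorphic_target (described in the next section) are skipped because their targets are nested.

import json

# foreign_keys.json is an illustrative file name.
with open("foreign_keys.json") as f:
    entries = json.load(f)

for entry in entries:
    # Polymorphic entries nest their targets; see the next section.
    if "polymorphic_target" in entry:
        continue
    fk_cols = entry["fk_columns"]
    target_cols = entry["target_columns"]
    if len(fk_cols) != len(target_cols):
        raise ValueError(
            f"{entry['fk_table']}: {len(fk_cols)} foreign key columns "
            f"but {len(target_cols)} target columns"
        )
    # Columns are paired by position, so order matters.
    for fk_col, target_col in zip(fk_cols, target_cols):
        print(f"{entry['fk_schema']}.{entry['fk_table']}.{fk_col} -> "
              f"{entry['target_schema']}.{entry['target_table']}.{target_col}")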

JSON format for polymorphic keys

Some application types have polymorphic keys. Polymorphic keys allow a single column in one table to contain foreign key values that refer to primary keys from multiple other tables. These relationships cannot be enforced as foreign key constraints in a traditional relational database, but they are common in application frameworks such as Ruby on Rails.

For example, a person can have multiple addresses, and a company can have multiple addresses. To support this without complicated joins between tables, the addresses table includes the following columns:

  • A column that contains the identifier of the company or the person that the address belongs to.

  • Another column that identifies whether the identifier is a company or a person.

For example, the people table contains:

id   first_name   last_name
1    John         Doe
2    Mary         Smith

The companies table contains:

id   company_name
1    My Company
2    Example Company

The addresses table contains:

id   address           address_owner_id   address_owner_type
1    123 Main Street   1                  Person
2    234 Elm Street    1                  Company

In the addresses table, to identify the address owner:

  • The address_owner_id column contains an id value from either the people or companies table.

  • The address_owner_type column identifies whether the identifier is a person or a company.

The value of address_owner_id is 1 for both records. However, address 1 belongs to John Doe, and address 2 belongs to My Company.

Each entry in the polymorphic keys JSON identifies the columns that contain the foreign key values and the column that contains the foreign key type. It also lists the possible foreign key types and, for each type, identifies the source of the identifier.

The following is the JSON for the example above:

[
  {
    "fk_table": "addresses",
    "fk_schema": "public",
    "fk_columns": ["address_owner_id"],
    "nullable": false,
    "polymorphic_target": { 
      "fk_type_column": "address_owner_type",
      "types": {
        "Person": {
          "target_schema": "public",
          "target_table": "people",
          "target_columns": ["id"]
        },
        "Company": {
          "target_schema": "public",
          "target_table": "companies",
          "target_columns": ["id"]
        }
      }
    }
  }
]

Each entry contains the following fields:

  • fk_table - The name of the table that contains the foreign key values. In our example, this is the addresses table.

  • fk_schema - The name of the schema for the table that contains the foreign key. In our example, the schema is public.

  • fk_columns - An array that contains the names of the columns that contain the foreign key values. In our example, the value is address_owner_id.

  • nullable - Whether the foreign key column is nullable.

  • polymorphic_target - Identifies the target types and the identifier source for each type.

polymorphic_target contains the following fields:

  • fk_type_column - In the table that contains the foreign key, the name of the column that contains the foreign key type. In our example, this is the address_owner_type column in addresses.

  • types - A list of the target types.

Each entry in types identifies the name of the type. In our example, our types are Person and Company. Note that these are the values of the type column in the polymorphic table, not necessarily the names of the tables they point to. For example, the Person type refers to the people table.

Each type has the following attributes:

  • target_schema - The schema that contains the target table. In our example, the tables for both types belong to the public schema.

  • target_table - The table that contains the primary key value. In our example, for the Person type, the target table is people. For the Company type, the target table is companies.

  • target_columns - An array that contains the names of the primary key columns in the target table. In our example, the identifier column in both tables is id.
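To show how these fields work together, the following is a minimal Python sketch, not part of Structural, that resolves one row of the example addresses table to its target table and column. The file name and row values are illustrative, taken from the example above.

import json

# foreign_keys.json is an illustrative file name; it contains the
# polymorphic entry for the addresses table shown above.
with open("foreign_keys.json") as f:
    entries = json.load(f)

entry = next(e for e in entries if e["fk_table"] == "addresses")
poly = entry["polymorphic_target"]

# Address 2 from the example data: owned by a company.
row = {
    "id": 2,
    "address": "234 Elm Street",
    "address_owner_id": 1,
    "address_owner_type": "Company",
}

# The value in fk_type_column selects the target for this row.
type_value = row[poly["fk_type_column"]]
target = poly["types"][type_value]

owner_id = row[entry["fk_columns"][0]]
print(f"address {row['id']} refers to "
      f"{target['target_schema']}.{target['target_table']}."
      f"{target['target_columns'][0]} = {owner_id}")
# Prints: address 2 refers to public.companies.id = 1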

Downloading virtual foreign keys

If you created virtual foreign keys, then you can download those keys to a JSON file. For example, you might want to upload the same set of virtual foreign keys to another workspace that uses the same source data.

To download the virtual foreign keys, click Download Foreign Key JSON.

Deleting a virtual foreign key

You can delete virtual foreign keys. You cannot delete foreign keys that are defined in the source database.

To delete an individual virtual foreign key, click its delete icon.

To delete multiple virtual foreign keys:

  1. Check the checkbox next to each virtual foreign key to delete.

  2. Click Bulk Delete.

You can also create virtual foreign keys from a table details panel in Subsetting view.

You cannot create virtual foreign keys from a child workspace. You can only create virtual foreign keys from a parent workspace.
