
Configuring Databricks workspace data connections

During workspace creation, under Connection Type, select Databricks.

Identifying the source database

In the Source Server section:
  1. In the Catalog Name field, provide the name of the catalog where the source database is located. If you do not provide a catalog name, then the default catalog is used. For Unity Catalog, this is the catalog that you configured as the default. For earlier versions that do not support Unity Catalog, the default is hive_metastore.
  2. In the Database Name field, provide the name of the source database.

Enabling validation of table filters

For Databricks workspaces, you can provide where clauses to filter tables. For details, go to Applying a filter to tables.
The Enable partition filter validation toggle indicates whether Tonic should validate those filters when you create them.
By default, the setting is in the on position, and Tonic validates the filters. To disable the validation, toggle Enable partition filter validation to the off position.
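A table filter is the body of a SQL WHERE clause. As a rough sketch of what validation amounts to, the snippet below builds the kind of inexpensive query that could check a filter against a table. The table and column names are hypothetical, and the exact query that Tonic runs is not documented here.

```python
# Hypothetical partition filter, entered as the body of a WHERE clause.
# The table and column names are illustrative only.
table = "sales.events"
partition_filter = "event_date >= '2024-01-01'"

# Validation can be approximated by a cheap query that exercises the filter:
validation_query = f"SELECT 1 FROM {table} WHERE {partition_filter} LIMIT 1"
print(validation_query)
```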

Blocking data generation on all schema changes

By default, data generation is not blocked as long as schema changes do not conflict with your workspace configuration.
To block data generation when there are any schema changes, regardless of whether they conflict with your workspace configuration, switch Block data generation on schema changes to the on position.

Connecting to the Databricks cluster

In the Databricks Cluster section, you provide the connection information for the cluster.
  1. Under Databricks Type, select whether to use Databricks on AWS or Azure Databricks.
  2. In the API Token field, provide the API token for Databricks. For information on how to generate an API token, go to the Databricks documentation.
  3. In the Host URL field, provide the URL for the cluster host.
  4. In the HTTP Path field, provide the path to the cluster.
  5. In the Port field, provide the port to use to access the cluster.
  6. By default, data generation jobs run on the specified cluster. To instead run data generation jobs on an ephemeral Databricks job cluster:
    1. Toggle Use Databricks Job Cluster to the on position.
    2. In the Cluster Information text area, provide the details for the job cluster.
  7. For clusters that use Databricks runtime 10.4 and below, Tonic installs a cluster initialization script, which is stored as a Databricks workspace file. By default, this script is uploaded to the /Shared workspace directory. To upload the script to a different directory, set Workspace Path to an absolute path in the workspace tree. Tonic must have access to the directory.
  8. To test the connection to the cluster, click Test Cluster Connection.
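The Cluster Information text area expects a JSON cluster specification. The sketch below shows a minimal specification with field names taken from the Databricks clusters API; the specific runtime version, node type, and worker count are assumptions for this example, so consult the Databricks documentation for the fields that your deployment requires.

```python
import json

# Minimal, illustrative job cluster specification. The field names follow
# the Databricks clusters API; the values are assumptions for this example.
cluster_info = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "i3.xlarge",
    "num_workers": 2,
}

# The text area expects valid JSON, so serialize it before pasting.
cluster_json = json.dumps(cluster_info, indent=2)
print(cluster_json)
```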

Connecting to the destination server

In the Destination Settings section, you specify where Tonic writes the destination database.

Selecting the output type

Under Output Storage Type, select the type of storage to use for the destination data:
  • To use Databricks Delta tables, click Databricks.
  • To use Amazon S3, click Amazon S3 Files.
  • To use Azure, click Azure Data Lake Storage Gen2 Files.

Configuring the output settings for Databricks Delta tables

If you selected Databricks as the output type:
  1. In the Catalog Name field, provide the name of the catalog that contains the database. If the Databricks cluster connection supports multiple catalogs (Unity Catalog) and you do not specify a catalog, then Tonic uses the default catalog. For connections that use the legacy metastore, you can leave the field blank, or set it to hive_metastore.
  2. In the Database Name field, provide the name of the database. If you do not specify a database, Tonic uses the database name default in the active catalog.
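Putting the two fields together, the destination table resolves to a three-part identifier. The helper below only illustrates the fallback rules described above for a legacy metastore (a blank catalog behaves like hive_metastore, and a blank database falls back to default); it is not Tonic code, and on Unity Catalog the default catalog is whatever your metastore is configured to use.

```python
# Illustrates how the destination identifier resolves under the defaults
# described above, for a legacy metastore. Not Tonic's implementation.
def destination_table(table: str, catalog: str = "", database: str = "") -> str:
    catalog = catalog or "hive_metastore"  # blank catalog on a legacy metastore
    database = database or "default"       # blank database falls back to "default"
    return f"{catalog}.{database}.{table}"

print(destination_table("customers"))
print(destination_table("customers", catalog="main", database="masked"))
```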

Configuring the output settings for Amazon S3 or Azure

If you selected either Amazon S3 Files or Azure Data Lake Storage Gen2 Files as the output type:
  1. In the Output Location field, provide the location in either Amazon S3 or Azure for the destination data.
  2. By default, Tonic writes the results of each data generation to a different folder. To create the folder, it appends a GUID to the end of the output location. To instead always write the results to the specified output location, and overwrite the results of the previous job, toggle Create job specific destination folder to the off position.
  3. By default, each output table is written in the format used by the corresponding input table. To instead write all output tables to a single format:
    1. Toggle Write all output to a specific type to the on position.
    2. From the Select output type dropdown list, select the output format to use. The options are:
      • Avro
      • JSON
      • Parquet
      • Delta
      • CSV
      • ORC
    3. If you select CSV, you also configure the file format:
      1. To treat the first row as a column header, check Treat first row as a column header. The box is checked by default.
      2. In the Column Delimiter field, type the character to use to separate the columns. The default is a comma (,).
      3. In the Escape Character field, type the character to use to escape special characters. The default is a backslash (\).
      4. In the Quoting Character field, type the character to use to quote text values. The default is a double quote (").
      5. In the NULL Value Replacement String field, type the string to use to represent null values. The default is an empty string.
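The CSV defaults above (comma delimiter, backslash escape, double-quote quoting, empty string for NULL) can be reproduced with Python's csv module to preview what the output rows look like. This is a local illustration of the format, not how Tonic writes the files.

```python
import csv
import io

# Rows to write; the empty string in the last row stands in for a NULL value.
rows = [
    ["id", "note"],
    ["1", 'He said "hi", then left'],
    ["2", ""],
]

buf = io.StringIO()
writer = csv.writer(
    buf,
    delimiter=",",       # Column Delimiter default
    quotechar='"',       # Quoting Character default
    escapechar="\\",     # Escape Character default
    doublequote=False,   # escape embedded quotes instead of doubling them
    quoting=csv.QUOTE_MINIMAL,
)
writer.writerows(rows)
print(buf.getvalue())
```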