Configuring Spark with Livy workspace data connections

In the workspace configuration, select Spark as the connection type, then select Self-managed as the cluster type.

Connecting to the catalog database

Under Catalog Database, to connect to a Hive catalog using Livy:

  1. Under Catalog Type, click Hive.

  2. Under Launch Method, click Livy.

  3. In the Hive Catalog Database field, enter the name of the database.

  4. In the Server field, provide the server where the database is located.

  5. In the Port field, provide the port to use to connect to the database.

  6. In the Username field, provide the username for the account to use to connect to the database.

  7. In the Password field, provide the password for the specified user.

  8. To test the connection to the Hive catalog database, click Test Hive Connection.
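
If Test Hive Connection fails, it can help to first rule out basic network problems between Structural and the Hive server. The following minimal sketch is not part of Structural; it only checks TCP reachability of the server and port that you entered in the form, and the host name and port shown are placeholder values.

```python
import socket

# Placeholder values; substitute the Server and Port that you entered
# in the Hive catalog connection form.
HIVE_HOST = "hive.example.com"
HIVE_PORT = 10000

def check_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"Cannot reach {host}:{port}: {exc}")
        return False

if check_reachable(HIVE_HOST, HIVE_PORT):
    print("Hive server is reachable; retry Test Hive Connection in the UI.")
```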

Enabling validation of table filters

When you create table filters for workspace tables, the Enable partition filter validation setting indicates whether Tonic Structural should validate those filters.

By default, the setting is in the on position, and Structural validates the filters. To disable the validation, toggle Enable partition filter validation to the off position.

Blocking data generation on all schema changes

By default, data generation is not blocked as long as schema changes do not conflict with your workspace configuration.

To block data generation when there are any schema changes, regardless of whether they conflict with your workspace configuration, toggle Block data generation on schema changes to the on position.

Providing the connection details for Livy

Under Livy Connection Details, you configure the connection to Livy, which launches the data generation:

  1. In the Server field, provide the name of the Livy server.

  2. In the Port field, provide the port to use to connect to the Livy server.

  3. In the Proxy User field, provide the name of the proxy user to use to connect to the Livy server.

  4. By default, SSL is enabled, and Enable SSL/TLS is in the on position. We strongly recommend that you do not turn off SSL.

  5. To indicate that Structural should trust the server certificate, toggle Trust Server Certificate to the on position.

  6. To test the connection to Livy, click Test Livy Connection.
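
If the test fails, a quick way to confirm that the Livy endpoint itself is responding is to call the standard Livy REST API directly. The minimal sketch below, which is not part of Structural, queries GET /sessions to list active sessions. The server URL is a placeholder, and the sketch assumes that SSL is enabled; if you turned SSL off, use http:// instead.

```python
import requests

# Placeholder value; substitute the Livy Server and Port from the form.
LIVY_URL = "https://livy.example.com:8998"

# GET /sessions is part of the public Livy REST API.
# It returns a JSON object that includes the list of active sessions.
resp = requests.get(f"{LIVY_URL}/sessions", timeout=10)
resp.raise_for_status()
print("Livy is reachable; active sessions:", resp.json().get("total"))
```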

Configuring the connection to the destination data

In the Output Location section, you configure the location in HDFS where Structural writes the destination data.

  1. In the Server field, provide the server where the destination data is located.

  2. In the IPC Port field, provide the IPC port.

  3. In the Web HDFS Port field, provide the web HDFS port.

  4. By default, WebHDFS Authentication Method is set to None. To use Pseudo authentication, click Pseudo, then in the Web HDFS Username field, provide the name of the Web HDFS user. To use Kerberos authentication:

    1. Click Kerberos.

    2. In the Kerberos Username field, type the name of the Kerberos user.

    3. In the Kerberos Password field, type the password for the specified Kerberos user.

    4. In the Kerberos Realm field, type the Kerberos realm.

  5. In the Path on HDFS field, provide the path to the destination database.

  6. By default, SSL is enabled, and Enable SSL/TLS is in the on position. We strongly recommend that you do not turn off SSL.

  7. To indicate that Structural should trust the server certificate, toggle Trust Server Certificate to the on position.

  8. To test the connection to the destination database, click Test HDFS Connection.
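
To verify the WebHDFS endpoint and output path independently of Structural, you can call the standard WebHDFS REST API directly. The sketch below uses Pseudo authentication, which passes the username in the user.name query parameter. The host, port, username, and path are placeholder values, and Kerberos authentication would instead require SPNEGO negotiation, which this sketch omits.

```python
import requests

# Placeholder values; substitute the Server, Web HDFS Port, username,
# and Path on HDFS that you entered in the Output Location section.
HDFS_HOST = "namenode.example.com"
WEBHDFS_PORT = 9870
HDFS_USER = "tonic"
OUTPUT_PATH = "/data/tonic-output"

# GETFILESTATUS is a standard WebHDFS operation that returns the status
# of a file or directory at the given path.
url = (
    f"https://{HDFS_HOST}:{WEBHDFS_PORT}/webhdfs/v1{OUTPUT_PATH}"
    f"?op=GETFILESTATUS&user.name={HDFS_USER}"
)
resp = requests.get(url, timeout=10)
if resp.ok:
    print("Output path exists:", resp.json()["FileStatus"]["type"])
else:
    print("WebHDFS returned", resp.status_code, resp.text)
```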

Note that if you use Kerberos for authentication, then you must also provide Structural with the path to the Kerberos configuration file (krb5.conf). This allows Structural to communicate with the Kerberos clusters. The path to the file is stored in the Kerberos environment variable KRB5_CONFIG.

To provide Structural with access to the configuration file:

  1. Create a volume mount.

  2. Put the Kerberos configuration file (krb5.conf) on the volume mount.

  3. Add the KRB5_CONFIG environment variable, and set the value to the path to krb5.conf.
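
As a quick sanity check after you set up the mount, you can confirm from inside the container that the variable is set and that it points at a real file. This minimal sketch is illustrative only and is not part of Structural.

```python
import os

# KRB5_CONFIG is the standard MIT Kerberos environment variable.
# The value should be the path where your volume mount places krb5.conf.
path = os.environ.get("KRB5_CONFIG", "")
if not path:
    print("KRB5_CONFIG is not set")
elif not os.path.isfile(path):
    print(f"KRB5_CONFIG points at {path}, but no file exists there")
else:
    print(f"Found krb5.conf at {path}")
```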

Providing Spark configuration variable values

The Spark Configuration section lists the Spark configuration variables that Structural requires you to set.
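
For context on how these values reach Spark: when a session is created, the Livy REST API accepts a conf map of Spark settings. The sketch below is illustrative only; the URL, proxy user, and configuration keys are placeholder examples, not the variables that Structural requires.

```python
import requests

# Placeholder value; substitute your Livy server URL.
LIVY_URL = "https://livy.example.com:8998"

# POST /sessions is part of the public Livy REST API. The "conf" map
# carries Spark configuration variables; the keys below are common
# Spark settings used here only as examples.
payload = {
    "kind": "spark",
    "proxyUser": "tonic",  # corresponds to the Proxy User field
    "conf": {
        "spark.executor.memory": "4g",
        "spark.executor.cores": "2",
    },
}
resp = requests.post(f"{LIVY_URL}/sessions", json=payload, timeout=10)
resp.raise_for_status()
print("Created Livy session", resp.json()["id"])
```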
