Configuring a Databricks pipeline

For a Databricks pipeline, the settings include:

  • Databricks credentials

  • Output location

  • Whether to also generate redacted versions of the original files

  • Selected files and folders

Changing the Databricks credentials for a pipeline

When you create a pipeline that uses files from Databricks, you are prompted to provide the credentials to use to connect to Databricks.

From the Pipeline Settings page, to change the credentials:

  1. Click Update Databricks Credentials.

  2. Provide the new credentials:

    1. In the Databricks URL field, provide the URL of the Databricks workspace.

    2. In the Access Token field, provide the access token that Textual uses to access the volume.

  3. To test the connection, click Test Databricks Connection. To check the URL and token outside of Textual first, see the sketch after these steps.

  4. To save the new credentials, click Update Databricks Credentials.
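
The following is a minimal sketch, not part of Textual, for confirming that a workspace URL and access token are valid before you enter them. It assumes the Databricks Python SDK (databricks-sdk package) is installed; the URL and token values are placeholders.

```python
# Minimal sketch (not part of Textual): verify a Databricks workspace URL
# and access token before you enter them in the pipeline settings.
# Assumes the databricks-sdk package is installed; values are placeholders.
from databricks.sdk import WorkspaceClient

client = WorkspaceClient(
    host="https://<your-workspace>.cloud.databricks.com",  # Databricks URL
    token="<your-access-token>",                            # Access Token
)

# A successful call confirms that the URL and token can authenticate.
user = client.current_user.me()
print(f"Authenticated to Databricks as {user.user_name}")
```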

Selecting a location for the output files

On the Pipeline Settings page, under Select Output Location, navigate to and select the folder in Databricks where Textual writes the output files.

When you run a pipeline, Textual creates a folder in the output location. The folder name is the pipeline job identifier.

Within the job folder, Textual recreates the folder structure for the original files. It then creates the JSON output for each file. The name of the JSON file is <original filename>_<original extension>_parsed.json.

If the pipeline is also configured to generate redacted versions of the files, then Textual writes the redacted version of each file to the same location.

For example, for the original file Transaction1.txt, the output for a pipeline run contains:

  • Transaction1_txt_parsed.json

  • Transaction1.txt
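
As a rough illustration of this layout, the sketch below builds the expected output paths for an original file. It is not Textual's implementation; the job identifier and folder names are hypothetical.

```python
from pathlib import PurePosixPath

def parsed_output_name(original_path: str) -> str:
    """Illustration of the naming rule: <original filename>_<original extension>_parsed.json."""
    p = PurePosixPath(original_path)
    return f"{p.stem}_{p.suffix.lstrip('.')}_parsed.json"

# Hypothetical job identifier and original file location.
job_folder = "<pipeline-job-identifier>"
original = "invoices/2024/Transaction1.txt"

json_output = PurePosixPath(job_folder) / PurePosixPath(original).parent / parsed_output_name(original)
redacted_output = PurePosixPath(job_folder) / original  # written only if redacted files are enabled

print(json_output)      # <pipeline-job-identifier>/invoices/2024/Transaction1_txt_parsed.json
print(redacted_output)  # <pipeline-job-identifier>/invoices/2024/Transaction1.txt
```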

Indicating whether to also redact the files

By default, when you run a Databricks pipeline, Textual only generates the JSON output.

To also generate versions of the original files that redact or synthesize the detected entity values, toggle Synthesize Files to the on position.

Filtering files in selected folders by file type

One option for selected folders is to filter the processed files based on the file extension. For example, in a selected folder, you might only want to process .txt and .csv files.

Under File Processing Settings, select the file extensions to include. To add a file type, select it from the dropdown list. To remove a file type, click its delete icon.

Note that this filter does not apply to individually selected files. Textual always processes those files regardless of file type.
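
In effect, the filter is an allow-list of extensions for folder contents. A minimal sketch of that behavior, with hypothetical file names:

```python
# Sketch of the file type filter for files found in selected folders.
# The extensions and file names are hypothetical.
allowed_extensions = {".txt", ".csv"}

folder_files = ["notes.txt", "orders.csv", "scan.pdf", "photo.png"]
to_process = [
    name for name in folder_files
    if any(name.lower().endswith(ext) for ext in allowed_extensions)
]
print(to_process)  # ['notes.txt', 'orders.csv']
```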

Selecting files and folders to process

Under Select files and folders to add to run, navigate to and select the folders and individual files to process.

To add a folder or file to the pipeline, check its checkbox.

When you check a folder checkbox, Textual adds the folder to the Prefix Patterns list. It processes every applicable file in the folder, meaning each file whose type Textual supports and that is included in the file type filter. The combined selection rules are illustrated in the sketch at the end of this section.

To display the contents of a folder, click the folder name.

When you select an individual file, Textual adds it to the Selected Files list.

To delete a file or folder, either:

  • In the navigation pane, uncheck the checkbox.

  • In the Prefix Patterns or Selected Files list, click its delete icon.
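
Taken together, a file is processed if it is individually selected, or if it is under a folder in the Prefix Patterns list, is a file type that Textual supports, and passes the file type filter. The sketch below illustrates those rules; it is not Textual's implementation, and all of the names and extensions are hypothetical.

```python
def should_process(path: str,
                   selected_files: set[str],
                   prefix_patterns: list[str],
                   allowed_extensions: set[str],
                   supported_extensions: set[str]) -> bool:
    """Illustrative selection rules; not Textual's implementation."""
    # Individually selected files are always processed, regardless of the file type filter.
    if path in selected_files:
        return True
    # Files under a selected folder are processed only if Textual supports the
    # file type and the type is included in the file type filter.
    under_selected_folder = any(path.startswith(prefix) for prefix in prefix_patterns)
    extension = "." + path.rsplit(".", 1)[-1].lower() if "." in path else ""
    return (under_selected_folder
            and extension in supported_extensions
            and extension in allowed_extensions)

# Hypothetical example: a .txt file under a selected folder passes the filter.
print(should_process("invoices/2024/Transaction1.txt",
                     selected_files=set(),
                     prefix_patterns=["invoices/"],
                     allowed_extensions={".txt", ".csv"},
                     supported_extensions={".txt", ".csv", ".pdf", ".docx"}))  # True
```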
