Creating and editing pipelines
To create a pipeline, either:
On the Pipelines page, click Create a New Pipeline.
On the Home page, click Create, then click Pipeline.
On the Create A New Pipeline panel:
In the Name field, type the name of the pipeline.
Under Files Source, select the location of the source files.
To upload files from a local file system, click File upload, then click Save.
To select files from and write output to Amazon S3, click Amazon S3.
To select files from and write output to Databricks, click Databricks.
To select files from and write output to Azure Blob Storage, click Azure.
If you selected Amazon S3, provide the credentials to use to connect to Amazon S3.
In the Access Key field, provide an AWS access key that is associated with an IAM user or role. For an example of a role that has the required permissions for an Amazon S3 pipeline, go to .
In the Access Secret field, provide the secret key that is associated with the access key.
From the Region dropdown list, select the AWS Region to send the authentication request to.
In the Session Token field, provide the session token to use for the authentication request.
To test the credentials, click Test AWS Connection.
Click Save.
On the Pipeline Settings page, provide the rest of the pipeline configuration. For more information, go to Configuring an Amazon S3 pipeline.
Click Save.
If you selected Databricks, provide the connection information:
In the Databricks URL field, provide the URL to the Databricks workspace.
In the Access Token field, provide the access token used to access the volume.
To test the connection, click Test Databricks Connection.
Click Save.
On the Pipeline Settings page, provide the rest of the pipeline configuration. For more information, go to Configuring a Databricks pipeline.
Click Save.
If you selected Azure, provide the connection information:
In the Account Name field, provide the name of your Azure account.
In the Account Key field, provide the access key for your Azure account.
To test the connection, click Test Azure Connection.
Click Save.
On the Pipeline Settings page, provide the rest of the pipeline configuration. For more information, go to Configuring an Azure pipeline.
Click Save.
To update a pipeline configuration:
Either:
On the Pipelines page, click the pipeline options menu, then click Settings.
On the pipeline details page, click the settings icon. For cloud storage pipelines, the settings icon is next to the Run Pipeline option. For uploaded file pipelines, the settings icon is next to the Upload Files option.
On the Pipeline Settings page, update the configuration. For all pipelines, you can change the pipeline name, and whether to also create redacted versions of the original files. For cloud storage pipelines, you can change the file selection. For more information, go to Configuring an Amazon S3 pipeline, Configuring a Databricks pipeline, or Configuring an Azure pipeline. For uploaded file pipelines, you do not manage files from the Pipeline Settings page. For information about uploading files, go to Selecting files for an uploaded file pipeline.
Click Save.
To delete a pipeline, on the Pipeline Settings page, click Delete Pipeline.