Viewing pipeline lists and details

Last updated 7 days ago

In Tonic Textual, a pipeline identifies a set of files that Textual processes into content that can be imported into an LLM system.

Displaying the list of pipelines

To display the Pipelines page, in the Textual navigation menu, click Pipelines.

The pipelines list only displays the pipelines that you have access to.

Users who have the global permission View all pipelines can see the complete list of pipelines.

For each pipeline, the list includes:

  • The name of the pipeline

  • Any tags assigned to the pipeline, as well as an option to add tags. For more information, go to Assigning tags to pipelines.

  • When the pipeline was most recently updated

  • The user who most recently updated the pipeline

If there are no pipelines, then the Pipelines page displays a panel that allows you to create a pipeline.

Displaying details for a pipeline

Required pipeline permission: View pipeline settings

To display the details for a pipeline, on the Pipelines page, click the pipeline name.

Details for a cloud storage pipeline

For a cloud storage pipeline (Amazon S3, Databricks, Azure, or Sharepoint), the details include:

  • The tags that are assigned to the pipeline, as well as an option to add tags. For more information, go to Assigning tags to pipelines.

  • The Run Pipeline option, which starts a new pipeline run. For more information, go to Starting a pipeline run.

  • The list of processed files. For more information, go to Viewing pipeline files, runs, and statistics.

  • For pipelines that are configured to also redact files, the redaction configuration. For more information, go to Configuring file synthesis for a pipeline.

  • File statistics for the pipeline files. For more information, go to Viewing pipeline files, runs, and statistics.

Details for an uploaded file pipeline

For an uploaded file pipeline, the pipeline details include:

  • The tags that are assigned to the pipeline, plus an option to add tags. For more information, go to Assigning tags to pipelines.

  • The Upload Files option, which you use to add files to the pipeline. For more information, go to Selecting files for an uploaded file pipeline.

  • The list of files in the pipeline, including both new and processed files. For more information, go to Viewing pipeline files, runs, and statistics.

  • For pipelines that are configured to also redact files, the redaction configuration. For more information, go to Configuring file synthesis for a pipeline.

  • File statistics for the pipeline files. For more information, go to Viewing pipeline files, runs, and statistics.

The pipeline details also include:

  • The list of pipeline runs. For more information, go to Viewing the list of pipeline runs.

  • The settings option, which you use to change the configuration settings for the pipeline. For more information, go to Editing a pipeline.