Pipelines workflow for LLM preparation

The Textual LLM preparation workflow transforms source files into content that you can incorporate into an LLM.

You can:

  • Upload files directly from a local file system

  • Select files from an S3 bucket

  • Select files from a Databricks data volume

  • Select files from an Azure Blob Storage container

  • Select files from a Sharepoint repository

Textual can process plain text files (.txt and .csv), .docx files, and .xlsx files. It can also process PDF files. For images, Textual can extract text from .png, .tif/.tiff, and .jpg/.jpeg files.

You can also create and manage pipelines from the Textual SDK.
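
For example, here is a minimal sketch of creating and listing pipelines with the Textual Python SDK. The URL and API key are placeholders, and the helper names (TextualParse, create_local_pipeline, get_pipelines) should be checked against the SDK reference for your version.

```python
# A minimal sketch of creating and listing pipelines with the Textual
# Python SDK. The URL and API key are placeholders; verify the helper
# names against the SDK reference for your version.
from tonic_textual.parse_api import TextualParse

textual = TextualParse("https://textual.tonic.ai", api_key="<your-api-key>")

# An uploaded-file pipeline: you upload local files to it, and Textual
# stores them in your configured Amazon S3 location before processing.
pipeline = textual.create_local_pipeline("llm-prep-example")

# List the pipelines that already exist.
for p in textual.get_pipelines():
    print(p.name, p.id)
```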

At a high level, to use Textual to create LLM-ready content:

  1. Create a Textual pipeline.

     If the source files are in a local file system, then upload the files to the pipeline. Textual stores the files in your configured Amazon S3 location, and then automatically processes each new file.

     If the source files are in cloud storage (Amazon S3, Databricks, Azure, or Sharepoint):

    1. Provide the credentials to use to connect to the storage location.

    2. Identify the location where Textual writes the pipeline output.

    3. Optionally, filter the files by file type. For example, you might only want to process PDF files.

    4. Identify the files to include in the pipeline. You can select individual files or folders. When you select folders, Textual processes all of the files in the folder.

  2. Run the pipeline. For each file, Textual:

    1. Converts the content to raw text. For image files, this means extracting any text that is present.

    2. Uses its built-in models to detect entity values in the text.

    3. Generates a Markdown version of the original text.

    4. Produces a JSON file (see the sketch after this list) that contains:

      • The Markdown version of the text

      • The detected entities and their locations
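
As a rough sketch of how you might consume one of these JSON files: the key names used below ("markdown", "entities", "label", "start", "end") are assumptions for illustration only; the documented schema is in Structure of the pipeline output file JSON.

```python
import json

# Illustrative sketch: read one pipeline output file and walk its
# contents. The key names here are assumptions, not the documented
# schema -- see "Structure of the pipeline output file JSON".
with open("contract_0001.pdf.json", encoding="utf-8") as f:
    result = json.load(f)

markdown_text = result["markdown"]      # Markdown version of the file text
for entity in result["entities"]:       # detected entities and locations
    print(entity["label"], entity["start"], entity["end"])
```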

From Textual, for each processed file, you can:

  • View the file content, the detected entities, and the output JSON.

  • Copy and download the output JSON files.

For cloud storage pipelines, the JSON files are also available from the configured output location.

You can also configure pipelines to create redacted versions of the original values. For more information, go to Datasets workflow for text redaction.

Textual also provides code snippets to help you to use the pipeline output.
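
For example, a common pattern is to collect the Markdown for every processed file in a pipeline, ready to chunk and embed for an LLM. In this sketch, the method names (get_pipeline_by_id, enumerate_files, get_markdown) follow the SDK's pipeline helpers; confirm the exact signatures in the SDK reference.

```python
# Sketch: gather the Markdown output of every processed file in a
# pipeline. Method names follow the Textual SDK's pipeline helpers;
# confirm the exact signatures in the SDK reference.
from tonic_textual.parse_api import TextualParse

textual = TextualParse("https://textual.tonic.ai", api_key="<your-api-key>")
pipeline = textual.get_pipeline_by_id("<pipeline-id>")

documents = [parsed.get_markdown() for parsed in pipeline.enumerate_files()]
# documents is now a list of Markdown strings for chunking/embedding.
```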

Diagram: LLM preparation workflow