Working with custom entity types

From the entity types list, you can set whether each custom entity type is active and edit its configuration.

You can also create a new custom entity type.

Enabling and disabling custom entity types

Required dataset permission: Edit dataset settings

In the entity types list, each custom entity type has a toggle that indicates whether it is active for that dataset or pipeline.

To disable a custom entity type, set the toggle to the off position.

When a custom entity type is enabled, it is listed under either the found or the not found entity types, depending on whether the files include entities of that type.

When a custom entity type is disabled, it is listed under Inactive custom entity types. To enable it, set the toggle to the on position.
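
If you work with detection results programmatically, an enabled custom entity type is reported by its own label, in the same way as built-in entity types. The following is a minimal sketch that assumes the Textual Python SDK redaction client; the module path, class name, and response attribute names reflect our understanding of the SDK rather than anything defined on this page, and the sample text is made up.

```python
# Minimal sketch (Textual Python SDK, pip install tonic-textual).
# The client class and response attributes are based on our understanding
# of the SDK and are not defined on this page.
from tonic_textual.redact_api import TextualNer

textual = TextualNer("https://textual.tonic.ai", "<your API key>")

# Redact a sample string; the response lists each detected value together
# with its entity type label, so an enabled custom entity type appears by
# its label alongside built-in types such as NAME_GIVEN.
redaction = textual.redact("Employee record for Jane Doe, badge 4482-A.")

print(redaction.redacted_text)
for entity in redaction.de_identify_results:
    print(entity)
```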

Editing a custom entity type

Required global permission (either of the following):

  • Create custom entity types

  • Edit any custom entity type

To edit a custom entity type, click the settings icon for that type.

Note that any changes to the custom entity type settings affect all of the datasets and pipelines that use the custom entity type.

Creating a custom entity type

Required global permission: Create custom entity types

To create a new custom entity type from the dataset details or pipeline details page, click Create Custom Entity Type.

Running a new scan to reflect custom entity type changes

When you enable, disable, add, or edit custom entity types, the changes do not take effect until you run a new scan.

For datasets and uploaded file pipelines, to run a new scan, click Scan.

For a cloud storage pipeline, Textual scans the files when you run the pipeline.
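
The same rule applies if you manage datasets from the Textual Python SDK: output that you fetch from a dataset reflects custom entity type changes only after a new scan completes. The sketch below is illustrative only; the methods create_dataset, add_file, and fetch_all_df are assumptions about the SDK surface, and the dataset and file names are hypothetical.

```python
# Illustrative sketch only. Method names (create_dataset, add_file,
# fetch_all_df) are assumptions about the Textual Python SDK surface,
# and the dataset and file names are hypothetical.
from tonic_textual.redact_api import TextualNer

textual = TextualNer("https://textual.tonic.ai", "<your API key>")

dataset = textual.create_dataset("employee-records")
dataset.add_file(file_path="./employee_records.txt")

# After you enable, disable, add, or edit a custom entity type in the
# application, click Scan for the dataset, then fetch the output again;
# the redacted output only reflects the change after that new scan.
df = dataset.fetch_all_df()
print(df.head())
```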

For information on how to configure a custom entity type, go to Custom entity type configuration settings.