Tonic Textual
Managing your user profile

The User Profile page displays a summary of information about your Textual account.

From the user profile page, you can copy your organization identifier, configure a team name, and manage your Textual API keys. For more information about managing your Textual API keys, go to Creating and revoking Textual API keys.

Displaying the User Profile page

To display the User Profile page:

  1. Click the user icon at the top right.

  2. In the user menu, click User Profile.

Copying your organization identifier

The profile summary includes the identifier of your organization in Textual.

To copy the identifier, click the copy icon.

Adding a team name

From the User Profile page, you can specify a team name. For example, you might use the team name field to identify a specific department or project that you belong to.

To add a team name, type the name in the field.

Getting started with Textual

Note that these instructions are for setting up a new account on Textual Cloud. For a self-hosted instance, depending on how it is set up, you might either create an account manually or use single sign-on (SSO).

Signing up for Textual

To get started with a new Textual account:

  1. Go to https://textual.tonic.ai/.

  2. Click Sign up.

  3. Enter your email address.

  4. Create and confirm a password for your Textual account.

  5. Click Sign Up.

Textual creates your account and sends you an email message to activate the account.

After you activate your account and log in, Textual displays the Textual Home page, which you can use to preview how Textual detects and replaces values. For more information, go to Previewing Textual detection and redaction.

Home page for a new account
About entity types

An entity type is a category of entity value. For example, the entity value John might be an example of the Given Name entity type.

Tonic Textual comes with a built-in set of entity types that it always detects.

You can also configure custom entity types, which you can use to detect values that are not covered by the built-in entity types.

When you create a custom entity type, you can either:

  • Use regular expressions to identify matching values. You might create this type of custom entity when there are a limited number of values, or when the values follow specific formats that can easily be identified with a regular expression.

  • Define and train a model to identify matching values. You might create this type of custom entity when there are a large number of values that do not follow a specific format and need to be identified more by context.

Managing model-based custom entity types


Required global permission - either:

  • Create custom entity types

  • Edit any custom entity type

Define and train a model to identify matching values. Training a model is an iterative process that can take hours or days, depending on your data. You might create this type of custom entity when there are a large number of values that do not follow a specific format. The values need to be identified more by context.

You can also view this video overview of entity types and entity type handling.


    Using the Textual free trial

    When you set up an account on Textual Cloud, you start a Textual free trial.

    Using the Getting Started options

    On the Home page, the collapsible Getting Started section provides links to tasks and information related to:

    • Using the Textual Python SDK

    • Working with datasets

    • Working with guided redaction

• Working with custom entity types

    Getting started section on the Home page

To view the available options for a panel, hover over the panel. Hover over an option to display a tooltip that identifies the action that the option performs.

    Hovering over a Getting Started panel to display the available options

    Word count limit

During the free trial, Textual scans up to 100,000 words for free. Note that Textual counts actual words, not tokens. For example, "Hello, my name is John Smith." counts as six words.

    After the 100,000 words, Textual disables scanning for your account. Until you purchase a pay-as-you-go subscription, you cannot add files to a dataset.
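As a rough illustration of word-based counting, as opposed to token counting, splitting on whitespace reproduces the six-word example above. This is only an approximation; Textual's exact counting rules may differ.

text = "Hello, my name is John Smith."

# Whitespace-delimited words, not tokens: this prints 6.
print(len(text.split()))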

    Viewing your current usage

    Textual displays the current available usage in the navigation menu.

    Available usage for an account

    Next steps - pay-as-you-go or product demo

    Textual also prompts you to purchase a pay-as-you-go subscription, which allows an unlimited number of words scanned for a flat rate per 1,000 words.

    You can also request a Textual product demo.


    Self-hosted instances must also configure a connection to the LLM to use.

    While a regex-based entity type identifies values that match regular expressions, for a model-based custom entity type, you train a model to identify the entity values.

    A model-based entity type is useful when the values are identified more by context than by format. For example, for an entity type that identifies the names of health conditions, it would not be possible to set up regular expressions that identify the values.

    You iterate over text-based guidelines that identify the entity type values in a smaller set of data, then use a larger set of data to train one or more models.

    Each trained model is based on a selected version of the guidelines.

    You select the trained model to use for the custom entity type, and select the datasets to enable the custom entity type for.

Getting started

Defining the entity type

Activating and managing an entity type

    Tonic Textual guide

    Tonic Textual allows you to put your text-based data to work for you.

    A Textual dataset is a collection of files from a local file system or cloud storage. Textual scans the dataset files to identify sensitive values. You can then choose to redact or replace those sensitive values, to produce output files in the same format that you can safely use for development and training.

A guided redaction project identifies and blocks out sensitive values in files. For example, you might use guided redaction to prepare documents in response to a Freedom of Information Act request. Textual scans the files to identify values. You can then review and adjust the results before you download the output files.

    You can use the Textual SDK or the Textual REST API to manage datasets or to remove sensitive values from individual text strings and files.
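For dataset work outside of the application, the Textual Python SDK (covered later in this guide) can create a dataset and add files to it. The sketch below is illustrative only, and the class, method, and parameter names that it uses (TextualNer, create_dataset, add_file) are assumptions to confirm against the Textual SDK reference.

import os

from tonic_textual.redact_api import TextualNer  # assumed import path

textual = TextualNer(
    base_url="https://textual.tonic.ai",
    api_key=os.environ["TONIC_TEXTUAL_API_KEY"],
)

# Assumed calls: create a dataset, then add a local file for Textual to scan.
dataset = textual.create_dataset("appointment_reports")
dataset.add_file(file_path="./reports/visit_2024_01.txt")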

Want to know what's in the latest Textual releases? Go to the Textual release notes.

Startup and overview

Textual SDK, REST API, and other integrations

Need help with Textual? Contact Tonic support.

    Viewing the list of custom entity types

    To display the list of custom entity types, in the Textual navigation bar, click Custom Entities.

    Custom Entity Types page with regex-based and model-based custom entity types

    Information in the list

    For each custom entity type, the Custom Entity Types page includes the following information:

    • Name of the custom entity type.

    • Whether the custom entity type is regex-based or model-based.

    • For model-based custom entity types, the status. The status indicates where the entity type is in the creation process.

    • When the custom entity type was most recently updated.

    • The number of projects where the custom entity type is active.

    Filtering the list

    Filtering by name

To filter the list by the entity type name, in the search field, begin to type text from the name. As you type, Textual filters the list to only include matching entity types.

    Filtering by type

    By default, the list includes both regex-based and model-based custom entity types.

    To filter the list to only include one of the formats:

    1. Click Filter by type.

    2. In the filter dropdown list, select the format to include.

    Filtering by creator

    By default, the list includes all of the custom entity types that were created by users in your organization.

    To filter the list to only include custom entity types that were created by specific users:

    1. Click Filter by creator.

    2. In the dropdown list, check the checkbox for each user to include.

    Sorting the list

    By default, the list is sorted alphabetically by the entity type name.

    You can sort the list by any of the data columns.

    To sort by a column, click the column heading.

    To reverse the sort order, click the heading again.

    Overview of the process to create a model-based custom entity type

For a model-based custom entity type, the overall process is as follows:

    Select and annotate test files

    The first step is to identify the entity values that are in a small set of test files. The test files and established values are used both to iterate over the model guidelines and to assess how well your trained models perform.

    Changing the dataset name


    Required dataset permission: Edit dataset settings

    The dataset name displays in the panel at the top left of the dataset details page.

    To change the dataset name:

• Overview of the model definition process - General workflow to define a model-based custom entity type.

• Start a new entity type - Begin the process of creating a new model-based custom entity type.

• Select test data - Identify the entity values in a small set of files.

• Refine model guidelines - Fine-tune the guidelines used to identify values for the entity type.

• Select training data - Assemble a much larger set of files to use to train models for the entity type.

• Create and train models - Create models that are based on a selected guidelines version, and train those models on the training data.

• Select the active model - Identify the trained model to use for the entity type.

• Rename or delete the entity type - Change the entity type name or delete the entity type.

• Enable and disable the entity type in datasets - Identify the datasets that use the entity type.

    When you create the model-based custom entity type, you provide an initial description of the entity type. For example, "Scientific names of health conditions". The description is the first version of the model guidelines. The guidelines tell the model how to identify the entity type values.

• You then select a small set of short test files that contain entity values. For example, if you typically use Textual to redact values in patient appointment reports, then you might upload a few of those reports to use as test files. Each file should be no more than 5,000 words.

  • Textual uses your initial guidelines to identify values in the files.

  • You then review and correct the annotations to identify the definitive set of entity values that the test files contain.

Iterate over model guidelines

    After you establish the entity values in your test files, you iterate over the guidelines for the model.

    For each version of the guidelines, Textual uses the guidelines to detect entity values in the test files.

    Textual then compares the values that the guidelines version detects against the values that you established when you annotated the test files.

    Textual generates scores to identify how well that version of the guidelines performed. If you are not satisfied with the results, you can update the guidelines to create a new version.

    Textual automatically generates suggestions to improve the guidelines, based on how well the current guidelines identified the values. For example, it might suggest more specific wording or additional text to describe exceptions.
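Textual's exact scoring formula is not described here, but guideline scores of this kind are typically built from precision and recall against the annotated test values. The sketch below shows that standard calculation on invented detections, purely as an illustration.

# Values you established when annotating the test files (ground truth),
# and values detected by one guidelines version - both invented for this example.
annotated = {"asthma", "type 2 diabetes", "hypertension"}
detected = {"asthma", "hypertension", "fatigue"}

true_positives = len(annotated & detected)
precision = true_positives / len(detected)   # share of detections that are correct
recall = true_positives / len(annotated)     # share of annotated values that were found
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")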

    Select training data

    When you have guidelines that you are satisfied with, you select a larger set of data to use for model training.

    The training data should contain at least 1,000 entity values. The files should still be relatively small - no more than 5,000 words.

    For example, when setting up a custom entity type to identify health conditions, you might use 5 or 6 appointment reports for your test data, but several hundred reports for your training data.

    Train models

    When you create a model, you select the guidelines version to use for it.

    The model uses the guidelines to annotate the training data - in other words, to detect entity values in the training files. You review the annotation results to determine whether you are satisfied with the detections.

    If you are not satisfied, you can:

    1. Return to the guidelines refinement to edit the guidelines.

    2. Create a new guidelines version.

    3. Create a model that uses the new version.

    If you are satisfied, then you can start the model training. Model training can take a very long time - sometimes hours or days - depending on the data.

    When the model finishes training, it scans and identifies values in the original test data. Each trained model receives a score to identify how well its detections matched the definitive values that you established.

    Select a model to use

    To make the entity type available to use, you select the trained model to use.

    The custom entity type is then active and can be enabled or disabled within individual datasets.

    Flow to create a model-based custom entity type

    Getting started with Textual

    Sign up for a Textual account.

    Textual entity types

    Built-in entity types come with Textual. You can also configure custom entity types.

    Preview Textual detection and redaction

    Use the home page to see how Textual identifies sensitive values in text or a file.

    Datasets workflow

    Use Textual to detect and replace sensitive values in files.

    About guided redaction (beta)

    Use Textual to identify and block out sensitive values in files.

    Manage API keys

    Generate and revoke API keys for SDK and API authentication.

    SDK - Datasets and redaction

    Use the Textual Python SDK to redact text and manage datasets. Review redaction requests in the Request Explorer.

    REST API

    Use the Textual REST API to redact text strings, manage datasets, and manage user access.

    Microsoft Fabric integration

    Use the Textual workload in Fabric to detect and replace sensitive values in OneLake files.

    Snowflake Native App

    Use the Snowflake Native App to redact values in your data warehouse.

    How Textual handles entity values that match multiple types

    A detected value might match multiple entity types.

    For example, a telephone number might match both the Phone Number and Numeric Value entity types.

On most dataset details views, each value is only counted once, for the entity type that it is assigned to in the output file. The Analytics page under Entities analysis allows you to choose whether to include a value in the counts for all of the types that it matches.

    By default, a detected value is assigned the entity type that it most closely matches. For our example, the telephone number value most closely matches the Phone Number entity type, and so by default is included in the Phone Number count and values list.

    If the entity type is ignored, or the value is excluded, then Textual moves the value to the next matching type.

    In our example, if you set the handling option for Phone Number to Ignore, then the telephone number value is added to the count and values list for the Numeric Value entity type.
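The fallback described above can be pictured as walking an ordered list of candidate types and keeping the first type that is not ignored. The sketch below only illustrates that rule with the Phone Number and Numeric Value example; it is not Textual's implementation.

# Candidate entity types for one detected value, ordered from closest match to weakest.
candidates = ["PHONE_NUMBER", "NUMERIC_VALUE"]

# Entity types whose handling option is set to Ignore in the dataset.
ignored_types = {"PHONE_NUMBER"}

def assigned_type(candidates, ignored_types):
    """Return the first candidate type that is not ignored, or None."""
    for entity_type in candidates:
        if entity_type not in ignored_types:
            return entity_type
    return None

# With Phone Number ignored, the value is counted as a Numeric Value.
print(assigned_type(candidates, ignored_types))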

1. On the dataset details page, click Project settings.

2. On the Dataset settings page, in the Dataset Name field, provide the new name for the dataset.

Dataset Settings page

3. Click Save Dataset.

Dataset name

    Starting a new model-based custom entity type

    To start a new custom entity type:

1. On the Custom Entity Types page, click Create Custom Entity Type, then select Model-based entity type.

Custom entity type creation dropdown

2. On the next panel, in the Custom entity type name field, provide a name for the custom entity type.

3. In the Annotation guidelines text area, provide instructions for how the model should identify values that you want to find. This is the first version of the model guidelines.

4. Click Next.

Textual displays the Test data setup page.

    Navigating the file list

    To display the list of files for the dataset, on the dataset details page, click Project files.

For a cloud storage dataset, the file list displays the full path to each file.

    Information in the file list

    For each file, the file list includes the following information:

    Detected entity values

    The Catalog page under Entities analysis displays the list of detected entity values for the dataset. To display the Catalog, in the left menu on the dataset details page, click Catalog.

    Information in the Catalog

    The Catalog lists each instance of an entity value separately. For example, the given name John is detected twice in one file and 3 times in another file. The Catalog then contains 5 entries for John.

    Deleting datasets


    Required dataset permission: Delete a dataset

    To delete a dataset:

    1. On the dataset details page, click Project settings.

    Count of entities per file

    For each dataset file, the Project files page displays:

    • The number of detected entity values in the file.

    • The total number of words in the file.

    Reviewing the sensitivity detection results


    Required dataset permission: View dataset settings

    The dataset details page provides information about the results of the sensitivity detection, including the overall results, the results per file, and the results per entity type.

    Renaming or deleting a model-based custom entity type

    Renaming the entity type

    To change the entity type name, on the entity type details page:

    1. Click the actions menu next to the entity type name.

    Supported file types

    Tonic Textual can process the following types of files:

    • txt

    • csv

    • tsv

    Generating cloud storage output files

    To generate original format output files for a cloud storage dataset, on the dataset details page, click Generate to <cloud storage type>.

Tonic Textual generates the output files to the configured output location. If the output location is not configured, then the generate option is disabled.

    For datasets that produce JSON output, Textual generates the output files automatically as soon as the output location is configured.

    Creating a guided redaction project


    Required global permission: Create guided redaction projects

    To create a project:

    1. On the Guided Redaction page, click New Project.

    Changing the project name and description

From the project settings panel, you can change the project name and provide an optional description.

    To display the settings panel, either:

    • On the Guided Redaction page, click the settings icon for the project.

    • On the project details page, click More, then click Settings.

    Displaying a dataset file preview


    You cannot preview TIF image files. You can preview PNG and JPG files.

    From the dataset file list, to display the preview, either:

    • Click the file name.

    Installing the Textual SDK

The Tonic Textual SDK is a Python SDK that you can use to redact text and files.

    It requires Python 3.9 or higher.

    To install the Tonic Textual Python SDK, run:

    pip install tonic-textual
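As a quick check that the installation works, the minimal sketch below redacts a single string. It assumes an API key created from the User Profile page and passed in the TONIC_TEXTUAL_API_KEY environment variable; the class and attribute names shown here (TextualNer, redact, redacted_text) should be confirmed against the Textual SDK reference.

import os

from tonic_textual.redact_api import TextualNer  # assumed import path

textual = TextualNer(
    base_url="https://textual.tonic.ai",
    api_key=os.environ["TONIC_TEXTUAL_API_KEY"],
)

# Redact one text string and print the tokenized result,
# for example: "My name is NAME_GIVEN_12m5s."
response = textual.redact("My name is John Smith.")
print(response.redacted_text)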

    Viewing the audit log for a project


    Required guided redaction permission: View audit log

    The audit log tracks the following actions associated with the project:

    • Project is created

    Previewing dataset file output


    Required dataset permission: Preview redacted dataset files


    You cannot preview TIF image files. You can preview PNG and JPG files.

2. On the Dataset settings page, click Delete Dataset.

3. Click Confirm Delete.

  • docx

  • xlsx

  • pdf

  • png

  • tif or tiff

  • jpg or jpeg

Datasets that write JSON output can also process the following types of files:

    • .rtf

    • .eml

    • .msg

2. On the new project panel, in the Project Name field, provide a name for the project.

3. Click Save.

Textual displays the project details page.

    On the project settings panel:
    1. In the Name field, provide the new name.

    2. In the Description field, provide the description.

    3. Click Save.

• Files are added or removed

• Project status changes

• File status changes

• Comments are added

• Files are exported

To display the audit log:

    1. On the project details page, click More.

    2. In the menu, click Audit log.

    For each entry, the audit log includes the following information:

    • When the action occurred.

    • The email address of the user who performed the action.

    • The type of action.

    • A description of the action.

    Value and word counts for dataset files

• Entity values that match multiple types - How Textual counts entity values that match more than one entity type.

• Summary results - Summary counts on the Project files and Entity settings page.

• Entity counts per file - Viewing the counts per file on the Project settings page.

• Detected entity values - The Catalog page lists the detected entity values in the dataset.

• Detected values per entity type - The Analytics page contains an analysis of the detected entity values for each entity type.

• Entity type list and settings - The Entity settings page displays the entity type results and configuration.

2. In the menu, click Edit entity name.

3. On the Edit Name panel, provide the new name for the entity type.

4. Click Save.

Deleting the entity type

    You cannot delete an entity type that is enabled for any datasets. For information on how to enable and disable an entity type in datasets, go to Enabling and disabling the entity type for datasets.

    To delete a model-based custom entity type, in the entity types list, click the delete icon.

    On the confirmation panel, click Delete.

    Edit entity name option in the actions menu for a model-based custom entity type
    Generate option to generate output files for a cloud storage dataset

• Display the preview - Display the preview for a dataset file.

• Preview for a redacted dataset file - View and make edits to the redactions in a dataset file.

• Preview for a JSON output dataset file - View the content of the JSON output for a dataset file.

    Panel to provide the name and initial guidelines for a model-based custom entity type

    Overriding the project entity type configuration

    By default, each file in a project uses the project-wide settings for Textual entity types. The entity type settings include:

    • Whether each entity type is enabled. You can enable or disable both built-in and regex-based custom entity types. You cannot enable model-based custom entity types. When you disable an entity type, values of that type are ignored by the Textual scan.

    • For each built-in entity type, added and excluded values.

    Within a file, you can override the project settings. When you save the new settings, Textual automatically rescans the file to change the detected entities based on the new settings.

    Displaying the file-specific entity type settings

    To display the file-specific entity type settings, either:

    • On the details panel for a redaction, click the settings icon.

    • In the file header, click More, then click File-specific entity detection settings.

    Enabling and disabling entity types

    To enable an entity type, set the entity type toggle to the on position.

    To disable an entity type, set the toggle to the off position.

    Adding and excluding values for a built-in entity type

    To display the panel to add and exclude values, click the add or exclude values icon.

    On the panel:

    • Use the Add to detection tab to add values.

    • Use the Remove from detection tab to remove values.

    Each value can be either a specific word or phrase to add or exclude, or a regular expression to identify the values to add or exclude. Regular expressions must be C# compatible.

    Saving and updating the detection based on the changes

    When you finish the configuration changes, click Save.

    Textual rescans the file and changes the detected entities based on the new settings.

    For example, the initial scan detected a Given Name value. You change the file-specific settings to disable the Given Name entity type. After the new scan, the value is no longer highlighted.

    Setting the project status


    Required guided redaction permission: Edit status

    The project status indicates where the project is in the redaction and review process.

    For information about how to configure the available status values, go to Configuring the available file and project statuses.

    Each new project is assigned the built-in Not started status.

    You can set the status from either the Guided Redaction page or the project details page.

    From the Guided Redaction page

    From the Guided Redaction page, to change the status of a project:

    1. In the Status column, click the status value.

    2. From the status dropdown, select the new status.

    From the project details page

    From the project details page, to change the status of the project:

    1. In the page heading, click the status value.

    2. From the status dropdown list, select the new status.

    From the project settings panel

    You can also set the project status from the project settings panel. To display the project settings panel, either:

    • On the Guided Redaction page, click the settings icon.

    • On the project details page, click More, then select Settings.

    On the project settings panel, from the Status dropdown list, select the new project status.

    Deleting a project


    Required guided redaction permission: Delete a project

    To delete a guided redaction project:

    1. Either:

      1. On the Guided Redaction page, click the options menu for the project, then click Delete project.

      2. On the project details page, click More, then click Delete project.

    2. On the confirmation panel, click Delete.

    Downloading project files


    Required guided redaction permission: Download project files

    From the project file list, you can download an individual redacted file, or all of the files in the project.

    From the file details, you can download an individual file.

    Regardless of the original file format, all guided redaction output files are downloaded as PDFs.

    Downloading a single file

    To download an individual file, either:

    • On the project details page, click the download icon for the file.

    • On the file details page, click More, then click Download File.

    Select the download options, then click Download.

    Downloading all of the project files

    To download all of the files in the project:

    1. On the project details page, click More, then click Download files into a zip folder.

    2. Select the download options, then click Download.

    Download options

    When you download files, you first select whether to download files with redacted content or files to use for review.

    In redacted files, the redacted values are covered by a colored box, and can optionally include the reference codes. For review files, the redacted values are visible, and the reference codes are always included.

    Downloading redacted files

    To download redacted files:

    1. On the Download Options panel, click Redacted.

    2. Under Box color, click the color to use for the boxes that cover the redacted values.

    3. To display the reference codes over the redacted values, check Include reference codes.

    Downloading review files

    To download review files, on the Download Options panel, click For Review.

    Review files always display reference codes.

    Setting the file status

    The file status reflects where the file is in the redaction and review process. The status is selected from the configured list of statuses.

    From the project file list, or the file details page, to set the status of a file:

    1. Click the current status value.

    2. In the dropdown list, select the new status.

    Previewing file output


    Required guided redaction permission: Preview project files

    Preview mode displays the redacted content as it appears in the downloaded file.

    To change to Preview mode, in the file heading, click the preview icon.

    In preview mode:

    • The boxes that cover the redacted content are always black.

    • The reference codes display on the boxes.

    Preview mode does not display redactions that do not have an assigned reference code. That content is not redacted in the output.

    REST API authentication

    Before you can use the API, you must create a Tonic Textual API key. For information on how to obtain a Textual API key, go to Creating and revoking Textual API keys.

    When you call the API, you place your API key in the authorization header of the request, similar to the following curl request, which fetches the list of datasets for the current user.

    curl --request GET \
    --url "https://textual.tonic.ai/api/dataset" \
    --header "Content-Type: application/json" \
    --header "Authorization: API_KEY"

    Most Textual API requests require authentication. For each request, the reference information indicates whether the request requires an API key.

    For requests that require an API key, if you do not provide a valid API key, you receive a 401 Unauthorized response.
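The same request can be made from Python. The sketch below mirrors the curl example above, using the endpoint and Authorization header shown there, and assumes the requests library is installed and the API key is available in the TONIC_TEXTUAL_API_KEY environment variable.

import os

import requests

# Same endpoint and headers as the curl example above.
headers = {
    "Content-Type": "application/json",
    "Authorization": os.environ["TONIC_TEXTUAL_API_KEY"],
}

response = requests.get("https://textual.tonic.ai/api/dataset", headers=headers)
response.raise_for_status()  # a 401 here means the API key was missing or invalid

# The response body lists the datasets for the current user.
print(response.json())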

    • The name of the file. For cloud storage datasets, the file name includes the full path to the file. When the file needs to be rescanned, a warning icon displays in front of the name.

    Warning icon for a file that needs to be rescanned
    • The number of detected entity values in the file.

    • The total number of words in the file.

    • When the file was added to the dataset.

    • When the file was most recently scanned.

    • For PDF and image files on self-hosted instances, the OCR model used to process the file.

    Filtering the file list

    To filter the list based on the file name, in the search field, begin to type text from the file name.

    As you type, Textual updates the file list to only include matching files.

    Sorting the file list

    You can also sort the list by any of the columns.

    • To sort by a column, click the column heading.

    • To reverse the sort order, click the column heading again.

    File list for an uploaded file dataset
    File list for a cloud storage dataset

    For each value instance, the Catalog includes:

    • The entity value.

    • How the value appears in the output, based on the selected handling option for the value's entity type.

    • The entity type.

    • A confidence score to indicate how confident Textual is that the value is correctly detected and identified.

    • The entity value instance in its immediate context.

    • The name of the file that contains the value instance.

    Filtering the Catalog

    Filtering by entity value

    To filter the list by text in the entity value, in the search field, begin to type the text.

    As you type, Textual filters the list to only include entity values that contain that text.

    Entities catalog filtered by value text

    Filtering by entity type

    By default, the Catalog list includes all of the entity types. To filter the list to a specific entity type, click All types, then select the entity type. To remove the filter, select All types.

    Filtering by file

    By default, the Catalog list includes values from all of the files. To filter the list to only include values detected in a specific file, click All files, then select the file. To remove the filter, select All files.

    Sorting the Catalog

You can sort the Catalog by the value, transformation, entity type, and confidence score.

    To sort by a column, click the column heading.

    To reverse the sort order, click the column heading again.

    Entities catalog for a dataset

• Click the options menu, then click Preview.

    Options menu for a dataset file with the Preview option

    Managing regex-based custom entity types


    Required global permission - either:

    • Create custom entity types

    • Edit any custom entity type

    A regex-based custom entity type uses one or more regular expressions to identify values of that type. If a value matches a configured regular expression for the custom entity type, then it is identified as that entity type.

    Regex-based custom entity types are useful when the entity values have a standard format. For example, to detect an identifier that is specific to your organization, and that always uses the same format, you could create a regex-based custom entity type.

For a more varied set of values that do not conform to one or a few formats, and that rely more on context, you would instead create a model-based custom entity type.

    Creating, editing, and deleting a regex-based custom entity type

    Creating a regex-based custom entity type


    Required global permission: Create custom entity types

    To create a regex-based custom entity type, on the Custom Entity Types page:

    1. Click Create Custom Entity Type.

    2. In the dropdown, click Regex-based entity type.

After you complete the configuration:

    • To save the new type, but not scan dataset files for the new type, click Save Without Scanning Files.

    • To both save the new type and scan for it, click Save and Scan Files.

    To detect new custom entity types in a dataset, Textual needs to run a scan. If you do not run the scan when you save the custom entity type, then on the dataset details page, you are prompted to run a scan.

    Editing a regex-based custom entity type


    Required global permission: You can edit any custom entity type that you create.

    Users with the global permission Edit any custom entity type can edit any custom entity type.

    To edit a custom entity type, in the regex-based entity types list, click the edit icon for the entity type.

    You can also edit a regex-based custom entity type from the dataset details page.

    For an existing entity type, you can change the description, the regular expressions, and the enabled datasets.

You cannot change the entity type name, which Textual uses to produce the identifier that you use to configure the entity type handling from the SDK.

    After you update the configuration:

    • To save the changes, but not scan dataset files based on the updated configuration, click Save Without Scanning Files.

    • To both save the new type and scan based on the updated configuration, click Save and Scan Files.

    To reflect the changes to custom entity types in a dataset, Textual needs to run a scan. If you do not run the scan when you save the changes, then on the dataset details page, you are prompted to run a scan.

    Deleting a regex-based custom entity type

    When you delete a custom entity type, it is removed from the datasets that it was active for.

    To delete a custom entity type:

    1. In the custom entity types list, click the delete icon for the entity type.

    2. On the confirmation panel, click Delete Entity Type.

    Configuration settings for regex-based custom entity types

    The configuration for a regex-based custom entity type includes:

    • Name and description

    • Regular expressions to identify matching values. From the configuration panel, you can test the expressions against text that you provide.

    • Datasets to make the entity type active for. You can also enable and disable custom entity types from the dataset details pages.

    Name and description

    In the Name field, provide a name for the entity type. Each custom entity type name:

    • Must be unique within an organization.

    • Can only contain alphanumeric characters and spaces. Custom entity type names cannot contain punctuation or other special characters.

    After you save the entity type, you cannot change the name. Textual uses the name as the basis for the identifier that you use to refer to the entity type in the SDK.

    In the Description field, provide a longer description of the custom entity type.

    Regular expressions to identify matching values

    Under Keywords, Phrases, or Regexes, provide expressions to identify matching values for the entity type.

    An entry can be as simple as a single word or phrase, or you can provide a more complex regular expression to identify the values.

    Textual maintains an empty row at the bottom of the list. When you type an expression into the last row, Textual adds a new empty row.

    To add an entry, begin to type the value in the empty row.

    To edit an entry, click the entry field, then edit the value.

    To remove an entry, click its delete icon.

    Testing an expression

    Under Test Entry, you can check whether Textual correctly identifies a value as the entity type based on the provided expression.

    To test an expression:

1. From the dropdown list, select the entry to test.

2. In the text area, provide the text to test.

    As you enter the text, Textual automatically scans the text for matches to the selected expression. The Result field displays the input text and highlights the matching values.
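To make the expression entries concrete, the sketch below tests a hypothetical pattern for an internal employee identifier (EMP- followed by six digits) against sample text, similar to what the Test Entry panel does. It uses Python's re module for illustration only; Textual evaluates the expressions itself, and the pattern and entity type here are invented for the example.

import re

# Hypothetical expression for a custom "Employee ID" entity type.
pattern = re.compile(r"\bEMP-\d{6}\b")

sample = "Ticket opened by EMP-104233; please route follow-up to EMP-990115."

# List the matching values, similar to the highlighting in the Result field.
for match in pattern.finditer(sample):
    print(f"Matched {match.group(0)!r} at positions {match.start()}-{match.end()}")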

    Enabling and disabling the regex-based entity type for datasets and guided redaction projects

    Under Activate Custom Entity Type, you identify the datasets and guided redaction projects to make the entity active for.

    From the dataset details and guided redaction details, you can also enable and disable custom entity types for that dataset or guided redaction project.

To make the entity active for all current and future datasets and guided redaction projects, check Automatically activate for all current and new datasets and guided redaction projects.

    The rest of the panel is split into separate lists for datasets and guided redaction projects.

    For each list:

    • To make the entity active for a specific dataset or guided redaction project, set the toggle for the dataset or project to the on position.

    • To filter the list based on the dataset or project name, in the filter field for the list, begin to type text from the name. Textual updates the list to only include matching datasets or projects.

    • To update all of the currently displayed datasets or projects, click Bulk action, then click Enable or Disable.

For information about enabling and disabling custom entity types from within a dataset, go to Working with custom entity types.

For information about enabling and disabling custom entity types from a guided redaction project, go to:

    Working with custom entity types

    The entity types list includes any custom entity types that are active for the dataset. From the Entity settings page, you can enable and disable custom entity types.

    You can also update the configuration of a regex-based custom entity type, and go to the details page for a model-based custom entity type.

    Enabling and disabling custom entity types


    Required dataset permission: Edit dataset settings

    The entity types list includes the custom entity types that are active for the dataset.

    To manage which custom entity types are active for the dataset:

    1. Click Custom entity types.

    2. On the Enable custom entity types panel, to search for specific entity types, begin to type text in the entity type name. As you type, Textual updates the list to only display matching entity types.

3. To enable a custom entity type for the dataset, set its toggle to the on position.

4. To disable a custom entity type for the dataset, set its toggle to the off position.

5. To enable all of the custom entity types for the dataset, click Bulk, then click Enable all.

    Updating the configuration of a regex-based custom entity type


    Required global permission - either:

    • Create custom entity types

    • Edit any custom entity type

    From the dataset details, you can edit the configuration of a regex-based custom entity type. To edit a regex-based custom entity type, click the settings icon for that type.

    Note that any changes to the custom entity type settings affect all of the datasets that use the custom entity type.

For information on how to configure a custom entity type, go to Configuration settings for regex-based custom entity types.

    Viewing the details for a model-based custom entity type


    Required global permission - either:

    • Create custom entity types

    • Edit any custom entity type

    For a model-based custom entity type, to display the details for the entity type, click the settings icon.

    Textual displays the entity type details in a new browser tab.

    Running a new scan to reflect custom entity type changes

    When you enable, disable, or edit custom entity types, the changes do not take effect until you run a new scan.

    To run a new scan, click Scan.

    Selecting the handling option for entity types


    Required dataset permission: Edit dataset settings

    For datasets that produce redacted files, for each entity type, you choose how to handle the detected values. This determines how each value displays in the output files.

    For datasets that create JSON output, the entity type handling determines the display in downloaded Markdown or HTML files.

    Available handling options

    The available options are:

    • Synthesize - Indicates to replace the value with another realistic value. For example, the first name value Michael might be replaced with the value John. The synthesized values are always consistent, meaning that a given entity value always has the same replacement value. For example, if the first name Michael appears multiple times in the text, it is always replaced with John. Textual does not synthesize any excluded values. For custom entity types, Textual scrambles the values.

• Redact - This is the default option, except for the Full Mailing Address entity type, which is ignored by default. For text files, Redact indicates to tokenize the value - to replace it with a token that identifies the entity type followed by a unique identifier. For example, the first name value Michael might be replaced with NAME_GIVEN_12m5s. The identifiers are consistent, which means that for a given original value, the replacement always has the same unique identifier. For example, the first name Michael might always be replaced with NAME_GIVEN_12m5s.
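When you set handling options from the SDK rather than from the Entity settings page, the handling is commonly expressed per entity type. The sketch below assumes that the tonic-textual redact call accepts a per-entity configuration, shown here as a generator_config dictionary that maps entity type identifiers to Synthesis, Redaction, or Off; the parameter name, the accepted values, and the entity identifiers other than NAME_GIVEN are assumptions to confirm against the Textual SDK reference.

import os

from tonic_textual.redact_api import TextualNer  # assumed import path

textual = TextualNer(
    base_url="https://textual.tonic.ai",
    api_key=os.environ["TONIC_TEXTUAL_API_KEY"],
)

# Assumed per-entity handling: synthesize given names, tokenize phone numbers,
# and leave organization names untouched.
response = textual.redact(
    "Call Michael at 555-867-5309 about the Initech contract.",
    generator_config={
        "NAME_GIVEN": "Synthesis",
        "PHONE_NUMBER": "Redaction",
        "ORGANIZATION": "Off",
    },
)
print(response.redacted_text)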

    Selecting the handling option for a specific entity type

    On the Entity settings page, to select the handling option for an entity type:

    1. In the De-identification Setting column, click the dropdown.

    2. In the dropdown list, click the option.

On the Analytics page, the panel for an entity type also provides an option to set the handling option.

    Selecting the handling option for all of the entity types

    On the Entity settings page, to select the same handling option for all of the entity types, from the Bulk Edit dropdown above the data type list, select the option.

    Configuring handling of .docx file components


    Required dataset permission: Edit dataset settings

In .docx and .xlsx files, as long as the URL entity type handling option is not set to Off, Textual automatically changes the destination of hyperlinks to google.com.

    On the Dataset settings page, the Word Document Settings section contains settings to determine how to manage .docx images, tables, and comments.

    Word Document Settings section on the Dataset settings page

    To display the Dataset settings page, on the dataset details page, click Project settings.

    Configuring how to handle .docx images

    For .docx images, including .svg files, you can configure the dataset to either:

    • Redact the image content. When you select this option, Textual looks for and blocks out sensitive values in the image.

    • Ignore the image.

    • Replace the images with black boxes.

    On the Dataset settings page, under Image settings for DOCX files:

    • To redact the image content, click Redact contents of images using OCR. This is the default selection.

    • To ignore the images entirely, click Ignore images during scan.

    • To replace the images with black boxes, click Replace images from the output file with black boxes.

    Configuring how to handle .docx tables

    For .docx tables, you can configure the dataset to either:

    • Redact the table content. When you select this option, Textual detects sensitive values and replaces them based on the entity type configuration.

    • Block out all of the table cells. When you select this option, Textual places a black box over each table cell.

    On the Dataset settings page, under Table settings for DOCX files:

    • To redact the table content, click Redact content using the entity type configuration. This is the default selection.

    • To block out the table content, click Block out all table cell content.

    Configuring how to handle .docx comments

    For comments in a .docx file, you can configure the dataset to either:

    • Remove the comments from the file.

    • Ignore the comments and leave them in the file.

    On the Dataset settings page, to remove the comments, toggle Remove comments from the output file to the on position. This is the default configuration.

    To ignore the comments, toggle Remove comments from the output file to the off position.

    Enabling and disabling the entity type for datasets


    You cannot enable model-based custom entity types for guided redaction projects.

To include a model-based custom entity type in the entity types that Textual scans for in a dataset, you must enable the entity type for the dataset.

    Before you delete a model-based custom entity type, you must make sure that it is not enabled for any datasets.

    In the entity types list, the Activated for column displays the number of datasets that the entity type is enabled for.

    You cannot enable an inactive entity type.

    To change the selected datasets:

1. Click the database icon. The Activate custom entity panel displays the datasets that you have access to.

2. To filter the datasets by name, in the search field, type text from the name.

3. For each dataset, the toggle indicates whether the entity type is active for the dataset. When the toggle is in the on position, the entity type is enabled for the dataset. To enable or disable the entity type for a single dataset, set the toggle.

4. To enable or disable the entity type for all of the datasets that are currently included in the list, click Bulk Edit, then select whether to enable or disable the entity type.

You can also enable or disable entity types from the dataset details. For more information, go to Working with custom entity types.

    Analysis of detected entity types

    The Analytics page under Entities analysis displays a summary of the detected entity types. For each entity type, you can display the distribution across the dataset files.

    Analytics page on the dataset details page

    Selecting the value count option

    On the Analytics page, you can choose how Textual determines the displayed value counts and entity types.

    • Match counts to redacted files - Displays value counts based on the output files. For this view, the counts do not include entity types that are ignored. The counts also resolve entity values that match multiple types and entity values that share some text. Each value is counted as a single type.

    • Show all detected entities - Displays the full detection value counts. For this view, the counts per entity type include all of the entities that Textual found during processing. This includes:

      • Values for ignored entity types

      • Entity values that match multiple entity types

    Summary counts

    The panels at the top of the page provide summary information for the detected entity values. The displayed values are based on the selected value count option.

    The summary information includes:

    • The number of detected entity values.

    • The number of detected entity types.

    • The percentage of detected values that are redacted.

    Counts by entity type

    The entity types list on the Analytics page displays a summary of the detected value counts for the detected entity types. The displayed entity types and counts are based on the selected value count option.

    For each entity type, the list includes:

    • The count of detected values

    • The percentage of detected values in the dataset that are of that type

    By default, the entity types are listed in descending order based on the value count.

    You can sort the list by the entity type, count, and percentage. To sort by a column, click the heading. To reverse the sort order, click the heading again.

    Displaying the top 10 file list for an entity type

    When you click an entity type, Textual displays a panel that lists the 10 files that contain the most detected values for that entity type.

    The panel also allows you to change the handling option for the entity type.

    Selecting the active model for the entity type

    Before you can use a model-based entity type, you must select the model to use.

    On the model list page, the active model is marked as Active.

    To change the active version:

1. Hover the mouse over the model that you want to make the active model for the entity type.

    2. Click Activate.

    Reviewing redactions


    Required guided redaction permission: Review redactions

    In Review mode, you mark individual redactions as reviewed.

    This indicates that you verified that the redaction is correctly detected and identified.

    To change to Review mode, in the file heading, click the review icon.

    Review mode does not highlight redactions that do not have an assigned reference code. That content is not redacted.

    In the redaction list, all of the reference codes use the same highlighting. There is no distinction between automatically and manually assigned codes.

    Navigating between redactions

    At the top left is the summary of the reviewed redactions. It shows the number of reviewed redactions and the total number of redactions.

    The redactions panel at the right displays the list of redactions. There are no options to change the configuration. The list does not contain redactions that do not have assigned reference codes.

    To navigate directly to the first unreviewed redaction, click the icon next to the review summary.

    When you click a redaction, the review panel also allows you to navigate to the previous and next redactions.

    Marking a redaction as reviewed

    To mark a redaction as reviewed, either:

    • In the file content:

      1. Click the redaction.

      2. On the panel, click Mark as reviewed.

    When you mark a redaction as reviewed:

    • In the header, the count of reviewed redactions is updated.

• In the file content, the redaction is highlighted in green.

    • In the redaction list, a check mark is added in front of the redaction.

    Unmarking a reviewed redaction

    To mark a reviewed redaction as not reviewed, either:

    • In the file content:

      1. Click the redaction.

      2. On the panel, click Unmark as reviewed.

    Assigning tags to a project

    You can assign tags to each guided redaction project, to help to further identify and link projects. For example, you might use a tag to indicate when a project is for a FOIA request.

    On the Guided Redaction page, the Tags column contains the assigned tags for the project.

On the project settings panel, the Tags field lists the project tags. To display the settings panel, either:

    • On the Guided Redaction page, click the settings icon for the project.

    • On the project details page, click More, then click Settings.

    To change the assigned tags:

    1. Click the Tags column or field.

    2. To add a tag, type the tag text, then press enter.

    3. To remove a tag, click its delete icon.

    Viewing the list of guided redaction projects


    Required permissions

    Either:

    • Global permission - either:

      • Use guided redaction projects

      • View all guided redaction projects

    • Guided redaction permission: Access to view or perform an action on one or more guided redaction projects

    The Guided Redaction page contains the list of redaction projects.

    To display the Guided Redaction page, in the Textual navigation bar, click Guided Redaction.

    The list displays the projects that you have access to.

    Information in the list

    For each project, the list includes:

    • The name of the project

    • Any tags assigned to the project

    • The number of files in the project

    • The project status

    Filtering the project list

    You can filter the list based on the project name.

    In the search field, type text from the project name. As you type, Textual updates the list to only include matching projects.

    Sorting the project list

    By default, the list is sorted in descending order by the creation date. The most recently created projects are at the top of the list.

You can sort the list by any of the columns other than the options column.

    To sort by a column, click the column heading. To reverse the sort order, click the column heading again.

    Configuring the available file and project statuses

    As you redact and review the project files, you set the status of each file and project. Projects and files use the same set of status values.

    Each status is associated with a color.

    Built-in statuses

    Textual comes with a set of built-in statuses. The built-in statuses are:

    • Not started - This is the default status. It is applied automatically to all new projects and files. You cannot delete this status.

    • In progress

    • Ready for review

    • Review in progress

    • Done - This is intended to be the final status for a project or file. You cannot delete this status.

    You can change the name and assigned color of all of the built-in statuses. You can delete the built-in statuses that you do not need, except for the Not started status and the Done status.

    You can also add custom statuses, to accommodate your particular redaction and review process.

    hashtag
    Displaying the status list

    On the Guided Redaction page, to display the status list, click Status Settings.

    The status list shows the status name and the associated color.

    hashtag
    Adding a status

    circle-info

    Required global permission: Create guided redaction status values

    From the Status Settings panel, to add a status:

    1. Click Add a status.

    2. On the status configuration panel, in the field, provide a name for the status. Status names must be unique.

    3. Click the color to assign to the status.

    hashtag
    Editing a status

    circle-info

    Required global permission: Edit guided redaction status values

    From the Status Settings panel, to change the status configuration:

    1. Click the status name.

    2. On the status configuration panel, you can change the status name and color. Remember that the status name must be unique.

    3. To save the changes, click Save.

    hashtag
    Deleting a status

    circle-info

    Required global permission: Edit guided redaction status values

    You cannot delete:

    • The built-in Not started status.

    • The built-in Done status.

    • A status that is currently assigned to a project or file.

    From the Status Settings panel, to delete a status:

    1. Click the status name.

    2. On the status configuration panel, click the delete icon.

    Sharing access to a guided redaction project

    circle-info

    Required permissions

    Global permission - View users and groups

    Either:

    • Global permission - Control access to all guided redaction projects

    • Guided redaction permission - Share guided redaction access

    Textual uses guided redaction permission sets for role-based access control (RBAC) of each project.

    A guided redaction permission set is a collection of guided redaction permissions.

    Textual provides built-in guided redaction permission sets. Organizations can also configure custom permission sets.

To share project access, you assign guided redaction permission sets to users and to SSO groups, if you use SSO to manage Textual users. Before you assign a guided redaction permission set to an SSO group, make sure that you are aware of who is in the group. The permissions that are granted to an SSO group are automatically granted to all of the users in the group.

    To change the current access to a project:

1. Either:

   • On the Guided Redaction page, click the share icon for the project.

   • On the project details page, click More, then click Share.

    Uploading and deleting local files

For a local files dataset, you upload and remove files directly.

    On Tonic Textual Cloud, and by default for self-hosted instances, Textual stores the uploaded files in the application database.

    On a self-hosted instance, you can instead configure an S3 bucket where Textual stores the files. In the S3 bucket, the files are stored in a folder that is named for the dataset identifier.

    For more information, go to Setting the S3 bucket for file uploads and redactions.

For an example of an IAM role with the required permissions, go to Example IAM role for file uploads and redactions.

    hashtag
    Adding files to the dataset

    circle-info

    Required dataset permission: Upload files to a dataset

    From the dataset details page, to add files to the dataset:

    1. In the left menu, click Project files.

    2. On the dataset files page, click Upload Files.

3. Search for and select the files.

Textual uploads and then processes the files. For more information about file processing, go to Tracking and managing file processing.

    circle-info

    Do not leave the page while files are uploading. If you leave the page before the upload is complete, then the upload stops.

    You can leave the page while Textual is processing the file.

    On a self-hosted instance, when a file fails to upload, you can download the associated logs. To download the logs, click the options menu for the file, then select Download Logs.

    hashtag
    Removing files from the dataset

    circle-info

    Required dataset permission: Delete files from a dataset

    To remove a file from the dataset:

    1. In the file list, click the options menu for the file.

    2. In the options menu, click Delete File.

    Transcribe and redact an audio file

    You can send an audio file to the Tonic Textual SDK. Textual creates a transcription of the audio file, and then redacts the transcription text as a string.

    hashtag
    Audio file limitations

    The file must be 25MB or smaller, and must be one of the following file types:

    • m4a

    • mp3

    • webm

    • mp4

    • mpga

    • wav

    hashtag
    Sending the transcription and redaction request

To transcribe and redact an audio file, you use textual.redact_audio.

The request includes the entity type handling configuration.

The redaction response includes the redacted or synthesized content and details about the detected entity values.
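
For example, a minimal sketch, assuming that the API key is set in the TONIC_TEXTUAL_API_KEY environment variable and that the file path is a placeholder:

from tonic_textual.redact_api import TextualNer
# Assumes that TONIC_TEXTUAL_API_KEY is set in the environment
textual = TextualNer()
# Transcribe the audio file and redact the transcription text
redaction_response = textual.redact_audio("<path to the audio file>")
# Print a summary of the redacted text and the detected entity values
print(redaction_response.describe())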

    Creating and managing guided redaction projects

    hashtag
    Setting up projects

    hashtag
    Configure entity types and access

    Configuring PDF options

    circle-info

    Required dataset permission: Edit dataset settings

    The PDF Settings section of the Dataset settings page provides options to configure how to work with PDF files.

    PDF Settings section of the Dataset settings page

    hashtag
    Using the new synthesis process

    Textual has developed an updated synthesis process that is currently implemented for the following entity types:

    • URLs

    • Names

    • Custom entity types

    In particular, the new synthesis process improves the display of the synthesized values in PDF files. The values better match the available space and the original font.

Under PDF Settings, the New PDF synthesis mode (experimental) setting determines which process to use.

    To use the new process, toggle the setting to the on position.

    hashtag
    Configuring whether to redact PDF signatures

    By default, Textual redacts scanned-in signatures in PDF files. You can configure the dataset to instead ignore the signatures.

    Under PDF Settings:

    • To redact PDF signatures, toggle Detect and redact signatures in PDFs to the on position. This is the default configuration.

    • To ignore PDF signatures, toggle Detect and redact signatures in PDFs to the off position.

    hashtag
    Selecting the OCR model to use (self-hosted only)

For PDFs as well as images, if multiple optical character recognition (OCR) models are available, you can select the specific model to use in the dataset. For information on how to enable specific models, go to Enabling PDF and image processing.

    Under PDF Settings, from the OCR Engine dropdown list, select the model to use.

    Viewing the dataset list and details

    hashtag
    Viewing the list of datasets

    hashtag
    Displaying the Datasets page

    To display the Datasets

    Changing cloud storage credentials and output location

    circle-info

    Required dataset permission: Edit dataset settings

    For a cloud storage dataset, you can:

    • Update the cloud storage credentials. Note that this option is only available if you provided the credentials manually. If you use the credentials set in environment variables, then you cannot change the credentials.

    Assigning tags to datasets

    circle-info

    Required dataset permission: Edit dataset settings

    Tags can help you to organize your datasets. For example, you can use tags to indicate datasets that belong to different groups, or that deal with specific areas of your data.

    You can manage tags from both the Datasets page and the dataset details.

    Configuring reference codes for guided redaction

    Reference codes are used to indicate the type of value that is redacted.

    Before you can start a guided redaction, you must set up the reference codes.

    You can optionally link each reference code to one or more Textual entity types. Make sure that you map reference codes to all of the entity types that appear in your files.

    hashtag
    Displaying the reference codes

    To display the list of reference codes:

    Downloading local output files

    circle-info

    Required dataset permission: Download redacted dataset files

    For each file in a dataset, you can download the output file.

    hashtag
    Downloading a single output file

    Tracking and managing file processing

    When you add files to a local files dataset, or change the file selection for a cloud storage dataset, Tonic Textual automatically scans the files to identify the entities that they contain.

    When you change the dataset configuration, Textual also prompts you to run a new scan. For example, a new scan is required when you:

    • Configure added values

    • Change the available custom entity types

    About guided redaction

    circle-info

    The guided redaction feature is currently in beta.

    The Textual guided redaction tool blocks out sensitive values in files. For example, you might use guided redaction to prepare documents to provide in response to a Freedom of Information Act request.

    Guided redaction supports built-in and custom entity types. Textual completes an initial scan of the files to identify sensitive values in the files. You can then manually add and remove redactions.

    The values are covered with a black or white box. You assign and display reference codes to identify the type of content for each redaction.

    Processing a project file

    hashtag
    Redacting, reviewing, and previewing a file

    hashtag
    Configuring and commenting on files

    File preview for a JSON output file

    For a dataset that generates JSON output:

    • On the left is the original content. For files other than .txt files, you can toggle between generated Markdown and the rendered file.

    • On the right are the results.

    List of entity types

The Entity settings page displays the list of active entity types for the dataset. This includes:

    • All of the built-in entity types

    • Any custom entity types that are active for the dataset

    Enabling and disabling entity types and values

The entity types settings for a project determine the Textual entity types that Textual detects in new project files. You can enable or disable both built-in and regex-based custom entity types. You cannot enable model-based custom entity types.

    For example, you can tell Textual to ignore all Given Name values in new project files.

    For built-in entity types, you can also configure specific values to add to or exclude from the detection. For example, you can keep the Given Name entity type active, but indicate to ignore the value "Mark".

    By default, all of the entity types are active, and there are no added or excluded values.

    Any changes to the project entity type settings only affect files that are added after the change. Files that were already scanned are not affected.

You can override the entity type settings in individual files. For more information, go to Overriding the project entity type configuration.

    Managing the list of project files

    The project details page contains the list of files in the project. To display the project details, on the Guided Redaction page, click the project name.

    hashtag
    Information in the file list

    For each file in the project, the list includes the following information:

    Sharing dataset access

    circle-info

    Required permissions:

    • Global permission - View users and groups

    Instantiating the SDK client

    Whenever you call the Textual SDK, you first instantiate the SDK client.

    To work with Textual datasets, or to redact individual files, you instantiate TonicTextual.

    hashtag
    Instantiating when the API key is already configured

If the API key is configured as the value of the TONIC_TEXTUAL_API_KEY environment variable, then you do not need to provide the API key when you instantiate the SDK client.

    Configuring guided redaction options

    Before you create guided redaction projects, configure the following options.

    Navigating project file details

    circle-info

    Required guided redaction permission

    Either:

In the redaction list, click the redaction.
    To remove all of the tags, click the delete icon at the right of the tags field.
  • When the project was created

  • The user who created the project

  • Click Save.

    The project access panel contains the current list of users and groups who have access to the project, and displays their assigned guided redaction permission sets. To add a user or group to the list of users and groups:

    1. In the search field, begin to type the user email address or group name.

    2. From the list of matching users or groups, select the user or group to add.

  • For a user or group, to change the assigned guided redaction permission sets:

    1. Click Access. The dropdown list displays the list of custom and built-in guided redaction permission sets.

    2. Under Custom Permission Sets, check the checkbox next to each guided redaction permission set to assign to the user or group. To remove an assigned guided redaction permission set, uncheck the checkbox.

    3. Under Built-In Permission Sets, click the guided redaction permission set to assign to the user or group. You can only assign one built-in permission set. By default, for an added user or group, the Viewer permission set is selected. To not grant any built-in permission set, select None.

    1. In the Textual navigation bar, click Guided Redaction.

    2. On the Guided Redaction page, click Reference Codes Settings.

For each reference code, the list includes:

    • Code value

    • Any Textual entity types that the code is mapped to

    • The user who most recently updated the code configuration

    • When the code was most recently updated

    hashtag
    Creating a reference code

    circle-info

    Required global permission: Create redaction reference codes

    To create a reference code:

    1. Click New Reference Code.

    2. In the Reference Code field, type the code.

3. Optionally, check the checkbox next to each built-in entity type that applies to the reference code. You can link up to 5 entity types to a reference code. For example, if a reference code is used for any name value, then you would link the reference code to both the Given Name and Family Name entity types. The list indicates when an entity type is already linked to a reference code. You can link the same entity type to multiple reference codes. Linking entity types is optional. If the reference code represents a value that is not covered by the Textual entity types, then you do not link it to any entity types.

    4. Click Create.

    hashtag
    Editing a reference code

    circle-info

    Required global permission: Edit redaction reference codes

    For a reference code, you can change the code value and the assigned built-in entity types.

    To edit a reference code:

    1. Click the settings icon for the code.

    2. On the details panel, update the code. You can change the code and the assigned entity types.

    3. Click Save.

    hashtag
    Deleting a reference code

    circle-info

    Required global permission: Edit redaction reference codes

    You cannot delete a reference code that is currently assigned to a redaction.

    To delete a reference code, click its delete icon.

• Name of the file
  • Number of redactions in the file

  • Number of pages in the file

  • Number of comments in the file

  • Status of the file

  • Name of the user who most recently made changes in the file

  • When the most recent change occurred

hashtag
    Filtering the file list

    You can filter the file list based on the file name.

To filter the list, in the search field, type text from the file name. As you type, Textual updates the list to only include matching files.

    hashtag
    Sorting the file list

    By default, the file list is sorted in descending order based on the update date. The most recently updated files are at the top of the list.

    You can sort the file list by any column except for the options column. To sort the list by a column, click the column heading. To reverse the sort order, click the column heading again.

    hashtag
    Supported file types

    You can use guided redaction for the following types of files:

    • .jpg

    • .msg

    • .pdf

    • .png

    • .txt

    • .docx - Note that the content is treated as text, and all images are removed.

    hashtag
    Adding files to the project

    circle-info

    Required guided redaction permission: Upload files to a project

    When you add files to a project, Textual automatically assigns the Not started status to those files. It also scans the files for built-in entity types, based on the project configuration.

    To add files to a project:

    1. On the project details page, click Upload Files.

    2. Search for and select the files to add.

    hashtag
    Removing a file from a project

    circle-info

    Required guided redaction permission: Delete files from a project

    To delete a file from the project, either:

    • On the project details page, click the delete icon for the file.

    • On the file details page, click More, then click Delete File.

• Edit file redactions
  • Preview project files

  • Manage comments

  • Download project files

To display the file details, on the project details page, click the file name.

    On the file details page:

    • For a non-paginated file, such as a plain text file, the file content displays as a single block.

    • For PDFs, the file displays one page at a time. The page list for the file displays at the left.

    In the file content, the redactions are highlighted. The first time you view a file, these are the redactions from the initial Textual scan.

    hashtag
    Searching for text

    To search for a specific piece of text in a file, type the text into the search field, then press Enter.

If it finds the text, Textual displays the number of matches in the file. You can then navigate between the matches.

    hashtag
    Displaying redaction details

    When you click a redaction that was added by the Textual scan, the redaction panel displays.

    For redactions that Textual added, the redaction panel displays the entity type.

    hashtag
    Navigating between file pages

    For a paginated file, to jump to a specific page, in the left-hand page navigation, click the page.

    hashtag
    Viewing the redaction list

    At the right of the file details is the file redaction list.

    For a paginated file, the redaction list is grouped by page. For each redaction, the redaction list contains the redacted text, or an indicator that the redaction is for a selected area or the entire page.

    The redaction list also lists the assigned reference codes.

    To search for specific values in the file, in the search field, type the value text.

    hashtag
    Displaying the project settings panel

    The entity type configuration for the project is part of the project settings panel.

    To display the project settings panel, either:

    • On the Guided Redaction page, click the settings icon.

    • On the project details panel, click More, then select Settings.

    After you change the project settings, Textual automatically rescans the files to add or remove automatically detected redactions that are affected by the changes.

    hashtag
    Determining the enabled entity types

    By default, a project includes all of the built-in and custom entity types. When you add a file, Textual automatically scans the file to identify instances of those entity types.

    From the settings panel, you can disable entity types. For example, if you disable the Occupation entity type, Textual does not scan for occupation values.

    On the settings panel, to exclude an entity type from the initial file scan, set the entity type toggle to the off position.

    hashtag
    Configuring added and excluded values for built-in entity types

    For each built-in entity type, you can configure added and excluded values.

    You might add values that Textual does not detect because, for example, they are specific to your organization or industry.

    You might exclude values that Textual redacts incorrectly.

    To display the panel to add and exclude values, click the add or exclude values icon.

    On the panel:

    • Use the Add to detection tab to add values.

    • Use the Remove from detection tab to remove values.

    Each value can be either a specific word or phrase to add or exclude, or a regular expression to identify the values to add or exclude. Regular expressions must be C# compatible.
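
For example, hypothetical entries (these values are illustrations only, not product defaults) might look like the following, where EMP-\d{6} is a C#-compatible regular expression for an internal employee identifier:

Add to detection (word or phrase): Project Nightingale
Add to detection (regular expression): EMP-\d{6}
Remove from detection (word or phrase): Mark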

, while the first name Helen might always be replaced with NAME_GIVEN_9ha3m2. For PDF files, Redact indicates to either cover the value with a black box, or, if there is space, display the entity type and identifier. For image files, Redact indicates to cover the value with a black box. Textual does not redact any excluded values.

• Ignore - Indicates to not make any changes to the values. For example, the first name value Michael remains Michael. This is the default option for the Full Mailing Address entity type.


    View the projects list View the list of guided redaction projects.

    Create a project Start a new guided redaction project.

    Set the project status Track the status of a guided redaction project.

    Assign tags to a project Use tags to further identify a guided redaction project.

    Change the project name and description Give a guided redaction project a new name and add a more detailed description.

    Delete a project Remove a guided redaction project.

    Share access to a project Add users or groups to a guided redaction project.

    Set the active entity types and values Identify the entity types to scan for. Add and exclude specific values.

    Configure status values Manage the available status values for projects and files.

    Configure reference codes Manage the reference codes to assign to redactions.

    To disable all of the custom entity types for the dataset, click Bulk, then click Disable all.
If the API key is configured as the value of the TONIC_TEXTUAL_API_KEY environment variable, then you do not need to provide the API key when you instantiate the SDK client.

For Textual datasets, or to use the redact method:

from tonic_textual.redact_api import TextualNer
# The default URL is https://textual.tonic.ai (Textual Cloud)
# If you host Textual, provide your Textual URL
textual = TextualNer()

With the instantiated client, you can then make SDK calls. For example, to transcribe and redact an audio file:

redaction_response = textual.redact_audio("<path to the audio file>")
redaction_response.describe()

hashtag
Instantiating when the API key is not configured

If the API key is not configured as the value of the environment variable TONIC_TEXTUAL_API_KEY, then you must include the API key in the request.

For Textual datasets, or to use the redact method:

from tonic_textual.redact_api import TonicTextual
api_key = "your-tonic-textual-api-key"
# The default URL is https://textual.tonic.ai (Textual Cloud)
# If you host Textual, provide your Textual URL
textual = TonicTextual(api_key=api_key)
To display the Datasets page, in the navigation menu, click Datasets.

    The datasets list only displays the datasets that you have access to.

    Users who have the global permission View all datasets can see the complete list of datasets.

    For each dataset, the Datasets page includes:

    • The name of the dataset

    • Any tags assigned to the dataset. For datasets that you can edit, there is also an option to assign tags. For more information, go to Assigning tags to datasets.

    • The user who most recently updated the dataset

    • When the dataset was created

    hashtag
    Filtering the datasets by name

    To filter the datasets by name, in the search field, begin to type text that is in the dataset name.

    As you type, the list is filtered to only include datasets with names that contain the filter text.

    hashtag
    Filtering the datasets by tag

    You can assign tags to each dataset. Tags can help you to organize and provide a quick glance into the dataset configuration.

    On the Datasets page, to filter the datasets by their assigned tags:

    Panel to filter datasets by their assigned tags
    1. In the heading for the Tags column, click the filter icon.

    2. On the tag list, check the checkbox for each tag to include.

    To find a specific tag, in the search field, type the tag name.

    hashtag
    Displaying details for a dataset

    circle-info

    Required dataset permission: View dataset settings

    To display the details page for a dataset, on the Datasets page, click the dataset name.

    Dataset details page

    The dataset details page displays the tags assigned to the dataset, as well as an option to add tags. For more information, go to Assigning tags to datasets.

    The menu at the left includes:

    Project files

    The list of files in the dataset. For a cloud storage dataset, where the files can be located across multiple folders, Textual navigates to the first folder that contains selected dataset files.

    Entities analysis

    Provides information about the detected entity values in the dataset. Catalog displays the list of detected entity values. Analytics summarizes the count of values by entity type.

    Entity settings

    The list of entity types, with options to configure how Textual transforms the entities for each type.

    Project settings

    Settings to configure:

    • Dataset name

    • Credentials for cloud storage

    • Output location for cloud storage

    Change the output location for the generated output files.

    You configure the connection credentials and output location from the Dataset settings page. To display the Dataset settings page, on the dataset details page, click Project settings.

    After you update the configuration, click Save Dataset.

    hashtag
    Changing cloud storage credentials

    From the credentials section, to update the cloud storage credentials, click Update <Cloud storage solution> Credentials.

    hashtag
    Amazon S3

    To provide updated credentials for Amazon S3:

1. In the Access Key field, provide an AWS access key that is associated with an IAM user or role. For an example of a role that has the required permissions for an Amazon S3 dataset, go to Example IAM role for Amazon S3 datasets.

    2. In the Access Secret field, provide the secret key that is associated with the access key.

    3. From the Region dropdown list, select the AWS Region to send the authentication request to.

    4. In the Session Token field, provide the session token to use for the authentication request.

    5. To test the credentials, click Test AWS Connection.

    hashtag
    Azure

    To provide updated credentials for Azure:

    1. In the Account Name field, provide the name of your Azure account.

    2. In the Account Key field, provide the access key for your Azure account.

    3. To test the connection, click Test Azure Connection.

    hashtag
    SharePoint

    SharePoint credentials must have the following application permissions (not delegated permissions):

    • Files.Read.All - To see the SharePoint files

• Files.ReadWrite.All - To write redacted files and metadata back to SharePoint

    • Sites.ReadWrite.All - To view and modify the SharePoint sites

    To provide updated credentials for SharePoint:

    1. In the Tenant ID field, provide the SharePoint tenant identifier for the SharePoint site.

    2. In the Client ID field, provide the client identifier for the SharePoint site.

    3. In the Client Secret field, provide the secret to use to connect to the SharePoint site.

    4. To test the connection, click Test SharePoint Connection.

    hashtag
    Setting the output location

    The output location is where Textual writes the redacted files.

When you create a cloud storage dataset, after you select the initial set of files and folders, Textual prompts you to select the output location.

    For an existing dataset, you set the output location from the Output Location section of the Dataset settings page.

    Click the edit icon, then select the cloud storage folder where Textual writes the output files for the dataset.

    When you generate output for a cloud storage dataset, Textual creates a folder in the output location. The folder name is the identifier of the job that generated the files.

    Within the job folder, Textual recreates the folder structure for the original files.

    Textual then writes the output files to the corresponding folders.
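
For example, with a hypothetical output location, job identifier, and set of source files, the output might be organized as follows. The original folder structure (claims/2024/ in this example) is recreated under the job folder:

<output location>/
    <job identifier>/
        claims/2024/claim-form.pdf
        claims/2024/notes.txt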

    hashtag
    Managing tags from the Datasets page

    On the Datasets page, the Tags column displays the currently assigned tags.

    Datasets with assigned tags and the Tags option to change the tag assignment

    To change the tag assignment for a dataset:

    Editing a dataset's tags from the datasets list
    1. Click Tags.

    2. On the dataset tags panel, to add a new tag, type the tag text, then press Enter.

    3. To remove a tag, click its delete icon.

    4. To remove all of the tags, click the delete all icon.

    hashtag
    Managing tags from the dataset details page

    On the dataset details page, the assigned tags display under the dataset name.

    Editing a dataset's tags from the dataset details page

    To change the tag assignment:

    1. Click Tags.

    2. On the dataset tags panel, to add a new tag, type the tag text, then press Enter.

    3. To remove a tag, click its delete icon.

    4. To remove all of the tags, click the delete all icon.

    From the file list, to download a single output file, click the options menu for the file, then select the download option.

    hashtag
    Datasets that generate redacted output files

    For datasets that generate redacted versions of the source files, the option is Download File.

    File options menu with the download option

    hashtag
    Datasets that generate JSON output

    For datasets that generate JSON output, you can download either the JSON output or the redacted version of the Markdown content.

    File options menu for a file in a JSON output dataset

    For RTF files, you can also download the redacted content in HTML format.

    File options menu for an RTF file in a JSON output dataset

    hashtag
    Downloading all of the output files

    To download all of the output files, click the download icon that is next to the file filter field.

    Download all files icon on the dataset file list

    For datasets that generate redacted versions of the source files, the download happens immediately.

    For datasets that generate JSON output:

    • To download the JSON output for the files, select Download JSON.

    • To download the redacted Markdown content for all of the files, select Download Markdown.

    • If the dataset contains any RTF files, then to download only the redacted HTML content for all of the RTF files in the dataset, select Download HTML (RTF only). If you select this option, then the download does not include other types of files.

    Download all options for a JSON output dataset
    The file list reflects the current scanning status for the file. A file is initially queued for scanning. When the scan starts, the status changes to scanning. When Textual finishes processing a file, it marks the file as scanned.

    When a file needs to be rescanned, a warning icon displays in front of the file name.

    Warning icon for a file that needs to be rescanned

    As Textual processes each file, it updates the results.

    hashtag
    Pausing the file processing

    If needed, you can pause the file processing. To pause the processing, click the pause icon.

The information in the results only reflects the files that are scanned.

    For a cloud storage dataset, when you generate output, Textual only includes files that are scanned.

    hashtag
    Starting a scan on a paused file

    circle-info

    Required dataset permission: Start a scan of dataset files

    After you pause the scan, you can start a scan on individual files.

    To start a scan on a file, click the refresh icon for the file.

    Refresh icon for a dataset file

    hashtag
    Downloading logs for files that fail to process

    circle-info

    Required dataset permission: Start a scan of dataset files

    When Textual is unable to process a file, it displays an error for that file.

    To download log files for the failed file:

    1. Click the options menu for the file.

    2. Click Download Logs.

    The overall workflow is as follows:
    Guided redaction workflow

    hashtag
    Pre-project setup

    Before you create a guided redaction project:

    1. Set up the list of statuses that can be assigned to a file or a project. Each status is associated with a color. Textual provides a set of built-in statuses, including a Not started status that is applied automatically to all new projects and files, and a Done status that indicates that the project or file is complete. You can configure custom statuses and change the names and colors of the built-in statuses. You can also delete the built-in statuses, except for Not started and Done.

    2. If you assign reference codes to the redactions, configure the reference codes. Reference codes identify the type of information that is redacted. You can optionally link each code to up to 5 Textual entity types. For example, if you use a single code for all name values, then you could link that code to both the Given Name and Family Name entity types. Be sure to map reference codes to all of the entity types that are present in your files.

    hashtag
    Project creation and population

    1. Create a guided redaction project.

    2. Add files to the project. For new files, Textual uses its built-in models to scan each file for entity values.

    hashtag
    File redaction

    For each file, in Redaction mode, you can add and remove redactions.

    Every redaction must be assigned a reference code. Redactions that do not have reference codes are not redacted in the output.

    As you work on the redaction, you can update the file and project statuses.

    Use the project audit log to track the project activity.

    hashtag
    Redaction review

    In Review mode, review the redactions and mark each one as reviewed.

    As you work on the review, you can update the file and project statuses.

    Use the project audit log to track the project activity.

    hashtag
    Output preview and download

    In Preview mode, preview the redacted output based on the current redactions.

    When the redaction and review is complete, download the redacted files. All downloaded output is in PDF format, with the individual PDF files bundled into a .zip file.

    When you download files, you select whether the output is redacted or is to be used for review.

    • Review output does not cover the redactions, and always displays the reference codes.

    • For redacted output, you select the color of the box and whether to display the reference codes.

    hashtag
    Viewing the output JSON for the file

    The JSON view contains the content of the JSON output file.

    For details about the JSON output structure for the different types of files, go to Structure of JSON output files.

    JSON view on a file preview for a JSON output dataset

    hashtag
    Tables view - Tables in a PDF or image file

    For a PDF or image file that contains one or more tables, the Tables view displays the tables.

    To display Tables view, select it from the view dropdown list.

    View selection menu

    If the file does not contain any tables, then the Tables view option is not available.

    hashtag
    Key-Values view - Key-value pairs in a PDF or image file

    For a PDF or image file that contains key-value pairs, the Key-Values view displays the key-value pairs.

    To display Key-Values view, select it from the view dropdown list.

    View selection menu

    If the file does not contain key-value pairs, then the Key-Values view option is not available.

    File preview for a text file in a JSON output dataset
    File preview for a PDF file in a JSON output dataset
    hashtag
    Information in the entity types list

    For each entity type, the list includes:

    • The name of the entity type.

    • The number of detected values for that type in the dataset files.

    • The selected handling option.

    hashtag
    Filtering the entity types list

    You can filter the entity types list by:

    • Text in the type name or description.

    • Whether the entity type is built-in or custom.

    • Whether there are detected entities for the entity type.

• The handling option for the entity type.

    hashtag
    Filtering by name or description

    To filter by name or description, in the search field, begin to type text in the name or description. As you type, Textual filters the list to only include matching entity types.

    Filtering the entity types list by name

    hashtag
    Applying other filters

    To apply other filters, click Filter options, then select the filters to apply.

    Filter options for dataset entity types
    Entity settings page for a dataset
    Either:
    • Global permission - Manage access to datasets

    • Dataset permission - Share dataset access

Tonic Textual uses dataset permission sets for role-based access control (RBAC) of each dataset.

    A dataset permission set is a set of dataset permissions. Each permission provides access to a specific dataset feature or function.

    Textual provides built-in dataset permission sets. Organizations can also configure custom permission sets.

To share dataset access, you assign dataset permission sets to users and to SSO groups, if you use SSO to manage Textual users. Before you assign a dataset permission set to an SSO group, make sure that you are aware of who is in the group. The permissions that are granted to an SSO group are automatically granted to all of the users in the group.

    To change the current access to the dataset:

    1. On either the Datasets page or the dataset details page, click the share icon.

    Share icon for a dataset in the datasets list
    Share icon on dataset details
    1. The dataset access panel contains the current list of users and groups who have access to the dataset, and displays their assigned dataset permission sets. To add a user or group to the list of users and groups:

      1. In the search field, begin to type the user email address or group name.

      2. From the list of matching users or groups, select the user or group to add.

    2. For a user or group, to change the assigned dataset permission sets:

      1. Click Access. The dropdown list displays the list of custom and built-in dataset permission sets.

      2. Under Custom Permission Sets, check the checkbox next to each dataset permission set to assign to the user or group. To remove an assigned dataset permission set, uncheck the checkbox.


    Redact a file Add and remove redactions.

    Review redactions in a file Verify the redactions in a project file.

    Preview a file View how the file will appear in the output.

    Set the file status As you process the file, update the current status.

    Editing an individual PDF file

    circle-info

    Required dataset permission: Edit dataset settings

    For PDF files, you can add manual overrides to the initial detections, which are based on the detected data types and handling configuration.

    For each manual override, you select an area of the file.

    For the selected area, you can either:

    • Ignore any automatically detected values. For example, a scanned form might show an example or boilerplate content that doesn't actually contain sensitive values.

    • Redact that area. The file might contain sensitive content that Tonic Textual is unable to detect. For example, a scanned form might contain handwritten notes.

    You can also apply a template to the file.

You can also add manual redactions from the file preview.

    hashtag
    Selecting the manual override option for a file

    To manage the manual overrides for a PDF file:

    1. In the file list, click the options menu for the file.

    2. In the options menu, click Edit Redactions.

    The File Redactions panel displays the file content. The values that Textual detected are highlighted. The page also shows any manual overrides that were added to the file.

    hashtag
    Applying a PDF template to a file

If a dataset contains multiple files that have the same format, then you can create a template to apply to those files. For more information, go to Creating templates to apply to PDF files.

    On the File Redactions panel, to apply a template to the file, select it from the template dropdown list.

    When you apply a PDF template to a file, the manual overrides from that template are displayed on the file preview. The manual overrides are not included in the Redactions list.

    hashtag
    Adding a manual override

    On the File Redactions panel, to add a manual override to a file:

    1. Select the type of override. To indicate to ignore any automatically detected values in the selected area, click Ignore Redactions. To indicate to redact the selected area, click Add Manual Redaction.

    2. Use the mouse to draw a box around the area to select.

    Textual adds the override to the Redactions list. The icon indicates the type of override.

    In the file content:

    • Overrides that ignore detected values within the selected area are outlined in red.

    • Overrides that redact the selected area are outlined in green.

    hashtag
    Navigating to a manual override

    To select and highlight a manual override in the file content, in the Redactions list, click the navigate icon for the override.

    hashtag
    Removing a manual override

    To remove a manual override, in the Redactions list, click the delete icon for the override.

    hashtag
    Saving the manual overrides

    To save the current manual overrides, click Save.

    Selecting the training data for your models

    Before you start training models, on the Model data setup page, you select the training data to use.

    Model Data Setup page with selected training files

    hashtag
    About training data

    The training data is a much larger set of files than the test data, and can include hundreds of files or more. The data should ideally contain at least 1,000 values for the entity type. For example, for an entity type to identify health conditions, you might use 5 medical appointment reports in your test data, but several hundred medical reports for your training data.

    Similar to the test files, the training files should be relatively small - no more than 5,000 words.

    For training data, there is no option to paste in text. Training data files are either uploaded from a local file system or selected from a cloud storage solution.

    If you selected the test data from a cloud storage solution, then you must use the same cloud storage solution for the training data. For example, if you selected the test data from Amazon S3, then you must select the training data from Amazon S3.

    You can add files to the training data at any time. New files are only used for models that are trained after the files are added.

    hashtag
    Uploading files from a local file system

    If there are no training files, then on the Model data setup page, click Upload files, then search for and select the files to upload.

    To add more uploaded files to the training data:

1. Click Add training data.

2. Click Upload files.

3. Search for and select the files to add to the training data.

    hashtag
    Selecting files from cloud storage

    If there are no training files, then on the Model data setup page, click the cloud storage solution to use, then select the files to add.

    If the test data came from a cloud storage solution, then you must use the same cloud storage option for the training data.

    To add more cloud storage files to the training data:

1. Click Add training data.

2. Select the cloud storage solution.

3. Select the files to add to the training data.

    For training data, you can select entire folders. Textual then adds all of the files in the folder.

    hashtag
    Displaying the content of a training data file

    On the Model data setup page, to display the content of an uploaded file, click the file name.

    hashtag
    Training data file statuses

    Each training data file goes through the following statuses:

    • Queued for upload - The file is not yet uploaded.

    • Uploading - Textual is uploading the file.

    • Ready - The file is uploaded and is used for subsequent model training.

    Model training cannot start until all of the currently uploaded files are Ready.

    hashtag
    Deleting training files

    On the Model data setup page, to delete a training file:

    1. Click its delete icon.

    2. On the confirmation panel, you can choose to skip the confirmation when you delete training files. If you select this option, then the next time you delete a training file, the file is deleted immediately, and the panel does not display.

    3. Click Delete.

    When you delete a training file:

    • For existing models that annotated the file:

      • The entity counts continue to reflect the entities that were detected in the file

      • The file name remains in the list on the model details.

    Creating and revoking Textual API keys

    circle-info

    Required global permission: Create an API key

    To be able to use the Textual SDK, you must have an API key.

    Alternatively, you can use the Textual API to obtain a JSON Web Token (JWT) to use for authentication.

    hashtag
    Viewing the list of API keys

    You manage keys from the User API Keys section of the User Profile page.

    To display the User Profile page:

    1. Click the user icon at the top right.

    2. In the user menu, click User Profile.

    hashtag
    Creating a Textual API key

    To create a Textual API key:

    1. From the User API Keys section, click Create New Key.

    2. In the API Key Name field, type a name to use to identify the key.

3. Click the save icon.

    Textual adds the key to the list.

    hashtag
    Copying a Textual API key

    To copy the text of an API key, so that you can use it in an SDK or API request, click its copy icon.

    hashtag
    Revoking a Textual API key

    To revoke a Textual API key, in the User API Keys list, click the Revoke option for the key to revoke.

    hashtag
    Configuring the API key as an environment setting

    You cannot instantiate the SDK client without an API key.

Instead of providing the key every time you call the Textual API, you can configure the API key as the value of the TONIC_TEXTUAL_API_KEY environment variable.
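
For example, a minimal sketch that sets the variable from Python before the client is created (the key value is a placeholder). In practice, you would usually export the variable in your shell or deployment environment instead of hard-coding it:

import os
from tonic_textual.redact_api import TextualNer
# Make the API key available before the client is instantiated
os.environ["TONIC_TEXTUAL_API_KEY"] = "<your Textual API key>"
# The client reads the key from the environment
textual = TextualNer()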

    Obtaining JWT tokens for authentication

    Instead of an API key, you can use the Textual API to obtain a JSON Web Token (JWT) to use for authentication.

    hashtag
    Configuring the JWT and refresh token lifetimes

    hashtag
    JWT lifetime

    By default, a JWT is valid for 30 minutes.

On a self-hosted instance, to configure a different lifetime, set the environment variable SOLAR_JWT_EXPIRATION_IN_MINUTES.

    hashtag
    Refresh token lifetime

    You use a refresh token to obtain a new JWT. By default, a refresh token is valid for 10,000 minutes, which is roughly equivalent to 7 days.

    On a self-hosted instance, to configure a different lifetime, set the environment variable SOLAR_REFRESH_TOKEN_EXPIRATION_IN_MINUTES.

    hashtag
    Obtaining your first JWT and refresh token

    To obtain your first JWT and refresh token, you make a login request to the Textual API. Before you can make this call, you must have a Textual account.

    To make the call, perform a POST operation against:

    The request payload is:

    For example:

    In the response:

    • The jwt property contains the JWT.

    • The refreshToken property contains the refresh token.
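
For example, a minimal sketch of the login call, using the Python requests library, with a placeholder URL and credentials:

import requests
response = requests.post(
    "https://textual.tonic.ai/api/auth/login",
    json={"userName": "<Textual username>", "password": "<Textual password>"},
)
response.raise_for_status()
tokens = response.json()
jwt = tokens["jwt"]                     # the JWT to use for authentication
refresh_token = tokens["refreshToken"]  # used later to obtain a new JWT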

    hashtag
    Obtaining a new JWT and refresh token

    You use the refresh token to obtain both a new JWT and a new refresh token.

    To obtain the new JWT and token, perform a POST operation against:

    The request payload is:

    In the response:

    • The jwt property contains the new JWT.

    • The refreshToken property contains the new refresh token.
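
Continuing the previous sketch, the refresh call reuses the stored refresh token:

response = requests.post(
    "https://textual.tonic.ai/api/auth/token_refresh",
    json={"refreshToken": refresh_token},
)
response.raise_for_status()
tokens = response.json()
jwt = tokens["jwt"]                     # the new JWT
refresh_token = tokens["refreshToken"]  # the new refresh token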

    About the Textual REST API

    The Tonic Textual REST API allows you to more deeply integrate Textual functions into your existing workflows.

    You can use the REST API as another tool alongside the Textual application and the Textual Python SDK. The Python SDK supports the same actions as the REST API. We recommend the Python SDK for customers who already use Python.

    You can download the Textual OpenAPI specification from:

https://textual.tonic.ai/swagger/v1/swagger.json

    Redact individual files

    circle-info

    Required global permission: Use the API to parse or redact a text string

    You can use the Textual SDK to redact and synthesize values in individual files.

    Before you perform these tasks, remember to instantiate the SDK client.

For a self-hosted instance, you can also configure the S3 bucket to use to store the files. For more information, go to Setting the S3 bucket for file uploads and redactions. For an example of an IAM role with the required permissions, go to Example IAM role for file uploads and redactions.

    hashtag
    Sending a file to Textual

To send an individual file to Textual, you use textual.start_file_redaction.

    You first open the file so that Textual can read it, then make the call for Textual to read the file.

    The response includes:

    • The file name

    • The identifier of the job that processed the file. You use this identifier to retrieve a transformed version of the file.

    hashtag
    Getting the file with redacted or synthesized values

After you use textual.start_file_redaction to send the file to Textual, you use textual.download_redacted_file to retrieve a transformed version of the file.

To identify the file, you use the job identifier that you received from textual.start_file_redaction. You can specify the entity type handling for the detected entity values.

    Before you make the call to download the file, you specify the path to download the file content to.

    Built-in entity types

    Tonic Textual's built-in models identify a range of sensitive values, such as:

    • Locations and addresses

    • Names of people and organizations

    • Identifiers and account numbers

    Datasets flows

    You use a Textual dataset to detect sensitive values in files. The dataset output can be either:

    • Files in the same format as the original file, with the sensitive values replaced based on the dataset configuration.

    • JSON files that contain a summary of the detected values and replacements.

You can also create and manage datasets from the Textual Python SDK or the Textual REST API.
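
For example, a minimal sketch of creating and later retrieving a dataset with the Python SDK. This assumes that the API key is set in the TONIC_TEXTUAL_API_KEY environment variable and that the dataset name is a placeholder:

from tonic_textual.redact_api import TextualNer
# Assumes that TONIC_TEXTUAL_API_KEY is set in the environment
textual = TextualNer()
# Create a new dataset
dataset = textual.create_dataset("<dataset name>")
# Later, retrieve the same dataset by name
dataset = textual.get_dataset("<dataset name>")

File uploads and other dataset operations are available through additional SDK and REST API calls; for details, see the SDK and REST API sections.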

    Selecting cloud storage files

    circle-info

    Required dataset permission: Edit dataset settings

    For a cloud storage dataset, you manage files from the file selection panel.

    When you create a dataset, after you provide the cloud storage credentials and save the dataset, Textual immediately prompts you to select dataset files. After you select the files and click Next, Textual prompts you to set the output location.

    For an existing dataset, to display the File Selection

    Creating a dataset

    circle-info

    Required global permission: Create datasets

    When you create a dataset, you specify:

    • The type of output to produce

    Iterating over the guidelines to use for model training

    On the Guidelines refinement page, you prepare the guidelines that define a model.

    To test each version of the guidelines, Textual uses the guidelines to detect entity values in the test data. It then generates scores to indicate how closely those detection results match the values that you established during your initial review.

    Textual also generates recommendations to improve the guidelines.

    hashtag
    Viewing the initial version of the guidelines

    Commenting on a project file

    circle-info

    Required guided redaction permission: Manage comments

    You can add and respond to comments on the redacted file.

    For example, you might want to add a message to a future reviewer to explain why a value is redacted, or why you selected the reference codes that you did.

    Or a reviewer might want to add a comment to indicate why they believe that a redaction is not correct.

    Users can respond to comments, which starts a comment thread.

    You can resolve or delete a comment thread. For example, a comment requests a change to a reference code assignment. After you make the change, you can resolve or delete the comment thread.

    hashtag
    Viewing the comments list

    To view the list of comments, in the toolbar, click Comment or press c. This puts you into comment mode. To exit comment mode, click Comment or press c again.

    In comment mode, the redactions list is replaced with the Comments panel, which contains a list of comment threads.

The dropdown at the top right controls which comment threads to include. You can display one of the following:

    • All comment threads, both open and resolved

    • Only open comment threads. This is the default.

    • Only resolved comment threads.

    hashtag
    Starting a comment thread

    To start a new comment thread in a file:

1. Make sure that you are in comment mode. To enter comment mode, click Comment or press c.

    2. For a PDF or image, click the location where you want to place the comment. For other text files, select the text to attach the comment to.

    3. In the comment field, type the text of the comment.

    4. Click Comment.

    The comment thread is represented by a comment icon. The comment icons are always visible, even when the comments list is not displayed.

    To exit comment mode, click Comment or press c again.

    hashtag
    Adding a response to a comment thread

    To add a response to a comment thread:

    1. Click the comment icon for the thread. You do not need to change to comment mode.

    2. In the text field, type the text of the response.

    3. Click Reply.

    hashtag
    Editing a comment in a thread

    To edit a comment from within a thread:

    1. Click the comment icon for the thread. You do not need to change to comment mode.

    2. Click the edit icon for the comment to edit.

    3. In the text field, edit the text of the comment.

    4. Click Save.

    hashtag
    Resolving a comment thread

    To resolve a comment thread:

    1. Click the comment icon for the thread. You do not need to change to comment mode.

    2. Click the resolve icon.

    hashtag
    Deleting a comment thread

To delete a comment thread:

    1. Click the comment icon for the thread. You do not need to change to comment mode.

    2. Click the delete icon.

    Customize entity type settings Override the entity type and value configuration from the project.

    Comment on a file Create, reply to, and resolve comment threads on a project file.

    Authentication Authentication requirements for the REST API

    Redaction Use the REST API to redact text.

    Datasets Use the REST API to manage datasets and dataset files.

    Entity linking Use the REST API to link entity values.

    Access management Use the REST API to retrieve user information and to manage dataset access.


    Handling of .docx and PDF file components

Under Built-In Permission Sets, click the dataset permission set to assign to the user or group. You can only assign one built-in permission set. By default, for an added user or group, the Viewer permission set is selected. To not grant any built-in permission set, select None.
    The file name is dimmed, and you cannot display the file details.
  • For models that are created after the file is deleted, the file is not annotated and is not displayed in the list on the model details.

    <Textual_URL>/api/auth/login
    {"userName": "<Textual username>",
    "password": "<Textual password>"}
    {"userName": "[email protected]",
    "password": "MyPassword123!"}
    <TEXTUAL_URL>/api/auth/token_refresh
    {"refreshToken": "<refresh token>"}
    with open("<path to the file>", "r") as f:
        j = textual.start_file_redaction(f,"<file name>")
    with open("<path to output location>", "wb") as fo:
    fo.write(textual.download_redacted_file(<job identifier>))
The built-in entity types are listed below. Each entry shows the entity type name, the identifier to use for the API, and a description.

• CC Exp (CC_EXP) - The expiration date of a credit card.

• Credit Card (CREDIT_CARD) - A credit card number.

• CVV (CVV) - The card verification value for a credit card.

    hashtag
    Overall workflow

    At a high level, to use Textual to detect sensitive values and create redacted data:

    Diagram of the Tonic Textual dataset workflow

    hashtag
    Create and populate a dataset

    1. Create a Textual dataset, which is a set of files to redact. The files can be uploaded from a local file system, or can come from a cloud storage solution. When you create the dataset, you also choose the type of output, which can be either:

      • The redacted version of the original files. The file is in the same format as the original file.

      • JSON summaries of the files and the detected entities.

    2. Add files to the dataset. Textual supports almost any free-text file, PDF files, .docx files, and .xlsx files.

      For images, Textual supports PNG, JPG (both .jpg and .jpeg), and TIF (both .tif and .tiff) files.

    3. Textual uses its built-in models to scan the files and identify sensitive values. For JSON output, Textual also immediately generates the output files.

    hashtag
    Review the redaction results

    Review the types of entities that were detected in the scanned files.

    hashtag
    Configure entity type handling

    At any time, for datasets that produce redacted files, you can configure how Textual handles the detected values for each entity type.

    For all datasets, you can provide added and excluded values for each built-in entity type.

    You can also create and enable custom entity types.

    hashtag
    Select the handling option for each entity type

    For datasets that produce redacted output files, you configure how Textual redacts the values. This configuration does not apply to datasets that produce JSON output.

    For each entity type, you select the action to perform on detected values. The options are:

• Redaction - By default, Textual redacts the entity values, which means to replace the values with a token that identifies the type of sensitive value, followed by a unique identifier. For example, NAME_GIVEN_l2m5sb, LOCATION_j40pk6. The identifiers are consistent, which means that for the same original value, the redacted value always has the same identifier. For example, the first name Michael might always be replaced with NAME_GIVEN_l2m5sb, while the first name Helen might always be replaced with NAME_GIVEN_9ha3m2. For PDF files, redaction means to either cover the value with a black box, or, if there is space, display the entity type and identifier. For image files, redaction means to cover the value with a black box.

    • Synthesis - For a given entity type, you can instead choose to synthesize the values, which means to replace the original value with a realistic replacement. The synthesized values are always consistent, meaning that a given original value always produces the same replacement value. For example, the first name Michael might always be replaced with the first name John. You can also identify specific replacement values.

    • Ignore - You can choose to ignore the values, and not replace them.

    Textual automatically updates the file previews and downloadable files to reflect the updated configuration.

    hashtag
    Define added and excluded values for entity types

    Optionally, for all datasets, you can create lists of values to add to or exclude from an entity type. You might do this to reflect values that are not detected or that are detected incorrectly.

    hashtag
    Manually update PDF files

    Datasets also provide additional options to redact PDF files.

    You can add manual overrides to a PDF file. When you add a manual override, you draw a box to identify the affected portion of the file.

    You can use manual overrides either to ignore the automatically detected redactions in the selected area, or to redact the selected area.

    To make it easier to process multiple files that have a similar format, such as a form, you can create templates that you can apply to PDF files in the dataset.

    hashtag
    Generate or download output files

    After you complete the redaction configuration and manual updates, to obtain the output files:

    • For local file datasets, you download the output files.

• For cloud storage datasets that produce original format files, you run a generation job that writes the output files to the configured output location. For datasets that produce JSON output, the files are generated to the output location as soon as the output location is configured.

    hashtag
    File upload and download flows

    For a local file dataset, the file upload and download flows are as follows. For a more general overview of the Textual architecture, go to Textual architecture.

    hashtag
    File upload flow

    When you upload a file to a local file dataset, the flow is as follows:

    File upload flow for a local file dataset
    1. The Textual user uploads the file.

    2. The API service stores the file in either Amazon S3 or the Textual application database. For more information, go to Setting the S3 bucket for file uploads and redactions.

    3. The API service starts a job in the worker.

    4. The worker sends any PDF and image files to the OCR service (Amazon Textract, Document Intelligence, or Tesseract) to extract the file text.

    5. The OCR service returns the PDF and image text to the worker.

    6. The worker submits the file text to the Textual machine learning service to detect and replace entity values.

    7. The machine learning service returns the results to the worker.

    8. The worker stores the results in the application database.

    hashtag
    File download flow

    When you download a redacted file from a local file dataset, the flow is as follows:

    File download flow for a local file dataset
    1. The Textual user makes the request to download the file.

    2. The API service retrieves the file from where it is stored in either Amazon S3 or the application database.

    3. The API service retrieves the detected entities and entity handling settings from the application database.

    4. The API service applies those results to the file.

    5. The API service returns the redacted file to the Textual user.

For an existing dataset, to display the File Selection panel:
    1. On the dataset details page, click Project files.

    2. On the dataset files page, click Select Files.

    File list with option to select files

    The file selection includes:

    • Whether to restrict the dataset to specific file types

    • The files or folders to include in the dataset

    When you change the file selection, Textual scans the files for entities. For more information, go to Tracking and managing file processing.

    hashtag
    Filtering files by file extension

    When you select files, you can filter the selectable files based on file extension.

    To limit the file extensions to include:

    1. Click File Extension Filter. By default, all file extensions are included, and none of the checkboxes are checked.

    2. Check the checkbox for each file extension to include. As you select the file extensions to include, Textual updates the navigation pane so that you can only select files that have one of those file extensions. It hides files that have other file extensions and folders that do not contain files with the selected file extensions.

    File extension filter on the file selection panel

    hashtag
    Selecting files and folders to include

    In the file selection area, you navigate to and select the folders and files to add to the dataset.

    hashtag
    Navigating through the folders

    In the navigation area, to display the contents of a folder, click the Open link for the folder.

    hashtag
    Selecting a file or folder

    To add a folder or file to the dataset, check its checkbox.

    File selection panel with selected files

    hashtag
    Managing selected folders

    In the navigation pane, when you check a folder checkbox, Textual adds it to the Prefix Patterns list.

    Selected paths for a cloud storage dataset

    hashtag
    Adding a folder manually

Instead of navigating to a folder and selecting it, you can add the path to the list manually.

    To add a folder path:

    1. Click Add Prefix Pattern.

    2. In the field, type the path to the folder, then click the save icon.

    hashtag
    Removing folder paths

    To remove a folder path from the dataset, either:

    • In the navigation pane, uncheck its checkbox.

    • In the Prefix Patterns list, click its delete icon.

    For the selected folders, the dataset includes all of the applicable files in the folder that:

    • Are of a file type that Textual supports

    • Match the file extension filter

    hashtag
    Managing selected files

    In the navigation pane, when you select an individual file, Textual adds it to the Selected Files list.

    Selected files for a cloud storage dataset

    To delete a file, either:

    • In the navigation pane, uncheck its checkbox.

    • In the Selected Files list, click its delete icon.

    File selection panel for a cloud storage dataset

• The source location for the files

• If the files are in cloud storage, the connection credentials

hashtag
    Setting the name, source type, and output type

    To create a dataset:

1. On the Datasets page, click Create a Dataset.

Dataset creation panel

2. In the Dataset Name field, provide a name for the dataset.

3. Under Output Format, select the type of output to generate.

4. Under File Source, select the source type. If the source type is a cloud storage option, then provide the required credentials.

5. Click Save.

6. For cloud storage datasets:

  1. Textual prompts you to configure the initial file selection. For more information, go to Selecting cloud storage files.

  2. After you select the files, it prompts you to select an output location. For more information, go to Changing cloud storage credentials and output location.

    hashtag
    Providing credentials for Amazon S3

    circle-info

    On self-hosted instances, we are deprecating the options to provide credentials on the dataset panel and read credentials from environment variables.

    Instead, the credentials must be included in the configuration of an IAM role that has the correct permissions.

    If the source type is Amazon S3, provide the credentials to use to connect to Amazon S3.

    Credentials fields for an Amazon S3 dataset
    1. For a self-hosted instance, select the location of the credentials. You can either provide credentials manually, or use credentials that are configured in environment variables. Note that after you save the dataset, you cannot change the selection.

    2. If you are not using environment variables, then in the Access Key field, provide an AWS access key that is associated with an IAM user or role. For an example of a role that has the required permissions for an Amazon S3 dataset, go to Required IAM role permissions for Amazon S3.

    3. In the Access Secret field, provide the secret key that is associated with the access key.

    4. From the Region dropdown list, select the AWS Region to send the authentication request to.

    5. In the Session Token field, provide the session token to use for the authentication request.

    6. To test the credentials, click Test AWS Connection.

7. By default, connections to Amazon S3 use Amazon S3 encryption. To instead use AWS KMS encryption:

  1. Click Show Advanced Options.

  2. From the Server-Side Encryption Type dropdown list, select AWS KMS.

  3. In the Server-side Encryption AWS KMS ID field, provide the KMS key ID. Note that if the KMS key doesn't exist in the same account that issues the command, you must provide the full key ARN instead of the key ID.

  Note that after you save the new dataset, you cannot change the encryption type.

8. Click Save. Textual prompts you to select the dataset files.

    hashtag
    Providing Azure credentials

    Credentials fields for an Azure dataset

    If the source type is Azure, provide the connection information:

    1. In the Account Name field, provide the name of your Azure account.

    2. In the Account Key field, provide the access key for your Azure account.

    3. To test the connection, click Test Azure Connection.

4. Click Save. Textual prompts you to select the dataset files.

    hashtag
    Providing SharePoint credentials

    Credentials fields for a SharePoint dataset

    If the source type is SharePoint, provide the credentials for the Entra ID application.

    The credentials must have the following application permissions (not delegated permissions):

    • Files.Read.All - To see the SharePoint files

• Files.ReadWrite.All - To write redacted files and metadata back to SharePoint

    • Sites.ReadWrite.All - To view and modify the SharePoint sites

    To provide the credentials:

    1. In the Tenant ID field, provide the SharePoint tenant identifier for the SharePoint site.

    2. In the Client ID field, provide the client identifier for the SharePoint site.

    3. In the Client Secret field, provide the secret to use to connect to the SharePoint site.

    4. To test the connection, click Test SharePoint Connection.

5. Click Save. Textual prompts you to select the dataset files.

    To work on the guidelines, click Guidelines refinement. The Guidelines refinement option is enabled when you complete the review on the initial set of test files.

    The first time you display the Guidelines refinement page, Textual uses the guidelines that you provided during the entity type creation to populate the Version 1 tab.

    Guidelines refinement page for a model-based custom entity type

    At the left are the guidelines.

    At the right is the list of test data files.

    hashtag
    File statuses for the guidelines refinement

    For each version of the guidelines, Textual uses the guidelines to detect entity values in the test data.

    The file statuses are:

    • Queued for annotation - Textual has not yet scanned the file.

    • Annotating - Textual is in the process of scanning the file.

    • Annotated - The scan is complete.

    hashtag
    Reviewing the test scores for the guidelines

    When Textual uses guidelines to detect entity values in the test files, it sets the number of detected entities and a set of scores. The scores reflect how well the detections match the entity values that you established in the test data setup. If you change the established values in the test data, Textual updates the scores for the guidelines.

    The overall entity count and scores across all files are displayed across the top of the page. The file list displays the entity count and scores for each file.

    Overall scores and file scores in the Guidelines Refinement list

    The scores are:

    • Precision score - Measures the accuracy of positive predictions. Indicates how many of the detected entities were correctly identified. For example, the guidelines detect 10 values. If only 3 of those are correct, then the precision score is lower than if 7 of those are correct.

    • Recall score - Measures the model's ability to find all of the entities. Indicates how many of the actual entities it detected. For example, the guidelines detect 10 correct values. If the total number of correct values is 20, then the recall score is lower than if the total number of correct values is 12.

    • F1 score - The harmonic mean of precision and recall. The goal is to have a balance between precision and recall. The guidelines should produce annotations that are both accurate and complete. Detecting all of the correct values is not useful if the guidelines also detect a large number of incorrect values. And detecting only correct values is not useful if the guidelines only detect a fraction of the total number of correct values.
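To make the relationship between these scores concrete, here is a small illustration that uses the example numbers above (plain Python arithmetic, not Textual functionality):

# The guidelines detect 10 values, 7 of which are correct,
# and the test data contains 20 established (true) values.
true_positives = 7
detected = 10
actual = 20

precision = true_positives / detected                 # 0.70
recall = true_positives / actual                      # 0.35
f1 = 2 * precision * recall / (precision + recall)    # approximately 0.47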

    hashtag
    Reviewing the guideline detections

    To review the entity values that Textual detected based on the current version of the guidelines, click the file name.

    hashtag
    Editing the guidelines

    Based on how accurately Textual detected the entity values, Textual generates suggested changes to the guidelines.

    For example, it might suggest additional language to more specifically identify values that the previous version either missed or detected incorrectly.

    To start a new version of the guidelines:

    1. Click Edit. If there are suggestions, you can also click Review.

    Edit and Review options for the annotation guidelines
2. On the Annotation guidelines panel, the current guidelines are displayed in an editable text area on the left. On the right is a summary of the suggested updates to the guidelines. To display the proposed replacement guidelines, toggle Show diff to the on position.

    Annotation guidelines panel with AI suggestions to improve the guidelines in the next version
3. To update the guidelines, you can either:

      • Update the guidelines manually.

      • Accept all of the suggestions, and replace the current guidelines. To do this, click Accept changes.

      • Manually copy text from the suggestions and paste it into the guidelines.

4. To save the guidelines version and start the detection and scoring, click Save new version.

    Textual creates a new tab for the new version of the guidelines. The tab label is Version n, where n is incremented for each new version. The most recent version is at the left.

    Guidelines version tabs on the Guidelines Refinement page

    Configuring added and excluded values for built-in entity types

    circle-info

    Required dataset permission: Edit dataset settings

    In a dataset, for each built-in entity type, you can configure additional values to detect, and values to exclude. You cannot define added and excluded values for custom entity types.

    You might add values that Textual does not detect because, for example, they are specific to your organization or industry.

    You might exclude a value because:

    • Textual labeled the value incorrectly.

    • You do not want to redact a specific value. For example, you might want to preserve known test values.

Note that you can also add manual redactions from the file preview.

    hashtag
    Displaying the Configure Entity Detection panel

    From the Configure Entity Detection panel, you configure both added and excluded values for entity types.

    To display the panel, click the settings icon for the entity type.

    The panel contains an Add to detection tab for added values, and an Exclude from detection tab for excluded values.

    hashtag
    Selecting the entity type to add or exclude values for

    The entity type dropdown list at the top of the Configure Entity Detection panel indicates the entity type to configure added and excluded values for.

    The initial selected entity type is the entity type for which you clicked the icon. To configure values for a different entity type, select the entity type from the list.

    hashtag
    Configuring added values

    On the Add to detection tab, you configure the added values for the selected entity type.

    Each value can be a specific word or phrase, or a regular expression to identify the values to add. Regular expressions must be C# compatible.
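For example, to add employee identifiers that follow a known format, you might enter a regular expression such as the following (a hypothetical pattern; substitute the format that your own values use):

EMP-[0-9]{6}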

    hashtag
    Configuring a new added value

    To add an added value:

    1. Click the empty entry.

    2. Type the value into the field.

    hashtag
    Editing an added value

    To edit an added value:

    1. Click the value.

    2. Update the value text.

    hashtag
    Testing an added value

    For each added value, you can test whether Textual correctly detects it.

    To test a value:

    1. From the Test Entry dropdown list, select the number for the value to test.

    2. In the text field, type or paste content that contains a value or values that Textual should detect.

    The Results field displays the text and highlights matching values.

    hashtag
    Removing an added value

    To remove an added value, click its delete icon.

    hashtag
    Configuring excluded values

    On the Exclude from detection tab, you configure the excluded values for the selected entity type.

    Each value can be either a specific word or phrase to exclude, or a regular expression to identify the values to exclude. The regular expression must be C# compatible.

    You can also provide a specific context within which to ignore a value. For example, in the phrase "one moment, please", you probably do not want the word "one" to be detected as a numeric value. If you specify "one moment, please" as an excluded value for the numeric entity type, then "one" is not identified as a number when it is seen in that context.

    hashtag
    Adding an excluded value

    To add an excluded value:

    1. Click the empty entry.

    2. Type the value into the field.

    hashtag
    Editing an excluded value

    To edit an excluded value:

    1. Click the value.

    2. Update the value text.

    hashtag
    Testing an excluded value

    For each excluded value, you can test whether Textual correctly detects it.

    To test the value that you are currently editing:

    1. From the Test Entry dropdown list, select the number for the value to test.

    2. In the text field, type or paste content that contains a value or values to exclude.

    The Results field displays the text and highlights matching values.

    hashtag
    Removing an excluded value

    To remove an excluded value, click its delete icon.

    hashtag
    Saving the updated added and excluded values

    New added values are not reflected in the entity types list until Textual runs a new scan.

    When you save the changes, you can choose whether to immediately run a new scan on the dataset files.

    To save the changes and also start a scan, click Save and Scan Files.

    To save the changes, but not run a scan, click Save Without Scanning Files. When you do not run the scan, then on the dataset details page, Textual displays a prompt to run a scan.

    Creating templates to apply to PDF files

    circle-info

    Required dataset permission: Edit dataset settings

    A dataset might contain multiple files that have the same structure, such as a set of scanned-in forms.

    Instead of adding the same manual overrides for each file, you can use a PDF file in the dataset to create a template that you can apply to other PDF files in the dataset.

    When you edit a PDF file, you can apply a template.

    hashtag
    Creating a PDF template

    To add a PDF template to a dataset:

1. On the Dataset settings page, under PDF Settings, click PDF Templates.

2. On the template creation and selection panel, click Create a New Template.

3. On the template details page:

  1. In the Name field, provide a name for the template.

  2. From the file dropdown list, select the dataset file to use to create the template.

4. Add the manual overrides to the file.

5. When you finish adding the manual overrides, click Save New Template.

    hashtag
    Updating an existing PDF template

    When you update a PDF template, it affects any files that use the template.

    To update a PDF template:

    1. On the Dataset settings page, under PDF Settings, click PDF Templates.

    2. Under Edit an Existing Template, select the template, then click Edit Selected Template.

    3. On the template details panel, you can change the template name, and add or remove manual overrides.

4. To save the changes, click Update Template.

    hashtag
    Managing the manual overrides

    hashtag
    Adding a manual override

    On the template details panel, to add a manual override to a file:

    1. Select the type of override. To indicate to ignore any automatically detected values in the selected area, click Ignore Redactions. To indicate to redact the selected area, click Add Manual Redaction.

    2. Use the mouse to draw a box around the area to select.

    Tonic Textual adds the override to the Redactions list. The icon indicates the type of override.

    hashtag
    Navigating to a manual override

    To select and highlight a manual override in the file content, in the Redactions list, click the navigate icon for the override.

    hashtag
    Removing a manual override

    To remove a manual override, in the Redactions list, click the delete icon for the override.

    hashtag
    Deleting a PDF template

    When you delete a PDF template, the template and its manual overrides are removed from any files that the template was assigned to.

    To delete a PDF template:

    1. On the Dataset settings page, under PDF Settings, click PDF Templates.

    2. Under Edit an Existing Template, select the template, then click Edit Selected Template.

    3. On the template details panel, click Delete.

    Configure entity type handling for redaction

    circle-info

    Required dataset permission: Edit dataset settings

    By default, when you:

    • Configure a dataset

    • Redact a string

    • Retrieve a redacted file

    Textual does the following:

    • For the string and file redaction, replaces detected values with tokens.

    • For LLM synthesis, generates realistic synthesized values.

    When you make the request, you can:

    • Override the default behavior.

    • For individual files and text strings, specify custom entity types to include.

    hashtag
    Specifying the handling option for entity types

    For each entity type, you can choose to redact, synthesize, or ignore the value.

    • When you redact a value, Textual replaces the value with a token that consists of the entity type. For example, ORGANIZATION.

    • When you synthesize a value, Textual replaces the value with a different realistic value.

    • When you ignore a value, Textual passes through the original value.

    To specify the handling option for entity types, you use the generator_config parameter.
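generator_config={'<entity_type>':'<handling_option>'}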

    Where:

• <entity_type> is the identifier of the entity type. For example, ORGANIZATION. For the list of built-in entity types that Textual scans for, go to Built-in entity types. For custom entity types, the identifier is the entity type name in all caps. Spaces are replaced with underscores, and the identifier is prefixed with CUSTOM_. For example, for a custom entity type named My New Type, the identifier is CUSTOM_MY_NEW_TYPE. From the Custom Entity Types page, to copy the identifier of a custom entity type, click its copy icon.

    • <handling_option> is the handling option to use for the specified entity type. The possible values are Redaction, Synthesis, GroupingSynthesis, ReplacementSynthesis, and Off.

    For example, to synthesize organization values, and ignore languages:
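generator_config={'ORGANIZATION':'Synthesis', 'LANGUAGE':'Off'}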

    hashtag
    Specifying a default handling option

    For string and file redaction, you can specify a default handling option to use for entity types that are not specified in generator_config.

    To do this, you use the generator_default parameter.

    generator_default can be either Redaction, Synthesis, GroupingSynthesis, ReplacementSynthesis, or Off.
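For example, a minimal sketch of a string redaction call that combines the two parameters, assuming an instantiated SDK client named textual and the redact method for plain text strings (the example text and variable names are illustrative):

# Synthesize organizations, and pass every other entity type through unchanged.
text = "Alice Smith works at Initech in Dallas."
response = textual.redact(
    text,
    generator_config={'ORGANIZATION': 'Synthesis'},
    generator_default='Off'
)
print(response)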

    hashtag
    Providing added and excluded values for entity types

    You can also configure added and excluded values for each entity type.

    You add values that Textual does not detect for an entity type, but should. You exclude values that you do not want Textual to identify as that entity type.

    • To specify the added values, use label_allow_lists.

    • To specify the excluded values, use label_block_lists.

    For each of these parameters, the value is a list of entity types to specify the added or excluded values for. To specify the values, you provide an array of regular expressions.
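{'<entity_type>':['<regex>']}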

    The following example uses label_allow_lists to add values:

    • For NAME_GIVEN, adds the values There and Here.

    • For NAME_FAMILY, adds values that match the regular expression ([a-z]{2}).
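label_allow_lists={
    'NAME_GIVEN': ['There', 'Here'],
    'NAME_FAMILY': ['([a-z]{2})']
}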

    hashtag
    Including custom entity types

    When you redact a string or download a redacted file, you can provide a comma-separated list of custom entity types to include. Textual then scans for and redacts those entity types based on the configuration in generator_config.

    For example:
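custom_entities=["<entity type identifier>"]

custom_entities=["CUSTOM_COGNITIVE_ACCESS_KEY", "CUSTOM_PERSONAL_GRAVITY_INDEX"]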

    Creating and training models for a model-based entity type

    After you select your training data, on the Model training page, you create one or more trained models.

    For each model, you select the version of the guidelines to use. Textual first uses those guidelines to annotate the training data. Based on how well the guidelines identified the values in the training data, you decide whether to start the model training.

    When the training is complete, the model scans the test data. The model is scored based on how well it detected the definitive values that you confirmed in the test data.

    hashtag
    Information on the model list

    Redacting a project file

    circle-info

    Required guided redaction permission: Edit file redactions

    In Redaction mode, you add and remove redactions.

    To change to Redaction mode, in the file heading, click the edit icon.

    File preview for a redacted file

    For a dataset that generates output files of the same type as the original file:

    • On the left, the preview displays the original data. The detected entity values are highlighted.

    • On the right, the preview displays the data with replacement values that are based on the dataset configuration for the detected entity types.

    Summary results for the dataset

The Project files and Entity settings pages display the following summary results for the dataset:

    • The number of detected entities

    • The percentage of dataset content that is sensitive
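• The number of entity types for which there are detected entities

• The number of files in the dataset

• The total number of words in the dataset files

Summary results for a dataset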

    hashtag
    Adding redactions

    In a text file, you can select specific text to redact.

    For non-text files such as a PDF or an image, you redact a selected area or an entire page.

    hashtag
    Redacting selected text in a text file

    To add a single redaction of selected text:

    1. Select the text. Note that you can add a redaction of text within an existing redaction.

    2. On the redaction panel, select the reference codes that apply.

    hashtag
    Redacting a selected area in a PDF or image file

    For PDF and image files, you redact an area that you draw over the page. You can draw over a single word, or you might use this option to redact a displayed image or an entire table.

    To redact a selected area:

    1. Click Draw Redaction or press d. This puts you in draw redaction mode.

    2. Use the mouse to draw a box over the area to redact.

    3. After you draw the box, identify the reference codes that apply.

    The selected reference codes replace any individual redactions that were within the area.

    However, Textual does save those individual redactions. If you remove the redaction for the selected area, the individual redactions are restored.

    To exit draw redaction mode, click Draw Redaction or press d again.

    hashtag
    Redacting selected values throughout the file

    If there are multiple instances of a value in a file, then when you redact one instance, you can choose to redact all of the instances in that file.

    To redact all of the instances of a selected value in the current file:

    1. Select the value - either highlight the text or use area selection.

    2. Select the reference codes to apply.

    3. Click the file icon.

    4. On the confirmation panel, click Confirm.

    hashtag
    Redacting selected values throughout the project

    If there are other instances of the value in other project files, then when you redact one instance, you can choose to redact all of the instances in the project.

    To redact all of the instances of a selected value in all of the project files:

    1. Select the value - either highlight the text or use area selection.

    2. Select the reference codes to apply.

    3. Click the folder icon.

    4. On the confirmation panel, click Confirm.

    hashtag
    Redacting an entire page

    For PDF files and images, you can redact an entire page.

    To redact the current page:

    1. Click Redact Current Page.

    2. On the panel, select the reference codes that apply.

    The selected reference codes replace any individual redactions that were on the page.

    However, Textual does save those individual redactions. If you remove the redaction for the entire page, the individual redactions are restored.

    hashtag
    Identifying redactions that do not have reference codes

    Every redaction must be assigned at least one reference code. To enable Textual to assign reference codes when it performs its scan, you must assign reference codes to the entity types that are present in the project files.

    Redactions that do not have reference codes are not redacted in the downloaded output files. They also are not marked in Review mode or Preview mode.

    In Redaction mode:

    • In the file content, redactions without a reference code are outlined in red.

    • The redactions list displays a separate list of redactions that do not have assigned reference codes. The list heading includes a link to the reference code settings.

    hashtag
    Changing the assigned reference codes for a redaction

    For each redaction, you can add and remove assigned reference codes.

    hashtag
    From the file content

    From the file content, to change the assigned reference codes for a redaction:

    1. Click the redaction.

    2. On the panel, to add a reference code, click Add, then select the reference code to add.

    3. To remove a reference code, click the delete icon for the reference code.

    hashtag
    From the redaction list

    The redaction list at the right includes the assigned reference codes for each redaction. A + icon displays next to the assigned codes.

    To change the assigned reference codes:

    1. Click the redaction.

    2. On the panel, check the checkboxes for the reference codes to add.

    From the list of entities that do not have reference codes, to add reference codes:

    1. Click the entity value, then click Assign reference codes.

    2. On the panel, check the checkboxes for the reference codes to add.

    hashtag
    Removing redactions

    You can remove redactions from the file.

    When you remove a redaction of a selected area or an entire page, Textual restores any individual redactions that were there previously.

    hashtag
    From the file content

    From the file content, to remove a single redaction:

    1. Click the redaction.

    2. On the redaction panel, click the delete icon.

    hashtag
    From the redaction list

    From the redaction list, to remove one or more redactions:

    1. Check the checkbox for each redaction to remove.

    2. Click Remove.

    hashtag
    Adding and removing text highlights

    Instead of redacting selected text, you can choose to highlight it.

    Text highlights are not assigned reference codes. In the redaction list and output, the text is struck through instead of covered.

    hashtag
    Adding a text highlight

    To add a text highlight:

    1. Select the text to highlight.

    2. On the panel, click the strikethrough icon.

    hashtag
    Removing a text highlight

    To remove a highlight:

    1. Click the highlight.

    2. On the panel, click the delete icon.

    hashtag
    Selecting multiple PDF or image redactions to update

    In a PDF file or an image file, to update multiple redactions:

    1. Either click Bulk Selection, or press b. This puts you into bulk selection mode.

    2. Use the mouse to draw a box around the redactions to select.

    3. On the panel, select the updates to apply.

    When you are finished with the updates, to exit bulk selection mode, click Bulk Selection or press b again.

    hashtag
    Undoing and redoing redaction changes

    Textual maintains a history of your redaction changes.

    To undo the most recent change, click the undo icon.

    To restore the most recent undone change, click the redo icon.

    When you apply a redaction to all matching values in a file or project, Textual undoes and redoes all of those changes.

    For changes applied in bulk selection mode, Textual only undoes or redoes one change at a time. For example, if you selected 3 redactions and then removed them, when you select the undo option, it only restores one of those redactions. To restore all of those redactions, you would select undo 3 times.

• Date Time (DATE_TIME) - A date or timestamp.

• DOB (DOB) - A person's date of birth.

• Email Address (EMAIL_ADDRESS) - An email address.

• Event (EVENT) - The name of an event.

• Gender Identifier (GENDER_IDENTIFIER) - An identifier of a person's gender.

• Healthcare Identifier (HEALTHCARE_ID) - An identifier associated with healthcare, such as a patient number.

• IBAN Code (IBAN_CODE) - An international bank account number used to identify an overseas bank account.

• IP Address (IP_ADDRESS) - An IP address.

• Language (LANGUAGE) - The name of a spoken language.

• Law (LAW) - A title of a law.

• Location (LOCATION) - A value related to a location. Can include any part of a mailing address.

• Occupation (OCCUPATION) - A job title or profession.

• Street Address (LOCATION_ADDRESS) - A street address.

• City (LOCATION_CITY) - The name of a city.

• State (LOCATION_STATE) - A state name or abbreviation.

• Zip (LOCATION_ZIP) - A postal code.

• Country (LOCATION_COUNTRY) - The name of a country.

• Full Mailing Address (LOCATION_COMPLETE_ADDRESS) - A full postal address. By default, the entity type handling option for this entity type is Off.

• Medical License (MEDICAL_LICENSE) - The identifier of a medical license.

• Money (MONEY) - A monetary value.

• Given Name (NAME_GIVEN) - A given name or first name.

• Family Name (NAME_FAMILY) - A family name or surname.

• NRP (NRP) - A nationality, religion, or political group.

• Numeric Identifier (NUMERIC_PII) - A numeric value that acts as an identifier.

• Numeric Value (NUMERIC_VALUE) - A numeric value.

• Organization (ORGANIZATION) - The name of an organization.

• Password (PASSWORD) - A password used for authentication.

• Person Age (PERSON_AGE) - The age of a person.

• Phone Number (PHONE_NUMBER) - A telephone number.

• Product (PRODUCT) - The name of a product.

• URL (URL) - A URL to a web page.

• US Bank Number (US_BANK_NUMBER) - The account number of a bank in the United States.

• US Bank Routing Number (US_ROUTING_TRANSIT_NUMBER) - The routing number of a bank in the United States.

• US ITIN (US_ITIN) - An Individual Taxpayer Identification Number in the United States.

• US Passport (US_PASSPORT) - A United States passport identifier.

• US SSN (US_SSN) - A United States Social Security number.


    For each model, the model list includes:

    Model training page
    • Model - The model name. Models are automatically named Model n, where n is the number of the model. For example, the first model you create is Model 1, the second is Model 2, and so on.

    • Status - The model status. The possible statuses are:

      • Annotating - The model is using the selected guidelines to annotate the training data.

      • Ready for training - The annotation is complete. For models with this status, Textual displays a Review option to allow you to review the annotations.

      • Training - The training is in progress. Textual displays the percentage of training data that the model has trained on.

      • Ready - The model is trained. You can select any trained model as the active model for the entity type.

    • Guideline version - The version of the guidelines used for the model. To view the guidelines text, click the view icon.

    • Benchmark score - A score that indicates how well the model performed when it annotated the test data after training.

    • Detected entities - The number of entity values that the model detected in the training data.

    • # of files - The number of training files that were used for the annotation and model training.

    hashtag
    Starting a new model

    To start a new model:

    1. Click Create new model.

    Create new model panel to select the guidelines version for the model
2. On the Create new model panel, from the Guideline version dropdown list, select the version of the guidelines to use for the model.

3. Click Save.

    Textual adds the model to the list and uses the selected guidelines version to annotate the training data files.

    hashtag
    Reviewing the annotations for a model

    Before you train the model, you review the annotations to see how well the model performed.

    To review the annotations, click the model name. Models that are ready to review also display a Review and Train link next to the model name.

    On the model details page:

    • On the left is the list of training data files, with the number of entities detected in each file.

    • On the right is the list of the entities in the training files, in descending order by the number of occurrences.

    Model details page with the list of detected values

    To display the content of a file with the annotations highlighted, click the file name.

    Model details page with the content of an annotated training file

    After you review the annotations, if you are not satisfied with the results, to return to the guidelines refinement:

    1. In the model list, in the Guideline version column, click the view icon.

    2. On the guidelines panel, click Go to guidelines refinement.

    Guidelines panel for a model, with the option to return to the guidelines refinement

    For a model that is not trained yet, the model details page also displays a Modify guidelines option.

    Textual displays the Guidelines Refinement page, and selects that guidelines version. You can then edit the guidelines to create a new version, then create a new model that uses the new version.

    hashtag
    Training the model

    If you are satisfied with the annotation results, then on the model details page, to start the training, click Train model.

    Train model option for a model

    hashtag
    Downloading a data package for a model

    To help troubleshoot issues with a trained model, you can download a model data package to send to Tonic.ai.

    The data package is a .zip file that contains the following:

• General information about the custom entity type and model. Includes the entity type name, the entity type identifier, and the model identifier.

    • The set of test files, including the established entity values that you identified.

    • The set of training files, including the entity values that the model identified.

    To download the data package, either:

    • On the Model Training page, click the download icon for the model.

    • On the model details page, click Download Training Data.

    Download Training Data option on the model details for a trained model
    hashtag
    Preview for PDF and image files

    For a PDF or image file, for entity types that use the Redact handling option, the value is covered by a black box.

    File preview for a redacted PDF file

    The preview for a PDF file also reflects any manual edits.

    hashtag
    Selecting entity type handling options from the preview

    You can use the preview to select the entity type handling option for each entity type. The options are:

    • Redaction - This is the default value. Textual replaces the value with the name of the entity type followed by a unique identifier. For example, the first name John is replaced with NAME_GIVEN_12345. Note that the identifier is only visible in the downloaded file. It does not display on the preview.

    • Synthesize - Textual replaces the value with a realistic generated value. For example, the first name John is replaced with the first name Michael. The replacement values are consistent, which means that a given value always has the same replacement. For example, Michael is always the replacement value for John.

    • Ignore - Textual ignores the value and copies it as is to the output file.

    To select the entity type handling option:

    1. In the results panel, click a detected value. For a PDF file, you can click the value in either the source or the results panel.

    2. On the details panel, click the entity type handling option. Textual applies the same option to all entity values of that type.

    Selecting an entity type handling option

    From the preview, you can only select the entity type handling option. For the Synthesis option, you cannot configure synthesis options for an entity type. You must configure those options from the dataset details page. For more information, go to Configuring entity type synthesis options.

    hashtag
    Ignoring specific instances in PDF files

    From the PDF preview, you can also choose to ignore a specific value.

    To configure whether to ignore a specific detected value:

    1. In the source or results panel, click the value.

    2. On the details panel, to ignore the value, toggle Ignore to the on position.

    Panel with the option to ignore a PDF value

    hashtag
    Adding and removing manual redactions

    From the file preview, you can add manual redactions to text, PDF, and image files.

    You cannot add manual redactions to .docx files.

    When you add a manual redaction, you select the entity type to assign to it.

    hashtag
    Manual redactions in text files

    hashtag
    Adding a manual redaction to a text file

    To add a manual redaction to a text file:

    1. In the source text panel on the left, select the text to redact.

    2. On the redaction panel, from the dropdown list, select the entity type for the redaction.

    Redaction creation panel for a text file redaction

    hashtag
    Removing a manual redaction from a text file

    To remove a manual redaction from a text file:

    1. In the source text panel on the left, double-click the redaction.

    2. Click Delete.

    Redaction details for an existing text file redaction

    hashtag
    Manual redactions in PDF or image files

    hashtag
    Adding manual redactions to a PDF or image file

    For a PDF or image file, to change to redaction mode, click Add Redaction.

    Add Redaction option for a PDF or image

    The Add Redaction button changes to Done.

    Done button to exit redaction mode for a PDF option

    While in redaction mode, to add a redaction:

    1. In the source text panel on the left, draw a box around the content to redact.

    2. On the redaction details panel, from the dropdown list, select the entity type for the redaction.

    Redaction details panel before the entity type is selected
3. Click Add Redaction.

    To exit redaction mode, click Done.

    hashtag
    Viewing the count of manual redactions for a page

    The preview heading displays the count of manual redactions on the current page of the file.

    Count of manual redactions on the current page

    hashtag
    Removing a manual redaction from a PDF or image file

    To remove a manual redaction from a PDF or image file:

    1. In the source text panel on the left, click the redaction.

    2. On the redaction details panel, click Delete Redaction.

    Delete Redaction option for a manual redaction
    File preview with the original and redacted and synthesized text data

    Create and manage datasets

    Textual uses datasets to produce files with sensitive values replaced.

    Before you perform these tasks, remember to instantiate the SDK client.

    hashtag
    Get your list of datasets

To get the complete list of datasets that you own, use textual.get_all_datasets.
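datasets = textual.get_all_datasets()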

    hashtag
    Create and add files to a dataset

    circle-info

    Required global permission: Create datasets

    Required dataset permission: Upload files to a dataset

To create a new dataset and then upload a file to it, use textual.create_dataset.

To add a file to the dataset, use dataset.add_file. To identify the file, provide the file path and name.

    To provide the file as IO bytes, you provide the file name and the file bytes. You do not provide a path.
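dataset = textual.create_dataset('<dataset name>')
dataset.add_file('<path to file>', '<file name>')
dataset.add_file('<file name>', <file bytes>)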

    Textual creates the dataset, scans the uploaded file, and redacts the detected values.

    hashtag
    Configure a dataset

    circle-info

    Required dataset permission: Edit dataset settings

To change the configuration of a dataset, use dataset.edit.

    You can use dataset.edit to change:

    • The name of the dataset

• The handling option for each entity type

• Added or excluded values for each entity type

    Alternatively, instead of specifying the configuration, you can use the copy_from_dataset parameter to indicate to copy the configuration from another dataset.
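For example:

dataset.edit(
    name='<dataset name>',
    generator_config={'<entity_type>': '<handling_type>'},
    label_allow_lists={'<entity_type>': LabelCustomList(regexes=['<regex>'])},
    label_block_lists={'<entity_type>': LabelCustomList(regexes=['<regex>'])}
)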

    hashtag
    Get the current status of dataset files

    circle-info

    Required dataset permission: Preview redacted dataset files

To get the current status of the files in the current dataset, use dataset.describe.

    The response includes:

    • The name and identifier of the dataset

    • The number of files in the dataset

    • The number of files that are waiting to be processed (scanned and redacted)
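• The number of files that had errors during processing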

    For example:
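dataset.describe()
    Dataset: example [879d4c5d-792a-c009-a9a0-60d69be20206]
    Number of Files: 1
    Files that are waiting for processing: 
    Files that encountered errors while processing: 
    Number of Rows: 0
    Number of rows fetched: 0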

    hashtag
    Get lists of files by status

    circle-info

    Required dataset permission: Preview redacted dataset files

    To get a list of files that have a specific status, use the following:
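• dataset.get_failed_files

• dataset.get_running_files

• dataset.get_queued_files

• dataset.get_processed_files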

    The file list includes:

    • File identifier and name

    • Number of rows and columns

    • Processing status

    • For failed files, the error
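• When the file was uploaded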

    hashtag
    Delete a file from a dataset

    circle-info

    Required dataset permission: Delete files from a dataset

To delete a file from a dataset, use dataset.delete_file.
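dataset.delete_file('<file identifier>')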

    hashtag
    Get redacted content for a dataset

    circle-info

    Required dataset permission: Download redacted dataset files

To get the redacted content in JSON format for a dataset, use dataset.fetch_all_json():
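dataset = textual.get_dataset('<dataset name>')
dataset.fetch_all_json()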

    For example:
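dataset = textual.get_dataset('mydataset')
dataset.fetch_all_json()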

    The response looks something like:
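'[["PERSON Portrait by PERSON, DATE_TIME ...]'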

    Previewing Textual detection and redaction

    circle-info

    Required global permission: Use the playground on the Home page

    The Tonic Textual Home page provides a tool that allows you to see how Textual detects and replaces values in plain text or an uploaded file.

    It also provides a preview of the redaction configuration options, including:

    Language support in Textual

    Tonic Textual supports languages in addition to English. Textual automatically detects the language and applies the correct model.

    On self-hosted instances, you configure whether to support multiple languages, and can optionally provide auxiliary language models.

    hashtag
    Supported languages

    Textual can detect values in the following languages:

    Selecting and reviewing test data

    For a model-based custom entity, you first select a set of test data. You annotate the test data to identify all of the entity values that are in those files.

    The test data is a small set of files - up to around 5 files - that contain typical entity type values. Each file also should be relatively small - no more than 5,000 words.

    For example, for an entity type that identifies health conditions, you might select 5 or 6 medical appointment reports that contain a variety of typical values.

    When you iterate over the model guidelines, Textual uses those guidelines to scan the files, and generates scores to indicate how well its detections matched the set of values that you established during your review.

    When a model finishes training, Textual uses the model to scan the test files, and generates a score to indicate how well its detections matched your established values.

    Record and review redaction requests

    circle-info

    Required global permission: Use the Request Explorer

    When you use the redact method to redact a plain text string, you can also choose to record the request.

    The recorded requests are encrypted.

    When you make the request, you specify the number of hours to keep the recorded request. After that amount of time elapses, the request is completely purged. Recorded requests are never kept more than 720 hours, regardless of the configured retention time.

    Configuring and editing PDF redaction and synthesis

    circle-info

    Required dataset permission: Edit dataset settings

    You can configure how Textual works with PDFs. For an individual PDF file, you can add manual overrides to selected areas of a file. Manual overrides can ignore detected values from Tonic Textual, or add redactions.


    Configure PDF options

    Determine the synthesis process to use, and how to manage PDF signatures.

    Edit an individual file

    Add manual overrides to a PDF file. You can also apply a template.

    Create PDF templates

    PDF templates allow you to add the same overrides to files that have the same structure.

    dataset = textual.create_dataset('<dataset name>')
    dataset.add_file('<path to file>','<file name>') 
    dataset.add_file('<file name>',<file bytes>) 
dataset.edit(name='<dataset name>', 
      generator_config={'<entity_type>':'<handling_type>'},
      label_allow_lists={'<entity_type>': LabelCustomList(regexes=['<regex>'])},
      label_block_lists={'<entity_type>': LabelCustomList(regexes=['<regex>'])}
    )
    dataset.describe()
        Dataset: example [879d4c5d-792a-c009-a9a0-60d69be20206]
        Number of Files: 1
        Files that are waiting for processing: 
        Files that encountered errors while processing: 
        Number of Rows: 0
        Number of rows fetched: 0
    dataset.delete_file('<file identifier>')
    dataset = textual.get_dataset('<dataset name>')
    dataset.fetch_all_json()
    dataset = textual.get_dataset('mydataset')
    dataset.fetch_all_json()
    '[["PERSON Portrait by PERSON, DATE_TIME ...]'

The Home page displays automatically when you log in to Textual. To return to the Home page from other pages, in the navigation menu, click Home.

    Initial view of the Textual Home page

    hashtag
    Providing the content to redact

    To provide the content to redact, you can enter text directly, or you can upload a file.

    hashtag
    Entering text

    As you enter or paste text in the Textual playground text area, Textual displays the redacted version in the Results panel at the right.

    Home page with redacted text

    hashtag
    Using one of the samples

    Textual also provides sample text options for some common use cases. To populate the text with a sample, click Try a sample, then select the sample to use.

    Sample text options for the Home page

    hashtag
    Uploading a file

    You can also redact .txt or .docx files.

    To provide a file, click Upload, then search for and select the file.

    Textual processes the file and then displays the redacted version in the Results panel. The Textual playground text area is removed.

    Home page with the content of an uploaded file

    If you try to upload a file type that isn't supported, such as a PDF file, Textual prompts you to create a dataset that contains the file.

    Dataset creation prompt when the file type is not supported for the preview tool

    hashtag
    Clearing the text

    To clear the text, click Clear.

    hashtag
    Selecting the handling option for an entity type

    The handling option indicates how Textual replaces a detected value for an entity type. You can experiment with different handling options.

    Note that the updated configuration is only used for the current redacted text. When you clear the text, Textual also clears the configuration.

    The options are:

    • Redaction - This is the default value. Textual replaces the value with the name of the entity type, followed by a token to distinguish values of the same type. The same value always has the same token. For example, the first name John might be replaced with NAME_GIVEN_dySb5. In the same file, the first name Mary might be replaced with NAME_GIVEN_zrL2f.

• Synthesis - Textual replaces the value with a realistic generated value. For example, the first name John is replaced with the first name Michael. The replacement values are consistent, which means that a given value always has the same replacement. For example, Michael is always the replacement value for John.

    • Ignore - Textual ignores the value and copies it as is to the Results panel.

    To change the handling option for an entity type:

    1. In the Results panel, click an instance of the entity type.

    2. On the configuration panel, click the handling option to use.

    Selecting the handling option for an entity type

    Textual updates all instances of that entity type to use the selected handling option.

    For example, if you change the handling option for NAME_GIVEN to Synthesis, then all instances of first names are replaced with realistic values.

    Redacted text with given name value synthesized

    hashtag
    Defining added and excluded values

    For each entity type in entered text, you can use regular expressions to define added and excluded values.

    • Added values are values that Textual does not detect for an entity type, but that you want to include. For example, you might have values that are specific to your company or industry.

    • Excluded values are values that you do not want Textual to identify as a given entity type.

    Note that the configuration is only used for the current redacted text. When you clear the text, Textual also clears the configuration.

    Also, this option is only available for text that you enter directly. For an uploaded file, to do additional configuration or to download the file, you must create a dataset from the file.

    hashtag
    Displaying the configuration panel

    To display the configuration panel for added and excluded values, click Fine-tune Results.

    The Fine-Tune Results panel displays the list of configured rules for the current text. For each rule, the list includes:

    • The entity type.

    • Whether the rule adds or excludes values.

    • The regular expression to identify the added or excluded values.

    Fine-Tune Results panel for added and excluded values

    hashtag
    Adding a rule to add or exclude values

    On the Fine-Tune Results panel, to create a rule:

    1. Click Add Rule.

    Row to define a new rule for added or excluded values
2. From the entity type dropdown list, select the entity type that the rule applies to.

    3. From the rule type dropdown list:

      • If the rule adds values, then select Include.

      • If the rule excludes values, then select Exclude.

    4. In the regular expression field, provide the regular expression to use to identify the values to add or exclude.

    5. To save the rule, click the save icon.

    hashtag
    Editing a rule

    To edit a rule:

    1. On the Fine-Tune Results panel, click the edit icon for the rule.

    2. Update the configuration.

    3. Click the save icon.

    hashtag
    Deleting a rule

    On the Fine-Tune Results panel, to delete a rule, click its delete icon.

    hashtag
    Creating a dataset from an uploaded file

    From an uploaded file, you can create a dataset that contains the file.

    You can then provide additional configuration, such as added and excluded values, and download the redacted file.

    To create a dataset from an uploaded file:

    1. Click Download.

    2. Click Create a Dataset.

    Textual displays the dataset details for the new dataset. The dataset name is Playground Dataset <number>, where the number reflects the number of datasets that were created from the Home page.

    The dataset contains the uploaded file.

    hashtag
    Viewing and copying the request code

    When Textual generates the redacted version of the text, it also generates the corresponding API request. The request includes the entity type configuration.

    To view the API request code, click Show Code.

    Code to create the redaction request, including the entity type handling and added and excluded values

    To hide the code, click Hide Code.

    hashtag
    Selecting the request code type

    On the code panel:

    • The Python tab contains the Python version of the request.

    • The cURL tab contains the cURL version of the request.

    hashtag
    Copying the request code

    To copy the currently selected version of the request code, click Copy Code.

    hashtag
    Enabling and using additional LLM processing of detected entities

    Textual offers an option to send detected entity information to a custom Large Language Model (LLM) to synthesize accurate replacements.

    circle-info

    The Textual LLM functionality runs only on the Textual Cloud infrastructure. It does not use any third-party LLM providers.

    hashtag
    LLM synthesis methods

    Textual provides the following LLM synthesis methods.

    hashtag
    ReplacementSynthesis

    ReplacementSynthesis redacts sensitive values. It uses the LLM to generate contextually appropriate replacements based on the surrounding text.

    When you use this method:

    1. Textual identifies sensitive values in the text.

    2. Textual redacts the values and sends the following to the LLM:

      • Redacted placeholders for the detected values, such as ORGANIZATION or NAME_GIVEN

      • The positions of the detected entities

      • The surrounding text context

      Textual does not send the original sensitive values to the LLM.

    3. The LLM analyzes the context.

    4. The LLM generates realistic replacement values that fit naturally within the text.

    hashtag
    GroupingSynthesis

    GroupingSynthesis does the following:

    • Groups related entities

    • Generates new entity names

    • Uses the LLM to reproduce the original format of the value

    When you use this method:

    1. Textual sends the detected entity values and surrounding text to the LLM. To enable grouping and format pattern recognition, Textual must send the original sensitive values.

    2. The LLM groups entities based on whether they refer to the same thing, concept, or person. Grouping is only done within each entity type. For example, Lyon the person and Lyon the city are never grouped together.

    3. The LLM chooses a representative value for each group. For example, if the content includes Will, William, and W.I.L.L, it chooses William as the most complete form.

    4. The representative value is sent to Textual's standard, non-LLM synthesis generators to get a replacement value.

    5. The LLM formats the replacement to match the original format. For example, if Will is replaced with Rob, then W.I.L.L becomes R.O.B.

    hashtag
    Making the LLM processing available

    To enable the LLM processing, set the environment variable ENABLE_EXPERIMENTAL_SYNTHESIS to True. If this is not set to True, then the LLM processing does not work.

    You must also set up the Solar.LLM container.

    hashtag
    Configuring the Solar.LLM container

    To configure the container, you can use the following Docker Compose content as a reference:

    The AWS keys are used to download the Textual custom models. To obtain a copy of the keys, contact your Tonic.ai support representative.

    hashtag
    Enabling the LLM processing for entered text

    After you enter text in the Textual playground panel, to enable the LLM processing, in the Results panel, click Use an LLM to perform AI synthesis.

    You cannot use this option for text that contains more than 100 words.

    By default, the LLM processing applies the following synthesis methods to the entity types:

    When you clear the text, Textual reverts to the default processing.

    hashtag
    Processing with the SDK

    In the Python SDK, to use LLM synthesis, call the redact function.
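
The following minimal sketch illustrates such a call. It assumes that textual is an instantiated SDK client and reuses the PiiState import that appears in the SDK examples later in this documentation; the sample text and the selected entity types are illustrative.

    from textual import PiiState

    sample_text = "Will called from Lyon about his order."

    # Request LLM-based synthesis for selected entity types and turn off the rest.
    response = textual.redact(
        sample_text,
        generator_config={
            "NAME_GIVEN": PiiState.GroupingSynthesis,
            "LOCATION_CITY": PiiState.GroupingSynthesis,
        },
        generator_default=PiiState.Off,
    )

    print(response.redacted_text)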

• Afrikaans (af)
    • Albanian (sq)
    • Amharic (am)
    • Arabic (ar)
    • Armenian (hy)
    • Assamese (as)
    • Azerbaijani (az)
    • Basque (eu)

    hashtag
    Self-hosted instances

    On a self-hosted instance, you configure whether Textual supports multiple languages.

    You can also optionally provide auxiliary language models.

    hashtag
    Enabling multi-language support

    To enable support for languages other than English, set the environment variable TEXTUAL_MULTI_LINGUAL=true.

    The setting is used by the machine learning container.

    hashtag
    Providing auxiliary language model assets

    You can provide additional language model assets for Textual to use.

    By default, Textual looks for model assets in the machine learning container, in /usr/bin/textual/language_models. The default Helm and Docker Compose configurations include the volume mount.

    To choose a different location, set the environment variable TEXTUAL_LANGUAGE_MODEL_DIRECTORY. Note that if you change the location, you must also modify your volume mounts.

For help with installing model assets, contact Tonic.ai support ([email protected]).

    hashtag
    Selecting the initial set of test files

    On the Test data setup page, to select the files, you can do a combination of:

    • Paste text into a text field.

    • Upload files from a local system.

    • Select files from one and only one of the following cloud storage options:

      • An S3 bucket

      • Azure Blob Storage

      • A SharePoint repository

    Test Data Setup page with no files selected

    After you select the initial set of test files, Textual uses the draft guidelines that you provided to identify entity values in the files.

    hashtag
    Pasting text directly

    To paste text directly:

    1. Click Sample Text.

    Sample Text field to create a test file from pasted text
2. In the field, paste the text.

    3. Click Next.

    hashtag
    Uploading local files

    To upload local files for the draft model to annotate:

    1. Click File Upload.

    2. Click Upload Files.

    3. Search for and select the files.

    4. Click Next.

    hashtag
    Providing Amazon S3 credentials

    To provide credentials for Amazon S3:

    1. Click Amazon S3.

    Credentials fields to connect to Amazon S3
2. For a self-hosted instance, select the location of the credentials. You can either provide credentials manually, or use credentials that are configured in environment variables. Note that after you save the credentials, you cannot change the selection.

    3. If you are not using environment variables, then in the Access Key field, provide an AWS access key that is associated with an IAM user or role. For an example of a role that has the required permissions for an Amazon S3 dataset, go to Required IAM role permissions for Amazon S3.

    4. In the Access Secret field, provide the secret key that is associated with the access key.

    5. From the Region dropdown list, select the AWS Region to send the authentication request to.

    6. In the Session Token field, provide the session token to use for the authentication request.

    7. To test the credentials, click Test AWS Connection.

    8. Click Next. Textual prompts you to select the files.

    hashtag
    Providing Azure credentials

    To provide credentials for Azure:

    1. Click Azure.

    Credentials fields to connect to Azure
2. In the Account Name field, provide the name of your Azure account.

    3. In the Account Key field, provide the access key for your Azure account.

    4. To test the connection, click Test Azure Connection.

    5. Click Next. Textual prompts you to select the files.

    hashtag
    Providing SharePoint credentials

    For SharePoint, click SharePoint, then provide the credentials for the Entra ID application.

    Credentials fields to connect to SharePoint

    The credentials must have the following application permissions (not delegated permissions):

    • Files.Read.All - To see the SharePoint files

• Files.ReadWrite.All - To write redacted files and metadata back to SharePoint

    • Sites.ReadWrite.All - To view and modify the SharePoint sites

    To provide the credentials:

    1. In the Tenant ID field, provide the SharePoint tenant identifier for the SharePoint site.

    2. In the Client ID field, provide the client identifier for the SharePoint site.

    3. In the Client Secret field, provide the secret to use to connect to the SharePoint site.

    4. To test the connection, click Test SharePoint Connection.

    5. Click Next. Textual prompts you to select the files.

    hashtag
    Selecting cloud storage files

    After you provide the credentials, you select the files to use.

    For test data, you cannot select folders. You must select individual files.

    hashtag
    Viewing the file list

    On the Test data setup page:

    • The list of test files displays at the left.

    • The content of the selected file displays at the right, with the entity values highlighted.

    Test Data Setup page with selected files

    hashtag
    Adding data to the list

    You can add to the test data at any time, including when you are iterating over the model guidelines.

    To add data, on the Test data setup page:

    1. Click Add test sample.

    Add test sample dropdown list with source type options to add test files
2. From the sample type menu, select the source type for the new data. The Write sample text and Upload Files options are always available. If you previously selected data from a cloud storage solution, then that cloud storage solution is available. You cannot add files from a different cloud storage solution. For example, if you initially selected files from Amazon S3, then you cannot select files from Azure or SharePoint. If you did not previously select data from a cloud storage solution, then you can select from any of the cloud storage solutions.

    3. For a cloud storage solution, if needed, provide the credentials for the cloud storage solution, then select the additional files.

    4. For sample text, provide the content.

    5. For upload, search for and select the files.

    When you add to the test data, Textual uses the most recent version of the guidelines to identify entity values in the new data. You can then conduct the review.

    hashtag
    File review statuses

    Each file goes through the following statuses:

    • Queued for upload - Textual is uploading the file to the set of test files.

    • Ready for Review - The file is uploaded, but you have not yet reviewed the file to finalize the entity values that the file contains.

    • Reviewed - You completed the review.

    hashtag
    Reviewing a file and changing the detected values

    To review a file, click the file name. The file content displays to the right. The values from the initial detection are highlighted.

    • To add an instance of an entity value, select the value text.

    • To remove an instance, click its delete icon. On the confirmation panel, click Delete.

    To save the current annotation updates, but not mark the file as reviewed, click Save.

    When you finish the review and complete the changes, click Save and mark as reviewed.

    hashtag
    Deleting test files

    On the Test Data Setup page, to delete a test file:

    1. Click its delete icon.

    2. On the confirmation panel, you can choose to skip the confirmation when you delete test files. If you select this option, then the next time you delete a test file, the file is deleted immediately, and the panel does not display.

    3. Click Delete.

    When you delete a test file:

    • For existing guidelines versions, the file name and scores remain in the list of test files for those guidelines. The file name is dimmed, and you can no longer display a preview of the file content.

    • For existing models that annotated the deleted file during their training, the benchmark score does not change.

    • For new guidelines versions, the file is not used and is not listed.

    • For models that are trained after the file is deleted, the file is not annotated and is not included in the benchmark score.

    From the Request Explorer, you can review your recorded requests to check the results and assess the quality of the redaction. You can also test changes to the redaction configuration.

    You cannot view requests from other users.

    hashtag
    Recording a redaction request

    To record a redaction request, you include the record_options argument:

    The record_options argument includes the following parameters:

    • record - Whether to record the request. The default is False. To record the request, set record to True.

    • retention_time_in_hours - The number of hours to preserve the recorded request. The default is 1. After the retention time elapses, the request is purged completely.

    • tags - A list of tags to assign to the request. The tags are mostly intended to make it easier to search for requests on the Request Explorer page.
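
For example, a minimal sketch of a recorded request. It assumes that textual is an instantiated SDK client and that RecordApiRequestOptions is imported from your installed Textual SDK package (the exact module path depends on the SDK version); the sample text, retention time, and tag are illustrative.

    record_options = RecordApiRequestOptions(
        record=True,                 # record this request
        retention_time_in_hours=24,  # purge the recording after 24 hours
        tags=["sdk-demo"],           # tags to help find the request in the Request Explorer
    )

    response = textual.redact("My name is John Smith.", record_options=record_options)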

    hashtag
    Viewing the list of recorded requests

The Request Explorer page in Textual contains the list of requests that you recorded and that are not yet purged. You cannot view requests from other users.

    To display the Request Explorer page, in the Textual navigation bar, click Request Explorer.

    For each request, the list includes:

    • A 255-character preview of the text that was sent for redaction.

    • The tags assigned to the request.

    • The date when the request will be purged.

    Request Explorer page for redaction requests from the SDK

    hashtag
    Filtering the requests

    You can search for a request based on text that is contained in the redacted text, and by the tags that you assigned to the request.

    To search by text from the string, in the search field, begin to type the text.

    To search by an assigned tag, in the search field, type tags: followed by the tag to search for.

    hashtag
    Previewing the request results

    From the request list, to view the results of a request, click the request row.

    By default, the preview uses Identification view. For each detected entity, Identification view displays the value and the entity type.

    Identification view of the redaction preview

    To instead display only the replacement value, which by default is the entity type, click Replacement.

    Replacement view of the redaction preview

    hashtag
    Testing changes to the redaction configuration

    From the preview, you can test how the results change when you:

    • Change the handling option for entity types.

    • Add and exclude values for entity types.

    hashtag
    Displaying the Edit Request panel

    To display the edit panel, from the request preview page, click Edit.

    The Edit Request panel displays the full list of the available entity types.

    Edit Request panel with the list of entity types

    hashtag
    Changing entity type handling options

    You can change how Textual handles detected entity values for each entity type.

    Note that the handling option changes are not saved when you close the preview and return to the requests list.

    hashtag
    Available handling options

    The handling options are:

    • Off - Indicates to ignore values for this entity type.

    • Redact - This is the default option. Indicates to replace each value with a token that represents the entity type.

    • Synthesize - Indicates to replace each value with a realistic replacement value.

    hashtag
    Changing the handling option for a single entity type

    To change the handling option for a single entity type, either:

    • Click the handling option value for the entity type, then select the handling option.

    Handling option dropdown for an entity type on the Request Explorer preview
    • Click the entity type, then under Generator, click the handling option.

    Generator panel to select the entity type handling option

    hashtag
    Selecting the same handling option for all entity types

    To select the same handling option for all of the entity types:

    1. Click Bulk Edit.

    2. From the Bulk Edit dropdown list, select the handling option.

    Bulk Edit dropdown to set the handling option for all entity types

    hashtag
    Configuring added and excluded values for an entity type

    To configure added and excluded values for an entity type, click the entity type.

    The Edit Request panel expands to display the Add to detection and Exclude from detection lists.

    • You use the Add to detection list to configure regular expressions to identify additional values to detect as the selected entity type.

    • You use the Exclude from detection list to configure regular expressions to identify values to not detect as the selected entity type.

    Note that the added and excluded values are not saved when you close the preview and return to the requests list.

    Edit Request panel with the Add to detection and Exclude from detection lists

    hashtag
    Creating a regular expression for an added or excluded value

    To create a regular expression for added or excluded values:

    1. Click the Add regex option for that list.

    2. In the field, provide a regular expression to identify values to add or exclude.

    Field to create an added or excluded value regular expression
3. Press Enter.

    Saved regular expression for a value

    hashtag
    Editing a regular expression for added or excluded values

    To edit a regular expression:

    1. Click the edit icon for the expression.

    2. In the field, edit the expression.

    Edit field for a regular expression
3. Click the save icon.

    hashtag
    Deleting a regular expression for added or excluded values

    To delete a regular expression, click the delete icon for that expression.

    hashtag
    Viewing whether an entity type has added or excluded values

    When an entity type has added values, the added values icon displays for that entity type.

    Added values icon for an entity type in the Request Explorer

    When an entity type has excluded values, the excluded values icon displays for that entity type.

    Excluded values icon for an entity type in the Request Explorer

    hashtag
    Replaying the request

    To replay the request based on the current configuration, click Replay.

    Replay button for a previewed request

    When you replay the request, in addition to the Identification and Replacement options, you use the Diff toggle to indicate whether to compare the original and new results.

    For our example, we made the following changes to the configuration:

    • For Given Name and Family Name, changed the handling option to Synthesize.

    • For Credit Card, indicated to ignore the value 41111111111.

    hashtag
    Replayed results views with the Diff toggle off

    When the Diff toggle is in the off position, Identification view only reflects changes to the added and excluded values.

    In our example, we configured 41111111111 to not be detected as a credit card number. In the replayed request, it is instead detected as a numeric value.

    Identification view of a replayed request with Diff off

    Replacement view reflects both the added and excluded values and the changes to the handling option.

    For our example, in addition to the entity type change for the credit card number 41111111111, the given and family names are now realistic replacement values instead of the entity types.

    Replacement view of a replayed request with Diff off

    hashtag
    Replayed results views with the Diff toggle on

    When you set the Diff toggle to the on position, the preview displays the original content to the left, and the modified content to the right.

    In Identification view, you can see the changes to the entity detection based on the added and excluded values.

    Identification view of a replayed request with Diff on

    In Replacement view, you can also see the changes to the selected handling options for the entity types.

    Replacement view of a replayed request with Diff on

    hashtag
    Clearing all of the configuration changes

    To clear all of the regular expressions for all of the entity types, click Remove Changes.

    Remove Changes button for a previewed request
    services:
      textual-llm:
        image: textual-llm:[textual-version-here]
        container_name: textual-llm
        volumes:
          - llm-models:/app/models
        ports:
          - "11443:11443"
        secrets:
          - llm_aws_key_id
          - llm_aws_access_key
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all
                  capabilities: [gpu]
        restart: unless-stopped
        networks:
          - llm-network
    
    volumes:
      llm-models:
    
    networks:
      llm-network:
        driver: bridge
    
    secrets:
      llm_aws_key_id:
        environment: "LLM_AWS_KEY_ID"
      llm_aws_access_key:
        environment: "LLM_AWS_ACCESS_KEY"
    
    generator_config = {
        "NUMERIC_VALUE": PiiState.ReplacementSynthesis,
        "LANGUAGE": PiiState.ReplacementSynthesis,
        "MONEY": PiiState.ReplacementSynthesis,
        "PRODUCT": PiiState.ReplacementSynthesis,
        "EVENT": PiiState.ReplacementSynthesis,
        "WORK_OF_ART": PiiState.ReplacementSynthesis,
        "LAW": PiiState.ReplacementSynthesis,
        "US_PASSPORT": PiiState.ReplacementSynthesis,
        "MEDICAL_LICENSE": PiiState.ReplacementSynthesis,
        "DATE_TIME": PiiState.GroupingSynthesis,
        "US_BANK_NUMBER": PiiState.ReplacementSynthesis,
        "NRP": PiiState.ReplacementSynthesis,
        "US_SSN": PiiState.GroupingSynthesis,
        "IP_ADDRESS": PiiState.Synthesis,
        "ORGANIZATION": PiiState.GroupingSynthesis,
        "PHONE_NUMBER": PiiState.GroupingSynthesis,
        "US_ITIN": PiiState.ReplacementSynthesis,
        "LOCATION": PiiState.GroupingSynthesis,
        "LOCATION_ADDRESS": PiiState.GroupingSynthesis,
        "LOCATION_CITY": PiiState.GroupingSynthesis,
        "LOCATION_STATE": PiiState.GroupingSynthesis,
        "LOCATION_ZIP": PiiState.GroupingSynthesis,
        "LOCATION_COUNTRY": PiiState.ReplacementSynthesis,
        "CREDIT_CARD": PiiState.GroupingSynthesis,
        "US_DRIVER_LICENSE": PiiState.ReplacementSynthesis,
        "EMAIL_ADDRESS": PiiState.ReplacementSynthesis,
        "IBAN_CODE": PiiState.ReplacementSynthesis,
        "URL": PiiState.ReplacementSynthesis,
        "NAME_GIVEN": PiiState.GroupingSynthesis,
        "NAME_FAMILY": PiiState.GroupingSynthesis,
        "PERSON": PiiState.GroupingSynthesis,
        "GENDER_IDENTIFIER": PiiState.ReplacementSynthesis,
        "OCCUPATION": PiiState.ReplacementSynthesis,
        "USERNAME": PiiState.ReplacementSynthesis,
        "PASSWORD": PiiState.ReplacementSynthesis,
        "PERSON_AGE": PiiState.GroupingSynthesis,
        "DOB": PiiState.GroupingSynthesis,
        "CC_EXP": PiiState.GroupingSynthesis,
        "CVV": PiiState.GroupingSynthesis,
        "PROJECT_NAME": PiiState.ReplacementSynthesis,
        "MICR_CODE": PiiState.ReplacementSynthesis,
        "HEALTHCARE_ID": PiiState.ReplacementSynthesis,
        "NUMERIC_PII": PiiState.ReplacementSynthesis,
        "LOCATION_COMPLETE_ADDRESS": PiiState.ReplacementSynthesis,
    }
    record_options = RecordApiRequestOptions(record=<boolean>, retention_time_in_hours=<number of hours>, tags=["tag name"])

• Belarusian (be)
    • Bengali (bn)
    • Bengali Romanized
    • Bosnian (bs)
    • Breton (br)
    • Bulgarian (bg)
    • Burmese (my)
    • Burmese (alternative)
    • Catalan (ca)
    • Chinese (Simplified) (zh)
    • Chinese (Traditional) (zh)
    • Croatian (hr)
    • Czech (cs)
    • Danish (da)
    • Dutch (nl)
    • English (en)
    • Esperanto (eo)
    • Estonian (et)
    • Filipino (tl)
    • Finnish (fi)
    • French (fr)
    • Galician (gl)
    • Irish (ga)
    • Georgian (ka)
    • German (de)
    • Greek (el)
    • Gujarati (gu)
    • Hausa (ha)
    • Hebrew (he)
    • Hindi (hi)
    • Hindi Romanized
    • Hungarian (hu)
    • Icelandic (is)
    • Indonesian (id)
    • Italian (it)
    • Japanese (ja)
    • Javanese (jv)
    • Kannada (kn)
    • Kazakh (kk)
    • Khmer (km)
    • Korean (ko)
    • Kurdish (Kurmanji) (ku)
    • Kyrgyz (ky)
    • Lao (lo)
    • Latin (la)
    • Latvian (lv)
    • Lithuanian (lt)
    • Macedonian (mk)
    • Malagasy (mg)
    • Malay (ms)
    • Malayalam (ml)
    • Marathi (mr)
    • Mongolian (mn)
    • Nepali (ne)
    • Norwegian (no)
    • Oriya (or)
    • Oromo (om)
    • Pashto (ps)
    • Persian (fa)
    • Polish (pl)
    • Portuguese (pt)
    • Punjabi (pa)
    • Romanian (ro)
    • Russian (ru)
    • Sanskrit (sa)
    • Scottish Gaelic (gd)
    • Serbian (sr)
    • Sinhala (si)
    • Sindhi (sd)
    • Slovak (sk)
    • Slovenian (sl)
    • Somali (so)
    • Spanish (es)
    • Sundanese (su)
    • Swahili (sw)
    • Swedish (sv)
    • Tamil (ta)
    • Tamil Romanized
    • Telugu (te)
    • Telugu Romanized
    • Thai (th)
    • Turkish (tr)
    • Ukrainian (uk)
    • Urdu (ur)
    • Urdu Romanized
    • Uyghur (ug)
    • Uzbek (uz)
    • Vietnamese (vi)
    • Welsh (cy)
    • Western Frisian (fy)
    • Xhosa (xh)
    • Yiddish (yi)

    Datasets and redaction

    You can use the Tonic Textual SDK to manage datasets and to redact individual strings and files.

    Create and manage datasets

    Create, update, and get redacted files from a Textual dataset.

    Redact strings

    Send plain text, JSON, or XML strings for redaction.

    Redact individual files

    Send a file for redaction and retrieve the results.

    Transcribe and redact audio files

Send an audio file to be transcribed, and retrieve the redacted transcription.

    Configure entity type handling

    Configure how Textual treats each type of entity in a dataset, redacted file, or redacted string.

    Record and review redaction requests

    View the results of an SDK redaction request in the Textual application.

    Redact individual strings

    circle-info

    Required global permission: Use the API to parse or redact a text string

    Before you perform these tasks, remember to instantiate the SDK client.

    You can use the Tonic Textual SDK to redact individual strings, including:

    • Plain text strings

    • JSON content

    • XML content

    For a text string, you can also request synthesized values from a large language model (LLM).

The redaction request can include the handling configuration for entity types.

The redaction response includes the redacted or synthesized content and details about the detected entity values.

    hashtag
    Redact a plain text string

To send a plain text string for redaction, use textual.redact:

    For example:

The redact call provides an option to record the request, to allow you to preview the results in the Textual application. For more information, go to Record and review redaction requests.

    hashtag
    Redact multiple plain text strings

To send multiple plain text strings for redaction, use textual.redact_bulk:

    For example:

    hashtag
    Redact JSON content

To send a JSON string for redaction, use textual.redact_json. You can send the JSON content as a JSON string or a Python dictionary.

    redact_json ensures that only the values are redacted. It ignores the keys.

    hashtag
    Basic JSON redaction example

    Here is a basic example of a JSON redaction request:

    It produces the following JSON output:

    hashtag
    Specifying entity types for specific JSON paths

    When you redact a JSON string, you can optionally assign specific entity types to selected JSON paths.

    To do this, you include the jsonpath_allow_lists parameter. Each entry consists of an entity type and a list of JSON paths for which to always use that entity type. Each JSON path must point to a simple string or numeric value.

    The specified entity type overrides both the detected entity type and any added or excluded values.

    In the following example, the value of the key1 node is always treated as a telephone number:

    It produces the following redacted output:

    hashtag
    Redact XML content

To send an XML string for redaction, use textual.redact_xml.

    redact_xml ensures that only the values are redacted. It ignores the XML markup.

    For example:

    Produces the following XML output:

    hashtag
    Redact HTML content

To send an HTML string for redaction, use textual.redact_html.

    redact_html ensures that only the values are redacted. It ignores the HTML markup.

    For example:

    Produces the following HTML output:

    hashtag
    Synthesis with Large Language Models

    You can request synthesized values from a large language model (LLM) using two different approaches.

    hashtag
    ReplacementSynthesis

    ReplacementSynthesis redacts sensitive values and uses the LLM to generate realistic replacements based on the surrounding context.

    When you use this process, Textual first identifies the sensitive values in the text. It then sends the value locations and redacted values to the LLM. For example, if Textual identifies a product name, it sends the location and the redacted value PRODUCT to the LLM. Textual does not send the original values to the LLM.

    The LLM then generates realistic synthesized values of the appropriate value types based on the context of the surrounding text.

    Example:

    Output:

    hashtag
    GroupingSynthesis

    GroupingSynthesis groups related entities, generates new entity names, then uses the LLM to reproduce the original format of the value.

    This approach is particularly useful when values have specific formats that need to be preserved. For example, if a name is spelled out using the phonetic alphabet (e.g., "B as in boy, O as in orange, B as in boy" for "Bob"), GroupingSynthesis will:

    1. Identify the grouped entity ("Bob")

2. Generate a new entity name without using the LLM ("Tom")

    3. Use the LLM to reproduce the same format ("T as in toy, O as in orange, M as in mark")

    Example:

    Output:

    hashtag
    Configuration

    Use the generator_config parameter to specify which entity types should use synthesis and which synthesis method to apply. Use generator_default to set the default behavior for entity types not explicitly configured.

For more information about configuring entity type handling, see Configure entity type handling for redaction.


Note: Before you can use either synthesis method, you must enable additional LLM processing. The additional processing sends detected entity information and surrounding text to the LLM. For an overview of the LLM processing and how to enable it, see Configuring the Solar.LLM container.

    hashtag
    Format of the redaction and synthesis response

    The response provides the redacted or synthesized version of the string, and the list of detected entity values.

    For each redacted item, the response includes:

    • The location of the value in the original text (start and end)

    • The location of the value in the redacted version of the string (new_start and new_end)

• The entity type (label)

  • The original value (text)

  • The replacement value (new_text). new_text is null in the following cases:

    • The entity type is ignored

  • A score to indicate confidence in the detection and redaction (score)

  • The detected language for the value (language)

  • For responses from textual.redact_json, the JSON path to the entity in the original document (json_path)

  • For responses from textual.redact_xml, the XPath to the entity in the original XML document (xml_path)
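
As a worked example of the offsets, the following sketch uses the entity from the plain text redaction example earlier on this page; start and end index into the original string, while new_start and new_end index into the redacted string.

    # Values copied from the sample response for "Contact Tonic AI with questions".
    entity = {
        "start": 8, "end": 16, "new_start": 8, "new_end": 30,
        "label": "ORGANIZATION", "text": "Tonic AI",
    }

    original = "Contact Tonic AI with questions"
    redacted = "Contact ORGANIZATION_EPfC7XZUZ with questions"

    assert original[entity["start"]:entity["end"]] == "Tonic AI"
    assert redacted[entity["new_start"]:entity["new_end"]] == "ORGANIZATION_EPfC7XZUZ"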

    redaction_response = textual.redact("""<text of the string>""")
    redaction_response.describe()
    redaction_response = textual.redact("""Contact Tonic AI with questions""")
    redaction_response.describe()
    
    Contact ORGANIZATION_EPfC7XZUZ with questions
        
    {"start": 8, "end": 16, "new_start": 8, "new_end": 30, "label": "ORGANIZATION", "text": "Tonic AI", "new_text": "[ORGANIZATION]", "score": 0.85, "language": "en"}
bulk_response = textual.redact_bulk([<List of strings>])
    bulk_response = textual.redact_bulk(["Tonic.ai was founded in 2018", "John Smith is a person"])
    bulk_response.describe()
    
    [ORGANIZATION_5Ve7OH] was founded in [DATE_TIME_DnuC1]
    
    {"start": 0, "end": 5, "new_start": 0, "new_end": 21, "label": "ORGANIZATION", "text": "Tonic", "score": 0.9, "language": "en", "new_text": "[ORGANIZATION]"}
    {"start": 21, "end": 25, "new_start": 37, "new_end": 54, "label": "DATE_TIME", "text": "2018", "score": 0.9, "language": "en", "new_text": "[DATE_TIME]"}
    
    [NAME_GIVEN_dySb5] [NAME_FAMILY_7w4Db3] is a person
    
    {"start": 0, "end": 4, "new_start": 0, "new_end": 18, "label": "NAME_GIVEN", "text": "John", "score": 0.9, "language": "en", "new_text": "[NAME_GIVEN]"}
    {"start": 5, "end": 10, "new_start": 19, "new_end": 39, "label": "NAME_FAMILY", "text": "Smith", "score": 0.9, "language": "en", "new_text": "[NAME_FAMILY]"}
    json_redaction = textual.redact_json(<JSON string or Python dictionary>)
import json

    d=dict()
    d['person']={'first':'John','last':'OReilly'}
    d['address']={'city': 'Memphis', 'state':'TN', 'street': '847 Rocky Top', 'zip':1234}
    d['description'] = 'John is a man that lives in Memphis.  He is 37 years old and is married to Cynthia.'
    
    json_redaction = textual.redact_json(d)
    
    print(json.dumps(json.loads(json_redaction.redacted_text), indent=2))
    {
    "person": {
        "first": "[NAME_GIVEN]",
        "last": "[NAME_FAMILY]"
    },
    "address": {
        "city": "[LOCATION_CITY]",
        "state": "[LOCATION_STATE]",
        "street": "[LOCATION_ADDRESS]",
        "zip": "[LOCATION_ZIP]"
    },
    "description": "[NAME_GIVEN] is a man that lives in [LOCATION_CITY].  He is [DATE_TIME] and is married to [NAME_GIVEN]."
    }
    jsonpath_allow_lists={'entity_type':['JSON Paths']}
    response = textual.redact_json('{"key1":"Ex123", "key2":"Johnson"}', jsonpath_allow_lists={'PHONE_NUMBER':['$.key1']})
    {"key1":"[PHONE_NUMBER]","key2":"My name is [NAME_FAMILY]"}
    xml_string = '''<?xml version="1.0" encoding="UTF-8"?>
        <!-- This XML document contains sample PII with namespaces and attributes -->
        <PersonInfo xmlns="http://www.example.com/default" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:contact="http://www.example.com/contact">
            <!-- Personal Information with an attribute containing PII -->
            <Name preferred="true" contact:userID="john.doe123">
                <FirstName>John</FirstName>
                <LastName>Doe</LastName>He was born in 1980.</Name>
    
            <contact:Details>
                <!-- Email stored in an attribute for demonstration -->
                <contact:Email address="[email protected]"/>
                <contact:Phone type="mobile" number="555-6789"/>
            </contact:Details>
    
            <!-- SSN stored as an attribute -->
            <SSN value="987-65-4321" xsi:nil="false"/>
            <data>his name was John Doe</data>
        </PersonInfo>'''
    
    response = textual.redact_xml(xml_string)
    
    redacted_xml = response.redacted_text
    <?xml version="1.0" encoding="UTF-8"?><!-- This XML document contains sample PII with namespaces and attributes -->\n<PersonInfo xmlns="http://www.example.com/default" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:contact="http://www.example.com/contact"><!-- Personal Information with an attribute containing PII --><Name preferred="true" contact:userID="[NAME_GIVEN]">[GENDER_IDENTIFIER] was born in [DOB].<FirstName>[NAME_GIVEN]</FirstName><LastName>[NAME_FAMILY]</LastName></Name><contact:Details><!-- Email stored in an attribute for demonstration --><contact:Email address="[EMAIL_ADDRESS]"></contact:Email><contact:Phone type="mobile" number="[PHONE_NUMBER]"></contact:Phone></contact:Details><!-- SSN stored as an attribute --><SSN value="[PHONE_NUMBER]" xsi:nil="false"></SSN><data>[GENDER_IDENTIFIER] name was [NAME_GIVEN] [NAME_FAMILY]</data></PersonInfo>
    html_content = """
    <!DOCTYPE html>
    <html>
        <head>
            <title>John Doe</title>
        </head>
        <body>
            <h1>John Doe</h1>
            <p>John Doe is a person who lives in New York City.</p>
            <p>John Doe's phone number is 555-555-5555.</p>
        </body>
    </html>
    """
    
# Run the redact_html method
    redacted_html = textual.redact_html(html_content, generator_config={
                "NAME_GIVEN": "Synthesis",
                "NAME_FAMILY": "Synthesis"
            })
    
    print(redacted_html.redacted_text)
    <!DOCTYPE html>
    <html>
        <head>
            <title>Scott Roley</title>
        </head>
        <body>
            <h1>Scott Roley</h1>
            <p>Scott Roley is a person who lives in [LOCATION_CITY].</p>
            <p>Scott Roley's phone number is [PHONE_NUMBER].</p>
        </body>
    </html>
    from textual import PiiState
    
    sample_text = "My name is John, and today I am demoing Textual, a software product created by Tonic"
    
    # Configure to synthesize organization entities
    generator_config = {"ORGANIZATION": PiiState.ReplacementSynthesis}
    generator_default = PiiState.Off
    
    response = textual.redact(
        sample_text,
        generator_config=generator_config,
        generator_default=generator_default,
    )
    My name is John, and today I am demoing Textual, a software product created by Initech Enterprises.
    from textual import PiiState
    
    sample_text = "The caller spelled their name: B as in boy, O as in orange, B as in boy"
    
    # Configure to use grouping synthesis for names
    generator_config = {"NAME_GIVEN": PiiState.GroupingSynthesis}
    generator_default = PiiState.Off
    
    response = textual.redact(
        sample_text,
        generator_config=generator_config,
        generator_default=generator_default,
    )
    The caller spelled their name: T as in toy, O as in orange, M as in mark
    Contact ORGANIZATION_EPfC7XZUZ with questions
        
    {"start": 8, "end": 16, "new_start": 8, "new_end": 30, "label": "ORGANIZATION", "text": "Tonic AI", "new_text": "[ORGANIZATION]", "score": 0.85, "language": "en"}

    Configuring entity type synthesis options

    circle-info

    Required dataset permission: Edit dataset settings

    When Textual generates replacement values, those values are always consistent. Consistency means that the same original value always produces the same replacement value. You can also enable consistency with some Tonic Structural output values.

    For all entity types, you can specify the replacements for specific values.

    Some entity types include type-specific options for how Tonic Textual generates the replacement values.

    For custom entity types, you can select the generator to use.

    hashtag
    Enabling consistency with Tonic Structural

    If you also use Tonic Structural, then you can configure Textual to enable selected synthesized values to be consistent between the two applications.

    For example, a given source telephone number can produce the same replacement telephone number in both Structural and Textual.

To enable this consistency, you configure a statistics seed value as the value of the Textual environment setting SOLAR_STATISTICS_SEED. A statistics seed is a signed 32-bit integer.

The value must match a Structural statistics seed, either:

    • The value of the Structural environment setting TONIC_STATISTICS_SEED.

    • A statistics seed configured for an individual Structural workspace.

    The current statistics seed value is displayed on the System Settings page.

    hashtag
    Displaying the synthesis options for an entity type

    To display the synthesis options for an entity type, click the icon next to the handling option dropdown list.

    hashtag
    Providing specific replacement values

    For all entity types, you can provide a list of specific replacement values.

    For example, for the Given Name entity type, you might indicate to always replace John with Michael and Mary with Melissa.

    For the remaining values, Textual generates the replacement values.

    In the text area, provide a JSON object that maps the original values to the replacement values. For example:
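
A minimal mapping for the Language entity type described below might look like this:

    {
      "French": "German",
      "English": "Japanese"
    }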

    With the above configuration for the Language entity type:

    • All instances of French are changed to German.

    • All instances of English are changed to Japanese.

    • Textual selects the replacement values for other languages.

    When you provide replacement values:

• The values are case-insensitive. For example, if you specify "John": "Michael", then Textual also replaces john with michael and JOHN with MICHAEL.

• Textual ignores leading and trailing punctuation. To continue the example of "John": "Michael", Textual also replaces 'John' with 'Michael'.

    hashtag
    Configuring name synthesis options

    For the Given Name and Family Name entity types, you can configure:

    • Whether to treat the same name with different casing as a different value.

    • Whether to replicate the gender of the original value.

    hashtag
    Differentiating source values by case

    To treat the same name with different casing as different source values, check Is Consistency Case Sensitive.

    For example, when this is checked, john and John are treated as different names, and can have different replacement values - john might be replaced with michael, and John might be replaced with Stephen.

    When this is not checked, then john and John are treated as the same source value, and get the same replacement.

    hashtag
    Preserving gender in names

To replace source names with names that have the same gender, check Preserve Gender.

    For example, when this is checked, John might be replaced with Michael, since they are both traditionally male names. However, John would not be replaced with Mary, which is traditionally a female name.

    hashtag
    Configuring location synthesis options

    Location values include the following types:

    • Location

    • Location Address

• Location State

    • Location Zip

    You can select whether to generate HIPAA or non-HIPAA addresses. Address values can be consistent with values generated in Structural.

    For each location type other than Location State, you can specify whether to use a realistic replacement value. For Location State, based on HIPAA guidelines, both the Synthesize option and the Ignore option pass through the value.

    For location types that include zip codes, you can also specify how to generate the new zip code values.

    hashtag
    Selecting the type of address generator to use

    Under Address generator type, select the type of address generator to use:

• HIPAA-compliant address generator. This option generates values similar to those generated by the Structural HIPAA Address generator.

• Non-HIPAA address generator. This option generates values similar to those generated by the Structural Address generator.

    If you configured a Textual statistics seed that matches a Structural statistics seed, then the generated address values are consistent with values generated in Structural. A given address value produces the same output value in both applications.

    For example, in both Textual and Structural, a source address value 123 Main Street might be replaced with 234 Oak Avenue.

    hashtag
    Indicating whether to use realistic replacement values

    By default, Textual replaces a location value with a realistic corresponding value. For example, "Main Street" might be replaced with "Fourth Avenue".

    To instead scramble the values, uncheck Replace with realistic values.

    hashtag
    Indicating how to generate replacement zip codes

    By default, to generate a new zip code, Textual selects a real zip code that starts with the same three digits as the original zip code. For a low population area, Textual instead selects a random zip code from the United States.

    To instead replace the last two digits of the zip code with zeros, check Replace zeroes for zip codes. For a low population area, Textual instead replaces all of the digits in the zip code with zeros.

    hashtag
    Configuring datetime synthesis options

    By default, when you select the Synthesize option for Date/Time and Date of Birth values, Textual shifts the datetime values to a value that occurs within 7 days before or after the original value.

    To customize how Textual sets the new values, you can:

    • Set a different range within which Textual sets the new values

    • Indicate whether to scramble date values that Textual cannot parse

    • Indicate whether to shift all of the original values by the same amount and in the same direction

    hashtag
    Adjusting the range for the replacement values

    By default, Textual adjusts the dates to values that are within 7 days before or after the original date.

    To change the range:

    1. In the Left bound on # of Days To Shift field, enter the number of days before the original date within which the replacement datetime value must occur. For example, if you enter 10, then the replacement datetime value cannot occur earlier than 10 days before the original value.

    2. In the Right bound on # of Days To Shift field, enter the number of days after the original date within which the replacement datetime value must occur. For example, if you enter 6, then the replacement datetime value cannot occur later than 6 days after the original value.

    hashtag
    Indicating how to replace datetime values in unsupported formats

Textual can parse datetime values that use either a format in the default list of supported datetime formats or a format that you add.

    The Scramble Unrecognized Dates checkbox indicates how Textual should handle datetime values that it does not recognize.

    By default, the checkbox is checked, and Textual scrambles those values.

    To instead pass through the values without changing them, uncheck Scramble Unrecognized Dates.

    hashtag
    Indicating whether to shift all values by the same amount

    By default, Textual applies different shifts to the original values. Some replacement dates might be earlier, and some might be later. The amount of shift might also vary.

    To shift all of the datetime values in the same way, check Apply same shift for entire document.

    For example, if this is checked, Textual might shift all datetime values 3 days in the future.

    hashtag
    Adding datetime formats

By default, Textual is able to recognize datetime values that use a format from the default list of supported datetime formats.

    Under Additional Date Formats, you can add other datetime formats that you know are present in your data.

The formats must use a valid datetime format string.

    To add a format, type the format in the field, then click +.

    To remove a format, click its delete icon.

    hashtag
    Default supported datetime formats in Textual

    By default, Textual supports the following datetime formats.

    hashtag
    Date only formats

Format                        Example value

yyyy-M                        2024-1
yyyy/M                        2024/1
d/M/yyyy                      17/1/2024
d-MMM-yyyy                    17-Jan-2024
dd-MMM-yy                     17-Jan-24
d-M-yyyy                      17-1-2024
d/MMM/yyyy                    17/Jan/2024
d MMMM yyyy                   17 January 2024
d MMM yyyy                    17 Jan 2024
d MMMM, yyyy                  17 January, 2024
ddd, d MMM yyyy               Wed, 17 Jan 2024
M/d/yyyy                      1/17/2024
M/d/yy                        1/17/24
M-d-yyyy                      1-17-2024
MMddyyyy                      01172024
MMMM d, yyyy                  January 17, 2024
MMM d, ''yy                   Jan 17, '24
MM-yyyy                       01-2024
MMMM, yyyy                    January, 2024
yyyy/M/d                      2024/1/17
yyyy-M-d                      2024-1-17
yyyyMMdd                      20240117
yyyy.M.d                      2024.1.17
yyyy, MMM d                   2024, Jan 17

    hashtag
    Date and time formats

Format                        Example value

yyyy/M/d HH:mm:ss             2024/1/17 15:45:30
yyyy-M-dTHH:mm:ss             2024-1-17T15:45:30
yyyy/M/dTHH:mm:ss             2024/1/17T15:45:30
yyyy-M-d HH:mm:ss'Z'          2024-1-17 15:45:30Z
yyyy-M-d'T'HH:mm:ss'Z'        2024-1-17T15:45:30Z
yyyy-M-d HH:mm:ss.fffffff     2024-1-17 15:45:30.1234567
yyyy-M-dd HH:mm:ss.FFFFFF     2024-1-17 15:45:30.123456
yyyy-M-dTHH:mm:ss.fff         2024-1-17T15:45:30.123
yyyy-M-d HH:mm                2024-1-17 15:45
d-M-yyyy HH:mm                17-1-2024 15:45
MM-dd-yy HH:mm                01-17-24 15:45
d/M/yy HH:mm:ss               17/1/24 15:45:30
d/M/yyyy HH:mm:ss             17/1/2024 15:45:30

    hashtag
    Time only formats

Format                        Example value

HH:mm                         15:45
HH:mm:ss                      15:45:30
HHmmss                        154530
hh:mm:ss tt                   03:45:30 PM
HH:mm:ss'Z'                   15:45:30Z

    hashtag
    Configuring age synthesis options

    By default, when you select the Synthesize option for Age values, Textual shifts the age value to a value that is within seven years before or after the original value. For age values that it cannot synthesize, it scrambles the value.

    To configure the synthesis:

    1. In the Range of Years +/- for the Shifted Age field, enter the number of years before and after the original value to use as the range for the synthesized value.

    2. By default, Textual scrambles age values that it cannot parse. To instead pass through the value unchanged, uncheck Scramble Unrecognized Ages.

    hashtag
    Configuring telephone number synthesis options

    For Phone Number values, you can choose whether to generate a realistic phone number. If you do, then the generated values can be consistent with values generated in Structural.

    hashtag
    Selecting the generator type

    From the Phone number generator type dropdown list:

    • To replace each phone number with a randomly generated number, select Random Number.

• To generate a realistic telephone number, select US Phone Number. The US Phone Number option generates values similar to those generated by the Phone generator in Structural.

    If you also configured a Textual statistics seed that matches a Structural statistics seed, then the synthesized values are consistent with values generated in Structural. A given source telephone number produces the same output telephone number in both applications.

    For example, in both Textual and Structural, 123-456-6789 might be replaced with 154-567-8901.
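Seed-based consistency means that, as long as the seed does not change, the same source value always maps to the same replacement. The following Python sketch shows the general idea with a keyed hash; it is illustrative only, and it is not how Textual or Structural derives replacement values.

import hashlib

def consistent_digits(value: str, seed: str, length: int = 10) -> str:
    """Derive a stable string of digits from a source value and a shared seed.
    The same value and seed always produce the same digits."""
    digest = hashlib.sha256(f"{seed}:{value}".encode()).hexdigest()
    return str(int(digest, 16))[:length]

print(consistent_digits("123-456-6789", "shared-seed"))
print(consistent_digits("123-456-6789", "shared-seed"))  # identical to the first call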

    hashtag
    Determining how to replace invalid telephone numbers

    The Replace invalid numbers with valid numbers checkbox determines how Textual handles invalid telephone numbers in the data.

To replace the invalid numbers with valid telephone numbers, check the checkbox.

    If you do not check the checkbox, then Textual randomly replaces the numeric characters.

    hashtag
    Selecting and configuring the generator for custom entity types

    By default, when you select the Synthesize option for a custom entity type, Textual scrambles the original value.

    From the generator dropdown list, select the generator to use to create the replacement value.

    The available generators are:

Scramble

This is the default generator. Scrambles the original value.

CC Exp

Generates a credit card expiration date.

Company Name

Generates a name of a business.

Credit Card

Generates a credit card number.

CVV

Generates a credit card security code.

Date Time

Generates a datetime value. The Date Time generator has the same synthesis configuration options as the built-in Date/Time entity type.

Email

Generates an email address.

HIPAA Address Generator

Generates a mailing address. The generator has the same configuration options for generator type and realistic replacements as the built-in location entity types.

IP Address

Generates an IP address.

Location Zip

Generates a zip code.

MICR Code

Generates an MICR code.

Money

Generates a currency amount.

Name

Generates a person's name. You configure:

• Whether to generate the same replacement value from source values that have different capitalization.

• Whether the replacement value reflects the gender of the original value.

Numeric Value

Generates a numeric value. You configure whether to use the Integer Primary Key generator to generate the value.

Person Age

Generates an age value. The Person Age generator has the same configuration options as the built-in Age entity type.

Phone Number

Generates a telephone number. The Phone Number generator has the same configuration options as the built-in Phone Number entity type.

SSN

Generates a United States Social Security Number.

URL

Generates a URL.


    Structure of JSON output files

    The JSON output provides access to Markdown content and identifies the entities that were detected in the file.

    hashtag
    Common elements in the JSON output

    hashtag
    Information about the entire file

All JSON output files contain the following elements that contain information for the entire file:

fileType

The type of the original file.

content

Details about the file content. It includes:

• Hashed and Markdown content for the file

• Entities in the file

schemaVersion

An integer that identifies the version of the JSON schema that was used for the JSON output. Textual uses this to convert content from older schemas to the most recent schema.

    For specific file types, the JSON output includes additional objects and properties to reflect the file structure.

    hashtag
    Hashed and Markdown content

The JSON output contains hashed and Markdown content for the entire file and for individual file components.

hash

The hashed version of the file or component content.

text

The file or component content in Markdown notation.

    hashtag
    Entities

    The JSON output contains entities arrays for the entire file and for individual file components.

Each entity in the entities array has the following properties:

start

Within the file or component, the location where the entity value starts. For example, in the text My name is John., John is an entity that starts at 11.

end

Within the file or component, the location where the entity value ends. In the same example, John is an entity that ends at 14.

label

The type of entity. For a list of the built-in entity types that Textual detects, go to Built-in entity types.

text

The text of the entity.

score

The confidence score for the entity. Indicates how confident Textual is that the value is an entity of the specified type.

language

The language code to identify the language for the entity value. For example, en indicates that the value is in English.
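As an illustration, the following Python sketch reads a JSON output file and prints the detected entities with the properties described above. The file name is a placeholder; the keys are the ones documented in this section.

import json

with open("example_output.json", encoding="utf-8") as f:  # placeholder file name
    doc = json.load(f)

for entity in doc["content"]["entities"]:
    # label = entity type, text = detected value, start/end = position in the content
    print(f'{entity["label"]}: "{entity["text"]}" at [{entity["start"]}, {entity["end"]}] '
          f'(score {entity["score"]}, language {entity["language"]})')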

    hashtag
    Plain text files

    For plain text files, the JSON output only contains the information for the entire file.

    hashtag
    .csv files

    For .csv files, the structure contains a tables array.

The tables array contains a table object that contains header and data arrays.

    For each row in the file, the data array contains a row array.

    For each value in a row, the row array contains a value object.

    The value object contains the entities, hashed content, and Markdown content for the value.
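As an illustration, the following Python sketch walks the .csv output structure described above. The file name is a placeholder.

import json

with open("example_csv_output.json", encoding="utf-8") as f:  # placeholder file name
    doc = json.load(f)

table = doc["tables"][0]                # .csv output contains a single table object
print("Columns:", table["header"])

for row in table["data"]:               # one entry per row in the file
    for value in row:                   # one entry per value in the row
        if value["entities"]:           # entities detected in this value
            print(value["text"], "->", [e["label"] for e in value["entities"]])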

    hashtag
    .xlsx files

    For .xlsx files, the structure contains a tables array that provides details for each worksheet in the file.

    For each worksheet, the tables array contains a worksheet object.

    For each row in a worksheet, the worksheet object contains a header array and a data array. The data array contains a row array.

    For each cell in a row, the row array contains a cell object.

    Each cell object contains the entities, hashed content, and Markdown content for the cell.

    hashtag
    .docx files

    For .docx files, the JSON output structure adds:

    • A footnotes array for content in footnotes.

    • An endnotes array for content in endnotes.

• A header object for content in the page headers. Includes separate objects for the first page header, even page header, and odd page header.

• A footer object for content in the page footers. Includes separate objects for the first page footer, even page footer, and odd page footer.

    These arrays and objects contain the entities, hashed content, and Markdown content for the notes, headers, and footers.

    hashtag
    PDF and image files

    PDF and image files use the same structure. Textual extracts and scans the text from the files.

    For PDF and image files, the JSON output structure adds the following content.

    hashtag
    pages array

    The pages array contains all of the content on the pages. This includes content in tables and key-value pairs, which are also listed separately in the output.

    For each page in the file, the pages array contains a page array.

    For each component on the page - such as paragraphs, headings, headers, and footers - the page array contains a component object.

    Each component object contains the component entities, hashed content, and Markdown content.
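As an illustration, the following Python sketch walks the pages array described above and reports how many entities were detected in each page component. The file name is a placeholder.

import json

with open("example_pdf_output.json", encoding="utf-8") as f:  # placeholder file name
    doc = json.load(f)

for page_number, page in enumerate(doc["pages"], start=1):
    for component in page:              # paragraphs, headings, headers, footers, and so on
        content = component["content"]
        print(f'Page {page_number}, {component["type"]}: {len(content["entities"])} entities')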

    hashtag
    tables array

    The tables array contains content that is in tables.

    For each table in the file, the tables array contains a table array.

    For each row in a table, the table array contains a row array.

    For each cell in a row, the row array contains a cell object.

    Each cell object identifies the type of cell (header or content). It also contains the entities, hashed content, and Markdown content for the cell.

    hashtag
    keyValuePairs array

    The keyValuePairs array contains key-value pair content. For example, for a PDF of a form with fields, a key-value pair might represent a field label and a field value.

    For each key-value pair, the keyValuePairs array contains a key-value pair object.

The key-value pair object contains:

• An automatically incremented identifier. For example, id for the first key-value pair is 1, for the second key-value pair is 2, and so on.

• The start and end position of the key-value pair.

• The text of the key.

• The entities, hashed content, and Markdown content for the value.
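As an illustration, the following Python sketch lists the key-value pairs from the output structure described above. The file name is a placeholder.

import json

with open("example_form_output.json", encoding="utf-8") as f:  # placeholder file name
    doc = json.load(f)

for pair in doc["keyValuePairs"]:
    # key is the field label; value holds the text, hash, and detected entities
    labels = [e["label"] for e in pair["value"]["entities"]]
    print(f'{pair["id"]}: {pair["key"]} = {pair["value"]["text"]} (entities: {labels})')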

    hashtag
    PDF and image JSON outline

    hashtag
    .eml and .msg files

    For email message files, the JSON output structure adds the following content.

    hashtag
    Email message identifiers

    The JSON output includes the following email message identifiers:

    • The identifier of the current message

    • If the message was a reply to another message, the identifier of that message

    • An array of related email messages. This includes the email message that the message replied to, as well as any other messages in an email message thread.

    hashtag
    Recipients

    The JSON output includes the email address and display name of the message recipients. It contains separate lists for the following:

    • Recipients in the To line

    • Recipients in the CC line

    • Recipients in the BCC line

    hashtag
    Subject line

    The subject object contains the message subject line. It includes:

    • Markdown and hashed versions of the message subject line.

    • The entities that were detected in the subject line.

    hashtag
    Message timestamp

    sentDate provides the timestamp when the message was sent.

    hashtag
    Message body

    The plainTextBodyContent object contains the body of the email message.

    It contains:

    • Markdown and hashed versions of the message body.

    • The entities that were detected in the message body.

    hashtag
    Message attachments

The attachments array provides information about any attachments to the email message. For each attached file, it includes:

• The identifier of the message that the file is attached to.

• The identifier of the attachment.

• The name of the attachment file.

• The JSON output for the file.

• The count of words in the original file.

• The count of words in the redacted version of the file.
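As an illustration, the following Python sketch summarizes the attachments of an email message from the output structure described above. The file name is a placeholder.

import json

with open("example_email_output.json", encoding="utf-8") as f:  # placeholder file name
    doc = json.load(f)

for attachment in doc["attachments"]:
    # Each attachment embeds the full JSON output for the attached file.
    attached_doc = attachment["document"]
    print(attachment["fileName"], "-",
          attachment["wordCount"], "words,",
          attachment["redactedWordCount"], "words after redaction,",
          len(attached_doc["content"]["entities"]), "entities")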

    hashtag
    Email message JSON outline

    hashtag
    RTF files

    For RTF files, the JSON output structure adds the following content.

    hashtag
    htmlContent object

The htmlContent object contains details about the HTML version of the file content. It includes the following:

• innerTextEntities lists the entity values that are contained in the displayed content of the file.

• attributeEntities lists the entity values that are contained in the HTML attributes for the file.

• text contains the text of the HTML content.

• hash contains the hashed version of the HTML content.

    hashtag
    tables array

    The tables array contains content that is in tables.

    For each table in the file, the tables array contains a table array.

    For each row in a table, the table array contains a row array.

    For each cell in a row, the row array contains a cell object.

    Each cell object identifies the type of cell (header or content). It also contains the entities, hashed content, and Markdown content for the cell.

    hashtag
    RTF file JSON outline


    {
      "fileType": "<file type>",
      "content": {
        "text": "<Markdown file content>",
        "hash": "<hashed file content>",
        "entities": [   //Entry for each entity in the file
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "schemaVersion": <integer schema version>
    }
    {
      "fileType": "<file type>",
      "content": {
        "text": "<Markdown content>",
        "hash": "<hashed content>",
        "entities": [   //Entry for each entity in the file
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"      }
        ]
      },
      "schemaVersion": <integer schema version>
    }
{   //.csv file JSON outline
      "tables": [
        {
          "tableName": "csv_table",
          "header": [//Columns that contain heading info (col_0, col_1, and so on)
            "<column identifier>"
          ],
          "data": [  //Entry for each row in the file
            [   //Entry for each value in the row
              {    
                "entities": [   //Entry for each entity in the value
                  {
                    "start": <start location>,,
                    "end": <end location>,
                    "label": "<value type>",
                    "text": "<value text>",
                    "score": <confidence score>,
                    "language": "<language code>"
                  }
                ],
                "hash": "<hashed value content>",
                "text": "<Markdown value content>"
              }
            ]
          ]
        }
      ],
      "fileType": "<file type>",
      "content": {
        "text": "<Markdown file content>",
        "hash": "<hashed file content>",
        "entities": [   ///Entry for each entity in the file
          {
            "start": <start location>,
            "end": <end location>
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "schemaVersion": <integer schema version>
    }
{   //.xlsx file JSON outline
      "tables": [   //Entry for each worksheet
        {
          "tableName": "<Name of the worksheet>",
          "header": [ //Columns that contain heading info (col_0, col_1, and so on)
            "<column identifier>"
          ],
          "data": [   //Entry for each row
            [   //Entry for each cell in the row
              {
                "entities": [   //Entry for each entity in the cell
                  {
                    "start": <start location>,
                    "end": <end location>,
                    "label": "<value type>",
                    "text": "<value text>",
                    "score": <confidence score>,
                    "language": "<language code>"
                  }
                ],
                "hash": "<hashed cell content>",
                "text": "<Markdown cell content>"
              }
            ]
          ]
        }
      ],
      "fileType": "<file type>",
      "content": {
        "text": "<Markdown file content>",
        "hash": "<hashed file content>",
        "entities": [   //Entry for each entity in the file
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "schemaVersion": <integer schema version>
    }
{   //.docx file JSON outline
      "footNotes": [   //Entry for each footnote
        {
          "entities": [   //Entry for each entity in the footnote
            {
              "start": <start location>,
              "end": <end location>,
              "pythonStart": <start location in Python>,
              "pythonEnd": <end location in Python>,
              "label": "<value type>",
              "text": "<value text>",
              "score": <confidence score>,
              "language": "<language code>"
              "exampleRedaction": null
            }
          ],
          "hash": "<hashed footnote content>",
          "text": "<Markdown footnote content>"
        }
      ],
      "endNotes": [   //Entry for each endnote
        {
          "entities": [   //Entry for each entity in the endnote
            {
              "start": <start location>,
              "end": <end location>,
              "label": "<value type>",
              "text": "<value text>",
              "score": <confidence score>,
              "language": "<language code>"
            }
          ],
          "hash": "<hashed endnote content>",
          "text": "<Markdown endnote content>"
        }
      ],
      "header": {
        "first": {
          "entities": [   //Entry for each entity in the first page header
            {
              "start": <start location>,
              "end": <end location>,
              "label": "<value type>",
              "text": "<value text>",
              "score": <confidence score>,
              "language": "<language code>"
            }
          ],
          "hash": "<hashed first page header content>",
          "text": "<Markdown first page header content>"
        },
        "even": {
          "entities": [   //Entry for each entity in the even page header
            {
              "start": <start location>,
              "end": <end location>,
              "label": "<value type>",
              "text": "<value text>",
              "score": <confidence score>,
              "language": "<language code>"
            }
          ],
          "hash": "<hashed even page header content>",
          "text": "<Markdown even page header content>"
        },
        "odd": {
          "entities": [   //Entry for each entity in the odd page header
            {
              "start": <start location>,
              "end": <end location>,
              "label": "<value type>",
              "text": "<value text>",
              "score": <confidence score>,
              "language": "<language code>"
            }
          ],
          "hash": "<hashed odd page header content>",
          "text": "<Markdown odd page header content>"
        }
      },
      "footer": {
        "first": {
          "entities": [   //Entry for each entity in the first page footer
            {
              "start": <start location>,
              "end": <end location>,
              "label": "<value type>",
              "text": "<value text>",
              "score": <confidence score>,
              "language": "<language code>"
            }
          ],
          "hash": "<hashed first page footer content>",
          "text": "<Markdown first page footer content>"
        },
        "even": {
          "entities": [   //Entry for each entity in the even page footer
            {
              "start": <start location>,
              "end": <end location>,
              "label": "<value type>",
              "text": "<value text>",
              "score": <confidence score>,
              "language": "<language code>"
            }
          ],
          "hash": "<hashed even page footer content>",
          "text": "<Markdown even page footer content>"
        },
        "odd": {
          "entities": [   //Entry for each entity in the odd page footer
            {
              "start": <start location>,
              "end": <end location>,
              "label": "<value type>",
              "text": "<value text>",
              "score": <confidence score>,
              "language": "<language code>"
            }
          ],
          "hash": "<hashed odd page footer content>",
          "text": "<Markdown odd page footer content>"
        }
      },
      "fileType": "<file type>",
      "content": {
        "text": "<Markdown file content>",
        "hash": "<hashed file content>",
        "entities": [   //Entry for each entity in the file
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "schemaVersion": <integer schema version>
    }
{   //PDF and image file JSON outline
      "pages": [   //Entry for each page in the file
        [   //Entry for each component on the page
          {
            "type": "<page component type>",
            "content": {
              "entities": [   //Entry for each entity in the component
                {
                  "start": <start location>,
                  "end": <end location>,
                  "label": "<value type>",
                  "text": "<value text>",
                  "score": <confidence score>,
                  "language": "<language code>"
                }
              ],
              "hash": "<hashed component content>",
              "text": "<Markdown component content>"
            }
          }
        ]
      ],
      "tables": [   //Entry for each table in the file
        [   //Entry for each row in the table
          [   //Entry for each cell in the row
            {
              "type": "<content type>",   //ColumnHeader or Content
              "content": {
                "entities": [  //Entry for each entity in the cell
                  {
                    "start": <start location>,
                    "end": <end location>,
                    "label": "<value type>",
                    "text": "<value text>",
                    "score": <confidence score>,
                    "language": "<language code>"
                  }
                ],
                "hash": "<hashed cell text>",
                "text": "<Markdown cell text>"
              }
            }
          ]
        ]
      ],
      "keyValuePairs": [   //Entry for each key-value pair in the file
        {
          "id": <incremented identifier>,
          "key": "<key text>",
          "value": {
            "entities": [  //Entry for each entity in the value
              {
                "start": <start location>,
                "end": <end location>,
                "label": "<value type>",
                "text": "<value text>",
                "score": <confidence score>,
                "language": "<language code>"
              }
            ],
            "hash": "<hashed value text>",
            "text": "<Markdown value text>"
          },
          "start": <start location of the key-value pair>,
          "end": <end location of the key-value pair>
        }
      ],
      "fileType": "<file type>",
      "content": {
        "text": "<Markdown file content>",
        "hash": "<hashed file content>",
        "entities": [   ///Entry for each entity in the file
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "schemaVersion": <integer schema version>
    }
{   //Email message (.eml and .msg) JSON outline
      "messageId": "<email message identifier>",
      "inReplyToMessageId": <message that this message replied to>,
      "messageIdReferences": [<related email messages>],
      "senderAddress": {
        "address": "<sender email address>",
        "displayName": "<sender display name>"
      },
      "toAddresses": [  //Entry for each recipient in the To list
        {
          "address": "<recipient email address>",
          "displayName": "<recipient display name>"
        }
      ],
      "ccAddresses": [ //Entry for each recipient in the CC list
        {
          "address": "<recipient email address>",
          "displayName": "<recipient display name>"
        }
      ],
      "bccAddresses": [ //Entry for each recipient in the BCC list
        {
          "address": "<recipient email address>",
          "displayName": "<recipient display name>"
        }
      ],
      "sentDate": "<timestamp when the message was sent>",
      "subject": {
        "text": "<Markdown version of the subject line>",
        "hash": "<hashed version of the subject line>",
        "entities": [   //Entry for each entity in the subject line
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "plainTextBodyContent": {
        "text": "<Markdown version of the message body>",
        "hash": "<hashed version of the message body>",
        "entities": [ //Entry for each entity in the message body
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "attachments": [ //Entry for each attached file
        {
          "parentMessageId": "<the message that the file is attached to>",
          "contentId": "<identifier of the attachment>",
          "fileName": "<name of the attachment file>",
          "document": {<JSON for the attached file>},
          "wordCount": <number of words in the attachment>,
          "redactedWordCount": <number of words in the redacted attachment>
        }
      ],
      "fileType": "<file type>",
      "content": {
        "text": "<Markdown file content>",
        "hash": "<hashed file content>",
        "entities": [ //Entry for each entity in the file
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "schemaVersion": <integer schema version>
    }
{   //RTF file JSON outline
      "fileType": "<file type>",
      "content": {
        "text": "<Markdown file content>",
        "hash": "<hashed file content>",
        "entities": [   //Entry for each entity in the file
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
        ]
      },
      "htmlContent" {
        "innerTextEntities": [ //Entry for each entity in the content text
          {
            "start": <start location>,
            "end": <end location>,
            "label": "<value type>",
            "text": "<value text>",
            "score": <confidence score>,
            "language": "<language code>"
          }
         ],
         "attributeEntities":[ // Entry for each entity in the HTML attributes
           {
             "start": <start location>,
             "end": <end location>,
             "label": "<value type>",
             "text": "<value text>",
             "score": <confidence score>,
             "language": "<language code>"
           }
          ],
         "hash": "<hashed HTML content>",
         "text": "<HTML content text>"
      },
     "tables": [   //Entry for each table in the file
        [   //Entry for each row in the table
          [   //Entry for each cell in the row
            {
              "type": "<content type>",   //ColumnHeader or Content
              "content": {
                "entities": [  //Entry for each entity in the cell
                  {
                    "start": <start location>,
                    "end": <end location>,
                    "label": "<value type>",
                    "text": "<value text>",
                    "score": <confidence score>,
                    "language": "<language code>"
                  }
                ],
                "hash": "<hashed cell text>",
                "text": "<Markdown cell text>"
              }
            }
          ]
        ]
      ],
      "schemaVersion": <integer schema version>
    }