You can use the Textual SDK to parse individual files, either from a local file system or from an S3 bucket. Textual returns a `FileParseResult` object for each parsed file. The `FileParseResult` object is a wrapper around the output JSON for the processed file.

To parse a single file from a local file system, use `textual.parse_file`:
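A minimal sketch, assuming the SDK client is already instantiated as `textual`; the argument order (file bytes, then file name) and the per-call `timeout` parameter name are assumptions for illustration:

```python
# Assumes the Textual SDK client is already instantiated as `textual`.
with open("invoice.pdf", "rb") as f:  # rb: read the file in binary format
    file_bytes = f.read()

# Argument order (file bytes, then file name) and the per-call
# `timeout` parameter name are assumptions for illustration.
parsed_file = textual.parse_file(file_bytes, "invoice.pdf", timeout=60)
```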
You must open the file in `rb` access mode, which reads the file in binary format.

You can also set a timeout, in seconds, for the parsing. To set a timeout for a single call, pass it as a parameter to `parse_file`. To set a timeout for all parsing calls, set the environment variable `TONIC_TEXTUAL_PARSE_TIMEOUT_IN_SECONDS`.
You can also parse files that are stored in Amazon S3. Because this process uses the boto3 library to fetch the file from Amazon S3, you must first set up the correct AWS credentials.
To parse a file from an S3 bucket, use `textual.parse_s3_file`:
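A hedged sketch; it assumes that `parse_s3_file` takes the bucket name and the object key, and that boto3 can already find AWS credentials (for example, through environment variables or a shared credentials file):

```python
# Assumes the Textual SDK client is already instantiated as `textual`
# and that boto3 can locate AWS credentials in the environment.
# The (bucket, key) argument order is an assumption for illustration.
parsed_file = textual.parse_s3_file("my-bucket", "documents/invoice.pdf")
```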
Textual uses pipelines to transform file text into a format that can be used in an LLM system.
You can use the Textual SDK to create and manage pipelines and to retrieve pipeline run results.
Before you perform these tasks, remember to instantiate the SDK client.
To create a pipeline, use the pipeline creation method for the type of pipeline to create:
- `textual.create_local_pipeline` - Creates an uploaded file pipeline.
- `textual.create_s3_pipeline` - Creates an Amazon S3 pipeline.
- `textual.create_azure_pipeline` - Creates an Azure pipeline.
- `textual.create_databricks_pipeline` - Creates a Databricks pipeline.
When you create the pipeline, you can also:
- If needed, provide the credentials to use to connect to Amazon S3, Azure, or Databricks.
- Indicate whether to also generate redacted files. By default, pipelines do not generate redacted files. To generate redacted files, set `synthesize_files` to `True`.

For example, to create an uploaded file pipeline that also creates redacted files:
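A minimal sketch, with the assumption that the pipeline name is passed as the first argument to `create_local_pipeline`:

```python
# Create an uploaded file pipeline that also generates redacted files.
# Passing the pipeline name as the first argument is an assumption.
pipeline = textual.create_local_pipeline("my-pipeline", synthesize_files=True)
```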
The response contains the pipeline object.
To delete a pipeline, use `textual.delete_pipeline`.

To change whether a pipeline also generates synthesized files, use `pipeline.set_synthesize_files`.

To add a file to an uploaded file pipeline, use `pipeline.upload_file`.
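A brief sketch of these calls; the `upload_file` argument order (file bytes, then file name) and the argument passed to `delete_pipeline` are assumptions:

```python
# Turn synthesized (redacted) file generation on for an existing pipeline.
pipeline.set_synthesize_files(True)

# Add a local file to an uploaded file pipeline.
# The (file bytes, file name) argument order is an assumption.
with open("contract.docx", "rb") as f:
    pipeline.upload_file(f.read(), "contract.docx")

# Delete a pipeline; passing the pipeline identifier is an assumption.
textual.delete_pipeline(pipeline.id)
```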
For an Amazon S3 pipeline, you can configure the output location for the processed files. You can also identify the files and folders for the pipeline to process:
- To identify the output location for the processed files, use `s3_pipeline.set_output_location`.
- To identify individual files for the pipeline to process, use `s3_pipeline.add_files`.
- To identify prefixes - folders for which the pipeline processes all applicable files - use `s3_pipeline.add_prefixes`.
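A hedged sketch of these calls; the argument shapes (bucket name, then keys or prefixes) are assumptions for illustration:

```python
# Assumes `s3_pipeline` was returned by textual.create_s3_pipeline.
# Argument shapes below (bucket name, then keys or prefixes) are assumptions.
s3_pipeline.set_output_location("output-bucket", "textual-output/")
s3_pipeline.add_files("input-bucket", ["reports/q1.pdf", "reports/q2.pdf"])
s3_pipeline.add_prefixes("input-bucket", ["invoices/"])
```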
For an Azure pipeline, you can configure the output location for the processed files. You can also identify the files and folders for the pipeline to process:
- To identify the output location for the processed files, use `azure_pipeline.set_output_location`.
- To identify individual files for the pipeline to process, use `azure_pipeline.add_files`.
- To identify prefixes - folders for which the pipeline processes all applicable files - use `azure_pipeline.add_prefixes`.
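The Azure calls follow the same pattern; the use of container names and blob paths in place of buckets and keys is an assumption for illustration:

```python
# Assumes `azure_pipeline` was returned by textual.create_azure_pipeline.
# Container-based argument shapes are assumptions for illustration.
azure_pipeline.set_output_location("output-container", "textual-output/")
azure_pipeline.add_files("input-container", ["reports/q1.pdf"])
azure_pipeline.add_prefixes("input-container", ["invoices/"])
```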
To get the list of pipelines, use `textual.get_pipelines`.

The response contains a list of pipeline objects.

To use the pipeline identifier to get a single pipeline, use `textual.get_pipeline_by_id`.

The response contains a single pipeline object.

The pipeline identifier is displayed on the pipeline details page. To copy the identifier, click the copy icon.
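A quick sketch; the `name` and `id` attributes on the returned pipeline objects are assumptions for illustration:

```python
# List all pipelines; the attribute names printed here are assumptions.
for p in textual.get_pipelines():
    print(p.name, p.id)

# Get one pipeline by its identifier (copied from the pipeline details page).
pipeline = textual.get_pipeline_by_id("<pipeline identifier>")
```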
To run a pipeline, use `pipeline.run`.

The response contains the job identifier.

To get the list of pipeline runs, use `pipeline.get_runs`.

The response contains a list of pipeline run objects.
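For example, a short sketch of starting a run and listing the runs for a pipeline:

```python
# Start a pipeline run; the response is the job identifier.
job_id = pipeline.run()

# List the pipeline runs; the response is a list of pipeline run objects.
runs = pipeline.get_runs()
```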
Once you have the pipeline, to get an enumerator of the files in the pipeline from the most recent pipeline run, use `pipeline.enumerate_files`.

The response is an enumerator of file parse result objects.

To get a list of entities that were detected in a file, use `get_all_entities`. For example, to get the detected entities for all of the files in a pipeline:
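A minimal sketch, assuming that `get_all_entities` takes no arguments on each file parse result:

```python
# Iterate over the file parse results from the most recent pipeline run
# and print the entities that Textual detected in each file.
for parsed_file in pipeline.enumerate_files():
    entities = parsed_file.get_all_entities()
    print(entities)
```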
To provide a list of entity types and specify how to process them, use `get_entities`:
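A hedged sketch; the entity type names used in `generator_config` are illustrative, and passing `generator_config` and `generator_default` as keyword arguments is an assumption:

```python
# Entity type names below are illustrative examples.
generator_config = {
    "NAME_GIVEN": "Synthesis",    # replace the value with a realistic value
    "ORGANIZATION": "Redaction",  # replace the value with the value type
}

# Keyword-argument usage is an assumption for illustration.
entities = parsed_file.get_entities(
    generator_config=generator_config,
    generator_default="Off",  # keep all other entity types as is
)
```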
`generator_config` is a dictionary that specifies whether to redact, synthesize, or do neither for each entity type in the dictionary.

For a list of the entity types that Textual detects, go to Entity types that Textual detects.

For each entity type, you provide the handling type:
- `Redaction` indicates to replace the value with the value type.
- `Synthesis` indicates to replace the value with a realistic value.
- `Off` indicates to keep the value as is.

`generator_default` indicates how to process values for entity types that are not included in the `generator_config` dictionary.
The response contains the list of entities. For each value, the list includes:
- The entity type
- Where the value starts in the source file
- Where the value ends in the source file
- The original text of the entity
To get the Markdown output of a pipeline file, use `get_markdown`. In the request, you can provide `generator_config` and `generator_default` to configure how to present the detected entities in the output file.

The response contains the Markdown output, with the detected entities processed as specified in `generator_config` and `generator_default`.
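A minimal sketch of requesting the Markdown output, under the same assumptions as the `get_entities` example (illustrative entity type name, keyword arguments):

```python
# Get the Markdown output for a single file parse result, redacting
# given names (entity type name is illustrative) and leaving other
# entity types unchanged.
markdown = parsed_file.get_markdown(
    generator_config={"NAME_GIVEN": "Redaction"},
    generator_default="Off",
)
print(markdown)
```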
To split a pipeline file into text chunks that can be imported into an LLM, use `get_chunks`.

In the request, you set the maximum number of characters in each chunk. You can also provide `generator_config` and `generator_default` to configure how to present the detected entities in the text chunks.

The response contains the list of text chunks, with the detected entities processed as specified in `generator_config` and `generator_default`.
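A hedged sketch; the name of the maximum-characters parameter (`max_chars` here) is an assumption, as are the keyword arguments and the illustrative entity type name:

```python
# Split a file parse result into chunks for an LLM ingestion step.
# The `max_chars` parameter name is an assumption for illustration.
chunks = parsed_file.get_chunks(
    max_chars=1000,
    generator_config={"NAME_GIVEN": "Synthesis"},
    generator_default="Off",
)
for chunk in chunks:
    print(chunk)
```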