Tonic Textual pipelines can process files from sources such as Amazon S3, Azure Blob Storage, and Databricks Unity Catalog. You can also create pipelines to process files that you upload directly from your browser.
For those uploaded file pipelines, Textual always stores the files in an S3 bucket. On a self-hosted instance, before you add files to an uploaded file pipeline, you must configure the S3 bucket and the associated authentication credentials.
The configured S3 bucket is also used to store dataset files and individual files that you redact with the Textual SDK. If an S3 bucket is not configured, then:
The dataset and individual redacted files are stored in the Textual application database.
You cannot use Amazon Textract for PDF and image processing. If you configured Textual to use Amazon Textract, Textual instead uses Tesseract.
The authentication credentials for the S3 bucket include:
The AWS Region where the S3 bucket is located.
An AWS access key that is associated with an IAM user or role.
The secret key that is associated with the access key.
To provide the authentication credentials, you can either:
Provide the values directly as environment variable values.
Use the instance profile of the compute instance where Textual runs.
For an example IAM role that has the required permissions, go to #file-upload-example-iam-role.
In .env, add the following settings:
SOLAR_INTERNAL_BUCKET_NAME= <S3 bucket path>
AWS_REGION= <AWS Region>
AWS_ACCESS_KEY_ID= <AWS access key>
AWS_SECRET_ACCESS_KEY= <AWS secret key>
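For example, a filled-in .env entry might look like the following sketch. The bucket name, Region, and keys shown here are placeholders for illustration only; substitute your own values.
SOLAR_INTERNAL_BUCKET_NAME=textual-uploaded-files
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY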
If you use the instance profile of the compute instance, then only the bucket name is required.
In values.yaml, within env: { } under both textual_api_server and textual_worker, add the following settings:
SOLAR_INTERNAL_BUCKET_NAME
AWS_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
For example, if no other environment variables are defined:
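The following is a sketch only. It assumes that the Helm chart accepts an env map directly under the textual_api_server and textual_worker keys, as described above; the bracketed values are placeholders.
textual_api_server:
  env:
    SOLAR_INTERNAL_BUCKET_NAME: <S3 bucket path>
    AWS_REGION: <AWS Region>
    AWS_ACCESS_KEY_ID: <AWS access key>
    AWS_SECRET_ACCESS_KEY: <AWS secret key>
textual_worker:
  env:
    SOLAR_INTERNAL_BUCKET_NAME: <S3 bucket path>
    AWS_REGION: <AWS Region>
    AWS_ACCESS_KEY_ID: <AWS access key>
    AWS_SECRET_ACCESS_KEY: <AWS secret key>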
If you use the instance profile of the compute instance, then only the bucket name is required.