The Docker Compose file is available in the GitHub repository https://github.com/TonicAI/textual_docker_compose/tree/main.
Fork the repository.
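After you fork the repository, pull a local copy of the compose files to work from. For example, where <your-github-org> is a placeholder for your own GitHub organization or username:

```shell
# Clone your fork of the Docker Compose repository
# (<your-github-org> is a placeholder, not a real organization name)
git clone https://github.com/<your-github-org>/textual_docker_compose.git
cd textual_docker_compose
```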
To deploy Textual:
Rename sample.env to .env.
In .env, provide values for the required settings. These settings are not commented out and have <FILL IN> as a placeholder value:
SOLAR_VERSION - Provided by Tonic.ai.
SOLAR_LICENSE - Provided by Tonic.ai.
ENVIRONMENT_NAME - The name that you want to use for your Textual instance. For example, my-company-name.
SOLAR_SECRET - The string to use for Textual encryption.
SOLAR_DB_PASSWORD - The password that you want to use for the Textual application database, which stores the metadata for Textual, including the datasets and custom models. Textual deploys a PostgreSQL database container for the application database.
To deploy and start Textual, run docker-compose up -d.
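Once filled in, the required section of .env looks something like the following. Every value here is illustrative; use the version and license that Tonic.ai provided, and choose your own secret and password:

```
SOLAR_VERSION=<version provided by Tonic.ai>
SOLAR_LICENSE=<license key provided by Tonic.ai>
ENVIRONMENT_NAME=my-company-name
SOLAR_SECRET=<a long random string used for encryption>
SOLAR_DB_PASSWORD=<a strong database password>
```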
You install a self-hosted instance of Tonic Textual on either:
A VM or server that runs Linux and on which you have superuser access.
A local machine that runs Mac, Windows, or Linux.
At minimum, we recommend that the server or cluster that you deploy Textual to has access to the following resources:
Nvidia GPU with 16GB of GPU RAM. We recommend at least 6GB of GPU RAM for each textual-ml worker.
If you use only a CPU and not a GPU, then we recommend an m5.2xlarge instance. However, without a GPU, performance is significantly slower.
The number of words per second that Textual processes depends on many factors, including:
The hardware that runs the textual-ml container
The number of workers that are assigned to the textual-ml container
The auxiliary model, if any, that is used in the textual-ml container
To optimize Textual's throughput and cost, we recommend that the textual-ml container runs on modern hardware with GPU compute. If you use AWS, we recommend a g5 instance with 1 GPU.
To use GPU resources:
Ensure that the correct Nvidia drivers are installed for your instance.
If you use Kubernetes to deploy Textual, follow the instructions in the NVIDIA GPU operator documentation.
If you use Minikube, then use the instructions in Using NVIDIA GPUs with Minikube.
If you use Docker Compose to deploy Textual, follow these steps to install the nvidia-container-runtime.
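With the NVIDIA container runtime installed, Docker Compose can reserve a GPU for the ML service. A minimal sketch, assuming the service is named textual-ml; check the service name in the repository's compose file:

```yaml
services:
  textual-ml:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1            # one GPU; allow at least 6GB GPU RAM per worker
              capabilities: [gpu]
```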
The Tonic Textual Helm chart is available in the GitHub repository https://github.com/TonicAI/textual_helm_charts.
To use the Helm chart, you can either:
Use the OCI-based registry that Tonic hosts on quay.io.
Fork or clone the repository and then maintain it locally.
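If you take the fork or clone route, pull the chart repository locally so that you can maintain it. For example:

```shell
# Clone the Helm chart repository (or your fork of it) to maintain it locally
git clone https://github.com/TonicAI/textual_helm_charts.git
cd textual_helm_charts
```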
During the onboarding period, you are provided with access credentials for the Tonic Docker image repository on Quay.io. If you require new credentials, or you experience issues accessing the repository, contact support@tonic.ai.
Before you deploy Textual, you create a values.yaml file with the configuration for your instance.
For details about the required and optional configuration options, go to the repository readme.
To deploy and validate access to Textual from the forked repository, follow the instructions in the repository readme.
To use the OCI-based registry, follow the repository readme, which includes the details on how to populate a values.yaml file and deploy Textual from the registry.
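As a sketch, installing from an OCI-based registry generally follows this pattern. The chart path and release name below are assumptions, not confirmed values; use the exact reference from the repository readme:

```shell
# Log in to Quay.io with the credentials that Tonic.ai provided
helm registry login quay.io -u <username> -p <password>

# Install the chart from the OCI registry, using your values.yaml
# (oci://quay.io/tonicai/textual is an assumed path; use the one in the readme)
helm install textual oci://quay.io/tonicai/textual --values values.yaml
```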
The Tonic Textual images are stored on Quay.io. During onboarding, Tonic.ai provides you with credentials to access the image repository. If you require new credentials, or you experience issues accessing the repository, contact support@tonic.ai.
You can deploy Textual using either Kubernetes or Docker.
System requirements
System requirements to deploy a self-hosted Textual instance.
Deploy on Docker
How to use Docker Compose to deploy a self-hosted Textual instance on Docker.
Deploy on Kubernetes
How to use Helm to deploy a self-hosted Textual instance on Kubernetes.