Databricks
Databricks is a cloud-based platform for big data processing. Tonic can run Spark jobs on both Databricks on AWS and Azure Databricks.
This section covers:

System requirements: supported versions and providers for Databricks.
Tonic differences and limitations: features that are unavailable or that work differently with the Databricks data connector.
Required Databricks configuration: configuration to complete in Databricks before you create a Databricks workspace.
Configure workspace data connections: data connection settings for Databricks workspaces.