# Structural differences and limitations with Databricks

{% hint style="info" %}
**Required license:** Professional or Enterprise
{% endhint %}

## No automatic sensitivity scans

For a Databricks workspace, Structural does not automatically run sensitivity scans. You must [run the sensitivity scan manually](https://docs.tonic.ai/app/generation/identify-sensitive-data/running-the-structural-sensitivity-scan#sensitivity-scan-manual) or [set up scheduled sensitivity scans](https://docs.tonic.ai/app/generation/identify-sensitive-data/running-the-structural-sensitivity-scan#sensitivity-scan-schedule).

## No workspace inheritance <a href="#databricks-tonic-differences-workspace-inheritance" id="databricks-tonic-differences-workspace-inheritance"></a>

Databricks workspaces do not support workspace inheritance.

## Table mode limitations <a href="#databricks-tonic-differences-table-modes" id="databricks-tonic-differences-table-modes"></a>

You can only assign the [De-Identify](https://docs.tonic.ai/app/generation/table-modes#de-identify) or [Truncate](https://docs.tonic.ai/app/generation/table-modes#truncate) table modes.

For Truncate mode, the table is ignored completely and does not exist in the destination database.

While you cannot assign the [Incremental](https://docs.tonic.ai/app/generation/table-modes#incremental) table mode in a Databricks workspace, you can replicate its effect if the following conditions are met:

* The data includes partition columns.
* You set `spark.sql.sources.partitionOverwriteMode=dynamic`.
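For example, with dynamic partition overwrite enabled, an `INSERT OVERWRITE` replaces only the partitions that appear in the incoming data, which approximates incremental generation. The table and column names below are hypothetical:

```sql
-- Spark setting: overwrite only the partitions present in the new data
SET spark.sql.sources.partitionOverwriteMode=dynamic;

-- Hypothetical table partitioned by event_date; partitions not present
-- in staging_events are left untouched
INSERT OVERWRITE TABLE events
SELECT * FROM staging_events;
```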

## Generator limitations <a href="#databricks-tonic-differences-generators" id="databricks-tonic-differences-generators"></a>

Depending on the Databricks version, a Databricks workspace can only use the following generators:

<table><thead><tr><th valign="top">Databricks 10.4 and earlier</th><th valign="top">Databricks 11.3 and later</th><th data-hidden></th></tr></thead><tbody><tr><td valign="top">Address<br>Alphanumeric String Key<br>ASCII Key<br>Business Name<br>Categorical<br>Character Scramble<br>Character Substitution<br>Company Name<br>Conditional<br>Constant<br>Continuous<br>Custom Categorical<br>Date Truncation<br>Email<br>File Name<br>Find and Replace<br>FNR<br>Geo<br>HIPAA Address<br>Hostname<br>Integer Key<br>IP Address<br>MAC Address<br>Name<br>Noise Generator<br>Null<br>Numeric String Key<br>Phone<br>Random Boolean<br>Random Double<br>Random Hash<br>Random Integer<br>Random Timestamp<br>Random UUID<br>Regex Mask<br>Sequential Integer<br>Shipping Container<br>SSN<br>Struct Mask<br>Timestamp Shift<br>Unique Email<br>URL<br>UUID Key<br>XML Mask</td><td valign="top">Address<br>Business Name<br>Categorical<br>Character Scramble<br>Company Name<br>Conditional<br>Constant<br>Continuous<br>Custom Categorical<br>Date Truncation<br>Email<br>FNR<br>HIPAA Address<br>Integer Key<br>IP Address<br>JSON Mask<br>MAC Address<br>Name<br>Noise Generator<br>Null<br>Random Double<br>Random Hash<br>Random Integer<br>Random UUID<br>Regex Mask<br>SSN<br>Struct Mask<br>Timestamp Shift<br>UUID Key</td><td></td></tr></tbody></table>

## No subsetting, but support for table filtering <a href="#databricks-tonic-differences-subsetting" id="databricks-tonic-differences-subsetting"></a>

Databricks workspaces do not support subsetting.

However, for tables that use the De-Identify table mode, you can provide a `WHERE` clause to filter the table. For details, go to [Table filtering](https://docs.tonic.ai/app/generation/subsetting/table-filtering "mention").
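As a sketch, a table filter is the body of a `WHERE` clause that Structural applies when it reads the source table. The column names and values here are hypothetical:

```sql
-- Hypothetical filter: only rows matching this condition are
-- read from the source table and written to the destination
created_at >= '2023-01-01' AND region = 'US'
```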

## No upsert <a href="#databricks-tonic-differences-upsert" id="databricks-tonic-differences-upsert"></a>

Databricks workspaces do not support upsert.

## No output to a container repository <a href="#databricks-tonic-limitations-containerization" id="databricks-tonic-limitations-containerization"></a>

For Databricks workspaces, you cannot write the destination data to a container repository.

## No post-job scripts

For Databricks workspaces, there is no option to run post-job scripts.

Instead, you can create [webhooks](https://docs.tonic.ai/app/workflows/webhooks) that are triggered by data generation jobs.
