Spark connector
This connector leverages ClickHouse-specific optimizations, such as advanced partitioning and predicate pushdown, to improve query performance and data handling. The connector is based on ClickHouse's official JDBC connector and manages its own catalog.
Before Spark 3.0, Spark lacked a built-in catalog concept, so users typically relied on external catalog systems such as Hive Metastore or AWS Glue. With these external solutions, users had to register their data source tables manually before accessing them in Spark. However, since Spark 3.0 introduced the catalog concept, Spark can now automatically discover tables by registering catalog plugins.
Spark's default catalog is spark_catalog, and tables are identified by {catalog name}.{database}.{table}. With the new
catalog feature, it is now possible to add and work with multiple catalogs in a single Spark application.
Choosing Between Catalog API and TableProvider API
The ClickHouse Spark connector supports two access patterns: the Catalog API and the TableProvider API (format-based access). Understanding the differences helps you choose the right approach for your use case.
Catalog API vs TableProvider API
| Feature | Catalog API | TableProvider API |
|---|---|---|
| Configuration | Centralized via Spark configuration | Per-operation via options |
| Table Discovery | Automatic via catalog | Manual table specification |
| DDL Operations | Full support (CREATE, DROP, ALTER) | Limited (automatic table creation only) |
| Spark SQL Integration | Native (clickhouse.database.table) | Requires format specification |
| Use Case | Long-term, stable connections with centralized config | Ad-hoc, dynamic, or temporary access |
Requirements
- Java 8 or 17 (Java 17+ required for Spark 4.0)
- Scala 2.12 or 2.13 (Spark 4.0 only supports Scala 2.13)
- Apache Spark 3.3, 3.4, 3.5, or 4.0
Compatibility matrix
| Version | Compatible Spark Versions | ClickHouse JDBC version |
|---|---|---|
| main | Spark 3.3, 3.4, 3.5, 4.0 | 0.9.4 |
| 0.10.0 | Spark 3.3, 3.4, 3.5, 4.0 | 0.9.5 |
| 0.9.0 | Spark 3.3, 3.4, 3.5, 4.0 | 0.9.4 |
| 0.8.1 | Spark 3.3, 3.4, 3.5 | 0.6.3 |
| 0.7.3 | Spark 3.3, 3.4 | 0.4.6 |
| 0.6.0 | Spark 3.3 | 0.3.2-patch11 |
| 0.5.0 | Spark 3.2, 3.3 | 0.3.2-patch11 |
| 0.4.0 | Spark 3.2, 3.3 | No dependency |
| 0.3.0 | Spark 3.2, 3.3 | No dependency |
| 0.2.1 | Spark 3.2 | No dependency |
| 0.1.2 | Spark 3.2 | No dependency |
Installation & setup
For integrating ClickHouse with Spark, there are multiple installation options to suit different project setups.
You can add the ClickHouse Spark connector as a dependency directly in your project's build file (such as in pom.xml
for Maven or build.sbt for SBT).
Alternatively, you can put the required JAR files in your $SPARK_HOME/jars/ folder, or pass them directly as a Spark
option using the --jars flag in the spark-submit command.
Both approaches ensure the ClickHouse connector is available in your Spark environment.
Import as a Dependency
- Maven
- Gradle
- SBT
- Spark SQL/Shell CLI
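As a rough illustration, an SBT setup might look like the following sketch. The artifact coordinates and versions shown are assumptions based on the JAR naming and compatibility matrix in this page, so verify the exact names and versions on Maven Central for your Spark and Scala versions:

```scala
// build.sbt -- illustrative only; verify coordinates and versions on Maven Central
libraryDependencies ++= Seq(
  // ClickHouse Spark connector runtime (assumed artifact naming; %% appends the Scala binary version)
  "com.clickhouse.spark" %% "clickhouse-spark-runtime-3.5" % "0.8.1",
  // clickhouse-jdbc with the "all" classifier, which bundles clickhouse-http and clickhouse-client
  "com.clickhouse" % "clickhouse-jdbc" % "0.6.3" classifier "all"
)
```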
Add the following repository if you want to use the SNAPSHOT version:
When working with Spark's shell options (Spark SQL CLI, Spark Shell CLI, and Spark Submit command), the dependencies can be registered by passing the required jars:
If you want to avoid copying the JAR files to your Spark client node, you can use the following instead:
Note: For SQL-only use cases, Apache Kyuubi is recommended for production.
Download the library
The name pattern of the binary JAR is:
You can find all available released JAR files in the Maven Central Repository and all daily build SNAPSHOT JAR files in the Sonatype OSS Snapshots Repository.
It's essential to include the clickhouse-jdbc JAR with the "all" classifier, as the connector relies on clickhouse-http and clickhouse-client — both of which are bundled in clickhouse-jdbc:all. Alternatively, you can add clickhouse-client JAR and clickhouse-http individually if you prefer not to use the full JDBC package.
In any case, ensure that the package versions are compatible according to the Compatibility Matrix.
Register the catalog (required)
In order to access your ClickHouse tables, you must configure a new Spark catalog with the following configs:
| Property | Value | Default Value | Required |
|---|---|---|---|
| spark.sql.catalog.<catalog_name> | com.clickhouse.spark.ClickHouseCatalog | N/A | Yes |
| spark.sql.catalog.<catalog_name>.host | <clickhouse_host> | localhost | No |
| spark.sql.catalog.<catalog_name>.protocol | http | http | No |
| spark.sql.catalog.<catalog_name>.http_port | <clickhouse_port> | 8123 | No |
| spark.sql.catalog.<catalog_name>.user | <clickhouse_username> | default | No |
| spark.sql.catalog.<catalog_name>.password | <clickhouse_password> | (empty string) | No |
| spark.sql.catalog.<catalog_name>.database | <database> | default | No |
| spark.<catalog_name>.write.format | json | arrow | No |
These settings could be set via one of the following:
- Edit or create spark-defaults.conf.
- Pass the configuration to your spark-submit command (or to your spark-shell/spark-sql CLI commands).
- Add the configuration when initiating your context.
When working with a ClickHouse cluster, you need to set a unique catalog name for each instance. For example:
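A minimal sketch of what this could look like when building the session in Scala; the host names and credentials are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Register two ClickHouse catalogs, one per instance (placeholder hosts and credentials)
val spark = SparkSession.builder()
  .appName("clickhouse-multi-catalog")
  .config("spark.sql.catalog.clickhouse1", "com.clickhouse.spark.ClickHouseCatalog")
  .config("spark.sql.catalog.clickhouse1.host", "clickhouse-host-1")
  .config("spark.sql.catalog.clickhouse1.protocol", "http")
  .config("spark.sql.catalog.clickhouse1.http_port", "8123")
  .config("spark.sql.catalog.clickhouse1.user", "default")
  .config("spark.sql.catalog.clickhouse1.password", "")
  .config("spark.sql.catalog.clickhouse1.database", "default")
  .config("spark.sql.catalog.clickhouse2", "com.clickhouse.spark.ClickHouseCatalog")
  .config("spark.sql.catalog.clickhouse2.host", "clickhouse-host-2")
  .config("spark.sql.catalog.clickhouse2.http_port", "8123")
  .getOrCreate()
```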
That way, you can access the clickhouse1 table <ck_db>.<ck_table> from Spark SQL as
clickhouse1.<ck_db>.<ck_table>, and the clickhouse2 table <ck_db>.<ck_table> as clickhouse2.<ck_db>.<ck_table>.
Using the TableProvider API (Format-based Access)
In addition to the catalog-based approach, the ClickHouse Spark connector supports a format-based access pattern via the TableProvider API.
Format-based Read Example
- Python
- Scala
- Java
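For instance, a read via the format-based API might look like the following Scala sketch; it assumes the data source short name clickhouse and uses placeholder connection values:

```scala
// Read a ClickHouse table through the TableProvider API (format-based access)
val df = spark.read
  .format("clickhouse")                 // assumed data source short name
  .option("host", "my-clickhouse-host") // placeholder
  .option("protocol", "http")
  .option("http_port", "8123")
  .option("database", "default")
  .option("table", "my_table")
  .option("user", "default")
  .option("password", "")
  .load()

df.show()
```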
Format-based Write Example
- Python
- Scala
- Java
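Similarly, a write through the format-based API could look like this sketch (same assumptions and placeholders as above):

```scala
// Write a DataFrame to ClickHouse through the TableProvider API
df.write
  .format("clickhouse")
  .mode("append")
  .option("host", "my-clickhouse-host")
  .option("http_port", "8123")
  .option("database", "default")
  .option("table", "my_table")
  .option("user", "default")
  .option("password", "")
  .save()
```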
TableProvider Features
The TableProvider API provides several powerful features:
Automatic Table Creation
When writing to a non-existent table, the connector automatically creates the table with an appropriate schema. The connector provides intelligent defaults:
- Engine: Defaults to MergeTree() if not specified. You can specify a different engine using the engine option (e.g., ReplacingMergeTree(), SummingMergeTree(), etc.)
- ORDER BY: Required. You must explicitly specify the order_by option when creating a new table. The connector validates that all specified columns exist in the schema.
- Nullable Key Support: Automatically adds settings.allow_nullable_key=1 if ORDER BY contains nullable columns
- Python
- Scala
- Java
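A Scala sketch of writing to a table that does not yet exist and letting the connector create it; the option names follow the table-creation options documented below, and the values are placeholders:

```scala
// Writing to a non-existent table: the connector creates it using these hints
df.write
  .format("clickhouse")
  .mode("append")
  .option("host", "my-clickhouse-host")
  .option("database", "default")
  .option("table", "events")
  .option("order_by", "id")                    // required when the table must be created
  .option("engine", "ReplacingMergeTree()")    // optional, defaults to MergeTree()
  .option("settings.allow_nullable_key", "1")  // optional; auto-added if ORDER BY has nullable columns
  .save()
```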
ORDER BY Required: The order_by option is required when creating a new table via the TableProvider API. You must explicitly specify which column(s) to use for the ORDER BY clause. The connector validates that all specified columns exist in the schema and will throw an error if any columns are missing.
Engine Selection: The default engine is MergeTree(), but you can specify any ClickHouse table engine using the engine option (e.g., ReplacingMergeTree(), SummingMergeTree(), AggregatingMergeTree(), etc.).
TableProvider Connection Options
When using the format-based API, the following connection options are available:
Connection Options
| Option | Description | Default Value | Required |
|---|---|---|---|
| host | ClickHouse server hostname | localhost | Yes |
| protocol | Connection protocol (http or https) | http | No |
| http_port | HTTP/HTTPS port | 8123 | No |
| database | Database name | default | Yes |
| table | Table name | N/A | Yes |
| user | Username for authentication | default | No |
| password | Password for authentication | (empty string) | No |
| ssl | Enable SSL connection | false | No |
| ssl_mode | SSL mode (NONE, STRICT, etc.) | STRICT | No |
| timezone | Timezone for date/time operations | server | No |
Table Creation Options
These options are used when the table doesn't exist and needs to be created:
| Option | Description | Default Value | Required |
|---|---|---|---|
| order_by | Column(s) to use for ORDER BY clause. Comma-separated for multiple columns | N/A | Yes* |
| engine | ClickHouse table engine (e.g., MergeTree(), ReplacingMergeTree(), SummingMergeTree(), etc.) | MergeTree() | No |
| settings.allow_nullable_key | Enable nullable keys in ORDER BY (for ClickHouse Cloud) | Auto-detected** | No |
| settings.<key> | Any ClickHouse table setting | N/A | No |
| cluster | Cluster name for Distributed tables | N/A | No |
| clickhouse.column.<name>.variant_types | Comma-separated list of ClickHouse types for Variant columns (e.g., String, Int64, Bool, JSON). Type names are case-sensitive. Spaces after commas are optional. | N/A | No |
* The order_by option is required when creating a new table. All specified columns must exist in the schema.
** Automatically set to 1 if ORDER BY contains nullable columns and not explicitly provided.
Best Practice: For ClickHouse Cloud, explicitly set settings.allow_nullable_key=1 if your ORDER BY columns might be nullable, as ClickHouse Cloud requires this setting.
Writing Modes
The Spark connector (both TableProvider API and Catalog API) supports the following Spark write modes:
- append: Add data to existing table
- overwrite: Replace all data in the table (truncates table)
Partition Overwrite Not Supported: The connector does not currently support partition-level overwrite operations (e.g., overwrite mode with partitionBy). This feature is in progress. See GitHub issue #34 for tracking this feature.
- Python
- Scala
- Java
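A short Scala sketch of both modes using the format-based API; the connection options are placeholders and abbreviated:

```scala
// Append: add rows to an existing table
df.write.format("clickhouse")
  .mode("append")
  .option("host", "my-clickhouse-host")
  .option("database", "default")
  .option("table", "my_table")
  .save()

// Overwrite: truncate the table, then write the new data
df.write.format("clickhouse")
  .mode("overwrite")
  .option("host", "my-clickhouse-host")
  .option("database", "default")
  .option("table", "my_table")
  .save()
```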
Configuring ClickHouse Options
Both the Catalog API and TableProvider API support configuring ClickHouse-specific options (not connector options). These are passed through to ClickHouse when creating tables or executing queries.
ClickHouse options allow you to configure ClickHouse-specific settings like allow_nullable_key, index_granularity, and other table-level or query-level settings. These are different from connector options (like host, database, table) which control how the connector connects to ClickHouse.
Using TableProvider API
With the TableProvider API, use the settings.<key> option format:
- Python
- Scala
- Java
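For instance, a write that passes ClickHouse table settings through might look like this Scala sketch; the setting names are the examples mentioned above, and the connection values are placeholders:

```scala
// Pass ClickHouse table settings through with the settings.<key> option format
df.write.format("clickhouse")
  .mode("append")
  .option("host", "my-clickhouse-host")
  .option("database", "default")
  .option("table", "events")
  .option("order_by", "id")
  .option("settings.index_granularity", "8192")
  .option("settings.allow_nullable_key", "1")
  .save()
```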
Using Catalog API
With the Catalog API, use the spark.sql.catalog.<catalog_name>.option.<key> format in your Spark configuration:
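As a sketch, with the catalog named clickhouse and placeholder values:

```scala
// ClickHouse settings passed through the catalog configuration (option.<key> format)
val spark = SparkSession.builder()
  .config("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
  .config("spark.sql.catalog.clickhouse.host", "my-clickhouse-host")
  .config("spark.sql.catalog.clickhouse.option.allow_nullable_key", "1")    // ClickHouse setting
  .config("spark.sql.catalog.clickhouse.option.index_granularity", "8192")  // ClickHouse setting
  .getOrCreate()
```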
Or set them when creating tables via Spark SQL:
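A sketch of a CREATE TABLE via Spark SQL that carries a ClickHouse setting in the table properties; the catalog, database, table, and column names are placeholders:

```scala
// Create a ClickHouse table from Spark SQL, passing a table setting via TBLPROPERTIES
spark.sql("""
  CREATE TABLE clickhouse.default.events (
    id   BIGINT NOT NULL,
    name STRING
  )
  TBLPROPERTIES (
    engine = 'MergeTree()',
    order_by = 'id',
    'settings.index_granularity' = '8192'
  )
""")
```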
ClickHouse Cloud settings
When connecting to ClickHouse Cloud, make sure to enable SSL and set the appropriate SSL mode. For example:
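A sketch using the format-based connection options documented above; the hostname, port, and credentials are placeholders, and the 8443 HTTPS port is an assumption to verify against your Cloud service details:

```scala
// TableProvider (format-based) options for a ClickHouse Cloud instance (placeholders)
val df = spark.read
  .format("clickhouse")
  .option("host", "<your-instance>.clickhouse.cloud")  // placeholder Cloud hostname
  .option("protocol", "https")
  .option("http_port", "8443")                         // assumed HTTPS port for Cloud
  .option("ssl", "true")
  .option("ssl_mode", "STRICT")                        // pick the mode appropriate for your deployment
  .option("user", "default")
  .option("password", "<password>")
  .option("database", "default")
  .option("table", "my_table")
  .load()
```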
Read data
- Java
- Scala
- Python
- Spark SQL
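As a Scala sketch, assuming a catalog registered as clickhouse and placeholder database and table names:

```scala
// Read through the registered catalog with Spark SQL
val df = spark.sql("SELECT * FROM clickhouse.my_db.my_table")
df.show()

// Or equivalently with the DataFrame API
val df2 = spark.table("clickhouse.my_db.my_table")
```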
Write data
Partition Overwrite Not Supported: The Catalog API does not currently support partition-level overwrite operations (e.g., overwrite mode with partitionBy). This feature is in progress. See GitHub issue #34 for tracking this feature.
- Java
- Scala
- Python
- Spark SQL
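A Scala sketch using the DataFrameWriterV2 API against the registered catalog; the catalog, database, and table names are placeholders, and the target table is assumed to exist:

```scala
import spark.implicits._  // assumes a SparkSession named spark is in scope

// Build a small DataFrame and append it to an existing ClickHouse table
val df = Seq((1L, "alice"), (2L, "bob")).toDF("id", "name")

df.writeTo("clickhouse.my_db.my_table").append()
```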
DDL operations
You can perform DDL operations on your ClickHouse instance using Spark SQL, with all changes immediately persisted in ClickHouse. Spark SQL allows you to write queries exactly as you would in ClickHouse, so you can directly execute commands such as CREATE TABLE, TRUNCATE, and more - without modification, for instance:
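For instance, a sketch of DDL statements executed through Spark SQL against the registered catalog (names are placeholders):

```scala
// Create a ClickHouse table directly from Spark SQL
spark.sql("""
  CREATE TABLE clickhouse.my_db.example_table (
    id          BIGINT    NOT NULL,
    create_time TIMESTAMP NOT NULL,
    value       STRING
  )
  TBLPROPERTIES (
    engine = 'MergeTree()',
    order_by = 'id'
  )
""")

// Truncate it again; the change is applied immediately in ClickHouse
spark.sql("TRUNCATE TABLE clickhouse.my_db.example_table")
```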
When using Spark SQL, only one statement can be executed at a time.
The above examples demonstrate Spark SQL queries, which you can run within your application using any API—Java, Scala, PySpark, or shell.
Working with VariantType
VariantType support is available in Spark 4.0+ and requires ClickHouse 25.3+ with experimental JSON/Variant types enabled.
The connector supports Spark's VariantType for working with semi-structured data. VariantType maps to ClickHouse's JSON and Variant types, allowing you to store and query flexible schema data efficiently.
This section focuses specifically on VariantType mapping and usage. For a complete overview of all supported data types, see the Supported data types section.
ClickHouse Type Mapping
| ClickHouse Type | Spark Type | Description |
|---|---|---|
| JSON | VariantType | Stores JSON objects only (must start with {) |
| Variant(T1, T2, ...) | VariantType | Stores multiple types including primitives, arrays, and JSON |
Reading VariantType Data
When reading from ClickHouse, JSON and Variant columns are automatically mapped to Spark's VariantType:
- Scala
- Python
- Java
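A Scala sketch of reading a JSON/Variant column; it assumes a table with a Variant-typed data column and uses Spark 4.0's variant functions:

```scala
// JSON / Variant columns arrive as Spark VariantType
val df = spark.sql("SELECT id, data FROM clickhouse.my_db.events")
df.printSchema()  // data: variant

// Extract typed fields from the variant with Spark 4.0 SQL functions
spark.sql("""
  SELECT id, variant_get(data, '$.user.name', 'string') AS user_name
  FROM clickhouse.my_db.events
""").show()
```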
Writing VariantType Data
You can write VariantType data to ClickHouse using either JSON or Variant column types:
- Scala
- Python
- Java
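A Scala sketch of writing VariantType data, building variant values with Spark 4.0's parse_json and appending to a ClickHouse table (the table name is a placeholder):

```scala
import org.apache.spark.sql.functions.{col, parse_json}
import spark.implicits._  // assumes a SparkSession named spark is in scope

// Build a DataFrame with a VariantType column from JSON strings
val variants = Seq(
  (1L, """{"name": "alice", "score": 10}"""),
  (2L, """{"name": "bob", "tags": ["x", "y"]}""")
).toDF("id", "json_str")
  .select(col("id"), parse_json(col("json_str")).as("data"))

// Append into a ClickHouse table whose data column is JSON or Variant
variants.writeTo("clickhouse.my_db.events").append()
```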
Creating VariantType Tables with Spark SQL
You can create VariantType tables using Spark SQL DDL:
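A sketch of such a DDL statement; it assumes Spark SQL's VARIANT type and uses the clickhouse.column.<name>.variant_types table property described below:

```scala
// Create a table with a Variant column; the property controls the ClickHouse column type
spark.sql("""
  CREATE TABLE clickhouse.my_db.events (
    id   BIGINT NOT NULL,
    data VARIANT
  )
  TBLPROPERTIES (
    engine = 'MergeTree()',
    order_by = 'id',
    'clickhouse.column.data.variant_types' = 'String, Int64, Bool, JSON'
  )
""")
```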
Configuring Variant Types
When creating tables with VariantType columns, you can specify which ClickHouse types to use:
JSON Type (Default)
If no variant_types property is specified, the column defaults to ClickHouse's JSON type, which only accepts JSON objects:
This creates the following ClickHouse query:
Variant Type with Multiple Types
To support primitives, arrays, and JSON objects, specify the types in the variant_types property:
This creates the following ClickHouse query:
Supported Variant Types
The following ClickHouse types can be used in Variant():
- Primitives: String, Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32, UInt64, Float32, Float64, Bool
- Arrays: Array(T) where T is any supported type, including nested arrays
- JSON: JSON for storing JSON objects
Read Format Configuration
By default, JSON and Variant columns are read as VariantType. You can override this behavior to read them as strings:
- Scala
- Python
- Java
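A Scala sketch of overriding the read mapping so JSON/Variant columns come back as strings, using the spark.clickhouse.read.jsonAs setting noted in the data type tables below:

```scala
// Read JSON / Variant columns as plain strings instead of VariantType
spark.conf.set("spark.clickhouse.read.jsonAs", "string")

val asStrings = spark.sql("SELECT id, data FROM clickhouse.my_db.events")
asStrings.printSchema()  // data: string
```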
Write Format Support
VariantType write support varies by format:
| Format | Support | Notes |
|---|---|---|
| JSON | ✅ Full | Supports both JSON and Variant types. Recommended for VariantType data |
| Arrow | ⚠️ Partial | Supports writing to ClickHouse JSON type. Does not support ClickHouse Variant type. Full support is pending resolution of https://github.com/ClickHouse/ClickHouse/issues/92752 |
Configure the write format:
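For example, in Scala:

```scala
// Use the JSON write format, which supports both ClickHouse JSON and Variant types
spark.conf.set("spark.clickhouse.write.format", "json")
```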
If you need to write to a ClickHouse Variant type, use JSON format. Arrow format only supports writing to JSON type.
Best Practices
- Use JSON type for JSON-only data: If you only store JSON objects, use the default JSON type (no variant_types property)
- Specify types explicitly: When using Variant(), explicitly list all types you plan to store
- Enable experimental features: Ensure ClickHouse has allow_experimental_json_type = 1 enabled
- Use JSON format for writes: JSON format is recommended for VariantType data for better compatibility
- Consider query patterns: JSON/Variant types support ClickHouse's JSON path queries for efficient filtering
- Column hints for performance: When using JSON fields in ClickHouse, adding column hints improves query performance. Currently, adding column hints via Spark is not supported. See GitHub issue #497 for tracking this feature.
Example: Complete Workflow
- Scala
- Python
- Java
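A condensed end-to-end Scala sketch tying the pieces above together; the catalog, database, and table names are placeholders:

```scala
import org.apache.spark.sql.functions.{col, parse_json}
import spark.implicits._  // assumes a SparkSession named spark is in scope

// 1. Prefer the JSON write format for VariantType data
spark.conf.set("spark.clickhouse.write.format", "json")

// 2. Create a table with a Variant column (property from the section above)
spark.sql("""
  CREATE TABLE clickhouse.my_db.variant_demo (
    id   BIGINT NOT NULL,
    data VARIANT
  )
  TBLPROPERTIES (
    engine = 'MergeTree()',
    order_by = 'id',
    'clickhouse.column.data.variant_types' = 'String, Int64, Bool, JSON'
  )
""")

// 3. Write a few variant values
Seq((1L, """{"kind": "click", "count": 3}"""), (2L, """{"kind": "view"}"""))
  .toDF("id", "json_str")
  .select(col("id"), parse_json(col("json_str")).as("data"))
  .writeTo("clickhouse.my_db.variant_demo").append()

// 4. Read back and extract a typed field
spark.sql("""
  SELECT id, variant_get(data, '$.count', 'int') AS count
  FROM clickhouse.my_db.variant_demo
""").show()
```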
Configurations
The following are the adjustable configurations available in the connector.
Using Configurations: These are Spark-level configuration options that apply to both Catalog API and TableProvider API. They can be set in two ways:
- Global Spark configuration (applies to all operations)
- Per-operation override (TableProvider API only; can override global settings)
Both approaches are shown in the sketch below.
Alternatively, set them in spark-defaults.conf or when creating the Spark session.
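A Scala sketch of both approaches, using spark.clickhouse.write.batchSize as the example key; passing the full key as a per-operation option is an assumption to verify for your connector version:

```scala
// Global: applies to every read/write in this session
spark.conf.set("spark.clickhouse.write.batchSize", "20000")

// Per-operation override (TableProvider API only)
df.write.format("clickhouse")
  .mode("append")
  .option("host", "my-clickhouse-host")
  .option("database", "default")
  .option("table", "my_table")
  .option("spark.clickhouse.write.batchSize", "50000")  // assumed: overrides the global setting
  .save()
```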
| Key | Default | Description | Since |
|---|---|---|---|
| spark.clickhouse.ignoreUnsupportedTransform | true | ClickHouse supports using complex expressions as sharding keys or partition values, e.g. cityHash64(col_1, col_2), and those can not be supported by Spark now. If true, ignore the unsupported expressions and log a warning, otherwise fail fast w/ an exception. Warning: When spark.clickhouse.write.distributed.convertLocal=true, ignoring unsupported sharding keys may corrupt the data. The connector validates this and throws an error by default. To allow it, explicitly set spark.clickhouse.write.distributed.convertLocal.allowUnsupportedSharding=true. | 0.4.0 |
| spark.clickhouse.read.compression.codec | lz4 | The codec used to decompress data for reading. Supported codecs: none, lz4. | 0.5.0 |
| spark.clickhouse.read.distributed.convertLocal | true | When reading a Distributed table, read the local table instead of the Distributed table itself. If true, ignore spark.clickhouse.read.distributed.useClusterNodes. | 0.1.0 |
| spark.clickhouse.read.fixedStringAs | binary | Read ClickHouse FixedString type as the specified Spark data type. Supported types: binary, string | 0.8.0 |
| spark.clickhouse.read.format | json | Serialize format for reading. Supported formats: json, binary | 0.6.0 |
| spark.clickhouse.read.runtimeFilter.enabled | false | Enable runtime filter for reading. | 0.8.0 |
| spark.clickhouse.read.splitByPartitionId | true | If true, construct input partition filter by virtual column _partition_id, instead of partition value. There are known issues with assembling SQL predicates by partition value. This feature requires ClickHouse Server v21.6+ | 0.4.0 |
| spark.clickhouse.useNullableQuerySchema | false | If true, mark all the fields of the query schema as nullable when executing CREATE/REPLACE TABLE ... AS SELECT ... on creating the table. Note, this configuration requires SPARK-43390(available in Spark 3.5), w/o this patch, it always acts as true. | 0.8.0 |
| spark.clickhouse.write.batchSize | 10000 | The number of records per batch on writing to ClickHouse. | 0.1.0 |
| spark.clickhouse.write.compression.codec | lz4 | The codec used to compress data for writing. Supported codecs: none, lz4. | 0.3.0 |
| spark.clickhouse.write.distributed.convertLocal | false | When writing to a Distributed table, write to the local table instead of the Distributed table itself. If true, ignore spark.clickhouse.write.distributed.useClusterNodes. This bypasses ClickHouse's native routing, requiring Spark to evaluate the sharding key. When using unsupported sharding expressions, set spark.clickhouse.ignoreUnsupportedTransform to false to prevent silent data distribution errors. | 0.1.0 |
| spark.clickhouse.write.distributed.convertLocal.allowUnsupportedSharding | false | Allow writing to Distributed tables with convertLocal=true and ignoreUnsupportedTransform=true when the sharding key is unsupported. This is dangerous and may cause data corruption due to incorrect sharding. When set to true, you must ensure that your data is properly sorted/sharded before writing, as Spark cannot evaluate the unsupported sharding expression. Only set to true if you understand the risks and have verified your data distribution. By default, this combination will throw an error to prevent silent data corruption. | 0.10.0 |
| spark.clickhouse.write.distributed.useClusterNodes | true | Write to all nodes of cluster when writing Distributed table. | 0.1.0 |
| spark.clickhouse.write.format | arrow | Serialize format for writing. Supported formats: json, arrow | 0.4.0 |
| spark.clickhouse.write.localSortByKey | true | If true, do local sort by sort keys before writing. | 0.3.0 |
| spark.clickhouse.write.localSortByPartition | value of spark.clickhouse.write.repartitionByPartition | If true, do local sort by partition before writing. If not set, it equals to spark.clickhouse.write.repartitionByPartition. | 0.3.0 |
| spark.clickhouse.write.maxRetry | 3 | The maximum number of retries for a single batch write that failed with retryable codes. | 0.1.0 |
| spark.clickhouse.write.repartitionByPartition | true | Whether to repartition data by ClickHouse partition keys to meet the distribution of the ClickHouse table before writing. | 0.3.0 |
| spark.clickhouse.write.repartitionNum | 0 | Repartitioning data to meet the distribution of the ClickHouse table is required before writing; use this conf to specify the repartition number. A value less than 1 means no requirement. | 0.1.0 |
| spark.clickhouse.write.repartitionStrictly | false | If true, Spark will strictly distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write. Otherwise, Spark may apply certain optimizations to speed up the query but break the distribution requirement. Note, this configuration requires SPARK-37523(available in Spark 3.4), w/o this patch, it always acts as true. | 0.3.0 |
| spark.clickhouse.write.retryInterval | 10s | The interval in seconds between write retries. | 0.1.0 |
| spark.clickhouse.write.retryableErrorCodes | 241 | The retryable error codes returned by the ClickHouse server when a write fails. | 0.1.0 |
Supported data types
This section outlines the mapping of data types between Spark and ClickHouse. The tables below provide quick references for converting data types when reading from ClickHouse into Spark and when inserting data from Spark into ClickHouse.
Reading data from ClickHouse into Spark
| ClickHouse Data Type | Spark Data Type | Supported | Is Primitive | Notes |
|---|---|---|---|---|
| Nothing | NullType | ✅ | Yes | |
| Bool | BooleanType | ✅ | Yes | |
| UInt8, Int16 | ShortType | ✅ | Yes | |
| Int8 | ByteType | ✅ | Yes | |
| UInt16, Int32 | IntegerType | ✅ | Yes | |
| UInt32, Int64, UInt64 | LongType | ✅ | Yes | |
| Int128, UInt128, Int256, UInt256 | DecimalType(38, 0) | ✅ | Yes | |
| Float32 | FloatType | ✅ | Yes | |
| Float64 | DoubleType | ✅ | Yes | |
| String, UUID, Enum8, Enum16, IPv4, IPv6 | StringType | ✅ | Yes | |
| FixedString | BinaryType, StringType | ✅ | Yes | Controlled by configuration READ_FIXED_STRING_AS |
| Decimal | DecimalType | ✅ | Yes | Precision and scale up to Decimal128 |
| Decimal32 | DecimalType(9, scale) | ✅ | Yes | |
| Decimal64 | DecimalType(18, scale) | ✅ | Yes | |
| Decimal128 | DecimalType(38, scale) | ✅ | Yes | |
| Date, Date32 | DateType | ✅ | Yes | |
| DateTime, DateTime32, DateTime64 | TimestampType | ✅ | Yes | |
| Array | ArrayType | ✅ | No | Array element type is also converted |
| Map | MapType | ✅ | No | Keys are limited to StringType |
| IntervalYear | YearMonthIntervalType(Year) | ✅ | Yes | |
| IntervalMonth | YearMonthIntervalType(Month) | ✅ | Yes | |
| IntervalDay, IntervalHour, IntervalMinute, IntervalSecond | DayTimeIntervalType | ✅ | No | Specific interval type is used |
| JSON, Variant | VariantType | ✅ | No | Requires Spark 4.0+ and ClickHouse 25.3+. Can be read as StringType with spark.clickhouse.read.jsonAs=string |
| Object | | ❌ | | |
| Nested | | ❌ | | |
| Tuple | StructType | ✅ | No | Supports both named and unnamed tuples. Named tuples map to struct fields by name, unnamed tuples use _1, _2, etc. Supports nested structs and nullable fields |
| Point | | ❌ | | |
| Polygon | | ❌ | | |
| MultiPolygon | | ❌ | | |
| Ring | | ❌ | | |
| IntervalQuarter | | ❌ | | |
| IntervalWeek | | ❌ | | |
| Decimal256 | | ❌ | | |
| AggregateFunction | | ❌ | | |
| SimpleAggregateFunction | | ❌ | | |
Inserting data from Spark into ClickHouse
| Spark Data Type | ClickHouse Data Type | Supported | Is Primitive | Notes |
|---|---|---|---|---|
| BooleanType | Bool | ✅ | Yes | Mapped to Bool type (not UInt8) since version 0.9.0 |
| ByteType | Int8 | ✅ | Yes | |
| ShortType | Int16 | ✅ | Yes | |
| IntegerType | Int32 | ✅ | Yes | |
| LongType | Int64 | ✅ | Yes | |
| FloatType | Float32 | ✅ | Yes | |
| DoubleType | Float64 | ✅ | Yes | |
| StringType | String | ✅ | Yes | |
| VarcharType | String | ✅ | Yes | |
| CharType | String | ✅ | Yes | |
| DecimalType | Decimal(p, s) | ✅ | Yes | Precision and scale up to Decimal128 |
| DateType | Date | ✅ | Yes | |
| TimestampType | DateTime | ✅ | Yes | |
| ArrayType (list, tuple, or array) | Array | ✅ | No | Array element type is also converted |
| MapType | Map | ✅ | No | Keys are limited to StringType |
| StructType | Tuple | ✅ | No | Converted to named Tuple with field names |
| VariantType | JSON or Variant | ✅ | No | Requires Spark 4.0+ and ClickHouse 25.3+. Defaults to JSON type. Use the clickhouse.column.<name>.variant_types property to specify Variant with multiple types. |
| Object | | ❌ | | |
| Nested | | ❌ | | |
Contributing and support
If you'd like to contribute to the project or report any issues, we welcome your input! Visit our GitHub repository to open an issue, suggest improvements, or submit a pull request. Please check the contribution guidelines in the repository before starting. Thank you for helping improve the ClickHouse Spark connector!