{"id":"src_wN2GOhorcJvCjyQx9LmEo","slug":"clickhouse-changelog","name":"ClickHouse Changelog","type":"scrape","url":"https://clickhouse.com/docs/whats-new/changelog","orgId":"org_rJ1apPbNZUQ5O1rSzYqoG","org":{"slug":"clickhouse","name":"ClickHouse"},"isPrimary":true,"metadata":"{\"noFeedFound\":true,\"provider\":\"docusaurus\",\"providerDetectedAt\":\"2026-04-11T14:58:10.396Z\",\"evaluatedMethod\":\"scrape\",\"evaluatedAt\":\"2026-04-11T14:58:10.396Z\",\"parseInstructions\":\"This is a single page containing multiple ClickHouse version releases (e.g. 26.3, 26.2, 26.1). Each version starts with a heading like 'ClickHouse release 26.3 LTS' followed by a date. Group all content under a version heading (Backward Incompatible Changes, New Features, Bug Fixes, etc.) into ONE release per version. Do NOT create separate releases for individual changelog items.\",\"renderRequired\":false}","releaseCount":5,"releasesLast30Days":5,"avgReleasesPerWeek":1.4,"latestVersion":"26.3","latestDate":"2026-03-26T00:00:00.000Z","changelogUrl":null,"hasChangelogFile":false,"lastFetchedAt":"2026-04-11T15:18:44.136Z","trackingSince":"2026-03-26T00:00:00.000Z","releases":[{"id":"rel_ff5x69Y7vhdbPCfSlJv6m","version":"26.3","title":"ClickHouse release 26.3 LTS","summary":"#### Backward Incompatible Changes\n\n- **Data type serialization:** Propagate data types serialization versions to nested data types. String serializat...","content":"#### Backward Incompatible Changes\n\n- **Data type serialization:** Propagate data types serialization versions to nested data types. String serialization version `with_size_stream` now applies to String types inside Array/Map/Variant/JSON/etc. Controlled by `propagate_types_serialization_versions_to_nested_types` (enabled by default). **Upgrade is safe, but downgrade is not** — data written by 26.3 in columns with nested types will be unreadable in older versions.\n- **Skip index type:** Removed the `hypothesis` skip index type. 
Creating tables with `INDEX ... TYPE hypothesis` now produces an error.\n- **Function removal:** Removed experimental `detectProgrammingLanguage` function.\n- **NOT operator precedence:** Fixed to match SQL standard. `NOT` now binds looser than `IS NULL`, `BETWEEN`, `LIKE`, and arithmetic operators. Queries relying on previous non-standard behavior may change results.\n- **Skip index filenames:** Fixed skip index files not respecting `replace_long_file_name_to_hash` setting. Long-named indices are now hashed like column files. Backward compatible for upgrades, but downgrading may cause long-named indices to be ignored.\n- **Async insert default:** Async insert is now enabled by default, batching all small inserts. It can be disabled at config, session, query, or table level. Set `compatibility=<version less than 26.2>` for previous behavior.\n- **MySQL datatype mapping:** Changed default `mysql_datatypes_support_level` to `decimal,datetime64,date2Date32`, enabling proper mapping of MySQL DATE/DECIMAL/DATETIME to Date32/Decimal/DateTime64. Previously MySQL DATE mapped to Date (cannot represent dates before 1970-01-01).\n- **UDF stderr handling:** Changed default `stderr_reaction` from `throw` to `log_last` for executable UDFs. UDFs writing warnings to stderr no longer fail when exit code is 0.\n- **Metadata changes:** Fixed normal projections metadata so multi-column sorting keys are properly recognized.\n- **Index function API:** `mergeTreeAnalyzeIndexes{,UUID}` now accepts array of part names instead of regexp for improved performance (experimental).\n\n#### New Features\n\n- **Bucketed Map serialization:** Added bucketed serialization for Map columns in MergeTree with `map_serialization_version = 'with_buckets'`. Keys are split into hash-based buckets, providing 2-49x speedup for single-key lookups. 
New settings: `map_serialization_version`, `max_buckets_in_map`, `map_buckets_strategy`, `map_buckets_coefficient`, `map_buckets_min_avg_size`.\n- **Materialized CTE:** Support evaluating CTEs only once and storing results in temporary tables.\n- **SQL-standard functions:** Allow certain functions like `NOW` without parentheses for compatibility.\n- **naturalSortKey function:** Added `naturalSortKey(s)` function.\n- **JSON input for JSONExtract:** Support native JSON/Object input for JSONExtract functions.\n- **Nullable query parameters:** If a query parameter has Nullable type and is not specified, assume value is NULL.\n- **ZooKeeper auxiliary support:** Support auxiliary ZooKeeper for Replicated database.\n- **JSON has function:** Support `has` function for JSON type to check path existence.\n- **mergeTreeTextIndex function:** New table function to read data directly from a text index for introspection.\n- **table_readonly setting:** Add MergeTree setting to mark tables as read-only.\n- **Partition pruning settings:** New `use_partition_pruning` setting and `use_partition_key` alias to disable partition pruning.\n- **Iceberg EXECUTE expire_snapshots:** Support `ALTER TABLE ... 
EXECUTE expire_snapshots('<timestamp>')` for Iceberg tables.\n- **HTTP handlers per port:** Allow each `type=http` entry in `<protocols>` to specify custom `<handlers>` key for different routing rules per port.\n- **EXPLAIN improvements:** Add `pretty=1` option for tree-style indented output and `compact=1` to collapse Expression steps.\n- **Access control:** New `restore_access_entities_with_current_grants` setting for restored users/roles in backups.\n- **Unicode functions:** Added `caseFoldUTF8`, `removeDiacriticsUTF8`, and `normalizeUTF8NFKCCasefold` functions.\n- **asciiCJK tokenizer:** New tokenizer for full-text indexes and `tokens` function using Unicode word boundary rules.\n- **Unavailable shards limits:** New `max_skip_unavailable_shards_num` and `max_skip_unavailable_shards_ratio` settings to limit silent shard skipping.\n- **SOME keyword:** Added `SOME` keyword for subquery expressions (identical to `ANY`).\n- **FixedString output formatting:** New `output_format_trim_fixed_string` setting to strip trailing null bytes.\n- **Parenthesized table joins:** Support parenthesized table join expressions in FROM clause.\n- **toDaysInMonth function:** Returns number of days in the month of specified date.\n\n#### Experimental Features\n\n- **WebAssembly UDFs:** Experimental support for WebAssembly-based user-defined functions with Wasmtime backend.\n- **External SQL dialects:** Support for external SQL dialects using `polyglot` library.\n- **ALP codec:** Floating-point compression codec (without ALP_rd fallback).\n- **Lazy JSON type hints:** Experimental lazy type hints for JSON columns. `ALTER TABLE ... 
MODIFY COLUMN json JSON(path TypeName)` completes as metadata-only operation without rewriting data.\n- **YTsaurus parallel reads:** Enable parallel reads from YTsaurus table engine.\n\n#### Performance Improvements\n\n- **Object storage reads:** Improved data lake reading performance by resizing pipeline to number of processing threads, providing ~40x improvements on multi-core machines.\n- **Parallel replicas logic:** Clarified relationship between `enable_parallel_replicas` and `automatic_parallel_replicas_mode`. Queries use parallel replicas if `enable_parallel_replicas > 0`. `automatic_parallel_replicas_mode=1` uses statistics-based decision; mode=0 uses parallel replicas for all supported queries.\n- **Enhanced partition pruning:** Allow partition pruning when predicate contains comparison operators and partition key is wrapped in deterministic function chain (e.g., `cityHash64(x) % 5 > 2`, `toYYYYMM(x) < 2026`).","publishedAt":"2026-03-26T00:00:00.000Z","url":"https://clickhouse.com/docs/whats-new/changelog#26-3","media":[]},{"id":"rel_kXMEw7tUQU3ueaUmzK2P9","version":null,"title":"Bug Fixes and Improvements","summary":"- Fixed `sumCount` aggregate function unable to read older serialized states after introduction of `Nullable(Tuple)`\n- Fixed exception in tuple compar...","content":"- Fixed `sumCount` aggregate function unable to read older serialized states after introduction of `Nullable(Tuple)`\n- Fixed exception in tuple comparison involving `Nothing` type with `GROUPING SETS` and `ORDER BY`\n- Fixed non-deterministic `uncompressed_hash` computation for Compact MergeTree parts with multiple compression codecs\n- Fixed logical error during INSERT SELECT with JSON and buckets in shared data\n- Fixed `MEMORY_LIMIT_EXCEEDED` being reported as `CORRUPTED_DATA` during SummingMergeTree and CoalescingMergeTree merges\n- Fixed \"Context has expired\" exception for correlated subqueries with table functions like `url()`\n- Fixed exceptions and incorrect behavior 
in `optimize_syntax_fuse_functions` with aggregate projections and Date types\n- Fixed `replaceRegexpOne` to `extract` query rewrite producing wrong results; fixed exception with `GROUP BY ... WITH CUBE` and `group_by_use_nulls=1`\n- Fixed `DROP DATABASE` hanging indefinitely with `database_atomic_wait_for_drop_and_detach_synchronously`\n- Fixed `KILL QUERY` unable to terminate queries stuck in `WITH FILL`, dictionary loading, or `ALTER DELETE`\n- Fixed security issue: `loop` table function bypassed row policies and column-level grants\n- Fixed incorrect partition pruning with pre-epoch DateTime64 and `toDate()` function\n- Fixed `hasPartitionId` returning incorrect results with multiple partitions\n- Fixed possible crashes reading empty granules in advanced shared data in JSON\n- Fixed `Cannot schedule a file` error on `INSERT` into `Distributed` due to race between `DROP` and `INSERT`\n- Fixed server crash in `mapContainsKey/mapContainsKeyLike` with `tokenbf_v1` skip index\n- Fixed `LOGICAL_ERROR` with `LowCardinality` inside compound types in various functions\n- Fixed `LOGICAL_ERROR` in `MergingAggregatedTransform` with `ARRAY JOIN` and `merge()` over Distributed tables\n- Fixed server crash caused by uncaught exception in HTTP connection pool destructor\n- Fixed incorrect result with `grace_hash` algorithm and non-equi joins\n- Fixed performance inefficiency in DeltaLake metadata scanning\n- Fixed data race in ZooKeeper client between sendThread and receiveThread\n- Fixed bug preventing CTE with distributed insert selects\n- Fixed exception from `CachedOnDiskReadBufferFromFile::readBigAt`\n- Fixed `LOGICAL_ERROR` in `Alias` engine with materialized columns\n- Fixed Keeper data loss after restart with Azure Blob Storage\n- Fixed JIT miscompilation of `sign` function for integer types wider than `Int8`\n- Fixed `DUPLICATE_COLUMN` exception and silent NULLs reading Delta Lake tables with name mode column mapping\n- Fixed mutation after lightweight update and 
secondary indices\n- Fixed incorrect result of FINAL queries mixing primary and non-primary key skip indexes\n- Enforced READ ON FILE checks for scalar `file()` and `DESCRIBE TABLE file()`\n- Fixed crash with glob pattern in `file()` when directory contains dangling symlinks\n- Fixed segfault in query plan optimization converting outer join to inner join with `arrayJoin`\n- Fixed `ProtobufList` format not working with Kafka engine\n- Fixed logical error with `analyzer_compatibility_join_using_top_level_identifier` and ARRAY JOIN\n- Set `Watch` component for watch responses in `aggregated_zookeeper_log`\n- Fixed partition pruning with FINAL deduplication when partition key not covered by sorting key\n- Fixed logical error in `kql_array_sort_asc`/`kql_array_sort_desc` with constant array arguments\n- Fixed out-of-bounds access in `ColumnConst::getExtremes` with `extremes = 1`\n- Fixed potential deadlock with concurrent `MOVE PARTITION` operations\n- HTTP server now returns error message in response body for 400 Bad Request\n- Fixed wrong results with distributed index analysis and query condition cache\n- Fixed `LOGICAL_ERROR` in `MergeTreeSetIndex` with `toDate` conversion on key columns\n- Fixed `LOGICAL_ERROR` with RIGHT JOIN wrapped in CROSS JOIN in legacy join code path\n- Fixed segfaults and OOM from corrupted `DDSketch` deserialization in `quantilesDD`\n- Fixed `LOGICAL_ERROR` with correlated columns in lambda functions like `arrayMap`\n- Fixed logical error in `caseWithExpression` with `materialize(NULL)` or `Nullable(Nothing)` arguments\n- Fixed bad cast exception when filtering `_table` virtual column in `merge` table function\n- Fixed sporadic deduplication failure with inconsistent cleanup ordering in ZooKeeper\n- Fixed exception with `ORDER BY ... 
WITH FILL` used together with `LIMIT BY`\n- Fixed silent data corruption inserting Parquet/Arrow `Date` into `Enum` column\n- Fixed exception reading Arrow file with `Array` column into table with `Nested` column\n- Fixed `MATERIALIZE INDEX` and `MATERIALIZE PROJECTION` mutations getting stuck\n- Fixed exception reading from `Nullable(Tuple(...))` with colliding element names\n- Fixed exception when joining `Merge` table wrapping `Distributed` table","publishedAt":null,"url":"https://clickhouse.com/docs/whats-new/changelog#bug-fixes-and-improvements","media":[]},{"id":"rel_kGzY-5SwQaso3-iuGD6V1","version":null,"title":"ClickHouse Bug Fixes Release","summary":"Multiple bug fixes including:\n\n**Parquet and Data Format Fixes:**\n- Fixed incorrect Parquet Bool to FixedString conversion in native V3 reader\n- Fixed...","content":"Multiple bug fixes including:\n\n**Parquet and Data Format Fixes:**\n- Fixed incorrect Parquet Bool to FixedString conversion in native V3 reader\n- Fixed HTTP Basic Auth to accept base64 credentials without padding\n\n**Query Execution Fixes:**\n- Fixed incorrect partition pruning with Nullable partition key columns\n- Fixed rare exception in pipeline executor during query cancellation\n- Fixed 'Column identifier is already registered' exception with count_distinct_optimization and QUALIFY\n- Fixed exception with IN/NOT IN using LowCardinality column arguments\n- Fixed 'Pipeline stuck' exception in full_sorting_merge joins\n- Fixed LOGICAL_ERROR with TTL merging and aggregate projections\n- Fixed logical error with CROSS JOIN and INNER JOIN USING\n- Fixed null pointer dereference in dictGetOrDefault with Nullable keys\n- Fixed exception in DISTINCT queries with aggregate projections and LowCardinality\n- Fixed LOGICAL_ERROR with arrayJoin in filter expression with OUTER JOIN\n- Fixed logical error with parallel replicas and optimize_aggregation_in_order\n- Fixed pipeline deadlock with sort_overflow_mode and window functions\n- Fixed column 
rollback in Buffer engine during exception handling\n\n**Type System and Comparison Fixes:**\n- Fixed bad cast exception in null-safe comparison with Dynamic/Variant and NULL\n- Fixed IS DISTINCT FROM with Dynamic/Variant vs NULL returning incorrect results\n\n**Index and Optimization Fixes:**\n- Fixed tryGetColumnDescription to filter subcolumns by parent column kind\n- Fixed text index usage with other skip indexes\n- Fixed distributed index analysis with expressions in primary key\n- Fixed S3 requests being incorrectly retried on non-retryable errors\n- Fixed set skip index usefulness detection with OR predicates\n\n**ClickHouse Keeper Fixes:**\n- Fixed Keeper disconnecting Java ZooKeeper clients after addWatch request\n- Fixed zk_followers and zk_synced_followers metrics not decreasing\n- Fixed Keeper's secure raft port ignoring cipherList and dhParamsFile configuration\n- Fixed misleading Keeper log messages about operation timing\n- Fixed Keeper TCP connections preventing graceful server shutdown\n\n**Join and Filter Fixes:**\n- Fixed LOGICAL_ERROR with PREWHERE and IN subquery on MergeTree tables\n- Fixed exception 'Sorting column wasn't found in ActionsDAG' with query_plan_convert_join_to_in\n- Fixed BAD_GET exception with non-boolean expression in WHERE and SELECT with JOIN\n- Fixed LOGICAL_ERROR with read_in_order_through_join and parallel replicas\n\n**Data Type and Function Fixes:**\n- Fixed MongoDB dictionary source failing with named collections\n- Fixed LOGICAL_ERROR when Identifier is empty after parameter substitution\n- Fixed exception with input table function as argument of remote\n- Fixed outdated data parts resurrection from incorrect cleanup\n- Fixed exception in LogicalExpressionOptimizerPass with Variant return type\n- Fixed parseDateTimeBestEffort incorrectly parsing month/weekday prefixes\n- Fixed UNKNOWN_IDENTIFIER with merge() table function over tables with different JSON parameters\n- Fixed optimize_skip_unused_shards with Distributed 
storage in Views\n- Fixed tuple subcolumn access by name for external tables\n- Fixed reverseUTF8 exception on invalid UTF-8 input\n- Fixed LOGICAL_ERROR with FINAL, PREWHERE, constant WHERE and column-independent aggregates\n\n**Configuration and System Fixes:**\n- Fixed system.trace_log entries for dictionary auto-reloads having non-empty query IDs\n- Fixed crash from null pointer dereference in system tables iteration\n- Fixed SYSTEM START REPLICATED VIEW not waking up refresh task\n- Fixed 'Inconsistent table names' exception with view() containing JOINs\n- Fixed adjusting RLIMIT_SIGPENDING via pending_signals\n\n**Security Fix:**\n- Fixed RBAC bypass allowing users to DESCRIBE any table via remote/cluster functions without SHOW_COLUMNS privilege\n\n**Performance Fixes:**\n- Fixed unexpected result with read_in_order_use_virtual_row and monotonic functions\n- Fixed JIT expression compilation bugs preventing optimization passes\n\n**TTL and Transactions:**\n- Fixed LOGICAL_ERROR in renameAndCommitEmptyParts with concurrent TRUNCATE TABLE and OPTIMIZE TABLE\n- Fixed decimal overflow when partition pruning with DateTime64","publishedAt":null,"url":"https://clickhouse.com/docs/whats-new/changelog#clickhouse-bug-fixes-release","media":[]},{"id":"rel_1SllwBZYbJo2GufUCh5TP","version":null,"title":"ClickHouse Performance and Feature Improvements","summary":"**Performance Improvements:**\n- Allow read-in-order optimization and primary-key pruning with `Nullable` CAST target types for monotonic conversions\n-...","content":"**Performance Improvements:**\n- Allow read-in-order optimization and primary-key pruning with `Nullable` CAST target types for monotonic conversions\n- Allow index pruning and filter pushdown when comparing integral columns with float literals\n- Added SLRU cache for Parquet metadata to improve read performance\n- Support swapping sides of ANTI, SEMI and FULL joins based on optimizer statistics\n- Optimized granules skipping for `pointInPolygon` and 
fixed index analysis issues\n- Improved `levenshteinDistance` function performance\n- Optimized batch decimal type conversions by avoiding per-element function calls\n- Iceberg tables now support asynchronous metadata prefetching and cached metadata usage\n- S3Queue ordered mode uses ListObjectsV2 StartAfter to reduce ListObjects calls\n- Lowered memory usage for inserts deduplication in sync mode\n- Use arch-specific cache line size instead of hardcoded 64-byte value\n- Optimized text index dictionary reading and analysis\n- Sped up LZ4 decompression of 16 byte blocks in ARM\n- Refactored tokenization to high-performance interface with SIMD support\n- Improved text index analysis for queries with combined conditions\n- Improved performance of queries with constant expressions generating large arrays/maps\n- Fixed key condition analysis for `DateTime64` primary keys compared with integer constants\n- Setting `optimize_syntax_fuse_functions` enabled by default\n- Optimized `avgWeighted` aggregate function with local accumulators (~27% improvement for Nullable inputs)\n- Improved performance and reduced memory usage for parallel window functions and `arrayFold` workloads\n- Improved sorted merges performance\n- Optimized `INTERSECT ALL` and `EXCEPT ALL`\n- Added `read_in_order_use_virtual_row` optimization support for reverse-order reads\n- Reduced cache contention in RIGHT and FULL JOINs\n- Optimized `PrefetchingHelper::calcPrefetchLookAhead` with integer arithmetic\n- Reduced Keeper memory consumption by replacing `absl::flat_hash_set` with `CompactChildrenSet` (KeeperMemNode reduced from 144 to 128 bytes)\n\n**Feature Improvements:**\n- Aggregate projections now correctly supported in views\n- Support OUTER to INNER join conversion optimization with `join_use_nulls`\n- Improved subcolumns reading with correct sizes calculation\n- Separate jemalloc arenas for mark, uncompressed and page caches to avoid memory fragmentation\n- Tables with DELETE TTL rules can now 
use vertical merge algorithm\n- Apply data skipping indexes during distributed index analysis\n- Secondary index marks prewarmed when `prewarm_mark_cache` setting enabled\n- Reduced locking during access control\n- Compound AND conditions in row policies and PREWHERE now decomposed for sorting-key atoms extraction\n- Reduced lock contention in MergeTreeBackgroundExecutor\n- Fixed excessive memory usage (~514 MiB) during format auto-detection for non-Arrow data\n- Parse GeoParquet files with different Geo types in same column\n- Introduced `tokensForLikePattern` SQL function for LIKE pattern tokenization\n- Added `{_schema_hash}` placeholder for S3 table engine\n- SymbolIndex, addressToSymbol, system.symbols, buildId now work on macOS\n- `system.stack_trace` table now works on macOS\n- Added per-server LDAP config option `<follow_referrals>` to control referral chasing\n- Track data skipping indices used in query execution via `skip_indices` column in query_log\n- ACCESS_DENIED hints no longer reveal column names unless user can show all required columns\n- Added dedicated cleanup thread for MergeTree to prevent cleanup delays\n- Reload cluster config if IPs of local server's hostname changed\n- Allow `optimize_aggregators_of_group_by_keys` to correctly optimize in GROUPING SETS queries\n- Keeper-bench: report errors in metrics and generate JSON metrics file\n- Added ROLE clause to CREATE USER\n- Internal_replication settings can now be set for Replicated database clusters\n- New setting `allow_nullable_tuple_in_extracted_subcolumns` controls Tuple subcolumns behavior","publishedAt":null,"url":"https://clickhouse.com/docs/whats-new/changelog#clickhouse-performance-and-feature-improvements","media":[]},{"id":"rel_GmhFeC9gq3JRKBOfOfqhK","version":null,"title":"ClickHouse Release","summary":"Add information about deferred filters as a separate item to EXPLAIN query output. 
Enable `type_json_allow_duplicated_key_with_literal_and_nested_obje...","content":"Add information about deferred filters as a separate item to EXPLAIN query output. Enable `type_json_allow_duplicated_key_with_literal_and_nested_object` by default to avoid errors about duplicated keys during JSON parsing. Keeper improvement: `find_super_nodes` command is more reliable when finding multiple super nodes by forbidding traversal of their children. Initial completion support for `clickhouse-keeper-client`. Flush async logging buffers in case of crash. Enable the impersonate feature by default (see EXECUTE AS target_user). Improve canceling queries with SQLite, MongoDB and MySQL table engines by KILL QUERY and cancel query (Ctrl+C). Add server setting `jemalloc_profiler_sampling_rate` to control jemalloc profiling. Support weights in concurrent bounded queue implementation. Add sslmode to allowed keys for PostgreSQL dictionary sources to support SSL connections. Show clear \"no such file\" error when passing non-existent file paths. Text indexes can now be built on Nullable and Array(Nullable) columns. Avoid dropping named collections that are dependencies of dictionary sources. Enable `grace_hash` join algorithm for queries with totals. Cancel background merges early in DROP DATABASE. Remove NetlinkMetricsProvider and use procfs exclusively for per-thread taskstats metric collection. Refactor Iceberg manifest file handling to fix caching issues. Optimize PREWHERE decisions for expressions like `toDate(time)`. Add `MaxAllocatedEphemeralLockSequentialNumber` metric for ZooKeeper. Add new profile event `KeeperRequestTotalWithSubrequests` for better Keeper workload visibility. `SYSTEM RELOAD DICTIONARIES` now reloads dictionaries in topological order. Restart statistics cache after changing MergeTree settings. Only \"alive\" replicas participate in distributed index analysis. Add setting `access_control_improvements.disallow_config_defined_profiles_for_sql_defined_users`. 
Cap automatic parallel replicas heuristic to actual node count. Implement hedged requests and asynchronous reading for distributed index analysis. Deserialization of binary `AggregateFunction` states now requires consuming full input. Make TRUNCATE DATABASE respond to query cancellation. Improve `keeper-bench` with request pipelining and better stats. Support SAMPLE clause in distributed index analysis. Show chart title in dashboard even with empty results. Cap column lists in analyzer error messages to 10 entries. Return column stats from sub-queries with joins. Mark ZooKeeper session as expired immediately on finalization. Use more math functions from LLVM-libc. Reduce memory usage in `system.jemalloc_profile_text`. Add `is_subrequest` column to `system.aggregated_zookeeper_log`. Allow `ALTER TABLE MODIFY COLUMN x TTL` without specifying column type. Skip stale Keeper requests for disconnected sessions. Support text index on `mapValues(map)` with IN operator. Shell-like completion support in clickhouse keeper-client. Prevent Keeper `mntr` command from getting stuck. Reduce lock contention in Keeper dispatcher. Tolerate missing padding at the end of parquet file blocks. Fix how Alias table targets are saved as DDL dependencies. Fix wrong result or exception during reading subcolumns of ALIAS columns. Fix missing column in JOIN with non-standard identifier alias. Fix crashes in Kusto dialect functions with empty arguments. Forbid mounting local_object_storage outside user_files_path in clickhouse-client. Fix logical race in DeltaLake table engine on snapshot version change. Fix logical error on attaching a part in MergeTree with chained renames. Fix explicit settings being silently ignored when sent with `compatibility`. 
Fix client reporting NETWORK_ERROR instead of actual parsing error in INSERT with parallel parsing.","publishedAt":null,"url":"https://clickhouse.com/docs/whats-new/changelog#clickhouse-release","media":[]}],"pagination":{"page":1,"pageSize":20,"totalPages":1,"totalItems":5},"summaries":{"rolling":{"windowDays":90,"summary":"ClickHouse 26.3 LTS tightened data serialization and SQL semantics. String serialization now propagates into nested types like Array and Map, making downgrades unsafe but upgrades transparent. The release also corrected NOT operator precedence to match SQL standard—queries relying on the old non-standard binding will produce different results—and removed the experimental `hypothesis` skip index type and `detectProgrammingLanguage` function.","releaseCount":1,"generatedAt":"2026-04-11T14:59:55.451Z"},"monthly":[{"year":2026,"month":3,"summary":"ClickHouse 26.3 LTS tightened data type handling and SQL compliance. String serialization versions now propagate to nested types within Arrays, Maps, and Variants—upgrades are safe but downgrades will break access to 26.3-written data. The release also fixed NOT operator precedence to match SQL standards, removed the hypothesis skip index type, and eliminated the experimental detectProgrammingLanguage function.","releaseCount":1,"generatedAt":"2026-04-11T14:59:56.970Z"}]}}