What's New in v26.2

Note:

The releases on this page are testing releases, not supported or intended for production environments. The new features and bug fixes noted on this page may not yet be documented across CockroachDB’s documentation.

  • CockroachDB self-hosted: All v26.2 testing binaries and Docker images are available for download.
  • CockroachDB Advanced: v26.2 testing releases are not yet available.
  • CockroachDB Standard and Basic: v26.2 testing releases are not yet available.

When v26.2 becomes Generally Available (GA), a new v26.2.0 section on this page will describe key features and additional upgrade considerations.

CockroachDB v26.2 is in active development. The following testing releases are intended for testing and experimentation only; they are not qualified for production environments and are not eligible for support or uptime SLA commitments. When CockroachDB v26.2 is Generally Available (GA), production releases will also be announced on this page.


v26.2.0-alpha.2

Release Date: March 18, 2026

Downloads

Warning:

CockroachDB v26.2.0-alpha.2 is a testing release. Testing releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.

Note:

Experimental downloads are not qualified for production use and not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.

| Operating System | Architecture | Full executable | SQL-only executable |
| --- | --- | --- | --- |
| Linux | Intel | cockroach-v26.2.0-alpha.2.linux-amd64.tgz (SHA256) | cockroach-sql-v26.2.0-alpha.2.linux-amd64.tgz (SHA256) |
| Linux | ARM | cockroach-v26.2.0-alpha.2.linux-arm64.tgz (SHA256) | cockroach-sql-v26.2.0-alpha.2.linux-arm64.tgz (SHA256) |
| Mac (Experimental) | Intel | cockroach-v26.2.0-alpha.2.darwin-10.9-amd64.tgz (SHA256) | cockroach-sql-v26.2.0-alpha.2.darwin-10.9-amd64.tgz (SHA256) |
| Mac (Experimental) | ARM | cockroach-v26.2.0-alpha.2.darwin-11.0-arm64.tgz (SHA256) | cockroach-sql-v26.2.0-alpha.2.darwin-11.0-arm64.tgz (SHA256) |
| Windows (Experimental) | Intel | cockroach-v26.2.0-alpha.2.windows-6.2-amd64.zip (SHA256) | cockroach-sql-v26.2.0-alpha.2.windows-6.2-amd64.zip (SHA256) |

Docker image

Multi-platform images include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

Within the multi-platform image, both Intel and ARM images are generally available for production use.

To download the Docker image:


docker pull cockroachdb/cockroach-unstable:v26.2.0-alpha.2

Source tag

To view or download the source code for CockroachDB v26.2.0-alpha.2 on GitHub, visit the v26.2.0-alpha.2 source tag.

Changelog

View a detailed changelog on GitHub: v26.2.0-alpha.1...v26.2.0-alpha.2

Backward-incompatible changes

  • Lowered the default value of the sql.guardrails.max_row_size_log cluster setting from 64 MiB to 16 MiB, and the default value of sql.guardrails.max_row_size_err from 512 MiB to 80 MiB. These settings control the maximum size of a row (or column family) that SQL can write before logging a warning or returning an error, respectively. The previous defaults were high enough that large rows would hit other limits first (such as the Raft command size limit or the backup SST size limit), producing confusing errors. The new defaults align with existing system limits to provide clearer diagnostics. If your workload legitimately writes rows larger than these new defaults, you can restore the previous behavior by increasing these settings. #164468
  • When selecting from a view, the view owner's privileges on the underlying tables are now checked. Previously, no privilege checks were performed on the underlying tables, so a view would continue to work even after the owner lost access to the underlying tables. This also affects row-level security (RLS): the view owner's RLS policies are now enforced instead of the invoker's. If this causes issues, the previous behavior can be restored by setting the cluster setting sql.auth.skip_underlying_view_privilege_checks.enabled to true. #164664
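
Both changes above can be reverted with cluster settings. As a sketch (the values shown are the pre-change defaults quoted in the notes above):

```sql
-- Restore the previous row-size guardrail defaults.
SET CLUSTER SETTING sql.guardrails.max_row_size_log = '64MiB';
SET CLUSTER SETTING sql.guardrails.max_row_size_err = '512MiB';

-- Opt out of the new view-owner privilege checks if they cause issues.
SET CLUSTER SETTING sql.auth.skip_underlying_view_privilege_checks.enabled = true;
```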

Security updates

  • When the security.provisioning.ldap.enabled cluster setting is enabled, LDAP-authenticated DB Console logins now update the estimated_last_login_time column in the system.users table. #163400
  • When the security.provisioning.oidc.enabled cluster setting is enabled, OIDC-authenticated DB Console logins now populate the estimated_last_login_time column in system.users, allowing administrators to track when OIDC users last accessed the DB Console. #164129

SQL language changes

  • Added the ST_AsMVT aggregate function to generate Mapbox Vector Tile (MVT) binary format from geospatial data, providing PostgreSQL/PostGIS compatibility for web mapping applications. #150663
  • ALTER TABLE ... SET LOCALITY is now fully executed using the declarative schema changer, improving reliability and consistency with other schema change operations. #161763
  • Added an index storage parameter skip_unique_checks that can be used to disable unique constraint checks for indexes with implicit partition columns, including indexes in regional-by-row tables. This should only be used if the application can guarantee uniqueness, for example, by using external UUID values or relying on a unique_rowid() default value. Incorrectly applying this setting when uniqueness is not guaranteed by the application could result in logically duplicate keys in different partitions of a unique index. #163378
  • Introduced the information_schema.crdb_delete_statement_hints built-in function, which accepts two kinds of payload: row_id (int), the primary key of system.statement_hints, or fingerprint (string). The function returns the number of rows deleted. #163891
  • Added support for importing Parquet files using the IMPORT statement. Parquet files can be imported from cloud storage URLs (s3://, gs://, azure://) or HTTP servers that support range requests (Accept-Ranges: bytes). This feature supports column-level compression formats (Snappy, GZIP, ZSTD, Brotli, etc.) as specified in the Parquet file format, but does not support additional file-level compression (e.g., .parquet.gz files). Nested Parquet types (lists, maps, structs) are not currently supported; only flat schemas with primitive types are supported at this time. Epic: CRDB-23802 #163991
  • Statement diagnostics bundles now include information_schema.crdb_rewrite_inline_hints statements in the schema.sql file for re-creating all the statement hints bound to the statement. The hint re-creation statements are sorted in ascending order of the original hints' creation time. #164164
  • Views now support the PostgreSQL-compatible security_invoker option. When set via CREATE VIEW ... WITH (security_invoker) or ALTER VIEW SET (security_invoker = true), privilege checks on the underlying tables are performed as the querying user rather than the view owner. The security_invoker option can be reset with ALTER VIEW ... RESET (security_invoker). #164184
  • CockroachDB now supports COMMIT AND CHAIN and ROLLBACK AND CHAIN (as well as END AND CHAIN and ABORT AND CHAIN). These finish the current transaction and immediately start a new explicit transaction with the same isolation level, priority, and read/write mode as the previous transaction. AND NO CHAIN is also accepted for PostgreSQL compatibility but behaves the same as a plain COMMIT or ROLLBACK. #164403
  • RESTORE TABLE/DATABASE now supports the WITH GRANTS option, which restores grants on restore targets for users in the restoring cluster. Note that using this option with new_db_name will cause the new database to inherit the privileges of the backed-up database. #164444
  • During an INSPECT run, a new check validates unique column values in REGIONAL BY ROW tables. #164449
  • Added PostgreSQL-compatible numeric formatting functions to_char(int, text), to_char(float, text), to_char(numeric, text), and to_number(text, text). These functions format numbers as strings and parse formatted strings back to numbers using the PostgreSQL formatting syntax. #164672
  • Added to_date(text, text) and to_timestamp(text, text) SQL functions that parse dates and timestamps from formatted strings using PostgreSQL-compatible format patterns. For example, to_date('2023-03-15', 'YYYY-MM-DD') returns a date, and to_timestamp('2023-03-15 14:30:45', 'YYYY-MM-DD HH24:MI:SS') returns a timestamptz. #164672
  • Added support for a new statement hint that changes session variable values for the duration of a single statement without application changes. The new hint type can be created using the information_schema.crdb_set_session_variable_hint builtin. The override applies only when executing a statement matching the given fingerprint and does not persist on the session or surrounding transaction. #164909
  • Active Session History tables are now accessible via information_schema.crdb_node_active_session_history and information_schema.crdb_cluster_active_session_history, in addition to the existing crdb_internal tables. This improves discoverability when browsing information_schema for available metadata. #164969
  • The enable_super_regions session variable and the sql.defaults.super_regions.enabled cluster setting are no longer required to use super regions. Super region DDL operations (ADD, DROP, and ALTER SUPER REGION) now work without any experimental flag. The session variable and cluster setting are deprecated, and existing scripts that set them will continue to work without error. #165227
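
A brief sketch combining a few of the additions above (the accounts table and its values are hypothetical):

```sql
-- COMMIT AND CHAIN: finish the transaction and immediately start a new
-- explicit transaction with the same isolation level and priority.
BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);
COMMIT AND CHAIN;  -- still inside an explicit transaction here
INSERT INTO accounts (id, balance) VALUES (2, 200);
COMMIT;

-- PostgreSQL-compatible formatting and parsing functions.
SELECT to_char(1234.5, '9,999.99');            -- format a number as text
SELECT to_date('2023-03-15', 'YYYY-MM-DD');    -- returns a DATE
SELECT to_timestamp('2023-03-15 14:30:45', 'YYYY-MM-DD HH24:MI:SS');
```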

Operational changes

  • The bulkio.index_backfill.elastic_control.enabled cluster setting is now enabled by default, allowing index backfill operations to integrate with elastic CPU control and automatically throttle based on available resources. Epic: CRDB-48845 #163866
  • Promoted 9 admission control metrics to Essential status, making them more discoverable in monitoring dashboards and troubleshooting workflows. These metrics track admission control wait times, resource exhaustion (slots, I/O tokens, CPU tokens), and replication flow control, providing critical visibility into cluster health and performance throttling. #164827
  • Added periodic ASH workload summary logging to the OPS channel. Two new cluster settings, obs.ash.log_interval (default 10m) and obs.ash.log_top_n (default 10), control how often and how many entries are emitted. Each summary reports the most frequently sampled workloads grouped by event type, event name, and workload ID, providing durable visibility into workload patterns that previously existed only in memory. #165093
  • Users can no longer run AOST queries on the reader tenant unless they set the bypass_pcr_reader_catalog_aost session variable. This session variable should not be used by default in the reader workload; it should only be used during investigation or for changing reader-tenant-specific cluster settings. #165382
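
The ASH logging cadence and volume can be tuned with the two settings named above; a sketch (the values here are illustrative, not recommendations):

```sql
-- Emit an ASH workload summary to the OPS channel every 5 minutes,
-- reporting the top 20 sampled workloads.
SET CLUSTER SETTING obs.ash.log_interval = '5m';
SET CLUSTER SETTING obs.ash.log_top_n = 20;
```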

Bug fixes

  • JWT authentication now returns a clear error when HTTP requests to fetch JWKS or OpenID configuration return non-2xx status codes, instead of silently passing the response body to the JSON parser. #158294
  • Fixed an issue where ORDER BY expressions containing subqueries with non-default NULLS ordering (e.g., NULLS LAST for ASC, NULLS FIRST for DESC) could cause an error during query planning. #163230
  • Fixed a bug that caused ALTER INDEX ... PARTITION BY statements to fail on a nonexistent index even if IF EXISTS was used. #163378
  • Fixed a bug where backups taken after a v25.4 mixed-version cluster downgrade could result in inconsistent backup indexes. #164301
  • Altering a non-scan-only changefeed to add a target with initial_scan='only' now returns an error instead of not doing a scan and adding the target to the watched targets list. #164433
  • Fixed a bug where adding a changefeed target without an initial scan, dropping that same target, and then adding it again with an initial scan would result in the target being added without an initial scan. Also fixed a bug where adding a target without an initial scan could cause events to be sent again for all targets as of the original statement time. Previously, altering a changefeed to add a table with an initial scan during a schema change backfill or while the changefeed had lagging ranges would sometimes be rejected; this is no longer the case. As a backward-incompatible change, using ALTER CHANGEFEED ADD ... for a table that is already watched now returns an error. #164433
  • Fixed a bug where creating a table with a user-defined type column failed when the user had USAGE privilege on the base type but not on its implicit array type. The array type now inherits privileges from the base type, matching PostgreSQL behavior. #164471
  • ALTER TABLE ... ALTER PRIMARY KEY USING COLUMNS (col) USING HASH is now correctly treated as a no-op when the table already has a matching hash-sharded primary key, instead of attempting an unnecessary schema change. #164557
  • Fixed a bug in appBatchStats.merge where the numEmptyEntries field was not being properly accumulated when merging statistics. This could result in incorrect statistics tracking for empty Raft log entries. #164671
  • Fixed a bug where ALTER TABLE ... ALTER COLUMN ... SET DATA TYPE from an unbounded string or bit type to a bounded type with a length >= 64 (e.g., STRING to STRING(100)) would skip validating existing data against the new length constraint. This could leave rows in the table that violate the column's type, with values longer than the specified limit. #164739
  • Fixed a bug where RESTORE with skip_missing_foreign_keys could fail with an internal error if the restored table had an in-progress schema change that added a foreign key constraint whose referenced table was not included in the restore. #164757
  • Fixed a bug, introduced by a recent change included in v25.4 and later, where setting min_checkpoint_frequency to 0 would cause the changefeed's highwater mark to not advance and resolved timestamps to not be sent. Note, however, that setting min_checkpoint_frequency lower than 500ms is not recommended, as it may cause degraded changefeed performance. #164765
  • Lowered the default value of the changefeed.max_retry_backoff cluster setting from 10m to 30s to reduce changefeed lag during rolling restarts. #164874
  • Fixed an issue where CockroachDB might not promptly respond to the statement timeout when performing a hash join with an ON filter that is mostly false. #164879
  • Fixed a bug where IMPORT error messages could include unredacted cloud storage credentials from the source URI. Credentials are now stripped from URIs before they appear in error messages. #164881
  • Changefeed retry backoff now resets when the changefeed's resolved timestamp (highwater mark) advances between retries, in addition to the existing time-based reset (changefeed.retry_backoff_reset). This prevents transient rolling restarts from causing changefeeds to fall behind because of excessive backoff. #164933
  • Fixed a rare race condition where SHOW CREATE TABLE could fail with a "relation does not exist" error if a table referenced by a foreign key was being concurrently dropped. #164942
  • Fixed a bug that had previously allowed the primary and secondary regions to be in separate super regions. #164943
  • Fixed a bug that could cause row sampling for table statistics to crash a node due to a data race when processing a collated string column with values larger than 400 bytes. This bug has existed since before v23.1. #165260
  • The information_schema.crdb_node_active_session_history and information_schema.crdb_cluster_active_session_history views now include the app_name column, matching the underlying crdb_internal tables. #165367

v26.2.0-alpha.1

Release Date: March 11, 2026

Downloads

Warning:

CockroachDB v26.2.0-alpha.1 is a testing release. Testing releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.

Note:

Experimental downloads are not qualified for production use and not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.

| Operating System | Architecture | Full executable | SQL-only executable |
| --- | --- | --- | --- |
| Linux | Intel | cockroach-v26.2.0-alpha.1.linux-amd64.tgz (SHA256) | cockroach-sql-v26.2.0-alpha.1.linux-amd64.tgz (SHA256) |
| Linux | ARM | cockroach-v26.2.0-alpha.1.linux-arm64.tgz (SHA256) | cockroach-sql-v26.2.0-alpha.1.linux-arm64.tgz (SHA256) |
| Mac (Experimental) | Intel | cockroach-v26.2.0-alpha.1.darwin-10.9-amd64.tgz (SHA256) | cockroach-sql-v26.2.0-alpha.1.darwin-10.9-amd64.tgz (SHA256) |
| Mac (Experimental) | ARM | cockroach-v26.2.0-alpha.1.darwin-11.0-arm64.tgz (SHA256) | cockroach-sql-v26.2.0-alpha.1.darwin-11.0-arm64.tgz (SHA256) |
| Windows (Experimental) | Intel | cockroach-v26.2.0-alpha.1.windows-6.2-amd64.zip (SHA256) | cockroach-sql-v26.2.0-alpha.1.windows-6.2-amd64.zip (SHA256) |

Docker image

Multi-platform images include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

Within the multi-platform image, both Intel and ARM images are generally available for production use.

To download the Docker image:


docker pull cockroachdb/cockroach-unstable:v26.2.0-alpha.1

Source tag

To view or download the source code for CockroachDB v26.2.0-alpha.1 on GitHub, visit the v26.2.0-alpha.1 source tag.

Backward-incompatible changes

  • Increased the default value of sql.stats.automatic_full_concurrency_limit (which controls the maximum number of concurrent full statistics collections) from 1 to the number of vCPUs divided by 2 (e.g., a 4-vCPU node will have the value 2). #161806
  • The TG_ARGV trigger function parameter now uses 0-based indexing to match PostgreSQL behavior. Previously, TG_ARGV[1] returned the first argument; now TG_ARGV[0] returns the first argument and TG_ARGV[1] returns the second argument. Additionally, usage of TG_ARGV no longer requires setting the allow_create_trigger_function_with_argv_references session variable. #161925
  • Lowered the default value of the sql.guardrails.max_row_size_log cluster setting from 64 MiB to 16 MiB, and the default value of sql.guardrails.max_row_size_err from 512 MiB to 80 MiB. These settings control the maximum size of a row (or column family) that SQL can write before logging a warning or returning an error, respectively. The previous defaults were high enough that large rows would hit other limits first (such as the Raft command size limit or the backup SST size limit), producing confusing errors. The new defaults align with existing system limits to provide clearer diagnostics. If your workload legitimately writes rows larger than these new defaults, you can restore the previous behavior by increasing these settings. #164468
  • Changed the default value of the sql.catalog.allow_leased_descriptors.enabled cluster setting to true. This setting allows introspection tables like information_schema and pg_catalog to use cached descriptors when building the table results, which improves the performance of introspection queries when there are many tables in the cluster. #159162
  • The bulkio.import.elastic_control.enabled cluster setting is now enabled by default, allowing import operations to integrate with elastic CPU control and automatically throttle based on available resources. #163867
  • The bulkio.ingest.sst_batcher_elastic_control.enabled cluster setting is now enabled by default, allowing SST batcher operations to integrate with elastic CPU control and automatically throttle based on available resources. #163868
  • The session variable distsql_prevent_partitioning_soft_limited_scans is now enabled by default. This prevents scans with soft limits from being planned as multiple TableReaders, which decreases the initial setup costs of some fully-distributed query plans. #160051
  • Creating or altering a changefeed or Kafka/Pub/Sub external connection now returns an error when the topic_name query parameter is explicitly set to an empty string in the sink URI, rather than silently falling back to using the table name as the topic name. Existing changefeeds with an empty topic_name are not affected. #164225
  • TTL jobs are now owned by the schedule owner instead of the node user. This allows users with CONTROLJOB privilege to cancel TTL jobs, provided the schedule owner is not an admin (CONTROLJOB does not grant control over admin-owned jobs). #161226
  • Calling information_schema.crdb_rewrite_inline_hints now requires the REPAIRCLUSTER privilege. #160716
  • The Statement Details page URL format has changed from /statement/{implicitTxn}/{statementId} to /statement/{statementId}. As a result, bookmarks using the old URL structure will no longer work. #159558
  • Changed the unit of measurement for admission control duration metrics from microseconds to nanoseconds. The following metrics are affected: admission.granter.slots_exhausted_duration.kv, admission.granter.cpu_load_short_period_duration.kv, admission.granter.cpu_load_long_period_duration.kv, admission.granter.io_tokens_exhausted_duration.kv, admission.granter.elastic_io_tokens_exhausted_duration.kv, and admission.elastic_cpu.nanos_exhausted_duration. Note that dashboards displaying these metrics will show a discontinuity at upgrade time, with pre-upgrade values appearing much lower due to the unit change. #160956
  • Renamed the builtin function crdb_internal.inject_hint (introduced in v26.1.0-alpha.2) to information_schema.crdb_rewrite_inline_hints. #160716
  • Removed the incremental_location option from BACKUP and CREATE SCHEDULE FOR BACKUP. #159189
  • Removed the incremental_location option from SHOW BACKUP and RESTORE. #160416
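
The TG_ARGV indexing change is easiest to see in a trigger function; a minimal sketch (the table, function, and arguments here are hypothetical):

```sql
-- Hypothetical trigger function illustrating 0-based TG_ARGV indexing.
CREATE FUNCTION log_change() RETURNS TRIGGER AS $$
BEGIN
  -- TG_ARGV[0] is now the FIRST trigger argument ('audit' below),
  -- matching PostgreSQL; previously the first argument was TG_ARGV[1].
  RAISE NOTICE 'channel: %, level: %', TG_ARGV[0], TG_ARGV[1];
  RETURN NEW;
END;
$$ LANGUAGE PLpgSQL;

CREATE TRIGGER t_log AFTER INSERT ON accounts
  FOR EACH ROW EXECUTE FUNCTION log_change('audit', 'info');
```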

Security updates

  • LDAP authentication for the DB Console now supports automatic user provisioning. When the cluster setting security.provisioning.ldap.enabled is set to true, users who authenticate successfully via LDAP will be automatically created in CockroachDB if they do not already exist. #163199

General changes

  • Changefeeds now support the partition_alg option for specifying a Kafka partitioning algorithm. Currently fnv-1a (default) and murmur2 are supported. The option is only valid on Kafka v2 sinks. This is protected by the cluster setting changefeed.partition_alg.enabled. An example usage: SET CLUSTER SETTING changefeed.partition_alg.enabled=true; CREATE CHANGEFEED ... INTO 'kafka://...' WITH partition_alg='murmur2';. Note that if a changefeed is created using the murmur2 algorithm, and then the cluster setting is disabled, the changefeed will continue using the murmur2 algorithm unless the changefeed is altered to use a different partition_alg. #161265

Enterprise edition changes

  • Added a new cluster setting, security.provisioning.oidc.enabled, to allow automatic provisioning of users when they log in for the first time via OIDC. When enabled, a new user will be created in CockroachDB upon their first successful OIDC authentication. This feature is disabled by default. #159787
  • LDAP authentication for the DB Console now additionally supports role-based access control (RBAC) through LDAP group membership. To use this feature, an administrator must first create roles in CockroachDB with names that match the Common Names (CN) of their LDAP groups. These roles should then be granted the desired privileges for DB Console access. When a user who is a member of a corresponding LDAP group logs into the DB Console, they will be automatically granted the role and its associated privileges, creating consistent behavior with SQL client connections. #162302
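
Sketching the setup described above (crdb_operators stands in for a hypothetical LDAP group CN):

```sql
-- Enable automatic provisioning of LDAP-authenticated users.
SET CLUSTER SETTING security.provisioning.ldap.enabled = true;

-- Create a role whose name matches the Common Name (CN) of an LDAP
-- group, then grant it the privileges desired for DB Console access.
CREATE ROLE crdb_operators;
GRANT SYSTEM VIEWACTIVITY TO crdb_operators;
```

Users in the matching LDAP group are then granted this role (and its privileges) when they log into the DB Console.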

SQL language changes

  • Added the MAINTAIN privilege, which can be granted on tables and materialized views. Users with the MAINTAIN privilege on a materialized view can execute REFRESH MATERIALIZED VIEW without being the owner. Users with the MAINTAIN privilege on a table can execute ANALYZE without needing SELECT. This aligns with PostgreSQL 17 behavior. #164236
  • Added cluster settings to control the number of concurrent automatic statistics collection jobs:

    • sql.stats.automatic_full_concurrency_limit controls the maximum number of concurrent full statistics collections. The default is 1.
    • sql.stats.automatic_extremes_concurrency_limit controls the maximum number of concurrent partial statistics collections using extremes. The default is 128.

    Note that at most one statistics collection job can run on a single table at a time. #158835

  • Added a new cluster setting bulkio.import.distributed_merge.mode to enable distributed merge support for IMPORT operations. When enabled (default: false), IMPORT jobs will use a two-phase approach where import processors first write SST files to local storage, then a coordinator merges and ingests them. This can improve performance for large imports by reducing L0 file counts and enabling merge-time optimizations. This feature requires all nodes to be running v26.1 or later. #159330

  • CockroachDB now supports the PostgreSQL session variables tcp_keepalives_idle, tcp_keepalives_interval, tcp_keepalives_count, and tcp_user_timeout. These allow per-session control over TCP keepalive behavior on each connection. A value of 0 (the default) uses the corresponding cluster setting. Non-zero values override the cluster setting for that session only. Units match PostgreSQL: seconds for keepalive settings, milliseconds for tcp_user_timeout. #164369

  • Added the optimizer_inline_any_unnest_subquery session setting to enable/disable the optimizer rule InlineAnyProjectSet. The setting is on by default in v26.2 and later. #161880

  • Users can now set the use_backups_with_ids session setting to enable a new SHOW BACKUPS IN experience. When enabled, SHOW BACKUPS IN {collection} displays all backups in the collection. Results can be filtered by backup end time using OLDER THAN {timestamp} or NEWER THAN {timestamp} clauses. Example usage: SET use_backups_with_ids = true; SHOW BACKUPS IN '{collection}' OLDER THAN '2026-01-09 12:13:14' NEWER THAN '2026-01-04 15:16:17'; #160137

  • If the new SHOW BACKUP experience is enabled by setting the use_backups_with_ids session variable to true, SHOW BACKUP will parse the IDs provided by SHOW BACKUPS and display contents for single backups. #160812

  • If the new RESTORE experience is enabled by setting the use_backups_with_ids session variable to true, RESTORE will parse the IDs provided by SHOW BACKUPS and will restore the specified backup without the use of AS OF SYSTEM TIME. #161294

  • SHOW BACKUP and RESTORE now allow backup IDs even if the use_backups_with_ids session variable is not set. Setting the variable only configures whether LATEST is resolved using the new or legacy path. #162329

  • Added the REVISION START TIME option to the new SHOW BACKUPS experience enabled via the use_backups_with_ids session variable. Use the REVISION START TIME option to view the revision start times of revision history backups. #161328

  • Added support for SHOW STATEMENT HINTS, which displays information about the statement hints (if any) associated with the given statement fingerprint string. The fingerprint is normalized in the same way as EXPLAIN (FINGERPRINT) before hints are matched. Example usage: SHOW STATEMENT HINTS FOR ' SELECT * FROM xy WHERE x = 10 ' or SHOW STATEMENT HINTS FOR $$ SELECT * FROM xy WHERE x = 10 $$ WITH DETAILS. #159231

  • CREATE OR REPLACE TRIGGER is now supported. If a trigger with the same name already exists on the same table, it is replaced with the new definition. If no trigger with that name exists, a new trigger is created. #162633

  • Added support for ALTER TABLE ENABLE TRIGGER and ALTER TABLE DISABLE TRIGGER syntax. This allows users to temporarily disable triggers without dropping them, and later re-enable them. The syntax supports disabling/enabling individual triggers by name, or all triggers on a table using the ALL or USER keywords. #161924

  • Updated DROP TRIGGER to accept the CASCADE option for PostgreSQL compatibility. Since triggers in CockroachDB cannot have dependents, CASCADE behaves the same as RESTRICT or omitting the option entirely. #161915

  • ALTER TABLE ... DROP CONSTRAINT can now be used to drop UNIQUE constraints. The backing UNIQUE index will also be dropped, as CockroachDB treats the constraint and index as the same thing. #162345

  • DROP COLUMN and DROP INDEX with CASCADE now properly drop dependent triggers. Previously, these operations would fail with an unimplemented error when a trigger depended on the column or index being dropped. #163296

  • CREATE OR REPLACE FUNCTION now works on trigger functions that have active triggers. Previously, this was blocked with an unimplemented error, requiring users to drop and recreate triggers. The replacement now atomically updates all dependent triggers to execute the new function body. #163348

  • Updated CockroachDB to allow a prefix of index key columns to be used for the shard column in a hash-sharded index. The shard_columns storage parameter may be used to override the default, which uses all index key columns in the shard column. #161422

  • Added support for the pg_trigger_depth() builtin function, which returns the current nesting level of PostgreSQL triggers (0 if not called from inside a trigger). #162286

  • A database-level changefeed with no tables will periodically poll to check for tables added to the database. The new option hibernation_polling_frequency sets the frequency at which the polling occurs, until a table is found, at which point polling ceases. #156771

  • INSPECT is now a generally available (GA) feature. The enable_inspect_command session variable has been deprecated, and is now effectively always set to true. #159659

  • Added the STRICT option for locality-aware backups. When enabled, backups fail if data from a KV node with one locality tag would be backed up to a bucket with a different locality tag, ensuring data domiciling compliance. #158999

  • Added support for the dmetaphone(), dmetaphone_alt(), and daitch_mokotoff() built-in functions, completing CockroachDB's implementation of the PostgreSQL fuzzystrmatch extension. dmetaphone and dmetaphone_alt return Double Metaphone phonetic codes for a string, and daitch_mokotoff returns an array of Daitch-Mokotoff soundex codes. These functions are useful for fuzzy string matching based on phonetic similarity. #163430

  • crdb_internal.datums_to_bytes is now available in the information_schema system catalog as information_schema.crdb_datums_to_bytes. #156963

  • The information_schema.crdb_datums_to_bytes built-in function is now documented. #160486

  • Row count validation after IMPORT is now enabled by default in async mode. After an IMPORT completes, a background INSPECT job validates that the imported row count matches expectations. The IMPORT result now includes an inspect_job_id column so the INSPECT job can be viewed separately. The bulkio.import.row_count_validation.mode cluster setting controls this behavior, with valid values of off, async (default), and sync. #163543

  • Queries executed via the vectorized engine now display their progress in the phase column of SHOW QUERIES. Previously, this feature was only available in the row-by-row engine. #158029

  • CockroachDB now shows execution statistics (like execution time) on EXPLAIN ANALYZE output for render nodes, which often handle built-in functions. #161509

  • The output of EXPLAIN [ANALYZE] in non-VERBOSE mode is now more succinct. #153361

Operational changes

  • The new cockroach gen dashboard command generates standardized monitoring dashboards from an embedded configuration file. It outputs a dashboard JSON file for either Datadog (--tool=datadog) or Grafana (--tool=grafana), with Grafana dashboards using Prometheus queries. The generated dashboards include metrics across Overview, Hardware, Runtime, Networking, SQL, and Storage categories. Use --output to set the output file path and --rollup-interval to control metric aggregation. #161050
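
    Illustrative invocations (output filename and interval value are placeholders):

    ```shell
    # Generate a Grafana dashboard (Prometheus queries) to a file.
    cockroach gen dashboard --tool=grafana --output=crdb-dashboard.json

    # Generate a Datadog dashboard with a custom metric rollup interval.
    cockroach gen dashboard --tool=datadog --rollup-interval=1m
    ```
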
  • Added the server.sql_tcp_user.timeout cluster setting, which specifies the maximum amount of time transmitted data can remain unacknowledged before the underlying TCP connection is forcefully closed. This setting is enabled by default with a value of 30 seconds and is supported on Linux and macOS (Darwin). #164037
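
    For example, to raise the timeout above the 30-second default:

    ```sql
    SET CLUSTER SETTING server.sql_tcp_user.timeout = '60s';
    ```
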
  • Introduced a new cluster setting kvadmission.store.snapshot_ingest_bandwidth_control.min_rate.enabled. When this setting is enabled and disk bandwidth-based admission control is active, snapshot ingestion will be admitted at a minimum rate. This prevents snapshot ingestion from being starved by other elastic work. #159436
  • The kv.range_split.load_sample_reset_duration cluster setting now defaults to 30m. This should improve load-based splitting in rare edge cases. #159499
  • Added the kv.protectedts.protect, kv.protectedts.release, kv.protectedts.update_timestamp, kv.protectedts.get_record, and kv.protectedts.mark_verified metrics to track protected timestamp storage operations. These metrics help diagnose issues with excessive protected timestamp churn and operational errors. Each operation tracks both successful completions (.success) and failures (.failed, such as ErrExists or ErrNotExists). Operators can monitor these metrics to understand PTS system behavior and identify performance issues related to backups, changefeeds, and other features that use protected timestamps. #160129
  • Added a new metric sql.rls.policies_applied.count that tracks the number of SQL statements where row-level security (RLS) policies were applied during query planning. #164405
  • External connections can now be used with online restore. #159090
  • Changed goroutine profile dumps from human-readable .txt.gz files to binary proto .pb.gz files. This improves the performance of the goroutine dumper by eliminating brief in-process pauses that occurred when collecting goroutine stacks. #160798
  • Added a new structured event of type rewrite_inline_hints that is emitted when an inline-hints rewrite rule is added using information_schema.crdb_rewrite_inline_hints. This event is written to both the event log and the OPS channel. #160901
  • Added a new metric sql.query.with_statement_hints.count that is incremented whenever a statement is executed with one or more external statement hints applied. An example of an external statement hint is an inline-hints rewrite rule added by calling information_schema.crdb_rewrite_inline_hints. #161043
  • Logical Data Replication (LDR) now supports hash-sharded indexes and secondary indexes with virtual computed columns. Previously, tables with these index types could not be replicated using LDR. #161062
  • Backup schedules that utilize the revision_history option now apply that option only to incremental backups triggered by that schedule, rather than duplicating the revision history in the full backups as well. #162105
  • The build.timestamp Prometheus metric now carries major and minor labels identifying the release series of the running CockroachDB binary (e.g., major="26", minor="1" for any v26.1.x build). #163834
  • Jobs now clear their running status messages upon successful completion. #163765
  • Changefeed ranges are now more accurately reported as lagging. #163427

Command-line changes

  • The cockroach debug tsdump command now defaults to --format=raw instead of --format=text. The raw (gob) format is optimized for Datadog ingestion. A new --output flag lets you write output directly to a file, avoiding potential file corruption that can occur with shell redirection. If --output is not specified, output is written to stdout. #160538
  • The cockroach debug tsdump command now supports ZSTD encoding via --format=raw --encoding=zstd. This generates compressed tsdump files that are approximately 85% smaller than raw format. The tsdump upload command automatically detects and decompresses ZSTD files, allowing direct upload without manual decompression. #161998
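
    An illustrative invocation (the output filename is a placeholder):

    ```shell
    # Write ZSTD-compressed raw (gob) output directly to a file,
    # avoiding shell redirection.
    cockroach debug tsdump --format=raw --encoding=zstd --output=tsdump.gob.zst
    ```
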
  • The cockroach debug zip command's --include-files and --exclude-files flags now support full zip path patterns. Patterns containing / are matched against the full path within the zip archive (e.g., --include-files='debug/nodes/1/*.json'). Patterns without / continue to match the base file name as before. #163266
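
    For example:

    ```shell
    # A pattern containing '/' matches full paths inside the archive...
    cockroach debug zip debug.zip --include-files='debug/nodes/1/*.json'

    # ...while a pattern without '/' matches base file names, as before.
    cockroach debug zip debug.zip --exclude-files='*.log'
    ```
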
  • Added a --list-dbs flag to workload init workload_generator that lists all user databases found in debug logs without initializing tables. This helps users discover which databases are available in the debug zip before running the full init command. #163930

DB Console changes

  • Added a new time-series bar graph called Plan Distribution Over Time to the Statement Fingerprint page, on the Explain Plans tab. It shows which execution plans were used in each time interval, helping detect shifts in query plan distributions. #161011
  • The SQL Activity > Sessions page now defaults the Session Status filter to Active, Idle to exclude closed sessions. #160576

Bug fixes

  • The fix for node descriptor not found errors for changefeeds with execution_locality filters in CockroachDB Basic and Standard clusters is now controlled by cluster setting sql.instance_info.use_instance_resolver.enabled (default: true). #163947
  • Fixed a bug that caused a routine with an INSERT statement to unnecessarily block dropping a hash-sharded index or computed column on the target table. This fix applies only to newly created routines. In releases prior to v25.3, the fix must be enabled by setting the session variable use_improved_routine_dependency_tracking to on. #146250
  • Fixed a bug where creating a routine could create unnecessary column dependencies when the routine references columns through CHECK constraints (including those for RLS policies and hash-sharded indexes) or partial index predicates. These unnecessary dependencies prevented dropping the column without first dropping the routine. The fix is gated behind the session setting use_improved_routine_deps_triggers_and_computed_cols, which is off by default prior to v26.1. #159126
  • Fixed a bug that allowed a column to be dropped from a table even if it was referenced in the RETURNING clause of an UPDATE or DELETE statement in a routine. In releases prior to v25.3, the fix must be enabled by setting the session variable use_improved_routine_dependency_tracking to on. #146250
  • CockroachDB could previously encounter internal errors like column statistics cannot be determined for empty column set and invalid union in some edge cases with UNION, EXCEPT, and INTERSECT. This has now been fixed. #150706
  • Fixed a bug that could cause a scan over a secondary index to read significantly more KVs than necessary in order to satisfy a limit when the scanned index had more than one column family. #156672
  • Fixed an issue where long-running transactions with many statements could cause unbounded memory growth in the SQL statistics subsystem. When a transaction includes a large number of statements, the SQL statistics ingester now automatically flushes buffered statistics before the transaction commits. As a side effect, the flushed statement statistics might not have an associated transaction fingerprint ID because the transaction has not yet completed. In such cases, the transaction fingerprint ID cannot be backfilled after the fact. #158527
  • Fixed a bug that allowed columns to be dropped despite being referenced by a routine. This could occur when a column was only referenced as a target column in the SET clause of an UPDATE statement within the routine. This fix only applies to newly-created routines. In versions prior to v26.1, the fix must be enabled by setting the session variable prevent_update_set_column_drop. #158935
  • Fixed a bug that caused newly-created routines to incorrectly prevent dropping columns that were not directly referenced, most notably columns referenced by computed column expressions. The fix is gated behind the session setting use_improved_routine_deps_triggers_and_computed_cols, which is off by default prior to v26.1. #158935
  • Fixed a bug where schema changes could fail after a RESTORE due to missing session data. #159176
  • The ascii built-in function now returns 0 when the input is the empty string instead of an error. #159178
  • Fixed a bug where comments associated with constraints were left behind after the column and constraint were dropped. #159180
  • Fixed a bug which could cause prepared statements to fail with the error message non-const expression when they contained filters with stable functions. This bug has been present since 25.4.0. #159201
  • Fixed a bug in the TPC-C workload where long-duration runs (>= 4 days or indefinite) would experience periodic performance degradation every 24 hours due to excessive concurrent UPDATE statements resetting warehouse and district year-to-date values. #159286
  • Fixed a race condition where queries run after revoking BYPASSRLS could return wrong results because cached plans failed to notice the change immediately. #159354
  • Fixed a bug where TRUNCATE did not behave correctly with respect to the schema_locked storage parameter and was not blocked when Logical Data Replication (LDR) was in use. #159378
  • Fixed a race condition that could occur during context cancellation of an incoming snapshot. #159403
  • Fixed a bug that could cause a panic during changefeed startup if an error occurred while initializing the metrics controller. #159431
  • Fixed a memory accounting issue that could occur when a lease expired due to a SQL liveness session-based timeout. #159527
  • Fixed a bug that caused SHOW CREATE FUNCTION to error when the function body contained casts from columns to user-defined types. #159642
  • Fixed a bug where a query predicate could be ignored when all of the following conditions were met: the query used a lookup join to an index, the predicate constrained a column to multiple values (e.g., column IN (1, 2)), and the constrained column followed one or more columns with optional multi-value constraints in the index. This bug was introduced in v24.3.0. #159722
  • Fixed a bug where rolling back a transaction that had just rolled back a savepoint would block other transactions accessing the same rows for five seconds. #160346
  • Fixed a deadlock that could occur when a statistics creation task panicked. #160348
  • Fixed a bug where CockroachDB could crash when handling decimals with negative scales via the extended PGWire protocol. An error is now returned instead, matching PostgreSQL behavior. #160499
  • Fixed a bug where the pprof UI endpoints for allocs, heap, block, and mutex profiles ignored the seconds parameter and returned immediate snapshots instead of delta profiles. #160608
  • Previously, v26.1.0-beta.1 and v26.1.0-beta.2 could encounter a rare process crash when running TTL jobs. This has been fixed. #160674
  • Fixed a bug where schema changes adding a NOT NULL constraint could enter an infinite retry loop if a row violated the constraint and contained certain content (e.g., "EOF"). Such errors are now correctly classified and don't cause retries. #160780
  • An error will now be reported when the database provided as the argument to a SHOW REGIONS or SHOW SUPER REGIONS statement does not exist. This bug had been present since version v21.1. #161014
  • Fixed a bug where CREATE INDEX on a table with PARTITION ALL BY would fail if the partition columns were explicitly included in the primary key definition. #161083
  • Fixed a bug in which inline-hints rewrite rules created with information_schema.crdb_rewrite_inline_hints were not correctly applied to statements run with EXPLAIN ANALYZE. This bug was introduced in v26.1.0-alpha.2. #161273
  • Fixed a bug where AVRO file imports of data with JSON or binary records could hang indefinitely when encountering stream errors from cloud storage (such as HTTP/2 CANCEL errors). Import jobs will now properly fail with an error instead of hanging. #161290
  • Fixed a bug where IMPORT with AVRO data using OCF format could silently lose data if the underlying storage (e.g., S3) returned an error during read. Such errors are now properly reported. Other formats (specified via data_as_binary_records and data_as_json_records options) are unaffected. The bug has been present since about v20.1. #161318
  • Fixed a bug that prevented successfully injecting hints using information_schema.crdb_rewrite_inline_hints for INSERT, UPSERT, UPDATE, and DELETE statements. This bug had existed since hint injection was introduced in v26.1.0-alpha.2. #161773
  • Fixed prepared statements failing with version mismatch errors when user-defined types are modified between preparation and execution. Prepared statements now automatically detect UDT changes and re-parse to use current type definitions. #161827
  • Previously, CockroachDB could hit an internal error when evaluating built-in functions with '{}' as an argument (without explicit type casts, such as on a query like SELECT cardinality('{}');). This is now fixed and a regular error is returned instead (matching PostgreSQL behavior). #161835
  • Fixed a bug where the index definition shown in pg_indexes for hash sharded indexes with STORING columns was not valid SQL. The STORING clause now appears in the correct position. #161882
  • Fixed a bug where DROP TABLE ... CASCADE would incorrectly drop tables that had triggers or row-level security (RLS) policies referencing the dropped table. Now only the triggers/policies are dropped, and the tables owning them remain intact. #161914
  • Reduced contention when dropping descriptors or running concurrent imports. #161941
  • Fixed a bug where multi-statement explicit transactions using SAVEPOINT to recover from certain errors (like duplicate key-value violations) could lose writes performed before the savepoint was created, in rare cases when buffered writes were enabled (off by default). This bug was introduced in v25.2. #161972
  • Fixed a bug introduced in v26.1.0-beta.1 in which row-level TTL jobs could encounter GC threshold errors if each node had a large number of spans to process. #161979
  • Fixed an error that occurred when using generic query plans that generate a lookup join on indexes containing identity computed columns. #162036
  • Fixed a bug that could cause changefeeds using Kafka v1 sinks to hang when the changefeed was cancelled. #162058
  • Fixed an internal error could not find format code for column N that occurred when executing EXPLAIN ANALYZE EXECUTE statements via JDBC or other clients using the PostgreSQL binary protocol. #162115
  • Fixed a bug where statement bundles were missing CREATE TYPE statements for user-defined types used as array column types. #162357
  • Fixed a bug in which PL/pgSQL UDFs with many IF statements would cause a timeout and/or OOM when executed from a prepared statement. This bug was introduced in v23.2.22, v24.1.15, v24.3.9, v25.1.2, and v25.2.0. #162512
  • Fixed a bug where an error would occur when defining a foreign key on a hash-sharded primary key without explicitly providing the primary key columns. #162608
  • Fixed a bug where generating a debug zip could trigger an out-of-memory (OOM) condition on a node if malformed log entries were present in logs using json or json-compact formatting. This bug was introduced in v24.1. #163224
  • Fixed a bug that prevented the optimizer_min_row_count setting from applying to anti-join expressions, which could lead to bad query plans. The fix is gated behind optimizer_use_min_row_count_anti_join_fix, which is on by default on v26.2 and later, and off by default in earlier versions. #163244
  • Fixed an optimizer limitation that prevented index usage on computed columns when querying through views or subqueries containing JSON fetch expressions (such as ->, ->>, #>, or #>>). Queries that project JSON expressions matching indexed computed column definitions now correctly use indexes instead of performing full table scans, significantly improving performance for JSON workloads. #163395
  • Fixed a bug that could cause incorrect results for any of the following types of statements:

    • Prepared statements with LIMIT expressions where the limit is a placeholder and the given placeholder value is negative. This could result in a successful query when the correct result is an error.
    • Prepared statements with OFFSET expressions where the offset is a placeholder. In some cases this could produce incorrect results.
    • Statements within a UDF or stored procedure similar to the two cases above, where the limit or offset is a reference to an argument of the UDF or stored procedure. #163500
  • Dropping a region from the system database no longer leaves REGIONAL BY TABLE system tables referencing the removed region, preventing descriptor validation errors. #163503

  • Fixed an issue where changefeeds with execution_locality filters could fail in multi-tenant clusters with node descriptor not found errors. #163507

  • Fixed a bug where EXPLAIN ANALYZE (DEBUG) statement bundles did not include triggers, their functions, or tables modified by those triggers. The bundle's schema.sql file now contains the CREATE TRIGGER, CREATE FUNCTION, and CREATE TABLE statements needed to fully reproduce the query environment when triggers are involved. #163584

  • Fixed a rare data race during parallel constraint checks where a fresh descriptor collection could resolve a stale enum type version. This bug was introduced in v26.1.0. #163883

  • Fixed a bug where running changefeeds with envelope=enriched and enriched_properties containing source would cause failures during a cluster upgrade. #163885

  • Fixed a bug where dropped columns appeared in pg_catalog.pg_attribute with the atttypid column equal to 2283 (anyelement). Now this column will be 0 for dropped columns. This matches PostgreSQL behavior, where atttypid=0 is used for dropped columns. #163950

  • Fixed a race condition/conflict between concurrent ALTER FUNCTION ... SET SCHEMA and DROP SCHEMA operations. #164043

  • Fixed a bug where super region zone configurations did not constrain all replicas to regions within the super region. #164285

  • Fixed a bug where CockroachDB returned "cached plan must not change result type" errors during the Execute phase instead of the Bind phase of the extended pgwire protocol. This caused compatibility issues with drivers like pgx that expect the error before BindComplete is sent, particularly when using batch operations with prepared statements after schema changes. #164406

  • Statistics histogram collection is now skipped for JSON columns referenced in partial index predicates, except when sql.stats.non_indexed_json_histograms.enabled is true (default: false). #164477

  • Fixed a bug where import rollback could incorrectly revert data in a table that was already online. This could only occur if an import job was cancelled or failed after the import had already succeeded and the table was made available for use. #159627

  • An invalid avro_schema_prefix is now caught at statement time. The prefix must start with [A-Za-z_] and subsequently contain only [A-Za-z0-9_], as specified in the Avro specification. #159869

Performance improvements

  • Added a new session variable, distsql_prevent_partitioning_soft_limited_scans, which, when true, prevents scans with soft limits from being planned as multiple TableReaders by the physical planner. This should decrease the initial setup costs of some fully-distributed query plans. #160051
  • Database- and table-level backups no longer fetch all object descriptors from disk in order to resolve the backup targets. Now only the objects that are referenced by the targeted objects will be fetched. This improves performance when there are many tables in the cluster. #157790
  • Various background tasks and jobs now more actively yield to foreground work when that work is waiting to run. #159205
  • Improved changefeed performance when filtering unwatched column families and offline tables by replacing expensive error chain traversal with direct status enum comparisons. #159745
  • Fixed a performance regression in pg_catalog.pg_roles and pg_catalog.pg_authid by avoiding privilege lookups for each row in the table. #160121
  • Queries that have comparison expressions with the levenshtein built-in are now up to 30% faster. #160394
  • The optimizer now better optimizes query plans of statements within UDFs and stored procedures that have IN subqueries. #160503
  • Significantly reduced WAL write latency when using encryption at rest by properly recycling WAL files instead of deleting and recreating them. #160784
  • Optimized the logic that applies zone config constraints so it no longer fetches all descriptors in the cluster during background constraint reconciliation. #160966
  • The optimizer can now better handle filters that redundantly unnest() an array placeholder argument within an IN or ANY filter. Previously, this pattern could prevent the filters from being used to constrain a table scan. Example: SELECT k FROM a WHERE k = ANY(SELECT * FROM unnest($1:::INT[])) #161816
  • Improved changefeed checkpointing performance when changefeeds are lagging. Previously, checkpoint updates could be redundantly applied multiple times per checkpoint operation. #162546
  • The query optimizer now eliminates redundant filter and projection operators over inputs with zero cardinality, even when the filter or projection expressions are not leakproof. This produces simpler, more efficient query plans in cases where joins or other operations fold to zero rows. #164212