
- Guidance on [how to contribute to CockroachDB](https://cockroachlabs.atlassian.net/wiki/x/QQFdB) has been moved to the public wiki at [wiki.crdb.io](https://cockroachlabs.atlassian.net/wiki/). [#41542][#41542]
- Removed the `kv.bulk_io_write.addsstable_max_rate` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). [#41745][#41745] {% comment %}doc{% endcomment %}
- Improved the consistency checker's log output. [#41893][#41893]
- When the replicas within a range are found to be corrupted, the outliers are now terminated. Previously, the leaseholder replica would terminate, regardless of which replicas disagreed with each other. This is expected to curb the spread of corrupted data better than the previous approach. [#41902][#41902]


Backward-incompatible changes

- The `extract()` [built-in function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) with sub-second arguments (millisecond, microsecond) is now Postgres-compatible and returns the total number of seconds in addition to sub-seconds. Anyone who was previously relying on `extract()` to return only sub-second data will need to adjust their applications. [#41069][#41069] {% comment %}doc{% endcomment %}
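
For illustration, a sketch of the new behavior (the timestamp value here is arbitrary):

~~~
-- Postgres-compatible: sub-second fields now include the whole seconds.
SELECT extract(millisecond FROM TIMESTAMP '2020-01-01 10:30:05.123');
-- Now returns 5123 (5 s + 123 ms); previously returned only 123.
~~~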


Enterprise edition changes

- [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) statements now support specifying CSV filenames using wildcard characters. This behavior can be disabled with the `WITH disabled_glob_matching` option. [#40714][#40714] {% comment %}doc{% endcomment %}
- When using a `nodelocal` file URL for [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import)/[`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore)/[`EXPORT`](https://www.cockroachlabs.com/docs/v20.1/export), you can now specify which node's local file system to use by including the node's ID in the URL: `nodelocal:///path/file.csv`. [#41990][#41990] {% comment %}doc{% endcomment %}
- Added the new `WITH experimental_save_rejected` option for skipping faulty rows during `IMPORT`, saving the faulty rows in a file called `.rejected`. After fixing the problems in this file, you can use it with `IMPORT INTO` (see the example after this list). [#41430][#41430] {% comment %}doc{% endcomment %}
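
For illustration, a hypothetical import combining glob matching with the new option (the file URL and table schema here are placeholders):

~~~
-- Match several CSVs with a wildcard and save faulty rows to a
-- .rejected file instead of aborting the import.
IMPORT TABLE users (id INT PRIMARY KEY, name STRING)
CSV DATA ('nodelocal:///imports/users-*.csv')
WITH experimental_save_rejected;
~~~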


SQL language changes

- The `extract()` [built-in function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) now supports millennium, century, decade, isoyear, isodow, and julian for `DATE`, `TIMESTAMP`, and `TIMESTAMPTZ`. The `date_trunc()` function now supports millennium, century, and decade for `DATE`, `TIMESTAMP`, and `TIMESTAMPTZ`. [#41784][#41784] {% comment %}doc{% endcomment %}
- `extract()` now supports a string constant as the element argument. [#41429][#41429]
- Added support for the `JOIN LATERAL` syntax. [#40945][#40945] {% comment %}doc{% endcomment %}
- Added support for `WITH RECURSIVE` with `UNION ALL` (see the example after this list). [#41368][#41368] {% comment %}doc{% endcomment %}
- Added support for the `bit_and()` and `bit_or()` aggregate functions. [#41334][#41334] {% comment %}doc{% endcomment %}
- Tuples may now contain `NULL` values when being compared against another tuple. [#40298][#40298]
- Added syntax-only support for `ORDER BY ... NULLS FIRST | LAST`. [#41544][#41544] {% comment %}doc{% endcomment %}
- Comments can now be associated with indexes via `COMMENT ON INDEX` and inspected with `SHOW INDEXES FROM ... WITH COMMENT`. [#41555][#41555] {% comment %}doc{% endcomment %}
- `SELECT` and `HAVING` can now refer to ungrouped columns when the grouped columns contain the primary key of the table containing the ungrouped columns. [#41732][#41732] {% comment %}doc{% endcomment %}
- Dropping a unique index that was created via a `CREATE UNIQUE INDEX` statement no longer requires the `CASCADE` option. [#42001][#42001] {% comment %}doc{% endcomment %}
- Added the `pg_prepared_statements` table. [#42018][#42018]
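
As a quick sketch of the new recursive CTE support, a query that generates the numbers 1 through 5:

~~~
-- WITH RECURSIVE ... UNION ALL: the working table starts at 1 and
-- grows until the recursive branch returns no rows.
WITH RECURSIVE nums(n) AS (
    SELECT 1
  UNION ALL
    SELECT n + 1 FROM nums WHERE n < 5
)
SELECT n FROM nums;
~~~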


Command-line changes

- The new `--storage-engine` flag for [`cockroach start`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start), and the equivalent `COCKROACH_STORAGE_ENGINE` environment variable, specify the type of storage engine a node should use. Options are `rocksdb` (default) and `pebble`. [#41453][#41453] {% comment %}doc{% endcomment %}
- Enhanced the error message produced by [`cockroach init`](https://www.cockroachlabs.com/docs/v20.1/cockroach-init) when it encounters an already-initialized cluster to recommend adding `--join` to the `cockroach start` commands. [#42318][#42318]


Bug fixes

- The Admin UI no longer mixes unit sizes in timeseries graph tooltips. [#40970][#40970]
- The `pg_collation_for()` function now correctly quotes its output. [#41545][#41545]
- Fixed an internal error when subqueries are used in arguments to commands like `SET`. [#41581][#41581]
- CockroachDB now properly emits the cluster ID, once known, to all secondary log files (SQL audit logging, statement execution logging, and RocksDB events) and properly removes excess secondary log files. [#40993][#40993]
- Callers to `acquireNodeLease` are no longer erroneously cancelled just because the context of the first caller was cancelled. [#41785][#41785]
- Vectorized execution no longer errors when adding an ordinality column to an expression with a limit. For example, `SELECT * FROM (SELECT * FROM foo LIMIT 1) WITH ORDINALITY` no longer throws an index out of range error. [#41782][#41782]
- Fixed a bug causing rare crashes when using built-in functions. [#41970][#41970]
- The `date_trunc()` function now correctly considers timezones for `TIMESTAMPTZ` and `DATE` types. [#42006][#42006]
- Fixed a bug causing `CREATE TABLE AS` statements to fail with the message "unexpected concurrency for a flow that was forced to be planned locally". [#42013][#42013]
- Fixed a bug where `SHOW ZONE CONFIGURATION` and `crdb_internal.zones` would show results for resources the user does not have permission to view. [#42066][#42066], [#42080][#42080]
- Fixed a bug during planning for some queries that could cause an infinite loop and prevent the query from being cancelled. [#42082][#42082]
- Fixed a bug that caused jobs for dropping tables to report an inaccurate status. [#42121][#42121]
- Fixed a bug where rapid network disconnections could lead to cluster unavailability. [#41533][#41533]
- Fixed a stack overflow that could occur with certain patterns of queries. [#41984][#41984]
- Fixed some casts from `OID` to `TEXT`. [#41928][#41928]
- Fixed a bug where some cluster setting changes were not reflected in currently running `IMPORT`s. [#42268][#42268]
- Fixed bugs where: casting to `DATE` (`::date`) would yield the previous day when a context local timestamp was set and the timezone was west of UTC+00:00; `date_trunc()` for `TIMESTAMP` would produce incorrect results if a local timezone was set; and `date_trunc()` for `DATE` would produce an incorrect negative timezone offset in a local timezone. [#42267][#42267]
- Casting `TIMESTAMPTZ` data to the `TIME` type now properly respects time zone information. [#42269][#42269]
- Fixed a crash when using `EXPLAIN (VEC)` on some index joins. [#40897][#40897]


Performance improvements

- Improved performance for some join queries due to improved filter inference during query planning. [#41250][#41250]
- Improved statistics estimation during query planning for columns with many `NULL` values. [#41520][#41520]
- The `cockroach debug check-store` command is now faster. [#41805][#41805]
- Improved the low-level performance of short range reverse scans. [#42092][#42092]
- Individual response messages in a response batch no longer each contain information about transaction state changes. [#42139][#42139]
- `BACKUP` work is now spread more evenly across clusters that have non-uniform leaseholder distributions. [#42274][#42274]




Contributors

This release includes 376 merged PRs by 48 authors. We would like to thank the following contributors from the CockroachDB community:

- Aayush Shah (first-time contributor)
- Andrea Sosso (first-time contributor)
- Arber Avdullahu (first-time contributor)
- Elliot Courant
- George Papadrosou
- Roga Pria Sembada (first-time contributor)
- Salvatore Tomaselli (first-time contributor)
- lzhfromustc (first-time contributor)
- sumeerbhola (first-time contributor)



Release Date: {{ include.release_date | date: "%B %-d, %Y" }}

{{site.data.alerts.callout_danger}}
{% include /v20.1/alerts/warning-a58932.md %}
{{site.data.alerts.end}}

In addition to security updates and various enhancements and bug fixes, this v20.1 alpha release includes some major highlights:

- **Cluster backup:** You can now use CockroachDB's [Enterprise `BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) feature to back up an entire cluster's data, including configuration and system information such as user privileges, zone configurations, and cluster settings. At this time, you can restore individual databases and tables from a cluster backup. In a future release, you'll be able to restore an entire cluster as well.
- **Fresher follower reads:** [Follower reads](https://www.cockroachlabs.com/docs/v20.1/follower-reads) are now available for reads at least 4.8 seconds in the past, a much shorter window than the previous 48 seconds.
- **Import from Avro format:** You can now use the [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) and [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v20.1/import-into) statements to bulk import SQL data from Avro files. This makes it easier to migrate from systems like Spanner that export data in the Avro format.
- **Vectorized execution for `TIMESTAMPTZ`:** [Vectorized execution](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) now supports the `TIMESTAMPTZ` data type in addition to several other previously [supported data types](https://www.cockroachlabs.com/docs/v20.1/data-types).
- **CockroachDB backend for Django:** Developers using the Django framework can now leverage the `django-cockroachdb` adapter to [run their Python apps on CockroachDB](https://www.cockroachlabs.com/docs/v20.1/build-a-python-app-with-cockroachdb-django).


Security updates

- The authentication code for new SQL connections has been simplified to always use the HBA configuration defined via the `server.host_based_authentication.configuration` cluster setting. The format of this configuration generally follows that of [`pg_hba.conf`](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html). This behavior remains equivalent to previous CockroachDB versions, and this change is only discussed here for clarity:

  Upon each configuration change, CockroachDB automatically inserts the entry `host all root all cert` as a first rule, to ensure the root user can always log in with a valid client certificate.

  If the cluster setting is empty or found to be invalid, the following default configuration is automatically used:

  ~~~
  host all root all cert
  host all all all cert-password
  ~~~

  At any moment, the current configuration on each node can be inspected using the `/debug/hba_conf` URL on the HTTP endpoint. The list of valid [authentication](https://www.cockroachlabs.com/docs/v20.1/authentication) methods is currently:

  - `cert`, for certificate-based authentication over an SSL connection exclusively
  - `cert-password`, which allows either cert-based or password-based authentication over an SSL connection
  - `password`, for password-based authentication over an SSL connection
  - `gss`, for Kerberos-based authentication over an SSL connection, enabled when running a CCL binary with an Enterprise license

  In effect, CockroachDB treats all the `host` rules as `hostssl` and behaves as per a default of `hostnossl all all all reject`.

  It is not currently possible to define authentication rules over non-SSL connections. As of this writing, non-SSL connections are only possible when running with `--insecure`, and on insecure nodes all the authentication logic is entirely disabled. [#43726][#43726]

- CockroachDB now supports the [authentication](https://www.cockroachlabs.com/docs/v20.1/authentication) methods `trust` and `reject` in the [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) `server.host_based_authentication.configuration`. They are used to unconditionally allow and deny matching connection attempts (see the example after this list). [#43731][#43731]
- Users [`GRANT`ing](https://www.cockroachlabs.com/docs/v20.1/grant) and [`REVOKE`ing](https://www.cockroachlabs.com/docs/v20.1/revoke) admin roles must be members of the admin role with `ADMIN OPTION`. This check was previously bypassed. [#41218][#41218]
- Fixed a bug in the parsing logic for `server.host_based_authentication.configuration`, where single-character strings and quoted strings containing spaces or commas were not parsed properly. This caused rules for usernames consisting of a single character or containing spaces to apply improperly. [#43713][#43713]
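
For example, a sketch of installing a custom HBA configuration that uses the new methods (the usernames and network range here are hypothetical):

~~~
-- Allow app_user from a trusted network unconditionally, reject
-- former_user everywhere, and fall back to cert-or-password auth.
SET CLUSTER SETTING server.host_based_authentication.configuration = '
host all app_user 10.0.0.0/8 trust
host all former_user all reject
host all all all cert-password
';
~~~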


General changes

- Added the system tables `system.protected_ts_meta` and `system.protected_ts_records` to support the implementation of [protected timestamps](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20191009_gc_protected_timestamps.md), a subsystem used to ensure that data required for long-running jobs is not garbage collected. [#42829][#42829]


Enterprise edition changes

- Shortened the default interval for the `kv.closed_timestamp.target_duration` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) from `30s` to `3s`, which allows for follower reads at 4.8 seconds in the past rather than the previous 48 seconds. [#43147][#43147]
- CockroachDB now supports [importing](https://www.cockroachlabs.com/docs/v20.1/import) Avro data. [#43104][#43104]
- Importing data into CockroachDB from external HTTP servers is now more resilient to connection interruption. [#43374][#43374] [#43558][#43558]
- Added `BACKUP TO`, which allows you to [back up](https://www.cockroachlabs.com/docs/v20.1/backup) all relevant system tables as well as all user data in a cluster (see the example after this list). [#43767][#43767]
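
A minimal sketch of the new cluster backup, assuming a `nodelocal` destination (the path is a placeholder):

~~~
-- Back up all user data and relevant system tables in the cluster.
BACKUP TO 'nodelocal:///backups/cluster-backup';
~~~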


SQL language changes

- CockroachDB now provides a link to the relevant GitHub issue when clients attempt to use certain features that are not yet implemented. [#42847][#42847]
- [Vectorized queries](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) that execute only on supported types (even if those types form part of a table with unsupported types) are now run through the vectorized engine. Such queries previously fell back to the row-by-row execution engine. [#42616][#42616]
- CockroachDB now allows stored columns in [secondary indexes](https://www.cockroachlabs.com/docs/v20.1/indexes) to respect the [column family](https://www.cockroachlabs.com/docs/v20.1/column-families) table definitions that they are based on. [#42073][#42073]
- The error message reported when a client specifies a bulk I/O operation that uses an incompatible SQL function or operator now avoids the confusing and inaccurate term "backfill". This error is also now reported with code `22C01`. [#42941][#42941]
- Added the `CURRENT_TIME` [function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators), which can be used with a precision, e.g., `SELECT CURRENT_TIME, CURRENT_TIME(3)`. [#42928][#42928]
- `CREATE TABLE pg_temp.abc(a int)` now creates a temporary table (see the example after this list). See the [temp tables RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20191009_temp_tables.md) (guide-level explanation) for more details about the search path semantics. [#41977][#41977]
- A new boolean column, `is_inverted`, has been added to the `crdb_internal.table_indexes` virtual table, which indicates whether the [index](https://www.cockroachlabs.com/docs/v20.1/indexes) is inverted or not. [#43102][#43102]
- The output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v20.1/explain) now shows [joins](https://www.cockroachlabs.com/docs/v20.1/joins) with no equality columns as "cross" instead of "hash". Cross joins can be very expensive and should be avoided. [#43061][#43061]
- The error code for [backups](https://www.cockroachlabs.com/docs/v20.1/backup) that would overwrite files changed from class 58 ("system") to class 42 ("Syntax or Access Rule Violation"). [#43221][#43221]
- CockroachDB now allows the usage of `TIMETZ` throughout the cluster. [#43023][#43023]
- Column types are now displayed in the box for the input synchronizer in the flow diagram obtained via [`EXPLAIN (DISTSQL, TYPES)`](https://www.cockroachlabs.com/docs/v20.1/explain). [#43193][#43193]
- CockroachDB now supports [interval types](https://www.cockroachlabs.com/docs/v20.1/interval) with precision (e.g., `INTERVAL(5)`, `INTERVAL SECOND(5)`) and storing intervals with duration fields (e.g., `INTERVAL x TO y`). [#43130][#43130]
- When a session that has created temporary tables exits gracefully, the tables and the temporary schema are now deleted automatically. [#42742][#42742]
- [Foreign key](https://www.cockroachlabs.com/docs/v20.1/foreign-key) checks that do not involve cascades are now performed after the mutation is complete, allowing self-referential foreign keys, or referential cycles. The execution plans for foreign key checks are now driven by the [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer), which can make better planning decisions. In particular, if there is a suitable duplicated index, the one in the current locality will be used for foreign key checks. [#43263][#43263]
- Better estimates of the number of rows needed by [`SELECT`](https://www.cockroachlabs.com/docs/v20.1/select-clause) and [`DISTINCT`](https://www.cockroachlabs.com/docs/v20.1/select-clause#select-distinct-rows) operations may now result in faster queries when the results of these queries are limited (e.g., `SELECT DISTINCT * FROM t LIMIT 10`). [#42895][#42895]
- `MINUTE TO SECOND` is now parsed as `MM:SS` instead of `HH:MM`. Additionally, [interval syntax](https://www.cockroachlabs.com/docs/v20.1/interval) such as `INTERVAL '01:02.123'` is now parsed correctly as `MM:SS.fff`. This matches Postgres behavior. [#43292][#43292]
- Previously, CockroachDB returned error codes `42830` and `23503` for duplicate [foreign key](https://www.cockroachlabs.com/docs/v20.1/foreign-key) names. It now returns `42710`, which matches Postgres. [#43210][#43210]
- Clients can now retrieve system user information from the `pg_authid` virtual table, which is Postgres-compatible. [#43437][#43437]
- The [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer) can now derive constant [computed columns](https://www.cockroachlabs.com/docs/v20.1/computed-columns) during [index](https://www.cockroachlabs.com/docs/v20.1/indexes) selection. This enables more efficient `HASH` indexes. [#43450][#43450]
- The [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) now supports the [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp) data type. [#43514][#43514]
- CockroachDB now provides more descriptive error messages and an error hint when an unsupported rule is provided via `server.host_based_authentication.configuration`. [#43711][#43711] [#43710][#43710]
- Added an experimental prototype for altering the [primary key](https://www.cockroachlabs.com/docs/v20.1/primary-key) of a table. [#42462][#42462]
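
For illustration, a short sketch of two of the additions above (the table name is arbitrary):

~~~
-- A session-scoped temporary table; it is dropped automatically when
-- the session ends gracefully.
CREATE TABLE pg_temp.abc (a INT);

-- CURRENT_TIME now accepts a precision argument (here, milliseconds).
SELECT CURRENT_TIME, CURRENT_TIME(3);
~~~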


Command-line changes

- Added a `nodelocal` command that can be used to upload a file: `cockroach nodelocal upload location/of/file destination/of/file`. [#42966][#42966]
- The `table` format, used to display the results of CLI shell queries, has been updated. [#43728][#43728]
- Telemetry is now recorded whenever the [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) command is used. [#43795][#43795]


Admin UI changes

- Added search and pagination to the [Statements page](https://www.cockroachlabs.com/docs/v20.1/admin-ui-statements-page). [#41641][#41641]
- A graph of changefeed restarts due to retryable errors is now included in the [Admin UI](https://www.cockroachlabs.com/docs/v20.1/admin-ui-debug-pages). [#43213][#43213]


Bug fixes

- Fixed a bug that caused some jobs to be left indefinitely in a pending state and never run. [#42880][#42880]
- Fixed the row count estimate during query planning for some queries with multiple predicates where the selectivity of one predicate was calculated using a histogram. [#42916][#42916]
- CockroachDB now more reliably produces an error message when a client mistakenly uses a directory name instead of a file name with `nodelocal://` in bulk I/O operations. [#42542][#42542]
- Fixed a bug where an error would occur when trying to export data using a `nodelocal://` URL. CockroachDB now properly handles cases where the system's temporary directory lives on a different filesystem from the external I/O directory. [#42542][#42542]
- CockroachDB now avoids using `$TMPDIR` (often set to `/tmp`) during bulk I/O operations. This prevents errors when the `$TMPDIR` disk capacity is small compared to the configured external I/O directory. [#42542][#42542]
- Temporary files created during certain bulk I/O operations are now properly deleted when an error occurs. This prevents left-over temporary files from being retained in the system and leaking disk usage over time. [#42542][#42542]
- Empty [arrays](https://www.cockroachlabs.com/docs/v20.1/array) are now correctly encoded and decoded over the binary protocol. [#42949][#42949]
- CockroachDB now ensures that databases being [restored](https://www.cockroachlabs.com/docs/v20.1/restore) are dropped if the `RESTORE` is canceled or fails. [#42946][#42946]
- Fixed caching issues surrounding [role](https://www.cockroachlabs.com/docs/v20.1/authorization#create-and-manage-roles) memberships, where users could see out-of-date role membership information. [#42998][#42998]
- Fixed a bug where scanning an [index](https://www.cockroachlabs.com/docs/v20.1/indexes) of an unsupported type with the vectorized engine would lead to an internal error. [#42999][#42999]
- Fixed a bug where comparisons between [`DATE`](https://www.cockroachlabs.com/docs/v20.1/date) and [`TIMESTAMP`](https://www.cockroachlabs.com/docs/v20.1/timestamp) vs. [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp) broke because CockroachDB tried to normalize the `TIMESTAMPTZ` to UTC. CockroachDB now converts the `DATE` and `TIMESTAMP` to the `context` timezone and compares the `TIMESTAMPTZ` without altering its timezone. [#42927][#42927]
- Previously, CockroachDB did not correctly handle casts to `date` from `timestamp`/`timestamptz` values with a time component, for times before the Unix epoch. For example, `'1969-12-30 01:00:00'::timestamp` would round to `'1969-12-31'` instead of `'1969-12-30'`. This has been fixed. [#42952][#42952]
- Fixed a bug where `current_timestamp` did not correctly account for `SET TIME ZONE` when storing results, and stored the timestamp as `UTC` instead. [#43012][#43012]
- The range rebalancing logic now considers stores with very close diversity scores equal (all other things being the same) and does not attempt to rebalance. [#43041][#43041]
- The range rebalancing logic now considers the new store being added when looking for a rebalance target. [#43041][#43041]
- Previously, gracefully terminating a node with `SIGINT` printed an error banner to the console. This was misleading, since the node responded to the signal correctly and terminated cleanly. This patch converts the error banner to a less-alarming informational message. [#42848][#42848]
- Fixed a bug that could lead to follower reads or [CDC](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) updates that did not reflect the full set of data at the timestamp. This bug was never observed in practice and should rarely cause issues, one of the necessary ingredients being an aggressive closed timestamp interval. [#42939][#42939]
- Fixed a bug where a well-timed write could slip in on the right-hand side of a [range merge](https://www.cockroachlabs.com/docs/v20.1/range-merges), allowing it to improperly synchronize with reads on the post-merge range. [#43138][#43138]
- Previously, the [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer) could panic in a specific situation where it would prune all the columns of multiple scans of the same CTE and then try to define different required physical properties for each scan. This seems to have been a possible bug since the addition of multi-use CTEs in v19.2, but is hard to trigger without the not-yet-released `LimitHint` physical property. This patch makes all CTE scans uniquely identifiable, even after column pruning. [#43161][#43161]
- Some incorrect issue links referenced by error hints have been corrected. [#43232][#43232]
- CockroachDB no longer fails on an expression of the form `NOT(a && b)`. [#43242][#43242]
- Improved support for `OID` column types in tables. [#42973][#42973]
- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v20.1/explain) can now be used with statements that use [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v20.1/as-of-system-time). [#43296][#43296]
- Fixed an internal error that could be returned when performing `MIN`/`MAX` aggregation over a [`STRING`](https://www.cockroachlabs.com/docs/v20.1/string) column that contains `NULL` values when executed via the vectorized engine. Only the previous v20.1 alpha releases were affected. [#43429][#43429]
- Fixed an internal error that could occur when a `CASE` operator operating on distinct but compatible types was executed via the vectorized engine. For example, a query similar to `SELECT CASE WHEN false THEN 0:::INT2 ELSE 1:::INT8 END` previously would error out. [#43557][#43557]
- CockroachDB now ensures that a transaction running into multiple intents from an abandoned conflicting transaction cleans them up more efficiently. [#43563][#43563]
- CockroachDB now writes less metadata about aborted transactions to disk. [#42765][#42765]
- The concept of lax constant functional dependencies was [previously removed](https://github.com/cockroachdb/cockroach/pull/43532), but a left-over case remained when a key is downgraded: if there was a strong empty key, the result was a lax empty key (which is no longer a concept). This change removes the key altogether in that case. [#43722][#43722]
- It is now possible to perform [`ALTER COLUMN SET/DROP NOT NULL`](https://www.cockroachlabs.com/docs/v20.1/alter-column) on multiple (different) columns of the same table inside a single transaction. [#43644][#43644]
- CockroachDB now properly rejects [`ALTER COLUMN DROP NOT NULL`](https://www.cockroachlabs.com/docs/v20.1/alter-column) on a column that is part of the primary key. [#43644][#43644]
- When the fourth column of a rule in the setting `server.host_based_authentication.configuration` is an IP address without a mask length (e.g., `1.2.3.4` instead of `1.2.0.0/16`), CockroachDB now properly interprets the fifth column as an IP netmask, as per the [PostgreSQL documentation](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html). [#43779][#43779]
- CockroachDB no longer tries to issue HTTP requests as part of an [import](https://www.cockroachlabs.com/docs/v20.1/import) once the import job has been canceled. [#43789][#43789]


Performance improvements

- When resumed, paused [imports](https://www.cockroachlabs.com/docs/v20.1/import) now continue from their internally recorded progress instead of starting over. [#42476][#42476] [#43053][#43053]
- Adjusted the [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer)'s cost of a [lookup join](https://www.cockroachlabs.com/docs/v20.1/joins) when the lookup columns aren't a key in the table. This will cause some queries to switch to using a hash or merge join instead of a lookup join, improving performance in most cases. [#43003][#43003]
- Removed an unused field from the Raft command protobuf, resulting in a 16% reduction in the overhead of each Raft proposal. [#43042][#43042]
- Range splits are now less disruptive to foreground reads. [#43048][#43048]
- CockroachDB now uses better execution plans when a `VALUES` clause is used as the right-hand side of `IN` or `ANY`. [#43154][#43154]
- The [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer) can now infer additional filter conditions in some cases based on transitive equalities between columns. [#43194][#43194]
- Improved the estimated row count for some [lookup joins](https://www.cockroachlabs.com/docs/v20.1/joins) during planning, which can lead to better plans. [#43325][#43325]
- The [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer) now generates better execution plans in some cases where there is an `ORDER BY` expression that simplifies to a simple variable reference. [#43465][#43465]


Build changes

- Go 1.13.5 is now required to build CockroachDB from source. [#43289][#43289]


Doc Updates

- Added a [Django app development tutorial](https://www.cockroachlabs.com/docs/v20.1/build-a-python-app-with-cockroachdb-django). [#6359](https://github.com/cockroachdb/docs/pull/6359) [#6365](https://github.com/cockroachdb/docs/pull/6365)
- Updated the [Hibernate app development tutorial](https://www.cockroachlabs.com/docs/v20.1/build-a-java-app-with-cockroachdb-hibernate) with client-side retry logic. [#5760](https://github.com/cockroachdb/docs/pull/5760)
- Documented how to [use keyset pagination to iterate through query results](https://www.cockroachlabs.com/docs/v20.1/selection-queries#paginate-through-limited-results). [#6114](https://github.com/cockroachdb/docs/pull/6114)
- Improved the [GSSAPI authentication](https://www.cockroachlabs.com/docs/v20.1/gssapi_authentication) instructions for configuring Active Directory and MIT and for configuring the client. [#6115](https://github.com/cockroachdb/docs/pull/6115)
- Expanded the [Kubernetes tutorial](https://www.cockroachlabs.com/docs/v20.1/orchestrate-cockroachdb-with-kubernetes#step-2-start-cockroachdb) to show how to use a custom CA instead of the Kubernetes built-in CA when using manual configs. [#6232](https://github.com/cockroachdb/docs/pull/6232)
- Updated the [Kubernetes tutorial](https://www.cockroachlabs.com/docs/v20.1/orchestrate-cockroachdb-with-kubernetes) for compatibility with Helm 3.0. [#6121](https://github.com/cockroachdb/docs/pull/6121)
- Added language-specific connection strings to the instructions on [connecting to a CockroachCloud cluster](https://www.cockroachlabs.com/docs/cockroachcloud/connect-to-your-cluster). [#6077](https://github.com/cockroachdb/docs/pull/6077)
- Added Docker as a download option on the full [release notes list](https://www.cockroachlabs.com/docs/releases#docker). [#5792](https://github.com/cockroachdb/docs/issues/5792)
- Updated the [`IMPORT` documentation](https://www.cockroachlabs.com/docs/v20.1/import) with an example usage of `DELIMITED` with escaping, a note about `DEFAULT` values, and an explanation of the `strict_quotes` option. [#6244](https://github.com/cockroachdb/docs/pull/6244)
- Added an FAQ on [why Cockroach Labs changed the license for CockroachDB](https://www.cockroachlabs.com/docs/v20.1/frequently-asked-questions#why-did-cockroach-labs-change-the-license-for-cockroachdb). [#6154](https://github.com/cockroachdb/docs/pull/6154)
- Corrected the description of the [possible result of clock skew outside the configured clock offset bounds](https://www.cockroachlabs.com/docs/v20.1/operational-faqs#what-happens-when-node-clocks-are-not-properly-synchronized). [#6329](https://github.com/cockroachdb/docs/pull/6329)
- Expanded the [data types overview](https://www.cockroachlabs.com/docs/v20.1/data-types) to indicate whether or not a type supports [vectorized execution](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). [#6327](https://github.com/cockroachdb/docs/pull/6327)




Contributors

This release includes 279 merged PRs by 47 authors. We would like to thank the following contributors from the CockroachDB community:

- Akshay Shah (first-time contributor)
- Andrii Vorobiov
- Antoine Grondin
- Jason Brown (first-time contributor)



- CockroachDB previously allowed non-authenticated access to privileged HTTP endpoints like `/_admin/v1/events`, which operate using `root` user permissions and can thus access (and sometimes modify) any and all data in the cluster. This security vulnerability has been patched by disallowing non-authenticated access to these endpoints and restricting access to admin users only.

  {{site.data.alerts.callout_info}}
  Users who have built monitoring automation using these HTTP endpoints must modify their automation to work using an HTTP session token for an admin user.
  {{site.data.alerts.end}}

- Some Admin UI screens (e.g., Jobs) were previously incorrectly displayed using `root` user permissions, regardless of the logged-in user's credentials. This enabled insufficiently privileged users to access privileged information. This security vulnerability has been patched by using the credentials of the logged-in user to display all Admin UI screens.

- Privileged HTTP endpoints and certain Admin UI screens require an admin user. However, `root` is disallowed from logging in via HTTP, and it is not possible to create additional admin accounts without an Enterprise license. This is further discussed [here](https://github.com/cockroachdb/cockroach/issues/43870) and will be addressed in an upcoming patch revision.

  {{site.data.alerts.callout_info}}
  Users without an Enterprise license can create an additional admin user using a temporary evaluation license, until an alternative is available. A user created this way will persist beyond the license expiry.
  {{site.data.alerts.end}}

- Some Admin UI screens currently display an error or a blank page when viewed by a non-admin user (e.g., Table Details). This is a known limitation mistakenly introduced by the changes described above. This situation is discussed further [here](https://github.com/cockroachdb/cockroach/issues/44033) and will be addressed in an upcoming patch revision. The list of UI pages affected includes but is not limited to:

  - Job details
  - Database details
  - Table details
  - Zone configurations

  {{site.data.alerts.callout_info}}
  Users can access these Admin UI screens using an admin user until a fix is available.
  {{site.data.alerts.end}}

The list of HTTP endpoints affected by the first change above includes:

| HTTP Endpoint | Description | Sensitive information revealed | Special (see below) |
|--------------------------------------------------------|-----------------------------------|----------------------------------------------------|---------------------|
| `/_admin/v1/data_distribution` | Database-table-node mapping | Database and table names | |
| `/_admin/v1/databases/{database}/tables/{table}/stats` | Table stats histograms | Stored table data via PK values | |
| `/_admin/v1/drain` | API to shut down a node | Can cause DoS on cluster | |
| `/_admin/v1/enqueue_range` | Force range rebalancing | Can cause DoS on cluster | |
| `/_admin/v1/events` | Event log | Usernames, stored object names, privilege mappings | |
| `/_admin/v1/nontablestats` | Non-table statistics | Stored table data via PK values | |
| `/_admin/v1/rangelog` | Range log | Stored table data via PK values | |
| `/_admin/v1/settings` | Cluster settings | Organization name | |
| `/_status/allocator/node/{node_id}` | Rebalance simulator | Can cause DoS on cluster | yes |
| `/_status/allocator/range/{range_id}` | Rebalance simulator | Can cause DoS on cluster | yes |
| `/_status/certificates/{node_id}` | Node and user certificates | Credentials | |
| `/_status/details/{node_id}` | Node details | Internal IP addresses | |
| `/_status/enginestats/{node_id}` | Storage statistics | Operational details | |
| `/_status/files/{node_id}` | Retrieve heap and goroutine dumps | Operational details | yes |
| `/_status/gossip/{node_id}` | Gossip details | Internal IP addresses | yes |
| `/_status/hotranges` | Ranges with active requests | Stored table data via PK values | |
| `/_status/local_sessions` | SQL sessions | Cleartext SQL queries | yes |
| `/_status/logfiles/{node_id}` | List of log files | Operational details | yes |
| `/_status/logfiles/{node_id}/{file}` | Server logs + entries | Many: names, application data, credentials, etc. | yes |
| `/_status/logs/{node_id}` | Log entries | Many: names, application data, credentials, etc. | yes |
| `/_status/profile/{node_id}` | Profiling data | Operational details | |
| `/_status/raft` | Raft details | Stored table data via PK values | |
| `/_status/range/{range_id}` | Range details | Stored table data via PK values | |
| `/_status/ranges/{node_id}` | Range details | Stored table data via PK values | |
| `/_status/sessions` | SQL sessions | Cleartext SQL queries | yes |
| `/_status/span` | Statistics per key span | Whether certain table rows exist | |
| `/_status/stacks/{node_id}` | Stack traces | Application data, stored table data | |
| `/_status/stores/{node_id}` | Store details | Operational details | |

{{site.data.alerts.callout_info}}
"Special" endpoints are subject to the [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) `server.remote_debugging.mode`. Unless the setting was customized, clients are only able to connect from the same machine as the node.
{{site.data.alerts.end}}


Backward-incompatible changes

- The combination of the [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) options `format=experimental_avro`, `envelope=key_only`, and `updated` is now rejected. This is because the use of `key_only` prevents any rows with updated fields from being emitted, which renders the `updated` option meaningless. [#41793][#41793] {% comment %}doc{% endcomment %}


General changes

- Client usernames can now be defined to start with a digit; in particular, all-digit usernames are now permitted. [#42464][#42464] {% comment %}doc{% endcomment %}
- Changed the default value of the `--max-sql-memory` limit from 128 MiB to 25% of system memory. [#42480][#42480] {% comment %}doc{% endcomment %}
- Nodes that have been terminated as the result of a failed consistency check now refuse to restart, making it more likely that the operator notices that there is a persistent issue in a timely manner. [#42401][#42401]
- CockroachDB now advertises some previously hidden cluster settings, such as `enterprise.license`, in reports such as the one generated by [`SHOW ALL CLUSTER SETTINGS`](https://www.cockroachlabs.com/docs/v20.1/show-cluster-setting). Only the names are listed; the values are still redacted. The values can be accessed and modified using the singular statements `SET`/[`SHOW CLUSTER SETTING`](https://www.cockroachlabs.com/docs/v20.1/show-cluster-setting). [#42520][#42520] {% comment %}doc{% endcomment %}
- Cluster settings for which tuning effects are known and documented can now be easily identified via the new `public` column in the output of [`SHOW ALL CLUSTER SETTINGS`](https://www.cockroachlabs.com/docs/v20.1/show-cluster-setting) and the virtual table `crdb_internal.cluster_settings` (see the example after this list). [#42520][#42520] {% comment %}doc{% endcomment %}
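
For example, assuming the `variable` and `public` column names exposed by `crdb_internal.cluster_settings`, the new column can be used to list only the documented settings:

~~~
-- List the names of public (documented, supported-for-tuning) settings.
SELECT variable FROM crdb_internal.cluster_settings WHERE public;
~~~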


Enterprise edition changes

- [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) now supports the restoration of empty databases. [#42005][#42005] {% comment %}doc{% endcomment %}


SQL language changes

- Filters of the form `x = D` (as in `SELECT * FROM t WHERE x = D AND f(x)`), where `D` is a constant and `x` is a column name, now cause `D` to be inlined for `x` in other filters. [#42151][#42151] {% comment %}doc{% endcomment %}
- The ID of the current session is now available via a `session_id` variable. Session IDs are also now shown in [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v20.1/show-queries) results. [#41622][#41622] {% comment %}doc{% endcomment %}
- The [`extract()` function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) now returns values of type float and includes fractional parts in the values for the elements 'second', 'millisecond', 'julian', and 'epoch'. This improves compatibility with PostgreSQL's `extract()`, which returns values of type double precision. [#42131][#42131] {% comment %}doc{% endcomment %}
- `pg_index.indoption` now correctly conveys the ascending/descending order and `NULLS FIRST`/`NULLS LAST` positioning of columns in an index. [#42343][#42343] {% comment %}doc{% endcomment %}
- Updated pgwire to send `ParameterStatus` messages when certain server parameters are changed for the given session over pgwire. [#42376][#42376]
- Added the ability to run the `avg()` function over intervals. [#42457][#42457] {% comment %}doc{% endcomment %}
- Selection target aliases can now be specified as `GROUP BY` columns. Note that the `FROM` columns take precedence over the aliases, which are only used if there are no columns with those names in the current scope. [#42447][#42447] {% comment %}doc{% endcomment %}
- Updated the error message hint when a client attempts to add a sequence-based column to an existing table (which is an unimplemented feature) to refer to GitHub issue [#42508](https://github.com/cockroachdb/cockroach/issues/42508). [#42509][#42509]
- CockroachDB now returns a more accurate error message, hint, and error code when an error is encountered while adding a new column. [#42509][#42509]
- `EXPLAIN (VERBOSE)` now indicates if auto-commit will be used for mutations. [#42500][#42500] {% comment %}doc{% endcomment %}
- Mutations in CTEs not at the top level are no longer allowed. This restriction is also implemented by Postgres. [#41033][#41033] {% comment %}doc{% endcomment %}
- `WITH` expressions are now hoisted to the top level in a query when possible. [#41033][#41033]
- Made the [`date_trunc()` function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) follow Postgres more closely by truncating to Monday when used with `week`. Previously, it truncated to Sunday. [#42622][#42622] {% comment %}doc{% endcomment %}
- Introduced precision support for [`TIMESTAMP` and `TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp), supporting precisions from 0 to 6 inclusive (see the example after this list). Previous versions of `TIMESTAMP` and `TIMESTAMPTZ` defaulted to 6 units of precision. Note that if you downgrade while having a precision set, you will have full precision (6) again, but if you re-upgrade you will find your precisions truncated again. [#42580][#42580] {% comment %}doc{% endcomment %}
- [`CREATE/ALTER SEQUENCE`](https://www.cockroachlabs.com/docs/v20.1/alter-sequence) now supports the `OWNED BY` syntax. [#40992][#40992] {% comment %}doc{% endcomment %}
- Changed [`extract()`](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) on a `TIMESTAMPTZ` to match the environment's location in which `extract()` is executed. Previously, it would always perform the operation as if it were in UTC. Furthermore, `timezone`, `timezone_hour`, and `timezone_minute` have been added as `extract()` elements. [#42632][#42632] {% comment %}doc{% endcomment %}
- [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) now supports a `WITH diff` option, which instructs it to include a `before` field in each publication. [#41793][#41793] {% comment %}doc{% endcomment %}
- The fields in the Avro format for [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) records have been re-ordered to allow for optimized parsing. This is a backwards-compatible change. [#41793][#41793] {% comment %}doc{% endcomment %}
- Users can now use the [`current_timestamp()` function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) with a given precision from 0-6, e.g., `SELECT current_timestamp(4)`. [#42633][#42633] {% comment %}doc{% endcomment %}
- When executed via the [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution), each buffering operator now uses at most `sql.distsql.temp_storage.workmem` memory (64 MiB by default). Previously, all buffering operators (such as hash and merge joins and sorts) could use arbitrary amounts of memory, which could lead to OOM panics. [#42468][#42468] {% comment %}doc{% endcomment %}
- Added a new statement `SHOW PUBLIC CLUSTER SETTINGS` (abbreviated as `SHOW CLUSTER SETTINGS`), which can be used to list only the public cluster settings that are supported for tuning and configuration. [#42520][#42520] {% comment %}doc{% endcomment %}
- Added the `kv.allocator.min_lease_transfer_interval` cluster setting, which allows the minimum interval between lease transfers initiated from each node to be configured. [#42724][#42724] {% comment %}doc{% endcomment %}
- Made string-to-interval conversion more strict. For example, strings such as `'{{'` and `'{1,2}'` were previously interpreted as the 00:00 interval. They are now rejected. [#42739][#42739]
- Some columns in `pg_type` (`typinput`, `typoutput`, `typreceive`, `typsend`, `typmodin`, `typmodout`, `typanalyze`) were incorrectly typed as `OID` instead of `REGPROC`. This issue has been resolved. [#42782][#42782]
- Users can now use the `cast()` function to cast strings to `int[]` or `decimal[]`, when appropriate. [#42704][#42704] {% comment %}doc{% endcomment %}
- `SET TIME ZONE` now accepts inputs beginning with `GMT` and `UTC`, such as `GMT+5` and `UTC-3:59`. This was previously unsupported. [#42781][#42781] {% comment %}doc{% endcomment %}
- It is now possible to reference tables by table descriptor ID in mutations using `INSERT`/`UPSERT`/`UPDATE`/`DELETE`, in a similar way to what is already allowed in `SELECT` statements. For example: `INSERT INTO [53 AS foo] VALUES (1, 2, 3)`. [#42683][#42683] {% comment %}doc{% endcomment %}
- Added support for precision for `TIME` types (e.g., `TIME(3)` will truncate to milliseconds). Previously this would raise syntax errors. [#42668][#42668] {% comment %}doc{% endcomment %}
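
A short sketch of the new precision support described above (the table and column names are arbitrary):

~~~
-- Precision from 0 to 6 fractional-second digits is now accepted.
CREATE TABLE events (
    happened TIMESTAMPTZ(3),  -- millisecond precision
    at_time  TIME(3)
);

-- current_timestamp() also accepts a precision argument.
SELECT current_timestamp(4);
~~~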


Command-line changes

- Users can now use `cockroach demo` to shut down and restart nodes. This is available in `cockroach demo` only as `demo `. This command is not available in other CLIs, e.g., `cockroach sql`. **This feature is experimental.** [#42230][#42230] {% comment %}doc{% endcomment %}
- The various CLI commands that use SQL now display errors using a new display format that emphasizes the 5-digit [`SQLSTATE`](https://wikipedia.org/wiki/SQLSTATE) code. Users are encouraged to combine these codes with the error message when seeking help or troubleshooting. [#42779][#42779] {% comment %}doc{% endcomment %}


Admin UI changes

- Fixed a typo that broke loading of the [Statements page](https://www.cockroachlabs.com/docs/v20.1/admin-ui-statements-page). [#42577][#42577]
- Certain web UI pages (like the list of databases or tables) now restrict their content to match the privileges of the logged-in user. [#42563][#42563] {% comment %}doc{% endcomment %}
- The event log now presents all cluster settings changes, unredacted, when an admin user uses the page. [#42563][#42563] {% comment %}doc{% endcomment %}
- Customization of the UI by users is now only properly saved if the user has write privilege on `system.ui` (i.e., is an admin user). Also, all authenticated users share the same customizations. This is a known limitation and should be lifted in a future version. [#42563][#42563] {% comment %}doc{% endcomment %}
- Access to table statistics is temporarily blocked for non-admin users until further notice, for security reasons. [#42563][#42563] {% comment %}doc{% endcomment %}
- Certain debug pages have been blocked from non-admin users for security reasons. [#42563][#42563] {% comment %}doc{% endcomment %}
- The cluster settings page now lists public and reserved settings in two separate tables. [#42520][#42520] {% comment %}doc{% endcomment %}
- Added a new range selector that supports custom time/date ranges. [#41327][#41327] {% comment %}doc{% endcomment %}


Bug fixes

- Reduced the likelihood of out-of-memory errors during histogram collection. [#42357][#42357]
- Fixed a bug which could result in ranges becoming unavailable while a single node is unreachable. The symptoms closely resemble those of a range that has lost a majority of replicas, i.e., the log files would likely include messages of the form "have been waiting [...] for proposing command", except that a majority is available, though not reflected in the surviving replicas' range status. [#42251][#42251]
- Fixed a Makefile bug that would prevent building CockroachDB from sources in rare circumstances. [#42363][#42363]
- Fixed an out-of-memory error that could occur when collecting statistics on tables with a string index column. [#42372][#42372]
- Changed the return type of `date + interval`, `date - interval`, and `interval + date` to `TIMESTAMP` instead of `TIMESTAMPTZ`, in line with Postgres. Furthermore, this change fixed a bug where these calculations would be incorrect if the current timezone is not UTC. [#42324][#42324]
- Fixed a bug when using `experimental_save_rejected` for CSV [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) that would cause the rejected row file to overwrite the original input file. [#42398][#42398]
- For tables with dropped indexes, the [`SHOW RANGE FOR ROW`](https://www.cockroachlabs.com/docs/v20.1/show-range-for-row) command sometimes returned incorrect results or an error. Fixed the underlying issue in the `crdb_internal.encode_key` built-in function. [#42456][#42456]
- Fixed a bug involving `UPDATE` cascades on a table that has `CHECK` constraints and is self-referencing or involved in a reference cycle. In this case, an `UPDATE` that cascades back into the original table was not validated with respect to the `CHECK` constraints. [#42231][#42231]
- The [movr workload](https://www.cockroachlabs.com/docs/v20.1/movr) now populates table columns with randomly generated data instead of nulls. [#42483][#42483]
- Fixed a bug where, if a sequence was used by two columns of the same table, the dependency relation with the first column could be lost. [#40900][#40900]
- Fixed a bug where memory was leaking when counting rows during backup. [#42529][#42529]
- Fixed a bug where, if one were to cast the same type into two or more different precisions/widths from a table in the same `SELECT` query, they would only get the first precision specified. For example, `SELECT a::decimal(10, 3), a::decimal(10, 1) FROM t` would return both results as `a::decimal(10, 3)`. [#42574][#42574]
- CockroachDB is now less likely to hang in an inconvenient/inoperative state if it attempts to access an external HTTP server that blocks or is overloaded. A possible symptom of the bug was a node failing to shut down upon `cockroach quit`. This bug had been present since at least version 2.0. [#42536][#42536]
- Stopped including tables that are being restored or imported as valid targets in backups and changefeeds. [#42606][#42606]
- Fixed a bug that would produce a spurious failure with the error message "incompatible `COALESCE` expressions" when adding or validating `MATCH FULL` foreign key constraints involving composite keys with columns of differing types. [#42528][#42528]
- When a custom `nullif` is provided during `IMPORT`, it is now always treated as a null value. [#42635][#42635]
- Changefeeds now emit backfill row updates for a dropped column when the table descriptor drops that column. [#42053][#42053]
- It is now possible to transfer range leases to lagging replicas. [#42379][#42379]
- Long-running transactions that attempt to `TRUNCATE` can now be pushed and will commit in cases where they previously could fail or retry forever. [#42650][#42650]
- Fixed multiple existing bugs: a panic on performing cascade updates on tables with multiple column families; a bug where a self-referential foreign key constraint with `SET DEFAULT` would not be maintained on a cascading update; and a bug where multiple self-referential foreign key constraints would cause all the rows in the referenced constraint columns to be set to `NULL` or a default value on a cascading update. [#42624][#42624]
- Fixed a case where CockroachDB incorrectly determined that a query (or part of a query) containing an `IS NULL` constraint on a unique index column returns at most one row, possibly ignoring a `LIMIT 1` clause. [#42760][#42760]
- Fixed a bug with incorrect handling of top-_K_ sort by the vectorized engine when _K_ is greater than 1024. [#42831][#42831]
- `ALTER INDEX IF EXISTS` no longer fails when using an unqualified index name that does not match any existing index. It is now a no-op. [#42797][#42797]
- Prevented an internal error in some cases when a `NULL` literal is passed to the `OVERLAPS` operator. [#42877][#42877]
- CockroachDB now prevents a number of panics from the SQL layer caused by an invalid range split. These would usually manifest with messages mentioning encoding errors (including "found null on not null column", but also possibly various others). [#42833][#42833]
- The result column names for the JSON functions `json{b}_array_elements`, `json{b}_array_elements_text`, `json{b}_each`, and `json{b}_each_text` have been fixed to be compatible with Postgres. [#41861][#41861]
- Fixed a bug where selecting columns by forcing an interleaved index would error instead of returning the correct results. [#42798][#42798]
- Fixed a bug where attempting to parse `0000-01-01 00:00` when involving `time` did not work, as `pgdate` does not understand `0000` as a year. [#42762][#42762]


Performance improvements

- Transactions are now able to refresh their read timestamp even after the partial success of a batch. [#35140][#35140]
- Some retryable errors are now avoided by declining to restart transactions on some false conflicts. [#42236][#42236]
- CockroachDB now detects the case when the right-hand side of an `ANY` expression is a `NULL` array (and determines that the expression is always false). [#42698][#42698]
- CockroachDB now generates better plans in many cases where the query has `LEFT`/`RIGHT JOIN`s and also has a `LIMIT`. [#42718][#42718]


Build changes

- Go version 1.12.10+ is now required to build CockroachDB successfully. [#42474][#42474] {% comment %}doc{% endcomment %}
- `make buildshort` is now able to produce valid CCL binaries with all enterprise features (minus the UI). [#42541][#42541] {% comment %}doc{% endcomment %}




Contributors

This release includes 195 merged PRs by 44 authors. We would like to thank the following contributors from the CockroachDB community:

- Adam Pantel (first-time contributor, CockroachDB team member)
- Ananthakrishnan (first-time contributor)
- Andrii Vorobiov (first-time contributor)
- George Papadrosou
- Jaewan Park
- Roga Pria Sembada
- Ryan Kuo (first-time contributor)
- Vlad
- georgebuckerfield (first-time contributor)



-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-In addition to various updates, enhancements, and bug fixes, this first v20.1 beta release includes the following major highlights:
-
-- **Online primary key changes**: You can now change a table’s primary key using the `ALTER TABLE ... ALTER PRIMARY KEY` statement. Changing a table’s primary key rewrites its primary and some secondary indexes behind-the-scenes and can take a while, but the table remains online with no interruption to data access. For now, this feature is considered experimental and is behind a [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). To try it out, run `SET experimental_enable_primary_key_changes = true`. The syntax is `ALTER TABLE table_name ALTER PRIMARY KEY USING COLUMNS (x, y)`.
-- **Full cluster restore**: You can now use CockroachDB's [Enterprise `RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) feature to restore a full cluster `BACKUP` to a new cluster, including all configuration and system information such as [user privileges](https://www.cockroachlabs.com/docs/v20.1/authorization#privileges), [zone configurations](https://www.cockroachlabs.com/docs/v20.1/configure-replication-zones), and [cluster settings](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). Restoring a cluster backup to an existing cluster is not supported.
-- **Encrypted backup files**: You can now use an encryption key to encrypt data in [Enterprise `BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) files, and to decrypt the data upon [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore).
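-
-As a minimal sketch of the primary key change workflow above (the `users` table and its columns are hypothetical):
-
- ~~~
- SET experimental_enable_primary_key_changes = true;
- ALTER TABLE users ALTER PRIMARY KEY USING COLUMNS (city, id);
- ~~~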
-
-
Backward-incompatible changes
-
-- [`cockroach init`](https://www.cockroachlabs.com/docs/v20.1/cockroach-init) now waits for server readiness and thus no longer fails when a mistaken server address is provided. [#43904][#43904] {% comment %}doc{% endcomment %}
-- The `cockroach user` CLI command has been removed. It was previously deprecated in CockroachDB 19.2. Note that a 19.2 client (supporting `cockroach user`) can still operate user accounts in a 20.1 server. [#43903][#43903] {% comment %}doc{% endcomment %}
-- CockroachDB now creates files without read permissions for the "others" group. Sites that automate file management (e.g., log collection) using multiple user accounts must now ensure that the CockroachDB server and the management tools running on the same system are part of a shared unix group. [#44043][#44043] {% comment %}doc{% endcomment %}
-- Previously, casting intervals to integers and floats assumed a year of 365 days. To match the `extract('epoch' from interval)` behavior in PostgreSQL and CockroachDB, which values a year at 365.25 days, these casts now use 365.25 days' worth of seconds per year. [#43923][#43923] {% comment %}doc{% endcomment %}
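-
-For example, since 365.25 days is 31,557,600 seconds:
-
- ~~~
- SELECT extract('epoch' from interval '1 year'); -- 31557600
- SELECT ('1 year'::interval)::int;               -- now 31557600 rather than 31536000
- ~~~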
-
-
Security updates
-
-- An admin user is now required to access statement details in the Admin UI and HTTP endpoint. Previously, any user could access these details, which could result in users accessing data that they didn't have privileges to see. [#44349][#44349] {% comment %}doc{% endcomment %}
-- CockroachDB now properly rejects control characters in the value of the `server.host_based_authentication.configuration` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). Previously, these characters were accepted and would silently result in unintended rule matching. Deployments careful to strip control characters from their HBA configurations are not affected. [#43811][#43811] {% comment %}doc{% endcomment %}
-- Connections using unix sockets are now subject to the HBA rules defined via the setting `server.host_based_authentication.configuration`, in a way compatible with PostgreSQL: incoming unix connections match `local` rules, whereas incoming TCP connections match `host` rules. The default HBA configuration used when the [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) is empty is now:
-
- ~~~
- host all root all cert
- host all all all cert-password
- local all all password
- ~~~
- [#43848][#43848]
-- Previously, once a user's password was set to enable password authentication, there was no way to revert this choice: the only way to disable password authentication was to either drop the user or add a specific per-user HBA rule. This has been fixed, and the PostgreSQL-compatible [`ALTER USER WITH PASSWORD NULL`](https://www.cockroachlabs.com/docs/v20.1/alter-user) statement can now be used to clear the user's password; see the example after this list. [#43892][#43892] {% comment %}doc{% endcomment %}
-- A CockroachDB node process ([`start`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start)/[`start-single-node`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start-single-node)) now configures its umask to create all its files without unix permission bits for "others", so that data, log, and other files do not become world-readable by default on systems that do not otherwise customize umask. The files produced by other `cockroach` commands (e.g., the [CLI commands](https://www.cockroachlabs.com/docs/v20.1/cockroach-commands)) do not force their umask. Note that after upgrading to this release, in sites where permissions matter, administrators should be careful to run `chmod -R o-rwx` in directories where files were created by a previous version. [#44043][#44043] {% comment %}doc{% endcomment %}
-- The new command `cockroach auth-session login` (reserved for administrators) creates authentication tokens with an arbitrary expiration date. Operators should be careful to monitor `system.web_sessions` and enforce policy-mandated expirations by either using SQL queries or the new command `cockroach auth-session logout`. [#43872][#43872] {% comment %}doc{% endcomment %}
-- The `root` user can now have a password, like any other member of the admin role. However, as in previous versions, the HBA configuration cannot be overridden to prevent `root` from logging in with a valid TLS client certificate. This special rule remains enforced in order to ensure that users cannot "lock themselves out" of administrating their cluster. [#43893][#43893] {% comment %}doc{% endcomment %}
-- The `root` user remains special with regard to [authentication](https://www.cockroachlabs.com/docs/v20.1/authentication) when some system ranges are unavailable. In that case, password authentication will fail because the password cannot be retrieved from a system table, subject to a timeout set to the minimum of 5 seconds and the configured value of `server.user_login.timeout`. However, certificate authentication remains available. [#43893][#43893] {% comment %}doc{% endcomment %}
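-
-A short sketch of the password workflow described above, using a hypothetical user:
-
- ~~~
- CREATE USER carl WITH PASSWORD 'hunter2';
- ALTER USER carl WITH PASSWORD NULL;  -- disables password authentication for carl
- ~~~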
-
-
General changes
-
-- `SHOW JOBS` and the Jobs page in the Admin UI now show the parameters used for connecting to external storage, with only the values of parameters classified as secrets redacted. [#44737][#44737] {% comment %}doc{% endcomment %}
-- It's now possible to disable job execution on a node in emergency cases. To do so, place a `DISABLE_STARTING_BACKGROUND_JOBS` file in the node's first store directory. [#44786][#44786] {% comment %}doc{% endcomment %}
-- A node no longer declares itself unready through the [`/health?ready=1`](https://www.cockroachlabs.com/docs/v20.1/monitoring-and-alerting#health-ready-1) endpoint while in the process of decommissioning. It continues to declare itself unready while draining. [#43889][#43889] {% comment %}doc{% endcomment %}
-- CockroachDB will now report a timeout error when a client attempts to connect via SQL or the Admin UI and some system ranges are unavailable. The previous behavior was to wait indefinitely. The timeout is configurable via the [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) `server.user_login.timeout` and is set to 10 seconds by default; the value "0" means "indefinitely" and can be used to restore the pre-v20.1 behavior (see the example after this list). This timeout does not apply to the `root` user, which is always able to log in on unavailable clusters. [#44022][#44022] {% comment %}doc{% endcomment %}
-- The `kv.allocator.range_rebalance_threshold` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings), which controls how far away from the mean a store's range count must be before it is considered for rebalancing, is now subject to a 2-replica minimum. If, for example, the mean number of replicas per store is 5.6 and the setting is 5%, the store will not be considered for rebalancing unless it has fewer than 3 or more than 8 replicas. Previously, the bounds would have been 5 and 6. [#44247][#44247] {% comment %}doc{% endcomment %}
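-
-For example, the login timeout described above can be tuned as follows (the values shown are illustrative):
-
- ~~~
- SET CLUSTER SETTING server.user_login.timeout = '30s';
- SET CLUSTER SETTING server.user_login.timeout = '0s';  -- wait indefinitely, as before v20.1
- ~~~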
-
-
Enterprise edition changes
-
-- Added the ability to restore a cluster backup to a new cluster, including all configuration and system information such as user privileges, zone configurations, and cluster settings. Restoring a cluster backup to an existing cluster is not supported. [#43828][#43828]
-- Added support for encrypting `BACKUP`/`RESTORE` files via the `encryption_passphrase` option. [#44177][#44177]
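-
-A minimal sketch of an encrypted cluster backup and restore; the destination URL and passphrase are placeholders:
-
- ~~~
- BACKUP TO 'nodelocal:///backups/secure' WITH encryption_passphrase = 'correct horse battery staple';
- RESTORE FROM 'nodelocal:///backups/secure' WITH encryption_passphrase = 'correct horse battery staple';
- ~~~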
-
-
SQL language changes
-
-- Foreign key checks for insertions performed by [`UPSERT`s](https://www.cockroachlabs.com/docs/v20.1/upsert) are now handled by the optimizer. [#43824][#43824] {% comment %}doc{% endcomment %}
-- Added a rough estimation of execution progress to [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v20.1/show-queries). [#42518][#42518] {% comment %}doc{% endcomment %}
-- Added `NOT NULL` columns as check constraints to `information_schema.table_constraints`, for PostgreSQL compatibility. [#44731][#44731] {% comment %}doc{% endcomment %}
-- Added support for temporary view creation, if temporary tables are enabled. Temporary views disappear at the end of a connection. Views that depend on temporary tables are automatically temporary. [#44729][#44729]
-- Added the `require_explicit_primary_keys` session variable and the `sql.defaults.require_explicit_primary_keys.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) to control whether CockroachDB should error out when tables are created without explicit primary keys. [#44702][#44702] {% comment %}doc{% endcomment %}
-- The `enable_primary_key_changes` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) has been renamed to `experimental_enable_primary_key_changes`. [#43818][#43818]
-- Primary key columns are no longer required to be in column family `0`. [#43742][#43742]
-- Primary key changes are now enabled on tables with multiple [column families](https://www.cockroachlabs.com/docs/v20.1/column-families). [#43821][#43821]
-- The primary key of a table can now be altered to one that is [interleaved](https://www.cockroachlabs.com/docs/v20.1/interleave-in-parent) in another table. [#44038][#44038]
-- Primary key changes can now be performed on interleaved children. [#44075][#44075]
-- Primary key changes are now enabled on tables that have foreign key relationships. [#43830][#43830]
-- `extract()` can now be called on an interval (e.g., `extract(day from interval '254 days')`; see the examples after this list). This follows the PostgreSQL implementation and deprecates `extract_duration`, which will be removed at a later date. [#43293][#43293] {% comment %}doc{% endcomment %}
-- CockroachDB previously did not support `AT TIME ZONE` parsing for anything other than precise location strings (e.g., `AT TIME ZONE 'Australia/Sydney'`). CockroachDB now supports parsing `AT TIME ZONE` with various other offset behaviors supported by `SET TIME ZONE` (e.g., `AT TIME ZONE '+3'`, `AT TIME ZONE 'GMT+4'`). [#43414][#43414] {% comment %}doc{% endcomment %}
-- CockroachDB now supports `SET TIME ZONE` with colons (e.g., `+4:00`). [#43414][#43414] {% comment %}doc{% endcomment %}
-- Previously, `SELECT interval '1-2 1' DAY TO HOUR` would fail. This is now permitted as per the SQL standard. [#43379][#43379] {% comment %}doc{% endcomment %}
-- Previously, spaces added to intervals with qualifiers (e.g., `SELECT interval ' 1 ' YEAR`) would be evaluated as seconds. The qualifier is now used as the multiplier. [#43379][#43379] {% comment %}doc{% endcomment %}
-- Previously, adding a decimal point to days (e.g., `SELECT interval '1.5 01:00:00'`) would return `1 day 01:00:00`, unlike PostgreSQL, which returns `1 day 13:00:00`. The behavior now matches PostgreSQL. [#43379][#43379] {% comment %}doc{% endcomment %}
-- Previously, using the `Y-M constant` format for intervals (e.g., `SELECT INTERVAL '1-2 3'`) would always resolve the constant component (3) as seconds, even for items such as `SELECT INTERVAL '1-2 3' DAY`. The behavior has been corrected and now matches PostgreSQL. [#43379][#43379] {% comment %}doc{% endcomment %}
-- Some tools generate SQL that includes the `fillfactor` storage parameter, e.g., `CREATE TABLE ... WITH (fillfactor=100)`. This syntax is now supported, but has no effect, since the parameter has no meaning in CockroachDB. [#43307][#43307]
-- [`SHOW RANGES`](https://www.cockroachlabs.com/docs/v20.1/show-ranges) now shows locality information consistent with the range descriptor when node ID and store ID do not match. [#43807][#43807] {% comment %}doc{% endcomment %}
-- Ranges are now considered under-replicated by the [`system.replication_stats` report](https://www.cockroachlabs.com/docs/v20.1/query-replication-reports#system-replication_stats) when one of the replicas is unresponsive (or the respective node is not running). [#43825][#43825] {% comment %}doc{% endcomment %}
-- [`CREATE USER`](https://www.cockroachlabs.com/docs/v20.1/create-user) and [`ALTER USER`](https://www.cockroachlabs.com/docs/v20.1/alter-user) now accept the parameter `[WITH] PASSWORD NULL` to indicate the user's password should be removed, thus preventing them from using password authentication. This is compatible with PostgreSQL. [#43892][#43892] {% comment %}doc{% endcomment %}
-- Previously, a panic could occur when a table had a default column and a constraint in the [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v20.1/create-table) statement. This is now fixed. [#43959][#43959]
-- Previously, [`DECIMAL`](https://www.cockroachlabs.com/docs/v20.1/decimal) types could not be sent over the network when the computation was performed by the [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). This has been fixed, and the vectorized engine now fully supports the `DECIMAL` type. [#43311][#43311] {% comment %}doc{% endcomment %}
-- Previously, a column could participate in at most one outbound foreign key at any given point in time (e.g., given `CREATE TABLE test(a int)`, having two foreign keys on column `a` was not allowed). This restriction has been removed. [#43417][#43417] {% comment %}doc{% endcomment %}
-- Invalid usages of `FOR UPDATE` [locking clauses](https://www.cockroachlabs.com/docs/v20.1/sql-grammar#for_locking_clause) are now rejected by the SQL optimizer. [#43887][#43887]
-- Added the `to_hex(string)` built-in function. [#44016][#44016] {% comment %}doc{% endcomment %}
-- Previously, `to_hex(-1)` would return `-1` instead of the negative hex representation (`FFFFFFFFFFFFFFFF`). This is now fixed. [#44016][#44016]
-- The new global default [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) `sql.defaults.temporary_tables.enabled` can be used to enable temporary tables. [#43816][#43816]
-- Added an optimization to scan over only one row when finding the `MIN`/`MAX` of a single aggregate group, as long as the correct index is present. [#43547][#43547]
-- [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v20.1/show-create) now also emits the `COMMENT` statements sufficient to populate the table's user-defined comments, if any, alongside the `CREATE` statement proper. [#43152][#43152] {% comment %}doc{% endcomment %}
-- More invalid usages of `FOR UPDATE` [locking clauses](https://www.cockroachlabs.com/docs/v20.1/sql-grammar#for_locking_clause) are now rejected by the SQL optimizer. [#44015][#44015]
-- Added the `timeofday` function supported by PostgreSQL, which returns the time on one of the nodes as a formatted string. [#44050][#44050] {% comment %}doc{% endcomment %}
-- Added `localtime`, which by default returns the current time as the [`TIME`](https://www.cockroachlabs.com/docs/v20.1/time) data type (as opposed to `current_time`, which returns the [`TIMETZ`](https://www.cockroachlabs.com/docs/v20.1/time) data type). [#44042][#44042] {% comment %}doc{% endcomment %}
-- Added `localtimestamp`, which by default returns the current timestamp as the [`TIMESTAMP`](https://www.cockroachlabs.com/docs/v20.1/timestamp) data type (as opposed to `current_timestamp`, which returns the [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp) data type). [#44042][#44042]
-- Added support for having `D:H:M`, `D:M:S.fff`, or `D:H:M:S.fff` for interval parsing if the first element is a decimal or empty (e.g., `:04:05` and `1.0:04:05` would be `04:05:00` and `1 day 04:05:00` respectively). [#43924][#43924] {% comment %}doc{% endcomment %}
-- Previously, floats were supported in `H:M:S` formats for interval parsing (e.g., `1.0:2.0:3.0`), which did not make sense. Floats are no longer allowed for the M field. [#43924][#43924] {% comment %}doc{% endcomment %}
-- Previously, CockroachDB would return an internal error when using [`SET tracing`](https://www.cockroachlabs.com/docs/v20.1/set-vars#set-tracing) with any type other than string. Now it will return a regular query error. Additionally, boolean arguments are now supported in `SET tracing`, and `true` is mapped to `on` mode of tracing whereas `false` is mapped to `off`. [#44260][#44260] {% comment %}doc{% endcomment %}
-- Indexes that reference, or are referenced by, a foreign key constraint can now be dropped if there is another suitable index for the constraint. [#43332][#43332] {% comment %}doc{% endcomment %}
-- Added a `log` [builtin](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) for any base (e.g., `log(2.0, 4.0)`). [#41848][#41848] {% comment %}doc{% endcomment %}
-- Non-admin users can now query [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v20.1/show-jobs) and `crdb_internal.jobs` and see their own jobs. [#44345][#44345] {% comment %}doc{% endcomment %}
-- [Vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) now supports `DISTINCT` on unordered input. [#42522][#42522] {% comment %}doc{% endcomment %}
-- `pg_catalog` access method information is now more accurate. Added inverted index to the access methods listed in `pg_am` and set `pg_class.relam` to zero for sequences and views, which is more consistent with PostgreSQL. [#43715][#43715]
-- An overload has been added to the `unnest` [builtin](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) function in order to support multiple array arguments. [#41557][#41557]
-- Duplicate labels are allowed when declaring a tuple, but a "column reference is ambiguous" error is now returned if a duplicate label is accessed (e.g., `SELECT ((1, '2') AS a, a);` is successful, but `SELECT (((1,2,3) AS a,b,a)).a;` returns an error). [#41557][#41557]
-- Telemetry information is now collected for uses of secondary indexes that use [column families](https://www.cockroachlabs.com/docs/v20.1/column-families). [#44506][#44506]
-- Telemetry information is now collected for uses of the [`SHOW RANGE ... FROM ROW`](https://www.cockroachlabs.com/docs/v20.1/show-range-for-row) command. [#44502][#44502]
-- CockroachDB now supports `AT TIME ZONE` and the `timezone` [builtin](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) for the `time` and `timetz` types. [#44099][#44099] {% comment %}doc{% endcomment %}
-- `AT TIME ZONE` now supports the POSIX standard. Offsets such as `UTC+3` and `+3` are interpreted to be timezones *west* of UTC instead of *east* of UTC (e.g., `America/New_York` is equivalent to `UTC+5` instead of `UTC-5`). [#44099][#44099]
-- CockroachDB supports `timezone(timestamp(tz), str)`, but PostgreSQL supports the inverse `timezone(str, timestamp(tz))`. Both are now supported, and the former will be deprecated at a later stage. CockroachDB now also supports `str AT TIME ZONE str`, removing the need for an explicit cast. [#44099][#44099] {% comment %}doc{% endcomment %}
-- The [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) now supports `bool_and`/`bool_or` [builtin](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) aggregation functions. [#44164][#44164] {% comment %}doc{% endcomment %}
-- Non-admin users granted the `ZONECONFIG` privilege can now create, edit, and delete zone configurations. [#43941][#43941] {% comment %}doc{% endcomment %}
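-
-Illustrating the `extract`-on-interval and `AT TIME ZONE` changes above (the values are arbitrary):
-
- ~~~
- SELECT extract(day from interval '254 days');              -- 254
- SELECT TIMESTAMP '2020-01-01 10:00' AT TIME ZONE 'GMT+4';
- SET TIME ZONE '+4:00';
- ~~~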
-
-
Command-line changes
-
-- When running [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) with multiple nodes, each node now takes up to 128MB for SQL memory and 64MB for cache by default. Previously, each node would take up to 25% of total memory, which could cause OOM problems. These defaults can be modified via the `--max-sql-memory` and `--cache` flags. [#44478][#44478] {% comment %}doc{% endcomment %}
-- Connections using unix sockets are now accepted even when the server is running in secure mode. Consult [`cockroach start --help`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start) for details about the `--socket` parameter. [#43848][#43848] {% comment %}doc{% endcomment %}
-- The [`cockroach init`](https://www.cockroachlabs.com/docs/v20.1/cockroach-init) command now waits until the node at the provided server address is ready to accept initialization, including waiting for network readiness. This makes initialization scripts easier to implement by removing the need for a retry loop, which was also operationally unsafe. [#43904][#43904] {% comment %}doc{% endcomment %}
-- The MovR dataset will now be split among all nodes in the demo cluster. [#43798][#43798] {% comment %}doc{% endcomment %}
-- [`cockroach demo --with-load`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) can now send queries round-robin to all nodes in the demo cluster. [#43474][#43474] {% comment %}doc{% endcomment %}
-- The SQLSmith workload now accepts an argument `error-sensitivity` which controls what types of errors the workload exits on. [#43925][#43925]
-- [`cockroach gen haproxy`](https://www.cockroachlabs.com/docs/v20.1/cockroach-gen) now excludes decommissioned nodes. [#43908][#43908] {% comment %}doc{% endcomment %}
-- The [`cockroach node decommission`](https://www.cockroachlabs.com/docs/v20.1/cockroach-node#node-decommission) and [`cockroach node recommission`](https://www.cockroachlabs.com/docs/v20.1/cockroach-node#node-recommission) commands now print a warning on standard error if any of the specified nodes is already decommissioned or recommissioned, respectively. [#43915][#43915] {% comment %}doc{% endcomment %}
-- The [`start`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start) and [`start-single-node`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start-single-node) commands no longer initiate a 1-minute hard shutdown countdown after a request to gracefully terminate. This means that graceful shutdowns are now free to take longer than one minute. It also means that deployments where a maximum shutdown time must be enforced must now use a service manager that is suitably configured to do so. [#44074][#44074] {% comment %}doc{% endcomment %}
-- The new `cockroach auth-session login`, `cockroach auth-session list`, and `cockroach auth-session logout` commands are now provided to facilitate the management of web sessions. The `auth-session login` command also produces an HTTP cookie that can be used by non-interactive HTTP-based database management tools. It can also generate such a cookie for the `root` user, who would not otherwise be able to obtain one using a web browser. [#43872][#43872] {% comment %}doc{% endcomment %}
-
-
Admin UI changes
-
-- Decommissioned node history is now viewable on a dedicated page. This reduces the amount of information on the [Cluster Overview](https://www.cockroachlabs.com/docs/v20.1/admin-ui-cluster-overview-page) page. [#42817][#42817] {% comment %}doc{% endcomment %}
-- The Execution Latency graph has been renamed to "KV Execution Latency". [#43290][#43290] {% comment %}doc{% endcomment %}
-- Redesigned the Cluster Overview page. [#43552][#43552] {% comment %}doc{% endcomment %}
-- We previously introduced a fix on the Admin UI to prevent non-admin users from executing queries. However, this inadvertently caused certain pages requiring table details not to display. This issue has now been resolved. [#44167][#44167]
-- Non-admin users can now use the [Jobs](https://www.cockroachlabs.com/docs/v20.1/admin-ui-jobs-page) page of the Admin UI to see their own jobs. [#44345][#44345] {% comment %}doc{% endcomment %}
-
-
Bug fixes
-
-- When running a query with the `LIKE` [operator](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) using custom `ESCAPE` symbols, patterns containing Unicode characters no longer result in an internal error. [#44633][#44633]
-- Fixed a server crash caused by some queries with outer joins and negative limits. [#44590][#44590]
-- When cleaning up [schema changes](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes), CockroachDB no longer repeatedly looks for non-existing jobs, which could cause high memory usage. [#44607][#44607]
-- Calling `NULLIF` with one null argument no longer results in an internal error. [#44718][#44718]
-- Fixed a "no indexes" internal error in some cases when GROUP BY is used on a virtual table. [#44692][#44692]
-- Fixed invalid query results in some cases involving stored columns with `NULL` values. [#44728][#44728]
-- Fixed invalid query results in some cases where part of a `WHERE` clause is incorrectly discarded. [#44668][#44668]
-- Fixed bugs around [`cockroach dump`](https://www.cockroachlabs.com/docs/v20.1/cockroach-dump) and [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import)/[`EXPORT`](https://www.cockroachlabs.com/docs/v20.1/export) where columns of arrays or collated strings could not be round-tripped between `cockroach` and the dump. [#44464][#44464]
-- `CASE` [operators](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) with an unknown `WHEN` type no longer return an error. [#44756][#44756]
-- Fixed a type checking error where `BETWEEN` would sometimes allow boundary expressions of a different type. [#44775][#44775]
-- Fixed a "cannot map variable" error in some rare cases involving joins. [#44788][#44788]
-- Fixed a bug causing lost update transaction anomalies. [#44507][#44507]
-- Fixed an occasional "concurrent map write" crash. [#44872][#44872]
-- Fixed a bug where [`DROP INDEX`](https://www.cockroachlabs.com/docs/v20.1/drop-index) jobs waiting for garbage collection might be deleted before the data was actually removed from disk. [#44831][#44831]
-- CockroachDB no longer returns an internal error when executing a `substring()` function with non-INT8 start and length arguments via the vectorized engine. [#44887][#44887]
-- Fixed incorrect deduplication of impure expressions (e.g., `gen_random_uuid`) in projections and default values. [#44890][#44890]
-- [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp) operations now correctly take the session time zone (set by `SET TIME ZONE`) into account. Previously, not doing so led to bugs involving daylight saving time in arithmetic. For example, with `America/Chicago`, evaluating `'2010-11-06 23:59:00-05'::timestamptz + '1 day'::interval` would return incorrect results, as it assumed a fixed offset of `-5`. Also, text conversion from `TIMESTAMPTZ` to `STRING` sometimes used the wrong time zone offset if the location of the session did not match the location when the `TIMESTAMPTZ` was parsed, and `to_json()` built-ins with `TIMESTAMPTZ` did not take the session time zone into consideration. [#44812][#44812]
-- Previously, when [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) was used with `vectorize=experimental_on`, CockroachDB could incorrectly report some values as NULL. This has now been fixed. [#43785][#43785]
-- When casting a string to bytes, octal escapes greater than `377` will now generate an error, rather than silently wrapping around. [#43806][#43806]
-- Fixed a bug where a running job could be shown in a pending state. [#43814][#43814]
-- On Linux machines, CockroachDB now respects the available memory limit dictated by the cgroup limits that apply to the `cockroach` process. [#43137][#43137]
-- Previously, CockroachDB would return incorrect results for some aggregate functions when used as [window functions](https://www.cockroachlabs.com/docs/v20.1/window-functions) with a non-default window frame. This is now fixed. Note that `MIN`, `MAX`, `SUM`, `AVG`, and "pure" window functions (i.e., non-aggregates) were not affected. [#39308][#39308]
-- Previously, CockroachDB could return an internal error when running a query with a CAST operation (`:::`) in some cases when using the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). This is now fixed. [#43857][#43857]
-- Previously, a query shutdown mechanism could fail to fully clean up the infrastructure when the query was executed via the [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) and the query plan contained wrapped row-by-row processors (in v19.2, this applies to lookup joins and index joins). This is now fixed. [#43579][#43579]
-- Fixed a bug introduced in v19.2 that allowed foreign keys to use a unique index on the referenced columns that indexed more columns than were included in the FK constraint, which could potentially violate uniqueness in the referenced columns themselves. [#43793][#43793]
-- [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) cleanup is now run exactly once. [#43933][#43933]
-- A benign error previously logged at the `ERROR` level is now logged at the `INFO` level behind a `verbosity(2)` flag. This error might have been observed as "context canceled: readerCtx in Inbox stream handler". [#44020][#44020]
-- A bug causing lost update transaction anomalies was fixed. [#42969][#42969]
-- Previously, an internal error could occur when a query with an aggregate function MIN or MAX was executed via the [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) when the input column was either INT2 or INT4 type. This is now fixed. [#43985][#43985]
-- [CDC](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) is no longer susceptible to a bug where, in the presence of a range merge, a resolved timestamp might be published before all events that precede it have been published. [#44035][#44035]
-- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip) now emits the `goroutine` file in the proper sub-directory when the corresponding call fails with an error. [#44064][#44064]
-- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip) is again able to operate correctly and continue to iterate over all nodes if one of the nodes does not deliver its goroutine dumps. It would previously prematurely and incorrectly stop with an incomplete dump; this was a regression introduced in v19.2. [#44064][#44064]
-- The file generated by [`cockroach gen haproxy`](https://www.cockroachlabs.com/docs/v20.1/cockroach-gen) no longer gets an executable bit. The executable bit was previously placed in error. [#44043][#44043]
-- Fixed an internal error of the form "x FK cols, only y cols in index" in some cases when inserting into a table with foreign key references. [#44031][#44031]
-- CockroachDB now ensures that internal cleanup after [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) is only run once. [#43960][#43960]
-- Converted a panic in `golang.org/x/text/language/tags.go`, triggered by collated strings, into an error. [#44103][#44103]
-- SQL mutation statements that target tables with no foreign key relationships now correctly read data as per the state of the database when the statement started execution. This is required for compatibility with PostgreSQL and to ensure deterministic behavior when certain operations are parallelized. Prior to this fix, a statement could incorrectly operate multiple times (i.e., the [Halloween Problem](https://wikipedia.org/wiki/Halloween_Problem)) on data that it was itself writing, and potentially never terminate. This fix is limited to tables without FK relationships, and to certain operations on tables with FK relationships; in other cases, the fix is not active and the bug is still present. A full fix will be provided in a later release. [#42862][#42862]
-- CockroachDB now properly supports using `--url` with query options (e.g., `application_name`) but without specifying `sslmode`. The default of `sslmode=disable` is assumed in that case. This applies to the [CLI commands](https://www.cockroachlabs.com/docs/v20.1/cockroach-commands) that use SQL, including (but not limited to) [`cockroach sql`](https://www.cockroachlabs.com/docs/v20.1/cockroach-sql), [`cockroach node`](https://www.cockroachlabs.com/docs/v20.1/cockroach-node), `cockroach auth-session`, and [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip). [#44113][#44113]
-- The GC process now paginates through the versions of a key, fixing OOM crashes that could occur when there were extremely large numbers of versions for a given key. [#43862][#43862]
-- Removed [statistics](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer#table-statistics) information from backup jobs' payload information to avoid excessive memory utilization when issuing commands such as [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v20.1/show-jobs). [#44180][#44180]
-- Previously, CockroachDB could crash in special circumstances when using the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) with the `vectorize=experimental_on` setting. This is now fixed. [#44144][#44144]
-- Fixed a planning bug related to `FULL` joins between single-row relations. [#44156][#44156]
-- Fixed "CopyFrom requires empty destination" internal error. [#44114][#44114]
-- Fixed a bug where multiple nodes attempted to populate the results for [`CREATE TABLE ... AS`](https://www.cockroachlabs.com/docs/v20.1/create-table-as), leading to duplicate rows. [#43840][#43840]
-- All admin users are now allowed to use [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) and [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import). [#44250][#44250]
-- `to_english(-2^63)` previously returned an error; it now returns the correct result. [#44251][#44251]
-- Fixed an internal error when mixed types are used with `BETWEEN`. [#44216][#44216]
-- Fixed an error that could occur in very specific scenarios involving mutations and foreign keys. [#44314][#44314]
-- Previously, CockroachDB would return an internal error when a query with a `CASE` operator that returns only `NULL` values was executed via the [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). This is now fixed. [#44346][#44346]
-- Fixed a bug when cascade deleting thousands of rows across interleaved tables. [#44159][#44159]
-- Fixed incorrect plans in very rare cases involving filters that aren't constant-folded in the optimizer but that can be evaluated statically when running a given query. [#44307][#44307]
-- Fixed an internal error that could happen in the planner when table statistics were collected manually using [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v20.1/create-statistics) for different columns at different times. [#44430][#44430]
-- Fixed "no output column equivalent to" and "column not in input" errors in some cases involving `DISTINCT ON` and `ORDER BY`. [#44543][#44543]
-- Fixed possibly incorrect query results in various corner cases, especially when [`SELECT DISTINCT`](https://www.cockroachlabs.com/docs/v20.1/select-clause) is used. [#44386][#44386]
-- Previously, CockroachDB would return an internal error when a `substring` function with a negative length was executed via the [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). This is now fixed (it now returns a regular query error). [#44627][#44627]
-
-
Performance improvements
-
-- Single-row reads from secondary indexes that store columns on tables with [column families](https://www.cockroachlabs.com/docs/v20.1/column-families) now read only the needed columns. [#43567][#43567]
-- CockroachDB now uses better execution plans in some cases where there is an ordering on an expression that can be constant-folded to a simple column reference. [#43724][#43724]
-- Histograms are now collected automatically for all boolean columns, resulting in better query plans in some cases. For tables that aren't being modified frequently, it might be necessary to run [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v20.1/create-statistics) manually to see the benefit. [#44151][#44151]
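-
-For rarely modified tables, a manual refresh might look like the following (the statistics name and table are hypothetical):
-
- ~~~
- CREATE STATISTICS manual_stats FROM users;
- ~~~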
-
-
Build changes
-
-- Building CockroachDB now requires Node.js version 12 or greater. [#44024][#44024] {% comment %}doc{% endcomment %}
-
-
Doc updates
-
-- Added a new [Technical Advisories](https://www.cockroachlabs.com/docs/advisories) section with information about major issues with CockroachDB that may impact security or stability in production environments. [#6492](https://github.com/cockroachdb/docs/pull/6492)
-- Added a tutorial on [streaming an Enterprise changefeed from CockroachCloud to Snowflake](https://www.cockroachlabs.com/docs/cockroachcloud/stream-changefeed-to-snowflake-aws). [#6317](https://github.com/cockroachdb/docs/pull/6317)
-- Documented the [`TIMETZ`](https://www.cockroachlabs.com/docs/v20.1/time) data type. [#6391](https://github.com/cockroachdb/docs/pull/6391)
-- Fixed the [JavaScript code sample for connecting to a CockroachCloud cluster](https://www.cockroachlabs.com/docs/cockroachcloud/connect-to-your-cluster). [#6393](https://github.com/cockroachdb/docs/pull/6393)
-- Clarified the behavior of default values when using [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v20.1/import-into). [#6396](https://github.com/cockroachdb/docs/pull/6396)
-- Clarified the behavior of [decommissioning](https://www.cockroachlabs.com/docs/v20.1/remove-nodes) in clusters of various sizes. [#6402](https://github.com/cockroachdb/docs/pull/6402)
-- Documented [`LATERAL` joins](https://www.cockroachlabs.com/docs/v20.1/joins#lateral-joins) and [subqueries](https://www.cockroachlabs.com/docs/v20.1/subqueries#lateral-subqueries). [#6425](https://github.com/cockroachdb/docs/pull/6425)
-- Improved the [Django "build an app" code sample](https://www.cockroachlabs.com/docs/v20.1/build-a-python-app-with-cockroachdb-django). [#6404](https://github.com/cockroachdb/docs/pull/6404), [#6412](https://github.com/cockroachdb/docs/pull/6412)
-- Updated [Change Data Capture examples](https://www.cockroachlabs.com/docs/v20.1/change-data-capture#create-a-changefeed-connected-to-kafka) to show more than one table in a changefeed. [#6511](https://github.com/cockroachdb/docs/pull/6511)
-
-
-
-
Contributors
-
-This release includes 420 merged PRs by 68 authors. We would like to thank the following contributors from the CockroachDB community:
-
-- Andrii Vorobiov
-- George Papadrosou
-- Jaewan Park
-- Jason Brown
-- Y.Horie (first-time contributor)
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-In addition to various updates, enhancements, and bug fixes, this v20.1 beta release includes the ability to **log slow SQL queries**. When you set the `sql.log.slow_query.latency_threshold` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings), each node of your cluster logs queries that exceed the specified service latency to a new file called `cockroach-sql-slow.log`.
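-
-For example (the threshold shown is illustrative):
-
- ~~~
- SET CLUSTER SETTING sql.log.slow_query.latency_threshold = '100ms';
- ~~~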
-
-
Security updates
-
-- Operators can now disable external HTTP access, as well as custom HTTP endpoints, when performing certain operations (`BACKUP`, `IMPORT`, etc.) by providing the `--external-io-disable-http` flag. This flag provides a lightweight way to disable external HTTP access in environments where running a full-fledged proxy server may not be feasible. If running a proxy server is acceptable, operators may instead start the `cockroach` binary with the `HTTP(S)_PROXY` environment variable specified. [#44900][#44900]
-
-
General changes
-
-- Added a slow query log facility to CockroachDB, configurable by setting the `sql.log.slow_query.latency_threshold` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). When used, each node of your cluster will record queries that exceed the specified service latency to a new file called `cockroach-sql-slow.log`. [#44816][#44816]
-- New clusters will have a larger default range size of 512 MB, which will result in fewer ranges for the same amount of data. [#45209][#45209]
-
-
Enterprise edition changes
-
-- Row counts in `BACKUP` and `RESTORE` now include rows in system tables. [#44965][#44965]
-
-
SQL language changes
-
-- Disallowed changing the primary key of a table in the same transaction as its `CREATE TABLE` statement. [#44815][#44815]
-- Introduced the ability to create views using `CREATE VIEW IF NOT EXISTS`, which does nothing if the view already exists. [#44913][#44913]
-- If temporary table creation is enabled, users now have the ability to create temporary sequences as well. [#44806][#44806]
-- Added built-in support for hash-sharded indexes with the new `USING HASH WITH BUCKET_COUNT = <n>` syntax for indexes (including the primary index of a table); see the sketch after this list. This feature allows users to easily relieve write hot spots caused by sequential insert patterns, at the cost of scan time for queries over the hashed dimension. [#42922][#42922]
-- Added support for changing a table's primary key to a hash-sharded index. [#44993][#44993]
-- Disallowed creating a hash-sharded index that is also interleaved. [#44996][#44996]
-- An `UPDATE` returning a serialization failure error (code `40001`) now leaves behind a lock, helping the transaction succeed if it retries. This prevents starvation of transactions where an `UPDATE` is prone to conflicts. [#44654][#44654]
-- Added a builtin function `getdatabaseencoding()`, which returns the current encoding name used by the database. [#45129][#45129]
-- The SQL:2008 syntax `OFFSET ROWS` and `FETCH FIRST ROWS ONLY` now accept parameter values. [#45112][#45112]
-- Disallowed primary key changes on tables that are currently undergoing a primary key change. [#44784][#44784]
-- Added support for the aggregate function `corr()`. [#44628][#44628]
-- `INSERT..ON CONFLICT` index column names can now be specified in any order, rather than only in the same order as the index. [#45280][#45280]
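-
-A sketch of the hash-sharded index syntax above, on a hypothetical `events` table with a sequential key (in this release the feature may additionally be gated behind an experimental session setting):
-
- ~~~
- CREATE INDEX ON events (ts) USING HASH WITH BUCKET_COUNT = 8;
- ~~~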
-
-
Command-line changes
-
-- Previously, `cockroach debug zip` would only print an informational message about a piece of data it was retrieving *after* the data was retrieved (or an error was observed). It now also prints a message beforehand, which enables better troubleshooting of hanging queries. [#44342][#44342]
-- `cockroach debug zip` now properly supports special characters in database and table names. [#44342][#44342]
-- `cockroach debug zip` will now apply [the `--timeout` parameter](https://www.cockroachlabs.com/docs/v20.1/cockroach-node) to the SQL queries it performs (there was no timeout previously, causing `cockroach debug zip` to potentially hang). [#44342][#44342]
-- `cockroach debug zip` is now able to tolerate more forms of cluster unavailability. Nonetheless, in case system ranges are unavailable, it is recommended to run `cockroach debug zip` against each node address in turn to maximize the amount of useful data collected. [#44342][#44342]
-- `cockroach debug zip` now includes secondary log files in the main log directory, for example the RocksDB logs. Log files in alternate log directories (e.g., `--sql-audit-dir`, if different from the main log directory) are not included. [#45200][#45200]
-
-
Admin UI changes
-
-- Changed Decommissioned Node History view to accommodate the case when there are no decommissioned nodes. [#44205][#44205]
-- Changed styling of the Cluster Overview view. [#44212][#44212]
-- The `/_status/registry/{node_id}` endpoint now displays status information about the jobs running on the node with the given `node_id`. [#45030][#45030]
-- The "Log file list" endpoint now includes secondary log files in the main log directory, for example the RocksDB logs. Log files in alternate log directories (e.g., `--sql-audit-dir`, if different from the main log directory) are not included. [#45200][#45200]
-
-
Bug fixes
-
-- Fixed a bug where CockroachDB could return an internal error on queries that return `INT` columns when the default integer size has been changed. [#44930][#44930]
-- Fixed a bug where CockroachDB could crash when running `EXPLAIN (VEC)` in some edge cases. Now, an internal error is returned instead. [#44931][#44931]
-- Fixed a bug where CockroachDB would return an internal error when the merge join operation was performed via the vectorized execution engine in a case when two sides of the join had comparable but different types in the equality columns (for example, `INT2` on the left and `INT4` on the right). [#44942][#44942]
-- Fixed internal query errors in some cases involving negative limits. [#45009][#45009]
-- Fixed a bug where the distinct operation in the row execution engine would fail to properly account for its memory usage, potentially leading to OOMs on large tables. [#45254][#45254]
-- Avro byte datums are now correctly handled when converting them to the expected string column types (such as `VARCHAR`, `CHAR`, etc.). [#45242][#45242]
-- Fixed a potential error when loading the MovR dataset with a large number of rows in the promo codes column. [#45035][#45035]
-
-
Performance improvements
-
-- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer) now generates faster execution plans in some cases that involve `IN` / `NOT IN` with an empty tuple (or `= ANY` with an empty array); see the example below. [#45170][#45170]
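-
-For instance, a filter of the following shape (table and column hypothetical) can now be simplified away during planning:
-
- ~~~
- SELECT * FROM t WHERE x = ANY (ARRAY[]::INT[]);
- ~~~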
-
-
Doc updates
-
-- Added a [tutorial for developing and deploying a multi-region web application](https://www.cockroachlabs.com/docs/v20.1/multi-region-overview) with Flask, SQLAlchemy, CockroachCloud, and Google Cloud Platform. [#5732][#5732]
-- Added a [Developer Guide](https://www.cockroachlabs.com/docs/v20.1/developer-guide-overview) that shows how to do common application development tasks in several languages: Go, Java, and Python. [#6362][#6362]
-- Added [information about how to access the Admin UI on secure clusters](https://www.cockroachlabs.com/docs/v20.1/admin-ui-overview). [#6640][#6640]
-- Overhauled the documentation on [authorization](https://www.cockroachlabs.com/docs/v20.1/authorization), [roles](https://www.cockroachlabs.com/docs/v20.1/create-role), and [grants](https://www.cockroachlabs.com/docs/v20.1/grant). [#6332][#6332]
-- Added docs for [troubleshooting node liveness](https://www.cockroachlabs.com/docs/v20.1/cluster-setup-troubleshooting). [#6322][#6322]
-- Added docs for online primary key changes with [`ALTER TABLE ... ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v20.1/alter-table). [#6513][#6513]
-- Added a tutorial for using PonyORM with CockroachDB. [#6531][#6531]
-- Added a [tutorial for using the jOOQ ORM with CockroachDB](https://www.cockroachlabs.com/docs/v20.1/build-a-java-app-with-cockroachdb-jooq). [#6684][#6684]
-
-
-
-
Contributors
-
-This release includes 122 merged PRs by 33 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Andrii Vorobiov
-- Artem Barger (first-time contributor)
-- Jaewan Park
-- abhishek20123g (first-time contributor)
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-In addition to various updates, enhancements, and bug fixes, this beta release includes the following major highlights:
-
-- **SELECT FOR UPDATE**: CockroachDB now supports [`SELECT FOR UPDATE`](https://www.cockroachlabs.com/docs/v20.1/select-for-update) for ordering transactions. Use `SELECT FOR UPDATE` to lock the rows returned by a [selection query](https://www.cockroachlabs.com/docs/v20.1/selection-queries), to control concurrent access to one or more rows of a table.
-- **SQL savepoints**: CockroachDB now fully supports SQL [savepoints](https://www.cockroachlabs.com/docs/v20.1/savepoint). New syntax for savepoints includes `SAVEPOINT <name>`, `RELEASE SAVEPOINT <name>`, and `ROLLBACK TO SAVEPOINT <name>`. To inspect the current stack of active savepoints, use `SHOW SAVEPOINT STATUS`. A combined example with `SELECT FOR UPDATE` follows this list.
-- **Hash-sharded indexes**: CockroachDB now supports new syntax for defining [hash-sharded indexes](https://www.cockroachlabs.com/blog/hash-sharded-indexes-unlock-linear-scaling-for-sequential-workloads/). Hash-sharded indexes improve write performance to indexes on sequential keys. To define a hash-sharded index, use `CREATE INDEX ... USING HASH WITH BUCKET_COUNT = <n>`.
-- **RBAC now under BSL**: All [role-based access control (RBAC) features](https://www.cockroachlabs.com/docs/v20.1/authorization#roles) (`CREATE ROLE`, `ALTER ROLE`, `DROP ROLE`, `GRANT ROLE`, `REVOKE ROLE`) are now [BSL features](https://www.cockroachlabs.com/docs/v20.1/licensing-faqs) and available to non-enterprise users.
-- **Improved vectorized execution**: [Vectorized execution](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) now supports [hash joins](https://www.cockroachlabs.com/docs/v20.1/joins#hash-joins), [merge joins](https://www.cockroachlabs.com/docs/v20.1/joins#merge-joins), and most [window functions](https://www.cockroachlabs.com/docs/v20.1/window-functions). By default, the `vectorized` [session variable](https://www.cockroachlabs.com/docs/v20.1/set-vars) is set to `auto`, which uses vectorized execution for all queries except those including unordered [`DISTINCT`](https://www.cockroachlabs.com/docs/v20.1/select-clause#eliminate-duplicate-rows) clauses or calls to the `percent_rank` or `cume_dist` [window functions](https://www.cockroachlabs.com/docs/v20.1/window-functions). To turn vectorized execution on for all operations, set `vectorized` to `on`.
-- **Statement tracing in the Admin UI**: Statement diagnostic information is now available in the Admin UI. When viewing a statement fingerprint in the Admin UI, you can now trigger a trace on the next query execution matching that statement fingerprint.
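-
-A combined sketch of the first two highlights, using a hypothetical `kv` table:
-
- ~~~
- BEGIN;
- SELECT * FROM kv WHERE k = 1 FOR UPDATE;  -- lock the row against concurrent writers
- SAVEPOINT before_update;
- UPDATE kv SET v = v + 1 WHERE k = 1;
- ROLLBACK TO SAVEPOINT before_update;      -- undo the update without aborting the transaction
- COMMIT;
- ~~~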
-
-
Backward-incompatible changes
-
-- The [`GRANT`](https://www.cockroachlabs.com/docs/v20.1/grant) and [`REVOKE`](https://www.cockroachlabs.com/docs/v20.1/revoke) statements now require that the requesting user already have the target privileges themselves. For example, `GRANT SELECT ON t TO foo` requires that the requesting user already have the `SELECT` privilege on `t`. [#45697][#45697] {% comment %}doc{% endcomment %}
-- During the upgrade process from 19.2 to 20.1, almost all schema changes will now be disallowed on 20.1 nodes until the upgrade has been finalized, as part of ensuring consistency while the cluster undergoes a significant, backward-incompatible change in how schema changes are executed. Attempting these schema changes will return an error to the client. [#45990][#45990]
-- To ensure consistency for [schema changes](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes) throughout the upgrade process to 20.1, schema changes initiated in 19.2 now cannot be adopted, paused, or resumed from 20.1 nodes until an internal migration is run. [#46214][#46214]
-
-
Security updates
-
-
Identity management changes
-
-- All [role-based access control (RBAC) features](https://www.cockroachlabs.com/docs/v20.1/authorization#roles) (`CREATE ROLE`, `ALTER ROLE`, `DROP ROLE`, `GRANT ROLE`, `REVOKE ROLE`) are now available to non-enterprise users. All RBAC features are now [BSL features](https://www.cockroachlabs.com/docs/v20.1/licensing-faqs). [#46042][#46042]
-- [`USER`s](https://www.cockroachlabs.com/docs/v20.1/create-user) and [`ROLE`s](https://www.cockroachlabs.com/docs/v20.1/create-role) are now the same (which is PostgreSQL behavior). This means `CREATE`/`ALTER`/`DROP` `ROLE`/`USER` can be used interchangeably. [#44968][#44968] {% comment %}doc{% endcomment %}
-- The validation rule for principal names was extended to support periods and thus allow domain name-like principals. For reference, the validation rule is currently the regular expression `^[\p{Ll}0-9_][---\p{Ll}0-9_.]*$`, and names are limited to a size of 63 UTF-8 bytes (larger usernames are rejected with an error); for comparison, PostgreSQL allows many more characters and truncates at 63 characters silently. [#45575][#45575] {% comment %}doc{% endcomment %}
-- [Usernames](https://www.cockroachlabs.com/docs/v20.1/create-user) can now contain periods, for compatibility with certificate managers that require domain names to be used as usernames. [#45575][#45575] {% comment %}doc{% endcomment %}
-- User and role principals can now be prevented from logging in using the `NOLOGIN` attribute, which can be set using the [`CREATE USER/ROLE`](https://www.cockroachlabs.com/docs/v20.1/create-user) or [`ALTER USER/ROLE`](https://www.cockroachlabs.com/docs/v20.1/alter-user) statements; see the sketch after this list. When using the [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v20.1/create-role) syntax, `NOLOGIN` is enabled by default, whereas when using [`CREATE USER`](https://www.cockroachlabs.com/docs/v20.1/create-user), it is not. [#45541][#45541] {% comment %}doc{% endcomment %}
-- The `CREATEROLE` option can now be granted to users/roles using [`CREATE USER/ROLE`](https://www.cockroachlabs.com/docs/v20.1/create-user) or [`ALTER USER/ROLE`](https://www.cockroachlabs.com/docs/v20.1/alter-user). This delegates the permission to create additional roles. [#44232][#44232] {% comment %}doc{% endcomment %}
-- Any [role](https://www.cockroachlabs.com/docs/v20.1/authorization#roles) created prior to v20.1 will be able to log into clusters started with `--insecure`, unless/until it is given the `NOLOGIN` attribute. For secure clusters, roles cannot log in by virtue of having neither a password nor a client certificate. [#45541][#45541] {% comment %}doc{% endcomment %}
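-
-A sketch of the role options above (all names are hypothetical):
-
- ~~~
- CREATE ROLE report_readers;              -- NOLOGIN by default
- CREATE USER carl WITH CREATEROLE;        -- carl may create further roles
- ALTER ROLE report_readers WITH NOLOGIN;  -- state the default explicitly
- ~~~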
-
-
Authentication changes
-
-- CockroachDB now offers the ability to log SQL client connection events (connection established and connection terminated) and SQL client authentication events (authentication method selection, authentication method application, authentication method result, and session termination) to a distinct `cockroach-auth.log` file in each node's main log directory. To enable SQL client connection logging, set the `server.auth_log.sql_connections.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). To enable SQL client authentication event logging, set the new `server.auth_log.sql_sessions.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). Note that this feature is experimental; as such, the interface and output are subject to change. [#45193][#45193] {% comment %}doc{% endcomment %}
-- The password field (used exclusively for password-based authentication) can now be configured to have an expiration date using the `VALID UNTIL` attribute, which can be set with [`ALTER USER/ROLE`](https://www.cockroachlabs.com/docs/v20.1/alter-user); see the example after this list. Note that the attribute sets an expiration date for the password, not the user account. This is consistent with PostgreSQL. [#45541][#45541] {% comment %}doc{% endcomment %}
-- Client and node certificates are now allowed to specify the principal in either the SubjectCommonName field or the SubjectAlternateNames field. Previously, the principal could only be specified in the SubjectCommonName field. This facilitates the use of Amazon Certificate Manager (ACM) and other Cloud-based certificate management tools. [#45819][#45819]
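-
-For example, a password expiration might be configured as follows (the user and date are hypothetical):
-
- ~~~
- ALTER USER carl VALID UNTIL '2021-01-01';
- ~~~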
-
-
Authorization changes
-
-- Admin users can now [grant](https://www.cockroachlabs.com/docs/v20.1/grant) `ZONECONFIG` to non-admin users on specific SQL objects (databases/tables). When set, the user is authorized to modify that object's [zone configuration](https://www.cockroachlabs.com/docs/v20.1/configure-replication-zones) and decide data placement on specific nodes or groups thereof; see the example after this list. [#45201][#45201] {% comment %}doc{% endcomment %}
-- The [`GRANT`](https://www.cockroachlabs.com/docs/v20.1/grant) statement has been changed to be more like a capability-based system: it can only propagate privileges that the requesting user already has. Previously, it could be used to grant any privilege bit to any user, even to the requesting user. Note that the pseudo-privilege `ALL` includes `GRANT`, so `GRANT ALL`, in effect, preserves the previous behavior: it grants `GRANT` and every other privilege, so all privileges can be re-granted transitively. [#45697][#45697] {% comment %}doc{% endcomment %}
-- It is now possible for operators to disable the use of implicit credentials when accessing external cloud storage services for various bulk operations (e.g., [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import), etc.). The use of implicit credentials can be disabled by using the `--external-io-disable-implicit-credentials` flag. [#45969][#45969] {% comment %}doc{% endcomment %}
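-
-For instance (the user and table are hypothetical):
-
- ~~~
- GRANT ZONECONFIG ON TABLE accounts TO maxroach;
- ~~~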
-
-
Security bug fixes
-
-- All users can now *view* any comments on any object (bypassing other privileges), but modifying comments requires write privilege on the target object. Previously, any user could modify any database/table/view/index comment via direct SQL updates to `system.comments`. This was unintended and a form of privilege escalation, and is now prevented. The privileges required for the [`COMMENT`](https://www.cockroachlabs.com/docs/v20.1/comment-on) statement, `pg_description`, `col_description()`, `obj_description()`, and `shobj_description()` are operating as in PostgreSQL and are unaffected by this change. [#45712][#45712] {% comment %}doc{% endcomment %}
-- The `--external-io-dir=disabled` setting now also applies to `nodelocal upload` requests. [#45858][#45858] {% comment %}doc{% endcomment %}
-- The non-authenticated `/health` HTTP endpoint was previously exposing the private IP address of the node, which can be privileged information in some deployments. This has been corrected. Deployments using automation to retrieve a node's build and address details should use `/_status/details/local` instead, with a valid admin authentication cookie. [#45119][#45119]
-
-
Enterprise edition changes
-
-- [CDC](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) to cloud-storage sinks now supports optional `gzip` compression. [#45326][#45326] {% comment %}doc{% endcomment %}
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) can be re-run with the same destination path to automatically append an incremental backup to that path. [#45255][#45255] {% comment %}doc{% endcomment %}
-- [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) now allows using `AS OF SYSTEM TIME` to pick a target backup from a larger list of incremental backups. [#45368][#45368] {% comment %}doc{% endcomment %}
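-
- Together with the previous note, a sketch of the append-then-restore flow (paths and timestamps are illustrative):
-
- ~~~ sql
- BACKUP DATABASE db1 TO 'nodelocal://1/backups/db1';  -- first run: full backup
- BACKUP DATABASE db1 TO 'nodelocal://1/backups/db1';  -- later runs: appended incrementals
- RESTORE DATABASE db1 FROM 'nodelocal://1/backups/db1' AS OF SYSTEM TIME '2020-04-15 17:00:00';
- ~~~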
-- [Changefeeds](https://www.cockroachlabs.com/docs/v20.1/create-changefeed) now have new options to control the types of schema change events the changefeed should respond to (`schema_change_events`), and the behavior to take when such an event occurs (`schema_change_policy`). This functionality allows users to halt changefeeds upon schema changes or skip over the logical backfill that is performed by default. [#45652][#45652] {% comment %}doc{% endcomment %}
-- Two new [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v20.1/create-changefeed) options, `initial_scan` and `no_initial_scan`, have been added. By default, a changefeed performs an initial scan only if no cursor is specified (indicating the desire to start the `CHANGEFEED` from `now()`). The new options allow clients to override this default behavior and create changefeeds from the present without an initial scan, or from a point in time with one. You cannot specify both options simultaneously. [#45663][#45663] {% comment %}doc{% endcomment %}
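-
- A minimal sketch of these changefeed options (the table name and sink URI are illustrative):
-
- ~~~ sql
- CREATE CHANGEFEED FOR TABLE t INTO 'kafka://broker:9092'
-     WITH schema_change_events = 'column_changes', schema_change_policy = 'stop';
- CREATE CHANGEFEED FOR TABLE t INTO 'kafka://broker:9092'
-     WITH no_initial_scan;
- ~~~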
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) will no longer fail if the GC window for a table is exceeded while backing the table up. [#45859][#45859] {% comment %}doc{% endcomment %}
-- Incremental [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) can quickly skip unchanged data, making frequent incremental backups 10-100x faster, depending on data size and frequency. [#46108][#46108]
-
-
Storage changes
-
-- Improved the ability of garbage collection to process ranges exhibiting abnormally large numbers of transaction records and/or abort span entries. [#45444][#45444]
-- Improved a debug message that is printed when a range is unavailable (i.e., unable to accept writes). [#45580][#45580]
-- The timing of garbage collection for historical data is defined by the `gc.ttlseconds` variable in the applicable [zone configuration](https://www.cockroachlabs.com/docs/v20.1/configure-replication-zones#replication-zone-variables). However, in practice, data is not garbage collected immediately after the TTL passes. The new `kv.gc_ttl.strict_enforcement` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) ensures that `AS OF SYSTEM TIME` queries targeting timestamps older than the TTL return an error even if GC has not yet happened. [#45826][#45826]
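-
- For example, assuming the setting name as given above, a historical read older than the TTL then fails even before GC has run:
-
- ~~~ sql
- SET CLUSTER SETTING kv.gc_ttl.strict_enforcement = true;
- SELECT * FROM t AS OF SYSTEM TIME '-48h';  -- errors if 48h exceeds the zone's gc.ttlseconds
- ~~~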
-- Fixed a bug in range metrics collection that failed to identify ranges that had lost quorum, causing them not to be reported via the "unavailable ranges" metric. [#45253][#45253]
-- Range garbage collection will now trigger based on a large abort span, adding defense-in-depth against ranges growing large (and eventually unstable). [#45573][#45573]
-- Fixed a bug that could cause requests to a quiesced range to hang in the KV [replication layer](https://www.cockroachlabs.com/docs/v20.1/architecture/replication-layer). This bug would cause the message "have been waiting ... for proposing" to appear, even though no loss of quorum occurred. [#46045][#46045]
-
-
SQL changes
-
-
SQL language additions
-
-- CockroachDB now supports string and byte array literals using the dollar-quoted notation, as documented [here](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING). [#44130][#44130] {% comment %}doc{% endcomment %}
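-
- For example, dollar quoting avoids escaping embedded single quotes:
-
- ~~~ sql
- SELECT $$It's easy to embed 'quotes' here$$;
- SELECT $tag$dollar signs ($$) can be nested with tags$tag$;
- ~~~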
-- CockroachDB now supports expanding all columns of a tuple using the `.*` notation, for example: `SELECT (t).* FROM (SELECT (1,'b',2.3) AS t)`. This is a CockroachDB-specific extension. [#45609][#45609] {% comment %}doc{% endcomment %}
-- CockroachDB now supports accessing the Nth column in a column with tuple type using the syntax `(...).@N`, for example: `SELECT (t).@2 FROM (SELECT (1,'b',2.3) AS t)`. This is a CockroachDB-specific extension. [#45609][#45609] {% comment %}doc{% endcomment %}
-- Duplicate rows in the input to an [`INSERT..ON CONFLICT DO NOTHING`](https://www.cockroachlabs.com/docs/v20.1/insert) statement will now be ignored rather than triggering an error. [#45443][#45443] {% comment %}doc{% endcomment %}
-- SQL [savepoints](https://www.cockroachlabs.com/docs/v20.1/savepoint) are now supported: `SAVEPOINT`, `RELEASE SAVEPOINT`, and `ROLLBACK TO SAVEPOINT` now work, as sketched below. `SHOW SAVEPOINT STATUS` can be used to inspect the current stack of active savepoints. [#45566][#45566]
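-
- A minimal sketch of regular savepoint usage (the table `kv` is hypothetical):
-
- ~~~ sql
- BEGIN;
- SAVEPOINT foo;
- INSERT INTO kv VALUES (1, 'one');
- ROLLBACK TO SAVEPOINT foo;  -- undoes only the INSERT
- RELEASE SAVEPOINT foo;
- COMMIT;
- ~~~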
-- CockroachDB still considers the name `cockroach_restart` special in [`SAVEPOINT`](https://www.cockroachlabs.com/docs/v20.1/savepoint)s. A savepoint defined with the name `cockroach_restart` is a "restart savepoint" and has different semantics than standard savepoints:
- 1. Restart savepoints must be opened immediately when the transaction starts. Opening a restart savepoint after other statements have been executed is not allowed. In contrast, standard savepoints can be opened after other statements have already been executed.
- 1. After a successful `RELEASE`, a restart savepoint does not allow further use of the transaction. `COMMIT` must immediately follow the `RELEASE`.
- 1. Restart savepoints cannot be nested. Issuing `SAVEPOINT cockroach_restart` two times in a row only creates a single savepoint marker. This can be seen with `SHOW SAVEPOINT STATUS`. Issuing `SAVEPOINT cockroach_restart` after `ROLLBACK TO SAVEPOINT cockroach_restart` reuses the marker instead of creating a new one. In contrast, two `SAVEPOINT` statements with a standard savepoint name, or a `SAVEPOINT` statement immediately after a `ROLLBACK`, create two distinct savepoint markers.
-
- **Note:** The [session variable](https://www.cockroachlabs.com/docs/v20.1/set-vars) `force_savepoint_restart` still works and causes every savepoint name to become equivalent to `cockroach_restart` with the special semantics described above. [#46194][#46194]
-- [`SELECT FOR UPDATE`](https://www.cockroachlabs.com/docs/v20.1/select-for-update) now hooks into a new leaseholder-only locking mechanism. This allows the feature to be used to improve the performance of [transactions](https://www.cockroachlabs.com/docs/v20.1/transactions) that read, modify, and write to contended rows. Similarly, [`UPDATE`](https://www.cockroachlabs.com/docs/v20.1/update) statements now use this new mechanism by default, meaning that their performance under contention is improved. This is only enabled for `UPDATE` statements that can push their filter all the way into their key-value scan. To determine whether an `UPDATE` statement is implicitly using `SELECT FOR UPDATE` locking, look for a `locking strength` field in the `EXPLAIN` output for the statement, as shown below. [#45701][#45701] {% comment %}doc{% endcomment %}
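-
- For example, to check whether this applies to a given statement (the table `kv` is hypothetical):
-
- ~~~ sql
- EXPLAIN UPDATE kv SET v = v + 1 WHERE k = 1;
- -- a "locking strength" field in the output indicates implicit SELECT FOR UPDATE locking
- ~~~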
-- The statement `CREATE SCHEMA IF NOT EXISTS` is now accepted, and ignored, if it targets one of the pre-defined schemas (`public`, `pg_temp`, `pg_catalog`, etc.). [#42703][#42703]
-- Added the `every` [aggregate function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators#aggregate-functions). [#46059][#46059]
-- The `ceil` and `floor` [built-in functions](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) now accept integer inputs. [#46166][#46166]
-
-
Query planning changes
-
-- CockroachDB previously allowed [`TIMESTAMP`/`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp) and [`TIME`/`TIMETZ`](https://www.cockroachlabs.com/docs/v20.1/time) values to be converted to `TIMESTAMP(0)`/`TIMESTAMPTZ(0)`, but this did not actually change the precision of the value. CockroachDB now prohibits converting any precision of `TIMESTAMP`/`TIMESTAMPTZ`/`TIME`/`TIMETZ` to a lower-precision value. [#45314][#45314] {% comment %}doc{% endcomment %}
-- [Operators](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) on two [arrays](https://www.cockroachlabs.com/docs/v20.1/array) with different element types now fail at type-check time instead of evaluation time. [#45260][#45260]
-- Improved the error message for the unsupported interaction between correlated subqueries and `WITH` clauses. [#45227][#45227]
-- [`UPSERT`](https://www.cockroachlabs.com/docs/v20.1/upsert) and [`INSERT..ON CONFLICT`](https://www.cockroachlabs.com/docs/v20.1/insert) statements now (occasionally) need to do an extra check to ensure that they never update the same row twice. This may adversely affect performance in cases where the [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer) cannot statically prove the extra check is unnecessary. [#45372][#45372]
-- The [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer) now considers the likely number of rows an operator will need to provide, and might choose query plans based on this. In particular, the optimizer might prefer [lookup joins](https://www.cockroachlabs.com/docs/v20.1/joins) over alternatives in some situations where all rows of the join will probably not be needed. [#45604][#45604]
-- [JSONB](https://www.cockroachlabs.com/docs/v20.1/jsonb) columns can now be used in `GROUP BY` and `DISTINCT ON` clauses. [#45229][#45229] {% comment %}doc{% endcomment %}
-- The [inverted index](https://www.cockroachlabs.com/docs/v20.1/inverted-indexes) implementation now supports indexing [array](https://www.cockroachlabs.com/docs/v20.1/array) columns. This permits accelerating containment queries (`@>` and `<@`) on array columns by adding an index to them. [#45157][#45157] {% comment %}doc{% endcomment %}
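-
- A minimal sketch (the table and data are hypothetical):
-
- ~~~ sql
- CREATE TABLE docs (id INT PRIMARY KEY, tags STRING[]);
- CREATE INVERTED INDEX ON docs (tags);
- SELECT id FROM docs WHERE tags @> ARRAY['urgent'];  -- can be served by the inverted index
- ~~~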
-- [`EXPLAIN BUNDLE`](https://www.cockroachlabs.com/docs/v20.1/explain) can now be used to run a query and collect execution information in a support bundle, which can be downloaded via the Admin UI. [#45735][#45735] {% comment %}doc{% endcomment %}
-- [`EXPLAIN BUNDLE`](https://www.cockroachlabs.com/docs/v20.1/explain) now works when the client driver prepares the statement. [#46111][#46111]
-- Renamed the `experimental_optimizer_foreign_keys` [session variable](https://www.cockroachlabs.com/docs/v20.1/set-vars) and the `sql.defaults.optimizer_foreign_keys.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) to remove the `experimental` prefix. [#46174][#46174]
-- Fixed the `"negative limit hint"` internal query error. [#45879][#45879]
-- Fixed query errors in cases where a [CTE](https://www.cockroachlabs.com/docs/v20.1/common-table-expressions) was used inside a recursive CTE. [#45877][#45877]
-- Fixed an internal error that could occur in the optimizer when a `WHERE` filter contained at least one [correlated subquery](https://www.cockroachlabs.com/docs/v20.1/subqueries#correlated-subqueries) and one non-correlated subquery. [#46153][#46153]
-- Improved session settings reporting in the [`EXPLAIN (OPT,ENV)`](https://www.cockroachlabs.com/docs/v20.1/explain) output. [#46212][#46212]
-
-
Execution changes
-
-- The new `diagnostics.sql_stat_reset.interval` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) controls the rate at which SQL statement statistics are refreshed. Additionally, the setting `diagnostics.forced_stat_reset.interval` was renamed to `diagnostics.forced_sql_stat_reset_interval`. [#45082][#45082] {% comment %}doc{% endcomment %}
-- [`UPDATE`](https://www.cockroachlabs.com/docs/v20.1/update) statements now acquire locks using the `FOR UPDATE` locking mode during their initial row scan, which improves performance for contended workloads. This behavior is configurable using the `enable_implicit_select_for_update` [session variable](https://www.cockroachlabs.com/docs/v20.1/set-vars) and the `sql.defaults.implicit_select_for_update.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). [#45159][#45159] {% comment %}doc{% endcomment %}
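-
- For example, to opt out for the current session:
-
- ~~~ sql
- SET enable_implicit_select_for_update = false;
- ~~~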
-- Hash joins and sorts are now run using the vectorized engine when `vectorize=auto` (default configuration). [#45582][#45582] {% comment %}doc{% endcomment %}
-- CockroachDB now collects separate sets of metrics for usage of [`SAVEPOINT`](https://www.cockroachlabs.com/docs/v20.1/savepoint): one set for regular SQL savepoints and one set for uses dedicated to CockroachDB's client-side transaction retry protocol. [#45566][#45566]
-- Vectorized distributed flows and disk spilling now support the [`INTERVAL`](https://www.cockroachlabs.com/docs/v20.1/interval) type. [#45776][#45776] {% comment %}doc{% endcomment %}
-- Queries with [`MERGE` join](https://www.cockroachlabs.com/docs/v20.1/joins) can now run via the vectorized engine when `vectorize` is set to `auto`. [#45784][#45784] {% comment %}doc{% endcomment %}
-- Hash aggregation is now supported in `vectorize=auto` mode. [#45832][#45832] {% comment %}doc{% endcomment %}
-- Added telemetry reporting for usages of inverted and hash-sharded indexes. [#46060][#46060]
-- The statement tag returned to the client upon success for [`CREATE USER`](https://www.cockroachlabs.com/docs/v20.1/create-user), [`ALTER USER`](https://www.cockroachlabs.com/docs/v20.1/alter-user), and [`DROP USER`](https://www.cockroachlabs.com/docs/v20.1/drop-user) now includes the word "ROLE" instead of "USER", for compatibility with PostgreSQL. These three statements are now aliases for `CREATE ROLE`, `ALTER ROLE`, and `DROP ROLE`. [#46042][#46042]
-- The `experimental_on` option for the `vectorize` [session variable](https://www.cockroachlabs.com/docs/v20.1/set-vars) has been renamed to `on`. The only things that will not run with `auto`, but will run with `on`, are unordered [`DISTINCT`](https://www.cockroachlabs.com/docs/v20.1/select-clause#eliminate-duplicate-rows) and two [window functions](https://www.cockroachlabs.com/docs/v20.1/window-functions) (`percent_rank` and `cume_dist`). The two options are otherwise identical. [#46080][#46080]
-- `NOTICE` commands can now be sent by CockroachDB servers using the Postgres client/server protocol. These notices will print when using the CockroachDB CLI. These notices can be disabled using the [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) `sql.notices.enabled = false`. [#45679][#45679] {% comment %}doc{% endcomment %}
-- Fixed an internal error that could occur when `NULLIF` was called with one null argument. [#45354][#45354]
-- Fixed a bug where `EXPERIMENTAL SCRUB TABLE` did not work on a `TIMESTAMP`/`TIMESTAMPTZ` key. [#45410][#45410]
-- Significantly reduced the amount of memory allocated while scanning tables with a large average row size. [#45323][#45323]
-- Some vectorized execution plans that used [lookup joins](https://www.cockroachlabs.com/docs/v20.1/joins) with decimals would previously return incorrect results. This is now fixed. [#45536][#45536]
-- Previously, drivers that did not truncate trailing zeroes for decimals in the binary format could see inaccuracies of up to 10^4 during the decode step. Fixed the error by truncating the trailing zeroes as appropriate. This fixes known incorrect decoding cases with Postgres drivers in Elixir. [#45613][#45613]
-- Previously, an internal error could occur in CockroachDB when executing queries containing unordered synchronizers via the vectorized engine. This has been fixed. [#45690][#45690]
-- Fixed a bug where the distinct operation on `ARRAY[NULL]` and `NULL` could sometimes return an incorrect result and omit some tuples. [#45229][#45229]
-- Previously, CockroachDB could crash when computing [window functions](https://www.cockroachlabs.com/docs/v20.1/window-functions) with the `RANGE` mode of framing when one of the bounds was either `'offset PRECEDING'` or `'offset FOLLOWING'` and there were `NULL` values in the single column from the `ORDER BY` clause. Additionally, in `RANGE` mode, the bounds `'0 PRECEDING'` and `'0 FOLLOWING'` could be handled incorrectly. This is now fixed. [#44666][#44666]
-- CockroachDB now verifies that a column is visible before accessing its datums. [#45801][#45801]
-- Expected errors from the vectorized execution engine are no longer mistakenly annotated as unexpected errors. [#45673][#45673]
-- Mixed type comparison or binary expressions that could have previously returned wrong results in the vectorized execution engine now fall back to using the row execution engine. [#45724][#45724]
-- Fixed [decimal](https://www.cockroachlabs.com/docs/v20.1/decimal) rounding errors in the vectorized execution engine. [#45950][#45950]
-- Fixed a bug where various [session variables](https://www.cockroachlabs.com/docs/v20.1/set-vars) whose value would display as "`on`" or "`off`" could not be set to the values "`on`" or "`off`", only `true` or `false`. [#46163][#46163]
-- Fixed a bug that caused transactions that have performed [schema changes](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes) to deadlock after they restart. [#46170][#46170]
-
-
Updates to schema change / DDL statements
-
-- It is now possible to [create a table](https://www.cockroachlabs.com/docs/v20.1/create-table) and then add or alter a [primary key](https://www.cockroachlabs.com/docs/v20.1/primary-key) within the same transaction. [#46015][#46015]
-- The [`ALTER TABLE .. ADD PRIMARY KEY ...`](https://www.cockroachlabs.com/docs/v20.1/alter-table) command can now be used when the target table has the default rowid [primary key](https://www.cockroachlabs.com/docs/v20.1/primary-key). [#45514][#45514] {% comment %}doc{% endcomment %}
-- Fixed a bug where a table without a [primary key](https://www.cockroachlabs.com/docs/v20.1/primary-key) and a column named `rowid` would throw an error when being created. [#45507][#45507]
-- Schema changes are now disallowed while a [primary key](https://www.cockroachlabs.com/docs/v20.1/primary-key) change is in progress. This includes the following: (1) running a primary key change in a transaction and then starting another schema change in the same transaction, and (2) starting a primary key change on one connection and then starting a schema change on the same table on another connection while the initial primary key change is still executing. [#45397][#45397] {% comment %}doc{% endcomment %}
-- CockroachDB has now disabled [primary key](https://www.cockroachlabs.com/docs/v20.1/primary-key) changes when a concurrent schema change is executing on the same table, or if a schema change has been started on the same table in the current transaction. [#45513][#45513] {% comment %}doc{% endcomment %}
-- You can now drop a [primary key](https://www.cockroachlabs.com/docs/v20.1/primary-key) as long as you add another primary key within the same [transaction](https://www.cockroachlabs.com/docs/v20.1/transactions). This feature is intended to be used when you do not want the existing primary key to be rewritten as a secondary index by [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v20.1/alter-primary-key). [#44511][#44511] {% comment %}doc{% endcomment %}
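-
- A minimal sketch (the table and column names are hypothetical):
-
- ~~~ sql
- BEGIN;
- ALTER TABLE t DROP CONSTRAINT "primary";
- ALTER TABLE t ADD PRIMARY KEY (k);
- COMMIT;
- ~~~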
-- Removed the experimental session variable gating the use of online primary key changes. [#45753][#45753] {% comment %}doc{% endcomment %}
-- Previously, [renaming a database](https://www.cockroachlabs.com/docs/v20.1/rename-database) was blocked if any [table](https://www.cockroachlabs.com/docs/v20.1/create-table) in it referenced a [sequence](https://www.cockroachlabs.com/docs/v20.1/create-sequence). This was to block cases where the table's reference to the sequence contains the database name: if the database name changes, there is no way to overwrite the table's reference to the sequence in the new database. However, if no database name is included in the sequence reference, the rename is safe, so it is now allowed. [#45502][#45502] {% comment %}doc{% endcomment %}
-- Previously, [renaming a database](https://www.cockroachlabs.com/docs/v20.1/rename-database) with dependent [views](https://www.cockroachlabs.com/docs/v20.1/views) returned a misleading error message that implied the dependency was a view on a table. Now the error message generically says `cannot rename relation ... as it depends on relation ...` instead. [#45427][#45427]
-- Long-running [transactions](https://www.cockroachlabs.com/docs/v20.1/transactions) which attempt to [`TRUNCATE`](https://www.cockroachlabs.com/docs/v20.1/truncate) can now be pushed and will commit in cases where they previously could fail or retry forever. [#44091][#44091]
-- It is now possible to create [inverted indexes](https://www.cockroachlabs.com/docs/v20.1/inverted-indexes) on columns whose names are mixed-case. [#45621][#45621]
-
-
Updates to Bulk I/O statements
-
-- Improved error messages when [importing MySQL dump](https://www.cockroachlabs.com/docs/v20.1/migrate-from-mysql) data. [#45958][#45958]
-- `nodelocal://` URIs now require a node ID to be specified in the hostname field. The special node ID `'self'` is equivalent to the previous behavior when the node ID was unspecified. [#45764][#45764] {% comment %}doc{% endcomment %}
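-
- For example (paths are illustrative):
-
- ~~~ sql
- IMPORT TABLE t (k INT PRIMARY KEY, v STRING)
-     CSV DATA ('nodelocal://1/data/t.csv');             -- node 1's local file system
- BACKUP DATABASE db1 TO 'nodelocal://self/backups/db1'; -- the node serving the request
- ~~~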
-- Fixed cases where target column specifications in [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v20.1/import-into) were ignored. [#45747][#45747]
-- `IMPORT` now correctly handles columns named with reserved keywords and/or other special characters. [#45944][#45944]
-- The Google Cloud Storage client is now resilient to transient connection errors. [#46000][#46000]
-- The creation of a database and table between incremental cluster backups is now allowed. [#46066][#46066]
-- Improved error reporting when importing data. [#46165][#46165]
-
-
Changes to background job management
-
-- Schema changes are now scheduled and run fully like other [jobs](https://www.cockroachlabs.com/docs/v20.1/show-jobs): they now can be [canceled](https://www.cockroachlabs.com/docs/v20.1/cancel-job), [paused](https://www.cockroachlabs.com/docs/v20.1/pause-job), and [resumed](https://www.cockroachlabs.com/docs/v20.1/resume-job). Some other UI differences come with this implementation change; notably, all schema changes now have an associated job, failed schema changes are now rolled back within the "Reverting" phase of the same job, and GC for dropped indexes and tables is deferred to a later job. [#45870][#45870]
-- Non-running [jobs](https://www.cockroachlabs.com/docs/v20.1/show-jobs) are now considered for adoption in randomized order instead of in a deterministic order by creation time, to avoid potential deadlocks when schema change jobs need to execute in a specific order. This is a preemptive change, not a bug fix, but it affects all jobs. [#45870][#45870] {% comment %}doc{% endcomment %}
-- On new clusters, the internal `system.jobs` table now uses the default `zoneconfig` and `TTL` (25h). [#45767][#45767] {% comment %}doc{% endcomment %}
-- In some cases, the cleanup job spawned by [`ALTER .. PRIMARY KEY`](https://www.cockroachlabs.com/docs/v20.1/alter-primary-key) cannot be canceled. [#45595][#45595] {% comment %}doc{% endcomment %}
-- Introduced a temporary table cleanup job that runs periodically on each cluster. It removes any temporary schemas and their related objects that did not get removed cleanly when the connection closed. The cleanup period can be changed via the [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) `sql.temp_object_cleaner.cleanup_interval`, which defaults to 30 minutes. [#45669][#45669] {% comment %}doc{% endcomment %}
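-
- For example, to shorten the cleanup interval:
-
- ~~~ sql
- SET CLUSTER SETTING sql.temp_object_cleaner.cleanup_interval = '10m';
- ~~~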
-- Previously, after deleting an index, table, or database, the relevant schema change job would change its running status to waiting for GC TTL. The schema change and the GC process are now decoupled into two jobs. [#45962][#45962]
-- Fixed a bug where, in some rare cases, a job was not [cancellable](https://www.cockroachlabs.com/docs/v20.1/cancel-job) when in state "Reverting". [#45320][#45320]
-- When considering whether a job should be orphaned, CockroachDB used to take the conservative approach when a descriptor ID pointing to a non-existent descriptor was found. This caused jobs to hang forever without being garbage collected. CockroachDB now disregards these IDs when considering whether a job still has work to do. [#45353][#45353]
-- Logs emitted from [jobs](https://www.cockroachlabs.com/docs/v20.1/show-jobs) are now tagged with the job ID to improve visibility and aid debugging. [#45728][#45728]
-- [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v20.1/import-into) jobs which are canceled or fail can no longer get stuck in an unrecoverable state if data from the previous state of the table had expired relative to the GC TTL. [#44739][#44739]
-- [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v20.1/create-changefeed) jobs that take a long time to perform backfills will no longer encounter failures due to garbage collection, so long as they begin before the data has expired. [#45778][#45778]
-
-
Updates to APIs and introspection
-
-- Telemetry reporting has been added for the commands [`SHOW INDEXES`](https://www.cockroachlabs.com/docs/v20.1/show-index), [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v20.1/show-queries), [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v20.1/show-jobs), and [`SHOW CONSTRAINTS`](https://www.cockroachlabs.com/docs/v20.1/show-constraints). [#45897][#45897]
-- [`SHOW USERS`](https://www.cockroachlabs.com/docs/v20.1/show-users) and [`SHOW ROLES`](https://www.cockroachlabs.com/docs/v20.1/show-roles) are now the same, as `USERS` is now an alias for `ROLES`. `SHOW USERS` and `SHOW ROLES` now match the PostgreSQL `\du` command. `SHOW ROLES` now displays three columns: `username`, `options`, and `member_of`. [#45827][#45827]
-- HTTP endpoints now report status `403 (Forbidden)` instead of `500 (Internal server error)` when the authenticated user has insufficient privileges to use the endpoint. [#45325][#45325] {% comment %}doc{% endcomment %}
-- The endpoint `/_status/job/{job_id}` now displays status info about a job. [#45094][#45094] {% comment %}doc{% endcomment %}
-- The pprof endpoints now allow downloading the binary profiles. To do so, attach `?download=true` to the URL. [#45790][#45790] {% comment %}doc{% endcomment %}
-- Improved the debuggability of C++-level issues by providing access to thread stack traces via a new `/debug/threads` endpoint, which is exposed on the Admin UI advanced debug page. Thread stack traces are now also included in the info collected by `debug zip`. Thread stack traces are currently only available on Linux. [#45321][#45321] {% comment %}doc{% endcomment %}
-- Accesses to `/health` using a non-root authentication token no longer hang when a node is currently under load, or if a system range is unavailable. [#45119][#45119]
-- Statement diagnostics traces now contain processor statistics. [#46132][#46132]
-
-
Command-line changes
-
-
Changes to operational workflows
-
-- Hostnames from the `cockroach start --join` flag can now be resolved via SRV records. This simplifies cluster deployment in Kubernetes. [#45815][#45815] {% comment %}doc{% endcomment %}
-- The `--decommission` flag for [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit) is now deprecated. It will be removed altogether in a subsequent stable release. Deployments should use [`cockroach node decommission`](https://www.cockroachlabs.com/docs/v20.1/cockroach-node) followed by either `cockroach quit` or an equivalent form of server shutdown. [#45903][#45903] {% comment %}doc{% endcomment %}
-- The `--socket` flag for [`cockroach start`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start) is now deprecated in favor of `--socket-dir`. CockroachDB now automatically chooses a name for the socket in the specified directory based on the configured port number. `--socket` will be removed in a later version. [#45931][#45931] {% comment %}doc{% endcomment %}
-- Added the `--cert-principal-map` flag to [`cockroach start`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start), which specifies a comma-separated list of `cert-principal`:`db-principal` mappings that map the principals found in certificates to DB principals. This allows the usage of "node" and "root" certificates where the common name contains dots or a host name, which allows such certificates to be generated by certificate authorities that place restrictions on the contents of the common name field. [#45819][#45819]
-
-
Configuration changes
-
-- Added a new `default` option for the `--storage-engine` flag that respects the engine used last. [#45512][#45512] {% comment %}doc{% endcomment %}
-
-
Usability improvements
-
-- Some CLI [commands](https://www.cockroachlabs.com/docs/v20.1/cockroach-commands) now provide more details and/or a hint when they encounter an error. [#45575][#45575] {% comment %}doc{% endcomment %}
-- CockroachDB now refuses to start if named time zones are not properly configured. It is possible to override this behavior for testing purposes, with the understanding that doing so will cause incorrect SQL results and other inconsistencies, using the environment variable `COCKROACH_INCONSISTENT_TIME_ZONES`. [#45640][#45640] {% comment %}doc{% endcomment %}
-- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v20.1/cockroach-sql) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) client commands now display out-of-band server notices at the end of execution. [#46144][#46144] [#46124][#46124]
-- When some result rows have already been received from the server when an error is encountered, the [CockroachDB SQL shell](https://www.cockroachlabs.com/docs/v20.1/cockroach-sql) now presents both the result rows and the error in the output, regardless of the selected table formatter. Previously, only the error was reported with some formatters, or both with other formatters. [#45872][#45872] {% comment %}doc{% endcomment %}
-- Added the flag `--disable-demo-license` to provide another option to disable [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) from attempting to acquire a demo license. [#46126][#46126]
-- The parameter `--set` for [`cockroach sql`](https://www.cockroachlabs.com/docs/v20.1/cockroach-sql) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) is now able to override all client-side options, as advertised. [#46118][#46118]
-- Fixed a bug that caused `cockroach demo -e` to display a `connection refused` error. [#46126][#46126]
-
-
Change to troubleshooting facilities
-
-- The [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip) output now contains a hex representation of the marshaled job payloads and progress, as well as table descriptors. This allows you to copy these strings and unmarshal them when debugging. [#45721][#45721] {% comment %}doc{% endcomment %}
-- CockroachDB will now dump the stacks of all goroutines upon receiving `SIGQUIT` prior to terminating. This feature is intended for use while troubleshooting misbehaving nodes. [#36378][#36378] {% comment %}doc{% endcomment %}
-
-
Admin UI changes
-
-- Removed mention of RocksDB from the Read Amplification, SSTables, Compactions/Flushes, and Compaction Queue graphs. [#45398][#45398]
-- The display options are now saved separately for each authenticated user. **Note:** When upgrading to a version with this change, all current display customizations for admin users are lost. [#45127][#45127] {% comment %}doc{% endcomment %}
-- Customizations of the Admin UI are again properly saved across sessions. [#45127][#45127]
-- Refactored the Redux data flow for the enqueue range feature. [#45667][#45667]
-- Increased the enqueue range timeout to one hour to prevent operations from timing out before completion. [#45667][#45667]
-- The Admin UI now reports a clearer message when a non-admin user uses an admin-only feature. [#45122][#45122]
-- Added the Statement Diagnostics History page. [#45799][#45799]
-- You can now access the Activate Diagnostics dialog from the [Statements page](https://www.cockroachlabs.com/docs/v20.1/admin-ui-statements-page), through the **Activate** link. [#45799][#45799] {% comment %}doc{% endcomment %}
-- Added a Release Notes subscription form on the [Cluster Overview page](https://www.cockroachlabs.com/docs/v20.1/admin-ui-cluster-overview-page). [#45143][#45143]
-- The jobs status filter now includes "running", which was previously omitted by mistake. [#45937][#45937]
-
-
Performance improvements
-
-- [Importing an Avro file](https://www.cockroachlabs.com/docs/v20.1/migrate-from-avro) is now faster. [#45269][#45269]
-- Improved the execution plans of [foreign key](https://www.cockroachlabs.com/docs/v20.1/foreign-key) checks for [`UPSERT`](https://www.cockroachlabs.com/docs/v20.1/upsert) and [`INSERT .. ON CONFLICT`](https://www.cockroachlabs.com/docs/v20.1/insert) in some cases (in particular multi-region). [#45520][#45520]
-- [Importing](https://www.cockroachlabs.com/docs/v20.1/import) delimited data now has improved throughput. [#45543][#45543]
-- Improved the selectivity estimation of some predicates containing `OR`, leading to better plan selection by the [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer). [#45732][#45732]
-- Improved cardinality estimation in the [optimizer](https://www.cockroachlabs.com/docs/v20.1/cost-based-optimizer) for relations with a small number of rows. This leads to the optimizer choosing a better query plan in some cases. [#45771][#45771]
-- `crdb_internal.jobs` now loads less data into memory. [#45914][#45914]
-
-
Doc updates
-
-- Upgraded the search on the docs site. [#6692][#6692]
-- Added docs for the `--storage-engine` flag on [node start](https://www.cockroachlabs.com/docs/v20.1/cockroach-start), which can be set to `pebble`, `rocksdb`, or `default`; the `default` option makes the engine choice sticky, reusing the engine from previous runs when no engine is specified. [#6769][#6769]
-- Added examples of `COMMENT ON TABLE` in `SHOW CREATE TABLE` output to [`COMMENT ON`](https://www.cockroachlabs.com/docs/v20.1/comment-on), [`SHOW TABLES`](https://www.cockroachlabs.com/docs/v20.1/show-tables), and [`SHOW CREATE`](https://www.cockroachlabs.com/docs/v20.1/show-create). [#6789][#6789]
-- Added the "Duplicate Indexes" Youtube video to the [Duplicate Indexes Topology doc](https://www.cockroachlabs.com/docs/v20.1/topology-duplicate-indexes). [#6796][#6796]
-- Updated the [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) doc to include new flags. [#6841][#6841]
-
-
-
-
Contributors
-
-This release includes 385 merged PRs by 42 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Andrii Vorobiov
-- Artem Barger
-- Damien Hollis (first-time contributor)
-- Jaewan Park
-- Ziheng Liu (first-time contributor)
-- pohzipohzi (first-time contributor)
-
-
-
-- The `admin` role is now required to use the new [`cockroach nodelocal upload`](https://www.cockroachlabs.com/docs/v20.1/cockroach-nodelocal-upload) functionality. [#46265][#46265]
-
-
Enterprise edition changes
-
-- Incremental [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) can now quickly skip unchanged data. This makes frequent incremental backups 10-100x faster depending on data size and frequency. [#46390][#46390]
-
-
SQL language changes
-
-- Added the `get_bit()` and `set_bit()` [builtin functions](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) for bits. [#45957][#45957]
-- Modified the `get_bit()` and `set_bit()` [builtin functions](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) to support [byte arrays](https://www.cockroachlabs.com/docs/v20.1/sql-constants#byte-array-literals). [#46380][#46380]
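-
- A sketch of the expected usage, assuming Postgres-style signatures for these builtins:
-
- ~~~ sql
- SELECT get_bit(B'0101', 1);     -- 1
- SELECT set_bit(B'0101', 1, 0);  -- 0001
- ~~~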
-- Arrays can now be compared using the `<`, `<=`, `>`, and `>=` operations. [#46254][#46254]
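-
- For example:
-
- ~~~ sql
- SELECT ARRAY[1,2,3] < ARRAY[1,2,4];  -- true: arrays compare lexicographically
- ~~~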
-- [`EXPLAIN BUNDLE`](https://www.cockroachlabs.com/docs/v20.1/explain) output now contains DistSQL diagrams. [#46225][#46225]
-- Previously, when creating a non-partitioned index on a partitioned table with the `sql_safe_updates` [session variable](https://www.cockroachlabs.com/docs/v20.1/set-vars) set to `true`, CockroachDB would error out. CockroachDB now sends a NOTICE stating that creating a non-partitioned index on a partitioned table is not performant. [#46223][#46223]
-- Added new internal tables `crdb_internal.node_transactions` and `crdb_internal.cluster_transactions` that contain some metadata about active user transactions. [#46206][#46206]
-- Added the column `txn_id` to the `crdb_internal.node_queries` and `crdb_internal.cluster_queries` tables. These fields represent the transaction ID of each query in each row. [#46206][#46206]
-- Columns in the process of being added to or removed from a table are now always set to their default or computed value if another transaction concurrently [`INSERT`](https://www.cockroachlabs.com/docs/v20.1/insert)s, [`UPDATE`](https://www.cockroachlabs.com/docs/v20.1/update)s, or [`UPSERT`](https://www.cockroachlabs.com/docs/v20.1/upsert)s a row. This fixes an issue where a column being backfilled would not get properly set by concurrent transactions. [#46285][#46285]
-- [`ROLLBACK TO SAVEPOINT`](https://www.cockroachlabs.com/docs/v20.1/rollback-transaction) (for either regular savepoints or "restart savepoints" defined with `cockroach_restart`) now causes a "feature not supported" error after a DDL statement in a HIGH PRIORITY transaction, in order to avoid a transaction deadlock. See issue [#46414][#46414] for details. [#46415][#46415]
-- Added support for the `stddev_samp` aggregate [builtin function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators), which is the same as `stddev` (according to PostgreSQL documentation, the latter is actually the historical alias of the former). [#46279][#46279]
-
-
Command-line changes
-
-- Ensured the correct error messages are shown to the user when using [`cockroach nodelocal upload`](https://www.cockroachlabs.com/docs/v20.1/cockroach-nodelocal-upload). [#46311][#46311]
-
-
Bug fixes
-
-- Fixed a crash when [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import)ing a table without a table definition. [#46193][#46193]
-- Added support for queries with qualified stars that refer to tables in outer scopes. [#46233][#46233]
-- Fixed an incorrect "no data source matches prefix" error in some cases involving subqueries that use views. [#46226][#46226]
-- Previously, the `experimental_strftime` and `experimental_strptime` [builtin functions](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) used the non-POSIX-standard `%f` directive for nanosecond display. However, as CockroachDB only supports up to microsecond precision and [Python's `strftime` uses `%f` for microseconds](https://docs.python.org/3.9/library/datetime.html#strftime-strptime-behavior), we have similarly switched `%f` to use microsecond instead of nanosecond precision. [#46263][#46263]
-- Added a check that detects invalid sequence numbers in the RocksDB write-ahead log and returns an error during node startup instead of applying the invalid log entries. [#46328][#46328]
-- [Follower reads](https://www.cockroachlabs.com/docs/v20.1/follower-reads) that hit intents no longer have a chance of entering an infinite loop. This bug was present in earlier versions of the v20.1 release. [#46234][#46234]
-- Fixed an internal error that could occur when an aggregate inside the right-hand side of a `LATERAL` [join](https://www.cockroachlabs.com/docs/v20.1/joins) was scoped at the level of the left-hand side. [#46227][#46227]
-- Fixed an error that incorrectly occurred when an aggregate was used inside the `WHERE` or `ON` clause of a [subquery](https://www.cockroachlabs.com/docs/v20.1/subqueries) but was scoped at an outer level of the query. [#46227][#46227]
-- Reverted performance improvements to incremental [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup)s until a potential correctness issue is addressed. [#46385][#46385]
-- [CDC](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) no longer combines with long running transactions to trigger an assertion. Previously, this could crash a server if the right sequence of events occurred. This was typically rare, but was much more common when CDC was in use. [#46391][#46391]
-- Fixed a race in the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). [#46360][#46360]
-- Fixed a rare bug causing transactions that have performed schema changes to deadlock after they restart. [#46384][#46384]
-
-
Doc updates
-
-- Added docs for [`cockroach nodelocal upload`](https://www.cockroachlabs.com/docs/v20.1/cockroach-nodelocal-upload), which uploads a file to the external IO directory on a node's local file system (the gateway node, by default). [#6871][#6871]
-- Added [guidance](https://www.cockroachlabs.com/docs/v20.1/create-table#create-a-table-with-a-hash-sharded-primary-index) on using [hash-sharded indexes](https://www.cockroachlabs.com/docs/v20.1/indexes#hash-sharded-indexes). [#6820][#6820]
-- Updated the [production checklist](https://www.cockroachlabs.com/docs/v20.1/recommended-production-settings#azure) and [Azure deployment guides](https://www.cockroachlabs.com/docs/v20.1/deploy-cockroachdb-on-microsoft-azure) to recommend compute-optimized F-series VMs in Azure deployments. [#7005][#7005]
-
-
-
-
Contributors
-
-This release includes 46 merged PRs by 20 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Amit Sadaphule (first-time contributor)
-- Andrii Vorobiov
-- Marcus Gartner (first-time contributor, CockroachDB team member)
-- abhishek20123g
-
-
-
-- The new `--unencrypted-localhost-http` flag for [`cockroach start`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start) and [`cockroach start-single-node`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start-single-node) forces the HTTP listener to bind to `localhost` addresses only and disables the TLS protocol. In secure clusters, this makes the Admin UI reachable with an `http://` URL without requiring certificate or CA setup. [#46472][#46472] {% comment %}doc{% endcomment %}
-
-
General changes
-
-- Transactions that read a lot of data now behave better when exceeding the memory limit set by the `kv.transaction.max_refresh_spans_bytes` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). Such transactions now attempt to resolve the conflicts they run into instead of being forced to always retry. Increasing `kv.transaction.max_refresh_spans_bytes` should no longer be necessary for most workloads. [#46803][#46803]
-- Before [upgrading from v19.2 to v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version), it is best practice to make sure there are no schema changes in progress. However, if any are still running when the upgrade is started, note that they will stop making progress during the upgrade and will run to completion once the upgrade has been finalized. [#47073][#47073]
-
-
Enterprise edition changes
-
-- The new `protect_data_from_gc_on_pause` [`CHANGEFEED` option](https://www.cockroachlabs.com/docs/v20.1/create-changefeed#options) ensures that the data needed to resume a `CHANGEFEED` is not garbage collected. [#46345][#46345] {% comment %}doc{% endcomment %}
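-
- For example (the table name and sink URI are illustrative):
-
- ~~~ sql
- CREATE CHANGEFEED FOR TABLE t INTO 'kafka://broker:9092'
-     WITH protect_data_from_gc_on_pause;
- ~~~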
-- [`BACKUP`s](https://www.cockroachlabs.com/docs/v20.1/backup) and [`RESTORE`s](https://www.cockroachlabs.com/docs/v20.1/restore) now collect some [anonymous telemetry](https://www.cockroachlabs.com/docs/v20.1/diagnostics-reporting) on throughput and feature usage. [#46755][#46755]
-- [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v20.1/show-backup) now shows whether a `BACKUP` is a cluster backup or not. [#46768][#46768] {% comment %}doc{% endcomment %}
-- [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v20.1/show-backup) now shows the privileges assigned to tables and databases in a backup and, if the `WITH privileges` option is specified, lists which users and roles had which privileges. [#46853][#46853] {% comment %}doc{% endcomment %}
-- [Incremental backups](https://www.cockroachlabs.com/docs/v20.1/backup#incremental-backups) and [restores](https://www.cockroachlabs.com/docs/v20.1/restore#restore-from-incremental-backups) using HTTP storage now require explicitly specifying incremental storage locations. [#46967][#46967] {% comment %}doc{% endcomment %}
-- The new, appending [incremental backup syntax](https://www.cockroachlabs.com/docs/v20.1/backup#create-incremental-backups) does not allow converting a [cluster backup](https://www.cockroachlabs.com/docs/v20.1/backup#backup-a-cluster) to a specific table or database backup. [#46966][#46966] {% comment %}doc{% endcomment %}
-
-
SQL language changes
-
-- Outer columns (columns in a subquery that reference a higher scope) can now be used in the `SELECT` list of an aggregation or grouping expression without explicitly including them in the `GROUP BY` list, for improved Postgres compatibility. [#46417][#46417] {% comment %}doc{% endcomment %}
-- Renamed the `EXPLAIN BUNDLE` statement to `EXPLAIN ANALYZE (DEBUG)`. [#46534][#46534] {% comment %}doc{% endcomment %}
-- The `EXPLAIN ANALYZE (DEBUG)` statement now contains all the information available via `EXPLAIN (OPT,ENV)`. [#46441][#46441] {% comment %}doc{% endcomment %}
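-
- For example (the table `t` is hypothetical):
-
- ~~~ sql
- EXPLAIN ANALYZE (DEBUG) SELECT count(*) FROM t;
- -- returns instructions for retrieving the generated diagnostics bundle
- ~~~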
-- The `length()`, `octet_length()` and `bit_length()` [built-in functions](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) are now supported on `BIT` and `VARBIT`. [#46524][#46524] {% comment %}doc{% endcomment %}
-- The [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v20.1/explain-analyze) response now includes memory and disk usage details and omits allocation stats if they are zero. [#46316][#46316] {% comment %}doc{% endcomment %}
-- The `CREATE TEMPORARY TABLE` statement now supports the `ON COMMIT` syntax. [#46594][#46594] {% comment %}doc{% endcomment %}
-- The [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) statement now records additional [anonymous telemetry](https://www.cockroachlabs.com/docs/v20.1/diagnostics-reporting) about its performance and reliability. [#46763][#46763]
-- `CREATE INDEX CONCURRENTLY` and `DROP INDEX CONCURRENTLY` are now supported as no-ops, as all indexes are created concurrently. [#46802][#46802] {% comment %}doc{% endcomment %}
-- The type checking code now prefers aggregate overloads with string inputs if there are multiple possible candidates due to arguments of unknown type. [#46898][#46898]
-- Added an unimplemented error when attempting to `ADD CONSTRAINT` with the `EXCLUDE USING` syntax. [#46909][#46909]
-- Added support for `CREATE INDEX .... INCLUDE (col1, col2, ...)`, which is an alias that PostgreSQL uses that is analogous to CockroachDB's `STORING (col1, col2, ...)` syntax. [#46909][#46909]
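-
- For example, the following statements are equivalent (the table `t` is hypothetical):
-
- ~~~ sql
- CREATE INDEX ON t (a) INCLUDE (b);
- CREATE INDEX ON t (a) STORING (b);
- ~~~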
-- Added support for parsing the `REINDEX` syntax, which results in an unimplemented error that explains that `REINDEX`ing is not required in CockroachDB. [#46909][#46909]
-- The [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) now only runs queries with streaming operators. To enable vectorized execution for buffering operators, use `SET vectorize=on`. [#46925][#46925] {% comment %}doc{% endcomment %}
-- The [`EXPLAIN`](https://www.cockroachlabs.com/docs/v20.1/explain) response now shows `SPANS | FULL SCAN` for full table scans and `SPANS | LIMITED SCAN` if there is a limit. Previously, both cases would return `SPANS | ALL`. [#47013][#47013] {% comment %}doc{% endcomment %}
-- The `CREATE ROLE`/`ALTER ROLE`/`DROP ROLE` results no longer show the rows affected, as this number could be misleading. [#46819][#46819]
-- Added a hint to use `ALTER ROLE` when trying to `GRANT` a role option directly to a user using the `GRANT ROLE` syntax. [#46819][#46819]
-- Improved the error message for `ALTER COLUMN ... SET DATA TYPE` for data type conversions that involve overwriting existing values. [#47170][#47170]
-
-
Command-line changes
-
-- The `cockroach` commands that internally use SQL, including [`cockroach sql`](https://www.cockroachlabs.com/docs/v20.1/cockroach-sql) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo), can now connect to a server using a Unix domain socket. The syntax for this is `--url 'postgres://user@?host=/path/to/directory&port=NNNN'`. [#47007][#47007] {% comment %}doc{% endcomment %}
-- The [`cockroach workload`](https://www.cockroachlabs.com/docs/v20.1/cockroach-workload) command now sets its `application_name` based on the chosen workload. [#46546][#46546] {% comment %}doc{% endcomment %}
-- The [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip) command now creates valid zip files even if some of its requests encounter an error. [#46634][#46634]
-- The [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) command now displays a TCP-based connection URL and a Unix domain socket for each node of a demo cluster. [#46935][#46935] {% comment %}doc{% endcomment %}
-- It is now possible to pre-configure the secure mode of [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) using the `COCKROACH_INSECURE` environment variable like other client commands. [#46959][#46959] {% comment %}doc{% endcomment %}
-- When running [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) in secure mode, the generated SQL URL now embeds the password so that commands using this URL can run without prompting for a password. [#47007][#47007] {% comment %}doc{% endcomment %}
-- The SQL URL generated by [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) no longer requires TLS client certificates in particular directory locations. [#47007][#47007] {% comment %}doc{% endcomment %}
-- The new experimental client-side command `demo ls` for [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) displays the connection parameters for every node in a demo cluster. [#47007][#47007] {% comment %}doc{% endcomment %}
-- The client-side commands specific to [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo), starting with `demo`, are now advertised in the output of `\?`. Note that this feature is currently experimental. [#47007][#47007] {% comment %}doc{% endcomment %}
-
-
Admin UI changes
-
-- The ALL filter on the Statements page now excludes internal statements. [#45646][#45646]
-- Tooltips showing statements and jobs are now limited in size for very long statements. [#46982][#46982]
-- The default timescale on metrics pages is now 10 minutes. Previously, the timescale defaulted to the age of the longest running node. [#46980][#46980]
-- Improved tooltips for existing capacity and storage metrics. [#46987][#46987]
-- Added analytics tracking for table sorts, searches and diagnostics activation on the Statements page, and navigation on the Statement Details page. [#47003][#47003]
-- The download link for statement diagnostics now points to the bundle zip file. [#47016][#47016]
-- Removed the Statements tab from the Databases > Table Details page. [#47102][#47102]
-- Cleaned up barcharts on the Statements page. [#47129][#47129]
-
-
Bug fixes
-
-- Fixed an Admin UI bug where sort columns were only being applied per page instead of for the entire multi-page list of statements. [#46978][#46978]
-- Fixed a performance bug where truncating a table took two round trips per column plus two per index. This could lead to slow truncate performance in distributed clusters. [#46334][#46334]
-- It is no longer possible to inject stats within an explicit transaction. [#46567][#46567]
-- Casting a bit array to a bigger varbit array no longer adds extra 0 padding at the end. [#46532][#46532]
-- Fixed a bug where `pg_catalog.pg_indexes` showed the wrong index definition for inverted indexes. [#46527][#46527]
-- Fixed incorrect query results in some corner cases involving variance/stddev/corr. [#46436][#46436]
-- Fixed an internal error or incorrect evaluation of check constraints in certain cases involving `UPSERT` and foreign key checks. [#46409][#46409]
-- `cockroach debug zip` now properly collects heap profiles. [#46469][#46469]
-- The goroutine dump facility now functions properly when logging to files is disabled, e.g., via `--log-dir=` or `--logtostderr`. [#46469][#46469]
-- Fixed an internal error that could occur during planning for some queries with a join and negative `LIMIT`. [#46440][#46440]
-- Fixed a bug where the vectorized engine could sometimes give an incorrect result when reading from interleaved parents or children. [#46456][#46456]
-- Fixed a bug where the vectorized engine would throw an internal error when executing a query that utilized an inverted index. [#46267][#46267]
-- Fixed a bug where operations on an index that contained a collated string in descending order would fail. [#46570][#46570]
-- Fixed a bug with `SET TIME ZONE` where a string prefixed with `UTC` or `GMT` and containing a colon-separated time zone offset had its offset inverted. [#46510][#46510]
-- CockroachDB no longer incorrectly accounts for some RAM usage when computing aggregate functions. [#46545][#46545]
-- `SHOW INDEXES ... WITH COMMENT` no longer shows duplicate rows for certain tables if indexes are identically named. [#46621][#46621]
-- Fixed an internal error that could happen during planning when a column with a histogram was filtered with a predicate of a different data type. [#46552][#46552]
-- Scans that lock rows (via `FOR UPDATE`) are no longer elided when the results are unused. [#46676][#46676]
-- Fixed a bug (introduced in v20.1.0-beta.3) in the new schema change GC job implementation which would cause the execution of GC jobs to be incorrectly delayed in the presence of other table descriptor updates. [#46691][#46691]
-- Fixed a bug with distinct aggregations on `JSONB` columns. [#46711][#46711]
-- Fixed a rare bug causing the assertion failure "caller violated contract: discovered non-conflicting lock". [#46744][#46744]
-- Ensured that index and table GC happen closer to their GC deadline. [#46743][#46743]
-- Statement diagnostics created through `EXPLAIN ANALYZE (DEBUG)` now show up in the UI page. [#46804][#46804]
-- Benign "outbox next" errors are now only logged when log verbosity is set to 1 or greater. [#46838][#46838]
-- Failed or canceled `IMPORT`s now properly clean up partially imported data. [#46856][#46856]
-- Failed or canceled `RESTORE`s now properly clean up partially imported data. [#46855][#46855]
-- Fixed a rare bug causing errors to be returned for successfully committed transactions. The most common error message was "TransactionStatusError: already committed". [#46848][#46848]
-- The "insecure cluster" indicator is once again displayed at the top right of Admin UI for insecure clusters. [#46865][#46865]
-- Fixed a rare assertion failure that contained the text "existing lock cannot be acquired by different transaction". This assertion was only present in earlier v20.1 releases and not in any earlier releases. [#46896][#46896]
-- Fixed an incorrect query result that could occur when a scalar aggregate was called with a null input. [#46898][#46898]
-- Fixed an incorrect result with `count(*)` when grouping on constant columns. [#46891][#46891]
-- `cockroach demo` now properly cleans up its temporary files if the background license acquisition fails. [#47007][#47007]
-- Tooltips for statement diagnostics are now only shown on hover. [#46995][#46995]
-- Fixed a bug when queries with projections of only `INT2` and/or `INT4` columns were executed via the vectorized engine. [#46977][#46977]
-- CockroachDB no longer considers a non-`NULL` value from an interleaved parent table to be `NULL` when the interleaved child has a `NULL` value in the row with the corresponding index key. [#47103][#47103]
-- Incremental, full-cluster `BACKUP`s with revision history are no longer disallowed in some cases where system tables have changed. [#47132][#47132]
-- Fixed a bug when adding a self-referencing foreign key constraint in the same transaction that creates a table. [#47128][#47128]
-- Change data capture no longer combines with long-running transactions to trigger an assertion with the text "lock timestamp regression". [#47139][#47139]
-- As part of the migration to the new schema change job implementation in v20.1, GC jobs are now automatically created for failed `IMPORT` and `RESTORE` jobs that left behind table data in v19.2 that had not been completely garbage collected by the time the cluster was upgraded to v20.1. [#47144][#47144]
-- Fixed a bug preventing clusters from creating `TIMETZ` columns before an upgrade to v20.1 is finalized. [#47169][#47169]
-- Fixed a data race on AST nodes for `SELECT` statements that include a `WINDOW` clause. [#47175][#47175]
-- Fixed the behavior of `crdb_internal.zones` in mixed-version clusters. [#47236][#47236]
-- Fixed reads from `system.namespace` and `crdb_internal.zones` on 19.2 nodes in a mixed-version cluster. [#47236][#47236]
-- Fixed incompatibility with v19.2 nodes for tables with computed columns. [#47274][#47274]
-- Restoring a backup from v2.1 to v20.1 with a timestamp column no longer results in incomplete type data. [#47240][#47240]
-- Fixed some cases where limits were applied incorrectly when pushed down into scans (resulting in some queries returning more results than they should). [#47296][#47296]
-- Fixed an assertion failure with the text "expected latches held, found none". [#47301][#47301]
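-
-For context on the `FOR UPDATE` fix above, here is a minimal sketch of the locking pattern it protects (the `kv` table and its columns are hypothetical). The scan must be planned even when its results are not consumed, because it acquires locks:
-
-~~~ sql
-BEGIN;
-SELECT * FROM kv WHERE k = 1 FOR UPDATE; -- locks the row; the scan is no longer elided even if its results go unused
-UPDATE kv SET v = v + 1 WHERE k = 1;
-COMMIT;
-~~~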
-
-
Performance improvements
-
-- Improved execution plans involving filters with `OR` expressions. [#46371][#46371]
-- Improved execution plans for queries containing a division by a constant. [#46861][#46861]
-- Virtual tables that access all table descriptors now make fewer round trips. [#46949][#46949]
-
-
Build changes
-
-- Building from source now requires Go 1.13.9. [#46619][#46619] {% comment %}doc{% endcomment %}
-- It is now possible to build CockroachDB with the Clang++ v10 compiler. [#46860][#46860] {% comment %}doc{% endcomment %}
-
-
Doc updates
-
-- Improved the documentation on [viewing and controlling backup jobs](https://www.cockroachlabs.com/docs/v20.1/backup#viewing-and-controlling-backups-jobs) and added documentation on [showing a backup with privileges](https://www.cockroachlabs.com/docs/v20.1/show-backup#show-a-backup-with-privileges). [#7101](https://github.com/cockroachdb/docs/pull/7101)
-- Documented [key/passphrase-based backup encryption](https://www.cockroachlabs.com/docs/v20.1/backup-and-restore). [#7085](https://github.com/cockroachdb/docs/pull/7085)
-- Documented how to use [`EXPLAIN(DISTSQL, TYPES)`](https://www.cockroachlabs.com/docs/v20.1/explain#distsql-option) to include the data types of the input columns in the generated physical plan. [#7045](https://github.com/cockroachdb/docs/pull/7045)
-- Updated [Azure hardware recommendations](https://www.cockroachlabs.com/docs/v20.1/recommended-production-settings#azure). [#7005](https://github.com/cockroachdb/docs/pull/7005)
-- Documented [`INTERVAL`](https://www.cockroachlabs.com/docs/v20.1/interval) duration fields and updated the syntax and precision details. [#7000](https://github.com/cockroachdb/docs/pull/7000)
-- Various updates related to [role-based access control (RBAC)](https://www.cockroachlabs.com/docs/v20.1/authorization) moving under the BSL license. [#7003](https://github.com/cockroachdb/docs/pull/7003)
-
-
-
-
Contributors
-
-This release includes 173 merged PRs by 32 authors. We would like to thank the following contributors from the CockroachDB community:
-
-- Andrii Vorobiov
-- Shaker Islam (first-time contributor)
-
-
-
-- The new ability to specify [`TIME`/`TIMETZ`](https://www.cockroachlabs.com/docs/v20.1/time#precision) and [`INTERVAL`](https://www.cockroachlabs.com/docs/v20.1/interval#precision) precision is available only after finalizing an upgrade to v20.1 (see the example below). Previously, it was possible to specify precision for these data types in clusters with mixed v19.2 and v20.1 nodes, but nodes running v19.2 would not respect the precision. [#47438][#47438]
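-
-Once the upgrade is finalized, precision for these types can be specified as sketched below (hypothetical schema; values with more fractional digits are rounded to the declared precision):
-
-~~~ sql
-CREATE TABLE events (
-    t   TIME(3),    -- millisecond precision
-    tz  TIMETZ(6),  -- microsecond precision
-    dur INTERVAL(3)
-);
-INSERT INTO events VALUES ('12:34:56.123456', '12:34:56.123456+02:00', '1 day 03:04:05.123456');
-~~~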
-
-
Command-line changes
-
-- The new `--clock-device` flag for [`cockroach start`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start) and [`cockroach start-single-node`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start-single-node) identifies a [PTP hardware clock](https://www.kernel.org/doc/html/latest/driver-api/ptp.html) to query for the current time. This is supported on Linux only and may be needed in cases where the host clock is unreliable or prone to large jumps (e.g., when using vMotion). [#47379][#47379]
-
-
Bug fixes
-
-- Fixed a bug causing some schema change rollbacks to fail permanently even on transient errors. [#47575][#47575]
-- Fixed an incompatibility between Pebble and RocksDB bloom filters that could result in keys disappearing or reappearing when switching storage engines. [#47611][#47611]
-- Fixed a panic that would result in "invalid truncation decision" error messages. [#47346][#47346]
-- Fixed a backward incompatibility between RocksDB and Pebble that prevented RocksDB from opening a Pebble created WAL file under certain conditions. [#47383][#47383]
-- Fixed a mishandling of truncated WAL records in Pebble that could prevent Pebble from opening a DB after a crash. [#47383][#47383]
-- Fixed a bug in the new schema change GC job implementation that caused unnecessary table descriptor lookups whenever a table was updated. [#47490][#47490]
-- Fixed a bug introduced in an earlier v20.1 release that could cause a workload to stall under heavy load. [#47493][#47493]
-- Fixed a bug introduced with the new schema change job implementation in v20.1.0-beta.3 that caused errors when rolling back a schema change to be swallowed. [#47499][#47499]
-- Fixed a bug that could trigger an assertion with the text "received X results, limit was Y". [#47501][#47501]
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-With the release of CockroachDB v20.1, we've made a variety of productivity, performance, and security improvements. Check out a comprehensive [summary of the most significant user-facing changes](#v20-1-0-summary) and then [upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version). You can also read more about these changes in the [v20.1 blog post](https://www.cockroachlabs.com/blog/cockroachdb-20-1-release/) or [watch our 20.1 release demo and overview](https://www.cockroachlabs.com/webinars/introducing-cockroachdb-20-1-build-fast-and-build-to-last/).
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
Summary
-
-This section summarizes the most significant user-facing changes in v20.1.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases.
-
-- [CockroachCloud](#v20-1-0-cockroachcloud)
-- [Core features](#v20-1-0-core-features)
-- [Enterprise features](#v20-1-0-enterprise-features)
-- [Backward-incompatible changes](#v20-1-0-backward-incompatible-changes)
-- [Known limitations](#v20-1-0-known-limitations)
-- [Education](#v20-1-0-education)
-
-
-
-
CockroachCloud
-
-- You can now use the code `CRDB30` for a **free 30-day trial of CockroachCloud**.
-
-- **[CockroachCloud pricing](https://www.cockroachlabs.com/pricing/)** is now available on our website.
-
-- **VPC peering** is now supported for CockroachCloud clusters running on GCP. [Contact us](https://www.cockroachlabs.com/contact-sales/) to set up a VPC peering-enabled CockroachCloud cluster.
-
-
Core features
-
-These features are freely available in the core version and do not require an enterprise license.
-
-Area | Feature | Description
------|---------|------------
-SQL | **Online Primary Key Changes** | The new [`ALTER TABLE ... ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v20.1/alter-primary-key) statement lets you change a table’s primary key with no interruption to data access. The old primary key is converted to a `UNIQUE` secondary index to help optimize the performance of queries that still filter on the old key. However, if this conversion is not desired, you can [drop and add a primary key constraint](https://www.cockroachlabs.com/docs/v20.1/add-constraint#drop-and-add-a-primary-key-constraint) instead. (See the first example after this table.)
-SQL | **Schema Change Controls** | [Online schema changes](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes) can now be paused, resumed, and cancelled via [`PAUSE JOB`](https://www.cockroachlabs.com/docs/v20.1/pause-job), [`RESUME JOB`](https://www.cockroachlabs.com/docs/v20.1/resume-job), and [`CANCEL JOB`](https://www.cockroachlabs.com/docs/v20.1/cancel-job).
-SQL | **Foreign Key Improvements** | CockroachDB now supports [multiple foreign key constraints on a single column](https://www.cockroachlabs.com/docs/v20.1/foreign-key#add-multiple-foreign-key-constraints-to-a-single-column). Also, it's now possible to drop the index on foreign key columns, or on the referenced columns, if another index exists on the same columns and fulfills [indexing requirements](https://www.cockroachlabs.com/docs/v20.1/foreign-key#rules-for-creating-foreign-keys).
-SQL | **`SELECT FOR UPDATE`** | The new [`SELECT FOR UPDATE`](https://www.cockroachlabs.com/docs/v20.1/select-for-update) statement lets you order transactions by controlling concurrent access to one or more rows of a table. It works by locking the rows returned by a selection query, such that other transactions attempting to `SELECT` the same data and then `UPDATE` the results of that selection are forced to wait for the transaction that locked the rows to finish. This prevents [transaction retries](https://www.cockroachlabs.com/docs/v20.1/transactions#transaction-retries) that would otherwise occur and, thus, leads to increased throughput and decreased tail latency for contended operations.
-SQL | **Nested Transactions and Savepoints** | CockroachDB now supports the nesting of transactions using savepoints. These [nested transactions](https://www.cockroachlabs.com/docs/v20.1/transactions#nested-transactions), also known as sub-transactions, can be rolled back without discarding the state of the entire surrounding transaction. This can be useful in applications that abstract database access using an application development framework or ORM. Different components of the application can operate on different sub-transactions without having to know about each other's internal operations, while trusting that the database will maintain isolation between sub-transactions and preserve data integrity. (See the second example after this table.)
-SQL | **Hash-Sharded Indexes** | For tables indexed on sequential keys, CockroachDB now offers [hash-sharded indexes](https://www.cockroachlabs.com/docs/v20.1/indexes#hash-sharded-indexes) to distribute sequential traffic uniformly across ranges, eliminating single-range hotspots and improving write performance on sequentially-keyed indexes at a small cost to read performance. This feature is currently [experimental](https://www.cockroachlabs.com/docs/v20.1/experimental-features). (See the third example after this table.)
-SQL | **Slow Query Log** | You can now enable a [slow query log](https://www.cockroachlabs.com/docs/v20.1/query-behavior-troubleshooting#using-the-slow-query-log) to record SQL queries whose service latency exceeds a specified threshold.
-SQL | **`EXPLAIN` Improvements** | The new [`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v20.1/explain-analyze#debug-option) option executes a query and generates a link to a ZIP file that contains the physical query plan, execution statistics, statement tracing, and other information about the query. Also, the `(DISTSQL, TYPES)` option on `EXPLAIN` and `EXPLAIN ANALYZE` includes the data types of the input columns in the physical plan.
-SQL | **Recursive Common Table Expressions** | CockroachDB now supports [common table expressions that contain subqueries that refer to their own output](https://www.cockroachlabs.com/docs/v20.1/common-table-expressions#recursive-common-table-expressions).
-SQL | **`TIMETZ` Data Type** | CockroachDB now supports the [`TIMETZ` variant](https://www.cockroachlabs.com/docs/v20.1/time#timetz) of the `TIME` data type for SQL standard compliance and increased compatibility with ORMs.
-SQL | **Precision in Time Values** | CockroachDB now supports precision levels from 0 (seconds) to 6 (microseconds) for [`TIME`/`TIMETZ`](https://www.cockroachlabs.com/docs/v20.1/time#precision) and [`INTERVAL`](https://www.cockroachlabs.com/docs/v20.1/interval#precision) values. Precision in time values specifies the number of fractional digits retained in the seconds field.
-SQL | **Vectorized Execution Improvements** | [Vectorized execution](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) now supports [hash joins](https://www.cockroachlabs.com/docs/v20.1/joins#hash-joins), [merge joins](https://www.cockroachlabs.com/docs/v20.1/joins#merge-joins), most [window functions](https://www.cockroachlabs.com/docs/v20.1/window-functions), as well as the [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp) data type in addition to several other previously [supported data types](https://www.cockroachlabs.com/docs/v20.1/data-types).
-SQL | **Column Families in Secondary Indexes** | [Secondary indexes](https://www.cockroachlabs.com/docs/v20.1/column-families) now respect the column family definitions applied to tables. When you define a secondary index, CockroachDB breaks the secondary index key-value pairs into column families, according to the family and stored column configurations.
-SQL | **Temporary Tables** | CockroachDB now supports session-scoped [temporary tables](https://www.cockroachlabs.com/docs/v20.1/temporary-tables), [views](https://www.cockroachlabs.com/docs/v20.1/views#temporary-views), and [sequences](https://www.cockroachlabs.com/docs/v20.1/create-sequence#temporary-sequences). Unlike persistent objects, temp objects can only be accessed from the session in which they were created, and they are dropped at the end of the session. This feature is currently [experimental](https://www.cockroachlabs.com/docs/v20.1/experimental-features).
-Dev Tools | **Expanded ORM Support** | CockroachDB now supports additional Postgres-compatible ORMs, including [Django](https://www.cockroachlabs.com/docs/v20.1/build-a-python-app-with-cockroachdb-django) and [peewee](https://www.cockroachlabs.com/docs/v20.1/install-client-drivers#peewee) for Python developers, and [jOOQ](https://www.cockroachlabs.com/docs/v20.1/build-a-java-app-with-cockroachdb-jooq) for Java developers.
-I/O | **Bulk Import Improvements** | The [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) and [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v20.1/import-into) statements now support bulk importing [from Avro files](https://www.cockroachlabs.com/docs/v20.1/migrate-from-avro). This makes it easier to migrate from systems like Spanner that export data in the Avro format. <br><br> Also, the new [`cockroach nodelocal upload`](https://www.cockroachlabs.com/docs/v20.1/cockroach-nodelocal-upload) command makes it easier to upload a file to a node's external IO directory for import from the node rather than from cloud storage. <br><br> Finally, [paused](https://www.cockroachlabs.com/docs/v20.1/pause-job) imports, when [resumed](https://www.cockroachlabs.com/docs/v20.1/resume-job), now continue from their internally recorded progress instead of starting over.
-Security | **RBAC Changes** | All [role-based access control (RBAC) features](https://www.cockroachlabs.com/docs/v20.1/authorization#roles) (`CREATE ROLE`, `ALTER ROLE`, `DROP ROLE`, `GRANT ROLE`, `REVOKE ROLE`) are now covered by the [BSL license](https://www.cockroachlabs.com/docs/v20.1/licensing-faqs) and available to non-enterprise users.
-Security | **Various Improvements** | Several security features have been added to CockroachDB v20.1, including the ability to [customize the mapping between TLS certificates and principals](https://www.cockroachlabs.com/docs/v20.1/create-security-certificates-openssl#step-2-create-the-certificate-and-key-pairs-for-nodes), to [name users/roles with periods](https://www.cockroachlabs.com/docs/v20.1/create-user#considerations) so as to reflect the structure of internet domain names, to [allow or disallow users to authenticate](https://www.cockroachlabs.com/docs/v20.1/create-user#set-login-privileges-for-a-user), and to [allow or disallow users to create, alter, and drop other users](https://www.cockroachlabs.com/docs/v20.1/create-user#allow-the-user-to-create-other-users). <br><br> Also, CockroachDB's support for the PostgreSQL Host-Based Authentication (HBA) configuration language, which enables sites to customize the principal/client address/authentication method matrix, has been extended and unified.
-CLI | **Demo Cluster Improvements** | Several features have been added to the [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) command, including the ability to start a demo cluster in secure mode using TLS certificates to encrypt network communication (via the `--insecure=false` flag), to return the client connection URLs for all nodes in a demo cluster (via the `demo ls` shell command), to shut down/restart/decommission/recommission individual nodes in a multi-node demo cluster (via the `demo shutdown|restart|decommission|recommission ` shell commands), and to prevent the loading of a temporary enterprise license (via the `--disable-demo-license` flag).
-UI | **Various Improvements** | The [**Network Latency**](https://www.cockroachlabs.com/docs/v20.1/admin-ui-network-latency-page) page of the Admin UI is now easier to access and has been redesigned to help you understand the round-trip latencies between all nodes in your cluster. <br><br> Also, the **Statement Details** page now allows you to write information about a SQL statement to a [diagnostics bundle](https://www.cockroachlabs.com/docs/v20.1/admin-ui-statements-page#diagnostics) that you can download. This bundle consists of a JSON file that contains a distributed trace of the SQL statement, a physical query plan, execution statistics, and other information about the query.
-Internals | **Various Improvements** | CockroachDB's storage layer now uses [protected timestamps](https://www.cockroachlabs.com/docs/v20.1/architecture/storage-layer#protected-timestamps) to ensure the safety of historical data while also enabling shorter [GC TTLs](https://www.cockroachlabs.com/docs/v20.1/configure-replication-zones#gc-ttlseconds). A shorter GC TTL means that fewer previous MVCC values are kept around. This can help lower query execution costs for workloads which update rows frequently throughout the day, since the [SQL layer](https://www.cockroachlabs.com/docs/v20.1/architecture/sql-layer) has to scan over previous MVCC values to find the current value of a row. <br><br> Also, CockroachDB's transaction layer now uses a [concurrency manager](https://www.cockroachlabs.com/docs/v20.1/architecture/transaction-layer#concurrency-manager) to sequence incoming requests and provide isolation between transactions that intend to perform conflicting operations.
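-
-A minimal sketch of an online primary key change (the `users` table and its columns are hypothetical):
-
-~~~ sql
-CREATE TABLE users (id INT PRIMARY KEY, email STRING NOT NULL UNIQUE);
-ALTER TABLE users ALTER PRIMARY KEY USING COLUMNS (email); -- online; the old key becomes a UNIQUE secondary index
-~~~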
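-
-A sketch of a nested transaction rolled back without discarding the outer transaction's work (the `accounts` table is hypothetical):
-
-~~~ sql
-BEGIN;
-INSERT INTO accounts VALUES (1, 100);
-SAVEPOINT child;             -- start a nested transaction
-INSERT INTO accounts VALUES (2, -50);
-ROLLBACK TO SAVEPOINT child; -- discards only the second insert
-COMMIT;                      -- the first insert commits
-~~~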
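-
-And a sketch of a hash-sharded index on a sequentially-keyed column (hypothetical schema; as an experimental feature, it is gated behind a session variable):
-
-~~~ sql
-SET experimental_enable_hash_sharded_indexes = true;
-CREATE TABLE events (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), ts TIMESTAMP NOT NULL);
-CREATE INDEX ON events (ts) USING HASH WITH BUCKET_COUNT = 8; -- spreads sequential writes across 8 shards
-~~~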
-
-
Enterprise features
-
-These features require an [enterprise license](https://www.cockroachlabs.com/docs/v20.1/enterprise-licensing). Register for a 30-day trial license [here](https://www.cockroachlabs.com/get-cockroachdb/enterprise/), or consider testing enterprise features locally using the [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) CLI command, which starts an in-memory CockroachDB cluster with a temporary enterprise license pre-loaded. [CockroachCloud clusters](https://cockroachlabs.cloud/) also include all enterprise features.
-
-Area | Feature | Description
------|---------|------------
-Recovery | **Full-cluster backup and restore** | CockroachDB's `BACKUP` feature now supports [backing up an entire cluster's data](https://www.cockroachlabs.com/docs/v20.1/backup-and-restore#full-backups), including all configuration and system information such as [user privileges](https://www.cockroachlabs.com/docs/v20.1/authorization#privileges), [zone configurations](https://www.cockroachlabs.com/docs/v20.1/configure-replication-zones), and [cluster settings](https://www.cockroachlabs.com/docs/v20.1/cluster-settings). In rare disaster recovery situations, CockroachDB's `RESTORE` feature can now [restore a cluster backup to a new cluster](https://www.cockroachlabs.com/docs/v20.1/restore#full-cluster). Restoring a cluster backup to an existing cluster is not supported.
-Recovery | **Encrypted backups** | CockroachDB now supports using an [encryption passphrase](https://www.cockroachlabs.com/docs/v20.1/backup-and-restore) to encrypt data in Enterprise `BACKUP` files and to decrypt the data upon `RESTORE`.
-SQL | **Improved follower reads** | [Follower reads](https://www.cockroachlabs.com/docs/v20.1/follower-reads) are now available for `AS OF SYSTEM TIME` queries at least 4.8 seconds in the past, a much shorter window than the previous 48 seconds.
-
-
Backward-incompatible changes
-
-Before [upgrading to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version), be sure to review the following backward-incompatible changes and adjust your application as necessary.
-
-- The `extract()` [built-in function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators) with sub-second arguments (millisecond, microsecond) is now Postgres-compatible and returns the total number of seconds in addition to sub-seconds instead of returning only sub-seconds (see the example after this list).
-
-- Casting intervals to integers and floats is now Postgres-compatible: a year is counted as 365.25 days (in seconds) instead of 365 days.
-
-- The combination of the [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) options `format=experimental_avro`, `envelope=key_only`, and `updated` is now rejected. This is because the use of `key_only` prevents any rows with updated fields from being emitted, which renders the `updated` option meaningless.
-
-- The [`cockroach init`](https://www.cockroachlabs.com/docs/v20.1/cockroach-init) CLI command now waits for server readiness and thus no longer fails when a mistaken server address is provided.
-
-- The `cockroach user` CLI command has been removed. It was previously deprecated in CockroachDB v19.2. Note that a v19.2 client (supporting `cockroach user`) can still operate user accounts in a v20.1 server.
-
-- CockroachDB now creates files without read permissions for the "others" group. Sites that automate file management (e.g., log collection) using multiple user accounts now must ensure that the CockroachDB server and the management tools running on the same system are part of a shared unix group.
-
-- The [`GRANT`](https://www.cockroachlabs.com/docs/v20.1/grant) and [`REVOKE`](https://www.cockroachlabs.com/docs/v20.1/revoke) statements now require that the requesting user already have the target privileges themselves. For example, `GRANT SELECT ON t TO foo` requires that the requesting user already have the `SELECT` privilege on `t`.
-
-- During an [upgrade to v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version), new [schema changes](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes) will be blocked and return an error, with the exception of [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v20.1/create-table) statements without foreign key references and no-op schema change statements that use `IF NOT EXISTS`. Ongoing schema changes will stop making progress, and it will not be possible to manipulate them via [`PAUSE JOB`](https://www.cockroachlabs.com/docs/v20.1/pause-job)/[`RESUME JOB`](https://www.cockroachlabs.com/docs/v20.1/resume-job)/[`CANCEL JOB`](https://www.cockroachlabs.com/docs/v20.1/cancel-job) statements. Once the upgrade has been finalized, ongoing schema changes will run to completion and new schema changes will be allowed.
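-
-The `extract()` and interval-cast changes above can be observed directly; the results shown are illustrative of the Postgres-compatible behavior:
-
-~~~ sql
-SELECT extract(millisecond FROM TIME '00:00:01.5'); -- now 1500 (seconds included); previously 500
-SELECT (INTERVAL '1 year')::INT;                    -- now 31557600 (365.25 days); previously 31536000
-~~~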
-
-
Known limitations
-
-For information about new and unresolved limitations in CockroachDB v20.1, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v20.1/known-limitations).
-
-
Education
-
-Area | Topic | Description
------|-------|------------
-Training | **Video Lessons on YouTube** | Added two [Cockroach University](https://university.cockroachlabs.com) playlists to YouTube, one with the entire set of video lessons from ["Getting Started with CockroachDB"](https://www.youtube.com/playlist?list=PL_QaflmEF2e8Prn7r7CIyBKsHPgsgNO_1), and the other featuring the first batch of video lessons from the upcoming course, ["CockroachDB for Python Developers"](https://www.youtube.com/playlist?list=PL_QaflmEF2e8ijr7gxCZHSKH9-Vl8Yf9q).
-Docs | **Developer Guide** | Added guidance on common tasks when building apps on CockroachDB, such as [installing Postgres clients](https://www.cockroachlabs.com/docs/v20.1/install-client-drivers); [connecting to the database](https://www.cockroachlabs.com/docs/v20.1/connect-to-the-database); effectively [inserting](https://www.cockroachlabs.com/docs/v20.1/insert-data), [querying](https://www.cockroachlabs.com/docs/v20.1/query-data), [updating](https://www.cockroachlabs.com/docs/v20.1/update-data), and [deleting](https://www.cockroachlabs.com/docs/v20.1/delete-data) data; [handling errors](https://www.cockroachlabs.com/docs/v20.1/error-handling-and-troubleshooting); and [making queries fast](https://www.cockroachlabs.com/docs/v20.1/make-queries-fast). For convenience, much of the guidance is offered across various popular languages (Java, Python, Go) in addition to straight SQL.
-Docs | **"Hello World" Repos** | Added several language-specific [GitHub repos](https://github.com/cockroachlabs?q=hello-world&type=&language=) with the simple starter applications featured in our ["Hello World" tutorials](https://www.cockroachlabs.com/docs/v20.1/hello-world-example-apps).
-Docs | **Multi-Region Sample App and Tutorial** | Added a full-stack, multi-region sample application ([GitHub repo](https://github.com/cockroachlabs/movr-flask)) with an [accompanying tutorial](https://www.cockroachlabs.com/docs/v20.1/multi-region-overview) on building a multi-region application on a multi-region CockroachCloud cluster. Also added a [video demonstration](https://www.youtube.com/playlist?list=PL_QaflmEF2e8o2heLyIt5iDUTgJE3EPkp) as a YouTube playlist.
-Docs | **Streaming Changefeeds to Snowflake Tutorial** | Added an [end-to-end tutorial](https://www.cockroachlabs.com/docs/cockroachcloud/stream-changefeed-to-snowflake-aws) on how to use an Enterprise changefeed to stream row-level changes from CockroachCloud to Snowflake, an online analytical processing (OLAP) database.
-Docs | **Improved Backup/Restore Docs** | Updated the backup/restore docs to better separate [broadly applicable guidance and best practices](https://www.cockroachlabs.com/docs/v20.1/backup-and-restore) from more advanced topics.
-Docs | **Release Support Policy** | Added a page explaining Cockroach Labs' [policy for supporting major releases of CockroachDB]({% link releases/release-support-policy.md %}), including the phases of support that each major release moves through, the currently supported releases, and an explanation of the [naming scheme]({% link releases/index.md %}#overview) used for CockroachDB.
diff --git a/src/current/_includes/releases/v20.1/v20.1.1.md b/src/current/_includes/releases/v20.1/v20.1.1.md
deleted file mode 100644
index aed4c12e947..00000000000
--- a/src/current/_includes/releases/v20.1/v20.1.1.md
+++ /dev/null
@@ -1,210 +0,0 @@
-
{{ include.release }}
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-This page lists additions and changes in v20.1.1 since v20.1.0.
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
Backward-incompatible changes
-
-- The copy of `system` and `crdb_internal` tables extracted by [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip) is now written using the `TSV` format (inside the .zip file), instead of an ASCII-art table. [#48094][#48094] {% comment %}doc{% endcomment %}
-- Updated the textual error and warning messages displayed by [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit). [#47692][#47692]
-- [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit) now prints progress details on its standard error stream, even when `--logtostderr` is not specified. Scripts that wish to ignore this output can redirect the standard error stream. [#47692][#47692] {% comment %}doc{% endcomment %}
-- CockroachDB v20.1 introduced an experimental new rule for the `--join` flag causing it to prefer SRV records, if present in DNS, to look up the peer nodes to join. However, this was found to cause disruption in certain deployments. To reduce this disruption and UX surprise, the feature is now gated behind a new command-line flag, `--experimental-dns-srv`, which must be explicitly passed to [`cockroach start`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start) to enable it. [#49129][#49129] {% comment %}doc{% endcomment %}
-- Added a new cluster setting, `server.shutdown.lease_transfer_wait`, that allows you to configure the server shutdown timeout period for transferring range leases to other nodes. Previously, the timeout period was not configurable and was set to 5 seconds, and the phase of server shutdown responsible for range lease transfers would give up after 10000 attempts of transferring replica leases away. The limit of 10000 attempts has been removed, so that now only the maximum duration `server.shutdown.lease_transfer_wait` applies. [#47692][#47692] {% comment %}doc{% endcomment %}
-
-
General changes
-
-- The [statement diagnostics](https://www.cockroachlabs.com/docs/v20.1/admin-ui-statements-page#diagnostics) bundle now contains a new file, `trace-jaeger.json`, that can be manually imported in Jaeger for visualization. [#47432][#47432] {% comment %}doc{% endcomment %}
-
-
Enterprise edition changes
-
-- Fixed a bug where the job ID of a lagging [changefeed](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) would be omitted, and instead it would be reported as sinkless. [#48562][#48562]
-
-
SQL language changes
-
-- The `pg_collation`, `pg_proc`, `pg_database`, and `pg_type` tables in the `pg_catalog` database no longer require privileges on any database in order for the data to be visible. [#48080][#48080], [#48765][#48765]
-- Histogram collection with [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v20.1/create-statistics) is no longer supported on columns with type `array`. Only row count, distinct count, and null count are collected for array-type columns (see the example after this list). [#48343][#48343] {% comment %}doc{% endcomment %}
-- [`ROLLBACK TO SAVEPOINT`](https://www.cockroachlabs.com/docs/v20.1/rollback-transaction) is no longer permitted after miscellaneous internal errors. [#48305][#48305] {% comment %}doc{% endcomment %}
-- Fixed an issue with optimizing subqueries involving [set operations](https://www.cockroachlabs.com/docs/v20.1/selection-queries) that could prevent queries from executing. [#48680][#48680]
-- CockroachDB now correctly reports the type length for the `char` type. [#48642][#48642]
-- The `RowDescription` message of the wire-level protocol now contains the table ID and column ID for each column in the result set. These values correspond to `pg_attribute.attrelid` and `pg_attribute.attnum`. If a result column does not refer to a simple table or view, these values will be zero, as they were before. The message also contains the type modifier for each column in the result set. This corresponds to `pg_attribute.atttypmod`. If it is not available, the value is `-1`, as it was before. [#48748][#48748], [#49087][#49087]
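-
-For example, explicitly requesting statistics on an array-type column now collects only the counts (hypothetical schema):
-
-~~~ sql
-CREATE TABLE docs (id INT PRIMARY KEY, tags STRING[]);
-CREATE STATISTICS tag_stats ON tags FROM docs; -- row count, distinct count, and null count only; no histogram
-~~~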
-
-
Command-line changes
-
-- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip) now tries multiple times to retrieve data using SQL if it encounters retry errors and skips over fully decommissioned nodes. It also supports two command-line parameters `--nodes` and `--exclude-nodes`. When specified, they control which nodes are inspected when gathering the data. This makes it possible to focus on a group of nodes of interest in a large cluster, or to exclude nodes that `cockroach debug zip` would have trouble reaching otherwise. Both flags accept a list of individual node IDs or ranges of node IDs, e.g., `--nodes=1,10,13-15`. [#48094][#48094] {% comment %}doc{% endcomment %}
-- Client commands such as [`cockroach init`](https://www.cockroachlabs.com/docs/v20.1/cockroach-init) and [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit) now support the `--cluster-name` and `--disable-cluster-name-verification` flags in order to support running them on clusters that have been configured to use a cluster name. Previously it was impossible to run such commands against nodes configured with the `--cluster-name` flag. [#48016][#48016] {% comment %}doc{% endcomment %}
-- It is now possible to drain a node without shutting down the process, using the `cockroach node drain` command. This makes it easier to integrate with service managers and orchestration: it is now safe to issue `cockroach node drain` and then separately stop the service via a process manager or orchestrator. Without this new mode, there was a risk of misconfiguring the service manager to auto-restart the node after it shuts down via `quit`, in a way that is surprising or unwanted. The new `node drain` command also recognizes the new `--drain-wait` flag. [#47692][#47692] {% comment %}doc{% endcomment %}
-- The time that [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit) waits client-side for the node to drain (remove existing clients and push range leases away) is now configurable via the command-line flag `--drain-wait`. Note that separate server-side timeouts also apply; check the `server.shutdown.*` cluster settings for details. [#47692][#47692] {% comment %}doc{% endcomment %}
-- The commands [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit) and `cockroach node drain` now report a "work remaining" metric on their standard error stream. The value reduces until it reaches `0` to indicate that the graceful shutdown has completed server-side. An operator can now rely on `cockroach node drain` to obtain confidence of a graceful shutdown prior to terminating the server process. [#47692][#47692] {% comment %}doc{% endcomment %}
-- The default value of the parameter `--drain-wait` for [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit) has been increased from 1 minute to 10 minutes, to give more time for nodes with thousands of ranges to migrate their leases away. [#47692][#47692] {% comment %}doc{% endcomment %}
-- Added support for `list cert` with certificates which require `--cert-principal-map` to pass validation. [#48177][#48177] {% comment %}doc{% endcomment %}
-- Added support for the `--cert-principal-map` flag in the [`cockroach cert`](https://www.cockroachlabs.com/docs/v20.1/cockroach-cert), [`cockroach sql`](https://www.cockroachlabs.com/docs/v20.1/cockroach-sql), [`cockroach init`](https://www.cockroachlabs.com/docs/v20.1/cockroach-init), and [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit) commands. [#48177][#48177] {% comment %}doc{% endcomment %}
-- Made `--storage-engine` sticky (i.e., resolve to the last used engine type when unspecified) even when specified stores are encrypted at rest. [#49073][#49073] {% comment %}doc{% endcomment %}
-
-
Admin UI changes
-
-- Fixed a bug where `Raft log too large` was reported incorrectly for replicas whose Raft log size cannot be trusted. [#48286][#48286]
-- Fixed a bug where a multi-node cluster without localities defined wouldn't be able to render the [**Network Latency**](https://www.cockroachlabs.com/docs/v20.1/admin-ui-network-latency-page) page. [#49191][#49191]
-- Fixed a bug where links to specific problem ranges had an incorrect path. Problem ranges are now linked correctly again. [#49188][#49188]
-
-
Bug fixes
-
-- Fixed a bug where vectorized queries on composite datatypes could sometimes return invalid data. [#48463][#48463]
-- Fixed a bug that could lead to data corruption or data loss if a replica was the source of a snapshot while concurrently being removed from the range, and certain specific conditions existed inside RocksDB. This scenario is rare but possible. [#48321][#48321]
-- Fixed a bug where the migration for ongoing schema change jobs would cause the node to panic with an "index out of bounds" error upon encountering a malformed table descriptor with no schema change mutation corresponding to the job to be migrated. [#48838][#48838]
-- Fixed an error where, instead of returning a parsing error for queries with `count(*)`, CockroachDB could incorrectly return no output (when the query was executed via the row-by-row engine). [#47485][#47485]
-- Fixed a bug where CockroachDB was incorrectly releasing memory used by hash aggregation. [#47518][#47518]
-- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip) can now successfully avoid out-of-memory errors when extracting very large `system` or `crdb_internal` tables. It will also report an error encountered while writing the end of the output ZIP file. [#48094][#48094]
-- Removed redundant metadata information for subqueries and postqueries in [`EXPLAIN (VERBOSE)`](https://www.cockroachlabs.com/docs/v20.1/explain) output. [#47975][#47975]
-- [`TRUNCATE`](https://www.cockroachlabs.com/docs/v20.1/truncate) can now run on temporary tables, fixing a bug in v20.1 where temporary tables could not be truncated, resulting in an error `unexpected value: nil`. [#48078][#48078]
-- Fixed a bug in which `(tuple).*` was only expanded to the first column in the tuple and the remaining elements were dropped. [#48290][#48290]
-- Fixed a case where [`PARTITION BY`](https://www.cockroachlabs.com/docs/v20.1/partition-by) and [`ORDER BY`](https://www.cockroachlabs.com/docs/v20.1/query-order) columns in window specifications were losing qualifications when used inside views. [#47715][#47715]
-- CockroachDB will no longer display a severe `internal error` upon certain privilege check failures via `pg_catalog` built-in functions. [#48242][#48242]
-- Fixed a bug where a read operation in a transaction with a past savepoint rollback would give an internal error for exceeding the maximum count of results requested. [#48165][#48165]
-- The distinction between delete jobs for columns and dependent jobs for deleting indexes, views, and sequences is now better defined. [#48259][#48259]
-- Fixed incorrect results that could occur when casting negative intervals or timestamps to type `decimal`. [#48345][#48345]
-- Fixed an error that occurred when statistics collection was explicitly requested on a column with type `array`. [#48343][#48343]
-- Fixed a nil pointer dereference in Pebble's block cache due to a rare "double free" of a block. [#48346][#48346]
-- Fixed Pebble to properly mark sstables that contain range tombstones for compaction. This matches the behavior when using RocksDB and ensures that space used for temporary storage is reclaimed quickly. [#48346][#48346]
-- Fixed a bug introduced in v20.1 that could cause multiple index GC jobs to be created for the same schema change in rare cases. [#47818][#47818]
-- Fixed a bug where CockroachDB could return an internal error when performing a query with `CASE`, `AND`, `OR` operators in some cases when it was executed via the vectorized engine. [#48072][#48072]
-- Fixed a rare bug where stats were not automatically generated for a new table. [#48027][#48027]
-- Fixed a panic that could occur when [`SHOW RANGES`](https://www.cockroachlabs.com/docs/v20.1/show-ranges) or [`SHOW RANGE FOR ROW`](https://www.cockroachlabs.com/docs/v20.1/show-range-for-row) was called with a virtual table. [#48347][#48347]
-- Made SRV resolution non-fatal for join list records to align with the standard and improve reliability of node startup. [#48349][#48349]
-- Fixed a rare bug causing a range to deadlock and all writes to the affected range to time out. [#48303][#48303]
-- Fixed a long-standing bug where HTTP requests would start to fail with error 503 "`transport: authentication handshake failed: io: read/write on closed pipe`" and never become possible again until the node is restarted. [#48456][#48456]
-- When processing `--join`, invalid SRV records with port number `0` are now properly ignored. [#48527][#48527]
-- Fixed a bug where `SHOW STATISTICS USING JSON` contained incorrect single quotes for strings with spaces inside histograms. [#48544][#48544]
-- Fixed a bug where the two settings `kv.range_split.by_load_enabled` and `kv.range_split.load_qps_threshold` were incorrectly marked as non-public in the output of `SHOW CLUSTER SETTINGS`. [#48585][#48585]
-- You can no longer [drop databases](https://www.cockroachlabs.com/docs/v20.1/drop-database) that contain tables which are currently offline due to [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) or [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore). Previously, dropping a database in this state could lead to a corrupted schema which prevented running backups. [#48606][#48606]
-- Fixed a bug preventing timestamps from being closed which could result in failed follower reads or failure to observe resolved timestamps in [changefeeds](https://www.cockroachlabs.com/docs/v20.1/change-data-capture). [#48682][#48682]
-- Fixed `debug encryption-status` and the Admin UI display of encryption status when using Pebble. [#47995][#47995]
-- CockroachDB now deletes the partially imported data after an [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) fails or is canceled. [#48605][#48605]
-- Fixed a bug where the [`SHOW CREATE`](https://www.cockroachlabs.com/docs/v20.1/show-create) statement would sometimes show a partitioning step for an index that has been dropped. [#48768][#48768]
-- Re-allowed `diagnostics.forced_sql_stat_reset.interval`, `diagnostics.sql_stat_reset.interval`, and `external.graphite.interval` to be set to their maximum values (24h, 24h, and 15min, respectively). [#48760][#48760]
-- Fixed a bug where CockroachDB could encounter an internal error when a query with a `LEFT SEMI` or `LEFT ANTI` join was performed via the vectorized execution engine. This was likely to occur only with the `vectorize=on` setting. [#48751][#48751]
-- Fixed a bug where running [`cockroach dump`](https://www.cockroachlabs.com/docs/v20.1/cockroach-dump) on tables with interleaved primary keys would erroneously include an extra `CREATE UNIQUE INDEX "primary" ... INTERLEAVE IN PARENT` statement in the dump output. This made it impossible to reimport dumped data without manual editing. [#48776][#48776]
-- Fixed a bug where running [`cockroach dump`](https://www.cockroachlabs.com/docs/v20.1/cockroach-dump) on a table with collated strings would omit the collation clause for the data insertion statements. [#48832][#48832]
-- CockroachDB now properly [restores tables](https://www.cockroachlabs.com/docs/v20.1/restore) that were backed up while they were in the middle of a schema change. [#48850][#48850]
-- Manually writing a `NULL` value into the `system.users` table for the `hashedPassword` column will no longer cause a server crash during user authentication. [#48836][#48836]
-- Fixed a bug where, in rare circumstances, CockroachDB could fail to open a store configured to use the Pebble storage engine. [#49080][#49080]
-- Fixed a bug where the Pebble storage engine could return duplicate keys, causing incorrect or inconsistent results. [#49080][#49080]
-- Fixed a bug where columns of a table could not be dropped after a primary key change. [#49088][#49088]
-- Fixed a bug which falsely indicated that `kv.closed_timestamp.max_behind_nanos` was almost always growing. [#48716][#48716]
-- Fixed a bug where changing the primary key of a table that had [partitioned indexes](https://www.cockroachlabs.com/docs/v20.1/multi-region-database) could cause indexes to lose their zone configurations. In particular, the indexes rebuilt as part of a primary key change would keep their partitions but lose the zone configurations attached to those partitions. [#48827][#48827]
-- Fixed costing of lookup join with a limit on top, resulting in better plans in some cases. [#49137][#49137]
-- Fixed a bug where [dropping a database](https://www.cockroachlabs.com/docs/v20.1/drop-database) would not drop the entry for its public schema in the `system.namespace` table. [#49139][#49139]
-- [`SHOW BACKUP SCHEMAS`](https://www.cockroachlabs.com/docs/v20.1/show-backup#show-a-backup-with-schemas) no longer shows table comments as they may be inaccurate. [#49130][#49130]
-- Fixed a memory leak which can affect [changefeeds](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) performing scans of large tables. [#49161][#49161]
-- Prevented namespace orphans (manifesting as `database "" not found` errors) when migrating from v19.2. [#49200][#49200]
-- Fixed a bug that caused query failures when using arrays in [window functions](https://www.cockroachlabs.com/docs/v20.1/window-functions). [#49238][#49238]
-
-
Performance improvements
-
-- Disabled the Go runtime block profile by default, which results in a small but measurable reduction in CPU usage. The block profile has diminished in utility with the advent of mutex contention profiles and is almost never used during performance investigations. [#48153][#48153]
-- The cleanup job that removes old indexes after a primary key change, and that blocks other schema changes from running, now starts immediately after the primary key swap is complete. This reduces the amount of waiting time before subsequent schema changes can run. [#47818][#47818]
-- Histograms used by the optimizer for query planning now have more accurate row counts per histogram bucket, particularly for columns that have many null values. The histograms also have improved cardinality estimates. This results in better plans in some cases. [#48626][#48626], [#48646][#48646]
-- Fixed a bug that caused a simple schema change to take more than 30s. [#48621][#48621]
-- Queries run via the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) are now processed faster, with most noticeable gains on the queries that output many rows. [#48732][#48732]
-- Reduced the time needed to run a [backup command](https://www.cockroachlabs.com/docs/v20.1/backup) when it builds on many previous incremental backups. [#48772][#48772]
-
-
Doc updates
-
-- Added a tutorial on [using Flyway with CockroachDB](https://www.cockroachlabs.com/docs/v20.1/flyway). [#7329][#7329]
-
-
-
-
Contributors
-
-This release includes 94 merged PRs by 27 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Drew Kimball (first-time contributor)
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
Backward-incompatible changes
-
-- The file names for heap profile dumps are now `memprof..`, where previously they were named `memprof..`. [#55260][#55260]
-
-
SQL language changes
-
-- Fixed a bug where temporary tables could be included in [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) commands. [#56475][#56475]
-
-
Command-line changes
-
-- CockroachDB now better attempts to retain heap profile dumps after a crash due to an out-of-memory error. [#55260][#55260]
-- CockroachDB now better attempts to retain memory statistics corresponding to increases in total memory usage, not just heap allocations. [#55260][#55260]
-
-
Bug fixes
-
-- Fixed a panic that could occur when running [`SHOW STATISTICS USING JSON`](https://www.cockroachlabs.com/docs/v20.1/show-statistics) for a table in which at least one of the columns contained all null values. [#56515][#56515]
-- The file names for generated `goroutine`, CPU, and memory profiles were sometimes incorrect, resulting in repeated warnings like `strconv.ParseUint: parsing "txt": invalid syntax` in log files. This has been corrected. [#55260][#55260]
-- Fixed a bug when the Pebble storage engine is used with [encryption-at-rest](https://www.cockroachlabs.com/docs/v20.1/encryption#encryption-at-rest-enterprise) that could result in data corruption in some fairly rare cases after a table drop, table truncate, or replica deletion. [#56680][#56680]
-- Previously, dumps of tables with a [`BIT`](https://www.cockroachlabs.com/docs/v20.1/bit) type column would result in an error. This column type is now supported. [#56452][#56452]
-- In v20.1.8, we attempted to fix `age`'s lack of normalization of `H:M:S` into the years, months, and days fields. However, that fix was itself broken for values greater than 1 month, and it also broke `a::timestamp(tz) - b::timestamp(tz)` operators. This has now been resolved. [#56769][#56769]
-- CockroachDB previously would crash when executing a query with an [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v20.1/as-of-system-time) clause that used a placeholder (note that this was not a prepared statement; it was an attempt to use an unspecified placeholder value in a non-prepared statement). This is now fixed. [#56781][#56781]
-- CockroachDB previously could encounter an internal error when [`DATE`](https://www.cockroachlabs.com/docs/v20.1/date), [`TIMESTAMP`](https://www.cockroachlabs.com/docs/v20.1/timestamp), or [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp) values that used year 1 BC were sent between nodes for execution. Additionally, previously it was not possible to specify `DATE`, `TIMESTAMP`, or `TIMESTAMPTZ` values with year 1 BC without using AD/BC notation. This is now fixed. [#56743][#56743]
-- Fixed an internal error that occurred in some cases when collecting a statement diagnostics bundle for a query that hits an error. [#56785][#56785]
-- Some boolean session variables would only accept quoted string (`"true"` or `"false"`) values. They now also accept unquoted `true` or `false` values (see the example after this list). [#56814][#56814]
-- Fixed a bug which would prevent the dropping of hash sharded indexes if they were added prior to other columns. [#55823][#55823]
-- Fixed a race condition in the [`tpcc`](https://www.cockroachlabs.com/docs/v20.1/performance-benchmarking-with-tpc-c-10-warehouses) workload with the `--scatter` flag where tables could be scattered multiple times or not at all. [#56979][#56979]
-- Previously, if a cluster backup was taken during a schema change, a cluster restore of that backup would create duplicates of the ongoing schema changes. This is now fixed. [#56450][#56450]
-- Fixed a case where attempting to start a second [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) to the same location while the first was running using passphrase-based encryption could overwrite the metadata required to decrypt it and thus render it unreadable. [#57025][#57025]
-- Fixed an internal error when using aggregates and window functions in an `ORDER BY` for a [`UNION` or `VALUES` clause](https://www.cockroachlabs.com/docs/v20.1/selection-queries). [#57522][#57522]
-- The [`CREATE TEMP TABLE AS`](https://www.cockroachlabs.com/docs/v20.1/temporary-tables) statement previously created a non-temporary table. Now it makes a temporary one. [#57550][#57550]
-- Fixed a bug where schema change jobs to add foreign keys to existing tables, via [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v20.1/alter-table), could sometimes not be successfully reverted (either due to being canceled or having failed). [#57810][#57810]
-- Fixed a bug where concurrent addition of a [foreign key constraint](https://www.cockroachlabs.com/docs/v20.1/foreign-key) and drop of a unique index could cause the foreign key constraint to be added with no unique constraint on the referenced columns. [#57810][#57810]
-- Fixed a bug where canceling [schema changes](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes) when there were multiple queued schema changes could result in future schema changes being stuck. [#55058][#55058]
-- Fixed a bug that could lead to canceled [schema change](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes) jobs ending in the `failed` rather than the `canceled` state. [#55058][#55058]
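-
-A sketch of the session-variable fix above; `enable_zigzag_join` is used purely as an illustrative boolean variable:
-
-~~~ sql
-SET enable_zigzag_join = 'true'; -- quoted form; previously the only accepted spelling for some variables
-SET enable_zigzag_join = true;   -- unquoted form; now also accepted
-~~~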
-
-
Performance improvements
-
-- Interactions between Raft heartbeats and the Raft `goroutine` pool scheduler are now more efficient and avoid excessive mutex contention. This was observed to prevent instability on large machines (32+ vCPU) in clusters with many ranges (50k+ per node). [#57009][#57009]
-
-
Backward-compatible change
-
-- The reserved, non-documented [cluster settings](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) `server.heap_profile.xxx` have been renamed to `server.mem_profile.xxx`. They now control collection of multiple sorts of memory profiles besides just Go heap allocations. [#55260][#55260]
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
SQL language changes
-
-- Fixed a bug in [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) where some unusual range boundaries in interleaved tables caused an error. [#58260][#58260]
-
-
Bug fixes
-
-- In v20.1.8, we attempted to fix `age`'s lack of normalization of `H:M:S` into the years, months, and days fields. However, that fix was itself broken for values greater than 1 month, and it also broke `a::timestamp(tz) - b::timestamp(tz)` operators. This has now been resolved. [#57956][#57956]
-- Fixed an assertion error caused by some DDL statements used in conjunction with common table expressions (`WITH`). [#57952][#57952]
-- Fixed a bug that caused temp tables to not be cleaned up after the associated session was closed. [#58167][#58167]
-- Added a safeguard against crashes while running `SHOW STATISTICS USING JSON`, which is used internally for statement diagnostics, and [`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v20.1/explain-analyze). [#58264][#58264]
-- Previously, CockroachDB could return non-deterministic output when querying the `information_schema.statistics` virtual table (internally used by the `SHOW INDEXES` command): the implicit columns of secondary indexes could appear in arbitrary order. This is now fixed, and the columns appear in the same order as they do in the primary index. [#58215][#58215]
-- Previously, CockroachDB could crash in some cases when performing a [`DELETE`](https://www.cockroachlabs.com/docs/v20.1/delete) operation after an alteration of the primary key. This is now fixed. The bug was introduced in v20.1. [#58267][#58267]
-- Fixed a panic in protobuf decoding. [#58861][#58861]
-- Fixed a bug that caused errors when accessing a tuple column (`tuple.column` syntax) of a tuple that could be statically determined to be null. [#58899][#58899]
-- Fixed an internal error involving string literals used as arrays. [#59066][#59066]
-- GC jobs now populate the `running_status` column for [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v20.1/show-jobs). Previously, they left the column empty; this bug had been present since v20.1. [#59138][#59138]
-- Fixed a bug in which some non-conflicting rows provided as input to an [`INSERT ... ON CONFLICT DO NOTHING`](https://www.cockroachlabs.com/docs/v20.1/insert) statement could be discarded and not inserted. This could happen when the table had one or more unique indexes in addition to the primary index, and some of the input rows conflicted with existing values in one or more of those unique indexes; the rows that did not conflict could then be erroneously discarded. This is now fixed (see the sketch below). [#59172][#59172]
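-
-The following is a minimal sketch of the `INSERT ... ON CONFLICT DO NOTHING` scenario above; the table, columns, and values are invented for illustration:
-
-~~~ sql
-CREATE TABLE t (k INT PRIMARY KEY, v INT UNIQUE);
-INSERT INTO t VALUES (1, 10);
-
---- Row (1, 10) conflicts on both the primary index and the unique index on v.
---- Previously, the non-conflicting row (2, 20) could be erroneously discarded;
---- it is now inserted as expected.
-INSERT INTO t VALUES (1, 10), (2, 20) ON CONFLICT DO NOTHING;
-~~~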
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
Bug fixes
-
-- Fixed a bug in URL handling of HTTP external storage paths on Windows. [#59268][#59268]
-- Fixed a bug where CockroachDB could encounter an internal error when executing queries with [`BYTES`](https://www.cockroachlabs.com/docs/v20.1/bytes) or [`STRING`](https://www.cockroachlabs.com/docs/v20.1/string) types via the [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). [#59257][#59257]
-- Fixed a bug where CockroachDB could crash when executing an [`ALTER INDEX ... SPLIT/UNSPLIT AT`](https://www.cockroachlabs.com/docs/v20.1/split-at) statement in which more values were provided than there are columns explicitly specified in the [index](https://www.cockroachlabs.com/docs/v20.1/indexes) (see the sketch below). [#59272][#59272]
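-
-As a hedged sketch of the `SPLIT AT` fix above (table and values invented): a secondary index implicitly stores the table's primary key columns, so a split point may legitimately supply more values than the index's explicit columns.
-
-~~~ sql
-CREATE TABLE t (a INT PRIMARY KEY, b INT, INDEX b_idx (b));
-
---- b_idx explicitly indexes only b, but implicitly suffixes the primary key a.
---- Supplying two values (b = 1, a = 2) previously crashed the node.
-ALTER INDEX t@b_idx SPLIT AT VALUES (1, 2);
-~~~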
-
-
-
-
Contributors
-
-This release includes 3 merged PRs by 3 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Cheng Jing (first-time contributor)
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
Bug fixes
-
-- Previously, if [`RELEASE SAVEPOINT cockroach_restart`](https://www.cockroachlabs.com/docs/v20.1/release-savepoint#commit-a-transaction-by-releasing-a-retry-savepoint) was followed by [`ROLLBACK`](https://www.cockroachlabs.com/docs/v20.1/rollback-transaction), the `sql.txn.rollback.count` metric would be incremented. This was incorrect, because the transaction had already committed. The metric is no longer incremented in this case (see the sketch after this list). [#60251][#60251]
-- Fixed a bug where an error in protecting a record could be incorrectly reported, preventing some backups of very large tables from succeeding. [#60961][#60961]
-- Fixed a bug where high-latency global clusters could sometimes fall behind in [resolving timestamps for changefeeds](https://www.cockroachlabs.com/docs/v20.1/create-changefeed#messages). [#60926][#60926]
-- Creating [interleaved](https://www.cockroachlabs.com/docs/v20.1/interleave-in-parent) partitioned indexes is now disallowed. Previously, the database would crash when trying to create one. Note that [interleaved tables will be deprecated altogether](https://www.cockroachlabs.com/docs/v20.2/interleave-in-parent#deprecation) in a future release. [#61423][#61423]
-- In the Advanced Debugging section of the Admin UI (DB Console), manually enqueueing a range to the garbage collection (GC) queue now properly respects the `SkipShouldQueue` option. This ensures that you can force the GC of a specific range. [#60746][#60746]
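-
-A minimal sketch of the `RELEASE SAVEPOINT` fix at the top of this list (the `accounts` table and its values are invented for illustration):
-
-~~~ sql
-BEGIN;
-SAVEPOINT cockroach_restart;
-INSERT INTO accounts VALUES (1, 100);
---- Releasing the retry savepoint commits the transaction.
-RELEASE SAVEPOINT cockroach_restart;
---- This ROLLBACK merely ends the already-committed transaction;
---- it no longer increments sql.txn.rollback.count.
-ROLLBACK;
-~~~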
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-
Bug fixes
-
-- Fixed a bug where some import failures would cause tables to stay `OFFLINE` when they should have been brought back to `PUBLIC`. [#61481][#61481]
-- Fixed a bug where an invalid tuple comparison using `ANY` was causing an internal error. CockroachDB now returns "unsupported comparison operator". [#61725][#61725]
-- Changed the behavior of the `kv.closed_timestamp.target_duration` [cluster setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) when set to 0. Previously, a value of 0 made [follower reads](https://www.cockroachlabs.com/docs/v20.1/follower-reads) more aggressive instead of disabling them. Setting `kv.closed_timestamp.target_duration` to 0 now disables routing requests to follower replicas (see the sketch after this list). [#62442][#62442]
-- Fixed a bug where [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) using `AS OF SYSTEM TIME` of tables that included [foreign key constraints](https://www.cockroachlabs.com/docs/v20.1/foreign-key) from backups created by v19.x or earlier would lead to malformed schema metadata. [#62493][#62493]
-- Fixed an internal error that could occur during planning when a query used the output of the `RETURNING` clause of an [`UPDATE`](https://www.cockroachlabs.com/docs/v20.1/update), and one or more of the columns in the `RETURNING` clause were from a table specified in the `FROM` clause of the `UPDATE` (i.e., not from the table being updated). [#62964][#62964]
-- Dropping a [foreign key](https://www.cockroachlabs.com/docs/v20.1/foreign-key) that was added in the same transaction no longer triggers an internal error. This bug has been present since at least v20.1. [#62881][#62881]
-- Fixed a bug where index backfill data might have been missed by [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) in incremental backups. [#63303][#63303]
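-
-A minimal usage sketch of the `kv.closed_timestamp.target_duration` change described in this list:
-
-~~~ sql
---- A zero target duration now disables routing requests to follower replicas
---- rather than making follower reads more aggressive.
-SET CLUSTER SETTING kv.closed_timestamp.target_duration = '0s';
-~~~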
-
-
Performance improvements
-
-- SQL statistics collection has been made more efficient by avoiding an accidental heap allocation per row for some schemas. [#58199][#58199]
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-
Bug fixes
-
-- Fixed a bug where [incremental cluster backups](https://www.cockroachlabs.com/docs/v20.1/backup-and-restore) may have missed data written to tables while they were `OFFLINE`. In practice this can happen if a [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) or [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) was running across incremental backups. [#63494][#63494]
-- Fixed a bug where [cluster restore](https://www.cockroachlabs.com/docs/v20.1/backup-and-restore) would sometimes (very rarely) fail after retrying. [#63773][#63773]
-- Fixed a bug where some writes performed by jobs while they were running may have been missed by the backup. [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) and [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) jobs are now restored as reverting so that they clean up after themselves. [#63773][#63773]
-- Fixed a rare issue that caused [replica divergence](https://www.cockroachlabs.com/docs/v20.1/architecture/replication-layer). When it occurred, the replica consistency checker would report it, typically within 24 hours of occurrence, and terminate the affected nodes. [#63475][#63475]
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-This page lists additions and changes in version v20.1.16 since version v20.1.15.
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-
Bug fixes
-
-- Fixed a correctness bug which caused [partitioned](https://www.cockroachlabs.com/docs/v20.1/partitioning) index scans to omit rows where the value of the first index column was `NULL`. This bug was present since v19.2.0. [#64050][#64050]
-- Fixed a bug where multiple concurrent invocations of [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v20.1/cockroach-debug-zip) could lead to cluster instability. This bug was present since CockroachDB v20.1. [#64086][#64086]
-- Previously, passwords in SQL statements in telemetry updates and crash reports were anonymized as `*****`. Passwords are now anonymized as `'*****'` so that the anonymized SQL statements do not result in parsing errors when executed (see the sketch after this list). [#64347][#64347]
-- Fixed a race condition where read-only requests during replica removal (e.g., during range merges or rebalancing) could be evaluated on the removed replica, returning an empty result. [#64377][#64377]
-- Fixed a bug where [encryption-at-rest](https://www.cockroachlabs.com/docs/v20.1/encryption#encryption-at-rest-enterprise) metadata was not synced and might become corrupted during a hard reset. [#64498][#64498]
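-
-A hypothetical sketch of the anonymization change above (the user name and password are invented):
-
-~~~ sql
---- Original statement:
-CREATE USER craig WITH PASSWORD 'hunter2';
---- Previously anonymized as the following, which does not parse,
---- because ***** is not a valid string literal:
----   CREATE USER craig WITH PASSWORD *****
---- Now anonymized as the following, which parses cleanly:
----   CREATE USER craig WITH PASSWORD '*****'
-~~~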
-
-
Performance improvements
-
-- The Raft processing goroutine pool's size is now capped at 96. This was observed to prevent instability on large machines (32+ vCPU) in clusters with many ranges (50k+ per node). [#64568][#64568]
-- The Raft scheduler now prioritizes the node liveness Range. This was observed to prevent instability on large machines (32+ vCPU) in clusters with many ranges (50k+ per node). [#64568][#64568]
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-
Bug fixes
-
-- Fixed a race condition where read-write requests during replica removal (for example, during range merges or rebalancing) could be evaluated on the removed replica. These cases would not result in data being written to persistent storage, but could result in errors that should not have been returned. [#64604][#64604]
-- Fixed a bug where users of OSS builds of CockroachDB would see "Page Not Found" when loading the DB Console. [#64126][#64126]
-
-
-
-
Contributors
-
-This release includes 3 merged PRs by 4 authors.
-We would like to thank the following contributor from the CockroachDB community:
-
-- Joshua M. Clulow (first-time contributor)
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-This page lists additions and changes in v20.1.2 since v20.1.1.
-
-- For a comprehensive summary of features in v20.1, see the [v20.1 GA release notes]({% link releases/v20.1.md %}#v20-1-0).
-- To upgrade to v20.1, see [Upgrade to CockroachDB v20.1](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version).
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
Bug fixes
-
-- Corrected the replicas count for table details in the Admin UI. [#49206][#49206]
-- The `rolcanlogin` value for roles is now correctly populated in `pg_roles` and `pg_catalog`. [#49622][#49622]
-- Fixed a rare bug in the Pebble storage engine that could lead to storage engine inconsistencies. [#49378][#49378]
-- Corrected how engine type is reported in bug reports when using [`cockroach demo`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo). [#49377][#49377]
-- Fixed a bug where [`cockroach quit`](https://www.cockroachlabs.com/docs/v20.1/cockroach-quit) would not proceed to perform a hard shutdown when the value passed to `--drain-wait` was very small, but non-zero. This bug existed since v19.1.9, v19.2.7 and v20.1.1. [#49363][#49363]
-- Fixed a bug where [`demo node restart`](https://www.cockroachlabs.com/docs/v20.1/cockroach-demo) would not work due to an invalid certificate directory. [#49390][#49390]
-- Fixed some benign errors that were being reported as unexpected internal errors by the vectorized execution engine. [#49534][#49534]
-- Fixed a rare bug in the Pebble storage engine where keys were being returned out-of-order from large sstable files. [#49602][#49602]
-- When run via the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution), queries with a hash router in the DistSQL plan no longer return an internal error or incorrect results. [#49624][#49624]
-- When run via the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution), queries that have columns of the [`BYTES`](https://www.cockroachlabs.com/docs/v20.1/bytes) type in the output no longer result in an internal error. [#49384][#49384]
-- CockroachDB no longer leaks file descriptors during [GSS authentication](https://www.cockroachlabs.com/docs/v20.1/gssapi_authentication). [#49614][#49614]
-- Attempting to perform a full cluster [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) on a backup that did not contain any user data no longer fails. [#49745][#49745]
-- Abandoned intents due to failed transaction coordinators are now cleaned up much faster. This resolves a regression in v20.1.0 compared to prior releases. [#49835][#49835]
-- Fixed the descriptions for `--socket-dir` and `--socket` in the CLI help. They were incorrect since v20.1.0. [#49906][#49906]
-- Adjusted Pebble's out-of-memory error behavior to match that of the Go runtime in order to make the condition more obvious. [#49874][#49874]
-- Performing incremental backups with revision history on a database (or full cluster) no longer returns an error when a table in the database was dropped and other tables were later created. [#49925][#49925]
-- Fixed an internal planning error for recursive CTEs (`WITH RECURSIVE` expressions) in which the left side of the `UNION ALL` query used in the CTE definition produced zero rows. [#49964][#49964]
-
-
Doc updates
-
-- Added a [CockroachCloud Quickstart](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) on creating and connecting to a 30-day free CockroachCloud cluster and running your first query. [#7454][#7454]
-- Updated the [Active Record tutorial](https://www.cockroachlabs.com/docs/v20.1/build-a-ruby-app-with-cockroachdb-activerecord) to use a new CockroachDB adapter version. [#7480][#7480]
-- Changed instances of "whitelist"/"blacklist" to "allowlist"/"blocklist" throughout the documentation. [#7479][#7479]
-- Updated all mentions of `range_min_size` and `range_max_size` to use the new default values of `134217728` and `536870912`, respectively. [#7449][#7449]
-- Updated the [hardware storage recommendations](https://www.cockroachlabs.com/docs/v20.1/recommended-production-settings#storage). [#7514][#7514]
-- Revised the node [decommissioning guidance](https://www.cockroachlabs.com/docs/v20.1/remove-nodes). [#7304][#7304]
-
-
-
-- HTTP endpoints beginning with `/debug/` now require a valid [`admin`](https://www.cockroachlabs.com/docs/v20.1/authorization) login session. [#50487][#50487]
-
-
Enterprise edition changes
-
-- [Full cluster restore](https://www.cockroachlabs.com/docs/v20.1/restore#full-cluster) is now more resilient to transient transaction retry errors during restore. [#50004][#50004]
-- The default flush interval for [changefeeds](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) that do not specify a `RESOLVED` option is now 5s instead of 200ms to more gracefully handle higher-latency sinks. [#50251][#50251]
-
-
SQL language changes
-
-- Previously, `infinity` evaluated to a negative timestamp, i.e., "-292277022365-05-08T08:17:07Z". It now evaluates to the maximum supported timestamp in PostgreSQL that is not infinity; likewise, `-infinity` evaluates to the smallest supported value. Note that this currently does not behave exactly like `infinity` in PostgreSQL (this is a work in progress and may be resolved later). [#50365][#50365]
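-
-A sketch of the new behavior; the exact boundary value shown is an assumption based on PostgreSQL's supported timestamp range:
-
-~~~ sql
-SELECT 'infinity'::TIMESTAMPTZ;
---- now the maximum supported non-infinite timestamp
---- (approximately 294276-12-31 23:59:59.999999+00:00)
-SELECT '-infinity'::TIMESTAMPTZ;
---- now the minimum supported timestamp
-~~~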
-
-
Bug fixes
-
-- Previously, `extract(epoch from timestamptz)` from a session time zone not in UTC would return a value which was incorrectly offset by the session time zone. This is now fixed. [#50075][#50075]
-- Previously, the parallel importer could get stuck due to a race between emitted import batches and checking for context cancellation (either due to an unforeseen error or an explicit context cancellation). This is now fixed. [#50089][#50089]
-- Previously, using separate groups for the producer and consumer could lead to a situation where the consumer would exit (due to an error or explicit context cancellation) without the producer realizing it, leading to a deadlock. The producer and consumer are now correctly linked during data import. [#50089][#50089]
-- Casting to width-limited strings now works correctly for strings containing Unicode characters. [#50159][#50159]
-- Fixed some cases in which casting a string to a width-limited string array was not truncating the string (see the sketch after this list). [#50168][#50168]
-- Fixed a bug in which restarting CockroachDB with the Pebble storage engine after a crash during write-ahead logging could, in some rare cases, return an "unexpected EOF" error. [#50282][#50282]
-- Previously, the [Admin UI Statements page](https://www.cockroachlabs.com/docs/v20.1/admin-ui-statements-page) was incorrectly displaying information about whether or not a statement was distributed (it was always `Yes`). This is now fixed. [#50347][#50347]
-- Fixed a RocksDB bug that could result in inconsistencies in rare circumstances. [#50397][#50397]
-- Fixed a bug that broke the data distribution [Advanced Debug page](https://www.cockroachlabs.com/docs/v20.1/admin-ui-debug-pages) in the Admin UI on clusters that had upgraded from 19.2 to 20.1. [#49987][#49987]
-- Previously, when a [changefeed](https://www.cockroachlabs.com/docs/v20.1/change-data-capture) would fail to set up its flows due to a node draining, the changefeed would be marked as failed. These errors are now retryable. [#50088][#50088]
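-
-A minimal sketch of the two string-cast fixes above (values invented; the truncation results shown are the expected PostgreSQL-compatible behavior):
-
-~~~ sql
---- Width now counts characters rather than bytes:
-SELECT 'héllo'::CHAR(3);      -- 'hél'
---- Casting a string to a width-limited string array now truncates each element:
-SELECT '{abcdef}'::CHAR(3)[]; -- {abc}
-~~~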
-
-
Performance improvements
-
-- CockroachDB now optimizes the reading of files when doing backups and storage-level compactions. This should deliver a performance improvement for some read-heavy operations on an IOPS-constrained device. [#50105][#50105]
-- Limited [`SELECT`](https://www.cockroachlabs.com/docs/v20.1/select-clause) statements now do a better job avoiding unnecessary contention with [`UPDATE`](https://www.cockroachlabs.com/docs/v20.1/update) and [`SELECT FOR UPDATE`](https://www.cockroachlabs.com/docs/v20.1/select-for-update) statements. [#50119][#50119]
-- Improved the [optimizer](https://www.cockroachlabs.com/docs/v19.2/cost-based-optimizer)'s estimation of the selectivity of some filters involving a disjunction (`OR`) of predicates over multiple columns. This results in more accurate cardinality estimation and enables the optimizer to choose better query plans in some cases. [#50470][#50470]
-
-
Build changes
-
-- Release Docker images are now built on Debian 9.12. [#50482][#50482]
-
-
Doc updates
-
-- Updated guidance on [node decommissioning](https://www.cockroachlabs.com/docs/v20.1/remove-nodes). [#7304][#7304]
-- Added node density guidance to the [Production Checklist](https://www.cockroachlabs.com/docs/v20.1/recommended-production-settings#node-density-testing-configuration). [#7514][#7514]
-- Renamed "whitelist/blacklist" terminology to "allowlist/blocklist". [#7535][#7535]
-- Updated the Releases navigation in the sidebar to expose the latest Production and Testing releases. [#7550][#7550]
-- Fixed scrollbar visibility on Chrome. [#7487][#7487]
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-CockroachDB introduced a critical bug in the v20.1.4 release that affects [`UPSERT`](https://www.cockroachlabs.com/docs/v20.1/upsert) and [`INSERT … ON CONFLICT DO UPDATE SET x = excluded.x`](https://www.cockroachlabs.com/docs/v20.1/insert#on-conflict-clause) statements involving more than 10,000 rows. All deployments running CockroachDB v20.1.4 and v20.1.5 are affected. A fix is included in [v20.1.6]({% link releases/v20.1.md %}#v20-1-6).
-
-For more information, see [Technical Advisory 54418](https://www.cockroachlabs.com/docs/advisories/a54418).
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
General changes
-
-- Links in error messages that point to unimplemented issues now use the Cockroach Labs redirect/short-link server. [#50310][#50310]
-- [Schema changes](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes) are now logged in greater detail. [#50373][#50373]
-
-
Enterprise edition changes
-
-- [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) now has a new option, `skip_missing_sequence_owners`, which must be supplied when restoring only a table that owned a sequence or only a sequence that was owned by a table (see the sketch below). Additionally, a bug causing ownership relationships to not be remapped after a restore has been fixed. [#51629][#51629] {% comment %}doc{% endcomment %}
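-
-A hypothetical usage sketch (the backup location and table name are invented):
-
-~~~ sql
---- Restore only a table that owned a sequence, without the sequence itself:
-RESTORE TABLE db.owner_table FROM 'nodelocal:///backup' WITH skip_missing_sequence_owners;
-~~~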
-
-
Command-line changes
-
-- The new `statement-diag` [`cockroach` command](https://www.cockroachlabs.com/docs/v20.1/cockroach-commands) can now be used to manage statement diagnostics. [#51229][#51229] {% comment %}doc{% endcomment %}
-- The `statement-diag` command now shows all times in UTC. [#51457][#51457] {% comment %}doc{% endcomment %}
-
-
Bug fixes
-
-- Fixed a bug affecting some [`DROP DATABASE`](https://www.cockroachlabs.com/docs/v20.1/drop-database) schema changes where multiple GC jobs were created, causing the GC job for the database to fail. GC jobs no longer fail when a table descriptor that has already been deleted by a different GC job is not found. [#50556][#50556]
-- Previously, if a full cluster [restore](https://www.cockroachlabs.com/docs/v20.1/restore) failed while restoring the system table data, it would not clean up after itself properly and would leave some [temporary tables](https://www.cockroachlabs.com/docs/v20.1/temporary-tables) public and not dropped. This bug has been fixed. [#50209][#50209]
-- Fixed a bug causing a cluster restore to fail when the largest descriptor in the backup was a database. This was typically seen when the last action in backing up a cluster was a [database creation](https://www.cockroachlabs.com/docs/v20.1/create-database). [#50817][#50817]
-- Cluster backup would previously appear as [`BACKUP TABLE TO`](https://www.cockroachlabs.com/docs/v20.1/backup) rather than `BACKUP TO` in the [jobs table](https://www.cockroachlabs.com/docs/v20.1/show-jobs). This bug has been fixed. [#50818][#50818]
-- Fixed a bug where a badly timed power outage or system crash could cause an error to be reported upon process restart. [#50847][#50847]
-- Some `pg_catalog` queries that previously returned an error like "`crdb_internal_vtable_pk` column not allowed" now work again. [#50843][#50843]
-- Fixed "column not in input" internal error in some corner cases. [#50859][#50859]
-- Fixed a rare bug causing a multi-range [`SELECT FOR UPDATE`](https://www.cockroachlabs.com/docs/v20.1/select-for-update) statement containing an `IN` clause to fail to observe a consistent snapshot and violate serializability. [#50816][#50816]
-- Fixed regression where [granting privileges](https://www.cockroachlabs.com/docs/v20.1/grant) and [dropping objects](https://www.cockroachlabs.com/docs/v20.1/drop-table) would be slow when performed on a large number of objects due to unnecessary queries for looking up jobs in the `system.jobs` table. Previously, CockroachDB executed a quadratic number of queries based on the number of objects. CockroachDB now executes a linear number of queries based on the number of objects, which significantly improves the speed of dropping multiple objects or granting multiple privileges to a user. [#50923][#50923]
-- Previously, CockroachDB could crash when internal memory accounting hit a discrepancy. Now it will report an error instead. [#51014][#51014]
-- Improved support for large statement diagnostic bundles. [#51031][#51031]
-- CockroachDB now prevents spurious "SimpleQuery not allowed while in extended protocol mode" errors. [#51249][#51249]
-- [Renaming](https://www.cockroachlabs.com/docs/v20.1/rename-table) a [temporary table](https://www.cockroachlabs.com/docs/v20.1/temporary-tables) no longer converts it to a persistent table. The table remains temporary after a rename. This patch also prevents users from converting a temporary table to a persistent table by renaming the table with a fully-qualified name and a schema referring to `public` (see the sketch after this list). [#51309][#51309]
-- Fixed incorrect results in some cases involving [joins](https://www.cockroachlabs.com/docs/v20.1/joins) on [interleaved tables](https://www.cockroachlabs.com/docs/v20.1/interleave-in-parent) with [limits](https://www.cockroachlabs.com/docs/v20.1/limit-offset). [#51432][#51432]
-- [`cockroach dump`](https://www.cockroachlabs.com/docs/v20.1/cockroach-dump) no longer errors out when dumping [temporary tables](https://www.cockroachlabs.com/docs/v20.1/temporary-tables), [views](https://www.cockroachlabs.com/docs/v20.1/views#temporary-views), or [sequences](https://www.cockroachlabs.com/docs/v20.1/create-sequence#temporary-sequences). It either ignores them or throws an informative error if the temporary object is explicitly requested to be dumped via the [CLI](https://www.cockroachlabs.com/docs/v20.1/cockroach-commands). [#51457][#51457]
-- Fixed a bug causing `cockroach dump` to improperly escape quotes within table comments. [#51510][#51510]
-- Fixed a bug causing `cockroach dump` to not emit a correct statement for comments on [indexes](https://www.cockroachlabs.com/docs/v20.1/indexes). [#51510][#51510]
-- There is a known issue where [`BACKUP`s](https://www.cockroachlabs.com/docs/v20.1/backup) may get stuck when nearly completed. When this happens, the stuck job prevents garbage collection of old data from the targets being backed up until the job is canceled. This change stops the garbage build-up while the `BACKUP` is stuck. [#51519][#51519]
-- Previously, CockroachDB could hit an internal error when executing `regexp_replace` [builtin](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators). This bug has been fixed. [#51347][#51347]
-- Previously, CockroachDB could hit a "command is too large" error when performing [`UPSERT`](https://www.cockroachlabs.com/docs/v20.1/upsert) operations with many values. This bug has been fixed. [#51626][#51626]
-- Fixed a bug that prevented a table from being [dropped](https://www.cockroachlabs.com/docs/v20.1/drop-table) if a user created a [sequence](https://www.cockroachlabs.com/docs/v20.1/create-sequence) owned by the table's column and then dropped the sequence. [#51629][#51629]
-- [`DROP DATABASE CASCADE`](https://www.cockroachlabs.com/docs/v20.1/drop-database) now works as expected even when the database has a sequence with an owner in it. [#51629][#51629]
-- Fixed a bug causing descriptors to be in an invalid state due to ownership issues. [#51629][#51629]
-- Previously, orphaned `system.namespace/system.descriptor` entries were left if a `DROP DATABASE CASCADE` was issued, and the database contained dependency relations. For example, if the database included a [view](https://www.cockroachlabs.com/docs/v20.1/views) that depended on a table in the database, dropping the database would result in an orphaned entry for the view. This bug is now fixed, and cleanup happens as expected. [#51895][#51895]
-- CockroachDB now returns proper error messages for [index creation statements](https://www.cockroachlabs.com/docs/v20.1/create-index) that use a column that does not exist. [#51892][#51892]
-- Fixed a bug preventing `NULL` index members from being added to [hash-sharded indexes](https://www.cockroachlabs.com/docs/v20.1/indexes#hash-sharded-indexes). [#51906][#51906]
-- In earlier testing releases, columns that were members of [hash-sharded indexes](https://www.cockroachlabs.com/docs/v20.1/indexes#hash-sharded-indexes) could not be renamed; this limitation has been removed. Indexes created in prior releases will need to [be dropped](https://www.cockroachlabs.com/docs/v20.1/drop-index) and [recreated](https://www.cockroachlabs.com/docs/v20.1/create-index) to resolve this limitation. [#51906][#51906]
-- It is no longer possible for rapid range lease movement to trigger a rare assertion failure under contended workloads. The assertion contained the text: "discovered lock by different transaction than existing lock". [#51869][#51869]
-- Fixed a bug in the Pebble storage engine that, in rare circumstances, could construct a corrupted store, resulting in a node crash. [#51915][#51915]
-- Fixed a bug causing traces collected through the `sql.trace.txn.enable_threshold` setting to be incomplete sometimes. [#51845][#51845]
-- Increased the robustness of [restore](https://www.cockroachlabs.com/docs/v20.1/restore) against descriptors which may be in an unexpected state. [#51925][#51925]
-- Previously, CockroachDB could encounter benign internal "context canceled" errors when queries were executed by [the vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). These errors no longer occur. [#51933][#51933]
-- Fixed a bug causing [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) jobs to block when finished backing up data. [#52003][#52003]
-- Fixed a bug causing [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) jobs to sometimes block at the end of the job when sending its results back if the connection that started the job disconnected. [#52003][#52003]
-- Fixed a bug causing CockroachDB to crash on some queries with [merge joins](https://www.cockroachlabs.com/docs/v20.1/joins#merge-joins). [#52046][#52046]
-- An unknown condition previously caused CockroachDB to crash with the message "committed txn with writeTooOld err". This condition no longer crashes a node. Instead, an error message is printed to the logs asking for help in the investigation. [#51843][#51843]
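-
-A minimal sketch of the temporary-table rename fix from earlier in this list (assuming temporary tables are enabled via the experimental session variable):
-
-~~~ sql
-SET experimental_enable_temp_tables = 'on';
-CREATE TEMP TABLE t (a INT);
---- t remains a temporary table after the rename:
-ALTER TABLE t RENAME TO u;
-~~~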
-
-
Performance improvements
-
-- Introduced a new `server.consistency_check.max_rate` [setting](https://www.cockroachlabs.com/docs/v20.1/cluster-settings), expressed in bytes per second, to throttle the rate at which CockroachDB scans through the disk to perform a consistency check. This control is necessary to ensure smooth performance on a cluster with large node sizes, in the 10TB+ range. [#50066][#50066] {% comment %}doc{% endcomment %}
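-
-A minimal usage sketch; the value shown is an arbitrary example, not a recommendation:
-
-~~~ sql
---- The setting takes a byte size, interpreted as bytes per second.
-SET CLUSTER SETTING server.consistency_check.max_rate = '8MiB';
-~~~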
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a58932.md %}
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-CockroachDB introduced a critical bug in the [v20.1.4 release]({% link releases/v20.1.md %}#v20-1-4) that affects [`UPSERT`](https://www.cockroachlabs.com/docs/v20.1/upsert) and [`INSERT … ON CONFLICT DO UPDATE SET x = excluded.x`](https://www.cockroachlabs.com/docs/v20.1/insert#on-conflict-clause) statements involving more than 10,000 rows. All deployments running CockroachDB v20.1.4 and v20.1.5 are affected. A fix is included in [v20.1.6]({% link releases/v20.1.md %}#v20-1-6).
-
-For more information, see [Technical Advisory 54418](https://www.cockroachlabs.com/docs/advisories/a54418).
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-{% include /v20.1/alerts/warning-a63162.md %}
-{{site.data.alerts.end}}
-
-
SQL language changes
-
-- Reduced memory used by table scans containing JSON data. [#53318][#53318]
-
-
Bug fixes
-
-- Fixed an internal error that could occur when an [aggregate function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators#aggregate-functions) argument contained [a correlated subquery](https://www.cockroachlabs.com/docs/v20.1/subqueries#correlated-subqueries) with another aggregate function referencing the outer scope. This now returns an appropriate user-friendly error, "aggregate function calls cannot be nested". [#52142][#52142]
-- Previously, subtracting months from a [`TIMESTAMP`](https://www.cockroachlabs.com/docs/v20.1/timestamp)/[`DATE`](https://www.cockroachlabs.com/docs/v20.1/date)/[`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v20.1/timestamp) whose day-of-month value is greater than 28 could subtract an additional year. This bug is now fixed (see the sketch after this list). [#52156][#52156]
-- Previously, CockroachDB could return incorrect results on queries that encountered `ReadWithinUncertaintyInterval` errors. This bug is now fixed. [#52045][#52045]
-- Fixed instances of slow plans for prepared queries involving [CTEs](https://www.cockroachlabs.com/docs/v20.1/common-table-expressions) or foreign key checks. [#52205][#52205]
-- Large write requests no longer have a chance of erroneously throwing a "transaction with sequence has a different value" error. [#52267][#52267]
-- Type OIDs in the result metadata were incorrect for the `bit`, `bpchar`, `char(n)`, and `varchar(n)` types, and the corresponding array types. They are now correct. [#52351][#52351]
-- CockroachDB now prevents deadlocks on connection close with an open user transaction and [temporary tables](https://www.cockroachlabs.com/docs/v20.1/temporary-tables). [#52326][#52326]
-- Fixed a bug that could prevent schema changes for up to 5 minutes when using the `COPY` protocol. [#52455][#52455]
-- Executing a large number of statements in a transaction without committing could previously crash a CockroachDB server. This bug is now fixed. [#52402][#52402]
-- Fixed a bug causing the temporary object cleaner to get stuck trying to remove objects that it mistakenly thought were temporary. Note that no persistent data was deleted; the temporary object cleaner simply returned an error because it thought certain persistent data was temporary. [#52662][#52662]
-- Previously, CockroachDB would erroneously restart the execution of empty, unclosed portals after they had been fully exhausted. This bug is now fixed. [#52443][#52443]
-- Fixed a bug causing the Google Cloud API client used by [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup), [`RESTORE`](https://www.cockroachlabs.com/docs/v20.1/restore) and [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import) to leak memory when interacting with Google Cloud Storage. [#53229][#53229]
-- CockroachDB no longer displays a value for `gc.ttlseconds` if not set. [#52813][#52813]
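-
-A sketch of the month-subtraction fix from earlier in this list (dates invented; the result shown is the expected PostgreSQL-compatible clamping):
-
-~~~ sql
-SELECT '2020-03-30'::TIMESTAMP - '1 month'::INTERVAL;
---- expected: 2020-02-29 00:00:00, clamped to the end of February
---- (previously the result could land a full year earlier)
-~~~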
-
-
Performance improvements
-
-- Queries no longer block during planning if cached table statistics have become stale and the new statistics have not yet been loaded. Instead, the stale statistics are used for planning until the new statistics have been loaded. This improves performance because it prevents latency spikes that may occur if there is a delay in loading the new statistics. [#52191][#52191]
-
-
-
-- The concurrency of the evaluation of [`UNION ALL`](https://www.cockroachlabs.com/docs/v20.1/selection-queries#union-combine-two-queries) queries has been reduced. Previously, such queries could crash a server (in extreme cases, due to memory shortage). That bug is now fixed, at the expense of a possible minor reduction in performance. [#53444][#53444]
-
-
Bug fixes
-
-- The [cluster Node Map](https://www.cockroachlabs.com/docs/v20.1/admin-ui-cluster-overview-page#node-map-enterprise) and the [debug page](https://www.cockroachlabs.com/docs/v20.1/admin-ui-debug-pages) for cluster locality reports are now again available to non-admin users. [#53331][#53331]
-- Previously, CockroachDB could return incorrect results when performing `LEFT ANTI` [hash joins](https://www.cockroachlabs.com/docs/v20.1/joins#hash-joins) when the right equality columns formed a key and the query was executed with the [vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). This bug has been fixed. [#53346][#53346]
-- Fixed a rare internal error related to [foreign key checks](https://www.cockroachlabs.com/docs/v20.1/foreign-key). [#53648][#53648]
-- CockroachDB could previously crash when evaluating queries containing [window functions](https://www.cockroachlabs.com/docs/v20.1/window-functions) with the `GROUPS` framing mode when a `FOLLOWING` boundary was used with an offset so large that it could result in an integer overflow. This is now fixed (see the sketch after this list). [#53755][#53755]
-- Fixed the "no binding for WithID" internal error when using [`WITH RECURSIVE`](https://www.cockroachlabs.com/docs/v20.1/common-table-expressions#recursive-common-table-expressions) in queries with placeholders. [#54037][#54037]
-- A change in v20.1 caused a certain class of bulk [`UPDATE`](https://www.cockroachlabs.com/docs/v20.1/update) and [`DELETE`](https://www.cockroachlabs.com/docs/v20.1/delete) statements to hang indefinitely if run in an implicit transaction. We now break up these statements to avoid starvation and prevent them from hanging indefinitely. [#53561][#53561]
-- Fixed a bug that could cause [garbage collection](https://www.cockroachlabs.com/docs/v20.1/architecture/storage-layer#garbage-collection) jobs for tables dropped as part of a [`DROP DATABASE CASCADE`](https://www.cockroachlabs.com/docs/v20.1/drop-database) to never complete. [#54129][#54129]
-- Fixed a bug that caused a crash when using a `RANGE`-mode [window function](https://www.cockroachlabs.com/docs/v20.1/window-functions) with an offset (e.g., `OVER (PARTITION BY b ORDER BY a RANGE 1 PRECEDING)`). [#54075][#54075]
-- Fixed a bug that could cause the asynchronous migration to upgrade jobs from v19.2 to fail to complete and keep retrying indefinitely upon encountering a dropped database where some, but not all, of the tables have already been cleaned up. This bug can only occur if an upgrade to v20.1 happened while a database was in the process of being [dropped](https://www.cockroachlabs.com/docs/v20.1/drop-database) or a set of tables was being [truncated](https://www.cockroachlabs.com/docs/v20.1/truncate). [#51176][#51176]
-- Asynchronous schema change migrations now mark a job as failed instead of retrying indefinitely when a descriptor referenced by a schema change job does not exist. [#51176][#51176]
-- Fixed a potential race condition in the schema change job migration from v19.2 that could cause spurious errors and retries due to the wrong transaction being used internally. [#51176][#51176]
-- Fixed a bug that allowed new types to be used in an [array type](https://www.cockroachlabs.com/docs/v20.1/array) during a [version upgrade](https://www.cockroachlabs.com/docs/v20.1/upgrade-cockroach-version). [#53962][#53962]
-- Database [creation](https://www.cockroachlabs.com/docs/v20.1/create-database) and [deletion](https://www.cockroachlabs.com/docs/v20.1/drop-database) was previously not correctly tracked by `revision_history` cluster backups. This is now fixed. [#53806][#53806]
-- Fixed two bugs that caused CockroachDB to return errors when attempting to add [constraints](https://www.cockroachlabs.com/docs/v20.1/constraints) in the same transaction in which the table was created:
- 1. Adding a [`NOT NULL` constraint](https://www.cockroachlabs.com/docs/v20.1/not-null) no longer fails with the error `check ... does not exist`.
- 1. Adding a `NOT VALID` [foreign key constraint](https://www.cockroachlabs.com/docs/v20.1/foreign-key) no longer fails with the internal error `table descriptor is not valid: duplicate constraint name`. [#54288][#54288]
-- Fixed a bug that could lead to out-of-memory errors when [dropping](https://www.cockroachlabs.com/docs/v20.1/drop-table) large numbers of tables at a high frequency. [#54285][#54285]
-- CockroachDB could previously crash in rare circumstances when many queries running in the cluster were consuming a lot of memory and at least one query was running through the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). This is now fixed. [#54406][#54406]
-- In releases v20.1.4 and v20.1.5, CockroachDB might finish [`UPSERT`](https://www.cockroachlabs.com/docs/v20.1/upsert) operations too early. A simple `UPSERT` would correctly insert up to 10,000 rows and then ignore the rest. An `UPSERT` with a `RETURNING` clause would process up to 10,000 rows but return no rows. For more information, see [Technical Advisory 54418](https://www.cockroachlabs.com/docs/advisories/a54418). [#54418][#54418]
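-
-A sketch of the `GROUPS` framing fix from earlier in this list (the table is invented, and the offset is simply a value large enough to have risked overflow):
-
-~~~ sql
---- Previously, a huge offset in GROUPS mode could overflow and crash the node:
-SELECT count(*) OVER (ORDER BY a GROUPS BETWEEN CURRENT ROW
-                      AND 9223372036854775807 FOLLOWING)
-FROM t;
-~~~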
-
-
-
-- Fixed a case where connections to Google Cloud storage would ignore the [`--external-io-disable-implicit-credentials`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start#external-io-disable-implicit-credentials) flag to `cockroach start`. [#55091][#55091]
-
-
General changes
-
-- Reduced the memory overhead of rangefeeds (i.e., long-lived requests), which in turn reduces the memory required to run [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v20.2/stream-data-out-of-cockroachdb-using-changefeeds)s over large tables. [#54632][#54632]
-
-
Bug fixes
-
-- Fixed a bug where an [index](https://www.cockroachlabs.com/docs/v20.1/indexes) containing the columns of a [foreign key](https://www.cockroachlabs.com/docs/v20.1/foreign-key) as a prefix could have all of its columns set to `NULL` or the default value on cascade. [#54543][#54543]
-- Fixed a bug causing servers to crash with the message "committed txn with writeTooOld". Versions below v20.1.4 are susceptible to this bug. Versions v20.1.4+ will not crash, but instead print messages to the log files. [#54282][#54282]
-- Fixed a rare bug which can lead to index backfills failing in the face of [transaction restarts](https://www.cockroachlabs.com/docs/v20.1/transactions#transaction-retries). [#54859][#54859]
-- Fixed a race condition propagating post-query metadata in the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution). [#55168][#55168]
-- Fixed a bug causing nodes running version 20.1 to not be able to serve [follower reads](https://www.cockroachlabs.com/docs/v20.1/follower-reads) in mixed-version clusters running versions 19.2 and 20.1. [#55089][#55089]
-- The first timing column in the trace.txt file collected as part of a [statement diagnostics bundle](https://www.cockroachlabs.com/docs/v20.1/explain-analyze#debug-option) has been fixed.
-
-
-
-- Fixed a bug where [schema changes](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes) that affect referenced tables might not have propagated to other nodes. [#55375][#55375]
-- Fixed a bug where inscrutable errors were returned on failed [backup creation](https://www.cockroachlabs.com/docs/v20.1/backup). [#54968][#54968]
-- Fixed a bug where CockroachDB crashed when [executing a query via the vectorized engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution) when most of the SQL memory (determined via [`--max-sql-memory`](https://www.cockroachlabs.com/docs/v20.1/cockroach-start#flags) startup argument) had already been reserved. [#55458][#55458]
-- Fixed a bug where the [`age()`](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators#date-and-time-functions) function did not normalize the duration for large day or H:M:S values in the same way PostgreSQL does. [#55527][#55527]
-- Fixed a bug where CockroachDB did not account for all the memory used by the vectorized hash aggregation which could lead to an OOM crash. [#55571][#55571]
-- Fixed a bug where using the `MIN`/`MAX` aggregates in a prepared statement did not report the correct [data type](https://www.cockroachlabs.com/docs/v20.1/data-types) size. [#55621][#55621]
-
-
-
-
Contributors
-
-This release includes 8 merged PRs by 6 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- kev (first-time contributor)
-
-
-
-- Added a setting to opt in or out of including [SQL statistics](https://www.cockroachlabs.com/docs/v20.1/create-statistics) in [backups](https://www.cockroachlabs.com/docs/v20.1/backup). [#55879][#55879]
-
-
SQL language changes
-
-- Parsing [intervals](https://www.cockroachlabs.com/docs/v20.1/interval) with fractional years now produces intervals with no more precision than months, to match the behavior of PostgreSQL (see the sketch after this list). [#56247][#56247]
-- Table names are now listed before index names in [`EXPLAIN (DISTSQL)`](https://www.cockroachlabs.com/docs/v20.1/explain) diagram output. Previously, the diagrams used `index@table`, and now they use `table@index`. [#56396][#56396]
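-
-A sketch of the new interval parsing (the output shown is the expected PostgreSQL-style rendering):
-
-~~~ sql
-SELECT '1.5 years'::INTERVAL;
---- expected: 1 year 6 mons (no finer precision than months)
-~~~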
-
-
Bug fixes
-
-- [Changefeeds](https://www.cockroachlabs.com/docs/v20.1/changefeed-for) were previously incompatible with the [vectorized execution engine](https://www.cockroachlabs.com/docs/v20.1/vectorized-execution), and creating changefeeds with the vectorized engine enabled could cause a server to hang. This could happen in v20.2 releases with `SET vectorize_row_count_threshold=0;`, and in v20.1 releases with `SET vectorize=on`. This bug is now fixed. [#55754][#55754]
-- Fixed possible write skew in distributed queries that have both zigzag [joins](https://www.cockroachlabs.com/docs/v20.1/joins) and table readers with the zigzag joins reading keys not read by the table readers. [#55874][#55874]
-- CockroachDB previously could return incorrect results when computing [aggregation functions](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators#aggregate-functions) when some of the functions contained a `DISTINCT` clause and some did not. This bug is now fixed. [#55873][#55873]
-- CockroachDB previously could incorrectly evaluate the `sqrdiff` function when used as a [window function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators#window-functions) in some cases. This bug is now fixed. [#55999][#55999]
-- Fixed a `top-level relational expression cannot have outer columns` error in some queries that involve `WITH`. [#56086][#56086]
-- Fixed a bug causing [`IMPORT`](https://www.cockroachlabs.com/docs/v20.1/import)s of malformed Avro records to hang forever. [#56097][#56097]
-- Fixed a bug causing CockroachDB to crash when a [`BACKUP`](https://www.cockroachlabs.com/docs/v20.1/backup) query was unable to count the total nodes in the cluster. [#56096][#56096]
-- Fixed an error that could occur at the end of a restoration of a backup that had ongoing [schema change jobs](https://www.cockroachlabs.com/docs/v20.1/online-schema-changes). [#56021][#56021]
-- Previously, cluster backups created in releases before v20.2 could not be [restored](https://www.cockroachlabs.com/docs/v20.1/restore) in 20.2 clusters, and would produce an error message about failing to restore a system table. This bug is now fixed. [#56024][#56024]
-- [Options set on users](https://www.cockroachlabs.com/docs/v20.1/alter-user) (e.g., `ALTER USER CREATEDB`) were not included in cluster backups and thus not restored. Role options are now included in cluster backups. [#55442][#55442]
-- Fixed a bug where zero values for [protected timestamp settings](https://www.cockroachlabs.com/docs/v20.1/cluster-settings) were not respected as disabling the relevant limit. [#56454][#56454]
-- Fixed a bug that resulted in a failed restore when restoring into a database with a different set of [privileges](https://www.cockroachlabs.com/docs/v20.1/authorization) than the backed-up database. [#55880][#55880]
-- Fixed a race between job completion and sending the result of the job to the client. CockroachDB now sends results to the client after a job completes. [#56146][#56146]
-- In [v20.1.8]({% link releases/v20.1.md %}#v20-1-8), we attempted to fix the `age()` [function](https://www.cockroachlabs.com/docs/v20.1/functions-and-operators)'s normalization of `H:M:S` input into years, months, and days. However, the v20.1.8 fix was broken for values greater than 1 month, and for `a::timestamp(tz) - b::timestamp(tz)` expressions. This bug has been resolved. [commit 59b2bc218][59b2bc218]
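-
-A sketch of the now-correct `age()` normalization (timestamps invented; the result shown is the expected PostgreSQL-compatible output, assuming a UTC session time zone):
-
-~~~ sql
-SELECT age('2021-01-01 00:00:00'::TIMESTAMPTZ, '2019-10-15 12:30:00'::TIMESTAMPTZ);
---- expected: 1 year 2 mons 16 days 11:30:00
-~~~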
-
-
-
-
Contributors
-
-This release includes 22 merged PRs by 12 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Max Neverov (first-time contributor)
-
-
-
-Field | Description
-------|----
-Metric Name | How the system refers to this metric, e.g., `sql.bytesin`.
-Downsampler | Combines the individual datapoints over the longer period into a single datapoint. One data point is stored every ten seconds, but for queries over long time spans the backend lowers the resolution of the returned data, perhaps only returning one data point for every minute, five minutes, or even an entire hour in the case of the 30-day view.<br><br>Options:<br><br>**AVG**: Returns the average value over the time period.<br>**MIN**: Returns the lowest value seen.<br>**MAX**: Returns the highest value seen.<br>**SUM**: Returns the sum of all values seen.
-Aggregator | Used to combine data points from different nodes. It has the same operations available as the Downsampler.<br><br>Options:<br><br>**AVG**: Returns the average value over the time period.<br>**MIN**: Returns the lowest value seen.<br>**MAX**: Returns the highest value seen.<br>**SUM**: Returns the sum of all values seen.
-Rate | Determines how to display the rate of change during the selected time period.<br><br>Options:<br><br>**Normal**: Returns the actual recorded value.<br>**Rate**: Returns the rate of change of the value per second.<br>**Non-negative Rate**: Returns the rate of change, but returns 0 instead of negative values. A large number of the stats we track are actually tracked as monotonically increasing counters, so each sample is just the total value of that counter. The rate of change of that counter represents the rate of events being counted, which is usually what you want to graph. "Non-negative Rate" is needed because the counters are stored in memory, and thus if a node resets, a counter goes back to zero (whereas normally counters only increase).
-Source | The set of nodes being queried: either the entire cluster or a single, named node.
-Per Node | If checked, the chart will show a line for each node's value of this metric.
-
diff --git a/src/current/_includes/v20.1/admin-ui/admin-ui-log-files.md b/src/current/_includes/v20.1/admin-ui/admin-ui-log-files.md
deleted file mode 100644
index 1afc7b2aa6d..00000000000
--- a/src/current/_includes/v20.1/admin-ui/admin-ui-log-files.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Log files can be accessed using the Admin UI, which displays them in JSON format.
-
-1. [Access the Admin UI](admin-ui-overview.html#admin-ui-access) and then click [**Advanced Debug**](admin-ui-debug-pages.html) in the left-hand navigation.
-
-2. Under **Raw Status Endpoints (JSON)**, click **Log Files** to view the JSON of all collected logs.
-
-3. Copy one of the log filenames. Then click **Specific Log File** and replace the `cockroach.log` placeholder in the URL with the filename.
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/admin-ui/admin-ui-metrics-navigation.md b/src/current/_includes/v20.1/admin-ui/admin-ui-metrics-navigation.md
deleted file mode 100644
index 6516c21dfce..00000000000
--- a/src/current/_includes/v20.1/admin-ui/admin-ui-metrics-navigation.md
+++ /dev/null
@@ -1,5 +0,0 @@
-## Dashboard navigation
-
-Use the **Graph** menu to display metrics for your entire cluster or for a specific node.
-
-To the right of the Graph and Dashboard menus, a range selector allows you to filter the view for a predefined timeframe or custom date/time range. Use the navigation buttons to move to the previous, next, or current timeframe. Note that the active timeframe is reflected in the URL and can be easily shared.
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/admin-ui/admin-ui-summary-events.md b/src/current/_includes/v20.1/admin-ui/admin-ui-summary-events.md
deleted file mode 100644
index 39458a4299a..00000000000
--- a/src/current/_includes/v20.1/admin-ui/admin-ui-summary-events.md
+++ /dev/null
@@ -1,43 +0,0 @@
-## Summary and events
-
-### Summary panel
-
-A **Summary** panel of key metrics is displayed to the right of the timeseries graphs.
-
-
-
-Metric | Description
---------|----
-Total Nodes | The total number of nodes in the cluster. [Decommissioned nodes](remove-nodes.html) are not included in this count.
-Capacity Used | The storage capacity used as a percentage of [usable capacity](admin-ui-cluster-overview-page.html#capacity-metrics) allocated across all nodes.
-Unavailable Ranges | The number of unavailable ranges in the cluster. A non-zero number indicates an unstable cluster.
-Queries per second | The total number of `SELECT`, `UPDATE`, `INSERT`, and `DELETE` queries executed per second across the cluster.
-P99 Latency | The 99th percentile of service latency.
-
-{{site.data.alerts.callout_info}}
-{% include {{ page.version.version }}/misc/available-capacity-metric.md %}
-{{site.data.alerts.end}}
-
-### Events panel
-
-Underneath the [Summary](#summary-panel) panel, the **Events** panel lists the 5 most recent events logged for all nodes across the cluster. To list all events, click **View all events**.
-
-
-
-The following types of events are listed:
-
-- Database created
-- Database dropped
-- Table created
-- Table dropped
-- Table altered
-- Index created
-- Index dropped
-- View created
-- View dropped
-- Schema change reversed
-- Schema change finished
-- Node joined
-- Node decommissioned
-- Node restarted
-- Cluster setting changed
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/admin-ui/logical-bytes.md b/src/current/_includes/v20.1/admin-ui/logical-bytes.md
deleted file mode 100644
index e85f04cea92..00000000000
--- a/src/current/_includes/v20.1/admin-ui/logical-bytes.md
+++ /dev/null
@@ -1 +0,0 @@
-Logical bytes reflect the approximate number of bytes stored in the database. This value may deviate from the number of physical bytes on disk, due to factors such as compression and [write amplification](https://en.wikipedia.org/wiki/Write_amplification).
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/alerts/warning-a58932.md b/src/current/_includes/v20.1/alerts/warning-a58932.md
deleted file mode 100644
index 03920fa18ee..00000000000
--- a/src/current/_includes/v20.1/alerts/warning-a58932.md
+++ /dev/null
@@ -1,3 +0,0 @@
-A denial-of-service (DoS) vulnerability is present in CockroachDB v20.1.0 - v20.1.10 due to a bug in [protobuf](https://github.com/gogo/protobuf). This is resolved in CockroachDB [v20.1.11](../releases/v20.1.html#v20-1-11) and [later releases](../releases/#production-releases). When upgrading is not an option, users should audit their network configuration to verify that the CockroachDB HTTP port is not available to untrusted clients. We recommend blocking the HTTP port behind a firewall.
-
-For more information, including other affected versions, see [Technical Advisory 58932](../advisories/a58932.html).
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/alerts/warning-a63162.md b/src/current/_includes/v20.1/alerts/warning-a63162.md
deleted file mode 100644
index 6f328a46fe3..00000000000
--- a/src/current/_includes/v20.1/alerts/warning-a63162.md
+++ /dev/null
@@ -1,5 +0,0 @@
-Cockroach Labs has discovered a bug relating to incremental backups, for CockroachDB v20.1.0 - v20.1.13. If a backup coincides with an in-progress index creation (backfill), `RESTORE`, or `IMPORT`, it is possible that a subsequent incremental backup will not include all of the indexed, restored or imported data.
-
-Users are advised to upgrade to v20.1.15 or later, which includes resolutions.
-
-For more information, including other affected versions, see [Technical Advisory 63162](../advisories/a63162.html).
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/BasicExample.java b/src/current/_includes/v20.1/app/BasicExample.java
deleted file mode 100644
index f9737ff4a43..00000000000
--- a/src/current/_includes/v20.1/app/BasicExample.java
+++ /dev/null
@@ -1,438 +0,0 @@
-import java.util.*;
-import java.time.*;
-import java.sql.*;
-import javax.sql.DataSource;
-
-import org.postgresql.ds.PGSimpleDataSource;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac BasicExample.java && java BasicExample
-
- To build the javadoc:
-
- $ javadoc -package -cp .:./path/to/postgresql.jar BasicExample.java
-
- At a high level, this code consists of two classes:
-
- 1. BasicExample, which is where the application logic lives.
-
- 2. BasicExampleDAO, which is used by the application to access the
- data store.
-
-*/
-
-public class BasicExample {
-
- public static void main(String[] args) {
-
- // Configure the database connection.
- PGSimpleDataSource ds = new PGSimpleDataSource();
- ds.setServerName("localhost");
- ds.setPortNumber(26257);
- ds.setDatabaseName("bank");
- ds.setUser("maxroach");
- ds.setPassword(null);
- ds.setSsl(true);
- ds.setSslMode("require");
- ds.setSslRootCert("certs/client.root.crt");
- ds.setSslCert("certs/client.maxroach.crt");
- ds.setSslKey("certs/client.maxroach.key.pk8");
- ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string
- ds.setApplicationName("BasicExample");
-
- // Create DAO.
- BasicExampleDAO dao = new BasicExampleDAO(ds);
-
- // Test our retry handling logic if FORCE_RETRY is true. This
- // method is only used to test the retry logic. It is not
- // necessary in production code.
- dao.testRetryHandling();
-
- // Set up the 'accounts' table.
- dao.createAccounts();
-
- // Insert a few accounts "by hand", using INSERTs on the backend.
- Map<String, String> balances = new HashMap<>();
- balances.put("1", "1000");
- balances.put("2", "250");
- int updatedAccounts = dao.updateAccounts(balances);
- System.out.printf("BasicExampleDAO.updateAccounts:\n => %s total updated accounts\n", updatedAccounts);
-
- // How much money is in these accounts?
- int balance1 = dao.getAccountBalance(1);
- int balance2 = dao.getAccountBalance(2);
- System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
-
- // Transfer $100 from account 1 to account 2
- int fromAccount = 1;
- int toAccount = 2;
- int transferAmount = 100;
- int transferredAccounts = dao.transferFunds(fromAccount, toAccount, transferAmount);
- if (transferredAccounts != -1) {
- System.out.printf("BasicExampleDAO.transferFunds:\n => $%s transferred between accounts %s and %s, %s rows updated\n", transferAmount, fromAccount, toAccount, transferredAccounts);
- }
-
- balance1 = dao.getAccountBalance(1);
- balance2 = dao.getAccountBalance(2);
- System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
-
- // Bulk insertion example using JDBC's batching support.
- int totalRowsInserted = dao.bulkInsertRandomAccountData();
- System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n => finished, %s total rows inserted\n", totalRowsInserted);
-
- // Print out 10 account values.
- int accountsRead = dao.readAccounts(10);
-
- // Drop the 'accounts' table so this code can be run again.
- dao.tearDown();
- }
-}
-
-/**
- * Data access object used by 'BasicExample'. Abstraction over some
- * common CockroachDB operations, including:
- *
- * - Auto-handling transaction retries in the 'runSQL' method
- *
- * - Example of bulk inserts in the 'bulkInsertRandomAccountData'
- * method
- */
-
-class BasicExampleDAO {
-
- private static final int MAX_RETRY_COUNT = 3;
- private static final String SAVEPOINT_NAME = "cockroach_restart";
- private static final String RETRY_SQL_STATE = "40001";
- private static final boolean FORCE_RETRY = false;
-
- private final DataSource ds;
-
- BasicExampleDAO(DataSource ds) {
- this.ds = ds;
- }
-
- /**
- Used to test the retry logic in 'runSQL'. It is not necessary
- in production code.
- */
- void testRetryHandling() {
- if (this.FORCE_RETRY) {
- runSQL("SELECT crdb_internal.force_retry('1s':::INTERVAL)");
- }
- }
-
- /**
- * Run SQL code in a way that automatically handles the
- * transaction retry logic so we do not have to duplicate it in
- * various places.
- *
- * @param sqlCode a String containing the SQL code you want to
- * execute. Can have placeholders, e.g., "INSERT INTO accounts
- * (id, balance) VALUES (?, ?)".
- *
- * @param args String Varargs to fill in the SQL code's
- * placeholders.
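- *
- * Example (mirroring the call in 'updateAccounts' below):
- *
- *   int rows = runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", "1", "1000");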
- * @return Integer Number of rows updated, or -1 if an error is thrown.
- */
- public Integer runSQL(String sqlCode, String... args) {
-
- // This block is only used to emit class and method names in
- // the program output. It is not necessary in production
- // code.
- StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
- StackTraceElement elem = stacktrace[2];
- String callerClass = elem.getClassName();
- String callerMethod = elem.getMethodName();
-
- int rv = 0;
-
- try (Connection connection = ds.getConnection()) {
-
- // We're managing the commit lifecycle ourselves so we can
- // automatically issue transaction retries.
- connection.setAutoCommit(false);
-
- int retryCount = 0;
-
- while (retryCount < MAX_RETRY_COUNT) {
-
- Savepoint sp = connection.setSavepoint(SAVEPOINT_NAME);
-
- // This block is only used to test the retry logic.
- // It is not necessary in production code. See also
- // the method 'testRetryHandling()'.
- if (FORCE_RETRY) {
- forceRetry(connection); // SELECT 1
- }
-
- try (PreparedStatement pstmt = connection.prepareStatement(sqlCode)) {
-
- // Loop over the args and insert them into the
- // prepared statement based on their types. In
- // this simple example we classify the argument
- // types as "integers" and "everything else"
- // (a.k.a. strings).
- for (int i=0; i<args.length; i++) {
- int place = i + 1;
- String arg = args[i];
-
- try {
- int val = Integer.parseInt(arg);
- pstmt.setInt(place, val);
- } catch (NumberFormatException e) {
- pstmt.setString(place, arg);
- }
- }
-
- if (pstmt.execute()) {
- // We know `pstmt.getResultSet()` will not return `null` here
- // because `pstmt.execute()` returned `true`.
- ResultSet rs = pstmt.getResultSet();
- ResultSetMetaData rsmeta = rs.getMetaData();
- int colCount = rsmeta.getColumnCount();
-
- // This printed output is for debugging and/or demonstration
- // purposes only. It would not be necessary in production code.
- System.out.printf("\n%s.%s:\n '%s'\n", callerClass, callerMethod, pstmt);
-
- while (rs.next()) {
- for (int i=1; i <= colCount; i++) {
- String name = rsmeta.getColumnName(i);
- String type = rsmeta.getColumnTypeName(i);
-
- // In this simple example we only expect INT8 results (the
- // CockroachDB default integer type).
- if ("int8".equals(type)) {
- int val = rs.getInt(name);
- System.out.printf(" %-8s => %10s\n", name, val);
- }
- }
- }
- } else {
- int updateCount = pstmt.getUpdateCount();
- rv += updateCount;
-
- // This printed output is for debugging and/or demonstration
- // purposes only. It would not be necessary in production code.
- System.out.printf("\n%s.%s:\n '%s'\n", callerClass, callerMethod, pstmt);
- }
-
- connection.releaseSavepoint(sp);
- connection.commit();
- break;
-
- } catch (SQLException e) {
-
- if (RETRY_SQL_STATE.equals(e.getSQLState())) {
- System.out.printf("retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n",
- e.getSQLState(), e.getMessage(), retryCount);
- connection.rollback(sp);
- retryCount++;
- rv = -1;
- } else {
- rv = -1;
- throw e;
- }
- }
- }
- } catch (SQLException e) {
- System.out.printf("BasicExampleDAO.runSQL ERROR: { state => %s, cause => %s, message => %s }\n",
- e.getSQLState(), e.getCause(), e.getMessage());
- rv = -1;
- }
-
- return rv;
- }
-
- /**
- * Helper method called by 'testRetryHandling'. It simply issues
- * a "SELECT 1" inside the transaction to force a retry. This is
- * necessary to take the connection's session out of the AutoRetry
- * state, since otherwise the other statements in the session will
- * be retried automatically, and the client (us) will not see a
- * retry error. Note that this information is taken from the
- * following test:
- * https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/logic_test/manual_retry
- *
- * @param connection Connection
- */
- private void forceRetry(Connection connection) throws SQLException {
- try (PreparedStatement statement = connection.prepareStatement("SELECT 1")){
- statement.executeQuery();
- }
- }
-
- /**
- * Creates a fresh, empty accounts table in the database.
- */
- public void createAccounts() {
- runSQL("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))");
- }
-
- /**
- * Update accounts by passing in a Map of (ID, Balance) pairs.
- *
- * @param accounts (Map)
- * @return The number of updated accounts (int)
- */
- public int updateAccounts(Map<String, String> accounts) {
- int rows = 0;
- for (Map.Entry<String, String> account : accounts.entrySet()) {
-
- String k = account.getKey();
- String v = account.getValue();
-
- String[] args = {k, v};
- rows += runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", args);
- }
- return rows;
- }
-
- /**
- * Transfer funds between one account and another. Handles
- * transaction retries in case of conflict automatically on the
- * backend.
- * @param fromId (int)
- * @param toId (int)
- * @param amount (int)
- * @return The number of updated accounts (int)
- */
- public int transferFunds(int fromId, int toId, int amount) {
- String sFromId = Integer.toString(fromId);
- String sToId = Integer.toString(toId);
- String sAmount = Integer.toString(amount);
-
- // We have omitted explicit BEGIN/COMMIT statements for
- // brevity. Individual statements are treated as implicit
- // transactions by CockroachDB (see
- // https://www.cockroachlabs.com/docs/stable/transactions.html#individual-statements).
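- //
- // Note that the 'balance_gt_0' CHECK constraint added in
- // 'createAccounts' causes this UPSERT to fail (and 'runSQL' to
- // return -1) rather than allow an overdraft.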
-
- String sqlCode = "UPSERT INTO accounts (id, balance) VALUES" +
- "(?, ((SELECT balance FROM accounts WHERE id = ?) - ?))," +
- "(?, ((SELECT balance FROM accounts WHERE id = ?) + ?))";
-
- return runSQL(sqlCode, sFromId, sFromId, sAmount, sToId, sToId, sAmount);
- }
-
- /**
- * Get the account balance for one account.
- *
- * We skip using the retry logic in 'runSQL()' here for the
- * following reasons:
- *
- * 1. Since this is a single read ("SELECT"), we do not expect any
- * transaction conflicts to handle
- *
- * 2. We need to return the balance as an integer
- *
- * @param id (int)
- * @return balance (int)
- */
- public int getAccountBalance(int id) {
- int balance = 0;
-
- try (Connection connection = ds.getConnection()) {
-
- // Check the current balance.
- ResultSet res = connection.createStatement()
- .executeQuery("SELECT balance FROM accounts WHERE id = "
- + id);
- if(!res.next()) {
- System.out.printf("No users in the table with id %i", id);
- } else {
- balance = res.getInt("balance");
- }
- } catch (SQLException e) {
- System.out.printf("BasicExampleDAO.getAccountBalance ERROR: { state => %s, cause => %s, message => %s }\n",
- e.getSQLState(), e.getCause(), e.getMessage());
- }
-
- return balance;
- }
-
- /**
- * Insert randomized account data (ID, balance) using the JDBC
- * fast path for bulk inserts. The fastest way to get data into
- * CockroachDB is the IMPORT statement. However, if you must bulk
- * ingest from the application using INSERT statements, the best
- * option is the method shown here. It will require the following:
- *
- * 1. Add `rewriteBatchedInserts=true` to your JDBC connection
- * settings (see the connection info in 'BasicExample.main').
- *
- * 2. Inserting in batches of 128 rows, as used inside this method
- * (see BATCH_SIZE), since the PGJDBC driver's logic works best
- * with powers of two, such that a batch of size 128 can be 6x
- * faster than a batch of size 250.
- * @return The number of new accounts inserted (int)
- */
- public int bulkInsertRandomAccountData() {
-
- Random random = new Random();
- int BATCH_SIZE = 128;
- int totalNewAccounts = 0;
-
- try (Connection connection = ds.getConnection()) {
-
- // We're managing the commit lifecycle ourselves so we can
- // control the size of our batch inserts.
- connection.setAutoCommit(false);
-
- // In this example we are adding 500 rows to the database,
- // but it could be any number. What's important is that
- // the batch size is 128.
- try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
- for (int i=0; i<=(500/BATCH_SIZE);i++) {
- for (int j=0; j<BATCH_SIZE; j++) {
- int id = random.nextInt(1000000000);
- int balance = random.nextInt(1000000000);
- pstmt.setInt(1, id);
- pstmt.setInt(2, balance);
- pstmt.addBatch();
- }
- int[] count = pstmt.executeBatch();
- totalNewAccounts += count.length;
- System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n => %s row(s) updated in this batch\n", count.length);
- }
- connection.commit();
- } catch (SQLException e) {
- System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
- e.getSQLState(), e.getCause(), e.getMessage());
- }
- } catch (SQLException e) {
- System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
- e.getSQLState(), e.getCause(), e.getMessage());
- }
- return totalNewAccounts;
- }
-
- /**
- * Read out a subset of accounts from the data store.
- *
- * @param limit (int)
- * @return Number of accounts read (int)
- */
- public int readAccounts(int limit) {
- return runSQL("SELECT id, balance FROM accounts LIMIT ?", Integer.toString(limit));
- }
-
- /**
- * Perform any necessary cleanup of the data store so it can be
- * used again.
- */
- public void tearDown() {
- runSQL("DROP TABLE accounts;");
- }
-}
diff --git a/src/current/_includes/v20.1/app/activerecord-basic-sample.rb b/src/current/_includes/v20.1/app/activerecord-basic-sample.rb
deleted file mode 100644
index bbfa9a5e77d..00000000000
--- a/src/current/_includes/v20.1/app/activerecord-basic-sample.rb
+++ /dev/null
@@ -1,61 +0,0 @@
-# Use bundler inline - these would typically go in a Gemfile
-require 'bundler/inline'
-gemfile do
- source 'https://rubygems.org'
- gem 'pg'
- gem 'activerecord', '5.2.0'
-
- # CockroachDB ActiveRecord adapter dependency
- gem 'activerecord-cockroachdb-adapter', '5.2.0'
-end
-
-require 'pg'
-require 'active_record'
-require 'activerecord-cockroachdb-adapter'
-
-# Connect to CockroachDB using ActiveRecord.
-# In Rails, this configuration would go in config/database.yml as usual.
-ActiveRecord::Base.establish_connection(
-
- # Specify the CockroachDB ActiveRecord adapter
- adapter: 'cockroachdb',
- username: 'maxroach',
- database: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
-
- # These are the certificate files created in the previous step
- sslrootcert: 'certs/ca.crt',
- sslkey: 'certs/client.maxroach.key',
- sslcert: 'certs/client.maxroach.crt'
-)
-
-# Define the Account model.
-# In Rails, this would go in app/models/ as usual.
-class Account < ActiveRecord::Base
- validates :balance, presence: true
-end
-
-# Define a migration for the accounts table.
-# In Rails, this would go in db/migrate/ as usual.
-class Schema < ActiveRecord::Migration[5.0]
- def change
- create_table :accounts, force: true do |t|
- t.integer :balance
- end
- end
-end
-
-# Run the schema migration programmatically.
-# In Rails, this would be done via rake db:migrate as usual.
-Schema.new.change()
-
-# Create two accounts, inserting two rows into the accounts table.
-Account.create!(id: 1, balance: 1000)
-Account.create!(id: 2, balance: 250)
-
-# Retrieve accounts and print out the balances
-Account.all.each do |acct|
- puts "account: #{acct.id} balance: #{acct.balance}"
-end
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/basic-sample-pgx.go b/src/current/_includes/v20.1/app/basic-sample-pgx.go
deleted file mode 100644
index d385cd82067..00000000000
--- a/src/current/_includes/v20.1/app/basic-sample-pgx.go
+++ /dev/null
@@ -1,52 +0,0 @@
-package main
-
-import (
- "context"
- "fmt"
- "log"
-
- "github.com/jackc/pgx/v4"
-)
-
-func main() {
- config, err := pgx.ParseConfig("postgresql://maxroach@localhost:26257/bank?sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
- if err != nil {
- log.Fatal("error configuring the database: ", err)
- }
-
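- // Assumption: the node certificate is issued for "localhost"; set the
- // server name pgx should expect during TLS certificate verification.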
- config.TLSConfig.ServerName = "localhost"
-
- // Connect to the "bank" database.
- conn, err := pgx.ConnectConfig(context.Background(), config)
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer conn.Close(context.Background())
-
- // Create the "accounts" table.
- if _, err := conn.Exec(context.Background(),
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
- log.Fatal(err)
- }
-
- // Insert two rows into the "accounts" table.
- if _, err := conn.Exec(context.Background(),
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
- log.Fatal(err)
- }
-
- // Print out the balances.
- rows, err := conn.Query(context.Background(), "SELECT id, balance FROM accounts")
- if err != nil {
- log.Fatal(err)
- }
- defer rows.Close()
- fmt.Println("Initial balances:")
- for rows.Next() {
- var id, balance int
- if err := rows.Scan(&id, &balance); err != nil {
- log.Fatal(err)
- }
- fmt.Printf("%d %d\n", id, balance)
- }
-}
diff --git a/src/current/_includes/v20.1/app/basic-sample.c b/src/current/_includes/v20.1/app/basic-sample.c
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/src/current/_includes/v20.1/app/basic-sample.clj b/src/current/_includes/v20.1/app/basic-sample.clj
deleted file mode 100644
index 10c98fff2ba..00000000000
--- a/src/current/_includes/v20.1/app/basic-sample.clj
+++ /dev/null
@@ -1,35 +0,0 @@
-(ns test.test
- (:require [clojure.java.jdbc :as j]
- [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:dbtype "postgresql"
- :dbname "bank"
- :host "localhost"
- :port "26257"
- :ssl true
- :sslmode "require"
- :sslcert "certs/client.maxroach.crt"
- :sslkey "certs/client.maxroach.key.pk8"
- :user "maxroach"})
-
-(defn test-basic []
- ;; Connect to the cluster and run the code below with
- ;; the connection object bound to 'conn'.
- (j/with-db-connection [conn db-spec]
-
- ;; Insert two rows into the "accounts" table.
- (j/insert! conn :accounts {:id 1 :balance 1000})
- (j/insert! conn :accounts {:id 2 :balance 250})
-
- ;; Print out the balances.
- (println "Initial balances:")
- (->> (j/query conn ["SELECT id, balance FROM accounts"])
- (map println)
- doall)
-
- ))
-
-
-(defn -main [& args]
- (test-basic))
diff --git a/src/current/_includes/v20.1/app/basic-sample.cpp b/src/current/_includes/v20.1/app/basic-sample.cpp
deleted file mode 100644
index 67b6c1d1062..00000000000
--- a/src/current/_includes/v20.1/app/basic-sample.cpp
+++ /dev/null
@@ -1,39 +0,0 @@
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-int main() {
- try {
- // Connect to the "bank" database.
- pqxx::connection c("dbname=bank user=maxroach sslmode=require sslkey=certs/client.maxroach.key sslcert=certs/client.maxroach.crt port=26257 host=localhost");
-
- pqxx::nontransaction w(c);
-
- // Create the "accounts" table.
- w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
- // Insert two rows into the "accounts" table.
- w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
- // Print out the balances.
- cout << "Initial balances:" << endl;
- pqxx::result r = w.exec("SELECT id, balance FROM accounts");
- for (auto row : r) {
- cout << row[0].as<int>() << ' ' << row[1].as<int>() << endl;
- }
-
- w.commit(); // Note this doesn't do anything
- // for a nontransaction, but is still required.
- }
- catch (const exception &e) {
- cerr << e.what() << endl;
- return 1;
- }
- cout << "Success" << endl;
- return 0;
-}
diff --git a/src/current/_includes/v20.1/app/basic-sample.cs b/src/current/_includes/v20.1/app/basic-sample.cs
deleted file mode 100644
index ffedb0cd210..00000000000
--- a/src/current/_includes/v20.1/app/basic-sample.cs
+++ /dev/null
@@ -1,101 +0,0 @@
-using System;
-using System.Data;
-using System.Security.Cryptography.X509Certificates;
-using System.Net.Security;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.SslMode = SslMode.Require;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- Simple(connStringBuilder.ConnectionString);
- }
-
- static void Simple(string connString)
- {
- using (var conn = new NpgsqlConnection(connString))
- {
- conn.ProvideClientCertificatesCallback += ProvideClientCertificatesCallback;
- conn.UserCertificateValidationCallback += UserCertificateValidationCallback;
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using (var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
-
- static void ProvideClientCertificatesCallback(X509CertificateCollection clientCerts)
- {
- // To be able to add a certificate with a private key included, we must convert it to
- // a PKCS #12 format. The following openssl command does this:
- // openssl pkcs12 -password pass: -inkey client.maxroach.key -in client.maxroach.crt -export -out client.maxroach.pfx
- // As of 2018-12-10, you need to provide a password for this to work on macOS.
- // See https://github.com/dotnet/corefx/issues/24225
-
- // Note that the password used during X509 cert creation below
- // must match the password used in the openssl command above.
- clientCerts.Add(new X509Certificate2("client.maxroach.pfx", "pass"));
- }
-
- // By default, .Net does all of its certificate verification using the system certificate store.
- // This callback is necessary to validate the server certificate against a CA certificate file.
- static bool UserCertificateValidationCallback(object sender, X509Certificate certificate, X509Chain defaultChain, SslPolicyErrors defaultErrors)
- {
- X509Certificate2 caCert = new X509Certificate2("ca.crt");
- X509Chain caCertChain = new X509Chain();
- caCertChain.ChainPolicy = new X509ChainPolicy()
- {
- RevocationMode = X509RevocationMode.NoCheck,
- RevocationFlag = X509RevocationFlag.EntireChain
- };
- caCertChain.ChainPolicy.ExtraStore.Add(caCert);
-
- X509Certificate2 serverCert = new X509Certificate2(certificate);
-
- caCertChain.Build(serverCert);
- if (caCertChain.ChainStatus.Length == 0)
- {
- // No errors
- return true;
- }
-
- foreach (X509ChainStatus status in caCertChain.ChainStatus)
- {
- // Check if we got any errors other than UntrustedRoot (which we will always get if we do not install the CA cert to the system store)
- if (status.Status != X509ChainStatusFlags.UntrustedRoot)
- {
- return false;
- }
- }
- return true;
- }
-
- }
-}
diff --git a/src/current/_includes/v20.1/app/basic-sample.go b/src/current/_includes/v20.1/app/basic-sample.go
deleted file mode 100644
index 6e22c858dbb..00000000000
--- a/src/current/_includes/v20.1/app/basic-sample.go
+++ /dev/null
@@ -1,46 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "log"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- // Connect to the "bank" database.
- db, err := sql.Open("postgres",
- "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer db.Close()
-
- // Create the "accounts" table.
- if _, err := db.Exec(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
- log.Fatal(err)
- }
-
- // Insert two rows into the "accounts" table.
- if _, err := db.Exec(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
- log.Fatal(err)
- }
-
- // Print out the balances.
- rows, err := db.Query("SELECT id, balance FROM accounts")
- if err != nil {
- log.Fatal(err)
- }
- defer rows.Close()
- fmt.Println("Initial balances:")
- for rows.Next() {
- var id, balance int
- if err := rows.Scan(&id, &balance); err != nil {
- log.Fatal(err)
- }
- fmt.Printf("%d %d\n", id, balance)
- }
-}
diff --git a/src/current/_includes/v20.1/app/basic-sample.php b/src/current/_includes/v20.1/app/basic-sample.php
deleted file mode 100644
index 4edae09b12a..00000000000
--- a/src/current/_includes/v20.1/app/basic-sample.php
+++ /dev/null
@@ -1,20 +0,0 @@
-<?php
-try {
- $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt',
- 'maxroach', null, array(
- PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- PDO::ATTR_PERSISTENT => true
- ));
-
- $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
-
- print "Account balances:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v20.1/app/basic-sample.rb b/src/current/_includes/v20.1/app/basic-sample.rb
deleted file mode 100644
index 73081eb9d19..00000000000
--- a/src/current/_includes/v20.1/app/basic-sample.rb
+++ /dev/null
@@ -1,31 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
- sslrootcert: 'certs/ca.crt',
- sslkey: 'certs/client.maxroach.key',
- sslcert: 'certs/client.maxroach.crt'
-)
-
-# Create the "accounts" table.
-conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
-
-# Insert two rows into the "accounts" table.
-conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
-
-# Print out the balances.
-puts 'Initial balances:'
-conn.exec('SELECT id, balance FROM accounts') do |res|
- res.each do |row|
- puts "id: #{row['id']} balance: #{row['balance']}"
- end
-end
-
-# Close the database connection.
-conn.close()
diff --git a/src/current/_includes/v20.1/app/basic-sample.rs b/src/current/_includes/v20.1/app/basic-sample.rs
deleted file mode 100644
index 4a078991cd8..00000000000
--- a/src/current/_includes/v20.1/app/basic-sample.rs
+++ /dev/null
@@ -1,45 +0,0 @@
-use openssl::error::ErrorStack;
-use openssl::ssl::{SslConnector, SslFiletype, SslMethod};
-use postgres::Client;
-use postgres_openssl::MakeTlsConnector;
-
-fn ssl_config() -> Result<MakeTlsConnector, ErrorStack> {
- let mut builder = SslConnector::builder(SslMethod::tls())?;
- builder.set_ca_file("certs/ca.crt")?;
- builder.set_certificate_chain_file("certs/client.maxroach.crt")?;
- builder.set_private_key_file("certs/client.maxroach.key", SslFiletype::PEM)?;
- Ok(MakeTlsConnector::new(builder.build()))
-}
-
-fn main() {
- let connector = ssl_config().unwrap();
- let mut client =
- Client::connect("postgresql://maxroach@localhost:26257/bank", connector).unwrap();
-
- // Create the "accounts" table.
- client
- .execute(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)",
- &[],
- )
- .unwrap();
-
- // Insert two rows into the "accounts" table.
- client
- .execute(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)",
- &[],
- )
- .unwrap();
-
- // Print out the balances.
- println!("Initial balances:");
- for row in &client
- .query("SELECT id, balance FROM accounts", &[])
- .unwrap()
- {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v20.1/app/before-you-begin.md b/src/current/_includes/v20.1/app/before-you-begin.md
deleted file mode 100644
index dfb97226414..00000000000
--- a/src/current/_includes/v20.1/app/before-you-begin.md
+++ /dev/null
@@ -1,8 +0,0 @@
-1. [Install CockroachDB](install-cockroachdb.html).
-2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster.
-3. Choose the instructions that correspond to whether your cluster is secure or insecure:
-
-
-
-
-
diff --git a/src/current/_includes/v20.1/app/create-a-database.md b/src/current/_includes/v20.1/app/create-a-database.md
deleted file mode 100644
index ca132a43c89..00000000000
--- a/src/current/_includes/v20.1/app/create-a-database.md
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-1. In the SQL shell, create the `bank` database that your application will use:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
-1. Create a SQL user for your app:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE USER <username> WITH PASSWORD '<password>';
- ~~~
-
- Take note of the username and password. You will use them in your application code later.
-
-1. Give the user the necessary permissions:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > GRANT ALL ON DATABASE bank TO <username>;
- ~~~
-
-
-
-{% comment %}
-
-
-1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html).
-1. Start the [built-in SQL shell](cockroach-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --url='postgres://<username>:<password>@<host>:26257?sslmode=verify-full&sslrootcert=<certs_dir>/<ca.crt>'
- ~~~
-
- For the `--url` flag, use the connection string you got from the CockroachDB {{ site.data.products.cloud }} Console [earlier](#get-the-connection-string):
- - Replace `<username>` and `<password>` with the SQL user and password that you created.
- - Replace `<certs_dir>/<ca.crt>` with the path to the CA certificate that you downloaded.
-
-1. In the SQL shell, create the `bank` database that your application will use:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
-1. Give the user the necessary permissions:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > GRANT ALL ON DATABASE bank TO <username>;
- ~~~
-
-
-{% endcomment %}
diff --git a/src/current/_includes/v20.1/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v20.1/app/create-maxroach-user-and-bank-database.md
deleted file mode 100644
index 4d5b4626013..00000000000
--- a/src/current/_includes/v20.1/app/create-maxroach-user-and-bank-database.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Start the [built-in SQL shell](cockroach-sql.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs
-~~~
-
-In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE USER IF NOT EXISTS maxroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-Give the `maxroach` user the necessary permissions:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT ALL ON DATABASE bank TO maxroach;
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v20.1/app/django-basic-sample/models.py b/src/current/_includes/v20.1/app/django-basic-sample/models.py
deleted file mode 100644
index 6068f8bbb8e..00000000000
--- a/src/current/_includes/v20.1/app/django-basic-sample/models.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from django.db import models
-
-class Customers(models.Model):
- id = models.AutoField(primary_key=True)
- name = models.CharField(max_length=250)
-
-class Products(models.Model):
- id = models.AutoField(primary_key=True)
- name = models.CharField(max_length=250)
- price = models.DecimalField(max_digits=18, decimal_places=2)
-
-class Orders(models.Model):
- id = models.AutoField(primary_key=True)
- subtotal = models.DecimalField(max_digits=18, decimal_places=2)
- customer = models.ForeignKey(Customers, on_delete=models.CASCADE, null=True)
- product = models.ManyToManyField(Products)
-
diff --git a/src/current/_includes/v20.1/app/django-basic-sample/settings.py b/src/current/_includes/v20.1/app/django-basic-sample/settings.py
deleted file mode 100644
index 351ebb2cdff..00000000000
--- a/src/current/_includes/v20.1/app/django-basic-sample/settings.py
+++ /dev/null
@@ -1,130 +0,0 @@
-"""
-Django settings for myproject project.
-
-Generated by 'django-admin startproject' using Django 3.0.
-
-For more information on this file, see
-https://docs.djangoproject.com/en/3.0/topics/settings/
-
-For the full list of settings and their values, see
-https://docs.djangoproject.com/en/3.0/ref/settings/
-"""
-
-import os
-
-# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
-BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-
-# Quick-start development settings - unsuitable for production
-# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
-
-# SECURITY WARNING: keep the secret key used in production secret!
-SECRET_KEY = 'spl=g73)8-)ja%x*k1eje4d#&24#t)zao^s$6vc1rdk(e3t!e('
-
-# SECURITY WARNING: do not run with debug turned on in production!
-DEBUG = True
-
-ALLOWED_HOSTS = ['0.0.0.0']
-
-
-# Application definition
-
-INSTALLED_APPS = [
- 'django.contrib.admin',
- 'django.contrib.auth',
- 'django.contrib.contenttypes',
- 'django.contrib.sessions',
- 'django.contrib.messages',
- 'django.contrib.staticfiles',
- 'myproject',
-]
-
-MIDDLEWARE = [
- 'django.middleware.security.SecurityMiddleware',
- 'django.contrib.sessions.middleware.SessionMiddleware',
- 'django.middleware.common.CommonMiddleware',
- 'django.middleware.csrf.CsrfViewMiddleware',
- 'django.contrib.auth.middleware.AuthenticationMiddleware',
- 'django.contrib.messages.middleware.MessageMiddleware',
- 'django.middleware.clickjacking.XFrameOptionsMiddleware',
-]
-
-ROOT_URLCONF = 'myproject.urls'
-
-TEMPLATES = [
- {
- 'BACKEND': 'django.template.backends.django.DjangoTemplates',
- 'DIRS': [],
- 'APP_DIRS': True,
- 'OPTIONS': {
- 'context_processors': [
- 'django.template.context_processors.debug',
- 'django.template.context_processors.request',
- 'django.contrib.auth.context_processors.auth',
- 'django.contrib.messages.context_processors.messages',
- ],
- },
- },
-]
-
-WSGI_APPLICATION = 'myproject.wsgi.application'
-
-
-# Database
-# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
-
-DATABASES = {
- 'default': {
- 'ENGINE': 'django_cockroachdb',
- 'NAME': 'bank',
- 'USER': 'django',
- 'HOST': 'localhost',
- 'PORT': '26257',
- 'OPTIONS': {
- 'sslmode': 'require',
- 'sslrootcert': '/certs/ca.crt',
- 'sslcert': '/certs/client.django.crt',
- 'sslkey': '/certs/client.django.key',
- },
- },
-}
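-# Note: the 'django_cockroachdb' backend is provided by the
-# django-cockroachdb package; its major version should match the
-# installed Django version (assumption: check the package's README).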
-
-
-# Password validation
-# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
-
-AUTH_PASSWORD_VALIDATORS = [
- {
- 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
- },
- {
- 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
- },
- {
- 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
- },
- {
- 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
- },
-]
-
-
-# Internationalization
-# https://docs.djangoproject.com/en/3.0/topics/i18n/
-
-LANGUAGE_CODE = 'en-us'
-
-TIME_ZONE = 'UTC'
-
-USE_I18N = True
-
-USE_L10N = True
-
-USE_TZ = True
-
-
-# Static files (CSS, JavaScript, Images)
-# https://docs.djangoproject.com/en/3.0/howto/static-files/
-
-STATIC_URL = '/static/'
diff --git a/src/current/_includes/v20.1/app/django-basic-sample/urls.py b/src/current/_includes/v20.1/app/django-basic-sample/urls.py
deleted file mode 100644
index 9550d713ffa..00000000000
--- a/src/current/_includes/v20.1/app/django-basic-sample/urls.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from django.contrib import admin
-from django.urls import path
-
-from .views import CustomersView, OrdersView, PingView, ProductView
-
-urlpatterns = [
- path('admin/', admin.site.urls),
-
- path('ping/', PingView.as_view()),
-
- # Endpoints for customers URL.
- path('customer/', CustomersView.as_view(), name='customers'),
- path('customer/<int:id>/', CustomersView.as_view(), name='customers'),
-
- # Endpoints for products URL.
- path('product/', ProductView.as_view(), name='product'),
- path('product/<int:id>/', ProductView.as_view(), name='product'),
-
- path('order/', OrdersView.as_view(), name='order'),
-]
diff --git a/src/current/_includes/v20.1/app/django-basic-sample/views.py b/src/current/_includes/v20.1/app/django-basic-sample/views.py
deleted file mode 100644
index 52ce2d98500..00000000000
--- a/src/current/_includes/v20.1/app/django-basic-sample/views.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from django.http import JsonResponse, HttpResponse
-from django.utils.decorators import method_decorator
-from django.views.generic import View
-from django.views.decorators.csrf import csrf_exempt
-from django.db import Error, IntegrityError
-from django.db.transaction import atomic
-from psycopg2 import errorcodes
-import json
-import sys
-import time
-
-from .models import *
-
-# Warning: Do not use retry_on_exception in an inner nested transaction.
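-# (A CockroachDB retry error aborts the entire transaction, so re-running
-# only an inner atomic block cannot succeed; retries must wrap the
-# outermost transaction.)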
-def retry_on_exception(num_retries=3, on_failure=HttpResponse(status=500), delay_=0.5, backoff_=1.5):
- def retry(view):
- def wrapper(*args, **kwargs):
- delay = delay_
- for i in range(num_retries):
- try:
- return view(*args, **kwargs)
- except IntegrityError as ex:
- if i == num_retries - 1:
- return on_failure
- elif getattr(ex.__cause__, 'pgcode', '') == errorcodes.SERIALIZATION_FAILURE:
- time.sleep(delay)
- delay *= backoff_
- except Error as ex:
- return on_failure
- return wrapper
- return retry
-
-class PingView(View):
- def get(self, request, *args, **kwargs):
- return HttpResponse("python/django", status=200)
-
-@method_decorator(csrf_exempt, name='dispatch')
-class CustomersView(View):
- def get(self, request, id=None, *args, **kwargs):
- if id is None:
- customers = list(Customers.objects.values())
- else:
- customers = list(Customers.objects.filter(id=id).values())
- return JsonResponse(customers, safe=False)
-
- @retry_on_exception(3)
- @atomic
- def post(self, request, *args, **kwargs):
- form_data = json.loads(request.body.decode())
- name = form_data['name']
- c = Customers(name=name)
- c.save()
- return HttpResponse(status=200)
-
- @retry_on_exception(3)
- @atomic
- def delete(self, request, id=None, *args, **kwargs):
- if id is None:
- return HttpResponse(status=404)
- Customers.objects.filter(id=id).delete()
- return HttpResponse(status=200)
-
- # The PUT method is shadowed by the POST method, so there doesn't seem
- # to be a reason to include it.
-
-@method_decorator(csrf_exempt, name='dispatch')
-class ProductView(View):
- def get(self, request, id=None, *args, **kwargs):
- if id is None:
- products = list(Products.objects.values())
- else:
- products = list(Products.objects.filter(id=id).values())
- return JsonResponse(products, safe=False)
-
- @retry_on_exception(3)
- @atomic
- def post(self, request, *args, **kwargs):
- form_data = json.loads(request.body.decode())
- name, price = form_data['name'], form_data['price']
- p = Products(name=name, price=price)
- p.save()
- return HttpResponse(status=200)
-
- # The REST API outlined in the GitHub repo does not require PUT or
- # DELETE methods for /product/.
-
-@method_decorator(csrf_exempt, name='dispatch')
-class OrdersView(View):
- def get(self, request, id=None, *args, **kwargs):
- if id is None:
- orders = list(Orders.objects.values())
- else:
- orders = list(Orders.objects.filter(id=id).values())
- return JsonResponse(orders, safe=False)
-
- @retry_on_exception(3)
- @atomic
- def post(self, request, *args, **kwargs):
- form_data = json.loads(request.body.decode())
- c = Customers.objects.get(id=form_data['customer']['id'])
- o = Orders(subtotal=form_data['subtotal'], customer=c)
- o.save()
- for p in form_data['products']:
- p = Products.objects.get(id=p['id'])
- o.product.add(p)
- o.save()
- return HttpResponse(status=200)
diff --git a/src/current/_includes/v20.1/app/for-a-complete-example-go.md b/src/current/_includes/v20.1/app/for-a-complete-example-go.md
deleted file mode 100644
index 64803f686a9..00000000000
--- a/src/current/_includes/v20.1/app/for-a-complete-example-go.md
+++ /dev/null
@@ -1,4 +0,0 @@
-For complete examples, see:
-
-- [Build a Go App with CockroachDB](build-a-go-app-with-cockroachdb.html) (pgx)
-- [Build a Go App with CockroachDB and GORM](build-a-go-app-with-cockroachdb-gorm.html)
diff --git a/src/current/_includes/v20.1/app/for-a-complete-example-java.md b/src/current/_includes/v20.1/app/for-a-complete-example-java.md
deleted file mode 100644
index b4c63135ae0..00000000000
--- a/src/current/_includes/v20.1/app/for-a-complete-example-java.md
+++ /dev/null
@@ -1,4 +0,0 @@
-For complete examples, see:
-
-- [Build a Java App with CockroachDB](build-a-java-app-with-cockroachdb.html) (JDBC)
-- [Build a Java App with CockroachDB and Hibernate](build-a-java-app-with-cockroachdb-hibernate.html)
diff --git a/src/current/_includes/v20.1/app/for-a-complete-example-python.md b/src/current/_includes/v20.1/app/for-a-complete-example-python.md
deleted file mode 100644
index c647ce75df2..00000000000
--- a/src/current/_includes/v20.1/app/for-a-complete-example-python.md
+++ /dev/null
@@ -1,5 +0,0 @@
-For complete examples, see:
-
-- [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb.html) (psycopg2)
-- [Build a Python App with CockroachDB and SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html)
-- [Build a Python App with CockroachDB and Django](build-a-python-app-with-cockroachdb-django.html)
diff --git a/src/current/_includes/v20.1/app/gorm-sample.go b/src/current/_includes/v20.1/app/gorm-sample.go
deleted file mode 100644
index f54a9dd55af..00000000000
--- a/src/current/_includes/v20.1/app/gorm-sample.go
+++ /dev/null
@@ -1,206 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
- "math"
- "math/rand"
- "time"
-
- // Import GORM-related packages.
- "github.com/jinzhu/gorm"
- _ "github.com/jinzhu/gorm/dialects/postgres"
-
- // Necessary in order to check for transaction retry error codes.
- "github.com/lib/pq"
-)
-
-// Account is our model, which corresponds to the "accounts" database
-// table.
-type Account struct {
- ID int `gorm:"primary_key"`
- Balance int
-}
-
-// Functions of type `txnFunc` are passed as arguments to our
-// `runTransaction` wrapper that handles transaction retries for us
-// (see implementation below).
-type txnFunc func(*gorm.DB) error
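-
-// For example (a sketch, not part of the original sample), a caller
-// could define:
-//
-//   var addAccount txnFunc = func(tx *gorm.DB) error {
-//       return tx.Create(&Account{ID: 3, Balance: 500}).Error
-//   }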
-
-// This function is used for testing the transaction retry loop. It
-// can be deleted from production code.
-var forceRetryLoop txnFunc = func(db *gorm.DB) error {
-
- // The first statement in a transaction can be retried transparently
- // on the server, so we need to add a placeholder statement so that our
- // force_retry statement isn't the first one.
- if err := db.Exec("SELECT now()").Error; err != nil {
- return err
- }
- // Used to force a transaction retry.
- if err := db.Exec("SELECT crdb_internal.force_retry('1s'::INTERVAL)").Error; err != nil {
- return err
- }
- return nil
-}
-
-func transferFunds(db *gorm.DB, fromID int, toID int, amount int) error {
- var fromAccount Account
- var toAccount Account
-
- db.First(&fromAccount, fromID)
- db.First(&toAccount, toID)
-
- if fromAccount.Balance < amount {
- return fmt.Errorf("account %d balance %d is lower than transfer amount %d", fromAccount.ID, fromAccount.Balance, amount)
- }
-
- fromAccount.Balance -= amount
- toAccount.Balance += amount
-
- if err := db.Save(&fromAccount).Error; err != nil {
- return err
- }
- if err := db.Save(&toAccount).Error; err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- // Connect to the "bank" database as the "maxroach" user.
- const addr = "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt"
- db, err := gorm.Open("postgres", addr)
- if err != nil {
- log.Fatal(err)
- }
- defer db.Close()
-
- // Set to `true` and GORM will print out all DB queries.
- db.LogMode(false)
-
- // Automatically create the "accounts" table based on the Account
- // model.
- db.AutoMigrate(&Account{})
-
- // Insert two rows into the "accounts" table.
- var fromID = 1
- var toID = 2
- db.Create(&Account{ID: fromID, Balance: 1000})
- db.Create(&Account{ID: toID, Balance: 250})
-
- // The sequence of steps in this section is:
- // 1. Print account balances.
- // 2. Set up some Accounts and transfer funds between them inside
- // a transaction.
- // 3. Print account balances again to verify the transfer occurred.
-
- // Print balances before transfer.
- printBalances(db)
-
- // The amount to be transferred between the accounts.
- var amount = 100
-
- // Transfer funds between accounts. To handle potential
- // transaction retry errors, we wrap the call to `transferFunds`
- // in `runTransaction`, a wrapper which implements a retry loop
- // with exponential backoff around our access to the database (see
- // the implementation for details).
- if err := runTransaction(db,
- func(*gorm.DB) error {
- return transferFunds(db, fromID, toID, amount)
- },
- ); err != nil {
- // If the error is returned, it's either:
- // 1. Not a transaction retry error, i.e., some other kind
- // of database error that you should handle here.
- // 2. A transaction retry error that has occurred more than
- // N times (defined by the `maxRetries` variable inside
- // `runTransaction`), in which case you will need to figure
- // out why your database access is resulting in so much
- // contention (see 'Understanding and avoiding transaction
- // contention':
- // https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention)
- fmt.Println(err)
- }
-
- // Print balances after transfer to ensure that it worked.
- printBalances(db)
-
- // Delete accounts so we can start fresh when we want to run this
- // program again.
- deleteAccounts(db)
-}
-
-// Wrapper for a transaction. This automatically re-calls `fn` with
-// the open transaction as an argument as long as the database server
-// asks for the transaction to be retried.
-func runTransaction(db *gorm.DB, fn txnFunc) error {
- var maxRetries = 3
- for retries := 0; retries <= maxRetries; retries++ {
- if retries == maxRetries {
- return fmt.Errorf("hit max of %d retries, aborting", retries)
- }
- txn := db.Begin()
- if err := fn(txn); err != nil {
- // We need to cast GORM's db.Error to *pq.Error so we can
- // detect the Postgres transaction retry error code and
- // handle retries appropriately.
- pqErr := err.(*pq.Error)
- if pqErr.Code == "40001" {
- // Since this is a transaction retry error, we
- // ROLLBACK the transaction and sleep a little before
- // trying again. Each time through the loop we sleep
- // for a little longer than the last time
- // (A.K.A. exponential backoff).
- txn.Rollback()
- var sleepMs = math.Pow(2, float64(retries)) * 100 * (rand.Float64() + 0.5)
- fmt.Printf("Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMs)
- time.Sleep(time.Millisecond * time.Duration(sleepMs))
- } else {
- // If it's not a retry error, it's some other sort of
- // DB interaction error that needs to be handled by
- // the caller.
- return err
- }
- } else {
- // All went well, so we try to commit and break out of the
- // retry loop if possible.
- if err := txn.Commit().Error; err != nil {
- pqErr := err.(*pq.Error)
- if pqErr.Code == "40001" {
- // However, our attempt to COMMIT could also
- // result in a retry error, in which case we
- // continue back through the loop and try again.
- continue
- } else {
- // If it's not a retry error, it's some other sort
- // of DB interaction error that needs to be
- // handled by the caller.
- return err
- }
- }
- break
- }
- }
- return nil
-}
-
-func printBalances(db *gorm.DB) {
- var accounts []Account
- db.Find(&accounts)
- fmt.Printf("Balance at '%s':\n", time.Now())
- for _, account := range accounts {
- fmt.Printf("%d %d\n", account.ID, account.Balance)
- }
-}
-
-func deleteAccounts(db *gorm.DB) error {
- // Used to tear down the accounts table so we can re-run this
- // program.
- err := db.Exec("DELETE from accounts where ID > 0").Error
- if err != nil {
- return err
- }
- return nil
-}
diff --git a/src/current/_includes/v20.1/app/hibernate-basic-sample/Sample.java b/src/current/_includes/v20.1/app/hibernate-basic-sample/Sample.java
deleted file mode 100644
index 60a6b54f984..00000000000
--- a/src/current/_includes/v20.1/app/hibernate-basic-sample/Sample.java
+++ /dev/null
@@ -1,236 +0,0 @@
-package com.cockroachlabs;
-
-import org.hibernate.Session;
-import org.hibernate.SessionFactory;
-import org.hibernate.Transaction;
-import org.hibernate.JDBCException;
-import org.hibernate.cfg.Configuration;
-
-import java.util.*;
-import java.util.function.Function;
-
-import javax.persistence.Column;
-import javax.persistence.Entity;
-import javax.persistence.Id;
-import javax.persistence.Table;
-
-public class Sample {
-
- private static final Random RAND = new Random();
- private static final boolean FORCE_RETRY = false;
- private static final String RETRY_SQL_STATE = "40001";
- private static final int MAX_ATTEMPT_COUNT = 6;
-
- // Account is our model, which corresponds to the "accounts" database table.
- @Entity
- @Table(name="accounts")
- public static class Account {
- @Id
- @Column(name="id")
- public long id;
-
- public long getId() {
- return id;
- }
-
- @Column(name="balance")
- public long balance;
- public long getBalance() {
- return balance;
- }
- public void setBalance(long newBalance) {
- this.balance = newBalance;
- }
-
- // Convenience constructor.
- public Account(int id, int balance) {
- this.id = id;
- this.balance = balance;
- }
-
- // Hibernate needs a default (no-arg) constructor to create model objects.
- public Account() {}
- }
-
- private static Function<Session, Long> addAccounts() throws JDBCException {
- Function<Session, Long> f = s -> {
- long rv = 0;
- try {
- s.save(new Account(1, 1000));
- s.save(new Account(2, 250));
- s.save(new Account(3, 314159));
- rv = 1;
- System.out.printf("APP: addAccounts() --> %d\n", rv);
- } catch (JDBCException e) {
- throw e;
- }
- return rv;
- };
- return f;
- }
-
- private static Function<Session, Long> transferFunds(long fromId, long toId, long amount) throws JDBCException {
- Function<Session, Long> f = s -> {
- long rv = 0;
- try {
- Account fromAccount = (Account) s.get(Account.class, fromId);
- Account toAccount = (Account) s.get(Account.class, toId);
- if (!(amount > fromAccount.getBalance())) {
- fromAccount.balance -= amount;
- toAccount.balance += amount;
- s.save(fromAccount);
- s.save(toAccount);
- rv = amount;
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
- }
- } catch (JDBCException e) {
- throw e;
- }
- return rv;
- };
- return f;
- }
-
- // Test our retry handling logic if FORCE_RETRY is true. This
- // method is only used to test the retry logic. It is not
- // intended for production code.
- private static Function<Session, Long> forceRetryLogic() throws JDBCException {
- Function<Session, Long> f = s -> {
- long rv = -1;
- try {
- System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
- s.createNativeQuery("SELECT crdb_internal.force_retry('1s')").executeUpdate();
- } catch (JDBCException e) {
- System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
- throw e;
- }
- return rv;
- };
- return f;
- }
-
- private static Function<Session, Long> getAccountBalance(long id) throws JDBCException {
- Function<Session, Long> f = s -> {
- long balance;
- try {
- Account account = s.get(Account.class, id);
- balance = account.getBalance();
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
- } catch (JDBCException e) {
- throw e;
- }
- return balance;
- };
- return f;
- }
-
- // Run SQL code in a way that automatically handles the
- // transaction retry logic so we do not have to duplicate it in
- // various places.
- private static long runTransaction(Session session, Function<Session, Long> fn) {
- long rv = 0;
- int attemptCount = 0;
-
- while (attemptCount < MAX_ATTEMPT_COUNT) {
- attemptCount++;
-
- if (attemptCount > 1) {
- System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount);
- }
-
- Transaction txn = session.beginTransaction();
- System.out.printf("APP: BEGIN;\n");
-
- if (attemptCount == MAX_ATTEMPT_COUNT) {
- String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
- throw new RuntimeException(err);
- }
-
- // This block is only used to test the retry logic.
- // It is not necessary in production code. See also
- // the method 'testRetryLogic()'.
- if (FORCE_RETRY) {
- session.createNativeQuery("SELECT now()").list();
- }
-
- try {
- rv = fn.apply(session);
- if (rv != -1) {
- txn.commit();
- System.out.printf("APP: COMMIT;\n");
- break;
- }
- } catch (JDBCException e) {
- if (RETRY_SQL_STATE.equals(e.getSQLState())) {
- // Since this is a transaction retry error, we
- // roll back the transaction and sleep a little
- // before trying again. Each time through the
- // loop we sleep for a little longer than the last
- // time (A.K.A. exponential backoff).
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", e.getSQLState(), e.getMessage(), attemptCount);
- System.out.printf("APP: ROLLBACK;\n");
- txn.rollback();
- int sleepMillis = (int)(Math.pow(2, attemptCount) * 100) + RAND.nextInt(100);
- System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
- try {
- Thread.sleep(sleepMillis);
- } catch (InterruptedException ignored) {
- // no-op
- }
- rv = -1;
- } else {
- throw e;
- }
- }
- }
- return rv;
- }
-
- public static void main(String[] args) {
- // Create a SessionFactory based on our hibernate.cfg.xml configuration
- // file, which defines how to connect to the database.
- SessionFactory sessionFactory =
- new Configuration()
- .configure("hibernate.cfg.xml")
- .addAnnotatedClass(Account.class)
- .buildSessionFactory();
-
- try (Session session = sessionFactory.openSession()) {
- long fromAccountId = 1;
- long toAccountId = 2;
- long transferAmount = 100;
-
- if (FORCE_RETRY) {
- System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
- runTransaction(session, forceRetryLogic());
- } else {
-
- runTransaction(session, addAccounts());
- long fromBalance = runTransaction(session, getAccountBalance(fromAccountId));
- long toBalance = runTransaction(session, getAccountBalance(toAccountId));
- if (fromBalance != -1 && toBalance != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
- }
-
- // Transfer $100 from account 1 to account 2
- long transferResult = runTransaction(session, transferFunds(fromAccountId, toAccountId, transferAmount));
- if (transferResult != -1) {
- // Success!
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
-
- long fromBalanceAfter = runTransaction(session, getAccountBalance(fromAccountId));
- long toBalanceAfter = runTransaction(session, getAccountBalance(toAccountId));
- if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
- }
- }
- }
- } finally {
- sessionFactory.close();
- }
- }
-}
diff --git a/src/current/_includes/v20.1/app/hibernate-basic-sample/build.gradle b/src/current/_includes/v20.1/app/hibernate-basic-sample/build.gradle
deleted file mode 100644
index b76c29abac1..00000000000
--- a/src/current/_includes/v20.1/app/hibernate-basic-sample/build.gradle
+++ /dev/null
@@ -1,16 +0,0 @@
-group 'com.cockroachlabs'
-version '1.0'
-
-apply plugin: 'java'
-apply plugin: 'application'
-
-mainClassName = 'com.cockroachlabs.Sample'
-
-repositories {
- mavenCentral()
-}
-
-dependencies {
- implementation 'org.hibernate:hibernate-core:5.4.19.Final'
- implementation 'org.postgresql:postgresql:42.2.9'
-}
diff --git a/src/current/_includes/v20.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v20.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz
deleted file mode 100644
index 04e17e27e22..00000000000
Binary files a/src/current/_includes/v20.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ
diff --git a/src/current/_includes/v20.1/app/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v20.1/app/hibernate-basic-sample/hibernate.cfg.xml
deleted file mode 100644
index f3b20936d2e..00000000000
--- a/src/current/_includes/v20.1/app/hibernate-basic-sample/hibernate.cfg.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<?xml version='1.0' encoding='utf-8'?>
-<!DOCTYPE hibernate-configuration PUBLIC
- "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
- "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
-<hibernate-configuration>
- <session-factory>
- <!-- Database connection settings -->
- <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
- <property name="hibernate.dialect">org.hibernate.dialect.CockroachDB201Dialect</property>
- <property name="hibernate.connection.url">jdbc:postgresql://localhost:26257/bank?ssl=true&amp;sslmode=require&amp;sslrootcert=certs/ca.crt&amp;sslkey=certs/client.maxroach.key.pk8&amp;sslcert=certs/client.maxroach.crt</property>
- <property name="hibernate.connection.username">maxroach</property>
-
- <!-- Create the 'accounts' table from the 'Account' model on startup -->
- <property name="hibernate.hbm2ddl.auto">create</property>
-
- <!-- Echo executed SQL to stdout for debugging -->
- <property name="hibernate.show_sql">true</property>
- <property name="hibernate.format_sql">true</property>
- </session-factory>
-</hibernate-configuration>
diff --git a/src/current/_includes/v20.1/app/insecure/BasicExample.java b/src/current/_includes/v20.1/app/insecure/BasicExample.java
deleted file mode 100644
index 855e7c87018..00000000000
--- a/src/current/_includes/v20.1/app/insecure/BasicExample.java
+++ /dev/null
@@ -1,433 +0,0 @@
-import java.util.*;
-import java.time.*;
-import java.sql.*;
-import javax.sql.DataSource;
-
-import org.postgresql.ds.PGSimpleDataSource;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac BasicExample.java && java BasicExample
-
- To build the javadoc:
-
- $ javadoc -package -cp .:./path/to/postgresql.jar BasicExample.java
-
- At a high level, this code consists of two classes:
-
- 1. BasicExample, which is where the application logic lives.
-
- 2. BasicExampleDAO, which is used by the application to access the
- data store.
-
-*/
-
-public class BasicExample {
-
- public static void main(String[] args) {
-
- // Configure the database connection.
- PGSimpleDataSource ds = new PGSimpleDataSource();
- ds.setServerName("localhost");
- ds.setPortNumber(26257);
- ds.setDatabaseName("bank");
- ds.setUser("maxroach");
- ds.setPassword(null);
- ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string
- ds.setApplicationName("BasicExample");
-
- // Create DAO.
- BasicExampleDAO dao = new BasicExampleDAO(ds);
-
- // Test our retry handling logic if FORCE_RETRY is true. This
- // method is only used to test the retry logic. It is not
- // necessary in production code.
- dao.testRetryHandling();
-
- // Set up the 'accounts' table.
- dao.createAccounts();
-
- // Insert a few accounts "by hand", using INSERTs on the backend.
- Map<String, String> balances = new HashMap<>();
- balances.put("1", "1000");
- balances.put("2", "250");
- int updatedAccounts = dao.updateAccounts(balances);
- System.out.printf("BasicExampleDAO.updateAccounts:\n => %s total updated accounts\n", updatedAccounts);
-
- // How much money is in these accounts?
- int balance1 = dao.getAccountBalance(1);
- int balance2 = dao.getAccountBalance(2);
- System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
-
- // Transfer $100 from account 1 to account 2
- int fromAccount = 1;
- int toAccount = 2;
- int transferAmount = 100;
- int transferredAccounts = dao.transferFunds(fromAccount, toAccount, transferAmount);
- if (transferredAccounts != -1) {
- System.out.printf("BasicExampleDAO.transferFunds:\n => $%s transferred between accounts %s and %s, %s rows updated\n", transferAmount, fromAccount, toAccount, transferredAccounts);
- }
-
- balance1 = dao.getAccountBalance(1);
- balance2 = dao.getAccountBalance(2);
- System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
-
- // Bulk insertion example using JDBC's batching support.
- int totalRowsInserted = dao.bulkInsertRandomAccountData();
- System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n => finished, %s total rows inserted\n", totalRowsInserted);
-
- // Print out 10 account values.
- int accountsRead = dao.readAccounts(10);
-
- // Drop the 'accounts' table so this code can be run again.
- dao.tearDown();
- }
-}
-
-/**
- * Data access object used by 'BasicExample'. Abstraction over some
- * common CockroachDB operations, including:
- *
- * - Auto-handling transaction retries in the 'runSQL' method
- *
- * - Example of bulk inserts in the 'bulkInsertRandomAccountData'
- * method
- */
-
-class BasicExampleDAO {
-
- private static final int MAX_RETRY_COUNT = 3;
- private static final String SAVEPOINT_NAME = "cockroach_restart";
- private static final String RETRY_SQL_STATE = "40001";
- private static final boolean FORCE_RETRY = false;
-
- private final DataSource ds;
-
- BasicExampleDAO(DataSource ds) {
- this.ds = ds;
- }
-
- /**
- Used to test the retry logic in 'runSQL'. It is not necessary
- in production code.
- */
- void testRetryHandling() {
- if (this.FORCE_RETRY) {
- runSQL("SELECT crdb_internal.force_retry('1s':::INTERVAL)");
- }
- }
-
- /**
- * Run SQL code in a way that automatically handles the
- * transaction retry logic so we do not have to duplicate it in
- * various places.
- *
- * @param sqlCode a String containing the SQL code you want to
- * execute. Can have placeholders, e.g., "INSERT INTO accounts
- * (id, balance) VALUES (?, ?)".
- *
- * @param args String Varargs to fill in the SQL code's
- * placeholders.
- * @return Integer Number of rows updated, or -1 if an error is thrown.
- */
- public Integer runSQL(String sqlCode, String... args) {
-
- // This block is only used to emit class and method names in
- // the program output. It is not necessary in production
- // code.
- StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
- StackTraceElement elem = stacktrace[2];
- String callerClass = elem.getClassName();
- String callerMethod = elem.getMethodName();
-
- int rv = 0;
-
- try (Connection connection = ds.getConnection()) {
-
- // We're managing the commit lifecycle ourselves so we can
- // automatically issue transaction retries.
- connection.setAutoCommit(false);
-
- int retryCount = 0;
-
- while (retryCount < MAX_RETRY_COUNT) {
-
- Savepoint sp = connection.setSavepoint(SAVEPOINT_NAME);
-
- // This block is only used to test the retry logic.
- // It is not necessary in production code. See also
- // the method 'testRetryHandling()'.
- if (FORCE_RETRY) {
- forceRetry(connection); // SELECT 1
- }
-
- try (PreparedStatement pstmt = connection.prepareStatement(sqlCode)) {
-
- // Loop over the args and insert them into the
- // prepared statement based on their types. In
- // this simple example we classify the argument
- // types as "integers" and "everything else"
- // (a.k.a. strings).
- for (int i=0; i<args.length; i++) {
- int place = i + 1;
- String arg = args[i];
-
- try {
- // If this is an int, pass it in as such.
- int val = Integer.parseInt(arg);
- pstmt.setInt(place, val);
- } catch (NumberFormatException e) {
- // Otherwise, pass it in as a string.
- pstmt.setString(place, arg);
- }
- }
-
- if (pstmt.execute()) {
- // We know that `pstmt.getResultSet()` will not return
- // `null` if `pstmt.execute()` returned true.
- ResultSet rs = pstmt.getResultSet();
- ResultSetMetaData rsmeta = rs.getMetaData();
- int colCount = rsmeta.getColumnCount();
-
- // This printed output is for debugging and/or demonstration
- // purposes only. It would not be necessary in production code.
- System.out.printf("\n%s.%s:\n '%s'\n", callerClass, callerMethod, pstmt);
-
- while (rs.next()) {
- rv++;
- for (int i = 1; i <= colCount; i++) {
- String name = rsmeta.getColumnName(i);
- String type = rsmeta.getColumnTypeName(i);
-
- // In this "bank account" example we know we are only handling
- // integer values (technically 64-bit INT8s, the CockroachDB
- // default). This code could be made into a switch statement
- // to handle the various SQL types needed by the application.
- if ("int8".equals(type)) {
- int val = rs.getInt(name);
-
- // This printed output is for debugging and/or demonstration
- // purposes only. It would not be necessary in production code.
- System.out.printf(" %-8s => %10s\n", name, val);
- }
- }
- }
- } else {
- int updateCount = pstmt.getUpdateCount();
- rv += updateCount;
-
- // This printed output is for debugging and/or demonstration
- // purposes only. It would not be necessary in production code.
- System.out.printf("\n%s.%s:\n '%s'\n", callerClass, callerMethod, pstmt);
- }
-
- connection.releaseSavepoint(sp);
- connection.commit();
- break;
-
- } catch (SQLException e) {
-
- if (RETRY_SQL_STATE.equals(e.getSQLState())) {
- System.out.printf("retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n",
- e.getSQLState(), e.getMessage(), retryCount);
- connection.rollback(sp);
- retryCount++;
- rv = -1;
- } else {
- rv = -1;
- throw e;
- }
- }
- }
- } catch (SQLException e) {
- System.out.printf("BasicExampleDAO.runSQL ERROR: { state => %s, cause => %s, message => %s }\n",
- e.getSQLState(), e.getCause(), e.getMessage());
- rv = -1;
- }
-
- return rv;
- }
-
- /**
- * Helper method called from 'runSQL' when FORCE_RETRY is true. It simply issues
- * a "SELECT 1" inside the transaction to force a retry. This is
- * necessary to take the connection's session out of the AutoRetry
- * state, since otherwise the other statements in the session will
- * be retried automatically, and the client (us) will not see a
- * retry error. Note that this information is taken from the
- * following test:
- * https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/logic_test/manual_retry
- *
- * @param connection Connection
- */
- private void forceRetry(Connection connection) throws SQLException {
- try (PreparedStatement statement = connection.prepareStatement("SELECT 1")){
- statement.executeQuery();
- }
- }
-
- /**
- * Creates a fresh, empty accounts table in the database.
- */
- public void createAccounts() {
- runSQL("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))");
- };
-
- /**
- * Update accounts by passing in a Map of (ID, Balance) pairs.
- *
- * @param accounts (Map)
- * @return The number of updated accounts (int)
- */
- public int updateAccounts(Map<String, String> accounts) {
- int rows = 0;
- for (Map.Entry<String, String> account : accounts.entrySet()) {
-
- String k = account.getKey();
- String v = account.getValue();
-
- String[] args = {k, v};
- rows += runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", args);
- }
- return rows;
- }
-
- /**
- * Transfer funds between one account and another. Handles
- * transaction retries in case of conflict automatically on the
- * backend.
- * @param fromId (int)
- * @param toId (int)
- * @param amount (int)
- * @return The number of updated accounts (int)
- */
- public int transferFunds(int fromId, int toId, int amount) {
- String sFromId = Integer.toString(fromId);
- String sToId = Integer.toString(toId);
- String sAmount = Integer.toString(amount);
-
- // We have omitted explicit BEGIN/COMMIT statements for
- // brevity. Individual statements are treated as implicit
- // transactions by CockroachDB (see
- // https://www.cockroachlabs.com/docs/stable/transactions.html#individual-statements).
-
- String sqlCode = "UPSERT INTO accounts (id, balance) VALUES" +
- "(?, ((SELECT balance FROM accounts WHERE id = ?) - ?))," +
- "(?, ((SELECT balance FROM accounts WHERE id = ?) + ?))";
-
- return runSQL(sqlCode, sFromId, sFromId, sAmount, sToId, sToId, sAmount);
- }
-
- /**
- * Get the account balance for one account.
- *
- * We skip using the retry logic in 'runSQL()' here for the
- * following reasons:
- *
- * 1. Since this is a single read ("SELECT"), we do not expect any
- * transaction conflicts to handle
- *
- * 2. We need to return the balance as an integer
- *
- * @param id (int)
- * @return balance (int)
- */
- public int getAccountBalance(int id) {
- int balance = 0;
-
- try (Connection connection = ds.getConnection()) {
-
- // Check the current balance.
- ResultSet res = connection.createStatement()
- .executeQuery("SELECT balance FROM accounts WHERE id = "
- + id);
- if(!res.next()) {
- System.out.printf("No users in the table with id %i", id);
- } else {
- balance = res.getInt("balance");
- }
- } catch (SQLException e) {
- System.out.printf("BasicExampleDAO.getAccountBalance ERROR: { state => %s, cause => %s, message => %s }\n",
- e.getSQLState(), e.getCause(), e.getMessage());
- }
-
- return balance;
- }
-
- /**
- * Insert randomized account data (ID, balance) using the JDBC
- * fast path for bulk inserts. The fastest way to get data into
- * CockroachDB is the IMPORT statement. However, if you must bulk
- * ingest from the application using INSERT statements, the best
- * option is the method shown here. It will require the following:
- *
- * 1. Add `rewriteBatchedInserts=true` to your JDBC connection
- * settings (see the connection info in 'BasicExample.main').
- *
- * 2. Inserting in batches of 128 rows, as used inside this method
- * (see BATCH_SIZE), since the PGJDBC driver's logic works best
- * with powers of two, such that a batch of size 128 can be 6x
- * faster than a batch of size 250.
- * @return The number of new accounts inserted (int)
- */
- public int bulkInsertRandomAccountData() {
-
- Random random = new Random();
- int BATCH_SIZE = 128;
- int totalNewAccounts = 0;
-
- try (Connection connection = ds.getConnection()) {
-
- // We're managing the commit lifecycle ourselves so we can
- // control the size of our batch inserts.
- connection.setAutoCommit(false);
-
- // In this example we are adding 500 rows to the database,
- // but it could be any number. What's important is that
- // the batch size is 128.
- try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
- for (int i=0; i<=(500/BATCH_SIZE);i++) {
- for (int j=0; j<BATCH_SIZE; j++) {
- int id = random.nextInt(1000000000);
- int balance = random.nextInt(1000000000);
- pstmt.setInt(1, id);
- pstmt.setInt(2, balance);
- pstmt.addBatch();
- }
- int[] count = pstmt.executeBatch();
- totalNewAccounts += count.length;
- System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n => %s row(s) updated in this batch\n", count.length);
- }
- connection.commit();
- } catch (SQLException e) {
- System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
- e.getSQLState(), e.getCause(), e.getMessage());
- }
- } catch (SQLException e) {
- System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
- e.getSQLState(), e.getCause(), e.getMessage());
- }
- return totalNewAccounts;
- }
-
- /**
- * Read out a subset of accounts from the data store.
- *
- * @param limit (int)
- * @return Number of accounts read (int)
- */
- public int readAccounts(int limit) {
- return runSQL("SELECT id, balance FROM accounts LIMIT ?", Integer.toString(limit));
- }
-
- /**
- * Perform any necessary cleanup of the data store so it can be
- * used again.
- */
- public void tearDown() {
- runSQL("DROP TABLE accounts;");
- }
-}
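
The DAO above implements CockroachDB's client-side transaction-retry protocol: disable autocommit, set a `cockroach_restart` savepoint, run the work, and on SQLSTATE `40001` roll back to the savepoint and try again, committing only once the work succeeds. A condensed sketch of that loop, with illustrative names (`RetrySketch`, `runWithRetry`) that are not part of the sample:

~~~ java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Savepoint;
import javax.sql.DataSource;

// Condensed sketch of the retry protocol in BasicExampleDAO.runSQL above.
class RetrySketch {
    static final String RETRY_SQL_STATE = "40001"; // CockroachDB's retryable-error code

    static boolean runWithRetry(DataSource ds, String sql) throws SQLException {
        try (Connection conn = ds.getConnection()) {
            conn.setAutoCommit(false); // manage commit/rollback by hand
            for (int attempt = 0; attempt < 3; attempt++) {
                Savepoint sp = conn.setSavepoint("cockroach_restart");
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.execute();
                    conn.releaseSavepoint(sp);
                    conn.commit();      // success: make the work durable
                    return true;
                } catch (SQLException e) {
                    if (RETRY_SQL_STATE.equals(e.getSQLState())) {
                        conn.rollback(sp); // retryable: restart from the savepoint
                    } else {
                        throw e;           // anything else is the caller's problem
                    }
                }
            }
            conn.rollback(); // exhausted our attempts; abandon the transaction
        }
        return false;
    }
}
~~~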
diff --git a/src/current/_includes/v20.1/app/insecure/activerecord-basic-sample.rb b/src/current/_includes/v20.1/app/insecure/activerecord-basic-sample.rb
deleted file mode 100644
index b664b4c06e1..00000000000
--- a/src/current/_includes/v20.1/app/insecure/activerecord-basic-sample.rb
+++ /dev/null
@@ -1,56 +0,0 @@
-# Use bundler inline - these would typically go in a Gemfile
-require 'bundler/inline'
-gemfile do
- source 'https://rubygems.org'
- gem 'pg'
- gem 'activerecord', '5.2.0'
-
- # CockroachDB ActiveRecord adapter dependency
- gem 'activerecord-cockroachdb-adapter', '5.2.0'
-end
-
-require 'pg'
-require 'active_record'
-require 'activerecord-cockroachdb-adapter'
-
-# Connect to CockroachDB using ActiveRecord.
-# In Rails, this configuration would go in config/database.yml as usual.
-ActiveRecord::Base.establish_connection(
-
- # Specify the CockroachDB ActiveRecord adapter
- adapter: 'cockroachdb',
- username: 'maxroach',
- database: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-# Define the Account model.
-# In Rails, this would go in app/models/ as usual.
-class Account < ActiveRecord::Base
- validates :balance, presence: true
-end
-
-# Define a migration for the accounts table.
-# In Rails, this would go in db/migrate/ as usual.
-class Schema < ActiveRecord::Migration[5.0]
- def change
- create_table :accounts, force: true do |t|
- t.integer :balance
- end
- end
-end
-
-# Run the schema migration programmatically.
-# In Rails, this would be done via rake db:migrate as usual.
-Schema.new.change()
-
-# Create two accounts, inserting two rows into the accounts table.
-Account.create!(id: 1, balance: 1000)
-Account.create!(id: 2, balance: 250)
-
-# Retrieve accounts and print out the balances
-Account.all.each do |acct|
- puts "account: #{acct.id} balance: #{acct.balance}"
-end
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/insecure/basic-sample-pgx.go b/src/current/_includes/v20.1/app/insecure/basic-sample-pgx.go
deleted file mode 100644
index 72c18f019a3..00000000000
--- a/src/current/_includes/v20.1/app/insecure/basic-sample-pgx.go
+++ /dev/null
@@ -1,50 +0,0 @@
-package main
-
-import (
- "context"
- "fmt"
- "log"
-
- "github.com/jackc/pgx/v4"
-)
-
-func main() {
- config, err := pgx.ParseConfig("postgresql://maxroach@localhost:26257/bank?sslmode=disable")
- if err != nil {
- log.Fatal("error configuring the database: ", err)
- }
-
- // Connect to the "bank" database.
- conn, err := pgx.ConnectConfig(context.Background(), config)
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer conn.Close(context.Background())
-
- // Create the "accounts" table.
- if _, err := conn.Exec(context.Background(),
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
- log.Fatal(err)
- }
-
- // Insert two rows into the "accounts" table.
- if _, err := conn.Exec(context.Background(),
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
- log.Fatal(err)
- }
-
- // Print out the balances.
- rows, err := conn.Query(context.Background(), "SELECT id, balance FROM accounts")
- if err != nil {
- log.Fatal(err)
- }
- defer rows.Close()
- fmt.Println("Initial balances:")
- for rows.Next() {
- var id, balance int
- if err := rows.Scan(&id, &balance); err != nil {
- log.Fatal(err)
- }
- fmt.Printf("%d %d\n", id, balance)
- }
-}
diff --git a/src/current/_includes/v20.1/app/insecure/basic-sample.clj b/src/current/_includes/v20.1/app/insecure/basic-sample.clj
deleted file mode 100644
index 182b78b675e..00000000000
--- a/src/current/_includes/v20.1/app/insecure/basic-sample.clj
+++ /dev/null
@@ -1,31 +0,0 @@
-(ns test.test
- (:require [clojure.java.jdbc :as j]
- [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:dbtype "postgresql"
- :dbname "bank"
- :host "localhost"
- :port "26257"
- :user "maxroach"})
-
-(defn test-basic []
- ;; Connect to the cluster and run the code below with
- ;; the connection object bound to 'conn'.
- (j/with-db-connection [conn db-spec]
-
- ;; Insert two rows into the "accounts" table.
- (j/insert! conn :accounts {:id 1 :balance 1000})
- (j/insert! conn :accounts {:id 2 :balance 250})
-
- ;; Print out the balances.
- (println "Initial balances:")
- (->> (j/query conn ["SELECT id, balance FROM accounts"])
- (map println)
- doall)
-
- ))
-
-
-(defn -main [& args]
- (test-basic))
diff --git a/src/current/_includes/v20.1/app/insecure/basic-sample.cpp b/src/current/_includes/v20.1/app/insecure/basic-sample.cpp
deleted file mode 100644
index a06d84d1a25..00000000000
--- a/src/current/_includes/v20.1/app/insecure/basic-sample.cpp
+++ /dev/null
@@ -1,39 +0,0 @@
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-int main() {
- try {
- // Connect to the "bank" database.
- pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
-
- pqxx::nontransaction w(c);
-
- // Create the "accounts" table.
- w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
- // Insert two rows into the "accounts" table.
- w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
- // Print out the balances.
- cout << "Initial balances:" << endl;
- pqxx::result r = w.exec("SELECT id, balance FROM accounts");
- for (auto row : r) {
- cout << row[0].as<int>() << ' ' << row[1].as<int>() << endl;
- }
-
- w.commit(); // Note this doesn't do anything
- // for a nontransaction, but is still required.
- }
- catch (const exception &e) {
- cerr << e.what() << endl;
- return 1;
- }
- cout << "Success" << endl;
- return 0;
-}
diff --git a/src/current/_includes/v20.1/app/insecure/basic-sample.cs b/src/current/_includes/v20.1/app/insecure/basic-sample.cs
deleted file mode 100644
index b7cf8e1ff3f..00000000000
--- a/src/current/_includes/v20.1/app/insecure/basic-sample.cs
+++ /dev/null
@@ -1,50 +0,0 @@
-using System;
-using System.Data;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.SslMode = SslMode.Disable;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- Simple(connStringBuilder.ConnectionString);
- }
-
- static void Simple(string connString)
- {
- using (var conn = new NpgsqlConnection(connString))
- {
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using (var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
- }
-}
diff --git a/src/current/_includes/v20.1/app/insecure/basic-sample.go b/src/current/_includes/v20.1/app/insecure/basic-sample.go
deleted file mode 100644
index 6a647f51641..00000000000
--- a/src/current/_includes/v20.1/app/insecure/basic-sample.go
+++ /dev/null
@@ -1,44 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "log"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- // Connect to the "bank" database.
- db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
-
- // Create the "accounts" table.
- if _, err := db.Exec(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
- log.Fatal(err)
- }
-
- // Insert two rows into the "accounts" table.
- if _, err := db.Exec(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
- log.Fatal(err)
- }
-
- // Print out the balances.
- rows, err := db.Query("SELECT id, balance FROM accounts")
- if err != nil {
- log.Fatal(err)
- }
- defer rows.Close()
- fmt.Println("Initial balances:")
- for rows.Next() {
- var id, balance int
- if err := rows.Scan(&id, &balance); err != nil {
- log.Fatal(err)
- }
- fmt.Printf("%d %d\n", id, balance)
- }
-}
diff --git a/src/current/_includes/v20.1/app/insecure/basic-sample.php b/src/current/_includes/v20.1/app/insecure/basic-sample.php
deleted file mode 100644
index db5a26e3111..00000000000
--- a/src/current/_includes/v20.1/app/insecure/basic-sample.php
+++ /dev/null
@@ -1,20 +0,0 @@
-<?php
-try {
- $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
- 'maxroach', null, array(
- PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- PDO::ATTR_PERSISTENT => true
- ));
-
- $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
-
- print "Account balances:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v20.1/app/insecure/basic-sample.rb b/src/current/_includes/v20.1/app/insecure/basic-sample.rb
deleted file mode 100644
index 570ea610bb1..00000000000
--- a/src/current/_includes/v20.1/app/insecure/basic-sample.rb
+++ /dev/null
@@ -1,28 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-# Create the "accounts" table.
-conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
-
-# Insert two rows into the "accounts" table.
-conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
-
-# Print out the balances.
-puts 'Initial balances:'
-conn.exec('SELECT id, balance FROM accounts') do |res|
- res.each do |row|
- puts "id: #{row['id']} balance: #{row['balance']}"
- end
-end
-
-# Close the database connection.
-conn.close()
diff --git a/src/current/_includes/v20.1/app/insecure/basic-sample.rs b/src/current/_includes/v20.1/app/insecure/basic-sample.rs
deleted file mode 100644
index 8b7c3b115a9..00000000000
--- a/src/current/_includes/v20.1/app/insecure/basic-sample.rs
+++ /dev/null
@@ -1,32 +0,0 @@
-use postgres::{Client, NoTls};
-
-fn main() {
- let mut client = Client::connect("postgresql://maxroach@localhost:26257/bank", NoTls).unwrap();
-
- // Create the "accounts" table.
- client
- .execute(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)",
- &[],
- )
- .unwrap();
-
- // Insert two rows into the "accounts" table.
- client
- .execute(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)",
- &[],
- )
- .unwrap();
-
- // Print out the balances.
- println!("Initial balances:");
- for row in &client
- .query("SELECT id, balance FROM accounts", &[])
- .unwrap()
- {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v20.1/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v20.1/app/insecure/create-maxroach-user-and-bank-database.md
deleted file mode 100644
index 5beb4cdd508..00000000000
--- a/src/current/_includes/v20.1/app/insecure/create-maxroach-user-and-bank-database.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Start the [built-in SQL shell](cockroach-sql.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE USER IF NOT EXISTS maxroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-Give the `maxroach` user the necessary permissions:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT ALL ON DATABASE bank TO maxroach;
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v20.1/app/insecure/django-basic-sample/models.py b/src/current/_includes/v20.1/app/insecure/django-basic-sample/models.py
deleted file mode 100644
index 6068f8bbb8e..00000000000
--- a/src/current/_includes/v20.1/app/insecure/django-basic-sample/models.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from django.db import models
-
-class Customers(models.Model):
- id = models.AutoField(primary_key=True)
- name = models.CharField(max_length=250)
-
-class Products(models.Model):
- id = models.AutoField(primary_key=True)
- name = models.CharField(max_length=250)
- price = models.DecimalField(max_digits=18, decimal_places=2)
-
-class Orders(models.Model):
- id = models.AutoField(primary_key=True)
- subtotal = models.DecimalField(max_digits=18, decimal_places=2)
- customer = models.ForeignKey(Customers, on_delete=models.CASCADE, null=True)
- product = models.ManyToManyField(Products)
-
diff --git a/src/current/_includes/v20.1/app/insecure/django-basic-sample/settings.py b/src/current/_includes/v20.1/app/insecure/django-basic-sample/settings.py
deleted file mode 100644
index bc5f078fd4f..00000000000
--- a/src/current/_includes/v20.1/app/insecure/django-basic-sample/settings.py
+++ /dev/null
@@ -1,124 +0,0 @@
-"""
-Django settings for myproject project.
-
-Generated by 'django-admin startproject' using Django 3.0.
-
-For more information on this file, see
-https://docs.djangoproject.com/en/3.0/topics/settings/
-
-For the full list of settings and their values, see
-https://docs.djangoproject.com/en/3.0/ref/settings/
-"""
-
-import os
-
-# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
-BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-
-# Quick-start development settings - unsuitable for production
-# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
-
-# SECURITY WARNING: keep the secret key used in production secret!
-SECRET_KEY = 'spl=g73)8-)ja%x*k1eje4d#&24#t)zao^s$6vc1rdk(e3t!e('
-
-# SECURITY WARNING: do not run with debug turned on in production!
-DEBUG = True
-
-ALLOWED_HOSTS = ['0.0.0.0']
-
-
-# Application definition
-
-INSTALLED_APPS = [
- 'django.contrib.admin',
- 'django.contrib.auth',
- 'django.contrib.contenttypes',
- 'django.contrib.sessions',
- 'django.contrib.messages',
- 'django.contrib.staticfiles',
- 'myproject',
-]
-
-MIDDLEWARE = [
- 'django.middleware.security.SecurityMiddleware',
- 'django.contrib.sessions.middleware.SessionMiddleware',
- 'django.middleware.common.CommonMiddleware',
- 'django.middleware.csrf.CsrfViewMiddleware',
- 'django.contrib.auth.middleware.AuthenticationMiddleware',
- 'django.contrib.messages.middleware.MessageMiddleware',
- 'django.middleware.clickjacking.XFrameOptionsMiddleware',
-]
-
-ROOT_URLCONF = 'myproject.urls'
-
-TEMPLATES = [
- {
- 'BACKEND': 'django.template.backends.django.DjangoTemplates',
- 'DIRS': [],
- 'APP_DIRS': True,
- 'OPTIONS': {
- 'context_processors': [
- 'django.template.context_processors.debug',
- 'django.template.context_processors.request',
- 'django.contrib.auth.context_processors.auth',
- 'django.contrib.messages.context_processors.messages',
- ],
- },
- },
-]
-
-WSGI_APPLICATION = 'myproject.wsgi.application'
-
-
-# Database
-# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
-
-DATABASES = {
- 'default': {
- 'ENGINE': 'django_cockroachdb',
- 'NAME': 'bank',
- 'USER': 'django',
- 'HOST': 'localhost',
- 'PORT': '26257',
- }
-}
-
-
-# Password validation
-# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
-
-AUTH_PASSWORD_VALIDATORS = [
- {
- 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
- },
- {
- 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
- },
- {
- 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
- },
- {
- 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
- },
-]
-
-
-# Internationalization
-# https://docs.djangoproject.com/en/3.0/topics/i18n/
-
-LANGUAGE_CODE = 'en-us'
-
-TIME_ZONE = 'UTC'
-
-USE_I18N = True
-
-USE_L10N = True
-
-USE_TZ = True
-
-
-# Static files (CSS, JavaScript, Images)
-# https://docs.djangoproject.com/en/3.0/howto/static-files/
-
-STATIC_URL = '/static/'
diff --git a/src/current/_includes/v20.1/app/insecure/django-basic-sample/urls.py b/src/current/_includes/v20.1/app/insecure/django-basic-sample/urls.py
deleted file mode 100644
index 9550d713ffa..00000000000
--- a/src/current/_includes/v20.1/app/insecure/django-basic-sample/urls.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from django.contrib import admin
-from django.urls import path
-
-from .views import CustomersView, OrdersView, PingView, ProductView
-
-urlpatterns = [
- path('admin/', admin.site.urls),
-
- path('ping/', PingView.as_view()),
-
- # Endpoints for customers URL.
- path('customer/', CustomersView.as_view(), name='customers'),
- path('customer/<int:id>/', CustomersView.as_view(), name='customers'),
-
- # Endpoints for products URL.
- path('product/', ProductView.as_view(), name='product'),
- path('product/<int:id>/', ProductView.as_view(), name='product'),
-
- path('order/', OrdersView.as_view(), name='order'),
-]
diff --git a/src/current/_includes/v20.1/app/insecure/django-basic-sample/views.py b/src/current/_includes/v20.1/app/insecure/django-basic-sample/views.py
deleted file mode 100644
index 52ce2d98500..00000000000
--- a/src/current/_includes/v20.1/app/insecure/django-basic-sample/views.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from django.http import JsonResponse, HttpResponse
-from django.utils.decorators import method_decorator
-from django.views.generic import View
-from django.views.decorators.csrf import csrf_exempt
-from django.db import Error, IntegrityError
-from django.db.transaction import atomic
-from psycopg2 import errorcodes
-import json
-import sys
-import time
-
-from .models import *
-
-# Warning: Do not use retry_on_exception in an inner nested transaction.
-def retry_on_exception(num_retries=3, on_failure=HttpResponse(status=500), delay_=0.5, backoff_=1.5):
- def retry(view):
- def wrapper(*args, **kwargs):
- delay = delay_
- for i in range(num_retries):
- try:
- return view(*args, **kwargs)
- except IntegrityError as ex:
- if i == num_retries - 1:
- return on_failure
- elif getattr(ex.__cause__, 'pgcode', '') == errorcodes.SERIALIZATION_FAILURE:
- time.sleep(delay)
- delay *= backoff_
- except Error as ex:
- return on_failure
- return wrapper
- return retry
-
-class PingView(View):
- def get(self, request, *args, **kwargs):
- return HttpResponse("python/django", status=200)
-
-@method_decorator(csrf_exempt, name='dispatch')
-class CustomersView(View):
- def get(self, request, id=None, *args, **kwargs):
- if id is None:
- customers = list(Customers.objects.values())
- else:
- customers = list(Customers.objects.filter(id=id).values())
- return JsonResponse(customers, safe=False)
-
- @retry_on_exception(3)
- @atomic
- def post(self, request, *args, **kwargs):
- form_data = json.loads(request.body.decode())
- name = form_data['name']
- c = Customers(name=name)
- c.save()
- return HttpResponse(status=200)
-
- @retry_on_exception(3)
- @atomic
- def delete(self, request, id=None, *args, **kwargs):
- if id is None:
- return HttpResponse(status=404)
- Customers.objects.filter(id=id).delete()
- return HttpResponse(status=200)
-
- # The PUT method is shadowed by the POST method, so there doesn't seem
- # to be a reason to include it.
-
-@method_decorator(csrf_exempt, name='dispatch')
-class ProductView(View):
- def get(self, request, id=None, *args, **kwargs):
- if id is None:
- products = list(Products.objects.values())
- else:
- products = list(Products.objects.filter(id=id).values())
- return JsonResponse(products, safe=False)
-
- @retry_on_exception(3)
- @atomic
- def post(self, request, *args, **kwargs):
- form_data = json.loads(request.body.decode())
- name, price = form_data['name'], form_data['price']
- p = Products(name=name, price=price)
- p.save()
- return HttpResponse(status=200)
-
- # The REST API outlined on GitHub does not say that /product/ needs
- # a PUT and DELETE method
-
-@method_decorator(csrf_exempt, name='dispatch')
-class OrdersView(View):
- def get(self, request, id=None, *args, **kwargs):
- if id is None:
- orders = list(Orders.objects.values())
- else:
- orders = list(Orders.objects.filter(id=id).values())
- return JsonResponse(orders, safe=False)
-
- @retry_on_exception(3)
- @atomic
- def post(self, request, *args, **kwargs):
- form_data = json.loads(request.body.decode())
- c = Customers.objects.get(id=form_data['customer']['id'])
- o = Orders(subtotal=form_data['subtotal'], customer=c)
- o.save()
- for p in form_data['products']:
- p = Products.objects.get(id=p['id'])
- o.product.add(p)
- o.save()
- return HttpResponse(status=200)
diff --git a/src/current/_includes/v20.1/app/insecure/gorm-sample.go b/src/current/_includes/v20.1/app/insecure/gorm-sample.go
deleted file mode 100644
index 8ea2b303e7f..00000000000
--- a/src/current/_includes/v20.1/app/insecure/gorm-sample.go
+++ /dev/null
@@ -1,206 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
- "math"
- "math/rand"
- "time"
-
- // Import GORM-related packages.
- "github.com/jinzhu/gorm"
- _ "github.com/jinzhu/gorm/dialects/postgres"
-
- // Necessary in order to check for transaction retry error codes.
- "github.com/lib/pq"
-)
-
-// Account is our model, which corresponds to the "accounts" database
-// table.
-type Account struct {
- ID int `gorm:"primary_key"`
- Balance int
-}
-
-// Functions of type `txnFunc` are passed as arguments to our
-// `runTransaction` wrapper that handles transaction retries for us
-// (see implementation below).
-type txnFunc func(*gorm.DB) error
-
-// This function is used for testing the transaction retry loop. It
-// can be deleted from production code.
-var forceRetryLoop txnFunc = func(db *gorm.DB) error {
-
- // The first statement in a transaction can be retried transparently
- // on the server, so we need to add a placeholder statement so that our
- // force_retry statement isn't the first one.
- if err := db.Exec("SELECT now()").Error; err != nil {
- return err
- }
- // Used to force a transaction retry.
- if err := db.Exec("SELECT crdb_internal.force_retry('1s'::INTERVAL)").Error; err != nil {
- return err
- }
- return nil
-}
-
-func transferFunds(db *gorm.DB, fromID int, toID int, amount int) error {
- var fromAccount Account
- var toAccount Account
-
- db.First(&fromAccount, fromID)
- db.First(&toAccount, toID)
-
- if fromAccount.Balance < amount {
- return fmt.Errorf("account %d balance %d is lower than transfer amount %d", fromAccount.ID, fromAccount.Balance, amount)
- }
-
- fromAccount.Balance -= amount
- toAccount.Balance += amount
-
- if err := db.Save(&fromAccount).Error; err != nil {
- return err
- }
- if err := db.Save(&toAccount).Error; err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- // Connect to the "bank" database as the "maxroach" user.
- const addr = "postgresql://maxroach@localhost:26257/bank?sslmode=disable"
- db, err := gorm.Open("postgres", addr)
- if err != nil {
- log.Fatal(err)
- }
- defer db.Close()
-
- // Set to `true` and GORM will print out all DB queries.
- db.LogMode(false)
-
- // Automatically create the "accounts" table based on the Account
- // model.
- db.AutoMigrate(&Account{})
-
- // Insert two rows into the "accounts" table.
- var fromID = 1
- var toID = 2
- db.Create(&Account{ID: fromID, Balance: 1000})
- db.Create(&Account{ID: toID, Balance: 250})
-
- // The sequence of steps in this section is:
- // 1. Print account balances.
- // 2. Set up some Accounts and transfer funds between them inside
- // a transaction.
- // 3. Print account balances again to verify the transfer occurred.
-
- // Print balances before transfer.
- printBalances(db)
-
- // The amount to be transferred between the accounts.
- var amount = 100
-
- // Transfer funds between accounts. To handle potential
- // transaction retry errors, we wrap the call to `transferFunds`
- // in `runTransaction`, a wrapper which implements a retry loop
- // with exponential backoff around our access to the database (see
- // the implementation for details).
- if err := runTransaction(db,
- func(*gorm.DB) error {
- return transferFunds(db, fromID, toID, amount)
- },
- ); err != nil {
- // If the error is returned, it's either:
- // 1. Not a transaction retry error, i.e., some other kind
- // of database error that you should handle here.
- // 2. A transaction retry error that has occurred more than
- // N times (defined by the `maxRetries` variable inside
- // `runTransaction`), in which case you will need to figure
- // out why your database access is resulting in so much
- // contention (see 'Understanding and avoiding transaction
- // contention':
- // https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention)
- fmt.Println(err)
- }
-
- // Print balances after transfer to ensure that it worked.
- printBalances(db)
-
- // Delete accounts so we can start fresh when we want to run this
- // program again.
- deleteAccounts(db)
-}
-
-// Wrapper for a transaction. This automatically re-calls `fn` with
-// the open transaction as an argument as long as the database server
-// asks for the transaction to be retried.
-func runTransaction(db *gorm.DB, fn txnFunc) error {
- var maxRetries = 3
- for retries := 0; retries <= maxRetries; retries++ {
- if retries == maxRetries {
- return fmt.Errorf("hit max of %d retries, aborting", retries)
- }
- txn := db.Begin()
- if err := fn(txn); err != nil {
- // We need to cast GORM's db.Error to *pq.Error so we can
- // detect the Postgres transaction retry error code and
- // handle retries appropriately.
- pqErr := err.(*pq.Error)
- if pqErr.Code == "40001" {
- // Since this is a transaction retry error, we
- // ROLLBACK the transaction and sleep a little before
- // trying again. Each time through the loop we sleep
- // for a little longer than the last time
- // (A.K.A. exponential backoff).
- txn.Rollback()
- var sleepMs = math.Pow(2, float64(retries)) * 100 * (rand.Float64() + 0.5)
- fmt.Printf("Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMs)
- time.Sleep(time.Millisecond * time.Duration(sleepMs))
- } else {
- // If it's not a retry error, it's some other sort of
- // DB interaction error that needs to be handled by
- // the caller.
- return err
- }
- } else {
- // All went well, so we try to commit and break out of the
- // retry loop if possible.
- if err := txn.Commit().Error; err != nil {
- pqErr := err.(*pq.Error)
- if pqErr.Code == "40001" {
- // However, our attempt to COMMIT could also
- // result in a retry error, in which case we
- // continue back through the loop and try again.
- continue
- } else {
- // If it's not a retry error, it's some other sort
- // of DB interaction error that needs to be
- // handled by the caller.
- return err
- }
- }
- break
- }
- }
- return nil
-}
-
-func printBalances(db *gorm.DB) {
- var accounts []Account
- db.Find(&accounts)
- fmt.Printf("Balance at '%s':\n", time.Now())
- for _, account := range accounts {
- fmt.Printf("%d %d\n", account.ID, account.Balance)
- }
-}
-
-func deleteAccounts(db *gorm.DB) error {
- // Used to tear down the accounts table so we can re-run this
- // program.
- err := db.Exec("DELETE from accounts where ID > 0").Error
- if err != nil {
- return err
- }
- return nil
-}
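
Before each retry, the sample above sleeps for `2^retries * 100` milliseconds scaled by a random factor in [0.5, 1.5), i.e., jittered exponential backoff, so competing transactions do not all wake up and collide again at the same instant. The same schedule, expressed as a small Java helper for illustration (the class and method names are hypothetical):

~~~ java
import java.util.Random;

// Illustrative Java version of the backoff computed in the Go sample above.
class BackoffSketch {
    private static final Random RAND = new Random();

    // retries=0 -> ~50-150 ms, retries=1 -> ~100-300 ms, retries=2 -> ~200-600 ms
    static long backoffMillis(int retries) {
        return (long) (Math.pow(2, retries) * 100 * (RAND.nextDouble() + 0.5));
    }
}
~~~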
diff --git a/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/Sample.java b/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/Sample.java
deleted file mode 100644
index 60a6b54f984..00000000000
--- a/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/Sample.java
+++ /dev/null
@@ -1,236 +0,0 @@
-package com.cockroachlabs;
-
-import org.hibernate.Session;
-import org.hibernate.SessionFactory;
-import org.hibernate.Transaction;
-import org.hibernate.JDBCException;
-import org.hibernate.cfg.Configuration;
-
-import java.util.*;
-import java.util.function.Function;
-
-import javax.persistence.Column;
-import javax.persistence.Entity;
-import javax.persistence.Id;
-import javax.persistence.Table;
-
-public class Sample {
-
- private static final Random RAND = new Random();
- private static final boolean FORCE_RETRY = false;
- private static final String RETRY_SQL_STATE = "40001";
- private static final int MAX_ATTEMPT_COUNT = 6;
-
- // Account is our model, which corresponds to the "accounts" database table.
- @Entity
- @Table(name="accounts")
- public static class Account {
- @Id
- @Column(name="id")
- public long id;
-
- public long getId() {
- return id;
- }
-
- @Column(name="balance")
- public long balance;
- public long getBalance() {
- return balance;
- }
- public void setBalance(long newBalance) {
- this.balance = newBalance;
- }
-
- // Convenience constructor.
- public Account(int id, int balance) {
- this.id = id;
- this.balance = balance;
- }
-
- // Hibernate needs a default (no-arg) constructor to create model objects.
- public Account() {}
- }
-
- private static Function<Session, Long> addAccounts() throws JDBCException {
- Function<Session, Long> f = s -> {
- long rv = 0;
- try {
- s.save(new Account(1, 1000));
- s.save(new Account(2, 250));
- s.save(new Account(3, 314159));
- rv = 1;
- System.out.printf("APP: addAccounts() --> %d\n", rv);
- } catch (JDBCException e) {
- throw e;
- }
- return rv;
- };
- return f;
- }
-
- private static Function<Session, Long> transferFunds(long fromId, long toId, long amount) throws JDBCException {
- Function<Session, Long> f = s -> {
- long rv = 0;
- try {
- Account fromAccount = (Account) s.get(Account.class, fromId);
- Account toAccount = (Account) s.get(Account.class, toId);
- if (!(amount > fromAccount.getBalance())) {
- fromAccount.balance -= amount;
- toAccount.balance += amount;
- s.save(fromAccount);
- s.save(toAccount);
- rv = amount;
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
- }
- } catch (JDBCException e) {
- throw e;
- }
- return rv;
- };
- return f;
- }
-
- // Test our retry handling logic if FORCE_RETRY is true. This
- // method is only used to test the retry logic. It is not
- // intended for production code.
- private static Function<Session, Long> forceRetryLogic() throws JDBCException {
- Function<Session, Long> f = s -> {
- long rv = -1;
- try {
- System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
- s.createNativeQuery("SELECT crdb_internal.force_retry('1s')").executeUpdate();
- } catch (JDBCException e) {
- System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
- throw e;
- }
- return rv;
- };
- return f;
- }
-
- private static Function<Session, Long> getAccountBalance(long id) throws JDBCException {
- Function<Session, Long> f = s -> {
- long balance;
- try {
- Account account = s.get(Account.class, id);
- balance = account.getBalance();
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
- } catch (JDBCException e) {
- throw e;
- }
- return balance;
- };
- return f;
- }
-
- // Run SQL code in a way that automatically handles the
- // transaction retry logic so we do not have to duplicate it in
- // various places.
- private static long runTransaction(Session session, Function<Session, Long> fn) {
- long rv = 0;
- int attemptCount = 0;
-
- while (attemptCount < MAX_ATTEMPT_COUNT) {
- attemptCount++;
-
- if (attemptCount > 1) {
- System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount);
- }
-
- Transaction txn = session.beginTransaction();
- System.out.printf("APP: BEGIN;\n");
-
- if (attemptCount == MAX_ATTEMPT_COUNT) {
- String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
- throw new RuntimeException(err);
- }
-
- // This block is only used to test the retry logic.
- // It is not necessary in production code. See also
- // the method 'testRetryLogic()'.
- if (FORCE_RETRY) {
- session.createNativeQuery("SELECT now()").list();
- }
-
- try {
- rv = fn.apply(session);
- if (rv != -1) {
- txn.commit();
- System.out.printf("APP: COMMIT;\n");
- break;
- }
- } catch (JDBCException e) {
- if (RETRY_SQL_STATE.equals(e.getSQLState())) {
- // Since this is a transaction retry error, we
- // roll back the transaction and sleep a little
- // before trying again. Each time through the
- // loop we sleep for a little longer than the last
- // time (A.K.A. exponential backoff).
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", e.getSQLState(), e.getMessage(), attemptCount);
- System.out.printf("APP: ROLLBACK;\n");
- txn.rollback();
- int sleepMillis = (int)(Math.pow(2, attemptCount) * 100) + RAND.nextInt(100);
- System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
- try {
- Thread.sleep(sleepMillis);
- } catch (InterruptedException ignored) {
- // no-op
- }
- rv = -1;
- } else {
- throw e;
- }
- }
- }
- return rv;
- }
-
- public static void main(String[] args) {
- // Create a SessionFactory based on our hibernate.cfg.xml configuration
- // file, which defines how to connect to the database.
- SessionFactory sessionFactory =
- new Configuration()
- .configure("hibernate.cfg.xml")
- .addAnnotatedClass(Account.class)
- .buildSessionFactory();
-
- try (Session session = sessionFactory.openSession()) {
- long fromAccountId = 1;
- long toAccountId = 2;
- long transferAmount = 100;
-
- if (FORCE_RETRY) {
- System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
- runTransaction(session, forceRetryLogic());
- } else {
-
- runTransaction(session, addAccounts());
- long fromBalance = runTransaction(session, getAccountBalance(fromAccountId));
- long toBalance = runTransaction(session, getAccountBalance(toAccountId));
- if (fromBalance != -1 && toBalance != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
- }
-
- // Transfer $100 from account 1 to account 2
- long transferResult = runTransaction(session, transferFunds(fromAccountId, toAccountId, transferAmount));
- if (transferResult != -1) {
- // Success!
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
-
- long fromBalanceAfter = runTransaction(session, getAccountBalance(fromAccountId));
- long toBalanceAfter = runTransaction(session, getAccountBalance(toAccountId));
- if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
- }
- }
- }
- } finally {
- sessionFactory.close();
- }
- }
-}
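
Both this sample and the GORM sample above run a throwaway `SELECT now()` before calling `crdb_internal.force_retry()`. The reason: CockroachDB can transparently retry a transaction's first statement on the server, so without a preceding placeholder the forced retry error would never reach the client-side retry loop being tested. A sketch of that test path only, assuming an open Hibernate `Session`:

~~~ java
// Test-only sketch, not production code: forces a retryable error so the
// client-side retry loop can be exercised.
static void forceRetry(org.hibernate.Session session) {
    org.hibernate.Transaction txn = session.beginTransaction();
    // Placeholder query: keeps force_retry from being the transaction's
    // first statement, which the server could retry transparently.
    session.createNativeQuery("SELECT now()").list();
    // This deliberately raises SQLSTATE 40001 at the client.
    session.createNativeQuery("SELECT crdb_internal.force_retry('1s')").executeUpdate();
    txn.commit(); // not reached; the statement above throws
}
~~~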
diff --git a/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/build.gradle b/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/build.gradle
deleted file mode 100644
index b76c29abac1..00000000000
--- a/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/build.gradle
+++ /dev/null
@@ -1,16 +0,0 @@
-group 'com.cockroachlabs'
-version '1.0'
-
-apply plugin: 'java'
-apply plugin: 'application'
-
-mainClassName = 'com.cockroachlabs.Sample'
-
-repositories {
- mavenCentral()
-}
-
-dependencies {
- implementation 'org.hibernate:hibernate-core:5.4.19.Final'
- implementation 'org.postgresql:postgresql:42.2.9'
-}
diff --git a/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz
deleted file mode 100644
index 50085afe06e..00000000000
Binary files a/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ
diff --git a/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
deleted file mode 100644
index db3d396d4c9..00000000000
--- a/src/current/_includes/v20.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
+++ /dev/null
@@ -1,20 +0,0 @@
-<?xml version='1.0' encoding='utf-8'?>
-<!DOCTYPE hibernate-configuration PUBLIC
-        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
-        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
-<hibernate-configuration>
- <session-factory>
- <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
- <property name="hibernate.dialect">org.hibernate.dialect.CockroachDB201Dialect</property>
- <property name="hibernate.connection.url">jdbc:postgresql://127.0.0.1:26257/bank?sslmode=disable</property>
- <property name="hibernate.connection.username">maxroach</property>
- <property name="hibernate.connection.password"></property>
-
- <!-- Required so a table can be created from the 'Account' class -->
- <property name="hibernate.hbm2ddl.auto">create</property>
-
- <!-- Show the SQL that is being run -->
- <property name="hibernate.show_sql">true</property>
- <property name="hibernate.format_sql">true</property>
- </session-factory>
-</hibernate-configuration>
diff --git a/src/current/_includes/v20.1/app/insecure/jooq-basic-sample/Sample.java b/src/current/_includes/v20.1/app/insecure/jooq-basic-sample/Sample.java
deleted file mode 100644
index d1a54a8ddd2..00000000000
--- a/src/current/_includes/v20.1/app/insecure/jooq-basic-sample/Sample.java
+++ /dev/null
@@ -1,215 +0,0 @@
-package com.cockroachlabs;
-
-import com.cockroachlabs.example.jooq.db.Tables;
-import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord;
-import org.jooq.DSLContext;
-import org.jooq.SQLDialect;
-import org.jooq.Source;
-import org.jooq.conf.RenderQuotedNames;
-import org.jooq.conf.Settings;
-import org.jooq.exception.DataAccessException;
-import org.jooq.impl.DSL;
-
-import java.io.InputStream;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-import java.util.*;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicLong;
-import java.util.function.Function;
-
-import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS;
-
-public class Sample {
-
- private static final Random RAND = new Random();
- private static final boolean FORCE_RETRY = false;
- private static final String RETRY_SQL_STATE = "40001";
- private static final int MAX_ATTEMPT_COUNT = 6;
-
- private static Function<DSLContext, Long> addAccounts() {
- return ctx -> {
- long rv = 0;
-
- ctx.delete(ACCOUNTS).execute();
- ctx.batchInsert(
- new AccountsRecord(1L, 1000L),
- new AccountsRecord(2L, 250L),
- new AccountsRecord(3L, 314159L)
- ).execute();
-
- rv = 1;
- System.out.printf("APP: addAccounts() --> %d\n", rv);
- return rv;
- };
- }
-
- private static Function<DSLContext, Long> transferFunds(long fromId, long toId, long amount) {
- return ctx -> {
- long rv = 0;
-
- AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId));
- AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId));
-
- if (!(amount > fromAccount.getBalance())) {
- fromAccount.setBalance(fromAccount.getBalance() - amount);
- toAccount.setBalance(toAccount.getBalance() + amount);
-
- ctx.batchUpdate(fromAccount, toAccount).execute();
- rv = amount;
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
- }
-
- return rv;
- };
- }
-
- // Test our retry handling logic if FORCE_RETRY is true. This
- // method is only used to test the retry logic. It is not
- // intended for production code.
- private static Function<DSLContext, Long> forceRetryLogic() {
- return ctx -> {
- long rv = -1;
- try {
- System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
- ctx.execute("SELECT crdb_internal.force_retry('1s')");
- } catch (DataAccessException e) {
- System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
- throw e;
- }
- return rv;
- };
- }
-
- private static Function<DSLContext, Long> getAccountBalance(long id) {
- return ctx -> {
- AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id));
- long balance = account.getBalance();
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
- return balance;
- };
- }
-
- // Run SQL code in a way that automatically handles the
- // transaction retry logic so we do not have to duplicate it in
- // various places.
- private static long runTransaction(DSLContext session, Function<DSLContext, Long> fn) {
- AtomicLong rv = new AtomicLong(0L);
- AtomicInteger attemptCount = new AtomicInteger(0);
-
- while (attemptCount.get() < MAX_ATTEMPT_COUNT) {
- attemptCount.incrementAndGet();
-
- if (attemptCount.get() > 1) {
- System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get());
- }
-
- if (session.connectionResult(connection -> {
- connection.setAutoCommit(false);
- System.out.printf("APP: BEGIN;\n");
-
- if (attemptCount.get() == MAX_ATTEMPT_COUNT) {
- String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
- throw new RuntimeException(err);
- }
-
- // This block is only used to test the retry logic.
- // It is not necessary in production code. See also
- // the method 'testRetryLogic()'.
- if (FORCE_RETRY) {
- session.fetch("SELECT now()");
- }
-
- try {
- rv.set(fn.apply(session));
- if (rv.get() != -1) {
- connection.commit();
- System.out.printf("APP: COMMIT;\n");
- return true;
- }
- } catch (DataAccessException | SQLException e) {
- String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState();
-
- if (RETRY_SQL_STATE.equals(sqlState)) {
- // Since this is a transaction retry error, we
- // roll back the transaction and sleep a little
- // before trying again. Each time through the
- // loop we sleep for a little longer than the last
- // time (A.K.A. exponential backoff).
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get());
- System.out.printf("APP: ROLLBACK;\n");
- connection.rollback();
- int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100);
- System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
- try {
- Thread.sleep(sleepMillis);
- } catch (InterruptedException ignored) {
- // no-op
- }
- rv.set(-1L);
- } else {
- throw e;
- }
- }
-
- return false;
- })) {
- break;
- }
- }
-
- return rv.get();
- }
-
- public static void main(String[] args) throws Exception {
- try (Connection connection = DriverManager.getConnection(
- "jdbc:postgresql://localhost:26257/bank?sslmode=disable",
- "maxroach",
- ""
- )) {
- DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings()
- .withExecuteLogging(true)
- .withRenderQuotedNames(RenderQuotedNames.NEVER));
-
- // Initialise database with db.sql script
- try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) {
- ctx.parser().parse(Source.of(in).readString()).executeBatch();
- }
-
- long fromAccountId = 1;
- long toAccountId = 2;
- long transferAmount = 100;
-
- if (FORCE_RETRY) {
- System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
- runTransaction(ctx, forceRetryLogic());
- } else {
-
- runTransaction(ctx, addAccounts());
- long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId));
- long toBalance = runTransaction(ctx, getAccountBalance(toAccountId));
- if (fromBalance != -1 && toBalance != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
- }
-
- // Transfer $100 from account 1 to account 2
- long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount));
- if (transferResult != -1) {
- // Success!
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
-
- long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId));
- long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId));
- if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
- }
- }
- }
- }
- }
-}
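
`Tables`, `AccountsRecord`, and the `ACCOUNTS` constant imported at the top of this sample are not written by hand; they are emitted by jOOQ's code generator run against the `accounts` table, which the `db.sql` script loaded in `main` creates. A sketch of what querying the generated table looks like, reusing the `ctx` built in `main`:

~~~ java
// Illustrative only; `ctx` is the DSLContext constructed in main() above.
int total = ctx.fetchCount(ACCOUNTS);                                // SELECT count(*) FROM accounts
AccountsRecord acct = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(1L)); // one row, as a typed record
System.out.printf("%d accounts; account 1 balance = %d%n", total, acct.getBalance());
~~~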
diff --git a/src/current/_includes/v20.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v20.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip
deleted file mode 100644
index f11f86b8f43..00000000000
Binary files a/src/current/_includes/v20.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ
diff --git a/src/current/_includes/v20.1/app/insecure/txn-sample-pgx.go b/src/current/_includes/v20.1/app/insecure/txn-sample-pgx.go
deleted file mode 100644
index d1b818bc262..00000000000
--- a/src/current/_includes/v20.1/app/insecure/txn-sample-pgx.go
+++ /dev/null
@@ -1,58 +0,0 @@
-package main
-
-import (
- "context"
- "fmt"
- "log"
-
- "github.com/cockroachdb/cockroach-go/crdb/crdbpgx"
- "github.com/jackc/pgx/v4"
-)
-
-func transferFunds(ctx context.Context, tx pgx.Tx, from int, to int, amount int) error {
- // Read the balance.
- var fromBalance int
- if err := tx.QueryRow(ctx,
- "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
- return err
- }
-
- if fromBalance < amount {
- return fmt.Errorf("insufficient funds")
- }
-
- // Perform the transfer.
- if _, err := tx.Exec(ctx,
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
- return err
- }
- if _, err := tx.Exec(ctx,
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- config, err := pgx.ParseConfig("postgresql://maxroach@localhost:26257/bank?sslmode=disable")
- if err != nil {
- log.Fatal("error configuring the database: ", err)
- }
-
- // Connect to the "bank" database.
- conn, err := pgx.ConnectConfig(context.Background(), config)
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer conn.Close(context.Background())
-
- // Run a transfer in a transaction.
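- // crdbpgx.ExecuteTx opens the transaction, runs the closure, and
- // retries it automatically on retryable (SQLSTATE 40001) errors.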
- err = crdbpgx.ExecuteTx(context.Background(), conn, pgx.TxOptions{}, func(tx pgx.Tx) error {
- return transferFunds(context.Background(), tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
- })
- if err == nil {
- fmt.Println("Success")
- } else {
- log.Fatal("error: ", err)
- }
-}
diff --git a/src/current/_includes/v20.1/app/insecure/txn-sample.clj b/src/current/_includes/v20.1/app/insecure/txn-sample.clj
deleted file mode 100644
index 0e2d9df55e3..00000000000
--- a/src/current/_includes/v20.1/app/insecure/txn-sample.clj
+++ /dev/null
@@ -1,44 +0,0 @@
-(ns test.test
- (:require [clojure.java.jdbc :as j]
- [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:dbtype "postgresql"
- :dbname "bank"
- :host "localhost"
- :port "26257"
- :user "maxroach"})
-
-;; The transaction we want to run.
-(defn transferFunds
- [txn from to amount]
-
- ;; Check the current balance.
- (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from])
- (mapv :balance)
- (first))]
- (when (< fromBalance amount)
- (throw (Exception. "Insufficient funds"))))
-
- ;; Perform the transfer.
- (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)])
- (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)]))
-
-(defn test-txn []
- ;; Connect to the cluster and run the code below with
- ;; the connection object bound to 'conn'.
- (j/with-db-connection [conn db-spec]
-
- ;; Execute the transaction within an automatic retry block;
- ;; the transaction object is bound to 'txn'.
- (util/with-txn-retry [txn conn]
- (transferFunds txn 1 2 100))
-
- ;; Execute a query outside of an automatic retry block.
- (println "Balances after transfer:")
- (->> (j/query conn ["SELECT id, balance FROM accounts"])
- (map println)
- (doall))))
-
-(defn -main [& args]
- (test-txn))
diff --git a/src/current/_includes/v20.1/app/insecure/txn-sample.cpp b/src/current/_includes/v20.1/app/insecure/txn-sample.cpp
deleted file mode 100644
index 0f65137be22..00000000000
--- a/src/current/_includes/v20.1/app/insecure/txn-sample.cpp
+++ /dev/null
@@ -1,74 +0,0 @@
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <pqxx/pqxx>
-#include <stdexcept>
-#include <string>
-
-using namespace std;
-
-void transferFunds(
- pqxx::dbtransaction *tx, int from, int to, int amount) {
- // Read the balance.
- pqxx::result r = tx->exec(
- "SELECT balance FROM accounts WHERE id = " + to_string(from));
- assert(r.size() == 1);
- int fromBalance = r[0][0].as<int>();
-
- if (fromBalance < amount) {
- throw domain_error("insufficient funds");
- }
-
- // Perform the transfer.
- tx->exec("UPDATE accounts SET balance = balance - "
- + to_string(amount) + " WHERE id = " + to_string(from));
- tx->exec("UPDATE accounts SET balance = balance + "
- + to_string(amount) + " WHERE id = " + to_string(to));
-}
-
-
-// ExecuteTx runs fn inside a transaction and retries it as needed.
-// On non-retryable failures, the transaction is aborted and rolled
-// back; on success, the transaction is committed.
-//
-// For more information about CockroachDB's transaction model see
-// https://cockroachlabs.com/docs/transactions.html.
-//
-// NOTE: the supplied exec closure should not have external side
-// effects beyond changes to the database.
-void executeTx(
- pqxx::connection *c, function<void (pqxx::dbtransaction *)> fn) {
- pqxx::work tx(*c);
- while (true) {
- try {
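- // The pqxx subtransaction issues SAVEPOINT cockroach_restart, the
- // savepoint name CockroachDB recognizes for client-side retries.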
- pqxx::subtransaction s(tx, "cockroach_restart");
- fn(&s);
- s.commit();
- break;
- } catch (const pqxx::pqxx_exception& e) {
- // Swallow "transaction restart" errors; the transaction will be retried.
- // Unfortunately libpqxx doesn't give us access to the error code, so we
- // do string matching to identify retryable errors.
- if (string(e.base().what()).find("restart transaction:") == string::npos) {
- throw;
- }
- }
- }
- tx.commit();
-}
-
-int main() {
- try {
- pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
-
- executeTx(&c, [](pqxx::dbtransaction *tx) {
- transferFunds(tx, 1, 2, 100);
- });
- }
- catch (const exception &e) {
- cerr << e.what() << endl;
- return 1;
- }
- cout << "Success" << endl;
- return 0;
-}
diff --git a/src/current/_includes/v20.1/app/insecure/txn-sample.cs b/src/current/_includes/v20.1/app/insecure/txn-sample.cs
deleted file mode 100644
index f64a664ccff..00000000000
--- a/src/current/_includes/v20.1/app/insecure/txn-sample.cs
+++ /dev/null
@@ -1,120 +0,0 @@
-using System;
-using System.Data;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.SslMode = SslMode.Disable;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- TxnSample(connStringBuilder.ConnectionString);
- }
-
- static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount)
- {
- int balance = 0;
- using (var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran))
- using (var reader = cmd.ExecuteReader())
- {
- if (reader.Read())
- {
- balance = reader.GetInt32(0);
- }
- else
- {
- throw new DataException(String.Format("Account id={0} not found", from));
- }
- }
- if (balance < amount)
- {
- throw new DataException(String.Format("Insufficient balance in account id={0}", from));
- }
- using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- }
-
- static void TxnSample(string connString)
- {
- using (var conn = new NpgsqlConnection(connString))
- {
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using (var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
-
- try
- {
- using (var tran = conn.BeginTransaction())
- {
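- // Save() issues SAVEPOINT cockroach_restart, the savepoint name
- // CockroachDB recognizes for client-side transaction retries.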
- tran.Save("cockroach_restart");
- while (true)
- {
- try
- {
- TransferFunds(conn, tran, 1, 2, 100);
- tran.Commit();
- break;
- }
- catch (PostgresException e)
- {
- // Check if the SQLSTATE indicates a SERIALIZATION_FAILURE.
- if (e.SqlState == "40001")
- {
- // Signal the database that we will attempt a retry.
- tran.Rollback("cockroach_restart");
- }
- else
- {
- throw;
- }
- }
- }
- }
- }
- catch (DataException e)
- {
- Console.WriteLine(e.Message);
- }
-
- // Now printout the results.
- Console.WriteLine("Final balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
- }
-}
diff --git a/src/current/_includes/v20.1/app/insecure/txn-sample.go b/src/current/_includes/v20.1/app/insecure/txn-sample.go
deleted file mode 100644
index 2c0cd1b6da6..00000000000
--- a/src/current/_includes/v20.1/app/insecure/txn-sample.go
+++ /dev/null
@@ -1,51 +0,0 @@
-package main
-
-import (
- "context"
- "database/sql"
- "fmt"
- "log"
-
- "github.com/cockroachdb/cockroach-go/crdb"
-)
-
-func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
- // Read the balance.
- var fromBalance int
- if err := tx.QueryRow(
- "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
- return err
- }
-
- if fromBalance < amount {
- return fmt.Errorf("insufficient funds")
- }
-
- // Perform the transfer.
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
- return err
- }
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
-
- // Run a transfer in a transaction.
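- // crdb.ExecuteTx begins the transaction, runs the closure, and
- // retries it automatically when CockroachDB signals a retryable error.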
- err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
- return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
- })
- if err == nil {
- fmt.Println("Success")
- } else {
- log.Fatal("error: ", err)
- }
-}
diff --git a/src/current/_includes/v20.1/app/insecure/txn-sample.php b/src/current/_includes/v20.1/app/insecure/txn-sample.php
deleted file mode 100644
index e060d311cc3..00000000000
--- a/src/current/_includes/v20.1/app/insecure/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-beginTransaction();
- // This savepoint allows us to retry our transaction.
- $dbh->exec("SAVEPOINT cockroach_restart");
- } catch (Exception $e) {
- throw $e;
- }
-
- while (true) {
- try {
- $stmt = $dbh->prepare(
- 'UPDATE accounts SET balance = balance + :deposit ' .
- 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
-
- // First, withdraw the money from the old account (if possible).
- $stmt->bindValue(':account', $from, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "source account does not exist or is underfunded\r\n";
- return;
- }
-
- // Next, deposit into the new account (if it exists).
- $stmt->bindValue(':account', $to, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "destination account does not exist\r\n";
- return;
- }
-
- // Attempt to release the savepoint (which is really the commit).
- $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
- $dbh->commit();
- return;
- } catch (PDOException $e) {
- if ($e->getCode() != '40001') {
- // Non-recoverable error. Rollback and bubble error up the chain.
- $dbh->rollBack();
- throw $e;
- } else {
- // Cockroach transaction retry code. Rollback to the savepoint and
- // restart.
- $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
- }
- }
- }
-}
-
-try {
- $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
- 'maxroach', null, array(
- PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- ));
-
- transferMoney($dbh, 1, 2, 10);
-
- print "Account balances after transfer:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v20.1/app/insecure/txn-sample.rb b/src/current/_includes/v20.1/app/insecure/txn-sample.rb
deleted file mode 100644
index 0ac2af236b7..00000000000
--- a/src/current/_includes/v20.1/app/insecure/txn-sample.rb
+++ /dev/null
@@ -1,49 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn)
- conn.transaction do |txn|
- txn.exec('SAVEPOINT cockroach_restart')
- while true
- begin
- # Attempt the work.
- yield txn
-
- # If we reach this point, commit.
- txn.exec('RELEASE SAVEPOINT cockroach_restart')
- break
- rescue PG::TRSerializationFailure
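- # PG::TRSerializationFailure maps to SQLSTATE 40001; roll back to
- # the savepoint and let the loop attempt the work again.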
- txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
- end
- end
- end
-end
-
-def transfer_funds(txn, from, to, amount)
- txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
- res.each do |row|
- raise 'insufficient funds' if Integer(row['balance']) < amount
- end
- end
- txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
- txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
-end
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-run_transaction(conn) do |txn|
- transfer_funds(txn, 1, 2, 100)
-end
-
-# Close the database connection.
-conn.close()
diff --git a/src/current/_includes/v20.1/app/insecure/txn-sample.rs b/src/current/_includes/v20.1/app/insecure/txn-sample.rs
deleted file mode 100644
index d1dd0e021c9..00000000000
--- a/src/current/_includes/v20.1/app/insecure/txn-sample.rs
+++ /dev/null
@@ -1,60 +0,0 @@
-use postgres::{error::SqlState, Client, Error, NoTls, Transaction};
-
-/// Runs op inside a transaction and retries it as needed.
-/// On non-retryable failures, the transaction is aborted and
-/// rolled back; on success, the transaction is committed.
-fn execute_txn<T, F>(client: &mut Client, op: F) -> Result<T, Error>
-where
- F: Fn(&mut Transaction) -> Result<T, Error>,
-{
- let mut txn = client.transaction()?;
- loop {
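- // Each attempt runs inside a savepoint named "cockroach_restart",
- // the name CockroachDB recognizes for client-side retries.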
- let mut sp = txn.savepoint("cockroach_restart")?;
- match op(&mut sp).and_then(|t| sp.commit().map(|_| t)) {
- Err(ref err)
- if err
- .code()
- .map(|e| *e == SqlState::T_R_SERIALIZATION_FAILURE)
- .unwrap_or(false) => {}
- r => break r,
- }
- }
- .and_then(|t| txn.commit().map(|_| t))
-}
-
-fn transfer_funds(txn: &mut Transaction, from: i64, to: i64, amount: i64) -> Result<(), Error> {
- // Read the balance.
- let from_balance: i64 = txn
- .query_one("SELECT balance FROM accounts WHERE id = $1", &[&from])?
- .get(0);
-
- assert!(from_balance >= amount);
-
- // Perform the transfer.
- txn.execute(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
- &[&amount, &from],
- )?;
- txn.execute(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
- &[&amount, &to],
- )?;
- Ok(())
-}
-
-fn main() {
- let mut client = Client::connect("postgresql://maxroach@localhost:26257/bank", NoTls).unwrap();
-
- // Run a transfer in a transaction.
- execute_txn(&mut client, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
-
- // Check account balances after the transaction.
- for row in &client
- .query("SELECT id, balance FROM accounts", &[])
- .unwrap()
- {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v20.1/app/insecure/upperdb-basic-sample/main.go b/src/current/_includes/v20.1/app/insecure/upperdb-basic-sample/main.go
deleted file mode 100644
index 5c855356d7e..00000000000
--- a/src/current/_includes/v20.1/app/insecure/upperdb-basic-sample/main.go
+++ /dev/null
@@ -1,185 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
- "time"
-
- "github.com/upper/db/v4"
- "github.com/upper/db/v4/adapter/cockroachdb"
-)
-
-// The settings variable stores connection details.
-var settings = cockroachdb.ConnectionURL{
- Host: "localhost",
- Database: "bank",
- User: "maxroach",
- Options: map[string]string{
- // Insecure node.
- "sslmode": "disable",
- },
-}
-
-// Accounts is a handy way to represent a collection.
-func Accounts(sess db.Session) db.Store {
- return sess.Collection("accounts")
-}
-
-// Account is used to represent a single record in the "accounts" table.
-type Account struct {
- ID uint64 `db:"id,omitempty"`
- Balance int64 `db:"balance"`
-}
-
-// Store creates the relation between the Account struct and the
-// "accounts" table.
-func (a *Account) Store(sess db.Session) db.Store {
- return Accounts(sess)
-}
-
-// createTables creates all the tables that are necessary to run this example.
-func createTables(sess db.Session) error {
- _, err := sess.SQL().Exec(`
- CREATE TABLE IF NOT EXISTS accounts (
- ID SERIAL PRIMARY KEY,
- balance INT
- )
- `)
- if err != nil {
- return err
- }
- return nil
-}
-
-// crdbForceRetry can be used to simulate a transaction error and
-// demonstrate upper/db's ability to retry the transaction automatically.
-//
-// By default, upper/db will retry the transaction five times, if you want
-// to modify this number use: sess.SetMaxTransactionRetries(n).
-//
-// This is only used for demonstration purposes and not intended
-// for production code.
-func crdbForceRetry(sess db.Session) error {
- var err error
-
- // The first statement in a transaction can be retried transparently on the
- // server, so we need to add a placeholder statement so that our
- // force_retry() statement isn't the first one.
- _, err = sess.SQL().Exec(`SELECT 1`)
- if err != nil {
- return err
- }
-
- // If force_retry is called during the specified interval from the beginning
- // of the transaction it returns a retryable error. If not, 0 is returned
- // instead of an error.
- _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`)
- if err != nil {
- return err
- }
-
- return nil
-}
-
-func main() {
- // Connect to the local CockroachDB node.
- sess, err := cockroachdb.Open(settings)
- if err != nil {
- log.Fatal("cockroachdb.Open: ", err)
- }
- defer sess.Close()
-
- // Adjust this number to fit your specific needs (set to 5, by default)
- // sess.SetMaxTransactionRetries(10)
-
- // Create the "accounts" table
- createTables(sess)
-
- // Delete all the previous items in the "accounts" table.
- err = Accounts(sess).Truncate()
- if err != nil {
- log.Fatal("Truncate: ", err)
- }
-
- // Create a new account with a balance of 1000.
- account1 := Account{Balance: 1000}
- err = Accounts(sess).InsertReturning(&account1)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Create a new account with a balance of 250.
- account2 := Account{Balance: 250}
- err = Accounts(sess).InsertReturning(&account2)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Printing records
- printRecords(sess)
-
- // Change the balance of the first account.
- account1.Balance = 500
- err = sess.Save(&account1)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Change the balance of the second account.
- account2.Balance = 999
- err = sess.Save(&account2)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Printing records
- printRecords(sess)
-
- // Delete the first record.
- err = sess.Delete(&account1)
- if err != nil {
- log.Fatal("Delete: ", err)
- }
-
- startTime := time.Now()
-
- // Add a couple of new records within a transaction.
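- // sess.Tx runs the function in a transaction and retries it
- // automatically (up to the configured maximum) on retryable errors.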
- err = sess.Tx(func(tx db.Session) error {
- var err error
-
- if err = tx.Save(&Account{Balance: 887}); err != nil {
- return err
- }
-
- if time.Now().Sub(startTime) < time.Second*1 {
- // Will fail continuously for about 1 second.
- if err = crdbForceRetry(tx); err != nil {
- return err
- }
- }
-
- if err = tx.Save(&Account{Balance: 342}); err != nil {
- return err
- }
-
- return nil
- })
- if err != nil {
- log.Fatal("Could not commit transaction: ", err)
- }
-
- // Printing records
- printRecords(sess)
-}
-
-func printRecords(sess db.Session) {
- accounts := []Account{}
- err := Accounts(sess).Find().All(&accounts)
- if err != nil {
- log.Fatal("Find: ", err)
- }
- log.Printf("Balances:")
- for i := range accounts {
- fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance)
- }
-}
diff --git a/src/current/_includes/v20.1/app/java-version-note.md b/src/current/_includes/v20.1/app/java-version-note.md
deleted file mode 100644
index 3d559314262..00000000000
--- a/src/current/_includes/v20.1/app/java-version-note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-We recommend using Java versions 8+ with CockroachDB.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/jooq-basic-sample/Sample.java b/src/current/_includes/v20.1/app/jooq-basic-sample/Sample.java
deleted file mode 100644
index fd71726603e..00000000000
--- a/src/current/_includes/v20.1/app/jooq-basic-sample/Sample.java
+++ /dev/null
@@ -1,215 +0,0 @@
-package com.cockroachlabs;
-
-import com.cockroachlabs.example.jooq.db.Tables;
-import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord;
-import org.jooq.DSLContext;
-import org.jooq.SQLDialect;
-import org.jooq.Source;
-import org.jooq.conf.RenderQuotedNames;
-import org.jooq.conf.Settings;
-import org.jooq.exception.DataAccessException;
-import org.jooq.impl.DSL;
-
-import java.io.InputStream;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-import java.util.*;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicLong;
-import java.util.function.Function;
-
-import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS;
-
-public class Sample {
-
- private static final Random RAND = new Random();
- private static final boolean FORCE_RETRY = false;
- private static final String RETRY_SQL_STATE = "40001";
- private static final int MAX_ATTEMPT_COUNT = 6;
-
- private static Function<DSLContext, Long> addAccounts() {
- return ctx -> {
- long rv = 0;
-
- ctx.delete(ACCOUNTS).execute();
- ctx.batchInsert(
- new AccountsRecord(1L, 1000L),
- new AccountsRecord(2L, 250L),
- new AccountsRecord(3L, 314159L)
- ).execute();
-
- rv = 1;
- System.out.printf("APP: addAccounts() --> %d\n", rv);
- return rv;
- };
- }
-
- private static Function<DSLContext, Long> transferFunds(long fromId, long toId, long amount) {
- return ctx -> {
- long rv = 0;
-
- AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId));
- AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId));
-
- if (!(amount > fromAccount.getBalance())) {
- fromAccount.setBalance(fromAccount.getBalance() - amount);
- toAccount.setBalance(toAccount.getBalance() + amount);
-
- ctx.batchUpdate(fromAccount, toAccount).execute();
- rv = amount;
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
- }
-
- return rv;
- };
- }
-
- // Test our retry handling logic if FORCE_RETRY is true. This
- // method is only used to test the retry logic. It is not
- // intended for production code.
- private static Function<DSLContext, Long> forceRetryLogic() {
- return ctx -> {
- long rv = -1;
- try {
- System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
- ctx.execute("SELECT crdb_internal.force_retry('1s')");
- } catch (DataAccessException e) {
- System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
- throw e;
- }
- return rv;
- };
- }
-
- private static Function<DSLContext, Long> getAccountBalance(long id) {
- return ctx -> {
- AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id));
- long balance = account.getBalance();
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
- return balance;
- };
- }
-
- // Run SQL code in a way that automatically handles the
- // transaction retry logic so we do not have to duplicate it in
- // various places.
- private static long runTransaction(DSLContext session, Function<DSLContext, Long> fn) {
- AtomicLong rv = new AtomicLong(0L);
- AtomicInteger attemptCount = new AtomicInteger(0);
-
- while (attemptCount.get() < MAX_ATTEMPT_COUNT) {
- attemptCount.incrementAndGet();
-
- if (attemptCount.get() > 1) {
- System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get());
- }
-
- if (session.connectionResult(connection -> {
- connection.setAutoCommit(false);
- System.out.printf("APP: BEGIN;\n");
-
- if (attemptCount.get() == MAX_ATTEMPT_COUNT) {
- String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
- throw new RuntimeException(err);
- }
-
- // This block is only used to test the retry logic.
- // It is not necessary in production code. See also
- // the method 'testRetryLogic()'.
- if (FORCE_RETRY) {
- session.fetch("SELECT now()");
- }
-
- try {
- rv.set(fn.apply(session));
- if (rv.get() != -1) {
- connection.commit();
- System.out.printf("APP: COMMIT;\n");
- return true;
- }
- } catch (DataAccessException | SQLException e) {
- String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState();
-
- if (RETRY_SQL_STATE.equals(sqlState)) {
- // Since this is a transaction retry error, we
- // roll back the transaction and sleep a little
- // before trying again. Each time through the
- // loop we sleep for a little longer than the last
- // time (A.K.A. exponential backoff).
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get());
- System.out.printf("APP: ROLLBACK;\n");
- connection.rollback();
- int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100);
- System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
- try {
- Thread.sleep(sleepMillis);
- } catch (InterruptedException ignored) {
- // no-op
- }
- rv.set(-1L);
- } else {
- throw e;
- }
- }
-
- return false;
- })) {
- break;
- }
- }
-
- return rv.get();
- }
-
- public static void main(String[] args) throws Exception {
- try (Connection connection = DriverManager.getConnection(
- "jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key.pk8&sslcert=certs/client.maxroach.crt",
- "maxroach",
- ""
- )) {
- DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings()
- .withExecuteLogging(true)
- .withRenderQuotedNames(RenderQuotedNames.NEVER));
-
- // Initialise database with db.sql script
- try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) {
- ctx.parser().parse(Source.of(in).readString()).executeBatch();
- }
-
- long fromAccountId = 1;
- long toAccountId = 2;
- long transferAmount = 100;
-
- if (FORCE_RETRY) {
- System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
- runTransaction(ctx, forceRetryLogic());
- } else {
-
- runTransaction(ctx, addAccounts());
- long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId));
- long toBalance = runTransaction(ctx, getAccountBalance(toAccountId));
- if (fromBalance != -1 && toBalance != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
- }
-
- // Transfer $100 from account 1 to account 2
- long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount));
- if (transferResult != -1) {
- // Success!
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
-
- long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId));
- long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId));
- if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
- }
- }
- }
- }
- }
-}
diff --git a/src/current/_includes/v20.1/app/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v20.1/app/jooq-basic-sample/jooq-basic-sample.zip
deleted file mode 100644
index 859305478c0..00000000000
Binary files a/src/current/_includes/v20.1/app/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ
diff --git a/src/current/_includes/v20.1/app/project.clj b/src/current/_includes/v20.1/app/project.clj
deleted file mode 100644
index 41efc324b59..00000000000
--- a/src/current/_includes/v20.1/app/project.clj
+++ /dev/null
@@ -1,7 +0,0 @@
-(defproject test "0.1"
- :description "CockroachDB test"
- :url "http://cockroachlabs.com/"
- :dependencies [[org.clojure/clojure "1.8.0"]
- [org.clojure/java.jdbc "0.6.1"]
- [org.postgresql/postgresql "9.4.1211"]]
- :main test.test)
diff --git a/src/current/_includes/v20.1/app/python/sqlalchemy/example.py b/src/current/_includes/v20.1/app/python/sqlalchemy/example.py
deleted file mode 100644
index b5390259e2d..00000000000
--- a/src/current/_includes/v20.1/app/python/sqlalchemy/example.py
+++ /dev/null
@@ -1,103 +0,0 @@
-"""This module performs the following steps sequentially:
- 1. Reads in existing account IDs (if any) from the bank database.
- 2. Creates additional accounts with randomly generated IDs. Then, it adds a bit of money to each new account.
- 3. Chooses two accounts at random and takes half of the money from the first and deposits it into the second.
-"""
-
-import random
-from math import floor
-from sqlalchemy import create_engine, Column, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-from cockroachdb.sqlalchemy import run_transaction
-
-Base = declarative_base()
-
-
-
-class Account(Base):
- """The Account class corresponds to the "accounts" database table.
- """
- __tablename__ = 'accounts'
- id = Column(Integer, primary_key=True)
- balance = Column(Integer)
-
-
-# Create an engine to communicate with the database. The
-# "cockroachdb://" prefix for the engine URL indicates that we are
-# connecting to CockroachDB using the 'cockroachdb' dialect.
-# For more information, see
-# https://github.com/cockroachdb/sqlalchemy-cockroachdb.
-
-engine = create_engine(
- # For cockroach demo:
- 'cockroachdb://<username>:<password>@<hostname>:<port>/bank?sslmode=require',
- echo=True # Log SQL queries to stdout
-)
-
-# Automatically create the "accounts" table based on the Account class.
-Base.metadata.create_all(engine)
-
-
-# Store the account IDs we create for later use.
-
-seen_account_ids = set()
-
-
-# The code below generates random IDs for new accounts.
-
-def create_random_accounts(sess, num):
- """Create N new accounts with random IDs and random account balances.
- Note that since this is a demo, we do not do any work to ensure the
- new IDs do not collide with existing IDs.
- """
- new_accounts = []
- while num > 0:
- billion = 1000000000
- new_id = floor(random.random()*billion)
- seen_account_ids.add(new_id)
- new_accounts.append(
- Account(
- id=new_id,
- balance=floor(random.random()*1000000)
- )
- )
- num = num - 1
- sess.add_all(new_accounts)
-
-
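-# run_transaction retries the passed function whenever the database
-# aborts the transaction with a retryable serialization error.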
-run_transaction(sessionmaker(bind=engine),
- lambda s: create_random_accounts(s, 100))
-
-
-def get_random_account_id():
- """ Helper function for getting random existing account IDs.
- """
- random_id = random.choice(tuple(seen_account_ids))
- return random_id
-
-
-def transfer_funds_randomly(session):
- """Transfer money randomly between accounts (during SESSION).
- Cuts a randomly selected account's balance in half, and gives the
- other half to some other randomly selected account.
- """
- source_id = get_random_account_id()
- sink_id = get_random_account_id()
-
- source = session.query(Account).filter_by(id=source_id).one()
- amount = floor(source.balance/2)
-
- # Check balance of the first account.
- if source.balance < amount:
- raise "Insufficient funds"
-
- source.balance -= amount
- session.query(Account).filter_by(id=sink_id).update(
- {"balance": (Account.balance + amount)}
- )
-
-
-# Run the transfer inside a transaction.
-
-run_transaction(sessionmaker(bind=engine), transfer_funds_randomly)
diff --git a/src/current/_includes/v20.1/app/python/sqlalchemy/sqlalchemy-large-txns.py b/src/current/_includes/v20.1/app/python/sqlalchemy/sqlalchemy-large-txns.py
deleted file mode 100644
index 7a6ef82c2e3..00000000000
--- a/src/current/_includes/v20.1/app/python/sqlalchemy/sqlalchemy-large-txns.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from sqlalchemy import create_engine, Column, Float, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-from cockroachdb.sqlalchemy import run_transaction
-from random import random
-
-Base = declarative_base()
-
-# The code below assumes you have run the following SQL statements.
-
-# CREATE DATABASE pointstore;
-
-# USE pointstore;
-
-# CREATE TABLE points (
-# id INT PRIMARY KEY DEFAULT unique_rowid(),
-# x FLOAT NOT NULL,
-# y FLOAT NOT NULL,
-# z FLOAT NOT NULL
-# );
-
-engine = create_engine(
- # For cockroach demo:
- 'cockroachdb://<username>:<password>@<hostname>:<port>/bank?sslmode=require',
- echo=True # Log SQL queries to stdout
-)
-
-
-class Point(Base):
- __tablename__ = 'points'
- id = Column(Integer, primary_key=True)
- x = Column(Float)
- y = Column(Float)
- z = Column(Float)
-
-
-def add_points(num_points):
- chunk_size = 1000 # Tune this based on object sizes.
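- # Breaking the insert into chunks keeps each transaction small,
- # which reduces both retry cost and memory pressure.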
-
- def add_points_helper(sess, chunk, num_points):
- points = []
- for i in range(chunk, min(chunk + chunk_size, num_points)):
- points.append(
- Point(x=random()*1024, y=random()*1024, z=random()*1024)
- )
- sess.bulk_save_objects(points)
-
- for chunk in range(0, num_points, chunk_size):
- run_transaction(
- sessionmaker(bind=engine),
- lambda s: add_points_helper(
- s, chunk, min(chunk + chunk_size, num_points)
- )
- )
-
-
-add_points(10000)
diff --git a/src/current/_includes/v20.1/app/retry-errors.md b/src/current/_includes/v20.1/app/retry-errors.md
deleted file mode 100644
index 5f219f53e12..00000000000
--- a/src/current/_includes/v20.1/app/retry-errors.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-Your application should [use a retry loop to handle transaction errors](error-handling-and-troubleshooting.html#transaction-retry-errors) that can occur under contention.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/app/see-also-links.md b/src/current/_includes/v20.1/app/see-also-links.md
deleted file mode 100644
index cfdb9aba7ba..00000000000
--- a/src/current/_includes/v20.1/app/see-also-links.md
+++ /dev/null
@@ -1,8 +0,0 @@
-You might also be interested in the following pages:
-
-- [Client Connection Parameters](connection-parameters.html)
-- [Data Replication](demo-replication-and-rebalancing.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Replication & Rebalancing](demo-replication-and-rebalancing.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/Account.java b/src/current/_includes/v20.1/app/spring-data-jdbc/Account.java
deleted file mode 100644
index e4f6dd62495..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/Account.java
+++ /dev/null
@@ -1,35 +0,0 @@
-package io.roach.data.jdbc;
-
-import java.math.BigDecimal;
-
-import org.springframework.data.annotation.Id;
-
-/**
- * Domain entity mapped to the account table.
- */
-public class Account {
- @Id
- private Long id;
-
- private String name;
-
- private AccountType type;
-
- private BigDecimal balance;
-
- public Long getId() {
- return id;
- }
-
- public String getName() {
- return name;
- }
-
- public AccountType getType() {
- return type;
- }
-
- public BigDecimal getBalance() {
- return balance;
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/AccountController.java b/src/current/_includes/v20.1/app/spring-data-jdbc/AccountController.java
deleted file mode 100644
index aa2d70b07bd..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/AccountController.java
+++ /dev/null
@@ -1,148 +0,0 @@
-package io.roach.data.jdbc;
-
-import java.math.BigDecimal;
-
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.dao.DataRetrievalFailureException;
-import org.springframework.data.domain.PageRequest;
-import org.springframework.data.domain.Pageable;
-import org.springframework.data.domain.Sort;
-import org.springframework.data.web.PageableDefault;
-import org.springframework.data.web.PagedResourcesAssembler;
-import org.springframework.hateoas.IanaLinkRelations;
-import org.springframework.hateoas.Link;
-import org.springframework.hateoas.PagedModel;
-import org.springframework.hateoas.RepresentationModel;
-import org.springframework.hateoas.server.RepresentationModelAssembler;
-import org.springframework.http.HttpEntity;
-import org.springframework.http.HttpStatus;
-import org.springframework.http.ResponseEntity;
-import org.springframework.transaction.annotation.Transactional;
-import org.springframework.web.bind.annotation.*;
-import org.springframework.web.servlet.support.ServletUriComponentsBuilder;
-
-import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
-import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;
-import static org.springframework.transaction.annotation.Propagation.REQUIRES_NEW;
-
-/**
- * Main remoting and transaction boundary in the form of a REST controller. The discipline
- * when following the entity-control-boundary (ECB) pattern is that only service boundaries
- * are allowed to start and end transactions. A service boundary can be a controller, business
- * service facade or service activator (JMS/Kafka listener).
- *
- * This is enforced by the REQUIRES_NEW propagation attribute of @Transactional annotated
- * controller methods. Between the web container's HTTP listener and the transaction proxy,
- * there's yet another transparent proxy in the form of a retry loop advice with exponential
- * backoff. It takes care of retrying transactions that are aborted by transient SQL errors,
- * rather than having these propagate all the way over the wire to the client / user agent.
- *
- * @see RetryableTransactionAspect
- */
-@RestController
-public class AccountController {
- @Autowired
- private AccountRepository accountRepository;
-
- @Autowired
- private PagedResourcesAssembler<Account> pagedResourcesAssembler;
-
- /**
- * Provides the service index resource representation which is only links
- * for clients to follow.
- */
- @GetMapping
- public ResponseEntity<RepresentationModel> index() {
- RepresentationModel index = new RepresentationModel();
-
- // Type-safe way to generate URLs bound to controller methods
- index.add(linkTo(methodOn(AccountController.class)
- .listAccounts(PageRequest.of(0, 5)))
- .withRel("accounts")); // Lets skip curies and affordances for now
-
- // This rel essentially informs the client that a POST to its href with
- // form parameters will transfer funds between referenced accounts.
- // (it's only a demo)
- index.add(linkTo(AccountController.class)
- .slash("transfer{?fromId,toId,amount}")
- .withRel("transfer"));
-
- // Spring boot actuators for observability / monitoring
- index.add(new Link(
- ServletUriComponentsBuilder
- .fromCurrentContextPath()
- .pathSegment("actuator")
- .buildAndExpand()
- .toUriString()
- ).withRel("actuator"));
-
- return new ResponseEntity<>(index, HttpStatus.OK);
- }
-
- /**
- * Provides a paged representation of accounts (sort order omitted).
- */
- @GetMapping("/account")
- @Transactional(propagation = REQUIRES_NEW)
- public HttpEntity<PagedModel<AccountModel>> listAccounts(
- @PageableDefault(size = 5, direction = Sort.Direction.ASC) Pageable page) {
- return ResponseEntity
- .ok(pagedResourcesAssembler.toModel(accountRepository.findAll(page), accountModelAssembler()));
- }
-
- /**
- * Provides a point lookup of a given account.
- */
- @GetMapping(value = "/account/{id}")
- @Transactional(propagation = REQUIRES_NEW, readOnly = true) // Notice it's marked read-only
- public HttpEntity<AccountModel> getAccount(@PathVariable("id") Long accountId) {
- return new ResponseEntity<>(accountModelAssembler().toModel(
- accountRepository.findById(accountId)
- .orElseThrow(() -> new DataRetrievalFailureException("No such account: " + accountId))),
- HttpStatus.OK);
- }
-
- /**
- * Main funds transfer method.
- */
- @PostMapping(value = "/transfer")
- @Transactional(propagation = REQUIRES_NEW)
- public HttpEntity<?> transfer(
- @RequestParam("fromId") Long fromId,
- @RequestParam("toId") Long toId,
- @RequestParam("amount") BigDecimal amount
- ) {
- if (amount.compareTo(BigDecimal.ZERO) < 0) {
- throw new IllegalArgumentException("Negative amount");
- }
- if (fromId.equals(toId)) {
- throw new IllegalArgumentException("From and to accounts must be different");
- }
-
- BigDecimal fromBalance = accountRepository.getBalance(fromId).add(amount.negate());
- // Application level invariant check.
- // Could be enhanced or replaced with a CHECK constraint like:
- // ALTER TABLE account ADD CONSTRAINT check_account_positive_balance CHECK (balance >= 0)
- if (fromBalance.compareTo(BigDecimal.ZERO) < 0) {
- throw new NegativeBalanceException("Insufficient funds " + amount + " for account " + fromId);
- }
-
- accountRepository.updateBalance(fromId, amount.negate());
- accountRepository.updateBalance(toId, amount);
-
- return ResponseEntity.ok().build();
- }
-
- private RepresentationModelAssembler<Account, AccountModel> accountModelAssembler() {
- return (entity) -> {
- AccountModel model = new AccountModel();
- model.setName(entity.getName());
- model.setType(entity.getType());
- model.setBalance(entity.getBalance());
- model.add(linkTo(methodOn(AccountController.class)
- .getAccount(entity.getId())
- ).withRel(IanaLinkRelations.SELF));
- return model;
- };
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/AccountModel.java b/src/current/_includes/v20.1/app/spring-data-jdbc/AccountModel.java
deleted file mode 100644
index 6ee47d48f4b..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/AccountModel.java
+++ /dev/null
@@ -1,42 +0,0 @@
-package io.roach.data.jdbc;
-
-import java.math.BigDecimal;
-
-import org.springframework.hateoas.RepresentationModel;
-import org.springframework.hateoas.server.core.Relation;
-
-/**
- * Account resource represented in HAL+JSON via REST API.
- */
-@Relation(value = "account", collectionRelation = "accounts")
-public class AccountModel extends RepresentationModel<AccountModel> {
- private String name;
-
- private AccountType type;
-
- private BigDecimal balance;
-
- public String getName() {
- return name;
- }
-
- public void setName(String name) {
- this.name = name;
- }
-
- public AccountType getType() {
- return type;
- }
-
- public void setType(AccountType type) {
- this.type = type;
- }
-
- public BigDecimal getBalance() {
- return balance;
- }
-
- public void setBalance(BigDecimal balance) {
- this.balance = balance;
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/AccountRepository.java b/src/current/_includes/v20.1/app/spring-data-jdbc/AccountRepository.java
deleted file mode 100644
index f63c71f829b..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/AccountRepository.java
+++ /dev/null
@@ -1,36 +0,0 @@
-package io.roach.data.jdbc;
-
-import java.math.BigDecimal;
-
-
-import org.springframework.data.domain.Page;
-import org.springframework.data.jdbc.repository.query.Modifying;
-import org.springframework.data.jdbc.repository.query.Query;
-import org.springframework.data.repository.PagingAndSortingRepository;
-import org.springframework.data.repository.query.Param;
-import org.springframework.stereotype.Repository;
-import org.springframework.transaction.annotation.Transactional;
-
-import static org.springframework.transaction.annotation.Propagation.MANDATORY;
-
-/**
- * The main account repository. Notice there's no implementation needed, since it's auto-proxied by
- * spring-data.
- */
-@Repository
-@Transactional(propagation = MANDATORY)
-public interface AccountRepository extends PagingAndSortingRepository<Account, Long> {
-
- @Query("SELECT * FROM account LIMIT :pageSize OFFSET :offset")
- Page<Account> findAll(@Param("pageSize") int pageSize, @Param("offset") long offset);
-
- @Query("SELECT count(id) FROM account")
- long countAll();
-
- @Query(value = "SELECT balance FROM account WHERE id=:id")
- BigDecimal getBalance(@Param("id") Long id);
-
- @Modifying
- @Query("UPDATE account SET balance = balance + :balance WHERE id=:id")
- void updateBalance(@Param("id") Long id, @Param("balance") BigDecimal balance);
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/JdbcApplication.java b/src/current/_includes/v20.1/app/spring-data-jdbc/JdbcApplication.java
deleted file mode 100644
index 7fa57a75c89..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/JdbcApplication.java
+++ /dev/null
@@ -1,107 +0,0 @@
-package io.roach.data.jdbc;
-
-import java.math.BigDecimal;
-import java.util.ArrayDeque;
-import java.util.Deque;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.Executors;
-import java.util.concurrent.Future;
-import java.util.concurrent.ScheduledExecutorService;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.boot.CommandLineRunner;
-import org.springframework.boot.WebApplicationType;
-import org.springframework.boot.autoconfigure.SpringBootApplication;
-import org.springframework.boot.builder.SpringApplicationBuilder;
-import org.springframework.context.annotation.EnableAspectJAutoProxy;
-import org.springframework.core.Ordered;
-import org.springframework.data.jdbc.repository.config.EnableJdbcRepositories;
-import org.springframework.data.web.config.EnableSpringDataWebSupport;
-import org.springframework.hateoas.Link;
-import org.springframework.hateoas.config.EnableHypermediaSupport;
-import org.springframework.http.HttpEntity;
-import org.springframework.http.HttpMethod;
-import org.springframework.transaction.annotation.EnableTransactionManagement;
-import org.springframework.web.client.HttpClientErrorException;
-import org.springframework.web.client.RestTemplate;
-
-/**
- * Spring boot server application using spring-data-jdbc for data access.
- */
-@EnableHypermediaSupport(type = EnableHypermediaSupport.HypermediaType.HAL)
-@EnableJdbcRepositories
-@EnableAspectJAutoProxy(proxyTargetClass = true)
-@EnableSpringDataWebSupport
-@EnableTransactionManagement(order = Ordered.LOWEST_PRECEDENCE - 1) // Bump up one level to enable extra advisors
-@SpringBootApplication
-public class JdbcApplication implements CommandLineRunner {
- protected static final Logger logger = LoggerFactory.getLogger(JdbcApplication.class);
-
- public static void main(String[] args) {
- new SpringApplicationBuilder(JdbcApplication.class)
- .web(WebApplicationType.SERVLET)
- .run(args);
- }
-
- @Override
- public void run(String... args) {
- for (String a : args) {
- if ("--skip-client".equals(a)) {
- return;
- }
- }
-
- logger.info("Lets move some $$ around!");
-
- final Link transferLink = new Link("http://localhost:8080/transfer{?fromId,toId,amount}");
-
- final int threads = Runtime.getRuntime().availableProcessors();
-
- final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(threads);
-
- Deque<Future<?>> futures = new ArrayDeque<>();
-
- for (int i = 0; i < threads; i++) {
- Future<?> future = executorService.submit(() -> {
- for (int j = 0; j < 100; j++) {
- int fromId = 1 + (int) Math.round(Math.random() * 3);
- int toId = fromId % 4 + 1;
-
- BigDecimal amount = new BigDecimal("10.00");
-
- Map<String, Object> form = new HashMap<>();
- form.put("fromId", fromId);
- form.put("toId", toId);
- form.put("amount", amount);
-
- String uri = transferLink.expand(form).getHref();
-
- try {
- new RestTemplate().exchange(uri, HttpMethod.POST, new HttpEntity<>(null), String.class);
- } catch (HttpClientErrorException.BadRequest e) {
- logger.warn(e.getResponseBodyAsString());
- }
- }
- });
- futures.add(future);
- }
-
- while (!futures.isEmpty()) {
- try {
- futures.pop().get();
- logger.info("Worker finished - {} remaining", futures.size());
- } catch (InterruptedException e) {
- Thread.currentThread().interrupt();
- } catch (ExecutionException e) {
- logger.warn("Worker failed", e.getCause());
- }
- }
-
- logger.info("All client workers finished but server keeps running. Have a nice day!");
-
- executorService.shutdownNow();
- }
-}
-
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/RetryableTransactionAspect.java b/src/current/_includes/v20.1/app/spring-data-jdbc/RetryableTransactionAspect.java
deleted file mode 100644
index b4d83471a9d..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/RetryableTransactionAspect.java
+++ /dev/null
@@ -1,88 +0,0 @@
-package io.roach.data.jdbc;
-
-import java.lang.reflect.UndeclaredThrowableException;
-import java.util.concurrent.atomic.AtomicLong;
-
-import org.aspectj.lang.ProceedingJoinPoint;
-import org.aspectj.lang.annotation.Around;
-import org.aspectj.lang.annotation.Aspect;
-import org.aspectj.lang.annotation.Pointcut;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.core.Ordered;
-import org.springframework.core.annotation.Order;
-import org.springframework.dao.ConcurrencyFailureException;
-import org.springframework.dao.TransientDataAccessException;
-import org.springframework.stereotype.Component;
-import org.springframework.transaction.TransactionSystemException;
-import org.springframework.transaction.annotation.Transactional;
-import org.springframework.transaction.support.TransactionSynchronizationManager;
-import org.springframework.util.Assert;
-
-/**
- * Aspect with an around advice that intercepts and retries transient concurrency exceptions.
- * Methods matching the pointcut expression (annotated with @Transactional) are retried a number
- * of times with exponential backoff.
- *
- * This advice needs to run in a non-transactional context, which is before the underlying
- * transaction advisor (@Order ensures that).
- */
-@Component
-@Aspect
-// Before TX advisor
-@Order(Ordered.LOWEST_PRECEDENCE - 2)
-public class RetryableTransactionAspect {
- protected final Logger logger = LoggerFactory.getLogger(getClass());
-
- @Pointcut("execution(* io.roach..*(..)) && @annotation(transactional)")
- public void anyTransactionBoundaryOperation(Transactional transactional) {
- }
-
- @Around(value = "anyTransactionBoundaryOperation(transactional)",
- argNames = "pjp,transactional")
- public Object retryableOperation(ProceedingJoinPoint pjp, Transactional transactional)
- throws Throwable {
- final int totalRetries = 30;
- int numAttempts = 0;
- AtomicLong backoffMillis = new AtomicLong(150);
-
- Assert.isTrue(!TransactionSynchronizationManager.isActualTransactionActive(), "TX active");
-
- do {
- try {
- numAttempts++;
- return pjp.proceed();
- } catch (TransientDataAccessException | TransactionSystemException ex) {
- handleTransientException(ex, numAttempts, totalRetries, pjp, backoffMillis);
- } catch (UndeclaredThrowableException ex) {
- Throwable t = ex.getUndeclaredThrowable();
- if (t instanceof TransientDataAccessException) {
- handleTransientException(t, numAttempts, totalRetries, pjp, backoffMillis);
- } else {
- throw ex;
- }
- }
- } while (numAttempts < totalRetries);
-
- throw new ConcurrencyFailureException("Too many transient errors (" + numAttempts + ") for method ["
- + pjp.getSignature().toLongString() + "]. Giving up!");
- }
-
- private void handleTransientException(Throwable ex, int numAttempts, int totalAttempts,
- ProceedingJoinPoint pjp, AtomicLong backoffMillis) {
- if (logger.isWarnEnabled()) {
- logger.warn("Transient data access exception (" + numAttempts + " of max " + totalAttempts + ") "
- + "detected (retry in " + backoffMillis + " ms) "
- + "in method '" + pjp.getSignature().getDeclaringTypeName() + "." + pjp.getSignature().getName()
- + "': " + ex.getMessage());
- }
- if (backoffMillis.get() >= 0) {
- try {
- Thread.sleep(backoffMillis.get());
- } catch (InterruptedException e) {
- Thread.currentThread().interrupt();
- }
- backoffMillis.set(Math.min((long) (backoffMillis.get() * 1.5), 1500));
- }
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/TransactionHintsAspect.java b/src/current/_includes/v20.1/app/spring-data-jdbc/TransactionHintsAspect.java
deleted file mode 100644
index 0eae2cba3ba..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/TransactionHintsAspect.java
+++ /dev/null
@@ -1,63 +0,0 @@
-package io.roach.data.jdbc;
-
-import org.aspectj.lang.ProceedingJoinPoint;
-import org.aspectj.lang.annotation.Around;
-import org.aspectj.lang.annotation.Aspect;
-import org.aspectj.lang.annotation.Pointcut;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.core.Ordered;
-import org.springframework.core.annotation.Order;
-import org.springframework.jdbc.core.JdbcTemplate;
-import org.springframework.stereotype.Component;
-import org.springframework.transaction.TransactionDefinition;
-import org.springframework.transaction.annotation.Transactional;
-import org.springframework.transaction.support.TransactionSynchronizationManager;
-import org.springframework.util.Assert;
-
-/**
- * Aspect with an around advice that intercepts and sets transaction attributes.
- *
- * This advice needs to run in a transactional context, which is after the underlying
- * transaction advisor.
- */
-@Component
-@Aspect
-// After TX advisor
-@Order(Ordered.LOWEST_PRECEDENCE)
-public class TransactionHintsAspect {
- protected final Logger logger = LoggerFactory.getLogger(getClass());
-
- @Autowired
- private JdbcTemplate jdbcTemplate;
-
- private String applicationName = "roach-data";
-
- @Pointcut("execution(* io.roach..*(..)) && @annotation(transactional)")
- public void anyTransactionBoundaryOperation(Transactional transactional) {
- }
-
- @Around(value = "anyTransactionBoundaryOperation(transactional)",
- argNames = "pjp,transactional")
- public Object setTransactionAttributes(ProceedingJoinPoint pjp, Transactional transactional)
- throws Throwable {
- Assert.isTrue(TransactionSynchronizationManager.isActualTransactionActive(), "TX not active");
-
- // https://www.cockroachlabs.com/docs/v19.2/set-vars.html
- jdbcTemplate.update("SET application_name=?", applicationName);
-
- if (transactional.timeout() != TransactionDefinition.TIMEOUT_DEFAULT) {
- logger.info("Setting statement time {} for {}", transactional.timeout(),
- pjp.getSignature().toShortString());
- jdbcTemplate.update("SET statement_timeout=?", transactional.timeout() * 1000);
- }
-
- if (transactional.readOnly()) {
- logger.info("Setting transaction read only for {}", pjp.getSignature().toShortString());
- jdbcTemplate.execute("SET transaction_read_only=true");
- }
-
- return pjp.proceed();
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/changelog-master.xml b/src/current/_includes/v20.1/app/spring-data-jdbc/changelog-master.xml
deleted file mode 100644
index 05915165a00..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/changelog-master.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
- http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
-
- <changeSet id="1" author="root">
- <validCheckSum>ANY</validCheckSum>
-
- <insert tableName="account">
- <column name="id">1</column>
- <column name="name">Alice</column>
- <column name="balance" valueNumeric="500.00"/>
- <column name="type">asset</column>
- </insert>
- <insert tableName="account">
- <column name="id">2</column>
- <column name="name">Bob</column>
- <column name="balance" valueNumeric="500.00"/>
- <column name="type">expense</column>
- </insert>
- <insert tableName="account">
- <column name="id">3</column>
- <column name="name">Bobby Tables</column>
- <column name="balance" valueNumeric="500.00"/>
- <column name="type">asset</column>
- </insert>
- <insert tableName="account">
- <column name="id">4</column>
- <column name="name">Doris</column>
- <column name="balance" valueNumeric="500.00"/>
- <column name="type">expense</column>
- </insert>
- </changeSet>
-</databaseChangeLog>
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/spring-data-jdbc/create.sql b/src/current/_includes/v20.1/app/spring-data-jdbc/create.sql
deleted file mode 100644
index 349f461aa39..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jdbc/create.sql
+++ /dev/null
@@ -1,7 +0,0 @@
-create table account
-(
- id int not null primary key default unique_rowid(),
- balance numeric(19, 2) not null,
- name varchar(128) not null,
- type varchar(25) not null
-);
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/Account.java b/src/current/_includes/v20.1/app/spring-data-jpa/Account.java
deleted file mode 100644
index 42f397c0dd3..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/Account.java
+++ /dev/null
@@ -1,40 +0,0 @@
-package io.roach.data.jpa;
-
-import java.math.BigDecimal;
-
-import javax.persistence.*;
-
-@Entity
-@Table(name = "account")
-public class Account {
- @Id
- @Column
- @GeneratedValue(strategy = GenerationType.IDENTITY)
- private Long id;
-
- @Column(length = 128, nullable = false, unique = true)
- private String name;
-
- @Column(length = 25, nullable = false)
- @Enumerated(EnumType.STRING)
- private AccountType type;
-
- @Column(length = 25, nullable = false)
- private BigDecimal balance;
-
- public Long getId() {
- return id;
- }
-
- public String getName() {
- return name;
- }
-
- public AccountType getType() {
- return type;
- }
-
- public BigDecimal getBalance() {
- return balance;
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/AccountController.java b/src/current/_includes/v20.1/app/spring-data-jpa/AccountController.java
deleted file mode 100644
index 9296b00daca..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/AccountController.java
+++ /dev/null
@@ -1,101 +0,0 @@
-package io.roach.data.jpa;
-
-import java.math.BigDecimal;
-
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.data.domain.PageRequest;
-import org.springframework.data.domain.Pageable;
-import org.springframework.data.domain.Sort;
-import org.springframework.data.web.PageableDefault;
-import org.springframework.data.web.PagedResourcesAssembler;
-import org.springframework.hateoas.IanaLinkRelations;
-import org.springframework.hateoas.PagedModel;
-import org.springframework.hateoas.RepresentationModel;
-import org.springframework.hateoas.server.RepresentationModelAssembler;
-import org.springframework.http.HttpEntity;
-import org.springframework.http.HttpStatus;
-import org.springframework.http.ResponseEntity;
-import org.springframework.transaction.annotation.Transactional;
-import org.springframework.web.bind.annotation.*;
-
-import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
-import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;
-import static org.springframework.transaction.annotation.Propagation.REQUIRES_NEW;
-
-@RestController
-public class AccountController {
- @Autowired
- private AccountRepository accountRepository;
-
- @Autowired
- private PagedResourcesAssembler<Account> pagedResourcesAssembler;
-
- @GetMapping
- public ResponseEntity index() {
- RepresentationModel index = new RepresentationModel();
-
- index.add(linkTo(methodOn(AccountController.class)
- .listAccounts(PageRequest.of(0, 5)))
- .withRel("accounts"));
-
- index.add(linkTo(AccountController.class)
- .slash("transfer{?fromId,toId,amount}")
- .withRel("transfer"));
-
- return new ResponseEntity<>(index, HttpStatus.OK);
- }
-
- @GetMapping("/account")
- @Transactional(propagation = REQUIRES_NEW)
- public HttpEntity<PagedModel<AccountModel>> listAccounts(
- @PageableDefault(size = 5, direction = Sort.Direction.ASC) Pageable page) {
- return ResponseEntity
- .ok(pagedResourcesAssembler.toModel(accountRepository.findAll(page), accountModelAssembler()));
- }
-
- @GetMapping(value = "/account/{id}")
- @Transactional(propagation = REQUIRES_NEW)
- public HttpEntity<AccountModel> getAccount(@PathVariable("id") Long accountId) {
- return new ResponseEntity<>(accountModelAssembler().toModel(accountRepository.getOne(accountId)),
- HttpStatus.OK);
- }
-
- @PostMapping(value = "/transfer")
- @Transactional(propagation = REQUIRES_NEW)
- public HttpEntity<Void> transfer(
- @RequestParam("fromId") Long fromId,
- @RequestParam("toId") Long toId,
- @RequestParam("amount") BigDecimal amount
- ) {
- if (amount.compareTo(BigDecimal.ZERO) < 0) {
- throw new IllegalArgumentException("Negative amount");
- }
- if (fromId.equals(toId)) {
- throw new IllegalArgumentException("From and to accounts must be different");
- }
-
- BigDecimal fromBalance = accountRepository.getBalance(fromId).add(amount.negate());
-
- if (fromBalance.compareTo(BigDecimal.ZERO) < 0) {
- throw new NegativeBalanceException("Insufficient funds " + amount + " for account " + fromId);
- }
-
- accountRepository.updateBalance(fromId, amount.negate());
- accountRepository.updateBalance(toId, amount);
-
- return ResponseEntity.ok().build();
- }
-
- private RepresentationModelAssembler<Account, AccountModel> accountModelAssembler() {
- return (entity) -> {
- AccountModel model = new AccountModel();
- model.setName(entity.getName());
- model.setType(entity.getType());
- model.setBalance(entity.getBalance());
- model.add(linkTo(methodOn(AccountController.class)
- .getAccount(entity.getId())
- ).withRel(IanaLinkRelations.SELF));
- return model;
- };
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/AccountModel.java b/src/current/_includes/v20.1/app/spring-data-jpa/AccountModel.java
deleted file mode 100644
index 0106b2ee4c5..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/AccountModel.java
+++ /dev/null
@@ -1,39 +0,0 @@
-package io.roach.data.jpa;
-
-import java.math.BigDecimal;
-
-import org.springframework.hateoas.RepresentationModel;
-import org.springframework.hateoas.server.core.Relation;
-
-@Relation(value = "account", collectionRelation = "accounts")
-public class AccountModel extends RepresentationModel<AccountModel> {
- private String name;
-
- private AccountType type;
-
- private BigDecimal balance;
-
- public String getName() {
- return name;
- }
-
- public void setName(String name) {
- this.name = name;
- }
-
- public AccountType getType() {
- return type;
- }
-
- public void setType(AccountType type) {
- this.type = type;
- }
-
- public BigDecimal getBalance() {
- return balance;
- }
-
- public void setBalance(BigDecimal balance) {
- this.balance = balance;
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/AccountRepository.java b/src/current/_includes/v20.1/app/spring-data-jpa/AccountRepository.java
deleted file mode 100644
index 0e964e3b515..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/AccountRepository.java
+++ /dev/null
@@ -1,23 +0,0 @@
-package io.roach.data.jpa;
-
-import java.math.BigDecimal;
-
-import org.springframework.data.jpa.repository.JpaRepository;
-import org.springframework.data.jpa.repository.JpaSpecificationExecutor;
-import org.springframework.data.jpa.repository.Modifying;
-import org.springframework.data.jpa.repository.Query;
-import org.springframework.stereotype.Repository;
-import org.springframework.transaction.annotation.Transactional;
-
-import static org.springframework.transaction.annotation.Propagation.MANDATORY;
-
-@Repository
-@Transactional(propagation = MANDATORY)
-public interface AccountRepository extends JpaRepository<Account, Long>, JpaSpecificationExecutor<Account> {
- @Query(value = "select balance from Account where id=?1")
- BigDecimal getBalance(Long id);
-
- @Modifying
- @Query("update Account set balance = balance + ?2 where id=?1")
- void updateBalance(Long id, BigDecimal balance);
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/JpaApplication.java b/src/current/_includes/v20.1/app/spring-data-jpa/JpaApplication.java
deleted file mode 100644
index 98480a047df..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/JpaApplication.java
+++ /dev/null
@@ -1,96 +0,0 @@
-package io.roach.data.jpa;
-
-import java.math.BigDecimal;
-import java.util.ArrayDeque;
-import java.util.Deque;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.Executors;
-import java.util.concurrent.Future;
-import java.util.concurrent.ScheduledExecutorService;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.boot.CommandLineRunner;
-import org.springframework.boot.WebApplicationType;
-import org.springframework.boot.autoconfigure.SpringBootApplication;
-import org.springframework.boot.builder.SpringApplicationBuilder;
-import org.springframework.context.annotation.EnableAspectJAutoProxy;
-import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
-import org.springframework.hateoas.Link;
-import org.springframework.hateoas.config.EnableHypermediaSupport;
-import org.springframework.http.HttpEntity;
-import org.springframework.http.HttpMethod;
-import org.springframework.transaction.annotation.EnableTransactionManagement;
-import org.springframework.web.client.HttpClientErrorException;
-import org.springframework.web.client.RestTemplate;
-
-@EnableHypermediaSupport(type = EnableHypermediaSupport.HypermediaType.HAL)
-@EnableJpaRepositories
-@EnableAspectJAutoProxy(proxyTargetClass = true)
-@EnableTransactionManagement
-@SpringBootApplication
-public class JpaApplication implements CommandLineRunner {
- protected static final Logger logger = LoggerFactory.getLogger(JpaApplication.class);
-
- public static void main(String[] args) {
- new SpringApplicationBuilder(JpaApplication.class)
- .web(WebApplicationType.SERVLET)
- .run(args);
- }
-
- @Override
- public void run(String... args) throws Exception {
- logger.info("Lets move some $$ around!");
-
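- // URI template for the "transfer" link exposed by the REST index resource.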
- final Link transferLink = new Link("http://localhost:8080/transfer{?fromId,toId,amount}");
-
- final int threads = Runtime.getRuntime().availableProcessors();
-
- final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(threads);
-
- Deque<Future<?>> futures = new ArrayDeque<>();
-
- for (int i = 0; i < threads; i++) {
- Future<?> future = executorService.submit(() -> {
- for (int j = 0; j < 100; j++) {
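- // Pick a random source account (1-4) and a different destination account.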
- int fromId = 1 + (int) Math.round(Math.random() * 3);
- int toId = fromId % 4 + 1;
-
- BigDecimal amount = new BigDecimal("10.00");
-
- Map<String, Object> form = new HashMap<>();
- form.put("fromId", fromId);
- form.put("toId", toId);
- form.put("amount", amount);
-
- String uri = transferLink.expand(form).getHref();
-
- try {
- new RestTemplate().exchange(uri, HttpMethod.POST, new HttpEntity<>(null), String.class);
- } catch (HttpClientErrorException.BadRequest e) {
- logger.warn(e.getResponseBodyAsString());
- }
- }
- });
- futures.add(future);
- }
-
- while (!futures.isEmpty()) {
- try {
- futures.pop().get();
- logger.info("Worker finished - {} remaining", futures.size());
- } catch (InterruptedException e) {
- Thread.currentThread().interrupt();
- } catch (ExecutionException e) {
- logger.warn("Worker failed", e.getCause());
- }
- }
-
- logger.info("All client workers finished but server keeps running. Have a nice day!");
-
- executorService.shutdownNow();
- }
-}
-
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/RetryableTransactionAspect.java b/src/current/_includes/v20.1/app/spring-data-jpa/RetryableTransactionAspect.java
deleted file mode 100644
index 1c9ec619ed9..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/RetryableTransactionAspect.java
+++ /dev/null
@@ -1,80 +0,0 @@
-package io.roach.data.jpa;
-
-import java.lang.reflect.UndeclaredThrowableException;
-import java.util.concurrent.atomic.AtomicLong;
-
-import org.aspectj.lang.ProceedingJoinPoint;
-import org.aspectj.lang.annotation.Around;
-import org.aspectj.lang.annotation.Aspect;
-import org.aspectj.lang.annotation.Pointcut;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.core.Ordered;
-import org.springframework.core.annotation.Order;
-import org.springframework.dao.ConcurrencyFailureException;
-import org.springframework.dao.TransientDataAccessException;
-import org.springframework.orm.jpa.JpaSystemException;
-import org.springframework.stereotype.Component;
-import org.springframework.transaction.TransactionSystemException;
-import org.springframework.transaction.annotation.Transactional;
-import org.springframework.transaction.support.TransactionSynchronizationManager;
-import org.springframework.util.Assert;
-
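-/**
- * Aspect with an around advice that retries transient concurrency errors with
- * exponential backoff. It is ordered before the underlying transaction advisor,
- * so each retry attempt starts a fresh transaction.
- */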
-@Component
-@Aspect
-@Order(Ordered.LOWEST_PRECEDENCE - 1)
-public class RetryableTransactionAspect {
- protected final Logger logger = LoggerFactory.getLogger(getClass());
-
- @Pointcut("execution(* io.roach..*(..)) && @annotation(transactional)")
- public void anyTransactionBoundaryOperation(Transactional transactional) {
- }
-
- @Around(value = "anyTransactionBoundaryOperation(transactional)",
- argNames = "pjp,transactional")
- public Object retryableOperation(ProceedingJoinPoint pjp, Transactional transactional)
- throws Throwable {
- final int totalRetries = 30;
- int numAttempts = 0;
- AtomicLong backoffMillis = new AtomicLong(150);
-
- Assert.isTrue(!TransactionSynchronizationManager.isActualTransactionActive(), "TX active");
-
- do {
- try {
- numAttempts++;
- return pjp.proceed();
- } catch (TransientDataAccessException | TransactionSystemException | JpaSystemException ex) {
- handleTransientException(ex, numAttempts, totalRetries, pjp, backoffMillis);
- } catch (UndeclaredThrowableException ex) {
- Throwable t = ex.getUndeclaredThrowable();
- if (t instanceof TransientDataAccessException) {
- handleTransientException(t, numAttempts, totalRetries, pjp, backoffMillis);
- } else {
- throw ex;
- }
- }
- } while (numAttempts < totalRetries);
-
- throw new ConcurrencyFailureException("Too many transient errors (" + numAttempts + ") for method ["
- + pjp.getSignature().toLongString() + "]. Giving up!");
- }
-
- private void handleTransientException(Throwable ex, int numAttempts, int totalAttempts,
- ProceedingJoinPoint pjp, AtomicLong backoffMillis) {
- if (logger.isWarnEnabled()) {
- logger.warn("Transient data access exception (" + numAttempts + " of max " + totalAttempts + ") "
- + "detected (retry in " + backoffMillis + " ms) "
- + "in method '" + pjp.getSignature().getDeclaringTypeName() + "." + pjp.getSignature().getName()
- + "': " + ex.getMessage());
- }
- if (backoffMillis.get() >= 0) {
- try {
- Thread.sleep(backoffMillis.get());
- } catch (InterruptedException e) {
- Thread.currentThread().interrupt();
- }
- backoffMillis.set(Math.min((long) (backoffMillis.get() * 1.5), 1500));
- }
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/application.yml b/src/current/_includes/v20.1/app/spring-data-jpa/application.yml
deleted file mode 100644
index 465b74abce3..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/application.yml
+++ /dev/null
@@ -1,30 +0,0 @@
-########################
-# Spring boot properties
-# http://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html
-########################
-
-spring:
- output:
- ansi:
- enabled: ALWAYS
-
- liquibase:
- change-log: classpath:db/changelog-master.xml
- default-schema:
- drop-first: false
- contexts: crdb
- enabled: true
-
- datasource:
- url: jdbc:postgresql://localhost:26257/roach_data?sslmode=disable
- driver-class-name: org.postgresql.Driver
- username: root
- password:
- hikari:
- connection-test-query: SELECT 1
-
- jpa:
- open-in-view: false
- properties:
- hibernate:
- dialect: org.hibernate.dialect.CockroachDB201Dialect
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/changelog-master.xml b/src/current/_includes/v20.1/app/spring-data-jpa/changelog-master.xml
deleted file mode 100644
index 05915165a00..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/changelog-master.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<databaseChangeLog
-        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
-        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
-
-    <changeSet id="1" author="root">
-        <validCheckSum>ANY</validCheckSum>
-        <sqlFile path="create.sql" relativeToChangelogFile="true"/>
-    </changeSet>
-
-    <changeSet id="2" author="root" context="crdb">
-        <insert tableName="account">
-            <column name="id">1</column>
-            <column name="name">Alice</column>
-            <column name="balance" valueNumeric="500.00"/>
-            <column name="type">asset</column>
-        </insert>
-
-        <insert tableName="account">
-            <column name="id">2</column>
-            <column name="name">Bob</column>
-            <column name="balance" valueNumeric="500.00"/>
-            <column name="type">expense</column>
-        </insert>
-
-        <insert tableName="account">
-            <column name="id">3</column>
-            <column name="name">Bobby Tables</column>
-            <column name="balance" valueNumeric="500.00"/>
-            <column name="type">asset</column>
-        </insert>
-
-        <insert tableName="account">
-            <column name="id">4</column>
-            <column name="name">Doris</column>
-            <column name="balance" valueNumeric="500.00"/>
-            <column name="type">expense</column>
-        </insert>
-    </changeSet>
-</databaseChangeLog>
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/spring-data-jpa/create.sql b/src/current/_includes/v20.1/app/spring-data-jpa/create.sql
deleted file mode 100644
index 4b89e03defd..00000000000
--- a/src/current/_includes/v20.1/app/spring-data-jpa/create.sql
+++ /dev/null
@@ -1,17 +0,0 @@
--- DROP TABLE IF EXISTS account cascade;
--- DROP TABLE IF EXISTS databasechangelog cascade;
--- DROP TABLE IF EXISTS databasechangeloglock cascade;
-
-create table account
-(
- id int not null primary key default unique_rowid(),
- balance numeric(19, 2) not null,
- name varchar(128) not null,
- type varchar(25) not null
-);
-
--- insert into account (id,balance,name,type) values
--- (1, 500.00,'Alice','asset'),
--- (2, 500.00,'Bob','expense'),
--- (3, 500.00,'Bobby Tables','asset'),
--- (4, 500.00,'Doris','expense');
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/BasicExample.java b/src/current/_includes/v20.1/app/spring-mybatis/BasicExample.java
deleted file mode 100644
index d461b551955..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/BasicExample.java
+++ /dev/null
@@ -1,77 +0,0 @@
-package com.example.cockroachdemo;
-
-import java.time.LocalTime;
-
-import com.example.cockroachdemo.model.Account;
-import com.example.cockroachdemo.model.BatchResults;
-import com.example.cockroachdemo.service.AccountService;
-
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.boot.CommandLineRunner;
-import org.springframework.context.annotation.Profile;
-import org.springframework.stereotype.Component;
-
-@Component
-@Profile("!test")
-public class BasicExample implements CommandLineRunner {
- @Autowired
- private AccountService accountService;
-
- @Override
- public void run(String... args) throws Exception {
- accountService.createAccountsTable();
- deleteAllAccounts();
- insertAccounts();
- printNumberOfAccounts();
- printBalances();
- transferFunds();
- printBalances();
- bulkInsertRandomAccountData();
- printNumberOfAccounts();
- }
-
- private void deleteAllAccounts() {
- int numDeleted = accountService.deleteAllAccounts();
- System.out.printf("deleteAllAccounts:\n => %s total deleted accounts\n", numDeleted);
- }
-
- private void insertAccounts() {
- Account account1 = new Account();
- account1.setId(1);
- account1.setBalance(1000);
-
- Account account2 = new Account();
- account2.setId(2);
- account2.setBalance(250);
- BatchResults results = accountService.addAccounts(account1, account2);
- System.out.printf("insertAccounts:\n => %s total new accounts in %s batches\n", results.getTotalRowsAffected(), results.getNumberOfBatches());
- }
-
- private void printBalances() {
- int balance1 = accountService.getAccount(1).map(Account::getBalance).orElse(-1);
- int balance2 = accountService.getAccount(2).map(Account::getBalance).orElse(-1);
-
- System.out.printf("printBalances:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n",
- LocalTime.now(), 1, balance1, 2, balance2);
- }
-
- private void printNumberOfAccounts() {
- System.out.printf("printNumberOfAccounts:\n => Number of accounts at time '%s':\n => %s total accounts\n",
- LocalTime.now(), accountService.findCountOfAccounts());
- }
-
- private void transferFunds() {
- int fromAccount = 1;
- int toAccount = 2;
- int transferAmount = 100;
- int transferredAccounts = accountService.transferFunds(fromAccount, toAccount, transferAmount);
- System.out.printf("transferFunds:\n => $%s transferred between accounts %s and %s, %s rows updated\n",
- transferAmount, fromAccount, toAccount, transferredAccounts);
- }
-
- private void bulkInsertRandomAccountData() {
- BatchResults results = accountService.bulkInsertRandomAccountData(500);
- System.out.printf("bulkInsertRandomAccountData:\n => finished, %s total rows inserted in %s batches\n",
- results.getTotalRowsAffected(), results.getNumberOfBatches());
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/CockroachDemoApplication.java b/src/current/_includes/v20.1/app/spring-mybatis/CockroachDemoApplication.java
deleted file mode 100644
index 4f220dcd989..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/CockroachDemoApplication.java
+++ /dev/null
@@ -1,11 +0,0 @@
-package com.example.cockroachdemo;
-
-import org.springframework.boot.SpringApplication;
-import org.springframework.boot.autoconfigure.SpringBootApplication;
-
-@SpringBootApplication
-public class CockroachDemoApplication {
- public static void main(String[] args) {
- SpringApplication.run(CockroachDemoApplication.class, args);
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/MyBatisConfiguration.java b/src/current/_includes/v20.1/app/spring-mybatis/MyBatisConfiguration.java
deleted file mode 100644
index d8777d5016d..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/MyBatisConfiguration.java
+++ /dev/null
@@ -1,51 +0,0 @@
-package com.example.cockroachdemo;
-
-import javax.sql.DataSource;
-
-import org.apache.ibatis.annotations.Mapper;
-import org.apache.ibatis.session.ExecutorType;
-import org.apache.ibatis.session.SqlSessionFactory;
-import org.mybatis.spring.SqlSessionFactoryBean;
-import org.mybatis.spring.SqlSessionTemplate;
-import org.mybatis.spring.annotation.MapperScan;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.context.annotation.Bean;
-import org.springframework.context.annotation.Configuration;
-import org.springframework.context.annotation.Primary;
-
-/**
- * This class configures MyBatis and sets up mappers for injection.
- *
- * When using the Spring Boot Starter, using a class like this is completely optional unless you need to
- * have some mappers use the BATCH executor (as we do in this demo). If you do not have that requirement,
- * then you can remove this class. By default, the MyBatis Spring Boot Starter will find all mappers
- * annotated with @Mapper and will automatically wire your DataSource to the underlying MyBatis
- * infrastructure.
- */
-@Configuration
-@MapperScan(basePackages = "com.example.cockroachdemo.mapper", annotationClass = Mapper.class)
-@MapperScan(basePackages = "com.example.cockroachdemo.batchmapper", annotationClass = Mapper.class,
- sqlSessionTemplateRef = "batchSqlSessionTemplate")
-public class MyBatisConfiguration {
-
- @Autowired
- private DataSource dataSource;
-
- @Bean
- public SqlSessionFactory sqlSessionFactory() throws Exception {
- SqlSessionFactoryBean factory = new SqlSessionFactoryBean();
- factory.setDataSource(dataSource);
- return factory.getObject();
- }
-
- @Bean
- @Primary
- public SqlSessionTemplate sqlSessionTemplate() throws Exception {
- return new SqlSessionTemplate(sqlSessionFactory());
- }
-
- @Bean(name = "batchSqlSessionTemplate")
- public SqlSessionTemplate batchSqlSessionTemplate() throws Exception {
- return new SqlSessionTemplate(sqlSessionFactory(), ExecutorType.BATCH);
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/RetryableTransactionAspect.java b/src/current/_includes/v20.1/app/spring-mybatis/RetryableTransactionAspect.java
deleted file mode 100644
index 943e4647e90..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/RetryableTransactionAspect.java
+++ /dev/null
@@ -1,87 +0,0 @@
-package com.example.cockroachdemo;
-
-import java.lang.reflect.UndeclaredThrowableException;
-import java.util.concurrent.atomic.AtomicLong;
-
-import org.aspectj.lang.ProceedingJoinPoint;
-import org.aspectj.lang.annotation.Around;
-import org.aspectj.lang.annotation.Aspect;
-import org.aspectj.lang.annotation.Pointcut;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.core.Ordered;
-import org.springframework.core.annotation.Order;
-import org.springframework.dao.ConcurrencyFailureException;
-import org.springframework.dao.TransientDataAccessException;
-import org.springframework.stereotype.Component;
-import org.springframework.transaction.TransactionSystemException;
-import org.springframework.transaction.annotation.Transactional;
-import org.springframework.transaction.support.TransactionSynchronizationManager;
-import org.springframework.util.Assert;
-
-/**
- * Aspect with an around advice that intercepts and retries transient concurrency exceptions.
- * Methods matching the pointcut expression (annotated with @Transactional) are retried.
- *
- * This advice needs to run in a non-transactional context, which is before the underlying
- * transaction advisor (@Order ensures that).
- */
-@Component
-@Aspect
-// Before TX advisor
-@Order(Ordered.LOWEST_PRECEDENCE - 1)
-public class RetryableTransactionAspect {
- protected final Logger logger = LoggerFactory.getLogger(getClass());
-
- @Pointcut("@annotation(transactional)")
- public void anyTransactionBoundaryOperation(Transactional transactional) {
- }
-
- @Around(value = "anyTransactionBoundaryOperation(transactional)",
- argNames = "pjp,transactional")
- public Object retryableOperation(ProceedingJoinPoint pjp, Transactional transactional)
- throws Throwable {
- final int totalRetries = 30;
- int numAttempts = 0;
- AtomicLong backoffMillis = new AtomicLong(150);
-
- Assert.isTrue(!TransactionSynchronizationManager.isActualTransactionActive(), "TX active");
-
- do {
- try {
- numAttempts++;
- return pjp.proceed();
- } catch (TransientDataAccessException | TransactionSystemException ex) {
- handleTransientException(ex, numAttempts, totalRetries, pjp, backoffMillis);
- } catch (UndeclaredThrowableException ex) {
- Throwable t = ex.getUndeclaredThrowable();
- if (t instanceof TransientDataAccessException) {
- handleTransientException(t, numAttempts, totalRetries, pjp, backoffMillis);
- } else {
- throw ex;
- }
- }
- } while (numAttempts < totalRetries);
-
- throw new ConcurrencyFailureException("Too many transient errors (" + numAttempts + ") for method ["
- + pjp.getSignature().toLongString() + "]. Giving up!");
- }
-
- private void handleTransientException(Throwable ex, int numAttempts, int totalAttempts,
- ProceedingJoinPoint pjp, AtomicLong backoffMillis) {
- if (logger.isWarnEnabled()) {
- logger.warn("Transient data access exception (" + numAttempts + " of max " + totalAttempts + ") "
- + "detected (retry in " + backoffMillis + " ms) "
- + "in method '" + pjp.getSignature().getDeclaringTypeName() + "." + pjp.getSignature().getName()
- + "': " + ex.getMessage());
- }
- if (backoffMillis.get() >= 0) {
- try {
- Thread.sleep(backoffMillis.get());
- } catch (InterruptedException e) {
- Thread.currentThread().interrupt();
- }
- backoffMillis.set(Math.min((long) (backoffMillis.get() * 1.5), 1500));
- }
- }
-}
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/application.yml b/src/current/_includes/v20.1/app/spring-mybatis/application.yml
deleted file mode 100644
index baca47cd1ef..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/application.yml
+++ /dev/null
@@ -1,5 +0,0 @@
-spring:
- datasource:
- driver-class-name: org.postgresql.Driver
- url: jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=/certs/ca.crt&sslkey=/certs/client.maxroach.key.pk8&sslcert=/certs/client.maxroach.crt
- username: maxroach
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/batchmapper/BatchAccountMapper.java b/src/current/_includes/v20.1/app/spring-mybatis/batchmapper/BatchAccountMapper.java
deleted file mode 100644
index 1115f1ad606..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/batchmapper/BatchAccountMapper.java
+++ /dev/null
@@ -1,19 +0,0 @@
-package com.example.cockroachdemo.batchmapper;
-
-import java.util.List;
-
-import com.example.cockroachdemo.model.Account;
-
-import org.apache.ibatis.annotations.Flush;
-import org.apache.ibatis.annotations.Insert;
-import org.apache.ibatis.annotations.Mapper;
-import org.apache.ibatis.executor.BatchResult;
-
-@Mapper
-public interface BatchAccountMapper {
- @Insert("upsert into accounts(id, balance) values(#{id}, #{balance})")
- void insertAccount(Account account);
-
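- // Sends the statements buffered by the BATCH executor and returns their results.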
- @Flush
- List<BatchResult> flush();
-}
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/mapper/AccountMapper.java b/src/current/_includes/v20.1/app/spring-mybatis/mapper/AccountMapper.java
deleted file mode 100644
index e64a0bc76ac..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/mapper/AccountMapper.java
+++ /dev/null
@@ -1,40 +0,0 @@
-package com.example.cockroachdemo.mapper;
-
-import java.util.List;
-import java.util.Optional;
-
-import com.example.cockroachdemo.model.Account;
-
-import org.apache.ibatis.annotations.Delete;
-import org.apache.ibatis.annotations.Mapper;
-import org.apache.ibatis.annotations.Param;
-import org.apache.ibatis.annotations.Select;
-import org.apache.ibatis.annotations.Update;
-
-@Mapper
-public interface AccountMapper {
- @Delete("delete from accounts")
- int deleteAllAccounts();
-
- @Update("update accounts set balance=#{balance} where id=${id}")
- void updateAccount(Account account);
-
- @Select("select id, balance from accounts where id=#{id}")
- Optional<Account> findAccountById(int id);
-
- @Select("select id, balance from accounts order by id")
- List<Account> findAllAccounts();
-
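- // Move funds between two accounts with a single self-referencing upsert, so both rows change in one statement.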
- @Update({
- "upsert into accounts (id, balance) values",
- "(#{fromId}, ((select balance from accounts where id = #{fromId}) - #{amount})),",
- "(#{toId}, ((select balance from accounts where id = #{toId}) + #{amount}))",
- })
- int transfer(@Param("fromId") int fromId, @Param("toId") int toId, @Param("amount") int amount);
-
- @Update("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))")
- void createAccountsTable();
-
- @Select("select count(*) from accounts")
- Long findCountOfAccounts();
-}
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/model/Account.java b/src/current/_includes/v20.1/app/spring-mybatis/model/Account.java
deleted file mode 100644
index 57951af7bc3..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/model/Account.java
+++ /dev/null
@@ -1,22 +0,0 @@
-package com.example.cockroachdemo.model;
-
-public class Account {
- private int id;
- private int balance;
-
- public int getId() {
- return id;
- }
-
- public void setId(int id) {
- this.id = id;
- }
-
- public int getBalance() {
- return balance;
- }
-
- public void setBalance(int balance) {
- this.balance = balance;
- }
-}
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/model/BatchResults.java b/src/current/_includes/v20.1/app/spring-mybatis/model/BatchResults.java
deleted file mode 100644
index b60f71005f8..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/model/BatchResults.java
+++ /dev/null
@@ -1,19 +0,0 @@
-package com.example.cockroachdemo.model;
-
-public class BatchResults {
- private int numberOfBatches;
- private int totalRowsAffected;
-
- public BatchResults(int numberOfBatches, int totalRowsAffected) {
- this.numberOfBatches = numberOfBatches;
- this.totalRowsAffected = totalRowsAffected;
- }
-
- public int getNumberOfBatches() {
- return numberOfBatches;
- }
-
- public int getTotalRowsAffected() {
- return totalRowsAffected;
- }
-}
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/service/AccountService.java b/src/current/_includes/v20.1/app/spring-mybatis/service/AccountService.java
deleted file mode 100644
index b257b793c78..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/service/AccountService.java
+++ /dev/null
@@ -1,16 +0,0 @@
-package com.example.cockroachdemo.service;
-
-import java.util.Optional;
-
-import com.example.cockroachdemo.model.Account;
-import com.example.cockroachdemo.model.BatchResults;
-
-public interface AccountService {
- void createAccountsTable();
- Optional<Account> getAccount(int id);
- BatchResults bulkInsertRandomAccountData(int numberToInsert);
- BatchResults addAccounts(Account...accounts);
- int transferFunds(int fromAccount, int toAccount, int amount);
- long findCountOfAccounts();
- int deleteAllAccounts();
-}
diff --git a/src/current/_includes/v20.1/app/spring-mybatis/service/MyBatisAccountService.java b/src/current/_includes/v20.1/app/spring-mybatis/service/MyBatisAccountService.java
deleted file mode 100644
index 8085f0ac358..00000000000
--- a/src/current/_includes/v20.1/app/spring-mybatis/service/MyBatisAccountService.java
+++ /dev/null
@@ -1,102 +0,0 @@
-package com.example.cockroachdemo.service;
-
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import java.util.Optional;
-import java.util.Random;
-
-import com.example.cockroachdemo.batchmapper.BatchAccountMapper;
-import com.example.cockroachdemo.mapper.AccountMapper;
-import com.example.cockroachdemo.model.Account;
-import com.example.cockroachdemo.model.BatchResults;
-
-import org.apache.ibatis.executor.BatchResult;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.stereotype.Service;
-import org.springframework.transaction.annotation.Propagation;
-import org.springframework.transaction.annotation.Transactional;
-
-@Service
-public class MyBatisAccountService implements AccountService {
- @Autowired
- private AccountMapper mapper;
- @Autowired
- private BatchAccountMapper batchMapper;
- private Random random = new Random();
-
- @Override
- @Transactional(propagation = Propagation.REQUIRES_NEW)
- public void createAccountsTable() {
- mapper.createAccountsTable();
- }
-
- @Override
- @Transactional(propagation = Propagation.REQUIRES_NEW)
- public BatchResults addAccounts(Account...accounts) {
- for (Account account : accounts) {
- batchMapper.insertAccount(account);
- }
- List<BatchResult> results = batchMapper.flush();
-
- return new BatchResults(1, calculateRowsAffectedBySingleBatch(results));
- }
-
- private int calculateRowsAffectedBySingleBatch(List<BatchResult> results) {
- return results.stream()
- .map(BatchResult::getUpdateCounts)
- .flatMapToInt(Arrays::stream)
- .sum();
- }
-
- @Override
- @Transactional(propagation = Propagation.REQUIRES_NEW)
- public BatchResults bulkInsertRandomAccountData(int numberToInsert) {
- int BATCH_SIZE = 128;
- List<List<BatchResult>> results = new ArrayList<>();
-
- for (int i = 0; i < numberToInsert; i++) {
- Account account = new Account();
- account.setId(random.nextInt(1000000000));
- account.setBalance(random.nextInt(1000000000));
- batchMapper.insertAccount(account);
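- // Send the buffered inserts to the server once BATCH_SIZE rows have accumulated.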
- if ((i + 1) % BATCH_SIZE == 0) {
- results.add(batchMapper.flush());
- }
- }
- if (numberToInsert % BATCH_SIZE != 0) {
- results.add(batchMapper.flush());
- }
- return new BatchResults(results.size(), calculateRowsAffectedByMultipleBatches(results));
- }
-
- private int calculateRowsAffectedByMultipleBatches(List<List<BatchResult>> results) {
- return results.stream()
- .mapToInt(this::calculateRowsAffectedBySingleBatch)
- .sum();
- }
-
- @Override
- @Transactional(propagation = Propagation.REQUIRES_NEW)
- public Optional getAccount(int id) {
- return mapper.findAccountById(id);
- }
-
- @Override
- @Transactional(propagation = Propagation.REQUIRES_NEW)
- public int transferFunds(int fromId, int toId, int amount) {
- return mapper.transfer(fromId, toId, amount);
- }
-
- @Override
- @Transactional(propagation = Propagation.REQUIRES_NEW)
- public long findCountOfAccounts() {
- return mapper.findCountOfAccounts();
- }
-
- @Override
- @Transactional(propagation = Propagation.REQUIRES_NEW)
- public int deleteAllAccounts() {
- return mapper.deleteAllAccounts();
- }
-}
diff --git a/src/current/_includes/v20.1/app/start-cockroachdb.md b/src/current/_includes/v20.1/app/start-cockroachdb.md
deleted file mode 100644
index cb37e315246..00000000000
--- a/src/current/_includes/v20.1/app/start-cockroachdb.md
+++ /dev/null
@@ -1,89 +0,0 @@
-{% comment %}
-Choose whether to run a temporary local cluster or a free CockroachDB cluster on CockroachCloud. The instructions below will adjust accordingly.
-
-
-
-
-
-
-{% endcomment %}
-
-
-
-1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html).
-1. Run the [`cockroach demo`](cockroach-demo.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach demo \
- --empty
- ~~~
-
- This starts a temporary, in-memory cluster and opens an interactive SQL shell to the cluster.
-1. Take note of the `(sql/tcp)` connection string in the SQL shell welcome text:
-
- ~~~
- # Connection parameters:
- # (console) http://127.0.0.1:61009
- # (sql) postgres://root:admin@?host=%2Fvar%2Ffolders%2Fk1%2Fr048yqpd7_9337rgxm9vb_gw0000gn%2FT%2Fdemo255013852&port=26257
- # (sql/tcp) postgres://root:admin@127.0.0.1:61011?sslmode=require
- ~~~
-
- You will use it in your application code later.
-
-
-
-{% comment %}
-
-
-### Create a free cluster
-
-1. If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account.
-1. [Log in](https://cockroachlabs.cloud/) to your CockroachDB {{ site.data.products.cloud }} account.
-1. On the **Overview** page, click **Create Cluster**.
-1. On the **Create new cluster** page, for **Cloud provider**, select **Google Cloud**.
-1. For **Regions & nodes**, use the default selection of `California (us-west)` region and 1 node.
-1. Under **Hardware per node**, select 2vCPU for **Compute** and a 35 GiB disk for **Storage**.
-1. Name the cluster. The cluster name must be 6-20 characters in length, and can include lowercase letters, numbers, and dashes (but no leading or trailing dashes).
-1. Click **Next**.
-1. On the **Summary** page, enter your credit card details.
-
- {{site.data.alerts.callout_info}}
- You will not be charged until after your free trial expires in 30 days.
- {{site.data.alerts.end}}
-
-1. In the **Trial Code** field, enter `CRDB30`. Click **Apply**.
-1. Click **Create cluster**.
-
-Your cluster will be created in approximately 20-30 minutes. Watch this [Getting Started with CockroachCloud](https://youtu.be/3hxSBeE-1tM) video while you wait.
-
-Once your cluster is created, you will be redirected to the **Cluster Overview** page.
-
-### Create a SQL user
-
-1. In the left navigation bar, click **SQL Users**.
-1. Click **Add User**. The **Add User** dialog displays.
-1. Enter a **Username** and **Password**.
-1. Click **Save**.
-
-### Authorize your network
-
-1. In the left navigation bar, click **Networking**.
-1. Click **Add Network**. The **Add Network** dialog displays.
-1. From the **Network** dropdown, select **Current Network** to auto-populate your local machine's IP address.
-1. To allow the network to access the cluster's Admin UI and to use the CockroachDB client to access the databases, select the **Admin UI to monitor the cluster** and **CockroachDB Client to access the databases** checkboxes.
-1. Click **Apply**.
-
-### Get the connection string
-
-1. In the top-right corner of the Console, click the **Connect** button. The **Connect** dialog displays.
-1. From the **User** dropdown, select the SQL user you created [earlier](#create-a-sql-user).
-1. Verify that the `us-west2 GCP` region and `default_db` database are selected.
-1. Click **Continue**. The **Connect** tab is displayed.
-1. Click **Connection string** and take note of the connection string for your cluster. You will use it in your application code later.
-1. Create a `certs` directory on your local workstation.
-1. Click the name of the `ca.crt` file to download the CA certificate to your local machine.
-1. Move the downloaded `ca.crt` file to the `certs` directory.
-
-
-{% endcomment %}
diff --git a/src/current/_includes/v20.1/app/txn-sample-pgx.go b/src/current/_includes/v20.1/app/txn-sample-pgx.go
deleted file mode 100644
index e13856271ef..00000000000
--- a/src/current/_includes/v20.1/app/txn-sample-pgx.go
+++ /dev/null
@@ -1,60 +0,0 @@
-package main
-
-import (
- "context"
- "fmt"
- "log"
-
- "github.com/cockroachdb/cockroach-go/crdb/crdbpgx"
- "github.com/jackc/pgx/v4"
-)
-
-func transferFunds(ctx context.Context, tx pgx.Tx, from int, to int, amount int) error {
- // Read the balance.
- var fromBalance int
- if err := tx.QueryRow(ctx,
- "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
- return err
- }
-
- if fromBalance < amount {
- return fmt.Errorf("insufficient funds")
- }
-
- // Perform the transfer.
- if _, err := tx.Exec(ctx,
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
- return err
- }
- if _, err := tx.Exec(ctx,
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- config, err := pgx.ParseConfig("postgresql://maxroach@localhost:26257/bank?sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
- if err != nil {
- log.Fatal("error configuring the database: ", err)
- }
-
- config.TLSConfig.ServerName = "localhost"
-
- // Connect to the "bank" database.
- conn, err := pgx.ConnectConfig(context.Background(), config)
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer conn.Close(context.Background())
-
- // Run a transfer in a transaction.
- err = crdbpgx.ExecuteTx(context.Background(), conn, pgx.TxOptions{}, func(tx pgx.Tx) error {
- return transferFunds(context.Background(), tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
- })
- if err == nil {
- fmt.Println("Success")
- } else {
- log.Fatal("error: ", err)
- }
-}
diff --git a/src/current/_includes/v20.1/app/txn-sample.clj b/src/current/_includes/v20.1/app/txn-sample.clj
deleted file mode 100644
index c093078ebc4..00000000000
--- a/src/current/_includes/v20.1/app/txn-sample.clj
+++ /dev/null
@@ -1,48 +0,0 @@
-(ns test.test
- (:require [clojure.java.jdbc :as j]
- [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:dbtype "postgresql"
- :dbname "bank"
- :host "localhost"
- :port "26257"
- :ssl true
- :sslmode "require"
- :sslcert "certs/client.maxroach.crt"
- :sslkey "certs/client.maxroach.key.pk8"
- :user "maxroach"})
-
-;; The transaction we want to run.
-(defn transferFunds
- [txn from to amount]
-
- ;; Check the current balance.
- (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from])
- (mapv :balance)
- (first))]
- (when (< fromBalance amount)
- (throw (Exception. "Insufficient funds"))))
-
- ;; Perform the transfer.
- (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)])
- (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)]))
-
-(defn test-txn []
- ;; Connect to the cluster and run the code below with
- ;; the connection object bound to 'conn'.
- (j/with-db-connection [conn db-spec]
-
- ;; Execute the transaction within an automatic retry block;
- ;; the transaction object is bound to 'txn'.
- (util/with-txn-retry [txn conn]
- (transferFunds txn 1 2 100))
-
- ;; Execute a query outside of an automatic retry block.
- (println "Balances after transfer:")
- (->> (j/query conn ["SELECT id, balance FROM accounts"])
- (map println)
- (doall))))
-
-(defn -main [& args]
- (test-txn))
diff --git a/src/current/_includes/v20.1/app/txn-sample.cpp b/src/current/_includes/v20.1/app/txn-sample.cpp
deleted file mode 100644
index 728e4a2e5cc..00000000000
--- a/src/current/_includes/v20.1/app/txn-sample.cpp
+++ /dev/null
@@ -1,74 +0,0 @@
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-void transferFunds(
- pqxx::dbtransaction *tx, int from, int to, int amount) {
- // Read the balance.
- pqxx::result r = tx->exec(
- "SELECT balance FROM accounts WHERE id = " + to_string(from));
- assert(r.size() == 1);
- int fromBalance = r[0][0].as<int>();
-
- if (fromBalance < amount) {
- throw domain_error("insufficient funds");
- }
-
- // Perform the transfer.
- tx->exec("UPDATE accounts SET balance = balance - "
- + to_string(amount) + " WHERE id = " + to_string(from));
- tx->exec("UPDATE accounts SET balance = balance + "
- + to_string(amount) + " WHERE id = " + to_string(to));
-}
-
-
-// ExecuteTx runs fn inside a transaction and retries it as needed.
-// On non-retryable failures, the transaction is aborted and rolled
-// back; on success, the transaction is committed.
-//
-// For more information about CockroachDB's transaction model see
-// https://cockroachlabs.com/docs/transactions.html.
-//
-// NOTE: the supplied exec closure should not have external side
-// effects beyond changes to the database.
-void executeTx(
- pqxx::connection *c, function<void (pqxx::dbtransaction *)> fn) {
- pqxx::work tx(*c);
- while (true) {
- try {
- pqxx::subtransaction s(tx, "cockroach_restart");
- fn(&s);
- s.commit();
- break;
- } catch (const pqxx::pqxx_exception& e) {
- // Swallow "transaction restart" errors; the transaction will be retried.
- // Unfortunately libpqxx doesn't give us access to the error code, so we
- // do string matching to identify retryable errors.
- if (string(e.base().what()).find("restart transaction:") == string::npos) {
- throw;
- }
- }
- }
- tx.commit();
-}
-
-int main() {
- try {
- pqxx::connection c("dbname=bank user=maxroach sslmode=require sslkey=certs/client.maxroach.key sslcert=certs/client.maxroach.crt port=26257 host=localhost");
-
- executeTx(&c, [](pqxx::dbtransaction *tx) {
- transferFunds(tx, 1, 2, 100);
- });
- }
- catch (const exception &e) {
- cerr << e.what() << endl;
- return 1;
- }
- cout << "Success" << endl;
- return 0;
-}
diff --git a/src/current/_includes/v20.1/app/txn-sample.cs b/src/current/_includes/v20.1/app/txn-sample.cs
deleted file mode 100644
index 4815bf7e61b..00000000000
--- a/src/current/_includes/v20.1/app/txn-sample.cs
+++ /dev/null
@@ -1,168 +0,0 @@
-using System;
-using System.Data;
-using System.Security.Cryptography.X509Certificates;
-using System.Net.Security;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.SslMode = SslMode.Require;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- TxnSample(connStringBuilder.ConnectionString);
- }
-
- static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount)
- {
- int balance = 0;
- using (var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran))
- using (var reader = cmd.ExecuteReader())
- {
- if (reader.Read())
- {
- balance = reader.GetInt32(0);
- }
- else
- {
- throw new DataException(String.Format("Account id={0} not found", from));
- }
- }
- if (balance < amount)
- {
- throw new DataException(String.Format("Insufficient balance in account id={0}", from));
- }
- using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- }
-
- static void TxnSample(string connString)
- {
- using (var conn = new NpgsqlConnection(connString))
- {
- conn.ProvideClientCertificatesCallback += ProvideClientCertificatesCallback;
- conn.UserCertificateValidationCallback += UserCertificateValidationCallback;
-
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using (var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
-
- try
- {
- using (var tran = conn.BeginTransaction())
- {
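- // Set the savepoint that the retry loop rolls back to on serialization failures (40001).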
- tran.Save("cockroach_restart");
- while (true)
- {
- try
- {
- TransferFunds(conn, tran, 1, 2, 100);
- tran.Commit();
- break;
- }
- catch (NpgsqlException e)
- {
- // Check if the error code indicates a SERIALIZATION_FAILURE.
- if (e.ErrorCode == 40001)
- {
- // Signal the database that we will attempt a retry.
- tran.Rollback("cockroach_restart");
- }
- else
- {
- throw;
- }
- }
- }
- }
- }
- catch (DataException e)
- {
- Console.WriteLine(e.Message);
- }
-
- // Now printout the results.
- Console.WriteLine("Final balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
-
- static void ProvideClientCertificatesCallback(X509CertificateCollection clientCerts)
- {
- // To be able to add a certificate with a private key included, we must convert it to
- // a PKCS #12 format. The following openssl command does this:
- // openssl pkcs12 -inkey client.maxroach.key -in client.maxroach.crt -export -out client.maxroach.pfx
- // As of 2018-12-10, you need to provide a password for this to work on macOS.
- // See https://github.com/dotnet/corefx/issues/24225
- clientCerts.Add(new X509Certificate2("client.maxroach.pfx", "pass"));
- }
-
- // By default, .Net does all of its certificate verification using the system certificate store.
- // This callback is necessary to validate the server certificate against a CA certificate file.
- static bool UserCertificateValidationCallback(object sender, X509Certificate certificate, X509Chain defaultChain, SslPolicyErrors defaultErrors)
- {
- X509Certificate2 caCert = new X509Certificate2("ca.crt");
- X509Chain caCertChain = new X509Chain();
- caCertChain.ChainPolicy = new X509ChainPolicy()
- {
- RevocationMode = X509RevocationMode.NoCheck,
- RevocationFlag = X509RevocationFlag.EntireChain
- };
- caCertChain.ChainPolicy.ExtraStore.Add(caCert);
-
- X509Certificate2 serverCert = new X509Certificate2(certificate);
-
- caCertChain.Build(serverCert);
- if (caCertChain.ChainStatus.Length == 0)
- {
- // No errors
- return true;
- }
-
- foreach (X509ChainStatus status in caCertChain.ChainStatus)
- {
- // Check if we got any errors other than UntrustedRoot (which we will always get if we do not install the CA cert to the system store)
- if (status.Status != X509ChainStatusFlags.UntrustedRoot)
- {
- return false;
- }
- }
- return true;
- }
- }
-}
diff --git a/src/current/_includes/v20.1/app/txn-sample.go b/src/current/_includes/v20.1/app/txn-sample.go
deleted file mode 100644
index fc15275abca..00000000000
--- a/src/current/_includes/v20.1/app/txn-sample.go
+++ /dev/null
@@ -1,53 +0,0 @@
-package main
-
-import (
- "context"
- "database/sql"
- "fmt"
- "log"
-
- "github.com/cockroachdb/cockroach-go/crdb"
-)
-
-func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
- // Read the balance.
- var fromBalance int
- if err := tx.QueryRow(
- "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
- return err
- }
-
- if fromBalance < amount {
- return fmt.Errorf("insufficient funds")
- }
-
- // Perform the transfer.
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
- return err
- }
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- db, err := sql.Open("postgres",
- "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer db.Close()
-
- // Run a transfer in a transaction.
- err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
- return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
- })
- if err == nil {
- fmt.Println("Success")
- } else {
- log.Fatal("error: ", err)
- }
-}
diff --git a/src/current/_includes/v20.1/app/txn-sample.php b/src/current/_includes/v20.1/app/txn-sample.php
deleted file mode 100644
index 363dbcd73cd..00000000000
--- a/src/current/_includes/v20.1/app/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-<?php
-
-function transferMoney($dbh, $from, $to, $amount) {
- try {
- $dbh->beginTransaction();
- // This savepoint allows us to retry our transaction.
- $dbh->exec("SAVEPOINT cockroach_restart");
- } catch (Exception $e) {
- throw $e;
- }
-
- while (true) {
- try {
- $stmt = $dbh->prepare(
- 'UPDATE accounts SET balance = balance + :deposit ' .
- 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
-
- // First, withdraw the money from the old account (if possible).
- $stmt->bindValue(':account', $from, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "source account does not exist or is underfunded\r\n";
- return;
- }
-
- // Next, deposit into the new account (if it exists).
- $stmt->bindValue(':account', $to, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "destination account does not exist\r\n";
- return;
- }
-
- // Attempt to release the savepoint (which is really the commit).
- $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
- $dbh->commit();
- return;
- } catch (PDOException $e) {
- if ($e->getCode() != '40001') {
- // Non-recoverable error. Rollback and bubble error up the chain.
- $dbh->rollBack();
- throw $e;
- } else {
- // Cockroach transaction retry code. Rollback to the savepoint and
- // restart.
- $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
- }
- }
- }
-}
-
-try {
- $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt',
- 'maxroach', null, array(
- PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- ));
-
- transferMoney($dbh, 1, 2, 10);
-
- print "Account balances after transfer:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v20.1/app/txn-sample.rb b/src/current/_includes/v20.1/app/txn-sample.rb
deleted file mode 100644
index 1c9059775fc..00000000000
--- a/src/current/_includes/v20.1/app/txn-sample.rb
+++ /dev/null
@@ -1,54 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn)
- conn.transaction do |txn|
- txn.exec('SAVEPOINT cockroach_restart')
- while
- begin
- # Attempt the work.
- yield txn
-
- # If we reach this point, commit.
- txn.exec('RELEASE SAVEPOINT cockroach_restart')
- break
- rescue PG::TRSerializationFailure
- txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
- end
- end
- end
-end
-
-def transfer_funds(txn, from, to, amount)
- txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
- res.each do |row|
- raise 'insufficient funds' if Integer(row['balance']) < amount
- end
- end
- txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
- txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
-end
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
-
- # These are the certificate files created in the previous step
- sslrootcert: 'certs/ca.crt',
- sslkey: 'certs/client.maxroach.key',
- sslcert: 'certs/client.maxroach.crt'
-)
-
-run_transaction(conn) do |txn|
- transfer_funds(txn, 1, 2, 100)
-end
-
-# Close the database connection.
-conn.close()
diff --git a/src/current/_includes/v20.1/app/txn-sample.rs b/src/current/_includes/v20.1/app/txn-sample.rs
deleted file mode 100644
index c8e099b89e6..00000000000
--- a/src/current/_includes/v20.1/app/txn-sample.rs
+++ /dev/null
@@ -1,73 +0,0 @@
-use openssl::error::ErrorStack;
-use openssl::ssl::{SslConnector, SslFiletype, SslMethod};
-use postgres::{error::SqlState, Client, Error, Transaction};
-use postgres_openssl::MakeTlsConnector;
-
-/// Runs op inside a transaction and retries it as needed.
-/// On non-retryable failures, the transaction is aborted and
-/// rolled back; on success, the transaction is committed.
-fn execute_txn<T, F>(client: &mut Client, op: F) -> Result<T, Error>
-where
- F: Fn(&mut Transaction) -> Result<T, Error>,
-{
- let mut txn = client.transaction()?;
- loop {
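- // Retry the closure inside a savepoint until it no longer fails with a serialization error.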
- let mut sp = txn.savepoint("cockroach_restart")?;
- match op(&mut sp).and_then(|t| sp.commit().map(|_| t)) {
- Err(ref err)
- if err
- .code()
- .map(|e| *e == SqlState::T_R_SERIALIZATION_FAILURE)
- .unwrap_or(false) => {}
- r => break r,
- }
- }
- .and_then(|t| txn.commit().map(|_| t))
-}
-
-fn transfer_funds(txn: &mut Transaction, from: i64, to: i64, amount: i64) -> Result<(), Error> {
- // Read the balance.
- let from_balance: i64 = txn
- .query_one("SELECT balance FROM accounts WHERE id = $1", &[&from])?
- .get(0);
-
- assert!(from_balance >= amount);
-
- // Perform the transfer.
- txn.execute(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
- &[&amount, &from],
- )?;
- txn.execute(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
- &[&amount, &to],
- )?;
- Ok(())
-}
-
-fn ssl_config() -> Result<MakeTlsConnector, ErrorStack> {
- let mut builder = SslConnector::builder(SslMethod::tls())?;
- builder.set_ca_file("certs/ca.crt")?;
- builder.set_certificate_chain_file("certs/client.maxroach.crt")?;
- builder.set_private_key_file("certs/client.maxroach.key", SslFiletype::PEM)?;
- Ok(MakeTlsConnector::new(builder.build()))
-}
-
-fn main() {
- let connector = ssl_config().unwrap();
- let mut client =
- Client::connect("postgresql://maxroach@localhost:26257/bank", connector).unwrap();
-
- // Run a transfer in a transaction.
- execute_txn(&mut client, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
-
- // Check account balances after the transaction.
- for row in &client
- .query("SELECT id, balance FROM accounts", &[])
- .unwrap()
- {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v20.1/app/upperdb-basic-sample/main.go b/src/current/_includes/v20.1/app/upperdb-basic-sample/main.go
deleted file mode 100644
index 3e838fe43e2..00000000000
--- a/src/current/_includes/v20.1/app/upperdb-basic-sample/main.go
+++ /dev/null
@@ -1,187 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
- "time"
-
- "github.com/upper/db/v4"
- "github.com/upper/db/v4/adapter/cockroachdb"
-)
-
-// The settings variable stores connection details.
-var settings = cockroachdb.ConnectionURL{
- Host: "localhost",
- Database: "bank",
- User: "maxroach",
- Options: map[string]string{
- // Secure node.
- "sslrootcert": "certs/ca.crt",
- "sslkey": "certs/client.maxroach.key",
- "sslcert": "certs/client.maxroach.crt",
- },
-}
-
-// Accounts is a handy way to represent a collection.
-func Accounts(sess db.Session) db.Store {
- return sess.Collection("accounts")
-}
-
-// Account is used to represent a single record in the "accounts" table.
-type Account struct {
- ID uint64 `db:"id,omitempty"`
- Balance int64 `db:"balance"`
-}
-
-// Store is required in order to create a relation between the Account
-// struct and the "accounts" table.
-func (a *Account) Store(sess db.Session) db.Store {
- return Accounts(sess)
-}
-
-// createTables creates all the tables that are necessary to run this example.
-func createTables(sess db.Session) error {
- _, err := sess.SQL().Exec(`
- CREATE TABLE IF NOT EXISTS accounts (
- ID SERIAL PRIMARY KEY,
- balance INT
- )
- `)
- if err != nil {
- return err
- }
- return nil
-}
-
-// crdbForceRetry can be used to simulate a transaction error and
-// demonstrate upper/db's ability to retry the transaction automatically.
-//
-// By default, upper/db will retry the transaction five times. If you want
-// to change this number, use sess.SetMaxTransactionRetries(n).
-//
-// This is only used for demonstration purposes and not intended
-// for production code.
-func crdbForceRetry(sess db.Session) error {
- var err error
-
- // The first statement in a transaction can be retried transparently on the
- // server, so we need to add a placeholder statement so that our
- // force_retry() statement isn't the first one.
- _, err = sess.SQL().Exec(`SELECT 1`)
- if err != nil {
- return err
- }
-
- // If force_retry is called during the specified interval from the beginning
- // of the transaction it returns a retryable error. If not, 0 is returned
- // instead of an error.
- _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`)
- if err != nil {
- return err
- }
-
- return nil
-}
-
-func main() {
- // Connect to the local CockroachDB node.
- sess, err := cockroachdb.Open(settings)
- if err != nil {
- log.Fatal("cockroachdb.Open: ", err)
- }
- defer sess.Close()
-
- // Adjust this number to fit your specific needs (set to 5 by default):
- // sess.SetMaxTransactionRetries(10)
-
- // Create the "accounts" table.
- if err = createTables(sess); err != nil {
- log.Fatal("createTables: ", err)
- }
-
- // Delete all the previous items in the "accounts" table.
- err = Accounts(sess).Truncate()
- if err != nil {
- log.Fatal("Truncate: ", err)
- }
-
- // Create a new account with a balance of 1000.
- account1 := Account{Balance: 1000}
- err = Accounts(sess).InsertReturning(&account1)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Create a new account with a balance of 250.
- account2 := Account{Balance: 250}
- err = Accounts(sess).InsertReturning(&account2)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Printing records
- printRecords(sess)
-
- // Change the balance of the first account.
- account1.Balance = 500
- err = sess.Save(&account1)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Change the balance of the second account.
- account2.Balance = 999
- err = sess.Save(&account2)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Printing records
- printRecords(sess)
-
- // Delete the first record.
- err = sess.Delete(&account1)
- if err != nil {
- log.Fatal("Delete: ", err)
- }
-
- startTime := time.Now()
-
- // Add a couple of new records within a transaction.
- err = sess.Tx(func(tx db.Session) error {
- var err error
-
- if err = tx.Save(&Account{Balance: 887}); err != nil {
- return err
- }
-
- if time.Since(startTime) < time.Second {
- // Inject a retryable error during the first second of the transaction.
- if err = crdbForceRetry(tx); err != nil {
- return err
- }
- }
-
- if err = tx.Save(&Account{Balance: 342}); err != nil {
- return err
- }
-
- return nil
- })
- if err != nil {
- log.Fatal("Could not commit transaction: ", err)
- }
-
- // Printing records
- printRecords(sess)
-}
-
-func printRecords(sess db.Session) {
- accounts := []Account{}
- err := Accounts(sess).Find().All(&accounts)
- if err != nil {
- log.Fatal("Find: ", err)
- }
- log.Printf("Balances:")
- for i := range accounts {
- fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance)
- }
-}
diff --git a/src/current/_includes/v20.1/app/util.clj b/src/current/_includes/v20.1/app/util.clj
deleted file mode 100644
index d040affe794..00000000000
--- a/src/current/_includes/v20.1/app/util.clj
+++ /dev/null
@@ -1,38 +0,0 @@
-(ns test.util
- (:require [clojure.java.jdbc :as j]
- [clojure.walk :as walk]))
-
-(defn txn-restart-err?
- "Takes an exception and returns true if it is a CockroachDB retry error."
- [e]
- (when-let [m (.getMessage e)]
- (condp instance? e
- java.sql.BatchUpdateException
- (and (re-find #"getNextExc" m)
- (txn-restart-err? (.getNextException e)))
-
- org.postgresql.util.PSQLException
- (= (.getSQLState e) "40001") ; 40001 is the code returned by CockroachDB retry errors.
-
- false)))
-
-;; Wrapper for a transaction.
-;; This automatically invokes the body again as long as the database server
-;; asks for the transaction to be retried.
-
-(defmacro with-txn-retry
- "Wrap an evaluation within a CockroachDB retry block."
- [[txn c] & body]
- `(j/with-db-transaction [~txn ~c]
- (loop []
- (j/execute! ~txn ["savepoint cockroach_restart"])
- (let [res# (try (let [r# (do ~@body)]
- {:ok r#})
- (catch java.sql.SQLException e#
- (if (txn-restart-err? e#)
- {:retry true}
- (throw e#))))]
- (if (:retry res#)
- (do (j/execute! ~txn ["rollback to savepoint cockroach_restart"])
- (recur))
- (:ok res#))))))
diff --git a/src/current/_includes/v20.1/backups/advanced-examples-list.md b/src/current/_includes/v20.1/backups/advanced-examples-list.md
deleted file mode 100644
index 45029e6676e..00000000000
--- a/src/current/_includes/v20.1/backups/advanced-examples-list.md
+++ /dev/null
@@ -1,9 +0,0 @@
-For examples of advanced `BACKUP` and `RESTORE` use cases, see [Back up and Restore Data - Advanced Options](backup-and-restore.html). Advanced examples include:
-
-- [Incremental backups with a specified destination](backup-and-restore.html)
-- [Backup with revision history and point-in-time restore](backup-and-restore.html)
-- [Locality-aware backup and restore](backup-and-restore.html)
-- New in v20.1: [Encrypted backup and restore](backup-and-restore.html)
-- [Restore into a different database](backup-and-restore.html)
-- [Remove the foreign key before restore](backup-and-restore.html)
-- [Restoring users from `system.users` backup](backup-and-restore.html)
diff --git a/src/current/_includes/v20.1/backups/encrypted-backup-description.md b/src/current/_includes/v20.1/backups/encrypted-backup-description.md
deleted file mode 100644
index 0a56b865bd9..00000000000
--- a/src/current/_includes/v20.1/backups/encrypted-backup-description.md
+++ /dev/null
@@ -1,11 +0,0 @@
-New in v20.1: You can encrypt full or incremental backups by using the [`encryption_passphrase` option](backup.html#with-encryption-passphrase). Files written by the backup (including `BACKUP` manifests and data files) are encrypted using the specified passphrase to derive a key. To restore the encrypted backup, the same `encryption_passphrase` option (with the same passphrase) must be included in the [`RESTORE`](restore.html) statement.
-
-When used with [incremental backups](backup.html#incremental-backups), the [`encryption_passphrase` option](backup.html#with-encryption-passphrase) is applied to all the [backup file URLs](backup.html#backup-file-urls), which means the same passphrase must be used when appending another incremental backup to an existing backup. Similarly, when used with [locality-aware backups](backup-and-restore.html), the passphrase provided is applied to files in all localities.
-
-Encryption is done using [AES-256-GCM](https://en.wikipedia.org/wiki/Galois/Counter_Mode), and GCM is used to both encrypt and authenticate the files. A random [salt](https://en.wikipedia.org/wiki/Salt_(cryptography)) is used to derive a once-per-backup [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) key from the specified passphrase, and then a random [initialization vector](https://en.wikipedia.org/wiki/Initialization_vector) is used per-file. CockroachDB uses [PBKDF2](https://en.wikipedia.org/wiki/PBKDF2) with 64,000 iterations for the key derivation.
-
-{{site.data.alerts.callout_info}}
-`BACKUP` and `RESTORE` will use more memory when using encryption, as both the plain-text and cipher-text of a given file are held in memory during encryption and decryption.
-{{site.data.alerts.end}}
-
-For an example of an encrypted backup, see [Create an encrypted backup](backup-and-restore.html).
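-
-As a minimal sketch of the round trip (the `bank` database, storage URL, and passphrase below are placeholders):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP DATABASE bank TO 's3://acme-co-backup/bank?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456'
- WITH encryption_passphrase = 'password123';
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> RESTORE DATABASE bank FROM 's3://acme-co-backup/bank?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456'
- WITH encryption_passphrase = 'password123';
-~~~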
diff --git a/src/current/_includes/v20.1/cdc/core-csv.md b/src/current/_includes/v20.1/cdc/core-csv.md
deleted file mode 100644
index 4ee6bfc587d..00000000000
--- a/src/current/_includes/v20.1/cdc/core-csv.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. To set the display format, use the [`--format=csv` flag](cockroach-sql.html#sql-flag-format) when starting the [built-in SQL client](cockroach-sql.html), or set the [`\set display_format=csv` option](cockroach-sql.html#client-side-options) once the SQL client is open.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/cdc/create-core-changefeed-avro.md b/src/current/_includes/v20.1/cdc/create-core-changefeed-avro.md
deleted file mode 100644
index 476308d611f..00000000000
--- a/src/current/_includes/v20.1/cdc/create-core-changefeed-avro.md
+++ /dev/null
@@ -1,109 +0,0 @@
-In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas.
-
-1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start-single-node \
- --insecure \
- --listen-addr=localhost \
- --background
- ~~~
-
-2. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/).
-
-3. Move into the extracted `confluent-<version>` directory and start Confluent:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ./bin/confluent start
- ~~~
-
- Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives).
-
-4. As the `root` user, open the [built-in SQL client](cockroach-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --format=csv \
- --insecure
- ~~~
-
- {% include {{ page.version.version }}/cdc/core-csv.md %}
-
-5. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING kv.rangefeed.enabled = true;
- ~~~
-
-6. Create table `bar`:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bar (a INT PRIMARY KEY);
- ~~~
-
-7. Insert a row into the table:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bar VALUES (0);
- ~~~
-
-8. Start the core changefeed:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > EXPERIMENTAL CHANGEFEED FOR bar \
- WITH format = experimental_avro, confluent_schema_registry = 'http://localhost:8081', resolved = '10s';
- ~~~
-
- ~~~
- table,key,value
- bar,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000
- NULL,NULL,\000\000\000\000\003\002<1590612821682559000.0000000000
- ~~~
-
- This changefeed will emit [`resolved` timestamps](changefeed-for.html#options) every 10 seconds. Depending on how quickly you insert into your watched table, the output could look different than what is shown here.
-
-9. In a new terminal, add another row:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)"
- ~~~
-
-10. Back in the terminal where the core changefeed is streaming, the output will appear:
-
- ~~~
- bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002
- NULL,NULL,\000\000\000\000\003\002<1590612831891317000.0000000000
- ~~~
-
- Note that records may take a couple of seconds to display in the core changefeed.
-
-11. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running.
-
-12. To stop `cockroach`, run:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach quit --insecure
- ~~~
-
-13. To stop Confluent, move into the extracted `confluent-<version>` directory and stop Confluent:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ./bin/confluent stop
- ~~~
-
- To terminate all Confluent processes, use:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ./bin/confluent destroy
- ~~~
diff --git a/src/current/_includes/v20.1/cdc/create-core-changefeed.md b/src/current/_includes/v20.1/cdc/create-core-changefeed.md
deleted file mode 100644
index 75803a9ebb0..00000000000
--- a/src/current/_includes/v20.1/cdc/create-core-changefeed.md
+++ /dev/null
@@ -1,86 +0,0 @@
-In this example, you'll set up a core changefeed for a single-node cluster.
-
-1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start-single-node \
- --insecure \
- --listen-addr=localhost \
- --background
- ~~~
-
-2. As the `root` user, open the [built-in SQL client](cockroach-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --format=csv \
- --insecure
- ~~~
-
- {% include {{ page.version.version }}/cdc/core-csv.md %}
-
-3. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING kv.rangefeed.enabled = true;
- ~~~
-
-4. Create table `foo`:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE foo (a INT PRIMARY KEY);
- ~~~
-
-5. Insert a row into the table:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO foo VALUES (0);
- ~~~
-
-6. Start the core changefeed:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > EXPERIMENTAL CHANGEFEED FOR foo
- WITH resolved = '10s';
- ~~~
- ~~~
- table,key,value
- foo,[0],"{""after"": {""a"": 0}}"
- NULL,NULL,"{""resolved"":""1590611959605806000.0000000000""}"
- ~~~
-
- This changefeed will emit [`resolved` timestamps](changefeed-for.html#options) every 10 seconds. Depending on how quickly you insert into your watched table, the output could look different than what is shown here.
-
-7. In a new terminal, add another row:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)"
- ~~~
-
-8. Back in the terminal where the core changefeed is streaming, the following output will appear:
-
- ~~~
- table,key,value
- foo,[0],"{""after"": {""a"": 0}}"
- NULL,NULL,"{""resolved"":""1590611959605806000.0000000000""}"
- foo,[1],"{""after"": {""a"": 1}}"
- NULL,NULL,"{""resolved"":""1590611970141415000.0000000000""}"
- ~~~
-
- Note that records may take a couple of seconds to display in the core changefeed.
-
-9. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running.
-
-10. To stop `cockroach`, run:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach quit --insecure
- ~~~
diff --git a/src/current/_includes/v20.1/cdc/external-urls.md b/src/current/_includes/v20.1/cdc/external-urls.md
deleted file mode 100644
index c0f67240938..00000000000
--- a/src/current/_includes/v20.1/cdc/external-urls.md
+++ /dev/null
@@ -1,48 +0,0 @@
-~~~
-[scheme]://[host]/[path]?[parameters]
-~~~
-
-Location | Scheme | Host | Parameters
----------+--------+------+-----------
-Amazon | `s3` | Bucket name | `AUTH` [1](#considerations) (optional; can be `implicit` or `specified`), `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`
-Azure | `azure` | N/A (see [Example file URLs](#example-file-urls)) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME`
-Google Cloud [2](#considerations) | `gs` | Bucket name | `AUTH` (optional; can be `default`, `implicit`, or `specified`), `CREDENTIALS`
-HTTP [3](#considerations) | `http` | Remote host | N/A
-NFS/Local [4](#considerations) | `nodelocal` | `nodeID` or `self` [5](#considerations) (see [Example file URLs](#example-file-urls)) | N/A
-S3-compatible services [6](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` [7](#considerations) (optional), `AWS_ENDPOINT`
-
-{{site.data.alerts.callout_info}}
-The location parameters often contain special characters that need to be URI-encoded. Use Javascript's [`encodeURIComponent`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [`url.QueryEscape`](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_info}}
-If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables when starting CockroachDB.
-
- If you cannot run a full proxy, you can disable external HTTP(S) access (as well as custom HTTP(S) endpoints) when performing bulk operations (e.g., [`BACKUP`](backup.html), [`RESTORE`](restore.html), etc.) by using the [`--external-io-disable-http` flag](cockroach-start.html#security). You can also disable the use of implicit credentials when accessing external cloud storage services for various bulk operations by using the [`--external-io-disable-implicit-credentials` flag](cockroach-start.html#security).
-{{site.data.alerts.end}}
-
-
-
-- 1 If the `AUTH` parameter is not provided, AWS connections default to `specified` and the access keys must be provided in the URI parameters. If the `AUTH` parameter is `implicit`, the access keys can be omitted and [the credentials will be loaded from the environment](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/).
-
-- 2 If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) will be used if it is non-empty, otherwise the `implicit` behavior is used. If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is `specified`, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the `CREDENTIALS` parameter. The JSON key object should be Base64-encoded (using the standard encoding in [RFC 4648](https://tools.ietf.org/html/rfc4648)).
-
-- 3 You can create your own HTTP server with [Caddy or nginx](create-a-file-server.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs.
-
-- 4 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](cockroach-start.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled.
-
-- 5 Using a `nodeID` is required and the data files will be in the `extern` directory of the specified node. In most cases (including single-node clusters), using `nodelocal://1/` is sufficient. Use `self` if you do not want to specify a `nodeID`, and the individual data files will be in the `extern` directories of arbitrary nodes; however, to work correctly, each node must have the [`--external-io-dir` flag](cockroach-start.html#general) point to the same NFS mount or other network-backed, shared storage.
-
-- 6 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service.
-
-- 7 The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it.
-
-#### Example file URLs
-
-Location | Example
--------------+----------------------------------------------------------------------------------
-Amazon S3 | `s3://acme-co/employees?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456`
-Azure | `azure://employees?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co`
-Google Cloud | `gs://acme-co`
-HTTP | `http://localhost:8080/employees`
-NFS/Local | `nodelocal://1/path/employees`, `nodelocal://self/nfsmount/backups/employees` [5](#considerations)
diff --git a/src/current/_includes/v20.1/cdc/print-key.md b/src/current/_includes/v20.1/cdc/print-key.md
deleted file mode 100644
index ab0b0924d30..00000000000
--- a/src/current/_includes/v20.1/cdc/print-key.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-This example only prints the value. To print both the key and value of each message in the changefeed (e.g., to observe what happens with `DELETE`s), use the `--property print.key=true` flag.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/cdc/url-encoding.md b/src/current/_includes/v20.1/cdc/url-encoding.md
deleted file mode 100644
index 2a681d7f913..00000000000
--- a/src/current/_includes/v20.1/cdc/url-encoding.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-Parameters should always be URI-encoded before they are included in the changefeed's URI, as they often contain special characters. Use Javascript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/client-transaction-retry.md b/src/current/_includes/v20.1/client-transaction-retry.md
deleted file mode 100644
index 6a54534169e..00000000000
--- a/src/current/_includes/v20.1/client-transaction-retry.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` [isolation level](transactions.html#isolation-levels), CockroachDB may require the client to [retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a [generic retry function](transactions.html#client-side-intervention) that runs inside a transaction and retries it as needed. The code sample below shows how it is used.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/computed-columns/add-computed-column.md b/src/current/_includes/v20.1/computed-columns/add-computed-column.md
deleted file mode 100644
index c670b1c7285..00000000000
--- a/src/current/_includes/v20.1/computed-columns/add-computed-column.md
+++ /dev/null
@@ -1,55 +0,0 @@
-In this example, create a table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE x (
- a INT NULL,
- b INT NULL AS (a * 2) STORED,
- c INT NULL AS (a + 4) STORED,
- FAMILY "primary" (a, b, rowid, c)
- );
-~~~
-
-Then, insert a row of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO x VALUES (6);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM x;
-~~~
-
-~~~
-+---+----+----+
-| a | b | c |
-+---+----+----+
-| 6 | 12 | 10 |
-+---+----+----+
-(1 row)
-~~~
-
-Now add another computed column to the table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED;
-~~~
-
-The `d` column is added to the table and computed from the `a` column divided by 2.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM x;
-~~~
-
-~~~
-+---+----+----+---+
-| a | b | c | d |
-+---+----+----+---+
-| 6 | 12 | 10 | 3 |
-+---+----+----+---+
-(1 row)
-~~~
diff --git a/src/current/_includes/v20.1/computed-columns/convert-computed-column.md b/src/current/_includes/v20.1/computed-columns/convert-computed-column.md
deleted file mode 100644
index 12fd6e7d418..00000000000
--- a/src/current/_includes/v20.1/computed-columns/convert-computed-column.md
+++ /dev/null
@@ -1,108 +0,0 @@
-You can convert a stored, computed column into a regular column by using `ALTER TABLE`.
-
-In this example, create a simple table with a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE office_dogs (
- id INT PRIMARY KEY,
- first_name STRING,
- last_name STRING,
- full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO office_dogs (id, first_name, last_name) VALUES
- (1, 'Petee', 'Hirata'),
- (2, 'Carl', 'Kimball'),
- (3, 'Ernie', 'Narayan');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM office_dogs;
-~~~
-
-~~~
-+----+------------+-----------+---------------+
-| id | first_name | last_name | full_name |
-+----+------------+-----------+---------------+
-| 1 | Petee | Hirata | Petee Hirata |
-| 2 | Carl | Kimball | Carl Kimball |
-| 3 | Ernie | Narayan | Ernie Narayan |
-+----+------------+-----------+---------------+
-(3 rows)
-~~~
-
-The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). You can view the column details with the [`SHOW COLUMNS`](show-columns.html) statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM office_dogs;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| first_name | STRING | true | NULL | | {} |
-| last_name | STRING | true | NULL | | {} |
-| full_name | STRING | true | NULL | concat(first_name, ' ', last_name) | {} |
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-(4 rows)
-~~~
-
-Now, convert the computed column (`full_name`) to a regular column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED;
-~~~
-
-Check that the computed column was converted:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM office_dogs;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| first_name | STRING | true | NULL | | {} |
-| last_name | STRING | true | NULL | | {} |
-| full_name | STRING | true | NULL | | {} |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(4 rows)
-~~~
-
-The computed column is now a regular column and can be updated as such:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is not computed');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM office_dogs;
-~~~
-
-~~~
-+----+------------+-----------+----------------------+
-| id | first_name | last_name | full_name |
-+----+------------+-----------+----------------------+
-| 1 | Petee | Hirata | Petee Hirata |
-| 2 | Carl | Kimball | Carl Kimball |
-| 3 | Ernie | Narayan | Ernie Narayan |
-| 4 | Lola | McDog | This is not computed |
-+----+------------+-----------+----------------------+
-(4 rows)
-~~~
diff --git a/src/current/_includes/v20.1/computed-columns/jsonb.md b/src/current/_includes/v20.1/computed-columns/jsonb.md
deleted file mode 100644
index 76a5b08ad8a..00000000000
--- a/src/current/_includes/v20.1/computed-columns/jsonb.md
+++ /dev/null
@@ -1,35 +0,0 @@
-In this example, create a table with a `JSONB` column and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE student_profiles (
- id STRING PRIMARY KEY AS (profile->>'id') STORED,
- profile JSONB
-);
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO student_profiles (profile) VALUES
- ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'),
- ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'),
- ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM student_profiles;
-~~~
-~~~
-+--------+---------------------------------------------------------------------------------------------------------------------+
-| id | profile |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"} |
-| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"} |
-| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-~~~
-
-The primary key `id` is computed as a field from the `profile` column.
diff --git a/src/current/_includes/v20.1/computed-columns/partitioning.md b/src/current/_includes/v20.1/computed-columns/partitioning.md
deleted file mode 100644
index 926c45793b4..00000000000
--- a/src/current/_includes/v20.1/computed-columns/partitioning.md
+++ /dev/null
@@ -1,53 +0,0 @@
-{{site.data.alerts.callout_info}}Partitioning is an enterprise feature. To request and enable a trial or full enterprise license, see Enterprise Licensing.{{site.data.alerts.end}}
-
-In this example, create a table with geo-partitioning and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE user_locations (
- locality STRING AS (CASE
- WHEN country IN ('ca', 'mx', 'us') THEN 'north_america'
- WHEN country IN ('au', 'nz') THEN 'australia'
- END) STORED,
- id SERIAL,
- name STRING,
- country STRING,
- PRIMARY KEY (locality, id))
- PARTITION BY LIST (locality)
- (PARTITION north_america VALUES IN ('north_america'),
- PARTITION australia VALUES IN ('australia'));
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO user_locations (name, country) VALUES
- ('Leonard McCoy', 'us'),
- ('Uhura', 'nz'),
- ('Spock', 'ca'),
- ('James Kirk', 'us'),
- ('Scotty', 'mx'),
- ('Hikaru Sulu', 'us'),
- ('Pavel Chekov', 'au');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM user_locations;
-~~~
-~~~
-+---------------+--------------------+---------------+---------+
-| locality | id | name | country |
-+---------------+--------------------+---------------+---------+
-| australia | 333153890100609025 | Uhura | nz |
-| australia | 333153890100772865 | Pavel Chekov | au |
-| north_america | 333153890100576257 | Leonard McCoy | us |
-| north_america | 333153890100641793 | Spock | ca |
-| north_america | 333153890100674561 | James Kirk | us |
-| north_america | 333153890100707329 | Scotty | mx |
-| north_america | 333153890100740097 | Hikaru Sulu | us |
-+---------------+--------------------+---------------+---------+
-~~~
-
-The `locality` column is computed from the `country` column.
diff --git a/src/current/_includes/v20.1/computed-columns/secondary-index.md b/src/current/_includes/v20.1/computed-columns/secondary-index.md
deleted file mode 100644
index e274db59d7e..00000000000
--- a/src/current/_includes/v20.1/computed-columns/secondary-index.md
+++ /dev/null
@@ -1,63 +0,0 @@
-In this example, create a table with a computed column and an index on that column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE gymnastics (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- athlete STRING,
- vault DECIMAL,
- bars DECIMAL,
- beam DECIMAL,
- floor DECIMAL,
- combined_score DECIMAL AS (vault + bars + beam + floor) STORED,
- INDEX total (combined_score DESC)
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES
- ('Simone Biles', 15.933, 14.800, 15.300, 15.800),
- ('Gabby Douglas', 0, 15.766, 0, 0),
- ('Laurie Hernandez', 15.100, 0, 15.233, 14.833),
- ('Madison Kocian', 0, 15.933, 0, 0),
- ('Aly Raisman', 15.833, 0, 15.000, 15.366);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM gymnastics;
-~~~
-~~~
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| id | athlete | vault | bars | beam | floor | combined_score |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman | 15.833 | 0 | 15.000 | 15.366 | 46.199 |
-| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian | 0 | 15.933 | 0 | 0 | 15.933 |
-| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas | 0 | 15.766 | 0 | 0 | 15.766 |
-| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 | 0 | 15.233 | 14.833 | 45.166 |
-| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles | 15.933 | 14.800 | 15.300 | 15.800 | 61.833 |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-~~~
-
-Now, run a query using the secondary index:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~
-~~~
-+------------------+----------------+
-| athlete | combined_score |
-+------------------+----------------+
-| Simone Biles | 61.833 |
-| Aly Raisman | 46.199 |
-| Laurie Hernandez | 45.166 |
-| Madison Kocian | 15.933 |
-| Gabby Douglas | 15.766 |
-+------------------+----------------+
-~~~
-
-The athlete with the highest combined score of 61.833 is Simone Biles.
diff --git a/src/current/_includes/v20.1/computed-columns/simple.md b/src/current/_includes/v20.1/computed-columns/simple.md
deleted file mode 100644
index 49045fc6cb7..00000000000
--- a/src/current/_includes/v20.1/computed-columns/simple.md
+++ /dev/null
@@ -1,40 +0,0 @@
-In this example, let's create a simple table with a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- city STRING,
- first_name STRING,
- last_name STRING,
- full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED,
- address STRING,
- credit_card STRING,
- dl STRING UNIQUE CHECK (LENGTH(dl) < 8)
-);
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users (first_name, last_name) VALUES
- ('Lola', 'McDog'),
- ('Carl', 'Kimball'),
- ('Ernie', 'Narayan');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users;
-~~~
-~~~
- id | city | first_name | last_name | full_name | address | credit_card | dl
-+--------------------------------------+------+------------+-----------+---------------+---------+-------------+------+
- 5740da29-cc0c-47af-921c-b275d21d4c76 | NULL | Ernie | Narayan | Ernie Narayan | NULL | NULL | NULL
- e7e0b748-9194-4d71-9343-cd65218848f0 | NULL | Lola | McDog | Lola McDog | NULL | NULL | NULL
- f00e4715-8ca7-4d5a-8de5-ef1d5d8092f3 | NULL | Carl | Kimball | Carl Kimball | NULL | NULL | NULL
-(3 rows)
-~~~
-
-The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html).
diff --git a/src/current/_includes/v20.1/faq/auto-generate-unique-ids.html b/src/current/_includes/v20.1/faq/auto-generate-unique-ids.html
deleted file mode 100644
index c1269995b2e..00000000000
--- a/src/current/_includes/v20.1/faq/auto-generate-unique-ids.html
+++ /dev/null
@@ -1,107 +0,0 @@
-To auto-generate unique row IDs, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (
- id UUID NOT NULL DEFAULT gen_random_uuid(),
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- FAMILY "primary" (id, city, name, address, credit_card)
-);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users;
-~~~
-
-~~~
- id | city | name | address | credit_card
-+--------------------------------------+----------+-------+---------+-------------+
- cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL | NULL
- 2382564e-702f-42d9-a139-b6df535ae00a | seattle | Eric | NULL | NULL
- 7d27e40b-263a-4891-b29b-d59135e55650 | seattle | Dan | NULL | NULL
-(3 rows)
-~~~
-
-Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users2 (
- id BYTES DEFAULT uuid_v4(),
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- FAMILY "primary" (id, city, name, address, credit_card)
-);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users2 (name, city) VALUES ('Anna', 'new york'), ('Jonah', 'seattle'), ('Terry', 'chicago');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users2;
-~~~
-
-~~~
- id | city | name | address | credit_card
-+------------------------------------------------+----------+-------+---------+-------------+
- 4\244\277\323/\261M\007\213\275*\0060\346\025z | chicago | Terry | NULL | NULL
- \273*t=u.F\010\274f/}\313\332\373a | new york | Anna | NULL | NULL
- \004\\\364nP\024L)\252\364\222r$\274O0 | seattle | Jonah | NULL | NULL
-(3 rows)
-~~~
-
-In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 512 MiB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load.
-
-This approach has the disadvantage of creating a primary key that may not be useful in a query directly, which can require a join with another table or a secondary index.
-
-If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users3 (
- id INT DEFAULT unique_rowid(),
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- FAMILY "primary" (id, city, name, address, credit_card)
-);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users3 (name, city) VALUES ('Blake', 'chicago'), ('Hannah', 'seattle'), ('Bobby', 'seattle');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users3;
-~~~
-
-~~~
- id | city | name | address | credit_card
-+--------------------+---------+--------+---------+-------------+
- 469048192112197633 | chicago | Blake | NULL | NULL
- 469048192112263169 | seattle | Hannah | NULL | NULL
- 469048192112295937 | seattle | Bobby | NULL | NULL
-(3 rows)
-~~~
-
-Upon insert or upsert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Also, there can be gaps and the order is not completely guaranteed.
diff --git a/src/current/_includes/v20.1/faq/clock-synchronization-effects.md b/src/current/_includes/v20.1/faq/clock-synchronization-effects.md
deleted file mode 100644
index 52bdca7559a..00000000000
--- a/src/current/_includes/v20.1/faq/clock-synchronization-effects.md
+++ /dev/null
@@ -1,26 +0,0 @@
-CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed, it spontaneously shuts down. This offset defaults to 500ms but can be changed via the [`--max-offset`](cockroach-start.html#flags-max-offset) flag when starting each node.
-
-While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node.
-
-In very rare cases, CockroachDB can momentarily run with a stale clock. This can happen when using vMotion, which can suspend a VM running CockroachDB, migrate it to different hardware, and resume it. This will cause CockroachDB to be out of sync for a short period before it jumps to the correct time. During this window, it would be possible for a client to read stale data and write data derived from stale reads. By enabling the `server.clock.forward_jump_check_enabled` [cluster setting](cluster-settings.html), you can be alerted when the CockroachDB clock jumps forward, indicating it had been running with a stale clock. To protect against this on vMotion, however, use the [`--clock-device`](cockroach-start.html#general) flag to specify a [PTP hardware clock](https://www.kernel.org/doc/html/latest/driver-api/ptp.html) for CockroachDB to use when querying the current time. When doing so, you should not enable `server.clock.forward_jump_check_enabled` because forward jumps will be expected and harmless. For more information on how `--clock-device` interacts with vMotion, see [this blog post](https://core.vmware.com/blog/cockroachdb-vmotion-support-vsphere-7-using-precise-timekeeping).
-
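-For example, to turn this check on from the built-in SQL client:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.clock.forward_jump_check_enabled = true;
-~~~
-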
-### Considerations
-
-When setting up clock synchronization:
-
-- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing).
-- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should.
-- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.
-- Do not run more than one clock sync service on VMs where `cockroach` is running.
-
-### Tutorials
-
-For guidance on synchronizing clocks, see the tutorial for your deployment environment:
-
-Environment | Featured Approach
-------------|---------------------
-[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service.
-[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service.
-[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service.
-[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service.
-[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service.
diff --git a/src/current/_includes/v20.1/faq/clock-synchronization-monitoring.html b/src/current/_includes/v20.1/faq/clock-synchronization-monitoring.html
deleted file mode 100644
index 7fb82e4d188..00000000000
--- a/src/current/_includes/v20.1/faq/clock-synchronization-monitoring.html
+++ /dev/null
@@ -1,8 +0,0 @@
-As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus timeseries database. Two of these metrics export how close each node's clock is to the clock of all other nodes:
-
-Metric | Definition
--------|-----------
-`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds
-`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds
-
-As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset.
diff --git a/src/current/_includes/v20.1/faq/differences-between-numberings.md b/src/current/_includes/v20.1/faq/differences-between-numberings.md
deleted file mode 100644
index 741ec4f8066..00000000000
--- a/src/current/_includes/v20.1/faq/differences-between-numberings.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences |
-|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------|
-| Size | 16 bytes | 8 bytes | 1 to 8 bytes |
-| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered |
-| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause contention |
-| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values |
-| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local |
-| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher |
-| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node |
-| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited |
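-
-As a quick sketch, the three strategies above as column defaults (table and sequence names are hypothetical; `gen_random_uuid()` is the `UUID`-typed analogue of `uuid_v4()`):
-
-~~~ sql
-> CREATE TABLE t1 (id UUID PRIMARY KEY DEFAULT gen_random_uuid());
-> CREATE TABLE t2 (id INT PRIMARY KEY DEFAULT unique_rowid());
-> CREATE SEQUENCE seq;
-> CREATE TABLE t3 (id INT PRIMARY KEY DEFAULT nextval('seq'));
-~~~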
diff --git a/src/current/_includes/v20.1/faq/planned-maintenance.md b/src/current/_includes/v20.1/faq/planned-maintenance.md
deleted file mode 100644
index a21e0467127..00000000000
--- a/src/current/_includes/v20.1/faq/planned-maintenance.md
+++ /dev/null
@@ -1,22 +0,0 @@
-By default, if a node stays offline for more than 5 minutes, the cluster will consider it dead and will rebalance its data to other nodes. Before temporarily stopping nodes for planned maintenance (e.g., upgrading system software), if you expect any nodes to be offline for longer than 5 minutes, you can prevent the cluster from unnecessarily rebalancing data off the nodes by increasing the `server.time_until_store_dead` [cluster setting](cluster-settings.html) to match the estimated maintenance window.
-
-For example, let's say you want to maintain a group of servers, and the nodes running on the servers may be offline for up to 15 minutes as a result. Before shutting down the nodes, you would change the `server.time_until_store_dead` cluster setting as follows:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.time_until_store_dead = '15m0s';
-~~~
-
-After completing the maintenance work and [restarting the nodes](cockroach-start.html), you would then change the setting back to its default:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> RESET CLUSTER SETTING server.time_until_store_dead;
-~~~
-
-It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:
-
-{% include copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING server.shutdown.drain_wait = '10s';
- ~~~
diff --git a/src/current/_includes/v20.1/faq/sequential-numbers.md b/src/current/_includes/v20.1/faq/sequential-numbers.md
deleted file mode 100644
index 8a4794b9243..00000000000
--- a/src/current/_includes/v20.1/faq/sequential-numbers.md
+++ /dev/null
@@ -1,8 +0,0 @@
-Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations:
-
-- Unless you need roughly-ordered numbers, we recommend using [`UUID`](uuid.html) values instead. See the [previous
-FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details.
-- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that
-consumes a lower sequence number commits after a transaction that consumes a higher number).
-- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause contention on few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers.
-- {% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
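-
-As a minimal sketch of the sequence-based approach (`customer_seq` and `customers` are hypothetical names):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE SEQUENCE customer_seq;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE customers (
- id INT PRIMARY KEY DEFAULT nextval('customer_seq'),
- name STRING
-);
-~~~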
diff --git a/src/current/_includes/v20.1/faq/sequential-transactions.md b/src/current/_includes/v20.1/faq/sequential-transactions.md
deleted file mode 100644
index 684f2ce5d2a..00000000000
--- a/src/current/_includes/v20.1/faq/sequential-transactions.md
+++ /dev/null
@@ -1,19 +0,0 @@
-Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly
-solutions instead. For example, CockroachDB's [time travel queries (`AS OF SYSTEM
-TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following:
-
-- Paginating through all the changes to a table or dataset
-- Determining the order of changes to data over time
-- Determining the state of data at some point in the past
-- Determining the changes to data between two points of time
-
-Consider also that the values generated by `unique_rowid()`, described in the previous FAQ entries, also provide an approximate time ordering.
-
-However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows:
-
-- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);`
-- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;`
-
-This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result.
-
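-For example, each such transaction would look like the following sketch, using the `cnt` table defined above:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;
-> COMMIT;
-~~~
-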
-If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs.
diff --git a/src/current/_includes/v20.1/faq/simulate-key-value-store.html b/src/current/_includes/v20.1/faq/simulate-key-value-store.html
deleted file mode 100644
index 4772fa5358c..00000000000
--- a/src/current/_includes/v20.1/faq/simulate-key-value-store.html
+++ /dev/null
@@ -1,13 +0,0 @@
-CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key:
-
-~~~ sql
-> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES);
-~~~
-
-When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation:
-
-~~~ sql
-> UPSERT INTO kv VALUES (1, b'hello');
-~~~
-
-This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises.
diff --git a/src/current/_includes/v20.1/faq/sql-query-logging.md b/src/current/_includes/v20.1/faq/sql-query-logging.md
deleted file mode 100644
index 1872ab616fb..00000000000
--- a/src/current/_includes/v20.1/faq/sql-query-logging.md
+++ /dev/null
@@ -1,152 +0,0 @@
-There are several ways to log SQL queries. The type of logging to use depends on your requirements and on the purpose of the logs.
-
-- For system troubleshooting and performance optimization, turn on [cluster-wide execution logs](#cluster-wide-execution-logs).
-- For local testing, turn on [per-node execution logs](#per-node-execution-logs).
-- For per-table audit logs for security purposes, turn on [SQL audit logs](#sql-audit-logs).
-
-### Cluster-wide execution logs
-
-For production clusters, the best way to log all queries is to turn on the [cluster-wide setting](cluster-settings.html) `sql.trace.log_statement_execute`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING sql.trace.log_statement_execute = true;
-~~~
-
-With this setting on, each node of the cluster writes all SQL queries it executes to a secondary `cockroach-sql-exec` log file. Use the symlink `cockroach-sql-exec.log` to open the most recent log. When you no longer need to log queries, you can turn the setting back off:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING sql.trace.log_statement_execute = false;
-~~~
-
-Log files are written to CockroachDB's standard [log directory](debug-and-error-logs.html#write-to-file).
-
-### Slow query logs
-
-New in v20.1: The `sql.log.slow_query.latency_threshold` [cluster setting](cluster-settings.html) is used to log only queries whose service latency exceeds a specified threshold value (e.g., 100 milliseconds):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING sql.log.slow_query.latency_threshold = '100ms';
-~~~
-
-Each node that serves as a gateway will then record slow SQL queries to a `cockroach-sql-slow` log file. Use the symlink `cockroach-sql-slow.log` to open the most recent log. For more details on logging slow queries, see [Using the slow query log](query-behavior-troubleshooting.html#using-the-slow-query-log).
-
-{{site.data.alerts.callout_info}}
-Setting `sql.log.slow_query.latency_threshold` to a non-zero value enables tracing on all queries, which impacts performance. After debugging, set the value back to `0s` to disable the log.
-{{site.data.alerts.end}}
-
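-For example, to disable the slow query log after debugging, set the threshold back to zero:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING sql.log.slow_query.latency_threshold = '0s';
-~~~
-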
-Log files are written to CockroachDB's standard [log directory](debug-and-error-logs.html#write-to-file).
-
-### Authentication logs
-
-{% include {{ page.version.version }}/misc/experimental-warning.md %}
-
-SQL client connections can be logged by turning on the `server.auth_log.sql_connections.enabled` [cluster setting](cluster-settings.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.auth_log.sql_connections.enabled = true;
-~~~
-
-This logs connection-established and connection-terminated events to a `cockroach-auth` log file. Use the symlink `cockroach-auth.log` to open the most recent log.
-
-{{site.data.alerts.callout_info}}
-In addition to SQL sessions, connection events can include SQL-based liveness probe attempts, as well as attempts to use the [PostgreSQL cancel protocol](https://www.postgresql.org/docs/current/protocol-flow.html#id-1.10.5.7.9).
-{{site.data.alerts.end}}
-
-This example log shows both types of connection events over a `hostssl` (TLS certificate over TCP) connection:
-
-~~~
-I200219 05:08:43.083907 5235 sql/pgwire/server.go:445 [n1,client=[::1]:34588] 22 received connection
-I200219 05:08:44.171384 5235 sql/pgwire/server.go:453 [n1,client=[::1]:34588,hostssl] 26 disconnected; duration: 1.087489893s
-~~~
-
-Along with the above, SQL client authenticated sessions can be logged by turning on the `server.auth_log.sql_sessions.enabled` [cluster setting](cluster-settings.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.auth_log.sql_sessions.enabled = true;
-~~~
-
-This logs authentication method selection, authentication method application, authentication method result, and session termination events to the `cockroach-auth` log file. Use the symlink `cockroach-auth.log` to open the most recent log.
-
-This example log shows authentication success over a `hostssl` (TLS certificate over TCP) connection:
-
-~~~
-I200219 05:08:43.089501 5149 sql/pgwire/auth.go:327 [n1,client=[::1]:34588,hostssl,user=root] 23 connection matches HBA rule:
-# TYPE DATABASE USER ADDRESS METHOD OPTIONS
-host all root all cert-password
-I200219 05:08:43.091045 5149 sql/pgwire/auth.go:327 [n1,client=[::1]:34588,hostssl,user=root] 24 authentication succeeded
-I200219 05:08:44.169684 5235 sql/pgwire/conn.go:216 [n1,client=[::1]:34588,hostssl,user=root] 25 session terminated; duration: 1.080240961s
-~~~
-
-This example log shows an authentication failure over a `local` (password over Unix socket) connection:
-
-~~~
-I200219 05:02:18.148961 1037 sql/pgwire/auth.go:327 [n1,client,local,user=root] 17 connection matches HBA rule:
-# TYPE DATABASE USER ADDRESS METHOD OPTIONS
-local all all password
-I200219 05:02:18.151644 1037 sql/pgwire/auth.go:327 [n1,client,local,user=root] 18 user has no password defined
-I200219 05:02:18.152863 1037 sql/pgwire/auth.go:327 [n1,client,local,user=root] 19 authentication failed: password authentication failed for user root
-I200219 05:02:18.154168 1036 sql/pgwire/conn.go:216 [n1,client,local,user=root] 20 session terminated; duration: 5.261538ms
-~~~
-
-For complete logging of client connections, we recommend enabling both `server.auth_log.sql_connections.enabled` and `server.auth_log.sql_sessions.enabled`.
-
-{{site.data.alerts.callout_info}}
-Be aware that both logs perform one disk I/O per event and will impact performance when enabled.
-{{site.data.alerts.end}}
-
-For more details on authentication and certificates, see [Authentication](authentication.html).
-
-Log files are written to CockroachDB's standard [log directory](debug-and-error-logs.html#write-to-file).
-
-### Per-node execution logs
-
-Alternatively, if you are testing CockroachDB locally and want to log only the queries executed by a specific node, you can either pass a CLI flag at node startup or execute a SQL function on a running node.
-
-When starting a new node from the CLI, pass the `--vmodule` flag to the [`cockroach start`](cockroach-start.html) command. For example, to start a single node locally and log all client-generated SQL queries it executes, you'd run:
-
-~~~ shell
-$ cockroach start --insecure --listen-addr=localhost --vmodule=exec_log=2 --join=
-~~~
-
-{{site.data.alerts.callout_success}}
-To log CockroachDB-generated SQL queries as well, use `--vmodule=exec_log=3`.
-{{site.data.alerts.end}}
-
-From the SQL prompt on a running node, execute the `crdb_internal.set_vmodule()` [function](functions-and-operators.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT crdb_internal.set_vmodule('exec_log=2');
-~~~
-
-This will result in the following output:
-
-~~~
- crdb_internal.set_vmodule
-+---------------------------+
- 0
-(1 row)
-~~~
-
-Once the logging is enabled, all client-generated SQL queries executed by the node will be written to the primary [CockroachDB log file](debug-and-error-logs.html) as follows:
-
-~~~
-I180402 19:12:28.112957 394661 sql/exec_log.go:173 [n1,client=127.0.0.1:50155,user=root] exec "psql" {} "SELECT version()" {} 0.795 1 ""
-~~~
-
-### SQL audit logs
-
-{% include {{ page.version.version }}/misc/experimental-warning.md %}
-
-SQL audit logging is useful if you want to log all queries that are run against specific tables, by specific users.
-
-- For a tutorial, see [SQL Audit Logging](sql-audit-logging.html).
-
-- For reference documentation, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html).
-
-Note that enabling SQL audit logs can negatively impact performance. As a result, we recommend using SQL audit logs for security purposes only.
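-
-For example, a minimal sketch of turning on audit logging for a single table (the table name `customers` is illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;
-~~~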
diff --git a/src/current/_includes/v20.1/faq/when-to-interleave-tables.html b/src/current/_includes/v20.1/faq/when-to-interleave-tables.html
deleted file mode 100644
index a65196ad693..00000000000
--- a/src/current/_includes/v20.1/faq/when-to-interleave-tables.html
+++ /dev/null
@@ -1,5 +0,0 @@
-You're most likely to benefit from interleaved tables when:
-
- - Your tables form a [hierarchy](interleave-in-parent.html#interleaved-hierarchy)
- - Queries maximize the [benefits of interleaving](interleave-in-parent.html#benefits)
- - Queries do not suffer too greatly from interleaving's [tradeoffs](interleave-in-parent.html#tradeoffs)
diff --git a/src/current/_includes/v20.1/json/json-sample.go b/src/current/_includes/v20.1/json/json-sample.go
deleted file mode 100644
index d5953a71ee2..00000000000
--- a/src/current/_includes/v20.1/json/json-sample.go
+++ /dev/null
@@ -1,79 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "io/ioutil"
- "net/http"
- "time"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257")
- if err != nil {
- panic(err)
- }
-
-	// The Reddit API wants us to tell it where to start from. For the first
-	// request we just say "null" to mean "from the start"; subsequent requests
-	// will use the value received from the last call.
- after := "null"
-
- for i := 0; i < 41; i++ {
- after, err = makeReq(db, after)
- if err != nil {
- panic(err)
- }
- // Reddit limits to 30 requests per minute, so do not do any more than that.
- time.Sleep(2 * time.Second)
- }
-}
-
-func makeReq(db *sql.DB, after string) (string, error) {
- // First, make a request to reddit using the appropriate "after" string.
- client := &http.Client{}
-	req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil)
-	if err != nil {
-		return "", err
-	}
-
-	req.Header.Add("User-Agent", `Go`)
-
-	resp, err := client.Do(req)
-	if err != nil {
-		return "", err
-	}
-	// Always close the response body once we are done reading from it.
-	defer resp.Body.Close()
-
- res, err := ioutil.ReadAll(resp.Body)
- if err != nil {
- return "", err
- }
-
-	// We've gotten back our JSON from reddit, so we can use a couple of SQL
-	// tricks to accomplish multiple things at once.
- // The JSON reddit returns looks like this:
- // {
- // "data": {
- // "children": [ ... ]
- // },
- // "after": ...
- // }
- // We structure our query so that we extract the `children` field, and then
- // expand that and insert each individual element into the database as a
- // separate row. We then return the "after" field so we know how to make the
- // next request.
- r, err := db.Query(`
- INSERT INTO jsonb_test.programming (posts)
- SELECT json_array_elements($1->'data'->'children')
- RETURNING $1->'data'->'after'`,
- string(res))
- if err != nil {
- return "", err
- }
-
-	// Since we did a RETURNING, we need to grab the result of our query.
-	// Close the rows when done, and check that a row actually came back.
-	defer r.Close()
-	var newAfter string
-	if !r.Next() {
-		return "", r.Err()
-	}
-	if err := r.Scan(&newAfter); err != nil {
-		return "", err
-	}
-
-	return newAfter, nil
-}
diff --git a/src/current/_includes/v20.1/json/json-sample.py b/src/current/_includes/v20.1/json/json-sample.py
deleted file mode 100644
index 49e302613e0..00000000000
--- a/src/current/_includes/v20.1/json/json-sample.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import json
-import psycopg2
-import requests
-import time
-
-conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-# The Reddit API wants us to tell it where to start from. For the first
-# request we just say "null" to mean "from the start"; subsequent requests
-# will use the value received from the last call.
-url = "https://www.reddit.com/r/programming.json"
-after = {"after": "null"}
-
-for n in range(41):
- # First, make a request to reddit using the appropriate "after" string.
- req = requests.get(url, params=after, headers={"User-Agent": "Python"})
-
- # Decode the JSON and set "after" for the next request.
- resp = req.json()
- after = {"after": str(resp['data']['after'])}
-
- # Convert the JSON to a string to send to the database.
- data = json.dumps(resp)
-
- # The JSON reddit returns looks like this:
- # {
- # "data": {
- # "children": [ ... ]
- # },
- # "after": ...
- # }
- # We structure our query so that we extract the `children` field, and then
- # expand that and insert each individual element into the database as a
- # separate row.
- cur.execute("""INSERT INTO jsonb_test.programming (posts)
- SELECT json_array_elements(%s->'data'->'children')""", (data,))
-
- # Reddit limits to 30 requests per minute, so do not do any more than that.
- time.sleep(2)
-
-cur.close()
-conn.close()
diff --git a/src/current/_includes/v20.1/known-limitations/adding-stores-to-node.md b/src/current/_includes/v20.1/known-limitations/adding-stores-to-node.md
deleted file mode 100644
index 206d98718a3..00000000000
--- a/src/current/_includes/v20.1/known-limitations/adding-stores-to-node.md
+++ /dev/null
@@ -1,5 +0,0 @@
-After a node has initially joined a cluster, it is not possible to add additional [stores](cockroach-start.html#store) to the node. If you stop the node and restart it with additional stores, it will fail to reconnect to the cluster.
-
-To work around this limitation, [decommission the node](remove-nodes.html), remove its data directory, and then run [`cockroach start`](cockroach-start.html) to join the cluster again as a new node.
-
-[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/39415)
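-
-A minimal sketch of this workaround, assuming an insecure local cluster (the node ID, paths, and addresses are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Decommission the node.
-$ cockroach node decommission 4 --insecure --host=localhost:26257
-
-# Remove the decommissioned node's data directory.
-$ rm -rf /path/to/node4-data
-
-# Rejoin the cluster as a new node, now with the additional store.
-$ cockroach start --insecure --store=/path/to/node4-data --store=/path/to/node4-extra --join=localhost:26257
-~~~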
diff --git a/src/current/_includes/v20.1/known-limitations/cdc.md b/src/current/_includes/v20.1/known-limitations/cdc.md
deleted file mode 100644
index 75ec49f877e..00000000000
--- a/src/current/_includes/v20.1/known-limitations/cdc.md
+++ /dev/null
@@ -1,9 +0,0 @@
-- Changefeeds only work on tables with a single [column family](column-families.html) (which is the default for new tables).
-- Changefeeds do not share internal buffers, so each running changefeed will increase total memory usage. To watch multiple tables, we recommend creating a changefeed with a comma-separated list of tables.
-- Many DDL queries (including [`TRUNCATE`](truncate.html) and [`DROP TABLE`](drop-table.html)) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended).
-- Changefeeds cannot be [backed up](backup.html) or [restored](restore.html).
-- Partial or intermittent sink unavailability may impact changefeed stability; however, [ordering guarantees](change-data-capture.html#ordering-guarantees) will still hold for as long as a changefeed [remains active](change-data-capture.html#monitor-a-changefeed).
-- Changefeeds cannot be altered. To change a changefeed's settings, cancel it and [create a new one with updated settings from where it left off](create-changefeed.html#start-a-new-changefeed-where-another-ended), as sketched after this list.
-- Additional target options will be added, including partitions and ranges of primary key rows.
-- Changefeeds do not pick up data ingested with the [`IMPORT INTO`](import-into.html) statement.
-- Using a [cloud storage sink](create-changefeed.html#cloud-storage-sink) only works with `JSON` and emits [newline-delimited json](http://ndjson.org) files.
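-
-A minimal sketch of restarting a changefeed from where another ended (the table name, sink URI, and cursor timestamp are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE CHANGEFEED FOR TABLE orders
-    INTO 'kafka://localhost:9092'
-    WITH cursor = '1585000000000000000.0000000000';
-~~~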
diff --git a/src/current/_includes/v20.1/known-limitations/correlated-ctes.md b/src/current/_includes/v20.1/known-limitations/correlated-ctes.md
deleted file mode 100644
index 225d4f02499..00000000000
--- a/src/current/_includes/v20.1/known-limitations/correlated-ctes.md
+++ /dev/null
@@ -1,20 +0,0 @@
-CockroachDB does not support correlated common table expressions. This means that a CTE cannot refer to a variable defined outside the scope of that CTE.
-
-For example, the following query returns an error:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users
- WHERE id =
- (WITH rides_home AS
- (SELECT revenue FROM rides
- WHERE end_address = address)
- SELECT rider_id FROM rides_home);
-~~~
-
-~~~
-ERROR: CTEs may not be correlated
-SQLSTATE: 0A000
-~~~
-
-This query returns an error because the `WITH rides_home` clause references a column (`address`) returned by the `SELECT` statement at the top level of the query, outside the `rides_home` CTE definition.
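-
-As a workaround, such queries can often be rewritten so that the outer column is brought into scope with a join instead of being referenced from inside the CTE. A minimal sketch of one possible rewrite (not guaranteed to be equivalent for all schemas or data):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT users.*
-    FROM users
-    JOIN rides ON rides.end_address = users.address
-   WHERE users.id = rides.rider_id;
-~~~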
diff --git a/src/current/_includes/v20.1/known-limitations/dropping-renaming-during-upgrade.md b/src/current/_includes/v20.1/known-limitations/dropping-renaming-during-upgrade.md
deleted file mode 100644
index 16a6c43c438..00000000000
--- a/src/current/_includes/v20.1/known-limitations/dropping-renaming-during-upgrade.md
+++ /dev/null
@@ -1,10 +0,0 @@
-When upgrading from v19.2.x to v20.1.0, as soon as any node of the cluster has run v20.1.0, it is important to avoid dropping, renaming, or truncating tables, views, sequences, or databases on the v19.2 nodes. This is true even in cases where nodes were upgraded to v20.1.0 and then rolled back to v19.2.
-
-In this case, avoid running the following operations against v19.2 nodes:
-
-- [`DROP TABLE`](drop-table.html), [`TRUNCATE TABLE`](truncate.html), [`RENAME TABLE`](rename-table.html)
-- [`DROP VIEW`](drop-view.html)
-- [`DROP SEQUENCE`](drop-sequence.html), [`RENAME SEQUENCE`](rename-sequence.html)
-- [`DROP DATABASE`](drop-database.html), [`RENAME DATABASE`](rename-database.html)
-
-Running any of these operations against v19.2 nodes will result in inconsistency between two internal tables, `system.namespace` and `system.namespace2`. This inconsistency will prevent you from being able to recreate the dropped or renamed objects; the returned error will be `ERROR: relation already exists`. In the case of a dropped or renamed database, [`SHOW DATABASES`](show-databases.html) will also return an error: `ERROR: internal error: "" is not a database`.
diff --git a/src/current/_includes/v20.1/known-limitations/dump-table-with-collations.md b/src/current/_includes/v20.1/known-limitations/dump-table-with-collations.md
deleted file mode 100644
index 50c700b0e1b..00000000000
--- a/src/current/_includes/v20.1/known-limitations/dump-table-with-collations.md
+++ /dev/null
@@ -1,55 +0,0 @@
-When using [`cockroach dump`](cockroach-dump.html) to dump the data of a table containing [collations](collate.html), the resulting `INSERT`s do not include the relevant collation clauses. For example:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start-single-node --insecure
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE de_names (name STRING COLLATE de PRIMARY KEY);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO de_names VALUES
- ('Backhaus' COLLATE de),
- ('Bär' COLLATE de),
- ('Baz' COLLATE de)
- ;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach dump defaultdb de_names --insecure > dump.sql
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cat dump.sql
-~~~
-
-~~~
-CREATE TABLE de_names (
- name STRING COLLATE de NOT NULL,
- CONSTRAINT "primary" PRIMARY KEY (name ASC),
- FAMILY "primary" (name)
-);
-
-INSERT INTO de_names (name) VALUES
- ('Backhaus'),
- (e'B\u00E4r'),
- ('Baz');
-~~~
-
-[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/48278)
diff --git a/src/current/_includes/v20.1/known-limitations/dump-table-with-no-columns.md b/src/current/_includes/v20.1/known-limitations/dump-table-with-no-columns.md
deleted file mode 100644
index 9dc903636c5..00000000000
--- a/src/current/_includes/v20.1/known-limitations/dump-table-with-no-columns.md
+++ /dev/null
@@ -1 +0,0 @@
-It is not currently possible to use [`cockroach dump`](cockroach-dump.html) to dump the schema and data of a table with no user-defined columns. See [#35462](https://github.com/cockroachdb/cockroach/issues/35462) for more details.
diff --git a/src/current/_includes/v20.1/known-limitations/import-high-disk-contention.md b/src/current/_includes/v20.1/known-limitations/import-high-disk-contention.md
deleted file mode 100644
index 48b9c63acf2..00000000000
--- a/src/current/_includes/v20.1/known-limitations/import-high-disk-contention.md
+++ /dev/null
@@ -1,6 +0,0 @@
-[`IMPORT`](import.html) can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
-~~~
diff --git a/src/current/_includes/v20.1/known-limitations/node-map.md b/src/current/_includes/v20.1/known-limitations/node-map.md
deleted file mode 100644
index df9ef58486e..00000000000
--- a/src/current/_includes/v20.1/known-limitations/node-map.md
+++ /dev/null
@@ -1,8 +0,0 @@
-You cannot assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration:
-
-| Node | Region | Datacenter |
-| ------ | ------ | ------ |
-| Node1 | us-east | datacenter-1 |
-| Node2 | us-west | datacenter-1 |
-
-In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the Node Map will not be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the Node Map will be displayed.
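-
-For example, given the partial configuration above, you could assign coordinates at the region tier instead (the coordinates below are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO system.locations VALUES ('region', 'us-east', 37.478397, -76.453077);
-> INSERT INTO system.locations VALUES ('region', 'us-west', 43.804133, -120.554201);
-~~~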
diff --git a/src/current/_includes/v20.1/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v20.1/known-limitations/partitioning-with-placeholders.md
deleted file mode 100644
index b3c3345200d..00000000000
--- a/src/current/_includes/v20.1/known-limitations/partitioning-with-placeholders.md
+++ /dev/null
@@ -1 +0,0 @@
-When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause.
diff --git a/src/current/_includes/v20.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md b/src/current/_includes/v20.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md
deleted file mode 100644
index b7d947bb4c9..00000000000
--- a/src/current/_includes/v20.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md
+++ /dev/null
@@ -1,64 +0,0 @@
-Schema change [DDL](https://en.wikipedia.org/wiki/Data_definition_language#ALTER_statement) statements that run inside a multi-statement transaction with non-DDL statements can fail at [`COMMIT`](commit-transaction.html) time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded.
-
-If such a failure occurs, CockroachDB emits a CockroachDB-specific error code, `XXA00`, and the following error message:
-
-~~~
-transaction committed but schema change aborted with error:
-HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
-Manual inspection may be required to determine the actual state of the database.
-~~~
-
-{{site.data.alerts.callout_info}}
-This limitation also exists in versions of CockroachDB prior to v19.2. However, in those older versions, CockroachDB returned the Postgres error code `40003` (`"statement completion unknown"`) instead of `XXA00`.
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-If you must execute schema change DDL statements inside a multi-statement transaction, we **strongly recommend** checking for this error code and handling it appropriately every time you execute such transactions.
-{{site.data.alerts.end}}
-
-This error will occur in various scenarios, including but not limited to:
-
-- Creating a unique index fails because values aren't unique.
-- The evaluation of a computed value fails.
-- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column.
-
-To see an example of this error, start by creating the following table and inserting a few rows.
-
-{% include copy-clipboard.html %}
-~~~ sql
-CREATE TABLE t (x INT);
-INSERT INTO t (x) VALUES (1), (2), (3);
-~~~
-
-Then, enter the following multi-statement transaction, which will trigger the error.
-
-{% include copy-clipboard.html %}
-~~~ sql
-BEGIN;
-ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x);
-INSERT INTO t (x) VALUES (3);
-COMMIT;
-~~~
-
-~~~
-pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x"
-HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
-Manual inspection may be required to determine the actual state of the database.
-~~~
-
-In this example, the [`INSERT`](insert.html) statement committed, but the [`ALTER TABLE`](alter-table.html) statement adding a [`UNIQUE` constraint](unique.html) failed. We can verify this by looking at the data in table `t` and seeing that the additional non-unique value `3` was successfully inserted.
-
-{% include copy-clipboard.html %}
-~~~ sql
-SELECT * FROM t;
-~~~
-
-~~~
- x
-+---+
- 1
- 2
- 3
- 3
-(4 rows)
-~~~
diff --git a/src/current/_includes/v20.1/known-limitations/schema-changes-between-prepared-statements.md b/src/current/_includes/v20.1/known-limitations/schema-changes-between-prepared-statements.md
deleted file mode 100644
index 736fe99df61..00000000000
--- a/src/current/_includes/v20.1/known-limitations/schema-changes-between-prepared-statements.md
+++ /dev/null
@@ -1,33 +0,0 @@
-When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a `SELECT *` clause will result in an error:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE users (id INT PRIMARY KEY);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PREPARE prep1 AS SELECT * FROM users;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER TABLE users ADD COLUMN name STRING;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO users VALUES (1, 'Max Roach');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-EXECUTE prep1;
-~~~
-
-~~~
-ERROR: cached plan must not change result type
-SQLSTATE: 0A000
-~~~
-
-We therefore recommend explicitly listing result columns instead of using `SELECT *` in prepared statements, when possible.
diff --git a/src/current/_includes/v20.1/known-limitations/schema-changes-within-transactions.md b/src/current/_includes/v20.1/known-limitations/schema-changes-within-transactions.md
deleted file mode 100644
index a6631e88461..00000000000
--- a/src/current/_includes/v20.1/known-limitations/schema-changes-within-transactions.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Within a single [transaction](transactions.html):
-
-- DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions (see the sketch below). For more details, [see examples of unsupported statements](online-schema-changes.html#examples-of-statements-that-fail).
-- As of version v2.1, you can run schema changes inside the same transaction as a [`CREATE TABLE`](create-table.html) statement. For more information, [see this example](online-schema-changes.html#run-schema-changes-inside-a-transaction-with-create-table).
-- A `CREATE TABLE` statement containing [`FOREIGN KEY`](foreign-key.html) or [`INTERLEAVE`](interleave-in-parent.html) clauses cannot be followed by statements that reference the new table.
-- A table name cannot be reused. For example, you cannot drop a table named `a` and then create (or rename) a different table with the name `a`. Similarly, you cannot rename a table named `a` to `b` and then create (or rename) a different table with the name `a`. As a workaround, split [`ALTER TABLE ... RENAME TO`](rename-table.html), [`DROP TABLE`](drop-table.html), and [`CREATE TABLE`](create-table.html) statements that reuse table names into separate transactions.
-- [Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed](#schema-change-ddl-statements-inside-a-multi-statement-transaction-can-fail-while-other-statements-succeed).
-- As of v19.1, some schema changes can be used in combination in a single `ALTER TABLE` statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically).
-
-{{site.data.alerts.callout_info}}
-If a schema change within a transaction fails, manual intervention may be needed to determine which schema change has failed. After determining which schema change(s) failed, you can retry them.
-{{site.data.alerts.end}}
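-
-A minimal sketch of the first workaround above, splitting the DDL from the DML into separate transactions (the table and column names are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> INSERT INTO t (x) VALUES (1);
-> COMMIT;
-
-> BEGIN;
-> ALTER TABLE t ADD COLUMN y INT;
-> COMMIT;
-~~~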
diff --git a/src/current/_includes/v20.1/metric-names.md b/src/current/_includes/v20.1/metric-names.md
deleted file mode 100644
index 7eebed323d8..00000000000
--- a/src/current/_includes/v20.1/metric-names.md
+++ /dev/null
@@ -1,246 +0,0 @@
-Name | Help
------|-----
-`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas)
-`addsstable.copies` | Number of SSTable ingestions that required copying files during application
-`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders)
-`build.timestamp` | Build information
-`capacity.available` | Available storage capacity
-`capacity.reserved` | Capacity reserved for snapshots
-`capacity.used` | Used storage capacity
-`capacity` | Total storage capacity
-`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds
-`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds
-`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges
-`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine
-`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine
-`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions
-`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue
-`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted
-`distsender.batches.partial` | Number of partial batches processed
-`distsender.batches` | Number of batches processed
-`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered
-`distsender.rpc.sent.local` | Number of local RPCs sent
-`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors
-`distsender.rpc.sent` | Number of RPCs sent
-`exec.error` | Number of batch KV requests that failed to execute on this node
-`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node
-`exec.success` | Number of batch KV requests executed successfully on this node
-`gcbytesage` | Cumulative age of non-live data in seconds
-`gossip.bytes.received` | Number of received gossip bytes
-`gossip.bytes.sent` | Number of sent gossip bytes
-`gossip.connections.incoming` | Number of active incoming gossip connections
-`gossip.connections.outgoing` | Number of active outgoing gossip connections
-`gossip.connections.refused` | Number of refused incoming gossip connections
-`gossip.infos.received` | Number of received gossip Info objects
-`gossip.infos.sent` | Number of sent gossip Info objects
-`intentage` | Cumulative age of intents in seconds
-`intentbytes` | Number of bytes in intent KV pairs
-`intentcount` | Count of intent keys
-`keybytes` | Number of bytes taken up by keys
-`keycount` | Count of all keys
-`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated
-`leases.epoch` | Number of replica leaseholders using epoch-based leases
-`leases.error` | Number of failed lease requests
-`leases.expiration` | Number of replica leaseholders using expiration-based leases
-`leases.success` | Number of successful lease requests
-`leases.transfers.error` | Number of failed lease transfers
-`leases.transfers.success` | Number of successful lease transfers
-`livebytes` | Number of bytes of live data (keys plus values)
-`livecount` | Count of live keys
-`liveness.epochincrements` | Number of times this node has incremented its liveness epoch
-`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node
-`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds
-`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node
-`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live)
-`node-id` | Node ID with labels for advertised RPC and HTTP addresses
-`queue.consistency.pending` | Number of pending replicas in the consistency checker queue
-`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue
-`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue
-`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue
-`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal
-`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal
-`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine
-`queue.gc.info.intentsconsidered` | Number of 'old' intents
-`queue.gc.info.intenttxns` | Number of associated distinct transactions
-`queue.gc.info.numkeysaffected` | Number of keys with GC'able data
-`queue.gc.info.pushtxn` | Number of attempted pushes
-`queue.gc.info.resolvesuccess` | Number of successful intent resolutions
-`queue.gc.info.resolvetotal` | Number of attempted intent resolutions
-`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns
-`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns
-`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns
-`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine
-`queue.gc.pending` | Number of pending replicas in the GC queue
-`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue
-`queue.gc.process.success` | Number of replicas successfully processed by the GC queue
-`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue
-`queue.raftlog.pending` | Number of pending replicas in the Raft log queue
-`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue
-`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue
-`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue
-`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue
-`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue
-`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue
-`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue
-`queue.replicagc.pending` | Number of pending replicas in the replica GC queue
-`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue
-`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue
-`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue
-`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue
-`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue
-`queue.replicate.pending` | Number of pending replicas in the replicate queue
-`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue
-`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue
-`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue
-`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options
-`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue
-`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage)
-`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition)
-`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue
-`queue.split.pending` | Number of pending replicas in the split queue
-`queue.split.process.failure` | Number of replicas which failed processing in the split queue
-`queue.split.process.success` | Number of replicas successfully processed by the split queue
-`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue
-`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue
-`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue
-`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue
-`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue
-`raft.commandsapplied` | Count of Raft commands applied
-`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue
-`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced
-`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands
-`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries
-`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick()
-`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working
-`raft.rcvd.app` | Number of MsgApp messages received by this store
-`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store
-`raft.rcvd.dropped` | Number of dropped incoming Raft messages
-`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store
-`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store
-`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store
-`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store
-`raft.rcvd.prop` | Number of MsgProp messages received by this store
-`raft.rcvd.snap` | Number of MsgSnap messages received by this store
-`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store
-`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store
-`raft.rcvd.vote` | Number of MsgVote messages received by this store
-`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store
-`raft.ticks` | Number of Raft ticks queued
-`raftlog.behind` | Number of Raft log entries followers on other stores are behind
-`raftlog.truncated` | Number of Raft log entries truncated
-`range.adds` | Number of range additions
-`range.raftleadertransfers` | Number of raft leader transfers
-`range.removes` | Number of range removals
-`range.snapshots.generated` | Number of generated snapshots
-`range.snapshots.normal-applied` | Number of applied snapshots
-`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots
-`range.splits` | Number of range splits
-`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum
-`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target
-`ranges` | Number of ranges
-`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
-`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined
-`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined
-`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined
-`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue
-`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue
-`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue
-`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree
-`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue
-`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store
-`replicas.leaders` | Number of raft leaders
-`replicas.leaseholders` | Number of lease holders
-`replicas.quiescent` | Number of quiesced replicas
-`replicas.reserved` | Number of replicas reserved for snapshots
-`replicas` | Number of replicas
-`requests.backpressure.split` | Number of backpressured writes waiting on a Range split
-`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue
-`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender
-`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease
-`requests.slow.raft` | Number of requests that have been stuck for a long time in raft
-`rocksdb.block.cache.hits` | Count of block cache hits
-`rocksdb.block.cache.misses` | Count of block cache misses
-`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache
-`rocksdb.block.cache.usage` | Bytes used by the block cache
-`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked
-`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation
-`rocksdb.compactions` | Number of table compactions
-`rocksdb.flushes` | Number of table flushes
-`rocksdb.memtable.total-size` | Current size of memtable in bytes
-`rocksdb.num-sstables` | Number of rocksdb SSTables
-`rocksdb.read-amplification` | Number of disk reads per query
-`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks
-`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds
-`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error.
-`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error.
-`sql.bytesin` | Number of sql bytes received
-`sql.bytesout` | Number of sql bytes sent
-`sql.conns` | Number of active sql connections
-`sql.ddl.count` | Number of SQL DDL statements
-`sql.delete.count` | Number of SQL DELETE statements
-`sql.distsql.exec.latency` | Latency in nanoseconds of DistSQL statement execution
-`sql.distsql.flows.active` | Number of distributed SQL flows currently active
-`sql.distsql.flows.total` | Number of distributed SQL flows executed
-`sql.distsql.queries.active` | Number of distributed SQL queries currently active
-`sql.distsql.queries.total` | Number of distributed SQL queries executed
-`sql.distsql.select.count` | Number of DistSQL SELECT statements
-`sql.distsql.service.latency` | Latency in nanoseconds of DistSQL request execution
-`sql.exec.latency` | Latency in nanoseconds of SQL statement execution
-`sql.insert.count` | Number of SQL INSERT statements
-`sql.mem.current` | Current sql statement memory usage
-`sql.mem.distsql.current` | Current sql statement memory usage for distsql
-`sql.mem.distsql.max` | Memory usage per sql statement for distsql
-`sql.mem.max` | Memory usage per sql statement
-`sql.mem.session.current` | Current sql session memory usage
-`sql.mem.session.max` | Memory usage per sql session
-`sql.mem.txn.current` | Current sql transaction memory usage
-`sql.mem.txn.max` | Memory usage per sql transaction
-`sql.misc.count` | Number of other SQL statements
-`sql.query.count` | Number of SQL queries
-`sql.select.count` | Number of SQL SELECT statements
-`sql.service.latency` | Latency in nanoseconds of SQL request execution
-`sql.txn.abort.count` | Number of SQL transaction ABORT statements
-`sql.txn.begin.count` | Number of SQL transaction BEGIN statements
-`sql.txn.commit.count` | Number of SQL transaction COMMIT statements
-`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements
-`sql.update.count` | Number of SQL UPDATE statements
-`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo
-`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released
-`sys.cgocalls` | Total number of cgo calls
-`sys.cpu.sys.ns` | Total system cpu time in nanoseconds
-`sys.cpu.sys.percent` | Current system cpu percentage
-`sys.cpu.user.ns` | Total user cpu time in nanoseconds
-`sys.cpu.user.percent` | Current user cpu percentage
-`sys.fd.open` | Process open file descriptors
-`sys.fd.softlimit` | Process open FD soft limit
-`sys.gc.count` | Total number of GC runs
-`sys.gc.pause.ns` | Total GC pause in nanoseconds
-`sys.gc.pause.percent` | Current GC pause percentage
-`sys.go.allocbytes` | Current bytes of memory allocated by go
-`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released
-`sys.goroutines` | Current number of goroutines
-`sys.rss` | Current process RSS
-`sys.uptime` | Process uptime in seconds
-`sysbytes` | Number of bytes in system KV pairs
-`syscount` | Count of system KV pairs
-`timeseries.write.bytes` | Total size in bytes of metric samples written to disk
-`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk
-`timeseries.write.samples` | Total number of metric samples written to disk
-`totalbytes` | Total number of bytes taken up by keys and values including non-live data
-`tscache.skl.read.pages` | Number of pages in the read timestamp cache
-`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache
-`tscache.skl.write.pages` | Number of pages in the write timestamp cache
-`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache
-`txn.abandons` | Number of abandoned KV transactions
-`txn.aborts` | Number of aborted KV transactions
-`txn.autoretries` | Number of automatic retries to avoid serializable restarts
-`txn.commits1PC` | Number of committed one-phase KV transactions
-`txn.commits` | Number of committed KV transactions (including 1PC)
-`txn.durations` | KV transaction durations in nanoseconds
-`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command
-`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer
-`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE
-`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first
-`txn.restarts` | Number of restarted KV transactions
-`valbytes` | Number of bytes taken up by values
-`valcount` | Count of all values
diff --git a/src/current/_includes/v20.1/misc/available-capacity-metric.md b/src/current/_includes/v20.1/misc/available-capacity-metric.md
deleted file mode 100644
index 61dbcb9cbf2..00000000000
--- a/src/current/_includes/v20.1/misc/available-capacity-metric.md
+++ /dev/null
@@ -1 +0,0 @@
-If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is [not recommended in production](recommended-production-settings.html#topology)), you must explicitly [set the store size](cockroach-start.html#store) per node in order to display the correct capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity.
\ No newline at end of file
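-
-A minimal sketch of starting a local node with an explicit store size (the path, size, and addresses are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start --insecure --store=path=/tmp/node1,size=20GB --listen-addr=localhost:26257 --join=localhost:26257,localhost:26258,localhost:26259
-~~~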
diff --git a/src/current/_includes/v20.1/misc/aws-locations.md b/src/current/_includes/v20.1/misc/aws-locations.md
deleted file mode 100644
index 8b073c1f230..00000000000
--- a/src/current/_includes/v20.1/misc/aws-locations.md
+++ /dev/null
@@ -1,18 +0,0 @@
-| Location | SQL Statement |
-| ------ | ------ |
-| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`|
-| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -82.907123)` |
-| US West (N. California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` |
-| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` |
-| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` |
-| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` |
-| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` |
-| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` |
-| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` |
-| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` |
-| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` |
-| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` |
-| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` |
-| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` |
-| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` |
-| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` |
diff --git a/src/current/_includes/v20.1/misc/azure-locations.md b/src/current/_includes/v20.1/misc/azure-locations.md
deleted file mode 100644
index 7119ff8b7cb..00000000000
--- a/src/current/_includes/v20.1/misc/azure-locations.md
+++ /dev/null
@@ -1,30 +0,0 @@
-| Location | SQL Statement |
-| -------- | ------------- |
-| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` |
-| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` |
-| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` |
-| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` |
-| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` |
-| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` |
-| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` |
-| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` |
-| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` |
-| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` |
-| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` |
-| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` |
-| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` |
-| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` |
-| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` |
-| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` |
-| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` |
-| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` |
-| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` |
-| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` |
-| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` |
-| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` |
-| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` |
-| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` |
-| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` |
-| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` |
-| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` |
-| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` |
diff --git a/src/current/_includes/v20.1/misc/basic-terms.md b/src/current/_includes/v20.1/misc/basic-terms.md
deleted file mode 100644
index 0ee7fd5d6c5..00000000000
--- a/src/current/_includes/v20.1/misc/basic-terms.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Term | Definition
------|------------
-**Cluster** | Your CockroachDB deployment, which acts as a single logical application.
-**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster.
-**Range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range. From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as that range reaches 512 MiB in size, it splits into two ranges. This process continues for these new ranges as the table and its indexes continue growing.
-**Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node.
-**Leaseholder** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range. Unlike writes, read requests access the leaseholder and send the results to the client without needing to coordinate with any of the other range replicas. This reduces the network round trips involved and is possible because the leaseholder is guaranteed to be up-to-date due to the fact that all write requests also go to the leaseholder.
-**Raft Leader** | For each range, one of the replicas is the "leader" for write requests. Via the [Raft consensus protocol](replication-layer.html#raft), this replica ensures that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder.
-**Raft Log** | For each range, a time-ordered log of writes to the range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication.
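-
-For example, assuming a table named `users`, you can see its ranges and which replica holds each range lease:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW RANGES FROM TABLE users;
-~~~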
diff --git a/src/current/_includes/v20.1/misc/chrome-localhost.md b/src/current/_includes/v20.1/misc/chrome-localhost.md
deleted file mode 100644
index 24f9bb159a3..00000000000
--- a/src/current/_includes/v20.1/misc/chrome-localhost.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-If you are using Google Chrome, and you are getting an error about not being able to reach `localhost` because its certificate has been revoked, go to chrome://flags/#allow-insecure-localhost, enable "Allow invalid certificates for resources loaded from localhost", and then restart the browser. Enabling this Chrome feature degrades security for all sites running on `localhost`, not just CockroachDB's Admin UI, so be sure to enable the feature only temporarily.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/client-side-intervention-example.md b/src/current/_includes/v20.1/misc/client-side-intervention-example.md
deleted file mode 100644
index d0bbfc33695..00000000000
--- a/src/current/_includes/v20.1/misc/client-side-intervention-example.md
+++ /dev/null
@@ -1,28 +0,0 @@
-The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic](advanced-client-side-transaction-retries.html), so it can be used from any programming language or environment. In particular, your retry loop must:
-
-- Raise an error if the `max_retries` limit is reached
-- Retry on `40001` error codes
-- [`COMMIT`](commit-transaction.html) at the end of the `try` block
-- Implement [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) logic as shown below for best performance
-
-~~~ python
-n = 0
-while True:
-    n += 1
-    if n == max_retries:
-        raise Exception("did not succeed within max_retries retries")
-    try:
-        # add logic here to run all your statements
-        conn.exec('COMMIT')
-        break
-    except Exception as error:
-        if error.code != "40001":
-            raise error
-        else:
-            # This is a retry error, so we roll back the current transaction
-            # and sleep for a bit before retrying. The sleep time increases
-            # for each failed transaction. Adapted from
-            # https://colintemple.com/2017/03/java-exponential-backoff/
-            conn.exec('ROLLBACK')
-            sleep_ms = (2 ** n) * 100 + random.randint(1, 100)
-            sleep(sleep_ms)  # Assumes your sleep() takes milliseconds
-~~~
diff --git a/src/current/_includes/v20.1/misc/csv-import-callout.md b/src/current/_includes/v20.1/misc/csv-import-callout.md
deleted file mode 100644
index 60555c5d0b6..00000000000
--- a/src/current/_includes/v20.1/misc/csv-import-callout.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-The column order in your schema must match the column order in the file being imported.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/misc/customizing-the-savepoint-name.md b/src/current/_includes/v20.1/misc/customizing-the-savepoint-name.md
deleted file mode 100644
index ed895f906f3..00000000000
--- a/src/current/_includes/v20.1/misc/customizing-the-savepoint-name.md
+++ /dev/null
@@ -1,5 +0,0 @@
-Set the `force_savepoint_restart` [session variable](set-vars.html#supported-variables) to `true` to enable using a custom name for the [retry savepoint](advanced-client-side-transaction-retries.html#retry-savepoints).
-
-Once this variable is set, the [`SAVEPOINT`](savepoint.html) statement will accept any name for the retry savepoint, not just `cockroach_restart`. In addition, it causes every savepoint name to be equivalent to `cockroach_restart`, therefore disallowing the use of [nested transactions](transactions.html#nested-transactions).
-
-This feature exists to support applications that want to use the [advanced client-side transaction retry protocol](advanced-client-side-transaction-retries.html), but cannot customize the name of savepoints to be `cockroach_restart`. For example, this may be necessary because you are using an ORM that requires its own names for savepoints.
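-
-For example, a minimal sketch of the resulting flow, assuming a custom savepoint named `retry_point`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET force_savepoint_restart = true;
-> BEGIN;
-> SAVEPOINT retry_point;
--- run your transaction's statements, then COMMIT;
--- after a 40001 error, run ROLLBACK TO SAVEPOINT retry_point; and retry
-~~~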
diff --git a/src/current/_includes/v20.1/misc/debug-subcommands.md b/src/current/_includes/v20.1/misc/debug-subcommands.md
deleted file mode 100644
index 379047a6441..00000000000
--- a/src/current/_includes/v20.1/misc/debug-subcommands.md
+++ /dev/null
@@ -1,3 +0,0 @@
-While the `cockroach debug` command has a few subcommands, users are expected to use only the [`zip`](cockroach-debug-zip.html), [`encryption-active-key`](cockroach-debug-encryption-active-key.html), [`merge-logs`](cockroach-debug-merge-logs.html), and [`ballast`](cockroach-debug-ballast.html) subcommands.
-
-The other `debug` subcommands are useful only to CockroachDB's developers and contributors.
diff --git a/src/current/_includes/v20.1/misc/delete-statistics.md b/src/current/_includes/v20.1/misc/delete-statistics.md
deleted file mode 100644
index a850a1ed654..00000000000
--- a/src/current/_includes/v20.1/misc/delete-statistics.md
+++ /dev/null
@@ -1,17 +0,0 @@
-To delete statistics for all tables in all databases:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM system.table_statistics WHERE true;
-~~~
-
-To delete a named set of statistics (e.g., one named "users_stats"), run a query like the following:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM system.table_statistics WHERE name = 'users_stats';
-~~~
-
-After deleting statistics, restart the nodes in your cluster to clear the statistics caches.
-
-For more information about the `DELETE` statement, see [`DELETE`](delete.html).
diff --git a/src/current/_includes/v20.1/misc/diagnostics-callout.html b/src/current/_includes/v20.1/misc/diagnostics-callout.html
deleted file mode 100644
index a969a8cf152..00000000000
--- a/src/current/_includes/v20.1/misc/diagnostics-callout.html
+++ /dev/null
@@ -1 +0,0 @@
-{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt out of reporting, see <a href="diagnostics-reporting.html">Diagnostics Reporting</a>.{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/drivers.md b/src/current/_includes/v20.1/misc/drivers.md
deleted file mode 100644
index 680422e13ce..00000000000
--- a/src/current/_includes/v20.1/misc/drivers.md
+++ /dev/null
@@ -1,18 +0,0 @@
-{{site.data.alerts.callout_info}}
-Applications may encounter incompatibilities when using advanced or obscure features of a driver or ORM with **beta-level** support. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-{{site.data.alerts.end}}
-
-| App Language | Drivers | ORMs | Support level |
-|--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+------|
-| Python | [psycopg2](build-a-python-app-with-cockroachdb.html) | [SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html) [Django](build-a-python-app-with-cockroachdb-django.html) [peewee](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#cockroach-database) | Full |
-| Java | [JDBC](build-a-java-app-with-cockroachdb.html) | [Hibernate](build-a-java-app-with-cockroachdb-hibernate.html) [jOOQ](build-a-java-app-with-cockroachdb-jooq.html) | Full |
-| Go | [pgx](build-a-go-app-with-cockroachdb.html) [pq](build-a-go-app-with-cockroachdb-pq.html) | [GORM](build-a-go-app-with-cockroachdb-gorm.html) [upper/db](build-a-go-app-with-cockroachdb-upperdb.html) | Full |
-| Ruby | [pg](build-a-ruby-app-with-cockroachdb.html) | [ActiveRecord](build-a-ruby-app-with-cockroachdb-activerecord.html) | Full |
-| Node.js | [pg](build-a-nodejs-app-with-cockroachdb.html) | [Sequelize](build-a-nodejs-app-with-cockroachdb-sequelize.html) | Beta |
-| C | [libpq](http://www.postgresql.org/docs/9.5/static/libpq.html) | No ORMs tested | Beta |
-| C++ | [libpqxx](build-a-c++-app-with-cockroachdb.html) | No ORMs tested | Beta |
-| C# (.NET) | [Npgsql](build-a-csharp-app-with-cockroachdb.html) | No ORMs tested | Beta |
-| Clojure | [java.jdbc](build-a-clojure-app-with-cockroachdb.html) | No ORMs tested | Beta |
-| PHP | [php-pgsql](build-a-php-app-with-cockroachdb.html) | No ORMs tested | Beta |
-| Rust | postgres {% comment %} This link is in HTML instead of Markdown because HTML proofer dies bc of https://github.com/rust-lang/crates.io/issues/163 {% endcomment %} | No ORMs tested | Beta |
-| TypeScript | No drivers tested | [TypeORM](https://typeorm.io/#/) | Beta |
diff --git a/src/current/_includes/v20.1/misc/enterprise-features.md b/src/current/_includes/v20.1/misc/enterprise-features.md
deleted file mode 100644
index 704a3d32e34..00000000000
--- a/src/current/_includes/v20.1/misc/enterprise-features.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Feature | Description
---------+-------------------------
-[Geo-Partitioning](topology-geo-partitioned-replicas.html) | This feature gives you row-level control of how and where your data is stored to dramatically reduce read and write latencies and assist in meeting regulatory requirements in multi-region deployments.
-[Follower Reads](follower-reads.html) | This feature reduces read latency in multi-region deployments by using the closest replica at the expense of reading slightly historical data.
-[`BACKUP`](backup.html) | This feature creates full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp, stored on a service such as AWS S3, Google Cloud Storage, NFS, or HTTP storage.<br><br>Backups can be locality-aware such that each node writes files only to the backup destination that matches the node's [locality](cockroach-start.html#locality). This is useful for reducing cloud storage data transfer costs by keeping data within cloud regions and complying with data domiciling requirements.
-[`RESTORE`](restore.html) | This feature restores your cluster's schemas and data from an enterprise `BACKUP`.
-[Change Data Capture](change-data-capture.html) (CDC) | This feature provides efficient, distributed, row-level [change feeds into Apache Kafka](create-changefeed.html) for downstream processing such as reporting, caching, or full-text indexing.
-[Node Map](enable-node-map.html) | This feature visualizes the geographical configuration of a cluster by plotting node localities on a world map.
-[Locality-Aware Index Selection](cost-based-optimizer.html#preferring-the-nearest-index) | Given [multiple identical indexes](topology-duplicate-indexes.html) that have different locality constraints using [replication zones](configure-replication-zones.html), the cost-based optimizer will prefer the index that is closest to the gateway node that is planning the query. In multi-region deployments, this can lead to performance improvements due to improved data locality and reduced network traffic.
-[Encryption at Rest](encryption.html#encryption-at-rest-enterprise) | Supplementing CockroachDB's encryption in flight capabilities, this feature provides transparent encryption of a node's data on the local disk. It allows encryption of all files on disk using AES in counter mode, with all key sizes allowed.
-[GSSAPI with Kerberos Authentication](gssapi_authentication.html) | CockroachDB supports the Generic Security Services API (GSSAPI) with Kerberos authentication, which lets you use an external enterprise directory system that supports Kerberos, such as Active Directory.
-[`EXPORT`](export.html) | This feature uses the CockroachDB distributed execution engine to quickly get large sets of data out of CockroachDB in a CSV format that can be ingested by downstream systems.
diff --git a/src/current/_includes/v20.1/misc/experimental-warning.md b/src/current/_includes/v20.1/misc/experimental-warning.md
deleted file mode 100644
index d38a9755593..00000000000
--- a/src/current/_includes/v20.1/misc/experimental-warning.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-**This is an experimental feature**. The interface and output are subject to change.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/explore-benefits-see-also.md b/src/current/_includes/v20.1/misc/explore-benefits-see-also.md
deleted file mode 100644
index 6b1a3afed71..00000000000
--- a/src/current/_includes/v20.1/misc/explore-benefits-see-also.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- [Replication & Rebalancing](demo-replication-and-rebalancing.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Low Latency Multi-Region Deployment](demo-low-latency-multi-region-deployment.html)
-- [Serializable Transactions](demo-serializable.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
-- [JSON Support](demo-json-support.html)
diff --git a/src/current/_includes/v20.1/misc/external-urls.md b/src/current/_includes/v20.1/misc/external-urls.md
deleted file mode 100644
index 12789956364..00000000000
--- a/src/current/_includes/v20.1/misc/external-urls.md
+++ /dev/null
@@ -1,48 +0,0 @@
-~~~
-[scheme]://[host]/[path]?[parameters]
-~~~
-
-Location | Scheme | Host | Parameters |
-|-------------------------------------------------------------+-------------+--------------------------------------------------+----------------------------------------------------------------------------
-Amazon | `s3` | Bucket name | `AUTH` [1](#considerations) (optional; can be `implicit` or `specified`), `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`
-Azure | `azure` | N/A (see [Example file URLs](#example-file-urls)) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME`
-Google Cloud [2](#considerations) | `gs` | Bucket name | `AUTH` (optional; can be `default`, `implicit`, or `specified`), `CREDENTIALS`
-HTTP [3](#considerations) | `http` | Remote host | N/A
-NFS/Local [4](#considerations) | `nodelocal` | `nodeID` or `self` [5](#considerations) (see [Example file URLs](#example-file-urls)) | N/A
-S3-compatible services [6](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` [7](#considerations) (optional), `AWS_ENDPOINT`
-
-{{site.data.alerts.callout_info}}
-The location parameters often contain special characters that need to be URI-encoded. Use JavaScript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_info}}
-If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables when starting CockroachDB.
-
-New in v20.1: If you cannot run a full proxy, you can disable external HTTP(S) access (as well as custom HTTP(S) endpoints) when performing bulk operations (e.g., `BACKUP`, `RESTORE`, etc.) by using the [`--external-io-disable-http` flag](cockroach-start.html#security). You can also disable the use of implicit credentials when accessing external cloud storage services for various bulk operations by using the [`--external-io-disable-implicit-credentials` flag](cockroach-start.html#security).
-{{site.data.alerts.end}}
-
-
-
-- 1 If the `AUTH` parameter is not provided, AWS connections default to `specified` and the access keys must be provided in the URI parameters. If the `AUTH` parameter is `implicit`, the access keys can be omitted and [the credentials will be loaded from the environment](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/).
-
-- 2 If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) will be used if it is non-empty, otherwise the `implicit` behavior is used. If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is `specified`, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the `CREDENTIALS` parameter. The JSON key object should be base64-encoded (using the standard encoding in [RFC 4648](https://tools.ietf.org/html/rfc4648)).
-
-- 3 You can create your own HTTP server with [Caddy or nginx](create-a-file-server.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs.
-
-- 4 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](cockroach-start.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled.
-
-- 5 New in v20.1: Using a `nodeID` is required and the data files will be in the `extern` directory of the specified node. In most cases (including single-node clusters), using `nodelocal://1/` is sufficient. Use `self` if you do not want to specify a `nodeID`, and the individual data files will be in the `extern` directories of arbitrary nodes; however, to work correctly, each node must have the [`--external-io-dir` flag](cockroach-start.html#general) point to the same NFS mount or other network-backed, shared storage.
-
-- 6 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service.
-
-- 7 The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it.
-
-#### Example file URLs
-
-Location | Example
--------------+----------------------------------------------------------------------------------
-Amazon S3 | `s3://acme-co/employees?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456`
-Azure | `azure://employees?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co`
-Google Cloud | `gs://acme-co`
-HTTP | `http://localhost:8080/employees`
-NFS/Local | `nodelocal://1/path/employees`, `nodelocal://self/nfsmount/backups/employees` [5](#considerations)
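-
-For instance, a minimal sketch of a [`BACKUP`](backup.html) that uses one of these URLs (the bucket name and keys below are placeholders, and the secret key must be URI-encoded):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP TO 's3://acme-co-backup/2020?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456';
-~~~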
diff --git a/src/current/_includes/v20.1/misc/force-index-selection.md b/src/current/_includes/v20.1/misc/force-index-selection.md
deleted file mode 100644
index 0c3fa835a7b..00000000000
--- a/src/current/_includes/v20.1/misc/force-index-selection.md
+++ /dev/null
@@ -1,61 +0,0 @@
-By using the explicit index annotation, you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) when reading from a named table.
-
-{{site.data.alerts.callout_info}}
-Index selection can impact [performance](performance-best-practices-overview.html), but does not change the result of a query.
-{{site.data.alerts.end}}
-
-The syntax to force a scan of a specific index is:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM table@my_idx;
-~~~
-
-This is equivalent to the longer expression:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM table@{FORCE_INDEX=my_idx};
-~~~
-
-The syntax to force a **reverse scan** of a specific index is:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM table@{FORCE_INDEX=my_idx,DESC};
-~~~
-
-Forcing a reverse scan is sometimes useful during [performance tuning](performance-best-practices-overview.html). For reference, the full syntax for choosing an index and its scan direction is
-
-{% include copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table@{FORCE_INDEX=idx[,DIRECTION]}
-~~~
-
-where the optional `DIRECTION` is either `ASC` (ascending) or `DESC` (descending).
-
-When a direction is specified, that scan direction is forced; otherwise the [cost-based optimizer](cost-based-optimizer.html) is free to choose the direction it calculates will result in the best performance.
-
-You can verify that the optimizer is choosing your desired scan direction using [`EXPLAIN (OPT)`](explain.html#opt-option). For example, given the table
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE kv (k INT PRIMARY KEY, v INT);
-~~~
-
-you can check the scan direction with:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN (opt) SELECT * FROM kv@{FORCE_INDEX=primary,DESC};
-~~~
-
-~~~
-                text
-+-------------------------------------+
-  scan kv,rev
-   └── flags: force-index=primary,rev
-(2 rows)
-~~~
-
-To see all indexes available on a table, use [`SHOW INDEXES`](show-index.html).
diff --git a/src/current/_includes/v20.1/misc/gce-locations.md b/src/current/_includes/v20.1/misc/gce-locations.md
deleted file mode 100644
index 22122aae78d..00000000000
--- a/src/current/_includes/v20.1/misc/gce-locations.md
+++ /dev/null
@@ -1,18 +0,0 @@
-| Location | SQL Statement |
-| ------ | ------ |
-| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` |
-| us-east4 (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` |
-| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` |
-| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` |
-| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` |
-| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` |
-| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` |
-| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` |
-| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` |
-| europe-west6 (Zürich) | `INSERT into system.locations VALUES ('region', 'europe-west6', 47.3769, 8.5417)` |
-| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` |
-| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` |
-| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` |
-| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` |
-| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` |
-| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` |
diff --git a/src/current/_includes/v20.1/misc/haproxy.md b/src/current/_includes/v20.1/misc/haproxy.md
deleted file mode 100644
index 375af8e937d..00000000000
--- a/src/current/_includes/v20.1/misc/haproxy.md
+++ /dev/null
@@ -1,39 +0,0 @@
-By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly:
-
- ~~~
- global
- maxconn 4096
-
- defaults
- mode tcp
- # Timeout values should be configured for your specific use.
- # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
- timeout connect 10s
- timeout client 1m
- timeout server 1m
- # TCP keep-alive on client side. Server already enables them.
- option clitcpka
-
- listen psql
- bind :26257
- mode tcp
- balance roundrobin
- option httpchk GET /health?ready=1
-        server cockroach1 <node1 address>:26257 check port 8080
-        server cockroach2 <node2 address>:26257 check port 8080
-        server cockroach3 <node3 address>:26257 check port 8080
- ~~~
-
- The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster:
-
- Field | Description
- ------|------------
- `timeout connect` `timeout client` `timeout server` | Timeout values that should be suitable for most deployments.
- `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.<br><br>This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node.
- `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms.
- `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests.
- `server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the [`--advertise-addr` flag](cockroach-start.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy.
-
- {{site.data.alerts.callout_info}}
- For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html).
- {{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/import-perf.md b/src/current/_includes/v20.1/misc/import-perf.md
deleted file mode 100644
index b0520a9c392..00000000000
--- a/src/current/_includes/v20.1/misc/import-perf.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_success}}
-For best practices for optimizing import performance in CockroachDB, see [Import Performance Best Practices](import-performance-best-practices.html).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/install-next-steps.html b/src/current/_includes/v20.1/misc/install-next-steps.html
deleted file mode 100644
index 1b00fba3410..00000000000
--- a/src/current/_includes/v20.1/misc/install-next-steps.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
-If you're just getting started with CockroachDB:
-
-The CockroachDB binary for Linux requires glibc, libncurses, and tzdata, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.
diff --git a/src/current/_includes/v20.1/misc/logging-flags.md b/src/current/_includes/v20.1/misc/logging-flags.md
deleted file mode 100644
index 67233c50834..00000000000
--- a/src/current/_includes/v20.1/misc/logging-flags.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Flag | Description
------|------------
-`--log-dir` | Enable logging to files and write logs to the specified directory.<br><br>Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory.
-`--log-dir-max-size` | After the log directory reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-dir-max-size=1GiB`.<br><br>**Default:** 100MiB
-`--log-file-max-size` | After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.<br><br>**Default:** 10MiB
-`--log-file-verbosity` | Only writes messages to log files if they are at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.<br><br>**Default:** `INFO`
-`--logtostderr` | Enable logging to `stderr` for messages at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--logtostderr=ERROR`.<br><br>If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.<br><br>Setting `--logtostderr=NONE` disables logging to `stderr`.
-`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.<br><br>When set to `false`, messages logged to `stderr` are colorized based on [severity level](debug-and-error-logs.html#severity-levels).<br><br>**Default:** `false`
-`--sql-audit-dir` | If non-empty, create a SQL audit log in this directory. By default, SQL audit logs are written in the same directory as the other logs generated by CockroachDB.<br><br>Note that enabling SQL audit logs can negatively impact performance. As a result, we recommend using SQL audit logs for security purposes only. For more information, see the [`EXPERIMENTAL_AUDIT`](experimental-audit.html) reference page.
diff --git a/src/current/_includes/v20.1/misc/mitigate-contention-note.md b/src/current/_includes/v20.1/misc/mitigate-contention-note.md
deleted file mode 100644
index ffe3cff554a..00000000000
--- a/src/current/_includes/v20.1/misc/mitigate-contention-note.md
+++ /dev/null
@@ -1,5 +0,0 @@
-{{site.data.alerts.callout_info}}
-It's possible to mitigate read-write contention and reduce transaction retries using the following techniques:
-1. By performing reads using [`AS OF SYSTEM TIME`](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).
-2. By using [`SELECT FOR UPDATE`](select-for-update.html) to order transactions by controlling concurrent access to one or more rows of a table. This reduces retries in scenarios where a transaction performs a read and then updates the same row it just read.
-{{site.data.alerts.end}}
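-
-For example, minimal sketches of both techniques, assuming an `orders` table (the table and column names are placeholders):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM orders AS OF SYSTEM TIME '-10s' WHERE customer_id = 100;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM orders WHERE id = 1 FOR UPDATE;
-~~~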
diff --git a/src/current/_includes/v20.1/misc/movr-flask-211.md b/src/current/_includes/v20.1/misc/movr-flask-211.md
deleted file mode 100644
index 0dbe6c6550e..00000000000
--- a/src/current/_includes/v20.1/misc/movr-flask-211.md
+++ /dev/null
@@ -1,7 +0,0 @@
-{{site.data.alerts.callout_info}}
-CockroachDB versions v21.1 and above support [new multi-region capabilities, with different SQL syntax](../v21.1/multiregion-overview.html).
-
-For the latest version of the application and database schema built on v21.1 multi-region features, see the [`movr-flask` repository](https://github.com/cockroachlabs/movr-flask).
-
-For the latest version of the tutorial, see the [v21.1 docs](../v21.1/movr-flask-overview.html).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/movr-live-demo.md b/src/current/_includes/v20.1/misc/movr-live-demo.md
deleted file mode 100644
index bcf4667692f..00000000000
--- a/src/current/_includes/v20.1/misc/movr-live-demo.md
+++ /dev/null
@@ -1,5 +0,0 @@
-{{site.data.alerts.callout_success}}
-For a live demo of the deployed MovR Flask application, see [https://movr.cloud](https://movr.cloud).
-
-Note that the backend for the live demo uses a newer version of the application, built on the [multi-region syntax introduced in v21.1](../v21.1/multiregion-overview.html). This newer application is also deployed using a simplified, serverless workflow. For more details, see the [`movr-flask` repository's README](https://github.com/cockroachlabs/movr-flask).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/movr-schema.md b/src/current/_includes/v20.1/misc/movr-schema.md
deleted file mode 100644
index a8f57a2acae..00000000000
--- a/src/current/_includes/v20.1/misc/movr-schema.md
+++ /dev/null
@@ -1,12 +0,0 @@
-The six tables in the `movr` database store user, vehicle, and ride data for MovR:
-
-Table | Description
---------|----------------------------
-`users` | People registered for the service.
-`vehicles` | The pool of vehicles available for the service.
-`rides` | When and where users have rented a vehicle.
-`promo_codes` | Promotional codes for users.
-`user_promo_codes` | Promotional codes in use by users.
-`vehicle_location_histories` | Vehicle location history.
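-
-For example, assuming the `movr` database has been loaded, you can list these tables with:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW TABLES FROM movr;
-~~~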
-
-
diff --git a/src/current/_includes/v20.1/misc/movr-workflow.md b/src/current/_includes/v20.1/misc/movr-workflow.md
deleted file mode 100644
index 1127ebd2a62..00000000000
--- a/src/current/_includes/v20.1/misc/movr-workflow.md
+++ /dev/null
@@ -1,76 +0,0 @@
-The workflow for MovR is as follows:
-
-1. A user loads the app and sees the 25 closest vehicles.
-
- For example:
-
- {% include copy-clipboard.html %}
- ~~~ sql
-    > SELECT id, city, status FROM vehicles WHERE city='amsterdam' LIMIT 25;
- ~~~
-
-2. The user signs up for the service.
-
- For example:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO users (id, name, address, city, credit_card)
- VALUES ('66666666-6666-4400-8000-00000000000f', 'Mariah Lam', '88194 Angela Gardens Suite 60', 'amsterdam', '123245696');
- ~~~
-
-    {{site.data.alerts.callout_info}}Normally you would generate UUIDs automatically, but for the sake of this walkthrough we use predetermined UUIDs so that they are easier to track across the examples.{{site.data.alerts.end}}
-
-3. In some cases, the user adds their own vehicle to share.
-
- For example:
-
- {% include copy-clipboard.html %}
- ~~~ sql
-    > INSERT INTO vehicles (id, city, type, owner_id, creation_time, status, current_location, ext)
- VALUES ('ffffffff-ffff-4400-8000-00000000000f', 'amsterdam', 'skateboard', '66666666-6666-4400-8000-00000000000f', current_timestamp(), 'available', '88194 Angela Gardens Suite 60', '{"color": "blue"}');
- ~~~
-4. More often, the user reserves a vehicle and starts a ride, applying a promo code, if available and valid.
-
- For example:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT code FROM user_promo_codes WHERE user_id ='66666666-6666-4400-8000-00000000000f';
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > UPDATE vehicles SET status = 'in_use' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b';
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
-    > INSERT INTO rides (id, city, vehicle_city, rider_id, vehicle_id, start_address, end_address, start_time, end_time, revenue)
- VALUES ('cd032f56-cf1a-4800-8000-00000000066f', 'amsterdam', 'amsterdam', '66666666-6666-4400-8000-00000000000f', 'bbbbbbbb-bbbb-4800-8000-00000000000b', '70458 Mary Crest', '', TIMESTAMP '2020-10-01 10:00:00.123456', NULL, 0.0);
- ~~~
-
-5. During the ride, MovR tracks the location of the vehicle.
-
- For example:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO vehicle_location_histories (city, ride_id, timestamp, lat, long)
- VALUES ('amsterdam', 'cd032f56-cf1a-4800-8000-00000000066f', current_timestamp(), -101, 60);
- ~~~
-
-6. The user ends the ride and releases the vehicle.
-
- For example:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > UPDATE vehicles SET status = 'available' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b';
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > UPDATE rides SET end_address ='33862 Charles Junctions Apt. 49', end_time=TIMESTAMP '2020-10-01 10:30:00.123456', revenue=88.6
- WHERE id='cd032f56-cf1a-4800-8000-00000000066f';
- ~~~
diff --git a/src/current/_includes/v20.1/misc/multi-store-nodes.md b/src/current/_includes/v20.1/misc/multi-store-nodes.md
deleted file mode 100644
index 01642597169..00000000000
--- a/src/current/_includes/v20.1/misc/multi-store-nodes.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-In the absence of [special replication constraints](configure-replication-zones.html), CockroachDB rebalances replicas to take advantage of available storage capacity. However, in a 3-node cluster with multiple stores per node, CockroachDB is **not** able to rebalance replicas from one store to another store on the same node because this would temporarily result in the node having multiple replicas of the same range, which is not allowed. This is due to the mechanics of rebalancing, where the cluster first creates a copy of the replica at the target destination before removing the source replica. To allow this type of cross-store rebalancing, the cluster must have 4 or more nodes; this allows the cluster to create a copy of the replica on a node that doesn't already have a replica of the range before removing the source replica and then migrating the new replica to the store with more capacity on the original node.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/schema-change-stmt-note.md b/src/current/_includes/v20.1/misc/schema-change-stmt-note.md
deleted file mode 100644
index b522b658652..00000000000
--- a/src/current/_includes/v20.1/misc/schema-change-stmt-note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-This statement performs a schema change. For more information about how online schema changes work in CockroachDB, see [Online Schema Changes](online-schema-changes.html).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/misc/schema-change-view-job.md b/src/current/_includes/v20.1/misc/schema-change-view-job.md
deleted file mode 100644
index 8861174d621..00000000000
--- a/src/current/_includes/v20.1/misc/schema-change-view-job.md
+++ /dev/null
@@ -1 +0,0 @@
-This schema change statement is registered as a job. You can view long-running jobs with [`SHOW JOBS`](show-jobs.html).
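-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW JOBS;
-~~~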
diff --git a/src/current/_includes/v20.1/misc/session-vars.html b/src/current/_includes/v20.1/misc/session-vars.html
deleted file mode 100644
index 1c7532d314d..00000000000
--- a/src/current/_includes/v20.1/misc/session-vars.html
+++ /dev/null
@@ -1,585 +0,0 @@
-Variable name | Description | Initial value | Can be modified with `SET`? | Can be viewed with `SHOW`?
---------------|-------------|---------------|-----------------------------|---------------------------
-`default_transaction_read_only` | The default transaction access mode for the current session. If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See `SET TRANSACTION` for more details. | `off` | Yes | Yes
-`distsql` | The query distribution mode for the session. By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node. | `auto` | Yes | Yes
-`enable_implicit_select_for_update` | New in v20.1: Indicates whether `UPDATE` statements acquire locks using the `FOR UPDATE` locking mode during their initial row scan, which improves performance for contended workloads. For more information about how `FOR UPDATE` locking works, see the documentation for `SELECT FOR UPDATE`. | `on` | Yes | Yes
-`enable_insert_fast_path` | Indicates whether CockroachDB will use a specialized execution operator for inserting into a table. We recommend leaving this setting `on`. | `on` | Yes | Yes
-`enable_zigzag_join` | Indicates whether the cost-based optimizer will plan certain queries using a zig-zag merge join algorithm, which searches for the desired intersection by jumping back and forth between the indexes based on the fact that after constraining indexes, they share an ordering. | `on` | Yes | Yes
-`extra_float_digits` | The number of digits displayed for floating-point values. Only values between `-15` and `3` are supported. | `0` | Yes | Yes
-`reorder_joins_limit` | Maximum number of joins that the optimizer will attempt to reorder when searching for an optimal query execution plan. For more information, see Join reordering. | `4` | Yes | Yes
-`force_savepoint_restart` | When set to `true`, allows the `SAVEPOINT` statement to accept any name for a savepoint. | `off` | Yes | Yes
-`locality` | The location of the node. For more information, see Locality. | Node-dependent | No | Yes
-`node_id` | The ID of the node currently connected to. This variable is particularly useful for verifying load balanced connections. | Node-dependent | No | Yes
-`optimizer_foreign_keys` | New in v20.1: If `off`, disables optimizer-driven foreign key checks. | `on` | Yes | Yes
-`results_buffer_size` | The default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. This can also be set for all connections using the `sql.defaults.results_buffer_size` cluster setting. Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retriable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Setting to `0` disables any buffering. | `16384` | Yes | Yes
-`require_explicit_primary_keys` | New in v20.1: If `on`, CockroachDB throws an error for all tables created without an explicit primary key defined. | `off` | Yes | Yes
-`search_path` | A list of schemas that will be searched to resolve unqualified table or function names. For more details, see SQL name resolution. | `public` | Yes | Yes
-`server_version` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | No | Yes
-`server_version_num` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | Yes | Yes
-`session_id` | The ID of the current session. | Session-dependent | No | Yes
-`session_user` | The user connected for the current session. | User in connection string | No | Yes
-`sql_safe_updates` | If `false`, potentially unsafe SQL statements are allowed, including `DROP` of a non-empty database and all dependent objects, `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE ... DROP COLUMN`. See Allow Potentially Unsafe SQL Statements for more details. | `true` for interactive sessions from the built-in SQL client,<br>`false` for sessions from other clients | Yes | Yes
-`statement_timeout` | The amount of time a statement can run before being stopped. This value can be an `int` (e.g., `10`) and will be interpreted as milliseconds. It can also be an interval or string argument, where the string can be parsed as a valid interval (e.g., `'4s'`). A value of `0` turns it off. | `0s` | Yes | Yes
-`timezone` | The default time zone for the current session. This session variable was named `"time zone"` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `UTC` | Yes | Yes
-`tracing` | The trace recording state. | `off` | | Yes
-`transaction_isolation` | All transactions execute with `SERIALIZABLE` isolation. See Transactions: Isolation levels. This session variable was called `transaction isolation level` (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `SERIALIZABLE` | No | Yes
-`transaction_priority` | The priority of the current transaction. See Transactions: Transaction priorities for more details. This session variable was called `transaction priority` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NORMAL` | Yes | Yes
-`transaction_read_only` | The access mode of the current transaction. See `SET TRANSACTION` for more details. | `off` | Yes | Yes
-`transaction_status` | The state of the current transaction. See Transactions for more details. This session variable was called `transaction status` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | | No | Yes
-`vectorize_row_count_threshold` | The minimum number of rows required to use the vectorized engine to execute a query plan. | `1000` | Yes | Yes
-`client_encoding` | (Reserved; exposed only for ORM compatibility.) | `UTF8` | No | Yes
-`client_min_messages` | (Reserved; exposed only for ORM compatibility.) | `notice` | No | Yes
-`datestyle` | (Reserved; exposed only for ORM compatibility.) | `ISO` | No | Yes
-`default_tablespace` | (Reserved; exposed only for ORM compatibility.) | | No | Yes
-`idle_in_transaction_session_timeout` | (Reserved; exposed only for ORM compatibility.) | `0` | No | Yes
-`integer_datetimes` | (Reserved; exposed only for ORM compatibility.) | `on` | No | Yes
-`intervalstyle` | (Reserved; exposed only for ORM compatibility.) | `postgres` | No | Yes
-`lock_timeout` | (Reserved; exposed only for ORM compatibility.) | `0` | No | Yes
-`max_identifier_length` | (Reserved; exposed only for ORM compatibility.) | `128` | No | Yes
-`max_index_keys` | (Reserved; exposed only for ORM compatibility.) | `32` | No | Yes
-`row_security` | (Reserved; exposed only for ORM compatibility.) | `off` | No | Yes
-`standard_conforming_strings` | (Reserved; exposed only for ORM compatibility.) | `on` | No | Yes
-`server_encoding` | (Reserved; exposed only for ORM compatibility.) | `UTF8` | Yes | Yes
-`synchronize_seqscans` | (Reserved; exposed only for ORM compatibility.) | `on` | No | Yes
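-
-For example, to adjust and then inspect one of the modifiable variables above:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET extra_float_digits = 3;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW extra_float_digits;
-~~~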
diff --git a/src/current/_includes/v20.1/misc/sorting-delete-output.md b/src/current/_includes/v20.1/misc/sorting-delete-output.md
deleted file mode 100644
index fa0d6e54be7..00000000000
--- a/src/current/_includes/v20.1/misc/sorting-delete-output.md
+++ /dev/null
@@ -1,9 +0,0 @@
-To sort the output of a `DELETE` statement, use:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (DELETE ... RETURNING ...)
- SELECT ... FROM a ORDER BY ...
-~~~
-
-For an example, see [Sort and return deleted rows](delete.html#sort-and-return-deleted-rows).
diff --git a/src/current/_includes/v20.1/misc/tooling.md b/src/current/_includes/v20.1/misc/tooling.md
deleted file mode 100644
index 9c45c3d13c5..00000000000
--- a/src/current/_includes/v20.1/misc/tooling.md
+++ /dev/null
@@ -1,73 +0,0 @@
-## Support levels
-
-We’ve partnered with open-source projects, vendors, and individuals to offer the following levels of support with third-party tools.
-
-- **Full support** indicates that the vast majority of the tool's features should work without issue with CockroachDB. CockroachDB is regularly tested against the recommended version documented here.
-- **Partial support** indicates that the tool has been tried with CockroachDB, but its integration might require additional steps, lack support for all features, or exhibit unexpected behavior.
-
-If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward better support.
-
-## Drivers
-
-| Language | Driver | Recommended version | Support level |
-|----------+--------+---------------------+---------------|
-| C | [libpq](http://www.postgresql.org/docs/9.5/static/libpq.html) | [PostgreSQL 9.5](http://www.postgresql.org/docs/9.5/static/libpq.html) | Beta |
-| C++ | [libpqxx](build-a-c++-app-with-cockroachdb.html) | [7.1.1](https://github.com/jtv/libpqxx/releases) (Windows) [4.0.1](https://github.com/jtv/libpqxx/releases) or higher (macOS) | Beta |
-| C# (.NET) | [Npgsql](build-a-csharp-app-with-cockroachdb.html) | [4.1.3.1](https://www.nuget.org/packages/Npgsql/) | Beta |
-| Clojure | [java.jdbc](build-a-clojure-app-with-cockroachdb.html) | [0.7.11](https://search.maven.org/search?q=g:org.clojure%20AND%20a:java.jdbc) | Beta |
-| Go | [pgx](build-a-go-app-with-cockroachdb.html)<br>[pq](build-a-go-app-with-cockroachdb-pq.html) | [4.6.0](https://github.com/jackc/pgx/releases)<br>[1.5.2](https://github.com/lib/pq/releases) | Full<br>Full |
-| Java | [JDBC](build-a-java-app-with-cockroachdb.html) | [42.2.9](https://jdbc.postgresql.org/download/) | Full |
-| Node.js | [pg](build-a-nodejs-app-with-cockroachdb.html) | [8.2.1](https://www.npmjs.com/package/pg) | Beta |
-| PHP | [php-pgsql](build-a-php-app-with-cockroachdb.html) | [PHP 7.4.6](https://www.php.net/downloads) | Beta |
-| Python | [psycopg2](build-a-python-app-with-cockroachdb.html) | [2.8.6](https://www.psycopg.org/docs/install.html) | Full |
-| Ruby | [pg](build-a-ruby-app-with-cockroachdb.html) | [1.2.3](https://rubygems.org/gems/pg) | Full |
-| Rust | postgres {% comment %} This link is in HTML instead of Markdown because HTML proofer dies bc of https://github.com/rust-lang/crates.io/issues/163 {% endcomment %} | 0.17.3 | Beta |
-
-## Data access frameworks (e.g., ORMs)
-
-| Language | ORM | Recommended version | Support level |
-|----------+-----+---------------------+---------------|
-| Go | [GORM](build-a-go-app-with-cockroachdb-gorm.html)<br>[upper/db](build-a-go-app-with-cockroachdb-upperdb.html) | [1.9.11](https://github.com/jinzhu/gorm/releases)<br>[v4](https://github.com/upper/db/releases) | Full<br>Full |
-| Java | [Hibernate](build-a-java-app-with-cockroachdb-hibernate.html)<br>[jOOQ](build-a-java-app-with-cockroachdb-jooq.html)<br>[MyBatis](build-a-spring-app-with-cockroachdb-mybatis.html) | [5.4.19](https://hibernate.org/orm/releases/)<br>[3.13.2](https://www.jooq.org/download/versions) (must be 3.13.0 or higher)<br>[3.5.5 and higher](https://mybatis.org/mybatis-3/) | Full<br>Full<br>Full |
-| Node.js | [Sequelize](build-a-nodejs-app-with-cockroachdb-sequelize.html) | [sequelize 5.21.9](https://www.npmjs.com/package/sequelize)<br>[sequelize-cockroachdb 1.1.0](https://www.npmjs.com/package/sequelize-cockroachdb) | Beta |
-| Ruby | [ActiveRecord](build-a-ruby-app-with-cockroachdb-activerecord.html) | [activerecord 5.2](https://rubygems.org/gems/activerecord/versions) [activerecord-cockroachdb-adapter 5.2.2](https://rubygems.org/gems/activerecord-cockroachdb-adapter/versions)<br>[activerecord 6.0](https://rubygems.org/gems/activerecord/versions) [activerecord-cockroachdb-adapter 6.0.0beta3](https://rubygems.org/gems/activerecord-cockroachdb-adapter/versions) | Full<br>Full |
-| Typescript | [TypeORM](https://typeorm.io/#/) | [0.2.24](https://www.npmjs.com/package/typeorm) | Full |
-
-## Application frameworks
-
-| Framework | Data access | Recommended version | Support level |
-|-----------+-------------+---------------------+---------------|
-| Spring | [JDBC](build-a-spring-app-with-cockroachdb-jdbc.html)<br>[JPA (Hibernate)](build-a-spring-app-with-cockroachdb-jpa.html)<br>jOOQ<br>[MyBatis](build-a-spring-app-with-cockroachdb-mybatis.html) | See individual Java ORM or [driver](#drivers) for data access version support. | See individual Java ORM or [driver](#drivers) for data access support level. |
-
-## Graphical user interfaces (GUIs)
-
-| GUI | Recommended version | Support level |
-|-----+---------------------+---------------|
-| [Beekeeper Studio](https://beekeeperstudio.io) | [1.6.10 or higher](https://www.beekeeperstudio.io/get) | Full |
-| [DBeaver](https://dbeaver.com/) | [5.2.3 or higher](https://dbeaver.com/download/) | Full |
-| [DbVisualizer](https://www.dbvis.com/) | [10.0.22 or higher](https://www.dbvis.com/download/) | Beta |
-| [Navicat for PostgreSQL](https://www.navicat.com/en/products/navicat-for-postgresql)/[Navicat Premium](https://www.navicat.com/en/products/navicat-premium) | [12.1.25 or higher](https://www.navicat.com/en/download/navicat-for-postgresql) | Beta |
-| [Pgweb](http://sosedoff.github.io/pgweb/) | [0.9.12 or higher](https://github.com/sosedoff/pgweb/releases/latest) | Beta |
-| [Postico](https://eggerapps.at/postico/) | 1.5.8 or higher | Beta |
-| [TablePlus](https://tableplus.com/) | [Build 222 or higher](https://tableplus.com/download) | Beta |
-| [Vault](https://www.vaultproject.io/docs/configuration/storage/cockroachdb) | 1.3.9 or higher | Beta |
-
-## Integrated development environments (IDEs)
-
-| IDE | Recommended version | Support level |
-|-----+---------------------+---------------|
-| [DataGrip](https://www.jetbrains.com/datagrip/) | [2021.1 or higher](https://www.jetbrains.com/datagrip/download) | Full |
-| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | [2021.1 or higher](https://www.jetbrains.com/idea/download) | Beta |
-
-## Schema migration tools
-
-| Tool | Recommended version | Support level |
-|-----+---------------------+---------------|
-| [Flyway](flyway.html) | [6.4.2](https://flywaydb.org/documentation/commandline/#download-and-installation) or higher | Full |
-
-## Other tools
-
-| Tool | Recommended version | Support level |
-|-----+---------------------+---------------|
-| [Flowable](https://blog.flowable.org/2019/07/11/getting-started-with-flowable-and-cockroachdb/) | [6.4.2](https://github.com/flowable/flowable-engine/releases/tag/flowable-6.4.2) or higher | Full |
diff --git a/src/current/_includes/v20.1/orchestration/kubernetes-expand-disk-size.md b/src/current/_includes/v20.1/orchestration/kubernetes-expand-disk-size.md
deleted file mode 100644
index 5f5f77b4962..00000000000
--- a/src/current/_includes/v20.1/orchestration/kubernetes-expand-disk-size.md
+++ /dev/null
@@ -1,184 +0,0 @@
-You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. Increasing disk size is often beneficial for CockroachDB performance. Read our [Kubernetes performance guide](kubernetes-performance.html#disk-size) for guidance on disks.
-
-1. Get the persistent volume claims for the volumes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc
- ~~~
-
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- ~~~
-
-
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- ~~~
-
-
-2. In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe storageclass standard
- ~~~
-
- ~~~
- Name: standard
- IsDefaultClass: Yes
- Annotations: storageclass.kubernetes.io/is-default-class=true
- Provisioner: kubernetes.io/gce-pd
- Parameters: type=pd-standard
- AllowVolumeExpansion: False
- MountOptions:
- ReclaimPolicy: Delete
- VolumeBindingMode: Immediate
- Events:
- ~~~
-
- If necessary, edit the storage class:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
- ~~~
-
- ~~~
- storageclass.storage.k8s.io/standard patched
- ~~~
-
-3. Edit one of the persistent volume claims to request more space:
-
- {{site.data.alerts.callout_info}}
- The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size.
- {{site.data.alerts.end}}
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
- ~~~
-
- ~~~
- persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
- ~~~
-
- ~~~
- persistentvolumeclaim/datadir-cockroachdb-0 patched
- ~~~
-
-
-4. Check the capacity of the persistent volume claim:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc datadir-my-release-cockroachdb-0
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc datadir-cockroachdb-0
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m
- ~~~
-
-
- If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false`, or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. In the latter case, you will need to start or restart a pod for it to reflect the new capacity.
-
- {{site.data.alerts.callout_success}}
- Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`.
- {{site.data.alerts.end}}
-
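- To confirm the capacity that a pod actually sees after it restarts, you can check the mounted file system directly. This is a quick sketch; it assumes the data volume is mounted at `/cockroach/cockroach-data` (the default in our configuration files) and uses the manual pod name, so substitute `my-release-cockroachdb-0` if you deployed with Helm:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Report the size of the file system backing the data directory (mount path is an assumption).
- $ kubectl exec -it cockroachdb-0 -- df -h /cockroach/cockroach-data
- ~~~
-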
-5. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe pvc datadir-my-release-cockroachdb-0
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe pvc datadir-cockroachdb-0
- ~~~
-
-
- ~~~
- Waiting for user to (re-)start a pod to finish file system resize of volume on node.
- ~~~
-
-6. Delete the corresponding pod to restart it:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod my-release-cockroachdb-0
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-0
- ~~~
-
-
- The `FileSystemResizePending` condition and message will be removed.
-
-7. View the updated persistent volume claim:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc datadir-my-release-cockroachdb-0
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc datadir-cockroachdb-0
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m
- ~~~
-
-
-8. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3-6 to increase the capacities of the remaining volumes by the same amount.
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/orchestration/kubernetes-limitations.md b/src/current/_includes/v20.1/orchestration/kubernetes-limitations.md
deleted file mode 100644
index 8dfbfaa50d9..00000000000
--- a/src/current/_includes/v20.1/orchestration/kubernetes-limitations.md
+++ /dev/null
@@ -1,15 +0,0 @@
-#### Kubernetes version
-
-To deploy CockroachDB {{page.version.version}}, Kubernetes 1.18 or higher is required. Cockroach Labs strongly recommends that you use a Kubernetes version that is [eligible for patch support by the Kubernetes project](https://kubernetes.io/releases/).
-
-#### Helm version
-
-Helm 3.0 or higher is required when using our instructions to [deploy via Helm](orchestrate-cockroachdb-with-kubernetes.html#step-2-start-cockroachdb).
-
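-As a quick check before you begin, you can print both versions; this sketch assumes `kubectl` is already configured against the cluster you intend to use:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# The Kubernetes server version must be 1.18+; the Helm client must be 3.0+.
-$ kubectl version
-$ helm version
-~~~
-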
-#### Resources
-
-When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB** of memory, and provision at least **2 vCPUs** and **8 GiB** of memory to CockroachDB per pod. These minimum settings are used by default in this deployment guide, and are appropriate for testing purposes only. On a production deployment, you should adjust the resource settings for your workload.
-
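-For orientation, those numbers map to a container resource stanza along these lines; treat this as a sketch to adapt rather than part of the default configuration:
-
-~~~ yaml
-# Sketch of per-pod resource settings for the CockroachDB container.
-resources:
-  requests:
-    cpu: "2"
-    memory: "8Gi"
-  limits:
-    cpu: "2"
-    memory: "8Gi"
-~~~
-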
-#### Storage
-
-At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local).
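-
-If you do experiment with local volumes, note that they require a storage class with no dynamic provisioner and delayed volume binding; a minimal sketch:
-
-~~~ yaml
-# Minimal StorageClass for Kubernetes local volumes.
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: local-storage
-provisioner: kubernetes.io/no-provisioner
-volumeBindingMode: WaitForFirstConsumer
-~~~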
diff --git a/src/current/_includes/v20.1/orchestration/kubernetes-prometheus-alertmanager.md b/src/current/_includes/v20.1/orchestration/kubernetes-prometheus-alertmanager.md
deleted file mode 100644
index 52c80a8181c..00000000000
--- a/src/current/_includes/v20.1/orchestration/kubernetes-prometheus-alertmanager.md
+++ /dev/null
@@ -1,243 +0,0 @@
-Despite CockroachDB's various [built-in safeguards against failure](frequently-asked-questions.html#how-does-cockroachdb-survive-failures), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
-
-### Configure Prometheus
-
-Every node of a CockroachDB cluster exports granular timeseries metrics formatted for easy integration with [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying timeseries data. This section shows you how to orchestrate Prometheus as part of your Kubernetes cluster and pull these metrics into Prometheus for external monitoring.
-
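-To see the raw export yourself, you can port-forward to any CockroachDB pod and request the `/_status/vars` endpoint. This sketch assumes the manual configuration's pod naming and an insecure cluster (use `https` on a secure one):
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Forward the HTTP port, then fetch the Prometheus-format metrics.
-$ kubectl port-forward cockroachdb-0 8080 &
-$ curl http://localhost:8080/_status/vars
-~~~
-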
-This guidance is based on [CoreOS's Prometheus Operator](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md), which allows a Prometheus instance to be managed using built-in Kubernetes concepts.
-
-{{site.data.alerts.callout_info}}
-If you're on Hosted GKE, before starting, make sure the email address associated with your Google Cloud account is part of the `cluster-admin` RBAC group, as shown in [Step 1. Start Kubernetes](#hosted-gke).
-{{site.data.alerts.end}}
-
-1. From your local workstation, edit the `cockroachdb` service to add the `prometheus: cockroachdb` label:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl label svc cockroachdb prometheus=cockroachdb
- ~~~
-
- ~~~
- service/cockroachdb labeled
- ~~~
-
- This ensures that only the `cockroachdb` service (not the `cockroachdb-public` service) is being monitored by a Prometheus job.
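-
- To double-check that the label was applied, you can list services filtered by it; only the labeled service should be returned:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get svc -l prometheus=cockroachdb
- ~~~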
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl label svc my-release-cockroachdb prometheus=cockroachdb
- ~~~
-
- ~~~
- service/my-release-cockroachdb labeled
- ~~~
-
- This ensures that there is a Prometheus job and monitoring data only for the `my-release-cockroachdb` service, not for the `my-release-cockroachdb-public` service.
-
-
-2. Install [CoreOS's Prometheus Operator](https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.43/bundle.yaml):
-
- {{site.data.alerts.callout_info}}
- If you run into an error when installing the Prometheus Operator, first try updating the [release version](https://github.com/prometheus-operator/prometheus-operator/blob/master/RELEASE.md) specified in the command below and reapplying the manifest. If this doesn't work, please [file an issue](file-an-issue.html).
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply \
- -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.43/bundle.yaml
- ~~~
-
- ~~~
- customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
- customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
- customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
- customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
- customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
- customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
- customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
- clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator configured
- clusterrole.rbac.authorization.k8s.io/prometheus-operator configured
- deployment.apps/prometheus-operator created
- serviceaccount/prometheus-operator configured
- service/prometheus-operator created
- ~~~
-
-3. Confirm that the `prometheus-operator` has started:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get deploy prometheus-operator
- ~~~
-
- ~~~
- NAME READY UP-TO-DATE AVAILABLE AGE
- prometheus-operator 1/1 1 1 27s
- ~~~
-
-4. Use our [`prometheus.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/prometheus.yaml) file to create the various objects necessary to run a Prometheus instance:
-
- {{site.data.alerts.callout_info}}
- By default, this manifest uses the secret name generated by the CockroachDB Kubernetes Operator. If you generated your own certificates and keys when starting CockroachDB, be sure that `ca.secret.name` matches the name of the node secret you created.
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml
- ~~~
-
- ~~~
- serviceaccount/prometheus created
- clusterrole.rbac.authorization.k8s.io/prometheus created
- clusterrolebinding.rbac.authorization.k8s.io/prometheus created
- servicemonitor.monitoring.coreos.com/cockroachdb created
- prometheus.monitoring.coreos.com/cockroachdb created
- ~~~
-
-5. Access the Prometheus UI locally and verify that CockroachDB is feeding data into Prometheus:
-
- 1. Port-forward from your local machine to the pod running Prometheus:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward prometheus-cockroachdb-0 9090
- ~~~
-
- 2. Go to http://localhost:9090 in your browser.
-
- 3. To verify that each CockroachDB node is connected to Prometheus, go to **Status > Targets**. The screen should look like this:
-
-
-
- 4. To verify that data is being collected, go to **Graph**, enter the `sys_uptime` variable in the field, click **Execute**, and then click the **Graph** tab. The screen should look like this:
-
-
-
- {{site.data.alerts.callout_success}}
- Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, port-forward as described in [Access the Admin UI](#step-4-access-the-admin-ui) and then point your browser to http://localhost:8080/_status/vars.
-
- For more details on using the Prometheus UI, see their [official documentation](https://prometheus.io/docs/introduction/getting_started/).
- {{site.data.alerts.end}}
-
-### Configure Alertmanager
-
-Active monitoring helps you spot problems early, but it is also essential to send notifications when there are events that require investigation or intervention. This section shows you how to use [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) and CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml) to do this.
-
-1. Download our `alertmanager-config.yaml` configuration file:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O \
- https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager-config.yaml
- ~~~
-
-2. Edit the `alertmanager-config.yaml` file to [specify the desired receivers for notifications](https://prometheus.io/docs/alerting/configuration/#receiver). Initially, the file contains a placeholder web hook.
-
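- For example, a Slack receiver might look like the following. The webhook URL and channel here are placeholders to replace with your own; see the Alertmanager documentation for the full set of receiver types:
-
- ~~~ yaml
- # Hypothetical Slack receiver; replace api_url and channel with your own values.
- route:
-   receiver: 'slack-notifications'
- receivers:
-   - name: 'slack-notifications'
-     slack_configs:
-       - api_url: 'https://hooks.slack.com/services/XXX/YYY/ZZZ'
-         channel: '#alerts'
- ~~~
-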
-3. Add this configuration to the Kubernetes cluster as a secret, renaming it to `alertmanager.yaml` and labeling it to make it easier to find:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret generic alertmanager-cockroachdb \
- --from-file=alertmanager.yaml=alertmanager-config.yaml
- ~~~
-
- ~~~
- secret/alertmanager-cockroachdb created
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl label secret alertmanager-cockroachdb app=cockroachdb
- ~~~
-
- ~~~
- secret/alertmanager-cockroachdb labeled
- ~~~
-
- {{site.data.alerts.callout_danger}}
- The name of the secret, `alertmanager-cockroachdb`, must match the name used in the `alertmanager.yaml` file. If they differ, the Alertmanager instance will start without configuration, and nothing will happen.
- {{site.data.alerts.end}}
-
-4. Use our [`alertmanager.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alertmanager.yaml) file to create the various objects necessary to run an Alertmanager instance, including a ClusterIP service so that Prometheus can forward alerts:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager.yaml
- ~~~
-
- ~~~
- alertmanager.monitoring.coreos.com/cockroachdb created
- service/alertmanager-cockroachdb created
- ~~~
-
-5. Verify that Alertmanager is running:
-
- 1. Port-forward from your local machine to the pod running Alertmanager:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward alertmanager-cockroachdb-0 9093
- ~~~
-
- 2. Go to http://localhost:9093 in your browser. The screen should look like this:
-
-
-
-6. Ensure that the Alertmanagers are visible to Prometheus by opening http://localhost:9090/status. The screen should look like this:
-
-
-
-7. Add CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alert-rules.yaml
- ~~~
-
- ~~~
- prometheusrule.monitoring.coreos.com/prometheus-cockroachdb-rules created
- ~~~
-
-8. Ensure that the rules are visible to Prometheus by opening http://localhost:9090/rules. The screen should look like this:
-
-
-
-9. Verify that the `TestAlertManager` example alert is firing by opening http://localhost:9090/alerts. The screen should look like this:
-
-
-
-10. To remove the example alert:
-
- 1. Use the `kubectl edit` command to open the rules for editing:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl edit prometheusrules prometheus-cockroachdb-rules
- ~~~
-
- 2. Remove the `dummy.rules` block and save the file:
-
- ~~~
- - name: rules/dummy.rules
- rules:
- - alert: TestAlertManager
- expr: vector(1)
- ~~~
diff --git a/src/current/_includes/v20.1/orchestration/kubernetes-remove-nodes-insecure.md b/src/current/_includes/v20.1/orchestration/kubernetes-remove-nodes-insecure.md
deleted file mode 100644
index c983a4c85ab..00000000000
--- a/src/current/_includes/v20.1/orchestration/kubernetes-remove-nodes-insecure.md
+++ /dev/null
@@ -1,130 +0,0 @@
-To safely remove a node from your cluster, you must first decommission the node and only then adjust the `spec.replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html).
-{{site.data.alerts.end}}
-
-1. Launch a temporary interactive pod and use the `cockroach node status` command to get the internal IDs of nodes:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- node status \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- node status \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
-2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](cockroach-node.html) command to decommission it:
-
- {{site.data.alerts.callout_info}}
- It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node.
- {{site.data.alerts.end}}
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- node decommission \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- node decommission \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
-
- You'll then see the decommissioning status print to `stderr` as it changes:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 73 | true | false
- (1 row)
- ~~~
-
- Once the node has been fully decommissioned and stopped, you'll see a confirmation:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 0 | true | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ~~~
-
-3. Once the node has been decommissioned, remove a pod from your StatefulSet:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset "cockroachdb" scaled
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.replicas=3 \
- --reuse-values
- ~~~
-
diff --git a/src/current/_includes/v20.1/orchestration/kubernetes-remove-nodes-secure.md b/src/current/_includes/v20.1/orchestration/kubernetes-remove-nodes-secure.md
deleted file mode 100644
index 313b54493b4..00000000000
--- a/src/current/_includes/v20.1/orchestration/kubernetes-remove-nodes-secure.md
+++ /dev/null
@@ -1,119 +0,0 @@
-To safely remove a node from your cluster, you must first decommission the node and only then adjust the `spec.replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html).
-{{site.data.alerts.end}}
-
-1. Get a shell into the `cockroachdb-client-secure` pod you created earlier and use the `cockroach node status` command to get the internal IDs of nodes:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node status \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node status \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
-
-2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](cockroach-node.html) command to decommission it:
-
- {{site.data.alerts.callout_info}}
- It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node.
- {{site.data.alerts.end}}
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node decommission \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node decommission \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
-
- You'll then see the decommissioning status print to `stderr` as it changes:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 73 | true | false
- (1 row)
- ~~~
-
- Once the node has been fully decommissioned and stopped, you'll see a confirmation:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 0 | true | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ~~~
-
-3. Once the node has been decommissioned, remove a pod from your StatefulSet:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb scaled
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.replicas=3 \
- --reuse-values
- ~~~
-
diff --git a/src/current/_includes/v20.1/orchestration/kubernetes-scale-cluster.md b/src/current/_includes/v20.1/orchestration/kubernetes-scale-cluster.md
deleted file mode 100644
index 8819b5151ce..00000000000
--- a/src/current/_includes/v20.1/orchestration/kubernetes-scale-cluster.md
+++ /dev/null
@@ -1,156 +0,0 @@
-Your Kubernetes cluster includes 3 worker nodes, or instances, that can run pods. A CockroachDB node runs in each pod. As recommended in our [production best practices](recommended-production-settings.html#topology), you should ensure that two pods are not placed on the same worker node.
-
-
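-As a reference for how that constraint is typically expressed, the StatefulSet can carry a pod anti-affinity stanza along these lines (a sketch for illustration, not an exact copy of our manifests):
-
-~~~ yaml
-# Sketch: prefer not to schedule two CockroachDB pods on the same worker node.
-affinity:
-  podAntiAffinity:
-    preferredDuringSchedulingIgnoredDuringExecution:
-    - weight: 100
-      podAffinityTerm:
-        labelSelector:
-          matchExpressions:
-          - key: app
-            operator: In
-            values:
-            - cockroachdb
-        topologyKey: kubernetes.io/hostname
-~~~
-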
-1. On a production deployment, first add a worker node, bringing the total from 3 to 4:
- - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
- - On EKS, resize your [Worker Node Group](https://eksctl.io/usage/managing-nodegroups/#scaling).
- - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
- - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html).
-
-1. Edit your StatefulSet configuration to add another pod for the new CockroachDB node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=4
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb scaled
- ~~~
-
-1. Verify that the new pod started successfully:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 51m
- cockroachdb-1 1/1 Running 0 47m
- cockroachdb-2 1/1 Running 0 3m
- cockroachdb-3 1/1 Running 0 1m
- cockroachdb-client-secure 1/1 Running 0 15m
- ...
- ~~~
-
-1. Back in the Admin UI, view the **Node List** to ensure that the fourth node successfully joined the cluster.
-
-
-
-1. Edit your StatefulSet configuration to add another pod for the new CockroachDB node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.replicas=4 \
- --reuse-values
- ~~~
-
- ~~~
- Release "my-release" has been upgraded. Happy Helming!
- LAST DEPLOYED: Tue May 14 14:06:43 2019
- NAMESPACE: default
- STATUS: DEPLOYED
-
- RESOURCES:
- ==> v1beta1/PodDisruptionBudget
- NAME AGE
- my-release-cockroachdb-budget 51m
-
- ==> v1/Pod(related)
-
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 38m
- my-release-cockroachdb-1 1/1 Running 0 39m
- my-release-cockroachdb-2 1/1 Running 0 39m
- my-release-cockroachdb-3 0/1 Pending 0 0s
- my-release-cockroachdb-init-nwjkh 0/1 Completed 0 39m
-
- ...
- ~~~
-
-1. Get the name of the `Pending` CSR for the new pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.client.root 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-3 2m system:serviceaccount:default:default Pending
- node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued
- node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued
- node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued
- ...
- ~~~
-
- If you do not see a `Pending` CSR, wait a minute and try again.
-
-1. Examine the CSR for the new pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe csr default.node.my-release-cockroachdb-3
- ~~~
-
- ~~~
- Name: default.node.my-release-cockroachdb-3
- Labels:
- Annotations:
- CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500
- Requesting User: system:serviceaccount:default:default
- Status: Pending
- Subject:
- Common Name: node
- Serial Number:
- Organization: Cockroach
- Subject Alternative Names:
- DNS Names: localhost
- my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local
- my-release-cockroachdb-3.my-release-cockroachdb
- my-release-cockroachdb-public
- my-release-cockroachdb-public.default.svc.cluster.local
- IP Addresses: 127.0.0.1
- 10.48.1.6
- Events:
- ~~~
-
-1. If everything looks correct, approve the CSR for the new pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.my-release-cockroachdb-3
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-3 approved
- ~~~
-
-1. Verify that the new pod started successfully:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 51m
- my-release-cockroachdb-1 1/1 Running 0 47m
- my-release-cockroachdb-2 1/1 Running 0 3m
- my-release-cockroachdb-3 1/1 Running 0 1m
- cockroachdb-client-secure 1/1 Running 0 15m
- ...
- ~~~
-
-1. Back in the Admin UI, view the **Node List** to ensure that the fourth node successfully joined the cluster.
-
diff --git a/src/current/_includes/v20.1/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v20.1/orchestration/kubernetes-simulate-failure.md
deleted file mode 100644
index d5f3e52884f..00000000000
--- a/src/current/_includes/v20.1/orchestration/kubernetes-simulate-failure.md
+++ /dev/null
@@ -1,56 +0,0 @@
-Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage.
-
-To see this in action:
-
-1. Terminate one of the CockroachDB nodes:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-2
- ~~~
-
- ~~~
- pod "cockroachdb-2" deleted
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod my-release-cockroachdb-2
- ~~~
-
- ~~~
- pod "my-release-cockroachdb-2" deleted
- ~~~
-
-
-
-2. In the Admin UI, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy.
-
-3. Back in the terminal, verify that the pod was automatically restarted:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-2 1/1 Running 0 12s
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod my-release-cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-2 1/1 Running 0 44s
- ~~~
-
diff --git a/src/current/_includes/v20.1/orchestration/kubernetes-upgrade-cluster.md b/src/current/_includes/v20.1/orchestration/kubernetes-upgrade-cluster.md
deleted file mode 100644
index 59dfa3e2c8f..00000000000
--- a/src/current/_includes/v20.1/orchestration/kubernetes-upgrade-cluster.md
+++ /dev/null
@@ -1,383 +0,0 @@
-It is strongly recommended that you regularly upgrade your CockroachDB version in order to pick up bug fixes, performance improvements, and new features. The [CockroachDB upgrade documentation](upgrade-cockroach-version.html) describes how to perform a "rolling upgrade" of a CockroachDB cluster by stopping and restarting nodes one at a time. This is to ensure that the cluster remains available during the upgrade.
-
-The corresponding process on Kubernetes is a [staged update](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update), in which the Docker image is updated in the CockroachDB StatefulSet and then applied to the pods one at a time.
-
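-For orientation, the partition mechanism used below lives in the StatefulSet's `updateStrategy` stanza, which has roughly this shape:
-
-~~~ yaml
-# RollingUpdate with a partition: only pods with an ordinal >= the partition value are updated.
-updateStrategy:
-  type: RollingUpdate
-  rollingUpdate:
-    partition: 2
-~~~
-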
-1. Decide how the upgrade will be finalized.
-
- {{site.data.alerts.callout_info}}
- This step is relevant only when upgrading from v19.2.x to v20.1. For upgrades within the v20.1.x series, skip this step.
- {{site.data.alerts.end}}
-
- By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain performance improvements and bug fixes introduced in v20.1. After finalization, however, it will no longer be possible to perform a downgrade to v19.2. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade.
-
- We recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade:
-
- {% if page.secure == true %}
-
- 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
-
- {% endif %}
-
- 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING cluster.preserve_downgrade_option = '19.2';
- ~~~
-
- 1. Exit the SQL shell and delete the temporary pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-1. Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. For a cluster with 3 pods (e.g., `cockroachdb-0`, `cockroachdb-1`, `cockroachdb-2`) the partition value should be 2:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.updateStrategy.rollingUpdate.partition=2
- ~~~
-
-
-1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- --type='json' \
- -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:{{page.release_info.version}}"}]'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-
-
-
- {{site.data.alerts.callout_info}}
- For Helm, before the cluster version can be changed, you must delete the cluster initialization job that was created when the cluster was started.
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete job my-release-cockroachdb-init
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set image.tag={{page.release_info.version}} \
- --reuse-values
- ~~~
-
-
-1. Check the status of your cluster's pods. You should see one of them being restarted:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 2m
- cockroachdb-1 1/1 Running 0 2m
- cockroachdb-2 0/1 Terminating 0 1m
- ...
- ~~~
-
-
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 2m
- my-release-cockroachdb-1 1/1 Running 0 3m
- my-release-cockroachdb-2 0/1 ContainerCreating 0 25s
- my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s
- ...
- ~~~
-
- {{site.data.alerts.callout_info}}
- Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster.
- {{site.data.alerts.end}}
-
-
-1. After the pod has been restarted with the new image, get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% if page.secure == true %}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
-
- {% else %}
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
- {% endif %}
-
-1. Run the following SQL query to verify that the number of underreplicated ranges is zero:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status;
- ~~~
-
- ~~~
- ranges_underreplicated
- --------------------------
- 0
- (1 row)
- ~~~
-
- This indicates that it is safe to proceed to the next pod.
-
-1. Exit the SQL shell:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-1. Decrement the partition value by 1 to allow the next pod in the cluster to update:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.updateStrategy.rollingUpdate.partition=1 \
- ~~~
-
-
-1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`).
-
-1. Check the image of each pod to confirm that all have been upgraded:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods \
- -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
- ~~~
-
-
- ~~~
- cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
- ...
- ~~~
-
-
-
- ~~~
- my-release-cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
- my-release-cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
- my-release-cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
- ...
- ~~~
-
-
- You can also check the CockroachDB version of each node in the [Admin UI](admin-ui-cluster-overview-page.html#node-details).
-
-
-1. Finish the upgrade.
-
- {{site.data.alerts.callout_info}}
- This step is relevant only when upgrading from v19.2.x to v20.1. For upgrades within the v20.1.x series, skip this step.
- {{site.data.alerts.end}}
-
- If you disabled auto-finalization in step 1 above, monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.
-
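- Rolling back follows the same staged pattern as the upgrade, only with the previous image. As a hypothetical sketch (the tag here is a placeholder for the exact v19.2.x version you were running):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Hypothetical example: point the StatefulSet back at the old image tag.
- $ kubectl patch statefulset cockroachdb \
- --type='json' \
- -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:v19.2.x"}]'
- ~~~
-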
- Once you are satisfied with the new version, re-enable auto-finalization:
-
- {% if page.secure == true %}
-
- 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
-
- {% endif %}
-
- 2. Re-enable auto-finalization:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
- ~~~
-
- 3. Exit the SQL shell and delete the temporary pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v20.1/orchestration/local-start-kubernetes.md b/src/current/_includes/v20.1/orchestration/local-start-kubernetes.md
deleted file mode 100644
index 081c2274c0f..00000000000
--- a/src/current/_includes/v20.1/orchestration/local-start-kubernetes.md
+++ /dev/null
@@ -1,24 +0,0 @@
-## Before you begin
-
-Before getting started, it's helpful to review some Kubernetes-specific terminology:
-
-Feature | Description
---------|------------
-[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
-[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
-[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
-[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart. When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
-[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.
-
-## Step 1. Start Kubernetes
-
-1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation.
-
- {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailable field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}}
-
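- To confirm which version you have installed (assuming `minikube` is already on your `PATH`):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ minikube version
- ~~~
-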
-2. Start a local Kubernetes cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ minikube start
- ~~~
diff --git a/src/current/_includes/v20.1/orchestration/monitor-cluster.md b/src/current/_includes/v20.1/orchestration/monitor-cluster.md
deleted file mode 100644
index 60ff410378f..00000000000
--- a/src/current/_includes/v20.1/orchestration/monitor-cluster.md
+++ /dev/null
@@ -1,69 +0,0 @@
-To access the cluster's [Admin UI](admin-ui-overview.html):
-
-{% if page.secure == true %}
-
-1. On secure clusters, [certain pages of the Admin UI](admin-ui-overview.html#admin-ui-access) can only be accessed by `admin` users.
-
- Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
-1. Assign `roach` to the `admin` role (you only need to do this once):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > GRANT admin TO roach;
- ~~~
-
-1. Exit the SQL shell and pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-{% endif %}
-
-1. In a new terminal window, port-forward from your local machine to the `cockroachdb-public` service:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward service/cockroachdb-public 8080
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward service/my-release-cockroachdb-public 8080
- ~~~
-
-
- ~~~
- Forwarding from 127.0.0.1:8080 -> 8080
- ~~~
-
- {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the Admin UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}}
-
-{% if page.secure == true %}
-
-1. Go to https://localhost:8080 and log in with the username and password you created earlier.
-
- {% include {{ page.version.version }}/misc/chrome-localhost.md %}
-
-{% else %}
-
-1. Go to http://localhost:8080.
-
-{% endif %}
-
-1. In the UI, verify that the cluster is running as expected:
- - Click **View nodes list** on the right to ensure that all nodes successfully joined the cluster.
- - Click the **Databases** tab on the left to verify that `bank` is listed.
diff --git a/src/current/_includes/v20.1/orchestration/start-cockroachdb-helm-insecure.md b/src/current/_includes/v20.1/orchestration/start-cockroachdb-helm-insecure.md
deleted file mode 100644
index fea5a620b2a..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-cockroachdb-helm-insecure.md
+++ /dev/null
@@ -1,94 +0,0 @@
-1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm repo add cockroachdb https://charts.cockroachdb.com/
- ~~~
-
- ~~~
- "cockroachdb" has been added to your repositories
- ~~~
-
-2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm repo update
- ~~~
-
-3. Modify our Helm chart's [`values.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml) parameters for your deployment scenario.
-
- Create a `my-values.yaml` file to override the defaults in `values.yaml`, substituting your own values in this example based on the guidelines below.
-
- {% include copy-clipboard.html %}
- ~~~
- statefulset:
- resources:
- limits:
- memory: "8Gi"
- requests:
- memory: "8Gi"
- conf:
- cache: "2Gi"
- max-sql-memory: "2Gi"
- ~~~
-
- 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`.
-
- {{site.data.alerts.callout_success}}
- For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`.
- {{site.data.alerts.end}}
-
- 2. You may want to modify `storage.persistentVolume.size` and `storage.persistentVolume.storageClass` for your use case. This chart defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type).
-
- {{site.data.alerts.callout_info}}
- If necessary, you can [expand disk size](orchestrate-cockroachdb-with-kubernetes.html#expand-disk-size) after the cluster is live.
- {{site.data.alerts.end}}
-
-4. Install the CockroachDB Helm chart.
-
- Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`.
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb
- ~~~
-
- Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
-
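- If you want to confirm that the overrides from `my-values.yaml` were applied, you can ask Helm for the release's user-supplied values:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm get values my-release
- ~~~
-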
-5. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 8m
- my-release-cockroachdb-1 1/1 Running 0 8m
- my-release-cockroachdb-2 1/1 Running 0 8m
- my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
- ~~~
-
-6. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
- pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
- pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/orchestration/start-cockroachdb-helm-secure.md b/src/current/_includes/v20.1/orchestration/start-cockroachdb-helm-secure.md
deleted file mode 100644
index 1ac3b16c795..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-cockroachdb-helm-secure.md
+++ /dev/null
@@ -1,185 +0,0 @@
-1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm repo add cockroachdb https://charts.cockroachdb.com/
- ~~~
-
- ~~~
- "cockroachdb" has been added to your repositories
- ~~~
-
-2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm repo update
- ~~~
-
-3. Modify our Helm chart's [`values.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml) parameters for your deployment scenario.
-
- Create a `my-values.yaml` file to override the defaults in `values.yaml`, substituting your own values in this example based on the guidelines below.
-
- {% include copy-clipboard.html %}
- ~~~
- statefulset:
- resources:
- limits:
- memory: "8Gi"
- requests:
- memory: "8Gi"
- conf:
- cache: "2Gi"
- max-sql-memory: "2Gi"
- tls:
- enabled: true
- ~~~
-
- 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`.
-
- {{site.data.alerts.callout_success}}
- For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`.
- {{site.data.alerts.end}}
-
- 2. You may want to modify `storage.persistentVolume.size` and `storage.persistentVolume.storageClass` for your use case. This chart defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type).
-
- {{site.data.alerts.callout_info}}
- If necessary, you can [expand disk size](orchestrate-cockroachdb-with-kubernetes.html#expand-disk-size) after the cluster is live.
- {{site.data.alerts.end}}
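-
-    For example, you could extend `my-values.yaml` along these lines (the keys follow the chart's `values.yaml`; the 200Gi size and the `ssd` class name are illustrative placeholders, not recommendations):
-
-    {% include copy-clipboard.html %}
-    ~~~
-    storage:
-      persistentVolume:
-        size: "200Gi"
-        storageClass: "ssd"
-    ~~~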
-
- 3. For a secure deployment, set `tls.enabled` to true.
-
-4. Install the CockroachDB Helm chart.
-
- Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`.
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb
- ~~~
-
-    Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has a distinguishable network identity and always binds back to the same persistent storage on restart.
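-
-    As an optional check (not part of the original steps), you can confirm that the release was created before moving on (exact output columns vary by Helm version):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ helm list
-    ~~~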
-
-5. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the CockroachDB node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificate, at which point the CockroachDB node is started in the pod.
-
- 1. Get the names of the `Pending` CSRs:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.client.root 21s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-0 15s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-1 16s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-2 15s system:serviceaccount:default:my-release-cockroachdb Pending
- ...
- ~~~
-
- If you do not see a `Pending` CSR, wait a minute and try again.
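-
-       To avoid polling manually, you can optionally watch for CSRs as they appear:
-
-       {% include copy-clipboard.html %}
-       ~~~ shell
-       $ kubectl get csr --watch
-       ~~~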
-
- 2. Examine the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe csr default.node.my-release-cockroachdb-0
- ~~~
-
-       ~~~
-       Name:               default.node.my-release-cockroachdb-0
-       Labels:             <none>
-       Annotations:        <none>
-       CreationTimestamp:  Mon, 10 Dec 2018 05:36:35 -0500
-       Requesting User:    system:serviceaccount:default:my-release-cockroachdb
-       Status:             Pending
-       Subject:
-         Common Name:    node
-         Serial Number:
-         Organization:   Cockroach
-       Subject Alternative Names:
-                DNS Names:     localhost
-                               my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local
-                               my-release-cockroachdb-0.my-release-cockroachdb
-                               my-release-cockroachdb-public
-                               my-release-cockroachdb-public.default.svc.cluster.local
-                IP Addresses:  127.0.0.1
-       Events:  <none>
-       ~~~
-
- 3. If everything looks correct, approve the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.my-release-cockroachdb-0
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-0 approved
- ~~~
-
- 4. Repeat steps 2 and 3 for the other 2 pods.
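-
-       If you have already examined the remaining CSRs, note that `kubectl certificate approve` accepts multiple names, so an optional shortcut is:
-
-       {% include copy-clipboard.html %}
-       ~~~ shell
-       $ kubectl certificate approve default.node.my-release-cockroachdb-1 default.node.my-release-cockroachdb-2
-       ~~~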
-
-6. Confirm that three pods are `Running` successfully:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 0/1 Running 0 6m
- my-release-cockroachdb-1 0/1 Running 0 6m
- my-release-cockroachdb-2 0/1 Running 0 6m
- my-release-cockroachdb-init-hxzsc 0/1 Init:0/1 0 6m
- ~~~
-
-7. Approve the CSR for the one-off pod from which cluster initialization happens:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.client.root
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.client.root approved
- ~~~
-
-8. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `Completed` under `STATUS`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 8m
- my-release-cockroachdb-1 1/1 Running 0 8m
- my-release-cockroachdb-2 1/1 Running 0 8m
- my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
- ~~~
-
-9. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
-    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
-    pvc-71019b3a-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-0   standard                11m
-    pvc-7108e172-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-1   standard                11m
-    pvc-710dcb66-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-2   standard                11m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/orchestration/start-cockroachdb-insecure.md b/src/current/_includes/v20.1/orchestration/start-cockroachdb-insecure.md
deleted file mode 100644
index 8ae114b06ec..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-cockroachdb-insecure.md
+++ /dev/null
@@ -1,127 +0,0 @@
-1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it.
-
- Download [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
- ~~~
-
- {{site.data.alerts.callout_danger}}
- To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes instance, you *must* set `resources.requests.memory` and `resources.limits.memory` to explicit values in the CockroachDB `containers` spec. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes.
-
- For example, to allocate 8Gi of memory to CockroachDB in each pod:
-
- ~~~
- containers:
- - name: cockroachdb
- ...
- resources:
- requests:
- memory: "8Gi"
- limits:
- memory: "8Gi"
- ~~~
- {{site.data.alerts.end}}
-
- Use the file to create the StatefulSet and start the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
- Alternatively, if you'd rather start with a configuration file that has been customized for performance:
-
- 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml
- ~~~
-
- 2. Modify the file wherever there is a `TODO` comment.
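-
-       One quick way to find every placeholder is to search the file for those comments (an optional aid, not part of the original steps):
-
-       {% include copy-clipboard.html %}
-       ~~~ shell
-       $ grep -n TODO cockroachdb-statefulset-insecure.yaml
-       ~~~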
-
- 3. Use the file to create the StatefulSet and start the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset-insecure.yaml
- ~~~
-
-2. Confirm that three pods are `Running` successfully. Note that they will not
- be considered `Ready` until after the cluster has been initialized:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
-3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get persistentvolumes
- ~~~
-
- ~~~
-    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                           REASON   AGE
-    pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound    default/datadir-cockroachdb-0            26s
-    pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound    default/datadir-cockroachdb-1            27s
-    pvc-5315efda-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound    default/datadir-cockroachdb-2            27s
- ~~~
-
-4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
- ~~~
-
- ~~~
- job.batch/cluster-init created
- ~~~
-
-5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init
- ~~~
-
- ~~~
- NAME COMPLETIONS DURATION AGE
- cluster-init 1/1 7s 27s
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cluster-init-cqf8l 0/1 Completed 0 56s
- cockroachdb-0 1/1 Running 0 7m51s
- cockroachdb-1 1/1 Running 0 7m51s
- cockroachdb-2 1/1 Running 0 7m51s
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-helm-insecure.md b/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-helm-insecure.md
deleted file mode 100644
index f6a459c432d..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-helm-insecure.md
+++ /dev/null
@@ -1,65 +0,0 @@
-1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm repo add cockroachdb https://charts.cockroachdb.com/
- ~~~
-
- ~~~
- "cockroachdb" has been added to your repositories
- ~~~
-
-2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm repo update
- ~~~
-
-3. Install the CockroachDB Helm chart.
-
- Provide a "release" name to identify and track this particular deployment of the chart.
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm install my-release cockroachdb/cockroachdb
- ~~~
-
-    Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has a distinguishable network identity and always binds back to the same persistent storage on restart.
-
-4. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `Completed` under `STATUS`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 8m
- my-release-cockroachdb-1 1/1 Running 0 8m
- my-release-cockroachdb-2 1/1 Running 0 8m
- my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
- ~~~
-
-5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
-    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
-    pvc-71019b3a-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-0   standard                11m
-    pvc-7108e172-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-1   standard                11m
-    pvc-710dcb66-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-2   standard                11m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-helm-secure.md b/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-helm-secure.md
deleted file mode 100644
index 0c7450b5532..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-helm-secure.md
+++ /dev/null
@@ -1,162 +0,0 @@
-1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm repo add cockroachdb https://charts.cockroachdb.com/
- ~~~
-
- ~~~
- "cockroachdb" has been added to your repositories
- ~~~
-
-2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm repo update
- ~~~
-
-3. Modify our Helm chart's [`values.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml) parameters for your deployment scenario.
-
- Create a `my-values.yaml` file to override the defaults. For a secure deployment, set `tls.enabled` to true:
-
- {% include copy-clipboard.html %}
- ~~~
- tls:
- enabled: true
- ~~~
-
-4. Install the CockroachDB Helm chart.
-
- Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`.
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb
- ~~~
-
-    Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has a distinguishable network identity and always binds back to the same persistent storage on restart.
-
-5. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the CockroachDB node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificate, at which point the CockroachDB node is started in the pod.
-
- 1. Get the names of the `Pending` CSRs:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.client.root 21s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-0 15s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-1 16s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-2 15s system:serviceaccount:default:my-release-cockroachdb Pending
- ...
- ~~~
-
- If you do not see a `Pending` CSR, wait a minute and try again.
-
- 2. Examine the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe csr default.node.my-release-cockroachdb-0
- ~~~
-
-       ~~~
-       Name:               default.node.my-release-cockroachdb-0
-       Labels:             <none>
-       Annotations:        <none>
-       CreationTimestamp:  Mon, 10 Dec 2018 05:36:35 -0500
-       Requesting User:    system:serviceaccount:default:my-release-cockroachdb
-       Status:             Pending
-       Subject:
-         Common Name:    node
-         Serial Number:
-         Organization:   Cockroach
-       Subject Alternative Names:
-                DNS Names:     localhost
-                               my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local
-                               my-release-cockroachdb-0.my-release-cockroachdb
-                               my-release-cockroachdb-public
-                               my-release-cockroachdb-public.default.svc.cluster.local
-                IP Addresses:  127.0.0.1
-       Events:  <none>
-       ~~~
-
- 3. If everything looks correct, approve the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.my-release-cockroachdb-0
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-0 approved
- ~~~
-
- 4. Repeat steps 2 and 3 for the other 2 pods.
-
-6. Confirm that three pods are `Running` successfully:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 0/1 Running 0 6m
- my-release-cockroachdb-1 0/1 Running 0 6m
- my-release-cockroachdb-2 0/1 Running 0 6m
- my-release-cockroachdb-init-hxzsc 0/1 Init:0/1 0 6m
- ~~~
-
-7. Approve the CSR for the one-off pod from which cluster initialization happens:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.client.root
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.client.root approved
- ~~~
-
-8. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `Completed` under `STATUS`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 8m
- my-release-cockroachdb-1 1/1 Running 0 8m
- my-release-cockroachdb-2 1/1 Running 0 8m
- my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
- ~~~
-
-9. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
-    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
-    pvc-71019b3a-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-0   standard                11m
-    pvc-7108e172-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-1   standard                11m
-    pvc-710dcb66-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound    default/datadir-my-release-cockroachdb-2   standard                11m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-insecure.md b/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-insecure.md
deleted file mode 100644
index bebb6eb3062..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-insecure.md
+++ /dev/null
@@ -1,83 +0,0 @@
-1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
-2. Confirm that three pods are `Running` successfully. Note that they will not
- be considered `Ready` until after the cluster has been initialized:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
-3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
-    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                           REASON   AGE
-    pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound    default/datadir-cockroachdb-0            26s
-    pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound    default/datadir-cockroachdb-1            27s
-    pvc-5315efda-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound    default/datadir-cockroachdb-2            27s
- ~~~
-
-4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
- ~~~
-
- ~~~
- job.batch/cluster-init created
- ~~~
-
-5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init
- ~~~
-
- ~~~
- NAME COMPLETIONS DURATION AGE
- cluster-init 1/1 7s 27s
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cluster-init-cqf8l 0/1 Completed 0 56s
- cockroachdb-0 1/1 Running 0 7m51s
- cockroachdb-1 1/1 Running 0 7m51s
- cockroachdb-2 1/1 Running 0 7m51s
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-secure.md b/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-secure.md
deleted file mode 100644
index 0558bcdf3b2..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-cockroachdb-local-secure.md
+++ /dev/null
@@ -1,366 +0,0 @@
-Download and modify our StatefulSet configuration, depending on how you want to sign your certificates.
-
-{{site.data.alerts.callout_danger}}
-Some environments, such as Amazon EKS, do not support certificates signed by Kubernetes' built-in CA. In this case, use the second configuration below.
-{{site.data.alerts.end}}
-
-- Using the Kubernetes CA: [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml).
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
- ~~~
-
-- Using a non-Kubernetes CA: [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml)
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml
- ~~~
-
-{{site.data.alerts.callout_success}}
-If you change the StatefulSet name from the default `cockroachdb`, be sure to start and end with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
-{{site.data.alerts.end}}
-
-#### Initialize the cluster
-
-Choose the authentication method that corresponds to the StatefulSet configuration you downloaded and modified above.
-
-- [Kubernetes CA](#kubernetes-ca)
-- [Non-Kubernetes CA](#non-kubernetes-ca)
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
-
-##### Kubernetes CA
-
-1. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset-secure.yaml
- ~~~
-
- ~~~
- serviceaccount/cockroachdb created
- role.rbac.authorization.k8s.io/cockroachdb created
- clusterrole.rbac.authorization.k8s.io/cockroachdb created
- rolebinding.rbac.authorization.k8s.io/cockroachdb created
- clusterrolebinding.rbac.authorization.k8s.io/cockroachdb created
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
-2. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
-
- 1. Get the names of the `Pending` CSRs:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.node.cockroachdb-0 1m system:serviceaccount:default:cockroachdb Pending
- default.node.cockroachdb-1 1m system:serviceaccount:default:cockroachdb Pending
- default.node.cockroachdb-2 1m system:serviceaccount:default:cockroachdb Pending
- ...
- ~~~
-
- If you do not see a `Pending` CSR, wait a minute and try again.
-
- 2. Examine the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe csr default.node.cockroachdb-0
- ~~~
-
-       ~~~
-       Name:               default.node.cockroachdb-0
-       Labels:             <none>
-       Annotations:        <none>
-       CreationTimestamp:  Thu, 09 Nov 2017 13:39:37 -0500
-       Requesting User:    system:serviceaccount:default:cockroachdb
-       Status:             Pending
-       Subject:
-         Common Name:    node
-         Serial Number:
-         Organization:   Cockroach
-       Subject Alternative Names:
-                DNS Names:     localhost
-                               cockroachdb-0.cockroachdb.default.svc.cluster.local
-                               cockroachdb-0.cockroachdb
-                               cockroachdb-public
-                               cockroachdb-public.default.svc.cluster.local
-                IP Addresses:  127.0.0.1
-                               10.48.1.6
-       Events:  <none>
-       ~~~
-
- 3. If everything looks correct, approve the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.cockroachdb-0
- ~~~
-
- ~~~
- certificatesigningrequest "default.node.cockroachdb-0" approved
- ~~~
-
- 4. Repeat steps 2 and 3 for the other 2 pods.
-
-3. Initialize the CockroachDB cluster:
-
- 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
- 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
-       NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
-       pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-0   standard                51m
-       pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-1   standard                51m
-       pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-2   standard                51m
- ~~~
-
- 3. Use our [`cluster-init-secure.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml
- ~~~
-
- ~~~
- job.batch/cluster-init-secure created
- ~~~
-
- 4. Approve the CSR for the one-off pod from which cluster initialization happens:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.client.root
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.client.root approved
- ~~~
-
- 5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init-secure
- ~~~
-
- ~~~
- NAME COMPLETIONS DURATION AGE
- cluster-init-secure 1/1 23s 35s
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cluster-init-secure-q8s7v 0/1 Completed 0 55s
- cockroachdb-0 1/1 Running 0 3m
- cockroachdb-1 1/1 Running 0 3m
- cockroachdb-2 1/1 Running 0 3m
- ~~~
-
-##### Non-Kubernetes CA
-
-{{site.data.alerts.callout_info}}
-The below steps use [`cockroach cert` commands](cockroach-cert.html) to quickly generate and sign the CockroachDB node and client certificates. Read our [Authentication](authentication.html#using-digital-certificates-with-cockroachdb) docs to learn about other methods of signing certificates.
-{{site.data.alerts.end}}
-
-1. Create two directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir certs
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir my-safe-directory
- ~~~
-
- Directory | Description
- ----------|------------
- `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory.
- `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
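-
-    Since `my-safe-directory` will hold the CA key, you may also want to restrict its permissions (an optional hardening step, not part of the original tutorial):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ chmod 700 my-safe-directory
-    ~~~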
-
-2. Create the CA certificate and key pair:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-ca \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-3. Create a client certificate and key pair for the root user:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- root \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-4. Upload the client certificate and key to the Kubernetes cluster as a secret:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret \
- generic cockroachdb.client.root \
- --from-file=certs
- ~~~
-
- ~~~
- secret/cockroachdb.client.root created
- ~~~
-
-5. Create the certificate and key pair for your CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
- localhost 127.0.0.1 \
- cockroachdb-public \
- cockroachdb-public.default \
- cockroachdb-public.default.svc.cluster.local \
- *.cockroachdb \
- *.cockroachdb.default \
- *.cockroachdb.default.svc.cluster.local \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-6. Upload the node certificate and key to the Kubernetes cluster as a secret:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret \
- generic cockroachdb.node \
- --from-file=certs
- ~~~
-
- ~~~
- secret/cockroachdb.node created
- ~~~
-
-7. Check that the secrets were created on the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get secrets
- ~~~
-
- ~~~
- NAME TYPE DATA AGE
- cockroachdb.client.root Opaque 3 41m
- cockroachdb.node Opaque 5 14s
- default-token-6qjdb kubernetes.io/service-account-token 3 4m
- ~~~
-
-8. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- serviceaccount/cockroachdb created
- role.rbac.authorization.k8s.io/cockroachdb created
- rolebinding.rbac.authorization.k8s.io/cockroachdb created
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
-9. Initialize the CockroachDB cluster:
-
- 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
- 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
-       NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
-       pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-0   standard                51m
-       pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-1   standard                51m
-       pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-2   standard                51m
- ~~~
-
- 3. Run `cockroach init` on one of the pods to complete the node startup process and have them join together as a cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-0 \
- -- /cockroach/cockroach init \
- --certs-dir=/cockroach/cockroach-certs
- ~~~
-
- ~~~
- Cluster successfully initialized
- ~~~
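-
-       As an optional extra check (not part of the original steps), you can list the nodes with the same client certificates to confirm that all three joined:
-
-       {% include copy-clipboard.html %}
-       ~~~ shell
-       $ kubectl exec -it cockroachdb-0 \
-       -- /cockroach/cockroach node status \
-       --certs-dir=/cockroach/cockroach-certs
-       ~~~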
-
- 4. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 3m
- cockroachdb-1 1/1 Running 0 3m
- cockroachdb-2 1/1 Running 0 3m
- ~~~
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/orchestration/start-cockroachdb-secure.md b/src/current/_includes/v20.1/orchestration/start-cockroachdb-secure.md
deleted file mode 100644
index 415e16323fc..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-cockroachdb-secure.md
+++ /dev/null
@@ -1,203 +0,0 @@
-#### Set up configuration file
-
-1. Download and modify our [StatefulSet configuration](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml
- ~~~
-
-1. Allocate CPU and memory resources to CockroachDB on each pod. These settings should be appropriate for your workload. For more context on provisioning CPU and memory, see the [Production Checklist](recommended-production-settings.html#hardware).
-
- {{site.data.alerts.callout_success}}
- Resource `requests` and `limits` should have identical values.
- {{site.data.alerts.end}}
-
- ~~~
- resources:
- requests:
- cpu: "2"
- memory: "8Gi"
- limits:
- cpu: "2"
- memory: "8Gi"
- ~~~
-
- {{site.data.alerts.callout_info}}
- If no resource limits are specified, the pods will be able to consume the maximum available CPUs and memory. However, to avoid overallocating resources when another memory-intensive workload is on the same instance, always set resource requests and limits explicitly.
- {{site.data.alerts.end}}
-
-1. In the `volumeClaimTemplates` specification, you may want to modify `resources.requests.storage` for your use case. This configuration defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type).
-
- ~~~
- resources:
- requests:
- storage: "100Gi"
- ~~~
-
-#### Initialize the cluster
-
-{{site.data.alerts.callout_info}}
-The below steps use [`cockroach cert` commands](cockroach-cert.html) to quickly generate and sign the CockroachDB node and client certificates. If you use a different method of generating certificates, make sure to update `secret.secretName` in the StatefulSet configuration with the name of your node secret.
-{{site.data.alerts.end}}
-
-1. Create two directories:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir certs my-safe-directory
- ~~~
-
- Directory | Description
- ----------|------------
- `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory.
- `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
-
-2. Create the CA certificate and key pair:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-ca \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-3. Create a client certificate and key pair for the root user:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- root \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-4. Upload the client certificate and key to the Kubernetes cluster as a secret:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret \
- generic cockroachdb.client.root \
- --from-file=certs
- ~~~
-
- ~~~
- secret/cockroachdb.client.root created
- ~~~
-
-5. Create the certificate and key pair for your CockroachDB nodes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
- localhost 127.0.0.1 \
- cockroachdb-public \
- cockroachdb-public.default \
- cockroachdb-public.default.svc.cluster.local \
- *.cockroachdb \
- *.cockroachdb.default \
- *.cockroachdb.default.svc.cluster.local \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-6. Upload the node certificate and key to the Kubernetes cluster as a secret:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret \
- generic cockroachdb.node \
- --from-file=certs
- ~~~
-
- ~~~
- secret/cockroachdb.node created
- ~~~
-
-7. Check that the secrets were created on the cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get secrets
- ~~~
-
- ~~~
- NAME TYPE DATA AGE
- cockroachdb.client.root Opaque 3 41m
- cockroachdb.node Opaque 5 14s
- default-token-6qjdb kubernetes.io/service-account-token 3 4m
- ~~~
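-
-    If you want to confirm which files were loaded into a secret, `kubectl describe` lists its data keys (an optional check):
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl describe secret cockroachdb.node
-    ~~~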
-
-8. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- serviceaccount/cockroachdb created
- role.rbac.authorization.k8s.io/cockroachdb created
- rolebinding.rbac.authorization.k8s.io/cockroachdb created
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
-9. Initialize the CockroachDB cluster:
-
- 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
- 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
-        NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
-        pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-0   standard                51m
-        pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-1   standard                51m
-        pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca   100Gi      RWO            Delete           Bound    default/datadir-cockroachdb-2   standard                51m
- ~~~
-
- 3. Run `cockroach init` on one of the pods to complete the node startup process and have them join together as a cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-0 \
- -- /cockroach/cockroach init \
- --certs-dir=/cockroach/cockroach-certs
- ~~~
-
- ~~~
- Cluster successfully initialized
- ~~~
-
- 4. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 3m
- cockroachdb-1 1/1 Running 0 3m
- cockroachdb-2 1/1 Running 0 3m
- ~~~
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/orchestration/start-kubernetes.md b/src/current/_includes/v20.1/orchestration/start-kubernetes.md
deleted file mode 100644
index 8f27bd4cac0..00000000000
--- a/src/current/_includes/v20.1/orchestration/start-kubernetes.md
+++ /dev/null
@@ -1,97 +0,0 @@
-You can use the hosted [Google Kubernetes Engine (GKE)](#hosted-gke) service or the hosted [Amazon Elastic Kubernetes Service (EKS)](#hosted-eks) to quickly start Kubernetes.
-
-- [Hosted GKE](#hosted-gke)
-- [Hosted EKS](#hosted-eks)
-
-{{site.data.alerts.callout_info}}
-The CockroachDB Kubernetes Operator is currently supported for GKE. You can also use the Operator on platforms such as [Red Hat OpenShift](../{{site.versions["stable"]}}/deploy-cockroachdb-with-kubernetes-openshift.html) and [IBM Cloud Pak for Data](https://www.ibm.com/products/cloud-pak-for-data).
-{{site.data.alerts.end}}
-
-### Hosted GKE
-
-1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation.
-
- This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
-
- {{site.data.alerts.callout_success}}
- The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the CockroachDB Admin UI using the steps in this guide.
- {{site.data.alerts.end}}
-
-2. From your local workstation, start the Kubernetes cluster, specifying one of the available [regions](https://cloud.google.com/compute/docs/regions-zones#available) (e.g., `us-east1`):
-
- {{site.data.alerts.callout_success}}
- Since this region can differ from your default `gcloud` region, be sure to include the `--region` flag to run `gcloud` commands against this cluster.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ gcloud container clusters create cockroachdb --machine-type n2-standard-4 --region {region-name} --num-nodes 1
- ~~~
-
- ~~~
- Creating cluster cockroachdb...done.
- ~~~
-
- This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--region` flag specifies a [regional three-zone cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster), and `--num-nodes` specifies one node in each zone.
-
- The `--machine-type` flag tells the node pool to use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
-
- The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.
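-
-    Once the cluster is ready, you can optionally confirm that `gcloud` sees it before moving on:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud container clusters list
-    ~~~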
-
-3. Get the email address associated with your Google Cloud account:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ gcloud info | grep Account
- ~~~
-
- ~~~
- Account: [your.google.cloud.email@example.org]
- ~~~
-
- {{site.data.alerts.callout_danger}}
-    This command returns your email address in all lowercase. However, in the next step, you must enter the address using the correct capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com.
- {{site.data.alerts.end}}
-
-4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create clusterrolebinding $USER-cluster-admin-binding \
- --clusterrole=cluster-admin \
-    --user=<your.google.cloud.email@example.org>
- ~~~
-
- ~~~
- clusterrolebinding.rbac.authorization.k8s.io/your.username-cluster-admin-binding created
- ~~~
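-
-    To verify that the binding was created (optional), fetch it back:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get clusterrolebinding $USER-cluster-admin-binding
-    ~~~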
-
-### Hosted EKS
-
-1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation.
-
- This includes installing and configuring the AWS CLI and `eksctl`, which is the command-line tool used to create and delete Kubernetes clusters on EKS, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
-
-2. From your local workstation, start the Kubernetes cluster:
-
- {{site.data.alerts.callout_success}}
-    To ensure that each of the 3 nodes can be placed into a different availability zone, you may want to first [confirm that at least 3 zones are available in the region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#availability-zones-describe) for your account.
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ eksctl create cluster \
- --name cockroachdb \
- --nodegroup-name standard-workers \
- --node-type m5.xlarge \
- --nodes 3 \
- --nodes-min 1 \
- --nodes-max 4 \
- --node-ami auto
- ~~~
-
- This creates EKS instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
-
- Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster.
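-
-    Once the cluster is ready, an optional check is to confirm that all 3 worker nodes have registered with Kubernetes:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get nodes
-    ~~~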
-
-3. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/home) to verify that the stacks `eksctl-cockroachdb-cluster` and `eksctl-cockroachdb-nodegroup-standard-workers` were successfully created. Be sure that your region is selected in the console.
diff --git a/src/current/_includes/v20.1/orchestration/test-cluster-insecure.md b/src/current/_includes/v20.1/orchestration/test-cluster-insecure.md
deleted file mode 100644
index 153c8f918f0..00000000000
--- a/src/current/_includes/v20.1/orchestration/test-cluster-insecure.md
+++ /dev/null
@@ -1,72 +0,0 @@
-1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
-    If you deployed the cluster using the manual StatefulSet configuration, use the `cockroachdb-public` host:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
-    If you deployed the cluster using the Helm chart, use the release's host instead:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
-
-2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- balance DECIMAL
- );
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts (balance)
- VALUES
- (1000.50), (20000), (380), (500), (55000);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- id | balance
- +--------------------------------------+---------+
- 6f123370-c48c-41ff-b384-2c185590af2b | 380
- 990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50
- ac31c671-40bf-4a7b-8bee-452cff8a4026 | 500
- d58afd93-5be9-42ba-b2e2-dc00dcedf409 | 20000
- e6d8f696-87f5-4d3c-a377-8e152fdc27f7 | 55000
- (5 rows)
- ~~~
-
-3. Exit the SQL shell and delete the temporary pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v20.1/orchestration/test-cluster-secure.md b/src/current/_includes/v20.1/orchestration/test-cluster-secure.md
deleted file mode 100644
index 9714ed41d5f..00000000000
--- a/src/current/_includes/v20.1/orchestration/test-cluster-secure.md
+++ /dev/null
@@ -1,188 +0,0 @@
-To use the built-in SQL client, you need to launch a pod that runs indefinitely with the `cockroach` binary inside it, get a shell into the pod, and then start the built-in SQL client.
-
-If you deployed the cluster using the manual StatefulSet configuration, use our [`client.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/client.yaml) file to launch a pod and keep it running indefinitely:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/client.yaml
- ~~~
-
- {{site.data.alerts.callout_info}}
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. If you issue client certificates for other users, however, be sure your SQL usernames contain only lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
- {{site.data.alerts.end}}
-
- ~~~
- pod/cockroachdb-client-secure created
- ~~~
-
-1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- # Welcome to the CockroachDB SQL shell.
- # All statements must be terminated by a semicolon.
- # To exit, type: \q.
- #
- # Server version: CockroachDB CCL v20.1.0 (x86_64-unknown-linux-gnu, built 2020/07/29 22:56:36, go1.13.9) (same version as client)
- # Cluster ID: f82abd88-5d44-4493-9558-d6c75a3b80cc
- #
- # Enter \? for a brief introduction.
- #
- root@:26257/defaultdb>
- ~~~
-
-2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES (1, 1000.50);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- id | balance
- +----+---------+
- 1 | 1000.50
- (1 row)
- ~~~
-
-3. [Create a user with a password](create-user.html#create-a-user-with-a-password):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
- ~~~
-
- You will need this username and password to access the Admin UI later.
-
-4. Exit the SQL shell and pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-
-If you deployed the cluster using the Helm chart, do the following instead:
-
-1. From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) file to launch a pod and keep it running indefinitely.
-
- 1. Download the file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
-       $ curl -O \
- https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
- ~~~
-
- 1. In the file, change `serviceAccountName: cockroachdb` to `serviceAccountName: my-release-cockroachdb`.
-
- 1. Use the file to launch a pod and keep it running indefinitely:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f client-secure.yaml
- ~~~
-
- ~~~
- pod "cockroachdb-client-secure" created
- ~~~
-
- {{site.data.alerts.callout_info}}
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. If you issue client certificates for other users, however, be sure your SQL usernames contain only lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
- {{site.data.alerts.end}}
-
-2. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- # Welcome to the CockroachDB SQL shell.
- # All statements must be terminated by a semicolon.
- # To exit, type: \q.
- #
- # Server version: CockroachDB CCL v20.1.0 (x86_64-unknown-linux-gnu, built 2020/07/29 22:56:36, go1.13.9) (same version as client)
- # Cluster ID: f82abd88-5d44-4493-9558-d6c75a3b80cc
- #
- # Enter \? for a brief introduction.
- #
- root@:26257/defaultdb>
- ~~~
-
-3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES (1, 1000.50);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- id | balance
- +----+---------+
- 1 | 1000.50
- (1 row)
- ~~~
-
-4. [Create a user with a password](create-user.html#create-a-user-with-a-password):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
- ~~~
-
- You will need this username and password to access the Admin UI later.
-
-5. Exit the SQL shell and pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-
-{{site.data.alerts.callout_success}}
-This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command.
-
-If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v20.1/performance/check-rebalancing-after-partitioning.md b/src/current/_includes/v20.1/performance/check-rebalancing-after-partitioning.md
deleted file mode 100644
index c4981c70632..00000000000
--- a/src/current/_includes/v20.1/performance/check-rebalancing-after-partitioning.md
+++ /dev/null
@@ -1,41 +0,0 @@
-Over the next few minutes, CockroachDB will rebalance all partitions based on the constraints you defined.
-
-To check this at a high level, access the Admin UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning:
-
-
-
-To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SELECT * FROM \
-[SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles] \
-WHERE \"start_key\" IS NOT NULL \
- AND \"start_key\" NOT LIKE '%Prefix%';"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+------------------+----------------------------+----------+----------+--------------+
- /"boston" | /"boston"/PrefixEnd | 105 | {1,2,3} | 3
- /"los angeles" | /"los angeles"/PrefixEnd | 121 | {7,8,9} | 8
- /"new york" | /"new york"/PrefixEnd | 101 | {1,2,3} | 3
- /"san francisco" | /"san francisco"/PrefixEnd | 117 | {7,8,9} | 8
- /"seattle" | /"seattle"/PrefixEnd | 113 | {4,5,6} | 5
- /"washington dc" | /"washington dc"/PrefixEnd | 109 | {1,2,3} | 1
-(6 rows)
-~~~
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1-b`, replicas for Seattle are located on nodes 4-6 in `us-west1-a`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2-a`.
diff --git a/src/current/_includes/v20.1/performance/check-rebalancing.md b/src/current/_includes/v20.1/performance/check-rebalancing.md
deleted file mode 100644
index a5e5e5b3005..00000000000
--- a/src/current/_includes/v20.1/performance/check-rebalancing.md
+++ /dev/null
@@ -1,33 +0,0 @@
-Since you started each node with the `--locality` flag set to its GCE zone, over the next few minutes, CockroachDB will rebalance data evenly across the zones.
-
-To check this, access the Admin UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes:
-
-
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-To verify even balancing at range level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+-----------+---------+----------+----------+--------------+
- NULL | NULL | 33 | {3,4,7} | 7
-(1 row)
-~~~
-
-In this case, we can see that, for the single range containing `vehicles` data, one replica is in each zone, and the leaseholder is in the `us-west2-a` zone.
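-
-If you want to confirm the node-to-zone mapping for the nodes listed above, one option is to query each node's advertised locality via the `crdb_internal.gossip_nodes` table (a sketch; internal tables can change across versions):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---execute="SELECT node_id, locality FROM crdb_internal.gossip_nodes;"
-~~~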
diff --git a/src/current/_includes/v20.1/performance/configure-network.md b/src/current/_includes/v20.1/performance/configure-network.md
deleted file mode 100644
index 7cd3e3cbcc6..00000000000
--- a/src/current/_includes/v20.1/performance/configure-network.md
+++ /dev/null
@@ -1,18 +0,0 @@
-CockroachDB requires TCP communication on two ports:
-
-- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster)
-- **8080** (`tcp:8080`) for accessing the Admin UI
-
-Since GCE instances communicate on their internal IP addresses by default, you do not need to take any action to enable inter-node communication. However, to access the Admin UI from your local network, you must [create a firewall rule for your project](https://cloud.google.com/vpc/docs/using-firewalls):
-
-Field | Recommended Value
-------|------------------
-Name | **cockroachweb**
-Source filter | IP ranges
-Source IP ranges | Your local network's IP ranges
-Allowed protocols | **tcp:8080**
-Target tags | `cockroachdb`
-
-{{site.data.alerts.callout_info}}
-The **tag** feature lets you easily apply the rule to your instances.
-{{site.data.alerts.end}}
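-
-If you prefer the command line to the console, the same rule can be sketched with `gcloud` (values left for you to fill in, matching the table above):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gcloud compute firewall-rules create cockroachweb \
---allow=tcp:8080 \
---source-ranges= \
---target-tags=cockroachdb
-~~~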
diff --git a/src/current/_includes/v20.1/performance/import-movr.md b/src/current/_includes/v20.1/performance/import-movr.md
deleted file mode 100644
index a0fe2dc710a..00000000000
--- a/src/current/_includes/v20.1/performance/import-movr.md
+++ /dev/null
@@ -1,160 +0,0 @@
-Now you'll import Movr data representing users, vehicles, and rides in 3 eastern US cities (New York, Boston, and Washington DC) and 3 western US cities (Los Angeles, San Francisco, and Seattle).
-
-1. Still on the fourth instance, start the [built-in SQL shell](cockroach-sql.html), pointing it at one of the CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql {{page.certs}} --host=
- ~~~
-
-2. Create the `movr` database and set it as the default:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE movr;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SET DATABASE = movr;
- ~~~
-
-3. Use the [`IMPORT`](import.html) statement to create and populate the `users`, `vehicles`, and `rides` tables:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE users (
- id UUID NOT NULL,
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/users/n1.0.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+------+---------------+----------------+--------+
- 390345990764396545 | succeeded | 1 | 1998 | 0 | 0 | 241052
- (1 row)
-
- Time: 2.882582355s
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE vehicles (
- id UUID NOT NULL,
- city STRING NOT NULL,
- type STRING NULL,
- owner_id UUID NULL,
- creation_time TIMESTAMP NULL,
- status STRING NULL,
- ext JSON NULL,
- mycol STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/vehicles/n1.0.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+-------+---------------+----------------+---------+
- 390346109887250433 | succeeded | 1 | 19998 | 19998 | 0 | 3558767
- (1 row)
-
- Time: 5.803841493s
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE rides (
- id UUID NOT NULL,
- city STRING NOT NULL,
- vehicle_city STRING NULL,
- rider_id UUID NULL,
- vehicle_id UUID NULL,
- start_address STRING NULL,
- end_address STRING NULL,
- start_time TIMESTAMP NULL,
- end_time TIMESTAMP NULL,
- revenue DECIMAL(10,2) NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
- INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
- CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.0.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.1.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.2.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.3.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.4.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.5.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.6.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.7.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.8.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.9.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+--------+---------------+----------------+-----------+
- 390346325693792257 | succeeded | 1 | 999996 | 1999992 | 0 | 339741841
- (1 row)
-
- Time: 44.620371424s
- ~~~
-
- {{site.data.alerts.callout_success}}
- You can observe the progress of imports as well as all schema change operations (e.g., adding secondary indexes) on the [**Jobs** page](admin-ui-jobs-page.html) of the Admin UI.
- {{site.data.alerts.end}}
-
-4. Logically, there should be a number of [foreign key](foreign-key.html) relationships between the tables:
-
- Referencing columns | Referenced columns
- --------------------|-------------------
- `vehicles.city`, `vehicles.owner_id` | `users.city`, `users.id`
- `rides.city`, `rides.rider_id` | `users.city`, `users.id`
- `rides.vehicle_city`, `rides.vehicle_id` | `vehicles.city`, `vehicles.id`
-
- As mentioned earlier, it wasn't possible to put these relationships in place during `IMPORT`, but it was possible to create the required secondary indexes. Now, let's add the foreign key constraints (a verification sketch follows these steps):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE vehicles
- ADD CONSTRAINT fk_city_ref_users
- FOREIGN KEY (city, owner_id)
- REFERENCES users (city, id);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE rides
- ADD CONSTRAINT fk_city_ref_users
- FOREIGN KEY (city, rider_id)
- REFERENCES users (city, id);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE rides
- ADD CONSTRAINT fk_vehicle_city_ref_vehicles
- FOREIGN KEY (vehicle_city, vehicle_id)
- REFERENCES vehicles (city, id);
- ~~~
-
-5. Exit the built-in SQL shell:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
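-
-If you want to confirm the foreign keys added above (before exiting the shell, or after reconnecting), a minimal sketch:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CONSTRAINTS FROM rides;
-~~~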
diff --git a/src/current/_includes/v20.1/performance/overview.md b/src/current/_includes/v20.1/performance/overview.md
deleted file mode 100644
index 66c28ede405..00000000000
--- a/src/current/_includes/v20.1/performance/overview.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### Topology
-
-You'll start with a 3-node CockroachDB cluster in a single Google Compute Engine (GCE) zone, with an extra instance for running a client application workload:
-
-
-
-{{site.data.alerts.callout_info}}
-Within a single GCE zone, network latency between instances should be sub-millisecond.
-{{site.data.alerts.end}}
-
-You'll then scale the cluster to 9 nodes running across 3 GCE regions, with an extra instance in each region for a client application workload:
-
-
-
-{{site.data.alerts.callout_info}}
-Network latencies will increase with geographic distance between nodes. You can observe this in the [Network Latency page](admin-ui-network-latency-page.html) of the Admin UI.
-{{site.data.alerts.end}}
-
-To reproduce the performance demonstrated in this tutorial:
-
-- For each CockroachDB node, you'll use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/#localssds) disk.
-- For running the client application workload, you'll use smaller instances, such as `n2-standard-2`.
-
-### Schema
-
-Your schema and data will be based on our open-source, fictional peer-to-peer vehicle-sharing application, [MovR](movr.html).
-
-
-
-A few notes about the schema:
-
-- There are just three self-explanatory tables: `users` represents the people registered for the service, `vehicles` represents the pool of vehicles available for the service, and `rides` represents when and where users have taken rides.
-- Each table has a composite primary key, with `city` being first in the key. Although not necessary initially in the single-region deployment, once you scale the cluster to multiple regions, these compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. As such, this tutorial demonstrates a schema designed for future scaling (see the sketch after this list).
-- The [`IMPORT`](import.html) feature you'll use to import the data does not support foreign keys, so you'll import the data without [foreign key constraints](foreign-key.html). However, the import will create the secondary indexes required to add the foreign keys later.
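-
-For illustration, a minimal sketch of the key structure for `users` (the full DDL appears in the import step later in this tutorial):
-
-~~~ sql
-> CREATE TABLE users (
-    id UUID NOT NULL,
-    city STRING NOT NULL,
-    -- ...remaining columns...
-    CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
-);
-~~~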
-
-### Important concepts
-
-To understand the techniques in this tutorial, and to be able to apply them in your own scenarios, it's important to first understand [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html). Review that document before getting started here.
diff --git a/src/current/_includes/v20.1/performance/partition-by-city.md b/src/current/_includes/v20.1/performance/partition-by-city.md
deleted file mode 100644
index 0a29ac78d89..00000000000
--- a/src/current/_includes/v20.1/performance/partition-by-city.md
+++ /dev/null
@@ -1,419 +0,0 @@
-For this service, the most effective technique for improving read and write latency is to [geo-partition](partitioning.html) the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges. Once ranges are defined in this way, we can then use the [replication zone](configure-replication-zones.html) feature to pin partitions to specific locations, ensuring that read and write requests from users in a specific city do not have to leave that region.
-
-1. Partitioning is an enterprise feature, so start off by [registering for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/).
-
-2. Once you've received the trial license, SSH to any node in your cluster and [apply the license](licensing-faqs.html#set-a-license):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --host= \
- --execute="SET CLUSTER SETTING cluster.organization = '';"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --host= \
- --execute="SET CLUSTER SETTING enterprise.license = '';"
- ~~~
-
-3. Define partitions for all tables and their secondary indexes.
-
- Start with the `users` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- Now define partitions for the `vehicles` table and its secondary indexes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE vehicles \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX vehicles_auto_index_fk_city_ref_users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- Next, define partitions for the `rides` table and its secondary indexes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE rides \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX rides_auto_index_fk_city_ref_users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles \
- PARTITION BY LIST (vehicle_city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- Finally, drop an unused index on `rides` rather than partition it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="DROP INDEX rides_start_time_idx;"
- ~~~
-
- {{site.data.alerts.callout_info}}
- The `rides` table contains 1 million rows, so dropping this index will take a few minutes.
- {{site.data.alerts.end}}
-
-4. Now [create replication zones](configure-replication-zones.html#create-a-replication-zone-for-a-partition) to require city data to be stored on specific nodes based on node locality. A verification sketch follows these steps.
-
- City | Locality
- -----|---------
- New York | `zone=us-east1-b`
- Boston | `zone=us-east1-b`
- Washington DC | `zone=us-east1-b`
- Seattle | `zone=us-west1-a`
- San Francisco | `zone=us-west2-a`
- Los Angeles | `zone=us-west2-a`
-
- {{site.data.alerts.callout_info}}
- Since our nodes are located in 3 specific GCE zones, we're only going to use the `zone=` portion of node locality. If we were using multiple zones per region, we would likely use the `region=` portion of the node locality instead.
- {{site.data.alerts.end}}
-
- Start with the `users` table partitions:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- Move on to the `vehicles` table and secondary index partitions:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- Finish with the `rides` table and secondary index partitions:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
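-
-To confirm that the partitions and zone constraints above were applied as intended, one option is `SHOW PARTITIONS`, which reports each partition's values and zone config (a sketch, run from the same instance):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---database=movr \
---host= \
---execute="SHOW PARTITIONS FROM DATABASE movr;"
-~~~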
diff --git a/src/current/_includes/v20.1/performance/scale-cluster.md b/src/current/_includes/v20.1/performance/scale-cluster.md
deleted file mode 100644
index 8358ff1cdd3..00000000000
--- a/src/current/_includes/v20.1/performance/scale-cluster.md
+++ /dev/null
@@ -1,61 +0,0 @@
-1. SSH to one of the `n2-standard-4` instances in the `us-west1-a` zone.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-3. Run the [`cockroach start`](cockroach-start.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-addr= \
- --join= \
- --locality=cloud=gce,region=us-west1,zone=us-west1-a \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-4. Repeat steps 1 - 3 for the other two `n2-standard-4` instances in the `us-west1-a` zone.
-
-5. SSH to one of the `n2-standard-4` instances in the `us-west2-a` zone.
-
-6. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-7. Run the [`cockroach start`](cockroach-start.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-addr= \
- --join= \
- --locality=cloud=gce,region=us-west2,zone=us-west2-a \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-8. Repeat steps 5 - 7 for the other two `n2-standard-4` instances in the `us-west2-a` zone.
diff --git a/src/current/_includes/v20.1/performance/start-cluster.md b/src/current/_includes/v20.1/performance/start-cluster.md
deleted file mode 100644
index 3c3fbf75f25..00000000000
--- a/src/current/_includes/v20.1/performance/start-cluster.md
+++ /dev/null
@@ -1,60 +0,0 @@
-#### Start the nodes
-
-1. SSH to the first `n2-standard-4` instance.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-3. Run the [`cockroach start`](cockroach-start.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-addr= \
- --join=:26257,:26257,:26257 \
- --locality=cloud=gce,region=us-east1,zone=us-east1-b \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-4. Repeat steps 1 - 3 for the other two `n2-standard-4` instances. Be sure to adjust the `--advertise-addr` flag each time.
-
-#### Initialize the cluster
-
-1. SSH to the fourth instance, the one not running a CockroachDB node.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-4. Run the [`cockroach init`](cockroach-init.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init {{page.certs}} --host=
- ~~~
-
- Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients.
diff --git a/src/current/_includes/v20.1/performance/test-performance-after-partitioning.md b/src/current/_includes/v20.1/performance/test-performance-after-partitioning.md
deleted file mode 100644
index 16c07a9f92d..00000000000
--- a/src/current/_includes/v20.1/performance/test-performance-after-partitioning.md
+++ /dev/null
@@ -1,93 +0,0 @@
-After partitioning, reads and writes for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city.
-
-To check this, let's repeat a few of the read and write queries that we executed before partitioning in [step 12](#step-12-test-performance).
-
-#### Reads
-
-Again imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
-
-1. SSH to the instance in `us-east1-b` with the Python client.
-
-2. Query for the data:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'new york' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
- ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
- ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
- ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
- ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
- ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
- ...
-
- Times (milliseconds):
- [20.065784454345703, 7.866144180297852, 8.362054824829102, 9.08803939819336, 7.925987243652344, 7.543087005615234, 7.786035537719727, 8.227825164794922, 7.907867431640625, 7.654905319213867, 7.793903350830078, 7.627964019775391, 7.833957672119141, 7.858037948608398, 7.474184036254883, 9.459972381591797, 7.726192474365234, 7.194995880126953, 7.364034652709961, 7.25102424621582, 7.650852203369141, 7.663965225219727, 9.334087371826172, 7.810115814208984, 7.543087005615234, 7.134914398193359, 7.922887802124023, 7.220029830932617, 7.606029510498047, 7.208108901977539, 7.333993911743164, 7.464170455932617, 7.679939270019531, 7.436990737915039, 7.62486457824707, 7.235050201416016, 7.420063018798828, 7.795095443725586, 7.39598274230957, 7.546901702880859, 7.582187652587891, 7.9669952392578125, 7.418155670166016, 7.539033889770508, 7.805109024047852, 7.086992263793945, 7.069826126098633, 7.833957672119141, 7.43412971496582, 7.035017013549805]
-
- Median time (milliseconds):
- 7.62641429901
- ~~~
-
-Before partitioning, this query took a median time of 72.02ms. After partitioning, the query took a median time of only 7.62ms.
-
-#### Writes
-
-Now let's again imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
-
-1. SSH to the instance in `us-west1-a` with the Python client.
-
-2. Create 100 Seattle-based users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [41.8248176574707, 9.701967239379883, 8.725166320800781, 9.058952331542969, 7.819175720214844, 6.247997283935547, 10.265827178955078, 7.627964019775391, 9.120941162109375, 7.977008819580078, 9.247064590454102, 8.929967880249023, 9.610176086425781, 14.40286636352539, 8.588075637817383, 8.67319107055664, 9.417057037353516, 7.652044296264648, 8.917093276977539, 9.135961532592773, 8.604049682617188, 9.220123291015625, 7.578134536743164, 9.096860885620117, 8.942842483520508, 8.63790512084961, 7.722139358520508, 13.59701156616211, 9.176015853881836, 11.484146118164062, 9.212017059326172, 7.563114166259766, 8.793115615844727, 8.80289077758789, 7.827043533325195, 7.6389312744140625, 17.47584342956543, 9.436845779418945, 7.63392448425293, 8.594989776611328, 9.002208709716797, 8.93402099609375, 8.71896743774414, 8.76307487487793, 8.156061172485352, 8.729934692382812, 8.738040924072266, 8.25190544128418, 8.971929550170898, 7.460832595825195, 8.889198303222656, 8.45789909362793, 8.761167526245117, 10.223865509033203, 8.892059326171875, 8.961915969848633, 8.968114852905273, 7.750988006591797, 7.761955261230469, 9.199142456054688, 9.02700424194336, 9.509086608886719, 9.428977966308594, 7.902860641479492, 8.940935134887695, 8.615970611572266, 8.75401496887207, 7.906913757324219, 8.179187774658203, 11.447906494140625, 8.71419906616211, 9.202003479003906, 9.263038635253906, 9.089946746826172, 8.92496109008789, 10.32114028930664, 7.913827896118164, 9.464025497436523, 10.612010955810547, 8.78596305847168, 8.878946304321289, 7.575035095214844, 10.657072067260742, 8.777856826782227, 8.649110794067383, 9.012937545776367, 8.931875228881836, 9.31406021118164, 9.396076202392578, 8.908987045288086, 8.002996444702148, 9.089946746826172, 7.5588226318359375, 8.918046951293945, 12.117862701416016, 7.266998291015625, 8.074045181274414, 8.955001831054688, 8.868932723999023, 8.755922317504883]
-
- Median time (milliseconds):
- 8.90052318573
- ~~~
-
- Before partitioning, this query took a median time of 48.40ms. After partitioning, the query took a median time of only 8.90ms.
-
-3. SSH to the instance in `us-east1-b` with the Python client.
-
-4. Create 100 new NY-based users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [276.3068675994873, 9.830951690673828, 8.772134780883789, 9.304046630859375, 8.24880599975586, 7.959842681884766, 7.848978042602539, 7.879018783569336, 7.754087448120117, 10.724067687988281, 13.960123062133789, 9.825944900512695, 9.60993766784668, 9.273052215576172, 9.41920280456543, 8.040904998779297, 16.484975814819336, 10.178089141845703, 8.322000503540039, 9.468793869018555, 8.002042770385742, 9.185075759887695, 9.54294204711914, 9.387016296386719, 9.676933288574219, 13.051986694335938, 9.506940841674805, 12.327909469604492, 10.377168655395508, 15.023946762084961, 9.985923767089844, 7.853031158447266, 9.43303108215332, 9.164094924926758, 10.941028594970703, 9.37199592590332, 12.359857559204102, 8.975028991699219, 7.728099822998047, 8.310079574584961, 9.792089462280273, 9.448051452636719, 8.057117462158203, 9.37795639038086, 9.753942489624023, 9.576082229614258, 8.192062377929688, 9.392023086547852, 7.97581672668457, 8.165121078491211, 9.660959243774414, 8.270978927612305, 9.901046752929688, 8.085966110229492, 10.581016540527344, 9.831905364990234, 7.883787155151367, 8.077859878540039, 8.161067962646484, 10.02812385559082, 7.9898834228515625, 9.840965270996094, 9.452104568481445, 9.747028350830078, 9.003162384033203, 9.206056594848633, 9.274005889892578, 7.8449249267578125, 8.827924728393555, 9.322881698608398, 12.08186149597168, 8.76307487487793, 8.353948593139648, 8.182048797607422, 7.736921310424805, 9.31406021118164, 9.263992309570312, 9.282112121582031, 7.823944091796875, 9.11712646484375, 8.099079132080078, 9.156942367553711, 8.363962173461914, 10.974884033203125, 8.729934692382812, 9.2620849609375, 9.27591323852539, 8.272886276245117, 8.25190544128418, 8.093118667602539, 9.259939193725586, 8.413076400756836, 8.198976516723633, 9.95182991027832, 8.024930953979492, 8.895158767700195, 8.243083953857422, 9.076833724975586, 9.994029998779297, 10.149955749511719]
-
- Median time (milliseconds):
- 9.26303863525
- ~~~
-
- Before partitioning, this query took a median time of 116.86ms. After partitioning, the query took a median time of only 9.26ms.
diff --git a/src/current/_includes/v20.1/performance/test-performance.md b/src/current/_includes/v20.1/performance/test-performance.md
deleted file mode 100644
index 2009ac9653f..00000000000
--- a/src/current/_includes/v20.1/performance/test-performance.md
+++ /dev/null
@@ -1,146 +0,0 @@
-In general, all of the tuning techniques featured in the single-region scenario above still apply in a multi-region deployment. However, the fact that data and leaseholders are spread across the US means greater latencies in many cases.
-
-#### Reads
-
-For example, imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
-
-1. SSH to the instance in `us-east1-b` with the Python client.
-
-2. Query for the data:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'new york' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
- ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
- ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
- ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
- ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
- ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
- ...
-
- Times (milliseconds):
- [933.8209629058838, 72.02410697937012, 72.45206832885742, 72.39294052124023, 72.8158950805664, 72.07584381103516, 72.21412658691406, 71.96712493896484, 71.75517082214355, 72.16811180114746, 71.78592681884766, 72.91603088378906, 71.91109657287598, 71.4719295501709, 72.40676879882812, 71.8080997467041, 71.84004783630371, 71.98500633239746, 72.40891456604004, 73.75001907348633, 71.45905494689941, 71.53081893920898, 71.46596908569336, 72.07608222961426, 71.94995880126953, 71.41804695129395, 71.29096984863281, 72.11899757385254, 71.63381576538086, 71.3050365447998, 71.83194160461426, 71.20394706726074, 70.9981918334961, 72.79205322265625, 72.63493537902832, 72.15285301208496, 71.8698501586914, 72.30591773986816, 71.53582572937012, 72.69001007080078, 72.03006744384766, 72.56317138671875, 71.61688804626465, 72.17121124267578, 70.20092010498047, 72.12018966674805, 73.34589958190918, 73.01592826843262, 71.49410247802734, 72.19099998474121]
-
- Median time (milliseconds):
- 72.0270872116
- ~~~
-
-As we saw earlier, the leaseholder for the `vehicles` table is in `us-west2-a` (Los Angeles), so our query had to go from the gateway node in `us-east1-b` all the way to the west coast and then back again before returning data to the client.
-
-For contrast, imagine we are now a Movr administrator in Los Angeles, and we want to get the IDs and descriptions of all Los Angeles-based bikes that are currently in use:
-
-1. SSH to the instance in `us-west2-a` with the Python client.
-
-2. Query for the data:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'los angeles' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['00078349-94d4-43e6-92be-8b0d1ac7ee9f', "{u'color': u'blue', u'brand': u'Merida'}"]
- ['003f84c4-fa14-47b2-92d4-35a3dddd2d75', "{u'color': u'red', u'brand': u'Kona'}"]
- ['0107a133-7762-4392-b1d9-496eb30ee5f9', "{u'color': u'yellow', u'brand': u'Kona'}"]
- ['0144498b-4c4f-4036-8465-93a6bea502a3', "{u'color': u'blue', u'brand': u'Pinarello'}"]
- ['01476004-fb10-4201-9e56-aadeb427f98a', "{u'color': u'black', u'brand': u'Merida'}"]
-
- Times (milliseconds):
- [782.6759815216064, 8.564949035644531, 8.226156234741211, 7.949113845825195, 7.86590576171875, 7.842063903808594, 7.674932479858398, 7.555961608886719, 7.642984390258789, 8.024930953979492, 7.717132568359375, 8.46409797668457, 7.520914077758789, 7.6541900634765625, 7.458925247192383, 7.671833038330078, 7.740020751953125, 7.771015167236328, 7.598161697387695, 8.411169052124023, 7.408857345581055, 7.469892501831055, 7.524967193603516, 7.764101028442383, 7.750988006591797, 7.2460174560546875, 6.927967071533203, 7.822990417480469, 7.27391242980957, 7.730960845947266, 7.4710845947265625, 7.4310302734375, 7.33494758605957, 7.455110549926758, 7.021188735961914, 7.083892822265625, 7.812976837158203, 7.625102996826172, 7.447957992553711, 7.179021835327148, 7.504940032958984, 7.224082946777344, 7.257938385009766, 7.714986801147461, 7.4939727783203125, 7.6160430908203125, 7.578849792480469, 7.890939712524414, 7.546901702880859, 7.411956787109375]
-
- Median time (milliseconds):
- 7.6071023941
- ~~~
-
-Because the leaseholder for `vehicles` is in the same zone as the client request, this query took just 7.60ms compared to the similar query in New York that took 72.02ms.
-
-#### Writes
-
-The geographic distribution of data impacts write performance as well. For example, imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
-
-1. SSH to the instance in `us-west1-a` with the Python client.
-
-2. Create 100 Seattle-based users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [277.4538993835449, 50.12702941894531, 47.75214195251465, 48.13408851623535, 47.872066497802734, 48.65407943725586, 47.78695106506348, 49.14689064025879, 52.770137786865234, 49.00097846984863, 48.68602752685547, 47.387123107910156, 47.36208915710449, 47.6841926574707, 46.49209976196289, 47.06096649169922, 46.753883361816406, 46.304941177368164, 48.90894889831543, 48.63715171813965, 48.37393760681152, 49.23295974731445, 50.13418197631836, 48.310041427612305, 48.57516288757324, 47.62911796569824, 47.77693748474121, 47.505855560302734, 47.89996147155762, 49.79205131530762, 50.76479911804199, 50.21500587463379, 48.73299598693848, 47.55592346191406, 47.35088348388672, 46.7071533203125, 43.00808906555176, 43.1060791015625, 46.02813720703125, 47.91092872619629, 68.71294975280762, 49.241065979003906, 48.9039421081543, 47.82295227050781, 48.26998710632324, 47.631025314331055, 64.51892852783203, 48.12812805175781, 67.33417510986328, 48.603057861328125, 50.31013488769531, 51.02396011352539, 51.45716667175293, 50.85396766662598, 49.07512664794922, 47.49894142150879, 44.67201232910156, 43.827056884765625, 44.412851333618164, 46.69189453125, 49.55601692199707, 49.16882514953613, 49.88598823547363, 49.31306838989258, 46.875, 46.69594764709473, 48.31886291503906, 48.378944396972656, 49.0570068359375, 49.417972564697266, 48.22111129760742, 50.662994384765625, 50.58097839355469, 75.44088363647461, 51.05400085449219, 50.85110664367676, 48.187971115112305, 56.7781925201416, 42.47403144836426, 46.2191104888916, 53.96890640258789, 46.697139739990234, 48.99096488952637, 49.1330623626709, 46.34690284729004, 47.09315299987793, 46.39410972595215, 46.51689529418945, 47.58000373840332, 47.924041748046875, 48.426151275634766, 50.22597312927246, 50.1859188079834, 50.37498474121094, 49.861907958984375, 51.477909088134766, 73.09293746948242, 48.779964447021484, 45.13692855834961, 42.2968864440918]
-
- Median time (milliseconds):
- 48.4025478363
- ~~~
-
-3. SSH to the instance in `us-east1-b` with the Python client.
-
-4. Create 100 new NY-based users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [131.05082511901855, 116.88899993896484, 115.15498161315918, 117.095947265625, 121.04082107543945, 115.8750057220459, 113.80696296691895, 113.05880546569824, 118.41201782226562, 125.30899047851562, 117.5389289855957, 115.23890495300293, 116.84799194335938, 120.0411319732666, 115.62800407409668, 115.08989334106445, 113.37089538574219, 115.15498161315918, 115.96989631652832, 133.1961154937744, 114.25995826721191, 118.09396743774414, 122.24102020263672, 116.14608764648438, 114.80998992919922, 131.9139003753662, 114.54391479492188, 115.15307426452637, 116.7759895324707, 135.10799407958984, 117.18511581420898, 120.15485763549805, 118.0570125579834, 114.52388763427734, 115.28396606445312, 130.00011444091797, 126.45292282104492, 142.69423484802246, 117.60401725769043, 134.08493995666504, 117.47002601623535, 115.75007438659668, 117.98381805419922, 115.83089828491211, 114.88890647888184, 113.23404312133789, 121.1700439453125, 117.84791946411133, 115.35286903381348, 115.0820255279541, 116.99700355529785, 116.67394638061523, 116.1041259765625, 114.67289924621582, 112.98894882202148, 117.1119213104248, 119.78602409362793, 114.57300186157227, 129.58717346191406, 118.37983131408691, 126.68204307556152, 118.30306053161621, 113.27195167541504, 114.22920227050781, 115.80777168273926, 116.81294441223145, 114.76683616638184, 115.1430606842041, 117.29192733764648, 118.24417114257812, 116.56999588012695, 113.8620376586914, 114.88819122314453, 120.80597877502441, 132.39002227783203, 131.00910186767578, 114.56179618835449, 117.03896522521973, 117.72680282592773, 115.6010627746582, 115.27681350708008, 114.52317237854004, 114.87483978271484, 117.78903007507324, 116.65701866149902, 122.6949691772461, 117.65193939208984, 120.5449104309082, 115.61179161071777, 117.54202842712402, 114.70890045166016, 113.58809471130371, 129.7171115875244, 117.57993698120117, 117.1119213104248, 117.64001846313477, 140.66505432128906, 136.41691207885742, 116.24789237976074, 115.19908905029297]
-
- Median time (milliseconds):
- 116.868495941
- ~~~
-
-It took 48.40ms to create a user in Seattle and 116.86ms to create a user in New York. To better understand this discrepancy, let's look at the distribution of data for the `users` table:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+-----------+---------+----------+----------+--------------+
- NULL | NULL | 49 | {2,6,8} | 6
-(1 row)
-~~~
-
-For the single range containing `users` data, one replica is in each zone, with the leaseholder in the `us-west1-a` zone. This means that:
-
-- When creating a user in Seattle, the request doesn't have to leave the zone to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (South Carolina) before committing and then returning confirmation to the client.
-- When creating a user in New York, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-west1-a`. It then has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (South Carolina) before committing and then returning confirmation to the client back in the east.
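-
-To confirm that node 6 (the leaseholder above) is indeed in `us-west1-a`, a sketch using the `crdb_internal.gossip_nodes` table (internal tables can change across versions):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---execute="SELECT node_id, locality FROM crdb_internal.gossip_nodes WHERE node_id = 6;"
-~~~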
diff --git a/src/current/_includes/v20.1/performance/tuning-secure.py b/src/current/_includes/v20.1/performance/tuning-secure.py
deleted file mode 100644
index a644dbb1c87..00000000000
--- a/src/current/_includes/v20.1/performance/tuning-secure.py
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
- description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
- help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
- help="statement to execute")
-parser.add_argument("--repeat", type=int,
- help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
- help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
- help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(
- database='movr',
- user='root',
- host=args.host,
- port=26257,
- sslmode='require',
- sslrootcert='certs/ca.crt',
- sslkey='certs/client.root.key',
- sslcert='certs/client.root.crt'
-)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
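-# Compute the median of a list of latencies (returns None for an empty list).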
-def median(lst):
- n = len(lst)
- if n < 1:
- return None
- if n % 2 == 1:
- return sorted(lst)[n//2]
- else:
- return sum(sorted(lst)[n//2-1:n//2+1])/2.0
-
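-# Run the statement --repeat times, print the result rows on the first run
-# only, and record each run's latency in milliseconds.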
-times = list()
-for n in range(args.repeat):
- start = time.time()
- statement = args.statement
- cur.execute(statement)
- if n < 1:
- if cur.description is not None:
- colnames = [desc[0] for desc in cur.description]
- print("")
- print("Result:")
- print(colnames)
- rows = cur.fetchall()
- for row in rows:
- print([str(cell) for cell in row])
- end = time.time()
- times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
- print("Times (milliseconds):")
- print(times)
- print("")
-# print("Average time (milliseconds):")
-# print(float(sum(times))/len(times))
-# print("")
-print("Median time (milliseconds):")
-print(median(times))
-print("")
-if args.cumulative:
- print("Cumulative time (milliseconds):")
- print(sum(times))
- print("")
diff --git a/src/current/_includes/v20.1/performance/tuning.py b/src/current/_includes/v20.1/performance/tuning.py
deleted file mode 100644
index dcb567dad91..00000000000
--- a/src/current/_includes/v20.1/performance/tuning.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
- description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
- help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
- help="statement to execute")
-parser.add_argument("--repeat", type=int,
- help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
- help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
- help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(
- database='movr',
- user='root',
- host=args.host,
- port=26257
-)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
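-# Compute the median of a list of latencies (returns None for an empty list).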
-def median(lst):
- n = len(lst)
- if n < 1:
- return None
- if n % 2 == 1:
- return sorted(lst)[n//2]
- else:
- return sum(sorted(lst)[n//2-1:n//2+1])/2.0
-
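-# Run the statement --repeat times, print the result rows on the first run
-# only, and record each run's latency in milliseconds.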
-times = list()
-for n in range(args.repeat):
- start = time.time()
- statement = args.statement
- cur.execute(statement)
- if n < 1:
- if cur.description is not None:
- colnames = [desc[0] for desc in cur.description]
- print("")
- print("Result:")
- print(colnames)
- rows = cur.fetchall()
- for row in rows:
- print([str(cell) for cell in row])
- end = time.time()
- times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
- print("Times (milliseconds):")
- print(times)
- print("")
-# print("Average time (milliseconds):")
-# print(float(sum(times))/len(times))
-# print("")
-print("Median time (milliseconds):")
-print(median(times))
-print("")
-if args.cumulative:
- print("Cumulative time (milliseconds):")
- print(sum(times))
- print("")
diff --git a/src/current/_includes/v20.1/performance/use-hash-sharded-indexes.md b/src/current/_includes/v20.1/performance/use-hash-sharded-indexes.md
deleted file mode 100644
index ff487520578..00000000000
--- a/src/current/_includes/v20.1/performance/use-hash-sharded-indexes.md
+++ /dev/null
@@ -1 +0,0 @@
-For performance reasons, we [discourage indexing on sequential keys](indexes.html#indexing-columns). If, however, you are working with a table that must be indexed on sequential keys, you should use [hash-sharded indexes](indexes.html#hash-sharded-indexes). Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hotspots and improving write performance on sequentially-keyed indexes at a small cost to read performance.
\ No newline at end of file
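-
-As an illustration only (not part of the original include), a hash-sharded version of a sequential-key index in v20.1 might look like the following sketch; note that v20.1 gates this feature behind an experimental session setting:
-
-~~~ sql
-> SET experimental_enable_hash_sharded_indexes = true;
-> CREATE INDEX ON rides (start_time) USING HASH WITH BUCKET_COUNT = 8;
-~~~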
diff --git a/src/current/_includes/v20.1/prod-deployment/advertise-addr-join.md b/src/current/_includes/v20.1/prod-deployment/advertise-addr-join.md
deleted file mode 100644
index 67019d1fcea..00000000000
--- a/src/current/_includes/v20.1/prod-deployment/advertise-addr-join.md
+++ /dev/null
@@ -1,4 +0,0 @@
-Flag | Description
------|------------
-`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`. This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
-`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
diff --git a/src/current/_includes/v20.1/prod-deployment/aws-inbound-rules.md b/src/current/_includes/v20.1/prod-deployment/aws-inbound-rules.md
deleted file mode 100644
index b44a170c6f2..00000000000
--- a/src/current/_includes/v20.1/prod-deployment/aws-inbound-rules.md
+++ /dev/null
@@ -1,22 +0,0 @@
-#### Inter-node and load balancer-node communication
-
- Field | Value
--------|-------------------
- Port Range | **26257**
- Source | The ID of your security group (e.g., *sg-07ab277a*)
-
-#### Application data
-
- Field | Value
--------|-------------------
- Port Range | **26257**
- Source | Your application's IP ranges
-
-#### Admin UI
-
- Field | Value
--------|-------------------
- Port Range | **8080**
- Source | Your network's IP ranges
-
-You can set your network IP by selecting "My IP" in the Source field.
\ No newline at end of file
diff --git a/src/current/_includes/v20.1/prod-deployment/backup.sh b/src/current/_includes/v20.1/prod-deployment/backup.sh
deleted file mode 100644
index efcbd4c7041..00000000000
--- a/src/current/_includes/v20.1/prod-deployment/backup.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/bin/bash
-
-set -euo pipefail
-
-# This script creates full backups when run on the configured
-# day of the week and incremental backups when run on other days, and tracks
-# recently created backups in a file to pass as the base for incremental backups.
-
-what="" # Leave empty for cluster backup, or add "DATABASE database_name" to back up a database.
-base="/backups" # The URL where you want to store the backup.
-extra="" # Any additional parameters that need to be appended to the BACKUP URI e.g., AWS key params.
-recent=recent_backups.txt # File in which recent backups are tracked.
-backup_parameters=