diff --git a/.gitmodules b/.gitmodules
index e69de29bb2d..5b3eceab80e 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "contrib/pg_tde/src/libkmip"]
+	path = contrib/pg_tde/src/libkmip
+	url = https://github.com/Percona-Lab/libkmip.git
diff --git a/contrib/pg_tde/.gitignore b/contrib/pg_tde/.gitignore
new file mode 100644
index 00000000000..80806407112
--- /dev/null
+++ b/contrib/pg_tde/.gitignore
@@ -0,0 +1,14 @@
+*.so
+*.o
+*.frontend
+__pycache__
+
+/config.cache
+/config.log
+/config.status
+/autom4te.cache
+/configure~
+t/results
+
+# tools files
+typedefs-full.list
diff --git a/contrib/pg_tde/CONTRIBUTING.md b/contrib/pg_tde/CONTRIBUTING.md
new file mode 100644
index 00000000000..d0656d3a68c
--- /dev/null
+++ b/contrib/pg_tde/CONTRIBUTING.md
@@ -0,0 +1,124 @@
+# Contributing guide
+
+Welcome to `pg_tde` - the Transparent Database Encryption extension for PostgreSQL!
+
+We're glad that you would like to become a Percona community member and participate in keeping open source open.
+
+You can contribute in one of the following ways:
+
+1. Reach us on our [Forums](https://forums.percona.com/c/postgresql/pg-tde-transparent-data-encryption-tde/82).
+2. [Submit a bug report or a feature request](#submit-a-bug-report-or-a-feature-request).
+3. [Submit a pull request (PR) with the code patch](#submit-a-pull-request).
+4. [Contribute to documentation](#contributing-to-documentation).
+
+By contributing, you agree to the [Percona Community code of conduct](https://github.com/percona/community/blob/main/content/contribute/coc.md).
+
+
+## Submit a bug report or a feature request
+
+All bug reports, enhancements and feature requests are tracked in the [Jira issue tracker](https://jira.percona.com/projects/PG). If you would like to suggest a new feature or an improvement, or you have found a bug in `pg_tde`, please submit a report to the [PG project](https://jira.percona.com/projects/PG/issues).
+
+Start by searching the open tickets for a similar report. If you find that someone else has already reported your issue, you can upvote that report to increase its visibility.
+
+If there is no existing report, submit your report following these steps:
+
+1. Sign in to the [Jira issue tracker](https://jira.percona.com/projects/PG/issues). You will need to create an account if you do not have one.
+2. In the _Summary_, _Description_, _Steps To Reproduce_, and _Affects Version_ fields, describe the problem you have detected or the idea you have for a new feature or improvement.
+3. As a general rule of thumb, try to create bug reports that are:
+
+   * Reproducible: describe the steps to reproduce the problem.
+   * Unique: check whether a JIRA ticket describing the problem already exists.
+   * Scoped to a single bug: report only one bug per JIRA ticket.
+
+## Submit a pull request
+
+Though not mandatory, we encourage you to first check for a bug report among the Jira issues and in the PR list: perhaps the bug has already been addressed.
+
+For feature requests and enhancements, we do ask you to create a Jira issue, describe your idea and discuss the design with us. This way we can align your ideas with our vision for product development.
+
+If the bug hasn't been reported or addressed, or we've agreed with you on the enhancement implementation, do the following:
+
+1. [Fork](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo) this repository.
+2. Clone this repository to your machine.
+3. Create a separate branch for your changes.
If you work on a Jira issue, please include the issue number in the branch name so it reads as `PG-1234-my_branch`. This makes it easier to track your contribution.
+4. Make your changes. Please follow the guidelines outlined in the [PostgreSQL Coding Standard](https://www.postgresql.org/docs/current/source.html) to improve code readability. You can also reformat your changes with the project's `indent` Makefile target, as sketched after the example below.
+
+   .vimrc configuration example
+
+   ```
+   set nocompatible " choose no compatibility with legacy vi
+   syntax enable
+   set tabstop=4
+   set background=light
+   set textwidth=80
+   set colorcolumn=80
+   let g:filestyle_ignore_patterns = ['^\t* \{1,3}\S']
+   highlight Normal ctermbg=15
+   highlight ColorColumn ctermbg=52
+   ```
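+
+   Once your changes are in place, you can reformat them to match the PostgreSQL code style with the `update-typedefs` and `indent` targets defined in `contrib/pg_tde/Makefile`. A minimal sketch, assuming `wget` and the PostgreSQL `pgindent` tool are available on your `PATH`:
+
+   ```sh
+   # Run from the contrib/pg_tde directory.
+   # Merge the core PostgreSQL typedefs with this project's typedefs.list:
+   make update-typedefs
+   # Reformat the sources with pgindent using the merged list:
+   make indent
+   ```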
+
+5. Test your changes locally. See the [Running tests](#running-tests) section for more information.
+6. Update the documentation describing your changes. See the [Contributing to documentation](#contributing-to-documentation) section for details.
+7. Commit the changes. Add the Jira issue number at the beginning of your message subject, so that it reads as `PG-1234: My commit message`. Follow this pattern for your commits:
+
+   ```
+   PG-1234: Main commit message.
+
+   Details of fix.
+   ```
+
+   The [commit message guidelines](https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53) will help you with writing great commit messages.
+
+8. Open a pull request to Percona.
+9. Our team will review your code and, if everything is correct, merge it. Otherwise, we will contact you for additional information or with a request to make changes.
+
+### Building pg_tde
+
+To build `pg_tde` from source code, you need the following:
+
+* git
+* make
+* gcc
+* pg_config
+
+Refer to the [Building from source code](https://github.com/percona/pg_tde?tab=readme-ov-file#building-from-sources-for-community-postgresql) section for guidelines.
+
+
+### Running tests
+
+As you work, periodically run the tests to check that your changes don't break existing code.
+
+You can find the tests in the `sql` directory.
+
+#### Run manually
+
+1. Change directory to `pg_tde`.
+
+**NOTE**: Make sure the `postgres` user is the owner of the `pg_tde` directory.
+
+2. Start the tests:
+   1. If you built PostgreSQL from PGDG, use the following command:
+
+      ```sh
+      make installcheck
+      ```
+
+   2. If you installed the PostgreSQL server from Percona Distribution for PostgreSQL, use the following command:
+
+      ```sh
+      sudo su postgres bash -c 'make installcheck USE_PGXS=1'
+      ```
+
+#### Run automatically
+
+The tests run automatically with GitHub Actions once you commit and push your changes. Make sure all tests pass successfully before you proceed.
+
+
+## Contributing to documentation
+
+`pg_tde` documentation is maintained in the `documentation` directory. Please read the [Contributing guide](https://github.com/percona/pg_tde/blob/main/documentation/CONTRIBUTING.md) for guidelines on how you can contribute to the docs.
+
+## After your pull request is merged
+
+Once your pull request is merged, you are an official Percona Community Contributor. Welcome to the community!
diff --git a/contrib/pg_tde/LICENSE b/contrib/pg_tde/LICENSE
new file mode 100644
index 00000000000..a6d04899b4f
--- /dev/null
+++ b/contrib/pg_tde/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 Percona LLC
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/contrib/pg_tde/Makefile b/contrib/pg_tde/Makefile
new file mode 100644
index 00000000000..4b6a32cbbf6
--- /dev/null
+++ b/contrib/pg_tde/Makefile
@@ -0,0 +1,91 @@
+# contrib/pg_tde/Makefile
+
+PGFILEDESC = "pg_tde access method"
+MODULE_big = pg_tde
+EXTENSION = pg_tde
+DATA = pg_tde--1.0-beta2.sql
+
+REGRESS_OPTS = --temp-config $(top_srcdir)/contrib/pg_tde/pg_tde.conf
+REGRESS = toast_decrypt_basic \
+toast_extended_storage_basic \
+move_large_tuples_basic \
+non_sorted_off_compact_basic \
+update_compare_indexes_basic \
+pg_tde_is_encrypted_basic \
+test_issue_153_fix_basic \
+multi_insert_basic \
+update_basic \
+subtransaction_basic \
+trigger_on_view_basic \
+change_access_method_basic \
+insert_update_delete_basic \
+keyprovider_dependency_basic \
+vault_v2_test_basic \
+alter_index_basic \
+merge_join_basic \
+tablespace_basic
+TAP_TESTS = 1
+
+OBJS = src/encryption/enc_tde.o \
+src/encryption/enc_aes.o \
+src/access/pg_tde_slot.o \
+src/access/pg_tde_tdemap.o \
+src$(MAJORVERSION)/access/pg_tde_io.o \
+src$(MAJORVERSION)/access/pg_tdeam_visibility.o \
+src$(MAJORVERSION)/access/pg_tdeam.o \
+src$(MAJORVERSION)/access/pg_tdetoast.o \
+src$(MAJORVERSION)/access/pg_tde_prune.o \
+src$(MAJORVERSION)/access/pg_tde_vacuumlazy.o \
+src$(MAJORVERSION)/access/pg_tde_visibilitymap.o \
+src$(MAJORVERSION)/access/pg_tde_rewrite.o \
+src$(MAJORVERSION)/access/pg_tdeam_handler.o \
+src/access/pg_tde_ddl.o \
+src/access/pg_tde_xlog.o \
+src/access/pg_tde_xlog_encrypt.o \
+src/transam/pg_tde_xact_handler.o \
+src/keyring/keyring_curl.o \
+src/keyring/keyring_file.o \
+src/keyring/keyring_vault.o \
+src/keyring/keyring_kmip.o \
+src/keyring/keyring_kmip_ereport.o \
+src/keyring/keyring_api.o \
+src/catalog/tde_global_space.o \
+src/catalog/tde_keyring.o \
+src/catalog/tde_keyring_parse_opts.o \
+src/catalog/tde_principal_key.o \
+src/common/pg_tde_shmem.o \
+src/common/pg_tde_utils.o \
+src/smgr/pg_tde_smgr.o \
+src/pg_tde_defs.o \
+src/pg_tde_event_capture.o \
+src/pg_tde.o \
+src/libkmip/libkmip/src/kmip.o \
+src/libkmip/libkmip/src/kmip_bio.o \
+src/libkmip/libkmip/src/kmip_locate.o \
+src/libkmip/libkmip/src/kmip_memset.o
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+override PG_CPPFLAGS += -I$(CURDIR)/src/include -I$(CURDIR)/src/libkmip/libkmip/include -I$(CURDIR)/src$(MAJORVERSION)/include
+include $(PGXS)
+else
+subdir = contrib/pg_tde
+top_builddir = ../..
+override PG_CPPFLAGS += -I$(top_srcdir)/$(subdir)/src/include -I$(top_srcdir)/$(subdir)/src/libkmip/libkmip/include -I$(top_srcdir)/$(subdir)/src$(MAJORVERSION)/include
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
+
+override SHLIB_LINK += -lcurl -lcrypto -lssl
+
+# Fetches the typedefs list for PostgreSQL core and merges it with the typedefs defined in this project.
+# https://wiki.postgresql.org/wiki/Running_pgindent_on_non-core_code_or_development_code
+update-typedefs:
+	wget -q -O - "https://buildfarm.postgresql.org/cgi-bin/typedefs.pl?branch=REL_17_STABLE" | cat - typedefs.list | sort | uniq > typedefs-full.list
+
+# Indents the project sources.
+indent:
+	pgindent --typedefs=typedefs-full.list --excludes=pgindent_excludes .
+
+.PHONY: update-typedefs indent
diff --git a/contrib/pg_tde/Makefile.tools b/contrib/pg_tde/Makefile.tools
new file mode 100644
index 00000000000..bf34e129dc9
--- /dev/null
+++ b/contrib/pg_tde/Makefile.tools
@@ -0,0 +1,23 @@
+TDE_OBJS = \
+	src/access/pg_tde_tdemap.frontend \
+	src/access/pg_tde_xlog_encrypt.frontend \
+	src/catalog/tde_global_space.frontend \
+	src/catalog/tde_keyring.frontend \
+	src/catalog/tde_keyring_parse_opts.frontend \
+	src/catalog/tde_principal_key.frontend \
+	src/common/pg_tde_utils.frontend \
+	src/encryption/enc_aes.frontend \
+	src/encryption/enc_tde.frontend \
+	src/keyring/keyring_api.frontend \
+	src/keyring/keyring_curl.frontend \
+	src/keyring/keyring_file.frontend \
+	src/keyring/keyring_vault.frontend \
+	src/keyring/keyring_kmip.frontend \
+	src/keyring/keyring_kmip_ereport.frontend \
+	src/libkmip/libkmip/src/kmip.frontend \
+	src/libkmip/libkmip/src/kmip_bio.frontend \
+	src/libkmip/libkmip/src/kmip_locate.frontend \
+	src/libkmip/libkmip/src/kmip_memset.frontend
+
+%.frontend: %.c
+	$(CC) $(CPPFLAGS) -c $< -o $@
\ No newline at end of file
diff --git a/contrib/pg_tde/README.md b/contrib/pg_tde/README.md
new file mode 100644
index 00000000000..d9ca4f0c9a3
--- /dev/null
+++ b/contrib/pg_tde/README.md
@@ -0,0 +1,172 @@
+[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/percona/pg_tde/badge)](https://scorecard.dev/viewer/?uri=github.com/percona/pg_tde)
+[![Forum](https://img.shields.io/badge/Forum-join-brightgreen)](https://forums.percona.com/)
+
+# pg_tde: Transparent Database Encryption for PostgreSQL
+
+This PostgreSQL extension provides data-at-rest encryption. It is currently in an experimental phase and is under active development. [We need your feedback!](https://github.com/percona/pg_tde/discussions/151)
+
+## Table of contents
+1. [Overview](#overview)
+2. [Documentation](#documentation)
+3. [Percona Server for PostgreSQL](#percona-server-for-postgresql)
+4. [Build from sources](#building-from-sources-for-community-postgresql)
+5. [Run in docker](#run-in-docker)
+6. [Setting up](#setting-up)
+7. [Helper functions](#helper-functions)
+
+## Overview
+Transparent Data Encryption offers encryption at the file level and solves the problem of protecting data at rest. The encryption is transparent to users, allowing them to access and manipulate the data without worrying about the encryption process. As a key provider, the extension supports the keyring file and [HashiCorp Vault](https://www.vaultproject.io/).
+
+### This extension provides two `access methods` with different options:
+
+#### `tde_heap_basic` access method
+- Works with community PostgreSQL 16 and 17 or with [Percona Server for PostgreSQL 17](https://docs.percona.com/postgresql/17/postgresql-server.html)
+- Encrypts tuples and WAL
+- **Doesn't** encrypt indexes, temporary files, statistics
+- CPU-expensive, as it decrypts pages each time they are read from the buffer pool
+
+#### `tde_heap` access method
+- Works only with [Percona Server for PostgreSQL 17](https://docs.percona.com/postgresql/17/postgresql-server.html)
+- Uses extended Storage Manager and WAL APIs
+- Encrypts tuples, WAL and indexes
+- **Doesn't** encrypt temporary files and statistics **yet**
+- Faster and cheaper than `tde_heap_basic`
+
+## Documentation
+
+Comprehensive documentation for `pg_tde` is available at https://percona.github.io/pg_tde/.
+
+## Percona Server for PostgreSQL
+
+Percona provides binary packages of the `pg_tde` extension only for Percona Server for PostgreSQL.
Learn how to install them or build `pg_tde` from sources for PSPG in the [documentation](https://percona.github.io/pg_tde/main/install.html).
+
+## Building from sources for community PostgreSQL
+ 1. Install the required dependencies (replace XX with 16 or 17):
+    - On Debian and Ubuntu:
+    ```sh
+    sudo apt install make gcc autoconf git libcurl4-openssl-dev postgresql-server-dev-XX
+    ```
+
+    - On RHEL 8 compatible OS:
+    ```sh
+    sudo yum install epel-release
+    yum --enablerepo=powertools install git make gcc autoconf libcurl-devel perl-IPC-Run redhat-rpm-config openssl-devel postgresqlXX-devel
+    ```
+
+    - On macOS:
+    ```sh
+    brew install make autoconf curl gettext postgresql@XX
+    ```
+
+ 2. Install or build PostgreSQL 16 or 17.
+ 3. If PostgreSQL is installed in a non-standard directory, set the `PG_CONFIG` environment variable to point to the `pg_config` executable.
+
+ 4. Clone the repository:
+
+    ```sh
+    git clone https://github.com/percona/pg_tde
+    ```
+
+ 5. Compile and install the extension:
+
+    ```sh
+    cd pg_tde
+    make USE_PGXS=1
+    sudo make USE_PGXS=1 install
+    ```
+
+## Run in Docker
+
+There is a [docker image](https://hub.docker.com/r/perconalab/pg_tde) with `pg_tde` based on community [PostgreSQL 16](https://hub.docker.com/_/postgres):
+
+```
+docker run --name pg-tde -e POSTGRES_PASSWORD=mysecretpassword -d perconalab/pg_tde
+```
+The Dockerfile is available [here](https://github.com/percona/pg_tde/blob/main/docker/Dockerfile).
+
+
+_See [Make Builds for Developers](https://github.com/percona/pg_tde/wiki/Make-builds-for-developers) for more info on the build infrastructure._
+
+## Setting up
+
+ 1. Add the extension to `shared_preload_libraries`:
+    1. Via the configuration file `postgresql.conf`:
+    ```
+    shared_preload_libraries=pg_tde
+    ```
+    2. Via SQL, using the [ALTER SYSTEM](https://www.postgresql.org/docs/current/sql-altersystem.html) command:
+    ```sql
+    ALTER SYSTEM SET shared_preload_libraries = 'pg_tde';
+    ```
+ 2. Start or restart the `postgresql` instance to apply the changes.
+    * On Debian and Ubuntu:
+
+    ```sh
+    sudo systemctl restart postgresql.service
+    ```
+
+    * On RHEL 8 compatible OS (replace XX with your version):
+    ```sh
+    sudo systemctl restart postgresql-XX.service
+    ```
+ 3. [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) with SQL (requires superuser or database owner privileges):
+
+    ```sql
+    CREATE EXTENSION pg_tde;
+    ```
+ 4. Create a key provider. Currently `pg_tde` supports `File` and `Vault-V2` key providers. You can add the required key provider using one of the following functions.
+
+
+    ```sql
+    -- For Vault-V2 key provider
+    -- pg_tde_add_key_provider_vault_v2(provider_name, vault_token, vault_url, vault_mount_path, vault_ca_path)
+    SELECT pg_tde_add_key_provider_vault_v2(
+        'vault-provider',
+        json_object( 'type' VALUE 'remote', 'url' VALUE 'http://localhost:8888/token' ),
+        json_object( 'type' VALUE 'remote', 'url' VALUE 'http://localhost:8888/url' ),
+        to_json('secret'::text), NULL);
+
+    -- For File key provider
+    -- pg_tde_add_key_provider_file(provider_name, file_path);
+    SELECT pg_tde_add_key_provider_file('file','/tmp/pgkeyring');
+    ```
+
+    **Note: The `File` provider is intended for development and stores the keys unencrypted in the specified data file.**
+
+ 5. Set the principal key for the database using the `pg_tde_set_principal_key` function.
+
+    ```sql
+    -- pg_tde_set_principal_key(principal_key_name, provider_name);
+    SELECT pg_tde_set_principal_key('my-principal-key','file');
+    ```
+
+ 6. Specify the `tde_heap_basic` access method during table creation:
+    ```sql
+    CREATE TABLE albums (
+        album_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+        artist_id INTEGER,
+        title TEXT NOT NULL,
+        released DATE NOT NULL
+    ) USING tde_heap_basic;
+    ```
+ 7. You can also encrypt an existing table. This requires rewriting the table, so for large tables it might take a considerable amount of time.
+    ```sql
+    ALTER TABLE table_name SET ACCESS METHOD tde_heap_basic;
+    ```
+
+
+## Latest test release
+
+To download the latest build of the main branch, use the `HEAD` release from [releases](https://github.com/percona/pg_tde/releases).
+
+Builds are available in a tar.gz format, containing only the required files, and as a deb package.
+The deb package is built against the pgdg16 release, but this dependency is not yet enforced in the package.
+
+
+## Helper functions
+
+The extension provides the following helper functions:
+
+### pg_tde_is_encrypted(tablename)
+
+Returns `t` if the table is encrypted (uses the `tde_heap_basic` access method), or `f` otherwise.
diff --git a/contrib/pg_tde/SECURITY.md b/contrib/pg_tde/SECURITY.md
new file mode 100644
index 00000000000..5300d156f90
--- /dev/null
+++ b/contrib/pg_tde/SECURITY.md
@@ -0,0 +1,24 @@
+# Security Policy
+
+## Supported Versions
+
+The pg_tde project follows a rolling release strategy, so all security updates go into new versions.
+
+## Reporting a Vulnerability
+
+Please report any vulnerabilities to our project in [Jira](https://perconadev.atlassian.net/jira/software/c/projects/PG/issues).
+
+If the vulnerability is accepted and confirmed by our experts, you should normally expect us to deliver
+a version with a fix according to the timelines provided below:
+
+For Percona-created software (our engineers wrote the code):
+
+- Low/Medium: 120 days
+- High: 90 days
+- Critical: ASAP, but should not exceed 30 days
+
+For non-Percona-created software (upstream provided/packaged), from the time the vendor releases a patch:
+
+- Low/Medium: 2nd release from current version
+- High: Next release
+- Critical: Hotfix, or no later than the next release (our regular release cadence is once every month)
diff --git a/contrib/pg_tde/code-of-conduct.md b/contrib/pg_tde/code-of-conduct.md
new file mode 100644
index 00000000000..842500a8c7e
--- /dev/null
+++ b/contrib/pg_tde/code-of-conduct.md
@@ -0,0 +1,5 @@
+# Percona Code of Conduct
+
+All Percona Products follow the [Percona Community Code of Conduct](https://github.com/percona/community/blob/main/content/contribute/coc.md).
+
+If you notice any unacceptable behavior, let us know as soon as possible by writing to . We will respond within 48 hours.
diff --git a/contrib/pg_tde/data/tenk.data b/contrib/pg_tde/data/tenk.data new file mode 100644 index 00000000000..c9064c9c032 --- /dev/null +++ b/contrib/pg_tde/data/tenk.data @@ -0,0 +1,10000 @@ +8800 0 0 0 0 0 0 800 800 3800 8800 0 1 MAAAAA AAAAAA AAAAxx +1891 1 1 3 1 11 91 891 1891 1891 1891 182 183 TUAAAA BAAAAA HHHHxx +3420 2 0 0 0 0 20 420 1420 3420 3420 40 41 OBAAAA CAAAAA OOOOxx +9850 3 0 2 0 10 50 850 1850 4850 9850 100 101 WOAAAA DAAAAA VVVVxx +7164 4 0 0 4 4 64 164 1164 2164 7164 128 129 OPAAAA EAAAAA AAAAxx +8009 5 1 1 9 9 9 9 9 3009 8009 18 19 BWAAAA FAAAAA HHHHxx +5057 6 1 1 7 17 57 57 1057 57 5057 114 115 NMAAAA GAAAAA OOOOxx +6701 7 1 1 1 1 1 701 701 1701 6701 2 3 TXAAAA HAAAAA VVVVxx +4321 8 1 1 1 1 21 321 321 4321 4321 42 43 FKAAAA IAAAAA AAAAxx +3043 9 1 3 3 3 43 43 1043 3043 3043 86 87 BNAAAA JAAAAA HHHHxx +1314 10 0 2 4 14 14 314 1314 1314 1314 28 29 OYAAAA KAAAAA OOOOxx +1504 11 0 0 4 4 4 504 1504 1504 1504 8 9 WFAAAA LAAAAA VVVVxx +5222 12 0 2 2 2 22 222 1222 222 5222 44 45 WSAAAA MAAAAA AAAAxx +6243 13 1 3 3 3 43 243 243 1243 6243 86 87 DGAAAA NAAAAA HHHHxx +5471 14 1 3 1 11 71 471 1471 471 5471 142 143 LCAAAA OAAAAA OOOOxx +5006 15 0 2 6 6 6 6 1006 6 5006 12 13 OKAAAA PAAAAA VVVVxx +5387 16 1 3 7 7 87 387 1387 387 5387 174 175 FZAAAA QAAAAA AAAAxx +5785 17 1 1 5 5 85 785 1785 785 5785 170 171 NOAAAA RAAAAA HHHHxx +6621 18 1 1 1 1 21 621 621 1621 6621 42 43 RUAAAA SAAAAA OOOOxx +6969 19 1 1 9 9 69 969 969 1969 6969 138 139 BIAAAA TAAAAA VVVVxx +9460 20 0 0 0 0 60 460 1460 4460 9460 120 121 WZAAAA UAAAAA AAAAxx +59 21 1 3 9 19 59 59 59 59 59 118 119 HCAAAA VAAAAA HHHHxx +8020 22 0 0 0 0 20 20 20 3020 8020 40 41 MWAAAA WAAAAA OOOOxx +7695 23 1 3 5 15 95 695 1695 2695 7695 190 191 ZJAAAA XAAAAA VVVVxx +3442 24 0 2 2 2 42 442 1442 3442 3442 84 85 KCAAAA YAAAAA AAAAxx +5119 25 1 3 9 19 19 119 1119 119 5119 38 39 XOAAAA ZAAAAA HHHHxx +646 26 0 2 6 6 46 646 646 646 646 92 93 WYAAAA ABAAAA OOOOxx +9605 27 1 1 5 5 5 605 1605 4605 9605 10 11 LFAAAA BBAAAA VVVVxx +263 28 1 3 3 3 63 263 263 263 263 126 127 DKAAAA CBAAAA AAAAxx +3269 29 1 1 9 9 69 269 1269 3269 3269 138 139 TVAAAA DBAAAA HHHHxx +1839 30 1 3 9 19 39 839 1839 1839 1839 78 79 TSAAAA EBAAAA OOOOxx +9144 31 0 0 4 4 44 144 1144 4144 9144 88 89 SNAAAA FBAAAA VVVVxx +2513 32 1 1 3 13 13 513 513 2513 2513 26 27 RSAAAA GBAAAA AAAAxx +8850 33 0 2 0 10 50 850 850 3850 8850 100 101 KCAAAA HBAAAA HHHHxx +236 34 0 0 6 16 36 236 236 236 236 72 73 CJAAAA IBAAAA OOOOxx +3162 35 0 2 2 2 62 162 1162 3162 3162 124 125 QRAAAA JBAAAA VVVVxx +4380 36 0 0 0 0 80 380 380 4380 4380 160 161 MMAAAA KBAAAA AAAAxx +8095 37 1 3 5 15 95 95 95 3095 8095 190 191 JZAAAA LBAAAA HHHHxx +209 38 1 1 9 9 9 209 209 209 209 18 19 BIAAAA MBAAAA OOOOxx +3055 39 1 3 5 15 55 55 1055 3055 3055 110 111 NNAAAA NBAAAA VVVVxx +6921 40 1 1 1 1 21 921 921 1921 6921 42 43 FGAAAA OBAAAA AAAAxx +7046 41 0 2 6 6 46 46 1046 2046 7046 92 93 ALAAAA PBAAAA HHHHxx +7912 42 0 0 2 12 12 912 1912 2912 7912 24 25 ISAAAA QBAAAA OOOOxx +7267 43 1 3 7 7 67 267 1267 2267 7267 134 135 NTAAAA RBAAAA VVVVxx +3599 44 1 3 9 19 99 599 1599 3599 3599 198 199 LIAAAA SBAAAA AAAAxx +923 45 1 3 3 3 23 923 923 923 923 46 47 NJAAAA TBAAAA HHHHxx +1437 46 1 1 7 17 37 437 1437 1437 1437 74 75 HDAAAA UBAAAA OOOOxx +6439 47 1 3 9 19 39 439 439 1439 6439 78 79 RNAAAA VBAAAA VVVVxx +6989 48 1 1 9 9 89 989 989 1989 6989 178 179 VIAAAA WBAAAA AAAAxx +8798 49 0 2 8 18 98 798 798 3798 8798 196 197 KAAAAA XBAAAA HHHHxx +5960 50 0 0 0 0 60 960 1960 960 5960 120 121 GVAAAA YBAAAA OOOOxx +5832 51 0 0 2 12 
32 832 1832 832 5832 64 65 IQAAAA ZBAAAA VVVVxx +6066 52 0 2 6 6 66 66 66 1066 6066 132 133 IZAAAA ACAAAA AAAAxx +322 53 0 2 2 2 22 322 322 322 322 44 45 KMAAAA BCAAAA HHHHxx +8321 54 1 1 1 1 21 321 321 3321 8321 42 43 BIAAAA CCAAAA OOOOxx +734 55 0 2 4 14 34 734 734 734 734 68 69 GCAAAA DCAAAA VVVVxx +688 56 0 0 8 8 88 688 688 688 688 176 177 MAAAAA ECAAAA AAAAxx +4212 57 0 0 2 12 12 212 212 4212 4212 24 25 AGAAAA FCAAAA HHHHxx +9653 58 1 1 3 13 53 653 1653 4653 9653 106 107 HHAAAA GCAAAA OOOOxx +2677 59 1 1 7 17 77 677 677 2677 2677 154 155 ZYAAAA HCAAAA VVVVxx +5423 60 1 3 3 3 23 423 1423 423 5423 46 47 PAAAAA ICAAAA AAAAxx +2592 61 0 0 2 12 92 592 592 2592 2592 184 185 SVAAAA JCAAAA HHHHxx +3233 62 1 1 3 13 33 233 1233 3233 3233 66 67 JUAAAA KCAAAA OOOOxx +5032 63 0 0 2 12 32 32 1032 32 5032 64 65 OLAAAA LCAAAA VVVVxx +2525 64 1 1 5 5 25 525 525 2525 2525 50 51 DTAAAA MCAAAA AAAAxx +4450 65 0 2 0 10 50 450 450 4450 4450 100 101 EPAAAA NCAAAA HHHHxx +5778 66 0 2 8 18 78 778 1778 778 5778 156 157 GOAAAA OCAAAA OOOOxx +5852 67 0 0 2 12 52 852 1852 852 5852 104 105 CRAAAA PCAAAA VVVVxx +5404 68 0 0 4 4 4 404 1404 404 5404 8 9 WZAAAA QCAAAA AAAAxx +6223 69 1 3 3 3 23 223 223 1223 6223 46 47 JFAAAA RCAAAA HHHHxx +6133 70 1 1 3 13 33 133 133 1133 6133 66 67 XBAAAA SCAAAA OOOOxx +9112 71 0 0 2 12 12 112 1112 4112 9112 24 25 MMAAAA TCAAAA VVVVxx +7575 72 1 3 5 15 75 575 1575 2575 7575 150 151 JFAAAA UCAAAA AAAAxx +7414 73 0 2 4 14 14 414 1414 2414 7414 28 29 EZAAAA VCAAAA HHHHxx +9741 74 1 1 1 1 41 741 1741 4741 9741 82 83 RKAAAA WCAAAA OOOOxx +3767 75 1 3 7 7 67 767 1767 3767 3767 134 135 XOAAAA XCAAAA VVVVxx +9372 76 0 0 2 12 72 372 1372 4372 9372 144 145 MWAAAA YCAAAA AAAAxx +8976 77 0 0 6 16 76 976 976 3976 8976 152 153 GHAAAA ZCAAAA HHHHxx +4071 78 1 3 1 11 71 71 71 4071 4071 142 143 PAAAAA ADAAAA OOOOxx +1311 79 1 3 1 11 11 311 1311 1311 1311 22 23 LYAAAA BDAAAA VVVVxx +2604 80 0 0 4 4 4 604 604 2604 2604 8 9 EWAAAA CDAAAA AAAAxx +8840 81 0 0 0 0 40 840 840 3840 8840 80 81 ACAAAA DDAAAA HHHHxx +567 82 1 3 7 7 67 567 567 567 567 134 135 VVAAAA EDAAAA OOOOxx +5215 83 1 3 5 15 15 215 1215 215 5215 30 31 PSAAAA FDAAAA VVVVxx +5474 84 0 2 4 14 74 474 1474 474 5474 148 149 OCAAAA GDAAAA AAAAxx +3906 85 0 2 6 6 6 906 1906 3906 3906 12 13 GUAAAA HDAAAA HHHHxx +1769 86 1 1 9 9 69 769 1769 1769 1769 138 139 BQAAAA IDAAAA OOOOxx +1454 87 0 2 4 14 54 454 1454 1454 1454 108 109 YDAAAA JDAAAA VVVVxx +6877 88 1 1 7 17 77 877 877 1877 6877 154 155 NEAAAA KDAAAA AAAAxx +6501 89 1 1 1 1 1 501 501 1501 6501 2 3 BQAAAA LDAAAA HHHHxx +934 90 0 2 4 14 34 934 934 934 934 68 69 YJAAAA MDAAAA OOOOxx +4075 91 1 3 5 15 75 75 75 4075 4075 150 151 TAAAAA NDAAAA VVVVxx +3180 92 0 0 0 0 80 180 1180 3180 3180 160 161 ISAAAA ODAAAA AAAAxx +7787 93 1 3 7 7 87 787 1787 2787 7787 174 175 NNAAAA PDAAAA HHHHxx +6401 94 1 1 1 1 1 401 401 1401 6401 2 3 FMAAAA QDAAAA OOOOxx +4244 95 0 0 4 4 44 244 244 4244 4244 88 89 GHAAAA RDAAAA VVVVxx +4591 96 1 3 1 11 91 591 591 4591 4591 182 183 PUAAAA SDAAAA AAAAxx +4113 97 1 1 3 13 13 113 113 4113 4113 26 27 FCAAAA TDAAAA HHHHxx +5925 98 1 1 5 5 25 925 1925 925 5925 50 51 XTAAAA UDAAAA OOOOxx +1987 99 1 3 7 7 87 987 1987 1987 1987 174 175 LYAAAA VDAAAA VVVVxx +8248 100 0 0 8 8 48 248 248 3248 8248 96 97 GFAAAA WDAAAA AAAAxx +4151 101 1 3 1 11 51 151 151 4151 4151 102 103 RDAAAA XDAAAA HHHHxx +8670 102 0 2 0 10 70 670 670 3670 8670 140 141 MVAAAA YDAAAA OOOOxx +6194 103 0 2 4 14 94 194 194 1194 6194 188 189 GEAAAA ZDAAAA VVVVxx +88 104 0 0 8 8 88 88 88 88 88 176 177 KDAAAA AEAAAA AAAAxx 
+4058 105 0 2 8 18 58 58 58 4058 4058 116 117 CAAAAA BEAAAA HHHHxx +2742 106 0 2 2 2 42 742 742 2742 2742 84 85 MBAAAA CEAAAA OOOOxx +8275 107 1 3 5 15 75 275 275 3275 8275 150 151 HGAAAA DEAAAA VVVVxx +4258 108 0 2 8 18 58 258 258 4258 4258 116 117 UHAAAA EEAAAA AAAAxx +6129 109 1 1 9 9 29 129 129 1129 6129 58 59 TBAAAA FEAAAA HHHHxx +7243 110 1 3 3 3 43 243 1243 2243 7243 86 87 PSAAAA GEAAAA OOOOxx +2392 111 0 0 2 12 92 392 392 2392 2392 184 185 AOAAAA HEAAAA VVVVxx +9853 112 1 1 3 13 53 853 1853 4853 9853 106 107 ZOAAAA IEAAAA AAAAxx +6064 113 0 0 4 4 64 64 64 1064 6064 128 129 GZAAAA JEAAAA HHHHxx +4391 114 1 3 1 11 91 391 391 4391 4391 182 183 XMAAAA KEAAAA OOOOxx +726 115 0 2 6 6 26 726 726 726 726 52 53 YBAAAA LEAAAA VVVVxx +6957 116 1 1 7 17 57 957 957 1957 6957 114 115 PHAAAA MEAAAA AAAAxx +3853 117 1 1 3 13 53 853 1853 3853 3853 106 107 FSAAAA NEAAAA HHHHxx +4524 118 0 0 4 4 24 524 524 4524 4524 48 49 ASAAAA OEAAAA OOOOxx +5330 119 0 2 0 10 30 330 1330 330 5330 60 61 AXAAAA PEAAAA VVVVxx +6671 120 1 3 1 11 71 671 671 1671 6671 142 143 PWAAAA QEAAAA AAAAxx +5314 121 0 2 4 14 14 314 1314 314 5314 28 29 KWAAAA REAAAA HHHHxx +9202 122 0 2 2 2 2 202 1202 4202 9202 4 5 YPAAAA SEAAAA OOOOxx +4596 123 0 0 6 16 96 596 596 4596 4596 192 193 UUAAAA TEAAAA VVVVxx +8951 124 1 3 1 11 51 951 951 3951 8951 102 103 HGAAAA UEAAAA AAAAxx +9902 125 0 2 2 2 2 902 1902 4902 9902 4 5 WQAAAA VEAAAA HHHHxx +1440 126 0 0 0 0 40 440 1440 1440 1440 80 81 KDAAAA WEAAAA OOOOxx +5339 127 1 3 9 19 39 339 1339 339 5339 78 79 JXAAAA XEAAAA VVVVxx +3371 128 1 3 1 11 71 371 1371 3371 3371 142 143 RZAAAA YEAAAA AAAAxx +4467 129 1 3 7 7 67 467 467 4467 4467 134 135 VPAAAA ZEAAAA HHHHxx +6216 130 0 0 6 16 16 216 216 1216 6216 32 33 CFAAAA AFAAAA OOOOxx +5364 131 0 0 4 4 64 364 1364 364 5364 128 129 IYAAAA BFAAAA VVVVxx +7547 132 1 3 7 7 47 547 1547 2547 7547 94 95 HEAAAA CFAAAA AAAAxx +4338 133 0 2 8 18 38 338 338 4338 4338 76 77 WKAAAA DFAAAA HHHHxx +3481 134 1 1 1 1 81 481 1481 3481 3481 162 163 XDAAAA EFAAAA OOOOxx +826 135 0 2 6 6 26 826 826 826 826 52 53 UFAAAA FFAAAA VVVVxx +3647 136 1 3 7 7 47 647 1647 3647 3647 94 95 HKAAAA GFAAAA AAAAxx +3337 137 1 1 7 17 37 337 1337 3337 3337 74 75 JYAAAA HFAAAA HHHHxx +3591 138 1 3 1 11 91 591 1591 3591 3591 182 183 DIAAAA IFAAAA OOOOxx +7192 139 0 0 2 12 92 192 1192 2192 7192 184 185 QQAAAA JFAAAA VVVVxx +1078 140 0 2 8 18 78 78 1078 1078 1078 156 157 MPAAAA KFAAAA AAAAxx +1310 141 0 2 0 10 10 310 1310 1310 1310 20 21 KYAAAA LFAAAA HHHHxx +9642 142 0 2 2 2 42 642 1642 4642 9642 84 85 WGAAAA MFAAAA OOOOxx +39 143 1 3 9 19 39 39 39 39 39 78 79 NBAAAA NFAAAA VVVVxx +8682 144 0 2 2 2 82 682 682 3682 8682 164 165 YVAAAA OFAAAA AAAAxx +1794 145 0 2 4 14 94 794 1794 1794 1794 188 189 ARAAAA PFAAAA HHHHxx +5630 146 0 2 0 10 30 630 1630 630 5630 60 61 OIAAAA QFAAAA OOOOxx +6748 147 0 0 8 8 48 748 748 1748 6748 96 97 OZAAAA RFAAAA VVVVxx +3766 148 0 2 6 6 66 766 1766 3766 3766 132 133 WOAAAA SFAAAA AAAAxx +6403 149 1 3 3 3 3 403 403 1403 6403 6 7 HMAAAA TFAAAA HHHHxx +175 150 1 3 5 15 75 175 175 175 175 150 151 TGAAAA UFAAAA OOOOxx +2179 151 1 3 9 19 79 179 179 2179 2179 158 159 VFAAAA VFAAAA VVVVxx +7897 152 1 1 7 17 97 897 1897 2897 7897 194 195 TRAAAA WFAAAA AAAAxx +2760 153 0 0 0 0 60 760 760 2760 2760 120 121 ECAAAA XFAAAA HHHHxx +1675 154 1 3 5 15 75 675 1675 1675 1675 150 151 LMAAAA YFAAAA OOOOxx +2564 155 0 0 4 4 64 564 564 2564 2564 128 129 QUAAAA ZFAAAA VVVVxx +157 156 1 1 7 17 57 157 157 157 157 114 115 BGAAAA AGAAAA AAAAxx +8779 157 1 3 9 19 79 779 779 3779 8779 158 159 
RZAAAA BGAAAA HHHHxx +9591 158 1 3 1 11 91 591 1591 4591 9591 182 183 XEAAAA CGAAAA OOOOxx +8732 159 0 0 2 12 32 732 732 3732 8732 64 65 WXAAAA DGAAAA VVVVxx +139 160 1 3 9 19 39 139 139 139 139 78 79 JFAAAA EGAAAA AAAAxx +5372 161 0 0 2 12 72 372 1372 372 5372 144 145 QYAAAA FGAAAA HHHHxx +1278 162 0 2 8 18 78 278 1278 1278 1278 156 157 EXAAAA GGAAAA OOOOxx +4697 163 1 1 7 17 97 697 697 4697 4697 194 195 RYAAAA HGAAAA VVVVxx +8610 164 0 2 0 10 10 610 610 3610 8610 20 21 ETAAAA IGAAAA AAAAxx +8180 165 0 0 0 0 80 180 180 3180 8180 160 161 QCAAAA JGAAAA HHHHxx +2399 166 1 3 9 19 99 399 399 2399 2399 198 199 HOAAAA KGAAAA OOOOxx +615 167 1 3 5 15 15 615 615 615 615 30 31 RXAAAA LGAAAA VVVVxx +7629 168 1 1 9 9 29 629 1629 2629 7629 58 59 LHAAAA MGAAAA AAAAxx +7628 169 0 0 8 8 28 628 1628 2628 7628 56 57 KHAAAA NGAAAA HHHHxx +4659 170 1 3 9 19 59 659 659 4659 4659 118 119 FXAAAA OGAAAA OOOOxx +5865 171 1 1 5 5 65 865 1865 865 5865 130 131 PRAAAA PGAAAA VVVVxx +3973 172 1 1 3 13 73 973 1973 3973 3973 146 147 VWAAAA QGAAAA AAAAxx +552 173 0 0 2 12 52 552 552 552 552 104 105 GVAAAA RGAAAA HHHHxx +708 174 0 0 8 8 8 708 708 708 708 16 17 GBAAAA SGAAAA OOOOxx +3550 175 0 2 0 10 50 550 1550 3550 3550 100 101 OGAAAA TGAAAA VVVVxx +5547 176 1 3 7 7 47 547 1547 547 5547 94 95 JFAAAA UGAAAA AAAAxx +489 177 1 1 9 9 89 489 489 489 489 178 179 VSAAAA VGAAAA HHHHxx +3794 178 0 2 4 14 94 794 1794 3794 3794 188 189 YPAAAA WGAAAA OOOOxx +9479 179 1 3 9 19 79 479 1479 4479 9479 158 159 PAAAAA XGAAAA VVVVxx +6435 180 1 3 5 15 35 435 435 1435 6435 70 71 NNAAAA YGAAAA AAAAxx +5120 181 0 0 0 0 20 120 1120 120 5120 40 41 YOAAAA ZGAAAA HHHHxx +3615 182 1 3 5 15 15 615 1615 3615 3615 30 31 BJAAAA AHAAAA OOOOxx +8399 183 1 3 9 19 99 399 399 3399 8399 198 199 BLAAAA BHAAAA VVVVxx +2155 184 1 3 5 15 55 155 155 2155 2155 110 111 XEAAAA CHAAAA AAAAxx +6690 185 0 2 0 10 90 690 690 1690 6690 180 181 IXAAAA DHAAAA HHHHxx +1683 186 1 3 3 3 83 683 1683 1683 1683 166 167 TMAAAA EHAAAA OOOOxx +6302 187 0 2 2 2 2 302 302 1302 6302 4 5 KIAAAA FHAAAA VVVVxx +516 188 0 0 6 16 16 516 516 516 516 32 33 WTAAAA GHAAAA AAAAxx +3901 189 1 1 1 1 1 901 1901 3901 3901 2 3 BUAAAA HHAAAA HHHHxx +6938 190 0 2 8 18 38 938 938 1938 6938 76 77 WGAAAA IHAAAA OOOOxx +7484 191 0 0 4 4 84 484 1484 2484 7484 168 169 WBAAAA JHAAAA VVVVxx +7424 192 0 0 4 4 24 424 1424 2424 7424 48 49 OZAAAA KHAAAA AAAAxx +9410 193 0 2 0 10 10 410 1410 4410 9410 20 21 YXAAAA LHAAAA HHHHxx +1714 194 0 2 4 14 14 714 1714 1714 1714 28 29 YNAAAA MHAAAA OOOOxx +8278 195 0 2 8 18 78 278 278 3278 8278 156 157 KGAAAA NHAAAA VVVVxx +3158 196 0 2 8 18 58 158 1158 3158 3158 116 117 MRAAAA OHAAAA AAAAxx +2511 197 1 3 1 11 11 511 511 2511 2511 22 23 PSAAAA PHAAAA HHHHxx +2912 198 0 0 2 12 12 912 912 2912 2912 24 25 AIAAAA QHAAAA OOOOxx +2648 199 0 0 8 8 48 648 648 2648 2648 96 97 WXAAAA RHAAAA VVVVxx +9385 200 1 1 5 5 85 385 1385 4385 9385 170 171 ZWAAAA SHAAAA AAAAxx +7545 201 1 1 5 5 45 545 1545 2545 7545 90 91 FEAAAA THAAAA HHHHxx +8407 202 1 3 7 7 7 407 407 3407 8407 14 15 JLAAAA UHAAAA OOOOxx +5893 203 1 1 3 13 93 893 1893 893 5893 186 187 RSAAAA VHAAAA VVVVxx +7049 204 1 1 9 9 49 49 1049 2049 7049 98 99 DLAAAA WHAAAA AAAAxx +6812 205 0 0 2 12 12 812 812 1812 6812 24 25 ACAAAA XHAAAA HHHHxx +3649 206 1 1 9 9 49 649 1649 3649 3649 98 99 JKAAAA YHAAAA OOOOxx +9275 207 1 3 5 15 75 275 1275 4275 9275 150 151 TSAAAA ZHAAAA VVVVxx +1179 208 1 3 9 19 79 179 1179 1179 1179 158 159 JTAAAA AIAAAA AAAAxx +969 209 1 1 9 9 69 969 969 969 969 138 139 HLAAAA BIAAAA HHHHxx +7920 210 0 0 0 0 20 920 
1920 2920 7920 40 41 QSAAAA CIAAAA OOOOxx +998 211 0 2 8 18 98 998 998 998 998 196 197 KMAAAA DIAAAA VVVVxx +3958 212 0 2 8 18 58 958 1958 3958 3958 116 117 GWAAAA EIAAAA AAAAxx +6052 213 0 0 2 12 52 52 52 1052 6052 104 105 UYAAAA FIAAAA HHHHxx +8791 214 1 3 1 11 91 791 791 3791 8791 182 183 DAAAAA GIAAAA OOOOxx +5191 215 1 3 1 11 91 191 1191 191 5191 182 183 RRAAAA HIAAAA VVVVxx +4267 216 1 3 7 7 67 267 267 4267 4267 134 135 DIAAAA IIAAAA AAAAxx +2829 217 1 1 9 9 29 829 829 2829 2829 58 59 VEAAAA JIAAAA HHHHxx +6396 218 0 0 6 16 96 396 396 1396 6396 192 193 AMAAAA KIAAAA OOOOxx +9413 219 1 1 3 13 13 413 1413 4413 9413 26 27 BYAAAA LIAAAA VVVVxx +614 220 0 2 4 14 14 614 614 614 614 28 29 QXAAAA MIAAAA AAAAxx +4660 221 0 0 0 0 60 660 660 4660 4660 120 121 GXAAAA NIAAAA HHHHxx +8834 222 0 2 4 14 34 834 834 3834 8834 68 69 UBAAAA OIAAAA OOOOxx +2767 223 1 3 7 7 67 767 767 2767 2767 134 135 LCAAAA PIAAAA VVVVxx +2444 224 0 0 4 4 44 444 444 2444 2444 88 89 AQAAAA QIAAAA AAAAxx +4129 225 1 1 9 9 29 129 129 4129 4129 58 59 VCAAAA RIAAAA HHHHxx +3394 226 0 2 4 14 94 394 1394 3394 3394 188 189 OAAAAA SIAAAA OOOOxx +2705 227 1 1 5 5 5 705 705 2705 2705 10 11 BAAAAA TIAAAA VVVVxx +8499 228 1 3 9 19 99 499 499 3499 8499 198 199 XOAAAA UIAAAA AAAAxx +8852 229 0 0 2 12 52 852 852 3852 8852 104 105 MCAAAA VIAAAA HHHHxx +6174 230 0 2 4 14 74 174 174 1174 6174 148 149 MDAAAA WIAAAA OOOOxx +750 231 0 2 0 10 50 750 750 750 750 100 101 WCAAAA XIAAAA VVVVxx +8164 232 0 0 4 4 64 164 164 3164 8164 128 129 ACAAAA YIAAAA AAAAxx +4930 233 0 2 0 10 30 930 930 4930 4930 60 61 QHAAAA ZIAAAA HHHHxx +9904 234 0 0 4 4 4 904 1904 4904 9904 8 9 YQAAAA AJAAAA OOOOxx +7378 235 0 2 8 18 78 378 1378 2378 7378 156 157 UXAAAA BJAAAA VVVVxx +2927 236 1 3 7 7 27 927 927 2927 2927 54 55 PIAAAA CJAAAA AAAAxx +7155 237 1 3 5 15 55 155 1155 2155 7155 110 111 FPAAAA DJAAAA HHHHxx +1302 238 0 2 2 2 2 302 1302 1302 1302 4 5 CYAAAA EJAAAA OOOOxx +5904 239 0 0 4 4 4 904 1904 904 5904 8 9 CTAAAA FJAAAA VVVVxx +9687 240 1 3 7 7 87 687 1687 4687 9687 174 175 PIAAAA GJAAAA AAAAxx +3553 241 1 1 3 13 53 553 1553 3553 3553 106 107 RGAAAA HJAAAA HHHHxx +4447 242 1 3 7 7 47 447 447 4447 4447 94 95 BPAAAA IJAAAA OOOOxx +6878 243 0 2 8 18 78 878 878 1878 6878 156 157 OEAAAA JJAAAA VVVVxx +9470 244 0 2 0 10 70 470 1470 4470 9470 140 141 GAAAAA KJAAAA AAAAxx +9735 245 1 3 5 15 35 735 1735 4735 9735 70 71 LKAAAA LJAAAA HHHHxx +5967 246 1 3 7 7 67 967 1967 967 5967 134 135 NVAAAA MJAAAA OOOOxx +6601 247 1 1 1 1 1 601 601 1601 6601 2 3 XTAAAA NJAAAA VVVVxx +7631 248 1 3 1 11 31 631 1631 2631 7631 62 63 NHAAAA OJAAAA AAAAxx +3559 249 1 3 9 19 59 559 1559 3559 3559 118 119 XGAAAA PJAAAA HHHHxx +2247 250 1 3 7 7 47 247 247 2247 2247 94 95 LIAAAA QJAAAA OOOOxx +9649 251 1 1 9 9 49 649 1649 4649 9649 98 99 DHAAAA RJAAAA VVVVxx +808 252 0 0 8 8 8 808 808 808 808 16 17 CFAAAA SJAAAA AAAAxx +240 253 0 0 0 0 40 240 240 240 240 80 81 GJAAAA TJAAAA HHHHxx +5031 254 1 3 1 11 31 31 1031 31 5031 62 63 NLAAAA UJAAAA OOOOxx +9563 255 1 3 3 3 63 563 1563 4563 9563 126 127 VDAAAA VJAAAA VVVVxx +5656 256 0 0 6 16 56 656 1656 656 5656 112 113 OJAAAA WJAAAA AAAAxx +3886 257 0 2 6 6 86 886 1886 3886 3886 172 173 MTAAAA XJAAAA HHHHxx +2431 258 1 3 1 11 31 431 431 2431 2431 62 63 NPAAAA YJAAAA OOOOxx +5560 259 0 0 0 0 60 560 1560 560 5560 120 121 WFAAAA ZJAAAA VVVVxx +9065 260 1 1 5 5 65 65 1065 4065 9065 130 131 RKAAAA AKAAAA AAAAxx +8130 261 0 2 0 10 30 130 130 3130 8130 60 61 SAAAAA BKAAAA HHHHxx +4054 262 0 2 4 14 54 54 54 4054 4054 108 109 YZAAAA CKAAAA OOOOxx +873 263 1 1 
3 13 73 873 873 873 873 146 147 PHAAAA DKAAAA VVVVxx +3092 264 0 0 2 12 92 92 1092 3092 3092 184 185 YOAAAA EKAAAA AAAAxx +6697 265 1 1 7 17 97 697 697 1697 6697 194 195 PXAAAA FKAAAA HHHHxx +2452 266 0 0 2 12 52 452 452 2452 2452 104 105 IQAAAA GKAAAA OOOOxx +7867 267 1 3 7 7 67 867 1867 2867 7867 134 135 PQAAAA HKAAAA VVVVxx +3753 268 1 1 3 13 53 753 1753 3753 3753 106 107 JOAAAA IKAAAA AAAAxx +7834 269 0 2 4 14 34 834 1834 2834 7834 68 69 IPAAAA JKAAAA HHHHxx +5846 270 0 2 6 6 46 846 1846 846 5846 92 93 WQAAAA KKAAAA OOOOxx +7604 271 0 0 4 4 4 604 1604 2604 7604 8 9 MGAAAA LKAAAA VVVVxx +3452 272 0 0 2 12 52 452 1452 3452 3452 104 105 UCAAAA MKAAAA AAAAxx +4788 273 0 0 8 8 88 788 788 4788 4788 176 177 ECAAAA NKAAAA HHHHxx +8600 274 0 0 0 0 0 600 600 3600 8600 0 1 USAAAA OKAAAA OOOOxx +8511 275 1 3 1 11 11 511 511 3511 8511 22 23 JPAAAA PKAAAA VVVVxx +4452 276 0 0 2 12 52 452 452 4452 4452 104 105 GPAAAA QKAAAA AAAAxx +1709 277 1 1 9 9 9 709 1709 1709 1709 18 19 TNAAAA RKAAAA HHHHxx +3440 278 0 0 0 0 40 440 1440 3440 3440 80 81 ICAAAA SKAAAA OOOOxx +9188 279 0 0 8 8 88 188 1188 4188 9188 176 177 KPAAAA TKAAAA VVVVxx +3058 280 0 2 8 18 58 58 1058 3058 3058 116 117 QNAAAA UKAAAA AAAAxx +5821 281 1 1 1 1 21 821 1821 821 5821 42 43 XPAAAA VKAAAA HHHHxx +3428 282 0 0 8 8 28 428 1428 3428 3428 56 57 WBAAAA WKAAAA OOOOxx +3581 283 1 1 1 1 81 581 1581 3581 3581 162 163 THAAAA XKAAAA VVVVxx +7523 284 1 3 3 3 23 523 1523 2523 7523 46 47 JDAAAA YKAAAA AAAAxx +3131 285 1 3 1 11 31 131 1131 3131 3131 62 63 LQAAAA ZKAAAA HHHHxx +2404 286 0 0 4 4 4 404 404 2404 2404 8 9 MOAAAA ALAAAA OOOOxx +5453 287 1 1 3 13 53 453 1453 453 5453 106 107 TBAAAA BLAAAA VVVVxx +1599 288 1 3 9 19 99 599 1599 1599 1599 198 199 NJAAAA CLAAAA AAAAxx +7081 289 1 1 1 1 81 81 1081 2081 7081 162 163 JMAAAA DLAAAA HHHHxx +1750 290 0 2 0 10 50 750 1750 1750 1750 100 101 IPAAAA ELAAAA OOOOxx +5085 291 1 1 5 5 85 85 1085 85 5085 170 171 PNAAAA FLAAAA VVVVxx +9777 292 1 1 7 17 77 777 1777 4777 9777 154 155 BMAAAA GLAAAA AAAAxx +574 293 0 2 4 14 74 574 574 574 574 148 149 CWAAAA HLAAAA HHHHxx +5984 294 0 0 4 4 84 984 1984 984 5984 168 169 EWAAAA ILAAAA OOOOxx +7039 295 1 3 9 19 39 39 1039 2039 7039 78 79 TKAAAA JLAAAA VVVVxx +7143 296 1 3 3 3 43 143 1143 2143 7143 86 87 TOAAAA KLAAAA AAAAxx +5702 297 0 2 2 2 2 702 1702 702 5702 4 5 ILAAAA LLAAAA HHHHxx +362 298 0 2 2 2 62 362 362 362 362 124 125 YNAAAA MLAAAA OOOOxx +6997 299 1 1 7 17 97 997 997 1997 6997 194 195 DJAAAA NLAAAA VVVVxx +2529 300 1 1 9 9 29 529 529 2529 2529 58 59 HTAAAA OLAAAA AAAAxx +6319 301 1 3 9 19 19 319 319 1319 6319 38 39 BJAAAA PLAAAA HHHHxx +954 302 0 2 4 14 54 954 954 954 954 108 109 SKAAAA QLAAAA OOOOxx +3413 303 1 1 3 13 13 413 1413 3413 3413 26 27 HBAAAA RLAAAA VVVVxx +9081 304 1 1 1 1 81 81 1081 4081 9081 162 163 HLAAAA SLAAAA AAAAxx +5599 305 1 3 9 19 99 599 1599 599 5599 198 199 JHAAAA TLAAAA HHHHxx +4772 306 0 0 2 12 72 772 772 4772 4772 144 145 OBAAAA ULAAAA OOOOxx +1124 307 0 0 4 4 24 124 1124 1124 1124 48 49 GRAAAA VLAAAA VVVVxx +7793 308 1 1 3 13 93 793 1793 2793 7793 186 187 TNAAAA WLAAAA AAAAxx +4201 309 1 1 1 1 1 201 201 4201 4201 2 3 PFAAAA XLAAAA HHHHxx +7015 310 1 3 5 15 15 15 1015 2015 7015 30 31 VJAAAA YLAAAA OOOOxx +5936 311 0 0 6 16 36 936 1936 936 5936 72 73 IUAAAA ZLAAAA VVVVxx +4625 312 1 1 5 5 25 625 625 4625 4625 50 51 XVAAAA AMAAAA AAAAxx +4989 313 1 1 9 9 89 989 989 4989 4989 178 179 XJAAAA BMAAAA HHHHxx +4949 314 1 1 9 9 49 949 949 4949 4949 98 99 JIAAAA CMAAAA OOOOxx +6273 315 1 1 3 13 73 273 273 1273 6273 146 147 HHAAAA DMAAAA 
VVVVxx +4478 316 0 2 8 18 78 478 478 4478 4478 156 157 GQAAAA EMAAAA AAAAxx +8854 317 0 2 4 14 54 854 854 3854 8854 108 109 OCAAAA FMAAAA HHHHxx +2105 318 1 1 5 5 5 105 105 2105 2105 10 11 ZCAAAA GMAAAA OOOOxx +8345 319 1 1 5 5 45 345 345 3345 8345 90 91 ZIAAAA HMAAAA VVVVxx +1941 320 1 1 1 1 41 941 1941 1941 1941 82 83 RWAAAA IMAAAA AAAAxx +1765 321 1 1 5 5 65 765 1765 1765 1765 130 131 XPAAAA JMAAAA HHHHxx +9592 322 0 0 2 12 92 592 1592 4592 9592 184 185 YEAAAA KMAAAA OOOOxx +1694 323 0 2 4 14 94 694 1694 1694 1694 188 189 ENAAAA LMAAAA VVVVxx +8940 324 0 0 0 0 40 940 940 3940 8940 80 81 WFAAAA MMAAAA AAAAxx +7264 325 0 0 4 4 64 264 1264 2264 7264 128 129 KTAAAA NMAAAA HHHHxx +4699 326 1 3 9 19 99 699 699 4699 4699 198 199 TYAAAA OMAAAA OOOOxx +4541 327 1 1 1 1 41 541 541 4541 4541 82 83 RSAAAA PMAAAA VVVVxx +5768 328 0 0 8 8 68 768 1768 768 5768 136 137 WNAAAA QMAAAA AAAAxx +6183 329 1 3 3 3 83 183 183 1183 6183 166 167 VDAAAA RMAAAA HHHHxx +7457 330 1 1 7 17 57 457 1457 2457 7457 114 115 VAAAAA SMAAAA OOOOxx +7317 331 1 1 7 17 17 317 1317 2317 7317 34 35 LVAAAA TMAAAA VVVVxx +1944 332 0 0 4 4 44 944 1944 1944 1944 88 89 UWAAAA UMAAAA AAAAxx +665 333 1 1 5 5 65 665 665 665 665 130 131 PZAAAA VMAAAA HHHHxx +5974 334 0 2 4 14 74 974 1974 974 5974 148 149 UVAAAA WMAAAA OOOOxx +7370 335 0 2 0 10 70 370 1370 2370 7370 140 141 MXAAAA XMAAAA VVVVxx +9196 336 0 0 6 16 96 196 1196 4196 9196 192 193 SPAAAA YMAAAA AAAAxx +6796 337 0 0 6 16 96 796 796 1796 6796 192 193 KBAAAA ZMAAAA HHHHxx +6180 338 0 0 0 0 80 180 180 1180 6180 160 161 SDAAAA ANAAAA OOOOxx +8557 339 1 1 7 17 57 557 557 3557 8557 114 115 DRAAAA BNAAAA VVVVxx +928 340 0 0 8 8 28 928 928 928 928 56 57 SJAAAA CNAAAA AAAAxx +6275 341 1 3 5 15 75 275 275 1275 6275 150 151 JHAAAA DNAAAA HHHHxx +409 342 1 1 9 9 9 409 409 409 409 18 19 TPAAAA ENAAAA OOOOxx +6442 343 0 2 2 2 42 442 442 1442 6442 84 85 UNAAAA FNAAAA VVVVxx +5889 344 1 1 9 9 89 889 1889 889 5889 178 179 NSAAAA GNAAAA AAAAxx +5180 345 0 0 0 0 80 180 1180 180 5180 160 161 GRAAAA HNAAAA HHHHxx +1629 346 1 1 9 9 29 629 1629 1629 1629 58 59 RKAAAA INAAAA OOOOxx +6088 347 0 0 8 8 88 88 88 1088 6088 176 177 EAAAAA JNAAAA VVVVxx +5598 348 0 2 8 18 98 598 1598 598 5598 196 197 IHAAAA KNAAAA AAAAxx +1803 349 1 3 3 3 3 803 1803 1803 1803 6 7 JRAAAA LNAAAA HHHHxx +2330 350 0 2 0 10 30 330 330 2330 2330 60 61 QLAAAA MNAAAA OOOOxx +5901 351 1 1 1 1 1 901 1901 901 5901 2 3 ZSAAAA NNAAAA VVVVxx +780 352 0 0 0 0 80 780 780 780 780 160 161 AEAAAA ONAAAA AAAAxx +7171 353 1 3 1 11 71 171 1171 2171 7171 142 143 VPAAAA PNAAAA HHHHxx +8778 354 0 2 8 18 78 778 778 3778 8778 156 157 QZAAAA QNAAAA OOOOxx +6622 355 0 2 2 2 22 622 622 1622 6622 44 45 SUAAAA RNAAAA VVVVxx +9938 356 0 2 8 18 38 938 1938 4938 9938 76 77 GSAAAA SNAAAA AAAAxx +8254 357 0 2 4 14 54 254 254 3254 8254 108 109 MFAAAA TNAAAA HHHHxx +1951 358 1 3 1 11 51 951 1951 1951 1951 102 103 BXAAAA UNAAAA OOOOxx +1434 359 0 2 4 14 34 434 1434 1434 1434 68 69 EDAAAA VNAAAA VVVVxx +7539 360 1 3 9 19 39 539 1539 2539 7539 78 79 ZDAAAA WNAAAA AAAAxx +600 361 0 0 0 0 0 600 600 600 600 0 1 CXAAAA XNAAAA HHHHxx +3122 362 0 2 2 2 22 122 1122 3122 3122 44 45 CQAAAA YNAAAA OOOOxx +5704 363 0 0 4 4 4 704 1704 704 5704 8 9 KLAAAA ZNAAAA VVVVxx +6300 364 0 0 0 0 0 300 300 1300 6300 0 1 IIAAAA AOAAAA AAAAxx +4585 365 1 1 5 5 85 585 585 4585 4585 170 171 JUAAAA BOAAAA HHHHxx +6313 366 1 1 3 13 13 313 313 1313 6313 26 27 VIAAAA COAAAA OOOOxx +3154 367 0 2 4 14 54 154 1154 3154 3154 108 109 IRAAAA DOAAAA VVVVxx +642 368 0 2 2 2 42 642 642 642 642 84 85 SYAAAA 
EOAAAA AAAAxx +7736 369 0 0 6 16 36 736 1736 2736 7736 72 73 OLAAAA FOAAAA HHHHxx +5087 370 1 3 7 7 87 87 1087 87 5087 174 175 RNAAAA GOAAAA OOOOxx +5708 371 0 0 8 8 8 708 1708 708 5708 16 17 OLAAAA HOAAAA VVVVxx +8169 372 1 1 9 9 69 169 169 3169 8169 138 139 FCAAAA IOAAAA AAAAxx +9768 373 0 0 8 8 68 768 1768 4768 9768 136 137 SLAAAA JOAAAA HHHHxx +3874 374 0 2 4 14 74 874 1874 3874 3874 148 149 ATAAAA KOAAAA OOOOxx +6831 375 1 3 1 11 31 831 831 1831 6831 62 63 TCAAAA LOAAAA VVVVxx +18 376 0 2 8 18 18 18 18 18 18 36 37 SAAAAA MOAAAA AAAAxx +6375 377 1 3 5 15 75 375 375 1375 6375 150 151 FLAAAA NOAAAA HHHHxx +7106 378 0 2 6 6 6 106 1106 2106 7106 12 13 INAAAA OOAAAA OOOOxx +5926 379 0 2 6 6 26 926 1926 926 5926 52 53 YTAAAA POAAAA VVVVxx +4956 380 0 0 6 16 56 956 956 4956 4956 112 113 QIAAAA QOAAAA AAAAxx +7042 381 0 2 2 2 42 42 1042 2042 7042 84 85 WKAAAA ROAAAA HHHHxx +6043 382 1 3 3 3 43 43 43 1043 6043 86 87 LYAAAA SOAAAA OOOOxx +2084 383 0 0 4 4 84 84 84 2084 2084 168 169 ECAAAA TOAAAA VVVVxx +6038 384 0 2 8 18 38 38 38 1038 6038 76 77 GYAAAA UOAAAA AAAAxx +7253 385 1 1 3 13 53 253 1253 2253 7253 106 107 ZSAAAA VOAAAA HHHHxx +2061 386 1 1 1 1 61 61 61 2061 2061 122 123 HBAAAA WOAAAA OOOOxx +7800 387 0 0 0 0 0 800 1800 2800 7800 0 1 AOAAAA XOAAAA VVVVxx +4970 388 0 2 0 10 70 970 970 4970 4970 140 141 EJAAAA YOAAAA AAAAxx +8580 389 0 0 0 0 80 580 580 3580 8580 160 161 ASAAAA ZOAAAA HHHHxx +9173 390 1 1 3 13 73 173 1173 4173 9173 146 147 VOAAAA APAAAA OOOOxx +8558 391 0 2 8 18 58 558 558 3558 8558 116 117 ERAAAA BPAAAA VVVVxx +3897 392 1 1 7 17 97 897 1897 3897 3897 194 195 XTAAAA CPAAAA AAAAxx +5069 393 1 1 9 9 69 69 1069 69 5069 138 139 ZMAAAA DPAAAA HHHHxx +2301 394 1 1 1 1 1 301 301 2301 2301 2 3 NKAAAA EPAAAA OOOOxx +9863 395 1 3 3 3 63 863 1863 4863 9863 126 127 JPAAAA FPAAAA VVVVxx +5733 396 1 1 3 13 33 733 1733 733 5733 66 67 NMAAAA GPAAAA AAAAxx +2338 397 0 2 8 18 38 338 338 2338 2338 76 77 YLAAAA HPAAAA HHHHxx +9639 398 1 3 9 19 39 639 1639 4639 9639 78 79 TGAAAA IPAAAA OOOOxx +1139 399 1 3 9 19 39 139 1139 1139 1139 78 79 VRAAAA JPAAAA VVVVxx +2293 400 1 1 3 13 93 293 293 2293 2293 186 187 FKAAAA KPAAAA AAAAxx +6125 401 1 1 5 5 25 125 125 1125 6125 50 51 PBAAAA LPAAAA HHHHxx +5374 402 0 2 4 14 74 374 1374 374 5374 148 149 SYAAAA MPAAAA OOOOxx +7216 403 0 0 6 16 16 216 1216 2216 7216 32 33 ORAAAA NPAAAA VVVVxx +2285 404 1 1 5 5 85 285 285 2285 2285 170 171 XJAAAA OPAAAA AAAAxx +2387 405 1 3 7 7 87 387 387 2387 2387 174 175 VNAAAA PPAAAA HHHHxx +5015 406 1 3 5 15 15 15 1015 15 5015 30 31 XKAAAA QPAAAA OOOOxx +2087 407 1 3 7 7 87 87 87 2087 2087 174 175 HCAAAA RPAAAA VVVVxx +4938 408 0 2 8 18 38 938 938 4938 4938 76 77 YHAAAA SPAAAA AAAAxx +3635 409 1 3 5 15 35 635 1635 3635 3635 70 71 VJAAAA TPAAAA HHHHxx +7737 410 1 1 7 17 37 737 1737 2737 7737 74 75 PLAAAA UPAAAA OOOOxx +8056 411 0 0 6 16 56 56 56 3056 8056 112 113 WXAAAA VPAAAA VVVVxx +4502 412 0 2 2 2 2 502 502 4502 4502 4 5 ERAAAA WPAAAA AAAAxx +54 413 0 2 4 14 54 54 54 54 54 108 109 CCAAAA XPAAAA HHHHxx +3182 414 0 2 2 2 82 182 1182 3182 3182 164 165 KSAAAA YPAAAA OOOOxx +3718 415 0 2 8 18 18 718 1718 3718 3718 36 37 ANAAAA ZPAAAA VVVVxx +3989 416 1 1 9 9 89 989 1989 3989 3989 178 179 LXAAAA AQAAAA AAAAxx +8028 417 0 0 8 8 28 28 28 3028 8028 56 57 UWAAAA BQAAAA HHHHxx +1426 418 0 2 6 6 26 426 1426 1426 1426 52 53 WCAAAA CQAAAA OOOOxx +3801 419 1 1 1 1 1 801 1801 3801 3801 2 3 FQAAAA DQAAAA VVVVxx +241 420 1 1 1 1 41 241 241 241 241 82 83 HJAAAA EQAAAA AAAAxx +8000 421 0 0 0 0 0 0 0 3000 8000 0 1 SVAAAA FQAAAA HHHHxx +8357 
422 1 1 7 17 57 357 357 3357 8357 114 115 LJAAAA GQAAAA OOOOxx +7548 423 0 0 8 8 48 548 1548 2548 7548 96 97 IEAAAA HQAAAA VVVVxx +7307 424 1 3 7 7 7 307 1307 2307 7307 14 15 BVAAAA IQAAAA AAAAxx +2275 425 1 3 5 15 75 275 275 2275 2275 150 151 NJAAAA JQAAAA HHHHxx +2718 426 0 2 8 18 18 718 718 2718 2718 36 37 OAAAAA KQAAAA OOOOxx +7068 427 0 0 8 8 68 68 1068 2068 7068 136 137 WLAAAA LQAAAA VVVVxx +3181 428 1 1 1 1 81 181 1181 3181 3181 162 163 JSAAAA MQAAAA AAAAxx +749 429 1 1 9 9 49 749 749 749 749 98 99 VCAAAA NQAAAA HHHHxx +5195 430 1 3 5 15 95 195 1195 195 5195 190 191 VRAAAA OQAAAA OOOOxx +6136 431 0 0 6 16 36 136 136 1136 6136 72 73 ACAAAA PQAAAA VVVVxx +8012 432 0 0 2 12 12 12 12 3012 8012 24 25 EWAAAA QQAAAA AAAAxx +3957 433 1 1 7 17 57 957 1957 3957 3957 114 115 FWAAAA RQAAAA HHHHxx +3083 434 1 3 3 3 83 83 1083 3083 3083 166 167 POAAAA SQAAAA OOOOxx +9997 435 1 1 7 17 97 997 1997 4997 9997 194 195 NUAAAA TQAAAA VVVVxx +3299 436 1 3 9 19 99 299 1299 3299 3299 198 199 XWAAAA UQAAAA AAAAxx +846 437 0 2 6 6 46 846 846 846 846 92 93 OGAAAA VQAAAA HHHHxx +2985 438 1 1 5 5 85 985 985 2985 2985 170 171 VKAAAA WQAAAA OOOOxx +9238 439 0 2 8 18 38 238 1238 4238 9238 76 77 IRAAAA XQAAAA VVVVxx +1403 440 1 3 3 3 3 403 1403 1403 1403 6 7 ZBAAAA YQAAAA AAAAxx +5563 441 1 3 3 3 63 563 1563 563 5563 126 127 ZFAAAA ZQAAAA HHHHxx +7965 442 1 1 5 5 65 965 1965 2965 7965 130 131 JUAAAA ARAAAA OOOOxx +4512 443 0 0 2 12 12 512 512 4512 4512 24 25 ORAAAA BRAAAA VVVVxx +9730 444 0 2 0 10 30 730 1730 4730 9730 60 61 GKAAAA CRAAAA AAAAxx +1129 445 1 1 9 9 29 129 1129 1129 1129 58 59 LRAAAA DRAAAA HHHHxx +2624 446 0 0 4 4 24 624 624 2624 2624 48 49 YWAAAA ERAAAA OOOOxx +8178 447 0 2 8 18 78 178 178 3178 8178 156 157 OCAAAA FRAAAA VVVVxx +6468 448 0 0 8 8 68 468 468 1468 6468 136 137 UOAAAA GRAAAA AAAAxx +3027 449 1 3 7 7 27 27 1027 3027 3027 54 55 LMAAAA HRAAAA HHHHxx +3845 450 1 1 5 5 45 845 1845 3845 3845 90 91 XRAAAA IRAAAA OOOOxx +786 451 0 2 6 6 86 786 786 786 786 172 173 GEAAAA JRAAAA VVVVxx +4971 452 1 3 1 11 71 971 971 4971 4971 142 143 FJAAAA KRAAAA AAAAxx +1542 453 0 2 2 2 42 542 1542 1542 1542 84 85 IHAAAA LRAAAA HHHHxx +7967 454 1 3 7 7 67 967 1967 2967 7967 134 135 LUAAAA MRAAAA OOOOxx +443 455 1 3 3 3 43 443 443 443 443 86 87 BRAAAA NRAAAA VVVVxx +7318 456 0 2 8 18 18 318 1318 2318 7318 36 37 MVAAAA ORAAAA AAAAxx +4913 457 1 1 3 13 13 913 913 4913 4913 26 27 ZGAAAA PRAAAA HHHHxx +9466 458 0 2 6 6 66 466 1466 4466 9466 132 133 CAAAAA QRAAAA OOOOxx +7866 459 0 2 6 6 66 866 1866 2866 7866 132 133 OQAAAA RRAAAA VVVVxx +784 460 0 0 4 4 84 784 784 784 784 168 169 EEAAAA SRAAAA AAAAxx +9040 461 0 0 0 0 40 40 1040 4040 9040 80 81 SJAAAA TRAAAA HHHHxx +3954 462 0 2 4 14 54 954 1954 3954 3954 108 109 CWAAAA URAAAA OOOOxx +4183 463 1 3 3 3 83 183 183 4183 4183 166 167 XEAAAA VRAAAA VVVVxx +3608 464 0 0 8 8 8 608 1608 3608 3608 16 17 UIAAAA WRAAAA AAAAxx +7630 465 0 2 0 10 30 630 1630 2630 7630 60 61 MHAAAA XRAAAA HHHHxx +590 466 0 2 0 10 90 590 590 590 590 180 181 SWAAAA YRAAAA OOOOxx +3453 467 1 1 3 13 53 453 1453 3453 3453 106 107 VCAAAA ZRAAAA VVVVxx +7757 468 1 1 7 17 57 757 1757 2757 7757 114 115 JMAAAA ASAAAA AAAAxx +7394 469 0 2 4 14 94 394 1394 2394 7394 188 189 KYAAAA BSAAAA HHHHxx +396 470 0 0 6 16 96 396 396 396 396 192 193 GPAAAA CSAAAA OOOOxx +7873 471 1 1 3 13 73 873 1873 2873 7873 146 147 VQAAAA DSAAAA VVVVxx +1553 472 1 1 3 13 53 553 1553 1553 1553 106 107 THAAAA ESAAAA AAAAxx +598 473 0 2 8 18 98 598 598 598 598 196 197 AXAAAA FSAAAA HHHHxx +7191 474 1 3 1 11 91 191 1191 2191 7191 182 183 
PQAAAA GSAAAA OOOOxx +8116 475 0 0 6 16 16 116 116 3116 8116 32 33 EAAAAA HSAAAA VVVVxx +2516 476 0 0 6 16 16 516 516 2516 2516 32 33 USAAAA ISAAAA AAAAxx +7750 477 0 2 0 10 50 750 1750 2750 7750 100 101 CMAAAA JSAAAA HHHHxx +6625 478 1 1 5 5 25 625 625 1625 6625 50 51 VUAAAA KSAAAA OOOOxx +8838 479 0 2 8 18 38 838 838 3838 8838 76 77 YBAAAA LSAAAA VVVVxx +4636 480 0 0 6 16 36 636 636 4636 4636 72 73 IWAAAA MSAAAA AAAAxx +7627 481 1 3 7 7 27 627 1627 2627 7627 54 55 JHAAAA NSAAAA HHHHxx +1690 482 0 2 0 10 90 690 1690 1690 1690 180 181 ANAAAA OSAAAA OOOOxx +7071 483 1 3 1 11 71 71 1071 2071 7071 142 143 ZLAAAA PSAAAA VVVVxx +2081 484 1 1 1 1 81 81 81 2081 2081 162 163 BCAAAA QSAAAA AAAAxx +7138 485 0 2 8 18 38 138 1138 2138 7138 76 77 OOAAAA RSAAAA HHHHxx +864 486 0 0 4 4 64 864 864 864 864 128 129 GHAAAA SSAAAA OOOOxx +6392 487 0 0 2 12 92 392 392 1392 6392 184 185 WLAAAA TSAAAA VVVVxx +7544 488 0 0 4 4 44 544 1544 2544 7544 88 89 EEAAAA USAAAA AAAAxx +5438 489 0 2 8 18 38 438 1438 438 5438 76 77 EBAAAA VSAAAA HHHHxx +7099 490 1 3 9 19 99 99 1099 2099 7099 198 199 BNAAAA WSAAAA OOOOxx +5157 491 1 1 7 17 57 157 1157 157 5157 114 115 JQAAAA XSAAAA VVVVxx +3391 492 1 3 1 11 91 391 1391 3391 3391 182 183 LAAAAA YSAAAA AAAAxx +3805 493 1 1 5 5 5 805 1805 3805 3805 10 11 JQAAAA ZSAAAA HHHHxx +2110 494 0 2 0 10 10 110 110 2110 2110 20 21 EDAAAA ATAAAA OOOOxx +3176 495 0 0 6 16 76 176 1176 3176 3176 152 153 ESAAAA BTAAAA VVVVxx +5918 496 0 2 8 18 18 918 1918 918 5918 36 37 QTAAAA CTAAAA AAAAxx +1218 497 0 2 8 18 18 218 1218 1218 1218 36 37 WUAAAA DTAAAA HHHHxx +6683 498 1 3 3 3 83 683 683 1683 6683 166 167 BXAAAA ETAAAA OOOOxx +914 499 0 2 4 14 14 914 914 914 914 28 29 EJAAAA FTAAAA VVVVxx +4737 500 1 1 7 17 37 737 737 4737 4737 74 75 FAAAAA GTAAAA AAAAxx +7286 501 0 2 6 6 86 286 1286 2286 7286 172 173 GUAAAA HTAAAA HHHHxx +9975 502 1 3 5 15 75 975 1975 4975 9975 150 151 RTAAAA ITAAAA OOOOxx +8030 503 0 2 0 10 30 30 30 3030 8030 60 61 WWAAAA JTAAAA VVVVxx +7364 504 0 0 4 4 64 364 1364 2364 7364 128 129 GXAAAA KTAAAA AAAAxx +1389 505 1 1 9 9 89 389 1389 1389 1389 178 179 LBAAAA LTAAAA HHHHxx +4025 506 1 1 5 5 25 25 25 4025 4025 50 51 VYAAAA MTAAAA OOOOxx +4835 507 1 3 5 15 35 835 835 4835 4835 70 71 ZDAAAA NTAAAA VVVVxx +8045 508 1 1 5 5 45 45 45 3045 8045 90 91 LXAAAA OTAAAA AAAAxx +1864 509 0 0 4 4 64 864 1864 1864 1864 128 129 STAAAA PTAAAA HHHHxx +3313 510 1 1 3 13 13 313 1313 3313 3313 26 27 LXAAAA QTAAAA OOOOxx +2384 511 0 0 4 4 84 384 384 2384 2384 168 169 SNAAAA RTAAAA VVVVxx +6115 512 1 3 5 15 15 115 115 1115 6115 30 31 FBAAAA STAAAA AAAAxx +5705 513 1 1 5 5 5 705 1705 705 5705 10 11 LLAAAA TTAAAA HHHHxx +9269 514 1 1 9 9 69 269 1269 4269 9269 138 139 NSAAAA UTAAAA OOOOxx +3379 515 1 3 9 19 79 379 1379 3379 3379 158 159 ZZAAAA VTAAAA VVVVxx +8205 516 1 1 5 5 5 205 205 3205 8205 10 11 PDAAAA WTAAAA AAAAxx +6575 517 1 3 5 15 75 575 575 1575 6575 150 151 XSAAAA XTAAAA HHHHxx +486 518 0 2 6 6 86 486 486 486 486 172 173 SSAAAA YTAAAA OOOOxx +4894 519 0 2 4 14 94 894 894 4894 4894 188 189 GGAAAA ZTAAAA VVVVxx +3090 520 0 2 0 10 90 90 1090 3090 3090 180 181 WOAAAA AUAAAA AAAAxx +759 521 1 3 9 19 59 759 759 759 759 118 119 FDAAAA BUAAAA HHHHxx +4864 522 0 0 4 4 64 864 864 4864 4864 128 129 CFAAAA CUAAAA OOOOxx +4083 523 1 3 3 3 83 83 83 4083 4083 166 167 BBAAAA DUAAAA VVVVxx +6918 524 0 2 8 18 18 918 918 1918 6918 36 37 CGAAAA EUAAAA AAAAxx +8146 525 0 2 6 6 46 146 146 3146 8146 92 93 IBAAAA FUAAAA HHHHxx +1523 526 1 3 3 3 23 523 1523 1523 1523 46 47 PGAAAA GUAAAA OOOOxx +1591 527 1 3 1 11 91 591 
1591 1591 1591 182 183 FJAAAA HUAAAA VVVVxx +3343 528 1 3 3 3 43 343 1343 3343 3343 86 87 PYAAAA IUAAAA AAAAxx +1391 529 1 3 1 11 91 391 1391 1391 1391 182 183 NBAAAA JUAAAA HHHHxx +9963 530 1 3 3 3 63 963 1963 4963 9963 126 127 FTAAAA KUAAAA OOOOxx +2423 531 1 3 3 3 23 423 423 2423 2423 46 47 FPAAAA LUAAAA VVVVxx +1822 532 0 2 2 2 22 822 1822 1822 1822 44 45 CSAAAA MUAAAA AAAAxx +8706 533 0 2 6 6 6 706 706 3706 8706 12 13 WWAAAA NUAAAA HHHHxx +3001 534 1 1 1 1 1 1 1001 3001 3001 2 3 LLAAAA OUAAAA OOOOxx +6707 535 1 3 7 7 7 707 707 1707 6707 14 15 ZXAAAA PUAAAA VVVVxx +2121 536 1 1 1 1 21 121 121 2121 2121 42 43 PDAAAA QUAAAA AAAAxx +5814 537 0 2 4 14 14 814 1814 814 5814 28 29 QPAAAA RUAAAA HHHHxx +2659 538 1 3 9 19 59 659 659 2659 2659 118 119 HYAAAA SUAAAA OOOOxx +2016 539 0 0 6 16 16 16 16 2016 2016 32 33 OZAAAA TUAAAA VVVVxx +4286 540 0 2 6 6 86 286 286 4286 4286 172 173 WIAAAA UUAAAA AAAAxx +9205 541 1 1 5 5 5 205 1205 4205 9205 10 11 BQAAAA VUAAAA HHHHxx +3496 542 0 0 6 16 96 496 1496 3496 3496 192 193 MEAAAA WUAAAA OOOOxx +5333 543 1 1 3 13 33 333 1333 333 5333 66 67 DXAAAA XUAAAA VVVVxx +5571 544 1 3 1 11 71 571 1571 571 5571 142 143 HGAAAA YUAAAA AAAAxx +1696 545 0 0 6 16 96 696 1696 1696 1696 192 193 GNAAAA ZUAAAA HHHHxx +4871 546 1 3 1 11 71 871 871 4871 4871 142 143 JFAAAA AVAAAA OOOOxx +4852 547 0 0 2 12 52 852 852 4852 4852 104 105 QEAAAA BVAAAA VVVVxx +8483 548 1 3 3 3 83 483 483 3483 8483 166 167 HOAAAA CVAAAA AAAAxx +1376 549 0 0 6 16 76 376 1376 1376 1376 152 153 YAAAAA DVAAAA HHHHxx +5456 550 0 0 6 16 56 456 1456 456 5456 112 113 WBAAAA EVAAAA OOOOxx +499 551 1 3 9 19 99 499 499 499 499 198 199 FTAAAA FVAAAA VVVVxx +3463 552 1 3 3 3 63 463 1463 3463 3463 126 127 FDAAAA GVAAAA AAAAxx +7426 553 0 2 6 6 26 426 1426 2426 7426 52 53 QZAAAA HVAAAA HHHHxx +5341 554 1 1 1 1 41 341 1341 341 5341 82 83 LXAAAA IVAAAA OOOOxx +9309 555 1 1 9 9 9 309 1309 4309 9309 18 19 BUAAAA JVAAAA VVVVxx +2055 556 1 3 5 15 55 55 55 2055 2055 110 111 BBAAAA KVAAAA AAAAxx +2199 557 1 3 9 19 99 199 199 2199 2199 198 199 PGAAAA LVAAAA HHHHxx +7235 558 1 3 5 15 35 235 1235 2235 7235 70 71 HSAAAA MVAAAA OOOOxx +8661 559 1 1 1 1 61 661 661 3661 8661 122 123 DVAAAA NVAAAA VVVVxx +9494 560 0 2 4 14 94 494 1494 4494 9494 188 189 EBAAAA OVAAAA AAAAxx +935 561 1 3 5 15 35 935 935 935 935 70 71 ZJAAAA PVAAAA HHHHxx +7044 562 0 0 4 4 44 44 1044 2044 7044 88 89 YKAAAA QVAAAA OOOOxx +1974 563 0 2 4 14 74 974 1974 1974 1974 148 149 YXAAAA RVAAAA VVVVxx +9679 564 1 3 9 19 79 679 1679 4679 9679 158 159 HIAAAA SVAAAA AAAAxx +9822 565 0 2 2 2 22 822 1822 4822 9822 44 45 UNAAAA TVAAAA HHHHxx +4088 566 0 0 8 8 88 88 88 4088 4088 176 177 GBAAAA UVAAAA OOOOxx +1749 567 1 1 9 9 49 749 1749 1749 1749 98 99 HPAAAA VVAAAA VVVVxx +2116 568 0 0 6 16 16 116 116 2116 2116 32 33 KDAAAA WVAAAA AAAAxx +976 569 0 0 6 16 76 976 976 976 976 152 153 OLAAAA XVAAAA HHHHxx +8689 570 1 1 9 9 89 689 689 3689 8689 178 179 FWAAAA YVAAAA OOOOxx +2563 571 1 3 3 3 63 563 563 2563 2563 126 127 PUAAAA ZVAAAA VVVVxx +7195 572 1 3 5 15 95 195 1195 2195 7195 190 191 TQAAAA AWAAAA AAAAxx +9985 573 1 1 5 5 85 985 1985 4985 9985 170 171 BUAAAA BWAAAA HHHHxx +7699 574 1 3 9 19 99 699 1699 2699 7699 198 199 DKAAAA CWAAAA OOOOxx +5311 575 1 3 1 11 11 311 1311 311 5311 22 23 HWAAAA DWAAAA VVVVxx +295 576 1 3 5 15 95 295 295 295 295 190 191 JLAAAA EWAAAA AAAAxx +8214 577 0 2 4 14 14 214 214 3214 8214 28 29 YDAAAA FWAAAA HHHHxx +3275 578 1 3 5 15 75 275 1275 3275 3275 150 151 ZVAAAA GWAAAA OOOOxx +9646 579 0 2 6 6 46 646 1646 4646 9646 92 93 AHAAAA HWAAAA 
VVVVxx +1908 580 0 0 8 8 8 908 1908 1908 1908 16 17 KVAAAA IWAAAA AAAAxx +3858 581 0 2 8 18 58 858 1858 3858 3858 116 117 KSAAAA JWAAAA HHHHxx +9362 582 0 2 2 2 62 362 1362 4362 9362 124 125 CWAAAA KWAAAA OOOOxx +9307 583 1 3 7 7 7 307 1307 4307 9307 14 15 ZTAAAA LWAAAA VVVVxx +6124 584 0 0 4 4 24 124 124 1124 6124 48 49 OBAAAA MWAAAA AAAAxx +2405 585 1 1 5 5 5 405 405 2405 2405 10 11 NOAAAA NWAAAA HHHHxx +8422 586 0 2 2 2 22 422 422 3422 8422 44 45 YLAAAA OWAAAA OOOOxx +393 587 1 1 3 13 93 393 393 393 393 186 187 DPAAAA PWAAAA VVVVxx +8973 588 1 1 3 13 73 973 973 3973 8973 146 147 DHAAAA QWAAAA AAAAxx +5171 589 1 3 1 11 71 171 1171 171 5171 142 143 XQAAAA RWAAAA HHHHxx +4929 590 1 1 9 9 29 929 929 4929 4929 58 59 PHAAAA SWAAAA OOOOxx +6935 591 1 3 5 15 35 935 935 1935 6935 70 71 TGAAAA TWAAAA VVVVxx +8584 592 0 0 4 4 84 584 584 3584 8584 168 169 ESAAAA UWAAAA AAAAxx +1035 593 1 3 5 15 35 35 1035 1035 1035 70 71 VNAAAA VWAAAA HHHHxx +3734 594 0 2 4 14 34 734 1734 3734 3734 68 69 QNAAAA WWAAAA OOOOxx +1458 595 0 2 8 18 58 458 1458 1458 1458 116 117 CEAAAA XWAAAA VVVVxx +8746 596 0 2 6 6 46 746 746 3746 8746 92 93 KYAAAA YWAAAA AAAAxx +1677 597 1 1 7 17 77 677 1677 1677 1677 154 155 NMAAAA ZWAAAA HHHHxx +8502 598 0 2 2 2 2 502 502 3502 8502 4 5 APAAAA AXAAAA OOOOxx +7752 599 0 0 2 12 52 752 1752 2752 7752 104 105 EMAAAA BXAAAA VVVVxx +2556 600 0 0 6 16 56 556 556 2556 2556 112 113 IUAAAA CXAAAA AAAAxx +6426 601 0 2 6 6 26 426 426 1426 6426 52 53 ENAAAA DXAAAA HHHHxx +8420 602 0 0 0 0 20 420 420 3420 8420 40 41 WLAAAA EXAAAA OOOOxx +4462 603 0 2 2 2 62 462 462 4462 4462 124 125 QPAAAA FXAAAA VVVVxx +1378 604 0 2 8 18 78 378 1378 1378 1378 156 157 ABAAAA GXAAAA AAAAxx +1387 605 1 3 7 7 87 387 1387 1387 1387 174 175 JBAAAA HXAAAA HHHHxx +8094 606 0 2 4 14 94 94 94 3094 8094 188 189 IZAAAA IXAAAA OOOOxx +7247 607 1 3 7 7 47 247 1247 2247 7247 94 95 TSAAAA JXAAAA VVVVxx +4261 608 1 1 1 1 61 261 261 4261 4261 122 123 XHAAAA KXAAAA AAAAxx +5029 609 1 1 9 9 29 29 1029 29 5029 58 59 LLAAAA LXAAAA HHHHxx +3625 610 1 1 5 5 25 625 1625 3625 3625 50 51 LJAAAA MXAAAA OOOOxx +8068 611 0 0 8 8 68 68 68 3068 8068 136 137 IYAAAA NXAAAA VVVVxx +102 612 0 2 2 2 2 102 102 102 102 4 5 YDAAAA OXAAAA AAAAxx +5596 613 0 0 6 16 96 596 1596 596 5596 192 193 GHAAAA PXAAAA HHHHxx +5872 614 0 0 2 12 72 872 1872 872 5872 144 145 WRAAAA QXAAAA OOOOxx +4742 615 0 2 2 2 42 742 742 4742 4742 84 85 KAAAAA RXAAAA VVVVxx +2117 616 1 1 7 17 17 117 117 2117 2117 34 35 LDAAAA SXAAAA AAAAxx +3945 617 1 1 5 5 45 945 1945 3945 3945 90 91 TVAAAA TXAAAA HHHHxx +7483 618 1 3 3 3 83 483 1483 2483 7483 166 167 VBAAAA UXAAAA OOOOxx +4455 619 1 3 5 15 55 455 455 4455 4455 110 111 JPAAAA VXAAAA VVVVxx +609 620 1 1 9 9 9 609 609 609 609 18 19 LXAAAA WXAAAA AAAAxx +9829 621 1 1 9 9 29 829 1829 4829 9829 58 59 BOAAAA XXAAAA HHHHxx +4857 622 1 1 7 17 57 857 857 4857 4857 114 115 VEAAAA YXAAAA OOOOxx +3314 623 0 2 4 14 14 314 1314 3314 3314 28 29 MXAAAA ZXAAAA VVVVxx +5353 624 1 1 3 13 53 353 1353 353 5353 106 107 XXAAAA AYAAAA AAAAxx +4909 625 1 1 9 9 9 909 909 4909 4909 18 19 VGAAAA BYAAAA HHHHxx +7597 626 1 1 7 17 97 597 1597 2597 7597 194 195 FGAAAA CYAAAA OOOOxx +2683 627 1 3 3 3 83 683 683 2683 2683 166 167 FZAAAA DYAAAA VVVVxx +3223 628 1 3 3 3 23 223 1223 3223 3223 46 47 ZTAAAA EYAAAA AAAAxx +5363 629 1 3 3 3 63 363 1363 363 5363 126 127 HYAAAA FYAAAA HHHHxx +4578 630 0 2 8 18 78 578 578 4578 4578 156 157 CUAAAA GYAAAA OOOOxx +5544 631 0 0 4 4 44 544 1544 544 5544 88 89 GFAAAA HYAAAA VVVVxx +1589 632 1 1 9 9 89 589 1589 1589 1589 178 179 
DJAAAA IYAAAA AAAAxx +7412 633 0 0 2 12 12 412 1412 2412 7412 24 25 CZAAAA JYAAAA HHHHxx +3803 634 1 3 3 3 3 803 1803 3803 3803 6 7 HQAAAA KYAAAA OOOOxx +6179 635 1 3 9 19 79 179 179 1179 6179 158 159 RDAAAA LYAAAA VVVVxx +5588 636 0 0 8 8 88 588 1588 588 5588 176 177 YGAAAA MYAAAA AAAAxx +2134 637 0 2 4 14 34 134 134 2134 2134 68 69 CEAAAA NYAAAA HHHHxx +4383 638 1 3 3 3 83 383 383 4383 4383 166 167 PMAAAA OYAAAA OOOOxx +6995 639 1 3 5 15 95 995 995 1995 6995 190 191 BJAAAA PYAAAA VVVVxx +6598 640 0 2 8 18 98 598 598 1598 6598 196 197 UTAAAA QYAAAA AAAAxx +8731 641 1 3 1 11 31 731 731 3731 8731 62 63 VXAAAA RYAAAA HHHHxx +7177 642 1 1 7 17 77 177 1177 2177 7177 154 155 BQAAAA SYAAAA OOOOxx +6578 643 0 2 8 18 78 578 578 1578 6578 156 157 ATAAAA TYAAAA VVVVxx +9393 644 1 1 3 13 93 393 1393 4393 9393 186 187 HXAAAA UYAAAA AAAAxx +1276 645 0 0 6 16 76 276 1276 1276 1276 152 153 CXAAAA VYAAAA HHHHxx +8766 646 0 2 6 6 66 766 766 3766 8766 132 133 EZAAAA WYAAAA OOOOxx +1015 647 1 3 5 15 15 15 1015 1015 1015 30 31 BNAAAA XYAAAA VVVVxx +4396 648 0 0 6 16 96 396 396 4396 4396 192 193 CNAAAA YYAAAA AAAAxx +5564 649 0 0 4 4 64 564 1564 564 5564 128 129 AGAAAA ZYAAAA HHHHxx +927 650 1 3 7 7 27 927 927 927 927 54 55 RJAAAA AZAAAA OOOOxx +3306 651 0 2 6 6 6 306 1306 3306 3306 12 13 EXAAAA BZAAAA VVVVxx +1615 652 1 3 5 15 15 615 1615 1615 1615 30 31 DKAAAA CZAAAA AAAAxx +4550 653 0 2 0 10 50 550 550 4550 4550 100 101 ATAAAA DZAAAA HHHHxx +2468 654 0 0 8 8 68 468 468 2468 2468 136 137 YQAAAA EZAAAA OOOOxx +5336 655 0 0 6 16 36 336 1336 336 5336 72 73 GXAAAA FZAAAA VVVVxx +4471 656 1 3 1 11 71 471 471 4471 4471 142 143 ZPAAAA GZAAAA AAAAxx +8085 657 1 1 5 5 85 85 85 3085 8085 170 171 ZYAAAA HZAAAA HHHHxx +540 658 0 0 0 0 40 540 540 540 540 80 81 UUAAAA IZAAAA OOOOxx +5108 659 0 0 8 8 8 108 1108 108 5108 16 17 MOAAAA JZAAAA VVVVxx +8015 660 1 3 5 15 15 15 15 3015 8015 30 31 HWAAAA KZAAAA AAAAxx +2857 661 1 1 7 17 57 857 857 2857 2857 114 115 XFAAAA LZAAAA HHHHxx +9472 662 0 0 2 12 72 472 1472 4472 9472 144 145 IAAAAA MZAAAA OOOOxx +5666 663 0 2 6 6 66 666 1666 666 5666 132 133 YJAAAA NZAAAA VVVVxx +3555 664 1 3 5 15 55 555 1555 3555 3555 110 111 TGAAAA OZAAAA AAAAxx +378 665 0 2 8 18 78 378 378 378 378 156 157 OOAAAA PZAAAA HHHHxx +4466 666 0 2 6 6 66 466 466 4466 4466 132 133 UPAAAA QZAAAA OOOOxx +3247 667 1 3 7 7 47 247 1247 3247 3247 94 95 XUAAAA RZAAAA VVVVxx +6570 668 0 2 0 10 70 570 570 1570 6570 140 141 SSAAAA SZAAAA AAAAxx +5655 669 1 3 5 15 55 655 1655 655 5655 110 111 NJAAAA TZAAAA HHHHxx +917 670 1 1 7 17 17 917 917 917 917 34 35 HJAAAA UZAAAA OOOOxx +3637 671 1 1 7 17 37 637 1637 3637 3637 74 75 XJAAAA VZAAAA VVVVxx +3668 672 0 0 8 8 68 668 1668 3668 3668 136 137 CLAAAA WZAAAA AAAAxx +5644 673 0 0 4 4 44 644 1644 644 5644 88 89 CJAAAA XZAAAA HHHHxx +8286 674 0 2 6 6 86 286 286 3286 8286 172 173 SGAAAA YZAAAA OOOOxx +6896 675 0 0 6 16 96 896 896 1896 6896 192 193 GFAAAA ZZAAAA VVVVxx +2870 676 0 2 0 10 70 870 870 2870 2870 140 141 KGAAAA AABAAA AAAAxx +8041 677 1 1 1 1 41 41 41 3041 8041 82 83 HXAAAA BABAAA HHHHxx +8137 678 1 1 7 17 37 137 137 3137 8137 74 75 ZAAAAA CABAAA OOOOxx +4823 679 1 3 3 3 23 823 823 4823 4823 46 47 NDAAAA DABAAA VVVVxx +2438 680 0 2 8 18 38 438 438 2438 2438 76 77 UPAAAA EABAAA AAAAxx +6329 681 1 1 9 9 29 329 329 1329 6329 58 59 LJAAAA FABAAA HHHHxx +623 682 1 3 3 3 23 623 623 623 623 46 47 ZXAAAA GABAAA OOOOxx +1360 683 0 0 0 0 60 360 1360 1360 1360 120 121 IAAAAA HABAAA VVVVxx +7987 684 1 3 7 7 87 987 1987 2987 7987 174 175 FVAAAA IABAAA AAAAxx +9788 685 0 0 8 8 88 788 
1788 4788 9788 176 177 MMAAAA JABAAA HHHHxx +3212 686 0 0 2 12 12 212 1212 3212 3212 24 25 OTAAAA KABAAA OOOOxx +2725 687 1 1 5 5 25 725 725 2725 2725 50 51 VAAAAA LABAAA VVVVxx +7837 688 1 1 7 17 37 837 1837 2837 7837 74 75 LPAAAA MABAAA AAAAxx +4746 689 0 2 6 6 46 746 746 4746 4746 92 93 OAAAAA NABAAA HHHHxx +3986 690 0 2 6 6 86 986 1986 3986 3986 172 173 IXAAAA OABAAA OOOOxx +9128 691 0 0 8 8 28 128 1128 4128 9128 56 57 CNAAAA PABAAA VVVVxx +5044 692 0 0 4 4 44 44 1044 44 5044 88 89 AMAAAA QABAAA AAAAxx +8132 693 0 0 2 12 32 132 132 3132 8132 64 65 UAAAAA RABAAA HHHHxx +9992 694 0 0 2 12 92 992 1992 4992 9992 184 185 IUAAAA SABAAA OOOOxx +8468 695 0 0 8 8 68 468 468 3468 8468 136 137 SNAAAA TABAAA VVVVxx +6876 696 0 0 6 16 76 876 876 1876 6876 152 153 MEAAAA UABAAA AAAAxx +3532 697 0 0 2 12 32 532 1532 3532 3532 64 65 WFAAAA VABAAA HHHHxx +2140 698 0 0 0 0 40 140 140 2140 2140 80 81 IEAAAA WABAAA OOOOxx +2183 699 1 3 3 3 83 183 183 2183 2183 166 167 ZFAAAA XABAAA VVVVxx +9766 700 0 2 6 6 66 766 1766 4766 9766 132 133 QLAAAA YABAAA AAAAxx +7943 701 1 3 3 3 43 943 1943 2943 7943 86 87 NTAAAA ZABAAA HHHHxx +9243 702 1 3 3 3 43 243 1243 4243 9243 86 87 NRAAAA ABBAAA OOOOxx +6241 703 1 1 1 1 41 241 241 1241 6241 82 83 BGAAAA BBBAAA VVVVxx +9540 704 0 0 0 0 40 540 1540 4540 9540 80 81 YCAAAA CBBAAA AAAAxx +7418 705 0 2 8 18 18 418 1418 2418 7418 36 37 IZAAAA DBBAAA HHHHxx +1603 706 1 3 3 3 3 603 1603 1603 1603 6 7 RJAAAA EBBAAA OOOOxx +8950 707 0 2 0 10 50 950 950 3950 8950 100 101 GGAAAA FBBAAA VVVVxx +6933 708 1 1 3 13 33 933 933 1933 6933 66 67 RGAAAA GBBAAA AAAAxx +2646 709 0 2 6 6 46 646 646 2646 2646 92 93 UXAAAA HBBAAA HHHHxx +3447 710 1 3 7 7 47 447 1447 3447 3447 94 95 PCAAAA IBBAAA OOOOxx +9957 711 1 1 7 17 57 957 1957 4957 9957 114 115 ZSAAAA JBBAAA VVVVxx +4623 712 1 3 3 3 23 623 623 4623 4623 46 47 VVAAAA KBBAAA AAAAxx +9058 713 0 2 8 18 58 58 1058 4058 9058 116 117 KKAAAA LBBAAA HHHHxx +7361 714 1 1 1 1 61 361 1361 2361 7361 122 123 DXAAAA MBBAAA OOOOxx +2489 715 1 1 9 9 89 489 489 2489 2489 178 179 TRAAAA NBBAAA VVVVxx +7643 716 1 3 3 3 43 643 1643 2643 7643 86 87 ZHAAAA OBBAAA AAAAxx +9166 717 0 2 6 6 66 166 1166 4166 9166 132 133 OOAAAA PBBAAA HHHHxx +7789 718 1 1 9 9 89 789 1789 2789 7789 178 179 PNAAAA QBBAAA OOOOxx +2332 719 0 0 2 12 32 332 332 2332 2332 64 65 SLAAAA RBBAAA VVVVxx +1832 720 0 0 2 12 32 832 1832 1832 1832 64 65 MSAAAA SBBAAA AAAAxx +8375 721 1 3 5 15 75 375 375 3375 8375 150 151 DKAAAA TBBAAA HHHHxx +948 722 0 0 8 8 48 948 948 948 948 96 97 MKAAAA UBBAAA OOOOxx +5613 723 1 1 3 13 13 613 1613 613 5613 26 27 XHAAAA VBBAAA VVVVxx +6310 724 0 2 0 10 10 310 310 1310 6310 20 21 SIAAAA WBBAAA AAAAxx +4254 725 0 2 4 14 54 254 254 4254 4254 108 109 QHAAAA XBBAAA HHHHxx +4260 726 0 0 0 0 60 260 260 4260 4260 120 121 WHAAAA YBBAAA OOOOxx +2060 727 0 0 0 0 60 60 60 2060 2060 120 121 GBAAAA ZBBAAA VVVVxx +4831 728 1 3 1 11 31 831 831 4831 4831 62 63 VDAAAA ACBAAA AAAAxx +6176 729 0 0 6 16 76 176 176 1176 6176 152 153 ODAAAA BCBAAA HHHHxx +6688 730 0 0 8 8 88 688 688 1688 6688 176 177 GXAAAA CCBAAA OOOOxx +5752 731 0 0 2 12 52 752 1752 752 5752 104 105 GNAAAA DCBAAA VVVVxx +8714 732 0 2 4 14 14 714 714 3714 8714 28 29 EXAAAA ECBAAA AAAAxx +6739 733 1 3 9 19 39 739 739 1739 6739 78 79 FZAAAA FCBAAA HHHHxx +7066 734 0 2 6 6 66 66 1066 2066 7066 132 133 ULAAAA GCBAAA OOOOxx +7250 735 0 2 0 10 50 250 1250 2250 7250 100 101 WSAAAA HCBAAA VVVVxx +3161 736 1 1 1 1 61 161 1161 3161 3161 122 123 PRAAAA ICBAAA AAAAxx +1411 737 1 3 1 11 11 411 1411 1411 1411 22 23 HCAAAA JCBAAA 
HHHHxx +9301 738 1 1 1 1 1 301 1301 4301 9301 2 3 TTAAAA KCBAAA OOOOxx +8324 739 0 0 4 4 24 324 324 3324 8324 48 49 EIAAAA LCBAAA VVVVxx +9641 740 1 1 1 1 41 641 1641 4641 9641 82 83 VGAAAA MCBAAA AAAAxx +7077 741 1 1 7 17 77 77 1077 2077 7077 154 155 FMAAAA NCBAAA HHHHxx +9888 742 0 0 8 8 88 888 1888 4888 9888 176 177 IQAAAA OCBAAA OOOOxx +9909 743 1 1 9 9 9 909 1909 4909 9909 18 19 DRAAAA PCBAAA VVVVxx +2209 744 1 1 9 9 9 209 209 2209 2209 18 19 ZGAAAA QCBAAA AAAAxx +6904 745 0 0 4 4 4 904 904 1904 6904 8 9 OFAAAA RCBAAA HHHHxx +6608 746 0 0 8 8 8 608 608 1608 6608 16 17 EUAAAA SCBAAA OOOOxx +8400 747 0 0 0 0 0 400 400 3400 8400 0 1 CLAAAA TCBAAA VVVVxx +5124 748 0 0 4 4 24 124 1124 124 5124 48 49 CPAAAA UCBAAA AAAAxx +5484 749 0 0 4 4 84 484 1484 484 5484 168 169 YCAAAA VCBAAA HHHHxx +3575 750 1 3 5 15 75 575 1575 3575 3575 150 151 NHAAAA WCBAAA OOOOxx +9723 751 1 3 3 3 23 723 1723 4723 9723 46 47 ZJAAAA XCBAAA VVVVxx +360 752 0 0 0 0 60 360 360 360 360 120 121 WNAAAA YCBAAA AAAAxx +1059 753 1 3 9 19 59 59 1059 1059 1059 118 119 TOAAAA ZCBAAA HHHHxx +4941 754 1 1 1 1 41 941 941 4941 4941 82 83 BIAAAA ADBAAA OOOOxx +2535 755 1 3 5 15 35 535 535 2535 2535 70 71 NTAAAA BDBAAA VVVVxx +4119 756 1 3 9 19 19 119 119 4119 4119 38 39 LCAAAA CDBAAA AAAAxx +3725 757 1 1 5 5 25 725 1725 3725 3725 50 51 HNAAAA DDBAAA HHHHxx +4758 758 0 2 8 18 58 758 758 4758 4758 116 117 ABAAAA EDBAAA OOOOxx +9593 759 1 1 3 13 93 593 1593 4593 9593 186 187 ZEAAAA FDBAAA VVVVxx +4663 760 1 3 3 3 63 663 663 4663 4663 126 127 JXAAAA GDBAAA AAAAxx +7734 761 0 2 4 14 34 734 1734 2734 7734 68 69 MLAAAA HDBAAA HHHHxx +9156 762 0 0 6 16 56 156 1156 4156 9156 112 113 EOAAAA IDBAAA OOOOxx +8120 763 0 0 0 0 20 120 120 3120 8120 40 41 IAAAAA JDBAAA VVVVxx +4385 764 1 1 5 5 85 385 385 4385 4385 170 171 RMAAAA KDBAAA AAAAxx +2926 765 0 2 6 6 26 926 926 2926 2926 52 53 OIAAAA LDBAAA HHHHxx +4186 766 0 2 6 6 86 186 186 4186 4186 172 173 AFAAAA MDBAAA OOOOxx +2508 767 0 0 8 8 8 508 508 2508 2508 16 17 MSAAAA NDBAAA VVVVxx +4012 768 0 0 2 12 12 12 12 4012 4012 24 25 IYAAAA ODBAAA AAAAxx +6266 769 0 2 6 6 66 266 266 1266 6266 132 133 AHAAAA PDBAAA HHHHxx +3709 770 1 1 9 9 9 709 1709 3709 3709 18 19 RMAAAA QDBAAA OOOOxx +7289 771 1 1 9 9 89 289 1289 2289 7289 178 179 JUAAAA RDBAAA VVVVxx +8875 772 1 3 5 15 75 875 875 3875 8875 150 151 JDAAAA SDBAAA AAAAxx +4412 773 0 0 2 12 12 412 412 4412 4412 24 25 SNAAAA TDBAAA HHHHxx +3033 774 1 1 3 13 33 33 1033 3033 3033 66 67 RMAAAA UDBAAA OOOOxx +1645 775 1 1 5 5 45 645 1645 1645 1645 90 91 HLAAAA VDBAAA VVVVxx +3557 776 1 1 7 17 57 557 1557 3557 3557 114 115 VGAAAA WDBAAA AAAAxx +6316 777 0 0 6 16 16 316 316 1316 6316 32 33 YIAAAA XDBAAA HHHHxx +2054 778 0 2 4 14 54 54 54 2054 2054 108 109 ABAAAA YDBAAA OOOOxx +7031 779 1 3 1 11 31 31 1031 2031 7031 62 63 LKAAAA ZDBAAA VVVVxx +3405 780 1 1 5 5 5 405 1405 3405 3405 10 11 ZAAAAA AEBAAA AAAAxx +5343 781 1 3 3 3 43 343 1343 343 5343 86 87 NXAAAA BEBAAA HHHHxx +5240 782 0 0 0 0 40 240 1240 240 5240 80 81 OTAAAA CEBAAA OOOOxx +9650 783 0 2 0 10 50 650 1650 4650 9650 100 101 EHAAAA DEBAAA VVVVxx +3777 784 1 1 7 17 77 777 1777 3777 3777 154 155 HPAAAA EEBAAA AAAAxx +9041 785 1 1 1 1 41 41 1041 4041 9041 82 83 TJAAAA FEBAAA HHHHxx +6923 786 1 3 3 3 23 923 923 1923 6923 46 47 HGAAAA GEBAAA OOOOxx +2977 787 1 1 7 17 77 977 977 2977 2977 154 155 NKAAAA HEBAAA VVVVxx +5500 788 0 0 0 0 0 500 1500 500 5500 0 1 ODAAAA IEBAAA AAAAxx +1044 789 0 0 4 4 44 44 1044 1044 1044 88 89 EOAAAA JEBAAA HHHHxx +434 790 0 2 4 14 34 434 434 434 434 68 69 SQAAAA KEBAAA OOOOxx 
+611 791 1 3 1 11 11 611 611 611 611 22 23 NXAAAA LEBAAA VVVVxx +5760 792 0 0 0 0 60 760 1760 760 5760 120 121 ONAAAA MEBAAA AAAAxx +2445 793 1 1 5 5 45 445 445 2445 2445 90 91 BQAAAA NEBAAA HHHHxx +7098 794 0 2 8 18 98 98 1098 2098 7098 196 197 ANAAAA OEBAAA OOOOxx +2188 795 0 0 8 8 88 188 188 2188 2188 176 177 EGAAAA PEBAAA VVVVxx +4597 796 1 1 7 17 97 597 597 4597 4597 194 195 VUAAAA QEBAAA AAAAxx +1913 797 1 1 3 13 13 913 1913 1913 1913 26 27 PVAAAA REBAAA HHHHxx +8696 798 0 0 6 16 96 696 696 3696 8696 192 193 MWAAAA SEBAAA OOOOxx +3332 799 0 0 2 12 32 332 1332 3332 3332 64 65 EYAAAA TEBAAA VVVVxx +8760 800 0 0 0 0 60 760 760 3760 8760 120 121 YYAAAA UEBAAA AAAAxx +3215 801 1 3 5 15 15 215 1215 3215 3215 30 31 RTAAAA VEBAAA HHHHxx +1625 802 1 1 5 5 25 625 1625 1625 1625 50 51 NKAAAA WEBAAA OOOOxx +4219 803 1 3 9 19 19 219 219 4219 4219 38 39 HGAAAA XEBAAA VVVVxx +415 804 1 3 5 15 15 415 415 415 415 30 31 ZPAAAA YEBAAA AAAAxx +4242 805 0 2 2 2 42 242 242 4242 4242 84 85 EHAAAA ZEBAAA HHHHxx +8660 806 0 0 0 0 60 660 660 3660 8660 120 121 CVAAAA AFBAAA OOOOxx +6525 807 1 1 5 5 25 525 525 1525 6525 50 51 ZQAAAA BFBAAA VVVVxx +2141 808 1 1 1 1 41 141 141 2141 2141 82 83 JEAAAA CFBAAA AAAAxx +5152 809 0 0 2 12 52 152 1152 152 5152 104 105 EQAAAA DFBAAA HHHHxx +8560 810 0 0 0 0 60 560 560 3560 8560 120 121 GRAAAA EFBAAA OOOOxx +9835 811 1 3 5 15 35 835 1835 4835 9835 70 71 HOAAAA FFBAAA VVVVxx +2657 812 1 1 7 17 57 657 657 2657 2657 114 115 FYAAAA GFBAAA AAAAxx +6085 813 1 1 5 5 85 85 85 1085 6085 170 171 BAAAAA HFBAAA HHHHxx +6698 814 0 2 8 18 98 698 698 1698 6698 196 197 QXAAAA IFBAAA OOOOxx +5421 815 1 1 1 1 21 421 1421 421 5421 42 43 NAAAAA JFBAAA VVVVxx +6661 816 1 1 1 1 61 661 661 1661 6661 122 123 FWAAAA KFBAAA AAAAxx +5645 817 1 1 5 5 45 645 1645 645 5645 90 91 DJAAAA LFBAAA HHHHxx +1248 818 0 0 8 8 48 248 1248 1248 1248 96 97 AWAAAA MFBAAA OOOOxx +5690 819 0 2 0 10 90 690 1690 690 5690 180 181 WKAAAA NFBAAA VVVVxx +4762 820 0 2 2 2 62 762 762 4762 4762 124 125 EBAAAA OFBAAA AAAAxx +1455 821 1 3 5 15 55 455 1455 1455 1455 110 111 ZDAAAA PFBAAA HHHHxx +9846 822 0 2 6 6 46 846 1846 4846 9846 92 93 SOAAAA QFBAAA OOOOxx +5295 823 1 3 5 15 95 295 1295 295 5295 190 191 RVAAAA RFBAAA VVVVxx +2826 824 0 2 6 6 26 826 826 2826 2826 52 53 SEAAAA SFBAAA AAAAxx +7496 825 0 0 6 16 96 496 1496 2496 7496 192 193 ICAAAA TFBAAA HHHHxx +3024 826 0 0 4 4 24 24 1024 3024 3024 48 49 IMAAAA UFBAAA OOOOxx +4945 827 1 1 5 5 45 945 945 4945 4945 90 91 FIAAAA VFBAAA VVVVxx +4404 828 0 0 4 4 4 404 404 4404 4404 8 9 KNAAAA WFBAAA AAAAxx +9302 829 0 2 2 2 2 302 1302 4302 9302 4 5 UTAAAA XFBAAA HHHHxx +1286 830 0 2 6 6 86 286 1286 1286 1286 172 173 MXAAAA YFBAAA OOOOxx +8435 831 1 3 5 15 35 435 435 3435 8435 70 71 LMAAAA ZFBAAA VVVVxx +8969 832 1 1 9 9 69 969 969 3969 8969 138 139 ZGAAAA AGBAAA AAAAxx +3302 833 0 2 2 2 2 302 1302 3302 3302 4 5 AXAAAA BGBAAA HHHHxx +9753 834 1 1 3 13 53 753 1753 4753 9753 106 107 DLAAAA CGBAAA OOOOxx +9374 835 0 2 4 14 74 374 1374 4374 9374 148 149 OWAAAA DGBAAA VVVVxx +4907 836 1 3 7 7 7 907 907 4907 4907 14 15 TGAAAA EGBAAA AAAAxx +1659 837 1 3 9 19 59 659 1659 1659 1659 118 119 VLAAAA FGBAAA HHHHxx +5095 838 1 3 5 15 95 95 1095 95 5095 190 191 ZNAAAA GGBAAA OOOOxx +9446 839 0 2 6 6 46 446 1446 4446 9446 92 93 IZAAAA HGBAAA VVVVxx +8528 840 0 0 8 8 28 528 528 3528 8528 56 57 AQAAAA IGBAAA AAAAxx +4890 841 0 2 0 10 90 890 890 4890 4890 180 181 CGAAAA JGBAAA HHHHxx +1221 842 1 1 1 1 21 221 1221 1221 1221 42 43 ZUAAAA KGBAAA OOOOxx +5583 843 1 3 3 3 83 583 1583 583 5583 166 167 
TGAAAA LGBAAA VVVVxx +7303 844 1 3 3 3 3 303 1303 2303 7303 6 7 XUAAAA MGBAAA AAAAxx +406 845 0 2 6 6 6 406 406 406 406 12 13 QPAAAA NGBAAA HHHHxx +7542 846 0 2 2 2 42 542 1542 2542 7542 84 85 CEAAAA OGBAAA OOOOxx +9507 847 1 3 7 7 7 507 1507 4507 9507 14 15 RBAAAA PGBAAA VVVVxx +9511 848 1 3 1 11 11 511 1511 4511 9511 22 23 VBAAAA QGBAAA AAAAxx +1373 849 1 1 3 13 73 373 1373 1373 1373 146 147 VAAAAA RGBAAA HHHHxx +6556 850 0 0 6 16 56 556 556 1556 6556 112 113 ESAAAA SGBAAA OOOOxx +4117 851 1 1 7 17 17 117 117 4117 4117 34 35 JCAAAA TGBAAA VVVVxx +7794 852 0 2 4 14 94 794 1794 2794 7794 188 189 UNAAAA UGBAAA AAAAxx +7170 853 0 2 0 10 70 170 1170 2170 7170 140 141 UPAAAA VGBAAA HHHHxx +5809 854 1 1 9 9 9 809 1809 809 5809 18 19 LPAAAA WGBAAA OOOOxx +7828 855 0 0 8 8 28 828 1828 2828 7828 56 57 CPAAAA XGBAAA VVVVxx +8046 856 0 2 6 6 46 46 46 3046 8046 92 93 MXAAAA YGBAAA AAAAxx +4833 857 1 1 3 13 33 833 833 4833 4833 66 67 XDAAAA ZGBAAA HHHHxx +2107 858 1 3 7 7 7 107 107 2107 2107 14 15 BDAAAA AHBAAA OOOOxx +4276 859 0 0 6 16 76 276 276 4276 4276 152 153 MIAAAA BHBAAA VVVVxx +9536 860 0 0 6 16 36 536 1536 4536 9536 72 73 UCAAAA CHBAAA AAAAxx +5549 861 1 1 9 9 49 549 1549 549 5549 98 99 LFAAAA DHBAAA HHHHxx +6427 862 1 3 7 7 27 427 427 1427 6427 54 55 FNAAAA EHBAAA OOOOxx +1382 863 0 2 2 2 82 382 1382 1382 1382 164 165 EBAAAA FHBAAA VVVVxx +3256 864 0 0 6 16 56 256 1256 3256 3256 112 113 GVAAAA GHBAAA AAAAxx +3270 865 0 2 0 10 70 270 1270 3270 3270 140 141 UVAAAA HHBAAA HHHHxx +4808 866 0 0 8 8 8 808 808 4808 4808 16 17 YCAAAA IHBAAA OOOOxx +7938 867 0 2 8 18 38 938 1938 2938 7938 76 77 ITAAAA JHBAAA VVVVxx +4405 868 1 1 5 5 5 405 405 4405 4405 10 11 LNAAAA KHBAAA AAAAxx +2264 869 0 0 4 4 64 264 264 2264 2264 128 129 CJAAAA LHBAAA HHHHxx +80 870 0 0 0 0 80 80 80 80 80 160 161 CDAAAA MHBAAA OOOOxx +320 871 0 0 0 0 20 320 320 320 320 40 41 IMAAAA NHBAAA VVVVxx +2383 872 1 3 3 3 83 383 383 2383 2383 166 167 RNAAAA OHBAAA AAAAxx +3146 873 0 2 6 6 46 146 1146 3146 3146 92 93 ARAAAA PHBAAA HHHHxx +6911 874 1 3 1 11 11 911 911 1911 6911 22 23 VFAAAA QHBAAA OOOOxx +7377 875 1 1 7 17 77 377 1377 2377 7377 154 155 TXAAAA RHBAAA VVVVxx +9965 876 1 1 5 5 65 965 1965 4965 9965 130 131 HTAAAA SHBAAA AAAAxx +8361 877 1 1 1 1 61 361 361 3361 8361 122 123 PJAAAA THBAAA HHHHxx +9417 878 1 1 7 17 17 417 1417 4417 9417 34 35 FYAAAA UHBAAA OOOOxx +2483 879 1 3 3 3 83 483 483 2483 2483 166 167 NRAAAA VHBAAA VVVVxx +9843 880 1 3 3 3 43 843 1843 4843 9843 86 87 POAAAA WHBAAA AAAAxx +6395 881 1 3 5 15 95 395 395 1395 6395 190 191 ZLAAAA XHBAAA HHHHxx +6444 882 0 0 4 4 44 444 444 1444 6444 88 89 WNAAAA YHBAAA OOOOxx +1820 883 0 0 0 0 20 820 1820 1820 1820 40 41 ASAAAA ZHBAAA VVVVxx +2768 884 0 0 8 8 68 768 768 2768 2768 136 137 MCAAAA AIBAAA AAAAxx +5413 885 1 1 3 13 13 413 1413 413 5413 26 27 FAAAAA BIBAAA HHHHxx +2923 886 1 3 3 3 23 923 923 2923 2923 46 47 LIAAAA CIBAAA OOOOxx +5286 887 0 2 6 6 86 286 1286 286 5286 172 173 IVAAAA DIBAAA VVVVxx +6126 888 0 2 6 6 26 126 126 1126 6126 52 53 QBAAAA EIBAAA AAAAxx +8343 889 1 3 3 3 43 343 343 3343 8343 86 87 XIAAAA FIBAAA HHHHxx +6010 890 0 2 0 10 10 10 10 1010 6010 20 21 EXAAAA GIBAAA OOOOxx +4177 891 1 1 7 17 77 177 177 4177 4177 154 155 REAAAA HIBAAA VVVVxx +5808 892 0 0 8 8 8 808 1808 808 5808 16 17 KPAAAA IIBAAA AAAAxx +4859 893 1 3 9 19 59 859 859 4859 4859 118 119 XEAAAA JIBAAA HHHHxx +9252 894 0 0 2 12 52 252 1252 4252 9252 104 105 WRAAAA KIBAAA OOOOxx +2941 895 1 1 1 1 41 941 941 2941 2941 82 83 DJAAAA LIBAAA VVVVxx +8693 896 1 1 3 13 93 693 693 3693 8693 186 
187 JWAAAA MIBAAA AAAAxx +4432 897 0 0 2 12 32 432 432 4432 4432 64 65 MOAAAA NIBAAA HHHHxx +2371 898 1 3 1 11 71 371 371 2371 2371 142 143 FNAAAA OIBAAA OOOOxx +7546 899 0 2 6 6 46 546 1546 2546 7546 92 93 GEAAAA PIBAAA VVVVxx +1369 900 1 1 9 9 69 369 1369 1369 1369 138 139 RAAAAA QIBAAA AAAAxx +4687 901 1 3 7 7 87 687 687 4687 4687 174 175 HYAAAA RIBAAA HHHHxx +8941 902 1 1 1 1 41 941 941 3941 8941 82 83 XFAAAA SIBAAA OOOOxx +226 903 0 2 6 6 26 226 226 226 226 52 53 SIAAAA TIBAAA VVVVxx +3493 904 1 1 3 13 93 493 1493 3493 3493 186 187 JEAAAA UIBAAA AAAAxx +6433 905 1 1 3 13 33 433 433 1433 6433 66 67 LNAAAA VIBAAA HHHHxx +9189 906 1 1 9 9 89 189 1189 4189 9189 178 179 LPAAAA WIBAAA OOOOxx +6027 907 1 3 7 7 27 27 27 1027 6027 54 55 VXAAAA XIBAAA VVVVxx +4615 908 1 3 5 15 15 615 615 4615 4615 30 31 NVAAAA YIBAAA AAAAxx +5320 909 0 0 0 0 20 320 1320 320 5320 40 41 QWAAAA ZIBAAA HHHHxx +7002 910 0 2 2 2 2 2 1002 2002 7002 4 5 IJAAAA AJBAAA OOOOxx +7367 911 1 3 7 7 67 367 1367 2367 7367 134 135 JXAAAA BJBAAA VVVVxx +289 912 1 1 9 9 89 289 289 289 289 178 179 DLAAAA CJBAAA AAAAxx +407 913 1 3 7 7 7 407 407 407 407 14 15 RPAAAA DJBAAA HHHHxx +504 914 0 0 4 4 4 504 504 504 504 8 9 KTAAAA EJBAAA OOOOxx +8301 915 1 1 1 1 1 301 301 3301 8301 2 3 HHAAAA FJBAAA VVVVxx +1396 916 0 0 6 16 96 396 1396 1396 1396 192 193 SBAAAA GJBAAA AAAAxx +4794 917 0 2 4 14 94 794 794 4794 4794 188 189 KCAAAA HJBAAA HHHHxx +6400 918 0 0 0 0 0 400 400 1400 6400 0 1 EMAAAA IJBAAA OOOOxx +1275 919 1 3 5 15 75 275 1275 1275 1275 150 151 BXAAAA JJBAAA VVVVxx +5797 920 1 1 7 17 97 797 1797 797 5797 194 195 ZOAAAA KJBAAA AAAAxx +2221 921 1 1 1 1 21 221 221 2221 2221 42 43 LHAAAA LJBAAA HHHHxx +2504 922 0 0 4 4 4 504 504 2504 2504 8 9 ISAAAA MJBAAA OOOOxx +2143 923 1 3 3 3 43 143 143 2143 2143 86 87 LEAAAA NJBAAA VVVVxx +1083 924 1 3 3 3 83 83 1083 1083 1083 166 167 RPAAAA OJBAAA AAAAxx +6148 925 0 0 8 8 48 148 148 1148 6148 96 97 MCAAAA PJBAAA HHHHxx +3612 926 0 0 2 12 12 612 1612 3612 3612 24 25 YIAAAA QJBAAA OOOOxx +9499 927 1 3 9 19 99 499 1499 4499 9499 198 199 JBAAAA RJBAAA VVVVxx +5773 928 1 1 3 13 73 773 1773 773 5773 146 147 BOAAAA SJBAAA AAAAxx +1014 929 0 2 4 14 14 14 1014 1014 1014 28 29 ANAAAA TJBAAA HHHHxx +1427 930 1 3 7 7 27 427 1427 1427 1427 54 55 XCAAAA UJBAAA OOOOxx +6770 931 0 2 0 10 70 770 770 1770 6770 140 141 KAAAAA VJBAAA VVVVxx +9042 932 0 2 2 2 42 42 1042 4042 9042 84 85 UJAAAA WJBAAA AAAAxx +9892 933 0 0 2 12 92 892 1892 4892 9892 184 185 MQAAAA XJBAAA HHHHxx +1771 934 1 3 1 11 71 771 1771 1771 1771 142 143 DQAAAA YJBAAA OOOOxx +7392 935 0 0 2 12 92 392 1392 2392 7392 184 185 IYAAAA ZJBAAA VVVVxx +4465 936 1 1 5 5 65 465 465 4465 4465 130 131 TPAAAA AKBAAA AAAAxx +278 937 0 2 8 18 78 278 278 278 278 156 157 SKAAAA BKBAAA HHHHxx +7776 938 0 0 6 16 76 776 1776 2776 7776 152 153 CNAAAA CKBAAA OOOOxx +3763 939 1 3 3 3 63 763 1763 3763 3763 126 127 TOAAAA DKBAAA VVVVxx +7503 940 1 3 3 3 3 503 1503 2503 7503 6 7 PCAAAA EKBAAA AAAAxx +3793 941 1 1 3 13 93 793 1793 3793 3793 186 187 XPAAAA FKBAAA HHHHxx +6510 942 0 2 0 10 10 510 510 1510 6510 20 21 KQAAAA GKBAAA OOOOxx +7641 943 1 1 1 1 41 641 1641 2641 7641 82 83 XHAAAA HKBAAA VVVVxx +3228 944 0 0 8 8 28 228 1228 3228 3228 56 57 EUAAAA IKBAAA AAAAxx +194 945 0 2 4 14 94 194 194 194 194 188 189 MHAAAA JKBAAA HHHHxx +8555 946 1 3 5 15 55 555 555 3555 8555 110 111 BRAAAA KKBAAA OOOOxx +4997 947 1 1 7 17 97 997 997 4997 4997 194 195 FKAAAA LKBAAA VVVVxx +8687 948 1 3 7 7 87 687 687 3687 8687 174 175 DWAAAA MKBAAA AAAAxx +6632 949 0 0 2 12 32 632 632 1632 6632 
64 65 CVAAAA NKBAAA HHHHxx +9607 950 1 3 7 7 7 607 1607 4607 9607 14 15 NFAAAA OKBAAA OOOOxx +6201 951 1 1 1 1 1 201 201 1201 6201 2 3 NEAAAA PKBAAA VVVVxx +857 952 1 1 7 17 57 857 857 857 857 114 115 ZGAAAA QKBAAA AAAAxx +5623 953 1 3 3 3 23 623 1623 623 5623 46 47 HIAAAA RKBAAA HHHHxx +5979 954 1 3 9 19 79 979 1979 979 5979 158 159 ZVAAAA SKBAAA OOOOxx +2201 955 1 1 1 1 1 201 201 2201 2201 2 3 RGAAAA TKBAAA VVVVxx +3166 956 0 2 6 6 66 166 1166 3166 3166 132 133 URAAAA UKBAAA AAAAxx +6249 957 1 1 9 9 49 249 249 1249 6249 98 99 JGAAAA VKBAAA HHHHxx +3271 958 1 3 1 11 71 271 1271 3271 3271 142 143 VVAAAA WKBAAA OOOOxx +7777 959 1 1 7 17 77 777 1777 2777 7777 154 155 DNAAAA XKBAAA VVVVxx +6732 960 0 0 2 12 32 732 732 1732 6732 64 65 YYAAAA YKBAAA AAAAxx +6297 961 1 1 7 17 97 297 297 1297 6297 194 195 FIAAAA ZKBAAA HHHHxx +5685 962 1 1 5 5 85 685 1685 685 5685 170 171 RKAAAA ALBAAA OOOOxx +9931 963 1 3 1 11 31 931 1931 4931 9931 62 63 ZRAAAA BLBAAA VVVVxx +7485 964 1 1 5 5 85 485 1485 2485 7485 170 171 XBAAAA CLBAAA AAAAxx +386 965 0 2 6 6 86 386 386 386 386 172 173 WOAAAA DLBAAA HHHHxx +8204 966 0 0 4 4 4 204 204 3204 8204 8 9 ODAAAA ELBAAA OOOOxx +3606 967 0 2 6 6 6 606 1606 3606 3606 12 13 SIAAAA FLBAAA VVVVxx +1692 968 0 0 2 12 92 692 1692 1692 1692 184 185 CNAAAA GLBAAA AAAAxx +3002 969 0 2 2 2 2 2 1002 3002 3002 4 5 MLAAAA HLBAAA HHHHxx +9676 970 0 0 6 16 76 676 1676 4676 9676 152 153 EIAAAA ILBAAA OOOOxx +915 971 1 3 5 15 15 915 915 915 915 30 31 FJAAAA JLBAAA VVVVxx +7706 972 0 2 6 6 6 706 1706 2706 7706 12 13 KKAAAA KLBAAA AAAAxx +6080 973 0 0 0 0 80 80 80 1080 6080 160 161 WZAAAA LLBAAA HHHHxx +1860 974 0 0 0 0 60 860 1860 1860 1860 120 121 OTAAAA MLBAAA OOOOxx +1444 975 0 0 4 4 44 444 1444 1444 1444 88 89 ODAAAA NLBAAA VVVVxx +7208 976 0 0 8 8 8 208 1208 2208 7208 16 17 GRAAAA OLBAAA AAAAxx +8554 977 0 2 4 14 54 554 554 3554 8554 108 109 ARAAAA PLBAAA HHHHxx +2028 978 0 0 8 8 28 28 28 2028 2028 56 57 AAAAAA QLBAAA OOOOxx +9893 979 1 1 3 13 93 893 1893 4893 9893 186 187 NQAAAA RLBAAA VVVVxx +4740 980 0 0 0 0 40 740 740 4740 4740 80 81 IAAAAA SLBAAA AAAAxx +6186 981 0 2 6 6 86 186 186 1186 6186 172 173 YDAAAA TLBAAA HHHHxx +6357 982 1 1 7 17 57 357 357 1357 6357 114 115 NKAAAA ULBAAA OOOOxx +3699 983 1 3 9 19 99 699 1699 3699 3699 198 199 HMAAAA VLBAAA VVVVxx +7620 984 0 0 0 0 20 620 1620 2620 7620 40 41 CHAAAA WLBAAA AAAAxx +921 985 1 1 1 1 21 921 921 921 921 42 43 LJAAAA XLBAAA HHHHxx +5506 986 0 2 6 6 6 506 1506 506 5506 12 13 UDAAAA YLBAAA OOOOxx +8851 987 1 3 1 11 51 851 851 3851 8851 102 103 LCAAAA ZLBAAA VVVVxx +3205 988 1 1 5 5 5 205 1205 3205 3205 10 11 HTAAAA AMBAAA AAAAxx +1956 989 0 0 6 16 56 956 1956 1956 1956 112 113 GXAAAA BMBAAA HHHHxx +6272 990 0 0 2 12 72 272 272 1272 6272 144 145 GHAAAA CMBAAA OOOOxx +1509 991 1 1 9 9 9 509 1509 1509 1509 18 19 BGAAAA DMBAAA VVVVxx +53 992 1 1 3 13 53 53 53 53 53 106 107 BCAAAA EMBAAA AAAAxx +213 993 1 1 3 13 13 213 213 213 213 26 27 FIAAAA FMBAAA HHHHxx +4924 994 0 0 4 4 24 924 924 4924 4924 48 49 KHAAAA GMBAAA OOOOxx +2097 995 1 1 7 17 97 97 97 2097 2097 194 195 RCAAAA HMBAAA VVVVxx +4607 996 1 3 7 7 7 607 607 4607 4607 14 15 FVAAAA IMBAAA AAAAxx +1582 997 0 2 2 2 82 582 1582 1582 1582 164 165 WIAAAA JMBAAA HHHHxx +6643 998 1 3 3 3 43 643 643 1643 6643 86 87 NVAAAA KMBAAA OOOOxx +2238 999 0 2 8 18 38 238 238 2238 2238 76 77 CIAAAA LMBAAA VVVVxx +2942 1000 0 2 2 2 42 942 942 2942 2942 84 85 EJAAAA MMBAAA AAAAxx +1655 1001 1 3 5 15 55 655 1655 1655 1655 110 111 RLAAAA NMBAAA HHHHxx +3226 1002 0 2 6 6 26 226 1226 3226 3226 52 53 
CUAAAA OMBAAA OOOOxx +4263 1003 1 3 3 3 63 263 263 4263 4263 126 127 ZHAAAA PMBAAA VVVVxx +960 1004 0 0 0 0 60 960 960 960 960 120 121 YKAAAA QMBAAA AAAAxx +1213 1005 1 1 3 13 13 213 1213 1213 1213 26 27 RUAAAA RMBAAA HHHHxx +1845 1006 1 1 5 5 45 845 1845 1845 1845 90 91 ZSAAAA SMBAAA OOOOxx +6944 1007 0 0 4 4 44 944 944 1944 6944 88 89 CHAAAA TMBAAA VVVVxx +5284 1008 0 0 4 4 84 284 1284 284 5284 168 169 GVAAAA UMBAAA AAAAxx +188 1009 0 0 8 8 88 188 188 188 188 176 177 GHAAAA VMBAAA HHHHxx +748 1010 0 0 8 8 48 748 748 748 748 96 97 UCAAAA WMBAAA OOOOxx +2226 1011 0 2 6 6 26 226 226 2226 2226 52 53 QHAAAA XMBAAA VVVVxx +7342 1012 0 2 2 2 42 342 1342 2342 7342 84 85 KWAAAA YMBAAA AAAAxx +6120 1013 0 0 0 0 20 120 120 1120 6120 40 41 KBAAAA ZMBAAA HHHHxx +536 1014 0 0 6 16 36 536 536 536 536 72 73 QUAAAA ANBAAA OOOOxx +3239 1015 1 3 9 19 39 239 1239 3239 3239 78 79 PUAAAA BNBAAA VVVVxx +2832 1016 0 0 2 12 32 832 832 2832 2832 64 65 YEAAAA CNBAAA AAAAxx +5296 1017 0 0 6 16 96 296 1296 296 5296 192 193 SVAAAA DNBAAA HHHHxx +5795 1018 1 3 5 15 95 795 1795 795 5795 190 191 XOAAAA ENBAAA OOOOxx +6290 1019 0 2 0 10 90 290 290 1290 6290 180 181 YHAAAA FNBAAA VVVVxx +4916 1020 0 0 6 16 16 916 916 4916 4916 32 33 CHAAAA GNBAAA AAAAxx +8366 1021 0 2 6 6 66 366 366 3366 8366 132 133 UJAAAA HNBAAA HHHHxx +4248 1022 0 0 8 8 48 248 248 4248 4248 96 97 KHAAAA INBAAA OOOOxx +6460 1023 0 0 0 0 60 460 460 1460 6460 120 121 MOAAAA JNBAAA VVVVxx +9296 1024 0 0 6 16 96 296 1296 4296 9296 192 193 OTAAAA KNBAAA AAAAxx +3486 1025 0 2 6 6 86 486 1486 3486 3486 172 173 CEAAAA LNBAAA HHHHxx +5664 1026 0 0 4 4 64 664 1664 664 5664 128 129 WJAAAA MNBAAA OOOOxx +7624 1027 0 0 4 4 24 624 1624 2624 7624 48 49 GHAAAA NNBAAA VVVVxx +2790 1028 0 2 0 10 90 790 790 2790 2790 180 181 IDAAAA ONBAAA AAAAxx +682 1029 0 2 2 2 82 682 682 682 682 164 165 GAAAAA PNBAAA HHHHxx +6412 1030 0 0 2 12 12 412 412 1412 6412 24 25 QMAAAA QNBAAA OOOOxx +6882 1031 0 2 2 2 82 882 882 1882 6882 164 165 SEAAAA RNBAAA VVVVxx +1332 1032 0 0 2 12 32 332 1332 1332 1332 64 65 GZAAAA SNBAAA AAAAxx +4911 1033 1 3 1 11 11 911 911 4911 4911 22 23 XGAAAA TNBAAA HHHHxx +3528 1034 0 0 8 8 28 528 1528 3528 3528 56 57 SFAAAA UNBAAA OOOOxx +271 1035 1 3 1 11 71 271 271 271 271 142 143 LKAAAA VNBAAA VVVVxx +7007 1036 1 3 7 7 7 7 1007 2007 7007 14 15 NJAAAA WNBAAA AAAAxx +2198 1037 0 2 8 18 98 198 198 2198 2198 196 197 OGAAAA XNBAAA HHHHxx +4266 1038 0 2 6 6 66 266 266 4266 4266 132 133 CIAAAA YNBAAA OOOOxx +9867 1039 1 3 7 7 67 867 1867 4867 9867 134 135 NPAAAA ZNBAAA VVVVxx +7602 1040 0 2 2 2 2 602 1602 2602 7602 4 5 KGAAAA AOBAAA AAAAxx +7521 1041 1 1 1 1 21 521 1521 2521 7521 42 43 HDAAAA BOBAAA HHHHxx +7200 1042 0 0 0 0 0 200 1200 2200 7200 0 1 YQAAAA COBAAA OOOOxx +4816 1043 0 0 6 16 16 816 816 4816 4816 32 33 GDAAAA DOBAAA VVVVxx +1669 1044 1 1 9 9 69 669 1669 1669 1669 138 139 FMAAAA EOBAAA AAAAxx +4764 1045 0 0 4 4 64 764 764 4764 4764 128 129 GBAAAA FOBAAA HHHHxx +7393 1046 1 1 3 13 93 393 1393 2393 7393 186 187 JYAAAA GOBAAA OOOOxx +7434 1047 0 2 4 14 34 434 1434 2434 7434 68 69 YZAAAA HOBAAA VVVVxx +9079 1048 1 3 9 19 79 79 1079 4079 9079 158 159 FLAAAA IOBAAA AAAAxx +9668 1049 0 0 8 8 68 668 1668 4668 9668 136 137 WHAAAA JOBAAA HHHHxx +7184 1050 0 0 4 4 84 184 1184 2184 7184 168 169 IQAAAA KOBAAA OOOOxx +7347 1051 1 3 7 7 47 347 1347 2347 7347 94 95 PWAAAA LOBAAA VVVVxx +951 1052 1 3 1 11 51 951 951 951 951 102 103 PKAAAA MOBAAA AAAAxx +4513 1053 1 1 3 13 13 513 513 4513 4513 26 27 PRAAAA NOBAAA HHHHxx +2692 1054 0 0 2 12 92 692 692 2692 2692 184 185 
OZAAAA OOBAAA OOOOxx +9930 1055 0 2 0 10 30 930 1930 4930 9930 60 61 YRAAAA POBAAA VVVVxx +4516 1056 0 0 6 16 16 516 516 4516 4516 32 33 SRAAAA QOBAAA AAAAxx +1592 1057 0 0 2 12 92 592 1592 1592 1592 184 185 GJAAAA ROBAAA HHHHxx +6312 1058 0 0 2 12 12 312 312 1312 6312 24 25 UIAAAA SOBAAA OOOOxx +185 1059 1 1 5 5 85 185 185 185 185 170 171 DHAAAA TOBAAA VVVVxx +1848 1060 0 0 8 8 48 848 1848 1848 1848 96 97 CTAAAA UOBAAA AAAAxx +5844 1061 0 0 4 4 44 844 1844 844 5844 88 89 UQAAAA VOBAAA HHHHxx +1666 1062 0 2 6 6 66 666 1666 1666 1666 132 133 CMAAAA WOBAAA OOOOxx +5864 1063 0 0 4 4 64 864 1864 864 5864 128 129 ORAAAA XOBAAA VVVVxx +1004 1064 0 0 4 4 4 4 1004 1004 1004 8 9 QMAAAA YOBAAA AAAAxx +1758 1065 0 2 8 18 58 758 1758 1758 1758 116 117 QPAAAA ZOBAAA HHHHxx +8823 1066 1 3 3 3 23 823 823 3823 8823 46 47 JBAAAA APBAAA OOOOxx +129 1067 1 1 9 9 29 129 129 129 129 58 59 ZEAAAA BPBAAA VVVVxx +5703 1068 1 3 3 3 3 703 1703 703 5703 6 7 JLAAAA CPBAAA AAAAxx +3331 1069 1 3 1 11 31 331 1331 3331 3331 62 63 DYAAAA DPBAAA HHHHxx +5791 1070 1 3 1 11 91 791 1791 791 5791 182 183 TOAAAA EPBAAA OOOOxx +4421 1071 1 1 1 1 21 421 421 4421 4421 42 43 BOAAAA FPBAAA VVVVxx +9740 1072 0 0 0 0 40 740 1740 4740 9740 80 81 QKAAAA GPBAAA AAAAxx +798 1073 0 2 8 18 98 798 798 798 798 196 197 SEAAAA HPBAAA HHHHxx +571 1074 1 3 1 11 71 571 571 571 571 142 143 ZVAAAA IPBAAA OOOOxx +7084 1075 0 0 4 4 84 84 1084 2084 7084 168 169 MMAAAA JPBAAA VVVVxx +650 1076 0 2 0 10 50 650 650 650 650 100 101 AZAAAA KPBAAA AAAAxx +1467 1077 1 3 7 7 67 467 1467 1467 1467 134 135 LEAAAA LPBAAA HHHHxx +5446 1078 0 2 6 6 46 446 1446 446 5446 92 93 MBAAAA MPBAAA OOOOxx +830 1079 0 2 0 10 30 830 830 830 830 60 61 YFAAAA NPBAAA VVVVxx +5516 1080 0 0 6 16 16 516 1516 516 5516 32 33 EEAAAA OPBAAA AAAAxx +8520 1081 0 0 0 0 20 520 520 3520 8520 40 41 SPAAAA PPBAAA HHHHxx +1152 1082 0 0 2 12 52 152 1152 1152 1152 104 105 ISAAAA QPBAAA OOOOxx +862 1083 0 2 2 2 62 862 862 862 862 124 125 EHAAAA RPBAAA VVVVxx +454 1084 0 2 4 14 54 454 454 454 454 108 109 MRAAAA SPBAAA AAAAxx +9956 1085 0 0 6 16 56 956 1956 4956 9956 112 113 YSAAAA TPBAAA HHHHxx +1654 1086 0 2 4 14 54 654 1654 1654 1654 108 109 QLAAAA UPBAAA OOOOxx +257 1087 1 1 7 17 57 257 257 257 257 114 115 XJAAAA VPBAAA VVVVxx +5469 1088 1 1 9 9 69 469 1469 469 5469 138 139 JCAAAA WPBAAA AAAAxx +9075 1089 1 3 5 15 75 75 1075 4075 9075 150 151 BLAAAA XPBAAA HHHHxx +7799 1090 1 3 9 19 99 799 1799 2799 7799 198 199 ZNAAAA YPBAAA OOOOxx +2001 1091 1 1 1 1 1 1 1 2001 2001 2 3 ZYAAAA ZPBAAA VVVVxx +9786 1092 0 2 6 6 86 786 1786 4786 9786 172 173 KMAAAA AQBAAA AAAAxx +7281 1093 1 1 1 1 81 281 1281 2281 7281 162 163 BUAAAA BQBAAA HHHHxx +5137 1094 1 1 7 17 37 137 1137 137 5137 74 75 PPAAAA CQBAAA OOOOxx +4053 1095 1 1 3 13 53 53 53 4053 4053 106 107 XZAAAA DQBAAA VVVVxx +7911 1096 1 3 1 11 11 911 1911 2911 7911 22 23 HSAAAA EQBAAA AAAAxx +4298 1097 0 2 8 18 98 298 298 4298 4298 196 197 IJAAAA FQBAAA HHHHxx +4805 1098 1 1 5 5 5 805 805 4805 4805 10 11 VCAAAA GQBAAA OOOOxx +9038 1099 0 2 8 18 38 38 1038 4038 9038 76 77 QJAAAA HQBAAA VVVVxx +8023 1100 1 3 3 3 23 23 23 3023 8023 46 47 PWAAAA IQBAAA AAAAxx +6595 1101 1 3 5 15 95 595 595 1595 6595 190 191 RTAAAA JQBAAA HHHHxx +9831 1102 1 3 1 11 31 831 1831 4831 9831 62 63 DOAAAA KQBAAA OOOOxx +788 1103 0 0 8 8 88 788 788 788 788 176 177 IEAAAA LQBAAA VVVVxx +902 1104 0 2 2 2 2 902 902 902 902 4 5 SIAAAA MQBAAA AAAAxx +9137 1105 1 1 7 17 37 137 1137 4137 9137 74 75 LNAAAA NQBAAA HHHHxx +1744 1106 0 0 4 4 44 744 1744 1744 1744 88 89 CPAAAA OQBAAA OOOOxx +7285 
1107 1 1 5 5 85 285 1285 2285 7285 170 171 FUAAAA PQBAAA VVVVxx +7006 1108 0 2 6 6 6 6 1006 2006 7006 12 13 MJAAAA QQBAAA AAAAxx +9236 1109 0 0 6 16 36 236 1236 4236 9236 72 73 GRAAAA RQBAAA HHHHxx +5472 1110 0 0 2 12 72 472 1472 472 5472 144 145 MCAAAA SQBAAA OOOOxx +7975 1111 1 3 5 15 75 975 1975 2975 7975 150 151 TUAAAA TQBAAA VVVVxx +4181 1112 1 1 1 1 81 181 181 4181 4181 162 163 VEAAAA UQBAAA AAAAxx +7677 1113 1 1 7 17 77 677 1677 2677 7677 154 155 HJAAAA VQBAAA HHHHxx +35 1114 1 3 5 15 35 35 35 35 35 70 71 JBAAAA WQBAAA OOOOxx +6813 1115 1 1 3 13 13 813 813 1813 6813 26 27 BCAAAA XQBAAA VVVVxx +6618 1116 0 2 8 18 18 618 618 1618 6618 36 37 OUAAAA YQBAAA AAAAxx +8069 1117 1 1 9 9 69 69 69 3069 8069 138 139 JYAAAA ZQBAAA HHHHxx +3071 1118 1 3 1 11 71 71 1071 3071 3071 142 143 DOAAAA ARBAAA OOOOxx +4390 1119 0 2 0 10 90 390 390 4390 4390 180 181 WMAAAA BRBAAA VVVVxx +7764 1120 0 0 4 4 64 764 1764 2764 7764 128 129 QMAAAA CRBAAA AAAAxx +8163 1121 1 3 3 3 63 163 163 3163 8163 126 127 ZBAAAA DRBAAA HHHHxx +1961 1122 1 1 1 1 61 961 1961 1961 1961 122 123 LXAAAA ERBAAA OOOOxx +1103 1123 1 3 3 3 3 103 1103 1103 1103 6 7 LQAAAA FRBAAA VVVVxx +5486 1124 0 2 6 6 86 486 1486 486 5486 172 173 ADAAAA GRBAAA AAAAxx +9513 1125 1 1 3 13 13 513 1513 4513 9513 26 27 XBAAAA HRBAAA HHHHxx +7311 1126 1 3 1 11 11 311 1311 2311 7311 22 23 FVAAAA IRBAAA OOOOxx +4144 1127 0 0 4 4 44 144 144 4144 4144 88 89 KDAAAA JRBAAA VVVVxx +7901 1128 1 1 1 1 1 901 1901 2901 7901 2 3 XRAAAA KRBAAA AAAAxx +4629 1129 1 1 9 9 29 629 629 4629 4629 58 59 BWAAAA LRBAAA HHHHxx +6858 1130 0 2 8 18 58 858 858 1858 6858 116 117 UDAAAA MRBAAA OOOOxx +125 1131 1 1 5 5 25 125 125 125 125 50 51 VEAAAA NRBAAA VVVVxx +3834 1132 0 2 4 14 34 834 1834 3834 3834 68 69 MRAAAA ORBAAA AAAAxx +8155 1133 1 3 5 15 55 155 155 3155 8155 110 111 RBAAAA PRBAAA HHHHxx +8230 1134 0 2 0 10 30 230 230 3230 8230 60 61 OEAAAA QRBAAA OOOOxx +744 1135 0 0 4 4 44 744 744 744 744 88 89 QCAAAA RRBAAA VVVVxx +357 1136 1 1 7 17 57 357 357 357 357 114 115 TNAAAA SRBAAA AAAAxx +2159 1137 1 3 9 19 59 159 159 2159 2159 118 119 BFAAAA TRBAAA HHHHxx +8559 1138 1 3 9 19 59 559 559 3559 8559 118 119 FRAAAA URBAAA OOOOxx +6866 1139 0 2 6 6 66 866 866 1866 6866 132 133 CEAAAA VRBAAA VVVVxx +3863 1140 1 3 3 3 63 863 1863 3863 3863 126 127 PSAAAA WRBAAA AAAAxx +4193 1141 1 1 3 13 93 193 193 4193 4193 186 187 HFAAAA XRBAAA HHHHxx +3277 1142 1 1 7 17 77 277 1277 3277 3277 154 155 BWAAAA YRBAAA OOOOxx +5577 1143 1 1 7 17 77 577 1577 577 5577 154 155 NGAAAA ZRBAAA VVVVxx +9503 1144 1 3 3 3 3 503 1503 4503 9503 6 7 NBAAAA ASBAAA AAAAxx +7642 1145 0 2 2 2 42 642 1642 2642 7642 84 85 YHAAAA BSBAAA HHHHxx +6197 1146 1 1 7 17 97 197 197 1197 6197 194 195 JEAAAA CSBAAA OOOOxx +8995 1147 1 3 5 15 95 995 995 3995 8995 190 191 ZHAAAA DSBAAA VVVVxx +440 1148 0 0 0 0 40 440 440 440 440 80 81 YQAAAA ESBAAA AAAAxx +8418 1149 0 2 8 18 18 418 418 3418 8418 36 37 ULAAAA FSBAAA HHHHxx +8531 1150 1 3 1 11 31 531 531 3531 8531 62 63 DQAAAA GSBAAA OOOOxx +3790 1151 0 2 0 10 90 790 1790 3790 3790 180 181 UPAAAA HSBAAA VVVVxx +7610 1152 0 2 0 10 10 610 1610 2610 7610 20 21 SGAAAA ISBAAA AAAAxx +1252 1153 0 0 2 12 52 252 1252 1252 1252 104 105 EWAAAA JSBAAA HHHHxx +7559 1154 1 3 9 19 59 559 1559 2559 7559 118 119 TEAAAA KSBAAA OOOOxx +9945 1155 1 1 5 5 45 945 1945 4945 9945 90 91 NSAAAA LSBAAA VVVVxx +9023 1156 1 3 3 3 23 23 1023 4023 9023 46 47 BJAAAA MSBAAA AAAAxx +3516 1157 0 0 6 16 16 516 1516 3516 3516 32 33 GFAAAA NSBAAA HHHHxx +4671 1158 1 3 1 11 71 671 671 4671 4671 142 143 RXAAAA OSBAAA OOOOxx 
+1465 1159 1 1 5 5 65 465 1465 1465 1465 130 131 JEAAAA PSBAAA VVVVxx +9515 1160 1 3 5 15 15 515 1515 4515 9515 30 31 ZBAAAA QSBAAA AAAAxx +3242 1161 0 2 2 2 42 242 1242 3242 3242 84 85 SUAAAA RSBAAA HHHHxx +1732 1162 0 0 2 12 32 732 1732 1732 1732 64 65 QOAAAA SSBAAA OOOOxx +1678 1163 0 2 8 18 78 678 1678 1678 1678 156 157 OMAAAA TSBAAA VVVVxx +1464 1164 0 0 4 4 64 464 1464 1464 1464 128 129 IEAAAA USBAAA AAAAxx +6546 1165 0 2 6 6 46 546 546 1546 6546 92 93 URAAAA VSBAAA HHHHxx +4448 1166 0 0 8 8 48 448 448 4448 4448 96 97 CPAAAA WSBAAA OOOOxx +9847 1167 1 3 7 7 47 847 1847 4847 9847 94 95 TOAAAA XSBAAA VVVVxx +8264 1168 0 0 4 4 64 264 264 3264 8264 128 129 WFAAAA YSBAAA AAAAxx +1620 1169 0 0 0 0 20 620 1620 1620 1620 40 41 IKAAAA ZSBAAA HHHHxx +9388 1170 0 0 8 8 88 388 1388 4388 9388 176 177 CXAAAA ATBAAA OOOOxx +6445 1171 1 1 5 5 45 445 445 1445 6445 90 91 XNAAAA BTBAAA VVVVxx +4789 1172 1 1 9 9 89 789 789 4789 4789 178 179 FCAAAA CTBAAA AAAAxx +1562 1173 0 2 2 2 62 562 1562 1562 1562 124 125 CIAAAA DTBAAA HHHHxx +7305 1174 1 1 5 5 5 305 1305 2305 7305 10 11 ZUAAAA ETBAAA OOOOxx +6344 1175 0 0 4 4 44 344 344 1344 6344 88 89 AKAAAA FTBAAA VVVVxx +5130 1176 0 2 0 10 30 130 1130 130 5130 60 61 IPAAAA GTBAAA AAAAxx +3284 1177 0 0 4 4 84 284 1284 3284 3284 168 169 IWAAAA HTBAAA HHHHxx +6346 1178 0 2 6 6 46 346 346 1346 6346 92 93 CKAAAA ITBAAA OOOOxx +1061 1179 1 1 1 1 61 61 1061 1061 1061 122 123 VOAAAA JTBAAA VVVVxx +872 1180 0 0 2 12 72 872 872 872 872 144 145 OHAAAA KTBAAA AAAAxx +123 1181 1 3 3 3 23 123 123 123 123 46 47 TEAAAA LTBAAA HHHHxx +7903 1182 1 3 3 3 3 903 1903 2903 7903 6 7 ZRAAAA MTBAAA OOOOxx +560 1183 0 0 0 0 60 560 560 560 560 120 121 OVAAAA NTBAAA VVVVxx +4446 1184 0 2 6 6 46 446 446 4446 4446 92 93 APAAAA OTBAAA AAAAxx +3909 1185 1 1 9 9 9 909 1909 3909 3909 18 19 JUAAAA PTBAAA HHHHxx +669 1186 1 1 9 9 69 669 669 669 669 138 139 TZAAAA QTBAAA OOOOxx +7843 1187 1 3 3 3 43 843 1843 2843 7843 86 87 RPAAAA RTBAAA VVVVxx +2546 1188 0 2 6 6 46 546 546 2546 2546 92 93 YTAAAA STBAAA AAAAxx +6757 1189 1 1 7 17 57 757 757 1757 6757 114 115 XZAAAA TTBAAA HHHHxx +466 1190 0 2 6 6 66 466 466 466 466 132 133 YRAAAA UTBAAA OOOOxx +5556 1191 0 0 6 16 56 556 1556 556 5556 112 113 SFAAAA VTBAAA VVVVxx +7196 1192 0 0 6 16 96 196 1196 2196 7196 192 193 UQAAAA WTBAAA AAAAxx +2947 1193 1 3 7 7 47 947 947 2947 2947 94 95 JJAAAA XTBAAA HHHHxx +6493 1194 1 1 3 13 93 493 493 1493 6493 186 187 TPAAAA YTBAAA OOOOxx +7203 1195 1 3 3 3 3 203 1203 2203 7203 6 7 BRAAAA ZTBAAA VVVVxx +3716 1196 0 0 6 16 16 716 1716 3716 3716 32 33 YMAAAA AUBAAA AAAAxx +8058 1197 0 2 8 18 58 58 58 3058 8058 116 117 YXAAAA BUBAAA HHHHxx +433 1198 1 1 3 13 33 433 433 433 433 66 67 RQAAAA CUBAAA OOOOxx +7649 1199 1 1 9 9 49 649 1649 2649 7649 98 99 FIAAAA DUBAAA VVVVxx +6966 1200 0 2 6 6 66 966 966 1966 6966 132 133 YHAAAA EUBAAA AAAAxx +553 1201 1 1 3 13 53 553 553 553 553 106 107 HVAAAA FUBAAA HHHHxx +3677 1202 1 1 7 17 77 677 1677 3677 3677 154 155 LLAAAA GUBAAA OOOOxx +2344 1203 0 0 4 4 44 344 344 2344 2344 88 89 EMAAAA HUBAAA VVVVxx +7439 1204 1 3 9 19 39 439 1439 2439 7439 78 79 DAAAAA IUBAAA AAAAxx +3910 1205 0 2 0 10 10 910 1910 3910 3910 20 21 KUAAAA JUBAAA HHHHxx +3638 1206 0 2 8 18 38 638 1638 3638 3638 76 77 YJAAAA KUBAAA OOOOxx +6637 1207 1 1 7 17 37 637 637 1637 6637 74 75 HVAAAA LUBAAA VVVVxx +4438 1208 0 2 8 18 38 438 438 4438 4438 76 77 SOAAAA MUBAAA AAAAxx +171 1209 1 3 1 11 71 171 171 171 171 142 143 PGAAAA NUBAAA HHHHxx +310 1210 0 2 0 10 10 310 310 310 310 20 21 YLAAAA OUBAAA OOOOxx +2714 1211 0 2 4 14 
14 714 714 2714 2714 28 29 KAAAAA PUBAAA VVVVxx +5199 1212 1 3 9 19 99 199 1199 199 5199 198 199 ZRAAAA QUBAAA AAAAxx +8005 1213 1 1 5 5 5 5 5 3005 8005 10 11 XVAAAA RUBAAA HHHHxx +3188 1214 0 0 8 8 88 188 1188 3188 3188 176 177 QSAAAA SUBAAA OOOOxx +1518 1215 0 2 8 18 18 518 1518 1518 1518 36 37 KGAAAA TUBAAA VVVVxx +6760 1216 0 0 0 0 60 760 760 1760 6760 120 121 AAAAAA UUBAAA AAAAxx +9373 1217 1 1 3 13 73 373 1373 4373 9373 146 147 NWAAAA VUBAAA HHHHxx +1938 1218 0 2 8 18 38 938 1938 1938 1938 76 77 OWAAAA WUBAAA OOOOxx +2865 1219 1 1 5 5 65 865 865 2865 2865 130 131 FGAAAA XUBAAA VVVVxx +3203 1220 1 3 3 3 3 203 1203 3203 3203 6 7 FTAAAA YUBAAA AAAAxx +6025 1221 1 1 5 5 25 25 25 1025 6025 50 51 TXAAAA ZUBAAA HHHHxx +8684 1222 0 0 4 4 84 684 684 3684 8684 168 169 AWAAAA AVBAAA OOOOxx +7732 1223 0 0 2 12 32 732 1732 2732 7732 64 65 KLAAAA BVBAAA VVVVxx +3218 1224 0 2 8 18 18 218 1218 3218 3218 36 37 UTAAAA CVBAAA AAAAxx +525 1225 1 1 5 5 25 525 525 525 525 50 51 FUAAAA DVBAAA HHHHxx +601 1226 1 1 1 1 1 601 601 601 601 2 3 DXAAAA EVBAAA OOOOxx +6091 1227 1 3 1 11 91 91 91 1091 6091 182 183 HAAAAA FVBAAA VVVVxx +4498 1228 0 2 8 18 98 498 498 4498 4498 196 197 ARAAAA GVBAAA AAAAxx +8192 1229 0 0 2 12 92 192 192 3192 8192 184 185 CDAAAA HVBAAA HHHHxx +8006 1230 0 2 6 6 6 6 6 3006 8006 12 13 YVAAAA IVBAAA OOOOxx +6157 1231 1 1 7 17 57 157 157 1157 6157 114 115 VCAAAA JVBAAA VVVVxx +312 1232 0 0 2 12 12 312 312 312 312 24 25 AMAAAA KVBAAA AAAAxx +8652 1233 0 0 2 12 52 652 652 3652 8652 104 105 UUAAAA LVBAAA HHHHxx +2787 1234 1 3 7 7 87 787 787 2787 2787 174 175 FDAAAA MVBAAA OOOOxx +1782 1235 0 2 2 2 82 782 1782 1782 1782 164 165 OQAAAA NVBAAA VVVVxx +23 1236 1 3 3 3 23 23 23 23 23 46 47 XAAAAA OVBAAA AAAAxx +1206 1237 0 2 6 6 6 206 1206 1206 1206 12 13 KUAAAA PVBAAA HHHHxx +1076 1238 0 0 6 16 76 76 1076 1076 1076 152 153 KPAAAA QVBAAA OOOOxx +5379 1239 1 3 9 19 79 379 1379 379 5379 158 159 XYAAAA RVBAAA VVVVxx +2047 1240 1 3 7 7 47 47 47 2047 2047 94 95 TAAAAA SVBAAA AAAAxx +6262 1241 0 2 2 2 62 262 262 1262 6262 124 125 WGAAAA TVBAAA HHHHxx +1840 1242 0 0 0 0 40 840 1840 1840 1840 80 81 USAAAA UVBAAA OOOOxx +2106 1243 0 2 6 6 6 106 106 2106 2106 12 13 ADAAAA VVBAAA VVVVxx +1307 1244 1 3 7 7 7 307 1307 1307 1307 14 15 HYAAAA WVBAAA AAAAxx +735 1245 1 3 5 15 35 735 735 735 735 70 71 HCAAAA XVBAAA HHHHxx +3657 1246 1 1 7 17 57 657 1657 3657 3657 114 115 RKAAAA YVBAAA OOOOxx +3006 1247 0 2 6 6 6 6 1006 3006 3006 12 13 QLAAAA ZVBAAA VVVVxx +1538 1248 0 2 8 18 38 538 1538 1538 1538 76 77 EHAAAA AWBAAA AAAAxx +6098 1249 0 2 8 18 98 98 98 1098 6098 196 197 OAAAAA BWBAAA HHHHxx +5267 1250 1 3 7 7 67 267 1267 267 5267 134 135 PUAAAA CWBAAA OOOOxx +9757 1251 1 1 7 17 57 757 1757 4757 9757 114 115 HLAAAA DWBAAA VVVVxx +1236 1252 0 0 6 16 36 236 1236 1236 1236 72 73 OVAAAA EWBAAA AAAAxx +83 1253 1 3 3 3 83 83 83 83 83 166 167 FDAAAA FWBAAA HHHHxx +9227 1254 1 3 7 7 27 227 1227 4227 9227 54 55 XQAAAA GWBAAA OOOOxx +8772 1255 0 0 2 12 72 772 772 3772 8772 144 145 KZAAAA HWBAAA VVVVxx +8822 1256 0 2 2 2 22 822 822 3822 8822 44 45 IBAAAA IWBAAA AAAAxx +7167 1257 1 3 7 7 67 167 1167 2167 7167 134 135 RPAAAA JWBAAA HHHHxx +6909 1258 1 1 9 9 9 909 909 1909 6909 18 19 TFAAAA KWBAAA OOOOxx +1439 1259 1 3 9 19 39 439 1439 1439 1439 78 79 JDAAAA LWBAAA VVVVxx +2370 1260 0 2 0 10 70 370 370 2370 2370 140 141 ENAAAA MWBAAA AAAAxx +4577 1261 1 1 7 17 77 577 577 4577 4577 154 155 BUAAAA NWBAAA HHHHxx +2575 1262 1 3 5 15 75 575 575 2575 2575 150 151 BVAAAA OWBAAA OOOOxx +2795 1263 1 3 5 15 95 795 795 2795 2795 190 191 
NDAAAA PWBAAA VVVVxx +5520 1264 0 0 0 0 20 520 1520 520 5520 40 41 IEAAAA QWBAAA AAAAxx +382 1265 0 2 2 2 82 382 382 382 382 164 165 SOAAAA RWBAAA HHHHxx +6335 1266 1 3 5 15 35 335 335 1335 6335 70 71 RJAAAA SWBAAA OOOOxx +8430 1267 0 2 0 10 30 430 430 3430 8430 60 61 GMAAAA TWBAAA VVVVxx +4131 1268 1 3 1 11 31 131 131 4131 4131 62 63 XCAAAA UWBAAA AAAAxx +9332 1269 0 0 2 12 32 332 1332 4332 9332 64 65 YUAAAA VWBAAA HHHHxx +293 1270 1 1 3 13 93 293 293 293 293 186 187 HLAAAA WWBAAA OOOOxx +2276 1271 0 0 6 16 76 276 276 2276 2276 152 153 OJAAAA XWBAAA VVVVxx +5687 1272 1 3 7 7 87 687 1687 687 5687 174 175 TKAAAA YWBAAA AAAAxx +5862 1273 0 2 2 2 62 862 1862 862 5862 124 125 MRAAAA ZWBAAA HHHHxx +5073 1274 1 1 3 13 73 73 1073 73 5073 146 147 DNAAAA AXBAAA OOOOxx +4170 1275 0 2 0 10 70 170 170 4170 4170 140 141 KEAAAA BXBAAA VVVVxx +5039 1276 1 3 9 19 39 39 1039 39 5039 78 79 VLAAAA CXBAAA AAAAxx +3294 1277 0 2 4 14 94 294 1294 3294 3294 188 189 SWAAAA DXBAAA HHHHxx +6015 1278 1 3 5 15 15 15 15 1015 6015 30 31 JXAAAA EXBAAA OOOOxx +9015 1279 1 3 5 15 15 15 1015 4015 9015 30 31 TIAAAA FXBAAA VVVVxx +9785 1280 1 1 5 5 85 785 1785 4785 9785 170 171 JMAAAA GXBAAA AAAAxx +4312 1281 0 0 2 12 12 312 312 4312 4312 24 25 WJAAAA HXBAAA HHHHxx +6343 1282 1 3 3 3 43 343 343 1343 6343 86 87 ZJAAAA IXBAAA OOOOxx +2161 1283 1 1 1 1 61 161 161 2161 2161 122 123 DFAAAA JXBAAA VVVVxx +4490 1284 0 2 0 10 90 490 490 4490 4490 180 181 SQAAAA KXBAAA AAAAxx +4454 1285 0 2 4 14 54 454 454 4454 4454 108 109 IPAAAA LXBAAA HHHHxx +7647 1286 1 3 7 7 47 647 1647 2647 7647 94 95 DIAAAA MXBAAA OOOOxx +1028 1287 0 0 8 8 28 28 1028 1028 1028 56 57 ONAAAA NXBAAA VVVVxx +2965 1288 1 1 5 5 65 965 965 2965 2965 130 131 BKAAAA OXBAAA AAAAxx +9900 1289 0 0 0 0 0 900 1900 4900 9900 0 1 UQAAAA PXBAAA HHHHxx +5509 1290 1 1 9 9 9 509 1509 509 5509 18 19 XDAAAA QXBAAA OOOOxx +7751 1291 1 3 1 11 51 751 1751 2751 7751 102 103 DMAAAA RXBAAA VVVVxx +9594 1292 0 2 4 14 94 594 1594 4594 9594 188 189 AFAAAA SXBAAA AAAAxx +7632 1293 0 0 2 12 32 632 1632 2632 7632 64 65 OHAAAA TXBAAA HHHHxx +6528 1294 0 0 8 8 28 528 528 1528 6528 56 57 CRAAAA UXBAAA OOOOxx +1041 1295 1 1 1 1 41 41 1041 1041 1041 82 83 BOAAAA VXBAAA VVVVxx +1534 1296 0 2 4 14 34 534 1534 1534 1534 68 69 AHAAAA WXBAAA AAAAxx +4229 1297 1 1 9 9 29 229 229 4229 4229 58 59 RGAAAA XXBAAA HHHHxx +84 1298 0 0 4 4 84 84 84 84 84 168 169 GDAAAA YXBAAA OOOOxx +2189 1299 1 1 9 9 89 189 189 2189 2189 178 179 FGAAAA ZXBAAA VVVVxx +7566 1300 0 2 6 6 66 566 1566 2566 7566 132 133 AFAAAA AYBAAA AAAAxx +707 1301 1 3 7 7 7 707 707 707 707 14 15 FBAAAA BYBAAA HHHHxx +581 1302 1 1 1 1 81 581 581 581 581 162 163 JWAAAA CYBAAA OOOOxx +6753 1303 1 1 3 13 53 753 753 1753 6753 106 107 TZAAAA DYBAAA VVVVxx +8604 1304 0 0 4 4 4 604 604 3604 8604 8 9 YSAAAA EYBAAA AAAAxx +373 1305 1 1 3 13 73 373 373 373 373 146 147 JOAAAA FYBAAA HHHHxx +9635 1306 1 3 5 15 35 635 1635 4635 9635 70 71 PGAAAA GYBAAA OOOOxx +9277 1307 1 1 7 17 77 277 1277 4277 9277 154 155 VSAAAA HYBAAA VVVVxx +7117 1308 1 1 7 17 17 117 1117 2117 7117 34 35 TNAAAA IYBAAA AAAAxx +8564 1309 0 0 4 4 64 564 564 3564 8564 128 129 KRAAAA JYBAAA HHHHxx +1697 1310 1 1 7 17 97 697 1697 1697 1697 194 195 HNAAAA KYBAAA OOOOxx +7840 1311 0 0 0 0 40 840 1840 2840 7840 80 81 OPAAAA LYBAAA VVVVxx +3646 1312 0 2 6 6 46 646 1646 3646 3646 92 93 GKAAAA MYBAAA AAAAxx +368 1313 0 0 8 8 68 368 368 368 368 136 137 EOAAAA NYBAAA HHHHxx +4797 1314 1 1 7 17 97 797 797 4797 4797 194 195 NCAAAA OYBAAA OOOOxx +5300 1315 0 0 0 0 0 300 1300 300 5300 0 1 WVAAAA PYBAAA 
VVVVxx +7664 1316 0 0 4 4 64 664 1664 2664 7664 128 129 UIAAAA QYBAAA AAAAxx +1466 1317 0 2 6 6 66 466 1466 1466 1466 132 133 KEAAAA RYBAAA HHHHxx +2477 1318 1 1 7 17 77 477 477 2477 2477 154 155 HRAAAA SYBAAA OOOOxx +2036 1319 0 0 6 16 36 36 36 2036 2036 72 73 IAAAAA TYBAAA VVVVxx +3624 1320 0 0 4 4 24 624 1624 3624 3624 48 49 KJAAAA UYBAAA AAAAxx +5099 1321 1 3 9 19 99 99 1099 99 5099 198 199 DOAAAA VYBAAA HHHHxx +1308 1322 0 0 8 8 8 308 1308 1308 1308 16 17 IYAAAA WYBAAA OOOOxx +3704 1323 0 0 4 4 4 704 1704 3704 3704 8 9 MMAAAA XYBAAA VVVVxx +2451 1324 1 3 1 11 51 451 451 2451 2451 102 103 HQAAAA YYBAAA AAAAxx +4898 1325 0 2 8 18 98 898 898 4898 4898 196 197 KGAAAA ZYBAAA HHHHxx +4959 1326 1 3 9 19 59 959 959 4959 4959 118 119 TIAAAA AZBAAA OOOOxx +5942 1327 0 2 2 2 42 942 1942 942 5942 84 85 OUAAAA BZBAAA VVVVxx +2425 1328 1 1 5 5 25 425 425 2425 2425 50 51 HPAAAA CZBAAA AAAAxx +7760 1329 0 0 0 0 60 760 1760 2760 7760 120 121 MMAAAA DZBAAA HHHHxx +6294 1330 0 2 4 14 94 294 294 1294 6294 188 189 CIAAAA EZBAAA OOOOxx +6785 1331 1 1 5 5 85 785 785 1785 6785 170 171 ZAAAAA FZBAAA VVVVxx +3542 1332 0 2 2 2 42 542 1542 3542 3542 84 85 GGAAAA GZBAAA AAAAxx +1809 1333 1 1 9 9 9 809 1809 1809 1809 18 19 PRAAAA HZBAAA HHHHxx +130 1334 0 2 0 10 30 130 130 130 130 60 61 AFAAAA IZBAAA OOOOxx +8672 1335 0 0 2 12 72 672 672 3672 8672 144 145 OVAAAA JZBAAA VVVVxx +2125 1336 1 1 5 5 25 125 125 2125 2125 50 51 TDAAAA KZBAAA AAAAxx +7683 1337 1 3 3 3 83 683 1683 2683 7683 166 167 NJAAAA LZBAAA HHHHxx +7842 1338 0 2 2 2 42 842 1842 2842 7842 84 85 QPAAAA MZBAAA OOOOxx +9584 1339 0 0 4 4 84 584 1584 4584 9584 168 169 QEAAAA NZBAAA VVVVxx +7963 1340 1 3 3 3 63 963 1963 2963 7963 126 127 HUAAAA OZBAAA AAAAxx +8581 1341 1 1 1 1 81 581 581 3581 8581 162 163 BSAAAA PZBAAA HHHHxx +2135 1342 1 3 5 15 35 135 135 2135 2135 70 71 DEAAAA QZBAAA OOOOxx +7352 1343 0 0 2 12 52 352 1352 2352 7352 104 105 UWAAAA RZBAAA VVVVxx +5789 1344 1 1 9 9 89 789 1789 789 5789 178 179 ROAAAA SZBAAA AAAAxx +8490 1345 0 2 0 10 90 490 490 3490 8490 180 181 OOAAAA TZBAAA HHHHxx +2145 1346 1 1 5 5 45 145 145 2145 2145 90 91 NEAAAA UZBAAA OOOOxx +7021 1347 1 1 1 1 21 21 1021 2021 7021 42 43 BKAAAA VZBAAA VVVVxx +3736 1348 0 0 6 16 36 736 1736 3736 3736 72 73 SNAAAA WZBAAA AAAAxx +7396 1349 0 0 6 16 96 396 1396 2396 7396 192 193 MYAAAA XZBAAA HHHHxx +6334 1350 0 2 4 14 34 334 334 1334 6334 68 69 QJAAAA YZBAAA OOOOxx +5461 1351 1 1 1 1 61 461 1461 461 5461 122 123 BCAAAA ZZBAAA VVVVxx +5337 1352 1 1 7 17 37 337 1337 337 5337 74 75 HXAAAA AACAAA AAAAxx +7440 1353 0 0 0 0 40 440 1440 2440 7440 80 81 EAAAAA BACAAA HHHHxx +6879 1354 1 3 9 19 79 879 879 1879 6879 158 159 PEAAAA CACAAA OOOOxx +2432 1355 0 0 2 12 32 432 432 2432 2432 64 65 OPAAAA DACAAA VVVVxx +8529 1356 1 1 9 9 29 529 529 3529 8529 58 59 BQAAAA EACAAA AAAAxx +7859 1357 1 3 9 19 59 859 1859 2859 7859 118 119 HQAAAA FACAAA HHHHxx +15 1358 1 3 5 15 15 15 15 15 15 30 31 PAAAAA GACAAA OOOOxx +7475 1359 1 3 5 15 75 475 1475 2475 7475 150 151 NBAAAA HACAAA VVVVxx +717 1360 1 1 7 17 17 717 717 717 717 34 35 PBAAAA IACAAA AAAAxx +250 1361 0 2 0 10 50 250 250 250 250 100 101 QJAAAA JACAAA HHHHxx +4700 1362 0 0 0 0 0 700 700 4700 4700 0 1 UYAAAA KACAAA OOOOxx +7510 1363 0 2 0 10 10 510 1510 2510 7510 20 21 WCAAAA LACAAA VVVVxx +4562 1364 0 2 2 2 62 562 562 4562 4562 124 125 MTAAAA MACAAA AAAAxx +8075 1365 1 3 5 15 75 75 75 3075 8075 150 151 PYAAAA NACAAA HHHHxx +871 1366 1 3 1 11 71 871 871 871 871 142 143 NHAAAA OACAAA OOOOxx +7161 1367 1 1 1 1 61 161 1161 2161 7161 122 123 LPAAAA PACAAA 
VVVVxx +9109 1368 1 1 9 9 9 109 1109 4109 9109 18 19 JMAAAA QACAAA AAAAxx +8675 1369 1 3 5 15 75 675 675 3675 8675 150 151 RVAAAA RACAAA HHHHxx +1025 1370 1 1 5 5 25 25 1025 1025 1025 50 51 LNAAAA SACAAA OOOOxx +4065 1371 1 1 5 5 65 65 65 4065 4065 130 131 JAAAAA TACAAA VVVVxx +3511 1372 1 3 1 11 11 511 1511 3511 3511 22 23 BFAAAA UACAAA AAAAxx +9840 1373 0 0 0 0 40 840 1840 4840 9840 80 81 MOAAAA VACAAA HHHHxx +7495 1374 1 3 5 15 95 495 1495 2495 7495 190 191 HCAAAA WACAAA OOOOxx +55 1375 1 3 5 15 55 55 55 55 55 110 111 DCAAAA XACAAA VVVVxx +6151 1376 1 3 1 11 51 151 151 1151 6151 102 103 PCAAAA YACAAA AAAAxx +2512 1377 0 0 2 12 12 512 512 2512 2512 24 25 QSAAAA ZACAAA HHHHxx +5881 1378 1 1 1 1 81 881 1881 881 5881 162 163 FSAAAA ABCAAA OOOOxx +1442 1379 0 2 2 2 42 442 1442 1442 1442 84 85 MDAAAA BBCAAA VVVVxx +1270 1380 0 2 0 10 70 270 1270 1270 1270 140 141 WWAAAA CBCAAA AAAAxx +959 1381 1 3 9 19 59 959 959 959 959 118 119 XKAAAA DBCAAA HHHHxx +8251 1382 1 3 1 11 51 251 251 3251 8251 102 103 JFAAAA EBCAAA OOOOxx +3051 1383 1 3 1 11 51 51 1051 3051 3051 102 103 JNAAAA FBCAAA VVVVxx +5052 1384 0 0 2 12 52 52 1052 52 5052 104 105 IMAAAA GBCAAA AAAAxx +1863 1385 1 3 3 3 63 863 1863 1863 1863 126 127 RTAAAA HBCAAA HHHHxx +344 1386 0 0 4 4 44 344 344 344 344 88 89 GNAAAA IBCAAA OOOOxx +3590 1387 0 2 0 10 90 590 1590 3590 3590 180 181 CIAAAA JBCAAA VVVVxx +4223 1388 1 3 3 3 23 223 223 4223 4223 46 47 LGAAAA KBCAAA AAAAxx +2284 1389 0 0 4 4 84 284 284 2284 2284 168 169 WJAAAA LBCAAA HHHHxx +9425 1390 1 1 5 5 25 425 1425 4425 9425 50 51 NYAAAA MBCAAA OOOOxx +6221 1391 1 1 1 1 21 221 221 1221 6221 42 43 HFAAAA NBCAAA VVVVxx +195 1392 1 3 5 15 95 195 195 195 195 190 191 NHAAAA OBCAAA AAAAxx +1517 1393 1 1 7 17 17 517 1517 1517 1517 34 35 JGAAAA PBCAAA HHHHxx +3791 1394 1 3 1 11 91 791 1791 3791 3791 182 183 VPAAAA QBCAAA OOOOxx +572 1395 0 0 2 12 72 572 572 572 572 144 145 AWAAAA RBCAAA VVVVxx +46 1396 0 2 6 6 46 46 46 46 46 92 93 UBAAAA SBCAAA AAAAxx +9451 1397 1 3 1 11 51 451 1451 4451 9451 102 103 NZAAAA TBCAAA HHHHxx +3359 1398 1 3 9 19 59 359 1359 3359 3359 118 119 FZAAAA UBCAAA OOOOxx +8867 1399 1 3 7 7 67 867 867 3867 8867 134 135 BDAAAA VBCAAA VVVVxx +674 1400 0 2 4 14 74 674 674 674 674 148 149 YZAAAA WBCAAA AAAAxx +2674 1401 0 2 4 14 74 674 674 2674 2674 148 149 WYAAAA XBCAAA HHHHxx +6523 1402 1 3 3 3 23 523 523 1523 6523 46 47 XQAAAA YBCAAA OOOOxx +6210 1403 0 2 0 10 10 210 210 1210 6210 20 21 WEAAAA ZBCAAA VVVVxx +7564 1404 0 0 4 4 64 564 1564 2564 7564 128 129 YEAAAA ACCAAA AAAAxx +4776 1405 0 0 6 16 76 776 776 4776 4776 152 153 SBAAAA BCCAAA HHHHxx +2993 1406 1 1 3 13 93 993 993 2993 2993 186 187 DLAAAA CCCAAA OOOOxx +2969 1407 1 1 9 9 69 969 969 2969 2969 138 139 FKAAAA DCCAAA VVVVxx +1762 1408 0 2 2 2 62 762 1762 1762 1762 124 125 UPAAAA ECCAAA AAAAxx +685 1409 1 1 5 5 85 685 685 685 685 170 171 JAAAAA FCCAAA HHHHxx +5312 1410 0 0 2 12 12 312 1312 312 5312 24 25 IWAAAA GCCAAA OOOOxx +3264 1411 0 0 4 4 64 264 1264 3264 3264 128 129 OVAAAA HCCAAA VVVVxx +7008 1412 0 0 8 8 8 8 1008 2008 7008 16 17 OJAAAA ICCAAA AAAAxx +5167 1413 1 3 7 7 67 167 1167 167 5167 134 135 TQAAAA JCCAAA HHHHxx +3060 1414 0 0 0 0 60 60 1060 3060 3060 120 121 SNAAAA KCCAAA OOOOxx +1752 1415 0 0 2 12 52 752 1752 1752 1752 104 105 KPAAAA LCCAAA VVVVxx +1016 1416 0 0 6 16 16 16 1016 1016 1016 32 33 CNAAAA MCCAAA AAAAxx +7365 1417 1 1 5 5 65 365 1365 2365 7365 130 131 HXAAAA NCCAAA HHHHxx +4358 1418 0 2 8 18 58 358 358 4358 4358 116 117 QLAAAA OCCAAA OOOOxx +2819 1419 1 3 9 19 19 819 819 2819 2819 38 39 LEAAAA 
PCCAAA VVVVxx +6727 1420 1 3 7 7 27 727 727 1727 6727 54 55 TYAAAA QCCAAA AAAAxx +1459 1421 1 3 9 19 59 459 1459 1459 1459 118 119 DEAAAA RCCAAA HHHHxx +1708 1422 0 0 8 8 8 708 1708 1708 1708 16 17 SNAAAA SCCAAA OOOOxx +471 1423 1 3 1 11 71 471 471 471 471 142 143 DSAAAA TCCAAA VVVVxx +387 1424 1 3 7 7 87 387 387 387 387 174 175 XOAAAA UCCAAA AAAAxx +1166 1425 0 2 6 6 66 166 1166 1166 1166 132 133 WSAAAA VCCAAA HHHHxx +2400 1426 0 0 0 0 0 400 400 2400 2400 0 1 IOAAAA WCCAAA OOOOxx +3584 1427 0 0 4 4 84 584 1584 3584 3584 168 169 WHAAAA XCCAAA VVVVxx +6423 1428 1 3 3 3 23 423 423 1423 6423 46 47 BNAAAA YCCAAA AAAAxx +9520 1429 0 0 0 0 20 520 1520 4520 9520 40 41 ECAAAA ZCCAAA HHHHxx +8080 1430 0 0 0 0 80 80 80 3080 8080 160 161 UYAAAA ADCAAA OOOOxx +5709 1431 1 1 9 9 9 709 1709 709 5709 18 19 PLAAAA BDCAAA VVVVxx +1131 1432 1 3 1 11 31 131 1131 1131 1131 62 63 NRAAAA CDCAAA AAAAxx +8562 1433 0 2 2 2 62 562 562 3562 8562 124 125 IRAAAA DDCAAA HHHHxx +5766 1434 0 2 6 6 66 766 1766 766 5766 132 133 UNAAAA EDCAAA OOOOxx +245 1435 1 1 5 5 45 245 245 245 245 90 91 LJAAAA FDCAAA VVVVxx +9869 1436 1 1 9 9 69 869 1869 4869 9869 138 139 PPAAAA GDCAAA AAAAxx +3533 1437 1 1 3 13 33 533 1533 3533 3533 66 67 XFAAAA HDCAAA HHHHxx +5109 1438 1 1 9 9 9 109 1109 109 5109 18 19 NOAAAA IDCAAA OOOOxx +977 1439 1 1 7 17 77 977 977 977 977 154 155 PLAAAA JDCAAA VVVVxx +1651 1440 1 3 1 11 51 651 1651 1651 1651 102 103 NLAAAA KDCAAA AAAAxx +1357 1441 1 1 7 17 57 357 1357 1357 1357 114 115 FAAAAA LDCAAA HHHHxx +9087 1442 1 3 7 7 87 87 1087 4087 9087 174 175 NLAAAA MDCAAA OOOOxx +3399 1443 1 3 9 19 99 399 1399 3399 3399 198 199 TAAAAA NDCAAA VVVVxx +7543 1444 1 3 3 3 43 543 1543 2543 7543 86 87 DEAAAA ODCAAA AAAAxx +2469 1445 1 1 9 9 69 469 469 2469 2469 138 139 ZQAAAA PDCAAA HHHHxx +8305 1446 1 1 5 5 5 305 305 3305 8305 10 11 LHAAAA QDCAAA OOOOxx +3265 1447 1 1 5 5 65 265 1265 3265 3265 130 131 PVAAAA RDCAAA VVVVxx +9977 1448 1 1 7 17 77 977 1977 4977 9977 154 155 TTAAAA SDCAAA AAAAxx +3961 1449 1 1 1 1 61 961 1961 3961 3961 122 123 JWAAAA TDCAAA HHHHxx +4952 1450 0 0 2 12 52 952 952 4952 4952 104 105 MIAAAA UDCAAA OOOOxx +5173 1451 1 1 3 13 73 173 1173 173 5173 146 147 ZQAAAA VDCAAA VVVVxx +860 1452 0 0 0 0 60 860 860 860 860 120 121 CHAAAA WDCAAA AAAAxx +4523 1453 1 3 3 3 23 523 523 4523 4523 46 47 ZRAAAA XDCAAA HHHHxx +2361 1454 1 1 1 1 61 361 361 2361 2361 122 123 VMAAAA YDCAAA OOOOxx +7877 1455 1 1 7 17 77 877 1877 2877 7877 154 155 ZQAAAA ZDCAAA VVVVxx +3422 1456 0 2 2 2 22 422 1422 3422 3422 44 45 QBAAAA AECAAA AAAAxx +5781 1457 1 1 1 1 81 781 1781 781 5781 162 163 JOAAAA BECAAA HHHHxx +4752 1458 0 0 2 12 52 752 752 4752 4752 104 105 UAAAAA CECAAA OOOOxx +1786 1459 0 2 6 6 86 786 1786 1786 1786 172 173 SQAAAA DECAAA VVVVxx +1892 1460 0 0 2 12 92 892 1892 1892 1892 184 185 UUAAAA EECAAA AAAAxx +6389 1461 1 1 9 9 89 389 389 1389 6389 178 179 TLAAAA FECAAA HHHHxx +8644 1462 0 0 4 4 44 644 644 3644 8644 88 89 MUAAAA GECAAA OOOOxx +9056 1463 0 0 6 16 56 56 1056 4056 9056 112 113 IKAAAA HECAAA VVVVxx +1423 1464 1 3 3 3 23 423 1423 1423 1423 46 47 TCAAAA IECAAA AAAAxx +4901 1465 1 1 1 1 1 901 901 4901 4901 2 3 NGAAAA JECAAA HHHHxx +3859 1466 1 3 9 19 59 859 1859 3859 3859 118 119 LSAAAA KECAAA OOOOxx +2324 1467 0 0 4 4 24 324 324 2324 2324 48 49 KLAAAA LECAAA VVVVxx +8101 1468 1 1 1 1 1 101 101 3101 8101 2 3 PZAAAA MECAAA AAAAxx +8016 1469 0 0 6 16 16 16 16 3016 8016 32 33 IWAAAA NECAAA HHHHxx +5826 1470 0 2 6 6 26 826 1826 826 5826 52 53 CQAAAA OECAAA OOOOxx +8266 1471 0 2 6 6 66 266 266 3266 8266 132 133 YFAAAA 
PECAAA VVVVxx +7558 1472 0 2 8 18 58 558 1558 2558 7558 116 117 SEAAAA QECAAA AAAAxx +6976 1473 0 0 6 16 76 976 976 1976 6976 152 153 IIAAAA RECAAA HHHHxx +222 1474 0 2 2 2 22 222 222 222 222 44 45 OIAAAA SECAAA OOOOxx +1624 1475 0 0 4 4 24 624 1624 1624 1624 48 49 MKAAAA TECAAA VVVVxx +1250 1476 0 2 0 10 50 250 1250 1250 1250 100 101 CWAAAA UECAAA AAAAxx +1621 1477 1 1 1 1 21 621 1621 1621 1621 42 43 JKAAAA VECAAA HHHHxx +2350 1478 0 2 0 10 50 350 350 2350 2350 100 101 KMAAAA WECAAA OOOOxx +5239 1479 1 3 9 19 39 239 1239 239 5239 78 79 NTAAAA XECAAA VVVVxx +6681 1480 1 1 1 1 81 681 681 1681 6681 162 163 ZWAAAA YECAAA AAAAxx +4983 1481 1 3 3 3 83 983 983 4983 4983 166 167 RJAAAA ZECAAA HHHHxx +7149 1482 1 1 9 9 49 149 1149 2149 7149 98 99 ZOAAAA AFCAAA OOOOxx +3502 1483 0 2 2 2 2 502 1502 3502 3502 4 5 SEAAAA BFCAAA VVVVxx +3133 1484 1 1 3 13 33 133 1133 3133 3133 66 67 NQAAAA CFCAAA AAAAxx +8342 1485 0 2 2 2 42 342 342 3342 8342 84 85 WIAAAA DFCAAA HHHHxx +3041 1486 1 1 1 1 41 41 1041 3041 3041 82 83 ZMAAAA EFCAAA OOOOxx +5383 1487 1 3 3 3 83 383 1383 383 5383 166 167 BZAAAA FFCAAA VVVVxx +3916 1488 0 0 6 16 16 916 1916 3916 3916 32 33 QUAAAA GFCAAA AAAAxx +1438 1489 0 2 8 18 38 438 1438 1438 1438 76 77 IDAAAA HFCAAA HHHHxx +9408 1490 0 0 8 8 8 408 1408 4408 9408 16 17 WXAAAA IFCAAA OOOOxx +5783 1491 1 3 3 3 83 783 1783 783 5783 166 167 LOAAAA JFCAAA VVVVxx +683 1492 1 3 3 3 83 683 683 683 683 166 167 HAAAAA KFCAAA AAAAxx +9381 1493 1 1 1 1 81 381 1381 4381 9381 162 163 VWAAAA LFCAAA HHHHxx +5676 1494 0 0 6 16 76 676 1676 676 5676 152 153 IKAAAA MFCAAA OOOOxx +3224 1495 0 0 4 4 24 224 1224 3224 3224 48 49 AUAAAA NFCAAA VVVVxx +8332 1496 0 0 2 12 32 332 332 3332 8332 64 65 MIAAAA OFCAAA AAAAxx +3372 1497 0 0 2 12 72 372 1372 3372 3372 144 145 SZAAAA PFCAAA HHHHxx +7436 1498 0 0 6 16 36 436 1436 2436 7436 72 73 AAAAAA QFCAAA OOOOxx +5010 1499 0 2 0 10 10 10 1010 10 5010 20 21 SKAAAA RFCAAA VVVVxx +7256 1500 0 0 6 16 56 256 1256 2256 7256 112 113 CTAAAA SFCAAA AAAAxx +961 1501 1 1 1 1 61 961 961 961 961 122 123 ZKAAAA TFCAAA HHHHxx +4182 1502 0 2 2 2 82 182 182 4182 4182 164 165 WEAAAA UFCAAA OOOOxx +639 1503 1 3 9 19 39 639 639 639 639 78 79 PYAAAA VFCAAA VVVVxx +8836 1504 0 0 6 16 36 836 836 3836 8836 72 73 WBAAAA WFCAAA AAAAxx +8705 1505 1 1 5 5 5 705 705 3705 8705 10 11 VWAAAA XFCAAA HHHHxx +32 1506 0 0 2 12 32 32 32 32 32 64 65 GBAAAA YFCAAA OOOOxx +7913 1507 1 1 3 13 13 913 1913 2913 7913 26 27 JSAAAA ZFCAAA VVVVxx +229 1508 1 1 9 9 29 229 229 229 229 58 59 VIAAAA AGCAAA AAAAxx +2393 1509 1 1 3 13 93 393 393 2393 2393 186 187 BOAAAA BGCAAA HHHHxx +2815 1510 1 3 5 15 15 815 815 2815 2815 30 31 HEAAAA CGCAAA OOOOxx +4858 1511 0 2 8 18 58 858 858 4858 4858 116 117 WEAAAA DGCAAA VVVVxx +6283 1512 1 3 3 3 83 283 283 1283 6283 166 167 RHAAAA EGCAAA AAAAxx +4147 1513 1 3 7 7 47 147 147 4147 4147 94 95 NDAAAA FGCAAA HHHHxx +6801 1514 1 1 1 1 1 801 801 1801 6801 2 3 PBAAAA GGCAAA OOOOxx +1011 1515 1 3 1 11 11 11 1011 1011 1011 22 23 XMAAAA HGCAAA VVVVxx +2527 1516 1 3 7 7 27 527 527 2527 2527 54 55 FTAAAA IGCAAA AAAAxx +381 1517 1 1 1 1 81 381 381 381 381 162 163 ROAAAA JGCAAA HHHHxx +3366 1518 0 2 6 6 66 366 1366 3366 3366 132 133 MZAAAA KGCAAA OOOOxx +9636 1519 0 0 6 16 36 636 1636 4636 9636 72 73 QGAAAA LGCAAA VVVVxx +2239 1520 1 3 9 19 39 239 239 2239 2239 78 79 DIAAAA MGCAAA AAAAxx +5911 1521 1 3 1 11 11 911 1911 911 5911 22 23 JTAAAA NGCAAA HHHHxx +449 1522 1 1 9 9 49 449 449 449 449 98 99 HRAAAA OGCAAA OOOOxx +5118 1523 0 2 8 18 18 118 1118 118 5118 36 37 WOAAAA PGCAAA VVVVxx +7684 1524 
0 0 4 4 84 684 1684 2684 7684 168 169 OJAAAA QGCAAA AAAAxx +804 1525 0 0 4 4 4 804 804 804 804 8 9 YEAAAA RGCAAA HHHHxx +8378 1526 0 2 8 18 78 378 378 3378 8378 156 157 GKAAAA SGCAAA OOOOxx +9855 1527 1 3 5 15 55 855 1855 4855 9855 110 111 BPAAAA TGCAAA VVVVxx +1995 1528 1 3 5 15 95 995 1995 1995 1995 190 191 TYAAAA UGCAAA AAAAxx +1979 1529 1 3 9 19 79 979 1979 1979 1979 158 159 DYAAAA VGCAAA HHHHxx +4510 1530 0 2 0 10 10 510 510 4510 4510 20 21 MRAAAA WGCAAA OOOOxx +3792 1531 0 0 2 12 92 792 1792 3792 3792 184 185 WPAAAA XGCAAA VVVVxx +3541 1532 1 1 1 1 41 541 1541 3541 3541 82 83 FGAAAA YGCAAA AAAAxx +8847 1533 1 3 7 7 47 847 847 3847 8847 94 95 HCAAAA ZGCAAA HHHHxx +1336 1534 0 0 6 16 36 336 1336 1336 1336 72 73 KZAAAA AHCAAA OOOOxx +6780 1535 0 0 0 0 80 780 780 1780 6780 160 161 UAAAAA BHCAAA VVVVxx +8711 1536 1 3 1 11 11 711 711 3711 8711 22 23 BXAAAA CHCAAA AAAAxx +7839 1537 1 3 9 19 39 839 1839 2839 7839 78 79 NPAAAA DHCAAA HHHHxx +677 1538 1 1 7 17 77 677 677 677 677 154 155 BAAAAA EHCAAA OOOOxx +1574 1539 0 2 4 14 74 574 1574 1574 1574 148 149 OIAAAA FHCAAA VVVVxx +2905 1540 1 1 5 5 5 905 905 2905 2905 10 11 THAAAA GHCAAA AAAAxx +1879 1541 1 3 9 19 79 879 1879 1879 1879 158 159 HUAAAA HHCAAA HHHHxx +7820 1542 0 0 0 0 20 820 1820 2820 7820 40 41 UOAAAA IHCAAA OOOOxx +4308 1543 0 0 8 8 8 308 308 4308 4308 16 17 SJAAAA JHCAAA VVVVxx +4474 1544 0 2 4 14 74 474 474 4474 4474 148 149 CQAAAA KHCAAA AAAAxx +6985 1545 1 1 5 5 85 985 985 1985 6985 170 171 RIAAAA LHCAAA HHHHxx +6929 1546 1 1 9 9 29 929 929 1929 6929 58 59 NGAAAA MHCAAA OOOOxx +777 1547 1 1 7 17 77 777 777 777 777 154 155 XDAAAA NHCAAA VVVVxx +8271 1548 1 3 1 11 71 271 271 3271 8271 142 143 DGAAAA OHCAAA AAAAxx +2389 1549 1 1 9 9 89 389 389 2389 2389 178 179 XNAAAA PHCAAA HHHHxx +946 1550 0 2 6 6 46 946 946 946 946 92 93 KKAAAA QHCAAA OOOOxx +9682 1551 0 2 2 2 82 682 1682 4682 9682 164 165 KIAAAA RHCAAA VVVVxx +8722 1552 0 2 2 2 22 722 722 3722 8722 44 45 MXAAAA SHCAAA AAAAxx +470 1553 0 2 0 10 70 470 470 470 470 140 141 CSAAAA THCAAA HHHHxx +7425 1554 1 1 5 5 25 425 1425 2425 7425 50 51 PZAAAA UHCAAA OOOOxx +2372 1555 0 0 2 12 72 372 372 2372 2372 144 145 GNAAAA VHCAAA VVVVxx +508 1556 0 0 8 8 8 508 508 508 508 16 17 OTAAAA WHCAAA AAAAxx +163 1557 1 3 3 3 63 163 163 163 163 126 127 HGAAAA XHCAAA HHHHxx +6579 1558 1 3 9 19 79 579 579 1579 6579 158 159 BTAAAA YHCAAA OOOOxx +2355 1559 1 3 5 15 55 355 355 2355 2355 110 111 PMAAAA ZHCAAA VVVVxx +70 1560 0 2 0 10 70 70 70 70 70 140 141 SCAAAA AICAAA AAAAxx +651 1561 1 3 1 11 51 651 651 651 651 102 103 BZAAAA BICAAA HHHHxx +4436 1562 0 0 6 16 36 436 436 4436 4436 72 73 QOAAAA CICAAA OOOOxx +4240 1563 0 0 0 0 40 240 240 4240 4240 80 81 CHAAAA DICAAA VVVVxx +2722 1564 0 2 2 2 22 722 722 2722 2722 44 45 SAAAAA EICAAA AAAAxx +8937 1565 1 1 7 17 37 937 937 3937 8937 74 75 TFAAAA FICAAA HHHHxx +8364 1566 0 0 4 4 64 364 364 3364 8364 128 129 SJAAAA GICAAA OOOOxx +8317 1567 1 1 7 17 17 317 317 3317 8317 34 35 XHAAAA HICAAA VVVVxx +8872 1568 0 0 2 12 72 872 872 3872 8872 144 145 GDAAAA IICAAA AAAAxx +5512 1569 0 0 2 12 12 512 1512 512 5512 24 25 AEAAAA JICAAA HHHHxx +6651 1570 1 3 1 11 51 651 651 1651 6651 102 103 VVAAAA KICAAA OOOOxx +5976 1571 0 0 6 16 76 976 1976 976 5976 152 153 WVAAAA LICAAA VVVVxx +3301 1572 1 1 1 1 1 301 1301 3301 3301 2 3 ZWAAAA MICAAA AAAAxx +6784 1573 0 0 4 4 84 784 784 1784 6784 168 169 YAAAAA NICAAA HHHHxx +573 1574 1 1 3 13 73 573 573 573 573 146 147 BWAAAA OICAAA OOOOxx +3015 1575 1 3 5 15 15 15 1015 3015 3015 30 31 ZLAAAA PICAAA VVVVxx +8245 1576 1 1 5 5 45 
245 245 3245 8245 90 91 DFAAAA QICAAA AAAAxx +5251 1577 1 3 1 11 51 251 1251 251 5251 102 103 ZTAAAA RICAAA HHHHxx +2281 1578 1 1 1 1 81 281 281 2281 2281 162 163 TJAAAA SICAAA OOOOxx +518 1579 0 2 8 18 18 518 518 518 518 36 37 YTAAAA TICAAA VVVVxx +9839 1580 1 3 9 19 39 839 1839 4839 9839 78 79 LOAAAA UICAAA AAAAxx +4526 1581 0 2 6 6 26 526 526 4526 4526 52 53 CSAAAA VICAAA HHHHxx +1261 1582 1 1 1 1 61 261 1261 1261 1261 122 123 NWAAAA WICAAA OOOOxx +4259 1583 1 3 9 19 59 259 259 4259 4259 118 119 VHAAAA XICAAA VVVVxx +9098 1584 0 2 8 18 98 98 1098 4098 9098 196 197 YLAAAA YICAAA AAAAxx +6037 1585 1 1 7 17 37 37 37 1037 6037 74 75 FYAAAA ZICAAA HHHHxx +4284 1586 0 0 4 4 84 284 284 4284 4284 168 169 UIAAAA AJCAAA OOOOxx +3267 1587 1 3 7 7 67 267 1267 3267 3267 134 135 RVAAAA BJCAAA VVVVxx +5908 1588 0 0 8 8 8 908 1908 908 5908 16 17 GTAAAA CJCAAA AAAAxx +1549 1589 1 1 9 9 49 549 1549 1549 1549 98 99 PHAAAA DJCAAA HHHHxx +8736 1590 0 0 6 16 36 736 736 3736 8736 72 73 AYAAAA EJCAAA OOOOxx +2008 1591 0 0 8 8 8 8 8 2008 2008 16 17 GZAAAA FJCAAA VVVVxx +548 1592 0 0 8 8 48 548 548 548 548 96 97 CVAAAA GJCAAA AAAAxx +8846 1593 0 2 6 6 46 846 846 3846 8846 92 93 GCAAAA HJCAAA HHHHxx +8374 1594 0 2 4 14 74 374 374 3374 8374 148 149 CKAAAA IJCAAA OOOOxx +7986 1595 0 2 6 6 86 986 1986 2986 7986 172 173 EVAAAA JJCAAA VVVVxx +6819 1596 1 3 9 19 19 819 819 1819 6819 38 39 HCAAAA KJCAAA AAAAxx +4418 1597 0 2 8 18 18 418 418 4418 4418 36 37 YNAAAA LJCAAA HHHHxx +833 1598 1 1 3 13 33 833 833 833 833 66 67 BGAAAA MJCAAA OOOOxx +4416 1599 0 0 6 16 16 416 416 4416 4416 32 33 WNAAAA NJCAAA VVVVxx +4902 1600 0 2 2 2 2 902 902 4902 4902 4 5 OGAAAA OJCAAA AAAAxx +6828 1601 0 0 8 8 28 828 828 1828 6828 56 57 QCAAAA PJCAAA HHHHxx +1118 1602 0 2 8 18 18 118 1118 1118 1118 36 37 ARAAAA QJCAAA OOOOxx +9993 1603 1 1 3 13 93 993 1993 4993 9993 186 187 JUAAAA RJCAAA VVVVxx +1430 1604 0 2 0 10 30 430 1430 1430 1430 60 61 ADAAAA SJCAAA AAAAxx +5670 1605 0 2 0 10 70 670 1670 670 5670 140 141 CKAAAA TJCAAA HHHHxx +5424 1606 0 0 4 4 24 424 1424 424 5424 48 49 QAAAAA UJCAAA OOOOxx +5561 1607 1 1 1 1 61 561 1561 561 5561 122 123 XFAAAA VJCAAA VVVVxx +2027 1608 1 3 7 7 27 27 27 2027 2027 54 55 ZZAAAA WJCAAA AAAAxx +6924 1609 0 0 4 4 24 924 924 1924 6924 48 49 IGAAAA XJCAAA HHHHxx +5946 1610 0 2 6 6 46 946 1946 946 5946 92 93 SUAAAA YJCAAA OOOOxx +4294 1611 0 2 4 14 94 294 294 4294 4294 188 189 EJAAAA ZJCAAA VVVVxx +2936 1612 0 0 6 16 36 936 936 2936 2936 72 73 YIAAAA AKCAAA AAAAxx +3855 1613 1 3 5 15 55 855 1855 3855 3855 110 111 HSAAAA BKCAAA HHHHxx +455 1614 1 3 5 15 55 455 455 455 455 110 111 NRAAAA CKCAAA OOOOxx +2918 1615 0 2 8 18 18 918 918 2918 2918 36 37 GIAAAA DKCAAA VVVVxx +448 1616 0 0 8 8 48 448 448 448 448 96 97 GRAAAA EKCAAA AAAAxx +2149 1617 1 1 9 9 49 149 149 2149 2149 98 99 REAAAA FKCAAA HHHHxx +8890 1618 0 2 0 10 90 890 890 3890 8890 180 181 YDAAAA GKCAAA OOOOxx +8919 1619 1 3 9 19 19 919 919 3919 8919 38 39 BFAAAA HKCAAA VVVVxx +4957 1620 1 1 7 17 57 957 957 4957 4957 114 115 RIAAAA IKCAAA AAAAxx +4 1621 0 0 4 4 4 4 4 4 4 8 9 EAAAAA JKCAAA HHHHxx +4837 1622 1 1 7 17 37 837 837 4837 4837 74 75 BEAAAA KKCAAA OOOOxx +3976 1623 0 0 6 16 76 976 1976 3976 3976 152 153 YWAAAA LKCAAA VVVVxx +9459 1624 1 3 9 19 59 459 1459 4459 9459 118 119 VZAAAA MKCAAA AAAAxx +7097 1625 1 1 7 17 97 97 1097 2097 7097 194 195 ZMAAAA NKCAAA HHHHxx +9226 1626 0 2 6 6 26 226 1226 4226 9226 52 53 WQAAAA OKCAAA OOOOxx +5803 1627 1 3 3 3 3 803 1803 803 5803 6 7 FPAAAA PKCAAA VVVVxx +21 1628 1 1 1 1 21 21 21 21 21 42 43 VAAAAA QKCAAA AAAAxx 
+5275 1629 1 3 5 15 75 275 1275 275 5275 150 151 XUAAAA RKCAAA HHHHxx +3488 1630 0 0 8 8 88 488 1488 3488 3488 176 177 EEAAAA SKCAAA OOOOxx +1595 1631 1 3 5 15 95 595 1595 1595 1595 190 191 JJAAAA TKCAAA VVVVxx +5212 1632 0 0 2 12 12 212 1212 212 5212 24 25 MSAAAA UKCAAA AAAAxx +6574 1633 0 2 4 14 74 574 574 1574 6574 148 149 WSAAAA VKCAAA HHHHxx +7524 1634 0 0 4 4 24 524 1524 2524 7524 48 49 KDAAAA WKCAAA OOOOxx +6100 1635 0 0 0 0 0 100 100 1100 6100 0 1 QAAAAA XKCAAA VVVVxx +1198 1636 0 2 8 18 98 198 1198 1198 1198 196 197 CUAAAA YKCAAA AAAAxx +7345 1637 1 1 5 5 45 345 1345 2345 7345 90 91 NWAAAA ZKCAAA HHHHxx +5020 1638 0 0 0 0 20 20 1020 20 5020 40 41 CLAAAA ALCAAA OOOOxx +6925 1639 1 1 5 5 25 925 925 1925 6925 50 51 JGAAAA BLCAAA VVVVxx +8915 1640 1 3 5 15 15 915 915 3915 8915 30 31 XEAAAA CLCAAA AAAAxx +3088 1641 0 0 8 8 88 88 1088 3088 3088 176 177 UOAAAA DLCAAA HHHHxx +4828 1642 0 0 8 8 28 828 828 4828 4828 56 57 SDAAAA ELCAAA OOOOxx +7276 1643 0 0 6 16 76 276 1276 2276 7276 152 153 WTAAAA FLCAAA VVVVxx +299 1644 1 3 9 19 99 299 299 299 299 198 199 NLAAAA GLCAAA AAAAxx +76 1645 0 0 6 16 76 76 76 76 76 152 153 YCAAAA HLCAAA HHHHxx +8458 1646 0 2 8 18 58 458 458 3458 8458 116 117 INAAAA ILCAAA OOOOxx +7207 1647 1 3 7 7 7 207 1207 2207 7207 14 15 FRAAAA JLCAAA VVVVxx +5585 1648 1 1 5 5 85 585 1585 585 5585 170 171 VGAAAA KLCAAA AAAAxx +3234 1649 0 2 4 14 34 234 1234 3234 3234 68 69 KUAAAA LLCAAA HHHHxx +8001 1650 1 1 1 1 1 1 1 3001 8001 2 3 TVAAAA MLCAAA OOOOxx +1319 1651 1 3 9 19 19 319 1319 1319 1319 38 39 TYAAAA NLCAAA VVVVxx +6342 1652 0 2 2 2 42 342 342 1342 6342 84 85 YJAAAA OLCAAA AAAAxx +9199 1653 1 3 9 19 99 199 1199 4199 9199 198 199 VPAAAA PLCAAA HHHHxx +5696 1654 0 0 6 16 96 696 1696 696 5696 192 193 CLAAAA QLCAAA OOOOxx +2562 1655 0 2 2 2 62 562 562 2562 2562 124 125 OUAAAA RLCAAA VVVVxx +4226 1656 0 2 6 6 26 226 226 4226 4226 52 53 OGAAAA SLCAAA AAAAxx +1184 1657 0 0 4 4 84 184 1184 1184 1184 168 169 OTAAAA TLCAAA HHHHxx +5807 1658 1 3 7 7 7 807 1807 807 5807 14 15 JPAAAA ULCAAA OOOOxx +1890 1659 0 2 0 10 90 890 1890 1890 1890 180 181 SUAAAA VLCAAA VVVVxx +451 1660 1 3 1 11 51 451 451 451 451 102 103 JRAAAA WLCAAA AAAAxx +1049 1661 1 1 9 9 49 49 1049 1049 1049 98 99 JOAAAA XLCAAA HHHHxx +5272 1662 0 0 2 12 72 272 1272 272 5272 144 145 UUAAAA YLCAAA OOOOxx +4588 1663 0 0 8 8 88 588 588 4588 4588 176 177 MUAAAA ZLCAAA VVVVxx +5213 1664 1 1 3 13 13 213 1213 213 5213 26 27 NSAAAA AMCAAA AAAAxx +9543 1665 1 3 3 3 43 543 1543 4543 9543 86 87 BDAAAA BMCAAA HHHHxx +6318 1666 0 2 8 18 18 318 318 1318 6318 36 37 AJAAAA CMCAAA OOOOxx +7992 1667 0 0 2 12 92 992 1992 2992 7992 184 185 KVAAAA DMCAAA VVVVxx +4619 1668 1 3 9 19 19 619 619 4619 4619 38 39 RVAAAA EMCAAA AAAAxx +7189 1669 1 1 9 9 89 189 1189 2189 7189 178 179 NQAAAA FMCAAA HHHHxx +2178 1670 0 2 8 18 78 178 178 2178 2178 156 157 UFAAAA GMCAAA OOOOxx +4928 1671 0 0 8 8 28 928 928 4928 4928 56 57 OHAAAA HMCAAA VVVVxx +3966 1672 0 2 6 6 66 966 1966 3966 3966 132 133 OWAAAA IMCAAA AAAAxx +9790 1673 0 2 0 10 90 790 1790 4790 9790 180 181 OMAAAA JMCAAA HHHHxx +9150 1674 0 2 0 10 50 150 1150 4150 9150 100 101 YNAAAA KMCAAA OOOOxx +313 1675 1 1 3 13 13 313 313 313 313 26 27 BMAAAA LMCAAA VVVVxx +1614 1676 0 2 4 14 14 614 1614 1614 1614 28 29 CKAAAA MMCAAA AAAAxx +1581 1677 1 1 1 1 81 581 1581 1581 1581 162 163 VIAAAA NMCAAA HHHHxx +3674 1678 0 2 4 14 74 674 1674 3674 3674 148 149 ILAAAA OMCAAA OOOOxx +3444 1679 0 0 4 4 44 444 1444 3444 3444 88 89 MCAAAA PMCAAA VVVVxx +1050 1680 0 2 0 10 50 50 1050 1050 1050 100 101 KOAAAA QMCAAA 
AAAAxx +8241 1681 1 1 1 1 41 241 241 3241 8241 82 83 ZEAAAA RMCAAA HHHHxx +3382 1682 0 2 2 2 82 382 1382 3382 3382 164 165 CAAAAA SMCAAA OOOOxx +7105 1683 1 1 5 5 5 105 1105 2105 7105 10 11 HNAAAA TMCAAA VVVVxx +2957 1684 1 1 7 17 57 957 957 2957 2957 114 115 TJAAAA UMCAAA AAAAxx +6162 1685 0 2 2 2 62 162 162 1162 6162 124 125 ADAAAA VMCAAA HHHHxx +5150 1686 0 2 0 10 50 150 1150 150 5150 100 101 CQAAAA WMCAAA OOOOxx +2622 1687 0 2 2 2 22 622 622 2622 2622 44 45 WWAAAA XMCAAA VVVVxx +2240 1688 0 0 0 0 40 240 240 2240 2240 80 81 EIAAAA YMCAAA AAAAxx +8880 1689 0 0 0 0 80 880 880 3880 8880 160 161 ODAAAA ZMCAAA HHHHxx +9250 1690 0 2 0 10 50 250 1250 4250 9250 100 101 URAAAA ANCAAA OOOOxx +7010 1691 0 2 0 10 10 10 1010 2010 7010 20 21 QJAAAA BNCAAA VVVVxx +1098 1692 0 2 8 18 98 98 1098 1098 1098 196 197 GQAAAA CNCAAA AAAAxx +648 1693 0 0 8 8 48 648 648 648 648 96 97 YYAAAA DNCAAA HHHHxx +5536 1694 0 0 6 16 36 536 1536 536 5536 72 73 YEAAAA ENCAAA OOOOxx +7858 1695 0 2 8 18 58 858 1858 2858 7858 116 117 GQAAAA FNCAAA VVVVxx +7053 1696 1 1 3 13 53 53 1053 2053 7053 106 107 HLAAAA GNCAAA AAAAxx +8681 1697 1 1 1 1 81 681 681 3681 8681 162 163 XVAAAA HNCAAA HHHHxx +8832 1698 0 0 2 12 32 832 832 3832 8832 64 65 SBAAAA INCAAA OOOOxx +6836 1699 0 0 6 16 36 836 836 1836 6836 72 73 YCAAAA JNCAAA VVVVxx +4856 1700 0 0 6 16 56 856 856 4856 4856 112 113 UEAAAA KNCAAA AAAAxx +345 1701 1 1 5 5 45 345 345 345 345 90 91 HNAAAA LNCAAA HHHHxx +6559 1702 1 3 9 19 59 559 559 1559 6559 118 119 HSAAAA MNCAAA OOOOxx +3017 1703 1 1 7 17 17 17 1017 3017 3017 34 35 BMAAAA NNCAAA VVVVxx +4176 1704 0 0 6 16 76 176 176 4176 4176 152 153 QEAAAA ONCAAA AAAAxx +2839 1705 1 3 9 19 39 839 839 2839 2839 78 79 FFAAAA PNCAAA HHHHxx +6065 1706 1 1 5 5 65 65 65 1065 6065 130 131 HZAAAA QNCAAA OOOOxx +7360 1707 0 0 0 0 60 360 1360 2360 7360 120 121 CXAAAA RNCAAA VVVVxx +9527 1708 1 3 7 7 27 527 1527 4527 9527 54 55 LCAAAA SNCAAA AAAAxx +8849 1709 1 1 9 9 49 849 849 3849 8849 98 99 JCAAAA TNCAAA HHHHxx +7274 1710 0 2 4 14 74 274 1274 2274 7274 148 149 UTAAAA UNCAAA OOOOxx +4368 1711 0 0 8 8 68 368 368 4368 4368 136 137 AMAAAA VNCAAA VVVVxx +2488 1712 0 0 8 8 88 488 488 2488 2488 176 177 SRAAAA WNCAAA AAAAxx +4674 1713 0 2 4 14 74 674 674 4674 4674 148 149 UXAAAA XNCAAA HHHHxx +365 1714 1 1 5 5 65 365 365 365 365 130 131 BOAAAA YNCAAA OOOOxx +5897 1715 1 1 7 17 97 897 1897 897 5897 194 195 VSAAAA ZNCAAA VVVVxx +8918 1716 0 2 8 18 18 918 918 3918 8918 36 37 AFAAAA AOCAAA AAAAxx +1988 1717 0 0 8 8 88 988 1988 1988 1988 176 177 MYAAAA BOCAAA HHHHxx +1210 1718 0 2 0 10 10 210 1210 1210 1210 20 21 OUAAAA COCAAA OOOOxx +2945 1719 1 1 5 5 45 945 945 2945 2945 90 91 HJAAAA DOCAAA VVVVxx +555 1720 1 3 5 15 55 555 555 555 555 110 111 JVAAAA EOCAAA AAAAxx +9615 1721 1 3 5 15 15 615 1615 4615 9615 30 31 VFAAAA FOCAAA HHHHxx +9939 1722 1 3 9 19 39 939 1939 4939 9939 78 79 HSAAAA GOCAAA OOOOxx +1216 1723 0 0 6 16 16 216 1216 1216 1216 32 33 UUAAAA HOCAAA VVVVxx +745 1724 1 1 5 5 45 745 745 745 745 90 91 RCAAAA IOCAAA AAAAxx +3326 1725 0 2 6 6 26 326 1326 3326 3326 52 53 YXAAAA JOCAAA HHHHxx +953 1726 1 1 3 13 53 953 953 953 953 106 107 RKAAAA KOCAAA OOOOxx +444 1727 0 0 4 4 44 444 444 444 444 88 89 CRAAAA LOCAAA VVVVxx +280 1728 0 0 0 0 80 280 280 280 280 160 161 UKAAAA MOCAAA AAAAxx +3707 1729 1 3 7 7 7 707 1707 3707 3707 14 15 PMAAAA NOCAAA HHHHxx +1351 1730 1 3 1 11 51 351 1351 1351 1351 102 103 ZZAAAA OOCAAA OOOOxx +1280 1731 0 0 0 0 80 280 1280 1280 1280 160 161 GXAAAA POCAAA VVVVxx +628 1732 0 0 8 8 28 628 628 628 628 56 57 EYAAAA QOCAAA 
AAAAxx +6198 1733 0 2 8 18 98 198 198 1198 6198 196 197 KEAAAA ROCAAA HHHHxx +1957 1734 1 1 7 17 57 957 1957 1957 1957 114 115 HXAAAA SOCAAA OOOOxx +9241 1735 1 1 1 1 41 241 1241 4241 9241 82 83 LRAAAA TOCAAA VVVVxx +303 1736 1 3 3 3 3 303 303 303 303 6 7 RLAAAA UOCAAA AAAAxx +1945 1737 1 1 5 5 45 945 1945 1945 1945 90 91 VWAAAA VOCAAA HHHHxx +3634 1738 0 2 4 14 34 634 1634 3634 3634 68 69 UJAAAA WOCAAA OOOOxx +4768 1739 0 0 8 8 68 768 768 4768 4768 136 137 KBAAAA XOCAAA VVVVxx +9262 1740 0 2 2 2 62 262 1262 4262 9262 124 125 GSAAAA YOCAAA AAAAxx +2610 1741 0 2 0 10 10 610 610 2610 2610 20 21 KWAAAA ZOCAAA HHHHxx +6640 1742 0 0 0 0 40 640 640 1640 6640 80 81 KVAAAA APCAAA OOOOxx +3338 1743 0 2 8 18 38 338 1338 3338 3338 76 77 KYAAAA BPCAAA VVVVxx +6560 1744 0 0 0 0 60 560 560 1560 6560 120 121 ISAAAA CPCAAA AAAAxx +5986 1745 0 2 6 6 86 986 1986 986 5986 172 173 GWAAAA DPCAAA HHHHxx +2970 1746 0 2 0 10 70 970 970 2970 2970 140 141 GKAAAA EPCAAA OOOOxx +4731 1747 1 3 1 11 31 731 731 4731 4731 62 63 ZZAAAA FPCAAA VVVVxx +9486 1748 0 2 6 6 86 486 1486 4486 9486 172 173 WAAAAA GPCAAA AAAAxx +7204 1749 0 0 4 4 4 204 1204 2204 7204 8 9 CRAAAA HPCAAA HHHHxx +6685 1750 1 1 5 5 85 685 685 1685 6685 170 171 DXAAAA IPCAAA OOOOxx +6852 1751 0 0 2 12 52 852 852 1852 6852 104 105 ODAAAA JPCAAA VVVVxx +2325 1752 1 1 5 5 25 325 325 2325 2325 50 51 LLAAAA KPCAAA AAAAxx +1063 1753 1 3 3 3 63 63 1063 1063 1063 126 127 XOAAAA LPCAAA HHHHxx +6810 1754 0 2 0 10 10 810 810 1810 6810 20 21 YBAAAA MPCAAA OOOOxx +7718 1755 0 2 8 18 18 718 1718 2718 7718 36 37 WKAAAA NPCAAA VVVVxx +1680 1756 0 0 0 0 80 680 1680 1680 1680 160 161 QMAAAA OPCAAA AAAAxx +7402 1757 0 2 2 2 2 402 1402 2402 7402 4 5 SYAAAA PPCAAA HHHHxx +4134 1758 0 2 4 14 34 134 134 4134 4134 68 69 ADAAAA QPCAAA OOOOxx +8232 1759 0 0 2 12 32 232 232 3232 8232 64 65 QEAAAA RPCAAA VVVVxx +6682 1760 0 2 2 2 82 682 682 1682 6682 164 165 AXAAAA SPCAAA AAAAxx +7952 1761 0 0 2 12 52 952 1952 2952 7952 104 105 WTAAAA TPCAAA HHHHxx +5943 1762 1 3 3 3 43 943 1943 943 5943 86 87 PUAAAA UPCAAA OOOOxx +5394 1763 0 2 4 14 94 394 1394 394 5394 188 189 MZAAAA VPCAAA VVVVxx +6554 1764 0 2 4 14 54 554 554 1554 6554 108 109 CSAAAA WPCAAA AAAAxx +8186 1765 0 2 6 6 86 186 186 3186 8186 172 173 WCAAAA XPCAAA HHHHxx +199 1766 1 3 9 19 99 199 199 199 199 198 199 RHAAAA YPCAAA OOOOxx +3386 1767 0 2 6 6 86 386 1386 3386 3386 172 173 GAAAAA ZPCAAA VVVVxx +8974 1768 0 2 4 14 74 974 974 3974 8974 148 149 EHAAAA AQCAAA AAAAxx +8140 1769 0 0 0 0 40 140 140 3140 8140 80 81 CBAAAA BQCAAA HHHHxx +3723 1770 1 3 3 3 23 723 1723 3723 3723 46 47 FNAAAA CQCAAA OOOOxx +8827 1771 1 3 7 7 27 827 827 3827 8827 54 55 NBAAAA DQCAAA VVVVxx +1998 1772 0 2 8 18 98 998 1998 1998 1998 196 197 WYAAAA EQCAAA AAAAxx +879 1773 1 3 9 19 79 879 879 879 879 158 159 VHAAAA FQCAAA HHHHxx +892 1774 0 0 2 12 92 892 892 892 892 184 185 IIAAAA GQCAAA OOOOxx +9468 1775 0 0 8 8 68 468 1468 4468 9468 136 137 EAAAAA HQCAAA VVVVxx +3797 1776 1 1 7 17 97 797 1797 3797 3797 194 195 BQAAAA IQCAAA AAAAxx +8379 1777 1 3 9 19 79 379 379 3379 8379 158 159 HKAAAA JQCAAA HHHHxx +2817 1778 1 1 7 17 17 817 817 2817 2817 34 35 JEAAAA KQCAAA OOOOxx +789 1779 1 1 9 9 89 789 789 789 789 178 179 JEAAAA LQCAAA VVVVxx +3871 1780 1 3 1 11 71 871 1871 3871 3871 142 143 XSAAAA MQCAAA AAAAxx +7931 1781 1 3 1 11 31 931 1931 2931 7931 62 63 BTAAAA NQCAAA HHHHxx +3636 1782 0 0 6 16 36 636 1636 3636 3636 72 73 WJAAAA OQCAAA OOOOxx +699 1783 1 3 9 19 99 699 699 699 699 198 199 XAAAAA PQCAAA VVVVxx +6850 1784 0 2 0 10 50 850 850 1850 6850 100 101 
MDAAAA QQCAAA AAAAxx +6394 1785 0 2 4 14 94 394 394 1394 6394 188 189 YLAAAA RQCAAA HHHHxx +3475 1786 1 3 5 15 75 475 1475 3475 3475 150 151 RDAAAA SQCAAA OOOOxx +3026 1787 0 2 6 6 26 26 1026 3026 3026 52 53 KMAAAA TQCAAA VVVVxx +876 1788 0 0 6 16 76 876 876 876 876 152 153 SHAAAA UQCAAA AAAAxx +1992 1789 0 0 2 12 92 992 1992 1992 1992 184 185 QYAAAA VQCAAA HHHHxx +3079 1790 1 3 9 19 79 79 1079 3079 3079 158 159 LOAAAA WQCAAA OOOOxx +8128 1791 0 0 8 8 28 128 128 3128 8128 56 57 QAAAAA XQCAAA VVVVxx +8123 1792 1 3 3 3 23 123 123 3123 8123 46 47 LAAAAA YQCAAA AAAAxx +3285 1793 1 1 5 5 85 285 1285 3285 3285 170 171 JWAAAA ZQCAAA HHHHxx +9315 1794 1 3 5 15 15 315 1315 4315 9315 30 31 HUAAAA ARCAAA OOOOxx +9862 1795 0 2 2 2 62 862 1862 4862 9862 124 125 IPAAAA BRCAAA VVVVxx +2764 1796 0 0 4 4 64 764 764 2764 2764 128 129 ICAAAA CRCAAA AAAAxx +3544 1797 0 0 4 4 44 544 1544 3544 3544 88 89 IGAAAA DRCAAA HHHHxx +7747 1798 1 3 7 7 47 747 1747 2747 7747 94 95 ZLAAAA ERCAAA OOOOxx +7725 1799 1 1 5 5 25 725 1725 2725 7725 50 51 DLAAAA FRCAAA VVVVxx +2449 1800 1 1 9 9 49 449 449 2449 2449 98 99 FQAAAA GRCAAA AAAAxx +8967 1801 1 3 7 7 67 967 967 3967 8967 134 135 XGAAAA HRCAAA HHHHxx +7371 1802 1 3 1 11 71 371 1371 2371 7371 142 143 NXAAAA IRCAAA OOOOxx +2158 1803 0 2 8 18 58 158 158 2158 2158 116 117 AFAAAA JRCAAA VVVVxx +5590 1804 0 2 0 10 90 590 1590 590 5590 180 181 AHAAAA KRCAAA AAAAxx +8072 1805 0 0 2 12 72 72 72 3072 8072 144 145 MYAAAA LRCAAA HHHHxx +1971 1806 1 3 1 11 71 971 1971 1971 1971 142 143 VXAAAA MRCAAA OOOOxx +772 1807 0 0 2 12 72 772 772 772 772 144 145 SDAAAA NRCAAA VVVVxx +3433 1808 1 1 3 13 33 433 1433 3433 3433 66 67 BCAAAA ORCAAA AAAAxx +8419 1809 1 3 9 19 19 419 419 3419 8419 38 39 VLAAAA PRCAAA HHHHxx +1493 1810 1 1 3 13 93 493 1493 1493 1493 186 187 LFAAAA QRCAAA OOOOxx +2584 1811 0 0 4 4 84 584 584 2584 2584 168 169 KVAAAA RRCAAA VVVVxx +9502 1812 0 2 2 2 2 502 1502 4502 9502 4 5 MBAAAA SRCAAA AAAAxx +4673 1813 1 1 3 13 73 673 673 4673 4673 146 147 TXAAAA TRCAAA HHHHxx +7403 1814 1 3 3 3 3 403 1403 2403 7403 6 7 TYAAAA URCAAA OOOOxx +7103 1815 1 3 3 3 3 103 1103 2103 7103 6 7 FNAAAA VRCAAA VVVVxx +7026 1816 0 2 6 6 26 26 1026 2026 7026 52 53 GKAAAA WRCAAA AAAAxx +8574 1817 0 2 4 14 74 574 574 3574 8574 148 149 URAAAA XRCAAA HHHHxx +1366 1818 0 2 6 6 66 366 1366 1366 1366 132 133 OAAAAA YRCAAA OOOOxx +5787 1819 1 3 7 7 87 787 1787 787 5787 174 175 POAAAA ZRCAAA VVVVxx +2552 1820 0 0 2 12 52 552 552 2552 2552 104 105 EUAAAA ASCAAA AAAAxx +4557 1821 1 1 7 17 57 557 557 4557 4557 114 115 HTAAAA BSCAAA HHHHxx +3237 1822 1 1 7 17 37 237 1237 3237 3237 74 75 NUAAAA CSCAAA OOOOxx +6901 1823 1 1 1 1 1 901 901 1901 6901 2 3 LFAAAA DSCAAA VVVVxx +7708 1824 0 0 8 8 8 708 1708 2708 7708 16 17 MKAAAA ESCAAA AAAAxx +2011 1825 1 3 1 11 11 11 11 2011 2011 22 23 JZAAAA FSCAAA HHHHxx +9455 1826 1 3 5 15 55 455 1455 4455 9455 110 111 RZAAAA GSCAAA OOOOxx +5228 1827 0 0 8 8 28 228 1228 228 5228 56 57 CTAAAA HSCAAA VVVVxx +4043 1828 1 3 3 3 43 43 43 4043 4043 86 87 NZAAAA ISCAAA AAAAxx +8242 1829 0 2 2 2 42 242 242 3242 8242 84 85 AFAAAA JSCAAA HHHHxx +6351 1830 1 3 1 11 51 351 351 1351 6351 102 103 HKAAAA KSCAAA OOOOxx +5899 1831 1 3 9 19 99 899 1899 899 5899 198 199 XSAAAA LSCAAA VVVVxx +4849 1832 1 1 9 9 49 849 849 4849 4849 98 99 NEAAAA MSCAAA AAAAxx +9583 1833 1 3 3 3 83 583 1583 4583 9583 166 167 PEAAAA NSCAAA HHHHxx +4994 1834 0 2 4 14 94 994 994 4994 4994 188 189 CKAAAA OSCAAA OOOOxx +9787 1835 1 3 7 7 87 787 1787 4787 9787 174 175 LMAAAA PSCAAA VVVVxx +243 1836 1 3 3 3 43 243 243 243 243 
86 87 JJAAAA QSCAAA AAAAxx +3931 1837 1 3 1 11 31 931 1931 3931 3931 62 63 FVAAAA RSCAAA HHHHxx +5945 1838 1 1 5 5 45 945 1945 945 5945 90 91 RUAAAA SSCAAA OOOOxx +1325 1839 1 1 5 5 25 325 1325 1325 1325 50 51 ZYAAAA TSCAAA VVVVxx +4142 1840 0 2 2 2 42 142 142 4142 4142 84 85 IDAAAA USCAAA AAAAxx +1963 1841 1 3 3 3 63 963 1963 1963 1963 126 127 NXAAAA VSCAAA HHHHxx +7041 1842 1 1 1 1 41 41 1041 2041 7041 82 83 VKAAAA WSCAAA OOOOxx +3074 1843 0 2 4 14 74 74 1074 3074 3074 148 149 GOAAAA XSCAAA VVVVxx +3290 1844 0 2 0 10 90 290 1290 3290 3290 180 181 OWAAAA YSCAAA AAAAxx +4146 1845 0 2 6 6 46 146 146 4146 4146 92 93 MDAAAA ZSCAAA HHHHxx +3832 1846 0 0 2 12 32 832 1832 3832 3832 64 65 KRAAAA ATCAAA OOOOxx +2217 1847 1 1 7 17 17 217 217 2217 2217 34 35 HHAAAA BTCAAA VVVVxx +635 1848 1 3 5 15 35 635 635 635 635 70 71 LYAAAA CTCAAA AAAAxx +6967 1849 1 3 7 7 67 967 967 1967 6967 134 135 ZHAAAA DTCAAA HHHHxx +3522 1850 0 2 2 2 22 522 1522 3522 3522 44 45 MFAAAA ETCAAA OOOOxx +2471 1851 1 3 1 11 71 471 471 2471 2471 142 143 BRAAAA FTCAAA VVVVxx +4236 1852 0 0 6 16 36 236 236 4236 4236 72 73 YGAAAA GTCAAA AAAAxx +853 1853 1 1 3 13 53 853 853 853 853 106 107 VGAAAA HTCAAA HHHHxx +3754 1854 0 2 4 14 54 754 1754 3754 3754 108 109 KOAAAA ITCAAA OOOOxx +796 1855 0 0 6 16 96 796 796 796 796 192 193 QEAAAA JTCAAA VVVVxx +4640 1856 0 0 0 0 40 640 640 4640 4640 80 81 MWAAAA KTCAAA AAAAxx +9496 1857 0 0 6 16 96 496 1496 4496 9496 192 193 GBAAAA LTCAAA HHHHxx +6873 1858 1 1 3 13 73 873 873 1873 6873 146 147 JEAAAA MTCAAA OOOOxx +4632 1859 0 0 2 12 32 632 632 4632 4632 64 65 EWAAAA NTCAAA VVVVxx +5758 1860 0 2 8 18 58 758 1758 758 5758 116 117 MNAAAA OTCAAA AAAAxx +6514 1861 0 2 4 14 14 514 514 1514 6514 28 29 OQAAAA PTCAAA HHHHxx +9510 1862 0 2 0 10 10 510 1510 4510 9510 20 21 UBAAAA QTCAAA OOOOxx +8411 1863 1 3 1 11 11 411 411 3411 8411 22 23 NLAAAA RTCAAA VVVVxx +7762 1864 0 2 2 2 62 762 1762 2762 7762 124 125 OMAAAA STCAAA AAAAxx +2225 1865 1 1 5 5 25 225 225 2225 2225 50 51 PHAAAA TTCAAA HHHHxx +4373 1866 1 1 3 13 73 373 373 4373 4373 146 147 FMAAAA UTCAAA OOOOxx +7326 1867 0 2 6 6 26 326 1326 2326 7326 52 53 UVAAAA VTCAAA VVVVxx +8651 1868 1 3 1 11 51 651 651 3651 8651 102 103 TUAAAA WTCAAA AAAAxx +9825 1869 1 1 5 5 25 825 1825 4825 9825 50 51 XNAAAA XTCAAA HHHHxx +2988 1870 0 0 8 8 88 988 988 2988 2988 176 177 YKAAAA YTCAAA OOOOxx +8138 1871 0 2 8 18 38 138 138 3138 8138 76 77 ABAAAA ZTCAAA VVVVxx +7792 1872 0 0 2 12 92 792 1792 2792 7792 184 185 SNAAAA AUCAAA AAAAxx +1232 1873 0 0 2 12 32 232 1232 1232 1232 64 65 KVAAAA BUCAAA HHHHxx +8221 1874 1 1 1 1 21 221 221 3221 8221 42 43 FEAAAA CUCAAA OOOOxx +4044 1875 0 0 4 4 44 44 44 4044 4044 88 89 OZAAAA DUCAAA VVVVxx +1204 1876 0 0 4 4 4 204 1204 1204 1204 8 9 IUAAAA EUCAAA AAAAxx +5145 1877 1 1 5 5 45 145 1145 145 5145 90 91 XPAAAA FUCAAA HHHHxx +7791 1878 1 3 1 11 91 791 1791 2791 7791 182 183 RNAAAA GUCAAA OOOOxx +8270 1879 0 2 0 10 70 270 270 3270 8270 140 141 CGAAAA HUCAAA VVVVxx +9427 1880 1 3 7 7 27 427 1427 4427 9427 54 55 PYAAAA IUCAAA AAAAxx +2152 1881 0 0 2 12 52 152 152 2152 2152 104 105 UEAAAA JUCAAA HHHHxx +7790 1882 0 2 0 10 90 790 1790 2790 7790 180 181 QNAAAA KUCAAA OOOOxx +5301 1883 1 1 1 1 1 301 1301 301 5301 2 3 XVAAAA LUCAAA VVVVxx +626 1884 0 2 6 6 26 626 626 626 626 52 53 CYAAAA MUCAAA AAAAxx +260 1885 0 0 0 0 60 260 260 260 260 120 121 AKAAAA NUCAAA HHHHxx +4369 1886 1 1 9 9 69 369 369 4369 4369 138 139 BMAAAA OUCAAA OOOOxx +5457 1887 1 1 7 17 57 457 1457 457 5457 114 115 XBAAAA PUCAAA VVVVxx +3468 1888 0 0 8 8 68 468 1468 3468 3468 
136 137 KDAAAA QUCAAA AAAAxx +2257 1889 1 1 7 17 57 257 257 2257 2257 114 115 VIAAAA RUCAAA HHHHxx +9318 1890 0 2 8 18 18 318 1318 4318 9318 36 37 KUAAAA SUCAAA OOOOxx +8762 1891 0 2 2 2 62 762 762 3762 8762 124 125 AZAAAA TUCAAA VVVVxx +9153 1892 1 1 3 13 53 153 1153 4153 9153 106 107 BOAAAA UUCAAA AAAAxx +9220 1893 0 0 0 0 20 220 1220 4220 9220 40 41 QQAAAA VUCAAA HHHHxx +8003 1894 1 3 3 3 3 3 3 3003 8003 6 7 VVAAAA WUCAAA OOOOxx +7257 1895 1 1 7 17 57 257 1257 2257 7257 114 115 DTAAAA XUCAAA VVVVxx +3930 1896 0 2 0 10 30 930 1930 3930 3930 60 61 EVAAAA YUCAAA AAAAxx +2976 1897 0 0 6 16 76 976 976 2976 2976 152 153 MKAAAA ZUCAAA HHHHxx +2531 1898 1 3 1 11 31 531 531 2531 2531 62 63 JTAAAA AVCAAA OOOOxx +2250 1899 0 2 0 10 50 250 250 2250 2250 100 101 OIAAAA BVCAAA VVVVxx +8549 1900 1 1 9 9 49 549 549 3549 8549 98 99 VQAAAA CVCAAA AAAAxx +7197 1901 1 1 7 17 97 197 1197 2197 7197 194 195 VQAAAA DVCAAA HHHHxx +5916 1902 0 0 6 16 16 916 1916 916 5916 32 33 OTAAAA EVCAAA OOOOxx +5287 1903 1 3 7 7 87 287 1287 287 5287 174 175 JVAAAA FVCAAA VVVVxx +9095 1904 1 3 5 15 95 95 1095 4095 9095 190 191 VLAAAA GVCAAA AAAAxx +7137 1905 1 1 7 17 37 137 1137 2137 7137 74 75 NOAAAA HVCAAA HHHHxx +7902 1906 0 2 2 2 2 902 1902 2902 7902 4 5 YRAAAA IVCAAA OOOOxx +7598 1907 0 2 8 18 98 598 1598 2598 7598 196 197 GGAAAA JVCAAA VVVVxx +5652 1908 0 0 2 12 52 652 1652 652 5652 104 105 KJAAAA KVCAAA AAAAxx +2017 1909 1 1 7 17 17 17 17 2017 2017 34 35 PZAAAA LVCAAA HHHHxx +7255 1910 1 3 5 15 55 255 1255 2255 7255 110 111 BTAAAA MVCAAA OOOOxx +7999 1911 1 3 9 19 99 999 1999 2999 7999 198 199 RVAAAA NVCAAA VVVVxx +5388 1912 0 0 8 8 88 388 1388 388 5388 176 177 GZAAAA OVCAAA AAAAxx +8754 1913 0 2 4 14 54 754 754 3754 8754 108 109 SYAAAA PVCAAA HHHHxx +5415 1914 1 3 5 15 15 415 1415 415 5415 30 31 HAAAAA QVCAAA OOOOxx +8861 1915 1 1 1 1 61 861 861 3861 8861 122 123 VCAAAA RVCAAA VVVVxx +2874 1916 0 2 4 14 74 874 874 2874 2874 148 149 OGAAAA SVCAAA AAAAxx +9910 1917 0 2 0 10 10 910 1910 4910 9910 20 21 ERAAAA TVCAAA HHHHxx +5178 1918 0 2 8 18 78 178 1178 178 5178 156 157 ERAAAA UVCAAA OOOOxx +5698 1919 0 2 8 18 98 698 1698 698 5698 196 197 ELAAAA VVCAAA VVVVxx +8500 1920 0 0 0 0 0 500 500 3500 8500 0 1 YOAAAA WVCAAA AAAAxx +1814 1921 0 2 4 14 14 814 1814 1814 1814 28 29 URAAAA XVCAAA HHHHxx +4968 1922 0 0 8 8 68 968 968 4968 4968 136 137 CJAAAA YVCAAA OOOOxx +2642 1923 0 2 2 2 42 642 642 2642 2642 84 85 QXAAAA ZVCAAA VVVVxx +1578 1924 0 2 8 18 78 578 1578 1578 1578 156 157 SIAAAA AWCAAA AAAAxx +4774 1925 0 2 4 14 74 774 774 4774 4774 148 149 QBAAAA BWCAAA HHHHxx +7062 1926 0 2 2 2 62 62 1062 2062 7062 124 125 QLAAAA CWCAAA OOOOxx +5381 1927 1 1 1 1 81 381 1381 381 5381 162 163 ZYAAAA DWCAAA VVVVxx +7985 1928 1 1 5 5 85 985 1985 2985 7985 170 171 DVAAAA EWCAAA AAAAxx +3850 1929 0 2 0 10 50 850 1850 3850 3850 100 101 CSAAAA FWCAAA HHHHxx +5624 1930 0 0 4 4 24 624 1624 624 5624 48 49 IIAAAA GWCAAA OOOOxx +8948 1931 0 0 8 8 48 948 948 3948 8948 96 97 EGAAAA HWCAAA VVVVxx +995 1932 1 3 5 15 95 995 995 995 995 190 191 HMAAAA IWCAAA AAAAxx +5058 1933 0 2 8 18 58 58 1058 58 5058 116 117 OMAAAA JWCAAA HHHHxx +9670 1934 0 2 0 10 70 670 1670 4670 9670 140 141 YHAAAA KWCAAA OOOOxx +3115 1935 1 3 5 15 15 115 1115 3115 3115 30 31 VPAAAA LWCAAA VVVVxx +4935 1936 1 3 5 15 35 935 935 4935 4935 70 71 VHAAAA MWCAAA AAAAxx +4735 1937 1 3 5 15 35 735 735 4735 4735 70 71 DAAAAA NWCAAA HHHHxx +1348 1938 0 0 8 8 48 348 1348 1348 1348 96 97 WZAAAA OWCAAA OOOOxx +2380 1939 0 0 0 0 80 380 380 2380 2380 160 161 ONAAAA PWCAAA VVVVxx +4246 1940 0 2 6 
6 46 246 246 4246 4246 92 93 IHAAAA QWCAAA AAAAxx +522 1941 0 2 2 2 22 522 522 522 522 44 45 CUAAAA RWCAAA HHHHxx +1701 1942 1 1 1 1 1 701 1701 1701 1701 2 3 LNAAAA SWCAAA OOOOxx +9709 1943 1 1 9 9 9 709 1709 4709 9709 18 19 LJAAAA TWCAAA VVVVxx +8829 1944 1 1 9 9 29 829 829 3829 8829 58 59 PBAAAA UWCAAA AAAAxx +7936 1945 0 0 6 16 36 936 1936 2936 7936 72 73 GTAAAA VWCAAA HHHHxx +8474 1946 0 2 4 14 74 474 474 3474 8474 148 149 YNAAAA WWCAAA OOOOxx +4676 1947 0 0 6 16 76 676 676 4676 4676 152 153 WXAAAA XWCAAA VVVVxx +6303 1948 1 3 3 3 3 303 303 1303 6303 6 7 LIAAAA YWCAAA AAAAxx +3485 1949 1 1 5 5 85 485 1485 3485 3485 170 171 BEAAAA ZWCAAA HHHHxx +2695 1950 1 3 5 15 95 695 695 2695 2695 190 191 RZAAAA AXCAAA OOOOxx +8830 1951 0 2 0 10 30 830 830 3830 8830 60 61 QBAAAA BXCAAA VVVVxx +898 1952 0 2 8 18 98 898 898 898 898 196 197 OIAAAA CXCAAA AAAAxx +7268 1953 0 0 8 8 68 268 1268 2268 7268 136 137 OTAAAA DXCAAA HHHHxx +6568 1954 0 0 8 8 68 568 568 1568 6568 136 137 QSAAAA EXCAAA OOOOxx +9724 1955 0 0 4 4 24 724 1724 4724 9724 48 49 AKAAAA FXCAAA VVVVxx +3329 1956 1 1 9 9 29 329 1329 3329 3329 58 59 BYAAAA GXCAAA AAAAxx +9860 1957 0 0 0 0 60 860 1860 4860 9860 120 121 GPAAAA HXCAAA HHHHxx +6833 1958 1 1 3 13 33 833 833 1833 6833 66 67 VCAAAA IXCAAA OOOOxx +5956 1959 0 0 6 16 56 956 1956 956 5956 112 113 CVAAAA JXCAAA VVVVxx +3963 1960 1 3 3 3 63 963 1963 3963 3963 126 127 LWAAAA KXCAAA AAAAxx +883 1961 1 3 3 3 83 883 883 883 883 166 167 ZHAAAA LXCAAA HHHHxx +2761 1962 1 1 1 1 61 761 761 2761 2761 122 123 FCAAAA MXCAAA OOOOxx +4644 1963 0 0 4 4 44 644 644 4644 4644 88 89 QWAAAA NXCAAA VVVVxx +1358 1964 0 2 8 18 58 358 1358 1358 1358 116 117 GAAAAA OXCAAA AAAAxx +2049 1965 1 1 9 9 49 49 49 2049 2049 98 99 VAAAAA PXCAAA HHHHxx +2193 1966 1 1 3 13 93 193 193 2193 2193 186 187 JGAAAA QXCAAA OOOOxx +9435 1967 1 3 5 15 35 435 1435 4435 9435 70 71 XYAAAA RXCAAA VVVVxx +5890 1968 0 2 0 10 90 890 1890 890 5890 180 181 OSAAAA SXCAAA AAAAxx +8149 1969 1 1 9 9 49 149 149 3149 8149 98 99 LBAAAA TXCAAA HHHHxx +423 1970 1 3 3 3 23 423 423 423 423 46 47 HQAAAA UXCAAA OOOOxx +7980 1971 0 0 0 0 80 980 1980 2980 7980 160 161 YUAAAA VXCAAA VVVVxx +9019 1972 1 3 9 19 19 19 1019 4019 9019 38 39 XIAAAA WXCAAA AAAAxx +1647 1973 1 3 7 7 47 647 1647 1647 1647 94 95 JLAAAA XXCAAA HHHHxx +9495 1974 1 3 5 15 95 495 1495 4495 9495 190 191 FBAAAA YXCAAA OOOOxx +3904 1975 0 0 4 4 4 904 1904 3904 3904 8 9 EUAAAA ZXCAAA VVVVxx +5838 1976 0 2 8 18 38 838 1838 838 5838 76 77 OQAAAA AYCAAA AAAAxx +3866 1977 0 2 6 6 66 866 1866 3866 3866 132 133 SSAAAA BYCAAA HHHHxx +3093 1978 1 1 3 13 93 93 1093 3093 3093 186 187 ZOAAAA CYCAAA OOOOxx +9666 1979 0 2 6 6 66 666 1666 4666 9666 132 133 UHAAAA DYCAAA VVVVxx +1246 1980 0 2 6 6 46 246 1246 1246 1246 92 93 YVAAAA EYCAAA AAAAxx +9759 1981 1 3 9 19 59 759 1759 4759 9759 118 119 JLAAAA FYCAAA HHHHxx +7174 1982 0 2 4 14 74 174 1174 2174 7174 148 149 YPAAAA GYCAAA OOOOxx +7678 1983 0 2 8 18 78 678 1678 2678 7678 156 157 IJAAAA HYCAAA VVVVxx +3004 1984 0 0 4 4 4 4 1004 3004 3004 8 9 OLAAAA IYCAAA AAAAxx +5607 1985 1 3 7 7 7 607 1607 607 5607 14 15 RHAAAA JYCAAA HHHHxx +8510 1986 0 2 0 10 10 510 510 3510 8510 20 21 IPAAAA KYCAAA OOOOxx +1483 1987 1 3 3 3 83 483 1483 1483 1483 166 167 BFAAAA LYCAAA VVVVxx +2915 1988 1 3 5 15 15 915 915 2915 2915 30 31 DIAAAA MYCAAA AAAAxx +1548 1989 0 0 8 8 48 548 1548 1548 1548 96 97 OHAAAA NYCAAA HHHHxx +5767 1990 1 3 7 7 67 767 1767 767 5767 134 135 VNAAAA OYCAAA OOOOxx +3214 1991 0 2 4 14 14 214 1214 3214 3214 28 29 QTAAAA PYCAAA VVVVxx +8663 1992 1 3 3 3 
63 663 663 3663 8663 126 127 FVAAAA QYCAAA AAAAxx +5425 1993 1 1 5 5 25 425 1425 425 5425 50 51 RAAAAA RYCAAA HHHHxx +8530 1994 0 2 0 10 30 530 530 3530 8530 60 61 CQAAAA SYCAAA OOOOxx +821 1995 1 1 1 1 21 821 821 821 821 42 43 PFAAAA TYCAAA VVVVxx +8816 1996 0 0 6 16 16 816 816 3816 8816 32 33 CBAAAA UYCAAA AAAAxx +9367 1997 1 3 7 7 67 367 1367 4367 9367 134 135 HWAAAA VYCAAA HHHHxx +4138 1998 0 2 8 18 38 138 138 4138 4138 76 77 EDAAAA WYCAAA OOOOxx +94 1999 0 2 4 14 94 94 94 94 94 188 189 QDAAAA XYCAAA VVVVxx +1858 2000 0 2 8 18 58 858 1858 1858 1858 116 117 MTAAAA YYCAAA AAAAxx +5513 2001 1 1 3 13 13 513 1513 513 5513 26 27 BEAAAA ZYCAAA HHHHxx +9620 2002 0 0 0 0 20 620 1620 4620 9620 40 41 AGAAAA AZCAAA OOOOxx +4770 2003 0 2 0 10 70 770 770 4770 4770 140 141 MBAAAA BZCAAA VVVVxx +5193 2004 1 1 3 13 93 193 1193 193 5193 186 187 TRAAAA CZCAAA AAAAxx +198 2005 0 2 8 18 98 198 198 198 198 196 197 QHAAAA DZCAAA HHHHxx +417 2006 1 1 7 17 17 417 417 417 417 34 35 BQAAAA EZCAAA OOOOxx +173 2007 1 1 3 13 73 173 173 173 173 146 147 RGAAAA FZCAAA VVVVxx +6248 2008 0 0 8 8 48 248 248 1248 6248 96 97 IGAAAA GZCAAA AAAAxx +302 2009 0 2 2 2 2 302 302 302 302 4 5 QLAAAA HZCAAA HHHHxx +8983 2010 1 3 3 3 83 983 983 3983 8983 166 167 NHAAAA IZCAAA OOOOxx +4840 2011 0 0 0 0 40 840 840 4840 4840 80 81 EEAAAA JZCAAA VVVVxx +2876 2012 0 0 6 16 76 876 876 2876 2876 152 153 QGAAAA KZCAAA AAAAxx +5841 2013 1 1 1 1 41 841 1841 841 5841 82 83 RQAAAA LZCAAA HHHHxx +2766 2014 0 2 6 6 66 766 766 2766 2766 132 133 KCAAAA MZCAAA OOOOxx +9482 2015 0 2 2 2 82 482 1482 4482 9482 164 165 SAAAAA NZCAAA VVVVxx +5335 2016 1 3 5 15 35 335 1335 335 5335 70 71 FXAAAA OZCAAA AAAAxx +1502 2017 0 2 2 2 2 502 1502 1502 1502 4 5 UFAAAA PZCAAA HHHHxx +9291 2018 1 3 1 11 91 291 1291 4291 9291 182 183 JTAAAA QZCAAA OOOOxx +8655 2019 1 3 5 15 55 655 655 3655 8655 110 111 XUAAAA RZCAAA VVVVxx +1687 2020 1 3 7 7 87 687 1687 1687 1687 174 175 XMAAAA SZCAAA AAAAxx +8171 2021 1 3 1 11 71 171 171 3171 8171 142 143 HCAAAA TZCAAA HHHHxx +5699 2022 1 3 9 19 99 699 1699 699 5699 198 199 FLAAAA UZCAAA OOOOxx +1462 2023 0 2 2 2 62 462 1462 1462 1462 124 125 GEAAAA VZCAAA VVVVxx +608 2024 0 0 8 8 8 608 608 608 608 16 17 KXAAAA WZCAAA AAAAxx +6860 2025 0 0 0 0 60 860 860 1860 6860 120 121 WDAAAA XZCAAA HHHHxx +6063 2026 1 3 3 3 63 63 63 1063 6063 126 127 FZAAAA YZCAAA OOOOxx +1422 2027 0 2 2 2 22 422 1422 1422 1422 44 45 SCAAAA ZZCAAA VVVVxx +1932 2028 0 0 2 12 32 932 1932 1932 1932 64 65 IWAAAA AADAAA AAAAxx +5065 2029 1 1 5 5 65 65 1065 65 5065 130 131 VMAAAA BADAAA HHHHxx +432 2030 0 0 2 12 32 432 432 432 432 64 65 QQAAAA CADAAA OOOOxx +4680 2031 0 0 0 0 80 680 680 4680 4680 160 161 AYAAAA DADAAA VVVVxx +8172 2032 0 0 2 12 72 172 172 3172 8172 144 145 ICAAAA EADAAA AAAAxx +8668 2033 0 0 8 8 68 668 668 3668 8668 136 137 KVAAAA FADAAA HHHHxx +256 2034 0 0 6 16 56 256 256 256 256 112 113 WJAAAA GADAAA OOOOxx +2500 2035 0 0 0 0 0 500 500 2500 2500 0 1 ESAAAA HADAAA VVVVxx +274 2036 0 2 4 14 74 274 274 274 274 148 149 OKAAAA IADAAA AAAAxx +5907 2037 1 3 7 7 7 907 1907 907 5907 14 15 FTAAAA JADAAA HHHHxx +8587 2038 1 3 7 7 87 587 587 3587 8587 174 175 HSAAAA KADAAA OOOOxx +9942 2039 0 2 2 2 42 942 1942 4942 9942 84 85 KSAAAA LADAAA VVVVxx +116 2040 0 0 6 16 16 116 116 116 116 32 33 MEAAAA MADAAA AAAAxx +7134 2041 0 2 4 14 34 134 1134 2134 7134 68 69 KOAAAA NADAAA HHHHxx +9002 2042 0 2 2 2 2 2 1002 4002 9002 4 5 GIAAAA OADAAA OOOOxx +1209 2043 1 1 9 9 9 209 1209 1209 1209 18 19 NUAAAA PADAAA VVVVxx +9983 2044 1 3 3 3 83 983 1983 4983 9983 166 167 ZTAAAA 
QADAAA AAAAxx +1761 2045 1 1 1 1 61 761 1761 1761 1761 122 123 TPAAAA RADAAA HHHHxx +7723 2046 1 3 3 3 23 723 1723 2723 7723 46 47 BLAAAA SADAAA OOOOxx +6518 2047 0 2 8 18 18 518 518 1518 6518 36 37 SQAAAA TADAAA VVVVxx +1372 2048 0 0 2 12 72 372 1372 1372 1372 144 145 UAAAAA UADAAA AAAAxx +3587 2049 1 3 7 7 87 587 1587 3587 3587 174 175 ZHAAAA VADAAA HHHHxx +5323 2050 1 3 3 3 23 323 1323 323 5323 46 47 TWAAAA WADAAA OOOOxx +5902 2051 0 2 2 2 2 902 1902 902 5902 4 5 ATAAAA XADAAA VVVVxx +3749 2052 1 1 9 9 49 749 1749 3749 3749 98 99 FOAAAA YADAAA AAAAxx +5965 2053 1 1 5 5 65 965 1965 965 5965 130 131 LVAAAA ZADAAA HHHHxx +663 2054 1 3 3 3 63 663 663 663 663 126 127 NZAAAA ABDAAA OOOOxx +36 2055 0 0 6 16 36 36 36 36 36 72 73 KBAAAA BBDAAA VVVVxx +9782 2056 0 2 2 2 82 782 1782 4782 9782 164 165 GMAAAA CBDAAA AAAAxx +5412 2057 0 0 2 12 12 412 1412 412 5412 24 25 EAAAAA DBDAAA HHHHxx +9961 2058 1 1 1 1 61 961 1961 4961 9961 122 123 DTAAAA EBDAAA OOOOxx +6492 2059 0 0 2 12 92 492 492 1492 6492 184 185 SPAAAA FBDAAA VVVVxx +4234 2060 0 2 4 14 34 234 234 4234 4234 68 69 WGAAAA GBDAAA AAAAxx +4922 2061 0 2 2 2 22 922 922 4922 4922 44 45 IHAAAA HBDAAA HHHHxx +6166 2062 0 2 6 6 66 166 166 1166 6166 132 133 EDAAAA IBDAAA OOOOxx +7019 2063 1 3 9 19 19 19 1019 2019 7019 38 39 ZJAAAA JBDAAA VVVVxx +7805 2064 1 1 5 5 5 805 1805 2805 7805 10 11 FOAAAA KBDAAA AAAAxx +9808 2065 0 0 8 8 8 808 1808 4808 9808 16 17 GNAAAA LBDAAA HHHHxx +2550 2066 0 2 0 10 50 550 550 2550 2550 100 101 CUAAAA MBDAAA OOOOxx +8626 2067 0 2 6 6 26 626 626 3626 8626 52 53 UTAAAA NBDAAA VVVVxx +5649 2068 1 1 9 9 49 649 1649 649 5649 98 99 HJAAAA OBDAAA AAAAxx +3117 2069 1 1 7 17 17 117 1117 3117 3117 34 35 XPAAAA PBDAAA HHHHxx +866 2070 0 2 6 6 66 866 866 866 866 132 133 IHAAAA QBDAAA OOOOxx +2323 2071 1 3 3 3 23 323 323 2323 2323 46 47 JLAAAA RBDAAA VVVVxx +5132 2072 0 0 2 12 32 132 1132 132 5132 64 65 KPAAAA SBDAAA AAAAxx +9222 2073 0 2 2 2 22 222 1222 4222 9222 44 45 SQAAAA TBDAAA HHHHxx +3934 2074 0 2 4 14 34 934 1934 3934 3934 68 69 IVAAAA UBDAAA OOOOxx +4845 2075 1 1 5 5 45 845 845 4845 4845 90 91 JEAAAA VBDAAA VVVVxx +7714 2076 0 2 4 14 14 714 1714 2714 7714 28 29 SKAAAA WBDAAA AAAAxx +9818 2077 0 2 8 18 18 818 1818 4818 9818 36 37 QNAAAA XBDAAA HHHHxx +2219 2078 1 3 9 19 19 219 219 2219 2219 38 39 JHAAAA YBDAAA OOOOxx +6573 2079 1 1 3 13 73 573 573 1573 6573 146 147 VSAAAA ZBDAAA VVVVxx +4555 2080 1 3 5 15 55 555 555 4555 4555 110 111 FTAAAA ACDAAA AAAAxx +7306 2081 0 2 6 6 6 306 1306 2306 7306 12 13 AVAAAA BCDAAA HHHHxx +9313 2082 1 1 3 13 13 313 1313 4313 9313 26 27 FUAAAA CCDAAA OOOOxx +3924 2083 0 0 4 4 24 924 1924 3924 3924 48 49 YUAAAA DCDAAA VVVVxx +5176 2084 0 0 6 16 76 176 1176 176 5176 152 153 CRAAAA ECDAAA AAAAxx +9767 2085 1 3 7 7 67 767 1767 4767 9767 134 135 RLAAAA FCDAAA HHHHxx +905 2086 1 1 5 5 5 905 905 905 905 10 11 VIAAAA GCDAAA OOOOxx +8037 2087 1 1 7 17 37 37 37 3037 8037 74 75 DXAAAA HCDAAA VVVVxx +8133 2088 1 1 3 13 33 133 133 3133 8133 66 67 VAAAAA ICDAAA AAAAxx +2954 2089 0 2 4 14 54 954 954 2954 2954 108 109 QJAAAA JCDAAA HHHHxx +7262 2090 0 2 2 2 62 262 1262 2262 7262 124 125 ITAAAA KCDAAA OOOOxx +8768 2091 0 0 8 8 68 768 768 3768 8768 136 137 GZAAAA LCDAAA VVVVxx +6953 2092 1 1 3 13 53 953 953 1953 6953 106 107 LHAAAA MCDAAA AAAAxx +1984 2093 0 0 4 4 84 984 1984 1984 1984 168 169 IYAAAA NCDAAA HHHHxx +9348 2094 0 0 8 8 48 348 1348 4348 9348 96 97 OVAAAA OCDAAA OOOOxx +7769 2095 1 1 9 9 69 769 1769 2769 7769 138 139 VMAAAA PCDAAA VVVVxx +2994 2096 0 2 4 14 94 994 994 2994 2994 188 189 ELAAAA QCDAAA 
AAAAxx +5938 2097 0 2 8 18 38 938 1938 938 5938 76 77 KUAAAA RCDAAA HHHHxx +556 2098 0 0 6 16 56 556 556 556 556 112 113 KVAAAA SCDAAA OOOOxx +2577 2099 1 1 7 17 77 577 577 2577 2577 154 155 DVAAAA TCDAAA VVVVxx +8733 2100 1 1 3 13 33 733 733 3733 8733 66 67 XXAAAA UCDAAA AAAAxx +3108 2101 0 0 8 8 8 108 1108 3108 3108 16 17 OPAAAA VCDAAA HHHHxx +4166 2102 0 2 6 6 66 166 166 4166 4166 132 133 GEAAAA WCDAAA OOOOxx +3170 2103 0 2 0 10 70 170 1170 3170 3170 140 141 YRAAAA XCDAAA VVVVxx +8118 2104 0 2 8 18 18 118 118 3118 8118 36 37 GAAAAA YCDAAA AAAAxx +8454 2105 0 2 4 14 54 454 454 3454 8454 108 109 ENAAAA ZCDAAA HHHHxx +5338 2106 0 2 8 18 38 338 1338 338 5338 76 77 IXAAAA ADDAAA OOOOxx +402 2107 0 2 2 2 2 402 402 402 402 4 5 MPAAAA BDDAAA VVVVxx +5673 2108 1 1 3 13 73 673 1673 673 5673 146 147 FKAAAA CDDAAA AAAAxx +4324 2109 0 0 4 4 24 324 324 4324 4324 48 49 IKAAAA DDDAAA HHHHxx +1943 2110 1 3 3 3 43 943 1943 1943 1943 86 87 TWAAAA EDDAAA OOOOxx +7703 2111 1 3 3 3 3 703 1703 2703 7703 6 7 HKAAAA FDDAAA VVVVxx +7180 2112 0 0 0 0 80 180 1180 2180 7180 160 161 EQAAAA GDDAAA AAAAxx +5478 2113 0 2 8 18 78 478 1478 478 5478 156 157 SCAAAA HDDAAA HHHHxx +5775 2114 1 3 5 15 75 775 1775 775 5775 150 151 DOAAAA IDDAAA OOOOxx +6952 2115 0 0 2 12 52 952 952 1952 6952 104 105 KHAAAA JDDAAA VVVVxx +9022 2116 0 2 2 2 22 22 1022 4022 9022 44 45 AJAAAA KDDAAA AAAAxx +547 2117 1 3 7 7 47 547 547 547 547 94 95 BVAAAA LDDAAA HHHHxx +5877 2118 1 1 7 17 77 877 1877 877 5877 154 155 BSAAAA MDDAAA OOOOxx +9580 2119 0 0 0 0 80 580 1580 4580 9580 160 161 MEAAAA NDDAAA VVVVxx +6094 2120 0 2 4 14 94 94 94 1094 6094 188 189 KAAAAA ODDAAA AAAAxx +3398 2121 0 2 8 18 98 398 1398 3398 3398 196 197 SAAAAA PDDAAA HHHHxx +4574 2122 0 2 4 14 74 574 574 4574 4574 148 149 YTAAAA QDDAAA OOOOxx +3675 2123 1 3 5 15 75 675 1675 3675 3675 150 151 JLAAAA RDDAAA VVVVxx +6413 2124 1 1 3 13 13 413 413 1413 6413 26 27 RMAAAA SDDAAA AAAAxx +9851 2125 1 3 1 11 51 851 1851 4851 9851 102 103 XOAAAA TDDAAA HHHHxx +126 2126 0 2 6 6 26 126 126 126 126 52 53 WEAAAA UDDAAA OOOOxx +6803 2127 1 3 3 3 3 803 803 1803 6803 6 7 RBAAAA VDDAAA VVVVxx +6949 2128 1 1 9 9 49 949 949 1949 6949 98 99 HHAAAA WDDAAA AAAAxx +115 2129 1 3 5 15 15 115 115 115 115 30 31 LEAAAA XDDAAA HHHHxx +4165 2130 1 1 5 5 65 165 165 4165 4165 130 131 FEAAAA YDDAAA OOOOxx +201 2131 1 1 1 1 1 201 201 201 201 2 3 THAAAA ZDDAAA VVVVxx +9324 2132 0 0 4 4 24 324 1324 4324 9324 48 49 QUAAAA AEDAAA AAAAxx +6562 2133 0 2 2 2 62 562 562 1562 6562 124 125 KSAAAA BEDAAA HHHHxx +1917 2134 1 1 7 17 17 917 1917 1917 1917 34 35 TVAAAA CEDAAA OOOOxx +558 2135 0 2 8 18 58 558 558 558 558 116 117 MVAAAA DEDAAA VVVVxx +8515 2136 1 3 5 15 15 515 515 3515 8515 30 31 NPAAAA EEDAAA AAAAxx +6321 2137 1 1 1 1 21 321 321 1321 6321 42 43 DJAAAA FEDAAA HHHHxx +6892 2138 0 0 2 12 92 892 892 1892 6892 184 185 CFAAAA GEDAAA OOOOxx +1001 2139 1 1 1 1 1 1 1001 1001 1001 2 3 NMAAAA HEDAAA VVVVxx +2858 2140 0 2 8 18 58 858 858 2858 2858 116 117 YFAAAA IEDAAA AAAAxx +2434 2141 0 2 4 14 34 434 434 2434 2434 68 69 QPAAAA JEDAAA HHHHxx +4460 2142 0 0 0 0 60 460 460 4460 4460 120 121 OPAAAA KEDAAA OOOOxx +5447 2143 1 3 7 7 47 447 1447 447 5447 94 95 NBAAAA LEDAAA VVVVxx +3799 2144 1 3 9 19 99 799 1799 3799 3799 198 199 DQAAAA MEDAAA AAAAxx +4310 2145 0 2 0 10 10 310 310 4310 4310 20 21 UJAAAA NEDAAA HHHHxx +405 2146 1 1 5 5 5 405 405 405 405 10 11 PPAAAA OEDAAA OOOOxx +4573 2147 1 1 3 13 73 573 573 4573 4573 146 147 XTAAAA PEDAAA VVVVxx +706 2148 0 2 6 6 6 706 706 706 706 12 13 EBAAAA QEDAAA AAAAxx +7619 2149 1 3 9 19 
19 619 1619 2619 7619 38 39 BHAAAA REDAAA HHHHxx +7959 2150 1 3 9 19 59 959 1959 2959 7959 118 119 DUAAAA SEDAAA OOOOxx +6712 2151 0 0 2 12 12 712 712 1712 6712 24 25 EYAAAA TEDAAA VVVVxx +6959 2152 1 3 9 19 59 959 959 1959 6959 118 119 RHAAAA UEDAAA AAAAxx +9791 2153 1 3 1 11 91 791 1791 4791 9791 182 183 PMAAAA VEDAAA HHHHxx +2112 2154 0 0 2 12 12 112 112 2112 2112 24 25 GDAAAA WEDAAA OOOOxx +9114 2155 0 2 4 14 14 114 1114 4114 9114 28 29 OMAAAA XEDAAA VVVVxx +3506 2156 0 2 6 6 6 506 1506 3506 3506 12 13 WEAAAA YEDAAA AAAAxx +5002 2157 0 2 2 2 2 2 1002 2 5002 4 5 KKAAAA ZEDAAA HHHHxx +3518 2158 0 2 8 18 18 518 1518 3518 3518 36 37 IFAAAA AFDAAA OOOOxx +602 2159 0 2 2 2 2 602 602 602 602 4 5 EXAAAA BFDAAA VVVVxx +9060 2160 0 0 0 0 60 60 1060 4060 9060 120 121 MKAAAA CFDAAA AAAAxx +3292 2161 0 0 2 12 92 292 1292 3292 3292 184 185 QWAAAA DFDAAA HHHHxx +77 2162 1 1 7 17 77 77 77 77 77 154 155 ZCAAAA EFDAAA OOOOxx +1420 2163 0 0 0 0 20 420 1420 1420 1420 40 41 QCAAAA FFDAAA VVVVxx +6001 2164 1 1 1 1 1 1 1 1001 6001 2 3 VWAAAA GFDAAA AAAAxx +7477 2165 1 1 7 17 77 477 1477 2477 7477 154 155 PBAAAA HFDAAA HHHHxx +6655 2166 1 3 5 15 55 655 655 1655 6655 110 111 ZVAAAA IFDAAA OOOOxx +7845 2167 1 1 5 5 45 845 1845 2845 7845 90 91 TPAAAA JFDAAA VVVVxx +8484 2168 0 0 4 4 84 484 484 3484 8484 168 169 IOAAAA KFDAAA AAAAxx +4345 2169 1 1 5 5 45 345 345 4345 4345 90 91 DLAAAA LFDAAA HHHHxx +4250 2170 0 2 0 10 50 250 250 4250 4250 100 101 MHAAAA MFDAAA OOOOxx +2391 2171 1 3 1 11 91 391 391 2391 2391 182 183 ZNAAAA NFDAAA VVVVxx +6884 2172 0 0 4 4 84 884 884 1884 6884 168 169 UEAAAA OFDAAA AAAAxx +7270 2173 0 2 0 10 70 270 1270 2270 7270 140 141 QTAAAA PFDAAA HHHHxx +2499 2174 1 3 9 19 99 499 499 2499 2499 198 199 DSAAAA QFDAAA OOOOxx +7312 2175 0 0 2 12 12 312 1312 2312 7312 24 25 GVAAAA RFDAAA VVVVxx +7113 2176 1 1 3 13 13 113 1113 2113 7113 26 27 PNAAAA SFDAAA AAAAxx +6695 2177 1 3 5 15 95 695 695 1695 6695 190 191 NXAAAA TFDAAA HHHHxx +6521 2178 1 1 1 1 21 521 521 1521 6521 42 43 VQAAAA UFDAAA OOOOxx +272 2179 0 0 2 12 72 272 272 272 272 144 145 MKAAAA VFDAAA VVVVxx +9976 2180 0 0 6 16 76 976 1976 4976 9976 152 153 STAAAA WFDAAA AAAAxx +992 2181 0 0 2 12 92 992 992 992 992 184 185 EMAAAA XFDAAA HHHHxx +6158 2182 0 2 8 18 58 158 158 1158 6158 116 117 WCAAAA YFDAAA OOOOxx +3281 2183 1 1 1 1 81 281 1281 3281 3281 162 163 FWAAAA ZFDAAA VVVVxx +7446 2184 0 2 6 6 46 446 1446 2446 7446 92 93 KAAAAA AGDAAA AAAAxx +4679 2185 1 3 9 19 79 679 679 4679 4679 158 159 ZXAAAA BGDAAA HHHHxx +5203 2186 1 3 3 3 3 203 1203 203 5203 6 7 DSAAAA CGDAAA OOOOxx +9874 2187 0 2 4 14 74 874 1874 4874 9874 148 149 UPAAAA DGDAAA VVVVxx +8371 2188 1 3 1 11 71 371 371 3371 8371 142 143 ZJAAAA EGDAAA AAAAxx +9086 2189 0 2 6 6 86 86 1086 4086 9086 172 173 MLAAAA FGDAAA HHHHxx +430 2190 0 2 0 10 30 430 430 430 430 60 61 OQAAAA GGDAAA OOOOxx +8749 2191 1 1 9 9 49 749 749 3749 8749 98 99 NYAAAA HGDAAA VVVVxx +577 2192 1 1 7 17 77 577 577 577 577 154 155 FWAAAA IGDAAA AAAAxx +4884 2193 0 0 4 4 84 884 884 4884 4884 168 169 WFAAAA JGDAAA HHHHxx +3421 2194 1 1 1 1 21 421 1421 3421 3421 42 43 PBAAAA KGDAAA OOOOxx +2812 2195 0 0 2 12 12 812 812 2812 2812 24 25 EEAAAA LGDAAA VVVVxx +5958 2196 0 2 8 18 58 958 1958 958 5958 116 117 EVAAAA MGDAAA AAAAxx +9901 2197 1 1 1 1 1 901 1901 4901 9901 2 3 VQAAAA NGDAAA HHHHxx +8478 2198 0 2 8 18 78 478 478 3478 8478 156 157 COAAAA OGDAAA OOOOxx +6545 2199 1 1 5 5 45 545 545 1545 6545 90 91 TRAAAA PGDAAA VVVVxx +1479 2200 1 3 9 19 79 479 1479 1479 1479 158 159 XEAAAA QGDAAA AAAAxx +1046 2201 0 2 6 6 46 46 
1046 1046 1046 92 93 GOAAAA RGDAAA HHHHxx +6372 2202 0 0 2 12 72 372 372 1372 6372 144 145 CLAAAA SGDAAA OOOOxx +8206 2203 0 2 6 6 6 206 206 3206 8206 12 13 QDAAAA TGDAAA VVVVxx +9544 2204 0 0 4 4 44 544 1544 4544 9544 88 89 CDAAAA UGDAAA AAAAxx +9287 2205 1 3 7 7 87 287 1287 4287 9287 174 175 FTAAAA VGDAAA HHHHxx +6786 2206 0 2 6 6 86 786 786 1786 6786 172 173 ABAAAA WGDAAA OOOOxx +6511 2207 1 3 1 11 11 511 511 1511 6511 22 23 LQAAAA XGDAAA VVVVxx +603 2208 1 3 3 3 3 603 603 603 603 6 7 FXAAAA YGDAAA AAAAxx +2022 2209 0 2 2 2 22 22 22 2022 2022 44 45 UZAAAA ZGDAAA HHHHxx +2086 2210 0 2 6 6 86 86 86 2086 2086 172 173 GCAAAA AHDAAA OOOOxx +1969 2211 1 1 9 9 69 969 1969 1969 1969 138 139 TXAAAA BHDAAA VVVVxx +4841 2212 1 1 1 1 41 841 841 4841 4841 82 83 FEAAAA CHDAAA AAAAxx +5845 2213 1 1 5 5 45 845 1845 845 5845 90 91 VQAAAA DHDAAA HHHHxx +4635 2214 1 3 5 15 35 635 635 4635 4635 70 71 HWAAAA EHDAAA OOOOxx +4658 2215 0 2 8 18 58 658 658 4658 4658 116 117 EXAAAA FHDAAA VVVVxx +2896 2216 0 0 6 16 96 896 896 2896 2896 192 193 KHAAAA GHDAAA AAAAxx +5179 2217 1 3 9 19 79 179 1179 179 5179 158 159 FRAAAA HHDAAA HHHHxx +8667 2218 1 3 7 7 67 667 667 3667 8667 134 135 JVAAAA IHDAAA OOOOxx +7294 2219 0 2 4 14 94 294 1294 2294 7294 188 189 OUAAAA JHDAAA VVVVxx +3706 2220 0 2 6 6 6 706 1706 3706 3706 12 13 OMAAAA KHDAAA AAAAxx +8389 2221 1 1 9 9 89 389 389 3389 8389 178 179 RKAAAA LHDAAA HHHHxx +2486 2222 0 2 6 6 86 486 486 2486 2486 172 173 QRAAAA MHDAAA OOOOxx +8743 2223 1 3 3 3 43 743 743 3743 8743 86 87 HYAAAA NHDAAA VVVVxx +2777 2224 1 1 7 17 77 777 777 2777 2777 154 155 VCAAAA OHDAAA AAAAxx +2113 2225 1 1 3 13 13 113 113 2113 2113 26 27 HDAAAA PHDAAA HHHHxx +2076 2226 0 0 6 16 76 76 76 2076 2076 152 153 WBAAAA QHDAAA OOOOxx +2300 2227 0 0 0 0 0 300 300 2300 2300 0 1 MKAAAA RHDAAA VVVVxx +6894 2228 0 2 4 14 94 894 894 1894 6894 188 189 EFAAAA SHDAAA AAAAxx +6939 2229 1 3 9 19 39 939 939 1939 6939 78 79 XGAAAA THDAAA HHHHxx +446 2230 0 2 6 6 46 446 446 446 446 92 93 ERAAAA UHDAAA OOOOxx +6218 2231 0 2 8 18 18 218 218 1218 6218 36 37 EFAAAA VHDAAA VVVVxx +1295 2232 1 3 5 15 95 295 1295 1295 1295 190 191 VXAAAA WHDAAA AAAAxx +5135 2233 1 3 5 15 35 135 1135 135 5135 70 71 NPAAAA XHDAAA HHHHxx +8122 2234 0 2 2 2 22 122 122 3122 8122 44 45 KAAAAA YHDAAA OOOOxx +316 2235 0 0 6 16 16 316 316 316 316 32 33 EMAAAA ZHDAAA VVVVxx +514 2236 0 2 4 14 14 514 514 514 514 28 29 UTAAAA AIDAAA AAAAxx +7970 2237 0 2 0 10 70 970 1970 2970 7970 140 141 OUAAAA BIDAAA HHHHxx +9350 2238 0 2 0 10 50 350 1350 4350 9350 100 101 QVAAAA CIDAAA OOOOxx +3700 2239 0 0 0 0 0 700 1700 3700 3700 0 1 IMAAAA DIDAAA VVVVxx +582 2240 0 2 2 2 82 582 582 582 582 164 165 KWAAAA EIDAAA AAAAxx +9722 2241 0 2 2 2 22 722 1722 4722 9722 44 45 YJAAAA FIDAAA HHHHxx +7398 2242 0 2 8 18 98 398 1398 2398 7398 196 197 OYAAAA GIDAAA OOOOxx +2265 2243 1 1 5 5 65 265 265 2265 2265 130 131 DJAAAA HIDAAA VVVVxx +3049 2244 1 1 9 9 49 49 1049 3049 3049 98 99 HNAAAA IIDAAA AAAAxx +9121 2245 1 1 1 1 21 121 1121 4121 9121 42 43 VMAAAA JIDAAA HHHHxx +4275 2246 1 3 5 15 75 275 275 4275 4275 150 151 LIAAAA KIDAAA OOOOxx +6567 2247 1 3 7 7 67 567 567 1567 6567 134 135 PSAAAA LIDAAA VVVVxx +6755 2248 1 3 5 15 55 755 755 1755 6755 110 111 VZAAAA MIDAAA AAAAxx +4535 2249 1 3 5 15 35 535 535 4535 4535 70 71 LSAAAA NIDAAA HHHHxx +7968 2250 0 0 8 8 68 968 1968 2968 7968 136 137 MUAAAA OIDAAA OOOOxx +3412 2251 0 0 2 12 12 412 1412 3412 3412 24 25 GBAAAA PIDAAA VVVVxx +6112 2252 0 0 2 12 12 112 112 1112 6112 24 25 CBAAAA QIDAAA AAAAxx +6805 2253 1 1 5 5 5 805 805 1805 6805 
10 11 TBAAAA RIDAAA HHHHxx +2880 2254 0 0 0 0 80 880 880 2880 2880 160 161 UGAAAA SIDAAA OOOOxx +7710 2255 0 2 0 10 10 710 1710 2710 7710 20 21 OKAAAA TIDAAA VVVVxx +7949 2256 1 1 9 9 49 949 1949 2949 7949 98 99 TTAAAA UIDAAA AAAAxx +7043 2257 1 3 3 3 43 43 1043 2043 7043 86 87 XKAAAA VIDAAA HHHHxx +9012 2258 0 0 2 12 12 12 1012 4012 9012 24 25 QIAAAA WIDAAA OOOOxx +878 2259 0 2 8 18 78 878 878 878 878 156 157 UHAAAA XIDAAA VVVVxx +7930 2260 0 2 0 10 30 930 1930 2930 7930 60 61 ATAAAA YIDAAA AAAAxx +667 2261 1 3 7 7 67 667 667 667 667 134 135 RZAAAA ZIDAAA HHHHxx +1905 2262 1 1 5 5 5 905 1905 1905 1905 10 11 HVAAAA AJDAAA OOOOxx +4958 2263 0 2 8 18 58 958 958 4958 4958 116 117 SIAAAA BJDAAA VVVVxx +2973 2264 1 1 3 13 73 973 973 2973 2973 146 147 JKAAAA CJDAAA AAAAxx +3631 2265 1 3 1 11 31 631 1631 3631 3631 62 63 RJAAAA DJDAAA HHHHxx +5868 2266 0 0 8 8 68 868 1868 868 5868 136 137 SRAAAA EJDAAA OOOOxx +2873 2267 1 1 3 13 73 873 873 2873 2873 146 147 NGAAAA FJDAAA VVVVxx +6941 2268 1 1 1 1 41 941 941 1941 6941 82 83 ZGAAAA GJDAAA AAAAxx +6384 2269 0 0 4 4 84 384 384 1384 6384 168 169 OLAAAA HJDAAA HHHHxx +3806 2270 0 2 6 6 6 806 1806 3806 3806 12 13 KQAAAA IJDAAA OOOOxx +5079 2271 1 3 9 19 79 79 1079 79 5079 158 159 JNAAAA JJDAAA VVVVxx +1970 2272 0 2 0 10 70 970 1970 1970 1970 140 141 UXAAAA KJDAAA AAAAxx +7810 2273 0 2 0 10 10 810 1810 2810 7810 20 21 KOAAAA LJDAAA HHHHxx +4639 2274 1 3 9 19 39 639 639 4639 4639 78 79 LWAAAA MJDAAA OOOOxx +6527 2275 1 3 7 7 27 527 527 1527 6527 54 55 BRAAAA NJDAAA VVVVxx +8079 2276 1 3 9 19 79 79 79 3079 8079 158 159 TYAAAA OJDAAA AAAAxx +2740 2277 0 0 0 0 40 740 740 2740 2740 80 81 KBAAAA PJDAAA HHHHxx +2337 2278 1 1 7 17 37 337 337 2337 2337 74 75 XLAAAA QJDAAA OOOOxx +6670 2279 0 2 0 10 70 670 670 1670 6670 140 141 OWAAAA RJDAAA VVVVxx +2345 2280 1 1 5 5 45 345 345 2345 2345 90 91 FMAAAA SJDAAA AAAAxx +401 2281 1 1 1 1 1 401 401 401 401 2 3 LPAAAA TJDAAA HHHHxx +2704 2282 0 0 4 4 4 704 704 2704 2704 8 9 AAAAAA UJDAAA OOOOxx +5530 2283 0 2 0 10 30 530 1530 530 5530 60 61 SEAAAA VJDAAA VVVVxx +51 2284 1 3 1 11 51 51 51 51 51 102 103 ZBAAAA WJDAAA AAAAxx +4282 2285 0 2 2 2 82 282 282 4282 4282 164 165 SIAAAA XJDAAA HHHHxx +7336 2286 0 0 6 16 36 336 1336 2336 7336 72 73 EWAAAA YJDAAA OOOOxx +8320 2287 0 0 0 0 20 320 320 3320 8320 40 41 AIAAAA ZJDAAA VVVVxx +7772 2288 0 0 2 12 72 772 1772 2772 7772 144 145 YMAAAA AKDAAA AAAAxx +1894 2289 0 2 4 14 94 894 1894 1894 1894 188 189 WUAAAA BKDAAA HHHHxx +2320 2290 0 0 0 0 20 320 320 2320 2320 40 41 GLAAAA CKDAAA OOOOxx +6232 2291 0 0 2 12 32 232 232 1232 6232 64 65 SFAAAA DKDAAA VVVVxx +2833 2292 1 1 3 13 33 833 833 2833 2833 66 67 ZEAAAA EKDAAA AAAAxx +8265 2293 1 1 5 5 65 265 265 3265 8265 130 131 XFAAAA FKDAAA HHHHxx +4589 2294 1 1 9 9 89 589 589 4589 4589 178 179 NUAAAA GKDAAA OOOOxx +8182 2295 0 2 2 2 82 182 182 3182 8182 164 165 SCAAAA HKDAAA VVVVxx +8337 2296 1 1 7 17 37 337 337 3337 8337 74 75 RIAAAA IKDAAA AAAAxx +8210 2297 0 2 0 10 10 210 210 3210 8210 20 21 UDAAAA JKDAAA HHHHxx +1406 2298 0 2 6 6 6 406 1406 1406 1406 12 13 CCAAAA KKDAAA OOOOxx +4463 2299 1 3 3 3 63 463 463 4463 4463 126 127 RPAAAA LKDAAA VVVVxx +4347 2300 1 3 7 7 47 347 347 4347 4347 94 95 FLAAAA MKDAAA AAAAxx +181 2301 1 1 1 1 81 181 181 181 181 162 163 ZGAAAA NKDAAA HHHHxx +9986 2302 0 2 6 6 86 986 1986 4986 9986 172 173 CUAAAA OKDAAA OOOOxx +661 2303 1 1 1 1 61 661 661 661 661 122 123 LZAAAA PKDAAA VVVVxx +4105 2304 1 1 5 5 5 105 105 4105 4105 10 11 XBAAAA QKDAAA AAAAxx +2187 2305 1 3 7 7 87 187 187 2187 2187 174 175 DGAAAA RKDAAA 
HHHHxx +1628 2306 0 0 8 8 28 628 1628 1628 1628 56 57 QKAAAA SKDAAA OOOOxx +3119 2307 1 3 9 19 19 119 1119 3119 3119 38 39 ZPAAAA TKDAAA VVVVxx +6804 2308 0 0 4 4 4 804 804 1804 6804 8 9 SBAAAA UKDAAA AAAAxx +9918 2309 0 2 8 18 18 918 1918 4918 9918 36 37 MRAAAA VKDAAA HHHHxx +8916 2310 0 0 6 16 16 916 916 3916 8916 32 33 YEAAAA WKDAAA OOOOxx +6057 2311 1 1 7 17 57 57 57 1057 6057 114 115 ZYAAAA XKDAAA VVVVxx +3622 2312 0 2 2 2 22 622 1622 3622 3622 44 45 IJAAAA YKDAAA AAAAxx +9168 2313 0 0 8 8 68 168 1168 4168 9168 136 137 QOAAAA ZKDAAA HHHHxx +3720 2314 0 0 0 0 20 720 1720 3720 3720 40 41 CNAAAA ALDAAA OOOOxx +9927 2315 1 3 7 7 27 927 1927 4927 9927 54 55 VRAAAA BLDAAA VVVVxx +5616 2316 0 0 6 16 16 616 1616 616 5616 32 33 AIAAAA CLDAAA AAAAxx +5210 2317 0 2 0 10 10 210 1210 210 5210 20 21 KSAAAA DLDAAA HHHHxx +636 2318 0 0 6 16 36 636 636 636 636 72 73 MYAAAA ELDAAA OOOOxx +9936 2319 0 0 6 16 36 936 1936 4936 9936 72 73 ESAAAA FLDAAA VVVVxx +2316 2320 0 0 6 16 16 316 316 2316 2316 32 33 CLAAAA GLDAAA AAAAxx +4363 2321 1 3 3 3 63 363 363 4363 4363 126 127 VLAAAA HLDAAA HHHHxx +7657 2322 1 1 7 17 57 657 1657 2657 7657 114 115 NIAAAA ILDAAA OOOOxx +697 2323 1 1 7 17 97 697 697 697 697 194 195 VAAAAA JLDAAA VVVVxx +912 2324 0 0 2 12 12 912 912 912 912 24 25 CJAAAA KLDAAA AAAAxx +8806 2325 0 2 6 6 6 806 806 3806 8806 12 13 SAAAAA LLDAAA HHHHxx +9698 2326 0 2 8 18 98 698 1698 4698 9698 196 197 AJAAAA MLDAAA OOOOxx +6191 2327 1 3 1 11 91 191 191 1191 6191 182 183 DEAAAA NLDAAA VVVVxx +1188 2328 0 0 8 8 88 188 1188 1188 1188 176 177 STAAAA OLDAAA AAAAxx +7676 2329 0 0 6 16 76 676 1676 2676 7676 152 153 GJAAAA PLDAAA HHHHxx +7073 2330 1 1 3 13 73 73 1073 2073 7073 146 147 BMAAAA QLDAAA OOOOxx +8019 2331 1 3 9 19 19 19 19 3019 8019 38 39 LWAAAA RLDAAA VVVVxx +4726 2332 0 2 6 6 26 726 726 4726 4726 52 53 UZAAAA SLDAAA AAAAxx +4648 2333 0 0 8 8 48 648 648 4648 4648 96 97 UWAAAA TLDAAA HHHHxx +3227 2334 1 3 7 7 27 227 1227 3227 3227 54 55 DUAAAA ULDAAA OOOOxx +7232 2335 0 0 2 12 32 232 1232 2232 7232 64 65 ESAAAA VLDAAA VVVVxx +9761 2336 1 1 1 1 61 761 1761 4761 9761 122 123 LLAAAA WLDAAA AAAAxx +3105 2337 1 1 5 5 5 105 1105 3105 3105 10 11 LPAAAA XLDAAA HHHHxx +5266 2338 0 2 6 6 66 266 1266 266 5266 132 133 OUAAAA YLDAAA OOOOxx +6788 2339 0 0 8 8 88 788 788 1788 6788 176 177 CBAAAA ZLDAAA VVVVxx +2442 2340 0 2 2 2 42 442 442 2442 2442 84 85 YPAAAA AMDAAA AAAAxx +8198 2341 0 2 8 18 98 198 198 3198 8198 196 197 IDAAAA BMDAAA HHHHxx +5806 2342 0 2 6 6 6 806 1806 806 5806 12 13 IPAAAA CMDAAA OOOOxx +8928 2343 0 0 8 8 28 928 928 3928 8928 56 57 KFAAAA DMDAAA VVVVxx +1657 2344 1 1 7 17 57 657 1657 1657 1657 114 115 TLAAAA EMDAAA AAAAxx +9164 2345 0 0 4 4 64 164 1164 4164 9164 128 129 MOAAAA FMDAAA HHHHxx +1851 2346 1 3 1 11 51 851 1851 1851 1851 102 103 FTAAAA GMDAAA OOOOxx +4744 2347 0 0 4 4 44 744 744 4744 4744 88 89 MAAAAA HMDAAA VVVVxx +8055 2348 1 3 5 15 55 55 55 3055 8055 110 111 VXAAAA IMDAAA AAAAxx +1533 2349 1 1 3 13 33 533 1533 1533 1533 66 67 ZGAAAA JMDAAA HHHHxx +1260 2350 0 0 0 0 60 260 1260 1260 1260 120 121 MWAAAA KMDAAA OOOOxx +1290 2351 0 2 0 10 90 290 1290 1290 1290 180 181 QXAAAA LMDAAA VVVVxx +297 2352 1 1 7 17 97 297 297 297 297 194 195 LLAAAA MMDAAA AAAAxx +4145 2353 1 1 5 5 45 145 145 4145 4145 90 91 LDAAAA NMDAAA HHHHxx +863 2354 1 3 3 3 63 863 863 863 863 126 127 FHAAAA OMDAAA OOOOxx +3423 2355 1 3 3 3 23 423 1423 3423 3423 46 47 RBAAAA PMDAAA VVVVxx +8750 2356 0 2 0 10 50 750 750 3750 8750 100 101 OYAAAA QMDAAA AAAAxx +3546 2357 0 2 6 6 46 546 1546 3546 3546 92 93 KGAAAA RMDAAA 
HHHHxx +3678 2358 0 2 8 18 78 678 1678 3678 3678 156 157 MLAAAA SMDAAA OOOOxx +5313 2359 1 1 3 13 13 313 1313 313 5313 26 27 JWAAAA TMDAAA VVVVxx +6233 2360 1 1 3 13 33 233 233 1233 6233 66 67 TFAAAA UMDAAA AAAAxx +5802 2361 0 2 2 2 2 802 1802 802 5802 4 5 EPAAAA VMDAAA HHHHxx +7059 2362 1 3 9 19 59 59 1059 2059 7059 118 119 NLAAAA WMDAAA OOOOxx +6481 2363 1 1 1 1 81 481 481 1481 6481 162 163 HPAAAA XMDAAA VVVVxx +1596 2364 0 0 6 16 96 596 1596 1596 1596 192 193 KJAAAA YMDAAA AAAAxx +8181 2365 1 1 1 1 81 181 181 3181 8181 162 163 RCAAAA ZMDAAA HHHHxx +5368 2366 0 0 8 8 68 368 1368 368 5368 136 137 MYAAAA ANDAAA OOOOxx +9416 2367 0 0 6 16 16 416 1416 4416 9416 32 33 EYAAAA BNDAAA VVVVxx +9521 2368 1 1 1 1 21 521 1521 4521 9521 42 43 FCAAAA CNDAAA AAAAxx +1042 2369 0 2 2 2 42 42 1042 1042 1042 84 85 COAAAA DNDAAA HHHHxx +4503 2370 1 3 3 3 3 503 503 4503 4503 6 7 FRAAAA ENDAAA OOOOxx +3023 2371 1 3 3 3 23 23 1023 3023 3023 46 47 HMAAAA FNDAAA VVVVxx +1976 2372 0 0 6 16 76 976 1976 1976 1976 152 153 AYAAAA GNDAAA AAAAxx +5610 2373 0 2 0 10 10 610 1610 610 5610 20 21 UHAAAA HNDAAA HHHHxx +7410 2374 0 2 0 10 10 410 1410 2410 7410 20 21 AZAAAA INDAAA OOOOxx +7872 2375 0 0 2 12 72 872 1872 2872 7872 144 145 UQAAAA JNDAAA VVVVxx +8591 2376 1 3 1 11 91 591 591 3591 8591 182 183 LSAAAA KNDAAA AAAAxx +1804 2377 0 0 4 4 4 804 1804 1804 1804 8 9 KRAAAA LNDAAA HHHHxx +5299 2378 1 3 9 19 99 299 1299 299 5299 198 199 VVAAAA MNDAAA OOOOxx +4695 2379 1 3 5 15 95 695 695 4695 4695 190 191 PYAAAA NNDAAA VVVVxx +2672 2380 0 0 2 12 72 672 672 2672 2672 144 145 UYAAAA ONDAAA AAAAxx +585 2381 1 1 5 5 85 585 585 585 585 170 171 NWAAAA PNDAAA HHHHxx +8622 2382 0 2 2 2 22 622 622 3622 8622 44 45 QTAAAA QNDAAA OOOOxx +3780 2383 0 0 0 0 80 780 1780 3780 3780 160 161 KPAAAA RNDAAA VVVVxx +7941 2384 1 1 1 1 41 941 1941 2941 7941 82 83 LTAAAA SNDAAA AAAAxx +3305 2385 1 1 5 5 5 305 1305 3305 3305 10 11 DXAAAA TNDAAA HHHHxx +8653 2386 1 1 3 13 53 653 653 3653 8653 106 107 VUAAAA UNDAAA OOOOxx +5756 2387 0 0 6 16 56 756 1756 756 5756 112 113 KNAAAA VNDAAA VVVVxx +576 2388 0 0 6 16 76 576 576 576 576 152 153 EWAAAA WNDAAA AAAAxx +1915 2389 1 3 5 15 15 915 1915 1915 1915 30 31 RVAAAA XNDAAA HHHHxx +4627 2390 1 3 7 7 27 627 627 4627 4627 54 55 ZVAAAA YNDAAA OOOOxx +920 2391 0 0 0 0 20 920 920 920 920 40 41 KJAAAA ZNDAAA VVVVxx +2537 2392 1 1 7 17 37 537 537 2537 2537 74 75 PTAAAA AODAAA AAAAxx +50 2393 0 2 0 10 50 50 50 50 50 100 101 YBAAAA BODAAA HHHHxx +1313 2394 1 1 3 13 13 313 1313 1313 1313 26 27 NYAAAA CODAAA OOOOxx +8542 2395 0 2 2 2 42 542 542 3542 8542 84 85 OQAAAA DODAAA VVVVxx +6428 2396 0 0 8 8 28 428 428 1428 6428 56 57 GNAAAA EODAAA AAAAxx +4351 2397 1 3 1 11 51 351 351 4351 4351 102 103 JLAAAA FODAAA HHHHxx +2050 2398 0 2 0 10 50 50 50 2050 2050 100 101 WAAAAA GODAAA OOOOxx +5162 2399 0 2 2 2 62 162 1162 162 5162 124 125 OQAAAA HODAAA VVVVxx +8229 2400 1 1 9 9 29 229 229 3229 8229 58 59 NEAAAA IODAAA AAAAxx +7782 2401 0 2 2 2 82 782 1782 2782 7782 164 165 INAAAA JODAAA HHHHxx +1563 2402 1 3 3 3 63 563 1563 1563 1563 126 127 DIAAAA KODAAA OOOOxx +267 2403 1 3 7 7 67 267 267 267 267 134 135 HKAAAA LODAAA VVVVxx +5138 2404 0 2 8 18 38 138 1138 138 5138 76 77 QPAAAA MODAAA AAAAxx +7022 2405 0 2 2 2 22 22 1022 2022 7022 44 45 CKAAAA NODAAA HHHHxx +6705 2406 1 1 5 5 5 705 705 1705 6705 10 11 XXAAAA OODAAA OOOOxx +6190 2407 0 2 0 10 90 190 190 1190 6190 180 181 CEAAAA PODAAA VVVVxx +8226 2408 0 2 6 6 26 226 226 3226 8226 52 53 KEAAAA QODAAA AAAAxx +8882 2409 0 2 2 2 82 882 882 3882 8882 164 165 QDAAAA RODAAA HHHHxx 
+5181 2410 1 1 1 1 81 181 1181 181 5181 162 163 HRAAAA SODAAA OOOOxx +4598 2411 0 2 8 18 98 598 598 4598 4598 196 197 WUAAAA TODAAA VVVVxx +4882 2412 0 2 2 2 82 882 882 4882 4882 164 165 UFAAAA UODAAA AAAAxx +7490 2413 0 2 0 10 90 490 1490 2490 7490 180 181 CCAAAA VODAAA HHHHxx +5224 2414 0 0 4 4 24 224 1224 224 5224 48 49 YSAAAA WODAAA OOOOxx +2174 2415 0 2 4 14 74 174 174 2174 2174 148 149 QFAAAA XODAAA VVVVxx +3059 2416 1 3 9 19 59 59 1059 3059 3059 118 119 RNAAAA YODAAA AAAAxx +8790 2417 0 2 0 10 90 790 790 3790 8790 180 181 CAAAAA ZODAAA HHHHxx +2222 2418 0 2 2 2 22 222 222 2222 2222 44 45 MHAAAA APDAAA OOOOxx +5473 2419 1 1 3 13 73 473 1473 473 5473 146 147 NCAAAA BPDAAA VVVVxx +937 2420 1 1 7 17 37 937 937 937 937 74 75 BKAAAA CPDAAA AAAAxx +2975 2421 1 3 5 15 75 975 975 2975 2975 150 151 LKAAAA DPDAAA HHHHxx +9569 2422 1 1 9 9 69 569 1569 4569 9569 138 139 BEAAAA EPDAAA OOOOxx +3456 2423 0 0 6 16 56 456 1456 3456 3456 112 113 YCAAAA FPDAAA VVVVxx +6657 2424 1 1 7 17 57 657 657 1657 6657 114 115 BWAAAA GPDAAA AAAAxx +3776 2425 0 0 6 16 76 776 1776 3776 3776 152 153 GPAAAA HPDAAA HHHHxx +6072 2426 0 0 2 12 72 72 72 1072 6072 144 145 OZAAAA IPDAAA OOOOxx +8129 2427 1 1 9 9 29 129 129 3129 8129 58 59 RAAAAA JPDAAA VVVVxx +1085 2428 1 1 5 5 85 85 1085 1085 1085 170 171 TPAAAA KPDAAA AAAAxx +2079 2429 1 3 9 19 79 79 79 2079 2079 158 159 ZBAAAA LPDAAA HHHHxx +1200 2430 0 0 0 0 0 200 1200 1200 1200 0 1 EUAAAA MPDAAA OOOOxx +3276 2431 0 0 6 16 76 276 1276 3276 3276 152 153 AWAAAA NPDAAA VVVVxx +2608 2432 0 0 8 8 8 608 608 2608 2608 16 17 IWAAAA OPDAAA AAAAxx +702 2433 0 2 2 2 2 702 702 702 702 4 5 ABAAAA PPDAAA HHHHxx +5750 2434 0 2 0 10 50 750 1750 750 5750 100 101 ENAAAA QPDAAA OOOOxx +2776 2435 0 0 6 16 76 776 776 2776 2776 152 153 UCAAAA RPDAAA VVVVxx +9151 2436 1 3 1 11 51 151 1151 4151 9151 102 103 ZNAAAA SPDAAA AAAAxx +3282 2437 0 2 2 2 82 282 1282 3282 3282 164 165 GWAAAA TPDAAA HHHHxx +408 2438 0 0 8 8 8 408 408 408 408 16 17 SPAAAA UPDAAA OOOOxx +3473 2439 1 1 3 13 73 473 1473 3473 3473 146 147 PDAAAA VPDAAA VVVVxx +7095 2440 1 3 5 15 95 95 1095 2095 7095 190 191 XMAAAA WPDAAA AAAAxx +3288 2441 0 0 8 8 88 288 1288 3288 3288 176 177 MWAAAA XPDAAA HHHHxx +8215 2442 1 3 5 15 15 215 215 3215 8215 30 31 ZDAAAA YPDAAA OOOOxx +6244 2443 0 0 4 4 44 244 244 1244 6244 88 89 EGAAAA ZPDAAA VVVVxx +8440 2444 0 0 0 0 40 440 440 3440 8440 80 81 QMAAAA AQDAAA AAAAxx +3800 2445 0 0 0 0 0 800 1800 3800 3800 0 1 EQAAAA BQDAAA HHHHxx +7279 2446 1 3 9 19 79 279 1279 2279 7279 158 159 ZTAAAA CQDAAA OOOOxx +9206 2447 0 2 6 6 6 206 1206 4206 9206 12 13 CQAAAA DQDAAA VVVVxx +6465 2448 1 1 5 5 65 465 465 1465 6465 130 131 ROAAAA EQDAAA AAAAxx +4127 2449 1 3 7 7 27 127 127 4127 4127 54 55 TCAAAA FQDAAA HHHHxx +7463 2450 1 3 3 3 63 463 1463 2463 7463 126 127 BBAAAA GQDAAA OOOOxx +5117 2451 1 1 7 17 17 117 1117 117 5117 34 35 VOAAAA HQDAAA VVVVxx +4715 2452 1 3 5 15 15 715 715 4715 4715 30 31 JZAAAA IQDAAA AAAAxx +2010 2453 0 2 0 10 10 10 10 2010 2010 20 21 IZAAAA JQDAAA HHHHxx +6486 2454 0 2 6 6 86 486 486 1486 6486 172 173 MPAAAA KQDAAA OOOOxx +6434 2455 0 2 4 14 34 434 434 1434 6434 68 69 MNAAAA LQDAAA VVVVxx +2151 2456 1 3 1 11 51 151 151 2151 2151 102 103 TEAAAA MQDAAA AAAAxx +4821 2457 1 1 1 1 21 821 821 4821 4821 42 43 LDAAAA NQDAAA HHHHxx +6507 2458 1 3 7 7 7 507 507 1507 6507 14 15 HQAAAA OQDAAA OOOOxx +8741 2459 1 1 1 1 41 741 741 3741 8741 82 83 FYAAAA PQDAAA VVVVxx +6846 2460 0 2 6 6 46 846 846 1846 6846 92 93 IDAAAA QQDAAA AAAAxx +4525 2461 1 1 5 5 25 525 525 4525 4525 50 51 BSAAAA RQDAAA HHHHxx 
+8299 2462 1 3 9 19 99 299 299 3299 8299 198 199 FHAAAA SQDAAA OOOOxx +5465 2463 1 1 5 5 65 465 1465 465 5465 130 131 FCAAAA TQDAAA VVVVxx +7206 2464 0 2 6 6 6 206 1206 2206 7206 12 13 ERAAAA UQDAAA AAAAxx +2616 2465 0 0 6 16 16 616 616 2616 2616 32 33 QWAAAA VQDAAA HHHHxx +4440 2466 0 0 0 0 40 440 440 4440 4440 80 81 UOAAAA WQDAAA OOOOxx +6109 2467 1 1 9 9 9 109 109 1109 6109 18 19 ZAAAAA XQDAAA VVVVxx +7905 2468 1 1 5 5 5 905 1905 2905 7905 10 11 BSAAAA YQDAAA AAAAxx +6498 2469 0 2 8 18 98 498 498 1498 6498 196 197 YPAAAA ZQDAAA HHHHxx +2034 2470 0 2 4 14 34 34 34 2034 2034 68 69 GAAAAA ARDAAA OOOOxx +7693 2471 1 1 3 13 93 693 1693 2693 7693 186 187 XJAAAA BRDAAA VVVVxx +7511 2472 1 3 1 11 11 511 1511 2511 7511 22 23 XCAAAA CRDAAA AAAAxx +7531 2473 1 3 1 11 31 531 1531 2531 7531 62 63 RDAAAA DRDAAA HHHHxx +6869 2474 1 1 9 9 69 869 869 1869 6869 138 139 FEAAAA ERDAAA OOOOxx +2763 2475 1 3 3 3 63 763 763 2763 2763 126 127 HCAAAA FRDAAA VVVVxx +575 2476 1 3 5 15 75 575 575 575 575 150 151 DWAAAA GRDAAA AAAAxx +8953 2477 1 1 3 13 53 953 953 3953 8953 106 107 JGAAAA HRDAAA HHHHxx +5833 2478 1 1 3 13 33 833 1833 833 5833 66 67 JQAAAA IRDAAA OOOOxx +9035 2479 1 3 5 15 35 35 1035 4035 9035 70 71 NJAAAA JRDAAA VVVVxx +9123 2480 1 3 3 3 23 123 1123 4123 9123 46 47 XMAAAA KRDAAA AAAAxx +206 2481 0 2 6 6 6 206 206 206 206 12 13 YHAAAA LRDAAA HHHHxx +4155 2482 1 3 5 15 55 155 155 4155 4155 110 111 VDAAAA MRDAAA OOOOxx +532 2483 0 0 2 12 32 532 532 532 532 64 65 MUAAAA NRDAAA VVVVxx +1370 2484 0 2 0 10 70 370 1370 1370 1370 140 141 SAAAAA ORDAAA AAAAxx +7656 2485 0 0 6 16 56 656 1656 2656 7656 112 113 MIAAAA PRDAAA HHHHxx +7735 2486 1 3 5 15 35 735 1735 2735 7735 70 71 NLAAAA QRDAAA OOOOxx +2118 2487 0 2 8 18 18 118 118 2118 2118 36 37 MDAAAA RRDAAA VVVVxx +6914 2488 0 2 4 14 14 914 914 1914 6914 28 29 YFAAAA SRDAAA AAAAxx +6277 2489 1 1 7 17 77 277 277 1277 6277 154 155 LHAAAA TRDAAA HHHHxx +6347 2490 1 3 7 7 47 347 347 1347 6347 94 95 DKAAAA URDAAA OOOOxx +4030 2491 0 2 0 10 30 30 30 4030 4030 60 61 AZAAAA VRDAAA VVVVxx +9673 2492 1 1 3 13 73 673 1673 4673 9673 146 147 BIAAAA WRDAAA AAAAxx +2015 2493 1 3 5 15 15 15 15 2015 2015 30 31 NZAAAA XRDAAA HHHHxx +1317 2494 1 1 7 17 17 317 1317 1317 1317 34 35 RYAAAA YRDAAA OOOOxx +404 2495 0 0 4 4 4 404 404 404 404 8 9 OPAAAA ZRDAAA VVVVxx +1604 2496 0 0 4 4 4 604 1604 1604 1604 8 9 SJAAAA ASDAAA AAAAxx +1912 2497 0 0 2 12 12 912 1912 1912 1912 24 25 OVAAAA BSDAAA HHHHxx +5727 2498 1 3 7 7 27 727 1727 727 5727 54 55 HMAAAA CSDAAA OOOOxx +4538 2499 0 2 8 18 38 538 538 4538 4538 76 77 OSAAAA DSDAAA VVVVxx +6868 2500 0 0 8 8 68 868 868 1868 6868 136 137 EEAAAA ESDAAA AAAAxx +9801 2501 1 1 1 1 1 801 1801 4801 9801 2 3 ZMAAAA FSDAAA HHHHxx +1781 2502 1 1 1 1 81 781 1781 1781 1781 162 163 NQAAAA GSDAAA OOOOxx +7061 2503 1 1 1 1 61 61 1061 2061 7061 122 123 PLAAAA HSDAAA VVVVxx +2412 2504 0 0 2 12 12 412 412 2412 2412 24 25 UOAAAA ISDAAA AAAAxx +9191 2505 1 3 1 11 91 191 1191 4191 9191 182 183 NPAAAA JSDAAA HHHHxx +1958 2506 0 2 8 18 58 958 1958 1958 1958 116 117 IXAAAA KSDAAA OOOOxx +2203 2507 1 3 3 3 3 203 203 2203 2203 6 7 TGAAAA LSDAAA VVVVxx +9104 2508 0 0 4 4 4 104 1104 4104 9104 8 9 EMAAAA MSDAAA AAAAxx +3837 2509 1 1 7 17 37 837 1837 3837 3837 74 75 PRAAAA NSDAAA HHHHxx +7055 2510 1 3 5 15 55 55 1055 2055 7055 110 111 JLAAAA OSDAAA OOOOxx +4612 2511 0 0 2 12 12 612 612 4612 4612 24 25 KVAAAA PSDAAA VVVVxx +6420 2512 0 0 0 0 20 420 420 1420 6420 40 41 YMAAAA QSDAAA AAAAxx +613 2513 1 1 3 13 13 613 613 613 613 26 27 PXAAAA RSDAAA HHHHxx +1691 2514 1 3 1 11 
91 691 1691 1691 1691 182 183 BNAAAA SSDAAA OOOOxx +33 2515 1 1 3 13 33 33 33 33 33 66 67 HBAAAA TSDAAA VVVVxx +875 2516 1 3 5 15 75 875 875 875 875 150 151 RHAAAA USDAAA AAAAxx +9030 2517 0 2 0 10 30 30 1030 4030 9030 60 61 IJAAAA VSDAAA HHHHxx +4285 2518 1 1 5 5 85 285 285 4285 4285 170 171 VIAAAA WSDAAA OOOOxx +6236 2519 0 0 6 16 36 236 236 1236 6236 72 73 WFAAAA XSDAAA VVVVxx +4702 2520 0 2 2 2 2 702 702 4702 4702 4 5 WYAAAA YSDAAA AAAAxx +3441 2521 1 1 1 1 41 441 1441 3441 3441 82 83 JCAAAA ZSDAAA HHHHxx +2150 2522 0 2 0 10 50 150 150 2150 2150 100 101 SEAAAA ATDAAA OOOOxx +1852 2523 0 0 2 12 52 852 1852 1852 1852 104 105 GTAAAA BTDAAA VVVVxx +7713 2524 1 1 3 13 13 713 1713 2713 7713 26 27 RKAAAA CTDAAA AAAAxx +6849 2525 1 1 9 9 49 849 849 1849 6849 98 99 LDAAAA DTDAAA HHHHxx +3425 2526 1 1 5 5 25 425 1425 3425 3425 50 51 TBAAAA ETDAAA OOOOxx +4681 2527 1 1 1 1 81 681 681 4681 4681 162 163 BYAAAA FTDAAA VVVVxx +1134 2528 0 2 4 14 34 134 1134 1134 1134 68 69 QRAAAA GTDAAA AAAAxx +7462 2529 0 2 2 2 62 462 1462 2462 7462 124 125 ABAAAA HTDAAA HHHHxx +2148 2530 0 0 8 8 48 148 148 2148 2148 96 97 QEAAAA ITDAAA OOOOxx +5921 2531 1 1 1 1 21 921 1921 921 5921 42 43 TTAAAA JTDAAA VVVVxx +118 2532 0 2 8 18 18 118 118 118 118 36 37 OEAAAA KTDAAA AAAAxx +3065 2533 1 1 5 5 65 65 1065 3065 3065 130 131 XNAAAA LTDAAA HHHHxx +6590 2534 0 2 0 10 90 590 590 1590 6590 180 181 MTAAAA MTDAAA OOOOxx +4993 2535 1 1 3 13 93 993 993 4993 4993 186 187 BKAAAA NTDAAA VVVVxx +6818 2536 0 2 8 18 18 818 818 1818 6818 36 37 GCAAAA OTDAAA AAAAxx +1449 2537 1 1 9 9 49 449 1449 1449 1449 98 99 TDAAAA PTDAAA HHHHxx +2039 2538 1 3 9 19 39 39 39 2039 2039 78 79 LAAAAA QTDAAA OOOOxx +2524 2539 0 0 4 4 24 524 524 2524 2524 48 49 CTAAAA RTDAAA VVVVxx +1481 2540 1 1 1 1 81 481 1481 1481 1481 162 163 ZEAAAA STDAAA AAAAxx +6984 2541 0 0 4 4 84 984 984 1984 6984 168 169 QIAAAA TTDAAA HHHHxx +3960 2542 0 0 0 0 60 960 1960 3960 3960 120 121 IWAAAA UTDAAA OOOOxx +1983 2543 1 3 3 3 83 983 1983 1983 1983 166 167 HYAAAA VTDAAA VVVVxx +6379 2544 1 3 9 19 79 379 379 1379 6379 158 159 JLAAAA WTDAAA AAAAxx +8975 2545 1 3 5 15 75 975 975 3975 8975 150 151 FHAAAA XTDAAA HHHHxx +1102 2546 0 2 2 2 2 102 1102 1102 1102 4 5 KQAAAA YTDAAA OOOOxx +2517 2547 1 1 7 17 17 517 517 2517 2517 34 35 VSAAAA ZTDAAA VVVVxx +712 2548 0 0 2 12 12 712 712 712 712 24 25 KBAAAA AUDAAA AAAAxx +5419 2549 1 3 9 19 19 419 1419 419 5419 38 39 LAAAAA BUDAAA HHHHxx +723 2550 1 3 3 3 23 723 723 723 723 46 47 VBAAAA CUDAAA OOOOxx +8057 2551 1 1 7 17 57 57 57 3057 8057 114 115 XXAAAA DUDAAA VVVVxx +7471 2552 1 3 1 11 71 471 1471 2471 7471 142 143 JBAAAA EUDAAA AAAAxx +8855 2553 1 3 5 15 55 855 855 3855 8855 110 111 PCAAAA FUDAAA HHHHxx +5074 2554 0 2 4 14 74 74 1074 74 5074 148 149 ENAAAA GUDAAA OOOOxx +7139 2555 1 3 9 19 39 139 1139 2139 7139 78 79 POAAAA HUDAAA VVVVxx +3833 2556 1 1 3 13 33 833 1833 3833 3833 66 67 LRAAAA IUDAAA AAAAxx +5186 2557 0 2 6 6 86 186 1186 186 5186 172 173 MRAAAA JUDAAA HHHHxx +9436 2558 0 0 6 16 36 436 1436 4436 9436 72 73 YYAAAA KUDAAA OOOOxx +8859 2559 1 3 9 19 59 859 859 3859 8859 118 119 TCAAAA LUDAAA VVVVxx +6943 2560 1 3 3 3 43 943 943 1943 6943 86 87 BHAAAA MUDAAA AAAAxx +2315 2561 1 3 5 15 15 315 315 2315 2315 30 31 BLAAAA NUDAAA HHHHxx +1394 2562 0 2 4 14 94 394 1394 1394 1394 188 189 QBAAAA OUDAAA OOOOxx +8863 2563 1 3 3 3 63 863 863 3863 8863 126 127 XCAAAA PUDAAA VVVVxx +8812 2564 0 0 2 12 12 812 812 3812 8812 24 25 YAAAAA QUDAAA AAAAxx +7498 2565 0 2 8 18 98 498 1498 2498 7498 196 197 KCAAAA RUDAAA HHHHxx +8962 2566 0 2 2 2 62 
962 962 3962 8962 124 125 SGAAAA SUDAAA OOOOxx +2533 2567 1 1 3 13 33 533 533 2533 2533 66 67 LTAAAA TUDAAA VVVVxx +8188 2568 0 0 8 8 88 188 188 3188 8188 176 177 YCAAAA UUDAAA AAAAxx +6137 2569 1 1 7 17 37 137 137 1137 6137 74 75 BCAAAA VUDAAA HHHHxx +974 2570 0 2 4 14 74 974 974 974 974 148 149 MLAAAA WUDAAA OOOOxx +2751 2571 1 3 1 11 51 751 751 2751 2751 102 103 VBAAAA XUDAAA VVVVxx +4975 2572 1 3 5 15 75 975 975 4975 4975 150 151 JJAAAA YUDAAA AAAAxx +3411 2573 1 3 1 11 11 411 1411 3411 3411 22 23 FBAAAA ZUDAAA HHHHxx +3143 2574 1 3 3 3 43 143 1143 3143 3143 86 87 XQAAAA AVDAAA OOOOxx +8011 2575 1 3 1 11 11 11 11 3011 8011 22 23 DWAAAA BVDAAA VVVVxx +988 2576 0 0 8 8 88 988 988 988 988 176 177 AMAAAA CVDAAA AAAAxx +4289 2577 1 1 9 9 89 289 289 4289 4289 178 179 ZIAAAA DVDAAA HHHHxx +8105 2578 1 1 5 5 5 105 105 3105 8105 10 11 TZAAAA EVDAAA OOOOxx +9885 2579 1 1 5 5 85 885 1885 4885 9885 170 171 FQAAAA FVDAAA VVVVxx +1002 2580 0 2 2 2 2 2 1002 1002 1002 4 5 OMAAAA GVDAAA AAAAxx +5827 2581 1 3 7 7 27 827 1827 827 5827 54 55 DQAAAA HVDAAA HHHHxx +1228 2582 0 0 8 8 28 228 1228 1228 1228 56 57 GVAAAA IVDAAA OOOOxx +6352 2583 0 0 2 12 52 352 352 1352 6352 104 105 IKAAAA JVDAAA VVVVxx +8868 2584 0 0 8 8 68 868 868 3868 8868 136 137 CDAAAA KVDAAA AAAAxx +3643 2585 1 3 3 3 43 643 1643 3643 3643 86 87 DKAAAA LVDAAA HHHHxx +1468 2586 0 0 8 8 68 468 1468 1468 1468 136 137 MEAAAA MVDAAA OOOOxx +8415 2587 1 3 5 15 15 415 415 3415 8415 30 31 RLAAAA NVDAAA VVVVxx +9631 2588 1 3 1 11 31 631 1631 4631 9631 62 63 LGAAAA OVDAAA AAAAxx +7408 2589 0 0 8 8 8 408 1408 2408 7408 16 17 YYAAAA PVDAAA HHHHxx +1934 2590 0 2 4 14 34 934 1934 1934 1934 68 69 KWAAAA QVDAAA OOOOxx +996 2591 0 0 6 16 96 996 996 996 996 192 193 IMAAAA RVDAAA VVVVxx +8027 2592 1 3 7 7 27 27 27 3027 8027 54 55 TWAAAA SVDAAA AAAAxx +8464 2593 0 0 4 4 64 464 464 3464 8464 128 129 ONAAAA TVDAAA HHHHxx +5007 2594 1 3 7 7 7 7 1007 7 5007 14 15 PKAAAA UVDAAA OOOOxx +8356 2595 0 0 6 16 56 356 356 3356 8356 112 113 KJAAAA VVDAAA VVVVxx +4579 2596 1 3 9 19 79 579 579 4579 4579 158 159 DUAAAA WVDAAA AAAAxx +8513 2597 1 1 3 13 13 513 513 3513 8513 26 27 LPAAAA XVDAAA HHHHxx +383 2598 1 3 3 3 83 383 383 383 383 166 167 TOAAAA YVDAAA OOOOxx +9304 2599 0 0 4 4 4 304 1304 4304 9304 8 9 WTAAAA ZVDAAA VVVVxx +7224 2600 0 0 4 4 24 224 1224 2224 7224 48 49 WRAAAA AWDAAA AAAAxx +6023 2601 1 3 3 3 23 23 23 1023 6023 46 47 RXAAAA BWDAAA HHHHxx +2746 2602 0 2 6 6 46 746 746 2746 2746 92 93 QBAAAA CWDAAA OOOOxx +137 2603 1 1 7 17 37 137 137 137 137 74 75 HFAAAA DWDAAA VVVVxx +9441 2604 1 1 1 1 41 441 1441 4441 9441 82 83 DZAAAA EWDAAA AAAAxx +3690 2605 0 2 0 10 90 690 1690 3690 3690 180 181 YLAAAA FWDAAA HHHHxx +913 2606 1 1 3 13 13 913 913 913 913 26 27 DJAAAA GWDAAA OOOOxx +1768 2607 0 0 8 8 68 768 1768 1768 1768 136 137 AQAAAA HWDAAA VVVVxx +8492 2608 0 0 2 12 92 492 492 3492 8492 184 185 QOAAAA IWDAAA AAAAxx +8083 2609 1 3 3 3 83 83 83 3083 8083 166 167 XYAAAA JWDAAA HHHHxx +4609 2610 1 1 9 9 9 609 609 4609 4609 18 19 HVAAAA KWDAAA OOOOxx +7520 2611 0 0 0 0 20 520 1520 2520 7520 40 41 GDAAAA LWDAAA VVVVxx +4231 2612 1 3 1 11 31 231 231 4231 4231 62 63 TGAAAA MWDAAA AAAAxx +6022 2613 0 2 2 2 22 22 22 1022 6022 44 45 QXAAAA NWDAAA HHHHxx +9784 2614 0 0 4 4 84 784 1784 4784 9784 168 169 IMAAAA OWDAAA OOOOxx +1343 2615 1 3 3 3 43 343 1343 1343 1343 86 87 RZAAAA PWDAAA VVVVxx +7549 2616 1 1 9 9 49 549 1549 2549 7549 98 99 JEAAAA QWDAAA AAAAxx +269 2617 1 1 9 9 69 269 269 269 269 138 139 JKAAAA RWDAAA HHHHxx +1069 2618 1 1 9 9 69 69 1069 1069 1069 138 139 DPAAAA 
SWDAAA OOOOxx +4610 2619 0 2 0 10 10 610 610 4610 4610 20 21 IVAAAA TWDAAA VVVVxx +482 2620 0 2 2 2 82 482 482 482 482 164 165 OSAAAA UWDAAA AAAAxx +3025 2621 1 1 5 5 25 25 1025 3025 3025 50 51 JMAAAA VWDAAA HHHHxx +7914 2622 0 2 4 14 14 914 1914 2914 7914 28 29 KSAAAA WWDAAA OOOOxx +3198 2623 0 2 8 18 98 198 1198 3198 3198 196 197 ATAAAA XWDAAA VVVVxx +1187 2624 1 3 7 7 87 187 1187 1187 1187 174 175 RTAAAA YWDAAA AAAAxx +4707 2625 1 3 7 7 7 707 707 4707 4707 14 15 BZAAAA ZWDAAA HHHHxx +8279 2626 1 3 9 19 79 279 279 3279 8279 158 159 LGAAAA AXDAAA OOOOxx +6127 2627 1 3 7 7 27 127 127 1127 6127 54 55 RBAAAA BXDAAA VVVVxx +1305 2628 1 1 5 5 5 305 1305 1305 1305 10 11 FYAAAA CXDAAA AAAAxx +4804 2629 0 0 4 4 4 804 804 4804 4804 8 9 UCAAAA DXDAAA HHHHxx +6069 2630 1 1 9 9 69 69 69 1069 6069 138 139 LZAAAA EXDAAA OOOOxx +9229 2631 1 1 9 9 29 229 1229 4229 9229 58 59 ZQAAAA FXDAAA VVVVxx +4703 2632 1 3 3 3 3 703 703 4703 4703 6 7 XYAAAA GXDAAA AAAAxx +6410 2633 0 2 0 10 10 410 410 1410 6410 20 21 OMAAAA HXDAAA HHHHxx +944 2634 0 0 4 4 44 944 944 944 944 88 89 IKAAAA IXDAAA OOOOxx +3744 2635 0 0 4 4 44 744 1744 3744 3744 88 89 AOAAAA JXDAAA VVVVxx +1127 2636 1 3 7 7 27 127 1127 1127 1127 54 55 JRAAAA KXDAAA AAAAxx +6693 2637 1 1 3 13 93 693 693 1693 6693 186 187 LXAAAA LXDAAA HHHHxx +583 2638 1 3 3 3 83 583 583 583 583 166 167 LWAAAA MXDAAA OOOOxx +2684 2639 0 0 4 4 84 684 684 2684 2684 168 169 GZAAAA NXDAAA VVVVxx +6192 2640 0 0 2 12 92 192 192 1192 6192 184 185 EEAAAA OXDAAA AAAAxx +4157 2641 1 1 7 17 57 157 157 4157 4157 114 115 XDAAAA PXDAAA HHHHxx +6470 2642 0 2 0 10 70 470 470 1470 6470 140 141 WOAAAA QXDAAA OOOOxx +8965 2643 1 1 5 5 65 965 965 3965 8965 130 131 VGAAAA RXDAAA VVVVxx +1433 2644 1 1 3 13 33 433 1433 1433 1433 66 67 DDAAAA SXDAAA AAAAxx +4570 2645 0 2 0 10 70 570 570 4570 4570 140 141 UTAAAA TXDAAA HHHHxx +1806 2646 0 2 6 6 6 806 1806 1806 1806 12 13 MRAAAA UXDAAA OOOOxx +1230 2647 0 2 0 10 30 230 1230 1230 1230 60 61 IVAAAA VXDAAA VVVVxx +2283 2648 1 3 3 3 83 283 283 2283 2283 166 167 VJAAAA WXDAAA AAAAxx +6456 2649 0 0 6 16 56 456 456 1456 6456 112 113 IOAAAA XXDAAA HHHHxx +7427 2650 1 3 7 7 27 427 1427 2427 7427 54 55 RZAAAA YXDAAA OOOOxx +8310 2651 0 2 0 10 10 310 310 3310 8310 20 21 QHAAAA ZXDAAA VVVVxx +8103 2652 1 3 3 3 3 103 103 3103 8103 6 7 RZAAAA AYDAAA AAAAxx +3947 2653 1 3 7 7 47 947 1947 3947 3947 94 95 VVAAAA BYDAAA HHHHxx +3414 2654 0 2 4 14 14 414 1414 3414 3414 28 29 IBAAAA CYDAAA OOOOxx +2043 2655 1 3 3 3 43 43 43 2043 2043 86 87 PAAAAA DYDAAA VVVVxx +4393 2656 1 1 3 13 93 393 393 4393 4393 186 187 ZMAAAA EYDAAA AAAAxx +6664 2657 0 0 4 4 64 664 664 1664 6664 128 129 IWAAAA FYDAAA HHHHxx +4545 2658 1 1 5 5 45 545 545 4545 4545 90 91 VSAAAA GYDAAA OOOOxx +7637 2659 1 1 7 17 37 637 1637 2637 7637 74 75 THAAAA HYDAAA VVVVxx +1359 2660 1 3 9 19 59 359 1359 1359 1359 118 119 HAAAAA IYDAAA AAAAxx +5018 2661 0 2 8 18 18 18 1018 18 5018 36 37 ALAAAA JYDAAA HHHHxx +987 2662 1 3 7 7 87 987 987 987 987 174 175 ZLAAAA KYDAAA OOOOxx +1320 2663 0 0 0 0 20 320 1320 1320 1320 40 41 UYAAAA LYDAAA VVVVxx +9311 2664 1 3 1 11 11 311 1311 4311 9311 22 23 DUAAAA MYDAAA AAAAxx +7993 2665 1 1 3 13 93 993 1993 2993 7993 186 187 LVAAAA NYDAAA HHHHxx +7588 2666 0 0 8 8 88 588 1588 2588 7588 176 177 WFAAAA OYDAAA OOOOxx +5983 2667 1 3 3 3 83 983 1983 983 5983 166 167 DWAAAA PYDAAA VVVVxx +4070 2668 0 2 0 10 70 70 70 4070 4070 140 141 OAAAAA QYDAAA AAAAxx +8349 2669 1 1 9 9 49 349 349 3349 8349 98 99 DJAAAA RYDAAA HHHHxx +3810 2670 0 2 0 10 10 810 1810 3810 3810 20 21 OQAAAA SYDAAA OOOOxx 
+6948 2671 0 0 8 8 48 948 948 1948 6948 96 97 GHAAAA TYDAAA VVVVxx +7153 2672 1 1 3 13 53 153 1153 2153 7153 106 107 DPAAAA UYDAAA AAAAxx +5371 2673 1 3 1 11 71 371 1371 371 5371 142 143 PYAAAA VYDAAA HHHHxx +8316 2674 0 0 6 16 16 316 316 3316 8316 32 33 WHAAAA WYDAAA OOOOxx +5903 2675 1 3 3 3 3 903 1903 903 5903 6 7 BTAAAA XYDAAA VVVVxx +6718 2676 0 2 8 18 18 718 718 1718 6718 36 37 KYAAAA YYDAAA AAAAxx +4759 2677 1 3 9 19 59 759 759 4759 4759 118 119 BBAAAA ZYDAAA HHHHxx +2555 2678 1 3 5 15 55 555 555 2555 2555 110 111 HUAAAA AZDAAA OOOOxx +3457 2679 1 1 7 17 57 457 1457 3457 3457 114 115 ZCAAAA BZDAAA VVVVxx +9626 2680 0 2 6 6 26 626 1626 4626 9626 52 53 GGAAAA CZDAAA AAAAxx +2570 2681 0 2 0 10 70 570 570 2570 2570 140 141 WUAAAA DZDAAA HHHHxx +7964 2682 0 0 4 4 64 964 1964 2964 7964 128 129 IUAAAA EZDAAA OOOOxx +1543 2683 1 3 3 3 43 543 1543 1543 1543 86 87 JHAAAA FZDAAA VVVVxx +929 2684 1 1 9 9 29 929 929 929 929 58 59 TJAAAA GZDAAA AAAAxx +9244 2685 0 0 4 4 44 244 1244 4244 9244 88 89 ORAAAA HZDAAA HHHHxx +9210 2686 0 2 0 10 10 210 1210 4210 9210 20 21 GQAAAA IZDAAA OOOOxx +8334 2687 0 2 4 14 34 334 334 3334 8334 68 69 OIAAAA JZDAAA VVVVxx +9310 2688 0 2 0 10 10 310 1310 4310 9310 20 21 CUAAAA KZDAAA AAAAxx +5024 2689 0 0 4 4 24 24 1024 24 5024 48 49 GLAAAA LZDAAA HHHHxx +8794 2690 0 2 4 14 94 794 794 3794 8794 188 189 GAAAAA MZDAAA OOOOxx +4091 2691 1 3 1 11 91 91 91 4091 4091 182 183 JBAAAA NZDAAA VVVVxx +649 2692 1 1 9 9 49 649 649 649 649 98 99 ZYAAAA OZDAAA AAAAxx +8505 2693 1 1 5 5 5 505 505 3505 8505 10 11 DPAAAA PZDAAA HHHHxx +6652 2694 0 0 2 12 52 652 652 1652 6652 104 105 WVAAAA QZDAAA OOOOxx +8945 2695 1 1 5 5 45 945 945 3945 8945 90 91 BGAAAA RZDAAA VVVVxx +2095 2696 1 3 5 15 95 95 95 2095 2095 190 191 PCAAAA SZDAAA AAAAxx +8676 2697 0 0 6 16 76 676 676 3676 8676 152 153 SVAAAA TZDAAA HHHHxx +3994 2698 0 2 4 14 94 994 1994 3994 3994 188 189 QXAAAA UZDAAA OOOOxx +2859 2699 1 3 9 19 59 859 859 2859 2859 118 119 ZFAAAA VZDAAA VVVVxx +5403 2700 1 3 3 3 3 403 1403 403 5403 6 7 VZAAAA WZDAAA AAAAxx +3254 2701 0 2 4 14 54 254 1254 3254 3254 108 109 EVAAAA XZDAAA HHHHxx +7339 2702 1 3 9 19 39 339 1339 2339 7339 78 79 HWAAAA YZDAAA OOOOxx +7220 2703 0 0 0 0 20 220 1220 2220 7220 40 41 SRAAAA ZZDAAA VVVVxx +4154 2704 0 2 4 14 54 154 154 4154 4154 108 109 UDAAAA AAEAAA AAAAxx +7570 2705 0 2 0 10 70 570 1570 2570 7570 140 141 EFAAAA BAEAAA HHHHxx +2576 2706 0 0 6 16 76 576 576 2576 2576 152 153 CVAAAA CAEAAA OOOOxx +5764 2707 0 0 4 4 64 764 1764 764 5764 128 129 SNAAAA DAEAAA VVVVxx +4314 2708 0 2 4 14 14 314 314 4314 4314 28 29 YJAAAA EAEAAA AAAAxx +2274 2709 0 2 4 14 74 274 274 2274 2274 148 149 MJAAAA FAEAAA HHHHxx +9756 2710 0 0 6 16 56 756 1756 4756 9756 112 113 GLAAAA GAEAAA OOOOxx +8274 2711 0 2 4 14 74 274 274 3274 8274 148 149 GGAAAA HAEAAA VVVVxx +1289 2712 1 1 9 9 89 289 1289 1289 1289 178 179 PXAAAA IAEAAA AAAAxx +7335 2713 1 3 5 15 35 335 1335 2335 7335 70 71 DWAAAA JAEAAA HHHHxx +5351 2714 1 3 1 11 51 351 1351 351 5351 102 103 VXAAAA KAEAAA OOOOxx +8978 2715 0 2 8 18 78 978 978 3978 8978 156 157 IHAAAA LAEAAA VVVVxx +2 2716 0 2 2 2 2 2 2 2 2 4 5 CAAAAA MAEAAA AAAAxx +8906 2717 0 2 6 6 6 906 906 3906 8906 12 13 OEAAAA NAEAAA HHHHxx +6388 2718 0 0 8 8 88 388 388 1388 6388 176 177 SLAAAA OAEAAA OOOOxx +5675 2719 1 3 5 15 75 675 1675 675 5675 150 151 HKAAAA PAEAAA VVVVxx +255 2720 1 3 5 15 55 255 255 255 255 110 111 VJAAAA QAEAAA AAAAxx +9538 2721 0 2 8 18 38 538 1538 4538 9538 76 77 WCAAAA RAEAAA HHHHxx +1480 2722 0 0 0 0 80 480 1480 1480 1480 160 161 YEAAAA SAEAAA OOOOxx 
+4015 2723 1 3 5 15 15 15 15 4015 4015 30 31 LYAAAA TAEAAA VVVVxx +5166 2724 0 2 6 6 66 166 1166 166 5166 132 133 SQAAAA UAEAAA AAAAxx +91 2725 1 3 1 11 91 91 91 91 91 182 183 NDAAAA VAEAAA HHHHxx +2958 2726 0 2 8 18 58 958 958 2958 2958 116 117 UJAAAA WAEAAA OOOOxx +9131 2727 1 3 1 11 31 131 1131 4131 9131 62 63 FNAAAA XAEAAA VVVVxx +3944 2728 0 0 4 4 44 944 1944 3944 3944 88 89 SVAAAA YAEAAA AAAAxx +4514 2729 0 2 4 14 14 514 514 4514 4514 28 29 QRAAAA ZAEAAA HHHHxx +5661 2730 1 1 1 1 61 661 1661 661 5661 122 123 TJAAAA ABEAAA OOOOxx +8724 2731 0 0 4 4 24 724 724 3724 8724 48 49 OXAAAA BBEAAA VVVVxx +6408 2732 0 0 8 8 8 408 408 1408 6408 16 17 MMAAAA CBEAAA AAAAxx +5013 2733 1 1 3 13 13 13 1013 13 5013 26 27 VKAAAA DBEAAA HHHHxx +6156 2734 0 0 6 16 56 156 156 1156 6156 112 113 UCAAAA EBEAAA OOOOxx +7350 2735 0 2 0 10 50 350 1350 2350 7350 100 101 SWAAAA FBEAAA VVVVxx +9858 2736 0 2 8 18 58 858 1858 4858 9858 116 117 EPAAAA GBEAAA AAAAxx +895 2737 1 3 5 15 95 895 895 895 895 190 191 LIAAAA HBEAAA HHHHxx +8368 2738 0 0 8 8 68 368 368 3368 8368 136 137 WJAAAA IBEAAA OOOOxx +179 2739 1 3 9 19 79 179 179 179 179 158 159 XGAAAA JBEAAA VVVVxx +4048 2740 0 0 8 8 48 48 48 4048 4048 96 97 SZAAAA KBEAAA AAAAxx +3073 2741 1 1 3 13 73 73 1073 3073 3073 146 147 FOAAAA LBEAAA HHHHxx +321 2742 1 1 1 1 21 321 321 321 321 42 43 JMAAAA MBEAAA OOOOxx +5352 2743 0 0 2 12 52 352 1352 352 5352 104 105 WXAAAA NBEAAA VVVVxx +1940 2744 0 0 0 0 40 940 1940 1940 1940 80 81 QWAAAA OBEAAA AAAAxx +8803 2745 1 3 3 3 3 803 803 3803 8803 6 7 PAAAAA PBEAAA HHHHxx +791 2746 1 3 1 11 91 791 791 791 791 182 183 LEAAAA QBEAAA OOOOxx +9809 2747 1 1 9 9 9 809 1809 4809 9809 18 19 HNAAAA RBEAAA VVVVxx +5519 2748 1 3 9 19 19 519 1519 519 5519 38 39 HEAAAA SBEAAA AAAAxx +7420 2749 0 0 0 0 20 420 1420 2420 7420 40 41 KZAAAA TBEAAA HHHHxx +7541 2750 1 1 1 1 41 541 1541 2541 7541 82 83 BEAAAA UBEAAA OOOOxx +6538 2751 0 2 8 18 38 538 538 1538 6538 76 77 MRAAAA VBEAAA VVVVxx +710 2752 0 2 0 10 10 710 710 710 710 20 21 IBAAAA WBEAAA AAAAxx +9488 2753 0 0 8 8 88 488 1488 4488 9488 176 177 YAAAAA XBEAAA HHHHxx +3135 2754 1 3 5 15 35 135 1135 3135 3135 70 71 PQAAAA YBEAAA OOOOxx +4273 2755 1 1 3 13 73 273 273 4273 4273 146 147 JIAAAA ZBEAAA VVVVxx +629 2756 1 1 9 9 29 629 629 629 629 58 59 FYAAAA ACEAAA AAAAxx +9167 2757 1 3 7 7 67 167 1167 4167 9167 134 135 POAAAA BCEAAA HHHHxx +751 2758 1 3 1 11 51 751 751 751 751 102 103 XCAAAA CCEAAA OOOOxx +1126 2759 0 2 6 6 26 126 1126 1126 1126 52 53 IRAAAA DCEAAA VVVVxx +3724 2760 0 0 4 4 24 724 1724 3724 3724 48 49 GNAAAA ECEAAA AAAAxx +1789 2761 1 1 9 9 89 789 1789 1789 1789 178 179 VQAAAA FCEAAA HHHHxx +792 2762 0 0 2 12 92 792 792 792 792 184 185 MEAAAA GCEAAA OOOOxx +2771 2763 1 3 1 11 71 771 771 2771 2771 142 143 PCAAAA HCEAAA VVVVxx +4313 2764 1 1 3 13 13 313 313 4313 4313 26 27 XJAAAA ICEAAA AAAAxx +9312 2765 0 0 2 12 12 312 1312 4312 9312 24 25 EUAAAA JCEAAA HHHHxx +955 2766 1 3 5 15 55 955 955 955 955 110 111 TKAAAA KCEAAA OOOOxx +6382 2767 0 2 2 2 82 382 382 1382 6382 164 165 MLAAAA LCEAAA VVVVxx +7875 2768 1 3 5 15 75 875 1875 2875 7875 150 151 XQAAAA MCEAAA AAAAxx +7491 2769 1 3 1 11 91 491 1491 2491 7491 182 183 DCAAAA NCEAAA HHHHxx +8193 2770 1 1 3 13 93 193 193 3193 8193 186 187 DDAAAA OCEAAA OOOOxx +968 2771 0 0 8 8 68 968 968 968 968 136 137 GLAAAA PCEAAA VVVVxx +4951 2772 1 3 1 11 51 951 951 4951 4951 102 103 LIAAAA QCEAAA AAAAxx +2204 2773 0 0 4 4 4 204 204 2204 2204 8 9 UGAAAA RCEAAA HHHHxx +2066 2774 0 2 6 6 66 66 66 2066 2066 132 133 MBAAAA SCEAAA OOOOxx +2631 2775 1 3 1 11 31 
631 631 2631 2631 62 63 FXAAAA TCEAAA VVVVxx +8947 2776 1 3 7 7 47 947 947 3947 8947 94 95 DGAAAA UCEAAA AAAAxx +8033 2777 1 1 3 13 33 33 33 3033 8033 66 67 ZWAAAA VCEAAA HHHHxx +6264 2778 0 0 4 4 64 264 264 1264 6264 128 129 YGAAAA WCEAAA OOOOxx +7778 2779 0 2 8 18 78 778 1778 2778 7778 156 157 ENAAAA XCEAAA VVVVxx +9701 2780 1 1 1 1 1 701 1701 4701 9701 2 3 DJAAAA YCEAAA AAAAxx +5091 2781 1 3 1 11 91 91 1091 91 5091 182 183 VNAAAA ZCEAAA HHHHxx +7577 2782 1 1 7 17 77 577 1577 2577 7577 154 155 LFAAAA ADEAAA OOOOxx +3345 2783 1 1 5 5 45 345 1345 3345 3345 90 91 RYAAAA BDEAAA VVVVxx +7329 2784 1 1 9 9 29 329 1329 2329 7329 58 59 XVAAAA CDEAAA AAAAxx +7551 2785 1 3 1 11 51 551 1551 2551 7551 102 103 LEAAAA DDEAAA HHHHxx +6207 2786 1 3 7 7 7 207 207 1207 6207 14 15 TEAAAA EDEAAA OOOOxx +8664 2787 0 0 4 4 64 664 664 3664 8664 128 129 GVAAAA FDEAAA VVVVxx +8394 2788 0 2 4 14 94 394 394 3394 8394 188 189 WKAAAA GDEAAA AAAAxx +7324 2789 0 0 4 4 24 324 1324 2324 7324 48 49 SVAAAA HDEAAA HHHHxx +2713 2790 1 1 3 13 13 713 713 2713 2713 26 27 JAAAAA IDEAAA OOOOxx +2230 2791 0 2 0 10 30 230 230 2230 2230 60 61 UHAAAA JDEAAA VVVVxx +9211 2792 1 3 1 11 11 211 1211 4211 9211 22 23 HQAAAA KDEAAA AAAAxx +1296 2793 0 0 6 16 96 296 1296 1296 1296 192 193 WXAAAA LDEAAA HHHHxx +8104 2794 0 0 4 4 4 104 104 3104 8104 8 9 SZAAAA MDEAAA OOOOxx +6916 2795 0 0 6 16 16 916 916 1916 6916 32 33 AGAAAA NDEAAA VVVVxx +2208 2796 0 0 8 8 8 208 208 2208 2208 16 17 YGAAAA ODEAAA AAAAxx +3935 2797 1 3 5 15 35 935 1935 3935 3935 70 71 JVAAAA PDEAAA HHHHxx +7814 2798 0 2 4 14 14 814 1814 2814 7814 28 29 OOAAAA QDEAAA OOOOxx +6508 2799 0 0 8 8 8 508 508 1508 6508 16 17 IQAAAA RDEAAA VVVVxx +1703 2800 1 3 3 3 3 703 1703 1703 1703 6 7 NNAAAA SDEAAA AAAAxx +5640 2801 0 0 0 0 40 640 1640 640 5640 80 81 YIAAAA TDEAAA HHHHxx +6417 2802 1 1 7 17 17 417 417 1417 6417 34 35 VMAAAA UDEAAA OOOOxx +1713 2803 1 1 3 13 13 713 1713 1713 1713 26 27 XNAAAA VDEAAA VVVVxx +5309 2804 1 1 9 9 9 309 1309 309 5309 18 19 FWAAAA WDEAAA AAAAxx +4364 2805 0 0 4 4 64 364 364 4364 4364 128 129 WLAAAA XDEAAA HHHHxx +619 2806 1 3 9 19 19 619 619 619 619 38 39 VXAAAA YDEAAA OOOOxx +9498 2807 0 2 8 18 98 498 1498 4498 9498 196 197 IBAAAA ZDEAAA VVVVxx +2804 2808 0 0 4 4 4 804 804 2804 2804 8 9 WDAAAA AEEAAA AAAAxx +2220 2809 0 0 0 0 20 220 220 2220 2220 40 41 KHAAAA BEEAAA HHHHxx +9542 2810 0 2 2 2 42 542 1542 4542 9542 84 85 ADAAAA CEEAAA OOOOxx +3349 2811 1 1 9 9 49 349 1349 3349 3349 98 99 VYAAAA DEEAAA VVVVxx +9198 2812 0 2 8 18 98 198 1198 4198 9198 196 197 UPAAAA EEEAAA AAAAxx +2727 2813 1 3 7 7 27 727 727 2727 2727 54 55 XAAAAA FEEAAA HHHHxx +3768 2814 0 0 8 8 68 768 1768 3768 3768 136 137 YOAAAA GEEAAA OOOOxx +2334 2815 0 2 4 14 34 334 334 2334 2334 68 69 ULAAAA HEEAAA VVVVxx +7770 2816 0 2 0 10 70 770 1770 2770 7770 140 141 WMAAAA IEEAAA AAAAxx +5963 2817 1 3 3 3 63 963 1963 963 5963 126 127 JVAAAA JEEAAA HHHHxx +4732 2818 0 0 2 12 32 732 732 4732 4732 64 65 AAAAAA KEEAAA OOOOxx +2448 2819 0 0 8 8 48 448 448 2448 2448 96 97 EQAAAA LEEAAA VVVVxx +5998 2820 0 2 8 18 98 998 1998 998 5998 196 197 SWAAAA MEEAAA AAAAxx +8577 2821 1 1 7 17 77 577 577 3577 8577 154 155 XRAAAA NEEAAA HHHHxx +266 2822 0 2 6 6 66 266 266 266 266 132 133 GKAAAA OEEAAA OOOOxx +2169 2823 1 1 9 9 69 169 169 2169 2169 138 139 LFAAAA PEEAAA VVVVxx +8228 2824 0 0 8 8 28 228 228 3228 8228 56 57 MEAAAA QEEAAA AAAAxx +4813 2825 1 1 3 13 13 813 813 4813 4813 26 27 DDAAAA REEAAA HHHHxx +2769 2826 1 1 9 9 69 769 769 2769 2769 138 139 NCAAAA SEEAAA OOOOxx +8382 2827 0 2 2 2 82 382 382 3382 
8382 164 165 KKAAAA TEEAAA VVVVxx +1717 2828 1 1 7 17 17 717 1717 1717 1717 34 35 BOAAAA UEEAAA AAAAxx +7178 2829 0 2 8 18 78 178 1178 2178 7178 156 157 CQAAAA VEEAAA HHHHxx +9547 2830 1 3 7 7 47 547 1547 4547 9547 94 95 FDAAAA WEEAAA OOOOxx +8187 2831 1 3 7 7 87 187 187 3187 8187 174 175 XCAAAA XEEAAA VVVVxx +3168 2832 0 0 8 8 68 168 1168 3168 3168 136 137 WRAAAA YEEAAA AAAAxx +2180 2833 0 0 0 0 80 180 180 2180 2180 160 161 WFAAAA ZEEAAA HHHHxx +859 2834 1 3 9 19 59 859 859 859 859 118 119 BHAAAA AFEAAA OOOOxx +1554 2835 0 2 4 14 54 554 1554 1554 1554 108 109 UHAAAA BFEAAA VVVVxx +3567 2836 1 3 7 7 67 567 1567 3567 3567 134 135 FHAAAA CFEAAA AAAAxx +5985 2837 1 1 5 5 85 985 1985 985 5985 170 171 FWAAAA DFEAAA HHHHxx +1 2838 1 1 1 1 1 1 1 1 1 2 3 BAAAAA EFEAAA OOOOxx +5937 2839 1 1 7 17 37 937 1937 937 5937 74 75 JUAAAA FFEAAA VVVVxx +7594 2840 0 2 4 14 94 594 1594 2594 7594 188 189 CGAAAA GFEAAA AAAAxx +3783 2841 1 3 3 3 83 783 1783 3783 3783 166 167 NPAAAA HFEAAA HHHHxx +6841 2842 1 1 1 1 41 841 841 1841 6841 82 83 DDAAAA IFEAAA OOOOxx +9694 2843 0 2 4 14 94 694 1694 4694 9694 188 189 WIAAAA JFEAAA VVVVxx +4322 2844 0 2 2 2 22 322 322 4322 4322 44 45 GKAAAA KFEAAA AAAAxx +6012 2845 0 0 2 12 12 12 12 1012 6012 24 25 GXAAAA LFEAAA HHHHxx +108 2846 0 0 8 8 8 108 108 108 108 16 17 EEAAAA MFEAAA OOOOxx +3396 2847 0 0 6 16 96 396 1396 3396 3396 192 193 QAAAAA NFEAAA VVVVxx +8643 2848 1 3 3 3 43 643 643 3643 8643 86 87 LUAAAA OFEAAA AAAAxx +6087 2849 1 3 7 7 87 87 87 1087 6087 174 175 DAAAAA PFEAAA HHHHxx +2629 2850 1 1 9 9 29 629 629 2629 2629 58 59 DXAAAA QFEAAA OOOOxx +3009 2851 1 1 9 9 9 9 1009 3009 3009 18 19 TLAAAA RFEAAA VVVVxx +438 2852 0 2 8 18 38 438 438 438 438 76 77 WQAAAA SFEAAA AAAAxx +2480 2853 0 0 0 0 80 480 480 2480 2480 160 161 KRAAAA TFEAAA HHHHxx +936 2854 0 0 6 16 36 936 936 936 936 72 73 AKAAAA UFEAAA OOOOxx +6 2855 0 2 6 6 6 6 6 6 6 12 13 GAAAAA VFEAAA VVVVxx +768 2856 0 0 8 8 68 768 768 768 768 136 137 ODAAAA WFEAAA AAAAxx +1564 2857 0 0 4 4 64 564 1564 1564 1564 128 129 EIAAAA XFEAAA HHHHxx +3236 2858 0 0 6 16 36 236 1236 3236 3236 72 73 MUAAAA YFEAAA OOOOxx +3932 2859 0 0 2 12 32 932 1932 3932 3932 64 65 GVAAAA ZFEAAA VVVVxx +8914 2860 0 2 4 14 14 914 914 3914 8914 28 29 WEAAAA AGEAAA AAAAxx +119 2861 1 3 9 19 19 119 119 119 119 38 39 PEAAAA BGEAAA HHHHxx +6034 2862 0 2 4 14 34 34 34 1034 6034 68 69 CYAAAA CGEAAA OOOOxx +5384 2863 0 0 4 4 84 384 1384 384 5384 168 169 CZAAAA DGEAAA VVVVxx +6885 2864 1 1 5 5 85 885 885 1885 6885 170 171 VEAAAA EGEAAA AAAAxx +232 2865 0 0 2 12 32 232 232 232 232 64 65 YIAAAA FGEAAA HHHHxx +1293 2866 1 1 3 13 93 293 1293 1293 1293 186 187 TXAAAA GGEAAA OOOOxx +9204 2867 0 0 4 4 4 204 1204 4204 9204 8 9 AQAAAA HGEAAA VVVVxx +527 2868 1 3 7 7 27 527 527 527 527 54 55 HUAAAA IGEAAA AAAAxx +6539 2869 1 3 9 19 39 539 539 1539 6539 78 79 NRAAAA JGEAAA HHHHxx +3679 2870 1 3 9 19 79 679 1679 3679 3679 158 159 NLAAAA KGEAAA OOOOxx +8282 2871 0 2 2 2 82 282 282 3282 8282 164 165 OGAAAA LGEAAA VVVVxx +5027 2872 1 3 7 7 27 27 1027 27 5027 54 55 JLAAAA MGEAAA AAAAxx +7694 2873 0 2 4 14 94 694 1694 2694 7694 188 189 YJAAAA NGEAAA HHHHxx +473 2874 1 1 3 13 73 473 473 473 473 146 147 FSAAAA OGEAAA OOOOxx +6325 2875 1 1 5 5 25 325 325 1325 6325 50 51 HJAAAA PGEAAA VVVVxx +8761 2876 1 1 1 1 61 761 761 3761 8761 122 123 ZYAAAA QGEAAA AAAAxx +6184 2877 0 0 4 4 84 184 184 1184 6184 168 169 WDAAAA RGEAAA HHHHxx +419 2878 1 3 9 19 19 419 419 419 419 38 39 DQAAAA SGEAAA OOOOxx +6111 2879 1 3 1 11 11 111 111 1111 6111 22 23 BBAAAA TGEAAA VVVVxx +3836 2880 0 0 6 16 
36 836 1836 3836 3836 72 73 ORAAAA UGEAAA AAAAxx +4086 2881 0 2 6 6 86 86 86 4086 4086 172 173 EBAAAA VGEAAA HHHHxx +5818 2882 0 2 8 18 18 818 1818 818 5818 36 37 UPAAAA WGEAAA OOOOxx +4528 2883 0 0 8 8 28 528 528 4528 4528 56 57 ESAAAA XGEAAA VVVVxx +7199 2884 1 3 9 19 99 199 1199 2199 7199 198 199 XQAAAA YGEAAA AAAAxx +1847 2885 1 3 7 7 47 847 1847 1847 1847 94 95 BTAAAA ZGEAAA HHHHxx +2875 2886 1 3 5 15 75 875 875 2875 2875 150 151 PGAAAA AHEAAA OOOOxx +2872 2887 0 0 2 12 72 872 872 2872 2872 144 145 MGAAAA BHEAAA VVVVxx +3972 2888 0 0 2 12 72 972 1972 3972 3972 144 145 UWAAAA CHEAAA AAAAxx +7590 2889 0 2 0 10 90 590 1590 2590 7590 180 181 YFAAAA DHEAAA HHHHxx +1914 2890 0 2 4 14 14 914 1914 1914 1914 28 29 QVAAAA EHEAAA OOOOxx +1658 2891 0 2 8 18 58 658 1658 1658 1658 116 117 ULAAAA FHEAAA VVVVxx +2126 2892 0 2 6 6 26 126 126 2126 2126 52 53 UDAAAA GHEAAA AAAAxx +645 2893 1 1 5 5 45 645 645 645 645 90 91 VYAAAA HHEAAA HHHHxx +6636 2894 0 0 6 16 36 636 636 1636 6636 72 73 GVAAAA IHEAAA OOOOxx +1469 2895 1 1 9 9 69 469 1469 1469 1469 138 139 NEAAAA JHEAAA VVVVxx +1377 2896 1 1 7 17 77 377 1377 1377 1377 154 155 ZAAAAA KHEAAA AAAAxx +8425 2897 1 1 5 5 25 425 425 3425 8425 50 51 BMAAAA LHEAAA HHHHxx +9300 2898 0 0 0 0 0 300 1300 4300 9300 0 1 STAAAA MHEAAA OOOOxx +5355 2899 1 3 5 15 55 355 1355 355 5355 110 111 ZXAAAA NHEAAA VVVVxx +840 2900 0 0 0 0 40 840 840 840 840 80 81 IGAAAA OHEAAA AAAAxx +5185 2901 1 1 5 5 85 185 1185 185 5185 170 171 LRAAAA PHEAAA HHHHxx +6467 2902 1 3 7 7 67 467 467 1467 6467 134 135 TOAAAA QHEAAA OOOOxx +58 2903 0 2 8 18 58 58 58 58 58 116 117 GCAAAA RHEAAA VVVVxx +5051 2904 1 3 1 11 51 51 1051 51 5051 102 103 HMAAAA SHEAAA AAAAxx +8901 2905 1 1 1 1 1 901 901 3901 8901 2 3 JEAAAA THEAAA HHHHxx +1550 2906 0 2 0 10 50 550 1550 1550 1550 100 101 QHAAAA UHEAAA OOOOxx +1698 2907 0 2 8 18 98 698 1698 1698 1698 196 197 INAAAA VHEAAA VVVVxx +802 2908 0 2 2 2 2 802 802 802 802 4 5 WEAAAA WHEAAA AAAAxx +2440 2909 0 0 0 0 40 440 440 2440 2440 80 81 WPAAAA XHEAAA HHHHxx +2260 2910 0 0 0 0 60 260 260 2260 2260 120 121 YIAAAA YHEAAA OOOOxx +8218 2911 0 2 8 18 18 218 218 3218 8218 36 37 CEAAAA ZHEAAA VVVVxx +5144 2912 0 0 4 4 44 144 1144 144 5144 88 89 WPAAAA AIEAAA AAAAxx +4822 2913 0 2 2 2 22 822 822 4822 4822 44 45 MDAAAA BIEAAA HHHHxx +9476 2914 0 0 6 16 76 476 1476 4476 9476 152 153 MAAAAA CIEAAA OOOOxx +7535 2915 1 3 5 15 35 535 1535 2535 7535 70 71 VDAAAA DIEAAA VVVVxx +8738 2916 0 2 8 18 38 738 738 3738 8738 76 77 CYAAAA EIEAAA AAAAxx +7946 2917 0 2 6 6 46 946 1946 2946 7946 92 93 QTAAAA FIEAAA HHHHxx +8143 2918 1 3 3 3 43 143 143 3143 8143 86 87 FBAAAA GIEAAA OOOOxx +2623 2919 1 3 3 3 23 623 623 2623 2623 46 47 XWAAAA HIEAAA VVVVxx +5209 2920 1 1 9 9 9 209 1209 209 5209 18 19 JSAAAA IIEAAA AAAAxx +7674 2921 0 2 4 14 74 674 1674 2674 7674 148 149 EJAAAA JIEAAA HHHHxx +1135 2922 1 3 5 15 35 135 1135 1135 1135 70 71 RRAAAA KIEAAA OOOOxx +424 2923 0 0 4 4 24 424 424 424 424 48 49 IQAAAA LIEAAA VVVVxx +942 2924 0 2 2 2 42 942 942 942 942 84 85 GKAAAA MIEAAA AAAAxx +7813 2925 1 1 3 13 13 813 1813 2813 7813 26 27 NOAAAA NIEAAA HHHHxx +3539 2926 1 3 9 19 39 539 1539 3539 3539 78 79 DGAAAA OIEAAA OOOOxx +2909 2927 1 1 9 9 9 909 909 2909 2909 18 19 XHAAAA PIEAAA VVVVxx +3748 2928 0 0 8 8 48 748 1748 3748 3748 96 97 EOAAAA QIEAAA AAAAxx +2996 2929 0 0 6 16 96 996 996 2996 2996 192 193 GLAAAA RIEAAA HHHHxx +1869 2930 1 1 9 9 69 869 1869 1869 1869 138 139 XTAAAA SIEAAA OOOOxx +8151 2931 1 3 1 11 51 151 151 3151 8151 102 103 NBAAAA TIEAAA VVVVxx +6361 2932 1 1 1 1 61 361 361 1361 
6361 122 123 RKAAAA UIEAAA AAAAxx +5568 2933 0 0 8 8 68 568 1568 568 5568 136 137 EGAAAA VIEAAA HHHHxx +2796 2934 0 0 6 16 96 796 796 2796 2796 192 193 ODAAAA WIEAAA OOOOxx +8489 2935 1 1 9 9 89 489 489 3489 8489 178 179 NOAAAA XIEAAA VVVVxx +9183 2936 1 3 3 3 83 183 1183 4183 9183 166 167 FPAAAA YIEAAA AAAAxx +8227 2937 1 3 7 7 27 227 227 3227 8227 54 55 LEAAAA ZIEAAA HHHHxx +1844 2938 0 0 4 4 44 844 1844 1844 1844 88 89 YSAAAA AJEAAA OOOOxx +3975 2939 1 3 5 15 75 975 1975 3975 3975 150 151 XWAAAA BJEAAA VVVVxx +6490 2940 0 2 0 10 90 490 490 1490 6490 180 181 QPAAAA CJEAAA AAAAxx +8303 2941 1 3 3 3 3 303 303 3303 8303 6 7 JHAAAA DJEAAA HHHHxx +7334 2942 0 2 4 14 34 334 1334 2334 7334 68 69 CWAAAA EJEAAA OOOOxx +2382 2943 0 2 2 2 82 382 382 2382 2382 164 165 QNAAAA FJEAAA VVVVxx +177 2944 1 1 7 17 77 177 177 177 177 154 155 VGAAAA GJEAAA AAAAxx +8117 2945 1 1 7 17 17 117 117 3117 8117 34 35 FAAAAA HJEAAA HHHHxx +5485 2946 1 1 5 5 85 485 1485 485 5485 170 171 ZCAAAA IJEAAA OOOOxx +6544 2947 0 0 4 4 44 544 544 1544 6544 88 89 SRAAAA JJEAAA VVVVxx +8517 2948 1 1 7 17 17 517 517 3517 8517 34 35 PPAAAA KJEAAA AAAAxx +2252 2949 0 0 2 12 52 252 252 2252 2252 104 105 QIAAAA LJEAAA HHHHxx +4480 2950 0 0 0 0 80 480 480 4480 4480 160 161 IQAAAA MJEAAA OOOOxx +4785 2951 1 1 5 5 85 785 785 4785 4785 170 171 BCAAAA NJEAAA VVVVxx +9700 2952 0 0 0 0 0 700 1700 4700 9700 0 1 CJAAAA OJEAAA AAAAxx +2122 2953 0 2 2 2 22 122 122 2122 2122 44 45 QDAAAA PJEAAA HHHHxx +8783 2954 1 3 3 3 83 783 783 3783 8783 166 167 VZAAAA QJEAAA OOOOxx +1453 2955 1 1 3 13 53 453 1453 1453 1453 106 107 XDAAAA RJEAAA VVVVxx +3908 2956 0 0 8 8 8 908 1908 3908 3908 16 17 IUAAAA SJEAAA AAAAxx +7707 2957 1 3 7 7 7 707 1707 2707 7707 14 15 LKAAAA TJEAAA HHHHxx +9049 2958 1 1 9 9 49 49 1049 4049 9049 98 99 BKAAAA UJEAAA OOOOxx +654 2959 0 2 4 14 54 654 654 654 654 108 109 EZAAAA VJEAAA VVVVxx +3336 2960 0 0 6 16 36 336 1336 3336 3336 72 73 IYAAAA WJEAAA AAAAxx +622 2961 0 2 2 2 22 622 622 622 622 44 45 YXAAAA XJEAAA HHHHxx +8398 2962 0 2 8 18 98 398 398 3398 8398 196 197 ALAAAA YJEAAA OOOOxx +9193 2963 1 1 3 13 93 193 1193 4193 9193 186 187 PPAAAA ZJEAAA VVVVxx +7896 2964 0 0 6 16 96 896 1896 2896 7896 192 193 SRAAAA AKEAAA AAAAxx +9798 2965 0 2 8 18 98 798 1798 4798 9798 196 197 WMAAAA BKEAAA HHHHxx +2881 2966 1 1 1 1 81 881 881 2881 2881 162 163 VGAAAA CKEAAA OOOOxx +672 2967 0 0 2 12 72 672 672 672 672 144 145 WZAAAA DKEAAA VVVVxx +6743 2968 1 3 3 3 43 743 743 1743 6743 86 87 JZAAAA EKEAAA AAAAxx +8935 2969 1 3 5 15 35 935 935 3935 8935 70 71 RFAAAA FKEAAA HHHHxx +2426 2970 0 2 6 6 26 426 426 2426 2426 52 53 IPAAAA GKEAAA OOOOxx +722 2971 0 2 2 2 22 722 722 722 722 44 45 UBAAAA HKEAAA VVVVxx +5088 2972 0 0 8 8 88 88 1088 88 5088 176 177 SNAAAA IKEAAA AAAAxx +8677 2973 1 1 7 17 77 677 677 3677 8677 154 155 TVAAAA JKEAAA HHHHxx +6963 2974 1 3 3 3 63 963 963 1963 6963 126 127 VHAAAA KKEAAA OOOOxx +1653 2975 1 1 3 13 53 653 1653 1653 1653 106 107 PLAAAA LKEAAA VVVVxx +7295 2976 1 3 5 15 95 295 1295 2295 7295 190 191 PUAAAA MKEAAA AAAAxx +6675 2977 1 3 5 15 75 675 675 1675 6675 150 151 TWAAAA NKEAAA HHHHxx +7183 2978 1 3 3 3 83 183 1183 2183 7183 166 167 HQAAAA OKEAAA OOOOxx +4378 2979 0 2 8 18 78 378 378 4378 4378 156 157 KMAAAA PKEAAA VVVVxx +2157 2980 1 1 7 17 57 157 157 2157 2157 114 115 ZEAAAA QKEAAA AAAAxx +2621 2981 1 1 1 1 21 621 621 2621 2621 42 43 VWAAAA RKEAAA HHHHxx +9278 2982 0 2 8 18 78 278 1278 4278 9278 156 157 WSAAAA SKEAAA OOOOxx +79 2983 1 3 9 19 79 79 79 79 79 158 159 BDAAAA TKEAAA VVVVxx +7358 2984 0 2 8 18 58 358 1358 
2358 7358 116 117 AXAAAA UKEAAA AAAAxx +3589 2985 1 1 9 9 89 589 1589 3589 3589 178 179 BIAAAA VKEAAA HHHHxx +1254 2986 0 2 4 14 54 254 1254 1254 1254 108 109 GWAAAA WKEAAA OOOOxx +3490 2987 0 2 0 10 90 490 1490 3490 3490 180 181 GEAAAA XKEAAA VVVVxx +7533 2988 1 1 3 13 33 533 1533 2533 7533 66 67 TDAAAA YKEAAA AAAAxx +2800 2989 0 0 0 0 0 800 800 2800 2800 0 1 SDAAAA ZKEAAA HHHHxx +351 2990 1 3 1 11 51 351 351 351 351 102 103 NNAAAA ALEAAA OOOOxx +4359 2991 1 3 9 19 59 359 359 4359 4359 118 119 RLAAAA BLEAAA VVVVxx +5788 2992 0 0 8 8 88 788 1788 788 5788 176 177 QOAAAA CLEAAA AAAAxx +5521 2993 1 1 1 1 21 521 1521 521 5521 42 43 JEAAAA DLEAAA HHHHxx +3351 2994 1 3 1 11 51 351 1351 3351 3351 102 103 XYAAAA ELEAAA OOOOxx +5129 2995 1 1 9 9 29 129 1129 129 5129 58 59 HPAAAA FLEAAA VVVVxx +315 2996 1 3 5 15 15 315 315 315 315 30 31 DMAAAA GLEAAA AAAAxx +7552 2997 0 0 2 12 52 552 1552 2552 7552 104 105 MEAAAA HLEAAA HHHHxx +9176 2998 0 0 6 16 76 176 1176 4176 9176 152 153 YOAAAA ILEAAA OOOOxx +7458 2999 0 2 8 18 58 458 1458 2458 7458 116 117 WAAAAA JLEAAA VVVVxx +279 3000 1 3 9 19 79 279 279 279 279 158 159 TKAAAA KLEAAA AAAAxx +738 3001 0 2 8 18 38 738 738 738 738 76 77 KCAAAA LLEAAA HHHHxx +2557 3002 1 1 7 17 57 557 557 2557 2557 114 115 JUAAAA MLEAAA OOOOxx +9395 3003 1 3 5 15 95 395 1395 4395 9395 190 191 JXAAAA NLEAAA VVVVxx +7214 3004 0 2 4 14 14 214 1214 2214 7214 28 29 MRAAAA OLEAAA AAAAxx +6354 3005 0 2 4 14 54 354 354 1354 6354 108 109 KKAAAA PLEAAA HHHHxx +4799 3006 1 3 9 19 99 799 799 4799 4799 198 199 PCAAAA QLEAAA OOOOxx +1231 3007 1 3 1 11 31 231 1231 1231 1231 62 63 JVAAAA RLEAAA VVVVxx +5252 3008 0 0 2 12 52 252 1252 252 5252 104 105 AUAAAA SLEAAA AAAAxx +5250 3009 0 2 0 10 50 250 1250 250 5250 100 101 YTAAAA TLEAAA HHHHxx +9319 3010 1 3 9 19 19 319 1319 4319 9319 38 39 LUAAAA ULEAAA OOOOxx +1724 3011 0 0 4 4 24 724 1724 1724 1724 48 49 IOAAAA VLEAAA VVVVxx +7947 3012 1 3 7 7 47 947 1947 2947 7947 94 95 RTAAAA WLEAAA AAAAxx +1105 3013 1 1 5 5 5 105 1105 1105 1105 10 11 NQAAAA XLEAAA HHHHxx +1417 3014 1 1 7 17 17 417 1417 1417 1417 34 35 NCAAAA YLEAAA OOOOxx +7101 3015 1 1 1 1 1 101 1101 2101 7101 2 3 DNAAAA ZLEAAA VVVVxx +1088 3016 0 0 8 8 88 88 1088 1088 1088 176 177 WPAAAA AMEAAA AAAAxx +979 3017 1 3 9 19 79 979 979 979 979 158 159 RLAAAA BMEAAA HHHHxx +7589 3018 1 1 9 9 89 589 1589 2589 7589 178 179 XFAAAA CMEAAA OOOOxx +8952 3019 0 0 2 12 52 952 952 3952 8952 104 105 IGAAAA DMEAAA VVVVxx +2864 3020 0 0 4 4 64 864 864 2864 2864 128 129 EGAAAA EMEAAA AAAAxx +234 3021 0 2 4 14 34 234 234 234 234 68 69 AJAAAA FMEAAA HHHHxx +7231 3022 1 3 1 11 31 231 1231 2231 7231 62 63 DSAAAA GMEAAA OOOOxx +6792 3023 0 0 2 12 92 792 792 1792 6792 184 185 GBAAAA HMEAAA VVVVxx +4311 3024 1 3 1 11 11 311 311 4311 4311 22 23 VJAAAA IMEAAA AAAAxx +3374 3025 0 2 4 14 74 374 1374 3374 3374 148 149 UZAAAA JMEAAA HHHHxx +3367 3026 1 3 7 7 67 367 1367 3367 3367 134 135 NZAAAA KMEAAA OOOOxx +2598 3027 0 2 8 18 98 598 598 2598 2598 196 197 YVAAAA LMEAAA VVVVxx +1033 3028 1 1 3 13 33 33 1033 1033 1033 66 67 TNAAAA MMEAAA AAAAxx +7803 3029 1 3 3 3 3 803 1803 2803 7803 6 7 DOAAAA NMEAAA HHHHxx +3870 3030 0 2 0 10 70 870 1870 3870 3870 140 141 WSAAAA OMEAAA OOOOxx +4962 3031 0 2 2 2 62 962 962 4962 4962 124 125 WIAAAA PMEAAA VVVVxx +4842 3032 0 2 2 2 42 842 842 4842 4842 84 85 GEAAAA QMEAAA AAAAxx +8814 3033 0 2 4 14 14 814 814 3814 8814 28 29 ABAAAA RMEAAA HHHHxx +3429 3034 1 1 9 9 29 429 1429 3429 3429 58 59 XBAAAA SMEAAA OOOOxx +6550 3035 0 2 0 10 50 550 550 1550 6550 100 101 YRAAAA TMEAAA VVVVxx +6317 3036 
1 1 7 17 17 317 317 1317 6317 34 35 ZIAAAA UMEAAA AAAAxx +5023 3037 1 3 3 3 23 23 1023 23 5023 46 47 FLAAAA VMEAAA HHHHxx +5825 3038 1 1 5 5 25 825 1825 825 5825 50 51 BQAAAA WMEAAA OOOOxx +5297 3039 1 1 7 17 97 297 1297 297 5297 194 195 TVAAAA XMEAAA VVVVxx +8764 3040 0 0 4 4 64 764 764 3764 8764 128 129 CZAAAA YMEAAA AAAAxx +5084 3041 0 0 4 4 84 84 1084 84 5084 168 169 ONAAAA ZMEAAA HHHHxx +6808 3042 0 0 8 8 8 808 808 1808 6808 16 17 WBAAAA ANEAAA OOOOxx +1780 3043 0 0 0 0 80 780 1780 1780 1780 160 161 MQAAAA BNEAAA VVVVxx +4092 3044 0 0 2 12 92 92 92 4092 4092 184 185 KBAAAA CNEAAA AAAAxx +3618 3045 0 2 8 18 18 618 1618 3618 3618 36 37 EJAAAA DNEAAA HHHHxx +7299 3046 1 3 9 19 99 299 1299 2299 7299 198 199 TUAAAA ENEAAA OOOOxx +8544 3047 0 0 4 4 44 544 544 3544 8544 88 89 QQAAAA FNEAAA VVVVxx +2359 3048 1 3 9 19 59 359 359 2359 2359 118 119 TMAAAA GNEAAA AAAAxx +1939 3049 1 3 9 19 39 939 1939 1939 1939 78 79 PWAAAA HNEAAA HHHHxx +5834 3050 0 2 4 14 34 834 1834 834 5834 68 69 KQAAAA INEAAA OOOOxx +1997 3051 1 1 7 17 97 997 1997 1997 1997 194 195 VYAAAA JNEAAA VVVVxx +7917 3052 1 1 7 17 17 917 1917 2917 7917 34 35 NSAAAA KNEAAA AAAAxx +2098 3053 0 2 8 18 98 98 98 2098 2098 196 197 SCAAAA LNEAAA HHHHxx +7576 3054 0 0 6 16 76 576 1576 2576 7576 152 153 KFAAAA MNEAAA OOOOxx +376 3055 0 0 6 16 76 376 376 376 376 152 153 MOAAAA NNEAAA VVVVxx +8535 3056 1 3 5 15 35 535 535 3535 8535 70 71 HQAAAA ONEAAA AAAAxx +5659 3057 1 3 9 19 59 659 1659 659 5659 118 119 RJAAAA PNEAAA HHHHxx +2786 3058 0 2 6 6 86 786 786 2786 2786 172 173 EDAAAA QNEAAA OOOOxx +8820 3059 0 0 0 0 20 820 820 3820 8820 40 41 GBAAAA RNEAAA VVVVxx +1229 3060 1 1 9 9 29 229 1229 1229 1229 58 59 HVAAAA SNEAAA AAAAxx +9321 3061 1 1 1 1 21 321 1321 4321 9321 42 43 NUAAAA TNEAAA HHHHxx +7662 3062 0 2 2 2 62 662 1662 2662 7662 124 125 SIAAAA UNEAAA OOOOxx +5535 3063 1 3 5 15 35 535 1535 535 5535 70 71 XEAAAA VNEAAA VVVVxx +4889 3064 1 1 9 9 89 889 889 4889 4889 178 179 BGAAAA WNEAAA AAAAxx +8259 3065 1 3 9 19 59 259 259 3259 8259 118 119 RFAAAA XNEAAA HHHHxx +6789 3066 1 1 9 9 89 789 789 1789 6789 178 179 DBAAAA YNEAAA OOOOxx +5411 3067 1 3 1 11 11 411 1411 411 5411 22 23 DAAAAA ZNEAAA VVVVxx +6992 3068 0 0 2 12 92 992 992 1992 6992 184 185 YIAAAA AOEAAA AAAAxx +7698 3069 0 2 8 18 98 698 1698 2698 7698 196 197 CKAAAA BOEAAA HHHHxx +2342 3070 0 2 2 2 42 342 342 2342 2342 84 85 CMAAAA COEAAA OOOOxx +1501 3071 1 1 1 1 1 501 1501 1501 1501 2 3 TFAAAA DOEAAA VVVVxx +6322 3072 0 2 2 2 22 322 322 1322 6322 44 45 EJAAAA EOEAAA AAAAxx +9861 3073 1 1 1 1 61 861 1861 4861 9861 122 123 HPAAAA FOEAAA HHHHxx +9802 3074 0 2 2 2 2 802 1802 4802 9802 4 5 ANAAAA GOEAAA OOOOxx +4750 3075 0 2 0 10 50 750 750 4750 4750 100 101 SAAAAA HOEAAA VVVVxx +5855 3076 1 3 5 15 55 855 1855 855 5855 110 111 FRAAAA IOEAAA AAAAxx +4304 3077 0 0 4 4 4 304 304 4304 4304 8 9 OJAAAA JOEAAA HHHHxx +2605 3078 1 1 5 5 5 605 605 2605 2605 10 11 FWAAAA KOEAAA OOOOxx +1802 3079 0 2 2 2 2 802 1802 1802 1802 4 5 IRAAAA LOEAAA VVVVxx +9368 3080 0 0 8 8 68 368 1368 4368 9368 136 137 IWAAAA MOEAAA AAAAxx +7107 3081 1 3 7 7 7 107 1107 2107 7107 14 15 JNAAAA NOEAAA HHHHxx +8895 3082 1 3 5 15 95 895 895 3895 8895 190 191 DEAAAA OOEAAA OOOOxx +3750 3083 0 2 0 10 50 750 1750 3750 3750 100 101 GOAAAA POEAAA VVVVxx +8934 3084 0 2 4 14 34 934 934 3934 8934 68 69 QFAAAA QOEAAA AAAAxx +9464 3085 0 0 4 4 64 464 1464 4464 9464 128 129 AAAAAA ROEAAA HHHHxx +1928 3086 0 0 8 8 28 928 1928 1928 1928 56 57 EWAAAA SOEAAA OOOOxx +3196 3087 0 0 6 16 96 196 1196 3196 3196 192 193 YSAAAA TOEAAA VVVVxx +5256 
3088 0 0 6 16 56 256 1256 256 5256 112 113 EUAAAA UOEAAA AAAAxx +7119 3089 1 3 9 19 19 119 1119 2119 7119 38 39 VNAAAA VOEAAA HHHHxx +4495 3090 1 3 5 15 95 495 495 4495 4495 190 191 XQAAAA WOEAAA OOOOxx +9292 3091 0 0 2 12 92 292 1292 4292 9292 184 185 KTAAAA XOEAAA VVVVxx +1617 3092 1 1 7 17 17 617 1617 1617 1617 34 35 FKAAAA YOEAAA AAAAxx +481 3093 1 1 1 1 81 481 481 481 481 162 163 NSAAAA ZOEAAA HHHHxx +56 3094 0 0 6 16 56 56 56 56 56 112 113 ECAAAA APEAAA OOOOxx +9120 3095 0 0 0 0 20 120 1120 4120 9120 40 41 UMAAAA BPEAAA VVVVxx +1306 3096 0 2 6 6 6 306 1306 1306 1306 12 13 GYAAAA CPEAAA AAAAxx +7773 3097 1 1 3 13 73 773 1773 2773 7773 146 147 ZMAAAA DPEAAA HHHHxx +4863 3098 1 3 3 3 63 863 863 4863 4863 126 127 BFAAAA EPEAAA OOOOxx +1114 3099 0 2 4 14 14 114 1114 1114 1114 28 29 WQAAAA FPEAAA VVVVxx +8124 3100 0 0 4 4 24 124 124 3124 8124 48 49 MAAAAA GPEAAA AAAAxx +6254 3101 0 2 4 14 54 254 254 1254 6254 108 109 OGAAAA HPEAAA HHHHxx +8109 3102 1 1 9 9 9 109 109 3109 8109 18 19 XZAAAA IPEAAA OOOOxx +1747 3103 1 3 7 7 47 747 1747 1747 1747 94 95 FPAAAA JPEAAA VVVVxx +6185 3104 1 1 5 5 85 185 185 1185 6185 170 171 XDAAAA KPEAAA AAAAxx +3388 3105 0 0 8 8 88 388 1388 3388 3388 176 177 IAAAAA LPEAAA HHHHxx +4905 3106 1 1 5 5 5 905 905 4905 4905 10 11 RGAAAA MPEAAA OOOOxx +5728 3107 0 0 8 8 28 728 1728 728 5728 56 57 IMAAAA NPEAAA VVVVxx +7507 3108 1 3 7 7 7 507 1507 2507 7507 14 15 TCAAAA OPEAAA AAAAxx +5662 3109 0 2 2 2 62 662 1662 662 5662 124 125 UJAAAA PPEAAA HHHHxx +1686 3110 0 2 6 6 86 686 1686 1686 1686 172 173 WMAAAA QPEAAA OOOOxx +5202 3111 0 2 2 2 2 202 1202 202 5202 4 5 CSAAAA RPEAAA VVVVxx +6905 3112 1 1 5 5 5 905 905 1905 6905 10 11 PFAAAA SPEAAA AAAAxx +9577 3113 1 1 7 17 77 577 1577 4577 9577 154 155 JEAAAA TPEAAA HHHHxx +7194 3114 0 2 4 14 94 194 1194 2194 7194 188 189 SQAAAA UPEAAA OOOOxx +7016 3115 0 0 6 16 16 16 1016 2016 7016 32 33 WJAAAA VPEAAA VVVVxx +8905 3116 1 1 5 5 5 905 905 3905 8905 10 11 NEAAAA WPEAAA AAAAxx +3419 3117 1 3 9 19 19 419 1419 3419 3419 38 39 NBAAAA XPEAAA HHHHxx +6881 3118 1 1 1 1 81 881 881 1881 6881 162 163 REAAAA YPEAAA OOOOxx +8370 3119 0 2 0 10 70 370 370 3370 8370 140 141 YJAAAA ZPEAAA VVVVxx +6117 3120 1 1 7 17 17 117 117 1117 6117 34 35 HBAAAA AQEAAA AAAAxx +1636 3121 0 0 6 16 36 636 1636 1636 1636 72 73 YKAAAA BQEAAA HHHHxx +6857 3122 1 1 7 17 57 857 857 1857 6857 114 115 TDAAAA CQEAAA OOOOxx +7163 3123 1 3 3 3 63 163 1163 2163 7163 126 127 NPAAAA DQEAAA VVVVxx +5040 3124 0 0 0 0 40 40 1040 40 5040 80 81 WLAAAA EQEAAA AAAAxx +6263 3125 1 3 3 3 63 263 263 1263 6263 126 127 XGAAAA FQEAAA HHHHxx +4809 3126 1 1 9 9 9 809 809 4809 4809 18 19 ZCAAAA GQEAAA OOOOxx +900 3127 0 0 0 0 0 900 900 900 900 0 1 QIAAAA HQEAAA VVVVxx +3199 3128 1 3 9 19 99 199 1199 3199 3199 198 199 BTAAAA IQEAAA AAAAxx +4156 3129 0 0 6 16 56 156 156 4156 4156 112 113 WDAAAA JQEAAA HHHHxx +3501 3130 1 1 1 1 1 501 1501 3501 3501 2 3 REAAAA KQEAAA OOOOxx +164 3131 0 0 4 4 64 164 164 164 164 128 129 IGAAAA LQEAAA VVVVxx +9548 3132 0 0 8 8 48 548 1548 4548 9548 96 97 GDAAAA MQEAAA AAAAxx +1149 3133 1 1 9 9 49 149 1149 1149 1149 98 99 FSAAAA NQEAAA HHHHxx +1962 3134 0 2 2 2 62 962 1962 1962 1962 124 125 MXAAAA OQEAAA OOOOxx +4072 3135 0 0 2 12 72 72 72 4072 4072 144 145 QAAAAA PQEAAA VVVVxx +4280 3136 0 0 0 0 80 280 280 4280 4280 160 161 QIAAAA QQEAAA AAAAxx +1398 3137 0 2 8 18 98 398 1398 1398 1398 196 197 UBAAAA RQEAAA HHHHxx +725 3138 1 1 5 5 25 725 725 725 725 50 51 XBAAAA SQEAAA OOOOxx +3988 3139 0 0 8 8 88 988 1988 3988 3988 176 177 KXAAAA TQEAAA VVVVxx +5059 3140 1 3 9 
19 59 59 1059 59 5059 118 119 PMAAAA UQEAAA AAAAxx +2632 3141 0 0 2 12 32 632 632 2632 2632 64 65 GXAAAA VQEAAA HHHHxx +1909 3142 1 1 9 9 9 909 1909 1909 1909 18 19 LVAAAA WQEAAA OOOOxx +6827 3143 1 3 7 7 27 827 827 1827 6827 54 55 PCAAAA XQEAAA VVVVxx +8156 3144 0 0 6 16 56 156 156 3156 8156 112 113 SBAAAA YQEAAA AAAAxx +1192 3145 0 0 2 12 92 192 1192 1192 1192 184 185 WTAAAA ZQEAAA HHHHxx +9545 3146 1 1 5 5 45 545 1545 4545 9545 90 91 DDAAAA AREAAA OOOOxx +2249 3147 1 1 9 9 49 249 249 2249 2249 98 99 NIAAAA BREAAA VVVVxx +5580 3148 0 0 0 0 80 580 1580 580 5580 160 161 QGAAAA CREAAA AAAAxx +8403 3149 1 3 3 3 3 403 403 3403 8403 6 7 FLAAAA DREAAA HHHHxx +4024 3150 0 0 4 4 24 24 24 4024 4024 48 49 UYAAAA EREAAA OOOOxx +1866 3151 0 2 6 6 66 866 1866 1866 1866 132 133 UTAAAA FREAAA VVVVxx +9251 3152 1 3 1 11 51 251 1251 4251 9251 102 103 VRAAAA GREAAA AAAAxx +9979 3153 1 3 9 19 79 979 1979 4979 9979 158 159 VTAAAA HREAAA HHHHxx +9899 3154 1 3 9 19 99 899 1899 4899 9899 198 199 TQAAAA IREAAA OOOOxx +2540 3155 0 0 0 0 40 540 540 2540 2540 80 81 STAAAA JREAAA VVVVxx +8957 3156 1 1 7 17 57 957 957 3957 8957 114 115 NGAAAA KREAAA AAAAxx +7702 3157 0 2 2 2 2 702 1702 2702 7702 4 5 GKAAAA LREAAA HHHHxx +4211 3158 1 3 1 11 11 211 211 4211 4211 22 23 ZFAAAA MREAAA OOOOxx +6684 3159 0 0 4 4 84 684 684 1684 6684 168 169 CXAAAA NREAAA VVVVxx +3883 3160 1 3 3 3 83 883 1883 3883 3883 166 167 JTAAAA OREAAA AAAAxx +3531 3161 1 3 1 11 31 531 1531 3531 3531 62 63 VFAAAA PREAAA HHHHxx +9178 3162 0 2 8 18 78 178 1178 4178 9178 156 157 APAAAA QREAAA OOOOxx +3389 3163 1 1 9 9 89 389 1389 3389 3389 178 179 JAAAAA RREAAA VVVVxx +7874 3164 0 2 4 14 74 874 1874 2874 7874 148 149 WQAAAA SREAAA AAAAxx +4522 3165 0 2 2 2 22 522 522 4522 4522 44 45 YRAAAA TREAAA HHHHxx +9399 3166 1 3 9 19 99 399 1399 4399 9399 198 199 NXAAAA UREAAA OOOOxx +9083 3167 1 3 3 3 83 83 1083 4083 9083 166 167 JLAAAA VREAAA VVVVxx +1530 3168 0 2 0 10 30 530 1530 1530 1530 60 61 WGAAAA WREAAA AAAAxx +2360 3169 0 0 0 0 60 360 360 2360 2360 120 121 UMAAAA XREAAA HHHHxx +4908 3170 0 0 8 8 8 908 908 4908 4908 16 17 UGAAAA YREAAA OOOOxx +4628 3171 0 0 8 8 28 628 628 4628 4628 56 57 AWAAAA ZREAAA VVVVxx +3889 3172 1 1 9 9 89 889 1889 3889 3889 178 179 PTAAAA ASEAAA AAAAxx +1331 3173 1 3 1 11 31 331 1331 1331 1331 62 63 FZAAAA BSEAAA HHHHxx +1942 3174 0 2 2 2 42 942 1942 1942 1942 84 85 SWAAAA CSEAAA OOOOxx +4734 3175 0 2 4 14 34 734 734 4734 4734 68 69 CAAAAA DSEAAA VVVVxx +8386 3176 0 2 6 6 86 386 386 3386 8386 172 173 OKAAAA ESEAAA AAAAxx +3586 3177 0 2 6 6 86 586 1586 3586 3586 172 173 YHAAAA FSEAAA HHHHxx +2354 3178 0 2 4 14 54 354 354 2354 2354 108 109 OMAAAA GSEAAA OOOOxx +7108 3179 0 0 8 8 8 108 1108 2108 7108 16 17 KNAAAA HSEAAA VVVVxx +1857 3180 1 1 7 17 57 857 1857 1857 1857 114 115 LTAAAA ISEAAA AAAAxx +2544 3181 0 0 4 4 44 544 544 2544 2544 88 89 WTAAAA JSEAAA HHHHxx +819 3182 1 3 9 19 19 819 819 819 819 38 39 NFAAAA KSEAAA OOOOxx +2878 3183 0 2 8 18 78 878 878 2878 2878 156 157 SGAAAA LSEAAA VVVVxx +1772 3184 0 0 2 12 72 772 1772 1772 1772 144 145 EQAAAA MSEAAA AAAAxx +354 3185 0 2 4 14 54 354 354 354 354 108 109 QNAAAA NSEAAA HHHHxx +3259 3186 1 3 9 19 59 259 1259 3259 3259 118 119 JVAAAA OSEAAA OOOOxx +2170 3187 0 2 0 10 70 170 170 2170 2170 140 141 MFAAAA PSEAAA VVVVxx +1190 3188 0 2 0 10 90 190 1190 1190 1190 180 181 UTAAAA QSEAAA AAAAxx +3607 3189 1 3 7 7 7 607 1607 3607 3607 14 15 TIAAAA RSEAAA HHHHxx +4661 3190 1 1 1 1 61 661 661 4661 4661 122 123 HXAAAA SSEAAA OOOOxx +1796 3191 0 0 6 16 96 796 1796 1796 1796 192 193 CRAAAA TSEAAA 
[... data hunk continues: roughly 990 further "+"-prefixed rows (unique2 values 3192 through 4181) of what appears to be the standard 10,000-row PostgreSQL "tenk" regression test data set added by this patch, one row per line, each with 16 whitespace-separated columns in tenk1 order: unique1 unique2 two four ten twenty hundred thousand twothousand fivethous tenthous odd even stringu1 stringu2 string4 ...]
62 162 1162 4162 9162 124 125 KOAAAA VEGAAA HHHHxx +5920 4182 0 0 0 0 20 920 1920 920 5920 40 41 STAAAA WEGAAA OOOOxx +7156 4183 0 0 6 16 56 156 1156 2156 7156 112 113 GPAAAA XEGAAA VVVVxx +4271 4184 1 3 1 11 71 271 271 4271 4271 142 143 HIAAAA YEGAAA AAAAxx +4698 4185 0 2 8 18 98 698 698 4698 4698 196 197 SYAAAA ZEGAAA HHHHxx +1572 4186 0 0 2 12 72 572 1572 1572 1572 144 145 MIAAAA AFGAAA OOOOxx +6974 4187 0 2 4 14 74 974 974 1974 6974 148 149 GIAAAA BFGAAA VVVVxx +4291 4188 1 3 1 11 91 291 291 4291 4291 182 183 BJAAAA CFGAAA AAAAxx +4036 4189 0 0 6 16 36 36 36 4036 4036 72 73 GZAAAA DFGAAA HHHHxx +7473 4190 1 1 3 13 73 473 1473 2473 7473 146 147 LBAAAA EFGAAA OOOOxx +4786 4191 0 2 6 6 86 786 786 4786 4786 172 173 CCAAAA FFGAAA VVVVxx +2662 4192 0 2 2 2 62 662 662 2662 2662 124 125 KYAAAA GFGAAA AAAAxx +916 4193 0 0 6 16 16 916 916 916 916 32 33 GJAAAA HFGAAA HHHHxx +668 4194 0 0 8 8 68 668 668 668 668 136 137 SZAAAA IFGAAA OOOOxx +4874 4195 0 2 4 14 74 874 874 4874 4874 148 149 MFAAAA JFGAAA VVVVxx +3752 4196 0 0 2 12 52 752 1752 3752 3752 104 105 IOAAAA KFGAAA AAAAxx +4865 4197 1 1 5 5 65 865 865 4865 4865 130 131 DFAAAA LFGAAA HHHHxx +7052 4198 0 0 2 12 52 52 1052 2052 7052 104 105 GLAAAA MFGAAA OOOOxx +5712 4199 0 0 2 12 12 712 1712 712 5712 24 25 SLAAAA NFGAAA VVVVxx +31 4200 1 3 1 11 31 31 31 31 31 62 63 FBAAAA OFGAAA AAAAxx +4944 4201 0 0 4 4 44 944 944 4944 4944 88 89 EIAAAA PFGAAA HHHHxx +1435 4202 1 3 5 15 35 435 1435 1435 1435 70 71 FDAAAA QFGAAA OOOOxx +501 4203 1 1 1 1 1 501 501 501 501 2 3 HTAAAA RFGAAA VVVVxx +9401 4204 1 1 1 1 1 401 1401 4401 9401 2 3 PXAAAA SFGAAA AAAAxx +5014 4205 0 2 4 14 14 14 1014 14 5014 28 29 WKAAAA TFGAAA HHHHxx +9125 4206 1 1 5 5 25 125 1125 4125 9125 50 51 ZMAAAA UFGAAA OOOOxx +6144 4207 0 0 4 4 44 144 144 1144 6144 88 89 ICAAAA VFGAAA VVVVxx +1743 4208 1 3 3 3 43 743 1743 1743 1743 86 87 BPAAAA WFGAAA AAAAxx +4316 4209 0 0 6 16 16 316 316 4316 4316 32 33 AKAAAA XFGAAA HHHHxx +8212 4210 0 0 2 12 12 212 212 3212 8212 24 25 WDAAAA YFGAAA OOOOxx +7344 4211 0 0 4 4 44 344 1344 2344 7344 88 89 MWAAAA ZFGAAA VVVVxx +2051 4212 1 3 1 11 51 51 51 2051 2051 102 103 XAAAAA AGGAAA AAAAxx +8131 4213 1 3 1 11 31 131 131 3131 8131 62 63 TAAAAA BGGAAA HHHHxx +7023 4214 1 3 3 3 23 23 1023 2023 7023 46 47 DKAAAA CGGAAA OOOOxx +9674 4215 0 2 4 14 74 674 1674 4674 9674 148 149 CIAAAA DGGAAA VVVVxx +4984 4216 0 0 4 4 84 984 984 4984 4984 168 169 SJAAAA EGGAAA AAAAxx +111 4217 1 3 1 11 11 111 111 111 111 22 23 HEAAAA FGGAAA HHHHxx +2296 4218 0 0 6 16 96 296 296 2296 2296 192 193 IKAAAA GGGAAA OOOOxx +5025 4219 1 1 5 5 25 25 1025 25 5025 50 51 HLAAAA HGGAAA VVVVxx +1756 4220 0 0 6 16 56 756 1756 1756 1756 112 113 OPAAAA IGGAAA AAAAxx +2885 4221 1 1 5 5 85 885 885 2885 2885 170 171 ZGAAAA JGGAAA HHHHxx +2541 4222 1 1 1 1 41 541 541 2541 2541 82 83 TTAAAA KGGAAA OOOOxx +1919 4223 1 3 9 19 19 919 1919 1919 1919 38 39 VVAAAA LGGAAA VVVVxx +6496 4224 0 0 6 16 96 496 496 1496 6496 192 193 WPAAAA MGGAAA AAAAxx +6103 4225 1 3 3 3 3 103 103 1103 6103 6 7 TAAAAA NGGAAA HHHHxx +98 4226 0 2 8 18 98 98 98 98 98 196 197 UDAAAA OGGAAA OOOOxx +3727 4227 1 3 7 7 27 727 1727 3727 3727 54 55 JNAAAA PGGAAA VVVVxx +689 4228 1 1 9 9 89 689 689 689 689 178 179 NAAAAA QGGAAA AAAAxx +7181 4229 1 1 1 1 81 181 1181 2181 7181 162 163 FQAAAA RGGAAA HHHHxx +8447 4230 1 3 7 7 47 447 447 3447 8447 94 95 XMAAAA SGGAAA OOOOxx +4569 4231 1 1 9 9 69 569 569 4569 4569 138 139 TTAAAA TGGAAA VVVVxx +8844 4232 0 0 4 4 44 844 844 3844 8844 88 89 ECAAAA UGGAAA AAAAxx +2436 4233 0 0 6 16 36 436 436 2436 2436 72 
73 SPAAAA VGGAAA HHHHxx +391 4234 1 3 1 11 91 391 391 391 391 182 183 BPAAAA WGGAAA OOOOxx +3035 4235 1 3 5 15 35 35 1035 3035 3035 70 71 TMAAAA XGGAAA VVVVxx +7583 4236 1 3 3 3 83 583 1583 2583 7583 166 167 RFAAAA YGGAAA AAAAxx +1145 4237 1 1 5 5 45 145 1145 1145 1145 90 91 BSAAAA ZGGAAA HHHHxx +93 4238 1 1 3 13 93 93 93 93 93 186 187 PDAAAA AHGAAA OOOOxx +8896 4239 0 0 6 16 96 896 896 3896 8896 192 193 EEAAAA BHGAAA VVVVxx +6719 4240 1 3 9 19 19 719 719 1719 6719 38 39 LYAAAA CHGAAA AAAAxx +7728 4241 0 0 8 8 28 728 1728 2728 7728 56 57 GLAAAA DHGAAA HHHHxx +1349 4242 1 1 9 9 49 349 1349 1349 1349 98 99 XZAAAA EHGAAA OOOOxx +5349 4243 1 1 9 9 49 349 1349 349 5349 98 99 TXAAAA FHGAAA VVVVxx +3040 4244 0 0 0 0 40 40 1040 3040 3040 80 81 YMAAAA GHGAAA AAAAxx +2414 4245 0 2 4 14 14 414 414 2414 2414 28 29 WOAAAA HHGAAA HHHHxx +5122 4246 0 2 2 2 22 122 1122 122 5122 44 45 APAAAA IHGAAA OOOOxx +9553 4247 1 1 3 13 53 553 1553 4553 9553 106 107 LDAAAA JHGAAA VVVVxx +5987 4248 1 3 7 7 87 987 1987 987 5987 174 175 HWAAAA KHGAAA AAAAxx +5939 4249 1 3 9 19 39 939 1939 939 5939 78 79 LUAAAA LHGAAA HHHHxx +3525 4250 1 1 5 5 25 525 1525 3525 3525 50 51 PFAAAA MHGAAA OOOOxx +1371 4251 1 3 1 11 71 371 1371 1371 1371 142 143 TAAAAA NHGAAA VVVVxx +618 4252 0 2 8 18 18 618 618 618 618 36 37 UXAAAA OHGAAA AAAAxx +6529 4253 1 1 9 9 29 529 529 1529 6529 58 59 DRAAAA PHGAAA HHHHxx +4010 4254 0 2 0 10 10 10 10 4010 4010 20 21 GYAAAA QHGAAA OOOOxx +328 4255 0 0 8 8 28 328 328 328 328 56 57 QMAAAA RHGAAA VVVVxx +6121 4256 1 1 1 1 21 121 121 1121 6121 42 43 LBAAAA SHGAAA AAAAxx +3505 4257 1 1 5 5 5 505 1505 3505 3505 10 11 VEAAAA THGAAA HHHHxx +2033 4258 1 1 3 13 33 33 33 2033 2033 66 67 FAAAAA UHGAAA OOOOxx +4724 4259 0 0 4 4 24 724 724 4724 4724 48 49 SZAAAA VHGAAA VVVVxx +8717 4260 1 1 7 17 17 717 717 3717 8717 34 35 HXAAAA WHGAAA AAAAxx +5639 4261 1 3 9 19 39 639 1639 639 5639 78 79 XIAAAA XHGAAA HHHHxx +3448 4262 0 0 8 8 48 448 1448 3448 3448 96 97 QCAAAA YHGAAA OOOOxx +2919 4263 1 3 9 19 19 919 919 2919 2919 38 39 HIAAAA ZHGAAA VVVVxx +3417 4264 1 1 7 17 17 417 1417 3417 3417 34 35 LBAAAA AIGAAA AAAAxx +943 4265 1 3 3 3 43 943 943 943 943 86 87 HKAAAA BIGAAA HHHHxx +775 4266 1 3 5 15 75 775 775 775 775 150 151 VDAAAA CIGAAA OOOOxx +2333 4267 1 1 3 13 33 333 333 2333 2333 66 67 TLAAAA DIGAAA VVVVxx +4801 4268 1 1 1 1 1 801 801 4801 4801 2 3 RCAAAA EIGAAA AAAAxx +7169 4269 1 1 9 9 69 169 1169 2169 7169 138 139 TPAAAA FIGAAA HHHHxx +2840 4270 0 0 0 0 40 840 840 2840 2840 80 81 GFAAAA GIGAAA OOOOxx +9034 4271 0 2 4 14 34 34 1034 4034 9034 68 69 MJAAAA HIGAAA VVVVxx +6154 4272 0 2 4 14 54 154 154 1154 6154 108 109 SCAAAA IIGAAA AAAAxx +1412 4273 0 0 2 12 12 412 1412 1412 1412 24 25 ICAAAA JIGAAA HHHHxx +2263 4274 1 3 3 3 63 263 263 2263 2263 126 127 BJAAAA KIGAAA OOOOxx +7118 4275 0 2 8 18 18 118 1118 2118 7118 36 37 UNAAAA LIGAAA VVVVxx +1526 4276 0 2 6 6 26 526 1526 1526 1526 52 53 SGAAAA MIGAAA AAAAxx +491 4277 1 3 1 11 91 491 491 491 491 182 183 XSAAAA NIGAAA HHHHxx +9732 4278 0 0 2 12 32 732 1732 4732 9732 64 65 IKAAAA OIGAAA OOOOxx +7067 4279 1 3 7 7 67 67 1067 2067 7067 134 135 VLAAAA PIGAAA VVVVxx +212 4280 0 0 2 12 12 212 212 212 212 24 25 EIAAAA QIGAAA AAAAxx +1955 4281 1 3 5 15 55 955 1955 1955 1955 110 111 FXAAAA RIGAAA HHHHxx +3303 4282 1 3 3 3 3 303 1303 3303 3303 6 7 BXAAAA SIGAAA OOOOxx +2715 4283 1 3 5 15 15 715 715 2715 2715 30 31 LAAAAA TIGAAA VVVVxx +8168 4284 0 0 8 8 68 168 168 3168 8168 136 137 ECAAAA UIGAAA AAAAxx +6799 4285 1 3 9 19 99 799 799 1799 6799 198 199 NBAAAA VIGAAA HHHHxx +5080 
4286 0 0 0 0 80 80 1080 80 5080 160 161 KNAAAA WIGAAA OOOOxx +4939 4287 1 3 9 19 39 939 939 4939 4939 78 79 ZHAAAA XIGAAA VVVVxx +6604 4288 0 0 4 4 4 604 604 1604 6604 8 9 AUAAAA YIGAAA AAAAxx +6531 4289 1 3 1 11 31 531 531 1531 6531 62 63 FRAAAA ZIGAAA HHHHxx +9948 4290 0 0 8 8 48 948 1948 4948 9948 96 97 QSAAAA AJGAAA OOOOxx +7923 4291 1 3 3 3 23 923 1923 2923 7923 46 47 TSAAAA BJGAAA VVVVxx +9905 4292 1 1 5 5 5 905 1905 4905 9905 10 11 ZQAAAA CJGAAA AAAAxx +340 4293 0 0 0 0 40 340 340 340 340 80 81 CNAAAA DJGAAA HHHHxx +1721 4294 1 1 1 1 21 721 1721 1721 1721 42 43 FOAAAA EJGAAA OOOOxx +9047 4295 1 3 7 7 47 47 1047 4047 9047 94 95 ZJAAAA FJGAAA VVVVxx +4723 4296 1 3 3 3 23 723 723 4723 4723 46 47 RZAAAA GJGAAA AAAAxx +5748 4297 0 0 8 8 48 748 1748 748 5748 96 97 CNAAAA HJGAAA HHHHxx +6845 4298 1 1 5 5 45 845 845 1845 6845 90 91 HDAAAA IJGAAA OOOOxx +1556 4299 0 0 6 16 56 556 1556 1556 1556 112 113 WHAAAA JJGAAA VVVVxx +9505 4300 1 1 5 5 5 505 1505 4505 9505 10 11 PBAAAA KJGAAA AAAAxx +3573 4301 1 1 3 13 73 573 1573 3573 3573 146 147 LHAAAA LJGAAA HHHHxx +3785 4302 1 1 5 5 85 785 1785 3785 3785 170 171 PPAAAA MJGAAA OOOOxx +2772 4303 0 0 2 12 72 772 772 2772 2772 144 145 QCAAAA NJGAAA VVVVxx +7282 4304 0 2 2 2 82 282 1282 2282 7282 164 165 CUAAAA OJGAAA AAAAxx +8106 4305 0 2 6 6 6 106 106 3106 8106 12 13 UZAAAA PJGAAA HHHHxx +2847 4306 1 3 7 7 47 847 847 2847 2847 94 95 NFAAAA QJGAAA OOOOxx +9803 4307 1 3 3 3 3 803 1803 4803 9803 6 7 BNAAAA RJGAAA VVVVxx +7719 4308 1 3 9 19 19 719 1719 2719 7719 38 39 XKAAAA SJGAAA AAAAxx +4649 4309 1 1 9 9 49 649 649 4649 4649 98 99 VWAAAA TJGAAA HHHHxx +6196 4310 0 0 6 16 96 196 196 1196 6196 192 193 IEAAAA UJGAAA OOOOxx +6026 4311 0 2 6 6 26 26 26 1026 6026 52 53 UXAAAA VJGAAA VVVVxx +1646 4312 0 2 6 6 46 646 1646 1646 1646 92 93 ILAAAA WJGAAA AAAAxx +6526 4313 0 2 6 6 26 526 526 1526 6526 52 53 ARAAAA XJGAAA HHHHxx +5110 4314 0 2 0 10 10 110 1110 110 5110 20 21 OOAAAA YJGAAA OOOOxx +3946 4315 0 2 6 6 46 946 1946 3946 3946 92 93 UVAAAA ZJGAAA VVVVxx +445 4316 1 1 5 5 45 445 445 445 445 90 91 DRAAAA AKGAAA AAAAxx +3249 4317 1 1 9 9 49 249 1249 3249 3249 98 99 ZUAAAA BKGAAA HHHHxx +2501 4318 1 1 1 1 1 501 501 2501 2501 2 3 FSAAAA CKGAAA OOOOxx +3243 4319 1 3 3 3 43 243 1243 3243 3243 86 87 TUAAAA DKGAAA VVVVxx +4701 4320 1 1 1 1 1 701 701 4701 4701 2 3 VYAAAA EKGAAA AAAAxx +472 4321 0 0 2 12 72 472 472 472 472 144 145 ESAAAA FKGAAA HHHHxx +3356 4322 0 0 6 16 56 356 1356 3356 3356 112 113 CZAAAA GKGAAA OOOOxx +9967 4323 1 3 7 7 67 967 1967 4967 9967 134 135 JTAAAA HKGAAA VVVVxx +4292 4324 0 0 2 12 92 292 292 4292 4292 184 185 CJAAAA IKGAAA AAAAxx +7005 4325 1 1 5 5 5 5 1005 2005 7005 10 11 LJAAAA JKGAAA HHHHxx +6267 4326 1 3 7 7 67 267 267 1267 6267 134 135 BHAAAA KKGAAA OOOOxx +6678 4327 0 2 8 18 78 678 678 1678 6678 156 157 WWAAAA LKGAAA VVVVxx +6083 4328 1 3 3 3 83 83 83 1083 6083 166 167 ZZAAAA MKGAAA AAAAxx +760 4329 0 0 0 0 60 760 760 760 760 120 121 GDAAAA NKGAAA HHHHxx +7833 4330 1 1 3 13 33 833 1833 2833 7833 66 67 HPAAAA OKGAAA OOOOxx +2877 4331 1 1 7 17 77 877 877 2877 2877 154 155 RGAAAA PKGAAA VVVVxx +8810 4332 0 2 0 10 10 810 810 3810 8810 20 21 WAAAAA QKGAAA AAAAxx +1560 4333 0 0 0 0 60 560 1560 1560 1560 120 121 AIAAAA RKGAAA HHHHxx +1367 4334 1 3 7 7 67 367 1367 1367 1367 134 135 PAAAAA SKGAAA OOOOxx +8756 4335 0 0 6 16 56 756 756 3756 8756 112 113 UYAAAA TKGAAA VVVVxx +1346 4336 0 2 6 6 46 346 1346 1346 1346 92 93 UZAAAA UKGAAA AAAAxx +6449 4337 1 1 9 9 49 449 449 1449 6449 98 99 BOAAAA VKGAAA HHHHxx +6658 4338 0 2 8 18 58 658 658 1658 
6658 116 117 CWAAAA WKGAAA OOOOxx +6745 4339 1 1 5 5 45 745 745 1745 6745 90 91 LZAAAA XKGAAA VVVVxx +4866 4340 0 2 6 6 66 866 866 4866 4866 132 133 EFAAAA YKGAAA AAAAxx +14 4341 0 2 4 14 14 14 14 14 14 28 29 OAAAAA ZKGAAA HHHHxx +4506 4342 0 2 6 6 6 506 506 4506 4506 12 13 IRAAAA ALGAAA OOOOxx +1923 4343 1 3 3 3 23 923 1923 1923 1923 46 47 ZVAAAA BLGAAA VVVVxx +8365 4344 1 1 5 5 65 365 365 3365 8365 130 131 TJAAAA CLGAAA AAAAxx +1279 4345 1 3 9 19 79 279 1279 1279 1279 158 159 FXAAAA DLGAAA HHHHxx +7666 4346 0 2 6 6 66 666 1666 2666 7666 132 133 WIAAAA ELGAAA OOOOxx +7404 4347 0 0 4 4 4 404 1404 2404 7404 8 9 UYAAAA FLGAAA VVVVxx +65 4348 1 1 5 5 65 65 65 65 65 130 131 NCAAAA GLGAAA AAAAxx +5820 4349 0 0 0 0 20 820 1820 820 5820 40 41 WPAAAA HLGAAA HHHHxx +459 4350 1 3 9 19 59 459 459 459 459 118 119 RRAAAA ILGAAA OOOOxx +4787 4351 1 3 7 7 87 787 787 4787 4787 174 175 DCAAAA JLGAAA VVVVxx +5631 4352 1 3 1 11 31 631 1631 631 5631 62 63 PIAAAA KLGAAA AAAAxx +9717 4353 1 1 7 17 17 717 1717 4717 9717 34 35 TJAAAA LLGAAA HHHHxx +2560 4354 0 0 0 0 60 560 560 2560 2560 120 121 MUAAAA MLGAAA OOOOxx +8295 4355 1 3 5 15 95 295 295 3295 8295 190 191 BHAAAA NLGAAA VVVVxx +3596 4356 0 0 6 16 96 596 1596 3596 3596 192 193 IIAAAA OLGAAA AAAAxx +2023 4357 1 3 3 3 23 23 23 2023 2023 46 47 VZAAAA PLGAAA HHHHxx +5055 4358 1 3 5 15 55 55 1055 55 5055 110 111 LMAAAA QLGAAA OOOOxx +763 4359 1 3 3 3 63 763 763 763 763 126 127 JDAAAA RLGAAA VVVVxx +6733 4360 1 1 3 13 33 733 733 1733 6733 66 67 ZYAAAA SLGAAA AAAAxx +9266 4361 0 2 6 6 66 266 1266 4266 9266 132 133 KSAAAA TLGAAA HHHHxx +4479 4362 1 3 9 19 79 479 479 4479 4479 158 159 HQAAAA ULGAAA OOOOxx +1816 4363 0 0 6 16 16 816 1816 1816 1816 32 33 WRAAAA VLGAAA VVVVxx +899 4364 1 3 9 19 99 899 899 899 899 198 199 PIAAAA WLGAAA AAAAxx +230 4365 0 2 0 10 30 230 230 230 230 60 61 WIAAAA XLGAAA HHHHxx +5362 4366 0 2 2 2 62 362 1362 362 5362 124 125 GYAAAA YLGAAA OOOOxx +1609 4367 1 1 9 9 9 609 1609 1609 1609 18 19 XJAAAA ZLGAAA VVVVxx +6750 4368 0 2 0 10 50 750 750 1750 6750 100 101 QZAAAA AMGAAA AAAAxx +9704 4369 0 0 4 4 4 704 1704 4704 9704 8 9 GJAAAA BMGAAA HHHHxx +3991 4370 1 3 1 11 91 991 1991 3991 3991 182 183 NXAAAA CMGAAA OOOOxx +3959 4371 1 3 9 19 59 959 1959 3959 3959 118 119 HWAAAA DMGAAA VVVVxx +9021 4372 1 1 1 1 21 21 1021 4021 9021 42 43 ZIAAAA EMGAAA AAAAxx +7585 4373 1 1 5 5 85 585 1585 2585 7585 170 171 TFAAAA FMGAAA HHHHxx +7083 4374 1 3 3 3 83 83 1083 2083 7083 166 167 LMAAAA GMGAAA OOOOxx +7688 4375 0 0 8 8 88 688 1688 2688 7688 176 177 SJAAAA HMGAAA VVVVxx +2673 4376 1 1 3 13 73 673 673 2673 2673 146 147 VYAAAA IMGAAA AAAAxx +3554 4377 0 2 4 14 54 554 1554 3554 3554 108 109 SGAAAA JMGAAA HHHHxx +7416 4378 0 0 6 16 16 416 1416 2416 7416 32 33 GZAAAA KMGAAA OOOOxx +5672 4379 0 0 2 12 72 672 1672 672 5672 144 145 EKAAAA LMGAAA VVVVxx +1355 4380 1 3 5 15 55 355 1355 1355 1355 110 111 DAAAAA MMGAAA AAAAxx +3149 4381 1 1 9 9 49 149 1149 3149 3149 98 99 DRAAAA NMGAAA HHHHxx +5811 4382 1 3 1 11 11 811 1811 811 5811 22 23 NPAAAA OMGAAA OOOOxx +3759 4383 1 3 9 19 59 759 1759 3759 3759 118 119 POAAAA PMGAAA VVVVxx +5634 4384 0 2 4 14 34 634 1634 634 5634 68 69 SIAAAA QMGAAA AAAAxx +8617 4385 1 1 7 17 17 617 617 3617 8617 34 35 LTAAAA RMGAAA HHHHxx +8949 4386 1 1 9 9 49 949 949 3949 8949 98 99 FGAAAA SMGAAA OOOOxx +3964 4387 0 0 4 4 64 964 1964 3964 3964 128 129 MWAAAA TMGAAA VVVVxx +3852 4388 0 0 2 12 52 852 1852 3852 3852 104 105 ESAAAA UMGAAA AAAAxx +1555 4389 1 3 5 15 55 555 1555 1555 1555 110 111 VHAAAA VMGAAA HHHHxx +6536 4390 0 0 6 16 36 536 536 
1536 6536 72 73 KRAAAA WMGAAA OOOOxx +4779 4391 1 3 9 19 79 779 779 4779 4779 158 159 VBAAAA XMGAAA VVVVxx +1893 4392 1 1 3 13 93 893 1893 1893 1893 186 187 VUAAAA YMGAAA AAAAxx +9358 4393 0 2 8 18 58 358 1358 4358 9358 116 117 YVAAAA ZMGAAA HHHHxx +7438 4394 0 2 8 18 38 438 1438 2438 7438 76 77 CAAAAA ANGAAA OOOOxx +941 4395 1 1 1 1 41 941 941 941 941 82 83 FKAAAA BNGAAA VVVVxx +4844 4396 0 0 4 4 44 844 844 4844 4844 88 89 IEAAAA CNGAAA AAAAxx +4745 4397 1 1 5 5 45 745 745 4745 4745 90 91 NAAAAA DNGAAA HHHHxx +1017 4398 1 1 7 17 17 17 1017 1017 1017 34 35 DNAAAA ENGAAA OOOOxx +327 4399 1 3 7 7 27 327 327 327 327 54 55 PMAAAA FNGAAA VVVVxx +3152 4400 0 0 2 12 52 152 1152 3152 3152 104 105 GRAAAA GNGAAA AAAAxx +4711 4401 1 3 1 11 11 711 711 4711 4711 22 23 FZAAAA HNGAAA HHHHxx +141 4402 1 1 1 1 41 141 141 141 141 82 83 LFAAAA INGAAA OOOOxx +1303 4403 1 3 3 3 3 303 1303 1303 1303 6 7 DYAAAA JNGAAA VVVVxx +8873 4404 1 1 3 13 73 873 873 3873 8873 146 147 HDAAAA KNGAAA AAAAxx +8481 4405 1 1 1 1 81 481 481 3481 8481 162 163 FOAAAA LNGAAA HHHHxx +5445 4406 1 1 5 5 45 445 1445 445 5445 90 91 LBAAAA MNGAAA OOOOxx +7868 4407 0 0 8 8 68 868 1868 2868 7868 136 137 QQAAAA NNGAAA VVVVxx +6722 4408 0 2 2 2 22 722 722 1722 6722 44 45 OYAAAA ONGAAA AAAAxx +6628 4409 0 0 8 8 28 628 628 1628 6628 56 57 YUAAAA PNGAAA HHHHxx +7738 4410 0 2 8 18 38 738 1738 2738 7738 76 77 QLAAAA QNGAAA OOOOxx +1018 4411 0 2 8 18 18 18 1018 1018 1018 36 37 ENAAAA RNGAAA VVVVxx +3296 4412 0 0 6 16 96 296 1296 3296 3296 192 193 UWAAAA SNGAAA AAAAxx +1946 4413 0 2 6 6 46 946 1946 1946 1946 92 93 WWAAAA TNGAAA HHHHxx +6603 4414 1 3 3 3 3 603 603 1603 6603 6 7 ZTAAAA UNGAAA OOOOxx +3562 4415 0 2 2 2 62 562 1562 3562 3562 124 125 AHAAAA VNGAAA VVVVxx +1147 4416 1 3 7 7 47 147 1147 1147 1147 94 95 DSAAAA WNGAAA AAAAxx +6031 4417 1 3 1 11 31 31 31 1031 6031 62 63 ZXAAAA XNGAAA HHHHxx +6484 4418 0 0 4 4 84 484 484 1484 6484 168 169 KPAAAA YNGAAA OOOOxx +496 4419 0 0 6 16 96 496 496 496 496 192 193 CTAAAA ZNGAAA VVVVxx +4563 4420 1 3 3 3 63 563 563 4563 4563 126 127 NTAAAA AOGAAA AAAAxx +1037 4421 1 1 7 17 37 37 1037 1037 1037 74 75 XNAAAA BOGAAA HHHHxx +9672 4422 0 0 2 12 72 672 1672 4672 9672 144 145 AIAAAA COGAAA OOOOxx +9053 4423 1 1 3 13 53 53 1053 4053 9053 106 107 FKAAAA DOGAAA VVVVxx +2523 4424 1 3 3 3 23 523 523 2523 2523 46 47 BTAAAA EOGAAA AAAAxx +8519 4425 1 3 9 19 19 519 519 3519 8519 38 39 RPAAAA FOGAAA HHHHxx +8190 4426 0 2 0 10 90 190 190 3190 8190 180 181 ADAAAA GOGAAA OOOOxx +2068 4427 0 0 8 8 68 68 68 2068 2068 136 137 OBAAAA HOGAAA VVVVxx +8569 4428 1 1 9 9 69 569 569 3569 8569 138 139 PRAAAA IOGAAA AAAAxx +6535 4429 1 3 5 15 35 535 535 1535 6535 70 71 JRAAAA JOGAAA HHHHxx +1810 4430 0 2 0 10 10 810 1810 1810 1810 20 21 QRAAAA KOGAAA OOOOxx +3099 4431 1 3 9 19 99 99 1099 3099 3099 198 199 FPAAAA LOGAAA VVVVxx +7466 4432 0 2 6 6 66 466 1466 2466 7466 132 133 EBAAAA MOGAAA AAAAxx +4017 4433 1 1 7 17 17 17 17 4017 4017 34 35 NYAAAA NOGAAA HHHHxx +1097 4434 1 1 7 17 97 97 1097 1097 1097 194 195 FQAAAA OOGAAA OOOOxx +7686 4435 0 2 6 6 86 686 1686 2686 7686 172 173 QJAAAA POGAAA VVVVxx +6742 4436 0 2 2 2 42 742 742 1742 6742 84 85 IZAAAA QOGAAA AAAAxx +5966 4437 0 2 6 6 66 966 1966 966 5966 132 133 MVAAAA ROGAAA HHHHxx +3632 4438 0 0 2 12 32 632 1632 3632 3632 64 65 SJAAAA SOGAAA OOOOxx +8837 4439 1 1 7 17 37 837 837 3837 8837 74 75 XBAAAA TOGAAA VVVVxx +1667 4440 1 3 7 7 67 667 1667 1667 1667 134 135 DMAAAA UOGAAA AAAAxx +8833 4441 1 1 3 13 33 833 833 3833 8833 66 67 TBAAAA VOGAAA HHHHxx +9805 4442 1 1 5 5 5 805 1805 4805 
9805 10 11 DNAAAA WOGAAA OOOOxx +3650 4443 0 2 0 10 50 650 1650 3650 3650 100 101 KKAAAA XOGAAA VVVVxx +2237 4444 1 1 7 17 37 237 237 2237 2237 74 75 BIAAAA YOGAAA AAAAxx +9980 4445 0 0 0 0 80 980 1980 4980 9980 160 161 WTAAAA ZOGAAA HHHHxx +2861 4446 1 1 1 1 61 861 861 2861 2861 122 123 BGAAAA APGAAA OOOOxx +1334 4447 0 2 4 14 34 334 1334 1334 1334 68 69 IZAAAA BPGAAA VVVVxx +842 4448 0 2 2 2 42 842 842 842 842 84 85 KGAAAA CPGAAA AAAAxx +1116 4449 0 0 6 16 16 116 1116 1116 1116 32 33 YQAAAA DPGAAA HHHHxx +4055 4450 1 3 5 15 55 55 55 4055 4055 110 111 ZZAAAA EPGAAA OOOOxx +3842 4451 0 2 2 2 42 842 1842 3842 3842 84 85 URAAAA FPGAAA VVVVxx +1886 4452 0 2 6 6 86 886 1886 1886 1886 172 173 OUAAAA GPGAAA AAAAxx +8589 4453 1 1 9 9 89 589 589 3589 8589 178 179 JSAAAA HPGAAA HHHHxx +5873 4454 1 1 3 13 73 873 1873 873 5873 146 147 XRAAAA IPGAAA OOOOxx +7711 4455 1 3 1 11 11 711 1711 2711 7711 22 23 PKAAAA JPGAAA VVVVxx +911 4456 1 3 1 11 11 911 911 911 911 22 23 BJAAAA KPGAAA AAAAxx +5837 4457 1 1 7 17 37 837 1837 837 5837 74 75 NQAAAA LPGAAA HHHHxx +897 4458 1 1 7 17 97 897 897 897 897 194 195 NIAAAA MPGAAA OOOOxx +4299 4459 1 3 9 19 99 299 299 4299 4299 198 199 JJAAAA NPGAAA VVVVxx +7774 4460 0 2 4 14 74 774 1774 2774 7774 148 149 ANAAAA OPGAAA AAAAxx +7832 4461 0 0 2 12 32 832 1832 2832 7832 64 65 GPAAAA PPGAAA HHHHxx +9915 4462 1 3 5 15 15 915 1915 4915 9915 30 31 JRAAAA QPGAAA OOOOxx +9 4463 1 1 9 9 9 9 9 9 9 18 19 JAAAAA RPGAAA VVVVxx +9675 4464 1 3 5 15 75 675 1675 4675 9675 150 151 DIAAAA SPGAAA AAAAxx +7953 4465 1 1 3 13 53 953 1953 2953 7953 106 107 XTAAAA TPGAAA HHHHxx +8912 4466 0 0 2 12 12 912 912 3912 8912 24 25 UEAAAA UPGAAA OOOOxx +4188 4467 0 0 8 8 88 188 188 4188 4188 176 177 CFAAAA VPGAAA VVVVxx +8446 4468 0 2 6 6 46 446 446 3446 8446 92 93 WMAAAA WPGAAA AAAAxx +1600 4469 0 0 0 0 0 600 1600 1600 1600 0 1 OJAAAA XPGAAA HHHHxx +43 4470 1 3 3 3 43 43 43 43 43 86 87 RBAAAA YPGAAA OOOOxx +544 4471 0 0 4 4 44 544 544 544 544 88 89 YUAAAA ZPGAAA VVVVxx +6977 4472 1 1 7 17 77 977 977 1977 6977 154 155 JIAAAA AQGAAA AAAAxx +3191 4473 1 3 1 11 91 191 1191 3191 3191 182 183 TSAAAA BQGAAA HHHHxx +418 4474 0 2 8 18 18 418 418 418 418 36 37 CQAAAA CQGAAA OOOOxx +3142 4475 0 2 2 2 42 142 1142 3142 3142 84 85 WQAAAA DQGAAA VVVVxx +5042 4476 0 2 2 2 42 42 1042 42 5042 84 85 YLAAAA EQGAAA AAAAxx +2194 4477 0 2 4 14 94 194 194 2194 2194 188 189 KGAAAA FQGAAA HHHHxx +2397 4478 1 1 7 17 97 397 397 2397 2397 194 195 FOAAAA GQGAAA OOOOxx +4684 4479 0 0 4 4 84 684 684 4684 4684 168 169 EYAAAA HQGAAA VVVVxx +34 4480 0 2 4 14 34 34 34 34 34 68 69 IBAAAA IQGAAA AAAAxx +3844 4481 0 0 4 4 44 844 1844 3844 3844 88 89 WRAAAA JQGAAA HHHHxx +7824 4482 0 0 4 4 24 824 1824 2824 7824 48 49 YOAAAA KQGAAA OOOOxx +6177 4483 1 1 7 17 77 177 177 1177 6177 154 155 PDAAAA LQGAAA VVVVxx +9657 4484 1 1 7 17 57 657 1657 4657 9657 114 115 LHAAAA MQGAAA AAAAxx +4546 4485 0 2 6 6 46 546 546 4546 4546 92 93 WSAAAA NQGAAA HHHHxx +599 4486 1 3 9 19 99 599 599 599 599 198 199 BXAAAA OQGAAA OOOOxx +153 4487 1 1 3 13 53 153 153 153 153 106 107 XFAAAA PQGAAA VVVVxx +6910 4488 0 2 0 10 10 910 910 1910 6910 20 21 UFAAAA QQGAAA AAAAxx +4408 4489 0 0 8 8 8 408 408 4408 4408 16 17 ONAAAA RQGAAA HHHHxx +1164 4490 0 0 4 4 64 164 1164 1164 1164 128 129 USAAAA SQGAAA OOOOxx +6469 4491 1 1 9 9 69 469 469 1469 6469 138 139 VOAAAA TQGAAA VVVVxx +5996 4492 0 0 6 16 96 996 1996 996 5996 192 193 QWAAAA UQGAAA AAAAxx +2639 4493 1 3 9 19 39 639 639 2639 2639 78 79 NXAAAA VQGAAA HHHHxx +2678 4494 0 2 8 18 78 678 678 2678 2678 156 157 AZAAAA WQGAAA 
OOOOxx +8392 4495 0 0 2 12 92 392 392 3392 8392 184 185 UKAAAA XQGAAA VVVVxx +1386 4496 0 2 6 6 86 386 1386 1386 1386 172 173 IBAAAA YQGAAA AAAAxx +5125 4497 1 1 5 5 25 125 1125 125 5125 50 51 DPAAAA ZQGAAA HHHHxx +8453 4498 1 1 3 13 53 453 453 3453 8453 106 107 DNAAAA ARGAAA OOOOxx +2369 4499 1 1 9 9 69 369 369 2369 2369 138 139 DNAAAA BRGAAA VVVVxx +1608 4500 0 0 8 8 8 608 1608 1608 1608 16 17 WJAAAA CRGAAA AAAAxx +3781 4501 1 1 1 1 81 781 1781 3781 3781 162 163 LPAAAA DRGAAA HHHHxx +903 4502 1 3 3 3 3 903 903 903 903 6 7 TIAAAA ERGAAA OOOOxx +2099 4503 1 3 9 19 99 99 99 2099 2099 198 199 TCAAAA FRGAAA VVVVxx +538 4504 0 2 8 18 38 538 538 538 538 76 77 SUAAAA GRGAAA AAAAxx +9177 4505 1 1 7 17 77 177 1177 4177 9177 154 155 ZOAAAA HRGAAA HHHHxx +420 4506 0 0 0 0 20 420 420 420 420 40 41 EQAAAA IRGAAA OOOOxx +9080 4507 0 0 0 0 80 80 1080 4080 9080 160 161 GLAAAA JRGAAA VVVVxx +2630 4508 0 2 0 10 30 630 630 2630 2630 60 61 EXAAAA KRGAAA AAAAxx +5978 4509 0 2 8 18 78 978 1978 978 5978 156 157 YVAAAA LRGAAA HHHHxx +9239 4510 1 3 9 19 39 239 1239 4239 9239 78 79 JRAAAA MRGAAA OOOOxx +4372 4511 0 0 2 12 72 372 372 4372 4372 144 145 EMAAAA NRGAAA VVVVxx +4357 4512 1 1 7 17 57 357 357 4357 4357 114 115 PLAAAA ORGAAA AAAAxx +9857 4513 1 1 7 17 57 857 1857 4857 9857 114 115 DPAAAA PRGAAA HHHHxx +7933 4514 1 1 3 13 33 933 1933 2933 7933 66 67 DTAAAA QRGAAA OOOOxx +9574 4515 0 2 4 14 74 574 1574 4574 9574 148 149 GEAAAA RRGAAA VVVVxx +8294 4516 0 2 4 14 94 294 294 3294 8294 188 189 AHAAAA SRGAAA AAAAxx +627 4517 1 3 7 7 27 627 627 627 627 54 55 DYAAAA TRGAAA HHHHxx +3229 4518 1 1 9 9 29 229 1229 3229 3229 58 59 FUAAAA URGAAA OOOOxx +3163 4519 1 3 3 3 63 163 1163 3163 3163 126 127 RRAAAA VRGAAA VVVVxx +7349 4520 1 1 9 9 49 349 1349 2349 7349 98 99 RWAAAA WRGAAA AAAAxx +6889 4521 1 1 9 9 89 889 889 1889 6889 178 179 ZEAAAA XRGAAA HHHHxx +2101 4522 1 1 1 1 1 101 101 2101 2101 2 3 VCAAAA YRGAAA OOOOxx +6476 4523 0 0 6 16 76 476 476 1476 6476 152 153 CPAAAA ZRGAAA VVVVxx +6765 4524 1 1 5 5 65 765 765 1765 6765 130 131 FAAAAA ASGAAA AAAAxx +4204 4525 0 0 4 4 4 204 204 4204 4204 8 9 SFAAAA BSGAAA HHHHxx +5915 4526 1 3 5 15 15 915 1915 915 5915 30 31 NTAAAA CSGAAA OOOOxx +2318 4527 0 2 8 18 18 318 318 2318 2318 36 37 ELAAAA DSGAAA VVVVxx +294 4528 0 2 4 14 94 294 294 294 294 188 189 ILAAAA ESGAAA AAAAxx +5245 4529 1 1 5 5 45 245 1245 245 5245 90 91 TTAAAA FSGAAA HHHHxx +4481 4530 1 1 1 1 81 481 481 4481 4481 162 163 JQAAAA GSGAAA OOOOxx +7754 4531 0 2 4 14 54 754 1754 2754 7754 108 109 GMAAAA HSGAAA VVVVxx +8494 4532 0 2 4 14 94 494 494 3494 8494 188 189 SOAAAA ISGAAA AAAAxx +4014 4533 0 2 4 14 14 14 14 4014 4014 28 29 KYAAAA JSGAAA HHHHxx +2197 4534 1 1 7 17 97 197 197 2197 2197 194 195 NGAAAA KSGAAA OOOOxx +1297 4535 1 1 7 17 97 297 1297 1297 1297 194 195 XXAAAA LSGAAA VVVVxx +1066 4536 0 2 6 6 66 66 1066 1066 1066 132 133 APAAAA MSGAAA AAAAxx +5710 4537 0 2 0 10 10 710 1710 710 5710 20 21 QLAAAA NSGAAA HHHHxx +4100 4538 0 0 0 0 0 100 100 4100 4100 0 1 SBAAAA OSGAAA OOOOxx +7356 4539 0 0 6 16 56 356 1356 2356 7356 112 113 YWAAAA PSGAAA VVVVxx +7658 4540 0 2 8 18 58 658 1658 2658 7658 116 117 OIAAAA QSGAAA AAAAxx +3666 4541 0 2 6 6 66 666 1666 3666 3666 132 133 ALAAAA RSGAAA HHHHxx +9713 4542 1 1 3 13 13 713 1713 4713 9713 26 27 PJAAAA SSGAAA OOOOxx +691 4543 1 3 1 11 91 691 691 691 691 182 183 PAAAAA TSGAAA VVVVxx +3112 4544 0 0 2 12 12 112 1112 3112 3112 24 25 SPAAAA USGAAA AAAAxx +6035 4545 1 3 5 15 35 35 35 1035 6035 70 71 DYAAAA VSGAAA HHHHxx +8353 4546 1 1 3 13 53 353 353 3353 8353 106 107 HJAAAA 
WSGAAA OOOOxx +5679 4547 1 3 9 19 79 679 1679 679 5679 158 159 LKAAAA XSGAAA VVVVxx +2124 4548 0 0 4 4 24 124 124 2124 2124 48 49 SDAAAA YSGAAA AAAAxx +4714 4549 0 2 4 14 14 714 714 4714 4714 28 29 IZAAAA ZSGAAA HHHHxx +9048 4550 0 0 8 8 48 48 1048 4048 9048 96 97 AKAAAA ATGAAA OOOOxx +7692 4551 0 0 2 12 92 692 1692 2692 7692 184 185 WJAAAA BTGAAA VVVVxx +4542 4552 0 2 2 2 42 542 542 4542 4542 84 85 SSAAAA CTGAAA AAAAxx +8737 4553 1 1 7 17 37 737 737 3737 8737 74 75 BYAAAA DTGAAA HHHHxx +4977 4554 1 1 7 17 77 977 977 4977 4977 154 155 LJAAAA ETGAAA OOOOxx +9349 4555 1 1 9 9 49 349 1349 4349 9349 98 99 PVAAAA FTGAAA VVVVxx +731 4556 1 3 1 11 31 731 731 731 731 62 63 DCAAAA GTGAAA AAAAxx +1788 4557 0 0 8 8 88 788 1788 1788 1788 176 177 UQAAAA HTGAAA HHHHxx +7830 4558 0 2 0 10 30 830 1830 2830 7830 60 61 EPAAAA ITGAAA OOOOxx +3977 4559 1 1 7 17 77 977 1977 3977 3977 154 155 ZWAAAA JTGAAA VVVVxx +2421 4560 1 1 1 1 21 421 421 2421 2421 42 43 DPAAAA KTGAAA AAAAxx +5891 4561 1 3 1 11 91 891 1891 891 5891 182 183 PSAAAA LTGAAA HHHHxx +1111 4562 1 3 1 11 11 111 1111 1111 1111 22 23 TQAAAA MTGAAA OOOOxx +9224 4563 0 0 4 4 24 224 1224 4224 9224 48 49 UQAAAA NTGAAA VVVVxx +9872 4564 0 0 2 12 72 872 1872 4872 9872 144 145 SPAAAA OTGAAA AAAAxx +2433 4565 1 1 3 13 33 433 433 2433 2433 66 67 PPAAAA PTGAAA HHHHxx +1491 4566 1 3 1 11 91 491 1491 1491 1491 182 183 JFAAAA QTGAAA OOOOxx +6653 4567 1 1 3 13 53 653 653 1653 6653 106 107 XVAAAA RTGAAA VVVVxx +1907 4568 1 3 7 7 7 907 1907 1907 1907 14 15 JVAAAA STGAAA AAAAxx +889 4569 1 1 9 9 89 889 889 889 889 178 179 FIAAAA TTGAAA HHHHxx +561 4570 1 1 1 1 61 561 561 561 561 122 123 PVAAAA UTGAAA OOOOxx +7415 4571 1 3 5 15 15 415 1415 2415 7415 30 31 FZAAAA VTGAAA VVVVxx +2703 4572 1 3 3 3 3 703 703 2703 2703 6 7 ZZAAAA WTGAAA AAAAxx +2561 4573 1 1 1 1 61 561 561 2561 2561 122 123 NUAAAA XTGAAA HHHHxx +1257 4574 1 1 7 17 57 257 1257 1257 1257 114 115 JWAAAA YTGAAA OOOOxx +2390 4575 0 2 0 10 90 390 390 2390 2390 180 181 YNAAAA ZTGAAA VVVVxx +3915 4576 1 3 5 15 15 915 1915 3915 3915 30 31 PUAAAA AUGAAA AAAAxx +8476 4577 0 0 6 16 76 476 476 3476 8476 152 153 AOAAAA BUGAAA HHHHxx +607 4578 1 3 7 7 7 607 607 607 607 14 15 JXAAAA CUGAAA OOOOxx +3891 4579 1 3 1 11 91 891 1891 3891 3891 182 183 RTAAAA DUGAAA VVVVxx +7269 4580 1 1 9 9 69 269 1269 2269 7269 138 139 PTAAAA EUGAAA AAAAxx +9537 4581 1 1 7 17 37 537 1537 4537 9537 74 75 VCAAAA FUGAAA HHHHxx +8518 4582 0 2 8 18 18 518 518 3518 8518 36 37 QPAAAA GUGAAA OOOOxx +5221 4583 1 1 1 1 21 221 1221 221 5221 42 43 VSAAAA HUGAAA VVVVxx +3274 4584 0 2 4 14 74 274 1274 3274 3274 148 149 YVAAAA IUGAAA AAAAxx +6677 4585 1 1 7 17 77 677 677 1677 6677 154 155 VWAAAA JUGAAA HHHHxx +3114 4586 0 2 4 14 14 114 1114 3114 3114 28 29 UPAAAA KUGAAA OOOOxx +1966 4587 0 2 6 6 66 966 1966 1966 1966 132 133 QXAAAA LUGAAA VVVVxx +5941 4588 1 1 1 1 41 941 1941 941 5941 82 83 NUAAAA MUGAAA AAAAxx +9463 4589 1 3 3 3 63 463 1463 4463 9463 126 127 ZZAAAA NUGAAA HHHHxx +8966 4590 0 2 6 6 66 966 966 3966 8966 132 133 WGAAAA OUGAAA OOOOxx +4402 4591 0 2 2 2 2 402 402 4402 4402 4 5 INAAAA PUGAAA VVVVxx +3364 4592 0 0 4 4 64 364 1364 3364 3364 128 129 KZAAAA QUGAAA AAAAxx +3698 4593 0 2 8 18 98 698 1698 3698 3698 196 197 GMAAAA RUGAAA HHHHxx +4651 4594 1 3 1 11 51 651 651 4651 4651 102 103 XWAAAA SUGAAA OOOOxx +2127 4595 1 3 7 7 27 127 127 2127 2127 54 55 VDAAAA TUGAAA VVVVxx +3614 4596 0 2 4 14 14 614 1614 3614 3614 28 29 AJAAAA UUGAAA AAAAxx +5430 4597 0 2 0 10 30 430 1430 430 5430 60 61 WAAAAA VUGAAA HHHHxx +3361 4598 1 1 1 1 61 361 1361 3361 3361 
122 123 HZAAAA WUGAAA OOOOxx +4798 4599 0 2 8 18 98 798 798 4798 4798 196 197 OCAAAA XUGAAA VVVVxx +8269 4600 1 1 9 9 69 269 269 3269 8269 138 139 BGAAAA YUGAAA AAAAxx +6458 4601 0 2 8 18 58 458 458 1458 6458 116 117 KOAAAA ZUGAAA HHHHxx +3358 4602 0 2 8 18 58 358 1358 3358 3358 116 117 EZAAAA AVGAAA OOOOxx +5898 4603 0 2 8 18 98 898 1898 898 5898 196 197 WSAAAA BVGAAA VVVVxx +1880 4604 0 0 0 0 80 880 1880 1880 1880 160 161 IUAAAA CVGAAA AAAAxx +782 4605 0 2 2 2 82 782 782 782 782 164 165 CEAAAA DVGAAA HHHHxx +3102 4606 0 2 2 2 2 102 1102 3102 3102 4 5 IPAAAA EVGAAA OOOOxx +6366 4607 0 2 6 6 66 366 366 1366 6366 132 133 WKAAAA FVGAAA VVVVxx +399 4608 1 3 9 19 99 399 399 399 399 198 199 JPAAAA GVGAAA AAAAxx +6773 4609 1 1 3 13 73 773 773 1773 6773 146 147 NAAAAA HVGAAA HHHHxx +7942 4610 0 2 2 2 42 942 1942 2942 7942 84 85 MTAAAA IVGAAA OOOOxx +6274 4611 0 2 4 14 74 274 274 1274 6274 148 149 IHAAAA JVGAAA VVVVxx +7447 4612 1 3 7 7 47 447 1447 2447 7447 94 95 LAAAAA KVGAAA AAAAxx +7648 4613 0 0 8 8 48 648 1648 2648 7648 96 97 EIAAAA LVGAAA HHHHxx +3997 4614 1 1 7 17 97 997 1997 3997 3997 194 195 TXAAAA MVGAAA OOOOxx +1759 4615 1 3 9 19 59 759 1759 1759 1759 118 119 RPAAAA NVGAAA VVVVxx +1785 4616 1 1 5 5 85 785 1785 1785 1785 170 171 RQAAAA OVGAAA AAAAxx +8930 4617 0 2 0 10 30 930 930 3930 8930 60 61 MFAAAA PVGAAA HHHHxx +7595 4618 1 3 5 15 95 595 1595 2595 7595 190 191 DGAAAA QVGAAA OOOOxx +6752 4619 0 0 2 12 52 752 752 1752 6752 104 105 SZAAAA RVGAAA VVVVxx +5635 4620 1 3 5 15 35 635 1635 635 5635 70 71 TIAAAA SVGAAA AAAAxx +1579 4621 1 3 9 19 79 579 1579 1579 1579 158 159 TIAAAA TVGAAA HHHHxx +7743 4622 1 3 3 3 43 743 1743 2743 7743 86 87 VLAAAA UVGAAA OOOOxx +5856 4623 0 0 6 16 56 856 1856 856 5856 112 113 GRAAAA VVGAAA VVVVxx +7273 4624 1 1 3 13 73 273 1273 2273 7273 146 147 TTAAAA WVGAAA AAAAxx +1399 4625 1 3 9 19 99 399 1399 1399 1399 198 199 VBAAAA XVGAAA HHHHxx +3694 4626 0 2 4 14 94 694 1694 3694 3694 188 189 CMAAAA YVGAAA OOOOxx +2782 4627 0 2 2 2 82 782 782 2782 2782 164 165 ADAAAA ZVGAAA VVVVxx +6951 4628 1 3 1 11 51 951 951 1951 6951 102 103 JHAAAA AWGAAA AAAAxx +6053 4629 1 1 3 13 53 53 53 1053 6053 106 107 VYAAAA BWGAAA HHHHxx +1753 4630 1 1 3 13 53 753 1753 1753 1753 106 107 LPAAAA CWGAAA OOOOxx +3985 4631 1 1 5 5 85 985 1985 3985 3985 170 171 HXAAAA DWGAAA VVVVxx +6159 4632 1 3 9 19 59 159 159 1159 6159 118 119 XCAAAA EWGAAA AAAAxx +6250 4633 0 2 0 10 50 250 250 1250 6250 100 101 KGAAAA FWGAAA HHHHxx +6240 4634 0 0 0 0 40 240 240 1240 6240 80 81 AGAAAA GWGAAA OOOOxx +6571 4635 1 3 1 11 71 571 571 1571 6571 142 143 TSAAAA HWGAAA VVVVxx +8624 4636 0 0 4 4 24 624 624 3624 8624 48 49 STAAAA IWGAAA AAAAxx +9718 4637 0 2 8 18 18 718 1718 4718 9718 36 37 UJAAAA JWGAAA HHHHxx +5529 4638 1 1 9 9 29 529 1529 529 5529 58 59 REAAAA KWGAAA OOOOxx +7089 4639 1 1 9 9 89 89 1089 2089 7089 178 179 RMAAAA LWGAAA VVVVxx +5488 4640 0 0 8 8 88 488 1488 488 5488 176 177 CDAAAA MWGAAA AAAAxx +5444 4641 0 0 4 4 44 444 1444 444 5444 88 89 KBAAAA NWGAAA HHHHxx +4899 4642 1 3 9 19 99 899 899 4899 4899 198 199 LGAAAA OWGAAA OOOOxx +7928 4643 0 0 8 8 28 928 1928 2928 7928 56 57 YSAAAA PWGAAA VVVVxx +4736 4644 0 0 6 16 36 736 736 4736 4736 72 73 EAAAAA QWGAAA AAAAxx +4317 4645 1 1 7 17 17 317 317 4317 4317 34 35 BKAAAA RWGAAA HHHHxx +1174 4646 0 2 4 14 74 174 1174 1174 1174 148 149 ETAAAA SWGAAA OOOOxx +6138 4647 0 2 8 18 38 138 138 1138 6138 76 77 CCAAAA TWGAAA VVVVxx +3943 4648 1 3 3 3 43 943 1943 3943 3943 86 87 RVAAAA UWGAAA AAAAxx +1545 4649 1 1 5 5 45 545 1545 1545 1545 90 91 LHAAAA VWGAAA HHHHxx 
+6867 4650 1 3 7 7 67 867 867 1867 6867 134 135 DEAAAA WWGAAA OOOOxx +6832 4651 0 0 2 12 32 832 832 1832 6832 64 65 UCAAAA XWGAAA VVVVxx +2987 4652 1 3 7 7 87 987 987 2987 2987 174 175 XKAAAA YWGAAA AAAAxx +5169 4653 1 1 9 9 69 169 1169 169 5169 138 139 VQAAAA ZWGAAA HHHHxx +8998 4654 0 2 8 18 98 998 998 3998 8998 196 197 CIAAAA AXGAAA OOOOxx +9347 4655 1 3 7 7 47 347 1347 4347 9347 94 95 NVAAAA BXGAAA VVVVxx +4800 4656 0 0 0 0 0 800 800 4800 4800 0 1 QCAAAA CXGAAA AAAAxx +4200 4657 0 0 0 0 0 200 200 4200 4200 0 1 OFAAAA DXGAAA HHHHxx +4046 4658 0 2 6 6 46 46 46 4046 4046 92 93 QZAAAA EXGAAA OOOOxx +7142 4659 0 2 2 2 42 142 1142 2142 7142 84 85 SOAAAA FXGAAA VVVVxx +2733 4660 1 1 3 13 33 733 733 2733 2733 66 67 DBAAAA GXGAAA AAAAxx +1568 4661 0 0 8 8 68 568 1568 1568 1568 136 137 IIAAAA HXGAAA HHHHxx +5105 4662 1 1 5 5 5 105 1105 105 5105 10 11 JOAAAA IXGAAA OOOOxx +9115 4663 1 3 5 15 15 115 1115 4115 9115 30 31 PMAAAA JXGAAA VVVVxx +6475 4664 1 3 5 15 75 475 475 1475 6475 150 151 BPAAAA KXGAAA AAAAxx +3796 4665 0 0 6 16 96 796 1796 3796 3796 192 193 AQAAAA LXGAAA HHHHxx +5410 4666 0 2 0 10 10 410 1410 410 5410 20 21 CAAAAA MXGAAA OOOOxx +4023 4667 1 3 3 3 23 23 23 4023 4023 46 47 TYAAAA NXGAAA VVVVxx +8904 4668 0 0 4 4 4 904 904 3904 8904 8 9 MEAAAA OXGAAA AAAAxx +450 4669 0 2 0 10 50 450 450 450 450 100 101 IRAAAA PXGAAA HHHHxx +8087 4670 1 3 7 7 87 87 87 3087 8087 174 175 BZAAAA QXGAAA OOOOxx +6478 4671 0 2 8 18 78 478 478 1478 6478 156 157 EPAAAA RXGAAA VVVVxx +2696 4672 0 0 6 16 96 696 696 2696 2696 192 193 SZAAAA SXGAAA AAAAxx +1792 4673 0 0 2 12 92 792 1792 1792 1792 184 185 YQAAAA TXGAAA HHHHxx +9699 4674 1 3 9 19 99 699 1699 4699 9699 198 199 BJAAAA UXGAAA OOOOxx +9160 4675 0 0 0 0 60 160 1160 4160 9160 120 121 IOAAAA VXGAAA VVVVxx +9989 4676 1 1 9 9 89 989 1989 4989 9989 178 179 FUAAAA WXGAAA AAAAxx +9568 4677 0 0 8 8 68 568 1568 4568 9568 136 137 AEAAAA XXGAAA HHHHxx +487 4678 1 3 7 7 87 487 487 487 487 174 175 TSAAAA YXGAAA OOOOxx +7863 4679 1 3 3 3 63 863 1863 2863 7863 126 127 LQAAAA ZXGAAA VVVVxx +1884 4680 0 0 4 4 84 884 1884 1884 1884 168 169 MUAAAA AYGAAA AAAAxx +2651 4681 1 3 1 11 51 651 651 2651 2651 102 103 ZXAAAA BYGAAA HHHHxx +8285 4682 1 1 5 5 85 285 285 3285 8285 170 171 RGAAAA CYGAAA OOOOxx +3927 4683 1 3 7 7 27 927 1927 3927 3927 54 55 BVAAAA DYGAAA VVVVxx +4076 4684 0 0 6 16 76 76 76 4076 4076 152 153 UAAAAA EYGAAA AAAAxx +6149 4685 1 1 9 9 49 149 149 1149 6149 98 99 NCAAAA FYGAAA HHHHxx +6581 4686 1 1 1 1 81 581 581 1581 6581 162 163 DTAAAA GYGAAA OOOOxx +8293 4687 1 1 3 13 93 293 293 3293 8293 186 187 ZGAAAA HYGAAA VVVVxx +7665 4688 1 1 5 5 65 665 1665 2665 7665 130 131 VIAAAA IYGAAA AAAAxx +4435 4689 1 3 5 15 35 435 435 4435 4435 70 71 POAAAA JYGAAA HHHHxx +1271 4690 1 3 1 11 71 271 1271 1271 1271 142 143 XWAAAA KYGAAA OOOOxx +3928 4691 0 0 8 8 28 928 1928 3928 3928 56 57 CVAAAA LYGAAA VVVVxx +7045 4692 1 1 5 5 45 45 1045 2045 7045 90 91 ZKAAAA MYGAAA AAAAxx +4943 4693 1 3 3 3 43 943 943 4943 4943 86 87 DIAAAA NYGAAA HHHHxx +8473 4694 1 1 3 13 73 473 473 3473 8473 146 147 XNAAAA OYGAAA OOOOxx +1707 4695 1 3 7 7 7 707 1707 1707 1707 14 15 RNAAAA PYGAAA VVVVxx +7509 4696 1 1 9 9 9 509 1509 2509 7509 18 19 VCAAAA QYGAAA AAAAxx +1593 4697 1 1 3 13 93 593 1593 1593 1593 186 187 HJAAAA RYGAAA HHHHxx +9281 4698 1 1 1 1 81 281 1281 4281 9281 162 163 ZSAAAA SYGAAA OOOOxx +8986 4699 0 2 6 6 86 986 986 3986 8986 172 173 QHAAAA TYGAAA VVVVxx +3740 4700 0 0 0 0 40 740 1740 3740 3740 80 81 WNAAAA UYGAAA AAAAxx +9265 4701 1 1 5 5 65 265 1265 4265 9265 130 131 JSAAAA VYGAAA 
HHHHxx +1510 4702 0 2 0 10 10 510 1510 1510 1510 20 21 CGAAAA WYGAAA OOOOxx +3022 4703 0 2 2 2 22 22 1022 3022 3022 44 45 GMAAAA XYGAAA VVVVxx +9014 4704 0 2 4 14 14 14 1014 4014 9014 28 29 SIAAAA YYGAAA AAAAxx +6816 4705 0 0 6 16 16 816 816 1816 6816 32 33 ECAAAA ZYGAAA HHHHxx +5518 4706 0 2 8 18 18 518 1518 518 5518 36 37 GEAAAA AZGAAA OOOOxx +4451 4707 1 3 1 11 51 451 451 4451 4451 102 103 FPAAAA BZGAAA VVVVxx +8747 4708 1 3 7 7 47 747 747 3747 8747 94 95 LYAAAA CZGAAA AAAAxx +4646 4709 0 2 6 6 46 646 646 4646 4646 92 93 SWAAAA DZGAAA HHHHxx +7296 4710 0 0 6 16 96 296 1296 2296 7296 192 193 QUAAAA EZGAAA OOOOxx +9644 4711 0 0 4 4 44 644 1644 4644 9644 88 89 YGAAAA FZGAAA VVVVxx +5977 4712 1 1 7 17 77 977 1977 977 5977 154 155 XVAAAA GZGAAA AAAAxx +6270 4713 0 2 0 10 70 270 270 1270 6270 140 141 EHAAAA HZGAAA HHHHxx +5578 4714 0 2 8 18 78 578 1578 578 5578 156 157 OGAAAA IZGAAA OOOOxx +2465 4715 1 1 5 5 65 465 465 2465 2465 130 131 VQAAAA JZGAAA VVVVxx +6436 4716 0 0 6 16 36 436 436 1436 6436 72 73 ONAAAA KZGAAA AAAAxx +8089 4717 1 1 9 9 89 89 89 3089 8089 178 179 DZAAAA LZGAAA HHHHxx +2409 4718 1 1 9 9 9 409 409 2409 2409 18 19 ROAAAA MZGAAA OOOOxx +284 4719 0 0 4 4 84 284 284 284 284 168 169 YKAAAA NZGAAA VVVVxx +5576 4720 0 0 6 16 76 576 1576 576 5576 152 153 MGAAAA OZGAAA AAAAxx +6534 4721 0 2 4 14 34 534 534 1534 6534 68 69 IRAAAA PZGAAA HHHHxx +8848 4722 0 0 8 8 48 848 848 3848 8848 96 97 ICAAAA QZGAAA OOOOxx +4305 4723 1 1 5 5 5 305 305 4305 4305 10 11 PJAAAA RZGAAA VVVVxx +5574 4724 0 2 4 14 74 574 1574 574 5574 148 149 KGAAAA SZGAAA AAAAxx +596 4725 0 0 6 16 96 596 596 596 596 192 193 YWAAAA TZGAAA HHHHxx +1253 4726 1 1 3 13 53 253 1253 1253 1253 106 107 FWAAAA UZGAAA OOOOxx +521 4727 1 1 1 1 21 521 521 521 521 42 43 BUAAAA VZGAAA VVVVxx +8739 4728 1 3 9 19 39 739 739 3739 8739 78 79 DYAAAA WZGAAA AAAAxx +908 4729 0 0 8 8 8 908 908 908 908 16 17 YIAAAA XZGAAA HHHHxx +6937 4730 1 1 7 17 37 937 937 1937 6937 74 75 VGAAAA YZGAAA OOOOxx +4515 4731 1 3 5 15 15 515 515 4515 4515 30 31 RRAAAA ZZGAAA VVVVxx +8630 4732 0 2 0 10 30 630 630 3630 8630 60 61 YTAAAA AAHAAA AAAAxx +7518 4733 0 2 8 18 18 518 1518 2518 7518 36 37 EDAAAA BAHAAA HHHHxx +8300 4734 0 0 0 0 0 300 300 3300 8300 0 1 GHAAAA CAHAAA OOOOxx +8434 4735 0 2 4 14 34 434 434 3434 8434 68 69 KMAAAA DAHAAA VVVVxx +6000 4736 0 0 0 0 0 0 0 1000 6000 0 1 UWAAAA EAHAAA AAAAxx +4508 4737 0 0 8 8 8 508 508 4508 4508 16 17 KRAAAA FAHAAA HHHHxx +7861 4738 1 1 1 1 61 861 1861 2861 7861 122 123 JQAAAA GAHAAA OOOOxx +5953 4739 1 1 3 13 53 953 1953 953 5953 106 107 ZUAAAA HAHAAA VVVVxx +5063 4740 1 3 3 3 63 63 1063 63 5063 126 127 TMAAAA IAHAAA AAAAxx +4501 4741 1 1 1 1 1 501 501 4501 4501 2 3 DRAAAA JAHAAA HHHHxx +7092 4742 0 0 2 12 92 92 1092 2092 7092 184 185 UMAAAA KAHAAA OOOOxx +4388 4743 0 0 8 8 88 388 388 4388 4388 176 177 UMAAAA LAHAAA VVVVxx +1826 4744 0 2 6 6 26 826 1826 1826 1826 52 53 GSAAAA MAHAAA AAAAxx +568 4745 0 0 8 8 68 568 568 568 568 136 137 WVAAAA NAHAAA HHHHxx +8184 4746 0 0 4 4 84 184 184 3184 8184 168 169 UCAAAA OAHAAA OOOOxx +4268 4747 0 0 8 8 68 268 268 4268 4268 136 137 EIAAAA PAHAAA VVVVxx +5798 4748 0 2 8 18 98 798 1798 798 5798 196 197 APAAAA QAHAAA AAAAxx +5190 4749 0 2 0 10 90 190 1190 190 5190 180 181 QRAAAA RAHAAA HHHHxx +1298 4750 0 2 8 18 98 298 1298 1298 1298 196 197 YXAAAA SAHAAA OOOOxx +4035 4751 1 3 5 15 35 35 35 4035 4035 70 71 FZAAAA TAHAAA VVVVxx +4504 4752 0 0 4 4 4 504 504 4504 4504 8 9 GRAAAA UAHAAA AAAAxx +5992 4753 0 0 2 12 92 992 1992 992 5992 184 185 MWAAAA VAHAAA HHHHxx +770 4754 0 2 0 10 
70 770 770 770 770 140 141 QDAAAA WAHAAA OOOOxx +7502 4755 0 2 2 2 2 502 1502 2502 7502 4 5 OCAAAA XAHAAA VVVVxx +824 4756 0 0 4 4 24 824 824 824 824 48 49 SFAAAA YAHAAA AAAAxx +7716 4757 0 0 6 16 16 716 1716 2716 7716 32 33 UKAAAA ZAHAAA HHHHxx +5749 4758 1 1 9 9 49 749 1749 749 5749 98 99 DNAAAA ABHAAA OOOOxx +9814 4759 0 2 4 14 14 814 1814 4814 9814 28 29 MNAAAA BBHAAA VVVVxx +350 4760 0 2 0 10 50 350 350 350 350 100 101 MNAAAA CBHAAA AAAAxx +1390 4761 0 2 0 10 90 390 1390 1390 1390 180 181 MBAAAA DBHAAA HHHHxx +6994 4762 0 2 4 14 94 994 994 1994 6994 188 189 AJAAAA EBHAAA OOOOxx +3629 4763 1 1 9 9 29 629 1629 3629 3629 58 59 PJAAAA FBHAAA VVVVxx +9937 4764 1 1 7 17 37 937 1937 4937 9937 74 75 FSAAAA GBHAAA AAAAxx +5285 4765 1 1 5 5 85 285 1285 285 5285 170 171 HVAAAA HBHAAA HHHHxx +3157 4766 1 1 7 17 57 157 1157 3157 3157 114 115 LRAAAA IBHAAA OOOOxx +9549 4767 1 1 9 9 49 549 1549 4549 9549 98 99 HDAAAA JBHAAA VVVVxx +4118 4768 0 2 8 18 18 118 118 4118 4118 36 37 KCAAAA KBHAAA AAAAxx +756 4769 0 0 6 16 56 756 756 756 756 112 113 CDAAAA LBHAAA HHHHxx +5964 4770 0 0 4 4 64 964 1964 964 5964 128 129 KVAAAA MBHAAA OOOOxx +7701 4771 1 1 1 1 1 701 1701 2701 7701 2 3 FKAAAA NBHAAA VVVVxx +1242 4772 0 2 2 2 42 242 1242 1242 1242 84 85 UVAAAA OBHAAA AAAAxx +7890 4773 0 2 0 10 90 890 1890 2890 7890 180 181 MRAAAA PBHAAA HHHHxx +1991 4774 1 3 1 11 91 991 1991 1991 1991 182 183 PYAAAA QBHAAA OOOOxx +110 4775 0 2 0 10 10 110 110 110 110 20 21 GEAAAA RBHAAA VVVVxx +9334 4776 0 2 4 14 34 334 1334 4334 9334 68 69 AVAAAA SBHAAA AAAAxx +6231 4777 1 3 1 11 31 231 231 1231 6231 62 63 RFAAAA TBHAAA HHHHxx +9871 4778 1 3 1 11 71 871 1871 4871 9871 142 143 RPAAAA UBHAAA OOOOxx +9471 4779 1 3 1 11 71 471 1471 4471 9471 142 143 HAAAAA VBHAAA VVVVxx +2697 4780 1 1 7 17 97 697 697 2697 2697 194 195 TZAAAA WBHAAA AAAAxx +4761 4781 1 1 1 1 61 761 761 4761 4761 122 123 DBAAAA XBHAAA HHHHxx +8493 4782 1 1 3 13 93 493 493 3493 8493 186 187 ROAAAA YBHAAA OOOOxx +1045 4783 1 1 5 5 45 45 1045 1045 1045 90 91 FOAAAA ZBHAAA VVVVxx +3403 4784 1 3 3 3 3 403 1403 3403 3403 6 7 XAAAAA ACHAAA AAAAxx +9412 4785 0 0 2 12 12 412 1412 4412 9412 24 25 AYAAAA BCHAAA HHHHxx +7652 4786 0 0 2 12 52 652 1652 2652 7652 104 105 IIAAAA CCHAAA OOOOxx +5866 4787 0 2 6 6 66 866 1866 866 5866 132 133 QRAAAA DCHAAA VVVVxx +6942 4788 0 2 2 2 42 942 942 1942 6942 84 85 AHAAAA ECHAAA AAAAxx +9353 4789 1 1 3 13 53 353 1353 4353 9353 106 107 TVAAAA FCHAAA HHHHxx +2600 4790 0 0 0 0 0 600 600 2600 2600 0 1 AWAAAA GCHAAA OOOOxx +6971 4791 1 3 1 11 71 971 971 1971 6971 142 143 DIAAAA HCHAAA VVVVxx +5391 4792 1 3 1 11 91 391 1391 391 5391 182 183 JZAAAA ICHAAA AAAAxx +7654 4793 0 2 4 14 54 654 1654 2654 7654 108 109 KIAAAA JCHAAA HHHHxx +1797 4794 1 1 7 17 97 797 1797 1797 1797 194 195 DRAAAA KCHAAA OOOOxx +4530 4795 0 2 0 10 30 530 530 4530 4530 60 61 GSAAAA LCHAAA VVVVxx +3130 4796 0 2 0 10 30 130 1130 3130 3130 60 61 KQAAAA MCHAAA AAAAxx +9442 4797 0 2 2 2 42 442 1442 4442 9442 84 85 EZAAAA NCHAAA HHHHxx +6659 4798 1 3 9 19 59 659 659 1659 6659 118 119 DWAAAA OCHAAA OOOOxx +9714 4799 0 2 4 14 14 714 1714 4714 9714 28 29 QJAAAA PCHAAA VVVVxx +3660 4800 0 0 0 0 60 660 1660 3660 3660 120 121 UKAAAA QCHAAA AAAAxx +1906 4801 0 2 6 6 6 906 1906 1906 1906 12 13 IVAAAA RCHAAA HHHHxx +7927 4802 1 3 7 7 27 927 1927 2927 7927 54 55 XSAAAA SCHAAA OOOOxx +1767 4803 1 3 7 7 67 767 1767 1767 1767 134 135 ZPAAAA TCHAAA VVVVxx +5523 4804 1 3 3 3 23 523 1523 523 5523 46 47 LEAAAA UCHAAA AAAAxx +9289 4805 1 1 9 9 89 289 1289 4289 9289 178 179 HTAAAA VCHAAA HHHHxx +2717 
4806 1 1 7 17 17 717 717 2717 2717 34 35 NAAAAA WCHAAA OOOOxx +4099 4807 1 3 9 19 99 99 99 4099 4099 198 199 RBAAAA XCHAAA VVVVxx +4387 4808 1 3 7 7 87 387 387 4387 4387 174 175 TMAAAA YCHAAA AAAAxx +8864 4809 0 0 4 4 64 864 864 3864 8864 128 129 YCAAAA ZCHAAA HHHHxx +1774 4810 0 2 4 14 74 774 1774 1774 1774 148 149 GQAAAA ADHAAA OOOOxx +6292 4811 0 0 2 12 92 292 292 1292 6292 184 185 AIAAAA BDHAAA VVVVxx +847 4812 1 3 7 7 47 847 847 847 847 94 95 PGAAAA CDHAAA AAAAxx +5954 4813 0 2 4 14 54 954 1954 954 5954 108 109 AVAAAA DDHAAA HHHHxx +8032 4814 0 0 2 12 32 32 32 3032 8032 64 65 YWAAAA EDHAAA OOOOxx +3295 4815 1 3 5 15 95 295 1295 3295 3295 190 191 TWAAAA FDHAAA VVVVxx +8984 4816 0 0 4 4 84 984 984 3984 8984 168 169 OHAAAA GDHAAA AAAAxx +7809 4817 1 1 9 9 9 809 1809 2809 7809 18 19 JOAAAA HDHAAA HHHHxx +1670 4818 0 2 0 10 70 670 1670 1670 1670 140 141 GMAAAA IDHAAA OOOOxx +7733 4819 1 1 3 13 33 733 1733 2733 7733 66 67 LLAAAA JDHAAA VVVVxx +6187 4820 1 3 7 7 87 187 187 1187 6187 174 175 ZDAAAA KDHAAA AAAAxx +9326 4821 0 2 6 6 26 326 1326 4326 9326 52 53 SUAAAA LDHAAA HHHHxx +2493 4822 1 1 3 13 93 493 493 2493 2493 186 187 XRAAAA MDHAAA OOOOxx +9512 4823 0 0 2 12 12 512 1512 4512 9512 24 25 WBAAAA NDHAAA VVVVxx +4342 4824 0 2 2 2 42 342 342 4342 4342 84 85 ALAAAA ODHAAA AAAAxx +5350 4825 0 2 0 10 50 350 1350 350 5350 100 101 UXAAAA PDHAAA HHHHxx +6009 4826 1 1 9 9 9 9 9 1009 6009 18 19 DXAAAA QDHAAA OOOOxx +1208 4827 0 0 8 8 8 208 1208 1208 1208 16 17 MUAAAA RDHAAA VVVVxx +7014 4828 0 2 4 14 14 14 1014 2014 7014 28 29 UJAAAA SDHAAA AAAAxx +2967 4829 1 3 7 7 67 967 967 2967 2967 134 135 DKAAAA TDHAAA HHHHxx +5831 4830 1 3 1 11 31 831 1831 831 5831 62 63 HQAAAA UDHAAA OOOOxx +3097 4831 1 1 7 17 97 97 1097 3097 3097 194 195 DPAAAA VDHAAA VVVVxx +1528 4832 0 0 8 8 28 528 1528 1528 1528 56 57 UGAAAA WDHAAA AAAAxx +6429 4833 1 1 9 9 29 429 429 1429 6429 58 59 HNAAAA XDHAAA HHHHxx +7320 4834 0 0 0 0 20 320 1320 2320 7320 40 41 OVAAAA YDHAAA OOOOxx +844 4835 0 0 4 4 44 844 844 844 844 88 89 MGAAAA ZDHAAA VVVVxx +7054 4836 0 2 4 14 54 54 1054 2054 7054 108 109 ILAAAA AEHAAA AAAAxx +1643 4837 1 3 3 3 43 643 1643 1643 1643 86 87 FLAAAA BEHAAA HHHHxx +7626 4838 0 2 6 6 26 626 1626 2626 7626 52 53 IHAAAA CEHAAA OOOOxx +8728 4839 0 0 8 8 28 728 728 3728 8728 56 57 SXAAAA DEHAAA VVVVxx +8277 4840 1 1 7 17 77 277 277 3277 8277 154 155 JGAAAA EEHAAA AAAAxx +189 4841 1 1 9 9 89 189 189 189 189 178 179 HHAAAA FEHAAA HHHHxx +3717 4842 1 1 7 17 17 717 1717 3717 3717 34 35 ZMAAAA GEHAAA OOOOxx +1020 4843 0 0 0 0 20 20 1020 1020 1020 40 41 GNAAAA HEHAAA VVVVxx +9234 4844 0 2 4 14 34 234 1234 4234 9234 68 69 ERAAAA IEHAAA AAAAxx +9541 4845 1 1 1 1 41 541 1541 4541 9541 82 83 ZCAAAA JEHAAA HHHHxx +380 4846 0 0 0 0 80 380 380 380 380 160 161 QOAAAA KEHAAA OOOOxx +397 4847 1 1 7 17 97 397 397 397 397 194 195 HPAAAA LEHAAA VVVVxx +835 4848 1 3 5 15 35 835 835 835 835 70 71 DGAAAA MEHAAA AAAAxx +347 4849 1 3 7 7 47 347 347 347 347 94 95 JNAAAA NEHAAA HHHHxx +2490 4850 0 2 0 10 90 490 490 2490 2490 180 181 URAAAA OEHAAA OOOOxx +605 4851 1 1 5 5 5 605 605 605 605 10 11 HXAAAA PEHAAA VVVVxx +7960 4852 0 0 0 0 60 960 1960 2960 7960 120 121 EUAAAA QEHAAA AAAAxx +9681 4853 1 1 1 1 81 681 1681 4681 9681 162 163 JIAAAA REHAAA HHHHxx +5753 4854 1 1 3 13 53 753 1753 753 5753 106 107 HNAAAA SEHAAA OOOOxx +1676 4855 0 0 6 16 76 676 1676 1676 1676 152 153 MMAAAA TEHAAA VVVVxx +5533 4856 1 1 3 13 33 533 1533 533 5533 66 67 VEAAAA UEHAAA AAAAxx +8958 4857 0 2 8 18 58 958 958 3958 8958 116 117 OGAAAA VEHAAA HHHHxx +664 4858 0 0 4 4 
64 664 664 664 664 128 129 OZAAAA WEHAAA OOOOxx +3005 4859 1 1 5 5 5 5 1005 3005 3005 10 11 PLAAAA XEHAAA VVVVxx +8576 4860 0 0 6 16 76 576 576 3576 8576 152 153 WRAAAA YEHAAA AAAAxx +7304 4861 0 0 4 4 4 304 1304 2304 7304 8 9 YUAAAA ZEHAAA HHHHxx +3375 4862 1 3 5 15 75 375 1375 3375 3375 150 151 VZAAAA AFHAAA OOOOxx +6336 4863 0 0 6 16 36 336 336 1336 6336 72 73 SJAAAA BFHAAA VVVVxx +1392 4864 0 0 2 12 92 392 1392 1392 1392 184 185 OBAAAA CFHAAA AAAAxx +2925 4865 1 1 5 5 25 925 925 2925 2925 50 51 NIAAAA DFHAAA HHHHxx +1217 4866 1 1 7 17 17 217 1217 1217 1217 34 35 VUAAAA EFHAAA OOOOxx +3714 4867 0 2 4 14 14 714 1714 3714 3714 28 29 WMAAAA FFHAAA VVVVxx +2120 4868 0 0 0 0 20 120 120 2120 2120 40 41 ODAAAA GFHAAA AAAAxx +2845 4869 1 1 5 5 45 845 845 2845 2845 90 91 LFAAAA HFHAAA HHHHxx +3865 4870 1 1 5 5 65 865 1865 3865 3865 130 131 RSAAAA IFHAAA OOOOxx +124 4871 0 0 4 4 24 124 124 124 124 48 49 UEAAAA JFHAAA VVVVxx +865 4872 1 1 5 5 65 865 865 865 865 130 131 HHAAAA KFHAAA AAAAxx +9361 4873 1 1 1 1 61 361 1361 4361 9361 122 123 BWAAAA LFHAAA HHHHxx +6338 4874 0 2 8 18 38 338 338 1338 6338 76 77 UJAAAA MFHAAA OOOOxx +7330 4875 0 2 0 10 30 330 1330 2330 7330 60 61 YVAAAA NFHAAA VVVVxx +513 4876 1 1 3 13 13 513 513 513 513 26 27 TTAAAA OFHAAA AAAAxx +5001 4877 1 1 1 1 1 1 1001 1 5001 2 3 JKAAAA PFHAAA HHHHxx +549 4878 1 1 9 9 49 549 549 549 549 98 99 DVAAAA QFHAAA OOOOxx +1808 4879 0 0 8 8 8 808 1808 1808 1808 16 17 ORAAAA RFHAAA VVVVxx +7168 4880 0 0 8 8 68 168 1168 2168 7168 136 137 SPAAAA SFHAAA AAAAxx +9878 4881 0 2 8 18 78 878 1878 4878 9878 156 157 YPAAAA TFHAAA HHHHxx +233 4882 1 1 3 13 33 233 233 233 233 66 67 ZIAAAA UFHAAA OOOOxx +4262 4883 0 2 2 2 62 262 262 4262 4262 124 125 YHAAAA VFHAAA VVVVxx +7998 4884 0 2 8 18 98 998 1998 2998 7998 196 197 QVAAAA WFHAAA AAAAxx +2419 4885 1 3 9 19 19 419 419 2419 2419 38 39 BPAAAA XFHAAA HHHHxx +9960 4886 0 0 0 0 60 960 1960 4960 9960 120 121 CTAAAA YFHAAA OOOOxx +3523 4887 1 3 3 3 23 523 1523 3523 3523 46 47 NFAAAA ZFHAAA VVVVxx +5440 4888 0 0 0 0 40 440 1440 440 5440 80 81 GBAAAA AGHAAA AAAAxx +3030 4889 0 2 0 10 30 30 1030 3030 3030 60 61 OMAAAA BGHAAA HHHHxx +2745 4890 1 1 5 5 45 745 745 2745 2745 90 91 PBAAAA CGHAAA OOOOxx +7175 4891 1 3 5 15 75 175 1175 2175 7175 150 151 ZPAAAA DGHAAA VVVVxx +640 4892 0 0 0 0 40 640 640 640 640 80 81 QYAAAA EGHAAA AAAAxx +1798 4893 0 2 8 18 98 798 1798 1798 1798 196 197 ERAAAA FGHAAA HHHHxx +7499 4894 1 3 9 19 99 499 1499 2499 7499 198 199 LCAAAA GGHAAA OOOOxx +1924 4895 0 0 4 4 24 924 1924 1924 1924 48 49 AWAAAA HGHAAA VVVVxx +1327 4896 1 3 7 7 27 327 1327 1327 1327 54 55 BZAAAA IGHAAA AAAAxx +73 4897 1 1 3 13 73 73 73 73 73 146 147 VCAAAA JGHAAA HHHHxx +9558 4898 0 2 8 18 58 558 1558 4558 9558 116 117 QDAAAA KGHAAA OOOOxx +818 4899 0 2 8 18 18 818 818 818 818 36 37 MFAAAA LGHAAA VVVVxx +9916 4900 0 0 6 16 16 916 1916 4916 9916 32 33 KRAAAA MGHAAA AAAAxx +2978 4901 0 2 8 18 78 978 978 2978 2978 156 157 OKAAAA NGHAAA HHHHxx +8469 4902 1 1 9 9 69 469 469 3469 8469 138 139 TNAAAA OGHAAA OOOOxx +9845 4903 1 1 5 5 45 845 1845 4845 9845 90 91 ROAAAA PGHAAA VVVVxx +2326 4904 0 2 6 6 26 326 326 2326 2326 52 53 MLAAAA QGHAAA AAAAxx +4032 4905 0 0 2 12 32 32 32 4032 4032 64 65 CZAAAA RGHAAA HHHHxx +5604 4906 0 0 4 4 4 604 1604 604 5604 8 9 OHAAAA SGHAAA OOOOxx +9610 4907 0 2 0 10 10 610 1610 4610 9610 20 21 QFAAAA TGHAAA VVVVxx +5101 4908 1 1 1 1 1 101 1101 101 5101 2 3 FOAAAA UGHAAA AAAAxx +7246 4909 0 2 6 6 46 246 1246 2246 7246 92 93 SSAAAA VGHAAA HHHHxx +1292 4910 0 0 2 12 92 292 1292 1292 1292 184 185 
SXAAAA WGHAAA OOOOxx +6235 4911 1 3 5 15 35 235 235 1235 6235 70 71 VFAAAA XGHAAA VVVVxx +1733 4912 1 1 3 13 33 733 1733 1733 1733 66 67 ROAAAA YGHAAA AAAAxx +4647 4913 1 3 7 7 47 647 647 4647 4647 94 95 TWAAAA ZGHAAA HHHHxx +258 4914 0 2 8 18 58 258 258 258 258 116 117 YJAAAA AHHAAA OOOOxx +8438 4915 0 2 8 18 38 438 438 3438 8438 76 77 OMAAAA BHHAAA VVVVxx +7869 4916 1 1 9 9 69 869 1869 2869 7869 138 139 RQAAAA CHHAAA AAAAxx +9691 4917 1 3 1 11 91 691 1691 4691 9691 182 183 TIAAAA DHHAAA HHHHxx +5422 4918 0 2 2 2 22 422 1422 422 5422 44 45 OAAAAA EHHAAA OOOOxx +9630 4919 0 2 0 10 30 630 1630 4630 9630 60 61 KGAAAA FHHAAA VVVVxx +4439 4920 1 3 9 19 39 439 439 4439 4439 78 79 TOAAAA GHHAAA AAAAxx +3140 4921 0 0 0 0 40 140 1140 3140 3140 80 81 UQAAAA HHHAAA HHHHxx +9111 4922 1 3 1 11 11 111 1111 4111 9111 22 23 LMAAAA IHHAAA OOOOxx +4606 4923 0 2 6 6 6 606 606 4606 4606 12 13 EVAAAA JHHAAA VVVVxx +8620 4924 0 0 0 0 20 620 620 3620 8620 40 41 OTAAAA KHHAAA AAAAxx +7849 4925 1 1 9 9 49 849 1849 2849 7849 98 99 XPAAAA LHHAAA HHHHxx +346 4926 0 2 6 6 46 346 346 346 346 92 93 INAAAA MHHAAA OOOOxx +9528 4927 0 0 8 8 28 528 1528 4528 9528 56 57 MCAAAA NHHAAA VVVVxx +1811 4928 1 3 1 11 11 811 1811 1811 1811 22 23 RRAAAA OHHAAA AAAAxx +6068 4929 0 0 8 8 68 68 68 1068 6068 136 137 KZAAAA PHHAAA HHHHxx +6260 4930 0 0 0 0 60 260 260 1260 6260 120 121 UGAAAA QHHAAA OOOOxx +5909 4931 1 1 9 9 9 909 1909 909 5909 18 19 HTAAAA RHHAAA VVVVxx +4518 4932 0 2 8 18 18 518 518 4518 4518 36 37 URAAAA SHHAAA AAAAxx +7530 4933 0 2 0 10 30 530 1530 2530 7530 60 61 QDAAAA THHAAA HHHHxx +3900 4934 0 0 0 0 0 900 1900 3900 3900 0 1 AUAAAA UHHAAA OOOOxx +3969 4935 1 1 9 9 69 969 1969 3969 3969 138 139 RWAAAA VHHAAA VVVVxx +8690 4936 0 2 0 10 90 690 690 3690 8690 180 181 GWAAAA WHHAAA AAAAxx +5532 4937 0 0 2 12 32 532 1532 532 5532 64 65 UEAAAA XHHAAA HHHHxx +5989 4938 1 1 9 9 89 989 1989 989 5989 178 179 JWAAAA YHHAAA OOOOxx +1870 4939 0 2 0 10 70 870 1870 1870 1870 140 141 YTAAAA ZHHAAA VVVVxx +1113 4940 1 1 3 13 13 113 1113 1113 1113 26 27 VQAAAA AIHAAA AAAAxx +5155 4941 1 3 5 15 55 155 1155 155 5155 110 111 HQAAAA BIHAAA HHHHxx +7460 4942 0 0 0 0 60 460 1460 2460 7460 120 121 YAAAAA CIHAAA OOOOxx +6217 4943 1 1 7 17 17 217 217 1217 6217 34 35 DFAAAA DIHAAA VVVVxx +8333 4944 1 1 3 13 33 333 333 3333 8333 66 67 NIAAAA EIHAAA AAAAxx +6341 4945 1 1 1 1 41 341 341 1341 6341 82 83 XJAAAA FIHAAA HHHHxx +6230 4946 0 2 0 10 30 230 230 1230 6230 60 61 QFAAAA GIHAAA OOOOxx +6902 4947 0 2 2 2 2 902 902 1902 6902 4 5 MFAAAA HIHAAA VVVVxx +670 4948 0 2 0 10 70 670 670 670 670 140 141 UZAAAA IIHAAA AAAAxx +805 4949 1 1 5 5 5 805 805 805 805 10 11 ZEAAAA JIHAAA HHHHxx +1340 4950 0 0 0 0 40 340 1340 1340 1340 80 81 OZAAAA KIHAAA OOOOxx +8649 4951 1 1 9 9 49 649 649 3649 8649 98 99 RUAAAA LIHAAA VVVVxx +3887 4952 1 3 7 7 87 887 1887 3887 3887 174 175 NTAAAA MIHAAA AAAAxx +5400 4953 0 0 0 0 0 400 1400 400 5400 0 1 SZAAAA NIHAAA HHHHxx +4354 4954 0 2 4 14 54 354 354 4354 4354 108 109 MLAAAA OIHAAA OOOOxx +950 4955 0 2 0 10 50 950 950 950 950 100 101 OKAAAA PIHAAA VVVVxx +1544 4956 0 0 4 4 44 544 1544 1544 1544 88 89 KHAAAA QIHAAA AAAAxx +3898 4957 0 2 8 18 98 898 1898 3898 3898 196 197 YTAAAA RIHAAA HHHHxx +8038 4958 0 2 8 18 38 38 38 3038 8038 76 77 EXAAAA SIHAAA OOOOxx +1095 4959 1 3 5 15 95 95 1095 1095 1095 190 191 DQAAAA TIHAAA VVVVxx +1748 4960 0 0 8 8 48 748 1748 1748 1748 96 97 GPAAAA UIHAAA AAAAxx +9154 4961 0 2 4 14 54 154 1154 4154 9154 108 109 COAAAA VIHAAA HHHHxx +2182 4962 0 2 2 2 82 182 182 2182 2182 164 165 YFAAAA WIHAAA 
OOOOxx +6797 4963 1 1 7 17 97 797 797 1797 6797 194 195 LBAAAA XIHAAA VVVVxx +9149 4964 1 1 9 9 49 149 1149 4149 9149 98 99 XNAAAA YIHAAA AAAAxx +7351 4965 1 3 1 11 51 351 1351 2351 7351 102 103 TWAAAA ZIHAAA HHHHxx +2820 4966 0 0 0 0 20 820 820 2820 2820 40 41 MEAAAA AJHAAA OOOOxx +9696 4967 0 0 6 16 96 696 1696 4696 9696 192 193 YIAAAA BJHAAA VVVVxx +253 4968 1 1 3 13 53 253 253 253 253 106 107 TJAAAA CJHAAA AAAAxx +3600 4969 0 0 0 0 0 600 1600 3600 3600 0 1 MIAAAA DJHAAA HHHHxx +3892 4970 0 0 2 12 92 892 1892 3892 3892 184 185 STAAAA EJHAAA OOOOxx +231 4971 1 3 1 11 31 231 231 231 231 62 63 XIAAAA FJHAAA VVVVxx +8331 4972 1 3 1 11 31 331 331 3331 8331 62 63 LIAAAA GJHAAA AAAAxx +403 4973 1 3 3 3 3 403 403 403 403 6 7 NPAAAA HJHAAA HHHHxx +8642 4974 0 2 2 2 42 642 642 3642 8642 84 85 KUAAAA IJHAAA OOOOxx +3118 4975 0 2 8 18 18 118 1118 3118 3118 36 37 YPAAAA JJHAAA VVVVxx +3835 4976 1 3 5 15 35 835 1835 3835 3835 70 71 NRAAAA KJHAAA AAAAxx +1117 4977 1 1 7 17 17 117 1117 1117 1117 34 35 ZQAAAA LJHAAA HHHHxx +7024 4978 0 0 4 4 24 24 1024 2024 7024 48 49 EKAAAA MJHAAA OOOOxx +2636 4979 0 0 6 16 36 636 636 2636 2636 72 73 KXAAAA NJHAAA VVVVxx +3778 4980 0 2 8 18 78 778 1778 3778 3778 156 157 IPAAAA OJHAAA AAAAxx +2003 4981 1 3 3 3 3 3 3 2003 2003 6 7 BZAAAA PJHAAA HHHHxx +5717 4982 1 1 7 17 17 717 1717 717 5717 34 35 XLAAAA QJHAAA OOOOxx +4869 4983 1 1 9 9 69 869 869 4869 4869 138 139 HFAAAA RJHAAA VVVVxx +8921 4984 1 1 1 1 21 921 921 3921 8921 42 43 DFAAAA SJHAAA AAAAxx +888 4985 0 0 8 8 88 888 888 888 888 176 177 EIAAAA TJHAAA HHHHxx +7599 4986 1 3 9 19 99 599 1599 2599 7599 198 199 HGAAAA UJHAAA OOOOxx +8621 4987 1 1 1 1 21 621 621 3621 8621 42 43 PTAAAA VJHAAA VVVVxx +811 4988 1 3 1 11 11 811 811 811 811 22 23 FFAAAA WJHAAA AAAAxx +9147 4989 1 3 7 7 47 147 1147 4147 9147 94 95 VNAAAA XJHAAA HHHHxx +1413 4990 1 1 3 13 13 413 1413 1413 1413 26 27 JCAAAA YJHAAA OOOOxx +5232 4991 0 0 2 12 32 232 1232 232 5232 64 65 GTAAAA ZJHAAA VVVVxx +5912 4992 0 0 2 12 12 912 1912 912 5912 24 25 KTAAAA AKHAAA AAAAxx +3418 4993 0 2 8 18 18 418 1418 3418 3418 36 37 MBAAAA BKHAAA HHHHxx +3912 4994 0 0 2 12 12 912 1912 3912 3912 24 25 MUAAAA CKHAAA OOOOxx +9576 4995 0 0 6 16 76 576 1576 4576 9576 152 153 IEAAAA DKHAAA VVVVxx +4225 4996 1 1 5 5 25 225 225 4225 4225 50 51 NGAAAA EKHAAA AAAAxx +8222 4997 0 2 2 2 22 222 222 3222 8222 44 45 GEAAAA FKHAAA HHHHxx +7013 4998 1 1 3 13 13 13 1013 2013 7013 26 27 TJAAAA GKHAAA OOOOxx +7037 4999 1 1 7 17 37 37 1037 2037 7037 74 75 RKAAAA HKHAAA VVVVxx +1205 5000 1 1 5 5 5 205 1205 1205 1205 10 11 JUAAAA IKHAAA AAAAxx +8114 5001 0 2 4 14 14 114 114 3114 8114 28 29 CAAAAA JKHAAA HHHHxx +6585 5002 1 1 5 5 85 585 585 1585 6585 170 171 HTAAAA KKHAAA OOOOxx +155 5003 1 3 5 15 55 155 155 155 155 110 111 ZFAAAA LKHAAA VVVVxx +2841 5004 1 1 1 1 41 841 841 2841 2841 82 83 HFAAAA MKHAAA AAAAxx +1996 5005 0 0 6 16 96 996 1996 1996 1996 192 193 UYAAAA NKHAAA HHHHxx +4948 5006 0 0 8 8 48 948 948 4948 4948 96 97 IIAAAA OKHAAA OOOOxx +3304 5007 0 0 4 4 4 304 1304 3304 3304 8 9 CXAAAA PKHAAA VVVVxx +5684 5008 0 0 4 4 84 684 1684 684 5684 168 169 QKAAAA QKHAAA AAAAxx +6962 5009 0 2 2 2 62 962 962 1962 6962 124 125 UHAAAA RKHAAA HHHHxx +8691 5010 1 3 1 11 91 691 691 3691 8691 182 183 HWAAAA SKHAAA OOOOxx +8501 5011 1 1 1 1 1 501 501 3501 8501 2 3 ZOAAAA TKHAAA VVVVxx +4783 5012 1 3 3 3 83 783 783 4783 4783 166 167 ZBAAAA UKHAAA AAAAxx +3762 5013 0 2 2 2 62 762 1762 3762 3762 124 125 SOAAAA VKHAAA HHHHxx +4534 5014 0 2 4 14 34 534 534 4534 4534 68 69 KSAAAA WKHAAA OOOOxx +4999 5015 1 3 9 
19 99 999 999 4999 4999 198 199 HKAAAA XKHAAA VVVVxx +4618 5016 0 2 8 18 18 618 618 4618 4618 36 37 QVAAAA YKHAAA AAAAxx +4220 5017 0 0 0 0 20 220 220 4220 4220 40 41 IGAAAA ZKHAAA HHHHxx +3384 5018 0 0 4 4 84 384 1384 3384 3384 168 169 EAAAAA ALHAAA OOOOxx +3036 5019 0 0 6 16 36 36 1036 3036 3036 72 73 UMAAAA BLHAAA VVVVxx +545 5020 1 1 5 5 45 545 545 545 545 90 91 ZUAAAA CLHAAA AAAAxx +9946 5021 0 2 6 6 46 946 1946 4946 9946 92 93 OSAAAA DLHAAA HHHHxx +1985 5022 1 1 5 5 85 985 1985 1985 1985 170 171 JYAAAA ELHAAA OOOOxx +2310 5023 0 2 0 10 10 310 310 2310 2310 20 21 WKAAAA FLHAAA VVVVxx +6563 5024 1 3 3 3 63 563 563 1563 6563 126 127 LSAAAA GLHAAA AAAAxx +4886 5025 0 2 6 6 86 886 886 4886 4886 172 173 YFAAAA HLHAAA HHHHxx +9359 5026 1 3 9 19 59 359 1359 4359 9359 118 119 ZVAAAA ILHAAA OOOOxx +400 5027 0 0 0 0 0 400 400 400 400 0 1 KPAAAA JLHAAA VVVVxx +9742 5028 0 2 2 2 42 742 1742 4742 9742 84 85 SKAAAA KLHAAA AAAAxx +6736 5029 0 0 6 16 36 736 736 1736 6736 72 73 CZAAAA LLHAAA HHHHxx +8166 5030 0 2 6 6 66 166 166 3166 8166 132 133 CCAAAA MLHAAA OOOOxx +861 5031 1 1 1 1 61 861 861 861 861 122 123 DHAAAA NLHAAA VVVVxx +7492 5032 0 0 2 12 92 492 1492 2492 7492 184 185 ECAAAA OLHAAA AAAAxx +1155 5033 1 3 5 15 55 155 1155 1155 1155 110 111 LSAAAA PLHAAA HHHHxx +9769 5034 1 1 9 9 69 769 1769 4769 9769 138 139 TLAAAA QLHAAA OOOOxx +6843 5035 1 3 3 3 43 843 843 1843 6843 86 87 FDAAAA RLHAAA VVVVxx +5625 5036 1 1 5 5 25 625 1625 625 5625 50 51 JIAAAA SLHAAA AAAAxx +1910 5037 0 2 0 10 10 910 1910 1910 1910 20 21 MVAAAA TLHAAA HHHHxx +9796 5038 0 0 6 16 96 796 1796 4796 9796 192 193 UMAAAA ULHAAA OOOOxx +6950 5039 0 2 0 10 50 950 950 1950 6950 100 101 IHAAAA VLHAAA VVVVxx +3084 5040 0 0 4 4 84 84 1084 3084 3084 168 169 QOAAAA WLHAAA AAAAxx +2959 5041 1 3 9 19 59 959 959 2959 2959 118 119 VJAAAA XLHAAA HHHHxx +2093 5042 1 1 3 13 93 93 93 2093 2093 186 187 NCAAAA YLHAAA OOOOxx +2738 5043 0 2 8 18 38 738 738 2738 2738 76 77 IBAAAA ZLHAAA VVVVxx +6406 5044 0 2 6 6 6 406 406 1406 6406 12 13 KMAAAA AMHAAA AAAAxx +9082 5045 0 2 2 2 82 82 1082 4082 9082 164 165 ILAAAA BMHAAA HHHHxx +8568 5046 0 0 8 8 68 568 568 3568 8568 136 137 ORAAAA CMHAAA OOOOxx +3566 5047 0 2 6 6 66 566 1566 3566 3566 132 133 EHAAAA DMHAAA VVVVxx +3016 5048 0 0 6 16 16 16 1016 3016 3016 32 33 AMAAAA EMHAAA AAAAxx +1207 5049 1 3 7 7 7 207 1207 1207 1207 14 15 LUAAAA FMHAAA HHHHxx +4045 5050 1 1 5 5 45 45 45 4045 4045 90 91 PZAAAA GMHAAA OOOOxx +4173 5051 1 1 3 13 73 173 173 4173 4173 146 147 NEAAAA HMHAAA VVVVxx +3939 5052 1 3 9 19 39 939 1939 3939 3939 78 79 NVAAAA IMHAAA AAAAxx +9683 5053 1 3 3 3 83 683 1683 4683 9683 166 167 LIAAAA JMHAAA HHHHxx +1684 5054 0 0 4 4 84 684 1684 1684 1684 168 169 UMAAAA KMHAAA OOOOxx +9271 5055 1 3 1 11 71 271 1271 4271 9271 142 143 PSAAAA LMHAAA VVVVxx +9317 5056 1 1 7 17 17 317 1317 4317 9317 34 35 JUAAAA MMHAAA AAAAxx +5793 5057 1 1 3 13 93 793 1793 793 5793 186 187 VOAAAA NMHAAA HHHHxx +352 5058 0 0 2 12 52 352 352 352 352 104 105 ONAAAA OMHAAA OOOOxx +7328 5059 0 0 8 8 28 328 1328 2328 7328 56 57 WVAAAA PMHAAA VVVVxx +4582 5060 0 2 2 2 82 582 582 4582 4582 164 165 GUAAAA QMHAAA AAAAxx +7413 5061 1 1 3 13 13 413 1413 2413 7413 26 27 DZAAAA RMHAAA HHHHxx +6772 5062 0 0 2 12 72 772 772 1772 6772 144 145 MAAAAA SMHAAA OOOOxx +4973 5063 1 1 3 13 73 973 973 4973 4973 146 147 HJAAAA TMHAAA VVVVxx +7480 5064 0 0 0 0 80 480 1480 2480 7480 160 161 SBAAAA UMHAAA AAAAxx +5555 5065 1 3 5 15 55 555 1555 555 5555 110 111 RFAAAA VMHAAA HHHHxx +4227 5066 1 3 7 7 27 227 227 4227 4227 54 55 PGAAAA WMHAAA OOOOxx 
+4153 5067 1 1 3 13 53 153 153 4153 4153 106 107 TDAAAA XMHAAA VVVVxx +4601 5068 1 1 1 1 1 601 601 4601 4601 2 3 ZUAAAA YMHAAA AAAAxx +3782 5069 0 2 2 2 82 782 1782 3782 3782 164 165 MPAAAA ZMHAAA HHHHxx +3872 5070 0 0 2 12 72 872 1872 3872 3872 144 145 YSAAAA ANHAAA OOOOxx +893 5071 1 1 3 13 93 893 893 893 893 186 187 JIAAAA BNHAAA VVVVxx +2430 5072 0 2 0 10 30 430 430 2430 2430 60 61 MPAAAA CNHAAA AAAAxx +2591 5073 1 3 1 11 91 591 591 2591 2591 182 183 RVAAAA DNHAAA HHHHxx +264 5074 0 0 4 4 64 264 264 264 264 128 129 EKAAAA ENHAAA OOOOxx +6238 5075 0 2 8 18 38 238 238 1238 6238 76 77 YFAAAA FNHAAA VVVVxx +633 5076 1 1 3 13 33 633 633 633 633 66 67 JYAAAA GNHAAA AAAAxx +1029 5077 1 1 9 9 29 29 1029 1029 1029 58 59 PNAAAA HNHAAA HHHHxx +5934 5078 0 2 4 14 34 934 1934 934 5934 68 69 GUAAAA INHAAA OOOOxx +8694 5079 0 2 4 14 94 694 694 3694 8694 188 189 KWAAAA JNHAAA VVVVxx +7401 5080 1 1 1 1 1 401 1401 2401 7401 2 3 RYAAAA KNHAAA AAAAxx +1165 5081 1 1 5 5 65 165 1165 1165 1165 130 131 VSAAAA LNHAAA HHHHxx +9438 5082 0 2 8 18 38 438 1438 4438 9438 76 77 AZAAAA MNHAAA OOOOxx +4790 5083 0 2 0 10 90 790 790 4790 4790 180 181 GCAAAA NNHAAA VVVVxx +4531 5084 1 3 1 11 31 531 531 4531 4531 62 63 HSAAAA ONHAAA AAAAxx +6099 5085 1 3 9 19 99 99 99 1099 6099 198 199 PAAAAA PNHAAA HHHHxx +8236 5086 0 0 6 16 36 236 236 3236 8236 72 73 UEAAAA QNHAAA OOOOxx +8551 5087 1 3 1 11 51 551 551 3551 8551 102 103 XQAAAA RNHAAA VVVVxx +3128 5088 0 0 8 8 28 128 1128 3128 3128 56 57 IQAAAA SNHAAA AAAAxx +3504 5089 0 0 4 4 4 504 1504 3504 3504 8 9 UEAAAA TNHAAA HHHHxx +9071 5090 1 3 1 11 71 71 1071 4071 9071 142 143 XKAAAA UNHAAA OOOOxx +5930 5091 0 2 0 10 30 930 1930 930 5930 60 61 CUAAAA VNHAAA VVVVxx +6825 5092 1 1 5 5 25 825 825 1825 6825 50 51 NCAAAA WNHAAA AAAAxx +2218 5093 0 2 8 18 18 218 218 2218 2218 36 37 IHAAAA XNHAAA HHHHxx +3604 5094 0 0 4 4 4 604 1604 3604 3604 8 9 QIAAAA YNHAAA OOOOxx +5761 5095 1 1 1 1 61 761 1761 761 5761 122 123 PNAAAA ZNHAAA VVVVxx +5414 5096 0 2 4 14 14 414 1414 414 5414 28 29 GAAAAA AOHAAA AAAAxx +5892 5097 0 0 2 12 92 892 1892 892 5892 184 185 QSAAAA BOHAAA HHHHxx +4080 5098 0 0 0 0 80 80 80 4080 4080 160 161 YAAAAA COHAAA OOOOxx +8018 5099 0 2 8 18 18 18 18 3018 8018 36 37 KWAAAA DOHAAA VVVVxx +1757 5100 1 1 7 17 57 757 1757 1757 1757 114 115 PPAAAA EOHAAA AAAAxx +5854 5101 0 2 4 14 54 854 1854 854 5854 108 109 ERAAAA FOHAAA HHHHxx +1335 5102 1 3 5 15 35 335 1335 1335 1335 70 71 JZAAAA GOHAAA OOOOxx +3811 5103 1 3 1 11 11 811 1811 3811 3811 22 23 PQAAAA HOHAAA VVVVxx +9917 5104 1 1 7 17 17 917 1917 4917 9917 34 35 LRAAAA IOHAAA AAAAxx +5947 5105 1 3 7 7 47 947 1947 947 5947 94 95 TUAAAA JOHAAA HHHHxx +7263 5106 1 3 3 3 63 263 1263 2263 7263 126 127 JTAAAA KOHAAA OOOOxx +1730 5107 0 2 0 10 30 730 1730 1730 1730 60 61 OOAAAA LOHAAA VVVVxx +5747 5108 1 3 7 7 47 747 1747 747 5747 94 95 BNAAAA MOHAAA AAAAxx +3876 5109 0 0 6 16 76 876 1876 3876 3876 152 153 CTAAAA NOHAAA HHHHxx +2762 5110 0 2 2 2 62 762 762 2762 2762 124 125 GCAAAA OOHAAA OOOOxx +7613 5111 1 1 3 13 13 613 1613 2613 7613 26 27 VGAAAA POHAAA VVVVxx +152 5112 0 0 2 12 52 152 152 152 152 104 105 WFAAAA QOHAAA AAAAxx +3941 5113 1 1 1 1 41 941 1941 3941 3941 82 83 PVAAAA ROHAAA HHHHxx +5614 5114 0 2 4 14 14 614 1614 614 5614 28 29 YHAAAA SOHAAA OOOOxx +9279 5115 1 3 9 19 79 279 1279 4279 9279 158 159 XSAAAA TOHAAA VVVVxx +3048 5116 0 0 8 8 48 48 1048 3048 3048 96 97 GNAAAA UOHAAA AAAAxx +6152 5117 0 0 2 12 52 152 152 1152 6152 104 105 QCAAAA VOHAAA HHHHxx +5481 5118 1 1 1 1 81 481 1481 481 5481 162 163 VCAAAA WOHAAA OOOOxx 
+4675 5119 1 3 5 15 75 675 675 4675 4675 150 151 VXAAAA XOHAAA VVVVxx +3334 5120 0 2 4 14 34 334 1334 3334 3334 68 69 GYAAAA YOHAAA AAAAxx +4691 5121 1 3 1 11 91 691 691 4691 4691 182 183 LYAAAA ZOHAAA HHHHxx +803 5122 1 3 3 3 3 803 803 803 803 6 7 XEAAAA APHAAA OOOOxx +5409 5123 1 1 9 9 9 409 1409 409 5409 18 19 BAAAAA BPHAAA VVVVxx +1054 5124 0 2 4 14 54 54 1054 1054 1054 108 109 OOAAAA CPHAAA AAAAxx +103 5125 1 3 3 3 3 103 103 103 103 6 7 ZDAAAA DPHAAA HHHHxx +8565 5126 1 1 5 5 65 565 565 3565 8565 130 131 LRAAAA EPHAAA OOOOxx +4666 5127 0 2 6 6 66 666 666 4666 4666 132 133 MXAAAA FPHAAA VVVVxx +6634 5128 0 2 4 14 34 634 634 1634 6634 68 69 EVAAAA GPHAAA AAAAxx +5538 5129 0 2 8 18 38 538 1538 538 5538 76 77 AFAAAA HPHAAA HHHHxx +3789 5130 1 1 9 9 89 789 1789 3789 3789 178 179 TPAAAA IPHAAA OOOOxx +4641 5131 1 1 1 1 41 641 641 4641 4641 82 83 NWAAAA JPHAAA VVVVxx +2458 5132 0 2 8 18 58 458 458 2458 2458 116 117 OQAAAA KPHAAA AAAAxx +5667 5133 1 3 7 7 67 667 1667 667 5667 134 135 ZJAAAA LPHAAA HHHHxx +6524 5134 0 0 4 4 24 524 524 1524 6524 48 49 YQAAAA MPHAAA OOOOxx +9179 5135 1 3 9 19 79 179 1179 4179 9179 158 159 BPAAAA NPHAAA VVVVxx +6358 5136 0 2 8 18 58 358 358 1358 6358 116 117 OKAAAA OPHAAA AAAAxx +6668 5137 0 0 8 8 68 668 668 1668 6668 136 137 MWAAAA PPHAAA HHHHxx +6414 5138 0 2 4 14 14 414 414 1414 6414 28 29 SMAAAA QPHAAA OOOOxx +2813 5139 1 1 3 13 13 813 813 2813 2813 26 27 FEAAAA RPHAAA VVVVxx +8927 5140 1 3 7 7 27 927 927 3927 8927 54 55 JFAAAA SPHAAA AAAAxx +8695 5141 1 3 5 15 95 695 695 3695 8695 190 191 LWAAAA TPHAAA HHHHxx +363 5142 1 3 3 3 63 363 363 363 363 126 127 ZNAAAA UPHAAA OOOOxx +9966 5143 0 2 6 6 66 966 1966 4966 9966 132 133 ITAAAA VPHAAA VVVVxx +1323 5144 1 3 3 3 23 323 1323 1323 1323 46 47 XYAAAA WPHAAA AAAAxx +8211 5145 1 3 1 11 11 211 211 3211 8211 22 23 VDAAAA XPHAAA HHHHxx +4375 5146 1 3 5 15 75 375 375 4375 4375 150 151 HMAAAA YPHAAA OOOOxx +3257 5147 1 1 7 17 57 257 1257 3257 3257 114 115 HVAAAA ZPHAAA VVVVxx +6239 5148 1 3 9 19 39 239 239 1239 6239 78 79 ZFAAAA AQHAAA AAAAxx +3602 5149 0 2 2 2 2 602 1602 3602 3602 4 5 OIAAAA BQHAAA HHHHxx +9830 5150 0 2 0 10 30 830 1830 4830 9830 60 61 COAAAA CQHAAA OOOOxx +7826 5151 0 2 6 6 26 826 1826 2826 7826 52 53 APAAAA DQHAAA VVVVxx +2108 5152 0 0 8 8 8 108 108 2108 2108 16 17 CDAAAA EQHAAA AAAAxx +7245 5153 1 1 5 5 45 245 1245 2245 7245 90 91 RSAAAA FQHAAA HHHHxx +8330 5154 0 2 0 10 30 330 330 3330 8330 60 61 KIAAAA GQHAAA OOOOxx +7441 5155 1 1 1 1 41 441 1441 2441 7441 82 83 FAAAAA HQHAAA VVVVxx +9848 5156 0 0 8 8 48 848 1848 4848 9848 96 97 UOAAAA IQHAAA AAAAxx +1226 5157 0 2 6 6 26 226 1226 1226 1226 52 53 EVAAAA JQHAAA HHHHxx +414 5158 0 2 4 14 14 414 414 414 414 28 29 YPAAAA KQHAAA OOOOxx +1273 5159 1 1 3 13 73 273 1273 1273 1273 146 147 ZWAAAA LQHAAA VVVVxx +9866 5160 0 2 6 6 66 866 1866 4866 9866 132 133 MPAAAA MQHAAA AAAAxx +4633 5161 1 1 3 13 33 633 633 4633 4633 66 67 FWAAAA NQHAAA HHHHxx +8727 5162 1 3 7 7 27 727 727 3727 8727 54 55 RXAAAA OQHAAA OOOOxx +5308 5163 0 0 8 8 8 308 1308 308 5308 16 17 EWAAAA PQHAAA VVVVxx +1395 5164 1 3 5 15 95 395 1395 1395 1395 190 191 RBAAAA QQHAAA AAAAxx +1825 5165 1 1 5 5 25 825 1825 1825 1825 50 51 FSAAAA RQHAAA HHHHxx +7606 5166 0 2 6 6 6 606 1606 2606 7606 12 13 OGAAAA SQHAAA OOOOxx +9390 5167 0 2 0 10 90 390 1390 4390 9390 180 181 EXAAAA TQHAAA VVVVxx +2376 5168 0 0 6 16 76 376 376 2376 2376 152 153 KNAAAA UQHAAA AAAAxx +2377 5169 1 1 7 17 77 377 377 2377 2377 154 155 LNAAAA VQHAAA HHHHxx +5346 5170 0 2 6 6 46 346 1346 346 5346 92 93 QXAAAA WQHAAA OOOOxx +4140 
5171 0 0 0 0 40 140 140 4140 4140 80 81 GDAAAA XQHAAA VVVVxx +6032 5172 0 0 2 12 32 32 32 1032 6032 64 65 AYAAAA YQHAAA AAAAxx +9453 5173 1 1 3 13 53 453 1453 4453 9453 106 107 PZAAAA ZQHAAA HHHHxx +9297 5174 1 1 7 17 97 297 1297 4297 9297 194 195 PTAAAA ARHAAA OOOOxx +6455 5175 1 3 5 15 55 455 455 1455 6455 110 111 HOAAAA BRHAAA VVVVxx +4458 5176 0 2 8 18 58 458 458 4458 4458 116 117 MPAAAA CRHAAA AAAAxx +9516 5177 0 0 6 16 16 516 1516 4516 9516 32 33 ACAAAA DRHAAA HHHHxx +6211 5178 1 3 1 11 11 211 211 1211 6211 22 23 XEAAAA ERHAAA OOOOxx +526 5179 0 2 6 6 26 526 526 526 526 52 53 GUAAAA FRHAAA VVVVxx +3570 5180 0 2 0 10 70 570 1570 3570 3570 140 141 IHAAAA GRHAAA AAAAxx +4885 5181 1 1 5 5 85 885 885 4885 4885 170 171 XFAAAA HRHAAA HHHHxx +6390 5182 0 2 0 10 90 390 390 1390 6390 180 181 ULAAAA IRHAAA OOOOxx +1606 5183 0 2 6 6 6 606 1606 1606 1606 12 13 UJAAAA JRHAAA VVVVxx +7850 5184 0 2 0 10 50 850 1850 2850 7850 100 101 YPAAAA KRHAAA AAAAxx +3315 5185 1 3 5 15 15 315 1315 3315 3315 30 31 NXAAAA LRHAAA HHHHxx +8322 5186 0 2 2 2 22 322 322 3322 8322 44 45 CIAAAA MRHAAA OOOOxx +3703 5187 1 3 3 3 3 703 1703 3703 3703 6 7 LMAAAA NRHAAA VVVVxx +9489 5188 1 1 9 9 89 489 1489 4489 9489 178 179 ZAAAAA ORHAAA AAAAxx +6104 5189 0 0 4 4 4 104 104 1104 6104 8 9 UAAAAA PRHAAA HHHHxx +3067 5190 1 3 7 7 67 67 1067 3067 3067 134 135 ZNAAAA QRHAAA OOOOxx +2521 5191 1 1 1 1 21 521 521 2521 2521 42 43 ZSAAAA RRHAAA VVVVxx +2581 5192 1 1 1 1 81 581 581 2581 2581 162 163 HVAAAA SRHAAA AAAAxx +595 5193 1 3 5 15 95 595 595 595 595 190 191 XWAAAA TRHAAA HHHHxx +8291 5194 1 3 1 11 91 291 291 3291 8291 182 183 XGAAAA URHAAA OOOOxx +1727 5195 1 3 7 7 27 727 1727 1727 1727 54 55 LOAAAA VRHAAA VVVVxx +6847 5196 1 3 7 7 47 847 847 1847 6847 94 95 JDAAAA WRHAAA AAAAxx +7494 5197 0 2 4 14 94 494 1494 2494 7494 188 189 GCAAAA XRHAAA HHHHxx +7093 5198 1 1 3 13 93 93 1093 2093 7093 186 187 VMAAAA YRHAAA OOOOxx +7357 5199 1 1 7 17 57 357 1357 2357 7357 114 115 ZWAAAA ZRHAAA VVVVxx +620 5200 0 0 0 0 20 620 620 620 620 40 41 WXAAAA ASHAAA AAAAxx +2460 5201 0 0 0 0 60 460 460 2460 2460 120 121 QQAAAA BSHAAA HHHHxx +1598 5202 0 2 8 18 98 598 1598 1598 1598 196 197 MJAAAA CSHAAA OOOOxx +4112 5203 0 0 2 12 12 112 112 4112 4112 24 25 ECAAAA DSHAAA VVVVxx +2956 5204 0 0 6 16 56 956 956 2956 2956 112 113 SJAAAA ESHAAA AAAAxx +3193 5205 1 1 3 13 93 193 1193 3193 3193 186 187 VSAAAA FSHAAA HHHHxx +6356 5206 0 0 6 16 56 356 356 1356 6356 112 113 MKAAAA GSHAAA OOOOxx +730 5207 0 2 0 10 30 730 730 730 730 60 61 CCAAAA HSHAAA VVVVxx +8826 5208 0 2 6 6 26 826 826 3826 8826 52 53 MBAAAA ISHAAA AAAAxx +9036 5209 0 0 6 16 36 36 1036 4036 9036 72 73 OJAAAA JSHAAA HHHHxx +2085 5210 1 1 5 5 85 85 85 2085 2085 170 171 FCAAAA KSHAAA OOOOxx +9007 5211 1 3 7 7 7 7 1007 4007 9007 14 15 LIAAAA LSHAAA VVVVxx +6047 5212 1 3 7 7 47 47 47 1047 6047 94 95 PYAAAA MSHAAA AAAAxx +3953 5213 1 1 3 13 53 953 1953 3953 3953 106 107 BWAAAA NSHAAA HHHHxx +1214 5214 0 2 4 14 14 214 1214 1214 1214 28 29 SUAAAA OSHAAA OOOOxx +4814 5215 0 2 4 14 14 814 814 4814 4814 28 29 EDAAAA PSHAAA VVVVxx +5738 5216 0 2 8 18 38 738 1738 738 5738 76 77 SMAAAA QSHAAA AAAAxx +7176 5217 0 0 6 16 76 176 1176 2176 7176 152 153 AQAAAA RSHAAA HHHHxx +3609 5218 1 1 9 9 9 609 1609 3609 3609 18 19 VIAAAA SSHAAA OOOOxx +592 5219 0 0 2 12 92 592 592 592 592 184 185 UWAAAA TSHAAA VVVVxx +9391 5220 1 3 1 11 91 391 1391 4391 9391 182 183 FXAAAA USHAAA AAAAxx +5345 5221 1 1 5 5 45 345 1345 345 5345 90 91 PXAAAA VSHAAA HHHHxx +1171 5222 1 3 1 11 71 171 1171 1171 1171 142 143 BTAAAA WSHAAA OOOOxx 
+7238 5223 0 2 8 18 38 238 1238 2238 7238 76 77 KSAAAA XSHAAA VVVVxx +7561 5224 1 1 1 1 61 561 1561 2561 7561 122 123 VEAAAA YSHAAA AAAAxx +5876 5225 0 0 6 16 76 876 1876 876 5876 152 153 ASAAAA ZSHAAA HHHHxx +6611 5226 1 3 1 11 11 611 611 1611 6611 22 23 HUAAAA ATHAAA OOOOxx +7300 5227 0 0 0 0 0 300 1300 2300 7300 0 1 UUAAAA BTHAAA VVVVxx +1506 5228 0 2 6 6 6 506 1506 1506 1506 12 13 YFAAAA CTHAAA AAAAxx +1153 5229 1 1 3 13 53 153 1153 1153 1153 106 107 JSAAAA DTHAAA HHHHxx +3831 5230 1 3 1 11 31 831 1831 3831 3831 62 63 JRAAAA ETHAAA OOOOxx +9255 5231 1 3 5 15 55 255 1255 4255 9255 110 111 ZRAAAA FTHAAA VVVVxx +1841 5232 1 1 1 1 41 841 1841 1841 1841 82 83 VSAAAA GTHAAA AAAAxx +5075 5233 1 3 5 15 75 75 1075 75 5075 150 151 FNAAAA HTHAAA HHHHxx +101 5234 1 1 1 1 1 101 101 101 101 2 3 XDAAAA ITHAAA OOOOxx +2627 5235 1 3 7 7 27 627 627 2627 2627 54 55 BXAAAA JTHAAA VVVVxx +7078 5236 0 2 8 18 78 78 1078 2078 7078 156 157 GMAAAA KTHAAA AAAAxx +2850 5237 0 2 0 10 50 850 850 2850 2850 100 101 QFAAAA LTHAAA HHHHxx +8703 5238 1 3 3 3 3 703 703 3703 8703 6 7 TWAAAA MTHAAA OOOOxx +4101 5239 1 1 1 1 1 101 101 4101 4101 2 3 TBAAAA NTHAAA VVVVxx +318 5240 0 2 8 18 18 318 318 318 318 36 37 GMAAAA OTHAAA AAAAxx +6452 5241 0 0 2 12 52 452 452 1452 6452 104 105 EOAAAA PTHAAA HHHHxx +5558 5242 0 2 8 18 58 558 1558 558 5558 116 117 UFAAAA QTHAAA OOOOxx +3127 5243 1 3 7 7 27 127 1127 3127 3127 54 55 HQAAAA RTHAAA VVVVxx +535 5244 1 3 5 15 35 535 535 535 535 70 71 PUAAAA STHAAA AAAAxx +270 5245 0 2 0 10 70 270 270 270 270 140 141 KKAAAA TTHAAA HHHHxx +4038 5246 0 2 8 18 38 38 38 4038 4038 76 77 IZAAAA UTHAAA OOOOxx +3404 5247 0 0 4 4 4 404 1404 3404 3404 8 9 YAAAAA VTHAAA VVVVxx +2374 5248 0 2 4 14 74 374 374 2374 2374 148 149 INAAAA WTHAAA AAAAxx +6446 5249 0 2 6 6 46 446 446 1446 6446 92 93 YNAAAA XTHAAA HHHHxx +7758 5250 0 2 8 18 58 758 1758 2758 7758 116 117 KMAAAA YTHAAA OOOOxx +356 5251 0 0 6 16 56 356 356 356 356 112 113 SNAAAA ZTHAAA VVVVxx +9197 5252 1 1 7 17 97 197 1197 4197 9197 194 195 TPAAAA AUHAAA AAAAxx +9765 5253 1 1 5 5 65 765 1765 4765 9765 130 131 PLAAAA BUHAAA HHHHxx +4974 5254 0 2 4 14 74 974 974 4974 4974 148 149 IJAAAA CUHAAA OOOOxx +442 5255 0 2 2 2 42 442 442 442 442 84 85 ARAAAA DUHAAA VVVVxx +4349 5256 1 1 9 9 49 349 349 4349 4349 98 99 HLAAAA EUHAAA AAAAxx +6119 5257 1 3 9 19 19 119 119 1119 6119 38 39 JBAAAA FUHAAA HHHHxx +7574 5258 0 2 4 14 74 574 1574 2574 7574 148 149 IFAAAA GUHAAA OOOOxx +4445 5259 1 1 5 5 45 445 445 4445 4445 90 91 ZOAAAA HUHAAA VVVVxx +940 5260 0 0 0 0 40 940 940 940 940 80 81 EKAAAA IUHAAA AAAAxx +1875 5261 1 3 5 15 75 875 1875 1875 1875 150 151 DUAAAA JUHAAA HHHHxx +5951 5262 1 3 1 11 51 951 1951 951 5951 102 103 XUAAAA KUHAAA OOOOxx +9132 5263 0 0 2 12 32 132 1132 4132 9132 64 65 GNAAAA LUHAAA VVVVxx +6913 5264 1 1 3 13 13 913 913 1913 6913 26 27 XFAAAA MUHAAA AAAAxx +3308 5265 0 0 8 8 8 308 1308 3308 3308 16 17 GXAAAA NUHAAA HHHHxx +7553 5266 1 1 3 13 53 553 1553 2553 7553 106 107 NEAAAA OUHAAA OOOOxx +2138 5267 0 2 8 18 38 138 138 2138 2138 76 77 GEAAAA PUHAAA VVVVxx +6252 5268 0 0 2 12 52 252 252 1252 6252 104 105 MGAAAA QUHAAA AAAAxx +2171 5269 1 3 1 11 71 171 171 2171 2171 142 143 NFAAAA RUHAAA HHHHxx +4159 5270 1 3 9 19 59 159 159 4159 4159 118 119 ZDAAAA SUHAAA OOOOxx +2401 5271 1 1 1 1 1 401 401 2401 2401 2 3 JOAAAA TUHAAA VVVVxx +6553 5272 1 1 3 13 53 553 553 1553 6553 106 107 BSAAAA UUHAAA AAAAxx +5217 5273 1 1 7 17 17 217 1217 217 5217 34 35 RSAAAA VUHAAA HHHHxx +1405 5274 1 1 5 5 5 405 1405 1405 1405 10 11 BCAAAA WUHAAA OOOOxx +1494 5275 0 2 4 
14 94 494 1494 1494 1494 188 189 MFAAAA XUHAAA VVVVxx +5553 5276 1 1 3 13 53 553 1553 553 5553 106 107 PFAAAA YUHAAA AAAAxx +8296 5277 0 0 6 16 96 296 296 3296 8296 192 193 CHAAAA ZUHAAA HHHHxx +6565 5278 1 1 5 5 65 565 565 1565 6565 130 131 NSAAAA AVHAAA OOOOxx +817 5279 1 1 7 17 17 817 817 817 817 34 35 LFAAAA BVHAAA VVVVxx +6947 5280 1 3 7 7 47 947 947 1947 6947 94 95 FHAAAA CVHAAA AAAAxx +4184 5281 0 0 4 4 84 184 184 4184 4184 168 169 YEAAAA DVHAAA HHHHxx +6577 5282 1 1 7 17 77 577 577 1577 6577 154 155 ZSAAAA EVHAAA OOOOxx +6424 5283 0 0 4 4 24 424 424 1424 6424 48 49 CNAAAA FVHAAA VVVVxx +2482 5284 0 2 2 2 82 482 482 2482 2482 164 165 MRAAAA GVHAAA AAAAxx +6874 5285 0 2 4 14 74 874 874 1874 6874 148 149 KEAAAA HVHAAA HHHHxx +7601 5286 1 1 1 1 1 601 1601 2601 7601 2 3 JGAAAA IVHAAA OOOOxx +4552 5287 0 0 2 12 52 552 552 4552 4552 104 105 CTAAAA JVHAAA VVVVxx +8406 5288 0 2 6 6 6 406 406 3406 8406 12 13 ILAAAA KVHAAA AAAAxx +2924 5289 0 0 4 4 24 924 924 2924 2924 48 49 MIAAAA LVHAAA HHHHxx +8255 5290 1 3 5 15 55 255 255 3255 8255 110 111 NFAAAA MVHAAA OOOOxx +4920 5291 0 0 0 0 20 920 920 4920 4920 40 41 GHAAAA NVHAAA VVVVxx +228 5292 0 0 8 8 28 228 228 228 228 56 57 UIAAAA OVHAAA AAAAxx +9431 5293 1 3 1 11 31 431 1431 4431 9431 62 63 TYAAAA PVHAAA HHHHxx +4021 5294 1 1 1 1 21 21 21 4021 4021 42 43 RYAAAA QVHAAA OOOOxx +2966 5295 0 2 6 6 66 966 966 2966 2966 132 133 CKAAAA RVHAAA VVVVxx +2862 5296 0 2 2 2 62 862 862 2862 2862 124 125 CGAAAA SVHAAA AAAAxx +4303 5297 1 3 3 3 3 303 303 4303 4303 6 7 NJAAAA TVHAAA HHHHxx +9643 5298 1 3 3 3 43 643 1643 4643 9643 86 87 XGAAAA UVHAAA OOOOxx +3008 5299 0 0 8 8 8 8 1008 3008 3008 16 17 SLAAAA VVHAAA VVVVxx +7476 5300 0 0 6 16 76 476 1476 2476 7476 152 153 OBAAAA WVHAAA AAAAxx +3686 5301 0 2 6 6 86 686 1686 3686 3686 172 173 ULAAAA XVHAAA HHHHxx +9051 5302 1 3 1 11 51 51 1051 4051 9051 102 103 DKAAAA YVHAAA OOOOxx +6592 5303 0 0 2 12 92 592 592 1592 6592 184 185 OTAAAA ZVHAAA VVVVxx +924 5304 0 0 4 4 24 924 924 924 924 48 49 OJAAAA AWHAAA AAAAxx +4406 5305 0 2 6 6 6 406 406 4406 4406 12 13 MNAAAA BWHAAA HHHHxx +5233 5306 1 1 3 13 33 233 1233 233 5233 66 67 HTAAAA CWHAAA OOOOxx +8881 5307 1 1 1 1 81 881 881 3881 8881 162 163 PDAAAA DWHAAA VVVVxx +2212 5308 0 0 2 12 12 212 212 2212 2212 24 25 CHAAAA EWHAAA AAAAxx +5804 5309 0 0 4 4 4 804 1804 804 5804 8 9 GPAAAA FWHAAA HHHHxx +2990 5310 0 2 0 10 90 990 990 2990 2990 180 181 ALAAAA GWHAAA OOOOxx +4069 5311 1 1 9 9 69 69 69 4069 4069 138 139 NAAAAA HWHAAA VVVVxx +5380 5312 0 0 0 0 80 380 1380 380 5380 160 161 YYAAAA IWHAAA AAAAxx +5016 5313 0 0 6 16 16 16 1016 16 5016 32 33 YKAAAA JWHAAA HHHHxx +5056 5314 0 0 6 16 56 56 1056 56 5056 112 113 MMAAAA KWHAAA OOOOxx +3732 5315 0 0 2 12 32 732 1732 3732 3732 64 65 ONAAAA LWHAAA VVVVxx +5527 5316 1 3 7 7 27 527 1527 527 5527 54 55 PEAAAA MWHAAA AAAAxx +1151 5317 1 3 1 11 51 151 1151 1151 1151 102 103 HSAAAA NWHAAA HHHHxx +7900 5318 0 0 0 0 0 900 1900 2900 7900 0 1 WRAAAA OWHAAA OOOOxx +1660 5319 0 0 0 0 60 660 1660 1660 1660 120 121 WLAAAA PWHAAA VVVVxx +8064 5320 0 0 4 4 64 64 64 3064 8064 128 129 EYAAAA QWHAAA AAAAxx +8240 5321 0 0 0 0 40 240 240 3240 8240 80 81 YEAAAA RWHAAA HHHHxx +413 5322 1 1 3 13 13 413 413 413 413 26 27 XPAAAA SWHAAA OOOOxx +8311 5323 1 3 1 11 11 311 311 3311 8311 22 23 RHAAAA TWHAAA VVVVxx +1065 5324 1 1 5 5 65 65 1065 1065 1065 130 131 ZOAAAA UWHAAA AAAAxx +2741 5325 1 1 1 1 41 741 741 2741 2741 82 83 LBAAAA VWHAAA HHHHxx +5306 5326 0 2 6 6 6 306 1306 306 5306 12 13 CWAAAA WWHAAA OOOOxx +5464 5327 0 0 4 4 64 464 1464 464 5464 128 
129 ECAAAA XWHAAA VVVVxx +4237 5328 1 1 7 17 37 237 237 4237 4237 74 75 ZGAAAA YWHAAA AAAAxx +3822 5329 0 2 2 2 22 822 1822 3822 3822 44 45 ARAAAA ZWHAAA HHHHxx +2548 5330 0 0 8 8 48 548 548 2548 2548 96 97 AUAAAA AXHAAA OOOOxx +2688 5331 0 0 8 8 88 688 688 2688 2688 176 177 KZAAAA BXHAAA VVVVxx +8061 5332 1 1 1 1 61 61 61 3061 8061 122 123 BYAAAA CXHAAA AAAAxx +9340 5333 0 0 0 0 40 340 1340 4340 9340 80 81 GVAAAA DXHAAA HHHHxx +4031 5334 1 3 1 11 31 31 31 4031 4031 62 63 BZAAAA EXHAAA OOOOxx +2635 5335 1 3 5 15 35 635 635 2635 2635 70 71 JXAAAA FXHAAA VVVVxx +809 5336 1 1 9 9 9 809 809 809 809 18 19 DFAAAA GXHAAA AAAAxx +3209 5337 1 1 9 9 9 209 1209 3209 3209 18 19 LTAAAA HXHAAA HHHHxx +3825 5338 1 1 5 5 25 825 1825 3825 3825 50 51 DRAAAA IXHAAA OOOOxx +1448 5339 0 0 8 8 48 448 1448 1448 1448 96 97 SDAAAA JXHAAA VVVVxx +9077 5340 1 1 7 17 77 77 1077 4077 9077 154 155 DLAAAA KXHAAA AAAAxx +3730 5341 0 2 0 10 30 730 1730 3730 3730 60 61 MNAAAA LXHAAA HHHHxx +9596 5342 0 0 6 16 96 596 1596 4596 9596 192 193 CFAAAA MXHAAA OOOOxx +3563 5343 1 3 3 3 63 563 1563 3563 3563 126 127 BHAAAA NXHAAA VVVVxx +4116 5344 0 0 6 16 16 116 116 4116 4116 32 33 ICAAAA OXHAAA AAAAxx +4825 5345 1 1 5 5 25 825 825 4825 4825 50 51 PDAAAA PXHAAA HHHHxx +8376 5346 0 0 6 16 76 376 376 3376 8376 152 153 EKAAAA QXHAAA OOOOxx +3917 5347 1 1 7 17 17 917 1917 3917 3917 34 35 RUAAAA RXHAAA VVVVxx +4407 5348 1 3 7 7 7 407 407 4407 4407 14 15 NNAAAA SXHAAA AAAAxx +8202 5349 0 2 2 2 2 202 202 3202 8202 4 5 MDAAAA TXHAAA HHHHxx +7675 5350 1 3 5 15 75 675 1675 2675 7675 150 151 FJAAAA UXHAAA OOOOxx +4104 5351 0 0 4 4 4 104 104 4104 4104 8 9 WBAAAA VXHAAA VVVVxx +9225 5352 1 1 5 5 25 225 1225 4225 9225 50 51 VQAAAA WXHAAA AAAAxx +2834 5353 0 2 4 14 34 834 834 2834 2834 68 69 AFAAAA XXHAAA HHHHxx +1227 5354 1 3 7 7 27 227 1227 1227 1227 54 55 FVAAAA YXHAAA OOOOxx +3383 5355 1 3 3 3 83 383 1383 3383 3383 166 167 DAAAAA ZXHAAA VVVVxx +67 5356 1 3 7 7 67 67 67 67 67 134 135 PCAAAA AYHAAA AAAAxx +1751 5357 1 3 1 11 51 751 1751 1751 1751 102 103 JPAAAA BYHAAA HHHHxx +8054 5358 0 2 4 14 54 54 54 3054 8054 108 109 UXAAAA CYHAAA OOOOxx +8571 5359 1 3 1 11 71 571 571 3571 8571 142 143 RRAAAA DYHAAA VVVVxx +2466 5360 0 2 6 6 66 466 466 2466 2466 132 133 WQAAAA EYHAAA AAAAxx +9405 5361 1 1 5 5 5 405 1405 4405 9405 10 11 TXAAAA FYHAAA HHHHxx +6883 5362 1 3 3 3 83 883 883 1883 6883 166 167 TEAAAA GYHAAA OOOOxx +4301 5363 1 1 1 1 1 301 301 4301 4301 2 3 LJAAAA HYHAAA VVVVxx +3705 5364 1 1 5 5 5 705 1705 3705 3705 10 11 NMAAAA IYHAAA AAAAxx +5420 5365 0 0 0 0 20 420 1420 420 5420 40 41 MAAAAA JYHAAA HHHHxx +3692 5366 0 0 2 12 92 692 1692 3692 3692 184 185 AMAAAA KYHAAA OOOOxx +6851 5367 1 3 1 11 51 851 851 1851 6851 102 103 NDAAAA LYHAAA VVVVxx +9363 5368 1 3 3 3 63 363 1363 4363 9363 126 127 DWAAAA MYHAAA AAAAxx +2269 5369 1 1 9 9 69 269 269 2269 2269 138 139 HJAAAA NYHAAA HHHHxx +4918 5370 0 2 8 18 18 918 918 4918 4918 36 37 EHAAAA OYHAAA OOOOxx +4297 5371 1 1 7 17 97 297 297 4297 4297 194 195 HJAAAA PYHAAA VVVVxx +1836 5372 0 0 6 16 36 836 1836 1836 1836 72 73 QSAAAA QYHAAA AAAAxx +237 5373 1 1 7 17 37 237 237 237 237 74 75 DJAAAA RYHAAA HHHHxx +6131 5374 1 3 1 11 31 131 131 1131 6131 62 63 VBAAAA SYHAAA OOOOxx +3174 5375 0 2 4 14 74 174 1174 3174 3174 148 149 CSAAAA TYHAAA VVVVxx +9987 5376 1 3 7 7 87 987 1987 4987 9987 174 175 DUAAAA UYHAAA AAAAxx +3630 5377 0 2 0 10 30 630 1630 3630 3630 60 61 QJAAAA VYHAAA HHHHxx +2899 5378 1 3 9 19 99 899 899 2899 2899 198 199 NHAAAA WYHAAA OOOOxx +4079 5379 1 3 9 19 79 79 79 4079 4079 158 159 XAAAAA 
XYHAAA VVVVxx +5049 5380 1 1 9 9 49 49 1049 49 5049 98 99 FMAAAA YYHAAA AAAAxx +2963 5381 1 3 3 3 63 963 963 2963 2963 126 127 ZJAAAA ZYHAAA HHHHxx +3962 5382 0 2 2 2 62 962 1962 3962 3962 124 125 KWAAAA AZHAAA OOOOxx +7921 5383 1 1 1 1 21 921 1921 2921 7921 42 43 RSAAAA BZHAAA VVVVxx +3967 5384 1 3 7 7 67 967 1967 3967 3967 134 135 PWAAAA CZHAAA AAAAxx +2752 5385 0 0 2 12 52 752 752 2752 2752 104 105 WBAAAA DZHAAA HHHHxx +7944 5386 0 0 4 4 44 944 1944 2944 7944 88 89 OTAAAA EZHAAA OOOOxx +2205 5387 1 1 5 5 5 205 205 2205 2205 10 11 VGAAAA FZHAAA VVVVxx +5035 5388 1 3 5 15 35 35 1035 35 5035 70 71 RLAAAA GZHAAA AAAAxx +1425 5389 1 1 5 5 25 425 1425 1425 1425 50 51 VCAAAA HZHAAA HHHHxx +832 5390 0 0 2 12 32 832 832 832 832 64 65 AGAAAA IZHAAA OOOOxx +1447 5391 1 3 7 7 47 447 1447 1447 1447 94 95 RDAAAA JZHAAA VVVVxx +6108 5392 0 0 8 8 8 108 108 1108 6108 16 17 YAAAAA KZHAAA AAAAxx +4936 5393 0 0 6 16 36 936 936 4936 4936 72 73 WHAAAA LZHAAA HHHHxx +7704 5394 0 0 4 4 4 704 1704 2704 7704 8 9 IKAAAA MZHAAA OOOOxx +142 5395 0 2 2 2 42 142 142 142 142 84 85 MFAAAA NZHAAA VVVVxx +4272 5396 0 0 2 12 72 272 272 4272 4272 144 145 IIAAAA OZHAAA AAAAxx +7667 5397 1 3 7 7 67 667 1667 2667 7667 134 135 XIAAAA PZHAAA HHHHxx +366 5398 0 2 6 6 66 366 366 366 366 132 133 COAAAA QZHAAA OOOOxx +8866 5399 0 2 6 6 66 866 866 3866 8866 132 133 ADAAAA RZHAAA VVVVxx +7712 5400 0 0 2 12 12 712 1712 2712 7712 24 25 QKAAAA SZHAAA AAAAxx +3880 5401 0 0 0 0 80 880 1880 3880 3880 160 161 GTAAAA TZHAAA HHHHxx +4631 5402 1 3 1 11 31 631 631 4631 4631 62 63 DWAAAA UZHAAA OOOOxx +2789 5403 1 1 9 9 89 789 789 2789 2789 178 179 HDAAAA VZHAAA VVVVxx +7720 5404 0 0 0 0 20 720 1720 2720 7720 40 41 YKAAAA WZHAAA AAAAxx +7618 5405 0 2 8 18 18 618 1618 2618 7618 36 37 AHAAAA XZHAAA HHHHxx +4990 5406 0 2 0 10 90 990 990 4990 4990 180 181 YJAAAA YZHAAA OOOOxx +7918 5407 0 2 8 18 18 918 1918 2918 7918 36 37 OSAAAA ZZHAAA VVVVxx +5067 5408 1 3 7 7 67 67 1067 67 5067 134 135 XMAAAA AAIAAA AAAAxx +6370 5409 0 2 0 10 70 370 370 1370 6370 140 141 ALAAAA BAIAAA HHHHxx +2268 5410 0 0 8 8 68 268 268 2268 2268 136 137 GJAAAA CAIAAA OOOOxx +1949 5411 1 1 9 9 49 949 1949 1949 1949 98 99 ZWAAAA DAIAAA VVVVxx +5503 5412 1 3 3 3 3 503 1503 503 5503 6 7 RDAAAA EAIAAA AAAAxx +9951 5413 1 3 1 11 51 951 1951 4951 9951 102 103 TSAAAA FAIAAA HHHHxx +6823 5414 1 3 3 3 23 823 823 1823 6823 46 47 LCAAAA GAIAAA OOOOxx +6287 5415 1 3 7 7 87 287 287 1287 6287 174 175 VHAAAA HAIAAA VVVVxx +6016 5416 0 0 6 16 16 16 16 1016 6016 32 33 KXAAAA IAIAAA AAAAxx +1977 5417 1 1 7 17 77 977 1977 1977 1977 154 155 BYAAAA JAIAAA HHHHxx +8579 5418 1 3 9 19 79 579 579 3579 8579 158 159 ZRAAAA KAIAAA OOOOxx +6204 5419 0 0 4 4 4 204 204 1204 6204 8 9 QEAAAA LAIAAA VVVVxx +9764 5420 0 0 4 4 64 764 1764 4764 9764 128 129 OLAAAA MAIAAA AAAAxx +2005 5421 1 1 5 5 5 5 5 2005 2005 10 11 DZAAAA NAIAAA HHHHxx +1648 5422 0 0 8 8 48 648 1648 1648 1648 96 97 KLAAAA OAIAAA OOOOxx +2457 5423 1 1 7 17 57 457 457 2457 2457 114 115 NQAAAA PAIAAA VVVVxx +2698 5424 0 2 8 18 98 698 698 2698 2698 196 197 UZAAAA QAIAAA AAAAxx +7730 5425 0 2 0 10 30 730 1730 2730 7730 60 61 ILAAAA RAIAAA HHHHxx +7287 5426 1 3 7 7 87 287 1287 2287 7287 174 175 HUAAAA SAIAAA OOOOxx +2937 5427 1 1 7 17 37 937 937 2937 2937 74 75 ZIAAAA TAIAAA VVVVxx +6824 5428 0 0 4 4 24 824 824 1824 6824 48 49 MCAAAA UAIAAA AAAAxx +9256 5429 0 0 6 16 56 256 1256 4256 9256 112 113 ASAAAA VAIAAA HHHHxx +4810 5430 0 2 0 10 10 810 810 4810 4810 20 21 ADAAAA WAIAAA OOOOxx +3869 5431 1 1 9 9 69 869 1869 3869 3869 138 139 VSAAAA XAIAAA 
VVVVxx +1993 5432 1 1 3 13 93 993 1993 1993 1993 186 187 RYAAAA YAIAAA AAAAxx +6048 5433 0 0 8 8 48 48 48 1048 6048 96 97 QYAAAA ZAIAAA HHHHxx +6922 5434 0 2 2 2 22 922 922 1922 6922 44 45 GGAAAA ABIAAA OOOOxx +8 5435 0 0 8 8 8 8 8 8 8 16 17 IAAAAA BBIAAA VVVVxx +6706 5436 0 2 6 6 6 706 706 1706 6706 12 13 YXAAAA CBIAAA AAAAxx +9159 5437 1 3 9 19 59 159 1159 4159 9159 118 119 HOAAAA DBIAAA HHHHxx +7020 5438 0 0 0 0 20 20 1020 2020 7020 40 41 AKAAAA EBIAAA OOOOxx +767 5439 1 3 7 7 67 767 767 767 767 134 135 NDAAAA FBIAAA VVVVxx +8602 5440 0 2 2 2 2 602 602 3602 8602 4 5 WSAAAA GBIAAA AAAAxx +4442 5441 0 2 2 2 42 442 442 4442 4442 84 85 WOAAAA HBIAAA HHHHxx +2040 5442 0 0 0 0 40 40 40 2040 2040 80 81 MAAAAA IBIAAA OOOOxx +5493 5443 1 1 3 13 93 493 1493 493 5493 186 187 HDAAAA JBIAAA VVVVxx +275 5444 1 3 5 15 75 275 275 275 275 150 151 PKAAAA KBIAAA AAAAxx +8876 5445 0 0 6 16 76 876 876 3876 8876 152 153 KDAAAA LBIAAA HHHHxx +7381 5446 1 1 1 1 81 381 1381 2381 7381 162 163 XXAAAA MBIAAA OOOOxx +1827 5447 1 3 7 7 27 827 1827 1827 1827 54 55 HSAAAA NBIAAA VVVVxx +3537 5448 1 1 7 17 37 537 1537 3537 3537 74 75 BGAAAA OBIAAA AAAAxx +6978 5449 0 2 8 18 78 978 978 1978 6978 156 157 KIAAAA PBIAAA HHHHxx +6160 5450 0 0 0 0 60 160 160 1160 6160 120 121 YCAAAA QBIAAA OOOOxx +9219 5451 1 3 9 19 19 219 1219 4219 9219 38 39 PQAAAA RBIAAA VVVVxx +5034 5452 0 2 4 14 34 34 1034 34 5034 68 69 QLAAAA SBIAAA AAAAxx +8463 5453 1 3 3 3 63 463 463 3463 8463 126 127 NNAAAA TBIAAA HHHHxx +2038 5454 0 2 8 18 38 38 38 2038 2038 76 77 KAAAAA UBIAAA OOOOxx +9562 5455 0 2 2 2 62 562 1562 4562 9562 124 125 UDAAAA VBIAAA VVVVxx +2687 5456 1 3 7 7 87 687 687 2687 2687 174 175 JZAAAA WBIAAA AAAAxx +5092 5457 0 0 2 12 92 92 1092 92 5092 184 185 WNAAAA XBIAAA HHHHxx +539 5458 1 3 9 19 39 539 539 539 539 78 79 TUAAAA YBIAAA OOOOxx +2139 5459 1 3 9 19 39 139 139 2139 2139 78 79 HEAAAA ZBIAAA VVVVxx +9221 5460 1 1 1 1 21 221 1221 4221 9221 42 43 RQAAAA ACIAAA AAAAxx +965 5461 1 1 5 5 65 965 965 965 965 130 131 DLAAAA BCIAAA HHHHxx +6051 5462 1 3 1 11 51 51 51 1051 6051 102 103 TYAAAA CCIAAA OOOOxx +5822 5463 0 2 2 2 22 822 1822 822 5822 44 45 YPAAAA DCIAAA VVVVxx +6397 5464 1 1 7 17 97 397 397 1397 6397 194 195 BMAAAA ECIAAA AAAAxx +2375 5465 1 3 5 15 75 375 375 2375 2375 150 151 JNAAAA FCIAAA HHHHxx +9415 5466 1 3 5 15 15 415 1415 4415 9415 30 31 DYAAAA GCIAAA OOOOxx +6552 5467 0 0 2 12 52 552 552 1552 6552 104 105 ASAAAA HCIAAA VVVVxx +2248 5468 0 0 8 8 48 248 248 2248 2248 96 97 MIAAAA ICIAAA AAAAxx +2611 5469 1 3 1 11 11 611 611 2611 2611 22 23 LWAAAA JCIAAA HHHHxx +9609 5470 1 1 9 9 9 609 1609 4609 9609 18 19 PFAAAA KCIAAA OOOOxx +2132 5471 0 0 2 12 32 132 132 2132 2132 64 65 AEAAAA LCIAAA VVVVxx +8452 5472 0 0 2 12 52 452 452 3452 8452 104 105 CNAAAA MCIAAA AAAAxx +9407 5473 1 3 7 7 7 407 1407 4407 9407 14 15 VXAAAA NCIAAA HHHHxx +2814 5474 0 2 4 14 14 814 814 2814 2814 28 29 GEAAAA OCIAAA OOOOxx +1889 5475 1 1 9 9 89 889 1889 1889 1889 178 179 RUAAAA PCIAAA VVVVxx +7489 5476 1 1 9 9 89 489 1489 2489 7489 178 179 BCAAAA QCIAAA AAAAxx +2255 5477 1 3 5 15 55 255 255 2255 2255 110 111 TIAAAA RCIAAA HHHHxx +3380 5478 0 0 0 0 80 380 1380 3380 3380 160 161 AAAAAA SCIAAA OOOOxx +1167 5479 1 3 7 7 67 167 1167 1167 1167 134 135 XSAAAA TCIAAA VVVVxx +5369 5480 1 1 9 9 69 369 1369 369 5369 138 139 NYAAAA UCIAAA AAAAxx +2378 5481 0 2 8 18 78 378 378 2378 2378 156 157 MNAAAA VCIAAA HHHHxx +8315 5482 1 3 5 15 15 315 315 3315 8315 30 31 VHAAAA WCIAAA OOOOxx +2934 5483 0 2 4 14 34 934 934 2934 2934 68 69 WIAAAA XCIAAA VVVVxx +7924 5484 0 0 
4 4 24 924 1924 2924 7924 48 49 USAAAA YCIAAA AAAAxx +2867 5485 1 3 7 7 67 867 867 2867 2867 134 135 HGAAAA ZCIAAA HHHHxx +9141 5486 1 1 1 1 41 141 1141 4141 9141 82 83 PNAAAA ADIAAA OOOOxx +3613 5487 1 1 3 13 13 613 1613 3613 3613 26 27 ZIAAAA BDIAAA VVVVxx +2461 5488 1 1 1 1 61 461 461 2461 2461 122 123 RQAAAA CDIAAA AAAAxx +4567 5489 1 3 7 7 67 567 567 4567 4567 134 135 RTAAAA DDIAAA HHHHxx +2906 5490 0 2 6 6 6 906 906 2906 2906 12 13 UHAAAA EDIAAA OOOOxx +4848 5491 0 0 8 8 48 848 848 4848 4848 96 97 MEAAAA FDIAAA VVVVxx +6614 5492 0 2 4 14 14 614 614 1614 6614 28 29 KUAAAA GDIAAA AAAAxx +6200 5493 0 0 0 0 0 200 200 1200 6200 0 1 MEAAAA HDIAAA HHHHxx +7895 5494 1 3 5 15 95 895 1895 2895 7895 190 191 RRAAAA IDIAAA OOOOxx +6829 5495 1 1 9 9 29 829 829 1829 6829 58 59 RCAAAA JDIAAA VVVVxx +4087 5496 1 3 7 7 87 87 87 4087 4087 174 175 FBAAAA KDIAAA AAAAxx +8787 5497 1 3 7 7 87 787 787 3787 8787 174 175 ZZAAAA LDIAAA HHHHxx +3322 5498 0 2 2 2 22 322 1322 3322 3322 44 45 UXAAAA MDIAAA OOOOxx +9091 5499 1 3 1 11 91 91 1091 4091 9091 182 183 RLAAAA NDIAAA VVVVxx +5268 5500 0 0 8 8 68 268 1268 268 5268 136 137 QUAAAA ODIAAA AAAAxx +2719 5501 1 3 9 19 19 719 719 2719 2719 38 39 PAAAAA PDIAAA HHHHxx +30 5502 0 2 0 10 30 30 30 30 30 60 61 EBAAAA QDIAAA OOOOxx +1975 5503 1 3 5 15 75 975 1975 1975 1975 150 151 ZXAAAA RDIAAA VVVVxx +2641 5504 1 1 1 1 41 641 641 2641 2641 82 83 PXAAAA SDIAAA AAAAxx +8616 5505 0 0 6 16 16 616 616 3616 8616 32 33 KTAAAA TDIAAA HHHHxx +5980 5506 0 0 0 0 80 980 1980 980 5980 160 161 AWAAAA UDIAAA OOOOxx +5170 5507 0 2 0 10 70 170 1170 170 5170 140 141 WQAAAA VDIAAA VVVVxx +1960 5508 0 0 0 0 60 960 1960 1960 1960 120 121 KXAAAA WDIAAA AAAAxx +8141 5509 1 1 1 1 41 141 141 3141 8141 82 83 DBAAAA XDIAAA HHHHxx +6692 5510 0 0 2 12 92 692 692 1692 6692 184 185 KXAAAA YDIAAA OOOOxx +7621 5511 1 1 1 1 21 621 1621 2621 7621 42 43 DHAAAA ZDIAAA VVVVxx +3890 5512 0 2 0 10 90 890 1890 3890 3890 180 181 QTAAAA AEIAAA AAAAxx +4300 5513 0 0 0 0 0 300 300 4300 4300 0 1 KJAAAA BEIAAA HHHHxx +736 5514 0 0 6 16 36 736 736 736 736 72 73 ICAAAA CEIAAA OOOOxx +6626 5515 0 2 6 6 26 626 626 1626 6626 52 53 WUAAAA DEIAAA VVVVxx +1800 5516 0 0 0 0 0 800 1800 1800 1800 0 1 GRAAAA EEIAAA AAAAxx +3430 5517 0 2 0 10 30 430 1430 3430 3430 60 61 YBAAAA FEIAAA HHHHxx +9519 5518 1 3 9 19 19 519 1519 4519 9519 38 39 DCAAAA GEIAAA OOOOxx +5111 5519 1 3 1 11 11 111 1111 111 5111 22 23 POAAAA HEIAAA VVVVxx +6915 5520 1 3 5 15 15 915 915 1915 6915 30 31 ZFAAAA IEIAAA AAAAxx +9246 5521 0 2 6 6 46 246 1246 4246 9246 92 93 QRAAAA JEIAAA HHHHxx +5141 5522 1 1 1 1 41 141 1141 141 5141 82 83 TPAAAA KEIAAA OOOOxx +5922 5523 0 2 2 2 22 922 1922 922 5922 44 45 UTAAAA LEIAAA VVVVxx +3087 5524 1 3 7 7 87 87 1087 3087 3087 174 175 TOAAAA MEIAAA AAAAxx +1859 5525 1 3 9 19 59 859 1859 1859 1859 118 119 NTAAAA NEIAAA HHHHxx +8482 5526 0 2 2 2 82 482 482 3482 8482 164 165 GOAAAA OEIAAA OOOOxx +8414 5527 0 2 4 14 14 414 414 3414 8414 28 29 QLAAAA PEIAAA VVVVxx +6662 5528 0 2 2 2 62 662 662 1662 6662 124 125 GWAAAA QEIAAA AAAAxx +8614 5529 0 2 4 14 14 614 614 3614 8614 28 29 ITAAAA REIAAA HHHHxx +42 5530 0 2 2 2 42 42 42 42 42 84 85 QBAAAA SEIAAA OOOOxx +7582 5531 0 2 2 2 82 582 1582 2582 7582 164 165 QFAAAA TEIAAA VVVVxx +8183 5532 1 3 3 3 83 183 183 3183 8183 166 167 TCAAAA UEIAAA AAAAxx +1299 5533 1 3 9 19 99 299 1299 1299 1299 198 199 ZXAAAA VEIAAA HHHHxx +7004 5534 0 0 4 4 4 4 1004 2004 7004 8 9 KJAAAA WEIAAA OOOOxx +3298 5535 0 2 8 18 98 298 1298 3298 3298 196 197 WWAAAA XEIAAA VVVVxx +7884 5536 0 0 4 4 84 884 1884 2884 
7884 168 169 GRAAAA YEIAAA AAAAxx +4191 5537 1 3 1 11 91 191 191 4191 4191 182 183 FFAAAA ZEIAAA HHHHxx +7346 5538 0 2 6 6 46 346 1346 2346 7346 92 93 OWAAAA AFIAAA OOOOxx +7989 5539 1 1 9 9 89 989 1989 2989 7989 178 179 HVAAAA BFIAAA VVVVxx +5719 5540 1 3 9 19 19 719 1719 719 5719 38 39 ZLAAAA CFIAAA AAAAxx +800 5541 0 0 0 0 0 800 800 800 800 0 1 UEAAAA DFIAAA HHHHxx +6509 5542 1 1 9 9 9 509 509 1509 6509 18 19 JQAAAA EFIAAA OOOOxx +4672 5543 0 0 2 12 72 672 672 4672 4672 144 145 SXAAAA FFIAAA VVVVxx +4434 5544 0 2 4 14 34 434 434 4434 4434 68 69 OOAAAA GFIAAA AAAAxx +8309 5545 1 1 9 9 9 309 309 3309 8309 18 19 PHAAAA HFIAAA HHHHxx +5134 5546 0 2 4 14 34 134 1134 134 5134 68 69 MPAAAA IFIAAA OOOOxx +5153 5547 1 1 3 13 53 153 1153 153 5153 106 107 FQAAAA JFIAAA VVVVxx +1522 5548 0 2 2 2 22 522 1522 1522 1522 44 45 OGAAAA KFIAAA AAAAxx +8629 5549 1 1 9 9 29 629 629 3629 8629 58 59 XTAAAA LFIAAA HHHHxx +4549 5550 1 1 9 9 49 549 549 4549 4549 98 99 ZSAAAA MFIAAA OOOOxx +9506 5551 0 2 6 6 6 506 1506 4506 9506 12 13 QBAAAA NFIAAA VVVVxx +6542 5552 0 2 2 2 42 542 542 1542 6542 84 85 QRAAAA OFIAAA AAAAxx +2579 5553 1 3 9 19 79 579 579 2579 2579 158 159 FVAAAA PFIAAA HHHHxx +4664 5554 0 0 4 4 64 664 664 4664 4664 128 129 KXAAAA QFIAAA OOOOxx +696 5555 0 0 6 16 96 696 696 696 696 192 193 UAAAAA RFIAAA VVVVxx +7950 5556 0 2 0 10 50 950 1950 2950 7950 100 101 UTAAAA SFIAAA AAAAxx +5 5557 1 1 5 5 5 5 5 5 5 10 11 FAAAAA TFIAAA HHHHxx +7806 5558 0 2 6 6 6 806 1806 2806 7806 12 13 GOAAAA UFIAAA OOOOxx +2770 5559 0 2 0 10 70 770 770 2770 2770 140 141 OCAAAA VFIAAA VVVVxx +1344 5560 0 0 4 4 44 344 1344 1344 1344 88 89 SZAAAA WFIAAA AAAAxx +511 5561 1 3 1 11 11 511 511 511 511 22 23 RTAAAA XFIAAA HHHHxx +9070 5562 0 2 0 10 70 70 1070 4070 9070 140 141 WKAAAA YFIAAA OOOOxx +2961 5563 1 1 1 1 61 961 961 2961 2961 122 123 XJAAAA ZFIAAA VVVVxx +8031 5564 1 3 1 11 31 31 31 3031 8031 62 63 XWAAAA AGIAAA AAAAxx +326 5565 0 2 6 6 26 326 326 326 326 52 53 OMAAAA BGIAAA HHHHxx +183 5566 1 3 3 3 83 183 183 183 183 166 167 BHAAAA CGIAAA OOOOxx +5917 5567 1 1 7 17 17 917 1917 917 5917 34 35 PTAAAA DGIAAA VVVVxx +8256 5568 0 0 6 16 56 256 256 3256 8256 112 113 OFAAAA EGIAAA AAAAxx +7889 5569 1 1 9 9 89 889 1889 2889 7889 178 179 LRAAAA FGIAAA HHHHxx +9029 5570 1 1 9 9 29 29 1029 4029 9029 58 59 HJAAAA GGIAAA OOOOxx +1316 5571 0 0 6 16 16 316 1316 1316 1316 32 33 QYAAAA HGIAAA VVVVxx +7442 5572 0 2 2 2 42 442 1442 2442 7442 84 85 GAAAAA IGIAAA AAAAxx +2810 5573 0 2 0 10 10 810 810 2810 2810 20 21 CEAAAA JGIAAA HHHHxx +20 5574 0 0 0 0 20 20 20 20 20 40 41 UAAAAA KGIAAA OOOOxx +2306 5575 0 2 6 6 6 306 306 2306 2306 12 13 SKAAAA LGIAAA VVVVxx +4694 5576 0 2 4 14 94 694 694 4694 4694 188 189 OYAAAA MGIAAA AAAAxx +9710 5577 0 2 0 10 10 710 1710 4710 9710 20 21 MJAAAA NGIAAA HHHHxx +1791 5578 1 3 1 11 91 791 1791 1791 1791 182 183 XQAAAA OGIAAA OOOOxx +6730 5579 0 2 0 10 30 730 730 1730 6730 60 61 WYAAAA PGIAAA VVVVxx +359 5580 1 3 9 19 59 359 359 359 359 118 119 VNAAAA QGIAAA AAAAxx +8097 5581 1 1 7 17 97 97 97 3097 8097 194 195 LZAAAA RGIAAA HHHHxx +6147 5582 1 3 7 7 47 147 147 1147 6147 94 95 LCAAAA SGIAAA OOOOxx +643 5583 1 3 3 3 43 643 643 643 643 86 87 TYAAAA TGIAAA VVVVxx +698 5584 0 2 8 18 98 698 698 698 698 196 197 WAAAAA UGIAAA AAAAxx +3881 5585 1 1 1 1 81 881 1881 3881 3881 162 163 HTAAAA VGIAAA HHHHxx +7600 5586 0 0 0 0 0 600 1600 2600 7600 0 1 IGAAAA WGIAAA OOOOxx +1583 5587 1 3 3 3 83 583 1583 1583 1583 166 167 XIAAAA XGIAAA VVVVxx +9612 5588 0 0 2 12 12 612 1612 4612 9612 24 25 SFAAAA YGIAAA AAAAxx +1032 5589 0 0 
2 12 32 32 1032 1032 1032 64 65 SNAAAA ZGIAAA HHHHxx +4834 5590 0 2 4 14 34 834 834 4834 4834 68 69 YDAAAA AHIAAA OOOOxx +5076 5591 0 0 6 16 76 76 1076 76 5076 152 153 GNAAAA BHIAAA VVVVxx +3070 5592 0 2 0 10 70 70 1070 3070 3070 140 141 COAAAA CHIAAA AAAAxx +1421 5593 1 1 1 1 21 421 1421 1421 1421 42 43 RCAAAA DHIAAA HHHHxx +8970 5594 0 2 0 10 70 970 970 3970 8970 140 141 AHAAAA EHIAAA OOOOxx +6271 5595 1 3 1 11 71 271 271 1271 6271 142 143 FHAAAA FHIAAA VVVVxx +8547 5596 1 3 7 7 47 547 547 3547 8547 94 95 TQAAAA GHIAAA AAAAxx +1259 5597 1 3 9 19 59 259 1259 1259 1259 118 119 LWAAAA HHIAAA HHHHxx +8328 5598 0 0 8 8 28 328 328 3328 8328 56 57 IIAAAA IHIAAA OOOOxx +1503 5599 1 3 3 3 3 503 1503 1503 1503 6 7 VFAAAA JHIAAA VVVVxx +2253 5600 1 1 3 13 53 253 253 2253 2253 106 107 RIAAAA KHIAAA AAAAxx +7449 5601 1 1 9 9 49 449 1449 2449 7449 98 99 NAAAAA LHIAAA HHHHxx +3579 5602 1 3 9 19 79 579 1579 3579 3579 158 159 RHAAAA MHIAAA OOOOxx +1585 5603 1 1 5 5 85 585 1585 1585 1585 170 171 ZIAAAA NHIAAA VVVVxx +5543 5604 1 3 3 3 43 543 1543 543 5543 86 87 FFAAAA OHIAAA AAAAxx +8627 5605 1 3 7 7 27 627 627 3627 8627 54 55 VTAAAA PHIAAA HHHHxx +8618 5606 0 2 8 18 18 618 618 3618 8618 36 37 MTAAAA QHIAAA OOOOxx +1911 5607 1 3 1 11 11 911 1911 1911 1911 22 23 NVAAAA RHIAAA VVVVxx +2758 5608 0 2 8 18 58 758 758 2758 2758 116 117 CCAAAA SHIAAA AAAAxx +5744 5609 0 0 4 4 44 744 1744 744 5744 88 89 YMAAAA THIAAA HHHHxx +4976 5610 0 0 6 16 76 976 976 4976 4976 152 153 KJAAAA UHIAAA OOOOxx +6380 5611 0 0 0 0 80 380 380 1380 6380 160 161 KLAAAA VHIAAA VVVVxx +1937 5612 1 1 7 17 37 937 1937 1937 1937 74 75 NWAAAA WHIAAA AAAAxx +9903 5613 1 3 3 3 3 903 1903 4903 9903 6 7 XQAAAA XHIAAA HHHHxx +4409 5614 1 1 9 9 9 409 409 4409 4409 18 19 PNAAAA YHIAAA OOOOxx +4133 5615 1 1 3 13 33 133 133 4133 4133 66 67 ZCAAAA ZHIAAA VVVVxx +5263 5616 1 3 3 3 63 263 1263 263 5263 126 127 LUAAAA AIIAAA AAAAxx +7888 5617 0 0 8 8 88 888 1888 2888 7888 176 177 KRAAAA BIIAAA HHHHxx +6060 5618 0 0 0 0 60 60 60 1060 6060 120 121 CZAAAA CIIAAA OOOOxx +2522 5619 0 2 2 2 22 522 522 2522 2522 44 45 ATAAAA DIIAAA VVVVxx +5550 5620 0 2 0 10 50 550 1550 550 5550 100 101 MFAAAA EIIAAA AAAAxx +9396 5621 0 0 6 16 96 396 1396 4396 9396 192 193 KXAAAA FIIAAA HHHHxx +176 5622 0 0 6 16 76 176 176 176 176 152 153 UGAAAA GIIAAA OOOOxx +5148 5623 0 0 8 8 48 148 1148 148 5148 96 97 AQAAAA HIIAAA VVVVxx +6691 5624 1 3 1 11 91 691 691 1691 6691 182 183 JXAAAA IIIAAA AAAAxx +4652 5625 0 0 2 12 52 652 652 4652 4652 104 105 YWAAAA JIIAAA HHHHxx +5096 5626 0 0 6 16 96 96 1096 96 5096 192 193 AOAAAA KIIAAA OOOOxx +2408 5627 0 0 8 8 8 408 408 2408 2408 16 17 QOAAAA LIIAAA VVVVxx +7322 5628 0 2 2 2 22 322 1322 2322 7322 44 45 QVAAAA MIIAAA AAAAxx +6782 5629 0 2 2 2 82 782 782 1782 6782 164 165 WAAAAA NIIAAA HHHHxx +4642 5630 0 2 2 2 42 642 642 4642 4642 84 85 OWAAAA OIIAAA OOOOxx +5427 5631 1 3 7 7 27 427 1427 427 5427 54 55 TAAAAA PIIAAA VVVVxx +4461 5632 1 1 1 1 61 461 461 4461 4461 122 123 PPAAAA QIIAAA AAAAxx +8416 5633 0 0 6 16 16 416 416 3416 8416 32 33 SLAAAA RIIAAA HHHHxx +2593 5634 1 1 3 13 93 593 593 2593 2593 186 187 TVAAAA SIIAAA OOOOxx +6202 5635 0 2 2 2 2 202 202 1202 6202 4 5 OEAAAA TIIAAA VVVVxx +3826 5636 0 2 6 6 26 826 1826 3826 3826 52 53 ERAAAA UIIAAA AAAAxx +4417 5637 1 1 7 17 17 417 417 4417 4417 34 35 XNAAAA VIIAAA HHHHxx +7871 5638 1 3 1 11 71 871 1871 2871 7871 142 143 TQAAAA WIIAAA OOOOxx +5622 5639 0 2 2 2 22 622 1622 622 5622 44 45 GIAAAA XIIAAA VVVVxx +3010 5640 0 2 0 10 10 10 1010 3010 3010 20 21 ULAAAA YIIAAA AAAAxx +3407 5641 1 3 7 
7 7 407 1407 3407 3407 14 15 BBAAAA ZIIAAA HHHHxx +1274 5642 0 2 4 14 74 274 1274 1274 1274 148 149 AXAAAA AJIAAA OOOOxx +2828 5643 0 0 8 8 28 828 828 2828 2828 56 57 UEAAAA BJIAAA VVVVxx +3427 5644 1 3 7 7 27 427 1427 3427 3427 54 55 VBAAAA CJIAAA AAAAxx +612 5645 0 0 2 12 12 612 612 612 612 24 25 OXAAAA DJIAAA HHHHxx +8729 5646 1 1 9 9 29 729 729 3729 8729 58 59 TXAAAA EJIAAA OOOOxx +1239 5647 1 3 9 19 39 239 1239 1239 1239 78 79 RVAAAA FJIAAA VVVVxx +8990 5648 0 2 0 10 90 990 990 3990 8990 180 181 UHAAAA GJIAAA AAAAxx +5609 5649 1 1 9 9 9 609 1609 609 5609 18 19 THAAAA HJIAAA HHHHxx +4441 5650 1 1 1 1 41 441 441 4441 4441 82 83 VOAAAA IJIAAA OOOOxx +9078 5651 0 2 8 18 78 78 1078 4078 9078 156 157 ELAAAA JJIAAA VVVVxx +6699 5652 1 3 9 19 99 699 699 1699 6699 198 199 RXAAAA KJIAAA AAAAxx +8390 5653 0 2 0 10 90 390 390 3390 8390 180 181 SKAAAA LJIAAA HHHHxx +5455 5654 1 3 5 15 55 455 1455 455 5455 110 111 VBAAAA MJIAAA OOOOxx +7537 5655 1 1 7 17 37 537 1537 2537 7537 74 75 XDAAAA NJIAAA VVVVxx +4669 5656 1 1 9 9 69 669 669 4669 4669 138 139 PXAAAA OJIAAA AAAAxx +5534 5657 0 2 4 14 34 534 1534 534 5534 68 69 WEAAAA PJIAAA HHHHxx +1920 5658 0 0 0 0 20 920 1920 1920 1920 40 41 WVAAAA QJIAAA OOOOxx +9465 5659 1 1 5 5 65 465 1465 4465 9465 130 131 BAAAAA RJIAAA VVVVxx +4897 5660 1 1 7 17 97 897 897 4897 4897 194 195 JGAAAA SJIAAA AAAAxx +1990 5661 0 2 0 10 90 990 1990 1990 1990 180 181 OYAAAA TJIAAA HHHHxx +7148 5662 0 0 8 8 48 148 1148 2148 7148 96 97 YOAAAA UJIAAA OOOOxx +533 5663 1 1 3 13 33 533 533 533 533 66 67 NUAAAA VJIAAA VVVVxx +4339 5664 1 3 9 19 39 339 339 4339 4339 78 79 XKAAAA WJIAAA AAAAxx +6450 5665 0 2 0 10 50 450 450 1450 6450 100 101 COAAAA XJIAAA HHHHxx +9627 5666 1 3 7 7 27 627 1627 4627 9627 54 55 HGAAAA YJIAAA OOOOxx +5539 5667 1 3 9 19 39 539 1539 539 5539 78 79 BFAAAA ZJIAAA VVVVxx +6758 5668 0 2 8 18 58 758 758 1758 6758 116 117 YZAAAA AKIAAA AAAAxx +3435 5669 1 3 5 15 35 435 1435 3435 3435 70 71 DCAAAA BKIAAA HHHHxx +4350 5670 0 2 0 10 50 350 350 4350 4350 100 101 ILAAAA CKIAAA OOOOxx +9088 5671 0 0 8 8 88 88 1088 4088 9088 176 177 OLAAAA DKIAAA VVVVxx +6368 5672 0 0 8 8 68 368 368 1368 6368 136 137 YKAAAA EKIAAA AAAAxx +6337 5673 1 1 7 17 37 337 337 1337 6337 74 75 TJAAAA FKIAAA HHHHxx +4361 5674 1 1 1 1 61 361 361 4361 4361 122 123 TLAAAA GKIAAA OOOOxx +1719 5675 1 3 9 19 19 719 1719 1719 1719 38 39 DOAAAA HKIAAA VVVVxx +3109 5676 1 1 9 9 9 109 1109 3109 3109 18 19 PPAAAA IKIAAA AAAAxx +7135 5677 1 3 5 15 35 135 1135 2135 7135 70 71 LOAAAA JKIAAA HHHHxx +1964 5678 0 0 4 4 64 964 1964 1964 1964 128 129 OXAAAA KKIAAA OOOOxx +3 5679 1 3 3 3 3 3 3 3 3 6 7 DAAAAA LKIAAA VVVVxx +1868 5680 0 0 8 8 68 868 1868 1868 1868 136 137 WTAAAA MKIAAA AAAAxx +5182 5681 0 2 2 2 82 182 1182 182 5182 164 165 IRAAAA NKIAAA HHHHxx +7567 5682 1 3 7 7 67 567 1567 2567 7567 134 135 BFAAAA OKIAAA OOOOxx +3676 5683 0 0 6 16 76 676 1676 3676 3676 152 153 KLAAAA PKIAAA VVVVxx +9382 5684 0 2 2 2 82 382 1382 4382 9382 164 165 WWAAAA QKIAAA AAAAxx +8645 5685 1 1 5 5 45 645 645 3645 8645 90 91 NUAAAA RKIAAA HHHHxx +2018 5686 0 2 8 18 18 18 18 2018 2018 36 37 QZAAAA SKIAAA OOOOxx +217 5687 1 1 7 17 17 217 217 217 217 34 35 JIAAAA TKIAAA VVVVxx +6793 5688 1 1 3 13 93 793 793 1793 6793 186 187 HBAAAA UKIAAA AAAAxx +7280 5689 0 0 0 0 80 280 1280 2280 7280 160 161 AUAAAA VKIAAA HHHHxx +2168 5690 0 0 8 8 68 168 168 2168 2168 136 137 KFAAAA WKIAAA OOOOxx +5259 5691 1 3 9 19 59 259 1259 259 5259 118 119 HUAAAA XKIAAA VVVVxx +6019 5692 1 3 9 19 19 19 19 1019 6019 38 39 NXAAAA YKIAAA AAAAxx +877 5693 1 1 7 17 
77 877 877 877 877 154 155 THAAAA ZKIAAA HHHHxx +4961 5694 1 1 1 1 61 961 961 4961 4961 122 123 VIAAAA ALIAAA OOOOxx +1873 5695 1 1 3 13 73 873 1873 1873 1873 146 147 BUAAAA BLIAAA VVVVxx +13 5696 1 1 3 13 13 13 13 13 13 26 27 NAAAAA CLIAAA AAAAxx +1537 5697 1 1 7 17 37 537 1537 1537 1537 74 75 DHAAAA DLIAAA HHHHxx +3129 5698 1 1 9 9 29 129 1129 3129 3129 58 59 JQAAAA ELIAAA OOOOxx +6473 5699 1 1 3 13 73 473 473 1473 6473 146 147 ZOAAAA FLIAAA VVVVxx +7865 5700 1 1 5 5 65 865 1865 2865 7865 130 131 NQAAAA GLIAAA AAAAxx +7822 5701 0 2 2 2 22 822 1822 2822 7822 44 45 WOAAAA HLIAAA HHHHxx +239 5702 1 3 9 19 39 239 239 239 239 78 79 FJAAAA ILIAAA OOOOxx +2062 5703 0 2 2 2 62 62 62 2062 2062 124 125 IBAAAA JLIAAA VVVVxx +762 5704 0 2 2 2 62 762 762 762 762 124 125 IDAAAA KLIAAA AAAAxx +3764 5705 0 0 4 4 64 764 1764 3764 3764 128 129 UOAAAA LLIAAA HHHHxx +465 5706 1 1 5 5 65 465 465 465 465 130 131 XRAAAA MLIAAA OOOOxx +2587 5707 1 3 7 7 87 587 587 2587 2587 174 175 NVAAAA NLIAAA VVVVxx +8402 5708 0 2 2 2 2 402 402 3402 8402 4 5 ELAAAA OLIAAA AAAAxx +1055 5709 1 3 5 15 55 55 1055 1055 1055 110 111 POAAAA PLIAAA HHHHxx +3072 5710 0 0 2 12 72 72 1072 3072 3072 144 145 EOAAAA QLIAAA OOOOxx +7359 5711 1 3 9 19 59 359 1359 2359 7359 118 119 BXAAAA RLIAAA VVVVxx +6558 5712 0 2 8 18 58 558 558 1558 6558 116 117 GSAAAA SLIAAA AAAAxx +48 5713 0 0 8 8 48 48 48 48 48 96 97 WBAAAA TLIAAA HHHHxx +5382 5714 0 2 2 2 82 382 1382 382 5382 164 165 AZAAAA ULIAAA OOOOxx +947 5715 1 3 7 7 47 947 947 947 947 94 95 LKAAAA VLIAAA VVVVxx +2644 5716 0 0 4 4 44 644 644 2644 2644 88 89 SXAAAA WLIAAA AAAAxx +7516 5717 0 0 6 16 16 516 1516 2516 7516 32 33 CDAAAA XLIAAA HHHHxx +2362 5718 0 2 2 2 62 362 362 2362 2362 124 125 WMAAAA YLIAAA OOOOxx +839 5719 1 3 9 19 39 839 839 839 839 78 79 HGAAAA ZLIAAA VVVVxx +2216 5720 0 0 6 16 16 216 216 2216 2216 32 33 GHAAAA AMIAAA AAAAxx +7673 5721 1 1 3 13 73 673 1673 2673 7673 146 147 DJAAAA BMIAAA HHHHxx +8173 5722 1 1 3 13 73 173 173 3173 8173 146 147 JCAAAA CMIAAA OOOOxx +1630 5723 0 2 0 10 30 630 1630 1630 1630 60 61 SKAAAA DMIAAA VVVVxx +9057 5724 1 1 7 17 57 57 1057 4057 9057 114 115 JKAAAA EMIAAA AAAAxx +4392 5725 0 0 2 12 92 392 392 4392 4392 184 185 YMAAAA FMIAAA HHHHxx +3695 5726 1 3 5 15 95 695 1695 3695 3695 190 191 DMAAAA GMIAAA OOOOxx +5751 5727 1 3 1 11 51 751 1751 751 5751 102 103 FNAAAA HMIAAA VVVVxx +5745 5728 1 1 5 5 45 745 1745 745 5745 90 91 ZMAAAA IMIAAA AAAAxx +7945 5729 1 1 5 5 45 945 1945 2945 7945 90 91 PTAAAA JMIAAA HHHHxx +5174 5730 0 2 4 14 74 174 1174 174 5174 148 149 ARAAAA KMIAAA OOOOxx +3829 5731 1 1 9 9 29 829 1829 3829 3829 58 59 HRAAAA LMIAAA VVVVxx +3317 5732 1 1 7 17 17 317 1317 3317 3317 34 35 PXAAAA MMIAAA AAAAxx +4253 5733 1 1 3 13 53 253 253 4253 4253 106 107 PHAAAA NMIAAA HHHHxx +1291 5734 1 3 1 11 91 291 1291 1291 1291 182 183 RXAAAA OMIAAA OOOOxx +3266 5735 0 2 6 6 66 266 1266 3266 3266 132 133 QVAAAA PMIAAA VVVVxx +2939 5736 1 3 9 19 39 939 939 2939 2939 78 79 BJAAAA QMIAAA AAAAxx +2755 5737 1 3 5 15 55 755 755 2755 2755 110 111 ZBAAAA RMIAAA HHHHxx +6844 5738 0 0 4 4 44 844 844 1844 6844 88 89 GDAAAA SMIAAA OOOOxx +8594 5739 0 2 4 14 94 594 594 3594 8594 188 189 OSAAAA TMIAAA VVVVxx +704 5740 0 0 4 4 4 704 704 704 704 8 9 CBAAAA UMIAAA AAAAxx +1681 5741 1 1 1 1 81 681 1681 1681 1681 162 163 RMAAAA VMIAAA HHHHxx +364 5742 0 0 4 4 64 364 364 364 364 128 129 AOAAAA WMIAAA OOOOxx +2928 5743 0 0 8 8 28 928 928 2928 2928 56 57 QIAAAA XMIAAA VVVVxx +117 5744 1 1 7 17 17 117 117 117 117 34 35 NEAAAA YMIAAA AAAAxx +96 5745 0 0 6 16 96 96 96 96 96 
192 193 SDAAAA ZMIAAA HHHHxx +7796 5746 0 0 6 16 96 796 1796 2796 7796 192 193 WNAAAA ANIAAA OOOOxx +3101 5747 1 1 1 1 1 101 1101 3101 3101 2 3 HPAAAA BNIAAA VVVVxx +3397 5748 1 1 7 17 97 397 1397 3397 3397 194 195 RAAAAA CNIAAA AAAAxx +1605 5749 1 1 5 5 5 605 1605 1605 1605 10 11 TJAAAA DNIAAA HHHHxx +4881 5750 1 1 1 1 81 881 881 4881 4881 162 163 TFAAAA ENIAAA OOOOxx +4521 5751 1 1 1 1 21 521 521 4521 4521 42 43 XRAAAA FNIAAA VVVVxx +6430 5752 0 2 0 10 30 430 430 1430 6430 60 61 INAAAA GNIAAA AAAAxx +282 5753 0 2 2 2 82 282 282 282 282 164 165 WKAAAA HNIAAA HHHHxx +9645 5754 1 1 5 5 45 645 1645 4645 9645 90 91 ZGAAAA INIAAA OOOOxx +8946 5755 0 2 6 6 46 946 946 3946 8946 92 93 CGAAAA JNIAAA VVVVxx +5064 5756 0 0 4 4 64 64 1064 64 5064 128 129 UMAAAA KNIAAA AAAAxx +7470 5757 0 2 0 10 70 470 1470 2470 7470 140 141 IBAAAA LNIAAA HHHHxx +5886 5758 0 2 6 6 86 886 1886 886 5886 172 173 KSAAAA MNIAAA OOOOxx +6280 5759 0 0 0 0 80 280 280 1280 6280 160 161 OHAAAA NNIAAA VVVVxx +5247 5760 1 3 7 7 47 247 1247 247 5247 94 95 VTAAAA ONIAAA AAAAxx +412 5761 0 0 2 12 12 412 412 412 412 24 25 WPAAAA PNIAAA HHHHxx +5342 5762 0 2 2 2 42 342 1342 342 5342 84 85 MXAAAA QNIAAA OOOOxx +2271 5763 1 3 1 11 71 271 271 2271 2271 142 143 JJAAAA RNIAAA VVVVxx +849 5764 1 1 9 9 49 849 849 849 849 98 99 RGAAAA SNIAAA AAAAxx +1885 5765 1 1 5 5 85 885 1885 1885 1885 170 171 NUAAAA TNIAAA HHHHxx +5620 5766 0 0 0 0 20 620 1620 620 5620 40 41 EIAAAA UNIAAA OOOOxx +7079 5767 1 3 9 19 79 79 1079 2079 7079 158 159 HMAAAA VNIAAA VVVVxx +5819 5768 1 3 9 19 19 819 1819 819 5819 38 39 VPAAAA WNIAAA AAAAxx +7497 5769 1 1 7 17 97 497 1497 2497 7497 194 195 JCAAAA XNIAAA HHHHxx +5993 5770 1 1 3 13 93 993 1993 993 5993 186 187 NWAAAA YNIAAA OOOOxx +3739 5771 1 3 9 19 39 739 1739 3739 3739 78 79 VNAAAA ZNIAAA VVVVxx +6296 5772 0 0 6 16 96 296 296 1296 6296 192 193 EIAAAA AOIAAA AAAAxx +2716 5773 0 0 6 16 16 716 716 2716 2716 32 33 MAAAAA BOIAAA HHHHxx +1130 5774 0 2 0 10 30 130 1130 1130 1130 60 61 MRAAAA COIAAA OOOOxx +5593 5775 1 1 3 13 93 593 1593 593 5593 186 187 DHAAAA DOIAAA VVVVxx +6972 5776 0 0 2 12 72 972 972 1972 6972 144 145 EIAAAA EOIAAA AAAAxx +8360 5777 0 0 0 0 60 360 360 3360 8360 120 121 OJAAAA FOIAAA HHHHxx +6448 5778 0 0 8 8 48 448 448 1448 6448 96 97 AOAAAA GOIAAA OOOOxx +3689 5779 1 1 9 9 89 689 1689 3689 3689 178 179 XLAAAA HOIAAA VVVVxx +7951 5780 1 3 1 11 51 951 1951 2951 7951 102 103 VTAAAA IOIAAA AAAAxx +2974 5781 0 2 4 14 74 974 974 2974 2974 148 149 KKAAAA JOIAAA HHHHxx +6600 5782 0 0 0 0 0 600 600 1600 6600 0 1 WTAAAA KOIAAA OOOOxx +4662 5783 0 2 2 2 62 662 662 4662 4662 124 125 IXAAAA LOIAAA VVVVxx +4765 5784 1 1 5 5 65 765 765 4765 4765 130 131 HBAAAA MOIAAA AAAAxx +355 5785 1 3 5 15 55 355 355 355 355 110 111 RNAAAA NOIAAA HHHHxx +6228 5786 0 0 8 8 28 228 228 1228 6228 56 57 OFAAAA OOIAAA OOOOxx +964 5787 0 0 4 4 64 964 964 964 964 128 129 CLAAAA POIAAA VVVVxx +3082 5788 0 2 2 2 82 82 1082 3082 3082 164 165 OOAAAA QOIAAA AAAAxx +7028 5789 0 0 8 8 28 28 1028 2028 7028 56 57 IKAAAA ROIAAA HHHHxx +4505 5790 1 1 5 5 5 505 505 4505 4505 10 11 HRAAAA SOIAAA OOOOxx +8961 5791 1 1 1 1 61 961 961 3961 8961 122 123 RGAAAA TOIAAA VVVVxx +9571 5792 1 3 1 11 71 571 1571 4571 9571 142 143 DEAAAA UOIAAA AAAAxx +9394 5793 0 2 4 14 94 394 1394 4394 9394 188 189 IXAAAA VOIAAA HHHHxx +4245 5794 1 1 5 5 45 245 245 4245 4245 90 91 HHAAAA WOIAAA OOOOxx +7560 5795 0 0 0 0 60 560 1560 2560 7560 120 121 UEAAAA XOIAAA VVVVxx +2907 5796 1 3 7 7 7 907 907 2907 2907 14 15 VHAAAA YOIAAA AAAAxx +7817 5797 1 1 7 17 17 817 1817 2817 
7817 34 35 ROAAAA ZOIAAA HHHHxx +5408 5798 0 0 8 8 8 408 1408 408 5408 16 17 AAAAAA APIAAA OOOOxx +8092 5799 0 0 2 12 92 92 92 3092 8092 184 185 GZAAAA BPIAAA VVVVxx +1309 5800 1 1 9 9 9 309 1309 1309 1309 18 19 JYAAAA CPIAAA AAAAxx +6673 5801 1 1 3 13 73 673 673 1673 6673 146 147 RWAAAA DPIAAA HHHHxx +1245 5802 1 1 5 5 45 245 1245 1245 1245 90 91 XVAAAA EPIAAA OOOOxx +6790 5803 0 2 0 10 90 790 790 1790 6790 180 181 EBAAAA FPIAAA VVVVxx +8380 5804 0 0 0 0 80 380 380 3380 8380 160 161 IKAAAA GPIAAA AAAAxx +5786 5805 0 2 6 6 86 786 1786 786 5786 172 173 OOAAAA HPIAAA HHHHxx +9590 5806 0 2 0 10 90 590 1590 4590 9590 180 181 WEAAAA IPIAAA OOOOxx +5763 5807 1 3 3 3 63 763 1763 763 5763 126 127 RNAAAA JPIAAA VVVVxx +1345 5808 1 1 5 5 45 345 1345 1345 1345 90 91 TZAAAA KPIAAA AAAAxx +3480 5809 0 0 0 0 80 480 1480 3480 3480 160 161 WDAAAA LPIAAA HHHHxx +7864 5810 0 0 4 4 64 864 1864 2864 7864 128 129 MQAAAA MPIAAA OOOOxx +4853 5811 1 1 3 13 53 853 853 4853 4853 106 107 REAAAA NPIAAA VVVVxx +1445 5812 1 1 5 5 45 445 1445 1445 1445 90 91 PDAAAA OPIAAA AAAAxx +170 5813 0 2 0 10 70 170 170 170 170 140 141 OGAAAA PPIAAA HHHHxx +7348 5814 0 0 8 8 48 348 1348 2348 7348 96 97 QWAAAA QPIAAA OOOOxx +3920 5815 0 0 0 0 20 920 1920 3920 3920 40 41 UUAAAA RPIAAA VVVVxx +3307 5816 1 3 7 7 7 307 1307 3307 3307 14 15 FXAAAA SPIAAA AAAAxx +4584 5817 0 0 4 4 84 584 584 4584 4584 168 169 IUAAAA TPIAAA HHHHxx +3344 5818 0 0 4 4 44 344 1344 3344 3344 88 89 QYAAAA UPIAAA OOOOxx +4360 5819 0 0 0 0 60 360 360 4360 4360 120 121 SLAAAA VPIAAA VVVVxx +8757 5820 1 1 7 17 57 757 757 3757 8757 114 115 VYAAAA WPIAAA AAAAxx +4315 5821 1 3 5 15 15 315 315 4315 4315 30 31 ZJAAAA XPIAAA HHHHxx +5243 5822 1 3 3 3 43 243 1243 243 5243 86 87 RTAAAA YPIAAA OOOOxx +8550 5823 0 2 0 10 50 550 550 3550 8550 100 101 WQAAAA ZPIAAA VVVVxx +159 5824 1 3 9 19 59 159 159 159 159 118 119 DGAAAA AQIAAA AAAAxx +4710 5825 0 2 0 10 10 710 710 4710 4710 20 21 EZAAAA BQIAAA HHHHxx +7179 5826 1 3 9 19 79 179 1179 2179 7179 158 159 DQAAAA CQIAAA OOOOxx +2509 5827 1 1 9 9 9 509 509 2509 2509 18 19 NSAAAA DQIAAA VVVVxx +6981 5828 1 1 1 1 81 981 981 1981 6981 162 163 NIAAAA EQIAAA AAAAxx +5060 5829 0 0 0 0 60 60 1060 60 5060 120 121 QMAAAA FQIAAA HHHHxx +5601 5830 1 1 1 1 1 601 1601 601 5601 2 3 LHAAAA GQIAAA OOOOxx +703 5831 1 3 3 3 3 703 703 703 703 6 7 BBAAAA HQIAAA VVVVxx +8719 5832 1 3 9 19 19 719 719 3719 8719 38 39 JXAAAA IQIAAA AAAAxx +1570 5833 0 2 0 10 70 570 1570 1570 1570 140 141 KIAAAA JQIAAA HHHHxx +1036 5834 0 0 6 16 36 36 1036 1036 1036 72 73 WNAAAA KQIAAA OOOOxx +6703 5835 1 3 3 3 3 703 703 1703 6703 6 7 VXAAAA LQIAAA VVVVxx +252 5836 0 0 2 12 52 252 252 252 252 104 105 SJAAAA MQIAAA AAAAxx +631 5837 1 3 1 11 31 631 631 631 631 62 63 HYAAAA NQIAAA HHHHxx +5098 5838 0 2 8 18 98 98 1098 98 5098 196 197 COAAAA OQIAAA OOOOxx +8346 5839 0 2 6 6 46 346 346 3346 8346 92 93 AJAAAA PQIAAA VVVVxx +4910 5840 0 2 0 10 10 910 910 4910 4910 20 21 WGAAAA QQIAAA AAAAxx +559 5841 1 3 9 19 59 559 559 559 559 118 119 NVAAAA RQIAAA HHHHxx +1477 5842 1 1 7 17 77 477 1477 1477 1477 154 155 VEAAAA SQIAAA OOOOxx +5115 5843 1 3 5 15 15 115 1115 115 5115 30 31 TOAAAA TQIAAA VVVVxx +8784 5844 0 0 4 4 84 784 784 3784 8784 168 169 WZAAAA UQIAAA AAAAxx +4422 5845 0 2 2 2 22 422 422 4422 4422 44 45 COAAAA VQIAAA HHHHxx +2702 5846 0 2 2 2 2 702 702 2702 2702 4 5 YZAAAA WQIAAA OOOOxx +9599 5847 1 3 9 19 99 599 1599 4599 9599 198 199 FFAAAA XQIAAA VVVVxx +2463 5848 1 3 3 3 63 463 463 2463 2463 126 127 TQAAAA YQIAAA AAAAxx +498 5849 0 2 8 18 98 498 498 498 498 196 197 ETAAAA 
ZQIAAA HHHHxx +494 5850 0 2 4 14 94 494 494 494 494 188 189 ATAAAA ARIAAA OOOOxx +8632 5851 0 0 2 12 32 632 632 3632 8632 64 65 AUAAAA BRIAAA VVVVxx +3449 5852 1 1 9 9 49 449 1449 3449 3449 98 99 RCAAAA CRIAAA AAAAxx +5888 5853 0 0 8 8 88 888 1888 888 5888 176 177 MSAAAA DRIAAA HHHHxx +2211 5854 1 3 1 11 11 211 211 2211 2211 22 23 BHAAAA ERIAAA OOOOxx +2835 5855 1 3 5 15 35 835 835 2835 2835 70 71 BFAAAA FRIAAA VVVVxx +4196 5856 0 0 6 16 96 196 196 4196 4196 192 193 KFAAAA GRIAAA AAAAxx +2177 5857 1 1 7 17 77 177 177 2177 2177 154 155 TFAAAA HRIAAA HHHHxx +1959 5858 1 3 9 19 59 959 1959 1959 1959 118 119 JXAAAA IRIAAA OOOOxx +5172 5859 0 0 2 12 72 172 1172 172 5172 144 145 YQAAAA JRIAAA VVVVxx +7898 5860 0 2 8 18 98 898 1898 2898 7898 196 197 URAAAA KRIAAA AAAAxx +5729 5861 1 1 9 9 29 729 1729 729 5729 58 59 JMAAAA LRIAAA HHHHxx +469 5862 1 1 9 9 69 469 469 469 469 138 139 BSAAAA MRIAAA OOOOxx +4456 5863 0 0 6 16 56 456 456 4456 4456 112 113 KPAAAA NRIAAA VVVVxx +3578 5864 0 2 8 18 78 578 1578 3578 3578 156 157 QHAAAA ORIAAA AAAAxx +8623 5865 1 3 3 3 23 623 623 3623 8623 46 47 RTAAAA PRIAAA HHHHxx +6749 5866 1 1 9 9 49 749 749 1749 6749 98 99 PZAAAA QRIAAA OOOOxx +6735 5867 1 3 5 15 35 735 735 1735 6735 70 71 BZAAAA RRIAAA VVVVxx +5197 5868 1 1 7 17 97 197 1197 197 5197 194 195 XRAAAA SRIAAA AAAAxx +2067 5869 1 3 7 7 67 67 67 2067 2067 134 135 NBAAAA TRIAAA HHHHxx +5600 5870 0 0 0 0 0 600 1600 600 5600 0 1 KHAAAA URIAAA OOOOxx +7741 5871 1 1 1 1 41 741 1741 2741 7741 82 83 TLAAAA VRIAAA VVVVxx +9925 5872 1 1 5 5 25 925 1925 4925 9925 50 51 TRAAAA WRIAAA AAAAxx +9685 5873 1 1 5 5 85 685 1685 4685 9685 170 171 NIAAAA XRIAAA HHHHxx +7622 5874 0 2 2 2 22 622 1622 2622 7622 44 45 EHAAAA YRIAAA OOOOxx +6859 5875 1 3 9 19 59 859 859 1859 6859 118 119 VDAAAA ZRIAAA VVVVxx +3094 5876 0 2 4 14 94 94 1094 3094 3094 188 189 APAAAA ASIAAA AAAAxx +2628 5877 0 0 8 8 28 628 628 2628 2628 56 57 CXAAAA BSIAAA HHHHxx +40 5878 0 0 0 0 40 40 40 40 40 80 81 OBAAAA CSIAAA OOOOxx +1644 5879 0 0 4 4 44 644 1644 1644 1644 88 89 GLAAAA DSIAAA VVVVxx +588 5880 0 0 8 8 88 588 588 588 588 176 177 QWAAAA ESIAAA AAAAxx +7522 5881 0 2 2 2 22 522 1522 2522 7522 44 45 IDAAAA FSIAAA HHHHxx +162 5882 0 2 2 2 62 162 162 162 162 124 125 GGAAAA GSIAAA OOOOxx +3610 5883 0 2 0 10 10 610 1610 3610 3610 20 21 WIAAAA HSIAAA VVVVxx +3561 5884 1 1 1 1 61 561 1561 3561 3561 122 123 ZGAAAA ISIAAA AAAAxx +8185 5885 1 1 5 5 85 185 185 3185 8185 170 171 VCAAAA JSIAAA HHHHxx +7237 5886 1 1 7 17 37 237 1237 2237 7237 74 75 JSAAAA KSIAAA OOOOxx +4592 5887 0 0 2 12 92 592 592 4592 4592 184 185 QUAAAA LSIAAA VVVVxx +7082 5888 0 2 2 2 82 82 1082 2082 7082 164 165 KMAAAA MSIAAA AAAAxx +4719 5889 1 3 9 19 19 719 719 4719 4719 38 39 NZAAAA NSIAAA HHHHxx +3879 5890 1 3 9 19 79 879 1879 3879 3879 158 159 FTAAAA OSIAAA OOOOxx +1662 5891 0 2 2 2 62 662 1662 1662 1662 124 125 YLAAAA PSIAAA VVVVxx +3995 5892 1 3 5 15 95 995 1995 3995 3995 190 191 RXAAAA QSIAAA AAAAxx +5828 5893 0 0 8 8 28 828 1828 828 5828 56 57 EQAAAA RSIAAA HHHHxx +4197 5894 1 1 7 17 97 197 197 4197 4197 194 195 LFAAAA SSIAAA OOOOxx +5146 5895 0 2 6 6 46 146 1146 146 5146 92 93 YPAAAA TSIAAA VVVVxx +753 5896 1 1 3 13 53 753 753 753 753 106 107 ZCAAAA USIAAA AAAAxx +7064 5897 0 0 4 4 64 64 1064 2064 7064 128 129 SLAAAA VSIAAA HHHHxx +1312 5898 0 0 2 12 12 312 1312 1312 1312 24 25 MYAAAA WSIAAA OOOOxx +5573 5899 1 1 3 13 73 573 1573 573 5573 146 147 JGAAAA XSIAAA VVVVxx +7634 5900 0 2 4 14 34 634 1634 2634 7634 68 69 QHAAAA YSIAAA AAAAxx +2459 5901 1 3 9 19 59 459 459 2459 2459 118 119 
PQAAAA ZSIAAA HHHHxx +8636 5902 0 0 6 16 36 636 636 3636 8636 72 73 EUAAAA ATIAAA OOOOxx +5318 5903 0 2 8 18 18 318 1318 318 5318 36 37 OWAAAA BTIAAA VVVVxx +1064 5904 0 0 4 4 64 64 1064 1064 1064 128 129 YOAAAA CTIAAA AAAAxx +9779 5905 1 3 9 19 79 779 1779 4779 9779 158 159 DMAAAA DTIAAA HHHHxx +6512 5906 0 0 2 12 12 512 512 1512 6512 24 25 MQAAAA ETIAAA OOOOxx +3572 5907 0 0 2 12 72 572 1572 3572 3572 144 145 KHAAAA FTIAAA VVVVxx +816 5908 0 0 6 16 16 816 816 816 816 32 33 KFAAAA GTIAAA AAAAxx +3978 5909 0 2 8 18 78 978 1978 3978 3978 156 157 AXAAAA HTIAAA HHHHxx +5390 5910 0 2 0 10 90 390 1390 390 5390 180 181 IZAAAA ITIAAA OOOOxx +4685 5911 1 1 5 5 85 685 685 4685 4685 170 171 FYAAAA JTIAAA VVVVxx +3003 5912 1 3 3 3 3 3 1003 3003 3003 6 7 NLAAAA KTIAAA AAAAxx +2638 5913 0 2 8 18 38 638 638 2638 2638 76 77 MXAAAA LTIAAA HHHHxx +9716 5914 0 0 6 16 16 716 1716 4716 9716 32 33 SJAAAA MTIAAA OOOOxx +9598 5915 0 2 8 18 98 598 1598 4598 9598 196 197 EFAAAA NTIAAA VVVVxx +9501 5916 1 1 1 1 1 501 1501 4501 9501 2 3 LBAAAA OTIAAA AAAAxx +1704 5917 0 0 4 4 4 704 1704 1704 1704 8 9 ONAAAA PTIAAA HHHHxx +8609 5918 1 1 9 9 9 609 609 3609 8609 18 19 DTAAAA QTIAAA OOOOxx +5211 5919 1 3 1 11 11 211 1211 211 5211 22 23 LSAAAA RTIAAA VVVVxx +3605 5920 1 1 5 5 5 605 1605 3605 3605 10 11 RIAAAA STIAAA AAAAxx +8730 5921 0 2 0 10 30 730 730 3730 8730 60 61 UXAAAA TTIAAA HHHHxx +4208 5922 0 0 8 8 8 208 208 4208 4208 16 17 WFAAAA UTIAAA OOOOxx +7784 5923 0 0 4 4 84 784 1784 2784 7784 168 169 KNAAAA VTIAAA VVVVxx +7501 5924 1 1 1 1 1 501 1501 2501 7501 2 3 NCAAAA WTIAAA AAAAxx +7862 5925 0 2 2 2 62 862 1862 2862 7862 124 125 KQAAAA XTIAAA HHHHxx +8922 5926 0 2 2 2 22 922 922 3922 8922 44 45 EFAAAA YTIAAA OOOOxx +3857 5927 1 1 7 17 57 857 1857 3857 3857 114 115 JSAAAA ZTIAAA VVVVxx +6393 5928 1 1 3 13 93 393 393 1393 6393 186 187 XLAAAA AUIAAA AAAAxx +506 5929 0 2 6 6 6 506 506 506 506 12 13 MTAAAA BUIAAA HHHHxx +4232 5930 0 0 2 12 32 232 232 4232 4232 64 65 UGAAAA CUIAAA OOOOxx +8991 5931 1 3 1 11 91 991 991 3991 8991 182 183 VHAAAA DUIAAA VVVVxx +8578 5932 0 2 8 18 78 578 578 3578 8578 156 157 YRAAAA EUIAAA AAAAxx +3235 5933 1 3 5 15 35 235 1235 3235 3235 70 71 LUAAAA FUIAAA HHHHxx +963 5934 1 3 3 3 63 963 963 963 963 126 127 BLAAAA GUIAAA OOOOxx +113 5935 1 1 3 13 13 113 113 113 113 26 27 JEAAAA HUIAAA VVVVxx +8234 5936 0 2 4 14 34 234 234 3234 8234 68 69 SEAAAA IUIAAA AAAAxx +2613 5937 1 1 3 13 13 613 613 2613 2613 26 27 NWAAAA JUIAAA HHHHxx +5540 5938 0 0 0 0 40 540 1540 540 5540 80 81 CFAAAA KUIAAA OOOOxx +9727 5939 1 3 7 7 27 727 1727 4727 9727 54 55 DKAAAA LUIAAA VVVVxx +2229 5940 1 1 9 9 29 229 229 2229 2229 58 59 THAAAA MUIAAA AAAAxx +6242 5941 0 2 2 2 42 242 242 1242 6242 84 85 CGAAAA NUIAAA HHHHxx +2502 5942 0 2 2 2 2 502 502 2502 2502 4 5 GSAAAA OUIAAA OOOOxx +6212 5943 0 0 2 12 12 212 212 1212 6212 24 25 YEAAAA PUIAAA VVVVxx +3495 5944 1 3 5 15 95 495 1495 3495 3495 190 191 LEAAAA QUIAAA AAAAxx +2364 5945 0 0 4 4 64 364 364 2364 2364 128 129 YMAAAA RUIAAA HHHHxx +6777 5946 1 1 7 17 77 777 777 1777 6777 154 155 RAAAAA SUIAAA OOOOxx +9811 5947 1 3 1 11 11 811 1811 4811 9811 22 23 JNAAAA TUIAAA VVVVxx +1450 5948 0 2 0 10 50 450 1450 1450 1450 100 101 UDAAAA UUIAAA AAAAxx +5008 5949 0 0 8 8 8 8 1008 8 5008 16 17 QKAAAA VUIAAA HHHHxx +1318 5950 0 2 8 18 18 318 1318 1318 1318 36 37 SYAAAA WUIAAA OOOOxx +3373 5951 1 1 3 13 73 373 1373 3373 3373 146 147 TZAAAA XUIAAA VVVVxx +398 5952 0 2 8 18 98 398 398 398 398 196 197 IPAAAA YUIAAA AAAAxx +3804 5953 0 0 4 4 4 804 1804 3804 3804 8 9 IQAAAA ZUIAAA HHHHxx 
+9148 5954 0 0 8 8 48 148 1148 4148 9148 96 97 WNAAAA AVIAAA OOOOxx +4382 5955 0 2 2 2 82 382 382 4382 4382 164 165 OMAAAA BVIAAA VVVVxx +4026 5956 0 2 6 6 26 26 26 4026 4026 52 53 WYAAAA CVIAAA AAAAxx +7804 5957 0 0 4 4 4 804 1804 2804 7804 8 9 EOAAAA DVIAAA HHHHxx +6839 5958 1 3 9 19 39 839 839 1839 6839 78 79 BDAAAA EVIAAA OOOOxx +3756 5959 0 0 6 16 56 756 1756 3756 3756 112 113 MOAAAA FVIAAA VVVVxx +6734 5960 0 2 4 14 34 734 734 1734 6734 68 69 AZAAAA GVIAAA AAAAxx +2228 5961 0 0 8 8 28 228 228 2228 2228 56 57 SHAAAA HVIAAA HHHHxx +3273 5962 1 1 3 13 73 273 1273 3273 3273 146 147 XVAAAA IVIAAA OOOOxx +3708 5963 0 0 8 8 8 708 1708 3708 3708 16 17 QMAAAA JVIAAA VVVVxx +4320 5964 0 0 0 0 20 320 320 4320 4320 40 41 EKAAAA KVIAAA AAAAxx +74 5965 0 2 4 14 74 74 74 74 74 148 149 WCAAAA LVIAAA HHHHxx +2520 5966 0 0 0 0 20 520 520 2520 2520 40 41 YSAAAA MVIAAA OOOOxx +9619 5967 1 3 9 19 19 619 1619 4619 9619 38 39 ZFAAAA NVIAAA VVVVxx +1801 5968 1 1 1 1 1 801 1801 1801 1801 2 3 HRAAAA OVIAAA AAAAxx +6399 5969 1 3 9 19 99 399 399 1399 6399 198 199 DMAAAA PVIAAA HHHHxx +8313 5970 1 1 3 13 13 313 313 3313 8313 26 27 THAAAA QVIAAA OOOOxx +7003 5971 1 3 3 3 3 3 1003 2003 7003 6 7 JJAAAA RVIAAA VVVVxx +329 5972 1 1 9 9 29 329 329 329 329 58 59 RMAAAA SVIAAA AAAAxx +9090 5973 0 2 0 10 90 90 1090 4090 9090 180 181 QLAAAA TVIAAA HHHHxx +2299 5974 1 3 9 19 99 299 299 2299 2299 198 199 LKAAAA UVIAAA OOOOxx +3925 5975 1 1 5 5 25 925 1925 3925 3925 50 51 ZUAAAA VVIAAA VVVVxx +8145 5976 1 1 5 5 45 145 145 3145 8145 90 91 HBAAAA WVIAAA AAAAxx +8561 5977 1 1 1 1 61 561 561 3561 8561 122 123 HRAAAA XVIAAA HHHHxx +2797 5978 1 1 7 17 97 797 797 2797 2797 194 195 PDAAAA YVIAAA OOOOxx +1451 5979 1 3 1 11 51 451 1451 1451 1451 102 103 VDAAAA ZVIAAA VVVVxx +7977 5980 1 1 7 17 77 977 1977 2977 7977 154 155 VUAAAA AWIAAA AAAAxx +112 5981 0 0 2 12 12 112 112 112 112 24 25 IEAAAA BWIAAA HHHHxx +5265 5982 1 1 5 5 65 265 1265 265 5265 130 131 NUAAAA CWIAAA OOOOxx +3819 5983 1 3 9 19 19 819 1819 3819 3819 38 39 XQAAAA DWIAAA VVVVxx +3648 5984 0 0 8 8 48 648 1648 3648 3648 96 97 IKAAAA EWIAAA AAAAxx +6306 5985 0 2 6 6 6 306 306 1306 6306 12 13 OIAAAA FWIAAA HHHHxx +2385 5986 1 1 5 5 85 385 385 2385 2385 170 171 TNAAAA GWIAAA OOOOxx +9084 5987 0 0 4 4 84 84 1084 4084 9084 168 169 KLAAAA HWIAAA VVVVxx +4499 5988 1 3 9 19 99 499 499 4499 4499 198 199 BRAAAA IWIAAA AAAAxx +1154 5989 0 2 4 14 54 154 1154 1154 1154 108 109 KSAAAA JWIAAA HHHHxx +6800 5990 0 0 0 0 0 800 800 1800 6800 0 1 OBAAAA KWIAAA OOOOxx +8049 5991 1 1 9 9 49 49 49 3049 8049 98 99 PXAAAA LWIAAA VVVVxx +3733 5992 1 1 3 13 33 733 1733 3733 3733 66 67 PNAAAA MWIAAA AAAAxx +8496 5993 0 0 6 16 96 496 496 3496 8496 192 193 UOAAAA NWIAAA HHHHxx +9952 5994 0 0 2 12 52 952 1952 4952 9952 104 105 USAAAA OWIAAA OOOOxx +9792 5995 0 0 2 12 92 792 1792 4792 9792 184 185 QMAAAA PWIAAA VVVVxx +5081 5996 1 1 1 1 81 81 1081 81 5081 162 163 LNAAAA QWIAAA AAAAxx +7908 5997 0 0 8 8 8 908 1908 2908 7908 16 17 ESAAAA RWIAAA HHHHxx +5398 5998 0 2 8 18 98 398 1398 398 5398 196 197 QZAAAA SWIAAA OOOOxx +8423 5999 1 3 3 3 23 423 423 3423 8423 46 47 ZLAAAA TWIAAA VVVVxx +3362 6000 0 2 2 2 62 362 1362 3362 3362 124 125 IZAAAA UWIAAA AAAAxx +7767 6001 1 3 7 7 67 767 1767 2767 7767 134 135 TMAAAA VWIAAA HHHHxx +7063 6002 1 3 3 3 63 63 1063 2063 7063 126 127 RLAAAA WWIAAA OOOOxx +8350 6003 0 2 0 10 50 350 350 3350 8350 100 101 EJAAAA XWIAAA VVVVxx +6779 6004 1 3 9 19 79 779 779 1779 6779 158 159 TAAAAA YWIAAA AAAAxx +5742 6005 0 2 2 2 42 742 1742 742 5742 84 85 WMAAAA ZWIAAA HHHHxx +9045 6006 
1 1 5 5 45 45 1045 4045 9045 90 91 XJAAAA AXIAAA OOOOxx +8792 6007 0 0 2 12 92 792 792 3792 8792 184 185 EAAAAA BXIAAA VVVVxx +8160 6008 0 0 0 0 60 160 160 3160 8160 120 121 WBAAAA CXIAAA AAAAxx +3061 6009 1 1 1 1 61 61 1061 3061 3061 122 123 TNAAAA DXIAAA HHHHxx +4721 6010 1 1 1 1 21 721 721 4721 4721 42 43 PZAAAA EXIAAA OOOOxx +9817 6011 1 1 7 17 17 817 1817 4817 9817 34 35 PNAAAA FXIAAA VVVVxx +9257 6012 1 1 7 17 57 257 1257 4257 9257 114 115 BSAAAA GXIAAA AAAAxx +7779 6013 1 3 9 19 79 779 1779 2779 7779 158 159 FNAAAA HXIAAA HHHHxx +2663 6014 1 3 3 3 63 663 663 2663 2663 126 127 LYAAAA IXIAAA OOOOxx +3885 6015 1 1 5 5 85 885 1885 3885 3885 170 171 LTAAAA JXIAAA VVVVxx +9469 6016 1 1 9 9 69 469 1469 4469 9469 138 139 FAAAAA KXIAAA AAAAxx +6766 6017 0 2 6 6 66 766 766 1766 6766 132 133 GAAAAA LXIAAA HHHHxx +7173 6018 1 1 3 13 73 173 1173 2173 7173 146 147 XPAAAA MXIAAA OOOOxx +4709 6019 1 1 9 9 9 709 709 4709 4709 18 19 DZAAAA NXIAAA VVVVxx +4210 6020 0 2 0 10 10 210 210 4210 4210 20 21 YFAAAA OXIAAA AAAAxx +3715 6021 1 3 5 15 15 715 1715 3715 3715 30 31 XMAAAA PXIAAA HHHHxx +5089 6022 1 1 9 9 89 89 1089 89 5089 178 179 TNAAAA QXIAAA OOOOxx +1639 6023 1 3 9 19 39 639 1639 1639 1639 78 79 BLAAAA RXIAAA VVVVxx +5757 6024 1 1 7 17 57 757 1757 757 5757 114 115 LNAAAA SXIAAA AAAAxx +3545 6025 1 1 5 5 45 545 1545 3545 3545 90 91 JGAAAA TXIAAA HHHHxx +709 6026 1 1 9 9 9 709 709 709 709 18 19 HBAAAA UXIAAA OOOOxx +6519 6027 1 3 9 19 19 519 519 1519 6519 38 39 TQAAAA VXIAAA VVVVxx +4341 6028 1 1 1 1 41 341 341 4341 4341 82 83 ZKAAAA WXIAAA AAAAxx +2381 6029 1 1 1 1 81 381 381 2381 2381 162 163 PNAAAA XXIAAA HHHHxx +7215 6030 1 3 5 15 15 215 1215 2215 7215 30 31 NRAAAA YXIAAA OOOOxx +9323 6031 1 3 3 3 23 323 1323 4323 9323 46 47 PUAAAA ZXIAAA VVVVxx +3593 6032 1 1 3 13 93 593 1593 3593 3593 186 187 FIAAAA AYIAAA AAAAxx +3123 6033 1 3 3 3 23 123 1123 3123 3123 46 47 DQAAAA BYIAAA HHHHxx +8673 6034 1 1 3 13 73 673 673 3673 8673 146 147 PVAAAA CYIAAA OOOOxx +5094 6035 0 2 4 14 94 94 1094 94 5094 188 189 YNAAAA DYIAAA VVVVxx +6477 6036 1 1 7 17 77 477 477 1477 6477 154 155 DPAAAA EYIAAA AAAAxx +9734 6037 0 2 4 14 34 734 1734 4734 9734 68 69 KKAAAA FYIAAA HHHHxx +2998 6038 0 2 8 18 98 998 998 2998 2998 196 197 ILAAAA GYIAAA OOOOxx +7807 6039 1 3 7 7 7 807 1807 2807 7807 14 15 HOAAAA HYIAAA VVVVxx +5739 6040 1 3 9 19 39 739 1739 739 5739 78 79 TMAAAA IYIAAA AAAAxx +138 6041 0 2 8 18 38 138 138 138 138 76 77 IFAAAA JYIAAA HHHHxx +2403 6042 1 3 3 3 3 403 403 2403 2403 6 7 LOAAAA KYIAAA OOOOxx +2484 6043 0 0 4 4 84 484 484 2484 2484 168 169 ORAAAA LYIAAA VVVVxx +2805 6044 1 1 5 5 5 805 805 2805 2805 10 11 XDAAAA MYIAAA AAAAxx +5189 6045 1 1 9 9 89 189 1189 189 5189 178 179 PRAAAA NYIAAA HHHHxx +8336 6046 0 0 6 16 36 336 336 3336 8336 72 73 QIAAAA OYIAAA OOOOxx +5241 6047 1 1 1 1 41 241 1241 241 5241 82 83 PTAAAA PYIAAA VVVVxx +2612 6048 0 0 2 12 12 612 612 2612 2612 24 25 MWAAAA QYIAAA AAAAxx +2571 6049 1 3 1 11 71 571 571 2571 2571 142 143 XUAAAA RYIAAA HHHHxx +926 6050 0 2 6 6 26 926 926 926 926 52 53 QJAAAA SYIAAA OOOOxx +337 6051 1 1 7 17 37 337 337 337 337 74 75 ZMAAAA TYIAAA VVVVxx +2821 6052 1 1 1 1 21 821 821 2821 2821 42 43 NEAAAA UYIAAA AAAAxx +2658 6053 0 2 8 18 58 658 658 2658 2658 116 117 GYAAAA VYIAAA HHHHxx +9054 6054 0 2 4 14 54 54 1054 4054 9054 108 109 GKAAAA WYIAAA OOOOxx +5492 6055 0 0 2 12 92 492 1492 492 5492 184 185 GDAAAA XYIAAA VVVVxx +7313 6056 1 1 3 13 13 313 1313 2313 7313 26 27 HVAAAA YYIAAA AAAAxx +75 6057 1 3 5 15 75 75 75 75 75 150 151 XCAAAA ZYIAAA HHHHxx +5489 6058 1 1 9 9 
89 489 1489 489 5489 178 179 DDAAAA AZIAAA OOOOxx +8413 6059 1 1 3 13 13 413 413 3413 8413 26 27 PLAAAA BZIAAA VVVVxx +3693 6060 1 1 3 13 93 693 1693 3693 3693 186 187 BMAAAA CZIAAA AAAAxx +9820 6061 0 0 0 0 20 820 1820 4820 9820 40 41 SNAAAA DZIAAA HHHHxx +8157 6062 1 1 7 17 57 157 157 3157 8157 114 115 TBAAAA EZIAAA OOOOxx +4161 6063 1 1 1 1 61 161 161 4161 4161 122 123 BEAAAA FZIAAA VVVVxx +8339 6064 1 3 9 19 39 339 339 3339 8339 78 79 TIAAAA GZIAAA AAAAxx +4141 6065 1 1 1 1 41 141 141 4141 4141 82 83 HDAAAA HZIAAA HHHHxx +9001 6066 1 1 1 1 1 1 1001 4001 9001 2 3 FIAAAA IZIAAA OOOOxx +8247 6067 1 3 7 7 47 247 247 3247 8247 94 95 FFAAAA JZIAAA VVVVxx +1182 6068 0 2 2 2 82 182 1182 1182 1182 164 165 MTAAAA KZIAAA AAAAxx +9876 6069 0 0 6 16 76 876 1876 4876 9876 152 153 WPAAAA LZIAAA HHHHxx +4302 6070 0 2 2 2 2 302 302 4302 4302 4 5 MJAAAA MZIAAA OOOOxx +6674 6071 0 2 4 14 74 674 674 1674 6674 148 149 SWAAAA NZIAAA VVVVxx +4214 6072 0 2 4 14 14 214 214 4214 4214 28 29 CGAAAA OZIAAA AAAAxx +5584 6073 0 0 4 4 84 584 1584 584 5584 168 169 UGAAAA PZIAAA HHHHxx +265 6074 1 1 5 5 65 265 265 265 265 130 131 FKAAAA QZIAAA OOOOxx +9207 6075 1 3 7 7 7 207 1207 4207 9207 14 15 DQAAAA RZIAAA VVVVxx +9434 6076 0 2 4 14 34 434 1434 4434 9434 68 69 WYAAAA SZIAAA AAAAxx +2921 6077 1 1 1 1 21 921 921 2921 2921 42 43 JIAAAA TZIAAA HHHHxx +9355 6078 1 3 5 15 55 355 1355 4355 9355 110 111 VVAAAA UZIAAA OOOOxx +8538 6079 0 2 8 18 38 538 538 3538 8538 76 77 KQAAAA VZIAAA VVVVxx +4559 6080 1 3 9 19 59 559 559 4559 4559 118 119 JTAAAA WZIAAA AAAAxx +9175 6081 1 3 5 15 75 175 1175 4175 9175 150 151 XOAAAA XZIAAA HHHHxx +4489 6082 1 1 9 9 89 489 489 4489 4489 178 179 RQAAAA YZIAAA OOOOxx +1485 6083 1 1 5 5 85 485 1485 1485 1485 170 171 DFAAAA ZZIAAA VVVVxx +8853 6084 1 1 3 13 53 853 853 3853 8853 106 107 NCAAAA AAJAAA AAAAxx +9143 6085 1 3 3 3 43 143 1143 4143 9143 86 87 RNAAAA BAJAAA HHHHxx +9551 6086 1 3 1 11 51 551 1551 4551 9551 102 103 JDAAAA CAJAAA OOOOxx +49 6087 1 1 9 9 49 49 49 49 49 98 99 XBAAAA DAJAAA VVVVxx +8351 6088 1 3 1 11 51 351 351 3351 8351 102 103 FJAAAA EAJAAA AAAAxx +9748 6089 0 0 8 8 48 748 1748 4748 9748 96 97 YKAAAA FAJAAA HHHHxx +4536 6090 0 0 6 16 36 536 536 4536 4536 72 73 MSAAAA GAJAAA OOOOxx +930 6091 0 2 0 10 30 930 930 930 930 60 61 UJAAAA HAJAAA VVVVxx +2206 6092 0 2 6 6 6 206 206 2206 2206 12 13 WGAAAA IAJAAA AAAAxx +8004 6093 0 0 4 4 4 4 4 3004 8004 8 9 WVAAAA JAJAAA HHHHxx +219 6094 1 3 9 19 19 219 219 219 219 38 39 LIAAAA KAJAAA OOOOxx +2724 6095 0 0 4 4 24 724 724 2724 2724 48 49 UAAAAA LAJAAA VVVVxx +4868 6096 0 0 8 8 68 868 868 4868 4868 136 137 GFAAAA MAJAAA AAAAxx +5952 6097 0 0 2 12 52 952 1952 952 5952 104 105 YUAAAA NAJAAA HHHHxx +2094 6098 0 2 4 14 94 94 94 2094 2094 188 189 OCAAAA OAJAAA OOOOxx +5707 6099 1 3 7 7 7 707 1707 707 5707 14 15 NLAAAA PAJAAA VVVVxx +5200 6100 0 0 0 0 0 200 1200 200 5200 0 1 ASAAAA QAJAAA AAAAxx +967 6101 1 3 7 7 67 967 967 967 967 134 135 FLAAAA RAJAAA HHHHxx +1982 6102 0 2 2 2 82 982 1982 1982 1982 164 165 GYAAAA SAJAAA OOOOxx +3410 6103 0 2 0 10 10 410 1410 3410 3410 20 21 EBAAAA TAJAAA VVVVxx +174 6104 0 2 4 14 74 174 174 174 174 148 149 SGAAAA UAJAAA AAAAxx +9217 6105 1 1 7 17 17 217 1217 4217 9217 34 35 NQAAAA VAJAAA HHHHxx +9103 6106 1 3 3 3 3 103 1103 4103 9103 6 7 DMAAAA WAJAAA OOOOxx +868 6107 0 0 8 8 68 868 868 868 868 136 137 KHAAAA XAJAAA VVVVxx +8261 6108 1 1 1 1 61 261 261 3261 8261 122 123 TFAAAA YAJAAA AAAAxx +2720 6109 0 0 0 0 20 720 720 2720 2720 40 41 QAAAAA ZAJAAA HHHHxx +2999 6110 1 3 9 19 99 999 999 2999 2999 198 199 
JLAAAA ABJAAA OOOOxx +769 6111 1 1 9 9 69 769 769 769 769 138 139 PDAAAA BBJAAA VVVVxx +4533 6112 1 1 3 13 33 533 533 4533 4533 66 67 JSAAAA CBJAAA AAAAxx +2030 6113 0 2 0 10 30 30 30 2030 2030 60 61 CAAAAA DBJAAA HHHHxx +5824 6114 0 0 4 4 24 824 1824 824 5824 48 49 AQAAAA EBJAAA OOOOxx +2328 6115 0 0 8 8 28 328 328 2328 2328 56 57 OLAAAA FBJAAA VVVVxx +9970 6116 0 2 0 10 70 970 1970 4970 9970 140 141 MTAAAA GBJAAA AAAAxx +3192 6117 0 0 2 12 92 192 1192 3192 3192 184 185 USAAAA HBJAAA HHHHxx +3387 6118 1 3 7 7 87 387 1387 3387 3387 174 175 HAAAAA IBJAAA OOOOxx +1936 6119 0 0 6 16 36 936 1936 1936 1936 72 73 MWAAAA JBJAAA VVVVxx +6934 6120 0 2 4 14 34 934 934 1934 6934 68 69 SGAAAA KBJAAA AAAAxx +5615 6121 1 3 5 15 15 615 1615 615 5615 30 31 ZHAAAA LBJAAA HHHHxx +2241 6122 1 1 1 1 41 241 241 2241 2241 82 83 FIAAAA MBJAAA OOOOxx +1842 6123 0 2 2 2 42 842 1842 1842 1842 84 85 WSAAAA NBJAAA VVVVxx +8044 6124 0 0 4 4 44 44 44 3044 8044 88 89 KXAAAA OBJAAA AAAAxx +8902 6125 0 2 2 2 2 902 902 3902 8902 4 5 KEAAAA PBJAAA HHHHxx +4519 6126 1 3 9 19 19 519 519 4519 4519 38 39 VRAAAA QBJAAA OOOOxx +492 6127 0 0 2 12 92 492 492 492 492 184 185 YSAAAA RBJAAA VVVVxx +2694 6128 0 2 4 14 94 694 694 2694 2694 188 189 QZAAAA SBJAAA AAAAxx +5861 6129 1 1 1 1 61 861 1861 861 5861 122 123 LRAAAA TBJAAA HHHHxx +2104 6130 0 0 4 4 4 104 104 2104 2104 8 9 YCAAAA UBJAAA OOOOxx +5376 6131 0 0 6 16 76 376 1376 376 5376 152 153 UYAAAA VBJAAA VVVVxx +3147 6132 1 3 7 7 47 147 1147 3147 3147 94 95 BRAAAA WBJAAA AAAAxx +9880 6133 0 0 0 0 80 880 1880 4880 9880 160 161 AQAAAA XBJAAA HHHHxx +6171 6134 1 3 1 11 71 171 171 1171 6171 142 143 JDAAAA YBJAAA OOOOxx +1850 6135 0 2 0 10 50 850 1850 1850 1850 100 101 ETAAAA ZBJAAA VVVVxx +1775 6136 1 3 5 15 75 775 1775 1775 1775 150 151 HQAAAA ACJAAA AAAAxx +9261 6137 1 1 1 1 61 261 1261 4261 9261 122 123 FSAAAA BCJAAA HHHHxx +9648 6138 0 0 8 8 48 648 1648 4648 9648 96 97 CHAAAA CCJAAA OOOOxx +7846 6139 0 2 6 6 46 846 1846 2846 7846 92 93 UPAAAA DCJAAA VVVVxx +1446 6140 0 2 6 6 46 446 1446 1446 1446 92 93 QDAAAA ECJAAA AAAAxx +3139 6141 1 3 9 19 39 139 1139 3139 3139 78 79 TQAAAA FCJAAA HHHHxx +6142 6142 0 2 2 2 42 142 142 1142 6142 84 85 GCAAAA GCJAAA OOOOxx +5812 6143 0 0 2 12 12 812 1812 812 5812 24 25 OPAAAA HCJAAA VVVVxx +6728 6144 0 0 8 8 28 728 728 1728 6728 56 57 UYAAAA ICJAAA AAAAxx +4428 6145 0 0 8 8 28 428 428 4428 4428 56 57 IOAAAA JCJAAA HHHHxx +502 6146 0 2 2 2 2 502 502 502 502 4 5 ITAAAA KCJAAA OOOOxx +2363 6147 1 3 3 3 63 363 363 2363 2363 126 127 XMAAAA LCJAAA VVVVxx +3808 6148 0 0 8 8 8 808 1808 3808 3808 16 17 MQAAAA MCJAAA AAAAxx +1010 6149 0 2 0 10 10 10 1010 1010 1010 20 21 WMAAAA NCJAAA HHHHxx +9565 6150 1 1 5 5 65 565 1565 4565 9565 130 131 XDAAAA OCJAAA OOOOxx +1587 6151 1 3 7 7 87 587 1587 1587 1587 174 175 BJAAAA PCJAAA VVVVxx +1474 6152 0 2 4 14 74 474 1474 1474 1474 148 149 SEAAAA QCJAAA AAAAxx +6215 6153 1 3 5 15 15 215 215 1215 6215 30 31 BFAAAA RCJAAA HHHHxx +2395 6154 1 3 5 15 95 395 395 2395 2395 190 191 DOAAAA SCJAAA OOOOxx +8753 6155 1 1 3 13 53 753 753 3753 8753 106 107 RYAAAA TCJAAA VVVVxx +2446 6156 0 2 6 6 46 446 446 2446 2446 92 93 CQAAAA UCJAAA AAAAxx +60 6157 0 0 0 0 60 60 60 60 60 120 121 ICAAAA VCJAAA HHHHxx +982 6158 0 2 2 2 82 982 982 982 982 164 165 ULAAAA WCJAAA OOOOxx +6489 6159 1 1 9 9 89 489 489 1489 6489 178 179 PPAAAA XCJAAA VVVVxx +5334 6160 0 2 4 14 34 334 1334 334 5334 68 69 EXAAAA YCJAAA AAAAxx +8540 6161 0 0 0 0 40 540 540 3540 8540 80 81 MQAAAA ZCJAAA HHHHxx +490 6162 0 2 0 10 90 490 490 490 490 180 181 WSAAAA ADJAAA OOOOxx 
+6763 6163 1 3 3 3 63 763 763 1763 6763 126 127 DAAAAA BDJAAA VVVVxx +8273 6164 1 1 3 13 73 273 273 3273 8273 146 147 FGAAAA CDJAAA AAAAxx +8327 6165 1 3 7 7 27 327 327 3327 8327 54 55 HIAAAA DDJAAA HHHHxx +8541 6166 1 1 1 1 41 541 541 3541 8541 82 83 NQAAAA EDJAAA OOOOxx +3459 6167 1 3 9 19 59 459 1459 3459 3459 118 119 BDAAAA FDJAAA VVVVxx +5557 6168 1 1 7 17 57 557 1557 557 5557 114 115 TFAAAA GDJAAA AAAAxx +158 6169 0 2 8 18 58 158 158 158 158 116 117 CGAAAA HDJAAA HHHHxx +1741 6170 1 1 1 1 41 741 1741 1741 1741 82 83 ZOAAAA IDJAAA OOOOxx +8385 6171 1 1 5 5 85 385 385 3385 8385 170 171 NKAAAA JDJAAA VVVVxx +617 6172 1 1 7 17 17 617 617 617 617 34 35 TXAAAA KDJAAA AAAAxx +3560 6173 0 0 0 0 60 560 1560 3560 3560 120 121 YGAAAA LDJAAA HHHHxx +5216 6174 0 0 6 16 16 216 1216 216 5216 32 33 QSAAAA MDJAAA OOOOxx +8443 6175 1 3 3 3 43 443 443 3443 8443 86 87 TMAAAA NDJAAA VVVVxx +2700 6176 0 0 0 0 0 700 700 2700 2700 0 1 WZAAAA ODJAAA AAAAxx +3661 6177 1 1 1 1 61 661 1661 3661 3661 122 123 VKAAAA PDJAAA HHHHxx +4875 6178 1 3 5 15 75 875 875 4875 4875 150 151 NFAAAA QDJAAA OOOOxx +6721 6179 1 1 1 1 21 721 721 1721 6721 42 43 NYAAAA RDJAAA VVVVxx +3659 6180 1 3 9 19 59 659 1659 3659 3659 118 119 TKAAAA SDJAAA AAAAxx +8944 6181 0 0 4 4 44 944 944 3944 8944 88 89 AGAAAA TDJAAA HHHHxx +9133 6182 1 1 3 13 33 133 1133 4133 9133 66 67 HNAAAA UDJAAA OOOOxx +9882 6183 0 2 2 2 82 882 1882 4882 9882 164 165 CQAAAA VDJAAA VVVVxx +2102 6184 0 2 2 2 2 102 102 2102 2102 4 5 WCAAAA WDJAAA AAAAxx +9445 6185 1 1 5 5 45 445 1445 4445 9445 90 91 HZAAAA XDJAAA HHHHxx +5559 6186 1 3 9 19 59 559 1559 559 5559 118 119 VFAAAA YDJAAA OOOOxx +6096 6187 0 0 6 16 96 96 96 1096 6096 192 193 MAAAAA ZDJAAA VVVVxx +9336 6188 0 0 6 16 36 336 1336 4336 9336 72 73 CVAAAA AEJAAA AAAAxx +2162 6189 0 2 2 2 62 162 162 2162 2162 124 125 EFAAAA BEJAAA HHHHxx +7459 6190 1 3 9 19 59 459 1459 2459 7459 118 119 XAAAAA CEJAAA OOOOxx +3248 6191 0 0 8 8 48 248 1248 3248 3248 96 97 YUAAAA DEJAAA VVVVxx +9539 6192 1 3 9 19 39 539 1539 4539 9539 78 79 XCAAAA EEJAAA AAAAxx +4449 6193 1 1 9 9 49 449 449 4449 4449 98 99 DPAAAA FEJAAA HHHHxx +2809 6194 1 1 9 9 9 809 809 2809 2809 18 19 BEAAAA GEJAAA OOOOxx +7058 6195 0 2 8 18 58 58 1058 2058 7058 116 117 MLAAAA HEJAAA VVVVxx +3512 6196 0 0 2 12 12 512 1512 3512 3512 24 25 CFAAAA IEJAAA AAAAxx +2802 6197 0 2 2 2 2 802 802 2802 2802 4 5 UDAAAA JEJAAA HHHHxx +6289 6198 1 1 9 9 89 289 289 1289 6289 178 179 XHAAAA KEJAAA OOOOxx +1947 6199 1 3 7 7 47 947 1947 1947 1947 94 95 XWAAAA LEJAAA VVVVxx +9572 6200 0 0 2 12 72 572 1572 4572 9572 144 145 EEAAAA MEJAAA AAAAxx +2356 6201 0 0 6 16 56 356 356 2356 2356 112 113 QMAAAA NEJAAA HHHHxx +3039 6202 1 3 9 19 39 39 1039 3039 3039 78 79 XMAAAA OEJAAA OOOOxx +9452 6203 0 0 2 12 52 452 1452 4452 9452 104 105 OZAAAA PEJAAA VVVVxx +6328 6204 0 0 8 8 28 328 328 1328 6328 56 57 KJAAAA QEJAAA AAAAxx +7661 6205 1 1 1 1 61 661 1661 2661 7661 122 123 RIAAAA REJAAA HHHHxx +2566 6206 0 2 6 6 66 566 566 2566 2566 132 133 SUAAAA SEJAAA OOOOxx +6095 6207 1 3 5 15 95 95 95 1095 6095 190 191 LAAAAA TEJAAA VVVVxx +6367 6208 1 3 7 7 67 367 367 1367 6367 134 135 XKAAAA UEJAAA AAAAxx +3368 6209 0 0 8 8 68 368 1368 3368 3368 136 137 OZAAAA VEJAAA HHHHxx +5567 6210 1 3 7 7 67 567 1567 567 5567 134 135 DGAAAA WEJAAA OOOOxx +9834 6211 0 2 4 14 34 834 1834 4834 9834 68 69 GOAAAA XEJAAA VVVVxx +9695 6212 1 3 5 15 95 695 1695 4695 9695 190 191 XIAAAA YEJAAA AAAAxx +7291 6213 1 3 1 11 91 291 1291 2291 7291 182 183 LUAAAA ZEJAAA HHHHxx +4806 6214 0 2 6 6 6 806 806 4806 4806 12 13 WCAAAA 
AFJAAA OOOOxx +2000 6215 0 0 0 0 0 0 0 2000 2000 0 1 YYAAAA BFJAAA VVVVxx +6817 6216 1 1 7 17 17 817 817 1817 6817 34 35 FCAAAA CFJAAA AAAAxx +8487 6217 1 3 7 7 87 487 487 3487 8487 174 175 LOAAAA DFJAAA HHHHxx +3245 6218 1 1 5 5 45 245 1245 3245 3245 90 91 VUAAAA EFJAAA OOOOxx +632 6219 0 0 2 12 32 632 632 632 632 64 65 IYAAAA FFJAAA VVVVxx +8067 6220 1 3 7 7 67 67 67 3067 8067 134 135 HYAAAA GFJAAA AAAAxx +7140 6221 0 0 0 0 40 140 1140 2140 7140 80 81 QOAAAA HFJAAA HHHHxx +6802 6222 0 2 2 2 2 802 802 1802 6802 4 5 QBAAAA IFJAAA OOOOxx +3980 6223 0 0 0 0 80 980 1980 3980 3980 160 161 CXAAAA JFJAAA VVVVxx +1321 6224 1 1 1 1 21 321 1321 1321 1321 42 43 VYAAAA KFJAAA AAAAxx +2273 6225 1 1 3 13 73 273 273 2273 2273 146 147 LJAAAA LFJAAA HHHHxx +6787 6226 1 3 7 7 87 787 787 1787 6787 174 175 BBAAAA MFJAAA OOOOxx +9480 6227 0 0 0 0 80 480 1480 4480 9480 160 161 QAAAAA NFJAAA VVVVxx +9404 6228 0 0 4 4 4 404 1404 4404 9404 8 9 SXAAAA OFJAAA AAAAxx +3914 6229 0 2 4 14 14 914 1914 3914 3914 28 29 OUAAAA PFJAAA HHHHxx +5507 6230 1 3 7 7 7 507 1507 507 5507 14 15 VDAAAA QFJAAA OOOOxx +1813 6231 1 1 3 13 13 813 1813 1813 1813 26 27 TRAAAA RFJAAA VVVVxx +1999 6232 1 3 9 19 99 999 1999 1999 1999 198 199 XYAAAA SFJAAA AAAAxx +3848 6233 0 0 8 8 48 848 1848 3848 3848 96 97 ASAAAA TFJAAA HHHHxx +9693 6234 1 1 3 13 93 693 1693 4693 9693 186 187 VIAAAA UFJAAA OOOOxx +1353 6235 1 1 3 13 53 353 1353 1353 1353 106 107 BAAAAA VFJAAA VVVVxx +7218 6236 0 2 8 18 18 218 1218 2218 7218 36 37 QRAAAA WFJAAA AAAAxx +8223 6237 1 3 3 3 23 223 223 3223 8223 46 47 HEAAAA XFJAAA HHHHxx +9982 6238 0 2 2 2 82 982 1982 4982 9982 164 165 YTAAAA YFJAAA OOOOxx +8799 6239 1 3 9 19 99 799 799 3799 8799 198 199 LAAAAA ZFJAAA VVVVxx +8929 6240 1 1 9 9 29 929 929 3929 8929 58 59 LFAAAA AGJAAA AAAAxx +4626 6241 0 2 6 6 26 626 626 4626 4626 52 53 YVAAAA BGJAAA HHHHxx +7958 6242 0 2 8 18 58 958 1958 2958 7958 116 117 CUAAAA CGJAAA OOOOxx +3743 6243 1 3 3 3 43 743 1743 3743 3743 86 87 ZNAAAA DGJAAA VVVVxx +8165 6244 1 1 5 5 65 165 165 3165 8165 130 131 BCAAAA EGJAAA AAAAxx +7899 6245 1 3 9 19 99 899 1899 2899 7899 198 199 VRAAAA FGJAAA HHHHxx +8698 6246 0 2 8 18 98 698 698 3698 8698 196 197 OWAAAA GGJAAA OOOOxx +9270 6247 0 2 0 10 70 270 1270 4270 9270 140 141 OSAAAA HGJAAA VVVVxx +6348 6248 0 0 8 8 48 348 348 1348 6348 96 97 EKAAAA IGJAAA AAAAxx +6999 6249 1 3 9 19 99 999 999 1999 6999 198 199 FJAAAA JGJAAA HHHHxx +8467 6250 1 3 7 7 67 467 467 3467 8467 134 135 RNAAAA KGJAAA OOOOxx +3907 6251 1 3 7 7 7 907 1907 3907 3907 14 15 HUAAAA LGJAAA VVVVxx +4738 6252 0 2 8 18 38 738 738 4738 4738 76 77 GAAAAA MGJAAA AAAAxx +248 6253 0 0 8 8 48 248 248 248 248 96 97 OJAAAA NGJAAA HHHHxx +8769 6254 1 1 9 9 69 769 769 3769 8769 138 139 HZAAAA OGJAAA OOOOxx +9922 6255 0 2 2 2 22 922 1922 4922 9922 44 45 QRAAAA PGJAAA VVVVxx +778 6256 0 2 8 18 78 778 778 778 778 156 157 YDAAAA QGJAAA AAAAxx +1233 6257 1 1 3 13 33 233 1233 1233 1233 66 67 LVAAAA RGJAAA HHHHxx +1183 6258 1 3 3 3 83 183 1183 1183 1183 166 167 NTAAAA SGJAAA OOOOxx +2838 6259 0 2 8 18 38 838 838 2838 2838 76 77 EFAAAA TGJAAA VVVVxx +3096 6260 0 0 6 16 96 96 1096 3096 3096 192 193 CPAAAA UGJAAA AAAAxx +8566 6261 0 2 6 6 66 566 566 3566 8566 132 133 MRAAAA VGJAAA HHHHxx +7635 6262 1 3 5 15 35 635 1635 2635 7635 70 71 RHAAAA WGJAAA OOOOxx +5428 6263 0 0 8 8 28 428 1428 428 5428 56 57 UAAAAA XGJAAA VVVVxx +7430 6264 0 2 0 10 30 430 1430 2430 7430 60 61 UZAAAA YGJAAA AAAAxx +7210 6265 0 2 0 10 10 210 1210 2210 7210 20 21 IRAAAA ZGJAAA HHHHxx +4485 6266 1 1 5 5 85 485 485 4485 4485 170 171 NQAAAA 
AHJAAA OOOOxx +9623 6267 1 3 3 3 23 623 1623 4623 9623 46 47 DGAAAA BHJAAA VVVVxx +3670 6268 0 2 0 10 70 670 1670 3670 3670 140 141 ELAAAA CHJAAA AAAAxx +1575 6269 1 3 5 15 75 575 1575 1575 1575 150 151 PIAAAA DHJAAA HHHHxx +5874 6270 0 2 4 14 74 874 1874 874 5874 148 149 YRAAAA EHJAAA OOOOxx +673 6271 1 1 3 13 73 673 673 673 673 146 147 XZAAAA FHJAAA VVVVxx +9712 6272 0 0 2 12 12 712 1712 4712 9712 24 25 OJAAAA GHJAAA AAAAxx +7729 6273 1 1 9 9 29 729 1729 2729 7729 58 59 HLAAAA HHJAAA HHHHxx +4318 6274 0 2 8 18 18 318 318 4318 4318 36 37 CKAAAA IHJAAA OOOOxx +4143 6275 1 3 3 3 43 143 143 4143 4143 86 87 JDAAAA JHJAAA VVVVxx +4932 6276 0 0 2 12 32 932 932 4932 4932 64 65 SHAAAA KHJAAA AAAAxx +5835 6277 1 3 5 15 35 835 1835 835 5835 70 71 LQAAAA LHJAAA HHHHxx +4966 6278 0 2 6 6 66 966 966 4966 4966 132 133 AJAAAA MHJAAA OOOOxx +6711 6279 1 3 1 11 11 711 711 1711 6711 22 23 DYAAAA NHJAAA VVVVxx +3990 6280 0 2 0 10 90 990 1990 3990 3990 180 181 MXAAAA OHJAAA AAAAxx +990 6281 0 2 0 10 90 990 990 990 990 180 181 CMAAAA PHJAAA HHHHxx +220 6282 0 0 0 0 20 220 220 220 220 40 41 MIAAAA QHJAAA OOOOxx +5693 6283 1 1 3 13 93 693 1693 693 5693 186 187 ZKAAAA RHJAAA VVVVxx +3662 6284 0 2 2 2 62 662 1662 3662 3662 124 125 WKAAAA SHJAAA AAAAxx +7844 6285 0 0 4 4 44 844 1844 2844 7844 88 89 SPAAAA THJAAA HHHHxx +5515 6286 1 3 5 15 15 515 1515 515 5515 30 31 DEAAAA UHJAAA OOOOxx +5551 6287 1 3 1 11 51 551 1551 551 5551 102 103 NFAAAA VHJAAA VVVVxx +2358 6288 0 2 8 18 58 358 358 2358 2358 116 117 SMAAAA WHJAAA AAAAxx +8977 6289 1 1 7 17 77 977 977 3977 8977 154 155 HHAAAA XHJAAA HHHHxx +7040 6290 0 0 0 0 40 40 1040 2040 7040 80 81 UKAAAA YHJAAA OOOOxx +105 6291 1 1 5 5 5 105 105 105 105 10 11 BEAAAA ZHJAAA VVVVxx +4496 6292 0 0 6 16 96 496 496 4496 4496 192 193 YQAAAA AIJAAA AAAAxx +2254 6293 0 2 4 14 54 254 254 2254 2254 108 109 SIAAAA BIJAAA HHHHxx +411 6294 1 3 1 11 11 411 411 411 411 22 23 VPAAAA CIJAAA OOOOxx +2373 6295 1 1 3 13 73 373 373 2373 2373 146 147 HNAAAA DIJAAA VVVVxx +3477 6296 1 1 7 17 77 477 1477 3477 3477 154 155 TDAAAA EIJAAA AAAAxx +8964 6297 0 0 4 4 64 964 964 3964 8964 128 129 UGAAAA FIJAAA HHHHxx +8471 6298 1 3 1 11 71 471 471 3471 8471 142 143 VNAAAA GIJAAA OOOOxx +5776 6299 0 0 6 16 76 776 1776 776 5776 152 153 EOAAAA HIJAAA VVVVxx +9921 6300 1 1 1 1 21 921 1921 4921 9921 42 43 PRAAAA IIJAAA AAAAxx +7816 6301 0 0 6 16 16 816 1816 2816 7816 32 33 QOAAAA JIJAAA HHHHxx +2439 6302 1 3 9 19 39 439 439 2439 2439 78 79 VPAAAA KIJAAA OOOOxx +9298 6303 0 2 8 18 98 298 1298 4298 9298 196 197 QTAAAA LIJAAA VVVVxx +9424 6304 0 0 4 4 24 424 1424 4424 9424 48 49 MYAAAA MIJAAA AAAAxx +3252 6305 0 0 2 12 52 252 1252 3252 3252 104 105 CVAAAA NIJAAA HHHHxx +1401 6306 1 1 1 1 1 401 1401 1401 1401 2 3 XBAAAA OIJAAA OOOOxx +9632 6307 0 0 2 12 32 632 1632 4632 9632 64 65 MGAAAA PIJAAA VVVVxx +370 6308 0 2 0 10 70 370 370 370 370 140 141 GOAAAA QIJAAA AAAAxx +728 6309 0 0 8 8 28 728 728 728 728 56 57 ACAAAA RIJAAA HHHHxx +2888 6310 0 0 8 8 88 888 888 2888 2888 176 177 CHAAAA SIJAAA OOOOxx +1441 6311 1 1 1 1 41 441 1441 1441 1441 82 83 LDAAAA TIJAAA VVVVxx +8308 6312 0 0 8 8 8 308 308 3308 8308 16 17 OHAAAA UIJAAA AAAAxx +2165 6313 1 1 5 5 65 165 165 2165 2165 130 131 HFAAAA VIJAAA HHHHxx +6359 6314 1 3 9 19 59 359 359 1359 6359 118 119 PKAAAA WIJAAA OOOOxx +9637 6315 1 1 7 17 37 637 1637 4637 9637 74 75 RGAAAA XIJAAA VVVVxx +5208 6316 0 0 8 8 8 208 1208 208 5208 16 17 ISAAAA YIJAAA AAAAxx +4705 6317 1 1 5 5 5 705 705 4705 4705 10 11 ZYAAAA ZIJAAA HHHHxx +2341 6318 1 1 1 1 41 341 341 2341 2341 82 83 BMAAAA 
AJJAAA OOOOxx +8539 6319 1 3 9 19 39 539 539 3539 8539 78 79 LQAAAA BJJAAA VVVVxx +7528 6320 0 0 8 8 28 528 1528 2528 7528 56 57 ODAAAA CJJAAA AAAAxx +7969 6321 1 1 9 9 69 969 1969 2969 7969 138 139 NUAAAA DJJAAA HHHHxx +6381 6322 1 1 1 1 81 381 381 1381 6381 162 163 LLAAAA EJJAAA OOOOxx +4906 6323 0 2 6 6 6 906 906 4906 4906 12 13 SGAAAA FJJAAA VVVVxx +8697 6324 1 1 7 17 97 697 697 3697 8697 194 195 NWAAAA GJJAAA AAAAxx +6301 6325 1 1 1 1 1 301 301 1301 6301 2 3 JIAAAA HJJAAA HHHHxx +7554 6326 0 2 4 14 54 554 1554 2554 7554 108 109 OEAAAA IJJAAA OOOOxx +5107 6327 1 3 7 7 7 107 1107 107 5107 14 15 LOAAAA JJJAAA VVVVxx +5046 6328 0 2 6 6 46 46 1046 46 5046 92 93 CMAAAA KJJAAA AAAAxx +4063 6329 1 3 3 3 63 63 63 4063 4063 126 127 HAAAAA LJJAAA HHHHxx +7580 6330 0 0 0 0 80 580 1580 2580 7580 160 161 OFAAAA MJJAAA OOOOxx +2245 6331 1 1 5 5 45 245 245 2245 2245 90 91 JIAAAA NJJAAA VVVVxx +3711 6332 1 3 1 11 11 711 1711 3711 3711 22 23 TMAAAA OJJAAA AAAAxx +3220 6333 0 0 0 0 20 220 1220 3220 3220 40 41 WTAAAA PJJAAA HHHHxx +6463 6334 1 3 3 3 63 463 463 1463 6463 126 127 POAAAA QJJAAA OOOOxx +8196 6335 0 0 6 16 96 196 196 3196 8196 192 193 GDAAAA RJJAAA VVVVxx +9875 6336 1 3 5 15 75 875 1875 4875 9875 150 151 VPAAAA SJJAAA AAAAxx +1333 6337 1 1 3 13 33 333 1333 1333 1333 66 67 HZAAAA TJJAAA HHHHxx +7880 6338 0 0 0 0 80 880 1880 2880 7880 160 161 CRAAAA UJJAAA OOOOxx +2322 6339 0 2 2 2 22 322 322 2322 2322 44 45 ILAAAA VJJAAA VVVVxx +2163 6340 1 3 3 3 63 163 163 2163 2163 126 127 FFAAAA WJJAAA AAAAxx +421 6341 1 1 1 1 21 421 421 421 421 42 43 FQAAAA XJJAAA HHHHxx +2042 6342 0 2 2 2 42 42 42 2042 2042 84 85 OAAAAA YJJAAA OOOOxx +1424 6343 0 0 4 4 24 424 1424 1424 1424 48 49 UCAAAA ZJJAAA VVVVxx +7870 6344 0 2 0 10 70 870 1870 2870 7870 140 141 SQAAAA AKJAAA AAAAxx +2653 6345 1 1 3 13 53 653 653 2653 2653 106 107 BYAAAA BKJAAA HHHHxx +4216 6346 0 0 6 16 16 216 216 4216 4216 32 33 EGAAAA CKJAAA OOOOxx +1515 6347 1 3 5 15 15 515 1515 1515 1515 30 31 HGAAAA DKJAAA VVVVxx +7860 6348 0 0 0 0 60 860 1860 2860 7860 120 121 IQAAAA EKJAAA AAAAxx +2984 6349 0 0 4 4 84 984 984 2984 2984 168 169 UKAAAA FKJAAA HHHHxx +6269 6350 1 1 9 9 69 269 269 1269 6269 138 139 DHAAAA GKJAAA OOOOxx +2609 6351 1 1 9 9 9 609 609 2609 2609 18 19 JWAAAA HKJAAA VVVVxx +3671 6352 1 3 1 11 71 671 1671 3671 3671 142 143 FLAAAA IKJAAA AAAAxx +4544 6353 0 0 4 4 44 544 544 4544 4544 88 89 USAAAA JKJAAA HHHHxx +4668 6354 0 0 8 8 68 668 668 4668 4668 136 137 OXAAAA KKJAAA OOOOxx +2565 6355 1 1 5 5 65 565 565 2565 2565 130 131 RUAAAA LKJAAA VVVVxx +3126 6356 0 2 6 6 26 126 1126 3126 3126 52 53 GQAAAA MKJAAA AAAAxx +7573 6357 1 1 3 13 73 573 1573 2573 7573 146 147 HFAAAA NKJAAA HHHHxx +1476 6358 0 0 6 16 76 476 1476 1476 1476 152 153 UEAAAA OKJAAA OOOOxx +2146 6359 0 2 6 6 46 146 146 2146 2146 92 93 OEAAAA PKJAAA VVVVxx +9990 6360 0 2 0 10 90 990 1990 4990 9990 180 181 GUAAAA QKJAAA AAAAxx +2530 6361 0 2 0 10 30 530 530 2530 2530 60 61 ITAAAA RKJAAA HHHHxx +9288 6362 0 0 8 8 88 288 1288 4288 9288 176 177 GTAAAA SKJAAA OOOOxx +9755 6363 1 3 5 15 55 755 1755 4755 9755 110 111 FLAAAA TKJAAA VVVVxx +5305 6364 1 1 5 5 5 305 1305 305 5305 10 11 BWAAAA UKJAAA AAAAxx +2495 6365 1 3 5 15 95 495 495 2495 2495 190 191 ZRAAAA VKJAAA HHHHxx +5443 6366 1 3 3 3 43 443 1443 443 5443 86 87 JBAAAA WKJAAA OOOOxx +1930 6367 0 2 0 10 30 930 1930 1930 1930 60 61 GWAAAA XKJAAA VVVVxx +9134 6368 0 2 4 14 34 134 1134 4134 9134 68 69 INAAAA YKJAAA AAAAxx +2844 6369 0 0 4 4 44 844 844 2844 2844 88 89 KFAAAA ZKJAAA HHHHxx +896 6370 0 0 6 16 96 896 896 896 896 192 193 
MIAAAA ALJAAA OOOOxx +1330 6371 0 2 0 10 30 330 1330 1330 1330 60 61 EZAAAA BLJAAA VVVVxx +8980 6372 0 0 0 0 80 980 980 3980 8980 160 161 KHAAAA CLJAAA AAAAxx +5940 6373 0 0 0 0 40 940 1940 940 5940 80 81 MUAAAA DLJAAA HHHHxx +6494 6374 0 2 4 14 94 494 494 1494 6494 188 189 UPAAAA ELJAAA OOOOxx +165 6375 1 1 5 5 65 165 165 165 165 130 131 JGAAAA FLJAAA VVVVxx +2510 6376 0 2 0 10 10 510 510 2510 2510 20 21 OSAAAA GLJAAA AAAAxx +9950 6377 0 2 0 10 50 950 1950 4950 9950 100 101 SSAAAA HLJAAA HHHHxx +3854 6378 0 2 4 14 54 854 1854 3854 3854 108 109 GSAAAA ILJAAA OOOOxx +7493 6379 1 1 3 13 93 493 1493 2493 7493 186 187 FCAAAA JLJAAA VVVVxx +4124 6380 0 0 4 4 24 124 124 4124 4124 48 49 QCAAAA KLJAAA AAAAxx +8563 6381 1 3 3 3 63 563 563 3563 8563 126 127 JRAAAA LLJAAA HHHHxx +8735 6382 1 3 5 15 35 735 735 3735 8735 70 71 ZXAAAA MLJAAA OOOOxx +9046 6383 0 2 6 6 46 46 1046 4046 9046 92 93 YJAAAA NLJAAA VVVVxx +1754 6384 0 2 4 14 54 754 1754 1754 1754 108 109 MPAAAA OLJAAA AAAAxx +6954 6385 0 2 4 14 54 954 954 1954 6954 108 109 MHAAAA PLJAAA HHHHxx +4953 6386 1 1 3 13 53 953 953 4953 4953 106 107 NIAAAA QLJAAA OOOOxx +8142 6387 0 2 2 2 42 142 142 3142 8142 84 85 EBAAAA RLJAAA VVVVxx +9661 6388 1 1 1 1 61 661 1661 4661 9661 122 123 PHAAAA SLJAAA AAAAxx +6415 6389 1 3 5 15 15 415 415 1415 6415 30 31 TMAAAA TLJAAA HHHHxx +5782 6390 0 2 2 2 82 782 1782 782 5782 164 165 KOAAAA ULJAAA OOOOxx +7721 6391 1 1 1 1 21 721 1721 2721 7721 42 43 ZKAAAA VLJAAA VVVVxx +580 6392 0 0 0 0 80 580 580 580 580 160 161 IWAAAA WLJAAA AAAAxx +3784 6393 0 0 4 4 84 784 1784 3784 3784 168 169 OPAAAA XLJAAA HHHHxx +9810 6394 0 2 0 10 10 810 1810 4810 9810 20 21 INAAAA YLJAAA OOOOxx +8488 6395 0 0 8 8 88 488 488 3488 8488 176 177 MOAAAA ZLJAAA VVVVxx +6214 6396 0 2 4 14 14 214 214 1214 6214 28 29 AFAAAA AMJAAA AAAAxx +9433 6397 1 1 3 13 33 433 1433 4433 9433 66 67 VYAAAA BMJAAA HHHHxx +9959 6398 1 3 9 19 59 959 1959 4959 9959 118 119 BTAAAA CMJAAA OOOOxx +554 6399 0 2 4 14 54 554 554 554 554 108 109 IVAAAA DMJAAA VVVVxx +6646 6400 0 2 6 6 46 646 646 1646 6646 92 93 QVAAAA EMJAAA AAAAxx +1138 6401 0 2 8 18 38 138 1138 1138 1138 76 77 URAAAA FMJAAA HHHHxx +9331 6402 1 3 1 11 31 331 1331 4331 9331 62 63 XUAAAA GMJAAA OOOOxx +7331 6403 1 3 1 11 31 331 1331 2331 7331 62 63 ZVAAAA HMJAAA VVVVxx +3482 6404 0 2 2 2 82 482 1482 3482 3482 164 165 YDAAAA IMJAAA AAAAxx +3795 6405 1 3 5 15 95 795 1795 3795 3795 190 191 ZPAAAA JMJAAA HHHHxx +2441 6406 1 1 1 1 41 441 441 2441 2441 82 83 XPAAAA KMJAAA OOOOxx +5229 6407 1 1 9 9 29 229 1229 229 5229 58 59 DTAAAA LMJAAA VVVVxx +7012 6408 0 0 2 12 12 12 1012 2012 7012 24 25 SJAAAA MMJAAA AAAAxx +7036 6409 0 0 6 16 36 36 1036 2036 7036 72 73 QKAAAA NMJAAA HHHHxx +8243 6410 1 3 3 3 43 243 243 3243 8243 86 87 BFAAAA OMJAAA OOOOxx +9320 6411 0 0 0 0 20 320 1320 4320 9320 40 41 MUAAAA PMJAAA VVVVxx +4693 6412 1 1 3 13 93 693 693 4693 4693 186 187 NYAAAA QMJAAA AAAAxx +6741 6413 1 1 1 1 41 741 741 1741 6741 82 83 HZAAAA RMJAAA HHHHxx +2997 6414 1 1 7 17 97 997 997 2997 2997 194 195 HLAAAA SMJAAA OOOOxx +4838 6415 0 2 8 18 38 838 838 4838 4838 76 77 CEAAAA TMJAAA VVVVxx +6945 6416 1 1 5 5 45 945 945 1945 6945 90 91 DHAAAA UMJAAA AAAAxx +8253 6417 1 1 3 13 53 253 253 3253 8253 106 107 LFAAAA VMJAAA HHHHxx +8989 6418 1 1 9 9 89 989 989 3989 8989 178 179 THAAAA WMJAAA OOOOxx +2640 6419 0 0 0 0 40 640 640 2640 2640 80 81 OXAAAA XMJAAA VVVVxx +5647 6420 1 3 7 7 47 647 1647 647 5647 94 95 FJAAAA YMJAAA AAAAxx +7186 6421 0 2 6 6 86 186 1186 2186 7186 172 173 KQAAAA ZMJAAA HHHHxx +3278 6422 0 2 8 18 78 278 1278 
3278 3278 156 157 CWAAAA ANJAAA OOOOxx +8546 6423 0 2 6 6 46 546 546 3546 8546 92 93 SQAAAA BNJAAA VVVVxx +8297 6424 1 1 7 17 97 297 297 3297 8297 194 195 DHAAAA CNJAAA AAAAxx +9534 6425 0 2 4 14 34 534 1534 4534 9534 68 69 SCAAAA DNJAAA HHHHxx +9618 6426 0 2 8 18 18 618 1618 4618 9618 36 37 YFAAAA ENJAAA OOOOxx +8839 6427 1 3 9 19 39 839 839 3839 8839 78 79 ZBAAAA FNJAAA VVVVxx +7605 6428 1 1 5 5 5 605 1605 2605 7605 10 11 NGAAAA GNJAAA AAAAxx +6421 6429 1 1 1 1 21 421 421 1421 6421 42 43 ZMAAAA HNJAAA HHHHxx +3582 6430 0 2 2 2 82 582 1582 3582 3582 164 165 UHAAAA INJAAA OOOOxx +485 6431 1 1 5 5 85 485 485 485 485 170 171 RSAAAA JNJAAA VVVVxx +1925 6432 1 1 5 5 25 925 1925 1925 1925 50 51 BWAAAA KNJAAA AAAAxx +4296 6433 0 0 6 16 96 296 296 4296 4296 192 193 GJAAAA LNJAAA HHHHxx +8874 6434 0 2 4 14 74 874 874 3874 8874 148 149 IDAAAA MNJAAA OOOOxx +1443 6435 1 3 3 3 43 443 1443 1443 1443 86 87 NDAAAA NNJAAA VVVVxx +4239 6436 1 3 9 19 39 239 239 4239 4239 78 79 BHAAAA ONJAAA AAAAxx +9760 6437 0 0 0 0 60 760 1760 4760 9760 120 121 KLAAAA PNJAAA HHHHxx +136 6438 0 0 6 16 36 136 136 136 136 72 73 GFAAAA QNJAAA OOOOxx +6472 6439 0 0 2 12 72 472 472 1472 6472 144 145 YOAAAA RNJAAA VVVVxx +4896 6440 0 0 6 16 96 896 896 4896 4896 192 193 IGAAAA SNJAAA AAAAxx +9028 6441 0 0 8 8 28 28 1028 4028 9028 56 57 GJAAAA TNJAAA HHHHxx +8354 6442 0 2 4 14 54 354 354 3354 8354 108 109 IJAAAA UNJAAA OOOOxx +8648 6443 0 0 8 8 48 648 648 3648 8648 96 97 QUAAAA VNJAAA VVVVxx +918 6444 0 2 8 18 18 918 918 918 918 36 37 IJAAAA WNJAAA AAAAxx +6606 6445 0 2 6 6 6 606 606 1606 6606 12 13 CUAAAA XNJAAA HHHHxx +2462 6446 0 2 2 2 62 462 462 2462 2462 124 125 SQAAAA YNJAAA OOOOxx +7536 6447 0 0 6 16 36 536 1536 2536 7536 72 73 WDAAAA ZNJAAA VVVVxx +1700 6448 0 0 0 0 0 700 1700 1700 1700 0 1 KNAAAA AOJAAA AAAAxx +6740 6449 0 0 0 0 40 740 740 1740 6740 80 81 GZAAAA BOJAAA HHHHxx +28 6450 0 0 8 8 28 28 28 28 28 56 57 CBAAAA COJAAA OOOOxx +6044 6451 0 0 4 4 44 44 44 1044 6044 88 89 MYAAAA DOJAAA VVVVxx +5053 6452 1 1 3 13 53 53 1053 53 5053 106 107 JMAAAA EOJAAA AAAAxx +4832 6453 0 0 2 12 32 832 832 4832 4832 64 65 WDAAAA FOJAAA HHHHxx +9145 6454 1 1 5 5 45 145 1145 4145 9145 90 91 TNAAAA GOJAAA OOOOxx +5482 6455 0 2 2 2 82 482 1482 482 5482 164 165 WCAAAA HOJAAA VVVVxx +7644 6456 0 0 4 4 44 644 1644 2644 7644 88 89 AIAAAA IOJAAA AAAAxx +2128 6457 0 0 8 8 28 128 128 2128 2128 56 57 WDAAAA JOJAAA HHHHxx +6583 6458 1 3 3 3 83 583 583 1583 6583 166 167 FTAAAA KOJAAA OOOOxx +4224 6459 0 0 4 4 24 224 224 4224 4224 48 49 MGAAAA LOJAAA VVVVxx +5253 6460 1 1 3 13 53 253 1253 253 5253 106 107 BUAAAA MOJAAA AAAAxx +8219 6461 1 3 9 19 19 219 219 3219 8219 38 39 DEAAAA NOJAAA HHHHxx +8113 6462 1 1 3 13 13 113 113 3113 8113 26 27 BAAAAA OOJAAA OOOOxx +3616 6463 0 0 6 16 16 616 1616 3616 3616 32 33 CJAAAA POJAAA VVVVxx +1361 6464 1 1 1 1 61 361 1361 1361 1361 122 123 JAAAAA QOJAAA AAAAxx +949 6465 1 1 9 9 49 949 949 949 949 98 99 NKAAAA ROJAAA HHHHxx +8582 6466 0 2 2 2 82 582 582 3582 8582 164 165 CSAAAA SOJAAA OOOOxx +5104 6467 0 0 4 4 4 104 1104 104 5104 8 9 IOAAAA TOJAAA VVVVxx +6146 6468 0 2 6 6 46 146 146 1146 6146 92 93 KCAAAA UOJAAA AAAAxx +7681 6469 1 1 1 1 81 681 1681 2681 7681 162 163 LJAAAA VOJAAA HHHHxx +1904 6470 0 0 4 4 4 904 1904 1904 1904 8 9 GVAAAA WOJAAA OOOOxx +1989 6471 1 1 9 9 89 989 1989 1989 1989 178 179 NYAAAA XOJAAA VVVVxx +4179 6472 1 3 9 19 79 179 179 4179 4179 158 159 TEAAAA YOJAAA AAAAxx +1739 6473 1 3 9 19 39 739 1739 1739 1739 78 79 XOAAAA ZOJAAA HHHHxx +2447 6474 1 3 7 7 47 447 447 2447 2447 94 95 DQAAAA 
APJAAA OOOOxx +3029 6475 1 1 9 9 29 29 1029 3029 3029 58 59 NMAAAA BPJAAA VVVVxx +9783 6476 1 3 3 3 83 783 1783 4783 9783 166 167 HMAAAA CPJAAA AAAAxx +8381 6477 1 1 1 1 81 381 381 3381 8381 162 163 JKAAAA DPJAAA HHHHxx +8755 6478 1 3 5 15 55 755 755 3755 8755 110 111 TYAAAA EPJAAA OOOOxx +8384 6479 0 0 4 4 84 384 384 3384 8384 168 169 MKAAAA FPJAAA VVVVxx +7655 6480 1 3 5 15 55 655 1655 2655 7655 110 111 LIAAAA GPJAAA AAAAxx +4766 6481 0 2 6 6 66 766 766 4766 4766 132 133 IBAAAA HPJAAA HHHHxx +3324 6482 0 0 4 4 24 324 1324 3324 3324 48 49 WXAAAA IPJAAA OOOOxx +5022 6483 0 2 2 2 22 22 1022 22 5022 44 45 ELAAAA JPJAAA VVVVxx +2856 6484 0 0 6 16 56 856 856 2856 2856 112 113 WFAAAA KPJAAA AAAAxx +6503 6485 1 3 3 3 3 503 503 1503 6503 6 7 DQAAAA LPJAAA HHHHxx +6872 6486 0 0 2 12 72 872 872 1872 6872 144 145 IEAAAA MPJAAA OOOOxx +1663 6487 1 3 3 3 63 663 1663 1663 1663 126 127 ZLAAAA NPJAAA VVVVxx +6964 6488 0 0 4 4 64 964 964 1964 6964 128 129 WHAAAA OPJAAA AAAAxx +4622 6489 0 2 2 2 22 622 622 4622 4622 44 45 UVAAAA PPJAAA HHHHxx +6089 6490 1 1 9 9 89 89 89 1089 6089 178 179 FAAAAA QPJAAA OOOOxx +8567 6491 1 3 7 7 67 567 567 3567 8567 134 135 NRAAAA RPJAAA VVVVxx +597 6492 1 1 7 17 97 597 597 597 597 194 195 ZWAAAA SPJAAA AAAAxx +4222 6493 0 2 2 2 22 222 222 4222 4222 44 45 KGAAAA TPJAAA HHHHxx +9322 6494 0 2 2 2 22 322 1322 4322 9322 44 45 OUAAAA UPJAAA OOOOxx +624 6495 0 0 4 4 24 624 624 624 624 48 49 AYAAAA VPJAAA VVVVxx +4329 6496 1 1 9 9 29 329 329 4329 4329 58 59 NKAAAA WPJAAA AAAAxx +6781 6497 1 1 1 1 81 781 781 1781 6781 162 163 VAAAAA XPJAAA HHHHxx +1673 6498 1 1 3 13 73 673 1673 1673 1673 146 147 JMAAAA YPJAAA OOOOxx +6633 6499 1 1 3 13 33 633 633 1633 6633 66 67 DVAAAA ZPJAAA VVVVxx +2569 6500 1 1 9 9 69 569 569 2569 2569 138 139 VUAAAA AQJAAA AAAAxx +4995 6501 1 3 5 15 95 995 995 4995 4995 190 191 DKAAAA BQJAAA HHHHxx +2749 6502 1 1 9 9 49 749 749 2749 2749 98 99 TBAAAA CQJAAA OOOOxx +9044 6503 0 0 4 4 44 44 1044 4044 9044 88 89 WJAAAA DQJAAA VVVVxx +5823 6504 1 3 3 3 23 823 1823 823 5823 46 47 ZPAAAA EQJAAA AAAAxx +9366 6505 0 2 6 6 66 366 1366 4366 9366 132 133 GWAAAA FQJAAA HHHHxx +1169 6506 1 1 9 9 69 169 1169 1169 1169 138 139 ZSAAAA GQJAAA OOOOxx +1300 6507 0 0 0 0 0 300 1300 1300 1300 0 1 AYAAAA HQJAAA VVVVxx +9973 6508 1 1 3 13 73 973 1973 4973 9973 146 147 PTAAAA IQJAAA AAAAxx +2092 6509 0 0 2 12 92 92 92 2092 2092 184 185 MCAAAA JQJAAA HHHHxx +9776 6510 0 0 6 16 76 776 1776 4776 9776 152 153 AMAAAA KQJAAA OOOOxx +7612 6511 0 0 2 12 12 612 1612 2612 7612 24 25 UGAAAA LQJAAA VVVVxx +7190 6512 0 2 0 10 90 190 1190 2190 7190 180 181 OQAAAA MQJAAA AAAAxx +5147 6513 1 3 7 7 47 147 1147 147 5147 94 95 ZPAAAA NQJAAA HHHHxx +3722 6514 0 2 2 2 22 722 1722 3722 3722 44 45 ENAAAA OQJAAA OOOOxx +5858 6515 0 2 8 18 58 858 1858 858 5858 116 117 IRAAAA PQJAAA VVVVxx +3204 6516 0 0 4 4 4 204 1204 3204 3204 8 9 GTAAAA QQJAAA AAAAxx +8994 6517 0 2 4 14 94 994 994 3994 8994 188 189 YHAAAA RQJAAA HHHHxx +7478 6518 0 2 8 18 78 478 1478 2478 7478 156 157 QBAAAA SQJAAA OOOOxx +9624 6519 0 0 4 4 24 624 1624 4624 9624 48 49 EGAAAA TQJAAA VVVVxx +6639 6520 1 3 9 19 39 639 639 1639 6639 78 79 JVAAAA UQJAAA AAAAxx +369 6521 1 1 9 9 69 369 369 369 369 138 139 FOAAAA VQJAAA HHHHxx +7766 6522 0 2 6 6 66 766 1766 2766 7766 132 133 SMAAAA WQJAAA OOOOxx +4094 6523 0 2 4 14 94 94 94 4094 4094 188 189 MBAAAA XQJAAA VVVVxx +9556 6524 0 0 6 16 56 556 1556 4556 9556 112 113 ODAAAA YQJAAA AAAAxx +4887 6525 1 3 7 7 87 887 887 4887 4887 174 175 ZFAAAA ZQJAAA HHHHxx +2321 6526 1 1 1 1 21 321 321 2321 2321 42 43 
HLAAAA ARJAAA OOOOxx +9201 6527 1 1 1 1 1 201 1201 4201 9201 2 3 XPAAAA BRJAAA VVVVxx +1627 6528 1 3 7 7 27 627 1627 1627 1627 54 55 PKAAAA CRJAAA AAAAxx +150 6529 0 2 0 10 50 150 150 150 150 100 101 UFAAAA DRJAAA HHHHxx +8010 6530 0 2 0 10 10 10 10 3010 8010 20 21 CWAAAA ERJAAA OOOOxx +8026 6531 0 2 6 6 26 26 26 3026 8026 52 53 SWAAAA FRJAAA VVVVxx +5495 6532 1 3 5 15 95 495 1495 495 5495 190 191 JDAAAA GRJAAA AAAAxx +6213 6533 1 1 3 13 13 213 213 1213 6213 26 27 ZEAAAA HRJAAA HHHHxx +6464 6534 0 0 4 4 64 464 464 1464 6464 128 129 QOAAAA IRJAAA OOOOxx +1158 6535 0 2 8 18 58 158 1158 1158 1158 116 117 OSAAAA JRJAAA VVVVxx +8669 6536 1 1 9 9 69 669 669 3669 8669 138 139 LVAAAA KRJAAA AAAAxx +3225 6537 1 1 5 5 25 225 1225 3225 3225 50 51 BUAAAA LRJAAA HHHHxx +1294 6538 0 2 4 14 94 294 1294 1294 1294 188 189 UXAAAA MRJAAA OOOOxx +2166 6539 0 2 6 6 66 166 166 2166 2166 132 133 IFAAAA NRJAAA VVVVxx +9328 6540 0 0 8 8 28 328 1328 4328 9328 56 57 UUAAAA ORJAAA AAAAxx +8431 6541 1 3 1 11 31 431 431 3431 8431 62 63 HMAAAA PRJAAA HHHHxx +7100 6542 0 0 0 0 0 100 1100 2100 7100 0 1 CNAAAA QRJAAA OOOOxx +8126 6543 0 2 6 6 26 126 126 3126 8126 52 53 OAAAAA RRJAAA VVVVxx +2185 6544 1 1 5 5 85 185 185 2185 2185 170 171 BGAAAA SRJAAA AAAAxx +5697 6545 1 1 7 17 97 697 1697 697 5697 194 195 DLAAAA TRJAAA HHHHxx +5531 6546 1 3 1 11 31 531 1531 531 5531 62 63 TEAAAA URJAAA OOOOxx +3020 6547 0 0 0 0 20 20 1020 3020 3020 40 41 EMAAAA VRJAAA VVVVxx +3076 6548 0 0 6 16 76 76 1076 3076 3076 152 153 IOAAAA WRJAAA AAAAxx +9228 6549 0 0 8 8 28 228 1228 4228 9228 56 57 YQAAAA XRJAAA HHHHxx +1734 6550 0 2 4 14 34 734 1734 1734 1734 68 69 SOAAAA YRJAAA OOOOxx +7616 6551 0 0 6 16 16 616 1616 2616 7616 32 33 YGAAAA ZRJAAA VVVVxx +9059 6552 1 3 9 19 59 59 1059 4059 9059 118 119 LKAAAA ASJAAA AAAAxx +323 6553 1 3 3 3 23 323 323 323 323 46 47 LMAAAA BSJAAA HHHHxx +1283 6554 1 3 3 3 83 283 1283 1283 1283 166 167 JXAAAA CSJAAA OOOOxx +9535 6555 1 3 5 15 35 535 1535 4535 9535 70 71 TCAAAA DSJAAA VVVVxx +2580 6556 0 0 0 0 80 580 580 2580 2580 160 161 GVAAAA ESJAAA AAAAxx +7633 6557 1 1 3 13 33 633 1633 2633 7633 66 67 PHAAAA FSJAAA HHHHxx +9497 6558 1 1 7 17 97 497 1497 4497 9497 194 195 HBAAAA GSJAAA OOOOxx +9842 6559 0 2 2 2 42 842 1842 4842 9842 84 85 OOAAAA HSJAAA VVVVxx +3426 6560 0 2 6 6 26 426 1426 3426 3426 52 53 UBAAAA ISJAAA AAAAxx +7650 6561 0 2 0 10 50 650 1650 2650 7650 100 101 GIAAAA JSJAAA HHHHxx +9935 6562 1 3 5 15 35 935 1935 4935 9935 70 71 DSAAAA KSJAAA OOOOxx +9354 6563 0 2 4 14 54 354 1354 4354 9354 108 109 UVAAAA LSJAAA VVVVxx +5569 6564 1 1 9 9 69 569 1569 569 5569 138 139 FGAAAA MSJAAA AAAAxx +5765 6565 1 1 5 5 65 765 1765 765 5765 130 131 TNAAAA NSJAAA HHHHxx +7283 6566 1 3 3 3 83 283 1283 2283 7283 166 167 DUAAAA OSJAAA OOOOxx +1068 6567 0 0 8 8 68 68 1068 1068 1068 136 137 CPAAAA PSJAAA VVVVxx +1641 6568 1 1 1 1 41 641 1641 1641 1641 82 83 DLAAAA QSJAAA AAAAxx +1688 6569 0 0 8 8 88 688 1688 1688 1688 176 177 YMAAAA RSJAAA HHHHxx +1133 6570 1 1 3 13 33 133 1133 1133 1133 66 67 PRAAAA SSJAAA OOOOxx +4493 6571 1 1 3 13 93 493 493 4493 4493 186 187 VQAAAA TSJAAA VVVVxx +3354 6572 0 2 4 14 54 354 1354 3354 3354 108 109 AZAAAA USJAAA AAAAxx +4029 6573 1 1 9 9 29 29 29 4029 4029 58 59 ZYAAAA VSJAAA HHHHxx +6704 6574 0 0 4 4 4 704 704 1704 6704 8 9 WXAAAA WSJAAA OOOOxx +3221 6575 1 1 1 1 21 221 1221 3221 3221 42 43 XTAAAA XSJAAA VVVVxx +9432 6576 0 0 2 12 32 432 1432 4432 9432 64 65 UYAAAA YSJAAA AAAAxx +6990 6577 0 2 0 10 90 990 990 1990 6990 180 181 WIAAAA ZSJAAA HHHHxx +1760 6578 0 0 0 0 60 760 1760 1760 
1760 120 121 SPAAAA ATJAAA OOOOxx +4754 6579 0 2 4 14 54 754 754 4754 4754 108 109 WAAAAA BTJAAA VVVVxx +7724 6580 0 0 4 4 24 724 1724 2724 7724 48 49 CLAAAA CTJAAA AAAAxx +9487 6581 1 3 7 7 87 487 1487 4487 9487 174 175 XAAAAA DTJAAA HHHHxx +166 6582 0 2 6 6 66 166 166 166 166 132 133 KGAAAA ETJAAA OOOOxx +5479 6583 1 3 9 19 79 479 1479 479 5479 158 159 TCAAAA FTJAAA VVVVxx +8744 6584 0 0 4 4 44 744 744 3744 8744 88 89 IYAAAA GTJAAA AAAAxx +5746 6585 0 2 6 6 46 746 1746 746 5746 92 93 ANAAAA HTJAAA HHHHxx +907 6586 1 3 7 7 7 907 907 907 907 14 15 XIAAAA ITJAAA OOOOxx +3968 6587 0 0 8 8 68 968 1968 3968 3968 136 137 QWAAAA JTJAAA VVVVxx +5721 6588 1 1 1 1 21 721 1721 721 5721 42 43 BMAAAA KTJAAA AAAAxx +6738 6589 0 2 8 18 38 738 738 1738 6738 76 77 EZAAAA LTJAAA HHHHxx +4097 6590 1 1 7 17 97 97 97 4097 4097 194 195 PBAAAA MTJAAA OOOOxx +8456 6591 0 0 6 16 56 456 456 3456 8456 112 113 GNAAAA NTJAAA VVVVxx +1269 6592 1 1 9 9 69 269 1269 1269 1269 138 139 VWAAAA OTJAAA AAAAxx +7997 6593 1 1 7 17 97 997 1997 2997 7997 194 195 PVAAAA PTJAAA HHHHxx +9457 6594 1 1 7 17 57 457 1457 4457 9457 114 115 TZAAAA QTJAAA OOOOxx +1159 6595 1 3 9 19 59 159 1159 1159 1159 118 119 PSAAAA RTJAAA VVVVxx +1631 6596 1 3 1 11 31 631 1631 1631 1631 62 63 TKAAAA STJAAA AAAAxx +2019 6597 1 3 9 19 19 19 19 2019 2019 38 39 RZAAAA TTJAAA HHHHxx +3186 6598 0 2 6 6 86 186 1186 3186 3186 172 173 OSAAAA UTJAAA OOOOxx +5587 6599 1 3 7 7 87 587 1587 587 5587 174 175 XGAAAA VTJAAA VVVVxx +9172 6600 0 0 2 12 72 172 1172 4172 9172 144 145 UOAAAA WTJAAA AAAAxx +5589 6601 1 1 9 9 89 589 1589 589 5589 178 179 ZGAAAA XTJAAA HHHHxx +5103 6602 1 3 3 3 3 103 1103 103 5103 6 7 HOAAAA YTJAAA OOOOxx +3177 6603 1 1 7 17 77 177 1177 3177 3177 154 155 FSAAAA ZTJAAA VVVVxx +8887 6604 1 3 7 7 87 887 887 3887 8887 174 175 VDAAAA AUJAAA AAAAxx +12 6605 0 0 2 12 12 12 12 12 12 24 25 MAAAAA BUJAAA HHHHxx +8575 6606 1 3 5 15 75 575 575 3575 8575 150 151 VRAAAA CUJAAA OOOOxx +4335 6607 1 3 5 15 35 335 335 4335 4335 70 71 TKAAAA DUJAAA VVVVxx +4581 6608 1 1 1 1 81 581 581 4581 4581 162 163 FUAAAA EUJAAA AAAAxx +4444 6609 0 0 4 4 44 444 444 4444 4444 88 89 YOAAAA FUJAAA HHHHxx +7978 6610 0 2 8 18 78 978 1978 2978 7978 156 157 WUAAAA GUJAAA OOOOxx +3081 6611 1 1 1 1 81 81 1081 3081 3081 162 163 NOAAAA HUJAAA VVVVxx +4059 6612 1 3 9 19 59 59 59 4059 4059 118 119 DAAAAA IUJAAA AAAAxx +5711 6613 1 3 1 11 11 711 1711 711 5711 22 23 RLAAAA JUJAAA HHHHxx +7069 6614 1 1 9 9 69 69 1069 2069 7069 138 139 XLAAAA KUJAAA OOOOxx +6150 6615 0 2 0 10 50 150 150 1150 6150 100 101 OCAAAA LUJAAA VVVVxx +9550 6616 0 2 0 10 50 550 1550 4550 9550 100 101 IDAAAA MUJAAA AAAAxx +7087 6617 1 3 7 7 87 87 1087 2087 7087 174 175 PMAAAA NUJAAA HHHHxx +9557 6618 1 1 7 17 57 557 1557 4557 9557 114 115 PDAAAA OUJAAA OOOOxx +7856 6619 0 0 6 16 56 856 1856 2856 7856 112 113 EQAAAA PUJAAA VVVVxx +1115 6620 1 3 5 15 15 115 1115 1115 1115 30 31 XQAAAA QUJAAA AAAAxx +1086 6621 0 2 6 6 86 86 1086 1086 1086 172 173 UPAAAA RUJAAA HHHHxx +5048 6622 0 0 8 8 48 48 1048 48 5048 96 97 EMAAAA SUJAAA OOOOxx +5168 6623 0 0 8 8 68 168 1168 168 5168 136 137 UQAAAA TUJAAA VVVVxx +6029 6624 1 1 9 9 29 29 29 1029 6029 58 59 XXAAAA UUJAAA AAAAxx +546 6625 0 2 6 6 46 546 546 546 546 92 93 AVAAAA VUJAAA HHHHxx +2908 6626 0 0 8 8 8 908 908 2908 2908 16 17 WHAAAA WUJAAA OOOOxx +779 6627 1 3 9 19 79 779 779 779 779 158 159 ZDAAAA XUJAAA VVVVxx +4202 6628 0 2 2 2 2 202 202 4202 4202 4 5 QFAAAA YUJAAA AAAAxx +9984 6629 0 0 4 4 84 984 1984 4984 9984 168 169 AUAAAA ZUJAAA HHHHxx +4730 6630 0 2 0 10 30 730 730 4730 
4730 60 61 YZAAAA AVJAAA OOOOxx +6517 6631 1 1 7 17 17 517 517 1517 6517 34 35 RQAAAA BVJAAA VVVVxx +8410 6632 0 2 0 10 10 410 410 3410 8410 20 21 MLAAAA CVJAAA AAAAxx +4793 6633 1 1 3 13 93 793 793 4793 4793 186 187 JCAAAA DVJAAA HHHHxx +3431 6634 1 3 1 11 31 431 1431 3431 3431 62 63 ZBAAAA EVJAAA OOOOxx +2481 6635 1 1 1 1 81 481 481 2481 2481 162 163 LRAAAA FVJAAA VVVVxx +3905 6636 1 1 5 5 5 905 1905 3905 3905 10 11 FUAAAA GVJAAA AAAAxx +8807 6637 1 3 7 7 7 807 807 3807 8807 14 15 TAAAAA HVJAAA HHHHxx +2660 6638 0 0 0 0 60 660 660 2660 2660 120 121 IYAAAA IVJAAA OOOOxx +4985 6639 1 1 5 5 85 985 985 4985 4985 170 171 TJAAAA JVJAAA VVVVxx +3080 6640 0 0 0 0 80 80 1080 3080 3080 160 161 MOAAAA KVJAAA AAAAxx +1090 6641 0 2 0 10 90 90 1090 1090 1090 180 181 YPAAAA LVJAAA HHHHxx +6917 6642 1 1 7 17 17 917 917 1917 6917 34 35 BGAAAA MVJAAA OOOOxx +5177 6643 1 1 7 17 77 177 1177 177 5177 154 155 DRAAAA NVJAAA VVVVxx +2729 6644 1 1 9 9 29 729 729 2729 2729 58 59 ZAAAAA OVJAAA AAAAxx +9706 6645 0 2 6 6 6 706 1706 4706 9706 12 13 IJAAAA PVJAAA HHHHxx +9929 6646 1 1 9 9 29 929 1929 4929 9929 58 59 XRAAAA QVJAAA OOOOxx +1547 6647 1 3 7 7 47 547 1547 1547 1547 94 95 NHAAAA RVJAAA VVVVxx +2798 6648 0 2 8 18 98 798 798 2798 2798 196 197 QDAAAA SVJAAA AAAAxx +4420 6649 0 0 0 0 20 420 420 4420 4420 40 41 AOAAAA TVJAAA HHHHxx +6771 6650 1 3 1 11 71 771 771 1771 6771 142 143 LAAAAA UVJAAA OOOOxx +2004 6651 0 0 4 4 4 4 4 2004 2004 8 9 CZAAAA VVJAAA VVVVxx +8686 6652 0 2 6 6 86 686 686 3686 8686 172 173 CWAAAA WVJAAA AAAAxx +3663 6653 1 3 3 3 63 663 1663 3663 3663 126 127 XKAAAA XVJAAA HHHHxx +806 6654 0 2 6 6 6 806 806 806 806 12 13 AFAAAA YVJAAA OOOOxx +4309 6655 1 1 9 9 9 309 309 4309 4309 18 19 TJAAAA ZVJAAA VVVVxx +7443 6656 1 3 3 3 43 443 1443 2443 7443 86 87 HAAAAA AWJAAA AAAAxx +5779 6657 1 3 9 19 79 779 1779 779 5779 158 159 HOAAAA BWJAAA HHHHxx +8821 6658 1 1 1 1 21 821 821 3821 8821 42 43 HBAAAA CWJAAA OOOOxx +4198 6659 0 2 8 18 98 198 198 4198 4198 196 197 MFAAAA DWJAAA VVVVxx +8115 6660 1 3 5 15 15 115 115 3115 8115 30 31 DAAAAA EWJAAA AAAAxx +9554 6661 0 2 4 14 54 554 1554 4554 9554 108 109 MDAAAA FWJAAA HHHHxx +8956 6662 0 0 6 16 56 956 956 3956 8956 112 113 MGAAAA GWJAAA OOOOxx +4733 6663 1 1 3 13 33 733 733 4733 4733 66 67 BAAAAA HWJAAA VVVVxx +5417 6664 1 1 7 17 17 417 1417 417 5417 34 35 JAAAAA IWJAAA AAAAxx +4792 6665 0 0 2 12 92 792 792 4792 4792 184 185 ICAAAA JWJAAA HHHHxx +462 6666 0 2 2 2 62 462 462 462 462 124 125 URAAAA KWJAAA OOOOxx +3687 6667 1 3 7 7 87 687 1687 3687 3687 174 175 VLAAAA LWJAAA VVVVxx +2013 6668 1 1 3 13 13 13 13 2013 2013 26 27 LZAAAA MWJAAA AAAAxx +5386 6669 0 2 6 6 86 386 1386 386 5386 172 173 EZAAAA NWJAAA HHHHxx +2816 6670 0 0 6 16 16 816 816 2816 2816 32 33 IEAAAA OWJAAA OOOOxx +7827 6671 1 3 7 7 27 827 1827 2827 7827 54 55 BPAAAA PWJAAA VVVVxx +5077 6672 1 1 7 17 77 77 1077 77 5077 154 155 HNAAAA QWJAAA AAAAxx +6039 6673 1 3 9 19 39 39 39 1039 6039 78 79 HYAAAA RWJAAA HHHHxx +215 6674 1 3 5 15 15 215 215 215 215 30 31 HIAAAA SWJAAA OOOOxx +855 6675 1 3 5 15 55 855 855 855 855 110 111 XGAAAA TWJAAA VVVVxx +9692 6676 0 0 2 12 92 692 1692 4692 9692 184 185 UIAAAA UWJAAA AAAAxx +8391 6677 1 3 1 11 91 391 391 3391 8391 182 183 TKAAAA VWJAAA HHHHxx +8424 6678 0 0 4 4 24 424 424 3424 8424 48 49 AMAAAA WWJAAA OOOOxx +6331 6679 1 3 1 11 31 331 331 1331 6331 62 63 NJAAAA XWJAAA VVVVxx +6561 6680 1 1 1 1 61 561 561 1561 6561 122 123 JSAAAA YWJAAA AAAAxx +8955 6681 1 3 5 15 55 955 955 3955 8955 110 111 LGAAAA ZWJAAA HHHHxx +1764 6682 0 0 4 4 64 764 1764 1764 1764 128 
129 WPAAAA AXJAAA OOOOxx +6623 6683 1 3 3 3 23 623 623 1623 6623 46 47 TUAAAA BXJAAA VVVVxx +2900 6684 0 0 0 0 0 900 900 2900 2900 0 1 OHAAAA CXJAAA AAAAxx +7048 6685 0 0 8 8 48 48 1048 2048 7048 96 97 CLAAAA DXJAAA HHHHxx +3843 6686 1 3 3 3 43 843 1843 3843 3843 86 87 VRAAAA EXJAAA OOOOxx +4855 6687 1 3 5 15 55 855 855 4855 4855 110 111 TEAAAA FXJAAA VVVVxx +7383 6688 1 3 3 3 83 383 1383 2383 7383 166 167 ZXAAAA GXJAAA AAAAxx +7765 6689 1 1 5 5 65 765 1765 2765 7765 130 131 RMAAAA HXJAAA HHHHxx +1125 6690 1 1 5 5 25 125 1125 1125 1125 50 51 HRAAAA IXJAAA OOOOxx +755 6691 1 3 5 15 55 755 755 755 755 110 111 BDAAAA JXJAAA VVVVxx +2995 6692 1 3 5 15 95 995 995 2995 2995 190 191 FLAAAA KXJAAA AAAAxx +8907 6693 1 3 7 7 7 907 907 3907 8907 14 15 PEAAAA LXJAAA HHHHxx +9357 6694 1 1 7 17 57 357 1357 4357 9357 114 115 XVAAAA MXJAAA OOOOxx +4469 6695 1 1 9 9 69 469 469 4469 4469 138 139 XPAAAA NXJAAA VVVVxx +2147 6696 1 3 7 7 47 147 147 2147 2147 94 95 PEAAAA OXJAAA AAAAxx +2952 6697 0 0 2 12 52 952 952 2952 2952 104 105 OJAAAA PXJAAA HHHHxx +1324 6698 0 0 4 4 24 324 1324 1324 1324 48 49 YYAAAA QXJAAA OOOOxx +1173 6699 1 1 3 13 73 173 1173 1173 1173 146 147 DTAAAA RXJAAA VVVVxx +3169 6700 1 1 9 9 69 169 1169 3169 3169 138 139 XRAAAA SXJAAA AAAAxx +5149 6701 1 1 9 9 49 149 1149 149 5149 98 99 BQAAAA TXJAAA HHHHxx +9660 6702 0 0 0 0 60 660 1660 4660 9660 120 121 OHAAAA UXJAAA OOOOxx +3446 6703 0 2 6 6 46 446 1446 3446 3446 92 93 OCAAAA VXJAAA VVVVxx +6988 6704 0 0 8 8 88 988 988 1988 6988 176 177 UIAAAA WXJAAA AAAAxx +5829 6705 1 1 9 9 29 829 1829 829 5829 58 59 FQAAAA XXJAAA HHHHxx +7166 6706 0 2 6 6 66 166 1166 2166 7166 132 133 QPAAAA YXJAAA OOOOxx +3940 6707 0 0 0 0 40 940 1940 3940 3940 80 81 OVAAAA ZXJAAA VVVVxx +2645 6708 1 1 5 5 45 645 645 2645 2645 90 91 TXAAAA AYJAAA AAAAxx +478 6709 0 2 8 18 78 478 478 478 478 156 157 KSAAAA BYJAAA HHHHxx +1156 6710 0 0 6 16 56 156 1156 1156 1156 112 113 MSAAAA CYJAAA OOOOxx +2731 6711 1 3 1 11 31 731 731 2731 2731 62 63 BBAAAA DYJAAA VVVVxx +5637 6712 1 1 7 17 37 637 1637 637 5637 74 75 VIAAAA EYJAAA AAAAxx +7517 6713 1 1 7 17 17 517 1517 2517 7517 34 35 DDAAAA FYJAAA HHHHxx +5331 6714 1 3 1 11 31 331 1331 331 5331 62 63 BXAAAA GYJAAA OOOOxx +9640 6715 0 0 0 0 40 640 1640 4640 9640 80 81 UGAAAA HYJAAA VVVVxx +4108 6716 0 0 8 8 8 108 108 4108 4108 16 17 ACAAAA IYJAAA AAAAxx +1087 6717 1 3 7 7 87 87 1087 1087 1087 174 175 VPAAAA JYJAAA HHHHxx +8017 6718 1 1 7 17 17 17 17 3017 8017 34 35 JWAAAA KYJAAA OOOOxx +8795 6719 1 3 5 15 95 795 795 3795 8795 190 191 HAAAAA LYJAAA VVVVxx +7060 6720 0 0 0 0 60 60 1060 2060 7060 120 121 OLAAAA MYJAAA AAAAxx +9450 6721 0 2 0 10 50 450 1450 4450 9450 100 101 MZAAAA NYJAAA HHHHxx +390 6722 0 2 0 10 90 390 390 390 390 180 181 APAAAA OYJAAA OOOOxx +66 6723 0 2 6 6 66 66 66 66 66 132 133 OCAAAA PYJAAA VVVVxx +8789 6724 1 1 9 9 89 789 789 3789 8789 178 179 BAAAAA QYJAAA AAAAxx +9260 6725 0 0 0 0 60 260 1260 4260 9260 120 121 ESAAAA RYJAAA HHHHxx +6679 6726 1 3 9 19 79 679 679 1679 6679 158 159 XWAAAA SYJAAA OOOOxx +9052 6727 0 0 2 12 52 52 1052 4052 9052 104 105 EKAAAA TYJAAA VVVVxx +9561 6728 1 1 1 1 61 561 1561 4561 9561 122 123 TDAAAA UYJAAA AAAAxx +9725 6729 1 1 5 5 25 725 1725 4725 9725 50 51 BKAAAA VYJAAA HHHHxx +6298 6730 0 2 8 18 98 298 298 1298 6298 196 197 GIAAAA WYJAAA OOOOxx +8654 6731 0 2 4 14 54 654 654 3654 8654 108 109 WUAAAA XYJAAA VVVVxx +8725 6732 1 1 5 5 25 725 725 3725 8725 50 51 PXAAAA YYJAAA AAAAxx +9377 6733 1 1 7 17 77 377 1377 4377 9377 154 155 RWAAAA ZYJAAA HHHHxx +3807 6734 1 3 7 7 7 807 1807 3807 
3807 14 15 LQAAAA AZJAAA OOOOxx +8048 6735 0 0 8 8 48 48 48 3048 8048 96 97 OXAAAA BZJAAA VVVVxx +764 6736 0 0 4 4 64 764 764 764 764 128 129 KDAAAA CZJAAA AAAAxx +9702 6737 0 2 2 2 2 702 1702 4702 9702 4 5 EJAAAA DZJAAA HHHHxx +8060 6738 0 0 0 0 60 60 60 3060 8060 120 121 AYAAAA EZJAAA OOOOxx +6371 6739 1 3 1 11 71 371 371 1371 6371 142 143 BLAAAA FZJAAA VVVVxx +5237 6740 1 1 7 17 37 237 1237 237 5237 74 75 LTAAAA GZJAAA AAAAxx +743 6741 1 3 3 3 43 743 743 743 743 86 87 PCAAAA HZJAAA HHHHxx +7395 6742 1 3 5 15 95 395 1395 2395 7395 190 191 LYAAAA IZJAAA OOOOxx +3365 6743 1 1 5 5 65 365 1365 3365 3365 130 131 LZAAAA JZJAAA VVVVxx +6667 6744 1 3 7 7 67 667 667 1667 6667 134 135 LWAAAA KZJAAA AAAAxx +3445 6745 1 1 5 5 45 445 1445 3445 3445 90 91 NCAAAA LZJAAA HHHHxx +4019 6746 1 3 9 19 19 19 19 4019 4019 38 39 PYAAAA MZJAAA OOOOxx +7035 6747 1 3 5 15 35 35 1035 2035 7035 70 71 PKAAAA NZJAAA VVVVxx +5274 6748 0 2 4 14 74 274 1274 274 5274 148 149 WUAAAA OZJAAA AAAAxx +519 6749 1 3 9 19 19 519 519 519 519 38 39 ZTAAAA PZJAAA HHHHxx +2801 6750 1 1 1 1 1 801 801 2801 2801 2 3 TDAAAA QZJAAA OOOOxx +3320 6751 0 0 0 0 20 320 1320 3320 3320 40 41 SXAAAA RZJAAA VVVVxx +3153 6752 1 1 3 13 53 153 1153 3153 3153 106 107 HRAAAA SZJAAA AAAAxx +7680 6753 0 0 0 0 80 680 1680 2680 7680 160 161 KJAAAA TZJAAA HHHHxx +8942 6754 0 2 2 2 42 942 942 3942 8942 84 85 YFAAAA UZJAAA OOOOxx +3195 6755 1 3 5 15 95 195 1195 3195 3195 190 191 XSAAAA VZJAAA VVVVxx +2287 6756 1 3 7 7 87 287 287 2287 2287 174 175 ZJAAAA WZJAAA AAAAxx +8325 6757 1 1 5 5 25 325 325 3325 8325 50 51 FIAAAA XZJAAA HHHHxx +2603 6758 1 3 3 3 3 603 603 2603 2603 6 7 DWAAAA YZJAAA OOOOxx +5871 6759 1 3 1 11 71 871 1871 871 5871 142 143 VRAAAA ZZJAAA VVVVxx +1773 6760 1 1 3 13 73 773 1773 1773 1773 146 147 FQAAAA AAKAAA AAAAxx +3323 6761 1 3 3 3 23 323 1323 3323 3323 46 47 VXAAAA BAKAAA HHHHxx +2053 6762 1 1 3 13 53 53 53 2053 2053 106 107 ZAAAAA CAKAAA OOOOxx +4062 6763 0 2 2 2 62 62 62 4062 4062 124 125 GAAAAA DAKAAA VVVVxx +4611 6764 1 3 1 11 11 611 611 4611 4611 22 23 JVAAAA EAKAAA AAAAxx +3451 6765 1 3 1 11 51 451 1451 3451 3451 102 103 TCAAAA FAKAAA HHHHxx +1819 6766 1 3 9 19 19 819 1819 1819 1819 38 39 ZRAAAA GAKAAA OOOOxx +9806 6767 0 2 6 6 6 806 1806 4806 9806 12 13 ENAAAA HAKAAA VVVVxx +6619 6768 1 3 9 19 19 619 619 1619 6619 38 39 PUAAAA IAKAAA AAAAxx +1031 6769 1 3 1 11 31 31 1031 1031 1031 62 63 RNAAAA JAKAAA HHHHxx +1865 6770 1 1 5 5 65 865 1865 1865 1865 130 131 TTAAAA KAKAAA OOOOxx +6282 6771 0 2 2 2 82 282 282 1282 6282 164 165 QHAAAA LAKAAA VVVVxx +1178 6772 0 2 8 18 78 178 1178 1178 1178 156 157 ITAAAA MAKAAA AAAAxx +8007 6773 1 3 7 7 7 7 7 3007 8007 14 15 ZVAAAA NAKAAA HHHHxx +9126 6774 0 2 6 6 26 126 1126 4126 9126 52 53 ANAAAA OAKAAA OOOOxx +9113 6775 1 1 3 13 13 113 1113 4113 9113 26 27 NMAAAA PAKAAA VVVVxx +537 6776 1 1 7 17 37 537 537 537 537 74 75 RUAAAA QAKAAA AAAAxx +6208 6777 0 0 8 8 8 208 208 1208 6208 16 17 UEAAAA RAKAAA HHHHxx +1626 6778 0 2 6 6 26 626 1626 1626 1626 52 53 OKAAAA SAKAAA OOOOxx +7188 6779 0 0 8 8 88 188 1188 2188 7188 176 177 MQAAAA TAKAAA VVVVxx +9216 6780 0 0 6 16 16 216 1216 4216 9216 32 33 MQAAAA UAKAAA AAAAxx +6134 6781 0 2 4 14 34 134 134 1134 6134 68 69 YBAAAA VAKAAA HHHHxx +2074 6782 0 2 4 14 74 74 74 2074 2074 148 149 UBAAAA WAKAAA OOOOxx +6369 6783 1 1 9 9 69 369 369 1369 6369 138 139 ZKAAAA XAKAAA VVVVxx +9306 6784 0 2 6 6 6 306 1306 4306 9306 12 13 YTAAAA YAKAAA AAAAxx +3155 6785 1 3 5 15 55 155 1155 3155 3155 110 111 JRAAAA ZAKAAA HHHHxx +3611 6786 1 3 1 11 11 611 1611 3611 3611 22 23 XIAAAA 
ABKAAA OOOOxx +6530 6787 0 2 0 10 30 530 530 1530 6530 60 61 ERAAAA BBKAAA VVVVxx +6979 6788 1 3 9 19 79 979 979 1979 6979 158 159 LIAAAA CBKAAA AAAAxx +9129 6789 1 1 9 9 29 129 1129 4129 9129 58 59 DNAAAA DBKAAA HHHHxx +8013 6790 1 1 3 13 13 13 13 3013 8013 26 27 FWAAAA EBKAAA OOOOxx +6926 6791 0 2 6 6 26 926 926 1926 6926 52 53 KGAAAA FBKAAA VVVVxx +1877 6792 1 1 7 17 77 877 1877 1877 1877 154 155 FUAAAA GBKAAA AAAAxx +1882 6793 0 2 2 2 82 882 1882 1882 1882 164 165 KUAAAA HBKAAA HHHHxx +6720 6794 0 0 0 0 20 720 720 1720 6720 40 41 MYAAAA IBKAAA OOOOxx +690 6795 0 2 0 10 90 690 690 690 690 180 181 OAAAAA JBKAAA VVVVxx +143 6796 1 3 3 3 43 143 143 143 143 86 87 NFAAAA KBKAAA AAAAxx +7241 6797 1 1 1 1 41 241 1241 2241 7241 82 83 NSAAAA LBKAAA HHHHxx +6461 6798 1 1 1 1 61 461 461 1461 6461 122 123 NOAAAA MBKAAA OOOOxx +2258 6799 0 2 8 18 58 258 258 2258 2258 116 117 WIAAAA NBKAAA VVVVxx +2280 6800 0 0 0 0 80 280 280 2280 2280 160 161 SJAAAA OBKAAA AAAAxx +7556 6801 0 0 6 16 56 556 1556 2556 7556 112 113 QEAAAA PBKAAA HHHHxx +1038 6802 0 2 8 18 38 38 1038 1038 1038 76 77 YNAAAA QBKAAA OOOOxx +2634 6803 0 2 4 14 34 634 634 2634 2634 68 69 IXAAAA RBKAAA VVVVxx +7847 6804 1 3 7 7 47 847 1847 2847 7847 94 95 VPAAAA SBKAAA AAAAxx +4415 6805 1 3 5 15 15 415 415 4415 4415 30 31 VNAAAA TBKAAA HHHHxx +1933 6806 1 1 3 13 33 933 1933 1933 1933 66 67 JWAAAA UBKAAA OOOOxx +8034 6807 0 2 4 14 34 34 34 3034 8034 68 69 AXAAAA VBKAAA VVVVxx +9233 6808 1 1 3 13 33 233 1233 4233 9233 66 67 DRAAAA WBKAAA AAAAxx +6572 6809 0 0 2 12 72 572 572 1572 6572 144 145 USAAAA XBKAAA HHHHxx +1586 6810 0 2 6 6 86 586 1586 1586 1586 172 173 AJAAAA YBKAAA OOOOxx +8512 6811 0 0 2 12 12 512 512 3512 8512 24 25 KPAAAA ZBKAAA VVVVxx +7421 6812 1 1 1 1 21 421 1421 2421 7421 42 43 LZAAAA ACKAAA AAAAxx +503 6813 1 3 3 3 3 503 503 503 503 6 7 JTAAAA BCKAAA HHHHxx +5332 6814 0 0 2 12 32 332 1332 332 5332 64 65 CXAAAA CCKAAA OOOOxx +2602 6815 0 2 2 2 2 602 602 2602 2602 4 5 CWAAAA DCKAAA VVVVxx +2902 6816 0 2 2 2 2 902 902 2902 2902 4 5 QHAAAA ECKAAA AAAAxx +2979 6817 1 3 9 19 79 979 979 2979 2979 158 159 PKAAAA FCKAAA HHHHxx +1431 6818 1 3 1 11 31 431 1431 1431 1431 62 63 BDAAAA GCKAAA OOOOxx +8639 6819 1 3 9 19 39 639 639 3639 8639 78 79 HUAAAA HCKAAA VVVVxx +4218 6820 0 2 8 18 18 218 218 4218 4218 36 37 GGAAAA ICKAAA AAAAxx +7453 6821 1 1 3 13 53 453 1453 2453 7453 106 107 RAAAAA JCKAAA HHHHxx +5448 6822 0 0 8 8 48 448 1448 448 5448 96 97 OBAAAA KCKAAA OOOOxx +6768 6823 0 0 8 8 68 768 768 1768 6768 136 137 IAAAAA LCKAAA VVVVxx +3104 6824 0 0 4 4 4 104 1104 3104 3104 8 9 KPAAAA MCKAAA AAAAxx +2297 6825 1 1 7 17 97 297 297 2297 2297 194 195 JKAAAA NCKAAA HHHHxx +7994 6826 0 2 4 14 94 994 1994 2994 7994 188 189 MVAAAA OCKAAA OOOOxx +550 6827 0 2 0 10 50 550 550 550 550 100 101 EVAAAA PCKAAA VVVVxx +4777 6828 1 1 7 17 77 777 777 4777 4777 154 155 TBAAAA QCKAAA AAAAxx +5962 6829 0 2 2 2 62 962 1962 962 5962 124 125 IVAAAA RCKAAA HHHHxx +1763 6830 1 3 3 3 63 763 1763 1763 1763 126 127 VPAAAA SCKAAA OOOOxx +3654 6831 0 2 4 14 54 654 1654 3654 3654 108 109 OKAAAA TCKAAA VVVVxx +4106 6832 0 2 6 6 6 106 106 4106 4106 12 13 YBAAAA UCKAAA AAAAxx +5156 6833 0 0 6 16 56 156 1156 156 5156 112 113 IQAAAA VCKAAA HHHHxx +422 6834 0 2 2 2 22 422 422 422 422 44 45 GQAAAA WCKAAA OOOOxx +5011 6835 1 3 1 11 11 11 1011 11 5011 22 23 TKAAAA XCKAAA VVVVxx +218 6836 0 2 8 18 18 218 218 218 218 36 37 KIAAAA YCKAAA AAAAxx +9762 6837 0 2 2 2 62 762 1762 4762 9762 124 125 MLAAAA ZCKAAA HHHHxx +6074 6838 0 2 4 14 74 74 74 1074 6074 148 149 QZAAAA ADKAAA OOOOxx 
+4060 6839 0 0 0 0 60 60 60 4060 4060 120 121 EAAAAA BDKAAA VVVVxx +8680 6840 0 0 0 0 80 680 680 3680 8680 160 161 WVAAAA CDKAAA AAAAxx +5863 6841 1 3 3 3 63 863 1863 863 5863 126 127 NRAAAA DDKAAA HHHHxx +8042 6842 0 2 2 2 42 42 42 3042 8042 84 85 IXAAAA EDKAAA OOOOxx +2964 6843 0 0 4 4 64 964 964 2964 2964 128 129 AKAAAA FDKAAA VVVVxx +6931 6844 1 3 1 11 31 931 931 1931 6931 62 63 PGAAAA GDKAAA AAAAxx +6715 6845 1 3 5 15 15 715 715 1715 6715 30 31 HYAAAA HDKAAA HHHHxx +5859 6846 1 3 9 19 59 859 1859 859 5859 118 119 JRAAAA IDKAAA OOOOxx +6173 6847 1 1 3 13 73 173 173 1173 6173 146 147 LDAAAA JDKAAA VVVVxx +7788 6848 0 0 8 8 88 788 1788 2788 7788 176 177 ONAAAA KDKAAA AAAAxx +9370 6849 0 2 0 10 70 370 1370 4370 9370 140 141 KWAAAA LDKAAA HHHHxx +3038 6850 0 2 8 18 38 38 1038 3038 3038 76 77 WMAAAA MDKAAA OOOOxx +6483 6851 1 3 3 3 83 483 483 1483 6483 166 167 JPAAAA NDKAAA VVVVxx +7534 6852 0 2 4 14 34 534 1534 2534 7534 68 69 UDAAAA ODKAAA AAAAxx +5769 6853 1 1 9 9 69 769 1769 769 5769 138 139 XNAAAA PDKAAA HHHHxx +9152 6854 0 0 2 12 52 152 1152 4152 9152 104 105 AOAAAA QDKAAA OOOOxx +6251 6855 1 3 1 11 51 251 251 1251 6251 102 103 LGAAAA RDKAAA VVVVxx +9209 6856 1 1 9 9 9 209 1209 4209 9209 18 19 FQAAAA SDKAAA AAAAxx +5365 6857 1 1 5 5 65 365 1365 365 5365 130 131 JYAAAA TDKAAA HHHHxx +509 6858 1 1 9 9 9 509 509 509 509 18 19 PTAAAA UDKAAA OOOOxx +3132 6859 0 0 2 12 32 132 1132 3132 3132 64 65 MQAAAA VDKAAA VVVVxx +5373 6860 1 1 3 13 73 373 1373 373 5373 146 147 RYAAAA WDKAAA AAAAxx +4247 6861 1 3 7 7 47 247 247 4247 4247 94 95 JHAAAA XDKAAA HHHHxx +3491 6862 1 3 1 11 91 491 1491 3491 3491 182 183 HEAAAA YDKAAA OOOOxx +495 6863 1 3 5 15 95 495 495 495 495 190 191 BTAAAA ZDKAAA VVVVxx +1594 6864 0 2 4 14 94 594 1594 1594 1594 188 189 IJAAAA AEKAAA AAAAxx +2243 6865 1 3 3 3 43 243 243 2243 2243 86 87 HIAAAA BEKAAA HHHHxx +7780 6866 0 0 0 0 80 780 1780 2780 7780 160 161 GNAAAA CEKAAA OOOOxx +5632 6867 0 0 2 12 32 632 1632 632 5632 64 65 QIAAAA DEKAAA VVVVxx +2679 6868 1 3 9 19 79 679 679 2679 2679 158 159 BZAAAA EEKAAA AAAAxx +1354 6869 0 2 4 14 54 354 1354 1354 1354 108 109 CAAAAA FEKAAA HHHHxx +180 6870 0 0 0 0 80 180 180 180 180 160 161 YGAAAA GEKAAA OOOOxx +7017 6871 1 1 7 17 17 17 1017 2017 7017 34 35 XJAAAA HEKAAA VVVVxx +1867 6872 1 3 7 7 67 867 1867 1867 1867 134 135 VTAAAA IEKAAA AAAAxx +2213 6873 1 1 3 13 13 213 213 2213 2213 26 27 DHAAAA JEKAAA HHHHxx +8773 6874 1 1 3 13 73 773 773 3773 8773 146 147 LZAAAA KEKAAA OOOOxx +1784 6875 0 0 4 4 84 784 1784 1784 1784 168 169 QQAAAA LEKAAA VVVVxx +5961 6876 1 1 1 1 61 961 1961 961 5961 122 123 HVAAAA MEKAAA AAAAxx +8801 6877 1 1 1 1 1 801 801 3801 8801 2 3 NAAAAA NEKAAA HHHHxx +4860 6878 0 0 0 0 60 860 860 4860 4860 120 121 YEAAAA OEKAAA OOOOxx +2214 6879 0 2 4 14 14 214 214 2214 2214 28 29 EHAAAA PEKAAA VVVVxx +1735 6880 1 3 5 15 35 735 1735 1735 1735 70 71 TOAAAA QEKAAA AAAAxx +578 6881 0 2 8 18 78 578 578 578 578 156 157 GWAAAA REKAAA HHHHxx +7853 6882 1 1 3 13 53 853 1853 2853 7853 106 107 BQAAAA SEKAAA OOOOxx +2215 6883 1 3 5 15 15 215 215 2215 2215 30 31 FHAAAA TEKAAA VVVVxx +4704 6884 0 0 4 4 4 704 704 4704 4704 8 9 YYAAAA UEKAAA AAAAxx +9379 6885 1 3 9 19 79 379 1379 4379 9379 158 159 TWAAAA VEKAAA HHHHxx +9745 6886 1 1 5 5 45 745 1745 4745 9745 90 91 VKAAAA WEKAAA OOOOxx +5636 6887 0 0 6 16 36 636 1636 636 5636 72 73 UIAAAA XEKAAA VVVVxx +4548 6888 0 0 8 8 48 548 548 4548 4548 96 97 YSAAAA YEKAAA AAAAxx +6537 6889 1 1 7 17 37 537 537 1537 6537 74 75 LRAAAA ZEKAAA HHHHxx +7748 6890 0 0 8 8 48 748 1748 2748 7748 96 97 AMAAAA 
AFKAAA OOOOxx +687 6891 1 3 7 7 87 687 687 687 687 174 175 LAAAAA BFKAAA VVVVxx +1243 6892 1 3 3 3 43 243 1243 1243 1243 86 87 VVAAAA CFKAAA AAAAxx +852 6893 0 0 2 12 52 852 852 852 852 104 105 UGAAAA DFKAAA HHHHxx +785 6894 1 1 5 5 85 785 785 785 785 170 171 FEAAAA EFKAAA OOOOxx +2002 6895 0 2 2 2 2 2 2 2002 2002 4 5 AZAAAA FFKAAA VVVVxx +2748 6896 0 0 8 8 48 748 748 2748 2748 96 97 SBAAAA GFKAAA AAAAxx +6075 6897 1 3 5 15 75 75 75 1075 6075 150 151 RZAAAA HFKAAA HHHHxx +7029 6898 1 1 9 9 29 29 1029 2029 7029 58 59 JKAAAA IFKAAA OOOOxx +7474 6899 0 2 4 14 74 474 1474 2474 7474 148 149 MBAAAA JFKAAA VVVVxx +7755 6900 1 3 5 15 55 755 1755 2755 7755 110 111 HMAAAA KFKAAA AAAAxx +1456 6901 0 0 6 16 56 456 1456 1456 1456 112 113 AEAAAA LFKAAA HHHHxx +2808 6902 0 0 8 8 8 808 808 2808 2808 16 17 AEAAAA MFKAAA OOOOxx +4089 6903 1 1 9 9 89 89 89 4089 4089 178 179 HBAAAA NFKAAA VVVVxx +4718 6904 0 2 8 18 18 718 718 4718 4718 36 37 MZAAAA OFKAAA AAAAxx +910 6905 0 2 0 10 10 910 910 910 910 20 21 AJAAAA PFKAAA HHHHxx +2868 6906 0 0 8 8 68 868 868 2868 2868 136 137 IGAAAA QFKAAA OOOOxx +2103 6907 1 3 3 3 3 103 103 2103 2103 6 7 XCAAAA RFKAAA VVVVxx +2407 6908 1 3 7 7 7 407 407 2407 2407 14 15 POAAAA SFKAAA AAAAxx +4353 6909 1 1 3 13 53 353 353 4353 4353 106 107 LLAAAA TFKAAA HHHHxx +7988 6910 0 0 8 8 88 988 1988 2988 7988 176 177 GVAAAA UFKAAA OOOOxx +2750 6911 0 2 0 10 50 750 750 2750 2750 100 101 UBAAAA VFKAAA VVVVxx +2006 6912 0 2 6 6 6 6 6 2006 2006 12 13 EZAAAA WFKAAA AAAAxx +4617 6913 1 1 7 17 17 617 617 4617 4617 34 35 PVAAAA XFKAAA HHHHxx +1251 6914 1 3 1 11 51 251 1251 1251 1251 102 103 DWAAAA YFKAAA OOOOxx +4590 6915 0 2 0 10 90 590 590 4590 4590 180 181 OUAAAA ZFKAAA VVVVxx +1144 6916 0 0 4 4 44 144 1144 1144 1144 88 89 ASAAAA AGKAAA AAAAxx +7131 6917 1 3 1 11 31 131 1131 2131 7131 62 63 HOAAAA BGKAAA HHHHxx +95 6918 1 3 5 15 95 95 95 95 95 190 191 RDAAAA CGKAAA OOOOxx +4827 6919 1 3 7 7 27 827 827 4827 4827 54 55 RDAAAA DGKAAA VVVVxx +4307 6920 1 3 7 7 7 307 307 4307 4307 14 15 RJAAAA EGKAAA AAAAxx +1505 6921 1 1 5 5 5 505 1505 1505 1505 10 11 XFAAAA FGKAAA HHHHxx +8191 6922 1 3 1 11 91 191 191 3191 8191 182 183 BDAAAA GGKAAA OOOOxx +5037 6923 1 1 7 17 37 37 1037 37 5037 74 75 TLAAAA HGKAAA VVVVxx +7363 6924 1 3 3 3 63 363 1363 2363 7363 126 127 FXAAAA IGKAAA AAAAxx +8427 6925 1 3 7 7 27 427 427 3427 8427 54 55 DMAAAA JGKAAA HHHHxx +5231 6926 1 3 1 11 31 231 1231 231 5231 62 63 FTAAAA KGKAAA OOOOxx +2943 6927 1 3 3 3 43 943 943 2943 2943 86 87 FJAAAA LGKAAA VVVVxx +4624 6928 0 0 4 4 24 624 624 4624 4624 48 49 WVAAAA MGKAAA AAAAxx +2020 6929 0 0 0 0 20 20 20 2020 2020 40 41 SZAAAA NGKAAA HHHHxx +6155 6930 1 3 5 15 55 155 155 1155 6155 110 111 TCAAAA OGKAAA OOOOxx +4381 6931 1 1 1 1 81 381 381 4381 4381 162 163 NMAAAA PGKAAA VVVVxx +1057 6932 1 1 7 17 57 57 1057 1057 1057 114 115 ROAAAA QGKAAA AAAAxx +9010 6933 0 2 0 10 10 10 1010 4010 9010 20 21 OIAAAA RGKAAA HHHHxx +4947 6934 1 3 7 7 47 947 947 4947 4947 94 95 HIAAAA SGKAAA OOOOxx +335 6935 1 3 5 15 35 335 335 335 335 70 71 XMAAAA TGKAAA VVVVxx +6890 6936 0 2 0 10 90 890 890 1890 6890 180 181 AFAAAA UGKAAA AAAAxx +5070 6937 0 2 0 10 70 70 1070 70 5070 140 141 ANAAAA VGKAAA HHHHxx +5270 6938 0 2 0 10 70 270 1270 270 5270 140 141 SUAAAA WGKAAA OOOOxx +8657 6939 1 1 7 17 57 657 657 3657 8657 114 115 ZUAAAA XGKAAA VVVVxx +7625 6940 1 1 5 5 25 625 1625 2625 7625 50 51 HHAAAA YGKAAA AAAAxx +5759 6941 1 3 9 19 59 759 1759 759 5759 118 119 NNAAAA ZGKAAA HHHHxx +9483 6942 1 3 3 3 83 483 1483 4483 9483 166 167 TAAAAA AHKAAA OOOOxx +8304 6943 0 0 4 4 
4 304 304 3304 8304 8 9 KHAAAA BHKAAA VVVVxx +296 6944 0 0 6 16 96 296 296 296 296 192 193 KLAAAA CHKAAA AAAAxx +1176 6945 0 0 6 16 76 176 1176 1176 1176 152 153 GTAAAA DHKAAA HHHHxx +2069 6946 1 1 9 9 69 69 69 2069 2069 138 139 PBAAAA EHKAAA OOOOxx +1531 6947 1 3 1 11 31 531 1531 1531 1531 62 63 XGAAAA FHKAAA VVVVxx +5329 6948 1 1 9 9 29 329 1329 329 5329 58 59 ZWAAAA GHKAAA AAAAxx +3702 6949 0 2 2 2 2 702 1702 3702 3702 4 5 KMAAAA HHKAAA HHHHxx +6520 6950 0 0 0 0 20 520 520 1520 6520 40 41 UQAAAA IHKAAA OOOOxx +7310 6951 0 2 0 10 10 310 1310 2310 7310 20 21 EVAAAA JHKAAA VVVVxx +1175 6952 1 3 5 15 75 175 1175 1175 1175 150 151 FTAAAA KHKAAA AAAAxx +9107 6953 1 3 7 7 7 107 1107 4107 9107 14 15 HMAAAA LHKAAA HHHHxx +2737 6954 1 1 7 17 37 737 737 2737 2737 74 75 HBAAAA MHKAAA OOOOxx +3437 6955 1 1 7 17 37 437 1437 3437 3437 74 75 FCAAAA NHKAAA VVVVxx +281 6956 1 1 1 1 81 281 281 281 281 162 163 VKAAAA OHKAAA AAAAxx +6676 6957 0 0 6 16 76 676 676 1676 6676 152 153 UWAAAA PHKAAA HHHHxx +145 6958 1 1 5 5 45 145 145 145 145 90 91 PFAAAA QHKAAA OOOOxx +3172 6959 0 0 2 12 72 172 1172 3172 3172 144 145 ASAAAA RHKAAA VVVVxx +4049 6960 1 1 9 9 49 49 49 4049 4049 98 99 TZAAAA SHKAAA AAAAxx +6042 6961 0 2 2 2 42 42 42 1042 6042 84 85 KYAAAA THKAAA HHHHxx +9122 6962 0 2 2 2 22 122 1122 4122 9122 44 45 WMAAAA UHKAAA OOOOxx +7244 6963 0 0 4 4 44 244 1244 2244 7244 88 89 QSAAAA VHKAAA VVVVxx +5361 6964 1 1 1 1 61 361 1361 361 5361 122 123 FYAAAA WHKAAA AAAAxx +8647 6965 1 3 7 7 47 647 647 3647 8647 94 95 PUAAAA XHKAAA HHHHxx +7956 6966 0 0 6 16 56 956 1956 2956 7956 112 113 AUAAAA YHKAAA OOOOxx +7812 6967 0 0 2 12 12 812 1812 2812 7812 24 25 MOAAAA ZHKAAA VVVVxx +570 6968 0 2 0 10 70 570 570 570 570 140 141 YVAAAA AIKAAA AAAAxx +4115 6969 1 3 5 15 15 115 115 4115 4115 30 31 HCAAAA BIKAAA HHHHxx +1856 6970 0 0 6 16 56 856 1856 1856 1856 112 113 KTAAAA CIKAAA OOOOxx +9582 6971 0 2 2 2 82 582 1582 4582 9582 164 165 OEAAAA DIKAAA VVVVxx +2025 6972 1 1 5 5 25 25 25 2025 2025 50 51 XZAAAA EIKAAA AAAAxx +986 6973 0 2 6 6 86 986 986 986 986 172 173 YLAAAA FIKAAA HHHHxx +8358 6974 0 2 8 18 58 358 358 3358 8358 116 117 MJAAAA GIKAAA OOOOxx +510 6975 0 2 0 10 10 510 510 510 510 20 21 QTAAAA HIKAAA VVVVxx +6101 6976 1 1 1 1 1 101 101 1101 6101 2 3 RAAAAA IIKAAA AAAAxx +4167 6977 1 3 7 7 67 167 167 4167 4167 134 135 HEAAAA JIKAAA HHHHxx +6139 6978 1 3 9 19 39 139 139 1139 6139 78 79 DCAAAA KIKAAA OOOOxx +6912 6979 0 0 2 12 12 912 912 1912 6912 24 25 WFAAAA LIKAAA VVVVxx +339 6980 1 3 9 19 39 339 339 339 339 78 79 BNAAAA MIKAAA AAAAxx +8759 6981 1 3 9 19 59 759 759 3759 8759 118 119 XYAAAA NIKAAA HHHHxx +246 6982 0 2 6 6 46 246 246 246 246 92 93 MJAAAA OIKAAA OOOOxx +2831 6983 1 3 1 11 31 831 831 2831 2831 62 63 XEAAAA PIKAAA VVVVxx +2327 6984 1 3 7 7 27 327 327 2327 2327 54 55 NLAAAA QIKAAA AAAAxx +7001 6985 1 1 1 1 1 1 1001 2001 7001 2 3 HJAAAA RIKAAA HHHHxx +4398 6986 0 2 8 18 98 398 398 4398 4398 196 197 ENAAAA SIKAAA OOOOxx +1495 6987 1 3 5 15 95 495 1495 1495 1495 190 191 NFAAAA TIKAAA VVVVxx +8522 6988 0 2 2 2 22 522 522 3522 8522 44 45 UPAAAA UIKAAA AAAAxx +7090 6989 0 2 0 10 90 90 1090 2090 7090 180 181 SMAAAA VIKAAA HHHHxx +8457 6990 1 1 7 17 57 457 457 3457 8457 114 115 HNAAAA WIKAAA OOOOxx +4238 6991 0 2 8 18 38 238 238 4238 4238 76 77 AHAAAA XIKAAA VVVVxx +6791 6992 1 3 1 11 91 791 791 1791 6791 182 183 FBAAAA YIKAAA AAAAxx +1342 6993 0 2 2 2 42 342 1342 1342 1342 84 85 QZAAAA ZIKAAA HHHHxx +4580 6994 0 0 0 0 80 580 580 4580 4580 160 161 EUAAAA AJKAAA OOOOxx +1475 6995 1 3 5 15 75 475 1475 1475 1475 150 
151 TEAAAA BJKAAA VVVVxx +9184 6996 0 0 4 4 84 184 1184 4184 9184 168 169 GPAAAA CJKAAA AAAAxx +1189 6997 1 1 9 9 89 189 1189 1189 1189 178 179 TTAAAA DJKAAA HHHHxx +638 6998 0 2 8 18 38 638 638 638 638 76 77 OYAAAA EJKAAA OOOOxx +5867 6999 1 3 7 7 67 867 1867 867 5867 134 135 RRAAAA FJKAAA VVVVxx +9911 7000 1 3 1 11 11 911 1911 4911 9911 22 23 FRAAAA GJKAAA AAAAxx +8147 7001 1 3 7 7 47 147 147 3147 8147 94 95 JBAAAA HJKAAA HHHHxx +4492 7002 0 0 2 12 92 492 492 4492 4492 184 185 UQAAAA IJKAAA OOOOxx +385 7003 1 1 5 5 85 385 385 385 385 170 171 VOAAAA JJKAAA VVVVxx +5235 7004 1 3 5 15 35 235 1235 235 5235 70 71 JTAAAA KJKAAA AAAAxx +4812 7005 0 0 2 12 12 812 812 4812 4812 24 25 CDAAAA LJKAAA HHHHxx +9807 7006 1 3 7 7 7 807 1807 4807 9807 14 15 FNAAAA MJKAAA OOOOxx +9588 7007 0 0 8 8 88 588 1588 4588 9588 176 177 UEAAAA NJKAAA VVVVxx +9832 7008 0 0 2 12 32 832 1832 4832 9832 64 65 EOAAAA OJKAAA AAAAxx +3757 7009 1 1 7 17 57 757 1757 3757 3757 114 115 NOAAAA PJKAAA HHHHxx +9703 7010 1 3 3 3 3 703 1703 4703 9703 6 7 FJAAAA QJKAAA OOOOxx +1022 7011 0 2 2 2 22 22 1022 1022 1022 44 45 INAAAA RJKAAA VVVVxx +5165 7012 1 1 5 5 65 165 1165 165 5165 130 131 RQAAAA SJKAAA AAAAxx +7129 7013 1 1 9 9 29 129 1129 2129 7129 58 59 FOAAAA TJKAAA HHHHxx +4164 7014 0 0 4 4 64 164 164 4164 4164 128 129 EEAAAA UJKAAA OOOOxx +7239 7015 1 3 9 19 39 239 1239 2239 7239 78 79 LSAAAA VJKAAA VVVVxx +523 7016 1 3 3 3 23 523 523 523 523 46 47 DUAAAA WJKAAA AAAAxx +4670 7017 0 2 0 10 70 670 670 4670 4670 140 141 QXAAAA XJKAAA HHHHxx +8503 7018 1 3 3 3 3 503 503 3503 8503 6 7 BPAAAA YJKAAA OOOOxx +714 7019 0 2 4 14 14 714 714 714 714 28 29 MBAAAA ZJKAAA VVVVxx +1350 7020 0 2 0 10 50 350 1350 1350 1350 100 101 YZAAAA AKKAAA AAAAxx +8318 7021 0 2 8 18 18 318 318 3318 8318 36 37 YHAAAA BKKAAA HHHHxx +1834 7022 0 2 4 14 34 834 1834 1834 1834 68 69 OSAAAA CKKAAA OOOOxx +4306 7023 0 2 6 6 6 306 306 4306 4306 12 13 QJAAAA DKKAAA VVVVxx +8543 7024 1 3 3 3 43 543 543 3543 8543 86 87 PQAAAA EKKAAA AAAAxx +9397 7025 1 1 7 17 97 397 1397 4397 9397 194 195 LXAAAA FKKAAA HHHHxx +3145 7026 1 1 5 5 45 145 1145 3145 3145 90 91 ZQAAAA GKKAAA OOOOxx +3942 7027 0 2 2 2 42 942 1942 3942 3942 84 85 QVAAAA HKKAAA VVVVxx +8583 7028 1 3 3 3 83 583 583 3583 8583 166 167 DSAAAA IKKAAA AAAAxx +8073 7029 1 1 3 13 73 73 73 3073 8073 146 147 NYAAAA JKKAAA HHHHxx +4940 7030 0 0 0 0 40 940 940 4940 4940 80 81 AIAAAA KKKAAA OOOOxx +9573 7031 1 1 3 13 73 573 1573 4573 9573 146 147 FEAAAA LKKAAA VVVVxx +5325 7032 1 1 5 5 25 325 1325 325 5325 50 51 VWAAAA MKKAAA AAAAxx +1833 7033 1 1 3 13 33 833 1833 1833 1833 66 67 NSAAAA NKKAAA HHHHxx +1337 7034 1 1 7 17 37 337 1337 1337 1337 74 75 LZAAAA OKKAAA OOOOxx +9749 7035 1 1 9 9 49 749 1749 4749 9749 98 99 ZKAAAA PKKAAA VVVVxx +7505 7036 1 1 5 5 5 505 1505 2505 7505 10 11 RCAAAA QKKAAA AAAAxx +9731 7037 1 3 1 11 31 731 1731 4731 9731 62 63 HKAAAA RKKAAA HHHHxx +4098 7038 0 2 8 18 98 98 98 4098 4098 196 197 QBAAAA SKKAAA OOOOxx +1418 7039 0 2 8 18 18 418 1418 1418 1418 36 37 OCAAAA TKKAAA VVVVxx +63 7040 1 3 3 3 63 63 63 63 63 126 127 LCAAAA UKKAAA AAAAxx +9889 7041 1 1 9 9 89 889 1889 4889 9889 178 179 JQAAAA VKKAAA HHHHxx +2871 7042 1 3 1 11 71 871 871 2871 2871 142 143 LGAAAA WKKAAA OOOOxx +1003 7043 1 3 3 3 3 3 1003 1003 1003 6 7 PMAAAA XKKAAA VVVVxx +8796 7044 0 0 6 16 96 796 796 3796 8796 192 193 IAAAAA YKKAAA AAAAxx +22 7045 0 2 2 2 22 22 22 22 22 44 45 WAAAAA ZKKAAA HHHHxx +8244 7046 0 0 4 4 44 244 244 3244 8244 88 89 CFAAAA ALKAAA OOOOxx +2282 7047 0 2 2 2 82 282 282 2282 2282 164 165 UJAAAA BLKAAA VVVVxx 
+3487 7048 1 3 7 7 87 487 1487 3487 3487 174 175 DEAAAA CLKAAA AAAAxx +8633 7049 1 1 3 13 33 633 633 3633 8633 66 67 BUAAAA DLKAAA HHHHxx +6418 7050 0 2 8 18 18 418 418 1418 6418 36 37 WMAAAA ELKAAA OOOOxx +4682 7051 0 2 2 2 82 682 682 4682 4682 164 165 CYAAAA FLKAAA VVVVxx +4103 7052 1 3 3 3 3 103 103 4103 4103 6 7 VBAAAA GLKAAA AAAAxx +6256 7053 0 0 6 16 56 256 256 1256 6256 112 113 QGAAAA HLKAAA HHHHxx +4040 7054 0 0 0 0 40 40 40 4040 4040 80 81 KZAAAA ILKAAA OOOOxx +9342 7055 0 2 2 2 42 342 1342 4342 9342 84 85 IVAAAA JLKAAA VVVVxx +9969 7056 1 1 9 9 69 969 1969 4969 9969 138 139 LTAAAA KLKAAA AAAAxx +223 7057 1 3 3 3 23 223 223 223 223 46 47 PIAAAA LLKAAA HHHHxx +4593 7058 1 1 3 13 93 593 593 4593 4593 186 187 RUAAAA MLKAAA OOOOxx +44 7059 0 0 4 4 44 44 44 44 44 88 89 SBAAAA NLKAAA VVVVxx +3513 7060 1 1 3 13 13 513 1513 3513 3513 26 27 DFAAAA OLKAAA AAAAxx +5771 7061 1 3 1 11 71 771 1771 771 5771 142 143 ZNAAAA PLKAAA HHHHxx +5083 7062 1 3 3 3 83 83 1083 83 5083 166 167 NNAAAA QLKAAA OOOOxx +3839 7063 1 3 9 19 39 839 1839 3839 3839 78 79 RRAAAA RLKAAA VVVVxx +2986 7064 0 2 6 6 86 986 986 2986 2986 172 173 WKAAAA SLKAAA AAAAxx +2200 7065 0 0 0 0 0 200 200 2200 2200 0 1 QGAAAA TLKAAA HHHHxx +197 7066 1 1 7 17 97 197 197 197 197 194 195 PHAAAA ULKAAA OOOOxx +7455 7067 1 3 5 15 55 455 1455 2455 7455 110 111 TAAAAA VLKAAA VVVVxx +1379 7068 1 3 9 19 79 379 1379 1379 1379 158 159 BBAAAA WLKAAA AAAAxx +4356 7069 0 0 6 16 56 356 356 4356 4356 112 113 OLAAAA XLKAAA HHHHxx +6888 7070 0 0 8 8 88 888 888 1888 6888 176 177 YEAAAA YLKAAA OOOOxx +9139 7071 1 3 9 19 39 139 1139 4139 9139 78 79 NNAAAA ZLKAAA VVVVxx +7682 7072 0 2 2 2 82 682 1682 2682 7682 164 165 MJAAAA AMKAAA AAAAxx +4873 7073 1 1 3 13 73 873 873 4873 4873 146 147 LFAAAA BMKAAA HHHHxx +783 7074 1 3 3 3 83 783 783 783 783 166 167 DEAAAA CMKAAA OOOOxx +6071 7075 1 3 1 11 71 71 71 1071 6071 142 143 NZAAAA DMKAAA VVVVxx +5160 7076 0 0 0 0 60 160 1160 160 5160 120 121 MQAAAA EMKAAA AAAAxx +2291 7077 1 3 1 11 91 291 291 2291 2291 182 183 DKAAAA FMKAAA HHHHxx +187 7078 1 3 7 7 87 187 187 187 187 174 175 FHAAAA GMKAAA OOOOxx +7786 7079 0 2 6 6 86 786 1786 2786 7786 172 173 MNAAAA HMKAAA VVVVxx +3432 7080 0 0 2 12 32 432 1432 3432 3432 64 65 ACAAAA IMKAAA AAAAxx +5450 7081 0 2 0 10 50 450 1450 450 5450 100 101 QBAAAA JMKAAA HHHHxx +2699 7082 1 3 9 19 99 699 699 2699 2699 198 199 VZAAAA KMKAAA OOOOxx +692 7083 0 0 2 12 92 692 692 692 692 184 185 QAAAAA LMKAAA VVVVxx +6081 7084 1 1 1 1 81 81 81 1081 6081 162 163 XZAAAA MMKAAA AAAAxx +4829 7085 1 1 9 9 29 829 829 4829 4829 58 59 TDAAAA NMKAAA HHHHxx +238 7086 0 2 8 18 38 238 238 238 238 76 77 EJAAAA OMKAAA OOOOxx +9100 7087 0 0 0 0 0 100 1100 4100 9100 0 1 AMAAAA PMKAAA VVVVxx +1968 7088 0 0 8 8 68 968 1968 1968 1968 136 137 SXAAAA QMKAAA AAAAxx +1872 7089 0 0 2 12 72 872 1872 1872 1872 144 145 AUAAAA RMKAAA HHHHxx +7051 7090 1 3 1 11 51 51 1051 2051 7051 102 103 FLAAAA SMKAAA OOOOxx +2743 7091 1 3 3 3 43 743 743 2743 2743 86 87 NBAAAA TMKAAA VVVVxx +1237 7092 1 1 7 17 37 237 1237 1237 1237 74 75 PVAAAA UMKAAA AAAAxx +3052 7093 0 0 2 12 52 52 1052 3052 3052 104 105 KNAAAA VMKAAA HHHHxx +8021 7094 1 1 1 1 21 21 21 3021 8021 42 43 NWAAAA WMKAAA OOOOxx +657 7095 1 1 7 17 57 657 657 657 657 114 115 HZAAAA XMKAAA VVVVxx +2236 7096 0 0 6 16 36 236 236 2236 2236 72 73 AIAAAA YMKAAA AAAAxx +7011 7097 1 3 1 11 11 11 1011 2011 7011 22 23 RJAAAA ZMKAAA HHHHxx +4067 7098 1 3 7 7 67 67 67 4067 4067 134 135 LAAAAA ANKAAA OOOOxx +9449 7099 1 1 9 9 49 449 1449 4449 9449 98 99 LZAAAA BNKAAA VVVVxx +7428 7100 0 0 
8 8 28 428 1428 2428 7428 56 57 SZAAAA CNKAAA AAAAxx +1272 7101 0 0 2 12 72 272 1272 1272 1272 144 145 YWAAAA DNKAAA HHHHxx +6897 7102 1 1 7 17 97 897 897 1897 6897 194 195 HFAAAA ENKAAA OOOOxx +5839 7103 1 3 9 19 39 839 1839 839 5839 78 79 PQAAAA FNKAAA VVVVxx +6835 7104 1 3 5 15 35 835 835 1835 6835 70 71 XCAAAA GNKAAA AAAAxx +1887 7105 1 3 7 7 87 887 1887 1887 1887 174 175 PUAAAA HNKAAA HHHHxx +1551 7106 1 3 1 11 51 551 1551 1551 1551 102 103 RHAAAA INKAAA OOOOxx +4667 7107 1 3 7 7 67 667 667 4667 4667 134 135 NXAAAA JNKAAA VVVVxx +9603 7108 1 3 3 3 3 603 1603 4603 9603 6 7 JFAAAA KNKAAA AAAAxx +4332 7109 0 0 2 12 32 332 332 4332 4332 64 65 QKAAAA LNKAAA HHHHxx +5681 7110 1 1 1 1 81 681 1681 681 5681 162 163 NKAAAA MNKAAA OOOOxx +8062 7111 0 2 2 2 62 62 62 3062 8062 124 125 CYAAAA NNKAAA VVVVxx +2302 7112 0 2 2 2 2 302 302 2302 2302 4 5 OKAAAA ONKAAA AAAAxx +2825 7113 1 1 5 5 25 825 825 2825 2825 50 51 REAAAA PNKAAA HHHHxx +4527 7114 1 3 7 7 27 527 527 4527 4527 54 55 DSAAAA QNKAAA OOOOxx +4230 7115 0 2 0 10 30 230 230 4230 4230 60 61 SGAAAA RNKAAA VVVVxx +3053 7116 1 1 3 13 53 53 1053 3053 3053 106 107 LNAAAA SNKAAA AAAAxx +983 7117 1 3 3 3 83 983 983 983 983 166 167 VLAAAA TNKAAA HHHHxx +9458 7118 0 2 8 18 58 458 1458 4458 9458 116 117 UZAAAA UNKAAA OOOOxx +4128 7119 0 0 8 8 28 128 128 4128 4128 56 57 UCAAAA VNKAAA VVVVxx +425 7120 1 1 5 5 25 425 425 425 425 50 51 JQAAAA WNKAAA AAAAxx +3911 7121 1 3 1 11 11 911 1911 3911 3911 22 23 LUAAAA XNKAAA HHHHxx +6607 7122 1 3 7 7 7 607 607 1607 6607 14 15 DUAAAA YNKAAA OOOOxx +5431 7123 1 3 1 11 31 431 1431 431 5431 62 63 XAAAAA ZNKAAA VVVVxx +6330 7124 0 2 0 10 30 330 330 1330 6330 60 61 MJAAAA AOKAAA AAAAxx +3592 7125 0 0 2 12 92 592 1592 3592 3592 184 185 EIAAAA BOKAAA HHHHxx +154 7126 0 2 4 14 54 154 154 154 154 108 109 YFAAAA COKAAA OOOOxx +9879 7127 1 3 9 19 79 879 1879 4879 9879 158 159 ZPAAAA DOKAAA VVVVxx +3202 7128 0 2 2 2 2 202 1202 3202 3202 4 5 ETAAAA EOKAAA AAAAxx +3056 7129 0 0 6 16 56 56 1056 3056 3056 112 113 ONAAAA FOKAAA HHHHxx +9890 7130 0 2 0 10 90 890 1890 4890 9890 180 181 KQAAAA GOKAAA OOOOxx +5840 7131 0 0 0 0 40 840 1840 840 5840 80 81 QQAAAA HOKAAA VVVVxx +9804 7132 0 0 4 4 4 804 1804 4804 9804 8 9 CNAAAA IOKAAA AAAAxx +681 7133 1 1 1 1 81 681 681 681 681 162 163 FAAAAA JOKAAA HHHHxx +3443 7134 1 3 3 3 43 443 1443 3443 3443 86 87 LCAAAA KOKAAA OOOOxx +8088 7135 0 0 8 8 88 88 88 3088 8088 176 177 CZAAAA LOKAAA VVVVxx +9447 7136 1 3 7 7 47 447 1447 4447 9447 94 95 JZAAAA MOKAAA AAAAxx +1490 7137 0 2 0 10 90 490 1490 1490 1490 180 181 IFAAAA NOKAAA HHHHxx +3684 7138 0 0 4 4 84 684 1684 3684 3684 168 169 SLAAAA OOKAAA OOOOxx +3113 7139 1 1 3 13 13 113 1113 3113 3113 26 27 TPAAAA POKAAA VVVVxx +9004 7140 0 0 4 4 4 4 1004 4004 9004 8 9 IIAAAA QOKAAA AAAAxx +7147 7141 1 3 7 7 47 147 1147 2147 7147 94 95 XOAAAA ROKAAA HHHHxx +7571 7142 1 3 1 11 71 571 1571 2571 7571 142 143 FFAAAA SOKAAA OOOOxx +5545 7143 1 1 5 5 45 545 1545 545 5545 90 91 HFAAAA TOKAAA VVVVxx +4558 7144 0 2 8 18 58 558 558 4558 4558 116 117 ITAAAA UOKAAA AAAAxx +6206 7145 0 2 6 6 6 206 206 1206 6206 12 13 SEAAAA VOKAAA HHHHxx +5695 7146 1 3 5 15 95 695 1695 695 5695 190 191 BLAAAA WOKAAA OOOOxx +9600 7147 0 0 0 0 0 600 1600 4600 9600 0 1 GFAAAA XOKAAA VVVVxx +5432 7148 0 0 2 12 32 432 1432 432 5432 64 65 YAAAAA YOKAAA AAAAxx +9299 7149 1 3 9 19 99 299 1299 4299 9299 198 199 RTAAAA ZOKAAA HHHHxx +2386 7150 0 2 6 6 86 386 386 2386 2386 172 173 UNAAAA APKAAA OOOOxx +2046 7151 0 2 6 6 46 46 46 2046 2046 92 93 SAAAAA BPKAAA VVVVxx +3293 7152 1 1 3 13 93 293 1293 
3293 3293 186 187 RWAAAA CPKAAA AAAAxx +3046 7153 0 2 6 6 46 46 1046 3046 3046 92 93 ENAAAA DPKAAA HHHHxx +214 7154 0 2 4 14 14 214 214 214 214 28 29 GIAAAA EPKAAA OOOOxx +7893 7155 1 1 3 13 93 893 1893 2893 7893 186 187 PRAAAA FPKAAA VVVVxx +891 7156 1 3 1 11 91 891 891 891 891 182 183 HIAAAA GPKAAA AAAAxx +6499 7157 1 3 9 19 99 499 499 1499 6499 198 199 ZPAAAA HPKAAA HHHHxx +5003 7158 1 3 3 3 3 3 1003 3 5003 6 7 LKAAAA IPKAAA OOOOxx +6487 7159 1 3 7 7 87 487 487 1487 6487 174 175 NPAAAA JPKAAA VVVVxx +9403 7160 1 3 3 3 3 403 1403 4403 9403 6 7 RXAAAA KPKAAA AAAAxx +945 7161 1 1 5 5 45 945 945 945 945 90 91 JKAAAA LPKAAA HHHHxx +6713 7162 1 1 3 13 13 713 713 1713 6713 26 27 FYAAAA MPKAAA OOOOxx +9928 7163 0 0 8 8 28 928 1928 4928 9928 56 57 WRAAAA NPKAAA VVVVxx +8585 7164 1 1 5 5 85 585 585 3585 8585 170 171 FSAAAA OPKAAA AAAAxx +4004 7165 0 0 4 4 4 4 4 4004 4004 8 9 AYAAAA PPKAAA HHHHxx +2528 7166 0 0 8 8 28 528 528 2528 2528 56 57 GTAAAA QPKAAA OOOOxx +3350 7167 0 2 0 10 50 350 1350 3350 3350 100 101 WYAAAA RPKAAA VVVVxx +2160 7168 0 0 0 0 60 160 160 2160 2160 120 121 CFAAAA SPKAAA AAAAxx +1521 7169 1 1 1 1 21 521 1521 1521 1521 42 43 NGAAAA TPKAAA HHHHxx +5660 7170 0 0 0 0 60 660 1660 660 5660 120 121 SJAAAA UPKAAA OOOOxx +5755 7171 1 3 5 15 55 755 1755 755 5755 110 111 JNAAAA VPKAAA VVVVxx +7614 7172 0 2 4 14 14 614 1614 2614 7614 28 29 WGAAAA WPKAAA AAAAxx +3121 7173 1 1 1 1 21 121 1121 3121 3121 42 43 BQAAAA XPKAAA HHHHxx +2735 7174 1 3 5 15 35 735 735 2735 2735 70 71 FBAAAA YPKAAA OOOOxx +7506 7175 0 2 6 6 6 506 1506 2506 7506 12 13 SCAAAA ZPKAAA VVVVxx +2693 7176 1 1 3 13 93 693 693 2693 2693 186 187 PZAAAA AQKAAA AAAAxx +2892 7177 0 0 2 12 92 892 892 2892 2892 184 185 GHAAAA BQKAAA HHHHxx +3310 7178 0 2 0 10 10 310 1310 3310 3310 20 21 IXAAAA CQKAAA OOOOxx +3484 7179 0 0 4 4 84 484 1484 3484 3484 168 169 AEAAAA DQKAAA VVVVxx +9733 7180 1 1 3 13 33 733 1733 4733 9733 66 67 JKAAAA EQKAAA AAAAxx +29 7181 1 1 9 9 29 29 29 29 29 58 59 DBAAAA FQKAAA HHHHxx +9013 7182 1 1 3 13 13 13 1013 4013 9013 26 27 RIAAAA GQKAAA OOOOxx +3847 7183 1 3 7 7 47 847 1847 3847 3847 94 95 ZRAAAA HQKAAA VVVVxx +6724 7184 0 0 4 4 24 724 724 1724 6724 48 49 QYAAAA IQKAAA AAAAxx +2559 7185 1 3 9 19 59 559 559 2559 2559 118 119 LUAAAA JQKAAA HHHHxx +5326 7186 0 2 6 6 26 326 1326 326 5326 52 53 WWAAAA KQKAAA OOOOxx +4802 7187 0 2 2 2 2 802 802 4802 4802 4 5 SCAAAA LQKAAA VVVVxx +131 7188 1 3 1 11 31 131 131 131 131 62 63 BFAAAA MQKAAA AAAAxx +1634 7189 0 2 4 14 34 634 1634 1634 1634 68 69 WKAAAA NQKAAA HHHHxx +919 7190 1 3 9 19 19 919 919 919 919 38 39 JJAAAA OQKAAA OOOOxx +9575 7191 1 3 5 15 75 575 1575 4575 9575 150 151 HEAAAA PQKAAA VVVVxx +1256 7192 0 0 6 16 56 256 1256 1256 1256 112 113 IWAAAA QQKAAA AAAAxx +9428 7193 0 0 8 8 28 428 1428 4428 9428 56 57 QYAAAA RQKAAA HHHHxx +5121 7194 1 1 1 1 21 121 1121 121 5121 42 43 ZOAAAA SQKAAA OOOOxx +6584 7195 0 0 4 4 84 584 584 1584 6584 168 169 GTAAAA TQKAAA VVVVxx +7193 7196 1 1 3 13 93 193 1193 2193 7193 186 187 RQAAAA UQKAAA AAAAxx +4047 7197 1 3 7 7 47 47 47 4047 4047 94 95 RZAAAA VQKAAA HHHHxx +104 7198 0 0 4 4 4 104 104 104 104 8 9 AEAAAA WQKAAA OOOOxx +1527 7199 1 3 7 7 27 527 1527 1527 1527 54 55 TGAAAA XQKAAA VVVVxx +3460 7200 0 0 0 0 60 460 1460 3460 3460 120 121 CDAAAA YQKAAA AAAAxx +8526 7201 0 2 6 6 26 526 526 3526 8526 52 53 YPAAAA ZQKAAA HHHHxx +8959 7202 1 3 9 19 59 959 959 3959 8959 118 119 PGAAAA ARKAAA OOOOxx +3633 7203 1 1 3 13 33 633 1633 3633 3633 66 67 TJAAAA BRKAAA VVVVxx +1799 7204 1 3 9 19 99 799 1799 1799 1799 198 199 FRAAAA CRKAAA 
AAAAxx +461 7205 1 1 1 1 61 461 461 461 461 122 123 TRAAAA DRKAAA HHHHxx +718 7206 0 2 8 18 18 718 718 718 718 36 37 QBAAAA ERKAAA OOOOxx +3219 7207 1 3 9 19 19 219 1219 3219 3219 38 39 VTAAAA FRKAAA VVVVxx +3494 7208 0 2 4 14 94 494 1494 3494 3494 188 189 KEAAAA GRKAAA AAAAxx +9402 7209 0 2 2 2 2 402 1402 4402 9402 4 5 QXAAAA HRKAAA HHHHxx +7983 7210 1 3 3 3 83 983 1983 2983 7983 166 167 BVAAAA IRKAAA OOOOxx +7919 7211 1 3 9 19 19 919 1919 2919 7919 38 39 PSAAAA JRKAAA VVVVxx +8036 7212 0 0 6 16 36 36 36 3036 8036 72 73 CXAAAA KRKAAA AAAAxx +5164 7213 0 0 4 4 64 164 1164 164 5164 128 129 QQAAAA LRKAAA HHHHxx +4160 7214 0 0 0 0 60 160 160 4160 4160 120 121 AEAAAA MRKAAA OOOOxx +5370 7215 0 2 0 10 70 370 1370 370 5370 140 141 OYAAAA NRKAAA VVVVxx +5347 7216 1 3 7 7 47 347 1347 347 5347 94 95 RXAAAA ORKAAA AAAAxx +7109 7217 1 1 9 9 9 109 1109 2109 7109 18 19 LNAAAA PRKAAA HHHHxx +4826 7218 0 2 6 6 26 826 826 4826 4826 52 53 QDAAAA QRKAAA OOOOxx +1338 7219 0 2 8 18 38 338 1338 1338 1338 76 77 MZAAAA RRKAAA VVVVxx +2711 7220 1 3 1 11 11 711 711 2711 2711 22 23 HAAAAA SRKAAA AAAAxx +6299 7221 1 3 9 19 99 299 299 1299 6299 198 199 HIAAAA TRKAAA HHHHxx +1616 7222 0 0 6 16 16 616 1616 1616 1616 32 33 EKAAAA URKAAA OOOOxx +7519 7223 1 3 9 19 19 519 1519 2519 7519 38 39 FDAAAA VRKAAA VVVVxx +1262 7224 0 2 2 2 62 262 1262 1262 1262 124 125 OWAAAA WRKAAA AAAAxx +7228 7225 0 0 8 8 28 228 1228 2228 7228 56 57 ASAAAA XRKAAA HHHHxx +7892 7226 0 0 2 12 92 892 1892 2892 7892 184 185 ORAAAA YRKAAA OOOOxx +7929 7227 1 1 9 9 29 929 1929 2929 7929 58 59 ZSAAAA ZRKAAA VVVVxx +7705 7228 1 1 5 5 5 705 1705 2705 7705 10 11 JKAAAA ASKAAA AAAAxx +3111 7229 1 3 1 11 11 111 1111 3111 3111 22 23 RPAAAA BSKAAA HHHHxx +3066 7230 0 2 6 6 66 66 1066 3066 3066 132 133 YNAAAA CSKAAA OOOOxx +9559 7231 1 3 9 19 59 559 1559 4559 9559 118 119 RDAAAA DSKAAA VVVVxx +3787 7232 1 3 7 7 87 787 1787 3787 3787 174 175 RPAAAA ESKAAA AAAAxx +8710 7233 0 2 0 10 10 710 710 3710 8710 20 21 AXAAAA FSKAAA HHHHxx +4870 7234 0 2 0 10 70 870 870 4870 4870 140 141 IFAAAA GSKAAA OOOOxx +1883 7235 1 3 3 3 83 883 1883 1883 1883 166 167 LUAAAA HSKAAA VVVVxx +9689 7236 1 1 9 9 89 689 1689 4689 9689 178 179 RIAAAA ISKAAA AAAAxx +9491 7237 1 3 1 11 91 491 1491 4491 9491 182 183 BBAAAA JSKAAA HHHHxx +2035 7238 1 3 5 15 35 35 35 2035 2035 70 71 HAAAAA KSKAAA OOOOxx +655 7239 1 3 5 15 55 655 655 655 655 110 111 FZAAAA LSKAAA VVVVxx +6305 7240 1 1 5 5 5 305 305 1305 6305 10 11 NIAAAA MSKAAA AAAAxx +9423 7241 1 3 3 3 23 423 1423 4423 9423 46 47 LYAAAA NSKAAA HHHHxx +283 7242 1 3 3 3 83 283 283 283 283 166 167 XKAAAA OSKAAA OOOOxx +2607 7243 1 3 7 7 7 607 607 2607 2607 14 15 HWAAAA PSKAAA VVVVxx +7740 7244 0 0 0 0 40 740 1740 2740 7740 80 81 SLAAAA QSKAAA AAAAxx +6956 7245 0 0 6 16 56 956 956 1956 6956 112 113 OHAAAA RSKAAA HHHHxx +884 7246 0 0 4 4 84 884 884 884 884 168 169 AIAAAA SSKAAA OOOOxx +5730 7247 0 2 0 10 30 730 1730 730 5730 60 61 KMAAAA TSKAAA VVVVxx +3438 7248 0 2 8 18 38 438 1438 3438 3438 76 77 GCAAAA USKAAA AAAAxx +3250 7249 0 2 0 10 50 250 1250 3250 3250 100 101 AVAAAA VSKAAA HHHHxx +5470 7250 0 2 0 10 70 470 1470 470 5470 140 141 KCAAAA WSKAAA OOOOxx +2037 7251 1 1 7 17 37 37 37 2037 2037 74 75 JAAAAA XSKAAA VVVVxx +6593 7252 1 1 3 13 93 593 593 1593 6593 186 187 PTAAAA YSKAAA AAAAxx +3893 7253 1 1 3 13 93 893 1893 3893 3893 186 187 TTAAAA ZSKAAA HHHHxx +3200 7254 0 0 0 0 0 200 1200 3200 3200 0 1 CTAAAA ATKAAA OOOOxx +7125 7255 1 1 5 5 25 125 1125 2125 7125 50 51 BOAAAA BTKAAA VVVVxx +2295 7256 1 3 5 15 95 295 295 2295 2295 190 191 HKAAAA 
CTKAAA AAAAxx +2056 7257 0 0 6 16 56 56 56 2056 2056 112 113 CBAAAA DTKAAA HHHHxx +2962 7258 0 2 2 2 62 962 962 2962 2962 124 125 YJAAAA ETKAAA OOOOxx +993 7259 1 1 3 13 93 993 993 993 993 186 187 FMAAAA FTKAAA VVVVxx +9127 7260 1 3 7 7 27 127 1127 4127 9127 54 55 BNAAAA GTKAAA AAAAxx +2075 7261 1 3 5 15 75 75 75 2075 2075 150 151 VBAAAA HTKAAA HHHHxx +9338 7262 0 2 8 18 38 338 1338 4338 9338 76 77 EVAAAA ITKAAA OOOOxx +8100 7263 0 0 0 0 0 100 100 3100 8100 0 1 OZAAAA JTKAAA VVVVxx +5047 7264 1 3 7 7 47 47 1047 47 5047 94 95 DMAAAA KTKAAA AAAAxx +7032 7265 0 0 2 12 32 32 1032 2032 7032 64 65 MKAAAA LTKAAA HHHHxx +6374 7266 0 2 4 14 74 374 374 1374 6374 148 149 ELAAAA MTKAAA OOOOxx +4137 7267 1 1 7 17 37 137 137 4137 4137 74 75 DDAAAA NTKAAA VVVVxx +7132 7268 0 0 2 12 32 132 1132 2132 7132 64 65 IOAAAA OTKAAA AAAAxx +3064 7269 0 0 4 4 64 64 1064 3064 3064 128 129 WNAAAA PTKAAA HHHHxx +3621 7270 1 1 1 1 21 621 1621 3621 3621 42 43 HJAAAA QTKAAA OOOOxx +6199 7271 1 3 9 19 99 199 199 1199 6199 198 199 LEAAAA RTKAAA VVVVxx +4926 7272 0 2 6 6 26 926 926 4926 4926 52 53 MHAAAA STKAAA AAAAxx +8035 7273 1 3 5 15 35 35 35 3035 8035 70 71 BXAAAA TTKAAA HHHHxx +2195 7274 1 3 5 15 95 195 195 2195 2195 190 191 LGAAAA UTKAAA OOOOxx +5366 7275 0 2 6 6 66 366 1366 366 5366 132 133 KYAAAA VTKAAA VVVVxx +3478 7276 0 2 8 18 78 478 1478 3478 3478 156 157 UDAAAA WTKAAA AAAAxx +1926 7277 0 2 6 6 26 926 1926 1926 1926 52 53 CWAAAA XTKAAA HHHHxx +7265 7278 1 1 5 5 65 265 1265 2265 7265 130 131 LTAAAA YTKAAA OOOOxx +7668 7279 0 0 8 8 68 668 1668 2668 7668 136 137 YIAAAA ZTKAAA VVVVxx +3335 7280 1 3 5 15 35 335 1335 3335 3335 70 71 HYAAAA AUKAAA AAAAxx +7660 7281 0 0 0 0 60 660 1660 2660 7660 120 121 QIAAAA BUKAAA HHHHxx +9604 7282 0 0 4 4 4 604 1604 4604 9604 8 9 KFAAAA CUKAAA OOOOxx +7301 7283 1 1 1 1 1 301 1301 2301 7301 2 3 VUAAAA DUKAAA VVVVxx +4475 7284 1 3 5 15 75 475 475 4475 4475 150 151 DQAAAA EUKAAA AAAAxx +9954 7285 0 2 4 14 54 954 1954 4954 9954 108 109 WSAAAA FUKAAA HHHHxx +5723 7286 1 3 3 3 23 723 1723 723 5723 46 47 DMAAAA GUKAAA OOOOxx +2669 7287 1 1 9 9 69 669 669 2669 2669 138 139 RYAAAA HUKAAA VVVVxx +1685 7288 1 1 5 5 85 685 1685 1685 1685 170 171 VMAAAA IUKAAA AAAAxx +2233 7289 1 1 3 13 33 233 233 2233 2233 66 67 XHAAAA JUKAAA HHHHxx +8111 7290 1 3 1 11 11 111 111 3111 8111 22 23 ZZAAAA KUKAAA OOOOxx +7685 7291 1 1 5 5 85 685 1685 2685 7685 170 171 PJAAAA LUKAAA VVVVxx +3773 7292 1 1 3 13 73 773 1773 3773 3773 146 147 DPAAAA MUKAAA AAAAxx +7172 7293 0 0 2 12 72 172 1172 2172 7172 144 145 WPAAAA NUKAAA HHHHxx +1740 7294 0 0 0 0 40 740 1740 1740 1740 80 81 YOAAAA OUKAAA OOOOxx +5416 7295 0 0 6 16 16 416 1416 416 5416 32 33 IAAAAA PUKAAA VVVVxx +1823 7296 1 3 3 3 23 823 1823 1823 1823 46 47 DSAAAA QUKAAA AAAAxx +1668 7297 0 0 8 8 68 668 1668 1668 1668 136 137 EMAAAA RUKAAA HHHHxx +1795 7298 1 3 5 15 95 795 1795 1795 1795 190 191 BRAAAA SUKAAA OOOOxx +8599 7299 1 3 9 19 99 599 599 3599 8599 198 199 TSAAAA TUKAAA VVVVxx +5542 7300 0 2 2 2 42 542 1542 542 5542 84 85 EFAAAA UUKAAA AAAAxx +5658 7301 0 2 8 18 58 658 1658 658 5658 116 117 QJAAAA VUKAAA HHHHxx +9824 7302 0 0 4 4 24 824 1824 4824 9824 48 49 WNAAAA WUKAAA OOOOxx +19 7303 1 3 9 19 19 19 19 19 19 38 39 TAAAAA XUKAAA VVVVxx +9344 7304 0 0 4 4 44 344 1344 4344 9344 88 89 KVAAAA YUKAAA AAAAxx +5900 7305 0 0 0 0 0 900 1900 900 5900 0 1 YSAAAA ZUKAAA HHHHxx +7818 7306 0 2 8 18 18 818 1818 2818 7818 36 37 SOAAAA AVKAAA OOOOxx +8377 7307 1 1 7 17 77 377 377 3377 8377 154 155 FKAAAA BVKAAA VVVVxx +6886 7308 0 2 6 6 86 886 886 1886 6886 172 173 WEAAAA 
CVKAAA AAAAxx +3201 7309 1 1 1 1 1 201 1201 3201 3201 2 3 DTAAAA DVKAAA HHHHxx +87 7310 1 3 7 7 87 87 87 87 87 174 175 JDAAAA EVKAAA OOOOxx +1089 7311 1 1 9 9 89 89 1089 1089 1089 178 179 XPAAAA FVKAAA VVVVxx +3948 7312 0 0 8 8 48 948 1948 3948 3948 96 97 WVAAAA GVKAAA AAAAxx +6383 7313 1 3 3 3 83 383 383 1383 6383 166 167 NLAAAA HVKAAA HHHHxx +837 7314 1 1 7 17 37 837 837 837 837 74 75 FGAAAA IVKAAA OOOOxx +6285 7315 1 1 5 5 85 285 285 1285 6285 170 171 THAAAA JVKAAA VVVVxx +78 7316 0 2 8 18 78 78 78 78 78 156 157 ADAAAA KVKAAA AAAAxx +4389 7317 1 1 9 9 89 389 389 4389 4389 178 179 VMAAAA LVKAAA HHHHxx +4795 7318 1 3 5 15 95 795 795 4795 4795 190 191 LCAAAA MVKAAA OOOOxx +9369 7319 1 1 9 9 69 369 1369 4369 9369 138 139 JWAAAA NVKAAA VVVVxx +69 7320 1 1 9 9 69 69 69 69 69 138 139 RCAAAA OVKAAA AAAAxx +7689 7321 1 1 9 9 89 689 1689 2689 7689 178 179 TJAAAA PVKAAA HHHHxx +5642 7322 0 2 2 2 42 642 1642 642 5642 84 85 AJAAAA QVKAAA OOOOxx +2348 7323 0 0 8 8 48 348 348 2348 2348 96 97 IMAAAA RVKAAA VVVVxx +9308 7324 0 0 8 8 8 308 1308 4308 9308 16 17 AUAAAA SVKAAA AAAAxx +9093 7325 1 1 3 13 93 93 1093 4093 9093 186 187 TLAAAA TVKAAA HHHHxx +1199 7326 1 3 9 19 99 199 1199 1199 1199 198 199 DUAAAA UVKAAA OOOOxx +307 7327 1 3 7 7 7 307 307 307 307 14 15 VLAAAA VVKAAA VVVVxx +3814 7328 0 2 4 14 14 814 1814 3814 3814 28 29 SQAAAA WVKAAA AAAAxx +8817 7329 1 1 7 17 17 817 817 3817 8817 34 35 DBAAAA XVKAAA HHHHxx +2329 7330 1 1 9 9 29 329 329 2329 2329 58 59 PLAAAA YVKAAA OOOOxx +2932 7331 0 0 2 12 32 932 932 2932 2932 64 65 UIAAAA ZVKAAA VVVVxx +1986 7332 0 2 6 6 86 986 1986 1986 1986 172 173 KYAAAA AWKAAA AAAAxx +5279 7333 1 3 9 19 79 279 1279 279 5279 158 159 BVAAAA BWKAAA HHHHxx +5357 7334 1 1 7 17 57 357 1357 357 5357 114 115 BYAAAA CWKAAA OOOOxx +6778 7335 0 2 8 18 78 778 778 1778 6778 156 157 SAAAAA DWKAAA VVVVxx +2773 7336 1 1 3 13 73 773 773 2773 2773 146 147 RCAAAA EWKAAA AAAAxx +244 7337 0 0 4 4 44 244 244 244 244 88 89 KJAAAA FWKAAA HHHHxx +6900 7338 0 0 0 0 0 900 900 1900 6900 0 1 KFAAAA GWKAAA OOOOxx +4739 7339 1 3 9 19 39 739 739 4739 4739 78 79 HAAAAA HWKAAA VVVVxx +3217 7340 1 1 7 17 17 217 1217 3217 3217 34 35 TTAAAA IWKAAA AAAAxx +7563 7341 1 3 3 3 63 563 1563 2563 7563 126 127 XEAAAA JWKAAA HHHHxx +1807 7342 1 3 7 7 7 807 1807 1807 1807 14 15 NRAAAA KWKAAA OOOOxx +4199 7343 1 3 9 19 99 199 199 4199 4199 198 199 NFAAAA LWKAAA VVVVxx +1077 7344 1 1 7 17 77 77 1077 1077 1077 154 155 LPAAAA MWKAAA AAAAxx +8348 7345 0 0 8 8 48 348 348 3348 8348 96 97 CJAAAA NWKAAA HHHHxx +841 7346 1 1 1 1 41 841 841 841 841 82 83 JGAAAA OWKAAA OOOOxx +8154 7347 0 2 4 14 54 154 154 3154 8154 108 109 QBAAAA PWKAAA VVVVxx +5261 7348 1 1 1 1 61 261 1261 261 5261 122 123 JUAAAA QWKAAA AAAAxx +1950 7349 0 2 0 10 50 950 1950 1950 1950 100 101 AXAAAA RWKAAA HHHHxx +8472 7350 0 0 2 12 72 472 472 3472 8472 144 145 WNAAAA SWKAAA OOOOxx +8745 7351 1 1 5 5 45 745 745 3745 8745 90 91 JYAAAA TWKAAA VVVVxx +8715 7352 1 3 5 15 15 715 715 3715 8715 30 31 FXAAAA UWKAAA AAAAxx +9708 7353 0 0 8 8 8 708 1708 4708 9708 16 17 KJAAAA VWKAAA HHHHxx +5860 7354 0 0 0 0 60 860 1860 860 5860 120 121 KRAAAA WWKAAA OOOOxx +9142 7355 0 2 2 2 42 142 1142 4142 9142 84 85 QNAAAA XWKAAA VVVVxx +6582 7356 0 2 2 2 82 582 582 1582 6582 164 165 ETAAAA YWKAAA AAAAxx +1255 7357 1 3 5 15 55 255 1255 1255 1255 110 111 HWAAAA ZWKAAA HHHHxx +6459 7358 1 3 9 19 59 459 459 1459 6459 118 119 LOAAAA AXKAAA OOOOxx +6327 7359 1 3 7 7 27 327 327 1327 6327 54 55 JJAAAA BXKAAA VVVVxx +4692 7360 0 0 2 12 92 692 692 4692 4692 184 185 MYAAAA CXKAAA AAAAxx +3772 
7361 0 0 2 12 72 772 1772 3772 3772 144 145 CPAAAA DXKAAA HHHHxx +4203 7362 1 3 3 3 3 203 203 4203 4203 6 7 RFAAAA EXKAAA OOOOxx +2946 7363 0 2 6 6 46 946 946 2946 2946 92 93 IJAAAA FXKAAA VVVVxx +3524 7364 0 0 4 4 24 524 1524 3524 3524 48 49 OFAAAA GXKAAA AAAAxx +8409 7365 1 1 9 9 9 409 409 3409 8409 18 19 LLAAAA HXKAAA HHHHxx +1824 7366 0 0 4 4 24 824 1824 1824 1824 48 49 ESAAAA IXKAAA OOOOxx +4637 7367 1 1 7 17 37 637 637 4637 4637 74 75 JWAAAA JXKAAA VVVVxx +589 7368 1 1 9 9 89 589 589 589 589 178 179 RWAAAA KXKAAA AAAAxx +484 7369 0 0 4 4 84 484 484 484 484 168 169 QSAAAA LXKAAA HHHHxx +8963 7370 1 3 3 3 63 963 963 3963 8963 126 127 TGAAAA MXKAAA OOOOxx +5502 7371 0 2 2 2 2 502 1502 502 5502 4 5 QDAAAA NXKAAA VVVVxx +6982 7372 0 2 2 2 82 982 982 1982 6982 164 165 OIAAAA OXKAAA AAAAxx +8029 7373 1 1 9 9 29 29 29 3029 8029 58 59 VWAAAA PXKAAA HHHHxx +4395 7374 1 3 5 15 95 395 395 4395 4395 190 191 BNAAAA QXKAAA OOOOxx +2595 7375 1 3 5 15 95 595 595 2595 2595 190 191 VVAAAA RXKAAA VVVVxx +2133 7376 1 1 3 13 33 133 133 2133 2133 66 67 BEAAAA SXKAAA AAAAxx +1414 7377 0 2 4 14 14 414 1414 1414 1414 28 29 KCAAAA TXKAAA HHHHxx +8201 7378 1 1 1 1 1 201 201 3201 8201 2 3 LDAAAA UXKAAA OOOOxx +4706 7379 0 2 6 6 6 706 706 4706 4706 12 13 AZAAAA VXKAAA VVVVxx +5310 7380 0 2 0 10 10 310 1310 310 5310 20 21 GWAAAA WXKAAA AAAAxx +7333 7381 1 1 3 13 33 333 1333 2333 7333 66 67 BWAAAA XXKAAA HHHHxx +9420 7382 0 0 0 0 20 420 1420 4420 9420 40 41 IYAAAA YXKAAA OOOOxx +1383 7383 1 3 3 3 83 383 1383 1383 1383 166 167 FBAAAA ZXKAAA VVVVxx +6225 7384 1 1 5 5 25 225 225 1225 6225 50 51 LFAAAA AYKAAA AAAAxx +2064 7385 0 0 4 4 64 64 64 2064 2064 128 129 KBAAAA BYKAAA HHHHxx +6700 7386 0 0 0 0 0 700 700 1700 6700 0 1 SXAAAA CYKAAA OOOOxx +1352 7387 0 0 2 12 52 352 1352 1352 1352 104 105 AAAAAA DYKAAA VVVVxx +4249 7388 1 1 9 9 49 249 249 4249 4249 98 99 LHAAAA EYKAAA AAAAxx +9429 7389 1 1 9 9 29 429 1429 4429 9429 58 59 RYAAAA FYKAAA HHHHxx +8090 7390 0 2 0 10 90 90 90 3090 8090 180 181 EZAAAA GYKAAA OOOOxx +5378 7391 0 2 8 18 78 378 1378 378 5378 156 157 WYAAAA HYKAAA VVVVxx +9085 7392 1 1 5 5 85 85 1085 4085 9085 170 171 LLAAAA IYKAAA AAAAxx +7468 7393 0 0 8 8 68 468 1468 2468 7468 136 137 GBAAAA JYKAAA HHHHxx +9955 7394 1 3 5 15 55 955 1955 4955 9955 110 111 XSAAAA KYKAAA OOOOxx +8692 7395 0 0 2 12 92 692 692 3692 8692 184 185 IWAAAA LYKAAA VVVVxx +1463 7396 1 3 3 3 63 463 1463 1463 1463 126 127 HEAAAA MYKAAA AAAAxx +3577 7397 1 1 7 17 77 577 1577 3577 3577 154 155 PHAAAA NYKAAA HHHHxx +5654 7398 0 2 4 14 54 654 1654 654 5654 108 109 MJAAAA OYKAAA OOOOxx +7955 7399 1 3 5 15 55 955 1955 2955 7955 110 111 ZTAAAA PYKAAA VVVVxx +4843 7400 1 3 3 3 43 843 843 4843 4843 86 87 HEAAAA QYKAAA AAAAxx +1776 7401 0 0 6 16 76 776 1776 1776 1776 152 153 IQAAAA RYKAAA HHHHxx +2223 7402 1 3 3 3 23 223 223 2223 2223 46 47 NHAAAA SYKAAA OOOOxx +8442 7403 0 2 2 2 42 442 442 3442 8442 84 85 SMAAAA TYKAAA VVVVxx +9738 7404 0 2 8 18 38 738 1738 4738 9738 76 77 OKAAAA UYKAAA AAAAxx +4867 7405 1 3 7 7 67 867 867 4867 4867 134 135 FFAAAA VYKAAA HHHHxx +2983 7406 1 3 3 3 83 983 983 2983 2983 166 167 TKAAAA WYKAAA OOOOxx +3300 7407 0 0 0 0 0 300 1300 3300 3300 0 1 YWAAAA XYKAAA VVVVxx +3815 7408 1 3 5 15 15 815 1815 3815 3815 30 31 TQAAAA YYKAAA AAAAxx +1779 7409 1 3 9 19 79 779 1779 1779 1779 158 159 LQAAAA ZYKAAA HHHHxx +1123 7410 1 3 3 3 23 123 1123 1123 1123 46 47 FRAAAA AZKAAA OOOOxx +4824 7411 0 0 4 4 24 824 824 4824 4824 48 49 ODAAAA BZKAAA VVVVxx +5407 7412 1 3 7 7 7 407 1407 407 5407 14 15 ZZAAAA CZKAAA AAAAxx +5123 7413 1 3 3 3 
23 123 1123 123 5123 46 47 BPAAAA DZKAAA HHHHxx +2515 7414 1 3 5 15 15 515 515 2515 2515 30 31 TSAAAA EZKAAA OOOOxx +4781 7415 1 1 1 1 81 781 781 4781 4781 162 163 XBAAAA FZKAAA VVVVxx +7831 7416 1 3 1 11 31 831 1831 2831 7831 62 63 FPAAAA GZKAAA AAAAxx +6946 7417 0 2 6 6 46 946 946 1946 6946 92 93 EHAAAA HZKAAA HHHHxx +1215 7418 1 3 5 15 15 215 1215 1215 1215 30 31 TUAAAA IZKAAA OOOOxx +7783 7419 1 3 3 3 83 783 1783 2783 7783 166 167 JNAAAA JZKAAA VVVVxx +4532 7420 0 0 2 12 32 532 532 4532 4532 64 65 ISAAAA KZKAAA AAAAxx +9068 7421 0 0 8 8 68 68 1068 4068 9068 136 137 UKAAAA LZKAAA HHHHxx +7030 7422 0 2 0 10 30 30 1030 2030 7030 60 61 KKAAAA MZKAAA OOOOxx +436 7423 0 0 6 16 36 436 436 436 436 72 73 UQAAAA NZKAAA VVVVxx +6549 7424 1 1 9 9 49 549 549 1549 6549 98 99 XRAAAA OZKAAA AAAAxx +3348 7425 0 0 8 8 48 348 1348 3348 3348 96 97 UYAAAA PZKAAA HHHHxx +6229 7426 1 1 9 9 29 229 229 1229 6229 58 59 PFAAAA QZKAAA OOOOxx +3933 7427 1 1 3 13 33 933 1933 3933 3933 66 67 HVAAAA RZKAAA VVVVxx +1876 7428 0 0 6 16 76 876 1876 1876 1876 152 153 EUAAAA SZKAAA AAAAxx +8920 7429 0 0 0 0 20 920 920 3920 8920 40 41 CFAAAA TZKAAA HHHHxx +7926 7430 0 2 6 6 26 926 1926 2926 7926 52 53 WSAAAA UZKAAA OOOOxx +8805 7431 1 1 5 5 5 805 805 3805 8805 10 11 RAAAAA VZKAAA VVVVxx +6729 7432 1 1 9 9 29 729 729 1729 6729 58 59 VYAAAA WZKAAA AAAAxx +7397 7433 1 1 7 17 97 397 1397 2397 7397 194 195 NYAAAA XZKAAA HHHHxx +9303 7434 1 3 3 3 3 303 1303 4303 9303 6 7 VTAAAA YZKAAA OOOOxx +4255 7435 1 3 5 15 55 255 255 4255 4255 110 111 RHAAAA ZZKAAA VVVVxx +7229 7436 1 1 9 9 29 229 1229 2229 7229 58 59 BSAAAA AALAAA AAAAxx +854 7437 0 2 4 14 54 854 854 854 854 108 109 WGAAAA BALAAA HHHHxx +6723 7438 1 3 3 3 23 723 723 1723 6723 46 47 PYAAAA CALAAA OOOOxx +9597 7439 1 1 7 17 97 597 1597 4597 9597 194 195 DFAAAA DALAAA VVVVxx +6532 7440 0 0 2 12 32 532 532 1532 6532 64 65 GRAAAA EALAAA AAAAxx +2910 7441 0 2 0 10 10 910 910 2910 2910 20 21 YHAAAA FALAAA HHHHxx +6717 7442 1 1 7 17 17 717 717 1717 6717 34 35 JYAAAA GALAAA OOOOxx +1790 7443 0 2 0 10 90 790 1790 1790 1790 180 181 WQAAAA HALAAA VVVVxx +3761 7444 1 1 1 1 61 761 1761 3761 3761 122 123 ROAAAA IALAAA AAAAxx +1565 7445 1 1 5 5 65 565 1565 1565 1565 130 131 FIAAAA JALAAA HHHHxx +6205 7446 1 1 5 5 5 205 205 1205 6205 10 11 REAAAA KALAAA OOOOxx +2726 7447 0 2 6 6 26 726 726 2726 2726 52 53 WAAAAA LALAAA VVVVxx +799 7448 1 3 9 19 99 799 799 799 799 198 199 TEAAAA MALAAA AAAAxx +3540 7449 0 0 0 0 40 540 1540 3540 3540 80 81 EGAAAA NALAAA HHHHxx +5878 7450 0 2 8 18 78 878 1878 878 5878 156 157 CSAAAA OALAAA OOOOxx +2542 7451 0 2 2 2 42 542 542 2542 2542 84 85 UTAAAA PALAAA VVVVxx +4888 7452 0 0 8 8 88 888 888 4888 4888 176 177 AGAAAA QALAAA AAAAxx +5290 7453 0 2 0 10 90 290 1290 290 5290 180 181 MVAAAA RALAAA HHHHxx +7995 7454 1 3 5 15 95 995 1995 2995 7995 190 191 NVAAAA SALAAA OOOOxx +3519 7455 1 3 9 19 19 519 1519 3519 3519 38 39 JFAAAA TALAAA VVVVxx +3571 7456 1 3 1 11 71 571 1571 3571 3571 142 143 JHAAAA UALAAA AAAAxx +7854 7457 0 2 4 14 54 854 1854 2854 7854 108 109 CQAAAA VALAAA HHHHxx +5184 7458 0 0 4 4 84 184 1184 184 5184 168 169 KRAAAA WALAAA OOOOxx +3498 7459 0 2 8 18 98 498 1498 3498 3498 196 197 OEAAAA XALAAA VVVVxx +1264 7460 0 0 4 4 64 264 1264 1264 1264 128 129 QWAAAA YALAAA AAAAxx +3159 7461 1 3 9 19 59 159 1159 3159 3159 118 119 NRAAAA ZALAAA HHHHxx +5480 7462 0 0 0 0 80 480 1480 480 5480 160 161 UCAAAA ABLAAA OOOOxx +1706 7463 0 2 6 6 6 706 1706 1706 1706 12 13 QNAAAA BBLAAA VVVVxx +4540 7464 0 0 0 0 40 540 540 4540 4540 80 81 QSAAAA CBLAAA AAAAxx +2799 7465 
1 3 9 19 99 799 799 2799 2799 198 199 RDAAAA DBLAAA HHHHxx +7389 7466 1 1 9 9 89 389 1389 2389 7389 178 179 FYAAAA EBLAAA OOOOxx +5565 7467 1 1 5 5 65 565 1565 565 5565 130 131 BGAAAA FBLAAA VVVVxx +3896 7468 0 0 6 16 96 896 1896 3896 3896 192 193 WTAAAA GBLAAA AAAAxx +2100 7469 0 0 0 0 0 100 100 2100 2100 0 1 UCAAAA HBLAAA HHHHxx +3507 7470 1 3 7 7 7 507 1507 3507 3507 14 15 XEAAAA IBLAAA OOOOxx +7971 7471 1 3 1 11 71 971 1971 2971 7971 142 143 PUAAAA JBLAAA VVVVxx +2312 7472 0 0 2 12 12 312 312 2312 2312 24 25 YKAAAA KBLAAA AAAAxx +2494 7473 0 2 4 14 94 494 494 2494 2494 188 189 YRAAAA LBLAAA HHHHxx +2474 7474 0 2 4 14 74 474 474 2474 2474 148 149 ERAAAA MBLAAA OOOOxx +3136 7475 0 0 6 16 36 136 1136 3136 3136 72 73 QQAAAA NBLAAA VVVVxx +7242 7476 0 2 2 2 42 242 1242 2242 7242 84 85 OSAAAA OBLAAA AAAAxx +9430 7477 0 2 0 10 30 430 1430 4430 9430 60 61 SYAAAA PBLAAA HHHHxx +1052 7478 0 0 2 12 52 52 1052 1052 1052 104 105 MOAAAA QBLAAA OOOOxx +4172 7479 0 0 2 12 72 172 172 4172 4172 144 145 MEAAAA RBLAAA VVVVxx +970 7480 0 2 0 10 70 970 970 970 970 140 141 ILAAAA SBLAAA AAAAxx +882 7481 0 2 2 2 82 882 882 882 882 164 165 YHAAAA TBLAAA HHHHxx +9799 7482 1 3 9 19 99 799 1799 4799 9799 198 199 XMAAAA UBLAAA OOOOxx +5850 7483 0 2 0 10 50 850 1850 850 5850 100 101 ARAAAA VBLAAA VVVVxx +9473 7484 1 1 3 13 73 473 1473 4473 9473 146 147 JAAAAA WBLAAA AAAAxx +8635 7485 1 3 5 15 35 635 635 3635 8635 70 71 DUAAAA XBLAAA HHHHxx +2349 7486 1 1 9 9 49 349 349 2349 2349 98 99 JMAAAA YBLAAA OOOOxx +2270 7487 0 2 0 10 70 270 270 2270 2270 140 141 IJAAAA ZBLAAA VVVVxx +7887 7488 1 3 7 7 87 887 1887 2887 7887 174 175 JRAAAA ACLAAA AAAAxx +3091 7489 1 3 1 11 91 91 1091 3091 3091 182 183 XOAAAA BCLAAA HHHHxx +3728 7490 0 0 8 8 28 728 1728 3728 3728 56 57 KNAAAA CCLAAA OOOOxx +3658 7491 0 2 8 18 58 658 1658 3658 3658 116 117 SKAAAA DCLAAA VVVVxx +5975 7492 1 3 5 15 75 975 1975 975 5975 150 151 VVAAAA ECLAAA AAAAxx +332 7493 0 0 2 12 32 332 332 332 332 64 65 UMAAAA FCLAAA HHHHxx +7990 7494 0 2 0 10 90 990 1990 2990 7990 180 181 IVAAAA GCLAAA OOOOxx +8688 7495 0 0 8 8 88 688 688 3688 8688 176 177 EWAAAA HCLAAA VVVVxx +9601 7496 1 1 1 1 1 601 1601 4601 9601 2 3 HFAAAA ICLAAA AAAAxx +8401 7497 1 1 1 1 1 401 401 3401 8401 2 3 DLAAAA JCLAAA HHHHxx +8093 7498 1 1 3 13 93 93 93 3093 8093 186 187 HZAAAA KCLAAA OOOOxx +4278 7499 0 2 8 18 78 278 278 4278 4278 156 157 OIAAAA LCLAAA VVVVxx +5467 7500 1 3 7 7 67 467 1467 467 5467 134 135 HCAAAA MCLAAA AAAAxx +3137 7501 1 1 7 17 37 137 1137 3137 3137 74 75 RQAAAA NCLAAA HHHHxx +204 7502 0 0 4 4 4 204 204 204 204 8 9 WHAAAA OCLAAA OOOOxx +8224 7503 0 0 4 4 24 224 224 3224 8224 48 49 IEAAAA PCLAAA VVVVxx +2944 7504 0 0 4 4 44 944 944 2944 2944 88 89 GJAAAA QCLAAA AAAAxx +7593 7505 1 1 3 13 93 593 1593 2593 7593 186 187 BGAAAA RCLAAA HHHHxx +814 7506 0 2 4 14 14 814 814 814 814 28 29 IFAAAA SCLAAA OOOOxx +8047 7507 1 3 7 7 47 47 47 3047 8047 94 95 NXAAAA TCLAAA VVVVxx +7802 7508 0 2 2 2 2 802 1802 2802 7802 4 5 COAAAA UCLAAA AAAAxx +901 7509 1 1 1 1 1 901 901 901 901 2 3 RIAAAA VCLAAA HHHHxx +6168 7510 0 0 8 8 68 168 168 1168 6168 136 137 GDAAAA WCLAAA OOOOxx +2950 7511 0 2 0 10 50 950 950 2950 2950 100 101 MJAAAA XCLAAA VVVVxx +5393 7512 1 1 3 13 93 393 1393 393 5393 186 187 LZAAAA YCLAAA AAAAxx +3585 7513 1 1 5 5 85 585 1585 3585 3585 170 171 XHAAAA ZCLAAA HHHHxx +9392 7514 0 0 2 12 92 392 1392 4392 9392 184 185 GXAAAA ADLAAA OOOOxx +8314 7515 0 2 4 14 14 314 314 3314 8314 28 29 UHAAAA BDLAAA VVVVxx +9972 7516 0 0 2 12 72 972 1972 4972 9972 144 145 OTAAAA CDLAAA AAAAxx +9130 
7517 0 2 0 10 30 130 1130 4130 9130 60 61 ENAAAA DDLAAA HHHHxx +975 7518 1 3 5 15 75 975 975 975 975 150 151 NLAAAA EDLAAA OOOOxx +5720 7519 0 0 0 0 20 720 1720 720 5720 40 41 AMAAAA FDLAAA VVVVxx +3769 7520 1 1 9 9 69 769 1769 3769 3769 138 139 ZOAAAA GDLAAA AAAAxx +5303 7521 1 3 3 3 3 303 1303 303 5303 6 7 ZVAAAA HDLAAA HHHHxx +6564 7522 0 0 4 4 64 564 564 1564 6564 128 129 MSAAAA IDLAAA OOOOxx +7855 7523 1 3 5 15 55 855 1855 2855 7855 110 111 DQAAAA JDLAAA VVVVxx +8153 7524 1 1 3 13 53 153 153 3153 8153 106 107 PBAAAA KDLAAA AAAAxx +2292 7525 0 0 2 12 92 292 292 2292 2292 184 185 EKAAAA LDLAAA HHHHxx +3156 7526 0 0 6 16 56 156 1156 3156 3156 112 113 KRAAAA MDLAAA OOOOxx +6580 7527 0 0 0 0 80 580 580 1580 6580 160 161 CTAAAA NDLAAA VVVVxx +5324 7528 0 0 4 4 24 324 1324 324 5324 48 49 UWAAAA ODLAAA AAAAxx +8871 7529 1 3 1 11 71 871 871 3871 8871 142 143 FDAAAA PDLAAA HHHHxx +2543 7530 1 3 3 3 43 543 543 2543 2543 86 87 VTAAAA QDLAAA OOOOxx +7857 7531 1 1 7 17 57 857 1857 2857 7857 114 115 FQAAAA RDLAAA VVVVxx +4084 7532 0 0 4 4 84 84 84 4084 4084 168 169 CBAAAA SDLAAA AAAAxx +9887 7533 1 3 7 7 87 887 1887 4887 9887 174 175 HQAAAA TDLAAA HHHHxx +6940 7534 0 0 0 0 40 940 940 1940 6940 80 81 YGAAAA UDLAAA OOOOxx +3415 7535 1 3 5 15 15 415 1415 3415 3415 30 31 JBAAAA VDLAAA VVVVxx +5012 7536 0 0 2 12 12 12 1012 12 5012 24 25 UKAAAA WDLAAA AAAAxx +3187 7537 1 3 7 7 87 187 1187 3187 3187 174 175 PSAAAA XDLAAA HHHHxx +8556 7538 0 0 6 16 56 556 556 3556 8556 112 113 CRAAAA YDLAAA OOOOxx +7966 7539 0 2 6 6 66 966 1966 2966 7966 132 133 KUAAAA ZDLAAA VVVVxx +7481 7540 1 1 1 1 81 481 1481 2481 7481 162 163 TBAAAA AELAAA AAAAxx +8524 7541 0 0 4 4 24 524 524 3524 8524 48 49 WPAAAA BELAAA HHHHxx +3021 7542 1 1 1 1 21 21 1021 3021 3021 42 43 FMAAAA CELAAA OOOOxx +6045 7543 1 1 5 5 45 45 45 1045 6045 90 91 NYAAAA DELAAA VVVVxx +8022 7544 0 2 2 2 22 22 22 3022 8022 44 45 OWAAAA EELAAA AAAAxx +3626 7545 0 2 6 6 26 626 1626 3626 3626 52 53 MJAAAA FELAAA HHHHxx +1030 7546 0 2 0 10 30 30 1030 1030 1030 60 61 QNAAAA GELAAA OOOOxx +8903 7547 1 3 3 3 3 903 903 3903 8903 6 7 LEAAAA HELAAA VVVVxx +7488 7548 0 0 8 8 88 488 1488 2488 7488 176 177 ACAAAA IELAAA AAAAxx +9293 7549 1 1 3 13 93 293 1293 4293 9293 186 187 LTAAAA JELAAA HHHHxx +4586 7550 0 2 6 6 86 586 586 4586 4586 172 173 KUAAAA KELAAA OOOOxx +9282 7551 0 2 2 2 82 282 1282 4282 9282 164 165 ATAAAA LELAAA VVVVxx +1948 7552 0 0 8 8 48 948 1948 1948 1948 96 97 YWAAAA MELAAA AAAAxx +2534 7553 0 2 4 14 34 534 534 2534 2534 68 69 MTAAAA NELAAA HHHHxx +1150 7554 0 2 0 10 50 150 1150 1150 1150 100 101 GSAAAA OELAAA OOOOxx +4931 7555 1 3 1 11 31 931 931 4931 4931 62 63 RHAAAA PELAAA VVVVxx +2866 7556 0 2 6 6 66 866 866 2866 2866 132 133 GGAAAA QELAAA AAAAxx +6172 7557 0 0 2 12 72 172 172 1172 6172 144 145 KDAAAA RELAAA HHHHxx +4819 7558 1 3 9 19 19 819 819 4819 4819 38 39 JDAAAA SELAAA OOOOxx +569 7559 1 1 9 9 69 569 569 569 569 138 139 XVAAAA TELAAA VVVVxx +1146 7560 0 2 6 6 46 146 1146 1146 1146 92 93 CSAAAA UELAAA AAAAxx +3062 7561 0 2 2 2 62 62 1062 3062 3062 124 125 UNAAAA VELAAA HHHHxx +7690 7562 0 2 0 10 90 690 1690 2690 7690 180 181 UJAAAA WELAAA OOOOxx +8611 7563 1 3 1 11 11 611 611 3611 8611 22 23 FTAAAA XELAAA VVVVxx +1142 7564 0 2 2 2 42 142 1142 1142 1142 84 85 YRAAAA YELAAA AAAAxx +1193 7565 1 1 3 13 93 193 1193 1193 1193 186 187 XTAAAA ZELAAA HHHHxx +2507 7566 1 3 7 7 7 507 507 2507 2507 14 15 LSAAAA AFLAAA OOOOxx +1043 7567 1 3 3 3 43 43 1043 1043 1043 86 87 DOAAAA BFLAAA VVVVxx +7472 7568 0 0 2 12 72 472 1472 2472 7472 144 145 KBAAAA CFLAAA 
AAAAxx +1817 7569 1 1 7 17 17 817 1817 1817 1817 34 35 XRAAAA DFLAAA HHHHxx +3868 7570 0 0 8 8 68 868 1868 3868 3868 136 137 USAAAA EFLAAA OOOOxx +9031 7571 1 3 1 11 31 31 1031 4031 9031 62 63 JJAAAA FFLAAA VVVVxx +7254 7572 0 2 4 14 54 254 1254 2254 7254 108 109 ATAAAA GFLAAA AAAAxx +5030 7573 0 2 0 10 30 30 1030 30 5030 60 61 MLAAAA HFLAAA HHHHxx +6594 7574 0 2 4 14 94 594 594 1594 6594 188 189 QTAAAA IFLAAA OOOOxx +6862 7575 0 2 2 2 62 862 862 1862 6862 124 125 YDAAAA JFLAAA VVVVxx +1994 7576 0 2 4 14 94 994 1994 1994 1994 188 189 SYAAAA KFLAAA AAAAxx +9017 7577 1 1 7 17 17 17 1017 4017 9017 34 35 VIAAAA LFLAAA HHHHxx +5716 7578 0 0 6 16 16 716 1716 716 5716 32 33 WLAAAA MFLAAA OOOOxx +1900 7579 0 0 0 0 0 900 1900 1900 1900 0 1 CVAAAA NFLAAA VVVVxx +120 7580 0 0 0 0 20 120 120 120 120 40 41 QEAAAA OFLAAA AAAAxx +9003 7581 1 3 3 3 3 3 1003 4003 9003 6 7 HIAAAA PFLAAA HHHHxx +4178 7582 0 2 8 18 78 178 178 4178 4178 156 157 SEAAAA QFLAAA OOOOxx +8777 7583 1 1 7 17 77 777 777 3777 8777 154 155 PZAAAA RFLAAA VVVVxx +3653 7584 1 1 3 13 53 653 1653 3653 3653 106 107 NKAAAA SFLAAA AAAAxx +1137 7585 1 1 7 17 37 137 1137 1137 1137 74 75 TRAAAA TFLAAA HHHHxx +6362 7586 0 2 2 2 62 362 362 1362 6362 124 125 SKAAAA UFLAAA OOOOxx +8537 7587 1 1 7 17 37 537 537 3537 8537 74 75 JQAAAA VFLAAA VVVVxx +1590 7588 0 2 0 10 90 590 1590 1590 1590 180 181 EJAAAA WFLAAA AAAAxx +374 7589 0 2 4 14 74 374 374 374 374 148 149 KOAAAA XFLAAA HHHHxx +2597 7590 1 1 7 17 97 597 597 2597 2597 194 195 XVAAAA YFLAAA OOOOxx +8071 7591 1 3 1 11 71 71 71 3071 8071 142 143 LYAAAA ZFLAAA VVVVxx +9009 7592 1 1 9 9 9 9 1009 4009 9009 18 19 NIAAAA AGLAAA AAAAxx +1978 7593 0 2 8 18 78 978 1978 1978 1978 156 157 CYAAAA BGLAAA HHHHxx +1541 7594 1 1 1 1 41 541 1541 1541 1541 82 83 HHAAAA CGLAAA OOOOxx +4998 7595 0 2 8 18 98 998 998 4998 4998 196 197 GKAAAA DGLAAA VVVVxx +1649 7596 1 1 9 9 49 649 1649 1649 1649 98 99 LLAAAA EGLAAA AAAAxx +5426 7597 0 2 6 6 26 426 1426 426 5426 52 53 SAAAAA FGLAAA HHHHxx +1492 7598 0 0 2 12 92 492 1492 1492 1492 184 185 KFAAAA GGLAAA OOOOxx +9622 7599 0 2 2 2 22 622 1622 4622 9622 44 45 CGAAAA HGLAAA VVVVxx +701 7600 1 1 1 1 1 701 701 701 701 2 3 ZAAAAA IGLAAA AAAAxx +2781 7601 1 1 1 1 81 781 781 2781 2781 162 163 ZCAAAA JGLAAA HHHHxx +3982 7602 0 2 2 2 82 982 1982 3982 3982 164 165 EXAAAA KGLAAA OOOOxx +7259 7603 1 3 9 19 59 259 1259 2259 7259 118 119 FTAAAA LGLAAA VVVVxx +9868 7604 0 0 8 8 68 868 1868 4868 9868 136 137 OPAAAA MGLAAA AAAAxx +564 7605 0 0 4 4 64 564 564 564 564 128 129 SVAAAA NGLAAA HHHHxx +6315 7606 1 3 5 15 15 315 315 1315 6315 30 31 XIAAAA OGLAAA OOOOxx +9092 7607 0 0 2 12 92 92 1092 4092 9092 184 185 SLAAAA PGLAAA VVVVxx +8237 7608 1 1 7 17 37 237 237 3237 8237 74 75 VEAAAA QGLAAA AAAAxx +1513 7609 1 1 3 13 13 513 1513 1513 1513 26 27 FGAAAA RGLAAA HHHHxx +1922 7610 0 2 2 2 22 922 1922 1922 1922 44 45 YVAAAA SGLAAA OOOOxx +5396 7611 0 0 6 16 96 396 1396 396 5396 192 193 OZAAAA TGLAAA VVVVxx +2485 7612 1 1 5 5 85 485 485 2485 2485 170 171 PRAAAA UGLAAA AAAAxx +5774 7613 0 2 4 14 74 774 1774 774 5774 148 149 COAAAA VGLAAA HHHHxx +3983 7614 1 3 3 3 83 983 1983 3983 3983 166 167 FXAAAA WGLAAA OOOOxx +221 7615 1 1 1 1 21 221 221 221 221 42 43 NIAAAA XGLAAA VVVVxx +8662 7616 0 2 2 2 62 662 662 3662 8662 124 125 EVAAAA YGLAAA AAAAxx +2456 7617 0 0 6 16 56 456 456 2456 2456 112 113 MQAAAA ZGLAAA HHHHxx +9736 7618 0 0 6 16 36 736 1736 4736 9736 72 73 MKAAAA AHLAAA OOOOxx +8936 7619 0 0 6 16 36 936 936 3936 8936 72 73 SFAAAA BHLAAA VVVVxx +5395 7620 1 3 5 15 95 395 1395 395 5395 190 191 NZAAAA 
CHLAAA AAAAxx +9523 7621 1 3 3 3 23 523 1523 4523 9523 46 47 HCAAAA DHLAAA HHHHxx +6980 7622 0 0 0 0 80 980 980 1980 6980 160 161 MIAAAA EHLAAA OOOOxx +2091 7623 1 3 1 11 91 91 91 2091 2091 182 183 LCAAAA FHLAAA VVVVxx +6807 7624 1 3 7 7 7 807 807 1807 6807 14 15 VBAAAA GHLAAA AAAAxx +8818 7625 0 2 8 18 18 818 818 3818 8818 36 37 EBAAAA HHLAAA HHHHxx +5298 7626 0 2 8 18 98 298 1298 298 5298 196 197 UVAAAA IHLAAA OOOOxx +1726 7627 0 2 6 6 26 726 1726 1726 1726 52 53 KOAAAA JHLAAA VVVVxx +3878 7628 0 2 8 18 78 878 1878 3878 3878 156 157 ETAAAA KHLAAA AAAAxx +8700 7629 0 0 0 0 0 700 700 3700 8700 0 1 QWAAAA LHLAAA HHHHxx +5201 7630 1 1 1 1 1 201 1201 201 5201 2 3 BSAAAA MHLAAA OOOOxx +3936 7631 0 0 6 16 36 936 1936 3936 3936 72 73 KVAAAA NHLAAA VVVVxx +776 7632 0 0 6 16 76 776 776 776 776 152 153 WDAAAA OHLAAA AAAAxx +5302 7633 0 2 2 2 2 302 1302 302 5302 4 5 YVAAAA PHLAAA HHHHxx +3595 7634 1 3 5 15 95 595 1595 3595 3595 190 191 HIAAAA QHLAAA OOOOxx +9061 7635 1 1 1 1 61 61 1061 4061 9061 122 123 NKAAAA RHLAAA VVVVxx +6261 7636 1 1 1 1 61 261 261 1261 6261 122 123 VGAAAA SHLAAA AAAAxx +8878 7637 0 2 8 18 78 878 878 3878 8878 156 157 MDAAAA THLAAA HHHHxx +3312 7638 0 0 2 12 12 312 1312 3312 3312 24 25 KXAAAA UHLAAA OOOOxx +9422 7639 0 2 2 2 22 422 1422 4422 9422 44 45 KYAAAA VHLAAA VVVVxx +7321 7640 1 1 1 1 21 321 1321 2321 7321 42 43 PVAAAA WHLAAA AAAAxx +3813 7641 1 1 3 13 13 813 1813 3813 3813 26 27 RQAAAA XHLAAA HHHHxx +5848 7642 0 0 8 8 48 848 1848 848 5848 96 97 YQAAAA YHLAAA OOOOxx +3535 7643 1 3 5 15 35 535 1535 3535 3535 70 71 ZFAAAA ZHLAAA VVVVxx +1040 7644 0 0 0 0 40 40 1040 1040 1040 80 81 AOAAAA AILAAA AAAAxx +8572 7645 0 0 2 12 72 572 572 3572 8572 144 145 SRAAAA BILAAA HHHHxx +5435 7646 1 3 5 15 35 435 1435 435 5435 70 71 BBAAAA CILAAA OOOOxx +8199 7647 1 3 9 19 99 199 199 3199 8199 198 199 JDAAAA DILAAA VVVVxx +8775 7648 1 3 5 15 75 775 775 3775 8775 150 151 NZAAAA EILAAA AAAAxx +7722 7649 0 2 2 2 22 722 1722 2722 7722 44 45 ALAAAA FILAAA HHHHxx +3549 7650 1 1 9 9 49 549 1549 3549 3549 98 99 NGAAAA GILAAA OOOOxx +2578 7651 0 2 8 18 78 578 578 2578 2578 156 157 EVAAAA HILAAA VVVVxx +1695 7652 1 3 5 15 95 695 1695 1695 1695 190 191 FNAAAA IILAAA AAAAxx +1902 7653 0 2 2 2 2 902 1902 1902 1902 4 5 EVAAAA JILAAA HHHHxx +6058 7654 0 2 8 18 58 58 58 1058 6058 116 117 AZAAAA KILAAA OOOOxx +6591 7655 1 3 1 11 91 591 591 1591 6591 182 183 NTAAAA LILAAA VVVVxx +7962 7656 0 2 2 2 62 962 1962 2962 7962 124 125 GUAAAA MILAAA AAAAxx +5612 7657 0 0 2 12 12 612 1612 612 5612 24 25 WHAAAA NILAAA HHHHxx +3341 7658 1 1 1 1 41 341 1341 3341 3341 82 83 NYAAAA OILAAA OOOOxx +5460 7659 0 0 0 0 60 460 1460 460 5460 120 121 ACAAAA PILAAA VVVVxx +2368 7660 0 0 8 8 68 368 368 2368 2368 136 137 CNAAAA QILAAA AAAAxx +8646 7661 0 2 6 6 46 646 646 3646 8646 92 93 OUAAAA RILAAA HHHHxx +4987 7662 1 3 7 7 87 987 987 4987 4987 174 175 VJAAAA SILAAA OOOOxx +9018 7663 0 2 8 18 18 18 1018 4018 9018 36 37 WIAAAA TILAAA VVVVxx +8685 7664 1 1 5 5 85 685 685 3685 8685 170 171 BWAAAA UILAAA AAAAxx +694 7665 0 2 4 14 94 694 694 694 694 188 189 SAAAAA VILAAA HHHHxx +2012 7666 0 0 2 12 12 12 12 2012 2012 24 25 KZAAAA WILAAA OOOOxx +2417 7667 1 1 7 17 17 417 417 2417 2417 34 35 ZOAAAA XILAAA VVVVxx +4022 7668 0 2 2 2 22 22 22 4022 4022 44 45 SYAAAA YILAAA AAAAxx +5935 7669 1 3 5 15 35 935 1935 935 5935 70 71 HUAAAA ZILAAA HHHHxx +1656 7670 0 0 6 16 56 656 1656 1656 1656 112 113 SLAAAA AJLAAA OOOOxx +6195 7671 1 3 5 15 95 195 195 1195 6195 190 191 HEAAAA BJLAAA VVVVxx +3057 7672 1 1 7 17 57 57 1057 3057 3057 114 115 PNAAAA 
CJLAAA AAAAxx +2852 7673 0 0 2 12 52 852 852 2852 2852 104 105 SFAAAA DJLAAA HHHHxx +4634 7674 0 2 4 14 34 634 634 4634 4634 68 69 GWAAAA EJLAAA OOOOxx +1689 7675 1 1 9 9 89 689 1689 1689 1689 178 179 ZMAAAA FJLAAA VVVVxx +4102 7676 0 2 2 2 2 102 102 4102 4102 4 5 UBAAAA GJLAAA AAAAxx +3287 7677 1 3 7 7 87 287 1287 3287 3287 174 175 LWAAAA HJLAAA HHHHxx +5246 7678 0 2 6 6 46 246 1246 246 5246 92 93 UTAAAA IJLAAA OOOOxx +7450 7679 0 2 0 10 50 450 1450 2450 7450 100 101 OAAAAA JJLAAA VVVVxx +6548 7680 0 0 8 8 48 548 548 1548 6548 96 97 WRAAAA KJLAAA AAAAxx +379 7681 1 3 9 19 79 379 379 379 379 158 159 POAAAA LJLAAA HHHHxx +7435 7682 1 3 5 15 35 435 1435 2435 7435 70 71 ZZAAAA MJLAAA OOOOxx +2041 7683 1 1 1 1 41 41 41 2041 2041 82 83 NAAAAA NJLAAA VVVVxx +8462 7684 0 2 2 2 62 462 462 3462 8462 124 125 MNAAAA OJLAAA AAAAxx +9076 7685 0 0 6 16 76 76 1076 4076 9076 152 153 CLAAAA PJLAAA HHHHxx +761 7686 1 1 1 1 61 761 761 761 761 122 123 HDAAAA QJLAAA OOOOxx +795 7687 1 3 5 15 95 795 795 795 795 190 191 PEAAAA RJLAAA VVVVxx +1671 7688 1 3 1 11 71 671 1671 1671 1671 142 143 HMAAAA SJLAAA AAAAxx +695 7689 1 3 5 15 95 695 695 695 695 190 191 TAAAAA TJLAAA HHHHxx +4981 7690 1 1 1 1 81 981 981 4981 4981 162 163 PJAAAA UJLAAA OOOOxx +1211 7691 1 3 1 11 11 211 1211 1211 1211 22 23 PUAAAA VJLAAA VVVVxx +5914 7692 0 2 4 14 14 914 1914 914 5914 28 29 MTAAAA WJLAAA AAAAxx +9356 7693 0 0 6 16 56 356 1356 4356 9356 112 113 WVAAAA XJLAAA HHHHxx +1500 7694 0 0 0 0 0 500 1500 1500 1500 0 1 SFAAAA YJLAAA OOOOxx +3353 7695 1 1 3 13 53 353 1353 3353 3353 106 107 ZYAAAA ZJLAAA VVVVxx +1060 7696 0 0 0 0 60 60 1060 1060 1060 120 121 UOAAAA AKLAAA AAAAxx +7910 7697 0 2 0 10 10 910 1910 2910 7910 20 21 GSAAAA BKLAAA HHHHxx +1329 7698 1 1 9 9 29 329 1329 1329 1329 58 59 DZAAAA CKLAAA OOOOxx +6011 7699 1 3 1 11 11 11 11 1011 6011 22 23 FXAAAA DKLAAA VVVVxx +7146 7700 0 2 6 6 46 146 1146 2146 7146 92 93 WOAAAA EKLAAA AAAAxx +4602 7701 0 2 2 2 2 602 602 4602 4602 4 5 AVAAAA FKLAAA HHHHxx +6751 7702 1 3 1 11 51 751 751 1751 6751 102 103 RZAAAA GKLAAA OOOOxx +2666 7703 0 2 6 6 66 666 666 2666 2666 132 133 OYAAAA HKLAAA VVVVxx +2785 7704 1 1 5 5 85 785 785 2785 2785 170 171 DDAAAA IKLAAA AAAAxx +5851 7705 1 3 1 11 51 851 1851 851 5851 102 103 BRAAAA JKLAAA HHHHxx +2435 7706 1 3 5 15 35 435 435 2435 2435 70 71 RPAAAA KKLAAA OOOOxx +7429 7707 1 1 9 9 29 429 1429 2429 7429 58 59 TZAAAA LKLAAA VVVVxx +4241 7708 1 1 1 1 41 241 241 4241 4241 82 83 DHAAAA MKLAAA AAAAxx +5691 7709 1 3 1 11 91 691 1691 691 5691 182 183 XKAAAA NKLAAA HHHHxx +7731 7710 1 3 1 11 31 731 1731 2731 7731 62 63 JLAAAA OKLAAA OOOOxx +249 7711 1 1 9 9 49 249 249 249 249 98 99 PJAAAA PKLAAA VVVVxx +1731 7712 1 3 1 11 31 731 1731 1731 1731 62 63 POAAAA QKLAAA AAAAxx +8716 7713 0 0 6 16 16 716 716 3716 8716 32 33 GXAAAA RKLAAA HHHHxx +2670 7714 0 2 0 10 70 670 670 2670 2670 140 141 SYAAAA SKLAAA OOOOxx +4654 7715 0 2 4 14 54 654 654 4654 4654 108 109 AXAAAA TKLAAA VVVVxx +1027 7716 1 3 7 7 27 27 1027 1027 1027 54 55 NNAAAA UKLAAA AAAAxx +1099 7717 1 3 9 19 99 99 1099 1099 1099 198 199 HQAAAA VKLAAA HHHHxx +3617 7718 1 1 7 17 17 617 1617 3617 3617 34 35 DJAAAA WKLAAA OOOOxx +4330 7719 0 2 0 10 30 330 330 4330 4330 60 61 OKAAAA XKLAAA VVVVxx +9750 7720 0 2 0 10 50 750 1750 4750 9750 100 101 ALAAAA YKLAAA AAAAxx +467 7721 1 3 7 7 67 467 467 467 467 134 135 ZRAAAA ZKLAAA HHHHxx +8525 7722 1 1 5 5 25 525 525 3525 8525 50 51 XPAAAA ALLAAA OOOOxx +5990 7723 0 2 0 10 90 990 1990 990 5990 180 181 KWAAAA BLLAAA VVVVxx +4839 7724 1 3 9 19 39 839 839 4839 4839 78 79 DEAAAA 
CLLAAA AAAAxx +9914 7725 0 2 4 14 14 914 1914 4914 9914 28 29 IRAAAA DLLAAA HHHHxx +7047 7726 1 3 7 7 47 47 1047 2047 7047 94 95 BLAAAA ELLAAA OOOOxx +874 7727 0 2 4 14 74 874 874 874 874 148 149 QHAAAA FLLAAA VVVVxx +6061 7728 1 1 1 1 61 61 61 1061 6061 122 123 DZAAAA GLLAAA AAAAxx +5491 7729 1 3 1 11 91 491 1491 491 5491 182 183 FDAAAA HLLAAA HHHHxx +4344 7730 0 0 4 4 44 344 344 4344 4344 88 89 CLAAAA ILLAAA OOOOxx +1281 7731 1 1 1 1 81 281 1281 1281 1281 162 163 HXAAAA JLLAAA VVVVxx +3597 7732 1 1 7 17 97 597 1597 3597 3597 194 195 JIAAAA KLLAAA AAAAxx +4992 7733 0 0 2 12 92 992 992 4992 4992 184 185 AKAAAA LLLAAA HHHHxx +3849 7734 1 1 9 9 49 849 1849 3849 3849 98 99 BSAAAA MLLAAA OOOOxx +2655 7735 1 3 5 15 55 655 655 2655 2655 110 111 DYAAAA NLLAAA VVVVxx +147 7736 1 3 7 7 47 147 147 147 147 94 95 RFAAAA OLLAAA AAAAxx +9110 7737 0 2 0 10 10 110 1110 4110 9110 20 21 KMAAAA PLLAAA HHHHxx +1637 7738 1 1 7 17 37 637 1637 1637 1637 74 75 ZKAAAA QLLAAA OOOOxx +9826 7739 0 2 6 6 26 826 1826 4826 9826 52 53 YNAAAA RLLAAA VVVVxx +5957 7740 1 1 7 17 57 957 1957 957 5957 114 115 DVAAAA SLLAAA AAAAxx +6932 7741 0 0 2 12 32 932 932 1932 6932 64 65 QGAAAA TLLAAA HHHHxx +9684 7742 0 0 4 4 84 684 1684 4684 9684 168 169 MIAAAA ULLAAA OOOOxx +4653 7743 1 1 3 13 53 653 653 4653 4653 106 107 ZWAAAA VLLAAA VVVVxx +8065 7744 1 1 5 5 65 65 65 3065 8065 130 131 FYAAAA WLLAAA AAAAxx +1202 7745 0 2 2 2 2 202 1202 1202 1202 4 5 GUAAAA XLLAAA HHHHxx +9214 7746 0 2 4 14 14 214 1214 4214 9214 28 29 KQAAAA YLLAAA OOOOxx +196 7747 0 0 6 16 96 196 196 196 196 192 193 OHAAAA ZLLAAA VVVVxx +4486 7748 0 2 6 6 86 486 486 4486 4486 172 173 OQAAAA AMLAAA AAAAxx +2585 7749 1 1 5 5 85 585 585 2585 2585 170 171 LVAAAA BMLAAA HHHHxx +2464 7750 0 0 4 4 64 464 464 2464 2464 128 129 UQAAAA CMLAAA OOOOxx +3467 7751 1 3 7 7 67 467 1467 3467 3467 134 135 JDAAAA DMLAAA VVVVxx +9295 7752 1 3 5 15 95 295 1295 4295 9295 190 191 NTAAAA EMLAAA AAAAxx +517 7753 1 1 7 17 17 517 517 517 517 34 35 XTAAAA FMLAAA HHHHxx +6870 7754 0 2 0 10 70 870 870 1870 6870 140 141 GEAAAA GMLAAA OOOOxx +5732 7755 0 0 2 12 32 732 1732 732 5732 64 65 MMAAAA HMLAAA VVVVxx +9376 7756 0 0 6 16 76 376 1376 4376 9376 152 153 QWAAAA IMLAAA AAAAxx +838 7757 0 2 8 18 38 838 838 838 838 76 77 GGAAAA JMLAAA HHHHxx +9254 7758 0 2 4 14 54 254 1254 4254 9254 108 109 YRAAAA KMLAAA OOOOxx +8879 7759 1 3 9 19 79 879 879 3879 8879 158 159 NDAAAA LMLAAA VVVVxx +6281 7760 1 1 1 1 81 281 281 1281 6281 162 163 PHAAAA MMLAAA AAAAxx +8216 7761 0 0 6 16 16 216 216 3216 8216 32 33 AEAAAA NMLAAA HHHHxx +9213 7762 1 1 3 13 13 213 1213 4213 9213 26 27 JQAAAA OMLAAA OOOOxx +7234 7763 0 2 4 14 34 234 1234 2234 7234 68 69 GSAAAA PMLAAA VVVVxx +5692 7764 0 0 2 12 92 692 1692 692 5692 184 185 YKAAAA QMLAAA AAAAxx +693 7765 1 1 3 13 93 693 693 693 693 186 187 RAAAAA RMLAAA HHHHxx +9050 7766 0 2 0 10 50 50 1050 4050 9050 100 101 CKAAAA SMLAAA OOOOxx +3623 7767 1 3 3 3 23 623 1623 3623 3623 46 47 JJAAAA TMLAAA VVVVxx +2130 7768 0 2 0 10 30 130 130 2130 2130 60 61 YDAAAA UMLAAA AAAAxx +2514 7769 0 2 4 14 14 514 514 2514 2514 28 29 SSAAAA VMLAAA HHHHxx +1812 7770 0 0 2 12 12 812 1812 1812 1812 24 25 SRAAAA WMLAAA OOOOxx +9037 7771 1 1 7 17 37 37 1037 4037 9037 74 75 PJAAAA XMLAAA VVVVxx +5054 7772 0 2 4 14 54 54 1054 54 5054 108 109 KMAAAA YMLAAA AAAAxx +7801 7773 1 1 1 1 1 801 1801 2801 7801 2 3 BOAAAA ZMLAAA HHHHxx +7939 7774 1 3 9 19 39 939 1939 2939 7939 78 79 JTAAAA ANLAAA OOOOxx +7374 7775 0 2 4 14 74 374 1374 2374 7374 148 149 QXAAAA BNLAAA VVVVxx +1058 7776 0 2 8 18 58 58 1058 1058 1058 116 
117 SOAAAA CNLAAA AAAAxx +1972 7777 0 0 2 12 72 972 1972 1972 1972 144 145 WXAAAA DNLAAA HHHHxx +3741 7778 1 1 1 1 41 741 1741 3741 3741 82 83 XNAAAA ENLAAA OOOOxx +2227 7779 1 3 7 7 27 227 227 2227 2227 54 55 RHAAAA FNLAAA VVVVxx +304 7780 0 0 4 4 4 304 304 304 304 8 9 SLAAAA GNLAAA AAAAxx +4914 7781 0 2 4 14 14 914 914 4914 4914 28 29 AHAAAA HNLAAA HHHHxx +2428 7782 0 0 8 8 28 428 428 2428 2428 56 57 KPAAAA INLAAA OOOOxx +6660 7783 0 0 0 0 60 660 660 1660 6660 120 121 EWAAAA JNLAAA VVVVxx +2676 7784 0 0 6 16 76 676 676 2676 2676 152 153 YYAAAA KNLAAA AAAAxx +2454 7785 0 2 4 14 54 454 454 2454 2454 108 109 KQAAAA LNLAAA HHHHxx +3798 7786 0 2 8 18 98 798 1798 3798 3798 196 197 CQAAAA MNLAAA OOOOxx +1341 7787 1 1 1 1 41 341 1341 1341 1341 82 83 PZAAAA NNLAAA VVVVxx +1611 7788 1 3 1 11 11 611 1611 1611 1611 22 23 ZJAAAA ONLAAA AAAAxx +2681 7789 1 1 1 1 81 681 681 2681 2681 162 163 DZAAAA PNLAAA HHHHxx +7292 7790 0 0 2 12 92 292 1292 2292 7292 184 185 MUAAAA QNLAAA OOOOxx +7775 7791 1 3 5 15 75 775 1775 2775 7775 150 151 BNAAAA RNLAAA VVVVxx +794 7792 0 2 4 14 94 794 794 794 794 188 189 OEAAAA SNLAAA AAAAxx +8709 7793 1 1 9 9 9 709 709 3709 8709 18 19 ZWAAAA TNLAAA HHHHxx +1901 7794 1 1 1 1 1 901 1901 1901 1901 2 3 DVAAAA UNLAAA OOOOxx +3089 7795 1 1 9 9 89 89 1089 3089 3089 178 179 VOAAAA VNLAAA VVVVxx +7797 7796 1 1 7 17 97 797 1797 2797 7797 194 195 XNAAAA WNLAAA AAAAxx +6070 7797 0 2 0 10 70 70 70 1070 6070 140 141 MZAAAA XNLAAA HHHHxx +2191 7798 1 3 1 11 91 191 191 2191 2191 182 183 HGAAAA YNLAAA OOOOxx +3497 7799 1 1 7 17 97 497 1497 3497 3497 194 195 NEAAAA ZNLAAA VVVVxx +8302 7800 0 2 2 2 2 302 302 3302 8302 4 5 IHAAAA AOLAAA AAAAxx +4365 7801 1 1 5 5 65 365 365 4365 4365 130 131 XLAAAA BOLAAA HHHHxx +3588 7802 0 0 8 8 88 588 1588 3588 3588 176 177 AIAAAA COLAAA OOOOxx +8292 7803 0 0 2 12 92 292 292 3292 8292 184 185 YGAAAA DOLAAA VVVVxx +4696 7804 0 0 6 16 96 696 696 4696 4696 192 193 QYAAAA EOLAAA AAAAxx +5641 7805 1 1 1 1 41 641 1641 641 5641 82 83 ZIAAAA FOLAAA HHHHxx +9386 7806 0 2 6 6 86 386 1386 4386 9386 172 173 AXAAAA GOLAAA OOOOxx +507 7807 1 3 7 7 7 507 507 507 507 14 15 NTAAAA HOLAAA VVVVxx +7201 7808 1 1 1 1 1 201 1201 2201 7201 2 3 ZQAAAA IOLAAA AAAAxx +7785 7809 1 1 5 5 85 785 1785 2785 7785 170 171 LNAAAA JOLAAA HHHHxx +463 7810 1 3 3 3 63 463 463 463 463 126 127 VRAAAA KOLAAA OOOOxx +6656 7811 0 0 6 16 56 656 656 1656 6656 112 113 AWAAAA LOLAAA VVVVxx +807 7812 1 3 7 7 7 807 807 807 807 14 15 BFAAAA MOLAAA AAAAxx +7278 7813 0 2 8 18 78 278 1278 2278 7278 156 157 YTAAAA NOLAAA HHHHxx +6237 7814 1 1 7 17 37 237 237 1237 6237 74 75 XFAAAA OOLAAA OOOOxx +7671 7815 1 3 1 11 71 671 1671 2671 7671 142 143 BJAAAA POLAAA VVVVxx +2235 7816 1 3 5 15 35 235 235 2235 2235 70 71 ZHAAAA QOLAAA AAAAxx +4042 7817 0 2 2 2 42 42 42 4042 4042 84 85 MZAAAA ROLAAA HHHHxx +5273 7818 1 1 3 13 73 273 1273 273 5273 146 147 VUAAAA SOLAAA OOOOxx +7557 7819 1 1 7 17 57 557 1557 2557 7557 114 115 REAAAA TOLAAA VVVVxx +4007 7820 1 3 7 7 7 7 7 4007 4007 14 15 DYAAAA UOLAAA AAAAxx +1428 7821 0 0 8 8 28 428 1428 1428 1428 56 57 YCAAAA VOLAAA HHHHxx +9739 7822 1 3 9 19 39 739 1739 4739 9739 78 79 PKAAAA WOLAAA OOOOxx +7836 7823 0 0 6 16 36 836 1836 2836 7836 72 73 KPAAAA XOLAAA VVVVxx +1777 7824 1 1 7 17 77 777 1777 1777 1777 154 155 JQAAAA YOLAAA AAAAxx +5192 7825 0 0 2 12 92 192 1192 192 5192 184 185 SRAAAA ZOLAAA HHHHxx +7236 7826 0 0 6 16 36 236 1236 2236 7236 72 73 ISAAAA APLAAA OOOOxx +1623 7827 1 3 3 3 23 623 1623 1623 1623 46 47 LKAAAA BPLAAA VVVVxx +8288 7828 0 0 8 8 88 288 288 3288 8288 176 
177 UGAAAA CPLAAA AAAAxx +2827 7829 1 3 7 7 27 827 827 2827 2827 54 55 TEAAAA DPLAAA HHHHxx +458 7830 0 2 8 18 58 458 458 458 458 116 117 QRAAAA EPLAAA OOOOxx +1818 7831 0 2 8 18 18 818 1818 1818 1818 36 37 YRAAAA FPLAAA VVVVxx +6837 7832 1 1 7 17 37 837 837 1837 6837 74 75 ZCAAAA GPLAAA AAAAxx +7825 7833 1 1 5 5 25 825 1825 2825 7825 50 51 ZOAAAA HPLAAA HHHHxx +9146 7834 0 2 6 6 46 146 1146 4146 9146 92 93 UNAAAA IPLAAA OOOOxx +8451 7835 1 3 1 11 51 451 451 3451 8451 102 103 BNAAAA JPLAAA VVVVxx +6438 7836 0 2 8 18 38 438 438 1438 6438 76 77 QNAAAA KPLAAA AAAAxx +4020 7837 0 0 0 0 20 20 20 4020 4020 40 41 QYAAAA LPLAAA HHHHxx +4068 7838 0 0 8 8 68 68 68 4068 4068 136 137 MAAAAA MPLAAA OOOOxx +2411 7839 1 3 1 11 11 411 411 2411 2411 22 23 TOAAAA NPLAAA VVVVxx +6222 7840 0 2 2 2 22 222 222 1222 6222 44 45 IFAAAA OPLAAA AAAAxx +3164 7841 0 0 4 4 64 164 1164 3164 3164 128 129 SRAAAA PPLAAA HHHHxx +311 7842 1 3 1 11 11 311 311 311 311 22 23 ZLAAAA QPLAAA OOOOxx +5683 7843 1 3 3 3 83 683 1683 683 5683 166 167 PKAAAA RPLAAA VVVVxx +3993 7844 1 1 3 13 93 993 1993 3993 3993 186 187 PXAAAA SPLAAA AAAAxx +9897 7845 1 1 7 17 97 897 1897 4897 9897 194 195 RQAAAA TPLAAA HHHHxx +6609 7846 1 1 9 9 9 609 609 1609 6609 18 19 FUAAAA UPLAAA OOOOxx +1362 7847 0 2 2 2 62 362 1362 1362 1362 124 125 KAAAAA VPLAAA VVVVxx +3918 7848 0 2 8 18 18 918 1918 3918 3918 36 37 SUAAAA WPLAAA AAAAxx +7376 7849 0 0 6 16 76 376 1376 2376 7376 152 153 SXAAAA XPLAAA HHHHxx +6996 7850 0 0 6 16 96 996 996 1996 6996 192 193 CJAAAA YPLAAA OOOOxx +9567 7851 1 3 7 7 67 567 1567 4567 9567 134 135 ZDAAAA ZPLAAA VVVVxx +7525 7852 1 1 5 5 25 525 1525 2525 7525 50 51 LDAAAA AQLAAA AAAAxx +9069 7853 1 1 9 9 69 69 1069 4069 9069 138 139 VKAAAA BQLAAA HHHHxx +9999 7854 1 3 9 19 99 999 1999 4999 9999 198 199 PUAAAA CQLAAA OOOOxx +9237 7855 1 1 7 17 37 237 1237 4237 9237 74 75 HRAAAA DQLAAA VVVVxx +8441 7856 1 1 1 1 41 441 441 3441 8441 82 83 RMAAAA EQLAAA AAAAxx +6769 7857 1 1 9 9 69 769 769 1769 6769 138 139 JAAAAA FQLAAA HHHHxx +6073 7858 1 1 3 13 73 73 73 1073 6073 146 147 PZAAAA GQLAAA OOOOxx +1091 7859 1 3 1 11 91 91 1091 1091 1091 182 183 ZPAAAA HQLAAA VVVVxx +9886 7860 0 2 6 6 86 886 1886 4886 9886 172 173 GQAAAA IQLAAA AAAAxx +3971 7861 1 3 1 11 71 971 1971 3971 3971 142 143 TWAAAA JQLAAA HHHHxx +4621 7862 1 1 1 1 21 621 621 4621 4621 42 43 TVAAAA KQLAAA OOOOxx +3120 7863 0 0 0 0 20 120 1120 3120 3120 40 41 AQAAAA LQLAAA VVVVxx +9773 7864 1 1 3 13 73 773 1773 4773 9773 146 147 XLAAAA MQLAAA AAAAxx +8712 7865 0 0 2 12 12 712 712 3712 8712 24 25 CXAAAA NQLAAA HHHHxx +801 7866 1 1 1 1 1 801 801 801 801 2 3 VEAAAA OQLAAA OOOOxx +9478 7867 0 2 8 18 78 478 1478 4478 9478 156 157 OAAAAA PQLAAA VVVVxx +3466 7868 0 2 6 6 66 466 1466 3466 3466 132 133 IDAAAA QQLAAA AAAAxx +6326 7869 0 2 6 6 26 326 326 1326 6326 52 53 IJAAAA RQLAAA HHHHxx +1723 7870 1 3 3 3 23 723 1723 1723 1723 46 47 HOAAAA SQLAAA OOOOxx +4978 7871 0 2 8 18 78 978 978 4978 4978 156 157 MJAAAA TQLAAA VVVVxx +2311 7872 1 3 1 11 11 311 311 2311 2311 22 23 XKAAAA UQLAAA AAAAxx +9532 7873 0 0 2 12 32 532 1532 4532 9532 64 65 QCAAAA VQLAAA HHHHxx +3680 7874 0 0 0 0 80 680 1680 3680 3680 160 161 OLAAAA WQLAAA OOOOxx +1244 7875 0 0 4 4 44 244 1244 1244 1244 88 89 WVAAAA XQLAAA VVVVxx +3821 7876 1 1 1 1 21 821 1821 3821 3821 42 43 ZQAAAA YQLAAA AAAAxx +9586 7877 0 2 6 6 86 586 1586 4586 9586 172 173 SEAAAA ZQLAAA HHHHxx +3894 7878 0 2 4 14 94 894 1894 3894 3894 188 189 UTAAAA ARLAAA OOOOxx +6169 7879 1 1 9 9 69 169 169 1169 6169 138 139 HDAAAA BRLAAA VVVVxx +5919 7880 1 3 9 19 19 919 
1919 919 5919 38 39 RTAAAA CRLAAA AAAAxx +4187 7881 1 3 7 7 87 187 187 4187 4187 174 175 BFAAAA DRLAAA HHHHxx +5477 7882 1 1 7 17 77 477 1477 477 5477 154 155 RCAAAA ERLAAA OOOOxx +2806 7883 0 2 6 6 6 806 806 2806 2806 12 13 YDAAAA FRLAAA VVVVxx +8158 7884 0 2 8 18 58 158 158 3158 8158 116 117 UBAAAA GRLAAA AAAAxx +7130 7885 0 2 0 10 30 130 1130 2130 7130 60 61 GOAAAA HRLAAA HHHHxx +7133 7886 1 1 3 13 33 133 1133 2133 7133 66 67 JOAAAA IRLAAA OOOOxx +6033 7887 1 1 3 13 33 33 33 1033 6033 66 67 BYAAAA JRLAAA VVVVxx +2415 7888 1 3 5 15 15 415 415 2415 2415 30 31 XOAAAA KRLAAA AAAAxx +8091 7889 1 3 1 11 91 91 91 3091 8091 182 183 FZAAAA LRLAAA HHHHxx +8347 7890 1 3 7 7 47 347 347 3347 8347 94 95 BJAAAA MRLAAA OOOOxx +7879 7891 1 3 9 19 79 879 1879 2879 7879 158 159 BRAAAA NRLAAA VVVVxx +9360 7892 0 0 0 0 60 360 1360 4360 9360 120 121 AWAAAA ORLAAA AAAAxx +3369 7893 1 1 9 9 69 369 1369 3369 3369 138 139 PZAAAA PRLAAA HHHHxx +8536 7894 0 0 6 16 36 536 536 3536 8536 72 73 IQAAAA QRLAAA OOOOxx +8628 7895 0 0 8 8 28 628 628 3628 8628 56 57 WTAAAA RRLAAA VVVVxx +1580 7896 0 0 0 0 80 580 1580 1580 1580 160 161 UIAAAA SRLAAA AAAAxx +705 7897 1 1 5 5 5 705 705 705 705 10 11 DBAAAA TRLAAA HHHHxx +4650 7898 0 2 0 10 50 650 650 4650 4650 100 101 WWAAAA URLAAA OOOOxx +9165 7899 1 1 5 5 65 165 1165 4165 9165 130 131 NOAAAA VRLAAA VVVVxx +4820 7900 0 0 0 0 20 820 820 4820 4820 40 41 KDAAAA WRLAAA AAAAxx +3538 7901 0 2 8 18 38 538 1538 3538 3538 76 77 CGAAAA XRLAAA HHHHxx +9947 7902 1 3 7 7 47 947 1947 4947 9947 94 95 PSAAAA YRLAAA OOOOxx +4954 7903 0 2 4 14 54 954 954 4954 4954 108 109 OIAAAA ZRLAAA VVVVxx +1104 7904 0 0 4 4 4 104 1104 1104 1104 8 9 MQAAAA ASLAAA AAAAxx +8455 7905 1 3 5 15 55 455 455 3455 8455 110 111 FNAAAA BSLAAA HHHHxx +8307 7906 1 3 7 7 7 307 307 3307 8307 14 15 NHAAAA CSLAAA OOOOxx +9203 7907 1 3 3 3 3 203 1203 4203 9203 6 7 ZPAAAA DSLAAA VVVVxx +7565 7908 1 1 5 5 65 565 1565 2565 7565 130 131 ZEAAAA ESLAAA AAAAxx +7745 7909 1 1 5 5 45 745 1745 2745 7745 90 91 XLAAAA FSLAAA HHHHxx +1787 7910 1 3 7 7 87 787 1787 1787 1787 174 175 TQAAAA GSLAAA OOOOxx +4861 7911 1 1 1 1 61 861 861 4861 4861 122 123 ZEAAAA HSLAAA VVVVxx +5183 7912 1 3 3 3 83 183 1183 183 5183 166 167 JRAAAA ISLAAA AAAAxx +529 7913 1 1 9 9 29 529 529 529 529 58 59 JUAAAA JSLAAA HHHHxx +2470 7914 0 2 0 10 70 470 470 2470 2470 140 141 ARAAAA KSLAAA OOOOxx +1267 7915 1 3 7 7 67 267 1267 1267 1267 134 135 TWAAAA LSLAAA VVVVxx +2059 7916 1 3 9 19 59 59 59 2059 2059 118 119 FBAAAA MSLAAA AAAAxx +1862 7917 0 2 2 2 62 862 1862 1862 1862 124 125 QTAAAA NSLAAA HHHHxx +7382 7918 0 2 2 2 82 382 1382 2382 7382 164 165 YXAAAA OSLAAA OOOOxx +4796 7919 0 0 6 16 96 796 796 4796 4796 192 193 MCAAAA PSLAAA VVVVxx +2331 7920 1 3 1 11 31 331 331 2331 2331 62 63 RLAAAA QSLAAA AAAAxx +8870 7921 0 2 0 10 70 870 870 3870 8870 140 141 EDAAAA RSLAAA HHHHxx +9581 7922 1 1 1 1 81 581 1581 4581 9581 162 163 NEAAAA SSLAAA OOOOxx +9063 7923 1 3 3 3 63 63 1063 4063 9063 126 127 PKAAAA TSLAAA VVVVxx +2192 7924 0 0 2 12 92 192 192 2192 2192 184 185 IGAAAA USLAAA AAAAxx +6466 7925 0 2 6 6 66 466 466 1466 6466 132 133 SOAAAA VSLAAA HHHHxx +7096 7926 0 0 6 16 96 96 1096 2096 7096 192 193 YMAAAA WSLAAA OOOOxx +6257 7927 1 1 7 17 57 257 257 1257 6257 114 115 RGAAAA XSLAAA VVVVxx +7009 7928 1 1 9 9 9 9 1009 2009 7009 18 19 PJAAAA YSLAAA AAAAxx +8136 7929 0 0 6 16 36 136 136 3136 8136 72 73 YAAAAA ZSLAAA HHHHxx +1854 7930 0 2 4 14 54 854 1854 1854 1854 108 109 ITAAAA ATLAAA OOOOxx +3644 7931 0 0 4 4 44 644 1644 3644 3644 88 89 EKAAAA BTLAAA VVVVxx +4437 7932 1 1 7 
17 37 437 437 4437 4437 74 75 ROAAAA CTLAAA AAAAxx +7209 7933 1 1 9 9 9 209 1209 2209 7209 18 19 HRAAAA DTLAAA HHHHxx +1516 7934 0 0 6 16 16 516 1516 1516 1516 32 33 IGAAAA ETLAAA OOOOxx +822 7935 0 2 2 2 22 822 822 822 822 44 45 QFAAAA FTLAAA VVVVxx +1778 7936 0 2 8 18 78 778 1778 1778 1778 156 157 KQAAAA GTLAAA AAAAxx +8161 7937 1 1 1 1 61 161 161 3161 8161 122 123 XBAAAA HTLAAA HHHHxx +6030 7938 0 2 0 10 30 30 30 1030 6030 60 61 YXAAAA ITLAAA OOOOxx +3515 7939 1 3 5 15 15 515 1515 3515 3515 30 31 FFAAAA JTLAAA VVVVxx +1702 7940 0 2 2 2 2 702 1702 1702 1702 4 5 MNAAAA KTLAAA AAAAxx +2671 7941 1 3 1 11 71 671 671 2671 2671 142 143 TYAAAA LTLAAA HHHHxx +7623 7942 1 3 3 3 23 623 1623 2623 7623 46 47 FHAAAA MTLAAA OOOOxx +9828 7943 0 0 8 8 28 828 1828 4828 9828 56 57 AOAAAA NTLAAA VVVVxx +1888 7944 0 0 8 8 88 888 1888 1888 1888 176 177 QUAAAA OTLAAA AAAAxx +4520 7945 0 0 0 0 20 520 520 4520 4520 40 41 WRAAAA PTLAAA HHHHxx +3461 7946 1 1 1 1 61 461 1461 3461 3461 122 123 DDAAAA QTLAAA OOOOxx +1488 7947 0 0 8 8 88 488 1488 1488 1488 176 177 GFAAAA RTLAAA VVVVxx +7753 7948 1 1 3 13 53 753 1753 2753 7753 106 107 FMAAAA STLAAA AAAAxx +5525 7949 1 1 5 5 25 525 1525 525 5525 50 51 NEAAAA TTLAAA HHHHxx +5220 7950 0 0 0 0 20 220 1220 220 5220 40 41 USAAAA UTLAAA OOOOxx +305 7951 1 1 5 5 5 305 305 305 305 10 11 TLAAAA VTLAAA VVVVxx +7883 7952 1 3 3 3 83 883 1883 2883 7883 166 167 FRAAAA WTLAAA AAAAxx +1222 7953 0 2 2 2 22 222 1222 1222 1222 44 45 AVAAAA XTLAAA HHHHxx +8552 7954 0 0 2 12 52 552 552 3552 8552 104 105 YQAAAA YTLAAA OOOOxx +6097 7955 1 1 7 17 97 97 97 1097 6097 194 195 NAAAAA ZTLAAA VVVVxx +2298 7956 0 2 8 18 98 298 298 2298 2298 196 197 KKAAAA AULAAA AAAAxx +956 7957 0 0 6 16 56 956 956 956 956 112 113 UKAAAA BULAAA HHHHxx +9351 7958 1 3 1 11 51 351 1351 4351 9351 102 103 RVAAAA CULAAA OOOOxx +6669 7959 1 1 9 9 69 669 669 1669 6669 138 139 NWAAAA DULAAA VVVVxx +9383 7960 1 3 3 3 83 383 1383 4383 9383 166 167 XWAAAA EULAAA AAAAxx +1607 7961 1 3 7 7 7 607 1607 1607 1607 14 15 VJAAAA FULAAA HHHHxx +812 7962 0 0 2 12 12 812 812 812 812 24 25 GFAAAA GULAAA OOOOxx +2109 7963 1 1 9 9 9 109 109 2109 2109 18 19 DDAAAA HULAAA VVVVxx +207 7964 1 3 7 7 7 207 207 207 207 14 15 ZHAAAA IULAAA AAAAxx +7124 7965 0 0 4 4 24 124 1124 2124 7124 48 49 AOAAAA JULAAA HHHHxx +9333 7966 1 1 3 13 33 333 1333 4333 9333 66 67 ZUAAAA KULAAA OOOOxx +3262 7967 0 2 2 2 62 262 1262 3262 3262 124 125 MVAAAA LULAAA VVVVxx +1070 7968 0 2 0 10 70 70 1070 1070 1070 140 141 EPAAAA MULAAA AAAAxx +7579 7969 1 3 9 19 79 579 1579 2579 7579 158 159 NFAAAA NULAAA HHHHxx +9283 7970 1 3 3 3 83 283 1283 4283 9283 166 167 BTAAAA OULAAA OOOOxx +4917 7971 1 1 7 17 17 917 917 4917 4917 34 35 DHAAAA PULAAA VVVVxx +1328 7972 0 0 8 8 28 328 1328 1328 1328 56 57 CZAAAA QULAAA AAAAxx +3042 7973 0 2 2 2 42 42 1042 3042 3042 84 85 ANAAAA RULAAA HHHHxx +8352 7974 0 0 2 12 52 352 352 3352 8352 104 105 GJAAAA SULAAA OOOOxx +2710 7975 0 2 0 10 10 710 710 2710 2710 20 21 GAAAAA TULAAA VVVVxx +3330 7976 0 2 0 10 30 330 1330 3330 3330 60 61 CYAAAA UULAAA AAAAxx +2822 7977 0 2 2 2 22 822 822 2822 2822 44 45 OEAAAA VULAAA HHHHxx +5627 7978 1 3 7 7 27 627 1627 627 5627 54 55 LIAAAA WULAAA OOOOxx +7848 7979 0 0 8 8 48 848 1848 2848 7848 96 97 WPAAAA XULAAA VVVVxx +7384 7980 0 0 4 4 84 384 1384 2384 7384 168 169 AYAAAA YULAAA AAAAxx +727 7981 1 3 7 7 27 727 727 727 727 54 55 ZBAAAA ZULAAA HHHHxx +9926 7982 0 2 6 6 26 926 1926 4926 9926 52 53 URAAAA AVLAAA OOOOxx +2647 7983 1 3 7 7 47 647 647 2647 2647 94 95 VXAAAA BVLAAA VVVVxx +6416 7984 0 0 6 16 16 416 
416 1416 6416 32 33 UMAAAA CVLAAA AAAAxx +8751 7985 1 3 1 11 51 751 751 3751 8751 102 103 PYAAAA DVLAAA HHHHxx +6515 7986 1 3 5 15 15 515 515 1515 6515 30 31 PQAAAA EVLAAA OOOOxx +2472 7987 0 0 2 12 72 472 472 2472 2472 144 145 CRAAAA FVLAAA VVVVxx +7205 7988 1 1 5 5 5 205 1205 2205 7205 10 11 DRAAAA GVLAAA AAAAxx +9654 7989 0 2 4 14 54 654 1654 4654 9654 108 109 IHAAAA HVLAAA HHHHxx +5646 7990 0 2 6 6 46 646 1646 646 5646 92 93 EJAAAA IVLAAA OOOOxx +4217 7991 1 1 7 17 17 217 217 4217 4217 34 35 FGAAAA JVLAAA VVVVxx +4484 7992 0 0 4 4 84 484 484 4484 4484 168 169 MQAAAA KVLAAA AAAAxx +6654 7993 0 2 4 14 54 654 654 1654 6654 108 109 YVAAAA LVLAAA HHHHxx +4876 7994 0 0 6 16 76 876 876 4876 4876 152 153 OFAAAA MVLAAA OOOOxx +9690 7995 0 2 0 10 90 690 1690 4690 9690 180 181 SIAAAA NVLAAA VVVVxx +2453 7996 1 1 3 13 53 453 453 2453 2453 106 107 JQAAAA OVLAAA AAAAxx +829 7997 1 1 9 9 29 829 829 829 829 58 59 XFAAAA PVLAAA HHHHxx +2547 7998 1 3 7 7 47 547 547 2547 2547 94 95 ZTAAAA QVLAAA OOOOxx +9726 7999 0 2 6 6 26 726 1726 4726 9726 52 53 CKAAAA RVLAAA VVVVxx +9267 8000 1 3 7 7 67 267 1267 4267 9267 134 135 LSAAAA SVLAAA AAAAxx +7448 8001 0 0 8 8 48 448 1448 2448 7448 96 97 MAAAAA TVLAAA HHHHxx +610 8002 0 2 0 10 10 610 610 610 610 20 21 MXAAAA UVLAAA OOOOxx +2791 8003 1 3 1 11 91 791 791 2791 2791 182 183 JDAAAA VVLAAA VVVVxx +3651 8004 1 3 1 11 51 651 1651 3651 3651 102 103 LKAAAA WVLAAA AAAAxx +5206 8005 0 2 6 6 6 206 1206 206 5206 12 13 GSAAAA XVLAAA HHHHxx +8774 8006 0 2 4 14 74 774 774 3774 8774 148 149 MZAAAA YVLAAA OOOOxx +4753 8007 1 1 3 13 53 753 753 4753 4753 106 107 VAAAAA ZVLAAA VVVVxx +4755 8008 1 3 5 15 55 755 755 4755 4755 110 111 XAAAAA AWLAAA AAAAxx +686 8009 0 2 6 6 86 686 686 686 686 172 173 KAAAAA BWLAAA HHHHxx +8281 8010 1 1 1 1 81 281 281 3281 8281 162 163 NGAAAA CWLAAA OOOOxx +2058 8011 0 2 8 18 58 58 58 2058 2058 116 117 EBAAAA DWLAAA VVVVxx +8900 8012 0 0 0 0 0 900 900 3900 8900 0 1 IEAAAA EWLAAA AAAAxx +8588 8013 0 0 8 8 88 588 588 3588 8588 176 177 ISAAAA FWLAAA HHHHxx +2904 8014 0 0 4 4 4 904 904 2904 2904 8 9 SHAAAA GWLAAA OOOOxx +8917 8015 1 1 7 17 17 917 917 3917 8917 34 35 ZEAAAA HWLAAA VVVVxx +9026 8016 0 2 6 6 26 26 1026 4026 9026 52 53 EJAAAA IWLAAA AAAAxx +2416 8017 0 0 6 16 16 416 416 2416 2416 32 33 YOAAAA JWLAAA HHHHxx +1053 8018 1 1 3 13 53 53 1053 1053 1053 106 107 NOAAAA KWLAAA OOOOxx +7141 8019 1 1 1 1 41 141 1141 2141 7141 82 83 ROAAAA LWLAAA VVVVxx +9771 8020 1 3 1 11 71 771 1771 4771 9771 142 143 VLAAAA MWLAAA AAAAxx +2774 8021 0 2 4 14 74 774 774 2774 2774 148 149 SCAAAA NWLAAA HHHHxx +3213 8022 1 1 3 13 13 213 1213 3213 3213 26 27 PTAAAA OWLAAA OOOOxx +5694 8023 0 2 4 14 94 694 1694 694 5694 188 189 ALAAAA PWLAAA VVVVxx +6631 8024 1 3 1 11 31 631 631 1631 6631 62 63 BVAAAA QWLAAA AAAAxx +6638 8025 0 2 8 18 38 638 638 1638 6638 76 77 IVAAAA RWLAAA HHHHxx +7407 8026 1 3 7 7 7 407 1407 2407 7407 14 15 XYAAAA SWLAAA OOOOxx +8972 8027 0 0 2 12 72 972 972 3972 8972 144 145 CHAAAA TWLAAA VVVVxx +2202 8028 0 2 2 2 2 202 202 2202 2202 4 5 SGAAAA UWLAAA AAAAxx +6135 8029 1 3 5 15 35 135 135 1135 6135 70 71 ZBAAAA VWLAAA HHHHxx +5043 8030 1 3 3 3 43 43 1043 43 5043 86 87 ZLAAAA WWLAAA OOOOxx +5163 8031 1 3 3 3 63 163 1163 163 5163 126 127 PQAAAA XWLAAA VVVVxx +1191 8032 1 3 1 11 91 191 1191 1191 1191 182 183 VTAAAA YWLAAA AAAAxx +6576 8033 0 0 6 16 76 576 576 1576 6576 152 153 YSAAAA ZWLAAA HHHHxx +3455 8034 1 3 5 15 55 455 1455 3455 3455 110 111 XCAAAA AXLAAA OOOOxx +3688 8035 0 0 8 8 88 688 1688 3688 3688 176 177 WLAAAA BXLAAA VVVVxx +4982 8036 0 2 2 2 82 
982 982 4982 4982 164 165 QJAAAA CXLAAA AAAAxx +4180 8037 0 0 0 0 80 180 180 4180 4180 160 161 UEAAAA DXLAAA HHHHxx +4708 8038 0 0 8 8 8 708 708 4708 4708 16 17 CZAAAA EXLAAA OOOOxx +1241 8039 1 1 1 1 41 241 1241 1241 1241 82 83 TVAAAA FXLAAA VVVVxx +4921 8040 1 1 1 1 21 921 921 4921 4921 42 43 HHAAAA GXLAAA AAAAxx +3197 8041 1 1 7 17 97 197 1197 3197 3197 194 195 ZSAAAA HXLAAA HHHHxx +8225 8042 1 1 5 5 25 225 225 3225 8225 50 51 JEAAAA IXLAAA OOOOxx +5913 8043 1 1 3 13 13 913 1913 913 5913 26 27 LTAAAA JXLAAA VVVVxx +6387 8044 1 3 7 7 87 387 387 1387 6387 174 175 RLAAAA KXLAAA AAAAxx +2706 8045 0 2 6 6 6 706 706 2706 2706 12 13 CAAAAA LXLAAA HHHHxx +1461 8046 1 1 1 1 61 461 1461 1461 1461 122 123 FEAAAA MXLAAA OOOOxx +7646 8047 0 2 6 6 46 646 1646 2646 7646 92 93 CIAAAA NXLAAA VVVVxx +8066 8048 0 2 6 6 66 66 66 3066 8066 132 133 GYAAAA OXLAAA AAAAxx +4171 8049 1 3 1 11 71 171 171 4171 4171 142 143 LEAAAA PXLAAA HHHHxx +8008 8050 0 0 8 8 8 8 8 3008 8008 16 17 AWAAAA QXLAAA OOOOxx +2088 8051 0 0 8 8 88 88 88 2088 2088 176 177 ICAAAA RXLAAA VVVVxx +7907 8052 1 3 7 7 7 907 1907 2907 7907 14 15 DSAAAA SXLAAA AAAAxx +2429 8053 1 1 9 9 29 429 429 2429 2429 58 59 LPAAAA TXLAAA HHHHxx +9629 8054 1 1 9 9 29 629 1629 4629 9629 58 59 JGAAAA UXLAAA OOOOxx +1470 8055 0 2 0 10 70 470 1470 1470 1470 140 141 OEAAAA VXLAAA VVVVxx +4346 8056 0 2 6 6 46 346 346 4346 4346 92 93 ELAAAA WXLAAA AAAAxx +7219 8057 1 3 9 19 19 219 1219 2219 7219 38 39 RRAAAA XXLAAA HHHHxx +1185 8058 1 1 5 5 85 185 1185 1185 1185 170 171 PTAAAA YXLAAA OOOOxx +8776 8059 0 0 6 16 76 776 776 3776 8776 152 153 OZAAAA ZXLAAA VVVVxx +684 8060 0 0 4 4 84 684 684 684 684 168 169 IAAAAA AYLAAA AAAAxx +2343 8061 1 3 3 3 43 343 343 2343 2343 86 87 DMAAAA BYLAAA HHHHxx +4470 8062 0 2 0 10 70 470 470 4470 4470 140 141 YPAAAA CYLAAA OOOOxx +5116 8063 0 0 6 16 16 116 1116 116 5116 32 33 UOAAAA DYLAAA VVVVxx +1746 8064 0 2 6 6 46 746 1746 1746 1746 92 93 EPAAAA EYLAAA AAAAxx +3216 8065 0 0 6 16 16 216 1216 3216 3216 32 33 STAAAA FYLAAA HHHHxx +4594 8066 0 2 4 14 94 594 594 4594 4594 188 189 SUAAAA GYLAAA OOOOxx +3013 8067 1 1 3 13 13 13 1013 3013 3013 26 27 XLAAAA HYLAAA VVVVxx +2307 8068 1 3 7 7 7 307 307 2307 2307 14 15 TKAAAA IYLAAA AAAAxx +7663 8069 1 3 3 3 63 663 1663 2663 7663 126 127 TIAAAA JYLAAA HHHHxx +8504 8070 0 0 4 4 4 504 504 3504 8504 8 9 CPAAAA KYLAAA OOOOxx +3683 8071 1 3 3 3 83 683 1683 3683 3683 166 167 RLAAAA LYLAAA VVVVxx +144 8072 0 0 4 4 44 144 144 144 144 88 89 OFAAAA MYLAAA AAAAxx +203 8073 1 3 3 3 3 203 203 203 203 6 7 VHAAAA NYLAAA HHHHxx +5255 8074 1 3 5 15 55 255 1255 255 5255 110 111 DUAAAA OYLAAA OOOOxx +4150 8075 0 2 0 10 50 150 150 4150 4150 100 101 QDAAAA PYLAAA VVVVxx +5701 8076 1 1 1 1 1 701 1701 701 5701 2 3 HLAAAA QYLAAA AAAAxx +7400 8077 0 0 0 0 0 400 1400 2400 7400 0 1 QYAAAA RYLAAA HHHHxx +8203 8078 1 3 3 3 3 203 203 3203 8203 6 7 NDAAAA SYLAAA OOOOxx +637 8079 1 1 7 17 37 637 637 637 637 74 75 NYAAAA TYLAAA VVVVxx +2898 8080 0 2 8 18 98 898 898 2898 2898 196 197 MHAAAA UYLAAA AAAAxx +1110 8081 0 2 0 10 10 110 1110 1110 1110 20 21 SQAAAA VYLAAA HHHHxx +6255 8082 1 3 5 15 55 255 255 1255 6255 110 111 PGAAAA WYLAAA OOOOxx +1071 8083 1 3 1 11 71 71 1071 1071 1071 142 143 FPAAAA XYLAAA VVVVxx +541 8084 1 1 1 1 41 541 541 541 541 82 83 VUAAAA YYLAAA AAAAxx +8077 8085 1 1 7 17 77 77 77 3077 8077 154 155 RYAAAA ZYLAAA HHHHxx +6809 8086 1 1 9 9 9 809 809 1809 6809 18 19 XBAAAA AZLAAA OOOOxx +4749 8087 1 1 9 9 49 749 749 4749 4749 98 99 RAAAAA BZLAAA VVVVxx +2886 8088 0 2 6 6 86 886 886 2886 2886 172 173 AHAAAA 
CZLAAA AAAAxx +5510 8089 0 2 0 10 10 510 1510 510 5510 20 21 YDAAAA DZLAAA HHHHxx +713 8090 1 1 3 13 13 713 713 713 713 26 27 LBAAAA EZLAAA OOOOxx +8388 8091 0 0 8 8 88 388 388 3388 8388 176 177 QKAAAA FZLAAA VVVVxx +9524 8092 0 0 4 4 24 524 1524 4524 9524 48 49 ICAAAA GZLAAA AAAAxx +9949 8093 1 1 9 9 49 949 1949 4949 9949 98 99 RSAAAA HZLAAA HHHHxx +885 8094 1 1 5 5 85 885 885 885 885 170 171 BIAAAA IZLAAA OOOOxx +8699 8095 1 3 9 19 99 699 699 3699 8699 198 199 PWAAAA JZLAAA VVVVxx +2232 8096 0 0 2 12 32 232 232 2232 2232 64 65 WHAAAA KZLAAA AAAAxx +5142 8097 0 2 2 2 42 142 1142 142 5142 84 85 UPAAAA LZLAAA HHHHxx +8891 8098 1 3 1 11 91 891 891 3891 8891 182 183 ZDAAAA MZLAAA OOOOxx +1881 8099 1 1 1 1 81 881 1881 1881 1881 162 163 JUAAAA NZLAAA VVVVxx +3751 8100 1 3 1 11 51 751 1751 3751 3751 102 103 HOAAAA OZLAAA AAAAxx +1896 8101 0 0 6 16 96 896 1896 1896 1896 192 193 YUAAAA PZLAAA HHHHxx +8258 8102 0 2 8 18 58 258 258 3258 8258 116 117 QFAAAA QZLAAA OOOOxx +3820 8103 0 0 0 0 20 820 1820 3820 3820 40 41 YQAAAA RZLAAA VVVVxx +6617 8104 1 1 7 17 17 617 617 1617 6617 34 35 NUAAAA SZLAAA AAAAxx +5100 8105 0 0 0 0 0 100 1100 100 5100 0 1 EOAAAA TZLAAA HHHHxx +4277 8106 1 1 7 17 77 277 277 4277 4277 154 155 NIAAAA UZLAAA OOOOxx +2498 8107 0 2 8 18 98 498 498 2498 2498 196 197 CSAAAA VZLAAA VVVVxx +4343 8108 1 3 3 3 43 343 343 4343 4343 86 87 BLAAAA WZLAAA AAAAxx +8319 8109 1 3 9 19 19 319 319 3319 8319 38 39 ZHAAAA XZLAAA HHHHxx +4803 8110 1 3 3 3 3 803 803 4803 4803 6 7 TCAAAA YZLAAA OOOOxx +3100 8111 0 0 0 0 0 100 1100 3100 3100 0 1 GPAAAA ZZLAAA VVVVxx +428 8112 0 0 8 8 28 428 428 428 428 56 57 MQAAAA AAMAAA AAAAxx +2811 8113 1 3 1 11 11 811 811 2811 2811 22 23 DEAAAA BAMAAA HHHHxx +2989 8114 1 1 9 9 89 989 989 2989 2989 178 179 ZKAAAA CAMAAA OOOOxx +1100 8115 0 0 0 0 0 100 1100 1100 1100 0 1 IQAAAA DAMAAA VVVVxx +6586 8116 0 2 6 6 86 586 586 1586 6586 172 173 ITAAAA EAMAAA AAAAxx +3124 8117 0 0 4 4 24 124 1124 3124 3124 48 49 EQAAAA FAMAAA HHHHxx +1635 8118 1 3 5 15 35 635 1635 1635 1635 70 71 XKAAAA GAMAAA OOOOxx +3888 8119 0 0 8 8 88 888 1888 3888 3888 176 177 OTAAAA HAMAAA VVVVxx +8369 8120 1 1 9 9 69 369 369 3369 8369 138 139 XJAAAA IAMAAA AAAAxx +3148 8121 0 0 8 8 48 148 1148 3148 3148 96 97 CRAAAA JAMAAA HHHHxx +2842 8122 0 2 2 2 42 842 842 2842 2842 84 85 IFAAAA KAMAAA OOOOxx +4965 8123 1 1 5 5 65 965 965 4965 4965 130 131 ZIAAAA LAMAAA VVVVxx +3742 8124 0 2 2 2 42 742 1742 3742 3742 84 85 YNAAAA MAMAAA AAAAxx +5196 8125 0 0 6 16 96 196 1196 196 5196 192 193 WRAAAA NAMAAA HHHHxx +9105 8126 1 1 5 5 5 105 1105 4105 9105 10 11 FMAAAA OAMAAA OOOOxx +6806 8127 0 2 6 6 6 806 806 1806 6806 12 13 UBAAAA PAMAAA VVVVxx +5849 8128 1 1 9 9 49 849 1849 849 5849 98 99 ZQAAAA QAMAAA AAAAxx +6504 8129 0 0 4 4 4 504 504 1504 6504 8 9 EQAAAA RAMAAA HHHHxx +9841 8130 1 1 1 1 41 841 1841 4841 9841 82 83 NOAAAA SAMAAA OOOOxx +457 8131 1 1 7 17 57 457 457 457 457 114 115 PRAAAA TAMAAA VVVVxx +8856 8132 0 0 6 16 56 856 856 3856 8856 112 113 QCAAAA UAMAAA AAAAxx +8043 8133 1 3 3 3 43 43 43 3043 8043 86 87 JXAAAA VAMAAA HHHHxx +5933 8134 1 1 3 13 33 933 1933 933 5933 66 67 FUAAAA WAMAAA OOOOxx +5725 8135 1 1 5 5 25 725 1725 725 5725 50 51 FMAAAA XAMAAA VVVVxx +8607 8136 1 3 7 7 7 607 607 3607 8607 14 15 BTAAAA YAMAAA AAAAxx +9280 8137 0 0 0 0 80 280 1280 4280 9280 160 161 YSAAAA ZAMAAA HHHHxx +6017 8138 1 1 7 17 17 17 17 1017 6017 34 35 LXAAAA ABMAAA OOOOxx +4946 8139 0 2 6 6 46 946 946 4946 4946 92 93 GIAAAA BBMAAA VVVVxx +7373 8140 1 1 3 13 73 373 1373 2373 7373 146 147 PXAAAA CBMAAA AAAAxx +8096 8141 0 
0 6 16 96 96 96 3096 8096 192 193 KZAAAA DBMAAA HHHHxx +3178 8142 0 2 8 18 78 178 1178 3178 3178 156 157 GSAAAA EBMAAA OOOOxx +1849 8143 1 1 9 9 49 849 1849 1849 1849 98 99 DTAAAA FBMAAA VVVVxx +8813 8144 1 1 3 13 13 813 813 3813 8813 26 27 ZAAAAA GBMAAA AAAAxx +460 8145 0 0 0 0 60 460 460 460 460 120 121 SRAAAA HBMAAA HHHHxx +7756 8146 0 0 6 16 56 756 1756 2756 7756 112 113 IMAAAA IBMAAA OOOOxx +4425 8147 1 1 5 5 25 425 425 4425 4425 50 51 FOAAAA JBMAAA VVVVxx +1602 8148 0 2 2 2 2 602 1602 1602 1602 4 5 QJAAAA KBMAAA AAAAxx +5981 8149 1 1 1 1 81 981 1981 981 5981 162 163 BWAAAA LBMAAA HHHHxx +8139 8150 1 3 9 19 39 139 139 3139 8139 78 79 BBAAAA MBMAAA OOOOxx +754 8151 0 2 4 14 54 754 754 754 754 108 109 ADAAAA NBMAAA VVVVxx +26 8152 0 2 6 6 26 26 26 26 26 52 53 ABAAAA OBMAAA AAAAxx +106 8153 0 2 6 6 6 106 106 106 106 12 13 CEAAAA PBMAAA HHHHxx +7465 8154 1 1 5 5 65 465 1465 2465 7465 130 131 DBAAAA QBMAAA OOOOxx +1048 8155 0 0 8 8 48 48 1048 1048 1048 96 97 IOAAAA RBMAAA VVVVxx +2303 8156 1 3 3 3 3 303 303 2303 2303 6 7 PKAAAA SBMAAA AAAAxx +5794 8157 0 2 4 14 94 794 1794 794 5794 188 189 WOAAAA TBMAAA HHHHxx +3321 8158 1 1 1 1 21 321 1321 3321 3321 42 43 TXAAAA UBMAAA OOOOxx +6122 8159 0 2 2 2 22 122 122 1122 6122 44 45 MBAAAA VBMAAA VVVVxx +6474 8160 0 2 4 14 74 474 474 1474 6474 148 149 APAAAA WBMAAA AAAAxx +827 8161 1 3 7 7 27 827 827 827 827 54 55 VFAAAA XBMAAA HHHHxx +6616 8162 0 0 6 16 16 616 616 1616 6616 32 33 MUAAAA YBMAAA OOOOxx +2131 8163 1 3 1 11 31 131 131 2131 2131 62 63 ZDAAAA ZBMAAA VVVVxx +5483 8164 1 3 3 3 83 483 1483 483 5483 166 167 XCAAAA ACMAAA AAAAxx +606 8165 0 2 6 6 6 606 606 606 606 12 13 IXAAAA BCMAAA HHHHxx +922 8166 0 2 2 2 22 922 922 922 922 44 45 MJAAAA CCMAAA OOOOxx +8475 8167 1 3 5 15 75 475 475 3475 8475 150 151 ZNAAAA DCMAAA VVVVxx +7645 8168 1 1 5 5 45 645 1645 2645 7645 90 91 BIAAAA ECMAAA AAAAxx +5097 8169 1 1 7 17 97 97 1097 97 5097 194 195 BOAAAA FCMAAA HHHHxx +5377 8170 1 1 7 17 77 377 1377 377 5377 154 155 VYAAAA GCMAAA OOOOxx +6116 8171 0 0 6 16 16 116 116 1116 6116 32 33 GBAAAA HCMAAA VVVVxx +8674 8172 0 2 4 14 74 674 674 3674 8674 148 149 QVAAAA ICMAAA AAAAxx +8063 8173 1 3 3 3 63 63 63 3063 8063 126 127 DYAAAA JCMAAA HHHHxx +5271 8174 1 3 1 11 71 271 1271 271 5271 142 143 TUAAAA KCMAAA OOOOxx +1619 8175 1 3 9 19 19 619 1619 1619 1619 38 39 HKAAAA LCMAAA VVVVxx +6419 8176 1 3 9 19 19 419 419 1419 6419 38 39 XMAAAA MCMAAA AAAAxx +7651 8177 1 3 1 11 51 651 1651 2651 7651 102 103 HIAAAA NCMAAA HHHHxx +2897 8178 1 1 7 17 97 897 897 2897 2897 194 195 LHAAAA OCMAAA OOOOxx +8148 8179 0 0 8 8 48 148 148 3148 8148 96 97 KBAAAA PCMAAA VVVVxx +7461 8180 1 1 1 1 61 461 1461 2461 7461 122 123 ZAAAAA QCMAAA AAAAxx +9186 8181 0 2 6 6 86 186 1186 4186 9186 172 173 IPAAAA RCMAAA HHHHxx +7127 8182 1 3 7 7 27 127 1127 2127 7127 54 55 DOAAAA SCMAAA OOOOxx +8233 8183 1 1 3 13 33 233 233 3233 8233 66 67 REAAAA TCMAAA VVVVxx +9651 8184 1 3 1 11 51 651 1651 4651 9651 102 103 FHAAAA UCMAAA AAAAxx +6746 8185 0 2 6 6 46 746 746 1746 6746 92 93 MZAAAA VCMAAA HHHHxx +7835 8186 1 3 5 15 35 835 1835 2835 7835 70 71 JPAAAA WCMAAA OOOOxx +8815 8187 1 3 5 15 15 815 815 3815 8815 30 31 BBAAAA XCMAAA VVVVxx +6398 8188 0 2 8 18 98 398 398 1398 6398 196 197 CMAAAA YCMAAA AAAAxx +5344 8189 0 0 4 4 44 344 1344 344 5344 88 89 OXAAAA ZCMAAA HHHHxx +8209 8190 1 1 9 9 9 209 209 3209 8209 18 19 TDAAAA ADMAAA OOOOxx +8444 8191 0 0 4 4 44 444 444 3444 8444 88 89 UMAAAA BDMAAA VVVVxx +5669 8192 1 1 9 9 69 669 1669 669 5669 138 139 BKAAAA CDMAAA AAAAxx +2455 8193 1 3 5 15 55 455 455 2455 
2455 110 111 LQAAAA DDMAAA HHHHxx +6767 8194 1 3 7 7 67 767 767 1767 6767 134 135 HAAAAA EDMAAA OOOOxx +135 8195 1 3 5 15 35 135 135 135 135 70 71 FFAAAA FDMAAA VVVVxx +3503 8196 1 3 3 3 3 503 1503 3503 3503 6 7 TEAAAA GDMAAA AAAAxx +6102 8197 0 2 2 2 2 102 102 1102 6102 4 5 SAAAAA HDMAAA HHHHxx +7136 8198 0 0 6 16 36 136 1136 2136 7136 72 73 MOAAAA IDMAAA OOOOxx +4933 8199 1 1 3 13 33 933 933 4933 4933 66 67 THAAAA JDMAAA VVVVxx +8804 8200 0 0 4 4 4 804 804 3804 8804 8 9 QAAAAA KDMAAA AAAAxx +3760 8201 0 0 0 0 60 760 1760 3760 3760 120 121 QOAAAA LDMAAA HHHHxx +8603 8202 1 3 3 3 3 603 603 3603 8603 6 7 XSAAAA MDMAAA OOOOxx +7411 8203 1 3 1 11 11 411 1411 2411 7411 22 23 BZAAAA NDMAAA VVVVxx +834 8204 0 2 4 14 34 834 834 834 834 68 69 CGAAAA ODMAAA AAAAxx +7385 8205 1 1 5 5 85 385 1385 2385 7385 170 171 BYAAAA PDMAAA HHHHxx +3696 8206 0 0 6 16 96 696 1696 3696 3696 192 193 EMAAAA QDMAAA OOOOxx +8720 8207 0 0 0 0 20 720 720 3720 8720 40 41 KXAAAA RDMAAA VVVVxx +4539 8208 1 3 9 19 39 539 539 4539 4539 78 79 PSAAAA SDMAAA AAAAxx +9837 8209 1 1 7 17 37 837 1837 4837 9837 74 75 JOAAAA TDMAAA HHHHxx +8595 8210 1 3 5 15 95 595 595 3595 8595 190 191 PSAAAA UDMAAA OOOOxx +3673 8211 1 1 3 13 73 673 1673 3673 3673 146 147 HLAAAA VDMAAA VVVVxx +475 8212 1 3 5 15 75 475 475 475 475 150 151 HSAAAA WDMAAA AAAAxx +2256 8213 0 0 6 16 56 256 256 2256 2256 112 113 UIAAAA XDMAAA HHHHxx +6349 8214 1 1 9 9 49 349 349 1349 6349 98 99 FKAAAA YDMAAA OOOOxx +9968 8215 0 0 8 8 68 968 1968 4968 9968 136 137 KTAAAA ZDMAAA VVVVxx +7261 8216 1 1 1 1 61 261 1261 2261 7261 122 123 HTAAAA AEMAAA AAAAxx +5799 8217 1 3 9 19 99 799 1799 799 5799 198 199 BPAAAA BEMAAA HHHHxx +8159 8218 1 3 9 19 59 159 159 3159 8159 118 119 VBAAAA CEMAAA OOOOxx +92 8219 0 0 2 12 92 92 92 92 92 184 185 ODAAAA DEMAAA VVVVxx +5927 8220 1 3 7 7 27 927 1927 927 5927 54 55 ZTAAAA EEMAAA AAAAxx +7925 8221 1 1 5 5 25 925 1925 2925 7925 50 51 VSAAAA FEMAAA HHHHxx +5836 8222 0 0 6 16 36 836 1836 836 5836 72 73 MQAAAA GEMAAA OOOOxx +7935 8223 1 3 5 15 35 935 1935 2935 7935 70 71 FTAAAA HEMAAA VVVVxx +5505 8224 1 1 5 5 5 505 1505 505 5505 10 11 TDAAAA IEMAAA AAAAxx +5882 8225 0 2 2 2 82 882 1882 882 5882 164 165 GSAAAA JEMAAA HHHHxx +4411 8226 1 3 1 11 11 411 411 4411 4411 22 23 RNAAAA KEMAAA OOOOxx +64 8227 0 0 4 4 64 64 64 64 64 128 129 MCAAAA LEMAAA VVVVxx +2851 8228 1 3 1 11 51 851 851 2851 2851 102 103 RFAAAA MEMAAA AAAAxx +1665 8229 1 1 5 5 65 665 1665 1665 1665 130 131 BMAAAA NEMAAA HHHHxx +2895 8230 1 3 5 15 95 895 895 2895 2895 190 191 JHAAAA OEMAAA OOOOxx +2210 8231 0 2 0 10 10 210 210 2210 2210 20 21 AHAAAA PEMAAA VVVVxx +9873 8232 1 1 3 13 73 873 1873 4873 9873 146 147 TPAAAA QEMAAA AAAAxx +5402 8233 0 2 2 2 2 402 1402 402 5402 4 5 UZAAAA REMAAA HHHHxx +285 8234 1 1 5 5 85 285 285 285 285 170 171 ZKAAAA SEMAAA OOOOxx +8545 8235 1 1 5 5 45 545 545 3545 8545 90 91 RQAAAA TEMAAA VVVVxx +5328 8236 0 0 8 8 28 328 1328 328 5328 56 57 YWAAAA UEMAAA AAAAxx +733 8237 1 1 3 13 33 733 733 733 733 66 67 FCAAAA VEMAAA HHHHxx +7726 8238 0 2 6 6 26 726 1726 2726 7726 52 53 ELAAAA WEMAAA OOOOxx +5418 8239 0 2 8 18 18 418 1418 418 5418 36 37 KAAAAA XEMAAA VVVVxx +7761 8240 1 1 1 1 61 761 1761 2761 7761 122 123 NMAAAA YEMAAA AAAAxx +9263 8241 1 3 3 3 63 263 1263 4263 9263 126 127 HSAAAA ZEMAAA HHHHxx +5579 8242 1 3 9 19 79 579 1579 579 5579 158 159 PGAAAA AFMAAA OOOOxx +5434 8243 0 2 4 14 34 434 1434 434 5434 68 69 ABAAAA BFMAAA VVVVxx +5230 8244 0 2 0 10 30 230 1230 230 5230 60 61 ETAAAA CFMAAA AAAAxx +9981 8245 1 1 1 1 81 981 1981 4981 9981 162 163 XTAAAA 
DFMAAA HHHHxx +5830 8246 0 2 0 10 30 830 1830 830 5830 60 61 GQAAAA EFMAAA OOOOxx +128 8247 0 0 8 8 28 128 128 128 128 56 57 YEAAAA FFMAAA VVVVxx +2734 8248 0 2 4 14 34 734 734 2734 2734 68 69 EBAAAA GFMAAA AAAAxx +4537 8249 1 1 7 17 37 537 537 4537 4537 74 75 NSAAAA HFMAAA HHHHxx +3899 8250 1 3 9 19 99 899 1899 3899 3899 198 199 ZTAAAA IFMAAA OOOOxx +1000 8251 0 0 0 0 0 0 1000 1000 1000 0 1 MMAAAA JFMAAA VVVVxx +9896 8252 0 0 6 16 96 896 1896 4896 9896 192 193 QQAAAA KFMAAA AAAAxx +3640 8253 0 0 0 0 40 640 1640 3640 3640 80 81 AKAAAA LFMAAA HHHHxx +2568 8254 0 0 8 8 68 568 568 2568 2568 136 137 UUAAAA MFMAAA OOOOxx +2026 8255 0 2 6 6 26 26 26 2026 2026 52 53 YZAAAA NFMAAA VVVVxx +3955 8256 1 3 5 15 55 955 1955 3955 3955 110 111 DWAAAA OFMAAA AAAAxx +7152 8257 0 0 2 12 52 152 1152 2152 7152 104 105 CPAAAA PFMAAA HHHHxx +2402 8258 0 2 2 2 2 402 402 2402 2402 4 5 KOAAAA QFMAAA OOOOxx +9522 8259 0 2 2 2 22 522 1522 4522 9522 44 45 GCAAAA RFMAAA VVVVxx +4011 8260 1 3 1 11 11 11 11 4011 4011 22 23 HYAAAA SFMAAA AAAAxx +3297 8261 1 1 7 17 97 297 1297 3297 3297 194 195 VWAAAA TFMAAA HHHHxx +4915 8262 1 3 5 15 15 915 915 4915 4915 30 31 BHAAAA UFMAAA OOOOxx +5397 8263 1 1 7 17 97 397 1397 397 5397 194 195 PZAAAA VFMAAA VVVVxx +5454 8264 0 2 4 14 54 454 1454 454 5454 108 109 UBAAAA WFMAAA AAAAxx +4568 8265 0 0 8 8 68 568 568 4568 4568 136 137 STAAAA XFMAAA HHHHxx +5875 8266 1 3 5 15 75 875 1875 875 5875 150 151 ZRAAAA YFMAAA OOOOxx +3642 8267 0 2 2 2 42 642 1642 3642 3642 84 85 CKAAAA ZFMAAA VVVVxx +8506 8268 0 2 6 6 6 506 506 3506 8506 12 13 EPAAAA AGMAAA AAAAxx +9621 8269 1 1 1 1 21 621 1621 4621 9621 42 43 BGAAAA BGMAAA HHHHxx +7739 8270 1 3 9 19 39 739 1739 2739 7739 78 79 RLAAAA CGMAAA OOOOxx +3987 8271 1 3 7 7 87 987 1987 3987 3987 174 175 JXAAAA DGMAAA VVVVxx +2090 8272 0 2 0 10 90 90 90 2090 2090 180 181 KCAAAA EGMAAA AAAAxx +3838 8273 0 2 8 18 38 838 1838 3838 3838 76 77 QRAAAA FGMAAA HHHHxx +17 8274 1 1 7 17 17 17 17 17 17 34 35 RAAAAA GGMAAA OOOOxx +3406 8275 0 2 6 6 6 406 1406 3406 3406 12 13 ABAAAA HGMAAA VVVVxx +8312 8276 0 0 2 12 12 312 312 3312 8312 24 25 SHAAAA IGMAAA AAAAxx +4034 8277 0 2 4 14 34 34 34 4034 4034 68 69 EZAAAA JGMAAA HHHHxx +1535 8278 1 3 5 15 35 535 1535 1535 1535 70 71 BHAAAA KGMAAA OOOOxx +7198 8279 0 2 8 18 98 198 1198 2198 7198 196 197 WQAAAA LGMAAA VVVVxx +8885 8280 1 1 5 5 85 885 885 3885 8885 170 171 TDAAAA MGMAAA AAAAxx +4081 8281 1 1 1 1 81 81 81 4081 4081 162 163 ZAAAAA NGMAAA HHHHxx +980 8282 0 0 0 0 80 980 980 980 980 160 161 SLAAAA OGMAAA OOOOxx +551 8283 1 3 1 11 51 551 551 551 551 102 103 FVAAAA PGMAAA VVVVxx +7746 8284 0 2 6 6 46 746 1746 2746 7746 92 93 YLAAAA QGMAAA AAAAxx +4756 8285 0 0 6 16 56 756 756 4756 4756 112 113 YAAAAA RGMAAA HHHHxx +3655 8286 1 3 5 15 55 655 1655 3655 3655 110 111 PKAAAA SGMAAA OOOOxx +7075 8287 1 3 5 15 75 75 1075 2075 7075 150 151 DMAAAA TGMAAA VVVVxx +3950 8288 0 2 0 10 50 950 1950 3950 3950 100 101 YVAAAA UGMAAA AAAAxx +2314 8289 0 2 4 14 14 314 314 2314 2314 28 29 ALAAAA VGMAAA HHHHxx +8432 8290 0 0 2 12 32 432 432 3432 8432 64 65 IMAAAA WGMAAA OOOOxx +62 8291 0 2 2 2 62 62 62 62 62 124 125 KCAAAA XGMAAA VVVVxx +6920 8292 0 0 0 0 20 920 920 1920 6920 40 41 EGAAAA YGMAAA AAAAxx +4077 8293 1 1 7 17 77 77 77 4077 4077 154 155 VAAAAA ZGMAAA HHHHxx +9118 8294 0 2 8 18 18 118 1118 4118 9118 36 37 SMAAAA AHMAAA OOOOxx +5375 8295 1 3 5 15 75 375 1375 375 5375 150 151 TYAAAA BHMAAA VVVVxx +178 8296 0 2 8 18 78 178 178 178 178 156 157 WGAAAA CHMAAA AAAAxx +1079 8297 1 3 9 19 79 79 1079 1079 1079 158 159 NPAAAA DHMAAA HHHHxx 
+4279 8298 1 3 9 19 79 279 279 4279 4279 158 159 PIAAAA EHMAAA OOOOxx +8436 8299 0 0 6 16 36 436 436 3436 8436 72 73 MMAAAA FHMAAA VVVVxx +1931 8300 1 3 1 11 31 931 1931 1931 1931 62 63 HWAAAA GHMAAA AAAAxx +2096 8301 0 0 6 16 96 96 96 2096 2096 192 193 QCAAAA HHMAAA HHHHxx +1638 8302 0 2 8 18 38 638 1638 1638 1638 76 77 ALAAAA IHMAAA OOOOxx +2788 8303 0 0 8 8 88 788 788 2788 2788 176 177 GDAAAA JHMAAA VVVVxx +4751 8304 1 3 1 11 51 751 751 4751 4751 102 103 TAAAAA KHMAAA AAAAxx +8824 8305 0 0 4 4 24 824 824 3824 8824 48 49 KBAAAA LHMAAA HHHHxx +3098 8306 0 2 8 18 98 98 1098 3098 3098 196 197 EPAAAA MHMAAA OOOOxx +4497 8307 1 1 7 17 97 497 497 4497 4497 194 195 ZQAAAA NHMAAA VVVVxx +5223 8308 1 3 3 3 23 223 1223 223 5223 46 47 XSAAAA OHMAAA AAAAxx +9212 8309 0 0 2 12 12 212 1212 4212 9212 24 25 IQAAAA PHMAAA HHHHxx +4265 8310 1 1 5 5 65 265 265 4265 4265 130 131 BIAAAA QHMAAA OOOOxx +6898 8311 0 2 8 18 98 898 898 1898 6898 196 197 IFAAAA RHMAAA VVVVxx +8808 8312 0 0 8 8 8 808 808 3808 8808 16 17 UAAAAA SHMAAA AAAAxx +5629 8313 1 1 9 9 29 629 1629 629 5629 58 59 NIAAAA THMAAA HHHHxx +3779 8314 1 3 9 19 79 779 1779 3779 3779 158 159 JPAAAA UHMAAA OOOOxx +4972 8315 0 0 2 12 72 972 972 4972 4972 144 145 GJAAAA VHMAAA VVVVxx +4511 8316 1 3 1 11 11 511 511 4511 4511 22 23 NRAAAA WHMAAA AAAAxx +6761 8317 1 1 1 1 61 761 761 1761 6761 122 123 BAAAAA XHMAAA HHHHxx +2335 8318 1 3 5 15 35 335 335 2335 2335 70 71 VLAAAA YHMAAA OOOOxx +732 8319 0 0 2 12 32 732 732 732 732 64 65 ECAAAA ZHMAAA VVVVxx +4757 8320 1 1 7 17 57 757 757 4757 4757 114 115 ZAAAAA AIMAAA AAAAxx +6624 8321 0 0 4 4 24 624 624 1624 6624 48 49 UUAAAA BIMAAA HHHHxx +5869 8322 1 1 9 9 69 869 1869 869 5869 138 139 TRAAAA CIMAAA OOOOxx +5842 8323 0 2 2 2 42 842 1842 842 5842 84 85 SQAAAA DIMAAA VVVVxx +5735 8324 1 3 5 15 35 735 1735 735 5735 70 71 PMAAAA EIMAAA AAAAxx +8276 8325 0 0 6 16 76 276 276 3276 8276 152 153 IGAAAA FIMAAA HHHHxx +7227 8326 1 3 7 7 27 227 1227 2227 7227 54 55 ZRAAAA GIMAAA OOOOxx +4923 8327 1 3 3 3 23 923 923 4923 4923 46 47 JHAAAA HIMAAA VVVVxx +9135 8328 1 3 5 15 35 135 1135 4135 9135 70 71 JNAAAA IIMAAA AAAAxx +5813 8329 1 1 3 13 13 813 1813 813 5813 26 27 PPAAAA JIMAAA HHHHxx +9697 8330 1 1 7 17 97 697 1697 4697 9697 194 195 ZIAAAA KIMAAA OOOOxx +3222 8331 0 2 2 2 22 222 1222 3222 3222 44 45 YTAAAA LIMAAA VVVVxx +2394 8332 0 2 4 14 94 394 394 2394 2394 188 189 COAAAA MIMAAA AAAAxx +5784 8333 0 0 4 4 84 784 1784 784 5784 168 169 MOAAAA NIMAAA HHHHxx +3652 8334 0 0 2 12 52 652 1652 3652 3652 104 105 MKAAAA OIMAAA OOOOxx +8175 8335 1 3 5 15 75 175 175 3175 8175 150 151 LCAAAA PIMAAA VVVVxx +7568 8336 0 0 8 8 68 568 1568 2568 7568 136 137 CFAAAA QIMAAA AAAAxx +6645 8337 1 1 5 5 45 645 645 1645 6645 90 91 PVAAAA RIMAAA HHHHxx +8176 8338 0 0 6 16 76 176 176 3176 8176 152 153 MCAAAA SIMAAA OOOOxx +530 8339 0 2 0 10 30 530 530 530 530 60 61 KUAAAA TIMAAA VVVVxx +5439 8340 1 3 9 19 39 439 1439 439 5439 78 79 FBAAAA UIMAAA AAAAxx +61 8341 1 1 1 1 61 61 61 61 61 122 123 JCAAAA VIMAAA HHHHxx +3951 8342 1 3 1 11 51 951 1951 3951 3951 102 103 ZVAAAA WIMAAA OOOOxx +5283 8343 1 3 3 3 83 283 1283 283 5283 166 167 FVAAAA XIMAAA VVVVxx +7226 8344 0 2 6 6 26 226 1226 2226 7226 52 53 YRAAAA YIMAAA AAAAxx +1954 8345 0 2 4 14 54 954 1954 1954 1954 108 109 EXAAAA ZIMAAA HHHHxx +334 8346 0 2 4 14 34 334 334 334 334 68 69 WMAAAA AJMAAA OOOOxx +3921 8347 1 1 1 1 21 921 1921 3921 3921 42 43 VUAAAA BJMAAA VVVVxx +6276 8348 0 0 6 16 76 276 276 1276 6276 152 153 KHAAAA CJMAAA AAAAxx +3378 8349 0 2 8 18 78 378 1378 3378 3378 156 157 YZAAAA 
DJMAAA HHHHxx +5236 8350 0 0 6 16 36 236 1236 236 5236 72 73 KTAAAA EJMAAA OOOOxx +7781 8351 1 1 1 1 81 781 1781 2781 7781 162 163 HNAAAA FJMAAA VVVVxx +8601 8352 1 1 1 1 1 601 601 3601 8601 2 3 VSAAAA GJMAAA AAAAxx +1473 8353 1 1 3 13 73 473 1473 1473 1473 146 147 REAAAA HJMAAA HHHHxx +3246 8354 0 2 6 6 46 246 1246 3246 3246 92 93 WUAAAA IJMAAA OOOOxx +3601 8355 1 1 1 1 1 601 1601 3601 3601 2 3 NIAAAA JJMAAA VVVVxx +6861 8356 1 1 1 1 61 861 861 1861 6861 122 123 XDAAAA KJMAAA AAAAxx +9032 8357 0 0 2 12 32 32 1032 4032 9032 64 65 KJAAAA LJMAAA HHHHxx +216 8358 0 0 6 16 16 216 216 216 216 32 33 IIAAAA MJMAAA OOOOxx +3824 8359 0 0 4 4 24 824 1824 3824 3824 48 49 CRAAAA NJMAAA VVVVxx +8486 8360 0 2 6 6 86 486 486 3486 8486 172 173 KOAAAA OJMAAA AAAAxx +276 8361 0 0 6 16 76 276 276 276 276 152 153 QKAAAA PJMAAA HHHHxx +1838 8362 0 2 8 18 38 838 1838 1838 1838 76 77 SSAAAA QJMAAA OOOOxx +6175 8363 1 3 5 15 75 175 175 1175 6175 150 151 NDAAAA RJMAAA VVVVxx +3719 8364 1 3 9 19 19 719 1719 3719 3719 38 39 BNAAAA SJMAAA AAAAxx +6958 8365 0 2 8 18 58 958 958 1958 6958 116 117 QHAAAA TJMAAA HHHHxx +6822 8366 0 2 2 2 22 822 822 1822 6822 44 45 KCAAAA UJMAAA OOOOxx +3318 8367 0 2 8 18 18 318 1318 3318 3318 36 37 QXAAAA VJMAAA VVVVxx +7222 8368 0 2 2 2 22 222 1222 2222 7222 44 45 URAAAA WJMAAA AAAAxx +85 8369 1 1 5 5 85 85 85 85 85 170 171 HDAAAA XJMAAA HHHHxx +5158 8370 0 2 8 18 58 158 1158 158 5158 116 117 KQAAAA YJMAAA OOOOxx +6360 8371 0 0 0 0 60 360 360 1360 6360 120 121 QKAAAA ZJMAAA VVVVxx +2599 8372 1 3 9 19 99 599 599 2599 2599 198 199 ZVAAAA AKMAAA AAAAxx +4002 8373 0 2 2 2 2 2 2 4002 4002 4 5 YXAAAA BKMAAA HHHHxx +6597 8374 1 1 7 17 97 597 597 1597 6597 194 195 TTAAAA CKMAAA OOOOxx +5762 8375 0 2 2 2 62 762 1762 762 5762 124 125 QNAAAA DKMAAA VVVVxx +8383 8376 1 3 3 3 83 383 383 3383 8383 166 167 LKAAAA EKMAAA AAAAxx +4686 8377 0 2 6 6 86 686 686 4686 4686 172 173 GYAAAA FKMAAA HHHHxx +5972 8378 0 0 2 12 72 972 1972 972 5972 144 145 SVAAAA GKMAAA OOOOxx +1432 8379 0 0 2 12 32 432 1432 1432 1432 64 65 CDAAAA HKMAAA VVVVxx +1601 8380 1 1 1 1 1 601 1601 1601 1601 2 3 PJAAAA IKMAAA AAAAxx +3012 8381 0 0 2 12 12 12 1012 3012 3012 24 25 WLAAAA JKMAAA HHHHxx +9345 8382 1 1 5 5 45 345 1345 4345 9345 90 91 LVAAAA KKMAAA OOOOxx +8869 8383 1 1 9 9 69 869 869 3869 8869 138 139 DDAAAA LKMAAA VVVVxx +6612 8384 0 0 2 12 12 612 612 1612 6612 24 25 IUAAAA MKMAAA AAAAxx +262 8385 0 2 2 2 62 262 262 262 262 124 125 CKAAAA NKMAAA HHHHxx +300 8386 0 0 0 0 0 300 300 300 300 0 1 OLAAAA OKMAAA OOOOxx +3045 8387 1 1 5 5 45 45 1045 3045 3045 90 91 DNAAAA PKMAAA VVVVxx +7252 8388 0 0 2 12 52 252 1252 2252 7252 104 105 YSAAAA QKMAAA AAAAxx +9099 8389 1 3 9 19 99 99 1099 4099 9099 198 199 ZLAAAA RKMAAA HHHHxx +9006 8390 0 2 6 6 6 6 1006 4006 9006 12 13 KIAAAA SKMAAA OOOOxx +3078 8391 0 2 8 18 78 78 1078 3078 3078 156 157 KOAAAA TKMAAA VVVVxx +5159 8392 1 3 9 19 59 159 1159 159 5159 118 119 LQAAAA UKMAAA AAAAxx +9329 8393 1 1 9 9 29 329 1329 4329 9329 58 59 VUAAAA VKMAAA HHHHxx +1393 8394 1 1 3 13 93 393 1393 1393 1393 186 187 PBAAAA WKMAAA OOOOxx +5894 8395 0 2 4 14 94 894 1894 894 5894 188 189 SSAAAA XKMAAA VVVVxx +11 8396 1 3 1 11 11 11 11 11 11 22 23 LAAAAA YKMAAA AAAAxx +5606 8397 0 2 6 6 6 606 1606 606 5606 12 13 QHAAAA ZKMAAA HHHHxx +5541 8398 1 1 1 1 41 541 1541 541 5541 82 83 DFAAAA ALMAAA OOOOxx +2689 8399 1 1 9 9 89 689 689 2689 2689 178 179 LZAAAA BLMAAA VVVVxx +1023 8400 1 3 3 3 23 23 1023 1023 1023 46 47 JNAAAA CLMAAA AAAAxx +8134 8401 0 2 4 14 34 134 134 3134 8134 68 69 WAAAAA DLMAAA HHHHxx +5923 8402 1 3 3 
3 23 923 1923 923 5923 46 47 VTAAAA ELMAAA OOOOxx +6056 8403 0 0 6 16 56 56 56 1056 6056 112 113 YYAAAA FLMAAA VVVVxx +653 8404 1 1 3 13 53 653 653 653 653 106 107 DZAAAA GLMAAA AAAAxx +367 8405 1 3 7 7 67 367 367 367 367 134 135 DOAAAA HLMAAA HHHHxx +1828 8406 0 0 8 8 28 828 1828 1828 1828 56 57 ISAAAA ILMAAA OOOOxx +6506 8407 0 2 6 6 6 506 506 1506 6506 12 13 GQAAAA JLMAAA VVVVxx +5772 8408 0 0 2 12 72 772 1772 772 5772 144 145 AOAAAA KLMAAA AAAAxx +8052 8409 0 0 2 12 52 52 52 3052 8052 104 105 SXAAAA LLMAAA HHHHxx +2633 8410 1 1 3 13 33 633 633 2633 2633 66 67 HXAAAA MLMAAA OOOOxx +4878 8411 0 2 8 18 78 878 878 4878 4878 156 157 QFAAAA NLMAAA VVVVxx +5621 8412 1 1 1 1 21 621 1621 621 5621 42 43 FIAAAA OLMAAA AAAAxx +41 8413 1 1 1 1 41 41 41 41 41 82 83 PBAAAA PLMAAA HHHHxx +4613 8414 1 1 3 13 13 613 613 4613 4613 26 27 LVAAAA QLMAAA OOOOxx +9389 8415 1 1 9 9 89 389 1389 4389 9389 178 179 DXAAAA RLMAAA VVVVxx +9414 8416 0 2 4 14 14 414 1414 4414 9414 28 29 CYAAAA SLMAAA AAAAxx +3583 8417 1 3 3 3 83 583 1583 3583 3583 166 167 VHAAAA TLMAAA HHHHxx +3454 8418 0 2 4 14 54 454 1454 3454 3454 108 109 WCAAAA ULMAAA OOOOxx +719 8419 1 3 9 19 19 719 719 719 719 38 39 RBAAAA VLMAAA VVVVxx +6188 8420 0 0 8 8 88 188 188 1188 6188 176 177 AEAAAA WLMAAA AAAAxx +2288 8421 0 0 8 8 88 288 288 2288 2288 176 177 AKAAAA XLMAAA HHHHxx +1287 8422 1 3 7 7 87 287 1287 1287 1287 174 175 NXAAAA YLMAAA OOOOxx +1397 8423 1 1 7 17 97 397 1397 1397 1397 194 195 TBAAAA ZLMAAA VVVVxx +7763 8424 1 3 3 3 63 763 1763 2763 7763 126 127 PMAAAA AMMAAA AAAAxx +5194 8425 0 2 4 14 94 194 1194 194 5194 188 189 URAAAA BMMAAA HHHHxx +3167 8426 1 3 7 7 67 167 1167 3167 3167 134 135 VRAAAA CMMAAA OOOOxx +9218 8427 0 2 8 18 18 218 1218 4218 9218 36 37 OQAAAA DMMAAA VVVVxx +2065 8428 1 1 5 5 65 65 65 2065 2065 130 131 LBAAAA EMMAAA AAAAxx +9669 8429 1 1 9 9 69 669 1669 4669 9669 138 139 XHAAAA FMMAAA HHHHxx +146 8430 0 2 6 6 46 146 146 146 146 92 93 QFAAAA GMMAAA OOOOxx +6141 8431 1 1 1 1 41 141 141 1141 6141 82 83 FCAAAA HMMAAA VVVVxx +2843 8432 1 3 3 3 43 843 843 2843 2843 86 87 JFAAAA IMMAAA AAAAxx +7934 8433 0 2 4 14 34 934 1934 2934 7934 68 69 ETAAAA JMMAAA HHHHxx +2536 8434 0 0 6 16 36 536 536 2536 2536 72 73 OTAAAA KMMAAA OOOOxx +7088 8435 0 0 8 8 88 88 1088 2088 7088 176 177 QMAAAA LMMAAA VVVVxx +2519 8436 1 3 9 19 19 519 519 2519 2519 38 39 XSAAAA MMMAAA AAAAxx +6650 8437 0 2 0 10 50 650 650 1650 6650 100 101 UVAAAA NMMAAA HHHHxx +3007 8438 1 3 7 7 7 7 1007 3007 3007 14 15 RLAAAA OMMAAA OOOOxx +4507 8439 1 3 7 7 7 507 507 4507 4507 14 15 JRAAAA PMMAAA VVVVxx +4892 8440 0 0 2 12 92 892 892 4892 4892 184 185 EGAAAA QMMAAA AAAAxx +7159 8441 1 3 9 19 59 159 1159 2159 7159 118 119 JPAAAA RMMAAA HHHHxx +3171 8442 1 3 1 11 71 171 1171 3171 3171 142 143 ZRAAAA SMMAAA OOOOxx +1080 8443 0 0 0 0 80 80 1080 1080 1080 160 161 OPAAAA TMMAAA VVVVxx +7248 8444 0 0 8 8 48 248 1248 2248 7248 96 97 USAAAA UMMAAA AAAAxx +7230 8445 0 2 0 10 30 230 1230 2230 7230 60 61 CSAAAA VMMAAA HHHHxx +3823 8446 1 3 3 3 23 823 1823 3823 3823 46 47 BRAAAA WMMAAA OOOOxx +5517 8447 1 1 7 17 17 517 1517 517 5517 34 35 FEAAAA XMMAAA VVVVxx +1482 8448 0 2 2 2 82 482 1482 1482 1482 164 165 AFAAAA YMMAAA AAAAxx +9953 8449 1 1 3 13 53 953 1953 4953 9953 106 107 VSAAAA ZMMAAA HHHHxx +2754 8450 0 2 4 14 54 754 754 2754 2754 108 109 YBAAAA ANMAAA OOOOxx +3875 8451 1 3 5 15 75 875 1875 3875 3875 150 151 BTAAAA BNMAAA VVVVxx +9800 8452 0 0 0 0 0 800 1800 4800 9800 0 1 YMAAAA CNMAAA AAAAxx +8819 8453 1 3 9 19 19 819 819 3819 8819 38 39 FBAAAA DNMAAA HHHHxx +8267 8454 1 3 7 
7 67 267 267 3267 8267 134 135 ZFAAAA ENMAAA OOOOxx +520 8455 0 0 0 0 20 520 520 520 520 40 41 AUAAAA FNMAAA VVVVxx +5770 8456 0 2 0 10 70 770 1770 770 5770 140 141 YNAAAA GNMAAA AAAAxx +2114 8457 0 2 4 14 14 114 114 2114 2114 28 29 IDAAAA HNMAAA HHHHxx +5045 8458 1 1 5 5 45 45 1045 45 5045 90 91 BMAAAA INMAAA OOOOxx +1094 8459 0 2 4 14 94 94 1094 1094 1094 188 189 CQAAAA JNMAAA VVVVxx +8786 8460 0 2 6 6 86 786 786 3786 8786 172 173 YZAAAA KNMAAA AAAAxx +353 8461 1 1 3 13 53 353 353 353 353 106 107 PNAAAA LNMAAA HHHHxx +290 8462 0 2 0 10 90 290 290 290 290 180 181 ELAAAA MNMAAA OOOOxx +3376 8463 0 0 6 16 76 376 1376 3376 3376 152 153 WZAAAA NNMAAA VVVVxx +9305 8464 1 1 5 5 5 305 1305 4305 9305 10 11 XTAAAA ONMAAA AAAAxx +186 8465 0 2 6 6 86 186 186 186 186 172 173 EHAAAA PNMAAA HHHHxx +4817 8466 1 1 7 17 17 817 817 4817 4817 34 35 HDAAAA QNMAAA OOOOxx +4638 8467 0 2 8 18 38 638 638 4638 4638 76 77 KWAAAA RNMAAA VVVVxx +3558 8468 0 2 8 18 58 558 1558 3558 3558 116 117 WGAAAA SNMAAA AAAAxx +9285 8469 1 1 5 5 85 285 1285 4285 9285 170 171 DTAAAA TNMAAA HHHHxx +848 8470 0 0 8 8 48 848 848 848 848 96 97 QGAAAA UNMAAA OOOOxx +8923 8471 1 3 3 3 23 923 923 3923 8923 46 47 FFAAAA VNMAAA VVVVxx +6826 8472 0 2 6 6 26 826 826 1826 6826 52 53 OCAAAA WNMAAA AAAAxx +5187 8473 1 3 7 7 87 187 1187 187 5187 174 175 NRAAAA XNMAAA HHHHxx +2398 8474 0 2 8 18 98 398 398 2398 2398 196 197 GOAAAA YNMAAA OOOOxx +7653 8475 1 1 3 13 53 653 1653 2653 7653 106 107 JIAAAA ZNMAAA VVVVxx +8835 8476 1 3 5 15 35 835 835 3835 8835 70 71 VBAAAA AOMAAA AAAAxx +5736 8477 0 0 6 16 36 736 1736 736 5736 72 73 QMAAAA BOMAAA HHHHxx +1238 8478 0 2 8 18 38 238 1238 1238 1238 76 77 QVAAAA COMAAA OOOOxx +6021 8479 1 1 1 1 21 21 21 1021 6021 42 43 PXAAAA DOMAAA VVVVxx +6815 8480 1 3 5 15 15 815 815 1815 6815 30 31 DCAAAA EOMAAA AAAAxx +2549 8481 1 1 9 9 49 549 549 2549 2549 98 99 BUAAAA FOMAAA HHHHxx +5657 8482 1 1 7 17 57 657 1657 657 5657 114 115 PJAAAA GOMAAA OOOOxx +6855 8483 1 3 5 15 55 855 855 1855 6855 110 111 RDAAAA HOMAAA VVVVxx +1225 8484 1 1 5 5 25 225 1225 1225 1225 50 51 DVAAAA IOMAAA AAAAxx +7452 8485 0 0 2 12 52 452 1452 2452 7452 104 105 QAAAAA JOMAAA HHHHxx +2479 8486 1 3 9 19 79 479 479 2479 2479 158 159 JRAAAA KOMAAA OOOOxx +7974 8487 0 2 4 14 74 974 1974 2974 7974 148 149 SUAAAA LOMAAA VVVVxx +1212 8488 0 0 2 12 12 212 1212 1212 1212 24 25 QUAAAA MOMAAA AAAAxx +8883 8489 1 3 3 3 83 883 883 3883 8883 166 167 RDAAAA NOMAAA HHHHxx +8150 8490 0 2 0 10 50 150 150 3150 8150 100 101 MBAAAA OOMAAA OOOOxx +3392 8491 0 0 2 12 92 392 1392 3392 3392 184 185 MAAAAA POMAAA VVVVxx +6774 8492 0 2 4 14 74 774 774 1774 6774 148 149 OAAAAA QOMAAA AAAAxx +904 8493 0 0 4 4 4 904 904 904 904 8 9 UIAAAA ROMAAA HHHHxx +5068 8494 0 0 8 8 68 68 1068 68 5068 136 137 YMAAAA SOMAAA OOOOxx +9339 8495 1 3 9 19 39 339 1339 4339 9339 78 79 FVAAAA TOMAAA VVVVxx +1062 8496 0 2 2 2 62 62 1062 1062 1062 124 125 WOAAAA UOMAAA AAAAxx +3841 8497 1 1 1 1 41 841 1841 3841 3841 82 83 TRAAAA VOMAAA HHHHxx +8924 8498 0 0 4 4 24 924 924 3924 8924 48 49 GFAAAA WOMAAA OOOOxx +9795 8499 1 3 5 15 95 795 1795 4795 9795 190 191 TMAAAA XOMAAA VVVVxx +3981 8500 1 1 1 1 81 981 1981 3981 3981 162 163 DXAAAA YOMAAA AAAAxx +4290 8501 0 2 0 10 90 290 290 4290 4290 180 181 AJAAAA ZOMAAA HHHHxx +1067 8502 1 3 7 7 67 67 1067 1067 1067 134 135 BPAAAA APMAAA OOOOxx +8679 8503 1 3 9 19 79 679 679 3679 8679 158 159 VVAAAA BPMAAA VVVVxx +2894 8504 0 2 4 14 94 894 894 2894 2894 188 189 IHAAAA CPMAAA AAAAxx +9248 8505 0 0 8 8 48 248 1248 4248 9248 96 97 SRAAAA DPMAAA HHHHxx +1072 8506 
0 0 2 12 72 72 1072 1072 1072 144 145 GPAAAA EPMAAA OOOOxx +3510 8507 0 2 0 10 10 510 1510 3510 3510 20 21 AFAAAA FPMAAA VVVVxx +6871 8508 1 3 1 11 71 871 871 1871 6871 142 143 HEAAAA GPMAAA AAAAxx +8701 8509 1 1 1 1 1 701 701 3701 8701 2 3 RWAAAA HPMAAA HHHHxx +8170 8510 0 2 0 10 70 170 170 3170 8170 140 141 GCAAAA IPMAAA OOOOxx +2730 8511 0 2 0 10 30 730 730 2730 2730 60 61 ABAAAA JPMAAA VVVVxx +2668 8512 0 0 8 8 68 668 668 2668 2668 136 137 QYAAAA KPMAAA AAAAxx +8723 8513 1 3 3 3 23 723 723 3723 8723 46 47 NXAAAA LPMAAA HHHHxx +3439 8514 1 3 9 19 39 439 1439 3439 3439 78 79 HCAAAA MPMAAA OOOOxx +6219 8515 1 3 9 19 19 219 219 1219 6219 38 39 FFAAAA NPMAAA VVVVxx +4264 8516 0 0 4 4 64 264 264 4264 4264 128 129 AIAAAA OPMAAA AAAAxx +3929 8517 1 1 9 9 29 929 1929 3929 3929 58 59 DVAAAA PPMAAA HHHHxx +7 8518 1 3 7 7 7 7 7 7 7 14 15 HAAAAA QPMAAA OOOOxx +3737 8519 1 1 7 17 37 737 1737 3737 3737 74 75 TNAAAA RPMAAA VVVVxx +358 8520 0 2 8 18 58 358 358 358 358 116 117 UNAAAA SPMAAA AAAAxx +5128 8521 0 0 8 8 28 128 1128 128 5128 56 57 GPAAAA TPMAAA HHHHxx +7353 8522 1 1 3 13 53 353 1353 2353 7353 106 107 VWAAAA UPMAAA OOOOxx +8758 8523 0 2 8 18 58 758 758 3758 8758 116 117 WYAAAA VPMAAA VVVVxx +7284 8524 0 0 4 4 84 284 1284 2284 7284 168 169 EUAAAA WPMAAA AAAAxx +4037 8525 1 1 7 17 37 37 37 4037 4037 74 75 HZAAAA XPMAAA HHHHxx +435 8526 1 3 5 15 35 435 435 435 435 70 71 TQAAAA YPMAAA OOOOxx +3580 8527 0 0 0 0 80 580 1580 3580 3580 160 161 SHAAAA ZPMAAA VVVVxx +4554 8528 0 2 4 14 54 554 554 4554 4554 108 109 ETAAAA AQMAAA AAAAxx +4337 8529 1 1 7 17 37 337 337 4337 4337 74 75 VKAAAA BQMAAA HHHHxx +512 8530 0 0 2 12 12 512 512 512 512 24 25 STAAAA CQMAAA OOOOxx +2032 8531 0 0 2 12 32 32 32 2032 2032 64 65 EAAAAA DQMAAA VVVVxx +1755 8532 1 3 5 15 55 755 1755 1755 1755 110 111 NPAAAA EQMAAA AAAAxx +9923 8533 1 3 3 3 23 923 1923 4923 9923 46 47 RRAAAA FQMAAA HHHHxx +3747 8534 1 3 7 7 47 747 1747 3747 3747 94 95 DOAAAA GQMAAA OOOOxx +27 8535 1 3 7 7 27 27 27 27 27 54 55 BBAAAA HQMAAA VVVVxx +3075 8536 1 3 5 15 75 75 1075 3075 3075 150 151 HOAAAA IQMAAA AAAAxx +6259 8537 1 3 9 19 59 259 259 1259 6259 118 119 TGAAAA JQMAAA HHHHxx +2940 8538 0 0 0 0 40 940 940 2940 2940 80 81 CJAAAA KQMAAA OOOOxx +5724 8539 0 0 4 4 24 724 1724 724 5724 48 49 EMAAAA LQMAAA VVVVxx +5638 8540 0 2 8 18 38 638 1638 638 5638 76 77 WIAAAA MQMAAA AAAAxx +479 8541 1 3 9 19 79 479 479 479 479 158 159 LSAAAA NQMAAA HHHHxx +4125 8542 1 1 5 5 25 125 125 4125 4125 50 51 RCAAAA OQMAAA OOOOxx +1525 8543 1 1 5 5 25 525 1525 1525 1525 50 51 RGAAAA PQMAAA VVVVxx +7529 8544 1 1 9 9 29 529 1529 2529 7529 58 59 PDAAAA QQMAAA AAAAxx +931 8545 1 3 1 11 31 931 931 931 931 62 63 VJAAAA RQMAAA HHHHxx +5175 8546 1 3 5 15 75 175 1175 175 5175 150 151 BRAAAA SQMAAA OOOOxx +6798 8547 0 2 8 18 98 798 798 1798 6798 196 197 MBAAAA TQMAAA VVVVxx +2111 8548 1 3 1 11 11 111 111 2111 2111 22 23 FDAAAA UQMAAA AAAAxx +6145 8549 1 1 5 5 45 145 145 1145 6145 90 91 JCAAAA VQMAAA HHHHxx +4712 8550 0 0 2 12 12 712 712 4712 4712 24 25 GZAAAA WQMAAA OOOOxx +3110 8551 0 2 0 10 10 110 1110 3110 3110 20 21 QPAAAA XQMAAA VVVVxx +97 8552 1 1 7 17 97 97 97 97 97 194 195 TDAAAA YQMAAA AAAAxx +758 8553 0 2 8 18 58 758 758 758 758 116 117 EDAAAA ZQMAAA HHHHxx +1895 8554 1 3 5 15 95 895 1895 1895 1895 190 191 XUAAAA ARMAAA OOOOxx +5289 8555 1 1 9 9 89 289 1289 289 5289 178 179 LVAAAA BRMAAA VVVVxx +5026 8556 0 2 6 6 26 26 1026 26 5026 52 53 ILAAAA CRMAAA AAAAxx +4725 8557 1 1 5 5 25 725 725 4725 4725 50 51 TZAAAA DRMAAA HHHHxx +1679 8558 1 3 9 19 79 679 1679 1679 1679 158 159 
PMAAAA ERMAAA OOOOxx +4433 8559 1 1 3 13 33 433 433 4433 4433 66 67 NOAAAA FRMAAA VVVVxx +5340 8560 0 0 0 0 40 340 1340 340 5340 80 81 KXAAAA GRMAAA AAAAxx +6340 8561 0 0 0 0 40 340 340 1340 6340 80 81 WJAAAA HRMAAA HHHHxx +3261 8562 1 1 1 1 61 261 1261 3261 3261 122 123 LVAAAA IRMAAA OOOOxx +8108 8563 0 0 8 8 8 108 108 3108 8108 16 17 WZAAAA JRMAAA VVVVxx +8785 8564 1 1 5 5 85 785 785 3785 8785 170 171 XZAAAA KRMAAA AAAAxx +7391 8565 1 3 1 11 91 391 1391 2391 7391 182 183 HYAAAA LRMAAA HHHHxx +1496 8566 0 0 6 16 96 496 1496 1496 1496 192 193 OFAAAA MRMAAA OOOOxx +1484 8567 0 0 4 4 84 484 1484 1484 1484 168 169 CFAAAA NRMAAA VVVVxx +5884 8568 0 0 4 4 84 884 1884 884 5884 168 169 ISAAAA ORMAAA AAAAxx +342 8569 0 2 2 2 42 342 342 342 342 84 85 ENAAAA PRMAAA HHHHxx +7659 8570 1 3 9 19 59 659 1659 2659 7659 118 119 PIAAAA QRMAAA OOOOxx +6635 8571 1 3 5 15 35 635 635 1635 6635 70 71 FVAAAA RRMAAA VVVVxx +8507 8572 1 3 7 7 7 507 507 3507 8507 14 15 FPAAAA SRMAAA AAAAxx +2583 8573 1 3 3 3 83 583 583 2583 2583 166 167 JVAAAA TRMAAA HHHHxx +6533 8574 1 1 3 13 33 533 533 1533 6533 66 67 HRAAAA URMAAA OOOOxx +5879 8575 1 3 9 19 79 879 1879 879 5879 158 159 DSAAAA VRMAAA VVVVxx +5511 8576 1 3 1 11 11 511 1511 511 5511 22 23 ZDAAAA WRMAAA AAAAxx +3682 8577 0 2 2 2 82 682 1682 3682 3682 164 165 QLAAAA XRMAAA HHHHxx +7182 8578 0 2 2 2 82 182 1182 2182 7182 164 165 GQAAAA YRMAAA OOOOxx +1409 8579 1 1 9 9 9 409 1409 1409 1409 18 19 FCAAAA ZRMAAA VVVVxx +3363 8580 1 3 3 3 63 363 1363 3363 3363 126 127 JZAAAA ASMAAA AAAAxx +729 8581 1 1 9 9 29 729 729 729 729 58 59 BCAAAA BSMAAA HHHHxx +5857 8582 1 1 7 17 57 857 1857 857 5857 114 115 HRAAAA CSMAAA OOOOxx +235 8583 1 3 5 15 35 235 235 235 235 70 71 BJAAAA DSMAAA VVVVxx +193 8584 1 1 3 13 93 193 193 193 193 186 187 LHAAAA ESMAAA AAAAxx +5586 8585 0 2 6 6 86 586 1586 586 5586 172 173 WGAAAA FSMAAA HHHHxx +6203 8586 1 3 3 3 3 203 203 1203 6203 6 7 PEAAAA GSMAAA OOOOxx +6795 8587 1 3 5 15 95 795 795 1795 6795 190 191 JBAAAA HSMAAA VVVVxx +3211 8588 1 3 1 11 11 211 1211 3211 3211 22 23 NTAAAA ISMAAA AAAAxx +9763 8589 1 3 3 3 63 763 1763 4763 9763 126 127 NLAAAA JSMAAA HHHHxx +9043 8590 1 3 3 3 43 43 1043 4043 9043 86 87 VJAAAA KSMAAA OOOOxx +2854 8591 0 2 4 14 54 854 854 2854 2854 108 109 UFAAAA LSMAAA VVVVxx +565 8592 1 1 5 5 65 565 565 565 565 130 131 TVAAAA MSMAAA AAAAxx +9284 8593 0 0 4 4 84 284 1284 4284 9284 168 169 CTAAAA NSMAAA HHHHxx +7886 8594 0 2 6 6 86 886 1886 2886 7886 172 173 IRAAAA OSMAAA OOOOxx +122 8595 0 2 2 2 22 122 122 122 122 44 45 SEAAAA PSMAAA VVVVxx +4934 8596 0 2 4 14 34 934 934 4934 4934 68 69 UHAAAA QSMAAA AAAAxx +1766 8597 0 2 6 6 66 766 1766 1766 1766 132 133 YPAAAA RSMAAA HHHHxx +2554 8598 0 2 4 14 54 554 554 2554 2554 108 109 GUAAAA SSMAAA OOOOxx +488 8599 0 0 8 8 88 488 488 488 488 176 177 USAAAA TSMAAA VVVVxx +825 8600 1 1 5 5 25 825 825 825 825 50 51 TFAAAA USMAAA AAAAxx +678 8601 0 2 8 18 78 678 678 678 678 156 157 CAAAAA VSMAAA HHHHxx +4543 8602 1 3 3 3 43 543 543 4543 4543 86 87 TSAAAA WSMAAA OOOOxx +1699 8603 1 3 9 19 99 699 1699 1699 1699 198 199 JNAAAA XSMAAA VVVVxx +3771 8604 1 3 1 11 71 771 1771 3771 3771 142 143 BPAAAA YSMAAA AAAAxx +1234 8605 0 2 4 14 34 234 1234 1234 1234 68 69 MVAAAA ZSMAAA HHHHxx +4152 8606 0 0 2 12 52 152 152 4152 4152 104 105 SDAAAA ATMAAA OOOOxx +1632 8607 0 0 2 12 32 632 1632 1632 1632 64 65 UKAAAA BTMAAA VVVVxx +4988 8608 0 0 8 8 88 988 988 4988 4988 176 177 WJAAAA CTMAAA AAAAxx +1980 8609 0 0 0 0 80 980 1980 1980 1980 160 161 EYAAAA DTMAAA HHHHxx +7479 8610 1 3 9 19 79 479 1479 2479 7479 158 
159 RBAAAA ETMAAA OOOOxx +2586 8611 0 2 6 6 86 586 586 2586 2586 172 173 MVAAAA FTMAAA VVVVxx +5433 8612 1 1 3 13 33 433 1433 433 5433 66 67 ZAAAAA GTMAAA AAAAxx +2261 8613 1 1 1 1 61 261 261 2261 2261 122 123 ZIAAAA HTMAAA HHHHxx +1180 8614 0 0 0 0 80 180 1180 1180 1180 160 161 KTAAAA ITMAAA OOOOxx +3938 8615 0 2 8 18 38 938 1938 3938 3938 76 77 MVAAAA JTMAAA VVVVxx +6714 8616 0 2 4 14 14 714 714 1714 6714 28 29 GYAAAA KTMAAA AAAAxx +2890 8617 0 2 0 10 90 890 890 2890 2890 180 181 EHAAAA LTMAAA HHHHxx +7379 8618 1 3 9 19 79 379 1379 2379 7379 158 159 VXAAAA MTMAAA OOOOxx +5896 8619 0 0 6 16 96 896 1896 896 5896 192 193 USAAAA NTMAAA VVVVxx +5949 8620 1 1 9 9 49 949 1949 949 5949 98 99 VUAAAA OTMAAA AAAAxx +3194 8621 0 2 4 14 94 194 1194 3194 3194 188 189 WSAAAA PTMAAA HHHHxx +9325 8622 1 1 5 5 25 325 1325 4325 9325 50 51 RUAAAA QTMAAA OOOOxx +9531 8623 1 3 1 11 31 531 1531 4531 9531 62 63 PCAAAA RTMAAA VVVVxx +711 8624 1 3 1 11 11 711 711 711 711 22 23 JBAAAA STMAAA AAAAxx +2450 8625 0 2 0 10 50 450 450 2450 2450 100 101 GQAAAA TTMAAA HHHHxx +1929 8626 1 1 9 9 29 929 1929 1929 1929 58 59 FWAAAA UTMAAA OOOOxx +6165 8627 1 1 5 5 65 165 165 1165 6165 130 131 DDAAAA VTMAAA VVVVxx +4050 8628 0 2 0 10 50 50 50 4050 4050 100 101 UZAAAA WTMAAA AAAAxx +9011 8629 1 3 1 11 11 11 1011 4011 9011 22 23 PIAAAA XTMAAA HHHHxx +7916 8630 0 0 6 16 16 916 1916 2916 7916 32 33 MSAAAA YTMAAA OOOOxx +9136 8631 0 0 6 16 36 136 1136 4136 9136 72 73 KNAAAA ZTMAAA VVVVxx +8782 8632 0 2 2 2 82 782 782 3782 8782 164 165 UZAAAA AUMAAA AAAAxx +8491 8633 1 3 1 11 91 491 491 3491 8491 182 183 POAAAA BUMAAA HHHHxx +5114 8634 0 2 4 14 14 114 1114 114 5114 28 29 SOAAAA CUMAAA OOOOxx +5815 8635 1 3 5 15 15 815 1815 815 5815 30 31 RPAAAA DUMAAA VVVVxx +5628 8636 0 0 8 8 28 628 1628 628 5628 56 57 MIAAAA EUMAAA AAAAxx +810 8637 0 2 0 10 10 810 810 810 810 20 21 EFAAAA FUMAAA HHHHxx +6178 8638 0 2 8 18 78 178 178 1178 6178 156 157 QDAAAA GUMAAA OOOOxx +2619 8639 1 3 9 19 19 619 619 2619 2619 38 39 TWAAAA HUMAAA VVVVxx +3340 8640 0 0 0 0 40 340 1340 3340 3340 80 81 MYAAAA IUMAAA AAAAxx +2491 8641 1 3 1 11 91 491 491 2491 2491 182 183 VRAAAA JUMAAA HHHHxx +3574 8642 0 2 4 14 74 574 1574 3574 3574 148 149 MHAAAA KUMAAA OOOOxx +6754 8643 0 2 4 14 54 754 754 1754 6754 108 109 UZAAAA LUMAAA VVVVxx +1566 8644 0 2 6 6 66 566 1566 1566 1566 132 133 GIAAAA MUMAAA AAAAxx +9174 8645 0 2 4 14 74 174 1174 4174 9174 148 149 WOAAAA NUMAAA HHHHxx +1520 8646 0 0 0 0 20 520 1520 1520 1520 40 41 MGAAAA OUMAAA OOOOxx +2691 8647 1 3 1 11 91 691 691 2691 2691 182 183 NZAAAA PUMAAA VVVVxx +6961 8648 1 1 1 1 61 961 961 1961 6961 122 123 THAAAA QUMAAA AAAAxx +5722 8649 0 2 2 2 22 722 1722 722 5722 44 45 CMAAAA RUMAAA HHHHxx +9707 8650 1 3 7 7 7 707 1707 4707 9707 14 15 JJAAAA SUMAAA OOOOxx +2891 8651 1 3 1 11 91 891 891 2891 2891 182 183 FHAAAA TUMAAA VVVVxx +341 8652 1 1 1 1 41 341 341 341 341 82 83 DNAAAA UUMAAA AAAAxx +4690 8653 0 2 0 10 90 690 690 4690 4690 180 181 KYAAAA VUMAAA HHHHxx +7841 8654 1 1 1 1 41 841 1841 2841 7841 82 83 PPAAAA WUMAAA OOOOxx +6615 8655 1 3 5 15 15 615 615 1615 6615 30 31 LUAAAA XUMAAA VVVVxx +9169 8656 1 1 9 9 69 169 1169 4169 9169 138 139 ROAAAA YUMAAA AAAAxx +6689 8657 1 1 9 9 89 689 689 1689 6689 178 179 HXAAAA ZUMAAA HHHHxx +8721 8658 1 1 1 1 21 721 721 3721 8721 42 43 LXAAAA AVMAAA OOOOxx +7508 8659 0 0 8 8 8 508 1508 2508 7508 16 17 UCAAAA BVMAAA VVVVxx +8631 8660 1 3 1 11 31 631 631 3631 8631 62 63 ZTAAAA CVMAAA AAAAxx +480 8661 0 0 0 0 80 480 480 480 480 160 161 MSAAAA DVMAAA HHHHxx +7094 8662 0 2 4 14 94 94 1094 
2094 7094 188 189 WMAAAA EVMAAA OOOOxx +319 8663 1 3 9 19 19 319 319 319 319 38 39 HMAAAA FVMAAA VVVVxx +9421 8664 1 1 1 1 21 421 1421 4421 9421 42 43 JYAAAA GVMAAA AAAAxx +4352 8665 0 0 2 12 52 352 352 4352 4352 104 105 KLAAAA HVMAAA HHHHxx +5019 8666 1 3 9 19 19 19 1019 19 5019 38 39 BLAAAA IVMAAA OOOOxx +3956 8667 0 0 6 16 56 956 1956 3956 3956 112 113 EWAAAA JVMAAA VVVVxx +114 8668 0 2 4 14 14 114 114 114 114 28 29 KEAAAA KVMAAA AAAAxx +1196 8669 0 0 6 16 96 196 1196 1196 1196 192 193 AUAAAA LVMAAA HHHHxx +1407 8670 1 3 7 7 7 407 1407 1407 1407 14 15 DCAAAA MVMAAA OOOOxx +7432 8671 0 0 2 12 32 432 1432 2432 7432 64 65 WZAAAA NVMAAA VVVVxx +3141 8672 1 1 1 1 41 141 1141 3141 3141 82 83 VQAAAA OVMAAA AAAAxx +2073 8673 1 1 3 13 73 73 73 2073 2073 146 147 TBAAAA PVMAAA HHHHxx +3400 8674 0 0 0 0 0 400 1400 3400 3400 0 1 UAAAAA QVMAAA OOOOxx +505 8675 1 1 5 5 5 505 505 505 505 10 11 LTAAAA RVMAAA VVVVxx +1263 8676 1 3 3 3 63 263 1263 1263 1263 126 127 PWAAAA SVMAAA AAAAxx +190 8677 0 2 0 10 90 190 190 190 190 180 181 IHAAAA TVMAAA HHHHxx +6686 8678 0 2 6 6 86 686 686 1686 6686 172 173 EXAAAA UVMAAA OOOOxx +9821 8679 1 1 1 1 21 821 1821 4821 9821 42 43 TNAAAA VVMAAA VVVVxx +1119 8680 1 3 9 19 19 119 1119 1119 1119 38 39 BRAAAA WVMAAA AAAAxx +2955 8681 1 3 5 15 55 955 955 2955 2955 110 111 RJAAAA XVMAAA HHHHxx +224 8682 0 0 4 4 24 224 224 224 224 48 49 QIAAAA YVMAAA OOOOxx +7562 8683 0 2 2 2 62 562 1562 2562 7562 124 125 WEAAAA ZVMAAA VVVVxx +8845 8684 1 1 5 5 45 845 845 3845 8845 90 91 FCAAAA AWMAAA AAAAxx +5405 8685 1 1 5 5 5 405 1405 405 5405 10 11 XZAAAA BWMAAA HHHHxx +9192 8686 0 0 2 12 92 192 1192 4192 9192 184 185 OPAAAA CWMAAA OOOOxx +4927 8687 1 3 7 7 27 927 927 4927 4927 54 55 NHAAAA DWMAAA VVVVxx +997 8688 1 1 7 17 97 997 997 997 997 194 195 JMAAAA EWMAAA AAAAxx +989 8689 1 1 9 9 89 989 989 989 989 178 179 BMAAAA FWMAAA HHHHxx +7258 8690 0 2 8 18 58 258 1258 2258 7258 116 117 ETAAAA GWMAAA OOOOxx +6899 8691 1 3 9 19 99 899 899 1899 6899 198 199 JFAAAA HWMAAA VVVVxx +1770 8692 0 2 0 10 70 770 1770 1770 1770 140 141 CQAAAA IWMAAA AAAAxx +4423 8693 1 3 3 3 23 423 423 4423 4423 46 47 DOAAAA JWMAAA HHHHxx +5671 8694 1 3 1 11 71 671 1671 671 5671 142 143 DKAAAA KWMAAA OOOOxx +8393 8695 1 1 3 13 93 393 393 3393 8393 186 187 VKAAAA LWMAAA VVVVxx +4355 8696 1 3 5 15 55 355 355 4355 4355 110 111 NLAAAA MWMAAA AAAAxx +3919 8697 1 3 9 19 19 919 1919 3919 3919 38 39 TUAAAA NWMAAA HHHHxx +338 8698 0 2 8 18 38 338 338 338 338 76 77 ANAAAA OWMAAA OOOOxx +5790 8699 0 2 0 10 90 790 1790 790 5790 180 181 SOAAAA PWMAAA VVVVxx +1452 8700 0 0 2 12 52 452 1452 1452 1452 104 105 WDAAAA QWMAAA AAAAxx +939 8701 1 3 9 19 39 939 939 939 939 78 79 DKAAAA RWMAAA HHHHxx +8913 8702 1 1 3 13 13 913 913 3913 8913 26 27 VEAAAA SWMAAA OOOOxx +7157 8703 1 1 7 17 57 157 1157 2157 7157 114 115 HPAAAA TWMAAA VVVVxx +7240 8704 0 0 0 0 40 240 1240 2240 7240 80 81 MSAAAA UWMAAA AAAAxx +3492 8705 0 0 2 12 92 492 1492 3492 3492 184 185 IEAAAA VWMAAA HHHHxx +3464 8706 0 0 4 4 64 464 1464 3464 3464 128 129 GDAAAA WWMAAA OOOOxx +388 8707 0 0 8 8 88 388 388 388 388 176 177 YOAAAA XWMAAA VVVVxx +4135 8708 1 3 5 15 35 135 135 4135 4135 70 71 BDAAAA YWMAAA AAAAxx +1194 8709 0 2 4 14 94 194 1194 1194 1194 188 189 YTAAAA ZWMAAA HHHHxx +5476 8710 0 0 6 16 76 476 1476 476 5476 152 153 QCAAAA AXMAAA OOOOxx +9844 8711 0 0 4 4 44 844 1844 4844 9844 88 89 QOAAAA BXMAAA VVVVxx +9364 8712 0 0 4 4 64 364 1364 4364 9364 128 129 EWAAAA CXMAAA AAAAxx +5238 8713 0 2 8 18 38 238 1238 238 5238 76 77 MTAAAA DXMAAA HHHHxx +3712 8714 0 0 2 12 12 712 
1712 3712 3712 24 25 UMAAAA EXMAAA OOOOxx +6189 8715 1 1 9 9 89 189 189 1189 6189 178 179 BEAAAA FXMAAA VVVVxx +5257 8716 1 1 7 17 57 257 1257 257 5257 114 115 FUAAAA GXMAAA AAAAxx +81 8717 1 1 1 1 81 81 81 81 81 162 163 DDAAAA HXMAAA HHHHxx +3289 8718 1 1 9 9 89 289 1289 3289 3289 178 179 NWAAAA IXMAAA OOOOxx +1177 8719 1 1 7 17 77 177 1177 1177 1177 154 155 HTAAAA JXMAAA VVVVxx +5038 8720 0 2 8 18 38 38 1038 38 5038 76 77 ULAAAA KXMAAA AAAAxx +325 8721 1 1 5 5 25 325 325 325 325 50 51 NMAAAA LXMAAA HHHHxx +7221 8722 1 1 1 1 21 221 1221 2221 7221 42 43 TRAAAA MXMAAA OOOOxx +7123 8723 1 3 3 3 23 123 1123 2123 7123 46 47 ZNAAAA NXMAAA VVVVxx +6364 8724 0 0 4 4 64 364 364 1364 6364 128 129 UKAAAA OXMAAA AAAAxx +4468 8725 0 0 8 8 68 468 468 4468 4468 136 137 WPAAAA PXMAAA HHHHxx +9185 8726 1 1 5 5 85 185 1185 4185 9185 170 171 HPAAAA QXMAAA OOOOxx +4158 8727 0 2 8 18 58 158 158 4158 4158 116 117 YDAAAA RXMAAA VVVVxx +9439 8728 1 3 9 19 39 439 1439 4439 9439 78 79 BZAAAA SXMAAA AAAAxx +7759 8729 1 3 9 19 59 759 1759 2759 7759 118 119 LMAAAA TXMAAA HHHHxx +3325 8730 1 1 5 5 25 325 1325 3325 3325 50 51 XXAAAA UXMAAA OOOOxx +7991 8731 1 3 1 11 91 991 1991 2991 7991 182 183 JVAAAA VXMAAA VVVVxx +1650 8732 0 2 0 10 50 650 1650 1650 1650 100 101 MLAAAA WXMAAA AAAAxx +8395 8733 1 3 5 15 95 395 395 3395 8395 190 191 XKAAAA XXMAAA HHHHxx +286 8734 0 2 6 6 86 286 286 286 286 172 173 ALAAAA YXMAAA OOOOxx +1507 8735 1 3 7 7 7 507 1507 1507 1507 14 15 ZFAAAA ZXMAAA VVVVxx +4122 8736 0 2 2 2 22 122 122 4122 4122 44 45 OCAAAA AYMAAA AAAAxx +2625 8737 1 1 5 5 25 625 625 2625 2625 50 51 ZWAAAA BYMAAA HHHHxx +1140 8738 0 0 0 0 40 140 1140 1140 1140 80 81 WRAAAA CYMAAA OOOOxx +5262 8739 0 2 2 2 62 262 1262 262 5262 124 125 KUAAAA DYMAAA VVVVxx +4919 8740 1 3 9 19 19 919 919 4919 4919 38 39 FHAAAA EYMAAA AAAAxx +7266 8741 0 2 6 6 66 266 1266 2266 7266 132 133 MTAAAA FYMAAA HHHHxx +630 8742 0 2 0 10 30 630 630 630 630 60 61 GYAAAA GYMAAA OOOOxx +2129 8743 1 1 9 9 29 129 129 2129 2129 58 59 XDAAAA HYMAAA VVVVxx +9552 8744 0 0 2 12 52 552 1552 4552 9552 104 105 KDAAAA IYMAAA AAAAxx +3018 8745 0 2 8 18 18 18 1018 3018 3018 36 37 CMAAAA JYMAAA HHHHxx +7145 8746 1 1 5 5 45 145 1145 2145 7145 90 91 VOAAAA KYMAAA OOOOxx +1633 8747 1 1 3 13 33 633 1633 1633 1633 66 67 VKAAAA LYMAAA VVVVxx +7957 8748 1 1 7 17 57 957 1957 2957 7957 114 115 BUAAAA MYMAAA AAAAxx +774 8749 0 2 4 14 74 774 774 774 774 148 149 UDAAAA NYMAAA HHHHxx +9371 8750 1 3 1 11 71 371 1371 4371 9371 142 143 LWAAAA OYMAAA OOOOxx +6007 8751 1 3 7 7 7 7 7 1007 6007 14 15 BXAAAA PYMAAA VVVVxx +5277 8752 1 1 7 17 77 277 1277 277 5277 154 155 ZUAAAA QYMAAA AAAAxx +9426 8753 0 2 6 6 26 426 1426 4426 9426 52 53 OYAAAA RYMAAA HHHHxx +9190 8754 0 2 0 10 90 190 1190 4190 9190 180 181 MPAAAA SYMAAA OOOOxx +8996 8755 0 0 6 16 96 996 996 3996 8996 192 193 AIAAAA TYMAAA VVVVxx +3409 8756 1 1 9 9 9 409 1409 3409 3409 18 19 DBAAAA UYMAAA AAAAxx +7212 8757 0 0 2 12 12 212 1212 2212 7212 24 25 KRAAAA VYMAAA HHHHxx +416 8758 0 0 6 16 16 416 416 416 416 32 33 AQAAAA WYMAAA OOOOxx +7211 8759 1 3 1 11 11 211 1211 2211 7211 22 23 JRAAAA XYMAAA VVVVxx +7454 8760 0 2 4 14 54 454 1454 2454 7454 108 109 SAAAAA YYMAAA AAAAxx +8417 8761 1 1 7 17 17 417 417 3417 8417 34 35 TLAAAA ZYMAAA HHHHxx +5562 8762 0 2 2 2 62 562 1562 562 5562 124 125 YFAAAA AZMAAA OOOOxx +4996 8763 0 0 6 16 96 996 996 4996 4996 192 193 EKAAAA BZMAAA VVVVxx +5718 8764 0 2 8 18 18 718 1718 718 5718 36 37 YLAAAA CZMAAA AAAAxx +7838 8765 0 2 8 18 38 838 1838 2838 7838 76 77 MPAAAA DZMAAA HHHHxx +7715 8766 1 3 5 15 15 
715 1715 2715 7715 30 31 TKAAAA EZMAAA OOOOxx +2780 8767 0 0 0 0 80 780 780 2780 2780 160 161 YCAAAA FZMAAA VVVVxx +1013 8768 1 1 3 13 13 13 1013 1013 1013 26 27 ZMAAAA GZMAAA AAAAxx +8465 8769 1 1 5 5 65 465 465 3465 8465 130 131 PNAAAA HZMAAA HHHHxx +7976 8770 0 0 6 16 76 976 1976 2976 7976 152 153 UUAAAA IZMAAA OOOOxx +7150 8771 0 2 0 10 50 150 1150 2150 7150 100 101 APAAAA JZMAAA VVVVxx +6471 8772 1 3 1 11 71 471 471 1471 6471 142 143 XOAAAA KZMAAA AAAAxx +1927 8773 1 3 7 7 27 927 1927 1927 1927 54 55 DWAAAA LZMAAA HHHHxx +227 8774 1 3 7 7 27 227 227 227 227 54 55 TIAAAA MZMAAA OOOOxx +6462 8775 0 2 2 2 62 462 462 1462 6462 124 125 OOAAAA NZMAAA VVVVxx +5227 8776 1 3 7 7 27 227 1227 227 5227 54 55 BTAAAA OZMAAA AAAAxx +1074 8777 0 2 4 14 74 74 1074 1074 1074 148 149 IPAAAA PZMAAA HHHHxx +9448 8778 0 0 8 8 48 448 1448 4448 9448 96 97 KZAAAA QZMAAA OOOOxx +4459 8779 1 3 9 19 59 459 459 4459 4459 118 119 NPAAAA RZMAAA VVVVxx +2478 8780 0 2 8 18 78 478 478 2478 2478 156 157 IRAAAA SZMAAA AAAAxx +5005 8781 1 1 5 5 5 5 1005 5 5005 10 11 NKAAAA TZMAAA HHHHxx +2418 8782 0 2 8 18 18 418 418 2418 2418 36 37 APAAAA UZMAAA OOOOxx +6991 8783 1 3 1 11 91 991 991 1991 6991 182 183 XIAAAA VZMAAA VVVVxx +4729 8784 1 1 9 9 29 729 729 4729 4729 58 59 XZAAAA WZMAAA AAAAxx +3548 8785 0 0 8 8 48 548 1548 3548 3548 96 97 MGAAAA XZMAAA HHHHxx +9616 8786 0 0 6 16 16 616 1616 4616 9616 32 33 WFAAAA YZMAAA OOOOxx +2901 8787 1 1 1 1 1 901 901 2901 2901 2 3 PHAAAA ZZMAAA VVVVxx +10 8788 0 2 0 10 10 10 10 10 10 20 21 KAAAAA AANAAA AAAAxx +2637 8789 1 1 7 17 37 637 637 2637 2637 74 75 LXAAAA BANAAA HHHHxx +6747 8790 1 3 7 7 47 747 747 1747 6747 94 95 NZAAAA CANAAA OOOOxx +797 8791 1 1 7 17 97 797 797 797 797 194 195 REAAAA DANAAA VVVVxx +7609 8792 1 1 9 9 9 609 1609 2609 7609 18 19 RGAAAA EANAAA AAAAxx +8290 8793 0 2 0 10 90 290 290 3290 8290 180 181 WGAAAA FANAAA HHHHxx +8765 8794 1 1 5 5 65 765 765 3765 8765 130 131 DZAAAA GANAAA OOOOxx +8053 8795 1 1 3 13 53 53 53 3053 8053 106 107 TXAAAA HANAAA VVVVxx +5602 8796 0 2 2 2 2 602 1602 602 5602 4 5 MHAAAA IANAAA AAAAxx +3672 8797 0 0 2 12 72 672 1672 3672 3672 144 145 GLAAAA JANAAA HHHHxx +7513 8798 1 1 3 13 13 513 1513 2513 7513 26 27 ZCAAAA KANAAA OOOOxx +3462 8799 0 2 2 2 62 462 1462 3462 3462 124 125 EDAAAA LANAAA VVVVxx +4457 8800 1 1 7 17 57 457 457 4457 4457 114 115 LPAAAA MANAAA AAAAxx +6547 8801 1 3 7 7 47 547 547 1547 6547 94 95 VRAAAA NANAAA HHHHxx +7417 8802 1 1 7 17 17 417 1417 2417 7417 34 35 HZAAAA OANAAA OOOOxx +8641 8803 1 1 1 1 41 641 641 3641 8641 82 83 JUAAAA PANAAA VVVVxx +149 8804 1 1 9 9 49 149 149 149 149 98 99 TFAAAA QANAAA AAAAxx +5041 8805 1 1 1 1 41 41 1041 41 5041 82 83 XLAAAA RANAAA HHHHxx +9232 8806 0 0 2 12 32 232 1232 4232 9232 64 65 CRAAAA SANAAA OOOOxx +3603 8807 1 3 3 3 3 603 1603 3603 3603 6 7 PIAAAA TANAAA VVVVxx +2792 8808 0 0 2 12 92 792 792 2792 2792 184 185 KDAAAA UANAAA AAAAxx +6620 8809 0 0 0 0 20 620 620 1620 6620 40 41 QUAAAA VANAAA HHHHxx +4000 8810 0 0 0 0 0 0 0 4000 4000 0 1 WXAAAA WANAAA OOOOxx +659 8811 1 3 9 19 59 659 659 659 659 118 119 JZAAAA XANAAA VVVVxx +8174 8812 0 2 4 14 74 174 174 3174 8174 148 149 KCAAAA YANAAA AAAAxx +4599 8813 1 3 9 19 99 599 599 4599 4599 198 199 XUAAAA ZANAAA HHHHxx +7851 8814 1 3 1 11 51 851 1851 2851 7851 102 103 ZPAAAA ABNAAA OOOOxx +6284 8815 0 0 4 4 84 284 284 1284 6284 168 169 SHAAAA BBNAAA VVVVxx +7116 8816 0 0 6 16 16 116 1116 2116 7116 32 33 SNAAAA CBNAAA AAAAxx +5595 8817 1 3 5 15 95 595 1595 595 5595 190 191 FHAAAA DBNAAA HHHHxx +2903 8818 1 3 3 3 3 903 903 2903 2903 6 7 
RHAAAA EBNAAA OOOOxx +5948 8819 0 0 8 8 48 948 1948 948 5948 96 97 UUAAAA FBNAAA VVVVxx +225 8820 1 1 5 5 25 225 225 225 225 50 51 RIAAAA GBNAAA AAAAxx +524 8821 0 0 4 4 24 524 524 524 524 48 49 EUAAAA HBNAAA HHHHxx +7639 8822 1 3 9 19 39 639 1639 2639 7639 78 79 VHAAAA IBNAAA OOOOxx +7297 8823 1 1 7 17 97 297 1297 2297 7297 194 195 RUAAAA JBNAAA VVVVxx +2606 8824 0 2 6 6 6 606 606 2606 2606 12 13 GWAAAA KBNAAA AAAAxx +4771 8825 1 3 1 11 71 771 771 4771 4771 142 143 NBAAAA LBNAAA HHHHxx +8162 8826 0 2 2 2 62 162 162 3162 8162 124 125 YBAAAA MBNAAA OOOOxx +8999 8827 1 3 9 19 99 999 999 3999 8999 198 199 DIAAAA NBNAAA VVVVxx +2309 8828 1 1 9 9 9 309 309 2309 2309 18 19 VKAAAA OBNAAA AAAAxx +3594 8829 0 2 4 14 94 594 1594 3594 3594 188 189 GIAAAA PBNAAA HHHHxx +6092 8830 0 0 2 12 92 92 92 1092 6092 184 185 IAAAAA QBNAAA OOOOxx +7467 8831 1 3 7 7 67 467 1467 2467 7467 134 135 FBAAAA RBNAAA VVVVxx +6986 8832 0 2 6 6 86 986 986 1986 6986 172 173 SIAAAA SBNAAA AAAAxx +9898 8833 0 2 8 18 98 898 1898 4898 9898 196 197 SQAAAA TBNAAA HHHHxx +9578 8834 0 2 8 18 78 578 1578 4578 9578 156 157 KEAAAA UBNAAA OOOOxx +156 8835 0 0 6 16 56 156 156 156 156 112 113 AGAAAA VBNAAA VVVVxx +5810 8836 0 2 0 10 10 810 1810 810 5810 20 21 MPAAAA WBNAAA AAAAxx +790 8837 0 2 0 10 90 790 790 790 790 180 181 KEAAAA XBNAAA HHHHxx +6840 8838 0 0 0 0 40 840 840 1840 6840 80 81 CDAAAA YBNAAA OOOOxx +6725 8839 1 1 5 5 25 725 725 1725 6725 50 51 RYAAAA ZBNAAA VVVVxx +5528 8840 0 0 8 8 28 528 1528 528 5528 56 57 QEAAAA ACNAAA AAAAxx +4120 8841 0 0 0 0 20 120 120 4120 4120 40 41 MCAAAA BCNAAA HHHHxx +6694 8842 0 2 4 14 94 694 694 1694 6694 188 189 MXAAAA CCNAAA OOOOxx +3552 8843 0 0 2 12 52 552 1552 3552 3552 104 105 QGAAAA DCNAAA VVVVxx +1478 8844 0 2 8 18 78 478 1478 1478 1478 156 157 WEAAAA ECNAAA AAAAxx +8084 8845 0 0 4 4 84 84 84 3084 8084 168 169 YYAAAA FCNAAA HHHHxx +7578 8846 0 2 8 18 78 578 1578 2578 7578 156 157 MFAAAA GCNAAA OOOOxx +6314 8847 0 2 4 14 14 314 314 1314 6314 28 29 WIAAAA HCNAAA VVVVxx +6123 8848 1 3 3 3 23 123 123 1123 6123 46 47 NBAAAA ICNAAA AAAAxx +9443 8849 1 3 3 3 43 443 1443 4443 9443 86 87 FZAAAA JCNAAA HHHHxx +9628 8850 0 0 8 8 28 628 1628 4628 9628 56 57 IGAAAA KCNAAA OOOOxx +8508 8851 0 0 8 8 8 508 508 3508 8508 16 17 GPAAAA LCNAAA VVVVxx +5552 8852 0 0 2 12 52 552 1552 552 5552 104 105 OFAAAA MCNAAA AAAAxx +5327 8853 1 3 7 7 27 327 1327 327 5327 54 55 XWAAAA NCNAAA HHHHxx +7771 8854 1 3 1 11 71 771 1771 2771 7771 142 143 XMAAAA OCNAAA OOOOxx +8932 8855 0 0 2 12 32 932 932 3932 8932 64 65 OFAAAA PCNAAA VVVVxx +3526 8856 0 2 6 6 26 526 1526 3526 3526 52 53 QFAAAA QCNAAA AAAAxx +4340 8857 0 0 0 0 40 340 340 4340 4340 80 81 YKAAAA RCNAAA HHHHxx +9419 8858 1 3 9 19 19 419 1419 4419 9419 38 39 HYAAAA SCNAAA OOOOxx +8421 8859 1 1 1 1 21 421 421 3421 8421 42 43 XLAAAA TCNAAA VVVVxx +7431 8860 1 3 1 11 31 431 1431 2431 7431 62 63 VZAAAA UCNAAA AAAAxx +172 8861 0 0 2 12 72 172 172 172 172 144 145 QGAAAA VCNAAA HHHHxx +3279 8862 1 3 9 19 79 279 1279 3279 3279 158 159 DWAAAA WCNAAA OOOOxx +1508 8863 0 0 8 8 8 508 1508 1508 1508 16 17 AGAAAA XCNAAA VVVVxx +7091 8864 1 3 1 11 91 91 1091 2091 7091 182 183 TMAAAA YCNAAA AAAAxx +1419 8865 1 3 9 19 19 419 1419 1419 1419 38 39 PCAAAA ZCNAAA HHHHxx +3032 8866 0 0 2 12 32 32 1032 3032 3032 64 65 QMAAAA ADNAAA OOOOxx +8683 8867 1 3 3 3 83 683 683 3683 8683 166 167 ZVAAAA BDNAAA VVVVxx +4763 8868 1 3 3 3 63 763 763 4763 4763 126 127 FBAAAA CDNAAA AAAAxx +4424 8869 0 0 4 4 24 424 424 4424 4424 48 49 EOAAAA DDNAAA HHHHxx +8640 8870 0 0 0 0 40 640 640 3640 8640 80 81 
IUAAAA EDNAAA OOOOxx +7187 8871 1 3 7 7 87 187 1187 2187 7187 174 175 LQAAAA FDNAAA VVVVxx +6247 8872 1 3 7 7 47 247 247 1247 6247 94 95 HGAAAA GDNAAA AAAAxx +7340 8873 0 0 0 0 40 340 1340 2340 7340 80 81 IWAAAA HDNAAA HHHHxx +182 8874 0 2 2 2 82 182 182 182 182 164 165 AHAAAA IDNAAA OOOOxx +2948 8875 0 0 8 8 48 948 948 2948 2948 96 97 KJAAAA JDNAAA VVVVxx +9462 8876 0 2 2 2 62 462 1462 4462 9462 124 125 YZAAAA KDNAAA AAAAxx +5997 8877 1 1 7 17 97 997 1997 997 5997 194 195 RWAAAA LDNAAA HHHHxx +5608 8878 0 0 8 8 8 608 1608 608 5608 16 17 SHAAAA MDNAAA OOOOxx +1472 8879 0 0 2 12 72 472 1472 1472 1472 144 145 QEAAAA NDNAAA VVVVxx +277 8880 1 1 7 17 77 277 277 277 277 154 155 RKAAAA ODNAAA AAAAxx +4807 8881 1 3 7 7 7 807 807 4807 4807 14 15 XCAAAA PDNAAA HHHHxx +4969 8882 1 1 9 9 69 969 969 4969 4969 138 139 DJAAAA QDNAAA OOOOxx +5611 8883 1 3 1 11 11 611 1611 611 5611 22 23 VHAAAA RDNAAA VVVVxx +372 8884 0 0 2 12 72 372 372 372 372 144 145 IOAAAA SDNAAA AAAAxx +6666 8885 0 2 6 6 66 666 666 1666 6666 132 133 KWAAAA TDNAAA HHHHxx +476 8886 0 0 6 16 76 476 476 476 476 152 153 ISAAAA UDNAAA OOOOxx +5225 8887 1 1 5 5 25 225 1225 225 5225 50 51 ZSAAAA VDNAAA VVVVxx +5143 8888 1 3 3 3 43 143 1143 143 5143 86 87 VPAAAA WDNAAA AAAAxx +1853 8889 1 1 3 13 53 853 1853 1853 1853 106 107 HTAAAA XDNAAA HHHHxx +675 8890 1 3 5 15 75 675 675 675 675 150 151 ZZAAAA YDNAAA OOOOxx +5643 8891 1 3 3 3 43 643 1643 643 5643 86 87 BJAAAA ZDNAAA VVVVxx +5317 8892 1 1 7 17 17 317 1317 317 5317 34 35 NWAAAA AENAAA AAAAxx +8102 8893 0 2 2 2 2 102 102 3102 8102 4 5 QZAAAA BENAAA HHHHxx +978 8894 0 2 8 18 78 978 978 978 978 156 157 QLAAAA CENAAA OOOOxx +4620 8895 0 0 0 0 20 620 620 4620 4620 40 41 SVAAAA DENAAA VVVVxx +151 8896 1 3 1 11 51 151 151 151 151 102 103 VFAAAA EENAAA AAAAxx +972 8897 0 0 2 12 72 972 972 972 972 144 145 KLAAAA FENAAA HHHHxx +6820 8898 0 0 0 0 20 820 820 1820 6820 40 41 ICAAAA GENAAA OOOOxx +7387 8899 1 3 7 7 87 387 1387 2387 7387 174 175 DYAAAA HENAAA VVVVxx +9634 8900 0 2 4 14 34 634 1634 4634 9634 68 69 OGAAAA IENAAA AAAAxx +6308 8901 0 0 8 8 8 308 308 1308 6308 16 17 QIAAAA JENAAA HHHHxx +8323 8902 1 3 3 3 23 323 323 3323 8323 46 47 DIAAAA KENAAA OOOOxx +6672 8903 0 0 2 12 72 672 672 1672 6672 144 145 QWAAAA LENAAA VVVVxx +8283 8904 1 3 3 3 83 283 283 3283 8283 166 167 PGAAAA MENAAA AAAAxx +7996 8905 0 0 6 16 96 996 1996 2996 7996 192 193 OVAAAA NENAAA HHHHxx +6488 8906 0 0 8 8 88 488 488 1488 6488 176 177 OPAAAA OENAAA OOOOxx +2365 8907 1 1 5 5 65 365 365 2365 2365 130 131 ZMAAAA PENAAA VVVVxx +9746 8908 0 2 6 6 46 746 1746 4746 9746 92 93 WKAAAA QENAAA AAAAxx +8605 8909 1 1 5 5 5 605 605 3605 8605 10 11 ZSAAAA RENAAA HHHHxx +3342 8910 0 2 2 2 42 342 1342 3342 3342 84 85 OYAAAA SENAAA OOOOxx +8429 8911 1 1 9 9 29 429 429 3429 8429 58 59 FMAAAA TENAAA VVVVxx +1162 8912 0 2 2 2 62 162 1162 1162 1162 124 125 SSAAAA UENAAA AAAAxx +531 8913 1 3 1 11 31 531 531 531 531 62 63 LUAAAA VENAAA HHHHxx +8408 8914 0 0 8 8 8 408 408 3408 8408 16 17 KLAAAA WENAAA OOOOxx +8862 8915 0 2 2 2 62 862 862 3862 8862 124 125 WCAAAA XENAAA VVVVxx +5843 8916 1 3 3 3 43 843 1843 843 5843 86 87 TQAAAA YENAAA AAAAxx +8704 8917 0 0 4 4 4 704 704 3704 8704 8 9 UWAAAA ZENAAA HHHHxx +7070 8918 0 2 0 10 70 70 1070 2070 7070 140 141 YLAAAA AFNAAA OOOOxx +9119 8919 1 3 9 19 19 119 1119 4119 9119 38 39 TMAAAA BFNAAA VVVVxx +8344 8920 0 0 4 4 44 344 344 3344 8344 88 89 YIAAAA CFNAAA AAAAxx +8979 8921 1 3 9 19 79 979 979 3979 8979 158 159 JHAAAA DFNAAA HHHHxx +2971 8922 1 3 1 11 71 971 971 2971 2971 142 143 HKAAAA EFNAAA OOOOxx 
+7700 8923 0 0 0 0 0 700 1700 2700 7700 0 1 EKAAAA FFNAAA VVVVxx +8280 8924 0 0 0 0 80 280 280 3280 8280 160 161 MGAAAA GFNAAA AAAAxx +9096 8925 0 0 6 16 96 96 1096 4096 9096 192 193 WLAAAA HFNAAA HHHHxx +99 8926 1 3 9 19 99 99 99 99 99 198 199 VDAAAA IFNAAA OOOOxx +6696 8927 0 0 6 16 96 696 696 1696 6696 192 193 OXAAAA JFNAAA VVVVxx +9490 8928 0 2 0 10 90 490 1490 4490 9490 180 181 ABAAAA KFNAAA AAAAxx +9073 8929 1 1 3 13 73 73 1073 4073 9073 146 147 ZKAAAA LFNAAA HHHHxx +1861 8930 1 1 1 1 61 861 1861 1861 1861 122 123 PTAAAA MFNAAA OOOOxx +4413 8931 1 1 3 13 13 413 413 4413 4413 26 27 TNAAAA NFNAAA VVVVxx +6002 8932 0 2 2 2 2 2 2 1002 6002 4 5 WWAAAA OFNAAA AAAAxx +439 8933 1 3 9 19 39 439 439 439 439 78 79 XQAAAA PFNAAA HHHHxx +5449 8934 1 1 9 9 49 449 1449 449 5449 98 99 PBAAAA QFNAAA OOOOxx +9737 8935 1 1 7 17 37 737 1737 4737 9737 74 75 NKAAAA RFNAAA VVVVxx +1898 8936 0 2 8 18 98 898 1898 1898 1898 196 197 AVAAAA SFNAAA AAAAxx +4189 8937 1 1 9 9 89 189 189 4189 4189 178 179 DFAAAA TFNAAA HHHHxx +1408 8938 0 0 8 8 8 408 1408 1408 1408 16 17 ECAAAA UFNAAA OOOOxx +394 8939 0 2 4 14 94 394 394 394 394 188 189 EPAAAA VFNAAA VVVVxx +1935 8940 1 3 5 15 35 935 1935 1935 1935 70 71 LWAAAA WFNAAA AAAAxx +3965 8941 1 1 5 5 65 965 1965 3965 3965 130 131 NWAAAA XFNAAA HHHHxx +6821 8942 1 1 1 1 21 821 821 1821 6821 42 43 JCAAAA YFNAAA OOOOxx +349 8943 1 1 9 9 49 349 349 349 349 98 99 LNAAAA ZFNAAA VVVVxx +8428 8944 0 0 8 8 28 428 428 3428 8428 56 57 EMAAAA AGNAAA AAAAxx +8200 8945 0 0 0 0 0 200 200 3200 8200 0 1 KDAAAA BGNAAA HHHHxx +1737 8946 1 1 7 17 37 737 1737 1737 1737 74 75 VOAAAA CGNAAA OOOOxx +6516 8947 0 0 6 16 16 516 516 1516 6516 32 33 QQAAAA DGNAAA VVVVxx +5441 8948 1 1 1 1 41 441 1441 441 5441 82 83 HBAAAA EGNAAA AAAAxx +5999 8949 1 3 9 19 99 999 1999 999 5999 198 199 TWAAAA FGNAAA HHHHxx +1539 8950 1 3 9 19 39 539 1539 1539 1539 78 79 FHAAAA GGNAAA OOOOxx +9067 8951 1 3 7 7 67 67 1067 4067 9067 134 135 TKAAAA HGNAAA VVVVxx +4061 8952 1 1 1 1 61 61 61 4061 4061 122 123 FAAAAA IGNAAA AAAAxx +1642 8953 0 2 2 2 42 642 1642 1642 1642 84 85 ELAAAA JGNAAA HHHHxx +4657 8954 1 1 7 17 57 657 657 4657 4657 114 115 DXAAAA KGNAAA OOOOxx +9934 8955 0 2 4 14 34 934 1934 4934 9934 68 69 CSAAAA LGNAAA VVVVxx +6385 8956 1 1 5 5 85 385 385 1385 6385 170 171 PLAAAA MGNAAA AAAAxx +6775 8957 1 3 5 15 75 775 775 1775 6775 150 151 PAAAAA NGNAAA HHHHxx +3873 8958 1 1 3 13 73 873 1873 3873 3873 146 147 ZSAAAA OGNAAA OOOOxx +3862 8959 0 2 2 2 62 862 1862 3862 3862 124 125 OSAAAA PGNAAA VVVVxx +1224 8960 0 0 4 4 24 224 1224 1224 1224 48 49 CVAAAA QGNAAA AAAAxx +4483 8961 1 3 3 3 83 483 483 4483 4483 166 167 LQAAAA RGNAAA HHHHxx +3685 8962 1 1 5 5 85 685 1685 3685 3685 170 171 TLAAAA SGNAAA OOOOxx +6082 8963 0 2 2 2 82 82 82 1082 6082 164 165 YZAAAA TGNAAA VVVVxx +7798 8964 0 2 8 18 98 798 1798 2798 7798 196 197 YNAAAA UGNAAA AAAAxx +9039 8965 1 3 9 19 39 39 1039 4039 9039 78 79 RJAAAA VGNAAA HHHHxx +985 8966 1 1 5 5 85 985 985 985 985 170 171 XLAAAA WGNAAA OOOOxx +5389 8967 1 1 9 9 89 389 1389 389 5389 178 179 HZAAAA XGNAAA VVVVxx +1716 8968 0 0 6 16 16 716 1716 1716 1716 32 33 AOAAAA YGNAAA AAAAxx +4209 8969 1 1 9 9 9 209 209 4209 4209 18 19 XFAAAA ZGNAAA HHHHxx +746 8970 0 2 6 6 46 746 746 746 746 92 93 SCAAAA AHNAAA OOOOxx +6295 8971 1 3 5 15 95 295 295 1295 6295 190 191 DIAAAA BHNAAA VVVVxx +9754 8972 0 2 4 14 54 754 1754 4754 9754 108 109 ELAAAA CHNAAA AAAAxx +2336 8973 0 0 6 16 36 336 336 2336 2336 72 73 WLAAAA DHNAAA HHHHxx +3701 8974 1 1 1 1 1 701 1701 3701 3701 2 3 JMAAAA EHNAAA OOOOxx +3551 8975 1 3 1 
11 51 551 1551 3551 3551 102 103 PGAAAA FHNAAA VVVVxx +8516 8976 0 0 6 16 16 516 516 3516 8516 32 33 OPAAAA GHNAAA AAAAxx +9290 8977 0 2 0 10 90 290 1290 4290 9290 180 181 ITAAAA HHNAAA HHHHxx +5686 8978 0 2 6 6 86 686 1686 686 5686 172 173 SKAAAA IHNAAA OOOOxx +2893 8979 1 1 3 13 93 893 893 2893 2893 186 187 HHAAAA JHNAAA VVVVxx +6279 8980 1 3 9 19 79 279 279 1279 6279 158 159 NHAAAA KHNAAA AAAAxx +2278 8981 0 2 8 18 78 278 278 2278 2278 156 157 QJAAAA LHNAAA HHHHxx +1618 8982 0 2 8 18 18 618 1618 1618 1618 36 37 GKAAAA MHNAAA OOOOxx +3450 8983 0 2 0 10 50 450 1450 3450 3450 100 101 SCAAAA NHNAAA VVVVxx +8857 8984 1 1 7 17 57 857 857 3857 8857 114 115 RCAAAA OHNAAA AAAAxx +1005 8985 1 1 5 5 5 5 1005 1005 1005 10 11 RMAAAA PHNAAA HHHHxx +4727 8986 1 3 7 7 27 727 727 4727 4727 54 55 VZAAAA QHNAAA OOOOxx +7617 8987 1 1 7 17 17 617 1617 2617 7617 34 35 ZGAAAA RHNAAA VVVVxx +2021 8988 1 1 1 1 21 21 21 2021 2021 42 43 TZAAAA SHNAAA AAAAxx +9124 8989 0 0 4 4 24 124 1124 4124 9124 48 49 YMAAAA THNAAA HHHHxx +3175 8990 1 3 5 15 75 175 1175 3175 3175 150 151 DSAAAA UHNAAA OOOOxx +2949 8991 1 1 9 9 49 949 949 2949 2949 98 99 LJAAAA VHNAAA VVVVxx +2424 8992 0 0 4 4 24 424 424 2424 2424 48 49 GPAAAA WHNAAA AAAAxx +4791 8993 1 3 1 11 91 791 791 4791 4791 182 183 HCAAAA XHNAAA HHHHxx +7500 8994 0 0 0 0 0 500 1500 2500 7500 0 1 MCAAAA YHNAAA OOOOxx +4893 8995 1 1 3 13 93 893 893 4893 4893 186 187 FGAAAA ZHNAAA VVVVxx +121 8996 1 1 1 1 21 121 121 121 121 42 43 REAAAA AINAAA AAAAxx +1965 8997 1 1 5 5 65 965 1965 1965 1965 130 131 PXAAAA BINAAA HHHHxx +2972 8998 0 0 2 12 72 972 972 2972 2972 144 145 IKAAAA CINAAA OOOOxx +662 8999 0 2 2 2 62 662 662 662 662 124 125 MZAAAA DINAAA VVVVxx +7074 9000 0 2 4 14 74 74 1074 2074 7074 148 149 CMAAAA EINAAA AAAAxx +981 9001 1 1 1 1 81 981 981 981 981 162 163 TLAAAA FINAAA HHHHxx +3520 9002 0 0 0 0 20 520 1520 3520 3520 40 41 KFAAAA GINAAA OOOOxx +6540 9003 0 0 0 0 40 540 540 1540 6540 80 81 ORAAAA HINAAA VVVVxx +6648 9004 0 0 8 8 48 648 648 1648 6648 96 97 SVAAAA IINAAA AAAAxx +7076 9005 0 0 6 16 76 76 1076 2076 7076 152 153 EMAAAA JINAAA HHHHxx +6919 9006 1 3 9 19 19 919 919 1919 6919 38 39 DGAAAA KINAAA OOOOxx +1108 9007 0 0 8 8 8 108 1108 1108 1108 16 17 QQAAAA LINAAA VVVVxx +317 9008 1 1 7 17 17 317 317 317 317 34 35 FMAAAA MINAAA AAAAxx +3483 9009 1 3 3 3 83 483 1483 3483 3483 166 167 ZDAAAA NINAAA HHHHxx +6764 9010 0 0 4 4 64 764 764 1764 6764 128 129 EAAAAA OINAAA OOOOxx +1235 9011 1 3 5 15 35 235 1235 1235 1235 70 71 NVAAAA PINAAA VVVVxx +7121 9012 1 1 1 1 21 121 1121 2121 7121 42 43 XNAAAA QINAAA AAAAxx +426 9013 0 2 6 6 26 426 426 426 426 52 53 KQAAAA RINAAA HHHHxx +6880 9014 0 0 0 0 80 880 880 1880 6880 160 161 QEAAAA SINAAA OOOOxx +5401 9015 1 1 1 1 1 401 1401 401 5401 2 3 TZAAAA TINAAA VVVVxx +7323 9016 1 3 3 3 23 323 1323 2323 7323 46 47 RVAAAA UINAAA AAAAxx +9751 9017 1 3 1 11 51 751 1751 4751 9751 102 103 BLAAAA VINAAA HHHHxx +3436 9018 0 0 6 16 36 436 1436 3436 3436 72 73 ECAAAA WINAAA OOOOxx +7319 9019 1 3 9 19 19 319 1319 2319 7319 38 39 NVAAAA XINAAA VVVVxx +7882 9020 0 2 2 2 82 882 1882 2882 7882 164 165 ERAAAA YINAAA AAAAxx +8260 9021 0 0 0 0 60 260 260 3260 8260 120 121 SFAAAA ZINAAA HHHHxx +9758 9022 0 2 8 18 58 758 1758 4758 9758 116 117 ILAAAA AJNAAA OOOOxx +4205 9023 1 1 5 5 5 205 205 4205 4205 10 11 TFAAAA BJNAAA VVVVxx +8884 9024 0 0 4 4 84 884 884 3884 8884 168 169 SDAAAA CJNAAA AAAAxx +1112 9025 0 0 2 12 12 112 1112 1112 1112 24 25 UQAAAA DJNAAA HHHHxx +2186 9026 0 2 6 6 86 186 186 2186 2186 172 173 CGAAAA EJNAAA OOOOxx +8666 9027 0 2 6 
6 66 666 666 3666 8666 132 133 IVAAAA FJNAAA VVVVxx +4325 9028 1 1 5 5 25 325 325 4325 4325 50 51 JKAAAA GJNAAA AAAAxx +4912 9029 0 0 2 12 12 912 912 4912 4912 24 25 YGAAAA HJNAAA HHHHxx +6497 9030 1 1 7 17 97 497 497 1497 6497 194 195 XPAAAA IJNAAA OOOOxx +9072 9031 0 0 2 12 72 72 1072 4072 9072 144 145 YKAAAA JJNAAA VVVVxx +8899 9032 1 3 9 19 99 899 899 3899 8899 198 199 HEAAAA KJNAAA AAAAxx +5619 9033 1 3 9 19 19 619 1619 619 5619 38 39 DIAAAA LJNAAA HHHHxx +4110 9034 0 2 0 10 10 110 110 4110 4110 20 21 CCAAAA MJNAAA OOOOxx +7025 9035 1 1 5 5 25 25 1025 2025 7025 50 51 FKAAAA NJNAAA VVVVxx +5605 9036 1 1 5 5 5 605 1605 605 5605 10 11 PHAAAA OJNAAA AAAAxx +2572 9037 0 0 2 12 72 572 572 2572 2572 144 145 YUAAAA PJNAAA HHHHxx +3895 9038 1 3 5 15 95 895 1895 3895 3895 190 191 VTAAAA QJNAAA OOOOxx +9138 9039 0 2 8 18 38 138 1138 4138 9138 76 77 MNAAAA RJNAAA VVVVxx +4713 9040 1 1 3 13 13 713 713 4713 4713 26 27 HZAAAA SJNAAA AAAAxx +6079 9041 1 3 9 19 79 79 79 1079 6079 158 159 VZAAAA TJNAAA HHHHxx +8898 9042 0 2 8 18 98 898 898 3898 8898 196 197 GEAAAA UJNAAA OOOOxx +2650 9043 0 2 0 10 50 650 650 2650 2650 100 101 YXAAAA VJNAAA VVVVxx +5316 9044 0 0 6 16 16 316 1316 316 5316 32 33 MWAAAA WJNAAA AAAAxx +5133 9045 1 1 3 13 33 133 1133 133 5133 66 67 LPAAAA XJNAAA HHHHxx +2184 9046 0 0 4 4 84 184 184 2184 2184 168 169 AGAAAA YJNAAA OOOOxx +2728 9047 0 0 8 8 28 728 728 2728 2728 56 57 YAAAAA ZJNAAA VVVVxx +6737 9048 1 1 7 17 37 737 737 1737 6737 74 75 DZAAAA AKNAAA AAAAxx +1128 9049 0 0 8 8 28 128 1128 1128 1128 56 57 KRAAAA BKNAAA HHHHxx +9662 9050 0 2 2 2 62 662 1662 4662 9662 124 125 QHAAAA CKNAAA OOOOxx +9384 9051 0 0 4 4 84 384 1384 4384 9384 168 169 YWAAAA DKNAAA VVVVxx +4576 9052 0 0 6 16 76 576 576 4576 4576 152 153 AUAAAA EKNAAA AAAAxx +9613 9053 1 1 3 13 13 613 1613 4613 9613 26 27 TFAAAA FKNAAA HHHHxx +4001 9054 1 1 1 1 1 1 1 4001 4001 2 3 XXAAAA GKNAAA OOOOxx +3628 9055 0 0 8 8 28 628 1628 3628 3628 56 57 OJAAAA HKNAAA VVVVxx +6968 9056 0 0 8 8 68 968 968 1968 6968 136 137 AIAAAA IKNAAA AAAAxx +6491 9057 1 3 1 11 91 491 491 1491 6491 182 183 RPAAAA JKNAAA HHHHxx +1265 9058 1 1 5 5 65 265 1265 1265 1265 130 131 RWAAAA KKNAAA OOOOxx +6128 9059 0 0 8 8 28 128 128 1128 6128 56 57 SBAAAA LKNAAA VVVVxx +4274 9060 0 2 4 14 74 274 274 4274 4274 148 149 KIAAAA MKNAAA AAAAxx +3598 9061 0 2 8 18 98 598 1598 3598 3598 196 197 KIAAAA NKNAAA HHHHxx +7961 9062 1 1 1 1 61 961 1961 2961 7961 122 123 FUAAAA OKNAAA OOOOxx +2643 9063 1 3 3 3 43 643 643 2643 2643 86 87 RXAAAA PKNAAA VVVVxx +4547 9064 1 3 7 7 47 547 547 4547 4547 94 95 XSAAAA QKNAAA AAAAxx +3568 9065 0 0 8 8 68 568 1568 3568 3568 136 137 GHAAAA RKNAAA HHHHxx +8954 9066 0 2 4 14 54 954 954 3954 8954 108 109 KGAAAA SKNAAA OOOOxx +8802 9067 0 2 2 2 2 802 802 3802 8802 4 5 OAAAAA TKNAAA VVVVxx +7829 9068 1 1 9 9 29 829 1829 2829 7829 58 59 DPAAAA UKNAAA AAAAxx +1008 9069 0 0 8 8 8 8 1008 1008 1008 16 17 UMAAAA VKNAAA HHHHxx +3627 9070 1 3 7 7 27 627 1627 3627 3627 54 55 NJAAAA WKNAAA OOOOxx +3999 9071 1 3 9 19 99 999 1999 3999 3999 198 199 VXAAAA XKNAAA VVVVxx +7697 9072 1 1 7 17 97 697 1697 2697 7697 194 195 BKAAAA YKNAAA AAAAxx +9380 9073 0 0 0 0 80 380 1380 4380 9380 160 161 UWAAAA ZKNAAA HHHHxx +2707 9074 1 3 7 7 7 707 707 2707 2707 14 15 DAAAAA ALNAAA OOOOxx +4430 9075 0 2 0 10 30 430 430 4430 4430 60 61 KOAAAA BLNAAA VVVVxx +6440 9076 0 0 0 0 40 440 440 1440 6440 80 81 SNAAAA CLNAAA AAAAxx +9958 9077 0 2 8 18 58 958 1958 4958 9958 116 117 ATAAAA DLNAAA HHHHxx +7592 9078 0 0 2 12 92 592 1592 2592 7592 184 185 AGAAAA ELNAAA OOOOxx +7852 
9079 0 0 2 12 52 852 1852 2852 7852 104 105 AQAAAA FLNAAA VVVVxx +9253 9080 1 1 3 13 53 253 1253 4253 9253 106 107 XRAAAA GLNAAA AAAAxx +5910 9081 0 2 0 10 10 910 1910 910 5910 20 21 ITAAAA HLNAAA HHHHxx +7487 9082 1 3 7 7 87 487 1487 2487 7487 174 175 ZBAAAA ILNAAA OOOOxx +6324 9083 0 0 4 4 24 324 324 1324 6324 48 49 GJAAAA JLNAAA VVVVxx +5792 9084 0 0 2 12 92 792 1792 792 5792 184 185 UOAAAA KLNAAA AAAAxx +7390 9085 0 2 0 10 90 390 1390 2390 7390 180 181 GYAAAA LLNAAA HHHHxx +8534 9086 0 2 4 14 34 534 534 3534 8534 68 69 GQAAAA MLNAAA OOOOxx +2690 9087 0 2 0 10 90 690 690 2690 2690 180 181 MZAAAA NLNAAA VVVVxx +3992 9088 0 0 2 12 92 992 1992 3992 3992 184 185 OXAAAA OLNAAA AAAAxx +6928 9089 0 0 8 8 28 928 928 1928 6928 56 57 MGAAAA PLNAAA HHHHxx +7815 9090 1 3 5 15 15 815 1815 2815 7815 30 31 POAAAA QLNAAA OOOOxx +9477 9091 1 1 7 17 77 477 1477 4477 9477 154 155 NAAAAA RLNAAA VVVVxx +497 9092 1 1 7 17 97 497 497 497 497 194 195 DTAAAA SLNAAA AAAAxx +7532 9093 0 0 2 12 32 532 1532 2532 7532 64 65 SDAAAA TLNAAA HHHHxx +9838 9094 0 2 8 18 38 838 1838 4838 9838 76 77 KOAAAA ULNAAA OOOOxx +1557 9095 1 1 7 17 57 557 1557 1557 1557 114 115 XHAAAA VLNAAA VVVVxx +2467 9096 1 3 7 7 67 467 467 2467 2467 134 135 XQAAAA WLNAAA AAAAxx +2367 9097 1 3 7 7 67 367 367 2367 2367 134 135 BNAAAA XLNAAA HHHHxx +5677 9098 1 1 7 17 77 677 1677 677 5677 154 155 JKAAAA YLNAAA OOOOxx +6193 9099 1 1 3 13 93 193 193 1193 6193 186 187 FEAAAA ZLNAAA VVVVxx +7126 9100 0 2 6 6 26 126 1126 2126 7126 52 53 COAAAA AMNAAA AAAAxx +5264 9101 0 0 4 4 64 264 1264 264 5264 128 129 MUAAAA BMNAAA HHHHxx +850 9102 0 2 0 10 50 850 850 850 850 100 101 SGAAAA CMNAAA OOOOxx +4854 9103 0 2 4 14 54 854 854 4854 4854 108 109 SEAAAA DMNAAA VVVVxx +4414 9104 0 2 4 14 14 414 414 4414 4414 28 29 UNAAAA EMNAAA AAAAxx +8971 9105 1 3 1 11 71 971 971 3971 8971 142 143 BHAAAA FMNAAA HHHHxx +9240 9106 0 0 0 0 40 240 1240 4240 9240 80 81 KRAAAA GMNAAA OOOOxx +7341 9107 1 1 1 1 41 341 1341 2341 7341 82 83 JWAAAA HMNAAA VVVVxx +3151 9108 1 3 1 11 51 151 1151 3151 3151 102 103 FRAAAA IMNAAA AAAAxx +1742 9109 0 2 2 2 42 742 1742 1742 1742 84 85 APAAAA JMNAAA HHHHxx +1347 9110 1 3 7 7 47 347 1347 1347 1347 94 95 VZAAAA KMNAAA OOOOxx +9418 9111 0 2 8 18 18 418 1418 4418 9418 36 37 GYAAAA LMNAAA VVVVxx +5452 9112 0 0 2 12 52 452 1452 452 5452 104 105 SBAAAA MMNAAA AAAAxx +8637 9113 1 1 7 17 37 637 637 3637 8637 74 75 FUAAAA NMNAAA HHHHxx +8287 9114 1 3 7 7 87 287 287 3287 8287 174 175 TGAAAA OMNAAA OOOOxx +9865 9115 1 1 5 5 65 865 1865 4865 9865 130 131 LPAAAA PMNAAA VVVVxx +1664 9116 0 0 4 4 64 664 1664 1664 1664 128 129 AMAAAA QMNAAA AAAAxx +9933 9117 1 1 3 13 33 933 1933 4933 9933 66 67 BSAAAA RMNAAA HHHHxx +3416 9118 0 0 6 16 16 416 1416 3416 3416 32 33 KBAAAA SMNAAA OOOOxx +7981 9119 1 1 1 1 81 981 1981 2981 7981 162 163 ZUAAAA TMNAAA VVVVxx +1981 9120 1 1 1 1 81 981 1981 1981 1981 162 163 FYAAAA UMNAAA AAAAxx +441 9121 1 1 1 1 41 441 441 441 441 82 83 ZQAAAA VMNAAA HHHHxx +1380 9122 0 0 0 0 80 380 1380 1380 1380 160 161 CBAAAA WMNAAA OOOOxx +7325 9123 1 1 5 5 25 325 1325 2325 7325 50 51 TVAAAA XMNAAA VVVVxx +5682 9124 0 2 2 2 82 682 1682 682 5682 164 165 OKAAAA YMNAAA AAAAxx +1024 9125 0 0 4 4 24 24 1024 1024 1024 48 49 KNAAAA ZMNAAA HHHHxx +1096 9126 0 0 6 16 96 96 1096 1096 1096 192 193 EQAAAA ANNAAA OOOOxx +4717 9127 1 1 7 17 17 717 717 4717 4717 34 35 LZAAAA BNNAAA VVVVxx +7948 9128 0 0 8 8 48 948 1948 2948 7948 96 97 STAAAA CNNAAA AAAAxx +4074 9129 0 2 4 14 74 74 74 4074 4074 148 149 SAAAAA DNNAAA HHHHxx +211 9130 1 3 1 11 11 211 211 211 211 22 
23 DIAAAA ENNAAA OOOOxx +8993 9131 1 1 3 13 93 993 993 3993 8993 186 187 XHAAAA FNNAAA VVVVxx +4509 9132 1 1 9 9 9 509 509 4509 4509 18 19 LRAAAA GNNAAA AAAAxx +823 9133 1 3 3 3 23 823 823 823 823 46 47 RFAAAA HNNAAA HHHHxx +4747 9134 1 3 7 7 47 747 747 4747 4747 94 95 PAAAAA INNAAA OOOOxx +6955 9135 1 3 5 15 55 955 955 1955 6955 110 111 NHAAAA JNNAAA VVVVxx +7922 9136 0 2 2 2 22 922 1922 2922 7922 44 45 SSAAAA KNNAAA AAAAxx +6936 9137 0 0 6 16 36 936 936 1936 6936 72 73 UGAAAA LNNAAA HHHHxx +1546 9138 0 2 6 6 46 546 1546 1546 1546 92 93 MHAAAA MNNAAA OOOOxx +9836 9139 0 0 6 16 36 836 1836 4836 9836 72 73 IOAAAA NNNAAA VVVVxx +5626 9140 0 2 6 6 26 626 1626 626 5626 52 53 KIAAAA ONNAAA AAAAxx +4879 9141 1 3 9 19 79 879 879 4879 4879 158 159 RFAAAA PNNAAA HHHHxx +8590 9142 0 2 0 10 90 590 590 3590 8590 180 181 KSAAAA QNNAAA OOOOxx +8842 9143 0 2 2 2 42 842 842 3842 8842 84 85 CCAAAA RNNAAA VVVVxx +6505 9144 1 1 5 5 5 505 505 1505 6505 10 11 FQAAAA SNNAAA AAAAxx +2803 9145 1 3 3 3 3 803 803 2803 2803 6 7 VDAAAA TNNAAA HHHHxx +9258 9146 0 2 8 18 58 258 1258 4258 9258 116 117 CSAAAA UNNAAA OOOOxx +741 9147 1 1 1 1 41 741 741 741 741 82 83 NCAAAA VNNAAA VVVVxx +1457 9148 1 1 7 17 57 457 1457 1457 1457 114 115 BEAAAA WNNAAA AAAAxx +5777 9149 1 1 7 17 77 777 1777 777 5777 154 155 FOAAAA XNNAAA HHHHxx +2883 9150 1 3 3 3 83 883 883 2883 2883 166 167 XGAAAA YNNAAA OOOOxx +6610 9151 0 2 0 10 10 610 610 1610 6610 20 21 GUAAAA ZNNAAA VVVVxx +4331 9152 1 3 1 11 31 331 331 4331 4331 62 63 PKAAAA AONAAA AAAAxx +2712 9153 0 0 2 12 12 712 712 2712 2712 24 25 IAAAAA BONAAA HHHHxx +9268 9154 0 0 8 8 68 268 1268 4268 9268 136 137 MSAAAA CONAAA OOOOxx +410 9155 0 2 0 10 10 410 410 410 410 20 21 UPAAAA DONAAA VVVVxx +9411 9156 1 3 1 11 11 411 1411 4411 9411 22 23 ZXAAAA EONAAA AAAAxx +4683 9157 1 3 3 3 83 683 683 4683 4683 166 167 DYAAAA FONAAA HHHHxx +7072 9158 0 0 2 12 72 72 1072 2072 7072 144 145 AMAAAA GONAAA OOOOxx +5050 9159 0 2 0 10 50 50 1050 50 5050 100 101 GMAAAA HONAAA VVVVxx +5932 9160 0 0 2 12 32 932 1932 932 5932 64 65 EUAAAA IONAAA AAAAxx +2756 9161 0 0 6 16 56 756 756 2756 2756 112 113 ACAAAA JONAAA HHHHxx +9813 9162 1 1 3 13 13 813 1813 4813 9813 26 27 LNAAAA KONAAA OOOOxx +7388 9163 0 0 8 8 88 388 1388 2388 7388 176 177 EYAAAA LONAAA VVVVxx +2596 9164 0 0 6 16 96 596 596 2596 2596 192 193 WVAAAA MONAAA AAAAxx +5102 9165 0 2 2 2 2 102 1102 102 5102 4 5 GOAAAA NONAAA HHHHxx +208 9166 0 0 8 8 8 208 208 208 208 16 17 AIAAAA OONAAA OOOOxx +86 9167 0 2 6 6 86 86 86 86 86 172 173 IDAAAA PONAAA VVVVxx +8127 9168 1 3 7 7 27 127 127 3127 8127 54 55 PAAAAA QONAAA AAAAxx +5154 9169 0 2 4 14 54 154 1154 154 5154 108 109 GQAAAA RONAAA HHHHxx +4491 9170 1 3 1 11 91 491 491 4491 4491 182 183 TQAAAA SONAAA OOOOxx +7423 9171 1 3 3 3 23 423 1423 2423 7423 46 47 NZAAAA TONAAA VVVVxx +6441 9172 1 1 1 1 41 441 441 1441 6441 82 83 TNAAAA UONAAA AAAAxx +2920 9173 0 0 0 0 20 920 920 2920 2920 40 41 IIAAAA VONAAA HHHHxx +6386 9174 0 2 6 6 86 386 386 1386 6386 172 173 QLAAAA WONAAA OOOOxx +9744 9175 0 0 4 4 44 744 1744 4744 9744 88 89 UKAAAA XONAAA VVVVxx +2667 9176 1 3 7 7 67 667 667 2667 2667 134 135 PYAAAA YONAAA AAAAxx +5754 9177 0 2 4 14 54 754 1754 754 5754 108 109 INAAAA ZONAAA HHHHxx +4645 9178 1 1 5 5 45 645 645 4645 4645 90 91 RWAAAA APNAAA OOOOxx +4327 9179 1 3 7 7 27 327 327 4327 4327 54 55 LKAAAA BPNAAA VVVVxx +843 9180 1 3 3 3 43 843 843 843 843 86 87 LGAAAA CPNAAA AAAAxx +4085 9181 1 1 5 5 85 85 85 4085 4085 170 171 DBAAAA DPNAAA HHHHxx +2849 9182 1 1 9 9 49 849 849 2849 2849 98 99 PFAAAA EPNAAA OOOOxx 
+5734 9183 0 2 4 14 34 734 1734 734 5734 68 69 OMAAAA FPNAAA VVVVxx +5307 9184 1 3 7 7 7 307 1307 307 5307 14 15 DWAAAA GPNAAA AAAAxx +8433 9185 1 1 3 13 33 433 433 3433 8433 66 67 JMAAAA HPNAAA HHHHxx +3031 9186 1 3 1 11 31 31 1031 3031 3031 62 63 PMAAAA IPNAAA OOOOxx +5714 9187 0 2 4 14 14 714 1714 714 5714 28 29 ULAAAA JPNAAA VVVVxx +5969 9188 1 1 9 9 69 969 1969 969 5969 138 139 PVAAAA KPNAAA AAAAxx +2532 9189 0 0 2 12 32 532 532 2532 2532 64 65 KTAAAA LPNAAA HHHHxx +5219 9190 1 3 9 19 19 219 1219 219 5219 38 39 TSAAAA MPNAAA OOOOxx +7343 9191 1 3 3 3 43 343 1343 2343 7343 86 87 LWAAAA NPNAAA VVVVxx +9089 9192 1 1 9 9 89 89 1089 4089 9089 178 179 PLAAAA OPNAAA AAAAxx +9337 9193 1 1 7 17 37 337 1337 4337 9337 74 75 DVAAAA PPNAAA HHHHxx +5131 9194 1 3 1 11 31 131 1131 131 5131 62 63 JPAAAA QPNAAA OOOOxx +6253 9195 1 1 3 13 53 253 253 1253 6253 106 107 NGAAAA RPNAAA VVVVxx +5140 9196 0 0 0 0 40 140 1140 140 5140 80 81 SPAAAA SPNAAA AAAAxx +2953 9197 1 1 3 13 53 953 953 2953 2953 106 107 PJAAAA TPNAAA HHHHxx +4293 9198 1 1 3 13 93 293 293 4293 4293 186 187 DJAAAA UPNAAA OOOOxx +9974 9199 0 2 4 14 74 974 1974 4974 9974 148 149 QTAAAA VPNAAA VVVVxx +5061 9200 1 1 1 1 61 61 1061 61 5061 122 123 RMAAAA WPNAAA AAAAxx +8570 9201 0 2 0 10 70 570 570 3570 8570 140 141 QRAAAA XPNAAA HHHHxx +9504 9202 0 0 4 4 4 504 1504 4504 9504 8 9 OBAAAA YPNAAA OOOOxx +604 9203 0 0 4 4 4 604 604 604 604 8 9 GXAAAA ZPNAAA VVVVxx +4991 9204 1 3 1 11 91 991 991 4991 4991 182 183 ZJAAAA AQNAAA AAAAxx +880 9205 0 0 0 0 80 880 880 880 880 160 161 WHAAAA BQNAAA HHHHxx +3861 9206 1 1 1 1 61 861 1861 3861 3861 122 123 NSAAAA CQNAAA OOOOxx +8262 9207 0 2 2 2 62 262 262 3262 8262 124 125 UFAAAA DQNAAA VVVVxx +5689 9208 1 1 9 9 89 689 1689 689 5689 178 179 VKAAAA EQNAAA AAAAxx +1793 9209 1 1 3 13 93 793 1793 1793 1793 186 187 ZQAAAA FQNAAA HHHHxx +2661 9210 1 1 1 1 61 661 661 2661 2661 122 123 JYAAAA GQNAAA OOOOxx +7954 9211 0 2 4 14 54 954 1954 2954 7954 108 109 YTAAAA HQNAAA VVVVxx +1874 9212 0 2 4 14 74 874 1874 1874 1874 148 149 CUAAAA IQNAAA AAAAxx +2982 9213 0 2 2 2 82 982 982 2982 2982 164 165 SKAAAA JQNAAA HHHHxx +331 9214 1 3 1 11 31 331 331 331 331 62 63 TMAAAA KQNAAA OOOOxx +5021 9215 1 1 1 1 21 21 1021 21 5021 42 43 DLAAAA LQNAAA VVVVxx +9894 9216 0 2 4 14 94 894 1894 4894 9894 188 189 OQAAAA MQNAAA AAAAxx +7709 9217 1 1 9 9 9 709 1709 2709 7709 18 19 NKAAAA NQNAAA HHHHxx +4980 9218 0 0 0 0 80 980 980 4980 4980 160 161 OJAAAA OQNAAA OOOOxx +8249 9219 1 1 9 9 49 249 249 3249 8249 98 99 HFAAAA PQNAAA VVVVxx +7120 9220 0 0 0 0 20 120 1120 2120 7120 40 41 WNAAAA QQNAAA AAAAxx +7464 9221 0 0 4 4 64 464 1464 2464 7464 128 129 CBAAAA RQNAAA HHHHxx +8086 9222 0 2 6 6 86 86 86 3086 8086 172 173 AZAAAA SQNAAA OOOOxx +3509 9223 1 1 9 9 9 509 1509 3509 3509 18 19 ZEAAAA TQNAAA VVVVxx +3902 9224 0 2 2 2 2 902 1902 3902 3902 4 5 CUAAAA UQNAAA AAAAxx +9907 9225 1 3 7 7 7 907 1907 4907 9907 14 15 BRAAAA VQNAAA HHHHxx +6278 9226 0 2 8 18 78 278 278 1278 6278 156 157 MHAAAA WQNAAA OOOOxx +9316 9227 0 0 6 16 16 316 1316 4316 9316 32 33 IUAAAA XQNAAA VVVVxx +2824 9228 0 0 4 4 24 824 824 2824 2824 48 49 QEAAAA YQNAAA AAAAxx +1558 9229 0 2 8 18 58 558 1558 1558 1558 116 117 YHAAAA ZQNAAA HHHHxx +5436 9230 0 0 6 16 36 436 1436 436 5436 72 73 CBAAAA ARNAAA OOOOxx +1161 9231 1 1 1 1 61 161 1161 1161 1161 122 123 RSAAAA BRNAAA VVVVxx +7569 9232 1 1 9 9 69 569 1569 2569 7569 138 139 DFAAAA CRNAAA AAAAxx +9614 9233 0 2 4 14 14 614 1614 4614 9614 28 29 UFAAAA DRNAAA HHHHxx +6970 9234 0 2 0 10 70 970 970 1970 6970 140 141 CIAAAA ERNAAA OOOOxx 
+2422 9235 0 2 2 2 22 422 422 2422 2422 44 45 EPAAAA FRNAAA VVVVxx +8860 9236 0 0 0 0 60 860 860 3860 8860 120 121 UCAAAA GRNAAA AAAAxx +9912 9237 0 0 2 12 12 912 1912 4912 9912 24 25 GRAAAA HRNAAA HHHHxx +1109 9238 1 1 9 9 9 109 1109 1109 1109 18 19 RQAAAA IRNAAA OOOOxx +3286 9239 0 2 6 6 86 286 1286 3286 3286 172 173 KWAAAA JRNAAA VVVVxx +2277 9240 1 1 7 17 77 277 277 2277 2277 154 155 PJAAAA KRNAAA AAAAxx +8656 9241 0 0 6 16 56 656 656 3656 8656 112 113 YUAAAA LRNAAA HHHHxx +4656 9242 0 0 6 16 56 656 656 4656 4656 112 113 CXAAAA MRNAAA OOOOxx +6965 9243 1 1 5 5 65 965 965 1965 6965 130 131 XHAAAA NRNAAA VVVVxx +7591 9244 1 3 1 11 91 591 1591 2591 7591 182 183 ZFAAAA ORNAAA AAAAxx +4883 9245 1 3 3 3 83 883 883 4883 4883 166 167 VFAAAA PRNAAA HHHHxx +452 9246 0 0 2 12 52 452 452 452 452 104 105 KRAAAA QRNAAA OOOOxx +4018 9247 0 2 8 18 18 18 18 4018 4018 36 37 OYAAAA RRNAAA VVVVxx +4066 9248 0 2 6 6 66 66 66 4066 4066 132 133 KAAAAA SRNAAA AAAAxx +6480 9249 0 0 0 0 80 480 480 1480 6480 160 161 GPAAAA TRNAAA HHHHxx +8634 9250 0 2 4 14 34 634 634 3634 8634 68 69 CUAAAA URNAAA OOOOxx +9387 9251 1 3 7 7 87 387 1387 4387 9387 174 175 BXAAAA VRNAAA VVVVxx +3476 9252 0 0 6 16 76 476 1476 3476 3476 152 153 SDAAAA WRNAAA AAAAxx +5995 9253 1 3 5 15 95 995 1995 995 5995 190 191 PWAAAA XRNAAA HHHHxx +9677 9254 1 1 7 17 77 677 1677 4677 9677 154 155 FIAAAA YRNAAA OOOOxx +3884 9255 0 0 4 4 84 884 1884 3884 3884 168 169 KTAAAA ZRNAAA VVVVxx +6500 9256 0 0 0 0 0 500 500 1500 6500 0 1 AQAAAA ASNAAA AAAAxx +7972 9257 0 0 2 12 72 972 1972 2972 7972 144 145 QUAAAA BSNAAA HHHHxx +5281 9258 1 1 1 1 81 281 1281 281 5281 162 163 DVAAAA CSNAAA OOOOxx +1288 9259 0 0 8 8 88 288 1288 1288 1288 176 177 OXAAAA DSNAAA VVVVxx +4366 9260 0 2 6 6 66 366 366 4366 4366 132 133 YLAAAA ESNAAA AAAAxx +6557 9261 1 1 7 17 57 557 557 1557 6557 114 115 FSAAAA FSNAAA HHHHxx +7086 9262 0 2 6 6 86 86 1086 2086 7086 172 173 OMAAAA GSNAAA OOOOxx +6588 9263 0 0 8 8 88 588 588 1588 6588 176 177 KTAAAA HSNAAA VVVVxx +9062 9264 0 2 2 2 62 62 1062 4062 9062 124 125 OKAAAA ISNAAA AAAAxx +9230 9265 0 2 0 10 30 230 1230 4230 9230 60 61 ARAAAA JSNAAA HHHHxx +7672 9266 0 0 2 12 72 672 1672 2672 7672 144 145 CJAAAA KSNAAA OOOOxx +5204 9267 0 0 4 4 4 204 1204 204 5204 8 9 ESAAAA LSNAAA VVVVxx +2836 9268 0 0 6 16 36 836 836 2836 2836 72 73 CFAAAA MSNAAA AAAAxx +7165 9269 1 1 5 5 65 165 1165 2165 7165 130 131 PPAAAA NSNAAA HHHHxx +971 9270 1 3 1 11 71 971 971 971 971 142 143 JLAAAA OSNAAA OOOOxx +3851 9271 1 3 1 11 51 851 1851 3851 3851 102 103 DSAAAA PSNAAA VVVVxx +8593 9272 1 1 3 13 93 593 593 3593 8593 186 187 NSAAAA QSNAAA AAAAxx +7742 9273 0 2 2 2 42 742 1742 2742 7742 84 85 ULAAAA RSNAAA HHHHxx +2887 9274 1 3 7 7 87 887 887 2887 2887 174 175 BHAAAA SSNAAA OOOOxx +8479 9275 1 3 9 19 79 479 479 3479 8479 158 159 DOAAAA TSNAAA VVVVxx +9514 9276 0 2 4 14 14 514 1514 4514 9514 28 29 YBAAAA USNAAA AAAAxx +273 9277 1 1 3 13 73 273 273 273 273 146 147 NKAAAA VSNAAA HHHHxx +2938 9278 0 2 8 18 38 938 938 2938 2938 76 77 AJAAAA WSNAAA OOOOxx +9793 9279 1 1 3 13 93 793 1793 4793 9793 186 187 RMAAAA XSNAAA VVVVxx +8050 9280 0 2 0 10 50 50 50 3050 8050 100 101 QXAAAA YSNAAA AAAAxx +6702 9281 0 2 2 2 2 702 702 1702 6702 4 5 UXAAAA ZSNAAA HHHHxx +7290 9282 0 2 0 10 90 290 1290 2290 7290 180 181 KUAAAA ATNAAA OOOOxx +1837 9283 1 1 7 17 37 837 1837 1837 1837 74 75 RSAAAA BTNAAA VVVVxx +3206 9284 0 2 6 6 6 206 1206 3206 3206 12 13 ITAAAA CTNAAA AAAAxx +4925 9285 1 1 5 5 25 925 925 4925 4925 50 51 LHAAAA DTNAAA HHHHxx +5066 9286 0 2 6 6 66 66 1066 66 5066 132 133 
WMAAAA ETNAAA OOOOxx +3401 9287 1 1 1 1 1 401 1401 3401 3401 2 3 VAAAAA FTNAAA VVVVxx +3474 9288 0 2 4 14 74 474 1474 3474 3474 148 149 QDAAAA GTNAAA AAAAxx +57 9289 1 1 7 17 57 57 57 57 57 114 115 FCAAAA HTNAAA HHHHxx +2082 9290 0 2 2 2 82 82 82 2082 2082 164 165 CCAAAA ITNAAA OOOOxx +100 9291 0 0 0 0 0 100 100 100 100 0 1 WDAAAA JTNAAA VVVVxx +9665 9292 1 1 5 5 65 665 1665 4665 9665 130 131 THAAAA KTNAAA AAAAxx +8284 9293 0 0 4 4 84 284 284 3284 8284 168 169 QGAAAA LTNAAA HHHHxx +958 9294 0 2 8 18 58 958 958 958 958 116 117 WKAAAA MTNAAA OOOOxx +5282 9295 0 2 2 2 82 282 1282 282 5282 164 165 EVAAAA NTNAAA VVVVxx +4257 9296 1 1 7 17 57 257 257 4257 4257 114 115 THAAAA OTNAAA AAAAxx +3160 9297 0 0 0 0 60 160 1160 3160 3160 120 121 ORAAAA PTNAAA HHHHxx +8449 9298 1 1 9 9 49 449 449 3449 8449 98 99 ZMAAAA QTNAAA OOOOxx +500 9299 0 0 0 0 0 500 500 500 500 0 1 GTAAAA RTNAAA VVVVxx +6432 9300 0 0 2 12 32 432 432 1432 6432 64 65 KNAAAA STNAAA AAAAxx +6220 9301 0 0 0 0 20 220 220 1220 6220 40 41 GFAAAA TTNAAA HHHHxx +7233 9302 1 1 3 13 33 233 1233 2233 7233 66 67 FSAAAA UTNAAA OOOOxx +2723 9303 1 3 3 3 23 723 723 2723 2723 46 47 TAAAAA VTNAAA VVVVxx +1899 9304 1 3 9 19 99 899 1899 1899 1899 198 199 BVAAAA WTNAAA AAAAxx +7158 9305 0 2 8 18 58 158 1158 2158 7158 116 117 IPAAAA XTNAAA HHHHxx +202 9306 0 2 2 2 2 202 202 202 202 4 5 UHAAAA YTNAAA OOOOxx +2286 9307 0 2 6 6 86 286 286 2286 2286 172 173 YJAAAA ZTNAAA VVVVxx +5356 9308 0 0 6 16 56 356 1356 356 5356 112 113 AYAAAA AUNAAA AAAAxx +3809 9309 1 1 9 9 9 809 1809 3809 3809 18 19 NQAAAA BUNAAA HHHHxx +3979 9310 1 3 9 19 79 979 1979 3979 3979 158 159 BXAAAA CUNAAA OOOOxx +8359 9311 1 3 9 19 59 359 359 3359 8359 118 119 NJAAAA DUNAAA VVVVxx +3479 9312 1 3 9 19 79 479 1479 3479 3479 158 159 VDAAAA EUNAAA AAAAxx +4895 9313 1 3 5 15 95 895 895 4895 4895 190 191 HGAAAA FUNAAA HHHHxx +6059 9314 1 3 9 19 59 59 59 1059 6059 118 119 BZAAAA GUNAAA OOOOxx +9560 9315 0 0 0 0 60 560 1560 4560 9560 120 121 SDAAAA HUNAAA VVVVxx +6756 9316 0 0 6 16 56 756 756 1756 6756 112 113 WZAAAA IUNAAA AAAAxx +7504 9317 0 0 4 4 4 504 1504 2504 7504 8 9 QCAAAA JUNAAA HHHHxx +6762 9318 0 2 2 2 62 762 762 1762 6762 124 125 CAAAAA KUNAAA OOOOxx +5304 9319 0 0 4 4 4 304 1304 304 5304 8 9 AWAAAA LUNAAA VVVVxx +9533 9320 1 1 3 13 33 533 1533 4533 9533 66 67 RCAAAA MUNAAA AAAAxx +6649 9321 1 1 9 9 49 649 649 1649 6649 98 99 TVAAAA NUNAAA HHHHxx +38 9322 0 2 8 18 38 38 38 38 38 76 77 MBAAAA OUNAAA OOOOxx +5713 9323 1 1 3 13 13 713 1713 713 5713 26 27 TLAAAA PUNAAA VVVVxx +3000 9324 0 0 0 0 0 0 1000 3000 3000 0 1 KLAAAA QUNAAA AAAAxx +3738 9325 0 2 8 18 38 738 1738 3738 3738 76 77 UNAAAA RUNAAA HHHHxx +3327 9326 1 3 7 7 27 327 1327 3327 3327 54 55 ZXAAAA SUNAAA OOOOxx +3922 9327 0 2 2 2 22 922 1922 3922 3922 44 45 WUAAAA TUNAAA VVVVxx +9245 9328 1 1 5 5 45 245 1245 4245 9245 90 91 PRAAAA UUNAAA AAAAxx +2172 9329 0 0 2 12 72 172 172 2172 2172 144 145 OFAAAA VUNAAA HHHHxx +7128 9330 0 0 8 8 28 128 1128 2128 7128 56 57 EOAAAA WUNAAA OOOOxx +1195 9331 1 3 5 15 95 195 1195 1195 1195 190 191 ZTAAAA XUNAAA VVVVxx +8445 9332 1 1 5 5 45 445 445 3445 8445 90 91 VMAAAA YUNAAA AAAAxx +8638 9333 0 2 8 18 38 638 638 3638 8638 76 77 GUAAAA ZUNAAA HHHHxx +1249 9334 1 1 9 9 49 249 1249 1249 1249 98 99 BWAAAA AVNAAA OOOOxx +8659 9335 1 3 9 19 59 659 659 3659 8659 118 119 BVAAAA BVNAAA VVVVxx +3556 9336 0 0 6 16 56 556 1556 3556 3556 112 113 UGAAAA CVNAAA AAAAxx +3347 9337 1 3 7 7 47 347 1347 3347 3347 94 95 TYAAAA DVNAAA HHHHxx +3260 9338 0 0 0 0 60 260 1260 3260 3260 120 121 KVAAAA EVNAAA OOOOxx +5139 
9339 1 3 9 19 39 139 1139 139 5139 78 79 RPAAAA FVNAAA VVVVxx +9991 9340 1 3 1 11 91 991 1991 4991 9991 182 183 HUAAAA GVNAAA AAAAxx +5499 9341 1 3 9 19 99 499 1499 499 5499 198 199 NDAAAA HVNAAA HHHHxx +8082 9342 0 2 2 2 82 82 82 3082 8082 164 165 WYAAAA IVNAAA OOOOxx +1640 9343 0 0 0 0 40 640 1640 1640 1640 80 81 CLAAAA JVNAAA VVVVxx +8726 9344 0 2 6 6 26 726 726 3726 8726 52 53 QXAAAA KVNAAA AAAAxx +2339 9345 1 3 9 19 39 339 339 2339 2339 78 79 ZLAAAA LVNAAA HHHHxx +2601 9346 1 1 1 1 1 601 601 2601 2601 2 3 BWAAAA MVNAAA OOOOxx +9940 9347 0 0 0 0 40 940 1940 4940 9940 80 81 ISAAAA NVNAAA VVVVxx +4185 9348 1 1 5 5 85 185 185 4185 4185 170 171 ZEAAAA OVNAAA AAAAxx +9546 9349 0 2 6 6 46 546 1546 4546 9546 92 93 EDAAAA PVNAAA HHHHxx +5218 9350 0 2 8 18 18 218 1218 218 5218 36 37 SSAAAA QVNAAA OOOOxx +4374 9351 0 2 4 14 74 374 374 4374 4374 148 149 GMAAAA RVNAAA VVVVxx +288 9352 0 0 8 8 88 288 288 288 288 176 177 CLAAAA SVNAAA AAAAxx +7445 9353 1 1 5 5 45 445 1445 2445 7445 90 91 JAAAAA TVNAAA HHHHxx +1710 9354 0 2 0 10 10 710 1710 1710 1710 20 21 UNAAAA UVNAAA OOOOxx +6409 9355 1 1 9 9 9 409 409 1409 6409 18 19 NMAAAA VVNAAA VVVVxx +7982 9356 0 2 2 2 82 982 1982 2982 7982 164 165 AVAAAA WVNAAA AAAAxx +4950 9357 0 2 0 10 50 950 950 4950 4950 100 101 KIAAAA XVNAAA HHHHxx +9242 9358 0 2 2 2 42 242 1242 4242 9242 84 85 MRAAAA YVNAAA OOOOxx +3272 9359 0 0 2 12 72 272 1272 3272 3272 144 145 WVAAAA ZVNAAA VVVVxx +739 9360 1 3 9 19 39 739 739 739 739 78 79 LCAAAA AWNAAA AAAAxx +5526 9361 0 2 6 6 26 526 1526 526 5526 52 53 OEAAAA BWNAAA HHHHxx +8189 9362 1 1 9 9 89 189 189 3189 8189 178 179 ZCAAAA CWNAAA OOOOxx +9106 9363 0 2 6 6 6 106 1106 4106 9106 12 13 GMAAAA DWNAAA VVVVxx +9775 9364 1 3 5 15 75 775 1775 4775 9775 150 151 ZLAAAA EWNAAA AAAAxx +4643 9365 1 3 3 3 43 643 643 4643 4643 86 87 PWAAAA FWNAAA HHHHxx +8396 9366 0 0 6 16 96 396 396 3396 8396 192 193 YKAAAA GWNAAA OOOOxx +3255 9367 1 3 5 15 55 255 1255 3255 3255 110 111 FVAAAA HWNAAA VVVVxx +301 9368 1 1 1 1 1 301 301 301 301 2 3 PLAAAA IWNAAA AAAAxx +6014 9369 0 2 4 14 14 14 14 1014 6014 28 29 IXAAAA JWNAAA HHHHxx +6046 9370 0 2 6 6 46 46 46 1046 6046 92 93 OYAAAA KWNAAA OOOOxx +984 9371 0 0 4 4 84 984 984 984 984 168 169 WLAAAA LWNAAA VVVVxx +2420 9372 0 0 0 0 20 420 420 2420 2420 40 41 CPAAAA MWNAAA AAAAxx +2922 9373 0 2 2 2 22 922 922 2922 2922 44 45 KIAAAA NWNAAA HHHHxx +2317 9374 1 1 7 17 17 317 317 2317 2317 34 35 DLAAAA OWNAAA OOOOxx +7332 9375 0 0 2 12 32 332 1332 2332 7332 64 65 AWAAAA PWNAAA VVVVxx +6451 9376 1 3 1 11 51 451 451 1451 6451 102 103 DOAAAA QWNAAA AAAAxx +2589 9377 1 1 9 9 89 589 589 2589 2589 178 179 PVAAAA RWNAAA HHHHxx +4333 9378 1 1 3 13 33 333 333 4333 4333 66 67 RKAAAA SWNAAA OOOOxx +8650 9379 0 2 0 10 50 650 650 3650 8650 100 101 SUAAAA TWNAAA VVVVxx +6856 9380 0 0 6 16 56 856 856 1856 6856 112 113 SDAAAA UWNAAA AAAAxx +4194 9381 0 2 4 14 94 194 194 4194 4194 188 189 IFAAAA VWNAAA HHHHxx +6246 9382 0 2 6 6 46 246 246 1246 6246 92 93 GGAAAA WWNAAA OOOOxx +4371 9383 1 3 1 11 71 371 371 4371 4371 142 143 DMAAAA XWNAAA VVVVxx +1388 9384 0 0 8 8 88 388 1388 1388 1388 176 177 KBAAAA YWNAAA AAAAxx +1056 9385 0 0 6 16 56 56 1056 1056 1056 112 113 QOAAAA ZWNAAA HHHHxx +6041 9386 1 1 1 1 41 41 41 1041 6041 82 83 JYAAAA AXNAAA OOOOxx +6153 9387 1 1 3 13 53 153 153 1153 6153 106 107 RCAAAA BXNAAA VVVVxx +8450 9388 0 2 0 10 50 450 450 3450 8450 100 101 ANAAAA CXNAAA AAAAxx +3469 9389 1 1 9 9 69 469 1469 3469 3469 138 139 LDAAAA DXNAAA HHHHxx +5226 9390 0 2 6 6 26 226 1226 226 5226 52 53 ATAAAA EXNAAA OOOOxx +8112 9391 0 0 
2 12 12 112 112 3112 8112 24 25 AAAAAA FXNAAA VVVVxx +647 9392 1 3 7 7 47 647 647 647 647 94 95 XYAAAA GXNAAA AAAAxx +2567 9393 1 3 7 7 67 567 567 2567 2567 134 135 TUAAAA HXNAAA HHHHxx +9064 9394 0 0 4 4 64 64 1064 4064 9064 128 129 QKAAAA IXNAAA OOOOxx +5161 9395 1 1 1 1 61 161 1161 161 5161 122 123 NQAAAA JXNAAA VVVVxx +5260 9396 0 0 0 0 60 260 1260 260 5260 120 121 IUAAAA KXNAAA AAAAxx +8988 9397 0 0 8 8 88 988 988 3988 8988 176 177 SHAAAA LXNAAA HHHHxx +9678 9398 0 2 8 18 78 678 1678 4678 9678 156 157 GIAAAA MXNAAA OOOOxx +6853 9399 1 1 3 13 53 853 853 1853 6853 106 107 PDAAAA NXNAAA VVVVxx +5294 9400 0 2 4 14 94 294 1294 294 5294 188 189 QVAAAA OXNAAA AAAAxx +9864 9401 0 0 4 4 64 864 1864 4864 9864 128 129 KPAAAA PXNAAA HHHHxx +8702 9402 0 2 2 2 2 702 702 3702 8702 4 5 SWAAAA QXNAAA OOOOxx +1132 9403 0 0 2 12 32 132 1132 1132 1132 64 65 ORAAAA RXNAAA VVVVxx +1524 9404 0 0 4 4 24 524 1524 1524 1524 48 49 QGAAAA SXNAAA AAAAxx +4560 9405 0 0 0 0 60 560 560 4560 4560 120 121 KTAAAA TXNAAA HHHHxx +2137 9406 1 1 7 17 37 137 137 2137 2137 74 75 FEAAAA UXNAAA OOOOxx +3283 9407 1 3 3 3 83 283 1283 3283 3283 166 167 HWAAAA VXNAAA VVVVxx +3377 9408 1 1 7 17 77 377 1377 3377 3377 154 155 XZAAAA WXNAAA AAAAxx +2267 9409 1 3 7 7 67 267 267 2267 2267 134 135 FJAAAA XXNAAA HHHHxx +8987 9410 1 3 7 7 87 987 987 3987 8987 174 175 RHAAAA YXNAAA OOOOxx +6709 9411 1 1 9 9 9 709 709 1709 6709 18 19 BYAAAA ZXNAAA VVVVxx +8059 9412 1 3 9 19 59 59 59 3059 8059 118 119 ZXAAAA AYNAAA AAAAxx +3402 9413 0 2 2 2 2 402 1402 3402 3402 4 5 WAAAAA BYNAAA HHHHxx +6443 9414 1 3 3 3 43 443 443 1443 6443 86 87 VNAAAA CYNAAA OOOOxx +8858 9415 0 2 8 18 58 858 858 3858 8858 116 117 SCAAAA DYNAAA VVVVxx +3974 9416 0 2 4 14 74 974 1974 3974 3974 148 149 WWAAAA EYNAAA AAAAxx +3521 9417 1 1 1 1 21 521 1521 3521 3521 42 43 LFAAAA FYNAAA HHHHxx +9509 9418 1 1 9 9 9 509 1509 4509 9509 18 19 TBAAAA GYNAAA OOOOxx +5442 9419 0 2 2 2 42 442 1442 442 5442 84 85 IBAAAA HYNAAA VVVVxx +8968 9420 0 0 8 8 68 968 968 3968 8968 136 137 YGAAAA IYNAAA AAAAxx +333 9421 1 1 3 13 33 333 333 333 333 66 67 VMAAAA JYNAAA HHHHxx +952 9422 0 0 2 12 52 952 952 952 952 104 105 QKAAAA KYNAAA OOOOxx +7482 9423 0 2 2 2 82 482 1482 2482 7482 164 165 UBAAAA LYNAAA VVVVxx +1486 9424 0 2 6 6 86 486 1486 1486 1486 172 173 EFAAAA MYNAAA AAAAxx +1815 9425 1 3 5 15 15 815 1815 1815 1815 30 31 VRAAAA NYNAAA HHHHxx +7937 9426 1 1 7 17 37 937 1937 2937 7937 74 75 HTAAAA OYNAAA OOOOxx +1436 9427 0 0 6 16 36 436 1436 1436 1436 72 73 GDAAAA PYNAAA VVVVxx +3470 9428 0 2 0 10 70 470 1470 3470 3470 140 141 MDAAAA QYNAAA AAAAxx +8195 9429 1 3 5 15 95 195 195 3195 8195 190 191 FDAAAA RYNAAA HHHHxx +6906 9430 0 2 6 6 6 906 906 1906 6906 12 13 QFAAAA SYNAAA OOOOxx +2539 9431 1 3 9 19 39 539 539 2539 2539 78 79 RTAAAA TYNAAA VVVVxx +5988 9432 0 0 8 8 88 988 1988 988 5988 176 177 IWAAAA UYNAAA AAAAxx +8908 9433 0 0 8 8 8 908 908 3908 8908 16 17 QEAAAA VYNAAA HHHHxx +2319 9434 1 3 9 19 19 319 319 2319 2319 38 39 FLAAAA WYNAAA OOOOxx +3263 9435 1 3 3 3 63 263 1263 3263 3263 126 127 NVAAAA XYNAAA VVVVxx +4039 9436 1 3 9 19 39 39 39 4039 4039 78 79 JZAAAA YYNAAA AAAAxx +6373 9437 1 1 3 13 73 373 373 1373 6373 146 147 DLAAAA ZYNAAA HHHHxx +1168 9438 0 0 8 8 68 168 1168 1168 1168 136 137 YSAAAA AZNAAA OOOOxx +8338 9439 0 2 8 18 38 338 338 3338 8338 76 77 SIAAAA BZNAAA VVVVxx +1172 9440 0 0 2 12 72 172 1172 1172 1172 144 145 CTAAAA CZNAAA AAAAxx +200 9441 0 0 0 0 0 200 200 200 200 0 1 SHAAAA DZNAAA HHHHxx +6355 9442 1 3 5 15 55 355 355 1355 6355 110 111 LKAAAA EZNAAA OOOOxx +7768 9443 0 
0 8 8 68 768 1768 2768 7768 136 137 UMAAAA FZNAAA VVVVxx +25 9444 1 1 5 5 25 25 25 25 25 50 51 ZAAAAA GZNAAA AAAAxx +7144 9445 0 0 4 4 44 144 1144 2144 7144 88 89 UOAAAA HZNAAA HHHHxx +8671 9446 1 3 1 11 71 671 671 3671 8671 142 143 NVAAAA IZNAAA OOOOxx +9163 9447 1 3 3 3 63 163 1163 4163 9163 126 127 LOAAAA JZNAAA VVVVxx +8889 9448 1 1 9 9 89 889 889 3889 8889 178 179 XDAAAA KZNAAA AAAAxx +5950 9449 0 2 0 10 50 950 1950 950 5950 100 101 WUAAAA LZNAAA HHHHxx +6163 9450 1 3 3 3 63 163 163 1163 6163 126 127 BDAAAA MZNAAA OOOOxx +8119 9451 1 3 9 19 19 119 119 3119 8119 38 39 HAAAAA NZNAAA VVVVxx +1416 9452 0 0 6 16 16 416 1416 1416 1416 32 33 MCAAAA OZNAAA AAAAxx +4132 9453 0 0 2 12 32 132 132 4132 4132 64 65 YCAAAA PZNAAA HHHHxx +2294 9454 0 2 4 14 94 294 294 2294 2294 188 189 GKAAAA QZNAAA OOOOxx +9094 9455 0 2 4 14 94 94 1094 4094 9094 188 189 ULAAAA RZNAAA VVVVxx +4168 9456 0 0 8 8 68 168 168 4168 4168 136 137 IEAAAA SZNAAA AAAAxx +9108 9457 0 0 8 8 8 108 1108 4108 9108 16 17 IMAAAA TZNAAA HHHHxx +5706 9458 0 2 6 6 6 706 1706 706 5706 12 13 MLAAAA UZNAAA OOOOxx +2231 9459 1 3 1 11 31 231 231 2231 2231 62 63 VHAAAA VZNAAA VVVVxx +2173 9460 1 1 3 13 73 173 173 2173 2173 146 147 PFAAAA WZNAAA AAAAxx +90 9461 0 2 0 10 90 90 90 90 90 180 181 MDAAAA XZNAAA HHHHxx +9996 9462 0 0 6 16 96 996 1996 4996 9996 192 193 MUAAAA YZNAAA OOOOxx +330 9463 0 2 0 10 30 330 330 330 330 60 61 SMAAAA ZZNAAA VVVVxx +2052 9464 0 0 2 12 52 52 52 2052 2052 104 105 YAAAAA AAOAAA AAAAxx +1093 9465 1 1 3 13 93 93 1093 1093 1093 186 187 BQAAAA BAOAAA HHHHxx +5817 9466 1 1 7 17 17 817 1817 817 5817 34 35 TPAAAA CAOAAA OOOOxx +1559 9467 1 3 9 19 59 559 1559 1559 1559 118 119 ZHAAAA DAOAAA VVVVxx +8405 9468 1 1 5 5 5 405 405 3405 8405 10 11 HLAAAA EAOAAA AAAAxx +9962 9469 0 2 2 2 62 962 1962 4962 9962 124 125 ETAAAA FAOAAA HHHHxx +9461 9470 1 1 1 1 61 461 1461 4461 9461 122 123 XZAAAA GAOAAA OOOOxx +3028 9471 0 0 8 8 28 28 1028 3028 3028 56 57 MMAAAA HAOAAA VVVVxx +6814 9472 0 2 4 14 14 814 814 1814 6814 28 29 CCAAAA IAOAAA AAAAxx +9587 9473 1 3 7 7 87 587 1587 4587 9587 174 175 TEAAAA JAOAAA HHHHxx +6863 9474 1 3 3 3 63 863 863 1863 6863 126 127 ZDAAAA KAOAAA OOOOxx +4963 9475 1 3 3 3 63 963 963 4963 4963 126 127 XIAAAA LAOAAA VVVVxx +7811 9476 1 3 1 11 11 811 1811 2811 7811 22 23 LOAAAA MAOAAA AAAAxx +7608 9477 0 0 8 8 8 608 1608 2608 7608 16 17 QGAAAA NAOAAA HHHHxx +5321 9478 1 1 1 1 21 321 1321 321 5321 42 43 RWAAAA OAOAAA OOOOxx +9971 9479 1 3 1 11 71 971 1971 4971 9971 142 143 NTAAAA PAOAAA VVVVxx +6161 9480 1 1 1 1 61 161 161 1161 6161 122 123 ZCAAAA QAOAAA AAAAxx +2181 9481 1 1 1 1 81 181 181 2181 2181 162 163 XFAAAA RAOAAA HHHHxx +3828 9482 0 0 8 8 28 828 1828 3828 3828 56 57 GRAAAA SAOAAA OOOOxx +348 9483 0 0 8 8 48 348 348 348 348 96 97 KNAAAA TAOAAA VVVVxx +5459 9484 1 3 9 19 59 459 1459 459 5459 118 119 ZBAAAA UAOAAA AAAAxx +9406 9485 0 2 6 6 6 406 1406 4406 9406 12 13 UXAAAA VAOAAA HHHHxx +9852 9486 0 0 2 12 52 852 1852 4852 9852 104 105 YOAAAA WAOAAA OOOOxx +3095 9487 1 3 5 15 95 95 1095 3095 3095 190 191 BPAAAA XAOAAA VVVVxx +5597 9488 1 1 7 17 97 597 1597 597 5597 194 195 HHAAAA YAOAAA AAAAxx +8841 9489 1 1 1 1 41 841 841 3841 8841 82 83 BCAAAA ZAOAAA HHHHxx +3536 9490 0 0 6 16 36 536 1536 3536 3536 72 73 AGAAAA ABOAAA OOOOxx +4009 9491 1 1 9 9 9 9 9 4009 4009 18 19 FYAAAA BBOAAA VVVVxx +7366 9492 0 2 6 6 66 366 1366 2366 7366 132 133 IXAAAA CBOAAA AAAAxx +7327 9493 1 3 7 7 27 327 1327 2327 7327 54 55 VVAAAA DBOAAA HHHHxx +1613 9494 1 1 3 13 13 613 1613 1613 1613 26 27 BKAAAA EBOAAA OOOOxx +8619 9495 1 3 9 
19 19 619 619 3619 8619 38 39 NTAAAA FBOAAA VVVVxx +4880 9496 0 0 0 0 80 880 880 4880 4880 160 161 SFAAAA GBOAAA AAAAxx +1552 9497 0 0 2 12 52 552 1552 1552 1552 104 105 SHAAAA HBOAAA HHHHxx +7636 9498 0 0 6 16 36 636 1636 2636 7636 72 73 SHAAAA IBOAAA OOOOxx +8397 9499 1 1 7 17 97 397 397 3397 8397 194 195 ZKAAAA JBOAAA VVVVxx +6224 9500 0 0 4 4 24 224 224 1224 6224 48 49 KFAAAA KBOAAA AAAAxx +9102 9501 0 2 2 2 2 102 1102 4102 9102 4 5 CMAAAA LBOAAA HHHHxx +7906 9502 0 2 6 6 6 906 1906 2906 7906 12 13 CSAAAA MBOAAA OOOOxx +9467 9503 1 3 7 7 67 467 1467 4467 9467 134 135 DAAAAA NBOAAA VVVVxx +828 9504 0 0 8 8 28 828 828 828 828 56 57 WFAAAA OBOAAA AAAAxx +9585 9505 1 1 5 5 85 585 1585 4585 9585 170 171 REAAAA PBOAAA HHHHxx +925 9506 1 1 5 5 25 925 925 925 925 50 51 PJAAAA QBOAAA OOOOxx +7375 9507 1 3 5 15 75 375 1375 2375 7375 150 151 RXAAAA RBOAAA VVVVxx +4027 9508 1 3 7 7 27 27 27 4027 4027 54 55 XYAAAA SBOAAA AAAAxx +766 9509 0 2 6 6 66 766 766 766 766 132 133 MDAAAA TBOAAA HHHHxx +5633 9510 1 1 3 13 33 633 1633 633 5633 66 67 RIAAAA UBOAAA OOOOxx +5648 9511 0 0 8 8 48 648 1648 648 5648 96 97 GJAAAA VBOAAA VVVVxx +148 9512 0 0 8 8 48 148 148 148 148 96 97 SFAAAA WBOAAA AAAAxx +2072 9513 0 0 2 12 72 72 72 2072 2072 144 145 SBAAAA XBOAAA HHHHxx +431 9514 1 3 1 11 31 431 431 431 431 62 63 PQAAAA YBOAAA OOOOxx +1711 9515 1 3 1 11 11 711 1711 1711 1711 22 23 VNAAAA ZBOAAA VVVVxx +9378 9516 0 2 8 18 78 378 1378 4378 9378 156 157 SWAAAA ACOAAA AAAAxx +6776 9517 0 0 6 16 76 776 776 1776 6776 152 153 QAAAAA BCOAAA HHHHxx +6842 9518 0 2 2 2 42 842 842 1842 6842 84 85 EDAAAA CCOAAA OOOOxx +2656 9519 0 0 6 16 56 656 656 2656 2656 112 113 EYAAAA DCOAAA VVVVxx +3116 9520 0 0 6 16 16 116 1116 3116 3116 32 33 WPAAAA ECOAAA AAAAxx +7904 9521 0 0 4 4 4 904 1904 2904 7904 8 9 ASAAAA FCOAAA HHHHxx +3529 9522 1 1 9 9 29 529 1529 3529 3529 58 59 TFAAAA GCOAAA OOOOxx +3240 9523 0 0 0 0 40 240 1240 3240 3240 80 81 QUAAAA HCOAAA VVVVxx +5801 9524 1 1 1 1 1 801 1801 801 5801 2 3 DPAAAA ICOAAA AAAAxx +4090 9525 0 2 0 10 90 90 90 4090 4090 180 181 IBAAAA JCOAAA HHHHxx +7687 9526 1 3 7 7 87 687 1687 2687 7687 174 175 RJAAAA KCOAAA OOOOxx +9711 9527 1 3 1 11 11 711 1711 4711 9711 22 23 NJAAAA LCOAAA VVVVxx +4760 9528 0 0 0 0 60 760 760 4760 4760 120 121 CBAAAA MCOAAA AAAAxx +5524 9529 0 0 4 4 24 524 1524 524 5524 48 49 MEAAAA NCOAAA HHHHxx +2251 9530 1 3 1 11 51 251 251 2251 2251 102 103 PIAAAA OCOAAA OOOOxx +1511 9531 1 3 1 11 11 511 1511 1511 1511 22 23 DGAAAA PCOAAA VVVVxx +5991 9532 1 3 1 11 91 991 1991 991 5991 182 183 LWAAAA QCOAAA AAAAxx +7808 9533 0 0 8 8 8 808 1808 2808 7808 16 17 IOAAAA RCOAAA HHHHxx +8708 9534 0 0 8 8 8 708 708 3708 8708 16 17 YWAAAA SCOAAA OOOOxx +8939 9535 1 3 9 19 39 939 939 3939 8939 78 79 VFAAAA TCOAAA VVVVxx +4295 9536 1 3 5 15 95 295 295 4295 4295 190 191 FJAAAA UCOAAA AAAAxx +5905 9537 1 1 5 5 5 905 1905 905 5905 10 11 DTAAAA VCOAAA HHHHxx +2649 9538 1 1 9 9 49 649 649 2649 2649 98 99 XXAAAA WCOAAA OOOOxx +2347 9539 1 3 7 7 47 347 347 2347 2347 94 95 HMAAAA XCOAAA VVVVxx +6339 9540 1 3 9 19 39 339 339 1339 6339 78 79 VJAAAA YCOAAA AAAAxx +292 9541 0 0 2 12 92 292 292 292 292 184 185 GLAAAA ZCOAAA HHHHxx +9314 9542 0 2 4 14 14 314 1314 4314 9314 28 29 GUAAAA ADOAAA OOOOxx +6893 9543 1 1 3 13 93 893 893 1893 6893 186 187 DFAAAA BDOAAA VVVVxx +3970 9544 0 2 0 10 70 970 1970 3970 3970 140 141 SWAAAA CDOAAA AAAAxx +1652 9545 0 0 2 12 52 652 1652 1652 1652 104 105 OLAAAA DDOAAA HHHHxx +4326 9546 0 2 6 6 26 326 326 4326 4326 52 53 KKAAAA EDOAAA OOOOxx +7881 9547 1 1 1 1 81 881 1881 2881 
7881 162 163 DRAAAA FDOAAA VVVVxx +5291 9548 1 3 1 11 91 291 1291 291 5291 182 183 NVAAAA GDOAAA AAAAxx +957 9549 1 1 7 17 57 957 957 957 957 114 115 VKAAAA HDOAAA HHHHxx +2313 9550 1 1 3 13 13 313 313 2313 2313 26 27 ZKAAAA IDOAAA OOOOxx +5463 9551 1 3 3 3 63 463 1463 463 5463 126 127 DCAAAA JDOAAA VVVVxx +1268 9552 0 0 8 8 68 268 1268 1268 1268 136 137 UWAAAA KDOAAA AAAAxx +5028 9553 0 0 8 8 28 28 1028 28 5028 56 57 KLAAAA LDOAAA HHHHxx +656 9554 0 0 6 16 56 656 656 656 656 112 113 GZAAAA MDOAAA OOOOxx +9274 9555 0 2 4 14 74 274 1274 4274 9274 148 149 SSAAAA NDOAAA VVVVxx +8217 9556 1 1 7 17 17 217 217 3217 8217 34 35 BEAAAA ODOAAA AAAAxx +2175 9557 1 3 5 15 75 175 175 2175 2175 150 151 RFAAAA PDOAAA HHHHxx +6028 9558 0 0 8 8 28 28 28 1028 6028 56 57 WXAAAA QDOAAA OOOOxx +7584 9559 0 0 4 4 84 584 1584 2584 7584 168 169 SFAAAA RDOAAA VVVVxx +4114 9560 0 2 4 14 14 114 114 4114 4114 28 29 GCAAAA SDOAAA AAAAxx +8894 9561 0 2 4 14 94 894 894 3894 8894 188 189 CEAAAA TDOAAA HHHHxx +781 9562 1 1 1 1 81 781 781 781 781 162 163 BEAAAA UDOAAA OOOOxx +133 9563 1 1 3 13 33 133 133 133 133 66 67 DFAAAA VDOAAA VVVVxx +7572 9564 0 0 2 12 72 572 1572 2572 7572 144 145 GFAAAA WDOAAA AAAAxx +8514 9565 0 2 4 14 14 514 514 3514 8514 28 29 MPAAAA XDOAAA HHHHxx +3352 9566 0 0 2 12 52 352 1352 3352 3352 104 105 YYAAAA YDOAAA OOOOxx +8098 9567 0 2 8 18 98 98 98 3098 8098 196 197 MZAAAA ZDOAAA VVVVxx +9116 9568 0 0 6 16 16 116 1116 4116 9116 32 33 QMAAAA AEOAAA AAAAxx +9444 9569 0 0 4 4 44 444 1444 4444 9444 88 89 GZAAAA BEOAAA HHHHxx +2590 9570 0 2 0 10 90 590 590 2590 2590 180 181 QVAAAA CEOAAA OOOOxx +7302 9571 0 2 2 2 2 302 1302 2302 7302 4 5 WUAAAA DEOAAA VVVVxx +7444 9572 0 0 4 4 44 444 1444 2444 7444 88 89 IAAAAA EEOAAA AAAAxx +8748 9573 0 0 8 8 48 748 748 3748 8748 96 97 MYAAAA FEOAAA HHHHxx +7615 9574 1 3 5 15 15 615 1615 2615 7615 30 31 XGAAAA GEOAAA OOOOxx +6090 9575 0 2 0 10 90 90 90 1090 6090 180 181 GAAAAA HEOAAA VVVVxx +1529 9576 1 1 9 9 29 529 1529 1529 1529 58 59 VGAAAA IEOAAA AAAAxx +9398 9577 0 2 8 18 98 398 1398 4398 9398 196 197 MXAAAA JEOAAA HHHHxx +6114 9578 0 2 4 14 14 114 114 1114 6114 28 29 EBAAAA KEOAAA OOOOxx +2736 9579 0 0 6 16 36 736 736 2736 2736 72 73 GBAAAA LEOAAA VVVVxx +468 9580 0 0 8 8 68 468 468 468 468 136 137 ASAAAA MEOAAA AAAAxx +1487 9581 1 3 7 7 87 487 1487 1487 1487 174 175 FFAAAA NEOAAA HHHHxx +4784 9582 0 0 4 4 84 784 784 4784 4784 168 169 ACAAAA OEOAAA OOOOxx +6731 9583 1 3 1 11 31 731 731 1731 6731 62 63 XYAAAA PEOAAA VVVVxx +3328 9584 0 0 8 8 28 328 1328 3328 3328 56 57 AYAAAA QEOAAA AAAAxx +6891 9585 1 3 1 11 91 891 891 1891 6891 182 183 BFAAAA REOAAA HHHHxx +8039 9586 1 3 9 19 39 39 39 3039 8039 78 79 FXAAAA SEOAAA OOOOxx +4064 9587 0 0 4 4 64 64 64 4064 4064 128 129 IAAAAA TEOAAA VVVVxx +542 9588 0 2 2 2 42 542 542 542 542 84 85 WUAAAA UEOAAA AAAAxx +1039 9589 1 3 9 19 39 39 1039 1039 1039 78 79 ZNAAAA VEOAAA HHHHxx +5603 9590 1 3 3 3 3 603 1603 603 5603 6 7 NHAAAA WEOAAA OOOOxx +6641 9591 1 1 1 1 41 641 641 1641 6641 82 83 LVAAAA XEOAAA VVVVxx +6307 9592 1 3 7 7 7 307 307 1307 6307 14 15 PIAAAA YEOAAA AAAAxx +5354 9593 0 2 4 14 54 354 1354 354 5354 108 109 YXAAAA ZEOAAA HHHHxx +7878 9594 0 2 8 18 78 878 1878 2878 7878 156 157 ARAAAA AFOAAA OOOOxx +6391 9595 1 3 1 11 91 391 391 1391 6391 182 183 VLAAAA BFOAAA VVVVxx +4575 9596 1 3 5 15 75 575 575 4575 4575 150 151 ZTAAAA CFOAAA AAAAxx +6644 9597 0 0 4 4 44 644 644 1644 6644 88 89 OVAAAA DFOAAA HHHHxx +5207 9598 1 3 7 7 7 207 1207 207 5207 14 15 HSAAAA EFOAAA OOOOxx +1736 9599 0 0 6 16 36 736 1736 1736 1736 72 73 
UOAAAA FFOAAA VVVVxx +3547 9600 1 3 7 7 47 547 1547 3547 3547 94 95 LGAAAA GFOAAA AAAAxx +6647 9601 1 3 7 7 47 647 647 1647 6647 94 95 RVAAAA HFOAAA HHHHxx +4107 9602 1 3 7 7 7 107 107 4107 4107 14 15 ZBAAAA IFOAAA OOOOxx +8125 9603 1 1 5 5 25 125 125 3125 8125 50 51 NAAAAA JFOAAA VVVVxx +9223 9604 1 3 3 3 23 223 1223 4223 9223 46 47 TQAAAA KFOAAA AAAAxx +6903 9605 1 3 3 3 3 903 903 1903 6903 6 7 NFAAAA LFOAAA HHHHxx +3639 9606 1 3 9 19 39 639 1639 3639 3639 78 79 ZJAAAA MFOAAA OOOOxx +9606 9607 0 2 6 6 6 606 1606 4606 9606 12 13 MFAAAA NFOAAA VVVVxx +3232 9608 0 0 2 12 32 232 1232 3232 3232 64 65 IUAAAA OFOAAA AAAAxx +2063 9609 1 3 3 3 63 63 63 2063 2063 126 127 JBAAAA PFOAAA HHHHxx +3731 9610 1 3 1 11 31 731 1731 3731 3731 62 63 NNAAAA QFOAAA OOOOxx +2558 9611 0 2 8 18 58 558 558 2558 2558 116 117 KUAAAA RFOAAA VVVVxx +2357 9612 1 1 7 17 57 357 357 2357 2357 114 115 RMAAAA SFOAAA AAAAxx +6008 9613 0 0 8 8 8 8 8 1008 6008 16 17 CXAAAA TFOAAA HHHHxx +8246 9614 0 2 6 6 46 246 246 3246 8246 92 93 EFAAAA UFOAAA OOOOxx +8220 9615 0 0 0 0 20 220 220 3220 8220 40 41 EEAAAA VFOAAA VVVVxx +1075 9616 1 3 5 15 75 75 1075 1075 1075 150 151 JPAAAA WFOAAA AAAAxx +2410 9617 0 2 0 10 10 410 410 2410 2410 20 21 SOAAAA XFOAAA HHHHxx +3253 9618 1 1 3 13 53 253 1253 3253 3253 106 107 DVAAAA YFOAAA OOOOxx +4370 9619 0 2 0 10 70 370 370 4370 4370 140 141 CMAAAA ZFOAAA VVVVxx +8426 9620 0 2 6 6 26 426 426 3426 8426 52 53 CMAAAA AGOAAA AAAAxx +2262 9621 0 2 2 2 62 262 262 2262 2262 124 125 AJAAAA BGOAAA HHHHxx +4149 9622 1 1 9 9 49 149 149 4149 4149 98 99 PDAAAA CGOAAA OOOOxx +2732 9623 0 0 2 12 32 732 732 2732 2732 64 65 CBAAAA DGOAAA VVVVxx +8606 9624 0 2 6 6 6 606 606 3606 8606 12 13 ATAAAA EGOAAA AAAAxx +6311 9625 1 3 1 11 11 311 311 1311 6311 22 23 TIAAAA FGOAAA HHHHxx +7223 9626 1 3 3 3 23 223 1223 2223 7223 46 47 VRAAAA GGOAAA OOOOxx +3054 9627 0 2 4 14 54 54 1054 3054 3054 108 109 MNAAAA HGOAAA VVVVxx +3952 9628 0 0 2 12 52 952 1952 3952 3952 104 105 AWAAAA IGOAAA AAAAxx +8252 9629 0 0 2 12 52 252 252 3252 8252 104 105 KFAAAA JGOAAA HHHHxx +6020 9630 0 0 0 0 20 20 20 1020 6020 40 41 OXAAAA KGOAAA OOOOxx +3846 9631 0 2 6 6 46 846 1846 3846 3846 92 93 YRAAAA LGOAAA VVVVxx +3755 9632 1 3 5 15 55 755 1755 3755 3755 110 111 LOAAAA MGOAAA AAAAxx +3765 9633 1 1 5 5 65 765 1765 3765 3765 130 131 VOAAAA NGOAAA HHHHxx +3434 9634 0 2 4 14 34 434 1434 3434 3434 68 69 CCAAAA OGOAAA OOOOxx +1381 9635 1 1 1 1 81 381 1381 1381 1381 162 163 DBAAAA PGOAAA VVVVxx +287 9636 1 3 7 7 87 287 287 287 287 174 175 BLAAAA QGOAAA AAAAxx +4476 9637 0 0 6 16 76 476 476 4476 4476 152 153 EQAAAA RGOAAA HHHHxx +2916 9638 0 0 6 16 16 916 916 2916 2916 32 33 EIAAAA SGOAAA OOOOxx +4517 9639 1 1 7 17 17 517 517 4517 4517 34 35 TRAAAA TGOAAA VVVVxx +4561 9640 1 1 1 1 61 561 561 4561 4561 122 123 LTAAAA UGOAAA AAAAxx +5106 9641 0 2 6 6 6 106 1106 106 5106 12 13 KOAAAA VGOAAA HHHHxx +2077 9642 1 1 7 17 77 77 77 2077 2077 154 155 XBAAAA WGOAAA OOOOxx +5269 9643 1 1 9 9 69 269 1269 269 5269 138 139 RUAAAA XGOAAA VVVVxx +5688 9644 0 0 8 8 88 688 1688 688 5688 176 177 UKAAAA YGOAAA AAAAxx +8831 9645 1 3 1 11 31 831 831 3831 8831 62 63 RBAAAA ZGOAAA HHHHxx +3867 9646 1 3 7 7 67 867 1867 3867 3867 134 135 TSAAAA AHOAAA OOOOxx +6062 9647 0 2 2 2 62 62 62 1062 6062 124 125 EZAAAA BHOAAA VVVVxx +8460 9648 0 0 0 0 60 460 460 3460 8460 120 121 KNAAAA CHOAAA AAAAxx +3138 9649 0 2 8 18 38 138 1138 3138 3138 76 77 SQAAAA DHOAAA HHHHxx +3173 9650 1 1 3 13 73 173 1173 3173 3173 146 147 BSAAAA EHOAAA OOOOxx +7018 9651 0 2 8 18 18 18 1018 2018 7018 36 37 YJAAAA 
FHOAAA VVVVxx +4836 9652 0 0 6 16 36 836 836 4836 4836 72 73 AEAAAA GHOAAA AAAAxx +1007 9653 1 3 7 7 7 7 1007 1007 1007 14 15 TMAAAA HHOAAA HHHHxx +658 9654 0 2 8 18 58 658 658 658 658 116 117 IZAAAA IHOAAA OOOOxx +5205 9655 1 1 5 5 5 205 1205 205 5205 10 11 FSAAAA JHOAAA VVVVxx +5805 9656 1 1 5 5 5 805 1805 805 5805 10 11 HPAAAA KHOAAA AAAAxx +5959 9657 1 3 9 19 59 959 1959 959 5959 118 119 FVAAAA LHOAAA HHHHxx +2863 9658 1 3 3 3 63 863 863 2863 2863 126 127 DGAAAA MHOAAA OOOOxx +7272 9659 0 0 2 12 72 272 1272 2272 7272 144 145 STAAAA NHOAAA VVVVxx +8437 9660 1 1 7 17 37 437 437 3437 8437 74 75 NMAAAA OHOAAA AAAAxx +4900 9661 0 0 0 0 0 900 900 4900 4900 0 1 MGAAAA PHOAAA HHHHxx +890 9662 0 2 0 10 90 890 890 890 890 180 181 GIAAAA QHOAAA OOOOxx +3530 9663 0 2 0 10 30 530 1530 3530 3530 60 61 UFAAAA RHOAAA VVVVxx +6209 9664 1 1 9 9 9 209 209 1209 6209 18 19 VEAAAA SHOAAA AAAAxx +4595 9665 1 3 5 15 95 595 595 4595 4595 190 191 TUAAAA THOAAA HHHHxx +5982 9666 0 2 2 2 82 982 1982 982 5982 164 165 CWAAAA UHOAAA OOOOxx +1101 9667 1 1 1 1 1 101 1101 1101 1101 2 3 JQAAAA VHOAAA VVVVxx +9555 9668 1 3 5 15 55 555 1555 4555 9555 110 111 NDAAAA WHOAAA AAAAxx +1918 9669 0 2 8 18 18 918 1918 1918 1918 36 37 UVAAAA XHOAAA HHHHxx +3527 9670 1 3 7 7 27 527 1527 3527 3527 54 55 RFAAAA YHOAAA OOOOxx +7309 9671 1 1 9 9 9 309 1309 2309 7309 18 19 DVAAAA ZHOAAA VVVVxx +8213 9672 1 1 3 13 13 213 213 3213 8213 26 27 XDAAAA AIOAAA AAAAxx +306 9673 0 2 6 6 6 306 306 306 306 12 13 ULAAAA BIOAAA HHHHxx +845 9674 1 1 5 5 45 845 845 845 845 90 91 NGAAAA CIOAAA OOOOxx +16 9675 0 0 6 16 16 16 16 16 16 32 33 QAAAAA DIOAAA VVVVxx +437 9676 1 1 7 17 37 437 437 437 437 74 75 VQAAAA EIOAAA AAAAxx +9518 9677 0 2 8 18 18 518 1518 4518 9518 36 37 CCAAAA FIOAAA HHHHxx +2142 9678 0 2 2 2 42 142 142 2142 2142 84 85 KEAAAA GIOAAA OOOOxx +8121 9679 1 1 1 1 21 121 121 3121 8121 42 43 JAAAAA HIOAAA VVVVxx +7354 9680 0 2 4 14 54 354 1354 2354 7354 108 109 WWAAAA IIOAAA AAAAxx +1720 9681 0 0 0 0 20 720 1720 1720 1720 40 41 EOAAAA JIOAAA HHHHxx +6078 9682 0 2 8 18 78 78 78 1078 6078 156 157 UZAAAA KIOAAA OOOOxx +5929 9683 1 1 9 9 29 929 1929 929 5929 58 59 BUAAAA LIOAAA VVVVxx +3856 9684 0 0 6 16 56 856 1856 3856 3856 112 113 ISAAAA MIOAAA AAAAxx +3424 9685 0 0 4 4 24 424 1424 3424 3424 48 49 SBAAAA NIOAAA HHHHxx +1712 9686 0 0 2 12 12 712 1712 1712 1712 24 25 WNAAAA OIOAAA OOOOxx +2340 9687 0 0 0 0 40 340 340 2340 2340 80 81 AMAAAA PIOAAA VVVVxx +5570 9688 0 2 0 10 70 570 1570 570 5570 140 141 GGAAAA QIOAAA AAAAxx +8734 9689 0 2 4 14 34 734 734 3734 8734 68 69 YXAAAA RIOAAA HHHHxx +6077 9690 1 1 7 17 77 77 77 1077 6077 154 155 TZAAAA SIOAAA OOOOxx +2960 9691 0 0 0 0 60 960 960 2960 2960 120 121 WJAAAA TIOAAA VVVVxx +5062 9692 0 2 2 2 62 62 1062 62 5062 124 125 SMAAAA UIOAAA AAAAxx +1532 9693 0 0 2 12 32 532 1532 1532 1532 64 65 YGAAAA VIOAAA HHHHxx +8298 9694 0 2 8 18 98 298 298 3298 8298 196 197 EHAAAA WIOAAA OOOOxx +2496 9695 0 0 6 16 96 496 496 2496 2496 192 193 ASAAAA XIOAAA VVVVxx +8412 9696 0 0 2 12 12 412 412 3412 8412 24 25 OLAAAA YIOAAA AAAAxx +724 9697 0 0 4 4 24 724 724 724 724 48 49 WBAAAA ZIOAAA HHHHxx +1019 9698 1 3 9 19 19 19 1019 1019 1019 38 39 FNAAAA AJOAAA OOOOxx +6265 9699 1 1 5 5 65 265 265 1265 6265 130 131 ZGAAAA BJOAAA VVVVxx +740 9700 0 0 0 0 40 740 740 740 740 80 81 MCAAAA CJOAAA AAAAxx +8495 9701 1 3 5 15 95 495 495 3495 8495 190 191 TOAAAA DJOAAA HHHHxx +6983 9702 1 3 3 3 83 983 983 1983 6983 166 167 PIAAAA EJOAAA OOOOxx +991 9703 1 3 1 11 91 991 991 991 991 182 183 DMAAAA FJOAAA VVVVxx +3189 9704 1 1 9 9 89 
189 1189 3189 3189 178 179 RSAAAA GJOAAA AAAAxx +4487 9705 1 3 7 7 87 487 487 4487 4487 174 175 PQAAAA HJOAAA HHHHxx +5554 9706 0 2 4 14 54 554 1554 554 5554 108 109 QFAAAA IJOAAA OOOOxx +1258 9707 0 2 8 18 58 258 1258 1258 1258 116 117 KWAAAA JJOAAA VVVVxx +5359 9708 1 3 9 19 59 359 1359 359 5359 118 119 DYAAAA KJOAAA AAAAxx +2709 9709 1 1 9 9 9 709 709 2709 2709 18 19 FAAAAA LJOAAA HHHHxx +361 9710 1 1 1 1 61 361 361 361 361 122 123 XNAAAA MJOAAA OOOOxx +4028 9711 0 0 8 8 28 28 28 4028 4028 56 57 YYAAAA NJOAAA VVVVxx +3735 9712 1 3 5 15 35 735 1735 3735 3735 70 71 RNAAAA OJOAAA AAAAxx +4427 9713 1 3 7 7 27 427 427 4427 4427 54 55 HOAAAA PJOAAA HHHHxx +7540 9714 0 0 0 0 40 540 1540 2540 7540 80 81 AEAAAA QJOAAA OOOOxx +3569 9715 1 1 9 9 69 569 1569 3569 3569 138 139 HHAAAA RJOAAA VVVVxx +1916 9716 0 0 6 16 16 916 1916 1916 1916 32 33 SVAAAA SJOAAA AAAAxx +7596 9717 0 0 6 16 96 596 1596 2596 7596 192 193 EGAAAA TJOAAA HHHHxx +9721 9718 1 1 1 1 21 721 1721 4721 9721 42 43 XJAAAA UJOAAA OOOOxx +4429 9719 1 1 9 9 29 429 429 4429 4429 58 59 JOAAAA VJOAAA VVVVxx +3471 9720 1 3 1 11 71 471 1471 3471 3471 142 143 NDAAAA WJOAAA AAAAxx +1157 9721 1 1 7 17 57 157 1157 1157 1157 114 115 NSAAAA XJOAAA HHHHxx +5700 9722 0 0 0 0 0 700 1700 700 5700 0 1 GLAAAA YJOAAA OOOOxx +4431 9723 1 3 1 11 31 431 431 4431 4431 62 63 LOAAAA ZJOAAA VVVVxx +9409 9724 1 1 9 9 9 409 1409 4409 9409 18 19 XXAAAA AKOAAA AAAAxx +8752 9725 0 0 2 12 52 752 752 3752 8752 104 105 QYAAAA BKOAAA HHHHxx +9484 9726 0 0 4 4 84 484 1484 4484 9484 168 169 UAAAAA CKOAAA OOOOxx +1266 9727 0 2 6 6 66 266 1266 1266 1266 132 133 SWAAAA DKOAAA VVVVxx +9097 9728 1 1 7 17 97 97 1097 4097 9097 194 195 XLAAAA EKOAAA AAAAxx +3068 9729 0 0 8 8 68 68 1068 3068 3068 136 137 AOAAAA FKOAAA HHHHxx +5490 9730 0 2 0 10 90 490 1490 490 5490 180 181 EDAAAA GKOAAA OOOOxx +1375 9731 1 3 5 15 75 375 1375 1375 1375 150 151 XAAAAA HKOAAA VVVVxx +2487 9732 1 3 7 7 87 487 487 2487 2487 174 175 RRAAAA IKOAAA AAAAxx +1705 9733 1 1 5 5 5 705 1705 1705 1705 10 11 PNAAAA JKOAAA HHHHxx +1571 9734 1 3 1 11 71 571 1571 1571 1571 142 143 LIAAAA KKOAAA OOOOxx +4005 9735 1 1 5 5 5 5 5 4005 4005 10 11 BYAAAA LKOAAA VVVVxx +5497 9736 1 1 7 17 97 497 1497 497 5497 194 195 LDAAAA MKOAAA AAAAxx +2144 9737 0 0 4 4 44 144 144 2144 2144 88 89 MEAAAA NKOAAA HHHHxx +4052 9738 0 0 2 12 52 52 52 4052 4052 104 105 WZAAAA OKOAAA OOOOxx +4942 9739 0 2 2 2 42 942 942 4942 4942 84 85 CIAAAA PKOAAA VVVVxx +5504 9740 0 0 4 4 4 504 1504 504 5504 8 9 SDAAAA QKOAAA AAAAxx +2913 9741 1 1 3 13 13 913 913 2913 2913 26 27 BIAAAA RKOAAA HHHHxx +5617 9742 1 1 7 17 17 617 1617 617 5617 34 35 BIAAAA SKOAAA OOOOxx +8179 9743 1 3 9 19 79 179 179 3179 8179 158 159 PCAAAA TKOAAA VVVVxx +9437 9744 1 1 7 17 37 437 1437 4437 9437 74 75 ZYAAAA UKOAAA AAAAxx +1821 9745 1 1 1 1 21 821 1821 1821 1821 42 43 BSAAAA VKOAAA HHHHxx +5737 9746 1 1 7 17 37 737 1737 737 5737 74 75 RMAAAA WKOAAA OOOOxx +4207 9747 1 3 7 7 7 207 207 4207 4207 14 15 VFAAAA XKOAAA VVVVxx +4815 9748 1 3 5 15 15 815 815 4815 4815 30 31 FDAAAA YKOAAA AAAAxx +8707 9749 1 3 7 7 7 707 707 3707 8707 14 15 XWAAAA ZKOAAA HHHHxx +5970 9750 0 2 0 10 70 970 1970 970 5970 140 141 QVAAAA ALOAAA OOOOxx +5501 9751 1 1 1 1 1 501 1501 501 5501 2 3 PDAAAA BLOAAA VVVVxx +4013 9752 1 1 3 13 13 13 13 4013 4013 26 27 JYAAAA CLOAAA AAAAxx +9235 9753 1 3 5 15 35 235 1235 4235 9235 70 71 FRAAAA DLOAAA HHHHxx +2503 9754 1 3 3 3 3 503 503 2503 2503 6 7 HSAAAA ELOAAA OOOOxx +9181 9755 1 1 1 1 81 181 1181 4181 9181 162 163 DPAAAA FLOAAA VVVVxx +2289 9756 1 1 9 9 89 289 289 
2289 2289 178 179 BKAAAA GLOAAA AAAAxx +4256 9757 0 0 6 16 56 256 256 4256 4256 112 113 SHAAAA HLOAAA HHHHxx +191 9758 1 3 1 11 91 191 191 191 191 182 183 JHAAAA ILOAAA OOOOxx +9655 9759 1 3 5 15 55 655 1655 4655 9655 110 111 JHAAAA JLOAAA VVVVxx +8615 9760 1 3 5 15 15 615 615 3615 8615 30 31 JTAAAA KLOAAA AAAAxx +3011 9761 1 3 1 11 11 11 1011 3011 3011 22 23 VLAAAA LLOAAA HHHHxx +6376 9762 0 0 6 16 76 376 376 1376 6376 152 153 GLAAAA MLOAAA OOOOxx +68 9763 0 0 8 8 68 68 68 68 68 136 137 QCAAAA NLOAAA VVVVxx +4720 9764 0 0 0 0 20 720 720 4720 4720 40 41 OZAAAA OLOAAA AAAAxx +6848 9765 0 0 8 8 48 848 848 1848 6848 96 97 KDAAAA PLOAAA HHHHxx +456 9766 0 0 6 16 56 456 456 456 456 112 113 ORAAAA QLOAAA OOOOxx +5887 9767 1 3 7 7 87 887 1887 887 5887 174 175 LSAAAA RLOAAA VVVVxx +9249 9768 1 1 9 9 49 249 1249 4249 9249 98 99 TRAAAA SLOAAA AAAAxx +4041 9769 1 1 1 1 41 41 41 4041 4041 82 83 LZAAAA TLOAAA HHHHxx +2304 9770 0 0 4 4 4 304 304 2304 2304 8 9 QKAAAA ULOAAA OOOOxx +8763 9771 1 3 3 3 63 763 763 3763 8763 126 127 BZAAAA VLOAAA VVVVxx +2115 9772 1 3 5 15 15 115 115 2115 2115 30 31 JDAAAA WLOAAA AAAAxx +8014 9773 0 2 4 14 14 14 14 3014 8014 28 29 GWAAAA XLOAAA HHHHxx +9895 9774 1 3 5 15 95 895 1895 4895 9895 190 191 PQAAAA YLOAAA OOOOxx +671 9775 1 3 1 11 71 671 671 671 671 142 143 VZAAAA ZLOAAA VVVVxx +3774 9776 0 2 4 14 74 774 1774 3774 3774 148 149 EPAAAA AMOAAA AAAAxx +134 9777 0 2 4 14 34 134 134 134 134 68 69 EFAAAA BMOAAA HHHHxx +534 9778 0 2 4 14 34 534 534 534 534 68 69 OUAAAA CMOAAA OOOOxx +7308 9779 0 0 8 8 8 308 1308 2308 7308 16 17 CVAAAA DMOAAA VVVVxx +5244 9780 0 0 4 4 44 244 1244 244 5244 88 89 STAAAA EMOAAA AAAAxx +1512 9781 0 0 2 12 12 512 1512 1512 1512 24 25 EGAAAA FMOAAA HHHHxx +8960 9782 0 0 0 0 60 960 960 3960 8960 120 121 QGAAAA GMOAAA OOOOxx +6602 9783 0 2 2 2 2 602 602 1602 6602 4 5 YTAAAA HMOAAA VVVVxx +593 9784 1 1 3 13 93 593 593 593 593 186 187 VWAAAA IMOAAA AAAAxx +2353 9785 1 1 3 13 53 353 353 2353 2353 106 107 NMAAAA JMOAAA HHHHxx +4139 9786 1 3 9 19 39 139 139 4139 4139 78 79 FDAAAA KMOAAA OOOOxx +3063 9787 1 3 3 3 63 63 1063 3063 3063 126 127 VNAAAA LMOAAA VVVVxx +652 9788 0 0 2 12 52 652 652 652 652 104 105 CZAAAA MMOAAA AAAAxx +7405 9789 1 1 5 5 5 405 1405 2405 7405 10 11 VYAAAA NMOAAA HHHHxx +3034 9790 0 2 4 14 34 34 1034 3034 3034 68 69 SMAAAA OMOAAA OOOOxx +4614 9791 0 2 4 14 14 614 614 4614 4614 28 29 MVAAAA PMOAAA VVVVxx +2351 9792 1 3 1 11 51 351 351 2351 2351 102 103 LMAAAA QMOAAA AAAAxx +8208 9793 0 0 8 8 8 208 208 3208 8208 16 17 SDAAAA RMOAAA HHHHxx +5475 9794 1 3 5 15 75 475 1475 475 5475 150 151 PCAAAA SMOAAA OOOOxx +6875 9795 1 3 5 15 75 875 875 1875 6875 150 151 LEAAAA TMOAAA VVVVxx +563 9796 1 3 3 3 63 563 563 563 563 126 127 RVAAAA UMOAAA AAAAxx +3346 9797 0 2 6 6 46 346 1346 3346 3346 92 93 SYAAAA VMOAAA HHHHxx +291 9798 1 3 1 11 91 291 291 291 291 182 183 FLAAAA WMOAAA OOOOxx +6345 9799 1 1 5 5 45 345 345 1345 6345 90 91 BKAAAA XMOAAA VVVVxx +8099 9800 1 3 9 19 99 99 99 3099 8099 198 199 NZAAAA YMOAAA AAAAxx +2078 9801 0 2 8 18 78 78 78 2078 2078 156 157 YBAAAA ZMOAAA HHHHxx +8238 9802 0 2 8 18 38 238 238 3238 8238 76 77 WEAAAA ANOAAA OOOOxx +4482 9803 0 2 2 2 82 482 482 4482 4482 164 165 KQAAAA BNOAAA VVVVxx +716 9804 0 0 6 16 16 716 716 716 716 32 33 OBAAAA CNOAAA AAAAxx +7288 9805 0 0 8 8 88 288 1288 2288 7288 176 177 IUAAAA DNOAAA HHHHxx +5906 9806 0 2 6 6 6 906 1906 906 5906 12 13 ETAAAA ENOAAA OOOOxx +5618 9807 0 2 8 18 18 618 1618 618 5618 36 37 CIAAAA FNOAAA VVVVxx +1141 9808 1 1 1 1 41 141 1141 1141 1141 82 83 XRAAAA GNOAAA 
AAAAxx +8231 9809 1 3 1 11 31 231 231 3231 8231 62 63 PEAAAA HNOAAA HHHHxx +3713 9810 1 1 3 13 13 713 1713 3713 3713 26 27 VMAAAA INOAAA OOOOxx +9158 9811 0 2 8 18 58 158 1158 4158 9158 116 117 GOAAAA JNOAAA VVVVxx +4051 9812 1 3 1 11 51 51 51 4051 4051 102 103 VZAAAA KNOAAA AAAAxx +1973 9813 1 1 3 13 73 973 1973 1973 1973 146 147 XXAAAA LNOAAA HHHHxx +6710 9814 0 2 0 10 10 710 710 1710 6710 20 21 CYAAAA MNOAAA OOOOxx +1021 9815 1 1 1 1 21 21 1021 1021 1021 42 43 HNAAAA NNOAAA VVVVxx +2196 9816 0 0 6 16 96 196 196 2196 2196 192 193 MGAAAA ONOAAA AAAAxx +8335 9817 1 3 5 15 35 335 335 3335 8335 70 71 PIAAAA PNOAAA HHHHxx +2272 9818 0 0 2 12 72 272 272 2272 2272 144 145 KJAAAA QNOAAA OOOOxx +3818 9819 0 2 8 18 18 818 1818 3818 3818 36 37 WQAAAA RNOAAA VVVVxx +679 9820 1 3 9 19 79 679 679 679 679 158 159 DAAAAA SNOAAA AAAAxx +7512 9821 0 0 2 12 12 512 1512 2512 7512 24 25 YCAAAA TNOAAA HHHHxx +493 9822 1 1 3 13 93 493 493 493 493 186 187 ZSAAAA UNOAAA OOOOxx +5663 9823 1 3 3 3 63 663 1663 663 5663 126 127 VJAAAA VNOAAA VVVVxx +4655 9824 1 3 5 15 55 655 655 4655 4655 110 111 BXAAAA WNOAAA AAAAxx +3996 9825 0 0 6 16 96 996 1996 3996 3996 192 193 SXAAAA XNOAAA HHHHxx +8797 9826 1 1 7 17 97 797 797 3797 8797 194 195 JAAAAA YNOAAA OOOOxx +2991 9827 1 3 1 11 91 991 991 2991 2991 182 183 BLAAAA ZNOAAA VVVVxx +7038 9828 0 2 8 18 38 38 1038 2038 7038 76 77 SKAAAA AOOAAA AAAAxx +4174 9829 0 2 4 14 74 174 174 4174 4174 148 149 OEAAAA BOOAAA HHHHxx +6908 9830 0 0 8 8 8 908 908 1908 6908 16 17 SFAAAA COOAAA OOOOxx +8477 9831 1 1 7 17 77 477 477 3477 8477 154 155 BOAAAA DOOAAA VVVVxx +3576 9832 0 0 6 16 76 576 1576 3576 3576 152 153 OHAAAA EOOAAA AAAAxx +2685 9833 1 1 5 5 85 685 685 2685 2685 170 171 HZAAAA FOOAAA HHHHxx +9161 9834 1 1 1 1 61 161 1161 4161 9161 122 123 JOAAAA GOOAAA OOOOxx +2951 9835 1 3 1 11 51 951 951 2951 2951 102 103 NJAAAA HOOAAA VVVVxx +8362 9836 0 2 2 2 62 362 362 3362 8362 124 125 QJAAAA IOOAAA AAAAxx +2379 9837 1 3 9 19 79 379 379 2379 2379 158 159 NNAAAA JOOAAA HHHHxx +1277 9838 1 1 7 17 77 277 1277 1277 1277 154 155 DXAAAA KOOAAA OOOOxx +1728 9839 0 0 8 8 28 728 1728 1728 1728 56 57 MOAAAA LOOAAA VVVVxx +9816 9840 0 0 6 16 16 816 1816 4816 9816 32 33 ONAAAA MOOAAA AAAAxx +6288 9841 0 0 8 8 88 288 288 1288 6288 176 177 WHAAAA NOOAAA HHHHxx +8985 9842 1 1 5 5 85 985 985 3985 8985 170 171 PHAAAA OOOAAA OOOOxx +771 9843 1 3 1 11 71 771 771 771 771 142 143 RDAAAA POOAAA VVVVxx +464 9844 0 0 4 4 64 464 464 464 464 128 129 WRAAAA QOOAAA AAAAxx +9625 9845 1 1 5 5 25 625 1625 4625 9625 50 51 FGAAAA ROOAAA HHHHxx +9608 9846 0 0 8 8 8 608 1608 4608 9608 16 17 OFAAAA SOOAAA OOOOxx +9170 9847 0 2 0 10 70 170 1170 4170 9170 140 141 SOAAAA TOOAAA VVVVxx +9658 9848 0 2 8 18 58 658 1658 4658 9658 116 117 MHAAAA UOOAAA AAAAxx +7515 9849 1 3 5 15 15 515 1515 2515 7515 30 31 BDAAAA VOOAAA HHHHxx +9400 9850 0 0 0 0 0 400 1400 4400 9400 0 1 OXAAAA WOOAAA OOOOxx +2045 9851 1 1 5 5 45 45 45 2045 2045 90 91 RAAAAA XOOAAA VVVVxx +324 9852 0 0 4 4 24 324 324 324 324 48 49 MMAAAA YOOAAA AAAAxx +4252 9853 0 0 2 12 52 252 252 4252 4252 104 105 OHAAAA ZOOAAA HHHHxx +8329 9854 1 1 9 9 29 329 329 3329 8329 58 59 JIAAAA APOAAA OOOOxx +4472 9855 0 0 2 12 72 472 472 4472 4472 144 145 AQAAAA BPOAAA VVVVxx +1047 9856 1 3 7 7 47 47 1047 1047 1047 94 95 HOAAAA CPOAAA AAAAxx +9341 9857 1 1 1 1 41 341 1341 4341 9341 82 83 HVAAAA DPOAAA HHHHxx +7000 9858 0 0 0 0 0 0 1000 2000 7000 0 1 GJAAAA EPOAAA OOOOxx +1429 9859 1 1 9 9 29 429 1429 1429 1429 58 59 ZCAAAA FPOAAA VVVVxx +2701 9860 1 1 1 1 1 701 701 2701 2701 2 3 XZAAAA 
GPOAAA AAAAxx +6630 9861 0 2 0 10 30 630 630 1630 6630 60 61 AVAAAA HPOAAA HHHHxx +3669 9862 1 1 9 9 69 669 1669 3669 3669 138 139 DLAAAA IPOAAA OOOOxx +8613 9863 1 1 3 13 13 613 613 3613 8613 26 27 HTAAAA JPOAAA VVVVxx +7080 9864 0 0 0 0 80 80 1080 2080 7080 160 161 IMAAAA KPOAAA AAAAxx +8788 9865 0 0 8 8 88 788 788 3788 8788 176 177 AAAAAA LPOAAA HHHHxx +6291 9866 1 3 1 11 91 291 291 1291 6291 182 183 ZHAAAA MPOAAA OOOOxx +7885 9867 1 1 5 5 85 885 1885 2885 7885 170 171 HRAAAA NPOAAA VVVVxx +7160 9868 0 0 0 0 60 160 1160 2160 7160 120 121 KPAAAA OPOAAA AAAAxx +6140 9869 0 0 0 0 40 140 140 1140 6140 80 81 ECAAAA PPOAAA HHHHxx +9881 9870 1 1 1 1 81 881 1881 4881 9881 162 163 BQAAAA QPOAAA OOOOxx +9140 9871 0 0 0 0 40 140 1140 4140 9140 80 81 ONAAAA RPOAAA VVVVxx +644 9872 0 0 4 4 44 644 644 644 644 88 89 UYAAAA SPOAAA AAAAxx +3667 9873 1 3 7 7 67 667 1667 3667 3667 134 135 BLAAAA TPOAAA HHHHxx +2675 9874 1 3 5 15 75 675 675 2675 2675 150 151 XYAAAA UPOAAA OOOOxx +9492 9875 0 0 2 12 92 492 1492 4492 9492 184 185 CBAAAA VPOAAA VVVVxx +5004 9876 0 0 4 4 4 4 1004 4 5004 8 9 MKAAAA WPOAAA AAAAxx +9456 9877 0 0 6 16 56 456 1456 4456 9456 112 113 SZAAAA XPOAAA HHHHxx +8197 9878 1 1 7 17 97 197 197 3197 8197 194 195 HDAAAA YPOAAA OOOOxx +2837 9879 1 1 7 17 37 837 837 2837 2837 74 75 DFAAAA ZPOAAA VVVVxx +127 9880 1 3 7 7 27 127 127 127 127 54 55 XEAAAA AQOAAA AAAAxx +9772 9881 0 0 2 12 72 772 1772 4772 9772 144 145 WLAAAA BQOAAA HHHHxx +5743 9882 1 3 3 3 43 743 1743 743 5743 86 87 XMAAAA CQOAAA OOOOxx +2007 9883 1 3 7 7 7 7 7 2007 2007 14 15 FZAAAA DQOAAA VVVVxx +7586 9884 0 2 6 6 86 586 1586 2586 7586 172 173 UFAAAA EQOAAA AAAAxx +45 9885 1 1 5 5 45 45 45 45 45 90 91 TBAAAA FQOAAA HHHHxx +6482 9886 0 2 2 2 82 482 482 1482 6482 164 165 IPAAAA GQOAAA OOOOxx +4565 9887 1 1 5 5 65 565 565 4565 4565 130 131 PTAAAA HQOAAA VVVVxx +6975 9888 1 3 5 15 75 975 975 1975 6975 150 151 HIAAAA IQOAAA AAAAxx +7260 9889 0 0 0 0 60 260 1260 2260 7260 120 121 GTAAAA JQOAAA HHHHxx +2830 9890 0 2 0 10 30 830 830 2830 2830 60 61 WEAAAA KQOAAA OOOOxx +9365 9891 1 1 5 5 65 365 1365 4365 9365 130 131 FWAAAA LQOAAA VVVVxx +8207 9892 1 3 7 7 7 207 207 3207 8207 14 15 RDAAAA MQOAAA AAAAxx +2506 9893 0 2 6 6 6 506 506 2506 2506 12 13 KSAAAA NQOAAA HHHHxx +8081 9894 1 1 1 1 81 81 81 3081 8081 162 163 VYAAAA OQOAAA OOOOxx +8678 9895 0 2 8 18 78 678 678 3678 8678 156 157 UVAAAA PQOAAA VVVVxx +9932 9896 0 0 2 12 32 932 1932 4932 9932 64 65 ASAAAA QQOAAA AAAAxx +447 9897 1 3 7 7 47 447 447 447 447 94 95 FRAAAA RQOAAA HHHHxx +9187 9898 1 3 7 7 87 187 1187 4187 9187 174 175 JPAAAA SQOAAA OOOOxx +89 9899 1 1 9 9 89 89 89 89 89 178 179 LDAAAA TQOAAA VVVVxx +7027 9900 1 3 7 7 27 27 1027 2027 7027 54 55 HKAAAA UQOAAA AAAAxx +1536 9901 0 0 6 16 36 536 1536 1536 1536 72 73 CHAAAA VQOAAA HHHHxx +160 9902 0 0 0 0 60 160 160 160 160 120 121 EGAAAA WQOAAA OOOOxx +7679 9903 1 3 9 19 79 679 1679 2679 7679 158 159 JJAAAA XQOAAA VVVVxx +5973 9904 1 1 3 13 73 973 1973 973 5973 146 147 TVAAAA YQOAAA AAAAxx +4401 9905 1 1 1 1 1 401 401 4401 4401 2 3 HNAAAA ZQOAAA HHHHxx +395 9906 1 3 5 15 95 395 395 395 395 190 191 FPAAAA AROAAA OOOOxx +4904 9907 0 0 4 4 4 904 904 4904 4904 8 9 QGAAAA BROAAA VVVVxx +2759 9908 1 3 9 19 59 759 759 2759 2759 118 119 DCAAAA CROAAA AAAAxx +8713 9909 1 1 3 13 13 713 713 3713 8713 26 27 DXAAAA DROAAA HHHHxx +3770 9910 0 2 0 10 70 770 1770 3770 3770 140 141 APAAAA EROAAA OOOOxx +8272 9911 0 0 2 12 72 272 272 3272 8272 144 145 EGAAAA FROAAA VVVVxx +5358 9912 0 2 8 18 58 358 1358 358 5358 116 117 CYAAAA GROAAA AAAAxx +9747 
9913 1 3 7 7 47 747 1747 4747 9747 94 95 XKAAAA HROAAA HHHHxx +1567 9914 1 3 7 7 67 567 1567 1567 1567 134 135 HIAAAA IROAAA OOOOxx +2136 9915 0 0 6 16 36 136 136 2136 2136 72 73 EEAAAA JROAAA VVVVxx +314 9916 0 2 4 14 14 314 314 314 314 28 29 CMAAAA KROAAA AAAAxx +4583 9917 1 3 3 3 83 583 583 4583 4583 166 167 HUAAAA LROAAA HHHHxx +375 9918 1 3 5 15 75 375 375 375 375 150 151 LOAAAA MROAAA OOOOxx +5566 9919 0 2 6 6 66 566 1566 566 5566 132 133 CGAAAA NROAAA VVVVxx +6865 9920 1 1 5 5 65 865 865 1865 6865 130 131 BEAAAA OROAAA AAAAxx +894 9921 0 2 4 14 94 894 894 894 894 188 189 KIAAAA PROAAA HHHHxx +5399 9922 1 3 9 19 99 399 1399 399 5399 198 199 RZAAAA QROAAA OOOOxx +1385 9923 1 1 5 5 85 385 1385 1385 1385 170 171 HBAAAA RROAAA VVVVxx +2156 9924 0 0 6 16 56 156 156 2156 2156 112 113 YEAAAA SROAAA AAAAxx +9659 9925 1 3 9 19 59 659 1659 4659 9659 118 119 NHAAAA TROAAA HHHHxx +477 9926 1 1 7 17 77 477 477 477 477 154 155 JSAAAA UROAAA OOOOxx +8194 9927 0 2 4 14 94 194 194 3194 8194 188 189 EDAAAA VROAAA VVVVxx +3937 9928 1 1 7 17 37 937 1937 3937 3937 74 75 LVAAAA WROAAA AAAAxx +3745 9929 1 1 5 5 45 745 1745 3745 3745 90 91 BOAAAA XROAAA HHHHxx +4096 9930 0 0 6 16 96 96 96 4096 4096 192 193 OBAAAA YROAAA OOOOxx +5487 9931 1 3 7 7 87 487 1487 487 5487 174 175 BDAAAA ZROAAA VVVVxx +2475 9932 1 3 5 15 75 475 475 2475 2475 150 151 FRAAAA ASOAAA AAAAxx +6105 9933 1 1 5 5 5 105 105 1105 6105 10 11 VAAAAA BSOAAA HHHHxx +6036 9934 0 0 6 16 36 36 36 1036 6036 72 73 EYAAAA CSOAAA OOOOxx +1315 9935 1 3 5 15 15 315 1315 1315 1315 30 31 PYAAAA DSOAAA VVVVxx +4473 9936 1 1 3 13 73 473 473 4473 4473 146 147 BQAAAA ESOAAA AAAAxx +4016 9937 0 0 6 16 16 16 16 4016 4016 32 33 MYAAAA FSOAAA HHHHxx +8135 9938 1 3 5 15 35 135 135 3135 8135 70 71 XAAAAA GSOAAA OOOOxx +8892 9939 0 0 2 12 92 892 892 3892 8892 184 185 AEAAAA HSOAAA VVVVxx +4850 9940 0 2 0 10 50 850 850 4850 4850 100 101 OEAAAA ISOAAA AAAAxx +2545 9941 1 1 5 5 45 545 545 2545 2545 90 91 XTAAAA JSOAAA HHHHxx +3788 9942 0 0 8 8 88 788 1788 3788 3788 176 177 SPAAAA KSOAAA OOOOxx +1672 9943 0 0 2 12 72 672 1672 1672 1672 144 145 IMAAAA LSOAAA VVVVxx +3664 9944 0 0 4 4 64 664 1664 3664 3664 128 129 YKAAAA MSOAAA AAAAxx +3775 9945 1 3 5 15 75 775 1775 3775 3775 150 151 FPAAAA NSOAAA HHHHxx +3103 9946 1 3 3 3 3 103 1103 3103 3103 6 7 JPAAAA OSOAAA OOOOxx +9335 9947 1 3 5 15 35 335 1335 4335 9335 70 71 BVAAAA PSOAAA VVVVxx +9200 9948 0 0 0 0 0 200 1200 4200 9200 0 1 WPAAAA QSOAAA AAAAxx +8665 9949 1 1 5 5 65 665 665 3665 8665 130 131 HVAAAA RSOAAA HHHHxx +1356 9950 0 0 6 16 56 356 1356 1356 1356 112 113 EAAAAA SSOAAA OOOOxx +6118 9951 0 2 8 18 18 118 118 1118 6118 36 37 IBAAAA TSOAAA VVVVxx +4605 9952 1 1 5 5 5 605 605 4605 4605 10 11 DVAAAA USOAAA AAAAxx +5651 9953 1 3 1 11 51 651 1651 651 5651 102 103 JJAAAA VSOAAA HHHHxx +9055 9954 1 3 5 15 55 55 1055 4055 9055 110 111 HKAAAA WSOAAA OOOOxx +8461 9955 1 1 1 1 61 461 461 3461 8461 122 123 LNAAAA XSOAAA VVVVxx +6107 9956 1 3 7 7 7 107 107 1107 6107 14 15 XAAAAA YSOAAA AAAAxx +1967 9957 1 3 7 7 67 967 1967 1967 1967 134 135 RXAAAA ZSOAAA HHHHxx +8910 9958 0 2 0 10 10 910 910 3910 8910 20 21 SEAAAA ATOAAA OOOOxx +8257 9959 1 1 7 17 57 257 257 3257 8257 114 115 PFAAAA BTOAAA VVVVxx +851 9960 1 3 1 11 51 851 851 851 851 102 103 TGAAAA CTOAAA AAAAxx +7823 9961 1 3 3 3 23 823 1823 2823 7823 46 47 XOAAAA DTOAAA HHHHxx +3208 9962 0 0 8 8 8 208 1208 3208 3208 16 17 KTAAAA ETOAAA OOOOxx +856 9963 0 0 6 16 56 856 856 856 856 112 113 YGAAAA FTOAAA VVVVxx +2654 9964 0 2 4 14 54 654 654 2654 2654 108 109 CYAAAA GTOAAA 
AAAAxx +7185 9965 1 1 5 5 85 185 1185 2185 7185 170 171 JQAAAA HTOAAA HHHHxx +309 9966 1 1 9 9 9 309 309 309 309 18 19 XLAAAA ITOAAA OOOOxx +9752 9967 0 0 2 12 52 752 1752 4752 9752 104 105 CLAAAA JTOAAA VVVVxx +6405 9968 1 1 5 5 5 405 405 1405 6405 10 11 JMAAAA KTOAAA AAAAxx +6113 9969 1 1 3 13 13 113 113 1113 6113 26 27 DBAAAA LTOAAA HHHHxx +9774 9970 0 2 4 14 74 774 1774 4774 9774 148 149 YLAAAA MTOAAA OOOOxx +1674 9971 0 2 4 14 74 674 1674 1674 1674 148 149 KMAAAA NTOAAA VVVVxx +9602 9972 0 2 2 2 2 602 1602 4602 9602 4 5 IFAAAA OTOAAA AAAAxx +1363 9973 1 3 3 3 63 363 1363 1363 1363 126 127 LAAAAA PTOAAA HHHHxx +6887 9974 1 3 7 7 87 887 887 1887 6887 174 175 XEAAAA QTOAAA OOOOxx +6170 9975 0 2 0 10 70 170 170 1170 6170 140 141 IDAAAA RTOAAA VVVVxx +8888 9976 0 0 8 8 88 888 888 3888 8888 176 177 WDAAAA STOAAA AAAAxx +2981 9977 1 1 1 1 81 981 981 2981 2981 162 163 RKAAAA TTOAAA HHHHxx +7369 9978 1 1 9 9 69 369 1369 2369 7369 138 139 LXAAAA UTOAAA OOOOxx +6227 9979 1 3 7 7 27 227 227 1227 6227 54 55 NFAAAA VTOAAA VVVVxx +8002 9980 0 2 2 2 2 2 2 3002 8002 4 5 UVAAAA WTOAAA AAAAxx +4288 9981 0 0 8 8 88 288 288 4288 4288 176 177 YIAAAA XTOAAA HHHHxx +5136 9982 0 0 6 16 36 136 1136 136 5136 72 73 OPAAAA YTOAAA OOOOxx +1084 9983 0 0 4 4 84 84 1084 1084 1084 168 169 SPAAAA ZTOAAA VVVVxx +9117 9984 1 1 7 17 17 117 1117 4117 9117 34 35 RMAAAA AUOAAA AAAAxx +2406 9985 0 2 6 6 6 406 406 2406 2406 12 13 OOAAAA BUOAAA HHHHxx +1384 9986 0 0 4 4 84 384 1384 1384 1384 168 169 GBAAAA CUOAAA OOOOxx +9194 9987 0 2 4 14 94 194 1194 4194 9194 188 189 QPAAAA DUOAAA VVVVxx +858 9988 0 2 8 18 58 858 858 858 858 116 117 AHAAAA EUOAAA AAAAxx +8592 9989 0 0 2 12 92 592 592 3592 8592 184 185 MSAAAA FUOAAA HHHHxx +4773 9990 1 1 3 13 73 773 773 4773 4773 146 147 PBAAAA GUOAAA OOOOxx +4093 9991 1 1 3 13 93 93 93 4093 4093 186 187 LBAAAA HUOAAA VVVVxx +6587 9992 1 3 7 7 87 587 587 1587 6587 174 175 JTAAAA IUOAAA AAAAxx +6093 9993 1 1 3 13 93 93 93 1093 6093 186 187 JAAAAA JUOAAA HHHHxx +429 9994 1 1 9 9 29 429 429 429 429 58 59 NQAAAA KUOAAA OOOOxx +5780 9995 0 0 0 0 80 780 1780 780 5780 160 161 IOAAAA LUOAAA VVVVxx +1783 9996 1 3 3 3 83 783 1783 1783 1783 166 167 PQAAAA MUOAAA AAAAxx +2992 9997 0 0 2 12 92 992 992 2992 2992 184 185 CLAAAA NUOAAA HHHHxx +0 9998 0 0 0 0 0 0 0 0 0 0 1 AAAAAA OUOAAA OOOOxx +2968 9999 0 0 8 8 68 968 968 2968 2968 136 137 EKAAAA PUOAAA VVVVxx diff --git a/contrib/pg_tde/docker/Dockerfile b/contrib/pg_tde/docker/Dockerfile new file mode 100644 index 00000000000..4abd758fe6c --- /dev/null +++ b/contrib/pg_tde/docker/Dockerfile @@ -0,0 +1,29 @@ +FROM postgres:16 + +RUN apt-get update; \ + apt-get install -y --no-install-recommends \ + curl \ + libssl-dev \ + gcc \ + postgresql-server-dev-16 \ + make \ + libcurl4-openssl-dev + +WORKDIR /opt/pg_tde + +COPY . . 
+RUN make USE_PGXS=1 MAJORVERSION=16 && \
+    make USE_PGXS=1 install
+RUN cp /usr/share/postgresql/postgresql.conf.sample /etc/postgresql/postgresql.conf; \
+    echo "shared_preload_libraries = 'pg_tde'" >> /etc/postgresql/postgresql.conf; \
+    # echo "log_min_messages = debug3" >> /etc/postgresql/postgresql.conf; \
+    # echo "log_min_error_statement = debug3" >> /etc/postgresql/postgresql.conf; \
+    chown postgres /etc/postgresql/tde_conf.json; \
+    mkdir -p /docker-entrypoint-initdb.d
+COPY ./docker/pg-tde-create-ext.sh /docker-entrypoint-initdb.d/pg-tde-create-ext.sh
+COPY ./docker/pg-tde-streaming-repl.sh /docker-entrypoint-initdb.d/pg-tde-streaming-repl.sh
+
+VOLUME /etc/postgresql/
+
+CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
diff --git a/contrib/pg_tde/docker/docker-compose.yaml b/contrib/pg_tde/docker/docker-compose.yaml
new file mode 100644
index 00000000000..65bf3158b28
--- /dev/null
+++ b/contrib/pg_tde/docker/docker-compose.yaml
@@ -0,0 +1,26 @@
+# TODO: needs improvements as currently `docker-compose up -d --build` has to be run twice
+# because replication init on the secondary doesn't work 100% properly
+version: "3.4"
+services:
+  pg-primary:
+    build:
+      dockerfile: ./docker/Dockerfile
+      context: ..
+    environment:
+      - "POSTGRES_PASSWORD=testpass"
+      - "PG_PRIMARY=true"
+      - "POSTGRES_HOST_AUTH_METHOD=trust"
+      - "PG_REPLICATION=true"
+    ports:
+      - "5433:5432"
+  pg-secondary:
+    build:
+      dockerfile: ./docker/Dockerfile
+      context: ..
+    depends_on:
+      - pg-primary
+    environment:
+      - "POSTGRES_PASSWORD=testpass"
+      - "PG_REPLICATION=true"
+    ports:
+      - "5434:5432"
diff --git a/contrib/pg_tde/docker/pg-tde-create-ext.sh b/contrib/pg_tde/docker/pg-tde-create-ext.sh
new file mode 100644
index 00000000000..b2b5814a3d6
--- /dev/null
+++ b/contrib/pg_tde/docker/pg-tde-create-ext.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+
+set -e
+
+PG_USER=${POSTGRES_USER:-"postgres"}
+
+psql -U ${PG_USER} -c 'CREATE EXTENSION pg_tde;'
+psql -U ${PG_USER} -d template1 -c 'CREATE EXTENSION pg_tde;'
diff --git a/contrib/pg_tde/docker/pg-tde-streaming-repl.sh b/contrib/pg_tde/docker/pg-tde-streaming-repl.sh
new file mode 100644
index 00000000000..8dce61a8c9b
--- /dev/null
+++ b/contrib/pg_tde/docker/pg-tde-streaming-repl.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -e
+
+PG_PRIMARY=${PG_PRIMARY:-"false"}
+PG_REPLICATION=${PG_REPLICATION:-"false"}
+REPL_PASS=${REPL_PASS:-"replpass"}
+PG_USER=${POSTGRES_USER:-"postgres"}
+
+
+if [ "$PG_REPLICATION" == "true" ] ; then
+    if [ "$PG_PRIMARY" == "true" ] ; then
+        psql -U ${PG_USER} -c "CREATE ROLE repl WITH REPLICATION PASSWORD '${REPL_PASS}' LOGIN;"
+        echo "host replication repl 0.0.0.0/0 trust" >> ${PGDATA}/pg_hba.conf
+    else
+        rm -rf ${PGDATA}/*
+        pg_basebackup -h pg-primary -p 5432 -U repl -D ${PGDATA} -Fp -Xs -R
+    fi
+fi
+
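Taken together, the Dockerfile, the compose file, and the two init scripts above give a disposable two-node cluster for trying out `pg_tde` with streaming replication. A minimal smoke test could look like this (a sketch based on the compose file above: the `testpass` password and the 5433/5434 host ports come from it, and a local `psql` client is assumed):

```sh
# Per the TODO above, a second run is currently needed so the
# secondary finishes its replication init.
docker-compose up -d --build
docker-compose up -d --build

# The init script should have created the extension on the primary...
PGPASSWORD=testpass psql -h 127.0.0.1 -p 5433 -U postgres \
  -c "SELECT extname FROM pg_extension WHERE extname = 'pg_tde';"

# ...and the primary should report one streaming standby.
PGPASSWORD=testpass psql -h 127.0.0.1 -p 5433 -U postgres \
  -c "SELECT client_addr, state FROM pg_stat_replication;"
```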
diff --git a/contrib/pg_tde/documentation/.gitignore b/contrib/pg_tde/documentation/.gitignore
new file mode 100644
index 00000000000..45ddf0ae397
--- /dev/null
+++ b/contrib/pg_tde/documentation/.gitignore
@@ -0,0 +1 @@
+site/
diff --git a/contrib/pg_tde/documentation/CONTRIBUTING.md b/contrib/pg_tde/documentation/CONTRIBUTING.md
new file mode 100644
index 00000000000..f49b40b722b
--- /dev/null
+++ b/contrib/pg_tde/documentation/CONTRIBUTING.md
@@ -0,0 +1,114 @@
+## Contribute to documentation
+
+`pg_tde` documentation is written in Markdown, so you can [edit it online via GitHub](#edit-documentation-online-via-github). If you wish to have more control over the doc process, jump to how to [edit documentation locally](#edit-documentation-locally).
+
+Before you start, learn what [git], [MkDocs] and [Docker] are and what [Markdown] is and how to write it. For your convenience, there's also a [cheat sheet](https://www.markdownguide.org/cheat-sheet/) to help you with the syntax.
+
+The doc files are in the `documentation` directory.
+
+### Edit documentation online via GitHub
+
+1. Click the **Edit this page** icon next to the page title. The source `.md` file of the page opens in the GitHub editor in your browser. If you haven't worked with the repository before, GitHub creates a [fork](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo) of it for you.
+2. Edit the page. You can check your changes on the **Preview** tab.
+3. Commit your changes.
+   * In the _Commit changes_ section, describe your changes.
+   * Select the **Create a new branch for this commit and start a pull request** option.
+   * Click **Propose changes**.
+4. GitHub creates a branch and a commit for your changes. It loads a new page on which you can open a pull request to Percona. The page shows the base branch - the one you offer your changes for, your commit message and a diff - a visual representation of your changes against the original page. This allows you to make a last-minute review. When you are ready, click the **Create pull request** button.
+5. Someone from our team reviews the pull request and, if everything is correct, merges it into the documentation. Then it gets published on the site.
+
+### Edit documentation locally
+
+This option is for users who prefer to work from their computer and/or want full control over the documentation process.
+
+The steps are the following:
+
+1. Fork this repository
+2. Clone the repository on your machine:
+
+```sh
+git clone git@github.com:<your-username>/pg_tde.git
+```
+
+3. Change the directory to ``pg_tde`` and add the remote upstream repository:
+
+```sh
+git remote add upstream git@github.com:percona/pg_tde.git
+```
+
+4. Pull the latest changes from upstream
+
+```sh
+git fetch upstream
+git merge upstream/main
+```
+
+5. Create a separate branch for your changes
+
+```sh
+git checkout -b <new-branch-name>
+```
+
+6. Make changes
+7. Commit your changes. The [commit message guidelines](https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53) will help you with writing great commit messages.
+
+8. Open a pull request to Percona
+
+#### Building the documentation
+
+To verify how your changes look, generate the static site with the documentation. This process is called *building*. You can do it in these ways:
+- [Use Docker](#use-docker)
+- [Install MkDocs and build locally](#install-mkdocs-and-build-locally)
+
+##### Use Docker
+
+1. [Get Docker](https://docs.docker.com/get-docker/)
+2. We use [our Docker image](https://hub.docker.com/repository/docker/perconalab/pmm-doc-md) to build documentation. Run the following command:
+
+```sh
+docker run --rm -v $(pwd):/documentation perconalab/pmm-doc-md mkdocs build
+```
+
+   If Docker can't find the image locally, it first downloads the image, and then runs it to build the documentation.
+
+3. Go to the ``site`` directory and open the ``index.html`` file to see the documentation.
+ +If you want to see the changes as you edit the docs, use this command instead: + +```sh +docker run --rm -v $(pwd):/documentation -p 8000:8000 perconalab/pmm-doc-md mkdocs serve --dev-addr=0.0.0.0:8000 +``` + +Wait until you see `INFO - Start detecting changes`, then enter `0.0.0.0:8000` in the browser's address bar. The documentation automatically reloads after you save the changes in source files. + +##### Install MkDocs and build locally + +1. Install [Python]. + +2. Install MkDocs and required extensions: + + ```sh + pip install -r requirements.txt + ``` + +3. Build the site: + + ```sh + mkdocs build + ``` + +4. Open `site/index.html` + +Or, to run the built-in web server: + +```sh +mkdocs serve +``` + + +View the site at + +[MkDocs]: https://www.mkdocs.org/ +[Markdown]: https://daringfireball.net/projects/markdown/ +[Git]: https://git-scm.com +[Python]: https://www.python.org/downloads/ +[Docker]: https://docs.docker.com/get-docker/ diff --git a/contrib/pg_tde/documentation/_resource/.icons/percona/logo.svg b/contrib/pg_tde/documentation/_resource/.icons/percona/logo.svg new file mode 100644 index 00000000000..6bb15edb5a4 --- /dev/null +++ b/contrib/pg_tde/documentation/_resource/.icons/percona/logo.svg @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/contrib/pg_tde/documentation/_resource/overrides/main.html b/contrib/pg_tde/documentation/_resource/overrides/main.html new file mode 100644 index 00000000000..4ef03fedaef --- /dev/null +++ b/contrib/pg_tde/documentation/_resource/overrides/main.html @@ -0,0 +1,37 @@ +{#- + This file was automatically generated - do not edit +-#} +{% extends "base.html" %} + +{% block announce %} + This is a Beta version of Percona Transparent Encryption extension and it is + not recommended for production environments yet. We encourage you to test it and give your feedback. + This will help us improve the product and make it production-ready faster. +{% endblock %} + +{% block scripts %} + +{{ super() }} +{% endblock %} + + {% block extrahead %} + {{ super() }} + {% set title = config.site_name %} + {% if page and page.meta and page.meta.title %} + {% set title = title ~ " - " ~ page.meta.title %} + {% elif page and page.title and not page.is_homepage %} + {% set title = title ~ " - " ~ page.title %} + {% endif %} + + + + + + {% endblock %} + + + + diff --git a/contrib/pg_tde/documentation/_resource/overrides/partials/copyright.html b/contrib/pg_tde/documentation/_resource/overrides/partials/copyright.html new file mode 100644 index 00000000000..dd0f101fad6 --- /dev/null +++ b/contrib/pg_tde/documentation/_resource/overrides/partials/copyright.html @@ -0,0 +1,14 @@ +{#- + This file was automatically generated - do not edit +-#} + \ No newline at end of file diff --git a/contrib/pg_tde/documentation/_resource/overrides/partials/header.html b/contrib/pg_tde/documentation/_resource/overrides/partials/header.html new file mode 100644 index 00000000000..2d0d6e7401a --- /dev/null +++ b/contrib/pg_tde/documentation/_resource/overrides/partials/header.html @@ -0,0 +1,135 @@ + + + +{% set class = "md-header" %} +{% if "navigation.tabs.sticky" in features %} + {% set class = class ~ " md-header--shadow md-header--lifted" %} +{% elif "navigation.tabs" not in features %} + {% set class = class ~ " md-header--shadow" %} +{% endif %} + + +
+ + + + + + + + {% if "navigation.tabs.sticky" in features %} + {% if "navigation.tabs" in features %} + {% include "partials/tabs.html" %} + {% endif %} + {% endif %} +
\ No newline at end of file diff --git a/contrib/pg_tde/documentation/_resource/templates/styles.scss b/contrib/pg_tde/documentation/_resource/templates/styles.scss new file mode 100644 index 00000000000..fc9bc9fce52 --- /dev/null +++ b/contrib/pg_tde/documentation/_resource/templates/styles.scss @@ -0,0 +1,118 @@ +/* Style for PDF created by MkDocs with mkdocs-with-pdf plugin (https://pypi.org/project/mkdocs-with-pdf/) */ +@media print { + html { + font-size: 100%; + font-family: "Poppins", sans-serif; + } + + body { + font-family: "Poppins", sans-serif; + color: #00162b; + } + + article { + font-family: "Poppins", sans-serif; + text-align: justify; + } + + pre, + code, + var, + samp, + kbd, + tt { + font-family: monospace; + font-size: 110%; + } + + pre code, + pre var, + pre samp, + pre kbd, + pre tt { + font-size: 110%; + } +} + +@page { + size: a4 portrait; + margin: 25mm 20mm 25mm 20mm; + counter-increment: page; + white-space: pre; + color: #00162b; + + @top-right { + content: string(chapter); + } + + @bottom-left { + content: string(subtitle); + } + + @bottom-center { + content: counter(page)' of 'counter(pages); + } + + @bottom-right { + } +} + +@page :first { + @top-right { + content: ''; + } + + @bottom-right { + content: ''; + } + + @bottom-left { + content: ''; + } +} + +article { + page-break-before: always; + min-height: 100%; +} + +article { + + h1, + h2, + h3 { + border-bottom: 0px solid #fff; + } + + h1>.pdf-order, + h2>.pdf-order, + h3>.pdf-order { + padding-left: 6px; + padding-right: 0.8rem; + } +} + +article h1, +h2, +h3 { + border-bottom: 0px solid #fff; + color: #3e4875; +} + +.admonition { + font-size: 100%; +} + +article div.tabbed-content--wrap { + page-break-inside: auto !important; +} +article div.tabbed-content--wrap * { + page-break-before: auto !important; + page-break-inside: auto !important; +} + +.md-typeset .admonition.note, +.md-typeset details.note { + color: #00162b; + border-color: #fff; +} \ No newline at end of file diff --git a/contrib/pg_tde/documentation/docs/_images/Percona_Logo_Color.png b/contrib/pg_tde/documentation/docs/_images/Percona_Logo_Color.png new file mode 100644 index 00000000000..673f8d87bb7 Binary files /dev/null and b/contrib/pg_tde/documentation/docs/_images/Percona_Logo_Color.png differ diff --git a/contrib/pg_tde/documentation/docs/_images/pg_tde.png b/contrib/pg_tde/documentation/docs/_images/pg_tde.png new file mode 100644 index 00000000000..d033cad1253 Binary files /dev/null and b/contrib/pg_tde/documentation/docs/_images/pg_tde.png differ diff --git a/contrib/pg_tde/documentation/docs/_images/postgresql-fav.svg b/contrib/pg_tde/documentation/docs/_images/postgresql-fav.svg new file mode 100644 index 00000000000..635ea246049 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/_images/postgresql-fav.svg @@ -0,0 +1,18 @@ + + + + + + + + + + + + + + + + + + diff --git a/contrib/pg_tde/documentation/docs/_images/postgresql-mark.svg b/contrib/pg_tde/documentation/docs/_images/postgresql-mark.svg new file mode 100644 index 00000000000..734c07380aa --- /dev/null +++ b/contrib/pg_tde/documentation/docs/_images/postgresql-mark.svg @@ -0,0 +1,13 @@ + + + + + + + + + + + + + diff --git a/contrib/pg_tde/documentation/docs/_images/tde-flow.png b/contrib/pg_tde/documentation/docs/_images/tde-flow.png new file mode 100644 index 00000000000..09c4da2eba5 Binary files /dev/null and b/contrib/pg_tde/documentation/docs/_images/tde-flow.png differ diff --git a/contrib/pg_tde/documentation/docs/apt.md 
b/contrib/pg_tde/documentation/docs/apt.md
new file mode 100644
index 00000000000..88d6e1be094
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/apt.md
@@ -0,0 +1,82 @@
+# Install `pg_tde` on Debian or Ubuntu
+
+The packages for the tech preview of `pg_tde` are available in the experimental repository for Percona Distribution for PostgreSQL 17.
+
+Check the [list of supported platforms](install.md#__tabbed_1_2).
+
+This tutorial shows how to install `pg_tde` with [Percona Distribution for PostgreSQL](https://docs.percona.com/postgresql/latest/index.html).
+
+## Preconditions
+
+1. Debian and other systems that use the `apt` package manager ship the upstream PostgreSQL server package (`postgresql-{{pgversion17}}`) by default. You need to uninstall this package before you install Percona Server for PostgreSQL and `pg_tde` to avoid conflicts.
+2. You need the `percona-release` repository management tool, which enables the desired Percona repository for you.
+
+### Install `percona-release`
+
+1. You need the following dependencies to install `percona-release`:
+
+    - `wget`
+    - `gnupg2`
+    - `curl`
+    - `lsb-release`
+
+    Install them with the following command:
+
+    ```bash
+    sudo apt-get install -y wget gnupg2 curl lsb-release
+    ```
+
+2. Fetch the `percona-release` package:
+
+    ```bash
+    sudo wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
+    ```
+
+3. Install `percona-release`:
+
+    ```bash
+    sudo dpkg -i percona-release_latest.generic_all.deb
+    ```
+
+4. Enable the Percona Distribution for PostgreSQL repository.
+
+    Percona provides [two repositories](repo-overview.md) for Percona Distribution for PostgreSQL. We recommend enabling the Major release repository to receive the latest updates promptly.
+
+    ```{.bash data-prompt="$"}
+    $ sudo percona-release enable ppg-{{pgversion17}}
+    ```
+
+5. Update the local package cache:
+
+    ```bash
+    sudo apt-get update
+    ```
+
+## Install `pg_tde`
+
+!!! important

+    The `pg_tde` {{release}} extension is part of the `percona-postgresql-17` package. If you installed a previous version of `pg_tde` from the `percona-postgresql-17-pg-tde` package, do the following:
+
+    * Drop the extension using the `DROP EXTENSION` command with the `CASCADE` parameter.
+
+        :material-alert: Warning: The `CASCADE` parameter deletes all tables that were created in the database with `pg_tde` enabled, as well as all dependencies on the encrypted tables (e.g. foreign keys in a non-encrypted table that reference an encrypted one).
+
+        ```sql
+        DROP EXTENSION pg_tde CASCADE;
+        ```
+
+    * Uninstall the `percona-postgresql-17-pg-tde` package.
+
+After all [preconditions](#preconditions) are met, run the following command to install `pg_tde`:
+
+```bash
+sudo apt-get install -y percona-postgresql-17
+```
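Before moving on, you can confirm that the server now sees the extension (a quick check, assuming the package set up and started a default cluster):

```sh
# installed_version stays empty until CREATE EXTENSION is run during setup.
sudo -u postgres psql -c "SELECT name, default_version, installed_version FROM pg_available_extensions WHERE name = 'pg_tde';"
```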
+
+## Next step
+
+[Setup](setup.md){.md-button}
diff --git a/contrib/pg_tde/documentation/docs/contribute.md b/contrib/pg_tde/documentation/docs/contribute.md
new file mode 100644
index 00000000000..1e686f251b2
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/contribute.md
@@ -0,0 +1,197 @@
+# Contributing guide
+
+Welcome to `pg_tde` - the Transparent Data Encryption extension for PostgreSQL!
+
+We're glad that you would like to become a community member and contribute to this project.
+
+You can contribute in one of the following ways:
+
+1. Reach us on our [Forums](https://forums.percona.com/c/postgresql/25).
+2. Submit a bug report or a feature request
+3. Submit a pull request (PR) with the code patch
+4. Contribute to documentation
+
+## Prerequisites
+
+Before submitting code contributions, we ask you to complete the following prerequisites.
+
+### 1. Sign the CLA
+
+Before you can contribute, we kindly ask you to sign our [Contributor License Agreement](https://cla-assistant.percona.com/<linktoCLA>) (CLA). You can do this in one click using your GitHub account.
+
+**Note**: You can sign it later, when submitting your first pull request. The CLA assistant validates the PR and asks you to sign the CLA to proceed.
+
+### 2. Code of Conduct
+
+Please make sure to read and agree to our [Code of Conduct](https://github.com/percona/community/blob/main/content/contribute/coc.md).
+
+## Submitting a pull request
+
+All bug reports, enhancements and feature requests are tracked in [GitHub issues](https://github.com/percona/pg_tde/issues). Though not mandatory, we encourage you to first check for a bug report among the issues and in the PR list: perhaps the bug has already been addressed.
+
+For feature requests and enhancements, we do ask you to create a GitHub issue, describe your idea and discuss the design with us. This way we align your ideas with our vision for the product development.
+
+If the bug hasn't been reported or addressed, or we've agreed on the enhancement implementation with you, do the following:
+
+1. [Fork](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo) this repository
+2. Clone this repository on your machine.
+3. Create a separate branch for your changes. If you work on a GitHub issue, please [create a branch from it](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#manually-linking-a-pull-request-or-branch-to-an-issue-using-the-issue-sidebar). This makes it easier to track your contribution.
+4. Make your changes. Please follow these guidelines to improve code readability:
+
+    - [PostgreSQL coding conventions](https://www.postgresql.org/docs/current/source.html)
+    - [C style and Coding rules](https://github.com/MaJerle/c-code-style)
+
+5. [Build `pg_tde`](https://github.com/percona/pg_tde/wiki/Make-builds-for-developers) and [test your changes locally](#run-local-tests).
+6. Commit the changes. The [commit message guidelines](https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53) will help you with writing great commit messages.
+7. Open a pull request to Percona.
+8. Our team will review your code and, if everything is correct, merge it. Otherwise, we will contact you for additional information or with a request to make changes.
+
+### Run local tests
+
+When you work, you should periodically run tests to check that your changes don't break existing code.
+
+To run the tests, use the following command:
+
+```sh
+cd pg_tde
+make USE_PGXS=1 installcheck
+```
+
+You can run tests on your local machine with whatever operating system you have. After you submit the pull request, we will check your patch on multiple operating systems.
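For iterative work, the same PGXS targets combine into a quick rebuild loop (a sketch: the exact paths, `sudo` usage, and restart command depend on how your local server was installed):

```sh
cd pg_tde
make USE_PGXS=1                    # rebuild the extension
sudo make USE_PGXS=1 install       # install it over the previous build
sudo systemctl restart postgresql  # reload the preloaded library (assumes a systemd service)
make USE_PGXS=1 installcheck       # rerun the regression suite
```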
+
+### Edit documentation online via GitHub
+
+1. Click the **Edit this page** icon next to the page title. The source `.md` file of the page opens in the GitHub editor in your browser. If you haven’t worked with the repository before, GitHub creates a [fork](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo) of it for you.
+2. Edit the page. You can check your changes on the **Preview** tab.
+3. Commit your changes.
+    * In the _Commit changes_ section, describe your changes.
+    * Select the **Create a new branch for this commit and start a pull request** option.
+    * Click **Propose changes**.
+4. GitHub creates a branch and a commit for your changes. It loads a new page on which you can open a pull request to Percona. The page shows the base branch - the one you offer your changes for, your commit message and a diff - a visual representation of your changes against the original page. This allows you to make a last-minute review. When you are ready, click the **Create pull request** button.
+5. Someone from our team reviews the pull request and, if everything is correct, merges it into the documentation. Then it gets published on the site.
+
+### Edit documentation locally
+
+This option is for users who prefer to work from their computer and/or want full control over the documentation process.
+
+The steps are the following:
+
+1. Fork this repository
+2. Clone the repository on your machine:
+
+```sh
+git clone git@github.com:<your-username>/pg_tde.git
+```
+
+3. Change the directory to ``pg_tde`` and add the remote upstream repository:
+
+```sh
+git remote add upstream git@github.com:percona/pg_tde.git
+```
+
+4. Pull the latest changes from upstream
+
+```sh
+git fetch upstream
+git merge upstream/main
+```
+
+5. Create a separate branch for your changes
+
+```sh
+git checkout -b <branch-name>
+```
+
+6. Make changes
+7. Commit your changes. The [commit message guidelines](https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53) will help you with writing great commit messages.
+
+8. Open a pull request to Percona
+
+#### Building the documentation
+
+To verify how your changes look, generate the static site with the documentation. This process is called *building*. You can do it in these ways:
+- [Use Docker](#use-docker)
+- [Install MkDocs and build locally](#install-mkdocs-and-build-locally)
+
+##### Use Docker
+
+1. [Get Docker](https://docs.docker.com/get-docker/)
+2. We use [our Docker image](https://hub.docker.com/repository/docker/perconalab/pmm-doc-md) to build documentation. Run the following command:
+
+```sh
+cd documentation
+docker run --rm -v $(pwd):/docs perconalab/pmm-doc-md mkdocs build
+```
+   If Docker can't find the image locally, it first downloads the image, and then runs it to build the documentation.
+
+3. Go to the ``site`` directory and open the ``index.html`` file to see the documentation.
+
+If you want to see the changes as you edit the docs, use this command instead:
+
+```sh
+cd documentation
+docker run --rm -v $(pwd):/docs -p 8000:8000 perconalab/pmm-doc-md mkdocs serve --dev-addr=0.0.0.0:8000
+```
+
+Wait until you see `INFO - Start detecting changes`, then enter `0.0.0.0:8000` in the browser's address bar. The documentation automatically reloads after you save the changes in source files.
+
+##### Install MkDocs and build locally
+
+1. Install [Python].
+
+2. Install MkDocs and required extensions:
+
+    ```sh
+    pip install -r requirements.txt
+    ```
+
+3. Build the site:
+
+    ```sh
+    cd documentation
+    mkdocs build
+    ```
+
+4. Open `site/index.html`
+
+Or, to run the built-in web server:
+
+```sh
+cd documentation
+mkdocs serve
+```
+
+View the site at <http://127.0.0.1:8000>
+
+#### Build PDF file
+
+To build a PDF version of the documentation, do the following:
+
+1. Disable displaying the last modification of the page:
+
+    ```sh
+    export ENABLED_GIT_REVISION_DATE=false
+    ```
+
+2. Build the PDF file:
+
+    ```sh
+    ENABLE_PDF_EXPORT=1 mkdocs build -f mkdocs-pdf.yml
+    ```
+
+    The PDF document is in the ``site/pdf`` folder.
+
+[MkDocs]: https://www.mkdocs.org/
+[Markdown]: https://daringfireball.net/projects/markdown/
+[Git]: https://git-scm.com
+[Python]: https://www.python.org/downloads/
+[Docker]: https://docs.docker.com/get-docker/
diff --git a/contrib/pg_tde/documentation/docs/css/design.css b/contrib/pg_tde/documentation/docs/css/design.css new file mode 100644 index 00000000000..761e52cc058 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/css/design.css @@ -0,0 +1,670 @@ +/* +* Prefixed by https://autoprefixer.github.io +* PostCSS: v8.4.14, +* Autoprefixer: v10.4.7 +* Browsers: last 4 version +*/ + +/* Custom fonts */ + +@font-face { + font-family: "Poppins"; + src: url("../fonts/Poppins-Regular.ttf"); + font-weight: normal; + font-style: normal; +} +@font-face { + font-family: "Poppins"; + src: url("../fonts/Poppins-Italic.ttf"); + font-weight: normal; + font-style: italic; +} +@font-face { + font-family: "Poppins"; + src: url("../fonts/Poppins-SemiBold.ttf"); + font-weight: bold; + font-style: normal; +} +@font-face { + font-family: "Poppins"; + src: url("../fonts/Poppins-SemiBoldItalic.ttf"); + font-weight: bold; + font-style: italic; +} + +/* Variables */ + +:root { + + /* Typography */ + --fHeading: "Poppins", "Roboto", Arial, Helvetica, sans-serif; + + /* Colors */ + --white: #fff; + + /* Percona Night */ + --night500: #0E1A53; + --night400: #3E4875; + --night300: #5E668C; + + /* Percona Aqua */ + --aqua900: #14584B; + --aqua800: #1A7362; + --aqua700: #22947E; + --aqua600: #2CBEA2; + + /* Percona Sky */ + --sky900: #08386B; + --sky800: #0B4A8C; + --sky700: #0E5FB5; + --sky600: #127AE8; + --sky500: #1486FF; + --sky400: #439EFF; + --sky300: #62AEFF; + --sky200: #93C7FF; + + /* Percona Stone */ + --stone900: #2C323E; + --stone800: #3A4151; + --stone700: #4B5468; + --stone100: #D1D5DE; + --stone50: #F0F1F4; + + /* mkdocs root override */ + --md-primary-fg-color--dark: var(--night400); +} +:root, +[data-md-color-scheme="percona-light"] { + + /* Primitives */ + --md-primary-fg-color: var(--sky700); + + /* Type */ + --md-typeset-color: var(--stone900); + --md-typeset-a-color: var(--sky700); + + /* Defaults */ + --md-default-bg-color: var(--white); + --md-default-fg-color--light: rgba(44,50,62,0.72); + --md-default-fg-color--lighter: rgba(44,50,62,0.40); + --md-default-fg-color--lightest: rgba(44,50,62,0.25); + + /* Accent */ + --md-accent-fg-color: var(--sky500); + + /* Footer */ + --md-footer-fg-color: var(--stone900); + --md-footer-fg-color--light: rgba(44,50,62,0.72); + --md-footer-fg-color--lighter: rgba(44,50,62,0.40); + --md-footer-bg-color: var(--stone50); + --md-footer-bg-color--dark: var(--stone50); + + /* Code */ + --md-code-bg-color: var(--stone800); + --md-code-bg-color: var(--stone50); +} +[data-md-color-scheme="percona-dark"] { + + /* Primitives */ + --md-hue: 230; + --md-primary-fg-color: var(--sky200); + + /* Type */ + --md-typeset-color: #FBFBFB; + --md-typeset-a-color: var(--sky200); + + /* Defaults */ + --md-default-bg-color: var(--stone900); + --md-default-fg-color--light: rgba(251,251,251,0.72); + 
--md-default-fg-color--lighter: rgba(251,251,251,0.4); + --md-default-fg-color--lightest: rgba(209,213,222,0.25); + + /* Accent */ + --md-accent-fg-color: var(--sky400); + --md-accent-bg-color: var(--stone900); + + /* Footer */ + --md-footer-fg-color: #FBFBFB; + --md-footer-fg-color--light: rgba(251,251,251,0.72); + --md-footer-fg-color--lighter: rgba(251,251,251,0.4); + --md-footer-bg-color: var(--stone800); + --md-footer-bg-color--dark: var(--stone800); + + /* Code */ + --md-code-bg-color: var(--stone50); + --md-code-bg-color: var(--stone800); +} + +/* Typography */ + +.md-typeset { + font-size: 0.75rem; +} +.md-typeset h1, +.md-typeset h2, +.md-typeset h3, +.md-typeset h4, +.md-typeset h5, +.md-typeset h6 { + font-family: var(--fHeading); + font-weight: bold; +} +.md-typeset h1 { + color: inherit; +} +.md-typeset h1 { + margin: 0 0 0.75em; +} +.md-header { + font-family: var(--fHeading); + font-weight: bold; +} +.md-header__button.md-logo { + margin: 0.2rem 0.1rem 0.2rem 0.4rem; + padding: 0.2rem; +} +.md-header__button.md-logo img, +.md-header__button.md-logo svg { + height: 1.6rem; +} +.md-nav__link--active { + font-weight: bold; +} +.md-typeset small { + opacity: 0.5; +} +.md-content a:not(.md-button) { + text-decoration: underline; +} +.md-content .tabbed-labels a { + text-decoration: none; +} + +/* Header nav */ + +.md-header, +.md-tabs { + background-color: var(--night400); +} +[dir=ltr] .md-header__title { + margin-left: 0; +} +[dir=rtl] .md-header__title { + margin-right: 0; +} +.md-tabs .md-tabs__link { + font-family: var(--fHeading); + font-weight: bold; +} +.md-nav__source { + margin-top: -0.25rem; +} +.md-header__inner > :last-child { + padding-right: 0.6rem; +} +.md-tabs__item { + height: 2rem; +} +.md-tabs__link { + margin-top: 0.55rem; +} +.md-header__topic { + transition: opacity .25s; +} +.md-header__topic:hover { + opacity: 0.7; +} + +/* Footer */ + +.md-footer a { + text-decoration: underline; +} +.md-copyright, +.md-copyright__highlight { + color: var(--md-footer-fg-color--light); +} + +/* Base components */ + +.md-typeset .md-button { + font-family: var(--fHeading); + font-size: 0.6818rem; + font-weight: bold; + padding: 0.4167em 1.6em; + border-radius: 10rem; + transition: all 0.2s ease-out; +} +.md-typeset .md-button--primary { + color: var(--md-accent-bg-color); + box-shadow: 0px 1px 5px 0px rgba(0, 0, 0, 0.12), 0px 2px 2px 0px rgba(0, 0, 0, 0.14), 0px 3px 1px -2px rgba(0, 0, 0, 0.20); +} +.md-typeset .md-button--primary:focus, +.md-typeset .md-button--primary:hover { + box-shadow: 0px 1px 10px 0px rgba(0, 0, 0, 0.12), 0px 4px 5px 0px rgba(0, 0, 0, 0.14), 0px 2px 4px -1px rgba(0, 0, 0, 0.20); +} +.md-typeset .md-button:not(.md-button--primary):focus, +.md-typeset .md-button:not(.md-button--primary):hover { + background: none; + color: var(--md-accent-fg-color); +} +.md-typeset code { + font-size: 0.9091em; + color: var(--md-typeset-color); + vertical-align: baseline; + padding: 0 0.2em 0.1em; + border-radius: 0.15em; +} +.md-button code, +[data-md-color-scheme="percona-dark"] .md-button:not(.md-button--primary) code { + background-color: rgba(255, 255, 255, 0.1); + box-shadow: 0 0 0 2px rgba(255, 255, 255, 0.1) inset; +} +.md-button:not(.md-button--primary) code { + background-color: rgba(0, 0, 0, 0.05); + box-shadow: 0 0 0 2px rgba(0, 0, 0, 0.05) inset; +} +.md-content .md-button { + margin: 0 0.25em 0.5em 0; +} +.md-typeset .tabbed-labels--linked > label > a { + font-size: 0.75rem; + padding: 0.75em 1em; +} +.js .md-typeset .tabbed-labels:before { + height: 
4px; + background-color: var(--md-typeset-a-color); +} +.md-typeset [class*="moji"] { + vertical-align: -0.25em; +} +.md-typeset .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child, .md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10), .md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11), .md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12), .md-typeset .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13), .md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14), .md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15), .md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16), .md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17), .md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18), .md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19), .md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2), .md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20), .md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3), .md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4), .md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5), .md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6), .md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7), .md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8), .md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9) { + color: var(--md-typeset-a-color); +} +.md-typeset .md-button [class*="moji"], +.md-typeset .tabbed-set [class*="moji"] { + height: 1.3333em; + vertical-align: -0.3333em; +} +.md-typeset .md-button [class*="moji"] svg, +.md-typeset .tabbed-set [class*="moji"] svg { + width: 1.3333em; +} +.md-typeset a [class*="moji"] { + vertical-align: -0.2222em; +} +.md-clipboard { + color: var(--md-default-fg-color--lighter); +} +.md-typeset hr { + margin: 2em 0; + border-color: var(--md-default-fg-color--lightest) +} +.md-typeset .tabbed-labels { + box-shadow: 0 -0.05rem var(--md-default-fg-color--lightest) inset; +} +.md-typeset .tabbed-labels > label:hover { + color: var(--md-accent-fg-color); +} +.md-typeset .tabbed-button { + width: 1.25rem; + height: 1.25rem; + margin-top: 0.0625rem; +} +.md-typeset .tabbed-control { + width: 2.25rem; + height: 2.25rem; +} +.tabbed-block > *:last-child { + margin-bottom: 0; +} + +/* Content re-styling */ + +.md-main__inner { + margin-top: 0.75rem; + margin-bottom: 0.75rem; +} +.md-typeset [type=checkbox]:checked + .task-list-indicator:before { + background-color: var(--aqua600); +} +.md-feedback { + margin: 2em 0 !important; +} +:not([data-banner]):not(.splash) + .md-feedback { + padding-top: 2em; + border-top: 0.05rem solid var(--md-default-fg-color--lightest); +} +.md-typeset .admonition, +.md-typeset details { + --md-admonition-bg-color: var(--md-default-bg-color); + --md-admonition-fg-color: var(--md-typeset-color); + border-width: 0.1125rem; + box-shadow: none; +} +.md-tabs__link { + font-size: 0.67rem; +} +.md-tabs__item--active .md-tabs__link, +.md-tabs__item--active .md-tabs__link a { + font-weight: bold; + border-bottom: 0.15em solid currentColor; +} +.md-sidebar__scrollwrap { + scrollbar-gutter: unset; +} + +/* Custom Banner */ + 
+[data-banner] { + padding: 1.5em; + margin: 1.5em 0; + border: 0.05rem solid var(--md-default-fg-color--lightest); + border-radius: 0.2rem; + /* box-shadow: 0px 1px 5px 0px rgba(0, 0, 0, 0.12), 0px 2px 2px 0px rgba(0, 0, 0, 0.14), 0px 3px 1px -2px rgba(0, 0, 0, 0.20); */ + transition: all 0.2s ease-out; +} +[data-banner]:hover { + box-shadow: 0px 1px 10px 0px rgba(0, 0, 0, 0.12), 0px 4px 5px 0px rgba(0, 0, 0, 0.14), 0px 2px 4px -1px rgba(0, 0, 0, 0.20); +} +[data-banner] .title { + font-weight: bold; + margin: 0; +} +[data-banner] .title + * { + margin-top: 0.25em; +} +[data-banner] > :last-child { + margin-bottom: 0; +} +[data-banner] a:link { + font-family: var(--fHeading); + font-size: 0.6818rem; + font-weight: bold; + text-decoration: none; +} +[data-banner] .actions > p { + margin: 0; +} +[data-banner] .actions a { + display: inline-block; + margin: 0 1em 0 0; +} +[data-banner] > :only-child, +[data-banner] .actions a:first-of-type { + margin-top: 0; +} +[data-banner] a [class*="moji"] { + height: 1.3333em; + vertical-align: -0.3333em; +} +[data-banner] a [class*="moji"] svg { + width: 1.3333em; +} +[data-banner="logo"] > p:first-child { + margin-top: 0; +} +[data-banner="logo"] > p:first-child [class*="moji"] { + font-size: 4em; +} +[data-grid] { + display: flex; + flex-wrap: wrap; + margin-right: -1rem; +} +[data-grid] [data-banner] { + flex: 1 1 320px; + display: flex; + flex-direction: column; + margin: 0 1rem 1rem 0; +} +[data-grid] .title { + font-size: 0.8rem; + font-weight: bold; +} +[data-grid] [data-banner] > p:last-child { + margin-top: 0; +} +[data-grid] [data-banner] > p:nth-last-child(2) { + flex-grow: 2; +} +[data-grid] + [data-banner] { + margin-top: 0; +} +[data-grid] .md-button { + margin: 0.5em 0.25em 0 0; +} + +/* Custom lists */ + +[dir] .power-bullet + ul, +[dir] .power-bullet + ul ul, +[dir] .power-bullet + ul ol, +[dir] .power-number + ol, +[dir] .power-number + ol ol, +[dir] .power-number + ol ul { + list-style: none; + --power-list-indent: 2em; + --power-list-gap: 0.5em; + --power-list-counter-size: calc(var(--power-list-indent) - var(--power-list-gap)); + margin: 1.25em 0 2em; +} +[dir] .power-bullet + ul ul:last-child, +[dir] .power-bullet + ul ol:last-child, +[dir] .power-number + ol ol:last-child, +[dir] .power-number + ol ul:last-child { + margin-bottom: 0; +} +.power-bullet + ul > li:not(:last-child), +.power-bullet + ul ul > li:not(:last-child), +.power-bullet + ul ol > li:not(:last-child), +.power-number + ol > li:not(:last-child), +.power-number + ol ol > li:not(:last-child), +.power-number + ol ul > li:not(:last-child) { + margin-bottom: 1.25em; +} +[dir=ltr] .power-bullet + ul > li, +[dir=ltr] .power-bullet + ul ul > li, +[dir=ltr] .power-bullet + ul ol > li, +[dir=ltr] .power-number + ol > li, +[dir=ltr] .power-number + ol ol > li, +[dir=ltr] .power-number + ol ul > li { + margin-left: var(--power-list-indent); +} +[dir=rtl] .power-bullet + ul > li, +[dir=rtl] .power-bullet + ul ul > li, +[dir=rtl] .power-bullet + ul ol > li, +[dir=rtl] .power-number + ol > li, +[dir=rtl] .power-number + ol ol > li, +[dir=rtl] .power-number + ol ul > li { + margin-right: var(--power-list-indent); +} +.power-bullet + ul > li::before, +.power-bullet + ul ul > li::before, +.power-number + ol ul > li::before { + content: "→"; +} +.power-number + ol, +.power-number + ol ol, +.power-bullet + ul ol { + counter-reset: power-list; +} +.power-number + ol > li, +.power-number + ol ol > li, +.power-bullet + ul ol > li { + counter-increment: power-list; + position: relative; 
+} +.power-number + ol > li::before, +.power-number + ol ol > li::before, +.power-bullet + ul ol > li::before { + content: counter(power-list); + font-family: var(--fHeading); +} +.power-bullet + ul > li::before, +.power-bullet + ul ul > li::before, +.power-bullet + ul ol > li::before, +.power-number + ol > li::before, +.power-number + ol ol > li::before, +.power-number + ol ul > li::before { + display: inline-block; + position: absolute; + font-weight: bold; + text-align: center; + line-height: var(--power-list-counter-size); + width: var(--power-list-counter-size); + height: var(--power-list-counter-size); + margin-right: var(--power-list-gap); + border-radius: 50%; + color: var(--md-default-bg-color); + background-color: var(--md-typeset-color); +} +[dir=ltr] .power-bullet + ul > li::before, +[dir=ltr] .power-bullet + ul ul > li::before, +[dir=ltr] .power-bullet + ul ol > li::before, +[dir=ltr] .power-number + ol > li::before, +[dir=ltr] .power-number + ol ol > li::before, +[dir=ltr] .power-number + ol ul > li::before { + margin-left: calc(var(--power-list-indent) - (var(--power-list-indent) * 2)); +} +[dir=rtl] .power-bullet + ul > li::before, +[dir=rtl] .power-bullet + ul ul > li::before, +[dir=rtl] .power-bullet + ul ol > li::before, +[dir=rtl] .power-number + ol > li::before, +[dir=rtl] .power-number + ol ol > li::before, +[dir=rtl] .power-number + ol ul > li::before { + margin-right: calc(var(--power-list-indent) - (var(--power-list-indent) * 2)); +} +.power-bullet + ul ul > li::before, +.power-bullet + ul ol > li::before, +.power-number + ol ul > li::before, +.power-number + ol ol > li::before { + opacity: 0.3; +} + +/* Custom highlights */ + +i[info], +i[warning] { + font-style: normal; + font-weight: bold; + display: inline-block; + padding: 0 0.25em; + border-radius: 0.2em; +} +i[info] { + background-color: #00b8d41a; + border-width: 0.05rem; + border-style: solid; + border-color: #00b8d41a; +} +i[info] [class*="moji"] { + color: #00b8d4; +} +i[warning] { + background-color: #ff91001a; + border-width: 0.05rem; + border-style: solid; + border-color: #ff91001a; +} +i[warning] [class*="moji"] { + color: #ff9100; +} + +/* Modals */ + +.md-consent__overlay { + -webkit-backdrop-filter: blur(.2rem); + backdrop-filter: blur(.2rem); + background-color: rgba(44,50,62,0.72); +} +.md-consent__inner { + background-color: var(--md-footer-bg-color--dark); +} + +/* Code injections */ + +.injections { + position: absolute; + width: 0; + height: 0; + padding: 0; + margin: 0; + visibility: hidden; + pointer-events: none; +} + +/* Super Nav */ + +.superNav { + font-family: var(--fHeading); + font-size: 0.5625rem; + line-height: 1; + font-weight: bold; + text-transform: uppercase; + letter-spacing: 0.0625em; + color: var(--white); + background-color: var(--stone800); +} +.superNav a { + display: inline-block; + padding: 0.25rem 0.625rem !important; + transition: all 0.2s ease-out; +} +.superNav a:hover { + opacity: 0.7; +} +.superNav svg { + width: 1.375em; + height: 1.375em; + margin-right: 0.125em; + fill: currentColor; + vertical-align: -0.3125em; +} + +/* Version Select */ + +.version-select::after { + content: "\25BE"; + display: inline-block; + margin-left: -1em; + transform: translate(-0.625em, -0.0625em); + pointer-events: none; +} +#versionSelect { + -webkit-appearance: none; + -moz-appearance: none; + appearance: none; + align-self: center; + font-size: 0.9rem; + line-height: 1; + font-weight: 700; + padding: 0.5em 1.375em 0.5em 0.5em; + margin: 0 0.25em; + background-color: 
rgba(0,0,0,0.2); + color: inherit; + border: none; + border-radius: 0.1rem; +} +#versionSelect::-ms-expand { + display: none; +} + +/* Media queries */ + +@media screen and (max-width: 76.1875em) { + .md-nav--primary .md-nav__title[for=__drawer], + .md-nav--primary .md-nav__title { + line-height: 1.5; + height: unset; + padding: 3.5rem .8rem 0.5rem; + color: var(--md-primary-bg-color); + background-color: var(--md-primary-fg-color--dark); + } +} +@media screen and (max-width: 60em) { + [data-banner] { + padding: 1em; + } +} +/**/ diff --git a/contrib/pg_tde/documentation/docs/css/extra.css b/contrib/pg_tde/documentation/docs/css/extra.css new file mode 100644 index 00000000000..30f5a627800 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/css/extra.css @@ -0,0 +1,7 @@ +@media print { + /* Adjusts positioning of admonition icon */ + .md-typeset :is(.admonition-title,summary):before { + top: 0.6rem; + left: 0.6rem; + } + } \ No newline at end of file diff --git a/contrib/pg_tde/documentation/docs/css/landing.css b/contrib/pg_tde/documentation/docs/css/landing.css new file mode 100644 index 00000000000..df69386e85e --- /dev/null +++ b/contrib/pg_tde/documentation/docs/css/landing.css @@ -0,0 +1,301 @@ + +/* Type */ + +.landing h1, +.landing h2 { + font-size: calc(1.5em + 1vw); + line-height: 1.125; + text-transform: uppercase; + letter-spacing: 0; + margin: 0.5em 0; +} + +/* Layout adjustments */ + +.md-header, .md-tabs { + background-color: var(--stone800); +} +.landing > :not(:last-child) { + margin-bottom: 2em; +} +/* .md-content__inner { + display: flex; + flex-direction: column; +} +.md-content__inner > :not(.landing) { + width: 100%; + max-width: calc(34.3rem); + max-width: calc(34.3rem + 1.2rem + 12.1rem); + align-self: center; +} */ +[data-grid] [data-banner] { + flex: 0 1 calc(50% - 1rem); +} + +/* Splash Box */ + +.splash { + display: flex; + position: relative; + justify-content: space-between; + line-height: 1.25; + padding: calc(0.5em + 3%); + border: 1px solid var(--md-default-fg-color--lightest); + border-radius: calc(0.5rem + 0.75vw); + background: linear-gradient(110deg, var(--md-default-bg-color) 33%, var(--md-footer-bg-color--dark) 95%); + overflow: hidden; + background-repeat: no-repeat; +} +.splash.dark { + color: var(--white); + --md-primary-fg-color: var(--stone50); + --md-accent-fg-color: var(--white); +} +.splash.highlight { + background: + linear-gradient( + 110deg, + rgba(44,50,62,0.9) 10%, + rgba(44,50,62,0.1) 90% + ), + url(../assets/highlight.jpg) center / cover var(--stone800); + border: none; + background-repeat: no-repeat; +} +.splash.mysql { + background: + linear-gradient( + 110deg, + rgba(0,0,0,0.2) 33%, + rgba(0,0,0,0.1) 95% + ), + linear-gradient( + 110deg, + rgb(14,95,181) 33%, + rgb(48,209,178) 95% + ); +} +.splash.postgresql { + background: + linear-gradient( + 110deg, + rgba(0,0,0,0.4) 33%, + rgba(0,0,0,0.1) 95% + ), + linear-gradient( + 110deg, + rgb(78,91,150) 33%, + rgb(67,158,255) 95% + ); +} +.splash.mongodb { + background: + linear-gradient( + 110deg, + rgba(0,0,0,0.4) 33%, + rgba(0,0,0,0.1) 95% + ), + linear-gradient( + 110deg, + rgb(24,109,73) 33%, + rgb(48,209,190) 95% + ); +} +.splash.operators { + background: + linear-gradient( + 110deg, + transparent 33%, + rgba(0,0,0,0.1) 95% + ), + linear-gradient( + 110deg, + rgb(11,39,140) 33%, + rgb(20,142,255) 95% + ); +} +.splash.header { + flex-direction: column; + align-items: flex-start; + border: none; + background-repeat: no-repeat; +} + +/* Splash Contents */ + +.splash > * { + flex: 0 1 
45%; +} +.splash h1, +.splash h2 { + margin-top: 0; + margin-bottom: -0.125em; +} +.splash > :last-child { + margin-bottom: 0; +} +.splash-intro { + margin: 0.5rem 0.75rem; +} +.splash-links > :not(:last-child) { + margin-bottom: 1em; +} +.splash.dark .md-button { + border-color: rgba(255, 255, 255, 0.4) +} +.splash.dark .md-button:hover { + border-color: var(--white) +} +.splash.dark .md-button--primary, +.splash.dark .md-button--primary:hover { + color: var(--stone700); +} +.splash.dark .md-button--primary:hover { + color: var(--stone900); +} +.splash.header > * { + max-width: 30rem; + z-index: 1; +} +.splash.header > :first-child { + margin: 0; +} +.splash.header img { + display: block; + position: absolute; + top: 50%; + right: 1rem; + width: 12rem; + height: 12rem; + margin: 0; + transform: translateY(-50%); + z-index: 0; +} + +/* Splash Card */ + +a.splash-card { + display: flex; + flex-direction: column; + justify-content: center; + min-height: 6.75em; + padding: 0.75rem 0.375rem 0.5rem 4.75rem; + border: 1px solid var(--md-default-fg-color--lightest); + border-radius: calc(0.25rem + 0.375vw); + cursor: pointer; + text-decoration: none !important; + color: var(--md-typeset-color); + position: relative; + background-color: var(--md-default-bg-color); + transition: all 0.2s ease-out; +} +.splash.highlight a.splash-card { + color: var(--white); + background-color: rgba(255, 255, 255, 0.2); + backdrop-filter: blur(0.75rem); + border-color: rgba(255,255,255,0.1); +} +a.splash-card:hover { + box-shadow: 0px 1px 10px 0px rgba(0, 0, 0, 0.12), 0px 4px 5px 0px rgba(0, 0, 0, 0.14), 0px 2px 4px -1px rgba(0, 0, 0, 0.20); + color: var(--md-typeset-color); +} +.splash.highlight a.splash-card:hover { + background-color: rgba(255, 255, 255, 0.4); + border-color: rgba(255,255,255,0.2); + backdrop-filter: blur(1.5rem); +} +a.splash-card img { + display: block; + position: absolute; + top: 0.75rem; + left: 0.75rem; + width: 3.5rem; + height: 3.5rem; + border-radius: 0.25rem; + float: left; +} +.splash-card > * { + margin: 0 0.25rem 0.25rem 0 !important; +} +.splash-card > h3 { + font-size: 0.875rem; + margin-bottom: 0.0625rem !important; +} + +/* News elements */ + +[data-news] { + display: flex; + flex-wrap: wrap; + margin-right: -1rem; +} +[data-news] [data-article] { + flex: 0 1 calc(50% - 1rem); + display: flex; + flex-direction: column; + margin: 0 1rem 1rem 0; + padding: 0 1rem 1rem 0; + border-bottom: 1px solid var(--md-default-fg-color--lightest); +} +[data-article] > * { + margin: 0.25rem 0; +} +[data-article] > :first-child { + font-family: var(--fHeading); + font-size: 0.8rem; + /* flex-grow: 1; */ +} +[data-article] > :nth-child(2):not(:last-child) { + font-size: 0.875em; + line-height: 1.4; + display: -webkit-box; + -webkit-line-clamp: 3; + -webkit-box-orient: vertical; + overflow: hidden; + text-overflow: ellipsis; + max-height: 2.8em; + position: relative; +} +[data-article] > :nth-child(2):not(:last-child)::after { + content: ""; + position: absolute; + display: block; + right: 0; + bottom: 0; + width: 4rem; + height: 1.4em; + background: linear-gradient(to right, transparent 0%, var(--md-default-bg-color) 50%); +} +[data-article] > :last-child > * { + margin-right: 1em; +} +[data-article] a:link { + font-family: var(--fHeading); + font-size: 0.6818rem; + font-weight: bold; + text-decoration: none; +} + +/* Conditionals */ + +@media screen and (max-width: 76.1875em) { + .md-nav--primary .md-nav__title[for=__drawer], + .md-nav--primary .md-nav__title { + background-color: 
var(--stone800); + } +} +@media screen and (max-width: 55em) { + .splash.header img { + right: -2rem; + opacity: 0.2; + } +} +@media screen and (max-width: 45em) { + .splash { + flex-direction: column; + } + [data-grid] [data-banner], + [data-news] [data-article] { + flex: 1 1 100%; + } +} \ No newline at end of file diff --git a/contrib/pg_tde/documentation/docs/css/osano.css b/contrib/pg_tde/documentation/docs/css/osano.css new file mode 100644 index 00000000000..b89fa6ac210 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/css/osano.css @@ -0,0 +1,206 @@ +/* General styling */ + +.osano-cm-window { + font-family: "Roboto", Arial, Helvetica, sans-serif; + font-size: 20px; +} +.osano-cm-dialog--type_bar { + justify-content: center; + color: #000; + background: #fff; + box-shadow: 0 0 0 100vmax rgba(0,0,0,0.66) +} + +.osano-cm-dialog { + font-size: 0.75em; + padding: 2em 1em; + color: var(--md-typeset-color); + background: var(--md-footer-bg-color--dark); +} +.osano-cm-header, +.osano-cm-info-dialog-header { + background: var(--md-default-bg-color); +} +.osano-cm-link, +.osano-cm-disclosure__toggle, +.osano-cm-expansion-panel__toggle { + color: var(--md-typeset-a-color); +} +.osano-cm-link:hover, +.osano-cm-link:active, +.osano-cm-disclosure__toggle:hover, +.osano-cm-disclosure__toggle:active, +.osano-cm-disclosure__toggle:focus, +.osano-cm-expansion-panel__toggle:hover, +.osano-cm-expansion-panel__toggle:active, +.osano-cm-expansion-panel__toggle:focus { + color: var(--md-accent-fg-color); +} +.osano-cm-drawer-links { + display: inline-block; +} +.osano-cm-link.osano-cm-storage-policy { + margin-right: 0.5em; +} +.osano-cm-description { + font-weight: 400; +} +.osano-cm-info { + color: var(--md-typeset-color); + background: var(--md-default-bg-color); + box-shadow: unset; +} +.osano-cm-dialog--hidden, +.osano-cm-info-dialog--hidden { + transition-delay: 0ms, 0ms; +} +.osano-cm-disclosure { + padding-top: 0; +} +.osano-cm-disclosure--collapse { + border-color: var(--md-default-fg-color--lightest); +} + +/* Closing button */ + +.osano-cm-dialog__close, +.osano-cm-dialog__close:hover, +.osano-cm-dialog__close:focus, +.osano-cm-dialog__close:focus:hover { + color: var(--md-typeset-color); + stroke: var(--md-typeset-color); + border-color: transparent; + outline: initial; +} +.osano-cm-dialog__close:focus { + background-color: var(--md-default-fg-color--lightest); +} +.osano-cm-close { + padding: 0.25em; + margin: 0.5em; + stroke-width: 2px; + border-width: 2px; + opacity: 0.4; +} +.osano-cm-close:focus, +.osano-cm-close:hover { + stroke-width: 2px; + opacity: 1; +} +.osano-cm-info-dialog-header__close:focus { + background-color: var(--md-typeset-color); +} + +/* Switch buttons */ + +.osano-cm-toggle__switch { + background-color: var(--md-default-fg-color--lightest); + transition: all 0.1s ease-out; +} +.osano-cm-toggle__input:hover + .osano-cm-toggle__switch { + background-color: var(--md-default-fg-color--light); + border-color: transparent; +} +.osano-cm-toggle__input:focus + .osano-cm-toggle__switch { + background-color: var(--md-default-fg-color--lightest); + border-color: transparent; +} +.osano-cm-toggle__input:focus + .osano-cm-toggle__switch::before { + border-color: var(--md-accent-fg-color); +} +.osano-cm-toggle__input:focus:hover + .osano-cm-toggle__switch { + background-color: var(--md-default-fg-color--light); + border-color: transparent; +} +.osano-cm-toggle__input:checked + .osano-cm-toggle__switch, +.osano-cm-toggle__input:disabled:checked + .osano-cm-toggle__switch { + 
background-color: var(--md-primary-fg-color); + border-color: var(--md-primary-fg-color); +} +.osano-cm-toggle__input:checked:hover + .osano-cm-toggle__switch, +.osano-cm-toggle__input:disabled:checked:hover + .osano-cm-toggle__switch { + background-color: var(--md-accent-fg-color); + border-color: var(--md-accent-fg-color); +} +.osano-cm-toggle__input:checked:focus + .osano-cm-toggle__switch, +.osano-cm-toggle__input:disabled:checked:focus + .osano-cm-toggle__switch { + background-color: var(--md-primary-fg-color); + border-color: var(--md-primary-fg-color); +} +.osano-cm-toggle__input:checked:focus + .osano-cm-toggle__switch::before { + border-color: var(--md-accent-fg-color); +} +.osano-cm-toggle__input:checked:focus:hover + .osano-cm-toggle__switch { + background-color: var(--md-accent-fg-color); + border-color: var(--md-accent-fg-color); +} +.osano-cm-toggle__input:disabled:checked + .osano-cm-toggle__switch, +.osano-cm-toggle__input:disabled:checked:focus + .osano-cm-toggle__switch, +.osano-cm-toggle__input:disabled:checked:hover + .osano-cm-toggle__switch { + opacity: 0.3; + cursor: not-allowed; +} +.osano-cm-toggle__input + .osano-cm-toggle__switch::after { + background-color: var(--md-default-bg-color) !important; +} +.osano-cm-toggle__input:checked + .osano-cm-toggle__switch::before { + border-color: transparent; +} +.osano-cm-list { + gap: 0.75em; +} + +/* CTA Buttons */ + +.osano-cm-dialog__buttons { + display: flex; + justify-content: flex-start; + flex-wrap: wrap; + gap: 0.5em 0.75em; +} +.osano-cm-button { + font-family: var(--fHeading); + flex: 1 1 20em; + color: var(--md-primary-fg-color); + background-color: transparent; + border-width: 2px; + border-color: var(--md-primary-fg-color); + border-radius: 20em; +} +.osano-cm-button:hover { + color: var(--md-accent-fg-color); + background-color: transparent; + border-color: var(--md-accent-fg-color); +} + +/* Widget */ + +.osano-cm-widget { + display: none; + opacity: 0.5; + border-radius: 10em; + bottom: 3em; +} +.osano-cm-widget:focus { + outline-offset: 0.125em; + outline-color: var(--md-default-fg-color--lighter); + outline-width: 0.1875em; +} +.osano-cm-widget__outline { + fill: transparent; + stroke: var(--md-typeset-color); +} +.osano-cm-widget__dot { + fill: var(--md-typeset-color); +} + +/* Media conditions */ + +@media screen and (min-width: 768px) { + .osano-cm-dialog--type_bar .osano-cm-dialog__content { + max-width: 50em; + } + .osano-cm-dialog--type_bar .osano-cm-dialog__buttons { + max-width: 20em; + } +} \ No newline at end of file diff --git a/contrib/pg_tde/documentation/docs/css/postgresql.css b/contrib/pg_tde/documentation/docs/css/postgresql.css new file mode 100644 index 00000000000..e5d70d97d7e --- /dev/null +++ b/contrib/pg_tde/documentation/docs/css/postgresql.css @@ -0,0 +1,61 @@ +/* Overrides */ + +:root { + --md-primary-fg-color--dark: var(--night400); +} +.md-header, +.md-tabs { + background: + -o-linear-gradient( + 340deg, + rgba(0,0,0,0.3) 33%, + rgba(0,0,0,0.2) 95% + ), + -o-linear-gradient( + 340deg, + rgb(78,91,150) 33%, + rgb(67,158,255) 95% + ); + background: + linear-gradient( + 110deg, + rgba(0,0,0,0.3) 33%, + rgba(0,0,0,0.2) 95% + ), + linear-gradient( + 110deg, + rgb(78,91,150) 33%, + rgb(67,158,255) 95% + ); +} +@media screen and (max-width: 76.1875em) { + .md-nav--primary .md-nav__title[for="__drawer"], + .md-nav--primary .md-nav__title { + background: + -o-linear-gradient( + 340deg, + rgba(0,0,0,0.3) 33%, + rgba(0,0,0,0.2) 95% + ), + -o-linear-gradient( + 340deg, + rgb(78,91,150) 33%, 
+ rgb(67,158,255) 95% + ); + background: + linear-gradient( + 110deg, + rgba(0,0,0,0.3) 33%, + rgba(0,0,0,0.2) 95% + ), + linear-gradient( + 110deg, + rgb(78,91,150) 33%, + rgb(67,158,255) 95% + ); + } +} +.superNav, +.md-nav__source { + background-color: var(--night500); +} \ No newline at end of file
diff --git a/contrib/pg_tde/documentation/docs/decrypt.md b/contrib/pg_tde/documentation/docs/decrypt.md new file mode 100644 index 00000000000..c29ee6c1329 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/decrypt.md @@ -0,0 +1,44 @@
+# Decrypt an encrypted table
+
+## Method 1. Change the access method
+
+If you encrypted a table with the `tde_heap` or `tde_heap_basic` access method and need to decrypt it, run the following command against the desired table (`mytable` in the example below):
+
+```
+ALTER TABLE mytable SET ACCESS METHOD heap;
+```
+
+Check that the table is not encrypted:
+
+```
+SELECT pg_tde_is_encrypted('mytable');
+```
+
+The output returns `f`, meaning that the table is no longer encrypted.
+
+!!! note ""
+
+    In the same way you can re-encrypt the data with the `tde_heap_basic` access method.
+
+    ```
+    ALTER TABLE mytable SET ACCESS METHOD tde_heap_basic;
+    ```
+
+    Note that the indexes and WAL files will no longer be encrypted.
+
+## Method 2. Create a new unencrypted table based on the encrypted one
+
+Alternatively, you can create a new unencrypted table with the same structure and data as the initial table. For example, the original encrypted table is `EncryptedCustomers`. Use the following command to create a new table `Customers`:
+
+```
+CREATE TABLE Customers AS
+SELECT * FROM EncryptedCustomers;
+```
+
+The new table `Customers` inherits the structure and the data from `EncryptedCustomers`.
+
+(Optional) If you no longer need the `EncryptedCustomers` table, you can delete it.
+
+```
+DROP TABLE EncryptedCustomers;
+```
\ No newline at end of file
diff --git a/contrib/pg_tde/documentation/docs/external-parameters.md b/contrib/pg_tde/documentation/docs/external-parameters.md new file mode 100644 index 00000000000..a27e97b0312 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/external-parameters.md @@ -0,0 +1,34 @@
+# Use external reference to parameters
+
+To allow storing secrets or any other parameters in a more secure, external location, `pg_tde`
+allows users to specify an external reference instead of hardcoded parameters.
+
+In the Alpha1 version, `pg_tde` supports the following external storage methods:
+
+* `file`, which just stores the data in a simple file specified by a `path`. The file should be
+readable by the postgres process.
+* `remote`, which uses an HTTP request to retrieve the parameter from the specified `url`.
+
+## Examples
+
+To use the file provider with a file location specified by the `remote` method,
+use the following command:
+
+```
+SELECT pg_tde_add_key_provider_file(
+    'file-provider',
+    json_object( 'type' VALUE 'remote', 'url' VALUE 'http://localhost:8888/hello' )
+  );
+```
+
+Or to use the `file` method, use the following command:
+
+```
+SELECT pg_tde_add_key_provider_file(
+    'file-provider',
+    json_object( 'type' VALUE 'file', 'path' VALUE '/tmp/datafile-location' )
+  );
+```
+
+Any parameter specified to the `add_key_provider` functions can be a `json_object` instead of a string,
+similar to the above examples.
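+
+As a quick local test of the `remote` method, you can serve a parameter value over HTTP. This is an illustrative sketch only (the paths and the parameter value are hypothetical); it assumes Python 3 is available:
+
+```bash
+# Put the parameter value into a file named "hello" and serve it on port 8888,
+# so that http://localhost:8888/hello returns the value
+mkdir -p /tmp/tde-params
+echo -n 'my-parameter-value' > /tmp/tde-params/hello
+cd /tmp/tde-params && python3 -m http.server 8888
+```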
\ No newline at end of file
diff --git a/contrib/pg_tde/documentation/docs/faq.md b/contrib/pg_tde/documentation/docs/faq.md new file mode 100644 index 00000000000..9e9d5afdf84 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/faq.md @@ -0,0 +1,30 @@
+# FAQ
+
+## Why do I need TDE?
+
+- Compliance with security and legal regulations like GDPR, PCI DSS and others
+- Encryption of backups
+- Granular encryption of specific data sets, reducing the performance overhead that encryption brings
+- An additional layer of security on top of existing security measures
+
+## I use disk-level encryption. Why should I care about TDE?
+
+Encrypting a hard drive encrypts all data on it, including system and application files. However, disk encryption doesn’t protect your data once the system has booted: during runtime, the files are transparently decrypted.
+
+TDE focuses specifically on data files and offers more granular control over encrypted data. It also ensures that files are encrypted on disk during runtime and when moved to another system or storage.
+
+Consider using TDE and storage-level encryption together to add another layer of data security.
+
+## Is TDE enough to ensure data security?
+
+No. TDE is an additional layer to ensure data security. It protects data at rest. Consider also introducing these measures:
+
+* Access control and authentication
+* Strong network security like TLS
+* Disk encryption
+* Regular monitoring and auditing
+* Additional data protection for sensitive fields (e.g., application-layer encryption)
+
+## What happens to my data if I lose a principal key?
+
+If you lose encryption keys, especially the principal key, the data is lost. That's why it's critical to back up your encryption keys securely.
\ No newline at end of file
diff --git a/contrib/pg_tde/documentation/docs/features.md b/contrib/pg_tde/documentation/docs/features.md new file mode 100644 index 00000000000..b62ca50bc4a --- /dev/null +++ b/contrib/pg_tde/documentation/docs/features.md @@ -0,0 +1,18 @@
+# Features
+
+We provide `pg_tde` in two versions, for both PostgreSQL Community and [Percona Server for PostgreSQL](https://docs.percona.com/postgresql/17/). The difference between the versions is in the set of included features, which in turn depends on the Storage Manager API. While PostgreSQL Community uses the default Storage Manager API, Percona Server for PostgreSQL extends the Storage Manager API, enabling the integration of custom storage managers.
+
+The following table lists the features available in each version:
+
+| PostgreSQL Community version | Percona Server for PostgreSQL version |
+|----------------------|-------------------------------|
+| Table encryption:<br> - data tables,<br> - TOAST tables,<br> - temporary tables created during the database operation.<br><br>Metadata of those tables is not encrypted. | Table encryption:<br> - data tables,<br> - **Index data for encrypted tables**,<br> - TOAST tables,<br> - temporary tables created during the database operation.<br><br>Metadata of those tables is not encrypted. |
+| Write-Ahead Log (WAL) encryption of data in encrypted tables | **Global** Write-Ahead Log (WAL) encryption: for data in encrypted and non-encrypted tables |
+| Multi-tenancy support | Multi-tenancy support |
+| Table-level granularity | Table-level granularity |
+| Key management via:<br> - HashiCorp Vault;<br> - Local keyfile | Key management via:<br> - HashiCorp Vault;<br> - KMIP server;<br> - Local keyfile |
+| | Logical replication of encrypted tables |
+
+
+
+[Get started](install.md){.md-button}
\ No newline at end of file
diff --git a/contrib/pg_tde/documentation/docs/fonts/Poppins-Italic.ttf b/contrib/pg_tde/documentation/docs/fonts/Poppins-Italic.ttf new file mode 100644 index 00000000000..12b7b3c40b5 Binary files /dev/null and b/contrib/pg_tde/documentation/docs/fonts/Poppins-Italic.ttf differ
diff --git a/contrib/pg_tde/documentation/docs/fonts/Poppins-Light.ttf b/contrib/pg_tde/documentation/docs/fonts/Poppins-Light.ttf new file mode 100644 index 00000000000..bc36bcc2427 Binary files /dev/null and b/contrib/pg_tde/documentation/docs/fonts/Poppins-Light.ttf differ
diff --git a/contrib/pg_tde/documentation/docs/fonts/Poppins-LightItalic.ttf b/contrib/pg_tde/documentation/docs/fonts/Poppins-LightItalic.ttf new file mode 100644 index 00000000000..9e70be6a9ef Binary files /dev/null and b/contrib/pg_tde/documentation/docs/fonts/Poppins-LightItalic.ttf differ
diff --git a/contrib/pg_tde/documentation/docs/fonts/Poppins-Medium.ttf b/contrib/pg_tde/documentation/docs/fonts/Poppins-Medium.ttf new file mode 100644 index 00000000000..6bcdcc27f22 Binary files /dev/null and b/contrib/pg_tde/documentation/docs/fonts/Poppins-Medium.ttf differ
diff --git a/contrib/pg_tde/documentation/docs/fonts/Poppins-MediumItalic.ttf b/contrib/pg_tde/documentation/docs/fonts/Poppins-MediumItalic.ttf new file mode 100644 index 00000000000..be67410fd0a Binary files /dev/null and b/contrib/pg_tde/documentation/docs/fonts/Poppins-MediumItalic.ttf differ
diff --git a/contrib/pg_tde/documentation/docs/fonts/Poppins-Regular.ttf b/contrib/pg_tde/documentation/docs/fonts/Poppins-Regular.ttf new file mode 100644 index 00000000000..9f0c71b70a4 Binary files /dev/null and b/contrib/pg_tde/documentation/docs/fonts/Poppins-Regular.ttf differ
diff --git a/contrib/pg_tde/documentation/docs/fonts/Poppins-SemiBold.ttf b/contrib/pg_tde/documentation/docs/fonts/Poppins-SemiBold.ttf new file mode 100644 index 00000000000..74c726e3278 Binary files /dev/null and b/contrib/pg_tde/documentation/docs/fonts/Poppins-SemiBold.ttf differ
diff --git a/contrib/pg_tde/documentation/docs/fonts/Poppins-SemiBoldItalic.ttf b/contrib/pg_tde/documentation/docs/fonts/Poppins-SemiBoldItalic.ttf new file mode 100644 index 00000000000..3e6c942233c Binary files /dev/null and b/contrib/pg_tde/documentation/docs/fonts/Poppins-SemiBoldItalic.ttf differ
diff --git a/contrib/pg_tde/documentation/docs/functions.md b/contrib/pg_tde/documentation/docs/functions.md new file mode 100644 index 00000000000..a83d812a03f --- /dev/null +++ b/contrib/pg_tde/documentation/docs/functions.md @@ -0,0 +1,108 @@
+# Functions
+
+The `pg_tde` extension provides the following functions:
+
+## pg_tde_add_key_provider_file
+
+Creates a new key provider for the database using a local file.
+
+This function is intended for development, and stores the keys unencrypted in the specified data file.
+
+```
+SELECT pg_tde_add_key_provider_file('provider-name','/path/to/the/keyring/data.file');
+```
+
+All parameters can be either strings or JSON objects [referencing remote parameters](external-parameters.md).
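+
+For context, here is a minimal end-to-end sketch, not an official recipe: the key, provider, and table names are illustrative, and it assumes `pg_tde` is in `shared_preload_libraries` and that you run the Percona Server for PostgreSQL version (use `tde_heap_basic` with the community version):
+
+```sql
+-- Create the extension in the target database
+CREATE EXTENSION pg_tde;
+
+-- Register a development-only file key provider
+SELECT pg_tde_add_key_provider_file('local-keyring', '/tmp/keyring.data');
+
+-- Set the principal key for this database (see pg_tde_set_principal_key below)
+SELECT pg_tde_set_principal_key('demo-principal-key', 'local-keyring');
+
+-- Create an encrypted table
+CREATE TABLE demo_secrets (id int, payload text) USING tde_heap;
+```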
+
+## pg_tde_add_key_provider_vault_v2
+
+Creates a new key provider for the database using a remote HashiCorp Vault server.
+
+The specified access parameters require permission to read and write keys at the location.
+
+```
+SELECT pg_tde_add_key_provider_vault_v2('provider-name','secret_token','url','mount','ca_path');
+```
+
+where:
+
+* `url` is the URL of the Vault server
+* `mount` is the mount point where the keyring should store the keys
+* `secret_token` is an access token with read and write access to the above mount point
+* [optional] `ca_path` is the path of the CA file used for SSL verification
+
+All parameters can be either strings or JSON objects [referencing remote parameters](external-parameters.md).
+
+## pg_tde_add_key_provider_kmip
+
+Creates a new key provider for the database using a remote KMIP server.
+
+The specified access parameters require permission to read and write keys at the server.
+
+```
+SELECT pg_tde_add_key_provider_kmip('provider-name','kmip-IP', 5696, '/path_to/server_certificate.pem', '/path_to/client_key.pem');
+```
+
+where:
+
+* `provider-name` is the name of the provider. You can specify any name; it's for you to identify the provider.
+* `kmip-IP` is the IP address or domain name of the KMIP server
+* the third parameter is the port to communicate with the KMIP server. The default port is `5696`.
+* `server_certificate` is the path to the certificate file for the KMIP server.
+* `client_key` is the path to the client key.
+
+## pg_tde_set_principal_key
+
+Sets the principal key for the database using the specified key provider.
+
+The principal key name is also used for constructing the name in the provider, for example on the remote Vault server.
+
+You can use this function only to set a principal key. To change the principal key, use the [`pg_tde_rotate_principal_key`](#pg_tde_rotate_principal_key) function.
+
+```
+SELECT pg_tde_set_principal_key('name-of-the-principal-key', 'provider-name');
+```
+
+## pg_tde_rotate_principal_key
+
+Creates a new version of the specified principal key and updates the database so that it uses the new principal key version.
+
+When used without any parameters, the function will just create a new version of the current database
+principal key, using the same provider:
+
+```
+SELECT pg_tde_rotate_principal_key();
+```
+
+Alternatively, you can pass two parameters to the function, specifying both a new key name and a new provider name:
+
+```
+SELECT pg_tde_rotate_principal_key('name-of-the-new-principal-key', 'name-of-the-new-provider');
+```
+
+Both parameters support the `NULL` value, which means that the parameter won't be changed:
+
+```
+-- creates new principal key on the same provider as before
+SELECT pg_tde_rotate_principal_key('name-of-the-new-principal-key', NULL);
+
+-- copies the current principal key to a new provider
+SELECT pg_tde_rotate_principal_key(NULL, 'name-of-the-new-provider');
+```
+
+
+## pg_tde_is_encrypted
+
+Tells whether a table is encrypted using the `tde_heap` access method or not.
+
+To verify table encryption, run the following statement:
+
+```
+SELECT pg_tde_is_encrypted('table_name');
+```
+
+You can also verify if a table in a custom schema is encrypted. Pass the schema name to the function as follows:
+
+```
+SELECT pg_tde_is_encrypted('schema.table_name');
+```
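+
+As a small usage sketch (the database and key names are hypothetical), a periodic key rotation could be scripted from the shell with `psql`:
+
+```bash
+#!/bin/sh
+# Rotate the principal key of database "appdb" to a date-stamped name,
+# keeping the current key provider (NULL second argument)
+psql -d appdb -c "SELECT pg_tde_rotate_principal_key('appdb-key-$(date +%Y%m%d)', NULL);"
+```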
diff --git a/contrib/pg_tde/documentation/docs/index.md b/contrib/pg_tde/documentation/docs/index.md new file mode 100644 index 00000000000..bf3b12fd2e1 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/index.md @@ -0,0 +1,58 @@
+# `pg_tde` documentation
+
+`pg_tde` is the open source PostgreSQL extension that provides Transparent Data Encryption (TDE) to protect data at rest. This ensures that the data stored on disk is encrypted, and no one can read it without the proper encryption keys, even if they gain access to the physical storage media.
+
+You can configure encryption differently for each database, encrypting specific tables in some databases with different encryption keys while keeping others unencrypted.
+
+Learn more about [what Transparent Data Encryption is](tde.md#how-does-it-work) and [why you need it](tde.md#why-do-you-need-tde).
+
+!!! important
+
+    This is the {{release}} version of the extension and it is not meant for production use yet. We encourage you to use it in testing environments and [provide your feedback](https://forums.percona.com/c/postgresql/pg-tde-transparent-data-encryption-tde/82).
+
+[Get started](install.md){.md-button}
+[What's new in pg_tde {{release}}](release-notes/release-notes.md){.md-button}
+
+## What's encrypted:
+
+* User data in tables, including TOAST tables, that are created using the extension. Metadata of those tables is not encrypted.
+* Temporary tables created during the database operation for data tables created using the extension
+* Write-Ahead Log (WAL) data for the entire database cluster. This includes WAL data in encrypted and non-encrypted tables
+* Indexes on encrypted tables
+* Logical replication on encrypted tables
+
+[Check the full feature list](features.md){.md-button}
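+
+As an illustrative sketch (the table name is hypothetical, and a key provider and principal key are assumed to be configured already), encryption is opt-in per table via the access method:
+
+```sql
+-- tde_heap requires the Percona Server for PostgreSQL version;
+-- use tde_heap_basic with the community version
+CREATE TABLE sensitive_data (
+    id      serial PRIMARY KEY,
+    payload text
+) USING tde_heap;
+
+SELECT pg_tde_is_encrypted('sensitive_data');  -- returns t
+```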
+
+## Known limitations
+
+* Keys in the local keyfile are stored unencrypted. For better security we recommend using a key management store.
+* System tables are currently not encrypted.
+* Currently you cannot update the configuration of an existing Key Management Store (KMS). If its configuration changes (e.g. your Vault server has a new URL), you must set up a new key provider in `pg_tde` and create new keys there. Both the KMS and PostgreSQL servers must be up and running during these changes. [Reach out to our experts](https://www.percona.com/about/contact) for assistance and to outline the best update path for you.
+
+    We plan to introduce a way to update the configuration of an existing KMS in future releases.
+
+* `pg_rewind` doesn't work with encrypted WAL for now. We plan to fix it in future releases.
+
+
+:material-alert: Warning: Note that introducing encryption/decryption affects performance. Our benchmark tests show less than 10% performance overhead for most situations. However, in some specific applications, such as those using JSONB operations, performance degradation might be higher.
+
+## Versions and supported PostgreSQL deployments
+
+The `pg_tde` extension comes in two distinct versions with specific access methods to encrypt the data. These versions are database-specific and differ in terms of what they encrypt and with what access method. Each version is characterized by the database it supports, the access method it provides, and the scope of encryption it offers.
+
+* **Version for Percona Server for PostgreSQL**
+
+    This `pg_tde` version is based on and supported for [Percona Server for PostgreSQL 17.x :octicons-link-external-16:](https://docs.percona.com/postgresql/17/postgresql-server.html) - an open source binary drop-in replacement for PostgreSQL Community. It provides the `tde_heap` access method and offers [full encryption capabilities](features.md).
+
+* **Community version**
+
+    This version is supported for PostgreSQL Community 16 and 17, and Percona Distribution for PostgreSQL 16. It provides the `tde_heap_basic` access method, offering limited encryption features. The limitations are that WAL data is encrypted only for tables created using the extension, and there is no support for index encryption or logical replication.
+
+### Which version to choose?
+
+The answer is pretty straightforward: for data sets where indexing is not mandatory or index encryption is not required, use the community version and the `tde_heap_basic` access method. Check the [upstream documentation :octicons-link-external-16:](https://github.com/percona/pg_tde/blob/main/README.md) to learn how to get started.
+
+Otherwise, enjoy full encryption with the Percona Server for PostgreSQL version and the `tde_heap` access method.
+
+Still not sure? [Contact our experts](https://www.percona.com/about/contact) to find the best solution for you.
+
diff --git a/contrib/pg_tde/documentation/docs/install.md b/contrib/pg_tde/documentation/docs/install.md new file mode 100644 index 00000000000..daea87d462d --- /dev/null +++ b/contrib/pg_tde/documentation/docs/install.md @@ -0,0 +1,93 @@
+# Installation
+
+## Considerations
+
+You can use the following options to manage encryption keys:
+
+* Use the HashiCorp Vault server. This is the recommended approach. The Vault server configuration is out of scope of this document. We assume that you have the Vault server up and running. For the `pg_tde` configuration, you need the following information:
+
+    * The secret access token to the Vault server
+    * The URL to access the Vault server
+    * (Optional) The CA file used for SSL verification
+
+* Use the local keyfile. This approach is best suited for development and testing purposes, since the keys are stored unencrypted in the specified keyfile.
+
+## Procedure
+
+Install `pg_tde` using one of the available installation methods:
+
+
+=== "Package manager"
+
+    The packages are available for the following operating systems:
+
+    - Red Hat Enterprise Linux 8 and compatible derivatives
+    - Red Hat Enterprise Linux 9 and compatible derivatives
+    - Ubuntu 20.04 (Focal Fossa)
+    - Ubuntu 22.04 (Jammy Jellyfish)
+    - Ubuntu 24.04 (Noble Numbat)
+    - Debian 11 (Bullseye)
+    - Debian 12 (Bookworm)
+
+    [Install on Debian or Ubuntu](apt.md){.md-button}
+    [Install on RHEL or derivatives](yum.md){.md-button}
+
+=== "Build from source"
+
+    To build `pg_tde` from source code, do the following:
+
+    1. On Ubuntu/Debian: Install the following dependencies required for the build:
+
+        ```sh
+        sudo apt install make gcc postgresql-server-dev-17 libcurl4-openssl-dev
+        ```
+
+    2. [Install Percona Distribution for PostgreSQL 17 :octicons-link-external-16:](https://docs.percona.com/postgresql/17/installing.html) or [upstream PostgreSQL 17 :octicons-link-external-16:](https://www.postgresql.org/download/)
+
+    3. If PostgreSQL is installed in a non-standard directory, set the `PG_CONFIG` environment variable to point to the `pg_config` executable.
+
+    4. Clone the repository:
+
+        ```
+        git clone https://github.com/percona/pg_tde.git
+        ```
+
+    5. Compile and install the extension:
+
+        ```
+        cd pg_tde
+        make USE_PGXS=1
+        sudo make USE_PGXS=1 install
+        ```
+
+=== "Run in Docker"
+
+    !!! note
+
+        The steps below are for the `pg_tde` community version.
+
+        To run the `pg_tde` version for Percona Server for PostgreSQL, [use the Percona Distribution for PostgreSQL Docker image :octicons-link-external-16:](https://docs.percona.com/postgresql/17/docker.html).
+
+    You can find Docker images built from the current main branch on [Docker Hub](https://hub.docker.com/r/perconalab/pg_tde). Images are built on top of the [postgres:16](https://hub.docker.com/_/postgres) official image.
+
+    To run `pg_tde` in Docker, use the following command:
+
+    ```
+    docker run --name pg-tde -e POSTGRES_PASSWORD=mysecretpassword -d perconalab/pg_tde
+    ```
+
+    It builds and adds the `pg_tde` extension to PostgreSQL 16. The `postgresql.conf` contains the required modifications. The `pg_tde` extension is added to `template1` so that all new databases automatically have the `pg_tde` extension loaded.
+
+    Keys are not created automatically. You must configure a key provider and a principal key for each database where you wish to use encrypted tables. See the instructions in the [Setup](setup.md) section, starting with step 4, as the first 3 steps are already completed in the Docker image.
+
+    See the [Docker docs](https://hub.docker.com/_/postgres) for usage details.
+
+    You can also build a Docker image manually with:
+
+    ```
+    docker build . -f ./docker/Dockerfile -t your-image-name
+    ```
+
+## Next steps
+
+[Setup](setup.md){.md-button}
diff --git a/contrib/pg_tde/documentation/docs/js/consent.js b/contrib/pg_tde/documentation/docs/js/consent.js new file mode 100644 index 00000000000..b6f8a8ac0a3 --- /dev/null +++ b/contrib/pg_tde/documentation/docs/js/consent.js @@ -0,0 +1,6 @@
+var consent = __md_get("__consent")
+if (consent && consent.custom) {
+    /* The user accepted the cookie */
+} else {
+    /* The user rejected the cookie */
+}
\ No newline at end of file
diff --git a/contrib/pg_tde/documentation/docs/js/promptremover.js b/contrib/pg_tde/documentation/docs/js/promptremover.js new file mode 100644 index 00000000000..aef117323fb --- /dev/null +++ b/contrib/pg_tde/documentation/docs/js/promptremover.js @@ -0,0 +1,44 @@
+document.addEventListener("DOMContentLoaded", function(){
+    // get collection of code blocks:
+    const collection = document.getElementsByClassName("highlight");
+    for (let i = 0; i < collection.length; i++) {
+        const commandElement=collection.item(i);
+        let commandButtonElement = commandElement.getElementsByTagName("button");
+        // read the prompt string from an attribute of the code block:
+        let promptString = commandElement.getAttribute("data-prompt");
+        if (!promptString) continue;
+        let commandCodeElement = commandElement.getElementsByTagName("code");
+        let commandCodeElementString = commandCodeElement.item(0).textContent;
+        let trueCommand = commandCodeElementString;
+        if (commandCodeElementString.startsWith(promptString)) {
+            // remove the first occurrence of the prompt:
+            trueCommand = commandCodeElementString.substring(promptString.length, commandCodeElementString.length).trim();
+        }
+        // remove other occurrences in case of a multi-line string:
+        trueCommand = trueCommand.replaceAll("\n"+promptString, "\n").replace(/^[^\S\r\n]+/gm, "");
+
+        // CHECK IF THERE IS A SECOND PROMPT:
+        promptString = commandElement.getAttribute("data-prompt-second");
+        if (promptString) {
+            if (trueCommand.startsWith(promptString)) {
+                trueCommand = trueCommand.substring(promptString.length, trueCommand.length).trim();
+            }
+            trueCommand = trueCommand.replaceAll("\n"+promptString, "\n").replace(/^[^\S\r\n]+/gm, "");
+        }
+
+        // CHECK IF THERE IS A THIRD PROMPT:
+        promptString = commandElement.getAttribute("data-prompt-third");
+        if (promptString) {
+            if (trueCommand.startsWith(promptString)) {
+                trueCommand = trueCommand.substring(promptString.length, trueCommand.length).trim();
+            }
+            trueCommand = trueCommand.replaceAll("\n"+promptString, "\n").replace(/^[^\S\r\n]+/gm, "");
+        }
+        // attach the updated command as an attribute to the button where clipboard.js will find it:
+    commandButtonElement.item(0).setAttribute("data-clipboard-text", trueCommand);
+  }
+});
+
+
+
diff --git a/contrib/pg_tde/documentation/docs/release-notes/release-notes.md b/contrib/pg_tde/documentation/docs/release-notes/release-notes.md
new file mode 100644
index 00000000000..25e2ada2ff3
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/release-notes/release-notes.md
@@ -0,0 +1,100 @@
+# pg_tde release notes
+
+The `pg_tde` extension brings [Transparent Data Encryption (TDE)](../tde.md) to PostgreSQL and enables you to keep sensitive data safe and secure.
+
+[Get started](../install.md){.md-button}
+
+## Beta 2 (2024-12-16)
+
+With this release, the `pg_tde` extension offers two database-specific versions:
+
+* The PostgreSQL Community version provides only the `tde_heap_basic` access method, with which you can introduce table encryption and WAL encryption for data in the encrypted tables. Index data remains unencrypted.
+* The version for Percona Server for PostgreSQL provides the `tde_heap` access method. Using this method, you can encrypt index data in encrypted tables, thus increasing the safety of your sensitive data. For backward compatibility, the `tde_heap_basic` method is available in this version too.
+
+The Beta 2 version introduces the following features and improvements:
+
+### New Features
+
+* Added the `tde_heap` access method with which you can now enable index encryption for encrypted tables and global WAL data encryption. To use this access method, you must install Percona Server for PostgreSQL. Check the [installation guide](../install.md).
+* Added event triggers to identify index creation operations on encrypted tables and store them in custom storage.
+* Added support for secure transfer of keys using the [OASIS Key Management Interoperability Protocol (KMIP)](https://docs.oasis-open.org/kmip/kmip-spec/v2.0/os/kmip-spec-v2.0-os.html). The KMIP implementation was tested with the PyKMIP server and the HashiCorp Vault Enterprise KMIP Secrets Engine.
+
+
+### Improvements
+
+* WAL encryption improvements:
+
+    * Added a global key to encrypt WAL data in global space
+    * Added WAL key management
+
+* Keyring improvements:
+
+    * Renamed functions to reflect their usage for principal key management
+    * Improved keyring provider management across databases and the global space
+    * Keyring configuration now uses a common JSON API. This simplifies code handling and enables frontend tools like `pg_waldump` to read the configuration, thus improving debugging.
+
+* The `pg_tde_is_encrypted` function now supports custom schemas in the format of `pg_tde_is_encrypted('schema.table');`
+* Changed the location of internal TDE files: instead of the database directory, all files are now stored in `$PGDATA/pg_tde`
+* Improved error reporting when `pg_tde` is not added to `shared_preload_libraries`
+* Improved memory usage of `tde_heap_basic` during sequential reads
+* Improved `tde_heap_basic` for `SELECT` statements
+* Added encryption support for (some) command line utilities
+
+### Bugs fixed
+
+* Fixed multiple bugs with `tde_heap_basic` and TOAST records
+* Fixed various memory leaks
+
+## Beta (2024-06-30)
+
+With this version, the access method for the `pg_tde` extension is renamed to `tde_heap_basic`. Use this access method name to create tables. Find guidelines in the [Test TDE](../test.md) tutorial.
+
+The Beta version introduces the following bug fixes and improvements:
+
+* Fixed the issue with `pg_tde` running out of memory used for decrypted tuples. The fix introduces the new component `TDEBufferHeapTupleTableSlot` that keeps track of the allocated memory for decrypted tuples and frees this memory when the tuple slot is no longer needed.
+
+* Fixed the issue with adjusting the current position in a file by using a raw file descriptor for the `lseek` function. (Thanks to user _rainhard_ for providing the fix)
+
+* Enhanced the init script to consider a custom superuser for the POSTGRES_USER parameter when `pg_tde` is running via Docker (Thanks to _Alejandro Paredero_ for reporting the issue)
+
+
+
+## Alpha 1 (2024-03-28)
+
+### Release Highlights
+
+The Alpha 1 version of the extension introduces the following key features:
+
+* You can now rotate principal keys used for data encryption. This reduces the risk of long-term exposure to potential attacks and helps you comply with security standards such as GDPR, HIPAA, and PCI DSS.
+
+* You can now configure encryption differently for each database. For example, encrypt specific tables in some databases with different encryption keys while keeping others non-encrypted.
+
+* Keyring configuration has undergone several improvements, namely:
+
+    * You can define a separate keyring configuration for each database
+    * You can change the keyring configuration dynamically, without having to restart the server
+    * The keyring configuration is now stored in a catalog separately for each database, instead of a configuration file
+    * You can avoid storing secrets in the unencrypted catalog by configuring keyring parameters to be read from external sources (a file or an HTTP(S) request)
+
+### Improvements
+
+* Renamed the repository and Docker image from `postgres-tde-ext` to `pg_tde`. The extension name remains unchanged
+* Changed the Initialization Vector (IV) calculation of both the data and internal keys
+
+### Bugs fixed
+
+* Fixed TOAST-related crashes
+* Fixed a crash with the DELETE statement
+* Fixed performance-related issues
+* Fixed a bug where `pg_tde` sent many 404 requests to the Vault server
+* Fixed compatibility issues with old OpenSSL versions
+* Fixed compatibility with old Curl versions
+
+## MVP (2023-12-12)
+
+The Minimum Viable Product (MVP) version introduces the following functionality:
+
+* Encryption of heap tables, including TOAST
+* Encryption keys are stored either in a HashiCorp Vault server or in a local keyring file (for development)
+* The key storage is configurable via separate JSON configuration files
+* Replication support
diff --git a/contrib/pg_tde/documentation/docs/setup.md b/contrib/pg_tde/documentation/docs/setup.md
new file mode 100644
index 00000000000..e6e6c7e03fb
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/setup.md
@@ -0,0 +1,192 @@
+# Set up `pg_tde`
+
+## Enable extension
+
+Load `pg_tde` at start time. The extension requires additional shared memory; therefore, add the `pg_tde` value to the `shared_preload_libraries` parameter and restart the `postgresql` instance.
+
+1. Use the [ALTER SYSTEM](https://www.postgresql.org/docs/current/sql-altersystem.html) command from the `psql` terminal to modify the `shared_preload_libraries` parameter. This requires superuser privileges.
+
+    ```
+    ALTER SYSTEM SET shared_preload_libraries = 'pg_tde';
+    ```
+
+2. Start or restart the `postgresql` instance to apply the changes.
+
+    * On Debian and Ubuntu:
+
+        ```sh
+        sudo systemctl restart postgresql.service
+        ```
+
+    * On RHEL and derivatives:
+
+        ```sh
+        sudo systemctl restart postgresql-17
+        ```
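+
+    After the restart, you can verify that the library is loaded. This is a standard PostgreSQL check, not specific to `pg_tde`:
+
+    ```sql
+    SHOW shared_preload_libraries;
+    ```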
+3. Create the extension using the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. You must have the privileges of a superuser or a database owner to use this command. Connect to the database with `psql` as a superuser and run the following command:
+
+    ```
+    CREATE EXTENSION pg_tde;
+    ```
+
+    By default, the `pg_tde` extension is created for the current database. To enable data encryption in other databases, you must explicitly run the `CREATE EXTENSION` command against them.
+
+    !!! tip
+
+        You can have the `pg_tde` extension automatically enabled for every newly created database. Modify the `template1` template database as follows:
+
+        ```sh
+        psql -d template1 -c 'CREATE EXTENSION pg_tde;'
+        ```
+
+## Key provider configuration
+
+1. Set up a key provider for the database where you have enabled the extension.
+
+    === "With KMIP server"
+
+        Make sure you have obtained the root certificate for the KMIP server and the keypair for the client. The client key needs permissions to create / read keys on the server. Find the [configuration guidelines for the HashiCorp Vault Enterprise KMIP Secrets Engine](https://developer.hashicorp.com/vault/tutorials/enterprise/kmip-engine).
+
+        For testing purposes, you can use the PyKMIP server, which enables you to set up the required certificates. To use a real KMIP server, make sure to obtain valid certificates issued by the key management appliance.
+
+        ```
+        SELECT pg_tde_add_key_provider_kmip('provider-name','kmip-IP', 5696, '/path_to/server_certificate.pem', '/path_to/client_key.pem');
+        ```
+
+        where:
+
+        * `provider-name` is the name of the provider. You can specify any name; it's for you to identify the provider.
+        * `kmip-IP` is the IP address or domain name of the KMIP server
+        * `port` is the port used to communicate with the KMIP server. The typically used port is 5696.
+        * `server_certificate` is the path to the certificate file for the KMIP server.
+        * `client_key` is the path to the client key.
+
+        :material-information: Warning: This example is for testing purposes only:
+
+        ```
+        SELECT pg_tde_add_key_provider_kmip('kmip','127.0.0.1', 5696, '/tmp/server_certificate.pem', '/tmp/client_key_jane_doe.pem');
+        ```
+
+    === "With HashiCorp Vault"
+
+        The Vault server setup is out of scope of this document.
+
+        ```sql
+        SELECT pg_tde_add_key_provider_vault_v2('provider-name','root_token','url','mount','ca_path');
+        ```
+
+        where:
+
+        * `url` is the URL of the Vault server
+        * `mount` is the mount point where the keyring should store the keys
+        * `root_token` is an access token with read and write access to the above mount point
+        * [optional] `ca_path` is the path of the CA file used for SSL verification
+
+        :material-information: Warning: This example is for testing purposes only:
+
+        ```
+        SELECT pg_tde_add_key_provider_vault_v2('my-vault','hvs.zPuyktykA...example...ewUEnIRVaKoBzs2','https://vault.example.com','secret/data', NULL);
+        ```
+
+    === "With a keyring file"
+
+        This setup is intended for development and stores the keys unencrypted in the specified data file.
+
+        ```sql
+        SELECT pg_tde_add_key_provider_file('provider-name','/path/to/the/keyring/data.file');
+        ```
+
+        :material-information: Warning: This example is for testing purposes only:
+
+        ```sql
+        SELECT pg_tde_add_key_provider_file('file-keyring','/tmp/pg_tde_test_local_keyring.per');
+        ```
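+
+    To verify the setup, you can list all key providers registered for the current database. The `pg_tde_list_all_key_providers()` function is also used in the extension's regression tests:
+
+    ```sql
+    SELECT * FROM pg_tde_list_all_key_providers();
+    ```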
+2. Add a principal key:
+
+    ```sql
+    SELECT pg_tde_set_principal_key('name-of-the-principal-key', 'provider-name');
+    ```
+
+    :material-information: Warning: This example is for testing purposes only:
+
+    ```sql
+    SELECT pg_tde_set_principal_key('test-db-master-key','file-keyring');
+    ```
+
+    The key is auto-generated.
+
+
+    :material-information: Info: The key provider configuration is stored in the database catalog in an unencrypted table. See [how to use an external reference to parameters](external-parameters.md) to add an extra security layer to your setup.
+
+
+## WAL encryption configuration
+
+After you [enabled `pg_tde`](#enable-extension) and started Percona Server for PostgreSQL, a principal key and a keyring for WAL are created. Now you need to instruct `pg_tde` to encrypt WAL files by configuring WAL encryption.
+
+Here's how to do it:
+
+1. Enable WAL-level encryption using the `ALTER SYSTEM SET` command. You need superuser privileges to run this command:
+
+    ```sql
+    ALTER SYSTEM SET pg_tde.wal_encrypt = on;
+    ```
+
+2. Restart the server to apply the changes.
+
+    * On Debian and Ubuntu:
+
+        ```sh
+        sudo systemctl restart postgresql.service
+        ```
+
+    * On RHEL and derivatives:
+
+        ```sh
+        sudo systemctl restart postgresql-17
+        ```
+
+3. We highly recommend that you create your own keyring and rotate the principal key, because the default principal key is created from the local keyfile and is stored unencrypted.
+
+    Set up the key provider for WAL encryption:
+
+    === "With HashiCorp Vault"
+
+        ```sql
+        SELECT pg_tde_add_key_provider_vault_v2('PG_TDE_GLOBAL','provider-name',:'secret_token','url','mount','ca_path');
+        ```
+
+        where:
+
+        * `PG_TDE_GLOBAL` is the constant that defines the WAL encryption key
+        * `provider-name` is the name you define for the key provider
+        * `url` is the URL of the Vault server
+        * `mount` is the mount point where the keyring should store the keys
+        * `secret_token` is an access token with read and write access to the above mount point
+        * [optional] `ca_path` is the path of the CA file used for SSL verification
+
+
+    === "With keyring file"
+
+        This setup is intended for development and stores the keys unencrypted in the specified data file.
+
+        ```sql
+        SELECT pg_tde_add_key_provider_file('provider-name','/path/to/the/keyring/data.file');
+        ```
+
+4. Rotate the principal key. Don't forget to specify the `PG_TDE_GLOBAL` constant to rotate only the principal key for WAL.
+
+    ```sql
+    SELECT pg_tde_rotate_principal_key('PG_TDE_GLOBAL', 'new-principal-key', 'provider-name');
+    ```
+
+Now all WAL files are encrypted for both encrypted and unencrypted tables.
+
+## Next steps
+
+[Test TDE](test.md){.md-button}
+
diff --git a/contrib/pg_tde/documentation/docs/switch.md b/contrib/pg_tde/documentation/docs/switch.md
new file mode 100644
index 00000000000..5a3fa2edaf0
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/switch.md
@@ -0,0 +1,5 @@
+# Switch from Percona Server for PostgreSQL to PostgreSQL Community
+
+Percona Server for PostgreSQL and PostgreSQL Community are binary-compatible, which enables you to switch from one to the other. Here's how:
+
+1. If you used the `tde_heap` (tech preview feature) access method for encryption, either re-encrypt the data using the `tde_heap_basic` access method, or [decrypt](decrypt.md) it completely.
\ No newline at end of file
diff --git a/contrib/pg_tde/documentation/docs/table-access-method.md b/contrib/pg_tde/documentation/docs/table-access-method.md
new file mode 100644
index 00000000000..4ed69bc6307
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/table-access-method.md
@@ -0,0 +1,96 @@
+# Table access method
+
+A table access method is the way PostgreSQL stores the data in a table. The default table access method is `heap`. PostgreSQL organizes data in a heap structure, meaning there is no particular order to the rows in the table. Each row is stored independently and identified by its unique row identifier (TID).
+
+## How the `heap` access method works
+
+**Insertion**: When a new row is inserted, PostgreSQL finds free space in the tablespace and stores the row there.
+
+**Deletion**: When a row is deleted, PostgreSQL marks the space occupied by the row as free, but the data remains until it is overwritten by a new insertion.
+
+**Updates**: PostgreSQL handles updates by deleting the old row and inserting a new row with the updated values.
+
+## Custom access method
+
+Custom access methods allow you to implement and define your own way of organizing data in PostgreSQL. This is useful if the default table access method doesn't meet your needs.
+
+Custom access methods are typically available with PostgreSQL extensions. When you install an extension and enable it in PostgreSQL, a custom access method is created.
+
+An example of such an approach is the `tde_heap` access method. It is automatically created **only** for the databases where you [enabled the `pg_tde` extension](setup.md) and configured the key provider, enabling you to encrypt the data.
+
+To use a custom access method, specify the `USING` clause for the `CREATE TABLE` command:
+
+```sql
+CREATE TABLE table_name (
+    column1 data_type,
+    column2 data_type,
+    ...
+) USING tde_heap;
+```
+
+### How `tde_heap` works
+
+The `tde_heap` access method works on top of the default `heap` access method and serves as a marker indicating which tables require encryption. It uses the custom storage manager TDE SMGR, which becomes active only after you install the `pg_tde` extension.
+
+Every data modification operation is first sent to the Buffer Manager, which updates the buffer cache. It is then passed to the storage manager, which writes it to disk. When a table requires encryption, the data is sent to the TDE storage manager, where it is encrypted before it is written to disk.
+
+Similarly, when a client queries the database, the PostgreSQL core sends the request to the Buffer Manager, which checks if the requested data is already in the buffer cache. If it’s not there, the Buffer Manager requests the data from the storage manager. The TDE storage manager reads the encrypted data from disk, decrypts it and loads it into the buffer cache. The Buffer Manager sends the requested data to the PostgreSQL core and then to the client.
+
+
+Thus, the encryption is done at the storage manager level.
+
+## Changing the default table access method
+
+You can change the default table access method so that every table in the entire database cluster is created using the custom access method. For example, you can enable data encryption by default by defining `tde_heap` as the default table access method.
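+
+As a short sketch of the end result (the `orders` table name is purely illustrative, and this assumes a key provider and principal key are already configured; the session-level `SET` command is described below):
+
+```sql
+SET default_table_access_method = tde_heap;  -- session-level default
+CREATE TABLE orders (id int);                -- no USING clause needed
+SELECT pg_tde_is_encrypted('orders');        -- returns t
+```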
+
+However, consider the following before making this change:
+
+* This is a global setting that applies across the entire database cluster, not just to a single database.
+We recommend setting it with caution, because all tables and materialized views created without an explicit access method in their `CREATE` statement will default to the specified table access method.
+* You must create the `pg_tde` extension and configure the key provider for all databases before you modify the configuration. Otherwise, PostgreSQL won't find the specified access method and will throw an error.
+
+Here's how you can set the new default table access method:
+
+1. Add the access method to the `default_table_access_method` parameter.
+
+    === "via the SQL statement"
+
+        Use the `ALTER SYSTEM SET` command. This requires superuser or ALTER SYSTEM privileges.
+
+        This example shows how to set the `tde_heap` access method. Replace it with `tde_heap_basic` if needed.
+
+
+        ```sql
+        ALTER SYSTEM SET default_table_access_method=tde_heap;
+        ```
+
+    === "via the configuration file"
+
+        Edit the `postgresql.conf` configuration file and add the value for the `default_table_access_method` parameter.
+
+        This example shows how to set the `tde_heap` access method. Replace it with `tde_heap_basic` if needed.
+
+        ```ini
+        default_table_access_method = 'tde_heap'
+        ```
+
+    === "via the SET command"
+
+        You can use the SET command to change the default table access method temporarily, for the current session.
+
+        Unlike modifying the `postgresql.conf` file or using the ALTER SYSTEM SET command, the changes you make via the SET command don't persist after the session ends.
+
+        You also don't need superuser privileges to run the SET command.
+
+        You can run the SET command anytime during the session. This example shows how to set the `tde_heap` access method. Replace it with `tde_heap_basic` if needed.
+
+        ```sql
+        SET default_table_access_method = tde_heap;
+        ```
+
+2. Reload the configuration to apply the changes:
+
+    ```sql
+    SELECT pg_reload_conf();
+    ```
+
diff --git a/contrib/pg_tde/documentation/docs/tde.md b/contrib/pg_tde/documentation/docs/tde.md
new file mode 100644
index 00000000000..a33ea97fa2f
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/tde.md
@@ -0,0 +1,43 @@
+# What is Transparent Data Encryption (TDE)
+
+Transparent Data Encryption is a technology to protect data at rest. The encryption process happens transparently in the background, without affecting database operations. Data is automatically encrypted as it's written to the disk and decrypted as it's read, all in real time. Users and applications interact with the data as usual without noticing any difference.
+
+## How does it work?
+
+To encrypt the data, two types of keys are used:
+
+* Internal encryption keys to encrypt user data. They are stored internally, near the data that they encrypt.
+* The principal key to encrypt database keys. It is kept separately from the database keys and is managed externally in the key management store.
+
+You have the following options to store and manage principal keys externally:
+
+* Use the HashiCorp Vault server. Only the KV Secrets Engine - Version 2 (API) back end is supported.
+* Use a KMIP-compatible server. `pg_tde` has been tested with the [PyKMIP](https://pykmip.readthedocs.io/en/latest/server.html) server and [the HashiCorp Vault Enterprise KMIP Secrets Engine](https://www.vaultproject.io/docs/secrets/kmip).
+
+The encryption process is the following:
+
+![image](_images/tde-flow.png)
+
+When a user creates an encrypted table using `pg_tde`, a new random key is generated for that table. Using the AES128 (AES-ECB) cipher algorithm, this key encrypts all data the user inserts in that table. Eventually the encrypted data gets stored in the underlying storage.
+
+The table key itself is encrypted using the principal key. The principal key is stored externally in the key management store.
+
+Similarly, when the user queries the encrypted table, the principal key is retrieved from the key store to decrypt the table key. Then the same unique internal key for that table is used to decrypt the data, and unencrypted data gets returned to the user. So, effectively, every TDE table has a unique key, and each table key is encrypted using the principal key.
+
+## Why do you need TDE?
+
+Using TDE has the following benefits:
+
+* For organizations:
+
+    - Ensure data safety when it is stored on disk and in backups
+    - Comply with security and legal standards like HIPAA, PCI DSS, SOC 2, ISO 27001
+
+* For DBAs:
+
+    - Granular encryption of specific tables, which reduces the performance overhead that encryption brings
+    - An additional layer of security on top of existing security measures like storage-level encryption, data encryption in transit using TLS, access control, and more
+
+!!! admonition "See also"
+
+    Percona Blog: [Transparent Data Encryption (TDE)](https://www.percona.com/blog/transparent-data-encryption-tde/)
\ No newline at end of file
diff --git a/contrib/pg_tde/documentation/docs/test.md b/contrib/pg_tde/documentation/docs/test.md
new file mode 100644
index 00000000000..ac6109532b6
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/test.md
@@ -0,0 +1,52 @@
+# Test Transparent Data Encryption
+
+Enabling the `pg_tde` extension for a database creates the `tde_heap` table access method. This access method enables you to encrypt the data.
+
+Here's how to do it:
+
+1. Create a table in the database for which you have [enabled `pg_tde`](setup.md) using the `tde_heap` access method as follows:
+
+    ```
+    CREATE TABLE table_name (column_definitions) USING tde_heap;
+    ```
+
+    :material-information: Warning: Example for testing purposes only:
+
+    ```
+    CREATE TABLE albums (
+        album_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+        artist_id INTEGER,
+        title TEXT NOT NULL,
+        released DATE NOT NULL
+    ) USING tde_heap;
+    ```
+
+    Learn more about table access methods and how you can enable data encryption by default in the [Table access method](table-access-method.md) section.
+
+2. To check if the data is encrypted, run the following function:
+
+    ```
+    SELECT pg_tde_is_encrypted('table_name');
+    ```
+
+    The function returns `t` if the table is encrypted and `f` if not.
+
+3. Rotate the principal key when needed:
+
+    ```
+    SELECT pg_tde_rotate_principal_key(); -- uses automatic key versioning
+    -- or
+    SELECT pg_tde_rotate_principal_key('new-principal-key', NULL); -- specify new key name
+    -- or
+    SELECT pg_tde_rotate_principal_key('new-principal-key', 'new-provider'); -- change provider
+    ```
+
+4. You can encrypt an existing table. It requires rewriting the table, so for large tables, it might take a considerable amount of time.
+
+    ```
+    ALTER TABLE table_name SET ACCESS METHOD tde_heap;
+    ```
+
+!!! hint
+
+    If you no longer wish to use `pg_tde` or wish to switch to using the `tde_heap_basic` access method, see how you can [decrypt your data](decrypt.md).
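+
+To list every table in the current database that was created with the `tde_heap` access method, you can run a standard catalog query (not specific to `pg_tde`; shown here as a convenience):
+
+```
+SELECT c.relname
+FROM pg_class c
+JOIN pg_am a ON a.oid = c.relam
+WHERE a.amname = 'tde_heap';
+```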
diff --git a/contrib/pg_tde/documentation/docs/uninstall.md b/contrib/pg_tde/documentation/docs/uninstall.md
new file mode 100644
index 00000000000..dafe9fd9dda
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/uninstall.md
@@ -0,0 +1,31 @@
+# Uninstall `pg_tde`
+
+If you no longer wish to use TDE in your deployment, you can remove the `pg_tde` extension. To do that, your user must have superuser or database owner privileges.
+
+Here's how to do it:
+
+1. Drop the extension using the `DROP EXTENSION` command with the `CASCADE` parameter.
+
+    :material-alert: Warning: The use of the CASCADE parameter deletes all tables that were created in the database with `pg_tde` enabled and also all dependencies upon the encrypted table (e.g. foreign keys in a non-encrypted table used in the encrypted one).
+
+    ```
+    DROP EXTENSION pg_tde CASCADE;
+    ```
+
+2. Run the `DROP EXTENSION` command against every database where you have enabled the `pg_tde` extension.
+
+3. Modify the `shared_preload_libraries` parameter and remove 'pg_tde' from it. Use the `ALTER SYSTEM SET` command for this purpose.
+
+4. Start or restart the `postgresql` instance to apply the changes.
+
+    * On Debian and Ubuntu:
+
+        ```sh
+        sudo systemctl restart postgresql.service
+        ```
+
+    * On RHEL and derivatives:
+
+        ```sh
+        sudo systemctl restart postgresql-17
+        ```
diff --git a/contrib/pg_tde/documentation/docs/yum.md b/contrib/pg_tde/documentation/docs/yum.md
new file mode 100644
index 00000000000..c4dfbc72e9c
--- /dev/null
+++ b/contrib/pg_tde/documentation/docs/yum.md
@@ -0,0 +1,52 @@
+# Install `pg_tde` on Red Hat Enterprise Linux and derivatives
+
+The packages for the tech preview `pg_tde` are available in the experimental repository for Percona Distribution for PostgreSQL 17.
+
+Check the [list of supported platforms](install.md#__tabbed_1_2).
+
+This tutorial shows how to install `pg_tde` with [Percona Distribution for PostgreSQL](https://docs.percona.com/postgresql/latest/index.html).
+
+## Preconditions
+
+### Install `percona-release`
+
+You need the `percona-release` repository management tool, which enables the desired Percona repository for you.
+
+1. Install `percona-release`:
+
+    ```bash
+    sudo yum -y install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
+    ```
+
+2. Enable the repository.
+
+    Percona provides [two repositories](repo-overview.md) for Percona Distribution for PostgreSQL. We recommend enabling the Major release repository to receive the latest updates in a timely manner.
+
+    ```bash
+    sudo percona-release enable-only ppg-{{pgversion17}}
+    ```
+
+## Install `pg_tde`
+
+!!! important
+
+    The `pg_tde` {{release}} extension is a part of the `percona-postgresql17` package. If you installed a previous version of `pg_tde` from the `percona-pg_tde_17` package, do the following:
+
+    * Drop the extension using the `DROP EXTENSION` command with the `CASCADE` parameter.
+
+        :material-alert: Warning: The use of the `CASCADE` parameter deletes all tables that were created in the database with `pg_tde` enabled and also all dependencies upon the encrypted table (e.g. foreign keys in a non-encrypted table used in the encrypted one).
+
+        ```sql
+        DROP EXTENSION pg_tde CASCADE;
+        ```
+
+    * Uninstall the `percona-pg_tde_17` package.
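+
+Then, install `pg_tde` as part of the `percona-postgresql17` package: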
+ + +```bash +sudo yum -y install percona-postgresql17 +``` + +## Next steps + +[Setup](setup.md){.md-button} diff --git a/contrib/pg_tde/documentation/mkdocs-pdf.yml b/contrib/pg_tde/documentation/mkdocs-pdf.yml new file mode 100644 index 00000000000..0b76f278889 --- /dev/null +++ b/contrib/pg_tde/documentation/mkdocs-pdf.yml @@ -0,0 +1,16 @@ +# MkDocs configuration for PDF builds +# Usage: ENABLE_PDF_EXPORT=1 mkdocs build -f mkdocs-pdf.yml + +INHERIT: mkdocs.yml + +copyright: Percona LLC, © 2024 + +extra_css: + - https://unicons.iconscout.com/release/v3.0.3/css/line.css + - https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.4.0/css/font-awesome.min.css + - css/extra.css + +markdown_extensions: + pymdownx.tabbed: {} + admonition: {} + diff --git a/contrib/pg_tde/documentation/mkdocs.yml b/contrib/pg_tde/documentation/mkdocs.yml new file mode 100644 index 00000000000..639689812c0 --- /dev/null +++ b/contrib/pg_tde/documentation/mkdocs.yml @@ -0,0 +1,179 @@ +# MkDocs general configuration + +site_name: pg_tde documentation +site_description: Documentation +site_author: Percona LLC +copyright: > + Percona LLC and/or its affiliates © 2023 — Cookie Consent + + +repo_name: percona/pg_tde +repo_url: https://github.com/percona/pg_tde +edit_uri: edit/main/documentation/docs/ + +use_directory_urls: false + +# Theme settings +theme: + name: material + logo: _images/postgresql-mark.svg + favicon: _images/postgresql-fav.svg + custom_dir: _resource/overrides + font: + text: Roboto + palette: + - media: "(prefers-color-scheme)" + toggle: + icon: material/brightness-auto + name: Color theme set to Automatic. Click to change + - media: "(prefers-color-scheme: light)" + scheme: percona-light + primary: custom + accent: custom + toggle: + icon: material/brightness-7 + name: Color theme set to Light Mode. Click to change + - media: "(prefers-color-scheme: dark)" + scheme: percona-dark + primary: custom + accent: custom + toggle: + icon: material/brightness-4 + name: Color theme set to Dark Mode. 
Click to change + +# Theme features + + features: + - search.share + - search.highlight + - content.code.copy + - content.action.view + - content.action.edit + - content.tabs.link + - navigation.top + - navigation.tracking + + + +extra_css: + - https://unicons.iconscout.com/release/v3.0.3/css/line.css + - https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.4.0/css/font-awesome.min.css + - css/percona.css + - css/design.css + - css/osano.css + - css/postgresql.css + - css/landing.css + +extra_javascript: + - js/version-select.js + - js/promptremover.js + - js/consent.js + +markdown_extensions: + - attr_list + - toc: + permalink: True + - admonition + - md_in_html + - footnotes + - def_list # https://michelf.ca/projects/php-markdown/extra/#def-list + - meta + - smarty: + smart_angled_quotes: true + - pymdownx.mark + - pymdownx.smartsymbols + - pymdownx.tilde + - pymdownx.superfences + - pymdownx.tabbed: + alternate_style: true + - pymdownx.tilde + - pymdownx.details + - pymdownx.highlight: + linenums: false + - pymdownx.snippets: + base_path: ["snippets"] + auto_append: + - services-banner.md + - pymdownx.emoji: + emoji_index: !!python/name:material.extensions.emoji.twemoji + emoji_generator: !!python/name:material.extensions.emoji.to_svg + options: + custom_icons: + - _resource/.icons + + +plugins: + - search: + separator: '[\s\-,:!=\[\]()"/]+|(?!\b)(?=[A-Z][a-z])|\.(?!\d)|&[lg]t;' + - open-in-new-tab: + - git-revision-date-localized: + enable_creation_date: true + enabled: !ENV [ENABLED_GIT_REVISION_DATE, True] + - meta-descriptions: + export_csv: false + quiet: false + enable_checks: false + min_length: 50 + max_length: 160 + - section-index # Adds links to nodes - comment out when creating PDF +# - htmlproofer # Uncomment to check links - but extends build time significantly + - glightbox + - macros: + include_yaml: + - 'variables.yml' # Use in markdown as '{{ VAR }}' + - with-pdf: # https://github.com/orzih/mkdocs-with-pdf + output_path: '_pdf/PerconaTDE.pdf' + cover_title: 'Percona Transparent Data Encryption' + cover_subtitle: Alpha 1 (2024-03-28) + author: 'Percona Technical Documentation Team' + cover_logo: docs/_images/Percona_Logo_Color.png + debug_html: false +# two_columns_level: 3 + custom_template_path: _resource/templates + enabled_if_env: ENABLE_PDF_EXPORT + +extra: + version: + provider: mike + analytics: + provider: google + property: G-J4J70BNH0G + feedback: + title: Was this page helpful? + ratings: + - icon: material/emoticon-happy-outline + name: This page was helpful + data: 1 + note: >- + Thanks for your feedback! + - icon: material/emoticon-sad-outline + name: This page could be improved + data: 0 + note: >- + Thank you for your feedback! Help us improve by using our + + feedback form. 
+ +nav: + - Home: index.md + - features.md + - Get started: + - "Install": "install.md" + - "Via apt": apt.md + - "Via yum": yum.md + - "Set up": "setup.md" + - "Test TDE": "test.md" + - functions.md + - Concepts: + - "What is TDE": tde.md + - table-access-method.md + - How to: + - Use reference to external parameters: external-parameters.md + - Decrypt an encrypted table: decrypt.md + - faq.md + - Release notes: + - "pg_tde release notes": release-notes/release-notes.md + - uninstall.md + - contribute.md + + diff --git a/contrib/pg_tde/documentation/requirements.txt b/contrib/pg_tde/documentation/requirements.txt new file mode 100644 index 00000000000..b3ee7d78e70 --- /dev/null +++ b/contrib/pg_tde/documentation/requirements.txt @@ -0,0 +1,19 @@ + +Markdown +mkdocs +mkdocs-versioning +mkdocs-macros-plugin +mkdocs-exclude +markdown-include +mkdocs-material +mkdocs-with-pdf +mkdocs-git-revision-date-localized-plugin +mkdocs-material-extensions +mkdocs-bootstrap-tables-plugin +mkdocs-section-index +mkdocs-htmlproofer-plugin +mkdocs-meta-descriptions-plugin +mike +mkdocs-glightbox +Pillow > 10.1.0 +mkdocs-open-in-new-tab \ No newline at end of file diff --git a/contrib/pg_tde/documentation/snippets/services-banner.md b/contrib/pg_tde/documentation/snippets/services-banner.md new file mode 100644 index 00000000000..81886f96b9f --- /dev/null +++ b/contrib/pg_tde/documentation/snippets/services-banner.md @@ -0,0 +1,13 @@ +
+ +## Get expert help { .title } + +If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services. + +
+ +[:material-forum-outline: Community Forum](https://forums.percona.com/c/postgresql/pg-tde-transparent-data-encryption-tde/82) [:percona-logo: Get a Percona Expert](https://www.percona.com/about/contact) + + + +
diff --git a/contrib/pg_tde/documentation/variables.yml b/contrib/pg_tde/documentation/variables.yml new file mode 100644 index 00000000000..05db6fc4c6c --- /dev/null +++ b/contrib/pg_tde/documentation/variables.yml @@ -0,0 +1,4 @@ +#Variables used throughout the docs + +release: 'Beta' +pgversion17: '17.2' \ No newline at end of file diff --git a/contrib/pg_tde/expected/alter_index.out b/contrib/pg_tde/expected/alter_index.out new file mode 100644 index 00000000000..59baff38822 --- /dev/null +++ b/contrib/pg_tde/expected/alter_index.out @@ -0,0 +1,73 @@ +\set tde_am tde_heap +\i sql/alter_index.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +SET default_table_access_method = :"tde_am"; +CREATE TABLE concur_reindex_part (c1 int, c2 int) PARTITION BY RANGE (c1); +CREATE TABLE concur_reindex_part_0 PARTITION OF concur_reindex_part + FOR VALUES FROM (0) TO (10) PARTITION BY list (c2); +CREATE TABLE concur_reindex_part_0_1 PARTITION OF concur_reindex_part_0 + FOR VALUES IN (1); +CREATE TABLE concur_reindex_part_0_2 PARTITION OF concur_reindex_part_0 + FOR VALUES IN (2); +-- This partitioned table will have no partitions. +CREATE TABLE concur_reindex_part_10 PARTITION OF concur_reindex_part + FOR VALUES FROM (10) TO (20) PARTITION BY list (c2); +-- Create some partitioned indexes +CREATE INDEX concur_reindex_part_index ON ONLY concur_reindex_part (c1); +CREATE INDEX concur_reindex_part_index_0 ON ONLY concur_reindex_part_0 (c1); +ALTER INDEX concur_reindex_part_index ATTACH PARTITION concur_reindex_part_index_0; +-- This partitioned index will have no partitions. 
+CREATE INDEX concur_reindex_part_index_10 ON ONLY concur_reindex_part_10 (c1); +ALTER INDEX concur_reindex_part_index ATTACH PARTITION concur_reindex_part_index_10; +CREATE INDEX concur_reindex_part_index_0_1 ON ONLY concur_reindex_part_0_1 (c1); +ALTER INDEX concur_reindex_part_index_0 ATTACH PARTITION concur_reindex_part_index_0_1; +CREATE INDEX concur_reindex_part_index_0_2 ON ONLY concur_reindex_part_0_2 (c1); +ALTER INDEX concur_reindex_part_index_0 ATTACH PARTITION concur_reindex_part_index_0_2; +SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index') + ORDER BY relid, level; + relid | parentrelid | level +-------------------------------+-----------------------------+------- + concur_reindex_part_index | | 0 + concur_reindex_part_index_0 | concur_reindex_part_index | 1 + concur_reindex_part_index_10 | concur_reindex_part_index | 1 + concur_reindex_part_index_0_1 | concur_reindex_part_index_0 | 2 + concur_reindex_part_index_0_2 | concur_reindex_part_index_0 | 2 +(5 rows) + +SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index') + ORDER BY relid, level; + relid | parentrelid | level +-------------------------------+-----------------------------+------- + concur_reindex_part_index | | 0 + concur_reindex_part_index_0 | concur_reindex_part_index | 1 + concur_reindex_part_index_10 | concur_reindex_part_index | 1 + concur_reindex_part_index_0_1 | concur_reindex_part_index_0 | 2 + concur_reindex_part_index_0_2 | concur_reindex_part_index_0 | 2 +(5 rows) + +SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index') + ORDER BY relid, level; + relid | parentrelid | level +-------------------------------+-----------------------------+------- + concur_reindex_part_index | | 0 + concur_reindex_part_index_0 | concur_reindex_part_index | 1 + concur_reindex_part_index_10 | concur_reindex_part_index | 1 + concur_reindex_part_index_0_1 | concur_reindex_part_index_0 | 2 + concur_reindex_part_index_0_2 | concur_reindex_part_index_0 | 2 +(5 rows) + +DROP TABLE concur_reindex_part; +DROP EXTENSION pg_tde; +RESET default_table_access_method; diff --git a/contrib/pg_tde/expected/alter_index_basic.out b/contrib/pg_tde/expected/alter_index_basic.out new file mode 100644 index 00000000000..953cd268d6b --- /dev/null +++ b/contrib/pg_tde/expected/alter_index_basic.out @@ -0,0 +1,73 @@ +\set tde_am tde_heap_basic +\i sql/alter_index.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +SET default_table_access_method = :"tde_am"; +CREATE TABLE concur_reindex_part (c1 int, c2 int) PARTITION BY RANGE (c1); +CREATE TABLE concur_reindex_part_0 PARTITION OF concur_reindex_part + FOR VALUES FROM (0) TO (10) PARTITION BY list (c2); +CREATE TABLE concur_reindex_part_0_1 PARTITION OF concur_reindex_part_0 + FOR VALUES IN (1); +CREATE TABLE concur_reindex_part_0_2 PARTITION OF concur_reindex_part_0 + FOR VALUES IN (2); +-- This partitioned table will have no partitions. 
+CREATE TABLE concur_reindex_part_10 PARTITION OF concur_reindex_part + FOR VALUES FROM (10) TO (20) PARTITION BY list (c2); +-- Create some partitioned indexes +CREATE INDEX concur_reindex_part_index ON ONLY concur_reindex_part (c1); +CREATE INDEX concur_reindex_part_index_0 ON ONLY concur_reindex_part_0 (c1); +ALTER INDEX concur_reindex_part_index ATTACH PARTITION concur_reindex_part_index_0; +-- This partitioned index will have no partitions. +CREATE INDEX concur_reindex_part_index_10 ON ONLY concur_reindex_part_10 (c1); +ALTER INDEX concur_reindex_part_index ATTACH PARTITION concur_reindex_part_index_10; +CREATE INDEX concur_reindex_part_index_0_1 ON ONLY concur_reindex_part_0_1 (c1); +ALTER INDEX concur_reindex_part_index_0 ATTACH PARTITION concur_reindex_part_index_0_1; +CREATE INDEX concur_reindex_part_index_0_2 ON ONLY concur_reindex_part_0_2 (c1); +ALTER INDEX concur_reindex_part_index_0 ATTACH PARTITION concur_reindex_part_index_0_2; +SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index') + ORDER BY relid, level; + relid | parentrelid | level +-------------------------------+-----------------------------+------- + concur_reindex_part_index | | 0 + concur_reindex_part_index_0 | concur_reindex_part_index | 1 + concur_reindex_part_index_10 | concur_reindex_part_index | 1 + concur_reindex_part_index_0_1 | concur_reindex_part_index_0 | 2 + concur_reindex_part_index_0_2 | concur_reindex_part_index_0 | 2 +(5 rows) + +SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index') + ORDER BY relid, level; + relid | parentrelid | level +-------------------------------+-----------------------------+------- + concur_reindex_part_index | | 0 + concur_reindex_part_index_0 | concur_reindex_part_index | 1 + concur_reindex_part_index_10 | concur_reindex_part_index | 1 + concur_reindex_part_index_0_1 | concur_reindex_part_index_0 | 2 + concur_reindex_part_index_0_2 | concur_reindex_part_index_0 | 2 +(5 rows) + +SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index') + ORDER BY relid, level; + relid | parentrelid | level +-------------------------------+-----------------------------+------- + concur_reindex_part_index | | 0 + concur_reindex_part_index_0 | concur_reindex_part_index | 1 + concur_reindex_part_index_10 | concur_reindex_part_index | 1 + concur_reindex_part_index_0_1 | concur_reindex_part_index_0 | 2 + concur_reindex_part_index_0_2 | concur_reindex_part_index_0 | 2 +(5 rows) + +DROP TABLE concur_reindex_part; +DROP EXTENSION pg_tde; +RESET default_table_access_method; diff --git a/contrib/pg_tde/expected/cache_alloc.out b/contrib/pg_tde/expected/cache_alloc.out new file mode 100644 index 00000000000..7b6bf20127f --- /dev/null +++ b/contrib/pg_tde/expected/cache_alloc.out @@ -0,0 +1,125 @@ +-- We test cache so AM doesn't matter +-- Just checking there are no mem debug WARNINGs during the cache population +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +do $$ + DECLARE idx integer; +begin + for idx in 0..700 loop + EXECUTE format('CREATE TABLE t%s (c1 int) USING tde_heap_basic', idx); + end loop; +end; $$; +DROP EXTENSION pg_tde cascade; +NOTICE: drop cascades to 701 other objects +DETAIL: drop cascades to table t0 +drop cascades to 
table t1 +drop cascades to table t2 +drop cascades to table t3 +drop cascades to table t4 +drop cascades to table t5 +drop cascades to table t6 +drop cascades to table t7 +drop cascades to table t8 +drop cascades to table t9 +drop cascades to table t10 +drop cascades to table t11 +drop cascades to table t12 +drop cascades to table t13 +drop cascades to table t14 +drop cascades to table t15 +drop cascades to table t16 +drop cascades to table t17 +drop cascades to table t18 +drop cascades to table t19 +drop cascades to table t20 +drop cascades to table t21 +drop cascades to table t22 +drop cascades to table t23 +drop cascades to table t24 +drop cascades to table t25 +drop cascades to table t26 +drop cascades to table t27 +drop cascades to table t28 +drop cascades to table t29 +drop cascades to table t30 +drop cascades to table t31 +drop cascades to table t32 +drop cascades to table t33 +drop cascades to table t34 +drop cascades to table t35 +drop cascades to table t36 +drop cascades to table t37 +drop cascades to table t38 +drop cascades to table t39 +drop cascades to table t40 +drop cascades to table t41 +drop cascades to table t42 +drop cascades to table t43 +drop cascades to table t44 +drop cascades to table t45 +drop cascades to table t46 +drop cascades to table t47 +drop cascades to table t48 +drop cascades to table t49 +drop cascades to table t50 +drop cascades to table t51 +drop cascades to table t52 +drop cascades to table t53 +drop cascades to table t54 +drop cascades to table t55 +drop cascades to table t56 +drop cascades to table t57 +drop cascades to table t58 +drop cascades to table t59 +drop cascades to table t60 +drop cascades to table t61 +drop cascades to table t62 +drop cascades to table t63 +drop cascades to table t64 +drop cascades to table t65 +drop cascades to table t66 +drop cascades to table t67 +drop cascades to table t68 +drop cascades to table t69 +drop cascades to table t70 +drop cascades to table t71 +drop cascades to table t72 +drop cascades to table t73 +drop cascades to table t74 +drop cascades to table t75 +drop cascades to table t76 +drop cascades to table t77 +drop cascades to table t78 +drop cascades to table t79 +drop cascades to table t80 +drop cascades to table t81 +drop cascades to table t82 +drop cascades to table t83 +drop cascades to table t84 +drop cascades to table t85 +drop cascades to table t86 +drop cascades to table t87 +drop cascades to table t88 +drop cascades to table t89 +drop cascades to table t90 +drop cascades to table t91 +drop cascades to table t92 +drop cascades to table t93 +drop cascades to table t94 +drop cascades to table t95 +drop cascades to table t96 +drop cascades to table t97 +drop cascades to table t98 +drop cascades to table t99 +and 601 other objects (see server log for list) diff --git a/contrib/pg_tde/expected/change_access_method.out b/contrib/pg_tde/expected/change_access_method.out new file mode 100644 index 00000000000..9e86ebf2f70 --- /dev/null +++ b/contrib/pg_tde/expected/change_access_method.out @@ -0,0 +1,91 @@ +\set tde_am tde_heap +\i sql/change_access_method.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE country_table ( + country_id serial primary key, + country_name text unique not null, + continent text not null 
+) using :tde_am; + +INSERT INTO country_table (country_name, continent) + VALUES ('Japan', 'Asia'), + ('UK', 'Europe'), + ('USA', 'North America'); +SELECT * FROM country_table; + country_id | country_name | continent +------------+--------------+--------------- + 1 | Japan | Asia + 2 | UK | Europe + 3 | USA | North America +(3 rows) + +SELECT pg_tde_is_encrypted('country_table'); + pg_tde_is_encrypted +--------------------- + t +(1 row) + +-- Try changing the encrypted table to an unencrypted table +ALTER TABLE country_table SET access method heap; +-- Insert some more data +INSERT INTO country_table (country_name, continent) + VALUES ('France', 'Europe'), + ('Germany', 'Europe'), + ('Canada', 'North America'); +SELECT * FROM country_table; + country_id | country_name | continent +------------+--------------+--------------- + 1 | Japan | Asia + 2 | UK | Europe + 3 | USA | North America + 4 | France | Europe + 5 | Germany | Europe + 6 | Canada | North America +(6 rows) + +SELECT pg_tde_is_encrypted('country_table'); + pg_tde_is_encrypted +--------------------- + f +(1 row) + +-- Change it back to encrypted +ALTER TABLE country_table SET access method :tde_am; +INSERT INTO country_table (country_name, continent) + VALUES ('China', 'Asia'), + ('Brazil', 'South America'), + ('Australia', 'Oceania'); +SELECT * FROM country_table; + country_id | country_name | continent +------------+--------------+--------------- + 1 | Japan | Asia + 2 | UK | Europe + 3 | USA | North America + 4 | France | Europe + 5 | Germany | Europe + 6 | Canada | North America + 7 | China | Asia + 8 | Brazil | South America + 9 | Australia | Oceania +(9 rows) + +SELECT pg_tde_is_encrypted('country_table'); + pg_tde_is_encrypted +--------------------- + t +(1 row) + +DROP TABLE country_table; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/change_access_method_basic.out b/contrib/pg_tde/expected/change_access_method_basic.out new file mode 100644 index 00000000000..e5c0bb7f6fc --- /dev/null +++ b/contrib/pg_tde/expected/change_access_method_basic.out @@ -0,0 +1,91 @@ +\set tde_am tde_heap_basic +\i sql/change_access_method.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE country_table ( + country_id serial primary key, + country_name text unique not null, + continent text not null +) using :tde_am; + +INSERT INTO country_table (country_name, continent) + VALUES ('Japan', 'Asia'), + ('UK', 'Europe'), + ('USA', 'North America'); +SELECT * FROM country_table; + country_id | country_name | continent +------------+--------------+--------------- + 1 | Japan | Asia + 2 | UK | Europe + 3 | USA | North America +(3 rows) + +SELECT pg_tde_is_encrypted('country_table'); + pg_tde_is_encrypted +--------------------- + t +(1 row) + +-- Try changing the encrypted table to an unencrypted table +ALTER TABLE country_table SET access method heap; +-- Insert some more data +INSERT INTO country_table (country_name, continent) + VALUES ('France', 'Europe'), + ('Germany', 'Europe'), + ('Canada', 'North America'); +SELECT * FROM country_table; + country_id | country_name | continent +------------+--------------+--------------- + 1 | Japan | Asia + 2 | UK | Europe + 3 | USA | North America + 4 | France | Europe + 5 | Germany | Europe + 6 | 
Canada | North America +(6 rows) + +SELECT pg_tde_is_encrypted('country_table'); + pg_tde_is_encrypted +--------------------- + f +(1 row) + +-- Change it back to encrypted +ALTER TABLE country_table SET access method :tde_am; +INSERT INTO country_table (country_name, continent) + VALUES ('China', 'Asia'), + ('Brazil', 'South America'), + ('Australia', 'Oceania'); +SELECT * FROM country_table; + country_id | country_name | continent +------------+--------------+--------------- + 1 | Japan | Asia + 2 | UK | Europe + 3 | USA | North America + 4 | France | Europe + 5 | Germany | Europe + 6 | Canada | North America + 7 | China | Asia + 8 | Brazil | South America + 9 | Australia | Oceania +(9 rows) + +SELECT pg_tde_is_encrypted('country_table'); + pg_tde_is_encrypted +--------------------- + t +(1 row) + +DROP TABLE country_table; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/insert_update_delete.out b/contrib/pg_tde/expected/insert_update_delete.out new file mode 100644 index 00000000000..0a363b677c4 --- /dev/null +++ b/contrib/pg_tde/expected/insert_update_delete.out @@ -0,0 +1,96 @@ +\set tde_am tde_heap +\i sql/insert_update_delete.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE albums ( + id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, + artist VARCHAR(256), + title TEXT NOT NULL, + released DATE NOT NULL +) USING :tde_am; +INSERT INTO albums (artist, title, released) VALUES + ('Graindelavoix', 'Jisquin The Undead', '2021-06-12'), + ('Graindelavoix', 'Tenebrae Responsoria - Carlo Gesualdo', '2019-08-06'), + ('Graindelavoix', 'Cypriot Vespers', '2015-12-20'), + ('John Coltrane', 'Blue Train', '1957-09-15'), + ('V/A Analog Africa', 'Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed', '2016-05-27'), + ('Incapacitants', 'As Loud As Possible', '2022-09-15'), + ('Chris Corsano & Bill Orcutt', 'Made Out Of Sound', '2021-03-26'), + ('Jürg Frey (Quatuor Bozzini / Konus Quartett)', 'Continuit​é​, fragilit​é​, r​é​sonance', '2023-04-01'), + ('clipping.', 'Visions of Bodies Being Burned', '2020-10-23'), + ('clipping.', 'There Existed an Addiction to Blood', '2019-10-19'), + ('Autechre', 'elseq 1–5', '2016-05-19'), + ('Decapitated', 'Winds of Creation', '2000-04-17'), + ('Ulthar', 'Anthronomicon', '2023-02-17'), + ('Τζίμης Πανούσης', 'Κάγκελα Παντού', '1986-01-01'), + ('Воплі Відоплясова', 'Музіка', '1997-01-01'); +SELECT * FROM albums; + id | artist | title | released +----+----------------------------------------------+---------------------------------------------------------------------------------+------------ + 1 | Graindelavoix | Jisquin The Undead | 06-12-2021 + 2 | Graindelavoix | Tenebrae Responsoria - Carlo Gesualdo | 08-06-2019 + 3 | Graindelavoix | Cypriot Vespers | 12-20-2015 + 4 | John Coltrane | Blue Train | 09-15-1957 + 5 | V/A Analog Africa | Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed | 05-27-2016 + 6 | Incapacitants | As Loud As Possible | 09-15-2022 + 7 | Chris Corsano & Bill Orcutt | Made Out Of Sound | 03-26-2021 + 8 | Jürg Frey (Quatuor Bozzini / Konus Quartett) | Continuit​é​, fragilit​é​, r​é​sonance | 04-01-2023 + 9 | clipping. | Visions of Bodies Being Burned | 10-23-2020 + 10 | clipping. 
| There Existed an Addiction to Blood | 10-19-2019 + 11 | Autechre | elseq 1–5 | 05-19-2016 + 12 | Decapitated | Winds of Creation | 04-17-2000 + 13 | Ulthar | Anthronomicon | 02-17-2023 + 14 | Τζίμης Πανούσης | Κάγκελα Παντού | 01-01-1986 + 15 | Воплі Відоплясова | Музіка | 01-01-1997 +(15 rows) + +DELETE FROM albums WHERE id % 4 = 0; +SELECT * FROM albums; + id | artist | title | released +----+-----------------------------+---------------------------------------------------------------------------------+------------ + 1 | Graindelavoix | Jisquin The Undead | 06-12-2021 + 2 | Graindelavoix | Tenebrae Responsoria - Carlo Gesualdo | 08-06-2019 + 3 | Graindelavoix | Cypriot Vespers | 12-20-2015 + 5 | V/A Analog Africa | Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed | 05-27-2016 + 6 | Incapacitants | As Loud As Possible | 09-15-2022 + 7 | Chris Corsano & Bill Orcutt | Made Out Of Sound | 03-26-2021 + 9 | clipping. | Visions of Bodies Being Burned | 10-23-2020 + 10 | clipping. | There Existed an Addiction to Blood | 10-19-2019 + 11 | Autechre | elseq 1–5 | 05-19-2016 + 13 | Ulthar | Anthronomicon | 02-17-2023 + 14 | Τζίμης Πανούσης | Κάγκελα Παντού | 01-01-1986 + 15 | Воплі Відоплясова | Музіка | 01-01-1997 +(12 rows) + +UPDATE albums SET title='Jisquin The Undead: Laments, Deplorations and Dances of Death', released='2021-10-01' WHERE id=1; +UPDATE albums SET released='2020-04-01' WHERE id=2; +SELECT * FROM albums; + id | artist | title | released +----+-----------------------------+---------------------------------------------------------------------------------+------------ + 3 | Graindelavoix | Cypriot Vespers | 12-20-2015 + 5 | V/A Analog Africa | Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed | 05-27-2016 + 6 | Incapacitants | As Loud As Possible | 09-15-2022 + 7 | Chris Corsano & Bill Orcutt | Made Out Of Sound | 03-26-2021 + 9 | clipping. | Visions of Bodies Being Burned | 10-23-2020 + 10 | clipping. 
| There Existed an Addiction to Blood | 10-19-2019 + 11 | Autechre | elseq 1–5 | 05-19-2016 + 13 | Ulthar | Anthronomicon | 02-17-2023 + 14 | Τζίμης Πανούσης | Κάγκελα Παντού | 01-01-1986 + 15 | Воплі Відоплясова | Музіка | 01-01-1997 + 1 | Graindelavoix | Jisquin The Undead: Laments, Deplorations and Dances of Death | 10-01-2021 + 2 | Graindelavoix | Tenebrae Responsoria - Carlo Gesualdo | 04-01-2020 +(12 rows) + +DROP TABLE albums; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/insert_update_delete_basic.out b/contrib/pg_tde/expected/insert_update_delete_basic.out new file mode 100644 index 00000000000..6ca59cae9f1 --- /dev/null +++ b/contrib/pg_tde/expected/insert_update_delete_basic.out @@ -0,0 +1,96 @@ +\set tde_am tde_heap_basic +\i sql/insert_update_delete.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE albums ( + id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, + artist VARCHAR(256), + title TEXT NOT NULL, + released DATE NOT NULL +) USING :tde_am; +INSERT INTO albums (artist, title, released) VALUES + ('Graindelavoix', 'Jisquin The Undead', '2021-06-12'), + ('Graindelavoix', 'Tenebrae Responsoria - Carlo Gesualdo', '2019-08-06'), + ('Graindelavoix', 'Cypriot Vespers', '2015-12-20'), + ('John Coltrane', 'Blue Train', '1957-09-15'), + ('V/A Analog Africa', 'Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed', '2016-05-27'), + ('Incapacitants', 'As Loud As Possible', '2022-09-15'), + ('Chris Corsano & Bill Orcutt', 'Made Out Of Sound', '2021-03-26'), + ('Jürg Frey (Quatuor Bozzini / Konus Quartett)', 'Continuit​é​, fragilit​é​, r​é​sonance', '2023-04-01'), + ('clipping.', 'Visions of Bodies Being Burned', '2020-10-23'), + ('clipping.', 'There Existed an Addiction to Blood', '2019-10-19'), + ('Autechre', 'elseq 1–5', '2016-05-19'), + ('Decapitated', 'Winds of Creation', '2000-04-17'), + ('Ulthar', 'Anthronomicon', '2023-02-17'), + ('Τζίμης Πανούσης', 'Κάγκελα Παντού', '1986-01-01'), + ('Воплі Відоплясова', 'Музіка', '1997-01-01'); +SELECT * FROM albums; + id | artist | title | released +----+----------------------------------------------+---------------------------------------------------------------------------------+------------ + 1 | Graindelavoix | Jisquin The Undead | 06-12-2021 + 2 | Graindelavoix | Tenebrae Responsoria - Carlo Gesualdo | 08-06-2019 + 3 | Graindelavoix | Cypriot Vespers | 12-20-2015 + 4 | John Coltrane | Blue Train | 09-15-1957 + 5 | V/A Analog Africa | Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed | 05-27-2016 + 6 | Incapacitants | As Loud As Possible | 09-15-2022 + 7 | Chris Corsano & Bill Orcutt | Made Out Of Sound | 03-26-2021 + 8 | Jürg Frey (Quatuor Bozzini / Konus Quartett) | Continuit​é​, fragilit​é​, r​é​sonance | 04-01-2023 + 9 | clipping. | Visions of Bodies Being Burned | 10-23-2020 + 10 | clipping. 
| There Existed an Addiction to Blood | 10-19-2019 + 11 | Autechre | elseq 1–5 | 05-19-2016 + 12 | Decapitated | Winds of Creation | 04-17-2000 + 13 | Ulthar | Anthronomicon | 02-17-2023 + 14 | Τζίμης Πανούσης | Κάγκελα Παντού | 01-01-1986 + 15 | Воплі Відоплясова | Музіка | 01-01-1997 +(15 rows) + +DELETE FROM albums WHERE id % 4 = 0; +SELECT * FROM albums; + id | artist | title | released +----+-----------------------------+---------------------------------------------------------------------------------+------------ + 1 | Graindelavoix | Jisquin The Undead | 06-12-2021 + 2 | Graindelavoix | Tenebrae Responsoria - Carlo Gesualdo | 08-06-2019 + 3 | Graindelavoix | Cypriot Vespers | 12-20-2015 + 5 | V/A Analog Africa | Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed | 05-27-2016 + 6 | Incapacitants | As Loud As Possible | 09-15-2022 + 7 | Chris Corsano & Bill Orcutt | Made Out Of Sound | 03-26-2021 + 9 | clipping. | Visions of Bodies Being Burned | 10-23-2020 + 10 | clipping. | There Existed an Addiction to Blood | 10-19-2019 + 11 | Autechre | elseq 1–5 | 05-19-2016 + 13 | Ulthar | Anthronomicon | 02-17-2023 + 14 | Τζίμης Πανούσης | Κάγκελα Παντού | 01-01-1986 + 15 | Воплі Відоплясова | Музіка | 01-01-1997 +(12 rows) + +UPDATE albums SET title='Jisquin The Undead: Laments, Deplorations and Dances of Death', released='2021-10-01' WHERE id=1; +UPDATE albums SET released='2020-04-01' WHERE id=2; +SELECT * FROM albums; + id | artist | title | released +----+-----------------------------+---------------------------------------------------------------------------------+------------ + 3 | Graindelavoix | Cypriot Vespers | 12-20-2015 + 5 | V/A Analog Africa | Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed | 05-27-2016 + 6 | Incapacitants | As Loud As Possible | 09-15-2022 + 7 | Chris Corsano & Bill Orcutt | Made Out Of Sound | 03-26-2021 + 9 | clipping. | Visions of Bodies Being Burned | 10-23-2020 + 10 | clipping. 
| There Existed an Addiction to Blood | 10-19-2019 + 11 | Autechre | elseq 1–5 | 05-19-2016 + 13 | Ulthar | Anthronomicon | 02-17-2023 + 14 | Τζίμης Πανούσης | Κάγκελα Παντού | 01-01-1986 + 15 | Воплі Відоплясова | Музіка | 01-01-1997 + 1 | Graindelavoix | Jisquin The Undead: Laments, Deplorations and Dances of Death | 10-01-2021 + 2 | Graindelavoix | Tenebrae Responsoria - Carlo Gesualdo | 04-01-2020 +(12 rows) + +DROP TABLE albums; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/keyprovider_dependency.out b/contrib/pg_tde/expected/keyprovider_dependency.out new file mode 100644 index 00000000000..7254b08d8fe --- /dev/null +++ b/contrib/pg_tde/expected/keyprovider_dependency.out @@ -0,0 +1,36 @@ +\set tde_am tde_heap +\i sql/keyprovider_dependency.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('mk-file','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_add_key_provider_file('free-file','/tmp/pg_tde_test_keyring_2.per'); + pg_tde_add_key_provider_file +------------------------------ + 2 +(1 row) + +SELECT pg_tde_add_key_provider_vault_v2('V2-vault','vault-token','percona.com/vault-v2/percona','/mount/dev','ca-cert-auth'); + pg_tde_add_key_provider_vault_v2 +---------------------------------- + 3 +(1 row) + +SELECT * FROM pg_tde_list_all_key_providers(); + id | provider_name | provider_type | options +----+---------------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------- + 1 | mk-file | file | {"type" : "file", "path" : "/tmp/pg_tde_test_keyring.per"} + 2 | free-file | file | {"type" : "file", "path" : "/tmp/pg_tde_test_keyring_2.per"} + 3 | V2-vault | vault-v2 | {"type" : "vault-v2", "url" : "percona.com/vault-v2/percona", "token" : "vault-token", "mountPath" : "/mount/dev", "caPath" : "ca-cert-auth"} +(3 rows) + +SELECT pg_tde_set_principal_key('test-db-principal-key','mk-file'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/keyprovider_dependency_basic.out b/contrib/pg_tde/expected/keyprovider_dependency_basic.out new file mode 100644 index 00000000000..f0613a83448 --- /dev/null +++ b/contrib/pg_tde/expected/keyprovider_dependency_basic.out @@ -0,0 +1,36 @@ +\set tde_am tde_heap_basic +\i sql/keyprovider_dependency.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('mk-file','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_add_key_provider_file('free-file','/tmp/pg_tde_test_keyring_2.per'); + pg_tde_add_key_provider_file +------------------------------ + 2 +(1 row) + +SELECT pg_tde_add_key_provider_vault_v2('V2-vault','vault-token','percona.com/vault-v2/percona','/mount/dev','ca-cert-auth'); + pg_tde_add_key_provider_vault_v2 +---------------------------------- + 3 +(1 row) + +SELECT * FROM pg_tde_list_all_key_providers(); + id | provider_name | provider_type | options +----+---------------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------- + 1 | mk-file | file | {"type" : "file", "path" : "/tmp/pg_tde_test_keyring.per"} + 2 | free-file | file | {"type" : "file", "path" : "/tmp/pg_tde_test_keyring_2.per"} + 3 | V2-vault | vault-v2 | {"type" : "vault-v2", "url" : 
"percona.com/vault-v2/percona", "token" : "vault-token", "mountPath" : "/mount/dev", "caPath" : "ca-cert-auth"} +(3 rows) + +SELECT pg_tde_set_principal_key('test-db-principal-key','mk-file'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/kmip_test.out b/contrib/pg_tde/expected/kmip_test.out new file mode 100644 index 00000000000..bf9d0789cdf --- /dev/null +++ b/contrib/pg_tde/expected/kmip_test.out @@ -0,0 +1,33 @@ +\set tde_am tde_heap +\i sql/kmip_test.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_kmip('kmip-prov','127.0.0.1', 5696, '/tmp/server_certificate.pem', '/tmp/client_key_jane_doe.pem'); + pg_tde_add_key_provider_kmip +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('kmip-principal-key','kmip-prov'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +INSERT INTO test_enc (k) VALUES (1); +INSERT INTO test_enc (k) VALUES (2); +INSERT INTO test_enc (k) VALUES (3); +SELECT * from test_enc; + id | k +----+--- + 1 | 1 + 2 | 2 + 3 | 3 +(3 rows) + +DROP TABLE test_enc; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/kmip_test_basic.out b/contrib/pg_tde/expected/kmip_test_basic.out new file mode 100644 index 00000000000..c1074a26f9b --- /dev/null +++ b/contrib/pg_tde/expected/kmip_test_basic.out @@ -0,0 +1,33 @@ +\set tde_am tde_heap_basic +\i sql/kmip_test.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_kmip('kmip-prov','127.0.0.1', 5696, '/tmp/server_certificate.pem', '/tmp/client_key_jane_doe.pem'); + pg_tde_add_key_provider_kmip +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('kmip-principal-key','kmip-prov'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +INSERT INTO test_enc (k) VALUES (1); +INSERT INTO test_enc (k) VALUES (2); +INSERT INTO test_enc (k) VALUES (3); +SELECT * from test_enc; + id | k +----+--- + 1 | 1 + 2 | 2 + 3 | 3 +(3 rows) + +DROP TABLE test_enc; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/merge_join.out b/contrib/pg_tde/expected/merge_join.out new file mode 100644 index 00000000000..2d28d3ff4ac --- /dev/null +++ b/contrib/pg_tde/expected/merge_join.out @@ -0,0 +1,97 @@ +\set tde_am tde_heap +\i sql/merge_join.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +\getenv abs_srcdir PG_ABS_SRCDIR +CREATE TABLE tenk1 ( + unique1 int4, + unique2 int4, + two int4, + four int4, + ten int4, + twenty int4, + hundred int4, + thousand int4, + twothousand int4, + fivethous int4, + tenthous int4, + odd int4, + even int4, + stringu1 name, + stringu2 name, + string4 name +) using :tde_am; +\set filename :abs_srcdir '/data/tenk.data' +COPY tenk1 FROM :'filename'; +VACUUM ANALYZE tenk1; +CREATE INDEX tenk1_unique1 ON tenk1 USING btree(unique1 int4_ops); +CREATE INDEX tenk1_unique2 ON tenk1 USING btree(unique2 int4_ops); +CREATE INDEX tenk1_hundred ON tenk1 USING btree(hundred int4_ops); +CREATE INDEX tenk1_thous_tenthous ON tenk1 (thousand, 
tenthous); +-- +-- regression test: check a case where join_clause_is_movable_into() +-- used to give an imprecise result, causing an assertion failure +-- +SELECT count(*) +FROM + (SELECT t3.tenthous as x1, coalesce(t1.stringu1, t2.stringu1) as x2 + FROM tenk1 t1 + LEFT JOIN tenk1 t2 on t1.unique1 = t2.unique1 + JOIN tenk1 t3 on t1.unique2 = t3.unique2) ss, + tenk1 t4, + tenk1 t5 +WHERE t4.thousand = t5.unique1 and ss.x1 = t4.tenthous and ss.x2 = t5.stringu1; + count +------- + 1000 +(1 row) + +-- +-- check that we haven't screwed the data +-- +SELECT * +FROM + (SELECT t3.tenthous as x1, coalesce(t1.stringu1, t2.stringu1) as x2 + FROM tenk1 t1 + LEFT JOIN tenk1 t2 on t1.unique1 = t2.unique1 + JOIN tenk1 t3 on t1.unique2 = t3.unique2) ss, + tenk1 t4, + tenk1 t5 +WHERE t4.thousand = t5.unique1 and ss.x1 = t4.tenthous and ss.x2 = t5.stringu1 LIMIT 20 OFFSET 432; + x1 | x2 | unique1 | unique2 | two | four | ten | twenty | hundred | thousand | twothousand | fivethous | tenthous | odd | even | stringu1 | stringu2 | string4 | unique1 | unique2 | two | four | ten | twenty | hundred | thousand | twothousand | fivethous | tenthous | odd | even | stringu1 | stringu2 | string4 +-----+--------+---------+---------+-----+------+-----+--------+---------+----------+-------------+-----------+----------+-----+------+----------+----------+---------+---------+---------+-----+------+-----+--------+---------+----------+-------------+-----------+----------+-----+------+----------+----------+--------- + 31 | FBAAAA | 31 | 4200 | 1 | 3 | 1 | 11 | 31 | 31 | 31 | 31 | 31 | 62 | 63 | FBAAAA | OFGAAA | AAAAxx | 31 | 4200 | 1 | 3 | 1 | 11 | 31 | 31 | 31 | 31 | 31 | 62 | 63 | FBAAAA | OFGAAA | AAAAxx + 501 | HTAAAA | 501 | 4203 | 1 | 1 | 1 | 1 | 1 | 501 | 501 | 501 | 501 | 2 | 3 | HTAAAA | RFGAAA | VVVVxx | 501 | 4203 | 1 | 1 | 1 | 1 | 1 | 501 | 501 | 501 | 501 | 2 | 3 | HTAAAA | RFGAAA | VVVVxx + 111 | HEAAAA | 111 | 4217 | 1 | 3 | 1 | 11 | 11 | 111 | 111 | 111 | 111 | 22 | 23 | HEAAAA | FGGAAA | HHHHxx | 111 | 4217 | 1 | 3 | 1 | 11 | 11 | 111 | 111 | 111 | 111 | 22 | 23 | HEAAAA | FGGAAA | HHHHxx + 98 | UDAAAA | 98 | 4226 | 0 | 2 | 8 | 18 | 98 | 98 | 98 | 98 | 98 | 196 | 197 | UDAAAA | OGGAAA | OOOOxx | 98 | 4226 | 0 | 2 | 8 | 18 | 98 | 98 | 98 | 98 | 98 | 196 | 197 | UDAAAA | OGGAAA | OOOOxx + 689 | NAAAAA | 689 | 4228 | 1 | 1 | 9 | 9 | 89 | 689 | 689 | 689 | 689 | 178 | 179 | NAAAAA | QGGAAA | AAAAxx | 689 | 4228 | 1 | 1 | 9 | 9 | 89 | 689 | 689 | 689 | 689 | 178 | 179 | NAAAAA | QGGAAA | AAAAxx + 391 | BPAAAA | 391 | 4234 | 1 | 3 | 1 | 11 | 91 | 391 | 391 | 391 | 391 | 182 | 183 | BPAAAA | WGGAAA | OOOOxx | 391 | 4234 | 1 | 3 | 1 | 11 | 91 | 391 | 391 | 391 | 391 | 182 | 183 | BPAAAA | WGGAAA | OOOOxx + 93 | PDAAAA | 93 | 4238 | 1 | 1 | 3 | 13 | 93 | 93 | 93 | 93 | 93 | 186 | 187 | PDAAAA | AHGAAA | OOOOxx | 93 | 4238 | 1 | 1 | 3 | 13 | 93 | 93 | 93 | 93 | 93 | 186 | 187 | PDAAAA | AHGAAA | OOOOxx + 618 | UXAAAA | 618 | 4252 | 0 | 2 | 8 | 18 | 18 | 618 | 618 | 618 | 618 | 36 | 37 | UXAAAA | OHGAAA | AAAAxx | 618 | 4252 | 0 | 2 | 8 | 18 | 18 | 618 | 618 | 618 | 618 | 36 | 37 | UXAAAA | OHGAAA | AAAAxx + 328 | QMAAAA | 328 | 4255 | 0 | 0 | 8 | 8 | 28 | 328 | 328 | 328 | 328 | 56 | 57 | QMAAAA | RHGAAA | VVVVxx | 328 | 4255 | 0 | 0 | 8 | 8 | 28 | 328 | 328 | 328 | 328 | 56 | 57 | QMAAAA | RHGAAA | VVVVxx + 943 | HKAAAA | 943 | 4265 | 1 | 3 | 3 | 3 | 43 | 943 | 943 | 943 | 943 | 86 | 87 | HKAAAA | BIGAAA | HHHHxx | 943 | 4265 | 1 | 3 | 3 | 3 | 43 | 943 | 943 | 943 | 943 | 86 | 87 | HKAAAA | BIGAAA | HHHHxx + 775 | 
VDAAAA | 775 | 4266 | 1 | 3 | 5 | 15 | 75 | 775 | 775 | 775 | 775 | 150 | 151 | VDAAAA | CIGAAA | OOOOxx | 775 | 4266 | 1 | 3 | 5 | 15 | 75 | 775 | 775 | 775 | 775 | 150 | 151 | VDAAAA | CIGAAA | OOOOxx + 491 | XSAAAA | 491 | 4277 | 1 | 3 | 1 | 11 | 91 | 491 | 491 | 491 | 491 | 182 | 183 | XSAAAA | NIGAAA | HHHHxx | 491 | 4277 | 1 | 3 | 1 | 11 | 91 | 491 | 491 | 491 | 491 | 182 | 183 | XSAAAA | NIGAAA | HHHHxx + 212 | EIAAAA | 212 | 4280 | 0 | 0 | 2 | 12 | 12 | 212 | 212 | 212 | 212 | 24 | 25 | EIAAAA | QIGAAA | AAAAxx | 212 | 4280 | 0 | 0 | 2 | 12 | 12 | 212 | 212 | 212 | 212 | 24 | 25 | EIAAAA | QIGAAA | AAAAxx + 340 | CNAAAA | 340 | 4293 | 0 | 0 | 0 | 0 | 40 | 340 | 340 | 340 | 340 | 80 | 81 | CNAAAA | DJGAAA | HHHHxx | 340 | 4293 | 0 | 0 | 0 | 0 | 40 | 340 | 340 | 340 | 340 | 80 | 81 | CNAAAA | DJGAAA | HHHHxx + 445 | DRAAAA | 445 | 4316 | 1 | 1 | 5 | 5 | 45 | 445 | 445 | 445 | 445 | 90 | 91 | DRAAAA | AKGAAA | AAAAxx | 445 | 4316 | 1 | 1 | 5 | 5 | 45 | 445 | 445 | 445 | 445 | 90 | 91 | DRAAAA | AKGAAA | AAAAxx + 472 | ESAAAA | 472 | 4321 | 0 | 0 | 2 | 12 | 72 | 472 | 472 | 472 | 472 | 144 | 145 | ESAAAA | FKGAAA | HHHHxx | 472 | 4321 | 0 | 0 | 2 | 12 | 72 | 472 | 472 | 472 | 472 | 144 | 145 | ESAAAA | FKGAAA | HHHHxx + 760 | GDAAAA | 760 | 4329 | 0 | 0 | 0 | 0 | 60 | 760 | 760 | 760 | 760 | 120 | 121 | GDAAAA | NKGAAA | HHHHxx | 760 | 4329 | 0 | 0 | 0 | 0 | 60 | 760 | 760 | 760 | 760 | 120 | 121 | GDAAAA | NKGAAA | HHHHxx + 14 | OAAAAA | 14 | 4341 | 0 | 2 | 4 | 14 | 14 | 14 | 14 | 14 | 14 | 28 | 29 | OAAAAA | ZKGAAA | HHHHxx | 14 | 4341 | 0 | 2 | 4 | 14 | 14 | 14 | 14 | 14 | 14 | 28 | 29 | OAAAAA | ZKGAAA | HHHHxx + 65 | NCAAAA | 65 | 4348 | 1 | 1 | 5 | 5 | 65 | 65 | 65 | 65 | 65 | 130 | 131 | NCAAAA | GLGAAA | AAAAxx | 65 | 4348 | 1 | 1 | 5 | 5 | 65 | 65 | 65 | 65 | 65 | 130 | 131 | NCAAAA | GLGAAA | AAAAxx + 459 | RRAAAA | 459 | 4350 | 1 | 3 | 9 | 19 | 59 | 459 | 459 | 459 | 459 | 118 | 119 | RRAAAA | ILGAAA | OOOOxx | 459 | 4350 | 1 | 3 | 9 | 19 | 59 | 459 | 459 | 459 | 459 | 118 | 119 | RRAAAA | ILGAAA | OOOOxx +(20 rows) + +DROP TABLE tenk1; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/merge_join_basic.out b/contrib/pg_tde/expected/merge_join_basic.out new file mode 100644 index 00000000000..648bbc12663 --- /dev/null +++ b/contrib/pg_tde/expected/merge_join_basic.out @@ -0,0 +1,97 @@ +\set tde_am tde_heap_basic +\i sql/merge_join.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +\getenv abs_srcdir PG_ABS_SRCDIR +CREATE TABLE tenk1 ( + unique1 int4, + unique2 int4, + two int4, + four int4, + ten int4, + twenty int4, + hundred int4, + thousand int4, + twothousand int4, + fivethous int4, + tenthous int4, + odd int4, + even int4, + stringu1 name, + stringu2 name, + string4 name +) using :tde_am; +\set filename :abs_srcdir '/data/tenk.data' +COPY tenk1 FROM :'filename'; +VACUUM ANALYZE tenk1; +CREATE INDEX tenk1_unique1 ON tenk1 USING btree(unique1 int4_ops); +CREATE INDEX tenk1_unique2 ON tenk1 USING btree(unique2 int4_ops); +CREATE INDEX tenk1_hundred ON tenk1 USING btree(hundred int4_ops); +CREATE INDEX tenk1_thous_tenthous ON tenk1 (thousand, tenthous); +-- +-- regression test: check a case where join_clause_is_movable_into() +-- used to give an imprecise result, causing an assertion 
failure +-- +SELECT count(*) +FROM + (SELECT t3.tenthous as x1, coalesce(t1.stringu1, t2.stringu1) as x2 + FROM tenk1 t1 + LEFT JOIN tenk1 t2 on t1.unique1 = t2.unique1 + JOIN tenk1 t3 on t1.unique2 = t3.unique2) ss, + tenk1 t4, + tenk1 t5 +WHERE t4.thousand = t5.unique1 and ss.x1 = t4.tenthous and ss.x2 = t5.stringu1; + count +------- + 1000 +(1 row) + +-- +-- check that we haven't screwed the data +-- +SELECT * +FROM + (SELECT t3.tenthous as x1, coalesce(t1.stringu1, t2.stringu1) as x2 + FROM tenk1 t1 + LEFT JOIN tenk1 t2 on t1.unique1 = t2.unique1 + JOIN tenk1 t3 on t1.unique2 = t3.unique2) ss, + tenk1 t4, + tenk1 t5 +WHERE t4.thousand = t5.unique1 and ss.x1 = t4.tenthous and ss.x2 = t5.stringu1 LIMIT 20 OFFSET 432; + x1 | x2 | unique1 | unique2 | two | four | ten | twenty | hundred | thousand | twothousand | fivethous | tenthous | odd | even | stringu1 | stringu2 | string4 | unique1 | unique2 | two | four | ten | twenty | hundred | thousand | twothousand | fivethous | tenthous | odd | even | stringu1 | stringu2 | string4 +-----+--------+---------+---------+-----+------+-----+--------+---------+----------+-------------+-----------+----------+-----+------+----------+----------+---------+---------+---------+-----+------+-----+--------+---------+----------+-------------+-----------+----------+-----+------+----------+----------+--------- + 31 | FBAAAA | 31 | 4200 | 1 | 3 | 1 | 11 | 31 | 31 | 31 | 31 | 31 | 62 | 63 | FBAAAA | OFGAAA | AAAAxx | 31 | 4200 | 1 | 3 | 1 | 11 | 31 | 31 | 31 | 31 | 31 | 62 | 63 | FBAAAA | OFGAAA | AAAAxx + 501 | HTAAAA | 501 | 4203 | 1 | 1 | 1 | 1 | 1 | 501 | 501 | 501 | 501 | 2 | 3 | HTAAAA | RFGAAA | VVVVxx | 501 | 4203 | 1 | 1 | 1 | 1 | 1 | 501 | 501 | 501 | 501 | 2 | 3 | HTAAAA | RFGAAA | VVVVxx + 111 | HEAAAA | 111 | 4217 | 1 | 3 | 1 | 11 | 11 | 111 | 111 | 111 | 111 | 22 | 23 | HEAAAA | FGGAAA | HHHHxx | 111 | 4217 | 1 | 3 | 1 | 11 | 11 | 111 | 111 | 111 | 111 | 22 | 23 | HEAAAA | FGGAAA | HHHHxx + 98 | UDAAAA | 98 | 4226 | 0 | 2 | 8 | 18 | 98 | 98 | 98 | 98 | 98 | 196 | 197 | UDAAAA | OGGAAA | OOOOxx | 98 | 4226 | 0 | 2 | 8 | 18 | 98 | 98 | 98 | 98 | 98 | 196 | 197 | UDAAAA | OGGAAA | OOOOxx + 689 | NAAAAA | 689 | 4228 | 1 | 1 | 9 | 9 | 89 | 689 | 689 | 689 | 689 | 178 | 179 | NAAAAA | QGGAAA | AAAAxx | 689 | 4228 | 1 | 1 | 9 | 9 | 89 | 689 | 689 | 689 | 689 | 178 | 179 | NAAAAA | QGGAAA | AAAAxx + 391 | BPAAAA | 391 | 4234 | 1 | 3 | 1 | 11 | 91 | 391 | 391 | 391 | 391 | 182 | 183 | BPAAAA | WGGAAA | OOOOxx | 391 | 4234 | 1 | 3 | 1 | 11 | 91 | 391 | 391 | 391 | 391 | 182 | 183 | BPAAAA | WGGAAA | OOOOxx + 93 | PDAAAA | 93 | 4238 | 1 | 1 | 3 | 13 | 93 | 93 | 93 | 93 | 93 | 186 | 187 | PDAAAA | AHGAAA | OOOOxx | 93 | 4238 | 1 | 1 | 3 | 13 | 93 | 93 | 93 | 93 | 93 | 186 | 187 | PDAAAA | AHGAAA | OOOOxx + 618 | UXAAAA | 618 | 4252 | 0 | 2 | 8 | 18 | 18 | 618 | 618 | 618 | 618 | 36 | 37 | UXAAAA | OHGAAA | AAAAxx | 618 | 4252 | 0 | 2 | 8 | 18 | 18 | 618 | 618 | 618 | 618 | 36 | 37 | UXAAAA | OHGAAA | AAAAxx + 328 | QMAAAA | 328 | 4255 | 0 | 0 | 8 | 8 | 28 | 328 | 328 | 328 | 328 | 56 | 57 | QMAAAA | RHGAAA | VVVVxx | 328 | 4255 | 0 | 0 | 8 | 8 | 28 | 328 | 328 | 328 | 328 | 56 | 57 | QMAAAA | RHGAAA | VVVVxx + 943 | HKAAAA | 943 | 4265 | 1 | 3 | 3 | 3 | 43 | 943 | 943 | 943 | 943 | 86 | 87 | HKAAAA | BIGAAA | HHHHxx | 943 | 4265 | 1 | 3 | 3 | 3 | 43 | 943 | 943 | 943 | 943 | 86 | 87 | HKAAAA | BIGAAA | HHHHxx + 775 | VDAAAA | 775 | 4266 | 1 | 3 | 5 | 15 | 75 | 775 | 775 | 775 | 775 | 150 | 151 | VDAAAA | CIGAAA | OOOOxx | 775 | 4266 | 1 | 3 | 5 | 15 | 75 | 775 | 
775 | 775 | 775 | 150 | 151 | VDAAAA | CIGAAA | OOOOxx + 491 | XSAAAA | 491 | 4277 | 1 | 3 | 1 | 11 | 91 | 491 | 491 | 491 | 491 | 182 | 183 | XSAAAA | NIGAAA | HHHHxx | 491 | 4277 | 1 | 3 | 1 | 11 | 91 | 491 | 491 | 491 | 491 | 182 | 183 | XSAAAA | NIGAAA | HHHHxx + 212 | EIAAAA | 212 | 4280 | 0 | 0 | 2 | 12 | 12 | 212 | 212 | 212 | 212 | 24 | 25 | EIAAAA | QIGAAA | AAAAxx | 212 | 4280 | 0 | 0 | 2 | 12 | 12 | 212 | 212 | 212 | 212 | 24 | 25 | EIAAAA | QIGAAA | AAAAxx + 340 | CNAAAA | 340 | 4293 | 0 | 0 | 0 | 0 | 40 | 340 | 340 | 340 | 340 | 80 | 81 | CNAAAA | DJGAAA | HHHHxx | 340 | 4293 | 0 | 0 | 0 | 0 | 40 | 340 | 340 | 340 | 340 | 80 | 81 | CNAAAA | DJGAAA | HHHHxx + 445 | DRAAAA | 445 | 4316 | 1 | 1 | 5 | 5 | 45 | 445 | 445 | 445 | 445 | 90 | 91 | DRAAAA | AKGAAA | AAAAxx | 445 | 4316 | 1 | 1 | 5 | 5 | 45 | 445 | 445 | 445 | 445 | 90 | 91 | DRAAAA | AKGAAA | AAAAxx + 472 | ESAAAA | 472 | 4321 | 0 | 0 | 2 | 12 | 72 | 472 | 472 | 472 | 472 | 144 | 145 | ESAAAA | FKGAAA | HHHHxx | 472 | 4321 | 0 | 0 | 2 | 12 | 72 | 472 | 472 | 472 | 472 | 144 | 145 | ESAAAA | FKGAAA | HHHHxx + 760 | GDAAAA | 760 | 4329 | 0 | 0 | 0 | 0 | 60 | 760 | 760 | 760 | 760 | 120 | 121 | GDAAAA | NKGAAA | HHHHxx | 760 | 4329 | 0 | 0 | 0 | 0 | 60 | 760 | 760 | 760 | 760 | 120 | 121 | GDAAAA | NKGAAA | HHHHxx + 14 | OAAAAA | 14 | 4341 | 0 | 2 | 4 | 14 | 14 | 14 | 14 | 14 | 14 | 28 | 29 | OAAAAA | ZKGAAA | HHHHxx | 14 | 4341 | 0 | 2 | 4 | 14 | 14 | 14 | 14 | 14 | 14 | 28 | 29 | OAAAAA | ZKGAAA | HHHHxx + 65 | NCAAAA | 65 | 4348 | 1 | 1 | 5 | 5 | 65 | 65 | 65 | 65 | 65 | 130 | 131 | NCAAAA | GLGAAA | AAAAxx | 65 | 4348 | 1 | 1 | 5 | 5 | 65 | 65 | 65 | 65 | 65 | 130 | 131 | NCAAAA | GLGAAA | AAAAxx + 459 | RRAAAA | 459 | 4350 | 1 | 3 | 9 | 19 | 59 | 459 | 459 | 459 | 459 | 118 | 119 | RRAAAA | ILGAAA | OOOOxx | 459 | 4350 | 1 | 3 | 9 | 19 | 59 | 459 | 459 | 459 | 459 | 118 | 119 | RRAAAA | ILGAAA | OOOOxx +(20 rows) + +DROP TABLE tenk1; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/move_large_tuples.out b/contrib/pg_tde/expected/move_large_tuples.out new file mode 100644 index 00000000000..9ffac997b76 --- /dev/null +++ b/contrib/pg_tde/expected/move_large_tuples.out @@ -0,0 +1,61 @@ +\set tde_am tde_heap +\i sql/move_large_tuples.inc +-- test pg_tde_move_encrypted_data() +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE sbtest2( + id SERIAL, + k TEXT STORAGE PLAIN, + PRIMARY KEY (id) + ) USING :tde_am; +INSERT INTO sbtest2(k) VALUES(repeat('a', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('b', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('c', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('d', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('e', 2500)); +DELETE FROM sbtest2 WHERE id IN (2,3,4); +VACUUM sbtest2; +SELECT * FROM sbtest2; + id | k 
+----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 | 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + 5 | 
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee +(2 rows) + +INSERT INTO sbtest2(k) VALUES(repeat('b', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('c', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('d', 2500)); +DELETE FROM sbtest2 WHERE id IN (7); +VACUUM sbtest2; +SELECT * FROM sbtest2; + id | k 
+----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 | 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + 6 | 
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb + 8 | 
dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd + 5 | 
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee +(4 rows) + +VACUUM FULL sbtest2; +SELECT * FROM sbtest2; + id | k 
+----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 | 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + 6 | 
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb + 8 | 
dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd + 5 | 
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee +(4 rows) + +DROP TABLE sbtest2; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/move_large_tuples_basic.out b/contrib/pg_tde/expected/move_large_tuples_basic.out new file mode 100644 index 00000000000..6d1e235ee4c --- /dev/null +++ b/contrib/pg_tde/expected/move_large_tuples_basic.out @@ -0,0 +1,61 @@ +\set tde_am tde_heap_basic +\i sql/move_large_tuples.inc +-- test pg_tde_move_encrypted_data() +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE sbtest2( + id SERIAL, + k TEXT STORAGE PLAIN, + PRIMARY KEY (id) + ) USING :tde_am; +INSERT INTO sbtest2(k) VALUES(repeat('a', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('b', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('c', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('d', 2500)); +INSERT 
INTO sbtest2(k) VALUES(repeat('e', 2500)); +DELETE FROM sbtest2 WHERE id IN (2,3,4); +VACUUM sbtest2; +SELECT * FROM sbtest2; + id | k +----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 | 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + 5 | 
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee +(2 rows) + +INSERT INTO sbtest2(k) VALUES(repeat('b', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('c', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('d', 2500)); +DELETE FROM sbtest2 WHERE id IN (7); +VACUUM sbtest2; +SELECT * FROM sbtest2; + id | k 
+----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 | 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + 6 | 
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb + 8 | 
dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd + 5 | 
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee +(4 rows) + +VACUUM FULL sbtest2; +SELECT * FROM sbtest2; + id | k 
+----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 | 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + 6 | 
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb + 8 | 
dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd + 5 | 
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee +(4 rows) + +DROP TABLE sbtest2; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/multi_insert.out b/contrib/pg_tde/expected/multi_insert.out new file mode 100644 index 00000000000..feb9ad35ad3 --- /dev/null +++ b/contrib/pg_tde/expected/multi_insert.out @@ -0,0 +1,105 @@ +\set tde_am tde_heap +\i sql/multi_insert.inc +-- trigger multi_insert path +-- +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE albums ( + album_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, + artist_id INTEGER, + title TEXT NOT NULL, + released DATE NOT NULL +) USING :tde_am; +COPY albums FROM stdin CSV HEADER; +SELECT * FROM albums; + album_id | artist_id | title | released +----------+-----------+--------------------+------------ + 1 | 1 | Mirror | 
06-24-2009 + 2 | 2 | Pretzel Logic | 02-20-1974 + 3 | 3 | Under Construction | 11-12-2002 + 4 | 4 | Return to Wherever | 07-11-2019 + 5 | 5 | The Nightfly | 10-01-1982 + 6 | 6 | It's Alive | 10-15-2013 + 7 | 7 | Pure Ella | 02-15-1994 +(7 rows) + +SELECT * FROM albums where album_id > 5; + album_id | artist_id | title | released +----------+-----------+------------+------------ + 6 | 6 | It's Alive | 10-15-2013 + 7 | 7 | Pure Ella | 02-15-1994 +(2 rows) + +-- On replica: +-- SELECT * FROM albums; +-- album_id | artist_id | title | released +-- ----------+-----------+--------------------+------------ +-- 1 | 1 | Mirror | 2009-06-24 +-- 2 | 2 | Pretzel Logic | 1974-02-20 +-- 3 | 3 | Under Construction | 2002-11-12 +-- 4 | 4 | Return to Wherever | 2019-07-11 +-- 5 | 5 | The Nightfly | 1982-10-01 +-- 6 | 6 | It's Alive | 2013-10-15 +-- 7 | 7 | Pure Ella | 1994-02-15 +-- (7 rows) +-- +-- SELECT * FROM albums where album_id > 5; +-- album_id | artist_id | title | released +-- ----------+-----------+------------+------------ +-- 6 | 6 | It's Alive | 2013-10-15 +-- 7 | 7 | Pure Ella | 1994-02-15 +-- (2 rows) +-- +DROP TABLE albums; +-- multi_insert2 +-- more data to take multiple pages +CREATE TABLE Towns ( + id SERIAL UNIQUE NOT NULL, + code VARCHAR(10) NOT NULL, + article TEXT, + name TEXT NOT NULL, + department VARCHAR(4) NOT NULL, + UNIQUE (code, department) +) USING :tde_am; +COPY towns (id, code, article, name, department) FROM stdin; +SELECT count(*) FROM towns; + count +------- + 1313 +(1 row) + +SELECT * FROM towns where id in (13, 666); + id | code | article | name | department +-----+------+-----------+----------------+------------ + 13 | 014 | some_text | Arbent | 01 + 666 | 252 | some_text | Cuissy-et-Geny | 02 +(2 rows) + +-- ON REPLICA +-- +-- select count(*) from towns; +-- count +-- ------- +-- 1313 +-- (1 row) +-- +-- select * from towns where id in (13, 666); +-- id | code | article | name | department +-- -----+------+-----------+----------------+------------ +-- 13 | 014 | some_text | Arbent | 01 +-- 666 | 252 | some_text | Cuissy-et-Geny | 02 +-- (2 rows) +-- +DROP TABLE towns; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/multi_insert_basic.out b/contrib/pg_tde/expected/multi_insert_basic.out new file mode 100644 index 00000000000..6662449e32f --- /dev/null +++ b/contrib/pg_tde/expected/multi_insert_basic.out @@ -0,0 +1,105 @@ +\set tde_am tde_heap_basic +\i sql/multi_insert.inc +-- trigger multi_insert path +-- +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE albums ( + album_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, + artist_id INTEGER, + title TEXT NOT NULL, + released DATE NOT NULL +) USING :tde_am; +COPY albums FROM stdin CSV HEADER; +SELECT * FROM albums; + album_id | artist_id | title | released +----------+-----------+--------------------+------------ + 1 | 1 | Mirror | 06-24-2009 + 2 | 2 | Pretzel Logic | 02-20-1974 + 3 | 3 | Under Construction | 11-12-2002 + 4 | 4 | Return to Wherever | 07-11-2019 + 5 | 5 | The Nightfly | 10-01-1982 + 6 | 6 | It's Alive | 10-15-2013 + 7 | 7 | Pure Ella | 02-15-1994 +(7 rows) + +SELECT * FROM albums where album_id > 5; + album_id | artist_id | title | released +----------+-----------+------------+------------ + 6 | 6 | 
It's Alive | 10-15-2013 + 7 | 7 | Pure Ella | 02-15-1994 +(2 rows) + +-- On replica: +-- SELECT * FROM albums; +-- album_id | artist_id | title | released +-- ----------+-----------+--------------------+------------ +-- 1 | 1 | Mirror | 2009-06-24 +-- 2 | 2 | Pretzel Logic | 1974-02-20 +-- 3 | 3 | Under Construction | 2002-11-12 +-- 4 | 4 | Return to Wherever | 2019-07-11 +-- 5 | 5 | The Nightfly | 1982-10-01 +-- 6 | 6 | It's Alive | 2013-10-15 +-- 7 | 7 | Pure Ella | 1994-02-15 +-- (7 rows) +-- +-- SELECT * FROM albums where album_id > 5; +-- album_id | artist_id | title | released +-- ----------+-----------+------------+------------ +-- 6 | 6 | It's Alive | 2013-10-15 +-- 7 | 7 | Pure Ella | 1994-02-15 +-- (2 rows) +-- +DROP TABLE albums; +-- multi_insert2 +-- more data to take multiple pages +CREATE TABLE Towns ( + id SERIAL UNIQUE NOT NULL, + code VARCHAR(10) NOT NULL, + article TEXT, + name TEXT NOT NULL, + department VARCHAR(4) NOT NULL, + UNIQUE (code, department) +) USING :tde_am; +COPY towns (id, code, article, name, department) FROM stdin; +SELECT count(*) FROM towns; + count +------- + 1313 +(1 row) + +SELECT * FROM towns where id in (13, 666); + id | code | article | name | department +-----+------+-----------+----------------+------------ + 13 | 014 | some_text | Arbent | 01 + 666 | 252 | some_text | Cuissy-et-Geny | 02 +(2 rows) + +-- ON REPLICA +-- +-- select count(*) from towns; +-- count +-- ------- +-- 1313 +-- (1 row) +-- +-- select * from towns where id in (13, 666); +-- id | code | article | name | department +-- -----+------+-----------+----------------+------------ +-- 13 | 014 | some_text | Arbent | 01 +-- 666 | 252 | some_text | Cuissy-et-Geny | 02 +-- (2 rows) +-- +DROP TABLE towns; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/non_sorted_off_compact.out b/contrib/pg_tde/expected/non_sorted_off_compact.out new file mode 100644 index 00000000000..bd1d9fc0115 --- /dev/null +++ b/contrib/pg_tde/expected/non_sorted_off_compact.out @@ -0,0 +1,61 @@ +\set tde_am tde_heap +\i sql/non_sorted_off_compact.inc +-- A test case for https://github.com/percona/pg_tde/pull/21 +-- +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +DROP TABLE IF EXISTS sbtest1; +psql:sql/non_sorted_off_compact.inc:8: NOTICE: table "sbtest1" does not exist, skipping +CREATE TABLE sbtest1( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +INSERT INTO sbtest1(k) VALUES +(1), +(2), +(3), +(4), +(5), +(6), +(7), +(8), +(9), +(10); +DELETE FROM sbtest1 WHERE id IN (4,5,6); +VACUUM sbtest1; +INSERT INTO sbtest1(k) VALUES +(11), +(12), +(13); +-- Line pointers (lp) point to non-sorted offsets (lp_off): +-- CREATE EXTENSION pageinspect; +-- SELECT lp, lp_off, t_ctid FROM heap_page_items(get_raw_page('sbtest1', 0)); +-- lp | lp_off | t_ctid +-- ----+--------+-------- +-- 1 | 8160 | (0,1) +-- 2 | 8128 | (0,2) +-- 3 | 8096 | (0,3) +-- 4 | 7936 | (0,4) +-- 5 | 7904 | (0,5) +-- 6 | 7872 | (0,6) +-- 7 | 8064 | (0,7) +-- 8 | 8032 | (0,8) +-- 9 | 8000 | (0,9) +-- 10 | 7968 | (0,10) +---- Trigger compaction +delete from sbtest1 where id in (2); +VACUUM sbtest1; +DROP TABLE sbtest1; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/non_sorted_off_compact_basic.out
b/contrib/pg_tde/expected/non_sorted_off_compact_basic.out new file mode 100644 index 00000000000..6801740c86d --- /dev/null +++ b/contrib/pg_tde/expected/non_sorted_off_compact_basic.out @@ -0,0 +1,61 @@ +\set tde_am tde_heap_basic +\i sql/non_sorted_off_compact.inc +-- A test case for https://github.com/percona/pg_tde/pull/21 +-- +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +DROP TABLE IF EXISTS sbtest1; +psql:sql/non_sorted_off_compact.inc:8: NOTICE: table "sbtest1" does not exist, skipping +CREATE TABLE sbtest1( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +INSERT INTO sbtest1(k) VALUES +(1), +(2), +(3), +(4), +(5), +(6), +(7), +(8), +(9), +(10); +DELETE FROM sbtest1 WHERE id IN (4,5,6); +VACUUM sbtest1; +INSERT INTO sbtest1(k) VALUES +(11), +(12), +(13); +-- Line pointers (lp) point to non-sorted offsets (lp_off): +-- CREATE EXTENSION pageinspect; +-- SELECT lp, lp_off, t_ctid FROM heap_page_items(get_raw_page('sbtest1', 0)); +-- lp | lp_off | t_ctid +-- ----+--------+-------- +-- 1 | 8160 | (0,1) +-- 2 | 8128 | (0,2) +-- 3 | 8096 | (0,3) +-- 4 | 7936 | (0,4) +-- 5 | 7904 | (0,5) +-- 6 | 7872 | (0,6) +-- 7 | 8064 | (0,7) +-- 8 | 8032 | (0,8) +-- 9 | 8000 | (0,9) +-- 10 | 7968 | (0,10) +---- Trigger compaction +delete from sbtest1 where id in (2); +VACUUM sbtest1; +DROP TABLE sbtest1; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/pg_tde_is_encrypted.out b/contrib/pg_tde/expected/pg_tde_is_encrypted.out new file mode 100644 index 00000000000..fd7f0b1a565 --- /dev/null +++ b/contrib/pg_tde/expected/pg_tde_is_encrypted.out @@ -0,0 +1,69 @@ +\set tde_am tde_heap +\i sql/pg_tde_is_encrypted.inc +CREATE EXTENSION pg_tde; +SELECT * FROM pg_tde_principal_key_info(); +psql:sql/pg_tde_is_encrypted.inc:3: ERROR: Principal key does not exists for the database +HINT: Use set_principal_key interface to set the principal key +CONTEXT: SQL function "pg_tde_principal_key_info" statement 1 +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +CREATE TABLE test_norm( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING heap; +SELECT amname FROM pg_class INNER JOIN pg_am ON pg_am.oid = pg_class.relam WHERE relname = 'test_enc'; + amname +---------- + tde_heap +(1 row) + +SELECT amname FROM pg_class INNER JOIN pg_am ON pg_am.oid = pg_class.relam WHERE relname = 'test_norm'; + amname +-------- + heap +(1 row) + +SELECT pg_tde_is_encrypted('test_enc'); + pg_tde_is_encrypted +--------------------- + t +(1 row) + +SELECT pg_tde_is_encrypted('test_norm'); + pg_tde_is_encrypted +--------------------- + f +(1 row) + +SELECT pg_tde_is_encrypted('public.test_enc'); + pg_tde_is_encrypted +--------------------- + t +(1 row) + +SELECT key_provider_id, key_provider_name, principal_key_name + FROM pg_tde_principal_key_info(); + key_provider_id | key_provider_name | principal_key_name
+-----------------+-------------------+----------------------- + 1 | file-vault | test-db-principal-key +(1 row) + +DROP TABLE test_enc; +DROP TABLE test_norm; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/pg_tde_is_encrypted_basic.out b/contrib/pg_tde/expected/pg_tde_is_encrypted_basic.out new file mode 100644 index 00000000000..1c573f90f73 --- /dev/null +++ b/contrib/pg_tde/expected/pg_tde_is_encrypted_basic.out @@ -0,0 +1,69 @@ +\set tde_am tde_heap_basic +\i sql/pg_tde_is_encrypted.inc +CREATE EXTENSION pg_tde; +SELECT * FROM pg_tde_principal_key_info(); +psql:sql/pg_tde_is_encrypted.inc:3: ERROR: Principal key does not exists for the database +HINT: Use set_principal_key interface to set the principal key +CONTEXT: SQL function "pg_tde_principal_key_info" statement 1 +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +CREATE TABLE test_norm( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING heap; +SELECT amname FROM pg_class INNER JOIN pg_am ON pg_am.oid = pg_class.relam WHERE relname = 'test_enc'; + amname +---------------- + tde_heap_basic +(1 row) + +SELECT amname FROM pg_class INNER JOIN pg_am ON pg_am.oid = pg_class.relam WHERE relname = 'test_norm'; + amname +-------- + heap +(1 row) + +SELECT pg_tde_is_encrypted('test_enc'); + pg_tde_is_encrypted +--------------------- + t +(1 row) + +SELECT pg_tde_is_encrypted('test_norm'); + pg_tde_is_encrypted +--------------------- + f +(1 row) + +SELECT pg_tde_is_encrypted('public.test_enc'); + pg_tde_is_encrypted +--------------------- + t +(1 row) + +SELECT key_provider_id, key_provider_name, principal_key_name + FROM pg_tde_principal_key_info(); + key_provider_id | key_provider_name | principal_key_name +-----------------+-------------------+----------------------- + 1 | file-vault | test-db-principal-key +(1 row) + +DROP TABLE test_enc; +DROP TABLE test_norm; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/subtransaction.out b/contrib/pg_tde/expected/subtransaction.out new file mode 100644 index 00000000000..731382a2513 --- /dev/null +++ b/contrib/pg_tde/expected/subtransaction.out @@ -0,0 +1,32 @@ +\set tde_am tde_heap +\i sql/subtransaction.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +BEGIN; -- Nesting level 1 +SAVEPOINT sp; +CREATE TABLE foo(s TEXT); -- Nesting level 2 +RELEASE SAVEPOINT sp; +SAVEPOINT sp; +CREATE TABLE bar(s TEXT); -- Nesting level 2 +ROLLBACK TO sp; -- Rollback should not affect first subtransaction +COMMIT; +BEGIN; -- Nesting level 1 +SAVEPOINT sp; +DROP TABLE foo; -- Nesting level 2 +RELEASE SAVEPOINT sp; +SAVEPOINT sp; +CREATE TABLE bar(s TEXT); -- Nesting level 2 +ROLLBACK TO sp; -- Rollback should not affect first subtransaction +COMMIT; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/subtransaction_basic.out b/contrib/pg_tde/expected/subtransaction_basic.out new file mode 100644 index 
00000000000..d84651efba9 --- /dev/null +++ b/contrib/pg_tde/expected/subtransaction_basic.out @@ -0,0 +1,32 @@ +\set tde_am tde_heap_basic +\i sql/subtransaction.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +BEGIN; -- Nesting level 1 +SAVEPOINT sp; +CREATE TABLE foo(s TEXT); -- Nesting level 2 +RELEASE SAVEPOINT sp; +SAVEPOINT sp; +CREATE TABLE bar(s TEXT); -- Nesting level 2 +ROLLBACK TO sp; -- Rollback should not affect first subtransaction +COMMIT; +BEGIN; -- Nesting level 1 +SAVEPOINT sp; +DROP TABLE foo; -- Nesting level 2 +RELEASE SAVEPOINT sp; +SAVEPOINT sp; +CREATE TABLE bar(s TEXT); -- Nesting level 2 +ROLLBACK TO sp; -- Rollback should not affect first subtransaction +COMMIT; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/tablespace.out b/contrib/pg_tde/expected/tablespace.out new file mode 100644 index 00000000000..52448e20ee9 --- /dev/null +++ b/contrib/pg_tde/expected/tablespace.out @@ -0,0 +1,41 @@ +\set tde_am tde_heap +\i sql/tablespace.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE test(num1 bigint, num2 double precision, t text) USING :tde_am; +INSERT INTO test(num1, num2, t) + SELECT round(random()*100), random(), 'text' + FROM generate_series(1, 10) s(i); +CREATE INDEX test_idx ON test(num1); +SET allow_in_place_tablespaces = true; +CREATE TABLESPACE test_tblspace LOCATION ''; +ALTER TABLE test SET TABLESPACE test_tblspace; +SELECT count(*) FROM test; + count +------- + 10 +(1 row) + +ALTER TABLE test SET TABLESPACE pg_default; +REINDEX (TABLESPACE test_tblspace, CONCURRENTLY) TABLE test; +INSERT INTO test VALUES (110, 2); +SELECT * FROM test WHERE num1=110; + num1 | num2 | t +------+------+--- + 110 | 2 | +(1 row) + +DROP TABLE test; +DROP TABLESPACE test_tblspace; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/tablespace_basic.out b/contrib/pg_tde/expected/tablespace_basic.out new file mode 100644 index 00000000000..6718dc5a5be --- /dev/null +++ b/contrib/pg_tde/expected/tablespace_basic.out @@ -0,0 +1,41 @@ +\set tde_am tde_heap_basic +\i sql/tablespace.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE test(num1 bigint, num2 double precision, t text) USING :tde_am; +INSERT INTO test(num1, num2, t) + SELECT round(random()*100), random(), 'text' + FROM generate_series(1, 10) s(i); +CREATE INDEX test_idx ON test(num1); +SET allow_in_place_tablespaces = true; +CREATE TABLESPACE test_tblspace LOCATION ''; +ALTER TABLE test SET TABLESPACE test_tblspace; +SELECT count(*) FROM test; + count +------- + 10 +(1 row) + +ALTER TABLE test SET TABLESPACE pg_default; +REINDEX (TABLESPACE test_tblspace, CONCURRENTLY) TABLE test; +INSERT INTO test VALUES (110, 2); +SELECT * FROM test WHERE 
num1=110; + num1 | num2 | t +------+------+--- + 110 | 2 | +(1 row) + +DROP TABLE test; +DROP TABLESPACE test_tblspace; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/test_issue_153_fix.out b/contrib/pg_tde/expected/test_issue_153_fix.out new file mode 100644 index 00000000000..bcbb15a172e --- /dev/null +++ b/contrib/pg_tde/expected/test_issue_153_fix.out @@ -0,0 +1,589 @@ +\set tde_am tde_heap +\i sql/test_issue_153_fix.inc +CREATE EXTENSION pg_tde; +SET datestyle TO 'iso, dmy'; +SELECT * FROM pg_tde_principal_key_info(); +psql:sql/test_issue_153_fix.inc:4: ERROR: Principal key does not exists for the database +HINT: Use set_principal_key interface to set the principal key +CONTEXT: SQL function "pg_tde_principal_key_info" statement 1 +SELECT pg_tde_add_key_provider_file('file-ring','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-ring'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +-- +-- Script that creates the 'sample' tde encrypted tables, views +-- functions, triggers, etc. +-- +-- Start new transaction - commit all or nothing +-- +BEGIN; +-- +-- Create and load tables used in the documentation examples. +-- +-- Create the 'dept' table +-- +CREATE TABLE dept ( + deptno NUMERIC(2) NOT NULL CONSTRAINT dept_pk PRIMARY KEY, + dname VARCHAR(14) CONSTRAINT dept_dname_uq UNIQUE, + loc VARCHAR(13) +)using :tde_am; +-- +-- Create the 'emp' table +-- +CREATE TABLE emp ( + empno NUMERIC(4) NOT NULL CONSTRAINT emp_pk PRIMARY KEY, + ename VARCHAR(10), + job VARCHAR(9), + mgr NUMERIC(4), + hiredate DATE, + sal NUMERIC(7,2) CONSTRAINT emp_sal_ck CHECK (sal > 0), + comm NUMERIC(7,2), + deptno NUMERIC(2) CONSTRAINT emp_ref_dept_fk + REFERENCES dept(deptno) +)using :tde_am; +-- +-- Create the 'jobhist' table +-- +CREATE TABLE jobhist ( + empno NUMERIC(4) NOT NULL, + startdate TIMESTAMP(0) NOT NULL, + enddate TIMESTAMP(0), + job VARCHAR(9), + sal NUMERIC(7,2), + comm NUMERIC(7,2), + deptno NUMERIC(2), + chgdesc VARCHAR(80), + CONSTRAINT jobhist_pk PRIMARY KEY (empno, startdate), + CONSTRAINT jobhist_ref_emp_fk FOREIGN KEY (empno) + REFERENCES emp(empno) ON DELETE CASCADE, + CONSTRAINT jobhist_ref_dept_fk FOREIGN KEY (deptno) + REFERENCES dept (deptno) ON DELETE SET NULL, + CONSTRAINT jobhist_date_chk CHECK (startdate <= enddate) +)using :tde_am; +-- +-- Create the 'salesemp' view +-- +CREATE OR REPLACE VIEW salesemp AS + SELECT empno, ename, hiredate, sal, comm FROM emp WHERE job = 'SALESMAN'; +-- +-- Sequence to generate values for function 'new_empno'. 
+-- +CREATE SEQUENCE next_empno START WITH 8000 INCREMENT BY 1; +-- +-- Issue PUBLIC grants +-- +GRANT ALL ON emp TO PUBLIC; +GRANT ALL ON dept TO PUBLIC; +GRANT ALL ON jobhist TO PUBLIC; +GRANT ALL ON salesemp TO PUBLIC; +GRANT ALL ON next_empno TO PUBLIC; +-- +-- Load the 'dept' table +-- +INSERT INTO dept VALUES (10,'ACCOUNTING','NEW YORK'); +INSERT INTO dept VALUES (20,'RESEARCH','DALLAS'); +INSERT INTO dept VALUES (30,'SALES','CHICAGO'); +INSERT INTO dept VALUES (40,'OPERATIONS','BOSTON'); +-- +-- Load the 'emp' table +-- +INSERT INTO emp VALUES (7369,'SMITH','CLERK',7902,'17-DEC-80',800,NULL,20); +INSERT INTO emp VALUES (7499,'ALLEN','SALESMAN',7698,'20-FEB-81',1600,300,30); +INSERT INTO emp VALUES (7521,'WARD','SALESMAN',7698,'22-FEB-81',1250,500,30); +INSERT INTO emp VALUES (7566,'JONES','MANAGER',7839,'02-APR-81',2975,NULL,20); +INSERT INTO emp VALUES (7654,'MARTIN','SALESMAN',7698,'28-SEP-81',1250,1400,30); +INSERT INTO emp VALUES (7698,'BLAKE','MANAGER',7839,'01-MAY-81',2850,NULL,30); +INSERT INTO emp VALUES (7782,'CLARK','MANAGER',7839,'09-JUN-81',2450,NULL,10); +INSERT INTO emp VALUES (7788,'SCOTT','ANALYST',7566,'19-APR-87',3000,NULL,20); +INSERT INTO emp VALUES (7839,'KING','PRESIDENT',NULL,'17-NOV-81',5000,NULL,10); +INSERT INTO emp VALUES (7844,'TURNER','SALESMAN',7698,'08-SEP-81',1500,0,30); +INSERT INTO emp VALUES (7876,'ADAMS','CLERK',7788,'23-MAY-87',1100,NULL,20); +INSERT INTO emp VALUES (7900,'JAMES','CLERK',7698,'03-DEC-81',950,NULL,30); +INSERT INTO emp VALUES (7902,'FORD','ANALYST',7566,'03-DEC-81',3000,NULL,20); +INSERT INTO emp VALUES (7934,'MILLER','CLERK',7782,'23-JAN-82',1300,NULL,10); +-- +-- Load the 'jobhist' table +-- +INSERT INTO jobhist VALUES (7369,'17-DEC-80',NULL,'CLERK',800,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7499,'20-FEB-81',NULL,'SALESMAN',1600,300,30,'New Hire'); +INSERT INTO jobhist VALUES (7521,'22-FEB-81',NULL,'SALESMAN',1250,500,30,'New Hire'); +INSERT INTO jobhist VALUES (7566,'02-APR-81',NULL,'MANAGER',2975,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7654,'28-SEP-81',NULL,'SALESMAN',1250,1400,30,'New Hire'); +INSERT INTO jobhist VALUES (7698,'01-MAY-81',NULL,'MANAGER',2850,NULL,30,'New Hire'); +INSERT INTO jobhist VALUES (7782,'09-JUN-81',NULL,'MANAGER',2450,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7788,'19-APR-87','12-APR-88','CLERK',1000,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7788,'13-APR-88','04-MAY-89','CLERK',1040,NULL,20,'Raise'); +INSERT INTO jobhist VALUES (7788,'05-MAY-90',NULL,'ANALYST',3000,NULL,20,'Promoted to Analyst'); +INSERT INTO jobhist VALUES (7839,'17-NOV-81',NULL,'PRESIDENT',5000,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7844,'08-SEP-81',NULL,'SALESMAN',1500,0,30,'New Hire'); +INSERT INTO jobhist VALUES (7876,'23-MAY-87',NULL,'CLERK',1100,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7900,'03-DEC-81','14-JAN-83','CLERK',950,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7900,'15-JAN-83',NULL,'CLERK',950,NULL,30,'Changed to Dept 30'); +INSERT INTO jobhist VALUES (7902,'03-DEC-81',NULL,'ANALYST',3000,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7934,'23-JAN-82',NULL,'CLERK',1300,NULL,10,'New Hire'); +-- +-- Populate statistics table and view (pg_statistic/pg_stats) +-- +ANALYZE dept; +ANALYZE emp; +ANALYZE jobhist; +-- +-- Function that lists all employees' numbers and names +-- from the 'emp' table using a cursor. 
+-- +CREATE OR REPLACE FUNCTION list_emp() RETURNS VOID +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + emp_cur CURSOR FOR + SELECT empno, ename FROM emp ORDER BY empno; +BEGIN + OPEN emp_cur; + RAISE INFO 'EMPNO ENAME'; + RAISE INFO '----- -------'; + LOOP + FETCH emp_cur INTO v_empno, v_ename; + EXIT WHEN NOT FOUND; + RAISE INFO '% %', v_empno, v_ename; + END LOOP; + CLOSE emp_cur; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that selects an employee row given the employee +-- number and displays certain columns. +-- +CREATE OR REPLACE FUNCTION select_emp ( + p_empno NUMERIC +) RETURNS VOID +AS $$ +DECLARE + v_ename emp.ename%TYPE; + v_hiredate emp.hiredate%TYPE; + v_sal emp.sal%TYPE; + v_comm emp.comm%TYPE; + v_dname dept.dname%TYPE; + v_disp_date VARCHAR(10); +BEGIN + SELECT INTO + v_ename, v_hiredate, v_sal, v_comm, v_dname + ename, hiredate, sal, COALESCE(comm, 0), dname + FROM emp e, dept d + WHERE empno = p_empno + AND e.deptno = d.deptno; + IF NOT FOUND THEN + RAISE INFO 'Employee % not found', p_empno; + RETURN; + END IF; + v_disp_date := TO_CHAR(v_hiredate, 'MM/DD/YYYY'); + RAISE INFO 'Number : %', p_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Hire Date : %', v_disp_date; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission: %', v_comm; + RAISE INFO 'Department: %', v_dname; + RETURN; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- A RECORD type used to format the return value of +-- function, 'emp_query'. +-- +CREATE TYPE emp_query_type AS ( + empno NUMERIC, + ename VARCHAR(10), + job VARCHAR(9), + hiredate DATE, + sal NUMERIC +); +-- +-- Function that queries the 'emp' table based on +-- department number and employee number or name. Returns +-- employee number and name as INOUT parameters and job, +-- hire date, and salary as OUT parameters. These are +-- returned in the form of a record defined by +-- RECORD type, 'emp_query_type'. +-- +CREATE OR REPLACE FUNCTION emp_query ( + IN p_deptno NUMERIC, + INOUT p_empno NUMERIC, + INOUT p_ename VARCHAR, + OUT p_job VARCHAR, + OUT p_hiredate DATE, + OUT p_sal NUMERIC +) +AS $$ +BEGIN + SELECT INTO + p_empno, p_ename, p_job, p_hiredate, p_sal + empno, ename, job, hiredate, sal + FROM emp + WHERE deptno = p_deptno + AND (empno = p_empno + OR ename = UPPER(p_ename)); +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function to call 'emp_query_caller' with IN and INOUT +-- parameters. Displays the results received from INOUT and +-- OUT parameters. +-- +CREATE OR REPLACE FUNCTION emp_query_caller() RETURNS VOID +AS $$ +DECLARE + v_deptno NUMERIC; + v_empno NUMERIC; + v_ename VARCHAR; + v_rows INTEGER; + r_emp_query EMP_QUERY_TYPE; +BEGIN + v_deptno := 30; + v_empno := 0; + v_ename := 'Martin'; + r_emp_query := emp_query(v_deptno, v_empno, v_ename); + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', (r_emp_query).empno; + RAISE INFO 'Name : %', (r_emp_query).ename; + RAISE INFO 'Job : %', (r_emp_query).job; + RAISE INFO 'Hire Date : %', (r_emp_query).hiredate; + RAISE INFO 'Salary : %', (r_emp_query).sal; + RETURN; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function to compute yearly compensation based on semimonthly +-- salary. 
+-- +CREATE OR REPLACE FUNCTION emp_comp ( + p_sal NUMERIC, + p_comm NUMERIC +) RETURNS NUMERIC +AS $$ +BEGIN + RETURN (p_sal + COALESCE(p_comm, 0)) * 24; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that gets the next number from sequence, 'next_empno', +-- and ensures it is not already in use as an employee number. +-- +CREATE OR REPLACE FUNCTION new_empno() RETURNS INTEGER +AS $$ +DECLARE + v_cnt INTEGER := 1; + v_new_empno INTEGER; +BEGIN + WHILE v_cnt > 0 LOOP + SELECT INTO v_new_empno nextval('next_empno'); + SELECT INTO v_cnt COUNT(*) FROM emp WHERE empno = v_new_empno; + END LOOP; + RETURN v_new_empno; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that adds a new clerk to table 'emp'. +-- +CREATE OR REPLACE FUNCTION hire_clerk ( + p_ename VARCHAR, + p_deptno NUMERIC +) RETURNS NUMERIC +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + v_job VARCHAR(9); + v_mgr NUMERIC(4); + v_hiredate DATE; + v_sal NUMERIC(7,2); + v_comm NUMERIC(7,2); + v_deptno NUMERIC(2); +BEGIN + v_empno := new_empno(); + INSERT INTO emp VALUES (v_empno, p_ename, 'CLERK', 7782, + CURRENT_DATE, 950.00, NULL, p_deptno); + SELECT INTO + v_empno, v_ename, v_job, v_mgr, v_hiredate, v_sal, v_comm, v_deptno + empno, ename, job, mgr, hiredate, sal, comm, deptno + FROM emp WHERE empno = v_empno; + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', v_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Job : %', v_job; + RAISE INFO 'Manager : %', v_mgr; + RAISE INFO 'Hire Date : %', v_hiredate; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission : %', v_comm; + RETURN v_empno; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN -1; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that adds a new salesman to table 'emp'. 
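emp_comp treats p_sal as a semimonthly figure, hence the factor of 24 (two pay periods per month, twelve months). For example, WARD's 1250 salary plus 500 commission works out to (1250 + 500) * 24 = 42000. A hedged one-liner, not executed in this test:

SELECT ename, emp_comp(sal, comm) AS yearly_comp FROM emp WHERE empno = 7521;  -- 42000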
+-- +CREATE OR REPLACE FUNCTION hire_salesman ( + p_ename VARCHAR, + p_sal NUMERIC, + p_comm NUMERIC +) RETURNS NUMERIC +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + v_job VARCHAR(9); + v_mgr NUMERIC(4); + v_hiredate DATE; + v_sal NUMERIC(7,2); + v_comm NUMERIC(7,2); + v_deptno NUMERIC(2); +BEGIN + v_empno := new_empno(); + INSERT INTO emp VALUES (v_empno, p_ename, 'SALESMAN', 7698, + CURRENT_DATE, p_sal, p_comm, 30); + SELECT INTO + v_empno, v_ename, v_job, v_mgr, v_hiredate, v_sal, v_comm, v_deptno + empno, ename, job, mgr, hiredate, sal, comm, deptno + FROM emp WHERE empno = v_empno; + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', v_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Job : %', v_job; + RAISE INFO 'Manager : %', v_mgr; + RAISE INFO 'Hire Date : %', v_hiredate; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission : %', v_comm; + RETURN v_empno; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN -1; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Rule to INSERT into view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_i AS ON INSERT TO salesemp +DO INSTEAD + INSERT INTO emp VALUES (NEW.empno, NEW.ename, 'SALESMAN', 7698, + NEW.hiredate, NEW.sal, NEW.comm, 30); +-- +-- Rule to UPDATE view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_u AS ON UPDATE TO salesemp +DO INSTEAD + UPDATE emp SET empno = NEW.empno, + ename = NEW.ename, + hiredate = NEW.hiredate, + sal = NEW.sal, + comm = NEW.comm + WHERE empno = OLD.empno; +-- +-- Rule to DELETE from view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_d AS ON DELETE TO salesemp +DO INSTEAD + DELETE FROM emp WHERE empno = OLD.empno; +-- +-- After statement-level trigger that displays a message after +-- an insert, update, or deletion to the 'emp' table. One message +-- per SQL command is displayed. +-- +CREATE OR REPLACE FUNCTION user_audit_trig() RETURNS TRIGGER +AS $$ +DECLARE + v_action VARCHAR(24); + v_text TEXT; +BEGIN + IF TG_OP = 'INSERT' THEN + v_action := ' added employee(s) on '; + ELSIF TG_OP = 'UPDATE' THEN + v_action := ' updated employee(s) on '; + ELSIF TG_OP = 'DELETE' THEN + v_action := ' deleted employee(s) on '; + END IF; +-- v_text := 'User ' || USER || v_action || CURRENT_DATE; Changing this as we need consistent output for regression + v_text := 'User ' || v_action ; + RAISE INFO ' %', v_text; + RETURN NULL; +END; +$$ LANGUAGE 'plpgsql'; +CREATE TRIGGER user_audit_trig + AFTER INSERT OR UPDATE OR DELETE ON emp + FOR EACH STATEMENT EXECUTE PROCEDURE user_audit_trig(); +-- +-- Before row-level trigger that displays employee number and +-- salary of an employee that is about to be added, updated, +-- or deleted in the 'emp' table. 
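The three rules make the salesemp view writable by rewriting DML against the view into DML on emp. A hedged sketch of what the INSERT rule effectively does (the row values here are hypothetical and this statement is not part of the test):

INSERT INTO salesemp VALUES (8100, 'NEWMAN', CURRENT_DATE, 1400, 100);
-- salesemp_i rewrites this into, effectively:
-- INSERT INTO emp VALUES (8100, 'NEWMAN', 'SALESMAN', 7698, CURRENT_DATE, 1400, 100, 30);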
+-- +CREATE OR REPLACE FUNCTION emp_sal_trig() RETURNS TRIGGER +AS $$ +DECLARE + sal_diff NUMERIC(7,2); +BEGIN + IF TG_OP = 'INSERT' THEN + RAISE INFO 'Inserting employee %', NEW.empno; + RAISE INFO '..New salary: %', NEW.sal; + RETURN NEW; + END IF; + IF TG_OP = 'UPDATE' THEN + sal_diff := NEW.sal - OLD.sal; + RAISE INFO 'Updating employee %', OLD.empno; + RAISE INFO '..Old salary: %', OLD.sal; + RAISE INFO '..New salary: %', NEW.sal; + RAISE INFO '..Raise : %', sal_diff; + RETURN NEW; + END IF; + IF TG_OP = 'DELETE' THEN + RAISE INFO 'Deleting employee %', OLD.empno; + RAISE INFO '..Old salary: %', OLD.sal; + RETURN OLD; + END IF; +END; +$$ LANGUAGE 'plpgsql'; +CREATE TRIGGER emp_sal_trig + BEFORE DELETE OR INSERT OR UPDATE ON emp + FOR EACH ROW EXECUTE PROCEDURE emp_sal_trig(); +COMMIT; +SELECT * FROM emp; + empno | ename | job | mgr | hiredate | sal | comm | deptno +-------+--------+-----------+------+------------+---------+---------+-------- + 7369 | SMITH | CLERK | 7902 | 1980-12-17 | 800.00 | | 20 + 7499 | ALLEN | SALESMAN | 7698 | 1981-02-20 | 1600.00 | 300.00 | 30 + 7521 | WARD | SALESMAN | 7698 | 1981-02-22 | 1250.00 | 500.00 | 30 + 7566 | JONES | MANAGER | 7839 | 1981-04-02 | 2975.00 | | 20 + 7654 | MARTIN | SALESMAN | 7698 | 1981-09-28 | 1250.00 | 1400.00 | 30 + 7698 | BLAKE | MANAGER | 7839 | 1981-05-01 | 2850.00 | | 30 + 7782 | CLARK | MANAGER | 7839 | 1981-06-09 | 2450.00 | | 10 + 7788 | SCOTT | ANALYST | 7566 | 1987-04-19 | 3000.00 | | 20 + 7839 | KING | PRESIDENT | | 1981-11-17 | 5000.00 | | 10 + 7844 | TURNER | SALESMAN | 7698 | 1981-09-08 | 1500.00 | 0.00 | 30 + 7876 | ADAMS | CLERK | 7788 | 1987-05-23 | 1100.00 | | 20 + 7900 | JAMES | CLERK | 7698 | 1981-12-03 | 950.00 | | 30 + 7902 | FORD | ANALYST | 7566 | 1981-12-03 | 3000.00 | | 20 + 7934 | MILLER | CLERK | 7782 | 1982-01-23 | 1300.00 | | 10 +(14 rows) + +SELECT * FROM dept; + deptno | dname | loc +--------+------------+---------- + 10 | ACCOUNTING | NEW YORK + 20 | RESEARCH | DALLAS + 30 | SALES | CHICAGO + 40 | OPERATIONS | BOSTON +(4 rows) + +SELECT * FROM jobhist; + empno | startdate | enddate | job | sal | comm | deptno | chgdesc +-------+---------------------+---------------------+-----------+---------+---------+--------+--------------------- + 7369 | 1980-12-17 00:00:00 | | CLERK | 800.00 | | 20 | New Hire + 7499 | 1981-02-20 00:00:00 | | SALESMAN | 1600.00 | 300.00 | 30 | New Hire + 7521 | 1981-02-22 00:00:00 | | SALESMAN | 1250.00 | 500.00 | 30 | New Hire + 7566 | 1981-04-02 00:00:00 | | MANAGER | 2975.00 | | 20 | New Hire + 7654 | 1981-09-28 00:00:00 | | SALESMAN | 1250.00 | 1400.00 | 30 | New Hire + 7698 | 1981-05-01 00:00:00 | | MANAGER | 2850.00 | | 30 | New Hire + 7782 | 1981-06-09 00:00:00 | | MANAGER | 2450.00 | | 10 | New Hire + 7788 | 1987-04-19 00:00:00 | 1988-04-12 00:00:00 | CLERK | 1000.00 | | 20 | New Hire + 7788 | 1988-04-13 00:00:00 | 1989-05-04 00:00:00 | CLERK | 1040.00 | | 20 | Raise + 7788 | 1990-05-05 00:00:00 | | ANALYST | 3000.00 | | 20 | Promoted to Analyst + 7839 | 1981-11-17 00:00:00 | | PRESIDENT | 5000.00 | | 10 | New Hire + 7844 | 1981-09-08 00:00:00 | | SALESMAN | 1500.00 | 0.00 | 30 | New Hire + 7876 | 1987-05-23 00:00:00 | | CLERK | 1100.00 | | 20 | New Hire + 7900 | 1981-12-03 00:00:00 | 1983-01-14 00:00:00 | CLERK | 950.00 | | 10 | New Hire + 7900 | 1983-01-15 00:00:00 | | CLERK | 950.00 | | 30 | Changed to Dept 30 + 7902 | 1981-12-03 00:00:00 | | ANALYST | 3000.00 | | 20 | New Hire + 7934 | 1982-01-23 00:00:00 | | CLERK | 1300.00 | | 10 | New Hire +(17 rows) + +-- Now test 
the crash fix +DELETE FROM emp WHERE empno = 7934; +psql:sql/test_issue_153_fix.inc:465: INFO: Deleting employee 7934 +psql:sql/test_issue_153_fix.inc:465: INFO: ..Old salary: 1300.00 +psql:sql/test_issue_153_fix.inc:465: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7698; +psql:sql/test_issue_153_fix.inc:466: INFO: Deleting employee 7698 +psql:sql/test_issue_153_fix.inc:466: INFO: ..Old salary: 2850.00 +psql:sql/test_issue_153_fix.inc:466: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7782; +psql:sql/test_issue_153_fix.inc:467: INFO: Deleting employee 7782 +psql:sql/test_issue_153_fix.inc:467: INFO: ..Old salary: 2450.00 +psql:sql/test_issue_153_fix.inc:467: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7788; +psql:sql/test_issue_153_fix.inc:468: INFO: Deleting employee 7788 +psql:sql/test_issue_153_fix.inc:468: INFO: ..Old salary: 3000.00 +psql:sql/test_issue_153_fix.inc:468: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7838; +psql:sql/test_issue_153_fix.inc:469: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7900; +psql:sql/test_issue_153_fix.inc:470: INFO: Deleting employee 7900 +psql:sql/test_issue_153_fix.inc:470: INFO: ..Old salary: 950.00 +psql:sql/test_issue_153_fix.inc:470: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7654; +psql:sql/test_issue_153_fix.inc:471: INFO: Deleting employee 7654 +psql:sql/test_issue_153_fix.inc:471: INFO: ..Old salary: 1250.00 +psql:sql/test_issue_153_fix.inc:471: INFO: User deleted employee(s) on +DELETE FROM dept WHERE deptno = 40; +SELECT * FROM emp; + empno | ename | job | mgr | hiredate | sal | comm | deptno +-------+--------+-----------+------+------------+---------+--------+-------- + 7369 | SMITH | CLERK | 7902 | 1980-12-17 | 800.00 | | 20 + 7499 | ALLEN | SALESMAN | 7698 | 1981-02-20 | 1600.00 | 300.00 | 30 + 7521 | WARD | SALESMAN | 7698 | 1981-02-22 | 1250.00 | 500.00 | 30 + 7566 | JONES | MANAGER | 7839 | 1981-04-02 | 2975.00 | | 20 + 7839 | KING | PRESIDENT | | 1981-11-17 | 5000.00 | | 10 + 7844 | TURNER | SALESMAN | 7698 | 1981-09-08 | 1500.00 | 0.00 | 30 + 7876 | ADAMS | CLERK | 7788 | 1987-05-23 | 1100.00 | | 20 + 7902 | FORD | ANALYST | 7566 | 1981-12-03 | 3000.00 | | 20 +(8 rows) + +SELECT * FROM dept; + deptno | dname | loc +--------+------------+---------- + 10 | ACCOUNTING | NEW YORK + 20 | RESEARCH | DALLAS + 30 | SALES | CHICAGO +(3 rows) + +SELECT * FROM jobhist; + empno | startdate | enddate | job | sal | comm | deptno | chgdesc +-------+---------------------+---------+-----------+---------+--------+--------+---------- + 7369 | 1980-12-17 00:00:00 | | CLERK | 800.00 | | 20 | New Hire + 7499 | 1981-02-20 00:00:00 | | SALESMAN | 1600.00 | 300.00 | 30 | New Hire + 7521 | 1981-02-22 00:00:00 | | SALESMAN | 1250.00 | 500.00 | 30 | New Hire + 7566 | 1981-04-02 00:00:00 | | MANAGER | 2975.00 | | 20 | New Hire + 7839 | 1981-11-17 00:00:00 | | PRESIDENT | 5000.00 | | 10 | New Hire + 7844 | 1981-09-08 00:00:00 | | SALESMAN | 1500.00 | 0.00 | 30 | New Hire + 7876 | 1987-05-23 00:00:00 | | CLERK | 1100.00 | | 20 | New Hire + 7902 | 1981-12-03 00:00:00 | | ANALYST | 3000.00 | | 20 | New Hire +(8 rows) + +DROP TABLE jobhist CASCADE; +DROP TABLE emp CASCADE; +psql:sql/test_issue_153_fix.inc:480: NOTICE: drop cascades to view salesemp +DROP TABLE dept CASCADE; +DROP SEQUENCE next_empno; +DROP TYPE emp_query_type; +DROP EXTENSION pg_tde CASCADE; diff --git a/contrib/pg_tde/expected/test_issue_153_fix_basic.out 
b/contrib/pg_tde/expected/test_issue_153_fix_basic.out new file mode 100644 index 00000000000..66f293e94a4 --- /dev/null +++ b/contrib/pg_tde/expected/test_issue_153_fix_basic.out @@ -0,0 +1,589 @@ +\set tde_am tde_heap_basic +\i sql/test_issue_153_fix.inc +CREATE EXTENSION pg_tde; +SET datestyle TO 'iso, dmy'; +SELECT * FROM pg_tde_principal_key_info(); +psql:sql/test_issue_153_fix.inc:4: ERROR: Principal key does not exists for the database +HINT: Use set_principal_key interface to set the principal key +CONTEXT: SQL function "pg_tde_principal_key_info" statement 1 +SELECT pg_tde_add_key_provider_file('file-ring','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-ring'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +-- +-- Script that creates the 'sample' tde encrypted tables, views +-- functions, triggers, etc. +-- +-- Start new transaction - commit all or nothing +-- +BEGIN; +-- +-- Create and load tables used in the documentation examples. +-- +-- Create the 'dept' table +-- +CREATE TABLE dept ( + deptno NUMERIC(2) NOT NULL CONSTRAINT dept_pk PRIMARY KEY, + dname VARCHAR(14) CONSTRAINT dept_dname_uq UNIQUE, + loc VARCHAR(13) +)using :tde_am; +-- +-- Create the 'emp' table +-- +CREATE TABLE emp ( + empno NUMERIC(4) NOT NULL CONSTRAINT emp_pk PRIMARY KEY, + ename VARCHAR(10), + job VARCHAR(9), + mgr NUMERIC(4), + hiredate DATE, + sal NUMERIC(7,2) CONSTRAINT emp_sal_ck CHECK (sal > 0), + comm NUMERIC(7,2), + deptno NUMERIC(2) CONSTRAINT emp_ref_dept_fk + REFERENCES dept(deptno) +)using :tde_am; +-- +-- Create the 'jobhist' table +-- +CREATE TABLE jobhist ( + empno NUMERIC(4) NOT NULL, + startdate TIMESTAMP(0) NOT NULL, + enddate TIMESTAMP(0), + job VARCHAR(9), + sal NUMERIC(7,2), + comm NUMERIC(7,2), + deptno NUMERIC(2), + chgdesc VARCHAR(80), + CONSTRAINT jobhist_pk PRIMARY KEY (empno, startdate), + CONSTRAINT jobhist_ref_emp_fk FOREIGN KEY (empno) + REFERENCES emp(empno) ON DELETE CASCADE, + CONSTRAINT jobhist_ref_dept_fk FOREIGN KEY (deptno) + REFERENCES dept (deptno) ON DELETE SET NULL, + CONSTRAINT jobhist_date_chk CHECK (startdate <= enddate) +)using :tde_am; +-- +-- Create the 'salesemp' view +-- +CREATE OR REPLACE VIEW salesemp AS + SELECT empno, ename, hiredate, sal, comm FROM emp WHERE job = 'SALESMAN'; +-- +-- Sequence to generate values for function 'new_empno'. 
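This _basic variant replays the same sql/test_issue_153_fix.inc script; the substantive difference is the first line: \set tde_am tde_heap_basic binds a psql variable, and every later :tde_am reference is expanded client-side before the SQL is sent. A minimal sketch of the mechanism (hypothetical table name, not part of the test):

\set tde_am tde_heap_basic
CREATE TABLE demo (id INT) USING :tde_am;  -- psql sends: CREATE TABLE demo (id INT) USING tde_heap_basic;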
+-- +CREATE SEQUENCE next_empno START WITH 8000 INCREMENT BY 1; +-- +-- Issue PUBLIC grants +-- +GRANT ALL ON emp TO PUBLIC; +GRANT ALL ON dept TO PUBLIC; +GRANT ALL ON jobhist TO PUBLIC; +GRANT ALL ON salesemp TO PUBLIC; +GRANT ALL ON next_empno TO PUBLIC; +-- +-- Load the 'dept' table +-- +INSERT INTO dept VALUES (10,'ACCOUNTING','NEW YORK'); +INSERT INTO dept VALUES (20,'RESEARCH','DALLAS'); +INSERT INTO dept VALUES (30,'SALES','CHICAGO'); +INSERT INTO dept VALUES (40,'OPERATIONS','BOSTON'); +-- +-- Load the 'emp' table +-- +INSERT INTO emp VALUES (7369,'SMITH','CLERK',7902,'17-DEC-80',800,NULL,20); +INSERT INTO emp VALUES (7499,'ALLEN','SALESMAN',7698,'20-FEB-81',1600,300,30); +INSERT INTO emp VALUES (7521,'WARD','SALESMAN',7698,'22-FEB-81',1250,500,30); +INSERT INTO emp VALUES (7566,'JONES','MANAGER',7839,'02-APR-81',2975,NULL,20); +INSERT INTO emp VALUES (7654,'MARTIN','SALESMAN',7698,'28-SEP-81',1250,1400,30); +INSERT INTO emp VALUES (7698,'BLAKE','MANAGER',7839,'01-MAY-81',2850,NULL,30); +INSERT INTO emp VALUES (7782,'CLARK','MANAGER',7839,'09-JUN-81',2450,NULL,10); +INSERT INTO emp VALUES (7788,'SCOTT','ANALYST',7566,'19-APR-87',3000,NULL,20); +INSERT INTO emp VALUES (7839,'KING','PRESIDENT',NULL,'17-NOV-81',5000,NULL,10); +INSERT INTO emp VALUES (7844,'TURNER','SALESMAN',7698,'08-SEP-81',1500,0,30); +INSERT INTO emp VALUES (7876,'ADAMS','CLERK',7788,'23-MAY-87',1100,NULL,20); +INSERT INTO emp VALUES (7900,'JAMES','CLERK',7698,'03-DEC-81',950,NULL,30); +INSERT INTO emp VALUES (7902,'FORD','ANALYST',7566,'03-DEC-81',3000,NULL,20); +INSERT INTO emp VALUES (7934,'MILLER','CLERK',7782,'23-JAN-82',1300,NULL,10); +-- +-- Load the 'jobhist' table +-- +INSERT INTO jobhist VALUES (7369,'17-DEC-80',NULL,'CLERK',800,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7499,'20-FEB-81',NULL,'SALESMAN',1600,300,30,'New Hire'); +INSERT INTO jobhist VALUES (7521,'22-FEB-81',NULL,'SALESMAN',1250,500,30,'New Hire'); +INSERT INTO jobhist VALUES (7566,'02-APR-81',NULL,'MANAGER',2975,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7654,'28-SEP-81',NULL,'SALESMAN',1250,1400,30,'New Hire'); +INSERT INTO jobhist VALUES (7698,'01-MAY-81',NULL,'MANAGER',2850,NULL,30,'New Hire'); +INSERT INTO jobhist VALUES (7782,'09-JUN-81',NULL,'MANAGER',2450,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7788,'19-APR-87','12-APR-88','CLERK',1000,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7788,'13-APR-88','04-MAY-89','CLERK',1040,NULL,20,'Raise'); +INSERT INTO jobhist VALUES (7788,'05-MAY-90',NULL,'ANALYST',3000,NULL,20,'Promoted to Analyst'); +INSERT INTO jobhist VALUES (7839,'17-NOV-81',NULL,'PRESIDENT',5000,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7844,'08-SEP-81',NULL,'SALESMAN',1500,0,30,'New Hire'); +INSERT INTO jobhist VALUES (7876,'23-MAY-87',NULL,'CLERK',1100,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7900,'03-DEC-81','14-JAN-83','CLERK',950,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7900,'15-JAN-83',NULL,'CLERK',950,NULL,30,'Changed to Dept 30'); +INSERT INTO jobhist VALUES (7902,'03-DEC-81',NULL,'ANALYST',3000,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7934,'23-JAN-82',NULL,'CLERK',1300,NULL,10,'New Hire'); +-- +-- Populate statistics table and view (pg_statistic/pg_stats) +-- +ANALYZE dept; +ANALYZE emp; +ANALYZE jobhist; +-- +-- Function that lists all employees' numbers and names +-- from the 'emp' table using a cursor. 
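Because the sample data keeps employee numbers below 8000 and the sequence starts at 8000, the new_empno function defined further down normally succeeds on its first iteration. Illustration only, since it would consume sequence values the test does not account for:

SELECT nextval('next_empno');  -- 8000 on first use
SELECT nextval('next_empno');  -- 8001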
+-- +CREATE OR REPLACE FUNCTION list_emp() RETURNS VOID +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + emp_cur CURSOR FOR + SELECT empno, ename FROM emp ORDER BY empno; +BEGIN + OPEN emp_cur; + RAISE INFO 'EMPNO ENAME'; + RAISE INFO '----- -------'; + LOOP + FETCH emp_cur INTO v_empno, v_ename; + EXIT WHEN NOT FOUND; + RAISE INFO '% %', v_empno, v_ename; + END LOOP; + CLOSE emp_cur; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that selects an employee row given the employee +-- number and displays certain columns. +-- +CREATE OR REPLACE FUNCTION select_emp ( + p_empno NUMERIC +) RETURNS VOID +AS $$ +DECLARE + v_ename emp.ename%TYPE; + v_hiredate emp.hiredate%TYPE; + v_sal emp.sal%TYPE; + v_comm emp.comm%TYPE; + v_dname dept.dname%TYPE; + v_disp_date VARCHAR(10); +BEGIN + SELECT INTO + v_ename, v_hiredate, v_sal, v_comm, v_dname + ename, hiredate, sal, COALESCE(comm, 0), dname + FROM emp e, dept d + WHERE empno = p_empno + AND e.deptno = d.deptno; + IF NOT FOUND THEN + RAISE INFO 'Employee % not found', p_empno; + RETURN; + END IF; + v_disp_date := TO_CHAR(v_hiredate, 'MM/DD/YYYY'); + RAISE INFO 'Number : %', p_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Hire Date : %', v_disp_date; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission: %', v_comm; + RAISE INFO 'Department: %', v_dname; + RETURN; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- A RECORD type used to format the return value of +-- function, 'emp_query'. +-- +CREATE TYPE emp_query_type AS ( + empno NUMERIC, + ename VARCHAR(10), + job VARCHAR(9), + hiredate DATE, + sal NUMERIC +); +-- +-- Function that queries the 'emp' table based on +-- department number and employee number or name. Returns +-- employee number and name as INOUT parameters and job, +-- hire date, and salary as OUT parameters. These are +-- returned in the form of a record defined by +-- RECORD type, 'emp_query_type'. +-- +CREATE OR REPLACE FUNCTION emp_query ( + IN p_deptno NUMERIC, + INOUT p_empno NUMERIC, + INOUT p_ename VARCHAR, + OUT p_job VARCHAR, + OUT p_hiredate DATE, + OUT p_sal NUMERIC +) +AS $$ +BEGIN + SELECT INTO + p_empno, p_ename, p_job, p_hiredate, p_sal + empno, ename, job, hiredate, sal + FROM emp + WHERE deptno = p_deptno + AND (empno = p_empno + OR ename = UPPER(p_ename)); +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function to call 'emp_query_caller' with IN and INOUT +-- parameters. Displays the results received from INOUT and +-- OUT parameters. +-- +CREATE OR REPLACE FUNCTION emp_query_caller() RETURNS VOID +AS $$ +DECLARE + v_deptno NUMERIC; + v_empno NUMERIC; + v_ename VARCHAR; + v_rows INTEGER; + r_emp_query EMP_QUERY_TYPE; +BEGIN + v_deptno := 30; + v_empno := 0; + v_ename := 'Martin'; + r_emp_query := emp_query(v_deptno, v_empno, v_ename); + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', (r_emp_query).empno; + RAISE INFO 'Name : %', (r_emp_query).ename; + RAISE INFO 'Job : %', (r_emp_query).job; + RAISE INFO 'Hire Date : %', (r_emp_query).hiredate; + RAISE INFO 'Salary : %', (r_emp_query).sal; + RETURN; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function to compute yearly compensation based on semimonthly +-- salary. 
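list_emp shows the explicit-cursor pattern: OPEN, FETCH ... INTO, EXIT WHEN NOT FOUND, CLOSE. Calling it has no side effects beyond the INFO messages, e.g. (hedged, not captured in this output file):

SELECT list_emp();  -- one 'empno ename' INFO line per row, ordered by empno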
+-- +CREATE OR REPLACE FUNCTION emp_comp ( + p_sal NUMERIC, + p_comm NUMERIC +) RETURNS NUMERIC +AS $$ +BEGIN + RETURN (p_sal + COALESCE(p_comm, 0)) * 24; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that gets the next number from sequence, 'next_empno', +-- and ensures it is not already in use as an employee number. +-- +CREATE OR REPLACE FUNCTION new_empno() RETURNS INTEGER +AS $$ +DECLARE + v_cnt INTEGER := 1; + v_new_empno INTEGER; +BEGIN + WHILE v_cnt > 0 LOOP + SELECT INTO v_new_empno nextval('next_empno'); + SELECT INTO v_cnt COUNT(*) FROM emp WHERE empno = v_new_empno; + END LOOP; + RETURN v_new_empno; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that adds a new clerk to table 'emp'. +-- +CREATE OR REPLACE FUNCTION hire_clerk ( + p_ename VARCHAR, + p_deptno NUMERIC +) RETURNS NUMERIC +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + v_job VARCHAR(9); + v_mgr NUMERIC(4); + v_hiredate DATE; + v_sal NUMERIC(7,2); + v_comm NUMERIC(7,2); + v_deptno NUMERIC(2); +BEGIN + v_empno := new_empno(); + INSERT INTO emp VALUES (v_empno, p_ename, 'CLERK', 7782, + CURRENT_DATE, 950.00, NULL, p_deptno); + SELECT INTO + v_empno, v_ename, v_job, v_mgr, v_hiredate, v_sal, v_comm, v_deptno + empno, ename, job, mgr, hiredate, sal, comm, deptno + FROM emp WHERE empno = v_empno; + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', v_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Job : %', v_job; + RAISE INFO 'Manager : %', v_mgr; + RAISE INFO 'Hire Date : %', v_hiredate; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission : %', v_comm; + RETURN v_empno; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN -1; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that adds a new salesman to table 'emp'. 
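new_empno keeps drawing from the sequence until its COUNT(*) probe finds no clash with an existing empno, so duplicates are avoided even if the sequence range ever overlaps manually inserted numbers; hire_clerk then combines it with an INSERT and a read-back SELECT INTO. A hedged call (it would add a row, so it is not run in this test):

SELECT hire_clerk('DAVIS', 20);  -- returns the new empno, e.g. 8000 on a fresh sequence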
+-- +CREATE OR REPLACE FUNCTION hire_salesman ( + p_ename VARCHAR, + p_sal NUMERIC, + p_comm NUMERIC +) RETURNS NUMERIC +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + v_job VARCHAR(9); + v_mgr NUMERIC(4); + v_hiredate DATE; + v_sal NUMERIC(7,2); + v_comm NUMERIC(7,2); + v_deptno NUMERIC(2); +BEGIN + v_empno := new_empno(); + INSERT INTO emp VALUES (v_empno, p_ename, 'SALESMAN', 7698, + CURRENT_DATE, p_sal, p_comm, 30); + SELECT INTO + v_empno, v_ename, v_job, v_mgr, v_hiredate, v_sal, v_comm, v_deptno + empno, ename, job, mgr, hiredate, sal, comm, deptno + FROM emp WHERE empno = v_empno; + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', v_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Job : %', v_job; + RAISE INFO 'Manager : %', v_mgr; + RAISE INFO 'Hire Date : %', v_hiredate; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission : %', v_comm; + RETURN v_empno; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN -1; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Rule to INSERT into view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_i AS ON INSERT TO salesemp +DO INSTEAD + INSERT INTO emp VALUES (NEW.empno, NEW.ename, 'SALESMAN', 7698, + NEW.hiredate, NEW.sal, NEW.comm, 30); +-- +-- Rule to UPDATE view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_u AS ON UPDATE TO salesemp +DO INSTEAD + UPDATE emp SET empno = NEW.empno, + ename = NEW.ename, + hiredate = NEW.hiredate, + sal = NEW.sal, + comm = NEW.comm + WHERE empno = OLD.empno; +-- +-- Rule to DELETE from view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_d AS ON DELETE TO salesemp +DO INSTEAD + DELETE FROM emp WHERE empno = OLD.empno; +-- +-- After statement-level trigger that displays a message after +-- an insert, update, or deletion to the 'emp' table. One message +-- per SQL command is displayed. +-- +CREATE OR REPLACE FUNCTION user_audit_trig() RETURNS TRIGGER +AS $$ +DECLARE + v_action VARCHAR(24); + v_text TEXT; +BEGIN + IF TG_OP = 'INSERT' THEN + v_action := ' added employee(s) on '; + ELSIF TG_OP = 'UPDATE' THEN + v_action := ' updated employee(s) on '; + ELSIF TG_OP = 'DELETE' THEN + v_action := ' deleted employee(s) on '; + END IF; +-- v_text := 'User ' || USER || v_action || CURRENT_DATE; Changing this as we need consistent output for regression + v_text := 'User ' || v_action ; + RAISE INFO ' %', v_text; + RETURN NULL; +END; +$$ LANGUAGE 'plpgsql'; +CREATE TRIGGER user_audit_trig + AFTER INSERT OR UPDATE OR DELETE ON emp + FOR EACH STATEMENT EXECUTE PROCEDURE user_audit_trig(); +-- +-- Before row-level trigger that displays employee number and +-- salary of an employee that is about to be added, updated, +-- or deleted in the 'emp' table. 
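user_audit_trig is statement-level: it fires once per SQL command regardless of how many rows are affected, and its return value is ignored (hence RETURN NULL). The USER || CURRENT_DATE text is deliberately commented out so the regression output stays stable across users and dates. Hedged illustration of the once-per-statement behaviour:

UPDATE emp SET sal = sal WHERE deptno = 30;  -- a single 'User  updated employee(s)' INFO line, however many rows match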
+-- +CREATE OR REPLACE FUNCTION emp_sal_trig() RETURNS TRIGGER +AS $$ +DECLARE + sal_diff NUMERIC(7,2); +BEGIN + IF TG_OP = 'INSERT' THEN + RAISE INFO 'Inserting employee %', NEW.empno; + RAISE INFO '..New salary: %', NEW.sal; + RETURN NEW; + END IF; + IF TG_OP = 'UPDATE' THEN + sal_diff := NEW.sal - OLD.sal; + RAISE INFO 'Updating employee %', OLD.empno; + RAISE INFO '..Old salary: %', OLD.sal; + RAISE INFO '..New salary: %', NEW.sal; + RAISE INFO '..Raise : %', sal_diff; + RETURN NEW; + END IF; + IF TG_OP = 'DELETE' THEN + RAISE INFO 'Deleting employee %', OLD.empno; + RAISE INFO '..Old salary: %', OLD.sal; + RETURN OLD; + END IF; +END; +$$ LANGUAGE 'plpgsql'; +CREATE TRIGGER emp_sal_trig + BEFORE DELETE OR INSERT OR UPDATE ON emp + FOR EACH ROW EXECUTE PROCEDURE emp_sal_trig(); +COMMIT; +SELECT * FROM emp; + empno | ename | job | mgr | hiredate | sal | comm | deptno +-------+--------+-----------+------+------------+---------+---------+-------- + 7369 | SMITH | CLERK | 7902 | 1980-12-17 | 800.00 | | 20 + 7499 | ALLEN | SALESMAN | 7698 | 1981-02-20 | 1600.00 | 300.00 | 30 + 7521 | WARD | SALESMAN | 7698 | 1981-02-22 | 1250.00 | 500.00 | 30 + 7566 | JONES | MANAGER | 7839 | 1981-04-02 | 2975.00 | | 20 + 7654 | MARTIN | SALESMAN | 7698 | 1981-09-28 | 1250.00 | 1400.00 | 30 + 7698 | BLAKE | MANAGER | 7839 | 1981-05-01 | 2850.00 | | 30 + 7782 | CLARK | MANAGER | 7839 | 1981-06-09 | 2450.00 | | 10 + 7788 | SCOTT | ANALYST | 7566 | 1987-04-19 | 3000.00 | | 20 + 7839 | KING | PRESIDENT | | 1981-11-17 | 5000.00 | | 10 + 7844 | TURNER | SALESMAN | 7698 | 1981-09-08 | 1500.00 | 0.00 | 30 + 7876 | ADAMS | CLERK | 7788 | 1987-05-23 | 1100.00 | | 20 + 7900 | JAMES | CLERK | 7698 | 1981-12-03 | 950.00 | | 30 + 7902 | FORD | ANALYST | 7566 | 1981-12-03 | 3000.00 | | 20 + 7934 | MILLER | CLERK | 7782 | 1982-01-23 | 1300.00 | | 10 +(14 rows) + +SELECT * FROM dept; + deptno | dname | loc +--------+------------+---------- + 10 | ACCOUNTING | NEW YORK + 20 | RESEARCH | DALLAS + 30 | SALES | CHICAGO + 40 | OPERATIONS | BOSTON +(4 rows) + +SELECT * FROM jobhist; + empno | startdate | enddate | job | sal | comm | deptno | chgdesc +-------+---------------------+---------------------+-----------+---------+---------+--------+--------------------- + 7369 | 1980-12-17 00:00:00 | | CLERK | 800.00 | | 20 | New Hire + 7499 | 1981-02-20 00:00:00 | | SALESMAN | 1600.00 | 300.00 | 30 | New Hire + 7521 | 1981-02-22 00:00:00 | | SALESMAN | 1250.00 | 500.00 | 30 | New Hire + 7566 | 1981-04-02 00:00:00 | | MANAGER | 2975.00 | | 20 | New Hire + 7654 | 1981-09-28 00:00:00 | | SALESMAN | 1250.00 | 1400.00 | 30 | New Hire + 7698 | 1981-05-01 00:00:00 | | MANAGER | 2850.00 | | 30 | New Hire + 7782 | 1981-06-09 00:00:00 | | MANAGER | 2450.00 | | 10 | New Hire + 7788 | 1987-04-19 00:00:00 | 1988-04-12 00:00:00 | CLERK | 1000.00 | | 20 | New Hire + 7788 | 1988-04-13 00:00:00 | 1989-05-04 00:00:00 | CLERK | 1040.00 | | 20 | Raise + 7788 | 1990-05-05 00:00:00 | | ANALYST | 3000.00 | | 20 | Promoted to Analyst + 7839 | 1981-11-17 00:00:00 | | PRESIDENT | 5000.00 | | 10 | New Hire + 7844 | 1981-09-08 00:00:00 | | SALESMAN | 1500.00 | 0.00 | 30 | New Hire + 7876 | 1987-05-23 00:00:00 | | CLERK | 1100.00 | | 20 | New Hire + 7900 | 1981-12-03 00:00:00 | 1983-01-14 00:00:00 | CLERK | 950.00 | | 10 | New Hire + 7900 | 1983-01-15 00:00:00 | | CLERK | 950.00 | | 30 | Changed to Dept 30 + 7902 | 1981-12-03 00:00:00 | | ANALYST | 3000.00 | | 20 | New Hire + 7934 | 1982-01-23 00:00:00 | | CLERK | 1300.00 | | 10 | New Hire +(17 rows) + +-- Now test 
the crash fix +DELETE FROM emp WHERE empno = 7934; +psql:sql/test_issue_153_fix.inc:465: INFO: Deleting employee 7934 +psql:sql/test_issue_153_fix.inc:465: INFO: ..Old salary: 1300.00 +psql:sql/test_issue_153_fix.inc:465: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7698; +psql:sql/test_issue_153_fix.inc:466: INFO: Deleting employee 7698 +psql:sql/test_issue_153_fix.inc:466: INFO: ..Old salary: 2850.00 +psql:sql/test_issue_153_fix.inc:466: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7782; +psql:sql/test_issue_153_fix.inc:467: INFO: Deleting employee 7782 +psql:sql/test_issue_153_fix.inc:467: INFO: ..Old salary: 2450.00 +psql:sql/test_issue_153_fix.inc:467: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7788; +psql:sql/test_issue_153_fix.inc:468: INFO: Deleting employee 7788 +psql:sql/test_issue_153_fix.inc:468: INFO: ..Old salary: 3000.00 +psql:sql/test_issue_153_fix.inc:468: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7838; +psql:sql/test_issue_153_fix.inc:469: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7900; +psql:sql/test_issue_153_fix.inc:470: INFO: Deleting employee 7900 +psql:sql/test_issue_153_fix.inc:470: INFO: ..Old salary: 950.00 +psql:sql/test_issue_153_fix.inc:470: INFO: User deleted employee(s) on +DELETE FROM emp WHERE empno = 7654; +psql:sql/test_issue_153_fix.inc:471: INFO: Deleting employee 7654 +psql:sql/test_issue_153_fix.inc:471: INFO: ..Old salary: 1250.00 +psql:sql/test_issue_153_fix.inc:471: INFO: User deleted employee(s) on +DELETE FROM dept WHERE deptno = 40; +SELECT * FROM emp; + empno | ename | job | mgr | hiredate | sal | comm | deptno +-------+--------+-----------+------+------------+---------+--------+-------- + 7369 | SMITH | CLERK | 7902 | 1980-12-17 | 800.00 | | 20 + 7499 | ALLEN | SALESMAN | 7698 | 1981-02-20 | 1600.00 | 300.00 | 30 + 7521 | WARD | SALESMAN | 7698 | 1981-02-22 | 1250.00 | 500.00 | 30 + 7566 | JONES | MANAGER | 7839 | 1981-04-02 | 2975.00 | | 20 + 7839 | KING | PRESIDENT | | 1981-11-17 | 5000.00 | | 10 + 7844 | TURNER | SALESMAN | 7698 | 1981-09-08 | 1500.00 | 0.00 | 30 + 7876 | ADAMS | CLERK | 7788 | 1987-05-23 | 1100.00 | | 20 + 7902 | FORD | ANALYST | 7566 | 1981-12-03 | 3000.00 | | 20 +(8 rows) + +SELECT * FROM dept; + deptno | dname | loc +--------+------------+---------- + 10 | ACCOUNTING | NEW YORK + 20 | RESEARCH | DALLAS + 30 | SALES | CHICAGO +(3 rows) + +SELECT * FROM jobhist; + empno | startdate | enddate | job | sal | comm | deptno | chgdesc +-------+---------------------+---------+-----------+---------+--------+--------+---------- + 7369 | 1980-12-17 00:00:00 | | CLERK | 800.00 | | 20 | New Hire + 7499 | 1981-02-20 00:00:00 | | SALESMAN | 1600.00 | 300.00 | 30 | New Hire + 7521 | 1981-02-22 00:00:00 | | SALESMAN | 1250.00 | 500.00 | 30 | New Hire + 7566 | 1981-04-02 00:00:00 | | MANAGER | 2975.00 | | 20 | New Hire + 7839 | 1981-11-17 00:00:00 | | PRESIDENT | 5000.00 | | 10 | New Hire + 7844 | 1981-09-08 00:00:00 | | SALESMAN | 1500.00 | 0.00 | 30 | New Hire + 7876 | 1987-05-23 00:00:00 | | CLERK | 1100.00 | | 20 | New Hire + 7902 | 1981-12-03 00:00:00 | | ANALYST | 3000.00 | | 20 | New Hire +(8 rows) + +DROP TABLE jobhist CASCADE; +DROP TABLE emp CASCADE; +psql:sql/test_issue_153_fix.inc:480: NOTICE: drop cascades to view salesemp +DROP TABLE dept CASCADE; +DROP SEQUENCE next_empno; +DROP TYPE emp_query_type; +DROP EXTENSION pg_tde CASCADE; diff --git a/contrib/pg_tde/expected/toast_decrypt.out 
b/contrib/pg_tde/expected/toast_decrypt.out new file mode 100644 index 00000000000..ac47626f09d --- /dev/null +++ b/contrib/pg_tde/expected/toast_decrypt.out @@ -0,0 +1,25 @@ +\set tde_am tde_heap +\i sql/toast_decrypt.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE src (f1 TEXT STORAGE EXTERNAL) USING :tde_am; +INSERT INTO src VALUES(repeat('abcdeF',1000)); +SELECT * FROM src; + f1 +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 
abcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabc
deFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeF +(1 row) + +DROP TABLE src; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/toast_decrypt_basic.out b/contrib/pg_tde/expected/toast_decrypt_basic.out new file mode 100644 index 00000000000..1e273d44f17 --- /dev/null +++ b/contrib/pg_tde/expected/toast_decrypt_basic.out @@ -0,0 +1,25 @@ +\set tde_am tde_heap_basic +\i sql/toast_decrypt.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE src (f1 TEXT STORAGE EXTERNAL) USING :tde_am; +INSERT INTO src VALUES(repeat('abcdeF',1000)); +SELECT * FROM src; + f1 
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 
abcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabc
deFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeFabcdeF +(1 row) + +DROP TABLE src; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/toast_extended_storage.out b/contrib/pg_tde/expected/toast_extended_storage.out new file mode 100644 index 00000000000..ce59afeeaa1 --- /dev/null +++ b/contrib/pg_tde/expected/toast_extended_storage.out @@ -0,0 +1,105 @@ +\set tde_am tde_heap +\i sql/toast_extended_storage.inc +-- test https://github.com/percona/pg_tde/issues/63 +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TEMP TABLE src (f1 text) USING :tde_am; +-- Crash on INSERT +INSERT INTO src 
+VALUES('0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.
12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.89685838695066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.
165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.234224425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.2092
94366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.40003951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388
938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.151986269119320340.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860
.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454'); +SELECT * FROM src; + f1 +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220
518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.89685838695066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609
185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.23422
4425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.209294366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.4000
3951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.15198626911932034
0.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454 +(1 row) + +DROP TABLE src; +CREATE TABLE src2 (f1 TEXT) USING :tde_am; +INSERT INTO src2 +VALUES('0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616
079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.896858386950666
30.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.03329
0356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.234224425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.209294366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.0278
4202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.40003951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96
733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.151986269119320340.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454'); +SELECT * FROM src2; + f1 
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.498089877333809
60.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.89685838695066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.974825340079
91070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.234224425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.253627174816043
90.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.209294366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.40003951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.947651110939
63530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.151986269119320340.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.859396501904
69960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454 +(1 row) + +DROP TABLE src2; +-- https://github.com/percona/pg_tde/issues/82 +CREATE TABLE indtoasttest(descr text, cnt int DEFAULT 0, f1 text, f2 text) using :tde_am; +INSERT INTO indtoasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',30000), repeat('1234567890',50000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('one-compressed,one-null', NULL, repeat('1234567890',1000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('one-toasted,one-null', NULL, repeat('1234567890',50000)); +UPDATE indtoasttest SET cnt = cnt +1 RETURNING substring(indtoasttest::text, 1, 200); + substring +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012 + (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345 + ("one-compressed,one-null",1,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 + ("one-toasted,one-null",1,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123 +(4 rows) + +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1 RETURNING substring(indtoasttest::text, 1, 200); + substring +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + (two-compressed,2,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012 + (two-toasted,2,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345 + 
("one-compressed,one-null",2,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 + ("one-toasted,one-null",2,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123 +(4 rows) + +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1||'' RETURNING substring(indtoasttest::text, 1, 200); + substring +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + (two-compressed,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012 + (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345 + ("one-compressed,one-null",3,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 + ("one-toasted,one-null",3,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123 +(4 rows) + +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1||'' RETURNING substring(indtoasttest::text, 1, 200); + substring +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + (two-compressed,4,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012 + (two-toasted,4,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345 + ("one-compressed,one-null",4,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 + ("one-toasted,one-null",4,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123 +(4 rows) + +UPDATE indtoasttest SET f2 = '+'||f2||'-' ; +DROP TABLE indtoasttest; +-- Test substr with toasted externalized bytea values +CREATE TABLE toasttest(t bytea STORAGE EXTERNAL) using :tde_am; +INSERT INTO toasttest VALUES (decode(repeat('1234567890',10000), 'escape')); +SET bytea_output = 'escape'; +SELECT substring(t, 1, 10) FROM toasttest; + substring +------------ + 1234567890 +(1 row) + +SELECT substring(t, 50001, 10) FROM toasttest; + substring +------------ + 1234567890 +(1 row) + +SELECT substring(t, 99991) FROM toasttest; + substring +------------ + 1234567890 +(1 row) + +DROP TABLE toasttest; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/toast_extended_storage_basic.out b/contrib/pg_tde/expected/toast_extended_storage_basic.out new file mode 100644 index 00000000000..b04a32983ca --- /dev/null +++ 
b/contrib/pg_tde/expected/toast_extended_storage_basic.out @@ -0,0 +1,105 @@ +\set tde_am tde_heap_basic +\i sql/toast_extended_storage.inc +-- test https://github.com/percona/pg_tde/issues/63 +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TEMP TABLE src (f1 text) USING :tde_am; +-- Crash on INSERT +INSERT INTO src +VALUES('0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.9181333311102193
0.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.89685838695066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102
060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.234224425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.3911562010177154
0.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.209294366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.40003951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.21598809000093
6550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.151986269119320340.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.4871398942438
4320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454'); +SELECT * FROM src; + f1 +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.2
18616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.8968583869
5066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.
033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.234224425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.209294366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670
.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.40003951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.2360643447070982
50.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.151986269119320340.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454 +(1 row) + +DROP TABLE src; +CREATE TABLE src2 (f1 TEXT) USING :tde_am; +INSERT INTO src2 
+VALUES('0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.
12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.89685838695066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.
165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.234224425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.2092
94366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.40003951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388
938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.151986269119320340.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860
.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454'); +SELECT * FROM src2; + f1 +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.9222
0518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.89685838695066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.6277860
9185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.2342
24425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.209294366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.400
03951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.1519862691193203
40.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454 +(1 row) + +DROP TABLE src2; +-- https://github.com/percona/pg_tde/issues/82 +CREATE TABLE indtoasttest(descr text, cnt int DEFAULT 0, f1 text, f2 text) using :tde_am; +INSERT INTO indtoasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',30000), repeat('1234567890',50000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('one-compressed,one-null', NULL, repeat('1234567890',1000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('one-toasted,one-null', NULL, repeat('1234567890',50000)); +UPDATE indtoasttest SET cnt = cnt +1 RETURNING substring(indtoasttest::text, 1, 200); + substring +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012 + (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345 + ("one-compressed,one-null",1,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 + ("one-toasted,one-null",1,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123 +(4 rows) + +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1 RETURNING substring(indtoasttest::text, 1, 200); + substring 
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + (two-compressed,2,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012 + (two-toasted,2,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345 + ("one-compressed,one-null",2,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 + ("one-toasted,one-null",2,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123 +(4 rows) + +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1||'' RETURNING substring(indtoasttest::text, 1, 200); + substring +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + (two-compressed,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012 + (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345 + ("one-compressed,one-null",3,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 + ("one-toasted,one-null",3,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123 +(4 rows) + +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1||'' RETURNING substring(indtoasttest::text, 1, 200); + substring +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + (two-compressed,4,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012 + (two-toasted,4,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345 + ("one-compressed,one-null",4,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 + ("one-toasted,one-null",4,,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123 +(4 rows) + +UPDATE indtoasttest SET f2 = '+'||f2||'-' ; +DROP TABLE indtoasttest; +-- Test substr with toasted externalized bytea values +CREATE TABLE toasttest(t bytea STORAGE EXTERNAL) using :tde_am; +INSERT INTO toasttest VALUES 
(decode(repeat('1234567890',10000), 'escape')); +SET bytea_output = 'escape'; +SELECT substring(t, 1, 10) FROM toasttest; + substring +------------ + 1234567890 +(1 row) + +SELECT substring(t, 50001, 10) FROM toasttest; + substring +------------ + 1234567890 +(1 row) + +SELECT substring(t, 99991) FROM toasttest; + substring +------------ + 1234567890 +(1 row) + +DROP TABLE toasttest; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/trigger_on_view.out b/contrib/pg_tde/expected/trigger_on_view.out new file mode 100644 index 00000000000..33cdc5f1b36 --- /dev/null +++ b/contrib/pg_tde/expected/trigger_on_view.out @@ -0,0 +1,216 @@ +\set tde_am tde_heap +\i sql/trigger_on_view.inc +CREATE extension pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +-- +-- 2 -- Test triggers on a join view +-- +SET default_table_access_method TO ':tde_am'; +psql:sql/trigger_on_view.inc:9: ERROR: invalid value for parameter "default_table_access_method": ":tde_am" +DETAIL: Table access method ":tde_am" does not exist. +DROP VIEW IF EXISTS city_view CASCADE; +psql:sql/trigger_on_view.inc:11: NOTICE: view "city_view" does not exist, skipping +DROP TABLE IF exists country_table CASCADE; +psql:sql/trigger_on_view.inc:12: NOTICE: table "country_table" does not exist, skipping +DROP TABLE IF exists city_table cascade; +psql:sql/trigger_on_view.inc:13: NOTICE: table "city_table" does not exist, skipping + CREATE TABLE country_table ( + country_id serial primary key, + country_name text unique not null, + continent text not null + ) using :tde_am; + + INSERT INTO country_table (country_name, continent) + VALUES ('Japan', 'Asia'), + ('UK', 'Europe'), + ('USA', 'North America') + RETURNING *; + country_id | country_name | continent +------------+--------------+--------------- + 1 | Japan | Asia + 2 | UK | Europe + 3 | USA | North America +(3 rows) + + + CREATE TABLE city_table ( + city_id serial primary key, + city_name text not null, + population bigint, + country_id int references country_table + ) using :tde_am; + + CREATE VIEW city_view AS + SELECT city_id, city_name, population, country_name, continent + FROM city_table ci + LEFT JOIN country_table co ON co.country_id = ci.country_id; + +CREATE OR REPLACE FUNCTION city_insert() RETURNS trigger LANGUAGE plpgsql AS $$ + declare + ctry_id int; + begin + if NEW.country_name IS NOT NULL then + SELECT country_id, continent INTO ctry_id, NEW.continent + FROM country_table WHERE country_name = NEW.country_name; + if NOT FOUND then + raise exception 'No such country: "%"', NEW.country_name; + end if; + else + NEW.continent := NULL; + end if; + + if NEW.city_id IS NOT NULL then + INSERT INTO city_table + VALUES(NEW.city_id, NEW.city_name, NEW.population, ctry_id); + else + INSERT INTO city_table(city_name, population, country_id) + VALUES(NEW.city_name, NEW.population, ctry_id) + RETURNING city_id INTO NEW.city_id; + end if; + + RETURN NEW; + end; + $$; + CREATE TRIGGER city_insert_trig INSTEAD OF INSERT ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_insert(); + + CREATE OR REPLACE FUNCTION city_delete() RETURNS trigger LANGUAGE plpgsql AS $$ + begin + DELETE FROM city_table WHERE city_id = OLD.city_id; + if NOT FOUND then RETURN NULL; end if; + RETURN OLD; + end; + $$; + + CREATE TRIGGER city_delete_trig 
INSTEAD OF DELETE ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_delete(); + + CREATE OR REPLACE FUNCTION city_update() RETURNS trigger LANGUAGE plpgsql AS $$ + declare + ctry_id int; + begin + if NEW.country_name IS DISTINCT FROM OLD.country_name then + SELECT country_id, continent INTO ctry_id, NEW.continent + FROM country_table WHERE country_name = NEW.country_name; + if NOT FOUND then + raise exception 'No such country: "%"', NEW.country_name; + end if; + + UPDATE city_table SET city_name = NEW.city_name, + population = NEW.population, + country_id = ctry_id + WHERE city_id = OLD.city_id; + else + UPDATE city_table SET city_name = NEW.city_name, + population = NEW.population + WHERE city_id = OLD.city_id; + NEW.continent := OLD.continent; + end if; + + if NOT FOUND then RETURN NULL; end if; + RETURN NEW; + end; + $$; + CREATE TRIGGER city_update_trig INSTEAD OF UPDATE ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_update(); + +-- INSERT .. RETURNING + INSERT INTO city_view(city_name) VALUES('Tokyo') RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 1 | Tokyo | | | +(1 row) + + INSERT INTO city_view(city_name, population) VALUES('London', 7556900) RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 2 | London | 7556900 | | +(1 row) + + INSERT INTO city_view(city_name, country_name) VALUES('Washington DC', 'USA') RETURNING *; + city_id | city_name | population | country_name | continent +---------+---------------+------------+--------------+--------------- + 3 | Washington DC | | USA | North America +(1 row) + + INSERT INTO city_view(city_id, city_name) VALUES(123456, 'New York') RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 123456 | New York | | | +(1 row) + + INSERT INTO city_view VALUES(234567, 'Birmingham', 1016800, 'UK', 'EU') RETURNING *; + city_id | city_name | population | country_name | continent +---------+------------+------------+--------------+----------- + 234567 | Birmingham | 1016800 | UK | Europe +(1 row) + + + -- UPDATE .. 
RETURNING + UPDATE city_view SET country_name = 'Japon' WHERE city_name = 'Tokyo'; -- error +psql:sql/trigger_on_view.inc:118: ERROR: No such country: "Japon" +CONTEXT: PL/pgSQL function city_update() line 9 at RAISE + UPDATE city_view SET country_name = 'Japan' WHERE city_name = 'Takyo'; -- no match + UPDATE city_view SET country_name = 'Japan' WHERE city_name = 'Tokyo' RETURNING *; -- OK + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 1 | Tokyo | | Japan | Asia +(1 row) + + + UPDATE city_view SET population = 13010279 WHERE city_name = 'Tokyo' RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 1 | Tokyo | 13010279 | Japan | Asia +(1 row) + + UPDATE city_view SET country_name = 'UK' WHERE city_name = 'New York' RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 123456 | New York | | UK | Europe +(1 row) + + UPDATE city_view SET country_name = 'USA', population = 8391881 WHERE city_name = 'New York' RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+--------------- + 123456 | New York | 8391881 | USA | North America +(1 row) + + UPDATE city_view SET continent = 'EU' WHERE continent = 'Europe' RETURNING *; + city_id | city_name | population | country_name | continent +---------+------------+------------+--------------+----------- + 234567 | Birmingham | 1016800 | UK | Europe +(1 row) + + UPDATE city_view v1 SET country_name = v2.country_name FROM city_view v2 + WHERE v2.city_name = 'Birmingham' AND v1.city_name = 'London' RETURNING *; + city_id | city_name | population | country_name | continent | city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+-----------+---------+------------+------------+--------------+----------- + 2 | London | 7556900 | UK | Europe | 234567 | Birmingham | 1016800 | UK | Europe +(1 row) + + + -- DELETE .. RETURNING + DELETE FROM city_view WHERE city_name = 'Birmingham' RETURNING *; + city_id | city_name | population | country_name | continent +---------+------------+------------+--------------+----------- + 234567 | Birmingham | 1016800 | UK | Europe +(1 row) + + +DROP extension pg_tde CASCADE; +psql:sql/trigger_on_view.inc:133: NOTICE: drop cascades to 3 other objects +DETAIL: drop cascades to table country_table +drop cascades to table city_table +drop cascades to view city_view diff --git a/contrib/pg_tde/expected/trigger_on_view_basic.out b/contrib/pg_tde/expected/trigger_on_view_basic.out new file mode 100644 index 00000000000..e01bd0e9f1a --- /dev/null +++ b/contrib/pg_tde/expected/trigger_on_view_basic.out @@ -0,0 +1,216 @@ +\set tde_am tde_heap_basic +\i sql/trigger_on_view.inc +CREATE extension pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +-- +-- 2 -- Test triggers on a join view +-- +SET default_table_access_method TO ':tde_am'; +psql:sql/trigger_on_view.inc:9: ERROR: invalid value for parameter "default_table_access_method": ":tde_am" +DETAIL: Table access method ":tde_am" does not exist. 
+DROP VIEW IF EXISTS city_view CASCADE; +psql:sql/trigger_on_view.inc:11: NOTICE: view "city_view" does not exist, skipping +DROP TABLE IF exists country_table CASCADE; +psql:sql/trigger_on_view.inc:12: NOTICE: table "country_table" does not exist, skipping +DROP TABLE IF exists city_table cascade; +psql:sql/trigger_on_view.inc:13: NOTICE: table "city_table" does not exist, skipping + CREATE TABLE country_table ( + country_id serial primary key, + country_name text unique not null, + continent text not null + ) using :tde_am; + + INSERT INTO country_table (country_name, continent) + VALUES ('Japan', 'Asia'), + ('UK', 'Europe'), + ('USA', 'North America') + RETURNING *; + country_id | country_name | continent +------------+--------------+--------------- + 1 | Japan | Asia + 2 | UK | Europe + 3 | USA | North America +(3 rows) + + + CREATE TABLE city_table ( + city_id serial primary key, + city_name text not null, + population bigint, + country_id int references country_table + ) using :tde_am; + + CREATE VIEW city_view AS + SELECT city_id, city_name, population, country_name, continent + FROM city_table ci + LEFT JOIN country_table co ON co.country_id = ci.country_id; + +CREATE OR REPLACE FUNCTION city_insert() RETURNS trigger LANGUAGE plpgsql AS $$ + declare + ctry_id int; + begin + if NEW.country_name IS NOT NULL then + SELECT country_id, continent INTO ctry_id, NEW.continent + FROM country_table WHERE country_name = NEW.country_name; + if NOT FOUND then + raise exception 'No such country: "%"', NEW.country_name; + end if; + else + NEW.continent := NULL; + end if; + + if NEW.city_id IS NOT NULL then + INSERT INTO city_table + VALUES(NEW.city_id, NEW.city_name, NEW.population, ctry_id); + else + INSERT INTO city_table(city_name, population, country_id) + VALUES(NEW.city_name, NEW.population, ctry_id) + RETURNING city_id INTO NEW.city_id; + end if; + + RETURN NEW; + end; + $$; + CREATE TRIGGER city_insert_trig INSTEAD OF INSERT ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_insert(); + + CREATE OR REPLACE FUNCTION city_delete() RETURNS trigger LANGUAGE plpgsql AS $$ + begin + DELETE FROM city_table WHERE city_id = OLD.city_id; + if NOT FOUND then RETURN NULL; end if; + RETURN OLD; + end; + $$; + + CREATE TRIGGER city_delete_trig INSTEAD OF DELETE ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_delete(); + + CREATE OR REPLACE FUNCTION city_update() RETURNS trigger LANGUAGE plpgsql AS $$ + declare + ctry_id int; + begin + if NEW.country_name IS DISTINCT FROM OLD.country_name then + SELECT country_id, continent INTO ctry_id, NEW.continent + FROM country_table WHERE country_name = NEW.country_name; + if NOT FOUND then + raise exception 'No such country: "%"', NEW.country_name; + end if; + + UPDATE city_table SET city_name = NEW.city_name, + population = NEW.population, + country_id = ctry_id + WHERE city_id = OLD.city_id; + else + UPDATE city_table SET city_name = NEW.city_name, + population = NEW.population + WHERE city_id = OLD.city_id; + NEW.continent := OLD.continent; + end if; + + if NOT FOUND then RETURN NULL; end if; + RETURN NEW; + end; + $$; + CREATE TRIGGER city_update_trig INSTEAD OF UPDATE ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_update(); + +-- INSERT .. 
RETURNING + INSERT INTO city_view(city_name) VALUES('Tokyo') RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 1 | Tokyo | | | +(1 row) + + INSERT INTO city_view(city_name, population) VALUES('London', 7556900) RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 2 | London | 7556900 | | +(1 row) + + INSERT INTO city_view(city_name, country_name) VALUES('Washington DC', 'USA') RETURNING *; + city_id | city_name | population | country_name | continent +---------+---------------+------------+--------------+--------------- + 3 | Washington DC | | USA | North America +(1 row) + + INSERT INTO city_view(city_id, city_name) VALUES(123456, 'New York') RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 123456 | New York | | | +(1 row) + + INSERT INTO city_view VALUES(234567, 'Birmingham', 1016800, 'UK', 'EU') RETURNING *; + city_id | city_name | population | country_name | continent +---------+------------+------------+--------------+----------- + 234567 | Birmingham | 1016800 | UK | Europe +(1 row) + + + -- UPDATE .. RETURNING + UPDATE city_view SET country_name = 'Japon' WHERE city_name = 'Tokyo'; -- error +psql:sql/trigger_on_view.inc:118: ERROR: No such country: "Japon" +CONTEXT: PL/pgSQL function city_update() line 9 at RAISE + UPDATE city_view SET country_name = 'Japan' WHERE city_name = 'Takyo'; -- no match + UPDATE city_view SET country_name = 'Japan' WHERE city_name = 'Tokyo' RETURNING *; -- OK + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 1 | Tokyo | | Japan | Asia +(1 row) + + + UPDATE city_view SET population = 13010279 WHERE city_name = 'Tokyo' RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 1 | Tokyo | 13010279 | Japan | Asia +(1 row) + + UPDATE city_view SET country_name = 'UK' WHERE city_name = 'New York' RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+----------- + 123456 | New York | | UK | Europe +(1 row) + + UPDATE city_view SET country_name = 'USA', population = 8391881 WHERE city_name = 'New York' RETURNING *; + city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+--------------- + 123456 | New York | 8391881 | USA | North America +(1 row) + + UPDATE city_view SET continent = 'EU' WHERE continent = 'Europe' RETURNING *; + city_id | city_name | population | country_name | continent +---------+------------+------------+--------------+----------- + 234567 | Birmingham | 1016800 | UK | Europe +(1 row) + + UPDATE city_view v1 SET country_name = v2.country_name FROM city_view v2 + WHERE v2.city_name = 'Birmingham' AND v1.city_name = 'London' RETURNING *; + city_id | city_name | population | country_name | continent | city_id | city_name | population | country_name | continent +---------+-----------+------------+--------------+-----------+---------+------------+------------+--------------+----------- + 2 | London | 7556900 | UK | Europe | 234567 | Birmingham | 1016800 | UK | Europe +(1 row) + + + -- DELETE .. 
RETURNING + DELETE FROM city_view WHERE city_name = 'Birmingham' RETURNING *; + city_id | city_name | population | country_name | continent +---------+------------+------------+--------------+----------- + 234567 | Birmingham | 1016800 | UK | Europe +(1 row) + + +DROP extension pg_tde CASCADE; +psql:sql/trigger_on_view.inc:133: NOTICE: drop cascades to 3 other objects +DETAIL: drop cascades to table country_table +drop cascades to table city_table +drop cascades to view city_view diff --git a/contrib/pg_tde/expected/update.out b/contrib/pg_tde/expected/update.out new file mode 100644 index 00000000000..2eadadf27ff --- /dev/null +++ b/contrib/pg_tde/expected/update.out @@ -0,0 +1,47 @@ +\set tde_am tde_heap +\i sql/update.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE update_test ( + a INT DEFAULT 10, + b INT, + c TEXT +) USING tde_heap_basic; +CREATE TABLE upsert_test ( + a INT PRIMARY KEY, + b TEXT +) USING tde_heap_basic; +INSERT INTO update_test VALUES (5, 10, 'foo'); +INSERT INTO update_test(b, a) VALUES (15, 10); +INSERT INTO upsert_test VALUES (2, 'Beeble') ON CONFLICT(a) + DO UPDATE SET (b, a) = (SELECT b || ', Excluded', a from upsert_test i WHERE i.a = excluded.a) + RETURNING tableoid::regclass, xmin = pg_current_xact_id()::xid AS xmin_correct, xmax = 0 AS xmax_correct; + tableoid | xmin_correct | xmax_correct +-------------+--------------+-------------- + upsert_test | t | t +(1 row) + +-- currently xmax is set after a conflict - that's probably not good, +-- but it seems worthwhile to have to be explicit if that changes. 
+INSERT INTO upsert_test VALUES (2, 'Brox') ON CONFLICT(a) + DO UPDATE SET (b, a) = (SELECT b || ', Excluded', a from upsert_test i WHERE i.a = excluded.a) + RETURNING tableoid::regclass, xmin = pg_current_xact_id()::xid AS xmin_correct, xmax = pg_current_xact_id()::xid AS xmax_correct; + tableoid | xmin_correct | xmax_correct +-------------+--------------+-------------- + upsert_test | t | t +(1 row) + +DROP TABLE update_test; +DROP TABLE upsert_test; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/update_basic.out b/contrib/pg_tde/expected/update_basic.out new file mode 100644 index 00000000000..46e84dabe3e --- /dev/null +++ b/contrib/pg_tde/expected/update_basic.out @@ -0,0 +1,47 @@ +\set tde_am tde_heap_basic +\i sql/update.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE update_test ( + a INT DEFAULT 10, + b INT, + c TEXT +) USING tde_heap_basic; +CREATE TABLE upsert_test ( + a INT PRIMARY KEY, + b TEXT +) USING tde_heap_basic; +INSERT INTO update_test VALUES (5, 10, 'foo'); +INSERT INTO update_test(b, a) VALUES (15, 10); +INSERT INTO upsert_test VALUES (2, 'Beeble') ON CONFLICT(a) + DO UPDATE SET (b, a) = (SELECT b || ', Excluded', a from upsert_test i WHERE i.a = excluded.a) + RETURNING tableoid::regclass, xmin = pg_current_xact_id()::xid AS xmin_correct, xmax = 0 AS xmax_correct; + tableoid | xmin_correct | xmax_correct +-------------+--------------+-------------- + upsert_test | t | t +(1 row) + +-- currently xmax is set after a conflict - that's probably not good, +-- but it seems worthwhile to have to be explicit if that changes. +INSERT INTO upsert_test VALUES (2, 'Brox') ON CONFLICT(a) + DO UPDATE SET (b, a) = (SELECT b || ', Excluded', a from upsert_test i WHERE i.a = excluded.a) + RETURNING tableoid::regclass, xmin = pg_current_xact_id()::xid AS xmin_correct, xmax = pg_current_xact_id()::xid AS xmax_correct; + tableoid | xmin_correct | xmax_correct +-------------+--------------+-------------- + upsert_test | t | t +(1 row) + +DROP TABLE update_test; +DROP TABLE upsert_test; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/update_compare_indexes.out b/contrib/pg_tde/expected/update_compare_indexes.out new file mode 100644 index 00000000000..3e21417f50c --- /dev/null +++ b/contrib/pg_tde/expected/update_compare_indexes.out @@ -0,0 +1,24 @@ +\set tde_am tde_heap +\i sql/update_compare_indexes.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +DROP TABLE IF EXISTS pvactst; +psql:sql/update_compare_indexes.inc:6: NOTICE: table "pvactst" does not exist, skipping +CREATE TABLE pvactst (i INT, a INT[], p POINT) USING :tde_am; +INSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM generate_series(1,1000) i; +CREATE INDEX spgist_pvactst ON pvactst USING spgist (p); +UPDATE pvactst SET i = i WHERE i < 1000; +-- crash! 
+DROP TABLE pvactst; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/update_compare_indexes_basic.out b/contrib/pg_tde/expected/update_compare_indexes_basic.out new file mode 100644 index 00000000000..0840810e218 --- /dev/null +++ b/contrib/pg_tde/expected/update_compare_indexes_basic.out @@ -0,0 +1,24 @@ +\set tde_am tde_heap_basic +\i sql/update_compare_indexes.inc +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); + pg_tde_add_key_provider_file +------------------------------ + 1 +(1 row) + +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +DROP TABLE IF EXISTS pvactst; +psql:sql/update_compare_indexes.inc:6: NOTICE: table "pvactst" does not exist, skipping +CREATE TABLE pvactst (i INT, a INT[], p POINT) USING :tde_am; +INSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM generate_series(1,1000) i; +CREATE INDEX spgist_pvactst ON pvactst USING spgist (p); +UPDATE pvactst SET i = i WHERE i < 1000; +-- crash! +DROP TABLE pvactst; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/vault_v2_test.out b/contrib/pg_tde/expected/vault_v2_test.out new file mode 100644 index 00000000000..b1037d7290a --- /dev/null +++ b/contrib/pg_tde/expected/vault_v2_test.out @@ -0,0 +1,49 @@ +\set tde_am tde_heap +\i sql/vault_v2_test.inc +CREATE EXTENSION pg_tde; +\getenv root_token ROOT_TOKEN +SELECT pg_tde_add_key_provider_vault_v2('vault-incorrect',:'root_token','http://127.0.0.1:8200','DUMMY-TOKEN',NULL); + pg_tde_add_key_provider_vault_v2 +---------------------------------- + 1 +(1 row) + +-- FAILS +SELECT pg_tde_set_principal_key('vault-v2-principal-key','vault-incorrect'); +psql:sql/vault_v2_test.inc:7: ERROR: Failed to store key on keyring. Please check the keyring configuration. +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +psql:sql/vault_v2_test.inc:13: ERROR: failed to retrieve principal key. Create one using pg_tde_set_principal_key before using encrypted tables. +SELECT pg_tde_add_key_provider_vault_v2('vault-v2',:'root_token','http://127.0.0.1:8200','secret',NULL); + pg_tde_add_key_provider_vault_v2 +---------------------------------- + 2 +(1 row) + +SELECT pg_tde_set_principal_key('vault-v2-principal-key','vault-v2'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +INSERT INTO test_enc (k) VALUES (1); +INSERT INTO test_enc (k) VALUES (2); +INSERT INTO test_enc (k) VALUES (3); +SELECT * from test_enc; + id | k +----+--- + 1 | 1 + 2 | 2 + 3 | 3 +(3 rows) + +DROP TABLE test_enc; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/expected/vault_v2_test_basic.out b/contrib/pg_tde/expected/vault_v2_test_basic.out new file mode 100644 index 00000000000..5fcedd36748 --- /dev/null +++ b/contrib/pg_tde/expected/vault_v2_test_basic.out @@ -0,0 +1,49 @@ +\set tde_am tde_heap_basic +\i sql/vault_v2_test.inc +CREATE EXTENSION pg_tde; +\getenv root_token ROOT_TOKEN +SELECT pg_tde_add_key_provider_vault_v2('vault-incorrect',:'root_token','http://127.0.0.1:8200','DUMMY-TOKEN',NULL); + pg_tde_add_key_provider_vault_v2 +---------------------------------- + 1 +(1 row) + +-- FAILS +SELECT pg_tde_set_principal_key('vault-v2-principal-key','vault-incorrect'); +psql:sql/vault_v2_test.inc:7: ERROR: Failed to store key on keyring. 
Please check the keyring configuration. +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +psql:sql/vault_v2_test.inc:13: ERROR: failed to retrieve principal key. Create one using pg_tde_set_principal_key before using encrypted tables. +SELECT pg_tde_add_key_provider_vault_v2('vault-v2',:'root_token','http://127.0.0.1:8200','secret',NULL); + pg_tde_add_key_provider_vault_v2 +---------------------------------- + 2 +(1 row) + +SELECT pg_tde_set_principal_key('vault-v2-principal-key','vault-v2'); + pg_tde_set_principal_key +-------------------------- + t +(1 row) + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; +INSERT INTO test_enc (k) VALUES (1); +INSERT INTO test_enc (k) VALUES (2); +INSERT INTO test_enc (k) VALUES (3); +SELECT * from test_enc; + id | k +----+--- + 1 | 1 + 2 | 2 + 3 | 3 +(3 rows) + +DROP TABLE test_enc; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/kmip-server.conf b/contrib/pg_tde/kmip-server.conf new file mode 100644 index 00000000000..7644e4b5952 --- /dev/null +++ b/contrib/pg_tde/kmip-server.conf @@ -0,0 +1,15 @@ +[server] +hostname=127.0.0.1 +port=5696 +certificate_path=/tmp/server_certificate.pem +key_path=/tmp/server_key.pem +ca_path=/tmp/root_certificate.pem +auth_suite=TLS1.2 +policy_path=/path/to/policy/file +enable_tls_client_auth=True +tls_cipher_suites= + TLS_RSA_WITH_AES_128_CBC_SHA256 + TLS_RSA_WITH_AES_256_CBC_SHA256 + TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 +logging_level=DEBUG +database_path=/tmp/pykmip.db diff --git a/contrib/pg_tde/meson.build b/contrib/pg_tde/meson.build new file mode 100644 index 00000000000..800a8b8af2d --- /dev/null +++ b/contrib/pg_tde/meson.build @@ -0,0 +1,188 @@ + +curldep = dependency('libcurl') + +pg_version = meson.project_version().substring(0,2) +src_version = 'src' + pg_version + +pg_tde_sources = files( + 'src/pg_tde.c', + 'src/transam/pg_tde_xact_handler.c', + 'src/access/pg_tde_tdemap.c', + 'src/access/pg_tde_slot.c', + src_version / 'access/pg_tdeam.c', + src_version / 'access/pg_tdeam_handler.c', + src_version / 'access/pg_tdeam_visibility.c', + src_version / 'access/pg_tdetoast.c', + src_version / 'access/pg_tde_io.c', + src_version / 'access/pg_tde_prune.c', + src_version / 'access/pg_tde_rewrite.c', + src_version / 'access/pg_tde_vacuumlazy.c', + src_version / 'access/pg_tde_visibilitymap.c', + 'src/access/pg_tde_ddl.c', + 'src/access/pg_tde_xlog.c', + 'src/access/pg_tde_xlog_encrypt.c', + + 'src/encryption/enc_tde.c', + 'src/encryption/enc_aes.c', + + 'src/keyring/keyring_curl.c', + 'src/keyring/keyring_file.c', + 'src/keyring/keyring_vault.c', + 'src/keyring/keyring_kmip.c', + 'src/keyring/keyring_kmip_ereport.c', + 'src/keyring/keyring_api.c', + + 'src/smgr/pg_tde_smgr.c', + + 'src/catalog/tde_global_space.c', + 'src/catalog/tde_keyring.c', + 'src/catalog/tde_keyring_parse_opts.c', + 'src/catalog/tde_principal_key.c', + 'src/common/pg_tde_shmem.c', + 'src/common/pg_tde_utils.c', + 'src/pg_tde_defs.c', + 'src/pg_tde_event_capture.c', +) + +incdir = include_directories(src_version / 'include', 'src/include', '.', 'src/libkmip/libkmip/include/') + +kmip = static_library( + 'kmip', + files( + 'src/libkmip/libkmip/src/kmip.c', + 'src/libkmip/libkmip/src/kmip_bio.c', + 'src/libkmip/libkmip/src/kmip_locate.c', + 'src/libkmip/libkmip/src/kmip_memset.c' + ), + c_args: [ '-w' ], # This is 3rd-party code, disable warnings completely + include_directories: incdir +) + +deps_update = 
{'dependencies': contrib_mod_args.get('dependencies') + [curldep]} + +mod_args = contrib_mod_args + deps_update + +pg_tde = shared_module('pg_tde', + pg_tde_sources, + c_pch: pch_postgres_h, + kwargs: mod_args, + include_directories: incdir, + link_whole: [kmip] +) +contrib_targets += pg_tde + +ldflags = [] +if host_system == 'darwin' + # On MacOS Shared Libraries and Loadable Modules are different things, + # so we need to pass an extra flag to the linker. + ldflags += '-bundle' +endif + +install_data( + 'pg_tde.control', + 'pg_tde--1.0-beta2.sql', + kwargs: contrib_data_args, +) + + +sql_tests = [ + 'toast_decrypt_basic', + 'toast_extended_storage_basic', + 'move_large_tuples_basic', + 'non_sorted_off_compact_basic', + 'update_compare_indexes_basic', + 'pg_tde_is_encrypted_basic', + 'test_issue_153_fix_basic', + 'multi_insert_basic', + 'keyprovider_dependency_basic', + 'trigger_on_view_basic', + 'change_access_method_basic', + 'insert_update_delete_basic', + 'tablespace_basic', + 'vault_v2_test_basic', + 'kmip_test_basic', + 'alter_index_basic', + 'merge_join_basic', + 'cache_alloc', +] + +tap_tests = [ + 't/001_basic.pl', + 't/002_rotate_key.pl', + 't/003_remote_config.pl', + 't/004_file_config.pl', + 't/005_multiple_extensions.pl', + 't/006_remote_vault_config.pl', + 't/007_access_control.pl', + 't/009_key_rotate_tablespace.pl', + ] + +if get_variable('percona_ext', false) + sql_tests += [ + 'toast_decrypt', + 'toast_extended_storage', + 'move_large_tuples', + 'non_sorted_off_compact', + 'update_compare_indexes', + 'update', + 'pg_tde_is_encrypted', + 'test_issue_153_fix', + 'multi_insert', + 'keyprovider_dependency', + 'trigger_on_view', + 'change_access_method', + 'insert_update_delete', + 'tablespace', + 'vault_v2_test', + 'kmip_test', + 'alter_index', + 'merge_join', + ] + + tap_tests += [ + 't/008_tde_heap.pl', + ] +endif + +tests += { + 'name': 'pg_tde', + 'sd': meson.current_source_dir(), + 'bd': meson.current_build_dir(), + 'regress': { + 'sql': sql_tests, + 'regress_args': ['--temp-config', files('pg_tde.conf')], + 'runningcheck': false, + }, + 'tap': { + 'tests': tap_tests }, +} + +# TODO: do not duplicate +tde_decrypt_sources = files( + 'src/access/pg_tde_tdemap.c', + 'src/access/pg_tde_xlog_encrypt.c', + 'src/catalog/tde_global_space.c', + 'src/catalog/tde_keyring.c', + 'src/catalog/tde_keyring_parse_opts.c', + 'src/catalog/tde_principal_key.c', + 'src/common/pg_tde_utils.c', + 'src/encryption/enc_aes.c', + 'src/encryption/enc_tde.c', + 'src/keyring/keyring_api.c', + 'src/keyring/keyring_curl.c', + 'src/keyring/keyring_file.c', + 'src/keyring/keyring_vault.c', + 'src/keyring/keyring_kmip.c', + 'src/keyring/keyring_kmip_ereport.c', + ) + +pg_tde_inc = incdir + +pg_tde_frontend = static_library('pg_tde_frontend', + tde_decrypt_sources, + c_pch: pch_postgres_h, + c_args: ['-DFRONTEND'], + kwargs: mod_args, + include_directories: incdir, + link_whole: [kmip] +) diff --git a/contrib/pg_tde/perf/pp-2019.csv.xz b/contrib/pg_tde/perf/pp-2019.csv.xz new file mode 100644 index 00000000000..224b589a9ee Binary files /dev/null and b/contrib/pg_tde/perf/pp-2019.csv.xz differ diff --git a/contrib/pg_tde/perf/seq_read.sh b/contrib/pg_tde/perf/seq_read.sh new file mode 100755 index 00000000000..683be98f312 --- /dev/null +++ b/contrib/pg_tde/perf/seq_read.sh @@ -0,0 +1,22 @@ +#!/usr/bin/env bash +cd "$(dirname "$0")" + +xz -d pp-2019.csv.xz +RECORDS=`wc -l pp-2019.csv` +echo "CSV entries: $RECORDS" +cp pp-2019.csv /tmp/ +createdb seq_read_test +psql seq_read_test < seq_read_prepare.sql > 
/dev/null +echo "Sequential scan read times" +echo "==========================" +echo -n "HEAP: " +HEAP=`psql seq_read_test < seq_read_run_heap.sql | grep "Execution" | tail -n 10 | cut -d " " -f 4 | paste -sd+ | bc` +echo $HEAP +echo -n "TDE: " +TDE=`psql seq_read_test < seq_read_run_tde.sql | grep "Execution" | tail -n 10 | cut -d " " -f 4 | paste -sd+ | bc` +TDE_PERC=`bc <<< "$TDE*100/$HEAP"` +echo "$TDE ($TDE_PERC%)" +echo -n "TDE_BASIC: " +TDE_BASIC=`psql seq_read_test < seq_read_run_tde_basic.sql | grep "Execution" | tail -n 10 | cut -d " " -f 4 | paste -sd+ | bc` +TDE_BASIC_PERC=`bc <<< "$TDE_BASIC*100/$HEAP"` +echo "$TDE_BASIC ($TDE_BASIC_PERC%)" diff --git a/contrib/pg_tde/perf/seq_read_prepare.sql b/contrib/pg_tde/perf/seq_read_prepare.sql new file mode 100644 index 00000000000..c9df732c9d5 --- /dev/null +++ b/contrib/pg_tde/perf/seq_read_prepare.sql @@ -0,0 +1,65 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-store','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-store'); + + +CREATE TABLE land_registry_price_paid_uk( + transaction uuid, + price numeric, + transfer_date date, + postcode text, + property_type char(1), + newly_built boolean, + duration char(1), + paon text, + saon text, + street text, + locality text, + city text, + district text, + county text, + ppd_category_type char(1), + record_status char(1)); + +CREATE TABLE land_registry_price_paid_uk_tde( + transaction uuid, + price numeric, + transfer_date date, + postcode text, + property_type char(1), + newly_built boolean, + duration char(1), + paon text, + saon text, + street text, + locality text, + city text, + district text, + county text, + ppd_category_type char(1), + record_status char(1)) USING tde_heap; + +CREATE TABLE land_registry_price_paid_uk_tde_basic( + transaction uuid, + price numeric, + transfer_date date, + postcode text, + property_type char(1), + newly_built boolean, + duration char(1), + paon text, + saon text, + street text, + locality text, + city text, + district text, + county text, + ppd_category_type char(1), + record_status char(1)) USING tde_heap_basic; + +COPY land_registry_price_paid_uk FROM '/tmp/pp-2019.csv' with (format csv, encoding 'win1252', header false, null '', quote '"', force_null (postcode, saon, paon, street, locality, city, district)); + +COPY land_registry_price_paid_uk_tde FROM '/tmp/pp-2019.csv' with (format csv, encoding 'win1252', header false, null '', quote '"', force_null (postcode, saon, paon, street, locality, city, district)); + +COPY land_registry_price_paid_uk_tde_basic FROM '/tmp/pp-2019.csv' with (format csv, encoding 'win1252', header false, null '', quote '"', force_null (postcode, saon, paon, street, locality, city, district)); diff --git a/contrib/pg_tde/perf/seq_read_run_heap.sql b/contrib/pg_tde/perf/seq_read_run_heap.sql new file mode 100644 index 00000000000..1ad94e73a22 --- /dev/null +++ b/contrib/pg_tde/perf/seq_read_run_heap.sql @@ -0,0 +1,13 @@ +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; 
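+-- Note: the query is repeated eleven times while seq_read.sh keeps only the last +-- ten "Execution Time" values, so the first run appears to serve as a cache warm-up.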
+EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk; diff --git a/contrib/pg_tde/perf/seq_read_run_tde.sql b/contrib/pg_tde/perf/seq_read_run_tde.sql new file mode 100644 index 00000000000..13c86b184ce --- /dev/null +++ b/contrib/pg_tde/perf/seq_read_run_tde.sql @@ -0,0 +1,11 @@ +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde; diff --git a/contrib/pg_tde/perf/seq_read_run_tde_basic.sql b/contrib/pg_tde/perf/seq_read_run_tde_basic.sql new file mode 100644 index 00000000000..4a90b41afe4 --- /dev/null +++ b/contrib/pg_tde/perf/seq_read_run_tde_basic.sql @@ -0,0 +1,11 @@ +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; +EXPLAIN ANALYZE SELECT * FROM land_registry_price_paid_uk_tde_basic; diff --git a/contrib/pg_tde/pg_tde--1.0-beta2.sql b/contrib/pg_tde/pg_tde--1.0-beta2.sql new file mode 100644 index 00000000000..e4d06a824fd --- /dev/null +++ b/contrib/pg_tde/pg_tde--1.0-beta2.sql @@ -0,0 +1,561 @@ +/* contrib/pg_tde/pg_tde--1.0-beta2.sql */ + +-- complain if script is sourced in psql, rather than via CREATE EXTENSION +\echo Use "CREATE EXTENSION pg_tde" to load this file. \quit + +CREATE TYPE PG_TDE_GLOBAL AS ENUM('PG_TDE_GLOBAL'); + +-- Key Provider Management +CREATE FUNCTION pg_tde_add_key_provider_internal(provider_type VARCHAR(10), provider_name VARCHAR(128), options JSON, is_global BOOLEAN) +RETURNS INT +AS 'MODULE_PATHNAME' +LANGUAGE C; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider(provider_type VARCHAR(10), provider_name VARCHAR(128), options JSON) +RETURNS INT +AS $$ + SELECT pg_tde_add_key_provider_internal(provider_type, provider_name, options, FALSE); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_file(provider_name VARCHAR(128), file_path TEXT) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_file_keyring_provider_options function. 
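+-- +-- Usage sketch (values borrowed from this patch's regression tests; any real +-- deployment should pick its own provider name and a path the server can write to): +--   SELECT pg_tde_add_key_provider_file('file-vault', '/tmp/pg_tde_test_keyring.per'); +-- which records the provider options as {"type": "file", "path": "/tmp/pg_tde_test_keyring.per"}.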
+ + SELECT pg_tde_add_key_provider('file', provider_name, + json_object('type' VALUE 'file', 'path' VALUE COALESCE(file_path, ''))); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_file(provider_name VARCHAR(128), file_path JSON) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_file_keyring_provider_options function. + + SELECT pg_tde_add_key_provider('file', provider_name, + json_object('type' VALUE 'file', 'path' VALUE file_path)); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_vault_v2(provider_name VARCHAR(128), + vault_token TEXT, + vault_url TEXT, + vault_mount_path TEXT, + vault_ca_path TEXT) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_vaultV2_keyring_provider_options function. + SELECT pg_tde_add_key_provider('vault-v2', provider_name, + json_object('type' VALUE 'vault-v2', + 'url' VALUE COALESCE(vault_url,''), + 'token' VALUE COALESCE(vault_token,''), + 'mountPath' VALUE COALESCE(vault_mount_path,''), + 'caPath' VALUE COALESCE(vault_ca_path,''))); +$$ +LANGUAGE SQL; +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_vault_v2(provider_name VARCHAR(128), + vault_token JSON, + vault_url JSON, + vault_mount_path JSON, + vault_ca_path JSON) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_vaultV2_keyring_provider_options function. + SELECT pg_tde_add_key_provider('vault-v2', provider_name, + json_object('type' VALUE 'vault-v2', + 'url' VALUE vault_url, + 'token' VALUE vault_token, + 'mountPath' VALUE vault_mount_path, + 'caPath' VALUE vault_ca_path)); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_kmip(provider_name VARCHAR(128), + kmip_host TEXT, + kmip_port INT, + kmip_ca_path TEXT, + kmip_cert_path TEXT) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_kmip_keyring_provider_options function. + SELECT pg_tde_add_key_provider('kmip', provider_name, + json_object('type' VALUE 'kmip', + 'host' VALUE COALESCE(kmip_host,''), + 'port' VALUE kmip_port, + 'caPath' VALUE COALESCE(kmip_ca_path,''), + 'certPath' VALUE COALESCE(kmip_cert_path,''))); +$$ +LANGUAGE SQL; +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_kmip(provider_name VARCHAR(128), + kmip_host JSON, + kmip_port JSON, + kmip_ca_path JSON, + kmip_cert_path JSON) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_kmip_keyring_provider_options function. + SELECT pg_tde_add_key_provider('kmip', provider_name, + json_object('type' VALUE 'kmip', + 'host' VALUE kmip_host, + 'port' VALUE kmip_port, + 'caPath' VALUE kmip_ca_path, + 'certPath' VALUE kmip_cert_path)); +$$ +LANGUAGE SQL; + +CREATE FUNCTION pg_tde_list_all_key_providers + (OUT id INT, + OUT provider_name VARCHAR(128), + OUT provider_type VARCHAR(10), + OUT options JSON) +RETURNS SETOF record +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT VOLATILE; + +-- Global Tablespace Key Provider Management +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider(PG_TDE_GLOBAL, provider_type VARCHAR(10), provider_name VARCHAR(128), options JSON) +RETURNS INT +AS $$ + SELECT pg_tde_add_key_provider_internal(provider_type, provider_name, options, TRUE); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_file(PG_TDE_GLOBAL, provider_name VARCHAR(128), file_path TEXT) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_file_keyring_provider_options function. 
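+-- +-- Usage sketch for the global-tablespace variant (hypothetical name and path; the +-- leading PG_TDE_GLOBAL literal is what selects the global rather than per-database scope): +--   SELECT pg_tde_add_key_provider_file('PG_TDE_GLOBAL', 'global-keyring', '/tmp/pg_tde_global_keyring.per');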
+ + SELECT pg_tde_add_key_provider('PG_TDE_GLOBAL', 'file', provider_name, + json_object('type' VALUE 'file', 'path' VALUE COALESCE(file_path, ''))); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_file(PG_TDE_GLOBAL, provider_name VARCHAR(128), file_path JSON) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_file_keyring_provider_options function. + + SELECT pg_tde_add_key_provider('PG_TDE_GLOBAL', 'file', provider_name, + json_object('type' VALUE 'file', 'path' VALUE file_path)); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_vault_v2(PG_TDE_GLOBAL, + provider_name VARCHAR(128), + vault_token TEXT, + vault_url TEXT, + vault_mount_path TEXT, + vault_ca_path TEXT) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_vaultV2_keyring_provider_options function. + SELECT pg_tde_add_key_provider('PG_TDE_GLOBAL', 'vault-v2', provider_name, + json_object('type' VALUE 'vault-v2', + 'url' VALUE COALESCE(vault_url,''), + 'token' VALUE COALESCE(vault_token,''), + 'mountPath' VALUE COALESCE(vault_mount_path,''), + 'caPath' VALUE COALESCE(vault_ca_path,''))); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_vault_v2(PG_TDE_GLOBAL, + provider_name VARCHAR(128), + vault_token JSON, + vault_url JSON, + vault_mount_path JSON, + vault_ca_path JSON) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_vaultV2_keyring_provider_options function. + SELECT pg_tde_add_key_provider('PG_TDE_GLOBAL', 'vault-v2', provider_name, + json_object('type' VALUE 'vault-v2', + 'url' VALUE vault_url, + 'token' VALUE vault_token, + 'mountPath' VALUE vault_mount_path, + 'caPath' VALUE vault_ca_path)); +$$ +LANGUAGE SQL; + +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_kmip(PG_TDE_GLOBAL, + provider_name VARCHAR(128), + kmip_host TEXT, + kmip_port INT, + kmip_ca_path TEXT, + kmip_cert_path TEXT) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_kmip_keyring_provider_options function. + SELECT pg_tde_add_key_provider('PG_TDE_GLOBAL', 'kmip', provider_name, + json_object('type' VALUE 'kmip', + 'host' VALUE COALESCE(kmip_host,''), + 'port' VALUE kmip_port, + 'caPath' VALUE COALESCE(kmip_ca_path,''), + 'certPath' VALUE COALESCE(kmip_cert_path,''))); +$$ +LANGUAGE SQL; +CREATE OR REPLACE FUNCTION pg_tde_add_key_provider_kmip(PG_TDE_GLOBAL, + provider_name VARCHAR(128), + kmip_host JSON, + kmip_port JSON, + kmip_ca_path JSON, + kmip_cert_path JSON) +RETURNS INT +AS $$ +-- JSON keys in the options must be matched to the keys in +-- load_kmip_keyring_provider_options function. 
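+-- +-- Usage sketch (host, port and CA path mirror the pykmip-server.conf bundled with +-- this patch; the client certificate path is hypothetical): +--   SELECT pg_tde_add_key_provider_kmip('PG_TDE_GLOBAL', 'kmip-keyring', +--       '127.0.0.1', 5696, '/tmp/root_certificate.pem', '/tmp/client_certificate.pem');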
SELECT pg_tde_add_key_provider('PG_TDE_GLOBAL', 'kmip', provider_name, + json_object('type' VALUE 'kmip', + 'host' VALUE kmip_host, + 'port' VALUE kmip_port, + 'caPath' VALUE kmip_ca_path, + 'certPath' VALUE kmip_cert_path)); +$$ +LANGUAGE SQL; + +-- Table access method +CREATE FUNCTION pg_tdeam_basic_handler(internal) +RETURNS table_am_handler +AS 'MODULE_PATHNAME' +LANGUAGE C; + +CREATE FUNCTION pg_tde_internal_has_key(oid OID) +RETURNS boolean +AS 'MODULE_PATHNAME' +LANGUAGE C; + +CREATE FUNCTION pg_tde_is_encrypted(table_name VARCHAR) +RETURNS boolean +AS $$ +SELECT EXISTS ( + SELECT 1 + FROM pg_catalog.pg_class + WHERE oid = table_name::regclass::oid + AND (relam = (SELECT oid FROM pg_catalog.pg_am WHERE amname = 'tde_heap_basic') + OR (relam = (SELECT oid FROM pg_catalog.pg_am WHERE amname = 'tde_heap') + AND pg_tde_internal_has_key(table_name::regclass::oid))) + )$$ +LANGUAGE SQL; + +CREATE FUNCTION pg_tde_rotate_principal_key_internal(new_principal_key_name VARCHAR(255) DEFAULT NULL, new_provider_name VARCHAR(255) DEFAULT NULL, ensure_new_key BOOLEAN DEFAULT TRUE, is_global BOOLEAN DEFAULT FALSE) +RETURNS boolean +AS 'MODULE_PATHNAME' +LANGUAGE C; + +CREATE FUNCTION pg_tde_rotate_principal_key(new_principal_key_name VARCHAR(255) DEFAULT NULL, new_provider_name VARCHAR(255) DEFAULT NULL) +RETURNS boolean +AS $$ + SELECT pg_tde_rotate_principal_key_internal(new_principal_key_name, new_provider_name, TRUE, FALSE); +$$ +LANGUAGE SQL; + +CREATE FUNCTION pg_tde_rotate_principal_key(PG_TDE_GLOBAL, new_principal_key_name VARCHAR(255) DEFAULT NULL, new_provider_name VARCHAR(255) DEFAULT NULL) +RETURNS boolean +AS $$ + SELECT pg_tde_rotate_principal_key_internal(new_principal_key_name, new_provider_name, TRUE, TRUE); +$$ +LANGUAGE SQL; + +CREATE FUNCTION pg_tde_set_principal_key(principal_key_name VARCHAR(255), provider_name VARCHAR(255), ensure_new_key BOOLEAN DEFAULT FALSE) +RETURNS boolean +AS 'MODULE_PATHNAME' +LANGUAGE C; + +CREATE FUNCTION pg_tde_alter_principal_key_keyring(new_provider_name VARCHAR(255)) +RETURNS boolean +AS 'MODULE_PATHNAME' +LANGUAGE C; + +CREATE FUNCTION pg_tde_extension_initialize() +RETURNS VOID +AS 'MODULE_PATHNAME' +LANGUAGE C; + +CREATE FUNCTION pg_tde_principal_key_info_internal(is_global BOOLEAN) +RETURNS TABLE ( principal_key_name text, + key_provider_name text, + key_provider_id integer, + principal_key_internal_name text, + principal_key_version integer, + key_creation_time timestamp with time zone) +AS 'MODULE_PATHNAME' +LANGUAGE C; + +CREATE FUNCTION pg_tde_principal_key_info() +RETURNS TABLE ( principal_key_name text, + key_provider_name text, + key_provider_id integer, + principal_key_internal_name text, + principal_key_version integer, + key_creation_time timestamp with time zone) +AS $$ + SELECT pg_tde_principal_key_info_internal(FALSE); +$$ +LANGUAGE SQL; + +CREATE FUNCTION pg_tde_principal_key_info(PG_TDE_GLOBAL) +RETURNS TABLE ( principal_key_name text, + key_provider_name text, + key_provider_id integer, + principal_key_internal_name text, + principal_key_version integer, + key_creation_time timestamp with time zone) +AS $$ + SELECT pg_tde_principal_key_info_internal(TRUE); +$$ +LANGUAGE SQL; + +CREATE FUNCTION pg_tde_version() RETURNS TEXT AS 'MODULE_PATHNAME' LANGUAGE C; + +-- Access method +CREATE ACCESS METHOD tde_heap_basic TYPE TABLE HANDLER pg_tdeam_basic_handler; +COMMENT ON ACCESS METHOD tde_heap_basic IS 'pg_tde table access method'; + +DO $$ + BEGIN + -- Table access method + CREATE FUNCTION pg_tdeam_handler(internal) + 
RETURNS table_am_handler + AS 'MODULE_PATHNAME' + LANGUAGE C; + + CREATE ACCESS METHOD tde_heap TYPE TABLE HANDLER pg_tdeam_handler; + COMMENT ON ACCESS METHOD tde_heap IS 'tde_heap table access method'; + + CREATE OR REPLACE FUNCTION pg_tde_ddl_command_start_capture() + RETURNS event_trigger + AS 'MODULE_PATHNAME' + LANGUAGE C; + + CREATE OR REPLACE FUNCTION pg_tde_ddl_command_end_capture() + RETURNS event_trigger + AS 'MODULE_PATHNAME' + LANGUAGE C; + + CREATE EVENT TRIGGER pg_tde_trigger_create_index + ON ddl_command_start + EXECUTE FUNCTION pg_tde_ddl_command_start_capture(); + ALTER EVENT TRIGGER pg_tde_trigger_create_index ENABLE ALWAYS; + + CREATE EVENT TRIGGER pg_tde_trigger_create_index_2 + ON ddl_command_end + EXECUTE FUNCTION pg_tde_ddl_command_end_capture(); + ALTER EVENT TRIGGER pg_tde_trigger_create_index_2 ENABLE ALWAYS; + EXCEPTION WHEN OTHERS THEN + NULL; + END; +$$; + +-- Per database extension initialization +SELECT pg_tde_extension_initialize(); + + +CREATE OR REPLACE FUNCTION pg_tde_grant_execute_privilege_on_function( + target_user_or_role TEXT, + target_function_name TEXT, + target_function_args TEXT +) +RETURNS BOOLEAN AS $$ +DECLARE + grant_query TEXT; +BEGIN + -- Construct the GRANT statement + grant_query := format('GRANT EXECUTE ON FUNCTION %I(%s) TO %I;', + target_function_name, target_function_args, target_user_or_role); + + -- Execute the GRANT statement + EXECUTE grant_query; + -- If execution reaches here, it means the query was successful + RETURN TRUE; + +END; +$$ LANGUAGE plpgsql; + +CREATE OR REPLACE FUNCTION pg_tde_revoke_execute_privilege_on_function( + target_user_or_role TEXT, + target_function_name TEXT, + argument_types TEXT +) +RETURNS BOOLEAN AS $$ +DECLARE + revoke_query TEXT; +BEGIN + -- Construct the REVOKE statement + revoke_query := format('REVOKE EXECUTE ON FUNCTION %I(%s) FROM %I;', + target_function_name, argument_types, target_user_or_role); + + -- Execute the REVOKE statement + EXECUTE revoke_query; + + -- If execution reaches here, it means the query was successful + RETURN TRUE; +END; +$$ LANGUAGE plpgsql; + + +CREATE OR REPLACE FUNCTION pg_tde_grant_key_management_to_role( + target_user_or_role TEXT) +RETURNS BOOLEAN +LANGUAGE plpgsql +AS $$ +BEGIN + -- Start the transaction block for performing grants + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_file', 'pg_tde_global, varchar, json'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_file', 'pg_tde_global, varchar, text'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_file', 'varchar, json'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_file', 'varchar, text'); + + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_internal', 'varchar, varchar, JSON, BOOLEAN'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider', 'varchar, varchar, JSON'); + + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_vault_v2', 'pg_tde_global, varchar, text, text,text,text'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_vault_v2', 'pg_tde_global, varchar, JSON, JSON,JSON,JSON'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_vault_v2', 'varchar, text, 
text,text,text'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_vault_v2', 'varchar, JSON, JSON,JSON,JSON'); + + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_set_principal_key', 'varchar, varchar, BOOLEAN'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_alter_principal_key_keyring', 'varchar'); + + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_rotate_principal_key', 'pg_tde_global, varchar, varchar'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_rotate_principal_key', 'varchar, varchar'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_rotate_principal_key_internal', 'varchar, varchar, BOOLEAN, BOOLEAN'); + + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_grant_key_management_to_role', 'TEXT'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_revoke_key_management_from_role', 'TEXT'); + + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_grant_key_viewer_to_role', 'TEXT'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_revoke_key_viewer_from_role', 'TEXT'); + + PERFORM pg_tde_grant_key_viewer_to_role(target_user_or_role); + + RETURN TRUE; + +EXCEPTION + -- If any error occurs, re-raise the error to roll back the transaction + WHEN OTHERS THEN + RAISE; +END; +$$; + +CREATE OR REPLACE FUNCTION pg_tde_grant_key_viewer_to_role( + target_user_or_role TEXT) +RETURNS BOOLEAN +LANGUAGE plpgsql +AS $$ +BEGIN + -- Start the transaction block for performing grants + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_list_all_key_providers', 'OUT INT, OUT varchar, OUT varchar, OUT JSON'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_is_encrypted', 'VARCHAR'); + + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_principal_key_info_internal', 'BOOLEAN'); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_principal_key_info', ''); + PERFORM pg_tde_grant_execute_privilege_on_function(target_user_or_role, 'pg_tde_principal_key_info', 'pg_tde_global'); + -- If all statements succeed, return TRUE + RETURN TRUE; + +EXCEPTION + -- If any error occurs, re-raise the error to roll back the transaction + WHEN OTHERS THEN + RAISE; +END; +$$; + + + +CREATE OR REPLACE FUNCTION pg_tde_revoke_key_management_from_role( + target_user_or_role TEXT) +RETURNS BOOLEAN +LANGUAGE plpgsql +AS $$ +BEGIN + -- Start the transaction block for performing grants + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_file', 'pg_tde_global, varchar, json'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_file', 'pg_tde_global, varchar, text'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_file', 'varchar, json'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_file', 'varchar, text'); + + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_internal', 'varchar, varchar, JSON, BOOLEAN'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider', 'varchar, varchar, JSON'); 
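+ -- The argument-type lists in these PERFORM calls must resolve to the exact + -- signatures created earlier in this script; a mismatch makes the underlying + -- REVOKE raise an error, which the EXCEPTION block re-raises, aborting the call.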
+ + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_vault_v2', 'pg_tde_global, varchar, text, text,text,text'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_vault_v2', 'pg_tde_global, varchar, JSON, JSON,JSON,JSON'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_vault_v2', 'varchar, text, text,text,text'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_add_key_provider_vault_v2', 'varchar, JSON, JSON,JSON,JSON'); + + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_set_principal_key', 'varchar, varchar, BOOLEAN'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_alter_principal_key_keyring', 'varchar'); + + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_rotate_principal_key', 'pg_tde_global, varchar, varchar'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_rotate_principal_key', 'varchar, varchar'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_rotate_principal_key_internal', 'varchar, varchar, BOOLEAN, BOOLEAN'); + + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_grant_key_management_to_role', 'TEXT'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_revoke_key_management_from_role', 'TEXT'); + + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_grant_key_viewer_to_role', 'TEXT'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_revoke_key_viewer_from_role', 'TEXT'); + + -- If all statements succeed, return TRUE + RETURN TRUE; + +EXCEPTION + -- If any error occurs, re-raise the error to roll back the transaction + WHEN OTHERS THEN + RAISE; +END; +$$; + +CREATE OR REPLACE FUNCTION pg_tde_revoke_key_viewer_from_role( + target_user_or_role TEXT) +RETURNS BOOLEAN +LANGUAGE plpgsql +AS $$ +BEGIN + -- Start the transaction block for performing grants + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_list_all_key_providers', 'OUT INT, OUT varchar, OUT varchar, OUT JSON'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_is_encrypted', 'VARCHAR'); + + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_principal_key_info_internal', 'BOOLEAN'); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_principal_key_info', ''); + PERFORM pg_tde_revoke_execute_privilege_on_function(target_user_or_role, 'pg_tde_principal_key_info', 'pg_tde_global'); + -- If all statements succeed, return TRUE + RETURN TRUE; + +EXCEPTION + -- If any error occurs, re-raise the error to roll back the transaction + WHEN OTHERS THEN + RAISE; +END; +$$; + +-- Revoking all the privileges from the public role +SELECT pg_tde_revoke_key_management_from_role('public'); +SELECT pg_tde_revoke_key_viewer_from_role('public'); diff --git a/contrib/pg_tde/pg_tde.conf b/contrib/pg_tde/pg_tde.conf new file mode 100644 index 00000000000..f4da5151ed0 --- /dev/null +++ b/contrib/pg_tde/pg_tde.conf @@ -0,0 +1 @@ +shared_preload_libraries = 'pg_tde' diff --git a/contrib/pg_tde/pg_tde.control b/contrib/pg_tde/pg_tde.control new file mode 100644 index 00000000000..b36c142990b --- /dev/null +++ b/contrib/pg_tde/pg_tde.control @@ 
-0,0 +1,5 @@ +# pg_tde extension +comment = 'pg_tde access method' +default_version = '1.0-beta2' +module_pathname = '$libdir/pg_tde' +relocatable = true diff --git a/contrib/pg_tde/pgindent_excludes b/contrib/pg_tde/pgindent_excludes new file mode 100644 index 00000000000..fc4b65231fa --- /dev/null +++ b/contrib/pg_tde/pgindent_excludes @@ -0,0 +1,7 @@ + +# List of filename patterns to exclude from pgindent runs +# +# This contains code copied from postgres tree as is and slightly modified. +# We don't want to run pgindent on these files to avoid unnecessary conflicts. +src\d\d/ + diff --git a/contrib/pg_tde/pykmip-server.conf b/contrib/pg_tde/pykmip-server.conf new file mode 100644 index 00000000000..7644e4b5952 --- /dev/null +++ b/contrib/pg_tde/pykmip-server.conf @@ -0,0 +1,15 @@ +[server] +hostname=127.0.0.1 +port=5696 +certificate_path=/tmp/server_certificate.pem +key_path=/tmp/server_key.pem +ca_path=/tmp/root_certificate.pem +auth_suite=TLS1.2 +policy_path=/path/to/policy/file +enable_tls_client_auth=True +tls_cipher_suites= + TLS_RSA_WITH_AES_128_CBC_SHA256 + TLS_RSA_WITH_AES_256_CBC_SHA256 + TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 +logging_level=DEBUG +database_path=/tmp/pykmip.db diff --git a/contrib/pg_tde/sql/alter_index.inc b/contrib/pg_tde/sql/alter_index.inc new file mode 100644 index 00000000000..d7f1a7f7157 --- /dev/null +++ b/contrib/pg_tde/sql/alter_index.inc @@ -0,0 +1,37 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +SET default_table_access_method = :"tde_am"; + +CREATE TABLE concur_reindex_part (c1 int, c2 int) PARTITION BY RANGE (c1); +CREATE TABLE concur_reindex_part_0 PARTITION OF concur_reindex_part + FOR VALUES FROM (0) TO (10) PARTITION BY list (c2); +CREATE TABLE concur_reindex_part_0_1 PARTITION OF concur_reindex_part_0 + FOR VALUES IN (1); +CREATE TABLE concur_reindex_part_0_2 PARTITION OF concur_reindex_part_0 + FOR VALUES IN (2); +-- This partitioned table will have no partitions. +CREATE TABLE concur_reindex_part_10 PARTITION OF concur_reindex_part + FOR VALUES FROM (10) TO (20) PARTITION BY list (c2); +-- Create some partitioned indexes +CREATE INDEX concur_reindex_part_index ON ONLY concur_reindex_part (c1); +CREATE INDEX concur_reindex_part_index_0 ON ONLY concur_reindex_part_0 (c1); +ALTER INDEX concur_reindex_part_index ATTACH PARTITION concur_reindex_part_index_0; +-- This partitioned index will have no partitions. 
+CREATE INDEX concur_reindex_part_index_10 ON ONLY concur_reindex_part_10 (c1);
+ALTER INDEX concur_reindex_part_index ATTACH PARTITION concur_reindex_part_index_10;
+CREATE INDEX concur_reindex_part_index_0_1 ON ONLY concur_reindex_part_0_1 (c1);
+ALTER INDEX concur_reindex_part_index_0 ATTACH PARTITION concur_reindex_part_index_0_1;
+CREATE INDEX concur_reindex_part_index_0_2 ON ONLY concur_reindex_part_0_2 (c1);
+ALTER INDEX concur_reindex_part_index_0 ATTACH PARTITION concur_reindex_part_index_0_2;
+SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index')
+  ORDER BY relid, level;
+SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index')
+  ORDER BY relid, level;
+SELECT relid, parentrelid, level FROM pg_partition_tree('concur_reindex_part_index')
+  ORDER BY relid, level;
+DROP TABLE concur_reindex_part;
+DROP EXTENSION pg_tde;
+RESET default_table_access_method;
diff --git a/contrib/pg_tde/sql/alter_index.sql b/contrib/pg_tde/sql/alter_index.sql
new file mode 100644
index 00000000000..fce0cfab29b
--- /dev/null
+++ b/contrib/pg_tde/sql/alter_index.sql
@@ -0,0 +1,2 @@
+\set tde_am tde_heap
+\i sql/alter_index.inc
diff --git a/contrib/pg_tde/sql/alter_index_basic.sql b/contrib/pg_tde/sql/alter_index_basic.sql
new file mode 100644
index 00000000000..5689c74055c
--- /dev/null
+++ b/contrib/pg_tde/sql/alter_index_basic.sql
@@ -0,0 +1,2 @@
+\set tde_am tde_heap_basic
+\i sql/alter_index.inc
diff --git a/contrib/pg_tde/sql/cache_alloc.sql b/contrib/pg_tde/sql/cache_alloc.sql
new file mode 100644
index 00000000000..de791ec13dc
--- /dev/null
+++ b/contrib/pg_tde/sql/cache_alloc.sql
@@ -0,0 +1,17 @@
+-- We test the cache, so the access method doesn't matter.
+-- We're just checking that there are no memory-debug WARNINGs during cache population.
+
+CREATE EXTENSION pg_tde;
+
+SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per');
+SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault');
+
+do $$
+  DECLARE idx integer;
+begin
+  for idx in 0..700 loop
+    EXECUTE format('CREATE TABLE t%s (c1 int) USING tde_heap_basic', idx);
+  end loop;
+end; $$;
+
+DROP EXTENSION pg_tde cascade;
diff --git a/contrib/pg_tde/sql/change_access_method.inc b/contrib/pg_tde/sql/change_access_method.inc
new file mode 100644
index 00000000000..0849e681c6d
--- /dev/null
+++ b/contrib/pg_tde/sql/change_access_method.inc
@@ -0,0 +1,43 @@
+CREATE EXTENSION pg_tde;
+
+SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per');
+SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault');
+
+CREATE TABLE country_table (
+    country_id serial primary key,
+    country_name text unique not null,
+    continent text not null
+) using :tde_am;
+
+INSERT INTO country_table (country_name, continent)
+  VALUES ('Japan', 'Asia'),
+         ('UK', 'Europe'),
+         ('USA', 'North America');
+
+SELECT * FROM country_table;
+
+SELECT pg_tde_is_encrypted('country_table');
+
+-- Try changing the encrypted table to an unencrypted table
+ALTER TABLE country_table SET access method heap;
+-- Insert some more data
+INSERT INTO country_table (country_name, continent)
+  VALUES ('France', 'Europe'),
+         ('Germany', 'Europe'),
+         ('Canada', 'North America');
+
+SELECT * FROM country_table;
+SELECT pg_tde_is_encrypted('country_table');
+
+-- Change it back to encrypted
+ALTER TABLE country_table SET access method :tde_am;
+
+INSERT INTO country_table (country_name, continent)
+  VALUES ('China', 'Asia'),
+         ('Brazil', 'South America'),
+         ('Australia', 'Oceania');
+SELECT * FROM
country_table; +SELECT pg_tde_is_encrypted('country_table'); + +DROP TABLE country_table; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/change_access_method.sql b/contrib/pg_tde/sql/change_access_method.sql new file mode 100644 index 00000000000..e9c1d765e42 --- /dev/null +++ b/contrib/pg_tde/sql/change_access_method.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/change_access_method.inc diff --git a/contrib/pg_tde/sql/change_access_method_basic.sql b/contrib/pg_tde/sql/change_access_method_basic.sql new file mode 100644 index 00000000000..9cd4a58eaf7 --- /dev/null +++ b/contrib/pg_tde/sql/change_access_method_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/change_access_method.inc diff --git a/contrib/pg_tde/sql/insert_update_delete.inc b/contrib/pg_tde/sql/insert_update_delete.inc new file mode 100644 index 00000000000..23acc27991f --- /dev/null +++ b/contrib/pg_tde/sql/insert_update_delete.inc @@ -0,0 +1,41 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +CREATE TABLE albums ( + id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, + artist VARCHAR(256), + title TEXT NOT NULL, + released DATE NOT NULL +) USING :tde_am; + +INSERT INTO albums (artist, title, released) VALUES + ('Graindelavoix', 'Jisquin The Undead', '2021-06-12'), + ('Graindelavoix', 'Tenebrae Responsoria - Carlo Gesualdo', '2019-08-06'), + ('Graindelavoix', 'Cypriot Vespers', '2015-12-20'), + ('John Coltrane', 'Blue Train', '1957-09-15'), + ('V/A Analog Africa', 'Space Echo - The Mystery Behind the Cosmic Sound of Cabo Verde Finally Revealed', '2016-05-27'), + ('Incapacitants', 'As Loud As Possible', '2022-09-15'), + ('Chris Corsano & Bill Orcutt', 'Made Out Of Sound', '2021-03-26'), + ('Jürg Frey (Quatuor Bozzini / Konus Quartett)', 'Continuit​é​, fragilit​é​, r​é​sonance', '2023-04-01'), + ('clipping.', 'Visions of Bodies Being Burned', '2020-10-23'), + ('clipping.', 'There Existed an Addiction to Blood', '2019-10-19'), + ('Autechre', 'elseq 1–5', '2016-05-19'), + ('Decapitated', 'Winds of Creation', '2000-04-17'), + ('Ulthar', 'Anthronomicon', '2023-02-17'), + ('Τζίμης Πανούσης', 'Κάγκελα Παντού', '1986-01-01'), + ('Воплі Відоплясова', 'Музіка', '1997-01-01'); + +SELECT * FROM albums; + +DELETE FROM albums WHERE id % 4 = 0; +SELECT * FROM albums; + +UPDATE albums SET title='Jisquin The Undead: Laments, Deplorations and Dances of Death', released='2021-10-01' WHERE id=1; +UPDATE albums SET released='2020-04-01' WHERE id=2; + +SELECT * FROM albums; + +DROP TABLE albums; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/insert_update_delete.sql b/contrib/pg_tde/sql/insert_update_delete.sql new file mode 100644 index 00000000000..76a81e26619 --- /dev/null +++ b/contrib/pg_tde/sql/insert_update_delete.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/insert_update_delete.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/insert_update_delete_basic.sql b/contrib/pg_tde/sql/insert_update_delete_basic.sql new file mode 100644 index 00000000000..c77ba2733b9 --- /dev/null +++ b/contrib/pg_tde/sql/insert_update_delete_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/insert_update_delete.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/keyprovider_dependency.inc b/contrib/pg_tde/sql/keyprovider_dependency.inc new file mode 100644 index 00000000000..26575ecdb85 --- /dev/null +++ 
b/contrib/pg_tde/sql/keyprovider_dependency.inc @@ -0,0 +1,11 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('mk-file','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_add_key_provider_file('free-file','/tmp/pg_tde_test_keyring_2.per'); +SELECT pg_tde_add_key_provider_vault_v2('V2-vault','vault-token','percona.com/vault-v2/percona','/mount/dev','ca-cert-auth'); + +SELECT * FROM pg_tde_list_all_key_providers(); + +SELECT pg_tde_set_principal_key('test-db-principal-key','mk-file'); + +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/keyprovider_dependency.sql b/contrib/pg_tde/sql/keyprovider_dependency.sql new file mode 100644 index 00000000000..03eebf0d41b --- /dev/null +++ b/contrib/pg_tde/sql/keyprovider_dependency.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/keyprovider_dependency.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/keyprovider_dependency_basic.sql b/contrib/pg_tde/sql/keyprovider_dependency_basic.sql new file mode 100644 index 00000000000..9832915c4d8 --- /dev/null +++ b/contrib/pg_tde/sql/keyprovider_dependency_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/keyprovider_dependency.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/kmip_test.inc b/contrib/pg_tde/sql/kmip_test.inc new file mode 100644 index 00000000000..e748b862c09 --- /dev/null +++ b/contrib/pg_tde/sql/kmip_test.inc @@ -0,0 +1,20 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_kmip('kmip-prov','127.0.0.1', 5696, '/tmp/server_certificate.pem', '/tmp/client_key_jane_doe.pem'); +SELECT pg_tde_set_principal_key('kmip-principal-key','kmip-prov'); + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; + +INSERT INTO test_enc (k) VALUES (1); +INSERT INTO test_enc (k) VALUES (2); +INSERT INTO test_enc (k) VALUES (3); + +SELECT * from test_enc; + +DROP TABLE test_enc; + +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/kmip_test.sql b/contrib/pg_tde/sql/kmip_test.sql new file mode 100644 index 00000000000..4dffe634ca4 --- /dev/null +++ b/contrib/pg_tde/sql/kmip_test.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/kmip_test.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/kmip_test_basic.sql b/contrib/pg_tde/sql/kmip_test_basic.sql new file mode 100644 index 00000000000..41b37db55e5 --- /dev/null +++ b/contrib/pg_tde/sql/kmip_test_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/kmip_test.inc diff --git a/contrib/pg_tde/sql/merge_join.inc b/contrib/pg_tde/sql/merge_join.inc new file mode 100644 index 00000000000..8fc4211b47d --- /dev/null +++ b/contrib/pg_tde/sql/merge_join.inc @@ -0,0 +1,66 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); +\getenv abs_srcdir PG_ABS_SRCDIR + +CREATE TABLE tenk1 ( + unique1 int4, + unique2 int4, + two int4, + four int4, + ten int4, + twenty int4, + hundred int4, + thousand int4, + twothousand int4, + fivethous int4, + tenthous int4, + odd int4, + even int4, + stringu1 name, + stringu2 name, + string4 name +) using :tde_am; + +\set filename :abs_srcdir '/data/tenk.data' +COPY tenk1 FROM :'filename'; +VACUUM ANALYZE tenk1; + +CREATE INDEX tenk1_unique1 ON tenk1 USING btree(unique1 int4_ops); + +CREATE INDEX tenk1_unique2 ON tenk1 USING btree(unique2 int4_ops); + +CREATE INDEX tenk1_hundred ON tenk1 USING btree(hundred int4_ops); + +CREATE INDEX tenk1_thous_tenthous ON tenk1 
(thousand, tenthous); + +-- +-- regression test: check a case where join_clause_is_movable_into() +-- used to give an imprecise result, causing an assertion failure +-- +SELECT count(*) +FROM + (SELECT t3.tenthous as x1, coalesce(t1.stringu1, t2.stringu1) as x2 + FROM tenk1 t1 + LEFT JOIN tenk1 t2 on t1.unique1 = t2.unique1 + JOIN tenk1 t3 on t1.unique2 = t3.unique2) ss, + tenk1 t4, + tenk1 t5 +WHERE t4.thousand = t5.unique1 and ss.x1 = t4.tenthous and ss.x2 = t5.stringu1; + +-- +-- check that we haven't screwed the data +-- +SELECT * +FROM + (SELECT t3.tenthous as x1, coalesce(t1.stringu1, t2.stringu1) as x2 + FROM tenk1 t1 + LEFT JOIN tenk1 t2 on t1.unique1 = t2.unique1 + JOIN tenk1 t3 on t1.unique2 = t3.unique2) ss, + tenk1 t4, + tenk1 t5 +WHERE t4.thousand = t5.unique1 and ss.x1 = t4.tenthous and ss.x2 = t5.stringu1 LIMIT 20 OFFSET 432; + +DROP TABLE tenk1; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/merge_join.sql b/contrib/pg_tde/sql/merge_join.sql new file mode 100644 index 00000000000..52206a70300 --- /dev/null +++ b/contrib/pg_tde/sql/merge_join.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/merge_join.inc diff --git a/contrib/pg_tde/sql/merge_join_basic.sql b/contrib/pg_tde/sql/merge_join_basic.sql new file mode 100644 index 00000000000..86a22fea2c2 --- /dev/null +++ b/contrib/pg_tde/sql/merge_join_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/merge_join.inc diff --git a/contrib/pg_tde/sql/move_large_tuples.inc b/contrib/pg_tde/sql/move_large_tuples.inc new file mode 100644 index 00000000000..36d090776c3 --- /dev/null +++ b/contrib/pg_tde/sql/move_large_tuples.inc @@ -0,0 +1,36 @@ +-- test pg_tde_move_encrypted_data() +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +CREATE TABLE sbtest2( + id SERIAL, + k TEXT STORAGE PLAIN, + PRIMARY KEY (id) + ) USING :tde_am; + +INSERT INTO sbtest2(k) VALUES(repeat('a', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('b', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('c', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('d', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('e', 2500)); + +DELETE FROM sbtest2 WHERE id IN (2,3,4); +VACUUM sbtest2; +SELECT * FROM sbtest2; + +INSERT INTO sbtest2(k) VALUES(repeat('b', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('c', 2500)); +INSERT INTO sbtest2(k) VALUES(repeat('d', 2500)); + +DELETE FROM sbtest2 WHERE id IN (7); +VACUUM sbtest2; + +SELECT * FROM sbtest2; + +VACUUM FULL sbtest2; +SELECT * FROM sbtest2; + +DROP TABLE sbtest2; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/move_large_tuples.sql b/contrib/pg_tde/sql/move_large_tuples.sql new file mode 100644 index 00000000000..1b6e7f8a5c0 --- /dev/null +++ b/contrib/pg_tde/sql/move_large_tuples.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/move_large_tuples.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/move_large_tuples_basic.sql b/contrib/pg_tde/sql/move_large_tuples_basic.sql new file mode 100644 index 00000000000..9e5df21d085 --- /dev/null +++ b/contrib/pg_tde/sql/move_large_tuples_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/move_large_tuples.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/multi_insert.inc b/contrib/pg_tde/sql/multi_insert.inc new file mode 100644 index 00000000000..88b92060700 --- /dev/null +++ b/contrib/pg_tde/sql/multi_insert.inc @@ -0,0 +1,1396 @@ +-- trigger multi_insert path +-- +CREATE EXTENSION pg_tde; 
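Editor's aside, illustrative only and not part of the patch: the multi_insert tests below rely on COPY ... FROM, which hands rows to the table access method in batches through its multi-insert path rather than one tuple per INSERT, so it exercises a different write path of the encrypted AM. A minimal sketch of the pattern, with a hypothetical table named demo:

-- Illustrative only; COPY batches rows through the AM's multi-insert path.
CREATE TABLE demo (id int, val text) USING tde_heap;
COPY demo FROM stdin CSV;
1,one
2,two
\.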
+ +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +CREATE TABLE albums ( + album_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, + artist_id INTEGER, + title TEXT NOT NULL, + released DATE NOT NULL +) USING :tde_am; + +COPY albums FROM stdin CSV HEADER; +album_id,artist_id,title,released +1,1,"Mirror",2009-06-24 +2,2,"Pretzel Logic",1974-02-20 +3,3,"Under Construction",2002-11-12 +4,4,"Return to Wherever",2019-07-11 +5,5,"The Nightfly",1982-10-01 +6,6,"It's Alive",2013-10-15 +7,7,"Pure Ella",1994-02-15 +\. + +SELECT * FROM albums; +SELECT * FROM albums where album_id > 5; +-- On replica: +-- SELECT * FROM albums; +-- album_id | artist_id | title | released +-- ----------+-----------+--------------------+------------ +-- 1 | 1 | Mirror | 2009-06-24 +-- 2 | 2 | Pretzel Logic | 1974-02-20 +-- 3 | 3 | Under Construction | 2002-11-12 +-- 4 | 4 | Return to Wherever | 2019-07-11 +-- 5 | 5 | The Nightfly | 1982-10-01 +-- 6 | 6 | It's Alive | 2013-10-15 +-- 7 | 7 | Pure Ella | 1994-02-15 +-- (7 rows) +-- +-- SELECT * FROM albums where album_id > 5; +-- album_id | artist_id | title | released +-- ----------+-----------+------------+------------ +-- 6 | 6 | It's Alive | 2013-10-15 +-- 7 | 7 | Pure Ella | 1994-02-15 +-- (2 rows) +-- +DROP TABLE albums; + +-- multi_insert2 +-- more data to take multiple pages +CREATE TABLE Towns ( + id SERIAL UNIQUE NOT NULL, + code VARCHAR(10) NOT NULL, + article TEXT, + name TEXT NOT NULL, + department VARCHAR(4) NOT NULL, + UNIQUE (code, department) +) USING :tde_am; + +COPY towns (id, code, article, name, department) FROM stdin; +1 001 some_text Abergement-Clémenciat 01 +2 002 some_text Abergement-de-Varey 01 +3 004 some_text Ambérieu-en-Bugey 01 +4 005 some_text Ambérieux-en-Dombes 01 +5 006 some_text Ambléon 01 +6 007 some_text Ambronay 01 +7 008 some_text Ambutrix 01 +8 009 some_text Andert-et-Condon 01 +9 010 some_text Anglefort 01 +10 011 some_text Apremont 01 +11 012 some_text Aranc 01 +12 013 some_text Arandas 01 +13 014 some_text Arbent 01 +14 015 some_text Arbignieu 01 +15 016 some_text Arbigny 01 +16 017 some_text Argis 01 +17 019 some_text Armix 01 +18 021 some_text Ars-sur-Formans 01 +19 022 some_text Artemare 01 +20 023 some_text Asnières-sur-Saône 01 +21 024 some_text Attignat 01 +22 025 some_text Bâgé-la-Ville 01 +23 026 some_text Bâgé-le-Châtel 01 +24 027 some_text Balan 01 +25 028 some_text Baneins 01 +26 029 some_text Beaupont 01 +27 030 some_text Beauregard 01 +28 031 some_text Bellignat 01 +29 032 some_text Béligneux 01 +30 033 some_text Bellegarde-sur-Valserine 01 +31 034 some_text Belley 01 +32 035 some_text Belleydoux 01 +33 036 some_text Belmont-Luthézieu 01 +34 037 some_text Bénonces 01 +35 038 some_text Bény 01 +36 039 some_text Béon 01 +37 040 some_text Béréziat 01 +38 041 some_text Bettant 01 +39 042 some_text Bey 01 +40 043 some_text Beynost 01 +41 044 some_text Billiat 01 +42 045 some_text Birieux 01 +43 046 some_text Biziat 01 +44 047 some_text Blyes 01 +45 049 some_text Boisse 01 +46 050 some_text Boissey 01 +47 051 some_text Bolozon 01 +48 052 some_text Bouligneux 01 +49 053 some_text Bourg-en-Bresse 01 +50 054 some_text Bourg-Saint-Christophe 01 +51 056 some_text Boyeux-Saint-Jérôme 01 +52 057 some_text Boz 01 +53 058 some_text Brégnier-Cordon 01 +54 059 some_text Brénaz 01 +55 060 some_text Brénod 01 +56 061 some_text Brens 01 +57 062 some_text Bressolles 01 +58 063 some_text Brion 01 +59 064 some_text Briord 01 +60 
065 some_text Buellas 01 +61 066 some_text Burbanche 01 +62 067 some_text Ceignes 01 +63 068 some_text Cerdon 01 +64 069 some_text Certines 01 +65 071 some_text Cessy 01 +66 072 some_text Ceyzériat 01 +67 073 some_text Ceyzérieu 01 +68 074 some_text Chalamont 01 +69 075 some_text Chaleins 01 +70 076 some_text Chaley 01 +71 077 some_text Challes 01 +72 078 some_text Challex 01 +73 079 some_text Champagne-en-Valromey 01 +74 080 some_text Champdor 01 +75 081 some_text Champfromier 01 +76 082 some_text Chanay 01 +77 083 some_text Chaneins 01 +78 084 some_text Chanoz-Châtenay 01 +79 085 some_text Chapelle-du-Châtelard 01 +80 087 some_text Charix 01 +81 088 some_text Charnoz-sur-Ain 01 +82 089 some_text Château-Gaillard 01 +83 090 some_text Châtenay 01 +84 091 some_text Châtillon-en-Michaille 01 +85 092 some_text Châtillon-la-Palud 01 +86 093 some_text Châtillon-sur-Chalaronne 01 +87 094 some_text Chavannes-sur-Reyssouze 01 +88 095 some_text Chavannes-sur-Suran 01 +89 096 some_text Chaveyriat 01 +90 097 some_text Chavornay 01 +91 098 some_text Chazey-Bons 01 +92 099 some_text Chazey-sur-Ain 01 +93 100 some_text Cheignieu-la-Balme 01 +94 101 some_text Chevillard 01 +95 102 some_text Chevroux 01 +96 103 some_text Chevry 01 +97 104 some_text Chézery-Forens 01 +98 105 some_text Civrieux 01 +99 106 some_text Cize 01 +100 107 some_text Cleyzieu 01 +101 108 some_text Coligny 01 +102 109 some_text Collonges 01 +103 110 some_text Colomieu 01 +104 111 some_text Conand 01 +105 112 some_text Condamine 01 +106 113 some_text Condeissiat 01 +107 114 some_text Confort 01 +108 115 some_text Confrançon 01 +109 116 some_text Contrevoz 01 +110 117 some_text Conzieu 01 +111 118 some_text Corbonod 01 +112 119 some_text Corcelles 01 +113 121 some_text Corlier 01 +114 122 some_text Cormaranche-en-Bugey 01 +115 123 some_text Cormoranche-sur-Saône 01 +116 124 some_text Cormoz 01 +117 125 some_text Corveissiat 01 +118 127 some_text Courmangoux 01 +119 128 some_text Courtes 01 +120 129 some_text Crans 01 +121 130 some_text Cras-sur-Reyssouze 01 +122 133 some_text Cressin-Rochefort 01 +123 134 some_text Crottet 01 +124 135 some_text Crozet 01 +125 136 some_text Cruzilles-lès-Mépillat 01 +126 138 some_text Culoz 01 +127 139 some_text Curciat-Dongalon 01 +128 140 some_text Curtafond 01 +129 141 some_text Cuzieu 01 +130 142 some_text Dagneux 01 +131 143 some_text Divonne-les-Bains 01 +132 144 some_text Dommartin 01 +133 145 some_text Dompierre-sur-Veyle 01 +134 146 some_text Dompierre-sur-Chalaronne 01 +135 147 some_text Domsure 01 +136 148 some_text Dortan 01 +137 149 some_text Douvres 01 +138 150 some_text Drom 01 +139 151 some_text Druillat 01 +140 152 some_text Échallon 01 +141 153 some_text Échenevex 01 +142 154 some_text Étrez 01 +143 155 some_text Évosges 01 +144 156 some_text Faramans 01 +145 157 some_text Fareins 01 +146 158 some_text Farges 01 +147 159 some_text Feillens 01 +148 160 some_text Ferney-Voltaire 01 +149 162 some_text Flaxieu 01 +150 163 some_text Foissiat 01 +151 165 some_text Francheleins 01 +152 166 some_text Frans 01 +153 167 some_text Garnerans 01 +154 169 some_text Genouilleux 01 +155 170 some_text Géovreissiat 01 +156 171 some_text Géovreisset 01 +157 172 some_text Germagnat 01 +158 173 some_text Gex 01 +159 174 some_text Giron 01 +160 175 some_text Gorrevod 01 +161 176 some_text Grand-Abergement 01 +162 177 some_text Grand-Corent 01 +163 179 some_text Grièges 01 +164 180 some_text Grilly 01 +165 181 some_text Groissiat 01 +166 182 some_text Groslée 01 +167 183 some_text Guéreins 01 +168 184 
some_text Hautecourt-Romanèche 01 +169 185 some_text Hauteville-Lompnes 01 +170 186 some_text Hostias 01 +171 187 some_text Hotonnes 01 +172 188 some_text Illiat 01 +173 189 some_text Injoux-Génissiat 01 +174 190 some_text Innimond 01 +175 191 some_text Izenave 01 +176 192 some_text Izernore 01 +177 193 some_text Izieu 01 +178 194 some_text Jassans-Riottier 01 +179 195 some_text Jasseron 01 +180 196 some_text Jayat 01 +181 197 some_text Journans 01 +182 198 some_text Joyeux 01 +183 199 some_text Jujurieux 01 +184 200 some_text Labalme 01 +185 202 some_text Lagnieu 01 +186 203 some_text Laiz 01 +187 204 some_text Lalleyriat 01 +188 205 some_text Lancrans 01 +189 206 some_text Lantenay 01 +190 207 some_text Lapeyrouse 01 +191 208 some_text Lavours 01 +192 209 some_text Léaz 01 +193 210 some_text Lélex 01 +194 211 some_text Lent 01 +195 212 some_text Lescheroux 01 +196 213 some_text Leyment 01 +197 214 some_text Leyssard 01 +198 215 some_text Lhôpital 01 +199 216 some_text Lhuis 01 +200 218 some_text Lochieu 01 +201 219 some_text Lompnas 01 +202 221 some_text Lompnieu 01 +203 224 some_text Loyettes 01 +204 225 some_text Lurcy 01 +205 227 some_text Magnieu 01 +206 228 some_text Maillat 01 +207 229 some_text Malafretaz 01 +208 230 some_text Mantenay-Montlin 01 +209 231 some_text Manziat 01 +210 232 some_text Marboz 01 +211 233 some_text Marchamp 01 +212 234 some_text Marignieu 01 +213 235 some_text Marlieux 01 +214 236 some_text Marsonnas 01 +215 237 some_text Martignat 01 +216 238 some_text Massieux 01 +217 239 some_text Massignieu-de-Rives 01 +218 240 some_text Matafelon-Granges 01 +219 241 some_text Meillonnas 01 +220 242 some_text Mérignat 01 +221 243 some_text Messimy-sur-Saône 01 +222 244 some_text Meximieux 01 +223 245 some_text Bohas-Meyriat-Rignat 01 +224 246 some_text Mézériat 01 +225 247 some_text Mijoux 01 +226 248 some_text Mionnay 01 +227 249 some_text Miribel 01 +228 250 some_text Misérieux 01 +229 252 some_text Mogneneins 01 +230 254 some_text Montagnat 01 +231 255 some_text Montagnieu 01 +232 257 some_text Montanges 01 +233 258 some_text Montceaux 01 +234 259 some_text Montcet 01 +235 260 some_text Montellier 01 +236 261 some_text Monthieux 01 +237 262 some_text Montluel 01 +238 263 some_text Montmerle-sur-Saône 01 +239 264 some_text Montracol 01 +240 265 some_text Montréal-la-Cluse 01 +241 266 some_text Montrevel-en-Bresse 01 +242 267 some_text Nurieux-Volognat 01 +243 268 some_text Murs-et-Gélignieux 01 +244 269 some_text Nantua 01 +245 271 some_text Nattages 01 +246 272 some_text Neuville-les-Dames 01 +247 273 some_text Neuville-sur-Ain 01 +248 274 some_text Neyrolles 01 +249 275 some_text Neyron 01 +250 276 some_text Niévroz 01 +251 277 some_text Nivollet-Montgriffon 01 +252 279 some_text Oncieu 01 +253 280 some_text Ordonnaz 01 +254 281 some_text Ornex 01 +255 282 some_text Outriaz 01 +256 283 some_text Oyonnax 01 +257 284 some_text Ozan 01 +258 285 some_text Parcieux 01 +259 286 some_text Parves 01 +260 288 some_text Péron 01 +261 289 some_text Péronnas 01 +262 290 some_text Pérouges 01 +263 291 some_text Perrex 01 +264 292 some_text Petit-Abergement 01 +265 293 some_text Peyriat 01 +266 294 some_text Peyrieu 01 +267 295 some_text Peyzieux-sur-Saône 01 +268 296 some_text Pirajoux 01 +269 297 some_text Pizay 01 +270 298 some_text Plagne 01 +271 299 some_text Plantay 01 +272 300 some_text Poizat 01 +273 301 some_text Polliat 01 +274 302 some_text Pollieu 01 +275 303 some_text Poncin 01 +276 304 some_text Pont-d'Ain 01 +277 305 some_text Pont-de-Vaux 01 +278 306 some_text 
Pont-de-Veyle 01 +279 307 some_text Port 01 +280 308 some_text Pougny 01 +281 309 some_text Pouillat 01 +282 310 some_text Prémeyzel 01 +283 311 some_text Prémillieu 01 +284 312 some_text Pressiat 01 +285 313 some_text Prévessin-Moëns 01 +286 314 some_text Priay 01 +287 316 some_text Pugieu 01 +288 317 some_text Ramasse 01 +289 318 some_text Rancé 01 +290 319 some_text Relevant 01 +291 320 some_text Replonges 01 +292 321 some_text Revonnas 01 +293 322 some_text Reyrieux 01 +294 323 some_text Reyssouze 01 +295 325 some_text Rignieux-le-Franc 01 +296 328 some_text Romans 01 +297 329 some_text Rossillon 01 +298 330 some_text Ruffieu 01 +299 331 some_text Saint-Alban 01 +300 332 some_text Saint-André-de-Bâgé 01 +301 333 some_text Saint-André-de-Corcy 01 +302 334 some_text Saint-André-d'Huiriat 01 +303 335 some_text Saint-André-le-Bouchoux 01 +304 336 some_text Saint-André-sur-Vieux-Jonc 01 +305 337 some_text Saint-Bénigne 01 +306 338 some_text Saint-Benoît 01 +307 339 some_text Saint-Bernard 01 +308 340 some_text Saint-Bois 01 +309 341 some_text Saint-Champ 01 +310 342 some_text Sainte-Croix 01 +311 343 some_text Saint-Cyr-sur-Menthon 01 +312 344 some_text Saint-Denis-lès-Bourg 01 +313 345 some_text Saint-Denis-en-Bugey 01 +314 346 some_text Saint-Didier-d'Aussiat 01 +315 347 some_text Saint-Didier-de-Formans 01 +316 348 some_text Saint-Didier-sur-Chalaronne 01 +317 349 some_text Saint-Éloi 01 +318 350 some_text Saint-Étienne-du-Bois 01 +319 351 some_text Saint-Étienne-sur-Chalaronne 01 +320 352 some_text Saint-Étienne-sur-Reyssouze 01 +321 353 some_text Sainte-Euphémie 01 +322 354 some_text Saint-Genis-Pouilly 01 +323 355 some_text Saint-Genis-sur-Menthon 01 +324 356 some_text Saint-Georges-sur-Renon 01 +325 357 some_text Saint-Germain-de-Joux 01 +326 358 some_text Saint-Germain-les-Paroisses 01 +327 359 some_text Saint-Germain-sur-Renon 01 +328 360 some_text Saint-Jean-de-Gonville 01 +329 361 some_text Saint-Jean-de-Niost 01 +330 362 some_text Saint-Jean-de-Thurigneux 01 +331 363 some_text Saint-Jean-le-Vieux 01 +332 364 some_text Saint-Jean-sur-Reyssouze 01 +333 365 some_text Saint-Jean-sur-Veyle 01 +334 366 some_text Sainte-Julie 01 +335 367 some_text Saint-Julien-sur-Reyssouze 01 +336 368 some_text Saint-Julien-sur-Veyle 01 +337 369 some_text Saint-Just 01 +338 370 some_text Saint-Laurent-sur-Saône 01 +339 371 some_text Saint-Marcel 01 +340 372 some_text Saint-Martin-de-Bavel 01 +341 373 some_text Saint-Martin-du-Frêne 01 +342 374 some_text Saint-Martin-du-Mont 01 +343 375 some_text Saint-Martin-le-Châtel 01 +344 376 some_text Saint-Maurice-de-Beynost 01 +345 378 some_text Saint-Maurice-de-Gourdans 01 +346 379 some_text Saint-Maurice-de-Rémens 01 +347 380 some_text Saint-Nizier-le-Bouchoux 01 +348 381 some_text Saint-Nizier-le-Désert 01 +349 382 some_text Sainte-Olive 01 +350 383 some_text Saint-Paul-de-Varax 01 +351 384 some_text Saint-Rambert-en-Bugey 01 +352 385 some_text Saint-Rémy 01 +353 386 some_text Saint-Sorlin-en-Bugey 01 +354 387 some_text Saint-Sulpice 01 +355 388 some_text Saint-Trivier-de-Courtes 01 +356 389 some_text Saint-Trivier-sur-Moignans 01 +357 390 some_text Saint-Vulbas 01 +358 391 some_text Salavre 01 +359 392 some_text Samognat 01 +360 393 some_text Sandrans 01 +361 396 some_text Sault-Brénaz 01 +362 397 some_text Sauverny 01 +363 398 some_text Savigneux 01 +364 399 some_text Ségny 01 +365 400 some_text Seillonnaz 01 +366 401 some_text Sergy 01 +367 402 some_text Sermoyer 01 +368 403 some_text Serrières-de-Briord 01 +369 404 some_text Serrières-sur-Ain 01 +370 405 
some_text Servas 01 +371 406 some_text Servignat 01 +372 407 some_text Seyssel 01 +373 408 some_text Simandre-sur-Suran 01 +374 409 some_text Songieu 01 +375 410 some_text Sonthonnax-la-Montagne 01 +376 411 some_text Souclin 01 +377 412 some_text Sulignat 01 +378 413 some_text Surjoux 01 +379 414 some_text Sutrieu 01 +380 415 some_text Talissieu 01 +381 416 some_text Tenay 01 +382 417 some_text Thézillieu 01 +383 418 some_text Thil 01 +384 419 some_text Thoiry 01 +385 420 some_text Thoissey 01 +386 421 some_text Torcieu 01 +387 422 some_text Tossiat 01 +388 423 some_text Toussieux 01 +389 424 some_text Tramoyes 01 +390 425 some_text Tranclière 01 +391 426 some_text Treffort-Cuisiat 01 +392 427 some_text Trévoux 01 +393 428 some_text Valeins 01 +394 429 some_text Vandeins 01 +395 430 some_text Varambon 01 +396 431 some_text Vaux-en-Bugey 01 +397 432 some_text Verjon 01 +398 433 some_text Vernoux 01 +399 434 some_text Versailleux 01 +400 435 some_text Versonnex 01 +401 436 some_text Vesancy 01 +402 437 some_text Vescours 01 +403 439 some_text Vésines 01 +404 441 some_text Vieu-d'Izenave 01 +405 442 some_text Vieu 01 +406 443 some_text Villars-les-Dombes 01 +407 444 some_text Villebois 01 +408 445 some_text Villemotier 01 +409 446 some_text Villeneuve 01 +410 447 some_text Villereversure 01 +411 448 some_text Villes 01 +412 449 some_text Villette-sur-Ain 01 +413 450 some_text Villieu-Loyes-Mollon 01 +414 451 some_text Viriat 01 +415 452 some_text Virieu-le-Grand 01 +416 453 some_text Virieu-le-Petit 01 +417 454 some_text Virignin 01 +418 456 some_text Vongnes 01 +419 457 some_text Vonnas 01 +420 001 some_text Abbécourt 02 +421 002 some_text Achery 02 +422 003 some_text Acy 02 +423 004 some_text Agnicourt-et-Séchelles 02 +424 005 some_text Aguilcourt 02 +425 006 some_text Aisonville-et-Bernoville 02 +426 007 some_text Aizelles 02 +427 008 some_text Aizy-Jouy 02 +428 009 some_text Alaincourt 02 +429 010 some_text Allemant 02 +430 011 some_text Ambleny 02 +431 012 some_text Ambrief 02 +432 013 some_text Amifontaine 02 +433 014 some_text Amigny-Rouy 02 +434 015 some_text Ancienville 02 +435 016 some_text Andelain 02 +436 017 some_text Anguilcourt-le-Sart 02 +437 018 some_text Anizy-le-Château 02 +438 019 some_text Annois 02 +439 020 some_text Any-Martin-Rieux 02 +440 021 some_text Archon 02 +441 022 some_text Arcy-Sainte-Restitue 02 +442 023 some_text Armentières-sur-Ourcq 02 +443 024 some_text Arrancy 02 +444 025 some_text Artemps 02 +445 026 some_text Artonges 02 +446 027 some_text Assis-sur-Serre 02 +447 028 some_text Athies-sous-Laon 02 +448 029 some_text Attilly 02 +449 030 some_text Aubencheul-aux-Bois 02 +450 031 some_text Aubenton 02 +451 032 some_text Aubigny-aux-Kaisnes 02 +452 033 some_text Aubigny-en-Laonnois 02 +453 034 some_text Audignicourt 02 +454 035 some_text Audigny 02 +455 036 some_text Augy 02 +456 037 some_text Aulnois-sous-Laon 02 +457 038 some_text Autels 02 +458 039 some_text Autremencourt 02 +459 040 some_text Autreppes 02 +460 041 some_text Autreville 02 +461 042 some_text Azy-sur-Marne 02 +462 043 some_text Bagneux 02 +463 044 some_text Bancigny 02 +464 046 some_text Barenton-Bugny 02 +465 047 some_text Barenton-Cel 02 +466 048 some_text Barenton-sur-Serre 02 +467 049 some_text Barisis 02 +468 050 some_text Barzy-en-Thiérache 02 +469 051 some_text Barzy-sur-Marne 02 +470 052 some_text Bassoles-Aulers 02 +471 053 some_text Baulne-en-Brie 02 +472 054 some_text Bazoches-sur-Vesles 02 +473 055 some_text Beaumé 02 +474 056 some_text Beaumont-en-Beine 02 +475 057 some_text 
Beaurevoir 02 +476 058 some_text Beaurieux 02 +477 059 some_text Beautor 02 +478 060 some_text Beauvois-en-Vermandois 02 +479 061 some_text Becquigny 02 +480 062 some_text Belleau 02 +481 063 some_text Bellenglise 02 +482 064 some_text Belleu 02 +483 065 some_text Bellicourt 02 +484 066 some_text Benay 02 +485 067 some_text Bergues-sur-Sambre 02 +486 068 some_text Berlancourt 02 +487 069 some_text Berlise 02 +488 070 some_text Bernot 02 +489 071 some_text Berny-Rivière 02 +490 072 some_text Berrieux 02 +491 073 some_text Berry-au-Bac 02 +492 074 some_text Bertaucourt-Epourdon 02 +493 075 some_text Berthenicourt 02 +494 076 some_text Bertricourt 02 +495 077 some_text Berzy-le-Sec 02 +496 078 some_text Besmé 02 +497 079 some_text Besmont 02 +498 080 some_text Besny-et-Loizy 02 +499 081 some_text Béthancourt-en-Vaux 02 +500 082 some_text Beugneux 02 +501 083 some_text Beuvardes 02 +502 084 some_text Bézu-le-Guéry 02 +503 085 some_text Bézu-Saint-Germain 02 +504 086 some_text Bichancourt 02 +505 087 some_text Bieuxy 02 +506 088 some_text Bièvres 02 +507 089 some_text Billy-sur-Aisne 02 +508 090 some_text Billy-sur-Ourcq 02 +509 091 some_text Blanzy-lès-Fismes 02 +510 093 some_text Blérancourt 02 +511 094 some_text Blesmes 02 +512 095 some_text Bohain-en-Vermandois 02 +513 096 some_text Bois-lès-Pargny 02 +514 097 some_text Boncourt 02 +515 098 some_text Bonneil 02 +516 099 some_text Bonnesvalyn 02 +517 100 some_text Bony 02 +518 101 some_text Bosmont-sur-Serre 02 +519 102 some_text Bouconville-Vauclair 02 +520 103 some_text Boué 02 +521 104 some_text Bouffignereux 02 +522 105 some_text Bouresches 02 +523 106 some_text Bourg-et-Comin 02 +524 107 some_text Bourguignon-sous-Coucy 02 +525 108 some_text Bourguignon-sous-Montbavin 02 +526 109 some_text Bouteille 02 +527 110 some_text Braine 02 +528 111 some_text Brancourt-en-Laonnois 02 +529 112 some_text Brancourt-le-Grand 02 +530 114 some_text Brasles 02 +531 115 some_text Braye-en-Laonnois 02 +532 116 some_text Braye-en-Thiérache 02 +533 117 some_text Bray-Saint-Christophe 02 +534 118 some_text Braye 02 +535 119 some_text Brécy 02 +536 120 some_text Brenelle 02 +537 121 some_text Breny 02 +538 122 some_text Brie 02 +539 123 some_text Brissay-Choigny 02 +540 124 some_text Brissy-Hamégicourt 02 +541 125 some_text Brumetz 02 +542 126 some_text Brunehamel 02 +543 127 some_text Bruyères-sur-Fère 02 +544 128 some_text Bruyères-et-Montbérault 02 +545 129 some_text Bruys 02 +546 130 some_text Bucilly 02 +547 131 some_text Bucy-le-Long 02 +548 132 some_text Bucy-lès-Cerny 02 +549 133 some_text Bucy-lès-Pierrepont 02 +550 134 some_text Buire 02 +551 135 some_text Buironfosse 02 +552 136 some_text Burelles 02 +553 137 some_text Bussiares 02 +554 138 some_text Buzancy 02 +555 139 some_text Caillouël-Crépigny 02 +556 140 some_text Camelin 02 +557 141 some_text Capelle 02 +558 142 some_text Castres 02 +559 143 some_text Catelet 02 +560 144 some_text Caulaincourt 02 +561 145 some_text Caumont 02 +562 146 some_text Celles-lès-Condé 02 +563 147 some_text Celle-sous-Montmirail 02 +564 148 some_text Celles-sur-Aisne 02 +565 149 some_text Cerizy 02 +566 150 some_text Cerny-en-Laonnois 02 +567 151 some_text Cerny-lès-Bucy 02 +568 152 some_text Cerseuil 02 +569 153 some_text Cessières 02 +570 154 some_text Chacrise 02 +571 155 some_text Chaillevois 02 +572 156 some_text Chalandry 02 +573 157 some_text Chambry 02 +574 158 some_text Chamouille 02 +575 159 some_text Champs 02 +576 160 some_text Chaourse 02 +577 161 some_text Chapelle-Monthodon 02 +578 162 some_text 
Chapelle-sur-Chézy 02 +579 163 some_text Charly 02 +580 164 some_text Charmel 02 +581 165 some_text Charmes 02 +582 166 some_text Chartèves 02 +583 167 some_text Chassemy 02 +584 168 some_text Château-Thierry 02 +585 169 some_text Châtillon-lès-Sons 02 +586 170 some_text Châtillon-sur-Oise 02 +587 171 some_text Chaudardes 02 +588 172 some_text Chaudun 02 +589 173 some_text Chauny 02 +590 174 some_text Chavignon 02 +591 175 some_text Chavigny 02 +592 176 some_text Chavonne 02 +593 177 some_text Chérêt 02 +594 178 some_text Chermizy-Ailles 02 +595 179 some_text Chéry-Chartreuve 02 +596 180 some_text Chéry-lès-Pouilly 02 +597 181 some_text Chéry-lès-Rozoy 02 +598 182 some_text Chevennes 02 +599 183 some_text Chevregny 02 +600 184 some_text Chevresis-Monceau 02 +601 185 some_text Chézy-en-Orxois 02 +602 186 some_text Chézy-sur-Marne 02 +603 187 some_text Chierry 02 +604 188 some_text Chigny 02 +605 189 some_text Chivres-en-Laonnois 02 +606 190 some_text Chivres-Val 02 +607 191 some_text Chivy-lès-Étouvelles 02 +608 192 some_text Chouy 02 +609 193 some_text Cierges 02 +610 194 some_text Cilly 02 +611 195 some_text Ciry-Salsogne 02 +612 196 some_text Clacy-et-Thierret 02 +613 197 some_text Clairfontaine 02 +614 198 some_text Clamecy 02 +615 199 some_text Clastres 02 +616 200 some_text Clermont-les-Fermes 02 +617 201 some_text Cœuvres-et-Valsery 02 +618 203 some_text Coincy 02 +619 204 some_text Coingt 02 +620 205 some_text Colligis-Crandelain 02 +621 206 some_text Colonfay 02 +622 207 some_text Commenchon 02 +623 208 some_text Concevreux 02 +624 209 some_text Condé-en-Brie 02 +625 210 some_text Condé-sur-Aisne 02 +626 211 some_text Condé-sur-Suippe 02 +627 212 some_text Condren 02 +628 213 some_text Connigis 02 +629 214 some_text Contescourt 02 +630 215 some_text Corbeny 02 +631 216 some_text Corcy 02 +632 217 some_text Coucy-le-Château-Auffrique 02 +633 218 some_text Coucy-lès-Eppes 02 +634 219 some_text Coucy-la-Ville 02 +635 220 some_text Coulonges-Cohan 02 +636 221 some_text Coupru 02 +637 222 some_text Courbes 02 +638 223 some_text Courboin 02 +639 224 some_text Courcelles-sur-Vesles 02 +640 225 some_text Courchamps 02 +641 226 some_text Courmelles 02 +642 227 some_text Courmont 02 +643 228 some_text Courtemont-Varennes 02 +644 229 some_text Courtrizy-et-Fussigny 02 +645 230 some_text Couvrelles 02 +646 231 some_text Couvron-et-Aumencourt 02 +647 232 some_text Coyolles 02 +648 233 some_text Cramaille 02 +649 234 some_text Craonne 02 +650 235 some_text Craonnelle 02 +651 236 some_text Crécy-au-Mont 02 +652 237 some_text Crécy-sur-Serre 02 +653 238 some_text Crépy 02 +654 239 some_text Crézancy 02 +655 240 some_text Croix-Fonsommes 02 +656 241 some_text Croix-sur-Ourcq 02 +657 242 some_text Crouttes-sur-Marne 02 +658 243 some_text Crouy 02 +659 244 some_text Crupilly 02 +660 245 some_text Cuffies 02 +661 246 some_text Cugny 02 +662 248 some_text Cuirieux 02 +663 249 some_text Cuiry-Housse 02 +664 250 some_text Cuiry-lès-Chaudardes 02 +665 251 some_text Cuiry-lès-Iviers 02 +666 252 some_text Cuissy-et-Geny 02 +667 253 some_text Cuisy-en-Almont 02 +668 254 some_text Cutry 02 +669 255 some_text Cys-la-Commune 02 +670 256 some_text Dagny-Lambercy 02 +671 257 some_text Dallon 02 +672 258 some_text Dammard 02 +673 259 some_text Dampleux 02 +674 260 some_text Danizy 02 +675 261 some_text Dercy 02 +676 262 some_text Deuillet 02 +677 263 some_text Dhuizel 02 +678 264 some_text Dizy-le-Gros 02 +679 265 some_text Dohis 02 +680 266 some_text Dolignon 02 +681 267 some_text Dommiers 02 +682 268 some_text 
Domptin 02 +683 269 some_text Dorengt 02 +684 270 some_text Douchy 02 +685 271 some_text Dravegny 02 +686 272 some_text Droizy 02 +687 273 some_text Dury 02 +688 274 some_text Ébouleau 02 +689 275 some_text Effry 02 +690 276 some_text Englancourt 02 +691 277 some_text Épagny 02 +692 278 some_text Éparcy 02 +693 279 some_text Épaux-Bézu 02 +694 280 some_text Épieds 02 +695 281 some_text Épine-aux-Bois 02 +696 282 some_text Eppes 02 +697 283 some_text Erlon 02 +698 284 some_text Erloy 02 +699 286 some_text Esquéhéries 02 +700 287 some_text Essigny-le-Grand 02 +701 288 some_text Essigny-le-Petit 02 +702 289 some_text Essises 02 +703 290 some_text Essômes-sur-Marne 02 +704 291 some_text Estrées 02 +705 292 some_text Étampes-sur-Marne 02 +706 293 some_text Étaves-et-Bocquiaux 02 +707 294 some_text Étouvelles 02 +708 295 some_text Étréaupont 02 +709 296 some_text Étreillers 02 +710 297 some_text Étrépilly 02 +711 298 some_text Étreux 02 +712 299 some_text Évergnicourt 02 +713 301 some_text Faucoucourt 02 +714 302 some_text Faverolles 02 +715 303 some_text Fayet 02 +716 304 some_text Fère 02 +717 305 some_text Fère-en-Tardenois 02 +718 306 some_text Ferté-Chevresis 02 +719 307 some_text Ferté-Milon 02 +720 308 some_text Fesmy-le-Sart 02 +721 309 some_text Festieux 02 +722 310 some_text Fieulaine 02 +723 311 some_text Filain 02 +724 312 some_text Flamengrie 02 +725 313 some_text Flavigny-le-Grand-et-Beaurain 02 +726 315 some_text Flavy-le-Martel 02 +727 316 some_text Fleury 02 +728 317 some_text Fluquières 02 +729 318 some_text Folembray 02 +730 319 some_text Fonsommes 02 +731 320 some_text Fontaine-lès-Clercs 02 +732 321 some_text Fontaine-lès-Vervins 02 +733 322 some_text Fontaine-Notre-Dame 02 +734 323 some_text Fontaine-Uterte 02 +735 324 some_text Fontenelle 02 +736 325 some_text Fontenelle-en-Brie 02 +737 326 some_text Fontenoy 02 +738 327 some_text Foreste 02 +739 328 some_text Fossoy 02 +740 329 some_text Fourdrain 02 +741 330 some_text Francilly-Selency 02 +742 331 some_text Franqueville 02 +743 332 some_text Fresnes-en-Tardenois 02 +744 333 some_text Fresnes 02 +745 334 some_text Fresnoy-le-Grand 02 +746 335 some_text Fressancourt 02 +747 336 some_text Frières-Faillouël 02 +748 337 some_text Froidestrées 02 +749 338 some_text Froidmont-Cohartille 02 +750 339 some_text Gandelu 02 +751 340 some_text Gauchy 02 +752 341 some_text Gercy 02 +753 342 some_text Gergny 02 +754 343 some_text Germaine 02 +755 344 some_text Gernicourt 02 +756 345 some_text Gibercourt 02 +757 346 some_text Gizy 02 +758 347 some_text Gland 02 +759 348 some_text Glennes 02 +760 349 some_text Goudelancourt-lès-Berrieux 02 +761 350 some_text Goudelancourt-lès-Pierrepont 02 +762 351 some_text Goussancourt 02 +763 352 some_text Gouy 02 +764 353 some_text Grandlup-et-Fay 02 +765 354 some_text Grandrieux 02 +766 355 some_text Gricourt 02 +767 356 some_text Grisolles 02 +768 357 some_text Gronard 02 +769 358 some_text Grougis 02 +770 359 some_text Grugies 02 +771 360 some_text Guignicourt 02 +772 361 some_text Guise 02 +773 362 some_text Guivry 02 +774 363 some_text Guny 02 +775 364 some_text Guyencourt 02 +776 366 some_text Hannapes 02 +777 367 some_text Happencourt 02 +778 368 some_text Haramont 02 +779 369 some_text Harcigny 02 +780 370 some_text Hargicourt 02 +781 371 some_text Harly 02 +782 372 some_text Hartennes-et-Taux 02 +783 373 some_text Hary 02 +784 374 some_text Lehaucourt 02 +785 375 some_text Hautevesnes 02 +786 376 some_text Hauteville 02 +787 377 some_text Haution 02 +788 378 some_text Hérie 02 +789 379 
some_text Hérie-la-Viéville 02 +790 380 some_text Hinacourt 02 +791 381 some_text Hirson 02 +792 382 some_text Holnon 02 +793 383 some_text Homblières 02 +794 384 some_text Houry 02 +795 385 some_text Housset 02 +796 386 some_text Iron 02 +797 387 some_text Itancourt 02 +798 388 some_text Iviers 02 +799 389 some_text Jaulgonne 02 +800 390 some_text Jeancourt 02 +801 391 some_text Jeantes 02 +802 392 some_text Joncourt 02 +803 393 some_text Jouaignes 02 +804 395 some_text Jumencourt 02 +805 396 some_text Jumigny 02 +806 397 some_text Jussy 02 +807 398 some_text Juvigny 02 +808 399 some_text Juvincourt-et-Damary 02 +809 400 some_text Laffaux 02 +810 401 some_text Laigny 02 +811 402 some_text Lanchy 02 +812 403 some_text Landifay-et-Bertaignemont 02 +813 404 some_text Landouzy-la-Cour 02 +814 405 some_text Landouzy-la-Ville 02 +815 406 some_text Landricourt 02 +816 407 some_text Laniscourt 02 +817 408 some_text Laon 02 +818 409 some_text Lappion 02 +819 410 some_text Largny-sur-Automne 02 +820 411 some_text Latilly 02 +821 412 some_text Launoy 02 +822 413 some_text Laval-en-Laonnois 02 +823 414 some_text Lavaqueresse 02 +824 415 some_text Laversine 02 +825 416 some_text Lemé 02 +826 417 some_text Lempire 02 +827 418 some_text Lerzy 02 +828 419 some_text Leschelles 02 +829 420 some_text Lesdins 02 +830 421 some_text Lesges 02 +831 422 some_text Lesquielles-Saint-Germain 02 +832 423 some_text Leuilly-sous-Coucy 02 +833 424 some_text Leury 02 +834 425 some_text Leuze 02 +835 426 some_text Levergies 02 +836 427 some_text Lhuys 02 +837 428 some_text Licy-Clignon 02 +838 429 some_text Lierval 02 +839 430 some_text Liesse-Notre-Dame 02 +840 431 some_text Liez 02 +841 432 some_text Limé 02 +842 433 some_text Lislet 02 +843 434 some_text Lizy 02 +844 435 some_text Logny-lès-Aubenton 02 +845 438 some_text Longpont 02 +846 439 some_text Longueval-Barbonval 02 +847 440 some_text Lor 02 +848 441 some_text Louâtre 02 +849 442 some_text Loupeigne 02 +850 443 some_text Lucy-le-Bocage 02 +851 444 some_text Lugny 02 +852 445 some_text Luzoir 02 +853 446 some_text Ly-Fontaine 02 +854 447 some_text Maast-et-Violaine 02 +855 448 some_text Mâchecourt 02 +856 449 some_text Macogny 02 +857 450 some_text Macquigny 02 +858 451 some_text Magny-la-Fosse 02 +859 452 some_text Maissemy 02 +860 453 some_text Maizy 02 +861 454 some_text Malmaison 02 +862 455 some_text Malzy 02 +863 456 some_text Manicamp 02 +864 457 some_text Marchais 02 +865 458 some_text Marchais-en-Brie 02 +866 459 some_text Marcy 02 +867 460 some_text Marcy-sous-Marle 02 +868 461 some_text Marest-Dampcourt 02 +869 462 some_text Mareuil-en-Dôle 02 +870 463 some_text Marfontaine 02 +871 464 some_text Margival 02 +872 465 some_text Marigny-en-Orxois 02 +873 466 some_text Marizy-Sainte-Geneviève 02 +874 467 some_text Marizy-Saint-Mard 02 +875 468 some_text Marle 02 +876 469 some_text Marly-Gomont 02 +877 470 some_text Martigny 02 +878 471 some_text Martigny-Courpierre 02 +879 472 some_text Mauregny-en-Haye 02 +880 473 some_text Mayot 02 +881 474 some_text Mennessis 02 +882 475 some_text Menneville 02 +883 476 some_text Mennevret 02 +884 477 some_text Mercin-et-Vaux 02 +885 478 some_text Merlieux-et-Fouquerolles 02 +886 479 some_text Merval 02 +887 480 some_text Mesbrecourt-Richecourt 02 +888 481 some_text Mesnil-Saint-Laurent 02 +889 482 some_text Meurival 02 +890 483 some_text Mézières-sur-Oise 02 +891 484 some_text Mézy-Moulins 02 +892 485 some_text Missy-aux-Bois 02 +893 486 some_text Missy-lès-Pierrepont 02 +894 487 some_text Missy-sur-Aisne 02 +895 488 
some_text Molain 02 +896 489 some_text Molinchart 02 +897 490 some_text Monampteuil 02 +898 491 some_text Monceau-le-Neuf-et-Faucouzy 02 +899 492 some_text Monceau-lès-Leups 02 +900 493 some_text Monceau-le-Waast 02 +901 494 some_text Monceau-sur-Oise 02 +902 495 some_text Mondrepuis 02 +903 496 some_text Monnes 02 +904 497 some_text Mons-en-Laonnois 02 +905 498 some_text Montaigu 02 +906 499 some_text Montbavin 02 +907 500 some_text Montbrehain 02 +908 501 some_text Montchâlons 02 +909 502 some_text Montcornet 02 +910 503 some_text Mont-d'Origny 02 +911 504 some_text Montescourt-Lizerolles 02 +912 505 some_text Montfaucon 02 +913 506 some_text Montgobert 02 +914 507 some_text Montgru-Saint-Hilaire 02 +915 508 some_text Monthenault 02 +916 509 some_text Monthiers 02 +917 510 some_text Monthurel 02 +918 511 some_text Montigny-en-Arrouaise 02 +919 512 some_text Montigny-l'Allier 02 +920 513 some_text Montigny-le-Franc 02 +921 514 some_text Montigny-Lengrain 02 +922 515 some_text Montigny-lès-Condé 02 +923 516 some_text Montigny-sous-Marle 02 +924 517 some_text Montigny-sur-Crécy 02 +925 518 some_text Montlevon 02 +926 519 some_text Montloué 02 +927 520 some_text Mont-Notre-Dame 02 +928 521 some_text Montreuil-aux-Lions 02 +929 522 some_text Mont-Saint-Jean 02 +930 523 some_text Mont-Saint-Martin 02 +931 524 some_text Mont-Saint-Père 02 +932 525 some_text Morcourt 02 +933 526 some_text Morgny-en-Thiérache 02 +934 527 some_text Morsain 02 +935 528 some_text Mortefontaine 02 +936 529 some_text Mortiers 02 +937 530 some_text Moulins 02 +938 531 some_text Moussy-Verneuil 02 +939 532 some_text Moÿ-de-l'Aisne 02 +940 533 some_text Muret-et-Crouttes 02 +941 534 some_text Muscourt 02 +942 535 some_text Nampcelles-la-Cour 02 +943 536 some_text Nampteuil-sous-Muret 02 +944 537 some_text Nanteuil-la-Fosse 02 +945 538 some_text Nanteuil-Notre-Dame 02 +946 539 some_text Nauroy 02 +947 540 some_text Nesles-la-Montagne 02 +948 541 some_text Neufchâtel-sur-Aisne 02 +949 542 some_text Neuflieux 02 +950 543 some_text Neuilly-Saint-Front 02 +951 544 some_text Neuve-Maison 02 +952 545 some_text Neuville-Bosmont 02 +953 546 some_text Neuville-en-Beine 02 +954 547 some_text Neuville-Housset 02 +955 548 some_text Neuville-lès-Dorengt 02 +956 549 some_text Neuville-Saint-Amand 02 +957 550 some_text Neuville-sur-Ailette 02 +958 551 some_text Neuville-sur-Margival 02 +959 552 some_text Neuvillette 02 +960 553 some_text Nizy-le-Comte 02 +961 554 some_text Nogentel 02 +962 555 some_text Nogent-l'Artaud 02 +963 556 some_text Noircourt 02 +964 557 some_text Noroy-sur-Ourcq 02 +965 558 some_text Nouvion-en-Thiérache 02 +966 559 some_text Nouvion-et-Catillon 02 +967 560 some_text Nouvion-le-Comte 02 +968 561 some_text Nouvion-le-Vineux 02 +969 562 some_text Nouvron-Vingré 02 +970 563 some_text Noyales 02 +971 564 some_text Noyant-et-Aconin 02 +972 565 some_text Œuilly 02 +973 566 some_text Ognes 02 +974 567 some_text Ohis 02 +975 568 some_text Oigny-en-Valois 02 +976 569 some_text Oisy 02 +977 570 some_text Ollezy 02 +978 571 some_text Omissy 02 +979 572 some_text Orainville 02 +980 573 some_text Orgeval 02 +981 574 some_text Origny-en-Thiérache 02 +982 575 some_text Origny-Sainte-Benoite 02 +983 576 some_text Osly-Courtil 02 +984 577 some_text Ostel 02 +985 578 some_text Oulches-la-Vallée-Foulon 02 +986 579 some_text Oulchy-la-Ville 02 +987 580 some_text Oulchy-le-Château 02 +988 581 some_text Paars 02 +989 582 some_text Paissy 02 +990 583 some_text Pancy-Courtecon 02 +991 584 some_text Papleux 02 +992 585 some_text 
Parcy-et-Tigny 02 +993 586 some_text Parfondeval 02 +994 587 some_text Parfondru 02 +995 588 some_text Pargnan 02 +996 589 some_text Pargny-Filain 02 +997 590 some_text Pargny-la-Dhuys 02 +998 591 some_text Pargny-les-Bois 02 +999 592 some_text Parpeville 02 +1000 593 some_text Pasly 02 +1001 594 some_text Passy-en-Valois 02 +1002 595 some_text Passy-sur-Marne 02 +1003 596 some_text Pavant 02 +1004 597 some_text Perles 02 +1005 598 some_text Pernant 02 +1006 599 some_text Pierremande 02 +1007 600 some_text Pierrepont 02 +1008 601 some_text Pignicourt 02 +1009 602 some_text Pinon 02 +1010 604 some_text Pithon 02 +1011 605 some_text Pleine-Selve 02 +1012 606 some_text Plessier-Huleu 02 +1013 607 some_text Ploisy 02 +1014 608 some_text Plomion 02 +1015 609 some_text Ployart-et-Vaurseine 02 +1016 610 some_text Pommiers 02 +1017 612 some_text Pont-Arcy 02 +1018 613 some_text Pontavert 02 +1019 614 some_text Pontru 02 +1020 615 some_text Pontruet 02 +1021 616 some_text Pont-Saint-Mard 02 +1022 617 some_text Pouilly-sur-Serre 02 +1023 618 some_text Prémont 02 +1024 619 some_text Prémontré 02 +1025 620 some_text Presles-et-Boves 02 +1026 621 some_text Presles-et-Thierny 02 +1027 622 some_text Priez 02 +1028 623 some_text Prisces 02 +1029 624 some_text Proisy 02 +1030 625 some_text Proix 02 +1031 626 some_text Prouvais 02 +1032 627 some_text Proviseux-et-Plesnoy 02 +1033 628 some_text Puiseux-en-Retz 02 +1034 629 some_text Puisieux-et-Clanlieu 02 +1035 631 some_text Quierzy 02 +1036 632 some_text Quincy-Basse 02 +1037 633 some_text Quincy-sous-le-Mont 02 +1038 634 some_text Raillimont 02 +1039 635 some_text Ramicourt 02 +1040 636 some_text Regny 02 +1041 637 some_text Remaucourt 02 +1042 638 some_text Remies 02 +1043 639 some_text Remigny 02 +1044 640 some_text Renansart 02 +1045 641 some_text Renneval 02 +1046 642 some_text Résigny 02 +1047 643 some_text Ressons-le-Long 02 +1048 644 some_text Retheuil 02 +1049 645 some_text Reuilly-Sauvigny 02 +1050 646 some_text Révillon 02 +1051 647 some_text Ribeauville 02 +1052 648 some_text Ribemont 02 +1053 649 some_text Rocourt-Saint-Martin 02 +1054 650 some_text Rocquigny 02 +1055 651 some_text Rogécourt 02 +1056 652 some_text Rogny 02 +1057 653 some_text Romeny-sur-Marne 02 +1058 654 some_text Romery 02 +1059 655 some_text Ronchères 02 +1060 656 some_text Roucy 02 +1061 657 some_text Rougeries 02 +1062 658 some_text Roupy 02 +1063 659 some_text Rouvroy 02 +1064 660 some_text Rouvroy-sur-Serre 02 +1065 661 some_text Royaucourt-et-Chailvet 02 +1066 662 some_text Rozet-Saint-Albin 02 +1067 663 some_text Rozières-sur-Crise 02 +1068 664 some_text Rozoy-Bellevalle 02 +1069 665 some_text Grand-Rozoy 02 +1070 666 some_text Rozoy-sur-Serre 02 +1071 667 some_text Saconin-et-Breuil 02 +1072 668 some_text Sains-Richaumont 02 +1073 669 some_text Saint-Agnan 02 +1074 670 some_text Saint-Algis 02 +1075 671 some_text Saint-Aubin 02 +1076 672 some_text Saint-Bandry 02 +1077 673 some_text Saint-Christophe-à-Berry 02 +1078 674 some_text Saint-Clément 02 +1079 675 some_text Sainte-Croix 02 +1080 676 some_text Saint-Erme-Outre-et-Ramecourt 02 +1081 677 some_text Saint-Eugène 02 +1082 678 some_text Sainte-Geneviève 02 +1083 679 some_text Saint-Gengoulph 02 +1084 680 some_text Saint-Gobain 02 +1085 681 some_text Saint-Gobert 02 +1086 682 some_text Saint-Mard 02 +1087 683 some_text Saint-Martin-Rivière 02 +1088 684 some_text Saint-Michel 02 +1089 685 some_text Saint-Nicolas-aux-Bois 02 +1090 686 some_text Saint-Paul-aux-Bois 02 +1091 687 some_text Saint-Pierre-Aigle 02 +1092 688 
some_text Saint-Pierre-lès-Franqueville 02 +1093 689 some_text Saint-Pierremont 02 +1094 690 some_text Sainte-Preuve 02 +1095 691 some_text Saint-Quentin 02 +1096 693 some_text Saint-Rémy-Blanzy 02 +1097 694 some_text Saint-Simon 02 +1098 695 some_text Saint-Thibaut 02 +1099 696 some_text Saint-Thomas 02 +1100 697 some_text Samoussy 02 +1101 698 some_text Sancy-les-Cheminots 02 +1102 699 some_text Saponay 02 +1103 701 some_text Saulchery 02 +1104 702 some_text Savy 02 +1105 703 some_text Seboncourt 02 +1106 704 some_text Selens 02 +1107 705 some_text Selve 02 +1108 706 some_text Septmonts 02 +1109 707 some_text Septvaux 02 +1110 708 some_text Sequehart 02 +1111 709 some_text Serain 02 +1112 710 some_text Seraucourt-le-Grand 02 +1113 711 some_text Serches 02 +1114 712 some_text Sergy 02 +1115 713 some_text Seringes-et-Nesles 02 +1116 714 some_text Sermoise 02 +1117 715 some_text Serval 02 +1118 716 some_text Servais 02 +1119 717 some_text Séry-lès-Mézières 02 +1120 718 some_text Silly-la-Poterie 02 +1121 719 some_text Sinceny 02 +1122 720 some_text Sissonne 02 +1123 721 some_text Sissy 02 +1124 722 some_text Soissons 02 +1125 723 some_text Soize 02 +1126 724 some_text Sommelans 02 +1127 725 some_text Sommeron 02 +1128 726 some_text Sommette-Eaucourt 02 +1129 727 some_text Sons-et-Ronchères 02 +1130 728 some_text Sorbais 02 +1131 729 some_text Soucy 02 +1132 730 some_text Soupir 02 +1133 731 some_text Sourd 02 +1134 732 some_text Surfontaine 02 +1135 733 some_text Suzy 02 +1136 734 some_text Taillefontaine 02 +1137 735 some_text Tannières 02 +1138 736 some_text Tartiers 02 +1139 737 some_text Tavaux-et-Pontséricourt 02 +1140 738 some_text Tergnier 02 +1141 739 some_text Terny-Sorny 02 +1142 740 some_text Thenailles 02 +1143 741 some_text Thenelles 02 +1144 742 some_text Thiernu 02 +1145 743 some_text Thuel 02 +1146 744 some_text Torcy-en-Valois 02 +1147 745 some_text Toulis-et-Attencourt 02 +1148 746 some_text Travecy 02 +1149 747 some_text Trefcon 02 +1150 748 some_text Trélou-sur-Marne 02 +1151 749 some_text Troësnes 02 +1152 750 some_text Trosly-Loire 02 +1153 751 some_text Trucy 02 +1154 752 some_text Tugny-et-Pont 02 +1155 753 some_text Tupigny 02 +1156 754 some_text Ugny-le-Gay 02 +1157 755 some_text Urcel 02 +1158 756 some_text Urvillers 02 +1159 757 some_text Vadencourt 02 +1160 758 some_text Vailly-sur-Aisne 02 +1161 759 some_text Vallée-au-Blé 02 +1162 760 some_text Vallée-Mulâtre 02 +1163 761 some_text Variscourt 02 +1164 762 some_text Vassens 02 +1165 763 some_text Vasseny 02 +1166 764 some_text Vassogne 02 +1167 765 some_text Vaucelles-et-Beffecourt 02 +1168 766 some_text Vaudesson 02 +1169 767 some_text Vauxrezis 02 +1170 768 some_text Vauxaillon 02 +1171 769 some_text Vaux-Andigny 02 +1172 770 some_text Vauxbuin 02 +1173 771 some_text Vauxcéré 02 +1174 772 some_text Vaux-en-Vermandois 02 +1175 773 some_text Vauxtin 02 +1176 774 some_text Vendelles 02 +1177 775 some_text Vendeuil 02 +1178 776 some_text Vendhuile 02 +1179 777 some_text Vendières 02 +1180 778 some_text Vendresse-Beaulne 02 +1181 779 some_text Vénérolles 02 +1182 780 some_text Venizel 02 +1183 781 some_text Verdilly 02 +1184 782 some_text Verguier 02 +1185 783 some_text Grand-Verly 02 +1186 784 some_text Petit-Verly 02 +1187 785 some_text Vermand 02 +1188 786 some_text Verneuil-sous-Coucy 02 +1189 787 some_text Verneuil-sur-Serre 02 +1190 788 some_text Versigny 02 +1191 789 some_text Vervins 02 +1192 790 some_text Vesles-et-Caumont 02 +1193 791 some_text Veslud 02 +1194 792 some_text Veuilly-la-Poterie 02 +1195 
793 some_text Vézaponin 02 +1196 794 some_text Vézilly 02 +1197 795 some_text Vic-sur-Aisne 02 +1198 796 some_text Vichel-Nanteuil 02 +1199 797 some_text Viel-Arcy 02 +1200 798 some_text Viels-Maisons 02 +1201 799 some_text Vierzy 02 +1202 800 some_text Viffort 02 +1203 801 some_text Vigneux-Hocquet 02 +1204 802 some_text Ville-aux-Bois-lès-Dizy 02 +1205 803 some_text Ville-aux-Bois-lès-Pontavert 02 +1206 804 some_text Villemontoire 02 +1207 805 some_text Villeneuve-Saint-Germain 02 +1208 806 some_text Villeneuve-sur-Fère 02 +1209 807 some_text Villequier-Aumont 02 +1210 808 some_text Villeret 02 +1211 809 some_text Villers-Agron-Aiguizy 02 +1212 810 some_text Villers-Cotterêts 02 +1213 811 some_text Villers-en-Prayères 02 +1214 812 some_text Villers-Hélon 02 +1215 813 some_text Villers-le-Sec 02 +1216 814 some_text Villers-lès-Guise 02 +1217 815 some_text Villers-Saint-Christophe 02 +1218 816 some_text Villers-sur-Fère 02 +1219 817 some_text Ville-Savoye 02 +1220 818 some_text Villiers-Saint-Denis 02 +1221 819 some_text Vincy-Reuil-et-Magny 02 +1222 820 some_text Viry-Noureuil 02 +1223 821 some_text Vivaise 02 +1224 822 some_text Vivières 02 +1225 823 some_text Voharies 02 +1226 824 some_text Vorges 02 +1227 826 some_text Voulpaix 02 +1228 827 some_text Voyenne 02 +1229 828 some_text Vregny 02 +1230 829 some_text Vuillery 02 +1231 830 some_text Wassigny 02 +1232 831 some_text Watigny 02 +1233 832 some_text Wiège-Faty 02 +1234 833 some_text Wimy 02 +1235 834 some_text Wissignicourt 02 +1236 001 some_text Abrest 03 +1237 002 some_text Agonges 03 +1238 003 some_text Ainay-le-Château 03 +1239 004 some_text Andelaroche 03 +1240 005 some_text Archignat 03 +1241 006 some_text Arfeuilles 03 +1242 007 some_text Arpheuilles-Saint-Priest 03 +1243 008 some_text Arronnes 03 +1244 009 some_text Aubigny 03 +1245 010 some_text Audes 03 +1246 011 some_text Aurouër 03 +1247 012 some_text Autry-Issards 03 +1248 013 some_text Avermes 03 +1249 014 some_text Avrilly 03 +1250 015 some_text Bagneux 03 +1251 016 some_text Barberier 03 +1252 017 some_text Barrais-Bussolles 03 +1253 018 some_text Bayet 03 +1254 019 some_text Beaulon 03 +1255 020 some_text Beaune-d'Allier 03 +1256 021 some_text Bègues 03 +1257 022 some_text Bellenaves 03 +1258 023 some_text Bellerive-sur-Allier 03 +1259 024 some_text Bert 03 +1260 025 some_text Bessay-sur-Allier 03 +1261 026 some_text Besson 03 +1262 027 some_text Bézenet 03 +1263 028 some_text Billezois 03 +1264 029 some_text Billy 03 +1265 030 some_text Biozat 03 +1266 031 some_text Bizeneuille 03 +1267 032 some_text Blomard 03 +1268 033 some_text Bost 03 +1269 034 some_text Boucé 03 +1270 035 some_text Bouchaud 03 +1271 036 some_text Bourbon-l'Archambault 03 +1272 037 some_text Braize 03 +1273 038 some_text Bransat 03 +1274 039 some_text Bresnay 03 +1275 040 some_text Bressolles 03 +1276 041 some_text Brethon 03 +1277 042 some_text Breuil 03 +1278 043 some_text Broût-Vernet 03 +1279 044 some_text Brugheas 03 +1280 045 some_text Busset 03 +1281 046 some_text Buxières-les-Mines 03 +1282 047 some_text Celle 03 +1283 048 some_text Cérilly 03 +1284 049 some_text Cesset 03 +1285 050 some_text Chabanne 03 +1286 051 some_text Chambérat 03 +1287 052 some_text Chamblet 03 +1288 053 some_text Chantelle 03 +1289 054 some_text Chapeau 03 +1290 055 some_text Chapelaude 03 +1291 056 some_text Chapelle 03 +1292 057 some_text Chapelle-aux-Chasses 03 +1293 058 some_text Chappes 03 +1294 059 some_text Chareil-Cintrat 03 +1295 060 some_text Charmeil 03 +1296 061 some_text Charmes 03 +1297 062 
some_text Charroux 03 +1298 063 some_text Chassenard 03 +1299 064 some_text Château-sur-Allier 03 +1300 065 some_text Châtel-de-Neuvre 03 +1301 066 some_text Châtel-Montagne 03 +1302 067 some_text Châtelperron 03 +1303 068 some_text Châtelus 03 +1304 069 some_text Châtillon 03 +1305 070 some_text Chavenon 03 +1306 071 some_text Chavroches 03 +1307 072 some_text Chazemais 03 +1308 073 some_text Chemilly 03 +1309 074 some_text Chevagnes 03 +1310 075 some_text Chezelle 03 +1311 076 some_text Chézy 03 +1312 077 some_text Chirat-l'Église 03 +1313 078 some_text Chouvigny 03 +\. + +SELECT count(*) FROM towns; +SELECT * FROM towns WHERE id IN (13, 666); +-- ON REPLICA +-- +-- select count(*) from towns; +-- count +-- ------- +-- 1313 +-- (1 row) +-- +-- select * from towns where id in (13, 666); +-- id | code | article | name | department +-- -----+------+-----------+----------------+------------ +-- 13 | 014 | some_text | Arbent | 01 +-- 666 | 252 | some_text | Cuissy-et-Geny | 02 +-- (2 rows) +-- + +DROP TABLE towns; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/multi_insert.sql b/contrib/pg_tde/sql/multi_insert.sql new file mode 100644 index 00000000000..51ca82ad470 --- /dev/null +++ b/contrib/pg_tde/sql/multi_insert.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/multi_insert.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/multi_insert_basic.sql b/contrib/pg_tde/sql/multi_insert_basic.sql new file mode 100644 index 00000000000..4d34e97704c --- /dev/null +++ b/contrib/pg_tde/sql/multi_insert_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/multi_insert.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/non_sorted_off_compact.inc b/contrib/pg_tde/sql/non_sorted_off_compact.inc new file mode 100644 index 00000000000..3b6ac2a81b7 --- /dev/null +++ b/contrib/pg_tde/sql/non_sorted_off_compact.inc @@ -0,0 +1,56 @@ +-- A test case for https://github.com/percona/pg_tde/pull/21 +-- +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +DROP TABLE IF EXISTS sbtest1; +CREATE TABLE sbtest1( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; + +INSERT INTO sbtest1(k) VALUES +(1), +(2), +(3), +(4), +(5), +(6), +(7), +(8), +(9), +(10); +DELETE FROM sbtest1 WHERE id IN (4,5,6); + +VACUUM sbtest1; + +INSERT INTO sbtest1(k) VALUES +(11), +(12), +(13); + +-- Line pointers (lp) point to non-sorted offsets (lp_off): +-- CREATE EXTENSION pageinspect; +-- SELECT lp, lp_off, t_ctid FROM heap_page_items(get_raw_page('sbtest1', 0)); +-- lp | lp_off | t_ctid +-- ----+--------+-------- +-- 1 | 8160 | (0,1) +-- 2 | 8128 | (0,2) +-- 3 | 8096 | (0,3) +-- 4 | 7936 | (0,4) +-- 5 | 7904 | (0,5) +-- 6 | 7872 | (0,6) +-- 7 | 8064 | (0,7) +-- 8 | 8032 | (0,8) +-- 9 | 8000 | (0,9) +-- 10 | 7968 | (0,10) + +-- Trigger compaction +DELETE FROM sbtest1 WHERE id IN (2); +VACUUM sbtest1; + +DROP TABLE sbtest1; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/non_sorted_off_compact.sql b/contrib/pg_tde/sql/non_sorted_off_compact.sql new file mode 100644 index 00000000000..d36399eb887 --- /dev/null +++ b/contrib/pg_tde/sql/non_sorted_off_compact.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/non_sorted_off_compact.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/non_sorted_off_compact_basic.sql b/contrib/pg_tde/sql/non_sorted_off_compact_basic.sql new file mode 100644 index 00000000000..f9edfc46b2d
--- /dev/null +++ b/contrib/pg_tde/sql/non_sorted_off_compact_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/non_sorted_off_compact.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/pg_tde_is_encrypted.inc b/contrib/pg_tde/sql/pg_tde_is_encrypted.inc new file mode 100644 index 00000000000..00054e0d1cc --- /dev/null +++ b/contrib/pg_tde/sql/pg_tde_is_encrypted.inc @@ -0,0 +1,34 @@ +CREATE EXTENSION pg_tde; + +SELECT * FROM pg_tde_principal_key_info(); + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; + +CREATE TABLE test_norm( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING heap; + +SELECT amname FROM pg_class INNER JOIN pg_am ON pg_am.oid = pg_class.relam WHERE relname = 'test_enc'; +SELECT amname FROM pg_class INNER JOIN pg_am ON pg_am.oid = pg_class.relam WHERE relname = 'test_norm'; + +SELECT pg_tde_is_encrypted('test_enc'); +SELECT pg_tde_is_encrypted('test_norm'); + +SELECT pg_tde_is_encrypted('public.test_enc'); + +SELECT key_provider_id, key_provider_name, principal_key_name + FROM pg_tde_principal_key_info(); + +DROP TABLE test_enc; +DROP TABLE test_norm; + +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/pg_tde_is_encrypted.sql b/contrib/pg_tde/sql/pg_tde_is_encrypted.sql new file mode 100644 index 00000000000..c464a316e8f --- /dev/null +++ b/contrib/pg_tde/sql/pg_tde_is_encrypted.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/pg_tde_is_encrypted.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/pg_tde_is_encrypted_basic.sql b/contrib/pg_tde/sql/pg_tde_is_encrypted_basic.sql new file mode 100644 index 00000000000..e260192cbcf --- /dev/null +++ b/contrib/pg_tde/sql/pg_tde_is_encrypted_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/pg_tde_is_encrypted.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/subtransaction.inc b/contrib/pg_tde/sql/subtransaction.inc new file mode 100644 index 00000000000..3446d44ea76 --- /dev/null +++ b/contrib/pg_tde/sql/subtransaction.inc @@ -0,0 +1,25 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + + +BEGIN; -- Nesting level 1 +SAVEPOINT sp; +CREATE TABLE foo(s TEXT); -- Nesting level 2 +RELEASE SAVEPOINT sp; +SAVEPOINT sp; +CREATE TABLE bar(s TEXT); -- Nesting level 2 +ROLLBACK TO sp; -- Rollback should not affect first subtransaction +COMMIT; + +BEGIN; -- Nesting level 1 +SAVEPOINT sp; +DROP TABLE foo; -- Nesting level 2 +RELEASE SAVEPOINT sp; +SAVEPOINT sp; +CREATE TABLE bar(s TEXT); -- Nesting level 2 +ROLLBACK TO sp; -- Rollback should not affect first subtransaction +COMMIT; + +DROP EXTENSION pg_tde; \ No newline at end of file diff --git a/contrib/pg_tde/sql/subtransaction.sql b/contrib/pg_tde/sql/subtransaction.sql new file mode 100644 index 00000000000..ce4a442ebd3 --- /dev/null +++ b/contrib/pg_tde/sql/subtransaction.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/subtransaction.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/subtransaction_basic.sql b/contrib/pg_tde/sql/subtransaction_basic.sql new file mode 100644 index 00000000000..c7800c44dae --- /dev/null +++ b/contrib/pg_tde/sql/subtransaction_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i 
sql/subtransaction.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/tablespace.inc b/contrib/pg_tde/sql/tablespace.inc new file mode 100644 index 00000000000..b2f7ebddc77 --- /dev/null +++ b/contrib/pg_tde/sql/tablespace.inc @@ -0,0 +1,26 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +CREATE TABLE test(num1 bigint, num2 double precision, t text) USING :tde_am; +INSERT INTO test(num1, num2, t) + SELECT round(random()*100), random(), 'text' + FROM generate_series(1, 10) s(i); +CREATE INDEX test_idx ON test(num1); + +SET allow_in_place_tablespaces = true; +CREATE TABLESPACE test_tblspace LOCATION ''; + +ALTER TABLE test SET TABLESPACE test_tblspace; +SELECT count(*) FROM test; +ALTER TABLE test SET TABLESPACE pg_default; + +REINDEX (TABLESPACE test_tblspace, CONCURRENTLY) TABLE test; +INSERT INTO test VALUES (110, 2); + +SELECT * FROM test WHERE num1=110; + +DROP TABLE test; +DROP TABLESPACE test_tblspace; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/tablespace.sql b/contrib/pg_tde/sql/tablespace.sql new file mode 100644 index 00000000000..0d3aa6206d7 --- /dev/null +++ b/contrib/pg_tde/sql/tablespace.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/tablespace.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/tablespace_basic.sql b/contrib/pg_tde/sql/tablespace_basic.sql new file mode 100644 index 00000000000..0e358723953 --- /dev/null +++ b/contrib/pg_tde/sql/tablespace_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/tablespace.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/test_issue_153_fix.inc b/contrib/pg_tde/sql/test_issue_153_fix.inc new file mode 100644 index 00000000000..72eb4497e1b --- /dev/null +++ b/contrib/pg_tde/sql/test_issue_153_fix.inc @@ -0,0 +1,486 @@ +CREATE EXTENSION pg_tde; +SET datestyle TO 'iso, dmy'; + +SELECT * FROM pg_tde_principal_key_info(); + +SELECT pg_tde_add_key_provider_file('file-ring','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-ring'); + +-- +-- Script that creates the 'sample' tde encrypted tables, views +-- functions, triggers, etc. +-- +-- Start new transaction - commit all or nothing +-- +BEGIN; +-- +-- Create and load tables used in the documentation examples. 
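+-- +-- All of the tables below are created with the :tde_am access method, +-- so their heap data is written encrypted. As an illustrative check +-- (a sketch, not part of the expected regression output), encryption +-- could be verified after the load with the helper exercised elsewhere +-- in this test suite: +-- SELECT pg_tde_is_encrypted('emp');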
+-- +-- Create the 'dept' table +-- +CREATE TABLE dept ( + deptno NUMERIC(2) NOT NULL CONSTRAINT dept_pk PRIMARY KEY, + dname VARCHAR(14) CONSTRAINT dept_dname_uq UNIQUE, + loc VARCHAR(13) +) USING :tde_am; +-- +-- Create the 'emp' table +-- +CREATE TABLE emp ( + empno NUMERIC(4) NOT NULL CONSTRAINT emp_pk PRIMARY KEY, + ename VARCHAR(10), + job VARCHAR(9), + mgr NUMERIC(4), + hiredate DATE, + sal NUMERIC(7,2) CONSTRAINT emp_sal_ck CHECK (sal > 0), + comm NUMERIC(7,2), + deptno NUMERIC(2) CONSTRAINT emp_ref_dept_fk + REFERENCES dept(deptno) +) USING :tde_am; +-- +-- Create the 'jobhist' table +-- +CREATE TABLE jobhist ( + empno NUMERIC(4) NOT NULL, + startdate TIMESTAMP(0) NOT NULL, + enddate TIMESTAMP(0), + job VARCHAR(9), + sal NUMERIC(7,2), + comm NUMERIC(7,2), + deptno NUMERIC(2), + chgdesc VARCHAR(80), + CONSTRAINT jobhist_pk PRIMARY KEY (empno, startdate), + CONSTRAINT jobhist_ref_emp_fk FOREIGN KEY (empno) + REFERENCES emp(empno) ON DELETE CASCADE, + CONSTRAINT jobhist_ref_dept_fk FOREIGN KEY (deptno) + REFERENCES dept (deptno) ON DELETE SET NULL, + CONSTRAINT jobhist_date_chk CHECK (startdate <= enddate) +) USING :tde_am; +-- +-- Create the 'salesemp' view +-- +CREATE OR REPLACE VIEW salesemp AS + SELECT empno, ename, hiredate, sal, comm FROM emp WHERE job = 'SALESMAN'; +-- +-- Sequence to generate values for function 'new_empno'. +-- +CREATE SEQUENCE next_empno START WITH 8000 INCREMENT BY 1; +-- +-- Issue PUBLIC grants +-- +GRANT ALL ON emp TO PUBLIC; +GRANT ALL ON dept TO PUBLIC; +GRANT ALL ON jobhist TO PUBLIC; +GRANT ALL ON salesemp TO PUBLIC; +GRANT ALL ON next_empno TO PUBLIC; +-- +-- Load the 'dept' table +-- +INSERT INTO dept VALUES (10,'ACCOUNTING','NEW YORK'); +INSERT INTO dept VALUES (20,'RESEARCH','DALLAS'); +INSERT INTO dept VALUES (30,'SALES','CHICAGO'); +INSERT INTO dept VALUES (40,'OPERATIONS','BOSTON'); +-- +-- Load the 'emp' table +-- +INSERT INTO emp VALUES (7369,'SMITH','CLERK',7902,'17-DEC-80',800,NULL,20); +INSERT INTO emp VALUES (7499,'ALLEN','SALESMAN',7698,'20-FEB-81',1600,300,30); +INSERT INTO emp VALUES (7521,'WARD','SALESMAN',7698,'22-FEB-81',1250,500,30); +INSERT INTO emp VALUES (7566,'JONES','MANAGER',7839,'02-APR-81',2975,NULL,20); +INSERT INTO emp VALUES (7654,'MARTIN','SALESMAN',7698,'28-SEP-81',1250,1400,30); +INSERT INTO emp VALUES (7698,'BLAKE','MANAGER',7839,'01-MAY-81',2850,NULL,30); +INSERT INTO emp VALUES (7782,'CLARK','MANAGER',7839,'09-JUN-81',2450,NULL,10); +INSERT INTO emp VALUES (7788,'SCOTT','ANALYST',7566,'19-APR-87',3000,NULL,20); +INSERT INTO emp VALUES (7839,'KING','PRESIDENT',NULL,'17-NOV-81',5000,NULL,10); +INSERT INTO emp VALUES (7844,'TURNER','SALESMAN',7698,'08-SEP-81',1500,0,30); +INSERT INTO emp VALUES (7876,'ADAMS','CLERK',7788,'23-MAY-87',1100,NULL,20); +INSERT INTO emp VALUES (7900,'JAMES','CLERK',7698,'03-DEC-81',950,NULL,30); +INSERT INTO emp VALUES (7902,'FORD','ANALYST',7566,'03-DEC-81',3000,NULL,20); +INSERT INTO emp VALUES (7934,'MILLER','CLERK',7782,'23-JAN-82',1300,NULL,10); +-- +-- Load the 'jobhist' table +-- +INSERT INTO jobhist VALUES (7369,'17-DEC-80',NULL,'CLERK',800,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7499,'20-FEB-81',NULL,'SALESMAN',1600,300,30,'New Hire'); +INSERT INTO jobhist VALUES (7521,'22-FEB-81',NULL,'SALESMAN',1250,500,30,'New Hire'); +INSERT INTO jobhist VALUES (7566,'02-APR-81',NULL,'MANAGER',2975,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7654,'28-SEP-81',NULL,'SALESMAN',1250,1400,30,'New Hire'); +INSERT INTO jobhist VALUES (7698,'01-MAY-81',NULL,'MANAGER',2850,NULL,30,'New
Hire'); +INSERT INTO jobhist VALUES (7782,'09-JUN-81',NULL,'MANAGER',2450,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7788,'19-APR-87','12-APR-88','CLERK',1000,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7788,'13-APR-88','04-MAY-89','CLERK',1040,NULL,20,'Raise'); +INSERT INTO jobhist VALUES (7788,'05-MAY-90',NULL,'ANALYST',3000,NULL,20,'Promoted to Analyst'); +INSERT INTO jobhist VALUES (7839,'17-NOV-81',NULL,'PRESIDENT',5000,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7844,'08-SEP-81',NULL,'SALESMAN',1500,0,30,'New Hire'); +INSERT INTO jobhist VALUES (7876,'23-MAY-87',NULL,'CLERK',1100,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7900,'03-DEC-81','14-JAN-83','CLERK',950,NULL,10,'New Hire'); +INSERT INTO jobhist VALUES (7900,'15-JAN-83',NULL,'CLERK',950,NULL,30,'Changed to Dept 30'); +INSERT INTO jobhist VALUES (7902,'03-DEC-81',NULL,'ANALYST',3000,NULL,20,'New Hire'); +INSERT INTO jobhist VALUES (7934,'23-JAN-82',NULL,'CLERK',1300,NULL,10,'New Hire'); +-- +-- Populate statistics table and view (pg_statistic/pg_stats) +-- +ANALYZE dept; +ANALYZE emp; +ANALYZE jobhist; +-- +-- Function that lists all employees' numbers and names +-- from the 'emp' table using a cursor. +-- +CREATE OR REPLACE FUNCTION list_emp() RETURNS VOID +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + emp_cur CURSOR FOR + SELECT empno, ename FROM emp ORDER BY empno; +BEGIN + OPEN emp_cur; + RAISE INFO 'EMPNO ENAME'; + RAISE INFO '----- -------'; + LOOP + FETCH emp_cur INTO v_empno, v_ename; + EXIT WHEN NOT FOUND; + RAISE INFO '% %', v_empno, v_ename; + END LOOP; + CLOSE emp_cur; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that selects an employee row given the employee +-- number and displays certain columns. +-- +CREATE OR REPLACE FUNCTION select_emp ( + p_empno NUMERIC +) RETURNS VOID +AS $$ +DECLARE + v_ename emp.ename%TYPE; + v_hiredate emp.hiredate%TYPE; + v_sal emp.sal%TYPE; + v_comm emp.comm%TYPE; + v_dname dept.dname%TYPE; + v_disp_date VARCHAR(10); +BEGIN + SELECT INTO + v_ename, v_hiredate, v_sal, v_comm, v_dname + ename, hiredate, sal, COALESCE(comm, 0), dname + FROM emp e, dept d + WHERE empno = p_empno + AND e.deptno = d.deptno; + IF NOT FOUND THEN + RAISE INFO 'Employee % not found', p_empno; + RETURN; + END IF; + v_disp_date := TO_CHAR(v_hiredate, 'MM/DD/YYYY'); + RAISE INFO 'Number : %', p_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Hire Date : %', v_disp_date; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission: %', v_comm; + RAISE INFO 'Department: %', v_dname; + RETURN; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- A RECORD type used to format the return value of +-- function, 'emp_query'. +-- +CREATE TYPE emp_query_type AS ( + empno NUMERIC, + ename VARCHAR(10), + job VARCHAR(9), + hiredate DATE, + sal NUMERIC +); +-- +-- Function that queries the 'emp' table based on +-- department number and employee number or name. Returns +-- employee number and name as INOUT parameters and job, +-- hire date, and salary as OUT parameters. These are +-- returned in the form of a record defined by +-- RECORD type, 'emp_query_type'. 
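+-- For illustration (a sketch, not executed by this test), a call such as +-- SELECT * FROM emp_query(30, 0, 'Martin'); +-- would return a single record (p_empno, p_ename, p_job, p_hiredate, p_sal) +-- matching employee MARTIN in department 30.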
+-- +CREATE OR REPLACE FUNCTION emp_query ( + IN p_deptno NUMERIC, + INOUT p_empno NUMERIC, + INOUT p_ename VARCHAR, + OUT p_job VARCHAR, + OUT p_hiredate DATE, + OUT p_sal NUMERIC +) +AS $$ +BEGIN + SELECT INTO + p_empno, p_ename, p_job, p_hiredate, p_sal + empno, ename, job, hiredate, sal + FROM emp + WHERE deptno = p_deptno + AND (empno = p_empno + OR ename = UPPER(p_ename)); +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that calls 'emp_query' with IN and INOUT +-- parameters. Displays the results received from INOUT and +-- OUT parameters. +-- +CREATE OR REPLACE FUNCTION emp_query_caller() RETURNS VOID +AS $$ +DECLARE + v_deptno NUMERIC; + v_empno NUMERIC; + v_ename VARCHAR; + v_rows INTEGER; + r_emp_query EMP_QUERY_TYPE; +BEGIN + v_deptno := 30; + v_empno := 0; + v_ename := 'Martin'; + r_emp_query := emp_query(v_deptno, v_empno, v_ename); + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', (r_emp_query).empno; + RAISE INFO 'Name : %', (r_emp_query).ename; + RAISE INFO 'Job : %', (r_emp_query).job; + RAISE INFO 'Hire Date : %', (r_emp_query).hiredate; + RAISE INFO 'Salary : %', (r_emp_query).sal; + RETURN; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function to compute yearly compensation based on semimonthly +-- salary. +-- +CREATE OR REPLACE FUNCTION emp_comp ( + p_sal NUMERIC, + p_comm NUMERIC +) RETURNS NUMERIC +AS $$ +BEGIN + RETURN (p_sal + COALESCE(p_comm, 0)) * 24; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that gets the next number from sequence, 'next_empno', +-- and ensures it is not already in use as an employee number. +-- +CREATE OR REPLACE FUNCTION new_empno() RETURNS INTEGER +AS $$ +DECLARE + v_cnt INTEGER := 1; + v_new_empno INTEGER; +BEGIN + WHILE v_cnt > 0 LOOP + SELECT INTO v_new_empno nextval('next_empno'); + SELECT INTO v_cnt COUNT(*) FROM emp WHERE empno = v_new_empno; + END LOOP; + RETURN v_new_empno; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that adds a new clerk to table 'emp'. +-- +CREATE OR REPLACE FUNCTION hire_clerk ( + p_ename VARCHAR, + p_deptno NUMERIC +) RETURNS NUMERIC +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + v_job VARCHAR(9); + v_mgr NUMERIC(4); + v_hiredate DATE; + v_sal NUMERIC(7,2); + v_comm NUMERIC(7,2); + v_deptno NUMERIC(2); +BEGIN + v_empno := new_empno(); + INSERT INTO emp VALUES (v_empno, p_ename, 'CLERK', 7782, + CURRENT_DATE, 950.00, NULL, p_deptno); + SELECT INTO + v_empno, v_ename, v_job, v_mgr, v_hiredate, v_sal, v_comm, v_deptno + empno, ename, job, mgr, hiredate, sal, comm, deptno + FROM emp WHERE empno = v_empno; + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', v_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Job : %', v_job; + RAISE INFO 'Manager : %', v_mgr; + RAISE INFO 'Hire Date : %', v_hiredate; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission : %', v_comm; + RETURN v_empno; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN -1; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Function that adds a new salesman to table 'emp'.
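+-- An illustrative call (not executed by this test) would be +-- SELECT hire_salesman('DOE', 1800, 500); +-- which inserts the row and returns the employee number generated by +-- new_empno().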
+-- +CREATE OR REPLACE FUNCTION hire_salesman ( + p_ename VARCHAR, + p_sal NUMERIC, + p_comm NUMERIC +) RETURNS NUMERIC +AS $$ +DECLARE + v_empno NUMERIC(4); + v_ename VARCHAR(10); + v_job VARCHAR(9); + v_mgr NUMERIC(4); + v_hiredate DATE; + v_sal NUMERIC(7,2); + v_comm NUMERIC(7,2); + v_deptno NUMERIC(2); +BEGIN + v_empno := new_empno(); + INSERT INTO emp VALUES (v_empno, p_ename, 'SALESMAN', 7698, + CURRENT_DATE, p_sal, p_comm, 30); + SELECT INTO + v_empno, v_ename, v_job, v_mgr, v_hiredate, v_sal, v_comm, v_deptno + empno, ename, job, mgr, hiredate, sal, comm, deptno + FROM emp WHERE empno = v_empno; + RAISE INFO 'Department : %', v_deptno; + RAISE INFO 'Employee No: %', v_empno; + RAISE INFO 'Name : %', v_ename; + RAISE INFO 'Job : %', v_job; + RAISE INFO 'Manager : %', v_mgr; + RAISE INFO 'Hire Date : %', v_hiredate; + RAISE INFO 'Salary : %', v_sal; + RAISE INFO 'Commission : %', v_comm; + RETURN v_empno; +EXCEPTION + WHEN OTHERS THEN + RAISE INFO 'The following is SQLERRM : %', SQLERRM; + RAISE INFO 'The following is SQLSTATE: %', SQLSTATE; + RETURN -1; +END; +$$ LANGUAGE 'plpgsql'; +-- +-- Rule to INSERT into view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_i AS ON INSERT TO salesemp +DO INSTEAD + INSERT INTO emp VALUES (NEW.empno, NEW.ename, 'SALESMAN', 7698, + NEW.hiredate, NEW.sal, NEW.comm, 30); +-- +-- Rule to UPDATE view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_u AS ON UPDATE TO salesemp +DO INSTEAD + UPDATE emp SET empno = NEW.empno, + ename = NEW.ename, + hiredate = NEW.hiredate, + sal = NEW.sal, + comm = NEW.comm + WHERE empno = OLD.empno; +-- +-- Rule to DELETE from view 'salesemp' +-- +CREATE OR REPLACE RULE salesemp_d AS ON DELETE TO salesemp +DO INSTEAD + DELETE FROM emp WHERE empno = OLD.empno; +-- +-- After statement-level trigger that displays a message after +-- an insert, update, or deletion to the 'emp' table. One message +-- per SQL command is displayed. +-- +CREATE OR REPLACE FUNCTION user_audit_trig() RETURNS TRIGGER +AS $$ +DECLARE + v_action VARCHAR(24); + v_text TEXT; +BEGIN + IF TG_OP = 'INSERT' THEN + v_action := ' added employee(s) on '; + ELSIF TG_OP = 'UPDATE' THEN + v_action := ' updated employee(s) on '; + ELSIF TG_OP = 'DELETE' THEN + v_action := ' deleted employee(s) on '; + END IF; +-- v_text := 'User ' || USER || v_action || CURRENT_DATE; Changing this as we need consistent output for regression + v_text := 'User ' || v_action ; + RAISE INFO ' %', v_text; + RETURN NULL; +END; +$$ LANGUAGE 'plpgsql'; +CREATE TRIGGER user_audit_trig + AFTER INSERT OR UPDATE OR DELETE ON emp + FOR EACH STATEMENT EXECUTE PROCEDURE user_audit_trig(); +-- +-- Before row-level trigger that displays employee number and +-- salary of an employee that is about to be added, updated, +-- or deleted in the 'emp' table. 
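+-- For example (illustrative, based on the sample data loaded above), +-- UPDATE emp SET sal = sal + 100 WHERE empno = 7369; +-- would report the old salary (800.00), the new salary (900.00) and +-- the raise (100.00) before the row is changed.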
+-- +CREATE OR REPLACE FUNCTION emp_sal_trig() RETURNS TRIGGER +AS $$ +DECLARE + sal_diff NUMERIC(7,2); +BEGIN + IF TG_OP = 'INSERT' THEN + RAISE INFO 'Inserting employee %', NEW.empno; + RAISE INFO '..New salary: %', NEW.sal; + RETURN NEW; + END IF; + IF TG_OP = 'UPDATE' THEN + sal_diff := NEW.sal - OLD.sal; + RAISE INFO 'Updating employee %', OLD.empno; + RAISE INFO '..Old salary: %', OLD.sal; + RAISE INFO '..New salary: %', NEW.sal; + RAISE INFO '..Raise : %', sal_diff; + RETURN NEW; + END IF; + IF TG_OP = 'DELETE' THEN + RAISE INFO 'Deleting employee %', OLD.empno; + RAISE INFO '..Old salary: %', OLD.sal; + RETURN OLD; + END IF; +END; +$$ LANGUAGE 'plpgsql'; +CREATE TRIGGER emp_sal_trig + BEFORE DELETE OR INSERT OR UPDATE ON emp + FOR EACH ROW EXECUTE PROCEDURE emp_sal_trig(); +COMMIT; + +SELECT * FROM emp; +SELECT * FROM dept; +SELECT * FROM jobhist; + +-- Now test the crash fix +DELETE FROM emp WHERE empno = 7934; +DELETE FROM emp WHERE empno = 7698; +DELETE FROM emp WHERE empno = 7782; +DELETE FROM emp WHERE empno = 7788; +DELETE FROM emp WHERE empno = 7838; +DELETE FROM emp WHERE empno = 7900; +DELETE FROM emp WHERE empno = 7654; + +DELETE FROM dept WHERE deptno = 40; + +SELECT * FROM emp; +SELECT * FROM dept; +SELECT * FROM jobhist; + +DROP TABLE jobhist CASCADE; +DROP TABLE emp CASCADE; +DROP TABLE dept CASCADE; + +DROP SEQUENCE next_empno; +DROP TYPE emp_query_type; + +DROP EXTENSION pg_tde CASCADE; diff --git a/contrib/pg_tde/sql/test_issue_153_fix.sql b/contrib/pg_tde/sql/test_issue_153_fix.sql new file mode 100644 index 00000000000..389e6701217 --- /dev/null +++ b/contrib/pg_tde/sql/test_issue_153_fix.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/test_issue_153_fix.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/test_issue_153_fix_basic.sql b/contrib/pg_tde/sql/test_issue_153_fix_basic.sql new file mode 100644 index 00000000000..8cca850891b --- /dev/null +++ b/contrib/pg_tde/sql/test_issue_153_fix_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/test_issue_153_fix.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/toast_decrypt.inc b/contrib/pg_tde/sql/toast_decrypt.inc new file mode 100644 index 00000000000..b41a2731464 --- /dev/null +++ b/contrib/pg_tde/sql/toast_decrypt.inc @@ -0,0 +1,12 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +CREATE TABLE src (f1 TEXT STORAGE EXTERNAL) USING :tde_am; +INSERT INTO src VALUES(repeat('abcdeF',1000)); +SELECT * FROM src; + +DROP TABLE src; + +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/toast_decrypt.sql b/contrib/pg_tde/sql/toast_decrypt.sql new file mode 100644 index 00000000000..7b62e4f4565 --- /dev/null +++ b/contrib/pg_tde/sql/toast_decrypt.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/toast_decrypt.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/toast_decrypt_basic.sql b/contrib/pg_tde/sql/toast_decrypt_basic.sql new file mode 100644 index 00000000000..6f4e847151b --- /dev/null +++ b/contrib/pg_tde/sql/toast_decrypt_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/toast_decrypt.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/toast_extended_storage.inc b/contrib/pg_tde/sql/toast_extended_storage.inc new file mode 100644 index 00000000000..2311e8038ab --- /dev/null +++ b/contrib/pg_tde/sql/toast_extended_storage.inc @@ -0,0 +1,49 @@ +-- test https://github.com/percona/pg_tde/issues/63 
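+-- The multi-kilobyte literals below exceed the TOAST threshold, so their +-- values are stored out of line in the table's TOAST relation; that is the +-- code path this test exercises. As an illustrative check (not part of the +-- expected regression output), the stored size could be inspected with +-- SELECT pg_column_size(f1) FROM src;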
+CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +CREATE TEMP TABLE src (f1 text) USING :tde_am; +-- Crash on INSERT +INSERT INTO src +VALUES('0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643
110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.89685838695066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.60898765857225670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173
435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.234224425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.41069264808958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113
573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.209294366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.40003951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.075531540204718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.27753873
3464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.151986269119320340.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648
068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454'); +SELECT * FROM src; + +DROP TABLE src; + +CREATE TABLE src2 (f1 TEXT) USING :tde_am; +INSERT INTO src2 +VALUES('0.55859909742449630.44658969494913570.54075930161272720.173117157913014630.61483029376206380.65764492874377220.341317552838924730.367982528684053230.175345977931963270.168412839608874880.00154803678245296620.82706532396263290.74748634462447190.090831815264683650.390919315685386960.99951082699941550.9977981693287330.6988579613645320.310754450662202640.90325242484683190.75374800591547490.26680100643896230.55751915566773990.57902456214791110.36183153154123460.63524053266029830.78389378855711180.19584445869629020.199333924650425760.82155191593829560.7371944732869880.183910466357891660.0147813222233452720.3747022411129810.49101561236565550.95483453706535880.35594888092451550.43381965349401440.46361549602747920.50604155870332760.86586716524835540.63478889357891990.77509493569207090.86665305443338790.64852060828658550.50280760242256580.21800585609741340.096173392125813220.0261400320036884180.33800342276157360.485498510272187160.69492885593869610.14719438590370170.57633710730539910.6854376608363930.162803430883830650.28902094699378880.93884121928877070.4124819510126210.69895400258256470.61386295568035320.019902272612943640.85235316437206570.0940431968488260.272794218757168140.61549039934229780.422575394607501930.67002314675933960.465323258961145570.163191821055387760.0126060416991824460.40893698240906830.31893797439819460.15469713662310670.55528689194077320.66788769570588440.71025660771475390.38117379415620990.0220335908759561330.107673951160519810.71950609969184590.54341042925206180.024053937929693570.74203099973156790.035064651259838710.86887319172737380.335093303911782050.7483180995321150.97612135845236070.084654394261215680.76508793255901520.68191364158327270.64505339286832350.448618338317764650.335092422718133550.55149498651635520.9413160253094210.9168195414285170.98684856309613790.60400653549636750.85646368669913330.58837858335250370.9799739681795840.48079146876587030.218616079813109820.9302335200895790.4780449500011730.424050492872935840.063479437634682330.98094393207488960.335273138834828230.48560551700566790.139203310225991970.62595627061874380.71122415168232940.152848330691444540.89199132936279750.27800941859127290.95439564372772280.84837555269067490.75100083734460510.362767538265398760.235976384421268110.61187422491548470.9495830853409060.89514971758309940.26872924068443680.74626444803809160.139587450203541460.302395254482703770.78411327172089250.38966191620694680.198917136896949340.64377926785777470.293260719678428260.44648764937475450.37420519795286180.92220518748025750.0073108929093146370.93143459249930790.61565949670551650.95409849589104280.70559701135921380.80911223952124960.78688763724234260.0143429787
186462040.47314838377612680.220085013432371750.74895079799389160.34241785580036590.377476662711184960.55856596798903090.72300200663394070.93372512510565420.48213601319131170.98122442024471220.346628953420145660.74820202768550950.36134870838736320.53096217018068880.91813333111021930.16912775074741670.37503790891484610.9532471869686030.125924225709406650.481359293779658250.49808987733380960.292530386328931250.87891128070888010.190872215535672130.8880398891158570.312849610687170760.83382258936561130.88683286703304340.54819728672968980.55198306885689990.94518489093830830.82370179536934040.39422249429194810.88920643110698830.100781813305392380.156896688655811630.176728786940858470.418953555096873260.49179322828441930.6482244643731250.50630017133792920.96824089562929780.48649962422895390.224128640753047840.65318604085187480.0277597024087572470.269592268429819760.078229807252904630.12959218454427490.76024146340840760.53005245019718040.208874546770384530.55257426353213910.5937585938899110.80002298982932360.176800500254526760.80793461098073150.73215202402765760.89330730727462560.0317516822834684740.160090174689148550.51532774354845980.70921991745912830.138735433408188950.57752467002560.403150487295366840.40749394747573110.66251587358165880.35063881167575020.9599596099677250.445932818660210060.287245889223862740.8257369856611840.70400052356170930.30353378134511090.393039351875958730.362370380599912560.388471172945746850.365472631694610860.54200112719970450.314895026881156560.95277396387790940.65717791763685370.190656024871326270.97805817247168080.82424428074710860.90092118424733720.87158730600807480.92974051731450720.65817976419118640.53290189661821690.39921764010135830.437371129418732130.54880953388242660.68460757473144040.79353167169486190.76018043252842250.86776754557471510.154812009703990.43513822113626890.27237032901240730.319453093144346440.71805913046337030.36559454463692820.744352905917590.104333806358574810.436173275549234240.32527834767622110.267847234273361550.57084377650998520.104177751155159590.33089495199934650.63225317492672660.74723541122747060.494332985433825640.69567541211602820.25175339015164710.63155410904344930.85261979776189520.213947072087478540.116500925662627930.0179480357425600980.174284134096331030.033176454827700930.74261681843758920.87530858693430760.7475446944895350.94056404731337580.63391517780623240.19782412222947120.294298476924694350.078590228180988490.94544551154725580.309740808480930330.58151178351223430.54279813541957210.15899402561489050.220517609114043940.87020048815449540.453872032347550470.152859165096062450.61074979931058640.144888238626858620.99041812532331260.54156908920795370.04165813661344120.52106850675663050.70979862294325490.309423192717801940.104759377449733520.100241108871801820.57623244475946890.89685838695066630.090179331203737560.459757127631922560.91055597599411020.062652892151061620.3957549469001820.20930474506823660.100267794939050910.088485558336314440.48902516637998940.28887564989109650.57153819581046220.55892890575540880.0560891728851322660.7344804208424660.148907757489083360.76957143272000830.154868098317315180.81159315420869630.44133537433804590.37278761862448160.60591287657588920.063302672508175340.053306086212437710.46220266749901540.128985239698984570.85633874521839260.067996420502524520.0284232841636806730.29836261032579190.39154315471529990.222267341457545120.263539428271130970.62778609185857850.60776229183174580.77080401415726360.265763132444866160.242504428121770750.96329054224533460.97762540147621140.89131212200180630.6089876585722
5670.67429141511349160.60523963808397350.246266851399476350.403945416969184070.57685155716036270.0057898216945795330.461298136292183970.109095726379537130.64108109155126210.52729468037839820.429250823991674140.0461429365458496670.78701739169339650.486744604809777660.67408253928102060.245631501779759680.97499790194550550.85293571370140420.67698635446231050.412067250333180370.97482534007991070.76365268652784390.60843248596980670.325432520881775430.53749708721485770.355229569948804170.42115127519206050.90879701384812690.32908069089866520.33023841256557950.77756498707736620.18756301506162210.52706375281054750.73173435691423450.00241391828323744840.251433710269416850.80702906815626530.2638196814093130.38270953033148090.118473162701491220.83001470395355550.85223182629553640.033022814590289110.9965499529438340.67054036405993680.68755543955027210.208611605942402580.87914323433706860.165905178807298270.182595893077931450.21513996146425130.089552155874880410.314779054365917240.79612735879776710.42602975925430940.90633195438770240.74836824537341370.36821330725304760.56473312465489340.301758528313859740.257557183799996150.52175015048395750.34437531739733030.12803644333956510.58152344614280870.81024139869387970.92151339563803970.9419794316846240.14518928800493570.51509222411904320.88433839509613280.7043713958444770.57844071347444470.82142787881797560.361182666192116030.401080783630624940.00360516170878111280.50992311937912050.48592820943372070.049224043025032140.33757691441461060.63122825460216840.217143314204432780.075532976986801610.88692098500172280.3939705645440310.76987032912433540.42978415521497990.39660803881931270.195834795264905770.68952908520953130.64390116049496760.283061853701023660.75959274948839410.115863202690749350.62027371986814180.61505352340231130.475082412351449430.055534471845069430.36852323404335950.94079455836769110.378262424582432640.76888703633139490.40316203479605360.0618512243941680140.410221022783130660.166202503621515780.77605355493809180.0238360080265551670.58146598984882970.5724057644768230.98836532669660770.317241629857772930.6076338271910620.169824116669973660.235329859686410180.89812158420926110.91911037909535540.89566442642084640.96017124223141990.136805494040843630.29922151617189940.084129285069876050.142322258309026670.331958796010522360.77195152983670480.28065178532813340.34929560908190840.36412919524028410.20623600253013440.98231558166537920.109411601043275520.420751361534868140.25398188076793350.73210274229522860.35523027622504390.56017854324865990.035906773682321980.458594285249765750.429186356476820130.29376322396731270.28620072549417430.81755145809453760.7648289441362690.107720329547366770.312164929559409550.98654986130775990.240209261278337750.71102263928954420.025219566132762550.347013891045537150.94806739585158150.033290356497574390.37228908363793820.50679372610177960.42006058974112090.88192213558663360.9608866193956920.58474979630793890.042227994641621080.0252720756429745120.33045360424017420.073186012566550.51188944632792530.88119389097831840.62135241206234930.273771472557474250.0521672174874079350.221245186496503930.24687691656478240.147320124390813770.190792370597731240.69355122216463560.086056706966926690.82310172842897770.32887183166387790.6268252443986260.63323066430150310.303226019288216350.64636790914559760.62172489249387520.8906208337615320.53724750507000230.91256347427921550.105726509492241270.234224425962393120.59141778297622790.46983523949980890.65062949305891320.8405190258421960.204788284824616750.48877759819797960.206091190201679760.4106926480
8958360.81763373366381420.5084606548014940.0287355170596057530.262703166966945960.69636550335784660.296805576443307740.33823549021164090.55206402049435740.048599135591500310.180280693104786230.39809430050236940.85783149110582780.59856943049089170.73083898331960610.39115620101771540.97510738320707180.49832685663478360.83969482573285580.81263903325462290.175900457406543480.25362717481604390.4086176886942350.94859118189175340.72728673793949540.86000363710178670.72991922087398780.144146361114049880.137307127131112820.95837138870799480.30464143430607680.60549385299218940.25589131494997110.202879664003428270.64344113573378990.086028307747932020.2162077473738040.133179303093533540.63276918975496830.223612877608330820.090979780540967560.47248850309212910.42497534390669990.54265927351340330.78026452844579630.47617580145072560.52025111760824120.455497539401685140.84750943622097340.209294366697259760.16819707470244150.48476513425161640.57983872920100280.474382158984199660.254874470253334540.52585380692307830.25590342041928180.53444932407398720.2467763723907610.3873669403624920.95841821859258450.63836466385929370.416529458114704760.5385547032325060.55113853218554860.78536228490082220.0423541444084443160.61104340251074580.25024327522581060.8968304236004010.91708646839265830.89986432305046280.244670466125564980.91990842319302280.61089716025359750.64165798316833510.150486815632834280.302656874235680550.77299688751490070.06442465730779690.170187735350043170.96092925877515610.073569446403604080.71566156948489470.41688213041932330.106009564676631920.77772746604669680.40420283080528850.53720781701369490.80248755232571470.9067255270263550.66110396909102160.74198258125295680.100036449910233530.53372181079229670.195456372245575770.36778542134467540.52677015412378260.0046158429404739020.3573129258646650.70710429286692180.7590453334083040.080069319066503390.13645177032718280.75294433071330950.30623120284457350.124939879649973660.74666367922779210.27283546392438620.0249682284310672740.231418455110220080.18240016991642350.94835900483274790.8185585957416810.88227138571932670.5763656518909330.041425729831589430.156495692422954140.383434359444130960.33191761135630470.73429826375351340.70196127414686390.4176693994178480.51798783115477140.133000437723328920.99183781995588590.62038035159226260.70230630180648260.48149294093471930.071032611375611680.316692539064261650.72180762218821220.33585192016184260.2450197225961710.381209315857462760.86065931604870150.20926776488396760.150989235950110820.034414934269560330.202697791930762650.256674769360348830.83792620592924920.115132877456959550.99550359751178990.96660022265996550.215495131403689920.56092083988834850.5414241305269070.94699295816500210.57836049825443240.126634069691756950.34139212995839420.81123452422299750.395502066337996670.02784202585389850.70054461921525820.0251629273855515920.400088495101414750.042343479053726930.255385456379162430.115323214180639290.86052868405649410.80679786790456510.185724249636432410.70225407591357740.58488866880594670.78797295634420.060463509837799690.94239615039764080.390925469140910530.61930394909062090.00242977964733892550.03832342462305660.236127432526586520.85665946639165140.63819586647921330.41918198781161540.8806863111437020.30998197088794630.5091071836721230.85148740830823380.479261121128822240.5493687561286420.406746809922069460.67766155839030540.68582761943917790.6213998619846040.40003951756032510.50012334946657160.86347241702015310.498562514425627160.43387154606222890.43072796203442290.84431096858734240.91556206059139990.0755315402
04718530.34637779857456620.09803193830141810.113257180770137910.471363081925707750.386860589489405270.84006043516887830.147215521084850480.99494822829120370.8993747317254130.28745440228503780.61276005847955250.472781788642338440.276861272513734940.117763298225837860.215988090000936550.55739417150262690.82256884249839920.24319530989306770.65012212094836630.200740656294019710.94765111093963530.41269728851781330.71225577084735250.400411906058890.95872905170228840.34550901391959330.153222305223898130.95864777832062950.70017440926415330.74467979235810430.96637701524562810.95769716263556770.67722900525357280.277538733464677060.25299627181338760.49175435775497920.444808352839532350.95159141856708640.3290512494095710.261643308927478340.58005405681136550.81668091542356950.5806116903806480.45714437436937770.0562933316045919340.6300438974448530.188623906862750880.21760948467363360.89388938998057510.116537571134564640.95150737833507870.04176692770659640.33629030272703230.90835370155068820.71473766853785060.63270822261999630.093062154612900240.020562706895957740.66910006911490850.211929290423362280.055661110765969690.39836223634887480.94083417838387410.55424792517495880.470869449244478270.87337609853183710.103244217533338570.55321869490207120.54172435809406090.25695314878723720.16631015046142150.76331973383241340.1899178641120980.56554122983082840.89321305405550010.55793973112074410.251214203805392170.88801755321451090.126348423024250640.51313501961726040.71355373012135170.2438577330041840.35699645637431330.67578064396352120.276489256552760350.33298881202684960.00126232159337602570.91768198017511730.389940479426011950.0071034111941941090.485440276060289830.99044697362439730.28868605183053340.092549576897248410.67042684484907470.107407998478450620.386642394768174750.6602118447246730.87454861233932890.17018753511598960.94127747210143740.80321960491334640.97873131544938460.69885414797191640.491085091427780.0256061015472428540.98781509496584550.78234026806900130.77366145515249270.86292229329185880.18141135788276520.53115418675967780.0204720611536368670.78466124297385130.79878337673937040.64676938370403090.66340028659503990.0172587732682336630.50643826587851070.304259859706085050.231148535265845560.72913383643661760.96056196839458270.103726581164883540.723319718296590.96256253881580080.84915058756499540.29326902056361060.060498267997478150.233590322887418770.82394353907746120.35836913871068310.42868318857439360.92620906123748360.428856383380940.0128858422895083980.5897482329242330.74346123509463720.71392134263184510.7922870515385190.45034578460346420.220713070657383660.50036576077936770.33315150965670970.153791141264383760.081628549271551610.50269544524849260.79871301529344190.62880130752812930.61514733085654270.133290598379500750.98622715267453280.236064344707098250.96733120263385030.474023712962884370.31380358511415230.424555265473760770.467924922792159360.0251235794683071220.6759587091182420.182943144737917730.403016682666544670.91099124492879270.282707871188664140.60257089242009630.41189844877838810.24765897575224960.87111564743408910.441703916151838570.224931920602724930.11202177846435290.074042088708803360.73892123885173520.1665949066403940.061017078562687120.47880792614354870.66027786637404670.8643395969145260.372093701089625030.81660163351184290.227518457861348060.81823518538645380.6354593624513880.130643509274014360.49431668684029950.151986269119320340.37132344164127120.7617103369919940.056864798294056440.73540879564125850.65732925913024020.7251631704649990.91258109596924930.80117253974860940.133379
923360126050.69118582098829440.87788183388539290.78718370446488680.5781924355256240.72830873052270830.418725353291123260.113458368474375740.72741570241218830.246127795989631970.52988627694727030.52435244304811570.416120042667790240.78792656042796350.67466269717410140.48713989424384320.99027893964726040.57471525724853150.207240278538425530.94860445376822540.38937980296276420.85939650190469960.227749538284571780.91513558546019280.83968344275887110.81636768367206390.89891236927292930.05689027675212710.362862092800543270.86873922271553240.72174266199009860.73772646324074520.82771531001742020.27083109056695510.41648068011031960.089699268759771970.215544913134742220.5868214136880710.310309893365539270.417081436339563850.41264633416121030.94632759256888210.52903739330871650.156591953331768560.63323741807498650.028415091408658720.67147107229582550.237939421790171360.71193150247025860.30796382253117940.4538868452261180.0082861042382491590.83517541288095280.175955384389067770.307543514998160460.382878090844856130.6418790117085420.86507915169740610.94224842628676790.164135522294932780.09486941194495690.157102263729585360.5742678522826350.50625991475584970.131334532205562130.78874937990440010.78110607600549380.7682254095530070.032657183065025520.004936553383318110.6419535543045420.410641505575076060.213250252801446160.54995289118616460.22467936776999430.245124565834815340.8678620340425454'); +SELECT * FROM src2; + +DROP TABLE src2; + +-- https://github.com/percona/pg_tde/issues/82 +CREATE TABLE indtoasttest(descr text, cnt int DEFAULT 0, f1 text, f2 text) using :tde_am; + +INSERT INTO indtoasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',30000), repeat('1234567890',50000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('one-compressed,one-null', NULL, repeat('1234567890',1000)); +INSERT INTO indtoasttest(descr, f1, f2) VALUES('one-toasted,one-null', NULL, repeat('1234567890',50000)); + +UPDATE indtoasttest SET cnt = cnt +1 RETURNING substring(indtoasttest::text, 1, 200); +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1 RETURNING substring(indtoasttest::text, 1, 200); +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1||'' RETURNING substring(indtoasttest::text, 1, 200); +UPDATE indtoasttest SET cnt = cnt +1, f1 = f1||'' RETURNING substring(indtoasttest::text, 1, 200); +UPDATE indtoasttest SET f2 = '+'||f2||'-' ; + +DROP TABLE indtoasttest; + +-- Test substr with toasted externalized bytea values +CREATE TABLE toasttest(t bytea STORAGE EXTERNAL) using :tde_am; +INSERT INTO toasttest VALUES (decode(repeat('1234567890',10000), 'escape')); + +SET bytea_output = 'escape'; +SELECT substring(t, 1, 10) FROM toasttest; +SELECT substring(t, 50001, 10) FROM toasttest; +SELECT substring(t, 99991) FROM toasttest; + +DROP TABLE toasttest; + +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/toast_extended_storage.sql b/contrib/pg_tde/sql/toast_extended_storage.sql new file mode 100644 index 00000000000..5951dc64313 --- /dev/null +++ b/contrib/pg_tde/sql/toast_extended_storage.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/toast_extended_storage.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/toast_extended_storage_basic.sql b/contrib/pg_tde/sql/toast_extended_storage_basic.sql new file mode 100644 index 00000000000..4e079b0cac4 --- /dev/null +++ b/contrib/pg_tde/sql/toast_extended_storage_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i 
sql/toast_extended_storage.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/trigger_on_view.inc b/contrib/pg_tde/sql/trigger_on_view.inc new file mode 100644 index 00000000000..b229d1b9449 --- /dev/null +++ b/contrib/pg_tde/sql/trigger_on_view.inc @@ -0,0 +1,133 @@ +CREATE extension pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +-- +-- 2 -- Test triggers on a join view +-- +SET default_table_access_method TO ':tde_am'; + +DROP VIEW IF EXISTS city_view CASCADE; +DROP TABLE IF exists country_table CASCADE; +DROP TABLE IF exists city_table cascade; + + CREATE TABLE country_table ( + country_id serial primary key, + country_name text unique not null, + continent text not null + ) using :tde_am; + + INSERT INTO country_table (country_name, continent) + VALUES ('Japan', 'Asia'), + ('UK', 'Europe'), + ('USA', 'North America') + RETURNING *; + + CREATE TABLE city_table ( + city_id serial primary key, + city_name text not null, + population bigint, + country_id int references country_table + ) using :tde_am; + + CREATE VIEW city_view AS + SELECT city_id, city_name, population, country_name, continent + FROM city_table ci + LEFT JOIN country_table co ON co.country_id = ci.country_id; + +CREATE OR REPLACE FUNCTION city_insert() RETURNS trigger LANGUAGE plpgsql AS $$ + declare + ctry_id int; + begin + if NEW.country_name IS NOT NULL then + SELECT country_id, continent INTO ctry_id, NEW.continent + FROM country_table WHERE country_name = NEW.country_name; + if NOT FOUND then + raise exception 'No such country: "%"', NEW.country_name; + end if; + else + NEW.continent := NULL; + end if; + + if NEW.city_id IS NOT NULL then + INSERT INTO city_table + VALUES(NEW.city_id, NEW.city_name, NEW.population, ctry_id); + else + INSERT INTO city_table(city_name, population, country_id) + VALUES(NEW.city_name, NEW.population, ctry_id) + RETURNING city_id INTO NEW.city_id; + end if; + + RETURN NEW; + end; + $$; + + CREATE TRIGGER city_insert_trig INSTEAD OF INSERT ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_insert(); + + CREATE OR REPLACE FUNCTION city_delete() RETURNS trigger LANGUAGE plpgsql AS $$ + begin + DELETE FROM city_table WHERE city_id = OLD.city_id; + if NOT FOUND then RETURN NULL; end if; + RETURN OLD; + end; + $$; + + CREATE TRIGGER city_delete_trig INSTEAD OF DELETE ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_delete(); + + CREATE OR REPLACE FUNCTION city_update() RETURNS trigger LANGUAGE plpgsql AS $$ + declare + ctry_id int; + begin + if NEW.country_name IS DISTINCT FROM OLD.country_name then + SELECT country_id, continent INTO ctry_id, NEW.continent + FROM country_table WHERE country_name = NEW.country_name; + if NOT FOUND then + raise exception 'No such country: "%"', NEW.country_name; + end if; + + UPDATE city_table SET city_name = NEW.city_name, + population = NEW.population, + country_id = ctry_id + WHERE city_id = OLD.city_id; + else + UPDATE city_table SET city_name = NEW.city_name, + population = NEW.population + WHERE city_id = OLD.city_id; + NEW.continent := OLD.continent; + end if; + + if NOT FOUND then RETURN NULL; end if; + RETURN NEW; + end; + $$; + + CREATE TRIGGER city_update_trig INSTEAD OF UPDATE ON city_view + FOR EACH ROW EXECUTE PROCEDURE city_update(); + +-- INSERT .. 
RETURNING + INSERT INTO city_view(city_name) VALUES('Tokyo') RETURNING *; + INSERT INTO city_view(city_name, population) VALUES('London', 7556900) RETURNING *; + INSERT INTO city_view(city_name, country_name) VALUES('Washington DC', 'USA') RETURNING *; + INSERT INTO city_view(city_id, city_name) VALUES(123456, 'New York') RETURNING *; + INSERT INTO city_view VALUES(234567, 'Birmingham', 1016800, 'UK', 'EU') RETURNING *; + + -- UPDATE .. RETURNING + UPDATE city_view SET country_name = 'Japon' WHERE city_name = 'Tokyo'; -- error + UPDATE city_view SET country_name = 'Japan' WHERE city_name = 'Takyo'; -- no match + UPDATE city_view SET country_name = 'Japan' WHERE city_name = 'Tokyo' RETURNING *; -- OK + + UPDATE city_view SET population = 13010279 WHERE city_name = 'Tokyo' RETURNING *; + UPDATE city_view SET country_name = 'UK' WHERE city_name = 'New York' RETURNING *; + UPDATE city_view SET country_name = 'USA', population = 8391881 WHERE city_name = 'New York' RETURNING *; + UPDATE city_view SET continent = 'EU' WHERE continent = 'Europe' RETURNING *; + UPDATE city_view v1 SET country_name = v2.country_name FROM city_view v2 + WHERE v2.city_name = 'Birmingham' AND v1.city_name = 'London' RETURNING *; + + -- DELETE .. RETURNING + DELETE FROM city_view WHERE city_name = 'Birmingham' RETURNING *; + + +DROP extension pg_tde CASCADE; diff --git a/contrib/pg_tde/sql/trigger_on_view.sql b/contrib/pg_tde/sql/trigger_on_view.sql new file mode 100644 index 00000000000..ad69ba86a16 --- /dev/null +++ b/contrib/pg_tde/sql/trigger_on_view.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/trigger_on_view.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/trigger_on_view_basic.sql b/contrib/pg_tde/sql/trigger_on_view_basic.sql new file mode 100644 index 00000000000..085d2dc3de2 --- /dev/null +++ b/contrib/pg_tde/sql/trigger_on_view_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/trigger_on_view.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/update.inc b/contrib/pg_tde/sql/update.inc new file mode 100644 index 00000000000..2702940b013 --- /dev/null +++ b/contrib/pg_tde/sql/update.inc @@ -0,0 +1,33 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + + +CREATE TABLE update_test ( + a INT DEFAULT 10, + b INT, + c TEXT +) USING tde_heap_basic; + +CREATE TABLE upsert_test ( + a INT PRIMARY KEY, + b TEXT +) USING tde_heap_basic; + +INSERT INTO update_test VALUES (5, 10, 'foo'); +INSERT INTO update_test(b, a) VALUES (15, 10); + +INSERT INTO upsert_test VALUES (2, 'Beeble') ON CONFLICT(a) + DO UPDATE SET (b, a) = (SELECT b || ', Excluded', a from upsert_test i WHERE i.a = excluded.a) + RETURNING tableoid::regclass, xmin = pg_current_xact_id()::xid AS xmin_correct, xmax = 0 AS xmax_correct; +-- currently xmax is set after a conflict - that's probably not good, +-- but it seems worthwhile to have to be explicit if that changes. 
+INSERT INTO upsert_test VALUES (2, 'Brox') ON CONFLICT(a) + DO UPDATE SET (b, a) = (SELECT b || ', Excluded', a from upsert_test i WHERE i.a = excluded.a) + RETURNING tableoid::regclass, xmin = pg_current_xact_id()::xid AS xmin_correct, xmax = pg_current_xact_id()::xid AS xmax_correct; + +DROP TABLE update_test; +DROP TABLE upsert_test; + +DROP EXTENSION pg_tde; \ No newline at end of file diff --git a/contrib/pg_tde/sql/update.sql b/contrib/pg_tde/sql/update.sql new file mode 100644 index 00000000000..b1fedf0d3fe --- /dev/null +++ b/contrib/pg_tde/sql/update.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/update.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/update_basic.sql b/contrib/pg_tde/sql/update_basic.sql new file mode 100644 index 00000000000..b4bfcc7d7b5 --- /dev/null +++ b/contrib/pg_tde/sql/update_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/update.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/update_compare_indexes.inc b/contrib/pg_tde/sql/update_compare_indexes.inc new file mode 100644 index 00000000000..6cc54ac6027 --- /dev/null +++ b/contrib/pg_tde/sql/update_compare_indexes.inc @@ -0,0 +1,14 @@ +CREATE EXTENSION pg_tde; + +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); + +DROP TABLE IF EXISTS pvactst; +CREATE TABLE pvactst (i INT, a INT[], p POINT) USING :tde_am; +INSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM generate_series(1,1000) i; +CREATE INDEX spgist_pvactst ON pvactst USING spgist (p); +UPDATE pvactst SET i = i WHERE i < 1000; +-- crash! + +DROP TABLE pvactst; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/update_compare_indexes.sql b/contrib/pg_tde/sql/update_compare_indexes.sql new file mode 100644 index 00000000000..98cbdd66d87 --- /dev/null +++ b/contrib/pg_tde/sql/update_compare_indexes.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap +\i sql/update_compare_indexes.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/update_compare_indexes_basic.sql b/contrib/pg_tde/sql/update_compare_indexes_basic.sql new file mode 100644 index 00000000000..7d8f79e5015 --- /dev/null +++ b/contrib/pg_tde/sql/update_compare_indexes_basic.sql @@ -0,0 +1,2 @@ +\set tde_am tde_heap_basic +\i sql/update_compare_indexes.inc \ No newline at end of file diff --git a/contrib/pg_tde/sql/vault_v2_test.inc b/contrib/pg_tde/sql/vault_v2_test.inc new file mode 100644 index 00000000000..70c7190690a --- /dev/null +++ b/contrib/pg_tde/sql/vault_v2_test.inc @@ -0,0 +1,32 @@ +CREATE EXTENSION pg_tde; + +\getenv root_token ROOT_TOKEN + +SELECT pg_tde_add_key_provider_vault_v2('vault-incorrect',:'root_token','http://127.0.0.1:8200','DUMMY-TOKEN',NULL); +-- FAILS +SELECT pg_tde_set_principal_key('vault-v2-principal-key','vault-incorrect'); + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; + +SELECT pg_tde_add_key_provider_vault_v2('vault-v2',:'root_token','http://127.0.0.1:8200','secret',NULL); +SELECT pg_tde_set_principal_key('vault-v2-principal-key','vault-v2'); + +CREATE TABLE test_enc( + id SERIAL, + k INTEGER DEFAULT '0' NOT NULL, + PRIMARY KEY (id) + ) USING :tde_am; + +INSERT INTO test_enc (k) VALUES (1); +INSERT INTO test_enc (k) VALUES (2); +INSERT INTO test_enc (k) VALUES (3); + +SELECT * from test_enc; + +DROP TABLE test_enc; + +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/sql/vault_v2_test.sql b/contrib/pg_tde/sql/vault_v2_test.sql 
new file mode 100644 index 00000000000..4e4119777b6 --- /dev/null +++ b/contrib/pg_tde/sql/vault_v2_test.sql @@ -0,0 +1,2 @@
+\set tde_am tde_heap
+\i sql/vault_v2_test.inc
\ No newline at end of file
diff --git a/contrib/pg_tde/sql/vault_v2_test_basic.sql b/contrib/pg_tde/sql/vault_v2_test_basic.sql
new file mode 100644 index 00000000000..ad8f5652030 --- /dev/null +++ b/contrib/pg_tde/sql/vault_v2_test_basic.sql @@ -0,0 +1,2 @@
+\set tde_am tde_heap_basic
+\i sql/vault_v2_test.inc
\ No newline at end of file
diff --git a/contrib/pg_tde/src/access/pg_tde_ddl.c b/contrib/pg_tde/src/access/pg_tde_ddl.c
new file mode 100644 index 00000000000..cb14738b0b6 --- /dev/null +++ b/contrib/pg_tde/src/access/pg_tde_ddl.c @@ -0,0 +1,56 @@
+
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_ddl.c
+ *      Handles DDL operations on TDE relations.
+ *
+ * IDENTIFICATION
+ *    contrib/pg_tde/src/access/pg_tde_ddl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+#include "catalog/objectaccess.h"
+#include "access/pg_tde_ddl.h"
+#include "access/pg_tdeam.h"
+#include "access/pg_tde_tdemap.h"
+
+
+static object_access_hook_type prev_object_access_hook = NULL;
+
+static void tdeheap_object_access_hook(ObjectAccessType access, Oid classId,
+                                       Oid objectId, int subId, void *arg);
+
+void
+SetupTdeDDLHooks(void)
+{
+    prev_object_access_hook = object_access_hook;
+    object_access_hook = tdeheap_object_access_hook;
+}
+
+static void
+tdeheap_object_access_hook(ObjectAccessType access, Oid classId, Oid objectId,
+                           int subId, void *arg)
+{
+    Relation rel = NULL;
+
+    if (prev_object_access_hook)
+        prev_object_access_hook(access, classId, objectId, subId, arg);
+
+    if (access == OAT_DROP && classId == RelationRelationId)
+    {
+        rel = relation_open(objectId, AccessShareLock);
+    }
+    if (rel != NULL)
+    {
+        if ((rel->rd_rel->relkind == RELKIND_RELATION ||
+             rel->rd_rel->relkind == RELKIND_TOASTVALUE ||
+             rel->rd_rel->relkind == RELKIND_MATVIEW) &&
+            (subId == 0) && is_tdeheap_rel(rel))
+        {
+            pg_tde_delete_key_map_entry(&rel->rd_locator, MAP_ENTRY_VALID);
+        }
+        relation_close(rel, AccessShareLock);
+    }
+}
diff --git a/contrib/pg_tde/src/access/pg_tde_slot.c b/contrib/pg_tde/src/access/pg_tde_slot.c
new file mode 100644 index 00000000000..7decf7ac55c --- /dev/null +++ b/contrib/pg_tde/src/access/pg_tde_slot.c @@ -0,0 +1,602 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_slot.c
+ *      pg_tde TupleTableSlot implementation code
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ * Portions Copyright (c) 2024, Percona
+ *
+ *
+ * IDENTIFICATION
+ *    contrib/pg_tde/src/access/pg_tde_slot.c
+ *
+ *
+ */
+#include "postgres.h"
+#include "pg_tde_defines.h"
+#include "access/pg_tde_slot.h"
+#include "access/heaptoast.h"
+#include "access/htup_details.h"
+#include "access/tupdesc_details.h"
+#include "access/xact.h"
+#include "catalog/pg_type.h"
+#include "funcapi.h"
+#include "nodes/nodeFuncs.h"
+#include "storage/bufmgr.h"
+#include "utils/builtins.h"
+#include "utils/expandeddatum.h"
+#include "utils/lsyscache.h"
+#include "utils/typcache.h"
+#include "encryption/enc_tde.h"
+
+/*
+ * TTSOpsTDEBufferHeapTuple is effectively the same as the
+ * TTSOpsBufferHeapTuple slot. The only difference is that it keeps a
+ * reference to the decrypted tuple and frees it when the slot is cleared.
+ */
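To picture what that buys, here is a rough sketch of the extended slot. The real definition lives in access/pg_tde_slot.h, which is not part of this excerpt, so the exact field types are illustrative; only the field names used by the code below are taken from this file:

    typedef struct TDEBufferHeapTupleTableSlot
    {
        BufferHeapTupleTableSlot base;    /* ordinary buffer-heap slot: tuple + pinned buffer */
        char       *decrypted_buffer;     /* slot-owned copy holding the decrypted tuple */
        RelKeyData *cached_relation_key;  /* relation key, cached per slot */
    } TDEBufferHeapTupleTableSlot;

The callbacks below treat `base` exactly like the stock buffer-heap slot does, and additionally manage `decrypted_buffer` and `cached_relation_key`.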
+
+const TupleTableSlotOps TTSOpsTDEBufferHeapTuple;
+
+static HeapTuple slot_copytuple(void *buffer, HeapTuple tuple);
+static pg_attribute_always_inline void tdeheap_slot_deform_heap_tuple(TupleTableSlot *slot, HeapTuple tuple, uint32 *offp, int natts);
+static inline void tdeheap_tts_buffer_heap_store_tuple(TupleTableSlot *slot,
+                                                       HeapTuple tuple,
+                                                       Buffer buffer,
+                                                       bool transfer_pin);
+static inline RelKeyData *get_current_slot_relation_key(TDEBufferHeapTupleTableSlot *bslot, Relation rel);
+static void
+tdeheap_tts_buffer_heap_init(TupleTableSlot *slot)
+{
+    TDEBufferHeapTupleTableSlot *bslot = (TDEBufferHeapTupleTableSlot *) slot;
+
+    bslot->cached_relation_key = NULL;
+}
+
+static void
+tdeheap_tts_buffer_heap_release(TupleTableSlot *slot)
+{
+    /* no-op */
+}
+
+static void
+tdeheap_tts_buffer_heap_clear(TupleTableSlot *slot)
+{
+    TDEBufferHeapTupleTableSlot *bslot = (TDEBufferHeapTupleTableSlot *) slot;
+
+    /*
+     * Free the memory for the heap tuple if allowed. A tuple coming from a
+     * buffer can never be freed. But we may have materialized a tuple from
+     * a buffer, and such a tuple can be freed.
+     */
+    if (TTS_SHOULDFREE(slot))
+    {
+        /* We should have unpinned the buffer while materializing the tuple. */
+        Assert(!BufferIsValid(bslot->buffer));
+
+        tdeheap_freetuple(bslot->base.tuple);
+        slot->tts_flags &= ~TTS_FLAG_SHOULDFREE;
+    }
+
+    if (BufferIsValid(bslot->buffer))
+        ReleaseBuffer(bslot->buffer);
+
+    slot->tts_nvalid = 0;
+    slot->tts_flags |= TTS_FLAG_EMPTY;
+    ItemPointerSetInvalid(&slot->tts_tid);
+    bslot->base.tuple = NULL;
+    bslot->base.off = 0;
+    bslot->buffer = InvalidBuffer;
+}
+
+static void
+tdeheap_tts_buffer_heap_getsomeattrs(TupleTableSlot *slot, int natts)
+{
+    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
+
+    Assert(!TTS_EMPTY(slot));
+
+    tdeheap_slot_deform_heap_tuple(slot, bslot->base.tuple, &bslot->base.off, natts);
+}
+
+static Datum
+tdeheap_tts_buffer_heap_getsysattr(TupleTableSlot *slot, int attnum, bool *isnull)
+{
+    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
+
+    Assert(!TTS_EMPTY(slot));
+
+    /*
+     * In some code paths it's possible to get here with a non-materialized
+     * slot, in which case we can't retrieve system columns.
+     */
+    if (!bslot->base.tuple)
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("cannot retrieve a system column in this context")));
+
+    return tdeheap_getsysattr(bslot->base.tuple, attnum,
+                              slot->tts_tupleDescriptor, isnull);
+}
+
+static bool
+tdeheap_buffer_is_current_xact_tuple(TupleTableSlot *slot)
+{
+    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
+    TransactionId xmin;
+
+    Assert(!TTS_EMPTY(slot));
+
+    /*
+     * In some code paths it's possible to get here with a non-materialized
+     * slot, in which case we can't check whether the tuple was created by
+     * the current transaction.
+     */
+    if (!bslot->base.tuple)
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("don't have a storage tuple in this context")));
+
+    xmin = HeapTupleHeaderGetRawXmin(bslot->base.tuple->t_data);
+
+    return TransactionIdIsCurrentTransactionId(xmin);
+}
+
+static void
+tdeheap_tts_buffer_heap_materialize(TupleTableSlot *slot)
+{
+    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
+    MemoryContext oldContext;
+
+    Assert(!TTS_EMPTY(slot));
+
+    /* If the slot has its tuple already materialized, nothing to do.
*/ + if (TTS_SHOULDFREE(slot)) + return; + + oldContext = MemoryContextSwitchTo(slot->tts_mcxt); + + /* + * Have to deform from scratch, otherwise tts_values[] entries could point + * into the non-materialized tuple (which might be gone when accessed). + */ + bslot->base.off = 0; + slot->tts_nvalid = 0; + + if (!bslot->base.tuple) + { + /* + * Normally BufferHeapTupleTableSlot should have a tuple + buffer + * associated with it, unless it's materialized (which would've + * returned above). But when it's useful to allow storing virtual + * tuples in a buffer slot, which then also needs to be + * materializable. + */ + bslot->base.tuple = tdeheap_form_tuple(slot->tts_tupleDescriptor, + slot->tts_values, + slot->tts_isnull); + } + else + { + bslot->base.tuple = tdeheap_copytuple(bslot->base.tuple); + + /* + * A heap tuple stored in a BufferHeapTupleTableSlot should have a + * buffer associated with it, unless it's materialized or virtual. + */ + if (likely(BufferIsValid(bslot->buffer))) + ReleaseBuffer(bslot->buffer); + bslot->buffer = InvalidBuffer; + } + + /* + * We don't set TTS_FLAG_SHOULDFREE until after releasing the buffer, if + * any. This avoids having a transient state that would fall foul of our + * assertions that a slot with TTS_FLAG_SHOULDFREE doesn't own a buffer. + * In the unlikely event that ReleaseBuffer() above errors out, we'd + * effectively leak the copied tuple, but that seems fairly harmless. + */ + slot->tts_flags |= TTS_FLAG_SHOULDFREE; + + MemoryContextSwitchTo(oldContext); +} + +static void +tdeheap_tts_buffer_heap_copyslot(TupleTableSlot *dstslot, TupleTableSlot *srcslot) +{ + TDEBufferHeapTupleTableSlot *bsrcslot = (TDEBufferHeapTupleTableSlot *) srcslot; + TDEBufferHeapTupleTableSlot *bdstslot = (TDEBufferHeapTupleTableSlot *) dstslot; + + /* + * If the source slot is of a different kind, or is a buffer slot that has + * been materialized / is virtual, make a new copy of the tuple. Otherwise + * make a new reference to the in-buffer tuple. + */ + if (dstslot->tts_ops != srcslot->tts_ops || + TTS_SHOULDFREE(srcslot) || + !bsrcslot->base.tuple) + { + MemoryContext oldContext; + + ExecClearTuple(dstslot); + dstslot->tts_flags &= ~TTS_FLAG_EMPTY; + oldContext = MemoryContextSwitchTo(dstslot->tts_mcxt); + bdstslot->base.tuple = ExecCopySlotHeapTuple(srcslot); + dstslot->tts_flags |= TTS_FLAG_SHOULDFREE; + MemoryContextSwitchTo(oldContext); + } + else + { + Assert(BufferIsValid(bsrcslot->buffer)); + + tdeheap_tts_buffer_heap_store_tuple(dstslot, bsrcslot->base.tuple, + bsrcslot->buffer, false); + + /* + * The HeapTupleData portion of the source tuple might be shorter + * lived than the destination slot. Therefore copy the HeapTuple into + * our slot's tupdata, which is guaranteed to live long enough (but + * will still point into the buffer). 
+ */
+        memcpy(&bdstslot->base.tupdata, bdstslot->base.tuple, sizeof(HeapTupleData));
+        bdstslot->base.tuple = &bdstslot->base.tupdata;
+
+        /*
+         * Copy the decrypted buffer content as well. We only need to copy
+         * up to the tuple size.
+         */
+        memcpy(bdstslot->decrypted_buffer, bsrcslot->decrypted_buffer, HEAPTUPLESIZE + bsrcslot->base.tuple->t_len);
+        slot_copytuple(bdstslot->decrypted_buffer, bsrcslot->base.tuple);
+        bdstslot->base.tuple->t_data = ((HeapTuple) bdstslot->decrypted_buffer)->t_data;
+    }
+}
+
+static HeapTuple
+tdeheap_tts_buffer_heap_get_heap_tuple(TupleTableSlot *slot)
+{
+    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
+
+    Assert(!TTS_EMPTY(slot));
+
+    if (!bslot->base.tuple)
+        tdeheap_tts_buffer_heap_materialize(slot);
+    return bslot->base.tuple;
+}
+
+static HeapTuple
+tdeheap_tts_buffer_heap_copy_heap_tuple(TupleTableSlot *slot)
+{
+    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
+
+    Assert(!TTS_EMPTY(slot));
+
+    if (!bslot->base.tuple)
+        tdeheap_tts_buffer_heap_materialize(slot);
+
+    return tdeheap_copytuple(bslot->base.tuple);
+}
+
+static MinimalTuple
+tdeheap_tts_buffer_heap_copy_minimal_tuple(TupleTableSlot *slot)
+{
+    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
+
+    Assert(!TTS_EMPTY(slot));
+
+    if (!bslot->base.tuple)
+        tdeheap_tts_buffer_heap_materialize(slot);
+
+    return minimal_tuple_from_heap_tuple(bslot->base.tuple);
+}
+
+static inline void
+tdeheap_tts_buffer_heap_store_tuple(TupleTableSlot *slot, HeapTuple tuple,
+                                    Buffer buffer, bool transfer_pin)
+{
+    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
+
+    if (TTS_SHOULDFREE(slot))
+    {
+        /* materialized slot shouldn't have a buffer to release */
+        Assert(!BufferIsValid(bslot->buffer));
+
+        tdeheap_freetuple(bslot->base.tuple);
+        slot->tts_flags &= ~TTS_FLAG_SHOULDFREE;
+    }
+
+    slot->tts_flags &= ~TTS_FLAG_EMPTY;
+    slot->tts_nvalid = 0;
+    bslot->base.tuple = tuple;
+    bslot->base.off = 0;
+    slot->tts_tid = tuple->t_self;
+
+    /*
+     * If the tuple is on a disk page, keep the page pinned as long as we
+     * hold a pointer into it. We assume the caller already has such a pin.
+     * If transfer_pin is true, we'll transfer that pin to this slot, if not
+     * we'll pin it again ourselves.
+     *
+     * This is coded to optimize the case where the slot previously held a
+     * tuple on the same disk page: in that case releasing and re-acquiring
+     * the pin is a waste of cycles. This is a common situation during
+     * seqscans, so it's worth troubling over.
+     */
+    if (bslot->buffer != buffer)
+    {
+        if (BufferIsValid(bslot->buffer))
+            ReleaseBuffer(bslot->buffer);
+
+        bslot->buffer = buffer;
+
+        if (!transfer_pin && BufferIsValid(buffer))
+            IncrBufferRefCount(buffer);
+    }
+    else if (transfer_pin && BufferIsValid(buffer))
+    {
+        /*
+         * In transfer_pin mode the caller won't know about the same-page
+         * optimization, so we gotta release its pin.
+         */
+        ReleaseBuffer(buffer);
+    }
+}
+
+/*
+ * slot_deform_heap_tuple
+ *        Given a TupleTableSlot, extract data from the slot's physical tuple
+ *        into its Datum/isnull arrays. Data is extracted up through the
+ *        natts'th column (caller must ensure this is a legal column number).
+ *
+ * This is essentially an incremental version of tdeheap_deform_tuple:
+ * on each call we extract attributes up to the one needed, without
+ * re-computing information about previously extracted attributes.
+ * slot->tts_nvalid is the number of attributes already extracted.
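As a concrete illustration of that incremental contract (an editorial sketch; slot_getsomeattrs() is the generic executor entry point that ends up in the getsomeattrs callback above):

    /* First call deforms attributes 0..2 and records the byte offset reached. */
    slot_getsomeattrs(slot, 3);
    /* A later call resumes from the saved offset and deforms only 3..5. */
    slot_getsomeattrs(slot, 6);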
+ * + * This is marked as always inline, so the different offp for different types + * of slots gets optimized away. + */ +static pg_attribute_always_inline void +tdeheap_slot_deform_heap_tuple(TupleTableSlot *slot, HeapTuple tuple, uint32 *offp, + int natts) +{ + TupleDesc tupleDesc = slot->tts_tupleDescriptor; + Datum *values = slot->tts_values; + bool *isnull = slot->tts_isnull; + HeapTupleHeader tup = tuple->t_data; + bool hasnulls = HeapTupleHasNulls(tuple); + int attnum; + char *tp; /* ptr to tuple data */ + uint32 off; /* offset in tuple data */ + bits8 *bp = tup->t_bits; /* ptr to null bitmap in tuple */ + bool slow; /* can we use/set attcacheoff? */ + + /* We can only fetch as many attributes as the tuple has. */ + natts = Min(HeapTupleHeaderGetNatts(tuple->t_data), natts); + + /* + * Check whether the first call for this tuple, and initialize or restore + * loop state. + */ + attnum = slot->tts_nvalid; + if (attnum == 0) + { + /* Start from the first attribute */ + off = 0; + slow = false; + } + else + { + /* Restore state from previous execution */ + off = *offp; + slow = TTS_SLOW(slot); + } + + tp = (char *) tup + tup->t_hoff; + + for (; attnum < natts; attnum++) + { + Form_pg_attribute thisatt = TupleDescAttr(tupleDesc, attnum); + + if (hasnulls && att_isnull(attnum, bp)) + { + values[attnum] = (Datum) 0; + isnull[attnum] = true; + slow = true; /* can't use attcacheoff anymore */ + continue; + } + + isnull[attnum] = false; + + if (!slow && thisatt->attcacheoff >= 0) + off = thisatt->attcacheoff; + else if (thisatt->attlen == -1) + { + /* + * We can only cache the offset for a varlena attribute if the + * offset is already suitably aligned, so that there would be no + * pad bytes in any case: then the offset will be valid for either + * an aligned or unaligned value. 
+ */ + if (!slow && + off == att_align_nominal(off, thisatt->attalign)) + thisatt->attcacheoff = off; + else + { + off = att_align_pointer(off, thisatt->attalign, -1, + tp + off); + slow = true; + } + } + else + { + /* not varlena, so safe to use att_align_nominal */ + off = att_align_nominal(off, thisatt->attalign); + + if (!slow) + thisatt->attcacheoff = off; + } + + values[attnum] = fetchatt(thisatt, tp + off); + + off = att_addlength_pointer(off, thisatt->attlen, tp + off); + + if (thisatt->attlen <= 0) + slow = true; /* can't use attcacheoff anymore */ + } + + /* + * Save state for next execution + */ + slot->tts_nvalid = attnum; + *offp = off; + if (slow) + slot->tts_flags |= TTS_FLAG_SLOW; + else + slot->tts_flags &= ~TTS_FLAG_SLOW; +} + +static HeapTuple +slot_copytuple(void *buffer, HeapTuple tuple) +{ + HeapTuple newTuple; + + if (!HeapTupleIsValid(tuple) || tuple->t_data == NULL) + return NULL; + + newTuple = (HeapTuple) buffer; + newTuple->t_len = tuple->t_len; + newTuple->t_self = tuple->t_self; + newTuple->t_tableOid = tuple->t_tableOid; + newTuple->t_data = (HeapTupleHeader) ((char *) newTuple + HEAPTUPLESIZE); + /* We don't copy the data, it will be copied by the decryption code */ + memcpy((char *) newTuple->t_data, (char *) tuple->t_data, tuple->t_data->t_hoff); + return newTuple; +} + +const TupleTableSlotOps TTSOpsTDEBufferHeapTuple = { + .base_slot_size = sizeof(TDEBufferHeapTupleTableSlot), + .init = tdeheap_tts_buffer_heap_init, + .release = tdeheap_tts_buffer_heap_release, + .clear = tdeheap_tts_buffer_heap_clear, + .getsomeattrs = tdeheap_tts_buffer_heap_getsomeattrs, + .getsysattr = tdeheap_tts_buffer_heap_getsysattr, + .materialize = tdeheap_tts_buffer_heap_materialize, +#if PG_VERSION_NUM >= 170000 + .is_current_xact_tuple = tdeheap_buffer_is_current_xact_tuple, +#endif + .copyslot = tdeheap_tts_buffer_heap_copyslot, + .get_heap_tuple = tdeheap_tts_buffer_heap_get_heap_tuple, + + /* A buffer heap tuple table slot can not "own" a minimal tuple. */ + .get_minimal_tuple = NULL, + .copy_heap_tuple = tdeheap_tts_buffer_heap_copy_heap_tuple, + .copy_minimal_tuple = tdeheap_tts_buffer_heap_copy_minimal_tuple +}; + +/* -------------------------------- + * ExecStoreBufferHeapTuple + * + * This function is used to store an on-disk physical tuple from a buffer + * into a specified slot in the tuple table. + * + * tuple: tuple to store + * slot: TTSOpsBufferHeapTuple type slot to store it in + * buffer: disk buffer if tuple is in a disk page, else InvalidBuffer + * + * The tuple table code acquires a pin on the buffer which is held until the + * slot is cleared, so that the tuple won't go away on us. + * + * Return value is just the passed-in slot pointer. + * + * If the target slot is not guaranteed to be TTSOpsBufferHeapTuple type slot, + * use the, more expensive, ExecForceStoreHeapTuple(). 
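A usage sketch for the store function documented above (an aside, not patch code; it mirrors how a tde_heap scan would be expected to call it, with the function names as defined in this file):

    /* During a scan: store the on-disk tuple, decrypting into the slot's
     * decrypted_buffer; the page stays pinned while the slot points into it. */
    ExecClearTuple(slot);
    PGTdeExecStoreBufferHeapTuple(rel, tuple, slot, buffer);

    /* If the tuple must outlive the buffer pin, materialize it first. */
    ExecMaterializeSlot(slot);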
+ * -------------------------------- + */ +TupleTableSlot * +PGTdeExecStoreBufferHeapTuple(Relation rel, + HeapTuple tuple, + TupleTableSlot *slot, + Buffer buffer) +{ + + TDEBufferHeapTupleTableSlot *bslot = (TDEBufferHeapTupleTableSlot *) slot; + + /* + * sanity checks + */ + Assert(rel != NULL); + Assert(tuple != NULL); + Assert(slot != NULL); + Assert(slot->tts_tupleDescriptor != NULL); + Assert(BufferIsValid(buffer)); + + if (unlikely(!TTS_IS_TDE_BUFFERTUPLE(slot))) + elog(ERROR, "trying to store an on-disk heap tuple into wrong type of slot"); + + if (rel->rd_rel->relkind != RELKIND_TOASTVALUE) + { + RelKeyData *key = get_current_slot_relation_key(bslot, rel); + + Assert(key != NULL); + + slot_copytuple(bslot->decrypted_buffer, tuple); + PG_TDE_DECRYPT_TUPLE_EX(tuple, (HeapTuple) bslot->decrypted_buffer, key, "ExecStoreBuffer"); + tuple->t_data = ((HeapTuple) bslot->decrypted_buffer)->t_data; + } + + tdeheap_tts_buffer_heap_store_tuple(slot, tuple, buffer, false); + + slot->tts_tableOid = tuple->t_tableOid; + + return slot; +} + +/* + * Like ExecStoreBufferHeapTuple, but transfer an existing pin from the caller + * to the slot, i.e. the caller doesn't need to, and may not, release the pin. + */ +TupleTableSlot * +PGTdeExecStorePinnedBufferHeapTuple(Relation rel, + HeapTuple tuple, + TupleTableSlot *slot, + Buffer buffer) +{ + TDEBufferHeapTupleTableSlot *bslot = (TDEBufferHeapTupleTableSlot *) slot; + + /* + * sanity checks + */ + Assert(rel != NULL); + Assert(tuple != NULL); + Assert(slot != NULL); + Assert(slot->tts_tupleDescriptor != NULL); + Assert(BufferIsValid(buffer)); + + if (unlikely(!TTS_IS_TDE_BUFFERTUPLE(slot))) + elog(ERROR, "trying to store an on-disk heap tuple into wrong type of slot"); + + if (rel->rd_rel->relkind != RELKIND_TOASTVALUE) + { + RelKeyData *key = get_current_slot_relation_key(bslot, rel); + + slot_copytuple(bslot->decrypted_buffer, tuple); + PG_TDE_DECRYPT_TUPLE_EX(tuple, (HeapTuple) bslot->decrypted_buffer, key, "ExecStorePinnedBuffer"); + /* TODO: revisit this */ + tuple->t_data = ((HeapTuple) bslot->decrypted_buffer)->t_data; + } + + tdeheap_tts_buffer_heap_store_tuple(slot, tuple, buffer, true); + + slot->tts_tableOid = tuple->t_tableOid; + + return slot; +} + +static inline RelKeyData * +get_current_slot_relation_key(TDEBufferHeapTupleTableSlot *bslot, Relation rel) +{ + Assert(bslot != NULL); + if (bslot->cached_relation_key == NULL) + bslot->cached_relation_key = GetHeapBaiscRelationKey(rel->rd_locator); + return bslot->cached_relation_key; +} diff --git a/contrib/pg_tde/src/access/pg_tde_tdemap.c b/contrib/pg_tde/src/access/pg_tde_tdemap.c new file mode 100644 index 00000000000..198f6a12543 --- /dev/null +++ b/contrib/pg_tde/src/access/pg_tde_tdemap.c @@ -0,0 +1,1515 @@ +/*------------------------------------------------------------------------- + * + * pg_tde_tdemap.c + * tde relation fork manager code + * + * + * IDENTIFICATION + * src/access/pg_tde_tdemap.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" +#include "access/pg_tde_tdemap.h" +#include "common/file_perm.h" +#include "transam/pg_tde_xact_handler.h" +#include "storage/fd.h" +#include "utils/wait_event.h" +#include "utils/memutils.h" +#include "access/xlog.h" +#include "access/xlog_internal.h" +#include "access/xloginsert.h" +#include "utils/builtins.h" +#include "miscadmin.h" + +#include "access/pg_tde_tdemap.h" +#include "access/pg_tde_xlog.h" +#include "catalog/tde_principal_key.h" +#include "encryption/enc_aes.h" 
+#include "encryption/enc_tde.h" +#include "keyring/keyring_api.h" +#include "common/pg_tde_utils.h" + +#include +#include +#include +#include + +#include "pg_tde_defines.h" + +#ifdef FRONTEND +#include "pg_tde_fe.h" +#endif + +/* A useful macro when debugging key encryption/decryption */ +#ifdef DEBUG +#define ELOG_KEY(_msg, _key) \ +{ \ + int i; \ + char buf[1024]; \ + for (i = 0; i < sizeof(_key->internal_key.key); i++) \ + sprintf(buf+i, "%02X", _key->internal_key.key[i]); \ + buf[i] = '\0'; \ + elog(INFO, "[%s] INTERNAL KEY => %s", _msg, buf); \ +} +#endif + +#define PG_TDE_FILEMAGIC 0x01454454 /* version ID value = TDE 01 */ + + +#define MAP_ENTRY_SIZE sizeof(TDEMapEntry) +#define TDE_FILE_HEADER_SIZE sizeof(TDEFileHeader) + +typedef struct TDEFileHeader +{ + int32 file_version; + TDEPrincipalKeyInfo principal_key_info; +} TDEFileHeader; + +typedef struct TDEMapEntry +{ + RelFileNumber relNumber; + uint32 flags; + int32 key_index; +} TDEMapEntry; + +typedef struct TDEMapFilePath +{ + char map_path[MAXPGPATH]; + char keydata_path[MAXPGPATH]; +} TDEMapFilePath; + + +typedef struct RelKeyCacheRec +{ + RelFileNumber rel_number; + RelKeyData key; +} RelKeyCacheRec; + +/* + * Relation keys cache. + * + * This is a slice backed by memory `*data`. Initially, we allocate one memory + * page (usually 4Kb). We reallocate it by adding another page when we run out + * of space. This memory is locked in the RAM so it won't be paged to the swap + * (we don't want decrypted keys on disk). We do allocations in mem pages as + * these are the units `mlock()` operations are performed in. + * + * Currently, the cache can only grow (no eviction). The data is located in + * TopMemoryContext hence being wiped when the process exits, as well as memory + * is being unlocked by OS. 
+typedef struct RelKeyCache
+{
+    RelKeyCacheRec *data; /* must be a multiple of a memory page (usually 4Kb) */
+    int len;              /* number of RelKeyCacheRecs currently in the cache */
+    int cap;              /* max number of RelKeyCacheRecs that `data` can fit */
+} RelKeyCache;
+
+RelKeyCache *tde_rel_key_cache = NULL;
+
+static int32 pg_tde_process_map_entry(const RelFileLocator *rlocator, uint32 key_type, char *db_map_path, off_t *offset, bool should_delete);
+static RelKeyData *pg_tde_read_keydata(char *db_keydata_path, int32 key_index, TDEPrincipalKey *principal_key);
+static int pg_tde_open_file_basic(char *tde_filename, int fileFlags, bool ignore_missing);
+static int pg_tde_file_header_read(char *tde_filename, int fd, TDEFileHeader *fheader, bool *is_new_file, off_t *bytes_read);
+static bool pg_tde_read_one_map_entry(int fd, const RelFileLocator *rlocator, int flags, TDEMapEntry *map_entry, off_t *offset);
+static RelKeyData *pg_tde_read_one_keydata(int keydata_fd, int32 key_index, TDEPrincipalKey *principal_key);
+static int pg_tde_open_file(char *tde_filename, TDEPrincipalKeyInfo *principal_key_info, bool update_header, int fileFlags, bool *is_new_file, off_t *curr_pos);
+static RelKeyData *pg_tde_get_key_from_cache(RelFileNumber rel_number, uint32 key_type);
+
+#ifndef FRONTEND
+
+static int pg_tde_file_header_write(char *tde_filename, int fd, TDEPrincipalKeyInfo *principal_key_info, off_t *bytes_written);
+static int32 pg_tde_write_map_entry(const RelFileLocator *rlocator, uint32 entry_type, char *db_map_path, TDEPrincipalKeyInfo *principal_key_info);
+static off_t pg_tde_write_one_map_entry(int fd, const RelFileLocator *rlocator, uint32 flags, int32 key_index, TDEMapEntry *map_entry, off_t *offset);
+static void pg_tde_write_keydata(char *db_keydata_path, TDEPrincipalKeyInfo *principal_key_info, int32 key_index, RelKeyData *enc_rel_key_data);
+static void pg_tde_write_one_keydata(int keydata_fd, int32 key_index, RelKeyData *enc_rel_key_data);
+static int keyrotation_init_file(TDEPrincipalKeyInfo *new_principal_key_info, char *rotated_filename, char *filename, bool *is_new_file, off_t *curr_pos);
+static void finalize_key_rotation(char *m_path_old, char *k_path_old, char *m_path_new, char *k_path_new);
+
+RelKeyData *
+pg_tde_create_smgr_key(const RelFileLocator *newrlocator)
+{
+    return pg_tde_create_key_map_entry(newrlocator, TDE_KEY_TYPE_SMGR);
+}
+
+RelKeyData *
+pg_tde_create_global_key(const RelFileLocator *newrlocator)
+{
+    return pg_tde_create_key_map_entry(newrlocator, TDE_KEY_TYPE_GLOBAL);
+}
+
+RelKeyData *
+pg_tde_create_heap_basic_key(const RelFileLocator *newrlocator)
+{
+    return pg_tde_create_key_map_entry(newrlocator, TDE_KEY_TYPE_HEAP_BASIC);
+}
+
+/*
+ * Generate an encrypted key for the relation and store it in the keymap file.
+ */
+RelKeyData *
+pg_tde_create_key_map_entry(const RelFileLocator *newrlocator, uint32 entry_type)
+{
+    InternalKey int_key;
+    RelKeyData *rel_key_data;
+    RelKeyData *enc_rel_key_data;
+    TDEPrincipalKey *principal_key;
+    XLogRelKey xlrec;
+    LWLock *lock_pk = tde_lwlock_enc_keys();
+
+    LWLockAcquire(lock_pk, LW_EXCLUSIVE);
+    principal_key = GetPrincipalKey(newrlocator->dbOid, LW_EXCLUSIVE);
+    if (principal_key == NULL)
+    {
+        LWLockRelease(lock_pk);
+        ereport(ERROR,
+                (errmsg("failed to retrieve principal key. 
Create one using pg_tde_set_principal_key before using encrypted tables.")));
+
+        return NULL;
+    }
+
+    memset(&int_key, 0, sizeof(InternalKey));
+
+    int_key.rel_type = entry_type;
+
+    if (!RAND_bytes(int_key.key, INTERNAL_KEY_LEN))
+    {
+        LWLockRelease(lock_pk);
+        ereport(ERROR,
+                (errcode(ERRCODE_INTERNAL_ERROR),
+                 errmsg("could not generate internal key for relation \"%s\": %s",
+                        "TODO", ERR_error_string(ERR_get_error(), NULL))));
+
+        return NULL;
+    }
+
+    /* Encrypt the key */
+    rel_key_data = tde_create_rel_key(newrlocator->relNumber, &int_key, &principal_key->keyInfo);
+    enc_rel_key_data = tde_encrypt_rel_key(principal_key, rel_key_data, newrlocator->dbOid);
+
+    /*
+     * XLOG internal key
+     */
+    xlrec.rlocator = *newrlocator;
+    xlrec.relKey = *enc_rel_key_data;
+
+    XLogBeginInsert();
+    XLogRegisterData((char *) &xlrec, sizeof(xlrec));
+    XLogInsert(RM_TDERMGR_ID, XLOG_TDE_ADD_RELATION_KEY);
+
+    /*
+     * Add the encrypted key to the key map data file structure.
+     */
+    pg_tde_write_key_map_entry(newrlocator, enc_rel_key_data, &principal_key->keyInfo);
+    LWLockRelease(lock_pk);
+    pfree(enc_rel_key_data);
+    return rel_key_data;
+}
+
+const char *
+tde_sprint_key(InternalKey *k)
+{
+    static char buf[256];
+    int i;
+
+    for (i = 0; i < sizeof(k->key); i++)
+        sprintf(buf + i * 2, "%02X", k->key[i]);
+
+    return buf;
+}
+
+/*
+ * Creates a key for a relation identified by rlocator. Returns the newly
+ * created key.
+ */
+RelKeyData *
+tde_create_rel_key(RelFileNumber rel_num, InternalKey *key, TDEPrincipalKeyInfo *principal_key_info)
+{
+    RelKeyData rel_key_data;
+
+    memcpy(&rel_key_data.principal_key_id, &principal_key_info->keyId, sizeof(TDEPrincipalKeyId));
+    memcpy(&rel_key_data.internal_key, key, sizeof(InternalKey));
+    rel_key_data.internal_key.ctx = NULL;
+
+    /* Add the decrypted key to the cache */
+    return pg_tde_put_key_into_cache(rel_num, &rel_key_data);
+}
+
+/*
+ * Encrypts a given key and returns the encrypted one.
+ */
+RelKeyData *
+tde_encrypt_rel_key(TDEPrincipalKey *principal_key, RelKeyData *rel_key_data, Oid dbOid)
+{
+    RelKeyData *enc_rel_key_data;
+    size_t enc_key_bytes;
+
+    AesEncryptKey(principal_key, dbOid, rel_key_data, &enc_rel_key_data, &enc_key_bytes);
+
+    return enc_rel_key_data;
+}
+
+/*
+ * Deletes the map and key data files for the given database.
+ */
+void
+pg_tde_delete_tde_files(Oid dbOid)
+{
+    char db_map_path[MAXPGPATH] = {0};
+    char db_keydata_path[MAXPGPATH] = {0};
+
+    /* Set the file paths */
+    pg_tde_set_db_file_paths(dbOid, db_map_path, db_keydata_path);
+
+    /* Remove these files without emitting any error */
+    PathNameDeleteTemporaryFile(db_map_path, false);
+    PathNameDeleteTemporaryFile(db_keydata_path, false);
+}
+
+/*
+ * Creates the pair of map and key data files and saves the principal key
+ * information. Returns true if both map and key data files are created.
+ *
+ * If the files pre-exist, it truncates both files before adding principal key
+ * information.
+ *
+ * The caller must have an EXCLUSIVE LOCK on the files before calling this function. 
+ */ +bool +pg_tde_save_principal_key(TDEPrincipalKeyInfo *principal_key_info, bool truncate_existing, bool update_header) +{ + int map_fd = -1; + int keydata_fd = -1; + off_t curr_pos = 0; + bool is_new_map = false; + bool is_new_key_data = false; + char db_map_path[MAXPGPATH] = {0}; + char db_keydata_path[MAXPGPATH] = {0}; + int file_flags = O_RDWR | O_CREAT; + + /* Set the file paths */ + pg_tde_set_db_file_paths(principal_key_info->databaseId, + db_map_path, db_keydata_path); + + ereport(DEBUG2, + (errmsg("pg_tde_save_principal_key"), + errdetail("truncate_existing:%s update_header:%s", truncate_existing?"YES":"NO", update_header?"YES":"NO"))); + /* + * Create or truncate these map and keydata files. + */ + if (truncate_existing) + file_flags |= O_TRUNC; + + map_fd = pg_tde_open_file(db_map_path, principal_key_info, update_header, file_flags, &is_new_map, &curr_pos); + keydata_fd = pg_tde_open_file(db_keydata_path, principal_key_info, update_header, file_flags, &is_new_key_data, &curr_pos); + + /* Closing files. */ + close(map_fd); + close(keydata_fd); + + return (is_new_map && is_new_key_data); +} + +/* + * Write TDE file header to a TDE file. + */ +static int +pg_tde_file_header_write(char *tde_filename, int fd, TDEPrincipalKeyInfo *principal_key_info, off_t *bytes_written) +{ + TDEFileHeader fheader; + size_t sz = sizeof(TDEPrincipalKeyInfo); + + Assert(principal_key_info); + + /* Create the header for this file. */ + fheader.file_version = PG_TDE_FILEMAGIC; + + /* Fill in the data */ + memset(&fheader.principal_key_info, 0, sz); + memcpy(&fheader.principal_key_info, principal_key_info, sz); + + /* TODO: pgstat_report_wait_start / pgstat_report_wait_end */ + *bytes_written = pg_pwrite(fd, &fheader, TDE_FILE_HEADER_SIZE, 0); + + if (*bytes_written != TDE_FILE_HEADER_SIZE) + { + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not write tde file \"%s\": %m", + tde_filename))); + } + + if (pg_fsync(fd) != 0) + { + ereport(data_sync_elevel(ERROR), + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", tde_filename))); + } + ereport(DEBUG2, + (errmsg("Wrote the header to %s", tde_filename))); + + return fd; +} + +/* + * Key Map Table [pg_tde.map]: + * header: {Format Version, Principal Key Name} + * data: {OID, Flag, index of key in pg_tde.dat}... + * + * Returns the index of the key to be written in the key data file. + * The caller must hold an exclusive lock on the map file to avoid + * concurrent in place updates leading to data conflicts. + */ +static int32 +pg_tde_write_map_entry(const RelFileLocator *rlocator, uint32 entry_type, char *db_map_path, TDEPrincipalKeyInfo *principal_key_info) +{ + int map_fd = -1; + int32 key_index = 0; + TDEMapEntry map_entry; + bool is_new_file; + off_t curr_pos = 0; + off_t prev_pos = 0; + bool found = false; + + /* Open and validate file for basic correctness. */ + map_fd = pg_tde_open_file(db_map_path, principal_key_info, false, O_RDWR | O_CREAT, &is_new_file, &curr_pos); + prev_pos = curr_pos; + + /* + * Read until we find an empty slot. Otherwise, read until end. This seems + * to be less frequent than vacuum. So let's keep this function here + * rather than overloading the vacuum process. 
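Because both files hold fixed-size records after the header, an entry's location is pure arithmetic, and the scan below only has to walk entries linearly until it finds a free one. Schematically (names as defined earlier in this file; this mirrors the offset computation in pg_tde_write_one_keydata below):

    /* Offset of the key_index'th map entry, and of its key material: */
    off_t map_off = TDE_FILE_HEADER_SIZE + key_index * MAP_ENTRY_SIZE;
    off_t dat_off = TDE_FILE_HEADER_SIZE + key_index * INTERNAL_KEY_DAT_LEN;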
+ */
+    while (1)
+    {
+        prev_pos = curr_pos;
+        found = pg_tde_read_one_map_entry(map_fd, NULL, MAP_ENTRY_EMPTY, &map_entry, &curr_pos);
+
+        /*
+         * We either reached EOF or found an empty slot in the middle of the
+         * file.
+         */
+        if (prev_pos == curr_pos || found)
+            break;
+
+        /* Increment the offset and the key index */
+        key_index++;
+    }
+
+    /*
+     * Write the given entry at the location pointed to by prev_pos, i.e. the
+     * free entry.
+     */
+    curr_pos = prev_pos;
+    pg_tde_write_one_map_entry(map_fd, rlocator, entry_type, key_index, &map_entry, &prev_pos);
+
+    /* Let's close the file. */
+    close(map_fd);
+
+    /* Register the entry to be freed in case the transaction aborts */
+    RegisterEntryForDeletion(rlocator, curr_pos, false);
+
+    return key_index;
+}
+
+/*
+ * Based on the given arguments, creates and writes the entry into the key
+ * map file.
+ */
+static off_t
+pg_tde_write_one_map_entry(int fd, const RelFileLocator *rlocator, uint32 flags, int32 key_index, TDEMapEntry *map_entry, off_t *offset)
+{
+    int bytes_written = 0;
+
+    Assert(map_entry);
+
+    /* Fill in the map entry structure */
+    map_entry->relNumber = (rlocator == NULL) ? 0 : rlocator->relNumber;
+    map_entry->flags = flags;
+    map_entry->key_index = key_index;
+
+    /* TODO: pgstat_report_wait_start / pgstat_report_wait_end */
+    bytes_written = pg_pwrite(fd, map_entry, MAP_ENTRY_SIZE, *offset);
+
+    /* Add the entry to the file */
+    if (bytes_written != MAP_ENTRY_SIZE)
+    {
+        char db_map_path[MAXPGPATH] = {0};
+
+        /* TODO: this seems like a bad idea? */
+        pg_tde_set_db_file_paths(rlocator->dbOid, db_map_path, NULL);
+        ereport(ERROR,
+                (errcode_for_file_access(),
+                 errmsg("could not write tde map file \"%s\": %m",
+                        db_map_path)));
+    }
+    if (pg_fsync(fd) != 0)
+    {
+        char db_map_path[MAXPGPATH] = {0};
+
+        /* TODO: this seems like a bad idea? */
+        pg_tde_set_db_file_paths(rlocator->dbOid, db_map_path, NULL);
+        ereport(data_sync_elevel(ERROR),
+                (errcode_for_file_access(),
+                 errmsg("could not fsync file \"%s\": %m", db_map_path)));
+    }
+
+    return (*offset + bytes_written);
+}
+
+/*
+ * Key Data [pg_tde.dat]:
+ *        header: {Format Version: x}
+ *        data: {Encrypted Key}
+ *
+ * Requires a valid index of the key to be written. The function will seek to
+ * the required location in the file. Any holes will be filled when another
+ * job finds an empty index.
+ */
+static void
+pg_tde_write_keydata(char *db_keydata_path, TDEPrincipalKeyInfo *principal_key_info, int32 key_index, RelKeyData *enc_rel_key_data)
+{
+    File fd = -1;
+    bool is_new_file;
+    off_t curr_pos = 0;
+
+    /* Open and validate file for basic correctness. */
+    fd = pg_tde_open_file(db_keydata_path, principal_key_info, false, O_RDWR | O_CREAT, &is_new_file, &curr_pos);
+
+    /* Write a single key data */
+    pg_tde_write_one_keydata(fd, key_index, enc_rel_key_data);
+
+    /* Let's close the file. */
+    close(fd);
+}
+
+/*
+ * Writes a single RelKeyData into the file at the given index.
+ */
+static void
+pg_tde_write_one_keydata(int fd, int32 key_index, RelKeyData *enc_rel_key_data)
+{
+    off_t curr_pos;
+
+    Assert(fd != -1);
+
+    /* Calculate the writing position in the file. */
+    curr_pos = (key_index * INTERNAL_KEY_DAT_LEN) + TDE_FILE_HEADER_SIZE;
+
+    /* TODO: pgstat_report_wait_start / pgstat_report_wait_end */
+    if (pg_pwrite(fd, &enc_rel_key_data->internal_key, INTERNAL_KEY_DAT_LEN, curr_pos) != INTERNAL_KEY_DAT_LEN)
+    {
+        /* TODO: what now? The file is corrupted. */
+        ereport(ERROR,
+                (errcode_for_file_access(),
+                 errmsg("could not write tde key data file: %m")));
+    }
+
+    if (pg_fsync(fd) != 0)
+    {
+        ereport(data_sync_elevel(ERROR),
+                (errcode_for_file_access(),
+                 errmsg("could not fsync file: %m")));
+    }
+}
+
+/*
+ * Calls the create map entry function to get an index into the keydata. The
+ * keydata function will then write the encrypted key at the desired
+ * location.
+ *
+ * The caller must hold an exclusive tde_lwlock_enc_keys lock.
+ */
+void
+pg_tde_write_key_map_entry(const RelFileLocator *rlocator, RelKeyData *enc_rel_key_data, TDEPrincipalKeyInfo *principal_key_info)
+{
+    int32 key_index = 0;
+    char db_map_path[MAXPGPATH] = {0};
+    char db_keydata_path[MAXPGPATH] = {0};
+
+    Assert(rlocator);
+
+    /* Set the file paths */
+    pg_tde_set_db_file_paths(rlocator->dbOid, db_map_path, db_keydata_path);
+
+    /* Create the map entry and then add the encrypted key to the data file */
+    key_index = pg_tde_write_map_entry(rlocator, enc_rel_key_data->internal_key.rel_type, db_map_path, principal_key_info);
+
+    /* Add the encrypted key to the data file. */
+    pg_tde_write_keydata(db_keydata_path, principal_key_info, key_index, enc_rel_key_data);
+}
+
+/*
+ * Deletes a map entry by marking it as unused. We don't have to delete the
+ * actual key data, as valid key data entries are identified by valid map
+ * entries.
+ */
+void
+pg_tde_delete_key_map_entry(const RelFileLocator *rlocator, uint32 key_type)
+{
+    int32 key_index = 0;
+    off_t offset = 0;
+    LWLock *lock_files = tde_lwlock_enc_keys();
+    char db_map_path[MAXPGPATH] = {0};
+    char db_keydata_path[MAXPGPATH] = {0};
+
+    Assert(rlocator);
+
+    /* Get the file paths */
+    pg_tde_set_db_file_paths(rlocator->dbOid, db_map_path, db_keydata_path);
+
+    errno = 0;
+    /* Remove the map entry if found */
+    LWLockAcquire(lock_files, LW_EXCLUSIVE);
+    key_index = pg_tde_process_map_entry(rlocator, key_type, db_map_path, &offset, false);
+    LWLockRelease(lock_files);
+
+    if (key_index == -1)
+    {
+        ereport(WARNING,
+                (errcode(ERRCODE_NO_DATA_FOUND),
+                 errmsg("could not find the required map entry for deletion of relation %d in tde map file \"%s\": %m",
+                        rlocator->relNumber,
+                        db_map_path)));
+
+        return;
+    }
+
+    /* Register the entry to be freed when the transaction commits */
+    RegisterEntryForDeletion(rlocator, offset, true);
+}
+
+/*
+ * Called when a transaction completes, either by commit or abort.
+ * By default, when a transaction creates an entry, we mark it as
+ * MAP_ENTRY_VALID; only during the abort phase do we proceed to mark it as
+ * MAP_ENTRY_FREE. This optimistic strategy assumes that transactions commit
+ * more often than they abort, and thereby avoids unnecessary locking.
+ *
+ * The offset allows us to simply seek to the desired location and mark the
+ * entry as MAP_ENTRY_FREE without any further processing.
+ *
+ * A caller should hold an EXCLUSIVE tde_lwlock_enc_keys lock.
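Because the offset was captured when the entry was written, this abort-path cleanup reduces to a single in-place record rewrite. The idea, sketched (an aside, not patch code; the actual work happens inside pg_tde_process_map_entry with should_delete set, as called below):

    TDEMapEntry entry;

    /* Seek straight to the recorded entry and mark it reusable. */
    pg_pread(fd, &entry, MAP_ENTRY_SIZE, offset);
    entry.flags = MAP_ENTRY_FREE;
    pg_pwrite(fd, &entry, MAP_ENTRY_SIZE, offset);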
+ */ +void +pg_tde_free_key_map_entry(const RelFileLocator *rlocator, uint32 key_type, off_t offset) +{ + int32 key_index = 0; + char db_map_path[MAXPGPATH] = {0}; + + Assert(rlocator); + + /* Get the file paths */ + pg_tde_set_db_file_paths(rlocator->dbOid, db_map_path, NULL); + + /* Remove the map entry if found */ + key_index = pg_tde_process_map_entry(rlocator, key_type, db_map_path, &offset, true); + + if (key_index == -1) + { + ereport(WARNING, + (errcode(ERRCODE_NO_DATA_FOUND), + errmsg("could not find the required map entry for deletion of relation %d in tde map file \"%s\": %m", + rlocator->relNumber, + db_map_path))); + + } +} + +/* + * Accepts the unrotated filename and returns the rotation temp + * filename. Both the strings are expected to be of the size + * MAXPGPATH. + * + * No error checking by this function. + */ +static File +keyrotation_init_file(TDEPrincipalKeyInfo *new_principal_key_info, char *rotated_filename, char *filename, bool *is_new_file, off_t *curr_pos) +{ + /* + * Set the new filenames for the key rotation process - temporary at the + * moment + */ + snprintf(rotated_filename, MAXPGPATH, "%s.r", filename); + + /* Create file, truncate if the rotate file already exits */ + return pg_tde_open_file(rotated_filename, new_principal_key_info, false, O_RDWR | O_CREAT | O_TRUNC, is_new_file, curr_pos); +} + +/* + * Do the final steps in the key rotation. + */ +static void +finalize_key_rotation(char *m_path_old, char *k_path_old, char *m_path_new, char *k_path_new) +{ + /* Remove old files */ + durable_unlink(m_path_old, ERROR); + durable_unlink(k_path_old, ERROR); + + /* Rename the new files to required filenames */ + durable_rename(m_path_new, m_path_old, ERROR); + durable_rename(k_path_new, k_path_old, ERROR); +} + +/* + * Rotate keys and generates the WAL record for it. + */ +bool +pg_tde_perform_rotate_key(TDEPrincipalKey *principal_key, TDEPrincipalKey *new_principal_key) +{ +#define OLD_PRINCIPAL_KEY 0 +#define NEW_PRINCIPAL_KEY 1 +#define PRINCIPAL_KEY_COUNT 2 + + off_t curr_pos[PRINCIPAL_KEY_COUNT] = {0}; + off_t prev_pos[PRINCIPAL_KEY_COUNT] = {0}; + int32 key_index[PRINCIPAL_KEY_COUNT] = {0}; + RelKeyData *rel_key_data[PRINCIPAL_KEY_COUNT]; + RelKeyData *enc_rel_key_data[PRINCIPAL_KEY_COUNT]; + int m_fd[PRINCIPAL_KEY_COUNT] = {-1}; + int k_fd[PRINCIPAL_KEY_COUNT] = {-1}; + char m_path[PRINCIPAL_KEY_COUNT][MAXPGPATH]; + char k_path[PRINCIPAL_KEY_COUNT][MAXPGPATH]; + bool found = false; + off_t read_pos_tmp = 0; + bool is_new_file; + off_t map_size; + off_t keydata_size; + XLogPrincipalKeyRotate *xlrec; + off_t xlrec_size; + char db_map_path[MAXPGPATH] = {0}; + char db_keydata_path[MAXPGPATH] = {0}; + bool success = true; + + /* Set the file paths */ + pg_tde_set_db_file_paths(principal_key->keyInfo.databaseId, + db_map_path, db_keydata_path); + + /* + * Let's update the pathnames in the local variable for ease of + * use/readability + */ + strncpy(m_path[OLD_PRINCIPAL_KEY], db_map_path, MAXPGPATH); + strncpy(k_path[OLD_PRINCIPAL_KEY], db_keydata_path, MAXPGPATH); + + /* + * Open both files in read only mode. We don't need to track the current + * position of the keydata file. 
We always use the key index + */ + m_fd[OLD_PRINCIPAL_KEY] = pg_tde_open_file(m_path[OLD_PRINCIPAL_KEY], &principal_key->keyInfo, false, O_RDONLY, &is_new_file, &curr_pos[OLD_PRINCIPAL_KEY]); + k_fd[OLD_PRINCIPAL_KEY] = pg_tde_open_file(k_path[OLD_PRINCIPAL_KEY], &principal_key->keyInfo, false, O_RDONLY, &is_new_file, &read_pos_tmp); + + m_fd[NEW_PRINCIPAL_KEY] = keyrotation_init_file(&new_principal_key->keyInfo, m_path[NEW_PRINCIPAL_KEY], m_path[OLD_PRINCIPAL_KEY], &is_new_file, &curr_pos[NEW_PRINCIPAL_KEY]); + k_fd[NEW_PRINCIPAL_KEY] = keyrotation_init_file(&new_principal_key->keyInfo, k_path[NEW_PRINCIPAL_KEY], k_path[OLD_PRINCIPAL_KEY], &is_new_file, &read_pos_tmp); + + /* Read all entries until EOF */ + for (key_index[OLD_PRINCIPAL_KEY] = 0;; key_index[OLD_PRINCIPAL_KEY]++) + { + TDEMapEntry read_map_entry, + write_map_entry; + RelFileLocator rloc; + + prev_pos[OLD_PRINCIPAL_KEY] = curr_pos[OLD_PRINCIPAL_KEY]; + found = pg_tde_read_one_map_entry(m_fd[OLD_PRINCIPAL_KEY], NULL, MAP_ENTRY_VALID, &read_map_entry, &curr_pos[OLD_PRINCIPAL_KEY]); + + /* We either reach EOF */ + if (prev_pos[OLD_PRINCIPAL_KEY] == curr_pos[OLD_PRINCIPAL_KEY]) + break; + + /* We didn't find a valid entry */ + if (found == false) + continue; + + rloc.relNumber = read_map_entry.relNumber; + rloc.dbOid = principal_key->keyInfo.databaseId; + + /* Let's get the decrypted key and re-encrypt it with the new key. */ + enc_rel_key_data[OLD_PRINCIPAL_KEY] = pg_tde_read_one_keydata(k_fd[OLD_PRINCIPAL_KEY], key_index[OLD_PRINCIPAL_KEY], principal_key); + + /* Decrypt and re-encrypt keys */ + rel_key_data[OLD_PRINCIPAL_KEY] = tde_decrypt_rel_key(principal_key, enc_rel_key_data[OLD_PRINCIPAL_KEY], principal_key->keyInfo.databaseId); + enc_rel_key_data[NEW_PRINCIPAL_KEY] = tde_encrypt_rel_key(new_principal_key, rel_key_data[OLD_PRINCIPAL_KEY], principal_key->keyInfo.databaseId); + + /* Write the given entry at the location pointed by prev_pos */ + prev_pos[NEW_PRINCIPAL_KEY] = curr_pos[NEW_PRINCIPAL_KEY]; + curr_pos[NEW_PRINCIPAL_KEY] = pg_tde_write_one_map_entry(m_fd[NEW_PRINCIPAL_KEY], &rloc, read_map_entry.flags, key_index[NEW_PRINCIPAL_KEY], &write_map_entry, &prev_pos[NEW_PRINCIPAL_KEY]); + pg_tde_write_one_keydata(k_fd[NEW_PRINCIPAL_KEY], key_index[NEW_PRINCIPAL_KEY], enc_rel_key_data[NEW_PRINCIPAL_KEY]); + + /* Increment the key index for the new principal key */ + key_index[NEW_PRINCIPAL_KEY]++; + } + + /* Close unrotated files */ + close(m_fd[OLD_PRINCIPAL_KEY]); + close(k_fd[OLD_PRINCIPAL_KEY]); + + /* Let's calculate sizes */ + map_size = lseek(m_fd[NEW_PRINCIPAL_KEY], 0, SEEK_END); + keydata_size = lseek(k_fd[NEW_PRINCIPAL_KEY], 0, SEEK_END); + xlrec_size = map_size + keydata_size + SizeoOfXLogPrincipalKeyRotate; + + /* palloc and fill in the structure */ + xlrec = (XLogPrincipalKeyRotate *) palloc(xlrec_size); + + xlrec->databaseId = principal_key->keyInfo.databaseId; + xlrec->map_size = map_size; + xlrec->keydata_size = keydata_size; + + /* TODO: pgstat_report_wait_start / pgstat_report_wait_end */ + /* TODO: error handling */ + if(pg_pread(m_fd[NEW_PRINCIPAL_KEY], xlrec->buff, xlrec->map_size, 0) == -1) success = false; + if(pg_pread(k_fd[NEW_PRINCIPAL_KEY], &xlrec->buff[xlrec->map_size], xlrec->keydata_size, 0) == -1) success = false; + + /* Close the files */ + close(m_fd[NEW_PRINCIPAL_KEY]); + close(k_fd[NEW_PRINCIPAL_KEY]); + + /* Insert the XLog record */ + XLogBeginInsert(); + XLogRegisterData((char *) xlrec, xlrec_size); + XLogInsert(RM_TDERMGR_ID, XLOG_TDE_ROTATE_KEY); + + /* Do the final steps */ + 
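+    /*
+     * finalize_key_rotation() durably unlinks the old map/keydata files and
+     * renames the ".r" temporaries created by keyrotation_init_file() over
+     * the original file names.
+     */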
finalize_key_rotation(m_path[OLD_PRINCIPAL_KEY], k_path[OLD_PRINCIPAL_KEY], + m_path[NEW_PRINCIPAL_KEY], k_path[NEW_PRINCIPAL_KEY]); + + /* Free up the palloc'ed data */ + pfree(xlrec); + + return success; + +#undef OLD_PRINCIPAL_KEY +#undef NEW_PRINCIPAL_KEY +#undef PRINCIPAL_KEY_COUNT +} + +/* + * Rotate keys on a standby. + */ +bool +pg_tde_write_map_keydata_files(off_t map_size, char *m_file_data, off_t keydata_size, char *k_file_data) +{ + TDEFileHeader *fheader; + char m_path_new[MAXPGPATH]; + char k_path_new[MAXPGPATH]; + int m_fd_new; + int k_fd_new; + bool is_new_file; + off_t curr_pos = 0; + off_t read_pos_tmp = 0; + char db_map_path[MAXPGPATH] = {0}; + char db_keydata_path[MAXPGPATH] = {0}; + bool is_err = false; + + /* Let's get the header. Buff should start with the map file header. */ + fheader = (TDEFileHeader *) m_file_data; + + /* Set the file paths */ + pg_tde_set_db_file_paths(fheader->principal_key_info.databaseId, + db_map_path, db_keydata_path); + + /* Initialize the new files and set the names */ + m_fd_new = keyrotation_init_file(&fheader->principal_key_info, m_path_new, db_map_path, &is_new_file, &curr_pos); + k_fd_new = keyrotation_init_file(&fheader->principal_key_info, k_path_new, db_keydata_path, &is_new_file, &read_pos_tmp); + + /* TODO: pgstat_report_wait_start / pgstat_report_wait_end */ + if (pg_pwrite(m_fd_new, m_file_data, map_size, 0) != map_size) + { + ereport(WARNING, + (errcode_for_file_access(), + errmsg("could not write tde file \"%s\": %m", + m_path_new))); + is_err = true; + goto FINALIZE; + } + if (pg_fsync(m_fd_new) != 0) + { + ereport(WARNING, + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", m_path_new))); + is_err = true; + goto FINALIZE; + } + + + if (pg_pwrite(k_fd_new, k_file_data, keydata_size, 0) != keydata_size) + { + ereport(WARNING, + (errcode_for_file_access(), + errmsg("could not write tde file \"%s\": %m", + k_path_new))); + is_err = true; + goto FINALIZE; + } + if (pg_fsync(k_fd_new) != 0) + { + ereport(WARNING, + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", k_path_new))); + is_err = true; + goto FINALIZE; + } + +FINALIZE: + close(m_fd_new); + close(k_fd_new); + + if (!is_err) + finalize_key_rotation(db_map_path, db_keydata_path, m_path_new, k_path_new); + + return !is_err; +} + +/* + * Saves the relation key with the new relfilenode. + * Needed by ALTER TABLE SET TABLESPACE for example. + */ +void +pg_tde_move_rel_key(const RelFileLocator *newrlocator, const RelFileLocator *oldrlocator) +{ + RelKeyData *rel_key; + RelKeyData *enc_key; + TDEPrincipalKey *principal_key; + XLogRelKey xlrec; + char db_map_path[MAXPGPATH] = {0}; + char db_keydata_path[MAXPGPATH] = {0}; + off_t offset = 0; + int32 key_index = 0; + + pg_tde_set_db_file_paths(oldrlocator->dbOid, db_map_path, db_keydata_path); + + LWLockAcquire(tde_lwlock_enc_keys(), LW_EXCLUSIVE); + + principal_key = GetPrincipalKey(oldrlocator->dbOid, LW_EXCLUSIVE); + Assert(principal_key); + + /* + * We don't use internal_key cache to avoid locking complications. 
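+     * Instead, the entry is re-read from the map and keydata files below,
+     * while tde_lwlock_enc_keys is held in exclusive mode.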
+     */
+    key_index = pg_tde_process_map_entry(oldrlocator, MAP_ENTRY_VALID, db_map_path, &offset, false);
+    Assert(key_index != -1);
+
+    enc_key = pg_tde_read_keydata(db_keydata_path, key_index, principal_key);
+    rel_key = tde_decrypt_rel_key(principal_key, enc_key, oldrlocator->dbOid);
+
+    xlrec.rlocator = *newrlocator;
+    xlrec.relKey = *enc_key;
+    xlrec.pkInfo = principal_key->keyInfo;
+    XLogBeginInsert();
+    XLogRegisterData((char *) &xlrec, sizeof(xlrec));
+    XLogInsert(RM_TDERMGR_ID, XLOG_TDE_ADD_RELATION_KEY);
+
+    pg_tde_write_key_map_entry(newrlocator, enc_key, &principal_key->keyInfo);
+    pg_tde_put_key_into_cache(newrlocator->relNumber, rel_key);
+
+    XLogBeginInsert();
+    XLogRegisterData((char *) oldrlocator, sizeof(RelFileLocator));
+    XLogInsert(RM_TDERMGR_ID, XLOG_TDE_FREE_MAP_ENTRY);
+
+    /*
+     * Clean up the map/dat entries. This will also remove the physical files
+     * (*.map, *.dat and keyring) if it was the last tde_heap_basic relation
+     * in the old locator AND it was a custom tablespace.
+     */
+    pg_tde_free_key_map_entry(oldrlocator, MAP_ENTRY_VALID, offset);
+
+    LWLockRelease(tde_lwlock_enc_keys());
+
+    pfree(enc_key);
+}
+
+#endif                            /* !FRONTEND */
+
+/*
+ * Reads the key of the required relation. It identifies its map entry and
+ * then simply reads the key data from the keydata file.
+ */
+RelKeyData *
+pg_tde_get_key_from_file(const RelFileLocator *rlocator, uint32 key_type, bool no_map_ok)
+{
+    int32       key_index = 0;
+    TDEPrincipalKey *principal_key;
+    RelKeyData *rel_key_data;
+    RelKeyData *enc_rel_key_data;
+    off_t       offset = 0;
+    LWLock     *lock_pk = tde_lwlock_enc_keys();
+    char        db_map_path[MAXPGPATH] = {0};
+    char        db_keydata_path[MAXPGPATH] = {0};
+
+    Assert(rlocator);
+
+    /*
+     * Get/generate a principal key, create the key for the relation and get
+     * the encrypted key with bytes to write.
+     *
+     * We should hold the lock until the internal key is loaded to be sure
+     * the retrieved key was encrypted with the obtained principal key.
+     * Otherwise, the following may happen:
+     * - GetPrincipalKey returns key "PKey_1".
+     * - Some other process rotates the principal key and re-encrypts an
+     *   internal key with "PKey_2".
+     * - We read the internal key and decrypt it with "PKey_1" (that's what
+     *   we've got).
+     * As a result, we would return an invalid internal key.
+     */
+    LWLockAcquire(lock_pk, LW_SHARED);
+    principal_key = GetPrincipalKey(rlocator->dbOid, LW_SHARED);
+    if (principal_key == NULL)
+    {
+        LWLockRelease(lock_pk);
+        ereport(ERROR,
+                (errmsg("failed to retrieve principal key. Create one using pg_tde_set_principal_key before using encrypted tables.")));
+    }
+
+    /* Get the file paths */
+    pg_tde_set_db_file_paths(rlocator->dbOid, db_map_path, db_keydata_path);
+
+    if (no_map_ok && access(db_map_path, F_OK) == -1)
+    {
+        LWLockRelease(lock_pk);
+        return NULL;
+    }
+    /* Read the map entry and get the index of the relation key */
+    key_index = pg_tde_process_map_entry(rlocator, key_type, db_map_path, &offset, false);
+
+    if (key_index == -1)
+    {
+        LWLockRelease(lock_pk);
+        return NULL;
+    }
+
+    enc_rel_key_data = pg_tde_read_keydata(db_keydata_path, key_index, principal_key);
+    LWLockRelease(lock_pk);
+
+    rel_key_data = tde_decrypt_rel_key(principal_key, enc_rel_key_data, rlocator->dbOid);
+
+    return rel_key_data;
+}
+
+/*
+ * Returns the index of the read map entry if we find a valid match; i.e.
+ * - flags is set to MAP_ENTRY_VALID and the relNumber matches the one
+ *   provided in rlocator.
+ * - If should_delete is true, we delete the entry. An offset value may
+ *   be passed to speed up the file reading operation.
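+ *
+ * The returned index is the entry's ordinal position in the map file (valid
+ * and free slots both count), so the caller can use it directly to locate
+ * the corresponding slot in the keydata file.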
+ * + * The function expects that the offset points to a valid map start location. + */ +static int32 +pg_tde_process_map_entry(const RelFileLocator *rlocator, uint32 key_type, char *db_map_path, off_t *offset, bool should_delete) +{ + File map_fd = -1; + int32 key_index = 0; + TDEMapEntry map_entry; + bool is_new_file; + bool found = false; + off_t prev_pos = 0; + off_t curr_pos = 0; + + Assert(offset); + + /* + * Open and validate file for basic correctness. DO NOT create it. The + * file should pre-exist otherwise we should never be here. + */ + map_fd = pg_tde_open_file(db_map_path, NULL, false, O_RDWR, &is_new_file, &curr_pos); + + /* + * If we need to delete an entry, we expect an offset value to the start + * of the entry to speed up the operation. Otherwise, we'd be sequentially + * scanning the entire map file. + */ + if (should_delete == true && *offset > 0) + { + curr_pos = lseek(map_fd, *offset, SEEK_SET); + + if (curr_pos == -1) + { + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not seek in tde map file \"%s\": %m", + db_map_path))); + return curr_pos; + } + } + else + { + /* Otherwise, let's just offset to zero */ + *offset = 0; + } + + /* + * Read until we find an empty slot. Otherwise, read until end. This seems + * to be less frequent than vacuum. So let's keep this function here + * rather than overloading the vacuum process. + */ + while (1) + { + prev_pos = curr_pos; + found = pg_tde_read_one_map_entry(map_fd, rlocator, key_type, &map_entry, &curr_pos); + + /* We've reached EOF */ + if (curr_pos == prev_pos) + break; + + /* We found a valid entry for the relNumber */ + if (found) + { +#ifndef FRONTEND + /* Mark the entry pointed by prev_pos as free */ + if (should_delete) + { + pg_tde_write_one_map_entry(map_fd, NULL, MAP_ENTRY_EMPTY, 0, &map_entry, &prev_pos); + } +#endif + break; + } + + /* Increment the offset and the key index */ + key_index++; + } + + /* Let's close the file. */ + close(map_fd); + + /* Return -1 indicating that no entry was removed */ + return ((found) ? key_index : -1); +} + + +/* + * Open the file and read the required key data from file and return encrypted key. + * The caller should hold a tde_lwlock_enc_keys lock + */ +static RelKeyData * +pg_tde_read_keydata(char *db_keydata_path, int32 key_index, TDEPrincipalKey *principal_key) +{ + int fd = -1; + RelKeyData *enc_rel_key_data; + off_t read_pos = 0; + bool is_new_file; + + /* Open and validate file for basic correctness. */ + fd = pg_tde_open_file(db_keydata_path, &principal_key->keyInfo, false, O_RDONLY, &is_new_file, &read_pos); + + /* Read the encrypted key from file */ + enc_rel_key_data = pg_tde_read_one_keydata(fd, key_index, principal_key); + + /* Let's close the file. */ + close(fd); + + return enc_rel_key_data; +} + + +/* + * Decrypts a given key and returns the decrypted one. + */ +RelKeyData * +tde_decrypt_rel_key(TDEPrincipalKey *principal_key, RelKeyData *enc_rel_key_data, Oid dbOid) +{ + RelKeyData *rel_key_data = NULL; + size_t key_bytes; + + AesDecryptKey(principal_key, dbOid, &rel_key_data, enc_rel_key_data, &key_bytes); + + return rel_key_data; +} + + +/* + * Open and Validate File Header [pg_tde.*]: + * header: {Format Version, Principal Key Name} + * + * Returns the file descriptor in case of a success. Otherwise, error + * is raised. + * + * Also, it sets the is_new_file to true if the file is just created. This is + * useful to know when reading a file so that we can skip further processing. 
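+ *
+ * For example, the key rotation code creates its temporary files with
+ * O_RDWR | O_CREAT | O_TRUNC, while plain reads pass O_RDONLY (see the
+ * callers above).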
+ *
+ * Plus, there is nothing wrong with a create even if we are going to read
+ * data. This will save the creation overhead the next time. Ideally, this
+ * should never happen for a read operation as it indicates a missing file.
+ *
+ * The caller can pass the required flags to ensure that the file is created
+ * or an error is thrown if the file does not exist.
+ */
+static int
+pg_tde_open_file(char *tde_filename, TDEPrincipalKeyInfo *principal_key_info, bool update_header, int fileFlags, bool *is_new_file, off_t *curr_pos)
+{
+    int         fd = -1;
+    TDEFileHeader fheader;
+    off_t       bytes_read = 0;
+    off_t       bytes_written = 0;
+
+    /*
+     * Ensuring that we always open the file in binary mode. The caller must
+     * specify other flags for reading, writing or creating the file.
+     */
+    fd = pg_tde_open_file_basic(tde_filename, fileFlags, false);
+
+    pg_tde_file_header_read(tde_filename, fd, &fheader, is_new_file, &bytes_read);
+
+#ifndef FRONTEND
+    /* In case it's a new file, let's add the header now. */
+    if ((*is_new_file || update_header) && principal_key_info)
+        pg_tde_file_header_write(tde_filename, fd, principal_key_info, &bytes_written);
+#endif                            /* !FRONTEND */
+
+    *curr_pos = bytes_read + bytes_written;
+    return fd;
+}
+
+
+/*
+ * Open a TDE file [pg_tde.*]:
+ *
+ * Returns the file descriptor in case of a success. Otherwise, an error
+ * is raised, except when ignore_missing is true and the file does not exist.
+ */
+static int
+pg_tde_open_file_basic(char *tde_filename, int fileFlags, bool ignore_missing)
+{
+    int         fd = -1;
+
+    /*
+     * Ensuring that we always open the file in binary mode. The caller must
+     * specify other flags for reading, writing or creating the file.
+     */
+    fd = BasicOpenFile(tde_filename, fileFlags | PG_BINARY);
+    if (fd < 0 && !(errno == ENOENT && ignore_missing == true))
+    {
+        ereport(ERROR,
+                (errcode_for_file_access(),
+                 errmsg("could not open tde file \"%s\": %m",
+                        tde_filename)));
+    }
+
+    return fd;
+}
+
+
+/*
+ * Read the TDE file header from a TDE file and fill in the fheader data
+ * structure.
+ */
+static int
+pg_tde_file_header_read(char *tde_filename, int fd, TDEFileHeader *fheader, bool *is_new_file, off_t *bytes_read)
+{
+    Assert(fheader);
+
+    /* TODO: pgstat_report_wait_start / pgstat_report_wait_end */
+    *bytes_read = pg_pread(fd, fheader, TDE_FILE_HEADER_SIZE, 0);
+    *is_new_file = (*bytes_read == 0);
+
+    /* File doesn't exist */
+    if (*bytes_read == 0)
+        return fd;
+
+    if (*bytes_read != TDE_FILE_HEADER_SIZE
+        || fheader->file_version != PG_TDE_FILEMAGIC)
+    {
+        /* Corrupt file */
+        ereport(FATAL,
+                (errcode_for_file_access(),
+                 errmsg("TDE map file \"%s\" is corrupted: %m",
+                        tde_filename)));
+    }
+
+    return fd;
+}
+
+
+/*
+ * Returns true if a valid map entry is found. Otherwise, it only increments
+ * the offset and returns false. If the offset value is unchanged, it
+ * indicates to the caller that nothing was read.
+ *
+ * If a non-NULL rlocator is provided, the function compares the read value
+ * against the relNumber of rlocator. It sets found accordingly.
+ *
+ * The caller is responsible for identifying that we have reached EOF by
+ * comparing the old and new values of the offset.
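+ *
+ * For example, the scan loops above detect EOF as follows (a sketch; the
+ * variable names vary by caller):
+ *
+ *     prev_pos = curr_pos;
+ *     found = pg_tde_read_one_map_entry(fd, rlocator, key_type, &entry, &curr_pos);
+ *     if (curr_pos == prev_pos)
+ *         break;          <-- EOF: nothing was read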
+ */ +static bool +pg_tde_read_one_map_entry(File map_file, const RelFileLocator *rlocator, int flags, TDEMapEntry *map_entry, off_t *offset) +{ + bool found; + off_t bytes_read = 0; + + Assert(map_entry); + Assert(offset); + + /* Read the entry at the given offset */ + /* TODO: pgstat_report_wait_start / pgstat_report_wait_end */ + bytes_read = pg_pread(map_file, map_entry, MAP_ENTRY_SIZE, *offset); + + /* We've reached the end of the file. */ + if (bytes_read != MAP_ENTRY_SIZE) + return false; + + *offset += bytes_read; + + /* We found a valid entry for the relNumber */ + found = (map_entry->flags & flags); + + /* If a valid rlocator is provided, let's compare and set found value */ + found &= (rlocator == NULL) ? true : (map_entry->relNumber == rlocator->relNumber); + + return found; +} + +/* + * Reads a single keydata from the file. + */ +static RelKeyData * +pg_tde_read_one_keydata(int keydata_fd, int32 key_index, TDEPrincipalKey *principal_key) +{ + RelKeyData *enc_rel_key_data; + off_t read_pos = 0; + + /* Allocate and fill in the structure */ + enc_rel_key_data = (RelKeyData *) palloc(sizeof(RelKeyData)); + + strncpy(enc_rel_key_data->principal_key_id.name, principal_key->keyInfo.keyId.name, PRINCIPAL_KEY_NAME_LEN); + + /* Calculate the reading position in the file. */ + read_pos += (key_index * INTERNAL_KEY_DAT_LEN) + TDE_FILE_HEADER_SIZE; + + /* Check if the file has a valid key */ + if ((read_pos + INTERNAL_KEY_DAT_LEN) > lseek(keydata_fd, 0, SEEK_END)) + { + char db_keydata_path[MAXPGPATH] = {0}; + + pg_tde_set_db_file_paths(principal_key->keyInfo.databaseId, NULL, db_keydata_path); + ereport(FATAL, + (errcode(ERRCODE_NO_DATA_FOUND), + errmsg("could not find the required key at index %d in tde data file \"%s\": %m", + key_index, + db_keydata_path))); + } + + /* Read the encrypted key */ + /* TODO: pgstat_report_wait_start / pgstat_report_wait_end */ + if (pg_pread(keydata_fd, &(enc_rel_key_data->internal_key), INTERNAL_KEY_DAT_LEN, read_pos) != INTERNAL_KEY_DAT_LEN) + { + char db_keydata_path[MAXPGPATH] = {0}; + + pg_tde_set_db_file_paths(principal_key->keyInfo.databaseId, NULL, db_keydata_path); + ereport(FATAL, + (errcode_for_file_access(), + errmsg("could not read key at index %d in tde key data file \"%s\": %m", + key_index, + db_keydata_path))); + } + + return enc_rel_key_data; +} + + +/* + * Get the principal key from the map file. The caller must hold + * a LW_SHARED or higher lock on files before calling this function. + */ +TDEPrincipalKeyInfo * +pg_tde_get_principal_key_info(Oid dbOid) +{ + int fd = -1; + TDEFileHeader fheader; + TDEPrincipalKeyInfo *principal_key_info = NULL; + bool is_new_file = false; + off_t bytes_read = 0; + char db_map_path[MAXPGPATH] = {0}; + + /* Set the file paths */ + pg_tde_set_db_file_paths(dbOid, db_map_path, NULL); + + /* + * Ensuring that we always open the file in binary mode. The caller must + * specify other flags for reading, writing or creating the file. + */ + fd = pg_tde_open_file_basic(db_map_path, O_RDONLY, true); + + /* The file does not exist. */ + if (fd < 0) + return NULL; + + pg_tde_file_header_read(db_map_path, fd, &fheader, &is_new_file, &bytes_read); + + close(fd); + + /* + * It's not a new file. 
So we can memcpy the principal key info from the + * header + */ + if (!is_new_file) + { + size_t sz = sizeof(TDEPrincipalKeyInfo); + + principal_key_info = (TDEPrincipalKeyInfo *) palloc(sz); + memcpy(principal_key_info, &fheader.principal_key_info, sz); + } + + return principal_key_info; +} + +/* + * Returns TDE key for a given relation. + * First it looks in a cache. If nothing found in the cache, it reads data from + * the tde fork file and populates cache. + */ +RelKeyData * +GetRelationKey(RelFileLocator rel, uint32 key_type, bool no_map_ok) +{ + RelKeyData *key; + + key = pg_tde_get_key_from_cache(rel.relNumber, key_type); + if (key) + return key; + + key = pg_tde_get_key_from_file(&rel, key_type, no_map_ok); + + if (key != NULL) + { + RelKeyData* cached_key = pg_tde_put_key_into_cache(rel.relNumber, key); + pfree(key); + return cached_key; + } + + return NULL; +} + +RelKeyData * +GetSMGRRelationKey(RelFileLocator rel) +{ + return GetRelationKey(rel, TDE_KEY_TYPE_SMGR, true); +} + +RelKeyData * +GetHeapBaiscRelationKey(RelFileLocator rel) +{ + return GetRelationKey(rel, TDE_KEY_TYPE_HEAP_BASIC, false); +} + +RelKeyData * +GetTdeGlobaleRelationKey(RelFileLocator rel) +{ + return GetRelationKey(rel, TDE_KEY_TYPE_GLOBAL, false); +} + +/* + * Returns TDE key for a given relation. + * First it looks in a cache. If nothing found in the cache, it reads data from + * the tde key file and populates cache. + */ +static RelKeyData * +pg_tde_get_key_from_cache(RelFileNumber rel_number, uint32 key_type) +{ + RelKeyCacheRec *rec; + + if (tde_rel_key_cache == NULL) + return NULL; + + for (int i = 0; i < tde_rel_key_cache->len; i++) + { + rec = tde_rel_key_cache->data + i; + if (rec != NULL && + (rel_number == InvalidOid || (rec->rel_number == rel_number)) && + rec->key.internal_key.rel_type & key_type) + { + return &rec->key; + } + } + + return NULL; +} + +/* Add key to cache. See comments on `RelKeyCache`. + * + * TODO: add tests. + */ +RelKeyData * +pg_tde_put_key_into_cache(RelFileNumber rel_num, RelKeyData *key) +{ + static long pageSize = 0; + RelKeyCacheRec *rec; + MemoryContext oldCtx; + + if (pageSize == 0) + { +#ifndef _SC_PAGESIZE + pageSize = getpagesize(); +#else + pageSize = sysconf(_SC_PAGESIZE); +#endif + } + + if (tde_rel_key_cache == NULL) + { +#ifndef FRONTEND + oldCtx = MemoryContextSwitchTo(TopMemoryContext); + tde_rel_key_cache = palloc(sizeof(RelKeyCache)); + tde_rel_key_cache->data = palloc_aligned(pageSize, pageSize, MCXT_ALLOC_ZERO); + MemoryContextSwitchTo(oldCtx); +#else + tde_rel_key_cache = palloc(sizeof(RelKeyCache)); + tde_rel_key_cache->data = aligned_alloc(pageSize, pageSize); + memset(tde_rel_key_cache->data, 0, pageSize); +#endif + + if (mlock(tde_rel_key_cache->data, pageSize) == -1) + elog(ERROR, "could not mlock internal key initial cache page: %m"); + + tde_rel_key_cache->len = 0; + tde_rel_key_cache->cap = (pageSize - 1) / sizeof(RelKeyCacheRec); + } + + /* + * Add another mem page if there is no more room left for another key. We + * allocate `current_memory_size` + 1 page and copy data there. + */ + if (tde_rel_key_cache->len == tde_rel_key_cache->cap) + { + size_t size; + size_t old_size; + RelKeyCacheRec *cachePage; + + old_size = TYPEALIGN(pageSize, (tde_rel_key_cache->cap) * sizeof(RelKeyCacheRec)); + + /* TODO: consider some formula for less allocations when caching a lot + * of objects. But on the other, hand it'll use more memory... 
+     * E.g.:
+     * if (old_size < 0x8000)
+     *     size = old_size * 2;
+     * else
+     *     size = TYPEALIGN(pageSize, old_size + ((old_size + 3*256) >> 2));
+     *
+     */
+        size = old_size + pageSize;
+
+#ifndef FRONTEND
+        oldCtx = MemoryContextSwitchTo(TopMemoryContext);
+        cachePage = palloc_aligned(size, pageSize, MCXT_ALLOC_ZERO);
+        MemoryContextSwitchTo(oldCtx);
+#else
+        cachePage = aligned_alloc(pageSize, size);
+        memset(cachePage, 0, size);
+#endif
+
+        memcpy(cachePage, tde_rel_key_cache->data, old_size);
+        pfree(tde_rel_key_cache->data);
+        tde_rel_key_cache->data = cachePage;
+
+        if (mlock(tde_rel_key_cache->data, size) == -1)
+            elog(WARNING, "could not mlock internal key cache pages: %m");
+
+        tde_rel_key_cache->cap = (size - 1) / sizeof(RelKeyCacheRec);
+    }
+
+    rec = tde_rel_key_cache->data + tde_rel_key_cache->len;
+
+    rec->rel_number = rel_num;
+    /* copy only the key itself; rec->rel_number was set just above */
+    memcpy(&rec->key, key, sizeof(RelKeyData));
+    tde_rel_key_cache->len++;
+
+    return &rec->key;
+}
diff --git a/contrib/pg_tde/src/access/pg_tde_xlog.c b/contrib/pg_tde/src/access/pg_tde_xlog.c
new file mode 100644
index 00000000000..0654d8fbca9
--- /dev/null
+++ b/contrib/pg_tde/src/access/pg_tde_xlog.c
@@ -0,0 +1,158 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_xlog.c
+ *      TDE XLog resource manager
+ *
+ *
+ * IDENTIFICATION
+ *    src/access/pg_tde_xlog.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "pg_tde.h"
+#include "pg_tde_defines.h"
+#include "access/xlog.h"
+#include "access/xlog_internal.h"
+#include "access/xloginsert.h"
+#include "catalog/tde_keyring.h"
+#include "storage/bufmgr.h"
+#include "storage/shmem.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+#include "access/pg_tde_xlog.h"
+#include "encryption/enc_tde.h"
+
+/*
+ * TDE fork XLog
+ */
+void
+tdeheap_rmgr_redo(XLogReaderState *record)
+{
+    uint8       info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
+
+    if (info == XLOG_TDE_ADD_RELATION_KEY)
+    {
+        TDEPrincipalKeyInfo *pk = NULL;
+        XLogRelKey *xlrec = (XLogRelKey *) XLogRecGetData(record);
+
+        if (xlrec->pkInfo.databaseId != 0)
+            pk = &xlrec->pkInfo;
+
+        LWLockAcquire(tde_lwlock_enc_keys(), LW_EXCLUSIVE);
+        pg_tde_write_key_map_entry(&xlrec->rlocator, &xlrec->relKey, pk);
+        LWLockRelease(tde_lwlock_enc_keys());
+    }
+    else if (info == XLOG_TDE_ADD_PRINCIPAL_KEY || info == XLOG_TDE_UPDATE_PRINCIPAL_KEY)
+    {
+        TDEPrincipalKeyInfo *mkey = (TDEPrincipalKeyInfo *) XLogRecGetData(record);
+
+        LWLockAcquire(tde_lwlock_enc_keys(), LW_EXCLUSIVE);
+        if (info == XLOG_TDE_ADD_PRINCIPAL_KEY)
+            save_principal_key_info(mkey);
+        else
+            update_principal_key_info(mkey);
+
+        LWLockRelease(tde_lwlock_enc_keys());
+    }
+    else if (info == XLOG_TDE_EXTENSION_INSTALL_KEY)
+    {
+        XLogExtensionInstall *xlrec = (XLogExtensionInstall *) XLogRecGetData(record);
+
+        extension_install_redo(xlrec);
+    }
+
+    else if (info == XLOG_TDE_ADD_KEY_PROVIDER_KEY)
+    {
+        KeyringProviderXLRecord *xlrec = (KeyringProviderXLRecord *) XLogRecGetData(record);
+
+        redo_key_provider_info(xlrec);
+    }
+
+    else if (info == XLOG_TDE_ROTATE_KEY)
+    {
+        XLogPrincipalKeyRotate *xlrec = (XLogPrincipalKeyRotate *) XLogRecGetData(record);
+
+        LWLockAcquire(tde_lwlock_enc_keys(), LW_EXCLUSIVE);
+        xl_tde_perform_rotate_key(xlrec);
+        LWLockRelease(tde_lwlock_enc_keys());
+    }
+
+    else if (info == XLOG_TDE_FREE_MAP_ENTRY)
+    {
+        off_t       offset = 0;
+        RelFileLocator *xlrec = (RelFileLocator *) XLogRecGetData(record);
+
+        LWLockAcquire(tde_lwlock_enc_keys(), LW_EXCLUSIVE);
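+        /*
+         * offset is 0 here: during redo there is no stored entry offset to
+         * seek to, so pg_tde_process_map_entry() will scan the map file
+         * sequentially from the start.
+         */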
pg_tde_free_key_map_entry(xlrec, MAP_ENTRY_VALID, offset); + LWLockRelease(tde_lwlock_enc_keys()); + } + else + { + elog(PANIC, "pg_tde_redo: unknown op code %u", info); + } +} + +void +tdeheap_rmgr_desc(StringInfo buf, XLogReaderState *record) +{ + uint8 info = XLogRecGetInfo(record) & ~XLR_INFO_MASK; + + if (info == XLOG_TDE_ADD_RELATION_KEY) + { + XLogRelKey *xlrec = (XLogRelKey *) XLogRecGetData(record); + + appendStringInfo(buf, "add tde internal key for relation %u/%u", xlrec->rlocator.dbOid, xlrec->rlocator.relNumber); + } + if (info == XLOG_TDE_ADD_PRINCIPAL_KEY) + { + TDEPrincipalKeyInfo *xlrec = (TDEPrincipalKeyInfo *) XLogRecGetData(record); + + appendStringInfo(buf, "add tde principal key for db %u", xlrec->databaseId); + } + if (info == XLOG_TDE_UPDATE_PRINCIPAL_KEY) + { + TDEPrincipalKeyInfo *xlrec = (TDEPrincipalKeyInfo *) XLogRecGetData(record); + + appendStringInfo(buf, "Alter key provider to:%d for tde principal key for db %u", xlrec->keyringId, xlrec->databaseId); + } + if (info == XLOG_TDE_EXTENSION_INSTALL_KEY) + { + XLogExtensionInstall *xlrec = (XLogExtensionInstall *) XLogRecGetData(record); + + appendStringInfo(buf, "tde extension install for db %u", xlrec->database_id); + } + if (info == XLOG_TDE_ROTATE_KEY) + { + XLogPrincipalKeyRotate *xlrec = (XLogPrincipalKeyRotate *) XLogRecGetData(record); + + appendStringInfo(buf, "rotate principal key for %u", xlrec->databaseId); + } + if (info == XLOG_TDE_ADD_KEY_PROVIDER_KEY) + { + KeyringProviderXLRecord *xlrec = (KeyringProviderXLRecord *) XLogRecGetData(record); + + appendStringInfo(buf, "add key provider %s for %u", xlrec->provider.provider_name, xlrec->database_id); + } +} + +const char * +tdeheap_rmgr_identify(uint8 info) +{ + if ((info & ~XLR_INFO_MASK) == XLOG_TDE_ADD_RELATION_KEY) + return "XLOG_TDE_ADD_RELATION_KEY"; + + if ((info & ~XLR_INFO_MASK) == XLOG_TDE_ADD_PRINCIPAL_KEY) + return "XLOG_TDE_ADD_PRINCIPAL_KEY"; + + if ((info & ~XLR_INFO_MASK) == XLOG_TDE_UPDATE_PRINCIPAL_KEY) + return "XLOG_TDE_UPDATE_PRINCIPAL_KEY"; + + if ((info & ~XLR_INFO_MASK) == XLOG_TDE_EXTENSION_INSTALL_KEY) + return "XLOG_TDE_EXTENSION_INSTALL_KEY"; + + return NULL; +} diff --git a/contrib/pg_tde/src/access/pg_tde_xlog_encrypt.c b/contrib/pg_tde/src/access/pg_tde_xlog_encrypt.c new file mode 100644 index 00000000000..5c40f73e902 --- /dev/null +++ b/contrib/pg_tde/src/access/pg_tde_xlog_encrypt.c @@ -0,0 +1,314 @@ +/*------------------------------------------------------------------------- + * + * pg_tde_xlog_encrypt.c + * Encrypted XLog storage manager + * + * + * IDENTIFICATION + * src/access/pg_tde_xlog_encrypt.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#ifdef PERCONA_EXT +#include "pg_tde.h" +#include "pg_tde_defines.h" +#include "access/xlog.h" +#include "access/xlog_internal.h" +#include "access/xloginsert.h" +#include "storage/bufmgr.h" +#include "storage/shmem.h" +#include "utils/guc.h" +#include "utils/memutils.h" + +#include "access/pg_tde_xlog_encrypt.h" +#include "catalog/tde_global_space.h" +#include "encryption/enc_tde.h" + +#ifdef FRONTEND +#include "pg_tde_fe.h" +#endif + +static XLogPageHeaderData DecryptCurrentPageHrd; + +static void SetXLogPageIVPrefix(TimeLineID tli, XLogRecPtr lsn, char *iv_prefix); + +#ifndef FRONTEND +/* GUC */ +static bool EncryptXLog = false; + +static XLogPageHeaderData EncryptCurrentPageHrd; + +static ssize_t TDEXLogWriteEncryptedPages(int fd, const void *buf, size_t count, off_t offset); +static char 
*TDEXLogEncryptBuf = NULL; +static int XLOGChooseNumBuffers(void); + +void +XLogInitGUC(void) +{ + DefineCustomBoolVariable("pg_tde.wal_encrypt", /* name */ + "Enable/Disable encryption of WAL.", /* short_desc */ + NULL, /* long_desc */ + &EncryptXLog, /* value address */ + false, /* boot value */ + PGC_POSTMASTER, /* context */ + 0, /* flags */ + NULL, /* check_hook */ + NULL, /* assign_hook */ + NULL /* show_hook */ + ); +} + +static int +XLOGChooseNumBuffers(void) +{ + int xbuffers; + + xbuffers = NBuffers / 32; + if (xbuffers > (wal_segment_size / XLOG_BLCKSZ)) + xbuffers = (wal_segment_size / XLOG_BLCKSZ); + if (xbuffers < 8) + xbuffers = 8; + return xbuffers; +} + +/* + * Defines the size of the XLog encryption buffer + */ +Size +TDEXLogEncryptBuffSize(void) +{ + int xbuffers; + + xbuffers = (XLOGbuffers == -1) ? XLOGChooseNumBuffers() : XLOGbuffers; + return (Size) XLOG_BLCKSZ * xbuffers; +} + +/* + * Alloc memory for the encryption buffer. + * + * It should fit XLog buffers (XLOG_BLCKSZ * wal_buffers). We can't + * (re)alloc this buf in tdeheap_xlog_seg_write() based on the write size as + * it's called in the CRIT section, hence no allocations are allowed. + * + * Access to this buffer happens during XLogWrite() call which should + * be called with WALWriteLock held, hence no need in extra locks. + */ +void +TDEXLogShmemInit(void) +{ + bool foundBuf; + + if (EncryptXLog) + { + TDEXLogEncryptBuf = (char *) + TYPEALIGN(PG_IO_ALIGN_SIZE, + ShmemInitStruct("TDE XLog Encryption Buffer", + XLOG_TDE_ENC_BUFF_ALIGNED_SIZE, + &foundBuf)); + + elog(DEBUG1, "pg_tde: initialized encryption buffer %lu bytes", XLOG_TDE_ENC_BUFF_ALIGNED_SIZE); + } +} + +/* + * Encrypt XLog page(s) from the buf and write to the segment file. + */ +static ssize_t +TDEXLogWriteEncryptedPages(int fd, const void *buf, size_t count, off_t offset) +{ + char iv_prefix[16] = {0,}; + size_t data_size = 0; + XLogPageHeader curr_page_hdr = &EncryptCurrentPageHrd; + XLogPageHeader enc_buf_page; + RelKeyData *key = GetTdeGlobaleRelationKey(GLOBAL_SPACE_RLOCATOR(XLOG_TDE_OID)); + off_t enc_off; + size_t page_size = XLOG_BLCKSZ - offset % XLOG_BLCKSZ; + uint32 iv_ctr = 0; + +#ifdef TDE_XLOG_DEBUG + elog(DEBUG1, "write encrypted WAL, pages amount: %d, size: %lu offset: %ld", count / (Size) XLOG_BLCKSZ, count, offset); +#endif + + /* + * Go through the buf page-by-page and encrypt them. We may start or + * finish writing from/in the middle of the page (walsender or + * `full_page_writes = off`). So preserve a page header for the IV init + * data. + * + * TODO: check if walsender restarts form the beggining of the page in + * case of the crash. + */ + for (enc_off = 0; enc_off < count;) + { + data_size = Min(page_size, count); + + if (page_size == XLOG_BLCKSZ) + { + memcpy((char *) curr_page_hdr, (char *) buf + enc_off, SizeOfXLogShortPHD); + + /* + * Need to use a separate buf for the encryption so the page + * remains non-crypted in the XLog buf (XLogInsert has to have + * access to records' lsn). 
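+             *
+             * The destination buffer lives in shared memory and is sized to
+             * match the XLog buffers (XLOG_BLCKSZ * wal_buffers); see
+             * TDEXLogEncryptBuffSize() and TDEXLogShmemInit() above.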
+             */
+            enc_buf_page = (XLogPageHeader) (TDEXLogEncryptBuf + enc_off);
+            memcpy((char *) enc_buf_page, (char *) buf + enc_off, (Size) XLogPageHeaderSize(curr_page_hdr));
+            enc_buf_page->xlp_info |= XLP_ENCRYPTED;
+
+            enc_off += XLogPageHeaderSize(curr_page_hdr);
+            data_size -= XLogPageHeaderSize(curr_page_hdr);
+            /* it's the beginning of the page */
+            iv_ctr = 0;
+        }
+        else
+        {
+            /* we're in the middle of the page */
+            iv_ctr = (offset % XLOG_BLCKSZ) - XLogPageHeaderSize(curr_page_hdr);
+        }
+
+        if (data_size + enc_off > count)
+        {
+            data_size = count - enc_off;
+        }
+
+        /*
+         * The page is zeroed (no data), so there is no sense in encrypting
+         * it. This may happen when a base backup or something else requests
+         * an XLOG SWITCH and some pages in the XLog buffer are still unused.
+         */
+        if (curr_page_hdr->xlp_magic == 0)
+        {
+            /* ensure the whole page is {0} */
+            Assert((*((char *) buf + enc_off) == 0) &&
+                   memcmp((char *) buf + enc_off, (char *) buf + enc_off + 1, data_size - 1) == 0);
+
+            memcpy((char *) enc_buf_page, (char *) buf + enc_off, data_size);
+        }
+        else
+        {
+            SetXLogPageIVPrefix(curr_page_hdr->xlp_tli, curr_page_hdr->xlp_pageaddr, iv_prefix);
+            PG_TDE_ENCRYPT_DATA(iv_prefix, iv_ctr, (char *) buf + enc_off, data_size,
+                                TDEXLogEncryptBuf + enc_off, key);
+        }
+
+        page_size = XLOG_BLCKSZ;
+        enc_off += data_size;
+    }
+
+    return pg_pwrite(fd, TDEXLogEncryptBuf, count, offset);
+}
+#endif                            /* !FRONTEND */
+
+void
+TDEXLogSmgrInit(void)
+{
+    SetXLogSmgr(&tde_xlog_smgr);
+}
+
+ssize_t
+tdeheap_xlog_seg_write(int fd, const void *buf, size_t count, off_t offset)
+{
+#ifndef FRONTEND
+    if (EncryptXLog)
+        return TDEXLogWriteEncryptedPages(fd, buf, count, offset);
+    else
+#endif
+        return pg_pwrite(fd, buf, count, offset);
+}
+
+/*
+ * Read the XLog pages from the segment file and decrypt them if needed.
+ */
+ssize_t
+tdeheap_xlog_seg_read(int fd, void *buf, size_t count, off_t offset)
+{
+    ssize_t     readsz;
+    char        iv_prefix[16] = {0,};
+    size_t      data_size = 0;
+    XLogPageHeader curr_page_hdr = &DecryptCurrentPageHrd;
+    RelKeyData *key = GetTdeGlobaleRelationKey(GLOBAL_SPACE_RLOCATOR(XLOG_TDE_OID));
+    size_t      page_size = XLOG_BLCKSZ - offset % XLOG_BLCKSZ;
+    off_t       dec_off;
+    uint32      iv_ctr = 0;
+
+#ifdef TDE_XLOG_DEBUG
+    elog(DEBUG1, "read from a WAL segment, pages amount: %d, size: %lu offset: %ld", count / (Size) XLOG_BLCKSZ, count, offset);
+#endif
+
+    readsz = pg_pread(fd, buf, count, offset);
+
+    /*
+     * Read the buf page by page and decrypt the encrypted pages. We may
+     * start or finish reading from/in the middle of a page (walreceiver);
+     * in such a case we should preserve the last read page header for the
+     * IV data and the encryption state.
+     *
+     * TODO: check if walsender/receiver restarts from the beginning of the
+     * page in case of a crash.
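+     *
+     * The per-page IV is derived deterministically: SetXLogPageIVPrefix()
+     * packs the TimeLineID and the page address (xlp_tli + xlp_pageaddr)
+     * into the IV prefix, and iv_ctr is the byte position within the page
+     * past its header. For example, for a read starting mid-page:
+     *
+     *     iv_ctr = (offset % XLOG_BLCKSZ) - XLogPageHeaderSize(curr_page_hdr);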
+ */ + for (dec_off = 0; dec_off < readsz;) + { + data_size = Min(page_size, readsz); + + if (page_size == XLOG_BLCKSZ) + { + memcpy((char *) curr_page_hdr, (char *) buf + dec_off, SizeOfXLogShortPHD); + + /* set the flag to "not encrypted" for the walreceiver */ + ((XLogPageHeader) ((char *) buf + dec_off))->xlp_info &= ~XLP_ENCRYPTED; + + Assert(curr_page_hdr->xlp_magic == XLOG_PAGE_MAGIC || curr_page_hdr->xlp_magic == 0); + dec_off += XLogPageHeaderSize(curr_page_hdr); + data_size -= XLogPageHeaderSize(curr_page_hdr); + /* it's a beginning of the page */ + iv_ctr = 0; + } + else + { + /* we're in the middle of the page */ + iv_ctr = (offset % XLOG_BLCKSZ) - XLogPageHeaderSize(curr_page_hdr); + } + + if ((data_size + dec_off) > readsz) + { + data_size = readsz - dec_off; + } + + if (curr_page_hdr->xlp_info & XLP_ENCRYPTED) + { + SetXLogPageIVPrefix(curr_page_hdr->xlp_tli, curr_page_hdr->xlp_pageaddr, iv_prefix); + PG_TDE_DECRYPT_DATA( + iv_prefix, iv_ctr, + (char *) buf + dec_off, data_size, (char *) buf + dec_off, key); + } + + page_size = XLOG_BLCKSZ; + dec_off += data_size; + } + + return readsz; +} + +/* IV: TLI(uint32) + XLogRecPtr(uint64)*/ +static void +SetXLogPageIVPrefix(TimeLineID tli, XLogRecPtr lsn, char *iv_prefix) +{ + iv_prefix[0] = (tli >> 24); + iv_prefix[1] = ((tli >> 16) & 0xFF); + iv_prefix[2] = ((tli >> 8) & 0xFF); + iv_prefix[3] = (tli & 0xFF); + + iv_prefix[4] = (lsn >> 56); + iv_prefix[5] = ((lsn >> 48) & 0xFF); + iv_prefix[6] = ((lsn >> 40) & 0xFF); + iv_prefix[7] = ((lsn >> 32) & 0xFF); + iv_prefix[8] = ((lsn >> 24) & 0xFF); + iv_prefix[9] = ((lsn >> 16) & 0xFF); + iv_prefix[10] = ((lsn >> 8) & 0xFF); + iv_prefix[11] = (lsn & 0xFF); +} + +#endif /* PERCONA_EXT */ diff --git a/contrib/pg_tde/src/catalog/tde_global_space.c b/contrib/pg_tde/src/catalog/tde_global_space.c new file mode 100644 index 00000000000..f35605cb27f --- /dev/null +++ b/contrib/pg_tde/src/catalog/tde_global_space.c @@ -0,0 +1,207 @@ +/*------------------------------------------------------------------------- + * + * tde_global_space.c + * Global catalog key management + * + * + * IDENTIFICATION + * src/catalog/tde_global_space.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#ifdef PERCONA_EXT + +#include "utils/memutils.h" + +#include "access/pg_tde_tdemap.h" +#include "catalog/tde_global_space.h" +#include "catalog/tde_keyring.h" +#include "common/pg_tde_utils.h" + +#ifdef FRONTEND +#include "pg_tde_fe.h" +#endif + +#include +#include +#include +#include + +#define PRINCIPAL_KEY_DEFAULT_NAME "tde-global-catalog-key" +#define KEYRING_DEFAULT_NAME "default_global_tablespace_keyring" +#define KEYRING_DEFAULT_FILE_NAME "pg_tde_default_keyring_CHANGE_AND_REMOVE_IT" + +#define DefaultKeyProvider GetKeyProviderByName(KEYRING_DEFAULT_NAME, \ + GLOBAL_DATA_TDE_OID) + +#ifndef FRONTEND +static void init_keys(void); +static void init_default_keyring(void); +static TDEPrincipalKey *create_principal_key(const char *key_name, + GenericKeyring *keyring, Oid dbOid); +#endif /* !FRONTEND */ + + +void +TDEInitGlobalKeys(const char *dir) +{ +#ifndef FRONTEND + char db_map_path[MAXPGPATH] = {0}; + + pg_tde_set_db_file_paths(GLOBAL_DATA_TDE_OID, db_map_path, NULL); + if (access(db_map_path, F_OK) == -1) + { + init_default_keyring(); + init_keys(); + } + else +#endif /* !FRONTEND */ + { + RelKeyData *ikey; + + if (dir != NULL) + pg_tde_set_data_dir(dir); + + ikey = pg_tde_get_key_from_file(&GLOBAL_SPACE_RLOCATOR(XLOG_TDE_OID), TDE_KEY_TYPE_GLOBAL, 
false);
+
+        /*
+         * The internal key should be in the TopMemoryContext because of SSL
+         * contexts. This context is being initialized by OpenSSL with the
+         * pointer to the encryption context which is valid only for the
+         * current backend. So new backends have to inherit a cached key with
+         * a NULL SSL context, and any changes to it have to remain local to
+         * the backend. (see
+         * https://github.com/percona-Lab/pg_tde/pull/214#discussion_r1648998317)
+         */
+        pg_tde_put_key_into_cache(XLOG_TDE_OID, ikey);
+    }
+}
+
+#ifndef FRONTEND
+
+static void
+init_default_keyring(void)
+{
+    if (GetAllKeyringProviders(GLOBAL_DATA_TDE_OID) == NIL)
+    {
+        char        path[MAXPGPATH] = {0};
+        static KeyringProvideRecord provider =
+        {
+            .provider_name = KEYRING_DEFAULT_NAME,
+            .provider_type = FILE_KEY_PROVIDER,
+        };
+
+        char       *data_path = make_absolute_path(PG_TDE_DATA_DIR);
+
+        join_path_components(path, data_path, KEYRING_DEFAULT_FILE_NAME);
+        free(data_path);
+
+        snprintf(provider.options, MAX_KEYRING_OPTION_LEN,
+                 "{"
+                 "\"type\": \"file\","
+                 "\"path\": \"%s\""
+                 "}", path
+            );
+
+        pg_tde_init_data_dir();
+
+        /*
+         * TODO: should we remove it automatically on
+         * pg_tde_rotate_principal_key() ?
+         */
+        save_new_key_provider_info(&provider, GLOBAL_DATA_TDE_OID, false);
+        elog(INFO,
+             "default keyring has been created for the global tablespace (WAL)."
+             " Change it with pg_tde_add_key_provider_* and run pg_tde_rotate_principal_key."
+            );
+    }
+}
+
+/*
+ * Create and store global space keys (principal and internal) and cache the
+ * internal key.
+ *
+ * Since we always keep an internal key in memory for the global tablespace
+ * and read it from disk only once, during server start, we need no cache for
+ * the principal key.
+ *
+ * This function has to be run during the cluster start only, so no locks are
+ * needed.
+ */
+static void
+init_keys(void)
+{
+    InternalKey int_key;
+    RelKeyData *rel_key_data;
+    RelKeyData *enc_rel_key_data;
+    RelFileLocator *rlocator;
+    TDEPrincipalKey *mkey;
+
+    mkey = create_principal_key(PRINCIPAL_KEY_DEFAULT_NAME,
+                                DefaultKeyProvider,
+                                GLOBAL_DATA_TDE_OID);
+
+    memset(&int_key, 0, sizeof(InternalKey));
+
+    int_key.rel_type = TDE_KEY_TYPE_GLOBAL;
+
+    /* Create and store an internal key for XLog */
+    if (!RAND_bytes(int_key.key, INTERNAL_KEY_LEN))
+    {
+        ereport(FATAL,
+                (errcode(ERRCODE_INTERNAL_ERROR),
+                 errmsg("could not generate internal key for \"WAL\": %s",
+                        ERR_error_string(ERR_get_error(), NULL))));
+    }
+
+    rlocator = &GLOBAL_SPACE_RLOCATOR(XLOG_TDE_OID);
+    rel_key_data = tde_create_rel_key(rlocator->relNumber, &int_key, &mkey->keyInfo);
+    enc_rel_key_data = tde_encrypt_rel_key(mkey, rel_key_data, rlocator->dbOid);
+    pg_tde_write_key_map_entry(rlocator, enc_rel_key_data, &mkey->keyInfo);
+    pfree(enc_rel_key_data);
+    pfree(mkey);
+}
+
+/*
+ * Substantially simplified version of set_principal_key_with_keyring(), used
+ * during recovery (server start):
+ * - we can't insert XLog records;
+ * - there is no need for locks;
+ * - we run this func only once, during the first server start, and always
+ *   create a new key with the default keyring, hence there is no need to
+ *   try to load the key first.
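+ *
+ * For illustration (a sketch, assuming DEFAULT_PRINCIPAL_KEY_VERSION is 1):
+ * the versioned key name stored in the keyring would be
+ * "tde-global-catalog-key_1", per the "%s_%d" format used below.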
+ */ +static TDEPrincipalKey * +create_principal_key(const char *key_name, GenericKeyring *keyring, Oid dbOid) +{ + TDEPrincipalKey *principalKey; + keyInfo *keyInfo = NULL; + + principalKey = palloc(sizeof(TDEPrincipalKey)); + principalKey->keyInfo.databaseId = dbOid; + principalKey->keyInfo.keyId.version = DEFAULT_PRINCIPAL_KEY_VERSION; + principalKey->keyInfo.keyringId = keyring->key_id; + strncpy(principalKey->keyInfo.keyId.name, key_name, TDE_KEY_NAME_LEN); + snprintf(principalKey->keyInfo.keyId.versioned_name, TDE_KEY_NAME_LEN, + "%s_%d", principalKey->keyInfo.keyId.name, principalKey->keyInfo.keyId.version); + gettimeofday(&principalKey->keyInfo.creationTime, NULL); + + keyInfo = KeyringGenerateNewKeyAndStore(keyring, principalKey->keyInfo.keyId.versioned_name, INTERNAL_KEY_LEN, false); + + if (keyInfo == NULL) + { + ereport(ERROR, + (errmsg("failed to generate principal key"))); + } + + principalKey->keyLength = keyInfo->data.len; + + memcpy(principalKey->keyData, keyInfo->data.data, keyInfo->data.len); + + return principalKey; +} +#endif /* FRONTEND */ + +#endif /* PERCONA_EXT */ diff --git a/contrib/pg_tde/src/catalog/tde_keyring.c b/contrib/pg_tde/src/catalog/tde_keyring.c new file mode 100644 index 00000000000..9ee422fbfe3 --- /dev/null +++ b/contrib/pg_tde/src/catalog/tde_keyring.c @@ -0,0 +1,721 @@ +/*------------------------------------------------------------------------- + * + * tde_keyring.c + * Deals with the tde keyring configuration + * routines. + * + * IDENTIFICATION + * contrib/pg_tde/src/catalog/tde_keyring.c + * + *------------------------------------------------------------------------- + */ +#include "postgres.h" +#include "access/xlog.h" +#include "access/xloginsert.h" +#include "access/pg_tde_xlog.h" +#include "catalog/tde_global_space.h" +#include "catalog/tde_keyring.h" +#include "catalog/tde_principal_key.h" +#include "access/skey.h" +#include "utils/lsyscache.h" +#include "utils/snapmgr.h" +#include "utils/fmgroids.h" +#include "common/pg_tde_utils.h" +#include "miscadmin.h" +#include "unistd.h" +#include "utils/builtins.h" +#include "pg_tde.h" + +#ifndef FRONTEND +#include "access/heapam.h" +#include "common/pg_tde_shmem.h" +#include "funcapi.h" +#include "access/relscan.h" +#include "access/relation.h" +#include "catalog/namespace.h" +#include "executor/spi.h" +#else +#include "fe_utils/simple_list.h" +#include "pg_tde_fe.h" +#endif /* !FRONTEND */ + +typedef enum ProviderScanType +{ + PROVIDER_SCAN_BY_NAME, + PROVIDER_SCAN_BY_ID, + PROVIDER_SCAN_BY_TYPE, + PROVIDER_SCAN_ALL +} ProviderScanType; + +#define PG_TDE_KEYRING_FILENAME "pg_tde_%d_keyring" + +static FileKeyring *load_file_keyring_provider_options(char *keyring_options); +static GenericKeyring *load_keyring_provider_options(ProviderType provider_type, char *keyring_options); +static VaultV2Keyring *load_vaultV2_keyring_provider_options(char *keyring_options); +static KmipKeyring *load_kmip_keyring_provider_options(char *keyring_options); +static void debug_print_kerying(GenericKeyring *keyring); +static GenericKeyring *load_keyring_provider_from_record(KeyringProvideRecord *provider); +static inline void get_keyring_infofile_path(char *resPath, Oid dbOid); +static bool fetch_next_key_provider(int fd, off_t *curr_pos, KeyringProvideRecord *provider); + +#ifdef FRONTEND + +static SimplePtrList *scan_key_provider_file(ProviderScanType scanType, void *scanKey, Oid dbOid); +static void simple_list_free(SimplePtrList *list); + +#else + +static List *scan_key_provider_file(ProviderScanType scanType, 
void *scanKey, Oid dbOid); + +PG_FUNCTION_INFO_V1(pg_tde_add_key_provider_internal); +Datum pg_tde_add_key_provider_internal(PG_FUNCTION_ARGS); + +PG_FUNCTION_INFO_V1(pg_tde_list_all_key_providers); +Datum pg_tde_list_all_key_providers(PG_FUNCTION_ARGS); + +#define PG_TDE_LIST_PROVIDERS_COLS 4 + +static void key_provider_startup_cleanup(int tde_tbl_count, XLogExtensionInstall *ext_info, bool redo, void *arg); +static const char *get_keyring_provider_typename(ProviderType p_type); +static uint32 write_key_provider_info(KeyringProvideRecord *provider, + Oid database_id, off_t position, + bool error_if_exists, bool write_xlog); + +static Size initialize_shared_state(void *start_address); +static Size required_shared_mem_size(void); + +typedef struct TdeKeyProviderInfoSharedState +{ + LWLockPadded *Locks; +} TdeKeyProviderInfoSharedState; + +TdeKeyProviderInfoSharedState *sharedPrincipalKeyState = NULL; /* Lives in shared state */ + +static const TDEShmemSetupRoutine key_provider_info_shmem_routine = { + .init_shared_state = initialize_shared_state, + .init_dsa_area_objects = NULL, + .required_shared_mem_size = required_shared_mem_size, + .shmem_kill = NULL +}; + +static Size +required_shared_mem_size(void) +{ + return MAXALIGN(sizeof(TdeKeyProviderInfoSharedState)); +} + +static Size +initialize_shared_state(void *start_address) +{ + sharedPrincipalKeyState = (TdeKeyProviderInfoSharedState *) start_address; + sharedPrincipalKeyState->Locks = GetNamedLWLockTranche(TDE_TRANCHE_NAME); + + return sizeof(TdeKeyProviderInfoSharedState); +} + +static inline LWLock * +tde_provider_info_lock(void) +{ + Assert(sharedPrincipalKeyState); + return &sharedPrincipalKeyState->Locks[TDE_LWLOCK_PI_FILES].lock; +} + +void +InitializeKeyProviderInfo(void) +{ + ereport(LOG, (errmsg("initializing TDE key provider info"))); + RegisterShmemRequest(&key_provider_info_shmem_routine); + on_ext_install(key_provider_startup_cleanup, NULL); +} +static void +key_provider_startup_cleanup(int tde_tbl_count, XLogExtensionInstall *ext_info, bool redo, void *arg) +{ + + if (tde_tbl_count > 0) + { + ereport(WARNING, + (errmsg("failed to perform initialization. 
database already has %d TDE tables", tde_tbl_count))); + return; + } + cleanup_key_provider_info(ext_info->database_id); +} + +ProviderType +get_keyring_provider_from_typename(char *provider_type) +{ + if (provider_type == NULL) + return UNKNOWN_KEY_PROVIDER; + + if (strcmp(FILE_KEYRING_TYPE, provider_type) == 0) + return FILE_KEY_PROVIDER; + if (strcmp(VAULTV2_KEYRING_TYPE, provider_type) == 0) + return VAULT_V2_KEY_PROVIDER; + if (strcmp(KMIP_KEYRING_TYPE, provider_type) == 0) + return KMIP_KEY_PROVIDER; + return UNKNOWN_KEY_PROVIDER; +} + +static const char * +get_keyring_provider_typename(ProviderType p_type) +{ + switch (p_type) + { + case FILE_KEY_PROVIDER: + return FILE_KEYRING_TYPE; + case VAULT_V2_KEY_PROVIDER: + return VAULTV2_KEYRING_TYPE; + case KMIP_KEY_PROVIDER: + return KMIP_KEYRING_TYPE; + default: + break; + } + return NULL; +} + +List * +GetAllKeyringProviders(Oid dbOid) +{ + return scan_key_provider_file(PROVIDER_SCAN_ALL, NULL, dbOid); +} + +GenericKeyring * +GetKeyProviderByName(const char *provider_name, Oid dbOid) +{ + GenericKeyring *keyring = NULL; + List *providers = scan_key_provider_file(PROVIDER_SCAN_BY_NAME, (void *) provider_name, dbOid); + + if (providers != NIL) + { + keyring = (GenericKeyring *) linitial(providers); + list_free(providers); + } + else + { + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("key provider \"%s\" does not exists", provider_name), + errhint("Use pg_tde_add_key_provider interface to create the key provider"))); + } + return keyring; +} + + +static uint32 +write_key_provider_info(KeyringProvideRecord *provider, Oid database_id, + off_t position, bool error_if_exists, bool write_xlog) +{ + off_t bytes_written = 0; + off_t curr_pos = 0; + int fd; + int max_provider_id = 0; + char kp_info_path[MAXPGPATH] = {0}; + KeyringProvideRecord existing_provider; + + Assert(provider != NULL); + + get_keyring_infofile_path(kp_info_path, database_id); + + LWLockAcquire(tde_provider_info_lock(), LW_EXCLUSIVE); + + fd = BasicOpenFile(kp_info_path, O_CREAT | O_RDWR | PG_BINARY); + if (fd < 0) + { + LWLockRelease(tde_provider_info_lock()); + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not open tde file \"%s\": %m", kp_info_path))); + } + if (position == -1) + { + /* + * we also need to verify the name conflict and generate the next + * provider ID + */ + while (fetch_next_key_provider(fd, &curr_pos, &existing_provider)) + { + if (strcmp(existing_provider.provider_name, provider->provider_name) == 0) + { + close(fd); + LWLockRelease(tde_provider_info_lock()); + ereport(error_if_exists ? ERROR : DEBUG1, + (errcode(ERRCODE_DUPLICATE_OBJECT), + errmsg("key provider \"%s\" already exists", provider->provider_name))); + + if (!error_if_exists) + { + provider->provider_id = existing_provider.provider_id; + return provider->provider_id; + } + } + if (max_provider_id < existing_provider.provider_id) + max_provider_id = existing_provider.provider_id; + } + provider->provider_id = max_provider_id + 1; + curr_pos = lseek(fd, 0, SEEK_END); + + /* + * emit the xlog here. So that we can handle partial file write errors + * but cannot make new WAL entries during recovery. 
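+         * On replay, redo_key_provider_info() passes the recorded
+         * offset_in_file back to write_key_provider_info(), which seeks to
+         * that position and writes the record there instead of appending.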
+ */ + if (write_xlog) + { + KeyringProviderXLRecord xlrec; + + xlrec.database_id = database_id; + xlrec.offset_in_file = curr_pos; + memcpy(&xlrec.provider, provider, sizeof(KeyringProvideRecord)); + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, sizeof(KeyringProviderXLRecord)); + XLogInsert(RM_TDERMGR_ID, XLOG_TDE_ADD_KEY_PROVIDER_KEY); + } + } + else + { + /* + * we are performing redo, just go to the position received from the + * xlog and write the record there. No need to verify the name + * conflict and generate the provider ID + */ + curr_pos = lseek(fd, position, SEEK_SET); + } + + /* + * All good, Just add a new provider + */ + bytes_written = pg_pwrite(fd, provider, sizeof(KeyringProvideRecord), curr_pos); + if (bytes_written != sizeof(KeyringProvideRecord)) + { + close(fd); + LWLockRelease(tde_provider_info_lock()); + ereport(ERROR, + (errcode_for_file_access(), + errmsg("key provider info file \"%s\" can't be written: %m", + kp_info_path))); + } + if (pg_fsync(fd) != 0) + { + close(fd); + LWLockRelease(tde_provider_info_lock()); + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", + kp_info_path))); + } + close(fd); + LWLockRelease(tde_provider_info_lock()); + return provider->provider_id; +} + +/* + * Save the key provider info to the file + */ +uint32 +save_new_key_provider_info(KeyringProvideRecord* provider, Oid databaseId, bool write_xlog) +{ + return write_key_provider_info(provider, databaseId, -1, true, write_xlog); +} + +uint32 +redo_key_provider_info(KeyringProviderXLRecord *xlrec) +{ + return write_key_provider_info(&xlrec->provider, xlrec->database_id, xlrec->offset_in_file, true, false); +} + +void +cleanup_key_provider_info(Oid databaseId) +{ + /* Remove the key provider info file */ + char kp_info_path[MAXPGPATH] = {0}; + + get_keyring_infofile_path(kp_info_path, databaseId); + PathNameDeleteTemporaryFile(kp_info_path, false); +} + +Datum +pg_tde_add_key_provider_internal(PG_FUNCTION_ARGS) +{ + char *provider_type = text_to_cstring(PG_GETARG_TEXT_PP(0)); + char *provider_name = text_to_cstring(PG_GETARG_TEXT_PP(1)); + char *options = text_to_cstring(PG_GETARG_TEXT_PP(2)); + bool is_global = PG_GETARG_BOOL(3); + KeyringProvideRecord provider; + Oid dbOid = is_global ? 
GLOBAL_DATA_TDE_OID : MyDatabaseId; + + strncpy(provider.options, options, sizeof(provider.options)); + strncpy(provider.provider_name, provider_name, sizeof(provider.provider_name)); + provider.provider_type = get_keyring_provider_from_typename(provider_type); + save_new_key_provider_info(&provider, dbOid, true); + + PG_RETURN_INT32(provider.provider_id); +} + +Datum +pg_tde_list_all_key_providers(PG_FUNCTION_ARGS) +{ + List *all_providers = GetAllKeyringProviders(MyDatabaseId); + ListCell *lc; + Tuplestorestate *tupstore; + TupleDesc tupdesc; + MemoryContext per_query_ctx; + MemoryContext oldcontext; + ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo; + + /* check to see if caller supports us returning a tuplestore */ + if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo)) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("pg_tde_list_all_key_providers: set-valued function called in context that cannot accept a set"))); + if (!(rsinfo->allowedModes & SFRM_Materialize)) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("pg_tde_list_all_key_providers: materialize mode required, but it is not allowed in this context"))); + + /* Switch into long-lived context to construct returned data structures */ + per_query_ctx = rsinfo->econtext->ecxt_per_query_memory; + oldcontext = MemoryContextSwitchTo(per_query_ctx); + + /* Build a tuple descriptor for our result type */ + if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) + elog(ERROR, "pg_tde_list_all_key_providers: return type must be a row type"); + + tupstore = tuplestore_begin_heap(true, false, work_mem); + rsinfo->returnMode = SFRM_Materialize; + rsinfo->setResult = tupstore; + rsinfo->setDesc = tupdesc; + + MemoryContextSwitchTo(oldcontext); + + foreach(lc, all_providers) + { + Datum values[PG_TDE_LIST_PROVIDERS_COLS] = {0}; + bool nulls[PG_TDE_LIST_PROVIDERS_COLS] = {0}; + GenericKeyring *keyring = (GenericKeyring *) lfirst(lc); + int i = 0; + + values[i++] = Int32GetDatum(keyring->key_id); + values[i++] = CStringGetTextDatum(keyring->provider_name); + values[i++] = CStringGetTextDatum(get_keyring_provider_typename(keyring->type)); + values[i++] = CStringGetTextDatum(keyring->options); + tuplestore_putvalues(tupstore, tupdesc, values, nulls); + + debug_print_kerying(keyring); + } + list_free_deep(all_providers); + return (Datum) 0; +} + +GenericKeyring * +GetKeyProviderByID(int provider_id, Oid dbOid) +{ + GenericKeyring *keyring = NULL; + List *providers = scan_key_provider_file(PROVIDER_SCAN_BY_ID, &provider_id, dbOid); + + if (providers != NIL) + { + keyring = (GenericKeyring *) linitial(providers); + list_free(providers); + } + return keyring; +} + +#endif /* !FRONTEND */ + +#ifdef FRONTEND +GenericKeyring * +GetKeyProviderByID(int provider_id, Oid dbOid) +{ + GenericKeyring *keyring = NULL; + SimplePtrList *providers = scan_key_provider_file(PROVIDER_SCAN_BY_ID, &provider_id, dbOid); + + if (providers != NULL) + { + keyring = (GenericKeyring *) providers->head->ptr; + simple_list_free(providers); + } + return keyring; +} + +static void +simple_list_free(SimplePtrList *list) +{ + SimplePtrListCell *cell; + + cell = list->head; + while (cell != NULL) + { + SimplePtrListCell *next; + + next = cell->next; + pfree(cell); + cell = next; + } +} +#endif /* FRONTEND */ + +/* + * Scan the key provider info file and can also apply filter based on scanType + */ +#ifndef FRONTEND +static List * +#else +static SimplePtrList * +#endif +scan_key_provider_file(ProviderScanType 
scanType, void *scanKey, Oid dbOid)
+{
+	off_t		curr_pos = 0;
+	int			fd;
+	char		kp_info_path[MAXPGPATH] = {0};
+	KeyringProvideRecord provider;
+#ifndef FRONTEND
+	List	   *providers_list = NIL;
+#else
+	SimplePtrList *providers_list = NULL;
+#endif
+
+	if (scanType != PROVIDER_SCAN_ALL)
+		Assert(scanKey != NULL);
+
+	get_keyring_infofile_path(kp_info_path, dbOid);
+
+	LWLockAcquire(tde_provider_info_lock(), LW_SHARED);
+
+	fd = BasicOpenFile(kp_info_path, PG_BINARY);
+	if (fd < 0)
+	{
+		LWLockRelease(tde_provider_info_lock());
+		ereport(DEBUG2,
+				(errcode_for_file_access(),
+				 errmsg("could not open tde file \"%s\": %m", kp_info_path)));
+		return providers_list;
+	}
+	while (fetch_next_key_provider(fd, &curr_pos, &provider))
+	{
+		bool		match = false;
+
+		ereport(DEBUG2,
+				(errmsg("read key provider ID=%d %s", provider.provider_id, provider.provider_name)));
+
+		if (scanType == PROVIDER_SCAN_BY_NAME)
+		{
+			if (strcasecmp(provider.provider_name, (char *) scanKey) == 0)
+				match = true;
+		}
+		else if (scanType == PROVIDER_SCAN_BY_ID)
+		{
+			if (provider.provider_id == *(int *) scanKey)
+				match = true;
+		}
+		else if (scanType == PROVIDER_SCAN_BY_TYPE)
+		{
+			if (provider.provider_type == *(ProviderType *) scanKey)
+				match = true;
+		}
+		else if (scanType == PROVIDER_SCAN_ALL)
+			match = true;
+
+		if (match)
+		{
+			GenericKeyring *keyring = load_keyring_provider_from_record(&provider);
+
+			if (keyring)
+			{
+#ifndef FRONTEND
+				providers_list = lappend(providers_list, keyring);
+#else
+				/* allocate and zero the list struct itself, not a pointer's worth */
+				if (providers_list == NULL)
+					providers_list = palloc0(sizeof(SimplePtrList));
+				simple_ptr_list_append(providers_list, keyring);
+#endif
+			}
+		}
+	}
+	close(fd);
+	LWLockRelease(tde_provider_info_lock());
+	return providers_list;
+}
+
+static GenericKeyring *
+load_keyring_provider_from_record(KeyringProvideRecord *provider)
+{
+	GenericKeyring *keyring = NULL;
+
+	keyring = load_keyring_provider_options(provider->provider_type, provider->options);
+	if (keyring)
+	{
+		keyring->key_id = provider->provider_id;
+		strncpy(keyring->provider_name, provider->provider_name, sizeof(keyring->provider_name));
+		keyring->type = provider->provider_type;
+		strncpy(keyring->options, provider->options, sizeof(keyring->options));
+		debug_print_kerying(keyring);
+	}
+	return keyring;
+}
+
+
+static GenericKeyring *
+load_keyring_provider_options(ProviderType provider_type, char *keyring_options)
+{
+	switch (provider_type)
+	{
+		case FILE_KEY_PROVIDER:
+			return (GenericKeyring *) load_file_keyring_provider_options(keyring_options);
+			break;
+		case VAULT_V2_KEY_PROVIDER:
+			return (GenericKeyring *) load_vaultV2_keyring_provider_options(keyring_options);
+			break;
+		case KMIP_KEY_PROVIDER:
+			return (GenericKeyring *) load_kmip_keyring_provider_options(keyring_options);
+			break;
+		default:
+			break;
+	}
+	return NULL;
+}
+
+static FileKeyring *
+load_file_keyring_provider_options(char *keyring_options)
+{
+	FileKeyring *file_keyring = palloc0(sizeof(FileKeyring));
+
+	file_keyring->keyring.type = FILE_KEY_PROVIDER;
+
+	if (!ParseKeyringJSONOptions(FILE_KEY_PROVIDER, file_keyring,
+								 keyring_options, strlen(keyring_options)))
+	{
+		return NULL;
+	}
+
+	if (strlen(file_keyring->file_name) == 0)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("file path is missing in the keyring options")));
+		return NULL;
+	}
+
+	return file_keyring;
+}
+
+static VaultV2Keyring *
+load_vaultV2_keyring_provider_options(char *keyring_options)
+{
+	VaultV2Keyring *vaultV2_keyring = palloc0(sizeof(VaultV2Keyring));
+
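+	/*
+	 * The options JSON for a Vault provider is expected to carry the
+	 * connection details as flat string fields, e.g. (illustrative values
+	 * only):
+	 *
+	 *   {"token" : "s.XXXX", "url" : "http://127.0.0.1:8200",
+	 *    "mountPath" : "secret", "caPath" : "/tmp/vault-ca.pem"}
+	 *
+	 * ParseKeyringJSONOptions() below copies these into the VaultV2Keyring
+	 * fields; token, url and mountPath are then treated as mandatory.
+	 */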
vaultV2_keyring->keyring.type = VAULT_V2_KEY_PROVIDER; + + if (!ParseKeyringJSONOptions(VAULT_V2_KEY_PROVIDER, vaultV2_keyring, + keyring_options, strlen(keyring_options))) + { + return NULL; + } + + if (strlen(vaultV2_keyring->vault_token) == 0 || + strlen(vaultV2_keyring->vault_url) == 0 || + strlen(vaultV2_keyring->vault_mount_path) == 0) + { + ereport(WARNING, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("missing in the keyring options:%s%s%s", + *(vaultV2_keyring->vault_token) ? "" : " token", + *(vaultV2_keyring->vault_url) ? "" : " url", + *(vaultV2_keyring->vault_mount_path) ? "" : " mountPath"))); + return NULL; + } + + return vaultV2_keyring; +} + +static KmipKeyring * +load_kmip_keyring_provider_options(char *keyring_options) +{ + KmipKeyring *kmip_keyring = palloc0(sizeof(KmipKeyring)); + + kmip_keyring->keyring.type = KMIP_KEY_PROVIDER; + + if (!ParseKeyringJSONOptions(KMIP_KEY_PROVIDER, kmip_keyring, + keyring_options, strlen(keyring_options))) + { + return NULL; + } + + if (strlen(kmip_keyring->kmip_host) == 0 || + strlen(kmip_keyring->kmip_ca_path) == 0 || + strlen(kmip_keyring->kmip_cert_path) == 0) + { + ereport(WARNING, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("missing in the keyring options:%s%s%s", + *(kmip_keyring->kmip_host) ? "" : " kmip_host", + *(kmip_keyring->kmip_ca_path) ? "" : " kmip_ca_path", + *(kmip_keyring->kmip_cert_path) ? "" : " kmip_cert_path"))); + return NULL; + } + + return kmip_keyring; +} + +static void +debug_print_kerying(GenericKeyring *keyring) +{ + int debug_level = DEBUG2; + + elog(debug_level, "Keyring type: %d", keyring->type); + elog(debug_level, "Keyring name: %s", keyring->provider_name); + elog(debug_level, "Keyring id: %d", keyring->key_id); + switch (keyring->type) + { + case FILE_KEY_PROVIDER: + elog(debug_level, "File Keyring Path: %s", ((FileKeyring *) keyring)->file_name); + break; + case VAULT_V2_KEY_PROVIDER: + elog(debug_level, "Vault Keyring Token: %s", ((VaultV2Keyring *) keyring)->vault_token); + elog(debug_level, "Vault Keyring URL: %s", ((VaultV2Keyring *) keyring)->vault_url); + elog(debug_level, "Vault Keyring Mount Path: %s", ((VaultV2Keyring *) keyring)->vault_mount_path); + elog(debug_level, "Vault Keyring CA Path: %s", ((VaultV2Keyring *) keyring)->vault_ca_path); + break; + case KMIP_KEY_PROVIDER: + elog(debug_level, "KMIP Keyring Host: %s", ((KmipKeyring *)keyring)->kmip_host); + elog(debug_level, "KMIP Keyring Port: %s", ((KmipKeyring *)keyring)->kmip_port); + elog(debug_level, "KMIP Keyring CA Path: %s", ((KmipKeyring *)keyring)->kmip_ca_path); + elog(debug_level, "KMIP Keyring Cert Path: %s", ((KmipKeyring *)keyring)->kmip_cert_path); + break; + case UNKNOWN_KEY_PROVIDER: + elog(debug_level, "Unknown Keyring "); + break; + } +} + +static inline void +get_keyring_infofile_path(char *resPath, Oid dbOid) +{ + join_path_components(resPath, pg_tde_get_tde_data_dir(), psprintf(PG_TDE_KEYRING_FILENAME, dbOid)); +} + +/* + * Fetch the next key provider from the file and update the curr_pos +*/ +static bool +fetch_next_key_provider(int fd, off_t *curr_pos, KeyringProvideRecord *provider) +{ + off_t bytes_read = 0; + + Assert(provider != NULL); + Assert(fd >= 0); + + bytes_read = pg_pread(fd, provider, sizeof(KeyringProvideRecord), *curr_pos); + *curr_pos += bytes_read; + + if (bytes_read == 0) + return false; + if (bytes_read != sizeof(KeyringProvideRecord)) + { + close(fd); + /* Corrupt file */ + ereport(ERROR, + (errcode_for_file_access(), + errmsg("key provider info file is corrupted: %m"), + 
errdetail("invalid key provider record size %ld expected %lu", bytes_read, sizeof(KeyringProvideRecord)))); + } + return true; +} diff --git a/contrib/pg_tde/src/catalog/tde_keyring_parse_opts.c b/contrib/pg_tde/src/catalog/tde_keyring_parse_opts.c new file mode 100644 index 00000000000..f94459d3b29 --- /dev/null +++ b/contrib/pg_tde/src/catalog/tde_keyring_parse_opts.c @@ -0,0 +1,515 @@ +/*------------------------------------------------------------------------- + * + * tde_keyring_parse_opts.c + * Parser routines for the keyring JSON options + * + * Each value in the JSON document can be either scalar (string) - a value itself + * or a reference to the external object that contains the value. Though the top + * level field "type" can be only scalar. + * + * Examples: + * {"type" : "file", "path" : "/tmp/keyring_data_file"} + * {"type" : "file", "path" : {"type" : "file", "path" : "/tmp/datafile-location"}} + * in the latter one, /tmp/datafile-location contains not keyring data but the + * location of such. + * + * A field type can be "file", in this case, we expect "path" field. Or "remote", + * when "url" field is expected. + * + * IDENTIFICATION + * contrib/pg_tde/src/catalog/tde_keyring_parse_opts.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" +#include "common/file_perm.h" +#include "common/jsonapi.h" +#include "mb/pg_wchar.h" +#include "storage/fd.h" + +#include "catalog/tde_keyring.h" +#include "keyring/keyring_curl.h" + +#ifdef FRONTEND +#include "pg_tde_fe.h" +#endif + +#include + +#define MAX_CONFIG_FILE_DATA_LENGTH 1024 + +/* + * JSON parser state + */ + +typedef enum JsonKeringSemState +{ + JK_EXPECT_TOP_FIELD, + JK_EXPECT_EXTERN_VAL, +} JsonKeringSemState; + +#define KEYRING_REMOTE_FIELD_TYPE "remote" +#define KEYRING_FILE_FIELD_TYPE "file" + +typedef enum JsonKeyringField +{ + JK_FIELD_UNKNOWN, + + JK_KRING_TYPE, + + JK_FIELD_TYPE, + JK_REMOTE_URL, + JK_FIELD_PATH, + + JF_FILE_PATH, + + JK_VAULT_TOKEN, + JK_VAULT_URL, + JK_VAULT_MOUNT_PATH, + JK_VAULT_CA_PATH, + + JK_KMIP_HOST, + JK_KMIP_PORT, + JK_KMIP_CA_PATH, + JK_KMIP_CERT_PATH, + + /* must be the last */ + JK_FIELDS_TOTAL +} JsonKeyringField; + +static const char *JK_FIELD_NAMES[JK_FIELDS_TOTAL] = { + [JK_FIELD_UNKNOWN] = "unknownField", + [JK_KRING_TYPE] = "type", + [JK_FIELD_TYPE] = "type", + [JK_REMOTE_URL] = "url", + [JK_FIELD_PATH] = "path", + + /* + * These values should match pg_tde_add_key_provider_vault_v2 and + * pg_tde_add_key_provider_file SQL interfaces + */ + [JF_FILE_PATH] = "path", + [JK_VAULT_TOKEN] = "token", + [JK_VAULT_URL] = "url", + [JK_VAULT_MOUNT_PATH] = "mountPath", + [JK_VAULT_CA_PATH] = "caPath", + + [JK_KMIP_HOST] = "host", + [JK_KMIP_PORT] = "port", + [JK_KMIP_CA_PATH] = "caPath", + [JK_KMIP_CERT_PATH] = "certPath", +}; + +#define MAX_JSON_DEPTH 64 +typedef struct JsonKeyringState +{ + ProviderType provider_type; + + /* + * Caller's options to be set from JSON values. Expected either + * `VaultV2Keyring` or `FileKeyring` + */ + void *provider_opts; + + /* + * A field hierarchy of the current branch, field[level] is the current + * one, field[level-1] is the parent and so on. We need to track parent + * fields because of the external values + */ + JsonKeyringField field[MAX_JSON_DEPTH]; + JsonKeringSemState state; + int level; + + /* + * The rest of the scalar fields might be in the JSON document but has no + * direct value for the caller. Although we need them for the values + * extraction or state tracking. 
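+	 * For example, for {"path" : {"type" : "file", "path" : "/tmp/loc"}}
+	 * the inner "type" and "path" land in field_type and extern_path and
+	 * are only used to resolve the real value of the outer "path" field.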
+ */ + char *kring_type; + char *field_type; + char *extern_url; + char *extern_path; +} JsonKeyringState; + +static JsonParseErrorType json_kring_scalar(void *state, char *token, JsonTokenType tokentype); +static JsonParseErrorType json_kring_object_field_start(void *state, char *fname, bool isnull); +static JsonParseErrorType json_kring_object_start(void *state); +static JsonParseErrorType json_kring_object_end(void *state); + +static void json_kring_assign_scalar(JsonKeyringState *parse, JsonKeyringField field, char *value); +static char *get_remote_kring_value(const char *url, const char *field_name); +static char *get_file_kring_value(const char *path, const char *field_name); + + +/* + * Parses json input for the given provider type and sets the provided options + * out_opts should be a palloc'd `VaultV2Keyring` or `FileKeyring` struct as the + * respective option values will be mem copied into it. + * Returns `true` if parsing succeded and `false` otherwise. +*/ +bool +ParseKeyringJSONOptions(ProviderType provider_type, void *out_opts, char *in_buf, int buf_len) +{ + JsonLexContext *jlex; + JsonKeyringState parse = {0}; + JsonSemAction sem; + JsonParseErrorType jerr; + + /* Set up parsing context and initial semantic state */ + parse.provider_type = provider_type; + parse.provider_opts = out_opts; + parse.level = -1; + parse.state = JK_EXPECT_TOP_FIELD; + memset(parse.field, 0, MAX_JSON_DEPTH * sizeof(JsonKeyringField)); + +#if PG_VERSION_NUM >= 170000 + jlex = makeJsonLexContextCstringLen(NULL, in_buf, buf_len, PG_UTF8, true); +#else + jlex = makeJsonLexContextCstringLen(in_buf, buf_len, PG_UTF8, true); +#endif + + /* + * Set up semantic actions. The function below will be called when the + * parser reaches the appropriate state. See comments on the functions. + */ + sem.semstate = &parse; + sem.object_start = json_kring_object_start; + sem.object_end = json_kring_object_end; + sem.array_start = NULL; + sem.array_end = NULL; + sem.object_field_start = json_kring_object_field_start; + sem.object_field_end = NULL; + sem.array_element_start = NULL; + sem.array_element_end = NULL; + sem.scalar = json_kring_scalar; + + /* Run the parser */ + jerr = pg_parse_json(jlex, &sem); + if (jerr != JSON_SUCCESS) + { + ereport(WARNING, + (errmsg("parsing of keyring options failed: %s", + json_errdetail(jerr, jlex)))); + + } +#if PG_VERSION_NUM >= 170000 + freeJsonLexContext(jlex); +#endif + + return jerr == JSON_SUCCESS; +} + +/* + * JSON parser semantic actions +*/ + +/* + * Invoked at the start of each object in the JSON document. + * + * Every new object increases the level of nesting as the whole document is the + * object itself (level 0) and every next one means going deeper into nesting. + * + * On the top level, we expect either scalar (string) values or objects referencing + * the external value of the field. Hence, if we are on level 1, we expect an + * "external field object" e.g. ({"type" : "remote", "url" : "http://localhost:8888/hello"}) + */ +static JsonParseErrorType +json_kring_object_start(void *state) +{ + JsonKeyringState *parse = state; + + if (MAX_JSON_DEPTH == ++parse->level) + { + elog(WARNING, "reached max depth of JSON nesting"); + return JSON_SEM_ACTION_FAILED; + } + + switch (parse->level) + { + case 0: + parse->state = JK_EXPECT_TOP_FIELD; + break; + case 1: + parse->state = JK_EXPECT_EXTERN_VAL; + break; + } + + return JSON_SUCCESS; +} + +/* + * Invoked at the end of each object in the JSON document. + * + * First, it means we are going back to the higher level. 
Plus, if it was the + * level 1, we expect only external objects there, which means we have all + * the necessary info to extract the value and assign the result to the + * appropriate (parent) field. + */ +static JsonParseErrorType +json_kring_object_end(void *state) +{ + JsonKeyringState *parse = state; + + /* + * we're done with the nested object and if it's an external field, the + * value should be extracted and assigned to the parent "field". for + * example if : "field" : {"type" : "remote", "url" : + * "http://localhost:8888/hello"} or "field" : {"type" : "file", "path" : + * "/tmp/datafile-location"} the "field"'s value should be the content of + * "path" or "url" respectively + */ + if (parse->level == 1) + { + if (parse->state == JK_EXPECT_EXTERN_VAL) + { + JsonKeyringField parent_field = parse->field[0]; + + char *value = NULL; + + if (strcmp(parse->field_type, KEYRING_REMOTE_FIELD_TYPE) == 0) + value = get_remote_kring_value(parse->extern_url, JK_FIELD_NAMES[parent_field]); + if (strcmp(parse->field_type, KEYRING_FILE_FIELD_TYPE) == 0) + value = get_file_kring_value(parse->extern_path, JK_FIELD_NAMES[parent_field]); + + json_kring_assign_scalar(parse, parent_field, value); + } + + parse->state = JK_EXPECT_TOP_FIELD; + } + + parse->level--; + + return JSON_SUCCESS; +} + +/* + * Invoked at the start of each object field in the JSON document. + * + * Based on the given field name and the semantic state (we expect a top-level + * field or an external object) we set the state so that when we get the value, + * we know what is it and where to assign it. + */ +static JsonParseErrorType +json_kring_object_field_start(void *state, char *fname, bool isnull) +{ + JsonKeyringState *parse = state; + JsonKeyringField *field; + + Assert(parse->level >= 0); + + field = &parse->field[parse->level]; + + switch (parse->state) + { + case JK_EXPECT_TOP_FIELD: + + /* + * On the top level, "type" stores a keyring type and this field + * is common for all keyrings. The rest of the fields depend on + * the keyring type. 
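+			 * For instance, with a file provider the document
+			 * {"type" : "file", "path" : "/tmp/keyring_data_file"} produces
+			 * the top-level fields "type" (JK_KRING_TYPE) and "path"
+			 * (JF_FILE_PATH), which the provider-specific switch below
+			 * resolves.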
+ */ + if (strcmp(fname, JK_FIELD_NAMES[JK_KRING_TYPE]) == 0) + { + *field = JK_KRING_TYPE; + break; + } + switch (parse->provider_type) + { + case FILE_KEY_PROVIDER: + if (strcmp(fname, JK_FIELD_NAMES[JF_FILE_PATH]) == 0) + *field = JF_FILE_PATH; + else + { + *field = JK_FIELD_UNKNOWN; + elog(DEBUG1, "parse file keyring config: unexpected field %s", fname); + } + break; + + case VAULT_V2_KEY_PROVIDER: + if (strcmp(fname, JK_FIELD_NAMES[JK_VAULT_TOKEN]) == 0) + *field = JK_VAULT_TOKEN; + else if (strcmp(fname, JK_FIELD_NAMES[JK_VAULT_URL]) == 0) + *field = JK_VAULT_URL; + else if (strcmp(fname, JK_FIELD_NAMES[JK_VAULT_MOUNT_PATH]) == 0) + *field = JK_VAULT_MOUNT_PATH; + else if (strcmp(fname, JK_FIELD_NAMES[JK_VAULT_CA_PATH]) == 0) + *field = JK_VAULT_CA_PATH; + else + { + *field = JK_FIELD_UNKNOWN; + elog(DEBUG1, "parse json keyring config: unexpected field %s", fname); + } + break; + + case KMIP_KEY_PROVIDER: + if (strcmp(fname, JK_FIELD_NAMES[JK_KMIP_HOST]) == 0) + *field = JK_KMIP_HOST; + else if (strcmp(fname, JK_FIELD_NAMES[JK_KMIP_PORT]) == 0) + *field = JK_KMIP_PORT; + else if (strcmp(fname, JK_FIELD_NAMES[JK_KMIP_CA_PATH]) == 0) + *field = JK_KMIP_CA_PATH; + else if (strcmp(fname, JK_FIELD_NAMES[JK_KMIP_CERT_PATH]) == 0) + *field = JK_KMIP_CERT_PATH; + else + { + *field = JK_FIELD_UNKNOWN; + elog(DEBUG1, "parse json keyring config: unexpected field %s", fname); + } + break; + + case UNKNOWN_KEY_PROVIDER: + Assert(0); + break; + } + break; + + case JK_EXPECT_EXTERN_VAL: + if (strcmp(fname, JK_FIELD_NAMES[JK_FIELD_TYPE]) == 0) + *field = JK_FIELD_TYPE; + else if (strcmp(fname, JK_FIELD_NAMES[JK_REMOTE_URL]) == 0) + *field = JK_REMOTE_URL; + else if (strcmp(fname, JK_FIELD_NAMES[JK_FIELD_PATH]) == 0) + *field = JK_FIELD_PATH; + break; + } + + return JSON_SUCCESS; +} + +/* + * Invoked at the start of each scalar in the JSON document. + * + * We have only the string value of the field. And rely on the state set by + * `json_kring_object_field_start` for defining what the field is. 
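+ * E.g. while parsing {"type" : "file", "path" : "/tmp/keyring_data_file"}
+ * for a file provider, the token "/tmp/keyring_data_file" arrives here with
+ * parse->field[parse->level] already set to JF_FILE_PATH, so
+ * json_kring_assign_scalar() copies it into FileKeyring.file_name.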
+ */ +static JsonParseErrorType +json_kring_scalar(void *state, char *token, JsonTokenType tokentype) +{ + JsonKeyringState *parse = state; + + json_kring_assign_scalar(parse, parse->field[parse->level], token); + + return JSON_SUCCESS; +} + +static void +json_kring_assign_scalar(JsonKeyringState *parse, JsonKeyringField field, char *value) +{ + VaultV2Keyring *vault = parse->provider_opts; + FileKeyring *file = parse->provider_opts; + KmipKeyring *kmip = parse->provider_opts; + + switch (field) + { + case JK_KRING_TYPE: + parse->kring_type = value; + break; + + case JK_FIELD_TYPE: + parse->field_type = value; + break; + case JK_REMOTE_URL: + parse->extern_url = value; + break; + case JK_FIELD_PATH: + parse->extern_path = value; + break; + + case JF_FILE_PATH: + strncpy(file->file_name, value, sizeof(file->file_name)); + break; + + case JK_VAULT_TOKEN: + strncpy(vault->vault_token, value, sizeof(vault->vault_token)); + break; + case JK_VAULT_URL: + strncpy(vault->vault_url, value, sizeof(vault->vault_url)); + break; + case JK_VAULT_MOUNT_PATH: + strncpy(vault->vault_mount_path, value, sizeof(vault->vault_mount_path)); + break; + case JK_VAULT_CA_PATH: + strncpy(vault->vault_ca_path, value, sizeof(vault->vault_ca_path)); + break; + + case JK_KMIP_HOST: + strncpy(kmip->kmip_host, value, sizeof(kmip->kmip_host)); + break; + case JK_KMIP_PORT: + strncpy(kmip->kmip_port, value, sizeof(kmip->kmip_port)); + break; + case JK_KMIP_CA_PATH: + strncpy(kmip->kmip_ca_path, value, sizeof(kmip->kmip_ca_path)); + break; + case JK_KMIP_CERT_PATH: + strncpy(kmip->kmip_cert_path, value, sizeof(kmip->kmip_cert_path)); + break; + + default: + elog(DEBUG1, "json keyring: unexpected scalar field %d", field); + Assert(0); + break; + } +} + +static char * +get_remote_kring_value(const char *url, const char *field_name) +{ + long httpCode; + CurlString outStr; + + /* TODO: we never pfree it */ + outStr.ptr = palloc0(1); + outStr.len = 0; + + if (!curlSetupSession(url, NULL, &outStr)) + { + elog(WARNING, "CURL error for remote object %s", field_name); + return NULL; + } + if (curl_easy_perform(keyringCurl) != CURLE_OK) + { + elog(WARNING, "HTTP request error for remote object %s", field_name); + return NULL; + } + if (curl_easy_getinfo(keyringCurl, CURLINFO_RESPONSE_CODE, &httpCode) != CURLE_OK) + { + elog(WARNING, "HTTP error for remote object %s, HTTP code %li", field_name, httpCode); + return NULL; + } + + /* remove trailing whitespace */ + outStr.ptr[strcspn(outStr.ptr, " \t\n\r")] = '\0'; + + return outStr.ptr; +} + +static char * +get_file_kring_value(const char *path, const char *field_name) +{ + int fd = -1; + char *val; + + fd = BasicOpenFile(path, O_RDONLY); + if (fd < 0) + { + elog(WARNING, "failed to open file %s for %s", path, field_name); + return NULL; + } + + /* TODO: we never pfree it */ + val = palloc0(MAX_CONFIG_FILE_DATA_LENGTH); + if(pg_pread(fd, val, MAX_CONFIG_FILE_DATA_LENGTH, 0) == -1) + { + elog(WARNING, "failed to read file %s for %s", path, field_name); + pfree(val); + close(fd); + return NULL; + } + /* remove trailing whitespace */ + val[strcspn(val, " \t\n\r")] = '\0'; + + close(fd); + return val; +} diff --git a/contrib/pg_tde/src/catalog/tde_principal_key.c b/contrib/pg_tde/src/catalog/tde_principal_key.c new file mode 100644 index 00000000000..5316cbe5cbc --- /dev/null +++ b/contrib/pg_tde/src/catalog/tde_principal_key.c @@ -0,0 +1,949 @@ +/*------------------------------------------------------------------------- + * + * tde_principal_key.c + * Deals with the tde principal key 
configuration catalog + * routines. + * + * IDENTIFICATION + * contrib/pg_tde/src/catalog/tde_principal_key.c + * + *------------------------------------------------------------------------- + */ +#include "postgres.h" +#include "access/xlog.h" +#include "access/xloginsert.h" +#include "catalog/tde_principal_key.h" +#include "storage/fd.h" +#include "utils/palloc.h" +#include "utils/memutils.h" +#include "utils/wait_event.h" +#include "utils/timestamp.h" +#include "common/relpath.h" +#include "miscadmin.h" +#include "utils/builtins.h" +#include "pg_tde.h" +#include "access/pg_tde_xlog.h" +#include +#include + +#include "access/pg_tde_tdemap.h" +#include "catalog/tde_global_space.h" +#ifndef FRONTEND +#include "common/pg_tde_shmem.h" +#include "funcapi.h" +#include "storage/lwlock.h" +#else +#include "pg_tde_fe.h" +#endif + +#include + +#ifndef FRONTEND + +typedef struct TdePrincipalKeySharedState +{ + LWLockPadded *Locks; + int hashTrancheId; + dshash_table_handle hashHandle; + void *rawDsaArea; /* DSA area pointer */ + +} TdePrincipalKeySharedState; + +typedef struct TdePrincipalKeylocalState +{ + TdePrincipalKeySharedState *sharedPrincipalKeyState; + dsa_area *dsa; /* local dsa area for backend attached to the + * dsa area created by postmaster at startup. */ + dshash_table *sharedHash; +} TdePrincipalKeylocalState; + +/* parameter for the principal key info shared hash */ +static dshash_parameters principal_key_dsh_params = { + sizeof(Oid), + sizeof(TDEPrincipalKey), + dshash_memcmp, /* TODO use int compare instead */ +dshash_memhash}; + +TdePrincipalKeylocalState principalKeyLocalState; + +static void principal_key_info_attach_shmem(void); +static Size initialize_shared_state(void *start_address); +static void initialize_objects_in_dsa_area(dsa_area *dsa, void *raw_dsa_area); +static Size cache_area_size(void); +static Size required_shared_mem_size(void); +static void shared_memory_shutdown(int code, Datum arg); +static void principal_key_startup_cleanup(int tde_tbl_count, XLogExtensionInstall *ext_info, bool redo, void *arg); +static void clear_principal_key_cache(Oid databaseId); +static inline dshash_table *get_principal_key_Hash(void); +static TDEPrincipalKey *get_principal_key_from_keyring(Oid dbOid); +static TDEPrincipalKey *get_principal_key_from_cache(Oid dbOid); +static void push_principal_key_to_cache(TDEPrincipalKey *principalKey); +static Datum pg_tde_get_key_info(PG_FUNCTION_ARGS, Oid dbOid); +static keyInfo *load_latest_versioned_key_name(TDEPrincipalKeyInfo *principal_key_info, + GenericKeyring *keyring, + bool ensure_new_key); +static TDEPrincipalKey *set_principal_key_with_keyring(const char *key_name, + GenericKeyring *keyring, + Oid dbOid, + bool ensure_new_key); +static TDEPrincipalKey *alter_keyprovider_for_principal_key(GenericKeyring *newKeyring,Oid dbOid); + +static const TDEShmemSetupRoutine principal_key_info_shmem_routine = { + .init_shared_state = initialize_shared_state, + .init_dsa_area_objects = initialize_objects_in_dsa_area, + .required_shared_mem_size = required_shared_mem_size, + .shmem_kill = shared_memory_shutdown +}; + +void +InitializePrincipalKeyInfo(void) +{ + ereport(LOG, (errmsg("Initializing TDE principal key info"))); + RegisterShmemRequest(&principal_key_info_shmem_routine); + on_ext_install(principal_key_startup_cleanup, NULL); +} + +/* + * Lock to guard internal/principal key. Usually, this lock has to be held until + * the caller fetches an internal_key or rotates the principal. 
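+ * A typical reader therefore follows this pattern (sketch):
+ *
+ *	LWLockAcquire(tde_lwlock_enc_keys(), LW_SHARED);
+ *	principal_key = GetPrincipalKey(dbOid, LW_SHARED);
+ *	... use the key ...
+ *	LWLockRelease(tde_lwlock_enc_keys());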
+ */ +LWLock * +tde_lwlock_enc_keys(void) +{ + Assert(principalKeyLocalState.sharedPrincipalKeyState); + + return &principalKeyLocalState.sharedPrincipalKeyState->Locks[TDE_LWLOCK_ENC_KEY].lock; +} + +static Size +cache_area_size(void) +{ + return MAXALIGN(8192 * 100); /* TODO: Probably get it from guc */ +} + +static Size +required_shared_mem_size(void) +{ + Size sz = cache_area_size(); + + sz = add_size(sz, sizeof(TdePrincipalKeySharedState)); + return MAXALIGN(sz); +} + +/* + * Initialize the shared area for Principal key info. + * This includes locks and cache area for principal key info + */ + +static Size +initialize_shared_state(void *start_address) +{ + TdePrincipalKeySharedState *sharedState = (TdePrincipalKeySharedState *) start_address; + + ereport(LOG, (errmsg("initializing shared state for principal key"))); + principalKeyLocalState.dsa = NULL; + principalKeyLocalState.sharedHash = NULL; + + sharedState->Locks = GetNamedLWLockTranche(TDE_TRANCHE_NAME); + + principalKeyLocalState.sharedPrincipalKeyState = sharedState; + return sizeof(TdePrincipalKeySharedState); +} + +void +initialize_objects_in_dsa_area(dsa_area *dsa, void *raw_dsa_area) +{ + dshash_table *dsh; + TdePrincipalKeySharedState *sharedState = principalKeyLocalState.sharedPrincipalKeyState; + + ereport(LOG, (errmsg("initializing dsa area objects for principal key"))); + + Assert(sharedState != NULL); + + sharedState->rawDsaArea = raw_dsa_area; + sharedState->hashTrancheId = LWLockNewTrancheId(); + principal_key_dsh_params.tranche_id = sharedState->hashTrancheId; +#if PG_VERSION_NUM >= 170000 + principal_key_dsh_params.copy_function = dshash_memcpy; +#endif + dsh = dshash_create(dsa, &principal_key_dsh_params, 0); + sharedState->hashHandle = dshash_get_hash_table_handle(dsh); + dshash_detach(dsh); +} + +/* + * Attaches to the DSA to local backend + */ +static void +principal_key_info_attach_shmem(void) +{ + MemoryContext oldcontext; + + if (principalKeyLocalState.dsa) + return; + + /* + * We want the dsa to remain valid throughout the lifecycle of this + * process. so switch to TopMemoryContext before attaching + */ + oldcontext = MemoryContextSwitchTo(TopMemoryContext); + + principalKeyLocalState.dsa = dsa_attach_in_place(principalKeyLocalState.sharedPrincipalKeyState->rawDsaArea, + NULL); + + /* + * pin the attached area to keep the area attached until end of session or + * explicit detach. + */ + dsa_pin_mapping(principalKeyLocalState.dsa); + + principal_key_dsh_params.tranche_id = principalKeyLocalState.sharedPrincipalKeyState->hashTrancheId; + principalKeyLocalState.sharedHash = dshash_attach(principalKeyLocalState.dsa, &principal_key_dsh_params, + principalKeyLocalState.sharedPrincipalKeyState->hashHandle, 0); + MemoryContextSwitchTo(oldcontext); +} + +static void +shared_memory_shutdown(int code, Datum arg) +{ + principalKeyLocalState.sharedPrincipalKeyState = NULL; +} + +bool +save_principal_key_info(TDEPrincipalKeyInfo *principal_key_info) +{ + Assert(principal_key_info != NULL); + + return pg_tde_save_principal_key(principal_key_info, true, true); +} + +bool +update_principal_key_info(TDEPrincipalKeyInfo *principal_key_info) +{ + Assert(principal_key_info != NULL); + return pg_tde_save_principal_key(principal_key_info, false, true); +} + +/* + * SetPrincipalkey: + * We need to ensure that only one principal key is set for a database. 
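+ * The duplicate check is done twice under the exclusive lock: first against
+ * the shared cache and then against the on-disk key info file, so a
+ * concurrent caller that won the race is reliably detected and reported.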
 */
+TDEPrincipalKey *
+set_principal_key_with_keyring(const char *key_name, GenericKeyring *keyring,
+							   Oid dbOid, bool ensure_new_key)
+{
+	TDEPrincipalKey *principalKey = NULL;
+	LWLock	   *lock_files = tde_lwlock_enc_keys();
+	bool		is_dup_key = false;
+
+	/*
+	 * Try to get the principal key from the cache.
+	 */
+	LWLockAcquire(lock_files, LW_EXCLUSIVE);
+
+	principalKey = get_principal_key_from_cache(dbOid);
+	is_dup_key = (principalKey != NULL);
+
+	/* TODO: Add the key in the cache? */
+	if (!is_dup_key)
+		is_dup_key = (pg_tde_get_principal_key_info(dbOid) != NULL);
+
+	if (!is_dup_key)
+	{
+		const keyInfo *keyInfo = NULL;
+
+		principalKey = palloc(sizeof(TDEPrincipalKey));
+		principalKey->keyInfo.databaseId = dbOid;
+		principalKey->keyInfo.keyId.version = DEFAULT_PRINCIPAL_KEY_VERSION;
+		principalKey->keyInfo.keyringId = keyring->key_id;
+		strncpy(principalKey->keyInfo.keyId.name, key_name, TDE_KEY_NAME_LEN);
+		gettimeofday(&principalKey->keyInfo.creationTime, NULL);
+
+		keyInfo = load_latest_versioned_key_name(&principalKey->keyInfo, keyring, ensure_new_key);
+
+		if (keyInfo == NULL)
+			keyInfo = KeyringGenerateNewKeyAndStore(keyring, principalKey->keyInfo.keyId.versioned_name, INTERNAL_KEY_LEN, true);
+
+		if (keyInfo == NULL)
+		{
+			LWLockRelease(lock_files);
+
+			ereport(ERROR,
+					(errmsg("failed to retrieve principal key. Create one using pg_tde_set_principal_key before using encrypted tables.")));
+		}
+
+		principalKey->keyLength = keyInfo->data.len;
+
+		memcpy(principalKey->keyData, keyInfo->data.data, keyInfo->data.len);
+
+		save_principal_key_info(&principalKey->keyInfo);
+
+		/* XLog the new key */
+		XLogBeginInsert();
+		XLogRegisterData((char *) &principalKey->keyInfo, sizeof(TDEPrincipalKeyInfo));
+		XLogInsert(RM_TDERMGR_ID, XLOG_TDE_ADD_PRINCIPAL_KEY);
+
+		push_principal_key_to_cache(principalKey);
+	}
+
+	LWLockRelease(lock_files);
+
+	if (is_dup_key)
+	{
+		/*
+		 * It seems that just before we got the lock, the key was installed
+		 * by some other caller. Throw an error and move on.
+		 */
+		ereport(ERROR,
+				(errcode(ERRCODE_DUPLICATE_OBJECT),
+				 errmsg("Principal key already exists for the database"),
+				 errhint("Use rotate_key interface to change the principal key")));
+	}
+
+	return principalKey;
+}
+
+/*
+ * alter_keyprovider_for_principal_key:
+ * Change the key provider that holds the database's principal key.
+ */
+TDEPrincipalKey *
+alter_keyprovider_for_principal_key(GenericKeyring *newKeyring, Oid dbOid)
+{
+	TDEPrincipalKeyInfo *principalKeyInfo = NULL;
+	TDEPrincipalKey *principal_key = NULL;
+
+	LWLock	   *lock_files = tde_lwlock_enc_keys();
+
+	Assert(newKeyring != NULL);
+	LWLockAcquire(lock_files, LW_EXCLUSIVE);
+
+	principalKeyInfo = pg_tde_get_principal_key_info(dbOid);
+
+	if (principalKeyInfo == NULL)
+	{
+		LWLockRelease(lock_files);
+		ereport(ERROR,
+				(errmsg("Principal key not set for the database"),
+				 errhint("Use set_principal_key interface to set the principal key")));
+	}
+
+	if (newKeyring->key_id == principalKeyInfo->keyringId)
+	{
+		LWLockRelease(lock_files);
+		ereport(ERROR,
+				(errmsg("New key provider is the same as the current key provider")));
+	}
+	/* update the key provider in the principal key info */
+
+	ereport(DEBUG2,
+			(errmsg("Changing key provider ID from %d to %d", principalKeyInfo->keyringId, newKeyring->key_id)));
+
+	principalKeyInfo->keyringId = newKeyring->key_id;
+
+	update_principal_key_info(principalKeyInfo);
+
+	/* XLog the updated key info */
+	XLogBeginInsert();
+	XLogRegisterData((char *) principalKeyInfo, sizeof(TDEPrincipalKeyInfo));
+	XLogInsert(RM_TDERMGR_ID, XLOG_TDE_UPDATE_PRINCIPAL_KEY);
+
+	/* clear the cache as well */
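+	/*
+	 * Dropping the cached copy makes the GetPrincipalKey() call below
+	 * re-read the key through the newly configured provider instead of
+	 * serving the stale cache entry.
+	 */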
clear_principal_key_cache(dbOid); + + principal_key = GetPrincipalKey(dbOid, LW_EXCLUSIVE); + + LWLockRelease(lock_files); + + return principal_key; +} + +bool +SetPrincipalKey(const char *key_name, const char *provider_name, bool ensure_new_key) +{ + TDEPrincipalKey *principal_key = set_principal_key_with_keyring(key_name, + GetKeyProviderByName(provider_name, MyDatabaseId), + MyDatabaseId, + ensure_new_key); + + return (principal_key != NULL); +} + +bool +AlterPrincipalKeyKeyring(const char *provider_name) +{ + TDEPrincipalKey *principal_key = alter_keyprovider_for_principal_key(GetKeyProviderByName(provider_name, MyDatabaseId), + MyDatabaseId); + + return (principal_key != NULL); +} + +bool +RotatePrincipalKey(TDEPrincipalKey *current_key, const char *new_key_name, const char *new_provider_name, bool ensure_new_key) +{ + TDEPrincipalKey new_principal_key; + const keyInfo *keyInfo = NULL; + GenericKeyring *keyring; + bool is_rotated; + MemoryContext keyRotateCtx; + MemoryContext oldCtx; + + Assert(current_key != NULL); + + keyRotateCtx = AllocSetContextCreate(CurrentMemoryContext, + "TDE key rotation temporary context", + ALLOCSET_DEFAULT_SIZES); + oldCtx = MemoryContextSwitchTo(keyRotateCtx); + + /* + * Let's set everything the same as the older principal key and update + * only the required attributes. + */ + memcpy(&new_principal_key, current_key, sizeof(TDEPrincipalKey)); + + if (new_key_name == NULL) + { + new_principal_key.keyInfo.keyId.version++; + } + else + { + strncpy(new_principal_key.keyInfo.keyId.name, new_key_name, sizeof(new_principal_key.keyInfo.keyId.name)); + new_principal_key.keyInfo.keyId.version = DEFAULT_PRINCIPAL_KEY_VERSION; + + if (new_provider_name != NULL) + { + new_principal_key.keyInfo.keyringId = GetKeyProviderByName(new_provider_name, + new_principal_key.keyInfo.databaseId)->key_id; + } + } + + /* We need a valid keyring structure */ + keyring = GetKeyProviderByID(new_principal_key.keyInfo.keyringId, + new_principal_key.keyInfo.databaseId); + + keyInfo = load_latest_versioned_key_name(&new_principal_key.keyInfo, keyring, ensure_new_key); + + if (keyInfo == NULL) + keyInfo = KeyringGenerateNewKeyAndStore(keyring, new_principal_key.keyInfo.keyId.versioned_name, INTERNAL_KEY_LEN, true); + + if (keyInfo == NULL) + { + ereport(ERROR, + (errmsg("Failed to generate new key name"))); + } + + new_principal_key.keyLength = keyInfo->data.len; + + memcpy(new_principal_key.keyData, keyInfo->data.data, keyInfo->data.len); + is_rotated = pg_tde_perform_rotate_key(current_key, &new_principal_key); + if (is_rotated && !TDEisInGlobalSpace(current_key->keyInfo.databaseId)) + { + clear_principal_key_cache(current_key->keyInfo.databaseId); + push_principal_key_to_cache(&new_principal_key); + } + + MemoryContextSwitchTo(oldCtx); + MemoryContextDelete(keyRotateCtx); + + return is_rotated; +} + +/* + * Rotate keys on a standby. 
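+ * The rotation record carries both files back to back in xlrec->buff: the
+ * first map_size bytes are the key map, followed by keydata_size bytes of
+ * key data, which is how the buffer is split in the call below.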
+ */
+bool
+xl_tde_perform_rotate_key(XLogPrincipalKeyRotate *xlrec)
+{
+	bool		ret;
+
+	ret = pg_tde_write_map_keydata_files(xlrec->map_size, xlrec->buff, xlrec->keydata_size, &xlrec->buff[xlrec->map_size]);
+	clear_principal_key_cache(xlrec->databaseId);
+
+	return ret;
+}
+
+/*
+ * Load the latest versioned key name for the principal key.
+ * If ensure_new_key is true, we keep incrementing the version number
+ * until we get a key name that is not present in the keyring.
+ */
+keyInfo *
+load_latest_versioned_key_name(TDEPrincipalKeyInfo *principal_key_info, GenericKeyring *keyring, bool ensure_new_key)
+{
+	KeyringReturnCodes kr_ret;
+	keyInfo    *keyInfo = NULL;
+	int			base_version = principal_key_info->keyId.version;
+
+	Assert(principal_key_info != NULL);
+	Assert(keyring != NULL);
+	Assert(strlen(principal_key_info->keyId.name) > 0);
+
+	/*
+	 * Start with the passed-in version number. We expect the name and the
+	 * version number to be already properly initialized and to contain the
+	 * correct values.
+	 */
+	snprintf(principal_key_info->keyId.versioned_name, TDE_KEY_NAME_LEN,
+			 "%s_%d", principal_key_info->keyId.name, principal_key_info->keyId.version);
+
+	while (true)
+	{
+		keyInfo = KeyringGetKey(keyring, principal_key_info->keyId.versioned_name, false, &kr_ret);
+
+		/*
+		 * vault-v2 returns 404 (KEYRING_CODE_RESOURCE_NOT_AVAILABLE) when key
+		 * is not found
+		 */
+		if (kr_ret != KEYRING_CODE_SUCCESS && kr_ret != KEYRING_CODE_RESOURCE_NOT_AVAILABLE)
+		{
+			ereport(ERROR,
+					(errmsg("failed to retrieve principal key from keyring provider \"%s\"", keyring->provider_name),
+					 errdetail("Error code: %d", kr_ret)));
+			return NULL;
+		}
+		if (keyInfo == NULL)
+		{
+			if (ensure_new_key == false)
+			{
+				/*
+				 * If ensure_new_key is false and we are not at the base
+				 * version, we should return the last existing version.
+				 */
+				if (base_version < principal_key_info->keyId.version)
+				{
+					/* Not optimal, but keeps things simple */
+					principal_key_info->keyId.version -= 1;
+					snprintf(principal_key_info->keyId.versioned_name, TDE_KEY_NAME_LEN,
+							 "%s_%d", principal_key_info->keyId.name, principal_key_info->keyId.version);
+					keyInfo = KeyringGetKey(keyring, principal_key_info->keyId.versioned_name, false, &kr_ret);
+				}
+			}
+			return keyInfo;
+		}
+
+		principal_key_info->keyId.version++;
+		snprintf(principal_key_info->keyId.versioned_name, TDE_KEY_NAME_LEN, "%s_%d", principal_key_info->keyId.name, principal_key_info->keyId.version);
+
+		/*
+		 * Not really required. Just to break the infinite loop in case the
+		 * key provider is not behaving sanely.
+		 */
+		if (principal_key_info->keyId.version > MAX_PRINCIPAL_KEY_VERSION_NUM)
+		{
+			ereport(ERROR,
+					(errmsg("failed to retrieve principal key. %d versions already exist", MAX_PRINCIPAL_KEY_VERSION_NUM)));
+			return NULL;
+		}
+	}
+	return NULL;				/* Just to keep the compiler quiet */
+}
+
+/*
+ * Returns the provider ID of the keyring that holds the principal key.
+ * Returns InvalidOid if the principal key is not set for the database.
+ */
+Oid
+GetPrincipalKeyProviderId(void)
+{
+	TDEPrincipalKey *principalKey = NULL;
+	TDEPrincipalKeyInfo *principalKeyInfo = NULL;
+	Oid			keyringId = InvalidOid;
+	Oid			dbOid = MyDatabaseId;
+	LWLock	   *lock_files = tde_lwlock_enc_keys();
+
+	LWLockAcquire(lock_files, LW_SHARED);
+
+	principalKey = get_principal_key_from_cache(dbOid);
+	if (principalKey)
+	{
+		keyringId = principalKey->keyInfo.keyringId;
+	}
+	else
+	{
+		/*
+		 * Principal key not present in the cache. Try loading it from the
+		 * info file.
+		 */
+		principalKeyInfo = pg_tde_get_principal_key_info(dbOid);
+		if (principalKeyInfo)
+		{
+			keyringId = principalKeyInfo->keyringId;
+			pfree(principalKeyInfo);
+		}
+	}
+
+	LWLockRelease(lock_files);
+
+	return keyringId;
+}
+
+/*
+ * ------------------------------
+ * Principal key cache related stuff
+ */
+
+static inline dshash_table *
+get_principal_key_Hash(void)
+{
+	principal_key_info_attach_shmem();
+	return principalKeyLocalState.sharedHash;
+}
+
+/*
+ * Gets the principal key for the current database from the cache
+ */
+static TDEPrincipalKey *
+get_principal_key_from_cache(Oid dbOid)
+{
+	TDEPrincipalKey *cacheEntry = NULL;
+
+	cacheEntry = (TDEPrincipalKey *) dshash_find(get_principal_key_Hash(),
+												 &dbOid, false);
+	if (cacheEntry)
+		dshash_release_lock(get_principal_key_Hash(), cacheEntry);
+
+	return cacheEntry;
+}
+
+/*
+ * Push the principal key for the current database to the shared memory cache.
+ * TODO: Add eviction policy
+ * For now we just keep pushing the principal keys to the cache and do not have
+ * any eviction policy. We have one principal key per database, so at most we
+ * could have as many entries in the cache as the number of databases, which
+ * in practice would not be a huge number; still, we need to have some
+ * eviction policy in place. Moreover, we need some mechanism to remove the
+ * cache entry when the database is dropped.
+ */
+static void
+push_principal_key_to_cache(TDEPrincipalKey *principalKey)
+{
+	TDEPrincipalKey *cacheEntry = NULL;
+	Oid			databaseId = principalKey->keyInfo.databaseId;
+	bool		found = false;
+
+	cacheEntry = dshash_find_or_insert(get_principal_key_Hash(),
+									   &databaseId, &found);
+	if (!found)
+		memcpy(cacheEntry, principalKey, sizeof(TDEPrincipalKey));
+	dshash_release_lock(get_principal_key_Hash(), cacheEntry);
+
+	/* we don't want principal keys to end up paged out to swap */
+	if (mlock(cacheEntry, sizeof(TDEPrincipalKey)) == -1)
+		elog(ERROR, "could not mlock principal key cache entry: %m");
+}
+
+/*
+ * Cleanup the principal key cache entry for the current database.
+ * This function is a hack to handle the situation where the extension was
+ * dropped from the database after having created the principal key info file
+ * and cache entry in its previous incarnation. We need to remove the cache
+ * entry and the principal key info file at the time of extension creation to
+ * start fresh again. Ideally we should have a mechanism to remove these when
+ * the extension is dropped, but unfortunately we do not have any such
+ * mechanism in PG.
+ */
+static void
+principal_key_startup_cleanup(int tde_tbl_count, XLogExtensionInstall *ext_info, bool redo, void *arg)
+{
+	if (tde_tbl_count > 0)
+	{
+		ereport(WARNING,
+				(errmsg("Failed to perform initialization: the database already has %d TDE tables", tde_tbl_count)));
+		return;
+	}
+
+	cleanup_principal_key_info(ext_info->database_id);
+}
+
+void
+cleanup_principal_key_info(Oid databaseId)
+{
+	clear_principal_key_cache(databaseId);
+
+	/*
+	 * TODO: Although this should never happen.
Still verify if any table in the + * database is using tde + */ + + /* Remove the tde files */ + pg_tde_delete_tde_files(databaseId); +} + +static void +clear_principal_key_cache(Oid databaseId) +{ + TDEPrincipalKey *cache_entry; + + /* Start with deleting the cache entry for the database */ + cache_entry = (TDEPrincipalKey *) dshash_find(get_principal_key_Hash(), + &databaseId, true); + if (cache_entry) + { + dshash_delete_entry(get_principal_key_Hash(), cache_entry); + } +} + +/* + * SQL interface to set principal key + */ +PG_FUNCTION_INFO_V1(pg_tde_set_principal_key); +Datum pg_tde_set_principal_key(PG_FUNCTION_ARGS); + +Datum +pg_tde_set_principal_key(PG_FUNCTION_ARGS) +{ + char *principal_key_name = text_to_cstring(PG_GETARG_TEXT_PP(0)); + char *provider_name = text_to_cstring(PG_GETARG_TEXT_PP(1)); + bool ensure_new_key = PG_GETARG_BOOL(2); + bool ret; + + ereport(LOG, (errmsg("Setting principal key [%s : %s] for the database", principal_key_name, provider_name))); + ret = SetPrincipalKey(principal_key_name, provider_name, ensure_new_key); + PG_RETURN_BOOL(ret); +} + +PG_FUNCTION_INFO_V1(pg_tde_alter_principal_key_keyring); +Datum pg_tde_alter_principal_key_keyring(PG_FUNCTION_ARGS); + +Datum pg_tde_alter_principal_key_keyring(PG_FUNCTION_ARGS) +{ + char *provider_name = text_to_cstring(PG_GETARG_TEXT_PP(0)); + bool ret; + + ereport(LOG, (errmsg("Altering principal key provider to \"%s\" for the database", provider_name))); + ret = AlterPrincipalKeyKeyring(provider_name); + PG_RETURN_BOOL(ret); +} + +/* + * SQL interface for key rotation + */ +PG_FUNCTION_INFO_V1(pg_tde_rotate_principal_key_internal); +Datum +pg_tde_rotate_principal_key_internal(PG_FUNCTION_ARGS) +{ + char *new_principal_key_name = NULL; + char *new_provider_name = NULL; + bool ensure_new_key; + bool is_global; + bool ret; + TDEPrincipalKey *current_key; + Oid dbOid = MyDatabaseId; + + if (!PG_ARGISNULL(0)) + new_principal_key_name = text_to_cstring(PG_GETARG_TEXT_PP(0)); + if (!PG_ARGISNULL(1)) + new_provider_name = text_to_cstring(PG_GETARG_TEXT_PP(1)); + ensure_new_key = PG_GETARG_BOOL(2); + is_global = PG_GETARG_BOOL(3); + +#ifdef PERCONA_EXT + if (is_global) + { + dbOid = GLOBAL_DATA_TDE_OID; + } +#endif + + ereport(LOG, (errmsg("rotating principal key to [%s : %s] the for the %s", + new_principal_key_name, + new_provider_name, + is_global ? 
"cluster" : "database"))); + + LWLockAcquire(tde_lwlock_enc_keys(), LW_EXCLUSIVE); + current_key = GetPrincipalKey(dbOid, LW_EXCLUSIVE); + ret = RotatePrincipalKey(current_key, new_principal_key_name, new_provider_name, ensure_new_key); + LWLockRelease(tde_lwlock_enc_keys()); + + PG_RETURN_BOOL(ret); +} + +PG_FUNCTION_INFO_V1(pg_tde_principal_key_info_internal); +Datum +pg_tde_principal_key_info_internal(PG_FUNCTION_ARGS) +{ + Oid dbOid = MyDatabaseId; + bool is_global = PG_GETARG_BOOL(0); + + if (is_global) + { + dbOid = GLOBAL_DATA_TDE_OID; + } + + return pg_tde_get_key_info(fcinfo, dbOid); +} + +static Datum +pg_tde_get_key_info(PG_FUNCTION_ARGS, Oid dbOid) +{ + TupleDesc tupdesc; + Datum values[6]; + bool isnull[6]; + HeapTuple tuple; + Datum result; + TDEPrincipalKey *principal_key; + TimestampTz ts; + GenericKeyring *keyring; + + /* Build a tuple descriptor for our result type */ + if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("function returning record called in context that cannot accept type record"))); + + LWLockAcquire(tde_lwlock_enc_keys(), LW_SHARED); + principal_key = GetPrincipalKey(dbOid, LW_SHARED); + LWLockRelease(tde_lwlock_enc_keys()); + if (principal_key == NULL) + { + ereport(ERROR, + (errmsg("Principal key does not exists for the database"), + errhint("Use set_principal_key interface to set the principal key"))); + PG_RETURN_NULL(); + } + + keyring = GetKeyProviderByID(principal_key->keyInfo.keyringId, dbOid); + + /* Initialize the values and null flags */ + + /* TEXT: Principal key name */ + values[0] = CStringGetTextDatum(principal_key->keyInfo.keyId.name); + isnull[0] = false; + /* TEXT: Keyring provider name */ + if (keyring) + { + values[1] = CStringGetTextDatum(keyring->provider_name); + isnull[1] = false; + } + else + isnull[1] = true; + + /* INTEGERT: key provider id */ + values[2] = Int32GetDatum(principal_key->keyInfo.keyringId); + isnull[2] = false; + + /* TEXT: Principal key versioned name */ + values[3] = CStringGetTextDatum(principal_key->keyInfo.keyId.versioned_name); + isnull[3] = false; + /* INTEGERT: Principal key version */ + values[4] = Int32GetDatum(principal_key->keyInfo.keyId.version); + isnull[4] = false; + /* TIMESTAMP TZ: Principal key creation time */ + ts = (TimestampTz) principal_key->keyInfo.creationTime.tv_sec - ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY); + ts = (ts * USECS_PER_SEC) + principal_key->keyInfo.creationTime.tv_usec; + values[5] = TimestampTzGetDatum(ts); + isnull[5] = false; + + /* Form the tuple */ + tuple = heap_form_tuple(tupdesc, values, isnull); + + /* Make the tuple into a datum */ + result = HeapTupleGetDatum(tuple); + + PG_RETURN_DATUM(result); +} +#endif /* FRONTEND */ + +/* + * Gets principal key form the keyring and pops it into cache if key exists + * Caller should hold an exclusive tde_lwlock_enc_keys lock + */ +static TDEPrincipalKey * +get_principal_key_from_keyring(Oid dbOid) +{ + GenericKeyring *keyring; + TDEPrincipalKey *principalKey = NULL; + TDEPrincipalKeyInfo *principalKeyInfo = NULL; + const keyInfo *keyInfo = NULL; + KeyringReturnCodes keyring_ret; + + Assert(LWLockHeldByMeInMode(tde_lwlock_enc_keys(), LW_EXCLUSIVE)); + + principalKeyInfo = pg_tde_get_principal_key_info(dbOid); + if (principalKeyInfo == NULL) + { + return NULL; + } + + keyring = GetKeyProviderByID(principalKeyInfo->keyringId, dbOid); + if (keyring == NULL) + { + return NULL; + } + + keyInfo = KeyringGetKey(keyring, 
principalKeyInfo->keyId.versioned_name, false, &keyring_ret); + + if (keyInfo == NULL) + { + return NULL; + } + + principalKey = palloc(sizeof(TDEPrincipalKey)); + + memcpy(&principalKey->keyInfo, principalKeyInfo, sizeof(principalKey->keyInfo)); + memcpy(principalKey->keyData, keyInfo->data.data, keyInfo->data.len); + principalKey->keyLength = keyInfo->data.len; + + Assert(dbOid == principalKey->keyInfo.databaseId); + +#ifndef FRONTEND + /* We don't store global space key in cache */ + if (!TDEisInGlobalSpace(dbOid)) + { + push_principal_key_to_cache(principalKey); + + /* If we do store key in cache we want to return a cache reference + * rather then a palloc'ed copy. + */ + pfree(principalKey); + principalKey = get_principal_key_from_cache(dbOid); + } +#endif + + if (principalKeyInfo) + pfree(principalKeyInfo); + + return principalKey; +} + +/* + * A public interface to get the principal key for the database. + * If the principal key is not present in the cache, it is loaded from + * the keyring and stored in the cache. + * When the principal key is not set for the database. The function returns + * throws an error. + * + * The caller must hold a `tde_lwlock_enc_keys` lock and pass its obtained mode + * via the `lockMode` param (LW_SHARED or LW_EXCLUSIVE). We expect the key to be + * most likely in the cache. So the caller should use LW_SHARED if there are no + * principal key changes planned as this is faster and creates less contention. + * But if there is no key in the cache, we have to switch the lock + * (LWLockRelease + LWLockAcquire) to LW_EXCLUSIVE mode to write the key to the + * cache. + */ +TDEPrincipalKey * +GetPrincipalKey(Oid dbOid, LWLockMode lockMode) +{ +#ifndef FRONTEND + TDEPrincipalKey *principalKey = NULL; + + Assert(LWLockHeldByMeInMode(tde_lwlock_enc_keys(), lockMode)); + /* We don't store global space key in cache */ + if (!TDEisInGlobalSpace(dbOid)) + { + principalKey = get_principal_key_from_cache(dbOid); + } + + if (likely(principalKey)) + { + return principalKey; + } + + if (lockMode != LW_EXCLUSIVE) + { + LWLockRelease(tde_lwlock_enc_keys()); + LWLockAcquire(tde_lwlock_enc_keys(), LW_EXCLUSIVE); + } +#endif + + return get_principal_key_from_keyring(dbOid); +} diff --git a/contrib/pg_tde/src/common/pg_tde_shmem.c b/contrib/pg_tde/src/common/pg_tde_shmem.c new file mode 100644 index 00000000000..1d027778550 --- /dev/null +++ b/contrib/pg_tde/src/common/pg_tde_shmem.c @@ -0,0 +1,149 @@ +/*------------------------------------------------------------------------- + * + * pg_tde_shmem.c + * Shared memory area to manage cache and locks. + * + * IDENTIFICATION + * contrib/pg_tde/src/pg_tde_shmem.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" +#include "storage/ipc.h" +#include "common/pg_tde_shmem.h" +#include "nodes/pg_list.h" +#include "storage/lwlock.h" + +typedef struct TdeSharedState +{ + LWLock *principalKeyLock; + int principalKeyHashTrancheId; + void *rawDsaArea; /* DSA area pointer to store cache hashes */ + dshash_table_handle principalKeyHashHandle; +} TdeSharedState; + +typedef struct TDELocalState +{ + TdeSharedState *sharedTdeState; + dsa_area **dsa; /* local dsa area for backend attached to the + * dsa area created by postmaster at startup. 
*/ + dshash_table *principalKeySharedHash; +} TDELocalState; + +static void tde_shmem_shutdown(int code, Datum arg); + +List *registeredShmemRequests = NIL; +bool shmemInited = false; + +void +RegisterShmemRequest(const TDEShmemSetupRoutine *routine) +{ + Assert(shmemInited == false); + registeredShmemRequests = lappend(registeredShmemRequests, (void *) routine); +} + +Size +TdeRequiredSharedMemorySize(void) +{ + Size sz = 0; + ListCell *lc; + + foreach(lc, registeredShmemRequests) + { + TDEShmemSetupRoutine *routine = (TDEShmemSetupRoutine *) lfirst(lc); + + if (routine->required_shared_mem_size) + sz = add_size(sz, routine->required_shared_mem_size()); + } + sz = add_size(sz, sizeof(TdeSharedState)); + return MAXALIGN(sz); +} + +int +TdeRequiredLocksCount(void) +{ + return TDE_LWLOCK_COUNT; +} + +void +TdeShmemInit(void) +{ + bool found; + TdeSharedState *tdeState; + Size required_shmem_size = TdeRequiredSharedMemorySize(); + + LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE); + /* Create or attach to the shared memory state */ + ereport(NOTICE, (errmsg("TdeShmemInit: requested %ld bytes", required_shmem_size))); + tdeState = ShmemInitStruct("pg_tde", required_shmem_size, &found); + + if (!found) + { + /* First time through ... */ + char *p = (char *) tdeState; + dsa_area *dsa; + ListCell *lc; + Size used_size = 0; + Size dsa_area_size; + + p += MAXALIGN(sizeof(TdeSharedState)); + used_size += MAXALIGN(sizeof(TdeSharedState)); + /* Now place all shared state structures */ + foreach(lc, registeredShmemRequests) + { + Size sz = 0; + TDEShmemSetupRoutine *routine = (TDEShmemSetupRoutine *) lfirst(lc); + + if (routine->init_shared_state) + { + sz = routine->init_shared_state(p); + used_size += MAXALIGN(sz); + p += MAXALIGN(sz); + Assert(used_size <= required_shmem_size); + } + } + /* Create DSA area */ + dsa_area_size = required_shmem_size - used_size; + Assert(dsa_area_size > 0); + tdeState->rawDsaArea = p; + + ereport(LOG, (errmsg("creating DSA area of size %lu", dsa_area_size))); + dsa = dsa_create_in_place(tdeState->rawDsaArea, + dsa_area_size, + LWLockNewTrancheId(), 0); + dsa_pin(dsa); + dsa_set_size_limit(dsa, dsa_area_size); + + /* Initialize all DSA area objects */ + foreach(lc, registeredShmemRequests) + { + TDEShmemSetupRoutine *routine = (TDEShmemSetupRoutine *) lfirst(lc); + + if (routine->init_dsa_area_objects) + routine->init_dsa_area_objects(dsa, tdeState->rawDsaArea); + } + ereport(LOG, (errmsg("setting no limit to DSA area of size %lu", dsa_area_size))); + + dsa_set_size_limit(dsa, -1); /* Let it grow outside the shared + * memory */ + + shmemInited = true; + } + LWLockRelease(AddinShmemInitLock); + on_shmem_exit(tde_shmem_shutdown, (Datum) 0); +} + +static void +tde_shmem_shutdown(int code, Datum arg) +{ + ListCell *lc; + + foreach(lc, registeredShmemRequests) + { + TDEShmemSetupRoutine *routine = (TDEShmemSetupRoutine *) lfirst(lc); + + if (routine->shmem_kill) + routine->shmem_kill(code, arg); + } +} diff --git a/contrib/pg_tde/src/common/pg_tde_utils.c b/contrib/pg_tde/src/common/pg_tde_utils.c new file mode 100644 index 00000000000..8c211ea8b4b --- /dev/null +++ b/contrib/pg_tde/src/common/pg_tde_utils.c @@ -0,0 +1,155 @@ +/*------------------------------------------------------------------------- + * + * pg_tde_utils.c + * Utility functions. 
+ * + * IDENTIFICATION + * contrib/pg_tde/src/pg_tde_utils.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#include "utils/snapmgr.h" +#include "commands/defrem.h" +#include "common/pg_tde_utils.h" +#include "miscadmin.h" +#include "catalog/tde_principal_key.h" +#include "access/pg_tde_tdemap.h" +#include "pg_tde.h" + +#ifndef FRONTEND +#include "access/genam.h" +#include "access/heapam.h" + +Oid +get_tde_basic_table_am_oid(void) +{ + return get_table_am_oid("tde_heap_basic", false); +} + +Oid +get_tde_table_am_oid(void) +{ + return get_table_am_oid("tde_heap", false); +} + +PG_FUNCTION_INFO_V1(pg_tde_internal_has_key); +Datum +pg_tde_internal_has_key(PG_FUNCTION_ARGS) +{ + Oid tableOid = InvalidOid; + Oid dbOid = MyDatabaseId; + TDEPrincipalKey* principalKey = NULL; + + if (!PG_ARGISNULL(0)) + { + tableOid = PG_GETARG_OID(0); + } + + if(tableOid == InvalidOid) + { + PG_RETURN_BOOL(false); + } + + LWLockAcquire(tde_lwlock_enc_keys(), LW_SHARED); + principalKey = GetPrincipalKey(dbOid, LW_SHARED); + LWLockRelease(tde_lwlock_enc_keys()); + + if(principalKey == NULL) + { + PG_RETURN_BOOL(false); + } + + { + LOCKMODE lockmode = AccessShareLock; + Relation rel = table_open(tableOid, lockmode); + RelKeyData *rkd; + + if ( + #ifdef PERCONA_EXT + rel->rd_rel->relam != get_tde_table_am_oid() && + #endif + rel->rd_rel->relam != get_tde_basic_table_am_oid()) + { + table_close(rel, lockmode); + PG_RETURN_BOOL(false); + } + + rkd = GetSMGRRelationKey(rel->rd_locator); + + table_close(rel, lockmode); + + PG_RETURN_BOOL(rkd != NULL); + } +} + +/* + * Returns the list of OIDs for all TDE tables in a database + */ +List * +get_all_tde_tables(void) +{ + Relation pg_class; + SysScanDesc scan; + HeapTuple tuple; + List *tde_tables = NIL; + Oid am_oid = get_tde_basic_table_am_oid(); + + /* Open the pg_class table */ + pg_class = table_open(RelationRelationId, AccessShareLock); + + /* Start a scan */ + scan = systable_beginscan(pg_class, ClassOidIndexId, true, + SnapshotSelf, 0, NULL); + + /* Iterate over all tuples in the table */ + while ((tuple = systable_getnext(scan)) != NULL) + { + Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple); + + /* Check if the table uses the specified access method */ + if (classForm->relam == am_oid) + { + /* Print the name of the table */ + tde_tables = lappend_oid(tde_tables, classForm->oid); + elog(DEBUG2, "Table %s uses the TDE access method.", NameStr(classForm->relname)); + } + } + + /* End the scan */ + systable_endscan(scan); + + /* Close the pg_class table */ + table_close(pg_class, AccessShareLock); + return tde_tables; +} + +int +get_tde_tables_count(void) +{ + List *tde_tables = get_all_tde_tables(); + int count = list_length(tde_tables); + + list_free(tde_tables); + return count; +} + +#endif /* !FRONTEND */ + +static char globalspace_dir[MAXPGPATH] = PG_TDE_DATA_DIR; + +void +pg_tde_set_data_dir(const char *dir) +{ + Assert(dir != NULL); + strncpy(globalspace_dir, dir, sizeof(globalspace_dir)); +} + +/* returns the palloc'd string */ +char * +pg_tde_get_tde_data_dir(void) +{ + return globalspace_dir; +} diff --git a/contrib/pg_tde/src/encryption/enc_aes.c b/contrib/pg_tde/src/encryption/enc_aes.c new file mode 100644 index 00000000000..50081dbf68f --- /dev/null +++ b/contrib/pg_tde/src/encryption/enc_aes.c @@ -0,0 +1,201 @@ + +#ifndef FRONTEND +#include "postgres.h" +#else +#include +#define Assert(p) assert(p) +#endif + +#include "encryption/enc_aes.h" + +#include +#include +#include 
+#include <assert.h>
+
+#include <openssl/ssl.h>
+#include <openssl/conf.h>
+#include <openssl/err.h>
+#include <openssl/evp.h>
+
+/* Implementation notes
+ * =====================
+ *
+ * AES-CTR in a nutshell:
+ * * Uses a counter, 0 for the first block, 1 for the next block, ...
+ * * Encrypts the counter using AES-ECB
+ * * XORs the data with the encrypted counter
+ *
+ * In our implementation, we want random access into any 16 byte part of the encrypted datafile.
+ * This is doable with OpenSSL and directly using AES-CTR, by passing the offset in the correct format as IV.
+ * Unfortunately this requires reinitializing the OpenSSL context for every seek, and that's a costly operation.
+ * Initialization and then decryption of 8192 bytes takes just double the time of initialization and decryption
+ * of 16 bytes.
+ *
+ * To mitigate this, we reimplement AES-CTR using AES-ECB:
+ * * We only initialize one ECB context per encryption key (e.g. table), and store this context
+ * * When a new block is requested, we use this stored context to encrypt the position information
+ * * And then XOR it with the data
+ *
+ * This is still not as fast as using 8k blocks, but already 2 orders of magnitude better than direct CTR with
+ * 16 byte blocks.
+ */
+
+
+const EVP_CIPHER *cipher = NULL;
+const EVP_CIPHER *cipher2 = NULL;
+int			cipher_block_size = 0;
+
+void
+AesInit(void)
+{
+	static int	initialized = 0;
+
+	if (!initialized)
+	{
+		OpenSSL_add_all_algorithms();
+		ERR_load_crypto_strings();
+
+		cipher = EVP_aes_128_cbc();
+		cipher_block_size = EVP_CIPHER_block_size(cipher);	/* == buffer size */
+		cipher2 = EVP_aes_128_ecb();
+
+		initialized = 1;
+	}
+}
+
+/* TODO: a few things could be optimized in this. It's good enough for a prototype. */
+static void
+AesRunCtr(EVP_CIPHER_CTX **ctxPtr, int enc, const unsigned char *key, const unsigned char *iv, const unsigned char *in, int in_len, unsigned char *out, int *out_len)
+{
+	if (*ctxPtr == NULL)
+	{
+		*ctxPtr = EVP_CIPHER_CTX_new();
+		EVP_CIPHER_CTX_init(*ctxPtr);
+
+		if (EVP_CipherInit_ex(*ctxPtr, cipher2, NULL, key, iv, enc) == 0)
+		{
+#ifdef FRONTEND
+			fprintf(stderr, "ERROR: EVP_CipherInit_ex failed. OpenSSL error: %s\n", ERR_error_string(ERR_get_error(), NULL));
+#else
+			ereport(ERROR,
+					(errmsg("EVP_CipherInit_ex failed. OpenSSL error: %s", ERR_error_string(ERR_get_error(), NULL))));
+#endif
+
+			return;
+		}
+
+		EVP_CIPHER_CTX_set_padding(*ctxPtr, 0);
+	}
+
+	if (EVP_CipherUpdate(*ctxPtr, out, out_len, in, in_len) == 0)
+	{
+#ifdef FRONTEND
+		fprintf(stderr, "ERROR: EVP_CipherUpdate failed. OpenSSL error: %s\n", ERR_error_string(ERR_get_error(), NULL));
+#else
+		ereport(ERROR,
+				(errmsg("EVP_CipherUpdate failed. OpenSSL error: %s", ERR_error_string(ERR_get_error(), NULL))));
+#endif
+		return;
+	}
+}
+
+static void
+AesRunCbc(int enc, const unsigned char *key, const unsigned char *iv, const unsigned char *in, int in_len, unsigned char *out, int *out_len)
+{
+	int			out_len_final = 0;
+	EVP_CIPHER_CTX *ctx = NULL;
+
+	ctx = EVP_CIPHER_CTX_new();
+	EVP_CIPHER_CTX_init(ctx);
+
+	if (EVP_CipherInit_ex(ctx, cipher, NULL, key, iv, enc) == 0)
+	{
+#ifdef FRONTEND
+		fprintf(stderr, "ERROR: EVP_CipherInit_ex failed. OpenSSL error: %s\n", ERR_error_string(ERR_get_error(), NULL));
+#else
+		ereport(ERROR,
+				(errmsg("EVP_CipherInit_ex failed.
OpenSSL error: %s", ERR_error_string(ERR_get_error(), NULL)))); +#endif + goto cleanup; + } + + EVP_CIPHER_CTX_set_padding(ctx, 0); + Assert(in_len % cipher_block_size == 0); + + if (EVP_CipherUpdate(ctx, out, out_len, in, in_len) == 0) + { +#ifdef FRONTEND + fprintf(stderr, "ERROR: EVP_CipherUpdate failed. OpenSSL error: %s\n", ERR_error_string(ERR_get_error(), NULL)); +#else + ereport(ERROR, + (errmsg("EVP_CipherUpdate failed. OpenSSL error: %s", ERR_error_string(ERR_get_error(), NULL)))); +#endif + goto cleanup; + } + + if (EVP_CipherFinal_ex(ctx, out + *out_len, &out_len_final) == 0) + { +#ifdef FRONTEND + fprintf(stderr, "ERROR: EVP_CipherFinal_ex failed. OpenSSL error: %s\n", ERR_error_string(ERR_get_error(), NULL)); +#else + ereport(ERROR, + (errmsg("EVP_CipherFinal_ex failed. OpenSSL error: %s", ERR_error_string(ERR_get_error(), NULL)))); +#endif + goto cleanup; + } + + /* + * We encrypt one block (16 bytes) Our expectation is that the result + * should also be 16 bytes, without any additional padding + */ + *out_len += out_len_final; + Assert(in_len == *out_len); + +cleanup: + EVP_CIPHER_CTX_cleanup(ctx); + EVP_CIPHER_CTX_free(ctx); +} + +void +AesEncrypt(const unsigned char *key, const unsigned char *iv, const unsigned char *in, int in_len, unsigned char *out, int *out_len) +{ + AesRunCbc(1, key, iv, in, in_len, out, out_len); +} + +void +AesDecrypt(const unsigned char *key, const unsigned char *iv, const unsigned char *in, int in_len, unsigned char *out, int *out_len) +{ + AesRunCbc(0, key, iv, in, in_len, out, out_len); +} + +/* This function assumes that the out buffer is big enough: at least (blockNumber2 - blockNumber1) * 16 bytes + */ +void +Aes128EncryptedZeroBlocks(void *ctxPtr, const unsigned char *key, const char *iv_prefix, uint64_t blockNumber1, uint64_t blockNumber2, unsigned char *out) +{ + const unsigned char iv[16] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; + + const unsigned dataLen = (blockNumber2 - blockNumber1) * 16; + int outLen; + + Assert(blockNumber2 >= blockNumber1); + + for (int j = blockNumber1; j < blockNumber2; ++j) + { + /* + * We have 16 bytes, and a 4 byte counter. The counter is the last 4 + * bytes. Technically, this isn't correct: the byte order of the + * counter depends on the endianness of the CPU running it. As this is + * a generic limitation of Postgres, it's fine. 
+		 */
+		memcpy(out + (16 * (j - blockNumber1)), iv_prefix, 12);
+		memcpy(out + (16 * (j - blockNumber1)) + 12, (char *) &j, 4);
+	}
+
+	AesRunCtr(ctxPtr, 1, key, iv, out, dataLen, out, &outLen);
+	Assert(outLen == dataLen);
+}
diff --git a/contrib/pg_tde/src/encryption/enc_tde.c b/contrib/pg_tde/src/encryption/enc_tde.c
new file mode 100644
index 00000000000..53906141d5b
--- /dev/null
+++ b/contrib/pg_tde/src/encryption/enc_tde.c
@@ -0,0 +1,298 @@
+#include "pg_tde_defines.h"
+
+#include "postgres.h"
+#include "utils/memutils.h"
+
+#include "access/pg_tde_slot.h"
+#include "access/pg_tde_tdemap.h"
+#include "encryption/enc_tde.h"
+#include "encryption/enc_aes.h"
+#include "storage/bufmgr.h"
+#include "keyring/keyring_api.h"
+
+#ifdef ENCRYPTION_DEBUG
+static void
+iv_prefix_debug(const char *iv_prefix, char *out_hex)
+{
+	for (int i = 0; i < 16; ++i)
+	{
+		sprintf(out_hex + i * 2, "%02x", (int) *(iv_prefix + i));
+	}
+	out_hex[32] = 0;
+}
+#endif
+
+#ifndef FRONTEND
+static void
+SetIVPrefix(ItemPointerData *ip, char *iv_prefix)
+{
+	/*
+	 * We have up to 16 bytes for the entire IV. The higher bytes (starting
+	 * with 15) are used for the incrementing counter; the lower bytes (in
+	 * this case, 0..5) are used for the tuple identification. Tuple
+	 * identification is based on the CTID, which currently is 48 bits in
+	 * postgres: 4 bytes for the block id and 2 bytes for the position id.
+	 */
+	iv_prefix[0] = ip->ip_blkid.bi_hi / 256;
+	iv_prefix[1] = ip->ip_blkid.bi_hi % 256;
+	iv_prefix[2] = ip->ip_blkid.bi_lo / 256;
+	iv_prefix[3] = ip->ip_blkid.bi_lo % 256;
+	iv_prefix[4] = ip->ip_posid / 256;
+	iv_prefix[5] = ip->ip_posid % 256;
+}
+#endif
+
+/*
+ * ================================================================
+ * ACTUAL ENCRYPTION/DECRYPTION FUNCTIONS
+ * ================================================================
+ */
+
+/*
+ * pg_tde_crypt_simple:
+ * Encrypts/decrypts `data` with a given `key`. The result is written to `out`.
+ * start_offset: is the absolute location of start of data in the file.
+ * This function assumes that everything fits into a single key-stream batch,
+ * and has an assertion ensuring this.
+ */
+static void
+pg_tde_crypt_simple(const char *iv_prefix, uint32 start_offset, const char *data, uint32 data_len, char *out, RelKeyData *key, const char *context)
+{
+	const uint64 aes_start_block = start_offset / AES_BLOCK_SIZE;
+	const uint64 aes_end_block = (start_offset + data_len + (AES_BLOCK_SIZE - 1)) / AES_BLOCK_SIZE;
+	const uint64 aes_block_no = start_offset % AES_BLOCK_SIZE;
+
+	unsigned char enc_key[DATA_BYTES_PER_AES_BATCH + AES_BLOCK_SIZE];
+
+	Assert(aes_end_block - aes_start_block <= NUM_AES_BLOCKS_IN_BATCH + 1);
+
+	Aes128EncryptedZeroBlocks(&(key->internal_key.ctx), key->internal_key.key, iv_prefix, aes_start_block, aes_end_block, enc_key);
+
+#ifdef ENCRYPTION_DEBUG
+	{
+		char		ivp_debug[33];
+
+		iv_prefix_debug(iv_prefix, ivp_debug);
+		ereport(LOG,
+				(errmsg("%s: Start offset: %lu Data_Len: %u, aes_start_block: %lu, aes_end_block: %lu, IV prefix: %s",
+						context ? context : "", start_offset, data_len, aes_start_block, aes_end_block, ivp_debug)));
+	}
+#endif
+
+	for (uint32 i = 0; i < data_len; ++i)
+	{
+		out[i] = data[i] ^ enc_key[i + aes_block_no];
+	}
+}
+
+
+/*
+ * pg_tde_crypt_complex:
+ * Encrypts/decrypts `data` with a given `key`. The result is written to `out`.
+ * start_offset: is the absolute location of start of data in the file.
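+ * The key stream is generated in batches of NUM_AES_BLOCKS_IN_BATCH (200)
+ * AES blocks, i.e. DATA_BYTES_PER_AES_BATCH bytes of key material at a time.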
+ * This is a generic function intended for large data that does not fit into
+ * a single key-stream batch.
+ */
+static void
+pg_tde_crypt_complex(const char *iv_prefix, uint32 start_offset, const char *data, uint32 data_len, char *out, RelKeyData *key, const char *context)
+{
+	const uint64 aes_start_block = start_offset / AES_BLOCK_SIZE;
+	const uint64 aes_end_block = (start_offset + data_len + (AES_BLOCK_SIZE - 1)) / AES_BLOCK_SIZE;
+	const uint64 aes_block_no = start_offset % AES_BLOCK_SIZE;
+	uint32		batch_no = 0;
+	uint32		data_index = 0;
+	uint64		batch_end_block;
+	uint32		current_batch_bytes;
+	unsigned char enc_key[DATA_BYTES_PER_AES_BATCH];
+
+	/* do max NUM_AES_BLOCKS_IN_BATCH blocks at a time */
+	for (uint64 batch_start_block = aes_start_block; batch_start_block < aes_end_block; batch_start_block += NUM_AES_BLOCKS_IN_BATCH)
+	{
+		batch_end_block = Min(batch_start_block + NUM_AES_BLOCKS_IN_BATCH, aes_end_block);
+
+		Aes128EncryptedZeroBlocks(&(key->internal_key.ctx), key->internal_key.key, iv_prefix, batch_start_block, batch_end_block, enc_key);
+#ifdef ENCRYPTION_DEBUG
+		{
+			char		ivp_debug[33];
+
+			iv_prefix_debug(iv_prefix, ivp_debug);
+			ereport(LOG,
+					(errmsg("%s: Batch-No:%d Start offset: %lu Data_Len: %u, batch_start_block: %lu, batch_end_block: %lu, IV prefix: %s",
+							context ? context : "", batch_no, start_offset, data_len, batch_start_block, batch_end_block, ivp_debug)));
+		}
+#endif
+
+		current_batch_bytes = ((batch_end_block - batch_start_block) * AES_BLOCK_SIZE)
+			- (batch_no > 0 ? 0 : aes_block_no);	/* the first batch skips
+													 * the first
+													 * `aes_block_no` bytes
+													 * of enc_key */
+		if ((data_index + current_batch_bytes) > data_len)
+			current_batch_bytes = data_len - data_index;
+
+		for (uint32 i = 0; i < current_batch_bytes; ++i)
+		{
+			/*
+			 * As the size of enc_key is always a multiple of 16, we start
+			 * from the `aes_block_no`-th index of enc_key[], so the N-th
+			 * data byte is always crypted with the same enc_key byte no
+			 * matter what start_offset the function was called with. For
+			 * example, start_offset = 10; MAX_AES_ENC_BATCH_KEY_SIZE = 6:
+			 *
+			 *   data:   [10 11 12 13 14 15 16]
+			 *   encKey: [...][0 1 2 3 4 5][0 1 2 3 4 5]
+			 *
+			 * so the 10th data byte is encoded with the 4th byte of the 2nd
+			 * enc_key etc. We need this shift so that each byte is coded the
+			 * same way regardless of the initial offset. The same data sent
+			 * to the function starting from offset 0:
+			 *
+			 *   data:   [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16]
+			 *   encKey: [0 1 2 3 4 5][0 1 2 3 4 5][0 1 2 3 4 5]
+			 *
+			 * again, the 10th data byte is encoded with the 4th byte of the
+			 * 2nd enc_key etc.
+			 */
+			uint32		enc_key_index = i + (batch_no > 0 ? 0 : aes_block_no);
+
+			out[data_index] = data[data_index] ^ enc_key[enc_key_index];
+
+			data_index++;
+		}
+		batch_no++;
+	}
+}
+
+/*
+ * pg_tde_crypt:
+ * Encrypts/decrypts `data` with a given `key`. The result is written to `out`.
+ * start_offset: is the absolute location of start of data in the file.
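+ * out: the output buffer; it must have room for at least data_len bytes.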
+ * This function simply selects between the two above variations based on the data length + */ +void +pg_tde_crypt(const char *iv_prefix, uint32 start_offset, const char *data, uint32 data_len, char *out, RelKeyData *key, const char *context) +{ + if (data_len >= DATA_BYTES_PER_AES_BATCH) + { + pg_tde_crypt_complex(iv_prefix, start_offset, data, data_len, out, key, context); + } + else + { + pg_tde_crypt_simple(iv_prefix, start_offset, data, data_len, out, key, context); + } +} + +#ifndef FRONTEND +/* + * pg_tde_crypt_tuple: + * Does the encryption/decryption of tuple data in place + * tuple: HeapTuple to be encrypted/decrypted + * out_tuple: to encrypt/decrypt into. If you want to do inplace encryption/decryption, pass tuple as out_tuple + * context: Optional context message to be used in debug log + * */ +void +pg_tde_crypt_tuple(HeapTuple tuple, HeapTuple out_tuple, RelKeyData *key, const char *context) +{ + char iv_prefix[16] = {0}; + uint32 data_len = tuple->t_len - tuple->t_data->t_hoff; + char *tup_data = (char *) tuple->t_data + tuple->t_data->t_hoff; + char *out_data = (char *) out_tuple->t_data + out_tuple->t_data->t_hoff; + + SetIVPrefix(&tuple->t_self, iv_prefix); + +#ifdef ENCRYPTION_DEBUG + ereport(LOG, + (errmsg("%s: table Oid: %u data size: %u", + context ? context : "", tuple->t_tableOid, + data_len))); +#endif + pg_tde_crypt(iv_prefix, 0, tup_data, data_len, out_data, key, context); +} + + +/* ================================================================ */ +/* HELPER FUNCTIONS FOR ENCRYPTION */ +/* ================================================================ */ + +OffsetNumber +PGTdePageAddItemExtended(RelFileLocator rel, + BlockNumber bn, + Page page, + Item item, + Size size, + OffsetNumber offsetNumber, + int flags) +{ + OffsetNumber off = PageAddItemExtended(page, item, size, offsetNumber, flags); + PageHeader phdr = (PageHeader) page; + unsigned long header_size = ((HeapTupleHeader) item)->t_hoff; + char iv_prefix[16] = {0,}; + char *toAddr = ((char *) phdr) + phdr->pd_upper + header_size; + char *data = item + header_size; + uint32 data_len = size - header_size; + + /* ctid stored in item is incorrect (not set) at this point */ + ItemPointerData ip; + RelKeyData *key = GetHeapBaiscRelationKey(rel); + + ItemPointerSet(&ip, bn, off); + + SetIVPrefix(&ip, iv_prefix); + + PG_TDE_ENCRYPT_PAGE_ITEM(iv_prefix, 0, data, data_len, toAddr, key); + return off; +} + +/* + * Provide a simple interface to encrypt a given key. + * + * The function pallocs and updates the p_enc_rel_key_data along with key bytes. The memory + * is allocated in the current memory context as this key should be ephemeral with a very + * short lifespan until it is written to disk. + */ +void +AesEncryptKey(const TDEPrincipalKey *principal_key, Oid dbOid, RelKeyData *rel_key_data, RelKeyData **p_enc_rel_key_data, size_t *enc_key_bytes) +{ + unsigned char iv[16] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; + + /* Ensure we are getting a valid pointer here */ + Assert(principal_key); + + memcpy(iv, &dbOid, sizeof(Oid)); + + *p_enc_rel_key_data = (RelKeyData *) palloc(sizeof(RelKeyData)); + memcpy(*p_enc_rel_key_data, rel_key_data, sizeof(RelKeyData)); + + AesEncrypt(principal_key->keyData, iv, ((unsigned char *) &rel_key_data->internal_key), INTERNAL_KEY_LEN, ((unsigned char *) &(*p_enc_rel_key_data)->internal_key), (int *) enc_key_bytes); +} + +#endif /* FRONTEND */ + +/* + * Provide a simple interface to decrypt a given key. 
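+ * (This is the inverse of AesEncryptKey above: both derive the 16-byte IV
+ * from the database Oid, so a relation key decrypts correctly only with the
+ * dbOid it was encrypted under.)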
+ * + * The function pallocs and updates the p_rel_key_data along with key bytes. It's important + * to note that memory is allocated in the TopMemoryContext so we expect this to be added + * to our key cache. + */ +void +AesDecryptKey(const TDEPrincipalKey *principal_key, Oid dbOid, RelKeyData **p_rel_key_data, RelKeyData *enc_rel_key_data, size_t *key_bytes) +{ + unsigned char iv[16] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; + +#ifndef FRONTEND + MemoryContext oldcontext; +#endif + + /* Ensure we are getting a valid pointer here */ + Assert(principal_key); + + memcpy(iv, &dbOid, sizeof(Oid)); + +#ifndef FRONTEND + oldcontext = MemoryContextSwitchTo(TopMemoryContext); +#endif + + *p_rel_key_data = (RelKeyData *) palloc(sizeof(RelKeyData)); + +#ifndef FRONTEND + MemoryContextSwitchTo(oldcontext); +#endif + + /* Fill in the structure */ + memcpy(*p_rel_key_data, enc_rel_key_data, sizeof(RelKeyData)); + (*p_rel_key_data)->internal_key.ctx = NULL; + + AesDecrypt(principal_key->keyData, iv, ((unsigned char *) &enc_rel_key_data->internal_key), INTERNAL_KEY_LEN, ((unsigned char *) &(*p_rel_key_data)->internal_key), (int *) key_bytes); +} diff --git a/contrib/pg_tde/src/include/access/pg_tde_ddl.h b/contrib/pg_tde/src/include/access/pg_tde_ddl.h new file mode 100644 index 00000000000..ed58588f38a --- /dev/null +++ b/contrib/pg_tde/src/include/access/pg_tde_ddl.h @@ -0,0 +1,17 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_ddl.h + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/pg_tde_ddl.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_DDL_H +#define PG_TDE_DDL_H + +extern void SetupTdeDDLHooks(void); + +#endif /* PG_TDE_DDL_H */ diff --git a/contrib/pg_tde/src/include/access/pg_tde_slot.h b/contrib/pg_tde/src/include/access/pg_tde_slot.h new file mode 100644 index 00000000000..9a8ed82368c --- /dev/null +++ b/contrib/pg_tde/src/include/access/pg_tde_slot.h @@ -0,0 +1,50 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_slot.h + * TupleSlot implementation for TDE + * + * src/include/access/pg_tde_slot.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_SLOT_H +#define PG_TDE_SLOT_H + + +#include "postgres.h" +#include "executor/tuptable.h" +#include "access/pg_tde_tdemap.h" +#include "utils/relcache.h" + +/* heap tuple residing in a buffer */ +typedef struct TDEBufferHeapTupleTableSlot +{ + pg_node_attr(abstract) + + HeapTupleTableSlot base; + + /* + * If buffer is not InvalidBuffer, then the slot is holding a pin on the + * indicated buffer page; drop the pin when we release the slot's + * reference to that buffer. (TTS_FLAG_SHOULDFREE should not be set in + * such a case, since presumably base.tuple is pointing into the buffer.) 
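+	 *
+	 * decrypted_buffer holds the decrypted copy of the tuple data;
+	 * base.tuple is expected to point into it once the slot's contents
+	 * have been decrypted.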
+	 */
+	Buffer		buffer;			/* tuple's buffer, or InvalidBuffer */
+	char		decrypted_buffer[BLCKSZ];
+	RelKeyData *cached_relation_key;
+} TDEBufferHeapTupleTableSlot;
+
+extern PGDLLIMPORT const TupleTableSlotOps TTSOpsTDEBufferHeapTuple;
+
+#define TTS_IS_TDE_BUFFERTUPLE(slot) ((slot)->tts_ops == &TTSOpsTDEBufferHeapTuple)
+
+extern TupleTableSlot *PGTdeExecStorePinnedBufferHeapTuple(Relation rel,
+														   HeapTuple tuple,
+														   TupleTableSlot *slot,
+														   Buffer buffer);
+extern TupleTableSlot *PGTdeExecStoreBufferHeapTuple(Relation rel,
+													 HeapTuple tuple,
+													 TupleTableSlot *slot,
+													 Buffer buffer);
+
+#endif							/* PG_TDE_SLOT_H */
diff --git a/contrib/pg_tde/src/include/access/pg_tde_tdemap.h b/contrib/pg_tde/src/include/access/pg_tde_tdemap.h
new file mode 100644
index 00000000000..bba4fba62f2
--- /dev/null
+++ b/contrib/pg_tde/src/include/access/pg_tde_tdemap.h
@@ -0,0 +1,94 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_tdemap.h
+ *	  TDE relation fork manipulation.
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_MAP_H
+#define PG_TDE_MAP_H
+
+#include "pg_tde.h"
+#include "utils/rel.h"
+#include "access/xlog_internal.h"
+#include "catalog/tde_principal_key.h"
+#include "common/pg_tde_utils.h"
+#include "storage/relfilelocator.h"
+
+/* Map entry flags */
+#define MAP_ENTRY_EMPTY 0x00
+#define TDE_KEY_TYPE_HEAP_BASIC 0x01
+#define TDE_KEY_TYPE_SMGR 0x02
+#define TDE_KEY_TYPE_GLOBAL 0x04
+#define MAP_ENTRY_VALID (TDE_KEY_TYPE_HEAP_BASIC | TDE_KEY_TYPE_SMGR | TDE_KEY_TYPE_GLOBAL)
+
+typedef struct InternalKey
+{
+	/*
+	 * DO NOT re-arrange fields!
+	 * Any changes should be aligned with pg_tde_read/write_one_keydata()
+	 */
+	uint8		key[INTERNAL_KEY_LEN];
+	uint32		rel_type;
+
+	void	   *ctx;
+} InternalKey;
+
+#define INTERNAL_KEY_DAT_LEN offsetof(InternalKey, ctx)
+
+typedef struct RelKeyData
+{
+	TDEPrincipalKeyId principal_key_id;
+	InternalKey internal_key;
+} RelKeyData;
+
+
+typedef struct XLogRelKey
+{
+	RelFileLocator rlocator;
+	RelKeyData	relKey;
+	TDEPrincipalKeyInfo pkInfo;
+} XLogRelKey;
+
+extern RelKeyData *pg_tde_create_smgr_key(const RelFileLocator *newrlocator);
+extern RelKeyData *pg_tde_create_global_key(const RelFileLocator *newrlocator);
+extern RelKeyData *pg_tde_create_heap_basic_key(const RelFileLocator *newrlocator);
+extern RelKeyData *pg_tde_create_key_map_entry(const RelFileLocator *newrlocator, uint32 entry_type);
+extern void pg_tde_write_key_map_entry(const RelFileLocator *rlocator, RelKeyData *enc_rel_key_data, TDEPrincipalKeyInfo *principal_key_info);
+extern void pg_tde_delete_key_map_entry(const RelFileLocator *rlocator, uint32 key_type);
+extern void pg_tde_free_key_map_entry(const RelFileLocator *rlocator, uint32 key_type, off_t offset);
+
+extern RelKeyData *GetRelationKey(RelFileLocator rel, uint32 entry_type, bool no_map_ok);
+extern RelKeyData *GetSMGRRelationKey(RelFileLocator rel);
+extern RelKeyData *GetHeapBaiscRelationKey(RelFileLocator rel);
+extern RelKeyData *GetTdeGlobaleRelationKey(RelFileLocator rel);
+
+extern void pg_tde_delete_tde_files(Oid dbOid);
+
+extern TDEPrincipalKeyInfo *pg_tde_get_principal_key_info(Oid dbOid);
+extern bool pg_tde_save_principal_key(TDEPrincipalKeyInfo *principal_key_info, bool truncate_existing, bool update_header);
+extern bool pg_tde_perform_rotate_key(TDEPrincipalKey *principal_key, TDEPrincipalKey *new_principal_key);
+extern bool pg_tde_write_map_keydata_files(off_t map_size, char *m_file_data, off_t keydata_size, char *k_file_data);
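+
+/*
+ * Illustrative lifecycle (a sketch of how the declarations here fit
+ * together): for a new relation, pg_tde_create_key_map_entry() generates an
+ * InternalKey, encrypts it with the database's principal key
+ * (tde_encrypt_rel_key below) and persists it via
+ * pg_tde_write_key_map_entry(); readers later recover it with
+ * GetRelationKey(), which consults the map/keydata file pair named by
+ * PG_TDE_MAP_FILENAME and PG_TDE_KEYDATA_FILENAME.
+ */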
+extern RelKeyData *tde_create_rel_key(RelFileNumber rel_num, InternalKey *key, TDEPrincipalKeyInfo *principal_key_info); +extern RelKeyData *tde_encrypt_rel_key(TDEPrincipalKey *principal_key, RelKeyData *rel_key_data, Oid dbOid); +extern RelKeyData *tde_decrypt_rel_key(TDEPrincipalKey *principal_key, RelKeyData *enc_rel_key_data, Oid dbOid); +extern RelKeyData *pg_tde_get_key_from_file(const RelFileLocator *rlocator, uint32 key_type, bool no_map_ok); +extern void pg_tde_move_rel_key(const RelFileLocator *newrlocator, const RelFileLocator *oldrlocator); + +#define PG_TDE_MAP_FILENAME "pg_tde_%d_map" +#define PG_TDE_KEYDATA_FILENAME "pg_tde_%d_dat" + +static inline void +pg_tde_set_db_file_paths(Oid dbOid, char *map_path, char *keydata_path) +{ + if (map_path) + join_path_components(map_path, pg_tde_get_tde_data_dir(), psprintf(PG_TDE_MAP_FILENAME, dbOid)); + if (keydata_path) + join_path_components(keydata_path, pg_tde_get_tde_data_dir(), psprintf(PG_TDE_KEYDATA_FILENAME, dbOid)); +} + +const char *tde_sprint_key(InternalKey *k); + +extern RelKeyData *pg_tde_put_key_into_cache(RelFileNumber rel_num, RelKeyData *key); + +#endif /* PG_TDE_MAP_H */ diff --git a/contrib/pg_tde/src/include/access/pg_tde_xlog.h b/contrib/pg_tde/src/include/access/pg_tde_xlog.h new file mode 100644 index 00000000000..c064e017862 --- /dev/null +++ b/contrib/pg_tde/src/include/access/pg_tde_xlog.h @@ -0,0 +1,43 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_xlog.h + * TDE XLog resource manager + * + *------------------------------------------------------------------------- + */ + +#ifndef PG_TDE_XLOG_H +#define PG_TDE_XLOG_H + +#ifndef FRONTEND + +#include "postgres.h" +#include "access/xlog.h" +#include "access/xlog_internal.h" + +/* TDE XLOG resource manager */ +#define XLOG_TDE_ADD_RELATION_KEY 0x00 +#define XLOG_TDE_ADD_PRINCIPAL_KEY 0x10 +#define XLOG_TDE_EXTENSION_INSTALL_KEY 0x20 +#define XLOG_TDE_ROTATE_KEY 0x30 +#define XLOG_TDE_ADD_KEY_PROVIDER_KEY 0x40 +#define XLOG_TDE_FREE_MAP_ENTRY 0x50 +#define XLOG_TDE_UPDATE_PRINCIPAL_KEY 0x60 + +/* ID 140 is registered for Percona TDE extension: https://wiki.postgresql.org/wiki/CustomWALResourceManagers */ +#define RM_TDERMGR_ID 140 +#define RM_TDERMGR_NAME "test_tdeheap_custom_rmgr" + +extern void tdeheap_rmgr_redo(XLogReaderState *record); +extern void tdeheap_rmgr_desc(StringInfo buf, XLogReaderState *record); +extern const char *tdeheap_rmgr_identify(uint8 info); + +static const RmgrData tdeheap_rmgr = { + .rm_name = RM_TDERMGR_NAME, + .rm_redo = tdeheap_rmgr_redo, + .rm_desc = tdeheap_rmgr_desc, + .rm_identify = tdeheap_rmgr_identify +}; + +#endif /* !FRONTEND */ +#endif /* PG_TDE_XLOG_H */ diff --git a/contrib/pg_tde/src/include/access/pg_tde_xlog_encrypt.h b/contrib/pg_tde/src/include/access/pg_tde_xlog_encrypt.h new file mode 100644 index 00000000000..4812a9cd5a0 --- /dev/null +++ b/contrib/pg_tde/src/include/access/pg_tde_xlog_encrypt.h @@ -0,0 +1,35 @@ +/*------------------------------------------------------------------------- + * + * pg_tde_xlog_encrypt.h + * Encrypted XLog storage manager + * + *------------------------------------------------------------------------- + */ + +#ifndef PG_TDE_XLOGENCRYPT_H +#define PG_TDE_XLOGENCRYPT_H + +#include "postgres.h" +#ifdef PERCONA_EXT +#include "access/xlog_smgr.h" + +extern Size TDEXLogEncryptBuffSize(void); + +#define XLOG_TDE_ENC_BUFF_ALIGNED_SIZE add_size(TDEXLogEncryptBuffSize(), PG_IO_ALIGN_SIZE) + +extern void TDEXLogShmemInit(void); + +extern ssize_t 
tdeheap_xlog_seg_read(int fd, void *buf, size_t count, off_t offset);
+extern ssize_t tdeheap_xlog_seg_write(int fd, const void *buf, size_t count, off_t offset);
+
+static const XLogSmgr tde_xlog_smgr = {
+	.seg_read = tdeheap_xlog_seg_read,
+	.seg_write = tdeheap_xlog_seg_write,
+};
+
+extern void TDEXLogSmgrInit(void);
+extern void XLogInitGUC(void);
+
+#endif							/* PERCONA_EXT */
+
+#endif							/* PG_TDE_XLOGENCRYPT_H */
diff --git a/contrib/pg_tde/src/include/access/pg_tde_xlog_encrypt_fe.h b/contrib/pg_tde/src/include/access/pg_tde_xlog_encrypt_fe.h
new file mode 100644
index 00000000000..4717afb7fb7
--- /dev/null
+++ b/contrib/pg_tde/src/include/access/pg_tde_xlog_encrypt_fe.h
@@ -0,0 +1,31 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_xlog_encrypt_fe.h
+ *	  Frontend definitions for encrypted XLog storage manager
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef PG_TDE_XLOGENCRYPT_FE_H
+#define PG_TDE_XLOGENCRYPT_FE_H
+
+#ifdef PERCONA_EXT
+#include "access/pg_tde_xlog_encrypt.h"
+#include "catalog/tde_global_space.h"
+#include "encryption/enc_aes.h"
+#include "keyring/keyring_file.h"
+#include "keyring/keyring_vault.h"
+#include "keyring/keyring_kmip.h"
+
+/* A frontend has to call this if it needs to read an encrypted XLog */
+#define TDE_XLOG_INIT(kring_dir) \
+	AesInit(); \
+	InstallFileKeyring(); \
+	InstallVaultV2Keyring(); \
+	InstallKmipKeyring(); \
+	TDEInitGlobalKeys(kring_dir); \
+	TDEXLogSmgrInit()
+
+#endif							/* PERCONA_EXT */
+
+#endif							/* PG_TDE_XLOGENCRYPT_FE_H */
diff --git a/contrib/pg_tde/src/include/catalog/keyring_min.h b/contrib/pg_tde/src/include/catalog/keyring_min.h
new file mode 100644
index 00000000000..00c1c420c6a
--- /dev/null
+++ b/contrib/pg_tde/src/include/catalog/keyring_min.h
@@ -0,0 +1,102 @@
+
+#ifndef KEYRING_MIN_H_
+#define KEYRING_MIN_H_
+
+#include "pg_config_manual.h"
+
+/* This is a minimal header that doesn't depend on postgres headers to avoid a type conflict with libkmip */
+
+typedef unsigned int Oid;
+
+#define MAX_PROVIDER_NAME_LEN 128	/* pg_tde_key_provider's provider_name size */
+#define MAX_VAULT_V2_KEY_LEN 128	/* From the HashiCorp docs */
+#define MAX_KEYRING_OPTION_LEN 1024
+typedef enum ProviderType
+{
+	UNKNOWN_KEY_PROVIDER,
+	FILE_KEY_PROVIDER,
+	VAULT_V2_KEY_PROVIDER,
+	KMIP_KEY_PROVIDER,
+} ProviderType;
+
+#define TDE_KEY_NAME_LEN 256
+#define MAX_KEY_DATA_SIZE 32	/* maximum 256 bit encryption */
+#define INTERNAL_KEY_LEN 16
+
+typedef struct keyName
+{
+	char		name[TDE_KEY_NAME_LEN];
+} keyName;
+
+typedef struct keyData
+{
+	unsigned char data[MAX_KEY_DATA_SIZE];
+	unsigned	len;
+} keyData;
+
+typedef struct keyInfo
+{
+	keyName		name;
+	keyData		data;
+} keyInfo;
+
+typedef enum KeyringReturnCodes
+{
+	KEYRING_CODE_SUCCESS = 0,
+	KEYRING_CODE_INVALID_PROVIDER,
+	KEYRING_CODE_RESOURCE_NOT_AVAILABLE,
+	KEYRING_CODE_RESOURCE_NOT_ACCESSABLE,
+	KEYRING_CODE_INVALID_OPERATION,
+	KEYRING_CODE_INVALID_RESPONSE,
+	KEYRING_CODE_INVALID_KEY_SIZE,
+	KEYRING_CODE_DATA_CORRUPTED
+} KeyringReturnCodes;
+
+/* Base type for all keyrings */
+typedef struct GenericKeyring
+{
+	ProviderType type;			/* Must be the first field */
+	Oid			key_id;
+	char		provider_name[MAX_PROVIDER_NAME_LEN];
+	char		options[MAX_KEYRING_OPTION_LEN];	/* User provided options string */
+} GenericKeyring;
+
+typedef struct TDEKeyringRoutine
+{
+	keyInfo    *(*keyring_get_key) (GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes *returnCode);
+	KeyringReturnCodes (*keyring_store_key) (GenericKeyring *keyring, keyInfo *key, bool throw_error);
+} TDEKeyringRoutine;
+
+/*
+ * Keyring type name must be in sync with the catalog table
+ * definition in the pg_tde--1.0 SQL
+ */
+#define FILE_KEYRING_TYPE "file"
+#define VAULTV2_KEYRING_TYPE "vault-v2"
+#define KMIP_KEYRING_TYPE "kmip"
+
+typedef struct FileKeyring
+{
+	GenericKeyring keyring;		/* Must be the first field */
+	char		file_name[MAXPGPATH];
+} FileKeyring;
+
+typedef struct VaultV2Keyring
+{
+	GenericKeyring keyring;		/* Must be the first field */
+	char		vault_token[MAX_VAULT_V2_KEY_LEN];
+	char		vault_url[MAXPGPATH];
+	char		vault_ca_path[MAXPGPATH];
+	char		vault_mount_path[MAXPGPATH];
+} VaultV2Keyring;
+
+typedef struct KmipKeyring
+{
+	GenericKeyring keyring;		/* Must be the first field */
+	char		kmip_host[MAXPGPATH];
+	char		kmip_port[32];
+	char		kmip_ca_path[MAXPGPATH];
+	char		kmip_cert_path[MAXPGPATH];
+} KmipKeyring;
+
+#endif
\ No newline at end of file
diff --git a/contrib/pg_tde/src/include/catalog/tde_global_space.h b/contrib/pg_tde/src/include/catalog/tde_global_space.h
new file mode 100644
index 00000000000..0656ef4d2d7
--- /dev/null
+++ b/contrib/pg_tde/src/include/catalog/tde_global_space.h
@@ -0,0 +1,38 @@
+/*-------------------------------------------------------------------------
+ *
+ * tde_global_space.h
+ *	  Global catalog key management
+ *
+ * src/include/catalog/tde_global_space.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef TDE_GLOBAL_CATALOG_H
+#define TDE_GLOBAL_CATALOG_H
+
+#include "postgres.h"
+#include "catalog/pg_tablespace_d.h"
+
+#include "access/pg_tde_tdemap.h"
+#include "catalog/tde_principal_key.h"
+
+/*
+ * Needed for global data (WAL etc.) key identification in caches and
+ * storage. We use OIDs of SQL operators, so no overlap with the "real"
+ * catalog objects is possible.
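+ * For example, WAL keys are addressed through the locator
+ * (GLOBALTABLESPACE_OID, GLOBAL_DATA_TDE_OID, XLOG_TDE_OID), which
+ * GLOBAL_SPACE_RLOCATOR(XLOG_TDE_OID) below builds.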
+ */ +#define GLOBAL_DATA_TDE_OID 607 +#define XLOG_TDE_OID 608 + +#define GLOBAL_SPACE_RLOCATOR(_obj_oid) (RelFileLocator) { \ + GLOBALTABLESPACE_OID, \ + GLOBAL_DATA_TDE_OID, \ + _obj_oid \ +} + +#define TDEisInGlobalSpace(dbOid) (dbOid == GLOBAL_DATA_TDE_OID) + +extern void TDEInitGlobalKeys(const char *dir); + +#endif /* TDE_GLOBAL_CATALOG_H */ diff --git a/contrib/pg_tde/src/include/catalog/tde_keyring.h b/contrib/pg_tde/src/include/catalog/tde_keyring.h new file mode 100644 index 00000000000..48d9b131a1b --- /dev/null +++ b/contrib/pg_tde/src/include/catalog/tde_keyring.h @@ -0,0 +1,47 @@ +/*------------------------------------------------------------------------- + * + * tde_keyring.h + * TDE catalog handling + * + * src/include/catalog/tde_keyring.h + * + *------------------------------------------------------------------------- + */ +#ifndef TDE_KEYRING_H +#define TDE_KEYRING_H + +#include "postgres.h" +#include "nodes/pg_list.h" +#include "catalog/keyring_min.h" + +#define PG_TDE_NAMESPACE_NAME "percona_tde" +#define PG_TDE_KEY_PROVIDER_CAT_NAME "pg_tde_key_provider" + +/* This record goes into key provider info file */ +typedef struct KeyringProvideRecord +{ + int provider_id; + char provider_name[MAX_PROVIDER_NAME_LEN]; + char options[MAX_KEYRING_OPTION_LEN]; + ProviderType provider_type; +} KeyringProvideRecord; +typedef struct KeyringProviderXLRecord +{ + Oid database_id; + off_t offset_in_file; + KeyringProvideRecord provider; +} KeyringProviderXLRecord; + +extern List *GetAllKeyringProviders(Oid dbOid); +extern GenericKeyring *GetKeyProviderByName(const char *provider_name, Oid dbOid); +extern GenericKeyring *GetKeyProviderByID(int provider_id, Oid dbOid); +extern ProviderType get_keyring_provider_from_typename(char *provider_type); +extern void cleanup_key_provider_info(Oid databaseId); +extern void InitializeKeyProviderInfo(void); +extern uint32 save_new_key_provider_info(KeyringProvideRecord *provider, + Oid databaseId, bool write_xlog); +extern uint32 redo_key_provider_info(KeyringProviderXLRecord *xlrec); + +extern bool ParseKeyringJSONOptions(ProviderType provider_type, void *out_opts, + char *in_buf, int buf_len); +#endif /* TDE_KEYRING_H */ diff --git a/contrib/pg_tde/src/include/catalog/tde_principal_key.h b/contrib/pg_tde/src/include/catalog/tde_principal_key.h new file mode 100644 index 00000000000..bc8239d6df4 --- /dev/null +++ b/contrib/pg_tde/src/include/catalog/tde_principal_key.h @@ -0,0 +1,78 @@ +/*------------------------------------------------------------------------- + * + * tde_principal_key.h + * TDE principal key handling + * + * src/include/catalog/tde_principal_key.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_PRINCIPAL_KEY_H +#define PG_TDE_PRINCIPAL_KEY_H + + +#include "postgres.h" +#include "catalog/tde_keyring.h" +#include "keyring/keyring_api.h" +#include "nodes/pg_list.h" +#ifndef FRONTEND +#include "storage/lwlock.h" +#endif + +#define DEFAULT_PRINCIPAL_KEY_VERSION 1 +#define PRINCIPAL_KEY_NAME_LEN TDE_KEY_NAME_LEN +#define MAX_PRINCIPAL_KEY_VERSION_NUM 100000 + +typedef struct TDEPrincipalKeyId +{ + uint32 version; + char name[PRINCIPAL_KEY_NAME_LEN]; + char versioned_name[PRINCIPAL_KEY_NAME_LEN + 4]; +} TDEPrincipalKeyId; + +typedef struct TDEPrincipalKeyInfo +{ + Oid databaseId; + Oid userId; + Oid keyringId; + struct timeval creationTime; + TDEPrincipalKeyId keyId; +} TDEPrincipalKeyInfo; + +typedef struct TDEPrincipalKey +{ + TDEPrincipalKeyInfo keyInfo; + unsigned char 
keyData[MAX_KEY_DATA_SIZE];
+	uint32		keyLength;
+} TDEPrincipalKey;
+
+typedef struct XLogPrincipalKeyRotate
+{
+	Oid			databaseId;
+	off_t		map_size;
+	off_t		keydata_size;
+	char		buff[FLEXIBLE_ARRAY_MEMBER];
+} XLogPrincipalKeyRotate;
+
+#define SizeoOfXLogPrincipalKeyRotate offsetof(XLogPrincipalKeyRotate, buff)
+
+extern void InitializePrincipalKeyInfo(void);
+extern void cleanup_principal_key_info(Oid databaseId);
+
+#ifndef FRONTEND
+extern LWLock *tde_lwlock_enc_keys(void);
+extern TDEPrincipalKey *GetPrincipalKey(Oid dbOid, LWLockMode lockMode);
+#else
+extern TDEPrincipalKey *GetPrincipalKey(Oid dbOid, void *lockMode);
+#endif
+
+extern bool save_principal_key_info(TDEPrincipalKeyInfo *principalKeyInfo);
+extern bool update_principal_key_info(TDEPrincipalKeyInfo *principal_key_info);
+
+extern Oid	GetPrincipalKeyProviderId(void);
+extern bool SetPrincipalKey(const char *key_name, const char *provider_name, bool ensure_new_key);
+extern bool AlterPrincipalKeyKeyring(const char *provider_name);
+extern bool RotatePrincipalKey(TDEPrincipalKey *current_key, const char *new_key_name, const char *new_provider_name, bool ensure_new_key);
+extern bool xl_tde_perform_rotate_key(XLogPrincipalKeyRotate *xlrec);
+
+#endif							/* PG_TDE_PRINCIPAL_KEY_H */
diff --git a/contrib/pg_tde/src/include/common/pg_tde_shmem.h b/contrib/pg_tde/src/include/common/pg_tde_shmem.h
new file mode 100644
index 00000000000..db555bf22cb
--- /dev/null
+++ b/contrib/pg_tde/src/include/common/pg_tde_shmem.h
@@ -0,0 +1,63 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_shmem.h
+ *	  src/include/common/pg_tde_shmem.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_SHMEM_H
+#define PG_TDE_SHMEM_H
+
+#include "postgres.h"
+#include "storage/shmem.h"
+#include "storage/lwlock.h"
+#include "lib/dshash.h"
+#include "utils/dsa.h"
+
+#define TDE_TRANCHE_NAME "pg_tde_tranche"
+
+typedef enum
+{
+	TDE_LWLOCK_ENC_KEY,
+	TDE_LWLOCK_PI_FILES,
+
+	/* Must be the last entry in the enum */
+	TDE_LWLOCK_COUNT
+} TDELockTypes;
+
+typedef struct TDEShmemSetupRoutine
+{
+	/*
+	 * init_shared_state gets called at extension load time; in this callback
+	 * you can initialize the data structures that need to be placed in
+	 * shared memory. The callback must return the size of the shared memory
+	 * area acquired. The argument to the function is the start of the shared
+	 * memory address range that can be used to store the shared data
+	 * structures.
+	 */
+	Size		(*init_shared_state) (void *raw_dsa_area);
+
+	/*
+	 * shmem_kill gets called at the time of postmaster shutdown
+	 */
+	void		(*shmem_kill) (int code, Datum arg);
+
+	/*
+	 * The callback must return the size of the shared memory acquired.
+	 */
+	Size		(*required_shared_mem_size) (void);
+
+	/*
+	 * Gets called after all shared memory structures are initialized; here
+	 * you can create shared memory hash tables or any other shared objects
+	 * that need to live in the DSA area.
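+	 *
+	 * A minimal registration sketch (illustrative only; the my_* callbacks
+	 * are hypothetical):
+	 *
+	 *   static const TDEShmemSetupRoutine my_routine = {
+	 *       .required_shared_mem_size = my_size,
+	 *       .init_shared_state = my_init_state,
+	 *       .init_dsa_area_objects = my_init_dsa_objects,
+	 *       .shmem_kill = my_cleanup,
+	 *   };
+	 *   RegisterShmemRequest(&my_routine);   (must run before TdeShmemInit())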
+	 */
+	void		(*init_dsa_area_objects) (dsa_area *dsa, void *raw_dsa_area);
+} TDEShmemSetupRoutine;
+
+/* Interface to register the shared memory requests */
+extern void RegisterShmemRequest(const TDEShmemSetupRoutine *routine);
+extern void TdeShmemInit(void);
+extern Size TdeRequiredSharedMemorySize(void);
+extern int	TdeRequiredLocksCount(void);
+
+#endif							/* PG_TDE_SHMEM_H */
diff --git a/contrib/pg_tde/src/include/common/pg_tde_utils.h b/contrib/pg_tde/src/include/common/pg_tde_utils.h
new file mode 100644
index 00000000000..02909fada59
--- /dev/null
+++ b/contrib/pg_tde/src/include/common/pg_tde_utils.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_utils.h
+ *	  src/include/common/pg_tde_utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_UTILS_H
+#define PG_TDE_UTILS_H
+
+#include "postgres.h"
+
+#ifndef FRONTEND
+#include "nodes/pg_list.h"
+
+extern Oid	get_tde_basic_table_am_oid(void);
+extern Oid	get_tde_table_am_oid(void);
+extern List *get_all_tde_tables(void);
+extern int	get_tde_tables_count(void);
+#endif							/* !FRONTEND */
+
+extern void pg_tde_set_data_dir(const char *dir);
+extern char *pg_tde_get_tde_data_dir(void);
+#endif							/* PG_TDE_UTILS_H */
diff --git a/contrib/pg_tde/src/include/config.h b/contrib/pg_tde/src/include/config.h
new file mode 100644
index 00000000000..dc52741142e
--- /dev/null
+++ b/contrib/pg_tde/src/include/config.h
@@ -0,0 +1,12 @@
+#ifndef TDE_CONFIG_H
+#define TDE_CONFIG_H
+
+#define PACKAGE_NAME "pg_tde"
+#define PACKAGE_VERSION "1.0.0-beta2"
+
+#define PACKAGE_STRING PACKAGE_NAME" "PACKAGE_VERSION
+
+#define PACKAGE_TARNAME "pg_tde"
+#define PACKAGE_BUGREPORT "https://github.com/percona/pg_tde/issues"
+
+#endif							/* TDE_CONFIG_H */
diff --git a/contrib/pg_tde/src/include/encryption/enc_aes.h b/contrib/pg_tde/src/include/encryption/enc_aes.h
new file mode 100644
index 00000000000..5d3901e3736
--- /dev/null
+++ b/contrib/pg_tde/src/include/encryption/enc_aes.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * enc_aes.h
+ *	  AES Encryption / Decryption routines using OpenSSL
+ *
+ * src/include/encryption/enc_aes.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef ENC_AES_H
+#define ENC_AES_H
+
+#include <stdint.h>
+
+#define AES_BLOCK_SIZE 16
+#define NUM_AES_BLOCKS_IN_BATCH 200
+#define DATA_BYTES_PER_AES_BATCH (NUM_AES_BLOCKS_IN_BATCH * AES_BLOCK_SIZE)
+
+void		AesInit(void);
+extern void Aes128EncryptedZeroBlocks(void *ctxPtr, const unsigned char *key, const char *iv_prefix, uint64_t blockNumber1, uint64_t blockNumber2, unsigned char *out);
+
+/* Only used for testing */
+extern void AesEncrypt(const unsigned char *key, const unsigned char *iv, const unsigned char *in, int in_len, unsigned char *out, int *out_len);
+extern void AesDecrypt(const unsigned char *key, const unsigned char *iv, const unsigned char *in, int in_len, unsigned char *out, int *out_len);
+
+#endif							/* ENC_AES_H */
diff --git a/contrib/pg_tde/src/include/encryption/enc_tde.h b/contrib/pg_tde/src/include/encryption/enc_tde.h
new file mode 100644
index 00000000000..552888b317d
--- /dev/null
+++ b/contrib/pg_tde/src/include/encryption/enc_tde.h
@@ -0,0 +1,58 @@
+/*-------------------------------------------------------------------------
+ *
+ * enc_tde.h
+ *	  Encryption / Decryption functions for TDE
+ *
+ * src/include/encryption/enc_tde.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef ENC_TDE_H
+#define ENC_TDE_H
+
+#include "utils/rel.h"
+#include "storage/bufpage.h"
+#include "executor/tuptable.h"
+#include "access/pg_tde_tdemap.h"
+#include "keyring/keyring_api.h"
+
+extern void
+			pg_tde_crypt(const char *iv_prefix, uint32 start_offset, const char *data, uint32 data_len, char *out, RelKeyData *key, const char *context);
+extern void
+			pg_tde_crypt_tuple(HeapTuple tuple, HeapTuple out_tuple, RelKeyData *key, const char *context);
+
+/* A wrapper to encrypt a tuple before adding it to the buffer */
+extern OffsetNumber
+			PGTdePageAddItemExtended(RelFileLocator rel, BlockNumber bn, Page page,
+									 Item item,
+									 Size size,
+									 OffsetNumber offsetNumber,
+									 int flags);
+
+/* Function Macros over crypt */
+
+#define PG_TDE_ENCRYPT_DATA(_iv_prefix, _start_offset, _data, _data_len, _out, _key) \
+	pg_tde_crypt(_iv_prefix, _start_offset, _data, _data_len, _out, _key, "ENCRYPT")
+
+#define PG_TDE_DECRYPT_DATA(_iv_prefix, _start_offset, _data, _data_len, _out, _key) \
+	pg_tde_crypt(_iv_prefix, _start_offset, _data, _data_len, _out, _key, "DECRYPT")
+
+#define PG_TDE_DECRYPT_TUPLE(_tuple, _out_tuple, _key) \
+	pg_tde_crypt_tuple(_tuple, _out_tuple, _key, "DECRYPT-TUPLE")
+
+#define PG_TDE_DECRYPT_TUPLE_EX(_tuple, _out_tuple, _key, _context) \
+	do { \
+		const char *_msg_context = "DECRYPT-TUPLE-" _context; \
+		pg_tde_crypt_tuple(_tuple, _out_tuple, _key, _msg_context); \
+	} while(0)
+
+#define PG_TDE_ENCRYPT_PAGE_ITEM(_iv_prefix, _start_offset, _data, _data_len, _out, _key) \
+	do { \
+		pg_tde_crypt(_iv_prefix, _start_offset, _data, _data_len, _out, _key, "ENCRYPT-PAGE-ITEM"); \
+	} while(0)
+
+extern void AesEncryptKey(const TDEPrincipalKey *principal_key, Oid dbOid, RelKeyData *rel_key_data, RelKeyData **p_enc_rel_key_data, size_t *enc_key_bytes);
+extern void AesDecryptKey(const TDEPrincipalKey *principal_key, Oid dbOid, RelKeyData **p_rel_key_data, RelKeyData *enc_rel_key_data, size_t *key_bytes);
+
+#endif							/* ENC_TDE_H */
diff --git a/contrib/pg_tde/src/include/keyring/keyring_api.h b/contrib/pg_tde/src/include/keyring/keyring_api.h
new file mode 100644
index 00000000000..25f08be29ed
--- /dev/null
+++ b/contrib/pg_tde/src/include/keyring/keyring_api.h
@@ -0,0 +1,22 @@
+/*-------------------------------------------------------------------------
+ *
+ * keyring_api.h
+ *	  src/include/keyring/keyring_api.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef KEYRING_API_H
+#define KEYRING_API_H
+
+#include "catalog/tde_keyring.h"
+#include "catalog/keyring_min.h"
+
+extern bool RegisterKeyProvider(const TDEKeyringRoutine *routine, ProviderType type);
+
+extern KeyringReturnCodes KeyringStoreKey(GenericKeyring *keyring, keyInfo *key, bool throw_error);
+extern keyInfo *KeyringGetKey(GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes *returnCode);
+extern keyInfo *KeyringGenerateNewKeyAndStore(GenericKeyring *keyring, const char *key_name, unsigned key_len, bool throw_error);
+extern keyInfo *KeyringGenerateNewKey(const char *key_name, unsigned key_len);
+
+#endif							/* KEYRING_API_H */
diff --git a/contrib/pg_tde/src/include/keyring/keyring_curl.h b/contrib/pg_tde/src/include/keyring/keyring_curl.h
new file mode 100644
index 00000000000..6eef5ada14c
--- /dev/null
+++ b/contrib/pg_tde/src/include/keyring/keyring_curl.h
@@ -0,0 +1,32 @@
+/*-------------------------------------------------------------------------
+ *
+ * keyring_curl.h
+ *	  Contains common curl-related methods.
+ *
+ * IDENTIFICATION
+ *	  src/include/keyring/keyring_curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef KEYRING_CURL_H
+#define KEYRING_CURL_H
+
+#include "pg_tde_defines.h"
+
+#define VAULT_URL_MAX_LEN 512
+
+#include <stdbool.h>
+#include <curl/curl.h>
+
+typedef struct CurlString
+{
+	char	   *ptr;
+	size_t		len;
+} CurlString;
+
+extern CURL *keyringCurl;
+
+extern bool curlSetupSession(const char *url, const char *caFile, CurlString *outStr);
+
+#endif							/* KEYRING_CURL_H */
diff --git a/contrib/pg_tde/src/include/keyring/keyring_file.h b/contrib/pg_tde/src/include/keyring/keyring_file.h
new file mode 100644
index 00000000000..9945dbd7f29
--- /dev/null
+++ b/contrib/pg_tde/src/include/keyring/keyring_file.h
@@ -0,0 +1,17 @@
+/*-------------------------------------------------------------------------
+ *
+ * keyring_file.h
+ *	  File based keyring provider
+ *
+ * src/include/keyring/keyring_file.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef KEYRING_FILE_H
+#define KEYRING_FILE_H
+
+
+extern bool InstallFileKeyring(void);
+
+#endif							/* KEYRING_FILE_H */
diff --git a/contrib/pg_tde/src/include/keyring/keyring_kmip.h b/contrib/pg_tde/src/include/keyring/keyring_kmip.h
new file mode 100644
index 00000000000..f168202cc0f
--- /dev/null
+++ b/contrib/pg_tde/src/include/keyring/keyring_kmip.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * keyring_kmip.h
+ *	  KMIP based keyring provider
+ *
+ * IDENTIFICATION
+ *	  src/include/keyring/keyring_kmip.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef KEYRING_KMIP_H
+#define KEYRING_KMIP_H
+
+extern bool InstallKmipKeyring(void);
+
+extern void kmip_ereport(bool throw_error, const char *msg, int errCode);
+
+#endif							/* KEYRING_KMIP_H */
diff --git a/contrib/pg_tde/src/include/keyring/keyring_vault.h b/contrib/pg_tde/src/include/keyring/keyring_vault.h
new file mode 100644
index 00000000000..c86a963c45b
--- /dev/null
+++ b/contrib/pg_tde/src/include/keyring/keyring_vault.h
@@ -0,0 +1,17 @@
+/*-------------------------------------------------------------------------
+ *
+ * keyring_vault.h
+ *	  HashiCorp Vault V2 based keyring provider
+ *
+ * IDENTIFICATION
+ *	  src/include/keyring/keyring_vault.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef KEYRING_VAULT_H
+#define KEYRING_VAULT_H
+
+extern bool InstallVaultV2Keyring(void);
+
+#endif							/* KEYRING_VAULT_H */
diff --git a/contrib/pg_tde/src/include/pg_tde.h b/contrib/pg_tde/src/include/pg_tde.h
new file mode 100644
index 00000000000..c8046373e01
--- /dev/null
+++ b/contrib/pg_tde/src/include/pg_tde.h
@@ -0,0 +1,25 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde.h
+ *	  src/include/pg_tde.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_H
+#define PG_TDE_H
+
+#define PG_TDE_DATA_DIR "pg_tde"
+
+typedef struct XLogExtensionInstall
+{
+	Oid			database_id;
+} XLogExtensionInstall;
+
+typedef void (*pg_tde_on_ext_install_callback) (int tde_tbl_count, XLogExtensionInstall *ext_info, bool redo, void *arg);
+
+extern void on_ext_install(pg_tde_on_ext_install_callback function, void *arg);
+
+extern void extension_install_redo(XLogExtensionInstall *xlrec);
+
+extern void pg_tde_init_data_dir(void);
+#endif							/* PG_TDE_H */
diff --git a/contrib/pg_tde/src/include/pg_tde_defines.h b/contrib/pg_tde/src/include/pg_tde_defines.h
new file mode 100644
index 00000000000..9aeee11a7c6
--- /dev/null
+++ b/contrib/pg_tde/src/include/pg_tde_defines.h
@@ -0,0 +1,50 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_defines.h
+ *	  Macro redirections and debug switches for pg_tde.
+ *
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/pg_tde_defines.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_DEFINES_H
+#define PG_TDE_DEFINES_H
+
+/* ----------
+ * Defines for functions that may need to be replaced later on
+ * ----------
+ */
+
+/* #define ENCRYPTION_DEBUG 1 */
+/* #define KEYRING_DEBUG 1 */
+/* #define TDE_FORK_DEBUG 1 */
+/* #define TDE_XLOG_DEBUG 1 */
+
+#define tdeheap_fill_tuple heap_fill_tuple
+#define tdeheap_form_tuple heap_form_tuple
+#define tdeheap_deform_tuple heap_deform_tuple
+#define tdeheap_freetuple heap_freetuple
+#define tdeheap_compute_data_size heap_compute_data_size
+#define tdeheap_getattr heap_getattr
+#define tdeheap_copytuple heap_copytuple
+#define tdeheap_getsysattr heap_getsysattr
+
+#define pgstat_count_tdeheap_scan pgstat_count_heap_scan
+#define pgstat_count_tdeheap_fetch pgstat_count_heap_fetch
+#define pgstat_count_tdeheap_getnext pgstat_count_heap_getnext
+#define pgstat_count_tdeheap_update pgstat_count_heap_update
+#define pgstat_count_tdeheap_delete pgstat_count_heap_delete
+#define pgstat_count_tdeheap_insert pgstat_count_heap_insert
+
+#define TDE_PageAddItem(rel, blkno, page, item, size, offsetNumber, overwrite, is_heap) \
+	PGTdePageAddItemExtended(rel, blkno, page, item, size, offsetNumber, \
+							 ((overwrite) ? PAI_OVERWRITE : 0) | \
+							 ((is_heap) ? PAI_IS_HEAP : 0))
+
+/* ---------- */
+
+#endif							/* PG_TDE_DEFINES_H */
diff --git a/contrib/pg_tde/src/include/pg_tde_defs.h b/contrib/pg_tde/src/include/pg_tde_defs.h
new file mode 100644
index 00000000000..6d33312cbc3
--- /dev/null
+++ b/contrib/pg_tde/src/include/pg_tde_defs.h
@@ -0,0 +1,16 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_defs.h
+ *	  src/include/pg_tde_defs.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_DEFS_H
+#define PG_TDE_DEFS_H
+
+
+extern const char *pg_tde_package_string(void);
+extern const char *pg_tde_package_name(void);
+extern const char *pg_tde_package_version(void);
+
+#endif							/* PG_TDE_DEFS_H */
diff --git a/contrib/pg_tde/src/include/pg_tde_event_capture.h b/contrib/pg_tde/src/include/pg_tde_event_capture.h
new file mode 100644
index 00000000000..e3c15ff42d1
--- /dev/null
+++ b/contrib/pg_tde/src/include/pg_tde_event_capture.h
@@ -0,0 +1,33 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_event_capture.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_EVENT_CAPTURE_H
+#define PG_TDE_EVENT_CAPTURE_H
+
+#include "postgres.h"
+#include "nodes/parsenodes.h"
+
+typedef enum TdeCreateEventType
+{
+	TDE_UNKNOWN_CREATE_EVENT,
+	TDE_TABLE_CREATE_EVENT,
+	TDE_INDEX_CREATE_EVENT
+} TdeCreateEventType;
+
+typedef struct TdeCreateEvent
+{
+	TdeCreateEventType eventType;	/* DDL statement type */
+	bool		encryptMode;	/* true when the table uses encryption */
+	Oid			baseTableOid;	/* Oid of the table on which the index is
+								 * being created. For CREATE TABLE statements
+								 * this contains InvalidOid */
+	RangeVar   *relation;		/* Reference to the parsed relation from the
+								 * create statement */
+} TdeCreateEvent;
+
+extern TdeCreateEvent *GetCurrentTdeCreateEvent(void);
+
+#endif							/* PG_TDE_EVENT_CAPTURE_H */
diff --git a/contrib/pg_tde/src/include/pg_tde_fe.h b/contrib/pg_tde/src/include/pg_tde_fe.h
new file mode 100644
index 00000000000..da5631ac751
--- /dev/null
+++ b/contrib/pg_tde/src/include/pg_tde_fe.h
@@ -0,0 +1,93 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_fe.h
+ *	  TDE redefinitions for code included in frontend builds
+ *
+ * src/include/pg_tde_fe.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_EREPORT_H
+#define PG_TDE_EREPORT_H
+
+#ifdef FRONTEND
+
+#include "postgres_fe.h"
+#include "utils/elog.h"
+#include "common/logging.h"
+#include "common/file_perm.h"
+
+#pragma GCC diagnostic ignored "-Wunused-macros"
+#pragma GCC diagnostic ignored "-Wunused-value"
+#pragma GCC diagnostic ignored "-Wunused-variable"
+#pragma GCC diagnostic ignored "-Wextra"
+
+/*
+ * Errors handling
+ * ----------------------------------------
+ */
+
+#define tde_fe_errlog(_type, ...) \
+	({ \
+		if (tde_fe_error_level >= ERROR) \
+			pg_log_error##_type(__VA_ARGS__); \
+		else if (tde_fe_error_level >= WARNING) \
+			pg_log_warning##_type(__VA_ARGS__); \
+		else if (tde_fe_error_level >= LOG) \
+			pg_log_info##_type(__VA_ARGS__); \
+		else \
+			pg_log_debug##_type(__VA_ARGS__); \
+	})
+
+#define errmsg(...) tde_fe_errlog(, __VA_ARGS__)
+#define errhint(...) tde_fe_errlog(_hint, __VA_ARGS__)
+#define errdetail(...)
tde_fe_errlog(_detail, __VA_ARGS__) + +#define errcode_for_file_access() NULL +#define errcode(e) NULL + +#define tde_error_handle_exit(elevel) \ + do { \ + if (elevel >= PANIC) \ + pg_unreachable(); \ + else if (elevel >= ERROR) \ + exit(1); \ + } while(0) + +#undef elog +#define elog(elevel, fmt, ...) \ + do { \ + tde_fe_error_level = elevel; \ + errmsg(fmt, ##__VA_ARGS__); \ + tde_error_handle_exit(elevel); \ + } while(0) + +#undef ereport +#define ereport(elevel,...) \ + do { \ + tde_fe_error_level = elevel; \ + __VA_ARGS__; \ + tde_error_handle_exit(elevel); \ + } while(0) + +static int tde_fe_error_level = 0; + +/* + * ------------- + */ + +#define LWLockAcquire(lock, mode) NULL +#define LWLockRelease(lock_files) NULL +#define LWLockHeldByMeInMode(lock, mode) true +#define LWLock void +#define LWLockMode void* +#define LW_SHARED NULL +#define LW_EXCLUSIVE NULL +#define tde_lwlock_enc_keys() NULL + +#define BasicOpenFile(fileName, fileFlags) open(fileName, fileFlags, PG_FILE_MODE_OWNER) + +#define pg_fsync(fd) fsync(fd) +#endif /* FRONTEND */ + +#endif /* PG_TDE_EREPORT_H */ diff --git a/contrib/pg_tde/src/include/smgr/pg_tde_smgr.h b/contrib/pg_tde/src/include/smgr/pg_tde_smgr.h new file mode 100644 index 00000000000..72070adf2fa --- /dev/null +++ b/contrib/pg_tde/src/include/smgr/pg_tde_smgr.h @@ -0,0 +1,15 @@ + +/*------------------------------------------------------------------------- + * + * pg_tde_smgr.h + * src/include/smgr/pg_tde_smgr.h + * + *------------------------------------------------------------------------- + */ + +#ifndef PG_TDE_SMGR_H +#define PG_TDE_SMGR_H + +extern void RegisterStorageMgr(void); + +#endif /* PG_TDE_SMGR_H */ diff --git a/contrib/pg_tde/src/include/transam/pg_tde_xact_handler.h b/contrib/pg_tde/src/include/transam/pg_tde_xact_handler.h new file mode 100644 index 00000000000..524f8acb536 --- /dev/null +++ b/contrib/pg_tde/src/include/transam/pg_tde_xact_handler.h @@ -0,0 +1,21 @@ +/*------------------------------------------------------------------------- + * + * pg_tde_xact_handler.h + * TDE transaction handling. 
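+ *	  Applies or discards key map entry deletions queued via
+ *	  RegisterEntryForDeletion() when the owning (sub)transaction
+ *	  commits or aborts.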
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_TDE_XACT_HANDLER_H
+#define PG_TDE_XACT_HANDLER_H
+
+#include "postgres.h"
+#include "access/xact.h"
+
+extern void pg_tde_xact_callback(XactEvent event, void *arg);
+extern void pg_tde_subxact_callback(SubXactEvent event, SubTransactionId mySubid,
+									SubTransactionId parentSubid, void *arg);
+
+extern void RegisterEntryForDeletion(const RelFileLocator *rlocator, off_t map_entry_offset, bool atCommit);
+
+
+#endif							/* PG_TDE_XACT_HANDLER_H */
diff --git a/contrib/pg_tde/src/keyring/keyring_api.c b/contrib/pg_tde/src/keyring/keyring_api.c
new file mode 100644
index 00000000000..8fa56716d0d
--- /dev/null
+++ b/contrib/pg_tde/src/keyring/keyring_api.c
@@ -0,0 +1,172 @@
+
+#include "keyring/keyring_api.h"
+#include "keyring/keyring_file.h"
+#include "keyring/keyring_vault.h"
+
+#include "postgres.h"
+#include "access/xlog.h"
+#include "storage/shmem.h"
+#include "nodes/pg_list.h"
+#include "utils/memutils.h"
+#ifdef FRONTEND
+#include "fe_utils/simple_list.h"
+#include "pg_tde_fe.h"
+#endif
+
+#include <openssl/rand.h>
+#include <openssl/err.h>
+
+typedef struct KeyProviders
+{
+	TDEKeyringRoutine *routine;
+	ProviderType type;
+} KeyProviders;
+
+#ifndef FRONTEND
+List	   *registeredKeyProviders = NIL;
+#else
+SimplePtrList registeredKeyProviders = {NULL, NULL};
+#endif
+static KeyProviders *find_key_provider(ProviderType type);
+
+#ifndef FRONTEND
+static KeyProviders *
+find_key_provider(ProviderType type)
+{
+	ListCell   *lc;
+
+	foreach(lc, registeredKeyProviders)
+	{
+		KeyProviders *kp = (KeyProviders *) lfirst(lc);
+
+		if (kp->type == type)
+		{
+			return kp;
+		}
+	}
+	return NULL;
+}
+#else
+static KeyProviders *
+find_key_provider(ProviderType type)
+{
+	SimplePtrListCell *lc;
+
+	for (lc = registeredKeyProviders.head; lc; lc = lc->next)
+	{
+		KeyProviders *kp = (KeyProviders *) lc->ptr;
+
+		if (kp->type == type)
+		{
+			return kp;
+		}
+	}
+	return NULL;
+}
+#endif							/* !FRONTEND */
+
+bool
+RegisterKeyProvider(const TDEKeyringRoutine *routine, ProviderType type)
+{
+	KeyProviders *kp;
+#ifndef FRONTEND
+	MemoryContext oldcontext;
+#endif
+
+	Assert(routine != NULL);
+	Assert(routine->keyring_get_key != NULL);
+	Assert(routine->keyring_store_key != NULL);
+
+	kp = find_key_provider(type);
+	if (kp)
+	{
+		ereport(ERROR,
+				(errmsg("Key provider of type %d already registered", type)));
+		return false;
+	}
+
+#ifndef FRONTEND
+	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+#endif
+	kp = palloc(sizeof(KeyProviders));
+	kp->routine = (TDEKeyringRoutine *) routine;
+	kp->type = type;
+#ifndef FRONTEND
+	registeredKeyProviders = lappend(registeredKeyProviders, kp);
+	MemoryContextSwitchTo(oldcontext);
+#else
+	simple_ptr_list_append(&registeredKeyProviders, kp);
+#endif
+
+	return true;
+}
+
+keyInfo *
+KeyringGetKey(GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes *returnCode)
+{
+	KeyProviders *kp = find_key_provider(keyring->type);
+	int			ereport_level = throw_error ? ERROR : WARNING;
+
+	if (kp == NULL)
+	{
+		ereport(ereport_level,
+				(errmsg("Key provider of type %d not registered", keyring->type)));
+		*returnCode = KEYRING_CODE_INVALID_PROVIDER;
+		return NULL;
+	}
+	return kp->routine->keyring_get_key(keyring, key_name, throw_error, returnCode);
+}
+
+KeyringReturnCodes
+KeyringStoreKey(GenericKeyring *keyring, keyInfo *key, bool throw_error)
+{
+	KeyProviders *kp = find_key_provider(keyring->type);
+	int			ereport_level = throw_error ?
ERROR : WARNING; + + if (kp == NULL) + { + ereport(ereport_level, + (errmsg("Key provider of type %d not registered", keyring->type))); + return KEYRING_CODE_INVALID_PROVIDER; + } + return kp->routine->keyring_store_key(keyring, key, throw_error); +} + +keyInfo * +KeyringGenerateNewKey(const char *key_name, unsigned key_len) +{ + keyInfo *key; + + Assert(key_len <= 32); + key = palloc(sizeof(keyInfo)); + key->data.len = key_len; + if (!RAND_bytes(key->data.data, key_len)) + { + pfree(key); + return NULL; /* openssl error */ + } + strncpy(key->name.name, key_name, sizeof(key->name.name)); + return key; +} + +keyInfo * +KeyringGenerateNewKeyAndStore(GenericKeyring *keyring, const char *key_name, unsigned key_len, bool throw_error) +{ + keyInfo *key = KeyringGenerateNewKey(key_name, key_len); + int ereport_level = throw_error ? ERROR : WARNING; + + if (key == NULL) + { + ereport(ereport_level, + (errmsg("Failed to generate key"))); + return NULL; + } + if (KeyringStoreKey(keyring, key, throw_error) != KEYRING_CODE_SUCCESS) + { + pfree(key); + ereport(ereport_level, + (errmsg("Failed to store key on keyring. Please check the keyring configuration."))); + return NULL; + } + return key; +} diff --git a/contrib/pg_tde/src/keyring/keyring_curl.c b/contrib/pg_tde/src/keyring/keyring_curl.c new file mode 100644 index 00000000000..693121711b0 --- /dev/null +++ b/contrib/pg_tde/src/keyring/keyring_curl.c @@ -0,0 +1,82 @@ +/*------------------------------------------------------------------------- + * + * keyring_curl.c + * Contains common curl related methods. + * + * IDENTIFICATION + * contrib/pg_tde/src/keyring/keyring_curl.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#include "keyring/keyring_curl.h" +#include "pg_tde_defines.h" + +CURL *keyringCurl = NULL; + +static +size_t +write_func(void *ptr, size_t size, size_t nmemb, struct CurlString *s) +{ + size_t new_len = s->len + size * nmemb; + + s->ptr = repalloc(s->ptr, new_len + 1); + if (s->ptr == NULL) + { + exit(EXIT_FAILURE); + } + memcpy(s->ptr + s->len, ptr, size * nmemb); + s->ptr[new_len] = '\0'; + s->len = new_len; + + return size * nmemb; +} + +bool +curlSetupSession(const char *url, const char *caFile, CurlString *outStr) +{ + if (keyringCurl == NULL) + { + keyringCurl = curl_easy_init(); + + if (keyringCurl == NULL) + return 0; + } + else + { + curl_easy_reset(keyringCurl); + } + + if (curl_easy_setopt(keyringCurl, CURLOPT_SSL_VERIFYPEER, 1) != CURLE_OK) + return 0; + if (curl_easy_setopt(keyringCurl, CURLOPT_USE_SSL, CURLUSESSL_ALL) != CURLE_OK) + return 0; + if (caFile != NULL && strlen(caFile) != 0) + { + if (curl_easy_setopt(keyringCurl, CURLOPT_CAINFO, caFile) != CURLE_OK) + return 0; + } + if (curl_easy_setopt(keyringCurl, CURLOPT_FOLLOWLOCATION, 1L) != CURLE_OK) + return 0; + if (curl_easy_setopt(keyringCurl, CURLOPT_CONNECTTIMEOUT, 3) != CURLE_OK) + return 0; + if (curl_easy_setopt(keyringCurl, CURLOPT_TIMEOUT, 10) != CURLE_OK) + return 0; + if (curl_easy_setopt(keyringCurl, CURLOPT_HTTP_VERSION, (long) CURL_HTTP_VERSION_1_1) != CURLE_OK) + return 0; + if (curl_easy_setopt(keyringCurl, CURLOPT_WRITEFUNCTION, write_func) != CURLE_OK) + return 0; + if (curl_easy_setopt(keyringCurl, CURLOPT_WRITEDATA, outStr) != CURLE_OK) + return 0; + if (curl_easy_setopt(keyringCurl, CURLOPT_URL, url) != CURLE_OK) + return 0; + + if (curl_easy_setopt(keyringCurl, CURLOPT_POSTFIELDS, NULL) != CURLE_OK) + return 0; + if (curl_easy_setopt(keyringCurl, CURLOPT_POST, 0) != 
CURLE_OK) + return 0; + + return 1; +} diff --git a/contrib/pg_tde/src/keyring/keyring_file.c b/contrib/pg_tde/src/keyring/keyring_file.c new file mode 100644 index 00000000000..7eb73c4035e --- /dev/null +++ b/contrib/pg_tde/src/keyring/keyring_file.c @@ -0,0 +1,154 @@ +/*------------------------------------------------------------------------- + * + * keyring_file.c + * Implements the file provider keyring + * routines. + * + * IDENTIFICATION + * contrib/pg_tde/src/keyring/keyring_file.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#include "keyring/keyring_file.h" +#include "catalog/tde_keyring.h" +#include "common/file_perm.h" +#include "keyring/keyring_api.h" +#include "storage/fd.h" +#include "utils/wait_event.h" + +#ifdef FRONTEND +#include "pg_tde_fe.h" +#endif + +#include +#include + +static keyInfo *get_key_by_name(GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes * return_code); +static KeyringReturnCodes set_key_by_name(GenericKeyring *keyring, keyInfo *key, bool throw_error); + +const TDEKeyringRoutine keyringFileRoutine = { + .keyring_get_key = get_key_by_name, + .keyring_store_key = set_key_by_name +}; + +bool +InstallFileKeyring(void) +{ + return RegisterKeyProvider(&keyringFileRoutine, FILE_KEY_PROVIDER); +} + + +static keyInfo * +get_key_by_name(GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes * return_code) +{ + keyInfo *key = NULL; + int fd = -1; + FileKeyring *file_keyring = (FileKeyring *) keyring; + off_t bytes_read = 0; + off_t curr_pos = 0; + int ereport_level = throw_error ? ERROR : WARNING; + + *return_code = KEYRING_CODE_SUCCESS; + + fd = BasicOpenFile(file_keyring->file_name, PG_BINARY); + if (fd < 0) + return NULL; + + key = palloc(sizeof(keyInfo)); + while (true) + { + bytes_read = pg_pread(fd, key, sizeof(keyInfo), curr_pos); + curr_pos += bytes_read; + + if (bytes_read == 0) + { + /* + * Empty keyring file is considered as a valid keyring file that + * has no keys + */ + close(fd); + pfree(key); + return NULL; + } + if (bytes_read != sizeof(keyInfo)) + { + close(fd); + pfree(key); + /* Corrupt file */ + *return_code = KEYRING_CODE_DATA_CORRUPTED; + ereport(ereport_level, + (errcode_for_file_access(), + errmsg("keyring file \"%s\" is corrupted: %m", + file_keyring->file_name), + errdetail("invalid key size %lu expected %lu", bytes_read, sizeof(keyInfo)))); + return NULL; + } + if (strncasecmp(key->name.name, key_name, sizeof(key->name.name)) == 0) + { + close(fd); + return key; + } + } + close(fd); + pfree(key); + return NULL; +} + +static KeyringReturnCodes +set_key_by_name(GenericKeyring *keyring, keyInfo *key, bool throw_error) +{ + off_t bytes_written = 0; + off_t curr_pos = 0; + int fd; + FileKeyring *file_keyring = (FileKeyring *) keyring; + keyInfo *existing_key; + KeyringReturnCodes return_code = KEYRING_CODE_SUCCESS; + int ereport_level = throw_error ? 
ERROR : WARNING; + + Assert(key != NULL); + /* See if the key with same name already exists */ + existing_key = get_key_by_name(keyring, key->name.name, false, &return_code); + if (existing_key) + { + pfree(existing_key); + ereport(ereport_level, + (errmsg("Key with name %s already exists in keyring", key->name.name))); + return KEYRING_CODE_INVALID_OPERATION; + } + + fd = BasicOpenFile(file_keyring->file_name, O_CREAT | O_RDWR | PG_BINARY); + if (fd < 0) + { + ereport(ereport_level, + (errcode_for_file_access(), + errmsg("Failed to open keyring file %s :%m", file_keyring->file_name))); + return KEYRING_CODE_RESOURCE_NOT_ACCESSABLE; + } + /* Write key to the end of file */ + curr_pos = lseek(fd, 0, SEEK_END); + bytes_written = pg_pwrite(fd, key, sizeof(keyInfo), curr_pos); + if (bytes_written != sizeof(keyInfo)) + { + close(fd); + ereport(ereport_level, + (errcode_for_file_access(), + errmsg("keyring file \"%s\" can't be written: %m", + file_keyring->file_name))); + return KEYRING_CODE_RESOURCE_NOT_ACCESSABLE; + } + + if (pg_fsync(fd) != 0) + { + close(fd); + ereport(ereport_level, + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", + file_keyring->file_name))); + return KEYRING_CODE_RESOURCE_NOT_ACCESSABLE; + } + close(fd); + return KEYRING_CODE_SUCCESS; +} diff --git a/contrib/pg_tde/src/keyring/keyring_kmip.c b/contrib/pg_tde/src/keyring/keyring_kmip.c new file mode 100644 index 00000000000..5d9533f0ff9 --- /dev/null +++ b/contrib/pg_tde/src/keyring/keyring_kmip.c @@ -0,0 +1,262 @@ +/*------------------------------------------------------------------------- + * + * keyring_kmip.c + * KMIP based keyring provider + * + * IDENTIFICATION + * contrib/pg_tde/src/keyring/keyring_kmip.c + * + *------------------------------------------------------------------------- + */ + +#include +#include +#include +#include +#include +#include + +/* The KMIP headers and Postgres headers conflict. + We can't include most postgres headers here, instead just copy required declarations. 
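This hand-copying is the usual way to firewall two incompatible sets of headers: the conflicting library lives in its own translation unit, and only the few project declarations it needs are duplicated by hand. A stripped-down model of that layout (all names here are invented):

/* firewall_tu.c: includes the conflicting library, never the project headers */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hand-copied project entry points; in the real layout these would be
 * extern declarations defined elsewhere. They are defined here only so
 * the sketch builds stand-alone.
 */
static void *project_alloc(size_t size) { return malloc(size); }
static void project_free(void *ptr) { free(ptr); }

/* work that needs the conflicting library would happen in here */
static char *
firewall_copy_string(const char *s)
{
	char	   *copy = project_alloc(strlen(s) + 1);

	strcpy(copy, s);
	return copy;
}

int
main(void)
{
	char	   *s = firewall_copy_string("kmip");

	puts(s);
	project_free(s);
	return 0;
}

The cost of the pattern is visible just below: the file must also re-create basics such as bool.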
+*/ + +#define bool int +#define true 1 +#define false 0 + +#include "keyring/keyring_kmip.h" +#include "catalog/keyring_min.h" + +extern bool RegisterKeyProvider(const TDEKeyringRoutine *routine, ProviderType type); + +static KeyringReturnCodes set_key_by_name(GenericKeyring *keyring, keyInfo *key, bool throw_error); +static keyInfo *get_key_by_name(GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes *return_code); + +const TDEKeyringRoutine keyringKmipRoutine = { + .keyring_get_key = get_key_by_name, + .keyring_store_key = set_key_by_name}; + +bool InstallKmipKeyring(void) +{ + return RegisterKeyProvider(&keyringKmipRoutine, KMIP_KEY_PROVIDER); +} + +typedef struct KmipCtx +{ + SSL_CTX *ssl; + BIO *bio; +} KmipCtx; + +static bool kmipSslConnect(KmipCtx *ctx, KmipKeyring *kmip_keyring, bool throw_error) +{ + SSL *ssl = NULL; + ctx->ssl = SSL_CTX_new(SSLv23_method()); + + if (SSL_CTX_use_certificate_file(ctx->ssl, kmip_keyring->kmip_cert_path, SSL_FILETYPE_PEM) != 1) + { + kmip_ereport(throw_error, "SSL error: Loading the client certificate failed", 0); + SSL_CTX_free(ctx->ssl); + return false; + } + + if (SSL_CTX_use_PrivateKey_file(ctx->ssl, kmip_keyring->kmip_cert_path, SSL_FILETYPE_PEM) != 1) + { + SSL_CTX_free(ctx->ssl); + kmip_ereport(throw_error, "SSL error: Loading the client key failed", 0); + return false; + } + + if (SSL_CTX_load_verify_locations(ctx->ssl, kmip_keyring->kmip_ca_path, NULL) != 1) + { + SSL_CTX_free(ctx->ssl); + kmip_ereport(throw_error, "SSL error: Loading the CA certificate failed", 0); + return false; + } + + ctx->bio = BIO_new_ssl_connect(ctx->ssl); + if (ctx->bio == NULL) + { + SSL_CTX_free(ctx->ssl); + kmip_ereport(throw_error, "SSL error: BIO_new_ssl_connect failed", 0); + return false; + } + + BIO_get_ssl(ctx->bio, &ssl); + SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); + BIO_set_conn_hostname(ctx->bio, kmip_keyring->kmip_host); + BIO_set_conn_port(ctx->bio, kmip_keyring->kmip_port); + if (BIO_do_connect(ctx->bio) != 1) + { + BIO_free_all(ctx->bio); + SSL_CTX_free(ctx->ssl); + kmip_ereport(throw_error, "SSL error: BIO_do_connect failed", 0); + return false; + } + + return true; +} + +static KeyringReturnCodes +set_key_by_name(GenericKeyring *keyring, keyInfo *key, bool throw_error) +{ + KmipCtx ctx; + KmipKeyring *kmip_keyring = (KmipKeyring *)keyring; + int result; + int id_max_len = 64; + char *idp = NULL; + + Attribute a[4]; + enum cryptographic_algorithm algorithm = KMIP_CRYPTOALG_AES; + int32 length = key->data.len * 8; + int32 mask = KMIP_CRYPTOMASK_ENCRYPT | KMIP_CRYPTOMASK_DECRYPT; + Name ts; + TextString ts2 = {0, 0}; + TemplateAttribute ta = {0}; + + if (!kmipSslConnect(&ctx, kmip_keyring, throw_error)) + { + return KEYRING_CODE_INVALID_RESPONSE; + } + + for (int i = 0; i < 4; i++) + { + kmip_init_attribute(&a[i]); + } + + a[0].type = KMIP_ATTR_CRYPTOGRAPHIC_ALGORITHM; + a[0].value = &algorithm; + + a[1].type = KMIP_ATTR_CRYPTOGRAPHIC_LENGTH; + a[1].value = &length; + + a[2].type = KMIP_ATTR_CRYPTOGRAPHIC_USAGE_MASK; + a[2].value = &mask; + + ts2.value = key->name.name; + ts2.size = kmip_strnlen_s(key->name.name, 250); + ts.value = &ts2; + ts.type = KMIP_NAME_UNINTERPRETED_TEXT_STRING; + a[3].type = KMIP_ATTR_NAME; + a[3].value = &ts; + + ta.attributes = a; + ta.attribute_count = ARRAY_LENGTH(a); + + result = kmip_bio_register_symmetric_key(ctx.bio, &ta, (char *)key->data.data, key->data.len, &idp, &id_max_len); + + BIO_free_all(ctx.bio); + SSL_CTX_free(ctx.ssl); + + if (result != 0) + { + kmip_ereport(throw_error, "KMIP 
server reported error on register symmetric key: %i", result); + return KEYRING_CODE_INVALID_RESPONSE; + } + + return KEYRING_CODE_SUCCESS; +} + +void * + palloc(size_t); + +void pfree(void *); + +static keyInfo *get_key_by_name(GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes *return_code) +{ + keyInfo *key = NULL; + KmipKeyring *kmip_keyring = (KmipKeyring *)keyring; + char *id = 0; + KmipCtx ctx; + + *return_code = KEYRING_CODE_SUCCESS; + + if (!kmipSslConnect(&ctx, kmip_keyring, throw_error)) + { + return NULL; + } + + // 1. locate key + + { + int upto = 0; + int result; + LocateResponse locate_result; + Name ts; + TextString ts2 = {0, 0}; + Attribute a[3]; + enum object_type loctype = KMIP_OBJTYPE_SYMMETRIC_KEY; + + for (int i = 0; i < 3; i++) + { + kmip_init_attribute(&a[i]); + } + + a[0].type = KMIP_ATTR_OBJECT_TYPE; + a[0].value = &loctype; + + ts2.value = (char *)key_name; + ts2.size = kmip_strnlen_s(key_name, 250); + ts.value = &ts2; + ts.type = KMIP_NAME_UNINTERPRETED_TEXT_STRING; + a[1].type = KMIP_ATTR_NAME; + a[1].value = &ts; + + // 16 is hard coded: seems like the most vault supports? + result = kmip_bio_locate(ctx.bio, a, 2, &locate_result, 16, upto); + + if (result != 0) + { + *return_code = KEYRING_CODE_RESOURCE_NOT_AVAILABLE; + BIO_free_all(ctx.bio); + SSL_CTX_free(ctx.ssl); + return NULL; + } + + if (locate_result.ids_size == 0) + { + BIO_free_all(ctx.bio); + SSL_CTX_free(ctx.ssl); + return NULL; + } + + if (locate_result.ids_size > 1) + { + fprintf(stderr, "KMIP ERR: %li\n", locate_result.ids_size); + kmip_ereport(throw_error, "KMIP server contains multiple results for key, ignoring", 0); + *return_code = KEYRING_CODE_RESOURCE_NOT_AVAILABLE; + BIO_free_all(ctx.bio); + SSL_CTX_free(ctx.ssl); + return NULL; + } + + id = locate_result.ids[0]; + } + + // 2. get key + + key = palloc(sizeof(keyInfo)); + + { + + char *keyp = NULL; + int result = kmip_bio_get_symmetric_key(ctx.bio, id, strlen(id), &keyp, (int *)&key->data.len); + + if (result != 0) + { + kmip_ereport(throw_error, "KMIP server LOCATEd key, but GET failed with %i", result); + *return_code = KEYRING_CODE_RESOURCE_NOT_AVAILABLE; + pfree(key); + BIO_free_all(ctx.bio); + SSL_CTX_free(ctx.ssl); + return NULL; + } + + strncpy((char *)key->data.data, keyp, MAX_KEY_DATA_SIZE); + free(keyp); + } + + BIO_free_all(ctx.bio); + SSL_CTX_free(ctx.ssl); + + return key; +} diff --git a/contrib/pg_tde/src/keyring/keyring_kmip_ereport.c b/contrib/pg_tde/src/keyring/keyring_kmip_ereport.c new file mode 100644 index 00000000000..d05d79b7d0e --- /dev/null +++ b/contrib/pg_tde/src/keyring/keyring_kmip_ereport.c @@ -0,0 +1,25 @@ + +#include "postgres.h" + +#include "keyring/keyring_kmip.h" +#include "catalog/keyring_min.h" +#ifdef FRONTEND +#include "pg_tde_fe.h" +#endif + +void kmip_ereport(bool throw_error, const char *msg, int errCode) +{ + int ereport_level = throw_error ? ERROR : WARNING; + if (errCode != 0) + { + ereport(ereport_level, (errmsg(msg, errCode))); + } + else + { + #pragma GCC diagnostic push + #pragma GCC diagnostic ignored "-Wformat-security" + // TODO: how to do this properly? 
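+		/*
+		 * One conventional fix (a suggestion, not part of the original
+		 * patch): pass the message through a constant format string, e.g.
+		 * elog(ereport_level, "%s", msg); that satisfies -Wformat-security
+		 * without the surrounding pragmas.
+		 */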
+ elog(ereport_level, (msg)); + #pragma GCC diagnostic pop + } +} \ No newline at end of file diff --git a/contrib/pg_tde/src/keyring/keyring_vault.c b/contrib/pg_tde/src/keyring/keyring_vault.c new file mode 100644 index 00000000000..6fe54408679 --- /dev/null +++ b/contrib/pg_tde/src/keyring/keyring_vault.c @@ -0,0 +1,428 @@ +/*------------------------------------------------------------------------- + * + * keyring_vault.c + * HashiCorp Vault 2 based keyring provider + * + * IDENTIFICATION + * contrib/pg_tde/src/keyring/keyring_vault.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#include "keyring/keyring_vault.h" +#include "keyring/keyring_curl.h" +#include "keyring/keyring_api.h" +#include "pg_tde_defines.h" +#include "common/jsonapi.h" +#include "mb/pg_wchar.h" +#include "utils/builtins.h" + +#include + +#include + +#include "common/base64.h" + +#ifdef FRONTEND +#include "pg_tde_fe.h" +#endif + +/* + * JSON parser state +*/ + +typedef enum +{ + JRESP_EXPECT_TOP_DATA, + JRESP_EXPECT_DATA, + JRESP_EXPECT_KEY +} JsonVaultRespSemState; + +typedef enum +{ + JRESP_F_UNUSED, + + JRESP_F_KEY +} JsonVaultRespField; + +typedef struct JsonVaultRespState +{ + JsonVaultRespSemState state; + JsonVaultRespField field; + int level; + + char *key; +} JsonVaultRespState; + +static JsonParseErrorType json_resp_object_start(void *state); +static JsonParseErrorType json_resp_object_end(void *state); +static JsonParseErrorType json_resp_scalar(void *state, char *token, JsonTokenType tokentype); +static JsonParseErrorType json_resp_object_field_start(void *state, char *fname, bool isnull); +static JsonParseErrorType parse_json_response(JsonVaultRespState *parse, JsonLexContext *lex); + +struct curl_slist *curlList = NULL; + +static bool curl_setup_token(VaultV2Keyring *keyring); +static char *get_keyring_vault_url(VaultV2Keyring *keyring, const char *key_name, char *out, size_t out_size); +static bool curl_perform(VaultV2Keyring *keyring, const char *url, CurlString *outStr, long *httpCode, const char *postData); + +static KeyringReturnCodes set_key_by_name(GenericKeyring *keyring, keyInfo *key, bool throw_error); +static keyInfo *get_key_by_name(GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes * return_code); + +const TDEKeyringRoutine keyringVaultV2Routine = { + .keyring_get_key = get_key_by_name, + .keyring_store_key = set_key_by_name +}; + + +bool +InstallVaultV2Keyring(void) +{ + return RegisterKeyProvider(&keyringVaultV2Routine, VAULT_V2_KEY_PROVIDER); +} + +static bool +curl_setup_token(VaultV2Keyring *keyring) +{ + if (curlList == NULL) + { + char tokenHeader[256]; + + strcpy(tokenHeader, "X-Vault-Token:"); + strcat(tokenHeader, keyring->vault_token); + + curlList = curl_slist_append(curlList, tokenHeader); + if (curlList == NULL) + return 0; + + curlList = curl_slist_append(curlList, "Content-Type: application/json"); + if (curlList == NULL) + return 0; + } + + if (curl_easy_setopt(keyringCurl, CURLOPT_HTTPHEADER, curlList) != CURLE_OK) + return 0; + + return 1; +} + +static bool +curl_perform(VaultV2Keyring *keyring, const char *url, CurlString *outStr, long *httpCode, const char *postData) +{ + CURLcode ret; +#if KEYRING_DEBUG + elog(DEBUG1, "Performing Vault HTTP [%s] request to '%s'", postData != NULL ? 
"POST" : "GET", url); + if (postData != NULL) + { + elog(DEBUG2, "Postdata: '%s'", postData); + } +#endif + outStr->ptr = palloc0(1); + outStr->len = 0; + + if (!curlSetupSession(url, keyring->vault_ca_path, outStr)) + return 0; + + if (!curl_setup_token(keyring)) + return 0; + + if (postData != NULL) + { + if (curl_easy_setopt(keyringCurl, CURLOPT_POSTFIELDS, postData) != CURLE_OK) + return 0; + } + + ret = curl_easy_perform(keyringCurl); + if (ret != CURLE_OK) + { + elog(LOG, "curl_easy_perform failed with return code: %d", ret); + return 0; + } + + if (curl_easy_getinfo(keyringCurl, CURLINFO_RESPONSE_CODE, httpCode) != CURLE_OK) + return 0; + +#if KEYRING_DEBUG + elog(DEBUG2, "Vault response [%li] '%s'", *httpCode, outStr->ptr != NULL ? outStr->ptr : ""); +#endif + + return 1; +} + +/* + * Function builds the vault url in out parameter. + * so enough memory should be allocated to out pointer + */ +static char * +get_keyring_vault_url(VaultV2Keyring *keyring, const char *key_name, char *out, size_t out_size) +{ + Assert(keyring != NULL); + Assert(key_name != NULL); + Assert(out != NULL); + + snprintf(out, out_size, "%s/v1/%s/data/%s", keyring->vault_url, keyring->vault_mount_path, key_name); + return out; +} + +static KeyringReturnCodes +set_key_by_name(GenericKeyring *keyring, keyInfo *key, bool throw_error) +{ + VaultV2Keyring *vault_keyring = (VaultV2Keyring *) keyring; + char url[VAULT_URL_MAX_LEN]; + CurlString str; + long httpCode = 0; + char jsonText[512]; + char keyData[64]; + int keyLen = 0; + int ereport_level = throw_error ? ERROR : WARNING; + + Assert(key != NULL); + + /* + * Since we are only building a very limited JSON with a single base64 + * string, we build it by hand + */ + /* Simpler than using the limited pg json api */ + keyLen = pg_b64_encode((char *) key->data.data, key->data.len, keyData, 64); + keyData[keyLen] = 0; + + snprintf(jsonText, 512, "{\"data\":{\"key\":\"%s\"}}", keyData); + +#if KEYRING_DEBUG + elog(DEBUG1, "Sending base64 key: %s", keyData); +#endif + + get_keyring_vault_url(vault_keyring, key->name.name, url, sizeof(url)); + + if (!curl_perform(vault_keyring, url, &str, &httpCode, jsonText)) + { + if (str.ptr != NULL) + pfree(str.ptr); + + ereport(ereport_level, + (errmsg("HTTP(S) request to keyring provider \"%s\" failed", + vault_keyring->keyring.provider_name))); + + return KEYRING_CODE_INVALID_RESPONSE; + } + + if (str.ptr != NULL) + pfree(str.ptr); + + if (httpCode / 100 == 2) + return KEYRING_CODE_SUCCESS; + + return KEYRING_CODE_INVALID_RESPONSE; +} + +static keyInfo * +get_key_by_name(GenericKeyring *keyring, const char *key_name, bool throw_error, KeyringReturnCodes * return_code) +{ + VaultV2Keyring *vault_keyring = (VaultV2Keyring *) keyring; + keyInfo *key = NULL; + char url[VAULT_URL_MAX_LEN]; + CurlString str; + long httpCode = 0; + JsonParseErrorType json_error; + JsonLexContext *jlex = NULL; + JsonVaultRespState parse; + int ereport_level = throw_error ? 
ERROR : WARNING; + + const char *responseKey; + + *return_code = KEYRING_CODE_SUCCESS; + + get_keyring_vault_url(vault_keyring, key_name, url, sizeof(url)); + + if (!curl_perform(vault_keyring, url, &str, &httpCode, NULL)) + { + *return_code = KEYRING_CODE_INVALID_KEY_SIZE; + ereport(ereport_level, + (errmsg("HTTP(S) request to keyring provider \"%s\" failed", + vault_keyring->keyring.provider_name))); + goto cleanup; + } + + if (httpCode == 404) + { + *return_code = KEYRING_CODE_RESOURCE_NOT_AVAILABLE; + goto cleanup; + } + + if (httpCode / 100 != 2) + { + *return_code = KEYRING_CODE_INVALID_RESPONSE; + ereport(ereport_level, + (errmsg("HTTP(S) request to keyring provider \"%s\" returned invalid response %li", + vault_keyring->keyring.provider_name, httpCode))); + goto cleanup; + } + +#if PG_VERSION_NUM < 170000 + jlex = makeJsonLexContextCstringLen(str.ptr, str.len, PG_UTF8, true); +#else + jlex = makeJsonLexContextCstringLen(NULL, str.ptr, str.len, PG_UTF8, true); +#endif + json_error = parse_json_response(&parse, jlex); + + if (json_error != JSON_SUCCESS) + { + *return_code = KEYRING_CODE_INVALID_RESPONSE; + ereport(ereport_level, + (errmsg("HTTP(S) request to keyring provider \"%s\" returned incorrect JSON: %s", + vault_keyring->keyring.provider_name, json_errdetail(json_error, jlex)))); + goto cleanup; + } + + responseKey = parse.key; + +#if KEYRING_DEBUG + elog(DEBUG1, "Retrieved base64 key: %s", responseKey); +#endif + + key = palloc(sizeof(keyInfo)); + key->data.len = pg_b64_decode(responseKey, strlen(responseKey), (char *) key->data.data, MAX_KEY_DATA_SIZE); + + if (key->data.len > MAX_KEY_DATA_SIZE) + { + *return_code = KEYRING_CODE_INVALID_KEY_SIZE; + ereport(ereport_level, + (errmsg("keyring provider \"%s\" returned invalid key size: %d", + vault_keyring->keyring.provider_name, key->data.len))); + pfree(key); + key = NULL; + goto cleanup; + } + +cleanup: + if (str.ptr != NULL) + pfree(str.ptr); +#if PG_VERSION_NUM >= 170000 + if (jlex != NULL) + freeJsonLexContext(jlex); +#endif + return key; +} + +/* + * JSON parser routines + * + * We expect the response in the form of: + * { + * ... + * "data": { + * "data": { + * "key": "key_value" + * }, + * } + * ... + * } + * + * the rest fields are ignored + */ + +static JsonParseErrorType +parse_json_response(JsonVaultRespState *parse, JsonLexContext *lex) +{ + JsonSemAction sem; + + parse->state = JRESP_EXPECT_TOP_DATA; + parse->level = -1; + parse->field = JRESP_F_UNUSED; + parse->key = NULL; + + sem.semstate = parse; + sem.object_start = json_resp_object_start; + sem.object_end = json_resp_object_end; + sem.array_start = NULL; + sem.array_end = NULL; + sem.object_field_start = json_resp_object_field_start; + sem.object_field_end = NULL; + sem.array_element_start = NULL; + sem.array_element_end = NULL; + sem.scalar = json_resp_scalar; + + return pg_parse_json(lex, &sem); +} + +/* + * Invoked at the start of each object in the JSON document. + * + * It just keeps track of the current nesting level + */ +static JsonParseErrorType +json_resp_object_start(void *state) +{ + ((JsonVaultRespState *) state)->level++; + + return JSON_SUCCESS; +} + +/* + * Invoked at the end of each object in the JSON document. + * + * It just keeps track of the current nesting level + */ +static JsonParseErrorType +json_resp_object_end(void *state) +{ + ((JsonVaultRespState *) state)->level--; + + return JSON_SUCCESS; +} + +/* + * Invoked at the start of each scalar in the JSON document. + * + * We have only the string value of the field. 
And rely on the state set by
+ * `json_resp_object_field_start` for defining what the field is.
+ */
+static JsonParseErrorType
+json_resp_scalar(void *state, char *token, JsonTokenType tokentype)
+{
+	JsonVaultRespState *parse = state;
+
+	switch (parse->field)
+	{
+		case JRESP_F_KEY:
+			parse->key = token;
+			parse->field = JRESP_F_UNUSED;
+			break;
+		default:
+			/* no-op */
+			break;
+	}
+	return JSON_SUCCESS;
+}
+
+/*
+ * Invoked at the start of each object field in the JSON document.
+ *
+ * Based on the given field name and the nesting level, we set the state so
+ * that when we get the value, we know what it is and where to assign it.
+ */
+static JsonParseErrorType
+json_resp_object_field_start(void *state, char *fname, bool isnull)
+{
+	JsonVaultRespState *parse = state;
+
+	switch (parse->state)
+	{
+		case JRESP_EXPECT_TOP_DATA:
+			if (strcmp(fname, "data") == 0 && parse->level == 0)
+				parse->state = JRESP_EXPECT_DATA;
+			break;
+		case JRESP_EXPECT_DATA:
+			if (strcmp(fname, "data") == 0 && parse->level == 1)
+				parse->state = JRESP_EXPECT_KEY;
+			break;
+		case JRESP_EXPECT_KEY:
+			if (strcmp(fname, "key") == 0 && parse->level == 2)
+				parse->field = JRESP_F_KEY;
+			break;
+	}
+
+	return JSON_SUCCESS;
+}
diff --git a/contrib/pg_tde/src/libkmip b/contrib/pg_tde/src/libkmip
new file mode 160000
index 00000000000..f3f21ceb32b
--- /dev/null
+++ b/contrib/pg_tde/src/libkmip
@@ -0,0 +1 @@
+Subproject commit f3f21ceb32bef8ce8fb36e25c6ae4831f7689e02
diff --git a/contrib/pg_tde/src/pg_tde.c b/contrib/pg_tde/src/pg_tde.c
new file mode 100644
index 00000000000..fa48c0fb384
--- /dev/null
+++ b/contrib/pg_tde/src/pg_tde.c
@@ -0,0 +1,226 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde.c
+ *	  Main file: setup GUCs, shared memory, hooks and other general-purpose
+ *	  routines.
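The hook wiring this file performs follows the stock PostgreSQL extension recipe: save whatever hook was installed before you, install your own, and have yours call the saved one first so multiple extensions chain. A generic skeleton of that recipe (the extension-specific names are invented):

#include "postgres.h"
#include "fmgr.h"
#include "miscadmin.h"
#include "storage/ipc.h"
#include "storage/shmem.h"

PG_MODULE_MAGIC;

static shmem_request_hook_type prev_shmem_request_hook = NULL;
static shmem_startup_hook_type prev_shmem_startup_hook = NULL;

static void
my_shmem_request(void)
{
	if (prev_shmem_request_hook)
		prev_shmem_request_hook();
	RequestAddinShmemSpace(1024);	/* whatever the extension needs */
}

static void
my_shmem_startup(void)
{
	bool		found;

	if (prev_shmem_startup_hook)
		prev_shmem_startup_hook();
	ShmemInitStruct("my_extension_state", 1024, &found);
}

void
_PG_init(void)
{
	if (!process_shared_preload_libraries_in_progress)
		ereport(ERROR, (errmsg("this module must be preloaded")));

	prev_shmem_request_hook = shmem_request_hook;
	shmem_request_hook = my_shmem_request;
	prev_shmem_startup_hook = shmem_startup_hook;
	shmem_startup_hook = my_shmem_startup;
}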
+ * + * IDENTIFICATION + * contrib/pg_tde/src/pg_tde.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" +#include "funcapi.h" +#include "pg_tde.h" +#include "transam/pg_tde_xact_handler.h" +#include "miscadmin.h" +#include "storage/ipc.h" +#include "storage/lwlock.h" +#include "storage/shmem.h" +#include "access/pg_tde_ddl.h" +#include "access/pg_tde_xlog.h" +#include "access/pg_tde_xlog_encrypt.h" +#include "encryption/enc_aes.h" +#include "access/pg_tde_tdemap.h" +#include "access/xlog.h" +#include "access/xloginsert.h" +#include "keyring/keyring_api.h" +#include "common/pg_tde_shmem.h" +#include "common/pg_tde_utils.h" +#include "catalog/tde_principal_key.h" +#include "keyring/keyring_file.h" +#include "keyring/keyring_vault.h" +#include "keyring/keyring_kmip.h" +#include "utils/builtins.h" +#include "pg_tde_defs.h" +#include "smgr/pg_tde_smgr.h" +#ifdef PERCONA_EXT +#include "catalog/tde_global_space.h" +#include "utils/percona.h" +#endif + +#include + +#define MAX_ON_INSTALLS 5 + +PG_MODULE_MAGIC; + +struct OnExtInstall +{ + pg_tde_on_ext_install_callback function; + void *arg; +}; + +static struct OnExtInstall on_ext_install_list[MAX_ON_INSTALLS]; +static int on_ext_install_index = 0; +static void run_extension_install_callbacks(XLogExtensionInstall *xlrec, bool redo); +void _PG_init(void); +Datum pg_tde_extension_initialize(PG_FUNCTION_ARGS); +Datum pg_tde_version(PG_FUNCTION_ARGS); + +static shmem_startup_hook_type prev_shmem_startup_hook = NULL; +static shmem_request_hook_type prev_shmem_request_hook = NULL; + +PG_FUNCTION_INFO_V1(pg_tde_extension_initialize); +PG_FUNCTION_INFO_V1(pg_tde_version); +static void +tde_shmem_request(void) +{ + Size sz = TdeRequiredSharedMemorySize(); + int required_locks = TdeRequiredLocksCount(); + +#ifdef PERCONA_EXT + sz = add_size(sz, XLOG_TDE_ENC_BUFF_ALIGNED_SIZE); +#endif + + if (prev_shmem_request_hook) + prev_shmem_request_hook(); + RequestAddinShmemSpace(sz); + RequestNamedLWLockTranche(TDE_TRANCHE_NAME, required_locks); + ereport(LOG, (errmsg("tde_shmem_request: requested %ld bytes", sz))); +} + +static void +tde_shmem_startup(void) +{ + if (prev_shmem_startup_hook) + prev_shmem_startup_hook(); + + TdeShmemInit(); + AesInit(); + +#ifdef PERCONA_EXT + TDEInitGlobalKeys(NULL); + + TDEXLogShmemInit(); + TDEXLogSmgrInit(); +#endif +} + +void +_PG_init(void) +{ + if (!process_shared_preload_libraries_in_progress) + { + elog(ERROR, "pg_tde can only be loaded at server startup. 
Restart required."); + return; + } + +#ifdef PERCONA_EXT + check_percona_api_version(); +#endif + + InitializePrincipalKeyInfo(); + InitializeKeyProviderInfo(); +#ifdef PERCONA_EXT + XLogInitGUC(); +#endif + prev_shmem_request_hook = shmem_request_hook; + shmem_request_hook = tde_shmem_request; + prev_shmem_startup_hook = shmem_startup_hook; + shmem_startup_hook = tde_shmem_startup; + + RegisterXactCallback(pg_tde_xact_callback, NULL); + RegisterSubXactCallback(pg_tde_subxact_callback, NULL); + SetupTdeDDLHooks(); + InstallFileKeyring(); + InstallVaultV2Keyring(); + InstallKmipKeyring(); + RegisterCustomRmgr(RM_TDERMGR_ID, &tdeheap_rmgr); + + RegisterStorageMgr(); +} + +Datum +pg_tde_extension_initialize(PG_FUNCTION_ARGS) +{ + /* Initialize the TDE map */ + XLogExtensionInstall xlrec; + + pg_tde_init_data_dir(); + + xlrec.database_id = MyDatabaseId; + run_extension_install_callbacks(&xlrec, false); + + /* + * Also put this info in xlog, so we can replicate the same on the other + * side + */ + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, sizeof(XLogExtensionInstall)); + XLogInsert(RM_TDERMGR_ID, XLOG_TDE_EXTENSION_INSTALL_KEY); + + PG_RETURN_NULL(); +} +void +extension_install_redo(XLogExtensionInstall *xlrec) +{ + run_extension_install_callbacks(xlrec, true); +} + +/* ---------------------------------------------------------------- + * on_ext_install + * + * Register ordinary callback to perform initializations + * run at the time of pg_tde extension installs. + * ---------------------------------------------------------------- + */ +void +on_ext_install(pg_tde_on_ext_install_callback function, void *arg) +{ + if (on_ext_install_index >= MAX_ON_INSTALLS) + ereport(FATAL, + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), + errmsg_internal("out of on extension install slots"))); + + on_ext_install_list[on_ext_install_index].function = function; + on_ext_install_list[on_ext_install_index].arg = arg; + + ++on_ext_install_index; +} + +/* Creates a tde directory for internal files if not exists */ +void +pg_tde_init_data_dir(void) +{ + struct stat st; + + if (stat(PG_TDE_DATA_DIR, &st) < 0) + { + if (MakePGDirectory(PG_TDE_DATA_DIR) < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not create tde directory \"%s\": %m", + PG_TDE_DATA_DIR))); + } +} + +/* ------------------ + * Run all of the on_ext_install routines and execute those one by one + * ------------------ + */ +static void +run_extension_install_callbacks(XLogExtensionInstall *xlrec, bool redo) +{ + int i; + int tde_table_count = 0; + + /* + * Get the number of tde tables in this database should always be zero. + * But still, it prevents the cleanup if someone explicitly calls this + * function. + */ + if (!redo) + tde_table_count = get_tde_tables_count(); + for (i = 0; i < on_ext_install_index; i++) + on_ext_install_list[i] + .function(tde_table_count, xlrec, redo, on_ext_install_list[i].arg); +} + +/* Returns package version */ +Datum +pg_tde_version(PG_FUNCTION_ARGS) +{ + PG_RETURN_TEXT_P(cstring_to_text(pg_tde_package_string())); +} diff --git a/contrib/pg_tde/src/pg_tde_defs.c b/contrib/pg_tde/src/pg_tde_defs.c new file mode 100644 index 00000000000..22d2602cbe5 --- /dev/null +++ b/contrib/pg_tde/src/pg_tde_defs.c @@ -0,0 +1,36 @@ +/*------------------------------------------------------------------------- + * + * pg_tde_defs.c + * The configure script generates config.h which contains the package_* defs + * and these defines conflicts with the PG defines. 
+ * This file is used to provide the package version string to the extension
+ * without including the config.h file.
+ *
+ * IDENTIFICATION
+ *	  contrib/pg_tde/src/pg_tde_defs.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+
+#include "config.h"
+#include "pg_tde_defs.h"
+
+
+/* Returns package version */
+const char *
+pg_tde_package_string(void)
+{
+	return PACKAGE_STRING;
+}
+
+const char *
+pg_tde_package_name(void)
+{
+	return PACKAGE_NAME;
+}
+
+const char *
+pg_tde_package_version(void)
+{
+	return PACKAGE_VERSION;
+}
diff --git a/contrib/pg_tde/src/pg_tde_event_capture.c b/contrib/pg_tde/src/pg_tde_event_capture.c
new file mode 100644
index 00000000000..1a7518f5d48
--- /dev/null
+++ b/contrib/pg_tde/src/pg_tde_event_capture.c
@@ -0,0 +1,205 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_event_capture.c
+ *	  Event trigger logic to identify whether we are creating an encrypted table.
+ *
+ * IDENTIFICATION
+ *	  contrib/pg_tde/src/pg_tde_event_capture.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "fmgr.h"
+#include "utils/rel.h"
+#include "utils/builtins.h"
+#include "catalog/pg_class.h"
+#include "access/table.h"
+#include "catalog/pg_event_trigger.h"
+#include "catalog/namespace.h"
+#include "commands/event_trigger.h"
+#include "common/pg_tde_utils.h"
+#include "pg_tde_event_capture.h"
+#include "catalog/tde_principal_key.h"
+#include "miscadmin.h"
+#include "access/tableam.h"
+
+/* Global variable that gets set at DDL start and cleared out at DDL end */
+TdeCreateEvent tdeCurrentCreateEvent = {.relation = NULL};
+
+
+static void reset_current_tde_create_event(void);
+
+PG_FUNCTION_INFO_V1(pg_tde_ddl_command_start_capture);
+PG_FUNCTION_INFO_V1(pg_tde_ddl_command_end_capture);
+
+TdeCreateEvent *
+GetCurrentTdeCreateEvent(void)
+{
+	return &tdeCurrentCreateEvent;
+}
+
+/*
+ * pg_tde_ddl_command_start_capture is an event trigger function triggered
+ * at the start of any DDL command execution.
+ *
+ * The function specifically focuses on CREATE INDEX and CREATE TABLE
+ * statements, determining whether the table being created, or the table an
+ * index is being created on, uses the pg_tde access method for encryption.
+ * Once it confirms the table's encryption requirement or usage,
+ * it updates the table information in the tdeCurrentCreateEvent global
+ * variable. This information can be accessed by the SMGR or any other
+ * component during the execution of this DDL statement.
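Stripped of the PostgreSQL specifics, the capture mechanism is a three-step pattern: a start hook records what the statement is about to do, lower layers consult that record mid-statement, and an end hook clears it. A toy model of the flow (all names invented):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct DdlCaptureState
{
	bool		encrypt;
	char		relname[64];
} DdlCaptureState;

static DdlCaptureState capture;	/* the "current DDL" record */

static void
ddl_command_start(const char *relname, const char *access_method)
{
	capture.encrypt = (strcmp(access_method, "tde_heap") == 0);
	snprintf(capture.relname, sizeof(capture.relname), "%s", relname);
}

static bool
storage_wants_encryption(void)
{
	return capture.encrypt;		/* consulted by the storage layer mid-statement */
}

static void
ddl_command_end(void)
{
	memset(&capture, 0, sizeof(capture));	/* never leak state across DDLs */
}

int
main(void)
{
	ddl_command_start("t1", "tde_heap");
	printf("encrypt %s? %s\n", capture.relname,
		   storage_wants_encryption() ? "yes" : "no");
	ddl_command_end();
	return 0;
}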
+ */ +Datum +pg_tde_ddl_command_start_capture(PG_FUNCTION_ARGS) +{ + /* TODO: verify update_compare_indexes failure related to this */ +#ifdef PERCONA_EXT + EventTriggerData *trigdata; + Node *parsetree; + + /* Ensure this function is being called as an event trigger */ + if (!CALLED_AS_EVENT_TRIGGER(fcinfo)) /* internal error */ + ereport(ERROR, + (errmsg("Function can only be fired by event trigger manager"))); + + trigdata = (EventTriggerData *) fcinfo->context; + parsetree = trigdata->parsetree; + + reset_current_tde_create_event(); + + if (IsA(parsetree, IndexStmt)) + { + IndexStmt *stmt = (IndexStmt *) parsetree; + Oid relationId = RangeVarGetRelid(stmt->relation, NoLock, true); + + tdeCurrentCreateEvent.eventType = TDE_INDEX_CREATE_EVENT; + tdeCurrentCreateEvent.baseTableOid = relationId; + tdeCurrentCreateEvent.relation = stmt->relation; + + if (relationId != InvalidOid) + { + LOCKMODE lockmode = AccessShareLock; /* TODO. Verify lock mode? */ + Relation rel = table_open(relationId, lockmode); + + if (rel->rd_rel->relam == get_tde_table_am_oid()) + { + /* We are creating the index on encrypted table */ + /* set the global state */ + tdeCurrentCreateEvent.encryptMode = true; + } + table_close(rel, lockmode); + } + else + ereport(DEBUG1, (errmsg("Failed to get relation Oid for relation:%s", stmt->relation->relname))); + + } + else if (IsA(parsetree, CreateStmt)) + { + CreateStmt *stmt = (CreateStmt *) parsetree; + TDEPrincipalKey *principal_key; + + tdeCurrentCreateEvent.eventType = TDE_TABLE_CREATE_EVENT; + tdeCurrentCreateEvent.relation = stmt->relation; + + if (stmt->accessMethod && strcmp(stmt->accessMethod, "tde_heap") == 0) + { + tdeCurrentCreateEvent.encryptMode = true; + } + else if ((stmt->accessMethod == NULL || stmt->accessMethod[0] == 0) && strcmp(default_table_access_method, "tde_heap") == 0) + { + tdeCurrentCreateEvent.encryptMode = true; + } + + if (tdeCurrentCreateEvent.encryptMode) + { + LWLockAcquire(tde_lwlock_enc_keys(), LW_SHARED); + principal_key = GetPrincipalKey(MyDatabaseId, LW_SHARED); + LWLockRelease(tde_lwlock_enc_keys()); + if (principal_key == NULL) + { + ereport(ERROR, + (errmsg("failed to retrieve principal key. Create one using pg_tde_set_principal_key before using encrypted tables."))); + + } + } + + } + else if (IsA(parsetree, AlterTableStmt)) + { + LOCKMODE lockmode = AccessShareLock; /* TODO. Verify lock mode? */ + AlterTableStmt *stmt = (AlterTableStmt *) parsetree; + ListCell *lcmd; + + foreach(lcmd, stmt->cmds) + { + AlterTableCmd *cmd = (AlterTableCmd *) lfirst(lcmd); + if (cmd->subtype == AT_SetAccessMethod && + ((cmd->name != NULL && strcmp(cmd->name, "tde_heap")==0) || + (cmd->name == NULL && strcmp(default_table_access_method, "tde_heap") == 0)) + ) + { + tdeCurrentCreateEvent.encryptMode = true; + tdeCurrentCreateEvent.eventType = TDE_TABLE_CREATE_EVENT; + tdeCurrentCreateEvent.relation = stmt->relation; + } + } + + if (tdeCurrentCreateEvent.encryptMode) + { + TDEPrincipalKey * principal_key; + Oid relationId = RangeVarGetRelid(stmt->relation, NoLock, true); + Relation rel = table_open(relationId, lockmode); + table_close(rel, lockmode); + + LWLockAcquire(tde_lwlock_enc_keys(), LW_SHARED); + principal_key = GetPrincipalKey(MyDatabaseId, LW_SHARED); + LWLockRelease(tde_lwlock_enc_keys()); + if (principal_key == NULL) + { + ereport(ERROR, + (errmsg("failed to retrieve principal key. 
Create one using pg_tde_set_principal_key before using encrypted tables."))); + + } + } + } +#endif + PG_RETURN_NULL(); +} + +/* + * trigger function called at the end of DDL statement execution. + * It just clears the tdeCurrentCreateEvent global variable. + */ +Datum +pg_tde_ddl_command_end_capture(PG_FUNCTION_ARGS) +{ +#ifdef PERCONA_EXT + /* Ensure this function is being called as an event trigger */ + if (!CALLED_AS_EVENT_TRIGGER(fcinfo)) /* internal error */ + ereport(ERROR, + (errmsg("Function can only be fired by event trigger manager"))); + + elog(LOG, "Type:%s EncryptMode:%s, Oid:%d, Relation:%s ", + (tdeCurrentCreateEvent.eventType == TDE_INDEX_CREATE_EVENT) ? "CREATE INDEX" : + (tdeCurrentCreateEvent.eventType == TDE_TABLE_CREATE_EVENT) ? "CREATE TABLE" : "UNKNOWN", + tdeCurrentCreateEvent.encryptMode ? "true" : "false", + tdeCurrentCreateEvent.baseTableOid, + tdeCurrentCreateEvent.relation ? tdeCurrentCreateEvent.relation->relname : "UNKNOWN"); + + /* All we need to do is to clear the event state */ + reset_current_tde_create_event(); +#endif + PG_RETURN_NULL(); +} + +static void +reset_current_tde_create_event(void) +{ + tdeCurrentCreateEvent.encryptMode = false; + tdeCurrentCreateEvent.eventType = TDE_UNKNOWN_CREATE_EVENT; + tdeCurrentCreateEvent.baseTableOid = InvalidOid; + tdeCurrentCreateEvent.relation = NULL; +} diff --git a/contrib/pg_tde/src/smgr/pg_tde_smgr.c b/contrib/pg_tde/src/smgr/pg_tde_smgr.c new file mode 100644 index 00000000000..357467dc194 --- /dev/null +++ b/contrib/pg_tde/src/smgr/pg_tde_smgr.c @@ -0,0 +1,313 @@ + +#include "smgr/pg_tde_smgr.h" +#include "postgres.h" +#include "storage/smgr.h" +#include "storage/md.h" +#include "catalog/catalog.h" +#include "encryption/enc_aes.h" +#include "access/pg_tde_tdemap.h" +#include "pg_tde_event_capture.h" + +#ifdef PERCONA_EXT + +typedef struct TDESMgrRelationData +{ + /* parent data */ + SMgrRelationData reln; + + /* + * for md.c; per-fork arrays of the number of open segments + * (md_num_open_segs) and the segments themselves (md_seg_fds). 
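+	 * Keeping them first, in the layout md.c expects, is presumably what
+	 * lets the md* routines this smgr delegates to treat this struct as
+	 * their own relation data.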
+ */ + int md_num_open_segs[MAX_FORKNUM + 1]; + struct _MdfdVec *md_seg_fds[MAX_FORKNUM + 1]; + + bool encrypted_relation; + RelKeyData relKey; +} TDESMgrRelationData; + +typedef TDESMgrRelationData * TDESMgrRelation; + +/* + * we only encrypt main and init forks + */ +static inline bool +tde_is_encryption_required(TDESMgrRelation tdereln, ForkNumber forknum) +{ + return (tdereln->encrypted_relation && (forknum == MAIN_FORKNUM || forknum == INIT_FORKNUM)); +} + +static RelKeyData * +tde_smgr_get_key(SMgrRelation reln, RelFileLocator* old_locator, bool can_create) +{ + TdeCreateEvent *event; + RelKeyData *rkd; + TDEPrincipalKey *pk; + + if (IsCatalogRelationOid(reln->smgr_rlocator.locator.relNumber)) + { + /* do not try to encrypt/decrypt catalog tables */ + return NULL; + } + + LWLockAcquire(tde_lwlock_enc_keys(), LW_SHARED); + pk = GetPrincipalKey(reln->smgr_rlocator.locator.dbOid, LW_SHARED); + LWLockRelease(tde_lwlock_enc_keys()); + if (pk == NULL) + { + return NULL; + } + + event = GetCurrentTdeCreateEvent(); + + /* see if we have a key for the relation, and return if yes */ + rkd = GetSMGRRelationKey(reln->smgr_rlocator.locator); + + if (rkd != NULL) + { + return rkd; + } + + /* if this is a CREATE TABLE, we have to generate the key */ + if (event->encryptMode == true && event->eventType == TDE_TABLE_CREATE_EVENT && can_create) + { + return pg_tde_create_smgr_key(&reln->smgr_rlocator.locator); + } + + /* if this is a CREATE INDEX, we have to load the key based on the table */ + if (event->encryptMode == true && event->eventType == TDE_INDEX_CREATE_EVENT && can_create) + { + /* For now keep it simple and create separate key for indexes */ + /* + * Later we might modify the map infrastructure to support the same + * keys + */ + return pg_tde_create_smgr_key(&reln->smgr_rlocator.locator); + } + + /* check if we had a key for the old locator, if there's one */ + if(old_locator != NULL && can_create) + { + RelKeyData *rkd2 = GetSMGRRelationKey(*old_locator); + if(rkd2!=NULL) + { + // create a new key for the new file + return pg_tde_create_key_map_entry(&reln->smgr_rlocator.locator, TDE_KEY_TYPE_SMGR); + } + } + + return NULL; +} + +static void +tde_mdwritev(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, + const void **buffers, BlockNumber nblocks, bool skipFsync) +{ + TDESMgrRelation tdereln = (TDESMgrRelation) reln; + RelKeyData *rkd = &tdereln->relKey; + + if (!tde_is_encryption_required(tdereln, forknum)) + { + mdwritev(reln, forknum, blocknum, buffers, nblocks, skipFsync); + } + else + { + unsigned char *local_blocks = palloc(BLCKSZ * (nblocks + 1)); + unsigned char *local_blocks_aligned = (unsigned char *) TYPEALIGN(PG_IO_ALIGN_SIZE, local_blocks); + void **local_buffers = palloc(sizeof(void *) * nblocks); + + AesInit(); + + for (int i = 0; i < nblocks; ++i) + { + int out_len = BLCKSZ; + BlockNumber bn = blocknum + i; + unsigned char iv[16] = {0,}; + + local_buffers[i] = &local_blocks_aligned[i * BLCKSZ]; + + + memcpy(iv + 4, &bn, sizeof(BlockNumber)); + + AesEncrypt(rkd->internal_key.key, iv, ((unsigned char **) buffers)[i], BLCKSZ, local_buffers[i], &out_len); + } + + mdwritev(reln, forknum, blocknum, + (const void**) local_buffers, nblocks, skipFsync); + + pfree(local_blocks); + pfree(local_buffers); + } +} + +static void +tde_mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, + const void *buffer, bool skipFsync) +{ + TDESMgrRelation tdereln = (TDESMgrRelation) reln; + RelKeyData *rkd = &tdereln->relKey; + + if 
(!tde_is_encryption_required(tdereln, forknum)) + { + mdextend(reln, forknum, blocknum, buffer, skipFsync); + } + else + { + unsigned char *local_blocks = palloc(BLCKSZ * (1 + 1)); + unsigned char *local_blocks_aligned = (unsigned char *) TYPEALIGN(PG_IO_ALIGN_SIZE, local_blocks); + int out_len = BLCKSZ; + unsigned char iv[16] = { + 0, + }; + + AesInit(); + memcpy(iv + 4, &blocknum, sizeof(BlockNumber)); + + AesEncrypt(rkd->internal_key.key, iv, ((unsigned char *) buffer), BLCKSZ, local_blocks_aligned, &out_len); + + mdextend(reln, forknum, blocknum, local_blocks_aligned, skipFsync); + + pfree(local_blocks); + } +} + +static void +tde_mdreadv(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, + void **buffers, BlockNumber nblocks) +{ + int out_len = BLCKSZ; + TDESMgrRelation tdereln = (TDESMgrRelation) reln; + RelKeyData *rkd = &tdereln->relKey; + + mdreadv(reln, forknum, blocknum, buffers, nblocks); + + if (!tde_is_encryption_required(tdereln, forknum)) + return; + + AesInit(); + + for (int i = 0; i < nblocks; ++i) + { + bool allZero = true; + BlockNumber bn = blocknum + i; + unsigned char iv[16] = {0,}; + + for (int j = 0; j < 32; ++j) + { + if (((char **) buffers)[i][j] != 0) + { + /* + * Postgres creates all zero blocks in an optimized route, + * which we do not try + */ + /* to encrypt. */ + /* + * Instead we detect if a block is all zero at decryption + * time, and + */ + /* leave it as is. */ + /* + * This could be a security issue later, but it is a good + * first prototype + */ + allZero = false; + break; + } + } + if (allZero) + continue; + + memcpy(iv + 4, &bn, sizeof(BlockNumber)); + + AesDecrypt(rkd->internal_key.key, iv, ((unsigned char **) buffers)[i], BLCKSZ, ((unsigned char **) buffers)[i], &out_len); + } +} + +static void +tde_mdcreate(RelFileLocator relold, SMgrRelation reln, ForkNumber forknum, bool isRedo) +{ + TDESMgrRelation tdereln = (TDESMgrRelation) reln; + RelKeyData *key; + /* + * This is the only function that gets called during actual CREATE + * TABLE/INDEX (EVENT TRIGGER) + */ + /* so we create the key here by loading it */ + + mdcreate(relold, reln, forknum, isRedo); + + /* + * Later calls then decide to encrypt or not based on the existence of the + * key + */ + key = tde_smgr_get_key(reln, &relold, true); + + if (key) + { + tdereln->encrypted_relation = true; + memcpy(&tdereln->relKey, key, sizeof(RelKeyData)); + } + else + { + tdereln->encrypted_relation = false; + } +} + +/* + * mdopen() -- Initialize newly-opened relation. 
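The encryption calls in tde_mdextend, tde_mdwritev and tde_mdreadv above all derive their IV the same way: a zeroed 16-byte array with the block number copied in at byte offset 4, so each block of a fork encrypts under a distinct IV. That scheme in isolation, as a runnable sketch:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint32_t BlockNumber;

static void
build_block_iv(unsigned char iv[16], BlockNumber blkno)
{
	memset(iv, 0, 16);							/* zeroed base IV */
	memcpy(iv + 4, &blkno, sizeof(BlockNumber)); /* block number at offset 4 */
}

int
main(void)
{
	unsigned char iv[16];

	build_block_iv(iv, 42);
	for (int i = 0; i < 16; i++)
		printf("%02x", iv[i]);
	putchar('\n');
	return 0;
}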
+ */
+static void
+tde_mdopen(SMgrRelation reln)
+{
+	TDESMgrRelation tdereln = (TDESMgrRelation) reln;
+	RelKeyData *key = tde_smgr_get_key(reln, NULL, false);
+
+	if (key)
+	{
+		tdereln->encrypted_relation = true;
+		memcpy(&tdereln->relKey, key, sizeof(RelKeyData));
+	}
+	else
+	{
+		tdereln->encrypted_relation = false;
+	}
+	mdopen(reln);
+}
+
+static SMgrId tde_smgr_id;
+static const struct f_smgr tde_smgr = {
+	.name = "tde",
+	.smgr_init = mdinit,
+	.smgr_shutdown = NULL,
+	.smgr_open = tde_mdopen,
+	.smgr_close = mdclose,
+	.smgr_create = tde_mdcreate,
+	.smgr_exists = mdexists,
+	.smgr_unlink = mdunlink,
+	.smgr_extend = tde_mdextend,
+	.smgr_zeroextend = mdzeroextend,
+	.smgr_prefetch = mdprefetch,
+	.smgr_readv = tde_mdreadv,
+	.smgr_writev = tde_mdwritev,
+	.smgr_writeback = mdwriteback,
+	.smgr_nblocks = mdnblocks,
+	.smgr_truncate = mdtruncate,
+	.smgr_immedsync = mdimmedsync,
+	.smgr_registersync = mdregistersync,
+};
+
+void
+RegisterStorageMgr(void)
+{
+	tde_smgr_id = smgr_register(&tde_smgr, sizeof(TDESMgrRelationData));
+
+	/* TODO: figure out how this part should work in a real extension */
+	storage_manager_id = tde_smgr_id;
+}
+
+#else
+void
+RegisterStorageMgr(void)
+{
+}
+#endif							/* PERCONA_EXT */
diff --git a/contrib/pg_tde/src/transam/pg_tde_xact_handler.c b/contrib/pg_tde/src/transam/pg_tde_xact_handler.c
new file mode 100644
index 00000000000..0f9d680555e
--- /dev/null
+++ b/contrib/pg_tde/src/transam/pg_tde_xact_handler.c
@@ -0,0 +1,183 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_xact_handler.c
+ *	  Transaction handling routines for pg_tde
+ *
+ *
+ * IDENTIFICATION
+ *	  src/transam/pg_tde_xact_handler.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+#include "utils/memutils.h"
+#include "utils/palloc.h"
+#include "utils/elog.h"
+#include "storage/fd.h"
+#include "transam/pg_tde_xact_handler.h"
+#include "access/pg_tde_tdemap.h"
+
+typedef struct PendingMapEntryDelete
+{
+	off_t		map_entry_offset;	/* map entry offset */
+	RelFileLocator rlocator;	/* relation the map entry belongs to */
+	bool		atCommit;		/* T=delete at commit; F=delete at abort */
+	int			nestLevel;		/* xact nesting level of request */
+	struct PendingMapEntryDelete *next; /* linked-list link */
+} PendingMapEntryDelete;
+
+static PendingMapEntryDelete *pendingDeletes = NULL; /* head of linked list */
+
+static void do_pending_deletes(bool isCommit);
+static void reassign_pending_deletes_to_parent_xact(void);
+static void pending_delete_cleanup(void);
+
+/* Transaction callbacks registered with the backend */
+void
+pg_tde_xact_callback(XactEvent event, void *arg)
+{
+	if (event == XACT_EVENT_PARALLEL_ABORT ||
+		event == XACT_EVENT_ABORT)
+	{
+		ereport(DEBUG2,
+				(errmsg("pg_tde_xact_callback: aborting transaction")));
+		do_pending_deletes(false);
+	}
+	else if (event == XACT_EVENT_COMMIT)
+	{
+		do_pending_deletes(true);
+		pending_delete_cleanup();
+	}
+	else if (event == XACT_EVENT_PREPARE)
+	{
+		pending_delete_cleanup();
+	}
+}
+
+void
+pg_tde_subxact_callback(SubXactEvent event, SubTransactionId mySubid,
+						SubTransactionId parentSubid, void *arg)
+{
+	/* TODO: handle all possible transaction states */
+	if (event == SUBXACT_EVENT_ABORT_SUB)
+	{
+		ereport(DEBUG2,
+				(errmsg("pg_tde_subxact_callback: aborting subtransaction")));
+		do_pending_deletes(false);
+	}
+	else if (event == SUBXACT_EVENT_COMMIT_SUB)
+	{
+		ereport(DEBUG2,
+				(errmsg("pg_tde_subxact_callback: committing subtransaction")));
+		reassign_pending_deletes_to_parent_xact();
+	}
+}
+
+void
+RegisterEntryForDeletion(const RelFileLocator *rlocator, off_t map_entry_offset, bool atCommit)
+{
+	PendingMapEntryDelete *pending;
+
+	pending = (PendingMapEntryDelete *) MemoryContextAlloc(TopMemoryContext, sizeof(PendingMapEntryDelete));
+	pending->map_entry_offset = map_entry_offset;
+	memcpy(&pending->rlocator, rlocator, sizeof(RelFileLocator));
+	pending->atCommit = atCommit;	/* T=delete at commit; F=delete at abort */
+	pending->nestLevel = GetCurrentTransactionNestLevel();
+	pending->next = pendingDeletes;
+	pendingDeletes = pending;
+}
+
+/*
+ * do_pending_deletes() -- Take care of file deletes at end of xact.
+ *
+ * This also runs when aborting a subxact; we want to clean up a failed
+ * subxact immediately.
+ */
+static void
+do_pending_deletes(bool isCommit)
+{
+	int			nestLevel = GetCurrentTransactionNestLevel();
+	PendingMapEntryDelete *pending;
+	PendingMapEntryDelete *prev;
+	PendingMapEntryDelete *next;
+
+	LWLockAcquire(tde_lwlock_enc_keys(), LW_EXCLUSIVE);
+
+	prev = NULL;
+	for (pending = pendingDeletes; pending != NULL; pending = next)
+	{
+		next = pending->next;
+		if (pending->nestLevel != nestLevel)
+		{
+			/* outer-level entries should not be processed yet */
+			prev = pending;
+			continue;
+		}
+
+		/* unlink list entry first, so we don't retry on failure */
+		if (prev)
+			prev->next = next;
+		else
+			pendingDeletes = next;
+		/* do deletion if called for */
+		if (pending->atCommit == isCommit)
+		{
+			ereport(LOG,
+					(errmsg("do_pending_deletes: deleting entry at offset %d",
+							(int) (pending->map_entry_offset))));
+			pg_tde_free_key_map_entry(&pending->rlocator, MAP_ENTRY_VALID, pending->map_entry_offset);
+		}
+		pfree(pending);
+		/* prev does not change */
+	}
+
+	LWLockRelease(tde_lwlock_enc_keys());
+}
+
+
+/*
+ * reassign_pending_deletes_to_parent_xact() -- Adjust nesting level of pending deletes.
+ *
+ * There are several cases to consider:
+ * 1. Only the top-level transaction can perform on-commit deletes.
+ * 2. Both subtransactions and the top-level transaction can perform on-abort
+ *    deletes.
+ * We therefore decrement the nesting level of pending deletes to reassign
+ * them to the parent transaction, provided the subtransaction was not itself
+ * aborted. In other words, when a subtransaction commits, all of its pending
+ * deletes are reassigned to the parent transaction.
+ */
+static void
+reassign_pending_deletes_to_parent_xact(void)
+{
+	PendingMapEntryDelete *pending;
+	int			nestLevel = GetCurrentTransactionNestLevel();
+
+	for (pending = pendingDeletes; pending != NULL; pending = pending->next)
+	{
+		if (pending->nestLevel == nestLevel)
+			pending->nestLevel--;
+	}
+}
+
+/*
+ * pending_delete_cleanup -- Clean up after a successful PREPARE or COMMIT
+ *
+ * What we have to do here is throw away the in-memory state about pending
+ * file deletes. It's all been recorded in the 2PC state file and
+ * it's no longer our job to worry about it.
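The nesting-level bookkeeping above is easiest to see on a toy list. In this sketch (simplified types, invented names), a subtransaction commit lowers an entry's level so the parent adopts it, while abort or commit processing only touches entries at the current level:

#include <stdio.h>
#include <stdlib.h>

typedef struct Pending
{
	int			nestLevel;
	struct Pending *next;
} Pending;

static Pending *pending_head = NULL;

static void
remember(int nestLevel)
{
	Pending    *p = malloc(sizeof(Pending));

	p->nestLevel = nestLevel;
	p->next = pending_head;
	pending_head = p;
}

/* subtransaction committed: its entries now belong to the parent */
static void
reassign_to_parent(int nestLevel)
{
	for (Pending *p = pending_head; p; p = p->next)
		if (p->nestLevel == nestLevel)
			p->nestLevel--;
}

int
main(void)
{
	remember(1);				/* queued by the top-level xact */
	remember(2);				/* queued by a subxact */
	reassign_to_parent(2);		/* the subxact commits */
	for (Pending *p = pending_head; p; p = p->next)
		printf("entry at level %d\n", p->nestLevel);	/* both print 1 */
	while (pending_head)
	{
		Pending    *n = pending_head->next;

		free(pending_head);
		pending_head = n;
	}
	return 0;
}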
+ */ +static void +pending_delete_cleanup(void) +{ + PendingMapEntryDelete *pending; + PendingMapEntryDelete *next; + + for (pending = pendingDeletes; pending != NULL; pending = next) + { + next = pending->next; + pendingDeletes = next; + pfree(pending); + } +} diff --git a/contrib/pg_tde/src16/COMMIT b/contrib/pg_tde/src16/COMMIT new file mode 100644 index 00000000000..090b64cf67b --- /dev/null +++ b/contrib/pg_tde/src16/COMMIT @@ -0,0 +1 @@ +f199436c12819d2c01b72eaa6429de0ca5838471 diff --git a/contrib/pg_tde/src16/access/pg_tde_io.c b/contrib/pg_tde/src16/access/pg_tde_io.c new file mode 100644 index 00000000000..0c107a331cd --- /dev/null +++ b/contrib/pg_tde/src16/access/pg_tde_io.c @@ -0,0 +1,895 @@ +/*------------------------------------------------------------------------- + * + * hio.c + * POSTGRES heap access method input/output code. + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/heap/hio.c + * + *------------------------------------------------------------------------- + */ + +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tdeam.h" +#include "access/pg_tde_io.h" +#include "access/pg_tde_visibilitymap.h" +#include "encryption/enc_tde.h" + +#include "access/htup_details.h" +#include "storage/bufmgr.h" +#include "storage/freespace.h" +#include "storage/lmgr.h" +#include "storage/smgr.h" + + +/* + * tdeheap_RelationPutHeapTuple - place tuple at specified page + * + * !!! EREPORT(ERROR) IS DISALLOWED HERE !!! Must PANIC on failure!!! + * + * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer. + */ +void +tdeheap_RelationPutHeapTuple(Relation relation, + Buffer buffer, + HeapTuple tuple, + bool encrypt, + bool token) +{ + Page pageHeader; + OffsetNumber offnum; + + /* + * A tuple that's being inserted speculatively should already have its + * token set. + */ + Assert(!token || HeapTupleHeaderIsSpeculative(tuple->t_data)); + + /* + * Do not allow tuples with invalid combinations of hint bits to be placed + * on a page. This combination is detected as corruption by the + * contrib/amcheck logic, so if you disable this assertion, make + * corresponding changes there. + */ + Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_COMMITTED) && + (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI))); + + /* Add the tuple to the page */ + pageHeader = BufferGetPage(buffer); + + if (encrypt) + offnum = TDE_PageAddItem(relation->rd_locator, BufferGetBlockNumber(buffer), pageHeader, (Item) tuple->t_data, + tuple->t_len, InvalidOffsetNumber, false, true); + else + offnum = PageAddItem(pageHeader, (Item) tuple->t_data, + tuple->t_len, InvalidOffsetNumber, false, true); + + if (offnum == InvalidOffsetNumber) + elog(PANIC, "failed to add tuple to page"); + + /* Update tuple->t_self to the actual position where it was stored */ + ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum); + + /* + * Insert the correct position into CTID of the stored tuple, too (unless + * this is a speculative insertion, in which case the token is held in + * CTID field instead) + */ + if (!token) + { + ItemId itemId = PageGetItemId(pageHeader, offnum); + HeapTupleHeader item = (HeapTupleHeader) PageGetItem(pageHeader, itemId); + + item->t_ctid = tuple->t_self; + } +} + +/* + * Read in a buffer in mode, using bulk-insert strategy if bistate isn't NULL. 
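ReadBufferBI below is essentially a one-slot cache: the bulk-insert state keeps its current buffer pinned, reuses it when the next request targets the same block, and otherwise falls back to a real read. The reuse rule, modeled with integers standing in for pinned buffers (names invented):

#include <stdio.h>

#define INVALID_BLOCK (-1)

typedef struct BulkState
{
	int			current_block;	/* stands in for the pinned buffer */
} BulkState;

static int	reads;				/* counts "real" buffer reads */

static int
read_block(BulkState *bi, int target_block)
{
	if (bi && bi->current_block == target_block)
		return target_block;	/* re-pin and return; no I/O path taken */

	reads++;					/* the expensive path */
	if (bi)
		bi->current_block = target_block;	/* remember for next time */
	return target_block;
}

int
main(void)
{
	BulkState	bi = {INVALID_BLOCK};

	read_block(&bi, 7);
	read_block(&bi, 7);			/* served from the one-slot cache */
	read_block(&bi, 8);
	printf("real reads: %d\n", reads);	/* prints 2 */
	return 0;
}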
+ */ +static Buffer +ReadBufferBI(Relation relation, BlockNumber targetBlock, + ReadBufferMode mode, BulkInsertState bistate) +{ + Buffer buffer; + + /* If not bulk-insert, exactly like ReadBuffer */ + if (!bistate) + return ReadBufferExtended(relation, MAIN_FORKNUM, targetBlock, + mode, NULL); + + /* If we have the desired block already pinned, re-pin and return it */ + if (bistate->current_buf != InvalidBuffer) + { + if (BufferGetBlockNumber(bistate->current_buf) == targetBlock) + { + /* + * Currently the LOCK variants are only used for extending + * relation, which should never reach this branch. + */ + Assert(mode != RBM_ZERO_AND_LOCK && + mode != RBM_ZERO_AND_CLEANUP_LOCK); + + IncrBufferRefCount(bistate->current_buf); + return bistate->current_buf; + } + /* ... else drop the old buffer */ + ReleaseBuffer(bistate->current_buf); + bistate->current_buf = InvalidBuffer; + } + + /* Perform a read using the buffer strategy */ + buffer = ReadBufferExtended(relation, MAIN_FORKNUM, targetBlock, + mode, bistate->strategy); + + /* Save the selected block as target for future inserts */ + IncrBufferRefCount(buffer); + bistate->current_buf = buffer; + + return buffer; +} + +/* + * For each heap page which is all-visible, acquire a pin on the appropriate + * visibility map page, if we haven't already got one. + * + * To avoid complexity in the callers, either buffer1 or buffer2 may be + * InvalidBuffer if only one buffer is involved. For the same reason, block2 + * may be smaller than block1. + * + * Returns whether buffer locks were temporarily released. + */ +static bool +GetVisibilityMapPins(Relation relation, Buffer buffer1, Buffer buffer2, + BlockNumber block1, BlockNumber block2, + Buffer *vmbuffer1, Buffer *vmbuffer2) +{ + bool need_to_pin_buffer1; + bool need_to_pin_buffer2; + bool released_locks = false; + + /* + * Swap buffers around to handle case of a single block/buffer, and to + * handle if lock ordering rules require to lock block2 first. + */ + if (!BufferIsValid(buffer1) || + (BufferIsValid(buffer2) && block1 > block2)) + { + Buffer tmpbuf = buffer1; + Buffer *tmpvmbuf = vmbuffer1; + BlockNumber tmpblock = block1; + + buffer1 = buffer2; + vmbuffer1 = vmbuffer2; + block1 = block2; + + buffer2 = tmpbuf; + vmbuffer2 = tmpvmbuf; + block2 = tmpblock; + } + + Assert(BufferIsValid(buffer1)); + Assert(buffer2 == InvalidBuffer || block1 <= block2); + + while (1) + { + /* Figure out which pins we need but don't have. */ + need_to_pin_buffer1 = PageIsAllVisible(BufferGetPage(buffer1)) + && !tdeheap_visibilitymap_pin_ok(block1, *vmbuffer1); + need_to_pin_buffer2 = buffer2 != InvalidBuffer + && PageIsAllVisible(BufferGetPage(buffer2)) + && !tdeheap_visibilitymap_pin_ok(block2, *vmbuffer2); + if (!need_to_pin_buffer1 && !need_to_pin_buffer2) + break; + + /* We must unlock both buffers before doing any I/O. */ + released_locks = true; + LockBuffer(buffer1, BUFFER_LOCK_UNLOCK); + if (buffer2 != InvalidBuffer && buffer2 != buffer1) + LockBuffer(buffer2, BUFFER_LOCK_UNLOCK); + + /* Get pins. */ + if (need_to_pin_buffer1) + tdeheap_visibilitymap_pin(relation, block1, vmbuffer1); + if (need_to_pin_buffer2) + tdeheap_visibilitymap_pin(relation, block2, vmbuffer2); + + /* Relock buffers. 
*/ + LockBuffer(buffer1, BUFFER_LOCK_EXCLUSIVE); + if (buffer2 != InvalidBuffer && buffer2 != buffer1) + LockBuffer(buffer2, BUFFER_LOCK_EXCLUSIVE); + + /* + * If there are two buffers involved and we pinned just one of them, + * it's possible that the second one became all-visible while we were + * busy pinning the first one. If it looks like that's a possible + * scenario, we'll need to make a second pass through this loop. + */ + if (buffer2 == InvalidBuffer || buffer1 == buffer2 + || (need_to_pin_buffer1 && need_to_pin_buffer2)) + break; + } + + return released_locks; +} + +/* + * Extend the relation. By multiple pages, if beneficial. + * + * If the caller needs multiple pages (num_pages > 1), we always try to extend + * by at least that much. + * + * If there is contention on the extension lock, we don't just extend "for + * ourselves", but we try to help others. We can do so by adding empty pages + * into the FSM. Typically there is no contention when we can't use the FSM. + * + * We do have to limit the number of pages to extend by to some value, as the + * buffers for all the extended pages need to, temporarily, be pinned. For now + * we define MAX_BUFFERS_TO_EXTEND_BY to be 64 buffers, it's hard to see + * benefits with higher numbers. This partially is because copyfrom.c's + * MAX_BUFFERED_TUPLES / MAX_BUFFERED_BYTES prevents larger multi_inserts. + * + * Returns a buffer for a newly extended block. If possible, the buffer is + * returned exclusively locked. *did_unlock is set to true if the lock had to + * be released, false otherwise. + * + * + * XXX: It would likely be beneficial for some workloads to extend more + * aggressively, e.g. using a heuristic based on the relation size. + */ +static Buffer +RelationAddBlocks(Relation relation, BulkInsertState bistate, + int num_pages, bool use_fsm, bool *did_unlock) +{ +#define MAX_BUFFERS_TO_EXTEND_BY 64 + Buffer victim_buffers[MAX_BUFFERS_TO_EXTEND_BY]; + BlockNumber first_block = InvalidBlockNumber; + BlockNumber last_block = InvalidBlockNumber; + uint32 extend_by_pages; + uint32 not_in_fsm_pages; + Buffer buffer; + Page page; + + /* + * Determine by how many pages to try to extend by. + */ + if (bistate == NULL && !use_fsm) + { + /* + * If we have neither bistate, nor can use the FSM, we can't bulk + * extend - there'd be no way to find the additional pages. + */ + extend_by_pages = 1; + } + else + { + uint32 waitcount; + + /* + * Try to extend at least by the number of pages the caller needs. We + * can remember the additional pages (either via FSM or bistate). + */ + extend_by_pages = num_pages; + + if (!RELATION_IS_LOCAL(relation)) + waitcount = RelationExtensionLockWaiterCount(relation); + else + waitcount = 0; + + /* + * Multiply the number of pages to extend by the number of waiters. Do + * this even if we're not using the FSM, as it still relieves + * contention, by deferring the next time this backend needs to + * extend. In that case the extended pages will be found via + * bistate->next_free. + */ + extend_by_pages += extend_by_pages * waitcount; + + /* --- + * If we previously extended using the same bistate, it's very likely + * we'll extend some more. Try to extend by as many pages as + * before. This can be important for performance for several reasons, + * including: + * + * - It prevents mdzeroextend() switching between extending the + * relation in different ways, which is inefficient for some + * filesystems. + * + * - Contention is often intermittent. 
Even if we currently don't see + * other waiters (see above), extending by larger amounts can + * prevent future contention. + * --- + */ + if (bistate) + extend_by_pages = Max(extend_by_pages, bistate->already_extended_by); + + /* + * Can't extend by more than MAX_BUFFERS_TO_EXTEND_BY, we need to pin + * them all concurrently. + */ + extend_by_pages = Min(extend_by_pages, MAX_BUFFERS_TO_EXTEND_BY); + } + + /* + * How many of the extended pages should be entered into the FSM? + * + * If we have a bistate, only enter pages that we don't need ourselves + * into the FSM. Otherwise every other backend will immediately try to + * use the pages this backend needs for itself, causing unnecessary + * contention. If we don't have a bistate, we can't avoid the FSM. + * + * Never enter the page returned into the FSM, we'll immediately use it. + */ + if (num_pages > 1 && bistate == NULL) + not_in_fsm_pages = 1; + else + not_in_fsm_pages = num_pages; + + /* prepare to put another buffer into the bistate */ + if (bistate && bistate->current_buf != InvalidBuffer) + { + ReleaseBuffer(bistate->current_buf); + bistate->current_buf = InvalidBuffer; + } + + /* + * Extend the relation. We ask for the first returned page to be locked, + * so that we are sure that nobody has inserted into the page + * concurrently. + * + * With the current MAX_BUFFERS_TO_EXTEND_BY there's no danger of + * [auto]vacuum trying to truncate later pages as REL_TRUNCATE_MINIMUM is + * way larger. + */ + first_block = ExtendBufferedRelBy(BMR_REL(relation), MAIN_FORKNUM, + bistate ? bistate->strategy : NULL, + EB_LOCK_FIRST, + extend_by_pages, + victim_buffers, + &extend_by_pages); + buffer = victim_buffers[0]; /* the buffer the function will return */ + last_block = first_block + (extend_by_pages - 1); + Assert(first_block == BufferGetBlockNumber(buffer)); + + /* + * Relation is now extended. Initialize the page. We do this here, before + * potentially releasing the lock on the page, because it allows us to + * double check that the page contents are empty (this should never + * happen, but if it does we don't want to risk wiping out valid data). + */ + page = BufferGetPage(buffer); + if (!PageIsNew(page)) + elog(ERROR, "page %u of relation \"%s\" should be empty but is not", + first_block, + RelationGetRelationName(relation)); + + PageInit(page, BufferGetPageSize(buffer), 0); + MarkBufferDirty(buffer); + + /* + * If we decided to put pages into the FSM, release the buffer lock (but + * not pin), we don't want to do IO while holding a buffer lock. This will + * necessitate a bit more extensive checking in our caller. + */ + if (use_fsm && not_in_fsm_pages < extend_by_pages) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + *did_unlock = true; + } + else + *did_unlock = false; + + /* + * Relation is now extended. Release pins on all buffers, except for the + * first (which we'll return). If we decided to put pages into the FSM, + * we can do that as part of the same loop. 
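+	 *
+	 * The free space recorded for each such page is that of an empty,
+	 * freshly initialized page: the whole block minus its page header.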
+ */ + for (uint32 i = 1; i < extend_by_pages; i++) + { + BlockNumber curBlock = first_block + i; + + Assert(curBlock == BufferGetBlockNumber(victim_buffers[i])); + Assert(BlockNumberIsValid(curBlock)); + + ReleaseBuffer(victim_buffers[i]); + + if (use_fsm && i >= not_in_fsm_pages) + { + Size freespace = BufferGetPageSize(victim_buffers[i]) - + SizeOfPageHeaderData; + + RecordPageWithFreeSpace(relation, curBlock, freespace); + } + } + + if (use_fsm && not_in_fsm_pages < extend_by_pages) + { + BlockNumber first_fsm_block = first_block + not_in_fsm_pages; + + FreeSpaceMapVacuumRange(relation, first_fsm_block, last_block); + } + + if (bistate) + { + /* + * Remember the additional pages we extended by, so we later can use + * them without looking into the FSM. + */ + if (extend_by_pages > 1) + { + bistate->next_free = first_block + 1; + bistate->last_free = last_block; + } + else + { + bistate->next_free = InvalidBlockNumber; + bistate->last_free = InvalidBlockNumber; + } + + /* maintain bistate->current_buf */ + IncrBufferRefCount(buffer); + bistate->current_buf = buffer; + bistate->already_extended_by += extend_by_pages; + } + + return buffer; +#undef MAX_BUFFERS_TO_EXTEND_BY +} + +/* + * tdeheap_RelationGetBufferForTuple + * + * Returns pinned and exclusive-locked buffer of a page in given relation + * with free space >= given len. + * + * If num_pages is > 1, we will try to extend the relation by at least that + * many pages when we decide to extend the relation. This is more efficient + * for callers that know they will need multiple pages + * (e.g. tdeheap_multi_insert()). + * + * If otherBuffer is not InvalidBuffer, then it references a previously + * pinned buffer of another page in the same relation; on return, this + * buffer will also be exclusive-locked. (This case is used by tdeheap_update; + * the otherBuffer contains the tuple being updated.) + * + * The reason for passing otherBuffer is that if two backends are doing + * concurrent tdeheap_update operations, a deadlock could occur if they try + * to lock the same two buffers in opposite orders. To ensure that this + * can't happen, we impose the rule that buffers of a relation must be + * locked in increasing page number order. This is most conveniently done + * by having tdeheap_RelationGetBufferForTuple lock them both, with suitable care + * for ordering. + * + * NOTE: it is unlikely, but not quite impossible, for otherBuffer to be the + * same buffer we select for insertion of the new tuple (this could only + * happen if space is freed in that page after tdeheap_update finds there's not + * enough there). In that case, the page will be pinned and locked only once. + * + * We also handle the possibility that the all-visible flag will need to be + * cleared on one or both pages. If so, pin on the associated visibility map + * page must be acquired before acquiring buffer lock(s), to avoid possibly + * doing I/O while holding buffer locks. The pins are passed back to the + * caller using the input-output arguments vmbuffer and vmbuffer_other. + * Note that in some cases the caller might have already acquired such pins, + * which is indicated by these arguments not being InvalidBuffer on entry. + * + * We normally use FSM to help us find free space. However, + * if HEAP_INSERT_SKIP_FSM is specified, we just append a new empty page to + * the end of the relation if the tuple won't fit on the current target page. + * This can save some cycles when we know the relation is new and doesn't + * contain useful amounts of free space. 
+ * + * HEAP_INSERT_SKIP_FSM is also useful for non-WAL-logged additions to a + * relation, if the caller holds exclusive lock and is careful to invalidate + * relation's smgr_targblock before the first insertion --- that ensures that + * all insertions will occur into newly added pages and not be intermixed + * with tuples from other transactions. That way, a crash can't risk losing + * any committed data of other transactions. (See tdeheap_insert's comments + * for additional constraints needed for safe usage of this behavior.) + * + * The caller can also provide a BulkInsertState object to optimize many + * insertions into the same relation. This keeps a pin on the current + * insertion target page (to save pin/unpin cycles) and also passes a + * BULKWRITE buffer selection strategy object to the buffer manager. + * Passing NULL for bistate selects the default behavior. + * + * We don't fill existing pages further than the fillfactor, except for large + * tuples in nearly-empty pages. This is OK since this routine is not + * consulted when updating a tuple and keeping it on the same page, which is + * the scenario fillfactor is meant to reserve space for. + * + * ereport(ERROR) is allowed here, so this routine *must* be called + * before any (unlogged) changes are made in buffer pool. + */ +Buffer +tdeheap_RelationGetBufferForTuple(Relation relation, Size len, + Buffer otherBuffer, int options, + BulkInsertState bistate, + Buffer *vmbuffer, Buffer *vmbuffer_other, + int num_pages) +{ + bool use_fsm = !(options & HEAP_INSERT_SKIP_FSM); + Buffer buffer = InvalidBuffer; + Page page; + Size nearlyEmptyFreeSpace, + pageFreeSpace = 0, + saveFreeSpace = 0, + targetFreeSpace = 0; + BlockNumber targetBlock, + otherBlock; + bool unlockedTargetBuffer; + bool recheckVmPins; + + len = MAXALIGN(len); /* be conservative */ + + /* if the caller doesn't know by how many pages to extend, extend by 1 */ + if (num_pages <= 0) + num_pages = 1; + + /* Bulk insert is not supported for updates, only inserts. */ + Assert(otherBuffer == InvalidBuffer || !bistate); + + /* + * If we're gonna fail for oversize tuple, do it right away + */ + if (len > MaxHeapTupleSize) + ereport(ERROR, + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), + errmsg("row is too big: size %zu, maximum size %zu", + len, MaxHeapTupleSize))); + + /* Compute desired extra freespace due to fillfactor option */ + saveFreeSpace = RelationGetTargetPageFreeSpace(relation, + HEAP_DEFAULT_FILLFACTOR); + + /* + * Since pages without tuples can still have line pointers, we consider + * pages "empty" when the unavailable space is slight. This threshold is + * somewhat arbitrary, but it should prevent most unnecessary relation + * extensions while inserting large tuples into low-fillfactor tables. + */ + nearlyEmptyFreeSpace = MaxHeapTupleSize - + (MaxHeapTuplesPerPage / 8 * sizeof(ItemIdData)); + if (len + saveFreeSpace > nearlyEmptyFreeSpace) + targetFreeSpace = Max(len, nearlyEmptyFreeSpace); + else + targetFreeSpace = len + saveFreeSpace; + + if (otherBuffer != InvalidBuffer) + otherBlock = BufferGetBlockNumber(otherBuffer); + else + otherBlock = InvalidBlockNumber; /* just to keep compiler quiet */ + + /* + * We first try to put the tuple on the same page we last inserted a tuple + * on, as cached in the BulkInsertState or relcache entry. If that + * doesn't work, we ask the Free Space Map to locate a suitable page. + * Since the FSM's info might be out of date, we have to be prepared to + * loop around and retry multiple times. 
(To ensure this isn't an infinite + * loop, we must update the FSM with the correct amount of free space on + * each page that proves not to be suitable.) If the FSM has no record of + * a page with enough free space, we give up and extend the relation. + * + * When use_fsm is false, we either put the tuple onto the existing target + * page or extend the relation. + */ + if (bistate && bistate->current_buf != InvalidBuffer) + targetBlock = BufferGetBlockNumber(bistate->current_buf); + else + targetBlock = RelationGetTargetBlock(relation); + + if (targetBlock == InvalidBlockNumber && use_fsm) + { + /* + * We have no cached target page, so ask the FSM for an initial + * target. + */ + targetBlock = GetPageWithFreeSpace(relation, targetFreeSpace); + } + + /* + * If the FSM knows nothing of the rel, try the last page before we give + * up and extend. This avoids one-tuple-per-page syndrome during + * bootstrapping or in a recently-started system. + */ + if (targetBlock == InvalidBlockNumber) + { + BlockNumber nblocks = RelationGetNumberOfBlocks(relation); + + if (nblocks > 0) + targetBlock = nblocks - 1; + } + +loop: + while (targetBlock != InvalidBlockNumber) + { + /* + * Read and exclusive-lock the target block, as well as the other + * block if one was given, taking suitable care with lock ordering and + * the possibility they are the same block. + * + * If the page-level all-visible flag is set, caller will need to + * clear both that and the corresponding visibility map bit. However, + * by the time we return, we'll have x-locked the buffer, and we don't + * want to do any I/O while in that state. So we check the bit here + * before taking the lock, and pin the page if it appears necessary. + * Checking without the lock creates a risk of getting the wrong + * answer, so we'll have to recheck after acquiring the lock. + */ + if (otherBuffer == InvalidBuffer) + { + /* easy case */ + buffer = ReadBufferBI(relation, targetBlock, RBM_NORMAL, bistate); + if (PageIsAllVisible(BufferGetPage(buffer))) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + + /* + * If the page is empty, pin vmbuffer to set all_frozen bit later. + */ + if ((options & HEAP_INSERT_FROZEN) && + (PageGetMaxOffsetNumber(BufferGetPage(buffer)) == 0)) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + else if (otherBlock == targetBlock) + { + /* also easy case */ + buffer = otherBuffer; + if (PageIsAllVisible(BufferGetPage(buffer))) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + else if (otherBlock < targetBlock) + { + /* lock other buffer first */ + buffer = ReadBuffer(relation, targetBlock); + if (PageIsAllVisible(BufferGetPage(buffer))) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + else + { + /* lock target buffer first */ + buffer = ReadBuffer(relation, targetBlock); + if (PageIsAllVisible(BufferGetPage(buffer))) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE); + } + + /* + * We now have the target page (and the other buffer, if any) pinned + * and locked. 
However, since our initial PageIsAllVisible checks + * were performed before acquiring the lock, the results might now be + * out of date, either for the selected victim buffer, or for the + * other buffer passed by the caller. In that case, we'll need to + * give up our locks, go get the pin(s) we failed to get earlier, and + * re-lock. That's pretty painful, but hopefully shouldn't happen + * often. + * + * Note that there's a small possibility that we didn't pin the page + * above but still have the correct page pinned anyway, either because + * we've already made a previous pass through this loop, or because + * caller passed us the right page anyway. + * + * Note also that it's possible that by the time we get the pin and + * retake the buffer locks, the visibility map bit will have been + * cleared by some other backend anyway. In that case, we'll have + * done a bit of extra work for no gain, but there's no real harm + * done. + */ + GetVisibilityMapPins(relation, buffer, otherBuffer, + targetBlock, otherBlock, vmbuffer, + vmbuffer_other); + + /* + * Now we can check to see if there's enough free space here. If so, + * we're done. + */ + page = BufferGetPage(buffer); + + /* + * If necessary initialize page, it'll be used soon. We could avoid + * dirtying the buffer here, and rely on the caller to do so whenever + * it puts a tuple onto the page, but there seems not much benefit in + * doing so. + */ + if (PageIsNew(page)) + { + PageInit(page, BufferGetPageSize(buffer), 0); + MarkBufferDirty(buffer); + } + + pageFreeSpace = PageGetHeapFreeSpace(page); + if (targetFreeSpace <= pageFreeSpace) + { + /* use this page as future insert target, too */ + RelationSetTargetBlock(relation, targetBlock); + return buffer; + } + + /* + * Not enough space, so we must give up our page locks and pin (if + * any) and prepare to look elsewhere. We don't care which order we + * unlock the two buffers in, so this can be slightly simpler than the + * code above. + */ + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + if (otherBuffer == InvalidBuffer) + ReleaseBuffer(buffer); + else if (otherBlock != targetBlock) + { + LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK); + ReleaseBuffer(buffer); + } + + /* Is there an ongoing bulk extension? */ + if (bistate && bistate->next_free != InvalidBlockNumber) + { + Assert(bistate->next_free <= bistate->last_free); + + /* + * We bulk extended the relation before, and there are still some + * unused pages from that extension, so we don't need to look in + * the FSM for a new page. But do record the free space from the + * last page, somebody might insert narrower tuples later. + */ + if (use_fsm) + RecordPageWithFreeSpace(relation, targetBlock, pageFreeSpace); + + targetBlock = bistate->next_free; + if (bistate->next_free >= bistate->last_free) + { + bistate->next_free = InvalidBlockNumber; + bistate->last_free = InvalidBlockNumber; + } + else + bistate->next_free++; + } + else if (!use_fsm) + { + /* Without FSM, always fall out of the loop and extend */ + break; + } + else + { + /* + * Update FSM as to condition of this page, and ask for another + * page to try. + */ + targetBlock = RecordAndGetPageWithFreeSpace(relation, + targetBlock, + pageFreeSpace, + targetFreeSpace); + } + } + + /* Have to extend the relation */ + buffer = RelationAddBlocks(relation, bistate, num_pages, use_fsm, + &unlockedTargetBuffer); + + targetBlock = BufferGetBlockNumber(buffer); + page = BufferGetPage(buffer); + + /* + * The page is empty, pin vmbuffer to set all_frozen bit. 
We don't want to + * do IO while the buffer is locked, so we unlock the page first if IO is + * needed (necessitating checks below). + */ + if (options & HEAP_INSERT_FROZEN) + { + Assert(PageGetMaxOffsetNumber(page) == 0); + + if (!tdeheap_visibilitymap_pin_ok(targetBlock, *vmbuffer)) + { + if (!unlockedTargetBuffer) + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + unlockedTargetBuffer = true; + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + } + } + + /* + * Reacquire locks if necessary. + * + * If the target buffer was unlocked above, or is unlocked while + * reacquiring the lock on otherBuffer below, it's unlikely, but possible, + * that another backend used space on this page. We check for that below, + * and retry if necessary. + */ + recheckVmPins = false; + if (unlockedTargetBuffer) + { + /* released lock on target buffer above */ + if (otherBuffer != InvalidBuffer) + LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + recheckVmPins = true; + } + else if (otherBuffer != InvalidBuffer) + { + /* + * We did not release the target buffer, and otherBuffer is valid, + * need to lock the other buffer. It's guaranteed to be of a lower + * page number than the new page. To conform with the deadlock + * prevent rules, we ought to lock otherBuffer first, but that would + * give other backends a chance to put tuples on our page. To reduce + * the likelihood of that, attempt to lock the other buffer + * conditionally, that's very likely to work. + * + * Alternatively, we could acquire the lock on otherBuffer before + * extending the relation, but that'd require holding the lock while + * performing IO, which seems worse than an unlikely retry. + */ + Assert(otherBuffer != buffer); + Assert(targetBlock > otherBlock); + + if (unlikely(!ConditionalLockBuffer(otherBuffer))) + { + unlockedTargetBuffer = true; + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + recheckVmPins = true; + } + + /* + * If one of the buffers was unlocked (always the case if otherBuffer is + * valid), it's possible, although unlikely, that an all-visible flag + * became set. We can use GetVisibilityMapPins to deal with that. It's + * possible that GetVisibilityMapPins() might need to temporarily release + * buffer locks, in which case we'll need to check if there's still enough + * space on the page below. + */ + if (recheckVmPins) + { + if (GetVisibilityMapPins(relation, otherBuffer, buffer, + otherBlock, targetBlock, vmbuffer_other, + vmbuffer)) + unlockedTargetBuffer = true; + } + + /* + * If the target buffer was temporarily unlocked since the relation + * extension, it's possible, although unlikely, that all the space on the + * page was already used. If so, we just retry from the start. If we + * didn't unlock, something has gone wrong if there's not enough space - + * the test at the top should have prevented reaching this case. + */ + pageFreeSpace = PageGetHeapFreeSpace(page); + if (len > pageFreeSpace) + { + if (unlockedTargetBuffer) + { + if (otherBuffer != InvalidBuffer) + LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK); + UnlockReleaseBuffer(buffer); + + goto loop; + } + elog(PANIC, "tuple is too big: size %zu", len); + } + + /* + * Remember the new page as our target for future insertions. + * + * XXX should we enter the new page into the free space map immediately, + * or just keep it for this backend's exclusive use in the short run + * (until VACUUM sees it)? 
Seems to depend on whether you expect the + * current backend to make more insertions or not, which is probably a + * good bet most of the time. So for now, don't add it to FSM yet. + */ + RelationSetTargetBlock(relation, targetBlock); + + return buffer; +} diff --git a/contrib/pg_tde/src16/access/pg_tde_prune.c b/contrib/pg_tde/src16/access/pg_tde_prune.c new file mode 100644 index 00000000000..7c902f322c3 --- /dev/null +++ b/contrib/pg_tde/src16/access/pg_tde_prune.c @@ -0,0 +1,1615 @@ +/*------------------------------------------------------------------------- + * + * pruneheap.c + * heap page pruning and HOT-chain management code + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/heap/pruneheap.c + * + *------------------------------------------------------------------------- + */ +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "encryption/enc_tde.h" + +#include "access/pg_tdeam.h" +#include "access/pg_tdeam_xlog.h" + +#include "access/htup_details.h" +#include "access/transam.h" +#include "access/xlog.h" +#include "access/xloginsert.h" +#include "catalog/catalog.h" +#include "miscadmin.h" +#include "pgstat.h" +#include "storage/bufmgr.h" +#include "utils/snapmgr.h" +#include "utils/rel.h" +#include "utils/snapmgr.h" + +/* Working data for tdeheap_page_prune and subroutines */ +typedef struct +{ + Relation rel; + + /* tuple visibility test, initialized for the relation */ + GlobalVisState *vistest; + + /* + * Thresholds set by TransactionIdLimitedForOldSnapshots() if they have + * been computed (done on demand, and only if + * OldSnapshotThresholdActive()). The first time a tuple is about to be + * removed based on the limited horizon, old_snap_used is set to true, and + * SetOldSnapshotThresholdTimestamp() is called. See + * tdeheap_prune_satisfies_vacuum(). + */ + TimestampTz old_snap_ts; + TransactionId old_snap_xmin; + bool old_snap_used; + + TransactionId new_prune_xid; /* new prune hint value for page */ + TransactionId snapshotConflictHorizon; /* latest xid removed */ + int nredirected; /* numbers of entries in arrays below */ + int ndead; + int nunused; + /* arrays that accumulate indexes of items to be changed */ + OffsetNumber redirected[MaxHeapTuplesPerPage * 2]; + OffsetNumber nowdead[MaxHeapTuplesPerPage]; + OffsetNumber nowunused[MaxHeapTuplesPerPage]; + + /* + * marked[i] is true if item i is entered in one of the above arrays. + * + * This needs to be MaxHeapTuplesPerPage + 1 long as FirstOffsetNumber is + * 1. Otherwise every access would need to subtract 1. + */ + bool marked[MaxHeapTuplesPerPage + 1]; + + /* + * Tuple visibility is only computed once for each tuple, for correctness + * and efficiency reasons; see comment in tdeheap_page_prune() for details. + * This is of type int8[], instead of HTSV_Result[], so we can use -1 to + * indicate no visibility has been computed, e.g. for LP_DEAD items. + * + * Same indexing as ->marked. 
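+	 *
+	 * int8 is wide enough because HTSV_Result's few enum values all fit in
+	 * a signed byte, which leaves -1 free as the "not computed" sentinel.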
+ */ + int8 htsv[MaxHeapTuplesPerPage + 1]; +} PruneState; + +/* Local functions */ +static HTSV_Result tdeheap_prune_satisfies_vacuum(PruneState *prstate, + HeapTuple tup, + Buffer buffer); +static int tdeheap_prune_chain(Buffer buffer, + OffsetNumber rootoffnum, + PruneState *prstate); +static void tdeheap_prune_record_prunable(PruneState *prstate, TransactionId xid); +static void tdeheap_prune_record_redirect(PruneState *prstate, + OffsetNumber offnum, OffsetNumber rdoffnum); +static void tdeheap_prune_record_dead(PruneState *prstate, OffsetNumber offnum); +static void tdeheap_prune_record_unused(PruneState *prstate, OffsetNumber offnum); +static void page_verify_redirects(Page page); + + +/* + * Optionally prune and repair fragmentation in the specified page. + * + * This is an opportunistic function. It will perform housekeeping + * only if the page heuristically looks like a candidate for pruning and we + * can acquire buffer cleanup lock without blocking. + * + * Note: this is called quite often. It's important that it fall out quickly + * if there's not any use in pruning. + * + * Caller must have pin on the buffer, and must *not* have a lock on it. + */ +void +tdeheap_page_prune_opt(Relation relation, Buffer buffer) +{ + Page page = BufferGetPage(buffer); + TransactionId prune_xid; + GlobalVisState *vistest; + TransactionId limited_xmin = InvalidTransactionId; + TimestampTz limited_ts = 0; + Size minfree; + + /* + * We can't write WAL in recovery mode, so there's no point trying to + * clean the page. The primary will likely issue a cleaning WAL record + * soon anyway, so this is no particular loss. + */ + if (RecoveryInProgress()) + return; + + /* + * XXX: Magic to keep old_snapshot_threshold tests appear "working". They + * currently are broken, and discussion of what to do about them is + * ongoing. See + * https://www.postgresql.org/message-id/20200403001235.e6jfdll3gh2ygbuc%40alap3.anarazel.de + */ + if (old_snapshot_threshold == 0) + SnapshotTooOldMagicForTest(); + + /* + * First check whether there's any chance there's something to prune, + * determining the appropriate horizon is a waste if there's no prune_xid + * (i.e. no updates/deletes left potentially dead tuples around). + */ + prune_xid = ((PageHeader) page)->pd_prune_xid; + if (!TransactionIdIsValid(prune_xid)) + return; + + /* + * Check whether prune_xid indicates that there may be dead rows that can + * be cleaned up. + * + * It is OK to check the old snapshot limit before acquiring the cleanup + * lock because the worst that can happen is that we are not quite as + * aggressive about the cleanup (by however many transaction IDs are + * consumed between this point and acquiring the lock). This allows us to + * save significant overhead in the case where the page is found not to be + * prunable. + * + * Even if old_snapshot_threshold is set, we first check whether the page + * can be pruned without. Both because + * TransactionIdLimitedForOldSnapshots() is not cheap, and because not + * unnecessarily relying on old_snapshot_threshold avoids causing + * conflicts. 
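+	 *
+	 * Hence the order below: the cheap GlobalVisTestIsRemovableXid() check
+	 * runs first, and the limited horizon is computed only when that check
+	 * says the page is not yet prunable.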
+ */ + vistest = GlobalVisTestFor(relation); + + if (!GlobalVisTestIsRemovableXid(vistest, prune_xid)) + { + if (!OldSnapshotThresholdActive()) + return; + + if (!TransactionIdLimitedForOldSnapshots(GlobalVisTestNonRemovableHorizon(vistest), + relation, + &limited_xmin, &limited_ts)) + return; + + if (!TransactionIdPrecedes(prune_xid, limited_xmin)) + return; + } + + /* + * We prune when a previous UPDATE failed to find enough space on the page + * for a new tuple version, or when free space falls below the relation's + * fill-factor target (but not less than 10%). + * + * Checking free space here is questionable since we aren't holding any + * lock on the buffer; in the worst case we could get a bogus answer. It's + * unlikely to be *seriously* wrong, though, since reading either pd_lower + * or pd_upper is probably atomic. Avoiding taking a lock seems more + * important than sometimes getting a wrong answer in what is after all + * just a heuristic estimate. + */ + minfree = RelationGetTargetPageFreeSpace(relation, + HEAP_DEFAULT_FILLFACTOR); + minfree = Max(minfree, BLCKSZ / 10); + + if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree) + { + /* OK, try to get exclusive buffer lock */ + if (!ConditionalLockBufferForCleanup(buffer)) + return; + + /* + * Now that we have buffer lock, get accurate information about the + * page's free space, and recheck the heuristic about whether to + * prune. + */ + if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree) + { + int ndeleted, + nnewlpdead; + + ndeleted = tdeheap_page_prune(relation, buffer, vistest, limited_xmin, + limited_ts, &nnewlpdead, NULL); + + /* + * Report the number of tuples reclaimed to pgstats. This is + * ndeleted minus the number of newly-LP_DEAD-set items. + * + * We derive the number of dead tuples like this to avoid totally + * forgetting about items that were set to LP_DEAD, since they + * still need to be cleaned up by VACUUM. We only want to count + * heap-only tuples that just became LP_UNUSED in our report, + * which don't. + * + * VACUUM doesn't have to compensate in the same way when it + * tracks ndeleted, since it will set the same LP_DEAD items to + * LP_UNUSED separately. + */ + if (ndeleted > nnewlpdead) + pgstat_update_heap_dead_tuples(relation, + ndeleted - nnewlpdead); + } + + /* And release buffer lock */ + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + /* + * We avoid reuse of any free space created on the page by unrelated + * UPDATEs/INSERTs by opting to not update the FSM at this point. The + * free space should be reused by UPDATEs to *this* page. + */ + } +} + + +/* + * Prune and repair fragmentation in the specified page. + * + * Caller must have pin and buffer cleanup lock on the page. Note that we + * don't update the FSM information for page on caller's behalf. Caller might + * also need to account for a reduction in the length of the line pointer + * array following array truncation by us. + * + * vistest is used to distinguish whether tuples are DEAD or RECENTLY_DEAD + * (see tdeheap_prune_satisfies_vacuum and + * HeapTupleSatisfiesVacuum). old_snap_xmin / old_snap_ts need to + * either have been set by TransactionIdLimitedForOldSnapshots, or + * InvalidTransactionId/0 respectively. + * + * Sets *nnewlpdead for caller, indicating the number of items that were + * newly set LP_DEAD during prune operation. + * + * off_loc is the offset location required by the caller to use in error + * callback. + * + * Returns the number of tuples deleted from the page during this call. 
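+ *
+ * Note that the returned count includes items that were newly set LP_DEAD:
+ * that is why tdeheap_page_prune_opt() subtracts *nnewlpdead before
+ * reporting reclaimed tuples to pgstats.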
+ */ +int +tdeheap_page_prune(Relation relation, Buffer buffer, + GlobalVisState *vistest, + TransactionId old_snap_xmin, + TimestampTz old_snap_ts, + int *nnewlpdead, + OffsetNumber *off_loc) +{ + int ndeleted = 0; + Page page = BufferGetPage(buffer); + BlockNumber blockno = BufferGetBlockNumber(buffer); + OffsetNumber offnum, + maxoff; + PruneState prstate; + HeapTupleData tup; + + /* + * Our strategy is to scan the page and make lists of items to change, + * then apply the changes within a critical section. This keeps as much + * logic as possible out of the critical section, and also ensures that + * WAL replay will work the same as the normal case. + * + * First, initialize the new pd_prune_xid value to zero (indicating no + * prunable tuples). If we find any tuples which may soon become + * prunable, we will save the lowest relevant XID in new_prune_xid. Also + * initialize the rest of our working state. + */ + prstate.new_prune_xid = InvalidTransactionId; + prstate.rel = relation; + prstate.vistest = vistest; + prstate.old_snap_xmin = old_snap_xmin; + prstate.old_snap_ts = old_snap_ts; + prstate.old_snap_used = false; + prstate.snapshotConflictHorizon = InvalidTransactionId; + prstate.nredirected = prstate.ndead = prstate.nunused = 0; + memset(prstate.marked, 0, sizeof(prstate.marked)); + + maxoff = PageGetMaxOffsetNumber(page); + tup.t_tableOid = RelationGetRelid(prstate.rel); + + /* + * Determine HTSV for all tuples. + * + * This is required for correctness to deal with cases where running HTSV + * twice could result in different results (e.g. RECENTLY_DEAD can turn to + * DEAD if another checked item causes GlobalVisTestIsRemovableFullXid() + * to update the horizon, INSERT_IN_PROGRESS can change to DEAD if the + * inserting transaction aborts, ...). That in turn could cause + * tdeheap_prune_chain() to behave incorrectly if a tuple is reached twice, + * once directly via a tdeheap_prune_chain() and once following a HOT chain. + * + * It's also good for performance. Most commonly tuples within a page are + * stored at decreasing offsets (while the items are stored at increasing + * offsets). When processing all tuples on a page this leads to reading + * memory at decreasing offsets within a page, with a variable stride. + * That's hard for CPU prefetchers to deal with. Processing the items in + * reverse order (and thus the tuples in increasing order) increases + * prefetching efficiency significantly / decreases the number of cache + * misses. + */ + for (offnum = maxoff; + offnum >= FirstOffsetNumber; + offnum = OffsetNumberPrev(offnum)) + { + ItemId itemid = PageGetItemId(page, offnum); + HeapTupleHeader htup; + + /* Nothing to do if slot doesn't contain a tuple */ + if (!ItemIdIsNormal(itemid)) + { + prstate.htsv[offnum] = -1; + continue; + } + + htup = (HeapTupleHeader) PageGetItem(page, itemid); + tup.t_data = htup; + tup.t_len = ItemIdGetLength(itemid); + ItemPointerSet(&(tup.t_self), blockno, offnum); + + /* + * Set the offset number so that we can display it along with any + * error that occurred while processing this tuple. 
+		 */
+		if (off_loc)
+			*off_loc = offnum;
+
+		prstate.htsv[offnum] = tdeheap_prune_satisfies_vacuum(&prstate, &tup,
+															   buffer);
+	}
+
+	/* Scan the page */
+	for (offnum = FirstOffsetNumber;
+		 offnum <= maxoff;
+		 offnum = OffsetNumberNext(offnum))
+	{
+		ItemId		itemid;
+
+		/* Ignore items already processed as part of an earlier chain */
+		if (prstate.marked[offnum])
+			continue;
+
+		/* see preceding loop */
+		if (off_loc)
+			*off_loc = offnum;
+
+		/* Nothing to do if slot is empty or already dead */
+		itemid = PageGetItemId(page, offnum);
+		if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))
+			continue;
+
+		/* Process this item or chain of items */
+		ndeleted += tdeheap_prune_chain(buffer, offnum, &prstate);
+	}
+
+	/* Clear the offset information once we have processed the given page. */
+	if (off_loc)
+		*off_loc = InvalidOffsetNumber;
+
+	/*
+	 * Make sure the relation key is in the cache to avoid pallocs in the
+	 * critical section. We need it here because `pgtde_compactify_tuples()`,
+	 * further down the call stack, re-encrypts tuples.
+	 */
+	GetHeapBaiscRelationKey(relation->rd_locator);
+
+	/* Any error while applying the changes is critical */
+	START_CRIT_SECTION();
+
+	/* Have we found any prunable items? */
+	if (prstate.nredirected > 0 || prstate.ndead > 0 || prstate.nunused > 0)
+	{
+		/*
+		 * Apply the planned item changes, then repair page fragmentation, and
+		 * update the page's hint bit about whether it has free line pointers.
+		 */
+		tdeheap_page_prune_execute(prstate.rel, buffer,
+								   prstate.redirected, prstate.nredirected,
+								   prstate.nowdead, prstate.ndead,
+								   prstate.nowunused, prstate.nunused);
+
+		/*
+		 * Update the page's pd_prune_xid field to either zero, or the lowest
+		 * XID of any soon-prunable tuple.
+		 */
+		((PageHeader) page)->pd_prune_xid = prstate.new_prune_xid;
+
+		/*
+		 * Also clear the "page is full" flag, since there's no point in
+		 * repeating the prune/defrag process until something else happens to
+		 * the page.
+		 */
+		PageClearFull(page);
+
+		MarkBufferDirty(buffer);
+
+		/*
+		 * Emit a WAL XLOG_HEAP2_PRUNE record showing what we did
+		 */
+		if (RelationNeedsWAL(relation))
+		{
+			xl_tdeheap_prune xlrec;
+			XLogRecPtr	recptr;
+
+			xlrec.isCatalogRel = RelationIsAccessibleInLogicalDecoding(relation);
+			xlrec.snapshotConflictHorizon = prstate.snapshotConflictHorizon;
+			xlrec.nredirected = prstate.nredirected;
+			xlrec.ndead = prstate.ndead;
+
+			XLogBeginInsert();
+			XLogRegisterData((char *) &xlrec, SizeOfHeapPrune);
+
+			XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+			/*
+			 * The OffsetNumber arrays are not actually in the buffer, but we
+			 * pretend that they are. When XLogInsert stores the whole
+			 * buffer, the offset arrays need not be stored too.
+			 */
+			if (prstate.nredirected > 0)
+				XLogRegisterBufData(0, (char *) prstate.redirected,
+									prstate.nredirected *
+									sizeof(OffsetNumber) * 2);
+
+			if (prstate.ndead > 0)
+				XLogRegisterBufData(0, (char *) prstate.nowdead,
+									prstate.ndead * sizeof(OffsetNumber));
+
+			if (prstate.nunused > 0)
+				XLogRegisterBufData(0, (char *) prstate.nowunused,
+									prstate.nunused * sizeof(OffsetNumber));
+
+			recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_PRUNE);
+
+			PageSetLSN(BufferGetPage(buffer), recptr);
+		}
+	}
+	else
+	{
+		/*
+		 * If we didn't prune anything, but have found a new value for the
+		 * pd_prune_xid field, update it and mark the buffer dirty. This is
+		 * treated as a non-WAL-logged hint.
+ * + * Also clear the "page is full" flag if it is set, since there's no + * point in repeating the prune/defrag process until something else + * happens to the page. + */ + if (((PageHeader) page)->pd_prune_xid != prstate.new_prune_xid || + PageIsFull(page)) + { + ((PageHeader) page)->pd_prune_xid = prstate.new_prune_xid; + PageClearFull(page); + MarkBufferDirtyHint(buffer, true); + } + } + + END_CRIT_SECTION(); + + /* Record number of newly-set-LP_DEAD items for caller */ + *nnewlpdead = prstate.ndead; + + return ndeleted; +} + + +/* + * Perform visibility checks for heap pruning. + * + * This is more complicated than just using GlobalVisTestIsRemovableXid() + * because of old_snapshot_threshold. We only want to increase the threshold + * that triggers errors for old snapshots when we actually decide to remove a + * row based on the limited horizon. + * + * Due to its cost we also only want to call + * TransactionIdLimitedForOldSnapshots() if necessary, i.e. we might not have + * done so in tdeheap_page_prune_opt() if pd_prune_xid was old enough. But we + * still want to be able to remove rows that are too new to be removed + * according to prstate->vistest, but that can be removed based on + * old_snapshot_threshold. So we call TransactionIdLimitedForOldSnapshots() on + * demand in here, if appropriate. + */ +static HTSV_Result +tdeheap_prune_satisfies_vacuum(PruneState *prstate, HeapTuple tup, Buffer buffer) +{ + HTSV_Result res; + TransactionId dead_after; + + res = HeapTupleSatisfiesVacuumHorizon(tup, buffer, &dead_after); + + if (res != HEAPTUPLE_RECENTLY_DEAD) + return res; + + /* + * If we are already relying on the limited xmin, there is no need to + * delay doing so anymore. + */ + if (prstate->old_snap_used) + { + Assert(TransactionIdIsValid(prstate->old_snap_xmin)); + + if (TransactionIdPrecedes(dead_after, prstate->old_snap_xmin)) + res = HEAPTUPLE_DEAD; + return res; + } + + /* + * First check if GlobalVisTestIsRemovableXid() is sufficient to find the + * row dead. If not, and old_snapshot_threshold is enabled, try to use the + * lowered horizon. + */ + if (GlobalVisTestIsRemovableXid(prstate->vistest, dead_after)) + res = HEAPTUPLE_DEAD; + else if (OldSnapshotThresholdActive()) + { + /* haven't determined limited horizon yet, requests */ + if (!TransactionIdIsValid(prstate->old_snap_xmin)) + { + TransactionId horizon = + GlobalVisTestNonRemovableHorizon(prstate->vistest); + + TransactionIdLimitedForOldSnapshots(horizon, prstate->rel, + &prstate->old_snap_xmin, + &prstate->old_snap_ts); + } + + if (TransactionIdIsValid(prstate->old_snap_xmin) && + TransactionIdPrecedes(dead_after, prstate->old_snap_xmin)) + { + /* + * About to remove row based on snapshot_too_old. Need to raise + * the threshold so problematic accesses would error. + */ + Assert(!prstate->old_snap_used); + SetOldSnapshotThresholdTimestamp(prstate->old_snap_ts, + prstate->old_snap_xmin); + prstate->old_snap_used = true; + res = HEAPTUPLE_DEAD; + } + } + + return res; +} + + +/* + * Prune specified line pointer or a HOT chain originating at line pointer. + * + * If the item is an index-referenced tuple (i.e. not a heap-only tuple), + * the HOT chain is pruned by removing all DEAD tuples at the start of the HOT + * chain. We also prune any RECENTLY_DEAD tuples preceding a DEAD tuple. + * This is OK because a RECENTLY_DEAD tuple preceding a DEAD tuple is really + * DEAD, our visibility test is just too coarse to detect it. 
+ * + * In general, pruning must never leave behind a DEAD tuple that still has + * tuple storage. VACUUM isn't prepared to deal with that case. That's why + * VACUUM prunes the same heap page a second time (without dropping its lock + * in the interim) when it sees a newly DEAD tuple that we initially saw as + * in-progress. Retrying pruning like this can only happen when an inserting + * transaction concurrently aborts. + * + * The root line pointer is redirected to the tuple immediately after the + * latest DEAD tuple. If all tuples in the chain are DEAD, the root line + * pointer is marked LP_DEAD. (This includes the case of a DEAD simple + * tuple, which we treat as a chain of length 1.) + * + * We don't actually change the page here. We just add entries to the arrays in + * prstate showing the changes to be made. Items to be redirected are added + * to the redirected[] array (two entries per redirection); items to be set to + * LP_DEAD state are added to nowdead[]; and items to be set to LP_UNUSED + * state are added to nowunused[]. + * + * Returns the number of tuples (to be) deleted from the page. + */ +static int +tdeheap_prune_chain(Buffer buffer, OffsetNumber rootoffnum, PruneState *prstate) +{ + int ndeleted = 0; + Page dp = (Page) BufferGetPage(buffer); + TransactionId priorXmax = InvalidTransactionId; + ItemId rootlp; + HeapTupleHeader htup; + OffsetNumber latestdead = InvalidOffsetNumber, + maxoff = PageGetMaxOffsetNumber(dp), + offnum; + OffsetNumber chainitems[MaxHeapTuplesPerPage]; + int nchain = 0, + i; + + rootlp = PageGetItemId(dp, rootoffnum); + + /* + * If it's a heap-only tuple, then it is not the start of a HOT chain. + */ + if (ItemIdIsNormal(rootlp)) + { + Assert(prstate->htsv[rootoffnum] != -1); + htup = (HeapTupleHeader) PageGetItem(dp, rootlp); + + if (HeapTupleHeaderIsHeapOnly(htup)) + { + /* + * If the tuple is DEAD and doesn't chain to anything else, mark + * it unused immediately. (If it does chain, we can only remove + * it as part of pruning its chain.) + * + * We need this primarily to handle aborted HOT updates, that is, + * XMIN_INVALID heap-only tuples. Those might not be linked to by + * any chain, since the parent tuple might be re-updated before + * any pruning occurs. So we have to be able to reap them + * separately from chain-pruning. (Note that + * HeapTupleHeaderIsHotUpdated will never return true for an + * XMIN_INVALID tuple, so this code will work even when there were + * sequential updates within the aborted transaction.) + * + * Note that we might first arrive at a dead heap-only tuple + * either here or while following a chain below. Whichever path + * gets there first will mark the tuple unused. 
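+			 * (The marked[] array guarantees that whichever path arrives
+			 * second skips the item; tdeheap_prune_record_unused() asserts
+			 * the item was not already marked.)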
+ */ + if (prstate->htsv[rootoffnum] == HEAPTUPLE_DEAD && + !HeapTupleHeaderIsHotUpdated(htup)) + { + tdeheap_prune_record_unused(prstate, rootoffnum); + HeapTupleHeaderAdvanceConflictHorizon(htup, + &prstate->snapshotConflictHorizon); + ndeleted++; + } + + /* Nothing more to do */ + return ndeleted; + } + } + + /* Start from the root tuple */ + offnum = rootoffnum; + + /* while not end of the chain */ + for (;;) + { + ItemId lp; + bool tupdead, + recent_dead; + + /* Sanity check (pure paranoia) */ + if (offnum < FirstOffsetNumber) + break; + + /* + * An offset past the end of page's line pointer array is possible + * when the array was truncated (original item must have been unused) + */ + if (offnum > maxoff) + break; + + /* If item is already processed, stop --- it must not be same chain */ + if (prstate->marked[offnum]) + break; + + lp = PageGetItemId(dp, offnum); + + /* Unused item obviously isn't part of the chain */ + if (!ItemIdIsUsed(lp)) + break; + + /* + * If we are looking at the redirected root line pointer, jump to the + * first normal tuple in the chain. If we find a redirect somewhere + * else, stop --- it must not be same chain. + */ + if (ItemIdIsRedirected(lp)) + { + if (nchain > 0) + break; /* not at start of chain */ + chainitems[nchain++] = offnum; + offnum = ItemIdGetRedirect(rootlp); + continue; + } + + /* + * Likewise, a dead line pointer can't be part of the chain. (We + * already eliminated the case of dead root tuple outside this + * function.) + */ + if (ItemIdIsDead(lp)) + break; + + Assert(ItemIdIsNormal(lp)); + Assert(prstate->htsv[offnum] != -1); + htup = (HeapTupleHeader) PageGetItem(dp, lp); + + /* + * Check the tuple XMIN against prior XMAX, if any + */ + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax)) + break; + + /* + * OK, this tuple is indeed a member of the chain. + */ + chainitems[nchain++] = offnum; + + /* + * Check tuple's visibility status. + */ + tupdead = recent_dead = false; + + switch ((HTSV_Result) prstate->htsv[offnum]) + { + case HEAPTUPLE_DEAD: + tupdead = true; + break; + + case HEAPTUPLE_RECENTLY_DEAD: + recent_dead = true; + + /* + * This tuple may soon become DEAD. Update the hint field so + * that the page is reconsidered for pruning in future. + */ + tdeheap_prune_record_prunable(prstate, + HeapTupleHeaderGetUpdateXid(htup)); + break; + + case HEAPTUPLE_DELETE_IN_PROGRESS: + + /* + * This tuple may soon become DEAD. Update the hint field so + * that the page is reconsidered for pruning in future. + */ + tdeheap_prune_record_prunable(prstate, + HeapTupleHeaderGetUpdateXid(htup)); + break; + + case HEAPTUPLE_LIVE: + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * If we wanted to optimize for aborts, we might consider + * marking the page prunable when we see INSERT_IN_PROGRESS. + * But we don't. See related decisions about when to mark the + * page prunable in heapam.c. + */ + break; + + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + break; + } + + /* + * Remember the last DEAD tuple seen. We will advance past + * RECENTLY_DEAD tuples just in case there's a DEAD one after them; + * but we can't advance past anything else. We have to make sure that + * we don't miss any DEAD tuples, since DEAD tuples that still have + * tuple storage after pruning will confuse VACUUM. 
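+		 * (See the comments at the head of this function for why pruning
+		 * must never leave a DEAD tuple with storage behind.)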
+ */ + if (tupdead) + { + latestdead = offnum; + HeapTupleHeaderAdvanceConflictHorizon(htup, + &prstate->snapshotConflictHorizon); + } + else if (!recent_dead) + break; + + /* + * If the tuple is not HOT-updated, then we are at the end of this + * HOT-update chain. + */ + if (!HeapTupleHeaderIsHotUpdated(htup)) + break; + + /* HOT implies it can't have moved to different partition */ + Assert(!HeapTupleHeaderIndicatesMovedPartitions(htup)); + + /* + * Advance to next chain member. + */ + Assert(ItemPointerGetBlockNumber(&htup->t_ctid) == + BufferGetBlockNumber(buffer)); + offnum = ItemPointerGetOffsetNumber(&htup->t_ctid); + priorXmax = HeapTupleHeaderGetUpdateXid(htup); + } + + /* + * If we found a DEAD tuple in the chain, adjust the HOT chain so that all + * the DEAD tuples at the start of the chain are removed and the root line + * pointer is appropriately redirected. + */ + if (OffsetNumberIsValid(latestdead)) + { + /* + * Mark as unused each intermediate item that we are able to remove + * from the chain. + * + * When the previous item is the last dead tuple seen, we are at the + * right candidate for redirection. + */ + for (i = 1; (i < nchain) && (chainitems[i - 1] != latestdead); i++) + { + tdeheap_prune_record_unused(prstate, chainitems[i]); + ndeleted++; + } + + /* + * If the root entry had been a normal tuple, we are deleting it, so + * count it in the result. But changing a redirect (even to DEAD + * state) doesn't count. + */ + if (ItemIdIsNormal(rootlp)) + ndeleted++; + + /* + * If the DEAD tuple is at the end of the chain, the entire chain is + * dead and the root line pointer can be marked dead. Otherwise just + * redirect the root to the correct chain member. + */ + if (i >= nchain) + tdeheap_prune_record_dead(prstate, rootoffnum); + else + tdeheap_prune_record_redirect(prstate, rootoffnum, chainitems[i]); + } + else if (nchain < 2 && ItemIdIsRedirected(rootlp)) + { + /* + * We found a redirect item that doesn't point to a valid follow-on + * item. This can happen if the loop in tdeheap_page_prune caused us to + * visit the dead successor of a redirect item before visiting the + * redirect item. We can clean up by setting the redirect item to + * DEAD state. + */ + tdeheap_prune_record_dead(prstate, rootoffnum); + } + + return ndeleted; +} + +/* Record lowest soon-prunable XID */ +static void +tdeheap_prune_record_prunable(PruneState *prstate, TransactionId xid) +{ + /* + * This should exactly match the PageSetPrunable macro. We can't store + * directly into the page header yet, so we update working state. 
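+	 * That is: remember the smallest valid XID seen so far, just as
+	 * PageSetPrunable() only ever lowers pd_prune_xid.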
+ */ + Assert(TransactionIdIsNormal(xid)); + if (!TransactionIdIsValid(prstate->new_prune_xid) || + TransactionIdPrecedes(xid, prstate->new_prune_xid)) + prstate->new_prune_xid = xid; +} + +/* Record line pointer to be redirected */ +static void +tdeheap_prune_record_redirect(PruneState *prstate, + OffsetNumber offnum, OffsetNumber rdoffnum) +{ + Assert(prstate->nredirected < MaxHeapTuplesPerPage); + prstate->redirected[prstate->nredirected * 2] = offnum; + prstate->redirected[prstate->nredirected * 2 + 1] = rdoffnum; + prstate->nredirected++; + Assert(!prstate->marked[offnum]); + prstate->marked[offnum] = true; + Assert(!prstate->marked[rdoffnum]); + prstate->marked[rdoffnum] = true; +} + +/* Record line pointer to be marked dead */ +static void +tdeheap_prune_record_dead(PruneState *prstate, OffsetNumber offnum) +{ + Assert(prstate->ndead < MaxHeapTuplesPerPage); + prstate->nowdead[prstate->ndead] = offnum; + prstate->ndead++; + Assert(!prstate->marked[offnum]); + prstate->marked[offnum] = true; +} + +/* Record line pointer to be marked unused */ +static void +tdeheap_prune_record_unused(PruneState *prstate, OffsetNumber offnum) +{ + Assert(prstate->nunused < MaxHeapTuplesPerPage); + prstate->nowunused[prstate->nunused] = offnum; + prstate->nunused++; + Assert(!prstate->marked[offnum]); + prstate->marked[offnum] = true; +} + +void TdePageRepairFragmentation(Relation rel, Buffer buffer, Page page); + +/* + * Perform the actual page changes needed by tdeheap_page_prune. + * It is expected that the caller has a full cleanup lock on the + * buffer. + */ +void +tdeheap_page_prune_execute(Relation rel, Buffer buffer, + OffsetNumber *redirected, int nredirected, + OffsetNumber *nowdead, int ndead, + OffsetNumber *nowunused, int nunused) +{ + Page page = (Page) BufferGetPage(buffer); + OffsetNumber *offnum; + HeapTupleHeader htup PG_USED_FOR_ASSERTS_ONLY; + + /* Shouldn't be called unless there's something to do */ + Assert(nredirected > 0 || ndead > 0 || nunused > 0); + + /* Update all redirected line pointers */ + offnum = redirected; + for (int i = 0; i < nredirected; i++) + { + OffsetNumber fromoff = *offnum++; + OffsetNumber tooff = *offnum++; + ItemId fromlp = PageGetItemId(page, fromoff); + ItemId tolp PG_USED_FOR_ASSERTS_ONLY; + +#ifdef USE_ASSERT_CHECKING + + /* + * Any existing item that we set as an LP_REDIRECT (any 'from' item) + * must be the first item from a HOT chain. If the item has tuple + * storage then it can't be a heap-only tuple. Otherwise we are just + * maintaining an existing LP_REDIRECT from an existing HOT chain that + * has been pruned at least once before now. + */ + if (!ItemIdIsRedirected(fromlp)) + { + Assert(ItemIdHasStorage(fromlp) && ItemIdIsNormal(fromlp)); + + htup = (HeapTupleHeader) PageGetItem(page, fromlp); + Assert(!HeapTupleHeaderIsHeapOnly(htup)); + } + else + { + /* We shouldn't need to redundantly set the redirect */ + Assert(ItemIdGetRedirect(fromlp) != tooff); + } + + /* + * The item that we're about to set as an LP_REDIRECT (the 'from' + * item) will point to an existing item (the 'to' item) that is + * already a heap-only tuple. There can be at most one LP_REDIRECT + * item per HOT chain. + * + * We need to keep around an LP_REDIRECT item (after original + * non-heap-only root tuple gets pruned away) so that it's always + * possible for VACUUM to easily figure out what TID to delete from + * indexes when an entire HOT chain becomes dead. A heap-only tuple + * can never become LP_DEAD; an LP_REDIRECT item or a regular heap + * tuple can. 
+ * + * This check may miss problems, e.g. the target of a redirect could + * be marked as unused subsequently. The page_verify_redirects() check + * below will catch such problems. + */ + tolp = PageGetItemId(page, tooff); + Assert(ItemIdHasStorage(tolp) && ItemIdIsNormal(tolp)); + htup = (HeapTupleHeader) PageGetItem(page, tolp); + Assert(HeapTupleHeaderIsHeapOnly(htup)); +#endif + + ItemIdSetRedirect(fromlp, tooff); + } + + /* Update all now-dead line pointers */ + offnum = nowdead; + for (int i = 0; i < ndead; i++) + { + OffsetNumber off = *offnum++; + ItemId lp = PageGetItemId(page, off); + +#ifdef USE_ASSERT_CHECKING + + /* + * An LP_DEAD line pointer must be left behind when the original item + * (which is dead to everybody) could still be referenced by a TID in + * an index. This should never be necessary with any individual + * heap-only tuple item, though. (It's not clear how much of a problem + * that would be, but there is no reason to allow it.) + */ + if (ItemIdHasStorage(lp)) + { + Assert(ItemIdIsNormal(lp)); + htup = (HeapTupleHeader) PageGetItem(page, lp); + Assert(!HeapTupleHeaderIsHeapOnly(htup)); + } + else + { + /* Whole HOT chain becomes dead */ + Assert(ItemIdIsRedirected(lp)); + } +#endif + + ItemIdSetDead(lp); + } + + /* Update all now-unused line pointers */ + offnum = nowunused; + for (int i = 0; i < nunused; i++) + { + OffsetNumber off = *offnum++; + ItemId lp = PageGetItemId(page, off); + +#ifdef USE_ASSERT_CHECKING + + /* + * Only heap-only tuples can become LP_UNUSED during pruning. They + * don't need to be left in place as LP_DEAD items until VACUUM gets + * around to doing index vacuuming. + */ + Assert(ItemIdHasStorage(lp) && ItemIdIsNormal(lp)); + htup = (HeapTupleHeader) PageGetItem(page, lp); + Assert(HeapTupleHeaderIsHeapOnly(htup)); +#endif + + ItemIdSetUnused(lp); + } + + /* + * Finally, repair any fragmentation, and update the page's hint bit about + * whether it has free pointers. + */ + TdePageRepairFragmentation(rel, buffer, page); + + /* + * Now that the page has been modified, assert that redirect items still + * point to valid targets. + */ + page_verify_redirects(page); +} + + +/* + * If built with assertions, verify that all LP_REDIRECT items point to a + * valid item. + * + * One way that bugs related to HOT pruning show is redirect items pointing to + * removed tuples. It's not trivial to reliably check that marking an item + * unused will not orphan a redirect item during tdeheap_prune_chain() / + * tdeheap_page_prune_execute(), so we additionally check the whole page after + * pruning. Without this check such bugs would typically only cause asserts + * later, potentially well after the corruption has been introduced. + * + * Also check comments in tdeheap_page_prune_execute()'s redirection loop. 
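+ *
+ * A hypothetical example of the bug class: lp 2 = LP_REDIRECT -> 5 set
+ * during this prune, with lp 5 later marked LP_UNUSED. Each step looks
+ * locally sane, but the page ends up with an orphaned redirect, which
+ * the whole-page scan below detects immediately.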
+ */ +static void +page_verify_redirects(Page page) +{ +#ifdef USE_ASSERT_CHECKING + OffsetNumber offnum; + OffsetNumber maxoff; + + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + ItemId itemid = PageGetItemId(page, offnum); + OffsetNumber targoff; + ItemId targitem; + HeapTupleHeader htup; + + if (!ItemIdIsRedirected(itemid)) + continue; + + targoff = ItemIdGetRedirect(itemid); + targitem = PageGetItemId(page, targoff); + + Assert(ItemIdIsUsed(targitem)); + Assert(ItemIdIsNormal(targitem)); + Assert(ItemIdHasStorage(targitem)); + htup = (HeapTupleHeader) PageGetItem(page, targitem); + Assert(HeapTupleHeaderIsHeapOnly(htup)); + } +#endif +} + + +/* + * For all items in this page, find their respective root line pointers. + * If item k is part of a HOT-chain with root at item j, then we set + * root_offsets[k - 1] = j. + * + * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries. + * Unused entries are filled with InvalidOffsetNumber (zero). + * + * The function must be called with at least share lock on the buffer, to + * prevent concurrent prune operations. + * + * Note: The information collected here is valid only as long as the caller + * holds a pin on the buffer. Once pin is released, a tuple might be pruned + * and reused by a completely unrelated tuple. + */ +void +tdeheap_get_root_tuples(Page page, OffsetNumber *root_offsets) +{ + OffsetNumber offnum, + maxoff; + + MemSet(root_offsets, InvalidOffsetNumber, + MaxHeapTuplesPerPage * sizeof(OffsetNumber)); + + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum)) + { + ItemId lp = PageGetItemId(page, offnum); + HeapTupleHeader htup; + OffsetNumber nextoffnum; + TransactionId priorXmax; + + /* skip unused and dead items */ + if (!ItemIdIsUsed(lp) || ItemIdIsDead(lp)) + continue; + + if (ItemIdIsNormal(lp)) + { + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* + * Check if this tuple is part of a HOT-chain rooted at some other + * tuple. If so, skip it for now; we'll process it when we find + * its root. + */ + if (HeapTupleHeaderIsHeapOnly(htup)) + continue; + + /* + * This is either a plain tuple or the root of a HOT-chain. + * Remember it in the mapping. + */ + root_offsets[offnum - 1] = offnum; + + /* If it's not the start of a HOT-chain, we're done with it */ + if (!HeapTupleHeaderIsHotUpdated(htup)) + continue; + + /* Set up to scan the HOT-chain */ + nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid); + priorXmax = HeapTupleHeaderGetUpdateXid(htup); + } + else + { + /* Must be a redirect item. We do not set its root_offsets entry */ + Assert(ItemIdIsRedirected(lp)); + /* Set up to scan the HOT-chain */ + nextoffnum = ItemIdGetRedirect(lp); + priorXmax = InvalidTransactionId; + } + + /* + * Now follow the HOT-chain and collect other tuples in the chain. + * + * Note: Even though this is a nested loop, the complexity of the + * function is O(N) because a tuple in the page should be visited not + * more than twice, once in the outer loop and once in HOT-chain + * chases. 
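+ *
+ * Illustrative example of the resulting mapping: for a chain whose
+ * root is a normal tuple at offset 2 with heap-only successors at
+ * offsets 5 and 7, we set root_offsets[1] = 2 (the root maps to
+ * itself), root_offsets[4] = 2 and root_offsets[6] = 2, while all
+ * other entries remain InvalidOffsetNumber.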
+ */
+ for (;;)
+ {
+ /* Sanity check (pure paranoia) */
+ if (nextoffnum < FirstOffsetNumber)
+ break;
+
+ /*
+ * An offset past the end of page's line pointer array is possible
+ * when the array was truncated
+ */
+ if (nextoffnum > maxoff)
+ break;
+
+ lp = PageGetItemId(page, nextoffnum);
+
+ /* Check for broken chains */
+ if (!ItemIdIsNormal(lp))
+ break;
+
+ htup = (HeapTupleHeader) PageGetItem(page, lp);
+
+ if (TransactionIdIsValid(priorXmax) &&
+ !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
+ break;
+
+ /* Remember the root line pointer for this item */
+ root_offsets[nextoffnum - 1] = offnum;
+
+ /* Advance to next chain member, if any */
+ if (!HeapTupleHeaderIsHotUpdated(htup))
+ break;
+
+ /* HOT implies it can't have moved to different partition */
+ Assert(!HeapTupleHeaderIndicatesMovedPartitions(htup));
+
+ nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
+ priorXmax = HeapTupleHeaderGetUpdateXid(htup);
+ }
+ }
+}
+
+/*
+ * TODO: move to its own file so it can be auto-updated.
+ * Adapted from src/backend/storage/page/bufpage.c.
+ */
+
+/*
+ * Tuple defrag support for PageRepairFragmentation and PageIndexMultiDelete
+ */
+typedef struct itemIdCompactData
+{
+ uint16 offsetindex; /* linp array index */
+ int16 itemoff; /* page offset of item data */
+ uint16 len; /* item data len */
+ uint16 alignedlen; /* MAXALIGN(item data len) */
+} itemIdCompactData;
+typedef itemIdCompactData *itemIdCompact;
+
+/*
+ * After removing or marking some line pointers unused, move the tuples to
+ * remove the gaps caused by the removed items and reorder them back into
+ * reverse line pointer order in the page.
+ *
+ * This function can often be fairly hot, so it pays to take some measures to
+ * make it as optimal as possible.
+ *
+ * Callers may pass 'presorted' as true if the 'itemidbase' array is sorted in
+ * descending order of itemoff. When this is true we can just memmove()
+ * tuples towards the end of the page. This is quite a common case as it's
+ * the order that tuples are initially inserted into pages. When we call this
+ * function to defragment the tuples in the page then any new line pointers
+ * added to the page will keep that presorted order, so hitting this case is
+ * still very common for tables that are commonly updated.
+ *
+ * When the 'itemidbase' array is not presorted then we're unable to just
+ * memmove() tuples around freely. Doing so could cause us to overwrite the
+ * memory belonging to a tuple we've not moved yet. In this case, we copy all
+ * the tuples that need to be moved into a temporary buffer. We can then
+ * simply memcpy() out of that temp buffer back into the page at the correct
+ * location. Tuples are copied back into the page in the same order as the
+ * 'itemidbase' array, so we end up reordering the tuples back into reverse
+ * line pointer order. This will increase the chances of hitting the
+ * presorted case the next time around.
+ *
+ * Callers must ensure that nitems is > 0.
+ */
+static void
+pgtde_compactify_tuples(Relation rel, Buffer buffer, itemIdCompact itemidbase, int nitems, Page page, bool presorted)
+{
+ PageHeader phdr = (PageHeader) page;
+ Offset upper;
+ Offset copy_tail;
+ Offset copy_head;
+ itemIdCompact itemidptr;
+ int i;
+
+ /* Code within will not work correctly if nitems == 0 */
+ Assert(nitems > 0);
+
+ if (presorted)
+ {
+
+#ifdef USE_ASSERT_CHECKING
+ {
+ /*
+ * Verify we've not gotten any new callers that are incorrectly
+ * passing a true presorted value.
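+ *
+ * "Presorted" means strictly descending itemoff; e.g. with
+ * pd_special = 8192, offsets 8000, 7840, 7680 pass, whereas
+ * 8000, 7680, 7840 would trip the assertion below.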
+ */
+ Offset lastoff = phdr->pd_special;
+
+ for (i = 0; i < nitems; i++)
+ {
+ itemidptr = &itemidbase[i];
+
+ Assert(lastoff > itemidptr->itemoff);
+
+ lastoff = itemidptr->itemoff;
+ }
+ }
+#endif /* USE_ASSERT_CHECKING */
+
+ /*
+ * 'itemidbase' is already in the optimal order, i.e., lower item
+ * pointers have a higher offset. This allows us to memmove() the
+ * tuples up to the end of the page without having to worry about
+ * overwriting other tuples that have not been moved yet.
+ *
+ * There's a good chance that there are tuples already right at the
+ * end of the page that we can simply skip over because they're
+ * already in the correct location within the page. We'll do that
+ * first...
+ */
+ upper = phdr->pd_special;
+ i = 0;
+ do
+ {
+ itemidptr = &itemidbase[i];
+ if (upper != itemidptr->itemoff + itemidptr->alignedlen)
+ break;
+ upper -= itemidptr->alignedlen;
+
+ i++;
+ } while (i < nitems);
+
+ /*
+ * Now that we've found the first tuple that needs to be moved, we can
+ * do the tuple compactification. We try to make the fewest possible
+ * memmove() calls and only call memmove() when there's a gap. When
+ * we see a gap we just move all tuples after the gap up until the
+ * point of the last move operation.
+ */
+ copy_tail = copy_head = itemidptr->itemoff + itemidptr->alignedlen;
+ for (; i < nitems; i++)
+ {
+ ItemId lp;
+
+ itemidptr = &itemidbase[i];
+
+ lp = PageGetItemId(page, itemidptr->offsetindex + 1);
+
+ if (copy_head != itemidptr->itemoff + itemidptr->alignedlen && copy_head < copy_tail)
+ {
+ memmove((char *) page + upper,
+ page + copy_head,
+ copy_tail - copy_head);
+
+ /*
+ * We've now moved all tuples already seen, but not the
+ * current tuple, so we set the copy_tail to the end of this
+ * tuple so it can be moved in another iteration of the loop.
+ */
+ copy_tail = itemidptr->itemoff + itemidptr->alignedlen;
+ }
+ /* shift the target offset down by the length of this tuple */
+ upper -= itemidptr->alignedlen;
+ /* point the copy_head to the start of this tuple */
+ copy_head = itemidptr->itemoff;
+
+ /* update the line pointer to reference the new offset */
+ lp->lp_off = upper;
+ }
+
+ /* move the remaining tuples. */
+ memmove((char *) page + upper,
+ page + copy_head,
+ copy_tail - copy_head);
+ }
+ else
+ {
+ PGAlignedBlock scratch;
+ char *scratchptr = scratch.data;
+
+ /*
+ * Non-presorted case: The tuples in the itemidbase array may be in
+ * any order. So, in order to move these to the end of the page we
+ * must make a temp copy of each tuple that needs to be moved before
+ * we copy them back into the page at the new offset.
+ *
+ * If a large percentage of tuples have been pruned (>75%) then we'll
+ * copy these into the temp buffer tuple-by-tuple, otherwise, we'll
+ * just do a single memcpy() for all tuples that need to be moved.
+ * When so many tuples have been removed there's likely to be a lot of
+ * gaps and it's unlikely that many non-movable tuples remain at the
+ * end of the page.
+ */
+ if (nitems < PageGetMaxOffsetNumber(page) / 4)
+ {
+ i = 0;
+ do
+ {
+ itemidptr = &itemidbase[i];
+ memcpy(scratchptr + itemidptr->itemoff, page + itemidptr->itemoff,
+ itemidptr->alignedlen);
+ i++;
+ } while (i < nitems);
+
+ /* Set things up for the compactification code below */
+ i = 0;
+ itemidptr = &itemidbase[0];
+ upper = phdr->pd_special;
+ }
+ else
+ {
+ upper = phdr->pd_special;
+
+ /*
+ * Many tuples are likely to already be in the correct location.
+ * There's no need to copy these into the temp buffer.
Instead + * we'll just skip forward in the itemidbase array to the position + * that we do need to move tuples from so that the code below just + * leaves these ones alone. + */ + i = 0; + do + { + itemidptr = &itemidbase[i]; + if (upper != itemidptr->itemoff + itemidptr->alignedlen) + break; + upper -= itemidptr->alignedlen; + + i++; + } while (i < nitems); + + /* Copy all tuples that need to be moved into the temp buffer */ + memcpy(scratchptr + phdr->pd_upper, + page + phdr->pd_upper, + upper - phdr->pd_upper); + } + + /* + * Do the tuple compactification. itemidptr is already pointing to + * the first tuple that we're going to move. Here we collapse the + * memcpy calls for adjacent tuples into a single call. This is done + * by delaying the memcpy call until we find a gap that needs to be + * closed. + */ + copy_tail = copy_head = itemidptr->itemoff + itemidptr->alignedlen; + for (; i < nitems; i++) + { + ItemId lp; + + itemidptr = &itemidbase[i]; + + lp = PageGetItemId(page, itemidptr->offsetindex + 1); + + /* copy pending tuples when we detect a gap */ + if (copy_head != itemidptr->itemoff + itemidptr->alignedlen) + { + memcpy((char *) page + upper, + scratchptr + copy_head, + copy_tail - copy_head); + + /* + * We've now copied all tuples already seen, but not the + * current tuple, so we set the copy_tail to the end of this + * tuple. + */ + copy_tail = itemidptr->itemoff + itemidptr->alignedlen; + } + /* shift the target offset down by the length of this tuple */ + upper -= itemidptr->alignedlen; + /* point the copy_head to the start of this tuple */ + copy_head = itemidptr->itemoff; + + /* update the line pointer to reference the new offset */ + lp->lp_off = upper; + } + + /* Copy the remaining chunk */ + memcpy((char *) page + upper, + scratchptr + copy_head, + copy_tail - copy_head); + } + + phdr->pd_upper = upper; +} + +/* + * PageRepairFragmentation + * + * Frees fragmented space on a heap page following pruning. + * + * This routine is usable for heap pages only, but see PageIndexMultiDelete. + * + * This routine removes unused line pointers from the end of the line pointer + * array. This is possible when dead heap-only tuples get removed by pruning, + * especially when there were HOT chains with several tuples each beforehand. + * + * Caller had better have a full cleanup lock on page's buffer. As a side + * effect the page's PD_HAS_FREE_LINES hint bit will be set or unset as + * needed. Caller might also need to account for a reduction in the length of + * the line pointer array following array truncation. + */ +void +TdePageRepairFragmentation(Relation rel, Buffer buffer, Page page) +{ + Offset pd_lower = ((PageHeader) page)->pd_lower; + Offset pd_upper = ((PageHeader) page)->pd_upper; + Offset pd_special = ((PageHeader) page)->pd_special; + Offset last_offset; + itemIdCompactData itemidbase[MaxHeapTuplesPerPage]; + itemIdCompact itemidptr; + ItemId lp; + int nline, + nstorage, + nunused; + OffsetNumber finalusedlp = InvalidOffsetNumber; + int i; + Size totallen; + bool presorted = true; /* For now */ + + /* + * It's worth the trouble to be more paranoid here than in most places, + * because we are about to reshuffle data in (what is usually) a shared + * disk buffer. If we aren't careful then corrupted pointers, lengths, + * etc could cause us to clobber adjacent disk buffers, spreading the data + * loss further. So, check everything. 
+ */ + if (pd_lower < SizeOfPageHeaderData || + pd_lower > pd_upper || + pd_upper > pd_special || + pd_special > BLCKSZ || + pd_special != MAXALIGN(pd_special)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg("corrupted page pointers: lower = %u, upper = %u, special = %u", + pd_lower, pd_upper, pd_special))); + + /* + * Run through the line pointer array and collect data about live items. + */ + nline = PageGetMaxOffsetNumber(page); + itemidptr = itemidbase; + nunused = totallen = 0; + last_offset = pd_special; + for (i = FirstOffsetNumber; i <= nline; i++) + { + lp = PageGetItemId(page, i); + if (ItemIdIsUsed(lp)) + { + if (ItemIdHasStorage(lp)) + { + itemidptr->offsetindex = i - 1; + itemidptr->itemoff = ItemIdGetOffset(lp); + + if (last_offset > itemidptr->itemoff) + last_offset = itemidptr->itemoff; + else + presorted = false; + + if (unlikely(itemidptr->itemoff < (int) pd_upper || + itemidptr->itemoff >= (int) pd_special)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg("corrupted line pointer: %u", + itemidptr->itemoff))); + itemidptr->len = ItemIdGetLength(lp); + itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp)); + totallen += itemidptr->alignedlen; + itemidptr++; + } + + finalusedlp = i; /* Could be the final non-LP_UNUSED item */ + } + else + { + /* Unused entries should have lp_len = 0, but make sure */ + Assert(!ItemIdHasStorage(lp)); + ItemIdSetUnused(lp); + nunused++; + } + } + + nstorage = itemidptr - itemidbase; + if (nstorage == 0) + { + /* Page is completely empty, so just reset it quickly */ + ((PageHeader) page)->pd_upper = pd_special; + } + else + { + /* Need to compact the page the hard way */ + if (totallen > (Size) (pd_special - pd_lower)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg("corrupted item lengths: total %u, available space %u", + (unsigned int) totallen, pd_special - pd_lower))); + + pgtde_compactify_tuples(rel, buffer, itemidbase, nstorage, page, presorted); + } + + if (finalusedlp != nline) + { + /* The last line pointer is not the last used line pointer */ + int nunusedend = nline - finalusedlp; + + Assert(nunused >= nunusedend && nunusedend > 0); + + /* remove trailing unused line pointers from the count */ + nunused -= nunusedend; + /* truncate the line pointer array */ + ((PageHeader) page)->pd_lower -= (sizeof(ItemIdData) * nunusedend); + } + + /* Set hint bit for PageAddItemExtended */ + if (nunused > 0) + PageSetHasFreeLinePointers(page); + else + PageClearHasFreeLinePointers(page); +} diff --git a/contrib/pg_tde/src16/access/pg_tde_rewrite.c b/contrib/pg_tde/src16/access/pg_tde_rewrite.c new file mode 100644 index 00000000000..3577141ee00 --- /dev/null +++ b/contrib/pg_tde/src16/access/pg_tde_rewrite.c @@ -0,0 +1,1291 @@ +/*------------------------------------------------------------------------- + * + * rewriteheap.c + * Support functions to rewrite tables. + * + * These functions provide a facility to completely rewrite a heap, while + * preserving visibility information and update chains. + * + * INTERFACE + * + * The caller is responsible for creating the new heap, all catalog + * changes, supplying the tuples to be written to the new heap, and + * rebuilding indexes. The caller must hold AccessExclusiveLock on the + * target table, because we assume no one else is writing into it. 
+ * + * To use the facility: + * + * begin_tdeheap_rewrite + * while (fetch next tuple) + * { + * if (tuple is dead) + * rewrite_tdeheap_dead_tuple + * else + * { + * // do any transformations here if required + * rewrite_tdeheap_tuple + * } + * } + * end_tdeheap_rewrite + * + * The contents of the new relation shouldn't be relied on until after + * end_tdeheap_rewrite is called. + * + * + * IMPLEMENTATION + * + * This would be a fairly trivial affair, except that we need to maintain + * the ctid chains that link versions of an updated tuple together. + * Since the newly stored tuples will have tids different from the original + * ones, if we just copied t_ctid fields to the new table the links would + * be wrong. When we are required to copy a (presumably recently-dead or + * delete-in-progress) tuple whose ctid doesn't point to itself, we have + * to substitute the correct ctid instead. + * + * For each ctid reference from A -> B, we might encounter either A first + * or B first. (Note that a tuple in the middle of a chain is both A and B + * of different pairs.) + * + * If we encounter A first, we'll store the tuple in the unresolved_tups + * hash table. When we later encounter B, we remove A from the hash table, + * fix the ctid to point to the new location of B, and insert both A and B + * to the new heap. + * + * If we encounter B first, we can insert B to the new heap right away. + * We then add an entry to the old_new_tid_map hash table showing B's + * original tid (in the old heap) and new tid (in the new heap). + * When we later encounter A, we get the new location of B from the table, + * and can write A immediately with the correct ctid. + * + * Entries in the hash tables can be removed as soon as the later tuple + * is encountered. That helps to keep the memory usage down. At the end, + * both tables are usually empty; we should have encountered both A and B + * of each pair. However, it's possible for A to be RECENTLY_DEAD and B + * entirely DEAD according to HeapTupleSatisfiesVacuum, because the test + * for deadness using OldestXmin is not exact. In such a case we might + * encounter B first, and skip it, and find A later. Then A would be added + * to unresolved_tups, and stay there until end of the rewrite. Since + * this case is very unusual, we don't worry about the memory usage. + * + * Using in-memory hash tables means that we use some memory for each live + * update chain in the table, from the time we find one end of the + * reference until we find the other end. That shouldn't be a problem in + * practice, but if you do something like an UPDATE without a where-clause + * on a large table, and then run CLUSTER in the same transaction, you + * could run out of memory. It doesn't seem worthwhile to add support for + * spill-to-disk, as there shouldn't be that many RECENTLY_DEAD tuples in a + * table under normal circumstances. Furthermore, in the typical scenario + * of CLUSTERing on an unchanging key column, we'll see all the versions + * of a given tuple together anyway, and so the peak memory usage is only + * proportional to the number of RECENTLY_DEAD versions of a single row, not + * in the whole table. Note that if we do fail halfway through a CLUSTER, + * the old table is still valid, so failure is not catastrophic. + * + * We can't use the normal tdeheap_insert function to insert into the new + * heap, because tdeheap_insert overwrites the visibility information. 
+ * We use a special-purpose raw_tdeheap_insert function instead, which
+ * is optimized for bulk inserting a lot of tuples, knowing that we have
+ * exclusive access to the heap. raw_tdeheap_insert builds new pages in
+ * local storage. When a page is full, or at the end of the process,
+ * we insert it to WAL as a single record and then write it to disk
+ * directly through smgr. Note, however, that any data sent to the new
+ * heap's TOAST table will go through the normal bufmgr.
+ *
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994-5, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/access/heap/rewriteheap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "pg_tde_defines.h"
+
+#include "postgres.h"
+
+#include <unistd.h>
+
+#include "access/pg_tdeam.h"
+#include "access/pg_tdeam_xlog.h"
+#include "access/pg_tdetoast.h"
+#include "access/pg_tde_rewrite.h"
+#include "encryption/enc_tde.h"
+
+#include "access/transam.h"
+#include "access/xact.h"
+#include "access/xloginsert.h"
+#include "catalog/catalog.h"
+#include "common/file_utils.h"
+#include "lib/ilist.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "replication/logical.h"
+#include "replication/slot.h"
+#include "storage/bufmgr.h"
+#include "storage/fd.h"
+#include "storage/procarray.h"
+#include "storage/smgr.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * State associated with a rewrite operation. This is opaque to the user
+ * of the rewrite facility.
+ */
+typedef struct RewriteStateData
+{
+ Relation rs_old_rel; /* source heap */
+ Relation rs_new_rel; /* destination heap */
+ Page rs_buffer; /* page currently being built */
+ BlockNumber rs_blockno; /* block where page will go */
+ bool rs_buffer_valid; /* T if any tuples in buffer */
+ bool rs_logical_rewrite; /* do we need to do logical rewriting */
+ TransactionId rs_oldest_xmin; /* oldest xmin used by caller to determine
+ * tuple visibility */
+ TransactionId rs_freeze_xid; /* Xid that will be used as freeze cutoff
+ * point */
+ TransactionId rs_logical_xmin; /* Xid that will be used as cutoff point
+ * for logical rewrites */
+ MultiXactId rs_cutoff_multi; /* MultiXactId that will be used as cutoff
+ * point for multixacts */
+ MemoryContext rs_cxt; /* for hash tables and entries and tuples in
+ * them */
+ XLogRecPtr rs_begin_lsn; /* XLogInsertLsn when starting the rewrite */
+ HTAB *rs_unresolved_tups; /* unmatched A tuples */
+ HTAB *rs_old_new_tid_map; /* unmatched B tuples */
+ HTAB *rs_logical_mappings; /* logical remapping files */
+ uint32 rs_num_rewrite_mappings; /* # in memory mappings */
+} RewriteStateData;
+
+/*
+ * The lookup keys for the hash tables are tuple TID and xmin (we must check
+ * both to avoid false matches from dead tuples). Beware that there is
+ * probably some padding space in this struct; it must be zeroed out for
+ * correct hashtable operation.
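+ *
+ * A minimal usage sketch, mirroring the lookup sites later in this
+ * file: zero the whole key first so any padding bytes are
+ * deterministic, then fill in the fields:
+ *
+ *   TidHashKey hashkey;
+ *
+ *   memset(&hashkey, 0, sizeof(hashkey));
+ *   hashkey.xmin = HeapTupleHeaderGetXmin(tuple->t_data);
+ *   hashkey.tid = tuple->t_self;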
+ */ +typedef struct +{ + TransactionId xmin; /* tuple xmin */ + ItemPointerData tid; /* tuple location in old heap */ +} TidHashKey; + +/* + * Entry structures for the hash tables + */ +typedef struct +{ + TidHashKey key; /* expected xmin/old location of B tuple */ + ItemPointerData old_tid; /* A's location in the old heap */ + HeapTuple tuple; /* A's tuple contents */ +} UnresolvedTupData; + +typedef UnresolvedTupData *UnresolvedTup; + +typedef struct +{ + TidHashKey key; /* actual xmin/old location of B tuple */ + ItemPointerData new_tid; /* where we put it in the new heap */ +} OldToNewMappingData; + +typedef OldToNewMappingData *OldToNewMapping; + +/* + * In-Memory data for an xid that might need logical remapping entries + * to be logged. + */ +typedef struct RewriteMappingFile +{ + TransactionId xid; /* xid that might need to see the row */ + int vfd; /* fd of mappings file */ + off_t off; /* how far have we written yet */ + dclist_head mappings; /* list of in-memory mappings */ + char path[MAXPGPATH]; /* path, for error messages */ +} RewriteMappingFile; + +/* + * A single In-Memory logical rewrite mapping, hanging off + * RewriteMappingFile->mappings. + */ +typedef struct RewriteMappingDataEntry +{ + LogicalRewriteMappingData map; /* map between old and new location of the + * tuple */ + dlist_node node; +} RewriteMappingDataEntry; + + +/* prototypes for internal functions */ +static void raw_tdeheap_insert(RewriteState state, HeapTuple tup); + +/* internal logical remapping prototypes */ +static void logical_begin_tdeheap_rewrite(RewriteState state); +static void logical_rewrite_tdeheap_tuple(RewriteState state, ItemPointerData old_tid, HeapTuple new_tuple); +static void logical_end_tdeheap_rewrite(RewriteState state); + + +/* + * Begin a rewrite of a table + * + * old_heap old, locked heap relation tuples will be read from + * new_heap new, locked heap relation to insert tuples to + * oldest_xmin xid used by the caller to determine which tuples are dead + * freeze_xid xid before which tuples will be frozen + * cutoff_multi multixact before which multis will be removed + * + * Returns an opaque RewriteState, allocated in current memory context, + * to be used in subsequent calls to the other functions. + */ +RewriteState +begin_tdeheap_rewrite(Relation old_heap, Relation new_heap, TransactionId oldest_xmin, + TransactionId freeze_xid, MultiXactId cutoff_multi) +{ + RewriteState state; + MemoryContext rw_cxt; + MemoryContext old_cxt; + HASHCTL hash_ctl; + + /* + * To ease cleanup, make a separate context that will contain the + * RewriteState struct itself plus all subsidiary data. 
+ */ + rw_cxt = AllocSetContextCreate(CurrentMemoryContext, + "Table rewrite", + ALLOCSET_DEFAULT_SIZES); + old_cxt = MemoryContextSwitchTo(rw_cxt); + + /* Create and fill in the state struct */ + state = palloc0(sizeof(RewriteStateData)); + + state->rs_old_rel = old_heap; + state->rs_new_rel = new_heap; + state->rs_buffer = (Page) palloc_aligned(BLCKSZ, PG_IO_ALIGN_SIZE, 0); + /* new_heap needn't be empty, just locked */ + state->rs_blockno = RelationGetNumberOfBlocks(new_heap); + state->rs_buffer_valid = false; + state->rs_oldest_xmin = oldest_xmin; + state->rs_freeze_xid = freeze_xid; + state->rs_cutoff_multi = cutoff_multi; + state->rs_cxt = rw_cxt; + + /* Initialize hash tables used to track update chains */ + hash_ctl.keysize = sizeof(TidHashKey); + hash_ctl.entrysize = sizeof(UnresolvedTupData); + hash_ctl.hcxt = state->rs_cxt; + + state->rs_unresolved_tups = + hash_create("Rewrite / Unresolved ctids", + 128, /* arbitrary initial size */ + &hash_ctl, + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); + + hash_ctl.entrysize = sizeof(OldToNewMappingData); + + state->rs_old_new_tid_map = + hash_create("Rewrite / Old to new tid map", + 128, /* arbitrary initial size */ + &hash_ctl, + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); + + MemoryContextSwitchTo(old_cxt); + + logical_begin_tdeheap_rewrite(state); + + return state; +} + +/* + * End a rewrite. + * + * state and any other resources are freed. + */ +void +end_tdeheap_rewrite(RewriteState state) +{ + HASH_SEQ_STATUS seq_status; + UnresolvedTup unresolved; + + /* + * Write any remaining tuples in the UnresolvedTups table. If we have any + * left, they should in fact be dead, but let's err on the safe side. + */ + hash_seq_init(&seq_status, state->rs_unresolved_tups); + + while ((unresolved = hash_seq_search(&seq_status)) != NULL) + { + ItemPointerSetInvalid(&unresolved->tuple->t_data->t_ctid); + raw_tdeheap_insert(state, unresolved->tuple); + } + + /* Write the last page, if any */ + if (state->rs_buffer_valid) + { + if (RelationNeedsWAL(state->rs_new_rel)) + log_newpage(&state->rs_new_rel->rd_locator, + MAIN_FORKNUM, + state->rs_blockno, + state->rs_buffer, + true); + + PageSetChecksumInplace(state->rs_buffer, state->rs_blockno); + + smgrextend(RelationGetSmgr(state->rs_new_rel), MAIN_FORKNUM, + state->rs_blockno, state->rs_buffer, true); + } + + /* + * When we WAL-logged rel pages, we must nonetheless fsync them. The + * reason is the same as in storage.c's RelationCopyStorage(): we're + * writing data that's not in shared buffers, and so a CHECKPOINT + * occurring during the rewriteheap operation won't have fsync'd data we + * wrote before the checkpoint. + */ + if (RelationNeedsWAL(state->rs_new_rel)) + smgrimmedsync(RelationGetSmgr(state->rs_new_rel), MAIN_FORKNUM); + + logical_end_tdeheap_rewrite(state); + + /* Deleting the context frees everything */ + MemoryContextDelete(state->rs_cxt); +} + +/* + * Add a tuple to the new heap. + * + * Visibility information is copied from the original tuple, except that + * we "freeze" very-old tuples. Note that since we scribble on new_tuple, + * it had better be temp storage not a pointer to the original tuple. 
+ * + * state opaque state as returned by begin_tdeheap_rewrite + * old_tuple original tuple in the old heap + * new_tuple new, rewritten tuple to be inserted to new heap + */ +void +rewrite_tdeheap_tuple(RewriteState state, + HeapTuple old_tuple, HeapTuple new_tuple) +{ + MemoryContext old_cxt; + ItemPointerData old_tid; + TidHashKey hashkey; + bool found; + bool free_new; + + old_cxt = MemoryContextSwitchTo(state->rs_cxt); + + /* + * Copy the original tuple's visibility information into new_tuple. + * + * XXX we might later need to copy some t_infomask2 bits, too? Right now, + * we intentionally clear the HOT status bits. + */ + memcpy(&new_tuple->t_data->t_choice.t_heap, + &old_tuple->t_data->t_choice.t_heap, + sizeof(HeapTupleFields)); + + new_tuple->t_data->t_infomask &= ~HEAP_XACT_MASK; + new_tuple->t_data->t_infomask2 &= ~HEAP2_XACT_MASK; + new_tuple->t_data->t_infomask |= + old_tuple->t_data->t_infomask & HEAP_XACT_MASK; + + /* + * While we have our hands on the tuple, we may as well freeze any + * eligible xmin or xmax, so that future VACUUM effort can be saved. + */ + tdeheap_freeze_tuple(new_tuple->t_data, + state->rs_old_rel->rd_rel->relfrozenxid, + state->rs_old_rel->rd_rel->relminmxid, + state->rs_freeze_xid, + state->rs_cutoff_multi); + + /* + * Invalid ctid means that ctid should point to the tuple itself. We'll + * override it later if the tuple is part of an update chain. + */ + ItemPointerSetInvalid(&new_tuple->t_data->t_ctid); + + /* + * If the tuple has been updated, check the old-to-new mapping hash table. + */ + if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) || + HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) && + !HeapTupleHeaderIndicatesMovedPartitions(old_tuple->t_data) && + !(ItemPointerEquals(&(old_tuple->t_self), + &(old_tuple->t_data->t_ctid)))) + { + OldToNewMapping mapping; + + memset(&hashkey, 0, sizeof(hashkey)); + hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data); + hashkey.tid = old_tuple->t_data->t_ctid; + + mapping = (OldToNewMapping) + hash_search(state->rs_old_new_tid_map, &hashkey, + HASH_FIND, NULL); + + if (mapping != NULL) + { + /* + * We've already copied the tuple that t_ctid points to, so we can + * set the ctid of this tuple to point to the new location, and + * insert it right away. + */ + new_tuple->t_data->t_ctid = mapping->new_tid; + + /* We don't need the mapping entry anymore */ + hash_search(state->rs_old_new_tid_map, &hashkey, + HASH_REMOVE, &found); + Assert(found); + } + else + { + /* + * We haven't seen the tuple t_ctid points to yet. Stash this + * tuple into unresolved_tups to be written later. + */ + UnresolvedTup unresolved; + + unresolved = hash_search(state->rs_unresolved_tups, &hashkey, + HASH_ENTER, &found); + Assert(!found); + + unresolved->old_tid = old_tuple->t_self; + unresolved->tuple = tdeheap_copytuple(new_tuple); + + /* + * We can't do anything more now, since we don't know where the + * tuple will be written. + */ + MemoryContextSwitchTo(old_cxt); + return; + } + } + + /* + * Now we will write the tuple, and then check to see if it is the B tuple + * in any new or known pair. When we resolve a known pair, we will be + * able to write that pair's A tuple, and then we have to check if it + * resolves some other pair. Hence, we need a loop here. 
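+ *
+ * Illustrative example: for an update chain Z -> A -> B scanned in
+ * the order Z, A, B, both Z and A are stashed in rs_unresolved_tups.
+ * When B is written, its new TID resolves A; writing A in turn
+ * resolves Z, so a single call can drain a whole chain of waiters.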
+ */ + old_tid = old_tuple->t_self; + free_new = false; + + for (;;) + { + ItemPointerData new_tid; + + /* Insert the tuple and find out where it's put in new_heap */ + raw_tdeheap_insert(state, new_tuple); + new_tid = new_tuple->t_self; + + logical_rewrite_tdeheap_tuple(state, old_tid, new_tuple); + + /* + * If the tuple is the updated version of a row, and the prior version + * wouldn't be DEAD yet, then we need to either resolve the prior + * version (if it's waiting in rs_unresolved_tups), or make an entry + * in rs_old_new_tid_map (so we can resolve it when we do see it). The + * previous tuple's xmax would equal this one's xmin, so it's + * RECENTLY_DEAD if and only if the xmin is not before OldestXmin. + */ + if ((new_tuple->t_data->t_infomask & HEAP_UPDATED) && + !TransactionIdPrecedes(HeapTupleHeaderGetXmin(new_tuple->t_data), + state->rs_oldest_xmin)) + { + /* + * Okay, this is B in an update pair. See if we've seen A. + */ + UnresolvedTup unresolved; + + memset(&hashkey, 0, sizeof(hashkey)); + hashkey.xmin = HeapTupleHeaderGetXmin(new_tuple->t_data); + hashkey.tid = old_tid; + + unresolved = hash_search(state->rs_unresolved_tups, &hashkey, + HASH_FIND, NULL); + + if (unresolved != NULL) + { + /* + * We have seen and memorized the previous tuple already. Now + * that we know where we inserted the tuple its t_ctid points + * to, fix its t_ctid and insert it to the new heap. + */ + if (free_new) + tdeheap_freetuple(new_tuple); + new_tuple = unresolved->tuple; + free_new = true; + old_tid = unresolved->old_tid; + new_tuple->t_data->t_ctid = new_tid; + + /* + * We don't need the hash entry anymore, but don't free its + * tuple just yet. + */ + hash_search(state->rs_unresolved_tups, &hashkey, + HASH_REMOVE, &found); + Assert(found); + + /* loop back to insert the previous tuple in the chain */ + continue; + } + else + { + /* + * Remember the new tid of this tuple. We'll use it to set the + * ctid when we find the previous tuple in the chain. + */ + OldToNewMapping mapping; + + mapping = hash_search(state->rs_old_new_tid_map, &hashkey, + HASH_ENTER, &found); + Assert(!found); + + mapping->new_tid = new_tid; + } + } + + /* Done with this (chain of) tuples, for now */ + if (free_new) + tdeheap_freetuple(new_tuple); + break; + } + + MemoryContextSwitchTo(old_cxt); +} + +/* + * Register a dead tuple with an ongoing rewrite. Dead tuples are not + * copied to the new table, but we still make note of them so that we + * can release some resources earlier. + * + * Returns true if a tuple was removed from the unresolved_tups table. + * This indicates that that tuple, previously thought to be "recently dead", + * is now known really dead and won't be written to the output. + */ +bool +rewrite_tdeheap_dead_tuple(RewriteState state, HeapTuple old_tuple) +{ + /* + * If we have already seen an earlier tuple in the update chain that + * points to this tuple, let's forget about that earlier tuple. It's in + * fact dead as well, our simple xmax < OldestXmin test in + * HeapTupleSatisfiesVacuum just wasn't enough to detect it. It happens + * when xmin of a tuple is greater than xmax, which sounds + * counter-intuitive but is perfectly valid. + * + * We don't bother to try to detect the situation the other way round, + * when we encounter the dead tuple first and then the recently dead one + * that points to it. If that happens, we'll have some unmatched entries + * in the UnresolvedTups hash table at the end. 
That can happen anyway, + * because a vacuum might have removed the dead tuple in the chain before + * us. + */ + UnresolvedTup unresolved; + TidHashKey hashkey; + bool found; + + memset(&hashkey, 0, sizeof(hashkey)); + hashkey.xmin = HeapTupleHeaderGetXmin(old_tuple->t_data); + hashkey.tid = old_tuple->t_self; + + unresolved = hash_search(state->rs_unresolved_tups, &hashkey, + HASH_FIND, NULL); + + if (unresolved != NULL) + { + /* Need to free the contained tuple as well as the hashtable entry */ + tdeheap_freetuple(unresolved->tuple); + hash_search(state->rs_unresolved_tups, &hashkey, + HASH_REMOVE, &found); + Assert(found); + return true; + } + + return false; +} + +/* + * Insert a tuple to the new relation. This has to track tdeheap_insert + * and its subsidiary functions! + * + * t_self of the tuple is set to the new TID of the tuple. If t_ctid of the + * tuple is invalid on entry, it's replaced with the new TID as well (in + * the inserted data only, not in the caller's copy). + */ +static void +raw_tdeheap_insert(RewriteState state, HeapTuple tup) +{ + Page page = state->rs_buffer; + Size pageFreeSpace, + saveFreeSpace; + Size len; + OffsetNumber newoff; + HeapTuple heaptup; + + /* + * If the new tuple is too big for storage or contains already toasted + * out-of-line attributes from some other relation, invoke the toaster. + * + * Note: below this point, heaptup is the data we actually intend to store + * into the relation; tup is the caller's original untoasted data. + */ + if (state->rs_new_rel->rd_rel->relkind == RELKIND_TOASTVALUE) + { + /* toast table entries should never be recursively toasted */ + Assert(!HeapTupleHasExternal(tup)); + heaptup = tup; + } + else if (HeapTupleHasExternal(tup) || tup->t_len > TOAST_TUPLE_THRESHOLD) + { + int options = HEAP_INSERT_SKIP_FSM; + + /* + * While rewriting the heap for VACUUM FULL / CLUSTER, make sure data + * for the TOAST table are not logically decoded. The main heap is + * WAL-logged as XLOG FPI records, which are not logically decoded. + */ + options |= HEAP_INSERT_NO_LOGICAL; + + heaptup = tdeheap_toast_insert_or_update(state->rs_new_rel, tup, NULL, + options); + } + else + heaptup = tup; + + len = MAXALIGN(heaptup->t_len); /* be conservative */ + + /* + * If we're gonna fail for oversize tuple, do it right away + */ + if (len > MaxHeapTupleSize) + ereport(ERROR, + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), + errmsg("row is too big: size %zu, maximum size %zu", + len, MaxHeapTupleSize))); + + /* Compute desired extra freespace due to fillfactor option */ + saveFreeSpace = RelationGetTargetPageFreeSpace(state->rs_new_rel, + HEAP_DEFAULT_FILLFACTOR); + + /* Now we can check to see if there's enough free space already. */ + if (state->rs_buffer_valid) + { + pageFreeSpace = PageGetHeapFreeSpace(page); + + if (len + saveFreeSpace > pageFreeSpace) + { + /* + * Doesn't fit, so write out the existing page. It always + * contains a tuple. Hence, unlike tdeheap_RelationGetBufferForTuple(), + * enforce saveFreeSpace unconditionally. + */ + + /* XLOG stuff */ + if (RelationNeedsWAL(state->rs_new_rel)) + log_newpage(&state->rs_new_rel->rd_locator, + MAIN_FORKNUM, + state->rs_blockno, + page, + true); + + /* + * Now write the page. We say skipFsync = true because there's no + * need for smgr to schedule an fsync for this write; we'll do it + * ourselves in end_tdeheap_rewrite. 
+ */
+ PageSetChecksumInplace(page, state->rs_blockno);
+
+ smgrextend(RelationGetSmgr(state->rs_new_rel), MAIN_FORKNUM,
+ state->rs_blockno, page, true);
+
+ state->rs_blockno++;
+ state->rs_buffer_valid = false;
+ }
+ }
+
+ if (!state->rs_buffer_valid)
+ {
+ /* Initialize a new empty page */
+ PageInit(page, BLCKSZ, 0);
+ state->rs_buffer_valid = true;
+ }
+
+ /* And now we can insert the tuple into the page */
+ newoff = TDE_PageAddItem(state->rs_new_rel->rd_locator, state->rs_blockno, page, (Item) heaptup->t_data, heaptup->t_len,
+ InvalidOffsetNumber, false, true);
+ if (newoff == InvalidOffsetNumber)
+ elog(ERROR, "failed to add tuple");
+
+ /* Update caller's t_self to the actual position where it was stored */
+ ItemPointerSet(&(tup->t_self), state->rs_blockno, newoff);
+
+ /*
+ * Insert the correct position into CTID of the stored tuple, too, if the
+ * caller didn't supply a valid CTID.
+ */
+ if (!ItemPointerIsValid(&tup->t_data->t_ctid))
+ {
+ ItemId newitemid;
+ HeapTupleHeader onpage_tup;
+
+ newitemid = PageGetItemId(page, newoff);
+ onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
+
+ onpage_tup->t_ctid = tup->t_self;
+ }
+
+ /* If heaptup is a private copy, release it. */
+ if (heaptup != tup)
+ tdeheap_freetuple(heaptup);
+}
+
+/* ------------------------------------------------------------------------
+ * Logical rewrite support
+ *
+ * When doing logical decoding - which relies on using cmin/cmax of catalog
+ * tuples, via xl_tdeheap_new_cid records - heap rewrites have to log enough
+ * information to allow the decoding backend to update its internal mapping
+ * of (relfilelocator,ctid) => (cmin, cmax) to be correct for the rewritten heap.
+ *
+ * For that, every time we find a tuple that's been modified in a catalog
+ * relation within the xmin horizon of any decoding slot, we log a mapping
+ * from the old to the new location.
+ *
+ * To deal with rewrites that abort, the filename of a mapping file contains
+ * the xid of the transaction performing the rewrite, which then can be
+ * checked before being read in.
+ *
+ * For efficiency we don't immediately spill every single mapping for a
+ * row to disk but only do so in batches when we've collected several of them
+ * in memory or when end_tdeheap_rewrite() has been called.
+ *
+ * Crash-Safety: This module diverts from the usual patterns of doing WAL
+ * since it cannot rely on checkpoint flushing out all buffers and thus
+ * waiting for exclusive locks on buffers. Usually the XLogInsert() covering
+ * buffer modifications is performed while the buffer(s) that are being
+ * modified are exclusively locked, guaranteeing that both the WAL record and
+ * the modified heap are on the same side of the checkpoint. But since the
+ * mapping files we log aren't in shared_buffers that interlock doesn't work.
+ *
+ * Instead we simply write the mapping files out to disk, *before* the
+ * XLogInsert() is performed. That guarantees that either the XLogInsert() is
+ * inserted after the checkpoint's redo pointer or that the checkpoint (via
+ * CheckPointLogicalRewriteHeap()) has flushed the (partial) mapping file to
+ * disk. That leaves the tail end that has not yet been flushed open to
+ * corruption, which is solved by including the current offset in the
+ * xl_tdeheap_rewrite_mapping records and truncating the mapping file to it
+ * during replay. Every time a rewrite is finished all generated mapping files
+ * are synced to disk.
+ * + * Note that if we were only concerned about crash safety we wouldn't have to + * deal with WAL logging at all - an fsync() at the end of a rewrite would be + * sufficient for crash safety. Any mapping that hasn't been safely flushed to + * disk has to be by an aborted (explicitly or via a crash) transaction and is + * ignored by virtue of the xid in its name being subject to a + * TransactionDidCommit() check. But we want to support having standbys via + * physical replication, both for availability and to do logical decoding + * there. + * ------------------------------------------------------------------------ + */ + +/* + * Do preparations for logging logical mappings during a rewrite if + * necessary. If we detect that we don't need to log anything we'll prevent + * any further action by the various logical rewrite functions. + */ +static void +logical_begin_tdeheap_rewrite(RewriteState state) +{ + HASHCTL hash_ctl; + TransactionId logical_xmin; + + /* + * We only need to persist these mappings if the rewritten table can be + * accessed during logical decoding, if not, we can skip doing any + * additional work. + */ + state->rs_logical_rewrite = + RelationIsAccessibleInLogicalDecoding(state->rs_old_rel); + + if (!state->rs_logical_rewrite) + return; + + ProcArrayGetReplicationSlotXmin(NULL, &logical_xmin); + + /* + * If there are no logical slots in progress we don't need to do anything, + * there cannot be any remappings for relevant rows yet. The relation's + * lock protects us against races. + */ + if (logical_xmin == InvalidTransactionId) + { + state->rs_logical_rewrite = false; + return; + } + + state->rs_logical_xmin = logical_xmin; + state->rs_begin_lsn = GetXLogInsertRecPtr(); + state->rs_num_rewrite_mappings = 0; + + hash_ctl.keysize = sizeof(TransactionId); + hash_ctl.entrysize = sizeof(RewriteMappingFile); + hash_ctl.hcxt = state->rs_cxt; + + state->rs_logical_mappings = + hash_create("Logical rewrite mapping", + 128, /* arbitrary initial size */ + &hash_ctl, + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); +} + +/* + * Flush all logical in-memory mappings to disk, but don't fsync them yet. 
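+ *
+ * On disk each mapping file is simply a dense array of
+ * LogicalRewriteMappingData records, so a batch of num_mappings
+ * entries is written as one num_mappings *
+ * sizeof(LogicalRewriteMappingData) chunk at the file's current
+ * offset, as the FileWrite() call below does.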
+ */ +static void +logical_tdeheap_rewrite_flush_mappings(RewriteState state) +{ + HASH_SEQ_STATUS seq_status; + RewriteMappingFile *src; + dlist_mutable_iter iter; + + Assert(state->rs_logical_rewrite); + + /* no logical rewrite in progress, no need to iterate over mappings */ + if (state->rs_num_rewrite_mappings == 0) + return; + + elog(DEBUG1, "flushing %u logical rewrite mapping entries", + state->rs_num_rewrite_mappings); + + hash_seq_init(&seq_status, state->rs_logical_mappings); + while ((src = (RewriteMappingFile *) hash_seq_search(&seq_status)) != NULL) + { + char *waldata; + char *waldata_start; + xl_tdeheap_rewrite_mapping xlrec; + Oid dboid; + uint32 len; + int written; + uint32 num_mappings = dclist_count(&src->mappings); + + /* this file hasn't got any new mappings */ + if (num_mappings == 0) + continue; + + if (state->rs_old_rel->rd_rel->relisshared) + dboid = InvalidOid; + else + dboid = MyDatabaseId; + + xlrec.num_mappings = num_mappings; + xlrec.mapped_rel = RelationGetRelid(state->rs_old_rel); + xlrec.mapped_xid = src->xid; + xlrec.mapped_db = dboid; + xlrec.offset = src->off; + xlrec.start_lsn = state->rs_begin_lsn; + + /* write all mappings consecutively */ + len = num_mappings * sizeof(LogicalRewriteMappingData); + waldata_start = waldata = palloc(len); + + /* + * collect data we need to write out, but don't modify ondisk data yet + */ + dclist_foreach_modify(iter, &src->mappings) + { + RewriteMappingDataEntry *pmap; + + pmap = dclist_container(RewriteMappingDataEntry, node, iter.cur); + + memcpy(waldata, &pmap->map, sizeof(pmap->map)); + waldata += sizeof(pmap->map); + + /* remove from the list and free */ + dclist_delete_from(&src->mappings, &pmap->node); + pfree(pmap); + + /* update bookkeeping */ + state->rs_num_rewrite_mappings--; + } + + Assert(dclist_count(&src->mappings) == 0); + Assert(waldata == waldata_start + len); + + /* + * Note that we deviate from the usual WAL coding practices here, + * check the above "Logical rewrite support" comment for reasoning. + */ + written = FileWrite(src->vfd, waldata_start, len, src->off, + WAIT_EVENT_LOGICAL_REWRITE_WRITE); + if (written != len) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not write to file \"%s\", wrote %d of %d: %m", src->path, + written, len))); + src->off += len; + + XLogBeginInsert(); + XLogRegisterData((char *) (&xlrec), sizeof(xlrec)); + XLogRegisterData(waldata_start, len); + + /* write xlog record */ + XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_REWRITE); + + pfree(waldata_start); + } + Assert(state->rs_num_rewrite_mappings == 0); +} + +/* + * Logical remapping part of end_tdeheap_rewrite(). + */ +static void +logical_end_tdeheap_rewrite(RewriteState state) +{ + HASH_SEQ_STATUS seq_status; + RewriteMappingFile *src; + + /* done, no logical rewrite in progress */ + if (!state->rs_logical_rewrite) + return; + + /* writeout remaining in-memory entries */ + if (state->rs_num_rewrite_mappings > 0) + logical_tdeheap_rewrite_flush_mappings(state); + + /* Iterate over all mappings we have written and fsync the files. */ + hash_seq_init(&seq_status, state->rs_logical_mappings); + while ((src = (RewriteMappingFile *) hash_seq_search(&seq_status)) != NULL) + { + if (FileSync(src->vfd, WAIT_EVENT_LOGICAL_REWRITE_SYNC) != 0) + ereport(data_sync_elevel(ERROR), + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", src->path))); + FileClose(src->vfd); + } + /* memory context cleanup will deal with the rest */ +} + +/* + * Log a single (old->new) mapping for 'xid'. 
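+ *
+ * Mappings are grouped per affected xid: the first mapping logged for
+ * an xid creates its file under pg_logical/mappings/, named via
+ * LOGICAL_REWRITE_FORMAT from the database oid, relation oid, the
+ * rewrite's start LSN, the mapped xid and the rewriting xid (see the
+ * snprintf() below).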
+ */ +static void +logical_rewrite_log_mapping(RewriteState state, TransactionId xid, + LogicalRewriteMappingData *map) +{ + RewriteMappingFile *src; + RewriteMappingDataEntry *pmap; + Oid relid; + bool found; + + relid = RelationGetRelid(state->rs_old_rel); + + /* look for existing mappings for this 'mapped' xid */ + src = hash_search(state->rs_logical_mappings, &xid, + HASH_ENTER, &found); + + /* + * We haven't yet had the need to map anything for this xid, create + * per-xid data structures. + */ + if (!found) + { + char path[MAXPGPATH]; + Oid dboid; + + if (state->rs_old_rel->rd_rel->relisshared) + dboid = InvalidOid; + else + dboid = MyDatabaseId; + + snprintf(path, MAXPGPATH, + "pg_logical/mappings/" LOGICAL_REWRITE_FORMAT, + dboid, relid, + LSN_FORMAT_ARGS(state->rs_begin_lsn), + xid, GetCurrentTransactionId()); + + dclist_init(&src->mappings); + src->off = 0; + memcpy(src->path, path, sizeof(path)); + src->vfd = PathNameOpenFile(path, + O_CREAT | O_EXCL | O_WRONLY | PG_BINARY); + if (src->vfd < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not create file \"%s\": %m", path))); + } + + pmap = MemoryContextAlloc(state->rs_cxt, + sizeof(RewriteMappingDataEntry)); + memcpy(&pmap->map, map, sizeof(LogicalRewriteMappingData)); + dclist_push_tail(&src->mappings, &pmap->node); + state->rs_num_rewrite_mappings++; + + /* + * Write out buffer every time we've too many in-memory entries across all + * mapping files. + */ + if (state->rs_num_rewrite_mappings >= 1000 /* arbitrary number */ ) + logical_tdeheap_rewrite_flush_mappings(state); +} + +/* + * Perform logical remapping for a tuple that's mapped from old_tid to + * new_tuple->t_self by rewrite_tdeheap_tuple() if necessary for the tuple. + */ +static void +logical_rewrite_tdeheap_tuple(RewriteState state, ItemPointerData old_tid, + HeapTuple new_tuple) +{ + ItemPointerData new_tid = new_tuple->t_self; + TransactionId cutoff = state->rs_logical_xmin; + TransactionId xmin; + TransactionId xmax; + bool do_log_xmin = false; + bool do_log_xmax = false; + LogicalRewriteMappingData map; + + /* no logical rewrite in progress, we don't need to log anything */ + if (!state->rs_logical_rewrite) + return; + + xmin = HeapTupleHeaderGetXmin(new_tuple->t_data); + /* use *GetUpdateXid to correctly deal with multixacts */ + xmax = HeapTupleHeaderGetUpdateXid(new_tuple->t_data); + + /* + * Log the mapping iff the tuple has been created recently. + */ + if (TransactionIdIsNormal(xmin) && !TransactionIdPrecedes(xmin, cutoff)) + do_log_xmin = true; + + if (!TransactionIdIsNormal(xmax)) + { + /* + * no xmax is set, can't have any permanent ones, so this check is + * sufficient + */ + } + else if (HEAP_XMAX_IS_LOCKED_ONLY(new_tuple->t_data->t_infomask)) + { + /* only locked, we don't care */ + } + else if (!TransactionIdPrecedes(xmax, cutoff)) + { + /* tuple has been deleted recently, log */ + do_log_xmax = true; + } + + /* if neither needs to be logged, we're done */ + if (!do_log_xmin && !do_log_xmax) + return; + + /* fill out mapping information */ + map.old_locator = state->rs_old_rel->rd_locator; + map.old_tid = old_tid; + map.new_locator = state->rs_new_rel->rd_locator; + map.new_tid = new_tid; + + /* --- + * Now persist the mapping for the individual xids that are affected. We + * need to log for both xmin and xmax if they aren't the same transaction + * since the mapping files are per "affected" xid. 
+ * We don't muster all that much effort detecting whether xmin and xmax
+ * are actually the same transaction; we just check whether the xid is the
+ * same disregarding subtransactions. Logging too much is relatively
+ * harmless and we could never do the check fully since subtransaction
+ * data is thrown away during restarts.
+ * ---
+ */
+ if (do_log_xmin)
+ logical_rewrite_log_mapping(state, xmin, &map);
+ /* separately log mapping for xmax unless it'd be redundant */
+ if (do_log_xmax && !TransactionIdEquals(xmin, xmax))
+ logical_rewrite_log_mapping(state, xmax, &map);
+}
+
+/*
+ * Replay XLOG_HEAP2_REWRITE records
+ */
+void
+tdeheap_xlog_logical_rewrite(XLogReaderState *r)
+{
+ char path[MAXPGPATH];
+ int fd;
+ xl_tdeheap_rewrite_mapping *xlrec;
+ uint32 len;
+ char *data;
+
+ xlrec = (xl_tdeheap_rewrite_mapping *) XLogRecGetData(r);
+
+ snprintf(path, MAXPGPATH,
+ "pg_logical/mappings/" LOGICAL_REWRITE_FORMAT,
+ xlrec->mapped_db, xlrec->mapped_rel,
+ LSN_FORMAT_ARGS(xlrec->start_lsn),
+ xlrec->mapped_xid, XLogRecGetXid(r));
+
+ fd = OpenTransientFile(path,
+ O_CREAT | O_WRONLY | PG_BINARY);
+ if (fd < 0)
+ ereport(ERROR,
+ (errcode_for_file_access(),
+ errmsg("could not create file \"%s\": %m", path)));
+
+ /*
+ * Truncate all data that's not guaranteed to have been safely fsynced (by
+ * previous record or by the last checkpoint).
+ */
+ pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_TRUNCATE);
+ if (ftruncate(fd, xlrec->offset) != 0)
+ ereport(ERROR,
+ (errcode_for_file_access(),
+ errmsg("could not truncate file \"%s\" to %u: %m",
+ path, (uint32) xlrec->offset)));
+ pgstat_report_wait_end();
+
+ data = XLogRecGetData(r) + sizeof(*xlrec);
+
+ len = xlrec->num_mappings * sizeof(LogicalRewriteMappingData);
+
+ /* write out tail end of mapping file (again) */
+ errno = 0;
+ pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_MAPPING_WRITE);
+ if (pg_pwrite(fd, data, len, xlrec->offset) != len)
+ {
+ /* if write didn't set errno, assume problem is no disk space */
+ if (errno == 0)
+ errno = ENOSPC;
+ ereport(ERROR,
+ (errcode_for_file_access(),
+ errmsg("could not write to file \"%s\": %m", path)));
+ }
+ pgstat_report_wait_end();
+
+ /*
+ * Now fsync all previously written data. We could improve things and only
+ * do this for the last write to a file, but the required bookkeeping
+ * doesn't seem worth the trouble.
+ */
+ pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_MAPPING_SYNC);
+ if (pg_fsync(fd) != 0)
+ ereport(data_sync_elevel(ERROR),
+ (errcode_for_file_access(),
+ errmsg("could not fsync file \"%s\": %m", path)));
+ pgstat_report_wait_end();
+
+ if (CloseTransientFile(fd) != 0)
+ ereport(ERROR,
+ (errcode_for_file_access(),
+ errmsg("could not close file \"%s\": %m", path)));
+}
+
+/* ---
+ * Perform a checkpoint for logical rewrite mappings
+ *
+ * This serves two tasks:
+ * 1) Remove all mappings not needed anymore based on the logical restart LSN
+ * 2) Flush all remaining mappings to disk, so that replay after a checkpoint
+ * only has to deal with the parts of a mapping that have been written out
+ * after the checkpoint started.
+ * ---
+ */
+void
+CheckPointLogicalRewriteHeap(void)
+{
+ XLogRecPtr cutoff;
+ XLogRecPtr redo;
+ DIR *mappings_dir;
+ struct dirent *mapping_de;
+ char path[MAXPGPATH + 20];
+
+ /*
+ * We start off with a minimum of the last redo pointer. No new decoding
+ * slot will start before that, so that's a safe upper bound for removal.
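+ *
+ * The cutoff computed below is effectively min(redo, slot restart
+ * LSN): e.g. with a redo pointer of 0/5000000 and a slot that must
+ * restart from 0/4000000, files at or above 0/4000000 are kept (and
+ * fsynced) while older ones are removed.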
+ */ + redo = GetRedoRecPtr(); + + /* now check for the restart ptrs from existing slots */ + cutoff = ReplicationSlotsComputeLogicalRestartLSN(); + + /* don't start earlier than the restart lsn */ + if (cutoff != InvalidXLogRecPtr && redo < cutoff) + cutoff = redo; + + mappings_dir = AllocateDir("pg_logical/mappings"); + while ((mapping_de = ReadDir(mappings_dir, "pg_logical/mappings")) != NULL) + { + Oid dboid; + Oid relid; + XLogRecPtr lsn; + TransactionId rewrite_xid; + TransactionId create_xid; + uint32 hi, + lo; + PGFileType de_type; + + if (strcmp(mapping_de->d_name, ".") == 0 || + strcmp(mapping_de->d_name, "..") == 0) + continue; + + snprintf(path, sizeof(path), "pg_logical/mappings/%s", mapping_de->d_name); + de_type = get_dirent_type(path, mapping_de, false, DEBUG1); + + if (de_type != PGFILETYPE_ERROR && de_type != PGFILETYPE_REG) + continue; + + /* Skip over files that cannot be ours. */ + if (strncmp(mapping_de->d_name, "map-", 4) != 0) + continue; + + if (sscanf(mapping_de->d_name, LOGICAL_REWRITE_FORMAT, + &dboid, &relid, &hi, &lo, &rewrite_xid, &create_xid) != 6) + elog(ERROR, "could not parse filename \"%s\"", mapping_de->d_name); + + lsn = ((uint64) hi) << 32 | lo; + + if (lsn < cutoff || cutoff == InvalidXLogRecPtr) + { + elog(DEBUG1, "removing logical rewrite file \"%s\"", path); + if (unlink(path) < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not remove file \"%s\": %m", path))); + } + else + { + /* on some operating systems fsyncing a file requires O_RDWR */ + int fd = OpenTransientFile(path, O_RDWR | PG_BINARY); + + /* + * The file cannot vanish due to concurrency since this function + * is the only one removing logical mappings and only one + * checkpoint can be in progress at a time. + */ + if (fd < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not open file \"%s\": %m", path))); + + /* + * We could try to avoid fsyncing files that either haven't + * changed or have only been created since the checkpoint's start, + * but it's currently not deemed worth the effort. + */ + pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_CHECKPOINT_SYNC); + if (pg_fsync(fd) != 0) + ereport(data_sync_elevel(ERROR), + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", path))); + pgstat_report_wait_end(); + + if (CloseTransientFile(fd) != 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not close file \"%s\": %m", path))); + } + } + FreeDir(mappings_dir); + + /* persist directory entries to disk */ + fsync_fname("pg_logical/mappings", true); +} diff --git a/contrib/pg_tde/src16/access/pg_tde_vacuumlazy.c b/contrib/pg_tde/src16/access/pg_tde_vacuumlazy.c new file mode 100644 index 00000000000..8a3f49efac4 --- /dev/null +++ b/contrib/pg_tde/src16/access/pg_tde_vacuumlazy.c @@ -0,0 +1,3476 @@ +/*------------------------------------------------------------------------- + * + * vacuumlazy.c + * Concurrent ("lazy") vacuuming. + * + * The major space usage for vacuuming is storage for the array of dead TIDs + * that are to be removed from indexes. We want to ensure we can vacuum even + * the very largest relations with finite memory space usage. To do that, we + * set upper bounds on the number of TIDs we can keep track of at once. + * + * We are willing to use at most maintenance_work_mem (or perhaps + * autovacuum_work_mem) memory space to keep track of dead TIDs. 
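To put a number on the memory budget just described: a back-of-envelope sketch assuming upstream's 6-byte ItemPointerData and the stock 64MB maintenance_work_mem default (the real dead_items_alloc() applies further table-size-dependent caps not shown here):

```c
/* Rough dead-TID budget; the GUC value and cap behavior are assumptions
 * stated in the text above, not taken from this patch. */
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t maintenance_work_mem = 64UL * 1024 * 1024; /* 64MB, in bytes */
    size_t tid_size = 6;                              /* sizeof(ItemPointerData) */

    /* => 11184810 TIDs: one index-vacuum cycle covers ~11M dead tuples */
    printf("max dead TIDs: %zu\n", maintenance_work_mem / tid_size);
    return 0;
}
```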
We initially
+ * allocate an array of TIDs of that size, with an upper limit that depends
+ * on table size (this limit ensures we don't allocate a huge area uselessly
+ * for vacuuming small tables). If the array threatens to overflow, we must
+ * call lazy_vacuum to vacuum indexes (and to vacuum the pages that we've
+ * pruned). This frees up the memory space dedicated to storing dead TIDs.
+ *
+ * In practice VACUUM will often complete its initial pass over the target
+ * pg_tde relation without ever running out of space to store TIDs. This
+ * means that there only needs to be one call to lazy_vacuum, after the
+ * initial pass completes.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *    src/backend/access/pg_tde/vacuumlazy.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "pg_tde_defines.h"
+
+#include "postgres.h"
+
+#include <math.h>
+
+#include "access/pg_tdeam.h"
+#include "access/pg_tdeam_xlog.h"
+#include "access/pg_tde_visibilitymap.h"
+#include "encryption/enc_tde.h"
+
+#include "access/amapi.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/multixact.h"
+#include "access/transam.h"
+#include "access/xact.h"
+#include "access/xlog.h"
+#include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "catalog/storage.h"
+#include "commands/dbcommands.h"
+#include "commands/progress.h"
+#include "commands/vacuum.h"
+#include "executor/instrument.h"
+#include "miscadmin.h"
+#include "optimizer/paths.h"
+#include "pgstat.h"
+#include "portability/instr_time.h"
+#include "postmaster/autovacuum.h"
+#include "storage/bufmgr.h"
+#include "storage/freespace.h"
+#include "storage/lmgr.h"
+#include "tcop/tcopprot.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+#include "utils/pg_rusage.h"
+#include "utils/timestamp.h"
+
+
+/*
+ * Space/time tradeoff parameters: do these need to be user-tunable?
+ *
+ * To consider truncating the relation, we want there to be at least
+ * REL_TRUNCATE_MINIMUM or (relsize / REL_TRUNCATE_FRACTION) (whichever
+ * is less) potentially-freeable pages.
+ */
+#define REL_TRUNCATE_MINIMUM    1000
+#define REL_TRUNCATE_FRACTION   16
+
+/*
+ * Timing parameters for truncate locking heuristics.
+ *
+ * These were not exposed as user-tunable GUC values because it didn't seem
+ * that the potential for improvement was great enough to merit the cost of
+ * supporting them.
+ */
+#define VACUUM_TRUNCATE_LOCK_CHECK_INTERVAL 20      /* ms */
+#define VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL  50      /* ms */
+#define VACUUM_TRUNCATE_LOCK_TIMEOUT        5000    /* ms */
+
+/*
+ * Threshold that controls whether we bypass index vacuuming and heap
+ * vacuuming as an optimization
+ */
+#define BYPASS_THRESHOLD_PAGES  0.02    /* i.e. 2% of rel_pages */
+
+/*
+ * Perform a failsafe check each time we scan another 4GB of pages.
+ * (Note that this is deliberately kept to a power-of-two, usually 2^19.)
+ */
+#define FAILSAFE_EVERY_PAGES \
+    ((BlockNumber) (((uint64) 4 * 1024 * 1024 * 1024) / BLCKSZ))
+
+/*
+ * When a table has no indexes, vacuum the FSM after every 8GB, approximately
+ * (it won't be exact because we only vacuum FSM after processing a heap page
+ * that has some removable tuples). When there are indexes, this is ignored,
+ * and we vacuum FSM after each index/heap cleaning pass.
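The page-interval constants defined here and just below are easy to sanity-check with plain arithmetic; a standalone sketch assuming the stock 8kB BLCKSZ:

```c
/* Sanity check of FAILSAFE_EVERY_PAGES and VACUUM_FSM_EVERY_PAGES,
 * assuming the default 8kB block size. */
#include <stdio.h>
#include <stdint.h>

#define BLCKSZ 8192

int main(void)
{
    uint64_t failsafe_every = (4ULL * 1024 * 1024 * 1024) / BLCKSZ; /* 4GB */
    uint64_t fsm_every      = (8ULL * 1024 * 1024 * 1024) / BLCKSZ; /* 8GB */

    /* 524288 pages, i.e. 2^19, matching the comment above the define */
    printf("FAILSAFE_EVERY_PAGES   = %llu\n", (unsigned long long) failsafe_every);
    /* 1048576 pages between FSM vacuums in the no-index case */
    printf("VACUUM_FSM_EVERY_PAGES = %llu\n", (unsigned long long) fsm_every);
    return 0;
}
```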
+ */ +#define VACUUM_FSM_EVERY_PAGES \ + ((BlockNumber) (((uint64) 8 * 1024 * 1024 * 1024) / BLCKSZ)) + +/* + * Before we consider skipping a page that's marked as clean in + * visibility map, we must've seen at least this many clean pages. + */ +#define SKIP_PAGES_THRESHOLD ((BlockNumber) 32) + +/* + * Size of the prefetch window for lazy vacuum backwards truncation scan. + * Needs to be a power of 2. + */ +#define PREFETCH_SIZE ((BlockNumber) 32) + +/* + * Macro to check if we are in a parallel vacuum. If true, we are in the + * parallel mode and the DSM segment is initialized. + */ +#define ParallelVacuumIsActive(vacrel) ((vacrel)->pvs != NULL) + +/* Phases of vacuum during which we report error context. */ +typedef enum +{ + VACUUM_ERRCB_PHASE_UNKNOWN, + VACUUM_ERRCB_PHASE_SCAN_HEAP, + VACUUM_ERRCB_PHASE_VACUUM_INDEX, + VACUUM_ERRCB_PHASE_VACUUM_HEAP, + VACUUM_ERRCB_PHASE_INDEX_CLEANUP, + VACUUM_ERRCB_PHASE_TRUNCATE +} VacErrPhase; + +typedef struct LVRelState +{ + /* Target heap relation and its indexes */ + Relation rel; + Relation *indrels; + int nindexes; + + /* Buffer access strategy and parallel vacuum state */ + BufferAccessStrategy bstrategy; + ParallelVacuumState *pvs; + + /* Aggressive VACUUM? (must set relfrozenxid >= FreezeLimit) */ + bool aggressive; + /* Use visibility map to skip? (disabled by DISABLE_PAGE_SKIPPING) */ + bool skipwithvm; + /* Consider index vacuuming bypass optimization? */ + bool consider_bypass_optimization; + + /* Doing index vacuuming, index cleanup, rel truncation? */ + bool do_index_vacuuming; + bool do_index_cleanup; + bool do_rel_truncate; + + /* VACUUM operation's cutoffs for freezing and pruning */ + struct VacuumCutoffs cutoffs; + GlobalVisState *vistest; + /* Tracks oldest extant XID/MXID for setting relfrozenxid/relminmxid */ + TransactionId NewRelfrozenXid; + MultiXactId NewRelminMxid; + bool skippedallvis; + + /* Error reporting state */ + char *dbname; + char *relnamespace; + char *relname; + char *indname; /* Current index name */ + BlockNumber blkno; /* used only for heap operations */ + OffsetNumber offnum; /* used only for heap operations */ + VacErrPhase phase; + bool verbose; /* VACUUM VERBOSE? */ + + /* + * dead_items stores TIDs whose index tuples are deleted by index + * vacuuming. Each TID points to an LP_DEAD line pointer from a heap page + * that has been processed by lazy_scan_prune. Also needed by + * lazy_vacuum_tdeheap_rel, which marks the same LP_DEAD line pointers as + * LP_UNUSED during second heap pass. 
+ */ + VacDeadItems *dead_items; /* TIDs whose index tuples we'll delete */ + BlockNumber rel_pages; /* total number of pages */ + BlockNumber scanned_pages; /* # pages examined (not skipped via VM) */ + BlockNumber removed_pages; /* # pages removed by relation truncation */ + BlockNumber frozen_pages; /* # pages with newly frozen tuples */ + BlockNumber lpdead_item_pages; /* # pages with LP_DEAD items */ + BlockNumber missed_dead_pages; /* # pages with missed dead tuples */ + BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */ + + /* Statistics output by us, for table */ + double new_rel_tuples; /* new estimated total # of tuples */ + double new_live_tuples; /* new estimated total # of live tuples */ + /* Statistics output by index AMs */ + IndexBulkDeleteResult **indstats; + + /* Instrumentation counters */ + int num_index_scans; + /* Counters that follow are only for scanned_pages */ + int64 tuples_deleted; /* # deleted from table */ + int64 tuples_frozen; /* # newly frozen */ + int64 lpdead_items; /* # deleted from indexes */ + int64 live_tuples; /* # live tuples remaining */ + int64 recently_dead_tuples; /* # dead, but not yet removable */ + int64 missed_dead_tuples; /* # removable, but not removed */ +} LVRelState; + +/* + * State returned by lazy_scan_prune() + */ +typedef struct LVPagePruneState +{ + bool hastup; /* Page prevents rel truncation? */ + bool has_lpdead_items; /* includes existing LP_DEAD items */ + + /* + * State describes the proper VM bit states to set for the page following + * pruning and freezing. all_visible implies !has_lpdead_items, but don't + * trust all_frozen result unless all_visible is also set to true. + */ + bool all_visible; /* Every item visible to all? */ + bool all_frozen; /* provided all_visible is also true */ + TransactionId visibility_cutoff_xid; /* For recovery conflicts */ +} LVPagePruneState; + +/* Struct for saving and restoring vacuum error information. 
*/ +typedef struct LVSavedErrInfo +{ + BlockNumber blkno; + OffsetNumber offnum; + VacErrPhase phase; +} LVSavedErrInfo; + + +/* non-export function prototypes */ +static void lazy_scan_heap(LVRelState *vacrel); +static BlockNumber lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, + BlockNumber next_block, + bool *next_unskippable_allvis, + bool *skipping_current_range); +static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, + BlockNumber blkno, Page page, + bool sharelock, Buffer vmbuffer); +static void lazy_scan_prune(LVRelState *vacrel, Buffer buf, + BlockNumber blkno, Page page, + LVPagePruneState *prunestate); +static bool lazy_scan_noprune(LVRelState *vacrel, Buffer buf, + BlockNumber blkno, Page page, + bool *hastup, bool *recordfreespace); +static void lazy_vacuum(LVRelState *vacrel); +static bool lazy_vacuum_all_indexes(LVRelState *vacrel); +static void lazy_vacuum_tdeheap_rel(LVRelState *vacrel); +static int lazy_vacuum_tdeheap_page(LVRelState *vacrel, BlockNumber blkno, + Buffer buffer, int index, Buffer vmbuffer); +static bool lazy_check_wraparound_failsafe(LVRelState *vacrel); +static void lazy_cleanup_all_indexes(LVRelState *vacrel); +static IndexBulkDeleteResult *lazy_vacuum_one_index(Relation indrel, + IndexBulkDeleteResult *istat, + double reltuples, + LVRelState *vacrel); +static IndexBulkDeleteResult *lazy_cleanup_one_index(Relation indrel, + IndexBulkDeleteResult *istat, + double reltuples, + bool estimated_count, + LVRelState *vacrel); +static bool should_attempt_truncation(LVRelState *vacrel); +static void lazy_truncate_heap(LVRelState *vacrel); +static BlockNumber count_nondeletable_pages(LVRelState *vacrel, + bool *lock_waiter_detected); +static void dead_items_alloc(LVRelState *vacrel, int nworkers); +static void dead_items_cleanup(LVRelState *vacrel); +static bool tdeheap_page_is_all_visible(LVRelState *vacrel, Buffer buf, + TransactionId *visibility_cutoff_xid, bool *all_frozen); +static void update_relstats_all_indexes(LVRelState *vacrel); +static void vacuum_error_callback(void *arg); +static void update_vacuum_error_info(LVRelState *vacrel, + LVSavedErrInfo *saved_vacrel, + int phase, BlockNumber blkno, + OffsetNumber offnum); +static void restore_vacuum_error_info(LVRelState *vacrel, + const LVSavedErrInfo *saved_vacrel); + + +/* + * tdeheap_vacuum_rel() -- perform VACUUM for one heap relation + * + * This routine sets things up for and then calls lazy_scan_heap, where + * almost all work actually takes place. Finalizes everything after call + * returns by managing relation truncation and updating rel's pg_class + * entry. (Also updates pg_class entries for any indexes that need it.) + * + * At entry, we have already established a transaction and opened + * and locked the relation. 
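For context on how this entry point is reached: upstream's table AM interface exposes a relation_vacuum callback with exactly this signature, so the extension presumably wires it up along these lines (a sketch only; the tdeheapam_methods name is hypothetical, and all other callbacks are elided):

```c
#include "postgres.h"
#include "access/tableam.h"

extern void tdeheap_vacuum_rel(Relation rel, VacuumParams *params,
                               BufferAccessStrategy bstrategy);

/* hypothetical routine table; the real one lives elsewhere in pg_tde */
static const TableAmRoutine tdeheapam_methods = {
    .type = T_TableAmRoutine,
    /* ... scan, tuple, and DDL callbacks elided ... */
    .relation_vacuum = tdeheap_vacuum_rel,
};
```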
+ */
+void
+tdeheap_vacuum_rel(Relation rel, VacuumParams *params,
+                   BufferAccessStrategy bstrategy)
+{
+    LVRelState *vacrel;
+    bool        verbose,
+                instrument,
+                skipwithvm,
+                frozenxid_updated,
+                minmulti_updated;
+    BlockNumber orig_rel_pages,
+                new_rel_pages,
+                new_rel_allvisible;
+    PGRUsage    ru0;
+    TimestampTz starttime = 0;
+    PgStat_Counter startreadtime = 0,
+                startwritetime = 0;
+    WalUsage    startwalusage = pgWalUsage;
+    BufferUsage startbufferusage = pgBufferUsage;
+    ErrorContextCallback errcallback;
+    char      **indnames = NULL;
+
+    verbose = (params->options & VACOPT_VERBOSE) != 0;
+    instrument = (verbose || (IsAutoVacuumWorkerProcess() &&
+                              params->log_min_duration >= 0));
+    if (instrument)
+    {
+        pg_rusage_init(&ru0);
+        starttime = GetCurrentTimestamp();
+        if (track_io_timing)
+        {
+            startreadtime = pgStatBlockReadTime;
+            startwritetime = pgStatBlockWriteTime;
+        }
+    }
+
+    pgstat_progress_start_command(PROGRESS_COMMAND_VACUUM,
+                                  RelationGetRelid(rel));
+
+    /*
+     * Set up error traceback support for ereport() first. The idea is to
+     * set up an error context callback to display additional information on
+     * any error during a vacuum. During different phases of vacuum, we
+     * update the state so that the error context callback always displays
+     * current information.
+     *
+     * Copy the names of the heap rel into local memory for error reporting
+     * purposes, too. It isn't always safe to assume that we can get the
+     * name of each rel. It's convenient for code in lazy_scan_heap to
+     * always use these temp copies.
+     */
+    vacrel = (LVRelState *) palloc0(sizeof(LVRelState));
+    vacrel->dbname = get_database_name(MyDatabaseId);
+    vacrel->relnamespace = get_namespace_name(RelationGetNamespace(rel));
+    vacrel->relname = pstrdup(RelationGetRelationName(rel));
+    vacrel->indname = NULL;
+    vacrel->phase = VACUUM_ERRCB_PHASE_UNKNOWN;
+    vacrel->verbose = verbose;
+    errcallback.callback = vacuum_error_callback;
+    errcallback.arg = vacrel;
+    errcallback.previous = error_context_stack;
+    error_context_stack = &errcallback;
+
+    /* Set up high level stuff about rel and its indexes */
+    vacrel->rel = rel;
+    vac_open_indexes(vacrel->rel, RowExclusiveLock, &vacrel->nindexes,
+                     &vacrel->indrels);
+    vacrel->bstrategy = bstrategy;
+    if (instrument && vacrel->nindexes > 0)
+    {
+        /* Copy index names used by instrumentation (not error reporting) */
+        indnames = palloc(sizeof(char *) * vacrel->nindexes);
+        for (int i = 0; i < vacrel->nindexes; i++)
+            indnames[i] = pstrdup(RelationGetRelationName(vacrel->indrels[i]));
+    }
+
+    /*
+     * The index_cleanup param either disables index vacuuming and cleanup
+     * or forces it to go ahead when we would otherwise apply the index
+     * bypass optimization. The default is 'auto', which leaves the final
+     * decision up to lazy_vacuum().
+     *
+     * The truncate param allows the user to avoid attempting relation
+     * truncation, though it can't force truncation to happen.
+     */
+    Assert(params->index_cleanup != VACOPTVALUE_UNSPECIFIED);
+    Assert(params->truncate != VACOPTVALUE_UNSPECIFIED &&
+           params->truncate != VACOPTVALUE_AUTO);
+
+    /*
+     * While VacuumFailsafeActive is reset to false before calling this, we
+     * still need to reset it here due to recursive calls.
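The three-way index_cleanup contract described above (and implemented in the if/else chain just below) can be restated as a standalone sketch; the VACOPTVALUE_* values are mocked here as a local enum:

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { OPT_AUTO, OPT_ENABLED, OPT_DISABLED } CleanupOpt;

int main(void)
{
    CleanupOpt index_cleanup = OPT_AUTO;  /* sample value */
    bool do_index_vacuuming = true;
    bool do_index_cleanup = true;
    bool consider_bypass_optimization = true;

    if (index_cleanup == OPT_DISABLED)
    {
        /* OFF: skip ambulkdelete and amvacuumcleanup outright */
        do_index_vacuuming = do_index_cleanup = false;
    }
    else if (index_cleanup == OPT_ENABLED)
    {
        /* ON: never apply the bypass, though the failsafe still can */
        consider_bypass_optimization = false;
    }
    /* AUTO: leave all flags as-is; lazy_vacuum() decides at the end */

    printf("vacuum=%d cleanup=%d bypass-allowed=%d\n",
           do_index_vacuuming, do_index_cleanup,
           consider_bypass_optimization);
    return 0;
}
```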
+ */ + VacuumFailsafeActive = false; + vacrel->consider_bypass_optimization = true; + vacrel->do_index_vacuuming = true; + vacrel->do_index_cleanup = true; + vacrel->do_rel_truncate = (params->truncate != VACOPTVALUE_DISABLED); + if (params->index_cleanup == VACOPTVALUE_DISABLED) + { + /* Force disable index vacuuming up-front */ + vacrel->do_index_vacuuming = false; + vacrel->do_index_cleanup = false; + } + else if (params->index_cleanup == VACOPTVALUE_ENABLED) + { + /* Force index vacuuming. Note that failsafe can still bypass. */ + vacrel->consider_bypass_optimization = false; + } + else + { + /* Default/auto, make all decisions dynamically */ + Assert(params->index_cleanup == VACOPTVALUE_AUTO); + } + + /* Initialize page counters explicitly (be tidy) */ + vacrel->scanned_pages = 0; + vacrel->removed_pages = 0; + vacrel->frozen_pages = 0; + vacrel->lpdead_item_pages = 0; + vacrel->missed_dead_pages = 0; + vacrel->nonempty_pages = 0; + /* dead_items_alloc allocates vacrel->dead_items later on */ + + /* Allocate/initialize output statistics state */ + vacrel->new_rel_tuples = 0; + vacrel->new_live_tuples = 0; + vacrel->indstats = (IndexBulkDeleteResult **) + palloc0(vacrel->nindexes * sizeof(IndexBulkDeleteResult *)); + + /* Initialize remaining counters (be tidy) */ + vacrel->num_index_scans = 0; + vacrel->tuples_deleted = 0; + vacrel->tuples_frozen = 0; + vacrel->lpdead_items = 0; + vacrel->live_tuples = 0; + vacrel->recently_dead_tuples = 0; + vacrel->missed_dead_tuples = 0; + + /* + * Get cutoffs that determine which deleted tuples are considered DEAD, + * not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze. Then determine + * the extent of the blocks that we'll scan in lazy_scan_heap. It has to + * happen in this order to ensure that the OldestXmin cutoff field works + * as an upper bound on the XIDs stored in the pages we'll actually scan + * (NewRelfrozenXid tracking must never be allowed to miss unfrozen XIDs). + * + * Next acquire vistest, a related cutoff that's used in tdeheap_page_prune. + * We expect vistest will always make tdeheap_page_prune remove any deleted + * tuple whose xmax is < OldestXmin. lazy_scan_prune must never become + * confused about whether a tuple should be frozen or removed. (In the + * future we might want to teach lazy_scan_prune to recompute vistest from + * time to time, to increase the number of dead tuples it can prune away.) + */ + vacrel->aggressive = vacuum_get_cutoffs(rel, params, &vacrel->cutoffs); + vacrel->rel_pages = orig_rel_pages = RelationGetNumberOfBlocks(rel); + vacrel->vistest = GlobalVisTestFor(rel); + /* Initialize state used to track oldest extant XID/MXID */ + vacrel->NewRelfrozenXid = vacrel->cutoffs.OldestXmin; + vacrel->NewRelminMxid = vacrel->cutoffs.OldestMxact; + vacrel->skippedallvis = false; + skipwithvm = true; + if (params->options & VACOPT_DISABLE_PAGE_SKIPPING) + { + /* + * Force aggressive mode, and disable skipping blocks using the + * visibility map (even those set all-frozen) + */ + vacrel->aggressive = true; + skipwithvm = false; + } + + vacrel->skipwithvm = skipwithvm; + + if (verbose) + { + if (vacrel->aggressive) + ereport(INFO, + (errmsg("aggressively vacuuming \"%s.%s.%s\"", + vacrel->dbname, vacrel->relnamespace, + vacrel->relname))); + else + ereport(INFO, + (errmsg("vacuuming \"%s.%s.%s\"", + vacrel->dbname, vacrel->relnamespace, + vacrel->relname))); + } + + /* + * Allocate dead_items array memory using dead_items_alloc. 
This handles + * parallel VACUUM initialization as part of allocating shared memory + * space used for dead_items. (But do a failsafe precheck first, to + * ensure that parallel VACUUM won't be attempted at all when relfrozenxid + * is already dangerously old.) + */ + lazy_check_wraparound_failsafe(vacrel); + dead_items_alloc(vacrel, params->nworkers); + + /* + * Call lazy_scan_heap to perform all required heap pruning, index + * vacuuming, and heap vacuuming (plus related processing) + */ + lazy_scan_heap(vacrel); + + /* + * Free resources managed by dead_items_alloc. This ends parallel mode in + * passing when necessary. + */ + dead_items_cleanup(vacrel); + Assert(!IsInParallelMode()); + + /* + * Update pg_class entries for each of rel's indexes where appropriate. + * + * Unlike the later update to rel's pg_class entry, this is not critical. + * Maintains relpages/reltuples statistics used by the planner only. + */ + if (vacrel->do_index_cleanup) + update_relstats_all_indexes(vacrel); + + /* Done with rel's indexes */ + vac_close_indexes(vacrel->nindexes, vacrel->indrels, NoLock); + + /* Optionally truncate rel */ + if (should_attempt_truncation(vacrel)) + lazy_truncate_heap(vacrel); + + /* Pop the error context stack */ + error_context_stack = errcallback.previous; + + /* Report that we are now doing final cleanup */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_FINAL_CLEANUP); + + /* + * Prepare to update rel's pg_class entry. + * + * Aggressive VACUUMs must always be able to advance relfrozenxid to a + * value >= FreezeLimit, and relminmxid to a value >= MultiXactCutoff. + * Non-aggressive VACUUMs may advance them by any amount, or not at all. + */ + Assert(vacrel->NewRelfrozenXid == vacrel->cutoffs.OldestXmin || + TransactionIdPrecedesOrEquals(vacrel->aggressive ? vacrel->cutoffs.FreezeLimit : + vacrel->cutoffs.relfrozenxid, + vacrel->NewRelfrozenXid)); + Assert(vacrel->NewRelminMxid == vacrel->cutoffs.OldestMxact || + MultiXactIdPrecedesOrEquals(vacrel->aggressive ? vacrel->cutoffs.MultiXactCutoff : + vacrel->cutoffs.relminmxid, + vacrel->NewRelminMxid)); + if (vacrel->skippedallvis) + { + /* + * Must keep original relfrozenxid in a non-aggressive VACUUM that + * chose to skip an all-visible page range. The state that tracks new + * values will have missed unfrozen XIDs from the pages we skipped. + */ + Assert(!vacrel->aggressive); + vacrel->NewRelfrozenXid = InvalidTransactionId; + vacrel->NewRelminMxid = InvalidMultiXactId; + } + + /* + * For safety, clamp relallvisible to be not more than what we're setting + * pg_class.relpages to + */ + new_rel_pages = vacrel->rel_pages; /* After possible rel truncation */ + tdeheap_visibilitymap_count(rel, &new_rel_allvisible, NULL); + if (new_rel_allvisible > new_rel_pages) + new_rel_allvisible = new_rel_pages; + + /* + * Now actually update rel's pg_class entry. + * + * In principle new_live_tuples could be -1 indicating that we (still) + * don't know the tuple count. In practice that can't happen, since we + * scan every page that isn't skipped using the visibility map. + */ + vac_update_relstats(rel, new_rel_pages, vacrel->new_live_tuples, + new_rel_allvisible, vacrel->nindexes > 0, + vacrel->NewRelfrozenXid, vacrel->NewRelminMxid, + &frozenxid_updated, &minmulti_updated, false); + + /* + * Report results to the cumulative stats system, too. + * + * Deliberately avoid telling the stats system about LP_DEAD items that + * remain in the table due to VACUUM bypassing index and heap vacuuming. 
+ * ANALYZE will consider the remaining LP_DEAD items to be dead "tuples". + * It seems like a good idea to err on the side of not vacuuming again too + * soon in cases where the failsafe prevented significant amounts of heap + * vacuuming. + */ + pgstat_report_vacuum(RelationGetRelid(rel), + rel->rd_rel->relisshared, + Max(vacrel->new_live_tuples, 0), + vacrel->recently_dead_tuples + + vacrel->missed_dead_tuples); + pgstat_progress_end_command(); + + if (instrument) + { + TimestampTz endtime = GetCurrentTimestamp(); + + if (verbose || params->log_min_duration == 0 || + TimestampDifferenceExceeds(starttime, endtime, + params->log_min_duration)) + { + long secs_dur; + int usecs_dur; + WalUsage walusage; + BufferUsage bufferusage; + StringInfoData buf; + char *msgfmt; + int32 diff; + double read_rate = 0, + write_rate = 0; + + TimestampDifference(starttime, endtime, &secs_dur, &usecs_dur); + memset(&walusage, 0, sizeof(WalUsage)); + WalUsageAccumDiff(&walusage, &pgWalUsage, &startwalusage); + memset(&bufferusage, 0, sizeof(BufferUsage)); + BufferUsageAccumDiff(&bufferusage, &pgBufferUsage, &startbufferusage); + + initStringInfo(&buf); + if (verbose) + { + /* + * Aggressiveness already reported earlier, in dedicated + * VACUUM VERBOSE ereport + */ + Assert(!params->is_wraparound); + msgfmt = _("finished vacuuming \"%s.%s.%s\": index scans: %d\n"); + } + else if (params->is_wraparound) + { + /* + * While it's possible for a VACUUM to be both is_wraparound + * and !aggressive, that's just a corner-case -- is_wraparound + * implies aggressive. Produce distinct output for the corner + * case all the same, just in case. + */ + if (vacrel->aggressive) + msgfmt = _("automatic aggressive vacuum to prevent wraparound of table \"%s.%s.%s\": index scans: %d\n"); + else + msgfmt = _("automatic vacuum to prevent wraparound of table \"%s.%s.%s\": index scans: %d\n"); + } + else + { + if (vacrel->aggressive) + msgfmt = _("automatic aggressive vacuum of table \"%s.%s.%s\": index scans: %d\n"); + else + msgfmt = _("automatic vacuum of table \"%s.%s.%s\": index scans: %d\n"); + } + appendStringInfo(&buf, msgfmt, + vacrel->dbname, + vacrel->relnamespace, + vacrel->relname, + vacrel->num_index_scans); + appendStringInfo(&buf, _("pages: %u removed, %u remain, %u scanned (%.2f%% of total)\n"), + vacrel->removed_pages, + new_rel_pages, + vacrel->scanned_pages, + orig_rel_pages == 0 ? 
100.0 : + 100.0 * vacrel->scanned_pages / orig_rel_pages); + appendStringInfo(&buf, + _("tuples: %lld removed, %lld remain, %lld are dead but not yet removable\n"), + (long long) vacrel->tuples_deleted, + (long long) vacrel->new_rel_tuples, + (long long) vacrel->recently_dead_tuples); + if (vacrel->missed_dead_tuples > 0) + appendStringInfo(&buf, + _("tuples missed: %lld dead from %u pages not removed due to cleanup lock contention\n"), + (long long) vacrel->missed_dead_tuples, + vacrel->missed_dead_pages); + diff = (int32) (ReadNextTransactionId() - + vacrel->cutoffs.OldestXmin); + appendStringInfo(&buf, + _("removable cutoff: %u, which was %d XIDs old when operation ended\n"), + vacrel->cutoffs.OldestXmin, diff); + if (frozenxid_updated) + { + diff = (int32) (vacrel->NewRelfrozenXid - + vacrel->cutoffs.relfrozenxid); + appendStringInfo(&buf, + _("new relfrozenxid: %u, which is %d XIDs ahead of previous value\n"), + vacrel->NewRelfrozenXid, diff); + } + if (minmulti_updated) + { + diff = (int32) (vacrel->NewRelminMxid - + vacrel->cutoffs.relminmxid); + appendStringInfo(&buf, + _("new relminmxid: %u, which is %d MXIDs ahead of previous value\n"), + vacrel->NewRelminMxid, diff); + } + appendStringInfo(&buf, _("frozen: %u pages from table (%.2f%% of total) had %lld tuples frozen\n"), + vacrel->frozen_pages, + orig_rel_pages == 0 ? 100.0 : + 100.0 * vacrel->frozen_pages / orig_rel_pages, + (long long) vacrel->tuples_frozen); + if (vacrel->do_index_vacuuming) + { + if (vacrel->nindexes == 0 || vacrel->num_index_scans == 0) + appendStringInfoString(&buf, _("index scan not needed: ")); + else + appendStringInfoString(&buf, _("index scan needed: ")); + + msgfmt = _("%u pages from table (%.2f%% of total) had %lld dead item identifiers removed\n"); + } + else + { + if (!VacuumFailsafeActive) + appendStringInfoString(&buf, _("index scan bypassed: ")); + else + appendStringInfoString(&buf, _("index scan bypassed by failsafe: ")); + + msgfmt = _("%u pages from table (%.2f%% of total) have %lld dead item identifiers\n"); + } + appendStringInfo(&buf, msgfmt, + vacrel->lpdead_item_pages, + orig_rel_pages == 0 ? 
100.0 : + 100.0 * vacrel->lpdead_item_pages / orig_rel_pages, + (long long) vacrel->lpdead_items); + for (int i = 0; i < vacrel->nindexes; i++) + { + IndexBulkDeleteResult *istat = vacrel->indstats[i]; + + if (!istat) + continue; + + appendStringInfo(&buf, + _("index \"%s\": pages: %u in total, %u newly deleted, %u currently deleted, %u reusable\n"), + indnames[i], + istat->num_pages, + istat->pages_newly_deleted, + istat->pages_deleted, + istat->pages_free); + } + if (track_io_timing) + { + double read_ms = (double) (pgStatBlockReadTime - startreadtime) / 1000; + double write_ms = (double) (pgStatBlockWriteTime - startwritetime) / 1000; + + appendStringInfo(&buf, _("I/O timings: read: %.3f ms, write: %.3f ms\n"), + read_ms, write_ms); + } + if (secs_dur > 0 || usecs_dur > 0) + { + read_rate = (double) BLCKSZ * (bufferusage.shared_blks_read + bufferusage.local_blks_read) / + (1024 * 1024) / (secs_dur + usecs_dur / 1000000.0); + write_rate = (double) BLCKSZ * (bufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtied) / + (1024 * 1024) / (secs_dur + usecs_dur / 1000000.0); + } + appendStringInfo(&buf, _("avg read rate: %.3f MB/s, avg write rate: %.3f MB/s\n"), + read_rate, write_rate); + appendStringInfo(&buf, + _("buffer usage: %lld hits, %lld misses, %lld dirtied\n"), + (long long) (bufferusage.shared_blks_hit + bufferusage.local_blks_hit), + (long long) (bufferusage.shared_blks_read + bufferusage.local_blks_read), + (long long) (bufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtied)); + appendStringInfo(&buf, + _("WAL usage: %lld records, %lld full page images, %llu bytes\n"), + (long long) walusage.wal_records, + (long long) walusage.wal_fpi, + (unsigned long long) walusage.wal_bytes); + appendStringInfo(&buf, _("system usage: %s"), pg_rusage_show(&ru0)); + + ereport(verbose ? INFO : LOG, + (errmsg_internal("%s", buf.data))); + pfree(buf.data); + } + } + + /* Cleanup index statistics and index names */ + for (int i = 0; i < vacrel->nindexes; i++) + { + if (vacrel->indstats[i]) + pfree(vacrel->indstats[i]); + + if (instrument) + pfree(indnames[i]); + } +} + +/* + * lazy_scan_heap() -- workhorse function for VACUUM + * + * This routine prunes each page in the heap, and considers the need to + * freeze remaining tuples with storage (not including pages that can be + * skipped using the visibility map). Also performs related maintenance + * of the FSM and visibility map. These steps all take place during an + * initial pass over the target heap relation. + * + * Also invokes lazy_vacuum_all_indexes to vacuum indexes, which largely + * consists of deleting index tuples that point to LP_DEAD items left in + * heap pages following pruning. Earlier initial pass over the heap will + * have collected the TIDs whose index tuples need to be removed. + * + * Finally, invokes lazy_vacuum_tdeheap_rel to vacuum heap pages, which + * largely consists of marking LP_DEAD items (from collected TID array) + * as LP_UNUSED. This has to happen in a second, final pass over the + * heap, to preserve a basic invariant that all index AMs rely on: no + * extant index tuple can ever be allowed to contain a TID that points to + * an LP_UNUSED line pointer in the heap. We must disallow premature + * recycling of line pointers to avoid index scans that get confused + * about which TID points to which tuple immediately after recycling. 
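The invariant described above is the heart of the two-pass design: index entries referencing a dead TID must be deleted before the heap line pointer is recycled. A sketch of that ordering with descriptive stand-in names (not this patch's API):

```c
#include <stddef.h>

/* stand-ins for the two passes */
extern void index_bulk_delete(void *dead_tids, size_t ndead); /* pass 1 */
extern void heap_mark_unused(void *dead_tids, size_t ndead);  /* pass 2 */

void
two_pass_sketch(void *dead_tids, size_t ndead)
{
    /*
     * Pass 1: each index's ambulkdelete-equivalent removes every index
     * tuple that references one of the collected LP_DEAD TIDs.
     */
    index_bulk_delete(dead_tids, ndead);

    /*
     * Pass 2: only now may LP_DEAD line pointers become LP_UNUSED -- no
     * extant index tuple can reach a recycled slot anymore.
     */
    heap_mark_unused(dead_tids, ndead);
}
```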
+ * (Actually, this isn't a concern when target heap relation happens to + * have no indexes, which allows us to safely apply the one-pass strategy + * as an optimization). + * + * In practice we often have enough space to fit all TIDs, and so won't + * need to call lazy_vacuum more than once, after our initial pass over + * the heap has totally finished. Otherwise things are slightly more + * complicated: our "initial pass" over the heap applies only to those + * pages that were pruned before we needed to call lazy_vacuum, and our + * "final pass" over the heap only vacuums these same heap pages. + * However, we process indexes in full every time lazy_vacuum is called, + * which makes index processing very inefficient when memory is in short + * supply. + */ +static void +lazy_scan_heap(LVRelState *vacrel) +{ + BlockNumber rel_pages = vacrel->rel_pages, + blkno, + next_unskippable_block, + next_fsm_block_to_vacuum = 0; + VacDeadItems *dead_items = vacrel->dead_items; + Buffer vmbuffer = InvalidBuffer; + bool next_unskippable_allvis, + skipping_current_range; + const int initprog_index[] = { + PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_TOTAL_HEAP_BLKS, + PROGRESS_VACUUM_MAX_DEAD_TUPLES + }; + int64 initprog_val[3]; + + /* Report that we're scanning the heap, advertising total # of blocks */ + initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP; + initprog_val[1] = rel_pages; + initprog_val[2] = dead_items->max_items; + pgstat_progress_update_multi_param(3, initprog_index, initprog_val); + + /* Set up an initial range of skippable blocks using the visibility map */ + next_unskippable_block = lazy_scan_skip(vacrel, &vmbuffer, 0, + &next_unskippable_allvis, + &skipping_current_range); + for (blkno = 0; blkno < rel_pages; blkno++) + { + Buffer buf; + Page page; + bool all_visible_according_to_vm; + LVPagePruneState prunestate; + + if (blkno == next_unskippable_block) + { + /* + * Can't skip this page safely. Must scan the page. But + * determine the next skippable range after the page first. + */ + all_visible_according_to_vm = next_unskippable_allvis; + next_unskippable_block = lazy_scan_skip(vacrel, &vmbuffer, + blkno + 1, + &next_unskippable_allvis, + &skipping_current_range); + + Assert(next_unskippable_block >= blkno + 1); + } + else + { + /* Last page always scanned (may need to set nonempty_pages) */ + Assert(blkno < rel_pages - 1); + + if (skipping_current_range) + continue; + + /* Current range is too small to skip -- just scan the page */ + all_visible_according_to_vm = true; + } + + vacrel->scanned_pages++; + + /* Report as block scanned, update error traceback information */ + pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno); + update_vacuum_error_info(vacrel, NULL, VACUUM_ERRCB_PHASE_SCAN_HEAP, + blkno, InvalidOffsetNumber); + + vacuum_delay_point(); + + /* + * Regularly check if wraparound failsafe should trigger. + * + * There is a similar check inside lazy_vacuum_all_indexes(), but + * relfrozenxid might start to look dangerously old before we reach + * that point. This check also provides failsafe coverage for the + * one-pass strategy, and the two-pass strategy with the index_cleanup + * param set to 'off'. + */ + if (vacrel->scanned_pages % FAILSAFE_EVERY_PAGES == 0) + lazy_check_wraparound_failsafe(vacrel); + + /* + * Consider if we definitely have enough space to process TIDs on page + * already. If we are close to overrunning the available space for + * dead_items TIDs, pause and do a cycle of vacuuming before we tackle + * this page. 
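The "pause and vacuum" trigger described above reduces to a headroom test against the worst case of an entire page of dead items. A standalone sketch, assuming upstream's MaxHeapTuplesPerPage value of 291 for 8kB pages:

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_HEAP_TUPLES_PER_PAGE 291    /* MaxHeapTuplesPerPage at 8kB BLCKSZ */

bool
must_vacuum_before_next_page(int max_items, int num_items)
{
    /* worst case, the next page adds a full page's worth of dead TIDs */
    return (max_items - num_items) < MAX_HEAP_TUPLES_PER_PAGE;
}

int main(void)
{
    printf("%d\n", must_vacuum_before_next_page(11184810, 11184600)); /* 1 */
    printf("%d\n", must_vacuum_before_next_page(11184810, 1000));     /* 0 */
    return 0;
}
```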
+ */ + Assert(dead_items->max_items >= MaxHeapTuplesPerPage); + if (dead_items->max_items - dead_items->num_items < MaxHeapTuplesPerPage) + { + /* + * Before beginning index vacuuming, we release any pin we may + * hold on the visibility map page. This isn't necessary for + * correctness, but we do it anyway to avoid holding the pin + * across a lengthy, unrelated operation. + */ + if (BufferIsValid(vmbuffer)) + { + ReleaseBuffer(vmbuffer); + vmbuffer = InvalidBuffer; + } + + /* Perform a round of index and heap vacuuming */ + vacrel->consider_bypass_optimization = false; + lazy_vacuum(vacrel); + + /* + * Vacuum the Free Space Map to make newly-freed space visible on + * upper-level FSM pages. Note we have not yet processed blkno. + */ + FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, + blkno); + next_fsm_block_to_vacuum = blkno; + + /* Report that we are once again scanning the heap */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_SCAN_HEAP); + } + + /* + * Pin the visibility map page in case we need to mark the page + * all-visible. In most cases this will be very cheap, because we'll + * already have the correct page pinned anyway. + */ + tdeheap_visibilitymap_pin(vacrel->rel, blkno, &vmbuffer); + + /* + * We need a buffer cleanup lock to prune HOT chains and defragment + * the page in lazy_scan_prune. But when it's not possible to acquire + * a cleanup lock right away, we may be able to settle for reduced + * processing using lazy_scan_noprune. + */ + buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL, + vacrel->bstrategy); + page = BufferGetPage(buf); + if (!ConditionalLockBufferForCleanup(buf)) + { + bool hastup, + recordfreespace; + + LockBuffer(buf, BUFFER_LOCK_SHARE); + + /* Check for new or empty pages before lazy_scan_noprune call */ + if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, true, + vmbuffer)) + { + /* Processed as new/empty page (lock and pin released) */ + continue; + } + + /* Collect LP_DEAD items in dead_items array, count tuples */ + if (lazy_scan_noprune(vacrel, buf, blkno, page, &hastup, + &recordfreespace)) + { + Size freespace = 0; + + /* + * Processed page successfully (without cleanup lock) -- just + * need to perform rel truncation and FSM steps, much like the + * lazy_scan_prune case. Don't bother trying to match its + * visibility map setting steps, though. + */ + if (hastup) + vacrel->nonempty_pages = blkno + 1; + if (recordfreespace) + freespace = PageGetHeapFreeSpace(page); + UnlockReleaseBuffer(buf); + if (recordfreespace) + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + continue; + } + + /* + * lazy_scan_noprune could not do all required processing. Wait + * for a cleanup lock, and call lazy_scan_prune in the usual way. + */ + Assert(vacrel->aggressive); + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + LockBufferForCleanup(buf); + } + + /* Check for new or empty pages before lazy_scan_prune call */ + if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, false, vmbuffer)) + { + /* Processed as new/empty page (lock and pin released) */ + continue; + } + + /* + * Prune, freeze, and count tuples. + * + * Accumulates details of remaining LP_DEAD line pointers on page in + * dead_items array. This includes LP_DEAD line pointers that we + * pruned ourselves, as well as existing LP_DEAD line pointers that + * were pruned some time earlier. Also considers freezing XIDs in the + * tuple headers of remaining items with storage. 
+ */ + lazy_scan_prune(vacrel, buf, blkno, page, &prunestate); + + Assert(!prunestate.all_visible || !prunestate.has_lpdead_items); + + /* Remember the location of the last page with nonremovable tuples */ + if (prunestate.hastup) + vacrel->nonempty_pages = blkno + 1; + + if (vacrel->nindexes == 0) + { + /* + * Consider the need to do page-at-a-time heap vacuuming when + * using the one-pass strategy now. + * + * The one-pass strategy will never call lazy_vacuum(). The steps + * performed here can be thought of as the one-pass equivalent of + * a call to lazy_vacuum(). + */ + if (prunestate.has_lpdead_items) + { + Size freespace; + + lazy_vacuum_tdeheap_page(vacrel, blkno, buf, 0, vmbuffer); + + /* Forget the LP_DEAD items that we just vacuumed */ + dead_items->num_items = 0; + + /* + * Periodically perform FSM vacuuming to make newly-freed + * space visible on upper FSM pages. Note we have not yet + * performed FSM processing for blkno. + */ + if (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES) + { + FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, + blkno); + next_fsm_block_to_vacuum = blkno; + } + + /* + * Now perform FSM processing for blkno, and move on to next + * page. + * + * Our call to lazy_vacuum_tdeheap_page() will have considered if + * it's possible to set all_visible/all_frozen independently + * of lazy_scan_prune(). Note that prunestate was invalidated + * by lazy_vacuum_tdeheap_page() call. + */ + freespace = PageGetHeapFreeSpace(page); + + UnlockReleaseBuffer(buf); + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + continue; + } + + /* + * There was no call to lazy_vacuum_tdeheap_page() because pruning + * didn't encounter/create any LP_DEAD items that needed to be + * vacuumed. Prune state has not been invalidated, so proceed + * with prunestate-driven visibility map and FSM steps (just like + * the two-pass strategy). + */ + Assert(dead_items->num_items == 0); + } + + /* + * Handle setting visibility map bit based on information from the VM + * (as of last lazy_scan_skip() call), and from prunestate + */ + if (!all_visible_according_to_vm && prunestate.all_visible) + { + uint8 flags = VISIBILITYMAP_ALL_VISIBLE; + + if (prunestate.all_frozen) + { + Assert(!TransactionIdIsValid(prunestate.visibility_cutoff_xid)); + flags |= VISIBILITYMAP_ALL_FROZEN; + } + + /* + * It should never be the case that the visibility map page is set + * while the page-level bit is clear, but the reverse is allowed + * (if checksums are not enabled). Regardless, set both bits so + * that we get back in sync. + * + * NB: If the heap page is all-visible but the VM bit is not set, + * we don't need to dirty the heap page. However, if checksums + * are enabled, we do need to make sure that the heap page is + * dirtied before passing it to tdeheap_visibilitymap_set(), because it + * may be logged. Given that this situation should only happen in + * rare cases after a crash, it is not worth optimizing. + */ + PageSetAllVisible(page); + MarkBufferDirty(buf); + tdeheap_visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr, + vmbuffer, prunestate.visibility_cutoff_xid, + flags); + } + + /* + * As of PostgreSQL 9.2, the visibility map bit should never be set if + * the page-level bit is clear. However, it's possible that the bit + * got cleared after lazy_scan_skip() was called, so we must recheck + * with buffer lock before concluding that the VM is corrupt. 
+ */ + else if (all_visible_according_to_vm && !PageIsAllVisible(page) && + tdeheap_visibilitymap_get_status(vacrel->rel, blkno, &vmbuffer) != 0) + { + elog(WARNING, "page is not marked all-visible but visibility map bit is set in relation \"%s\" page %u", + vacrel->relname, blkno); + tdeheap_visibilitymap_clear(vacrel->rel, blkno, vmbuffer, + VISIBILITYMAP_VALID_BITS); + } + + /* + * It's possible for the value returned by + * GetOldestNonRemovableTransactionId() to move backwards, so it's not + * wrong for us to see tuples that appear to not be visible to + * everyone yet, while PD_ALL_VISIBLE is already set. The real safe + * xmin value never moves backwards, but + * GetOldestNonRemovableTransactionId() is conservative and sometimes + * returns a value that's unnecessarily small, so if we see that + * contradiction it just means that the tuples that we think are not + * visible to everyone yet actually are, and the PD_ALL_VISIBLE flag + * is correct. + * + * There should never be LP_DEAD items on a page with PD_ALL_VISIBLE + * set, however. + */ + else if (prunestate.has_lpdead_items && PageIsAllVisible(page)) + { + elog(WARNING, "page containing LP_DEAD items is marked as all-visible in relation \"%s\" page %u", + vacrel->relname, blkno); + PageClearAllVisible(page); + MarkBufferDirty(buf); + tdeheap_visibilitymap_clear(vacrel->rel, blkno, vmbuffer, + VISIBILITYMAP_VALID_BITS); + } + + /* + * If the all-visible page is all-frozen but not marked as such yet, + * mark it as all-frozen. Note that all_frozen is only valid if + * all_visible is true, so we must check both prunestate fields. + */ + else if (all_visible_according_to_vm && prunestate.all_visible && + prunestate.all_frozen && + !VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer)) + { + /* + * Avoid relying on all_visible_according_to_vm as a proxy for the + * page-level PD_ALL_VISIBLE bit being set, since it might have + * become stale -- even when all_visible is set in prunestate + */ + if (!PageIsAllVisible(page)) + { + PageSetAllVisible(page); + MarkBufferDirty(buf); + } + + /* + * Set the page all-frozen (and all-visible) in the VM. + * + * We can pass InvalidTransactionId as our visibility_cutoff_xid, + * since a snapshotConflictHorizon sufficient to make everything + * safe for REDO was logged when the page's tuples were frozen. + */ + Assert(!TransactionIdIsValid(prunestate.visibility_cutoff_xid)); + tdeheap_visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr, + vmbuffer, InvalidTransactionId, + VISIBILITYMAP_ALL_VISIBLE | + VISIBILITYMAP_ALL_FROZEN); + } + + /* + * Final steps for block: drop cleanup lock, record free space in the + * FSM + */ + if (prunestate.has_lpdead_items && vacrel->do_index_vacuuming) + { + /* + * Wait until lazy_vacuum_tdeheap_rel() to save free space. This + * doesn't just save us some cycles; it also allows us to record + * any additional free space that lazy_vacuum_tdeheap_page() will + * make available in cases where it's possible to truncate the + * page's line pointer array. + * + * Note: It's not in fact 100% certain that we really will call + * lazy_vacuum_tdeheap_rel() -- lazy_vacuum() might yet opt to skip + * index vacuuming (and so must skip heap vacuuming). This is + * deemed okay because it only happens in emergencies, or when + * there is very little free space anyway. (Besides, we start + * recording free space in the FSM once index vacuuming has been + * abandoned.) 
+ * + * Note: The one-pass (no indexes) case is only supposed to make + * it this far when there were no LP_DEAD items during pruning. + */ + Assert(vacrel->nindexes > 0); + UnlockReleaseBuffer(buf); + } + else + { + Size freespace = PageGetHeapFreeSpace(page); + + UnlockReleaseBuffer(buf); + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + } + } + + vacrel->blkno = InvalidBlockNumber; + if (BufferIsValid(vmbuffer)) + ReleaseBuffer(vmbuffer); + + /* report that everything is now scanned */ + pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno); + + /* now we can compute the new value for pg_class.reltuples */ + vacrel->new_live_tuples = vac_estimate_reltuples(vacrel->rel, rel_pages, + vacrel->scanned_pages, + vacrel->live_tuples); + + /* + * Also compute the total number of surviving heap entries. In the + * (unlikely) scenario that new_live_tuples is -1, take it as zero. + */ + vacrel->new_rel_tuples = + Max(vacrel->new_live_tuples, 0) + vacrel->recently_dead_tuples + + vacrel->missed_dead_tuples; + + /* + * Do index vacuuming (call each index's ambulkdelete routine), then do + * related heap vacuuming + */ + if (dead_items->num_items > 0) + lazy_vacuum(vacrel); + + /* + * Vacuum the remainder of the Free Space Map. We must do this whether or + * not there were indexes, and whether or not we bypassed index vacuuming. + */ + if (blkno > next_fsm_block_to_vacuum) + FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, blkno); + + /* report all blocks vacuumed */ + pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno); + + /* Do final index cleanup (call each index's amvacuumcleanup routine) */ + if (vacrel->nindexes > 0 && vacrel->do_index_cleanup) + lazy_cleanup_all_indexes(vacrel); +} + +/* + * lazy_scan_skip() -- set up range of skippable blocks using visibility map. + * + * lazy_scan_heap() calls here every time it needs to set up a new range of + * blocks to skip via the visibility map. Caller passes the next block in + * line. We return a next_unskippable_block for this range. When there are + * no skippable blocks we just return caller's next_block. The all-visible + * status of the returned block is set in *next_unskippable_allvis for caller, + * too. Block usually won't be all-visible (since it's unskippable), but it + * can be during aggressive VACUUMs (as well as in certain edge cases). + * + * Sets *skipping_current_range to indicate if caller should skip this range. + * Costs and benefits drive our decision. Very small ranges won't be skipped. + * + * Note: our opinion of which blocks can be skipped can go stale immediately. + * It's okay if caller "misses" a page whose all-visible or all-frozen marking + * was concurrently cleared, though. All that matters is that caller scan all + * pages whose tuples might contain XIDs < OldestXmin, or MXIDs < OldestMxact. + * (Actually, non-aggressive VACUUMs can choose to skip all-visible pages with + * older XIDs/MXIDs. The vacrel->skippedallvis flag will be set here when the + * choice to skip such a range is actually made, making everything safe.) 
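Distilled, the function below applies two rules: a block is only skippable if it is all-visible, is not the last block, and (for aggressive VACUUMs) is also all-frozen; and a qualifying run is only skipped once it reaches SKIP_PAGES_THRESHOLD. A standalone sketch, with bit values mirroring upstream's visibilitymap.h:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VM_ALL_VISIBLE 0x01     /* VISIBILITYMAP_ALL_VISIBLE */
#define VM_ALL_FROZEN  0x02     /* VISIBILITYMAP_ALL_FROZEN */
#define SKIP_PAGES_THRESHOLD 32

bool
block_safe_to_skip(uint8_t mapbits, bool aggressive, bool is_last_block)
{
    if (is_last_block)
        return false;           /* last page must always be scanned */
    if ((mapbits & VM_ALL_VISIBLE) == 0)
        return false;
    if (aggressive && (mapbits & VM_ALL_FROZEN) == 0)
        return false;           /* aggressive may only skip all-frozen */
    return true;
}

int main(void)
{
    /* an all-visible-only block is not skippable for aggressive VACUUM */
    printf("%d\n", block_safe_to_skip(VM_ALL_VISIBLE, true, false));  /* 0 */
    /* and even skippable runs are only taken past the threshold */
    printf("%d %d\n", 40 >= SKIP_PAGES_THRESHOLD, 8 >= SKIP_PAGES_THRESHOLD);
    return 0;
}
```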
+ */ +static BlockNumber +lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block, + bool *next_unskippable_allvis, bool *skipping_current_range) +{ + BlockNumber rel_pages = vacrel->rel_pages, + next_unskippable_block = next_block, + nskippable_blocks = 0; + bool skipsallvis = false; + + *next_unskippable_allvis = true; + while (next_unskippable_block < rel_pages) + { + uint8 mapbits = tdeheap_visibilitymap_get_status(vacrel->rel, + next_unskippable_block, + vmbuffer); + + if ((mapbits & VISIBILITYMAP_ALL_VISIBLE) == 0) + { + Assert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0); + *next_unskippable_allvis = false; + break; + } + + /* + * Caller must scan the last page to determine whether it has tuples + * (caller must have the opportunity to set vacrel->nonempty_pages). + * This rule avoids having lazy_truncate_heap() take access-exclusive + * lock on rel to attempt a truncation that fails anyway, just because + * there are tuples on the last page (it is likely that there will be + * tuples on other nearby pages as well, but those can be skipped). + * + * Implement this by always treating the last block as unsafe to skip. + */ + if (next_unskippable_block == rel_pages - 1) + break; + + /* DISABLE_PAGE_SKIPPING makes all skipping unsafe */ + if (!vacrel->skipwithvm) + break; + + /* + * Aggressive VACUUM caller can't skip pages just because they are + * all-visible. They may still skip all-frozen pages, which can't + * contain XIDs < OldestXmin (XIDs that aren't already frozen by now). + */ + if ((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0) + { + if (vacrel->aggressive) + break; + + /* + * All-visible block is safe to skip in non-aggressive case. But + * remember that the final range contains such a block for later. + */ + skipsallvis = true; + } + + vacuum_delay_point(); + next_unskippable_block++; + nskippable_blocks++; + } + + /* + * We only skip a range with at least SKIP_PAGES_THRESHOLD consecutive + * pages. Since we're reading sequentially, the OS should be doing + * readahead for us, so there's no gain in skipping a page now and then. + * Skipping such a range might even discourage sequential detection. + * + * This test also enables more frequent relfrozenxid advancement during + * non-aggressive VACUUMs. If the range has any all-visible pages then + * skipping makes updating relfrozenxid unsafe, which is a real downside. + */ + if (nskippable_blocks < SKIP_PAGES_THRESHOLD) + *skipping_current_range = false; + else + { + *skipping_current_range = true; + if (skipsallvis) + vacrel->skippedallvis = true; + } + + return next_unskippable_block; +} + +/* + * lazy_scan_new_or_empty() -- lazy_scan_heap() new/empty page handling. + * + * Must call here to handle both new and empty pages before calling + * lazy_scan_prune or lazy_scan_noprune, since they're not prepared to deal + * with new or empty pages. + * + * It's necessary to consider new pages as a special case, since the rules for + * maintaining the visibility map and FSM with empty pages are a little + * different (though new pages can be truncated away during rel truncation). + * + * Empty pages are not really a special case -- they're just heap pages that + * have no allocated tuples (including even LP_UNUSED items). You might + * wonder why we need to handle them here all the same. It's only necessary + * because of a corner-case involving a hard crash during heap relation + * extension. 
If we ever make relation-extension crash safe, then it should + * no longer be necessary to deal with empty pages here (or new pages, for + * that matter). + * + * Caller must hold at least a shared lock. We might need to escalate the + * lock in that case, so the type of lock caller holds needs to be specified + * using 'sharelock' argument. + * + * Returns false in common case where caller should go on to call + * lazy_scan_prune (or lazy_scan_noprune). Otherwise returns true, indicating + * that lazy_scan_heap is done processing the page, releasing lock on caller's + * behalf. + */ +static bool +lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno, + Page page, bool sharelock, Buffer vmbuffer) +{ + Size freespace; + + if (PageIsNew(page)) + { + /* + * All-zeroes pages can be left over if either a backend extends the + * relation by a single page, but crashes before the newly initialized + * page has been written out, or when bulk-extending the relation + * (which creates a number of empty pages at the tail end of the + * relation), and then enters them into the FSM. + * + * Note we do not enter the page into the visibilitymap. That has the + * downside that we repeatedly visit this page in subsequent vacuums, + * but otherwise we'll never discover the space on a promoted standby. + * The harm of repeated checking ought to normally not be too bad. The + * space usually should be used at some point, otherwise there + * wouldn't be any regular vacuums. + * + * Make sure these pages are in the FSM, to ensure they can be reused. + * Do that by testing if there's any space recorded for the page. If + * not, enter it. We do so after releasing the lock on the heap page, + * the FSM is approximate, after all. + */ + UnlockReleaseBuffer(buf); + + if (GetRecordedFreeSpace(vacrel->rel, blkno) == 0) + { + freespace = BLCKSZ - SizeOfPageHeaderData; + + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + } + + return true; + } + + if (PageIsEmpty(page)) + { + /* + * It seems likely that caller will always be able to get a cleanup + * lock on an empty page. But don't take any chances -- escalate to + * an exclusive lock (still don't need a cleanup lock, though). + */ + if (sharelock) + { + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + + if (!PageIsEmpty(page)) + { + /* page isn't new or empty -- keep lock and pin for now */ + return false; + } + } + else + { + /* Already have a full cleanup lock (which is more than enough) */ + } + + /* + * Unlike new pages, empty pages are always set all-visible and + * all-frozen. + */ + if (!PageIsAllVisible(page)) + { + START_CRIT_SECTION(); + + /* mark buffer dirty before writing a WAL record */ + MarkBufferDirty(buf); + + /* + * It's possible that another backend has extended the heap, + * initialized the page, and then failed to WAL-log the page due + * to an ERROR. Since heap extension is not WAL-logged, recovery + * might try to replay our record setting the page all-visible and + * find that the page isn't initialized, which will cause a PANIC. + * To prevent that, check whether the page has been previously + * WAL-logged, and if not, do that now. 
+ */ + if (RelationNeedsWAL(vacrel->rel) && + PageGetLSN(page) == InvalidXLogRecPtr) + log_newpage_buffer(buf, true); + + PageSetAllVisible(page); + tdeheap_visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr, + vmbuffer, InvalidTransactionId, + VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN); + END_CRIT_SECTION(); + } + + freespace = PageGetHeapFreeSpace(page); + UnlockReleaseBuffer(buf); + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + return true; + } + + /* page isn't new or empty -- keep lock and pin */ + return false; +} + +/* + * lazy_scan_prune() -- lazy_scan_heap() pruning and freezing. + * + * Caller must hold pin and buffer cleanup lock on the buffer. + * + * Prior to PostgreSQL 14 there were very rare cases where tdeheap_page_prune() + * was allowed to disagree with our HeapTupleSatisfiesVacuum() call about + * whether or not a tuple should be considered DEAD. This happened when an + * inserting transaction concurrently aborted (after our tdeheap_page_prune() + * call, before our HeapTupleSatisfiesVacuum() call). There was rather a lot + * of complexity just so we could deal with tuples that were DEAD to VACUUM, + * but nevertheless were left with storage after pruning. + * + * The approach we take now is to restart pruning when the race condition is + * detected. This allows tdeheap_page_prune() to prune the tuples inserted by + * the now-aborted transaction. This is a little crude, but it guarantees + * that any items that make it into the dead_items array are simple LP_DEAD + * line pointers, and that every remaining item with tuple storage is + * considered as a candidate for freezing. + */ +static void +lazy_scan_prune(LVRelState *vacrel, + Buffer buf, + BlockNumber blkno, + Page page, + LVPagePruneState *prunestate) +{ + Relation rel = vacrel->rel; + OffsetNumber offnum, + maxoff; + ItemId itemid; + HeapTupleData tuple; + HTSV_Result res; + int tuples_deleted, + tuples_frozen, + lpdead_items, + live_tuples, + recently_dead_tuples; + int nnewlpdead; + HeapPageFreeze pagefrz; + int64 fpi_before = pgWalUsage.wal_fpi; + OffsetNumber deadoffsets[MaxHeapTuplesPerPage]; + HeapTupleFreeze frozen[MaxHeapTuplesPerPage]; + + Assert(BufferGetBlockNumber(buf) == blkno); + + /* + * maxoff might be reduced following line pointer array truncation in + * tdeheap_page_prune. That's safe for us to ignore, since the reclaimed + * space will continue to look like LP_UNUSED items below. + */ + maxoff = PageGetMaxOffsetNumber(page); + +retry: + + /* Initialize (or reset) page-level state */ + pagefrz.freeze_required = false; + pagefrz.FreezePageRelfrozenXid = vacrel->NewRelfrozenXid; + pagefrz.FreezePageRelminMxid = vacrel->NewRelminMxid; + pagefrz.NoFreezePageRelfrozenXid = vacrel->NewRelfrozenXid; + pagefrz.NoFreezePageRelminMxid = vacrel->NewRelminMxid; + tuples_deleted = 0; + tuples_frozen = 0; + lpdead_items = 0; + live_tuples = 0; + recently_dead_tuples = 0; + + /* + * Prune all HOT-update chains in this page. + * + * We count tuples removed by the pruning step as tuples_deleted. Its + * final value can be thought of as the number of tuples that have been + * deleted from the table. It should not be confused with lpdead_items; + * lpdead_items's final value can be thought of as the number of tuples + * that were deleted from indexes. 
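The restart protocol described in lazy_scan_prune's header comment above has the following skeleton; every name here is a descriptive stand-in rather than this patch's function:

```c
#include <stddef.h>

typedef enum { TUPLE_LIVE, TUPLE_DEAD /* ... */ } TupleStatus;

extern void prune_page(void *page);               /* tdeheap_page_prune stand-in */
extern TupleStatus check_visibility(void *tuple); /* HeapTupleSatisfiesVacuum stand-in */
extern void *first_tuple(void *page);
extern void *next_tuple(void *page, void *tuple);

void
prune_with_retry(void *page)
{
retry:
    prune_page(page);

    for (void *tup = first_tuple(page); tup != NULL; tup = next_tuple(page, tup))
    {
        /*
         * An inserter may have aborted after prune_page() examined this
         * tuple; restart so pruning can reclaim it as an LP_DEAD item.
         */
        if (check_visibility(tup) == TUPLE_DEAD)
            goto retry;
    }
}
```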
+ */ + tuples_deleted = tdeheap_page_prune(rel, buf, vacrel->vistest, + InvalidTransactionId, 0, &nnewlpdead, + &vacrel->offnum); + + /* + * Now scan the page to collect LP_DEAD items and check for tuples + * requiring freezing among remaining tuples with storage + */ + prunestate->hastup = false; + prunestate->has_lpdead_items = false; + prunestate->all_visible = true; + prunestate->all_frozen = true; + prunestate->visibility_cutoff_xid = InvalidTransactionId; + + for (offnum = FirstOffsetNumber; + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + bool totally_frozen; + + /* + * Set the offset number so that we can display it along with any + * error that occurred while processing this tuple. + */ + vacrel->offnum = offnum; + itemid = PageGetItemId(page, offnum); + + if (!ItemIdIsUsed(itemid)) + continue; + + /* Redirect items mustn't be touched */ + if (ItemIdIsRedirected(itemid)) + { + /* page makes rel truncation unsafe */ + prunestate->hastup = true; + continue; + } + + if (ItemIdIsDead(itemid)) + { + /* + * Deliberately don't set hastup for LP_DEAD items. We make the + * soft assumption that any LP_DEAD items encountered here will + * become LP_UNUSED later on, before count_nondeletable_pages is + * reached. If we don't make this assumption then rel truncation + * will only happen every other VACUUM, at most. Besides, VACUUM + * must treat hastup/nonempty_pages as provisional no matter how + * LP_DEAD items are handled (handled here, or handled later on). + * + * Also deliberately delay unsetting all_visible until just before + * we return to lazy_scan_heap caller, as explained in full below. + * (This is another case where it's useful to anticipate that any + * LP_DEAD items will become LP_UNUSED during the ongoing VACUUM.) + */ + deadoffsets[lpdead_items++] = offnum; + continue; + } + + Assert(ItemIdIsNormal(itemid)); + + ItemPointerSet(&(tuple.t_self), blkno, offnum); + tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid); + tuple.t_len = ItemIdGetLength(itemid); + tuple.t_tableOid = RelationGetRelid(rel); + + /* + * DEAD tuples are almost always pruned into LP_DEAD line pointers by + * tdeheap_page_prune(), but it's possible that the tuple state changed + * since tdeheap_page_prune() looked. Handle that here by restarting. + * (See comments at the top of function for a full explanation.) + */ + res = HeapTupleSatisfiesVacuum(&tuple, vacrel->cutoffs.OldestXmin, + buf); + + if (unlikely(res == HEAPTUPLE_DEAD)) + goto retry; + + /* + * The criteria for counting a tuple as live in this block need to + * match what analyze.c's acquire_sample_rows() does, otherwise VACUUM + * and ANALYZE may produce wildly different reltuples values, e.g. + * when there are many recently-dead tuples. + * + * The logic here is a bit simpler than acquire_sample_rows(), as + * VACUUM can't run inside a transaction block, which makes some cases + * impossible (e.g. in-progress insert from the same transaction). + * + * We treat LP_DEAD items (which are the closest thing to DEAD tuples + * that might be seen here) differently, too: we assume that they'll + * become LP_UNUSED before VACUUM finishes. This difference is only + * superficial. VACUUM effectively agrees with ANALYZE about DEAD + * items, in the end. VACUUM won't remember LP_DEAD items, but only + * because they're not supposed to be left behind when it is done. + * (Cases where we bypass index vacuuming will violate this optimistic + * assumption, but the overall impact of that should be negligible.) 
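+ * + * [Illustrative aside, not part of the original patch: the visibility + * dispatch below reduces to this mapping] + * + * HEAPTUPLE_LIVE -> live_tuples++ + * HEAPTUPLE_RECENTLY_DEAD -> recently_dead_tuples++ + * HEAPTUPLE_INSERT_IN_PROGRESS -> counted nowhere (inserter reports it) + * HEAPTUPLE_DELETE_IN_PROGRESS -> live_tuples++ (deleter reports it) + * HEAPTUPLE_DEAD -> already handled above via goto retry + * + * [Every case except LIVE also forces all_visible to false; LIVE does too + * when its xmin is not yet old enough for everyone to see it as committed.] + *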
+ */ + switch (res) + { + case HEAPTUPLE_LIVE: + + /* + * Count it as live. Not only is this natural, but it's also + * what acquire_sample_rows() does. + */ + live_tuples++; + + /* + * Is the tuple definitely visible to all transactions? + * + * NB: Like with per-tuple hint bits, we can't set the + * PD_ALL_VISIBLE flag if the inserter committed + * asynchronously. See SetHintBits for more info. Check that + * the tuple is hinted xmin-committed because of that. + */ + if (prunestate->all_visible) + { + TransactionId xmin; + + if (!HeapTupleHeaderXminCommitted(tuple.t_data)) + { + prunestate->all_visible = false; + break; + } + + /* + * The inserter definitely committed. But is it old enough + * that everyone sees it as committed? + */ + xmin = HeapTupleHeaderGetXmin(tuple.t_data); + if (!TransactionIdPrecedes(xmin, + vacrel->cutoffs.OldestXmin)) + { + prunestate->all_visible = false; + break; + } + + /* Track newest xmin on page. */ + if (TransactionIdFollows(xmin, prunestate->visibility_cutoff_xid) && + TransactionIdIsNormal(xmin)) + prunestate->visibility_cutoff_xid = xmin; + } + break; + case HEAPTUPLE_RECENTLY_DEAD: + + /* + * If tuple is recently dead then we must not remove it from + * the relation. (We only remove items that are LP_DEAD from + * pruning.) + */ + recently_dead_tuples++; + prunestate->all_visible = false; + break; + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * We do not count these rows as live, because we expect the + * inserting transaction to update the counters at commit, and + * we assume that will happen only after we report our + * results. This assumption is a bit shaky, but it is what + * acquire_sample_rows() does, so be consistent. + */ + prunestate->all_visible = false; + break; + case HEAPTUPLE_DELETE_IN_PROGRESS: + /* This is an expected case during concurrent vacuum */ + prunestate->all_visible = false; + + /* + * Count such rows as live. As above, we assume the deleting + * transaction will commit and update the counters after we + * report. + */ + live_tuples++; + break; + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + break; + } + + prunestate->hastup = true; /* page makes rel truncation unsafe */ + + /* Tuple with storage -- consider need to freeze */ + if (tdeheap_prepare_freeze_tuple(tuple.t_data, &vacrel->cutoffs, &pagefrz, + &frozen[tuples_frozen], &totally_frozen)) + { + /* Save prepared freeze plan for later */ + frozen[tuples_frozen++].offset = offnum; + } + + /* + * If any tuple isn't either totally frozen already or eligible to + * become totally frozen (according to its freeze plan), then the page + * definitely cannot be set all-frozen in the visibility map later on + */ + if (!totally_frozen) + prunestate->all_frozen = false; + } + + /* + * We have now divided every item on the page into either an LP_DEAD item + * that will need to be vacuumed in indexes later, or a LP_NORMAL tuple + * that remains and needs to be considered for freezing now (LP_UNUSED and + * LP_REDIRECT items also remain, but are of no further interest to us). + */ + vacrel->offnum = InvalidOffsetNumber; + + /* + * Freeze the page when tdeheap_prepare_freeze_tuple indicates that at least + * one XID/MXID from before FreezeLimit/MultiXactCutoff is present. Also + * freeze when pruning generated an FPI, if doing so means that we set the + * page all-frozen afterwards (might not happen until final heap pass). 
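+ * + * [Illustrative aside, not part of the original patch: the third disjunct + * below is the opportunistic case. Pruning has already emitted a full-page + * image (fpi_before != pgWalUsage.wal_fpi) and freezing would leave the page + * all-visible and all-frozen, so freezing now piggybacks on WAL costs + * already paid instead of forcing another FPI in some future VACUUM.] + *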
+ */ + if (pagefrz.freeze_required || tuples_frozen == 0 || + (prunestate->all_visible && prunestate->all_frozen && + fpi_before != pgWalUsage.wal_fpi)) + { + /* + * We're freezing the page. Our final NewRelfrozenXid doesn't need to + * be affected by the XIDs that are just about to be frozen anyway. + */ + vacrel->NewRelfrozenXid = pagefrz.FreezePageRelfrozenXid; + vacrel->NewRelminMxid = pagefrz.FreezePageRelminMxid; + + if (tuples_frozen == 0) + { + /* + * We have no freeze plans to execute, so there's no added cost + * from following the freeze path. That's why it was chosen. This + * is important in the case where the page only contains totally + * frozen tuples at this point (perhaps only following pruning). + * Such pages can be marked all-frozen in the VM by our caller, + * even though none of its tuples were newly frozen here (note + * that the "no freeze" path never sets pages all-frozen). + * + * We never increment the frozen_pages instrumentation counter + * here, since it only counts pages with newly frozen tuples + * (don't confuse that with pages newly set all-frozen in VM). + */ + } + else + { + TransactionId snapshotConflictHorizon; + + vacrel->frozen_pages++; + + /* + * We can use visibility_cutoff_xid as our cutoff for conflicts + * when the whole page is eligible to become all-frozen in the VM + * once we're done with it. Otherwise we generate a conservative + * cutoff by stepping back from OldestXmin. + */ + if (prunestate->all_visible && prunestate->all_frozen) + { + /* Using same cutoff when setting VM is now unnecessary */ + snapshotConflictHorizon = prunestate->visibility_cutoff_xid; + prunestate->visibility_cutoff_xid = InvalidTransactionId; + } + else + { + /* Avoids false conflicts when hot_standby_feedback in use */ + snapshotConflictHorizon = vacrel->cutoffs.OldestXmin; + TransactionIdRetreat(snapshotConflictHorizon); + } + + /* Execute all freeze plans for page as a single atomic action */ + tdeheap_freeze_execute_prepared(vacrel->rel, buf, + snapshotConflictHorizon, + frozen, tuples_frozen); + } + } + else + { + /* + * Page requires "no freeze" processing. It might be set all-visible + * in the visibility map, but it can never be set all-frozen. + */ + vacrel->NewRelfrozenXid = pagefrz.NoFreezePageRelfrozenXid; + vacrel->NewRelminMxid = pagefrz.NoFreezePageRelminMxid; + prunestate->all_frozen = false; + tuples_frozen = 0; /* avoid miscounts in instrumentation */ + } + + /* + * VACUUM will call tdeheap_page_is_all_visible() during the second pass over + * the heap to determine all_visible and all_frozen for the page -- this + * is a specialized version of the logic from this function. Now that + * we've finished pruning and freezing, make sure that we're in total + * agreement with tdeheap_page_is_all_visible() using an assertion. 
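+ * + * [Illustrative aside, not part of the original patch: note the + * snapshotConflictHorizon choice in the freeze branch above. When the page + * is about to be marked all-visible and all-frozen, visibility_cutoff_xid + * already covers the conflict, so the later VM set needs no cutoff of its + * own; otherwise OldestXmin - 1 is used, a conservative value that avoids + * false recovery conflicts when hot_standby_feedback is in use.] + *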
+ */ +#ifdef USE_ASSERT_CHECKING + /* Note that all_frozen value does not matter when !all_visible */ + if (prunestate->all_visible && lpdead_items == 0) + { + TransactionId cutoff; + bool all_frozen; + + if (!tdeheap_page_is_all_visible(vacrel, buf, &cutoff, &all_frozen)) + Assert(false); + + Assert(!TransactionIdIsValid(cutoff) || + cutoff == prunestate->visibility_cutoff_xid); + } +#endif + + /* + * Now save details of the LP_DEAD items from the page in vacrel + */ + if (lpdead_items > 0) + { + VacDeadItems *dead_items = vacrel->dead_items; + ItemPointerData tmp; + + vacrel->lpdead_item_pages++; + prunestate->has_lpdead_items = true; + + ItemPointerSetBlockNumber(&tmp, blkno); + + for (int i = 0; i < lpdead_items; i++) + { + ItemPointerSetOffsetNumber(&tmp, deadoffsets[i]); + dead_items->items[dead_items->num_items++] = tmp; + } + + Assert(dead_items->num_items <= dead_items->max_items); + pgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES, + dead_items->num_items); + + /* + * It was convenient to ignore LP_DEAD items in all_visible earlier on + * to make the choice of whether or not to freeze the page unaffected + * by the short-term presence of LP_DEAD items. These LP_DEAD items + * were effectively assumed to be LP_UNUSED items in the making. It + * doesn't matter which heap pass (initial pass or final pass) ends up + * setting the page all-frozen, as long as the ongoing VACUUM does it. + * + * Now that freezing has been finalized, unset all_visible. It needs + * to reflect the present state of things, as expected by our caller. + */ + prunestate->all_visible = false; + } + + /* Finally, add page-local counts to whole-VACUUM counts */ + vacrel->tuples_deleted += tuples_deleted; + vacrel->tuples_frozen += tuples_frozen; + vacrel->lpdead_items += lpdead_items; + vacrel->live_tuples += live_tuples; + vacrel->recently_dead_tuples += recently_dead_tuples; +} + +/* + * lazy_scan_noprune() -- lazy_scan_prune() without pruning or freezing + * + * Caller need only hold a pin and share lock on the buffer, unlike + * lazy_scan_prune, which requires a full cleanup lock. While pruning isn't + * performed here, it's quite possible that an earlier opportunistic pruning + * operation left LP_DEAD items behind. We'll at least collect any such items + * in the dead_items array for removal from indexes. + * + * For aggressive VACUUM callers, we may return false to indicate that a full + * cleanup lock is required for processing by lazy_scan_prune. This is only + * necessary when the aggressive VACUUM needs to freeze some tuple XIDs from + * one or more tuples on the page. We always return true for non-aggressive + * callers. + * + * See lazy_scan_prune for an explanation of hastup return flag. + * recordfreespace flag instructs caller on whether or not it should do + * generic FSM processing for page. 
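+ * + * [Illustrative aside, not part of the original patch: the resulting + * contract in tabular form] + * + * returns false -> aggressive VACUUM must retry the page + * with a cleanup lock via lazy_scan_prune + * returns true, *recordfreespace = true -> caller records the page's free + * space in the FSM now + * returns true, *recordfreespace = false -> page has LP_DEAD items (and the + * table has indexes); the FSM update + * is deferred to the second heap pass + *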
+ */ +static bool +lazy_scan_noprune(LVRelState *vacrel, + Buffer buf, + BlockNumber blkno, + Page page, + bool *hastup, + bool *recordfreespace) +{ + OffsetNumber offnum, + maxoff; + int lpdead_items, + live_tuples, + recently_dead_tuples, + missed_dead_tuples; + HeapTupleHeader tupleheader; + TransactionId NoFreezePageRelfrozenXid = vacrel->NewRelfrozenXid; + MultiXactId NoFreezePageRelminMxid = vacrel->NewRelminMxid; + OffsetNumber deadoffsets[MaxHeapTuplesPerPage]; + + Assert(BufferGetBlockNumber(buf) == blkno); + + *hastup = false; /* for now */ + *recordfreespace = false; /* for now */ + + lpdead_items = 0; + live_tuples = 0; + recently_dead_tuples = 0; + missed_dead_tuples = 0; + + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + ItemId itemid; + HeapTupleData tuple; + + vacrel->offnum = offnum; + itemid = PageGetItemId(page, offnum); + + if (!ItemIdIsUsed(itemid)) + continue; + + if (ItemIdIsRedirected(itemid)) + { + *hastup = true; + continue; + } + + if (ItemIdIsDead(itemid)) + { + /* + * Deliberately don't set hastup=true here. See same point in + * lazy_scan_prune for an explanation. + */ + deadoffsets[lpdead_items++] = offnum; + continue; + } + + *hastup = true; /* page prevents rel truncation */ + tupleheader = (HeapTupleHeader) PageGetItem(page, itemid); + if (tdeheap_tuple_should_freeze(tupleheader, &vacrel->cutoffs, + &NoFreezePageRelfrozenXid, + &NoFreezePageRelminMxid)) + { + /* Tuple with XID < FreezeLimit (or MXID < MultiXactCutoff) */ + if (vacrel->aggressive) + { + /* + * Aggressive VACUUMs must always be able to advance rel's + * relfrozenxid to a value >= FreezeLimit (and be able to + * advance rel's relminmxid to a value >= MultiXactCutoff). + * The ongoing aggressive VACUUM won't be able to do that + * unless it can freeze an XID (or MXID) from this tuple now. + * + * The only safe option is to have caller perform processing + * of this page using lazy_scan_prune. Caller might have to + * wait a while for a cleanup lock, but it can't be helped. + */ + vacrel->offnum = InvalidOffsetNumber; + return false; + } + + /* + * Non-aggressive VACUUMs are under no obligation to advance + * relfrozenxid (even by one XID). We can be much laxer here. + * + * Currently we always just accept an older final relfrozenxid + * and/or relminmxid value. We never make caller wait or work a + * little harder, even when it likely makes sense to do so. + */ + } + + ItemPointerSet(&(tuple.t_self), blkno, offnum); + tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid); + tuple.t_len = ItemIdGetLength(itemid); + tuple.t_tableOid = RelationGetRelid(vacrel->rel); + + switch (HeapTupleSatisfiesVacuum(&tuple, vacrel->cutoffs.OldestXmin, + buf)) + { + case HEAPTUPLE_DELETE_IN_PROGRESS: + case HEAPTUPLE_LIVE: + + /* + * Count both cases as live, just like lazy_scan_prune + */ + live_tuples++; + + break; + case HEAPTUPLE_DEAD: + + /* + * There is some useful work for pruning to do, that won't be + * done due to failure to get a cleanup lock. 
+ */ + missed_dead_tuples++; + break; + case HEAPTUPLE_RECENTLY_DEAD: + + /* + * Count in recently_dead_tuples, just like lazy_scan_prune + */ + recently_dead_tuples++; + break; + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * Do not count these rows as live, just like lazy_scan_prune + */ + break; + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + break; + } + } + + vacrel->offnum = InvalidOffsetNumber; + + /* + * By here we know for sure that caller can put off freezing and pruning + * this particular page until the next VACUUM. Remember its details now. + * (lazy_scan_prune expects a clean slate, so we have to do this last.) + */ + vacrel->NewRelfrozenXid = NoFreezePageRelfrozenXid; + vacrel->NewRelminMxid = NoFreezePageRelminMxid; + + /* Save any LP_DEAD items found on the page in dead_items array */ + if (vacrel->nindexes == 0) + { + /* Using one-pass strategy (since table has no indexes) */ + if (lpdead_items > 0) + { + /* + * Perfunctory handling for the corner case where a single pass + * strategy VACUUM cannot get a cleanup lock, and it turns out + * that there is one or more LP_DEAD items: just count the LP_DEAD + * items as missed_dead_tuples instead. (This is a bit dishonest, + * but it beats having to maintain specialized heap vacuuming code + * forever, for vanishingly little benefit.) + */ + *hastup = true; + missed_dead_tuples += lpdead_items; + } + + *recordfreespace = true; + } + else if (lpdead_items == 0) + { + /* + * Won't be vacuuming this page later, so record page's freespace in + * the FSM now + */ + *recordfreespace = true; + } + else + { + VacDeadItems *dead_items = vacrel->dead_items; + ItemPointerData tmp; + + /* + * Page has LP_DEAD items, and so any references/TIDs that remain in + * indexes will be deleted during index vacuuming (and then marked + * LP_UNUSED in the heap) + */ + vacrel->lpdead_item_pages++; + + ItemPointerSetBlockNumber(&tmp, blkno); + + for (int i = 0; i < lpdead_items; i++) + { + ItemPointerSetOffsetNumber(&tmp, deadoffsets[i]); + dead_items->items[dead_items->num_items++] = tmp; + } + + Assert(dead_items->num_items <= dead_items->max_items); + pgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES, + dead_items->num_items); + + vacrel->lpdead_items += lpdead_items; + + /* + * Assume that we'll go on to vacuum this heap page during final pass + * over the heap. Don't record free space until then. + */ + *recordfreespace = false; + } + + /* + * Finally, add relevant page-local counts to whole-VACUUM counts + */ + vacrel->live_tuples += live_tuples; + vacrel->recently_dead_tuples += recently_dead_tuples; + vacrel->missed_dead_tuples += missed_dead_tuples; + if (missed_dead_tuples > 0) + vacrel->missed_dead_pages++; + + /* Caller won't need to call lazy_scan_prune with same page */ + return true; +} + +/* + * Main entry point for index vacuuming and heap vacuuming. + * + * Removes items collected in dead_items from table's indexes, then marks the + * same items LP_UNUSED in the heap. See the comments above lazy_scan_heap + * for full details. + * + * Also empties dead_items, freeing up space for later TIDs. + * + * We may choose to bypass index vacuuming at this point, though only when the + * ongoing VACUUM operation will definitely only have one index scan/round of + * index vacuuming. 
+ */ +static void +lazy_vacuum(LVRelState *vacrel) +{ + bool bypass; + + /* Should not end up here with no indexes */ + Assert(vacrel->nindexes > 0); + Assert(vacrel->lpdead_item_pages > 0); + + if (!vacrel->do_index_vacuuming) + { + Assert(!vacrel->do_index_cleanup); + vacrel->dead_items->num_items = 0; + return; + } + + /* + * Consider bypassing index vacuuming (and heap vacuuming) entirely. + * + * We currently only do this in cases where the number of LP_DEAD items + * for the entire VACUUM operation is close to zero. This avoids sharp + * discontinuities in the duration and overhead of successive VACUUM + * operations that run against the same table with a fixed workload. + * Ideally, successive VACUUM operations will behave as if there are + * exactly zero LP_DEAD items in cases where there are close to zero. + * + * This is likely to be helpful with a table that is continually affected + * by UPDATEs that can mostly apply the HOT optimization, but occasionally + * have small aberrations that lead to just a few heap pages retaining + * only one or two LP_DEAD items. This is pretty common; even when the + * DBA goes out of their way to make UPDATEs use HOT, it is practically + * impossible to predict whether HOT will be applied in 100% of cases. + * It's far easier to ensure that 99%+ of all UPDATEs against a table use + * HOT through careful tuning. + */ + bypass = false; + if (vacrel->consider_bypass_optimization && vacrel->rel_pages > 0) + { + BlockNumber threshold; + + Assert(vacrel->num_index_scans == 0); + Assert(vacrel->lpdead_items == vacrel->dead_items->num_items); + Assert(vacrel->do_index_vacuuming); + Assert(vacrel->do_index_cleanup); + + /* + * This crossover point at which we'll start to do index vacuuming is + * expressed as a percentage of the total number of heap pages in the + * table that are known to have at least one LP_DEAD item. This is + * much more important than the total number of LP_DEAD items, since + * it's a proxy for the number of heap pages whose visibility map bits + * cannot be set on account of bypassing index and heap vacuuming. + * + * We apply one further precautionary test: the space currently used + * to store the TIDs (TIDs that now all point to LP_DEAD items) must + * not exceed 32MB. This limits the risk that we will bypass index + * vacuuming again and again until eventually there is a VACUUM whose + * dead_items space is not CPU cache resident. + * + * We don't take any special steps to remember the LP_DEAD items (such + * as counting them in our final update to the stats system) when the + * optimization is applied. Though the accounting used in analyze.c's + * acquire_sample_rows() will recognize the same LP_DEAD items as dead + * rows in its own stats report, that's okay. The discrepancy should + * be negligible. If this optimization is ever expanded to cover more + * cases then this may need to be reconsidered. + */ + threshold = (double) vacrel->rel_pages * BYPASS_THRESHOLD_PAGES; + bypass = (vacrel->lpdead_item_pages < threshold && + vacrel->lpdead_items < MAXDEADITEMS(32L * 1024L * 1024L)); + } + + if (bypass) + { + /* + * There are almost zero TIDs. Behave as if there were precisely + * zero: bypass index vacuuming, but do index cleanup. + * + * We expect that the ongoing VACUUM operation will finish very + * quickly, so there is no point in considering speeding up as a + * failsafe against wraparound failure. (Index cleanup is expected to + * finish very quickly in cases where there were no ambulkdelete() + * calls.) 
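+ * + * [Illustrative aside, not part of the original patch: a worked example of + * the crossover test above, assuming the BYPASS_THRESHOLD_PAGES of 0.02 used + * upstream. For rel_pages = 100000 the bypass requires lpdead_item_pages < + * 2000, and the second condition caps dead_items at MAXDEADITEMS(32MB), + * about 5.6 million 6-byte ItemPointerData entries.] + *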
+ */ + vacrel->do_index_vacuuming = false; + } + else if (lazy_vacuum_all_indexes(vacrel)) + { + /* + * We successfully completed a round of index vacuuming. Do related + * heap vacuuming now. + */ + lazy_vacuum_tdeheap_rel(vacrel); + } + else + { + /* + * Failsafe case. + * + * We attempted index vacuuming, but didn't finish a full round/full + * index scan. This happens when relfrozenxid or relminmxid is too + * far in the past. + * + * From this point on the VACUUM operation will do no further index + * vacuuming or heap vacuuming. This VACUUM operation won't end up + * back here again. + */ + Assert(VacuumFailsafeActive); + } + + /* + * Forget the LP_DEAD items that we just vacuumed (or just decided to not + * vacuum) + */ + vacrel->dead_items->num_items = 0; +} + +/* + * lazy_vacuum_all_indexes() -- Main entry for index vacuuming + * + * Returns true in the common case when all indexes were successfully + * vacuumed. Returns false in rare cases where we determined that the ongoing + * VACUUM operation is at risk of taking too long to finish, leading to + * wraparound failure. + */ +static bool +lazy_vacuum_all_indexes(LVRelState *vacrel) +{ + bool allindexes = true; + double old_live_tuples = vacrel->rel->rd_rel->reltuples; + + Assert(vacrel->nindexes > 0); + Assert(vacrel->do_index_vacuuming); + Assert(vacrel->do_index_cleanup); + + /* Precheck for XID wraparound emergencies */ + if (lazy_check_wraparound_failsafe(vacrel)) + { + /* Wraparound emergency -- don't even start an index scan */ + return false; + } + + /* Report that we are now vacuuming indexes */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_VACUUM_INDEX); + + if (!ParallelVacuumIsActive(vacrel)) + { + for (int idx = 0; idx < vacrel->nindexes; idx++) + { + Relation indrel = vacrel->indrels[idx]; + IndexBulkDeleteResult *istat = vacrel->indstats[idx]; + + vacrel->indstats[idx] = lazy_vacuum_one_index(indrel, istat, + old_live_tuples, + vacrel); + + if (lazy_check_wraparound_failsafe(vacrel)) + { + /* Wraparound emergency -- end current index scan */ + allindexes = false; + break; + } + } + } + else + { + /* Outsource everything to parallel variant */ + parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples, + vacrel->num_index_scans); + + /* + * Do a postcheck to consider applying wraparound failsafe now. Note + * that parallel VACUUM only gets the precheck and this postcheck. + */ + if (lazy_check_wraparound_failsafe(vacrel)) + allindexes = false; + } + + /* + * We delete all LP_DEAD items from the first heap pass in all indexes on + * each call here (except calls where we choose to do the failsafe). This + * makes the next call to lazy_vacuum_tdeheap_rel() safe (except in the event + * of the failsafe triggering, which prevents the next call from taking + * place). + */ + Assert(vacrel->num_index_scans > 0 || + vacrel->dead_items->num_items == vacrel->lpdead_items); + Assert(allindexes || VacuumFailsafeActive); + + /* + * Increase and report the number of index scans. + * + * We deliberately include the case where we started a round of bulk + * deletes that we weren't able to finish due to the failsafe triggering. + */ + vacrel->num_index_scans++; + pgstat_progress_update_param(PROGRESS_VACUUM_NUM_INDEX_VACUUMS, + vacrel->num_index_scans); + + return allindexes; +} + +/* + * lazy_vacuum_tdeheap_rel() -- second pass over the heap for two pass strategy + * + * This routine marks LP_DEAD items in vacrel->dead_items array as LP_UNUSED. 
+ * Pages that never had lazy_scan_prune record LP_DEAD items are not visited + * at all. + * + * We may also be able to truncate the line pointer array of the heap pages we + * visit. If there is a contiguous group of LP_UNUSED items at the end of the + * array, it can be reclaimed as free space. These LP_UNUSED items usually + * start out as LP_DEAD items recorded by lazy_scan_prune (we set items from + * each page to LP_UNUSED, and then consider if it's possible to truncate the + * page's line pointer array). + * + * Note: the reason for doing this as a second pass is we cannot remove the + * tuples until we've removed their index entries, and we want to process + * index entry removal in batches as large as possible. + */ +static void +lazy_vacuum_tdeheap_rel(LVRelState *vacrel) +{ + int index = 0; + BlockNumber vacuumed_pages = 0; + Buffer vmbuffer = InvalidBuffer; + LVSavedErrInfo saved_err_info; + + Assert(vacrel->do_index_vacuuming); + Assert(vacrel->do_index_cleanup); + Assert(vacrel->num_index_scans > 0); + + /* Report that we are now vacuuming the heap */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_VACUUM_HEAP); + + /* Update error traceback information */ + update_vacuum_error_info(vacrel, &saved_err_info, + VACUUM_ERRCB_PHASE_VACUUM_HEAP, + InvalidBlockNumber, InvalidOffsetNumber); + + while (index < vacrel->dead_items->num_items) + { + BlockNumber blkno; + Buffer buf; + Page page; + Size freespace; + + vacuum_delay_point(); + + blkno = ItemPointerGetBlockNumber(&vacrel->dead_items->items[index]); + vacrel->blkno = blkno; + + /* + * Pin the visibility map page in case we need to mark the page + * all-visible. In most cases this will be very cheap, because we'll + * already have the correct page pinned anyway. + */ + tdeheap_visibilitymap_pin(vacrel->rel, blkno, &vmbuffer); + + /* We need a non-cleanup exclusive lock to mark dead_items unused */ + buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL, + vacrel->bstrategy); + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + index = lazy_vacuum_tdeheap_page(vacrel, blkno, buf, index, vmbuffer); + + /* Now that we've vacuumed the page, record its available space */ + page = BufferGetPage(buf); + freespace = PageGetHeapFreeSpace(page); + + UnlockReleaseBuffer(buf); + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + vacuumed_pages++; + } + + vacrel->blkno = InvalidBlockNumber; + if (BufferIsValid(vmbuffer)) + ReleaseBuffer(vmbuffer); + + /* + * We set all LP_DEAD items from the first heap pass to LP_UNUSED during + * the second heap pass. No more, no less. + */ + Assert(index > 0); + Assert(vacrel->num_index_scans > 1 || + (index == vacrel->lpdead_items && + vacuumed_pages == vacrel->lpdead_item_pages)); + + ereport(DEBUG2, + (errmsg("table \"%s\": removed %lld dead item identifiers in %u pages", + vacrel->relname, (long long) index, vacuumed_pages))); + + /* Revert to the previous phase information for error traceback */ + restore_vacuum_error_info(vacrel, &saved_err_info); +} + +/* + * lazy_vacuum_tdeheap_page() -- free page's LP_DEAD items listed in the + * vacrel->dead_items array. + * + * Caller must have an exclusive buffer lock on the buffer (though a full + * cleanup lock is also acceptable). vmbuffer must be valid and already have + * a pin on blkno's visibility map page. + * + * index is an offset into the vacrel->dead_items array for the first listed + * LP_DEAD item on the page. 
The return value is the first index immediately + * after all LP_DEAD items for the same page in the array. + */ +static int +lazy_vacuum_tdeheap_page(LVRelState *vacrel, BlockNumber blkno, Buffer buffer, + int index, Buffer vmbuffer) +{ + VacDeadItems *dead_items = vacrel->dead_items; + Page page = BufferGetPage(buffer); + OffsetNumber unused[MaxHeapTuplesPerPage]; + int nunused = 0; + TransactionId visibility_cutoff_xid; + bool all_frozen; + LVSavedErrInfo saved_err_info; + + Assert(vacrel->nindexes == 0 || vacrel->do_index_vacuuming); + + pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno); + + /* Update error traceback information */ + update_vacuum_error_info(vacrel, &saved_err_info, + VACUUM_ERRCB_PHASE_VACUUM_HEAP, blkno, + InvalidOffsetNumber); + + START_CRIT_SECTION(); + + for (; index < dead_items->num_items; index++) + { + BlockNumber tblk; + OffsetNumber toff; + ItemId itemid; + + tblk = ItemPointerGetBlockNumber(&dead_items->items[index]); + if (tblk != blkno) + break; /* past end of tuples for this block */ + toff = ItemPointerGetOffsetNumber(&dead_items->items[index]); + itemid = PageGetItemId(page, toff); + + Assert(ItemIdIsDead(itemid) && !ItemIdHasStorage(itemid)); + ItemIdSetUnused(itemid); + unused[nunused++] = toff; + } + + Assert(nunused > 0); + + /* Attempt to truncate line pointer array now */ + PageTruncateLinePointerArray(page); + + /* + * Mark buffer dirty before we write WAL. + */ + MarkBufferDirty(buffer); + + /* XLOG stuff */ + if (RelationNeedsWAL(vacrel->rel)) + { + xl_tdeheap_vacuum xlrec; + XLogRecPtr recptr; + + xlrec.nunused = nunused; + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapVacuum); + + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + XLogRegisterBufData(0, (char *) unused, nunused * sizeof(OffsetNumber)); + + recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_VACUUM); + + PageSetLSN(page, recptr); + } + + /* + * End critical section, so we safely can do visibility tests (which + * possibly need to perform IO and allocate memory!). If we crash now the + * page (including the corresponding vm bit) might not be marked all + * visible, but that's fine. A later vacuum will fix that. + */ + END_CRIT_SECTION(); + + /* + * Now that we have removed the LP_DEAD items from the page, once again + * check if the page has become all-visible. The page is already marked + * dirty, exclusively locked, and, if needed, a full page image has been + * emitted. + */ + Assert(!PageIsAllVisible(page)); + if (tdeheap_page_is_all_visible(vacrel, buffer, &visibility_cutoff_xid, + &all_frozen)) + { + uint8 flags = VISIBILITYMAP_ALL_VISIBLE; + + if (all_frozen) + { + Assert(!TransactionIdIsValid(visibility_cutoff_xid)); + flags |= VISIBILITYMAP_ALL_FROZEN; + } + + PageSetAllVisible(page); + tdeheap_visibilitymap_set(vacrel->rel, blkno, buffer, InvalidXLogRecPtr, + vmbuffer, visibility_cutoff_xid, flags); + } + + /* Revert to the previous phase information for error traceback */ + restore_vacuum_error_info(vacrel, &saved_err_info); + return index; +} + +/* + * Trigger the failsafe to avoid wraparound failure when vacrel table has a + * relfrozenxid and/or relminmxid that is dangerously far in the past. + * Triggering the failsafe makes the ongoing VACUUM bypass any further index + * vacuuming and heap vacuuming. Truncating the heap is also bypassed. + * + * Any remaining work (work that VACUUM cannot just bypass) is typically sped + * up when the failsafe triggers. VACUUM stops applying any cost-based delay + * that it started out with. 
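+ * + * [Illustrative aside, not part of the original patch: + * vacuum_xid_failsafe_check() fires when age(relfrozenxid) exceeds + * vacuum_failsafe_age, or age(relminmxid) exceeds + * vacuum_multixact_failsafe_age (both 1.6 billion by default). The code + * below then drops the buffer access strategy and clears do_index_vacuuming, + * do_index_cleanup and do_rel_truncate, leaving only the work that actually + * advances relfrozenxid.] + *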
+ * + * Returns true when failsafe has been triggered. + */ +static bool +lazy_check_wraparound_failsafe(LVRelState *vacrel) +{ + /* Don't warn more than once per VACUUM */ + if (VacuumFailsafeActive) + return true; + + if (unlikely(vacuum_xid_failsafe_check(&vacrel->cutoffs))) + { + VacuumFailsafeActive = true; + + /* + * Abandon use of a buffer access strategy to allow use of all of + * shared buffers. We assume the caller who allocated the memory for + * the BufferAccessStrategy will free it. + */ + vacrel->bstrategy = NULL; + + /* Disable index vacuuming, index cleanup, and heap rel truncation */ + vacrel->do_index_vacuuming = false; + vacrel->do_index_cleanup = false; + vacrel->do_rel_truncate = false; + + ereport(WARNING, + (errmsg("bypassing nonessential maintenance of table \"%s.%s.%s\" as a failsafe after %d index scans", + vacrel->dbname, vacrel->relnamespace, vacrel->relname, + vacrel->num_index_scans), + errdetail("The table's relfrozenxid or relminmxid is too far in the past."), + errhint("Consider increasing configuration parameter \"maintenance_work_mem\" or \"autovacuum_work_mem\".\n" + "You might also need to consider other ways for VACUUM to keep up with the allocation of transaction IDs."))); + + /* Stop applying cost limits from this point on */ + VacuumCostActive = false; + VacuumCostBalance = 0; + + return true; + } + + return false; +} + +/* + * lazy_cleanup_all_indexes() -- cleanup all indexes of relation. + */ +static void +lazy_cleanup_all_indexes(LVRelState *vacrel) +{ + double reltuples = vacrel->new_rel_tuples; + bool estimated_count = vacrel->scanned_pages < vacrel->rel_pages; + + Assert(vacrel->do_index_cleanup); + Assert(vacrel->nindexes > 0); + + /* Report that we are now cleaning up indexes */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_INDEX_CLEANUP); + + if (!ParallelVacuumIsActive(vacrel)) + { + for (int idx = 0; idx < vacrel->nindexes; idx++) + { + Relation indrel = vacrel->indrels[idx]; + IndexBulkDeleteResult *istat = vacrel->indstats[idx]; + + vacrel->indstats[idx] = + lazy_cleanup_one_index(indrel, istat, reltuples, + estimated_count, vacrel); + } + } + else + { + /* Outsource everything to parallel variant */ + parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples, + vacrel->num_index_scans, + estimated_count); + } +} + +/* + * lazy_vacuum_one_index() -- vacuum index relation. + * + * Delete all the index tuples containing a TID collected in + * vacrel->dead_items array. Also update running statistics. + * Exact details depend on index AM's ambulkdelete routine. + * + * reltuples is the number of heap tuples to be passed to the + * bulkdelete callback. It's always assumed to be estimated. + * See indexam.sgml for more info. + * + * Returns bulk delete stats derived from input stats + */ +static IndexBulkDeleteResult * +lazy_vacuum_one_index(Relation indrel, IndexBulkDeleteResult *istat, + double reltuples, LVRelState *vacrel) +{ + IndexVacuumInfo ivinfo; + LVSavedErrInfo saved_err_info; + + ivinfo.index = indrel; + ivinfo.heaprel = vacrel->rel; + ivinfo.analyze_only = false; + ivinfo.report_progress = false; + ivinfo.estimated_count = true; + ivinfo.message_level = DEBUG2; + ivinfo.num_heap_tuples = reltuples; + ivinfo.strategy = vacrel->bstrategy; + + /* + * Update error traceback information. + * + * The index name is saved during this phase and restored immediately + * after this phase. See vacuum_error_callback. 
+ */ + Assert(vacrel->indname == NULL); + vacrel->indname = pstrdup(RelationGetRelationName(indrel)); + update_vacuum_error_info(vacrel, &saved_err_info, + VACUUM_ERRCB_PHASE_VACUUM_INDEX, + InvalidBlockNumber, InvalidOffsetNumber); + + /* Do bulk deletion */ + istat = vac_bulkdel_one_index(&ivinfo, istat, (void *) vacrel->dead_items); + + /* Revert to the previous phase information for error traceback */ + restore_vacuum_error_info(vacrel, &saved_err_info); + pfree(vacrel->indname); + vacrel->indname = NULL; + + return istat; +} + +/* + * lazy_cleanup_one_index() -- do post-vacuum cleanup for index relation. + * + * Calls index AM's amvacuumcleanup routine. reltuples is the number + * of heap tuples and estimated_count is true if reltuples is an + * estimated value. See indexam.sgml for more info. + * + * Returns bulk delete stats derived from input stats + */ +static IndexBulkDeleteResult * +lazy_cleanup_one_index(Relation indrel, IndexBulkDeleteResult *istat, + double reltuples, bool estimated_count, + LVRelState *vacrel) +{ + IndexVacuumInfo ivinfo; + LVSavedErrInfo saved_err_info; + + ivinfo.index = indrel; + ivinfo.heaprel = vacrel->rel; + ivinfo.analyze_only = false; + ivinfo.report_progress = false; + ivinfo.estimated_count = estimated_count; + ivinfo.message_level = DEBUG2; + + ivinfo.num_heap_tuples = reltuples; + ivinfo.strategy = vacrel->bstrategy; + + /* + * Update error traceback information. + * + * The index name is saved during this phase and restored immediately + * after this phase. See vacuum_error_callback. + */ + Assert(vacrel->indname == NULL); + vacrel->indname = pstrdup(RelationGetRelationName(indrel)); + update_vacuum_error_info(vacrel, &saved_err_info, + VACUUM_ERRCB_PHASE_INDEX_CLEANUP, + InvalidBlockNumber, InvalidOffsetNumber); + + istat = vac_cleanup_one_index(&ivinfo, istat); + + /* Revert to the previous phase information for error traceback */ + restore_vacuum_error_info(vacrel, &saved_err_info); + pfree(vacrel->indname); + vacrel->indname = NULL; + + return istat; +} + +/* + * should_attempt_truncation - should we attempt to truncate the heap? + * + * Don't even think about it unless we have a shot at releasing a goodly + * number of pages. Otherwise, the time taken isn't worth it, mainly because + * an AccessExclusive lock must be replayed on any hot standby, where it can + * be particularly disruptive. + * + * Also don't attempt it if wraparound failsafe is in effect. The entire + * system might be refusing to allocate new XIDs at this point. The system + * definitely won't return to normal unless and until VACUUM actually advances + * the oldest relfrozenxid -- which hasn't happened for target rel just yet. + * If lazy_truncate_heap attempted to acquire an AccessExclusiveLock to + * truncate the table under these circumstances, an XID exhaustion error might + * make it impossible for VACUUM to fix the underlying XID exhaustion problem. + * There is very little chance of truncation working out when the failsafe is + * in effect in any case. lazy_scan_prune makes the optimistic assumption + * that any LP_DEAD items it encounters will always be LP_UNUSED by the time + * we're called. + * + * Also don't attempt it if we are doing early pruning/vacuuming, because a + * scan which cannot find a truncated heap page cannot determine that the + * snapshot is too old to read that page. 
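+ * + * [Illustrative aside, not part of the original patch: a worked example of + * the heuristic implemented below, assuming the REL_TRUNCATE_MINIMUM of 1000 + * and REL_TRUNCATE_FRACTION of 16 used upstream] + * + * possibly_freeable = 4800 - 4000 = 800 + * 800 >= 1000 fails the absolute minimum + * 800 >= 4800 / 16 = 300 passes the fractional test + * + * [so a 4800-page table with 4000 nonempty pages does get a truncation + * attempt.] + *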
+ */ +static bool +should_attempt_truncation(LVRelState *vacrel) +{ + BlockNumber possibly_freeable; + + if (!vacrel->do_rel_truncate || VacuumFailsafeActive || + old_snapshot_threshold >= 0) + return false; + + possibly_freeable = vacrel->rel_pages - vacrel->nonempty_pages; + if (possibly_freeable > 0 && + (possibly_freeable >= REL_TRUNCATE_MINIMUM || + possibly_freeable >= vacrel->rel_pages / REL_TRUNCATE_FRACTION)) + return true; + + return false; +} + +/* + * lazy_truncate_heap - try to truncate off any empty pages at the end + */ +static void +lazy_truncate_heap(LVRelState *vacrel) +{ + BlockNumber orig_rel_pages = vacrel->rel_pages; + BlockNumber new_rel_pages; + bool lock_waiter_detected; + int lock_retry; + + /* Report that we are now truncating */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_TRUNCATE); + + /* Update error traceback information one last time */ + update_vacuum_error_info(vacrel, NULL, VACUUM_ERRCB_PHASE_TRUNCATE, + vacrel->nonempty_pages, InvalidOffsetNumber); + + /* + * Loop until no more truncating can be done. + */ + do + { + /* + * We need full exclusive lock on the relation in order to do + * truncation. If we can't get it, give up rather than waiting --- we + * don't want to block other backends, and we don't want to deadlock + * (which is quite possible considering we already hold a lower-grade + * lock). + */ + lock_waiter_detected = false; + lock_retry = 0; + while (true) + { + if (ConditionalLockRelation(vacrel->rel, AccessExclusiveLock)) + break; + + /* + * Check for interrupts while trying to (re-)acquire the exclusive + * lock. + */ + CHECK_FOR_INTERRUPTS(); + + if (++lock_retry > (VACUUM_TRUNCATE_LOCK_TIMEOUT / + VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL)) + { + /* + * We failed to establish the lock in the specified number of + * retries. This means we give up truncating. + */ + ereport(vacrel->verbose ? INFO : DEBUG2, + (errmsg("\"%s\": stopping truncate due to conflicting lock request", + vacrel->relname))); + return; + } + + (void) WaitLatch(MyLatch, + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH, + VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL, + WAIT_EVENT_VACUUM_TRUNCATE); + ResetLatch(MyLatch); + } + + /* + * Now that we have exclusive lock, look to see if the rel has grown + * whilst we were vacuuming with non-exclusive lock. If so, give up; + * the newly added pages presumably contain non-deletable tuples. + */ + new_rel_pages = RelationGetNumberOfBlocks(vacrel->rel); + if (new_rel_pages != orig_rel_pages) + { + /* + * Note: we intentionally don't update vacrel->rel_pages with the + * new rel size here. If we did, it would amount to assuming that + * the new pages are empty, which is unlikely. Leaving the numbers + * alone amounts to assuming that the new pages have the same + * tuple density as existing ones, which is less unlikely. + */ + UnlockRelation(vacrel->rel, AccessExclusiveLock); + return; + } + + /* + * Scan backwards from the end to verify that the end pages actually + * contain no tuples. This is *necessary*, not optional, because + * other backends could have added tuples to these pages whilst we + * were vacuuming. + */ + new_rel_pages = count_nondeletable_pages(vacrel, &lock_waiter_detected); + vacrel->blkno = new_rel_pages; + + if (new_rel_pages >= orig_rel_pages) + { + /* can't do anything after all */ + UnlockRelation(vacrel->rel, AccessExclusiveLock); + return; + } + + /* + * Okay to truncate. 
+ */ + RelationTruncate(vacrel->rel, new_rel_pages); + + /* + * We can release the exclusive lock as soon as we have truncated. + * Other backends can't safely access the relation until they have + * processed the smgr invalidation that smgrtruncate sent out ... but + * that should happen as part of standard invalidation processing once + * they acquire lock on the relation. + */ + UnlockRelation(vacrel->rel, AccessExclusiveLock); + + /* + * Update statistics. Here, it *is* correct to adjust rel_pages + * without also touching reltuples, since the tuple count wasn't + * changed by the truncation. + */ + vacrel->removed_pages += orig_rel_pages - new_rel_pages; + vacrel->rel_pages = new_rel_pages; + + ereport(vacrel->verbose ? INFO : DEBUG2, + (errmsg("table \"%s\": truncated %u to %u pages", + vacrel->relname, + orig_rel_pages, new_rel_pages))); + orig_rel_pages = new_rel_pages; + } while (new_rel_pages > vacrel->nonempty_pages && lock_waiter_detected); +} + +/* + * Rescan end pages to verify that they are (still) empty of tuples. + * + * Returns number of nondeletable pages (last nonempty page + 1). + */ +static BlockNumber +count_nondeletable_pages(LVRelState *vacrel, bool *lock_waiter_detected) +{ + BlockNumber blkno; + BlockNumber prefetchedUntil; + instr_time starttime; + + /* Initialize the starttime if we check for conflicting lock requests */ + INSTR_TIME_SET_CURRENT(starttime); + + /* + * Start checking blocks at what we believe relation end to be and move + * backwards. (Strange coding of loop control is needed because blkno is + * unsigned.) To make the scan faster, we prefetch a few blocks at a time + * in forward direction, so that OS-level readahead can kick in. + */ + blkno = vacrel->rel_pages; + StaticAssertStmt((PREFETCH_SIZE & (PREFETCH_SIZE - 1)) == 0, + "prefetch size must be power of 2"); + prefetchedUntil = InvalidBlockNumber; + while (blkno > vacrel->nonempty_pages) + { + Buffer buf; + Page page; + OffsetNumber offnum, + maxoff; + bool hastup; + + /* + * Check if another process requests a lock on our relation. We are + * holding an AccessExclusiveLock here, so they will be waiting. We + * only do this once per VACUUM_TRUNCATE_LOCK_CHECK_INTERVAL, and we + * only check if that interval has elapsed once every 32 blocks to + * keep the number of system calls and actual shared lock table + * lookups to a minimum. + */ + if ((blkno % 32) == 0) + { + instr_time currenttime; + instr_time elapsed; + + INSTR_TIME_SET_CURRENT(currenttime); + elapsed = currenttime; + INSTR_TIME_SUBTRACT(elapsed, starttime); + if ((INSTR_TIME_GET_MICROSEC(elapsed) / 1000) + >= VACUUM_TRUNCATE_LOCK_CHECK_INTERVAL) + { + if (LockHasWaitersRelation(vacrel->rel, AccessExclusiveLock)) + { + ereport(vacrel->verbose ? INFO : DEBUG2, + (errmsg("table \"%s\": suspending truncate due to conflicting lock request", + vacrel->relname))); + + *lock_waiter_detected = true; + return blkno; + } + starttime = currenttime; + } + } + + /* + * We don't insert a vacuum delay point here, because we have an + * exclusive lock on the table which we want to hold for as short a + * time as possible. We still need to check for interrupts however. + */ + CHECK_FOR_INTERRUPTS(); + + blkno--; + + /* If we haven't prefetched this lot yet, do so now. 
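+ * + * [Illustrative aside, not part of the original patch: PREFETCH_SIZE is a + * power of two (32 upstream), so the mask below rounds blkno down to a + * prefetch-chunk boundary; e.g. blkno = 1000 gives prefetchStart = 1000 & + * ~31 = 992, and blocks 992..1000 are prefetched before the backward scan + * reads them.] + *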
*/ + if (prefetchedUntil > blkno) + { + BlockNumber prefetchStart; + BlockNumber pblkno; + + prefetchStart = blkno & ~(PREFETCH_SIZE - 1); + for (pblkno = prefetchStart; pblkno <= blkno; pblkno++) + { + PrefetchBuffer(vacrel->rel, MAIN_FORKNUM, pblkno); + CHECK_FOR_INTERRUPTS(); + } + prefetchedUntil = prefetchStart; + } + + buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL, + vacrel->bstrategy); + + /* In this phase we only need shared access to the buffer */ + LockBuffer(buf, BUFFER_LOCK_SHARE); + + page = BufferGetPage(buf); + + if (PageIsNew(page) || PageIsEmpty(page)) + { + UnlockReleaseBuffer(buf); + continue; + } + + hastup = false; + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + ItemId itemid; + + itemid = PageGetItemId(page, offnum); + + /* + * Note: any non-unused item should be taken as a reason to keep + * this page. Even an LP_DEAD item makes truncation unsafe, since + * we must not have cleaned out its index entries. + */ + if (ItemIdIsUsed(itemid)) + { + hastup = true; + break; /* can stop scanning */ + } + } /* scan along page */ + + UnlockReleaseBuffer(buf); + + /* Done scanning if we found a tuple here */ + if (hastup) + return blkno + 1; + } + + /* + * If we fall out of the loop, all the previously-thought-to-be-empty + * pages still are; we need not bother to look at the last known-nonempty + * page. + */ + return vacrel->nonempty_pages; +} + +/* + * Returns the number of dead TIDs that VACUUM should allocate space to + * store, given a heap rel of size vacrel->rel_pages, and given current + * maintenance_work_mem setting (or current autovacuum_work_mem setting, + * when applicable). + * + * See the comments at the head of this file for rationale. + */ +static int +dead_items_max_items(LVRelState *vacrel) +{ + int64 max_items; + int vac_work_mem = IsAutoVacuumWorkerProcess() && + autovacuum_work_mem != -1 ? + autovacuum_work_mem : maintenance_work_mem; + + if (vacrel->nindexes > 0) + { + BlockNumber rel_pages = vacrel->rel_pages; + + max_items = MAXDEADITEMS(vac_work_mem * 1024L); + max_items = Min(max_items, INT_MAX); + max_items = Min(max_items, MAXDEADITEMS(MaxAllocSize)); + + /* curious coding here to ensure the multiplication can't overflow */ + if ((BlockNumber) (max_items / MaxHeapTuplesPerPage) > rel_pages) + max_items = rel_pages * MaxHeapTuplesPerPage; + + /* stay sane if small maintenance_work_mem */ + max_items = Max(max_items, MaxHeapTuplesPerPage); + } + else + { + /* One-pass case only stores a single heap page's TIDs at a time */ + max_items = MaxHeapTuplesPerPage; + } + + return (int) max_items; +} + +/* + * Allocate dead_items (either using palloc, or in dynamic shared memory). + * Sets dead_items in vacrel for caller. + * + * Also handles parallel initialization as part of allocating dead_items in + * DSM when required. + */ +static void +dead_items_alloc(LVRelState *vacrel, int nworkers) +{ + VacDeadItems *dead_items; + int max_items; + + max_items = dead_items_max_items(vacrel); + Assert(max_items >= MaxHeapTuplesPerPage); + + /* + * Initialize state for a parallel vacuum. As of now, only one worker can + * be used for an index, so we invoke parallelism only if there are at + * least two indexes on a table. + */ + if (nworkers >= 0 && vacrel->nindexes > 1 && vacrel->do_index_vacuuming) + { + /* + * Since parallel workers cannot access data in temporary tables, we + * can't perform parallel vacuum on them. 
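+ * + * [Illustrative aside, not part of the original patch: tying numbers to + * dead_items_max_items() above, with the 64MB maintenance_work_mem default + * and at least one index, MAXDEADITEMS(64MB) permits roughly 11 million + * 6-byte TIDs, clamped to rel_pages * MaxHeapTuplesPerPage (291 per 8K page) + * so a small table never reserves more than it could possibly need; with no + * indexes only a single page's worth of TIDs is ever held.] + *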
+ */ + if (RelationUsesLocalBuffers(vacrel->rel)) + { + /* + * Give warning only if the user explicitly tries to perform a + * parallel vacuum on the temporary table. + */ + if (nworkers > 0) + ereport(WARNING, + (errmsg("disabling parallel option of vacuum on \"%s\" --- cannot vacuum temporary tables in parallel", + vacrel->relname))); + } + else + vacrel->pvs = parallel_vacuum_init(vacrel->rel, vacrel->indrels, + vacrel->nindexes, nworkers, + max_items, + vacrel->verbose ? INFO : DEBUG2, + vacrel->bstrategy); + + /* If parallel mode started, dead_items space is allocated in DSM */ + if (ParallelVacuumIsActive(vacrel)) + { + vacrel->dead_items = parallel_vacuum_get_dead_items(vacrel->pvs); + return; + } + } + + /* Serial VACUUM case */ + dead_items = (VacDeadItems *) palloc(vac_max_items_to_alloc_size(max_items)); + dead_items->max_items = max_items; + dead_items->num_items = 0; + + vacrel->dead_items = dead_items; +} + +/* + * Perform cleanup for resources allocated in dead_items_alloc + */ +static void +dead_items_cleanup(LVRelState *vacrel) +{ + if (!ParallelVacuumIsActive(vacrel)) + { + /* Don't bother with pfree here */ + return; + } + + /* End parallel mode */ + parallel_vacuum_end(vacrel->pvs, vacrel->indstats); + vacrel->pvs = NULL; +} + +/* + * Check if every tuple in the given page is visible to all current and future + * transactions. Also return the visibility_cutoff_xid which is the highest + * xmin amongst the visible tuples. Set *all_frozen to true if every tuple + * on this page is frozen. + * + * This is a stripped down version of lazy_scan_prune(). If you change + * anything here, make sure that everything stays in sync. Note that an + * assertion calls us to verify that everybody still agrees. Be sure to avoid + * introducing new side-effects here. + */ +static bool +tdeheap_page_is_all_visible(LVRelState *vacrel, Buffer buf, + TransactionId *visibility_cutoff_xid, + bool *all_frozen) +{ + Page page = BufferGetPage(buf); + BlockNumber blockno = BufferGetBlockNumber(buf); + OffsetNumber offnum, + maxoff; + bool all_visible = true; + + *visibility_cutoff_xid = InvalidTransactionId; + *all_frozen = true; + + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; + offnum <= maxoff && all_visible; + offnum = OffsetNumberNext(offnum)) + { + ItemId itemid; + HeapTupleData tuple; + + /* + * Set the offset number so that we can display it along with any + * error that occurred while processing this tuple. + */ + vacrel->offnum = offnum; + itemid = PageGetItemId(page, offnum); + + /* Unused or redirect line pointers are of no interest */ + if (!ItemIdIsUsed(itemid) || ItemIdIsRedirected(itemid)) + continue; + + ItemPointerSet(&(tuple.t_self), blockno, offnum); + + /* + * Dead line pointers can have index pointers pointing to them. So + * they can't be treated as visible + */ + if (ItemIdIsDead(itemid)) + { + all_visible = false; + *all_frozen = false; + break; + } + + Assert(ItemIdIsNormal(itemid)); + + tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid); + tuple.t_len = ItemIdGetLength(itemid); + tuple.t_tableOid = RelationGetRelid(vacrel->rel); + + switch (HeapTupleSatisfiesVacuum(&tuple, vacrel->cutoffs.OldestXmin, + buf)) + { + case HEAPTUPLE_LIVE: + { + TransactionId xmin; + + /* Check comments in lazy_scan_prune. */ + if (!HeapTupleHeaderXminCommitted(tuple.t_data)) + { + all_visible = false; + *all_frozen = false; + break; + } + + /* + * The inserter definitely committed. But is it old enough + * that everyone sees it as committed? 
+ */ + xmin = HeapTupleHeaderGetXmin(tuple.t_data); + if (!TransactionIdPrecedes(xmin, + vacrel->cutoffs.OldestXmin)) + { + all_visible = false; + *all_frozen = false; + break; + } + + /* Track newest xmin on page. */ + if (TransactionIdFollows(xmin, *visibility_cutoff_xid) && + TransactionIdIsNormal(xmin)) + *visibility_cutoff_xid = xmin; + + /* Check whether this tuple is already frozen or not */ + if (all_visible && *all_frozen && + tdeheap_tuple_needs_eventual_freeze(tuple.t_data)) + *all_frozen = false; + } + break; + + case HEAPTUPLE_DEAD: + case HEAPTUPLE_RECENTLY_DEAD: + case HEAPTUPLE_INSERT_IN_PROGRESS: + case HEAPTUPLE_DELETE_IN_PROGRESS: + { + all_visible = false; + *all_frozen = false; + break; + } + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + break; + } + } /* scan along page */ + + /* Clear the offset information once we have processed the given page. */ + vacrel->offnum = InvalidOffsetNumber; + + return all_visible; +} + +/* + * Update index statistics in pg_class if the statistics are accurate. + */ +static void +update_relstats_all_indexes(LVRelState *vacrel) +{ + Relation *indrels = vacrel->indrels; + int nindexes = vacrel->nindexes; + IndexBulkDeleteResult **indstats = vacrel->indstats; + + Assert(vacrel->do_index_cleanup); + + for (int idx = 0; idx < nindexes; idx++) + { + Relation indrel = indrels[idx]; + IndexBulkDeleteResult *istat = indstats[idx]; + + if (istat == NULL || istat->estimated_count) + continue; + + /* Update index statistics */ + vac_update_relstats(indrel, + istat->num_pages, + istat->num_index_tuples, + 0, + false, + InvalidTransactionId, + InvalidMultiXactId, + NULL, NULL, false); + } +} + +/* + * Error context callback for errors occurring during vacuum. The error + * context messages for index phases should match the messages set in parallel + * vacuum. If you change this function for those phases, change + * parallel_vacuum_error_callback() as well. 
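+ * + * [Illustrative aside, not part of the original patch: given the state saved + * by update_vacuum_error_info(), an ERROR thrown during the first heap pass + * surfaces with a context line such as] + * + * CONTEXT: while scanning block 42 offset 7 of relation "public.orders" + * + * [where "public.orders" and the numbers are of course made up for the + * example.] + *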
+ */ +static void +vacuum_error_callback(void *arg) +{ + LVRelState *errinfo = arg; + + switch (errinfo->phase) + { + case VACUUM_ERRCB_PHASE_SCAN_HEAP: + if (BlockNumberIsValid(errinfo->blkno)) + { + if (OffsetNumberIsValid(errinfo->offnum)) + errcontext("while scanning block %u offset %u of relation \"%s.%s\"", + errinfo->blkno, errinfo->offnum, errinfo->relnamespace, errinfo->relname); + else + errcontext("while scanning block %u of relation \"%s.%s\"", + errinfo->blkno, errinfo->relnamespace, errinfo->relname); + } + else + errcontext("while scanning relation \"%s.%s\"", + errinfo->relnamespace, errinfo->relname); + break; + + case VACUUM_ERRCB_PHASE_VACUUM_HEAP: + if (BlockNumberIsValid(errinfo->blkno)) + { + if (OffsetNumberIsValid(errinfo->offnum)) + errcontext("while vacuuming block %u offset %u of relation \"%s.%s\"", + errinfo->blkno, errinfo->offnum, errinfo->relnamespace, errinfo->relname); + else + errcontext("while vacuuming block %u of relation \"%s.%s\"", + errinfo->blkno, errinfo->relnamespace, errinfo->relname); + } + else + errcontext("while vacuuming relation \"%s.%s\"", + errinfo->relnamespace, errinfo->relname); + break; + + case VACUUM_ERRCB_PHASE_VACUUM_INDEX: + errcontext("while vacuuming index \"%s\" of relation \"%s.%s\"", + errinfo->indname, errinfo->relnamespace, errinfo->relname); + break; + + case VACUUM_ERRCB_PHASE_INDEX_CLEANUP: + errcontext("while cleaning up index \"%s\" of relation \"%s.%s\"", + errinfo->indname, errinfo->relnamespace, errinfo->relname); + break; + + case VACUUM_ERRCB_PHASE_TRUNCATE: + if (BlockNumberIsValid(errinfo->blkno)) + errcontext("while truncating relation \"%s.%s\" to %u blocks", + errinfo->relnamespace, errinfo->relname, errinfo->blkno); + break; + + case VACUUM_ERRCB_PHASE_UNKNOWN: + default: + return; /* do nothing; the errinfo may not be + * initialized */ + } +} + +/* + * Updates the information required for vacuum error callback. This also saves + * the current information which can be later restored via restore_vacuum_error_info. + */ +static void +update_vacuum_error_info(LVRelState *vacrel, LVSavedErrInfo *saved_vacrel, + int phase, BlockNumber blkno, OffsetNumber offnum) +{ + if (saved_vacrel) + { + saved_vacrel->offnum = vacrel->offnum; + saved_vacrel->blkno = vacrel->blkno; + saved_vacrel->phase = vacrel->phase; + } + + vacrel->blkno = blkno; + vacrel->offnum = offnum; + vacrel->phase = phase; +} + +/* + * Restores the vacuum information saved via a prior call to update_vacuum_error_info. 
+ */
+static void
+restore_vacuum_error_info(LVRelState *vacrel,
+						  const LVSavedErrInfo *saved_vacrel)
+{
+	vacrel->blkno = saved_vacrel->blkno;
+	vacrel->offnum = saved_vacrel->offnum;
+	vacrel->phase = saved_vacrel->phase;
+}
diff --git a/contrib/pg_tde/src16/access/pg_tde_visibilitymap.c b/contrib/pg_tde/src16/access/pg_tde_visibilitymap.c
new file mode 100644
index 00000000000..bef5bbfff88
--- /dev/null
+++ b/contrib/pg_tde/src16/access/pg_tde_visibilitymap.c
@@ -0,0 +1,650 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tde_visibilitymap.c
+ *	  bitmap for tracking visibility of heap tuples
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *	  contrib/pg_tde/src16/access/pg_tde_visibilitymap.c
+ *
+ * INTERFACE ROUTINES
+ *		tdeheap_visibilitymap_clear - clear bits for one page in the visibility map
+ *		tdeheap_visibilitymap_pin - pin a map page for setting a bit
+ *		tdeheap_visibilitymap_pin_ok - check whether correct map page is already pinned
+ *		tdeheap_visibilitymap_set - set a bit in a previously pinned page
+ *		tdeheap_visibilitymap_get_status - get status of bits
+ *		tdeheap_visibilitymap_count - count number of bits set in visibility map
+ *		tdeheap_visibilitymap_prepare_truncate -
+ *			prepare for truncation of the visibility map
+ *
+ * NOTES
+ *
+ * The visibility map is a bitmap with two bits (all-visible and all-frozen)
+ * per heap page. A set all-visible bit means that all tuples on the page are
+ * known visible to all transactions, and therefore the page doesn't need to
+ * be vacuumed. A set all-frozen bit means that all tuples on the page are
+ * completely frozen, and therefore the page doesn't need to be vacuumed even
+ * when a whole-table-scanning vacuum (e.g. an anti-wraparound vacuum) is
+ * required. The all-frozen bit must be set only when the page is already
+ * all-visible.
+ *
+ * The map is conservative in the sense that we make sure that whenever a bit
+ * is set, we know the condition is true, but if a bit is not set, it might or
+ * might not be true.
+ *
+ * Clearing visibility map bits is not separately WAL-logged. The callers
+ * must make sure that whenever a bit is cleared, the bit is cleared on WAL
+ * replay of the updating operation as well.
+ *
+ * When we *set* a visibility map bit during VACUUM, we must write WAL. This
+ * may seem counterintuitive, since the bit is basically a hint: if it is
+ * clear, it may still be the case that every tuple on the page is visible to
+ * all transactions; we just don't know that for certain. The difficulty is
+ * that there are two bits which are typically set together: the
+ * PD_ALL_VISIBLE bit on the page itself, and the visibility map bit. If a
+ * crash occurs after the visibility map page makes it to disk and before the
+ * updated heap page makes it to disk, redo must set the bit on the heap page.
+ * Otherwise, the next insert, update, or delete on the heap page will fail to
+ * realize that the visibility map bit must be cleared, possibly causing
+ * index-only scans to return wrong answers.
+ *
+ * VACUUM will normally skip pages for which the visibility map bit is set;
+ * such pages can't contain any dead tuples and therefore don't need
+ * vacuuming.
+ *
+ * LOCKING
+ *
+ * In pg_tdeam.c, whenever a page is modified so that not all tuples on the
+ * page are visible to everyone anymore, the corresponding bit in the
+ * visibility map is cleared.
In order to be crash-safe, we need to do this
+ * while still holding a lock on the heap page and in the same critical
+ * section that logs the page modification. However, we don't want to hold
+ * the buffer lock over any I/O that may be required to read in the
+ * visibility map page. To avoid this, we examine the heap page before
+ * locking it; if the page-level PD_ALL_VISIBLE bit is set, we pin the
+ * visibility map page. Then, we lock the buffer. But this creates a race
+ * condition: there is a possibility that in the time it takes to lock the
+ * buffer, the PD_ALL_VISIBLE bit gets set. If that happens, we have to
+ * unlock the buffer, pin the visibility map page, and relock the buffer.
+ * This shouldn't happen often, because only VACUUM currently sets
+ * visibility map bits, and the race will only occur if VACUUM processes a
+ * given page at almost exactly the same time that someone tries to further
+ * modify it.
+ *
+ * To set a bit, you need to hold a lock on the heap page. That prevents
+ * the race condition where VACUUM sees that all tuples on the page are
+ * visible to everyone, but another backend modifies the page before VACUUM
+ * sets the bit in the visibility map.
+ *
+ * When a bit is set, the LSN of the visibility map page is updated to make
+ * sure that the visibility map update doesn't get written to disk before the
+ * WAL record of the changes that made it possible to set the bit is flushed.
+ * But when a bit is cleared, we don't have to do that because it's always
+ * safe to clear a bit in the map from a correctness point of view.
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "pg_tde_defines.h"
+
+#include "postgres.h"
+
+#include "access/pg_tdeam_xlog.h"
+#include "access/pg_tde_visibilitymap.h"
+
+#include "access/xloginsert.h"
+#include "access/xlogutils.h"
+#include "miscadmin.h"
+#include "port/pg_bitutils.h"
+#include "storage/bufmgr.h"
+#include "storage/lmgr.h"
+#include "storage/smgr.h"
+#include "utils/inval.h"
+
+
+/*#define TRACE_VISIBILITYMAP */
+
+/*
+ * Size of the bitmap on each visibility map page, in bytes. There are no
+ * extra headers, so the whole page minus the standard page header is used
+ * for the bitmap.
+ */
+#define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))
+
+/* Number of heap blocks we can represent in one byte */
+#define HEAPBLOCKS_PER_BYTE (BITS_PER_BYTE / BITS_PER_HEAPBLOCK)
+
+/* Number of heap blocks we can represent in one visibility map page. */
+#define HEAPBLOCKS_PER_PAGE (MAPSIZE * HEAPBLOCKS_PER_BYTE)
+
+/* Mapping from heap block number to the right bit in the visibility map */
+#define HEAPBLK_TO_MAPBLOCK(x) ((x) / HEAPBLOCKS_PER_PAGE)
+#define HEAPBLK_TO_MAPBYTE(x) (((x) % HEAPBLOCKS_PER_PAGE) / HEAPBLOCKS_PER_BYTE)
+#define HEAPBLK_TO_OFFSET(x) (((x) % HEAPBLOCKS_PER_BYTE) * BITS_PER_HEAPBLOCK)
+
+/* Masks for counting subsets of bits in the visibility map. */
+#define VISIBLE_MASK64 UINT64CONST(0x5555555555555555) /* The lower bit of each
+														* bit pair */
+#define FROZEN_MASK64 UINT64CONST(0xaaaaaaaaaaaaaaaa)	/* The upper bit of each
+														* bit pair */
+
+/* prototypes for internal routines */
+static Buffer vm_readbuf(Relation rel, BlockNumber blkno, bool extend);
+static Buffer vm_extend(Relation rel, BlockNumber vm_nblocks);
+
+
+/*
+ * tdeheap_visibilitymap_clear - clear specified bits for one page in visibility map
+ *
+ * You must pass a buffer containing the correct map page to this function.
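+ *
+ * A sketched call sequence (buffer variables are illustrative; the heap
+ * page must be exclusively locked by the time the bits are cleared):
+ *
+ *     tdeheap_visibilitymap_pin(rel, heapBlk, &vmbuf);
+ *     LockBuffer(heapBuf, BUFFER_LOCK_EXCLUSIVE);
+ *     ... modify the heap page ...
+ *     tdeheap_visibilitymap_clear(rel, heapBlk, vmbuf,
+ *                                 VISIBILITYMAP_VALID_BITS);
+ *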
+ * Call tdeheap_visibilitymap_pin first to pin the right one. This function doesn't do + * any I/O. Returns true if any bits have been cleared and false otherwise. + */ +bool +tdeheap_visibilitymap_clear(Relation rel, BlockNumber heapBlk, Buffer vmbuf, uint8 flags) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + int mapByte = HEAPBLK_TO_MAPBYTE(heapBlk); + int mapOffset = HEAPBLK_TO_OFFSET(heapBlk); + uint8 mask = flags << mapOffset; + char *map; + bool cleared = false; + + /* Must never clear all_visible bit while leaving all_frozen bit set */ + Assert(flags & VISIBILITYMAP_VALID_BITS); + Assert(flags != VISIBILITYMAP_ALL_VISIBLE); + +#ifdef TRACE_VISIBILITYMAP + elog(DEBUG1, "vm_clear %s %d", RelationGetRelationName(rel), heapBlk); +#endif + + if (!BufferIsValid(vmbuf) || BufferGetBlockNumber(vmbuf) != mapBlock) + elog(ERROR, "wrong buffer passed to tdeheap_visibilitymap_clear"); + + LockBuffer(vmbuf, BUFFER_LOCK_EXCLUSIVE); + map = PageGetContents(BufferGetPage(vmbuf)); + + if (map[mapByte] & mask) + { + map[mapByte] &= ~mask; + + MarkBufferDirty(vmbuf); + cleared = true; + } + + LockBuffer(vmbuf, BUFFER_LOCK_UNLOCK); + + return cleared; +} + +/* + * tdeheap_visibilitymap_pin - pin a map page for setting a bit + * + * Setting a bit in the visibility map is a two-phase operation. First, call + * tdeheap_visibilitymap_pin, to pin the visibility map page containing the bit for + * the heap page. Because that can require I/O to read the map page, you + * shouldn't hold a lock on the heap page while doing that. Then, call + * tdeheap_visibilitymap_set to actually set the bit. + * + * On entry, *vmbuf should be InvalidBuffer or a valid buffer returned by + * an earlier call to tdeheap_visibilitymap_pin or tdeheap_visibilitymap_get_status on the same + * relation. On return, *vmbuf is a valid buffer with the map page containing + * the bit for heapBlk. + * + * If the page doesn't exist in the map file yet, it is extended. + */ +void +tdeheap_visibilitymap_pin(Relation rel, BlockNumber heapBlk, Buffer *vmbuf) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + + /* Reuse the old pinned buffer if possible */ + if (BufferIsValid(*vmbuf)) + { + if (BufferGetBlockNumber(*vmbuf) == mapBlock) + return; + + ReleaseBuffer(*vmbuf); + } + *vmbuf = vm_readbuf(rel, mapBlock, true); +} + +/* + * tdeheap_visibilitymap_pin_ok - do we already have the correct page pinned? + * + * On entry, vmbuf should be InvalidBuffer or a valid buffer returned by + * an earlier call to tdeheap_visibilitymap_pin or tdeheap_visibilitymap_get_status on the same + * relation. The return value indicates whether the buffer covers the + * given heapBlk. + */ +bool +tdeheap_visibilitymap_pin_ok(BlockNumber heapBlk, Buffer vmbuf) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + + return BufferIsValid(vmbuf) && BufferGetBlockNumber(vmbuf) == mapBlock; +} + +/* + * tdeheap_visibilitymap_set - set bit(s) on a previously pinned page + * + * recptr is the LSN of the XLOG record we're replaying, if we're in recovery, + * or InvalidXLogRecPtr in normal running. The VM page LSN is advanced to the + * one provided; in normal running, we generate a new XLOG record and set the + * page LSN to that value (though the heap page's LSN may *not* be updated; + * see below). cutoff_xid is the largest xmin on the page being marked + * all-visible; it is needed for Hot Standby, and can be InvalidTransactionId + * if the page contains no tuples. 
It can also be set to InvalidTransactionId + * when a page that is already all-visible is being marked all-frozen. + * + * Caller is expected to set the heap page's PD_ALL_VISIBLE bit before calling + * this function. Except in recovery, caller should also pass the heap + * buffer. When checksums are enabled and we're not in recovery, we must add + * the heap buffer to the WAL chain to protect it from being torn. + * + * You must pass a buffer containing the correct map page to this function. + * Call tdeheap_visibilitymap_pin first to pin the right one. This function doesn't do + * any I/O. + */ +void +tdeheap_visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf, + XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid, + uint8 flags) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + uint32 mapByte = HEAPBLK_TO_MAPBYTE(heapBlk); + uint8 mapOffset = HEAPBLK_TO_OFFSET(heapBlk); + Page page; + uint8 *map; + +#ifdef TRACE_VISIBILITYMAP + elog(DEBUG1, "vm_set %s %d", RelationGetRelationName(rel), heapBlk); +#endif + + Assert(InRecovery || XLogRecPtrIsInvalid(recptr)); + Assert(InRecovery || PageIsAllVisible((Page) BufferGetPage(heapBuf))); + Assert((flags & VISIBILITYMAP_VALID_BITS) == flags); + + /* Must never set all_frozen bit without also setting all_visible bit */ + Assert(flags != VISIBILITYMAP_ALL_FROZEN); + + /* Check that we have the right heap page pinned, if present */ + if (BufferIsValid(heapBuf) && BufferGetBlockNumber(heapBuf) != heapBlk) + elog(ERROR, "wrong heap buffer passed to tdeheap_visibilitymap_set"); + + /* Check that we have the right VM page pinned */ + if (!BufferIsValid(vmBuf) || BufferGetBlockNumber(vmBuf) != mapBlock) + elog(ERROR, "wrong VM buffer passed to tdeheap_visibilitymap_set"); + + page = BufferGetPage(vmBuf); + map = (uint8 *) PageGetContents(page); + LockBuffer(vmBuf, BUFFER_LOCK_EXCLUSIVE); + + if (flags != (map[mapByte] >> mapOffset & VISIBILITYMAP_VALID_BITS)) + { + START_CRIT_SECTION(); + + map[mapByte] |= (flags << mapOffset); + MarkBufferDirty(vmBuf); + + if (RelationNeedsWAL(rel)) + { + if (XLogRecPtrIsInvalid(recptr)) + { + Assert(!InRecovery); + recptr = log_tdeheap_visible(rel, heapBuf, vmBuf, cutoff_xid, flags); + + /* + * If data checksums are enabled (or wal_log_hints=on), we + * need to protect the heap page from being torn. + * + * If not, then we must *not* update the heap page's LSN. In + * this case, the FPI for the heap page was omitted from the + * WAL record inserted above, so it would be incorrect to + * update the heap page's LSN. + */ + if (XLogHintBitIsNeeded()) + { + Page heapPage = BufferGetPage(heapBuf); + + PageSetLSN(heapPage, recptr); + } + } + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + } + + LockBuffer(vmBuf, BUFFER_LOCK_UNLOCK); +} + +/* + * tdeheap_visibilitymap_get_status - get status of bits + * + * Are all tuples on heapBlk visible to all or are marked frozen, according + * to the visibility map? + * + * On entry, *vmbuf should be InvalidBuffer or a valid buffer returned by an + * earlier call to tdeheap_visibilitymap_pin or tdeheap_visibilitymap_get_status on the same + * relation. On return, *vmbuf is a valid buffer with the map page containing + * the bit for heapBlk, or InvalidBuffer. The caller is responsible for + * releasing *vmbuf after it's done testing and setting bits. + * + * NOTE: This function is typically called without a lock on the heap page, + * so somebody else could change the bit just after we look at it. 
In fact, + * since we don't lock the visibility map page either, it's even possible that + * someone else could have changed the bit just before we look at it, but yet + * we might see the old value. It is the caller's responsibility to deal with + * all concurrency issues! + */ +uint8 +tdeheap_visibilitymap_get_status(Relation rel, BlockNumber heapBlk, Buffer *vmbuf) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + uint32 mapByte = HEAPBLK_TO_MAPBYTE(heapBlk); + uint8 mapOffset = HEAPBLK_TO_OFFSET(heapBlk); + char *map; + uint8 result; + +#ifdef TRACE_VISIBILITYMAP + elog(DEBUG1, "vm_get_status %s %d", RelationGetRelationName(rel), heapBlk); +#endif + + /* Reuse the old pinned buffer if possible */ + if (BufferIsValid(*vmbuf)) + { + if (BufferGetBlockNumber(*vmbuf) != mapBlock) + { + ReleaseBuffer(*vmbuf); + *vmbuf = InvalidBuffer; + } + } + + if (!BufferIsValid(*vmbuf)) + { + *vmbuf = vm_readbuf(rel, mapBlock, false); + if (!BufferIsValid(*vmbuf)) + return false; + } + + map = PageGetContents(BufferGetPage(*vmbuf)); + + /* + * A single byte read is atomic. There could be memory-ordering effects + * here, but for performance reasons we make it the caller's job to worry + * about that. + */ + result = ((map[mapByte] >> mapOffset) & VISIBILITYMAP_VALID_BITS); + return result; +} + +/* + * tdeheap_visibilitymap_count - count number of bits set in visibility map + * + * Note: we ignore the possibility of race conditions when the table is being + * extended concurrently with the call. New pages added to the table aren't + * going to be marked all-visible or all-frozen, so they won't affect the result. + */ +void +tdeheap_visibilitymap_count(Relation rel, BlockNumber *all_visible, BlockNumber *all_frozen) +{ + BlockNumber mapBlock; + BlockNumber nvisible = 0; + BlockNumber nfrozen = 0; + + /* all_visible must be specified */ + Assert(all_visible); + + for (mapBlock = 0;; mapBlock++) + { + Buffer mapBuffer; + uint64 *map; + int i; + + /* + * Read till we fall off the end of the map. We assume that any extra + * bytes in the last page are zeroed, so we don't bother excluding + * them from the count. + */ + mapBuffer = vm_readbuf(rel, mapBlock, false); + if (!BufferIsValid(mapBuffer)) + break; + + /* + * We choose not to lock the page, since the result is going to be + * immediately stale anyway if anyone is concurrently setting or + * clearing bits, and we only really need an approximate value. + */ + map = (uint64 *) PageGetContents(BufferGetPage(mapBuffer)); + + StaticAssertStmt(MAPSIZE % sizeof(uint64) == 0, + "unsupported MAPSIZE"); + if (all_frozen == NULL) + { + for (i = 0; i < MAPSIZE / sizeof(uint64); i++) + nvisible += pg_popcount64(map[i] & VISIBLE_MASK64); + } + else + { + for (i = 0; i < MAPSIZE / sizeof(uint64); i++) + { + nvisible += pg_popcount64(map[i] & VISIBLE_MASK64); + nfrozen += pg_popcount64(map[i] & FROZEN_MASK64); + } + } + + ReleaseBuffer(mapBuffer); + } + + *all_visible = nvisible; + if (all_frozen) + *all_frozen = nfrozen; +} + +/* + * tdeheap_visibilitymap_prepare_truncate - + * prepare for truncation of the visibility map + * + * nheapblocks is the new size of the heap. + * + * Return the number of blocks of new visibility map. + * If it's InvalidBlockNumber, there is nothing to truncate; + * otherwise the caller is responsible for calling smgrtruncate() + * to truncate the visibility map pages. 
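+ *
+ * A sketched caller (names illustrative; the real truncation path also
+ * WAL-logs the relation truncation at a higher level):
+ *
+ *     ForkNumber	fork = VISIBILITYMAP_FORKNUM;
+ *     BlockNumber vm_nblocks;
+ *
+ *     vm_nblocks = tdeheap_visibilitymap_prepare_truncate(rel, nheapblocks);
+ *     if (BlockNumberIsValid(vm_nblocks))
+ *         smgrtruncate(RelationGetSmgr(rel), &fork, 1, &vm_nblocks);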
+ */ +BlockNumber +tdeheap_visibilitymap_prepare_truncate(Relation rel, BlockNumber nheapblocks) +{ + BlockNumber newnblocks; + + /* last remaining block, byte, and bit */ + BlockNumber truncBlock = HEAPBLK_TO_MAPBLOCK(nheapblocks); + uint32 truncByte = HEAPBLK_TO_MAPBYTE(nheapblocks); + uint8 truncOffset = HEAPBLK_TO_OFFSET(nheapblocks); + +#ifdef TRACE_VISIBILITYMAP + elog(DEBUG1, "vm_truncate %s %d", RelationGetRelationName(rel), nheapblocks); +#endif + + /* + * If no visibility map has been created yet for this relation, there's + * nothing to truncate. + */ + if (!smgrexists(RelationGetSmgr(rel), VISIBILITYMAP_FORKNUM)) + return InvalidBlockNumber; + + /* + * Unless the new size is exactly at a visibility map page boundary, the + * tail bits in the last remaining map page, representing truncated heap + * blocks, need to be cleared. This is not only tidy, but also necessary + * because we don't get a chance to clear the bits if the heap is extended + * again. + */ + if (truncByte != 0 || truncOffset != 0) + { + Buffer mapBuffer; + Page page; + char *map; + + newnblocks = truncBlock + 1; + + mapBuffer = vm_readbuf(rel, truncBlock, false); + if (!BufferIsValid(mapBuffer)) + { + /* nothing to do, the file was already smaller */ + return InvalidBlockNumber; + } + + page = BufferGetPage(mapBuffer); + map = PageGetContents(page); + + LockBuffer(mapBuffer, BUFFER_LOCK_EXCLUSIVE); + + /* NO EREPORT(ERROR) from here till changes are logged */ + START_CRIT_SECTION(); + + /* Clear out the unwanted bytes. */ + MemSet(&map[truncByte + 1], 0, MAPSIZE - (truncByte + 1)); + + /*---- + * Mask out the unwanted bits of the last remaining byte. + * + * ((1 << 0) - 1) = 00000000 + * ((1 << 1) - 1) = 00000001 + * ... + * ((1 << 6) - 1) = 00111111 + * ((1 << 7) - 1) = 01111111 + *---- + */ + map[truncByte] &= (1 << truncOffset) - 1; + + /* + * Truncation of a relation is WAL-logged at a higher-level, and we + * will be called at WAL replay. But if checksums are enabled, we need + * to still write a WAL record to protect against a torn page, if the + * page is flushed to disk before the truncation WAL record. We cannot + * use MarkBufferDirtyHint here, because that will not dirty the page + * during recovery. + */ + MarkBufferDirty(mapBuffer); + if (!InRecovery && RelationNeedsWAL(rel) && XLogHintBitIsNeeded()) + log_newpage_buffer(mapBuffer, false); + + END_CRIT_SECTION(); + + UnlockReleaseBuffer(mapBuffer); + } + else + newnblocks = truncBlock; + + if (smgrnblocks(RelationGetSmgr(rel), VISIBILITYMAP_FORKNUM) <= newnblocks) + { + /* nothing to do, the file was already smaller than requested size */ + return InvalidBlockNumber; + } + + return newnblocks; +} + +/* + * Read a visibility map page. + * + * If the page doesn't exist, InvalidBuffer is returned, or if 'extend' is + * true, the visibility map file is extended. + */ +static Buffer +vm_readbuf(Relation rel, BlockNumber blkno, bool extend) +{ + Buffer buf; + SMgrRelation reln; + + /* + * Caution: re-using this smgr pointer could fail if the relcache entry + * gets closed. It's safe as long as we only do smgr-level operations + * between here and the last use of the pointer. + */ + reln = RelationGetSmgr(rel); + + /* + * If we haven't cached the size of the visibility map fork yet, check it + * first. 
+ */ + if (reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM] == InvalidBlockNumber) + { + if (smgrexists(reln, VISIBILITYMAP_FORKNUM)) + smgrnblocks(reln, VISIBILITYMAP_FORKNUM); + else + reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM] = 0; + } + + /* + * For reading we use ZERO_ON_ERROR mode, and initialize the page if + * necessary. It's always safe to clear bits, so it's better to clear + * corrupt pages than error out. + * + * We use the same path below to initialize pages when extending the + * relation, as a concurrent extension can end up with vm_extend() + * returning an already-initialized page. + */ + if (blkno >= reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM]) + { + if (extend) + buf = vm_extend(rel, blkno + 1); + else + return InvalidBuffer; + } + else + buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno, + RBM_ZERO_ON_ERROR, NULL); + + /* + * Initializing the page when needed is trickier than it looks, because of + * the possibility of multiple backends doing this concurrently, and our + * desire to not uselessly take the buffer lock in the normal path where + * the page is OK. We must take the lock to initialize the page, so + * recheck page newness after we have the lock, in case someone else + * already did it. Also, because we initially check PageIsNew with no + * lock, it's possible to fall through and return the buffer while someone + * else is still initializing the page (i.e., we might see pd_upper as set + * but other page header fields are still zeroes). This is harmless for + * callers that will take a buffer lock themselves, but some callers + * inspect the page without any lock at all. The latter is OK only so + * long as it doesn't depend on the page header having correct contents. + * Current usage is safe because PageGetContents() does not require that. + */ + if (PageIsNew(BufferGetPage(buf))) + { + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + if (PageIsNew(BufferGetPage(buf))) + PageInit(BufferGetPage(buf), BLCKSZ, 0); + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + } + return buf; +} + +/* + * Ensure that the visibility map fork is at least vm_nblocks long, extending + * it if necessary with zeroed pages. + */ +static Buffer +vm_extend(Relation rel, BlockNumber vm_nblocks) +{ + Buffer buf; + + buf = ExtendBufferedRelTo(BMR_REL(rel), VISIBILITYMAP_FORKNUM, NULL, + EB_CREATE_FORK_IF_NEEDED | + EB_CLEAR_SIZE_CACHE, + vm_nblocks, + RBM_ZERO_ON_ERROR); + + /* + * Send a shared-inval message to force other backends to close any smgr + * references they may have for this rel, which we are about to change. + * This is a useful optimization because it means that backends don't have + * to keep checking for creation or extension of the file, which happens + * infrequently. 
+ */ + CacheInvalidateSmgr(RelationGetSmgr(rel)->smgr_rlocator); + + return buf; +} diff --git a/contrib/pg_tde/src16/access/pg_tdeam.c b/contrib/pg_tde/src16/access/pg_tdeam.c new file mode 100644 index 00000000000..306d197c417 --- /dev/null +++ b/contrib/pg_tde/src16/access/pg_tdeam.c @@ -0,0 +1,9863 @@ +/*------------------------------------------------------------------------- + * + * pg_tdeam.c + * pg_tde access method code + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * contrib/pg_tde/pg_tdeam.c + * + * + * INTERFACE ROUTINES + * tdeheap_beginscan - begin relation scan + * tdeheap_rescan - restart a relation scan + * tdeheap_endscan - end relation scan + * tdeheap_getnext - retrieve next tuple in scan + * tdeheap_fetch - retrieve tuple with given tid + * tdeheap_insert - insert tuple into a relation + * tdeheap_multi_insert - insert multiple tuples into a relation + * tdeheap_delete - delete a tuple from a relation + * tdeheap_update - replace a tuple in a relation with another tuple + * + * NOTES + * This file contains the tdeheap_ routines which implement + * the POSTGRES pg_tde access method used for all POSTGRES + * relations. + * + *------------------------------------------------------------------------- + */ + +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tdeam.h" +#include "access/pg_tdeam_xlog.h" +#include "access/pg_tdetoast.h" +#include "access/pg_tde_io.h" +#include "access/pg_tde_visibilitymap.h" +#include "access/pg_tde_slot.h" +#include "encryption/enc_tde.h" + +#include "access/bufmask.h" +#include "access/genam.h" +#include "access/multixact.h" +#include "access/parallel.h" +#include "access/relscan.h" +#include "access/subtrans.h" +#include "access/syncscan.h" +#include "access/sysattr.h" +#include "access/tableam.h" +#include "access/transam.h" +#include "access/valid.h" +#include "access/xact.h" +#include "access/xlog.h" +#include "access/xloginsert.h" +#include "access/xlogutils.h" +#include "catalog/catalog.h" +#include "commands/vacuum.h" +#include "miscadmin.h" +#include "pgstat.h" +#include "port/atomics.h" +#include "port/pg_bitutils.h" +#include "storage/bufmgr.h" +#include "storage/freespace.h" +#include "storage/lmgr.h" +#include "storage/predicate.h" +#include "storage/procarray.h" +#include "storage/smgr.h" +#include "storage/spin.h" +#include "storage/standby.h" +#include "utils/datum.h" +#include "utils/inval.h" +#include "utils/lsyscache.h" +#include "utils/relcache.h" +#include "utils/snapmgr.h" +#include "utils/spccache.h" +#include "utils/memutils.h" + + +static HeapTuple tdeheap_prepare_insert(Relation relation, HeapTuple tup, + TransactionId xid, CommandId cid, int options); +static XLogRecPtr log_tdeheap_update(Relation reln, Buffer oldbuf, + Buffer newbuf, HeapTuple oldtup, + HeapTuple newtup, HeapTuple old_key_tuple, + bool all_visible_cleared, bool new_all_visible_cleared); +static Bitmapset *HeapDetermineColumnsInfo(Relation relation, + Bitmapset *interesting_cols, + Bitmapset *external_cols, + HeapTuple oldtup, HeapTuple newtup, + bool *has_external); +static bool tdeheap_acquire_tuplock(Relation relation, ItemPointer tid, + LockTupleMode mode, LockWaitPolicy wait_policy, + bool *have_tuple_lock); +static void compute_new_xmax_infomask(TransactionId xmax, uint16 old_infomask, + uint16 old_infomask2, TransactionId add_to_xmax, + LockTupleMode mode, bool is_update, + 
TransactionId *result_xmax, uint16 *result_infomask, + uint16 *result_infomask2); +static TM_Result tdeheap_lock_updated_tuple(Relation rel, HeapTuple tuple, + ItemPointer ctid, TransactionId xid, + LockTupleMode mode); +static int tdeheap_log_freeze_plan(HeapTupleFreeze *tuples, int ntuples, + xl_tdeheap_freeze_plan *plans_out, + OffsetNumber *offsets_out); +static void GetMultiXactIdHintBits(MultiXactId multi, uint16 *new_infomask, + uint16 *new_infomask2); +static TransactionId MultiXactIdGetUpdateXid(TransactionId xmax, + uint16 t_infomask); +static bool DoesMultiXactIdConflict(MultiXactId multi, uint16 infomask, + LockTupleMode lockmode, bool *current_is_member); +static void MultiXactIdWait(MultiXactId multi, MultiXactStatus status, uint16 infomask, + Relation rel, ItemPointer ctid, XLTW_Oper oper, + int *remaining); +static bool ConditionalMultiXactIdWait(MultiXactId multi, MultiXactStatus status, + uint16 infomask, Relation rel, int *remaining); +static void index_delete_sort(TM_IndexDeleteOp *delstate); +static int bottomup_sort_and_shrink(TM_IndexDeleteOp *delstate); +static XLogRecPtr log_tdeheap_new_cid(Relation relation, HeapTuple tup); +static HeapTuple ExtractReplicaIdentity(Relation relation, HeapTuple tp, bool key_required, + bool *copy); + + +/* + * Each tuple lock mode has a corresponding heavyweight lock, and one or two + * corresponding MultiXactStatuses (one to merely lock tuples, another one to + * update them). This table (and the macros below) helps us determine the + * heavyweight lock mode and MultiXactStatus values to use for any particular + * tuple lock strength. + * + * Don't look at lockstatus/updstatus directly! Use get_mxact_status_for_lock + * instead. + */ +static const struct +{ + LOCKMODE hwlock; + int lockstatus; + int updstatus; +} + + tupleLockExtraInfo[MaxLockTupleMode + 1] = +{ + { /* LockTupleKeyShare */ + AccessShareLock, + MultiXactStatusForKeyShare, + -1 /* KeyShare does not allow updating tuples */ + }, + { /* LockTupleShare */ + RowShareLock, + MultiXactStatusForShare, + -1 /* Share does not allow updating tuples */ + }, + { /* LockTupleNoKeyExclusive */ + ExclusiveLock, + MultiXactStatusForNoKeyUpdate, + MultiXactStatusNoKeyUpdate + }, + { /* LockTupleExclusive */ + AccessExclusiveLock, + MultiXactStatusForUpdate, + MultiXactStatusUpdate + } +}; + +/* Get the LOCKMODE for a given MultiXactStatus */ +#define LOCKMODE_from_mxstatus(status) \ + (tupleLockExtraInfo[TUPLOCK_from_mxstatus((status))].hwlock) + +/* + * Acquire heavyweight locks on tuples, using a LockTupleMode strength value. + * This is more readable than having every caller translate it to lock.h's + * LOCKMODE. 
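+ *
+ * For example, per tupleLockExtraInfo above, LockTupleShare maps to
+ * RowShareLock and LockTupleExclusive maps to AccessExclusiveLock. A
+ * sketched use (the relation and tuple are illustrative):
+ *
+ *     LockTupleTuplock(relation, &(tuple->t_self), LockTupleExclusive);
+ *     ... examine or modify the tuple ...
+ *     UnlockTupleTuplock(relation, &(tuple->t_self), LockTupleExclusive);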
+ */ +#define LockTupleTuplock(rel, tup, mode) \ + LockTuple((rel), (tup), tupleLockExtraInfo[mode].hwlock) +#define UnlockTupleTuplock(rel, tup, mode) \ + UnlockTuple((rel), (tup), tupleLockExtraInfo[mode].hwlock) +#define ConditionalLockTupleTuplock(rel, tup, mode) \ + ConditionalLockTuple((rel), (tup), tupleLockExtraInfo[mode].hwlock) + +#ifdef USE_PREFETCH +/* + * tdeheap_index_delete_tuples and index_delete_prefetch_buffer use this + * structure to coordinate prefetching activity + */ +typedef struct +{ + BlockNumber cur_hblkno; + int next_item; + int ndeltids; + TM_IndexDelete *deltids; +} IndexDeletePrefetchState; +#endif + +/* tdeheap_index_delete_tuples bottom-up index deletion costing constants */ +#define BOTTOMUP_MAX_NBLOCKS 6 +#define BOTTOMUP_TOLERANCE_NBLOCKS 3 + +/* + * tdeheap_index_delete_tuples uses this when determining which heap blocks it + * must visit to help its bottom-up index deletion caller + */ +typedef struct IndexDeleteCounts +{ + int16 npromisingtids; /* Number of "promising" TIDs in group */ + int16 ntids; /* Number of TIDs in group */ + int16 ifirsttid; /* Offset to group's first deltid */ +} IndexDeleteCounts; + +/* + * This table maps tuple lock strength values for each particular + * MultiXactStatus value. + */ +static const int MultiXactStatusLock[MaxMultiXactStatus + 1] = +{ + LockTupleKeyShare, /* ForKeyShare */ + LockTupleShare, /* ForShare */ + LockTupleNoKeyExclusive, /* ForNoKeyUpdate */ + LockTupleExclusive, /* ForUpdate */ + LockTupleNoKeyExclusive, /* NoKeyUpdate */ + LockTupleExclusive /* Update */ +}; + +/* Get the LockTupleMode for a given MultiXactStatus */ +#define TUPLOCK_from_mxstatus(status) \ + (MultiXactStatusLock[(status)]) + +/* ---------------------------------------------------------------- + * heap support routines + * ---------------------------------------------------------------- + */ + +/* ---------------- + * initscan - scan code common to tdeheap_beginscan and tdeheap_rescan + * ---------------- + */ +static void +initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock) +{ + ParallelBlockTableScanDesc bpscan = NULL; + bool allow_strat; + bool allow_sync; + + /* + * Determine the number of blocks we have to scan. + * + * It is sufficient to do this once at scan start, since any tuples added + * while the scan is in progress will be invisible to my snapshot anyway. + * (That is not true when using a non-MVCC snapshot. However, we couldn't + * guarantee to return tuples added after scan start anyway, since they + * might go into pages we already scanned. To guarantee consistent + * results for a non-MVCC snapshot, the caller must hold some higher-level + * lock that ensures the interesting tuple(s) won't change.) + */ + if (scan->rs_base.rs_parallel != NULL) + { + bpscan = (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel; + scan->rs_nblocks = bpscan->phs_nblocks; + } + else + scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_base.rs_rd); + + /* + * If the table is large relative to NBuffers, use a bulk-read access + * strategy and enable synchronized scanning (see syncscan.c). Although + * the thresholds for these features could be different, we make them the + * same so that there are only two behaviors to tune rather than four. + * (However, some callers need to be able to disable one or both of these + * behaviors, independently of the size of the table; also there is a GUC + * variable that can disable synchronized scanning.) 
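+	 *
+	 * (Worked example, assuming the default 8 kB block size: with
+	 * shared_buffers = 128MB, NBuffers is 16384, so a table larger than
+	 * NBuffers / 4 = 4096 blocks -- about 32 MB -- gets the bulk-read
+	 * strategy and synchronized scanning.)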
+ * + * Note that table_block_parallelscan_initialize has a very similar test; + * if you change this, consider changing that one, too. + */ + if (!RelationUsesLocalBuffers(scan->rs_base.rs_rd) && + scan->rs_nblocks > NBuffers / 4) + { + allow_strat = (scan->rs_base.rs_flags & SO_ALLOW_STRAT) != 0; + allow_sync = (scan->rs_base.rs_flags & SO_ALLOW_SYNC) != 0; + } + else + allow_strat = allow_sync = false; + + if (allow_strat) + { + /* During a rescan, keep the previous strategy object. */ + if (scan->rs_strategy == NULL) + scan->rs_strategy = GetAccessStrategy(BAS_BULKREAD); + } + else + { + if (scan->rs_strategy != NULL) + FreeAccessStrategy(scan->rs_strategy); + scan->rs_strategy = NULL; + } + + if (scan->rs_base.rs_parallel != NULL) + { + /* For parallel scan, believe whatever ParallelTableScanDesc says. */ + if (scan->rs_base.rs_parallel->phs_syncscan) + scan->rs_base.rs_flags |= SO_ALLOW_SYNC; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + } + else if (keep_startblock) + { + /* + * When rescanning, we want to keep the previous startblock setting, + * so that rewinding a cursor doesn't generate surprising results. + * Reset the active syncscan setting, though. + */ + if (allow_sync && synchronize_seqscans) + scan->rs_base.rs_flags |= SO_ALLOW_SYNC; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + } + else if (allow_sync && synchronize_seqscans) + { + scan->rs_base.rs_flags |= SO_ALLOW_SYNC; + scan->rs_startblock = ss_get_location(scan->rs_base.rs_rd, scan->rs_nblocks); + } + else + { + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + scan->rs_startblock = 0; + } + + scan->rs_numblocks = InvalidBlockNumber; + scan->rs_inited = false; + scan->rs_ctup.t_data = NULL; + ItemPointerSetInvalid(&scan->rs_ctup.t_self); + scan->rs_cbuf = InvalidBuffer; + scan->rs_cblock = InvalidBlockNumber; + + /* page-at-a-time fields are always invalid when not rs_inited */ + + /* + * copy the scan key, if appropriate + */ + if (key != NULL && scan->rs_base.rs_nkeys > 0) + memcpy(scan->rs_base.rs_key, key, scan->rs_base.rs_nkeys * sizeof(ScanKeyData)); + + /* + * Currently, we only have a stats counter for sequential heap scans (but + * e.g for bitmap scans the underlying bitmap index scans will be counted, + * and for sample scans we update stats for tuple fetches). + */ + if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN) + pgstat_count_tdeheap_scan(scan->rs_base.rs_rd); +} + +/* + * tdeheap_setscanlimits - restrict range of a heapscan + * + * startBlk is the page to start at + * numBlks is number of pages to scan (InvalidBlockNumber means "all") + */ +void +tdeheap_setscanlimits(TableScanDesc sscan, BlockNumber startBlk, BlockNumber numBlks) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + Assert(!scan->rs_inited); /* else too late to change */ + /* else rs_startblock is significant */ + Assert(!(scan->rs_base.rs_flags & SO_ALLOW_SYNC)); + + /* Check startBlk is valid (but allow case of zero blocks...) */ + Assert(startBlk == 0 || startBlk < scan->rs_nblocks); + + scan->rs_startblock = startBlk; + scan->rs_numblocks = numBlks; +} + +/* + * tdeheapgetpage - subroutine for tdeheapgettup() + * + * This routine reads and pins the specified page of the relation. + * In page-at-a-time mode it performs additional work, namely determining + * which tuples on the page are visible. 
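+ *
+ * In page-at-a-time mode the offsets of the visible tuples are collected
+ * into rs_vistuples[]; a sketched consumer loop (cf. tdeheapgettup_pagemode
+ * below):
+ *
+ *     for (int i = 0; i < scan->rs_ntuples; i++)
+ *     {
+ *         OffsetNumber lineoff = scan->rs_vistuples[i];
+ *
+ *         ... fetch and process the item at lineoff ...
+ *     }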
+ */
+void
+tdeheapgetpage(TableScanDesc sscan, BlockNumber block)
+{
+	HeapScanDesc scan = (HeapScanDesc) sscan;
+	Buffer		buffer;
+	Snapshot	snapshot;
+	Page		page;
+	int			lines;
+	int			ntup;
+	OffsetNumber lineoff;
+	bool		all_visible;
+
+	Assert(block < scan->rs_nblocks);
+
+	/* release previous scan buffer, if any */
+	if (BufferIsValid(scan->rs_cbuf))
+	{
+		ReleaseBuffer(scan->rs_cbuf);
+		scan->rs_cbuf = InvalidBuffer;
+	}
+
+	/*
+	 * Be sure to check for interrupts at least once per page. Checks at
+	 * higher code levels won't be able to stop a seqscan that encounters
+	 * many pages' worth of consecutive dead tuples.
+	 */
+	CHECK_FOR_INTERRUPTS();
+
+	/* read page using selected strategy */
+	scan->rs_cbuf = ReadBufferExtended(scan->rs_base.rs_rd, MAIN_FORKNUM, block,
+									   RBM_NORMAL, scan->rs_strategy);
+	scan->rs_cblock = block;
+
+	if (!(scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE))
+		return;
+
+	buffer = scan->rs_cbuf;
+	snapshot = scan->rs_base.rs_snapshot;
+
+	/*
+	 * Prune and repair fragmentation for the whole page, if possible.
+	 */
+	tdeheap_page_prune_opt(scan->rs_base.rs_rd, buffer);
+
+	/*
+	 * We must hold share lock on the buffer content while examining tuple
+	 * visibility. Afterwards, however, the tuples we have found to be
+	 * visible are guaranteed good as long as we hold the buffer pin.
+	 */
+	LockBuffer(buffer, BUFFER_LOCK_SHARE);
+
+	page = BufferGetPage(buffer);
+	TestForOldSnapshot(snapshot, scan->rs_base.rs_rd, page);
+	lines = PageGetMaxOffsetNumber(page);
+	ntup = 0;
+
+	/*
+	 * If the all-visible flag indicates that all tuples on the page are
+	 * visible to everyone, we can skip the per-tuple visibility tests.
+	 *
+	 * Note: In hot standby, a tuple that's already visible to all
+	 * transactions on the primary might still be invisible to a read-only
+	 * transaction in the standby. We partly handle this problem by tracking
+	 * the minimum xmin of visible tuples as the cut-off XID while marking a
+	 * page all-visible on the primary, and by WAL-logging that along with
+	 * the visibility map SET operation. In hot standby, we wait for (or
+	 * abort) all transactions that could potentially fail to see one or
+	 * more tuples on the page. That's how index-only scans work fine in hot
+	 * standby. A crucial difference between index-only scans and heap scans
+	 * is that the index-only scan completely relies on the visibility map,
+	 * whereas a heap scan looks at the page-level PD_ALL_VISIBLE flag. We
+	 * are not sure if the page-level flag can be trusted in the same way,
+	 * because it might get propagated somehow without being explicitly
+	 * WAL-logged, e.g. via a full page write. Until we can prove that
+	 * beyond doubt, let's check each tuple for visibility the hard way.
+ */ + all_visible = PageIsAllVisible(page) && !snapshot->takenDuringRecovery; + + for (lineoff = FirstOffsetNumber; lineoff <= lines; lineoff++) + { + ItemId lpp = PageGetItemId(page, lineoff); + HeapTupleData loctup; + bool valid; + + if (!ItemIdIsNormal(lpp)) + continue; + + loctup.t_tableOid = RelationGetRelid(scan->rs_base.rs_rd); + loctup.t_data = (HeapTupleHeader) PageGetItem(page, lpp); + loctup.t_len = ItemIdGetLength(lpp); + ItemPointerSet(&(loctup.t_self), block, lineoff); + + if (all_visible) + valid = true; + else + valid = HeapTupleSatisfiesVisibility(&loctup, snapshot, buffer); + + HeapCheckForSerializableConflictOut(valid, scan->rs_base.rs_rd, + &loctup, buffer, snapshot); + + if (valid) + scan->rs_vistuples[ntup++] = lineoff; + } + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + Assert(ntup <= MaxHeapTuplesPerPage); + scan->rs_ntuples = ntup; +} + +/* + * tdeheapgettup_initial_block - return the first BlockNumber to scan + * + * Returns InvalidBlockNumber when there are no blocks to scan. This can + * occur with empty tables and in parallel scans when parallel workers get all + * of the pages before we can get a chance to get our first page. + */ +static BlockNumber +tdeheapgettup_initial_block(HeapScanDesc scan, ScanDirection dir) +{ + Assert(!scan->rs_inited); + + /* When there are no pages to scan, return InvalidBlockNumber */ + if (scan->rs_nblocks == 0 || scan->rs_numblocks == 0) + return InvalidBlockNumber; + + if (ScanDirectionIsForward(dir)) + { + /* serial scan */ + if (scan->rs_base.rs_parallel == NULL) + return scan->rs_startblock; + else + { + /* parallel scan */ + table_block_parallelscan_startblock_init(scan->rs_base.rs_rd, + scan->rs_parallelworkerdata, + (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel); + + /* may return InvalidBlockNumber if there are no more blocks */ + return table_block_parallelscan_nextpage(scan->rs_base.rs_rd, + scan->rs_parallelworkerdata, + (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel); + } + } + else + { + /* backward parallel scan not supported */ + Assert(scan->rs_base.rs_parallel == NULL); + + /* + * Disable reporting to syncscan logic in a backwards scan; it's not + * very likely anyone else is doing the same thing at the same time, + * and much more likely that we'll just bollix things for forward + * scanners. + */ + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + + /* + * Start from last page of the scan. Ensure we take into account + * rs_numblocks if it's been adjusted by tdeheap_setscanlimits(). + */ + if (scan->rs_numblocks != InvalidBlockNumber) + return (scan->rs_startblock + scan->rs_numblocks - 1) % scan->rs_nblocks; + + if (scan->rs_startblock > 0) + return scan->rs_startblock - 1; + + return scan->rs_nblocks - 1; + } +} + + +/* + * tdeheapgettup_start_page - helper function for tdeheapgettup() + * + * Return the next page to scan based on the scan->rs_cbuf and set *linesleft + * to the number of tuples on this page. Also set *lineoff to the first + * offset to scan with forward scans getting the first offset and backward + * getting the final offset on the page. 
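+ *
+ * (For instance, on a page whose max offset number is 100, a forward scan
+ * starts at *lineoff = 1 and a backward scan at *lineoff = 100, with
+ * *linesleft set to 100 in both cases.)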
+ */ +static Page +tdeheapgettup_start_page(HeapScanDesc scan, ScanDirection dir, int *linesleft, + OffsetNumber *lineoff) +{ + Page page; + + Assert(scan->rs_inited); + Assert(BufferIsValid(scan->rs_cbuf)); + + /* Caller is responsible for ensuring buffer is locked if needed */ + page = BufferGetPage(scan->rs_cbuf); + + TestForOldSnapshot(scan->rs_base.rs_snapshot, scan->rs_base.rs_rd, page); + + *linesleft = PageGetMaxOffsetNumber(page) - FirstOffsetNumber + 1; + + if (ScanDirectionIsForward(dir)) + *lineoff = FirstOffsetNumber; + else + *lineoff = (OffsetNumber) (*linesleft); + + /* lineoff now references the physically previous or next tid */ + return page; +} + + +/* + * tdeheapgettup_continue_page - helper function for tdeheapgettup() + * + * Return the next page to scan based on the scan->rs_cbuf and set *linesleft + * to the number of tuples left to scan on this page. Also set *lineoff to + * the next offset to scan according to the ScanDirection in 'dir'. + */ +static inline Page +tdeheapgettup_continue_page(HeapScanDesc scan, ScanDirection dir, int *linesleft, + OffsetNumber *lineoff) +{ + Page page; + + Assert(scan->rs_inited); + Assert(BufferIsValid(scan->rs_cbuf)); + + /* Caller is responsible for ensuring buffer is locked if needed */ + page = BufferGetPage(scan->rs_cbuf); + + TestForOldSnapshot(scan->rs_base.rs_snapshot, scan->rs_base.rs_rd, page); + + if (ScanDirectionIsForward(dir)) + { + *lineoff = OffsetNumberNext(scan->rs_coffset); + *linesleft = PageGetMaxOffsetNumber(page) - (*lineoff) + 1; + } + else + { + /* + * The previous returned tuple may have been vacuumed since the + * previous scan when we use a non-MVCC snapshot, so we must + * re-establish the lineoff <= PageGetMaxOffsetNumber(page) invariant + */ + *lineoff = Min(PageGetMaxOffsetNumber(page), OffsetNumberPrev(scan->rs_coffset)); + *linesleft = *lineoff; + } + + /* lineoff now references the physically previous or next tid */ + return page; +} + +/* + * tdeheapgettup_advance_block - helper for tdeheapgettup() and tdeheapgettup_pagemode() + * + * Given the current block number, the scan direction, and various information + * contained in the scan descriptor, calculate the BlockNumber to scan next + * and return it. If there are no further blocks to scan, return + * InvalidBlockNumber to indicate this fact to the caller. + * + * This should not be called to determine the initial block number -- only for + * subsequent blocks. + * + * This also adjusts rs_numblocks when a limit has been imposed by + * tdeheap_setscanlimits(). + */ +static inline BlockNumber +tdeheapgettup_advance_block(HeapScanDesc scan, BlockNumber block, ScanDirection dir) +{ + if (ScanDirectionIsForward(dir)) + { + if (scan->rs_base.rs_parallel == NULL) + { + block++; + + /* wrap back to the start of the heap */ + if (block >= scan->rs_nblocks) + block = 0; + + /* + * Report our new scan position for synchronization purposes. We + * don't do that when moving backwards, however. That would just + * mess up any other forward-moving scanners. + * + * Note: we do this before checking for end of scan so that the + * final state of the position hint is back at the start of the + * rel. That's not strictly necessary, but otherwise when you run + * the same query multiple times the starting position would shift + * a little bit backwards on every invocation, which is confusing. + * We don't guarantee any specific ordering in general, though. 
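+			 *
+			 * (Example: a serial forward scan of a 10-block table that
+			 * started at block 7 visits blocks 7, 8, 9, 0, ..., 6, so the
+			 * final position hint ends up back at block 7.)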
+ */ + if (scan->rs_base.rs_flags & SO_ALLOW_SYNC) + ss_report_location(scan->rs_base.rs_rd, block); + + /* we're done if we're back at where we started */ + if (block == scan->rs_startblock) + return InvalidBlockNumber; + + /* check if the limit imposed by tdeheap_setscanlimits() is met */ + if (scan->rs_numblocks != InvalidBlockNumber) + { + if (--scan->rs_numblocks == 0) + return InvalidBlockNumber; + } + + return block; + } + else + { + return table_block_parallelscan_nextpage(scan->rs_base.rs_rd, + scan->rs_parallelworkerdata, (ParallelBlockTableScanDesc) + scan->rs_base.rs_parallel); + } + } + else + { + /* we're done if the last block is the start position */ + if (block == scan->rs_startblock) + return InvalidBlockNumber; + + /* check if the limit imposed by tdeheap_setscanlimits() is met */ + if (scan->rs_numblocks != InvalidBlockNumber) + { + if (--scan->rs_numblocks == 0) + return InvalidBlockNumber; + } + + /* wrap to the end of the heap when the last page was page 0 */ + if (block == 0) + block = scan->rs_nblocks; + + block--; + + return block; + } +} + +/* ---------------- + * tdeheapgettup - fetch next heap tuple + * + * Initialize the scan if not already done; then advance to the next + * tuple as indicated by "dir"; return the next tuple in scan->rs_ctup, + * or set scan->rs_ctup.t_data = NULL if no more tuples. + * + * Note: the reason nkeys/key are passed separately, even though they are + * kept in the scan descriptor, is that the caller may not want us to check + * the scankeys. + * + * Note: when we fall off the end of the scan in either direction, we + * reset rs_inited. This means that a further request with the same + * scan direction will restart the scan, which is a bit odd, but a + * request with the opposite scan direction will start a fresh scan + * in the proper direction. The latter is required behavior for cursors, + * while the former case is generally undefined behavior in Postgres + * so we don't care too much. + * ---------------- + */ +static void +tdeheapgettup(HeapScanDesc scan, + ScanDirection dir, + int nkeys, + ScanKey key) +{ + HeapTuple tuple = &(scan->rs_ctup); + BlockNumber block; + Page page; + OffsetNumber lineoff; + int linesleft; + + if (unlikely(!scan->rs_inited)) + { + block = tdeheapgettup_initial_block(scan, dir); + /* ensure rs_cbuf is invalid when we get InvalidBlockNumber */ + Assert(block != InvalidBlockNumber || !BufferIsValid(scan->rs_cbuf)); + scan->rs_inited = true; + } + else + { + /* continue from previously returned page/tuple */ + block = scan->rs_cblock; + + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE); + page = tdeheapgettup_continue_page(scan, dir, &linesleft, &lineoff); + goto continue_page; + } + + /* + * advance the scan until we find a qualifying tuple or run out of stuff + * to scan + */ + while (block != InvalidBlockNumber) + { + tdeheapgetpage((TableScanDesc) scan, block); + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE); + page = tdeheapgettup_start_page(scan, dir, &linesleft, &lineoff); +continue_page: + + /* + * Only continue scanning the page while we have lines left. + * + * Note that this protects us from accessing line pointers past + * PageGetMaxOffsetNumber(); both for forward scans when we resume the + * table scan, and for when we start scanning a new page. 
+ */ + for (; linesleft > 0; linesleft--, lineoff += dir) + { + bool visible; + ItemId lpp = PageGetItemId(page, lineoff); + + if (!ItemIdIsNormal(lpp)) + continue; + + tuple->t_data = (HeapTupleHeader) PageGetItem(page, lpp); + tuple->t_len = ItemIdGetLength(lpp); + ItemPointerSet(&(tuple->t_self), block, lineoff); + + visible = HeapTupleSatisfiesVisibility(tuple, + scan->rs_base.rs_snapshot, + scan->rs_cbuf); + + HeapCheckForSerializableConflictOut(visible, scan->rs_base.rs_rd, + tuple, scan->rs_cbuf, + scan->rs_base.rs_snapshot); + + /* skip tuples not visible to this snapshot */ + if (!visible) + continue; + + /* skip any tuples that don't match the scan key */ + if (key != NULL && + !HeapKeyTest(tuple, RelationGetDescr(scan->rs_base.rs_rd), + nkeys, key)) + continue; + + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK); + scan->rs_coffset = lineoff; + return; + } + + /* + * if we get here, it means we've exhausted the items on this page and + * it's time to move to the next. + */ + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + /* get the BlockNumber to scan next */ + block = tdeheapgettup_advance_block(scan, block, dir); + } + + /* end of scan */ + if (BufferIsValid(scan->rs_cbuf)) + ReleaseBuffer(scan->rs_cbuf); + + scan->rs_cbuf = InvalidBuffer; + scan->rs_cblock = InvalidBlockNumber; + tuple->t_data = NULL; + scan->rs_inited = false; +} + +/* ---------------- + * tdeheapgettup_pagemode - fetch next heap tuple in page-at-a-time mode + * + * Same API as tdeheapgettup, but used in page-at-a-time mode + * + * The internal logic is much the same as tdeheapgettup's too, but there are some + * differences: we do not take the buffer content lock (that only needs to + * happen inside tdeheapgetpage), and we iterate through just the tuples listed + * in rs_vistuples[] rather than all tuples on the page. Notice that + * lineindex is 0-based, where the corresponding loop variable lineoff in + * tdeheapgettup is 1-based. + * ---------------- + */ +static void +tdeheapgettup_pagemode(HeapScanDesc scan, + ScanDirection dir, + int nkeys, + ScanKey key) +{ + HeapTuple tuple = &(scan->rs_ctup); + BlockNumber block; + Page page; + int lineindex; + int linesleft; + + if (unlikely(!scan->rs_inited)) + { + block = tdeheapgettup_initial_block(scan, dir); + /* ensure rs_cbuf is invalid when we get InvalidBlockNumber */ + Assert(block != InvalidBlockNumber || !BufferIsValid(scan->rs_cbuf)); + scan->rs_inited = true; + } + else + { + /* continue from previously returned page/tuple */ + block = scan->rs_cblock; /* current page */ + page = BufferGetPage(scan->rs_cbuf); + TestForOldSnapshot(scan->rs_base.rs_snapshot, scan->rs_base.rs_rd, page); + + lineindex = scan->rs_cindex + dir; + if (ScanDirectionIsForward(dir)) + linesleft = scan->rs_ntuples - lineindex; + else + linesleft = scan->rs_cindex; + /* lineindex now references the next or previous visible tid */ + + goto continue_page; + } + + /* + * advance the scan until we find a qualifying tuple or run out of stuff + * to scan + */ + while (block != InvalidBlockNumber) + { + tdeheapgetpage((TableScanDesc) scan, block); + page = BufferGetPage(scan->rs_cbuf); + TestForOldSnapshot(scan->rs_base.rs_snapshot, scan->rs_base.rs_rd, page); + linesleft = scan->rs_ntuples; + lineindex = ScanDirectionIsForward(dir) ? 
0 : linesleft - 1; + + /* lineindex now references the next or previous visible tid */ +continue_page: + + for (; linesleft > 0; linesleft--, lineindex += dir) + { + ItemId lpp; + OffsetNumber lineoff; + + lineoff = scan->rs_vistuples[lineindex]; + lpp = PageGetItemId(page, lineoff); + Assert(ItemIdIsNormal(lpp)); + + tuple->t_data = (HeapTupleHeader) PageGetItem(page, lpp); + tuple->t_len = ItemIdGetLength(lpp); + ItemPointerSet(&(tuple->t_self), block, lineoff); + + /* skip any tuples that don't match the scan key */ + if (key != NULL && + !HeapKeyTest(tuple, RelationGetDescr(scan->rs_base.rs_rd), + nkeys, key)) + continue; + + scan->rs_cindex = lineindex; + return; + } + + /* get the BlockNumber to scan next */ + block = tdeheapgettup_advance_block(scan, block, dir); + } + + /* end of scan */ + if (BufferIsValid(scan->rs_cbuf)) + ReleaseBuffer(scan->rs_cbuf); + scan->rs_cbuf = InvalidBuffer; + scan->rs_cblock = InvalidBlockNumber; + tuple->t_data = NULL; + scan->rs_inited = false; +} + + +/* ---------------------------------------------------------------- + * heap access method interface + * ---------------------------------------------------------------- + */ + + +TableScanDesc +tdeheap_beginscan(Relation relation, Snapshot snapshot, + int nkeys, ScanKey key, + ParallelTableScanDesc parallel_scan, + uint32 flags) +{ + HeapScanDesc scan; + + /* + * increment relation ref count while scanning relation + * + * This is just to make really sure the relcache entry won't go away while + * the scan has a pointer to it. Caller should be holding the rel open + * anyway, so this is redundant in all normal scenarios... + */ + RelationIncrementReferenceCount(relation); + + /* + * allocate and initialize scan descriptor + */ + scan = (HeapScanDesc) palloc(sizeof(HeapScanDescData)); + + scan->rs_base.rs_rd = relation; + scan->rs_base.rs_snapshot = snapshot; + scan->rs_base.rs_nkeys = nkeys; + scan->rs_base.rs_flags = flags; + scan->rs_base.rs_parallel = parallel_scan; + scan->rs_strategy = NULL; /* set in initscan */ + + /* + * Disable page-at-a-time mode if it's not a MVCC-safe snapshot. + */ + if (!(snapshot && IsMVCCSnapshot(snapshot))) + scan->rs_base.rs_flags &= ~SO_ALLOW_PAGEMODE; + + /* + * For seqscan and sample scans in a serializable transaction, acquire a + * predicate lock on the entire relation. This is required not only to + * lock all the matching tuples, but also to conflict with new insertions + * into the table. In an indexscan, we take page locks on the index pages + * covering the range specified in the scan qual, but in a heap scan there + * is nothing more fine-grained to lock. A bitmap scan is a different + * story, there we have already scanned the index and locked the index + * pages covering the predicate. But in that case we still have to lock + * any matching heap tuples. For sample scan we could optimize the locking + * to be at least page-level granularity, but we'd need to add per-tuple + * locking for that. + */ + if (scan->rs_base.rs_flags & (SO_TYPE_SEQSCAN | SO_TYPE_SAMPLESCAN)) + { + /* + * Ensure a missing snapshot is noticed reliably, even if the + * isolation mode means predicate locking isn't performed (and + * therefore the snapshot isn't used here). + */ + Assert(snapshot); + PredicateLockRelation(relation, snapshot); + } + + /* we only need to set this up once */ + scan->rs_ctup.t_tableOid = RelationGetRelid(relation); + + /* + * Allocate memory to keep track of page allocation for parallel workers + * when doing a parallel scan. 
+ */ + if (parallel_scan != NULL) + scan->rs_parallelworkerdata = palloc(sizeof(ParallelBlockTableScanWorkerData)); + else + scan->rs_parallelworkerdata = NULL; + + /* + * we do this here instead of in initscan() because tdeheap_rescan also calls + * initscan() and we don't want to allocate memory again + */ + if (nkeys > 0) + scan->rs_base.rs_key = (ScanKey) palloc(sizeof(ScanKeyData) * nkeys); + else + scan->rs_base.rs_key = NULL; + + initscan(scan, key, false); + + return (TableScanDesc) scan; +} + +void +tdeheap_rescan(TableScanDesc sscan, ScanKey key, bool set_params, + bool allow_strat, bool allow_sync, bool allow_pagemode) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + if (set_params) + { + if (allow_strat) + scan->rs_base.rs_flags |= SO_ALLOW_STRAT; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_STRAT; + + if (allow_sync) + scan->rs_base.rs_flags |= SO_ALLOW_SYNC; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + + if (allow_pagemode && scan->rs_base.rs_snapshot && + IsMVCCSnapshot(scan->rs_base.rs_snapshot)) + scan->rs_base.rs_flags |= SO_ALLOW_PAGEMODE; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_PAGEMODE; + } + + /* + * unpin scan buffers + */ + if (BufferIsValid(scan->rs_cbuf)) + ReleaseBuffer(scan->rs_cbuf); + + /* + * reinitialize scan descriptor + */ + initscan(scan, key, true); +} + +void +tdeheap_endscan(TableScanDesc sscan) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + /* Note: no locking manipulations needed */ + + /* + * unpin scan buffers + */ + if (BufferIsValid(scan->rs_cbuf)) + ReleaseBuffer(scan->rs_cbuf); + + /* + * decrement relation reference count and free scan descriptor storage + */ + RelationDecrementReferenceCount(scan->rs_base.rs_rd); + + if (scan->rs_base.rs_key) + pfree(scan->rs_base.rs_key); + + if (scan->rs_strategy != NULL) + FreeAccessStrategy(scan->rs_strategy); + + if (scan->rs_parallelworkerdata != NULL) + pfree(scan->rs_parallelworkerdata); + + if (scan->rs_base.rs_flags & SO_TEMP_SNAPSHOT) + UnregisterSnapshot(scan->rs_base.rs_snapshot); + + pfree(scan); +} + +HeapTuple +tdeheap_getnext(TableScanDesc sscan, ScanDirection direction) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + /* + * This is still widely used directly, without going through table AM, so + * add a safety check. It's possible we should, at a later point, + * downgrade this to an assert. The reason for checking the AM routine, + * rather than the AM oid, is that this allows to write regression tests + * that create another AM reusing the heap handler. + */ + if (unlikely(sscan->rs_rd->rd_tableam != GetPGTdeamTableAmRoutine())) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg_internal("only pg_tde AM is supported"))); + + /* + * We don't expect direct calls to tdeheap_getnext with valid CheckXidAlive + * for catalog or regular tables. See detailed comments in xact.c where + * these variables are declared. Normally we have such a check at tableam + * level API but this is called from many places so we need to ensure it + * here. 
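+	 *
+	 * A sketched direct-call loop (flags and variable names illustrative;
+	 * most callers should go through the table AM interface instead):
+	 *
+	 *     TableScanDesc scan = tdeheap_beginscan(rel, snapshot, 0, NULL,
+	 *                                            NULL, SO_TYPE_SEQSCAN);
+	 *     HeapTuple	tuple;
+	 *
+	 *     while ((tuple = tdeheap_getnext(scan, ForwardScanDirection)) != NULL)
+	 *         ... process tuple ...
+	 *     tdeheap_endscan(scan);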
+ */ + if (unlikely(TransactionIdIsValid(CheckXidAlive) && !bsysscan)) + elog(ERROR, "unexpected tdeheap_getnext call during logical decoding"); + + /* Note: no locking manipulations needed */ + + if (scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) + tdeheapgettup_pagemode(scan, direction, + scan->rs_base.rs_nkeys, scan->rs_base.rs_key); + else + tdeheapgettup(scan, direction, + scan->rs_base.rs_nkeys, scan->rs_base.rs_key); + + if (scan->rs_ctup.t_data == NULL) + return NULL; + + /* + * if we get here it means we have a new current scan tuple, so point to + * the proper return buffer and return the tuple. + */ + + pgstat_count_tdeheap_getnext(scan->rs_base.rs_rd); + + return &scan->rs_ctup; +} + +bool +tdeheap_getnextslot(TableScanDesc sscan, ScanDirection direction, TupleTableSlot *slot) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + /* Note: no locking manipulations needed */ + + if (sscan->rs_flags & SO_ALLOW_PAGEMODE) + tdeheapgettup_pagemode(scan, direction, sscan->rs_nkeys, sscan->rs_key); + else + tdeheapgettup(scan, direction, sscan->rs_nkeys, sscan->rs_key); + + if (scan->rs_ctup.t_data == NULL) + { + ExecClearTuple(slot); + return false; + } + + /* + * if we get here it means we have a new current scan tuple, so point to + * the proper return buffer and return the tuple. + */ + + pgstat_count_tdeheap_getnext(scan->rs_base.rs_rd); + + PGTdeExecStoreBufferHeapTuple(sscan->rs_rd, &scan->rs_ctup, slot, + scan->rs_cbuf); + return true; +} + +void +tdeheap_set_tidrange(TableScanDesc sscan, ItemPointer mintid, + ItemPointer maxtid) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + BlockNumber startBlk; + BlockNumber numBlks; + ItemPointerData highestItem; + ItemPointerData lowestItem; + + /* + * For relations without any pages, we can simply leave the TID range + * unset. There will be no tuples to scan, therefore no tuples outside + * the given TID range. + */ + if (scan->rs_nblocks == 0) + return; + + /* + * Set up some ItemPointers which point to the first and last possible + * tuples in the heap. + */ + ItemPointerSet(&highestItem, scan->rs_nblocks - 1, MaxOffsetNumber); + ItemPointerSet(&lowestItem, 0, FirstOffsetNumber); + + /* + * If the given maximum TID is below the highest possible TID in the + * relation, then restrict the range to that, otherwise we scan to the end + * of the relation. + */ + if (ItemPointerCompare(maxtid, &highestItem) < 0) + ItemPointerCopy(maxtid, &highestItem); + + /* + * If the given minimum TID is above the lowest possible TID in the + * relation, then restrict the range to only scan for TIDs above that. + */ + if (ItemPointerCompare(mintid, &lowestItem) > 0) + ItemPointerCopy(mintid, &lowestItem); + + /* + * Check for an empty range and protect from would be negative results + * from the numBlks calculation below. + */ + if (ItemPointerCompare(&highestItem, &lowestItem) < 0) + { + /* Set an empty range of blocks to scan */ + tdeheap_setscanlimits(sscan, 0, 0); + return; + } + + /* + * Calculate the first block and the number of blocks we must scan. We + * could be more aggressive here and perform some more validation to try + * and further narrow the scope of blocks to scan by checking if the + * lowestItem has an offset above MaxOffsetNumber. In this case, we could + * advance startBlk by one. Likewise, if highestItem has an offset of 0 + * we could scan one fewer blocks. However, such an optimization does not + * seem worth troubling over, currently. 
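+	 *
+	 * A worked example (illustrative numbers): with mintid = (5,10) and
+	 * maxtid = (9,2) in a 20-block relation, lowestItem ends up as (5,10)
+	 * and highestItem as (9,2), so startBlk = 5 and numBlks = 9 - 5 + 1 = 5.
+	 * Tuples on the two boundary pages that fall outside the TID range are
+	 * filtered out later, in tdeheap_getnextslot_tidrange().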
+ */ + startBlk = ItemPointerGetBlockNumberNoCheck(&lowestItem); + + numBlks = ItemPointerGetBlockNumberNoCheck(&highestItem) - + ItemPointerGetBlockNumberNoCheck(&lowestItem) + 1; + + /* Set the start block and number of blocks to scan */ + tdeheap_setscanlimits(sscan, startBlk, numBlks); + + /* Finally, set the TID range in sscan */ + ItemPointerCopy(&lowestItem, &sscan->rs_mintid); + ItemPointerCopy(&highestItem, &sscan->rs_maxtid); +} + +bool +tdeheap_getnextslot_tidrange(TableScanDesc sscan, ScanDirection direction, + TupleTableSlot *slot) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + ItemPointer mintid = &sscan->rs_mintid; + ItemPointer maxtid = &sscan->rs_maxtid; + + /* Note: no locking manipulations needed */ + for (;;) + { + if (sscan->rs_flags & SO_ALLOW_PAGEMODE) + tdeheapgettup_pagemode(scan, direction, sscan->rs_nkeys, sscan->rs_key); + else + tdeheapgettup(scan, direction, sscan->rs_nkeys, sscan->rs_key); + + if (scan->rs_ctup.t_data == NULL) + { + ExecClearTuple(slot); + return false; + } + + /* + * tdeheap_set_tidrange will have used tdeheap_setscanlimits to limit the + * range of pages we scan to only ones that can contain the TID range + * we're scanning for. Here we must filter out any tuples from these + * pages that are outside of that range. + */ + if (ItemPointerCompare(&scan->rs_ctup.t_self, mintid) < 0) + { + ExecClearTuple(slot); + + /* + * When scanning backwards, the TIDs will be in descending order. + * Future tuples in this direction will be lower still, so we can + * just return false to indicate there will be no more tuples. + */ + if (ScanDirectionIsBackward(direction)) + return false; + + continue; + } + + /* + * Likewise for the final page, we must filter out TIDs greater than + * maxtid. + */ + if (ItemPointerCompare(&scan->rs_ctup.t_self, maxtid) > 0) + { + ExecClearTuple(slot); + + /* + * When scanning forward, the TIDs will be in ascending order. + * Future tuples in this direction will be higher still, so we can + * just return false to indicate there will be no more tuples. + */ + if (ScanDirectionIsForward(direction)) + return false; + continue; + } + + break; + } + + /* + * if we get here it means we have a new current scan tuple, so point to + * the proper return buffer and return the tuple. + */ + pgstat_count_tdeheap_getnext(scan->rs_base.rs_rd); + + PGTdeExecStoreBufferHeapTuple(sscan->rs_rd, &scan->rs_ctup, slot, scan->rs_cbuf); + return true; +} + +/* + * tdeheap_fetch - retrieve tuple with given tid + * + * On entry, tuple->t_self is the TID to fetch. We pin the buffer holding + * the tuple, fill in the remaining fields of *tuple, and check the tuple + * against the specified snapshot. + * + * If successful (tuple found and passes snapshot time qual), then *userbuf + * is set to the buffer holding the tuple and true is returned. The caller + * must unpin the buffer when done with the tuple. + * + * If the tuple is not found (ie, item number references a deleted slot), + * then tuple->t_data is set to NULL, *userbuf is set to InvalidBuffer, + * and false is returned. + * + * If the tuple is found but fails the time qual check, then the behavior + * depends on the keep_buf parameter. If keep_buf is false, the results + * are the same as for the tuple-not-found case. If keep_buf is true, + * then tuple->t_data and *userbuf are returned as for the success case, + * and again the caller must unpin the buffer; but false is returned. + * + * tdeheap_fetch does not follow HOT chains: only the exact TID requested will + * be fetched. 
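+ *
+ * Illustrative use (a minimal sketch; rel, snapshot, blkno and offnum are
+ * placeholders, error handling omitted); tup.t_data may be used for as long
+ * as buf stays pinned:
+ *
+ *     HeapTupleData tup;
+ *     Buffer        buf;
+ *
+ *     ItemPointerSet(&tup.t_self, blkno, offnum);
+ *     if (tdeheap_fetch(rel, snapshot, &tup, &buf, false))
+ *     {
+ *         ...
+ *         ReleaseBuffer(buf);
+ *     }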
+ * + * It is somewhat inconsistent that we ereport() on invalid block number but + * return false on invalid item number. There are a couple of reasons though. + * One is that the caller can relatively easily check the block number for + * validity, but cannot check the item number without reading the page + * himself. Another is that when we are following a t_ctid link, we can be + * reasonably confident that the page number is valid (since VACUUM shouldn't + * truncate off the destination page without having killed the referencing + * tuple first), but the item number might well not be good. + */ +bool +tdeheap_fetch(Relation relation, + Snapshot snapshot, + HeapTuple tuple, + Buffer *userbuf, + bool keep_buf) +{ + ItemPointer tid = &(tuple->t_self); + ItemId lp; + Buffer buffer; + Page page; + OffsetNumber offnum; + bool valid; + + /* + * Fetch and pin the appropriate page of the relation. + */ + buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid)); + + /* + * Need share lock on buffer to examine tuple commit status. + */ + LockBuffer(buffer, BUFFER_LOCK_SHARE); + page = BufferGetPage(buffer); + TestForOldSnapshot(snapshot, relation, page); + + /* + * We'd better check for out-of-range offnum in case of VACUUM since the + * TID was obtained. + */ + offnum = ItemPointerGetOffsetNumber(tid); + if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page)) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + ReleaseBuffer(buffer); + *userbuf = InvalidBuffer; + tuple->t_data = NULL; + return false; + } + + /* + * get the item line pointer corresponding to the requested tid + */ + lp = PageGetItemId(page, offnum); + + /* + * Must check for deleted tuple. + */ + if (!ItemIdIsNormal(lp)) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + ReleaseBuffer(buffer); + *userbuf = InvalidBuffer; + tuple->t_data = NULL; + return false; + } + + /* + * fill in *tuple fields + */ + tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp); + tuple->t_len = ItemIdGetLength(lp); + tuple->t_tableOid = RelationGetRelid(relation); + + /* + * check tuple visibility, then release lock + */ + valid = HeapTupleSatisfiesVisibility(tuple, snapshot, buffer); + + if (valid) + PredicateLockTID(relation, &(tuple->t_self), snapshot, + HeapTupleHeaderGetXmin(tuple->t_data)); + + HeapCheckForSerializableConflictOut(valid, relation, tuple, buffer, snapshot); + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + if (valid) + { + /* + * All checks passed, so return the tuple as valid. Caller is now + * responsible for releasing the buffer. + */ + *userbuf = buffer; + + return true; + } + + /* Tuple failed time qual, but maybe caller wants to see it anyway. */ + if (keep_buf) + *userbuf = buffer; + else + { + ReleaseBuffer(buffer); + *userbuf = InvalidBuffer; + tuple->t_data = NULL; + } + + return false; +} + +/* + * tdeheap_hot_search_buffer - search HOT chain for tuple satisfying snapshot + * + * On entry, *tid is the TID of a tuple (either a simple tuple, or the root + * of a HOT chain), and buffer is the buffer holding this tuple. We search + * for the first chain member satisfying the given snapshot. If one is + * found, we update *tid to reference that tuple's offset number, and + * return true. If no match, return false without modifying *tid. + * + * heapTuple is a caller-supplied buffer. When a match is found, we return + * the tuple here, in addition to updating *tid. If no match is found, the + * contents of this buffer on return are undefined. 
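+ *
+ * A typical call, for illustration (names are placeholders; tid initially
+ * holds the root of the HOT chain, and the caller already has pin and share
+ * lock on buffer):
+ *
+ *     ItemPointerData tid = root_tid;
+ *     HeapTupleData   tup;
+ *     bool            all_dead;
+ *
+ *     if (tdeheap_hot_search_buffer(&tid, rel, buffer, snapshot, &tup,
+ *                                   &all_dead, true))
+ *         ...
+ *
+ * after which tid and tup reference the visible chain member.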
+ * + * If all_dead is not NULL, we check non-visible tuples to see if they are + * globally dead; *all_dead is set true if all members of the HOT chain + * are vacuumable, false if not. + * + * Unlike tdeheap_fetch, the caller must already have pin and (at least) share + * lock on the buffer; it is still pinned/locked at exit. + */ +bool +tdeheap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer, + Snapshot snapshot, HeapTuple heapTuple, + bool *all_dead, bool first_call) +{ + Page page = BufferGetPage(buffer); + TransactionId prev_xmax = InvalidTransactionId; + BlockNumber blkno; + OffsetNumber offnum; + bool at_chain_start; + bool valid; + bool skip; + GlobalVisState *vistest = NULL; + + /* If this is not the first call, previous call returned a (live!) tuple */ + if (all_dead) + *all_dead = first_call; + + blkno = ItemPointerGetBlockNumber(tid); + offnum = ItemPointerGetOffsetNumber(tid); + at_chain_start = first_call; + skip = !first_call; + + /* XXX: we should assert that a snapshot is pushed or registered */ + Assert(TransactionIdIsValid(RecentXmin)); + Assert(BufferGetBlockNumber(buffer) == blkno); + + /* Scan through possible multiple members of HOT-chain */ + for (;;) + { + ItemId lp; + + /* check for bogus TID */ + if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page)) + break; + + lp = PageGetItemId(page, offnum); + + /* check for unused, dead, or redirected items */ + if (!ItemIdIsNormal(lp)) + { + /* We should only see a redirect at start of chain */ + if (ItemIdIsRedirected(lp) && at_chain_start) + { + /* Follow the redirect */ + offnum = ItemIdGetRedirect(lp); + at_chain_start = false; + continue; + } + /* else must be end of chain */ + break; + } + + /* + * Update heapTuple to point to the element of the HOT chain we're + * currently investigating. Having t_self set correctly is important + * because the SSI checks and the *Satisfies routine for historical + * MVCC snapshots need the correct tid to decide about the visibility. + */ + heapTuple->t_data = (HeapTupleHeader) PageGetItem(page, lp); + heapTuple->t_len = ItemIdGetLength(lp); + heapTuple->t_tableOid = RelationGetRelid(relation); + ItemPointerSet(&heapTuple->t_self, blkno, offnum); + + /* + * Shouldn't see a HEAP_ONLY tuple at chain start. + */ + if (at_chain_start && HeapTupleIsHeapOnly(heapTuple)) + break; + + /* + * The xmin should match the previous xmax value, else chain is + * broken. + */ + if (TransactionIdIsValid(prev_xmax) && + !TransactionIdEquals(prev_xmax, + HeapTupleHeaderGetXmin(heapTuple->t_data))) + break; + + /* + * When first_call is true (and thus, skip is initially false) we'll + * return the first tuple we find. But on later passes, heapTuple + * will initially be pointing to the tuple we returned last time. + * Returning it again would be incorrect (and would loop forever), so + * we skip it and return the next match we find. + */ + if (!skip) + { + /* If it's visible per the snapshot, we must return it */ + valid = HeapTupleSatisfiesVisibility(heapTuple, snapshot, buffer); + HeapCheckForSerializableConflictOut(valid, relation, heapTuple, + buffer, snapshot); + + if (valid) + { + ItemPointerSetOffsetNumber(tid, offnum); + PredicateLockTID(relation, &heapTuple->t_self, snapshot, + HeapTupleHeaderGetXmin(heapTuple->t_data)); + if (all_dead) + *all_dead = false; + return true; + } + } + skip = false; + + /* + * If we can't see it, maybe no one else can either. At caller + * request, check whether all chain members are dead to all + * transactions. 
+ * + * Note: if you change the criterion here for what is "dead", fix the + * planner's get_actual_variable_range() function to match. + */ + if (all_dead && *all_dead) + { + if (!vistest) + vistest = GlobalVisTestFor(relation); + + if (!HeapTupleIsSurelyDead(heapTuple, vistest)) + *all_dead = false; + } + + /* + * Check to see if HOT chain continues past this tuple; if so fetch + * the next offnum and loop around. + */ + if (HeapTupleIsHotUpdated(heapTuple)) + { + Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) == + blkno); + offnum = ItemPointerGetOffsetNumber(&heapTuple->t_data->t_ctid); + at_chain_start = false; + prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple->t_data); + } + else + break; /* end of chain */ + } + + return false; +} + +/* + * tdeheap_get_latest_tid - get the latest tid of a specified tuple + * + * Actually, this gets the latest version that is visible according to the + * scan's snapshot. Create a scan using SnapshotDirty to get the very latest, + * possibly uncommitted version. + * + * *tid is both an input and an output parameter: it is updated to + * show the latest version of the row. Note that it will not be changed + * if no version of the row passes the snapshot test. + */ +void +tdeheap_get_latest_tid(TableScanDesc sscan, + ItemPointer tid) +{ + Relation relation = sscan->rs_rd; + Snapshot snapshot = sscan->rs_snapshot; + ItemPointerData ctid; + TransactionId priorXmax; + + /* + * table_tuple_get_latest_tid() verified that the passed in tid is valid. + * Assume that t_ctid links are valid however - there shouldn't be invalid + * ones in the table. + */ + Assert(ItemPointerIsValid(tid)); + + /* + * Loop to chase down t_ctid links. At top of loop, ctid is the tuple we + * need to examine, and *tid is the TID we will return if ctid turns out + * to be bogus. + * + * Note that we will loop until we reach the end of the t_ctid chain. + * Depending on the snapshot passed, there might be at most one visible + * version of the row, but we don't try to optimize for that. + */ + ctid = *tid; + priorXmax = InvalidTransactionId; /* cannot check first XMIN */ + for (;;) + { + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp; + HeapTupleData tp; + bool valid; + + /* + * Read, pin, and lock the page. + */ + buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(&ctid)); + LockBuffer(buffer, BUFFER_LOCK_SHARE); + page = BufferGetPage(buffer); + TestForOldSnapshot(snapshot, relation, page); + + /* + * Check for bogus item number. This is not treated as an error + * condition because it can happen while following a t_ctid link. We + * just assume that the prior tid is OK and return it unchanged. + */ + offnum = ItemPointerGetOffsetNumber(&ctid); + if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page)) + { + UnlockReleaseBuffer(buffer); + break; + } + lp = PageGetItemId(page, offnum); + if (!ItemIdIsNormal(lp)) + { + UnlockReleaseBuffer(buffer); + break; + } + + /* OK to access the tuple */ + tp.t_self = ctid; + tp.t_data = (HeapTupleHeader) PageGetItem(page, lp); + tp.t_len = ItemIdGetLength(lp); + tp.t_tableOid = RelationGetRelid(relation); + + /* + * After following a t_ctid link, we might arrive at an unrelated + * tuple. Check for XMIN match. + */ + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(tp.t_data))) + { + UnlockReleaseBuffer(buffer); + break; + } + + /* + * Check tuple visibility; if visible, set it as the new result + * candidate. 
+ */ + valid = HeapTupleSatisfiesVisibility(&tp, snapshot, buffer); + HeapCheckForSerializableConflictOut(valid, relation, &tp, buffer, snapshot); + if (valid) + *tid = ctid; + + /* + * If there's a valid t_ctid link, follow it, else we're done. + */ + if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) || + HeapTupleHeaderIsOnlyLocked(tp.t_data) || + HeapTupleHeaderIndicatesMovedPartitions(tp.t_data) || + ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid)) + { + UnlockReleaseBuffer(buffer); + break; + } + + ctid = tp.t_data->t_ctid; + priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data); + UnlockReleaseBuffer(buffer); + } /* end of loop */ +} + + +/* + * UpdateXmaxHintBits - update tuple hint bits after xmax transaction ends + * + * This is called after we have waited for the XMAX transaction to terminate. + * If the transaction aborted, we guarantee the XMAX_INVALID hint bit will + * be set on exit. If the transaction committed, we set the XMAX_COMMITTED + * hint bit if possible --- but beware that that may not yet be possible, + * if the transaction committed asynchronously. + * + * Note that if the transaction was a locker only, we set HEAP_XMAX_INVALID + * even if it commits. + * + * Hence callers should look only at XMAX_INVALID. + * + * Note this is not allowed for tuples whose xmax is a multixact. + */ +static void +UpdateXmaxHintBits(HeapTupleHeader tuple, Buffer buffer, TransactionId xid) +{ + Assert(TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple), xid)); + Assert(!(tuple->t_infomask & HEAP_XMAX_IS_MULTI)); + + if (!(tuple->t_infomask & (HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID))) + { + if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) && + TransactionIdDidCommit(xid)) + HeapTupleSetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + xid); + else + HeapTupleSetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + } +} + + +/* + * GetBulkInsertState - prepare status object for a bulk insert + */ +BulkInsertState +GetBulkInsertState(void) +{ + BulkInsertState bistate; + + bistate = (BulkInsertState) palloc(sizeof(BulkInsertStateData)); + bistate->strategy = GetAccessStrategy(BAS_BULKWRITE); + bistate->current_buf = InvalidBuffer; + bistate->next_free = InvalidBlockNumber; + bistate->last_free = InvalidBlockNumber; + bistate->already_extended_by = 0; + return bistate; +} + +/* + * FreeBulkInsertState - clean up after finishing a bulk insert + */ +void +FreeBulkInsertState(BulkInsertState bistate) +{ + if (bistate->current_buf != InvalidBuffer) + ReleaseBuffer(bistate->current_buf); + FreeAccessStrategy(bistate->strategy); + pfree(bistate); +} + +/* + * ReleaseBulkInsertStatePin - release a buffer currently held in bistate + */ +void +ReleaseBulkInsertStatePin(BulkInsertState bistate) +{ + if (bistate->current_buf != InvalidBuffer) + ReleaseBuffer(bistate->current_buf); + bistate->current_buf = InvalidBuffer; + + /* + * Despite the name, we also reset bulk relation extension state. + * Otherwise we can end up erroring out due to looking for free space in + * ->next_free of one partition, even though ->next_free was set when + * extending another partition. It could obviously also be bad for + * efficiency to look at existing blocks at offsets from another + * partition, even if we don't error out. + */ + bistate->next_free = InvalidBlockNumber; + bistate->last_free = InvalidBlockNumber; +} + + +/* + * tdeheap_insert - insert tuple into a heap + * + * The new tuple is stamped with current transaction ID and the specified + * command ID. 
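+ *
+ * Callers inserting many tuples can amortize buffer lookups with the
+ * bulk-insert state defined above; a minimal sketch (rel and tuples[] are
+ * placeholders, options = 0 keeps the default behavior):
+ *
+ *     BulkInsertState bistate = GetBulkInsertState();
+ *
+ *     for (i = 0; i < ntuples; i++)
+ *         tdeheap_insert(rel, tuples[i], GetCurrentCommandId(true),
+ *                        0, bistate);
+ *     FreeBulkInsertState(bistate);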
+ *
+ * See table_tuple_insert for comments about most of the input flags, except
+ * that this routine directly takes a tuple rather than a slot.
+ *
+ * There's corresponding HEAP_INSERT_ options to all the TABLE_INSERT_
+ * options, and there additionally is HEAP_INSERT_SPECULATIVE which is used to
+ * implement table_tuple_insert_speculative().
+ *
+ * On return the header fields of *tup are updated to match the stored tuple;
+ * in particular tup->t_self receives the actual TID where the tuple was
+ * stored. But note that any toasting of fields within the tuple data is NOT
+ * reflected into *tup.
+ */
+void
+tdeheap_insert(Relation relation, HeapTuple tup, CommandId cid,
+			   int options, BulkInsertState bistate)
+{
+	TransactionId xid = GetCurrentTransactionId();
+	HeapTuple	heaptup;
+	Buffer		buffer;
+	Buffer		vmbuffer = InvalidBuffer;
+	bool		all_visible_cleared = false;
+
+	/* Cheap, simplistic check that the tuple matches the rel's rowtype. */
+	Assert(HeapTupleHeaderGetNatts(tup->t_data) <=
+		   RelationGetNumberOfAttributes(relation));
+
+	/*
+	 * Fill in tuple header fields and toast the tuple if necessary.
+	 *
+	 * Note: below this point, heaptup is the data we actually intend to store
+	 * into the relation; tup is the caller's original untoasted data.
+	 */
+	heaptup = tdeheap_prepare_insert(relation, tup, xid, cid, options);
+
+	/*
+	 * Find buffer to insert this tuple into. If the page is all visible,
+	 * this will also pin the requisite visibility map page.
+	 */
+	buffer = tdeheap_RelationGetBufferForTuple(relation, heaptup->t_len,
+											   InvalidBuffer, options, bistate,
+											   &vmbuffer, NULL,
+											   0);
+
+	/*
+	 * We're about to do the actual insert -- but check for conflict first, to
+	 * avoid possibly having to roll back work we've just done.
+	 *
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the page between this check and the insert
+	 * being visible to the scan (i.e., an exclusive buffer content lock is
+	 * continuously held from this point until the tuple insert is visible).
+	 *
+	 * For a heap insert, we only need to check for table-level SSI locks. Our
+	 * new tuple can't possibly conflict with existing tuple locks, and heap
+	 * page locks are only consolidated versions of tuple locks; they do not
+	 * lock "gaps" as index page locks do. So we don't need to specify a
+	 * buffer when making the call, which makes for a faster check.
+	 */
+	CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber);
+
+	/*
+	 * Make sure the relation keys are in the cache to avoid pallocs in the
+	 * critical section.
+	 */
+	GetHeapBaiscRelationKey(relation->rd_locator);
+
+	/* NO EREPORT(ERROR) from here till changes are logged */
+	START_CRIT_SECTION();
+
+	tdeheap_RelationPutHeapTuple(relation, buffer, heaptup,
+								 (options & HEAP_INSERT_TDE_NO_ENCRYPT) == 0,
+								 (options & HEAP_INSERT_SPECULATIVE) != 0);
+
+	if (PageIsAllVisible(BufferGetPage(buffer)))
+	{
+		all_visible_cleared = true;
+		PageClearAllVisible(BufferGetPage(buffer));
+		tdeheap_visibilitymap_clear(relation,
+									ItemPointerGetBlockNumber(&(heaptup->t_self)),
+									vmbuffer, VISIBILITYMAP_VALID_BITS);
+	}
+
+	/*
+	 * XXX Should we set PageSetPrunable on this page ?
+	 *
+	 * The inserting transaction may eventually abort thus making this tuple
+	 * DEAD and hence available for pruning. Though we don't want to optimize
+	 * for aborts, if no other tuple in this page is UPDATEd/DELETEd, the
+	 * aborted tuple will never be pruned until next vacuum is triggered.
+ * + * If you do add PageSetPrunable here, add it in tdeheap_xlog_insert too. + */ + + MarkBufferDirty(buffer); + + /* XLOG stuff */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_insert xlrec; + xl_tdeheap_header xlhdr; + XLogRecPtr recptr; + Page page = BufferGetPage(buffer); + uint8 info = XLOG_HEAP_INSERT; + int bufflags = 0; + + /* + * If this is a catalog, we need to transmit combo CIDs to properly + * decode, so log that as well. + */ + if (RelationIsAccessibleInLogicalDecoding(relation)) + log_tdeheap_new_cid(relation, heaptup); + + /* + * If this is the single and first tuple on page, we can reinit the + * page instead of restoring the whole thing. Set flag, and hide + * buffer references from XLogInsert. + */ + if (ItemPointerGetOffsetNumber(&(heaptup->t_self)) == FirstOffsetNumber && + PageGetMaxOffsetNumber(page) == FirstOffsetNumber) + { + info |= XLOG_HEAP_INIT_PAGE; + bufflags |= REGBUF_WILL_INIT; + } + + xlrec.offnum = ItemPointerGetOffsetNumber(&heaptup->t_self); + xlrec.flags = 0; + if (all_visible_cleared) + xlrec.flags |= XLH_INSERT_ALL_VISIBLE_CLEARED; + if (options & HEAP_INSERT_SPECULATIVE) + xlrec.flags |= XLH_INSERT_IS_SPECULATIVE; + Assert(ItemPointerGetBlockNumber(&heaptup->t_self) == BufferGetBlockNumber(buffer)); + + /* + * For logical decoding, we need the tuple even if we're doing a full + * page write, so make sure it's included even if we take a full-page + * image. (XXX We could alternatively store a pointer into the FPW). + */ + if (RelationIsLogicallyLogged(relation) && + !(options & HEAP_INSERT_NO_LOGICAL)) + { + xlrec.flags |= XLH_INSERT_CONTAINS_NEW_TUPLE; + bufflags |= REGBUF_KEEP_DATA; + + if (IsToastRelation(relation)) + xlrec.flags |= XLH_INSERT_ON_TOAST_RELATION; + } + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapInsert); + + xlhdr.t_infomask2 = heaptup->t_data->t_infomask2; + xlhdr.t_infomask = heaptup->t_data->t_infomask; + xlhdr.t_hoff = heaptup->t_data->t_hoff; + + /* + * note we mark xlhdr as belonging to buffer; if XLogInsert decides to + * write the whole page to the xlog, we don't need to store + * xl_tdeheap_header in the xlog. + */ + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD | bufflags); + XLogRegisterBufData(0, (char *) &xlhdr, SizeOfHeapHeader); + /* register encrypted tuple data from the buffer */ + PageHeader phdr = (PageHeader) BufferGetPage(buffer); + /* PG73FORMAT: write bitmap [+ padding] [+ oid] + data */ + XLogRegisterBufData(0, + ((char *) phdr) + phdr->pd_upper + SizeofHeapTupleHeader, + heaptup->t_len - SizeofHeapTupleHeader); + + /* filtering by origin on a row level is much more efficient */ + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); + + recptr = XLogInsert(RM_HEAP_ID, info); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + UnlockReleaseBuffer(buffer); + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + + /* + * If tuple is cachable, mark it for invalidation from the caches in case + * we abort. Note it is OK to do this after releasing the buffer, because + * the heaptup data structure is all in local memory, not in the shared + * buffer. + */ + CacheInvalidateHeapTuple(relation, heaptup, NULL); + + /* Note: speculative insertions are counted too, even if aborted later */ + pgstat_count_tdeheap_insert(relation, 1); + + /* + * If heaptup is a private copy, release it. Don't forget to copy t_self + * back to the caller's image, too. 
+ */ + if (heaptup != tup) + { + tup->t_self = heaptup->t_self; + tdeheap_freetuple(heaptup); + } +} + +/* + * Subroutine for tdeheap_insert(). Prepares a tuple for insertion. This sets the + * tuple header fields and toasts the tuple if necessary. Returns a toasted + * version of the tuple if it was toasted, or the original tuple if not. Note + * that in any case, the header fields are also set in the original tuple. + */ +static HeapTuple +tdeheap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid, + CommandId cid, int options) +{ + /* + * To allow parallel inserts, we need to ensure that they are safe to be + * performed in workers. We have the infrastructure to allow parallel + * inserts in general except for the cases where inserts generate a new + * CommandId (eg. inserts into a table having a foreign key column). + */ + if (IsParallelWorker()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_STATE), + errmsg("cannot insert tuples in a parallel worker"))); + + tup->t_data->t_infomask &= ~(HEAP_XACT_MASK); + tup->t_data->t_infomask2 &= ~(HEAP2_XACT_MASK); + tup->t_data->t_infomask |= HEAP_XMAX_INVALID; + HeapTupleHeaderSetXmin(tup->t_data, xid); + if (options & HEAP_INSERT_FROZEN) + HeapTupleHeaderSetXminFrozen(tup->t_data); + + HeapTupleHeaderSetCmin(tup->t_data, cid); + HeapTupleHeaderSetXmax(tup->t_data, 0); /* for cleanliness */ + tup->t_tableOid = RelationGetRelid(relation); + + /* + * If the new tuple is too big for storage or contains already toasted + * out-of-line attributes from some other relation, invoke the toaster. + */ + if (relation->rd_rel->relkind != RELKIND_RELATION && + relation->rd_rel->relkind != RELKIND_MATVIEW) + { + /* toast table entries should never be recursively toasted */ + Assert(!HeapTupleHasExternal(tup)); + return tup; + } + else if (HeapTupleHasExternal(tup) || tup->t_len > TOAST_TUPLE_THRESHOLD) + return tdeheap_toast_insert_or_update(relation, tup, NULL, options); + else + return tup; +} + +/* + * Helper for tdeheap_multi_insert() that computes the number of entire pages + * that inserting the remaining heaptuples requires. Used to determine how + * much the relation needs to be extended by. + */ +static int +tdeheap_multi_insert_pages(HeapTuple *heaptuples, int done, int ntuples, Size saveFreeSpace) +{ + size_t page_avail = BLCKSZ - SizeOfPageHeaderData - saveFreeSpace; + int npages = 1; + + for (int i = done; i < ntuples; i++) + { + size_t tup_sz = sizeof(ItemIdData) + MAXALIGN(heaptuples[i]->t_len); + + if (page_avail < tup_sz) + { + npages++; + page_avail = BLCKSZ - SizeOfPageHeaderData - saveFreeSpace; + } + page_avail -= tup_sz; + } + + return npages; +} + +/* + * tdeheap_multi_insert - insert multiple tuples into a heap + * + * This is like tdeheap_insert(), but inserts multiple tuples in one operation. + * That's faster than calling tdeheap_insert() in a loop, because when multiple + * tuples can be inserted on a single page, we can write just a single WAL + * record covering all of them, and only need to lock/unlock the page once. + * + * Note: this leaks memory into the current memory context. You can create a + * temporary context before calling this, if that's a problem. 
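+ *
+ * For example, a caller can contain that memory with the usual context idiom
+ * (a sketch; the context name is arbitrary and the other names are
+ * placeholders):
+ *
+ *     MemoryContext cxt = AllocSetContextCreate(CurrentMemoryContext,
+ *                                               "multi-insert scratch",
+ *                                               ALLOCSET_DEFAULT_SIZES);
+ *     MemoryContext old = MemoryContextSwitchTo(cxt);
+ *
+ *     tdeheap_multi_insert(rel, slots, ntuples, cid, options, bistate);
+ *     MemoryContextSwitchTo(old);
+ *     MemoryContextDelete(cxt);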
+ */ +void +tdeheap_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples, + CommandId cid, int options, BulkInsertState bistate) +{ + TransactionId xid = GetCurrentTransactionId(); + HeapTuple *heaptuples; + int i; + int ndone; + PGAlignedBlock scratch; + Page page; + Buffer vmbuffer = InvalidBuffer; + bool needwal; + Size saveFreeSpace; + bool need_tuple_data = RelationIsLogicallyLogged(relation); + bool need_cids = RelationIsAccessibleInLogicalDecoding(relation); + bool starting_with_empty_page = false; + int npages = 0; + int npages_used = 0; + + /* currently not needed (thus unsupported) for tdeheap_multi_insert() */ + Assert(!(options & HEAP_INSERT_NO_LOGICAL)); + + needwal = RelationNeedsWAL(relation); + saveFreeSpace = RelationGetTargetPageFreeSpace(relation, + HEAP_DEFAULT_FILLFACTOR); + + /* Toast and set header data in all the slots */ + heaptuples = palloc(ntuples * sizeof(HeapTuple)); + for (i = 0; i < ntuples; i++) + { + HeapTuple tuple; + + tuple = ExecFetchSlotHeapTuple(slots[i], true, NULL); + slots[i]->tts_tableOid = RelationGetRelid(relation); + tuple->t_tableOid = slots[i]->tts_tableOid; + heaptuples[i] = tdeheap_prepare_insert(relation, tuple, xid, cid, + options); + } + + /* + * We're about to do the actual inserts -- but check for conflict first, + * to minimize the possibility of having to roll back work we've just + * done. + * + * A check here does not definitively prevent a serialization anomaly; + * that check MUST be done at least past the point of acquiring an + * exclusive buffer content lock on every buffer that will be affected, + * and MAY be done after all inserts are reflected in the buffers and + * those locks are released; otherwise there is a race condition. Since + * multiple buffers can be locked and unlocked in the loop below, and it + * would not be feasible to identify and lock all of those buffers before + * the loop, we must do a final check at the end. + * + * The check here could be omitted with no loss of correctness; it is + * present strictly as an optimization. + * + * For heap inserts, we only need to check for table-level SSI locks. Our + * new tuples can't possibly conflict with existing tuple locks, and heap + * page locks are only consolidated versions of tuple locks; they do not + * lock "gaps" as index page locks do. So we don't need to specify a + * buffer when making the call, which makes for a faster check. + */ + CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber); + + ndone = 0; + while (ndone < ntuples) + { + Buffer buffer; + bool all_visible_cleared = false; + bool all_frozen_set = false; + int nthispage; + + CHECK_FOR_INTERRUPTS(); + + /* + * Compute number of pages needed to fit the to-be-inserted tuples in + * the worst case. This will be used to determine how much to extend + * the relation by in tdeheap_RelationGetBufferForTuple(), if needed. If we + * filled a prior page from scratch, we can just update our last + * computation, but if we started with a partially filled page, + * recompute from scratch, the number of potentially required pages + * can vary due to tuples needing to fit onto the page, page headers + * etc. + */ + if (ndone == 0 || !starting_with_empty_page) + { + npages = tdeheap_multi_insert_pages(heaptuples, ndone, ntuples, + saveFreeSpace); + npages_used = 0; + } + else + npages_used++; + + /* + * Find buffer where at least the next tuple will fit. If the page is + * all-visible, this will also pin the requisite visibility map page. 
+		 *
+		 * Also pin visibility map page if COPY FREEZE inserts tuples into an
+		 * empty page. See all_frozen_set below.
+		 */
+		buffer = tdeheap_RelationGetBufferForTuple(relation, heaptuples[ndone]->t_len,
+												   InvalidBuffer, options, bistate,
+												   &vmbuffer, NULL,
+												   npages - npages_used);
+		page = BufferGetPage(buffer);
+
+		starting_with_empty_page = PageGetMaxOffsetNumber(page) == 0;
+
+		if (starting_with_empty_page && (options & HEAP_INSERT_FROZEN))
+			all_frozen_set = true;
+
+		/*
+		 * Make sure the relation keys are in the cache to avoid pallocs in
+		 * the critical section.
+		 */
+		GetHeapBaiscRelationKey(relation->rd_locator);
+
+		/* NO EREPORT(ERROR) from here till changes are logged */
+		START_CRIT_SECTION();
+
+		/*
+		 * tdeheap_RelationGetBufferForTuple has ensured that the first tuple fits.
+		 * Put that on the page, and then as many other tuples as fit.
+		 */
+		tdeheap_RelationPutHeapTuple(relation, buffer, heaptuples[ndone], true, false);
+
+		/*
+		 * For logical decoding we need combo CIDs to properly decode the
+		 * catalog.
+		 */
+		if (needwal && need_cids)
+			log_tdeheap_new_cid(relation, heaptuples[ndone]);
+
+		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
+		{
+			HeapTuple	heaptup = heaptuples[ndone + nthispage];
+
+			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
+				break;
+
+			tdeheap_RelationPutHeapTuple(relation, buffer, heaptup, true, false);
+
+			/*
+			 * For logical decoding we need combo CIDs to properly decode the
+			 * catalog.
+			 */
+			if (needwal && need_cids)
+				log_tdeheap_new_cid(relation, heaptup);
+		}
+
+		/*
+		 * If the page is all visible, need to clear that, unless we're only
+		 * going to add further frozen rows to it.
+		 *
+		 * If we're only adding already frozen rows to a previously empty
+		 * page, mark it as all-visible.
+		 */
+		if (PageIsAllVisible(page) && !(options & HEAP_INSERT_FROZEN))
+		{
+			all_visible_cleared = true;
+			PageClearAllVisible(page);
+			tdeheap_visibilitymap_clear(relation,
+										BufferGetBlockNumber(buffer),
+										vmbuffer, VISIBILITYMAP_VALID_BITS);
+		}
+		else if (all_frozen_set)
+			PageSetAllVisible(page);
+
+		/*
+		 * XXX Should we set PageSetPrunable on this page ? See tdeheap_insert()
+		 */
+
+		MarkBufferDirty(buffer);
+
+		/* XLOG stuff */
+		if (needwal)
+		{
+			XLogRecPtr	recptr;
+			xl_tdeheap_multi_insert *xlrec;
+			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			char	   *tupledata;
+			int			totaldatalen;
+			char	   *scratchptr = scratch.data;
+			bool		init;
+			int			bufflags = 0;
+
+			/*
+			 * If the page was previously empty, we can reinit the page
+			 * instead of restoring the whole thing.
+			 */
+			init = starting_with_empty_page;
+
+			/* allocate xl_tdeheap_multi_insert struct from the scratch area */
+			xlrec = (xl_tdeheap_multi_insert *) scratchptr;
+			scratchptr += SizeOfHeapMultiInsert;
+
+			/*
+			 * Allocate offsets array. Unless we're reinitializing the page,
+			 * in that case the tuples are stored in order starting at
+			 * FirstOffsetNumber and we don't need to store the offsets
+			 * explicitly.
+			 */
+			if (!init)
+				scratchptr += nthispage * sizeof(OffsetNumber);
+
+			/* the rest of the scratch space is used for tuple data */
+			tupledata = scratchptr;
+
+			/* check that the mutually exclusive flags are not both set */
+			Assert(!(all_visible_cleared && all_frozen_set));
+
+			xlrec->flags = 0;
+			if (all_visible_cleared)
+				xlrec->flags = XLH_INSERT_ALL_VISIBLE_CLEARED;
+			if (all_frozen_set)
+				xlrec->flags = XLH_INSERT_ALL_FROZEN_SET;
+
+			xlrec->ntuples = nthispage;
+
+			/*
+			 * Write out an xl_multi_insert_tuple and the tuple data itself
+			 * for each tuple.
+ */ + for (i = 0; i < nthispage; i++) + { + HeapTuple heaptup = heaptuples[ndone + i]; + xl_multi_insert_tuple *tuphdr; + int datalen; + + if (!init) + xlrec->offsets[i] = ItemPointerGetOffsetNumber(&heaptup->t_self); + /* xl_multi_insert_tuple needs two-byte alignment. */ + tuphdr = (xl_multi_insert_tuple *) SHORTALIGN(scratchptr); + scratchptr = ((char *) tuphdr) + SizeOfMultiInsertTuple; + + tuphdr->t_infomask2 = heaptup->t_data->t_infomask2; + tuphdr->t_infomask = heaptup->t_data->t_infomask; + tuphdr->t_hoff = heaptup->t_data->t_hoff; + + /* Point to an encrypted tuple data in the Buffer */ + char *tup_data_on_page = (char *) page + ItemIdGetOffset(PageGetItemId(page, heaptup->t_self.ip_posid)); + /* write bitmap [+ padding] [+ oid] + data */ + datalen = heaptup->t_len - SizeofHeapTupleHeader; + memcpy(scratchptr, + tup_data_on_page + SizeofHeapTupleHeader, + datalen); + tuphdr->datalen = datalen; + scratchptr += datalen; + } + totaldatalen = scratchptr - tupledata; + Assert((scratchptr - scratch.data) < BLCKSZ); + + if (need_tuple_data) + xlrec->flags |= XLH_INSERT_CONTAINS_NEW_TUPLE; + + /* + * Signal that this is the last xl_tdeheap_multi_insert record + * emitted by this call to tdeheap_multi_insert(). Needed for logical + * decoding so it knows when to cleanup temporary data. + */ + if (ndone + nthispage == ntuples) + xlrec->flags |= XLH_INSERT_LAST_IN_MULTI; + + if (init) + { + info |= XLOG_HEAP_INIT_PAGE; + bufflags |= REGBUF_WILL_INIT; + } + + /* + * If we're doing logical decoding, include the new tuple data + * even if we take a full-page image of the page. + */ + if (need_tuple_data) + bufflags |= REGBUF_KEEP_DATA; + + XLogBeginInsert(); + XLogRegisterData((char *) xlrec, tupledata - scratch.data); + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD | bufflags); + + XLogRegisterBufData(0, tupledata, totaldatalen); + + /* filtering by origin on a row level is much more efficient */ + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); + + recptr = XLogInsert(RM_HEAP2_ID, info); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + /* + * If we've frozen everything on the page, update the visibilitymap. + * We're already holding pin on the vmbuffer. + */ + if (all_frozen_set) + { + Assert(PageIsAllVisible(page)); + Assert(tdeheap_visibilitymap_pin_ok(BufferGetBlockNumber(buffer), vmbuffer)); + + /* + * It's fine to use InvalidTransactionId here - this is only used + * when HEAP_INSERT_FROZEN is specified, which intentionally + * violates visibility rules. + */ + tdeheap_visibilitymap_set(relation, BufferGetBlockNumber(buffer), buffer, + InvalidXLogRecPtr, vmbuffer, + InvalidTransactionId, + VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN); + } + + UnlockReleaseBuffer(buffer); + ndone += nthispage; + + /* + * NB: Only release vmbuffer after inserting all tuples - it's fairly + * likely that we'll insert into subsequent heap pages that are likely + * to use the same vm page. + */ + } + + /* We're done with inserting all tuples, so release the last vmbuffer. */ + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + + /* + * We're done with the actual inserts. Check for conflicts again, to + * ensure that all rw-conflicts in to these inserts are detected. Without + * this final check, a sequential scan of the heap may have locked the + * table after the "before" check, missing one opportunity to detect the + * conflict, and then scanned the table before the new tuples were there, + * missing the other chance to detect the conflict. 
+ * + * For heap inserts, we only need to check for table-level SSI locks. Our + * new tuples can't possibly conflict with existing tuple locks, and heap + * page locks are only consolidated versions of tuple locks; they do not + * lock "gaps" as index page locks do. So we don't need to specify a + * buffer when making the call. + */ + CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber); + + /* + * If tuples are cachable, mark them for invalidation from the caches in + * case we abort. Note it is OK to do this after releasing the buffer, + * because the heaptuples data structure is all in local memory, not in + * the shared buffer. + */ + if (IsCatalogRelation(relation)) + { + for (i = 0; i < ntuples; i++) + CacheInvalidateHeapTuple(relation, heaptuples[i], NULL); + } + + /* copy t_self fields back to the caller's slots */ + for (i = 0; i < ntuples; i++) + slots[i]->tts_tid = heaptuples[i]->t_self; + + pgstat_count_tdeheap_insert(relation, ntuples); +} + +/* + * simple_tdeheap_insert - insert a tuple + * + * Currently, this routine differs from tdeheap_insert only in supplying + * a default command ID and not allowing access to the speedup options. + * + * This should be used rather than using tdeheap_insert directly in most places + * where we are modifying system catalogs. + */ +void +simple_tdeheap_insert(Relation relation, HeapTuple tup) +{ + tdeheap_insert(relation, tup, GetCurrentCommandId(true), 0, NULL); +} + +/* + * Given infomask/infomask2, compute the bits that must be saved in the + * "infobits" field of xl_tdeheap_delete, xl_tdeheap_update, xl_tdeheap_lock, + * xl_tdeheap_lock_updated WAL records. + * + * See fix_infomask_from_infobits. + */ +static uint8 +compute_infobits(uint16 infomask, uint16 infomask2) +{ + return + ((infomask & HEAP_XMAX_IS_MULTI) != 0 ? XLHL_XMAX_IS_MULTI : 0) | + ((infomask & HEAP_XMAX_LOCK_ONLY) != 0 ? XLHL_XMAX_LOCK_ONLY : 0) | + ((infomask & HEAP_XMAX_EXCL_LOCK) != 0 ? XLHL_XMAX_EXCL_LOCK : 0) | + /* note we ignore HEAP_XMAX_SHR_LOCK here */ + ((infomask & HEAP_XMAX_KEYSHR_LOCK) != 0 ? XLHL_XMAX_KEYSHR_LOCK : 0) | + ((infomask2 & HEAP_KEYS_UPDATED) != 0 ? + XLHL_KEYS_UPDATED : 0); +} + +/* + * Given two versions of the same t_infomask for a tuple, compare them and + * return whether the relevant status for a tuple Xmax has changed. This is + * used after a buffer lock has been released and reacquired: we want to ensure + * that the tuple state continues to be the same it was when we previously + * examined it. + * + * Note the Xmax field itself must be compared separately. + */ +static inline bool +xmax_infomask_changed(uint16 new_infomask, uint16 old_infomask) +{ + const uint16 interesting = + HEAP_XMAX_IS_MULTI | HEAP_XMAX_LOCK_ONLY | HEAP_LOCK_MASK; + + if ((new_infomask & interesting) != (old_infomask & interesting)) + return true; + + return false; +} + +/* + * tdeheap_delete - delete a tuple + * + * See table_tuple_delete() for an explanation of the parameters, except that + * this routine directly takes a tuple rather than a slot. + * + * In the failure cases, the routine fills *tmfd with the tuple's t_ctid, + * t_xmax (resolving a possible MultiXact, if necessary), and t_cmax (the last + * only for TM_SelfModified, since we cannot obtain cmax from a combo CID + * generated by another transaction). 
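+ *
+ * A minimal caller sketch (rel and tid are placeholders; the last two
+ * arguments are wait = true and changingPart = false, and concurrency
+ * handling is elided -- see simple_tdeheap_delete() below for the canonical
+ * treatment of these result codes):
+ *
+ *     TM_FailureData tmfd;
+ *     TM_Result      res;
+ *
+ *     res = tdeheap_delete(rel, &tid, GetCurrentCommandId(true),
+ *                          InvalidSnapshot, true, &tmfd, false);
+ *     if (res != TM_Ok)
+ *         ...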
+ */ +TM_Result +tdeheap_delete(Relation relation, ItemPointer tid, + CommandId cid, Snapshot crosscheck, bool wait, + TM_FailureData *tmfd, bool changingPart) +{ + TM_Result result; + TransactionId xid = GetCurrentTransactionId(); + ItemId lp; + HeapTupleData tp; + Page page; + BlockNumber block; + Buffer buffer; + Buffer vmbuffer = InvalidBuffer; + TransactionId new_xmax; + uint16 new_infomask, + new_infomask2; + bool have_tuple_lock = false; + bool iscombo; + bool all_visible_cleared = false; + HeapTuple old_key_tuple = NULL; /* replica identity of the tuple */ + bool old_key_copied = false; + HeapTuple decrypted_tuple; + + Assert(ItemPointerIsValid(tid)); + + /* + * Forbid this during a parallel operation, lest it allocate a combo CID. + * Other workers might need that combo CID for visibility checks, and we + * have no provision for broadcasting it to them. + */ + if (IsInParallelMode()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_STATE), + errmsg("cannot delete tuples during a parallel operation"))); + + block = ItemPointerGetBlockNumber(tid); + buffer = ReadBuffer(relation, block); + page = BufferGetPage(buffer); + + /* + * Before locking the buffer, pin the visibility map page if it appears to + * be necessary. Since we haven't got the lock yet, someone else might be + * in the middle of changing this, so we'll need to recheck after we have + * the lock. + */ + if (PageIsAllVisible(page)) + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + + lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid)); + Assert(ItemIdIsNormal(lp)); + + tp.t_tableOid = RelationGetRelid(relation); + tp.t_data = (HeapTupleHeader) PageGetItem(page, lp); + tp.t_len = ItemIdGetLength(lp); + tp.t_self = *tid; + +l1: + + /* + * If we didn't pin the visibility map page and the page has become all + * visible while we were busy locking the buffer, we'll have to unlock and + * re-lock, to avoid holding the buffer lock across an I/O. That's a bit + * unfortunate, but hopefully shouldn't happen often. + */ + if (vmbuffer == InvalidBuffer && PageIsAllVisible(page)) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + + result = HeapTupleSatisfiesUpdate(&tp, cid, buffer); + + if (result == TM_Invisible) + { + UnlockReleaseBuffer(buffer); + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("attempted to delete invisible tuple"))); + } + else if (result == TM_BeingModified && wait) + { + TransactionId xwait; + uint16 infomask; + + /* must copy state data before unlocking buffer */ + xwait = HeapTupleHeaderGetRawXmax(tp.t_data); + infomask = tp.t_data->t_infomask; + + /* + * Sleep until concurrent transaction ends -- except when there's a + * single locker and it's our own transaction. Note we don't care + * which lock mode the locker has, because we need the strongest one. + * + * Before sleeping, we need to acquire tuple lock to establish our + * priority for the tuple (see tdeheap_lock_tuple). LockTuple will + * release us when we are next-in-line for the tuple. + * + * If we are forced to "start over" below, we keep the tuple lock; + * this arranges that we stay at the head of the line while rechecking + * tuple state. 
+		 */
+		if (infomask & HEAP_XMAX_IS_MULTI)
+		{
+			bool		current_is_member = false;
+
+			if (DoesMultiXactIdConflict((MultiXactId) xwait, infomask,
+										LockTupleExclusive, &current_is_member))
+			{
+				LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+
+				/*
+				 * Acquire the lock, if necessary (but skip it when we're
+				 * requesting a lock and already have one; avoids deadlock).
+				 */
+				if (!current_is_member)
+					tdeheap_acquire_tuplock(relation, &(tp.t_self), LockTupleExclusive,
+											LockWaitBlock, &have_tuple_lock);
+
+				/* wait for multixact */
+				MultiXactIdWait((MultiXactId) xwait, MultiXactStatusUpdate, infomask,
+								relation, &(tp.t_self), XLTW_Delete,
+								NULL);
+				LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+				/*
+				 * If xwait had just locked the tuple then some other xact
+				 * could update this tuple before we get to this point. Check
+				 * for xmax change, and start over if so.
+				 *
+				 * We also must start over if we didn't pin the VM page, and
+				 * the page has become all visible.
+				 */
+				if ((vmbuffer == InvalidBuffer && PageIsAllVisible(page)) ||
+					xmax_infomask_changed(tp.t_data->t_infomask, infomask) ||
+					!TransactionIdEquals(HeapTupleHeaderGetRawXmax(tp.t_data),
+										 xwait))
+					goto l1;
+			}
+
+			/*
+			 * You might think the multixact is necessarily done here, but not
+			 * so: it could have surviving members, namely our own xact or
+			 * other subxacts of this backend. It is legal for us to delete
+			 * the tuple in either case, however (the latter case is
+			 * essentially a situation of upgrading our former shared lock to
+			 * exclusive). We don't bother changing the on-disk hint bits
+			 * since we are about to overwrite the xmax altogether.
+			 */
+		}
+		else if (!TransactionIdIsCurrentTransactionId(xwait))
+		{
+			/*
+			 * Wait for regular transaction to end; but first, acquire tuple
+			 * lock.
+			 */
+			LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+			tdeheap_acquire_tuplock(relation, &(tp.t_self), LockTupleExclusive,
+									LockWaitBlock, &have_tuple_lock);
+			XactLockTableWait(xwait, relation, &(tp.t_self), XLTW_Delete);
+			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+			/*
+			 * xwait is done, but if xwait had just locked the tuple then some
+			 * other xact could update this tuple before we get to this point.
+			 * Check for xmax change, and start over if so.
+			 *
+			 * We also must start over if we didn't pin the VM page, and the
+			 * page has become all visible.
+			 */
+			if ((vmbuffer == InvalidBuffer && PageIsAllVisible(page)) ||
+				xmax_infomask_changed(tp.t_data->t_infomask, infomask) ||
+				!TransactionIdEquals(HeapTupleHeaderGetRawXmax(tp.t_data),
+									 xwait))
+				goto l1;
+
+			/* Otherwise check if it committed or aborted */
+			UpdateXmaxHintBits(tp.t_data, buffer, xwait);
+		}
+
+		/*
+		 * We may overwrite if previous xmax aborted, or if it committed but
+		 * only locked the tuple without updating it.
+ */ + if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) || + HEAP_XMAX_IS_LOCKED_ONLY(tp.t_data->t_infomask) || + HeapTupleHeaderIsOnlyLocked(tp.t_data)) + result = TM_Ok; + else if (!ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid)) + result = TM_Updated; + else + result = TM_Deleted; + } + + /* sanity check the result HeapTupleSatisfiesUpdate() and the logic above */ + if (result != TM_Ok) + { + Assert(result == TM_SelfModified || + result == TM_Updated || + result == TM_Deleted || + result == TM_BeingModified); + Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID)); + Assert(result != TM_Updated || + !ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid)); + } + + if (crosscheck != InvalidSnapshot && result == TM_Ok) + { + /* Perform additional check for transaction-snapshot mode RI updates */ + if (!HeapTupleSatisfiesVisibility(&tp, crosscheck, buffer)) + result = TM_Updated; + } + + if (result != TM_Ok) + { + tmfd->ctid = tp.t_data->t_ctid; + tmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data); + if (result == TM_SelfModified) + tmfd->cmax = HeapTupleHeaderGetCmax(tp.t_data); + else + tmfd->cmax = InvalidCommandId; + UnlockReleaseBuffer(buffer); + if (have_tuple_lock) + UnlockTupleTuplock(relation, &(tp.t_self), LockTupleExclusive); + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + return result; + } + + /* + * We're about to do the actual delete -- check for conflict first, to + * avoid possibly having to roll back work we've just done. + * + * This is safe without a recheck as long as there is no possibility of + * another process scanning the page between this check and the delete + * being visible to the scan (i.e., an exclusive buffer content lock is + * continuously held from this point until the tuple delete is visible). + */ + CheckForSerializableConflictIn(relation, tid, BufferGetBlockNumber(buffer)); + + /* replace cid with a combo CID if necessary */ + HeapTupleHeaderAdjustCmax(tp.t_data, &cid, &iscombo); + + /* + * Compute replica identity tuple before entering the critical section so + * we don't PANIC upon a memory allocation failure. + * + * ExtractReplicaIdentity has to get a decrypted tuple, otherwise it + * won't be able to extract varlen attributes. + */ + decrypted_tuple = tdeheap_copytuple(&tp); + PG_TDE_DECRYPT_TUPLE(&tp, decrypted_tuple, GetHeapBaiscRelationKey(relation->rd_locator)); + + old_key_tuple = ExtractReplicaIdentity(relation, decrypted_tuple, true, &old_key_copied); + + /* + * If this is the first possibly-multixact-able operation in the current + * transaction, set my per-backend OldestMemberMXactId setting. We can be + * certain that the transaction will never become a member of any older + * MultiXactIds than that. (We have to do this even if we end up just + * using our own TransactionId below, since some other backend could + * incorporate our XID into a MultiXact immediately afterwards.) + */ + MultiXactIdSetOldestMember(); + + compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(tp.t_data), + tp.t_data->t_infomask, tp.t_data->t_infomask2, + xid, LockTupleExclusive, true, + &new_xmax, &new_infomask, &new_infomask2); + + START_CRIT_SECTION(); + + /* + * If this transaction commits, the tuple will become DEAD sooner or + * later. Set flag that this page is a candidate for pruning once our xid + * falls below the OldestXmin horizon. If the transaction finally aborts, + * the subsequent page pruning will be a no-op and the hint will be + * cleared. 
+ */ + PageSetPrunable(page, xid); + + if (PageIsAllVisible(page)) + { + all_visible_cleared = true; + PageClearAllVisible(page); + tdeheap_visibilitymap_clear(relation, BufferGetBlockNumber(buffer), + vmbuffer, VISIBILITYMAP_VALID_BITS); + } + + /* store transaction information of xact deleting the tuple */ + tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + tp.t_data->t_infomask |= new_infomask; + tp.t_data->t_infomask2 |= new_infomask2; + HeapTupleHeaderClearHotUpdated(tp.t_data); + HeapTupleHeaderSetXmax(tp.t_data, new_xmax); + HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo); + /* Make sure there is no forward chain link in t_ctid */ + tp.t_data->t_ctid = tp.t_self; + + /* Signal that this is actually a move into another partition */ + if (changingPart) + HeapTupleHeaderSetMovedPartitions(tp.t_data); + + MarkBufferDirty(buffer); + + /* + * XLOG stuff + * + * NB: tdeheap_abort_speculative() uses the same xlog record and replay + * routines. + */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_delete xlrec; + xl_tdeheap_header xlhdr; + XLogRecPtr recptr; + + /* + * For logical decode we need combo CIDs to properly decode the + * catalog + */ + if (RelationIsAccessibleInLogicalDecoding(relation)) + log_tdeheap_new_cid(relation, &tp); + + xlrec.flags = 0; + if (all_visible_cleared) + xlrec.flags |= XLH_DELETE_ALL_VISIBLE_CLEARED; + if (changingPart) + xlrec.flags |= XLH_DELETE_IS_PARTITION_MOVE; + xlrec.infobits_set = compute_infobits(tp.t_data->t_infomask, + tp.t_data->t_infomask2); + xlrec.offnum = ItemPointerGetOffsetNumber(&tp.t_self); + xlrec.xmax = new_xmax; + + if (old_key_tuple != NULL) + { + if (relation->rd_rel->relreplident == REPLICA_IDENTITY_FULL) + xlrec.flags |= XLH_DELETE_CONTAINS_OLD_TUPLE; + else + xlrec.flags |= XLH_DELETE_CONTAINS_OLD_KEY; + } + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapDelete); + + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + + /* + * Log replica identity of the deleted tuple if there is one + */ + if (old_key_tuple != NULL) + { + xlhdr.t_infomask2 = old_key_tuple->t_data->t_infomask2; + xlhdr.t_infomask = old_key_tuple->t_data->t_infomask; + xlhdr.t_hoff = old_key_tuple->t_data->t_hoff; + + XLogRegisterData((char *) &xlhdr, SizeOfHeapHeader); + XLogRegisterData((char *) old_key_tuple->t_data + + SizeofHeapTupleHeader, + old_key_tuple->t_len + - SizeofHeapTupleHeader); + } + + /* filtering by origin on a row level is much more efficient */ + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_DELETE); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + + /* + * If the tuple has toasted out-of-line attributes, we need to delete + * those items too. We have to do this before releasing the buffer + * because we need to look at the contents of the tuple, but it's OK to + * release the content lock on the buffer first. 
+ */
+ if (relation->rd_rel->relkind != RELKIND_RELATION &&
+ relation->rd_rel->relkind != RELKIND_MATVIEW)
+ {
+ /* toast table entries should never be recursively toasted */
+ Assert(!HeapTupleHasExternal(&tp));
+ }
+ else if (HeapTupleHasExternal(&tp))
+ {
+ /*
+ * tdeheap_toast_delete needs a decrypted tuple to extract external
+ * attributes
+ */
+ tdeheap_toast_delete(relation, decrypted_tuple, false);
+ }
+
+ tdeheap_freetuple(decrypted_tuple);
+
+ /*
+ * Mark tuple for invalidation from system caches at next command
+ * boundary. We have to do this before releasing the buffer because we
+ * need to look at the contents of the tuple.
+ */
+ CacheInvalidateHeapTuple(relation, &tp, NULL);
+
+ /* Now we can release the buffer */
+ ReleaseBuffer(buffer);
+
+ /*
+ * Release the lmgr tuple lock, if we had it.
+ */
+ if (have_tuple_lock)
+ UnlockTupleTuplock(relation, &(tp.t_self), LockTupleExclusive);
+
+ pgstat_count_tdeheap_delete(relation);
+
+ if (old_key_tuple != NULL && old_key_copied)
+ tdeheap_freetuple(old_key_tuple);
+
+ return TM_Ok;
+}
+
+/*
+ * simple_tdeheap_delete - delete a tuple
+ *
+ * This routine may be used to delete a tuple when concurrent updates of
+ * the target tuple are not expected (for example, because we have a lock
+ * on the relation associated with the tuple). Any failure is reported
+ * via ereport().
+ */
+void
+simple_tdeheap_delete(Relation relation, ItemPointer tid)
+{
+ TM_Result result;
+ TM_FailureData tmfd;
+
+ result = tdeheap_delete(relation, tid,
+ GetCurrentCommandId(true), InvalidSnapshot,
+ true /* wait for commit */ ,
+ &tmfd, false /* changingPart */ );
+ switch (result)
+ {
+ case TM_SelfModified:
+ /* Tuple was already updated in current command? */
+ elog(ERROR, "tuple already updated by self");
+ break;
+
+ case TM_Ok:
+ /* done successfully */
+ break;
+
+ case TM_Updated:
+ elog(ERROR, "tuple concurrently updated");
+ break;
+
+ case TM_Deleted:
+ elog(ERROR, "tuple concurrently deleted");
+ break;
+
+ default:
+ elog(ERROR, "unrecognized tdeheap_delete status: %u", result);
+ break;
+ }
+}
+
+/*
+ * tdeheap_update - replace a tuple
+ *
+ * See table_tuple_update() for an explanation of the parameters, except that
+ * this routine directly takes a tuple rather than a slot.
+ *
+ * In the failure cases, the routine fills *tmfd with the tuple's t_ctid,
+ * t_xmax (resolving a possible MultiXact, if necessary), and t_cmax (the last
+ * only for TM_SelfModified, since we cannot obtain cmax from a combo CID
+ * generated by another transaction).
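+ *
+ * [Editor's sketch, not part of the patch] A concurrency-aware caller
+ * (unlike simple_tdeheap_update below) would react roughly like this,
+ * with rel/tid/newtup standing in for its own state:
+ *
+ *     res = tdeheap_update(rel, &tid, newtup, GetCurrentCommandId(true),
+ *                          InvalidSnapshot, true, &tmfd, &lockmode,
+ *                          &update_indexes);
+ *     if (res == TM_Updated)
+ *         ... follow tmfd.ctid to the successor version (EvalPlanQual) ...
+ *     else if (res == TM_Deleted)
+ *         ... row is gone; tmfd.xmax identifies the deleting transaction ...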
+ */ +TM_Result +tdeheap_update(Relation relation, ItemPointer otid, HeapTuple newtup, + CommandId cid, Snapshot crosscheck, bool wait, + TM_FailureData *tmfd, LockTupleMode *lockmode, + TU_UpdateIndexes *update_indexes) +{ + TM_Result result; + TransactionId xid = GetCurrentTransactionId(); + Bitmapset *hot_attrs; + Bitmapset *sum_attrs; + Bitmapset *key_attrs; + Bitmapset *id_attrs; + Bitmapset *interesting_attrs; + Bitmapset *modified_attrs; + ItemId lp; + HeapTupleData oldtup; + HeapTupleData oldtup_decrypted; + void* oldtup_data; + HeapTuple heaptup; + HeapTuple old_key_tuple = NULL; + bool old_key_copied = false; + Page page; + BlockNumber block; + MultiXactStatus mxact_status; + Buffer buffer, + newbuf, + vmbuffer = InvalidBuffer, + vmbuffer_new = InvalidBuffer; + bool need_toast; + Size newtupsize, + pagefree; + bool have_tuple_lock = false; + bool iscombo; + bool use_hot_update = false; + bool summarized_update = false; + bool key_intact; + bool all_visible_cleared = false; + bool all_visible_cleared_new = false; + bool checked_lockers; + bool locker_remains; + bool id_has_external = false; + TransactionId xmax_new_tuple, + xmax_old_tuple; + uint16 infomask_old_tuple, + infomask2_old_tuple, + infomask_new_tuple, + infomask2_new_tuple; + + Assert(ItemPointerIsValid(otid)); + + /* Cheap, simplistic check that the tuple matches the rel's rowtype. */ + Assert(HeapTupleHeaderGetNatts(newtup->t_data) <= + RelationGetNumberOfAttributes(relation)); + + /* + * Forbid this during a parallel operation, lest it allocate a combo CID. + * Other workers might need that combo CID for visibility checks, and we + * have no provision for broadcasting it to them. + */ + if (IsInParallelMode()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_STATE), + errmsg("cannot update tuples during a parallel operation"))); + + /* + * Fetch the list of attributes to be checked for various operations. + * + * For HOT considerations, this is wasted effort if we fail to update or + * have to put the new tuple on a different page. But we must compute the + * list before obtaining buffer lock --- in the worst case, if we are + * doing an update on one of the relevant system catalogs, we could + * deadlock if we try to fetch the list later. In any case, the relcache + * caches the data so this is usually pretty cheap. + * + * We also need columns used by the replica identity and columns that are + * considered the "key" of rows in the table. + * + * Note that we get copies of each bitmap, so we need not worry about + * relcache flush happening midway through. + */ + hot_attrs = RelationGetIndexAttrBitmap(relation, + INDEX_ATTR_BITMAP_HOT_BLOCKING); + sum_attrs = RelationGetIndexAttrBitmap(relation, + INDEX_ATTR_BITMAP_SUMMARIZED); + key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY); + id_attrs = RelationGetIndexAttrBitmap(relation, + INDEX_ATTR_BITMAP_IDENTITY_KEY); + interesting_attrs = NULL; + interesting_attrs = bms_add_members(interesting_attrs, hot_attrs); + interesting_attrs = bms_add_members(interesting_attrs, sum_attrs); + interesting_attrs = bms_add_members(interesting_attrs, key_attrs); + interesting_attrs = bms_add_members(interesting_attrs, id_attrs); + + block = ItemPointerGetBlockNumber(otid); + buffer = ReadBuffer(relation, block); + page = BufferGetPage(buffer); + + /* + * Before locking the buffer, pin the visibility map page if it appears to + * be necessary. 
Since we haven't got the lock yet, someone else might be
+ * in the middle of changing this, so we'll need to recheck after we have
+ * the lock.
+ */
+ if (PageIsAllVisible(page))
+ tdeheap_visibilitymap_pin(relation, block, &vmbuffer);
+
+ LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+ lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
+ Assert(ItemIdIsNormal(lp));
+
+ /*
+ * Fill in enough data in oldtup for HeapDetermineColumnsInfo to work
+ * properly.
+ */
+ oldtup.t_tableOid = RelationGetRelid(relation);
+ oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+ oldtup_data = oldtup.t_data;
+ oldtup.t_len = ItemIdGetLength(lp);
+ oldtup.t_self = *otid;
+ /* decrypt the old tuple */
+ {
+ char *new_ptr = NULL;
+ new_ptr = MemoryContextAlloc(CurTransactionContext, oldtup.t_len);
+ memcpy(new_ptr, oldtup.t_data, oldtup.t_data->t_hoff);
+ /* set only the field that PG_TDE_DECRYPT_TUPLE actually needs */
+ oldtup_decrypted.t_data = (HeapTupleHeader) new_ptr;
+ }
+ PG_TDE_DECRYPT_TUPLE(&oldtup, &oldtup_decrypted,
+ GetHeapBaiscRelationKey(relation->rd_locator));
+
+ /*
+ * Change the t_data field in oldtup only now: PG_TDE_DECRYPT_TUPLE uses
+ * the t_data address in its calculations, so it must not be modified any
+ * earlier.
+ */
+ oldtup.t_data = oldtup_decrypted.t_data;
+
+ /* the new tuple is ready, except for this: */
+ newtup->t_tableOid = RelationGetRelid(relation);
+
+ /*
+ * Determine columns modified by the update. Additionally, identify
+ * whether any of the unmodified replica identity key attributes in the
+ * old tuple is externally stored or not. This is required because for
+ * such attributes the flattened value won't be WAL logged as part of the
+ * new tuple so we must include it as part of the old_key_tuple. See
+ * ExtractReplicaIdentity.
+ */
+ modified_attrs = HeapDetermineColumnsInfo(relation, interesting_attrs,
+ id_attrs, &oldtup,
+ newtup, &id_has_external);
+
+ /*
+ * If we're not updating any "key" column, we can grab a weaker lock type.
+ * This allows for more concurrency when we are running simultaneously
+ * with foreign key checks.
+ *
+ * Note that if a column gets detoasted while executing the update, but
+ * the value ends up being the same, this test will fail and we will use
+ * the stronger lock. This is acceptable; the important case to optimize
+ * is updates that don't manipulate key columns, not those that
+ * serendipitously arrive at the same key values.
+ */
+ if (!bms_overlap(modified_attrs, key_attrs))
+ {
+ *lockmode = LockTupleNoKeyExclusive;
+ mxact_status = MultiXactStatusNoKeyUpdate;
+ key_intact = true;
+
+ /*
+ * If this is the first possibly-multixact-able operation in the
+ * current transaction, set my per-backend OldestMemberMXactId
+ * setting. We can be certain that the transaction will never become a
+ * member of any older MultiXactIds than that. (We have to do this
+ * even if we end up just using our own TransactionId below, since
+ * some other backend could incorporate our XID into a MultiXact
+ * immediately afterwards.)
+ */
+ MultiXactIdSetOldestMember();
+ }
+ else
+ {
+ *lockmode = LockTupleExclusive;
+ mxact_status = MultiXactStatusUpdate;
+ key_intact = false;
+ }
+
+ /*
+ * Note: beyond this point, use oldtup not otid to refer to old tuple.
+ * otid may very well point at newtup->t_self, which we will overwrite
+ * with the new tuple's location, so there's great risk of confusion if we
+ * use otid anymore.
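+ *
+ * [Editor's note, illustrative, not part of the patch] The hazard is
+ * aliasing: since otid may be &newtup->t_self, it changes meaning once
+ * the new tuple's location is assigned. A stable copy is the safe form:
+ *
+ *     ItemPointerData old_tid = oldtup.t_self;    /* survives the update */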
+ */
+
+ oldtup.t_data = oldtup_data;
+
+l2:
+ checked_lockers = false;
+ locker_remains = false;
+ result = HeapTupleSatisfiesUpdate(&oldtup, cid, buffer);
+
+ /* see below about the "no wait" case */
+ Assert(result != TM_BeingModified || wait);
+
+ if (result == TM_Invisible)
+ {
+ UnlockReleaseBuffer(buffer);
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("attempted to update invisible tuple")));
+ }
+ else if (result == TM_BeingModified && wait)
+ {
+ TransactionId xwait;
+ uint16 infomask;
+ bool can_continue = false;
+
+ /*
+ * XXX note that we don't consider the "no wait" case here. This
+ * isn't a problem currently because no caller uses that case, but it
+ * should be fixed if such a caller is introduced. It wasn't a
+ * problem previously because this code would always wait, but now
+ * that some tuple locks do not conflict with one of the lock modes we
+ * use, it is possible that this case is interesting to handle
+ * specially.
+ *
+ * This may cause failures with third-party code that calls
+ * tdeheap_update directly.
+ */
+
+ /* must copy state data before unlocking buffer */
+ xwait = HeapTupleHeaderGetRawXmax(oldtup.t_data);
+ infomask = oldtup.t_data->t_infomask;
+
+ /*
+ * Now we have to do something about the existing locker. If it's a
+ * multi, sleep on it; we might be awakened before it is completely
+ * gone (or even not sleep at all in some cases); we need to preserve
+ * it as locker, unless it is gone completely.
+ *
+ * If it's not a multi, we need to check for sleeping conditions
+ * before actually going to sleep. If the update doesn't conflict
+ * with the locks, we just continue without sleeping (but making sure
+ * it is preserved).
+ *
+ * Before sleeping, we need to acquire tuple lock to establish our
+ * priority for the tuple (see tdeheap_lock_tuple). LockTuple will
+ * release us when we are next-in-line for the tuple. Note we must
+ * not acquire the tuple lock until we're sure we're going to sleep;
+ * otherwise we're open for race conditions with other transactions
+ * holding the tuple lock which sleep on us.
+ *
+ * If we are forced to "start over" below, we keep the tuple lock;
+ * this arranges that we stay at the head of the line while rechecking
+ * tuple state.
+ */
+ if (infomask & HEAP_XMAX_IS_MULTI)
+ {
+ TransactionId update_xact;
+ int remain;
+ bool current_is_member = false;
+
+ if (DoesMultiXactIdConflict((MultiXactId) xwait, infomask,
+ *lockmode, &current_is_member))
+ {
+ LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+
+ /*
+ * Acquire the lock, if necessary (but skip it when we're
+ * requesting a lock and already have one; avoids deadlock).
+ */
+ if (!current_is_member)
+ tdeheap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
+ LockWaitBlock, &have_tuple_lock);
+
+ /* wait for multixact */
+ MultiXactIdWait((MultiXactId) xwait, mxact_status, infomask,
+ relation, &oldtup.t_self, XLTW_Update,
+ &remain);
+ checked_lockers = true;
+ locker_remains = remain != 0;
+ LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+ /*
+ * If xwait had just locked the tuple then some other xact
+ * could update this tuple before we get to this point. Check
+ * for xmax change, and start over if so.
+ */
+ if (xmax_infomask_changed(oldtup.t_data->t_infomask,
+ infomask) ||
+ !TransactionIdEquals(HeapTupleHeaderGetRawXmax(oldtup.t_data),
+ xwait))
+ goto l2;
+ }
+
+ /*
+ * Note that the multixact may not be done by now.
It could have + * surviving members; our own xact or other subxacts of this + * backend, and also any other concurrent transaction that locked + * the tuple with LockTupleKeyShare if we only got + * LockTupleNoKeyExclusive. If this is the case, we have to be + * careful to mark the updated tuple with the surviving members in + * Xmax. + * + * Note that there could have been another update in the + * MultiXact. In that case, we need to check whether it committed + * or aborted. If it aborted we are safe to update it again; + * otherwise there is an update conflict, and we have to return + * TableTuple{Deleted, Updated} below. + * + * In the LockTupleExclusive case, we still need to preserve the + * surviving members: those would include the tuple locks we had + * before this one, which are important to keep in case this + * subxact aborts. + */ + if (!HEAP_XMAX_IS_LOCKED_ONLY(oldtup.t_data->t_infomask)) + update_xact = HeapTupleGetUpdateXid(oldtup.t_data); + else + update_xact = InvalidTransactionId; + + /* + * There was no UPDATE in the MultiXact; or it aborted. No + * TransactionIdIsInProgress() call needed here, since we called + * MultiXactIdWait() above. + */ + if (!TransactionIdIsValid(update_xact) || + TransactionIdDidAbort(update_xact)) + can_continue = true; + } + else if (TransactionIdIsCurrentTransactionId(xwait)) + { + /* + * The only locker is ourselves; we can avoid grabbing the tuple + * lock here, but must preserve our locking information. + */ + checked_lockers = true; + locker_remains = true; + can_continue = true; + } + else if (HEAP_XMAX_IS_KEYSHR_LOCKED(infomask) && key_intact) + { + /* + * If it's just a key-share locker, and we're not changing the key + * columns, we don't need to wait for it to end; but we need to + * preserve it as locker. + */ + checked_lockers = true; + locker_remains = true; + can_continue = true; + } + else + { + /* + * Wait for regular transaction to end; but first, acquire tuple + * lock. + */ + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + tdeheap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode, + LockWaitBlock, &have_tuple_lock); + XactLockTableWait(xwait, relation, &oldtup.t_self, + XLTW_Update); + checked_lockers = true; + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * xwait is done, but if xwait had just locked the tuple then some + * other xact could update this tuple before we get to this point. + * Check for xmax change, and start over if so. 
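+ *
+ * [Editor's note, not part of the patch] This is the wait-and-recheck
+ * pattern used at every sleep site in this function: snapshot xmax and
+ * infomask, unlock, sleep, relock, then
+ *
+ *     if (xmax_infomask_changed(oldtup.t_data->t_infomask, infomask) ||
+ *         !TransactionIdEquals(HeapTupleHeaderGetRawXmax(oldtup.t_data),
+ *                              xwait))
+ *         goto l2;    /* the tuple moved under us: restart from scratch */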
+ */ + if (xmax_infomask_changed(oldtup.t_data->t_infomask, infomask) || + !TransactionIdEquals(xwait, + HeapTupleHeaderGetRawXmax(oldtup.t_data))) + goto l2; + + /* Otherwise check if it committed or aborted */ + UpdateXmaxHintBits(oldtup.t_data, buffer, xwait); + if (oldtup.t_data->t_infomask & HEAP_XMAX_INVALID) + can_continue = true; + } + + if (can_continue) + result = TM_Ok; + else if (!ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid)) + result = TM_Updated; + else + result = TM_Deleted; + } + + /* Sanity check the result HeapTupleSatisfiesUpdate() and the logic above */ + if (result != TM_Ok) + { + Assert(result == TM_SelfModified || + result == TM_Updated || + result == TM_Deleted || + result == TM_BeingModified); + Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID)); + Assert(result != TM_Updated || + !ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid)); + } + + if (crosscheck != InvalidSnapshot && result == TM_Ok) + { + /* Perform additional check for transaction-snapshot mode RI updates */ + if (!HeapTupleSatisfiesVisibility(&oldtup, crosscheck, buffer)) + result = TM_Updated; + } + + if (result != TM_Ok) + { + tmfd->ctid = oldtup.t_data->t_ctid; + tmfd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data); + if (result == TM_SelfModified) + tmfd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data); + else + tmfd->cmax = InvalidCommandId; + UnlockReleaseBuffer(buffer); + if (have_tuple_lock) + UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode); + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + *update_indexes = TU_None; + + bms_free(hot_attrs); + bms_free(sum_attrs); + bms_free(key_attrs); + bms_free(id_attrs); + bms_free(modified_attrs); + bms_free(interesting_attrs); + return result; + } + + /* + * If we didn't pin the visibility map page and the page has become all + * visible while we were busy locking the buffer, or during some + * subsequent window during which we had it unlocked, we'll have to unlock + * and re-lock, to avoid holding the buffer lock across an I/O. That's a + * bit unfortunate, especially since we'll now have to recheck whether the + * tuple has been locked or updated under us, but hopefully it won't + * happen very often. + */ + if (vmbuffer == InvalidBuffer && PageIsAllVisible(page)) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + goto l2; + } + + /* Fill in transaction status data */ + + /* + * If the tuple we're updating is locked, we need to preserve the locking + * info in the old tuple's Xmax. Prepare a new Xmax value for this. + */ + compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data), + oldtup.t_data->t_infomask, + oldtup.t_data->t_infomask2, + xid, *lockmode, true, + &xmax_old_tuple, &infomask_old_tuple, + &infomask2_old_tuple); + + /* + * And also prepare an Xmax value for the new copy of the tuple. If there + * was no xmax previously, or there was one but all lockers are now gone, + * then use InvalidTransactionId; otherwise, get the xmax from the old + * tuple. (In rare cases that might also be InvalidTransactionId and yet + * not have the HEAP_XMAX_INVALID bit set; that's fine.) 
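+ *
+ * [Editor's illustration, not part of the patch] Example: session A holds
+ * SELECT ... FOR KEY SHARE on the row while session B updates a non-key
+ * column. A's lock must be carried onto the new tuple version, which the
+ * code below arranges essentially as:
+ *
+ *     xmax_new_tuple = HeapTupleHeaderGetRawXmax(oldtup.t_data); /* A */
+ *     infomask_new_tuple = HEAP_XMAX_KEYSHR_LOCK | HEAP_XMAX_LOCK_ONLY;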
+ */ + if ((oldtup.t_data->t_infomask & HEAP_XMAX_INVALID) || + HEAP_LOCKED_UPGRADED(oldtup.t_data->t_infomask) || + (checked_lockers && !locker_remains)) + xmax_new_tuple = InvalidTransactionId; + else + xmax_new_tuple = HeapTupleHeaderGetRawXmax(oldtup.t_data); + + if (!TransactionIdIsValid(xmax_new_tuple)) + { + infomask_new_tuple = HEAP_XMAX_INVALID; + infomask2_new_tuple = 0; + } + else + { + /* + * If we found a valid Xmax for the new tuple, then the infomask bits + * to use on the new tuple depend on what was there on the old one. + * Note that since we're doing an update, the only possibility is that + * the lockers had FOR KEY SHARE lock. + */ + if (oldtup.t_data->t_infomask & HEAP_XMAX_IS_MULTI) + { + GetMultiXactIdHintBits(xmax_new_tuple, &infomask_new_tuple, + &infomask2_new_tuple); + } + else + { + infomask_new_tuple = HEAP_XMAX_KEYSHR_LOCK | HEAP_XMAX_LOCK_ONLY; + infomask2_new_tuple = 0; + } + } + + /* + * Prepare the new tuple with the appropriate initial values of Xmin and + * Xmax, as well as initial infomask bits as computed above. + */ + newtup->t_data->t_infomask &= ~(HEAP_XACT_MASK); + newtup->t_data->t_infomask2 &= ~(HEAP2_XACT_MASK); + HeapTupleHeaderSetXmin(newtup->t_data, xid); + HeapTupleHeaderSetCmin(newtup->t_data, cid); + newtup->t_data->t_infomask |= HEAP_UPDATED | infomask_new_tuple; + newtup->t_data->t_infomask2 |= infomask2_new_tuple; + HeapTupleHeaderSetXmax(newtup->t_data, xmax_new_tuple); + + /* + * Replace cid with a combo CID if necessary. Note that we already put + * the plain cid into the new tuple. + */ + HeapTupleHeaderAdjustCmax(oldtup.t_data, &cid, &iscombo); + + /* + * If the toaster needs to be activated, OR if the new tuple will not fit + * on the same page as the old, then we need to release the content lock + * (but not the pin!) on the old tuple's buffer while we are off doing + * TOAST and/or table-file-extension work. We must mark the old tuple to + * show that it's locked, else other processes may try to update it + * themselves. + * + * We need to invoke the toaster if there are already any out-of-line + * toasted values present, or if the new tuple is over-threshold. + */ + if (relation->rd_rel->relkind != RELKIND_RELATION && + relation->rd_rel->relkind != RELKIND_MATVIEW) + { + /* toast table entries should never be recursively toasted */ + Assert(!HeapTupleHasExternal(&oldtup)); + Assert(!HeapTupleHasExternal(newtup)); + need_toast = false; + } + else + need_toast = (HeapTupleHasExternal(&oldtup) || + HeapTupleHasExternal(newtup) || + newtup->t_len > TOAST_TUPLE_THRESHOLD); + + pagefree = PageGetHeapFreeSpace(page); + + newtupsize = MAXALIGN(newtup->t_len); + + if (need_toast || newtupsize > pagefree) + { + TransactionId xmax_lock_old_tuple; + uint16 infomask_lock_old_tuple, + infomask2_lock_old_tuple; + bool cleared_all_frozen = false; + + /* + * To prevent concurrent sessions from updating the tuple, we have to + * temporarily mark it locked, while we release the page-level lock. + * + * To satisfy the rule that any xid potentially appearing in a buffer + * written out to disk, we unfortunately have to WAL log this + * temporary modification. We can reuse xl_tdeheap_lock for this + * purpose. If we crash/error before following through with the + * actual update, xmax will be of an aborted transaction, allowing + * other sessions to proceed. + */ + + /* + * Compute xmax / infomask appropriate for locking the tuple. 
This has + * to be done separately from the combo that's going to be used for + * updating, because the potentially created multixact would otherwise + * be wrong. + */ + compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data), + oldtup.t_data->t_infomask, + oldtup.t_data->t_infomask2, + xid, *lockmode, false, + &xmax_lock_old_tuple, &infomask_lock_old_tuple, + &infomask2_lock_old_tuple); + + Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple)); + + START_CRIT_SECTION(); + + /* Clear obsolete visibility flags ... */ + oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + HeapTupleClearHotUpdated(&oldtup); + /* ... and store info about transaction updating this tuple */ + Assert(TransactionIdIsValid(xmax_lock_old_tuple)); + HeapTupleHeaderSetXmax(oldtup.t_data, xmax_lock_old_tuple); + oldtup.t_data->t_infomask |= infomask_lock_old_tuple; + oldtup.t_data->t_infomask2 |= infomask2_lock_old_tuple; + HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo); + + /* temporarily make it look not-updated, but locked */ + oldtup.t_data->t_ctid = oldtup.t_self; + + /* + * Clear all-frozen bit on visibility map if needed. We could + * immediately reset ALL_VISIBLE, but given that the WAL logging + * overhead would be unchanged, that doesn't seem necessarily + * worthwhile. + */ + if (PageIsAllVisible(page) && + tdeheap_visibilitymap_clear(relation, block, vmbuffer, + VISIBILITYMAP_ALL_FROZEN)) + cleared_all_frozen = true; + + MarkBufferDirty(buffer); + + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_lock xlrec; + XLogRecPtr recptr; + + XLogBeginInsert(); + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + + xlrec.offnum = ItemPointerGetOffsetNumber(&oldtup.t_self); + xlrec.xmax = xmax_lock_old_tuple; + xlrec.infobits_set = compute_infobits(oldtup.t_data->t_infomask, + oldtup.t_data->t_infomask2); + xlrec.flags = + cleared_all_frozen ? XLH_LOCK_ALL_FROZEN_CLEARED : 0; + XLogRegisterData((char *) &xlrec, SizeOfHeapLock); + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_LOCK); + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + /* + * Let the toaster do its thing, if needed. + * + * Note: below this point, heaptup is the data we actually intend to + * store into the relation; newtup is the caller's original untoasted + * data. + */ + if (need_toast) + { + /* Note we always use WAL and FSM during updates */ + heaptup = tdeheap_toast_insert_or_update(relation, newtup, &oldtup_decrypted, 0); + newtupsize = MAXALIGN(heaptup->t_len); + } + else + heaptup = newtup; + + /* + * Now, do we need a new page for the tuple, or not? This is a bit + * tricky since someone else could have added tuples to the page while + * we weren't looking. We have to recheck the available space after + * reacquiring the buffer lock. But don't bother to do that if the + * former amount of free space is still not enough; it's unlikely + * there's more free now than before. + * + * What's more, if we need to get a new page, we will need to acquire + * buffer locks on both old and new pages. To avoid deadlock against + * some other backend trying to get the same two locks in the other + * order, we must be consistent about the order we get the locks in. + * We use the rule "lock the lower-numbered page of the relation + * first". To implement this, we must do tdeheap_RelationGetBufferForTuple + * while not holding the lock on the old page, and we must rely on it + * to get the locks on both pages in the correct order. 
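+ *
+ * [Editor's note, illustrative, not part of the patch; otherBuffer is a
+ * hypothetical name] The classic deadlock-free ordering discipline:
+ *
+ *     if (BufferGetBlockNumber(otherBuffer) < BufferGetBlockNumber(buffer))
+ *     {
+ *         LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE);
+ *         LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+ *     }
+ *
+ * so no two backends can ever hold the two locks in opposite orders.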
+ * + * Another consideration is that we need visibility map page pin(s) if + * we will have to clear the all-visible flag on either page. If we + * call tdeheap_RelationGetBufferForTuple, we rely on it to acquire any such + * pins; but if we don't, we have to handle that here. Hence we need + * a loop. + */ + for (;;) + { + if (newtupsize > pagefree) + { + /* It doesn't fit, must use tdeheap_RelationGetBufferForTuple. */ + newbuf = tdeheap_RelationGetBufferForTuple(relation, heaptup->t_len, + buffer, 0, NULL, + &vmbuffer_new, &vmbuffer, + 0); + /* We're all done. */ + break; + } + /* Acquire VM page pin if needed and we don't have it. */ + if (vmbuffer == InvalidBuffer && PageIsAllVisible(page)) + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + /* Re-acquire the lock on the old tuple's page. */ + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + /* Re-check using the up-to-date free space */ + pagefree = PageGetHeapFreeSpace(page); + if (newtupsize > pagefree || + (vmbuffer == InvalidBuffer && PageIsAllVisible(page))) + { + /* + * Rats, it doesn't fit anymore, or somebody just now set the + * all-visible flag. We must now unlock and loop to avoid + * deadlock. Fortunately, this path should seldom be taken. + */ + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + } + else + { + /* We're all done. */ + newbuf = buffer; + break; + } + } + } + else + { + /* No TOAST work needed, and it'll fit on same page */ + newbuf = buffer; + heaptup = newtup; + } + + /* + * We're about to do the actual update -- check for conflict first, to + * avoid possibly having to roll back work we've just done. + * + * This is safe without a recheck as long as there is no possibility of + * another process scanning the pages between this check and the update + * being visible to the scan (i.e., exclusive buffer content lock(s) are + * continuously held from this point until the tuple update is visible). + * + * For the new tuple the only check needed is at the relation level, but + * since both tuples are in the same relation and the check for oldtup + * will include checking the relation level, there is no benefit to a + * separate check for the new tuple. + */ + CheckForSerializableConflictIn(relation, &oldtup.t_self, + BufferGetBlockNumber(buffer)); + + /* + * At this point newbuf and buffer are both pinned and locked, and newbuf + * has enough space for the new tuple. If they are the same buffer, only + * one pin is held. + */ + + if (newbuf == buffer) + { + /* + * Since the new tuple is going into the same page, we might be able + * to do a HOT update. Check if any of the index columns have been + * changed. + */ + if (!bms_overlap(modified_attrs, hot_attrs)) + { + use_hot_update = true; + + /* + * If none of the columns that are used in hot-blocking indexes + * were updated, we can apply HOT, but we do still need to check + * if we need to update the summarizing indexes, and update those + * indexes if the columns were updated, or we may fail to detect + * e.g. value bound changes in BRIN minmax indexes. + */ + if (bms_overlap(modified_attrs, sum_attrs)) + summarized_update = true; + } + } + else + { + /* Set a hint that the old page could use prune/defrag */ + PageSetFull(page); + } + + /* + * Compute replica identity tuple before entering the critical section so + * we don't PANIC upon a memory allocation failure. + * ExtractReplicaIdentity() will return NULL if nothing needs to be + * logged. Pass old key required as true only if the replica identity key + * columns are modified or it has external data. 
+ */
+ old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+ bms_overlap(modified_attrs, id_attrs) ||
+ id_has_external,
+ &old_key_copied);
+
+ /*
+ * Make sure the relation keys are in the cache, to avoid pallocs in
+ * the critical section.
+ */
+ GetHeapBaiscRelationKey(relation->rd_locator);
+
+ /* NO EREPORT(ERROR) from here till changes are logged */
+ START_CRIT_SECTION();
+
+ /*
+ * If this transaction commits, the old tuple will become DEAD sooner or
+ * later. Set flag that this page is a candidate for pruning once our xid
+ * falls below the OldestXmin horizon. If the transaction finally aborts,
+ * the subsequent page pruning will be a no-op and the hint will be
+ * cleared.
+ *
+ * XXX Should we set hint on newbuf as well? If the transaction aborts,
+ * there would be a prunable tuple in the newbuf; but for now we choose
+ * not to optimize for aborts. Note that tdeheap_xlog_update must be kept in
+ * sync if this decision changes.
+ */
+ PageSetPrunable(page, xid);
+
+ if (use_hot_update)
+ {
+ /* Mark the old tuple as HOT-updated */
+ HeapTupleSetHotUpdated(&oldtup);
+ /* And mark the new tuple as heap-only */
+ HeapTupleSetHeapOnly(heaptup);
+ /* Mark the caller's copy too, in case different from heaptup */
+ HeapTupleSetHeapOnly(newtup);
+ }
+ else
+ {
+ /* Make sure tuples are correctly marked as not-HOT */
+ HeapTupleClearHotUpdated(&oldtup);
+ HeapTupleClearHeapOnly(heaptup);
+ HeapTupleClearHeapOnly(newtup);
+ }
+
+ tdeheap_RelationPutHeapTuple(relation, newbuf, heaptup, true, false); /* insert new tuple */
+
+ /* Clear obsolete visibility flags, possibly set by ourselves above... */
+ oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+ oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
+ /* ... and store info about transaction updating this tuple */
+ Assert(TransactionIdIsValid(xmax_old_tuple));
+ HeapTupleHeaderSetXmax(oldtup.t_data, xmax_old_tuple);
+ oldtup.t_data->t_infomask |= infomask_old_tuple;
+ oldtup.t_data->t_infomask2 |= infomask2_old_tuple;
+ HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
+
+ /* record address of new tuple in t_ctid of old one */
+ oldtup.t_data->t_ctid = heaptup->t_self;
+
+ /* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
+ if (PageIsAllVisible(BufferGetPage(buffer)))
+ {
+ all_visible_cleared = true;
+ PageClearAllVisible(BufferGetPage(buffer));
+ tdeheap_visibilitymap_clear(relation, BufferGetBlockNumber(buffer),
+ vmbuffer, VISIBILITYMAP_VALID_BITS);
+ }
+ if (newbuf != buffer && PageIsAllVisible(BufferGetPage(newbuf)))
+ {
+ all_visible_cleared_new = true;
+ PageClearAllVisible(BufferGetPage(newbuf));
+ tdeheap_visibilitymap_clear(relation, BufferGetBlockNumber(newbuf),
+ vmbuffer_new, VISIBILITYMAP_VALID_BITS);
+ }
+
+ if (newbuf != buffer)
+ MarkBufferDirty(newbuf);
+ MarkBufferDirty(buffer);
+
+ /* XLOG stuff */
+ if (RelationNeedsWAL(relation))
+ {
+ XLogRecPtr recptr;
+
+ /*
+ * For logical decoding we need combo CIDs to properly decode the
+ * catalog.
+ */ + if (RelationIsAccessibleInLogicalDecoding(relation)) + { + log_tdeheap_new_cid(relation, &oldtup); + log_tdeheap_new_cid(relation, heaptup); + } + + recptr = log_tdeheap_update(relation, buffer, + newbuf, &oldtup, heaptup, + old_key_tuple, + all_visible_cleared, + all_visible_cleared_new); + if (newbuf != buffer) + { + PageSetLSN(BufferGetPage(newbuf), recptr); + } + PageSetLSN(BufferGetPage(buffer), recptr); + } + + END_CRIT_SECTION(); + + if (newbuf != buffer) + LockBuffer(newbuf, BUFFER_LOCK_UNLOCK); + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + /* + * Mark old tuple for invalidation from system caches at next command + * boundary, and mark the new tuple for invalidation in case we abort. We + * have to do this before releasing the buffer because oldtup is in the + * buffer. (heaptup is all in local memory, but it's necessary to process + * both tuple versions in one call to inval.c so we can avoid redundant + * sinval messages.) + */ + CacheInvalidateHeapTuple(relation, &oldtup, heaptup); + + /* Now we can release the buffer(s) */ + if (newbuf != buffer) + ReleaseBuffer(newbuf); + ReleaseBuffer(buffer); + if (BufferIsValid(vmbuffer_new)) + ReleaseBuffer(vmbuffer_new); + if (BufferIsValid(vmbuffer)) + ReleaseBuffer(vmbuffer); + + /* + * Release the lmgr tuple lock, if we had it. + */ + if (have_tuple_lock) + UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode); + + pgstat_count_tdeheap_update(relation, use_hot_update, newbuf != buffer); + + /* + * If heaptup is a private copy, release it. Don't forget to copy t_self + * back to the caller's image, too. + */ + if (heaptup != newtup) + { + newtup->t_self = heaptup->t_self; + tdeheap_freetuple(heaptup); + } + + /* + * If it is a HOT update, the update may still need to update summarized + * indexes, lest we fail to update those summaries and get incorrect + * results (for example, minmax bounds of the block may change with this + * update). + */ + if (use_hot_update) + { + if (summarized_update) + *update_indexes = TU_Summarizing; + else + *update_indexes = TU_None; + } + else + *update_indexes = TU_All; + + if (old_key_tuple != NULL && old_key_copied) + tdeheap_freetuple(old_key_tuple); + + bms_free(hot_attrs); + bms_free(sum_attrs); + bms_free(key_attrs); + bms_free(id_attrs); + bms_free(modified_attrs); + bms_free(interesting_attrs); + + return TM_Ok; +} + +/* + * Check if the specified attribute's values are the same. Subroutine for + * HeapDetermineColumnsInfo. + */ +static bool +tdeheap_attr_equals(TupleDesc tupdesc, int attrnum, Datum value1, Datum value2, + bool isnull1, bool isnull2) +{ + Form_pg_attribute att; + + /* + * If one value is NULL and other is not, then they are certainly not + * equal + */ + if (isnull1 != isnull2) + return false; + + /* + * If both are NULL, they can be considered equal. + */ + if (isnull1) + return true; + + /* + * We do simple binary comparison of the two datums. This may be overly + * strict because there can be multiple binary representations for the + * same logical value. But we should be OK as long as there are no false + * positives. Using a type-specific equality operator is messy because + * there could be multiple notions of equality in different operator + * classes; furthermore, we cannot safely invoke user-defined functions + * while holding exclusive buffer lock. 
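+ *
+ * [Editor's illustration, not part of the patch] A concrete false
+ * negative: the same text value stored once compressed and once plain
+ * differs byte-wise, so
+ *
+ *     datumIsEqual(d_plain, d_compressed, false, -1)   /* returns false */
+ *
+ * and the column merely looks modified; only false positives would be a
+ * correctness problem.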
+ */ + if (attrnum <= 0) + { + /* The only allowed system columns are OIDs, so do this */ + return (DatumGetObjectId(value1) == DatumGetObjectId(value2)); + } + else + { + Assert(attrnum <= tupdesc->natts); + att = TupleDescAttr(tupdesc, attrnum - 1); + return datumIsEqual(value1, value2, att->attbyval, att->attlen); + } +} + +/* + * Check which columns are being updated. + * + * Given an updated tuple, determine (and return into the output bitmapset), + * from those listed as interesting, the set of columns that changed. + * + * has_external indicates if any of the unmodified attributes (from those + * listed as interesting) of the old tuple is a member of external_cols and is + * stored externally. + */ +static Bitmapset * +HeapDetermineColumnsInfo(Relation relation, + Bitmapset *interesting_cols, + Bitmapset *external_cols, + HeapTuple oldtup, HeapTuple newtup, + bool *has_external) +{ + int attidx; + Bitmapset *modified = NULL; + TupleDesc tupdesc = RelationGetDescr(relation); + + attidx = -1; + while ((attidx = bms_next_member(interesting_cols, attidx)) >= 0) + { + /* attidx is zero-based, attrnum is the normal attribute number */ + AttrNumber attrnum = attidx + FirstLowInvalidHeapAttributeNumber; + Datum value1, + value2; + bool isnull1, + isnull2; + + /* + * If it's a whole-tuple reference, say "not equal". It's not really + * worth supporting this case, since it could only succeed after a + * no-op update, which is hardly a case worth optimizing for. + */ + if (attrnum == 0) + { + modified = bms_add_member(modified, attidx); + continue; + } + + /* + * Likewise, automatically say "not equal" for any system attribute + * other than tableOID; we cannot expect these to be consistent in a + * HOT chain, or even to be set correctly yet in the new tuple. + */ + if (attrnum < 0) + { + if (attrnum != TableOidAttributeNumber) + { + modified = bms_add_member(modified, attidx); + continue; + } + } + + /* + * Extract the corresponding values. XXX this is pretty inefficient + * if there are many indexed columns. Should we do a single + * tdeheap_deform_tuple call on each tuple, instead? But that doesn't + * work for system columns ... + */ + value1 = tdeheap_getattr(oldtup, attrnum, tupdesc, &isnull1); + value2 = tdeheap_getattr(newtup, attrnum, tupdesc, &isnull2); + if (!tdeheap_attr_equals(tupdesc, attrnum, value1, + value2, isnull1, isnull2)) + { + modified = bms_add_member(modified, attidx); + continue; + } + + /* + * No need to check attributes that can't be stored externally. Note + * that system attributes can't be stored externally. + */ + if (attrnum < 0 || isnull1 || + TupleDescAttr(tupdesc, attrnum - 1)->attlen != -1) + continue; + + /* + * Check if the old tuple's attribute is stored externally and is a + * member of external_cols. + */ + if (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value1)) && + bms_is_member(attidx, external_cols)) + *has_external = true; + } + + return modified; +} + +/* + * simple_tdeheap_update - replace a tuple + * + * This routine may be used to update a tuple when concurrent updates of + * the target tuple are not expected (for example, because we have a lock + * on the relation associated with the tuple). Any failure is reported + * via ereport(). 
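+ *
+ * [Editor's sketch, not part of the patch] Typical call pattern, assuming
+ * the caller already holds an adequate lock on rel:
+ *
+ *     TU_UpdateIndexes update_indexes;
+ *
+ *     simple_tdeheap_update(rel, &tuple->t_self, newtup, &update_indexes);
+ *     if (update_indexes != TU_None)
+ *         ... insert new index entries (only summarizing ones when
+ *         update_indexes == TU_Summarizing) ...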
+ */ +void +simple_tdeheap_update(Relation relation, ItemPointer otid, HeapTuple tup, + TU_UpdateIndexes *update_indexes) +{ + TM_Result result; + TM_FailureData tmfd; + LockTupleMode lockmode; + + result = tdeheap_update(relation, otid, tup, + GetCurrentCommandId(true), InvalidSnapshot, + true /* wait for commit */ , + &tmfd, &lockmode, update_indexes); + switch (result) + { + case TM_SelfModified: + /* Tuple was already updated in current command? */ + elog(ERROR, "tuple already updated by self"); + break; + + case TM_Ok: + /* done successfully */ + break; + + case TM_Updated: + elog(ERROR, "tuple concurrently updated"); + break; + + case TM_Deleted: + elog(ERROR, "tuple concurrently deleted"); + break; + + default: + elog(ERROR, "unrecognized tdeheap_update status: %u", result); + break; + } +} + + +/* + * Return the MultiXactStatus corresponding to the given tuple lock mode. + */ +static MultiXactStatus +get_mxact_status_for_lock(LockTupleMode mode, bool is_update) +{ + int retval; + + if (is_update) + retval = tupleLockExtraInfo[mode].updstatus; + else + retval = tupleLockExtraInfo[mode].lockstatus; + + if (retval == -1) + elog(ERROR, "invalid lock tuple mode %d/%s", mode, + is_update ? "true" : "false"); + + return (MultiXactStatus) retval; +} + +/* + * tdeheap_lock_tuple - lock a tuple in shared or exclusive mode + * + * Note that this acquires a buffer pin, which the caller must release. + * + * Input parameters: + * relation: relation containing tuple (caller must hold suitable lock) + * tid: TID of tuple to lock + * cid: current command ID (used for visibility test, and stored into + * tuple's cmax if lock is successful) + * mode: indicates if shared or exclusive tuple lock is desired + * wait_policy: what to do if tuple lock is not available + * follow_updates: if true, follow the update chain to also lock descendant + * tuples. + * + * Output parameters: + * *tuple: all fields filled in + * *buffer: set to buffer holding tuple (pinned but not locked at exit) + * *tmfd: filled in failure cases (see below) + * + * Function results are the same as the ones for table_tuple_lock(). + * + * In the failure cases other than TM_Invisible, the routine fills + * *tmfd with the tuple's t_ctid, t_xmax (resolving a possible MultiXact, + * if necessary), and t_cmax (the last only for TM_SelfModified, + * since we cannot obtain cmax from a combo CID generated by another + * transaction). + * See comments for struct TM_FailureData for additional info. + * + * See README.tuplock for a thorough explanation of this mechanism. + */ +TM_Result +tdeheap_lock_tuple(Relation relation, HeapTuple tuple, + CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy, + bool follow_updates, + Buffer *buffer, TM_FailureData *tmfd) +{ + TM_Result result; + ItemPointer tid = &(tuple->t_self); + ItemId lp; + Page page; + Buffer vmbuffer = InvalidBuffer; + BlockNumber block; + TransactionId xid, + xmax; + uint16 old_infomask, + new_infomask, + new_infomask2; + bool first_time = true; + bool skip_tuple_lock = false; + bool have_tuple_lock = false; + bool cleared_all_frozen = false; + + *buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid)); + block = ItemPointerGetBlockNumber(tid); + + /* + * Before locking the buffer, pin the visibility map page if it appears to + * be necessary. Since we haven't got the lock yet, someone else might be + * in the middle of changing this, so we'll need to recheck after we have + * the lock. 
+ */ + if (PageIsAllVisible(BufferGetPage(*buffer))) + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + page = BufferGetPage(*buffer); + lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid)); + Assert(ItemIdIsNormal(lp)); + + tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp); + tuple->t_len = ItemIdGetLength(lp); + tuple->t_tableOid = RelationGetRelid(relation); + +l3: + result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer); + + if (result == TM_Invisible) + { + /* + * This is possible, but only when locking a tuple for ON CONFLICT + * UPDATE. We return this value here rather than throwing an error in + * order to give that case the opportunity to throw a more specific + * error. + */ + result = TM_Invisible; + goto out_locked; + } + else if (result == TM_BeingModified || + result == TM_Updated || + result == TM_Deleted) + { + TransactionId xwait; + uint16 infomask; + uint16 infomask2; + bool require_sleep; + ItemPointerData t_ctid; + + /* must copy state data before unlocking buffer */ + xwait = HeapTupleHeaderGetRawXmax(tuple->t_data); + infomask = tuple->t_data->t_infomask; + infomask2 = tuple->t_data->t_infomask2; + ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid); + + LockBuffer(*buffer, BUFFER_LOCK_UNLOCK); + + /* + * If any subtransaction of the current top transaction already holds + * a lock as strong as or stronger than what we're requesting, we + * effectively hold the desired lock already. We *must* succeed + * without trying to take the tuple lock, else we will deadlock + * against anyone wanting to acquire a stronger lock. + * + * Note we only do this the first time we loop on the HTSU result; + * there is no point in testing in subsequent passes, because + * evidently our own transaction cannot have acquired a new lock after + * the first time we checked. + */ + if (first_time) + { + first_time = false; + + if (infomask & HEAP_XMAX_IS_MULTI) + { + int i; + int nmembers; + MultiXactMember *members; + + /* + * We don't need to allow old multixacts here; if that had + * been the case, HeapTupleSatisfiesUpdate would have returned + * MayBeUpdated and we wouldn't be here. + */ + nmembers = + GetMultiXactIdMembers(xwait, &members, false, + HEAP_XMAX_IS_LOCKED_ONLY(infomask)); + + for (i = 0; i < nmembers; i++) + { + /* only consider members of our own transaction */ + if (!TransactionIdIsCurrentTransactionId(members[i].xid)) + continue; + + if (TUPLOCK_from_mxstatus(members[i].status) >= mode) + { + pfree(members); + result = TM_Ok; + goto out_unlocked; + } + else + { + /* + * Disable acquisition of the heavyweight tuple lock. + * Otherwise, when promoting a weaker lock, we might + * deadlock with another locker that has acquired the + * heavyweight tuple lock and is waiting for our + * transaction to finish. + * + * Note that in this case we still need to wait for + * the multixact if required, to avoid acquiring + * conflicting locks. 
+ */ + skip_tuple_lock = true; + } + } + + if (members) + pfree(members); + } + else if (TransactionIdIsCurrentTransactionId(xwait)) + { + switch (mode) + { + case LockTupleKeyShare: + Assert(HEAP_XMAX_IS_KEYSHR_LOCKED(infomask) || + HEAP_XMAX_IS_SHR_LOCKED(infomask) || + HEAP_XMAX_IS_EXCL_LOCKED(infomask)); + result = TM_Ok; + goto out_unlocked; + case LockTupleShare: + if (HEAP_XMAX_IS_SHR_LOCKED(infomask) || + HEAP_XMAX_IS_EXCL_LOCKED(infomask)) + { + result = TM_Ok; + goto out_unlocked; + } + break; + case LockTupleNoKeyExclusive: + if (HEAP_XMAX_IS_EXCL_LOCKED(infomask)) + { + result = TM_Ok; + goto out_unlocked; + } + break; + case LockTupleExclusive: + if (HEAP_XMAX_IS_EXCL_LOCKED(infomask) && + infomask2 & HEAP_KEYS_UPDATED) + { + result = TM_Ok; + goto out_unlocked; + } + break; + } + } + } + + /* + * Initially assume that we will have to wait for the locking + * transaction(s) to finish. We check various cases below in which + * this can be turned off. + */ + require_sleep = true; + if (mode == LockTupleKeyShare) + { + /* + * If we're requesting KeyShare, and there's no update present, we + * don't need to wait. Even if there is an update, we can still + * continue if the key hasn't been modified. + * + * However, if there are updates, we need to walk the update chain + * to mark future versions of the row as locked, too. That way, + * if somebody deletes that future version, we're protected + * against the key going away. This locking of future versions + * could block momentarily, if a concurrent transaction is + * deleting a key; or it could return a value to the effect that + * the transaction deleting the key has already committed. So we + * do this before re-locking the buffer; otherwise this would be + * prone to deadlocks. + * + * Note that the TID we're locking was grabbed before we unlocked + * the buffer. For it to change while we're not looking, the + * other properties we're testing for below after re-locking the + * buffer would also change, in which case we would restart this + * loop above. + */ + if (!(infomask2 & HEAP_KEYS_UPDATED)) + { + bool updated; + + updated = !HEAP_XMAX_IS_LOCKED_ONLY(infomask); + + /* + * If there are updates, follow the update chain; bail out if + * that cannot be done. + */ + if (follow_updates && updated) + { + TM_Result res; + + res = tdeheap_lock_updated_tuple(relation, tuple, &t_ctid, + GetCurrentTransactionId(), + mode); + if (res != TM_Ok) + { + result = res; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + } + + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * Make sure it's still an appropriate lock, else start over. + * Also, if it wasn't updated before we released the lock, but + * is updated now, we start over too; the reason is that we + * now need to follow the update chain to lock the new + * versions. + */ + if (!HeapTupleHeaderIsOnlyLocked(tuple->t_data) && + ((tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED) || + !updated)) + goto l3; + + /* Things look okay, so we can skip sleeping */ + require_sleep = false; + + /* + * Note we allow Xmax to change here; other updaters/lockers + * could have modified it before we grabbed the buffer lock. + * However, this is not a problem, because with the recheck we + * just did we ensure that they still don't conflict with the + * lock we want. 
+ */ + } + } + else if (mode == LockTupleShare) + { + /* + * If we're requesting Share, we can similarly avoid sleeping if + * there's no update and no exclusive lock present. + */ + if (HEAP_XMAX_IS_LOCKED_ONLY(infomask) && + !HEAP_XMAX_IS_EXCL_LOCKED(infomask)) + { + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * Make sure it's still an appropriate lock, else start over. + * See above about allowing xmax to change. + */ + if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_data->t_infomask) || + HEAP_XMAX_IS_EXCL_LOCKED(tuple->t_data->t_infomask)) + goto l3; + require_sleep = false; + } + } + else if (mode == LockTupleNoKeyExclusive) + { + /* + * If we're requesting NoKeyExclusive, we might also be able to + * avoid sleeping; just ensure that there no conflicting lock + * already acquired. + */ + if (infomask & HEAP_XMAX_IS_MULTI) + { + if (!DoesMultiXactIdConflict((MultiXactId) xwait, infomask, + mode, NULL)) + { + /* + * No conflict, but if the xmax changed under us in the + * meantime, start over. + */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + if (xmax_infomask_changed(tuple->t_data->t_infomask, infomask) || + !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple->t_data), + xwait)) + goto l3; + + /* otherwise, we're good */ + require_sleep = false; + } + } + else if (HEAP_XMAX_IS_KEYSHR_LOCKED(infomask)) + { + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + /* if the xmax changed in the meantime, start over */ + if (xmax_infomask_changed(tuple->t_data->t_infomask, infomask) || + !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple->t_data), + xwait)) + goto l3; + /* otherwise, we're good */ + require_sleep = false; + } + } + + /* + * As a check independent from those above, we can also avoid sleeping + * if the current transaction is the sole locker of the tuple. Note + * that the strength of the lock already held is irrelevant; this is + * not about recording the lock in Xmax (which will be done regardless + * of this optimization, below). Also, note that the cases where we + * hold a lock stronger than we are requesting are already handled + * above by not doing anything. + * + * Note we only deal with the non-multixact case here; MultiXactIdWait + * is well equipped to deal with this situation on its own. + */ + if (require_sleep && !(infomask & HEAP_XMAX_IS_MULTI) && + TransactionIdIsCurrentTransactionId(xwait)) + { + /* ... but if the xmax changed in the meantime, start over */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + if (xmax_infomask_changed(tuple->t_data->t_infomask, infomask) || + !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple->t_data), + xwait)) + goto l3; + Assert(HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_data->t_infomask)); + require_sleep = false; + } + + /* + * Time to sleep on the other transaction/multixact, if necessary. + * + * If the other transaction is an update/delete that's already + * committed, then sleeping cannot possibly do any good: if we're + * required to sleep, get out to raise an error instead. + * + * By here, we either have already acquired the buffer exclusive lock, + * or we must wait for the locking transaction or multixact; so below + * we ensure that we grab buffer lock after the sleep. + */ + if (require_sleep && (result == TM_Updated || result == TM_Deleted)) + { + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + else if (require_sleep) + { + /* + * Acquire tuple lock to establish our priority for the tuple, or + * die trying. LockTuple will release us when we are next-in-line + * for the tuple. 
We must do this even if we are share-locking, + * but not if we already have a weaker lock on the tuple. + * + * If we are forced to "start over" below, we keep the tuple lock; + * this arranges that we stay at the head of the line while + * rechecking tuple state. + */ + if (!skip_tuple_lock && + !tdeheap_acquire_tuplock(relation, tid, mode, wait_policy, + &have_tuple_lock)) + { + /* + * This can only happen if wait_policy is Skip and the lock + * couldn't be obtained. + */ + result = TM_WouldBlock; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + + if (infomask & HEAP_XMAX_IS_MULTI) + { + MultiXactStatus status = get_mxact_status_for_lock(mode, false); + + /* We only ever lock tuples, never update them */ + if (status >= MultiXactStatusNoKeyUpdate) + elog(ERROR, "invalid lock mode in tdeheap_lock_tuple"); + + /* wait for multixact to end, or die trying */ + switch (wait_policy) + { + case LockWaitBlock: + MultiXactIdWait((MultiXactId) xwait, status, infomask, + relation, &tuple->t_self, XLTW_Lock, NULL); + break; + case LockWaitSkip: + if (!ConditionalMultiXactIdWait((MultiXactId) xwait, + status, infomask, relation, + NULL)) + { + result = TM_WouldBlock; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + break; + case LockWaitError: + if (!ConditionalMultiXactIdWait((MultiXactId) xwait, + status, infomask, relation, + NULL)) + ereport(ERROR, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("could not obtain lock on row in relation \"%s\"", + RelationGetRelationName(relation)))); + + break; + } + + /* + * Of course, the multixact might not be done here: if we're + * requesting a light lock mode, other transactions with light + * locks could still be alive, as well as locks owned by our + * own xact or other subxacts of this backend. We need to + * preserve the surviving MultiXact members. Note that it + * isn't absolutely necessary in the latter case, but doing so + * is simpler. + */ + } + else + { + /* wait for regular transaction to end, or die trying */ + switch (wait_policy) + { + case LockWaitBlock: + XactLockTableWait(xwait, relation, &tuple->t_self, + XLTW_Lock); + break; + case LockWaitSkip: + if (!ConditionalXactLockTableWait(xwait)) + { + result = TM_WouldBlock; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + break; + case LockWaitError: + if (!ConditionalXactLockTableWait(xwait)) + ereport(ERROR, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("could not obtain lock on row in relation \"%s\"", + RelationGetRelationName(relation)))); + break; + } + } + + /* if there are updates, follow the update chain */ + if (follow_updates && !HEAP_XMAX_IS_LOCKED_ONLY(infomask)) + { + TM_Result res; + + res = tdeheap_lock_updated_tuple(relation, tuple, &t_ctid, + GetCurrentTransactionId(), + mode); + if (res != TM_Ok) + { + result = res; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + } + + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * xwait is done, but if xwait had just locked the tuple then some + * other xact could update this tuple before we get to this point. + * Check for xmax change, and start over if so. 
+ */ + if (xmax_infomask_changed(tuple->t_data->t_infomask, infomask) || + !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple->t_data), + xwait)) + goto l3; + + if (!(infomask & HEAP_XMAX_IS_MULTI)) + { + /* + * Otherwise check if it committed or aborted. Note we cannot + * be here if the tuple was only locked by somebody who didn't + * conflict with us; that would have been handled above. So + * that transaction must necessarily be gone by now. But + * don't check for this in the multixact case, because some + * locker transactions might still be running. + */ + UpdateXmaxHintBits(tuple->t_data, *buffer, xwait); + } + } + + /* By here, we're certain that we hold buffer exclusive lock again */ + + /* + * We may lock if previous xmax aborted, or if it committed but only + * locked the tuple without updating it; or if we didn't have to wait + * at all for whatever reason. + */ + if (!require_sleep || + (tuple->t_data->t_infomask & HEAP_XMAX_INVALID) || + HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_data->t_infomask) || + HeapTupleHeaderIsOnlyLocked(tuple->t_data)) + result = TM_Ok; + else if (!ItemPointerEquals(&tuple->t_self, &tuple->t_data->t_ctid)) + result = TM_Updated; + else + result = TM_Deleted; + } + +failed: + if (result != TM_Ok) + { + Assert(result == TM_SelfModified || result == TM_Updated || + result == TM_Deleted || result == TM_WouldBlock); + + /* + * When locking a tuple under LockWaitSkip semantics and we fail with + * TM_WouldBlock above, it's possible for concurrent transactions to + * release the lock and set HEAP_XMAX_INVALID in the meantime. So + * this assert is slightly different from the equivalent one in + * tdeheap_delete and tdeheap_update. + */ + Assert((result == TM_WouldBlock) || + !(tuple->t_data->t_infomask & HEAP_XMAX_INVALID)); + Assert(result != TM_Updated || + !ItemPointerEquals(&tuple->t_self, &tuple->t_data->t_ctid)); + tmfd->ctid = tuple->t_data->t_ctid; + tmfd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data); + if (result == TM_SelfModified) + tmfd->cmax = HeapTupleHeaderGetCmax(tuple->t_data); + else + tmfd->cmax = InvalidCommandId; + goto out_locked; + } + + /* + * If we didn't pin the visibility map page and the page has become all + * visible while we were busy locking the buffer, or during some + * subsequent window during which we had it unlocked, we'll have to unlock + * and re-lock, to avoid holding the buffer lock across I/O. That's a bit + * unfortunate, especially since we'll now have to recheck whether the + * tuple has been locked or updated under us, but hopefully it won't + * happen very often. + */ + if (vmbuffer == InvalidBuffer && PageIsAllVisible(page)) + { + LockBuffer(*buffer, BUFFER_LOCK_UNLOCK); + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto l3; + } + + xmax = HeapTupleHeaderGetRawXmax(tuple->t_data); + old_infomask = tuple->t_data->t_infomask; + + /* + * If this is the first possibly-multixact-able operation in the current + * transaction, set my per-backend OldestMemberMXactId setting. We can be + * certain that the transaction will never become a member of any older + * MultiXactIds than that. (We have to do this even if we end up just + * using our own TransactionId below, since some other backend could + * incorporate our XID into a MultiXact immediately afterwards.) + */ + MultiXactIdSetOldestMember(); + + /* + * Compute the new xmax and infomask to store into the tuple. 
Note we do + * not modify the tuple just yet, because that would leave it in the wrong + * state if multixact.c elogs. + */ + compute_new_xmax_infomask(xmax, old_infomask, tuple->t_data->t_infomask2, + GetCurrentTransactionId(), mode, false, + &xid, &new_infomask, &new_infomask2); + + START_CRIT_SECTION(); + + /* + * Store transaction information of xact locking the tuple. + * + * Note: Cmax is meaningless in this context, so don't set it; this avoids + * possibly generating a useless combo CID. Moreover, if we're locking a + * previously updated tuple, it's important to preserve the Cmax. + * + * Also reset the HOT UPDATE bit, but only if there's no update; otherwise + * we would break the HOT chain. + */ + tuple->t_data->t_infomask &= ~HEAP_XMAX_BITS; + tuple->t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + tuple->t_data->t_infomask |= new_infomask; + tuple->t_data->t_infomask2 |= new_infomask2; + if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask)) + HeapTupleHeaderClearHotUpdated(tuple->t_data); + HeapTupleHeaderSetXmax(tuple->t_data, xid); + + /* + * Make sure there is no forward chain link in t_ctid. Note that in the + * cases where the tuple has been updated, we must not overwrite t_ctid, + * because it was set by the updater. Moreover, if the tuple has been + * updated, we need to follow the update chain to lock the new versions of + * the tuple as well. + */ + if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask)) + tuple->t_data->t_ctid = *tid; + + /* Clear only the all-frozen bit on visibility map if needed */ + if (PageIsAllVisible(page) && + tdeheap_visibilitymap_clear(relation, block, vmbuffer, + VISIBILITYMAP_ALL_FROZEN)) + cleared_all_frozen = true; + + + MarkBufferDirty(*buffer); + + /* + * XLOG stuff. You might think that we don't need an XLOG record because + * there is no state change worth restoring after a crash. You would be + * wrong however: we have just written either a TransactionId or a + * MultiXactId that may never have been seen on disk before, and we need + * to make sure that there are XLOG entries covering those ID numbers. + * Else the same IDs might be re-used after a crash, which would be + * disastrous if this page made it to disk before the crash. Essentially + * we have to enforce the WAL log-before-data rule even in this case. + * (Also, in a PITR log-shipping or 2PC environment, we have to have XLOG + * entries for everything anyway.) + */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_lock xlrec; + XLogRecPtr recptr; + + XLogBeginInsert(); + XLogRegisterBuffer(0, *buffer, REGBUF_STANDARD); + + xlrec.offnum = ItemPointerGetOffsetNumber(&tuple->t_self); + xlrec.xmax = xid; + xlrec.infobits_set = compute_infobits(new_infomask, + tuple->t_data->t_infomask2); + xlrec.flags = cleared_all_frozen ? XLH_LOCK_ALL_FROZEN_CLEARED : 0; + XLogRegisterData((char *) &xlrec, SizeOfHeapLock); + + /* we don't decode row locks atm, so no need to log the origin */ + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_LOCK); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + result = TM_Ok; + +out_locked: + LockBuffer(*buffer, BUFFER_LOCK_UNLOCK); + +out_unlocked: + if (BufferIsValid(vmbuffer)) + ReleaseBuffer(vmbuffer); + + /* + * Don't update the visibility map here. Locking a tuple doesn't change + * visibility info. + */ + + /* + * Now that we have successfully marked the tuple as locked, we can + * release the lmgr tuple lock, if we had it. 
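+ *
+ * For example, if backends B1 and B2 both block on a row updated by A,
+ * the heavyweight tuple lock queues them; the first waiter has already
+ * set the new xmax by the time it releases the tuplock here, so a later
+ * waiter cannot overtake it.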
+ */ + if (have_tuple_lock) + UnlockTupleTuplock(relation, tid, mode); + + return result; +} + +/* + * Acquire heavyweight lock on the given tuple, in preparation for acquiring + * its normal, Xmax-based tuple lock. + * + * have_tuple_lock is an input and output parameter: on input, it indicates + * whether the lock has previously been acquired (and this function does + * nothing in that case). If this function returns success, have_tuple_lock + * has been flipped to true. + * + * Returns false if it was unable to obtain the lock; this can only happen if + * wait_policy is Skip. + */ +static bool +tdeheap_acquire_tuplock(Relation relation, ItemPointer tid, LockTupleMode mode, + LockWaitPolicy wait_policy, bool *have_tuple_lock) +{ + if (*have_tuple_lock) + return true; + + switch (wait_policy) + { + case LockWaitBlock: + LockTupleTuplock(relation, tid, mode); + break; + + case LockWaitSkip: + if (!ConditionalLockTupleTuplock(relation, tid, mode)) + return false; + break; + + case LockWaitError: + if (!ConditionalLockTupleTuplock(relation, tid, mode)) + ereport(ERROR, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("could not obtain lock on row in relation \"%s\"", + RelationGetRelationName(relation)))); + break; + } + *have_tuple_lock = true; + + return true; +} + +/* + * Given an original set of Xmax and infomask, and a transaction (identified by + * add_to_xmax) acquiring a new lock of some mode, compute the new Xmax and + * corresponding infomasks to use on the tuple. + * + * Note that this might have side effects such as creating a new MultiXactId. + * + * Most callers will have called HeapTupleSatisfiesUpdate before this function; + * that will have set the HEAP_XMAX_INVALID bit if the xmax was a MultiXactId + * but it was not running anymore. There is a race condition, which is that the + * MultiXactId may have finished since then, but that uncommon case is handled + * either here, or within MultiXactIdExpand. + * + * There is a similar race condition possible when the old xmax was a regular + * TransactionId. We test TransactionIdIsInProgress again just to narrow the + * window, but it's still possible to end up creating an unnecessary + * MultiXactId. Fortunately this is harmless. + */ +static void +compute_new_xmax_infomask(TransactionId xmax, uint16 old_infomask, + uint16 old_infomask2, TransactionId add_to_xmax, + LockTupleMode mode, bool is_update, + TransactionId *result_xmax, uint16 *result_infomask, + uint16 *result_infomask2) +{ + TransactionId new_xmax; + uint16 new_infomask, + new_infomask2; + + Assert(TransactionIdIsCurrentTransactionId(add_to_xmax)); + +l5: + new_infomask = 0; + new_infomask2 = 0; + if (old_infomask & HEAP_XMAX_INVALID) + { + /* + * No previous locker; we just insert our own TransactionId. + * + * Note that it's critical that this case be the first one checked, + * because there are several blocks below that come back to this one + * to implement certain optimizations; old_infomask might contain + * other dirty bits in those cases, but we don't really care. 
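+	 *
+	 * For reference, the lock-only branch below boils down to this
+	 * mapping (HEAP_XMAX_LOCK_ONLY is set in every case):
+	 *
+	 *   LockTupleKeyShare       -> HEAP_XMAX_KEYSHR_LOCK
+	 *   LockTupleShare          -> HEAP_XMAX_SHR_LOCK
+	 *   LockTupleNoKeyExclusive -> HEAP_XMAX_EXCL_LOCK
+	 *   LockTupleExclusive      -> HEAP_XMAX_EXCL_LOCK plus
+	 *                              HEAP_KEYS_UPDATED in infomask2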
+ */ + if (is_update) + { + new_xmax = add_to_xmax; + if (mode == LockTupleExclusive) + new_infomask2 |= HEAP_KEYS_UPDATED; + } + else + { + new_infomask |= HEAP_XMAX_LOCK_ONLY; + switch (mode) + { + case LockTupleKeyShare: + new_xmax = add_to_xmax; + new_infomask |= HEAP_XMAX_KEYSHR_LOCK; + break; + case LockTupleShare: + new_xmax = add_to_xmax; + new_infomask |= HEAP_XMAX_SHR_LOCK; + break; + case LockTupleNoKeyExclusive: + new_xmax = add_to_xmax; + new_infomask |= HEAP_XMAX_EXCL_LOCK; + break; + case LockTupleExclusive: + new_xmax = add_to_xmax; + new_infomask |= HEAP_XMAX_EXCL_LOCK; + new_infomask2 |= HEAP_KEYS_UPDATED; + break; + default: + new_xmax = InvalidTransactionId; /* silence compiler */ + elog(ERROR, "invalid lock mode"); + } + } + } + else if (old_infomask & HEAP_XMAX_IS_MULTI) + { + MultiXactStatus new_status; + + /* + * Currently we don't allow XMAX_COMMITTED to be set for multis, so + * cross-check. + */ + Assert(!(old_infomask & HEAP_XMAX_COMMITTED)); + + /* + * A multixact together with LOCK_ONLY set but neither lock bit set + * (i.e. a pg_upgraded share locked tuple) cannot possibly be running + * anymore. This check is critical for databases upgraded by + * pg_upgrade; both MultiXactIdIsRunning and MultiXactIdExpand assume + * that such multis are never passed. + */ + if (HEAP_LOCKED_UPGRADED(old_infomask)) + { + old_infomask &= ~HEAP_XMAX_IS_MULTI; + old_infomask |= HEAP_XMAX_INVALID; + goto l5; + } + + /* + * If the XMAX is already a MultiXactId, then we need to expand it to + * include add_to_xmax; but if all the members were lockers and are + * all gone, we can do away with the IS_MULTI bit and just set + * add_to_xmax as the only locker/updater. If all lockers are gone + * and we have an updater that aborted, we can also do without a + * multi. + * + * The cost of doing GetMultiXactIdMembers would be paid by + * MultiXactIdExpand if we weren't to do this, so this check is not + * incurring extra work anyhow. + */ + if (!MultiXactIdIsRunning(xmax, HEAP_XMAX_IS_LOCKED_ONLY(old_infomask))) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(old_infomask) || + !TransactionIdDidCommit(MultiXactIdGetUpdateXid(xmax, + old_infomask))) + { + /* + * Reset these bits and restart; otherwise fall through to + * create a new multi below. + */ + old_infomask &= ~HEAP_XMAX_IS_MULTI; + old_infomask |= HEAP_XMAX_INVALID; + goto l5; + } + } + + new_status = get_mxact_status_for_lock(mode, is_update); + + new_xmax = MultiXactIdExpand((MultiXactId) xmax, add_to_xmax, + new_status); + GetMultiXactIdHintBits(new_xmax, &new_infomask, &new_infomask2); + } + else if (old_infomask & HEAP_XMAX_COMMITTED) + { + /* + * It's a committed update, so we need to preserve him as updater of + * the tuple. + */ + MultiXactStatus status; + MultiXactStatus new_status; + + if (old_infomask2 & HEAP_KEYS_UPDATED) + status = MultiXactStatusUpdate; + else + status = MultiXactStatusNoKeyUpdate; + + new_status = get_mxact_status_for_lock(mode, is_update); + + /* + * since it's not running, it's obviously impossible for the old + * updater to be identical to the current one, so we need not check + * for that case as we do in the block above. + */ + new_xmax = MultiXactIdCreate(xmax, status, add_to_xmax, new_status); + GetMultiXactIdHintBits(new_xmax, &new_infomask, &new_infomask2); + } + else if (TransactionIdIsInProgress(xmax)) + { + /* + * If the XMAX is a valid, in-progress TransactionId, then we need to + * create a new MultiXactId that includes both the old locker or + * updater and our own TransactionId. 
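+	 *
+	 * For instance, if xmax is XID 1200 holding a ForShare lock and we
+	 * (XID 1350) want ForKeyShare, the modes don't conflict, so the
+	 * result is a fresh MultiXactId with members {1200 ForShare,
+	 * 1350 ForKeyShare}.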
+ */ + MultiXactStatus new_status; + MultiXactStatus old_status; + LockTupleMode old_mode; + + if (HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)) + { + if (HEAP_XMAX_IS_KEYSHR_LOCKED(old_infomask)) + old_status = MultiXactStatusForKeyShare; + else if (HEAP_XMAX_IS_SHR_LOCKED(old_infomask)) + old_status = MultiXactStatusForShare; + else if (HEAP_XMAX_IS_EXCL_LOCKED(old_infomask)) + { + if (old_infomask2 & HEAP_KEYS_UPDATED) + old_status = MultiXactStatusForUpdate; + else + old_status = MultiXactStatusForNoKeyUpdate; + } + else + { + /* + * LOCK_ONLY can be present alone only when a page has been + * upgraded by pg_upgrade. But in that case, + * TransactionIdIsInProgress() should have returned false. We + * assume it's no longer locked in this case. + */ + elog(WARNING, "LOCK_ONLY found for Xid in progress %u", xmax); + old_infomask |= HEAP_XMAX_INVALID; + old_infomask &= ~HEAP_XMAX_LOCK_ONLY; + goto l5; + } + } + else + { + /* it's an update, but which kind? */ + if (old_infomask2 & HEAP_KEYS_UPDATED) + old_status = MultiXactStatusUpdate; + else + old_status = MultiXactStatusNoKeyUpdate; + } + + old_mode = TUPLOCK_from_mxstatus(old_status); + + /* + * If the lock to be acquired is for the same TransactionId as the + * existing lock, there's an optimization possible: consider only the + * strongest of both locks as the only one present, and restart. + */ + if (xmax == add_to_xmax) + { + /* + * Note that it's not possible for the original tuple to be + * updated: we wouldn't be here because the tuple would have been + * invisible and we wouldn't try to update it. As a subtlety, + * this code can also run when traversing an update chain to lock + * future versions of a tuple. But we wouldn't be here either, + * because the add_to_xmax would be different from the original + * updater. + */ + Assert(HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)); + + /* acquire the strongest of both */ + if (mode < old_mode) + mode = old_mode; + /* mustn't touch is_update */ + + old_infomask |= HEAP_XMAX_INVALID; + goto l5; + } + + /* otherwise, just fall back to creating a new multixact */ + new_status = get_mxact_status_for_lock(mode, is_update); + new_xmax = MultiXactIdCreate(xmax, old_status, + add_to_xmax, new_status); + GetMultiXactIdHintBits(new_xmax, &new_infomask, &new_infomask2); + } + else if (!HEAP_XMAX_IS_LOCKED_ONLY(old_infomask) && + TransactionIdDidCommit(xmax)) + { + /* + * It's a committed update, so we gotta preserve him as updater of the + * tuple. + */ + MultiXactStatus status; + MultiXactStatus new_status; + + if (old_infomask2 & HEAP_KEYS_UPDATED) + status = MultiXactStatusUpdate; + else + status = MultiXactStatusNoKeyUpdate; + + new_status = get_mxact_status_for_lock(mode, is_update); + + /* + * since it's not running, it's obviously impossible for the old + * updater to be identical to the current one, so we need not check + * for that case as we do in the block above. + */ + new_xmax = MultiXactIdCreate(xmax, status, add_to_xmax, new_status); + GetMultiXactIdHintBits(new_xmax, &new_infomask, &new_infomask2); + } + else + { + /* + * Can get here iff the locking/updating transaction was running when + * the infomask was extracted from the tuple, but finished before + * TransactionIdIsInProgress got to run. Deal with it as if there was + * no locker at all in the first place. + */ + old_infomask |= HEAP_XMAX_INVALID; + goto l5; + } + + *result_infomask = new_infomask; + *result_infomask2 = new_infomask2; + *result_xmax = new_xmax; +} + +/* + * Subroutine for tdeheap_lock_updated_tuple_rec. 
+ * + * Given a hypothetical multixact status held by the transaction identified + * with the given xid, does the current transaction need to wait, fail, or can + * it continue if it wanted to acquire a lock of the given mode? "needwait" + * is set to true if waiting is necessary; if it can continue, then TM_Ok is + * returned. If the lock is already held by the current transaction, return + * TM_SelfModified. In case of a conflict with another transaction, a + * different HeapTupleSatisfiesUpdate return code is returned. + * + * The held status is said to be hypothetical because it might correspond to a + * lock held by a single Xid, i.e. not a real MultiXactId; we express it this + * way for simplicity of API. + */ +static TM_Result +test_lockmode_for_conflict(MultiXactStatus status, TransactionId xid, + LockTupleMode mode, HeapTuple tup, + bool *needwait) +{ + MultiXactStatus wantedstatus; + + *needwait = false; + wantedstatus = get_mxact_status_for_lock(mode, false); + + /* + * Note: we *must* check TransactionIdIsInProgress before + * TransactionIdDidAbort/Commit; see comment at top of heapam_visibility.c + * for an explanation. + */ + if (TransactionIdIsCurrentTransactionId(xid)) + { + /* + * The tuple has already been locked by our own transaction. This is + * very rare but can happen if multiple transactions are trying to + * lock an ancient version of the same tuple. + */ + return TM_SelfModified; + } + else if (TransactionIdIsInProgress(xid)) + { + /* + * If the locking transaction is running, what we do depends on + * whether the lock modes conflict: if they do, then we must wait for + * it to finish; otherwise we can fall through to lock this tuple + * version without waiting. + */ + if (DoLockModesConflict(LOCKMODE_from_mxstatus(status), + LOCKMODE_from_mxstatus(wantedstatus))) + { + *needwait = true; + } + + /* + * If we set needwait above, then this value doesn't matter; + * otherwise, this value signals to caller that it's okay to proceed. + */ + return TM_Ok; + } + else if (TransactionIdDidAbort(xid)) + return TM_Ok; + else if (TransactionIdDidCommit(xid)) + { + /* + * The other transaction committed. If it was only a locker, then the + * lock is completely gone now and we can return success; but if it + * was an update, then what we do depends on whether the two lock + * modes conflict. If they conflict, then we must report error to + * caller. But if they don't, we can fall through to allow the current + * transaction to lock the tuple. + * + * Note: the reason we worry about ISUPDATE here is because as soon as + * a transaction ends, all its locks are gone and meaningless, and + * thus we can ignore them; whereas its updates persist. In the + * TransactionIdIsInProgress case, above, we don't need to check + * because we know the lock is still "alive" and thus a conflict needs + * always be checked. + */ + if (!ISUPDATE_from_mxstatus(status)) + return TM_Ok; + + if (DoLockModesConflict(LOCKMODE_from_mxstatus(status), + LOCKMODE_from_mxstatus(wantedstatus))) + { + /* bummer */ + if (!ItemPointerEquals(&tup->t_self, &tup->t_data->t_ctid)) + return TM_Updated; + else + return TM_Deleted; + } + + return TM_Ok; + } + + /* Not in progress, not aborted, not committed -- must have crashed */ + return TM_Ok; +} + + +/* + * Recursive part of tdeheap_lock_updated_tuple + * + * Fetch the tuple pointed to by tid in rel, and mark it as locked by the given + * xid with the given mode; if this tuple is updated, recurse to lock the new + * version as well. 
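+ *
+ * A hedged illustration of the t_ctid chain-following pattern the
+ * function below relies on (buffer content locking and error handling
+ * omitted; count_chain_versions is hypothetical, not part of the patch):
+ */
+#ifdef NOT_USED
+static int
+count_chain_versions(Relation rel, ItemPointer tid)
+{
+	HeapTupleData tup;
+	Buffer		buf;
+	int			nversions = 0;
+
+	tup.t_self = *tid;
+	/* fetch each version in turn; SnapshotAny sees them all */
+	while (tdeheap_fetch(rel, SnapshotAny, &tup, &buf, false))
+	{
+		bool		at_end;
+
+		nversions++;
+		/* the chain ends when a tuple's t_ctid points at itself */
+		at_end = ItemPointerEquals(&tup.t_self, &tup.t_data->t_ctid);
+		tup.t_self = tup.t_data->t_ctid;
+		ReleaseBuffer(buf);
+		if (at_end)
+			break;
+	}
+	return nversions;
+}
+#endif
+/*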
+ */ +static TM_Result +tdeheap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid, + LockTupleMode mode) +{ + TM_Result result; + ItemPointerData tupid; + HeapTupleData mytup; + Buffer buf; + uint16 new_infomask, + new_infomask2, + old_infomask, + old_infomask2; + TransactionId xmax, + new_xmax; + TransactionId priorXmax = InvalidTransactionId; + bool cleared_all_frozen = false; + bool pinned_desired_page; + Buffer vmbuffer = InvalidBuffer; + BlockNumber block; + + ItemPointerCopy(tid, &tupid); + + for (;;) + { + new_infomask = 0; + new_xmax = InvalidTransactionId; + block = ItemPointerGetBlockNumber(&tupid); + ItemPointerCopy(&tupid, &(mytup.t_self)); + + if (!tdeheap_fetch(rel, SnapshotAny, &mytup, &buf, false)) + { + /* + * if we fail to find the updated version of the tuple, it's + * because it was vacuumed/pruned away after its creator + * transaction aborted. So behave as if we got to the end of the + * chain, and there's no further tuple to lock: return success to + * caller. + */ + result = TM_Ok; + goto out_unlocked; + } + +l4: + CHECK_FOR_INTERRUPTS(); + + /* + * Before locking the buffer, pin the visibility map page if it + * appears to be necessary. Since we haven't got the lock yet, + * someone else might be in the middle of changing this, so we'll need + * to recheck after we have the lock. + */ + if (PageIsAllVisible(BufferGetPage(buf))) + { + tdeheap_visibilitymap_pin(rel, block, &vmbuffer); + pinned_desired_page = true; + } + else + pinned_desired_page = false; + + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + + /* + * If we didn't pin the visibility map page and the page has become + * all visible while we were busy locking the buffer, we'll have to + * unlock and re-lock, to avoid holding the buffer lock across I/O. + * That's a bit unfortunate, but hopefully shouldn't happen often. + * + * Note: in some paths through this function, we will reach here + * holding a pin on a vm page that may or may not be the one matching + * this page. If this page isn't all-visible, we won't use the vm + * page, but we hold onto such a pin till the end of the function. + */ + if (!pinned_desired_page && PageIsAllVisible(BufferGetPage(buf))) + { + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + tdeheap_visibilitymap_pin(rel, block, &vmbuffer); + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + } + + /* + * Check the tuple XMIN against prior XMAX, if any. If we reached the + * end of the chain, we're done, so return success. + */ + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(HeapTupleHeaderGetXmin(mytup.t_data), + priorXmax)) + { + result = TM_Ok; + goto out_locked; + } + + /* + * Also check Xmin: if this tuple was created by an aborted + * (sub)transaction, then we already locked the last live one in the + * chain, thus we're done, so return success. + */ + if (TransactionIdDidAbort(HeapTupleHeaderGetXmin(mytup.t_data))) + { + result = TM_Ok; + goto out_locked; + } + + old_infomask = mytup.t_data->t_infomask; + old_infomask2 = mytup.t_data->t_infomask2; + xmax = HeapTupleHeaderGetRawXmax(mytup.t_data); + + /* + * If this tuple version has been updated or locked by some concurrent + * transaction(s), what we do depends on whether our lock mode + * conflicts with what those other transactions hold, and also on the + * status of them. 
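+		 *
+		 * As a reminder, the tuple lock modes conflict per the row-level
+		 * lock matrix in the PostgreSQL documentation ("X" marks a
+		 * conflict):
+		 *
+		 *                    KeyShare  Share  NoKeyExclusive  Exclusive
+		 *   KeyShare             -       -          -             X
+		 *   Share                -       -          X             X
+		 *   NoKeyExclusive       -       X          X             X
+		 *   Exclusive            X       X          X             X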
+ */ + if (!(old_infomask & HEAP_XMAX_INVALID)) + { + TransactionId rawxmax; + bool needwait; + + rawxmax = HeapTupleHeaderGetRawXmax(mytup.t_data); + if (old_infomask & HEAP_XMAX_IS_MULTI) + { + int nmembers; + int i; + MultiXactMember *members; + + /* + * We don't need a test for pg_upgrade'd tuples: this is only + * applied to tuples after the first in an update chain. Said + * first tuple in the chain may well be locked-in-9.2-and- + * pg_upgraded, but that one was already locked by our caller, + * not us; and any subsequent ones cannot be because our + * caller must necessarily have obtained a snapshot later than + * the pg_upgrade itself. + */ + Assert(!HEAP_LOCKED_UPGRADED(mytup.t_data->t_infomask)); + + nmembers = GetMultiXactIdMembers(rawxmax, &members, false, + HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)); + for (i = 0; i < nmembers; i++) + { + result = test_lockmode_for_conflict(members[i].status, + members[i].xid, + mode, + &mytup, + &needwait); + + /* + * If the tuple was already locked by ourselves in a + * previous iteration of this (say tdeheap_lock_tuple was + * forced to restart the locking loop because of a change + * in xmax), then we hold the lock already on this tuple + * version and we don't need to do anything; and this is + * not an error condition either. We just need to skip + * this tuple and continue locking the next version in the + * update chain. + */ + if (result == TM_SelfModified) + { + pfree(members); + goto next; + } + + if (needwait) + { + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + XactLockTableWait(members[i].xid, rel, + &mytup.t_self, + XLTW_LockUpdated); + pfree(members); + goto l4; + } + if (result != TM_Ok) + { + pfree(members); + goto out_locked; + } + } + if (members) + pfree(members); + } + else + { + MultiXactStatus status; + + /* + * For a non-multi Xmax, we first need to compute the + * corresponding MultiXactStatus by using the infomask bits. + */ + if (HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)) + { + if (HEAP_XMAX_IS_KEYSHR_LOCKED(old_infomask)) + status = MultiXactStatusForKeyShare; + else if (HEAP_XMAX_IS_SHR_LOCKED(old_infomask)) + status = MultiXactStatusForShare; + else if (HEAP_XMAX_IS_EXCL_LOCKED(old_infomask)) + { + if (old_infomask2 & HEAP_KEYS_UPDATED) + status = MultiXactStatusForUpdate; + else + status = MultiXactStatusForNoKeyUpdate; + } + else + { + /* + * LOCK_ONLY present alone (a pg_upgraded tuple marked + * as share-locked in the old cluster) shouldn't be + * seen in the middle of an update chain. + */ + elog(ERROR, "invalid lock status in tuple"); + } + } + else + { + /* it's an update, but which kind? */ + if (old_infomask2 & HEAP_KEYS_UPDATED) + status = MultiXactStatusUpdate; + else + status = MultiXactStatusNoKeyUpdate; + } + + result = test_lockmode_for_conflict(status, rawxmax, mode, + &mytup, &needwait); + + /* + * If the tuple was already locked by ourselves in a previous + * iteration of this (say tdeheap_lock_tuple was forced to + * restart the locking loop because of a change in xmax), then + * we hold the lock already on this tuple version and we don't + * need to do anything; and this is not an error condition + * either. We just need to skip this tuple and continue + * locking the next version in the update chain. 
+ */ + if (result == TM_SelfModified) + goto next; + + if (needwait) + { + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + XactLockTableWait(rawxmax, rel, &mytup.t_self, + XLTW_LockUpdated); + goto l4; + } + if (result != TM_Ok) + { + goto out_locked; + } + } + } + + /* compute the new Xmax and infomask values for the tuple ... */ + compute_new_xmax_infomask(xmax, old_infomask, mytup.t_data->t_infomask2, + xid, mode, false, + &new_xmax, &new_infomask, &new_infomask2); + + if (PageIsAllVisible(BufferGetPage(buf)) && + tdeheap_visibilitymap_clear(rel, block, vmbuffer, + VISIBILITYMAP_ALL_FROZEN)) + cleared_all_frozen = true; + + START_CRIT_SECTION(); + + /* ... and set them */ + HeapTupleHeaderSetXmax(mytup.t_data, new_xmax); + mytup.t_data->t_infomask &= ~HEAP_XMAX_BITS; + mytup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + mytup.t_data->t_infomask |= new_infomask; + mytup.t_data->t_infomask2 |= new_infomask2; + + MarkBufferDirty(buf); + + /* XLOG stuff */ + if (RelationNeedsWAL(rel)) + { + xl_tdeheap_lock_updated xlrec; + XLogRecPtr recptr; + Page page = BufferGetPage(buf); + + XLogBeginInsert(); + XLogRegisterBuffer(0, buf, REGBUF_STANDARD); + + xlrec.offnum = ItemPointerGetOffsetNumber(&mytup.t_self); + xlrec.xmax = new_xmax; + xlrec.infobits_set = compute_infobits(new_infomask, new_infomask2); + xlrec.flags = + cleared_all_frozen ? XLH_LOCK_ALL_FROZEN_CLEARED : 0; + + XLogRegisterData((char *) &xlrec, SizeOfHeapLockUpdated); + + recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_LOCK_UPDATED); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + +next: + /* if we find the end of update chain, we're done. */ + if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID || + HeapTupleHeaderIndicatesMovedPartitions(mytup.t_data) || + ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) || + HeapTupleHeaderIsOnlyLocked(mytup.t_data)) + { + result = TM_Ok; + goto out_locked; + } + + /* tail recursion */ + priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data); + ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid); + UnlockReleaseBuffer(buf); + } + + result = TM_Ok; + +out_locked: + UnlockReleaseBuffer(buf); + +out_unlocked: + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + + return result; +} + +/* + * tdeheap_lock_updated_tuple + * Follow update chain when locking an updated tuple, acquiring locks (row + * marks) on the updated versions. + * + * The initial tuple is assumed to be already locked. + * + * This function doesn't check visibility, it just unconditionally marks the + * tuple(s) as locked. If any tuple in the updated chain is being deleted + * concurrently (or updated with the key being modified), sleep until the + * transaction doing it is finished. + * + * Note that we don't acquire heavyweight tuple locks on the tuples we walk + * when we have to wait for other transactions to release them, as opposed to + * what tdeheap_lock_tuple does. The reason is that having more than one + * transaction walking the chain is probably uncommon enough that risk of + * starvation is not likely: one of the preconditions for being here is that + * the snapshot in use predates the update that created this tuple (because we + * started at an earlier version of the tuple), but at the same time such a + * transaction cannot be using repeatable read or serializable isolation + * levels, because that would lead to a serializability failure. 
+ */ +static TM_Result +tdeheap_lock_updated_tuple(Relation rel, HeapTuple tuple, ItemPointer ctid, + TransactionId xid, LockTupleMode mode) +{ + /* + * If the tuple has not been updated, or has moved into another partition + * (effectively a delete) stop here. + */ + if (!HeapTupleHeaderIndicatesMovedPartitions(tuple->t_data) && + !ItemPointerEquals(&tuple->t_self, ctid)) + { + /* + * If this is the first possibly-multixact-able operation in the + * current transaction, set my per-backend OldestMemberMXactId + * setting. We can be certain that the transaction will never become a + * member of any older MultiXactIds than that. (We have to do this + * even if we end up just using our own TransactionId below, since + * some other backend could incorporate our XID into a MultiXact + * immediately afterwards.) + */ + MultiXactIdSetOldestMember(); + + return tdeheap_lock_updated_tuple_rec(rel, ctid, xid, mode); + } + + /* nothing to lock */ + return TM_Ok; +} + +/* + * tdeheap_finish_speculative - mark speculative insertion as successful + * + * To successfully finish a speculative insertion we have to clear speculative + * token from tuple. To do so the t_ctid field, which will contain a + * speculative token value, is modified in place to point to the tuple itself, + * which is characteristic of a newly inserted ordinary tuple. + * + * NB: It is not ok to commit without either finishing or aborting a + * speculative insertion. We could treat speculative tuples of committed + * transactions implicitly as completed, but then we would have to be prepared + * to deal with speculative tokens on committed tuples. That wouldn't be + * difficult - no-one looks at the ctid field of a tuple with invalid xmax - + * but clearing the token at completion isn't very expensive either. + * An explicit confirmation WAL record also makes logical decoding simpler. + */ +void +tdeheap_finish_speculative(Relation relation, ItemPointer tid) +{ + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + + buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid)); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + page = (Page) BufferGetPage(buffer); + + offnum = ItemPointerGetOffsetNumber(tid); + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(ERROR, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* NO EREPORT(ERROR) from here till changes are logged */ + START_CRIT_SECTION(); + + Assert(HeapTupleHeaderIsSpeculative(htup)); + + MarkBufferDirty(buffer); + + /* + * Replace the speculative insertion token with a real t_ctid, pointing to + * itself like it does on regular tuples. + */ + htup->t_ctid = *tid; + + /* XLOG stuff */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_confirm xlrec; + XLogRecPtr recptr; + + xlrec.offnum = ItemPointerGetOffsetNumber(tid); + + XLogBeginInsert(); + + /* We want the same filtering on this as on a plain insert */ + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); + + XLogRegisterData((char *) &xlrec, SizeOfHeapConfirm); + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_CONFIRM); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + UnlockReleaseBuffer(buffer); +} + +/* + * tdeheap_abort_speculative - kill a speculatively inserted tuple + * + * Marks a tuple that was speculatively inserted in the same command as dead, + * by setting its xmin as invalid. 
That makes it immediately appear as dead + * to all transactions, including our own. In particular, it makes + * HeapTupleSatisfiesDirty() regard the tuple as dead, so that another backend + * inserting a duplicate key value won't unnecessarily wait for our whole + * transaction to finish (it'll just wait for our speculative insertion to + * finish). + * + * Killing the tuple prevents "unprincipled deadlocks", which are deadlocks + * that arise due to a mutual dependency that is not user visible. By + * definition, unprincipled deadlocks cannot be prevented by the user + * reordering lock acquisition in client code, because the implementation level + * lock acquisitions are not under the user's direct control. If speculative + * inserters did not take this precaution, then under high concurrency they + * could deadlock with each other, which would not be acceptable. + * + * This is somewhat redundant with tdeheap_delete, but we prefer to have a + * dedicated routine with stripped down requirements. Note that this is also + * used to delete the TOAST tuples created during speculative insertion. + * + * This routine does not affect logical decoding as it only looks at + * confirmation records. + */ +void +tdeheap_abort_speculative(Relation relation, ItemPointer tid) +{ + TransactionId xid = GetCurrentTransactionId(); + ItemId lp; + HeapTupleData tp; + Page page; + BlockNumber block; + Buffer buffer; + TransactionId prune_xid; + + Assert(ItemPointerIsValid(tid)); + + block = ItemPointerGetBlockNumber(tid); + buffer = ReadBuffer(relation, block); + page = BufferGetPage(buffer); + + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * Page can't be all visible, we just inserted into it, and are still + * running. + */ + Assert(!PageIsAllVisible(page)); + + lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid)); + Assert(ItemIdIsNormal(lp)); + + tp.t_tableOid = RelationGetRelid(relation); + tp.t_data = (HeapTupleHeader) PageGetItem(page, lp); + tp.t_len = ItemIdGetLength(lp); + tp.t_self = *tid; + + /* + * Sanity check that the tuple really is a speculatively inserted tuple, + * inserted by us. + */ + if (tp.t_data->t_choice.t_heap.t_xmin != xid) + elog(ERROR, "attempted to kill a tuple inserted by another transaction"); + if (!(IsToastRelation(relation) || HeapTupleHeaderIsSpeculative(tp.t_data))) + elog(ERROR, "attempted to kill a non-speculative tuple"); + Assert(!HeapTupleHeaderIsHeapOnly(tp.t_data)); + + /* + * No need to check for serializable conflicts here. There is never a + * need for a combo CID, either. No need to extract replica identity, or + * do anything special with infomask bits. + */ + + START_CRIT_SECTION(); + + /* + * The tuple will become DEAD immediately. Flag that this page is a + * candidate for pruning by setting xmin to TransactionXmin. While not + * immediately prunable, it is the oldest xid we can cheaply determine + * that's safe against wraparound / being older than the table's + * relfrozenxid. To defend against the unlikely case of a new relation + * having a newer relfrozenxid than our TransactionXmin, use relfrozenxid + * if so (vacuum can't subsequently move relfrozenxid to beyond + * TransactionXmin, so there's no race here). 
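+	 *
+	 * (For example, with TransactionXmin = 1000 and relfrozenxid = 700 we
+	 * set prune_xid = 1000; for a freshly created relation with
+	 * relfrozenxid = 1200 > TransactionXmin, we use 1200 instead.)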
+ */ + Assert(TransactionIdIsValid(TransactionXmin)); + if (TransactionIdPrecedes(TransactionXmin, relation->rd_rel->relfrozenxid)) + prune_xid = relation->rd_rel->relfrozenxid; + else + prune_xid = TransactionXmin; + PageSetPrunable(page, prune_xid); + + /* store transaction information of xact deleting the tuple */ + tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + + /* + * Set the tuple header xmin to InvalidTransactionId. This makes the + * tuple immediately invisible everyone. (In particular, to any + * transactions waiting on the speculative token, woken up later.) + */ + HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId); + + /* Clear the speculative insertion token too */ + tp.t_data->t_ctid = tp.t_self; + + MarkBufferDirty(buffer); + + /* + * XLOG stuff + * + * The WAL records generated here match tdeheap_delete(). The same recovery + * routines are used. + */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_delete xlrec; + XLogRecPtr recptr; + + xlrec.flags = XLH_DELETE_IS_SUPER; + xlrec.infobits_set = compute_infobits(tp.t_data->t_infomask, + tp.t_data->t_infomask2); + xlrec.offnum = ItemPointerGetOffsetNumber(&tp.t_self); + xlrec.xmax = xid; + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapDelete); + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + + /* No replica identity & replication origin logged */ + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_DELETE); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + if (HeapTupleHasExternal(&tp)) + { + Assert(!IsToastRelation(relation)); + tdeheap_toast_delete(relation, &tp, true); + } + + /* + * Never need to mark tuple for invalidation, since catalogs don't support + * speculative insertion + */ + + /* Now we can release the buffer */ + ReleaseBuffer(buffer); + + /* count deletion, as we counted the insertion too */ + pgstat_count_tdeheap_delete(relation); +} + +/* + * tdeheap_inplace_update - update a tuple "in place" (ie, overwrite it) + * + * Overwriting violates both MVCC and transactional safety, so the uses + * of this function in Postgres are extremely limited. Nonetheless we + * find some places to use it. + * + * The tuple cannot change size, and therefore it's reasonable to assume + * that its null bitmap (if any) doesn't change either. So we just + * overwrite the data portion of the tuple without touching the null + * bitmap or any of the header fields. + * + * tuple is an in-memory tuple structure containing the data to be written + * over the target tuple. Also, tuple->t_self identifies the target tuple. + * + * Note that the tuple updated here had better not come directly from the + * syscache if the relation has a toast relation as this tuple could + * include toast values that have been expanded, causing a failure here. + */ +void +tdeheap_inplace_update(Relation relation, HeapTuple tuple) +{ + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + uint32 oldlen; + uint32 newlen; + + /* + * For now, we don't allow parallel updates. Unlike a regular update, + * this should never create a combo CID, so it might be possible to relax + * this restriction, but not without more thought and testing. It's not + * clear that it would be useful, anyway. 
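+	 *
+	 * (Upstream uses the equivalent heap_inplace_update for cases such as
+	 * VACUUM's pg_class statistics updates, where a full MVCC update
+	 * would churn the catalog to no benefit.)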
+ */ + if (IsInParallelMode()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_STATE), + errmsg("cannot update tuples during a parallel operation"))); + + buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(&(tuple->t_self))); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + page = (Page) BufferGetPage(buffer); + + offnum = ItemPointerGetOffsetNumber(&(tuple->t_self)); + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(ERROR, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + oldlen = ItemIdGetLength(lp) - htup->t_hoff; + newlen = tuple->t_len - tuple->t_data->t_hoff; + if (oldlen != newlen || htup->t_hoff != tuple->t_data->t_hoff) + elog(ERROR, "wrong tuple length"); + + /* NO EREPORT(ERROR) from here till changes are logged */ + START_CRIT_SECTION(); + + memcpy((char *) htup + htup->t_hoff, + (char *) tuple->t_data + tuple->t_data->t_hoff, + newlen); + + MarkBufferDirty(buffer); + + /* XLOG stuff */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_inplace xlrec; + XLogRecPtr recptr; + + xlrec.offnum = ItemPointerGetOffsetNumber(&tuple->t_self); + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapInplace); + + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + XLogRegisterBufData(0, (char *) htup + htup->t_hoff, newlen); + + /* inplace updates aren't decoded atm, don't log the origin */ + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_INPLACE); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + UnlockReleaseBuffer(buffer); + + /* + * Send out shared cache inval if necessary. Note that because we only + * pass the new version of the tuple, this mustn't be used for any + * operations that could change catcache lookup keys. But we aren't + * bothering with index updates either, so that's true a fortiori. + */ + if (!IsBootstrapProcessingMode()) + CacheInvalidateHeapTuple(relation, tuple, NULL); +} + +#define FRM_NOOP 0x0001 +#define FRM_INVALIDATE_XMAX 0x0002 +#define FRM_RETURN_IS_XID 0x0004 +#define FRM_RETURN_IS_MULTI 0x0008 +#define FRM_MARK_COMMITTED 0x0010 + +/* + * FreezeMultiXactId + * Determine what to do during freezing when a tuple is marked by a + * MultiXactId. + * + * "flags" is an output value; it's used to tell caller what to do on return. + * "pagefrz" is an input/output value, used to manage page level freezing. + * + * Possible values that we can set in "flags": + * FRM_NOOP + * don't do anything -- keep existing Xmax + * FRM_INVALIDATE_XMAX + * mark Xmax as InvalidTransactionId and set XMAX_INVALID flag. + * FRM_RETURN_IS_XID + * The Xid return value is a single update Xid to set as xmax. + * FRM_MARK_COMMITTED + * Xmax can be marked as HEAP_XMAX_COMMITTED + * FRM_RETURN_IS_MULTI + * The return value is a new MultiXactId to set as new Xmax. + * (caller must obtain proper infomask bits using GetMultiXactIdHintBits) + * + * Caller delegates control of page freezing to us. In practice we always + * force freezing of caller's page unless FRM_NOOP processing is indicated. + * We help caller ensure that XIDs < FreezeLimit and MXIDs < MultiXactCutoff + * can never be left behind. We freely choose when and how to process each + * Multi, without ever violating the cutoff postconditions for freezing. + * + * It's useful to remove Multis on a proactive timeline (relative to freezing + * XIDs) to keep MultiXact member SLRU buffer misses to a minimum. 
It can also + * be cheaper in the short run, for us, since we too can avoid SLRU buffer + * misses through eager processing. + * + * NB: Creates a _new_ MultiXactId when FRM_RETURN_IS_MULTI is set, though only + * when FreezeLimit and/or MultiXactCutoff cutoffs leave us with no choice. + * This can usually be put off, which is usually enough to avoid it altogether. + * Allocating new multis during VACUUM should be avoided on general principle; + * only VACUUM can advance relminmxid, so allocating new Multis here comes with + * its own special risks. + * + * NB: Caller must maintain "no freeze" NewRelfrozenXid/NewRelminMxid trackers + * using tdeheap_tuple_should_freeze when we haven't forced page-level freezing. + * + * NB: Caller should avoid needlessly calling tdeheap_tuple_should_freeze when we + * have already forced page-level freezing, since that might incur the same + * SLRU buffer misses that we specifically intended to avoid by freezing. + */ +static TransactionId +FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, + const struct VacuumCutoffs *cutoffs, uint16 *flags, + HeapPageFreeze *pagefrz) +{ + TransactionId newxmax; + MultiXactMember *members; + int nmembers; + bool need_replace; + int nnewmembers; + MultiXactMember *newmembers; + bool has_lockers; + TransactionId update_xid; + bool update_committed; + TransactionId FreezePageRelfrozenXid; + + *flags = 0; + + /* We should only be called in Multis */ + Assert(t_infomask & HEAP_XMAX_IS_MULTI); + + if (!MultiXactIdIsValid(multi) || + HEAP_LOCKED_UPGRADED(t_infomask)) + { + *flags |= FRM_INVALIDATE_XMAX; + pagefrz->freeze_required = true; + return InvalidTransactionId; + } + else if (MultiXactIdPrecedes(multi, cutoffs->relminmxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found multixact %u from before relminmxid %u", + multi, cutoffs->relminmxid))); + else if (MultiXactIdPrecedes(multi, cutoffs->OldestMxact)) + { + TransactionId update_xact; + + /* + * This old multi cannot possibly have members still running, but + * verify just in case. If it was a locker only, it can be removed + * without any further consideration; but if it contained an update, + * we might need to preserve it. + */ + if (MultiXactIdIsRunning(multi, + HEAP_XMAX_IS_LOCKED_ONLY(t_infomask))) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u from before multi freeze cutoff %u found to be still running", + multi, cutoffs->OldestMxact))); + + if (HEAP_XMAX_IS_LOCKED_ONLY(t_infomask)) + { + *flags |= FRM_INVALIDATE_XMAX; + pagefrz->freeze_required = true; + return InvalidTransactionId; + } + + /* replace multi with single XID for its updater? */ + update_xact = MultiXactIdGetUpdateXid(multi, t_infomask); + if (TransactionIdPrecedes(update_xact, cutoffs->relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u contains update XID %u from before relfrozenxid %u", + multi, update_xact, + cutoffs->relfrozenxid))); + else if (TransactionIdPrecedes(update_xact, cutoffs->OldestXmin)) + { + /* + * Updater XID has to have aborted (otherwise the tuple would have + * been pruned away instead, since updater XID is < OldestXmin). + * Just remove xmax. 
+ */ + if (TransactionIdDidCommit(update_xact)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u contains committed update XID %u from before removable cutoff %u", + multi, update_xact, + cutoffs->OldestXmin))); + *flags |= FRM_INVALIDATE_XMAX; + pagefrz->freeze_required = true; + return InvalidTransactionId; + } + + /* Have to keep updater XID as new xmax */ + *flags |= FRM_RETURN_IS_XID; + pagefrz->freeze_required = true; + return update_xact; + } + + /* + * Some member(s) of this Multi may be below FreezeLimit xid cutoff, so we + * need to walk the whole members array to figure out what to do, if + * anything. + */ + nmembers = + GetMultiXactIdMembers(multi, &members, false, + HEAP_XMAX_IS_LOCKED_ONLY(t_infomask)); + if (nmembers <= 0) + { + /* Nothing worth keeping */ + *flags |= FRM_INVALIDATE_XMAX; + pagefrz->freeze_required = true; + return InvalidTransactionId; + } + + /* + * The FRM_NOOP case is the only case where we might need to ratchet back + * FreezePageRelfrozenXid or FreezePageRelminMxid. It is also the only + * case where our caller might ratchet back its NoFreezePageRelfrozenXid + * or NoFreezePageRelminMxid "no freeze" trackers to deal with a multi. + * FRM_NOOP handling should result in the NewRelfrozenXid/NewRelminMxid + * trackers managed by VACUUM being ratcheting back by xmax to the degree + * required to make it safe to leave xmax undisturbed, independent of + * whether or not page freezing is triggered somewhere else. + * + * Our policy is to force freezing in every case other than FRM_NOOP, + * which obviates the need to maintain either set of trackers, anywhere. + * Every other case will reliably execute a freeze plan for xmax that + * either replaces xmax with an XID/MXID >= OldestXmin/OldestMxact, or + * sets xmax to an InvalidTransactionId XID, rendering xmax fully frozen. + * (VACUUM's NewRelfrozenXid/NewRelminMxid trackers are initialized with + * OldestXmin/OldestMxact, so later values never need to be tracked here.) + */ + need_replace = false; + FreezePageRelfrozenXid = pagefrz->FreezePageRelfrozenXid; + for (int i = 0; i < nmembers; i++) + { + TransactionId xid = members[i].xid; + + Assert(!TransactionIdPrecedes(xid, cutoffs->relfrozenxid)); + + if (TransactionIdPrecedes(xid, cutoffs->FreezeLimit)) + { + /* Can't violate the FreezeLimit postcondition */ + need_replace = true; + break; + } + if (TransactionIdPrecedes(xid, FreezePageRelfrozenXid)) + FreezePageRelfrozenXid = xid; + } + + /* Can't violate the MultiXactCutoff postcondition, either */ + if (!need_replace) + need_replace = MultiXactIdPrecedes(multi, cutoffs->MultiXactCutoff); + + if (!need_replace) + { + /* + * vacuumlazy.c might ratchet back NewRelminMxid, NewRelfrozenXid, or + * both together to make it safe to retain this particular multi after + * freezing its page + */ + *flags |= FRM_NOOP; + pagefrz->FreezePageRelfrozenXid = FreezePageRelfrozenXid; + if (MultiXactIdPrecedes(multi, pagefrz->FreezePageRelminMxid)) + pagefrz->FreezePageRelminMxid = multi; + pfree(members); + return multi; + } + + /* + * Do a more thorough second pass over the multi to figure out which + * member XIDs actually need to be kept. Checking the precise status of + * individual members might even show that we don't need to keep anything. + * That is quite possible even though the Multi must be >= OldestMxact, + * since our second pass only keeps member XIDs when it's truly necessary; + * even member XIDs >= OldestXmin often won't be kept by second pass. 
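+	 *
+	 * For example, a multi with members {1001 ForKeyShare (committed),
+	 * 1005 Update (aborted), 1009 ForShare (still running)} keeps only
+	 * 1009: finished lockers are meaningless, and an aborted updater can
+	 * be ignored outright.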
+ */ + nnewmembers = 0; + newmembers = palloc(sizeof(MultiXactMember) * nmembers); + has_lockers = false; + update_xid = InvalidTransactionId; + update_committed = false; + + /* + * Determine whether to keep each member xid, or to ignore it instead + */ + for (int i = 0; i < nmembers; i++) + { + TransactionId xid = members[i].xid; + MultiXactStatus mstatus = members[i].status; + + Assert(!TransactionIdPrecedes(xid, cutoffs->relfrozenxid)); + + if (!ISUPDATE_from_mxstatus(mstatus)) + { + /* + * Locker XID (not updater XID). We only keep lockers that are + * still running. + */ + if (TransactionIdIsCurrentTransactionId(xid) || + TransactionIdIsInProgress(xid)) + { + if (TransactionIdPrecedes(xid, cutoffs->OldestXmin)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u contains running locker XID %u from before removable cutoff %u", + multi, xid, + cutoffs->OldestXmin))); + newmembers[nnewmembers++] = members[i]; + has_lockers = true; + } + + continue; + } + + /* + * Updater XID (not locker XID). Should we keep it? + * + * Since the tuple wasn't totally removed when vacuum pruned, the + * update Xid cannot possibly be older than OldestXmin cutoff unless + * the updater XID aborted. If the updater transaction is known + * aborted or crashed then it's okay to ignore it, otherwise not. + * + * In any case the Multi should never contain two updaters, whatever + * their individual commit status. Check for that first, in passing. + */ + if (TransactionIdIsValid(update_xid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u has two or more updating members", + multi), + errdetail_internal("First updater XID=%u second updater XID=%u.", + update_xid, xid))); + + /* + * As with all tuple visibility routines, it's critical to test + * TransactionIdIsInProgress before TransactionIdDidCommit, because of + * race conditions explained in detail in heapam_visibility.c. + */ + if (TransactionIdIsCurrentTransactionId(xid) || + TransactionIdIsInProgress(xid)) + update_xid = xid; + else if (TransactionIdDidCommit(xid)) + { + /* + * The transaction committed, so we can tell caller to set + * HEAP_XMAX_COMMITTED. (We can only do this because we know the + * transaction is not running.) + */ + update_committed = true; + update_xid = xid; + } + else + { + /* + * Not in progress, not committed -- must be aborted or crashed; + * we can ignore it. + */ + continue; + } + + /* + * We determined that updater must be kept -- add it to pending new + * members list + */ + if (TransactionIdPrecedes(xid, cutoffs->OldestXmin)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u contains committed update XID %u from before removable cutoff %u", + multi, xid, cutoffs->OldestXmin))); + newmembers[nnewmembers++] = members[i]; + } + + pfree(members); + + /* + * Determine what to do with caller's multi based on information gathered + * during our second pass + */ + if (nnewmembers == 0) + { + /* Nothing worth keeping */ + *flags |= FRM_INVALIDATE_XMAX; + newxmax = InvalidTransactionId; + } + else if (TransactionIdIsValid(update_xid) && !has_lockers) + { + /* + * If there's a single member and it's an update, pass it back alone + * without creating a new Multi. (XXX we could do this when there's a + * single remaining locker, too, but that would complicate the API too + * much; moreover, the case with the single updater is more + * interesting, because those are longer-lived.) 
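+	 *
+	 * To recap the three possible outcomes below: no surviving members ->
+	 * FRM_INVALIDATE_XMAX; a lone surviving updater -> FRM_RETURN_IS_XID
+	 * (plus FRM_MARK_COMMITTED if it committed); anything else ->
+	 * FRM_RETURN_IS_MULTI with a new multi built from the survivors.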
+ */ + Assert(nnewmembers == 1); + *flags |= FRM_RETURN_IS_XID; + if (update_committed) + *flags |= FRM_MARK_COMMITTED; + newxmax = update_xid; + } + else + { + /* + * Create a new multixact with the surviving members of the previous + * one, to set as new Xmax in the tuple + */ + newxmax = MultiXactIdCreateFromMembers(nnewmembers, newmembers); + *flags |= FRM_RETURN_IS_MULTI; + } + + pfree(newmembers); + + pagefrz->freeze_required = true; + return newxmax; +} + +/* + * tdeheap_prepare_freeze_tuple + * + * Check to see whether any of the XID fields of a tuple (xmin, xmax, xvac) + * are older than the OldestXmin and/or OldestMxact freeze cutoffs. If so, + * setup enough state (in the *frz output argument) to enable caller to + * process this tuple as part of freezing its page, and return true. Return + * false if nothing can be changed about the tuple right now. + * + * Also sets *totally_frozen to true if the tuple will be totally frozen once + * caller executes returned freeze plan (or if the tuple was already totally + * frozen by an earlier VACUUM). This indicates that there are no remaining + * XIDs or MultiXactIds that will need to be processed by a future VACUUM. + * + * VACUUM caller must assemble HeapTupleFreeze freeze plan entries for every + * tuple that we returned true for, and call tdeheap_freeze_execute_prepared to + * execute freezing. Caller must initialize pagefrz fields for page as a + * whole before first call here for each heap page. + * + * VACUUM caller decides on whether or not to freeze the page as a whole. + * We'll often prepare freeze plans for a page that caller just discards. + * However, VACUUM doesn't always get to make a choice; it must freeze when + * pagefrz.freeze_required is set, to ensure that any XIDs < FreezeLimit (and + * MXIDs < MultiXactCutoff) can never be left behind. We help to make sure + * that VACUUM always follows that rule. + * + * We sometimes force freezing of xmax MultiXactId values long before it is + * strictly necessary to do so just to ensure the FreezeLimit postcondition. + * It's worth processing MultiXactIds proactively when it is cheap to do so, + * and it's convenient to make that happen by piggy-backing it on the "force + * freezing" mechanism. Conversely, we sometimes delay freezing MultiXactIds + * because it is expensive right now (though only when it's still possible to + * do so without violating the FreezeLimit/MultiXactCutoff postcondition). + * + * It is assumed that the caller has checked the tuple with + * HeapTupleSatisfiesVacuum() and determined that it is not HEAPTUPLE_DEAD + * (else we should be removing the tuple, not freezing it). + * + * NB: This function has side effects: it might allocate a new MultiXactId. + * It will be set as tuple's new xmax when our *frz output is processed within + * tdeheap_execute_freeze_tuple later on. If the tuple is in a shared buffer + * then caller had better have an exclusive lock on it already. 
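+ *
+ * Sketch of the intended prepare/execute calling pattern (hedged:
+ * freeze_page_sketch is hypothetical, and VACUUM's real loop in
+ * vacuumlazy.c carries considerably more state, including the pagefrz
+ * tracker initialization that is elided here):
+ */
+#ifdef NOT_USED
+static void
+freeze_page_sketch(Relation rel, Buffer buf,
+				   const struct VacuumCutoffs *cutoffs,
+				   HeapPageFreeze *pagefrz,
+				   TransactionId conflict_horizon)
+{
+	Page		page = BufferGetPage(buf);
+	HeapTupleFreeze frz[MaxHeapTuplesPerPage];
+	int			nfrozen = 0;
+	OffsetNumber offnum;
+
+	for (offnum = FirstOffsetNumber;
+		 offnum <= PageGetMaxOffsetNumber(page);
+		 offnum = OffsetNumberNext(offnum))
+	{
+		ItemId		lp = PageGetItemId(page, offnum);
+		HeapTupleHeader htup;
+		bool		totally_frozen;
+
+		if (!ItemIdIsNormal(lp))
+			continue;
+		htup = (HeapTupleHeader) PageGetItem(page, lp);
+
+		/* collect a freeze plan if the tuple needs one */
+		if (tdeheap_prepare_freeze_tuple(htup, cutoffs, pagefrz,
+										 &frz[nfrozen], &totally_frozen))
+			frz[nfrozen++].offset = offnum;	/* caller must set offset */
+	}
+
+	if (nfrozen > 0)
+		tdeheap_freeze_execute_prepared(rel, buf, conflict_horizon,
+										frz, nfrozen);
+}
+#endif
+/*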
+ */ +bool +tdeheap_prepare_freeze_tuple(HeapTupleHeader tuple, + const struct VacuumCutoffs *cutoffs, + HeapPageFreeze *pagefrz, + HeapTupleFreeze *frz, bool *totally_frozen) +{ + bool xmin_already_frozen = false, + xmax_already_frozen = false; + bool freeze_xmin = false, + replace_xvac = false, + replace_xmax = false, + freeze_xmax = false; + TransactionId xid; + + frz->xmax = HeapTupleHeaderGetRawXmax(tuple); + frz->t_infomask2 = tuple->t_infomask2; + frz->t_infomask = tuple->t_infomask; + frz->frzflags = 0; + frz->checkflags = 0; + + /* + * Process xmin, while keeping track of whether it's already frozen, or + * will become frozen iff our freeze plan is executed by caller (could be + * neither). + */ + xid = HeapTupleHeaderGetXmin(tuple); + if (!TransactionIdIsNormal(xid)) + xmin_already_frozen = true; + else + { + if (TransactionIdPrecedes(xid, cutoffs->relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found xmin %u from before relfrozenxid %u", + xid, cutoffs->relfrozenxid))); + + /* Will set freeze_xmin flags in freeze plan below */ + freeze_xmin = TransactionIdPrecedes(xid, cutoffs->OldestXmin); + + /* Verify that xmin committed if and when freeze plan is executed */ + if (freeze_xmin) + frz->checkflags |= HEAP_FREEZE_CHECK_XMIN_COMMITTED; + } + + /* + * Old-style VACUUM FULL is gone, but we have to process xvac for as long + * as we support having MOVED_OFF/MOVED_IN tuples in the database + */ + xid = HeapTupleHeaderGetXvac(tuple); + if (TransactionIdIsNormal(xid)) + { + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + Assert(TransactionIdPrecedes(xid, cutoffs->OldestXmin)); + + /* + * For Xvac, we always freeze proactively. This allows totally_frozen + * tracking to ignore xvac. + */ + replace_xvac = pagefrz->freeze_required = true; + + /* Will set replace_xvac flags in freeze plan below */ + } + + /* Now process xmax */ + xid = frz->xmax; + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + /* Raw xmax is a MultiXactId */ + TransactionId newxmax; + uint16 flags; + + /* + * We will either remove xmax completely (in the "freeze_xmax" path), + * process xmax by replacing it (in the "replace_xmax" path), or + * perform no-op xmax processing. The only constraint is that the + * FreezeLimit/MultiXactCutoff postcondition must never be violated. + */ + newxmax = FreezeMultiXactId(xid, tuple->t_infomask, cutoffs, + &flags, pagefrz); + + if (flags & FRM_NOOP) + { + /* + * xmax is a MultiXactId, and nothing about it changes for now. + * This is the only case where 'freeze_required' won't have been + * set for us by FreezeMultiXactId, as well as the only case where + * neither freeze_xmax nor replace_xmax are set (given a multi). + * + * This is a no-op, but the call to FreezeMultiXactId might have + * ratcheted back NewRelfrozenXid and/or NewRelminMxid trackers + * for us (the "freeze page" variants, specifically). That'll + * make it safe for our caller to freeze the page later on, while + * leaving this particular xmax undisturbed. + * + * FreezeMultiXactId is _not_ responsible for the "no freeze" + * NewRelfrozenXid/NewRelminMxid trackers, though -- that's our + * job. A call to tdeheap_tuple_should_freeze for this same tuple + * will take place below if 'freeze_required' isn't set already. + * (This repeats work from FreezeMultiXactId, but allows "no + * freeze" tracker maintenance to happen in only one place.) 
+ */ + Assert(!MultiXactIdPrecedes(newxmax, cutoffs->MultiXactCutoff)); + Assert(MultiXactIdIsValid(newxmax) && xid == newxmax); + } + else if (flags & FRM_RETURN_IS_XID) + { + /* + * xmax will become an updater Xid (original MultiXact's updater + * member Xid will be carried forward as a simple Xid in Xmax). + */ + Assert(!TransactionIdPrecedes(newxmax, cutoffs->OldestXmin)); + + /* + * NB -- some of these transformations are only valid because we + * know the return Xid is a tuple updater (i.e. not merely a + * locker.) Also note that the only reason we don't explicitly + * worry about HEAP_KEYS_UPDATED is because it lives in + * t_infomask2 rather than t_infomask. + */ + frz->t_infomask &= ~HEAP_XMAX_BITS; + frz->xmax = newxmax; + if (flags & FRM_MARK_COMMITTED) + frz->t_infomask |= HEAP_XMAX_COMMITTED; + replace_xmax = true; + } + else if (flags & FRM_RETURN_IS_MULTI) + { + uint16 newbits; + uint16 newbits2; + + /* + * xmax is an old MultiXactId that we have to replace with a new + * MultiXactId, to carry forward two or more original member XIDs. + */ + Assert(!MultiXactIdPrecedes(newxmax, cutoffs->OldestMxact)); + + /* + * We can't use GetMultiXactIdHintBits directly on the new multi + * here; that routine initializes the masks to all zeroes, which + * would lose other bits we need. Doing it this way ensures all + * unrelated bits remain untouched. + */ + frz->t_infomask &= ~HEAP_XMAX_BITS; + frz->t_infomask2 &= ~HEAP_KEYS_UPDATED; + GetMultiXactIdHintBits(newxmax, &newbits, &newbits2); + frz->t_infomask |= newbits; + frz->t_infomask2 |= newbits2; + frz->xmax = newxmax; + replace_xmax = true; + } + else + { + /* + * Freeze plan for tuple "freezes xmax" in the strictest sense: + * it'll leave nothing in xmax (neither an Xid nor a MultiXactId). + */ + Assert(flags & FRM_INVALIDATE_XMAX); + Assert(!TransactionIdIsValid(newxmax)); + + /* Will set freeze_xmax flags in freeze plan below */ + freeze_xmax = true; + } + + /* MultiXactId processing forces freezing (barring FRM_NOOP case) */ + Assert(pagefrz->freeze_required || (!freeze_xmax && !replace_xmax)); + } + else if (TransactionIdIsNormal(xid)) + { + /* Raw xmax is normal XID */ + if (TransactionIdPrecedes(xid, cutoffs->relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found xmax %u from before relfrozenxid %u", + xid, cutoffs->relfrozenxid))); + + /* Will set freeze_xmax flags in freeze plan below */ + freeze_xmax = TransactionIdPrecedes(xid, cutoffs->OldestXmin); + + /* + * Verify that xmax aborted if and when freeze plan is executed, + * provided it's from an update. (A lock-only xmax can be removed + * independent of this, since the lock is released at xact end.) + */ + if (freeze_xmax && !HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + frz->checkflags |= HEAP_FREEZE_CHECK_XMAX_ABORTED; + } + else if (!TransactionIdIsValid(xid)) + { + /* Raw xmax is InvalidTransactionId XID */ + Assert((tuple->t_infomask & HEAP_XMAX_IS_MULTI) == 0); + xmax_already_frozen = true; + } + else + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found raw xmax %u (infomask 0x%04x) not invalid and not multi", + xid, tuple->t_infomask))); + + if (freeze_xmin) + { + Assert(!xmin_already_frozen); + + frz->t_infomask |= HEAP_XMIN_FROZEN; + } + if (replace_xvac) + { + /* + * If a MOVED_OFF tuple is not dead, the xvac transaction must have + * failed; whereas a non-dead MOVED_IN tuple must mean the xvac + * transaction succeeded. 
+ */ + Assert(pagefrz->freeze_required); + if (tuple->t_infomask & HEAP_MOVED_OFF) + frz->frzflags |= XLH_INVALID_XVAC; + else + frz->frzflags |= XLH_FREEZE_XVAC; + } + if (replace_xmax) + { + Assert(!xmax_already_frozen && !freeze_xmax); + Assert(pagefrz->freeze_required); + + /* Already set replace_xmax flags in freeze plan earlier */ + } + if (freeze_xmax) + { + Assert(!xmax_already_frozen && !replace_xmax); + + frz->xmax = InvalidTransactionId; + + /* + * The tuple might be marked either XMAX_INVALID or XMAX_COMMITTED + + * LOCKED. Normalize to INVALID just to be sure no one gets confused. + * Also get rid of the HEAP_KEYS_UPDATED bit. + */ + frz->t_infomask &= ~HEAP_XMAX_BITS; + frz->t_infomask |= HEAP_XMAX_INVALID; + frz->t_infomask2 &= ~HEAP_HOT_UPDATED; + frz->t_infomask2 &= ~HEAP_KEYS_UPDATED; + } + + /* + * Determine if this tuple is already totally frozen, or will become + * totally frozen (provided caller executes freeze plans for the page) + */ + *totally_frozen = ((freeze_xmin || xmin_already_frozen) && + (freeze_xmax || xmax_already_frozen)); + + if (!pagefrz->freeze_required && !(xmin_already_frozen && + xmax_already_frozen)) + { + /* + * So far no previous tuple from the page made freezing mandatory. + * Does this tuple force caller to freeze the entire page? + */ + pagefrz->freeze_required = + tdeheap_tuple_should_freeze(tuple, cutoffs, + &pagefrz->NoFreezePageRelfrozenXid, + &pagefrz->NoFreezePageRelminMxid); + } + + /* Tell caller if this tuple has a usable freeze plan set in *frz */ + return freeze_xmin || replace_xvac || replace_xmax || freeze_xmax; +} + +/* + * tdeheap_execute_freeze_tuple + * Execute the prepared freezing of a tuple with caller's freeze plan. + * + * Caller is responsible for ensuring that no other backend can access the + * storage underlying this tuple, either by holding an exclusive lock on the + * buffer containing it (which is what lazy VACUUM does), or by having it be + * in private storage (which is what CLUSTER and friends do). + */ +static inline void +tdeheap_execute_freeze_tuple(HeapTupleHeader tuple, HeapTupleFreeze *frz) +{ + HeapTupleHeaderSetXmax(tuple, frz->xmax); + + if (frz->frzflags & XLH_FREEZE_XVAC) + HeapTupleHeaderSetXvac(tuple, FrozenTransactionId); + + if (frz->frzflags & XLH_INVALID_XVAC) + HeapTupleHeaderSetXvac(tuple, InvalidTransactionId); + + tuple->t_infomask = frz->t_infomask; + tuple->t_infomask2 = frz->t_infomask2; +} + +/* + * tdeheap_freeze_execute_prepared + * + * Executes freezing of one or more heap tuples on a page on behalf of caller. + * Caller passes an array of tuple plans from tdeheap_prepare_freeze_tuple. + * Caller must set 'offset' in each plan for us. Note that we destructively + * sort caller's tuples array in-place, so caller had better be done with it. + * + * WAL-logs the changes so that VACUUM can advance the rel's relfrozenxid + * later on without any risk of unsafe pg_xact lookups, even following a hard + * crash (or when querying from a standby). We represent freezing by setting + * infomask bits in tuple headers, but this shouldn't be thought of as a hint. + * See section on buffer access rules in src/backend/storage/buffer/README. + */ +void +tdeheap_freeze_execute_prepared(Relation rel, Buffer buffer, + TransactionId snapshotConflictHorizon, + HeapTupleFreeze *tuples, int ntuples) +{ + Page page = BufferGetPage(buffer); + + Assert(ntuples > 0); + + /* + * Perform xmin/xmax XID status sanity checks before critical section. 
+ * + * tdeheap_prepare_freeze_tuple doesn't perform these checks directly because + * pg_xact lookups are relatively expensive. They shouldn't be repeated + * by successive VACUUMs that each decide against freezing the same page. + */ + for (int i = 0; i < ntuples; i++) + { + HeapTupleFreeze *frz = tuples + i; + ItemId itemid = PageGetItemId(page, frz->offset); + HeapTupleHeader htup; + + htup = (HeapTupleHeader) PageGetItem(page, itemid); + + /* Deliberately avoid relying on tuple hint bits here */ + if (frz->checkflags & HEAP_FREEZE_CHECK_XMIN_COMMITTED) + { + TransactionId xmin = HeapTupleHeaderGetRawXmin(htup); + + Assert(!HeapTupleHeaderXminFrozen(htup)); + if (unlikely(!TransactionIdDidCommit(xmin))) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("uncommitted xmin %u needs to be frozen", + xmin))); + } + + /* + * TransactionIdDidAbort won't work reliably in the presence of XIDs + * left behind by transactions that were in progress during a crash, + * so we can only check that xmax didn't commit + */ + if (frz->checkflags & HEAP_FREEZE_CHECK_XMAX_ABORTED) + { + TransactionId xmax = HeapTupleHeaderGetRawXmax(htup); + + Assert(TransactionIdIsNormal(xmax)); + if (unlikely(TransactionIdDidCommit(xmax))) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("cannot freeze committed xmax %u", + xmax))); + } + } + + START_CRIT_SECTION(); + + for (int i = 0; i < ntuples; i++) + { + HeapTupleFreeze *frz = tuples + i; + ItemId itemid = PageGetItemId(page, frz->offset); + HeapTupleHeader htup; + + htup = (HeapTupleHeader) PageGetItem(page, itemid); + tdeheap_execute_freeze_tuple(htup, frz); + } + + MarkBufferDirty(buffer); + + /* Now WAL-log freezing if necessary */ + if (RelationNeedsWAL(rel)) + { + xl_tdeheap_freeze_plan plans[MaxHeapTuplesPerPage]; + OffsetNumber offsets[MaxHeapTuplesPerPage]; + int nplans; + xl_tdeheap_freeze_page xlrec; + XLogRecPtr recptr; + + /* Prepare deduplicated representation for use in WAL record */ + nplans = tdeheap_log_freeze_plan(tuples, ntuples, plans, offsets); + + xlrec.snapshotConflictHorizon = snapshotConflictHorizon; + xlrec.isCatalogRel = RelationIsAccessibleInLogicalDecoding(rel); + xlrec.nplans = nplans; + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapFreezePage); + + /* + * The freeze plan array and offset array are not actually in the + * buffer, but pretend that they are. When XLogInsert stores the + * whole buffer, the arrays need not be stored too. 
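+		 *
+		 * (Put another way: should XLogInsert decide to take a full-page
+		 * image of the heap buffer, the registered buffer data is left out
+		 * of the record, and REDO restores the already-frozen page from the
+		 * image rather than replaying the plans.)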
+ */ + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + XLogRegisterBufData(0, (char *) plans, + nplans * sizeof(xl_tdeheap_freeze_plan)); + XLogRegisterBufData(0, (char *) offsets, + ntuples * sizeof(OffsetNumber)); + + recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_FREEZE_PAGE); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); +} + +/* + * Comparator used to deduplicate XLOG_HEAP2_FREEZE_PAGE freeze plans + */ +static int +tdeheap_log_freeze_cmp(const void *arg1, const void *arg2) +{ + HeapTupleFreeze *frz1 = (HeapTupleFreeze *) arg1; + HeapTupleFreeze *frz2 = (HeapTupleFreeze *) arg2; + + if (frz1->xmax < frz2->xmax) + return -1; + else if (frz1->xmax > frz2->xmax) + return 1; + + if (frz1->t_infomask2 < frz2->t_infomask2) + return -1; + else if (frz1->t_infomask2 > frz2->t_infomask2) + return 1; + + if (frz1->t_infomask < frz2->t_infomask) + return -1; + else if (frz1->t_infomask > frz2->t_infomask) + return 1; + + if (frz1->frzflags < frz2->frzflags) + return -1; + else if (frz1->frzflags > frz2->frzflags) + return 1; + + /* + * tdeheap_log_freeze_eq would consider these tuple-wise plans to be equal. + * (So the tuples will share a single canonical freeze plan.) + * + * We tiebreak on page offset number to keep each freeze plan's page + * offset number array individually sorted. (Unnecessary, but be tidy.) + */ + if (frz1->offset < frz2->offset) + return -1; + else if (frz1->offset > frz2->offset) + return 1; + + Assert(false); + return 0; +} + +/* + * Compare fields that describe actions required to freeze tuple with caller's + * open plan. If everything matches then the frz tuple plan is equivalent to + * caller's plan. + */ +static inline bool +tdeheap_log_freeze_eq(xl_tdeheap_freeze_plan *plan, HeapTupleFreeze *frz) +{ + if (plan->xmax == frz->xmax && + plan->t_infomask2 == frz->t_infomask2 && + plan->t_infomask == frz->t_infomask && + plan->frzflags == frz->frzflags) + return true; + + /* Caller must call tdeheap_log_freeze_new_plan again for frz */ + return false; +} + +/* + * Start new plan initialized using tuple-level actions. At least one tuple + * will have steps required to freeze described by caller's plan during REDO. + */ +static inline void +tdeheap_log_freeze_new_plan(xl_tdeheap_freeze_plan *plan, HeapTupleFreeze *frz) +{ + plan->xmax = frz->xmax; + plan->t_infomask2 = frz->t_infomask2; + plan->t_infomask = frz->t_infomask; + plan->frzflags = frz->frzflags; + plan->ntuples = 1; /* for now */ +} + +/* + * Deduplicate tuple-based freeze plans so that each distinct set of + * processing steps is only stored once in XLOG_HEAP2_FREEZE_PAGE records. + * Called during original execution of freezing (for logged relations). + * + * Return value is number of plans set in *plans_out for caller. Also writes + * an array of offset numbers into *offsets_out output argument for caller + * (actually there is one array per freeze plan, but that's not of immediate + * concern to our caller). 
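+ *
+ * Hypothetical example: five tuples whose prepared plans share identical
+ * xmax, t_infomask, t_infomask2 and frzflags values collapse into a
+ * single plan with ntuples = 5, while *offsets_out receives their five
+ * page offset numbers in ascending order.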
+ */ +static int +tdeheap_log_freeze_plan(HeapTupleFreeze *tuples, int ntuples, + xl_tdeheap_freeze_plan *plans_out, + OffsetNumber *offsets_out) +{ + int nplans = 0; + + /* Sort tuple-based freeze plans in the order required to deduplicate */ + qsort(tuples, ntuples, sizeof(HeapTupleFreeze), tdeheap_log_freeze_cmp); + + for (int i = 0; i < ntuples; i++) + { + HeapTupleFreeze *frz = tuples + i; + + if (i == 0) + { + /* New canonical freeze plan starting with first tup */ + tdeheap_log_freeze_new_plan(plans_out, frz); + nplans++; + } + else if (tdeheap_log_freeze_eq(plans_out, frz)) + { + /* tup matches open canonical plan -- include tup in it */ + Assert(offsets_out[i - 1] < frz->offset); + plans_out->ntuples++; + } + else + { + /* Tup doesn't match current plan -- done with it now */ + plans_out++; + + /* New canonical freeze plan starting with this tup */ + tdeheap_log_freeze_new_plan(plans_out, frz); + nplans++; + } + + /* + * Save page offset number in dedicated buffer in passing. + * + * REDO routine relies on the record's offset numbers array grouping + * offset numbers by freeze plan. The sort order within each grouping + * is ascending offset number order, just to keep things tidy. + */ + offsets_out[i] = frz->offset; + } + + Assert(nplans > 0 && nplans <= ntuples); + + return nplans; +} + +/* + * tdeheap_freeze_tuple + * Freeze tuple in place, without WAL logging. + * + * Useful for callers like CLUSTER that perform their own WAL logging. + */ +bool +tdeheap_freeze_tuple(HeapTupleHeader tuple, + TransactionId relfrozenxid, TransactionId relminmxid, + TransactionId FreezeLimit, TransactionId MultiXactCutoff) +{ + HeapTupleFreeze frz; + bool do_freeze; + bool totally_frozen; + struct VacuumCutoffs cutoffs; + HeapPageFreeze pagefrz; + + cutoffs.relfrozenxid = relfrozenxid; + cutoffs.relminmxid = relminmxid; + cutoffs.OldestXmin = FreezeLimit; + cutoffs.OldestMxact = MultiXactCutoff; + cutoffs.FreezeLimit = FreezeLimit; + cutoffs.MultiXactCutoff = MultiXactCutoff; + + pagefrz.freeze_required = true; + pagefrz.FreezePageRelfrozenXid = FreezeLimit; + pagefrz.FreezePageRelminMxid = MultiXactCutoff; + pagefrz.NoFreezePageRelfrozenXid = FreezeLimit; + pagefrz.NoFreezePageRelminMxid = MultiXactCutoff; + + do_freeze = tdeheap_prepare_freeze_tuple(tuple, &cutoffs, + &pagefrz, &frz, &totally_frozen); + + /* + * Note that because this is not a WAL-logged operation, we don't need to + * fill in the offset in the freeze record. + */ + + if (do_freeze) + tdeheap_execute_freeze_tuple(tuple, &frz); + return do_freeze; +} + +/* + * For a given MultiXactId, return the hint bits that should be set in the + * tuple's infomask. + * + * Normally this should be called for a multixact that was just created, and + * so is on our local cache, so the GetMembers call is fast. + */ +static void +GetMultiXactIdHintBits(MultiXactId multi, uint16 *new_infomask, + uint16 *new_infomask2) +{ + int nmembers; + MultiXactMember *members; + int i; + uint16 bits = HEAP_XMAX_IS_MULTI; + uint16 bits2 = 0; + bool has_update = false; + LockTupleMode strongest = LockTupleKeyShare; + + /* + * We only use this in multis we just created, so they cannot be values + * pre-pg_upgrade. + */ + nmembers = GetMultiXactIdMembers(multi, &members, false, false); + + for (i = 0; i < nmembers; i++) + { + LockTupleMode mode; + + /* + * Remember the strongest lock mode held by any member of the + * multixact. 
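+		 *
+		 * For instance (hypothetical multi): with members {ForKeyShare,
+		 * Update}, the updater leaves LockTupleExclusive as the strongest
+		 * mode, so the computed infomask gets HEAP_XMAX_EXCL_LOCK and
+		 * HEAP_KEYS_UPDATED, and HEAP_XMAX_LOCK_ONLY stays unset.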
+ */ + mode = TUPLOCK_from_mxstatus(members[i].status); + if (mode > strongest) + strongest = mode; + + /* See what other bits we need */ + switch (members[i].status) + { + case MultiXactStatusForKeyShare: + case MultiXactStatusForShare: + case MultiXactStatusForNoKeyUpdate: + break; + + case MultiXactStatusForUpdate: + bits2 |= HEAP_KEYS_UPDATED; + break; + + case MultiXactStatusNoKeyUpdate: + has_update = true; + break; + + case MultiXactStatusUpdate: + bits2 |= HEAP_KEYS_UPDATED; + has_update = true; + break; + } + } + + if (strongest == LockTupleExclusive || + strongest == LockTupleNoKeyExclusive) + bits |= HEAP_XMAX_EXCL_LOCK; + else if (strongest == LockTupleShare) + bits |= HEAP_XMAX_SHR_LOCK; + else if (strongest == LockTupleKeyShare) + bits |= HEAP_XMAX_KEYSHR_LOCK; + + if (!has_update) + bits |= HEAP_XMAX_LOCK_ONLY; + + if (nmembers > 0) + pfree(members); + + *new_infomask = bits; + *new_infomask2 = bits2; +} + +/* + * MultiXactIdGetUpdateXid + * + * Given a multixact Xmax and corresponding infomask, which does not have the + * HEAP_XMAX_LOCK_ONLY bit set, obtain and return the Xid of the updating + * transaction. + * + * Caller is expected to check the status of the updating transaction, if + * necessary. + */ +static TransactionId +MultiXactIdGetUpdateXid(TransactionId xmax, uint16 t_infomask) +{ + TransactionId update_xact = InvalidTransactionId; + MultiXactMember *members; + int nmembers; + + Assert(!(t_infomask & HEAP_XMAX_LOCK_ONLY)); + Assert(t_infomask & HEAP_XMAX_IS_MULTI); + + /* + * Since we know the LOCK_ONLY bit is not set, this cannot be a multi from + * pre-pg_upgrade. + */ + nmembers = GetMultiXactIdMembers(xmax, &members, false, false); + + if (nmembers > 0) + { + int i; + + for (i = 0; i < nmembers; i++) + { + /* Ignore lockers */ + if (!ISUPDATE_from_mxstatus(members[i].status)) + continue; + + /* there can be at most one updater */ + Assert(update_xact == InvalidTransactionId); + update_xact = members[i].xid; +#ifndef USE_ASSERT_CHECKING + + /* + * in an assert-enabled build, walk the whole array to ensure + * there's no other updater. + */ + break; +#endif + } + + pfree(members); + } + + return update_xact; +} + +/* + * HeapTupleGetUpdateXid + * As above, but use a HeapTupleHeader + * + * See also HeapTupleHeaderGetUpdateXid, which can be used without previously + * checking the hint bits. + */ +TransactionId +HeapTupleGetUpdateXid(HeapTupleHeader tuple) +{ + return MultiXactIdGetUpdateXid(HeapTupleHeaderGetRawXmax(tuple), + tuple->t_infomask); +} + +/* + * Does the given multixact conflict with the current transaction grabbing a + * tuple lock of the given strength? + * + * The passed infomask pairs up with the given multixact in the tuple header. + * + * If current_is_member is not NULL, it is set to 'true' if the current + * transaction is a member of the given multixact. 
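+ *
+ * Sketch of the semantics (hypothetical members): when LockTupleShare is
+ * wanted, a live member holding only FOR KEY SHARE does not conflict,
+ * whereas an in-progress or committed updater does; aborted updaters and
+ * lockers that are no longer in progress are ignored entirely.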
+ */
+static bool
+DoesMultiXactIdConflict(MultiXactId multi, uint16 infomask,
+						LockTupleMode lockmode, bool *current_is_member)
+{
+	int			nmembers;
+	MultiXactMember *members;
+	bool		result = false;
+	LOCKMODE	wanted = tupleLockExtraInfo[lockmode].hwlock;
+
+	if (HEAP_LOCKED_UPGRADED(infomask))
+		return false;
+
+	nmembers = GetMultiXactIdMembers(multi, &members, false,
+									 HEAP_XMAX_IS_LOCKED_ONLY(infomask));
+	if (nmembers >= 0)
+	{
+		int			i;
+
+		for (i = 0; i < nmembers; i++)
+		{
+			TransactionId memxid;
+			LOCKMODE	memlockmode;
+
+			if (result && (current_is_member == NULL || *current_is_member))
+				break;
+
+			memlockmode = LOCKMODE_from_mxstatus(members[i].status);
+
+			/* ignore members from current xact (but track their presence) */
+			memxid = members[i].xid;
+			if (TransactionIdIsCurrentTransactionId(memxid))
+			{
+				if (current_is_member != NULL)
+					*current_is_member = true;
+				continue;
+			}
+			else if (result)
+				continue;
+
+			/* ignore members that don't conflict with the lock we want */
+			if (!DoLockModesConflict(memlockmode, wanted))
+				continue;
+
+			if (ISUPDATE_from_mxstatus(members[i].status))
+			{
+				/* ignore aborted updaters */
+				if (TransactionIdDidAbort(memxid))
+					continue;
+			}
+			else
+			{
+				/* ignore lockers-only that are no longer in progress */
+				if (!TransactionIdIsInProgress(memxid))
+					continue;
+			}
+
+			/*
+			 * Whatever remains are either live lockers that conflict with
+			 * our wanted lock, or updaters that are not aborted.  Those
+			 * conflict with what we want.  Set up to return true, but keep
+			 * going to look for the current transaction among the multixact
+			 * members, if needed.
+			 */
+			result = true;
+		}
+		pfree(members);
+	}
+
+	return result;
+}
+
+/*
+ * Do_MultiXactIdWait
+ *		Actual implementation for the two functions below.
+ *
+ * 'multi', 'status' and 'infomask' indicate what to sleep on (the status is
+ * needed to ensure we only sleep on conflicting members, and the infomask is
+ * used to optimize multixact access in case it's a lock-only multi); 'nowait'
+ * indicates whether to use conditional lock acquisition, to allow callers to
+ * fail if the lock is unavailable.  'rel', 'ctid' and 'oper' are used to set
+ * up context information for error messages.  'remaining', if not NULL,
+ * receives the number of members that are still running, including any
+ * (non-aborted) subtransactions of our own transaction.
+ *
+ * We do this by sleeping on each member using XactLockTableWait.  Any
+ * members that belong to the current backend are *not* waited for, however;
+ * this would not merely be useless but would lead to Assert failure inside
+ * XactLockTableWait.  By the time this returns, it is certain that all
+ * transactions *of other backends* that were members of the MultiXactId
+ * that conflict with the requested status are dead (and no new ones can have
+ * been added, since it is not legal to add members to an existing
+ * MultiXactId).
+ *
+ * But by the time we finish sleeping, someone else may have changed the Xmax
+ * of the containing tuple, so the caller needs to iterate on us somehow.
+ *
+ * Note that in case we return false, the number of remaining members is
+ * not to be trusted.
+ */
+static bool
+Do_MultiXactIdWait(MultiXactId multi, MultiXactStatus status,
+				   uint16 infomask, bool nowait,
+				   Relation rel, ItemPointer ctid, XLTW_Oper oper,
+				   int *remaining)
+{
+	bool		result = true;
+	MultiXactMember *members;
+	int			nmembers;
+	int			remain = 0;
+
+	/* for pre-pg_upgrade tuples, no need to sleep at all */
+	nmembers = HEAP_LOCKED_UPGRADED(infomask) ?
-1 :
+		GetMultiXactIdMembers(multi, &members, false,
+							  HEAP_XMAX_IS_LOCKED_ONLY(infomask));
+
+	if (nmembers >= 0)
+	{
+		int			i;
+
+		for (i = 0; i < nmembers; i++)
+		{
+			TransactionId memxid = members[i].xid;
+			MultiXactStatus memstatus = members[i].status;
+
+			if (TransactionIdIsCurrentTransactionId(memxid))
+			{
+				remain++;
+				continue;
+			}
+
+			if (!DoLockModesConflict(LOCKMODE_from_mxstatus(memstatus),
+									 LOCKMODE_from_mxstatus(status)))
+			{
+				if (remaining && TransactionIdIsInProgress(memxid))
+					remain++;
+				continue;
+			}
+
+			/*
+			 * This member conflicts with our multi, so we have to sleep (or
+			 * return failure, if asked to avoid waiting).
+			 *
+			 * Note that we don't set up an error context callback ourselves,
+			 * but instead we pass the info down to XactLockTableWait.  This
+			 * might seem a bit wasteful because the context is set up and
+			 * torn down for each member of the multixact, but in reality it
+			 * should be barely noticeable, and it avoids duplicate code.
+			 */
+			if (nowait)
+			{
+				result = ConditionalXactLockTableWait(memxid);
+				if (!result)
+					break;
+			}
+			else
+				XactLockTableWait(memxid, rel, ctid, oper);
+		}
+
+		pfree(members);
+	}
+
+	if (remaining)
+		*remaining = remain;
+
+	return result;
+}
+
+/*
+ * MultiXactIdWait
+ *		Sleep on a MultiXactId.
+ *
+ * By the time we finish sleeping, someone else may have changed the Xmax
+ * of the containing tuple, so the caller needs to iterate on us somehow.
+ *
+ * We return (in *remaining, if not NULL) the number of members that are still
+ * running, including any (non-aborted) subtransactions of our own transaction.
+ */
+static void
+MultiXactIdWait(MultiXactId multi, MultiXactStatus status, uint16 infomask,
+				Relation rel, ItemPointer ctid, XLTW_Oper oper,
+				int *remaining)
+{
+	(void) Do_MultiXactIdWait(multi, status, infomask, false,
+							  rel, ctid, oper, remaining);
+}
+
+/*
+ * ConditionalMultiXactIdWait
+ *		As above, but only lock if we can get the lock without blocking.
+ *
+ * By the time we finish sleeping, someone else may have changed the Xmax
+ * of the containing tuple, so the caller needs to iterate on us somehow.
+ *
+ * If the multixact is now all gone, return true.  Returns false if some
+ * transactions might still be running.
+ *
+ * We return (in *remaining, if not NULL) the number of members that are still
+ * running, including any (non-aborted) subtransactions of our own transaction.
+ */
+static bool
+ConditionalMultiXactIdWait(MultiXactId multi, MultiXactStatus status,
+						   uint16 infomask, Relation rel, int *remaining)
+{
+	return Do_MultiXactIdWait(multi, status, infomask, true,
+							  rel, NULL, XLTW_None, remaining);
+}
+
+/*
+ * tdeheap_tuple_needs_eventual_freeze
+ *
+ * Check to see whether any of the XID fields of a tuple (xmin, xmax, xvac)
+ * will eventually require freezing (if tuple isn't removed by pruning first).
+ */
+bool
+tdeheap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
+{
+	TransactionId xid;
+
+	/*
+	 * If xmin is a normal transaction ID, this tuple is definitely not
+	 * frozen.
+	 */
+	xid = HeapTupleHeaderGetXmin(tuple);
+	if (TransactionIdIsNormal(xid))
+		return true;
+
+	/*
+	 * If xmax is a valid xact or multixact, this tuple is also not frozen.
+ */ + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + MultiXactId multi; + + multi = HeapTupleHeaderGetRawXmax(tuple); + if (MultiXactIdIsValid(multi)) + return true; + } + else + { + xid = HeapTupleHeaderGetRawXmax(tuple); + if (TransactionIdIsNormal(xid)) + return true; + } + + if (tuple->t_infomask & HEAP_MOVED) + { + xid = HeapTupleHeaderGetXvac(tuple); + if (TransactionIdIsNormal(xid)) + return true; + } + + return false; +} + +/* + * tdeheap_tuple_should_freeze + * + * Return value indicates if tdeheap_prepare_freeze_tuple sibling function would + * (or should) force freezing of the heap page that contains caller's tuple. + * Tuple header XIDs/MXIDs < FreezeLimit/MultiXactCutoff trigger freezing. + * This includes (xmin, xmax, xvac) fields, as well as MultiXact member XIDs. + * + * The *NoFreezePageRelfrozenXid and *NoFreezePageRelminMxid input/output + * arguments help VACUUM track the oldest extant XID/MXID remaining in rel. + * Our working assumption is that caller won't decide to freeze this tuple. + * It's up to caller to only ratchet back its own top-level trackers after the + * point that it fully commits to not freezing the tuple/page in question. + */ +bool +tdeheap_tuple_should_freeze(HeapTupleHeader tuple, + const struct VacuumCutoffs *cutoffs, + TransactionId *NoFreezePageRelfrozenXid, + MultiXactId *NoFreezePageRelminMxid) +{ + TransactionId xid; + MultiXactId multi; + bool freeze = false; + + /* First deal with xmin */ + xid = HeapTupleHeaderGetXmin(tuple); + if (TransactionIdIsNormal(xid)) + { + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + if (TransactionIdPrecedes(xid, *NoFreezePageRelfrozenXid)) + *NoFreezePageRelfrozenXid = xid; + if (TransactionIdPrecedes(xid, cutoffs->FreezeLimit)) + freeze = true; + } + + /* Now deal with xmax */ + xid = InvalidTransactionId; + multi = InvalidMultiXactId; + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + multi = HeapTupleHeaderGetRawXmax(tuple); + else + xid = HeapTupleHeaderGetRawXmax(tuple); + + if (TransactionIdIsNormal(xid)) + { + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + /* xmax is a non-permanent XID */ + if (TransactionIdPrecedes(xid, *NoFreezePageRelfrozenXid)) + *NoFreezePageRelfrozenXid = xid; + if (TransactionIdPrecedes(xid, cutoffs->FreezeLimit)) + freeze = true; + } + else if (!MultiXactIdIsValid(multi)) + { + /* xmax is a permanent XID or invalid MultiXactId/XID */ + } + else if (HEAP_LOCKED_UPGRADED(tuple->t_infomask)) + { + /* xmax is a pg_upgrade'd MultiXact, which can't have updater XID */ + if (MultiXactIdPrecedes(multi, *NoFreezePageRelminMxid)) + *NoFreezePageRelminMxid = multi; + /* tdeheap_prepare_freeze_tuple always freezes pg_upgrade'd xmax */ + freeze = true; + } + else + { + /* xmax is a MultiXactId that may have an updater XID */ + MultiXactMember *members; + int nmembers; + + Assert(MultiXactIdPrecedesOrEquals(cutoffs->relminmxid, multi)); + if (MultiXactIdPrecedes(multi, *NoFreezePageRelminMxid)) + *NoFreezePageRelminMxid = multi; + if (MultiXactIdPrecedes(multi, cutoffs->MultiXactCutoff)) + freeze = true; + + /* need to check whether any member of the mxact is old */ + nmembers = GetMultiXactIdMembers(multi, &members, false, + HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)); + + for (int i = 0; i < nmembers; i++) + { + xid = members[i].xid; + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + if (TransactionIdPrecedes(xid, *NoFreezePageRelfrozenXid)) + *NoFreezePageRelfrozenXid = xid; + if (TransactionIdPrecedes(xid, 
cutoffs->FreezeLimit)) + freeze = true; + } + if (nmembers > 0) + pfree(members); + } + + if (tuple->t_infomask & HEAP_MOVED) + { + xid = HeapTupleHeaderGetXvac(tuple); + if (TransactionIdIsNormal(xid)) + { + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + if (TransactionIdPrecedes(xid, *NoFreezePageRelfrozenXid)) + *NoFreezePageRelfrozenXid = xid; + /* tdeheap_prepare_freeze_tuple forces xvac freezing */ + freeze = true; + } + } + + return freeze; +} + +/* + * Maintain snapshotConflictHorizon for caller by ratcheting forward its value + * using any committed XIDs contained in 'tuple', an obsolescent heap tuple + * that caller is in the process of physically removing, e.g. via HOT pruning + * or index deletion. + * + * Caller must initialize its value to InvalidTransactionId, which is + * generally interpreted as "definitely no need for a recovery conflict". + * Final value must reflect all heap tuples that caller will physically remove + * (or remove TID references to) via its ongoing pruning/deletion operation. + * ResolveRecoveryConflictWithSnapshot() is passed the final value (taken from + * caller's WAL record) by REDO routine when it replays caller's operation. + */ +void +HeapTupleHeaderAdvanceConflictHorizon(HeapTupleHeader tuple, + TransactionId *snapshotConflictHorizon) +{ + TransactionId xmin = HeapTupleHeaderGetXmin(tuple); + TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple); + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (tuple->t_infomask & HEAP_MOVED) + { + if (TransactionIdPrecedes(*snapshotConflictHorizon, xvac)) + *snapshotConflictHorizon = xvac; + } + + /* + * Ignore tuples inserted by an aborted transaction or if the tuple was + * updated/deleted by the inserting transaction. + * + * Look for a committed hint bit, or if no xmin bit is set, check clog. + */ + if (HeapTupleHeaderXminCommitted(tuple) || + (!HeapTupleHeaderXminInvalid(tuple) && TransactionIdDidCommit(xmin))) + { + if (xmax != xmin && + TransactionIdFollows(xmax, *snapshotConflictHorizon)) + *snapshotConflictHorizon = xmax; + } +} + +#ifdef USE_PREFETCH +/* + * Helper function for tdeheap_index_delete_tuples. Issues prefetch requests for + * prefetch_count buffers. The prefetch_state keeps track of all the buffers + * we can prefetch, and which have already been prefetched; each call to this + * function picks up where the previous call left off. + * + * Note: we expect the deltids array to be sorted in an order that groups TIDs + * by heap block, with all TIDs for each block appearing together in exactly + * one group. + */ +static void +index_delete_prefetch_buffer(Relation rel, + IndexDeletePrefetchState *prefetch_state, + int prefetch_count) +{ + BlockNumber cur_hblkno = prefetch_state->cur_hblkno; + int count = 0; + int i; + int ndeltids = prefetch_state->ndeltids; + TM_IndexDelete *deltids = prefetch_state->deltids; + + for (i = prefetch_state->next_item; + i < ndeltids && count < prefetch_count; + i++) + { + ItemPointer htid = &deltids[i].tid; + + if (cur_hblkno == InvalidBlockNumber || + ItemPointerGetBlockNumber(htid) != cur_hblkno) + { + cur_hblkno = ItemPointerGetBlockNumber(htid); + PrefetchBuffer(rel, MAIN_FORKNUM, cur_hblkno); + count++; + } + } + + /* + * Save the prefetch position so that next time we can continue from that + * position. + */ + prefetch_state->next_item = i; + prefetch_state->cur_hblkno = cur_hblkno; +} +#endif + +/* + * Helper function for tdeheap_index_delete_tuples. 
Checks for index corruption + * involving an invalid TID in index AM caller's index page. + * + * This is an ideal place for these checks. The index AM must hold a buffer + * lock on the index page containing the TIDs we examine here, so we don't + * have to worry about concurrent VACUUMs at all. We can be sure that the + * index is corrupt when htid points directly to an LP_UNUSED item or + * heap-only tuple, which is not the case during standard index scans. + */ +static inline void +index_delete_check_htid(TM_IndexDeleteOp *delstate, + Page page, OffsetNumber maxoff, + ItemPointer htid, TM_IndexStatus *istatus) +{ + OffsetNumber indexpagehoffnum = ItemPointerGetOffsetNumber(htid); + ItemId iid; + + Assert(OffsetNumberIsValid(istatus->idxoffnum)); + + if (unlikely(indexpagehoffnum > maxoff)) + ereport(ERROR, + (errcode(ERRCODE_INDEX_CORRUPTED), + errmsg_internal("heap tid from index tuple (%u,%u) points past end of heap page line pointer array at offset %u of block %u in index \"%s\"", + ItemPointerGetBlockNumber(htid), + indexpagehoffnum, + istatus->idxoffnum, delstate->iblknum, + RelationGetRelationName(delstate->irel)))); + + iid = PageGetItemId(page, indexpagehoffnum); + if (unlikely(!ItemIdIsUsed(iid))) + ereport(ERROR, + (errcode(ERRCODE_INDEX_CORRUPTED), + errmsg_internal("heap tid from index tuple (%u,%u) points to unused heap page item at offset %u of block %u in index \"%s\"", + ItemPointerGetBlockNumber(htid), + indexpagehoffnum, + istatus->idxoffnum, delstate->iblknum, + RelationGetRelationName(delstate->irel)))); + + if (ItemIdHasStorage(iid)) + { + HeapTupleHeader htup; + + Assert(ItemIdIsNormal(iid)); + htup = (HeapTupleHeader) PageGetItem(page, iid); + + if (unlikely(HeapTupleHeaderIsHeapOnly(htup))) + ereport(ERROR, + (errcode(ERRCODE_INDEX_CORRUPTED), + errmsg_internal("heap tid from index tuple (%u,%u) points to heap-only tuple at offset %u of block %u in index \"%s\"", + ItemPointerGetBlockNumber(htid), + indexpagehoffnum, + istatus->idxoffnum, delstate->iblknum, + RelationGetRelationName(delstate->irel)))); + } +} + +/* + * heapam implementation of tableam's index_delete_tuples interface. + * + * This helper function is called by index AMs during index tuple deletion. + * See tableam header comments for an explanation of the interface implemented + * here and a general theory of operation. Note that each call here is either + * a simple index deletion call, or a bottom-up index deletion call. + * + * It's possible for this to generate a fair amount of I/O, since we may be + * deleting hundreds of tuples from a single index block. To amortize that + * cost to some degree, this uses prefetching and combines repeat accesses to + * the same heap block. 
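+ *
+ * For example (hypothetical numbers): a simple deletion caller passing 100
+ * TIDs that land on 3 distinct heap blocks costs at most 3 buffer reads
+ * here, since deltids is sorted by TID up front and each heap block is
+ * read and locked only once.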
+ */ +TransactionId +tdeheap_index_delete_tuples(Relation rel, TM_IndexDeleteOp *delstate) +{ + /* Initial assumption is that earlier pruning took care of conflict */ + TransactionId snapshotConflictHorizon = InvalidTransactionId; + BlockNumber blkno = InvalidBlockNumber; + Buffer buf = InvalidBuffer; + Page page = NULL; + OffsetNumber maxoff = InvalidOffsetNumber; + TransactionId priorXmax; +#ifdef USE_PREFETCH + IndexDeletePrefetchState prefetch_state; + int prefetch_distance; +#endif + SnapshotData SnapshotNonVacuumable; + int finalndeltids = 0, + nblocksaccessed = 0; + + /* State that's only used in bottom-up index deletion case */ + int nblocksfavorable = 0; + int curtargetfreespace = delstate->bottomupfreespace, + lastfreespace = 0, + actualfreespace = 0; + bool bottomup_final_block = false; + + InitNonVacuumableSnapshot(SnapshotNonVacuumable, GlobalVisTestFor(rel)); + + /* Sort caller's deltids array by TID for further processing */ + index_delete_sort(delstate); + + /* + * Bottom-up case: resort deltids array in an order attuned to where the + * greatest number of promising TIDs are to be found, and determine how + * many blocks from the start of sorted array should be considered + * favorable. This will also shrink the deltids array in order to + * eliminate completely unfavorable blocks up front. + */ + if (delstate->bottomup) + nblocksfavorable = bottomup_sort_and_shrink(delstate); + +#ifdef USE_PREFETCH + /* Initialize prefetch state. */ + prefetch_state.cur_hblkno = InvalidBlockNumber; + prefetch_state.next_item = 0; + prefetch_state.ndeltids = delstate->ndeltids; + prefetch_state.deltids = delstate->deltids; + + /* + * Determine the prefetch distance that we will attempt to maintain. + * + * Since the caller holds a buffer lock somewhere in rel, we'd better make + * sure that isn't a catalog relation before we call code that does + * syscache lookups, to avoid risk of deadlock. + */ + if (IsCatalogRelation(rel)) + prefetch_distance = maintenance_io_concurrency; + else + prefetch_distance = + get_tablespace_maintenance_io_concurrency(rel->rd_rel->reltablespace); + + /* Cap initial prefetch distance for bottom-up deletion caller */ + if (delstate->bottomup) + { + Assert(nblocksfavorable >= 1); + Assert(nblocksfavorable <= BOTTOMUP_MAX_NBLOCKS); + prefetch_distance = Min(prefetch_distance, nblocksfavorable); + } + + /* Start prefetching. */ + index_delete_prefetch_buffer(rel, &prefetch_state, prefetch_distance); +#endif + + /* Iterate over deltids, determine which to delete, check their horizon */ + Assert(delstate->ndeltids > 0); + for (int i = 0; i < delstate->ndeltids; i++) + { + TM_IndexDelete *ideltid = &delstate->deltids[i]; + TM_IndexStatus *istatus = delstate->status + ideltid->id; + ItemPointer htid = &ideltid->tid; + OffsetNumber offnum; + + /* + * Read buffer, and perform required extra steps each time a new block + * is encountered. Avoid refetching if it's the same block as the one + * from the last htid. + */ + if (blkno == InvalidBlockNumber || + ItemPointerGetBlockNumber(htid) != blkno) + { + /* + * Consider giving up early for bottom-up index deletion caller + * first. (Only prefetch next-next block afterwards, when it + * becomes clear that we're at least going to access the next + * block in line.) + * + * Sometimes the first block frees so much space for bottom-up + * caller that the deletion process can end without accessing any + * more blocks. It is usually necessary to access 2 or 3 blocks + * per bottom-up deletion operation, though. 
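+			 *
+			 * (Rough sketch with hypothetical numbers: given a caller
+			 * space target of 1000 bytes, blocks freeing 600 and then
+			 * 500 bytes reach the target during the second block, so we
+			 * finish that block off and then stop.)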
+ */ + if (delstate->bottomup) + { + /* + * We often allow caller to delete a few additional items + * whose entries we reached after the point that space target + * from caller was satisfied. The cost of accessing the page + * was already paid at that point, so it made sense to finish + * it off. When that happened, we finalize everything here + * (by finishing off the whole bottom-up deletion operation + * without needlessly paying the cost of accessing any more + * blocks). + */ + if (bottomup_final_block) + break; + + /* + * Give up when we didn't enable our caller to free any + * additional space as a result of processing the page that we + * just finished up with. This rule is the main way in which + * we keep the cost of bottom-up deletion under control. + */ + if (nblocksaccessed >= 1 && actualfreespace == lastfreespace) + break; + lastfreespace = actualfreespace; /* for next time */ + + /* + * Deletion operation (which is bottom-up) will definitely + * access the next block in line. Prepare for that now. + * + * Decay target free space so that we don't hang on for too + * long with a marginal case. (Space target is only truly + * helpful when it allows us to recognize that we don't need + * to access more than 1 or 2 blocks to satisfy caller due to + * agreeable workload characteristics.) + * + * We are a bit more patient when we encounter contiguous + * blocks, though: these are treated as favorable blocks. The + * decay process is only applied when the next block in line + * is not a favorable/contiguous block. This is not an + * exception to the general rule; we still insist on finding + * at least one deletable item per block accessed. See + * bottomup_nblocksfavorable() for full details of the theory + * behind favorable blocks and heap block locality in general. + * + * Note: The first block in line is always treated as a + * favorable block, so the earliest possible point that the + * decay can be applied is just before we access the second + * block in line. The Assert() verifies this for us. + */ + Assert(nblocksaccessed > 0 || nblocksfavorable > 0); + if (nblocksfavorable > 0) + nblocksfavorable--; + else + curtargetfreespace /= 2; + } + + /* release old buffer */ + if (BufferIsValid(buf)) + UnlockReleaseBuffer(buf); + + blkno = ItemPointerGetBlockNumber(htid); + buf = ReadBuffer(rel, blkno); + nblocksaccessed++; + Assert(!delstate->bottomup || + nblocksaccessed <= BOTTOMUP_MAX_NBLOCKS); + +#ifdef USE_PREFETCH + + /* + * To maintain the prefetch distance, prefetch one more page for + * each page we read. + */ + index_delete_prefetch_buffer(rel, &prefetch_state, 1); +#endif + + LockBuffer(buf, BUFFER_LOCK_SHARE); + + page = BufferGetPage(buf); + maxoff = PageGetMaxOffsetNumber(page); + } + + /* + * In passing, detect index corruption involving an index page with a + * TID that points to a location in the heap that couldn't possibly be + * correct. We only do this with actual TIDs from caller's index page + * (not items reached by traversing through a HOT chain). + */ + index_delete_check_htid(delstate, page, maxoff, htid, istatus); + + if (istatus->knowndeletable) + Assert(!delstate->bottomup && !istatus->promising); + else + { + ItemPointerData tmp = *htid; + HeapTupleData heapTuple; + + /* Are any tuples from this HOT chain non-vacuumable? 
*/ + if (tdeheap_hot_search_buffer(&tmp, rel, buf, &SnapshotNonVacuumable, + &heapTuple, NULL, true)) + continue; /* can't delete entry */ + + /* Caller will delete, since whole HOT chain is vacuumable */ + istatus->knowndeletable = true; + + /* Maintain index free space info for bottom-up deletion case */ + if (delstate->bottomup) + { + Assert(istatus->freespace > 0); + actualfreespace += istatus->freespace; + if (actualfreespace >= curtargetfreespace) + bottomup_final_block = true; + } + } + + /* + * Maintain snapshotConflictHorizon value for deletion operation as a + * whole by advancing current value using heap tuple headers. This is + * loosely based on the logic for pruning a HOT chain. + */ + offnum = ItemPointerGetOffsetNumber(htid); + priorXmax = InvalidTransactionId; /* cannot check first XMIN */ + for (;;) + { + ItemId lp; + HeapTupleHeader htup; + + /* Sanity check (pure paranoia) */ + if (offnum < FirstOffsetNumber) + break; + + /* + * An offset past the end of page's line pointer array is possible + * when the array was truncated + */ + if (offnum > maxoff) + break; + + lp = PageGetItemId(page, offnum); + if (ItemIdIsRedirected(lp)) + { + offnum = ItemIdGetRedirect(lp); + continue; + } + + /* + * We'll often encounter LP_DEAD line pointers (especially with an + * entry marked knowndeletable by our caller up front). No heap + * tuple headers get examined for an htid that leads us to an + * LP_DEAD item. This is okay because the earlier pruning + * operation that made the line pointer LP_DEAD in the first place + * must have considered the original tuple header as part of + * generating its own snapshotConflictHorizon value. + * + * Relying on XLOG_HEAP2_PRUNE records like this is the same + * strategy that index vacuuming uses in all cases. Index VACUUM + * WAL records don't even have a snapshotConflictHorizon field of + * their own for this reason. + */ + if (!ItemIdIsNormal(lp)) + break; + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* + * Check the tuple XMIN against prior XMAX, if any + */ + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax)) + break; + + HeapTupleHeaderAdvanceConflictHorizon(htup, + &snapshotConflictHorizon); + + /* + * If the tuple is not HOT-updated, then we are at the end of this + * HOT-chain. No need to visit later tuples from the same update + * chain (they get their own index entries) -- just move on to + * next htid from index AM caller. + */ + if (!HeapTupleHeaderIsHotUpdated(htup)) + break; + + /* Advance to next HOT chain member */ + Assert(ItemPointerGetBlockNumber(&htup->t_ctid) == blkno); + offnum = ItemPointerGetOffsetNumber(&htup->t_ctid); + priorXmax = HeapTupleHeaderGetUpdateXid(htup); + } + + /* Enable further/final shrinking of deltids for caller */ + finalndeltids = i + 1; + } + + UnlockReleaseBuffer(buf); + + /* + * Shrink deltids array to exclude non-deletable entries at the end. This + * is not just a minor optimization. Final deltids array size might be + * zero for a bottom-up caller. Index AM is explicitly allowed to rely on + * ndeltids being zero in all cases with zero total deletable entries. 
+ */ + Assert(finalndeltids > 0 || delstate->bottomup); + delstate->ndeltids = finalndeltids; + + return snapshotConflictHorizon; +} + +/* + * Specialized inlineable comparison function for index_delete_sort() + */ +static inline int +index_delete_sort_cmp(TM_IndexDelete *deltid1, TM_IndexDelete *deltid2) +{ + ItemPointer tid1 = &deltid1->tid; + ItemPointer tid2 = &deltid2->tid; + + { + BlockNumber blk1 = ItemPointerGetBlockNumber(tid1); + BlockNumber blk2 = ItemPointerGetBlockNumber(tid2); + + if (blk1 != blk2) + return (blk1 < blk2) ? -1 : 1; + } + { + OffsetNumber pos1 = ItemPointerGetOffsetNumber(tid1); + OffsetNumber pos2 = ItemPointerGetOffsetNumber(tid2); + + if (pos1 != pos2) + return (pos1 < pos2) ? -1 : 1; + } + + Assert(false); + + return 0; +} + +/* + * Sort deltids array from delstate by TID. This prepares it for further + * processing by tdeheap_index_delete_tuples(). + * + * This operation becomes a noticeable consumer of CPU cycles with some + * workloads, so we go to the trouble of specialization/micro optimization. + * We use shellsort for this because it's easy to specialize, compiles to + * relatively few instructions, and is adaptive to presorted inputs/subsets + * (which are typical here). + */ +static void +index_delete_sort(TM_IndexDeleteOp *delstate) +{ + TM_IndexDelete *deltids = delstate->deltids; + int ndeltids = delstate->ndeltids; + int low = 0; + + /* + * Shellsort gap sequence (taken from Sedgewick-Incerpi paper). + * + * This implementation is fast with array sizes up to ~4500. This covers + * all supported BLCKSZ values. + */ + const int gaps[9] = {1968, 861, 336, 112, 48, 21, 7, 3, 1}; + + /* Think carefully before changing anything here -- keep swaps cheap */ + StaticAssertDecl(sizeof(TM_IndexDelete) <= 8, + "element size exceeds 8 bytes"); + + for (int g = 0; g < lengthof(gaps); g++) + { + for (int hi = gaps[g], i = low + hi; i < ndeltids; i++) + { + TM_IndexDelete d = deltids[i]; + int j = i; + + while (j >= hi && index_delete_sort_cmp(&deltids[j - hi], &d) >= 0) + { + deltids[j] = deltids[j - hi]; + j -= hi; + } + deltids[j] = d; + } + } +} + +/* + * Returns how many blocks should be considered favorable/contiguous for a + * bottom-up index deletion pass. This is a number of heap blocks that starts + * from and includes the first block in line. + * + * There is always at least one favorable block during bottom-up index + * deletion. In the worst case (i.e. with totally random heap blocks) the + * first block in line (the only favorable block) can be thought of as a + * degenerate array of contiguous blocks that consists of a single block. + * tdeheap_index_delete_tuples() will expect this. + * + * Caller passes blockgroups, a description of the final order that deltids + * will be sorted in for tdeheap_index_delete_tuples() bottom-up index deletion + * processing. Note that deltids need not actually be sorted just yet (caller + * only passes deltids to us so that we can interpret blockgroups). + * + * You might guess that the existence of contiguous blocks cannot matter much, + * since in general the main factor that determines which blocks we visit is + * the number of promising TIDs, which is a fixed hint from the index AM. + * We're not really targeting the general case, though -- the actual goal is + * to adapt our behavior to a wide variety of naturally occurring conditions. + * The effects of most of the heuristics we apply are only noticeable in the + * aggregate, over time and across many _related_ bottom-up index deletion + * passes. 
+ * + * Deeming certain blocks favorable allows heapam to recognize and adapt to + * workloads where heap blocks visited during bottom-up index deletion can be + * accessed contiguously, in the sense that each newly visited block is the + * neighbor of the block that bottom-up deletion just finished processing (or + * close enough to it). It will likely be cheaper to access more favorable + * blocks sooner rather than later (e.g. in this pass, not across a series of + * related bottom-up passes). Either way it is probably only a matter of time + * (or a matter of further correlated version churn) before all blocks that + * appear together as a single large batch of favorable blocks get accessed by + * _some_ bottom-up pass. Large batches of favorable blocks tend to either + * appear almost constantly or not even once (it all depends on per-index + * workload characteristics). + * + * Note that the blockgroups sort order applies a power-of-two bucketing + * scheme that creates opportunities for contiguous groups of blocks to get + * batched together, at least with workloads that are naturally amenable to + * being driven by heap block locality. This doesn't just enhance the spatial + * locality of bottom-up heap block processing in the obvious way. It also + * enables temporal locality of access, since sorting by heap block number + * naturally tends to make the bottom-up processing order deterministic. + * + * Consider the following example to get a sense of how temporal locality + * might matter: There is a heap relation with several indexes, each of which + * is low to medium cardinality. It is subject to constant non-HOT updates. + * The updates are skewed (in one part of the primary key, perhaps). None of + * the indexes are logically modified by the UPDATE statements (if they were + * then bottom-up index deletion would not be triggered in the first place). + * Naturally, each new round of index tuples (for each heap tuple that gets a + * tdeheap_update() call) will have the same heap TID in each and every index. + * Since these indexes are low cardinality and never get logically modified, + * heapam processing during bottom-up deletion passes will access heap blocks + * in approximately sequential order. Temporal locality of access occurs due + * to bottom-up deletion passes behaving very similarly across each of the + * indexes at any given moment. This keeps the number of buffer misses needed + * to visit heap blocks to a minimum. + */ +static int +bottomup_nblocksfavorable(IndexDeleteCounts *blockgroups, int nblockgroups, + TM_IndexDelete *deltids) +{ + int64 lastblock = -1; + int nblocksfavorable = 0; + + Assert(nblockgroups >= 1); + Assert(nblockgroups <= BOTTOMUP_MAX_NBLOCKS); + + /* + * We tolerate heap blocks that will be accessed only slightly out of + * physical order. Small blips occur when a pair of almost-contiguous + * blocks happen to fall into different buckets (perhaps due only to a + * small difference in npromisingtids that the bucketing scheme didn't + * quite manage to ignore). We effectively ignore these blips by applying + * a small tolerance. The precise tolerance we use is a little arbitrary, + * but it works well enough in practice. 
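+	 *
+	 * (Illustration with hypothetical blocks: a group order of 200, 201,
+	 * 199 counts three favorable blocks, assuming the tolerance covers
+	 * these one- and two-block steps, while a subsequent jump to block
+	 * 750 ends the count there.)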
+ */ + for (int b = 0; b < nblockgroups; b++) + { + IndexDeleteCounts *group = blockgroups + b; + TM_IndexDelete *firstdtid = deltids + group->ifirsttid; + BlockNumber block = ItemPointerGetBlockNumber(&firstdtid->tid); + + if (lastblock != -1 && + ((int64) block < lastblock - BOTTOMUP_TOLERANCE_NBLOCKS || + (int64) block > lastblock + BOTTOMUP_TOLERANCE_NBLOCKS)) + break; + + nblocksfavorable++; + lastblock = block; + } + + /* Always indicate that there is at least 1 favorable block */ + Assert(nblocksfavorable >= 1); + + return nblocksfavorable; +} + +/* + * qsort comparison function for bottomup_sort_and_shrink() + */ +static int +bottomup_sort_and_shrink_cmp(const void *arg1, const void *arg2) +{ + const IndexDeleteCounts *group1 = (const IndexDeleteCounts *) arg1; + const IndexDeleteCounts *group2 = (const IndexDeleteCounts *) arg2; + + /* + * Most significant field is npromisingtids (which we invert the order of + * so as to sort in desc order). + * + * Caller should have already normalized npromisingtids fields into + * power-of-two values (buckets). + */ + if (group1->npromisingtids > group2->npromisingtids) + return -1; + if (group1->npromisingtids < group2->npromisingtids) + return 1; + + /* + * Tiebreak: desc ntids sort order. + * + * We cannot expect power-of-two values for ntids fields. We should + * behave as if they were already rounded up for us instead. + */ + if (group1->ntids != group2->ntids) + { + uint32 ntids1 = pg_nextpower2_32((uint32) group1->ntids); + uint32 ntids2 = pg_nextpower2_32((uint32) group2->ntids); + + if (ntids1 > ntids2) + return -1; + if (ntids1 < ntids2) + return 1; + } + + /* + * Tiebreak: asc offset-into-deltids-for-block (offset to first TID for + * block in deltids array) order. + * + * This is equivalent to sorting in ascending heap block number order + * (among otherwise equal subsets of the array). This approach allows us + * to avoid accessing the out-of-line TID. (We rely on the assumption + * that the deltids array was sorted in ascending heap TID order when + * these offsets to the first TID from each heap block group were formed.) + */ + if (group1->ifirsttid > group2->ifirsttid) + return 1; + if (group1->ifirsttid < group2->ifirsttid) + return -1; + + pg_unreachable(); + + return 0; +} + +/* + * tdeheap_index_delete_tuples() helper function for bottom-up deletion callers. + * + * Sorts deltids array in the order needed for useful processing by bottom-up + * deletion. The array should already be sorted in TID order when we're + * called. The sort process groups heap TIDs from deltids into heap block + * groupings. Earlier/more-promising groups/blocks are usually those that are + * known to have the most "promising" TIDs. + * + * Sets new size of deltids array (ndeltids) in state. deltids will only have + * TIDs from the BOTTOMUP_MAX_NBLOCKS most promising heap blocks when we + * return. This often means that deltids will be shrunk to a small fraction + * of its original size (we eliminate many heap blocks from consideration for + * caller up front). + * + * Returns the number of "favorable" blocks. See bottomup_nblocksfavorable() + * for a definition and full details. 
+ */ +static int +bottomup_sort_and_shrink(TM_IndexDeleteOp *delstate) +{ + IndexDeleteCounts *blockgroups; + TM_IndexDelete *reordereddeltids; + BlockNumber curblock = InvalidBlockNumber; + int nblockgroups = 0; + int ncopied = 0; + int nblocksfavorable = 0; + + Assert(delstate->bottomup); + Assert(delstate->ndeltids > 0); + + /* Calculate per-heap-block count of TIDs */ + blockgroups = palloc(sizeof(IndexDeleteCounts) * delstate->ndeltids); + for (int i = 0; i < delstate->ndeltids; i++) + { + TM_IndexDelete *ideltid = &delstate->deltids[i]; + TM_IndexStatus *istatus = delstate->status + ideltid->id; + ItemPointer htid = &ideltid->tid; + bool promising = istatus->promising; + + if (curblock != ItemPointerGetBlockNumber(htid)) + { + /* New block group */ + nblockgroups++; + + Assert(curblock < ItemPointerGetBlockNumber(htid) || + !BlockNumberIsValid(curblock)); + + curblock = ItemPointerGetBlockNumber(htid); + blockgroups[nblockgroups - 1].ifirsttid = i; + blockgroups[nblockgroups - 1].ntids = 1; + blockgroups[nblockgroups - 1].npromisingtids = 0; + } + else + { + blockgroups[nblockgroups - 1].ntids++; + } + + if (promising) + blockgroups[nblockgroups - 1].npromisingtids++; + } + + /* + * We're about ready to sort block groups to determine the optimal order + * for visiting heap blocks. But before we do, round the number of + * promising tuples for each block group up to the next power-of-two, + * unless it is very low (less than 4), in which case we round up to 4. + * npromisingtids is far too noisy to trust when choosing between a pair + * of block groups that both have very low values. + * + * This scheme divides heap blocks/block groups into buckets. Each bucket + * contains blocks that have _approximately_ the same number of promising + * TIDs as each other. The goal is to ignore relatively small differences + * in the total number of promising entries, so that the whole process can + * give a little weight to heapam factors (like heap block locality) + * instead. This isn't a trade-off, really -- we have nothing to lose. It + * would be foolish to interpret small differences in npromisingtids + * values as anything more than noise. + * + * We tiebreak on nhtids when sorting block group subsets that have the + * same npromisingtids, but this has the same issues as npromisingtids, + * and so nhtids is subject to the same power-of-two bucketing scheme. The + * only reason that we don't fix nhtids in the same way here too is that + * we'll need accurate nhtids values after the sort. We handle nhtids + * bucketization dynamically instead (in the sort comparator). + * + * See bottomup_nblocksfavorable() for a full explanation of when and how + * heap locality/favorable blocks can significantly influence when and how + * heap blocks are accessed. 
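+	 *
+	 * Concretely: npromisingtids values of 1, 3, 5 and 9 become buckets of
+	 * 4, 4, 8 and 16 below, so the first two groups tie on promising TIDs
+	 * and are ordered by the ntids and ifirsttid tiebreaks applied in
+	 * bottomup_sort_and_shrink_cmp instead.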
+ */ + for (int b = 0; b < nblockgroups; b++) + { + IndexDeleteCounts *group = blockgroups + b; + + /* Better off falling back on nhtids with low npromisingtids */ + if (group->npromisingtids <= 4) + group->npromisingtids = 4; + else + group->npromisingtids = + pg_nextpower2_32((uint32) group->npromisingtids); + } + + /* Sort groups and rearrange caller's deltids array */ + qsort(blockgroups, nblockgroups, sizeof(IndexDeleteCounts), + bottomup_sort_and_shrink_cmp); + reordereddeltids = palloc(delstate->ndeltids * sizeof(TM_IndexDelete)); + + nblockgroups = Min(BOTTOMUP_MAX_NBLOCKS, nblockgroups); + /* Determine number of favorable blocks at the start of final deltids */ + nblocksfavorable = bottomup_nblocksfavorable(blockgroups, nblockgroups, + delstate->deltids); + + for (int b = 0; b < nblockgroups; b++) + { + IndexDeleteCounts *group = blockgroups + b; + TM_IndexDelete *firstdtid = delstate->deltids + group->ifirsttid; + + memcpy(reordereddeltids + ncopied, firstdtid, + sizeof(TM_IndexDelete) * group->ntids); + ncopied += group->ntids; + } + + /* Copy final grouped and sorted TIDs back into start of caller's array */ + memcpy(delstate->deltids, reordereddeltids, + sizeof(TM_IndexDelete) * ncopied); + delstate->ndeltids = ncopied; + + pfree(reordereddeltids); + pfree(blockgroups); + + return nblocksfavorable; +} + +/* + * Perform XLogInsert for a heap-visible operation. 'block' is the block + * being marked all-visible, and vm_buffer is the buffer containing the + * corresponding visibility map block. Both should have already been modified + * and dirtied. + * + * snapshotConflictHorizon comes from the largest xmin on the page being + * marked all-visible. REDO routine uses it to generate recovery conflicts. + * + * If checksums or wal_log_hints are enabled, we may also generate a full-page + * image of tdeheap_buffer. Otherwise, we optimize away the FPI (by specifying + * REGBUF_NO_IMAGE for the heap buffer), in which case the caller should *not* + * update the heap page's LSN. + */ +XLogRecPtr +log_tdeheap_visible(Relation rel, Buffer tdeheap_buffer, Buffer vm_buffer, + TransactionId snapshotConflictHorizon, uint8 vmflags) +{ + xl_tdeheap_visible xlrec; + XLogRecPtr recptr; + uint8 flags; + + Assert(BufferIsValid (tdeheap_buffer)); + Assert(BufferIsValid(vm_buffer)); + + xlrec.snapshotConflictHorizon = snapshotConflictHorizon; + xlrec.flags = vmflags; + if (RelationIsAccessibleInLogicalDecoding(rel)) + xlrec.flags |= VISIBILITYMAP_XLOG_CATALOG_REL; + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapVisible); + + XLogRegisterBuffer(0, vm_buffer, 0); + + flags = REGBUF_STANDARD; + if (!XLogHintBitIsNeeded()) + flags |= REGBUF_NO_IMAGE; + XLogRegisterBuffer(1, tdeheap_buffer, flags); + + recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_VISIBLE); + + return recptr; +} + +/* + * Perform XLogInsert for a heap-update operation. Caller must already + * have modified the buffer(s) and marked them dirty. 
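+ *
+ * A typical caller sequence (a sketch, not a verbatim call site): modify
+ * the page(s) inside a critical section, MarkBufferDirty() them, call this
+ * function, and apply the returned LSN with PageSetLSN() on each modified
+ * page before leaving the critical section.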
+ */
+static XLogRecPtr
+log_tdeheap_update(Relation reln, Buffer oldbuf,
+				   Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				   HeapTuple old_key_tuple,
+				   bool all_visible_cleared, bool new_all_visible_cleared)
+{
+	xl_tdeheap_update xlrec;
+	xl_tdeheap_header xlhdr;
+	xl_tdeheap_header xlhdr_idx;
+	uint8		info;
+	uint16		prefix_suffix[2];
+	uint16		prefixlen = 0,
+				suffixlen = 0;
+	XLogRecPtr	recptr;
+	Page		page = BufferGetPage(newbuf);
+	PageHeader	phdr = (PageHeader) page;
+	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
+	bool		init;
+	int			bufflags;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	XLogBeginInsert();
+
+	if (HeapTupleIsHeapOnly(newtup))
+		info = XLOG_HEAP_HOT_UPDATE;
+	else
+		info = XLOG_HEAP_UPDATE;
+
+	/*
+	 * If the old and new tuple are on the same page, we only need to log the
+	 * parts of the new tuple that were changed.  That saves on the amount of
+	 * WAL we need to write.  Currently, we just count any unchanged bytes in
+	 * the beginning and end of the tuple.  That's quick to check, and
+	 * perfectly covers the common case that only one field is updated.
+	 *
+	 * We could do this even if the old and new tuple are on different pages,
+	 * but only if we don't make a full-page image of the old page, which is
+	 * difficult to know in advance.  Also, if the old tuple is corrupt for
+	 * some reason, it would allow the corruption to propagate to the new
+	 * page, so it seems best to avoid.  Under the general assumption that
+	 * most updates tend to create the new tuple version on the same page,
+	 * there isn't much to be gained by doing this across pages anyway.
+	 *
+	 * Skip this if we're taking a full-page image of the new page, as we
+	 * don't include the new tuple in the WAL record in that case.  Also
+	 * disable if wal_level='logical', as logical decoding needs to be able
+	 * to read the new tuple in whole from the WAL record alone.
+	 */
+	if (oldbuf == newbuf && !need_tuple_data &&
+		!XLogCheckBufferNeedsBackup(newbuf))
+	{
+		char	   *oldp = (char *) oldtup->t_data + oldtup->t_data->t_hoff;
+		char	   *newp = (char *) newtup->t_data + newtup->t_data->t_hoff;
+		int			oldlen = oldtup->t_len - oldtup->t_data->t_hoff;
+		int			newlen = newtup->t_len - newtup->t_data->t_hoff;
+
+		/* Check for common prefix between old and new tuple */
+		for (prefixlen = 0; prefixlen < Min(oldlen, newlen); prefixlen++)
+		{
+			if (newp[prefixlen] != oldp[prefixlen])
+				break;
+		}
+
+		/*
+		 * Storing the length of the prefix takes 2 bytes, so we need to save
+		 * at least 3 bytes or there's no point.
+		 */
+		if (prefixlen < 3)
+			prefixlen = 0;
+
+		/* Same for suffix */
+		for (suffixlen = 0; suffixlen < Min(oldlen, newlen) - prefixlen; suffixlen++)
+		{
+			if (newp[newlen - suffixlen - 1] != oldp[oldlen - suffixlen - 1])
+				break;
+		}
+		if (suffixlen < 3)
+			suffixlen = 0;
+	}
+
+	/* Prepare main WAL data chain */
+	xlrec.flags = 0;
+	if (all_visible_cleared)
+		xlrec.flags |= XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED;
+	if (new_all_visible_cleared)
+		xlrec.flags |= XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED;
+	if (prefixlen > 0)
+		xlrec.flags |= XLH_UPDATE_PREFIX_FROM_OLD;
+	if (suffixlen > 0)
+		xlrec.flags |= XLH_UPDATE_SUFFIX_FROM_OLD;
+	if (need_tuple_data)
+	{
+		xlrec.flags |= XLH_UPDATE_CONTAINS_NEW_TUPLE;
+		if (old_key_tuple)
+		{
+			if (reln->rd_rel->relreplident == REPLICA_IDENTITY_FULL)
+				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_TUPLE;
+			else
+				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
+		}
+	}
+
+	/* If new tuple is the single and first tuple on page...
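+	 * then the page is empty apart from it, so we can log the record with
+	 * XLOG_HEAP_INIT_PAGE and let redo re-initialize the page from scratch.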
*/
+	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
+		PageGetMaxOffsetNumber(page) == FirstOffsetNumber)
+	{
+		info |= XLOG_HEAP_INIT_PAGE;
+		init = true;
+	}
+	else
+		init = false;
+
+	/* Prepare WAL data for the old page */
+	xlrec.old_offnum = ItemPointerGetOffsetNumber(&oldtup->t_self);
+	xlrec.old_xmax = HeapTupleHeaderGetRawXmax(oldtup->t_data);
+	xlrec.old_infobits_set = compute_infobits(oldtup->t_data->t_infomask,
+											  oldtup->t_data->t_infomask2);
+
+	/* Prepare WAL data for the new page */
+	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
+	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+
+	bufflags = REGBUF_STANDARD;
+	if (init)
+		bufflags |= REGBUF_WILL_INIT;
+	if (need_tuple_data)
+		bufflags |= REGBUF_KEEP_DATA;
+
+	XLogRegisterBuffer(0, newbuf, bufflags);
+	if (oldbuf != newbuf)
+		XLogRegisterBuffer(1, oldbuf, REGBUF_STANDARD);
+
+	XLogRegisterData((char *) &xlrec, SizeOfHeapUpdate);
+
+	/*
+	 * Prepare WAL data for the new tuple.
+	 */
+	if (prefixlen > 0 || suffixlen > 0)
+	{
+		if (prefixlen > 0 && suffixlen > 0)
+		{
+			prefix_suffix[0] = prefixlen;
+			prefix_suffix[1] = suffixlen;
+			XLogRegisterBufData(0, (char *) &prefix_suffix, sizeof(uint16) * 2);
+		}
+		else if (prefixlen > 0)
+		{
+			XLogRegisterBufData(0, (char *) &prefixlen, sizeof(uint16));
+		}
+		else
+		{
+			XLogRegisterBufData(0, (char *) &suffixlen, sizeof(uint16));
+		}
+	}
+
+	xlhdr.t_infomask2 = newtup->t_data->t_infomask2;
+	xlhdr.t_infomask = newtup->t_data->t_infomask;
+	xlhdr.t_hoff = newtup->t_data->t_hoff;
+	Assert(SizeofHeapTupleHeader + prefixlen + suffixlen <= newtup->t_len);
+
+	/*
+	 * PG73FORMAT: write bitmap [+ padding] [+ oid] + data
+	 *
+	 * The 'data' doesn't include the common prefix or suffix.
+	 */
+	/* We write the encrypted new tuple data from the buffer */
+	XLogRegisterBufData(0, (char *) &xlhdr, SizeOfHeapHeader);
+	if (prefixlen == 0)
+	{
+		XLogRegisterBufData(0,
+							((char *) phdr) + phdr->pd_upper + SizeofHeapTupleHeader,
+							newtup->t_len - SizeofHeapTupleHeader - suffixlen);
+	}
+	else
+	{
+		/*
+		 * Have to write the null bitmap and data after the common prefix as
+		 * two separate rdata entries.
+		 */
+		/* bitmap [+ padding] [+ oid] */
+		if (newtup->t_data->t_hoff - SizeofHeapTupleHeader > 0)
+		{
+			XLogRegisterBufData(0,
+								((char *) phdr) + phdr->pd_upper + SizeofHeapTupleHeader,
+								newtup->t_data->t_hoff - SizeofHeapTupleHeader);
+		}
+
+		/* data after common prefix */
+		XLogRegisterBufData(0,
+							((char *) phdr) + phdr->pd_upper + newtup->t_data->t_hoff + prefixlen,
+							newtup->t_len - newtup->t_data->t_hoff - prefixlen - suffixlen);
+	}
+
+	/* We need to log a tuple identity */
+	if (need_tuple_data && old_key_tuple)
+	{
+		/* don't really need this, but it's more comfy to decode */
+		xlhdr_idx.t_infomask2 = old_key_tuple->t_data->t_infomask2;
+		xlhdr_idx.t_infomask = old_key_tuple->t_data->t_infomask;
+		xlhdr_idx.t_hoff = old_key_tuple->t_data->t_hoff;
+
+		XLogRegisterData((char *) &xlhdr_idx, SizeOfHeapHeader);
+
+		/* PG73FORMAT: write bitmap [+ padding] [+ oid] + data */
+		XLogRegisterData((char *) old_key_tuple->t_data + SizeofHeapTupleHeader,
+						 old_key_tuple->t_len - SizeofHeapTupleHeader);
+	}
+
+	/* filtering by origin on a row level is much more efficient */
+	XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+	recptr = XLogInsert(RM_HEAP_ID, info);
+
+	return recptr;
+}
+
+/*
+ * Perform XLogInsert of an XLOG_HEAP2_NEW_CID record
+ *
+ * This is only used in wal_level >= WAL_LEVEL_LOGICAL, and only for catalog
+ * tuples.
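+ *
+ * Illustrative example: when a catalog tuple is inserted and then deleted by
+ * the same transaction, the tuple's single command-id field holds a combo
+ * CID, so logical decoding could not recover cmin and cmax from the tuple
+ * alone; this record preserves both values for it.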
+ */
+static XLogRecPtr
+log_tdeheap_new_cid(Relation relation, HeapTuple tup)
+{
+	xl_tdeheap_new_cid xlrec;
+
+	XLogRecPtr	recptr;
+	HeapTupleHeader hdr = tup->t_data;
+
+	Assert(ItemPointerIsValid(&tup->t_self));
+	Assert(tup->t_tableOid != InvalidOid);
+
+	xlrec.top_xid = GetTopTransactionId();
+	xlrec.target_locator = relation->rd_locator;
+	xlrec.target_tid = tup->t_self;
+
+	/*
+	 * If the tuple got inserted & deleted in the same TX, we definitely have
+	 * a combo CID, so set both cmin and cmax.
+	 */
+	if (hdr->t_infomask & HEAP_COMBOCID)
+	{
+		Assert(!(hdr->t_infomask & HEAP_XMAX_INVALID));
+		Assert(!HeapTupleHeaderXminInvalid(hdr));
+		xlrec.cmin = HeapTupleHeaderGetCmin(hdr);
+		xlrec.cmax = HeapTupleHeaderGetCmax(hdr);
+		xlrec.combocid = HeapTupleHeaderGetRawCommandId(hdr);
+	}
+	/* No combo CID, so only cmin or cmax can be set by this TX */
+	else
+	{
+		/*
+		 * Tuple inserted.
+		 *
+		 * We need to check for LOCK ONLY because multixacts might be
+		 * transferred to the new tuple in case of FOR KEY SHARE updates in
+		 * which case there will be an xmax, although the tuple just got
+		 * inserted.
+		 */
+		if (hdr->t_infomask & HEAP_XMAX_INVALID ||
+			HEAP_XMAX_IS_LOCKED_ONLY(hdr->t_infomask))
+		{
+			xlrec.cmin = HeapTupleHeaderGetRawCommandId(hdr);
+			xlrec.cmax = InvalidCommandId;
+		}
+		/* Tuple from a different tx updated or deleted. */
+		else
+		{
+			xlrec.cmin = InvalidCommandId;
+			xlrec.cmax = HeapTupleHeaderGetRawCommandId(hdr);
+		}
+		xlrec.combocid = InvalidCommandId;
+	}
+
+	/*
+	 * Note that we don't need to register the buffer here, because this
+	 * operation does not modify the page. The insert/update/delete that
+	 * called us certainly did, but that's WAL-logged separately.
+	 */
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapNewCid);
+
+	/* will be looked at irrespective of origin */
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_NEW_CID);
+
+	return recptr;
+}
+
+/*
+ * Build a heap tuple representing the configured REPLICA IDENTITY to represent
+ * the old tuple in an UPDATE or DELETE.
+ *
+ * Returns NULL if there's no need to log an identity or if there's no suitable
+ * key defined.
+ *
+ * Pass key_required true if any replica identity columns changed value, or if
+ * any of them have any external data.  Delete must always pass true.
+ *
+ * *copy is set to true if the returned tuple is a modified copy rather than
+ * the same tuple that was passed in.
+ */
+static HeapTuple
+ExtractReplicaIdentity(Relation relation, HeapTuple tp, bool key_required,
+					   bool *copy)
+{
+	TupleDesc	desc = RelationGetDescr(relation);
+	char		replident = relation->rd_rel->relreplident;
+	Bitmapset  *idattrs;
+	HeapTuple	key_tuple;
+	bool		nulls[MaxHeapAttributeNumber];
+	Datum		values[MaxHeapAttributeNumber];
+
+	*copy = false;
+
+	if (!RelationIsLogicallyLogged(relation))
+		return NULL;
+
+	if (replident == REPLICA_IDENTITY_NOTHING)
+		return NULL;
+
+	if (replident == REPLICA_IDENTITY_FULL)
+	{
+		/*
+		 * When logging the entire old tuple, it very well could contain
+		 * toasted columns. If so, force them to be inlined.
+		 */
+		if (HeapTupleHasExternal(tp))
+		{
+			*copy = true;
+			tp = toast_flatten_tuple(tp, desc);
+		}
+		return tp;
+	}
+
+	/* if the key isn't required and we're only logging the key, we're done */
+	if (!key_required)
+		return NULL;
+
+	/* find out the replica identity columns */
+	idattrs = RelationGetIndexAttrBitmap(relation,
+										 INDEX_ATTR_BITMAP_IDENTITY_KEY);
+
+	/*
+	 * If there are no defined replica identity columns, treat as
+	 * !key_required.
+	 * (This case should not be reachable from tdeheap_update, since that
+	 * should calculate key_required accurately.  But tdeheap_delete just
+	 * passes constant true for key_required, so we can hit this case in
+	 * deletes.)
+	 */
+	if (bms_is_empty(idattrs))
+		return NULL;
+
+	/*
+	 * Construct a new tuple containing only the replica identity columns,
+	 * with nulls elsewhere. While we're at it, assert that the replica
+	 * identity columns aren't null.
+	 */
+	tdeheap_deform_tuple(tp, desc, values, nulls);
+
+	for (int i = 0; i < desc->natts; i++)
+	{
+		if (bms_is_member(i + 1 - FirstLowInvalidHeapAttributeNumber,
+						  idattrs))
+			Assert(!nulls[i]);
+		else
+			nulls[i] = true;
+	}
+
+	key_tuple = tdeheap_form_tuple(desc, values, nulls);
+	*copy = true;
+
+	bms_free(idattrs);
+
+	/*
+	 * If the tuple, which by here only contains indexed columns, still has
+	 * toasted columns, force them to be inlined. This is somewhat unlikely
+	 * since there are limits on the size of indexed columns, so we don't
+	 * duplicate toast_flatten_tuple()'s functionality in the above loop over
+	 * the indexed columns, even if it would be more efficient.
+	 */
+	if (HeapTupleHasExternal(key_tuple))
+	{
+		HeapTuple	oldtup = key_tuple;
+
+		key_tuple = toast_flatten_tuple(oldtup, desc);
+		tdeheap_freetuple(oldtup);
+	}
+
+	return key_tuple;
+}
+
+/*
+ * Handles XLOG_HEAP2_PRUNE record type.
+ *
+ * Acquires a full cleanup lock.
+ */
+static void
+tdeheap_xlog_prune(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_tdeheap_prune *xlrec = (xl_tdeheap_prune *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileLocator rlocator;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rlocator, NULL, &blkno);
+
+	/*
+	 * We're about to remove tuples. In Hot Standby mode, ensure that there
+	 * are no queries running for which the removed tuples are still visible.
+	 */
+	if (InHotStandby)
+		ResolveRecoveryConflictWithSnapshot(xlrec->snapshotConflictHorizon,
+											xlrec->isCatalogRel,
+											rlocator);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *end;
+		OffsetNumber *redirected;
+		OffsetNumber *nowdead;
+		OffsetNumber *nowunused;
+		int			nredirected;
+		int			ndead;
+		int			nunused;
+		Size		datalen;
+		Relation	reln;
+
+		redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		nredirected = xlrec->nredirected;
+		ndead = xlrec->ndead;
+		end = (OffsetNumber *) ((char *) redirected + datalen);
+		nowdead = redirected + (nredirected * 2);
+		nowunused = nowdead + ndead;
+		nunused = (end - nowunused);
+		Assert(nunused >= 0);
+
+		/* Update all line pointers per the record, and repair fragmentation */
+		reln = CreateFakeRelcacheEntry(rlocator);
+		tdeheap_page_prune_execute(reln, buffer,
+								   redirected, nredirected,
+								   nowdead, ndead,
+								   nowunused, nunused);
+
+		/*
+		 * Note: we don't worry about updating the page's prunability hints.
+		 * At worst this will cause an extra prune cycle to occur soon.
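+		 *
+		 * For reference, the block data decoded above is laid out as:
+		 * nredirected pairs of offsets (2 * nredirected OffsetNumbers),
+		 * then ndead now-dead offsets, with the remainder up to datalen
+		 * being the now-unused offsets.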
+		 */
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+
+	if (BufferIsValid(buffer))
+	{
+		Size		freespace = PageGetHeapFreeSpace(BufferGetPage(buffer));
+
+		UnlockReleaseBuffer(buffer);
+
+		/*
+		 * After pruning records from a page, it's useful to update the FSM
+		 * about it, as it may cause the page to become a target for
+		 * insertions later even if vacuum decides not to visit it (which is
+		 * possible if it gets marked all-visible).
+		 *
+		 * Do this regardless of a full-page image being applied, since the
+		 * FSM data is not in the page anyway.
+		 */
+		XLogRecordPageWithFreeSpace(rlocator, blkno, freespace);
+	}
+}
+
+/*
+ * Handles XLOG_HEAP2_VACUUM record type.
+ *
+ * Acquires an ordinary exclusive lock only.
+ */
+static void
+tdeheap_xlog_vacuum(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_tdeheap_vacuum *xlrec = (xl_tdeheap_vacuum *) XLogRecGetData(record);
+	Buffer		buffer;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	/*
+	 * If we have a full-page image, restore it (without using a cleanup
+	 * lock) and we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, false,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *nowunused;
+		Size		datalen;
+		OffsetNumber *offnum;
+
+		nowunused = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		/* Shouldn't be a record unless there's something to do */
+		Assert(xlrec->nunused > 0);
+
+		/* Update all now-unused line pointers */
+		offnum = nowunused;
+		for (int i = 0; i < xlrec->nunused; i++)
+		{
+			OffsetNumber off = *offnum++;
+			ItemId		lp = PageGetItemId(page, off);
+
+			Assert(ItemIdIsDead(lp) && !ItemIdHasStorage(lp));
+			ItemIdSetUnused(lp);
+		}
+
+		/* Attempt to truncate line pointer array now */
+		PageTruncateLinePointerArray(page);
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+
+	if (BufferIsValid(buffer))
+	{
+		Size		freespace = PageGetHeapFreeSpace(BufferGetPage(buffer));
+		RelFileLocator rlocator;
+
+		XLogRecGetBlockTag(record, 0, &rlocator, NULL, &blkno);
+
+		UnlockReleaseBuffer(buffer);
+
+		/*
+		 * After vacuuming LP_DEAD items from a page, it's useful to update
+		 * the FSM about it, as it may cause the page to become a target for
+		 * insertions later even if vacuum decides not to visit it (which is
+		 * possible if it gets marked all-visible).
+		 *
+		 * Do this regardless of a full-page image being applied, since the
+		 * FSM data is not in the page anyway.
+		 */
+		XLogRecordPageWithFreeSpace(rlocator, blkno, freespace);
+	}
+}
+
+/*
+ * Given an "infobits" field from an XLog record, set the correct bits in the
+ * given infomask and infomask2 for the tuple touched by the record.
+ *
+ * (This is the reverse of compute_infobits).
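+ *
+ * Example (illustrative): infobits of XLHL_XMAX_LOCK_ONLY |
+ * XLHL_XMAX_EXCL_LOCK yield a tuple with HEAP_XMAX_LOCK_ONLY and
+ * HEAP_XMAX_EXCL_LOCK set, with the other xmax lock bits and
+ * HEAP_KEYS_UPDATED cleared first.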
+ */ +static void +fix_infomask_from_infobits(uint8 infobits, uint16 *infomask, uint16 *infomask2) +{ + *infomask &= ~(HEAP_XMAX_IS_MULTI | HEAP_XMAX_LOCK_ONLY | + HEAP_XMAX_KEYSHR_LOCK | HEAP_XMAX_EXCL_LOCK); + *infomask2 &= ~HEAP_KEYS_UPDATED; + + if (infobits & XLHL_XMAX_IS_MULTI) + *infomask |= HEAP_XMAX_IS_MULTI; + if (infobits & XLHL_XMAX_LOCK_ONLY) + *infomask |= HEAP_XMAX_LOCK_ONLY; + if (infobits & XLHL_XMAX_EXCL_LOCK) + *infomask |= HEAP_XMAX_EXCL_LOCK; + /* note HEAP_XMAX_SHR_LOCK isn't considered here */ + if (infobits & XLHL_XMAX_KEYSHR_LOCK) + *infomask |= HEAP_XMAX_KEYSHR_LOCK; + + if (infobits & XLHL_KEYS_UPDATED) + *infomask2 |= HEAP_KEYS_UPDATED; +} + +static void +tdeheap_xlog_delete(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_delete *xlrec = (xl_tdeheap_delete *) XLogRecGetData(record); + Buffer buffer; + Page page; + ItemId lp = NULL; + HeapTupleHeader htup; + BlockNumber blkno; + RelFileLocator target_locator; + ItemPointerData target_tid; + + XLogRecGetBlockTag(record, 0, &target_locator, NULL, &blkno); + ItemPointerSetBlockNumber(&target_tid, blkno); + ItemPointerSetOffsetNumber(&target_tid, xlrec->offnum); + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. + */ + if (xlrec->flags & XLH_DELETE_ALL_VISIBLE_CLEARED) + { + Relation reln = CreateFakeRelcacheEntry(target_locator); + Buffer vmbuffer = InvalidBuffer; + + tdeheap_visibilitymap_pin(reln, blkno, &vmbuffer); + tdeheap_visibilitymap_clear(reln, blkno, vmbuffer, VISIBILITYMAP_VALID_BITS); + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + if (XLogReadBufferForRedo(record, 0, &buffer) == BLK_NEEDS_REDO) + { + page = BufferGetPage(buffer); + + if (PageGetMaxOffsetNumber(page) >= xlrec->offnum) + lp = PageGetItemId(page, xlrec->offnum); + + if (PageGetMaxOffsetNumber(page) < xlrec->offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + htup->t_infomask2 &= ~HEAP_KEYS_UPDATED; + HeapTupleHeaderClearHotUpdated(htup); + fix_infomask_from_infobits(xlrec->infobits_set, + &htup->t_infomask, &htup->t_infomask2); + if (!(xlrec->flags & XLH_DELETE_IS_SUPER)) + HeapTupleHeaderSetXmax(htup, xlrec->xmax); + else + HeapTupleHeaderSetXmin(htup, InvalidTransactionId); + HeapTupleHeaderSetCmax(htup, FirstCommandId, false); + + /* Mark the page as a candidate for pruning */ + PageSetPrunable(page, XLogRecGetXid(record)); + + if (xlrec->flags & XLH_DELETE_ALL_VISIBLE_CLEARED) + PageClearAllVisible(page); + + /* Make sure t_ctid is set correctly */ + if (xlrec->flags & XLH_DELETE_IS_PARTITION_MOVE) + HeapTupleHeaderSetMovedPartitions(htup); + else + htup->t_ctid = target_tid; + PageSetLSN(page, lsn); + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); +} + +static void +tdeheap_xlog_insert(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_insert *xlrec = (xl_tdeheap_insert *) XLogRecGetData(record); + Buffer buffer; + Page page; + union + { + HeapTupleHeaderData hdr; + char data[MaxHeapTupleSize]; + } tbuf; + HeapTupleHeader htup; + xl_tdeheap_header xlhdr; + uint32 newlen; + Size freespace = 0; + RelFileLocator target_locator; + BlockNumber blkno; + ItemPointerData target_tid; + XLogRedoAction action; + + XLogRecGetBlockTag(record, 0, &target_locator, NULL, &blkno); + ItemPointerSetBlockNumber(&target_tid, blkno); + ItemPointerSetOffsetNumber(&target_tid, 
xlrec->offnum); + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. + */ + if (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED) + { + Relation reln = CreateFakeRelcacheEntry(target_locator); + Buffer vmbuffer = InvalidBuffer; + + tdeheap_visibilitymap_pin(reln, blkno, &vmbuffer); + tdeheap_visibilitymap_clear(reln, blkno, vmbuffer, VISIBILITYMAP_VALID_BITS); + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + /* + * If we inserted the first and only tuple on the page, re-initialize the + * page from scratch. + */ + if (XLogRecGetInfo(record) & XLOG_HEAP_INIT_PAGE) + { + buffer = XLogInitBufferForRedo(record, 0); + page = BufferGetPage(buffer); + PageInit(page, BufferGetPageSize(buffer), 0); + action = BLK_NEEDS_REDO; + } + else + action = XLogReadBufferForRedo(record, 0, &buffer); + if (action == BLK_NEEDS_REDO) + { + Size datalen; + char *data; + + page = BufferGetPage(buffer); + + if (PageGetMaxOffsetNumber(page) + 1 < xlrec->offnum) + elog(PANIC, "invalid max offset number"); + + data = XLogRecGetBlockData(record, 0, &datalen); + + newlen = datalen - SizeOfHeapHeader; + Assert(datalen > SizeOfHeapHeader && newlen <= MaxHeapTupleSize); + memcpy((char *) &xlhdr, data, SizeOfHeapHeader); + data += SizeOfHeapHeader; + + htup = &tbuf.hdr; + MemSet((char *) htup, 0, SizeofHeapTupleHeader); + /* PG73FORMAT: get bitmap [+ padding] [+ oid] + data */ + memcpy((char *) htup + SizeofHeapTupleHeader, + data, + newlen); + newlen += SizeofHeapTupleHeader; + htup->t_infomask2 = xlhdr.t_infomask2; + htup->t_infomask = xlhdr.t_infomask; + htup->t_hoff = xlhdr.t_hoff; + HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record)); + HeapTupleHeaderSetCmin(htup, FirstCommandId); + htup->t_ctid = target_tid; + + if (TDE_PageAddItem(target_locator, blkno, page, (Item) htup, newlen, xlrec->offnum, + true, true) == InvalidOffsetNumber) + elog(PANIC, "failed to add tuple"); + + freespace = PageGetHeapFreeSpace(page); /* needed to update FSM below */ + + PageSetLSN(page, lsn); + + if (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED) + PageClearAllVisible(page); + + /* XLH_INSERT_ALL_FROZEN_SET implies that all tuples are visible */ + if (xlrec->flags & XLH_INSERT_ALL_FROZEN_SET) + PageSetAllVisible(page); + + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); + + /* + * If the page is running low on free space, update the FSM as well. + * Arbitrarily, our definition of "low" is less than 20%. We can't do much + * better than that without knowing the fill-factor for the table. + * + * XXX: Don't do this if the page was restored from full page image. We + * don't bother to update the FSM in that case, it doesn't need to be + * totally accurate anyway. 
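+	 *
+	 * With the default 8192-byte BLCKSZ, BLCKSZ / 5 works out to 1638 bytes
+	 * (integer division), so "low" means under roughly 1.6 kB of free space;
+	 * BLCKSZ is a compile-time setting, so the absolute threshold varies
+	 * with it.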
+ */ + if (action == BLK_NEEDS_REDO && freespace < BLCKSZ / 5) + XLogRecordPageWithFreeSpace(target_locator, blkno, freespace); +} + +/* + * Handles UPDATE and HOT_UPDATE + */ +static void +tdeheap_xlog_update(XLogReaderState *record, bool hot_update) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_update *xlrec = (xl_tdeheap_update *) XLogRecGetData(record); + RelFileLocator rlocator; + BlockNumber oldblk; + BlockNumber newblk; + ItemPointerData newtid; + Buffer obuffer, + nbuffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleData oldtup; + HeapTupleHeader htup; + uint16 prefixlen = 0, + suffixlen = 0; + char *newp; + union + { + HeapTupleHeaderData hdr; + char data[MaxHeapTupleSize]; + } tbuf; + xl_tdeheap_header xlhdr; + uint32 newlen; + Size freespace = 0; + XLogRedoAction oldaction; + XLogRedoAction newaction; + + /* initialize to keep the compiler quiet */ + oldtup.t_data = NULL; + oldtup.t_len = 0; + + XLogRecGetBlockTag(record, 0, &rlocator, NULL, &newblk); + if (XLogRecGetBlockTagExtended(record, 1, NULL, NULL, &oldblk, NULL)) + { + /* HOT updates are never done across pages */ + Assert(!hot_update); + } + else + oldblk = newblk; + + ItemPointerSet(&newtid, newblk, xlrec->new_offnum); + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. + */ + if (xlrec->flags & XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED) + { + Relation reln = CreateFakeRelcacheEntry(rlocator); + Buffer vmbuffer = InvalidBuffer; + + tdeheap_visibilitymap_pin(reln, oldblk, &vmbuffer); + tdeheap_visibilitymap_clear(reln, oldblk, vmbuffer, VISIBILITYMAP_VALID_BITS); + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + /* + * In normal operation, it is important to lock the two pages in + * page-number order, to avoid possible deadlocks against other update + * operations going the other way. However, during WAL replay there can + * be no other update happening, so we don't need to worry about that. But + * we *do* need to worry that we don't expose an inconsistent state to Hot + * Standby queries --- so the original page can't be unlocked before we've + * added the new tuple to the new page. + */ + + /* Deal with old tuple version */ + oldaction = XLogReadBufferForRedo(record, (oldblk == newblk) ? 0 : 1, + &obuffer); + if (oldaction == BLK_NEEDS_REDO) + { + page = BufferGetPage(obuffer); + offnum = xlrec->old_offnum; + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + oldtup.t_data = htup; + oldtup.t_len = ItemIdGetLength(lp); + + htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + htup->t_infomask2 &= ~HEAP_KEYS_UPDATED; + if (hot_update) + HeapTupleHeaderSetHotUpdated(htup); + else + HeapTupleHeaderClearHotUpdated(htup); + fix_infomask_from_infobits(xlrec->old_infobits_set, &htup->t_infomask, + &htup->t_infomask2); + HeapTupleHeaderSetXmax(htup, xlrec->old_xmax); + HeapTupleHeaderSetCmax(htup, FirstCommandId, false); + /* Set forward chain link in t_ctid */ + htup->t_ctid = newtid; + + /* Mark the page as a candidate for pruning */ + PageSetPrunable(page, XLogRecGetXid(record)); + + if (xlrec->flags & XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED) + PageClearAllVisible(page); + + PageSetLSN(page, lsn); + MarkBufferDirty(obuffer); + } + + /* + * Read the page the new tuple goes into, if different from old. 
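+	 * Three cases follow: the new tuple goes to the same block (reuse the
+	 * old buffer and redo action), the record carries XLOG_HEAP_INIT_PAGE
+	 * (re-initialize the page from scratch), or an ordinary buffer read for
+	 * redo is performed.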
+ */ + if (oldblk == newblk) + { + nbuffer = obuffer; + newaction = oldaction; + } + else if (XLogRecGetInfo(record) & XLOG_HEAP_INIT_PAGE) + { + nbuffer = XLogInitBufferForRedo(record, 0); + page = (Page) BufferGetPage(nbuffer); + PageInit(page, BufferGetPageSize(nbuffer), 0); + newaction = BLK_NEEDS_REDO; + } + else + newaction = XLogReadBufferForRedo(record, 0, &nbuffer); + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. + */ + if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED) + { + Relation reln = CreateFakeRelcacheEntry(rlocator); + Buffer vmbuffer = InvalidBuffer; + + tdeheap_visibilitymap_pin(reln, newblk, &vmbuffer); + tdeheap_visibilitymap_clear(reln, newblk, vmbuffer, VISIBILITYMAP_VALID_BITS); + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + /* Deal with new tuple */ + if (newaction == BLK_NEEDS_REDO) + { + char *recdata; + char *recdata_end; + Size datalen; + Size tuplen; + + recdata = XLogRecGetBlockData(record, 0, &datalen); + recdata_end = recdata + datalen; + + page = BufferGetPage(nbuffer); + + offnum = xlrec->new_offnum; + if (PageGetMaxOffsetNumber(page) + 1 < offnum) + elog(PANIC, "invalid max offset number"); + + if (xlrec->flags & XLH_UPDATE_PREFIX_FROM_OLD) + { + Assert(newblk == oldblk); + memcpy(&prefixlen, recdata, sizeof(uint16)); + recdata += sizeof(uint16); + } + if (xlrec->flags & XLH_UPDATE_SUFFIX_FROM_OLD) + { + Assert(newblk == oldblk); + memcpy(&suffixlen, recdata, sizeof(uint16)); + recdata += sizeof(uint16); + } + + memcpy((char *) &xlhdr, recdata, SizeOfHeapHeader); + recdata += SizeOfHeapHeader; + + tuplen = recdata_end - recdata; + Assert(tuplen <= MaxHeapTupleSize); + + htup = &tbuf.hdr; + MemSet((char *) htup, 0, SizeofHeapTupleHeader); + + /* + * Reconstruct the new tuple using the prefix and/or suffix from the + * old tuple, and the data stored in the WAL record. 
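+		 *
+		 * Illustrative example: if the old tuple data was "AAAAxxxxZZZZ" and
+		 * the new data is "AAAAyyyyZZZZ", the record carries prefixlen = 4,
+		 * suffixlen = 4, and only the middle bytes "yyyy"; the shared prefix
+		 * and suffix bytes are copied from the old tuple version below.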
+ */ + newp = (char *) htup + SizeofHeapTupleHeader; + if (prefixlen > 0) + { + int len; + + /* copy bitmap [+ padding] [+ oid] from WAL record */ + len = xlhdr.t_hoff - SizeofHeapTupleHeader; + memcpy(newp, recdata, len); + recdata += len; + newp += len; + + /* copy prefix from old tuple */ + memcpy(newp, (char *) oldtup.t_data + oldtup.t_data->t_hoff, prefixlen); + newp += prefixlen; + + /* copy new tuple data from WAL record */ + len = tuplen - (xlhdr.t_hoff - SizeofHeapTupleHeader); + memcpy(newp, recdata, len); + recdata += len; + newp += len; + } + else + { + /* + * copy bitmap [+ padding] [+ oid] + data from record, all in one + * go + */ + memcpy(newp, recdata, tuplen); + recdata += tuplen; + newp += tuplen; + } + Assert(recdata == recdata_end); + + /* copy suffix from old tuple */ + if (suffixlen > 0) + memcpy(newp, (char *) oldtup.t_data + oldtup.t_len - suffixlen, suffixlen); + + newlen = SizeofHeapTupleHeader + tuplen + prefixlen + suffixlen; + htup->t_infomask2 = xlhdr.t_infomask2; + htup->t_infomask = xlhdr.t_infomask; + htup->t_hoff = xlhdr.t_hoff; + + HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record)); + HeapTupleHeaderSetCmin(htup, FirstCommandId); + HeapTupleHeaderSetXmax(htup, xlrec->new_xmax); + /* Make sure there is no forward chain link in t_ctid */ + htup->t_ctid = newtid; + + offnum = TDE_PageAddItem(rlocator, newblk, page, (Item) htup, newlen, offnum, true, true); + if (offnum == InvalidOffsetNumber) + elog(PANIC, "failed to add tuple"); + + if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED) + PageClearAllVisible(page); + + freespace = PageGetHeapFreeSpace(page); /* needed to update FSM below */ + + PageSetLSN(page, lsn); + MarkBufferDirty(nbuffer); + } + + if (BufferIsValid(nbuffer) && nbuffer != obuffer) + UnlockReleaseBuffer(nbuffer); + if (BufferIsValid(obuffer)) + UnlockReleaseBuffer(obuffer); + + /* + * If the new page is running low on free space, update the FSM as well. + * Arbitrarily, our definition of "low" is less than 20%. We can't do much + * better than that without knowing the fill-factor for the table. + * + * However, don't update the FSM on HOT updates, because after crash + * recovery, either the old or the new tuple will certainly be dead and + * prunable. After pruning, the page will have roughly as much free space + * as it did before the update, assuming the new tuple is about the same + * size as the old one. + * + * XXX: Don't do this if the page was restored from full page image. We + * don't bother to update the FSM in that case, it doesn't need to be + * totally accurate anyway. 
+ */ + if (newaction == BLK_NEEDS_REDO && !hot_update && freespace < BLCKSZ / 5) + XLogRecordPageWithFreeSpace(rlocator, newblk, freespace); +} + +static void +tdeheap_xlog_confirm(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_confirm *xlrec = (xl_tdeheap_confirm *) XLogRecGetData(record); + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + + if (XLogReadBufferForRedo(record, 0, &buffer) == BLK_NEEDS_REDO) + { + page = BufferGetPage(buffer); + + offnum = xlrec->offnum; + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* + * Confirm tuple as actually inserted + */ + ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum); + + PageSetLSN(page, lsn); + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); +} + +static void +tdeheap_xlog_lock(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_lock *xlrec = (xl_tdeheap_lock *) XLogRecGetData(record); + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. + */ + if (xlrec->flags & XLH_LOCK_ALL_FROZEN_CLEARED) + { + RelFileLocator rlocator; + Buffer vmbuffer = InvalidBuffer; + BlockNumber block; + Relation reln; + + XLogRecGetBlockTag(record, 0, &rlocator, NULL, &block); + reln = CreateFakeRelcacheEntry(rlocator); + + tdeheap_visibilitymap_pin(reln, block, &vmbuffer); + tdeheap_visibilitymap_clear(reln, block, vmbuffer, VISIBILITYMAP_ALL_FROZEN); + + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + if (XLogReadBufferForRedo(record, 0, &buffer) == BLK_NEEDS_REDO) + { + page = (Page) BufferGetPage(buffer); + + offnum = xlrec->offnum; + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + htup->t_infomask2 &= ~HEAP_KEYS_UPDATED; + fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask, + &htup->t_infomask2); + + /* + * Clear relevant update flags, but only if the modified infomask says + * there's no update. 
+ */ + if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask)) + { + HeapTupleHeaderClearHotUpdated(htup); + /* Make sure there is no forward chain link in t_ctid */ + ItemPointerSet(&htup->t_ctid, + BufferGetBlockNumber(buffer), + offnum); + } + HeapTupleHeaderSetXmax(htup, xlrec->xmax); + HeapTupleHeaderSetCmax(htup, FirstCommandId, false); + PageSetLSN(page, lsn); + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); +} + +static void +tdeheap_xlog_inplace(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_inplace *xlrec = (xl_tdeheap_inplace *) XLogRecGetData(record); + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + uint32 oldlen; + Size newlen; + + if (XLogReadBufferForRedo(record, 0, &buffer) == BLK_NEEDS_REDO) + { + char *newtup = XLogRecGetBlockData(record, 0, &newlen); + + page = BufferGetPage(buffer); + + offnum = xlrec->offnum; + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + oldlen = ItemIdGetLength(lp) - htup->t_hoff; + if (oldlen != newlen) + elog(PANIC, "wrong tuple length"); + + memcpy((char *) htup + htup->t_hoff, newtup, newlen); + + PageSetLSN(page, lsn); + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); +} + +void +tdeheap_redo(XLogReaderState *record) +{ + uint8 info = XLogRecGetInfo(record) & ~XLR_INFO_MASK; + + /* + * These operations don't overwrite MVCC data so no conflict processing is + * required. The ones in heap2 rmgr do. + */ + + switch (info & XLOG_HEAP_OPMASK) + { + case XLOG_HEAP_INSERT: + tdeheap_xlog_insert(record); + break; + case XLOG_HEAP_DELETE: + tdeheap_xlog_delete(record); + break; + case XLOG_HEAP_UPDATE: + tdeheap_xlog_update(record, false); + break; + case XLOG_HEAP_TRUNCATE: + + /* + * TRUNCATE is a no-op because the actions are already logged as + * SMGR WAL records. TRUNCATE WAL record only exists for logical + * decoding. + */ + break; + case XLOG_HEAP_HOT_UPDATE: + tdeheap_xlog_update(record, true); + break; + case XLOG_HEAP_CONFIRM: + tdeheap_xlog_confirm(record); + break; + case XLOG_HEAP_LOCK: + tdeheap_xlog_lock(record); + break; + case XLOG_HEAP_INPLACE: + tdeheap_xlog_inplace(record); + break; + default: + elog(PANIC, "pg_tde_redo: unknown op code %u", info); + } +} + +/* + * Mask a heap page before performing consistency checks on it. + */ +void +tdeheap_mask(char *pagedata, BlockNumber blkno) +{ + Page page = (Page) pagedata; + OffsetNumber off; + + mask_page_lsn_and_checksum(page); + + mask_page_hint_bits(page); + mask_unused_space(page); + + for (off = 1; off <= PageGetMaxOffsetNumber(page); off++) + { + ItemId iid = PageGetItemId(page, off); + char *page_item; + + page_item = (char *) (page + ItemIdGetOffset(iid)); + + if (ItemIdIsNormal(iid)) + { + HeapTupleHeader page_htup = (HeapTupleHeader) page_item; + + /* + * If xmin of a tuple is not yet frozen, we should ignore + * differences in hint bits, since they can be set without + * emitting WAL. + */ + if (!HeapTupleHeaderXminFrozen(page_htup)) + page_htup->t_infomask &= ~HEAP_XACT_MASK; + else + { + /* Still we need to mask xmax hint bits. */ + page_htup->t_infomask &= ~HEAP_XMAX_INVALID; + page_htup->t_infomask &= ~HEAP_XMAX_COMMITTED; + } + + /* + * During replay, we set Command Id to FirstCommandId. Hence, mask + * it. 
See tdeheap_xlog_insert() for details. + */ + page_htup->t_choice.t_heap.t_field3.t_cid = MASK_MARKER; + + /* + * For a speculative tuple, tdeheap_insert() does not set ctid in the + * caller-passed heap tuple itself, leaving the ctid field to + * contain a speculative token value - a per-backend monotonically + * increasing identifier. Besides, it does not WAL-log ctid under + * any circumstances. + * + * During redo, tdeheap_xlog_insert() sets t_ctid to current block + * number and self offset number. It doesn't care about any + * speculative insertions on the primary. Hence, we set t_ctid to + * current block number and self offset number to ignore any + * inconsistency. + */ + if (HeapTupleHeaderIsSpeculative(page_htup)) + ItemPointerSet(&page_htup->t_ctid, blkno, off); + + /* + * NB: Not ignoring ctid changes due to the tuple having moved + * (i.e. HeapTupleHeaderIndicatesMovedPartitions), because that's + * important information that needs to be in-sync between primary + * and standby, and thus is WAL logged. + */ + } + + /* + * Ignore any padding bytes after the tuple, when the length of the + * item is not MAXALIGNed. + */ + if (ItemIdHasStorage(iid)) + { + int len = ItemIdGetLength(iid); + int padlen = MAXALIGN(len) - len; + + if (padlen > 0) + memset(page_item + len, MASK_MARKER, padlen); + } + } +} + +/* + * HeapCheckForSerializableConflictOut + * We are reading a tuple. If it's not visible, there may be a + * rw-conflict out with the inserter. Otherwise, if it is visible to us + * but has been deleted, there may be a rw-conflict out with the deleter. + * + * We will determine the top level xid of the writing transaction with which + * we may be in conflict, and ask CheckForSerializableConflictOut() to check + * for overlap with our own transaction. + * + * This function should be called just about anywhere in heapam.c where a + * tuple has been read. The caller must hold at least a shared lock on the + * buffer, because this function might set hint bits on the tuple. There is + * currently no known reason to call this function from an index AM. + */ +void +HeapCheckForSerializableConflictOut(bool visible, Relation relation, + HeapTuple tuple, Buffer buffer, + Snapshot snapshot) +{ + TransactionId xid; + HTSV_Result htsvResult; + + if (!CheckForSerializableConflictOutNeeded(relation, snapshot)) + return; + + /* + * Check to see whether the tuple has been written to by a concurrent + * transaction, either to create it not visible to us, or to delete it + * while it is visible to us. The "visible" bool indicates whether the + * tuple is visible to us, while HeapTupleSatisfiesVacuum checks what else + * is going on with it. + * + * In the event of a concurrently inserted tuple that also happens to have + * been concurrently updated (by a separate transaction), the xmin of the + * tuple will be used -- not the updater's xid. 
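+	 *
+	 * Concretely (illustrative scenario): if transaction T1 inserts a row
+	 * and T2 then updates it, a reader that cannot see the row reports a
+	 * conflict with the inserter T1 via xmin, while a reader that can see
+	 * the row but finds it deleted reports the deleter via the update xid.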
+	 */
+	htsvResult = HeapTupleSatisfiesVacuum(tuple, TransactionXmin, buffer);
+	switch (htsvResult)
+	{
+		case HEAPTUPLE_LIVE:
+			if (visible)
+				return;
+			xid = HeapTupleHeaderGetXmin(tuple->t_data);
+			break;
+		case HEAPTUPLE_RECENTLY_DEAD:
+		case HEAPTUPLE_DELETE_IN_PROGRESS:
+			if (visible)
+				xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
+			else
+				xid = HeapTupleHeaderGetXmin(tuple->t_data);
+
+			if (TransactionIdPrecedes(xid, TransactionXmin))
+			{
+				/* This is like the HEAPTUPLE_DEAD case */
+				Assert(!visible);
+				return;
+			}
+			break;
+		case HEAPTUPLE_INSERT_IN_PROGRESS:
+			xid = HeapTupleHeaderGetXmin(tuple->t_data);
+			break;
+		case HEAPTUPLE_DEAD:
+			Assert(!visible);
+			return;
+		default:
+
+			/*
+			 * The only way to get to this default clause is if a new value
+			 * is added to the enum type without adding it to this switch
+			 * statement. That's a bug, so elog.
+			 */
+			elog(ERROR, "unrecognized return value from HeapTupleSatisfiesVacuum: %u", htsvResult);
+
+			/*
+			 * In spite of having all enum values covered and calling elog on
+			 * this default, some compilers think this is a code path which
+			 * allows xid to be used below without initialization. Silence
+			 * that warning.
+			 */
+			xid = InvalidTransactionId;
+	}
+
+	Assert(TransactionIdIsValid(xid));
+	Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));
+
+	/*
+	 * Find top level xid.  Bail out if xid is too early to be a conflict, or
+	 * if it's our own xid.
+	 */
+	if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
+		return;
+	xid = SubTransGetTopmostTransaction(xid);
+	if (TransactionIdPrecedes(xid, TransactionXmin))
+		return;
+
+	CheckForSerializableConflictOut(relation, xid, snapshot);
+}
diff --git a/contrib/pg_tde/src16/access/pg_tdeam_handler.c b/contrib/pg_tde/src16/access/pg_tdeam_handler.c
new file mode 100644
index 00000000000..e7db6daf78f
--- /dev/null
+++ b/contrib/pg_tde/src16/access/pg_tdeam_handler.c
@@ -0,0 +1,2672 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tdeam_handler.c
+ *	  heap table access method code
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/access/heap/pg_tdeam_handler.c
+ *
+ *
+ * NOTES
+ *	  This file wires up the lower level heapam.c et al routines with the
+ *	  tableam abstraction.
+ * + *------------------------------------------------------------------------- + */ + +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tde_slot.h" + +#include "access/pg_tdeam.h" +#include "access/pg_tdetoast.h" +#include "access/pg_tde_rewrite.h" +#include "access/pg_tde_tdemap.h" + +#include "encryption/enc_tde.h" + +#include "access/genam.h" +#include "access/multixact.h" +#include "access/syncscan.h" +#include "access/tableam.h" +#include "access/tsmapi.h" +#include "access/xact.h" +#include "catalog/catalog.h" +#include "catalog/index.h" +#include "catalog/storage.h" +#include "catalog/storage_xlog.h" +#include "commands/progress.h" +#include "executor/executor.h" +#include "miscadmin.h" +#include "pgstat.h" +#include "storage/bufmgr.h" +#include "storage/bufpage.h" +#include "storage/lmgr.h" +#include "storage/predicate.h" +#include "storage/procarray.h" +#include "storage/smgr.h" +#include "utils/builtins.h" +#include "utils/rel.h" + +PG_FUNCTION_INFO_V1(pg_tdeam_basic_handler); +#ifdef PERCONA_EXT +PG_FUNCTION_INFO_V1(pg_tdeam_handler); +#endif + + +static void reform_and_rewrite_tuple(HeapTuple tuple, + Relation OldHeap, Relation NewHeap, + Datum *values, bool *isnull, RewriteState rwstate); + +static bool SampleHeapTupleVisible(TableScanDesc scan, Buffer buffer, + HeapTuple tuple, + OffsetNumber tupoffset); + +static BlockNumber pg_tdeam_scan_get_blocks_done(HeapScanDesc hscan); + +static const TableAmRoutine pg_tdeam_methods; + + +/* ------------------------------------------------------------------------ + * Slot related callbacks for heap AM + * ------------------------------------------------------------------------ + */ + +static const TupleTableSlotOps * +pg_tdeam_slot_callbacks(Relation relation) +{ + return &TTSOpsTDEBufferHeapTuple; +} + + +/* ------------------------------------------------------------------------ + * Index Scan Callbacks for heap AM + * ------------------------------------------------------------------------ + */ + +static IndexFetchTableData * +pg_tdeam_index_fetch_begin(Relation rel) +{ + IndexFetchHeapData *hscan = palloc0(sizeof(IndexFetchHeapData)); + + hscan->xs_base.rel = rel; + hscan->xs_cbuf = InvalidBuffer; + + return &hscan->xs_base; +} + +static void +pg_tdeam_index_fetch_reset(IndexFetchTableData *scan) +{ + IndexFetchHeapData *hscan = (IndexFetchHeapData *) scan; + + if (BufferIsValid(hscan->xs_cbuf)) + { + ReleaseBuffer(hscan->xs_cbuf); + hscan->xs_cbuf = InvalidBuffer; + } +} + +static void +pg_tdeam_index_fetch_end(IndexFetchTableData *scan) +{ + IndexFetchHeapData *hscan = (IndexFetchHeapData *) scan; + + pg_tdeam_index_fetch_reset(scan); + + pfree(hscan); +} + +static bool +pg_tdeam_index_fetch_tuple(struct IndexFetchTableData *scan, + ItemPointer tid, + Snapshot snapshot, + TupleTableSlot *slot, + bool *call_again, bool *all_dead) +{ + IndexFetchHeapData *hscan = (IndexFetchHeapData *) scan; + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot; + bool got_tdeheap_tuple; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + + /* We can skip the buffer-switching logic if we're in mid-HOT chain. 
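+	 * (call_again is true on the second and later calls for the same index
+	 * entry, so the buffer pinned on the first call is still the right one.)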
*/ + if (!*call_again) + { + /* Switch to correct buffer if we don't have it already */ + Buffer prev_buf = hscan->xs_cbuf; + + hscan->xs_cbuf = ReleaseAndReadBuffer(hscan->xs_cbuf, + hscan->xs_base.rel, + ItemPointerGetBlockNumber(tid)); + + /* + * Prune page, but only if we weren't already on this page + */ + if (prev_buf != hscan->xs_cbuf) + tdeheap_page_prune_opt(hscan->xs_base.rel, hscan->xs_cbuf); + } + + /* Obtain share-lock on the buffer so we can examine visibility */ + LockBuffer(hscan->xs_cbuf, BUFFER_LOCK_SHARE); + got_tdeheap_tuple = tdeheap_hot_search_buffer(tid, + hscan->xs_base.rel, + hscan->xs_cbuf, + snapshot, + &bslot->base.tupdata, + all_dead, + !*call_again); + bslot->base.tupdata.t_self = *tid; + LockBuffer(hscan->xs_cbuf, BUFFER_LOCK_UNLOCK); + + if (got_tdeheap_tuple) + { + /* + * Only in a non-MVCC snapshot can more than one member of the HOT + * chain be visible. + */ + *call_again = !IsMVCCSnapshot(snapshot); + + slot->tts_tableOid = RelationGetRelid(scan->rel); + PGTdeExecStoreBufferHeapTuple(scan->rel, &bslot->base.tupdata, slot, hscan->xs_cbuf); + } + else + { + /* We've reached the end of the HOT chain. */ + *call_again = false; + } + + return got_tdeheap_tuple; +} + + +/* ------------------------------------------------------------------------ + * Callbacks for non-modifying operations on individual tuples for heap AM + * ------------------------------------------------------------------------ + */ + +static bool +pg_tdeam_fetch_row_version(Relation relation, + ItemPointer tid, + Snapshot snapshot, + TupleTableSlot *slot) +{ + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot; + Buffer buffer; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + + bslot->base.tupdata.t_self = *tid; + if (tdeheap_fetch(relation, snapshot, &bslot->base.tupdata, &buffer, false)) + { + /* store in slot, transferring existing pin */ + PGTdeExecStorePinnedBufferHeapTuple(relation, &bslot->base.tupdata, slot, buffer); + slot->tts_tableOid = RelationGetRelid(relation); + + return true; + } + + return false; +} + +static bool +pg_tdeam_tuple_tid_valid(TableScanDesc scan, ItemPointer tid) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + + return ItemPointerIsValid(tid) && + ItemPointerGetBlockNumber(tid) < hscan->rs_nblocks; +} + +static bool +pg_tdeam_tuple_satisfies_snapshot(Relation rel, TupleTableSlot *slot, + Snapshot snapshot) +{ + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot; + bool res; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + Assert(BufferIsValid(bslot->buffer)); + + /* + * We need buffer pin and lock to call HeapTupleSatisfiesVisibility. + * Caller should be holding pin, but not lock. + */ + LockBuffer(bslot->buffer, BUFFER_LOCK_SHARE); + res = HeapTupleSatisfiesVisibility(bslot->base.tuple, snapshot, + bslot->buffer); + LockBuffer(bslot->buffer, BUFFER_LOCK_UNLOCK); + + return res; +} + + +/* ---------------------------------------------------------------------------- + * Functions for manipulations of physical tuples for heap AM. 
+ * ----------------------------------------------------------------------------
+ */
+
+static void
+pg_tdeam_tuple_insert(Relation relation, TupleTableSlot *slot, CommandId cid,
+					  int options, BulkInsertState bistate)
+{
+	bool		shouldFree = true;
+	HeapTuple	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
+
+	/* Update the tuple with table oid */
+	slot->tts_tableOid = RelationGetRelid(relation);
+	tuple->t_tableOid = slot->tts_tableOid;
+
+	/* Perform the insertion, and copy the resulting ItemPointer */
+	tdeheap_insert(relation, tuple, cid, options, bistate);
+	ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
+
+	if (shouldFree)
+		pfree(tuple);
+}
+
+static void
+pg_tdeam_tuple_insert_speculative(Relation relation, TupleTableSlot *slot,
+								  CommandId cid, int options,
+								  BulkInsertState bistate, uint32 specToken)
+{
+	bool		shouldFree = true;
+	HeapTuple	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
+
+	/* Update the tuple with table oid */
+	slot->tts_tableOid = RelationGetRelid(relation);
+	tuple->t_tableOid = slot->tts_tableOid;
+
+	HeapTupleHeaderSetSpeculativeToken(tuple->t_data, specToken);
+	options |= HEAP_INSERT_SPECULATIVE;
+
+	/* Perform the insertion, and copy the resulting ItemPointer */
+	tdeheap_insert(relation, tuple, cid, options, bistate);
+	ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
+
+	if (shouldFree)
+		pfree(tuple);
+}
+
+static void
+pg_tdeam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot,
+									uint32 specToken, bool succeeded)
+{
+	bool		shouldFree = true;
+	HeapTuple	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
+
+	/* adjust the tuple's state accordingly */
+	if (succeeded)
+		tdeheap_finish_speculative(relation, &slot->tts_tid);
+	else
+		tdeheap_abort_speculative(relation, &slot->tts_tid);
+
+	if (shouldFree)
+		pfree(tuple);
+}
+
+static TM_Result
+pg_tdeam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid,
+					  Snapshot snapshot, Snapshot crosscheck, bool wait,
+					  TM_FailureData *tmfd, bool changingPart)
+{
+	/*
+	 * Currently, deletion of index tuples is handled at VACUUM time.  If the
+	 * storage itself were to clean up dead tuples on its own, that would
+	 * also be the time to delete the corresponding index tuples.
+	 */
+	return tdeheap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);
+}
+
+
+static TM_Result
+pg_tdeam_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot,
+					  CommandId cid, Snapshot snapshot, Snapshot crosscheck,
+					  bool wait, TM_FailureData *tmfd,
+					  LockTupleMode *lockmode, TU_UpdateIndexes *update_indexes)
+{
+	bool		shouldFree = true;
+	HeapTuple	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
+	TM_Result	result;
+
+	/* Update the tuple with table oid */
+	slot->tts_tableOid = RelationGetRelid(relation);
+	tuple->t_tableOid = slot->tts_tableOid;
+
+	result = tdeheap_update(relation, otid, tuple, cid, crosscheck, wait,
+							tmfd, lockmode, update_indexes);
+	ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
+
+	/*
+	 * Decide whether new index entries are needed for the tuple
+	 *
+	 * Note: tdeheap_update returns the tid (location) of the new tuple in
+	 * the t_self field.
+	 *
+	 * If the update is not HOT, we must update all indexes. If the update is
+	 * HOT, it could be that we updated summarized columns, so we either
+	 * update only summarized indexes, or none at all.
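+	 *
+	 * For example (illustrative): a HOT update that changed only a column
+	 * covered by a summarizing index such as BRIN arrives here with
+	 * *update_indexes == TU_Summarizing, whereas a non-HOT update must
+	 * arrive with TU_All.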
+ */ + if (result != TM_Ok) + { + Assert(*update_indexes == TU_None); + *update_indexes = TU_None; + } + else if (!HeapTupleIsHeapOnly(tuple)) + Assert(*update_indexes == TU_All); + else + Assert((*update_indexes == TU_Summarizing) || + (*update_indexes == TU_None)); + + if (shouldFree) + pfree(tuple); + + return result; +} + +static TM_Result +pg_tdeam_tuple_lock(Relation relation, ItemPointer tid, Snapshot snapshot, + TupleTableSlot *slot, CommandId cid, LockTupleMode mode, + LockWaitPolicy wait_policy, uint8 flags, + TM_FailureData *tmfd) +{ + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot; + TM_Result result; + Buffer buffer; + HeapTuple tuple = &bslot->base.tupdata; + bool follow_updates; + + follow_updates = (flags & TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS) != 0; + tmfd->traversed = false; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + +tuple_lock_retry: + tuple->t_self = *tid; + result = tdeheap_lock_tuple(relation, tuple, cid, mode, wait_policy, + follow_updates, &buffer, tmfd); + + if (result == TM_Updated && + (flags & TUPLE_LOCK_FLAG_FIND_LAST_VERSION)) + { + /* Should not encounter speculative tuple on recheck */ + Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data)); + + ReleaseBuffer(buffer); + + if (!ItemPointerEquals(&tmfd->ctid, &tuple->t_self)) + { + SnapshotData SnapshotDirty; + TransactionId priorXmax; + + /* it was updated, so look at the updated version */ + *tid = tmfd->ctid; + /* updated row should have xmin matching this xmax */ + priorXmax = tmfd->xmax; + + /* signal that a tuple later in the chain is getting locked */ + tmfd->traversed = true; + + /* + * fetch target tuple + * + * Loop here to deal with updated or busy tuples + */ + InitDirtySnapshot(SnapshotDirty); + for (;;) + { + if (ItemPointerIndicatesMovedPartitions(tid)) + ereport(ERROR, + (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE), + errmsg("tuple to be locked was already moved to another partition due to concurrent update"))); + + tuple->t_self = *tid; + if (tdeheap_fetch(relation, &SnapshotDirty, tuple, &buffer, true)) + { + /* + * If xmin isn't what we're expecting, the slot must have + * been recycled and reused for an unrelated tuple. This + * implies that the latest version of the row was deleted, + * so we need do nothing. (Should be safe to examine xmin + * without getting buffer's content lock. We assume + * reading a TransactionId to be atomic, and Xmin never + * changes in an existing tuple, except to invalid or + * frozen, and neither of those can match priorXmax.) + */ + if (!TransactionIdEquals(HeapTupleHeaderGetXmin(tuple->t_data), + priorXmax)) + { + ReleaseBuffer(buffer); + return TM_Deleted; + } + + /* otherwise xmin should not be dirty... */ + if (TransactionIdIsValid(SnapshotDirty.xmin)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("t_xmin %u is uncommitted in tuple (%u,%u) to be updated in table \"%s\"", + SnapshotDirty.xmin, + ItemPointerGetBlockNumber(&tuple->t_self), + ItemPointerGetOffsetNumber(&tuple->t_self), + RelationGetRelationName(relation)))); + + /* + * If tuple is being updated by other transaction then we + * have to wait for its commit/abort, or die trying. 
+ */ + if (TransactionIdIsValid(SnapshotDirty.xmax)) + { + ReleaseBuffer(buffer); + switch (wait_policy) + { + case LockWaitBlock: + XactLockTableWait(SnapshotDirty.xmax, + relation, &tuple->t_self, + XLTW_FetchUpdated); + break; + case LockWaitSkip: + if (!ConditionalXactLockTableWait(SnapshotDirty.xmax)) + /* skip instead of waiting */ + return TM_WouldBlock; + break; + case LockWaitError: + if (!ConditionalXactLockTableWait(SnapshotDirty.xmax)) + ereport(ERROR, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("could not obtain lock on row in relation \"%s\"", + RelationGetRelationName(relation)))); + break; + } + continue; /* loop back to repeat tdeheap_fetch */ + } + + /* + * If tuple was inserted by our own transaction, we have + * to check cmin against cid: cmin >= current CID means + * our command cannot see the tuple, so we should ignore + * it. Otherwise tdeheap_lock_tuple() will throw an error, + * and so would any later attempt to update or delete the + * tuple. (We need not check cmax because + * HeapTupleSatisfiesDirty will consider a tuple deleted + * by our transaction dead, regardless of cmax.) We just + * checked that priorXmax == xmin, so we can test that + * variable instead of doing HeapTupleHeaderGetXmin again. + */ + if (TransactionIdIsCurrentTransactionId(priorXmax) && + HeapTupleHeaderGetCmin(tuple->t_data) >= cid) + { + tmfd->xmax = priorXmax; + + /* + * Cmin is the problematic value, so store that. See + * above. + */ + tmfd->cmax = HeapTupleHeaderGetCmin(tuple->t_data); + ReleaseBuffer(buffer); + return TM_SelfModified; + } + + /* + * This is a live tuple, so try to lock it again. + */ + ReleaseBuffer(buffer); + goto tuple_lock_retry; + } + + /* + * If the referenced slot was actually empty, the latest + * version of the row must have been deleted, so we need do + * nothing. + */ + if (tuple->t_data == NULL) + { + Assert(!BufferIsValid(buffer)); + return TM_Deleted; + } + + /* + * As above, if xmin isn't what we're expecting, do nothing. + */ + if (!TransactionIdEquals(HeapTupleHeaderGetXmin(tuple->t_data), + priorXmax)) + { + ReleaseBuffer(buffer); + return TM_Deleted; + } + + /* + * If we get here, the tuple was found but failed + * SnapshotDirty. Assuming the xmin is either a committed xact + * or our own xact (as it certainly should be if we're trying + * to modify the tuple), this must mean that the row was + * updated or deleted by either a committed xact or our own + * xact. If it was deleted, we can ignore it; if it was + * updated then chain up to the next version and repeat the + * whole process. + * + * As above, it should be safe to examine xmax and t_ctid + * without the buffer content lock, because they can't be + * changing. We'd better hold a buffer pin though. 
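+ *
+ * (Reminder of the heap convention relied on below: a tuple at the end
+ * of its update chain has t_ctid pointing at its own t_self, so
+ * t_self == t_ctid means the row was deleted rather than updated.)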
+ */ + if (ItemPointerEquals(&tuple->t_self, &tuple->t_data->t_ctid)) + { + /* deleted, so forget about it */ + ReleaseBuffer(buffer); + return TM_Deleted; + } + + /* updated, so look at the updated row */ + *tid = tuple->t_data->t_ctid; + /* updated row should have xmin matching this xmax */ + priorXmax = HeapTupleHeaderGetUpdateXid(tuple->t_data); + ReleaseBuffer(buffer); + /* loop back to fetch next in chain */ + } + } + else + { + /* tuple was deleted, so give up */ + return TM_Deleted; + } + } + + slot->tts_tableOid = RelationGetRelid(relation); + tuple->t_tableOid = slot->tts_tableOid; + + /* store in slot, transferring existing pin */ + PGTdeExecStorePinnedBufferHeapTuple(relation, tuple, slot, buffer); + + return result; +} + + +/* ------------------------------------------------------------------------ + * DDL related callbacks for heap AM. + * ------------------------------------------------------------------------ + */ + +static void +pg_tdeam_relation_set_new_filelocator(Relation rel, + const RelFileLocator *newrlocator, + char persistence, + TransactionId *freezeXid, + MultiXactId *minmulti) +{ + SMgrRelation srel; +#ifdef PERCONA_EXT + RelFileLocator oldlocator = rel->rd_locator; +#endif + + /* + * Initialize to the minimum XID that could put tuples in the table. We + * know that no xacts older than RecentXmin are still running, so that + * will do. + */ + *freezeXid = RecentXmin; + + /* + * Similarly, initialize the minimum Multixact to the first value that + * could possibly be stored in tuples in the table. Running transactions + * could reuse values from their local cache, so we are careful to + * consider all currently running multis. + * + * XXX this could be refined further, but is it worth the hassle? + */ + *minmulti = GetOldestMultiXactId(); + +#ifdef PERCONA_EXT + srel = RelationCreateStorage(oldlocator, *newrlocator, persistence, true); +#else + srel = RelationCreateStorage(*newrlocator, persistence, true); +#endif + + /* + * If required, set up an init fork for an unlogged table so that it can + * be correctly reinitialized on restart. An immediate sync is required + * even if the page has been logged, because the write did not go through + * shared_buffers and therefore a concurrent checkpoint may have moved the + * redo pointer past our xlog record. Recovery may as well remove it + * while replaying, for example, XLOG_DBASE_CREATE* or XLOG_TBLSPC_CREATE + * record. Therefore, logging is necessary even if wal_level=minimal. 
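+ *
+ * For example (a sketch; this assumes the access method is registered
+ * under the name tde_heap):
+ *
+ * CREATE UNLOGGED TABLE t (...) USING tde_heap;
+ *
+ * must leave behind an init fork that survives a crash, so recovery can
+ * reset the table to empty rather than leave torn pages behind.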
+ */ + if (persistence == RELPERSISTENCE_UNLOGGED) + { + Assert(rel->rd_rel->relkind == RELKIND_RELATION || + rel->rd_rel->relkind == RELKIND_MATVIEW || + rel->rd_rel->relkind == RELKIND_TOASTVALUE); +#ifdef PERCONA_EXT + smgrcreate(oldlocator, srel, INIT_FORKNUM, false); +#else + smgrcreate(srel, INIT_FORKNUM, false); +#endif + log_smgrcreate(newrlocator, INIT_FORKNUM); + smgrimmedsync(srel, INIT_FORKNUM); + } + + smgrclose(srel); + + /* Update TDE filemap */ + if (rel->rd_rel->relkind == RELKIND_RELATION || + rel->rd_rel->relkind == RELKIND_MATVIEW || + rel->rd_rel->relkind == RELKIND_TOASTVALUE) + { + ereport(DEBUG1, + (errmsg("creating key file for relation %s", RelationGetRelationName(rel)))); + + pg_tde_create_heap_basic_key(newrlocator); + } +} + +static void +pg_tdeam_relation_nontransactional_truncate(Relation rel) +{ + RelationTruncate(rel, 0); +} + +static void +pg_tdeam_relation_copy_data(Relation rel, const RelFileLocator *newrlocator) +{ + SMgrRelation dstrel; + + dstrel = smgropen(*newrlocator, rel->rd_backend); + + /* + * Since we copy the file directly without looking at the shared buffers, + * we'd better first flush out any pages of the source relation that are + * in shared buffers. We assume no new changes will be made while we are + * holding exclusive lock on the rel. + */ + FlushRelationBuffers(rel); + + /* + * Create and copy all forks of the relation, and schedule unlinking of + * old physical files. + * + * NOTE: any conflict in relfilenumber value will be caught in + * RelationCreateStorage(). + */ +#ifdef PERCONA_EXT + RelationCreateStorage(rel->rd_locator, *newrlocator, rel->rd_rel->relpersistence, true); +#else + RelationCreateStorage(*newrlocator, rel->rd_rel->relpersistence, true); +#endif + + /* copy main fork */ + RelationCopyStorage(RelationGetSmgr(rel), dstrel, MAIN_FORKNUM, + rel->rd_rel->relpersistence); + + /* copy those extra forks that exist */ + for (ForkNumber forkNum = MAIN_FORKNUM + 1; + forkNum <= MAX_FORKNUM; forkNum++) + { + if (smgrexists(RelationGetSmgr(rel), forkNum)) + { +#ifdef PERCONA_EXT + smgrcreate(rel->rd_locator, dstrel, forkNum, false); +#else + smgrcreate(dstrel, forkNum, false); +#endif + + /* + * WAL log creation if the relation is persistent, or this is the + * init fork of an unlogged relation. 
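+ *
+ * (TDE-specific, unlike upstream heapam: once all forks are copied,
+ * pg_tde_move_rel_key() below re-maps the relation's encryption key
+ * from the old RelFileLocator to the new one.)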
+ */ + if (RelationIsPermanent(rel) || + (rel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED && + forkNum == INIT_FORKNUM)) + log_smgrcreate(newrlocator, forkNum); + RelationCopyStorage(RelationGetSmgr(rel), dstrel, forkNum, + rel->rd_rel->relpersistence); + } + } + + pg_tde_move_rel_key(newrlocator, &rel->rd_locator); + + /* drop old relation, and close new one */ + RelationDropStorage(rel); + smgrclose(dstrel); +} + +static void +pg_tdeam_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap, + Relation OldIndex, bool use_sort, + TransactionId OldestXmin, + TransactionId *xid_cutoff, + MultiXactId *multi_cutoff, + double *num_tuples, + double *tups_vacuumed, + double *tups_recently_dead) +{ + RewriteState rwstate; + IndexScanDesc indexScan; + TableScanDesc tableScan; + HeapScanDesc heapScan; + bool is_system_catalog; + Tuplesortstate *tuplesort; + TupleDesc oldTupDesc = RelationGetDescr(OldHeap); + TupleDesc newTupDesc = RelationGetDescr(NewHeap); + TupleTableSlot *slot; + int natts; + Datum *values; + bool *isnull; + BufferHeapTupleTableSlot *hslot; + BlockNumber prev_cblock = InvalidBlockNumber; + + /* Remember if it's a system catalog */ + is_system_catalog = IsSystemRelation(OldHeap); + + /* + * Valid smgr_targblock implies something already wrote to the relation. + * This may be harmless, but this function hasn't planned for it. + */ + Assert(RelationGetTargetBlock(NewHeap) == InvalidBlockNumber); + + /* Preallocate values/isnull arrays */ + natts = newTupDesc->natts; + values = (Datum *) palloc(natts * sizeof(Datum)); + isnull = (bool *) palloc(natts * sizeof(bool)); + + /* Initialize the rewrite operation */ + rwstate = begin_tdeheap_rewrite(OldHeap, NewHeap, OldestXmin, *xid_cutoff, + *multi_cutoff); + + + /* Set up sorting if wanted */ + if (use_sort) + tuplesort = tuplesort_begin_cluster(oldTupDesc, OldIndex, + maintenance_work_mem, + NULL, TUPLESORT_NONE); + else + tuplesort = NULL; + + /* + * Prepare to scan the OldHeap. To ensure we see recently-dead tuples + * that still need to be copied, we scan with SnapshotAny and use + * HeapTupleSatisfiesVacuum for the visibility test. + */ + if (OldIndex != NULL && !use_sort) + { + const int ci_index[] = { + PROGRESS_CLUSTER_PHASE, + PROGRESS_CLUSTER_INDEX_RELID + }; + int64 ci_val[2]; + + /* Set phase and OIDOldIndex to columns */ + ci_val[0] = PROGRESS_CLUSTER_PHASE_INDEX_SCAN_HEAP; + ci_val[1] = RelationGetRelid(OldIndex); + pgstat_progress_update_multi_param(2, ci_index, ci_val); + + tableScan = NULL; + heapScan = NULL; + indexScan = index_beginscan(OldHeap, OldIndex, SnapshotAny, 0, 0); + index_rescan(indexScan, NULL, 0, NULL, 0); + } + else + { + /* In scan-and-sort mode and also VACUUM FULL, set phase */ + pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE, + PROGRESS_CLUSTER_PHASE_SEQ_SCAN_HEAP); + + tableScan = table_beginscan(OldHeap, SnapshotAny, 0, (ScanKey) NULL); + heapScan = (HeapScanDesc) tableScan; + indexScan = NULL; + + /* Set total heap blocks */ + pgstat_progress_update_param(PROGRESS_CLUSTER_TOTAL_HEAP_BLKS, + heapScan->rs_nblocks); + } + + slot = table_slot_create(OldHeap, NULL); + hslot = (BufferHeapTupleTableSlot *) slot; + + /* + * Scan through the OldHeap, either in OldIndex order or sequentially; + * copy each tuple into the NewHeap, or transiently to the tuplesort + * module. Note that we don't bother sorting dead tuples (they won't get + * to the new table anyway). 
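+ *
+ * Outline of the loop below, as a summary: fetch the next tuple from
+ * either the index scan or the table scan, classify it with
+ * HeapTupleSatisfiesVacuum() while holding a share lock on the buffer,
+ * then skip it (dead), feed it to the tuplesort (scan-and-sort mode),
+ * or rewrite it into the new heap directly.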
+ */ + for (;;) + { + HeapTuple tuple; + Buffer buf; + bool isdead; + + CHECK_FOR_INTERRUPTS(); + + if (indexScan != NULL) + { + if (!index_getnext_slot(indexScan, ForwardScanDirection, slot)) + break; + + /* Since we used no scan keys, should never need to recheck */ + if (indexScan->xs_recheck) + elog(ERROR, "CLUSTER does not support lossy index conditions"); + } + else + { + if (!table_scan_getnextslot(tableScan, ForwardScanDirection, slot)) + { + /* + * If the last pages of the scan were empty, we would go to + * the next phase while tdeheap_blks_scanned != tdeheap_blks_total. + * Instead, to ensure that tdeheap_blks_scanned is equivalent to + * tdeheap_blks_total after the table scan phase, this parameter + * is manually updated to the correct value when the table + * scan finishes. + */ + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_BLKS_SCANNED, + heapScan->rs_nblocks); + break; + } + + /* + * In scan-and-sort mode and also VACUUM FULL, set heap blocks + * scanned + * + * Note that heapScan may start at an offset and wrap around, i.e. + * rs_startblock may be >0, and rs_cblock may end with a number + * below rs_startblock. To prevent showing this wraparound to the + * user, we offset rs_cblock by rs_startblock (modulo rs_nblocks). + */ + if (prev_cblock != heapScan->rs_cblock) + { + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_BLKS_SCANNED, + (heapScan->rs_cblock + + heapScan->rs_nblocks - + heapScan->rs_startblock + ) % heapScan->rs_nblocks + 1); + prev_cblock = heapScan->rs_cblock; + } + } + + tuple = ExecFetchSlotHeapTuple(slot, false, NULL); + buf = hslot->buffer; + + LockBuffer(buf, BUFFER_LOCK_SHARE); + + switch (HeapTupleSatisfiesVacuum(tuple, OldestXmin, buf)) + { + case HEAPTUPLE_DEAD: + /* Definitely dead */ + isdead = true; + break; + case HEAPTUPLE_RECENTLY_DEAD: + *tups_recently_dead += 1; + /* fall through */ + case HEAPTUPLE_LIVE: + /* Live or recently dead, must copy it */ + isdead = false; + break; + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * Since we hold exclusive lock on the relation, normally the + * only way to see this is if it was inserted earlier in our + * own transaction. However, it can happen in system + * catalogs, since we tend to release write lock before commit + * there. Give a warning if neither case applies; but in any + * case we had better copy it. + */ + if (!is_system_catalog && + !TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tuple->t_data))) + elog(WARNING, "concurrent insert in progress within table \"%s\"", + RelationGetRelationName(OldHeap)); + /* treat as live */ + isdead = false; + break; + case HEAPTUPLE_DELETE_IN_PROGRESS: + + /* + * Similar situation to INSERT_IN_PROGRESS case. + */ + if (!is_system_catalog && + !TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(tuple->t_data))) + elog(WARNING, "concurrent delete in progress within table \"%s\"", + RelationGetRelationName(OldHeap)); + /* treat as recently dead */ + *tups_recently_dead += 1; + isdead = false; + break; + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + isdead = false; /* keep compiler quiet */ + break; + } + + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + + if (isdead) + { + *tups_vacuumed += 1; + /* heap rewrite module still needs to see it... 
*/ + if (rewrite_tdeheap_dead_tuple(rwstate, tuple)) + { + /* A previous recently-dead tuple is now known dead */ + *tups_vacuumed += 1; + *tups_recently_dead -= 1; + } + continue; + } + + *num_tuples += 1; + if (tuplesort != NULL) + { + tuplesort_putheaptuple(tuplesort, tuple); + + /* + * In scan-and-sort mode, report increase in number of tuples + * scanned + */ + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_TUPLES_SCANNED, + *num_tuples); + } + else + { + const int ct_index[] = { + PROGRESS_CLUSTER_HEAP_TUPLES_SCANNED, + PROGRESS_CLUSTER_HEAP_TUPLES_WRITTEN + }; + int64 ct_val[2]; + + reform_and_rewrite_tuple(tuple, OldHeap, NewHeap, + values, isnull, rwstate); + + /* + * In indexscan mode and also VACUUM FULL, report increase in + * number of tuples scanned and written + */ + ct_val[0] = *num_tuples; + ct_val[1] = *num_tuples; + pgstat_progress_update_multi_param(2, ct_index, ct_val); + } + } + + if (indexScan != NULL) + index_endscan(indexScan); + if (tableScan != NULL) + table_endscan(tableScan); + if (slot) + ExecDropSingleTupleTableSlot(slot); + + /* + * In scan-and-sort mode, complete the sort, then read out all live tuples + * from the tuplestore and write them to the new relation. + */ + if (tuplesort != NULL) + { + double n_tuples = 0; + + /* Report that we are now sorting tuples */ + pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE, + PROGRESS_CLUSTER_PHASE_SORT_TUPLES); + + tuplesort_performsort(tuplesort); + + /* Report that we are now writing new heap */ + pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE, + PROGRESS_CLUSTER_PHASE_WRITE_NEW_HEAP); + + for (;;) + { + HeapTuple tuple; + + CHECK_FOR_INTERRUPTS(); + + tuple = tuplesort_getheaptuple(tuplesort, true); + if (tuple == NULL) + break; + + n_tuples += 1; + reform_and_rewrite_tuple(tuple, + OldHeap, NewHeap, + values, isnull, + rwstate); + /* Report n_tuples */ + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_TUPLES_WRITTEN, + n_tuples); + } + + tuplesort_end(tuplesort); + } + + /* Write out any remaining tuples, and fsync if needed */ + end_tdeheap_rewrite(rwstate); + + /* Clean up */ + pfree(values); + pfree(isnull); +} + +static bool +pg_tdeam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno, + BufferAccessStrategy bstrategy) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + + /* + * We must maintain a pin on the target page's buffer to ensure that + * concurrent activity - e.g. HOT pruning - doesn't delete tuples out from + * under us. Hence, pin the page until we are done looking at it. We + * also choose to hold sharelock on the buffer throughout --- we could + * release and re-acquire sharelock for each tuple, but since we aren't + * doing much work per tuple, the extra lock traffic is probably better + * avoided. 
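+ *
+ * (The ReadBufferExtended() + LockBuffer(BUFFER_LOCK_SHARE) pair below
+ * is undone by UnlockReleaseBuffer() in
+ * pg_tdeam_scan_analyze_next_tuple() once the page is exhausted.)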
+ */ + hscan->rs_cblock = blockno; + hscan->rs_cindex = FirstOffsetNumber; + hscan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM, + blockno, RBM_NORMAL, bstrategy); + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + + /* in heap all blocks can contain tuples, so always return true */ + return true; +} + +static bool +pg_tdeam_scan_analyze_next_tuple(TableScanDesc scan, TransactionId OldestXmin, + double *liverows, double *deadrows, + TupleTableSlot *slot) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + Page targpage; + OffsetNumber maxoffset; + BufferHeapTupleTableSlot *hslot; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + + hslot = (BufferHeapTupleTableSlot *) slot; + targpage = BufferGetPage(hscan->rs_cbuf); + maxoffset = PageGetMaxOffsetNumber(targpage); + + /* Inner loop over all tuples on the selected page */ + for (; hscan->rs_cindex <= maxoffset; hscan->rs_cindex++) + { + ItemId itemid; + HeapTuple targtuple = &hslot->base.tupdata; + bool sample_it = false; + + itemid = PageGetItemId(targpage, hscan->rs_cindex); + + /* + * We ignore unused and redirect line pointers. DEAD line pointers + * should be counted as dead, because we need vacuum to run to get rid + * of them. Note that this rule agrees with the way that + * tdeheap_page_prune() counts things. + */ + if (!ItemIdIsNormal(itemid)) + { + if (ItemIdIsDead(itemid)) + *deadrows += 1; + continue; + } + + ItemPointerSet(&targtuple->t_self, hscan->rs_cblock, hscan->rs_cindex); + + targtuple->t_tableOid = RelationGetRelid(scan->rs_rd); + targtuple->t_data = (HeapTupleHeader) PageGetItem(targpage, itemid); + targtuple->t_len = ItemIdGetLength(itemid); + + switch (HeapTupleSatisfiesVacuum(targtuple, OldestXmin, + hscan->rs_cbuf)) + { + case HEAPTUPLE_LIVE: + sample_it = true; + *liverows += 1; + break; + + case HEAPTUPLE_DEAD: + case HEAPTUPLE_RECENTLY_DEAD: + /* Count dead and recently-dead rows */ + *deadrows += 1; + break; + + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * Insert-in-progress rows are not counted. We assume that + * when the inserting transaction commits or aborts, it will + * send a stats message to increment the proper count. This + * works right only if that transaction ends after we finish + * analyzing the table; if things happen in the other order, + * its stats update will be overwritten by ours. However, the + * error will be large only if the other transaction runs long + * enough to insert many tuples, so assuming it will finish + * after us is the safer option. + * + * A special case is that the inserting transaction might be + * our own. In this case we should count and sample the row, + * to accommodate users who load a table and analyze it in one + * transaction. (pgstat_report_analyze has to adjust the + * numbers we report to the cumulative stats system to make + * this come out right.) + */ + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(targtuple->t_data))) + { + sample_it = true; + *liverows += 1; + } + break; + + case HEAPTUPLE_DELETE_IN_PROGRESS: + + /* + * We count and sample delete-in-progress rows the same as + * live ones, so that the stats counters come out right if the + * deleting transaction commits after us, per the same + * reasoning given above. + * + * If the delete was done by our own transaction, however, we + * must count the row as dead to make pgstat_report_analyze's + * stats adjustments come out right. (Note: this works out + * properly when the row was both inserted and deleted in our + * xact.) 
+ * + * The net effect of these choices is that we act as though an + * IN_PROGRESS transaction hasn't happened yet, except if it + * is our own transaction, which we assume has happened. + * + * This approach ensures that we behave sanely if we see both + * the pre-image and post-image rows for a row being updated + * by a concurrent transaction: we will sample the pre-image + * but not the post-image. We also get sane results if the + * concurrent transaction never commits. + */ + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(targtuple->t_data))) + *deadrows += 1; + else + { + sample_it = true; + *liverows += 1; + } + break; + + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + break; + } + + if (sample_it) + { + PGTdeExecStoreBufferHeapTuple(scan->rs_rd, targtuple, slot, hscan->rs_cbuf); + hscan->rs_cindex++; + + /* note that we leave the buffer locked here! */ + return true; + } + } + + /* Now release the lock and pin on the page */ + UnlockReleaseBuffer(hscan->rs_cbuf); + hscan->rs_cbuf = InvalidBuffer; + /* also prevent old slot contents from having pin on page */ + ExecClearTuple(slot); + + return false; +} + +static double +pg_tdeam_index_build_range_scan(Relation heapRelation, + Relation indexRelation, + IndexInfo *indexInfo, + bool allow_sync, + bool anyvisible, + bool progress, + BlockNumber start_blockno, + BlockNumber numblocks, + IndexBuildCallback callback, + void *callback_state, + TableScanDesc scan) +{ + HeapScanDesc hscan; + bool is_system_catalog; + bool checking_uniqueness; + HeapTuple heapTuple; + Datum values[INDEX_MAX_KEYS]; + bool isnull[INDEX_MAX_KEYS]; + double reltuples; + ExprState *predicate; + TupleTableSlot *slot; + EState *estate; + ExprContext *econtext; + Snapshot snapshot; + bool need_unregister_snapshot = false; + TransactionId OldestXmin; + BlockNumber previous_blkno = InvalidBlockNumber; + BlockNumber root_blkno = InvalidBlockNumber; + OffsetNumber root_offsets[MaxHeapTuplesPerPage]; + + /* + * sanity checks + */ + Assert(OidIsValid(indexRelation->rd_rel->relam)); + + /* Remember if it's a system catalog */ + is_system_catalog = IsSystemRelation(heapRelation); + + /* See whether we're verifying uniqueness/exclusion properties */ + checking_uniqueness = (indexInfo->ii_Unique || + indexInfo->ii_ExclusionOps != NULL); + + /* + * "Any visible" mode is not compatible with uniqueness checks; make sure + * only one of those is requested. + */ + Assert(!(anyvisible && checking_uniqueness)); + + /* + * Need an EState for evaluation of index expressions and partial-index + * predicates. Also a slot to hold the current tuple. + */ + estate = CreateExecutorState(); + econtext = GetPerTupleExprContext(estate); + slot = table_slot_create(heapRelation, NULL); + + /* Arrange for econtext's scan tuple to be the tuple under test */ + econtext->ecxt_scantuple = slot; + + /* Set up execution state for predicate, if any. */ + predicate = ExecPrepareQual(indexInfo->ii_Predicate, estate); + + /* + * Prepare for scan of the base relation. In a normal index build, we use + * SnapshotAny because we must retrieve all tuples and do our own time + * qual checks (because we have to index RECENTLY_DEAD tuples). In a + * concurrent build, or during bootstrap, we take a regular MVCC snapshot + * and index whatever's live according to that. 
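+ *
+ * Snapshot selection at a glance, matching the code below:
+ *
+ * serial, non-concurrent build -> SnapshotAny, OldestXmin valid
+ * concurrent build or bootstrap -> registered MVCC snapshot,
+ * OldestXmin left invalid
+ * parallel build -> snapshot comes with the parallel scan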
+ */ + OldestXmin = InvalidTransactionId; + + /* okay to ignore lazy VACUUMs here */ + if (!IsBootstrapProcessingMode() && !indexInfo->ii_Concurrent) + OldestXmin = GetOldestNonRemovableTransactionId(heapRelation); + + if (!scan) + { + /* + * Serial index build. + * + * Must begin our own heap scan in this case. We may also need to + * register a snapshot whose lifetime is under our direct control. + */ + if (!TransactionIdIsValid(OldestXmin)) + { + snapshot = RegisterSnapshot(GetTransactionSnapshot()); + need_unregister_snapshot = true; + } + else + snapshot = SnapshotAny; + + scan = table_beginscan_strat(heapRelation, /* relation */ + snapshot, /* snapshot */ + 0, /* number of keys */ + NULL, /* scan key */ + true, /* buffer access strategy OK */ + allow_sync); /* syncscan OK? */ + } + else + { + /* + * Parallel index build. + * + * Parallel case never registers/unregisters own snapshot. Snapshot + * is taken from parallel heap scan, and is SnapshotAny or an MVCC + * snapshot, based on same criteria as serial case. + */ + Assert(!IsBootstrapProcessingMode()); + Assert(allow_sync); + snapshot = scan->rs_snapshot; + } + + hscan = (HeapScanDesc) scan; + + /* + * Must have called GetOldestNonRemovableTransactionId() if using + * SnapshotAny. Shouldn't have for an MVCC snapshot. (It's especially + * worth checking this for parallel builds, since ambuild routines that + * support parallel builds must work these details out for themselves.) + */ + Assert(snapshot == SnapshotAny || IsMVCCSnapshot(snapshot)); + Assert(snapshot == SnapshotAny ? TransactionIdIsValid(OldestXmin) : + !TransactionIdIsValid(OldestXmin)); + Assert(snapshot == SnapshotAny || !anyvisible); + + /* Publish number of blocks to scan */ + if (progress) + { + BlockNumber nblocks; + + if (hscan->rs_base.rs_parallel != NULL) + { + ParallelBlockTableScanDesc pbscan; + + pbscan = (ParallelBlockTableScanDesc) hscan->rs_base.rs_parallel; + nblocks = pbscan->phs_nblocks; + } + else + nblocks = hscan->rs_nblocks; + + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_TOTAL, + nblocks); + } + + /* set our scan endpoints */ + if (!allow_sync) + tdeheap_setscanlimits(scan, start_blockno, numblocks); + else + { + /* syncscan can only be requested on whole relation */ + Assert(start_blockno == 0); + Assert(numblocks == InvalidBlockNumber); + } + + reltuples = 0; + + /* + * Scan all tuples in the base relation. + */ + while ((heapTuple = tdeheap_getnext(scan, ForwardScanDirection)) != NULL) + { + bool tupleIsAlive; + + CHECK_FOR_INTERRUPTS(); + + /* Report scan progress, if asked to. */ + if (progress) + { + BlockNumber blocks_done = pg_tdeam_scan_get_blocks_done(hscan); + + if (blocks_done != previous_blkno) + { + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE, + blocks_done); + previous_blkno = blocks_done; + } + } + + /* + * When dealing with a HOT-chain of updated tuples, we want to index + * the values of the live tuple (if any), but index it under the TID + * of the chain's root tuple. This approach is necessary to preserve + * the HOT-chain structure in the heap. So we need to be able to find + * the root item offset for every tuple that's in a HOT-chain. When + * first reaching a new page of the relation, call + * tdeheap_get_root_tuples() to build a map of root item offsets on the + * page. + * + * It might look unsafe to use this information across buffer + * lock/unlock. 
However, we hold ShareLock on the table so no + * ordinary insert/update/delete should occur; and we hold pin on the + * buffer continuously while visiting the page, so no pruning + * operation can occur either. + * + * In cases with only ShareUpdateExclusiveLock on the table, it's + * possible for some HOT tuples to appear that we didn't know about + * when we first read the page. To handle that case, we re-obtain the + * list of root offsets when a HOT tuple points to a root item that we + * don't know about. + * + * Also, although our opinions about tuple liveness could change while + * we scan the page (due to concurrent transaction commits/aborts), + * the chain root locations won't, so this info doesn't need to be + * rebuilt after waiting for another transaction. + * + * Note the implied assumption that there is no more than one live + * tuple per HOT-chain --- else we could create more than one index + * entry pointing to the same root tuple. + */ + if (hscan->rs_cblock != root_blkno) + { + Page page = BufferGetPage(hscan->rs_cbuf); + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + tdeheap_get_root_tuples(page, root_offsets); + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + root_blkno = hscan->rs_cblock; + } + + if (snapshot == SnapshotAny) + { + /* do our own time qual check */ + bool indexIt; + TransactionId xwait; + + recheck: + + /* + * We could possibly get away with not locking the buffer here, + * since caller should hold ShareLock on the relation, but let's + * be conservative about it. (This remark is still correct even + * with HOT-pruning: our pin on the buffer prevents pruning.) + */ + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + + /* + * The criteria for counting a tuple as live in this block need to + * match what analyze.c's pg_tdeam_scan_analyze_next_tuple() does, + * otherwise CREATE INDEX and ANALYZE may produce wildly different + * reltuples values, e.g. when there are many recently-dead + * tuples. + */ + switch (HeapTupleSatisfiesVacuum(heapTuple, OldestXmin, + hscan->rs_cbuf)) + { + case HEAPTUPLE_DEAD: + /* Definitely dead, we can ignore it */ + indexIt = false; + tupleIsAlive = false; + break; + case HEAPTUPLE_LIVE: + /* Normal case, index and unique-check it */ + indexIt = true; + tupleIsAlive = true; + /* Count it as live, too */ + reltuples += 1; + break; + case HEAPTUPLE_RECENTLY_DEAD: + + /* + * If tuple is recently deleted then we must index it + * anyway to preserve MVCC semantics. (Pre-existing + * transactions could try to use the index after we finish + * building it, and may need to see such tuples.) + * + * However, if it was HOT-updated then we must only index + * the live tuple at the end of the HOT-chain. Since this + * breaks semantics for pre-existing snapshots, mark the + * index as unusable for them. + * + * We don't count recently-dead tuples in reltuples, even + * if we index them; see pg_tdeam_scan_analyze_next_tuple(). + */ + if (HeapTupleIsHotUpdated(heapTuple)) + { + indexIt = false; + /* mark the index as unsafe for old snapshots */ + indexInfo->ii_BrokenHotChain = true; + } + else + indexIt = true; + /* In any case, exclude the tuple from unique-checking */ + tupleIsAlive = false; + break; + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * In "anyvisible" mode, this tuple is visible and we + * don't need any further checks. 
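+ *
+ * (In anyvisible mode we index any tuple that is not known dead and
+ * skip unique-checking entirely; see the Assert near the top of this
+ * function that forbids combining anyvisible with uniqueness checks.)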
+ */ + if (anyvisible) + { + indexIt = true; + tupleIsAlive = true; + reltuples += 1; + break; + } + + /* + * Since caller should hold ShareLock or better, normally + * the only way to see this is if it was inserted earlier + * in our own transaction. However, it can happen in + * system catalogs, since we tend to release write lock + * before commit there. Give a warning if neither case + * applies. + */ + xwait = HeapTupleHeaderGetXmin(heapTuple->t_data); + if (!TransactionIdIsCurrentTransactionId(xwait)) + { + if (!is_system_catalog) + elog(WARNING, "concurrent insert in progress within table \"%s\"", + RelationGetRelationName(heapRelation)); + + /* + * If we are performing uniqueness checks, indexing + * such a tuple could lead to a bogus uniqueness + * failure. In that case we wait for the inserting + * transaction to finish and check again. + */ + if (checking_uniqueness) + { + /* + * Must drop the lock on the buffer before we wait + */ + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + XactLockTableWait(xwait, heapRelation, + &heapTuple->t_self, + XLTW_InsertIndexUnique); + CHECK_FOR_INTERRUPTS(); + goto recheck; + } + } + else + { + /* + * For consistency with + * pg_tdeam_scan_analyze_next_tuple(), count + * HEAPTUPLE_INSERT_IN_PROGRESS tuples as live only + * when inserted by our own transaction. + */ + reltuples += 1; + } + + /* + * We must index such tuples, since if the index build + * commits then they're good. + */ + indexIt = true; + tupleIsAlive = true; + break; + case HEAPTUPLE_DELETE_IN_PROGRESS: + + /* + * As with INSERT_IN_PROGRESS case, this is unexpected + * unless it's our own deletion or a system catalog; but + * in anyvisible mode, this tuple is visible. + */ + if (anyvisible) + { + indexIt = true; + tupleIsAlive = false; + reltuples += 1; + break; + } + + xwait = HeapTupleHeaderGetUpdateXid(heapTuple->t_data); + if (!TransactionIdIsCurrentTransactionId(xwait)) + { + if (!is_system_catalog) + elog(WARNING, "concurrent delete in progress within table \"%s\"", + RelationGetRelationName(heapRelation)); + + /* + * If we are performing uniqueness checks, assuming + * the tuple is dead could lead to missing a + * uniqueness violation. In that case we wait for the + * deleting transaction to finish and check again. + * + * Also, if it's a HOT-updated tuple, we should not + * index it but rather the live tuple at the end of + * the HOT-chain. However, the deleting transaction + * could abort, possibly leaving this tuple as live + * after all, in which case it has to be indexed. The + * only way to know what to do is to wait for the + * deleting transaction to finish and check again. + */ + if (checking_uniqueness || + HeapTupleIsHotUpdated(heapTuple)) + { + /* + * Must drop the lock on the buffer before we wait + */ + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + XactLockTableWait(xwait, heapRelation, + &heapTuple->t_self, + XLTW_InsertIndexUnique); + CHECK_FOR_INTERRUPTS(); + goto recheck; + } + + /* + * Otherwise index it but don't check for uniqueness, + * the same as a RECENTLY_DEAD tuple. + */ + indexIt = true; + + /* + * Count HEAPTUPLE_DELETE_IN_PROGRESS tuples as live, + * if they were not deleted by the current + * transaction. That's what + * pg_tdeam_scan_analyze_next_tuple() does, and we want + * the behavior to be consistent. + */ + reltuples += 1; + } + else if (HeapTupleIsHotUpdated(heapTuple)) + { + /* + * It's a HOT-updated tuple deleted by our own xact. 
+ * We can assume the deletion will commit (else the + * index contents don't matter), so treat the same as + * RECENTLY_DEAD HOT-updated tuples. + */ + indexIt = false; + /* mark the index as unsafe for old snapshots */ + indexInfo->ii_BrokenHotChain = true; + } + else + { + /* + * It's a regular tuple deleted by our own xact. Index + * it, but don't check for uniqueness nor count in + * reltuples, the same as a RECENTLY_DEAD tuple. + */ + indexIt = true; + } + /* In any case, exclude the tuple from unique-checking */ + tupleIsAlive = false; + break; + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + indexIt = tupleIsAlive = false; /* keep compiler quiet */ + break; + } + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + if (!indexIt) + continue; + } + else + { + /* tdeheap_getnext did the time qual check */ + tupleIsAlive = true; + reltuples += 1; + } + + MemoryContextReset(econtext->ecxt_per_tuple_memory); + + /* Set up for predicate or expression evaluation */ + PGTdeExecStoreBufferHeapTuple(heapRelation, heapTuple, slot, hscan->rs_cbuf); + + /* + * In a partial index, discard tuples that don't satisfy the + * predicate. + */ + if (predicate != NULL) + { + if (!ExecQual(predicate, econtext)) + continue; + } + + /* + * For the current heap tuple, extract all the attributes we use in + * this index, and note which are null. This also performs evaluation + * of any expressions needed. + */ + FormIndexDatum(indexInfo, + slot, + estate, + values, + isnull); + + /* + * You'd think we should go ahead and build the index tuple here, but + * some index AMs want to do further processing on the data first. So + * pass the values[] and isnull[] arrays, instead. + */ + + if (HeapTupleIsHeapOnly(heapTuple)) + { + /* + * For a heap-only tuple, pretend its TID is that of the root. See + * src/backend/access/heap/README.HOT for discussion. + */ + ItemPointerData tid; + OffsetNumber offnum; + + offnum = ItemPointerGetOffsetNumber(&heapTuple->t_self); + + /* + * If a HOT tuple points to a root that we don't know about, + * obtain root items afresh. If that still fails, report it as + * corruption. + */ + if (root_offsets[offnum - 1] == InvalidOffsetNumber) + { + Page page = BufferGetPage(hscan->rs_cbuf); + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + tdeheap_get_root_tuples(page, root_offsets); + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + } + + if (!OffsetNumberIsValid(root_offsets[offnum - 1])) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("failed to find parent tuple for heap-only tuple at (%u,%u) in table \"%s\"", + ItemPointerGetBlockNumber(&heapTuple->t_self), + offnum, + RelationGetRelationName(heapRelation)))); + + ItemPointerSet(&tid, ItemPointerGetBlockNumber(&heapTuple->t_self), + root_offsets[offnum - 1]); + + /* Call the AM's callback routine to process the tuple */ + callback(indexRelation, &tid, values, isnull, tupleIsAlive, + callback_state); + } + else + { + /* Call the AM's callback routine to process the tuple */ + callback(indexRelation, &heapTuple->t_self, values, isnull, + tupleIsAlive, callback_state); + } + } + + /* Report scan progress one last time. 
*/ + if (progress) + { + BlockNumber blks_done; + + if (hscan->rs_base.rs_parallel != NULL) + { + ParallelBlockTableScanDesc pbscan; + + pbscan = (ParallelBlockTableScanDesc) hscan->rs_base.rs_parallel; + blks_done = pbscan->phs_nblocks; + } + else + blks_done = hscan->rs_nblocks; + + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE, + blks_done); + } + + table_endscan(scan); + + /* we can now forget our snapshot, if set and registered by us */ + if (need_unregister_snapshot) + UnregisterSnapshot(snapshot); + + ExecDropSingleTupleTableSlot(slot); + + FreeExecutorState(estate); + + /* These may have been pointing to the now-gone estate */ + indexInfo->ii_ExpressionsState = NIL; + indexInfo->ii_PredicateState = NULL; + + return reltuples; +} + +static void +pg_tdeam_index_validate_scan(Relation heapRelation, + Relation indexRelation, + IndexInfo *indexInfo, + Snapshot snapshot, + ValidateIndexState *state) +{ + TableScanDesc scan; + HeapScanDesc hscan; + HeapTuple heapTuple; + Datum values[INDEX_MAX_KEYS]; + bool isnull[INDEX_MAX_KEYS]; + ExprState *predicate; + TupleTableSlot *slot; + EState *estate; + ExprContext *econtext; + BlockNumber root_blkno = InvalidBlockNumber; + OffsetNumber root_offsets[MaxHeapTuplesPerPage]; + bool in_index[MaxHeapTuplesPerPage]; + BlockNumber previous_blkno = InvalidBlockNumber; + + /* state variables for the merge */ + ItemPointer indexcursor = NULL; + ItemPointerData decoded; + bool tuplesort_empty = false; + + /* + * sanity checks + */ + Assert(OidIsValid(indexRelation->rd_rel->relam)); + + /* + * Need an EState for evaluation of index expressions and partial-index + * predicates. Also a slot to hold the current tuple. + */ + estate = CreateExecutorState(); + econtext = GetPerTupleExprContext(estate); + slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRelation), + &TTSOpsHeapTuple); + + /* Arrange for econtext's scan tuple to be the tuple under test */ + econtext->ecxt_scantuple = slot; + + /* Set up execution state for predicate, if any. */ + predicate = ExecPrepareQual(indexInfo->ii_Predicate, estate); + + /* + * Prepare for scan of the base relation. We need just those tuples + * satisfying the passed-in reference snapshot. We must disable syncscan + * here, because it's critical that we read from block zero forward to + * match the sorted TIDs. + */ + scan = table_beginscan_strat(heapRelation, /* relation */ + snapshot, /* snapshot */ + 0, /* number of keys */ + NULL, /* scan key */ + true, /* buffer access strategy OK */ + false); /* syncscan not OK */ + hscan = (HeapScanDesc) scan; + + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_TOTAL, + hscan->rs_nblocks); + + /* + * Scan all tuples matching the snapshot. + */ + while ((heapTuple = tdeheap_getnext(scan, ForwardScanDirection)) != NULL) + { + ItemPointer heapcursor = &heapTuple->t_self; + ItemPointerData rootTuple; + OffsetNumber root_offnum; + + CHECK_FOR_INTERRUPTS(); + + state->htups += 1; + + if ((previous_blkno == InvalidBlockNumber) || + (hscan->rs_cblock != previous_blkno)) + { + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE, + hscan->rs_cblock); + previous_blkno = hscan->rs_cblock; + } + + /* + * As commented in table_index_build_scan, we should index heap-only + * tuples under the TIDs of their root tuples; so when we advance onto + * a new heap page, build a map of root item offsets on the page. 
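+ * (For a heap-only tuple at line pointer offnum, root_offsets[offnum - 1]
+ * gives the offset of its HOT chain's root item; that root TID is what
+ * the index entry must carry, as the code below shows.)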
+ * + * This complicates merging against the tuplesort output: we will + * visit the live tuples in order by their offsets, but the root + * offsets that we need to compare against the index contents might be + * ordered differently. So we might have to "look back" within the + * tuplesort output, but only within the current page. We handle that + * by keeping a bool array in_index[] showing all the + * already-passed-over tuplesort output TIDs of the current page. We + * clear that array here, when advancing onto a new heap page. + */ + if (hscan->rs_cblock != root_blkno) + { + Page page = BufferGetPage(hscan->rs_cbuf); + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + tdeheap_get_root_tuples(page, root_offsets); + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + memset(in_index, 0, sizeof(in_index)); + + root_blkno = hscan->rs_cblock; + } + + /* Convert actual tuple TID to root TID */ + rootTuple = *heapcursor; + root_offnum = ItemPointerGetOffsetNumber(heapcursor); + + if (HeapTupleIsHeapOnly(heapTuple)) + { + root_offnum = root_offsets[root_offnum - 1]; + if (!OffsetNumberIsValid(root_offnum)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("failed to find parent tuple for heap-only tuple at (%u,%u) in table \"%s\"", + ItemPointerGetBlockNumber(heapcursor), + ItemPointerGetOffsetNumber(heapcursor), + RelationGetRelationName(heapRelation)))); + ItemPointerSetOffsetNumber(&rootTuple, root_offnum); + } + + /* + * "merge" by skipping through the index tuples until we find or pass + * the current root tuple. + */ + while (!tuplesort_empty && + (!indexcursor || + ItemPointerCompare(indexcursor, &rootTuple) < 0)) + { + Datum ts_val; + bool ts_isnull; + + if (indexcursor) + { + /* + * Remember index items seen earlier on the current heap page + */ + if (ItemPointerGetBlockNumber(indexcursor) == root_blkno) + in_index[ItemPointerGetOffsetNumber(indexcursor) - 1] = true; + } + + tuplesort_empty = !tuplesort_getdatum(state->tuplesort, true, + false, &ts_val, &ts_isnull, + NULL); + Assert(tuplesort_empty || !ts_isnull); + if (!tuplesort_empty) + { + itemptr_decode(&decoded, DatumGetInt64(ts_val)); + indexcursor = &decoded; + } + else + { + /* Be tidy */ + indexcursor = NULL; + } + } + + /* + * If the tuplesort has overshot *and* we didn't see a match earlier, + * then this tuple is missing from the index, so insert it. + */ + if ((tuplesort_empty || + ItemPointerCompare(indexcursor, &rootTuple) > 0) && + !in_index[root_offnum - 1]) + { + MemoryContextReset(econtext->ecxt_per_tuple_memory); + + /* Set up for predicate or expression evaluation */ + ExecStoreHeapTuple(heapTuple, slot, false); + + /* + * In a partial index, discard tuples that don't satisfy the + * predicate. + */ + if (predicate != NULL) + { + if (!ExecQual(predicate, econtext)) + continue; + } + + /* + * For the current heap tuple, extract all the attributes we use + * in this index, and note which are null. This also performs + * evaluation of any expressions needed. + */ + FormIndexDatum(indexInfo, + slot, + estate, + values, + isnull); + + /* + * You'd think we should go ahead and build the index tuple here, + * but some index AMs want to do further processing on the data + * first. So pass the values[] and isnull[] arrays, instead. + */ + + /* + * If the tuple is already committed dead, you might think we + * could suppress uniqueness checking, but this is no longer true + * in the presence of HOT, because the insert is actually a proxy + * for a uniqueness check on the whole HOT-chain. 
That is, the + * tuple we have here could be dead because it was already + * HOT-updated, and if so the updating transaction will not have + * thought it should insert index entries. The index AM will + * check the whole HOT-chain and correctly detect a conflict if + * there is one. + */ + + index_insert(indexRelation, + values, + isnull, + &rootTuple, + heapRelation, + indexInfo->ii_Unique ? + UNIQUE_CHECK_YES : UNIQUE_CHECK_NO, + false, + indexInfo); + + state->tups_inserted += 1; + } + } + + table_endscan(scan); + + ExecDropSingleTupleTableSlot(slot); + + FreeExecutorState(estate); + + /* These may have been pointing to the now-gone estate */ + indexInfo->ii_ExpressionsState = NIL; + indexInfo->ii_PredicateState = NULL; +} + +/* + * Return the number of blocks that have been read by this scan since + * starting. This is meant for progress reporting rather than be fully + * accurate: in a parallel scan, workers can be concurrently reading blocks + * further ahead than what we report. + */ +static BlockNumber +pg_tdeam_scan_get_blocks_done(HeapScanDesc hscan) +{ + ParallelBlockTableScanDesc bpscan = NULL; + BlockNumber startblock; + BlockNumber blocks_done; + + if (hscan->rs_base.rs_parallel != NULL) + { + bpscan = (ParallelBlockTableScanDesc) hscan->rs_base.rs_parallel; + startblock = bpscan->phs_startblock; + } + else + startblock = hscan->rs_startblock; + + /* + * Might have wrapped around the end of the relation, if startblock was + * not zero. + */ + if (hscan->rs_cblock > startblock) + blocks_done = hscan->rs_cblock - startblock; + else + { + BlockNumber nblocks; + + nblocks = bpscan != NULL ? bpscan->phs_nblocks : hscan->rs_nblocks; + blocks_done = nblocks - startblock + + hscan->rs_cblock; + } + + return blocks_done; +} + + +/* ------------------------------------------------------------------------ + * Miscellaneous callbacks for the heap AM + * ------------------------------------------------------------------------ + */ + +/* + * Check to see whether the table needs a TOAST table. It does only if + * (1) there are any toastable attributes, and (2) the maximum length + * of a tuple could exceed TOAST_TUPLE_THRESHOLD. (We don't want to + * create a toast table for something like "f1 varchar(20)".) + */ +static bool +pg_tdeam_relation_needs_toast_table(Relation rel) +{ + int32 data_length = 0; + bool maxlength_unknown = false; + bool has_toastable_attrs = false; + TupleDesc tupdesc = rel->rd_att; + int32 tuple_length; + int i; + + for (i = 0; i < tupdesc->natts; i++) + { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + + if (att->attisdropped) + continue; + data_length = att_align_nominal(data_length, att->attalign); + if (att->attlen > 0) + { + /* Fixed-length types are never toastable */ + data_length += att->attlen; + } + else + { + int32 maxlen = type_maximum_size(att->atttypid, + att->atttypmod); + + if (maxlen < 0) + maxlength_unknown = true; + else + data_length += maxlen; + if (att->attstorage != TYPSTORAGE_PLAIN) + has_toastable_attrs = true; + } + } + if (!has_toastable_attrs) + return false; /* nothing to toast? */ + if (maxlength_unknown) + return true; /* any unlimited-length attrs? */ + tuple_length = MAXALIGN(SizeofHeapTupleHeader + + BITMAPLEN(tupdesc->natts)) + + MAXALIGN(data_length); + return (tuple_length > TOAST_TUPLE_THRESHOLD); +} + +/* + * TOAST tables for heap relations are just heap relations. 
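+ *
+ * (Returning rel->rd_rel->relam below makes the TOAST table inherit this
+ * relation's access method instead of hard-coding plain heap; presumably
+ * so out-of-line values take the same encrypted storage path, though
+ * that behavior is implemented outside this file.)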
+ */ +static Oid +pg_tdeam_relation_toast_am(Relation rel) +{ + return rel->rd_rel->relam; +} + + +/* ------------------------------------------------------------------------ + * Planner related callbacks for the heap AM + * ------------------------------------------------------------------------ + */ + +#define HEAP_OVERHEAD_BYTES_PER_TUPLE \ + (MAXALIGN(SizeofHeapTupleHeader) + sizeof(ItemIdData)) +#define HEAP_USABLE_BYTES_PER_PAGE \ + (BLCKSZ - SizeOfPageHeaderData) + +static void +pg_tdeam_estimate_rel_size(Relation rel, int32 *attr_widths, + BlockNumber *pages, double *tuples, + double *allvisfrac) +{ + table_block_relation_estimate_size(rel, attr_widths, pages, + tuples, allvisfrac, + HEAP_OVERHEAD_BYTES_PER_TUPLE, + HEAP_USABLE_BYTES_PER_PAGE); +} + + +/* ------------------------------------------------------------------------ + * Executor related callbacks for the heap AM + * ------------------------------------------------------------------------ + */ + +static bool +pg_tdeam_scan_bitmap_next_block(TableScanDesc scan, + TBMIterateResult *tbmres) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + BlockNumber block = tbmres->blockno; + Buffer buffer; + Snapshot snapshot; + int ntup; + + hscan->rs_cindex = 0; + hscan->rs_ntuples = 0; + + /* + * Ignore any claimed entries past what we think is the end of the + * relation. It may have been extended after the start of our scan (we + * only hold an AccessShareLock, and it could be inserts from this + * backend). We don't take this optimization in SERIALIZABLE isolation + * though, as we need to examine all invisible tuples reachable by the + * index. + */ + if (!IsolationIsSerializable() && block >= hscan->rs_nblocks) + return false; + + /* + * Acquire pin on the target heap page, trading in any pin we held before. + */ + hscan->rs_cbuf = ReleaseAndReadBuffer(hscan->rs_cbuf, + scan->rs_rd, + block); + hscan->rs_cblock = block; + buffer = hscan->rs_cbuf; + snapshot = scan->rs_snapshot; + + ntup = 0; + + /* + * Prune and repair fragmentation for the whole page, if possible. + */ + tdeheap_page_prune_opt(scan->rs_rd, buffer); + + /* + * We must hold share lock on the buffer content while examining tuple + * visibility. Afterwards, however, the tuples we have found to be + * visible are guaranteed good as long as we hold the buffer pin. + */ + LockBuffer(buffer, BUFFER_LOCK_SHARE); + + /* + * We need two separate strategies for lossy and non-lossy cases. + */ + if (tbmres->ntuples >= 0) + { + /* + * Bitmap is non-lossy, so we just look through the offsets listed in + * tbmres; but we have to follow any HOT chain starting at each such + * offset. + */ + int curslot; + + for (curslot = 0; curslot < tbmres->ntuples; curslot++) + { + OffsetNumber offnum = tbmres->offsets[curslot]; + ItemPointerData tid; + HeapTupleData heapTuple; + + ItemPointerSet(&tid, block, offnum); + if (tdeheap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot, + &heapTuple, NULL, true)) + hscan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid); + } + } + else + { + /* + * Bitmap is lossy, so we must examine each line pointer on the page. + * But we can ignore HOT chains, since we'll check each tuple anyway. 
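+ *
+ * (Contrast with the exact-bitmap branch above, which probes only the
+ * listed offsets via tdeheap_hot_search_buffer(); here every normal line
+ * pointer on the page gets an individual visibility test.)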
+ */ + Page page = BufferGetPage(buffer); + OffsetNumber maxoff = PageGetMaxOffsetNumber(page); + OffsetNumber offnum; + + for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum)) + { + ItemId lp; + HeapTupleData loctup; + bool valid; + + lp = PageGetItemId(page, offnum); + if (!ItemIdIsNormal(lp)) + continue; + loctup.t_data = (HeapTupleHeader) PageGetItem(page, lp); + loctup.t_len = ItemIdGetLength(lp); + loctup.t_tableOid = scan->rs_rd->rd_id; + ItemPointerSet(&loctup.t_self, block, offnum); + valid = HeapTupleSatisfiesVisibility(&loctup, snapshot, buffer); + if (valid) + { + hscan->rs_vistuples[ntup++] = offnum; + PredicateLockTID(scan->rs_rd, &loctup.t_self, snapshot, + HeapTupleHeaderGetXmin(loctup.t_data)); + } + HeapCheckForSerializableConflictOut(valid, scan->rs_rd, &loctup, + buffer, snapshot); + } + } + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + Assert(ntup <= MaxHeapTuplesPerPage); + hscan->rs_ntuples = ntup; + + return ntup > 0; +} + +static bool +pg_tdeam_scan_bitmap_next_tuple(TableScanDesc scan, + TBMIterateResult *tbmres, + TupleTableSlot *slot) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + OffsetNumber targoffset; + Page page; + ItemId lp; + + /* + * Out of range? If so, nothing more to look at on this page + */ + if (hscan->rs_cindex < 0 || hscan->rs_cindex >= hscan->rs_ntuples) + return false; + + targoffset = hscan->rs_vistuples[hscan->rs_cindex]; + page = BufferGetPage(hscan->rs_cbuf); + lp = PageGetItemId(page, targoffset); + Assert(ItemIdIsNormal(lp)); + + hscan->rs_ctup.t_data = (HeapTupleHeader) PageGetItem(page, lp); + hscan->rs_ctup.t_len = ItemIdGetLength(lp); + hscan->rs_ctup.t_tableOid = scan->rs_rd->rd_id; + ItemPointerSet(&hscan->rs_ctup.t_self, hscan->rs_cblock, targoffset); + + pgstat_count_tdeheap_fetch(scan->rs_rd); + + /* + * Set up the result slot to point to this tuple. Note that the slot + * acquires a pin on the buffer. + */ + PGTdeExecStoreBufferHeapTuple(scan->rs_rd, &hscan->rs_ctup, + slot, + hscan->rs_cbuf); + + hscan->rs_cindex++; + + return true; +} + +static bool +pg_tdeam_scan_sample_next_block(TableScanDesc scan, SampleScanState *scanstate) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + TsmRoutine *tsm = scanstate->tsmroutine; + BlockNumber blockno; + + /* return false immediately if relation is empty */ + if (hscan->rs_nblocks == 0) + return false; + + if (tsm->NextSampleBlock) + { + blockno = tsm->NextSampleBlock(scanstate, hscan->rs_nblocks); + hscan->rs_cblock = blockno; + } + else + { + /* scanning table sequentially */ + + if (hscan->rs_cblock == InvalidBlockNumber) + { + Assert(!hscan->rs_inited); + blockno = hscan->rs_startblock; + } + else + { + Assert(hscan->rs_inited); + + blockno = hscan->rs_cblock + 1; + + if (blockno >= hscan->rs_nblocks) + { + /* wrap to beginning of rel, might not have started at 0 */ + blockno = 0; + } + + /* + * Report our new scan position for synchronization purposes. + * + * Note: we do this before checking for end of scan so that the + * final state of the position hint is back at the start of the + * rel. That's not strictly necessary, but otherwise when you run + * the same query multiple times the starting position would shift + * a little bit backwards on every invocation, which is confusing. + * We don't guarantee any specific ordering in general, though. 
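+ *
+ * (The report below is gated on SO_ALLOW_SYNC, so sample scans that
+ * disallow synchronized scanning skip it.)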
+ */ + if (scan->rs_flags & SO_ALLOW_SYNC) + ss_report_location(scan->rs_rd, blockno); + + if (blockno == hscan->rs_startblock) + { + blockno = InvalidBlockNumber; + } + } + } + + if (!BlockNumberIsValid(blockno)) + { + if (BufferIsValid(hscan->rs_cbuf)) + ReleaseBuffer(hscan->rs_cbuf); + hscan->rs_cbuf = InvalidBuffer; + hscan->rs_cblock = InvalidBlockNumber; + hscan->rs_inited = false; + + return false; + } + + tdeheapgetpage(scan, blockno); + hscan->rs_inited = true; + + return true; +} + +static bool +pg_tdeam_scan_sample_next_tuple(TableScanDesc scan, SampleScanState *scanstate, + TupleTableSlot *slot) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + TsmRoutine *tsm = scanstate->tsmroutine; + BlockNumber blockno = hscan->rs_cblock; + bool pagemode = (scan->rs_flags & SO_ALLOW_PAGEMODE) != 0; + + Page page; + bool all_visible; + OffsetNumber maxoffset; + + /* + * When not using pagemode, we must lock the buffer during tuple + * visibility checks. + */ + if (!pagemode) + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + + page = (Page) BufferGetPage(hscan->rs_cbuf); + all_visible = PageIsAllVisible(page) && + !scan->rs_snapshot->takenDuringRecovery; + maxoffset = PageGetMaxOffsetNumber(page); + + for (;;) + { + OffsetNumber tupoffset; + + CHECK_FOR_INTERRUPTS(); + + /* Ask the tablesample method which tuples to check on this page. */ + tupoffset = tsm->NextSampleTuple(scanstate, + blockno, + maxoffset); + + if (OffsetNumberIsValid(tupoffset)) + { + ItemId itemid; + bool visible; + HeapTuple tuple = &(hscan->rs_ctup); + + /* Skip invalid tuple pointers. */ + itemid = PageGetItemId(page, tupoffset); + if (!ItemIdIsNormal(itemid)) + continue; + + tuple->t_data = (HeapTupleHeader) PageGetItem(page, itemid); + tuple->t_len = ItemIdGetLength(itemid); + ItemPointerSet(&(tuple->t_self), blockno, tupoffset); + + + if (all_visible) + visible = true; + else + visible = SampleHeapTupleVisible(scan, hscan->rs_cbuf, + tuple, tupoffset); + + /* in pagemode, tdeheapgetpage did this for us */ + if (!pagemode) + HeapCheckForSerializableConflictOut(visible, scan->rs_rd, tuple, + hscan->rs_cbuf, scan->rs_snapshot); + + /* Try next tuple from same page. */ + if (!visible) + continue; + + /* Found visible tuple, return it. */ + if (!pagemode) + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + PGTdeExecStoreBufferHeapTuple(scan->rs_rd, tuple, slot, hscan->rs_cbuf); + + /* Count successfully-fetched tuples as heap fetches */ + pgstat_count_tdeheap_getnext(scan->rs_rd); + + return true; + } + else + { + /* + * If we get here, it means we've exhausted the items on this page + * and it's time to move to the next. + */ + if (!pagemode) + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + ExecClearTuple(slot); + return false; + } + } + + Assert(0); +} + + +/* ---------------------------------------------------------------------------- + * Helper functions for the above. + * ---------------------------------------------------------------------------- + */ + +/* + * Reconstruct and rewrite the given tuple + * + * We cannot simply copy the tuple as-is, for several reasons: + * + * 1. We'd like to squeeze out the values of any dropped columns, both + * to save space and to ensure we have no corner-case failures. (It's + * possible for example that the new table hasn't got a TOAST table + * and so is unable to store any large values of dropped cols.) + * + * 2. The tuple might not even be legal for the new table; this is + * currently only known to happen as an after-effect of ALTER TABLE + * SET WITHOUT OIDS. 
+ * + * So, we must reconstruct the tuple from component Datums. + */ +static void +reform_and_rewrite_tuple(HeapTuple tuple, + Relation OldHeap, Relation NewHeap, + Datum *values, bool *isnull, RewriteState rwstate) +{ + TupleDesc oldTupDesc = RelationGetDescr(OldHeap); + TupleDesc newTupDesc = RelationGetDescr(NewHeap); + HeapTuple copiedTuple; + int i; + + tdeheap_deform_tuple(tuple, oldTupDesc, values, isnull); + + /* Be sure to null out any dropped columns */ + for (i = 0; i < newTupDesc->natts; i++) + { + if (TupleDescAttr(newTupDesc, i)->attisdropped) + isnull[i] = true; + } + + copiedTuple = tdeheap_form_tuple(newTupDesc, values, isnull); + + /* The heap rewrite module does the rest */ + rewrite_tdeheap_tuple(rwstate, tuple, copiedTuple); + + tdeheap_freetuple(copiedTuple); +} + +/* + * Check visibility of the tuple. + */ +static bool +SampleHeapTupleVisible(TableScanDesc scan, Buffer buffer, + HeapTuple tuple, + OffsetNumber tupoffset) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + + if (scan->rs_flags & SO_ALLOW_PAGEMODE) + { + /* + * In pageatatime mode, tdeheapgetpage() already did visibility checks, + * so just look at the info it left in rs_vistuples[]. + * + * We use a binary search over the known-sorted array. Note: we could + * save some effort if we insisted that NextSampleTuple select tuples + * in increasing order, but it's not clear that there would be enough + * gain to justify the restriction. + */ + int start = 0, + end = hscan->rs_ntuples - 1; + + while (start <= end) + { + int mid = (start + end) / 2; + OffsetNumber curoffset = hscan->rs_vistuples[mid]; + + if (tupoffset == curoffset) + return true; + else if (tupoffset < curoffset) + end = mid - 1; + else + start = mid + 1; + } + + return false; + } + else + { + /* Otherwise, we have to check the tuple individually. */ + return HeapTupleSatisfiesVisibility(tuple, scan->rs_snapshot, + buffer); + } +} + + +/* ------------------------------------------------------------------------ + * Definition of the heap table access method. 
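+ * + * How SQL reaches this table AM, as a hedged sketch (the access method name here is assumed; the handler function below is the real one): + * + * CREATE ACCESS METHOD tde_heap_basic TYPE TABLE HANDLER pg_tdeam_basic_handler; + * CREATE TABLE t (a int) USING tde_heap_basic;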
+ * ------------------------------------------------------------------------ + */ + +static const TableAmRoutine pg_tdeam_methods = { + .type = T_TableAmRoutine, + + .slot_callbacks = pg_tdeam_slot_callbacks, + + .scan_begin = tdeheap_beginscan, + .scan_end = tdeheap_endscan, + .scan_rescan = tdeheap_rescan, + .scan_getnextslot = tdeheap_getnextslot, + + .scan_set_tidrange = tdeheap_set_tidrange, + .scan_getnextslot_tidrange = tdeheap_getnextslot_tidrange, + + .parallelscan_estimate = table_block_parallelscan_estimate, + .parallelscan_initialize = table_block_parallelscan_initialize, + .parallelscan_reinitialize = table_block_parallelscan_reinitialize, + + .index_fetch_begin = pg_tdeam_index_fetch_begin, + .index_fetch_reset = pg_tdeam_index_fetch_reset, + .index_fetch_end = pg_tdeam_index_fetch_end, + .index_fetch_tuple = pg_tdeam_index_fetch_tuple, + + .tuple_insert = pg_tdeam_tuple_insert, + .tuple_insert_speculative = pg_tdeam_tuple_insert_speculative, + .tuple_complete_speculative = pg_tdeam_tuple_complete_speculative, + .multi_insert = tdeheap_multi_insert, + .tuple_delete = pg_tdeam_tuple_delete, + .tuple_update = pg_tdeam_tuple_update, + .tuple_lock = pg_tdeam_tuple_lock, + + .tuple_fetch_row_version = pg_tdeam_fetch_row_version, + .tuple_get_latest_tid = tdeheap_get_latest_tid, + .tuple_tid_valid = pg_tdeam_tuple_tid_valid, + .tuple_satisfies_snapshot = pg_tdeam_tuple_satisfies_snapshot, + .index_delete_tuples = tdeheap_index_delete_tuples, + + .relation_set_new_filelocator = pg_tdeam_relation_set_new_filelocator, + .relation_nontransactional_truncate = pg_tdeam_relation_nontransactional_truncate, + .relation_copy_data = pg_tdeam_relation_copy_data, + .relation_copy_for_cluster = pg_tdeam_relation_copy_for_cluster, + .relation_vacuum = tdeheap_vacuum_rel, + .scan_analyze_next_block = pg_tdeam_scan_analyze_next_block, + .scan_analyze_next_tuple = pg_tdeam_scan_analyze_next_tuple, + .index_build_range_scan = pg_tdeam_index_build_range_scan, + .index_validate_scan = pg_tdeam_index_validate_scan, + + .relation_size = table_block_relation_size, + .relation_needs_toast_table = pg_tdeam_relation_needs_toast_table, + .relation_toast_am = pg_tdeam_relation_toast_am, + .relation_fetch_toast_slice = tdeheap_fetch_toast_slice, + + .relation_estimate_size = pg_tdeam_estimate_rel_size, + + .scan_bitmap_next_block = pg_tdeam_scan_bitmap_next_block, + .scan_bitmap_next_tuple = pg_tdeam_scan_bitmap_next_tuple, + .scan_sample_next_block = pg_tdeam_scan_sample_next_block, + .scan_sample_next_tuple = pg_tdeam_scan_sample_next_tuple +}; + +const TableAmRoutine * +GetPGTdeamTableAmRoutine(void) +{ + return &pg_tdeam_methods; +} + +Datum +pg_tdeam_basic_handler(PG_FUNCTION_ARGS) +{ + PG_RETURN_POINTER(&pg_tdeam_methods); +} + +#ifdef PERCONA_EXT +Datum +pg_tdeam_handler(PG_FUNCTION_ARGS) +{ + PG_RETURN_POINTER(GetHeapamTableAmRoutine()); +} +#endif + +bool +is_tdeheap_rel(Relation rel) +{ + return (rel->rd_tableam == (TableAmRoutine *) &pg_tdeam_methods); +} diff --git a/contrib/pg_tde/src16/access/pg_tdeam_visibility.c b/contrib/pg_tde/src16/access/pg_tdeam_visibility.c new file mode 100644 index 00000000000..c037e30c30a --- /dev/null +++ b/contrib/pg_tde/src16/access/pg_tdeam_visibility.c @@ -0,0 +1,1793 @@ +/*------------------------------------------------------------------------- + * + * pg_tdeam_visibility.c + * Tuple visibility rules for tuples stored in heap. 
+ * + * NOTE: all the HeapTupleSatisfies routines will update the tuple's + * "hint" status bits if we see that the inserting or deleting transaction + * has now committed or aborted (and it is safe to set the hint bits). + * If the hint bits are changed, MarkBufferDirtyHint is called on + * the passed-in buffer. The caller must hold not only a pin, but at least + * shared buffer content lock on the buffer containing the tuple. + * + * NOTE: When using a non-MVCC snapshot, we must check + * TransactionIdIsInProgress (which looks in the PGPROC array) before + * TransactionIdDidCommit (which looks in pg_xact). Otherwise we have a race + * condition: we might decide that a just-committed transaction crashed, + * because none of the tests succeed. xact.c is careful to record + * commit/abort in pg_xact before it unsets MyProc->xid in the PGPROC array. + * That fixes that problem, but it also means there is a window where + * TransactionIdIsInProgress and TransactionIdDidCommit will both return true. + * If we check only TransactionIdDidCommit, we could consider a tuple + * committed when a later GetSnapshotData call will still think the + * originating transaction is in progress, which leads to application-level + * inconsistency. The upshot is that we have to check TransactionIdIsInProgress + * first in all code paths, except for a few cases where we are looking at + * subtransactions of our own main transaction and so there can't be any race + * condition. + * + * We can't use TransactionIdDidAbort here because it won't treat transactions + * that were in progress during a crash as aborted. We determine that + * transactions aborted/crashed through process of elimination instead. + * + * When using an MVCC snapshot, we rely on XidInMVCCSnapshot rather than + * TransactionIdIsInProgress, but the logic is otherwise the same: do not + * check pg_xact until after deciding that the xact is no longer in progress. + * + * + * Summary of visibility functions: + * + * HeapTupleSatisfiesMVCC() + * visible to supplied snapshot, excludes current command + * HeapTupleSatisfiesUpdate() + * visible to instant snapshot, with user-supplied command + * counter and more complex result + * HeapTupleSatisfiesSelf() + * visible to instant snapshot and current command + * HeapTupleSatisfiesDirty() + * like HeapTupleSatisfiesSelf(), but includes open transactions + * HeapTupleSatisfiesVacuum() + * visible to any running transaction, used by VACUUM + * HeapTupleSatisfiesNonVacuumable() + * Snapshot-style API for HeapTupleSatisfiesVacuum + * HeapTupleSatisfiesToast() + * visible unless part of interrupted vacuum, used for TOAST + * HeapTupleSatisfiesAny() + * all tuples are visible + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * contrib/pg_tde/src16/access/pg_tdeam_visibility.c + * + *------------------------------------------------------------------------- + */ + +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tdeam.h" + +#include "access/htup_details.h" +#include "access/multixact.h" +#include "access/subtrans.h" +#include "access/tableam.h" +#include "access/transam.h" +#include "access/xact.h" +#include "access/xlog.h" +#include "storage/bufmgr.h" +#include "storage/procarray.h" +#include "utils/builtins.h" +#include "utils/combocid.h" +#include "utils/snapmgr.h" + + +/* + * SetHintBits() + * + * Set commit/abort hint bits on a tuple, if appropriate at this time.
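+ * + * Worked example of the LSN interlock described below (values assumed for illustration): if the inserting xact's commit record sits at LSN 0/5000 while the page LSN is 0/4000 and the commit is not yet flushed, we skip setting HEAP_XMIN_COMMITTED for now; once XLogNeedsFlush() reports the commit record as on disk, or if the page LSN already exceeds 0/5000, setting the hint is safe.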
+ * + * It is only safe to set a transaction-committed hint bit if we know the + * transaction's commit record is guaranteed to be flushed to disk before the + * buffer, or if the table is temporary or unlogged and will be obliterated by + * a crash anyway. We cannot change the LSN of the page here, because we may + * hold only a share lock on the buffer, so we can only use the LSN to + * interlock this if the buffer's LSN already is newer than the commit LSN; + * otherwise we have to just refrain from setting the hint bit until some + * future re-examination of the tuple. + * + * We can always set hint bits when marking a transaction aborted. (Some + * code in pg_tdeam.c relies on that!) + * + * Also, if we are cleaning up HEAP_MOVED_IN or HEAP_MOVED_OFF entries, then + * we can always set the hint bits, since pre-9.0 VACUUM FULL always used + * synchronous commits and didn't move tuples that weren't previously + * hinted. (This is not known by this subroutine, but is applied by its + * callers.) Note: old-style VACUUM FULL is gone, but we have to keep this + * module's support for MOVED_OFF/MOVED_IN flag bits for as long as we + * support in-place update from pre-9.0 databases. + * + * Normal commits may be asynchronous, so for those we need to get the LSN + * of the transaction and then check whether this is flushed. + * + * The caller should pass xid as the XID of the transaction to check, or + * InvalidTransactionId if no check is needed. + */ +static inline void +SetHintBits(HeapTupleHeader tuple, Buffer buffer, + uint16 infomask, TransactionId xid) +{ + if (TransactionIdIsValid(xid)) + { + /* NB: xid must be known committed here! */ + XLogRecPtr commitLSN = TransactionIdGetCommitLSN(xid); + + if (BufferIsPermanent(buffer) && XLogNeedsFlush(commitLSN) && + BufferGetLSNAtomic(buffer) < commitLSN) + { + /* not flushed and no LSN interlock, so don't set hint */ + return; + } + } + + tuple->t_infomask |= infomask; + MarkBufferDirtyHint(buffer, true); +} + +/* + * HeapTupleSetHintBits --- exported version of SetHintBits() + * + * This must be separate because of C99's brain-dead notions about how to + * implement inline functions. + */ +void +HeapTupleSetHintBits(HeapTupleHeader tuple, Buffer buffer, + uint16 infomask, TransactionId xid) +{ + SetHintBits(tuple, buffer, infomask, xid); +} + + +/* + * HeapTupleSatisfiesSelf + * True iff heap tuple is valid "for itself". + * + * See SNAPSHOT_MVCC's definition for the intended behaviour. + * + * Note: + * Assumes heap tuple is valid. 
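+ * Also assumes the buffer is at least share-locked, since hint bits may be set here (see the note at the top of this file).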
+ * + * The satisfaction of "itself" requires the following: + * + * ((Xmin == my-transaction && the row was updated by the current transaction, and + * (Xmax is null it was not deleted + * [|| Xmax != my-transaction)]) [or it was deleted by another transaction] + * || + * + * (Xmin is committed && the row was modified by a committed transaction, and + * (Xmax is null || the row has not been deleted, or + * (Xmax != my-transaction && the row was deleted by another transaction + * Xmax is not committed))) that has not been committed + */ +static bool +HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return false; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return false; + if (!TransactionIdIsInProgress(xvac)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (TransactionIdIsInProgress(xvac)) + return false; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return true; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) /* not deleter */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + /* updating subtransaction must have aborted */ + if (!TransactionIdIsCurrentTransactionId(xmax)) + return true; + else + return false; + } + + if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + /* deleting subtransaction must have aborted */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + return false; + } + else if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmin(tuple))) + return false; + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + + /* by here, the inserting transaction has committed */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid or aborted */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_COMMITTED) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + return false; /* updated by other */ + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + + xmax = HeapTupleGetUpdateXid(tuple); + + 
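/* In the multixact case the effective updater is a single member XID; HeapTupleGetUpdateXid() extracts it so the checks below can mirror the plain-xmax path. */ +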
/* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + return false; + if (TransactionIdIsInProgress(xmax)) + return true; + if (TransactionIdDidCommit(xmax)) + return false; + /* it must have aborted or crashed */ + return true; + } + + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + return false; + } + + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + return true; + + if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + /* xmax transaction committed */ + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + return false; +} + +/* + * HeapTupleSatisfiesAny + * Dummy "satisfies" routine: any tuple satisfies SnapshotAny. + */ +static bool +HeapTupleSatisfiesAny(HeapTuple htup, Snapshot snapshot, Buffer buffer) +{ + return true; +} + +/* + * HeapTupleSatisfiesToast + * True iff heap tuple is valid as a TOAST row. + * + * See SNAPSHOT_TOAST's definition for the intended behaviour. + * + * This is a simplified version that only checks for VACUUM moving conditions. + * It's appropriate for TOAST usage because TOAST really doesn't want to do + * its own time qual checks; if you can see the main table row that contains + * a TOAST reference, you should be able to see the TOASTed value. However, + * vacuuming a TOAST table is independent of the main table, and in case such + * a vacuum fails partway through, we'd better do this much checking. + * + * Among other things, this means you can't do UPDATEs of rows in a TOAST + * table. + */ +static bool +HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return false; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return false; + if (!TransactionIdIsInProgress(xvac)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (TransactionIdIsInProgress(xvac)) + return false; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + } + + /* + * An invalid Xmin can be left behind by a speculative insertion that + * is canceled by super-deleting the tuple. This also applies to + * TOAST tuples created during speculative insertion. 
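+ * (Speculative insertion backs INSERT ... ON CONFLICT; "super-deleting" such a tuple overwrites its xmin with InvalidTransactionId rather than going through a normal abort.)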
+ */ + else if (!TransactionIdIsValid(HeapTupleHeaderGetXmin(tuple))) + return false; + } + + /* otherwise assume the tuple is valid for TOAST. */ + return true; +} + +/* + * HeapTupleSatisfiesUpdate + * + * This function returns a more detailed result code than most of the + * functions in this file, since UPDATE needs to know more than "is it + * visible?". It also allows for user-supplied CommandId rather than + * relying on CurrentCommandId. + * + * The possible return codes are: + * + * TM_Invisible: the tuple didn't exist at all when the scan started, e.g. it + * was created by a later CommandId. + * + * TM_Ok: The tuple is valid and visible, so it may be updated. + * + * TM_SelfModified: The tuple was updated by the current transaction, after + * the current scan started. + * + * TM_Updated: The tuple was updated by a committed transaction (including + * the case where the tuple was moved into a different partition). + * + * TM_Deleted: The tuple was deleted by a committed transaction. + * + * TM_BeingModified: The tuple is being updated by an in-progress transaction + * other than the current transaction. (Note: this includes the case where + * the tuple is share-locked by a MultiXact, even if the MultiXact includes + * the current transaction. Callers that want to distinguish that case must + * test for it themselves.) + */ +TM_Result +HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return TM_Invisible; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return TM_Invisible; + if (!TransactionIdIsInProgress(xvac)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return TM_Invisible; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (TransactionIdIsInProgress(xvac)) + return TM_Invisible; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return TM_Invisible; + } + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (HeapTupleHeaderGetCmin(tuple) >= curcid) + return TM_Invisible; /* inserted after scan started */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return TM_Ok; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + TransactionId xmax; + + xmax = HeapTupleHeaderGetRawXmax(tuple); + + /* + * Careful here: even though this tuple was created by our own + * transaction, it might be locked by other transactions, if + * the original version was key-share locked when we updated + * it. + */ + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + if (MultiXactIdIsRunning(xmax, true)) + return TM_BeingModified; + else + return TM_Ok; + } + + /* + * If the locker is gone, then there is nothing of interest + * left in this Xmax; otherwise, report the tuple as + * locked/updated. 
+ */ + if (!TransactionIdIsInProgress(xmax)) + return TM_Ok; + return TM_BeingModified; + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + /* deleting subtransaction must have aborted */ + if (!TransactionIdIsCurrentTransactionId(xmax)) + { + if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), + false)) + return TM_BeingModified; + return TM_Ok; + } + else + { + if (HeapTupleHeaderGetCmax(tuple) >= curcid) + return TM_SelfModified; /* updated after scan started */ + else + return TM_Invisible; /* updated before scan started */ + } + } + + if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + /* deleting subtransaction must have aborted */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return TM_Ok; + } + + if (HeapTupleHeaderGetCmax(tuple) >= curcid) + return TM_SelfModified; /* updated after scan started */ + else + return TM_Invisible; /* updated before scan started */ + } + else if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmin(tuple))) + return TM_Invisible; + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return TM_Invisible; + } + } + + /* by here, the inserting transaction has committed */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid or aborted */ + return TM_Ok; + + if (tuple->t_infomask & HEAP_XMAX_COMMITTED) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return TM_Ok; + if (!ItemPointerEquals(&htup->t_self, &tuple->t_ctid)) + return TM_Updated; /* updated by other */ + else + return TM_Deleted; /* deleted by other */ + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + if (HEAP_LOCKED_UPGRADED(tuple->t_infomask)) + return TM_Ok; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), true)) + return TM_BeingModified; + + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, InvalidTransactionId); + return TM_Ok; + } + + xmax = HeapTupleGetUpdateXid(tuple); + if (!TransactionIdIsValid(xmax)) + { + if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + return TM_BeingModified; + } + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + { + if (HeapTupleHeaderGetCmax(tuple) >= curcid) + return TM_SelfModified; /* updated after scan started */ + else + return TM_Invisible; /* updated before scan started */ + } + + if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + return TM_BeingModified; + + if (TransactionIdDidCommit(xmax)) + { + if (!ItemPointerEquals(&htup->t_self, &tuple->t_ctid)) + return TM_Updated; + else + return TM_Deleted; + } + + /* + * By here, the update in the Xmax is either aborted or crashed, but + * what about the other members? + */ + + if (!MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + { + /* + * There's no member, even just a locker, alive anymore, so we can + * mark the Xmax as invalid. 
+ */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return TM_Ok; + } + else + { + /* There are lockers running */ + return TM_BeingModified; + } + } + + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return TM_BeingModified; + if (HeapTupleHeaderGetCmax(tuple) >= curcid) + return TM_SelfModified; /* updated after scan started */ + else + return TM_Invisible; /* updated before scan started */ + } + + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + return TM_BeingModified; + + if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return TM_Ok; + } + + /* xmax transaction committed */ + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return TM_Ok; + } + + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + if (!ItemPointerEquals(&htup->t_self, &tuple->t_ctid)) + return TM_Updated; /* updated by other */ + else + return TM_Deleted; /* deleted by other */ +} + +/* + * HeapTupleSatisfiesDirty + * True iff heap tuple is valid including effects of open transactions. + * + * See SNAPSHOT_DIRTY's definition for the intended behaviour. + * + * This is essentially like HeapTupleSatisfiesSelf as far as effects of + * the current transaction and committed/aborted xacts are concerned. + * However, we also include the effects of other xacts still in progress. + * + * A special hack is that the passed-in snapshot struct is used as an + * output argument to return the xids of concurrent xacts that affected the + * tuple. snapshot->xmin is set to the tuple's xmin if that is another + * transaction that's still in progress; or to InvalidTransactionId if the + * tuple's xmin is committed good, committed dead, or my own xact. + * Similarly for snapshot->xmax and the tuple's xmax. If the tuple was + * inserted speculatively, meaning that the inserter might still back down + * on the insertion without aborting the whole transaction, the associated + * token is also returned in snapshot->speculativeToken. 
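+ * + * Typical callers, as a sketch (not an exhaustive list): unique-index insertion and similar rechecks scan with a dirty snapshot, then wait on the returned xmax with XactLockTableWait(), or on the token with SpeculativeInsertionWait(), before retrying.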
+ */ +static bool +HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + snapshot->xmin = snapshot->xmax = InvalidTransactionId; + snapshot->speculativeToken = 0; + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return false; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return false; + if (!TransactionIdIsInProgress(xvac)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (TransactionIdIsInProgress(xvac)) + return false; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return true; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) /* not deleter */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + /* updating subtransaction must have aborted */ + if (!TransactionIdIsCurrentTransactionId(xmax)) + return true; + else + return false; + } + + if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + /* deleting subtransaction must have aborted */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + return false; + } + else if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmin(tuple))) + { + /* + * Return the speculative token to caller. Caller can worry about + * xmax, since it requires a conclusively locked row version, and + * a concurrent update to this tuple is a conflict of its + * purposes. + */ + if (HeapTupleHeaderIsSpeculative(tuple)) + { + snapshot->speculativeToken = + HeapTupleHeaderGetSpeculativeToken(tuple); + + Assert(snapshot->speculativeToken != 0); + } + + snapshot->xmin = HeapTupleHeaderGetRawXmin(tuple); + /* XXX shouldn't we fall through to look at xmax? 
*/ + return true; /* in insertion by other */ + } + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + + /* by here, the inserting transaction has committed */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid or aborted */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_COMMITTED) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + return false; /* updated by other */ + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + return false; + if (TransactionIdIsInProgress(xmax)) + { + snapshot->xmax = xmax; + return true; + } + if (TransactionIdDidCommit(xmax)) + return false; + /* it must have aborted or crashed */ + return true; + } + + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + return false; + } + + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + { + if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + snapshot->xmax = HeapTupleHeaderGetRawXmax(tuple); + return true; + } + + if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + /* xmax transaction committed */ + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + return false; /* updated by other */ +} + +/* + * HeapTupleSatisfiesMVCC + * True iff heap tuple is valid for the given MVCC snapshot. + * + * See SNAPSHOT_MVCC's definition for the intended behaviour. + * + * Notice that here, we will not update the tuple status hint bits if the + * inserting/deleting transaction is still running according to our snapshot, + * even if in reality it's committed or aborted by now. This is intentional. + * Checking the true transaction state would require access to high-traffic + * shared data structures, creating contention we'd rather do without, and it + * would not change the result of our visibility check anyway. The hint bits + * will be updated by the first visitor that has a snapshot new enough to see + * the inserting/deleting transaction as done. In the meantime, the cost of + * leaving the hint bits unset is basically that each HeapTupleSatisfiesMVCC + * call will need to run TransactionIdIsCurrentTransactionId in addition to + * XidInMVCCSnapshot (but it would have to do the latter anyway). In the old + * coding where we tried to set the hint bits as soon as possible, we instead + * did TransactionIdIsInProgress in each call --- to no avail, as long as the + * inserting/deleting transaction was still running --- which was more cycles + * and more contention on ProcArrayLock. 
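+ * + * In short, for both xmin and xmax the check order here is: our own transaction first (TransactionIdIsCurrentTransactionId), then still-running according to the snapshot (XidInMVCCSnapshot), and only then pg_xact (TransactionIdDidCommit).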
+ */ +static bool +HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return false; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return false; + if (!XidInMVCCSnapshot(xvac, snapshot)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (XidInMVCCSnapshot(xvac, snapshot)) + return false; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (HeapTupleHeaderGetCmin(tuple) >= snapshot->curcid) + return false; /* inserted after scan started */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return true; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) /* not deleter */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + /* updating subtransaction must have aborted */ + if (!TransactionIdIsCurrentTransactionId(xmax)) + return true; + else if (HeapTupleHeaderGetCmax(tuple) >= snapshot->curcid) + return true; /* updated after scan started */ + else + return false; /* updated before scan started */ + } + + if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + /* deleting subtransaction must have aborted */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + if (HeapTupleHeaderGetCmax(tuple) >= snapshot->curcid) + return true; /* deleted after scan started */ + else + return false; /* deleted before scan started */ + } + else if (XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot)) + return false; + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + else + { + /* xmin is committed, but maybe not according to our snapshot */ + if (!HeapTupleHeaderXminFrozen(tuple) && + XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot)) + return false; /* treat as still in progress */ + } + + /* by here, the inserting transaction has committed */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid or aborted */ + return true; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + /* already checked above */ + Assert(!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)); + + xmax = 
HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + { + if (HeapTupleHeaderGetCmax(tuple) >= snapshot->curcid) + return true; /* deleted after scan started */ + else + return false; /* deleted before scan started */ + } + if (XidInMVCCSnapshot(xmax, snapshot)) + return true; + if (TransactionIdDidCommit(xmax)) + return false; /* updating transaction committed */ + /* it must have aborted or crashed */ + return true; + } + + if (!(tuple->t_infomask & HEAP_XMAX_COMMITTED)) + { + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + if (HeapTupleHeaderGetCmax(tuple) >= snapshot->curcid) + return true; /* deleted after scan started */ + else + return false; /* deleted before scan started */ + } + + if (XidInMVCCSnapshot(HeapTupleHeaderGetRawXmax(tuple), snapshot)) + return true; + + if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + /* xmax transaction committed */ + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + } + else + { + /* xmax is committed, but maybe not according to our snapshot */ + if (XidInMVCCSnapshot(HeapTupleHeaderGetRawXmax(tuple), snapshot)) + return true; /* treat as still in progress */ + } + + /* xmax transaction committed */ + + return false; +} + + +/* + * HeapTupleSatisfiesVacuum + * + * Determine the status of tuples for VACUUM purposes. Here, what + * we mainly want to know is if a tuple is potentially visible to *any* + * running transaction. If so, it can't be removed yet by VACUUM. + * + * OldestXmin is a cutoff XID (obtained from + * GetOldestNonRemovableTransactionId()). Tuples deleted by XIDs >= + * OldestXmin are deemed "recently dead"; they might still be visible to some + * open transaction, so we can't remove them, even if we see that the deleting + * transaction has committed. + */ +HTSV_Result +HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin, + Buffer buffer) +{ + TransactionId dead_after = InvalidTransactionId; + HTSV_Result res; + + res = HeapTupleSatisfiesVacuumHorizon(htup, buffer, &dead_after); + + if (res == HEAPTUPLE_RECENTLY_DEAD) + { + Assert(TransactionIdIsValid(dead_after)); + + if (TransactionIdPrecedes(dead_after, OldestXmin)) + res = HEAPTUPLE_DEAD; + } + else + Assert(!TransactionIdIsValid(dead_after)); + + return res; +} + +/* + * Work horse for HeapTupleSatisfiesVacuum and similar routines. + * + * In contrast to HeapTupleSatisfiesVacuum this routine, when encountering a + * tuple that could still be visible to some backend, stores the xid that + * needs to be compared with the horizon in *dead_after, and returns + * HEAPTUPLE_RECENTLY_DEAD. The caller then can perform the comparison with + * the horizon. This is e.g. useful when comparing with different horizons. + * + * Note: HEAPTUPLE_DEAD can still be returned here, e.g. if the inserting + * transaction aborted. + */ +HTSV_Result +HeapTupleSatisfiesVacuumHorizon(HeapTuple htup, Buffer buffer, TransactionId *dead_after) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + Assert(dead_after != NULL); + + *dead_after = InvalidTransactionId; + + /* + * Has inserting transaction committed? 
+ * + * If the inserting transaction aborted, then the tuple was never visible + * to any other transaction, so we can delete it immediately. + */ + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return HEAPTUPLE_DEAD; + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return HEAPTUPLE_DELETE_IN_PROGRESS; + if (TransactionIdIsInProgress(xvac)) + return HEAPTUPLE_DELETE_IN_PROGRESS; + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return HEAPTUPLE_DEAD; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return HEAPTUPLE_INSERT_IN_PROGRESS; + if (TransactionIdIsInProgress(xvac)) + return HEAPTUPLE_INSERT_IN_PROGRESS; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return HEAPTUPLE_DEAD; + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return HEAPTUPLE_INSERT_IN_PROGRESS; + /* only locked? run infomask-only check first, for performance */ + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) || + HeapTupleHeaderIsOnlyLocked(tuple)) + return HEAPTUPLE_INSERT_IN_PROGRESS; + /* inserted and then deleted by same xact */ + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(tuple))) + return HEAPTUPLE_DELETE_IN_PROGRESS; + /* deleting subtransaction must have aborted */ + return HEAPTUPLE_INSERT_IN_PROGRESS; + } + else if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmin(tuple))) + { + /* + * It'd be possible to discern between INSERT/DELETE in progress + * here by looking at xmax - but that doesn't seem beneficial for + * the majority of callers and even detrimental for some. We'd + * rather have callers look at/wait for xmin than xmax. It's + * always correct to return INSERT_IN_PROGRESS because that's + * what's happening from the view of other backends. + */ + return HEAPTUPLE_INSERT_IN_PROGRESS; + } + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* + * Not in Progress, Not Committed, so either Aborted or crashed + */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return HEAPTUPLE_DEAD; + } + + /* + * At this point the xmin is known committed, but we might not have + * been able to set the hint bit yet; so we can no longer Assert that + * it's set. + */ + } + + /* + * Okay, the inserter committed, so it was good at some point. Now what + * about the deleting transaction? + */ + if (tuple->t_infomask & HEAP_XMAX_INVALID) + return HEAPTUPLE_LIVE; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + /* + * "Deleting" xact really only locked it, so the tuple is live in any + * case. However, we should make sure that either XMAX_COMMITTED or + * XMAX_INVALID gets set once the xact is gone, to reduce the costs of + * examining the tuple for future xacts. 
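+ * (The hint is purely an optimization; visibility results remain correct without it.)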
+ */ + if (!(tuple->t_infomask & HEAP_XMAX_COMMITTED)) + { + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + /* + * If it's a pre-pg_upgrade tuple, the multixact cannot + * possibly be running; otherwise have to check. + */ + if (!HEAP_LOCKED_UPGRADED(tuple->t_infomask) && + MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), + true)) + return HEAPTUPLE_LIVE; + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, InvalidTransactionId); + } + else + { + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + return HEAPTUPLE_LIVE; + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + } + } + + /* + * We don't really care whether xmax did commit, abort or crash. We + * know that xmax did lock the tuple, but it did not and will never + * actually update it. + */ + + return HEAPTUPLE_LIVE; + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax = HeapTupleGetUpdateXid(tuple); + + /* already checked above */ + Assert(!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsInProgress(xmax)) + return HEAPTUPLE_DELETE_IN_PROGRESS; + else if (TransactionIdDidCommit(xmax)) + { + /* + * The multixact might still be running due to lockers. Need to + * allow for pruning if below the xid horizon regardless -- + * otherwise we could end up with a tuple where the updater has to + * be removed due to the horizon, but is not pruned away. It's + * not a problem to prune that tuple, because any remaining + * lockers will also be present in newer tuple versions. + */ + *dead_after = xmax; + return HEAPTUPLE_RECENTLY_DEAD; + } + else if (!MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + { + /* + * Not in Progress, Not Committed, so either Aborted or crashed. + * Mark the Xmax as invalid. + */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, InvalidTransactionId); + } + + return HEAPTUPLE_LIVE; + } + + if (!(tuple->t_infomask & HEAP_XMAX_COMMITTED)) + { + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + return HEAPTUPLE_DELETE_IN_PROGRESS; + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + else + { + /* + * Not in Progress, Not Committed, so either Aborted or crashed + */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return HEAPTUPLE_LIVE; + } + + /* + * At this point the xmax is known committed, but we might not have + * been able to set the hint bit yet; so we can no longer Assert that + * it's set. + */ + } + + /* + * Deleter committed, allow caller to check if it was recent enough that + * some open transactions could still see the tuple. + */ + *dead_after = HeapTupleHeaderGetRawXmax(tuple); + return HEAPTUPLE_RECENTLY_DEAD; +} + + +/* + * HeapTupleSatisfiesNonVacuumable + * + * True if tuple might be visible to some transaction; false if it's + * surely dead to everyone, ie, vacuumable. + * + * See SNAPSHOT_NON_VACUUMABLE's definition for the intended behaviour. + * + * This is an interface to HeapTupleSatisfiesVacuum that's callable via + * HeapTupleSatisfiesSnapshot, so it can be used through a Snapshot. + * snapshot->vistest must have been set up with the horizon to use. 
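+ * + * A typical setup, as a sketch (see the uses in selfuncs.c for a real example): + * + * SnapshotData snap; + * InitNonVacuumableSnapshot(snap, GlobalVisTestFor(heapRel)); + * + * after which a scan using &snap returns only tuples that some transaction might still see.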
+ */ +static bool +HeapTupleSatisfiesNonVacuumable(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + TransactionId dead_after = InvalidTransactionId; + HTSV_Result res; + + res = HeapTupleSatisfiesVacuumHorizon(htup, buffer, &dead_after); + + if (res == HEAPTUPLE_RECENTLY_DEAD) + { + Assert(TransactionIdIsValid(dead_after)); + + if (GlobalVisTestIsRemovableXid(snapshot->vistest, dead_after)) + res = HEAPTUPLE_DEAD; + } + else + Assert(!TransactionIdIsValid(dead_after)); + + return res != HEAPTUPLE_DEAD; +} + + +/* + * HeapTupleIsSurelyDead + * + * Cheaply determine whether a tuple is surely dead to all onlookers. + * We sometimes use this in lieu of HeapTupleSatisfiesVacuum when the + * tuple has just been tested by another visibility routine (usually + * HeapTupleSatisfiesMVCC) and, therefore, any hint bits that can be set + * should already be set. We assume that if no hint bits are set, the xmin + * or xmax transaction is still running. This is therefore faster than + * HeapTupleSatisfiesVacuum, because we consult neither procarray nor CLOG. + * It's okay to return false when in doubt, but we must return true only + * if the tuple is removable. + */ +bool +HeapTupleIsSurelyDead(HeapTuple htup, GlobalVisState *vistest) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + /* + * If the inserting transaction is marked invalid, then it aborted, and + * the tuple is definitely dead. If it's marked neither committed nor + * invalid, then we assume it's still alive (since the presumption is that + * all relevant hint bits were just set moments ago). + */ + if (!HeapTupleHeaderXminCommitted(tuple)) + return HeapTupleHeaderXminInvalid(tuple); + + /* + * If the inserting transaction committed, but any deleting transaction + * aborted, the tuple is still alive. + */ + if (tuple->t_infomask & HEAP_XMAX_INVALID) + return false; + + /* + * If the XMAX is just a lock, the tuple is still alive. + */ + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return false; + + /* + * If the Xmax is a MultiXact, it might be dead or alive, but we cannot + * know without checking pg_multixact. + */ + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + return false; + + /* If deleter isn't known to have committed, assume it's still running. */ + if (!(tuple->t_infomask & HEAP_XMAX_COMMITTED)) + return false; + + /* Deleter committed, so tuple is dead if the XID is old enough. */ + return GlobalVisTestIsRemovableXid(vistest, + HeapTupleHeaderGetRawXmax(tuple)); +} + +/* + * Is the tuple really only locked? That is, is it not updated? + * + * It's easy to check just infomask bits if the locker is not a multi; but + * otherwise we need to verify that the updating transaction has not aborted. + * + * This function is here because it follows the same visibility rules laid out + * at the top of this file. + */ +bool +HeapTupleHeaderIsOnlyLocked(HeapTupleHeader tuple) +{ + TransactionId xmax; + + /* if there's no valid Xmax, then there's obviously no update either */ + if (tuple->t_infomask & HEAP_XMAX_INVALID) + return true; + + if (tuple->t_infomask & HEAP_XMAX_LOCK_ONLY) + return true; + + /* invalid xmax means no update */ + if (!TransactionIdIsValid(HeapTupleHeaderGetRawXmax(tuple))) + return true; + + /* + * if HEAP_XMAX_LOCK_ONLY is not set and not a multi, then this must + * necessarily have been updated + */ + if (!(tuple->t_infomask & HEAP_XMAX_IS_MULTI)) + return false; + + /* ... 
but if it's a multi, then perhaps the updating Xid aborted. */ + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + return false; + if (TransactionIdIsInProgress(xmax)) + return false; + if (TransactionIdDidCommit(xmax)) + return false; + + /* + * not current, not in progress, not committed -- must have aborted or + * crashed + */ + return true; +} + +/* + * check whether the transaction id 'xid' is in the pre-sorted array 'xip'. + */ +static bool +TransactionIdInArray(TransactionId xid, TransactionId *xip, Size num) +{ + return num > 0 && + bsearch(&xid, xip, num, sizeof(TransactionId), xidComparator) != NULL; +} + +/* + * See the comments for HeapTupleSatisfiesMVCC for the semantics this function + * obeys. + * + * Only usable on tuples from catalog tables! + * + * We don't need to support HEAP_MOVED_(IN|OFF) for now because we only support + * reading catalog pages which couldn't have been created in an older version. + * + * We don't set any hint bits in here as it seems unlikely to be beneficial as + * those should already be set by normal access and it seems to be too + * dangerous to do so as the semantics of doing so during timetravel are more + * complicated than when dealing "only" with the present. + */ +static bool +HeapTupleSatisfiesHistoricMVCC(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + TransactionId xmin = HeapTupleHeaderGetXmin(tuple); + TransactionId xmax = HeapTupleHeaderGetRawXmax(tuple); + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + /* inserting transaction aborted */ + if (HeapTupleHeaderXminInvalid(tuple)) + { + Assert(!TransactionIdDidCommit(xmin)); + return false; + } + /* check if it's one of our txids, toplevel is also in there */ + else if (TransactionIdInArray(xmin, snapshot->subxip, snapshot->subxcnt)) + { + bool resolved; + CommandId cmin = HeapTupleHeaderGetRawCommandId(tuple); + CommandId cmax = InvalidCommandId; + + /* + * another transaction might have (tried to) delete this tuple or + * cmin/cmax was stored in a combo CID. So we need to lookup the + * actual values externally. + */ + resolved = ResolveCminCmaxDuringDecoding(HistoricSnapshotGetTupleCids(), snapshot, + htup, buffer, + &cmin, &cmax); + + /* + * If we haven't resolved the combo CID to cmin/cmax, that means we + * have not decoded the combo CID yet. That means the cmin is + * definitely in the future, and we're not supposed to see the tuple + * yet. + * + * XXX This only applies to decoding of in-progress transactions. In + * regular logical decoding we only execute this code at commit time, + * at which point we should have seen all relevant combo CIDs. So + * ideally, we should error out in this case but in practice, this + * won't happen. If we are too worried about this then we can add an + * elog inside ResolveCminCmaxDuringDecoding. + * + * XXX For the streaming case, we can track the largest combo CID + * assigned, and error out based on this (when unable to resolve combo + * CID below that observed maximum value). + */ + if (!resolved) + return false; + + Assert(cmin != InvalidCommandId); + + if (cmin >= snapshot->curcid) + return false; /* inserted after scan started */ + /* fall through */ + } + /* committed before our xmin horizon. Do a normal visibility check. 
*/ + else if (TransactionIdPrecedes(xmin, snapshot->xmin)) + { + Assert(!(HeapTupleHeaderXminCommitted(tuple) && + !TransactionIdDidCommit(xmin))); + + /* check for hint bit first, consult clog afterwards */ + if (!HeapTupleHeaderXminCommitted(tuple) && + !TransactionIdDidCommit(xmin)) + return false; + /* fall through */ + } + /* beyond our xmax horizon, i.e. invisible */ + else if (TransactionIdFollowsOrEquals(xmin, snapshot->xmax)) + { + return false; + } + /* check if it's a committed transaction in [xmin, xmax) */ + else if (TransactionIdInArray(xmin, snapshot->xip, snapshot->xcnt)) + { + /* fall through */ + } + + /* + * none of the above, i.e. between [xmin, xmax) but hasn't committed. I.e. + * invisible. + */ + else + { + return false; + } + + /* at this point we know xmin is visible, go on to check xmax */ + + /* xid invalid or aborted */ + if (tuple->t_infomask & HEAP_XMAX_INVALID) + return true; + /* locked tuples are always visible */ + else if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + + /* + * We can see multis here if we're looking at user tables or if somebody + * SELECT ... FOR SHARE/UPDATE a system table. + */ + else if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + xmax = HeapTupleGetUpdateXid(tuple); + } + + /* check if it's one of our txids, toplevel is also in there */ + if (TransactionIdInArray(xmax, snapshot->subxip, snapshot->subxcnt)) + { + bool resolved; + CommandId cmin; + CommandId cmax = HeapTupleHeaderGetRawCommandId(tuple); + + /* Lookup actual cmin/cmax values */ + resolved = ResolveCminCmaxDuringDecoding(HistoricSnapshotGetTupleCids(), snapshot, + htup, buffer, + &cmin, &cmax); + + /* + * If we haven't resolved the combo CID to cmin/cmax, that means we + * have not decoded the combo CID yet. That means the cmax is + * definitely in the future, and we're still supposed to see the + * tuple. + * + * XXX This only applies to decoding of in-progress transactions. In + * regular logical decoding we only execute this code at commit time, + * at which point we should have seen all relevant combo CIDs. So + * ideally, we should error out in this case but in practice, this + * won't happen. If we are too worried about this then we can add an + * elog inside ResolveCminCmaxDuringDecoding. + * + * XXX For the streaming case, we can track the largest combo CID + * assigned, and error out based on this (when unable to resolve combo + * CID below that observed maximum value). + */ + if (!resolved || cmax == InvalidCommandId) + return true; + + if (cmax >= snapshot->curcid) + return true; /* deleted after scan started */ + else + return false; /* deleted before scan started */ + } + /* below xmin horizon, normal transaction state is valid */ + else if (TransactionIdPrecedes(xmax, snapshot->xmin)) + { + Assert(!(tuple->t_infomask & HEAP_XMAX_COMMITTED && + !TransactionIdDidCommit(xmax))); + + /* check hint bit first */ + if (tuple->t_infomask & HEAP_XMAX_COMMITTED) + return false; + + /* check clog */ + return !TransactionIdDidCommit(xmax); + } + /* above xmax horizon, we cannot possibly see the deleting transaction */ + else if (TransactionIdFollowsOrEquals(xmax, snapshot->xmax)) + return true; + /* xmax is between [xmin, xmax), check known committed array */ + else if (TransactionIdInArray(xmax, snapshot->xip, snapshot->xcnt)) + return false; + /* xmax is between [xmin, xmax), but known not to have committed yet */ + else + return true; +} + +/* + * HeapTupleSatisfiesVisibility + * True iff heap tuple satisfies a time qual. 
+ * + * Notes: + * Assumes heap tuple is valid, and buffer at least share locked. + * + * Hint bits in the HeapTuple's t_infomask may be updated as a side effect; + * if so, the indicated buffer is marked dirty. + */ +bool +HeapTupleSatisfiesVisibility(HeapTuple htup, Snapshot snapshot, Buffer buffer) +{ + switch (snapshot->snapshot_type) + { + case SNAPSHOT_MVCC: + return HeapTupleSatisfiesMVCC(htup, snapshot, buffer); + case SNAPSHOT_SELF: + return HeapTupleSatisfiesSelf(htup, snapshot, buffer); + case SNAPSHOT_ANY: + return HeapTupleSatisfiesAny(htup, snapshot, buffer); + case SNAPSHOT_TOAST: + return HeapTupleSatisfiesToast(htup, snapshot, buffer); + case SNAPSHOT_DIRTY: + return HeapTupleSatisfiesDirty(htup, snapshot, buffer); + case SNAPSHOT_HISTORIC_MVCC: + return HeapTupleSatisfiesHistoricMVCC(htup, snapshot, buffer); + case SNAPSHOT_NON_VACUUMABLE: + return HeapTupleSatisfiesNonVacuumable(htup, snapshot, buffer); + } + + return false; /* keep compiler quiet */ +} diff --git a/contrib/pg_tde/src16/access/pg_tdetoast.c b/contrib/pg_tde/src16/access/pg_tdetoast.c new file mode 100644 index 00000000000..350c5291326 --- /dev/null +++ b/contrib/pg_tde/src16/access/pg_tdetoast.c @@ -0,0 +1,1262 @@ +/*------------------------------------------------------------------------- + * + * heaptoast.c + * Heap-specific definitions for external and compressed storage + * of variable size attributes. + * + * Copyright (c) 2000-2023, PostgreSQL Global Development Group + * + * + * IDENTIFICATION + * src/backend/access/heap/heaptoast.c + * + * + * INTERFACE ROUTINES + * tdeheap_toast_insert_or_update - + * Try to make a given tuple fit into one page by compressing + * or moving off attributes + * + * tdeheap_toast_delete - + * Reclaim toast storage when a tuple is deleted + * + *------------------------------------------------------------------------- + */ +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tdeam.h" +#include "access/pg_tdetoast.h" + +#include "access/detoast.h" +#include "access/genam.h" +#include "access/toast_helper.h" +#include "access/toast_internals.h" +#include "miscadmin.h" +#include "utils/fmgroids.h" +#include "utils/snapmgr.h" +#include "encryption/enc_tde.h" + +#define TDE_TOAST_COMPRESS_HEADER_SIZE (VARHDRSZ_COMPRESSED - VARHDRSZ) + +static void tdeheap_toast_tuple_externalize(ToastTupleContext *ttc, + int attribute, int options); +static Datum tdeheap_toast_save_datum(Relation rel, Datum value, + struct varlena *oldexternal, + int options); +static void tdeheap_toast_encrypt(Pointer dval, Oid valueid, RelKeyData *keys); +static bool toastrel_valueid_exists(Relation toastrel, Oid valueid); +static bool toastid_valueid_exists(Oid toastrelid, Oid valueid); + + +/* ---------- + * tdeheap_toast_delete - + * + * Cascaded delete toast-entries on DELETE + * ---------- + */ +void +tdeheap_toast_delete(Relation rel, HeapTuple oldtup, bool is_speculative) +{ + TupleDesc tupleDesc; + Datum toast_values[MaxHeapAttributeNumber]; + bool toast_isnull[MaxHeapAttributeNumber]; + + /* + * We should only ever be called for tuples of plain relations or + * materialized views --- recursing on a toast rel is bad news. + */ + Assert(rel->rd_rel->relkind == RELKIND_RELATION || + rel->rd_rel->relkind == RELKIND_MATVIEW); + + /* + * Get the tuple descriptor and break down the tuple into fields. + * + * NOTE: it's debatable whether to use tdeheap_deform_tuple() here or just + * tdeheap_getattr() only the varlena columns. 
The latter could win if there + * are few varlena columns and many non-varlena ones. However, + * tdeheap_deform_tuple costs only O(N) while the tdeheap_getattr way would cost + * O(N^2) if there are many varlena columns, so it seems better to err on + * the side of linear cost. (We won't even be here unless there's at + * least one varlena column, by the way.) + */ + tupleDesc = rel->rd_att; + + Assert(tupleDesc->natts <= MaxHeapAttributeNumber); + tdeheap_deform_tuple(oldtup, tupleDesc, toast_values, toast_isnull); + + /* Do the real work. */ + toast_delete_external(rel, toast_values, toast_isnull, is_speculative); +} + + +/* ---------- + * tdeheap_toast_insert_or_update - + * + * Delete no-longer-used toast-entries and create new ones to + * make the new tuple fit on INSERT or UPDATE + * + * Inputs: + * newtup: the candidate new tuple to be inserted + * oldtup: the old row version for UPDATE, or NULL for INSERT + * options: options to be passed to tdeheap_insert() for toast rows + * Result: + * either newtup if no toasting is needed, or a palloc'd modified tuple + * that is what should actually get stored + * + * NOTE: neither newtup nor oldtup will be modified. This is a change + * from the pre-8.1 API of this routine. + * ---------- + */ +HeapTuple +tdeheap_toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup, + int options) +{ + HeapTuple result_tuple; + TupleDesc tupleDesc; + int numAttrs; + + Size maxDataLen; + Size hoff; + + bool toast_isnull[MaxHeapAttributeNumber]; + bool toast_oldisnull[MaxHeapAttributeNumber]; + Datum toast_values[MaxHeapAttributeNumber]; + Datum toast_oldvalues[MaxHeapAttributeNumber]; + ToastAttrInfo toast_attr[MaxHeapAttributeNumber]; + ToastTupleContext ttc; + + /* + * Ignore the INSERT_SPECULATIVE option. Speculative insertions/super + * deletions just normally insert/delete the toast values. It seems + * easiest to deal with that here, instead on, potentially, multiple + * callers. + */ + options &= ~HEAP_INSERT_SPECULATIVE; + + /* + * We should only ever be called for tuples of plain relations or + * materialized views --- recursing on a toast rel is bad news. + */ + Assert(rel->rd_rel->relkind == RELKIND_RELATION || + rel->rd_rel->relkind == RELKIND_MATVIEW); + + /* + * Get the tuple descriptor and break down the tuple(s) into fields. 
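That deforming step, in isolation, looks roughly like this (a sketch only, assuming an open `Relation rel` and a `HeapTuple tup`):

```c
Datum	values[MaxHeapAttributeNumber];
bool	isnull[MaxHeapAttributeNumber];

/* One O(natts) pass: every attribute is decoded exactly once, whereas a
 * per-column tdeheap_getattr() loop can degrade to O(natts^2) when
 * attribute offsets cannot be cached. */
tdeheap_deform_tuple(tup, RelationGetDescr(rel), values, isnull);
```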
+ */ + tupleDesc = rel->rd_att; + numAttrs = tupleDesc->natts; + + Assert(numAttrs <= MaxHeapAttributeNumber); + tdeheap_deform_tuple(newtup, tupleDesc, toast_values, toast_isnull); + if (oldtup != NULL) + tdeheap_deform_tuple(oldtup, tupleDesc, toast_oldvalues, toast_oldisnull); + + /* ---------- + * Prepare for toasting + * ---------- + */ + ttc.ttc_rel = rel; + ttc.ttc_values = toast_values; + ttc.ttc_isnull = toast_isnull; + if (oldtup == NULL) + { + ttc.ttc_oldvalues = NULL; + ttc.ttc_oldisnull = NULL; + } + else + { + ttc.ttc_oldvalues = toast_oldvalues; + ttc.ttc_oldisnull = toast_oldisnull; + } + ttc.ttc_attr = toast_attr; + toast_tuple_init(&ttc); + + /* ---------- + * Compress and/or save external until data fits into target length + * + * 1: Inline compress attributes with attstorage EXTENDED, and store very + * large attributes with attstorage EXTENDED or EXTERNAL external + * immediately + * 2: Store attributes with attstorage EXTENDED or EXTERNAL external + * 3: Inline compress attributes with attstorage MAIN + * 4: Store attributes with attstorage MAIN external + * ---------- + */ + + /* compute header overhead --- this should match tdeheap_form_tuple() */ + hoff = SizeofHeapTupleHeader; + if ((ttc.ttc_flags & TOAST_HAS_NULLS) != 0) + hoff += BITMAPLEN(numAttrs); + hoff = MAXALIGN(hoff); + /* now convert to a limit on the tuple data size */ + maxDataLen = RelationGetToastTupleTarget(rel, TOAST_TUPLE_TARGET) - hoff; + + /* + * Look for attributes with attstorage EXTENDED to compress. Also find + * large attributes with attstorage EXTENDED or EXTERNAL, and store them + * external. + */ + while (tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull) > maxDataLen) + { + int biggest_attno; + + biggest_attno = toast_tuple_find_biggest_attribute(&ttc, true, false); + if (biggest_attno < 0) + break; + + /* + * Attempt to compress it inline, if it has attstorage EXTENDED + */ + if (TupleDescAttr(tupleDesc, biggest_attno)->attstorage == TYPSTORAGE_EXTENDED) + toast_tuple_try_compression(&ttc, biggest_attno); + else + { + /* + * has attstorage EXTERNAL, ignore on subsequent compression + * passes + */ + toast_attr[biggest_attno].tai_colflags |= TOASTCOL_INCOMPRESSIBLE; + } + + /* + * If this value is by itself more than maxDataLen (after compression + * if any), push it out to the toast table immediately, if possible. + * This avoids uselessly compressing other fields in the common case + * where we have one long field and several short ones. + * + * XXX maybe the threshold should be less than maxDataLen? + */ + if (toast_attr[biggest_attno].tai_size > maxDataLen && + rel->rd_rel->reltoastrelid != InvalidOid) + tdeheap_toast_tuple_externalize(&ttc, biggest_attno, options); + } + + /* + * Second we look for attributes of attstorage EXTENDED or EXTERNAL that + * are still inline, and make them external. But skip this if there's no + * toast table to push them to. 
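Condensed, the four passes enumerated above amount to the following control flow; the helper names (`tuple_too_big()`, `has_toast_table()`, and the per-pass workers) are invented for illustration, and the real loops follow below:

```c
while (tuple_too_big())							/* pass 1: EXTENDED columns */
	compress_or_push_out_biggest_extended();
while (tuple_too_big() && has_toast_table())	/* pass 2: externalize them */
	externalize_biggest_extended_or_external();
while (tuple_too_big())							/* pass 3: now compress MAIN */
	compress_biggest_main();
maxDataLen = TOAST_TUPLE_TARGET_MAIN - hoff;	/* pass 4: relaxed target */
while (tuple_too_big() && has_toast_table())
	externalize_biggest_main();
```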
+ */ + while (tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull) > maxDataLen && + rel->rd_rel->reltoastrelid != InvalidOid) + { + int biggest_attno; + + biggest_attno = toast_tuple_find_biggest_attribute(&ttc, false, false); + if (biggest_attno < 0) + break; + tdeheap_toast_tuple_externalize(&ttc, biggest_attno, options); + } + + /* + * Round 3 - this time we take attributes with storage MAIN into + * compression + */ + while (tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull) > maxDataLen) + { + int biggest_attno; + + biggest_attno = toast_tuple_find_biggest_attribute(&ttc, true, true); + if (biggest_attno < 0) + break; + + toast_tuple_try_compression(&ttc, biggest_attno); + } + + /* + * Finally we store attributes of type MAIN externally. At this point we + * increase the target tuple size, so that MAIN attributes aren't stored + * externally unless really necessary. + */ + maxDataLen = TOAST_TUPLE_TARGET_MAIN - hoff; + + while (tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull) > maxDataLen && + rel->rd_rel->reltoastrelid != InvalidOid) + { + int biggest_attno; + + biggest_attno = toast_tuple_find_biggest_attribute(&ttc, false, true); + if (biggest_attno < 0) + break; + + tdeheap_toast_tuple_externalize(&ttc, biggest_attno, options); + } + + /* + * In the case we toasted any values, we need to build a new heap tuple + * with the changed values. + */ + if ((ttc.ttc_flags & TOAST_NEEDS_CHANGE) != 0) + { + HeapTupleHeader olddata = newtup->t_data; + HeapTupleHeader new_data; + int32 new_header_len; + int32 new_data_len; + int32 new_tuple_len; + + /* + * Calculate the new size of the tuple. + * + * Note: we used to assume here that the old tuple's t_hoff must equal + * the new_header_len value, but that was incorrect. The old tuple + * might have a smaller-than-current natts, if there's been an ALTER + * TABLE ADD COLUMN since it was stored; and that would lead to a + * different conclusion about the size of the null bitmap, or even + * whether there needs to be one at all. + */ + new_header_len = SizeofHeapTupleHeader; + if ((ttc.ttc_flags & TOAST_HAS_NULLS) != 0) + new_header_len += BITMAPLEN(numAttrs); + new_header_len = MAXALIGN(new_header_len); + new_data_len = tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull); + new_tuple_len = new_header_len + new_data_len; + + /* + * Allocate and zero the space needed, and fill HeapTupleData fields. + */ + result_tuple = (HeapTuple) palloc0(HEAPTUPLESIZE + new_tuple_len); + result_tuple->t_len = new_tuple_len; + result_tuple->t_self = newtup->t_self; + result_tuple->t_tableOid = newtup->t_tableOid; + new_data = (HeapTupleHeader) ((char *) result_tuple + HEAPTUPLESIZE); + result_tuple->t_data = new_data; + + /* + * Copy the existing tuple header, but adjust natts and t_hoff. + */ + memcpy(new_data, olddata, SizeofHeapTupleHeader); + HeapTupleHeaderSetNatts(new_data, numAttrs); + new_data->t_hoff = new_header_len; + + /* Copy over the data, and fill the null bitmap if needed */ + tdeheap_fill_tuple(tupleDesc, + toast_values, + toast_isnull, + (char *) new_data + new_header_len, + new_data_len, + &(new_data->t_infomask), + ((ttc.ttc_flags & TOAST_HAS_NULLS) != 0) ? + new_data->t_bits : NULL); + } + else + result_tuple = newtup; + + toast_tuple_cleanup(&ttc); + + return result_tuple; +} + + +/* ---------- + * toast_flatten_tuple - + * + * "Flatten" a tuple to contain no out-of-line toasted fields. + * (This does not eliminate compressed or short-header datums.) 
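The header-length arithmetic used in the rebuild above can be followed with concrete numbers; on a typical build where `SizeofHeapTupleHeader` is 23 and `MAXIMUM_ALIGNOF` is 8 (illustrative assumptions):

```c
/* A 10-attribute tuple that contains at least one NULL: */
int32	hdr = SizeofHeapTupleHeader;	/* 23 bytes of fixed header */

hdr += BITMAPLEN(10);					/* (10 + 7) / 8 = 2 bitmap bytes -> 25 */
hdr = MAXALIGN(hdr);					/* round up to 8-byte multiple -> 32 */
/* t_hoff = 32; attribute data starts at this offset */
```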
+ * + * Note: we expect the caller already checked HeapTupleHasExternal(tup), + * so there is no need for a short-circuit path. + * ---------- + */ +HeapTuple +toast_flatten_tuple(HeapTuple tup, TupleDesc tupleDesc) +{ + HeapTuple new_tuple; + int numAttrs = tupleDesc->natts; + int i; + Datum toast_values[MaxTupleAttributeNumber]; + bool toast_isnull[MaxTupleAttributeNumber]; + bool toast_free[MaxTupleAttributeNumber]; + + /* + * Break down the tuple into fields. + */ + Assert(numAttrs <= MaxTupleAttributeNumber); + tdeheap_deform_tuple(tup, tupleDesc, toast_values, toast_isnull); + + memset(toast_free, 0, numAttrs * sizeof(bool)); + + for (i = 0; i < numAttrs; i++) + { + /* + * Look at non-null varlena attributes + */ + if (!toast_isnull[i] && TupleDescAttr(tupleDesc, i)->attlen == -1) + { + struct varlena *new_value; + + new_value = (struct varlena *) DatumGetPointer(toast_values[i]); + if (VARATT_IS_EXTERNAL(new_value)) + { + new_value = detoast_external_attr(new_value); + toast_values[i] = PointerGetDatum(new_value); + toast_free[i] = true; + } + } + } + + /* + * Form the reconfigured tuple. + */ + new_tuple = tdeheap_form_tuple(tupleDesc, toast_values, toast_isnull); + + /* + * Be sure to copy the tuple's identity fields. We also make a point of + * copying visibility info, just in case anybody looks at those fields in + * a syscache entry. + */ + new_tuple->t_self = tup->t_self; + new_tuple->t_tableOid = tup->t_tableOid; + + new_tuple->t_data->t_choice = tup->t_data->t_choice; + new_tuple->t_data->t_ctid = tup->t_data->t_ctid; + new_tuple->t_data->t_infomask &= ~HEAP_XACT_MASK; + new_tuple->t_data->t_infomask |= + tup->t_data->t_infomask & HEAP_XACT_MASK; + new_tuple->t_data->t_infomask2 &= ~HEAP2_XACT_MASK; + new_tuple->t_data->t_infomask2 |= + tup->t_data->t_infomask2 & HEAP2_XACT_MASK; + + /* + * Free allocated temp values + */ + for (i = 0; i < numAttrs; i++) + if (toast_free[i]) + pfree(DatumGetPointer(toast_values[i])); + + return new_tuple; +} + + +/* ---------- + * toast_flatten_tuple_to_datum - + * + * "Flatten" a tuple containing out-of-line toasted fields into a Datum. + * The result is always palloc'd in the current memory context. + * + * We have a general rule that Datums of container types (rows, arrays, + * ranges, etc) must not contain any external TOAST pointers. Without + * this rule, we'd have to look inside each Datum when preparing a tuple + * for storage, which would be expensive and would fail to extend cleanly + * to new sorts of container types. + * + * However, we don't want to say that tuples represented as HeapTuples + * can't contain toasted fields, so instead this routine should be called + * when such a HeapTuple is being converted into a Datum. + * + * While we're at it, we decompress any compressed fields too. This is not + * necessary for correctness, but reflects an expectation that compression + * will be more effective if applied to the whole tuple not individual + * fields. We are not so concerned about that that we want to deconstruct + * and reconstruct tuples just to get rid of compressed fields, however. + * So callers typically won't call this unless they see that the tuple has + * at least one external field. + * + * On the other hand, in-line short-header varlena fields are left alone. + * If we "untoasted" them here, they'd just get changed back to short-header + * format anyway within tdeheap_fill_tuple. 
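A minimal sketch of that per-attribute classification, using the detoast.h API already included by this file (`d` is an assumed non-null varlena Datum):

```c
struct varlena *v = (struct varlena *) DatumGetPointer(d);

if (VARATT_IS_EXTERNAL(v))			/* out-of-line TOAST pointer */
	v = detoast_external_attr(v);	/* fetch it; may still be compressed */
if (VARATT_IS_COMPRESSED(v))
	v = detoast_attr(v);			/* decompress in-line data */
/* short-header values are deliberately left as-is here */
```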
+ * ---------- + */ +Datum +toast_flatten_tuple_to_datum(HeapTupleHeader tup, + uint32 tup_len, + TupleDesc tupleDesc) +{ + HeapTupleHeader new_data; + int32 new_header_len; + int32 new_data_len; + int32 new_tuple_len; + HeapTupleData tmptup; + int numAttrs = tupleDesc->natts; + int i; + bool has_nulls = false; + Datum toast_values[MaxTupleAttributeNumber]; + bool toast_isnull[MaxTupleAttributeNumber]; + bool toast_free[MaxTupleAttributeNumber]; + + /* Build a temporary HeapTuple control structure */ + tmptup.t_len = tup_len; + ItemPointerSetInvalid(&(tmptup.t_self)); + tmptup.t_tableOid = InvalidOid; + tmptup.t_data = tup; + + /* + * Break down the tuple into fields. + */ + Assert(numAttrs <= MaxTupleAttributeNumber); + tdeheap_deform_tuple(&tmptup, tupleDesc, toast_values, toast_isnull); + + memset(toast_free, 0, numAttrs * sizeof(bool)); + + for (i = 0; i < numAttrs; i++) + { + /* + * Look at non-null varlena attributes + */ + if (toast_isnull[i]) + has_nulls = true; + else if (TupleDescAttr(tupleDesc, i)->attlen == -1) + { + struct varlena *new_value; + + new_value = (struct varlena *) DatumGetPointer(toast_values[i]); + if (VARATT_IS_EXTERNAL(new_value) || + VARATT_IS_COMPRESSED(new_value)) + { + new_value = detoast_attr(new_value); + toast_values[i] = PointerGetDatum(new_value); + toast_free[i] = true; + } + } + } + + /* + * Calculate the new size of the tuple. + * + * This should match the reconstruction code in + * tdeheap_toast_insert_or_update. + */ + new_header_len = SizeofHeapTupleHeader; + if (has_nulls) + new_header_len += BITMAPLEN(numAttrs); + new_header_len = MAXALIGN(new_header_len); + new_data_len = tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull); + new_tuple_len = new_header_len + new_data_len; + + new_data = (HeapTupleHeader) palloc0(new_tuple_len); + + /* + * Copy the existing tuple header, but adjust natts and t_hoff. + */ + memcpy(new_data, tup, SizeofHeapTupleHeader); + HeapTupleHeaderSetNatts(new_data, numAttrs); + new_data->t_hoff = new_header_len; + + /* Set the composite-Datum header fields correctly */ + HeapTupleHeaderSetDatumLength(new_data, new_tuple_len); + HeapTupleHeaderSetTypeId(new_data, tupleDesc->tdtypeid); + HeapTupleHeaderSetTypMod(new_data, tupleDesc->tdtypmod); + + /* Copy over the data, and fill the null bitmap if needed */ + tdeheap_fill_tuple(tupleDesc, + toast_values, + toast_isnull, + (char *) new_data + new_header_len, + new_data_len, + &(new_data->t_infomask), + has_nulls ? new_data->t_bits : NULL); + + /* + * Free allocated temp values + */ + for (i = 0; i < numAttrs; i++) + if (toast_free[i]) + pfree(DatumGetPointer(toast_values[i])); + + return PointerGetDatum(new_data); +} + + +/* ---------- + * toast_build_flattened_tuple - + * + * Build a tuple containing no out-of-line toasted fields. + * (This does not eliminate compressed or short-header datums.) + * + * This is essentially just like tdeheap_form_tuple, except that it will + * expand any external-data pointers beforehand. + * + * It's not very clear whether it would be preferable to decompress + * in-line compressed datums while at it. For now, we don't. 
+ * ---------- + */ +HeapTuple +toast_build_flattened_tuple(TupleDesc tupleDesc, + Datum *values, + bool *isnull) +{ + HeapTuple new_tuple; + int numAttrs = tupleDesc->natts; + int num_to_free; + int i; + Datum new_values[MaxTupleAttributeNumber]; + Pointer freeable_values[MaxTupleAttributeNumber]; + + /* + * We can pass the caller's isnull array directly to tdeheap_form_tuple, but + * we potentially need to modify the values array. + */ + Assert(numAttrs <= MaxTupleAttributeNumber); + memcpy(new_values, values, numAttrs * sizeof(Datum)); + + num_to_free = 0; + for (i = 0; i < numAttrs; i++) + { + /* + * Look at non-null varlena attributes + */ + if (!isnull[i] && TupleDescAttr(tupleDesc, i)->attlen == -1) + { + struct varlena *new_value; + + new_value = (struct varlena *) DatumGetPointer(new_values[i]); + if (VARATT_IS_EXTERNAL(new_value)) + { + new_value = detoast_external_attr(new_value); + new_values[i] = PointerGetDatum(new_value); + freeable_values[num_to_free++] = (Pointer) new_value; + } + } + } + + /* + * Form the reconfigured tuple. + */ + new_tuple = tdeheap_form_tuple(tupleDesc, new_values, isnull); + + /* + * Free allocated temp values + */ + for (i = 0; i < num_to_free; i++) + pfree(freeable_values[i]); + + return new_tuple; +} + +/* + * Fetch a TOAST slice from a heap table. + * + * toastrel is the relation from which chunks are to be fetched. + * valueid identifies the TOAST value from which chunks are being fetched. + * attrsize is the total size of the TOAST value. + * sliceoffset is the byte offset within the TOAST value from which to fetch. + * slicelength is the number of bytes to be fetched from the TOAST value. + * result is the varlena into which the results should be written. + */ +void +tdeheap_fetch_toast_slice(Relation toastrel, Oid valueid, int32 attrsize, + int32 sliceoffset, int32 slicelength, + struct varlena *result) +{ + Relation *toastidxs; + ScanKeyData toastkey[3]; + TupleDesc toasttupDesc = toastrel->rd_att; + int nscankeys; + SysScanDesc toastscan; + HeapTuple ttup; + int32 expectedchunk; + int32 totalchunks = ((attrsize - 1) / TOAST_MAX_CHUNK_SIZE) + 1; + int startchunk; + int endchunk; + int num_indexes; + int validIndex; + SnapshotData SnapshotToast; + char decrypted_data[TOAST_MAX_CHUNK_SIZE]; + RelKeyData *key = GetHeapBaiscRelationKey(toastrel->rd_locator); + char iv_prefix[16] = {0,}; + + + /* Look for the valid index of toast relation */ + validIndex = toast_open_indexes(toastrel, + AccessShareLock, + &toastidxs, + &num_indexes); + + startchunk = sliceoffset / TOAST_MAX_CHUNK_SIZE; + endchunk = (sliceoffset + slicelength - 1) / TOAST_MAX_CHUNK_SIZE; + Assert(endchunk <= totalchunks); + + /* Set up a scan key to fetch from the index. */ + ScanKeyInit(&toastkey[0], + (AttrNumber) 1, + BTEqualStrategyNumber, F_OIDEQ, + ObjectIdGetDatum(valueid)); + + /* + * No additional condition if fetching all chunks. Otherwise, use an + * equality condition for one chunk, and a range condition otherwise. 
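For example, assuming a `TOAST_MAX_CHUNK_SIZE` of 1996 bytes (its usual value with 8 kB pages), a 3000-byte slice starting at offset 5000 resolves like this:

```c
int32	sliceoffset = 5000;
int32	slicelength = 3000;
int		startchunk = sliceoffset / TOAST_MAX_CHUNK_SIZE;					/* = 2 */
int		endchunk = (sliceoffset + slicelength - 1) / TOAST_MAX_CHUNK_SIZE;	/* = 4 */

/* startchunk != endchunk, so the scan uses the three-key range form:
 * chunk_seq >= 2 AND chunk_seq <= 4 */
```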
+ */ + if (startchunk == 0 && endchunk == totalchunks - 1) + nscankeys = 1; + else if (startchunk == endchunk) + { + ScanKeyInit(&toastkey[1], + (AttrNumber) 2, + BTEqualStrategyNumber, F_INT4EQ, + Int32GetDatum(startchunk)); + nscankeys = 2; + } + else + { + ScanKeyInit(&toastkey[1], + (AttrNumber) 2, + BTGreaterEqualStrategyNumber, F_INT4GE, + Int32GetDatum(startchunk)); + ScanKeyInit(&toastkey[2], + (AttrNumber) 2, + BTLessEqualStrategyNumber, F_INT4LE, + Int32GetDatum(endchunk)); + nscankeys = 3; + } + + /* Prepare for scan */ + init_toast_snapshot(&SnapshotToast); + toastscan = systable_beginscan_ordered(toastrel, toastidxs[validIndex], + &SnapshotToast, nscankeys, toastkey); + + memcpy(iv_prefix, &valueid, sizeof(Oid)); + + /* + * Read the chunks by index + * + * The index is on (valueid, chunkidx) so they will come in order + */ + expectedchunk = startchunk; + while ((ttup = systable_getnext_ordered(toastscan, ForwardScanDirection)) != NULL) + { + int32 curchunk; + Pointer chunk; + bool isnull; + char *chunkdata; + int32 chunksize; + int32 expected_size; + int32 chcpystrt; + int32 chcpyend; + int32 encrypt_offset; + + /* + * Have a chunk, extract the sequence number and the data + */ + curchunk = DatumGetInt32(fastgetattr(ttup, 2, toasttupDesc, &isnull)); + Assert(!isnull); + chunk = DatumGetPointer(fastgetattr(ttup, 3, toasttupDesc, &isnull)); + Assert(!isnull); + if (!VARATT_IS_EXTENDED(chunk)) + { + chunksize = VARSIZE(chunk) - VARHDRSZ; + chunkdata = VARDATA(chunk); + } + else if (VARATT_IS_SHORT(chunk)) + { + /* could happen due to tdeheap_form_tuple doing its thing */ + chunksize = VARSIZE_SHORT(chunk) - VARHDRSZ_SHORT; + chunkdata = VARDATA_SHORT(chunk); + } + else + { + /* should never happen */ + elog(ERROR, "found toasted toast chunk for toast value %u in %s", + valueid, RelationGetRelationName(toastrel)); + chunksize = 0; /* keep compiler quiet */ + chunkdata = NULL; + } + + /* + * Some checks on the data we've found + */ + if (curchunk != expectedchunk) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("unexpected chunk number %d (expected %d) for toast value %u in %s", + curchunk, expectedchunk, valueid, + RelationGetRelationName(toastrel)))); + if (curchunk > endchunk) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("unexpected chunk number %d (out of range %d..%d) for toast value %u in %s", + curchunk, + startchunk, endchunk, valueid, + RelationGetRelationName(toastrel)))); + expected_size = curchunk < totalchunks - 1 ? TOAST_MAX_CHUNK_SIZE + : attrsize - ((totalchunks - 1) * TOAST_MAX_CHUNK_SIZE); + if (chunksize != expected_size) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("unexpected chunk size %d (expected %d) in chunk %d of %d for toast value %u in %s", + chunksize, expected_size, + curchunk, totalchunks, valueid, + RelationGetRelationName(toastrel)))); + + /* + * Copy the data into proper place in our result + */ + chcpystrt = 0; + chcpyend = chunksize - 1; + if (curchunk == startchunk) + chcpystrt = sliceoffset % TOAST_MAX_CHUNK_SIZE; + if (curchunk == endchunk) + chcpyend = (sliceoffset + slicelength - 1) % TOAST_MAX_CHUNK_SIZE; + + /* + * If TOAST is compressed, the first TDE_TOAST_COMPRESS_HEADER_SIZE (4 bytes) is + * not encrypted and contains compression info. It should be added to the + * result as it is and the rest should be decrypted. 
The encryption offset in
+		 * that case is 0 for the first chunk (although the encrypted data
+		 * starts at offset TDE_TOAST_COMPRESS_HEADER_SIZE, we encrypted it
+		 * without the compression header) and `chunk start offset - 4` for
+		 * the subsequent chunks.
+		 */
+		encrypt_offset = chcpystrt;
+		if (VARATT_IS_COMPRESSED(result))
+		{
+			if (curchunk == 0)
+			{
+				memcpy(VARDATA(result), chunkdata + chcpystrt, TDE_TOAST_COMPRESS_HEADER_SIZE);
+				chcpystrt += TDE_TOAST_COMPRESS_HEADER_SIZE;
+			}
+			else
+				encrypt_offset -= TDE_TOAST_COMPRESS_HEADER_SIZE;
+		}
+
+		/* Decrypt the data chunk by chunk here */
+		PG_TDE_DECRYPT_DATA(iv_prefix, (curchunk * TOAST_MAX_CHUNK_SIZE) + encrypt_offset,
+							chunkdata + chcpystrt,
+							(chcpyend - chcpystrt) + 1,
+							decrypted_data, key);
+
+		memcpy(VARDATA(result) +
+			   (curchunk * TOAST_MAX_CHUNK_SIZE - sliceoffset) + chcpystrt,
+			   decrypted_data,
+			   (chcpyend - chcpystrt) + 1);
+
+		expectedchunk++;
+	}
+
+	/*
+	 * Final checks that we successfully fetched the datum
+	 */
+	if (expectedchunk != (endchunk + 1))
+		ereport(ERROR,
+				(errcode(ERRCODE_DATA_CORRUPTED),
+				 errmsg_internal("missing chunk number %d for toast value %u in %s",
+								 expectedchunk, valueid,
+								 RelationGetRelationName(toastrel))));
+
+	/* End scan and close indexes. */
+	systable_endscan_ordered(toastscan);
+	toast_close_indexes(toastidxs, num_indexes, AccessShareLock);
+}
+
+/* TODO: these should be in their own file so we can properly auto-update them */
+
+/* pg_tde extension */
+static void
+tdeheap_toast_encrypt(Pointer dval, Oid valueid, RelKeyData *key)
+{
+	int32		data_size = 0;
+	char	   *data_p;
+	char	   *encrypted_data;
+	char		iv_prefix[16] = {0,};
+
+	/*
+	 * Use encryption-specific data_p and data_size, since we have to avoid
+	 * encrypting the compression info.
+	 * See https://github.com/percona/pg_tde/commit/dee6e357ef05d217a4c4df131249a80e5e909163
+	 */
+	if (VARATT_IS_SHORT(dval))
+	{
+		data_p = VARDATA_SHORT(dval);
+		data_size = VARSIZE_SHORT(dval) - VARHDRSZ_SHORT;
+	}
+	else if (VARATT_IS_COMPRESSED(dval))
+	{
+		data_p = VARDATA_4B_C(dval);
+		data_size = VARSIZE(dval) - VARHDRSZ_COMPRESSED;
+	}
+	else
+	{
+		data_p = VARDATA(dval);
+		data_size = VARSIZE(dval) - VARHDRSZ;
+	}
+
+	/* Now encrypt the data and write it back over the original */
+	encrypted_data = (char *) palloc(data_size);
+
+	memcpy(iv_prefix, &valueid, sizeof(Oid));
+	PG_TDE_ENCRYPT_DATA(iv_prefix, 0, data_p, data_size, encrypted_data, key);
+
+	memcpy(data_p, encrypted_data, data_size);
+	pfree(encrypted_data);
+}
+
+/*
+ * Move an attribute to external storage.
+ *
+ * Copied from PG src/backend/access/table/toast_helper.c
+ */
+static void
+tdeheap_toast_tuple_externalize(ToastTupleContext *ttc, int attribute, int options)
+{
+	Datum	   *value = &ttc->ttc_values[attribute];
+	Datum		old_value = *value;
+	ToastAttrInfo *attr = &ttc->ttc_attr[attribute];
+
+	attr->tai_colflags |= TOASTCOL_IGNORE;
+	*value = tdeheap_toast_save_datum(ttc->ttc_rel, old_value, attr->tai_oldexternal,
+									  options);
+	if ((attr->tai_colflags & TOASTCOL_NEEDS_FREE) != 0)
+		pfree(DatumGetPointer(old_value));
+	attr->tai_colflags |= TOASTCOL_NEEDS_FREE;
+	ttc->ttc_flags |= (TOAST_NEEDS_CHANGE | TOAST_NEEDS_FREE);
+}
+
+/* ----------
+ * tdeheap_toast_save_datum -
+ *
+ *	Save one single datum into the secondary relation and return
+ *	a Datum reference for it.
+ *	It also encrypts toasted data.
+ *
+ *	rel: the main relation we're working with (not the toast rel!)
+ * value: datum to be pushed to toast storage + * oldexternal: if not NULL, toast pointer previously representing the datum + * options: options to be passed to tdeheap_insert() for toast rows + * + * based on toast_save_datum from PG src/backend/access/common/toast_internals.c + * ---------- + */ +static Datum +tdeheap_toast_save_datum(Relation rel, Datum value, + struct varlena *oldexternal, int options) +{ + Relation toastrel; + Relation *toastidxs; + HeapTuple toasttup; + TupleDesc toasttupDesc; + Datum t_values[3]; + bool t_isnull[3]; + CommandId mycid = GetCurrentCommandId(true); + struct varlena *result; + struct varatt_external toast_pointer; + union + { + struct varlena hdr; + /* this is to make the union big enough for a chunk: */ + char data[TOAST_MAX_CHUNK_SIZE + VARHDRSZ]; + /* ensure union is aligned well enough: */ + int32 align_it; + } chunk_data; + int32 chunk_size; + int32 chunk_seq = 0; + char *data_p; + int32 data_todo; + Pointer dval = DatumGetPointer(value); + int num_indexes; + int validIndex; + + + Assert(!VARATT_IS_EXTERNAL(value)); + + /* + * Open the toast relation and its indexes. We can use the index to check + * uniqueness of the OID we assign to the toasted item, even though it has + * additional columns besides OID. + */ + toastrel = table_open(rel->rd_rel->reltoastrelid, RowExclusiveLock); + toasttupDesc = toastrel->rd_att; + + /* Open all the toast indexes and look for the valid one */ + validIndex = toast_open_indexes(toastrel, + RowExclusiveLock, + &toastidxs, + &num_indexes); + + /* + * Get the data pointer and length, and compute va_rawsize and va_extinfo. + * + * va_rawsize is the size of the equivalent fully uncompressed datum, so + * we have to adjust for short headers. + * + * va_extinfo stored the actual size of the data payload in the toast + * records and the compression method in first 2 bits if data is + * compressed. + */ + if (VARATT_IS_SHORT(dval)) + { + data_p = VARDATA_SHORT(dval); + data_todo = VARSIZE_SHORT(dval) - VARHDRSZ_SHORT; + toast_pointer.va_rawsize = data_todo + VARHDRSZ; /* as if not short */ + toast_pointer.va_extinfo = data_todo; + } + else if (VARATT_IS_COMPRESSED(dval)) + { + data_p = VARDATA(dval); + data_todo = VARSIZE(dval) - VARHDRSZ; + /* rawsize in a compressed datum is just the size of the payload */ + toast_pointer.va_rawsize = VARDATA_COMPRESSED_GET_EXTSIZE(dval) + VARHDRSZ; + + /* set external size and compression method */ + VARATT_EXTERNAL_SET_SIZE_AND_COMPRESS_METHOD(toast_pointer, data_todo, + VARDATA_COMPRESSED_GET_COMPRESS_METHOD(dval)); + /* Assert that the numbers look like it's compressed */ + Assert(VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer)); + } + else + { + data_p = VARDATA(dval); + data_todo = VARSIZE(dval) - VARHDRSZ; + toast_pointer.va_rawsize = VARSIZE(dval); + toast_pointer.va_extinfo = data_todo; + } + + /* + * Insert the correct table OID into the result TOAST pointer. + * + * Normally this is the actual OID of the target toast table, but during + * table-rewriting operations such as CLUSTER, we have to insert the OID + * of the table's real permanent toast table instead. rd_toastoid is set + * if we have to substitute such an OID. + */ + if (OidIsValid(rel->rd_toastoid)) + toast_pointer.va_toastrelid = rel->rd_toastoid; + else + toast_pointer.va_toastrelid = RelationGetRelid(toastrel); + + /* + * Choose an OID to use as the value ID for this toast value. + * + * Normally we just choose an unused OID within the toast table. 
But
+	 * during table-rewriting operations where we are preserving an existing
+	 * toast table OID, we want to preserve toast value OIDs too. So, if
+	 * rd_toastoid is set and we had a prior external value from that same
+	 * toast table, re-use its value ID. If we didn't have a prior external
+	 * value (which is a corner case, but possible if the table's attstorage
+	 * options have been changed), we have to pick a value ID that doesn't
+	 * conflict with either new or existing toast value OIDs.
+	 */
+	if (!OidIsValid(rel->rd_toastoid))
+	{
+		/* normal case: just choose an unused OID */
+		toast_pointer.va_valueid =
+			GetNewOidWithIndex(toastrel,
+							   RelationGetRelid(toastidxs[validIndex]),
+							   (AttrNumber) 1);
+	}
+	else
+	{
+		/* rewrite case: check to see if value was in old toast table */
+		toast_pointer.va_valueid = InvalidOid;
+		if (oldexternal != NULL)
+		{
+			struct varatt_external old_toast_pointer;
+
+			Assert(VARATT_IS_EXTERNAL_ONDISK(oldexternal));
+			/* Must copy to access aligned fields */
+			VARATT_EXTERNAL_GET_POINTER(old_toast_pointer, oldexternal);
+			if (old_toast_pointer.va_toastrelid == rel->rd_toastoid)
+			{
+				/* This value came from the old toast table; reuse its OID */
+				toast_pointer.va_valueid = old_toast_pointer.va_valueid;
+
+				/*
+				 * There is a corner case here: the table rewrite might have
+				 * to copy both live and recently-dead versions of a row, and
+				 * those versions could easily reference the same toast value.
+				 * When we copy the second or later version of such a row,
+				 * reusing the OID will mean we select an OID that's already
+				 * in the new toast table. Check for that, and if so, just
+				 * fall through without writing the data again.
+				 *
+				 * While annoying and ugly-looking, this is a good thing
+				 * because it ensures that we wind up with only one copy of
+				 * the toast value when there is only one copy in the old
+				 * toast table. Before we detected this case, we'd have made
+				 * multiple copies, wasting space; and what's worse, the
+				 * copies belonging to already-deleted heap tuples would not
+				 * be reclaimed by VACUUM.
+				 */
+				if (toastrel_valueid_exists(toastrel,
+											toast_pointer.va_valueid))
+				{
+					/* Match, so short-circuit the data storage loop below */
+					data_todo = 0;
+				}
+			}
+		}
+		if (toast_pointer.va_valueid == InvalidOid)
+		{
+			/*
+			 * new value; must choose an OID that doesn't conflict in either
+			 * old or new toast table
+			 */
+			do
+			{
+				toast_pointer.va_valueid =
+					GetNewOidWithIndex(toastrel,
+									   RelationGetRelid(toastidxs[validIndex]),
+									   (AttrNumber) 1);
+			} while (toastid_valueid_exists(rel->rd_toastoid,
+											toast_pointer.va_valueid));
+		}
+	}
+
+	/*
+	 * Encrypt toast data.
+	 */
+	tdeheap_toast_encrypt(dval, toast_pointer.va_valueid, GetHeapBaiscRelationKey(toastrel->rd_locator));
+
+	/*
+	 * Initialize constant parts of the tuple data
+	 */
+	t_values[0] = ObjectIdGetDatum(toast_pointer.va_valueid);
+	t_values[2] = PointerGetDatum(&chunk_data);
+	t_isnull[0] = false;
+	t_isnull[1] = false;
+	t_isnull[2] = false;
+
+	/*
+	 * Split up the item into chunks
+	 */
+	while (data_todo > 0)
+	{
+		int			i;
+
+		CHECK_FOR_INTERRUPTS();
+
+		/*
+		 * Calculate the size of this chunk
+		 */
+		chunk_size = Min(TOAST_MAX_CHUNK_SIZE, data_todo);
+
+		/*
+		 * Build a tuple and store it
+		 */
+		t_values[1] = Int32GetDatum(chunk_seq++);
+		SET_VARSIZE(&chunk_data, chunk_size + VARHDRSZ);
+		memcpy(VARDATA(&chunk_data), data_p, chunk_size);
+		toasttup = tdeheap_form_tuple(toasttupDesc, t_values, t_isnull);
+
+		/*
+		 * The tuple should be inserted unencrypted;
+		 * the TOAST data is already encrypted.
+ */ + options |= HEAP_INSERT_TDE_NO_ENCRYPT; + tdeheap_insert(toastrel, toasttup, mycid, options, NULL); + + /* + * Create the index entry. We cheat a little here by not using + * FormIndexDatum: this relies on the knowledge that the index columns + * are the same as the initial columns of the table for all the + * indexes. We also cheat by not providing an IndexInfo: this is okay + * for now because btree doesn't need one, but we might have to be + * more honest someday. + * + * Note also that there had better not be any user-created index on + * the TOAST table, since we don't bother to update anything else. + */ + for (i = 0; i < num_indexes; i++) + { + /* Only index relations marked as ready can be updated */ + if (toastidxs[i]->rd_index->indisready) + index_insert(toastidxs[i], t_values, t_isnull, + &(toasttup->t_self), + toastrel, + toastidxs[i]->rd_index->indisunique ? + UNIQUE_CHECK_YES : UNIQUE_CHECK_NO, + false, NULL); + } + + /* + * Free memory + */ + tdeheap_freetuple(toasttup); + + /* + * Move on to next chunk + */ + data_todo -= chunk_size; + data_p += chunk_size; + } + + /* + * Done - close toast relation and its indexes but keep the lock until + * commit, so as a concurrent reindex done directly on the toast relation + * would be able to wait for this transaction. + */ + toast_close_indexes(toastidxs, num_indexes, NoLock); + table_close(toastrel, NoLock); + + /* + * Create the TOAST pointer value that we'll return + */ + result = (struct varlena *) palloc(TOAST_POINTER_SIZE); + SET_VARTAG_EXTERNAL(result, VARTAG_ONDISK); + memcpy(VARDATA_EXTERNAL(result), &toast_pointer, sizeof(toast_pointer)); + + return PointerGetDatum(result); +} + +/* ---------- + * toastrel_valueid_exists - + * + * Test whether a toast value with the given ID exists in the toast relation. + * For safety, we consider a value to exist if there are either live or dead + * toast rows with that ID; see notes for GetNewOidWithIndex(). + * + * copy from PG src/backend/access/common/toast_internals.c + * ---------- + */ +static bool +toastrel_valueid_exists(Relation toastrel, Oid valueid) +{ + bool result = false; + ScanKeyData toastkey; + SysScanDesc toastscan; + int num_indexes; + int validIndex; + Relation *toastidxs; + + /* Fetch a valid index relation */ + validIndex = toast_open_indexes(toastrel, + RowExclusiveLock, + &toastidxs, + &num_indexes); + + /* + * Setup a scan key to find chunks with matching va_valueid + */ + ScanKeyInit(&toastkey, + (AttrNumber) 1, + BTEqualStrategyNumber, F_OIDEQ, + ObjectIdGetDatum(valueid)); + + /* + * Is there any such chunk? 
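Stepping back to the encryption used by the save/fetch pair above: because the whole value is encrypted once at logical offset 0 with an IV prefix derived from the value's OID, any chunk can later be decrypted independently by supplying its byte offset. A sketch of the round trip, assuming the `PG_TDE_ENCRYPT_DATA`/`PG_TDE_DECRYPT_DATA` macros and a fetched `RelKeyData *key` as in the code above (`n`, `encrypted`, and `decrypted` are assumed locals):

```c
char	iv_prefix[16] = {0};

memcpy(iv_prefix, &valueid, sizeof(Oid));	/* IV prefix is per-value */

/* encrypt the full datum starting at logical offset 0 */
PG_TDE_ENCRYPT_DATA(iv_prefix, 0, data_p, data_size, encrypted, key);

/* ...later, decrypt only chunk n, without reading earlier chunks */
PG_TDE_DECRYPT_DATA(iv_prefix, n * TOAST_MAX_CHUNK_SIZE,
					chunk_data, chunk_size, decrypted, key);
```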
+ */ + toastscan = systable_beginscan(toastrel, + RelationGetRelid(toastidxs[validIndex]), + true, SnapshotAny, 1, &toastkey); + + if (systable_getnext(toastscan) != NULL) + result = true; + + systable_endscan(toastscan); + + /* Clean up */ + toast_close_indexes(toastidxs, num_indexes, RowExclusiveLock); + + return result; +} + +/* ---------- + * toastid_valueid_exists - + * + * As above, but work from toast rel's OID not an open relation + * + * copy from PG src/backend/access/common/toast_internals.c + * ---------- + */ +static bool +toastid_valueid_exists(Oid toastrelid, Oid valueid) +{ + bool result; + Relation toastrel; + + toastrel = table_open(toastrelid, AccessShareLock); + + result = toastrel_valueid_exists(toastrel, valueid); + + table_close(toastrel, AccessShareLock); + + return result; +} diff --git a/contrib/pg_tde/src16/include/access/pg_tde_io.h b/contrib/pg_tde/src16/include/access/pg_tde_io.h new file mode 100644 index 00000000000..4d0a64bc83b --- /dev/null +++ b/contrib/pg_tde/src16/include/access/pg_tde_io.h @@ -0,0 +1,62 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_io.h + * POSTGRES heap access method input/output definitions. + * + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/hio.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_IO_H +#define PG_TDE_IO_H + +#include "access/htup.h" +#include "storage/buf.h" +#include "utils/relcache.h" + +/* + * state for bulk inserts --- private to heapam.c and hio.c + * + * If current_buf isn't InvalidBuffer, then we are holding an extra pin + * on that buffer. + * + * "typedef struct BulkInsertStateData *BulkInsertState" is in heapam.h + */ +typedef struct BulkInsertStateData +{ + BufferAccessStrategy strategy; /* our BULKWRITE strategy object */ + Buffer current_buf; /* current insertion target page */ + + /* + * State for bulk extensions. + * + * last_free..next_free are further pages that were unused at the time of + * the last extension. They might be in use by the time we use them + * though, so rechecks are needed. + * + * XXX: Eventually these should probably live in RelationData instead, + * alongside targetblock. + * + * already_extended_by is the number of pages that this bulk inserted + * extended by. If we already extended by a significant number of pages, + * we can be more aggressive about extending going forward. 
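As an aside on how this state is typically consumed, here is a sketch using the `GetBulkInsertState`/`tdeheap_insert`/`FreeBulkInsertState` API declared later in this patch (`rel`, `tuples`, and `ntuples` are assumed inputs):

```c
BulkInsertState bistate = GetBulkInsertState();

for (int i = 0; i < ntuples; i++)
	tdeheap_insert(rel, tuples[i], GetCurrentCommandId(true),
				   HEAP_INSERT_SKIP_FSM, bistate);

FreeBulkInsertState(bistate);	/* drops the pin held on current_buf */
```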
+ */ + BlockNumber next_free; + BlockNumber last_free; + uint32 already_extended_by; +} BulkInsertStateData; + + +extern void tdeheap_RelationPutHeapTuple(Relation relation, Buffer buffer, + HeapTuple tuple, bool encrypt, bool token); +extern Buffer tdeheap_RelationGetBufferForTuple(Relation relation, Size len, + Buffer otherBuffer, int options, + BulkInsertStateData *bistate, + Buffer *vmbuffer, Buffer *vmbuffer_other, + int num_pages); + +#endif /* PG_TDE_IO_H */ diff --git a/contrib/pg_tde/src16/include/access/pg_tde_rewrite.h b/contrib/pg_tde/src16/include/access/pg_tde_rewrite.h new file mode 100644 index 00000000000..5285f39c7f4 --- /dev/null +++ b/contrib/pg_tde/src16/include/access/pg_tde_rewrite.h @@ -0,0 +1,57 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_rewrite.h + * Declarations for heap rewrite support functions + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994-5, Regents of the University of California + * + * src/include/access/rewriteheap.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_REWRITE_H +#define PG_TDE_REWRITE_H + +#include "access/htup.h" +#include "storage/itemptr.h" +#include "storage/relfilelocator.h" +#include "utils/relcache.h" + +/* struct definition is private to rewriteheap.c */ +typedef struct RewriteStateData *RewriteState; + +extern RewriteState begin_tdeheap_rewrite(Relation old_heap, Relation new_heap, + TransactionId oldest_xmin, TransactionId freeze_xid, + MultiXactId cutoff_multi); +extern void end_tdeheap_rewrite(RewriteState state); +extern void rewrite_tdeheap_tuple(RewriteState state, HeapTuple old_tuple, + HeapTuple new_tuple); +extern bool rewrite_tdeheap_dead_tuple(RewriteState state, HeapTuple old_tuple); + +/* + * On-Disk data format for an individual logical rewrite mapping. 
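Each mapping is persisted to a file whose name packs six fields through the `LOGICAL_REWRITE_FORMAT` macro defined just below; upstream's rewriteheap.c builds such names roughly like this (a sketch; `dboid`, `reloid`, `start_lsn`, `mapped_xid`, and `create_xid` are assumed inputs):

```c
char	path[MAXPGPATH];

snprintf(path, sizeof(path),
		 "pg_logical/mappings/" LOGICAL_REWRITE_FORMAT,
		 dboid, reloid,
		 LSN_FORMAT_ARGS(start_lsn),	/* expands to upper, lower 32 bits */
		 mapped_xid, create_xid);
```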
+ */ +typedef struct LogicalRewriteMappingData +{ + RelFileLocator old_locator; + RelFileLocator new_locator; + ItemPointerData old_tid; + ItemPointerData new_tid; +} LogicalRewriteMappingData; + +/* --- + * The filename consists of the following, dash separated, + * components: + * 1) database oid or InvalidOid for shared relations + * 2) the oid of the relation + * 3) upper 32bit of the LSN at which a rewrite started + * 4) lower 32bit of the LSN at which a rewrite started + * 5) xid we are mapping for + * 6) xid of the xact performing the mapping + * --- + */ +#define LOGICAL_REWRITE_FORMAT "map-%x-%x-%X_%X-%x-%x" +extern void CheckPointLogicalRewriteHeap(void); + +#endif /* PG_TDE_REWRITE_H */ diff --git a/contrib/pg_tde/src16/include/access/pg_tde_visibilitymap.h b/contrib/pg_tde/src16/include/access/pg_tde_visibilitymap.h new file mode 100644 index 00000000000..0b8213f0523 --- /dev/null +++ b/contrib/pg_tde/src16/include/access/pg_tde_visibilitymap.h @@ -0,0 +1,42 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_visibilitymap.h + * visibility map interface + * + * + * Portions Copyright (c) 2007-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/pg_tde_visibilitymap.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_VISIBILITYMAP_H +#define PG_TDE_VISIBILITYMAP_H + +#include "access/visibilitymapdefs.h" +#include "access/xlogdefs.h" +#include "storage/block.h" +#include "storage/buf.h" +#include "utils/relcache.h" + +/* Macros for visibilitymap test */ +#define VM_ALL_VISIBLE(r, b, v) \ + ((tdeheap_visibilitymap_get_status((r), (b), (v)) & VISIBILITYMAP_ALL_VISIBLE) != 0) +#define VM_ALL_FROZEN(r, b, v) \ + ((tdeheap_visibilitymap_get_status((r), (b), (v)) & VISIBILITYMAP_ALL_FROZEN) != 0) + +extern bool tdeheap_visibilitymap_clear(Relation rel, BlockNumber heapBlk, + Buffer vmbuf, uint8 flags); +extern void tdeheap_visibilitymap_pin(Relation rel, BlockNumber heapBlk, + Buffer *vmbuf); +extern bool tdeheap_visibilitymap_pin_ok(BlockNumber heapBlk, Buffer vmbuf); +extern void tdeheap_visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf, + XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid, + uint8 flags); +extern uint8 tdeheap_visibilitymap_get_status(Relation rel, BlockNumber heapBlk, Buffer *vmbuf); +extern void tdeheap_visibilitymap_count(Relation rel, BlockNumber *all_visible, BlockNumber *all_frozen); +extern BlockNumber tdeheap_visibilitymap_prepare_truncate(Relation rel, + BlockNumber nheapblocks); + +#endif /* PG_TDE_VISIBILITYMAP_H */ diff --git a/contrib/pg_tde/src16/include/access/pg_tdeam.h b/contrib/pg_tde/src16/include/access/pg_tdeam.h new file mode 100644 index 00000000000..b982c8ff2cd --- /dev/null +++ b/contrib/pg_tde/src16/include/access/pg_tdeam.h @@ -0,0 +1,339 @@ +/*------------------------------------------------------------------------- + * + * pg_tdeam.h + * POSTGRES heap access method definitions. 
+ * + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/heapam.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDEAM_H +#define PG_TDEAM_H + +#include "access/relation.h" /* for backward compatibility */ +#include "access/relscan.h" +#include "access/sdir.h" +#include "access/skey.h" +#include "access/table.h" /* for backward compatibility */ +#include "access/tableam.h" +#include "nodes/lockoptions.h" +#include "nodes/primnodes.h" +#include "storage/bufpage.h" +#include "storage/dsm.h" +#include "storage/lockdefs.h" +#include "storage/shm_toc.h" +#include "utils/relcache.h" +#include "utils/snapshot.h" + + +/* "options" flag bits for tdeheap_insert */ +#define HEAP_INSERT_SKIP_FSM TABLE_INSERT_SKIP_FSM +#define HEAP_INSERT_FROZEN TABLE_INSERT_FROZEN +#define HEAP_INSERT_NO_LOGICAL TABLE_INSERT_NO_LOGICAL +#define HEAP_INSERT_SPECULATIVE 0x0010 +#define HEAP_INSERT_TDE_NO_ENCRYPT 0x2000 /* to specify rare cases when NO TDE enc */ + +typedef struct BulkInsertStateData *BulkInsertState; +struct TupleTableSlot; +struct VacuumCutoffs; + +#define MaxLockTupleMode LockTupleExclusive + +/* + * Descriptor for heap table scans. + */ +typedef struct HeapScanDescData +{ + TableScanDescData rs_base; /* AM independent part of the descriptor */ + + /* state set up at initscan time */ + BlockNumber rs_nblocks; /* total number of blocks in rel */ + BlockNumber rs_startblock; /* block # to start at */ + BlockNumber rs_numblocks; /* max number of blocks to scan */ + /* rs_numblocks is usually InvalidBlockNumber, meaning "scan whole rel" */ + + /* scan current state */ + bool rs_inited; /* false = scan not init'd yet */ + OffsetNumber rs_coffset; /* current offset # in non-page-at-a-time mode */ + BlockNumber rs_cblock; /* current block # in scan, if any */ + Buffer rs_cbuf; /* current buffer in scan, if any */ + /* NB: if rs_cbuf is not InvalidBuffer, we hold a pin on that buffer */ + + BufferAccessStrategy rs_strategy; /* access strategy for reads */ + + HeapTupleData rs_ctup; /* current tuple in scan, if any */ + + /* + * For parallel scans to store page allocation data. NULL when not + * performing a parallel scan. + */ + ParallelBlockTableScanWorkerData *rs_parallelworkerdata; + + /* these fields only used in page-at-a-time mode and for bitmap scans */ + int rs_cindex; /* current tuple's index in vistuples */ + int rs_ntuples; /* number of visible tuples on page */ + OffsetNumber rs_vistuples[MaxHeapTuplesPerPage]; /* their offsets */ +} HeapScanDescData; +typedef struct HeapScanDescData *HeapScanDesc; + +/* + * Descriptor for fetches from heap via an index. 
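The insert-option bits above are plain flags combined with bitwise OR; the TOAST path earlier in this patch, for instance, marks chunks that are already encrypted so `tdeheap_insert()` won't encrypt them again. A sketch, with an assumed open `toastrel` and formed `toasttup`:

```c
int		options = HEAP_INSERT_SKIP_FSM | HEAP_INSERT_TDE_NO_ENCRYPT;

tdeheap_insert(toastrel, toasttup, GetCurrentCommandId(true),
			   options, NULL);	/* no bulk-insert state */
```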
+ */ +typedef struct IndexFetchHeapData +{ + IndexFetchTableData xs_base; /* AM independent part of the descriptor */ + + Buffer xs_cbuf; /* current heap buffer in scan, if any */ + /* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */ +} IndexFetchHeapData; + +/* Result codes for HeapTupleSatisfiesVacuum */ +typedef enum +{ + HEAPTUPLE_DEAD, /* tuple is dead and deletable */ + HEAPTUPLE_LIVE, /* tuple is live (committed, no deleter) */ + HEAPTUPLE_RECENTLY_DEAD, /* tuple is dead, but not deletable yet */ + HEAPTUPLE_INSERT_IN_PROGRESS, /* inserting xact is still in progress */ + HEAPTUPLE_DELETE_IN_PROGRESS /* deleting xact is still in progress */ +} HTSV_Result; + +/* + * tdeheap_prepare_freeze_tuple may request that tdeheap_freeze_execute_prepared + * check any tuple's to-be-frozen xmin and/or xmax status using pg_xact + */ +#define HEAP_FREEZE_CHECK_XMIN_COMMITTED 0x01 +#define HEAP_FREEZE_CHECK_XMAX_ABORTED 0x02 + +/* tdeheap_prepare_freeze_tuple state describing how to freeze a tuple */ +typedef struct HeapTupleFreeze +{ + /* Fields describing how to process tuple */ + TransactionId xmax; + uint16 t_infomask2; + uint16 t_infomask; + uint8 frzflags; + + /* xmin/xmax check flags */ + uint8 checkflags; + /* Page offset number for tuple */ + OffsetNumber offset; +} HeapTupleFreeze; + +/* + * State used by VACUUM to track the details of freezing all eligible tuples + * on a given heap page. + * + * VACUUM prepares freeze plans for each page via tdeheap_prepare_freeze_tuple + * calls (every tuple with storage gets its own call). This page-level freeze + * state is updated across each call, which ultimately determines whether or + * not freezing the page is required. + * + * Aside from the basic question of whether or not freezing will go ahead, the + * state also tracks the oldest extant XID/MXID in the table as a whole, for + * the purposes of advancing relfrozenxid/relminmxid values in pg_class later + * on. Each tdeheap_prepare_freeze_tuple call pushes NewRelfrozenXid and/or + * NewRelminMxid back as required to avoid unsafe final pg_class values. Any + * and all unfrozen XIDs or MXIDs that remain after VACUUM finishes _must_ + * have values >= the final relfrozenxid/relminmxid values in pg_class. This + * includes XIDs that remain as MultiXact members from any tuple's xmax. + * + * When 'freeze_required' flag isn't set after all tuples are examined, the + * final choice on freezing is made by vacuumlazy.c. It can decide to trigger + * freezing based on whatever criteria it deems appropriate. However, it is + * recommended that vacuumlazy.c avoid early freezing when freezing does not + * enable setting the target page all-frozen in the visibility map afterwards. + */ +typedef struct HeapPageFreeze +{ + /* Is tdeheap_prepare_freeze_tuple caller required to freeze page? */ + bool freeze_required; + + /* + * "Freeze" NewRelfrozenXid/NewRelminMxid trackers. + * + * Trackers used when tdeheap_freeze_execute_prepared freezes, or when there + * are zero freeze plans for a page. It is always valid for vacuumlazy.c + * to freeze any page, by definition. This even includes pages that have + * no tuples with storage to consider in the first place. That way the + * 'totally_frozen' results from tdeheap_prepare_freeze_tuple can always be + * used in the same way, even when no freeze plans need to be executed to + * "freeze the page". Only the "freeze" path needs to consider the need + * to set pages all-frozen in the visibility map under this scheme. 
+ * + * When we freeze a page, we generally freeze all XIDs < OldestXmin, only + * leaving behind XIDs that are ineligible for freezing, if any. And so + * you might wonder why these trackers are necessary at all; why should + * _any_ page that VACUUM freezes _ever_ be left with XIDs/MXIDs that + * ratchet back the top-level NewRelfrozenXid/NewRelminMxid trackers? + * + * It is useful to use a definition of "freeze the page" that does not + * overspecify how MultiXacts are affected. tdeheap_prepare_freeze_tuple + * generally prefers to remove Multis eagerly, but lazy processing is used + * in cases where laziness allows VACUUM to avoid allocating a new Multi. + * The "freeze the page" trackers enable this flexibility. + */ + TransactionId FreezePageRelfrozenXid; + MultiXactId FreezePageRelminMxid; + + /* + * "No freeze" NewRelfrozenXid/NewRelminMxid trackers. + * + * These trackers are maintained in the same way as the trackers used when + * VACUUM scans a page that isn't cleanup locked. Both code paths are + * based on the same general idea (do less work for this page during the + * ongoing VACUUM, at the cost of having to accept older final values). + */ + TransactionId NoFreezePageRelfrozenXid; + MultiXactId NoFreezePageRelminMxid; + +} HeapPageFreeze; + +/* ---------------- + * function prototypes for heap access method + * + * tdeheap_create, tdeheap_create_with_catalog, and tdeheap_drop_with_catalog + * are declared in catalog/heap.h + * ---------------- + */ + + +/* + * HeapScanIsValid + * True iff the heap scan is valid. + */ +#define HeapScanIsValid(scan) PointerIsValid(scan) + +extern TableScanDesc tdeheap_beginscan(Relation relation, Snapshot snapshot, + int nkeys, ScanKey key, + ParallelTableScanDesc parallel_scan, + uint32 flags); +extern void tdeheap_setscanlimits(TableScanDesc sscan, BlockNumber startBlk, + BlockNumber numBlks); +extern void tdeheapgetpage(TableScanDesc sscan, BlockNumber block); +extern void tdeheap_rescan(TableScanDesc sscan, ScanKey key, bool set_params, + bool allow_strat, bool allow_sync, bool allow_pagemode); +extern void tdeheap_endscan(TableScanDesc sscan); +extern HeapTuple tdeheap_getnext(TableScanDesc sscan, ScanDirection direction); +extern bool tdeheap_getnextslot(TableScanDesc sscan, + ScanDirection direction, struct TupleTableSlot *slot); +extern void tdeheap_set_tidrange(TableScanDesc sscan, ItemPointer mintid, + ItemPointer maxtid); +extern bool tdeheap_getnextslot_tidrange(TableScanDesc sscan, + ScanDirection direction, + TupleTableSlot *slot); +extern bool tdeheap_fetch(Relation relation, Snapshot snapshot, + HeapTuple tuple, Buffer *userbuf, bool keep_buf); +extern bool tdeheap_hot_search_buffer(ItemPointer tid, Relation relation, + Buffer buffer, Snapshot snapshot, HeapTuple heapTuple, + bool *all_dead, bool first_call); + +extern void tdeheap_get_latest_tid(TableScanDesc sscan, ItemPointer tid); + +extern BulkInsertState GetBulkInsertState(void); +extern void FreeBulkInsertState(BulkInsertState); +extern void ReleaseBulkInsertStatePin(BulkInsertState bistate); + +extern void tdeheap_insert(Relation relation, HeapTuple tup, CommandId cid, + int options, BulkInsertState bistate); +extern void tdeheap_multi_insert(Relation relation, struct TupleTableSlot **slots, + int ntuples, CommandId cid, int options, + BulkInsertState bistate); +extern TM_Result tdeheap_delete(Relation relation, ItemPointer tid, + CommandId cid, Snapshot crosscheck, bool wait, + struct TM_FailureData *tmfd, bool changingPart); +extern void 
tdeheap_finish_speculative(Relation relation, ItemPointer tid); +extern void tdeheap_abort_speculative(Relation relation, ItemPointer tid); +extern TM_Result tdeheap_update(Relation relation, ItemPointer otid, + HeapTuple newtup, + CommandId cid, Snapshot crosscheck, bool wait, + struct TM_FailureData *tmfd, LockTupleMode *lockmode, + TU_UpdateIndexes *update_indexes); +extern TM_Result tdeheap_lock_tuple(Relation relation, HeapTuple tuple, + CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy, + bool follow_updates, + Buffer *buffer, struct TM_FailureData *tmfd); + +extern void tdeheap_inplace_update(Relation relation, HeapTuple tuple); +extern bool tdeheap_prepare_freeze_tuple(HeapTupleHeader tuple, + const struct VacuumCutoffs *cutoffs, + HeapPageFreeze *pagefrz, + HeapTupleFreeze *frz, bool *totally_frozen); +extern void tdeheap_freeze_execute_prepared(Relation rel, Buffer buffer, + TransactionId snapshotConflictHorizon, + HeapTupleFreeze *tuples, int ntuples); +extern bool tdeheap_freeze_tuple(HeapTupleHeader tuple, + TransactionId relfrozenxid, TransactionId relminmxid, + TransactionId FreezeLimit, TransactionId MultiXactCutoff); +extern bool tdeheap_tuple_should_freeze(HeapTupleHeader tuple, + const struct VacuumCutoffs *cutoffs, + TransactionId *NoFreezePageRelfrozenXid, + MultiXactId *NoFreezePageRelminMxid); +extern bool tdeheap_tuple_needs_eventual_freeze(HeapTupleHeader tuple); + +extern void simple_tdeheap_insert(Relation relation, HeapTuple tup); +extern void simple_tdeheap_delete(Relation relation, ItemPointer tid); +extern void simple_tdeheap_update(Relation relation, ItemPointer otid, + HeapTuple tup, TU_UpdateIndexes *update_indexes); + +extern TransactionId tdeheap_index_delete_tuples(Relation rel, + TM_IndexDeleteOp *delstate); + +/* in heap/pruneheap.c */ +struct GlobalVisState; +extern void tdeheap_page_prune_opt(Relation relation, Buffer buffer); +extern int tdeheap_page_prune(Relation relation, Buffer buffer, + struct GlobalVisState *vistest, + TransactionId old_snap_xmin, + TimestampTz old_snap_ts, + int *nnewlpdead, + OffsetNumber *off_loc); +extern void tdeheap_page_prune_execute(Relation rel, Buffer buffer, + OffsetNumber *redirected, int nredirected, + OffsetNumber *nowdead, int ndead, + OffsetNumber *nowunused, int nunused); +extern void tdeheap_get_root_tuples(Page page, OffsetNumber *root_offsets); + +/* in heap/vacuumlazy.c */ +struct VacuumParams; +extern void tdeheap_vacuum_rel(Relation rel, + struct VacuumParams *params, BufferAccessStrategy bstrategy); + +/* in heap/pg_tdeam_visibility.c */ +extern bool HeapTupleSatisfiesVisibility(HeapTuple htup, Snapshot snapshot, + Buffer buffer); +extern TM_Result HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid, + Buffer buffer); +extern HTSV_Result HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin, + Buffer buffer); +extern HTSV_Result HeapTupleSatisfiesVacuumHorizon(HeapTuple htup, Buffer buffer, + TransactionId *dead_after); +extern void HeapTupleSetHintBits(HeapTupleHeader tuple, Buffer buffer, + uint16 infomask, TransactionId xid); +extern bool HeapTupleHeaderIsOnlyLocked(HeapTupleHeader tuple); +extern bool HeapTupleIsSurelyDead(HeapTuple htup, + struct GlobalVisState *vistest); + +/* + * To avoid leaking too much knowledge about reorderbuffer implementation + * details this is implemented in reorderbuffer.c not pg_tdeam_visibility.c + */ +struct HTAB; +extern bool ResolveCminCmaxDuringDecoding(struct HTAB *tuplecid_data, + Snapshot snapshot, + HeapTuple htup, + Buffer 
buffer, + CommandId *cmin, CommandId *cmax); +extern void HeapCheckForSerializableConflictOut(bool visible, Relation relation, HeapTuple tuple, + Buffer buffer, Snapshot snapshot); + +/* Defined in pg_tdeam_handler.c */ +extern bool is_tdeheap_rel(Relation rel); + +const TableAmRoutine * +GetPGTdeamTableAmRoutine(void); + +#endif /* PG_TDEAM_H */ diff --git a/contrib/pg_tde/src16/include/access/pg_tdeam_xlog.h b/contrib/pg_tde/src16/include/access/pg_tdeam_xlog.h new file mode 100644 index 00000000000..9f07212c1af --- /dev/null +++ b/contrib/pg_tde/src16/include/access/pg_tdeam_xlog.h @@ -0,0 +1,421 @@ +/*------------------------------------------------------------------------- + * + * pg_tdeam_xlog.h + * POSTGRES pg_tde access XLOG definitions. + * + * + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/heapam_xlog.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDEAM_XLOG_H +#define PG_TDEAM_XLOG_H + +#include "access/htup.h" +#include "access/xlogreader.h" +#include "lib/stringinfo.h" +#include "storage/buf.h" +#include "storage/bufpage.h" +#include "storage/relfilelocator.h" +#include "utils/relcache.h" + + +/* + * WAL record definitions for pg_tdeam.c's WAL operations + * + * XLOG allows to store some information in high 4 bits of log + * record xl_info field. We use 3 for opcode and one for init bit. + */ +#define XLOG_HEAP_INSERT 0x00 +#define XLOG_HEAP_DELETE 0x10 +#define XLOG_HEAP_UPDATE 0x20 +#define XLOG_HEAP_TRUNCATE 0x30 +#define XLOG_HEAP_HOT_UPDATE 0x40 +#define XLOG_HEAP_CONFIRM 0x50 +#define XLOG_HEAP_LOCK 0x60 +#define XLOG_HEAP_INPLACE 0x70 + +#define XLOG_HEAP_OPMASK 0x70 +/* + * When we insert 1st item on new page in INSERT, UPDATE, HOT_UPDATE, + * or MULTI_INSERT, we can (and we do) restore entire page in redo + */ +#define XLOG_HEAP_INIT_PAGE 0x80 +/* + * We ran out of opcodes, so pg_tdeam.c now has a second RmgrId. These opcodes + * are associated with RM_HEAP2_ID, but are not logically different from + * the ones above associated with RM_HEAP_ID. XLOG_HEAP_OPMASK applies to + * these, too. + */ +#define XLOG_HEAP2_REWRITE 0x00 +#define XLOG_HEAP2_PRUNE 0x10 +#define XLOG_HEAP2_VACUUM 0x20 +#define XLOG_HEAP2_FREEZE_PAGE 0x30 +#define XLOG_HEAP2_VISIBLE 0x40 +#define XLOG_HEAP2_MULTI_INSERT 0x50 +#define XLOG_HEAP2_LOCK_UPDATED 0x60 +#define XLOG_HEAP2_NEW_CID 0x70 + +/* + * xl_tdeheap_insert/xl_tdeheap_multi_insert flag values, 8 bits are available. + */ +/* PD_ALL_VISIBLE was cleared */ +#define XLH_INSERT_ALL_VISIBLE_CLEARED (1<<0) +#define XLH_INSERT_LAST_IN_MULTI (1<<1) +#define XLH_INSERT_IS_SPECULATIVE (1<<2) +#define XLH_INSERT_CONTAINS_NEW_TUPLE (1<<3) +#define XLH_INSERT_ON_TOAST_RELATION (1<<4) + +/* all_frozen_set always implies all_visible_set */ +#define XLH_INSERT_ALL_FROZEN_SET (1<<5) + +/* + * xl_tdeheap_update flag values, 8 bits are available. 
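The opcode/flag split above is easiest to see from the consuming side. Below is a minimal, purely illustrative sketch (the function name is hypothetical; real dispatch lives in tdeheap_redo()/tdeheap2_redo(), declared later in this header) of how a redo routine decomposes xl_info:

static void
tdeheap_redo_dispatch_sketch(XLogReaderState *record)
{
    /* strip the generic XLR_INFO_MASK bits, keeping the rmgr-private ones */
    uint8       info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;

    switch (info & XLOG_HEAP_OPMASK)
    {
        case XLOG_HEAP_INSERT:
            if (info & XLOG_HEAP_INIT_PAGE)
            {
                /* reinitialize the whole page before replaying the insert */
            }
            break;
        case XLOG_HEAP_DELETE:
            /* an xl_tdeheap_delete struct follows; test xlrec->flags here */
            break;
        default:
            break;
    }
}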
+ */ +/* PD_ALL_VISIBLE was cleared */ +#define XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED (1<<0) +/* PD_ALL_VISIBLE was cleared in the 2nd page */ +#define XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED (1<<1) +#define XLH_UPDATE_CONTAINS_OLD_TUPLE (1<<2) +#define XLH_UPDATE_CONTAINS_OLD_KEY (1<<3) +#define XLH_UPDATE_CONTAINS_NEW_TUPLE (1<<4) +#define XLH_UPDATE_PREFIX_FROM_OLD (1<<5) +#define XLH_UPDATE_SUFFIX_FROM_OLD (1<<6) + +/* convenience macro for checking whether any form of old tuple was logged */ +#define XLH_UPDATE_CONTAINS_OLD \ + (XLH_UPDATE_CONTAINS_OLD_TUPLE | XLH_UPDATE_CONTAINS_OLD_KEY) + +/* + * xl_tdeheap_delete flag values, 8 bits are available. + */ +/* PD_ALL_VISIBLE was cleared */ +#define XLH_DELETE_ALL_VISIBLE_CLEARED (1<<0) +#define XLH_DELETE_CONTAINS_OLD_TUPLE (1<<1) +#define XLH_DELETE_CONTAINS_OLD_KEY (1<<2) +#define XLH_DELETE_IS_SUPER (1<<3) +#define XLH_DELETE_IS_PARTITION_MOVE (1<<4) + +/* convenience macro for checking whether any form of old tuple was logged */ +#define XLH_DELETE_CONTAINS_OLD \ + (XLH_DELETE_CONTAINS_OLD_TUPLE | XLH_DELETE_CONTAINS_OLD_KEY) + +/* This is what we need to know about delete */ +typedef struct xl_tdeheap_delete +{ + TransactionId xmax; /* xmax of the deleted tuple */ + OffsetNumber offnum; /* deleted tuple's offset */ + uint8 infobits_set; /* infomask bits */ + uint8 flags; +} xl_tdeheap_delete; + +#define SizeOfHeapDelete (offsetof(xl_tdeheap_delete, flags) + sizeof(uint8)) + +/* + * xl_tdeheap_truncate flag values, 8 bits are available. + */ +#define XLH_TRUNCATE_CASCADE (1<<0) +#define XLH_TRUNCATE_RESTART_SEQS (1<<1) + +/* + * For truncate we list all truncated relids in an array, followed by all + * sequence relids that need to be restarted, if any. + * All rels are always within the same database, so we just list dbid once. + */ +typedef struct xl_tdeheap_truncate +{ + Oid dbId; + uint32 nrelids; + uint8 flags; + Oid relids[FLEXIBLE_ARRAY_MEMBER]; +} xl_tdeheap_truncate; + +#define SizeOfHeapTruncate (offsetof(xl_tdeheap_truncate, relids)) + +/* + * We don't store the whole fixed part (HeapTupleHeaderData) of an inserted + * or updated tuple in WAL; we can save a few bytes by reconstructing the + * fields that are available elsewhere in the WAL record, or perhaps just + * plain needn't be reconstructed. These are the fields we must store. + */ +typedef struct xl_tdeheap_header +{ + uint16 t_infomask2; + uint16 t_infomask; + uint8 t_hoff; +} xl_tdeheap_header; + +#define SizeOfHeapHeader (offsetof(xl_tdeheap_header, t_hoff) + sizeof(uint8)) + +/* This is what we need to know about insert */ +typedef struct xl_tdeheap_insert +{ + OffsetNumber offnum; /* inserted tuple's offset */ + uint8 flags; + + /* xl_tdeheap_header & TUPLE DATA in backup block 0 */ +} xl_tdeheap_insert; + +#define SizeOfHeapInsert (offsetof(xl_tdeheap_insert, flags) + sizeof(uint8)) + +/* + * This is what we need to know about a multi-insert. + * + * The main data of the record consists of this xl_tdeheap_multi_insert header. + * 'offsets' array is omitted if the whole page is reinitialized + * (XLOG_HEAP_INIT_PAGE). + * + * In block 0's data portion, there is an xl_multi_insert_tuple struct, + * followed by the tuple data for each tuple. There is padding to align + * each xl_multi_insert_tuple struct. 
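To make the multi-insert layout concrete, here is a sketch of walking block 0's payload, using the structs declared just below (the helper is hypothetical; the per-tuple alignment is shown as SHORTALIGN, which is what the upstream redo code uses, stated here as an assumption):

static void
walk_multi_insert_sketch(XLogReaderState *record)
{
    xl_tdeheap_multi_insert *xlrec = (xl_tdeheap_multi_insert *) XLogRecGetData(record);
    Size        len;
    char       *start = XLogRecGetBlockData(record, 0, &len);
    char       *data = start;

    for (int i = 0; i < xlrec->ntuples; i++)
    {
        /* each per-tuple header is aligned, per the comment above */
        xl_multi_insert_tuple *tuphdr = (xl_multi_insert_tuple *) SHORTALIGN(data);
        char       *tupdata = (char *) tuphdr + SizeOfMultiInsertTuple;

        /* tuphdr->datalen bytes of tuple data start at tupdata */
        data = tupdata + tuphdr->datalen;
    }

    Assert(data <= start + len);
}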
+ */ +typedef struct xl_tdeheap_multi_insert +{ + uint8 flags; + uint16 ntuples; + OffsetNumber offsets[FLEXIBLE_ARRAY_MEMBER]; +} xl_tdeheap_multi_insert; + +#define SizeOfHeapMultiInsert offsetof(xl_tdeheap_multi_insert, offsets) + +typedef struct xl_multi_insert_tuple +{ + uint16 datalen; /* size of tuple data that follows */ + uint16 t_infomask2; + uint16 t_infomask; + uint8 t_hoff; + /* TUPLE DATA FOLLOWS AT END OF STRUCT */ +} xl_multi_insert_tuple; + +#define SizeOfMultiInsertTuple (offsetof(xl_multi_insert_tuple, t_hoff) + sizeof(uint8)) + +/* + * This is what we need to know about update|hot_update + * + * Backup blk 0: new page + * + * If XLH_UPDATE_PREFIX_FROM_OLD or XLH_UPDATE_SUFFIX_FROM_OLD flags are set, + * the prefix and/or suffix come first, as one or two uint16s. + * + * After that, xl_tdeheap_header and new tuple data follow. The new tuple + * data doesn't include the prefix and suffix, which are copied from the + * old tuple on replay. + * + * If XLH_UPDATE_CONTAINS_NEW_TUPLE flag is given, the tuple data is + * included even if a full-page image was taken. + * + * Backup blk 1: old page, if different. (no data, just a reference to the blk) + */ +typedef struct xl_tdeheap_update +{ + TransactionId old_xmax; /* xmax of the old tuple */ + OffsetNumber old_offnum; /* old tuple's offset */ + uint8 old_infobits_set; /* infomask bits to set on old tuple */ + uint8 flags; + TransactionId new_xmax; /* xmax of the new tuple */ + OffsetNumber new_offnum; /* new tuple's offset */ + + /* + * If XLH_UPDATE_CONTAINS_OLD_TUPLE or XLH_UPDATE_CONTAINS_OLD_KEY flags + * are set, xl_tdeheap_header and tuple data for the old tuple follow. + */ +} xl_tdeheap_update; + +#define SizeOfHeapUpdate (offsetof(xl_tdeheap_update, new_offnum) + sizeof(OffsetNumber)) + +/* + * This is what we need to know about page pruning (both during VACUUM and + * during opportunistic pruning) + * + * The array of OffsetNumbers following the fixed part of the record contains: + * * for each redirected item: the item offset, then the offset redirected to + * * for each now-dead item: the item offset + * * for each now-unused item: the item offset + * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused. + * Note that nunused is not explicitly stored, but may be found by reference + * to the total record length. + * + * Acquires a full cleanup lock. + */ +typedef struct xl_tdeheap_prune +{ + TransactionId snapshotConflictHorizon; + uint16 nredirected; + uint16 ndead; + bool isCatalogRel; /* to handle recovery conflict during logical + * decoding on standby */ + /* OFFSET NUMBERS are in the block reference 0 */ +} xl_tdeheap_prune; + +#define SizeOfHeapPrune (offsetof(xl_tdeheap_prune, isCatalogRel) + sizeof(bool)) + +/* + * The vacuum page record is similar to the prune record, but can only mark + * already LP_DEAD items LP_UNUSED (during VACUUM's second heap pass) + * + * Acquires an ordinary exclusive lock only. 
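The offset-array convention shared by the prune record above and the vacuum record below can be made concrete with a short sketch (hypothetical helper). Note in particular how nunused is never stored but is recovered from the block data length, exactly as the prune-record comment describes:

static void
decode_prune_offsets_sketch(XLogReaderState *record)
{
    xl_tdeheap_prune *xlrec = (xl_tdeheap_prune *) XLogRecGetData(record);
    Size        datalen;
    OffsetNumber *redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
    OffsetNumber *end = (OffsetNumber *) ((char *) redirected + datalen);
    OffsetNumber *nowdead = redirected + xlrec->nredirected * 2;    /* (from, to) pairs */
    OffsetNumber *nowunused = nowdead + xlrec->ndead;
    int         nunused = (int) (end - nowunused);  /* derived, not stored */

    Assert(nunused >= 0);
}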
+ */ +typedef struct xl_tdeheap_vacuum +{ + uint16 nunused; + /* OFFSET NUMBERS are in the block reference 0 */ +} xl_tdeheap_vacuum; + +#define SizeOfHeapVacuum (offsetof(xl_tdeheap_vacuum, nunused) + sizeof(uint16)) + +/* flags for infobits_set */ +#define XLHL_XMAX_IS_MULTI 0x01 +#define XLHL_XMAX_LOCK_ONLY 0x02 +#define XLHL_XMAX_EXCL_LOCK 0x04 +#define XLHL_XMAX_KEYSHR_LOCK 0x08 +#define XLHL_KEYS_UPDATED 0x10 + +/* flag bits for xl_tdeheap_lock / xl_tdeheap_lock_updated's flag field */ +#define XLH_LOCK_ALL_FROZEN_CLEARED 0x01 + +/* This is what we need to know about lock */ +typedef struct xl_tdeheap_lock +{ + TransactionId xmax; /* might be a MultiXactId */ + OffsetNumber offnum; /* locked tuple's offset on page */ + uint8 infobits_set; /* infomask and infomask2 bits to set */ + uint8 flags; /* XLH_LOCK_* flag bits */ +} xl_tdeheap_lock; + +#define SizeOfHeapLock (offsetof(xl_tdeheap_lock, flags) + sizeof(uint8)) + +/* This is what we need to know about locking an updated version of a row */ +typedef struct xl_tdeheap_lock_updated +{ + TransactionId xmax; + OffsetNumber offnum; + uint8 infobits_set; + uint8 flags; +} xl_tdeheap_lock_updated; + +#define SizeOfHeapLockUpdated (offsetof(xl_tdeheap_lock_updated, flags) + sizeof(uint8)) + +/* This is what we need to know about confirmation of speculative insertion */ +typedef struct xl_tdeheap_confirm +{ + OffsetNumber offnum; /* confirmed tuple's offset on page */ +} xl_tdeheap_confirm; + +#define SizeOfHeapConfirm (offsetof(xl_tdeheap_confirm, offnum) + sizeof(OffsetNumber)) + +/* This is what we need to know about in-place update */ +typedef struct xl_tdeheap_inplace +{ + OffsetNumber offnum; /* updated tuple's offset on page */ + /* TUPLE DATA FOLLOWS AT END OF STRUCT */ +} xl_tdeheap_inplace; + +#define SizeOfHeapInplace (offsetof(xl_tdeheap_inplace, offnum) + sizeof(OffsetNumber)) + +/* + * This struct represents a 'freeze plan', which describes how to freeze a + * group of one or more heap tuples (appears in xl_tdeheap_freeze_page record) + */ +/* 0x01 was XLH_FREEZE_XMIN */ +#define XLH_FREEZE_XVAC 0x02 +#define XLH_INVALID_XVAC 0x04 + +typedef struct xl_tdeheap_freeze_plan +{ + TransactionId xmax; + uint16 t_infomask2; + uint16 t_infomask; + uint8 frzflags; + + /* Length of individual page offset numbers array for this plan */ + uint16 ntuples; +} xl_tdeheap_freeze_plan; + +/* + * This is what we need to know about a block being frozen during vacuum + * + * Backup block 0's data contains an array of xl_tdeheap_freeze_plan structs + * (with nplans elements), followed by one or more page offset number arrays. + * Each such page offset number array corresponds to a single freeze plan + * (REDO routine freezes corresponding heap tuples using freeze plan). 
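As an illustration of the freeze-page payload described above, the following sketch (hypothetical helper; xl_tdeheap_freeze_page itself is declared directly below) pairs each freeze plan with its slice of the trailing offset number array:

static void
walk_freeze_plans_sketch(XLogReaderState *record)
{
    xl_tdeheap_freeze_page *xlrec = (xl_tdeheap_freeze_page *) XLogRecGetData(record);
    xl_tdeheap_freeze_plan *plans;
    OffsetNumber *offsets;
    int         curoff = 0;

    plans = (xl_tdeheap_freeze_plan *) XLogRecGetBlockData(record, 0, NULL);
    offsets = (OffsetNumber *) ((char *) plans +
                                xlrec->nplans * sizeof(xl_tdeheap_freeze_plan));

    for (int p = 0; p < xlrec->nplans; p++)
    {
        /* each plan owns the next plans[p].ntuples page offsets */
        for (int i = 0; i < plans[p].ntuples; i++)
        {
            OffsetNumber off = offsets[curoff++];

            /* apply plans[p] (xmax, infomasks, frzflags) to the tuple at off */
            (void) off;
        }
    }
}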
+ */ +typedef struct xl_tdeheap_freeze_page +{ + TransactionId snapshotConflictHorizon; + uint16 nplans; + bool isCatalogRel; /* to handle recovery conflict during logical + * decoding on standby */ + + /* + * In payload of blk 0 : FREEZE PLANS and OFFSET NUMBER ARRAY + */ +} xl_tdeheap_freeze_page; + +#define SizeOfHeapFreezePage (offsetof(xl_tdeheap_freeze_page, isCatalogRel) + sizeof(bool)) + +/* + * This is what we need to know about setting a visibility map bit + * + * Backup blk 0: visibility map buffer + * Backup blk 1: heap buffer + */ +typedef struct xl_tdeheap_visible +{ + TransactionId snapshotConflictHorizon; + uint8 flags; +} xl_tdeheap_visible; + +#define SizeOfHeapVisible (offsetof(xl_tdeheap_visible, flags) + sizeof(uint8)) + +typedef struct xl_tdeheap_new_cid +{ + /* + * store toplevel xid so we don't have to merge cids from different + * transactions + */ + TransactionId top_xid; + CommandId cmin; + CommandId cmax; + CommandId combocid; /* just for debugging */ + + /* + * Store the relfilelocator/ctid pair to facilitate lookups. + */ + RelFileLocator target_locator; + ItemPointerData target_tid; +} xl_tdeheap_new_cid; + +#define SizeOfHeapNewCid (offsetof(xl_tdeheap_new_cid, target_tid) + sizeof(ItemPointerData)) + +/* logical rewrite xlog record header */ +typedef struct xl_tdeheap_rewrite_mapping +{ + TransactionId mapped_xid; /* xid that might need to see the row */ + Oid mapped_db; /* DbOid or InvalidOid for shared rels */ + Oid mapped_rel; /* Oid of the mapped relation */ + off_t offset; /* How far have we written so far */ + uint32 num_mappings; /* Number of in-memory mappings */ + XLogRecPtr start_lsn; /* Insert LSN at begin of rewrite */ +} xl_tdeheap_rewrite_mapping; + +extern void HeapTupleHeaderAdvanceConflictHorizon(HeapTupleHeader tuple, + TransactionId *snapshotConflictHorizon); + +extern void tdeheap_redo(XLogReaderState *record); +extern void tdeheap_desc(StringInfo buf, XLogReaderState *record); +extern const char *tdeheap_identify(uint8 info); +extern void tdeheap_mask(char *pagedata, BlockNumber blkno); +extern void tdeheap2_redo(XLogReaderState *record); +extern void tdeheap2_desc(StringInfo buf, XLogReaderState *record); +extern const char *tdeheap2_identify(uint8 info); +extern void tdeheap_xlog_logical_rewrite(XLogReaderState *r); + +extern XLogRecPtr log_tdeheap_visible(Relation rel, Buffer tdeheap_buffer, + Buffer vm_buffer, + TransactionId snapshotConflictHorizon, + uint8 vmflags); + +#endif /* PG_TDEAM_XLOG_H */ diff --git a/contrib/pg_tde/src16/include/access/pg_tdetoast.h b/contrib/pg_tde/src16/include/access/pg_tdetoast.h new file mode 100644 index 00000000000..c17a7816cdb --- /dev/null +++ b/contrib/pg_tde/src16/include/access/pg_tdetoast.h @@ -0,0 +1,149 @@ +/*------------------------------------------------------------------------- + * + * heaptoast.h + * Heap-specific definitions for external and compressed storage + * of variable size attributes. + * + * Copyright (c) 2000-2023, PostgreSQL Global Development Group + * + * src/include/access/heaptoast.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_TOAST_H +#define PG_TDE_TOAST_H + +#include "access/htup_details.h" +#include "storage/lockdefs.h" +#include "utils/relcache.h" + +/* + * Find the maximum size of a tuple if there are to be N tuples per page. 
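As a worked example (assuming the stock configuration: BLCKSZ = 8192, MAXALIGN of 8, 24-byte page header, 4-byte line pointers), the macro defined directly below and the TOAST symbols derived from it further down come out to:

    MaximumBytesPerTuple(4) = MAXALIGN_DOWN((8192 - MAXALIGN(24 + 4 * 4)) / 4)
                            = MAXALIGN_DOWN(8152 / 4)
                            = MAXALIGN_DOWN(2038) = 2032

    TOAST_TUPLE_THRESHOLD = TOAST_TUPLE_TARGET = 2032 bytes
    TOAST_MAX_CHUNK_SIZE  = 2032 - 24 - 4 - 4 - 4 = 1996 bytes

So with default build options a row becomes a toasting candidate once it exceeds roughly 2 kB, and out-of-line values are stored in 1996-byte chunks.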
+ */ +#define MaximumBytesPerTuple(tuplesPerPage) \ + MAXALIGN_DOWN((BLCKSZ - \ + MAXALIGN(SizeOfPageHeaderData + (tuplesPerPage) * sizeof(ItemIdData))) \ + / (tuplesPerPage)) + +/* + * These symbols control toaster activation. If a tuple is larger than + * TOAST_TUPLE_THRESHOLD, we will try to toast it down to no more than + * TOAST_TUPLE_TARGET bytes through compressing compressible fields and + * moving EXTENDED and EXTERNAL data out-of-line. + * + * The numbers need not be the same, though they currently are. It doesn't + * make sense for TARGET to exceed THRESHOLD, but it could be useful to make + * it be smaller. + * + * Currently we choose both values to match the largest tuple size for which + * TOAST_TUPLES_PER_PAGE tuples can fit on a heap page. + * + * XXX while these can be modified without initdb, some thought needs to be + * given to needs_toast_table() in toasting.c before unleashing random + * changes. Also see LOBLKSIZE in large_object.h, which can *not* be + * changed without initdb. + */ +#define TOAST_TUPLES_PER_PAGE 4 + +#define TOAST_TUPLE_THRESHOLD MaximumBytesPerTuple(TOAST_TUPLES_PER_PAGE) + +#define TOAST_TUPLE_TARGET TOAST_TUPLE_THRESHOLD + +/* + * The code will also consider moving MAIN data out-of-line, but only as a + * last resort if the previous steps haven't reached the target tuple size. + * In this phase we use a different target size, currently equal to the + * largest tuple that will fit on a heap page. This is reasonable since + * the user has told us to keep the data in-line if at all possible. + */ +#define TOAST_TUPLES_PER_PAGE_MAIN 1 + +#define TOAST_TUPLE_TARGET_MAIN MaximumBytesPerTuple(TOAST_TUPLES_PER_PAGE_MAIN) + +/* + * If an index value is larger than TOAST_INDEX_TARGET, we will try to + * compress it (we can't move it out-of-line, however). Note that this + * number is per-datum, not per-tuple, for simplicity in index_form_tuple(). + */ +#define TOAST_INDEX_TARGET (MaxHeapTupleSize / 16) + +/* + * When we store an oversize datum externally, we divide it into chunks + * containing at most TOAST_MAX_CHUNK_SIZE data bytes. This number *must* + * be small enough that the completed toast-table tuple (including the + * ID and sequence fields and all overhead) will fit on a page. + * The coding here sets the size on the theory that we want to fit + * EXTERN_TUPLES_PER_PAGE tuples of maximum size onto a page. + * + * NB: Changing TOAST_MAX_CHUNK_SIZE requires an initdb. + */ +#define EXTERN_TUPLES_PER_PAGE 4 /* tweak only this */ + +#define EXTERN_TUPLE_MAX_SIZE MaximumBytesPerTuple(EXTERN_TUPLES_PER_PAGE) + +#define TOAST_MAX_CHUNK_SIZE \ + (EXTERN_TUPLE_MAX_SIZE - \ + MAXALIGN(SizeofHeapTupleHeader) - \ + sizeof(Oid) - \ + sizeof(int32) - \ + VARHDRSZ) + +/* ---------- + * tdeheap_toast_insert_or_update - + * + * Called by tdeheap_insert() and tdeheap_update(). + * ---------- + */ +extern HeapTuple tdeheap_toast_insert_or_update(Relation rel, HeapTuple newtup, + HeapTuple oldtup, int options); + +/* ---------- + * tdeheap_toast_delete - + * + * Called by tdeheap_delete(). + * ---------- + */ +extern void tdeheap_toast_delete(Relation rel, HeapTuple oldtup, + bool is_speculative); + +/* ---------- + * toast_flatten_tuple - + * + * "Flatten" a tuple to contain no out-of-line toasted fields. + * (This does not eliminate compressed or short-header datums.) 
+ * ---------- + */ +extern HeapTuple toast_flatten_tuple(HeapTuple tup, TupleDesc tupleDesc); + +/* ---------- + * toast_flatten_tuple_to_datum - + * + * "Flatten" a tuple containing out-of-line toasted fields into a Datum. + * ---------- + */ +extern Datum toast_flatten_tuple_to_datum(HeapTupleHeader tup, + uint32 tup_len, + TupleDesc tupleDesc); + +/* ---------- + * toast_build_flattened_tuple - + * + * Build a tuple containing no out-of-line toasted fields. + * (This does not eliminate compressed or short-header datums.) + * ---------- + */ +extern HeapTuple toast_build_flattened_tuple(TupleDesc tupleDesc, + Datum *values, + bool *isnull); + +/* ---------- + * tdeheap_fetch_toast_slice + * + * Fetch a slice from a toast value stored in a heap table. + * ---------- + */ +extern void tdeheap_fetch_toast_slice(Relation toastrel, Oid valueid, + int32 attrsize, int32 sliceoffset, + int32 slicelength, struct varlena *result); + +#endif /* PG_TDE_TOAST_H */ diff --git a/contrib/pg_tde/src17/COMMIT b/contrib/pg_tde/src17/COMMIT new file mode 100644 index 00000000000..3d690f37a9f --- /dev/null +++ b/contrib/pg_tde/src17/COMMIT @@ -0,0 +1 @@ +84e40a3e113b2d74a655358d8791dc556579a241 diff --git a/contrib/pg_tde/src17/access/pg_tde_io.c b/contrib/pg_tde/src17/access/pg_tde_io.c new file mode 100644 index 00000000000..4136b04b56a --- /dev/null +++ b/contrib/pg_tde/src17/access/pg_tde_io.c @@ -0,0 +1,894 @@ +/*------------------------------------------------------------------------- + * + * hio.c + * POSTGRES heap access method input/output code. + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/heap/hio.c + * + *------------------------------------------------------------------------- + */ + +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tdeam.h" +#include "access/pg_tde_io.h" +#include "access/pg_tde_visibilitymap.h" +#include "encryption/enc_tde.h" + +#include "access/htup_details.h" +#include "storage/bufmgr.h" +#include "storage/freespace.h" +#include "storage/lmgr.h" + + +/* + * tdeheap_RelationPutHeapTuple - place tuple at specified page + * + * !!! EREPORT(ERROR) IS DISALLOWED HERE !!! Must PANIC on failure!!! + * + * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer. + */ +void +tdeheap_RelationPutHeapTuple(Relation relation, + Buffer buffer, + HeapTuple tuple, + bool encrypt, + bool token) +{ + Page pageHeader; + OffsetNumber offnum; + + /* + * A tuple that's being inserted speculatively should already have its + * token set. + */ + Assert(!token || HeapTupleHeaderIsSpeculative(tuple->t_data)); + + /* + * Do not allow tuples with invalid combinations of hint bits to be placed + * on a page. This combination is detected as corruption by the + * contrib/amcheck logic, so if you disable this assertion, make + * corresponding changes there. 
+ */ + Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_COMMITTED) && + (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI))); + + /* Add the tuple to the page */ + pageHeader = BufferGetPage(buffer); + + if (encrypt) + offnum = TDE_PageAddItem(relation->rd_locator, BufferGetBlockNumber(buffer), pageHeader, (Item) tuple->t_data, + tuple->t_len, InvalidOffsetNumber, false, true); + else + offnum = PageAddItem(pageHeader, (Item) tuple->t_data, + tuple->t_len, InvalidOffsetNumber, false, true); + + if (offnum == InvalidOffsetNumber) + elog(PANIC, "failed to add tuple to page"); + + /* Update tuple->t_self to the actual position where it was stored */ + ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum); + + /* + * Insert the correct position into CTID of the stored tuple, too (unless + * this is a speculative insertion, in which case the token is held in + * CTID field instead) + */ + if (!token) + { + ItemId itemId = PageGetItemId(pageHeader, offnum); + HeapTupleHeader item = (HeapTupleHeader) PageGetItem(pageHeader, itemId); + + item->t_ctid = tuple->t_self; + } +} + +/* + * Read in a buffer in mode, using bulk-insert strategy if bistate isn't NULL. + */ +static Buffer +ReadBufferBI(Relation relation, BlockNumber targetBlock, + ReadBufferMode mode, BulkInsertState bistate) +{ + Buffer buffer; + + /* If not bulk-insert, exactly like ReadBuffer */ + if (!bistate) + return ReadBufferExtended(relation, MAIN_FORKNUM, targetBlock, + mode, NULL); + + /* If we have the desired block already pinned, re-pin and return it */ + if (bistate->current_buf != InvalidBuffer) + { + if (BufferGetBlockNumber(bistate->current_buf) == targetBlock) + { + /* + * Currently the LOCK variants are only used for extending + * relation, which should never reach this branch. + */ + Assert(mode != RBM_ZERO_AND_LOCK && + mode != RBM_ZERO_AND_CLEANUP_LOCK); + + IncrBufferRefCount(bistate->current_buf); + return bistate->current_buf; + } + /* ... else drop the old buffer */ + ReleaseBuffer(bistate->current_buf); + bistate->current_buf = InvalidBuffer; + } + + /* Perform a read using the buffer strategy */ + buffer = ReadBufferExtended(relation, MAIN_FORKNUM, targetBlock, + mode, bistate->strategy); + + /* Save the selected block as target for future inserts */ + IncrBufferRefCount(buffer); + bistate->current_buf = buffer; + + return buffer; +} + +/* + * For each heap page which is all-visible, acquire a pin on the appropriate + * visibility map page, if we haven't already got one. + * + * To avoid complexity in the callers, either buffer1 or buffer2 may be + * InvalidBuffer if only one buffer is involved. For the same reason, block2 + * may be smaller than block1. + * + * Returns whether buffer locks were temporarily released. + */ +static bool +GetVisibilityMapPins(Relation relation, Buffer buffer1, Buffer buffer2, + BlockNumber block1, BlockNumber block2, + Buffer *vmbuffer1, Buffer *vmbuffer2) +{ + bool need_to_pin_buffer1; + bool need_to_pin_buffer2; + bool released_locks = false; + + /* + * Swap buffers around to handle case of a single block/buffer, and to + * handle if lock ordering rules require to lock block2 first. 
+ */ + if (!BufferIsValid(buffer1) || + (BufferIsValid(buffer2) && block1 > block2)) + { + Buffer tmpbuf = buffer1; + Buffer *tmpvmbuf = vmbuffer1; + BlockNumber tmpblock = block1; + + buffer1 = buffer2; + vmbuffer1 = vmbuffer2; + block1 = block2; + + buffer2 = tmpbuf; + vmbuffer2 = tmpvmbuf; + block2 = tmpblock; + } + + Assert(BufferIsValid(buffer1)); + Assert(buffer2 == InvalidBuffer || block1 <= block2); + + while (1) + { + /* Figure out which pins we need but don't have. */ + need_to_pin_buffer1 = PageIsAllVisible(BufferGetPage(buffer1)) + && !tdeheap_visibilitymap_pin_ok(block1, *vmbuffer1); + need_to_pin_buffer2 = buffer2 != InvalidBuffer + && PageIsAllVisible(BufferGetPage(buffer2)) + && !tdeheap_visibilitymap_pin_ok(block2, *vmbuffer2); + if (!need_to_pin_buffer1 && !need_to_pin_buffer2) + break; + + /* We must unlock both buffers before doing any I/O. */ + released_locks = true; + LockBuffer(buffer1, BUFFER_LOCK_UNLOCK); + if (buffer2 != InvalidBuffer && buffer2 != buffer1) + LockBuffer(buffer2, BUFFER_LOCK_UNLOCK); + + /* Get pins. */ + if (need_to_pin_buffer1) + tdeheap_visibilitymap_pin(relation, block1, vmbuffer1); + if (need_to_pin_buffer2) + tdeheap_visibilitymap_pin(relation, block2, vmbuffer2); + + /* Relock buffers. */ + LockBuffer(buffer1, BUFFER_LOCK_EXCLUSIVE); + if (buffer2 != InvalidBuffer && buffer2 != buffer1) + LockBuffer(buffer2, BUFFER_LOCK_EXCLUSIVE); + + /* + * If there are two buffers involved and we pinned just one of them, + * it's possible that the second one became all-visible while we were + * busy pinning the first one. If it looks like that's a possible + * scenario, we'll need to make a second pass through this loop. + */ + if (buffer2 == InvalidBuffer || buffer1 == buffer2 + || (need_to_pin_buffer1 && need_to_pin_buffer2)) + break; + } + + return released_locks; +} + +/* + * Extend the relation. By multiple pages, if beneficial. + * + * If the caller needs multiple pages (num_pages > 1), we always try to extend + * by at least that much. + * + * If there is contention on the extension lock, we don't just extend "for + * ourselves", but we try to help others. We can do so by adding empty pages + * into the FSM. Typically there is no contention when we can't use the FSM. + * + * We do have to limit the number of pages to extend by to some value, as the + * buffers for all the extended pages need to, temporarily, be pinned. For now + * we define MAX_BUFFERS_TO_EXTEND_BY to be 64 buffers, it's hard to see + * benefits with higher numbers. This partially is because copyfrom.c's + * MAX_BUFFERED_TUPLES / MAX_BUFFERED_BYTES prevents larger multi_inserts. + * + * Returns a buffer for a newly extended block. If possible, the buffer is + * returned exclusively locked. *did_unlock is set to true if the lock had to + * be released, false otherwise. + * + * + * XXX: It would likely be beneficial for some workloads to extend more + * aggressively, e.g. using a heuristic based on the relation size. + */ +static Buffer +RelationAddBlocks(Relation relation, BulkInsertState bistate, + int num_pages, bool use_fsm, bool *did_unlock) +{ +#define MAX_BUFFERS_TO_EXTEND_BY 64 + Buffer victim_buffers[MAX_BUFFERS_TO_EXTEND_BY]; + BlockNumber first_block = InvalidBlockNumber; + BlockNumber last_block = InvalidBlockNumber; + uint32 extend_by_pages; + uint32 not_in_fsm_pages; + Buffer buffer; + Page page; + + /* + * Determine by how many pages to try to extend by. 
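As a concrete illustration of the computation below (the numbers are hypothetical): a tdeheap_multi_insert() caller asking for num_pages = 4 while 9 other backends wait on the extension lock gets extend_by_pages = 4 + 4 * 9 = 40; if the same bistate had already extended by 48 pages earlier, the Max() against bistate->already_extended_by lifts that to 48; and the MAX_BUFFERS_TO_EXTEND_BY cap then clamps anything larger to 64.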
+ */ + if (bistate == NULL && !use_fsm) + { + /* + * If we have neither bistate, nor can use the FSM, we can't bulk + * extend - there'd be no way to find the additional pages. + */ + extend_by_pages = 1; + } + else + { + uint32 waitcount; + + /* + * Try to extend at least by the number of pages the caller needs. We + * can remember the additional pages (either via FSM or bistate). + */ + extend_by_pages = num_pages; + + if (!RELATION_IS_LOCAL(relation)) + waitcount = RelationExtensionLockWaiterCount(relation); + else + waitcount = 0; + + /* + * Multiply the number of pages to extend by the number of waiters. Do + * this even if we're not using the FSM, as it still relieves + * contention, by deferring the next time this backend needs to + * extend. In that case the extended pages will be found via + * bistate->next_free. + */ + extend_by_pages += extend_by_pages * waitcount; + + /* --- + * If we previously extended using the same bistate, it's very likely + * we'll extend some more. Try to extend by as many pages as + * before. This can be important for performance for several reasons, + * including: + * + * - It prevents mdzeroextend() switching between extending the + * relation in different ways, which is inefficient for some + * filesystems. + * + * - Contention is often intermittent. Even if we currently don't see + * other waiters (see above), extending by larger amounts can + * prevent future contention. + * --- + */ + if (bistate) + extend_by_pages = Max(extend_by_pages, bistate->already_extended_by); + + /* + * Can't extend by more than MAX_BUFFERS_TO_EXTEND_BY, we need to pin + * them all concurrently. + */ + extend_by_pages = Min(extend_by_pages, MAX_BUFFERS_TO_EXTEND_BY); + } + + /* + * How many of the extended pages should be entered into the FSM? + * + * If we have a bistate, only enter pages that we don't need ourselves + * into the FSM. Otherwise every other backend will immediately try to + * use the pages this backend needs for itself, causing unnecessary + * contention. If we don't have a bistate, we can't avoid the FSM. + * + * Never enter the page returned into the FSM, we'll immediately use it. + */ + if (num_pages > 1 && bistate == NULL) + not_in_fsm_pages = 1; + else + not_in_fsm_pages = num_pages; + + /* prepare to put another buffer into the bistate */ + if (bistate && bistate->current_buf != InvalidBuffer) + { + ReleaseBuffer(bistate->current_buf); + bistate->current_buf = InvalidBuffer; + } + + /* + * Extend the relation. We ask for the first returned page to be locked, + * so that we are sure that nobody has inserted into the page + * concurrently. + * + * With the current MAX_BUFFERS_TO_EXTEND_BY there's no danger of + * [auto]vacuum trying to truncate later pages as REL_TRUNCATE_MINIMUM is + * way larger. + */ + first_block = ExtendBufferedRelBy(BMR_REL(relation), MAIN_FORKNUM, + bistate ? bistate->strategy : NULL, + EB_LOCK_FIRST, + extend_by_pages, + victim_buffers, + &extend_by_pages); + buffer = victim_buffers[0]; /* the buffer the function will return */ + last_block = first_block + (extend_by_pages - 1); + Assert(first_block == BufferGetBlockNumber(buffer)); + + /* + * Relation is now extended. Initialize the page. We do this here, before + * potentially releasing the lock on the page, because it allows us to + * double check that the page contents are empty (this should never + * happen, but if it does we don't want to risk wiping out valid data). 
+ */ + page = BufferGetPage(buffer); + if (!PageIsNew(page)) + elog(ERROR, "page %u of relation \"%s\" should be empty but is not", + first_block, + RelationGetRelationName(relation)); + + PageInit(page, BufferGetPageSize(buffer), 0); + MarkBufferDirty(buffer); + + /* + * If we decided to put pages into the FSM, release the buffer lock (but + * not pin), we don't want to do IO while holding a buffer lock. This will + * necessitate a bit more extensive checking in our caller. + */ + if (use_fsm && not_in_fsm_pages < extend_by_pages) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + *did_unlock = true; + } + else + *did_unlock = false; + + /* + * Relation is now extended. Release pins on all buffers, except for the + * first (which we'll return). If we decided to put pages into the FSM, + * we can do that as part of the same loop. + */ + for (uint32 i = 1; i < extend_by_pages; i++) + { + BlockNumber curBlock = first_block + i; + + Assert(curBlock == BufferGetBlockNumber(victim_buffers[i])); + Assert(BlockNumberIsValid(curBlock)); + + ReleaseBuffer(victim_buffers[i]); + + if (use_fsm && i >= not_in_fsm_pages) + { + Size freespace = BufferGetPageSize(victim_buffers[i]) - + SizeOfPageHeaderData; + + RecordPageWithFreeSpace(relation, curBlock, freespace); + } + } + + if (use_fsm && not_in_fsm_pages < extend_by_pages) + { + BlockNumber first_fsm_block = first_block + not_in_fsm_pages; + + FreeSpaceMapVacuumRange(relation, first_fsm_block, last_block); + } + + if (bistate) + { + /* + * Remember the additional pages we extended by, so we later can use + * them without looking into the FSM. + */ + if (extend_by_pages > 1) + { + bistate->next_free = first_block + 1; + bistate->last_free = last_block; + } + else + { + bistate->next_free = InvalidBlockNumber; + bistate->last_free = InvalidBlockNumber; + } + + /* maintain bistate->current_buf */ + IncrBufferRefCount(buffer); + bistate->current_buf = buffer; + bistate->already_extended_by += extend_by_pages; + } + + return buffer; +#undef MAX_BUFFERS_TO_EXTEND_BY +} + +/* + * tdeheap_RelationGetBufferForTuple + * + * Returns pinned and exclusive-locked buffer of a page in given relation + * with free space >= given len. + * + * If num_pages is > 1, we will try to extend the relation by at least that + * many pages when we decide to extend the relation. This is more efficient + * for callers that know they will need multiple pages + * (e.g. tdeheap_multi_insert()). + * + * If otherBuffer is not InvalidBuffer, then it references a previously + * pinned buffer of another page in the same relation; on return, this + * buffer will also be exclusive-locked. (This case is used by tdeheap_update; + * the otherBuffer contains the tuple being updated.) + * + * The reason for passing otherBuffer is that if two backends are doing + * concurrent tdeheap_update operations, a deadlock could occur if they try + * to lock the same two buffers in opposite orders. To ensure that this + * can't happen, we impose the rule that buffers of a relation must be + * locked in increasing page number order. This is most conveniently done + * by having tdeheap_RelationGetBufferForTuple lock them both, with suitable care + * for ordering. + * + * NOTE: it is unlikely, but not quite impossible, for otherBuffer to be the + * same buffer we select for insertion of the new tuple (this could only + * happen if space is freed in that page after tdeheap_update finds there's not + * enough there). In that case, the page will be pinned and locked only once. 
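For orientation, a straightforward caller such as tdeheap_insert() obtains its target buffer roughly as follows. This is only a sketch of the calling convention described in this comment; relation, heaptup, options, and bistate are assumed to come from the surrounding insertion code, which is omitted:

    Buffer      vmbuffer = InvalidBuffer;
    Buffer      buffer;

    /* single-tuple insert: no otherBuffer, no pre-sized extension (0 -> 1) */
    buffer = tdeheap_RelationGetBufferForTuple(relation, heaptup->t_len,
                                               InvalidBuffer, options,
                                               bistate, &vmbuffer, NULL, 0);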
+ * + * We also handle the possibility that the all-visible flag will need to be + * cleared on one or both pages. If so, pin on the associated visibility map + * page must be acquired before acquiring buffer lock(s), to avoid possibly + * doing I/O while holding buffer locks. The pins are passed back to the + * caller using the input-output arguments vmbuffer and vmbuffer_other. + * Note that in some cases the caller might have already acquired such pins, + * which is indicated by these arguments not being InvalidBuffer on entry. + * + * We normally use FSM to help us find free space. However, + * if HEAP_INSERT_SKIP_FSM is specified, we just append a new empty page to + * the end of the relation if the tuple won't fit on the current target page. + * This can save some cycles when we know the relation is new and doesn't + * contain useful amounts of free space. + * + * HEAP_INSERT_SKIP_FSM is also useful for non-WAL-logged additions to a + * relation, if the caller holds exclusive lock and is careful to invalidate + * relation's smgr_targblock before the first insertion --- that ensures that + * all insertions will occur into newly added pages and not be intermixed + * with tuples from other transactions. That way, a crash can't risk losing + * any committed data of other transactions. (See tdeheap_insert's comments + * for additional constraints needed for safe usage of this behavior.) + * + * The caller can also provide a BulkInsertState object to optimize many + * insertions into the same relation. This keeps a pin on the current + * insertion target page (to save pin/unpin cycles) and also passes a + * BULKWRITE buffer selection strategy object to the buffer manager. + * Passing NULL for bistate selects the default behavior. + * + * We don't fill existing pages further than the fillfactor, except for large + * tuples in nearly-empty pages. This is OK since this routine is not + * consulted when updating a tuple and keeping it on the same page, which is + * the scenario fillfactor is meant to reserve space for. + * + * ereport(ERROR) is allowed here, so this routine *must* be called + * before any (unlogged) changes are made in buffer pool. + */ +Buffer +tdeheap_RelationGetBufferForTuple(Relation relation, Size len, + Buffer otherBuffer, int options, + BulkInsertState bistate, + Buffer *vmbuffer, Buffer *vmbuffer_other, + int num_pages) +{ + bool use_fsm = !(options & HEAP_INSERT_SKIP_FSM); + Buffer buffer = InvalidBuffer; + Page page; + Size nearlyEmptyFreeSpace, + pageFreeSpace = 0, + saveFreeSpace = 0, + targetFreeSpace = 0; + BlockNumber targetBlock, + otherBlock; + bool unlockedTargetBuffer; + bool recheckVmPins; + + len = MAXALIGN(len); /* be conservative */ + + /* if the caller doesn't know by how many pages to extend, extend by 1 */ + if (num_pages <= 0) + num_pages = 1; + + /* Bulk insert is not supported for updates, only inserts. */ + Assert(otherBuffer == InvalidBuffer || !bistate); + + /* + * If we're gonna fail for oversize tuple, do it right away + */ + if (len > MaxHeapTupleSize) + ereport(ERROR, + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), + errmsg("row is too big: size %zu, maximum size %zu", + len, MaxHeapTupleSize))); + + /* Compute desired extra freespace due to fillfactor option */ + saveFreeSpace = RelationGetTargetPageFreeSpace(relation, + HEAP_DEFAULT_FILLFACTOR); + + /* + * Since pages without tuples can still have line pointers, we consider + * pages "empty" when the unavailable space is slight. 
This threshold is + * somewhat arbitrary, but it should prevent most unnecessary relation + * extensions while inserting large tuples into low-fillfactor tables. + */ + nearlyEmptyFreeSpace = MaxHeapTupleSize - + (MaxHeapTuplesPerPage / 8 * sizeof(ItemIdData)); + if (len + saveFreeSpace > nearlyEmptyFreeSpace) + targetFreeSpace = Max(len, nearlyEmptyFreeSpace); + else + targetFreeSpace = len + saveFreeSpace; + + if (otherBuffer != InvalidBuffer) + otherBlock = BufferGetBlockNumber(otherBuffer); + else + otherBlock = InvalidBlockNumber; /* just to keep compiler quiet */ + + /* + * We first try to put the tuple on the same page we last inserted a tuple + * on, as cached in the BulkInsertState or relcache entry. If that + * doesn't work, we ask the Free Space Map to locate a suitable page. + * Since the FSM's info might be out of date, we have to be prepared to + * loop around and retry multiple times. (To ensure this isn't an infinite + * loop, we must update the FSM with the correct amount of free space on + * each page that proves not to be suitable.) If the FSM has no record of + * a page with enough free space, we give up and extend the relation. + * + * When use_fsm is false, we either put the tuple onto the existing target + * page or extend the relation. + */ + if (bistate && bistate->current_buf != InvalidBuffer) + targetBlock = BufferGetBlockNumber(bistate->current_buf); + else + targetBlock = RelationGetTargetBlock(relation); + + if (targetBlock == InvalidBlockNumber && use_fsm) + { + /* + * We have no cached target page, so ask the FSM for an initial + * target. + */ + targetBlock = GetPageWithFreeSpace(relation, targetFreeSpace); + } + + /* + * If the FSM knows nothing of the rel, try the last page before we give + * up and extend. This avoids one-tuple-per-page syndrome during + * bootstrapping or in a recently-started system. + */ + if (targetBlock == InvalidBlockNumber) + { + BlockNumber nblocks = RelationGetNumberOfBlocks(relation); + + if (nblocks > 0) + targetBlock = nblocks - 1; + } + +loop: + while (targetBlock != InvalidBlockNumber) + { + /* + * Read and exclusive-lock the target block, as well as the other + * block if one was given, taking suitable care with lock ordering and + * the possibility they are the same block. + * + * If the page-level all-visible flag is set, caller will need to + * clear both that and the corresponding visibility map bit. However, + * by the time we return, we'll have x-locked the buffer, and we don't + * want to do any I/O while in that state. So we check the bit here + * before taking the lock, and pin the page if it appears necessary. + * Checking without the lock creates a risk of getting the wrong + * answer, so we'll have to recheck after acquiring the lock. + */ + if (otherBuffer == InvalidBuffer) + { + /* easy case */ + buffer = ReadBufferBI(relation, targetBlock, RBM_NORMAL, bistate); + if (PageIsAllVisible(BufferGetPage(buffer))) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + + /* + * If the page is empty, pin vmbuffer to set all_frozen bit later. 
+ */ + if ((options & HEAP_INSERT_FROZEN) && + (PageGetMaxOffsetNumber(BufferGetPage(buffer)) == 0)) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + else if (otherBlock == targetBlock) + { + /* also easy case */ + buffer = otherBuffer; + if (PageIsAllVisible(BufferGetPage(buffer))) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + else if (otherBlock < targetBlock) + { + /* lock other buffer first */ + buffer = ReadBuffer(relation, targetBlock); + if (PageIsAllVisible(BufferGetPage(buffer))) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + else + { + /* lock target buffer first */ + buffer = ReadBuffer(relation, targetBlock); + if (PageIsAllVisible(BufferGetPage(buffer))) + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE); + } + + /* + * We now have the target page (and the other buffer, if any) pinned + * and locked. However, since our initial PageIsAllVisible checks + * were performed before acquiring the lock, the results might now be + * out of date, either for the selected victim buffer, or for the + * other buffer passed by the caller. In that case, we'll need to + * give up our locks, go get the pin(s) we failed to get earlier, and + * re-lock. That's pretty painful, but hopefully shouldn't happen + * often. + * + * Note that there's a small possibility that we didn't pin the page + * above but still have the correct page pinned anyway, either because + * we've already made a previous pass through this loop, or because + * caller passed us the right page anyway. + * + * Note also that it's possible that by the time we get the pin and + * retake the buffer locks, the visibility map bit will have been + * cleared by some other backend anyway. In that case, we'll have + * done a bit of extra work for no gain, but there's no real harm + * done. + */ + GetVisibilityMapPins(relation, buffer, otherBuffer, + targetBlock, otherBlock, vmbuffer, + vmbuffer_other); + + /* + * Now we can check to see if there's enough free space here. If so, + * we're done. + */ + page = BufferGetPage(buffer); + + /* + * If necessary initialize page, it'll be used soon. We could avoid + * dirtying the buffer here, and rely on the caller to do so whenever + * it puts a tuple onto the page, but there seems not much benefit in + * doing so. + */ + if (PageIsNew(page)) + { + PageInit(page, BufferGetPageSize(buffer), 0); + MarkBufferDirty(buffer); + } + + pageFreeSpace = PageGetHeapFreeSpace(page); + if (targetFreeSpace <= pageFreeSpace) + { + /* use this page as future insert target, too */ + RelationSetTargetBlock(relation, targetBlock); + return buffer; + } + + /* + * Not enough space, so we must give up our page locks and pin (if + * any) and prepare to look elsewhere. We don't care which order we + * unlock the two buffers in, so this can be slightly simpler than the + * code above. + */ + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + if (otherBuffer == InvalidBuffer) + ReleaseBuffer(buffer); + else if (otherBlock != targetBlock) + { + LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK); + ReleaseBuffer(buffer); + } + + /* Is there an ongoing bulk extension? 
*/ + if (bistate && bistate->next_free != InvalidBlockNumber) + { + Assert(bistate->next_free <= bistate->last_free); + + /* + * We bulk extended the relation before, and there are still some + * unused pages from that extension, so we don't need to look in + * the FSM for a new page. But do record the free space from the + * last page, somebody might insert narrower tuples later. + */ + if (use_fsm) + RecordPageWithFreeSpace(relation, targetBlock, pageFreeSpace); + + targetBlock = bistate->next_free; + if (bistate->next_free >= bistate->last_free) + { + bistate->next_free = InvalidBlockNumber; + bistate->last_free = InvalidBlockNumber; + } + else + bistate->next_free++; + } + else if (!use_fsm) + { + /* Without FSM, always fall out of the loop and extend */ + break; + } + else + { + /* + * Update FSM as to condition of this page, and ask for another + * page to try. + */ + targetBlock = RecordAndGetPageWithFreeSpace(relation, + targetBlock, + pageFreeSpace, + targetFreeSpace); + } + } + + /* Have to extend the relation */ + buffer = RelationAddBlocks(relation, bistate, num_pages, use_fsm, + &unlockedTargetBuffer); + + targetBlock = BufferGetBlockNumber(buffer); + page = BufferGetPage(buffer); + + /* + * The page is empty, pin vmbuffer to set all_frozen bit. We don't want to + * do IO while the buffer is locked, so we unlock the page first if IO is + * needed (necessitating checks below). + */ + if (options & HEAP_INSERT_FROZEN) + { + Assert(PageGetMaxOffsetNumber(page) == 0); + + if (!tdeheap_visibilitymap_pin_ok(targetBlock, *vmbuffer)) + { + if (!unlockedTargetBuffer) + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + unlockedTargetBuffer = true; + tdeheap_visibilitymap_pin(relation, targetBlock, vmbuffer); + } + } + + /* + * Reacquire locks if necessary. + * + * If the target buffer was unlocked above, or is unlocked while + * reacquiring the lock on otherBuffer below, it's unlikely, but possible, + * that another backend used space on this page. We check for that below, + * and retry if necessary. + */ + recheckVmPins = false; + if (unlockedTargetBuffer) + { + /* released lock on target buffer above */ + if (otherBuffer != InvalidBuffer) + LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + recheckVmPins = true; + } + else if (otherBuffer != InvalidBuffer) + { + /* + * We did not release the target buffer, and otherBuffer is valid, + * need to lock the other buffer. It's guaranteed to be of a lower + * page number than the new page. To conform with the deadlock + * prevent rules, we ought to lock otherBuffer first, but that would + * give other backends a chance to put tuples on our page. To reduce + * the likelihood of that, attempt to lock the other buffer + * conditionally, that's very likely to work. + * + * Alternatively, we could acquire the lock on otherBuffer before + * extending the relation, but that'd require holding the lock while + * performing IO, which seems worse than an unlikely retry. + */ + Assert(otherBuffer != buffer); + Assert(targetBlock > otherBlock); + + if (unlikely(!ConditionalLockBuffer(otherBuffer))) + { + unlockedTargetBuffer = true; + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + recheckVmPins = true; + } + + /* + * If one of the buffers was unlocked (always the case if otherBuffer is + * valid), it's possible, although unlikely, that an all-visible flag + * became set. We can use GetVisibilityMapPins to deal with that. 
It's + * possible that GetVisibilityMapPins() might need to temporarily release + * buffer locks, in which case we'll need to check if there's still enough + * space on the page below. + */ + if (recheckVmPins) + { + if (GetVisibilityMapPins(relation, otherBuffer, buffer, + otherBlock, targetBlock, vmbuffer_other, + vmbuffer)) + unlockedTargetBuffer = true; + } + + /* + * If the target buffer was temporarily unlocked since the relation + * extension, it's possible, although unlikely, that all the space on the + * page was already used. If so, we just retry from the start. If we + * didn't unlock, something has gone wrong if there's not enough space - + * the test at the top should have prevented reaching this case. + */ + pageFreeSpace = PageGetHeapFreeSpace(page); + if (len > pageFreeSpace) + { + if (unlockedTargetBuffer) + { + if (otherBuffer != InvalidBuffer) + LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK); + UnlockReleaseBuffer(buffer); + + goto loop; + } + elog(PANIC, "tuple is too big: size %zu", len); + } + + /* + * Remember the new page as our target for future insertions. + * + * XXX should we enter the new page into the free space map immediately, + * or just keep it for this backend's exclusive use in the short run + * (until VACUUM sees it)? Seems to depend on whether you expect the + * current backend to make more insertions or not, which is probably a + * good bet most of the time. So for now, don't add it to FSM yet. + */ + RelationSetTargetBlock(relation, targetBlock); + + return buffer; +} diff --git a/contrib/pg_tde/src17/access/pg_tde_prune.c b/contrib/pg_tde/src17/access/pg_tde_prune.c new file mode 100644 index 00000000000..bd1e71a7ba6 --- /dev/null +++ b/contrib/pg_tde/src17/access/pg_tde_prune.c @@ -0,0 +1,2574 @@ +/*------------------------------------------------------------------------- + * + * pruneheap.c + * heap page pruning and HOT-chain management code + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/heap/pruneheap.c + * + *------------------------------------------------------------------------- + */ +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "encryption/enc_tde.h" + +#include "access/pg_tdeam.h" +#include "access/pg_tdeam_xlog.h" + +#include "access/htup_details.h" +#include "access/multixact.h" +#include "access/transam.h" +#include "access/xlog.h" +#include "access/xloginsert.h" +#include "commands/vacuum.h" +#include "executor/instrument.h" +#include "miscadmin.h" +#include "pgstat.h" +#include "storage/bufmgr.h" +#include "utils/rel.h" +#include "utils/snapmgr.h" + +/* Working data for tdeheap_page_prune_and_freeze() and subroutines */ +typedef struct +{ + /*------------------------------------------------------- + * Arguments passed to tdeheap_page_prune_and_freeze() + *------------------------------------------------------- + */ + + /* tuple visibility test, initialized for the relation */ + GlobalVisState *vistest; + /* whether or not dead items can be set LP_UNUSED during pruning */ + bool mark_unused_now; + /* whether to attempt freezing tuples */ + bool freeze; + struct VacuumCutoffs *cutoffs; + + /*------------------------------------------------------- + * Fields describing what to do to the page + *------------------------------------------------------- + */ + TransactionId new_prune_xid; /* new prune hint value */ + TransactionId latest_xid_removed; + int nredirected; /* 
numbers of entries in arrays below */ + int ndead; + int nunused; + int nfrozen; + /* arrays that accumulate indexes of items to be changed */ + OffsetNumber redirected[MaxHeapTuplesPerPage * 2]; + OffsetNumber nowdead[MaxHeapTuplesPerPage]; + OffsetNumber nowunused[MaxHeapTuplesPerPage]; + HeapTupleFreeze frozen[MaxHeapTuplesPerPage]; + + /*------------------------------------------------------- + * Working state for HOT chain processing + *------------------------------------------------------- + */ + + /* + * 'root_items' contains offsets of all LP_REDIRECT line pointers and + * normal non-HOT tuples. They can be stand-alone items or the first item + * in a HOT chain. 'heaponly_items' contains heap-only tuples which can + * only be removed as part of a HOT chain. + */ + int nroot_items; + OffsetNumber root_items[MaxHeapTuplesPerPage]; + int nheaponly_items; + OffsetNumber heaponly_items[MaxHeapTuplesPerPage]; + + /* + * processed[offnum] is true if item at offnum has been processed. + * + * This needs to be MaxHeapTuplesPerPage + 1 long as FirstOffsetNumber is + * 1. Otherwise every access would need to subtract 1. + */ + bool processed[MaxHeapTuplesPerPage + 1]; + + /* + * Tuple visibility is only computed once for each tuple, for correctness + * and efficiency reasons; see comment in tdeheap_page_prune_and_freeze() for + * details. This is of type int8[], instead of HTSV_Result[], so we can + * use -1 to indicate no visibility has been computed, e.g. for LP_DEAD + * items. + * + * This needs to be MaxHeapTuplesPerPage + 1 long as FirstOffsetNumber is + * 1. Otherwise every access would need to subtract 1. + */ + int8 htsv[MaxHeapTuplesPerPage + 1]; + + /* + * Freezing-related state. + */ + HeapPageFreeze pagefrz; + + /*------------------------------------------------------- + * Information about what was done + * + * These fields are not used by pruning itself for the most part, but are + * used to collect information about what was pruned and what state the + * page is in after pruning, for the benefit of the caller. They are + * copied to the caller's PruneFreezeResult at the end. + * ------------------------------------------------------- + */ + + int ndeleted; /* Number of tuples deleted from the page */ + + /* Number of live and recently dead tuples, after pruning */ + int live_tuples; + int recently_dead_tuples; + + /* Whether or not the page makes rel truncation unsafe */ + bool hastup; + + /* + * LP_DEAD items on the page after pruning. Includes existing LP_DEAD + * items + */ + int lpdead_items; /* number of items in the array */ + OffsetNumber *deadoffsets; /* points directly to presult->deadoffsets */ + + /* + * all_visible and all_frozen indicate if the all-visible and all-frozen + * bits in the visibility map can be set for this page after pruning. + * + * visibility_cutoff_xid is the newest xmin of live tuples on the page. + * The caller can use it as the conflict horizon, when setting the VM + * bits. It is only valid if we froze some tuples, and all_frozen is + * true. + * + * NOTE: all_visible and all_frozen don't include LP_DEAD items. That's + * convenient for tdeheap_page_prune_and_freeze(), to use them to decide + * whether to freeze the page or not. The all_visible and all_frozen + * values returned to the caller are adjusted to include LP_DEAD items at + * the end. + * + * all_frozen should only be considered valid if all_visible is also set; + * we don't bother to clear the all_frozen flag every time we clear the + * all_visible flag. 
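The offset-indexed arrays above (processed and htsv) are deliberately sized MaxHeapTuplesPerPage + 1 so that line pointer offsets, which start at FirstOffsetNumber (1), can index them directly. A minimal sketch of the intended access pattern, with the loop body elided:

    for (OffsetNumber offnum = FirstOffsetNumber;
         offnum <= PageGetMaxOffsetNumber(page);
         offnum++)
    {
        if (prstate->processed[offnum])
            continue;           /* note: no "offnum - 1" adjustment needed */

        if (prstate->htsv[offnum] == -1)
            continue;           /* visibility never computed, e.g. LP_DEAD */

        /* examine the item at offnum */
    }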
+ */ + bool all_visible; + bool all_frozen; + TransactionId visibility_cutoff_xid; +} PruneState; + +/* Local functions */ +static HTSV_Result tdeheap_prune_satisfies_vacuum(PruneState *prstate, + HeapTuple tup, + Buffer buffer); +static inline HTSV_Result htsv_get_valid_status(int status); +static void tdeheap_prune_chain(Page page, BlockNumber blockno, OffsetNumber maxoff, + OffsetNumber rootoffnum, PruneState *prstate); +static void tdeheap_prune_record_prunable(PruneState *prstate, TransactionId xid); +static void tdeheap_prune_record_redirect(PruneState *prstate, + OffsetNumber offnum, OffsetNumber rdoffnum, + bool was_normal); +static void tdeheap_prune_record_dead(PruneState *prstate, OffsetNumber offnum, + bool was_normal); +static void tdeheap_prune_record_dead_or_unused(PruneState *prstate, OffsetNumber offnum, + bool was_normal); +static void tdeheap_prune_record_unused(PruneState *prstate, OffsetNumber offnum, bool was_normal); + +static void tdeheap_prune_record_unchanged_lp_unused(Page page, PruneState *prstate, OffsetNumber offnum); +static void tdeheap_prune_record_unchanged_lp_normal(Page page, PruneState *prstate, OffsetNumber offnum); +static void tdeheap_prune_record_unchanged_lp_dead(Page page, PruneState *prstate, OffsetNumber offnum); +static void tdeheap_prune_record_unchanged_lp_redirect(PruneState *prstate, OffsetNumber offnum); + +static void page_verify_redirects(Page page); + + +/* + * Optionally prune and repair fragmentation in the specified page. + * + * This is an opportunistic function. It will perform housekeeping + * only if the page heuristically looks like a candidate for pruning and we + * can acquire buffer cleanup lock without blocking. + * + * Note: this is called quite often. It's important that it fall out quickly + * if there's not any use in pruning. + * + * Caller must have pin on the buffer, and must *not* have a lock on it. + */ +void +tdeheap_page_prune_opt(Relation relation, Buffer buffer) +{ + Page page = BufferGetPage(buffer); + TransactionId prune_xid; + GlobalVisState *vistest; + Size minfree; + + /* + * We can't write WAL in recovery mode, so there's no point trying to + * clean the page. The primary will likely issue a cleaning WAL record + * soon anyway, so this is no particular loss. + */ + if (RecoveryInProgress()) + return; + + /* + * First check whether there's any chance there's something to prune, + * determining the appropriate horizon is a waste if there's no prune_xid + * (i.e. no updates/deletes left potentially dead tuples around). + */ + prune_xid = ((PageHeader) page)->pd_prune_xid; + if (!TransactionIdIsValid(prune_xid)) + return; + + /* + * Check whether prune_xid indicates that there may be dead rows that can + * be cleaned up. + */ + vistest = GlobalVisTestFor(relation); + + if (!GlobalVisTestIsRemovableXid(vistest, prune_xid)) + return; + + /* + * We prune when a previous UPDATE failed to find enough space on the page + * for a new tuple version, or when free space falls below the relation's + * fill-factor target (but not less than 10%). + * + * Checking free space here is questionable since we aren't holding any + * lock on the buffer; in the worst case we could get a bogus answer. It's + * unlikely to be *seriously* wrong, though, since reading either pd_lower + * or pd_upper is probably atomic. Avoiding taking a lock seems more + * important than sometimes getting a wrong answer in what is after all + * just a heuristic estimate. 
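In concrete terms (assuming BLCKSZ = 8192): with the default fillfactor of 100, RelationGetTargetPageFreeSpace() yields 0, so the Max() below leaves minfree at BLCKSZ / 10 = 819 bytes; with fillfactor = 70 it yields 8192 * 30 / 100 = 2457 bytes, which becomes minfree. Pruning is then attempted once the page is marked full or its free space drops below that figure.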
+ */ + minfree = RelationGetTargetPageFreeSpace(relation, + HEAP_DEFAULT_FILLFACTOR); + minfree = Max(minfree, BLCKSZ / 10); + + if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree) + { + /* OK, try to get exclusive buffer lock */ + if (!ConditionalLockBufferForCleanup(buffer)) + return; + + /* + * Now that we have buffer lock, get accurate information about the + * page's free space, and recheck the heuristic about whether to + * prune. + */ + if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree) + { + OffsetNumber dummy_off_loc; + PruneFreezeResult presult; + + /* + * For now, pass mark_unused_now as false regardless of whether or + * not the relation has indexes, since we cannot safely determine + * that during on-access pruning with the current implementation. + */ + tdeheap_page_prune_and_freeze(relation, buffer, vistest, 0, + NULL, &presult, PRUNE_ON_ACCESS, &dummy_off_loc, NULL, NULL); + + /* + * Report the number of tuples reclaimed to pgstats. This is + * presult.ndeleted minus the number of newly-LP_DEAD-set items. + * + * We derive the number of dead tuples like this to avoid totally + * forgetting about items that were set to LP_DEAD, since they + * still need to be cleaned up by VACUUM. We only want to count + * heap-only tuples that just became LP_UNUSED in our report, + * which don't. + * + * VACUUM doesn't have to compensate in the same way when it + * tracks ndeleted, since it will set the same LP_DEAD items to + * LP_UNUSED separately. + */ + if (presult.ndeleted > presult.nnewlpdead) + pgstat_update_heap_dead_tuples(relation, + presult.ndeleted - presult.nnewlpdead); + } + + /* And release buffer lock */ + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + /* + * We avoid reuse of any free space created on the page by unrelated + * UPDATEs/INSERTs by opting to not update the FSM at this point. The + * free space should be reused by UPDATEs to *this* page. + */ + } +} + + +/* + * Prune and repair fragmentation and potentially freeze tuples on the + * specified page. + * + * Caller must have pin and buffer cleanup lock on the page. Note that we + * don't update the FSM information for page on caller's behalf. Caller might + * also need to account for a reduction in the length of the line pointer + * array following array truncation by us. + * + * If the HEAP_PRUNE_FREEZE option is set, we will also freeze tuples if it's + * required in order to advance relfrozenxid / relminmxid, or if it's + * considered advantageous for overall system performance to do so now. The + * 'cutoffs', 'presult', 'new_relfrozen_xid' and 'new_relmin_mxid' arguments + * are required when freezing. When HEAP_PRUNE_FREEZE option is set, we also + * set presult->all_visible and presult->all_frozen on exit, to indicate if + * the VM bits can be set. They are always set to false when the + * HEAP_PRUNE_FREEZE option is not set, because at the moment only callers + * that also freeze need that information. + * + * vistest is used to distinguish whether tuples are DEAD or RECENTLY_DEAD + * (see tdeheap_prune_satisfies_vacuum). + * + * options: + * MARK_UNUSED_NOW indicates that dead items can be set LP_UNUSED during + * pruning. + * + * FREEZE indicates that we will also freeze tuples, and will return + * 'all_visible', 'all_frozen' flags to the caller. + * + * cutoffs contains the freeze cutoffs, established by VACUUM at the beginning + * of vacuuming the relation. Required if HEAP_PRUNE_FREEZE option is set. 
+ * cutoffs->OldestXmin is also used to determine if dead tuples are + * HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD. + * + * presult contains output parameters needed by callers, such as the number of + * tuples removed and the offsets of dead items on the page after pruning. + * tdeheap_page_prune_and_freeze() is responsible for initializing it. Required + * by all callers. + * + * reason indicates why the pruning is performed. It is included in the WAL + * record for debugging and analysis purposes, but otherwise has no effect. + * + * off_loc is the offset location required by the caller for use in its error + * callback. + * + * new_relfrozen_xid and new_relmin_mxid must be provided by the caller if the + * HEAP_PRUNE_FREEZE option is set. On entry, they contain the oldest XID and + * multi-XID seen on the relation so far. They will be updated with the oldest + * values present on the page after pruning. After processing the whole + * relation, VACUUM can use these values as the new relfrozenxid/relminmxid + * for the relation. + */ +void +tdeheap_page_prune_and_freeze(Relation relation, Buffer buffer, + GlobalVisState *vistest, + int options, + struct VacuumCutoffs *cutoffs, + PruneFreezeResult *presult, + PruneReason reason, + OffsetNumber *off_loc, + TransactionId *new_relfrozen_xid, + MultiXactId *new_relmin_mxid) +{ + Page page = BufferGetPage(buffer); + BlockNumber blockno = BufferGetBlockNumber(buffer); + OffsetNumber offnum, + maxoff; + PruneState prstate; + HeapTupleData tup; + bool do_freeze; + bool do_prune; + bool do_hint; + bool hint_bit_fpi; + int64 fpi_before = pgWalUsage.wal_fpi; + + /* Copy parameters to prstate */ + prstate.vistest = vistest; + prstate.mark_unused_now = (options & HEAP_PAGE_PRUNE_MARK_UNUSED_NOW) != 0; + prstate.freeze = (options & HEAP_PAGE_PRUNE_FREEZE) != 0; + prstate.cutoffs = cutoffs; + + /* + * Our strategy is to scan the page and make lists of items to change, + * then apply the changes within a critical section. This keeps as much + * logic as possible out of the critical section, and also ensures that + * WAL replay will work the same as the normal case. + * + * First, initialize the new pd_prune_xid value to zero (indicating no + * prunable tuples). If we find any tuples which may soon become + * prunable, we will save the lowest relevant XID in new_prune_xid. Also + * initialize the rest of our working state. 
+ */ + prstate.new_prune_xid = InvalidTransactionId; + prstate.latest_xid_removed = InvalidTransactionId; + prstate.nredirected = prstate.ndead = prstate.nunused = prstate.nfrozen = 0; + prstate.nroot_items = 0; + prstate.nheaponly_items = 0; + + /* initialize page freezing working state */ + prstate.pagefrz.freeze_required = false; + if (prstate.freeze) + { + Assert(new_relfrozen_xid && new_relmin_mxid); + prstate.pagefrz.FreezePageRelfrozenXid = *new_relfrozen_xid; + prstate.pagefrz.NoFreezePageRelfrozenXid = *new_relfrozen_xid; + prstate.pagefrz.FreezePageRelminMxid = *new_relmin_mxid; + prstate.pagefrz.NoFreezePageRelminMxid = *new_relmin_mxid; + } + else + { + Assert(new_relfrozen_xid == NULL && new_relmin_mxid == NULL); + prstate.pagefrz.FreezePageRelminMxid = InvalidMultiXactId; + prstate.pagefrz.NoFreezePageRelminMxid = InvalidMultiXactId; + prstate.pagefrz.FreezePageRelfrozenXid = InvalidTransactionId; + prstate.pagefrz.NoFreezePageRelfrozenXid = InvalidTransactionId; + } + + prstate.ndeleted = 0; + prstate.live_tuples = 0; + prstate.recently_dead_tuples = 0; + prstate.hastup = false; + prstate.lpdead_items = 0; + prstate.deadoffsets = presult->deadoffsets; + + /* + * Caller may update the VM after we're done. We can keep track of + * whether the page will be all-visible and all-frozen after pruning and + * freezing to help the caller to do that. + * + * Currently, only VACUUM sets the VM bits. To save the effort, only do + * the bookkeeping if the caller needs it. Currently, that's tied to + * HEAP_PAGE_PRUNE_FREEZE, but it could be a separate flag if you wanted + * to update the VM bits without also freezing or freeze without also + * setting the VM bits. + * + * In addition to telling the caller whether it can set the VM bit, we + * also use 'all_visible' and 'all_frozen' for our own decision-making. If + * the whole page would become frozen, we consider opportunistically + * freezing tuples. We will not be able to freeze the whole page if there + * are tuples present that are not visible to everyone or if there are + * dead tuples which are not yet removable. However, dead tuples which + * will be removed by the end of vacuuming should not preclude us from + * opportunistically freezing. Because of that, we do not clear + * all_visible when we see LP_DEAD items. We fix that at the end of the + * function, when we return the value to the caller, so that the caller + * doesn't set the VM bit incorrectly. + */ + if (prstate.freeze) + { + prstate.all_visible = true; + prstate.all_frozen = true; + } + else + { + /* + * Initializing to false allows skipping the work to update them in + * tdeheap_prune_record_unchanged_lp_normal(). + */ + prstate.all_visible = false; + prstate.all_frozen = false; + } + + /* + * The visibility cutoff xid is the newest xmin of live tuples on the + * page. In the common case, this will be set as the conflict horizon the + * caller can use for updating the VM. If, at the end of freezing and + * pruning, the page is all-frozen, there is no possibility that any + * running transaction on the standby does not see tuples on the page as + * all-visible, so the conflict horizon remains InvalidTransactionId. + */ + prstate.visibility_cutoff_xid = InvalidTransactionId; + + maxoff = PageGetMaxOffsetNumber(page); + tup.t_tableOid = RelationGetRelid(relation); + + /* + * Determine HTSV for all tuples, and queue them up for processing as HOT + * chain roots or as heap-only items. 
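+ * ("HTSV" refers to the HTSV_Result status codes returned by the + * HeapTupleSatisfiesVacuum family of visibility checks.)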
+ * + * Determining HTSV only once for each tuple is required for correctness, + * to deal with cases where running HTSV twice could result in different + * results. For example, RECENTLY_DEAD can turn to DEAD if another + * checked item causes GlobalVisTestIsRemovableFullXid() to update the + * horizon, or INSERT_IN_PROGRESS can change to DEAD if the inserting + * transaction aborts. + * + * It's also good for performance. Most commonly tuples within a page are + * stored at decreasing offsets (while the items are stored at increasing + * offsets). When processing all tuples on a page this leads to reading + * memory at decreasing offsets within a page, with a variable stride. + * That's hard for CPU prefetchers to deal with. Processing the items in + * reverse order (and thus the tuples in increasing order) increases + * prefetching efficiency significantly / decreases the number of cache + * misses. + */ + for (offnum = maxoff; + offnum >= FirstOffsetNumber; + offnum = OffsetNumberPrev(offnum)) + { + ItemId itemid = PageGetItemId(page, offnum); + HeapTupleHeader htup; + + /* + * Set the offset number so that we can display it along with any + * error that occurred while processing this tuple. + */ + *off_loc = offnum; + + prstate.processed[offnum] = false; + prstate.htsv[offnum] = -1; + + /* Nothing to do if slot doesn't contain a tuple */ + if (!ItemIdIsUsed(itemid)) + { + tdeheap_prune_record_unchanged_lp_unused(page, &prstate, offnum); + continue; + } + + if (ItemIdIsDead(itemid)) + { + /* + * If the caller set mark_unused_now true, we can set dead line + * pointers LP_UNUSED now. + */ + if (unlikely(prstate.mark_unused_now)) + tdeheap_prune_record_unused(&prstate, offnum, false); + else + tdeheap_prune_record_unchanged_lp_dead(page, &prstate, offnum); + continue; + } + + if (ItemIdIsRedirected(itemid)) + { + /* This is the start of a HOT chain */ + prstate.root_items[prstate.nroot_items++] = offnum; + continue; + } + + Assert(ItemIdIsNormal(itemid)); + + /* + * Get the tuple's visibility status and queue it up for processing. + */ + htup = (HeapTupleHeader) PageGetItem(page, itemid); + tup.t_data = htup; + tup.t_len = ItemIdGetLength(itemid); + ItemPointerSet(&tup.t_self, blockno, offnum); + + prstate.htsv[offnum] = tdeheap_prune_satisfies_vacuum(&prstate, &tup, + buffer); + + if (!HeapTupleHeaderIsHeapOnly(htup)) + prstate.root_items[prstate.nroot_items++] = offnum; + else + prstate.heaponly_items[prstate.nheaponly_items++] = offnum; + } + + /* + * If checksums are enabled, tdeheap_prune_satisfies_vacuum() may have caused + * an FPI to be emitted. + */ + hint_bit_fpi = fpi_before != pgWalUsage.wal_fpi; + + /* + * Process HOT chains. + * + * We added the items to the array starting from 'maxoff', so by + * processing the array in reverse order, we process the items in + * ascending offset number order. The order doesn't matter for + * correctness, but some quick micro-benchmarking suggests that this is + * faster. (Earlier PostgreSQL versions, which scanned all the items on + * the page instead of using the root_items array, also did it in + * ascending offset number order.) 
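+ * + * As a small hypothetical illustration: scanning from maxoff down to + * FirstOffsetNumber might collect root_items = {9, 4, 2}; walking that + * array backwards below then visits offsets 2, 4, 9, i.e. in ascending + * offset number order.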
+ */ + for (int i = prstate.nroot_items - 1; i >= 0; i--) + { + offnum = prstate.root_items[i]; + + /* Ignore items already processed as part of an earlier chain */ + if (prstate.processed[offnum]) + continue; + + /* see preceding loop */ + *off_loc = offnum; + + /* Process this item or chain of items */ + tdeheap_prune_chain(page, blockno, maxoff, offnum, &prstate); + } + + /* + * Process any heap-only tuples that were not already processed as part of + * a HOT chain. + */ + for (int i = prstate.nheaponly_items - 1; i >= 0; i--) + { + offnum = prstate.heaponly_items[i]; + + if (prstate.processed[offnum]) + continue; + + /* see preceding loop */ + *off_loc = offnum; + + /* + * If the tuple is DEAD and doesn't chain to anything else, mark it + * unused. (If it does chain, we can only remove it as part of + * pruning its chain.) + * + * We need this primarily to handle aborted HOT updates, that is, + * XMIN_INVALID heap-only tuples. Those might not be linked to by any + * chain, since the parent tuple might be re-updated before any + * pruning occurs. So we have to be able to reap them separately from + * chain-pruning. (Note that HeapTupleHeaderIsHotUpdated will never + * return true for an XMIN_INVALID tuple, so this code will work even + * when there were sequential updates within the aborted transaction.) + */ + if (prstate.htsv[offnum] == HEAPTUPLE_DEAD) + { + ItemId itemid = PageGetItemId(page, offnum); + HeapTupleHeader htup = (HeapTupleHeader) PageGetItem(page, itemid); + + if (likely(!HeapTupleHeaderIsHotUpdated(htup))) + { + HeapTupleHeaderAdvanceConflictHorizon(htup, + &prstate.latest_xid_removed); + tdeheap_prune_record_unused(&prstate, offnum, true); + } + else + { + /* + * This tuple should've been processed and removed as part of + * a HOT chain, so something's wrong. To preserve evidence, + * we don't dare to remove it. We cannot leave behind a DEAD + * tuple either, because that will cause VACUUM to error out. + * Throwing an error with a distinct error message seems like + * the least bad option. + */ + elog(ERROR, "dead heap-only tuple (%u, %d) is not linked to from any HOT chain", + blockno, offnum); + } + } + else + tdeheap_prune_record_unchanged_lp_normal(page, &prstate, offnum); + } + + /* We should now have processed every tuple exactly once */ +#ifdef USE_ASSERT_CHECKING + for (offnum = FirstOffsetNumber; + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + *off_loc = offnum; + + Assert(prstate.processed[offnum]); + } +#endif + + /* Clear the offset information once we have processed the given page. */ + *off_loc = InvalidOffsetNumber; + + do_prune = prstate.nredirected > 0 || + prstate.ndead > 0 || + prstate.nunused > 0; + + /* + * Even if we don't prune anything, if we found a new value for the + * pd_prune_xid field or the page was marked full, we will update the hint + * bit. + */ + do_hint = ((PageHeader) page)->pd_prune_xid != prstate.new_prune_xid || + PageIsFull(page); + + /* + * Decide if we want to go ahead with freezing according to the freeze + * plans we prepared, or not. + */ + do_freeze = false; + if (prstate.freeze) + { + if (prstate.pagefrz.freeze_required) + { + /* + * tdeheap_prepare_freeze_tuple indicated that at least one XID/MXID + * from before FreezeLimit/MultiXactCutoff is present. Must + * freeze to advance relfrozenxid/relminmxid. 
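+ * + * For example, under hypothetical cutoffs: if FreezeLimit is XID 1000 and + * a tuple on this page still carries an unfrozen xmin of 900, + * freeze_required will have been set, and we must freeze now rather than + * leave the page behind holding back relfrozenxid.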
+ */ + do_freeze = true; + } + else + { + /* + * Opportunistically freeze the page if we are generating an FPI + * anyway and if doing so means that we can set the page + * all-frozen afterwards (might not happen until VACUUM's final + * heap pass). + * + * XXX: Previously, we knew if pruning emitted an FPI by checking + * pgWalUsage.wal_fpi before and after pruning. Once the freeze + * and prune records were combined, this heuristic couldn't be + * used anymore. The opportunistic freeze heuristic must be + * improved; however, for now, try to approximate the old logic. + */ + if (prstate.all_visible && prstate.all_frozen && prstate.nfrozen > 0) + { + /* + * Freezing would make the page all-frozen. Have already + * emitted an FPI or will do so anyway? + */ + if (RelationNeedsWAL(relation)) + { + if (hint_bit_fpi) + do_freeze = true; + else if (do_prune) + { + if (XLogCheckBufferNeedsBackup(buffer)) + do_freeze = true; + } + else if (do_hint) + { + if (XLogHintBitIsNeeded() && XLogCheckBufferNeedsBackup(buffer)) + do_freeze = true; + } + } + } + } + } + + if (do_freeze) + { + /* + * Validate the tuples we will be freezing before entering the + * critical section. + */ + tdeheap_pre_freeze_checks(buffer, prstate.frozen, prstate.nfrozen); + } + else if (prstate.nfrozen > 0) + { + /* + * The page contained some tuples that were not already frozen, and we + * chose not to freeze them now. The page won't be all-frozen then. + */ + Assert(!prstate.pagefrz.freeze_required); + + prstate.all_frozen = false; + prstate.nfrozen = 0; /* avoid miscounts in instrumentation */ + } + else + { + /* + * We have no freeze plans to execute. The page might already be + * all-frozen (perhaps only following pruning), though. Such pages + * can be marked all-frozen in the VM by our caller, even though none + * of its tuples were newly frozen here. + */ + } + + /* + * Make sure the relation key is in the cache to avoid pallocs in + * the critical section. + * We need it here because `pgtde_compactify_tuples()` further down + * the call stack re-encrypts tuples. + */ + GetHeapBaiscRelationKey(relation->rd_locator); + + /* Any error while applying the changes is critical */ + START_CRIT_SECTION(); + + if (do_hint) + { + /* + * Update the page's pd_prune_xid field to either zero, or the lowest + * XID of any soon-prunable tuple. + */ + ((PageHeader) page)->pd_prune_xid = prstate.new_prune_xid; + + /* + * Also clear the "page is full" flag, since there's no point in + * repeating the prune/defrag process until something else happens to + * the page. + */ + PageClearFull(page); + + /* + * If that's all we had to do to the page, this is a non-WAL-logged + * hint. If we are going to freeze or prune the page, we will mark + * the buffer dirty below. + */ + if (!do_freeze && !do_prune) + MarkBufferDirtyHint(buffer, true); + } + + if (do_prune || do_freeze) + { + /* Apply the planned item changes and repair page fragmentation. */ + if (do_prune) + { + tdeheap_page_prune_execute(relation, buffer, false, + prstate.redirected, prstate.nredirected, + prstate.nowdead, prstate.ndead, + prstate.nowunused, prstate.nunused); + } + + if (do_freeze) + tdeheap_freeze_prepared_tuples(buffer, prstate.frozen, prstate.nfrozen); + + MarkBufferDirty(buffer); + + /* + * Emit a WAL XLOG_HEAP2_PRUNE_FREEZE record showing what we did + */ + if (RelationNeedsWAL(relation)) + { + /* + * The snapshotConflictHorizon for the whole record should be the + * most conservative of all the horizons calculated for any of the + * possible modifications. 
If this record will prune tuples, any + * transactions on the standby older than the youngest xmax of the + * most recently removed tuple this record will prune will + * conflict. If this record will freeze tuples, any transactions + * on the standby with xids older than the youngest tuple this + * record will freeze will conflict. + */ + TransactionId frz_conflict_horizon = InvalidTransactionId; + TransactionId conflict_xid; + + /* + * We can use the visibility_cutoff_xid as our cutoff for + * conflicts when the whole page is eligible to become all-frozen + * in the VM once we're done with it. Otherwise we generate a + * conservative cutoff by stepping back from OldestXmin. + */ + if (do_freeze) + { + if (prstate.all_visible && prstate.all_frozen) + frz_conflict_horizon = prstate.visibility_cutoff_xid; + else + { + /* Avoids false conflicts when hot_standby_feedback in use */ + frz_conflict_horizon = prstate.cutoffs->OldestXmin; + TransactionIdRetreat(frz_conflict_horizon); + } + } + + if (TransactionIdFollows(frz_conflict_horizon, prstate.latest_xid_removed)) + conflict_xid = frz_conflict_horizon; + else + conflict_xid = prstate.latest_xid_removed; + + log_tdeheap_prune_and_freeze(relation, buffer, + conflict_xid, + true, reason, + prstate.frozen, prstate.nfrozen, + prstate.redirected, prstate.nredirected, + prstate.nowdead, prstate.ndead, + prstate.nowunused, prstate.nunused); + } + } + + END_CRIT_SECTION(); + + /* Copy information back for caller */ + presult->ndeleted = prstate.ndeleted; + presult->nnewlpdead = prstate.ndead; + presult->nfrozen = prstate.nfrozen; + presult->live_tuples = prstate.live_tuples; + presult->recently_dead_tuples = prstate.recently_dead_tuples; + + /* + * It was convenient to ignore LP_DEAD items in all_visible earlier on to + * make the choice of whether or not to freeze the page unaffected by the + * short-term presence of LP_DEAD items. These LP_DEAD items were + * effectively assumed to be LP_UNUSED items in the making. It doesn't + * matter which vacuum heap pass (initial pass or final pass) ends up + * setting the page all-frozen, as long as the ongoing VACUUM does it. + * + * Now that freezing has been finalized, unset all_visible if there are + * any LP_DEAD items on the page. It needs to reflect the present state + * of the page, as expected by our caller. + */ + if (prstate.all_visible && prstate.lpdead_items == 0) + { + presult->all_visible = prstate.all_visible; + presult->all_frozen = prstate.all_frozen; + } + else + { + presult->all_visible = false; + presult->all_frozen = false; + } + + presult->hastup = prstate.hastup; + + /* + * For callers planning to update the visibility map, the conflict horizon + * for that record must be the newest xmin on the page. However, if the + * page is completely frozen, there can be no conflict and the + * vm_conflict_horizon should remain InvalidTransactionId. This includes + * the case that we just froze all the tuples; the prune-freeze record + * included the conflict XID already so the caller doesn't need it. 
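+ * + * Hypothetical example: if the newest xmin among tuples remaining on the + * page is 205 and the page is not all-frozen, the caller must use 205 as + * the snapshot conflict horizon when it later sets the all-visible bit in + * the VM.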
+ */ + if (presult->all_frozen) + presult->vm_conflict_horizon = InvalidTransactionId; + else + presult->vm_conflict_horizon = prstate.visibility_cutoff_xid; + + presult->lpdead_items = prstate.lpdead_items; + /* the presult->deadoffsets array was already filled in */ + + if (prstate.freeze) + { + if (presult->nfrozen > 0) + { + *new_relfrozen_xid = prstate.pagefrz.FreezePageRelfrozenXid; + *new_relmin_mxid = prstate.pagefrz.FreezePageRelminMxid; + } + else + { + *new_relfrozen_xid = prstate.pagefrz.NoFreezePageRelfrozenXid; + *new_relmin_mxid = prstate.pagefrz.NoFreezePageRelminMxid; + } + } +} + +void TdePageRepairFragmentation(Relation rel, Buffer buffer, Page page); + +/* + * Perform visibility checks for heap pruning. + */ +static HTSV_Result +tdeheap_prune_satisfies_vacuum(PruneState *prstate, HeapTuple tup, Buffer buffer) +{ + HTSV_Result res; + TransactionId dead_after; + + res = HeapTupleSatisfiesVacuumHorizon(tup, buffer, &dead_after); + + if (res != HEAPTUPLE_RECENTLY_DEAD) + return res; + + /* + * For VACUUM, we must be sure to prune tuples with xmax older than + * OldestXmin -- a visibility cutoff determined at the beginning of + * vacuuming the relation. OldestXmin is used for freezing determination + * and we cannot freeze dead tuples' xmaxes. + */ + if (prstate->cutoffs && + TransactionIdIsValid(prstate->cutoffs->OldestXmin) && + NormalTransactionIdPrecedes(dead_after, prstate->cutoffs->OldestXmin)) + return HEAPTUPLE_DEAD; + + /* + * Determine whether or not the tuple is considered dead when compared + * with the provided GlobalVisState. On-access pruning does not provide + * VacuumCutoffs. And for vacuum, even if the tuple's xmax is not older + * than OldestXmin, GlobalVisTestIsRemovableXid() could find the row dead + * if the GlobalVisState has been updated since the beginning of vacuuming + * the relation. + */ + if (GlobalVisTestIsRemovableXid(prstate->vistest, dead_after)) + return HEAPTUPLE_DEAD; + + return res; +} + + +/* + * Pruning calculates tuple visibility once and saves the results in an array + * of int8. See PruneState.htsv for details. This helper function is meant + * to guard against examining visibility status array members which have not + * yet been computed. + */ +static inline HTSV_Result +htsv_get_valid_status(int status) +{ + Assert(status >= HEAPTUPLE_DEAD && + status <= HEAPTUPLE_DELETE_IN_PROGRESS); + return (HTSV_Result) status; +} + +/* + * Prune specified line pointer or a HOT chain originating at line pointer. + * + * Tuple visibility information is provided in prstate->htsv. + * + * If the item is an index-referenced tuple (i.e. not a heap-only tuple), + * the HOT chain is pruned by removing all DEAD tuples at the start of the HOT + * chain. We also prune any RECENTLY_DEAD tuples preceding a DEAD tuple. + * This is OK because a RECENTLY_DEAD tuple preceding a DEAD tuple is really + * DEAD, our visibility test is just too coarse to detect it. + * + * Pruning must never leave behind a DEAD tuple that still has tuple storage. + * VACUUM isn't prepared to deal with that case. + * + * The root line pointer is redirected to the tuple immediately after the + * latest DEAD tuple. If all tuples in the chain are DEAD, the root line + * pointer is marked LP_DEAD. (This includes the case of a DEAD simple + * tuple, which we treat as a chain of length 1.) + * + * We don't actually change the page here. We just add entries to the arrays in + * prstate showing the changes to be made. 
Items to be redirected are added + * to the redirected[] array (two entries per redirection); items to be set to + * LP_DEAD state are added to nowdead[]; and items to be set to LP_UNUSED + * state are added to nowunused[]. We perform bookkeeping of live tuples, + * visibility etc. based on what the page will look like after the changes + * applied. All that bookkeeping is performed in the tdeheap_prune_record_*() + * subroutines. The division of labor is that tdeheap_prune_chain() decides the + * fate of each tuple, ie. whether it's going to be removed, redirected or + * left unchanged, and the tdeheap_prune_record_*() subroutines update PruneState + * based on that outcome. + */ +static void +tdeheap_prune_chain(Page page, BlockNumber blockno, OffsetNumber maxoff, + OffsetNumber rootoffnum, PruneState *prstate) +{ + TransactionId priorXmax = InvalidTransactionId; + ItemId rootlp; + OffsetNumber offnum; + OffsetNumber chainitems[MaxHeapTuplesPerPage]; + + /* + * After traversing the HOT chain, ndeadchain is the index in chainitems + * of the first live successor after the last dead item. + */ + int ndeadchain = 0, + nchain = 0; + + rootlp = PageGetItemId(page, rootoffnum); + + /* Start from the root tuple */ + offnum = rootoffnum; + + /* while not end of the chain */ + for (;;) + { + HeapTupleHeader htup; + ItemId lp; + + /* Sanity check (pure paranoia) */ + if (offnum < FirstOffsetNumber) + break; + + /* + * An offset past the end of page's line pointer array is possible + * when the array was truncated (original item must have been unused) + */ + if (offnum > maxoff) + break; + + /* If item is already processed, stop --- it must not be same chain */ + if (prstate->processed[offnum]) + break; + + lp = PageGetItemId(page, offnum); + + /* + * Unused item obviously isn't part of the chain. Likewise, a dead + * line pointer can't be part of the chain. Both of those cases were + * already marked as processed. + */ + Assert(ItemIdIsUsed(lp)); + Assert(!ItemIdIsDead(lp)); + + /* + * If we are looking at the redirected root line pointer, jump to the + * first normal tuple in the chain. If we find a redirect somewhere + * else, stop --- it must not be same chain. + */ + if (ItemIdIsRedirected(lp)) + { + if (nchain > 0) + break; /* not at start of chain */ + chainitems[nchain++] = offnum; + offnum = ItemIdGetRedirect(rootlp); + continue; + } + + Assert(ItemIdIsNormal(lp)); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* + * Check the tuple XMIN against prior XMAX, if any + */ + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax)) + break; + + /* + * OK, this tuple is indeed a member of the chain. + */ + chainitems[nchain++] = offnum; + + switch (htsv_get_valid_status(prstate->htsv[offnum])) + { + case HEAPTUPLE_DEAD: + + /* Remember the last DEAD tuple seen */ + ndeadchain = nchain; + HeapTupleHeaderAdvanceConflictHorizon(htup, + &prstate->latest_xid_removed); + /* Advance to next chain member */ + break; + + case HEAPTUPLE_RECENTLY_DEAD: + + /* + * We don't need to advance the conflict horizon for + * RECENTLY_DEAD tuples, even if we are removing them. This + * is because we only remove RECENTLY_DEAD tuples if they + * precede a DEAD tuple, and the DEAD tuple must have been + * inserted by a newer transaction than the RECENTLY_DEAD + * tuple by virtue of being later in the chain. We will have + * advanced the conflict horizon for the DEAD tuple. 
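+ * + * A hypothetical chain illustrates this: root -> RECENTLY_DEAD -> DEAD. + * Both tuples are removed, but advancing the horizon for the DEAD tuple + * alone suffices, because it is necessarily newer than the RECENTLY_DEAD + * tuple preceding it in the chain.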
+ */ + + /* + * Advance past RECENTLY_DEAD tuples just in case there's a + * DEAD one after them. We have to make sure that we don't + * miss any DEAD tuples, since DEAD tuples that still have + * tuple storage after pruning will confuse VACUUM. + */ + break; + + case HEAPTUPLE_DELETE_IN_PROGRESS: + case HEAPTUPLE_LIVE: + case HEAPTUPLE_INSERT_IN_PROGRESS: + goto process_chain; + + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + goto process_chain; + } + + /* + * If the tuple is not HOT-updated, then we are at the end of this + * HOT-update chain. + */ + if (!HeapTupleHeaderIsHotUpdated(htup)) + goto process_chain; + + /* HOT implies it can't have moved to different partition */ + Assert(!HeapTupleHeaderIndicatesMovedPartitions(htup)); + + /* + * Advance to next chain member. + */ + Assert(ItemPointerGetBlockNumber(&htup->t_ctid) == blockno); + offnum = ItemPointerGetOffsetNumber(&htup->t_ctid); + priorXmax = HeapTupleHeaderGetUpdateXid(htup); + } + + if (ItemIdIsRedirected(rootlp) && nchain < 2) + { + /* + * We found a redirect item that doesn't point to a valid follow-on + * item. This can happen if the loop in tdeheap_page_prune_and_freeze() + * caused us to visit the dead successor of a redirect item before + * visiting the redirect item. We can clean up by setting the + * redirect item to LP_DEAD state or LP_UNUSED if the caller + * indicated. + */ + tdeheap_prune_record_dead_or_unused(prstate, rootoffnum, false); + return; + } + +process_chain: + + if (ndeadchain == 0) + { + /* + * No DEAD tuple was found, so the chain is entirely composed of + * normal, unchanged tuples. Leave it alone. + */ + int i = 0; + + if (ItemIdIsRedirected(rootlp)) + { + tdeheap_prune_record_unchanged_lp_redirect(prstate, rootoffnum); + i++; + } + for (; i < nchain; i++) + tdeheap_prune_record_unchanged_lp_normal(page, prstate, chainitems[i]); + } + else if (ndeadchain == nchain) + { + /* + * The entire chain is dead. Mark the root line pointer LP_DEAD, and + * fully remove the other tuples in the chain. + */ + tdeheap_prune_record_dead_or_unused(prstate, rootoffnum, ItemIdIsNormal(rootlp)); + for (int i = 1; i < nchain; i++) + tdeheap_prune_record_unused(prstate, chainitems[i], true); + } + else + { + /* + * We found a DEAD tuple in the chain. Redirect the root line pointer + * to the first non-DEAD tuple, and mark as unused each intermediate + * item that we are able to remove from the chain. + */ + tdeheap_prune_record_redirect(prstate, rootoffnum, chainitems[ndeadchain], + ItemIdIsNormal(rootlp)); + for (int i = 1; i < ndeadchain; i++) + tdeheap_prune_record_unused(prstate, chainitems[i], true); + + /* the rest of tuples in the chain are normal, unchanged tuples */ + for (int i = ndeadchain; i < nchain; i++) + tdeheap_prune_record_unchanged_lp_normal(page, prstate, chainitems[i]); + } +} + +/* Record lowest soon-prunable XID */ +static void +tdeheap_prune_record_prunable(PruneState *prstate, TransactionId xid) +{ + /* + * This should exactly match the PageSetPrunable macro. We can't store + * directly into the page header yet, so we update working state. 
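+ * + * For example, if this is called with XID 120 and later with XID 95 while + * processing the same page, new_prune_xid ends up as 95, the lower of the + * two, exactly as PageSetPrunable would have stored it.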
+ */ + Assert(TransactionIdIsNormal(xid)); + if (!TransactionIdIsValid(prstate->new_prune_xid) || + TransactionIdPrecedes(xid, prstate->new_prune_xid)) + prstate->new_prune_xid = xid; +} + +/* Record line pointer to be redirected */ +static void +tdeheap_prune_record_redirect(PruneState *prstate, + OffsetNumber offnum, OffsetNumber rdoffnum, + bool was_normal) +{ + Assert(!prstate->processed[offnum]); + prstate->processed[offnum] = true; + + /* + * Do not mark the redirect target here. It needs to be counted + * separately as an unchanged tuple. + */ + + Assert(prstate->nredirected < MaxHeapTuplesPerPage); + prstate->redirected[prstate->nredirected * 2] = offnum; + prstate->redirected[prstate->nredirected * 2 + 1] = rdoffnum; + + prstate->nredirected++; + + /* + * If the root entry had been a normal tuple, we are deleting it, so count + * it in the result. But changing a redirect (even to DEAD state) doesn't + * count. + */ + if (was_normal) + prstate->ndeleted++; + + prstate->hastup = true; +} + +/* Record line pointer to be marked dead */ +static void +tdeheap_prune_record_dead(PruneState *prstate, OffsetNumber offnum, + bool was_normal) +{ + Assert(!prstate->processed[offnum]); + prstate->processed[offnum] = true; + + Assert(prstate->ndead < MaxHeapTuplesPerPage); + prstate->nowdead[prstate->ndead] = offnum; + prstate->ndead++; + + /* + * Deliberately delay unsetting all_visible until later during pruning. + * Removable dead tuples shouldn't preclude freezing the page. + */ + + /* Record the dead offset for vacuum */ + prstate->deadoffsets[prstate->lpdead_items++] = offnum; + + /* + * If the root entry had been a normal tuple, we are deleting it, so count + * it in the result. But changing a redirect (even to DEAD state) doesn't + * count. + */ + if (was_normal) + prstate->ndeleted++; +} + +/* + * Depending on whether or not the caller set mark_unused_now to true, record that a + * line pointer should be marked LP_DEAD or LP_UNUSED. There are other cases in + * which we will mark line pointers LP_UNUSED, but we will not mark line + * pointers LP_DEAD if mark_unused_now is true. + */ +static void +tdeheap_prune_record_dead_or_unused(PruneState *prstate, OffsetNumber offnum, + bool was_normal) +{ + /* + * If the caller set mark_unused_now to true, we can remove dead tuples + * during pruning instead of marking their line pointers dead. Set this + * tuple's line pointer LP_UNUSED. We hint that this option is less + * likely. + */ + if (unlikely(prstate->mark_unused_now)) + tdeheap_prune_record_unused(prstate, offnum, was_normal); + else + tdeheap_prune_record_dead(prstate, offnum, was_normal); +} + +/* Record line pointer to be marked unused */ +static void +tdeheap_prune_record_unused(PruneState *prstate, OffsetNumber offnum, bool was_normal) +{ + Assert(!prstate->processed[offnum]); + prstate->processed[offnum] = true; + + Assert(prstate->nunused < MaxHeapTuplesPerPage); + prstate->nowunused[prstate->nunused] = offnum; + prstate->nunused++; + + /* + * If the root entry had been a normal tuple, we are deleting it, so count + * it in the result. But changing a redirect (even to DEAD state) doesn't + * count. + */ + if (was_normal) + prstate->ndeleted++; +} + +/* + * Record an unused line pointer that is left unchanged. + */ +static void +tdeheap_prune_record_unchanged_lp_unused(Page page, PruneState *prstate, OffsetNumber offnum) +{ + Assert(!prstate->processed[offnum]); + prstate->processed[offnum] = true; +} + +/* + * Record line pointer that is left unchanged. 
We consider freezing it, and + * update bookkeeping of tuple counts and page visibility. + */ +static void +tdeheap_prune_record_unchanged_lp_normal(Page page, PruneState *prstate, OffsetNumber offnum) +{ + HeapTupleHeader htup; + + Assert(!prstate->processed[offnum]); + prstate->processed[offnum] = true; + + prstate->hastup = true; /* the page is not empty */ + + /* + * The criteria for counting a tuple as live in this block need to match + * what analyze.c's acquire_sample_rows() does, otherwise VACUUM and + * ANALYZE may produce wildly different reltuples values, e.g. when there + * are many recently-dead tuples. + * + * The logic here is a bit simpler than acquire_sample_rows(), as VACUUM + * can't run inside a transaction block, which makes some cases impossible + * (e.g. in-progress insert from the same transaction). + * + * HEAPTUPLE_DEAD are handled by the other tdeheap_prune_record_*() + * subroutines. They don't count dead items like acquire_sample_rows() + * does, because we assume that all dead items will become LP_UNUSED + * before VACUUM finishes. This difference is only superficial. VACUUM + * effectively agrees with ANALYZE about DEAD items, in the end. VACUUM + * won't remember LP_DEAD items, but only because they're not supposed to + * be left behind when it is done. (Cases where we bypass index vacuuming + * will violate this optimistic assumption, but the overall impact of that + * should be negligible.) + */ + htup = (HeapTupleHeader) PageGetItem(page, PageGetItemId(page, offnum)); + + switch (prstate->htsv[offnum]) + { + case HEAPTUPLE_LIVE: + + /* + * Count it as live. Not only is this natural, but it's also what + * acquire_sample_rows() does. + */ + prstate->live_tuples++; + + /* + * Is the tuple definitely visible to all transactions? + * + * NB: Like with per-tuple hint bits, we can't set the + * PD_ALL_VISIBLE flag if the inserter committed asynchronously. + * See SetHintBits for more info. Check that the tuple is hinted + * xmin-committed because of that. + */ + if (prstate->all_visible) + { + TransactionId xmin; + + if (!HeapTupleHeaderXminCommitted(htup)) + { + prstate->all_visible = false; + break; + } + + /* + * The inserter definitely committed. But is it old enough + * that everyone sees it as committed? A FrozenTransactionId + * is seen as committed to everyone. Otherwise, we check if + * there is a snapshot that considers this xid to still be + * running, and if so, we don't consider the page all-visible. + */ + xmin = HeapTupleHeaderGetXmin(htup); + + /* + * For now always use prstate->cutoffs for this test, because + * we only update 'all_visible' when freezing is requested. We + * could use GlobalVisTestIsRemovableXid instead, if a + * non-freezing caller wanted to set the VM bit. + */ + Assert(prstate->cutoffs); + if (!TransactionIdPrecedes(xmin, prstate->cutoffs->OldestXmin)) + { + prstate->all_visible = false; + break; + } + + /* Track newest xmin on page. */ + if (TransactionIdFollows(xmin, prstate->visibility_cutoff_xid) && + TransactionIdIsNormal(xmin)) + prstate->visibility_cutoff_xid = xmin; + } + break; + + case HEAPTUPLE_RECENTLY_DEAD: + prstate->recently_dead_tuples++; + prstate->all_visible = false; + + /* + * This tuple will soon become DEAD. Update the hint field so + * that the page is reconsidered for pruning in future. 
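+ * + * (Hypothetically: if the deleting transaction's XID is 431, recording it + * here lets a later tdeheap_page_prune_opt() call retry pruning once 431 + * becomes removable for all snapshots.)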
+ */ + tdeheap_prune_record_prunable(prstate, + HeapTupleHeaderGetUpdateXid(htup)); + break; + + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * We do not count these rows as live, because we expect the + * inserting transaction to update the counters at commit, and we + * assume that will happen only after we report our results. This + * assumption is a bit shaky, but it is what acquire_sample_rows() + * does, so be consistent. + */ + prstate->all_visible = false; + + /* + * If we wanted to optimize for aborts, we might consider marking + * the page prunable when we see INSERT_IN_PROGRESS. But we + * don't. See related decisions about when to mark the page + * prunable in heapam.c. + */ + break; + + case HEAPTUPLE_DELETE_IN_PROGRESS: + + /* + * This is an expected case during concurrent vacuum. Count such + * rows as live. As above, we assume the deleting transaction + * will commit and update the counters after we report. + */ + prstate->live_tuples++; + prstate->all_visible = false; + + /* + * This tuple may soon become DEAD. Update the hint field so that + * the page is reconsidered for pruning in future. + */ + tdeheap_prune_record_prunable(prstate, + HeapTupleHeaderGetUpdateXid(htup)); + break; + + default: + + /* + * DEAD tuples should've been passed to tdeheap_prune_record_dead() + * or tdeheap_prune_record_unused() instead. + */ + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result %d", + prstate->htsv[offnum]); + break; + } + + /* Consider freezing any normal tuples which will not be removed */ + if (prstate->freeze) + { + bool totally_frozen; + + if (tdeheap_prepare_freeze_tuple(htup, + prstate->cutoffs, + &prstate->pagefrz, + &prstate->frozen[prstate->nfrozen], + &totally_frozen)) + { + /* Save prepared freeze plan for later */ + prstate->frozen[prstate->nfrozen++].offset = offnum; + } + + /* + * If any tuple isn't either totally frozen already or eligible to + * become totally frozen (according to its freeze plan), then the page + * definitely cannot be set all-frozen in the visibility map later on. + */ + if (!totally_frozen) + prstate->all_frozen = false; + } +} + + +/* + * Record line pointer that was already LP_DEAD and is left unchanged. + */ +static void +tdeheap_prune_record_unchanged_lp_dead(Page page, PruneState *prstate, OffsetNumber offnum) +{ + Assert(!prstate->processed[offnum]); + prstate->processed[offnum] = true; + + /* + * Deliberately don't set hastup for LP_DEAD items. We make the soft + * assumption that any LP_DEAD items encountered here will become + * LP_UNUSED later on, before count_nondeletable_pages is reached. If we + * don't make this assumption then rel truncation will only happen every + * other VACUUM, at most. Besides, VACUUM must treat + * hastup/nonempty_pages as provisional no matter how LP_DEAD items are + * handled (handled here, or handled later on). + * + * Similarly, don't unset all_visible until later, at the end of + * tdeheap_page_prune_and_freeze(). This will allow us to attempt to freeze + * the page after pruning. As long as we unset it before updating the + * visibility map, this will be correct. + */ + + /* Record the dead offset for vacuum */ + prstate->deadoffsets[prstate->lpdead_items++] = offnum; +} + +/* + * Record LP_REDIRECT that is left unchanged. + */ +static void +tdeheap_prune_record_unchanged_lp_redirect(PruneState *prstate, OffsetNumber offnum) +{ + /* + * A redirect line pointer doesn't count as a live tuple. 
+ * + * If we leave a redirect line pointer in place, there will be another + * tuple on the page that it points to. We will do the bookkeeping for + * that separately. So we have nothing to do here, except remember that + * we processed this item. + */ + Assert(!prstate->processed[offnum]); + prstate->processed[offnum] = true; +} + +/* + * Perform the actual page changes needed by tdeheap_page_prune_and_freeze(). + * + * If 'lp_truncate_only' is set, we are merely marking LP_DEAD line pointers + * as unused, not redirecting or removing anything else. The + * PageRepairFragmentation() call is skipped in that case. + * + * If 'lp_truncate_only' is not set, the caller must hold a cleanup lock on + * the buffer. If it is set, an ordinary exclusive lock suffices. + */ +void +tdeheap_page_prune_execute(Relation rel, Buffer buffer, bool lp_truncate_only, + OffsetNumber *redirected, int nredirected, + OffsetNumber *nowdead, int ndead, + OffsetNumber *nowunused, int nunused) +{ + Page page = (Page) BufferGetPage(buffer); + OffsetNumber *offnum; + HeapTupleHeader htup PG_USED_FOR_ASSERTS_ONLY; + + /* Shouldn't be called unless there's something to do */ + Assert(nredirected > 0 || ndead > 0 || nunused > 0); + + /* If 'lp_truncate_only', we can only remove already-dead line pointers */ + Assert(!lp_truncate_only || (nredirected == 0 && ndead == 0)); + + /* Update all redirected line pointers */ + offnum = redirected; + for (int i = 0; i < nredirected; i++) + { + OffsetNumber fromoff = *offnum++; + OffsetNumber tooff = *offnum++; + ItemId fromlp = PageGetItemId(page, fromoff); + ItemId tolp PG_USED_FOR_ASSERTS_ONLY; + +#ifdef USE_ASSERT_CHECKING + + /* + * Any existing item that we set as an LP_REDIRECT (any 'from' item) + * must be the first item from a HOT chain. If the item has tuple + * storage then it can't be a heap-only tuple. Otherwise we are just + * maintaining an existing LP_REDIRECT from an existing HOT chain that + * has been pruned at least once before now. + */ + if (!ItemIdIsRedirected(fromlp)) + { + Assert(ItemIdHasStorage(fromlp) && ItemIdIsNormal(fromlp)); + + htup = (HeapTupleHeader) PageGetItem(page, fromlp); + Assert(!HeapTupleHeaderIsHeapOnly(htup)); + } + else + { + /* We shouldn't need to redundantly set the redirect */ + Assert(ItemIdGetRedirect(fromlp) != tooff); + } + + /* + * The item that we're about to set as an LP_REDIRECT (the 'from' + * item) will point to an existing item (the 'to' item) that is + * already a heap-only tuple. There can be at most one LP_REDIRECT + * item per HOT chain. + * + * We need to keep around an LP_REDIRECT item (after original + * non-heap-only root tuple gets pruned away) so that it's always + * possible for VACUUM to easily figure out what TID to delete from + * indexes when an entire HOT chain becomes dead. A heap-only tuple + * can never become LP_DEAD; an LP_REDIRECT item or a regular heap + * tuple can. + * + * This check may miss problems, e.g. the target of a redirect could + * be marked as unused subsequently. The page_verify_redirects() check + * below will catch such problems. 
+ */ + tolp = PageGetItemId(page, tooff); + Assert(ItemIdHasStorage(tolp) && ItemIdIsNormal(tolp)); + htup = (HeapTupleHeader) PageGetItem(page, tolp); + Assert(HeapTupleHeaderIsHeapOnly(htup)); +#endif + + ItemIdSetRedirect(fromlp, tooff); + } + + /* Update all now-dead line pointers */ + offnum = nowdead; + for (int i = 0; i < ndead; i++) + { + OffsetNumber off = *offnum++; + ItemId lp = PageGetItemId(page, off); + +#ifdef USE_ASSERT_CHECKING + + /* + * An LP_DEAD line pointer must be left behind when the original item + * (which is dead to everybody) could still be referenced by a TID in + * an index. This should never be necessary with any individual + * heap-only tuple item, though. (It's not clear how much of a problem + * that would be, but there is no reason to allow it.) + */ + if (ItemIdHasStorage(lp)) + { + Assert(ItemIdIsNormal(lp)); + htup = (HeapTupleHeader) PageGetItem(page, lp); + Assert(!HeapTupleHeaderIsHeapOnly(htup)); + } + else + { + /* Whole HOT chain becomes dead */ + Assert(ItemIdIsRedirected(lp)); + } +#endif + + ItemIdSetDead(lp); + } + + /* Update all now-unused line pointers */ + offnum = nowunused; + for (int i = 0; i < nunused; i++) + { + OffsetNumber off = *offnum++; + ItemId lp = PageGetItemId(page, off); + +#ifdef USE_ASSERT_CHECKING + + if (lp_truncate_only) + { + /* Setting LP_DEAD to LP_UNUSED in vacuum's second pass */ + Assert(ItemIdIsDead(lp) && !ItemIdHasStorage(lp)); + } + else + { + /* + * When tdeheap_page_prune_and_freeze() was called, mark_unused_now + * may have been passed as true, which allows would-be LP_DEAD + * items to be made LP_UNUSED instead. This is only possible if + * the relation has no indexes. If there are any dead items, then + * mark_unused_now was not true and every item being marked + * LP_UNUSED must refer to a heap-only tuple. + */ + if (ndead > 0) + { + Assert(ItemIdHasStorage(lp) && ItemIdIsNormal(lp)); + htup = (HeapTupleHeader) PageGetItem(page, lp); + Assert(HeapTupleHeaderIsHeapOnly(htup)); + } + else + Assert(ItemIdIsUsed(lp)); + } + +#endif + + ItemIdSetUnused(lp); + } + + if (lp_truncate_only) + PageTruncateLinePointerArray(page); + else + { + /* + * Finally, repair any fragmentation, and update the page's hint bit + * about whether it has free pointers. + */ + TdePageRepairFragmentation(rel, buffer, page); + + /* + * Now that the page has been modified, assert that redirect items + * still point to valid targets. + */ + page_verify_redirects(page); + } +} + + +/* + * If built with assertions, verify that all LP_REDIRECT items point to a + * valid item. + * + * One way that bugs related to HOT pruning show is redirect items pointing to + * removed tuples. It's not trivial to reliably check that marking an item + * unused will not orphan a redirect item during tdeheap_prune_chain() / + * tdeheap_page_prune_execute(), so we additionally check the whole page after + * pruning. Without this check such bugs would typically only cause asserts + * later, potentially well after the corruption has been introduced. + * + * Also check comments in tdeheap_page_prune_execute()'s redirection loop. 
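+ * + * A hypothetical instance of such a bug: an LP_REDIRECT at offset 2 still + * pointing to offset 5 after pruning marked item 5 LP_UNUSED; the + * assertions below would then fail on the orphaned redirect.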
+ */ +static void +page_verify_redirects(Page page) +{ +#ifdef USE_ASSERT_CHECKING + OffsetNumber offnum; + OffsetNumber maxoff; + + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + ItemId itemid = PageGetItemId(page, offnum); + OffsetNumber targoff; + ItemId targitem; + HeapTupleHeader htup; + + if (!ItemIdIsRedirected(itemid)) + continue; + + targoff = ItemIdGetRedirect(itemid); + targitem = PageGetItemId(page, targoff); + + Assert(ItemIdIsUsed(targitem)); + Assert(ItemIdIsNormal(targitem)); + Assert(ItemIdHasStorage(targitem)); + htup = (HeapTupleHeader) PageGetItem(page, targitem); + Assert(HeapTupleHeaderIsHeapOnly(htup)); + } +#endif +} + + +/* + * For all items in this page, find their respective root line pointers. + * If item k is part of a HOT-chain with root at item j, then we set + * root_offsets[k - 1] = j. + * + * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries. + * Unused entries are filled with InvalidOffsetNumber (zero). + * + * The function must be called with at least share lock on the buffer, to + * prevent concurrent prune operations. + * + * Note: The information collected here is valid only as long as the caller + * holds a pin on the buffer. Once pin is released, a tuple might be pruned + * and reused by a completely unrelated tuple. + */ +void +tdeheap_get_root_tuples(Page page, OffsetNumber *root_offsets) +{ + OffsetNumber offnum, + maxoff; + + MemSet(root_offsets, InvalidOffsetNumber, + MaxHeapTuplesPerPage * sizeof(OffsetNumber)); + + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum)) + { + ItemId lp = PageGetItemId(page, offnum); + HeapTupleHeader htup; + OffsetNumber nextoffnum; + TransactionId priorXmax; + + /* skip unused and dead items */ + if (!ItemIdIsUsed(lp) || ItemIdIsDead(lp)) + continue; + + if (ItemIdIsNormal(lp)) + { + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* + * Check if this tuple is part of a HOT-chain rooted at some other + * tuple. If so, skip it for now; we'll process it when we find + * its root. + */ + if (HeapTupleHeaderIsHeapOnly(htup)) + continue; + + /* + * This is either a plain tuple or the root of a HOT-chain. + * Remember it in the mapping. + */ + root_offsets[offnum - 1] = offnum; + + /* If it's not the start of a HOT-chain, we're done with it */ + if (!HeapTupleHeaderIsHotUpdated(htup)) + continue; + + /* Set up to scan the HOT-chain */ + nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid); + priorXmax = HeapTupleHeaderGetUpdateXid(htup); + } + else + { + /* Must be a redirect item. We do not set its root_offsets entry */ + Assert(ItemIdIsRedirected(lp)); + /* Set up to scan the HOT-chain */ + nextoffnum = ItemIdGetRedirect(lp); + priorXmax = InvalidTransactionId; + } + + /* + * Now follow the HOT-chain and collect other tuples in the chain. + * + * Note: Even though this is a nested loop, the complexity of the + * function is O(N) because a tuple in the page should be visited not + * more than twice, once in the outer loop and once in HOT-chain + * chases. 
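+ * + * Mapping example with hypothetical offsets: for a HOT chain rooted at + * item 1 whose heap-only members are items 3 and 5, we set + * root_offsets[0], root_offsets[2] and root_offsets[4] all to 1.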
+ */ + for (;;) + { + /* Sanity check (pure paranoia) */ + if (offnum < FirstOffsetNumber) + break; + + /* + * An offset past the end of page's line pointer array is possible + * when the array was truncated + */ + if (offnum > maxoff) + break; + + lp = PageGetItemId(page, nextoffnum); + + /* Check for broken chains */ + if (!ItemIdIsNormal(lp)) + break; + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup))) + break; + + /* Remember the root line pointer for this item */ + root_offsets[nextoffnum - 1] = offnum; + + /* Advance to next chain member, if any */ + if (!HeapTupleHeaderIsHotUpdated(htup)) + break; + + /* HOT implies it can't have moved to different partition */ + Assert(!HeapTupleHeaderIndicatesMovedPartitions(htup)); + + nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid); + priorXmax = HeapTupleHeaderGetUpdateXid(htup); + } + } +} + + +/* + * Compare fields that describe actions required to freeze tuple with caller's + * open plan. If everything matches then the frz tuple plan is equivalent to + * caller's plan. + */ +static inline bool +tdeheap_log_freeze_eq(xlhp_freeze_plan *plan, HeapTupleFreeze *frz) +{ + if (plan->xmax == frz->xmax && + plan->t_infomask2 == frz->t_infomask2 && + plan->t_infomask == frz->t_infomask && + plan->frzflags == frz->frzflags) + return true; + + /* Caller must call tdeheap_log_freeze_new_plan again for frz */ + return false; +} + +/* + * Comparator used to deduplicate XLOG_HEAP2_FREEZE_PAGE freeze plans + */ +static int +tdeheap_log_freeze_cmp(const void *arg1, const void *arg2) +{ + HeapTupleFreeze *frz1 = (HeapTupleFreeze *) arg1; + HeapTupleFreeze *frz2 = (HeapTupleFreeze *) arg2; + + if (frz1->xmax < frz2->xmax) + return -1; + else if (frz1->xmax > frz2->xmax) + return 1; + + if (frz1->t_infomask2 < frz2->t_infomask2) + return -1; + else if (frz1->t_infomask2 > frz2->t_infomask2) + return 1; + + if (frz1->t_infomask < frz2->t_infomask) + return -1; + else if (frz1->t_infomask > frz2->t_infomask) + return 1; + + if (frz1->frzflags < frz2->frzflags) + return -1; + else if (frz1->frzflags > frz2->frzflags) + return 1; + + /* + * tdeheap_log_freeze_eq would consider these tuple-wise plans to be equal. + * (So the tuples will share a single canonical freeze plan.) + * + * We tiebreak on page offset number to keep each freeze plan's page + * offset number array individually sorted. (Unnecessary, but be tidy.) + */ + if (frz1->offset < frz2->offset) + return -1; + else if (frz1->offset > frz2->offset) + return 1; + + Assert(false); + return 0; +} + +/* + * Start new plan initialized using tuple-level actions. At least one tuple + * will have steps required to freeze described by caller's plan during REDO. + */ +static inline void +tdeheap_log_freeze_new_plan(xlhp_freeze_plan *plan, HeapTupleFreeze *frz) +{ + plan->xmax = frz->xmax; + plan->t_infomask2 = frz->t_infomask2; + plan->t_infomask = frz->t_infomask; + plan->frzflags = frz->frzflags; + plan->ntuples = 1; /* for now */ +} + +/* + * Deduplicate tuple-based freeze plans so that each distinct set of + * processing steps is only stored once in XLOG_HEAP2_FREEZE_PAGE records. + * Called during original execution of freezing (for logged relations). + * + * Return value is number of plans set in *plans_out for caller. 
Also writes + * an array of offset numbers into *offsets_out output argument for caller + * (actually there is one array per freeze plan, but that's not of immediate + * concern to our caller). + */ +static int +tdeheap_log_freeze_plan(HeapTupleFreeze *tuples, int ntuples, + xlhp_freeze_plan *plans_out, + OffsetNumber *offsets_out) +{ + int nplans = 0; + + /* Sort tuple-based freeze plans in the order required to deduplicate */ + qsort(tuples, ntuples, sizeof(HeapTupleFreeze), tdeheap_log_freeze_cmp); + + for (int i = 0; i < ntuples; i++) + { + HeapTupleFreeze *frz = tuples + i; + + if (i == 0) + { + /* New canonical freeze plan starting with first tup */ + tdeheap_log_freeze_new_plan(plans_out, frz); + nplans++; + } + else if (tdeheap_log_freeze_eq(plans_out, frz)) + { + /* tup matches open canonical plan -- include tup in it */ + Assert(offsets_out[i - 1] < frz->offset); + plans_out->ntuples++; + } + else + { + /* Tup doesn't match current plan -- done with it now */ + plans_out++; + + /* New canonical freeze plan starting with this tup */ + tdeheap_log_freeze_new_plan(plans_out, frz); + nplans++; + } + + /* + * Save page offset number in dedicated buffer in passing. + * + * REDO routine relies on the record's offset numbers array grouping + * offset numbers by freeze plan. The sort order within each grouping + * is ascending offset number order, just to keep things tidy. + */ + offsets_out[i] = frz->offset; + } + + Assert(nplans > 0 && nplans <= ntuples); + + return nplans; +} + +/* + * Write an XLOG_HEAP2_PRUNE_FREEZE WAL record + * + * This is used for several different page maintenance operations: + * + * - Page pruning, in VACUUM's 1st pass or on access: Some items are + * redirected, some marked dead, and some removed altogether. + * + * - Freezing: Items are marked as 'frozen'. + * + * - Vacuum, 2nd pass: Items that are already LP_DEAD are marked as unused. + * + * They have enough commonalities that we use a single WAL record for them + * all. + * + * If replaying the record requires a cleanup lock, pass cleanup_lock = true. + * Replaying 'redirected' or 'dead' items always requires a cleanup lock, but + * replaying 'unused' items depends on whether they were all previously marked + * as dead. + * + * Note: This function scribbles on the 'frozen' array. + * + * Note: This is called in a critical section, so careful what you do here. + */ +void +log_tdeheap_prune_and_freeze(Relation relation, Buffer buffer, + TransactionId conflict_xid, + bool cleanup_lock, + PruneReason reason, + HeapTupleFreeze *frozen, int nfrozen, + OffsetNumber *redirected, int nredirected, + OffsetNumber *dead, int ndead, + OffsetNumber *unused, int nunused) +{ + xl_tdeheap_prune xlrec; + XLogRecPtr recptr; + uint8 info; + + /* The following local variables hold data registered in the WAL record: */ + xlhp_freeze_plan plans[MaxHeapTuplesPerPage]; + xlhp_freeze_plans freeze_plans; + xlhp_prune_items redirect_items; + xlhp_prune_items dead_items; + xlhp_prune_items unused_items; + OffsetNumber frz_offsets[MaxHeapTuplesPerPage]; + + xlrec.flags = 0; + + /* + * Prepare data for the buffer. The arrays are not actually in the + * buffer, but we pretend that they are. When XLogInsert stores a full + * page image, the arrays can be omitted. + */ + XLogBeginInsert(); + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + if (nfrozen > 0) + { + int nplans; + + xlrec.flags |= XLHP_HAS_FREEZE_PLANS; + + /* + * Prepare deduplicated representation for use in the WAL record. 
This + * destructively sorts frozen tuples array in-place. + */ + nplans = tdeheap_log_freeze_plan(frozen, nfrozen, plans, frz_offsets); + + freeze_plans.nplans = nplans; + XLogRegisterBufData(0, (char *) &freeze_plans, + offsetof(xlhp_freeze_plans, plans)); + XLogRegisterBufData(0, (char *) plans, + sizeof(xlhp_freeze_plan) * nplans); + } + if (nredirected > 0) + { + xlrec.flags |= XLHP_HAS_REDIRECTIONS; + + redirect_items.ntargets = nredirected; + XLogRegisterBufData(0, (char *) &redirect_items, + offsetof(xlhp_prune_items, data)); + XLogRegisterBufData(0, (char *) redirected, + sizeof(OffsetNumber[2]) * nredirected); + } + if (ndead > 0) + { + xlrec.flags |= XLHP_HAS_DEAD_ITEMS; + + dead_items.ntargets = ndead; + XLogRegisterBufData(0, (char *) &dead_items, + offsetof(xlhp_prune_items, data)); + XLogRegisterBufData(0, (char *) dead, + sizeof(OffsetNumber) * ndead); + } + if (nunused > 0) + { + xlrec.flags |= XLHP_HAS_NOW_UNUSED_ITEMS; + + unused_items.ntargets = nunused; + XLogRegisterBufData(0, (char *) &unused_items, + offsetof(xlhp_prune_items, data)); + XLogRegisterBufData(0, (char *) unused, + sizeof(OffsetNumber) * nunused); + } + if (nfrozen > 0) + XLogRegisterBufData(0, (char *) frz_offsets, + sizeof(OffsetNumber) * nfrozen); + + /* + * Prepare the main xl_tdeheap_prune record. We already set the XLPH_HAS_* + * flag above. + */ + if (RelationIsAccessibleInLogicalDecoding(relation)) + xlrec.flags |= XLHP_IS_CATALOG_REL; + if (TransactionIdIsValid(conflict_xid)) + xlrec.flags |= XLHP_HAS_CONFLICT_HORIZON; + if (cleanup_lock) + xlrec.flags |= XLHP_CLEANUP_LOCK; + else + { + Assert(nredirected == 0 && ndead == 0); + /* also, any items in 'unused' must've been LP_DEAD previously */ + } + XLogRegisterData((char *) &xlrec, SizeOfHeapPrune); + if (TransactionIdIsValid(conflict_xid)) + XLogRegisterData((char *) &conflict_xid, sizeof(TransactionId)); + + switch (reason) + { + case PRUNE_ON_ACCESS: + info = XLOG_HEAP2_PRUNE_ON_ACCESS; + break; + case PRUNE_VACUUM_SCAN: + info = XLOG_HEAP2_PRUNE_VACUUM_SCAN; + break; + case PRUNE_VACUUM_CLEANUP: + info = XLOG_HEAP2_PRUNE_VACUUM_CLEANUP; + break; + default: + elog(ERROR, "unrecognized prune reason: %d", (int) reason); + break; + } + recptr = XLogInsert(RM_HEAP2_ID, info); + + PageSetLSN(BufferGetPage(buffer), recptr); +} + +// TODO: move to own file so it can be autoupdated +// FROM src/page/bufpage.c + +/* + * Tuple defrag support for PageRepairFragmentation and PageIndexMultiDelete + */ +typedef struct itemIdCompactData +{ + uint16 offsetindex; /* linp array index */ + int16 itemoff; /* page offset of item data */ + uint16 len; + uint16 alignedlen; /* MAXALIGN(item data len) */ +} itemIdCompactData; +typedef itemIdCompactData *itemIdCompact; + +/* + * After removing or marking some line pointers unused, move the tuples to + * remove the gaps caused by the removed items and reorder them back into + * reverse line pointer order in the page. + * + * This function can often be fairly hot, so it pays to take some measures to + * make it as optimal as possible. + * + * Callers may pass 'presorted' as true if the 'itemidbase' array is sorted in + * descending order of itemoff. When this is true we can just memmove() + * tuples towards the end of the page. This is quite a common case as it's + * the order that tuples are initially inserted into pages. 
When we call this + * function to defragment the tuples in the page then any new line pointers + * added to the page will keep that presorted order, so hitting this case is + * still very common for tables that are commonly updated. + * + * When the 'itemidbase' array is not presorted then we're unable to just + * memmove() tuples around freely. Doing so could cause us to overwrite the + * memory belonging to a tuple we've not moved yet. In this case, we copy all + * the tuples that need to be moved into a temporary buffer. We can then + * simply memcpy() out of that temp buffer back into the page at the correct + * location. Tuples are copied back into the page in the same order as the + * 'itemidbase' array, so we end up reordering the tuples back into reverse + * line pointer order. This will increase the chances of hitting the + * presorted case the next time around. + * + * Callers must ensure that nitems is > 0 + */ +static void // this is where it happens! +pgtde_compactify_tuples(Relation rel, Buffer buffer, itemIdCompact itemidbase, int nitems, Page page, bool presorted) +{ + PageHeader phdr = (PageHeader) page; + Offset upper; + Offset copy_tail; + Offset copy_head; + itemIdCompact itemidptr; + int i; + + /* Code within will not work correctly if nitems == 0 */ + Assert(nitems > 0); + + if (presorted) + { + +#ifdef USE_ASSERT_CHECKING + { + /* + * Verify we've not gotten any new callers that are incorrectly + * passing a true presorted value. + */ + Offset lastoff = phdr->pd_special; + + for (i = 0; i < nitems; i++) + { + itemidptr = &itemidbase[i]; + + Assert(lastoff > itemidptr->itemoff); + + lastoff = itemidptr->itemoff; + } + } +#endif /* USE_ASSERT_CHECKING */ + + /* + * 'itemidbase' is already in the optimal order, i.e, lower item + * pointers have a higher offset. This allows us to memmove() the + * tuples up to the end of the page without having to worry about + * overwriting other tuples that have not been moved yet. + * + * There's a good chance that there are tuples already right at the + * end of the page that we can simply skip over because they're + * already in the correct location within the page. We'll do that + * first... + */ + upper = phdr->pd_special; + i = 0; + do + { + itemidptr = &itemidbase[i]; + if (upper != itemidptr->itemoff + itemidptr->alignedlen) + break; + upper -= itemidptr->alignedlen; + + i++; + } while (i < nitems); + + /* + * Now that we've found the first tuple that needs to be moved, we can + * do the tuple compactification. We try and make the least number of + * memmove() calls and only call memmove() when there's a gap. When + * we see a gap we just move all tuples after the gap up until the + * point of the last move operation. + */ + copy_tail = copy_head = itemidptr->itemoff + itemidptr->alignedlen; + for (; i < nitems; i++) + { + ItemId lp; + + itemidptr = &itemidbase[i]; + + lp = PageGetItemId(page, itemidptr->offsetindex + 1); + + if (copy_head != itemidptr->itemoff + itemidptr->alignedlen && copy_head < copy_tail) + { + memmove((char *) page + upper, + page + copy_head, + copy_tail - copy_head); + + /* + * We've now moved all tuples already seen, but not the + * current tuple, so we set the copy_tail to the end of this + * tuple so it can be moved in another iteration of the loop. 
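+				 *
+				 * Illustrative note (editor's addition, not upstream text):
+				 * a run of adjacent tuples separated from its final position
+				 * by a single gap is relocated with one memmove() covering
+				 * the whole run, rather than one call per tuple, which is
+				 * what keeps this loop cheap on mostly-compact pages.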
+ */ + copy_tail = itemidptr->itemoff + itemidptr->alignedlen; + } + /* shift the target offset down by the length of this tuple */ + upper -= itemidptr->alignedlen; + /* point the copy_head to the start of this tuple */ + copy_head = itemidptr->itemoff; + + /* update the line pointer to reference the new offset */ + lp->lp_off = upper; + } + + /* move the remaining tuples. */ + memmove((char *) page + upper, + page + copy_head, + copy_tail - copy_head); + } + else + { + PGAlignedBlock scratch; + char *scratchptr = scratch.data; + + /* + * Non-presorted case: The tuples in the itemidbase array may be in + * any order. So, in order to move these to the end of the page we + * must make a temp copy of each tuple that needs to be moved before + * we copy them back into the page at the new offset. + * + * If a large percentage of tuples have been pruned (>75%) then we'll + * copy these into the temp buffer tuple-by-tuple, otherwise, we'll + * just do a single memcpy() for all tuples that need to be moved. + * When so many tuples have been removed there's likely to be a lot of + * gaps and it's unlikely that many non-movable tuples remain at the + * end of the page. + */ + if (nitems < PageGetMaxOffsetNumber(page) / 4) + { + i = 0; + do + { + itemidptr = &itemidbase[i]; + memcpy(scratchptr + itemidptr->itemoff, page + itemidptr->itemoff, + itemidptr->alignedlen); + i++; + } while (i < nitems); + + /* Set things up for the compactification code below */ + i = 0; + itemidptr = &itemidbase[0]; + upper = phdr->pd_special; + } + else + { + upper = phdr->pd_special; + + /* + * Many tuples are likely to already be in the correct location. + * There's no need to copy these into the temp buffer. Instead + * we'll just skip forward in the itemidbase array to the position + * that we do need to move tuples from so that the code below just + * leaves these ones alone. + */ + i = 0; + do + { + itemidptr = &itemidbase[i]; + if (upper != itemidptr->itemoff + itemidptr->alignedlen) + break; + upper -= itemidptr->alignedlen; + + i++; + } while (i < nitems); + + /* Copy all tuples that need to be moved into the temp buffer */ + memcpy(scratchptr + phdr->pd_upper, + page + phdr->pd_upper, + upper - phdr->pd_upper); + } + + /* + * Do the tuple compactification. itemidptr is already pointing to + * the first tuple that we're going to move. Here we collapse the + * memcpy calls for adjacent tuples into a single call. This is done + * by delaying the memcpy call until we find a gap that needs to be + * closed. + */ + copy_tail = copy_head = itemidptr->itemoff + itemidptr->alignedlen; + for (; i < nitems; i++) + { + ItemId lp; + + itemidptr = &itemidbase[i]; + + lp = PageGetItemId(page, itemidptr->offsetindex + 1); + + /* copy pending tuples when we detect a gap */ + if (copy_head != itemidptr->itemoff + itemidptr->alignedlen) + { + memcpy((char *) page + upper, + scratchptr + copy_head, + copy_tail - copy_head); + + /* + * We've now copied all tuples already seen, but not the + * current tuple, so we set the copy_tail to the end of this + * tuple. 
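+				 *
+				 * (Editor's note: since the source of the memcpy() here is
+				 * the scratch copy rather than the page itself, overlap with
+				 * the destination range cannot occur, unlike in the
+				 * presorted memmove() path above.)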
+ */ + copy_tail = itemidptr->itemoff + itemidptr->alignedlen; + } + /* shift the target offset down by the length of this tuple */ + upper -= itemidptr->alignedlen; + /* point the copy_head to the start of this tuple */ + copy_head = itemidptr->itemoff; + + /* update the line pointer to reference the new offset */ + lp->lp_off = upper; + } + + /* Copy the remaining chunk */ + memcpy((char *) page + upper, + scratchptr + copy_head, + copy_tail - copy_head); + } + + phdr->pd_upper = upper; +} + +/* + * PageRepairFragmentation + * + * Frees fragmented space on a heap page following pruning. + * + * This routine is usable for heap pages only, but see PageIndexMultiDelete. + * + * This routine removes unused line pointers from the end of the line pointer + * array. This is possible when dead heap-only tuples get removed by pruning, + * especially when there were HOT chains with several tuples each beforehand. + * + * Caller had better have a full cleanup lock on page's buffer. As a side + * effect the page's PD_HAS_FREE_LINES hint bit will be set or unset as + * needed. Caller might also need to account for a reduction in the length of + * the line pointer array following array truncation. + */ +void +TdePageRepairFragmentation(Relation rel, Buffer buffer, Page page) +{ + Offset pd_lower = ((PageHeader) page)->pd_lower; + Offset pd_upper = ((PageHeader) page)->pd_upper; + Offset pd_special = ((PageHeader) page)->pd_special; + Offset last_offset; + itemIdCompactData itemidbase[MaxHeapTuplesPerPage]; + itemIdCompact itemidptr; + ItemId lp; + int nline, + nstorage, + nunused; + OffsetNumber finalusedlp = InvalidOffsetNumber; + int i; + Size totallen; + bool presorted = true; /* For now */ + + /* + * It's worth the trouble to be more paranoid here than in most places, + * because we are about to reshuffle data in (what is usually) a shared + * disk buffer. If we aren't careful then corrupted pointers, lengths, + * etc could cause us to clobber adjacent disk buffers, spreading the data + * loss further. So, check everything. + */ + if (pd_lower < SizeOfPageHeaderData || + pd_lower > pd_upper || + pd_upper > pd_special || + pd_special > BLCKSZ || + pd_special != MAXALIGN(pd_special)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg("corrupted page pointers: lower = %u, upper = %u, special = %u", + pd_lower, pd_upper, pd_special))); + + /* + * Run through the line pointer array and collect data about live items. 
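+	 * For each normal item we record its linp array index, current offset
+	 * and MAXALIGN'd length in itemidbase[]; unused slots are counted, and
+	 * trailing unused ones are truncated away afterwards.  (Summary added
+	 * for readability; the details are in the loop below.)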
+ */ + nline = PageGetMaxOffsetNumber(page); + itemidptr = itemidbase; + nunused = totallen = 0; + last_offset = pd_special; + for (i = FirstOffsetNumber; i <= nline; i++) + { + lp = PageGetItemId(page, i); + if (ItemIdIsUsed(lp)) + { + if (ItemIdHasStorage(lp)) + { + itemidptr->offsetindex = i - 1; + itemidptr->itemoff = ItemIdGetOffset(lp); + + if (last_offset > itemidptr->itemoff) + last_offset = itemidptr->itemoff; + else + presorted = false; + + if (unlikely(itemidptr->itemoff < (int) pd_upper || + itemidptr->itemoff >= (int) pd_special)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg("corrupted line pointer: %u", + itemidptr->itemoff))); + itemidptr->len = ItemIdGetLength(lp); + itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp)); + totallen += itemidptr->alignedlen; + itemidptr++; + } + + finalusedlp = i; /* Could be the final non-LP_UNUSED item */ + } + else + { + /* Unused entries should have lp_len = 0, but make sure */ + Assert(!ItemIdHasStorage(lp)); + ItemIdSetUnused(lp); + nunused++; + } + } + + nstorage = itemidptr - itemidbase; + if (nstorage == 0) + { + /* Page is completely empty, so just reset it quickly */ + ((PageHeader) page)->pd_upper = pd_special; + } + else + { + /* Need to compact the page the hard way */ + if (totallen > (Size) (pd_special - pd_lower)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg("corrupted item lengths: total %u, available space %u", + (unsigned int) totallen, pd_special - pd_lower))); + + pgtde_compactify_tuples(rel, buffer, itemidbase, nstorage, page, presorted); + } + + if (finalusedlp != nline) + { + /* The last line pointer is not the last used line pointer */ + int nunusedend = nline - finalusedlp; + + Assert(nunused >= nunusedend && nunusedend > 0); + + /* remove trailing unused line pointers from the count */ + nunused -= nunusedend; + /* truncate the line pointer array */ + ((PageHeader) page)->pd_lower -= (sizeof(ItemIdData) * nunusedend); + } + + /* Set hint bit for PageAddItemExtended */ + if (nunused > 0) + PageSetHasFreeLinePointers(page); + else + PageClearHasFreeLinePointers(page); +} diff --git a/contrib/pg_tde/src17/access/pg_tde_rewrite.c b/contrib/pg_tde/src17/access/pg_tde_rewrite.c new file mode 100644 index 00000000000..9332b42923a --- /dev/null +++ b/contrib/pg_tde/src17/access/pg_tde_rewrite.c @@ -0,0 +1,1257 @@ +/*------------------------------------------------------------------------- + * + * rewriteheap.c + * Support functions to rewrite tables. + * + * These functions provide a facility to completely rewrite a heap, while + * preserving visibility information and update chains. + * + * INTERFACE + * + * The caller is responsible for creating the new heap, all catalog + * changes, supplying the tuples to be written to the new heap, and + * rebuilding indexes. The caller must hold AccessExclusiveLock on the + * target table, because we assume no one else is writing into it. + * + * To use the facility: + * + * begin_tdeheap_rewrite + * while (fetch next tuple) + * { + * if (tuple is dead) + * rewrite_tdeheap_dead_tuple + * else + * { + * // do any transformations here if required + * rewrite_tdeheap_tuple + * } + * } + * end_tdeheap_rewrite + * + * The contents of the new relation shouldn't be relied on until after + * end_tdeheap_rewrite is called. + * + * + * IMPLEMENTATION + * + * This would be a fairly trivial affair, except that we need to maintain + * the ctid chains that link versions of an updated tuple together. 
+ * Since the newly stored tuples will have tids different from the original + * ones, if we just copied t_ctid fields to the new table the links would + * be wrong. When we are required to copy a (presumably recently-dead or + * delete-in-progress) tuple whose ctid doesn't point to itself, we have + * to substitute the correct ctid instead. + * + * For each ctid reference from A -> B, we might encounter either A first + * or B first. (Note that a tuple in the middle of a chain is both A and B + * of different pairs.) + * + * If we encounter A first, we'll store the tuple in the unresolved_tups + * hash table. When we later encounter B, we remove A from the hash table, + * fix the ctid to point to the new location of B, and insert both A and B + * to the new heap. + * + * If we encounter B first, we can insert B to the new heap right away. + * We then add an entry to the old_new_tid_map hash table showing B's + * original tid (in the old heap) and new tid (in the new heap). + * When we later encounter A, we get the new location of B from the table, + * and can write A immediately with the correct ctid. + * + * Entries in the hash tables can be removed as soon as the later tuple + * is encountered. That helps to keep the memory usage down. At the end, + * both tables are usually empty; we should have encountered both A and B + * of each pair. However, it's possible for A to be RECENTLY_DEAD and B + * entirely DEAD according to HeapTupleSatisfiesVacuum, because the test + * for deadness using OldestXmin is not exact. In such a case we might + * encounter B first, and skip it, and find A later. Then A would be added + * to unresolved_tups, and stay there until end of the rewrite. Since + * this case is very unusual, we don't worry about the memory usage. + * + * Using in-memory hash tables means that we use some memory for each live + * update chain in the table, from the time we find one end of the + * reference until we find the other end. That shouldn't be a problem in + * practice, but if you do something like an UPDATE without a where-clause + * on a large table, and then run CLUSTER in the same transaction, you + * could run out of memory. It doesn't seem worthwhile to add support for + * spill-to-disk, as there shouldn't be that many RECENTLY_DEAD tuples in a + * table under normal circumstances. Furthermore, in the typical scenario + * of CLUSTERing on an unchanging key column, we'll see all the versions + * of a given tuple together anyway, and so the peak memory usage is only + * proportional to the number of RECENTLY_DEAD versions of a single row, not + * in the whole table. Note that if we do fail halfway through a CLUSTER, + * the old table is still valid, so failure is not catastrophic. + * + * We can't use the normal tdeheap_insert function to insert into the new + * heap, because tdeheap_insert overwrites the visibility information. + * We use a special-purpose raw_tdeheap_insert function instead, which + * is optimized for bulk inserting a lot of tuples, knowing that we have + * exclusive access to the heap. raw_tdeheap_insert builds new pages in + * local storage. When a page is full, or at the end of the process, + * we insert it to WAL as a single record and then write it to disk with + * the bulk smgr writer. Note, however, that any data sent to the new + * heap's TOAST table will go through the normal bufmgr. 
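+ *
+ * A concrete example (added for illustration, with made-up TIDs): suppose
+ * tuple A at old TID (0,1) was updated to B at old TID (0,2).  If the scan
+ * reaches A first, A waits in unresolved_tups keyed by B's expected xmin
+ * and old TID; once B lands in the new heap at, say, (5,1), A's t_ctid is
+ * fixed to (5,1) and both are written.  If B is reached first,
+ * old_new_tid_map records (0,2) -> (5,1), and A is written immediately
+ * with the correct ctid when it turns up.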
+ * + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994-5, Regents of the University of California + * + * IDENTIFICATION + * src/backend/access/heap/rewriteheap.c + * + *------------------------------------------------------------------------- + */ +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include + +#include "access/pg_tdeam.h" +#include "access/pg_tdeam_xlog.h" +#include "access/pg_tdetoast.h" +#include "access/pg_tde_rewrite.h" +#include "encryption/enc_tde.h" + +#include "access/transam.h" +#include "access/xact.h" +#include "access/xloginsert.h" +#include "common/file_utils.h" +#include "lib/ilist.h" +#include "miscadmin.h" +#include "pgstat.h" +#include "replication/slot.h" +#include "storage/bufmgr.h" +#include "storage/bulk_write.h" +#include "storage/fd.h" +#include "storage/procarray.h" +#include "utils/memutils.h" +#include "utils/rel.h" + +/* + * State associated with a rewrite operation. This is opaque to the user + * of the rewrite facility. + */ +typedef struct RewriteStateData +{ + Relation rs_old_rel; /* source heap */ + Relation rs_new_rel; /* destination heap */ + BulkWriteState *rs_bulkstate; /* writer for the destination */ + BulkWriteBuffer rs_buffer; /* page currently being built */ + BlockNumber rs_blockno; /* block where page will go */ + bool rs_logical_rewrite; /* do we need to do logical rewriting */ + TransactionId rs_oldest_xmin; /* oldest xmin used by caller to determine + * tuple visibility */ + TransactionId rs_freeze_xid; /* Xid that will be used as freeze cutoff + * point */ + TransactionId rs_logical_xmin; /* Xid that will be used as cutoff point + * for logical rewrites */ + MultiXactId rs_cutoff_multi; /* MultiXactId that will be used as cutoff + * point for multixacts */ + MemoryContext rs_cxt; /* for hash tables and entries and tuples in + * them */ + XLogRecPtr rs_begin_lsn; /* XLogInsertLsn when starting the rewrite */ + HTAB *rs_unresolved_tups; /* unmatched A tuples */ + HTAB *rs_old_new_tid_map; /* unmatched B tuples */ + HTAB *rs_logical_mappings; /* logical remapping files */ + uint32 rs_num_rewrite_mappings; /* # in memory mappings */ +} RewriteStateData; + +/* + * The lookup keys for the hash tables are tuple TID and xmin (we must check + * both to avoid false matches from dead tuples). Beware that there is + * probably some padding space in this struct; it must be zeroed out for + * correct hashtable operation. + */ +typedef struct +{ + TransactionId xmin; /* tuple xmin */ + ItemPointerData tid; /* tuple location in old heap */ +} TidHashKey; + +/* + * Entry structures for the hash tables + */ +typedef struct +{ + TidHashKey key; /* expected xmin/old location of B tuple */ + ItemPointerData old_tid; /* A's location in the old heap */ + HeapTuple tuple; /* A's tuple contents */ +} UnresolvedTupData; + +typedef UnresolvedTupData *UnresolvedTup; + +typedef struct +{ + TidHashKey key; /* actual xmin/old location of B tuple */ + ItemPointerData new_tid; /* where we put it in the new heap */ +} OldToNewMappingData; + +typedef OldToNewMappingData *OldToNewMapping; + +/* + * In-Memory data for an xid that might need logical remapping entries + * to be logged. 
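+ *
+ * (Editor's note: there is one such entry per mapped xid; entries live in
+ * the rs_logical_mappings hash table below, keyed by xid.)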
+ */ +typedef struct RewriteMappingFile +{ + TransactionId xid; /* xid that might need to see the row */ + int vfd; /* fd of mappings file */ + off_t off; /* how far have we written yet */ + dclist_head mappings; /* list of in-memory mappings */ + char path[MAXPGPATH]; /* path, for error messages */ +} RewriteMappingFile; + +/* + * A single In-Memory logical rewrite mapping, hanging off + * RewriteMappingFile->mappings. + */ +typedef struct RewriteMappingDataEntry +{ + LogicalRewriteMappingData map; /* map between old and new location of the + * tuple */ + dlist_node node; +} RewriteMappingDataEntry; + + +/* prototypes for internal functions */ +static void raw_tdeheap_insert(RewriteState state, HeapTuple tup); + +/* internal logical remapping prototypes */ +static void logical_begin_tdeheap_rewrite(RewriteState state); +static void logical_rewrite_tdeheap_tuple(RewriteState state, ItemPointerData old_tid, HeapTuple new_tuple); +static void logical_end_tdeheap_rewrite(RewriteState state); + + +/* + * Begin a rewrite of a table + * + * old_heap old, locked heap relation tuples will be read from + * new_heap new, locked heap relation to insert tuples to + * oldest_xmin xid used by the caller to determine which tuples are dead + * freeze_xid xid before which tuples will be frozen + * cutoff_multi multixact before which multis will be removed + * + * Returns an opaque RewriteState, allocated in current memory context, + * to be used in subsequent calls to the other functions. + */ +RewriteState +begin_tdeheap_rewrite(Relation old_heap, Relation new_heap, TransactionId oldest_xmin, + TransactionId freeze_xid, MultiXactId cutoff_multi) +{ + RewriteState state; + MemoryContext rw_cxt; + MemoryContext old_cxt; + HASHCTL hash_ctl; + + /* + * To ease cleanup, make a separate context that will contain the + * RewriteState struct itself plus all subsidiary data. + */ + rw_cxt = AllocSetContextCreate(CurrentMemoryContext, + "Table rewrite", + ALLOCSET_DEFAULT_SIZES); + old_cxt = MemoryContextSwitchTo(rw_cxt); + + /* Create and fill in the state struct */ + state = palloc0(sizeof(RewriteStateData)); + + state->rs_old_rel = old_heap; + state->rs_new_rel = new_heap; + state->rs_buffer = NULL; + /* new_heap needn't be empty, just locked */ + state->rs_blockno = RelationGetNumberOfBlocks(new_heap); + state->rs_oldest_xmin = oldest_xmin; + state->rs_freeze_xid = freeze_xid; + state->rs_cutoff_multi = cutoff_multi; + state->rs_cxt = rw_cxt; + state->rs_bulkstate = smgr_bulk_start_rel(new_heap, MAIN_FORKNUM); + + /* Initialize hash tables used to track update chains */ + hash_ctl.keysize = sizeof(TidHashKey); + hash_ctl.entrysize = sizeof(UnresolvedTupData); + hash_ctl.hcxt = state->rs_cxt; + + state->rs_unresolved_tups = + hash_create("Rewrite / Unresolved ctids", + 128, /* arbitrary initial size */ + &hash_ctl, + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); + + hash_ctl.entrysize = sizeof(OldToNewMappingData); + + state->rs_old_new_tid_map = + hash_create("Rewrite / Old to new tid map", + 128, /* arbitrary initial size */ + &hash_ctl, + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); + + MemoryContextSwitchTo(old_cxt); + + logical_begin_tdeheap_rewrite(state); + + return state; +} + +/* + * End a rewrite. + * + * state and any other resources are freed. + */ +void +end_tdeheap_rewrite(RewriteState state) +{ + HASH_SEQ_STATUS seq_status; + UnresolvedTup unresolved; + + /* + * Write any remaining tuples in the UnresolvedTups table. If we have any + * left, they should in fact be dead, but let's err on the safe side. 
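+	 *
+	 * (See the file header comment: an unresolved A can legitimately
+	 * survive to this point when its successor B was fully DEAD and was
+	 * therefore skipped.  Invalidating t_ctid makes the written tuple
+	 * point at itself.)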
+ */ + hash_seq_init(&seq_status, state->rs_unresolved_tups); + + while ((unresolved = hash_seq_search(&seq_status)) != NULL) + { + ItemPointerSetInvalid(&unresolved->tuple->t_data->t_ctid); + raw_tdeheap_insert(state, unresolved->tuple); + } + + /* Write the last page, if any */ + if (state->rs_buffer) + { + smgr_bulk_write(state->rs_bulkstate, state->rs_blockno, state->rs_buffer, true); + state->rs_buffer = NULL; + } + + smgr_bulk_finish(state->rs_bulkstate); + + logical_end_tdeheap_rewrite(state); + + /* Deleting the context frees everything */ + MemoryContextDelete(state->rs_cxt); +} + +/* + * Add a tuple to the new heap. + * + * Visibility information is copied from the original tuple, except that + * we "freeze" very-old tuples. Note that since we scribble on new_tuple, + * it had better be temp storage not a pointer to the original tuple. + * + * state opaque state as returned by begin_tdeheap_rewrite + * old_tuple original tuple in the old heap + * new_tuple new, rewritten tuple to be inserted to new heap + */ +void +rewrite_tdeheap_tuple(RewriteState state, + HeapTuple old_tuple, HeapTuple new_tuple) +{ + MemoryContext old_cxt; + ItemPointerData old_tid; + TidHashKey hashkey; + bool found; + bool free_new; + + old_cxt = MemoryContextSwitchTo(state->rs_cxt); + + /* + * Copy the original tuple's visibility information into new_tuple. + * + * XXX we might later need to copy some t_infomask2 bits, too? Right now, + * we intentionally clear the HOT status bits. + */ + memcpy(&new_tuple->t_data->t_choice.t_heap, + &old_tuple->t_data->t_choice.t_heap, + sizeof(HeapTupleFields)); + + new_tuple->t_data->t_infomask &= ~HEAP_XACT_MASK; + new_tuple->t_data->t_infomask2 &= ~HEAP2_XACT_MASK; + new_tuple->t_data->t_infomask |= + old_tuple->t_data->t_infomask & HEAP_XACT_MASK; + + /* + * While we have our hands on the tuple, we may as well freeze any + * eligible xmin or xmax, so that future VACUUM effort can be saved. + */ + tdeheap_freeze_tuple(new_tuple->t_data, + state->rs_old_rel->rd_rel->relfrozenxid, + state->rs_old_rel->rd_rel->relminmxid, + state->rs_freeze_xid, + state->rs_cutoff_multi); + + /* + * Invalid ctid means that ctid should point to the tuple itself. We'll + * override it later if the tuple is part of an update chain. + */ + ItemPointerSetInvalid(&new_tuple->t_data->t_ctid); + + /* + * If the tuple has been updated, check the old-to-new mapping hash table. + */ + if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) || + HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) && + !HeapTupleHeaderIndicatesMovedPartitions(old_tuple->t_data) && + !(ItemPointerEquals(&(old_tuple->t_self), + &(old_tuple->t_data->t_ctid)))) + { + OldToNewMapping mapping; + + memset(&hashkey, 0, sizeof(hashkey)); + hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data); + hashkey.tid = old_tuple->t_data->t_ctid; + + mapping = (OldToNewMapping) + hash_search(state->rs_old_new_tid_map, &hashkey, + HASH_FIND, NULL); + + if (mapping != NULL) + { + /* + * We've already copied the tuple that t_ctid points to, so we can + * set the ctid of this tuple to point to the new location, and + * insert it right away. + */ + new_tuple->t_data->t_ctid = mapping->new_tid; + + /* We don't need the mapping entry anymore */ + hash_search(state->rs_old_new_tid_map, &hashkey, + HASH_REMOVE, &found); + Assert(found); + } + else + { + /* + * We haven't seen the tuple t_ctid points to yet. Stash this + * tuple into unresolved_tups to be written later. 
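+			 * The hash key built above is (xmax of this tuple, its t_ctid),
+			 * i.e. the expected xmin and old TID of the successor tuple B;
+			 * the B-side lookup later in this function uses the same key
+			 * shape.  (Note added for clarity.)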
+ */ + UnresolvedTup unresolved; + + unresolved = hash_search(state->rs_unresolved_tups, &hashkey, + HASH_ENTER, &found); + Assert(!found); + + unresolved->old_tid = old_tuple->t_self; + unresolved->tuple = tdeheap_copytuple(new_tuple); + + /* + * We can't do anything more now, since we don't know where the + * tuple will be written. + */ + MemoryContextSwitchTo(old_cxt); + return; + } + } + + /* + * Now we will write the tuple, and then check to see if it is the B tuple + * in any new or known pair. When we resolve a known pair, we will be + * able to write that pair's A tuple, and then we have to check if it + * resolves some other pair. Hence, we need a loop here. + */ + old_tid = old_tuple->t_self; + free_new = false; + + for (;;) + { + ItemPointerData new_tid; + + /* Insert the tuple and find out where it's put in new_heap */ + raw_tdeheap_insert(state, new_tuple); + new_tid = new_tuple->t_self; + + logical_rewrite_tdeheap_tuple(state, old_tid, new_tuple); + + /* + * If the tuple is the updated version of a row, and the prior version + * wouldn't be DEAD yet, then we need to either resolve the prior + * version (if it's waiting in rs_unresolved_tups), or make an entry + * in rs_old_new_tid_map (so we can resolve it when we do see it). The + * previous tuple's xmax would equal this one's xmin, so it's + * RECENTLY_DEAD if and only if the xmin is not before OldestXmin. + */ + if ((new_tuple->t_data->t_infomask & HEAP_UPDATED) && + !TransactionIdPrecedes(HeapTupleHeaderGetXmin(new_tuple->t_data), + state->rs_oldest_xmin)) + { + /* + * Okay, this is B in an update pair. See if we've seen A. + */ + UnresolvedTup unresolved; + + memset(&hashkey, 0, sizeof(hashkey)); + hashkey.xmin = HeapTupleHeaderGetXmin(new_tuple->t_data); + hashkey.tid = old_tid; + + unresolved = hash_search(state->rs_unresolved_tups, &hashkey, + HASH_FIND, NULL); + + if (unresolved != NULL) + { + /* + * We have seen and memorized the previous tuple already. Now + * that we know where we inserted the tuple its t_ctid points + * to, fix its t_ctid and insert it to the new heap. + */ + if (free_new) + tdeheap_freetuple(new_tuple); + new_tuple = unresolved->tuple; + free_new = true; + old_tid = unresolved->old_tid; + new_tuple->t_data->t_ctid = new_tid; + + /* + * We don't need the hash entry anymore, but don't free its + * tuple just yet. + */ + hash_search(state->rs_unresolved_tups, &hashkey, + HASH_REMOVE, &found); + Assert(found); + + /* loop back to insert the previous tuple in the chain */ + continue; + } + else + { + /* + * Remember the new tid of this tuple. We'll use it to set the + * ctid when we find the previous tuple in the chain. + */ + OldToNewMapping mapping; + + mapping = hash_search(state->rs_old_new_tid_map, &hashkey, + HASH_ENTER, &found); + Assert(!found); + + mapping->new_tid = new_tid; + } + } + + /* Done with this (chain of) tuples, for now */ + if (free_new) + tdeheap_freetuple(new_tuple); + break; + } + + MemoryContextSwitchTo(old_cxt); +} + +/* + * Register a dead tuple with an ongoing rewrite. Dead tuples are not + * copied to the new table, but we still make note of them so that we + * can release some resources earlier. + * + * Returns true if a tuple was removed from the unresolved_tups table. + * This indicates that that tuple, previously thought to be "recently dead", + * is now known really dead and won't be written to the output. 
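+ *
+ * (Editor's note, describing typical caller bookkeeping rather than this
+ * function: a caller such as CLUSTER's copy loop can use the return value
+ * to move one tuple from its "recently dead" count to its "vacuumed"
+ * count.)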
+ */ +bool +rewrite_tdeheap_dead_tuple(RewriteState state, HeapTuple old_tuple) +{ + /* + * If we have already seen an earlier tuple in the update chain that + * points to this tuple, let's forget about that earlier tuple. It's in + * fact dead as well, our simple xmax < OldestXmin test in + * HeapTupleSatisfiesVacuum just wasn't enough to detect it. It happens + * when xmin of a tuple is greater than xmax, which sounds + * counter-intuitive but is perfectly valid. + * + * We don't bother to try to detect the situation the other way round, + * when we encounter the dead tuple first and then the recently dead one + * that points to it. If that happens, we'll have some unmatched entries + * in the UnresolvedTups hash table at the end. That can happen anyway, + * because a vacuum might have removed the dead tuple in the chain before + * us. + */ + UnresolvedTup unresolved; + TidHashKey hashkey; + bool found; + + memset(&hashkey, 0, sizeof(hashkey)); + hashkey.xmin = HeapTupleHeaderGetXmin(old_tuple->t_data); + hashkey.tid = old_tuple->t_self; + + unresolved = hash_search(state->rs_unresolved_tups, &hashkey, + HASH_FIND, NULL); + + if (unresolved != NULL) + { + /* Need to free the contained tuple as well as the hashtable entry */ + tdeheap_freetuple(unresolved->tuple); + hash_search(state->rs_unresolved_tups, &hashkey, + HASH_REMOVE, &found); + Assert(found); + return true; + } + + return false; +} + +/* + * Insert a tuple to the new relation. This has to track tdeheap_insert + * and its subsidiary functions! + * + * t_self of the tuple is set to the new TID of the tuple. If t_ctid of the + * tuple is invalid on entry, it's replaced with the new TID as well (in + * the inserted data only, not in the caller's copy). + */ +static void +raw_tdeheap_insert(RewriteState state, HeapTuple tup) +{ + Page page; + Size pageFreeSpace, + saveFreeSpace; + Size len; + OffsetNumber newoff; + HeapTuple heaptup; + + /* + * If the new tuple is too big for storage or contains already toasted + * out-of-line attributes from some other relation, invoke the toaster. + * + * Note: below this point, heaptup is the data we actually intend to store + * into the relation; tup is the caller's original untoasted data. + */ + if (state->rs_new_rel->rd_rel->relkind == RELKIND_TOASTVALUE) + { + /* toast table entries should never be recursively toasted */ + Assert(!HeapTupleHasExternal(tup)); + heaptup = tup; + } + else if (HeapTupleHasExternal(tup) || tup->t_len > TOAST_TUPLE_THRESHOLD) + { + int options = HEAP_INSERT_SKIP_FSM; + + /* + * While rewriting the heap for VACUUM FULL / CLUSTER, make sure data + * for the TOAST table are not logically decoded. The main heap is + * WAL-logged as XLOG FPI records, which are not logically decoded. + */ + options |= HEAP_INSERT_NO_LOGICAL; + + heaptup = tdeheap_toast_insert_or_update(state->rs_new_rel, tup, NULL, + options); + } + else + heaptup = tup; + + len = MAXALIGN(heaptup->t_len); /* be conservative */ + + /* + * If we're gonna fail for oversize tuple, do it right away + */ + if (len > MaxHeapTupleSize) + ereport(ERROR, + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), + errmsg("row is too big: size %zu, maximum size %zu", + len, MaxHeapTupleSize))); + + /* Compute desired extra freespace due to fillfactor option */ + saveFreeSpace = RelationGetTargetPageFreeSpace(state->rs_new_rel, + HEAP_DEFAULT_FILLFACTOR); + + /* Now we can check to see if there's enough free space already. 
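+	 *
+	 * As a rough illustration (numbers assumed, not taken from this code):
+	 * with BLCKSZ = 8192 and fillfactor = 90, saveFreeSpace comes out near
+	 * 819 bytes, so the current page is flushed once len + 819 exceeds the
+	 * page's remaining free space.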
*/ + page = (Page) state->rs_buffer; + if (page) + { + pageFreeSpace = PageGetHeapFreeSpace(page); + + if (len + saveFreeSpace > pageFreeSpace) + { + /* + * Doesn't fit, so write out the existing page. It always + * contains a tuple. Hence, unlike tdeheap_RelationGetBufferForTuple(), + * enforce saveFreeSpace unconditionally. + */ + smgr_bulk_write(state->rs_bulkstate, state->rs_blockno, state->rs_buffer, true); + state->rs_buffer = NULL; + page = NULL; + state->rs_blockno++; + } + } + + if (!page) + { + /* Initialize a new empty page */ + state->rs_buffer = smgr_bulk_get_buf(state->rs_bulkstate); + page = (Page) state->rs_buffer; + PageInit(page, BLCKSZ, 0); + } + + /* And now we can insert the tuple into the page */ + newoff = TDE_PageAddItem(state->rs_new_rel->rd_locator, state->rs_blockno, page, (Item) heaptup->t_data, heaptup->t_len, + InvalidOffsetNumber, false, true); + if (newoff == InvalidOffsetNumber) + elog(ERROR, "failed to add tuple"); + + /* Update caller's t_self to the actual position where it was stored */ + ItemPointerSet(&(tup->t_self), state->rs_blockno, newoff); + + /* + * Insert the correct position into CTID of the stored tuple, too, if the + * caller didn't supply a valid CTID. + */ + if (!ItemPointerIsValid(&tup->t_data->t_ctid)) + { + ItemId newitemid; + HeapTupleHeader onpage_tup; + + newitemid = PageGetItemId(page, newoff); + onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid); + + onpage_tup->t_ctid = tup->t_self; + } + + /* If heaptup is a private copy, release it. */ + if (heaptup != tup) + tdeheap_freetuple(heaptup); +} + +/* ------------------------------------------------------------------------ + * Logical rewrite support + * + * When doing logical decoding - which relies on using cmin/cmax of catalog + * tuples, via xl_tdeheap_new_cid records - heap rewrites have to log enough + * information to allow the decoding backend to update its internal mapping + * of (relfilelocator,ctid) => (cmin, cmax) to be correct for the rewritten heap. + * + * For that, every time we find a tuple that's been modified in a catalog + * relation within the xmin horizon of any decoding slot, we log a mapping + * from the old to the new location. + * + * To deal with rewrites that abort the filename of a mapping file contains + * the xid of the transaction performing the rewrite, which then can be + * checked before being read in. + * + * For efficiency we don't immediately spill every single map mapping for a + * row to disk but only do so in batches when we've collected several of them + * in memory or when end_tdeheap_rewrite() has been called. + * + * Crash-Safety: This module diverts from the usual patterns of doing WAL + * since it cannot rely on checkpoint flushing out all buffers and thus + * waiting for exclusive locks on buffers. Usually the XLogInsert() covering + * buffer modifications is performed while the buffer(s) that are being + * modified are exclusively locked guaranteeing that both the WAL record and + * the modified heap are on either side of the checkpoint. But since the + * mapping files we log aren't in shared_buffers that interlock doesn't work. + * + * Instead we simply write the mapping files out to disk, *before* the + * XLogInsert() is performed. That guarantees that either the XLogInsert() is + * inserted after the checkpoint's redo pointer or that the checkpoint (via + * CheckPointLogicalRewriteHeap()) has flushed the (partial) mapping file to + * disk. 
That leaves the tail end that has not yet been flushed open to + * corruption, which is solved by including the current offset in the + * xl_tdeheap_rewrite_mapping records and truncating the mapping file to it + * during replay. Every time a rewrite is finished all generated mapping files + * are synced to disk. + * + * Note that if we were only concerned about crash safety we wouldn't have to + * deal with WAL logging at all - an fsync() at the end of a rewrite would be + * sufficient for crash safety. Any mapping that hasn't been safely flushed to + * disk has to be by an aborted (explicitly or via a crash) transaction and is + * ignored by virtue of the xid in its name being subject to a + * TransactionDidCommit() check. But we want to support having standbys via + * physical replication, both for availability and to do logical decoding + * there. + * ------------------------------------------------------------------------ + */ + +/* + * Do preparations for logging logical mappings during a rewrite if + * necessary. If we detect that we don't need to log anything we'll prevent + * any further action by the various logical rewrite functions. + */ +static void +logical_begin_tdeheap_rewrite(RewriteState state) +{ + HASHCTL hash_ctl; + TransactionId logical_xmin; + + /* + * We only need to persist these mappings if the rewritten table can be + * accessed during logical decoding, if not, we can skip doing any + * additional work. + */ + state->rs_logical_rewrite = + RelationIsAccessibleInLogicalDecoding(state->rs_old_rel); + + if (!state->rs_logical_rewrite) + return; + + ProcArrayGetReplicationSlotXmin(NULL, &logical_xmin); + + /* + * If there are no logical slots in progress we don't need to do anything, + * there cannot be any remappings for relevant rows yet. The relation's + * lock protects us against races. + */ + if (logical_xmin == InvalidTransactionId) + { + state->rs_logical_rewrite = false; + return; + } + + state->rs_logical_xmin = logical_xmin; + state->rs_begin_lsn = GetXLogInsertRecPtr(); + state->rs_num_rewrite_mappings = 0; + + hash_ctl.keysize = sizeof(TransactionId); + hash_ctl.entrysize = sizeof(RewriteMappingFile); + hash_ctl.hcxt = state->rs_cxt; + + state->rs_logical_mappings = + hash_create("Logical rewrite mapping", + 128, /* arbitrary initial size */ + &hash_ctl, + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); +} + +/* + * Flush all logical in-memory mappings to disk, but don't fsync them yet. 
+ */ +static void +logical_tdeheap_rewrite_flush_mappings(RewriteState state) +{ + HASH_SEQ_STATUS seq_status; + RewriteMappingFile *src; + dlist_mutable_iter iter; + + Assert(state->rs_logical_rewrite); + + /* no logical rewrite in progress, no need to iterate over mappings */ + if (state->rs_num_rewrite_mappings == 0) + return; + + elog(DEBUG1, "flushing %u logical rewrite mapping entries", + state->rs_num_rewrite_mappings); + + hash_seq_init(&seq_status, state->rs_logical_mappings); + while ((src = (RewriteMappingFile *) hash_seq_search(&seq_status)) != NULL) + { + char *waldata; + char *waldata_start; + xl_tdeheap_rewrite_mapping xlrec; + Oid dboid; + uint32 len; + int written; + uint32 num_mappings = dclist_count(&src->mappings); + + /* this file hasn't got any new mappings */ + if (num_mappings == 0) + continue; + + if (state->rs_old_rel->rd_rel->relisshared) + dboid = InvalidOid; + else + dboid = MyDatabaseId; + + xlrec.num_mappings = num_mappings; + xlrec.mapped_rel = RelationGetRelid(state->rs_old_rel); + xlrec.mapped_xid = src->xid; + xlrec.mapped_db = dboid; + xlrec.offset = src->off; + xlrec.start_lsn = state->rs_begin_lsn; + + /* write all mappings consecutively */ + len = num_mappings * sizeof(LogicalRewriteMappingData); + waldata_start = waldata = palloc(len); + + /* + * collect data we need to write out, but don't modify ondisk data yet + */ + dclist_foreach_modify(iter, &src->mappings) + { + RewriteMappingDataEntry *pmap; + + pmap = dclist_container(RewriteMappingDataEntry, node, iter.cur); + + memcpy(waldata, &pmap->map, sizeof(pmap->map)); + waldata += sizeof(pmap->map); + + /* remove from the list and free */ + dclist_delete_from(&src->mappings, &pmap->node); + pfree(pmap); + + /* update bookkeeping */ + state->rs_num_rewrite_mappings--; + } + + Assert(dclist_count(&src->mappings) == 0); + Assert(waldata == waldata_start + len); + + /* + * Note that we deviate from the usual WAL coding practices here, + * check the above "Logical rewrite support" comment for reasoning. + */ + written = FileWrite(src->vfd, waldata_start, len, src->off, + WAIT_EVENT_LOGICAL_REWRITE_WRITE); + if (written != len) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not write to file \"%s\", wrote %d of %d: %m", src->path, + written, len))); + src->off += len; + + XLogBeginInsert(); + XLogRegisterData((char *) (&xlrec), sizeof(xlrec)); + XLogRegisterData(waldata_start, len); + + /* write xlog record */ + XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_REWRITE); + + pfree(waldata_start); + } + Assert(state->rs_num_rewrite_mappings == 0); +} + +/* + * Logical remapping part of end_tdeheap_rewrite(). + */ +static void +logical_end_tdeheap_rewrite(RewriteState state) +{ + HASH_SEQ_STATUS seq_status; + RewriteMappingFile *src; + + /* done, no logical rewrite in progress */ + if (!state->rs_logical_rewrite) + return; + + /* writeout remaining in-memory entries */ + if (state->rs_num_rewrite_mappings > 0) + logical_tdeheap_rewrite_flush_mappings(state); + + /* Iterate over all mappings we have written and fsync the files. */ + hash_seq_init(&seq_status, state->rs_logical_mappings); + while ((src = (RewriteMappingFile *) hash_seq_search(&seq_status)) != NULL) + { + if (FileSync(src->vfd, WAIT_EVENT_LOGICAL_REWRITE_SYNC) != 0) + ereport(data_sync_elevel(ERROR), + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", src->path))); + FileClose(src->vfd); + } + /* memory context cleanup will deal with the rest */ +} + +/* + * Log a single (old->new) mapping for 'xid'. 
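+ *
+ * (Editor's note: mappings are buffered per xid and spilled once roughly
+ * 1000 entries have accumulated across all mapping files; the file name
+ * encodes dboid, relid, the rewrite's start LSN, the mapped xid and the
+ * creating xid, as built below.)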
+ */ +static void +logical_rewrite_log_mapping(RewriteState state, TransactionId xid, + LogicalRewriteMappingData *map) +{ + RewriteMappingFile *src; + RewriteMappingDataEntry *pmap; + Oid relid; + bool found; + + relid = RelationGetRelid(state->rs_old_rel); + + /* look for existing mappings for this 'mapped' xid */ + src = hash_search(state->rs_logical_mappings, &xid, + HASH_ENTER, &found); + + /* + * We haven't yet had the need to map anything for this xid, create + * per-xid data structures. + */ + if (!found) + { + char path[MAXPGPATH]; + Oid dboid; + + if (state->rs_old_rel->rd_rel->relisshared) + dboid = InvalidOid; + else + dboid = MyDatabaseId; + + snprintf(path, MAXPGPATH, + "pg_logical/mappings/" LOGICAL_REWRITE_FORMAT, + dboid, relid, + LSN_FORMAT_ARGS(state->rs_begin_lsn), + xid, GetCurrentTransactionId()); + + dclist_init(&src->mappings); + src->off = 0; + memcpy(src->path, path, sizeof(path)); + src->vfd = PathNameOpenFile(path, + O_CREAT | O_EXCL | O_WRONLY | PG_BINARY); + if (src->vfd < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not create file \"%s\": %m", path))); + } + + pmap = MemoryContextAlloc(state->rs_cxt, + sizeof(RewriteMappingDataEntry)); + memcpy(&pmap->map, map, sizeof(LogicalRewriteMappingData)); + dclist_push_tail(&src->mappings, &pmap->node); + state->rs_num_rewrite_mappings++; + + /* + * Write out buffer every time we've too many in-memory entries across all + * mapping files. + */ + if (state->rs_num_rewrite_mappings >= 1000 /* arbitrary number */ ) + logical_tdeheap_rewrite_flush_mappings(state); +} + +/* + * Perform logical remapping for a tuple that's mapped from old_tid to + * new_tuple->t_self by rewrite_tdeheap_tuple() if necessary for the tuple. + */ +static void +logical_rewrite_tdeheap_tuple(RewriteState state, ItemPointerData old_tid, + HeapTuple new_tuple) +{ + ItemPointerData new_tid = new_tuple->t_self; + TransactionId cutoff = state->rs_logical_xmin; + TransactionId xmin; + TransactionId xmax; + bool do_log_xmin = false; + bool do_log_xmax = false; + LogicalRewriteMappingData map; + + /* no logical rewrite in progress, we don't need to log anything */ + if (!state->rs_logical_rewrite) + return; + + xmin = HeapTupleHeaderGetXmin(new_tuple->t_data); + /* use *GetUpdateXid to correctly deal with multixacts */ + xmax = HeapTupleHeaderGetUpdateXid(new_tuple->t_data); + + /* + * Log the mapping iff the tuple has been created recently. + */ + if (TransactionIdIsNormal(xmin) && !TransactionIdPrecedes(xmin, cutoff)) + do_log_xmin = true; + + if (!TransactionIdIsNormal(xmax)) + { + /* + * no xmax is set, can't have any permanent ones, so this check is + * sufficient + */ + } + else if (HEAP_XMAX_IS_LOCKED_ONLY(new_tuple->t_data->t_infomask)) + { + /* only locked, we don't care */ + } + else if (!TransactionIdPrecedes(xmax, cutoff)) + { + /* tuple has been deleted recently, log */ + do_log_xmax = true; + } + + /* if neither needs to be logged, we're done */ + if (!do_log_xmin && !do_log_xmax) + return; + + /* fill out mapping information */ + map.old_locator = state->rs_old_rel->rd_locator; + map.old_tid = old_tid; + map.new_locator = state->rs_new_rel->rd_locator; + map.new_tid = new_tid; + + /* --- + * Now persist the mapping for the individual xids that are affected. We + * need to log for both xmin and xmax if they aren't the same transaction + * since the mapping files are per "affected" xid. 
+ * We don't muster all that much effort detecting whether xmin and xmax + * are actually the same transaction, we just check whether the xid is the + * same disregarding subtransactions. Logging too much is relatively + * harmless and we could never do the check fully since subtransaction + * data is thrown away during restarts. + * --- + */ + if (do_log_xmin) + logical_rewrite_log_mapping(state, xmin, &map); + /* separately log mapping for xmax unless it'd be redundant */ + if (do_log_xmax && !TransactionIdEquals(xmin, xmax)) + logical_rewrite_log_mapping(state, xmax, &map); +} + +/* + * Replay XLOG_HEAP2_REWRITE records + */ +void +tdeheap_xlog_logical_rewrite(XLogReaderState *r) +{ + char path[MAXPGPATH]; + int fd; + xl_tdeheap_rewrite_mapping *xlrec; + uint32 len; + char *data; + + xlrec = (xl_tdeheap_rewrite_mapping *) XLogRecGetData(r); + + snprintf(path, MAXPGPATH, + "pg_logical/mappings/" LOGICAL_REWRITE_FORMAT, + xlrec->mapped_db, xlrec->mapped_rel, + LSN_FORMAT_ARGS(xlrec->start_lsn), + xlrec->mapped_xid, XLogRecGetXid(r)); + + fd = OpenTransientFile(path, + O_CREAT | O_WRONLY | PG_BINARY); + if (fd < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not create file \"%s\": %m", path))); + + /* + * Truncate all data that's not guaranteed to have been safely fsynced (by + * previous record or by the last checkpoint). + */ + pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_TRUNCATE); + if (ftruncate(fd, xlrec->offset) != 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not truncate file \"%s\" to %u: %m", + path, (uint32) xlrec->offset))); + pgstat_report_wait_end(); + + data = XLogRecGetData(r) + sizeof(*xlrec); + + len = xlrec->num_mappings * sizeof(LogicalRewriteMappingData); + + /* write out tail end of mapping file (again) */ + errno = 0; + pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_MAPPING_WRITE); + if (pg_pwrite(fd, data, len, xlrec->offset) != len) + { + /* if write didn't set errno, assume problem is no disk space */ + if (errno == 0) + errno = ENOSPC; + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not write to file \"%s\": %m", path))); + } + pgstat_report_wait_end(); + + /* + * Now fsync all previously written data. We could improve things and only + * do this for the last write to a file, but the required bookkeeping + * doesn't seem worth the trouble. + */ + pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_MAPPING_SYNC); + if (pg_fsync(fd) != 0) + ereport(data_sync_elevel(ERROR), + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", path))); + pgstat_report_wait_end(); + + if (CloseTransientFile(fd) != 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not close file \"%s\": %m", path))); +} + +/* --- + * Perform a checkpoint for logical rewrite mappings + * + * This serves two tasks: + * 1) Remove all mappings not needed anymore based on the logical restart LSN + * 2) Flush all remaining mappings to disk, so that replay after a checkpoint + * only has to deal with the parts of a mapping that have been written out + * after the checkpoint started. + * --- + */ +void +CheckPointLogicalRewriteHeap(void) +{ + XLogRecPtr cutoff; + XLogRecPtr redo; + DIR *mappings_dir; + struct dirent *mapping_de; + char path[MAXPGPATH + 20]; + + /* + * We start of with a minimum of the last redo pointer. No new decoding + * slot will start before that, so that's a safe upper bound for removal. 
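+	 * In effect the cutoff is Min(redo pointer, logical restart LSN): any
+	 * mapping file whose start LSN falls below it is unlinked below, every
+	 * remaining file is fsynced, and if no slot needs mappings at all the
+	 * whole directory is removable.  (Summary added for clarity.)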
+ */ + redo = GetRedoRecPtr(); + + /* now check for the restart ptrs from existing slots */ + cutoff = ReplicationSlotsComputeLogicalRestartLSN(); + + /* don't start earlier than the restart lsn */ + if (cutoff != InvalidXLogRecPtr && redo < cutoff) + cutoff = redo; + + mappings_dir = AllocateDir("pg_logical/mappings"); + while ((mapping_de = ReadDir(mappings_dir, "pg_logical/mappings")) != NULL) + { + Oid dboid; + Oid relid; + XLogRecPtr lsn; + TransactionId rewrite_xid; + TransactionId create_xid; + uint32 hi, + lo; + PGFileType de_type; + + if (strcmp(mapping_de->d_name, ".") == 0 || + strcmp(mapping_de->d_name, "..") == 0) + continue; + + snprintf(path, sizeof(path), "pg_logical/mappings/%s", mapping_de->d_name); + de_type = get_dirent_type(path, mapping_de, false, DEBUG1); + + if (de_type != PGFILETYPE_ERROR && de_type != PGFILETYPE_REG) + continue; + + /* Skip over files that cannot be ours. */ + if (strncmp(mapping_de->d_name, "map-", 4) != 0) + continue; + + if (sscanf(mapping_de->d_name, LOGICAL_REWRITE_FORMAT, + &dboid, &relid, &hi, &lo, &rewrite_xid, &create_xid) != 6) + elog(ERROR, "could not parse filename \"%s\"", mapping_de->d_name); + + lsn = ((uint64) hi) << 32 | lo; + + if (lsn < cutoff || cutoff == InvalidXLogRecPtr) + { + elog(DEBUG1, "removing logical rewrite file \"%s\"", path); + if (unlink(path) < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not remove file \"%s\": %m", path))); + } + else + { + /* on some operating systems fsyncing a file requires O_RDWR */ + int fd = OpenTransientFile(path, O_RDWR | PG_BINARY); + + /* + * The file cannot vanish due to concurrency since this function + * is the only one removing logical mappings and only one + * checkpoint can be in progress at a time. + */ + if (fd < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not open file \"%s\": %m", path))); + + /* + * We could try to avoid fsyncing files that either haven't + * changed or have only been created since the checkpoint's start, + * but it's currently not deemed worth the effort. + */ + pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_CHECKPOINT_SYNC); + if (pg_fsync(fd) != 0) + ereport(data_sync_elevel(ERROR), + (errcode_for_file_access(), + errmsg("could not fsync file \"%s\": %m", path))); + pgstat_report_wait_end(); + + if (CloseTransientFile(fd) != 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not close file \"%s\": %m", path))); + } + } + FreeDir(mappings_dir); + + /* persist directory entries to disk */ + fsync_fname("pg_logical/mappings", true); +} diff --git a/contrib/pg_tde/src17/access/pg_tde_vacuumlazy.c b/contrib/pg_tde/src17/access/pg_tde_vacuumlazy.c new file mode 100644 index 00000000000..414ef0cb617 --- /dev/null +++ b/contrib/pg_tde/src17/access/pg_tde_vacuumlazy.c @@ -0,0 +1,3195 @@ +/*------------------------------------------------------------------------- + * + * vacuumlazy.c + * Concurrent ("lazy") vacuuming. + * + * The major space usage for vacuuming is storage for the dead tuple IDs that + * are to be removed from indexes. We want to ensure we can vacuum even the + * very largest relations with finite memory space usage. To do that, we set + * upper bounds on the memory that can be used for keeping track of dead TIDs + * at once. + * + * We are willing to use at most maintenance_work_mem (or perhaps + * autovacuum_work_mem) memory space to keep track of dead TIDs. If the + * TID store is full, we must call lazy_vacuum to vacuum indexes (and to vacuum + * the pages that we've pruned). 
This frees up the memory space dedicated to + * store dead TIDs. + * + * In practice VACUUM will often complete its initial pass over the target + * pg_tde relation without ever running out of space to store TIDs. This means + * that there only needs to be one call to lazy_vacuum, after the initial pass + * completes. + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/pg_tde/vacuumlazy.c + * + *------------------------------------------------------------------------- + */ +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include + +#include "access/pg_tdeam.h" +#include "access/pg_tdeam_xlog.h" +#include "access/pg_tde_visibilitymap.h" +#include "encryption/enc_tde.h" +#include "access/genam.h" +#include "access/htup_details.h" +#include "access/multixact.h" +#include "access/tidstore.h" +#include "access/transam.h" +#include "access/xloginsert.h" +#include "catalog/storage.h" +#include "commands/dbcommands.h" +#include "commands/progress.h" +#include "commands/vacuum.h" +#include "common/int.h" +#include "executor/instrument.h" +#include "miscadmin.h" +#include "pgstat.h" +#include "portability/instr_time.h" +#include "postmaster/autovacuum.h" +#include "storage/bufmgr.h" +#include "storage/freespace.h" +#include "storage/lmgr.h" +#include "utils/lsyscache.h" +#include "utils/memutils.h" +#include "utils/pg_rusage.h" +#include "utils/timestamp.h" + + +/* + * Space/time tradeoff parameters: do these need to be user-tunable? + * + * To consider truncating the relation, we want there to be at least + * REL_TRUNCATE_MINIMUM or (relsize / REL_TRUNCATE_FRACTION) (whichever + * is less) potentially-freeable pages. + */ +#define REL_TRUNCATE_MINIMUM 1000 +#define REL_TRUNCATE_FRACTION 16 + +/* + * Timing parameters for truncate locking heuristics. + * + * These were not exposed as user tunable GUC values because it didn't seem + * that the potential for improvement was great enough to merit the cost of + * supporting them. + */ +#define VACUUM_TRUNCATE_LOCK_CHECK_INTERVAL 20 /* ms */ +#define VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL 50 /* ms */ +#define VACUUM_TRUNCATE_LOCK_TIMEOUT 5000 /* ms */ + +/* + * Threshold that controls whether we bypass index vacuuming and heap + * vacuuming as an optimization + */ +#define BYPASS_THRESHOLD_PAGES 0.02 /* i.e. 2% of rel_pages */ + +/* + * Perform a failsafe check each time we scan another 4GB of pages. + * (Note that this is deliberately kept to a power-of-two, usually 2^19.) + */ +#define FAILSAFE_EVERY_PAGES \ + ((BlockNumber) (((uint64) 4 * 1024 * 1024 * 1024) / BLCKSZ)) + +/* + * When a table has no indexes, vacuum the FSM after every 8GB, approximately + * (it won't be exact because we only vacuum FSM after processing a heap page + * that has some removable tuples). When there are indexes, this is ignored, + * and we vacuum FSM after each index/heap cleaning pass. + */ +#define VACUUM_FSM_EVERY_PAGES \ + ((BlockNumber) (((uint64) 8 * 1024 * 1024 * 1024) / BLCKSZ)) + +/* + * Before we consider skipping a page that's marked as clean in + * visibility map, we must've seen at least this many clean pages. + */ +#define SKIP_PAGES_THRESHOLD ((BlockNumber) 32) + +/* + * Size of the prefetch window for lazy vacuum backwards truncation scan. + * Needs to be a power of 2. + */ +#define PREFETCH_SIZE ((BlockNumber) 32) + +/* + * Macro to check if we are in a parallel vacuum. 
If true, we are in the + * parallel mode and the DSM segment is initialized. + */ +#define ParallelVacuumIsActive(vacrel) ((vacrel)->pvs != NULL) + +/* Phases of vacuum during which we report error context. */ +typedef enum +{ + VACUUM_ERRCB_PHASE_UNKNOWN, + VACUUM_ERRCB_PHASE_SCAN_HEAP, + VACUUM_ERRCB_PHASE_VACUUM_INDEX, + VACUUM_ERRCB_PHASE_VACUUM_HEAP, + VACUUM_ERRCB_PHASE_INDEX_CLEANUP, + VACUUM_ERRCB_PHASE_TRUNCATE, +} VacErrPhase; + +typedef struct LVRelState +{ + /* Target heap relation and its indexes */ + Relation rel; + Relation *indrels; + int nindexes; + + /* Buffer access strategy and parallel vacuum state */ + BufferAccessStrategy bstrategy; + ParallelVacuumState *pvs; + + /* Aggressive VACUUM? (must set relfrozenxid >= FreezeLimit) */ + bool aggressive; + /* Use visibility map to skip? (disabled by DISABLE_PAGE_SKIPPING) */ + bool skipwithvm; + /* Consider index vacuuming bypass optimization? */ + bool consider_bypass_optimization; + + /* Doing index vacuuming, index cleanup, rel truncation? */ + bool do_index_vacuuming; + bool do_index_cleanup; + bool do_rel_truncate; + + /* VACUUM operation's cutoffs for freezing and pruning */ + struct VacuumCutoffs cutoffs; + GlobalVisState *vistest; + /* Tracks oldest extant XID/MXID for setting relfrozenxid/relminmxid */ + TransactionId NewRelfrozenXid; + MultiXactId NewRelminMxid; + bool skippedallvis; + + /* Error reporting state */ + char *dbname; + char *relnamespace; + char *relname; + char *indname; /* Current index name */ + BlockNumber blkno; /* used only for heap operations */ + OffsetNumber offnum; /* used only for heap operations */ + VacErrPhase phase; + bool verbose; /* VACUUM VERBOSE? */ + + /* + * dead_items stores TIDs whose index tuples are deleted by index + * vacuuming. Each TID points to an LP_DEAD line pointer from a heap page + * that has been processed by lazy_scan_prune. Also needed by + * lazy_vacuum_tdeheap_rel, which marks the same LP_DEAD line pointers as + * LP_UNUSED during second heap pass. + * + * Both dead_items and dead_items_info are allocated in shared memory in + * parallel vacuum cases. 
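+ * (Stated as an assumption about the surrounding code rather than a + * guarantee: in a parallel VACUUM the TID store is placed in dynamic shared + * memory so that parallel workers can consult it while running + * ambulkdelete; in a serial VACUUM it is ordinary backend-local memory.)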
+ */ + TidStore *dead_items; /* TIDs whose index tuples we'll delete */ + VacDeadItemsInfo *dead_items_info; + + BlockNumber rel_pages; /* total number of pages */ + BlockNumber scanned_pages; /* # pages examined (not skipped via VM) */ + BlockNumber removed_pages; /* # pages removed by relation truncation */ + BlockNumber frozen_pages; /* # pages with newly frozen tuples */ + BlockNumber lpdead_item_pages; /* # pages with LP_DEAD items */ + BlockNumber missed_dead_pages; /* # pages with missed dead tuples */ + BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */ + + /* Statistics output by us, for table */ + double new_rel_tuples; /* new estimated total # of tuples */ + double new_live_tuples; /* new estimated total # of live tuples */ + /* Statistics output by index AMs */ + IndexBulkDeleteResult **indstats; + + /* Instrumentation counters */ + int num_index_scans; + /* Counters that follow are only for scanned_pages */ + int64 tuples_deleted; /* # deleted from table */ + int64 tuples_frozen; /* # newly frozen */ + int64 lpdead_items; /* # deleted from indexes */ + int64 live_tuples; /* # live tuples remaining */ + int64 recently_dead_tuples; /* # dead, but not yet removable */ + int64 missed_dead_tuples; /* # removable, but not removed */ + + /* State maintained by tdeheap_vac_scan_next_block() */ + BlockNumber current_block; /* last block returned */ + BlockNumber next_unskippable_block; /* next unskippable block */ + bool next_unskippable_allvis; /* its visibility status */ + Buffer next_unskippable_vmbuffer; /* buffer containing its VM bit */ +} LVRelState; + +/* Struct for saving and restoring vacuum error information. */ +typedef struct LVSavedErrInfo +{ + BlockNumber blkno; + OffsetNumber offnum; + VacErrPhase phase; +} LVSavedErrInfo; + + +/* non-export function prototypes */ +static void lazy_scan_heap(LVRelState *vacrel); +static bool tdeheap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno, + bool *all_visible_according_to_vm); +static void find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis); +static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, + BlockNumber blkno, Page page, + bool sharelock, Buffer vmbuffer); +static void lazy_scan_prune(LVRelState *vacrel, Buffer buf, + BlockNumber blkno, Page page, + Buffer vmbuffer, bool all_visible_according_to_vm, + bool *has_lpdead_items); +static bool lazy_scan_noprune(LVRelState *vacrel, Buffer buf, + BlockNumber blkno, Page page, + bool *has_lpdead_items); +static void lazy_vacuum(LVRelState *vacrel); +static bool lazy_vacuum_all_indexes(LVRelState *vacrel); +static void lazy_vacuum_tdeheap_rel(LVRelState *vacrel); +static void lazy_vacuum_tdeheap_page(LVRelState *vacrel, BlockNumber blkno, + Buffer buffer, OffsetNumber *deadoffsets, + int num_offsets, Buffer vmbuffer); +static bool lazy_check_wraparound_failsafe(LVRelState *vacrel); +static void lazy_cleanup_all_indexes(LVRelState *vacrel); +static IndexBulkDeleteResult *lazy_vacuum_one_index(Relation indrel, + IndexBulkDeleteResult *istat, + double reltuples, + LVRelState *vacrel); +static IndexBulkDeleteResult *lazy_cleanup_one_index(Relation indrel, + IndexBulkDeleteResult *istat, + double reltuples, + bool estimated_count, + LVRelState *vacrel); +static bool should_attempt_truncation(LVRelState *vacrel); +static void lazy_truncate_heap(LVRelState *vacrel); +static BlockNumber count_nondeletable_pages(LVRelState *vacrel, + bool *lock_waiter_detected); +static void dead_items_alloc(LVRelState *vacrel, int nworkers); +static 
void dead_items_add(LVRelState *vacrel, BlockNumber blkno, OffsetNumber *offsets, + int num_offsets); +static void dead_items_reset(LVRelState *vacrel); +static void dead_items_cleanup(LVRelState *vacrel); +static bool tdeheap_page_is_all_visible(LVRelState *vacrel, Buffer buf, + TransactionId *visibility_cutoff_xid, bool *all_frozen); +static void update_relstats_all_indexes(LVRelState *vacrel); +static void vacuum_error_callback(void *arg); +static void update_vacuum_error_info(LVRelState *vacrel, + LVSavedErrInfo *saved_vacrel, + int phase, BlockNumber blkno, + OffsetNumber offnum); +static void restore_vacuum_error_info(LVRelState *vacrel, + const LVSavedErrInfo *saved_vacrel); + + +/* + * tdeheap_vacuum_rel() -- perform VACUUM for one heap relation + * + * This routine sets things up for and then calls lazy_scan_heap, where + * almost all work actually takes place. Finalizes everything after the call + * returns by managing relation truncation and updating rel's pg_class + * entry. (Also updates pg_class entries for any indexes that need it.) + * + * At entry, we have already established a transaction and opened + * and locked the relation. + */ +void +tdeheap_vacuum_rel(Relation rel, VacuumParams *params, + BufferAccessStrategy bstrategy) +{ + LVRelState *vacrel; + bool verbose, + instrument, + skipwithvm, + frozenxid_updated, + minmulti_updated; + BlockNumber orig_rel_pages, + new_rel_pages, + new_rel_allvisible; + PGRUsage ru0; + TimestampTz starttime = 0; + PgStat_Counter startreadtime = 0, + startwritetime = 0; + WalUsage startwalusage = pgWalUsage; + BufferUsage startbufferusage = pgBufferUsage; + ErrorContextCallback errcallback; + char **indnames = NULL; + + verbose = (params->options & VACOPT_VERBOSE) != 0; + instrument = (verbose || (AmAutoVacuumWorkerProcess() && + params->log_min_duration >= 0)); + if (instrument) + { + pg_rusage_init(&ru0); + starttime = GetCurrentTimestamp(); + if (track_io_timing) + { + startreadtime = pgStatBlockReadTime; + startwritetime = pgStatBlockWriteTime; + } + } + + pgstat_progress_start_command(PROGRESS_COMMAND_VACUUM, + RelationGetRelid(rel)); + + /* + * Setup error traceback support for ereport() first. The idea is to set + * up an error context callback to display additional information on any + * error during a vacuum. During different phases of vacuum, we update + * the state so that the error context callback always displays current + * information. + * + * Copy the names of the heap rel into local memory for error reporting + * purposes, too. It isn't always safe to assume that we can get the name + * of each rel. It's convenient for code in lazy_scan_heap to always use + * these temp copies.
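+ * + * (For illustration only -- the exact wording lives in + * vacuum_error_callback, and the relation name here is invented: an ERROR + * raised while scanning block 12 would gain a context line roughly of the + * form "while scanning block 12 of relation "public.accounts"", built + * from these copied names plus vacrel->blkno.)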
+ */ + vacrel = (LVRelState *) palloc0(sizeof(LVRelState)); + vacrel->dbname = get_database_name(MyDatabaseId); + vacrel->relnamespace = get_namespace_name(RelationGetNamespace(rel)); + vacrel->relname = pstrdup(RelationGetRelationName(rel)); + vacrel->indname = NULL; + vacrel->phase = VACUUM_ERRCB_PHASE_UNKNOWN; + vacrel->verbose = verbose; + errcallback.callback = vacuum_error_callback; + errcallback.arg = vacrel; + errcallback.previous = error_context_stack; + error_context_stack = &errcallback; + + /* Set up high level stuff about rel and its indexes */ + vacrel->rel = rel; + vac_open_indexes(vacrel->rel, RowExclusiveLock, &vacrel->nindexes, + &vacrel->indrels); + vacrel->bstrategy = bstrategy; + if (instrument && vacrel->nindexes > 0) + { + /* Copy index names used by instrumentation (not error reporting) */ + indnames = palloc(sizeof(char *) * vacrel->nindexes); + for (int i = 0; i < vacrel->nindexes; i++) + indnames[i] = pstrdup(RelationGetRelationName(vacrel->indrels[i])); + } + + /* + * The index_cleanup param either disables index vacuuming and cleanup or + * forces it to go ahead when we would otherwise apply the index bypass + * optimization. The default is 'auto', which leaves the final decision + * up to lazy_vacuum(). + * + * The truncate param allows the user to avoid attempting relation + * truncation, though it can't force truncation to happen. + */ + Assert(params->index_cleanup != VACOPTVALUE_UNSPECIFIED); + Assert(params->truncate != VACOPTVALUE_UNSPECIFIED && + params->truncate != VACOPTVALUE_AUTO); + + /* + * While VacuumFailsafeActive is reset to false before calling this, we + * still need to reset it here due to recursive calls. + */ + VacuumFailsafeActive = false; + vacrel->consider_bypass_optimization = true; + vacrel->do_index_vacuuming = true; + vacrel->do_index_cleanup = true; + vacrel->do_rel_truncate = (params->truncate != VACOPTVALUE_DISABLED); + if (params->index_cleanup == VACOPTVALUE_DISABLED) + { + /* Force disable index vacuuming up-front */ + vacrel->do_index_vacuuming = false; + vacrel->do_index_cleanup = false; + } + else if (params->index_cleanup == VACOPTVALUE_ENABLED) + { + /* Force index vacuuming. Note that failsafe can still bypass. */ + vacrel->consider_bypass_optimization = false; + } + else + { + /* Default/auto, make all decisions dynamically */ + Assert(params->index_cleanup == VACOPTVALUE_AUTO); + } + + /* Initialize page counters explicitly (be tidy) */ + vacrel->scanned_pages = 0; + vacrel->removed_pages = 0; + vacrel->frozen_pages = 0; + vacrel->lpdead_item_pages = 0; + vacrel->missed_dead_pages = 0; + vacrel->nonempty_pages = 0; + /* dead_items_alloc allocates vacrel->dead_items later on */ + + /* Allocate/initialize output statistics state */ + vacrel->new_rel_tuples = 0; + vacrel->new_live_tuples = 0; + vacrel->indstats = (IndexBulkDeleteResult **) + palloc0(vacrel->nindexes * sizeof(IndexBulkDeleteResult *)); + + /* Initialize remaining counters (be tidy) */ + vacrel->num_index_scans = 0; + vacrel->tuples_deleted = 0; + vacrel->tuples_frozen = 0; + vacrel->lpdead_items = 0; + vacrel->live_tuples = 0; + vacrel->recently_dead_tuples = 0; + vacrel->missed_dead_tuples = 0; + + /* + * Get cutoffs that determine which deleted tuples are considered DEAD, + * not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze. Then determine + * the extent of the blocks that we'll scan in lazy_scan_heap.
It has to + * happen in this order to ensure that the OldestXmin cutoff field works + * as an upper bound on the XIDs stored in the pages we'll actually scan + * (NewRelfrozenXid tracking must never be allowed to miss unfrozen XIDs). + * + * Next acquire vistest, a related cutoff that's used in pruning. We use + * vistest in combination with OldestXmin to ensure that + * tdeheap_page_prune_and_freeze() always removes any deleted tuple whose + * xmax is < OldestXmin. lazy_scan_prune must never become confused about + * whether a tuple should be frozen or removed. (In the future we might + * want to teach lazy_scan_prune to recompute vistest from time to time, + * to increase the number of dead tuples it can prune away.) + */ + vacrel->aggressive = vacuum_get_cutoffs(rel, params, &vacrel->cutoffs); + vacrel->rel_pages = orig_rel_pages = RelationGetNumberOfBlocks(rel); + vacrel->vistest = GlobalVisTestFor(rel); + /* Initialize state used to track oldest extant XID/MXID */ + vacrel->NewRelfrozenXid = vacrel->cutoffs.OldestXmin; + vacrel->NewRelminMxid = vacrel->cutoffs.OldestMxact; + vacrel->skippedallvis = false; + skipwithvm = true; + if (params->options & VACOPT_DISABLE_PAGE_SKIPPING) + { + /* + * Force aggressive mode, and disable skipping blocks using the + * visibility map (even those set all-frozen) + */ + vacrel->aggressive = true; + skipwithvm = false; + } + + vacrel->skipwithvm = skipwithvm; + + if (verbose) + { + if (vacrel->aggressive) + ereport(INFO, + (errmsg("aggressively vacuuming \"%s.%s.%s\"", + vacrel->dbname, vacrel->relnamespace, + vacrel->relname))); + else + ereport(INFO, + (errmsg("vacuuming \"%s.%s.%s\"", + vacrel->dbname, vacrel->relnamespace, + vacrel->relname))); + } + + /* + * Allocate dead_items memory using dead_items_alloc. This handles + * parallel VACUUM initialization as part of allocating shared memory + * space used for dead_items. (But do a failsafe precheck first, to + * ensure that parallel VACUUM won't be attempted at all when relfrozenxid + * is already dangerously old.) + */ + lazy_check_wraparound_failsafe(vacrel); + dead_items_alloc(vacrel, params->nworkers); + + /* + * Call lazy_scan_heap to perform all required heap pruning, index + * vacuuming, and heap vacuuming (plus related processing) + */ + lazy_scan_heap(vacrel); + + /* + * Free resources managed by dead_items_alloc. This ends parallel mode in + * passing when necessary. + */ + dead_items_cleanup(vacrel); + Assert(!IsInParallelMode()); + + /* + * Update pg_class entries for each of rel's indexes where appropriate. + * + * Unlike the later update to rel's pg_class entry, this is not critical. + * Maintains relpages/reltuples statistics used by the planner only. + */ + if (vacrel->do_index_cleanup) + update_relstats_all_indexes(vacrel); + + /* Done with rel's indexes */ + vac_close_indexes(vacrel->nindexes, vacrel->indrels, NoLock); + + /* Optionally truncate rel */ + if (should_attempt_truncation(vacrel)) + lazy_truncate_heap(vacrel); + + /* Pop the error context stack */ + error_context_stack = errcallback.previous; + + /* Report that we are now doing final cleanup */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_FINAL_CLEANUP); + + /* + * Prepare to update rel's pg_class entry. + * + * Aggressive VACUUMs must always be able to advance relfrozenxid to a + * value >= FreezeLimit, and relminmxid to a value >= MultiXactCutoff. + * Non-aggressive VACUUMs may advance them by any amount, or not at all. 
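+ * + * (A hypothetical example with invented XIDs: if relfrozenxid is 100 and + * FreezeLimit is 500, an aggressive VACUUM must arrive at a + * NewRelfrozenXid >= 500, while a non-aggressive VACUUM may settle for any + * value >= 100, including 100 itself.)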
+ */ + Assert(vacrel->NewRelfrozenXid == vacrel->cutoffs.OldestXmin || + TransactionIdPrecedesOrEquals(vacrel->aggressive ? vacrel->cutoffs.FreezeLimit : + vacrel->cutoffs.relfrozenxid, + vacrel->NewRelfrozenXid)); + Assert(vacrel->NewRelminMxid == vacrel->cutoffs.OldestMxact || + MultiXactIdPrecedesOrEquals(vacrel->aggressive ? vacrel->cutoffs.MultiXactCutoff : + vacrel->cutoffs.relminmxid, + vacrel->NewRelminMxid)); + if (vacrel->skippedallvis) + { + /* + * Must keep original relfrozenxid in a non-aggressive VACUUM that + * chose to skip an all-visible page range. The state that tracks new + * values will have missed unfrozen XIDs from the pages we skipped. + */ + Assert(!vacrel->aggressive); + vacrel->NewRelfrozenXid = InvalidTransactionId; + vacrel->NewRelminMxid = InvalidMultiXactId; + } + + /* + * For safety, clamp relallvisible to be not more than what we're setting + * pg_class.relpages to + */ + new_rel_pages = vacrel->rel_pages; /* After possible rel truncation */ + tdeheap_visibilitymap_count(rel, &new_rel_allvisible, NULL); + if (new_rel_allvisible > new_rel_pages) + new_rel_allvisible = new_rel_pages; + + /* + * Now actually update rel's pg_class entry. + * + * In principle new_live_tuples could be -1 indicating that we (still) + * don't know the tuple count. In practice that can't happen, since we + * scan every page that isn't skipped using the visibility map. + */ + vac_update_relstats(rel, new_rel_pages, vacrel->new_live_tuples, + new_rel_allvisible, vacrel->nindexes > 0, + vacrel->NewRelfrozenXid, vacrel->NewRelminMxid, + &frozenxid_updated, &minmulti_updated, false); + + /* + * Report results to the cumulative stats system, too. + * + * Deliberately avoid telling the stats system about LP_DEAD items that + * remain in the table due to VACUUM bypassing index and heap vacuuming. + * ANALYZE will consider the remaining LP_DEAD items to be dead "tuples". + * It seems like a good idea to err on the side of not vacuuming again too + * soon in cases where the failsafe prevented significant amounts of heap + * vacuuming. + */ + pgstat_report_vacuum(RelationGetRelid(rel), + rel->rd_rel->relisshared, + Max(vacrel->new_live_tuples, 0), + vacrel->recently_dead_tuples + + vacrel->missed_dead_tuples); + pgstat_progress_end_command(); + + if (instrument) + { + TimestampTz endtime = GetCurrentTimestamp(); + + if (verbose || params->log_min_duration == 0 || + TimestampDifferenceExceeds(starttime, endtime, + params->log_min_duration)) + { + long secs_dur; + int usecs_dur; + WalUsage walusage; + BufferUsage bufferusage; + StringInfoData buf; + char *msgfmt; + int32 diff; + double read_rate = 0, + write_rate = 0; + + TimestampDifference(starttime, endtime, &secs_dur, &usecs_dur); + memset(&walusage, 0, sizeof(WalUsage)); + WalUsageAccumDiff(&walusage, &pgWalUsage, &startwalusage); + memset(&bufferusage, 0, sizeof(BufferUsage)); + BufferUsageAccumDiff(&bufferusage, &pgBufferUsage, &startbufferusage); + + initStringInfo(&buf); + if (verbose) + { + /* + * Aggressiveness already reported earlier, in dedicated + * VACUUM VERBOSE ereport + */ + Assert(!params->is_wraparound); + msgfmt = _("finished vacuuming \"%s.%s.%s\": index scans: %d\n"); + } + else if (params->is_wraparound) + { + /* + * While it's possible for a VACUUM to be both is_wraparound + * and !aggressive, that's just a corner-case -- is_wraparound + * implies aggressive. Produce distinct output for the corner + * case all the same, just in case. 
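+ * + * (Rendered with its placeholders filled in -- database, schema, and + * table names invented for illustration -- the wraparound message below + * reads something like: automatic aggressive vacuum to prevent wraparound + * of table "postgres.public.pgbench_accounts": index scans: 1.)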
+ */ + if (vacrel->aggressive) + msgfmt = _("automatic aggressive vacuum to prevent wraparound of table \"%s.%s.%s\": index scans: %d\n"); + else + msgfmt = _("automatic vacuum to prevent wraparound of table \"%s.%s.%s\": index scans: %d\n"); + } + else + { + if (vacrel->aggressive) + msgfmt = _("automatic aggressive vacuum of table \"%s.%s.%s\": index scans: %d\n"); + else + msgfmt = _("automatic vacuum of table \"%s.%s.%s\": index scans: %d\n"); + } + appendStringInfo(&buf, msgfmt, + vacrel->dbname, + vacrel->relnamespace, + vacrel->relname, + vacrel->num_index_scans); + appendStringInfo(&buf, _("pages: %u removed, %u remain, %u scanned (%.2f%% of total)\n"), + vacrel->removed_pages, + new_rel_pages, + vacrel->scanned_pages, + orig_rel_pages == 0 ? 100.0 : + 100.0 * vacrel->scanned_pages / orig_rel_pages); + appendStringInfo(&buf, + _("tuples: %lld removed, %lld remain, %lld are dead but not yet removable\n"), + (long long) vacrel->tuples_deleted, + (long long) vacrel->new_rel_tuples, + (long long) vacrel->recently_dead_tuples); + if (vacrel->missed_dead_tuples > 0) + appendStringInfo(&buf, + _("tuples missed: %lld dead from %u pages not removed due to cleanup lock contention\n"), + (long long) vacrel->missed_dead_tuples, + vacrel->missed_dead_pages); + diff = (int32) (ReadNextTransactionId() - + vacrel->cutoffs.OldestXmin); + appendStringInfo(&buf, + _("removable cutoff: %u, which was %d XIDs old when operation ended\n"), + vacrel->cutoffs.OldestXmin, diff); + if (frozenxid_updated) + { + diff = (int32) (vacrel->NewRelfrozenXid - + vacrel->cutoffs.relfrozenxid); + appendStringInfo(&buf, + _("new relfrozenxid: %u, which is %d XIDs ahead of previous value\n"), + vacrel->NewRelfrozenXid, diff); + } + if (minmulti_updated) + { + diff = (int32) (vacrel->NewRelminMxid - + vacrel->cutoffs.relminmxid); + appendStringInfo(&buf, + _("new relminmxid: %u, which is %d MXIDs ahead of previous value\n"), + vacrel->NewRelminMxid, diff); + } + appendStringInfo(&buf, _("frozen: %u pages from table (%.2f%% of total) had %lld tuples frozen\n"), + vacrel->frozen_pages, + orig_rel_pages == 0 ? 100.0 : + 100.0 * vacrel->frozen_pages / orig_rel_pages, + (long long) vacrel->tuples_frozen); + if (vacrel->do_index_vacuuming) + { + if (vacrel->nindexes == 0 || vacrel->num_index_scans == 0) + appendStringInfoString(&buf, _("index scan not needed: ")); + else + appendStringInfoString(&buf, _("index scan needed: ")); + + msgfmt = _("%u pages from table (%.2f%% of total) had %lld dead item identifiers removed\n"); + } + else + { + if (!VacuumFailsafeActive) + appendStringInfoString(&buf, _("index scan bypassed: ")); + else + appendStringInfoString(&buf, _("index scan bypassed by failsafe: ")); + + msgfmt = _("%u pages from table (%.2f%% of total) have %lld dead item identifiers\n"); + } + appendStringInfo(&buf, msgfmt, + vacrel->lpdead_item_pages, + orig_rel_pages == 0 ? 
100.0 : + 100.0 * vacrel->lpdead_item_pages / orig_rel_pages, + (long long) vacrel->lpdead_items); + for (int i = 0; i < vacrel->nindexes; i++) + { + IndexBulkDeleteResult *istat = vacrel->indstats[i]; + + if (!istat) + continue; + + appendStringInfo(&buf, + _("index \"%s\": pages: %u in total, %u newly deleted, %u currently deleted, %u reusable\n"), + indnames[i], + istat->num_pages, + istat->pages_newly_deleted, + istat->pages_deleted, + istat->pages_free); + } + if (track_io_timing) + { + double read_ms = (double) (pgStatBlockReadTime - startreadtime) / 1000; + double write_ms = (double) (pgStatBlockWriteTime - startwritetime) / 1000; + + appendStringInfo(&buf, _("I/O timings: read: %.3f ms, write: %.3f ms\n"), + read_ms, write_ms); + } + if (secs_dur > 0 || usecs_dur > 0) + { + read_rate = (double) BLCKSZ * (bufferusage.shared_blks_read + bufferusage.local_blks_read) / + (1024 * 1024) / (secs_dur + usecs_dur / 1000000.0); + write_rate = (double) BLCKSZ * (bufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtied) / + (1024 * 1024) / (secs_dur + usecs_dur / 1000000.0); + } + appendStringInfo(&buf, _("avg read rate: %.3f MB/s, avg write rate: %.3f MB/s\n"), + read_rate, write_rate); + appendStringInfo(&buf, + _("buffer usage: %lld hits, %lld misses, %lld dirtied\n"), + (long long) (bufferusage.shared_blks_hit + bufferusage.local_blks_hit), + (long long) (bufferusage.shared_blks_read + bufferusage.local_blks_read), + (long long) (bufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtied)); + appendStringInfo(&buf, + _("WAL usage: %lld records, %lld full page images, %llu bytes\n"), + (long long) walusage.wal_records, + (long long) walusage.wal_fpi, + (unsigned long long) walusage.wal_bytes); + appendStringInfo(&buf, _("system usage: %s"), pg_rusage_show(&ru0)); + + ereport(verbose ? INFO : LOG, + (errmsg_internal("%s", buf.data))); + pfree(buf.data); + } + } + + /* Cleanup index statistics and index names */ + for (int i = 0; i < vacrel->nindexes; i++) + { + if (vacrel->indstats[i]) + pfree(vacrel->indstats[i]); + + if (instrument) + pfree(indnames[i]); + } +} + +/* + * lazy_scan_heap() -- workhorse function for VACUUM + * + * This routine prunes each page in the heap, and considers the need to + * freeze remaining tuples with storage (not including pages that can be + * skipped using the visibility map). Also performs related maintenance + * of the FSM and visibility map. These steps all take place during an + * initial pass over the target heap relation. + * + * Also invokes lazy_vacuum_all_indexes to vacuum indexes, which largely + * consists of deleting index tuples that point to LP_DEAD items left in + * heap pages following pruning. Earlier initial pass over the heap will + * have collected the TIDs whose index tuples need to be removed. + * + * Finally, invokes lazy_vacuum_tdeheap_rel to vacuum heap pages, which + * largely consists of marking LP_DEAD items (from vacrel->dead_items) + * as LP_UNUSED. This has to happen in a second, final pass over the + * heap, to preserve a basic invariant that all index AMs rely on: no + * extant index tuple can ever be allowed to contain a TID that points to + * an LP_UNUSED line pointer in the heap. We must disallow premature + * recycling of line pointers to avoid index scans that get confused + * about which TID points to which tuple immediately after recycling. 
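+ * (A sketch of the hazard, with made-up numbers: suppose an index tuple + * still carries TID (7,3) when line pointer 3 of block 7 is set LP_UNUSED + * and reused for an unrelated new tuple; an index scan through that index + * tuple would then surface the wrong row. Deleting the index tuple during + * index vacuuming, before the second heap pass recycles the line pointer, + * closes that window.)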
+ * (Actually, this isn't a concern when target heap relation happens to + * have no indexes, which allows us to safely apply the one-pass strategy + * as an optimization). + * + * In practice we often have enough space to fit all TIDs, and so won't + * need to call lazy_vacuum more than once, after our initial pass over + * the heap has totally finished. Otherwise things are slightly more + * complicated: our "initial pass" over the heap applies only to those + * pages that were pruned before we needed to call lazy_vacuum, and our + * "final pass" over the heap only vacuums these same heap pages. + * However, we process indexes in full every time lazy_vacuum is called, + * which makes index processing very inefficient when memory is in short + * supply. + */ +static void +lazy_scan_heap(LVRelState *vacrel) +{ + BlockNumber rel_pages = vacrel->rel_pages, + blkno, + next_fsm_block_to_vacuum = 0; + bool all_visible_according_to_vm; + + TidStore *dead_items = vacrel->dead_items; + VacDeadItemsInfo *dead_items_info = vacrel->dead_items_info; + Buffer vmbuffer = InvalidBuffer; + const int initprog_index[] = { + PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_TOTAL_HEAP_BLKS, + PROGRESS_VACUUM_MAX_DEAD_TUPLE_BYTES + }; + int64 initprog_val[3]; + + /* Report that we're scanning the heap, advertising total # of blocks */ + initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP; + initprog_val[1] = rel_pages; + initprog_val[2] = dead_items_info->max_bytes; + pgstat_progress_update_multi_param(3, initprog_index, initprog_val); + + /* Initialize for the first tdeheap_vac_scan_next_block() call */ + vacrel->current_block = InvalidBlockNumber; + vacrel->next_unskippable_block = InvalidBlockNumber; + vacrel->next_unskippable_allvis = false; + vacrel->next_unskippable_vmbuffer = InvalidBuffer; + + while (tdeheap_vac_scan_next_block(vacrel, &blkno, &all_visible_according_to_vm)) + { + Buffer buf; + Page page; + bool has_lpdead_items; + bool got_cleanup_lock = false; + + vacrel->scanned_pages++; + + /* Report as block scanned, update error traceback information */ + pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno); + update_vacuum_error_info(vacrel, NULL, VACUUM_ERRCB_PHASE_SCAN_HEAP, + blkno, InvalidOffsetNumber); + + vacuum_delay_point(); + + /* + * Regularly check if wraparound failsafe should trigger. + * + * There is a similar check inside lazy_vacuum_all_indexes(), but + * relfrozenxid might start to look dangerously old before we reach + * that point. This check also provides failsafe coverage for the + * one-pass strategy, and the two-pass strategy with the index_cleanup + * param set to 'off'. + */ + if (vacrel->scanned_pages % FAILSAFE_EVERY_PAGES == 0) + lazy_check_wraparound_failsafe(vacrel); + + /* + * Consider if we definitely have enough space to process TIDs on page + * already. If we are close to overrunning the available space for + * dead_items TIDs, pause and do a cycle of vacuuming before we tackle + * this page. + */ + if (TidStoreMemoryUsage(dead_items) > dead_items_info->max_bytes) + { + /* + * Before beginning index vacuuming, we release any pin we may + * hold on the visibility map page. This isn't necessary for + * correctness, but we do it anyway to avoid holding the pin + * across a lengthy, unrelated operation. 
+ */ + if (BufferIsValid(vmbuffer)) + { + ReleaseBuffer(vmbuffer); + vmbuffer = InvalidBuffer; + } + + /* Perform a round of index and heap vacuuming */ + vacrel->consider_bypass_optimization = false; + lazy_vacuum(vacrel); + + /* + * Vacuum the Free Space Map to make newly-freed space visible on + * upper-level FSM pages. Note we have not yet processed blkno. + */ + FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, + blkno); + next_fsm_block_to_vacuum = blkno; + + /* Report that we are once again scanning the heap */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_SCAN_HEAP); + } + + /* + * Pin the visibility map page in case we need to mark the page + * all-visible. In most cases this will be very cheap, because we'll + * already have the correct page pinned anyway. + */ + tdeheap_visibilitymap_pin(vacrel->rel, blkno, &vmbuffer); + + buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL, + vacrel->bstrategy); + page = BufferGetPage(buf); + + /* + * We need a buffer cleanup lock to prune HOT chains and defragment + * the page in lazy_scan_prune. But when it's not possible to acquire + * a cleanup lock right away, we may be able to settle for reduced + * processing using lazy_scan_noprune. + */ + got_cleanup_lock = ConditionalLockBufferForCleanup(buf); + + if (!got_cleanup_lock) + LockBuffer(buf, BUFFER_LOCK_SHARE); + + /* Check for new or empty pages before lazy_scan_[no]prune call */ + if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, !got_cleanup_lock, + vmbuffer)) + { + /* Processed as new/empty page (lock and pin released) */ + continue; + } + + /* + * If we didn't get the cleanup lock, we can still collect LP_DEAD + * items in the dead_items area for later vacuuming, count live and + * recently dead tuples for vacuum logging, and determine if this + * block could later be truncated. If we encounter any xid/mxids that + * require advancing the relfrozenxid/relminmxid, we'll have to wait + * for a cleanup lock and call lazy_scan_prune(). + */ + if (!got_cleanup_lock && + !lazy_scan_noprune(vacrel, buf, blkno, page, &has_lpdead_items)) + { + /* + * lazy_scan_noprune could not do all required processing. Wait + * for a cleanup lock, and call lazy_scan_prune in the usual way. + */ + Assert(vacrel->aggressive); + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + LockBufferForCleanup(buf); + got_cleanup_lock = true; + } + + /* + * If we have a cleanup lock, we must now prune, freeze, and count + * tuples. We may have acquired the cleanup lock originally, or we may + * have gone back and acquired it after lazy_scan_noprune() returned + * false. Either way, the page hasn't been processed yet. + * + * Like lazy_scan_noprune(), lazy_scan_prune() will count + * recently_dead_tuples and live tuples for vacuum logging, determine + * if the block can later be truncated, and accumulate the details of + * remaining LP_DEAD line pointers on the page into dead_items. These + * dead items include those pruned by lazy_scan_prune() as well as + * line pointers previously marked LP_DEAD. + */ + if (got_cleanup_lock) + lazy_scan_prune(vacrel, buf, blkno, page, + vmbuffer, all_visible_according_to_vm, + &has_lpdead_items); + + /* + * Now drop the buffer lock and, potentially, update the FSM. + * + * Our goal is to update the freespace map the last time we touch the + * page. If we'll process a block in the second pass, we may free up + * additional space on the page, so it is better to update the FSM + * after the second pass.
If the relation has no indexes, or if index + * vacuuming is disabled, there will be no second heap pass; if this + * particular page has no dead items, the second heap pass will not + * touch this page. So, in those cases, update the FSM now. + * + * Note: In corner cases, it's possible to miss updating the FSM + * entirely. If index vacuuming is currently enabled, we'll skip the + * FSM update now. But if failsafe mode is later activated, or there + * are so few dead tuples that index vacuuming is bypassed, there will + * also be no opportunity to update the FSM later, because we'll never + * revisit this page. Since updating the FSM is desirable but not + * absolutely required, that's OK. + */ + if (vacrel->nindexes == 0 + || !vacrel->do_index_vacuuming + || !has_lpdead_items) + { + Size freespace = PageGetHeapFreeSpace(page); + + UnlockReleaseBuffer(buf); + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + + /* + * Periodically perform FSM vacuuming to make newly-freed space + * visible on upper FSM pages. This is done after vacuuming if the + * table has indexes. There will only be newly-freed space if we + * held the cleanup lock and lazy_scan_prune() was called. + */ + if (got_cleanup_lock && vacrel->nindexes == 0 && has_lpdead_items && + blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES) + { + FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, + blkno); + next_fsm_block_to_vacuum = blkno; + } + } + else + UnlockReleaseBuffer(buf); + } + + vacrel->blkno = InvalidBlockNumber; + if (BufferIsValid(vmbuffer)) + ReleaseBuffer(vmbuffer); + + /* report that everything is now scanned */ + pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno); + + /* now we can compute the new value for pg_class.reltuples */ + vacrel->new_live_tuples = vac_estimate_reltuples(vacrel->rel, rel_pages, + vacrel->scanned_pages, + vacrel->live_tuples); + + /* + * Also compute the total number of surviving heap entries. In the + * (unlikely) scenario that new_live_tuples is -1, take it as zero. + */ + vacrel->new_rel_tuples = + Max(vacrel->new_live_tuples, 0) + vacrel->recently_dead_tuples + + vacrel->missed_dead_tuples; + + /* + * Do index vacuuming (call each index's ambulkdelete routine), then do + * related heap vacuuming + */ + if (dead_items_info->num_items > 0) + lazy_vacuum(vacrel); + + /* + * Vacuum the remainder of the Free Space Map. We must do this whether or + * not there were indexes, and whether or not we bypassed index vacuuming. + */ + if (blkno > next_fsm_block_to_vacuum) + FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, blkno); + + /* report all blocks vacuumed */ + pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno); + + /* Do final index cleanup (call each index's amvacuumcleanup routine) */ + if (vacrel->nindexes > 0 && vacrel->do_index_cleanup) + lazy_cleanup_all_indexes(vacrel); +} + +/* + * tdeheap_vac_scan_next_block() -- get next block for vacuum to process + * + * lazy_scan_heap() calls here every time it needs to get the next block to + * prune and vacuum. The function uses the visibility map, vacuum options, + * and various thresholds to skip blocks which do not need to be processed and + * sets blkno to the next block to process. + * + * The block number and visibility status of the next block to process are set + * in *blkno and *all_visible_according_to_vm. The return value is false if + * there are no further blocks to process. + * + * vacrel is an in/out parameter here. 
Vacuum options and information about + * the relation are read. vacrel->skippedallvis is set if we skip a block + * that's all-visible but not all-frozen, to ensure that we don't update + * relfrozenxid in that case. vacrel also holds information about the next + * unskippable block, as bookkeeping for this function. + */ +static bool +tdeheap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno, + bool *all_visible_according_to_vm) +{ + BlockNumber next_block; + + /* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */ + next_block = vacrel->current_block + 1; + + /* Have we reached the end of the relation? */ + if (next_block >= vacrel->rel_pages) + { + if (BufferIsValid(vacrel->next_unskippable_vmbuffer)) + { + ReleaseBuffer(vacrel->next_unskippable_vmbuffer); + vacrel->next_unskippable_vmbuffer = InvalidBuffer; + } + *blkno = vacrel->rel_pages; + return false; + } + + /* + * We must be in one of the three following states: + */ + if (next_block > vacrel->next_unskippable_block || + vacrel->next_unskippable_block == InvalidBlockNumber) + { + /* + * 1. We have just processed an unskippable block (or we're at the + * beginning of the scan). Find the next unskippable block using the + * visibility map. + */ + bool skipsallvis; + + find_next_unskippable_block(vacrel, &skipsallvis); + + /* + * We now know the next block that we must process. It can be the + * next block after the one we just processed, or something further + * ahead. If it's further ahead, we can jump to it, but we choose to + * do so only if we can skip at least SKIP_PAGES_THRESHOLD consecutive + * pages. Since we're reading sequentially, the OS should be doing + * readahead for us, so there's no gain in skipping a page now and + * then. Skipping such a range might even discourage sequential + * detection. + * + * This test also enables more frequent relfrozenxid advancement + * during non-aggressive VACUUMs. If the range has any all-visible + * pages then skipping makes updating relfrozenxid unsafe, which is a + * real downside. + */ + if (vacrel->next_unskippable_block - next_block >= SKIP_PAGES_THRESHOLD) + { + next_block = vacrel->next_unskippable_block; + if (skipsallvis) + vacrel->skippedallvis = true; + } + } + + /* Now we must be in one of the two remaining states: */ + if (next_block < vacrel->next_unskippable_block) + { + /* + * 2. We are processing a range of blocks that we could have skipped + * but chose not to. We know that they are all-visible in the VM, + * otherwise they would've been unskippable. + */ + *blkno = vacrel->current_block = next_block; + *all_visible_according_to_vm = true; + return true; + } + else + { + /* + * 3. We reached the next unskippable block. Process it. On next + * iteration, we will be back in state 1. + */ + Assert(next_block == vacrel->next_unskippable_block); + + *blkno = vacrel->current_block = next_block; + *all_visible_according_to_vm = vacrel->next_unskippable_allvis; + return true; + } +} + +/* + * Find the next unskippable block in a vacuum scan using the visibility map. + * The next unskippable block and its visibility information is updated in + * vacrel. + * + * Note: our opinion of which blocks can be skipped can go stale immediately. + * It's okay if caller "misses" a page whose all-visible or all-frozen marking + * was concurrently cleared, though. All that matters is that caller scan all + * pages whose tuples might contain XIDs < OldestXmin, or MXIDs < OldestMxact. 
+ * (Actually, non-aggressive VACUUMs can choose to skip all-visible pages with + * older XIDs/MXIDs. The *skippedallvis flag will be set here when the choice + * to skip such a range is actually made, making everything safe.) + */ +static void +find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis) +{ + BlockNumber rel_pages = vacrel->rel_pages; + BlockNumber next_unskippable_block = vacrel->next_unskippable_block + 1; + Buffer next_unskippable_vmbuffer = vacrel->next_unskippable_vmbuffer; + bool next_unskippable_allvis; + + *skipsallvis = false; + + for (;;) + { + uint8 mapbits = tdeheap_visibilitymap_get_status(vacrel->rel, + next_unskippable_block, + &next_unskippable_vmbuffer); + + next_unskippable_allvis = (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0; + + /* + * A block is unskippable if it is not all visible according to the + * visibility map. + */ + if (!next_unskippable_allvis) + { + Assert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0); + break; + } + + /* + * Caller must scan the last page to determine whether it has tuples + * (caller must have the opportunity to set vacrel->nonempty_pages). + * This rule avoids having lazy_truncate_heap() take access-exclusive + * lock on rel to attempt a truncation that fails anyway, just because + * there are tuples on the last page (it is likely that there will be + * tuples on other nearby pages as well, but those can be skipped). + * + * Implement this by always treating the last block as unsafe to skip. + */ + if (next_unskippable_block == rel_pages - 1) + break; + + /* DISABLE_PAGE_SKIPPING makes all skipping unsafe */ + if (!vacrel->skipwithvm) + break; + + /* + * Aggressive VACUUM caller can't skip pages just because they are + * all-visible. They may still skip all-frozen pages, which can't + * contain XIDs < OldestXmin (XIDs that aren't already frozen by now). + */ + if ((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0) + { + if (vacrel->aggressive) + break; + + /* + * All-visible block is safe to skip in non-aggressive case. But + * remember that the final range contains such a block for later. + */ + *skipsallvis = true; + } + + next_unskippable_block++; + } + + /* write the local variables back to vacrel */ + vacrel->next_unskippable_block = next_unskippable_block; + vacrel->next_unskippable_allvis = next_unskippable_allvis; + vacrel->next_unskippable_vmbuffer = next_unskippable_vmbuffer; +} + +/* + * lazy_scan_new_or_empty() -- lazy_scan_heap() new/empty page handling. + * + * Must call here to handle both new and empty pages before calling + * lazy_scan_prune or lazy_scan_noprune, since they're not prepared to deal + * with new or empty pages. + * + * It's necessary to consider new pages as a special case, since the rules for + * maintaining the visibility map and FSM with empty pages are a little + * different (though new pages can be truncated away during rel truncation). + * + * Empty pages are not really a special case -- they're just heap pages that + * have no allocated tuples (including even LP_UNUSED items). You might + * wonder why we need to handle them here all the same. It's only necessary + * because of a corner-case involving a hard crash during heap relation + * extension. If we ever make relation-extension crash safe, then it should + * no longer be necessary to deal with empty pages here (or new pages, for + * that matter). + * + * Caller must hold at least a shared lock. We might need to escalate the + * lock in that case, so the type of lock caller holds needs to be specified + * using 'sharelock' argument. 
+ * + * Returns false in common case where caller should go on to call + * lazy_scan_prune (or lazy_scan_noprune). Otherwise returns true, indicating + * that lazy_scan_heap is done processing the page, releasing lock on caller's + * behalf. + */ +static bool +lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno, + Page page, bool sharelock, Buffer vmbuffer) +{ + Size freespace; + + if (PageIsNew(page)) + { + /* + * All-zeroes pages can be left over if either a backend extends the + * relation by a single page, but crashes before the newly initialized + * page has been written out, or when bulk-extending the relation + * (which creates a number of empty pages at the tail end of the + * relation), and then enters them into the FSM. + * + * Note we do not enter the page into the visibilitymap. That has the + * downside that we repeatedly visit this page in subsequent vacuums, + * but otherwise we'll never discover the space on a promoted standby. + * The harm of repeated checking ought to normally not be too bad. The + * space usually should be used at some point, otherwise there + * wouldn't be any regular vacuums. + * + * Make sure these pages are in the FSM, to ensure they can be reused. + * Do that by testing if there's any space recorded for the page. If + * not, enter it. We do so after releasing the lock on the heap page, + * the FSM is approximate, after all. + */ + UnlockReleaseBuffer(buf); + + if (GetRecordedFreeSpace(vacrel->rel, blkno) == 0) + { + freespace = BLCKSZ - SizeOfPageHeaderData; + + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + } + + return true; + } + + if (PageIsEmpty(page)) + { + /* + * It seems likely that caller will always be able to get a cleanup + * lock on an empty page. But don't take any chances -- escalate to + * an exclusive lock (still don't need a cleanup lock, though). + */ + if (sharelock) + { + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + + if (!PageIsEmpty(page)) + { + /* page isn't new or empty -- keep lock and pin for now */ + return false; + } + } + else + { + /* Already have a full cleanup lock (which is more than enough) */ + } + + /* + * Unlike new pages, empty pages are always set all-visible and + * all-frozen. + */ + if (!PageIsAllVisible(page)) + { + START_CRIT_SECTION(); + + /* mark buffer dirty before writing a WAL record */ + MarkBufferDirty(buf); + + /* + * It's possible that another backend has extended the heap, + * initialized the page, and then failed to WAL-log the page due + * to an ERROR. Since heap extension is not WAL-logged, recovery + * might try to replay our record setting the page all-visible and + * find that the page isn't initialized, which will cause a PANIC. + * To prevent that, check whether the page has been previously + * WAL-logged, and if not, do that now. 
+ */ + if (RelationNeedsWAL(vacrel->rel) && + PageGetLSN(page) == InvalidXLogRecPtr) + log_newpage_buffer(buf, true); + + PageSetAllVisible(page); + tdeheap_visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr, + vmbuffer, InvalidTransactionId, + VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN); + END_CRIT_SECTION(); + } + + freespace = PageGetHeapFreeSpace(page); + UnlockReleaseBuffer(buf); + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + return true; + } + + /* page isn't new or empty -- keep lock and pin */ + return false; +} + +/* qsort comparator for sorting OffsetNumbers */ +static int +cmpOffsetNumbers(const void *a, const void *b) +{ + return pg_cmp_u16(*(const OffsetNumber *) a, *(const OffsetNumber *) b); +} + +/* + * lazy_scan_prune() -- lazy_scan_heap() pruning and freezing. + * + * Caller must hold pin and buffer cleanup lock on the buffer. + * + * vmbuffer is the buffer containing the VM block with visibility information + * for the heap block, blkno. all_visible_according_to_vm is the saved + * visibility status of the heap block looked up earlier by the caller. We + * won't rely entirely on this status, as it may be out of date. + * + * *has_lpdead_items is set to true or false depending on whether, upon return + * from this function, any LP_DEAD items are still present on the page. + */ +static void +lazy_scan_prune(LVRelState *vacrel, + Buffer buf, + BlockNumber blkno, + Page page, + Buffer vmbuffer, + bool all_visible_according_to_vm, + bool *has_lpdead_items) +{ + Relation rel = vacrel->rel; + PruneFreezeResult presult; + int prune_options = 0; + + Assert(BufferGetBlockNumber(buf) == blkno); + + /* + * Prune all HOT-update chains and potentially freeze tuples on this page. + * + * If the relation has no indexes, we can immediately mark would-be dead + * items LP_UNUSED. + * + * The number of tuples removed from the page is returned in + * presult.ndeleted. It should not be confused with presult.lpdead_items; + * presult.lpdead_items's final value can be thought of as the number of + * tuples that were deleted from indexes. + * + * We will update the VM after collecting LP_DEAD items and freezing + * tuples. Pruning will have determined whether or not the page is + * all-visible. + */ + prune_options = HEAP_PAGE_PRUNE_FREEZE; + if (vacrel->nindexes == 0) + prune_options |= HEAP_PAGE_PRUNE_MARK_UNUSED_NOW; + + tdeheap_page_prune_and_freeze(rel, buf, vacrel->vistest, prune_options, + &vacrel->cutoffs, &presult, PRUNE_VACUUM_SCAN, + &vacrel->offnum, + &vacrel->NewRelfrozenXid, &vacrel->NewRelminMxid); + + Assert(MultiXactIdIsValid(vacrel->NewRelminMxid)); + Assert(TransactionIdIsValid(vacrel->NewRelfrozenXid)); + + if (presult.nfrozen > 0) + { + /* + * We don't increment the frozen_pages instrumentation counter when + * nfrozen == 0, since it only counts pages with newly frozen tuples + * (don't confuse that with pages newly set all-frozen in VM). + */ + vacrel->frozen_pages++; + } + + /* + * VACUUM will call tdeheap_page_is_all_visible() during the second pass over + * the heap to determine all_visible and all_frozen for the page -- this + * is a specialized version of the logic from this function. Now that + * we've finished pruning and freezing, make sure that we're in total + * agreement with tdeheap_page_is_all_visible() using an assertion. 
+ */ +#ifdef USE_ASSERT_CHECKING + /* Note that all_frozen value does not matter when !all_visible */ + if (presult.all_visible) + { + TransactionId debug_cutoff; + bool debug_all_frozen; + + Assert(presult.lpdead_items == 0); + + if (!tdeheap_page_is_all_visible(vacrel, buf, + &debug_cutoff, &debug_all_frozen)) + Assert(false); + + Assert(presult.all_frozen == debug_all_frozen); + + Assert(!TransactionIdIsValid(debug_cutoff) || + debug_cutoff == presult.vm_conflict_horizon); + } +#endif + + /* + * Now save details of the LP_DEAD items from the page in vacrel + */ + if (presult.lpdead_items > 0) + { + vacrel->lpdead_item_pages++; + + /* + * deadoffsets are collected incrementally in + * tdeheap_page_prune_and_freeze() as each dead line pointer is recorded, + * with an indeterminate order, but dead_items_add requires them to be + * sorted. + */ + qsort(presult.deadoffsets, presult.lpdead_items, sizeof(OffsetNumber), + cmpOffsetNumbers); + + dead_items_add(vacrel, blkno, presult.deadoffsets, presult.lpdead_items); + } + + /* Finally, add page-local counts to whole-VACUUM counts */ + vacrel->tuples_deleted += presult.ndeleted; + vacrel->tuples_frozen += presult.nfrozen; + vacrel->lpdead_items += presult.lpdead_items; + vacrel->live_tuples += presult.live_tuples; + vacrel->recently_dead_tuples += presult.recently_dead_tuples; + + /* Can't truncate this page */ + if (presult.hastup) + vacrel->nonempty_pages = blkno + 1; + + /* Did we find LP_DEAD items? */ + *has_lpdead_items = (presult.lpdead_items > 0); + + Assert(!presult.all_visible || !(*has_lpdead_items)); + + /* + * Handle setting visibility map bit based on information from the VM (as + * of last tdeheap_vac_scan_next_block() call), and from all_visible and + * all_frozen variables + */ + if (!all_visible_according_to_vm && presult.all_visible) + { + uint8 flags = VISIBILITYMAP_ALL_VISIBLE; + + if (presult.all_frozen) + { + Assert(!TransactionIdIsValid(presult.vm_conflict_horizon)); + flags |= VISIBILITYMAP_ALL_FROZEN; + } + + /* + * It should never be the case that the visibility map page is set + * while the page-level bit is clear, but the reverse is allowed (if + * checksums are not enabled). Regardless, set both bits so that we + * get back in sync. + * + * NB: If the heap page is all-visible but the VM bit is not set, we + * don't need to dirty the heap page. However, if checksums are + * enabled, we do need to make sure that the heap page is dirtied + * before passing it to tdeheap_visibilitymap_set(), because it may be logged. + * Given that this situation should only happen in rare cases after a + * crash, it is not worth optimizing. + */ + PageSetAllVisible(page); + MarkBufferDirty(buf); + tdeheap_visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr, + vmbuffer, presult.vm_conflict_horizon, + flags); + } + + /* + * As of PostgreSQL 9.2, the visibility map bit should never be set if the + * page-level bit is clear. However, it's possible that the bit got + * cleared after tdeheap_vac_scan_next_block() was called, so we must recheck + * with buffer lock before concluding that the VM is corrupt. 
+ */ + else if (all_visible_according_to_vm && !PageIsAllVisible(page) && + tdeheap_visibilitymap_get_status(vacrel->rel, blkno, &vmbuffer) != 0) + { + elog(WARNING, "page is not marked all-visible but visibility map bit is set in relation \"%s\" page %u", + vacrel->relname, blkno); + tdeheap_visibilitymap_clear(vacrel->rel, blkno, vmbuffer, + VISIBILITYMAP_VALID_BITS); + } + + /* + * It's possible for the value returned by + * GetOldestNonRemovableTransactionId() to move backwards, so it's not + * wrong for us to see tuples that appear to not be visible to everyone + * yet, while PD_ALL_VISIBLE is already set. The real safe xmin value + * never moves backwards, but GetOldestNonRemovableTransactionId() is + * conservative and sometimes returns a value that's unnecessarily small, + * so if we see that contradiction it just means that the tuples that we + * think are not visible to everyone yet actually are, and the + * PD_ALL_VISIBLE flag is correct. + * + * There should never be LP_DEAD items on a page with PD_ALL_VISIBLE set, + * however. + */ + else if (presult.lpdead_items > 0 && PageIsAllVisible(page)) + { + elog(WARNING, "page containing LP_DEAD items is marked as all-visible in relation \"%s\" page %u", + vacrel->relname, blkno); + PageClearAllVisible(page); + MarkBufferDirty(buf); + tdeheap_visibilitymap_clear(vacrel->rel, blkno, vmbuffer, + VISIBILITYMAP_VALID_BITS); + } + + /* + * If the all-visible page is all-frozen but not marked as such yet, mark + * it as all-frozen. Note that all_frozen is only valid if all_visible is + * true, so we must check both all_visible and all_frozen. + */ + else if (all_visible_according_to_vm && presult.all_visible && + presult.all_frozen && !VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer)) + { + /* + * Avoid relying on all_visible_according_to_vm as a proxy for the + * page-level PD_ALL_VISIBLE bit being set, since it might have become + * stale -- even when all_visible is set + */ + if (!PageIsAllVisible(page)) + { + PageSetAllVisible(page); + MarkBufferDirty(buf); + } + + /* + * Set the page all-frozen (and all-visible) in the VM. + * + * We can pass InvalidTransactionId as our cutoff_xid, since a + * snapshotConflictHorizon sufficient to make everything safe for REDO + * was logged when the page's tuples were frozen. + */ + Assert(!TransactionIdIsValid(presult.vm_conflict_horizon)); + tdeheap_visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr, + vmbuffer, InvalidTransactionId, + VISIBILITYMAP_ALL_VISIBLE | + VISIBILITYMAP_ALL_FROZEN); + } +} + +/* + * lazy_scan_noprune() -- lazy_scan_prune() without pruning or freezing + * + * Caller need only hold a pin and share lock on the buffer, unlike + * lazy_scan_prune, which requires a full cleanup lock. While pruning isn't + * performed here, it's quite possible that an earlier opportunistic pruning + * operation left LP_DEAD items behind. We'll at least collect any such items + * in dead_items for removal from indexes. + * + * For aggressive VACUUM callers, we may return false to indicate that a full + * cleanup lock is required for processing by lazy_scan_prune. This is only + * necessary when the aggressive VACUUM needs to freeze some tuple XIDs from + * one or more tuples on the page. We always return true for non-aggressive + * callers. + * + * If this function returns true, *has_lpdead_items gets set to true or false + * depending on whether, upon return from this function, any LP_DEAD items are + * present on the page. 
If this function returns false, *has_lpdead_items + * is not updated. + */ +static bool +lazy_scan_noprune(LVRelState *vacrel, + Buffer buf, + BlockNumber blkno, + Page page, + bool *has_lpdead_items) +{ + OffsetNumber offnum, + maxoff; + int lpdead_items, + live_tuples, + recently_dead_tuples, + missed_dead_tuples; + bool hastup; + HeapTupleHeader tupleheader; + TransactionId NoFreezePageRelfrozenXid = vacrel->NewRelfrozenXid; + MultiXactId NoFreezePageRelminMxid = vacrel->NewRelminMxid; + OffsetNumber deadoffsets[MaxHeapTuplesPerPage]; + + Assert(BufferGetBlockNumber(buf) == blkno); + + hastup = false; /* for now */ + + lpdead_items = 0; + live_tuples = 0; + recently_dead_tuples = 0; + missed_dead_tuples = 0; + + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + ItemId itemid; + HeapTupleData tuple; + + vacrel->offnum = offnum; + itemid = PageGetItemId(page, offnum); + + if (!ItemIdIsUsed(itemid)) + continue; + + if (ItemIdIsRedirected(itemid)) + { + hastup = true; + continue; + } + + if (ItemIdIsDead(itemid)) + { + /* + * Deliberately don't set hastup=true here. See same point in + * lazy_scan_prune for an explanation. + */ + deadoffsets[lpdead_items++] = offnum; + continue; + } + + hastup = true; /* page prevents rel truncation */ + tupleheader = (HeapTupleHeader) PageGetItem(page, itemid); + if (tdeheap_tuple_should_freeze(tupleheader, &vacrel->cutoffs, + &NoFreezePageRelfrozenXid, + &NoFreezePageRelminMxid)) + { + /* Tuple with XID < FreezeLimit (or MXID < MultiXactCutoff) */ + if (vacrel->aggressive) + { + /* + * Aggressive VACUUMs must always be able to advance rel's + * relfrozenxid to a value >= FreezeLimit (and be able to + * advance rel's relminmxid to a value >= MultiXactCutoff). + * The ongoing aggressive VACUUM won't be able to do that + * unless it can freeze an XID (or MXID) from this tuple now. + * + * The only safe option is to have caller perform processing + * of this page using lazy_scan_prune. Caller might have to + * wait a while for a cleanup lock, but it can't be helped. + */ + vacrel->offnum = InvalidOffsetNumber; + return false; + } + + /* + * Non-aggressive VACUUMs are under no obligation to advance + * relfrozenxid (even by one XID). We can be much laxer here. + * + * Currently we always just accept an older final relfrozenxid + * and/or relminmxid value. We never make caller wait or work a + * little harder, even when it likely makes sense to do so. + */ + } + + ItemPointerSet(&(tuple.t_self), blkno, offnum); + tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid); + tuple.t_len = ItemIdGetLength(itemid); + tuple.t_tableOid = RelationGetRelid(vacrel->rel); + + switch (HeapTupleSatisfiesVacuum(&tuple, vacrel->cutoffs.OldestXmin, + buf)) + { + case HEAPTUPLE_DELETE_IN_PROGRESS: + case HEAPTUPLE_LIVE: + + /* + * Count both cases as live, just like lazy_scan_prune + */ + live_tuples++; + + break; + case HEAPTUPLE_DEAD: + + /* + * There is some useful work for pruning to do, that won't be + * done due to failure to get a cleanup lock. 
+ */ + missed_dead_tuples++; + break; + case HEAPTUPLE_RECENTLY_DEAD: + + /* + * Count in recently_dead_tuples, just like lazy_scan_prune + */ + recently_dead_tuples++; + break; + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * Do not count these rows as live, just like lazy_scan_prune + */ + break; + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + break; + } + } + + vacrel->offnum = InvalidOffsetNumber; + + /* + * By here we know for sure that caller can put off freezing and pruning + * this particular page until the next VACUUM. Remember its details now. + * (lazy_scan_prune expects a clean slate, so we have to do this last.) + */ + vacrel->NewRelfrozenXid = NoFreezePageRelfrozenXid; + vacrel->NewRelminMxid = NoFreezePageRelminMxid; + + /* Save any LP_DEAD items found on the page in dead_items */ + if (vacrel->nindexes == 0) + { + /* Using one-pass strategy (since table has no indexes) */ + if (lpdead_items > 0) + { + /* + * Perfunctory handling for the corner case where a single pass + * strategy VACUUM cannot get a cleanup lock, and it turns out + * that there is one or more LP_DEAD items: just count the LP_DEAD + * items as missed_dead_tuples instead. (This is a bit dishonest, + * but it beats having to maintain specialized heap vacuuming code + * forever, for vanishingly little benefit.) + */ + hastup = true; + missed_dead_tuples += lpdead_items; + } + } + else if (lpdead_items > 0) + { + /* + * Page has LP_DEAD items, and so any references/TIDs that remain in + * indexes will be deleted during index vacuuming (and then marked + * LP_UNUSED in the heap) + */ + vacrel->lpdead_item_pages++; + + dead_items_add(vacrel, blkno, deadoffsets, lpdead_items); + + vacrel->lpdead_items += lpdead_items; + } + + /* + * Finally, add relevant page-local counts to whole-VACUUM counts + */ + vacrel->live_tuples += live_tuples; + vacrel->recently_dead_tuples += recently_dead_tuples; + vacrel->missed_dead_tuples += missed_dead_tuples; + if (missed_dead_tuples > 0) + vacrel->missed_dead_pages++; + + /* Can't truncate this page */ + if (hastup) + vacrel->nonempty_pages = blkno + 1; + + /* Did we find LP_DEAD items? */ + *has_lpdead_items = (lpdead_items > 0); + + /* Caller won't need to call lazy_scan_prune with same page */ + return true; +} + +/* + * Main entry point for index vacuuming and heap vacuuming. + * + * Removes items collected in dead_items from table's indexes, then marks the + * same items LP_UNUSED in the heap. See the comments above lazy_scan_heap + * for full details. + * + * Also empties dead_items, freeing up space for later TIDs. + * + * We may choose to bypass index vacuuming at this point, though only when the + * ongoing VACUUM operation will definitely only have one index scan/round of + * index vacuuming. + */ +static void +lazy_vacuum(LVRelState *vacrel) +{ + bool bypass; + + /* Should not end up here with no indexes */ + Assert(vacrel->nindexes > 0); + Assert(vacrel->lpdead_item_pages > 0); + + if (!vacrel->do_index_vacuuming) + { + Assert(!vacrel->do_index_cleanup); + dead_items_reset(vacrel); + return; + } + + /* + * Consider bypassing index vacuuming (and heap vacuuming) entirely. + * + * We currently only do this in cases where the number of LP_DEAD items + * for the entire VACUUM operation is close to zero. This avoids sharp + * discontinuities in the duration and overhead of successive VACUUM + * operations that run against the same table with a fixed workload. 
+ * Ideally, successive VACUUM operations will behave as if there are + * exactly zero LP_DEAD items in cases where there are close to zero. + * + * This is likely to be helpful with a table that is continually affected + * by UPDATEs that can mostly apply the HOT optimization, but occasionally + * have small aberrations that lead to just a few heap pages retaining + * only one or two LP_DEAD items. This is pretty common; even when the + * DBA goes out of their way to make UPDATEs use HOT, it is practically + * impossible to predict whether HOT will be applied in 100% of cases. + * It's far easier to ensure that 99%+ of all UPDATEs against a table use + * HOT through careful tuning. + */ + bypass = false; + if (vacrel->consider_bypass_optimization && vacrel->rel_pages > 0) + { + BlockNumber threshold; + + Assert(vacrel->num_index_scans == 0); + Assert(vacrel->lpdead_items == vacrel->dead_items_info->num_items); + Assert(vacrel->do_index_vacuuming); + Assert(vacrel->do_index_cleanup); + + /* + * This crossover point at which we'll start to do index vacuuming is + * expressed as a percentage of the total number of heap pages in the + * table that are known to have at least one LP_DEAD item. This is + * much more important than the total number of LP_DEAD items, since + * it's a proxy for the number of heap pages whose visibility map bits + * cannot be set on account of bypassing index and heap vacuuming. + * + * We apply one further precautionary test: the space currently used + * to store the TIDs (TIDs that now all point to LP_DEAD items) must + * not exceed 32MB. This limits the risk that we will bypass index + * vacuuming again and again until eventually there is a VACUUM whose + * dead_items space is not CPU cache resident. + * + * We don't take any special steps to remember the LP_DEAD items (such + * as counting them in our final update to the stats system) when the + * optimization is applied. Though the accounting used in analyze.c's + * acquire_sample_rows() will recognize the same LP_DEAD items as dead + * rows in its own stats report, that's okay. The discrepancy should + * be negligible. If this optimization is ever expanded to cover more + * cases then this may need to be reconsidered. + */ + threshold = (double) vacrel->rel_pages * BYPASS_THRESHOLD_PAGES; + bypass = (vacrel->lpdead_item_pages < threshold && + (TidStoreMemoryUsage(vacrel->dead_items) < (32L * 1024L * 1024L))); + } + + if (bypass) + { + /* + * There are almost zero TIDs. Behave as if there were precisely + * zero: bypass index vacuuming, but do index cleanup. + * + * We expect that the ongoing VACUUM operation will finish very + * quickly, so there is no point in considering speeding up as a + * failsafe against wraparound failure. (Index cleanup is expected to + * finish very quickly in cases where there were no ambulkdelete() + * calls.) + */ + vacrel->do_index_vacuuming = false; + } + else if (lazy_vacuum_all_indexes(vacrel)) + { + /* + * We successfully completed a round of index vacuuming. Do related + * heap vacuuming now. + */ + lazy_vacuum_tdeheap_rel(vacrel); + } + else + { + /* + * Failsafe case. + * + * We attempted index vacuuming, but didn't finish a full round/full + * index scan. This happens when relfrozenxid or relminmxid is too + * far in the past. + * + * From this point on the VACUUM operation will do no further index + * vacuuming or heap vacuuming. This VACUUM operation won't end up + * back here again. 
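+ * (lazy_check_wraparound_failsafe already cleared do_index_vacuuming
+ * and do_index_cleanup, so any later call to this function takes the
+ * early-return path at the top instead.)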
+ */ + Assert(VacuumFailsafeActive); + } + + /* + * Forget the LP_DEAD items that we just vacuumed (or just decided to not + * vacuum) + */ + dead_items_reset(vacrel); +} + +/* + * lazy_vacuum_all_indexes() -- Main entry for index vacuuming + * + * Returns true in the common case when all indexes were successfully + * vacuumed. Returns false in rare cases where we determined that the ongoing + * VACUUM operation is at risk of taking too long to finish, leading to + * wraparound failure. + */ +static bool +lazy_vacuum_all_indexes(LVRelState *vacrel) +{ + bool allindexes = true; + double old_live_tuples = vacrel->rel->rd_rel->reltuples; + const int progress_start_index[] = { + PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_INDEXES_TOTAL + }; + const int progress_end_index[] = { + PROGRESS_VACUUM_INDEXES_TOTAL, + PROGRESS_VACUUM_INDEXES_PROCESSED, + PROGRESS_VACUUM_NUM_INDEX_VACUUMS + }; + int64 progress_start_val[2]; + int64 progress_end_val[3]; + + Assert(vacrel->nindexes > 0); + Assert(vacrel->do_index_vacuuming); + Assert(vacrel->do_index_cleanup); + + /* Precheck for XID wraparound emergencies */ + if (lazy_check_wraparound_failsafe(vacrel)) + { + /* Wraparound emergency -- don't even start an index scan */ + return false; + } + + /* + * Report that we are now vacuuming indexes and the number of indexes to + * vacuum. + */ + progress_start_val[0] = PROGRESS_VACUUM_PHASE_VACUUM_INDEX; + progress_start_val[1] = vacrel->nindexes; + pgstat_progress_update_multi_param(2, progress_start_index, progress_start_val); + + if (!ParallelVacuumIsActive(vacrel)) + { + for (int idx = 0; idx < vacrel->nindexes; idx++) + { + Relation indrel = vacrel->indrels[idx]; + IndexBulkDeleteResult *istat = vacrel->indstats[idx]; + + vacrel->indstats[idx] = lazy_vacuum_one_index(indrel, istat, + old_live_tuples, + vacrel); + + /* Report the number of indexes vacuumed */ + pgstat_progress_update_param(PROGRESS_VACUUM_INDEXES_PROCESSED, + idx + 1); + + if (lazy_check_wraparound_failsafe(vacrel)) + { + /* Wraparound emergency -- end current index scan */ + allindexes = false; + break; + } + } + } + else + { + /* Outsource everything to parallel variant */ + parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples, + vacrel->num_index_scans); + + /* + * Do a postcheck to consider applying wraparound failsafe now. Note + * that parallel VACUUM only gets the precheck and this postcheck. + */ + if (lazy_check_wraparound_failsafe(vacrel)) + allindexes = false; + } + + /* + * We delete all LP_DEAD items from the first heap pass in all indexes on + * each call here (except calls where we choose to do the failsafe). This + * makes the next call to lazy_vacuum_tdeheap_rel() safe (except in the event + * of the failsafe triggering, which prevents the next call from taking + * place). + */ + Assert(vacrel->num_index_scans > 0 || + vacrel->dead_items_info->num_items == vacrel->lpdead_items); + Assert(allindexes || VacuumFailsafeActive); + + /* + * Increase and report the number of index scans. Also, we reset + * PROGRESS_VACUUM_INDEXES_TOTAL and PROGRESS_VACUUM_INDEXES_PROCESSED. + * + * We deliberately include the case where we started a round of bulk + * deletes that we weren't able to finish due to the failsafe triggering. 
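+ *
+ * A sketch of the resulting transition, assuming the standard
+ * pg_stat_progress_vacuum columns: indexes_total and indexes_processed
+ * drop back to 0 while index_vacuum_count becomes num_index_scans, all
+ * in a single atomic multi-param update.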
+ */ + vacrel->num_index_scans++; + progress_end_val[0] = 0; + progress_end_val[1] = 0; + progress_end_val[2] = vacrel->num_index_scans; + pgstat_progress_update_multi_param(3, progress_end_index, progress_end_val); + + return allindexes; +} + +/* + * lazy_vacuum_tdeheap_rel() -- second pass over the heap for two pass strategy + * + * This routine marks LP_DEAD items in vacrel->dead_items as LP_UNUSED. Pages + * that never had lazy_scan_prune record LP_DEAD items are not visited at all. + * + * We may also be able to truncate the line pointer array of the heap pages we + * visit. If there is a contiguous group of LP_UNUSED items at the end of the + * array, it can be reclaimed as free space. These LP_UNUSED items usually + * start out as LP_DEAD items recorded by lazy_scan_prune (we set items from + * each page to LP_UNUSED, and then consider if it's possible to truncate the + * page's line pointer array). + * + * Note: the reason for doing this as a second pass is we cannot remove the + * tuples until we've removed their index entries, and we want to process + * index entry removal in batches as large as possible. + */ +static void +lazy_vacuum_tdeheap_rel(LVRelState *vacrel) +{ + BlockNumber vacuumed_pages = 0; + Buffer vmbuffer = InvalidBuffer; + LVSavedErrInfo saved_err_info; + TidStoreIter *iter; + TidStoreIterResult *iter_result; + + Assert(vacrel->do_index_vacuuming); + Assert(vacrel->do_index_cleanup); + Assert(vacrel->num_index_scans > 0); + + /* Report that we are now vacuuming the heap */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_VACUUM_HEAP); + + /* Update error traceback information */ + update_vacuum_error_info(vacrel, &saved_err_info, + VACUUM_ERRCB_PHASE_VACUUM_HEAP, + InvalidBlockNumber, InvalidOffsetNumber); + + iter = TidStoreBeginIterate(vacrel->dead_items); + while ((iter_result = TidStoreIterateNext(iter)) != NULL) + { + BlockNumber blkno; + Buffer buf; + Page page; + Size freespace; + + vacuum_delay_point(); + + blkno = iter_result->blkno; + vacrel->blkno = blkno; + + /* + * Pin the visibility map page in case we need to mark the page + * all-visible. In most cases this will be very cheap, because we'll + * already have the correct page pinned anyway. + */ + tdeheap_visibilitymap_pin(vacrel->rel, blkno, &vmbuffer); + + /* We need a non-cleanup exclusive lock to mark dead_items unused */ + buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL, + vacrel->bstrategy); + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + lazy_vacuum_tdeheap_page(vacrel, blkno, buf, iter_result->offsets, + iter_result->num_offsets, vmbuffer); + + /* Now that we've vacuumed the page, record its available space */ + page = BufferGetPage(buf); + freespace = PageGetHeapFreeSpace(page); + + UnlockReleaseBuffer(buf); + RecordPageWithFreeSpace(vacrel->rel, blkno, freespace); + vacuumed_pages++; + } + TidStoreEndIterate(iter); + + vacrel->blkno = InvalidBlockNumber; + if (BufferIsValid(vmbuffer)) + ReleaseBuffer(vmbuffer); + + /* + * We set all LP_DEAD items from the first heap pass to LP_UNUSED during + * the second heap pass. No more, no less. 
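+ * (The assertion below can only verify this directly during the first
+ * round of index vacuuming; once dead_items has been reset for a second
+ * round, the whole-VACUUM counters no longer match the store, hence the
+ * num_index_scans > 1 escape hatch.)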
+ */ + Assert(vacrel->num_index_scans > 1 || + (vacrel->dead_items_info->num_items == vacrel->lpdead_items && + vacuumed_pages == vacrel->lpdead_item_pages)); + + ereport(DEBUG2, + (errmsg("table \"%s\": removed %lld dead item identifiers in %u pages", + vacrel->relname, (long long) vacrel->dead_items_info->num_items, + vacuumed_pages))); + + /* Revert to the previous phase information for error traceback */ + restore_vacuum_error_info(vacrel, &saved_err_info); +} + +/* + * lazy_vacuum_tdeheap_page() -- free page's LP_DEAD items listed in the + * vacrel->dead_items store. + * + * Caller must have an exclusive buffer lock on the buffer (though a full + * cleanup lock is also acceptable). vmbuffer must be valid and already have + * a pin on blkno's visibility map page. + */ +static void +lazy_vacuum_tdeheap_page(LVRelState *vacrel, BlockNumber blkno, Buffer buffer, + OffsetNumber *deadoffsets, int num_offsets, + Buffer vmbuffer) +{ + Page page = BufferGetPage(buffer); + OffsetNumber unused[MaxHeapTuplesPerPage]; + int nunused = 0; + TransactionId visibility_cutoff_xid; + bool all_frozen; + LVSavedErrInfo saved_err_info; + + Assert(vacrel->do_index_vacuuming); + + pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno); + + /* Update error traceback information */ + update_vacuum_error_info(vacrel, &saved_err_info, + VACUUM_ERRCB_PHASE_VACUUM_HEAP, blkno, + InvalidOffsetNumber); + + START_CRIT_SECTION(); + + for (int i = 0; i < num_offsets; i++) + { + ItemId itemid; + OffsetNumber toff = deadoffsets[i]; + + itemid = PageGetItemId(page, toff); + + Assert(ItemIdIsDead(itemid) && !ItemIdHasStorage(itemid)); + ItemIdSetUnused(itemid); + unused[nunused++] = toff; + } + + Assert(nunused > 0); + + /* Attempt to truncate line pointer array now */ + PageTruncateLinePointerArray(page); + + /* + * Mark buffer dirty before we write WAL. + */ + MarkBufferDirty(buffer); + + /* XLOG stuff */ + if (RelationNeedsWAL(vacrel->rel)) + { + log_tdeheap_prune_and_freeze(vacrel->rel, buffer, + InvalidTransactionId, + false, /* no cleanup lock required */ + PRUNE_VACUUM_CLEANUP, + NULL, 0, /* frozen */ + NULL, 0, /* redirected */ + NULL, 0, /* dead */ + unused, nunused); + } + + /* + * End critical section, so we safely can do visibility tests (which + * possibly need to perform IO and allocate memory!). If we crash now the + * page (including the corresponding vm bit) might not be marked all + * visible, but that's fine. A later vacuum will fix that. + */ + END_CRIT_SECTION(); + + /* + * Now that we have removed the LP_DEAD items from the page, once again + * check if the page has become all-visible. The page is already marked + * dirty, exclusively locked, and, if needed, a full page image has been + * emitted. + */ + Assert(!PageIsAllVisible(page)); + if (tdeheap_page_is_all_visible(vacrel, buffer, &visibility_cutoff_xid, + &all_frozen)) + { + uint8 flags = VISIBILITYMAP_ALL_VISIBLE; + + if (all_frozen) + { + Assert(!TransactionIdIsValid(visibility_cutoff_xid)); + flags |= VISIBILITYMAP_ALL_FROZEN; + } + + PageSetAllVisible(page); + tdeheap_visibilitymap_set(vacrel->rel, blkno, buffer, InvalidXLogRecPtr, + vmbuffer, visibility_cutoff_xid, flags); + } + + /* Revert to the previous phase information for error traceback */ + restore_vacuum_error_info(vacrel, &saved_err_info); +} + +/* + * Trigger the failsafe to avoid wraparound failure when vacrel table has a + * relfrozenxid and/or relminmxid that is dangerously far in the past. 
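+ * (The age threshold itself is applied by vacuum_xid_failsafe_check(),
+ * which is driven by the vacuum_failsafe_age and
+ * vacuum_multixact_failsafe_age settings.)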
+ * Triggering the failsafe makes the ongoing VACUUM bypass any further index + * vacuuming and heap vacuuming. Truncating the heap is also bypassed. + * + * Any remaining work (work that VACUUM cannot just bypass) is typically sped + * up when the failsafe triggers. VACUUM stops applying any cost-based delay + * that it started out with. + * + * Returns true when failsafe has been triggered. + */ +static bool +lazy_check_wraparound_failsafe(LVRelState *vacrel) +{ + /* Don't warn more than once per VACUUM */ + if (VacuumFailsafeActive) + return true; + + if (unlikely(vacuum_xid_failsafe_check(&vacrel->cutoffs))) + { + const int progress_index[] = { + PROGRESS_VACUUM_INDEXES_TOTAL, + PROGRESS_VACUUM_INDEXES_PROCESSED + }; + int64 progress_val[2] = {0, 0}; + + VacuumFailsafeActive = true; + + /* + * Abandon use of a buffer access strategy to allow use of all of + * shared buffers. We assume the caller who allocated the memory for + * the BufferAccessStrategy will free it. + */ + vacrel->bstrategy = NULL; + + /* Disable index vacuuming, index cleanup, and heap rel truncation */ + vacrel->do_index_vacuuming = false; + vacrel->do_index_cleanup = false; + vacrel->do_rel_truncate = false; + + /* Reset the progress counters */ + pgstat_progress_update_multi_param(2, progress_index, progress_val); + + ereport(WARNING, + (errmsg("bypassing nonessential maintenance of table \"%s.%s.%s\" as a failsafe after %d index scans", + vacrel->dbname, vacrel->relnamespace, vacrel->relname, + vacrel->num_index_scans), + errdetail("The table's relfrozenxid or relminmxid is too far in the past."), + errhint("Consider increasing configuration parameter \"maintenance_work_mem\" or \"autovacuum_work_mem\".\n" + "You might also need to consider other ways for VACUUM to keep up with the allocation of transaction IDs."))); + + /* Stop applying cost limits from this point on */ + VacuumCostActive = false; + VacuumCostBalance = 0; + + return true; + } + + return false; +} + +/* + * lazy_cleanup_all_indexes() -- cleanup all indexes of relation. + */ +static void +lazy_cleanup_all_indexes(LVRelState *vacrel) +{ + double reltuples = vacrel->new_rel_tuples; + bool estimated_count = vacrel->scanned_pages < vacrel->rel_pages; + const int progress_start_index[] = { + PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_INDEXES_TOTAL + }; + const int progress_end_index[] = { + PROGRESS_VACUUM_INDEXES_TOTAL, + PROGRESS_VACUUM_INDEXES_PROCESSED + }; + int64 progress_start_val[2]; + int64 progress_end_val[2] = {0, 0}; + + Assert(vacrel->do_index_cleanup); + Assert(vacrel->nindexes > 0); + + /* + * Report that we are now cleaning up indexes and the number of indexes to + * cleanup. 
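+ * (These are the same two counters used by the index-vacuum phase; they
+ * are zeroed again at the end of this function.)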
+ */ + progress_start_val[0] = PROGRESS_VACUUM_PHASE_INDEX_CLEANUP; + progress_start_val[1] = vacrel->nindexes; + pgstat_progress_update_multi_param(2, progress_start_index, progress_start_val); + + if (!ParallelVacuumIsActive(vacrel)) + { + for (int idx = 0; idx < vacrel->nindexes; idx++) + { + Relation indrel = vacrel->indrels[idx]; + IndexBulkDeleteResult *istat = vacrel->indstats[idx]; + + vacrel->indstats[idx] = + lazy_cleanup_one_index(indrel, istat, reltuples, + estimated_count, vacrel); + + /* Report the number of indexes cleaned up */ + pgstat_progress_update_param(PROGRESS_VACUUM_INDEXES_PROCESSED, + idx + 1); + } + } + else + { + /* Outsource everything to parallel variant */ + parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples, + vacrel->num_index_scans, + estimated_count); + } + + /* Reset the progress counters */ + pgstat_progress_update_multi_param(2, progress_end_index, progress_end_val); +} + +/* + * lazy_vacuum_one_index() -- vacuum index relation. + * + * Delete all the index tuples containing a TID collected in + * vacrel->dead_items. Also update running statistics. Exact + * details depend on index AM's ambulkdelete routine. + * + * reltuples is the number of heap tuples to be passed to the + * bulkdelete callback. It's always assumed to be estimated. + * See indexam.sgml for more info. + * + * Returns bulk delete stats derived from input stats + */ +static IndexBulkDeleteResult * +lazy_vacuum_one_index(Relation indrel, IndexBulkDeleteResult *istat, + double reltuples, LVRelState *vacrel) +{ + IndexVacuumInfo ivinfo; + LVSavedErrInfo saved_err_info; + + ivinfo.index = indrel; + ivinfo.heaprel = vacrel->rel; + ivinfo.analyze_only = false; + ivinfo.report_progress = false; + ivinfo.estimated_count = true; + ivinfo.message_level = DEBUG2; + ivinfo.num_heap_tuples = reltuples; + ivinfo.strategy = vacrel->bstrategy; + + /* + * Update error traceback information. + * + * The index name is saved during this phase and restored immediately + * after this phase. See vacuum_error_callback. + */ + Assert(vacrel->indname == NULL); + vacrel->indname = pstrdup(RelationGetRelationName(indrel)); + update_vacuum_error_info(vacrel, &saved_err_info, + VACUUM_ERRCB_PHASE_VACUUM_INDEX, + InvalidBlockNumber, InvalidOffsetNumber); + + /* Do bulk deletion */ + istat = vac_bulkdel_one_index(&ivinfo, istat, (void *) vacrel->dead_items, + vacrel->dead_items_info); + + /* Revert to the previous phase information for error traceback */ + restore_vacuum_error_info(vacrel, &saved_err_info); + pfree(vacrel->indname); + vacrel->indname = NULL; + + return istat; +} + +/* + * lazy_cleanup_one_index() -- do post-vacuum cleanup for index relation. + * + * Calls index AM's amvacuumcleanup routine. reltuples is the number + * of heap tuples and estimated_count is true if reltuples is an + * estimated value. See indexam.sgml for more info. + * + * Returns bulk delete stats derived from input stats + */ +static IndexBulkDeleteResult * +lazy_cleanup_one_index(Relation indrel, IndexBulkDeleteResult *istat, + double reltuples, bool estimated_count, + LVRelState *vacrel) +{ + IndexVacuumInfo ivinfo; + LVSavedErrInfo saved_err_info; + + ivinfo.index = indrel; + ivinfo.heaprel = vacrel->rel; + ivinfo.analyze_only = false; + ivinfo.report_progress = false; + ivinfo.estimated_count = estimated_count; + ivinfo.message_level = DEBUG2; + + ivinfo.num_heap_tuples = reltuples; + ivinfo.strategy = vacrel->bstrategy; + + /* + * Update error traceback information. 
+ * + * The index name is saved during this phase and restored immediately + * after this phase. See vacuum_error_callback. + */ + Assert(vacrel->indname == NULL); + vacrel->indname = pstrdup(RelationGetRelationName(indrel)); + update_vacuum_error_info(vacrel, &saved_err_info, + VACUUM_ERRCB_PHASE_INDEX_CLEANUP, + InvalidBlockNumber, InvalidOffsetNumber); + + istat = vac_cleanup_one_index(&ivinfo, istat); + + /* Revert to the previous phase information for error traceback */ + restore_vacuum_error_info(vacrel, &saved_err_info); + pfree(vacrel->indname); + vacrel->indname = NULL; + + return istat; +} + +/* + * should_attempt_truncation - should we attempt to truncate the heap? + * + * Don't even think about it unless we have a shot at releasing a goodly + * number of pages. Otherwise, the time taken isn't worth it, mainly because + * an AccessExclusive lock must be replayed on any hot standby, where it can + * be particularly disruptive. + * + * Also don't attempt it if wraparound failsafe is in effect. The entire + * system might be refusing to allocate new XIDs at this point. The system + * definitely won't return to normal unless and until VACUUM actually advances + * the oldest relfrozenxid -- which hasn't happened for target rel just yet. + * If lazy_truncate_heap attempted to acquire an AccessExclusiveLock to + * truncate the table under these circumstances, an XID exhaustion error might + * make it impossible for VACUUM to fix the underlying XID exhaustion problem. + * There is very little chance of truncation working out when the failsafe is + * in effect in any case. lazy_scan_prune makes the optimistic assumption + * that any LP_DEAD items it encounters will always be LP_UNUSED by the time + * we're called. + */ +static bool +should_attempt_truncation(LVRelState *vacrel) +{ + BlockNumber possibly_freeable; + + if (!vacrel->do_rel_truncate || VacuumFailsafeActive) + return false; + + possibly_freeable = vacrel->rel_pages - vacrel->nonempty_pages; + if (possibly_freeable > 0 && + (possibly_freeable >= REL_TRUNCATE_MINIMUM || + possibly_freeable >= vacrel->rel_pages / REL_TRUNCATE_FRACTION)) + return true; + + return false; +} + +/* + * lazy_truncate_heap - try to truncate off any empty pages at the end + */ +static void +lazy_truncate_heap(LVRelState *vacrel) +{ + BlockNumber orig_rel_pages = vacrel->rel_pages; + BlockNumber new_rel_pages; + bool lock_waiter_detected; + int lock_retry; + + /* Report that we are now truncating */ + pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, + PROGRESS_VACUUM_PHASE_TRUNCATE); + + /* Update error traceback information one last time */ + update_vacuum_error_info(vacrel, NULL, VACUUM_ERRCB_PHASE_TRUNCATE, + vacrel->nonempty_pages, InvalidOffsetNumber); + + /* + * Loop until no more truncating can be done. + */ + do + { + /* + * We need full exclusive lock on the relation in order to do + * truncation. If we can't get it, give up rather than waiting --- we + * don't want to block other backends, and we don't want to deadlock + * (which is quite possible considering we already hold a lower-grade + * lock). + */ + lock_waiter_detected = false; + lock_retry = 0; + while (true) + { + if (ConditionalLockRelation(vacrel->rel, AccessExclusiveLock)) + break; + + /* + * Check for interrupts while trying to (re-)acquire the exclusive + * lock. + */ + CHECK_FOR_INTERRUPTS(); + + if (++lock_retry > (VACUUM_TRUNCATE_LOCK_TIMEOUT / + VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL)) + { + /* + * We failed to establish the lock in the specified number of + * retries. 
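+ * (With the usual settings of VACUUM_TRUNCATE_LOCK_TIMEOUT and
+ * VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL -- upstream 5000 ms and 50 ms --
+ * that is 100 attempts of 50 ms each, i.e. about five seconds.)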
This means we give up truncating. + */ + ereport(vacrel->verbose ? INFO : DEBUG2, + (errmsg("\"%s\": stopping truncate due to conflicting lock request", + vacrel->relname))); + return; + } + + (void) WaitLatch(MyLatch, + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH, + VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL, + WAIT_EVENT_VACUUM_TRUNCATE); + ResetLatch(MyLatch); + } + + /* + * Now that we have exclusive lock, look to see if the rel has grown + * whilst we were vacuuming with non-exclusive lock. If so, give up; + * the newly added pages presumably contain non-deletable tuples. + */ + new_rel_pages = RelationGetNumberOfBlocks(vacrel->rel); + if (new_rel_pages != orig_rel_pages) + { + /* + * Note: we intentionally don't update vacrel->rel_pages with the + * new rel size here. If we did, it would amount to assuming that + * the new pages are empty, which is unlikely. Leaving the numbers + * alone amounts to assuming that the new pages have the same + * tuple density as existing ones, which is less unlikely. + */ + UnlockRelation(vacrel->rel, AccessExclusiveLock); + return; + } + + /* + * Scan backwards from the end to verify that the end pages actually + * contain no tuples. This is *necessary*, not optional, because + * other backends could have added tuples to these pages whilst we + * were vacuuming. + */ + new_rel_pages = count_nondeletable_pages(vacrel, &lock_waiter_detected); + vacrel->blkno = new_rel_pages; + + if (new_rel_pages >= orig_rel_pages) + { + /* can't do anything after all */ + UnlockRelation(vacrel->rel, AccessExclusiveLock); + return; + } + + /* + * Okay to truncate. + */ + RelationTruncate(vacrel->rel, new_rel_pages); + + /* + * We can release the exclusive lock as soon as we have truncated. + * Other backends can't safely access the relation until they have + * processed the smgr invalidation that smgrtruncate sent out ... but + * that should happen as part of standard invalidation processing once + * they acquire lock on the relation. + */ + UnlockRelation(vacrel->rel, AccessExclusiveLock); + + /* + * Update statistics. Here, it *is* correct to adjust rel_pages + * without also touching reltuples, since the tuple count wasn't + * changed by the truncation. + */ + vacrel->removed_pages += orig_rel_pages - new_rel_pages; + vacrel->rel_pages = new_rel_pages; + + ereport(vacrel->verbose ? INFO : DEBUG2, + (errmsg("table \"%s\": truncated %u to %u pages", + vacrel->relname, + orig_rel_pages, new_rel_pages))); + orig_rel_pages = new_rel_pages; + } while (new_rel_pages > vacrel->nonempty_pages && lock_waiter_detected); +} + +/* + * Rescan end pages to verify that they are (still) empty of tuples. + * + * Returns number of nondeletable pages (last nonempty page + 1). + */ +static BlockNumber +count_nondeletable_pages(LVRelState *vacrel, bool *lock_waiter_detected) +{ + BlockNumber blkno; + BlockNumber prefetchedUntil; + instr_time starttime; + + /* Initialize the starttime if we check for conflicting lock requests */ + INSTR_TIME_SET_CURRENT(starttime); + + /* + * Start checking blocks at what we believe relation end to be and move + * backwards. (Strange coding of loop control is needed because blkno is + * unsigned.) To make the scan faster, we prefetch a few blocks at a time + * in forward direction, so that OS-level readahead can kick in. 
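+ *
+ * Sketch of the windowing logic below (illustrative; assumes the
+ * upstream PREFETCH_SIZE of 32): whenever the scan drops below
+ * prefetchedUntil, blkno is rounded down to a window boundary and the
+ * window is prefetched in ascending order, e.g. for blkno = 75:
+ *
+ *     prefetchStart = blkno & ~(PREFETCH_SIZE - 1);    75 -> 64
+ *     prefetch blocks 64..75, then prefetchedUntil = 64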
+ */ + blkno = vacrel->rel_pages; + StaticAssertStmt((PREFETCH_SIZE & (PREFETCH_SIZE - 1)) == 0, + "prefetch size must be power of 2"); + prefetchedUntil = InvalidBlockNumber; + while (blkno > vacrel->nonempty_pages) + { + Buffer buf; + Page page; + OffsetNumber offnum, + maxoff; + bool hastup; + + /* + * Check if another process requests a lock on our relation. We are + * holding an AccessExclusiveLock here, so they will be waiting. We + * only do this once per VACUUM_TRUNCATE_LOCK_CHECK_INTERVAL, and we + * only check if that interval has elapsed once every 32 blocks to + * keep the number of system calls and actual shared lock table + * lookups to a minimum. + */ + if ((blkno % 32) == 0) + { + instr_time currenttime; + instr_time elapsed; + + INSTR_TIME_SET_CURRENT(currenttime); + elapsed = currenttime; + INSTR_TIME_SUBTRACT(elapsed, starttime); + if ((INSTR_TIME_GET_MICROSEC(elapsed) / 1000) + >= VACUUM_TRUNCATE_LOCK_CHECK_INTERVAL) + { + if (LockHasWaitersRelation(vacrel->rel, AccessExclusiveLock)) + { + ereport(vacrel->verbose ? INFO : DEBUG2, + (errmsg("table \"%s\": suspending truncate due to conflicting lock request", + vacrel->relname))); + + *lock_waiter_detected = true; + return blkno; + } + starttime = currenttime; + } + } + + /* + * We don't insert a vacuum delay point here, because we have an + * exclusive lock on the table which we want to hold for as short a + * time as possible. We still need to check for interrupts however. + */ + CHECK_FOR_INTERRUPTS(); + + blkno--; + + /* If we haven't prefetched this lot yet, do so now. */ + if (prefetchedUntil > blkno) + { + BlockNumber prefetchStart; + BlockNumber pblkno; + + prefetchStart = blkno & ~(PREFETCH_SIZE - 1); + for (pblkno = prefetchStart; pblkno <= blkno; pblkno++) + { + PrefetchBuffer(vacrel->rel, MAIN_FORKNUM, pblkno); + CHECK_FOR_INTERRUPTS(); + } + prefetchedUntil = prefetchStart; + } + + buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL, + vacrel->bstrategy); + + /* In this phase we only need shared access to the buffer */ + LockBuffer(buf, BUFFER_LOCK_SHARE); + + page = BufferGetPage(buf); + + if (PageIsNew(page) || PageIsEmpty(page)) + { + UnlockReleaseBuffer(buf); + continue; + } + + hastup = false; + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; + offnum <= maxoff; + offnum = OffsetNumberNext(offnum)) + { + ItemId itemid; + + itemid = PageGetItemId(page, offnum); + + /* + * Note: any non-unused item should be taken as a reason to keep + * this page. Even an LP_DEAD item makes truncation unsafe, since + * we must not have cleaned out its index entries. + */ + if (ItemIdIsUsed(itemid)) + { + hastup = true; + break; /* can stop scanning */ + } + } /* scan along page */ + + UnlockReleaseBuffer(buf); + + /* Done scanning if we found a tuple here */ + if (hastup) + return blkno + 1; + } + + /* + * If we fall out of the loop, all the previously-thought-to-be-empty + * pages still are; we need not bother to look at the last known-nonempty + * page. + */ + return vacrel->nonempty_pages; +} + +/* + * Allocate dead_items and dead_items_info (either using palloc, or in dynamic + * shared memory). Sets both in vacrel for caller. + * + * Also handles parallel initialization as part of allocating dead_items in + * DSM when required. + */ +static void +dead_items_alloc(LVRelState *vacrel, int nworkers) +{ + VacDeadItemsInfo *dead_items_info; + int vac_work_mem = AmAutoVacuumWorkerProcess() && + autovacuum_work_mem != -1 ? 
+ autovacuum_work_mem : maintenance_work_mem; + + /* + * Initialize state for a parallel vacuum. As of now, only one worker can + * be used for an index, so we invoke parallelism only if there are at + * least two indexes on a table. + */ + if (nworkers >= 0 && vacrel->nindexes > 1 && vacrel->do_index_vacuuming) + { + /* + * Since parallel workers cannot access data in temporary tables, we + * can't perform parallel vacuum on them. + */ + if (RelationUsesLocalBuffers(vacrel->rel)) + { + /* + * Give warning only if the user explicitly tries to perform a + * parallel vacuum on the temporary table. + */ + if (nworkers > 0) + ereport(WARNING, + (errmsg("disabling parallel option of vacuum on \"%s\" --- cannot vacuum temporary tables in parallel", + vacrel->relname))); + } + else + vacrel->pvs = parallel_vacuum_init(vacrel->rel, vacrel->indrels, + vacrel->nindexes, nworkers, + vac_work_mem, + vacrel->verbose ? INFO : DEBUG2, + vacrel->bstrategy); + + /* + * If parallel mode started, dead_items and dead_items_info spaces are + * allocated in DSM. + */ + if (ParallelVacuumIsActive(vacrel)) + { + vacrel->dead_items = parallel_vacuum_get_dead_items(vacrel->pvs, + &vacrel->dead_items_info); + return; + } + } + + /* + * Serial VACUUM case. Allocate both dead_items and dead_items_info + * locally. + */ + + dead_items_info = (VacDeadItemsInfo *) palloc(sizeof(VacDeadItemsInfo)); + dead_items_info->max_bytes = vac_work_mem * 1024L; + dead_items_info->num_items = 0; + vacrel->dead_items_info = dead_items_info; + + vacrel->dead_items = TidStoreCreateLocal(dead_items_info->max_bytes, true); +} + +/* + * Add the given block number and offset numbers to dead_items. + */ +static void +dead_items_add(LVRelState *vacrel, BlockNumber blkno, OffsetNumber *offsets, + int num_offsets) +{ + TidStore *dead_items = vacrel->dead_items; + const int prog_index[2] = { + PROGRESS_VACUUM_NUM_DEAD_ITEM_IDS, + PROGRESS_VACUUM_DEAD_TUPLE_BYTES + }; + int64 prog_val[2]; + + TidStoreSetBlockOffsets(dead_items, blkno, offsets, num_offsets); + vacrel->dead_items_info->num_items += num_offsets; + + /* update the progress information */ + prog_val[0] = vacrel->dead_items_info->num_items; + prog_val[1] = TidStoreMemoryUsage(dead_items); + pgstat_progress_update_multi_param(2, prog_index, prog_val); +} + +/* + * Forget all collected dead items. + */ +static void +dead_items_reset(LVRelState *vacrel) +{ + TidStore *dead_items = vacrel->dead_items; + + if (ParallelVacuumIsActive(vacrel)) + { + parallel_vacuum_reset_dead_items(vacrel->pvs); + return; + } + + /* Recreate the tidstore with the same max_bytes limitation */ + TidStoreDestroy(dead_items); + vacrel->dead_items = TidStoreCreateLocal(vacrel->dead_items_info->max_bytes, true); + + /* Reset the counter */ + vacrel->dead_items_info->num_items = 0; +} + +/* + * Perform cleanup for resources allocated in dead_items_alloc + */ +static void +dead_items_cleanup(LVRelState *vacrel) +{ + if (!ParallelVacuumIsActive(vacrel)) + { + /* Don't bother with pfree here */ + return; + } + + /* End parallel mode */ + parallel_vacuum_end(vacrel->pvs, vacrel->indstats); + vacrel->pvs = NULL; +} + +/* + * Check if every tuple in the given page is visible to all current and future + * transactions. Also return the visibility_cutoff_xid which is the highest + * xmin amongst the visible tuples. Set *all_frozen to true if every tuple + * on this page is frozen. + * + * This is a stripped down version of lazy_scan_prune(). If you change + * anything here, make sure that everything stays in sync. 
Note that an + * assertion calls us to verify that everybody still agrees. Be sure to avoid + * introducing new side-effects here. + */ +static bool +tdeheap_page_is_all_visible(LVRelState *vacrel, Buffer buf, + TransactionId *visibility_cutoff_xid, + bool *all_frozen) +{ + Page page = BufferGetPage(buf); + BlockNumber blockno = BufferGetBlockNumber(buf); + OffsetNumber offnum, + maxoff; + bool all_visible = true; + + *visibility_cutoff_xid = InvalidTransactionId; + *all_frozen = true; + + maxoff = PageGetMaxOffsetNumber(page); + for (offnum = FirstOffsetNumber; + offnum <= maxoff && all_visible; + offnum = OffsetNumberNext(offnum)) + { + ItemId itemid; + HeapTupleData tuple; + + /* + * Set the offset number so that we can display it along with any + * error that occurred while processing this tuple. + */ + vacrel->offnum = offnum; + itemid = PageGetItemId(page, offnum); + + /* Unused or redirect line pointers are of no interest */ + if (!ItemIdIsUsed(itemid) || ItemIdIsRedirected(itemid)) + continue; + + ItemPointerSet(&(tuple.t_self), blockno, offnum); + + /* + * Dead line pointers can have index pointers pointing to them. So + * they can't be treated as visible + */ + if (ItemIdIsDead(itemid)) + { + all_visible = false; + *all_frozen = false; + break; + } + + Assert(ItemIdIsNormal(itemid)); + + tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid); + tuple.t_len = ItemIdGetLength(itemid); + tuple.t_tableOid = RelationGetRelid(vacrel->rel); + + switch (HeapTupleSatisfiesVacuum(&tuple, vacrel->cutoffs.OldestXmin, + buf)) + { + case HEAPTUPLE_LIVE: + { + TransactionId xmin; + + /* Check comments in lazy_scan_prune. */ + if (!HeapTupleHeaderXminCommitted(tuple.t_data)) + { + all_visible = false; + *all_frozen = false; + break; + } + + /* + * The inserter definitely committed. But is it old enough + * that everyone sees it as committed? + */ + xmin = HeapTupleHeaderGetXmin(tuple.t_data); + if (!TransactionIdPrecedes(xmin, + vacrel->cutoffs.OldestXmin)) + { + all_visible = false; + *all_frozen = false; + break; + } + + /* Track newest xmin on page. */ + if (TransactionIdFollows(xmin, *visibility_cutoff_xid) && + TransactionIdIsNormal(xmin)) + *visibility_cutoff_xid = xmin; + + /* Check whether this tuple is already frozen or not */ + if (all_visible && *all_frozen && + tdeheap_tuple_needs_eventual_freeze(tuple.t_data)) + *all_frozen = false; + } + break; + + case HEAPTUPLE_DEAD: + case HEAPTUPLE_RECENTLY_DEAD: + case HEAPTUPLE_INSERT_IN_PROGRESS: + case HEAPTUPLE_DELETE_IN_PROGRESS: + { + all_visible = false; + *all_frozen = false; + break; + } + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + break; + } + } /* scan along page */ + + /* Clear the offset information once we have processed the given page. */ + vacrel->offnum = InvalidOffsetNumber; + + return all_visible; +} + +/* + * Update index statistics in pg_class if the statistics are accurate. 
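+ *
+ * "Accurate" means the index AM reported real counts: entries whose
+ * istat is NULL or has estimated_count set are skipped below, so that
+ * pg_class is never updated from an estimate.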
+ */ +static void +update_relstats_all_indexes(LVRelState *vacrel) +{ + Relation *indrels = vacrel->indrels; + int nindexes = vacrel->nindexes; + IndexBulkDeleteResult **indstats = vacrel->indstats; + + Assert(vacrel->do_index_cleanup); + + for (int idx = 0; idx < nindexes; idx++) + { + Relation indrel = indrels[idx]; + IndexBulkDeleteResult *istat = indstats[idx]; + + if (istat == NULL || istat->estimated_count) + continue; + + /* Update index statistics */ + vac_update_relstats(indrel, + istat->num_pages, + istat->num_index_tuples, + 0, + false, + InvalidTransactionId, + InvalidMultiXactId, + NULL, NULL, false); + } +} + +/* + * Error context callback for errors occurring during vacuum. The error + * context messages for index phases should match the messages set in parallel + * vacuum. If you change this function for those phases, change + * parallel_vacuum_error_callback() as well. + */ +static void +vacuum_error_callback(void *arg) +{ + LVRelState *errinfo = arg; + + switch (errinfo->phase) + { + case VACUUM_ERRCB_PHASE_SCAN_HEAP: + if (BlockNumberIsValid(errinfo->blkno)) + { + if (OffsetNumberIsValid(errinfo->offnum)) + errcontext("while scanning block %u offset %u of relation \"%s.%s\"", + errinfo->blkno, errinfo->offnum, errinfo->relnamespace, errinfo->relname); + else + errcontext("while scanning block %u of relation \"%s.%s\"", + errinfo->blkno, errinfo->relnamespace, errinfo->relname); + } + else + errcontext("while scanning relation \"%s.%s\"", + errinfo->relnamespace, errinfo->relname); + break; + + case VACUUM_ERRCB_PHASE_VACUUM_HEAP: + if (BlockNumberIsValid(errinfo->blkno)) + { + if (OffsetNumberIsValid(errinfo->offnum)) + errcontext("while vacuuming block %u offset %u of relation \"%s.%s\"", + errinfo->blkno, errinfo->offnum, errinfo->relnamespace, errinfo->relname); + else + errcontext("while vacuuming block %u of relation \"%s.%s\"", + errinfo->blkno, errinfo->relnamespace, errinfo->relname); + } + else + errcontext("while vacuuming relation \"%s.%s\"", + errinfo->relnamespace, errinfo->relname); + break; + + case VACUUM_ERRCB_PHASE_VACUUM_INDEX: + errcontext("while vacuuming index \"%s\" of relation \"%s.%s\"", + errinfo->indname, errinfo->relnamespace, errinfo->relname); + break; + + case VACUUM_ERRCB_PHASE_INDEX_CLEANUP: + errcontext("while cleaning up index \"%s\" of relation \"%s.%s\"", + errinfo->indname, errinfo->relnamespace, errinfo->relname); + break; + + case VACUUM_ERRCB_PHASE_TRUNCATE: + if (BlockNumberIsValid(errinfo->blkno)) + errcontext("while truncating relation \"%s.%s\" to %u blocks", + errinfo->relnamespace, errinfo->relname, errinfo->blkno); + break; + + case VACUUM_ERRCB_PHASE_UNKNOWN: + default: + return; /* do nothing; the errinfo may not be + * initialized */ + } +} + +/* + * Updates the information required for vacuum error callback. This also saves + * the current information which can be later restored via restore_vacuum_error_info. + */ +static void +update_vacuum_error_info(LVRelState *vacrel, LVSavedErrInfo *saved_vacrel, + int phase, BlockNumber blkno, OffsetNumber offnum) +{ + if (saved_vacrel) + { + saved_vacrel->offnum = vacrel->offnum; + saved_vacrel->blkno = vacrel->blkno; + saved_vacrel->phase = vacrel->phase; + } + + vacrel->blkno = blkno; + vacrel->offnum = offnum; + vacrel->phase = phase; +} + +/* + * Restores the vacuum information saved via a prior call to update_vacuum_error_info. 
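+ *
+ * The two routines are used as a save/restore pair around each phase,
+ * along these lines (the phase constant is illustrative):
+ *
+ *     update_vacuum_error_info(vacrel, &saved, VACUUM_ERRCB_PHASE_X,
+ *                              blkno, offnum);
+ *     ... phase-specific work ...
+ *     restore_vacuum_error_info(vacrel, &saved);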
+ */ +static void +restore_vacuum_error_info(LVRelState *vacrel, + const LVSavedErrInfo *saved_vacrel) +{ + vacrel->blkno = saved_vacrel->blkno; + vacrel->offnum = saved_vacrel->offnum; + vacrel->phase = saved_vacrel->phase; +} diff --git a/contrib/pg_tde/src17/access/pg_tde_visibilitymap.c b/contrib/pg_tde/src17/access/pg_tde_visibilitymap.c new file mode 100644 index 00000000000..76de58677dd --- /dev/null +++ b/contrib/pg_tde/src17/access/pg_tde_visibilitymap.c @@ -0,0 +1,635 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_visibilitymap.c + * bitmap for tracking visibility of heap tuples + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/heap/pg_tde_visibilitymap.c + * + * INTERFACE ROUTINES + * tdeheap_visibilitymap_clear - clear bits for one page in the visibility map + * tdeheap_visibilitymap_pin - pin a map page for setting a bit + * tdeheap_visibilitymap_pin_ok - check whether correct map page is already pinned + * tdeheap_visibilitymap_set - set a bit in a previously pinned page + * tdeheap_visibilitymap_get_status - get status of bits + * tdeheap_visibilitymap_count - count number of bits set in visibility map + * tdeheap_visibilitymap_prepare_truncate - + * prepare for truncation of the visibility map + * + * NOTES + * + * The visibility map is a bitmap with two bits (all-visible and all-frozen) + * per heap page. A set all-visible bit means that all tuples on the page are + * known visible to all transactions, and therefore the page doesn't need to + * be vacuumed. A set all-frozen bit means that all tuples on the page are + * completely frozen, and therefore the page doesn't need to be vacuumed even + * if whole table scanning vacuum is required (e.g. anti-wraparound vacuum). + * The all-frozen bit must be set only when the page is already all-visible. + * + * The map is conservative in the sense that we make sure that whenever a bit + * is set, we know the condition is true, but if a bit is not set, it might or + * might not be true. + * + * Clearing visibility map bits is not separately WAL-logged. The callers + * must make sure that whenever a bit is cleared, the bit is cleared on WAL + * replay of the updating operation as well. + * + * When we *set* a visibility map during VACUUM, we must write WAL. This may + * seem counterintuitive, since the bit is basically a hint: if it is clear, + * it may still be the case that every tuple on the page is visible to all + * transactions; we just don't know that for certain. The difficulty is that + * there are two bits which are typically set together: the PD_ALL_VISIBLE bit + * on the page itself, and the visibility map bit. If a crash occurs after the + * visibility map page makes it to disk and before the updated heap page makes + * it to disk, redo must set the bit on the heap page. Otherwise, the next + * insert, update, or delete on the heap page will fail to realize that the + * visibility map bit must be cleared, possibly causing index-only scans to + * return wrong answers. + * + * VACUUM will normally skip pages for which the visibility map bit is set; + * such pages can't contain any dead tuples and therefore don't need vacuuming. + * + * LOCKING + * + * In heapam.c, whenever a page is modified so that not all tuples on the + * page are visible to everyone anymore, the corresponding bit in the + * visibility map is cleared. 
In order to be crash-safe, we need to do this + * while still holding a lock on the heap page and in the same critical + * section that logs the page modification. However, we don't want to hold + * the buffer lock over any I/O that may be required to read in the visibility + * map page. To avoid this, we examine the heap page before locking it; + * if the page-level PD_ALL_VISIBLE bit is set, we pin the visibility map + * bit. Then, we lock the buffer. But this creates a race condition: there + * is a possibility that in the time it takes to lock the buffer, the + * PD_ALL_VISIBLE bit gets set. If that happens, we have to unlock the + * buffer, pin the visibility map page, and relock the buffer. This shouldn't + * happen often, because only VACUUM currently sets visibility map bits, + * and the race will only occur if VACUUM processes a given page at almost + * exactly the same time that someone tries to further modify it. + * + * To set a bit, you need to hold a lock on the heap page. That prevents + * the race condition where VACUUM sees that all tuples on the page are + * visible to everyone, but another backend modifies the page before VACUUM + * sets the bit in the visibility map. + * + * When a bit is set, the LSN of the visibility map page is updated to make + * sure that the visibility map update doesn't get written to disk before the + * WAL record of the changes that made it possible to set the bit is flushed. + * But when a bit is cleared, we don't have to do that because it's always + * safe to clear a bit in the map from correctness point of view. + * + *------------------------------------------------------------------------- + */ +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tdeam_xlog.h" +#include "access/pg_tde_visibilitymap.h" + +#include "access/xloginsert.h" +#include "access/xlogutils.h" +#include "miscadmin.h" +#include "port/pg_bitutils.h" +#include "storage/bufmgr.h" +#include "storage/smgr.h" +#include "utils/inval.h" +#include "utils/rel.h" + + +/*#define TRACE_VISIBILITYMAP */ + +/* + * Size of the bitmap on each visibility map page, in bytes. There's no + * extra headers, so the whole page minus the standard page header is + * used for the bitmap. + */ +#define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData)) + +/* Number of heap blocks we can represent in one byte */ +#define HEAPBLOCKS_PER_BYTE (BITS_PER_BYTE / BITS_PER_HEAPBLOCK) + +/* Number of heap blocks we can represent in one visibility map page. */ +#define HEAPBLOCKS_PER_PAGE (MAPSIZE * HEAPBLOCKS_PER_BYTE) + +/* Mapping from heap block number to the right bit in the visibility map */ +#define HEAPBLK_TO_MAPBLOCK(x) ((x) / HEAPBLOCKS_PER_PAGE) +#define HEAPBLK_TO_MAPBYTE(x) (((x) % HEAPBLOCKS_PER_PAGE) / HEAPBLOCKS_PER_BYTE) +#define HEAPBLK_TO_OFFSET(x) (((x) % HEAPBLOCKS_PER_BYTE) * BITS_PER_HEAPBLOCK) + +/* Masks for counting subsets of bits in the visibility map. */ +#define VISIBLE_MASK8 (0x55) /* The lower bit of each bit pair */ +#define FROZEN_MASK8 (0xaa) /* The upper bit of each bit pair */ + +/* prototypes for internal routines */ +static Buffer vm_readbuf(Relation rel, BlockNumber blkno, bool extend); +static Buffer vm_extend(Relation rel, BlockNumber vm_nblocks); + + +/* + * tdeheap_visibilitymap_clear - clear specified bits for one page in visibility map + * + * You must pass a buffer containing the correct map page to this function. + * Call tdeheap_visibilitymap_pin first to pin the right one. This function doesn't do + * any I/O. 
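+ * (Per the assertions below, flags must be a nonempty subset of
+ * VISIBILITYMAP_VALID_BITS, and clearing only the all-visible bit while
+ * leaving the all-frozen bit set is disallowed.)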
Returns true if any bits have been cleared and false otherwise. + */ +bool +tdeheap_visibilitymap_clear(Relation rel, BlockNumber heapBlk, Buffer vmbuf, uint8 flags) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + int mapByte = HEAPBLK_TO_MAPBYTE(heapBlk); + int mapOffset = HEAPBLK_TO_OFFSET(heapBlk); + uint8 mask = flags << mapOffset; + char *map; + bool cleared = false; + + /* Must never clear all_visible bit while leaving all_frozen bit set */ + Assert(flags & VISIBILITYMAP_VALID_BITS); + Assert(flags != VISIBILITYMAP_ALL_VISIBLE); + +#ifdef TRACE_VISIBILITYMAP + elog(DEBUG1, "vm_clear %s %d", RelationGetRelationName(rel), heapBlk); +#endif + + if (!BufferIsValid(vmbuf) || BufferGetBlockNumber(vmbuf) != mapBlock) + elog(ERROR, "wrong buffer passed to tdeheap_visibilitymap_clear"); + + LockBuffer(vmbuf, BUFFER_LOCK_EXCLUSIVE); + map = PageGetContents(BufferGetPage(vmbuf)); + + if (map[mapByte] & mask) + { + map[mapByte] &= ~mask; + + MarkBufferDirty(vmbuf); + cleared = true; + } + + LockBuffer(vmbuf, BUFFER_LOCK_UNLOCK); + + return cleared; +} + +/* + * tdeheap_visibilitymap_pin - pin a map page for setting a bit + * + * Setting a bit in the visibility map is a two-phase operation. First, call + * tdeheap_visibilitymap_pin, to pin the visibility map page containing the bit for + * the heap page. Because that can require I/O to read the map page, you + * shouldn't hold a lock on the heap page while doing that. Then, call + * tdeheap_visibilitymap_set to actually set the bit. + * + * On entry, *vmbuf should be InvalidBuffer or a valid buffer returned by + * an earlier call to tdeheap_visibilitymap_pin or tdeheap_visibilitymap_get_status on the same + * relation. On return, *vmbuf is a valid buffer with the map page containing + * the bit for heapBlk. + * + * If the page doesn't exist in the map file yet, it is extended. + */ +void +tdeheap_visibilitymap_pin(Relation rel, BlockNumber heapBlk, Buffer *vmbuf) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + + /* Reuse the old pinned buffer if possible */ + if (BufferIsValid(*vmbuf)) + { + if (BufferGetBlockNumber(*vmbuf) == mapBlock) + return; + + ReleaseBuffer(*vmbuf); + } + *vmbuf = vm_readbuf(rel, mapBlock, true); +} + +/* + * tdeheap_visibilitymap_pin_ok - do we already have the correct page pinned? + * + * On entry, vmbuf should be InvalidBuffer or a valid buffer returned by + * an earlier call to tdeheap_visibilitymap_pin or tdeheap_visibilitymap_get_status on the same + * relation. The return value indicates whether the buffer covers the + * given heapBlk. + */ +bool +tdeheap_visibilitymap_pin_ok(BlockNumber heapBlk, Buffer vmbuf) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + + return BufferIsValid(vmbuf) && BufferGetBlockNumber(vmbuf) == mapBlock; +} + +/* + * tdeheap_visibilitymap_set - set bit(s) on a previously pinned page + * + * recptr is the LSN of the XLOG record we're replaying, if we're in recovery, + * or InvalidXLogRecPtr in normal running. The VM page LSN is advanced to the + * one provided; in normal running, we generate a new XLOG record and set the + * page LSN to that value (though the heap page's LSN may *not* be updated; + * see below). cutoff_xid is the largest xmin on the page being marked + * all-visible; it is needed for Hot Standby, and can be InvalidTransactionId + * if the page contains no tuples. It can also be set to InvalidTransactionId + * when a page that is already all-visible is being marked all-frozen. 
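+ *
+ * Illustrative two-phase usage (a sketch of a typical caller, not
+ * verbatim code from this file):
+ *
+ *     tdeheap_visibilitymap_pin(rel, blkno, &vmbuf);   before heap lock
+ *     LockBuffer(heapBuf, BUFFER_LOCK_EXCLUSIVE);
+ *     PageSetAllVisible(BufferGetPage(heapBuf));
+ *     MarkBufferDirty(heapBuf);
+ *     tdeheap_visibilitymap_set(rel, blkno, heapBuf, InvalidXLogRecPtr,
+ *                               vmbuf, cutoff_xid,
+ *                               VISIBILITYMAP_ALL_VISIBLE);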
+ * + * Caller is expected to set the heap page's PD_ALL_VISIBLE bit before calling + * this function. Except in recovery, caller should also pass the heap + * buffer. When checksums are enabled and we're not in recovery, we must add + * the heap buffer to the WAL chain to protect it from being torn. + * + * You must pass a buffer containing the correct map page to this function. + * Call tdeheap_visibilitymap_pin first to pin the right one. This function doesn't do + * any I/O. + */ +void +tdeheap_visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf, + XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid, + uint8 flags) +{ + BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk); + uint32 mapByte = HEAPBLK_TO_MAPBYTE(heapBlk); + uint8 mapOffset = HEAPBLK_TO_OFFSET(heapBlk); + Page page; + uint8 *map; + +#ifdef TRACE_VISIBILITYMAP + elog(DEBUG1, "vm_set %s %d", RelationGetRelationName(rel), heapBlk); +#endif + + Assert(InRecovery || XLogRecPtrIsInvalid(recptr)); + Assert(InRecovery || PageIsAllVisible((Page) BufferGetPage(heapBuf))); + Assert((flags & VISIBILITYMAP_VALID_BITS) == flags); + + /* Must never set all_frozen bit without also setting all_visible bit */ + Assert(flags != VISIBILITYMAP_ALL_FROZEN); + + /* Check that we have the right heap page pinned, if present */ + if (BufferIsValid(heapBuf) && BufferGetBlockNumber(heapBuf) != heapBlk) + elog(ERROR, "wrong heap buffer passed to tdeheap_visibilitymap_set"); + + /* Check that we have the right VM page pinned */ + if (!BufferIsValid(vmBuf) || BufferGetBlockNumber(vmBuf) != mapBlock) + elog(ERROR, "wrong VM buffer passed to tdeheap_visibilitymap_set"); + + page = BufferGetPage(vmBuf); + map = (uint8 *) PageGetContents(page); + LockBuffer(vmBuf, BUFFER_LOCK_EXCLUSIVE); + + if (flags != (map[mapByte] >> mapOffset & VISIBILITYMAP_VALID_BITS)) + { + START_CRIT_SECTION(); + + map[mapByte] |= (flags << mapOffset); + MarkBufferDirty(vmBuf); + + if (RelationNeedsWAL(rel)) + { + if (XLogRecPtrIsInvalid(recptr)) + { + Assert(!InRecovery); + recptr = log_tdeheap_visible(rel, heapBuf, vmBuf, cutoff_xid, flags); + + /* + * If data checksums are enabled (or wal_log_hints=on), we + * need to protect the heap page from being torn. + * + * If not, then we must *not* update the heap page's LSN. In + * this case, the FPI for the heap page was omitted from the + * WAL record inserted above, so it would be incorrect to + * update the heap page's LSN. + */ + if (XLogHintBitIsNeeded()) + { + Page heapPage = BufferGetPage(heapBuf); + + PageSetLSN(heapPage, recptr); + } + } + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + } + + LockBuffer(vmBuf, BUFFER_LOCK_UNLOCK); +} + +/* + * tdeheap_visibilitymap_get_status - get status of bits + * + * Are all tuples on heapBlk visible to all or are marked frozen, according + * to the visibility map? + * + * On entry, *vmbuf should be InvalidBuffer or a valid buffer returned by an + * earlier call to tdeheap_visibilitymap_pin or tdeheap_visibilitymap_get_status on the same + * relation. On return, *vmbuf is a valid buffer with the map page containing + * the bit for heapBlk, or InvalidBuffer. The caller is responsible for + * releasing *vmbuf after it's done testing and setting bits. + * + * NOTE: This function is typically called without a lock on the heap page, + * so somebody else could change the bit just after we look at it. 
In fact,
+ * since we don't lock the visibility map page either, it's even possible that
+ * someone else could have changed the bit just before we look at it, yet
+ * we might see the old value. It is the caller's responsibility to deal with
+ * all concurrency issues!
+ */
+uint8
+tdeheap_visibilitymap_get_status(Relation rel, BlockNumber heapBlk, Buffer *vmbuf)
+{
+ BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
+ uint32 mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
+ uint8 mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
+ char *map;
+ uint8 result;
+
+#ifdef TRACE_VISIBILITYMAP
+ elog(DEBUG1, "vm_get_status %s %d", RelationGetRelationName(rel), heapBlk);
+#endif
+
+ /* Reuse the old pinned buffer if possible */
+ if (BufferIsValid(*vmbuf))
+ {
+ if (BufferGetBlockNumber(*vmbuf) != mapBlock)
+ {
+ ReleaseBuffer(*vmbuf);
+ *vmbuf = InvalidBuffer;
+ }
+ }
+
+ if (!BufferIsValid(*vmbuf))
+ {
+ *vmbuf = vm_readbuf(rel, mapBlock, false);
+ if (!BufferIsValid(*vmbuf))
+ return false;
+ }
+
+ map = PageGetContents(BufferGetPage(*vmbuf));
+
+ /*
+ * A single byte read is atomic. There could be memory-ordering effects
+ * here, but for performance reasons we make it the caller's job to worry
+ * about that.
+ */
+ result = ((map[mapByte] >> mapOffset) & VISIBILITYMAP_VALID_BITS);
+ return result;
+}
+
+/*
+ * tdeheap_visibilitymap_count - count number of bits set in visibility map
+ *
+ * Note: we ignore the possibility of race conditions when the table is being
+ * extended concurrently with the call. New pages added to the table aren't
+ * going to be marked all-visible or all-frozen, so they won't affect the result.
+ */
+void
+tdeheap_visibilitymap_count(Relation rel, BlockNumber *all_visible, BlockNumber *all_frozen)
+{
+ BlockNumber mapBlock;
+ BlockNumber nvisible = 0;
+ BlockNumber nfrozen = 0;
+
+ /* all_visible must be specified */
+ Assert(all_visible);
+
+ for (mapBlock = 0;; mapBlock++)
+ {
+ Buffer mapBuffer;
+ uint64 *map;
+
+ /*
+ * Read till we fall off the end of the map. We assume that any extra
+ * bytes in the last page are zeroed, so we don't bother excluding
+ * them from the count.
+ */
+ mapBuffer = vm_readbuf(rel, mapBlock, false);
+ if (!BufferIsValid(mapBuffer))
+ break;
+
+ /*
+ * We choose not to lock the page, since the result is going to be
+ * immediately stale anyway if anyone is concurrently setting or
+ * clearing bits, and we only really need an approximate value.
+ */
+ map = (uint64 *) PageGetContents(BufferGetPage(mapBuffer));
+
+ nvisible += pg_popcount_masked((const char *) map, MAPSIZE, VISIBLE_MASK8);
+ if (all_frozen)
+ nfrozen += pg_popcount_masked((const char *) map, MAPSIZE, FROZEN_MASK8);
+
+ ReleaseBuffer(mapBuffer);
+ }
+
+ *all_visible = nvisible;
+ if (all_frozen)
+ *all_frozen = nfrozen;
+}
+
+/*
+ * tdeheap_visibilitymap_prepare_truncate -
+ * prepare for truncation of the visibility map
+ *
+ * nheapblocks is the new size of the heap.
+ *
+ * Returns the new number of blocks in the visibility map.
+ * If it's InvalidBlockNumber, there is nothing to truncate;
+ * otherwise the caller is responsible for calling smgrtruncate()
+ * to truncate the visibility map pages. 
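+ *
+ * A hypothetical caller-side sketch (names illustrative only):
+ *
+ * ForkNumber fork = VISIBILITYMAP_FORKNUM;
+ * BlockNumber newnblocks = tdeheap_visibilitymap_prepare_truncate(rel, n);
+ *
+ * if (BlockNumberIsValid(newnblocks))
+ * smgrtruncate(RelationGetSmgr(rel), &fork, 1, &newnblocks);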
+ */ +BlockNumber +tdeheap_visibilitymap_prepare_truncate(Relation rel, BlockNumber nheapblocks) +{ + BlockNumber newnblocks; + + /* last remaining block, byte, and bit */ + BlockNumber truncBlock = HEAPBLK_TO_MAPBLOCK(nheapblocks); + uint32 truncByte = HEAPBLK_TO_MAPBYTE(nheapblocks); + uint8 truncOffset = HEAPBLK_TO_OFFSET(nheapblocks); + +#ifdef TRACE_VISIBILITYMAP + elog(DEBUG1, "vm_truncate %s %d", RelationGetRelationName(rel), nheapblocks); +#endif + + /* + * If no visibility map has been created yet for this relation, there's + * nothing to truncate. + */ + if (!smgrexists(RelationGetSmgr(rel), VISIBILITYMAP_FORKNUM)) + return InvalidBlockNumber; + + /* + * Unless the new size is exactly at a visibility map page boundary, the + * tail bits in the last remaining map page, representing truncated heap + * blocks, need to be cleared. This is not only tidy, but also necessary + * because we don't get a chance to clear the bits if the heap is extended + * again. + */ + if (truncByte != 0 || truncOffset != 0) + { + Buffer mapBuffer; + Page page; + char *map; + + newnblocks = truncBlock + 1; + + mapBuffer = vm_readbuf(rel, truncBlock, false); + if (!BufferIsValid(mapBuffer)) + { + /* nothing to do, the file was already smaller */ + return InvalidBlockNumber; + } + + page = BufferGetPage(mapBuffer); + map = PageGetContents(page); + + LockBuffer(mapBuffer, BUFFER_LOCK_EXCLUSIVE); + + /* NO EREPORT(ERROR) from here till changes are logged */ + START_CRIT_SECTION(); + + /* Clear out the unwanted bytes. */ + MemSet(&map[truncByte + 1], 0, MAPSIZE - (truncByte + 1)); + + /*---- + * Mask out the unwanted bits of the last remaining byte. + * + * ((1 << 0) - 1) = 00000000 + * ((1 << 1) - 1) = 00000001 + * ... + * ((1 << 6) - 1) = 00111111 + * ((1 << 7) - 1) = 01111111 + *---- + */ + map[truncByte] &= (1 << truncOffset) - 1; + + /* + * Truncation of a relation is WAL-logged at a higher-level, and we + * will be called at WAL replay. But if checksums are enabled, we need + * to still write a WAL record to protect against a torn page, if the + * page is flushed to disk before the truncation WAL record. We cannot + * use MarkBufferDirtyHint here, because that will not dirty the page + * during recovery. + */ + MarkBufferDirty(mapBuffer); + if (!InRecovery && RelationNeedsWAL(rel) && XLogHintBitIsNeeded()) + log_newpage_buffer(mapBuffer, false); + + END_CRIT_SECTION(); + + UnlockReleaseBuffer(mapBuffer); + } + else + newnblocks = truncBlock; + + if (smgrnblocks(RelationGetSmgr(rel), VISIBILITYMAP_FORKNUM) <= newnblocks) + { + /* nothing to do, the file was already smaller than requested size */ + return InvalidBlockNumber; + } + + return newnblocks; +} + +/* + * Read a visibility map page. + * + * If the page doesn't exist, InvalidBuffer is returned, or if 'extend' is + * true, the visibility map file is extended. + */ +static Buffer +vm_readbuf(Relation rel, BlockNumber blkno, bool extend) +{ + Buffer buf; + SMgrRelation reln; + + /* + * Caution: re-using this smgr pointer could fail if the relcache entry + * gets closed. It's safe as long as we only do smgr-level operations + * between here and the last use of the pointer. + */ + reln = RelationGetSmgr(rel); + + /* + * If we haven't cached the size of the visibility map fork yet, check it + * first. 
+ */ + if (reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM] == InvalidBlockNumber) + { + if (smgrexists(reln, VISIBILITYMAP_FORKNUM)) + smgrnblocks(reln, VISIBILITYMAP_FORKNUM); + else + reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM] = 0; + } + + /* + * For reading we use ZERO_ON_ERROR mode, and initialize the page if + * necessary. It's always safe to clear bits, so it's better to clear + * corrupt pages than error out. + * + * We use the same path below to initialize pages when extending the + * relation, as a concurrent extension can end up with vm_extend() + * returning an already-initialized page. + */ + if (blkno >= reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM]) + { + if (extend) + buf = vm_extend(rel, blkno + 1); + else + return InvalidBuffer; + } + else + buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno, + RBM_ZERO_ON_ERROR, NULL); + + /* + * Initializing the page when needed is trickier than it looks, because of + * the possibility of multiple backends doing this concurrently, and our + * desire to not uselessly take the buffer lock in the normal path where + * the page is OK. We must take the lock to initialize the page, so + * recheck page newness after we have the lock, in case someone else + * already did it. Also, because we initially check PageIsNew with no + * lock, it's possible to fall through and return the buffer while someone + * else is still initializing the page (i.e., we might see pd_upper as set + * but other page header fields are still zeroes). This is harmless for + * callers that will take a buffer lock themselves, but some callers + * inspect the page without any lock at all. The latter is OK only so + * long as it doesn't depend on the page header having correct contents. + * Current usage is safe because PageGetContents() does not require that. + */ + if (PageIsNew(BufferGetPage(buf))) + { + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + if (PageIsNew(BufferGetPage(buf))) + PageInit(BufferGetPage(buf), BLCKSZ, 0); + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + } + return buf; +} + +/* + * Ensure that the visibility map fork is at least vm_nblocks long, extending + * it if necessary with zeroed pages. + */ +static Buffer +vm_extend(Relation rel, BlockNumber vm_nblocks) +{ + Buffer buf; + + buf = ExtendBufferedRelTo(BMR_REL(rel), VISIBILITYMAP_FORKNUM, NULL, + EB_CREATE_FORK_IF_NEEDED | + EB_CLEAR_SIZE_CACHE, + vm_nblocks, + RBM_ZERO_ON_ERROR); + + /* + * Send a shared-inval message to force other backends to close any smgr + * references they may have for this rel, which we are about to change. + * This is a useful optimization because it means that backends don't have + * to keep checking for creation or extension of the file, which happens + * infrequently. 
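+ * (EB_CLEAR_SIZE_CACHE above plays the complementary local role: it drops
+ * our own smgr_cached_nblocks so that the new size is re-read on next use.)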
+ */
+ CacheInvalidateSmgr(RelationGetSmgr(rel)->smgr_rlocator);
+
+ return buf;
+}
diff --git a/contrib/pg_tde/src17/access/pg_tdeam.c b/contrib/pg_tde/src17/access/pg_tdeam.c
new file mode 100644
index 00000000000..743c3abedd1
--- /dev/null
+++ b/contrib/pg_tde/src17/access/pg_tdeam.c
@@ -0,0 +1,9684 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tdeam.c
+ * pg_tde access method code
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ * contrib/pg_tde/src17/access/pg_tdeam.c
+ *
+ *
+ * INTERFACE ROUTINES
+ * tdeheap_beginscan - begin relation scan
+ * tdeheap_rescan - restart a relation scan
+ * tdeheap_endscan - end relation scan
+ * tdeheap_getnext - retrieve next tuple in scan
+ * tdeheap_fetch - retrieve tuple with given tid
+ * tdeheap_insert - insert tuple into a relation
+ * tdeheap_multi_insert - insert multiple tuples into a relation
+ * tdeheap_delete - delete a tuple from a relation
+ * tdeheap_update - replace a tuple in a relation with another tuple
+ *
+ * NOTES
+ * This file contains the tdeheap_ routines which implement
+ * the POSTGRES pg_tde access method used for encrypted
+ * POSTGRES relations.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "pg_tde_defines.h"
+
+#include "postgres.h"
+
+#include "access/pg_tdeam.h"
+#include "access/pg_tdeam_xlog.h"
+#include "access/pg_tdetoast.h"
+#include "access/pg_tde_io.h"
+#include "access/pg_tde_visibilitymap.h"
+#include "access/pg_tde_slot.h"
+#include "encryption/enc_tde.h"
+
+#include "access/bufmask.h"
+#include "access/genam.h"
+#include "access/multixact.h"
+#include "access/parallel.h"
+#include "access/relscan.h"
+#include "access/subtrans.h"
+#include "access/syncscan.h"
+#include "access/sysattr.h"
+#include "access/tableam.h"
+#include "access/transam.h"
+#include "access/valid.h"
+#include "access/xact.h"
+#include "access/xlog.h"
+#include "access/xloginsert.h"
+#include "access/xlogutils.h"
+#include "catalog/catalog.h"
+#include "commands/vacuum.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "port/atomics.h"
+#include "port/pg_bitutils.h"
+#include "storage/bufmgr.h"
+#include "storage/freespace.h"
+#include "storage/lmgr.h"
+#include "storage/predicate.h"
+#include "storage/procarray.h"
+#include "storage/standby.h"
+#include "utils/datum.h"
+#include "utils/injection_point.h"
+#include "utils/inval.h"
+#include "utils/relcache.h"
+#include "utils/snapmgr.h"
+#include "utils/spccache.h"
+#include "utils/memutils.h"
+
+
+static HeapTuple tdeheap_prepare_insert(Relation relation, HeapTuple tup,
+ TransactionId xid, CommandId cid, int options);
+static XLogRecPtr log_tdeheap_update(Relation reln, Buffer oldbuf,
+ Buffer newbuf, HeapTuple oldtup,
+ HeapTuple newtup, HeapTuple old_key_tuple,
+ bool all_visible_cleared, bool new_all_visible_cleared);
+static Bitmapset *HeapDetermineColumnsInfo(Relation relation,
+ Bitmapset *interesting_cols,
+ Bitmapset *external_cols,
+ HeapTuple oldtup, HeapTuple newtup,
+ bool *has_external);
+static bool tdeheap_acquire_tuplock(Relation relation, ItemPointer tid,
+ LockTupleMode mode, LockWaitPolicy wait_policy,
+ bool *have_tuple_lock);
+static inline BlockNumber tdeheapgettup_advance_block(HeapScanDesc scan,
+ BlockNumber block,
+ ScanDirection dir);
+static pg_noinline BlockNumber tdeheapgettup_initial_block(HeapScanDesc scan,
+ ScanDirection dir);
+static void 
compute_new_xmax_infomask(TransactionId xmax, uint16 old_infomask, + uint16 old_infomask2, TransactionId add_to_xmax, + LockTupleMode mode, bool is_update, + TransactionId *result_xmax, uint16 *result_infomask, + uint16 *result_infomask2); +static TM_Result tdeheap_lock_updated_tuple(Relation rel, HeapTuple tuple, + ItemPointer ctid, TransactionId xid, + LockTupleMode mode); +static void GetMultiXactIdHintBits(MultiXactId multi, uint16 *new_infomask, + uint16 *new_infomask2); +static TransactionId MultiXactIdGetUpdateXid(TransactionId xmax, + uint16 t_infomask); +static bool DoesMultiXactIdConflict(MultiXactId multi, uint16 infomask, + LockTupleMode lockmode, bool *current_is_member); +static void MultiXactIdWait(MultiXactId multi, MultiXactStatus status, uint16 infomask, + Relation rel, ItemPointer ctid, XLTW_Oper oper, + int *remaining); +static bool ConditionalMultiXactIdWait(MultiXactId multi, MultiXactStatus status, + uint16 infomask, Relation rel, int *remaining); +static void index_delete_sort(TM_IndexDeleteOp *delstate); +static int bottomup_sort_and_shrink(TM_IndexDeleteOp *delstate); +static XLogRecPtr log_tdeheap_new_cid(Relation relation, HeapTuple tup); +static HeapTuple ExtractReplicaIdentity(Relation relation, HeapTuple tp, bool key_required, + bool *copy); + + +/* + * Each tuple lock mode has a corresponding heavyweight lock, and one or two + * corresponding MultiXactStatuses (one to merely lock tuples, another one to + * update them). This table (and the macros below) helps us determine the + * heavyweight lock mode and MultiXactStatus values to use for any particular + * tuple lock strength. + * + * Don't look at lockstatus/updstatus directly! Use get_mxact_status_for_lock + * instead. + */ +static const struct +{ + LOCKMODE hwlock; + int lockstatus; + int updstatus; +} + + tupleLockExtraInfo[MaxLockTupleMode + 1] = +{ + { /* LockTupleKeyShare */ + AccessShareLock, + MultiXactStatusForKeyShare, + -1 /* KeyShare does not allow updating tuples */ + }, + { /* LockTupleShare */ + RowShareLock, + MultiXactStatusForShare, + -1 /* Share does not allow updating tuples */ + }, + { /* LockTupleNoKeyExclusive */ + ExclusiveLock, + MultiXactStatusForNoKeyUpdate, + MultiXactStatusNoKeyUpdate + }, + { /* LockTupleExclusive */ + AccessExclusiveLock, + MultiXactStatusForUpdate, + MultiXactStatusUpdate + } +}; + +/* Get the LOCKMODE for a given MultiXactStatus */ +#define LOCKMODE_from_mxstatus(status) \ + (tupleLockExtraInfo[TUPLOCK_from_mxstatus((status))].hwlock) + +/* + * Acquire heavyweight locks on tuples, using a LockTupleMode strength value. + * This is more readable than having every caller translate it to lock.h's + * LOCKMODE. 
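+ *
+ * For example, LockTupleTuplock(rel, tid, LockTupleShare) below resolves to
+ * LockTuple() with RowShareLock, per the table above.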
+ */ +#define LockTupleTuplock(rel, tup, mode) \ + LockTuple((rel), (tup), tupleLockExtraInfo[mode].hwlock) +#define UnlockTupleTuplock(rel, tup, mode) \ + UnlockTuple((rel), (tup), tupleLockExtraInfo[mode].hwlock) +#define ConditionalLockTupleTuplock(rel, tup, mode) \ + ConditionalLockTuple((rel), (tup), tupleLockExtraInfo[mode].hwlock) + +#ifdef USE_PREFETCH +/* + * tdeheap_index_delete_tuples and index_delete_prefetch_buffer use this + * structure to coordinate prefetching activity + */ +typedef struct +{ + BlockNumber cur_hblkno; + int next_item; + int ndeltids; + TM_IndexDelete *deltids; +} IndexDeletePrefetchState; +#endif + +/* tdeheap_index_delete_tuples bottom-up index deletion costing constants */ +#define BOTTOMUP_MAX_NBLOCKS 6 +#define BOTTOMUP_TOLERANCE_NBLOCKS 3 + +/* + * tdeheap_index_delete_tuples uses this when determining which heap blocks it + * must visit to help its bottom-up index deletion caller + */ +typedef struct IndexDeleteCounts +{ + int16 npromisingtids; /* Number of "promising" TIDs in group */ + int16 ntids; /* Number of TIDs in group */ + int16 ifirsttid; /* Offset to group's first deltid */ +} IndexDeleteCounts; + +/* + * This table maps tuple lock strength values for each particular + * MultiXactStatus value. + */ +static const int MultiXactStatusLock[MaxMultiXactStatus + 1] = +{ + LockTupleKeyShare, /* ForKeyShare */ + LockTupleShare, /* ForShare */ + LockTupleNoKeyExclusive, /* ForNoKeyUpdate */ + LockTupleExclusive, /* ForUpdate */ + LockTupleNoKeyExclusive, /* NoKeyUpdate */ + LockTupleExclusive /* Update */ +}; + +/* Get the LockTupleMode for a given MultiXactStatus */ +#define TUPLOCK_from_mxstatus(status) \ + (MultiXactStatusLock[(status)]) + +/* ---------------------------------------------------------------- + * heap support routines + * ---------------------------------------------------------------- + */ + +/* + * Streaming read API callback for parallel sequential scans. Returns the next + * block the caller wants from the read stream or InvalidBlockNumber when done. + */ +static BlockNumber +tdeheap_scan_stream_read_next_parallel(ReadStream *stream, + void *callback_private_data, + void *per_buffer_data) +{ + HeapScanDesc scan = (HeapScanDesc) callback_private_data; + + Assert(ScanDirectionIsForward(scan->rs_dir)); + Assert(scan->rs_base.rs_parallel); + + if (unlikely(!scan->rs_inited)) + { + /* parallel scan */ + table_block_parallelscan_startblock_init(scan->rs_base.rs_rd, + scan->rs_parallelworkerdata, + (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel); + + /* may return InvalidBlockNumber if there are no more blocks */ + scan->rs_prefetch_block = table_block_parallelscan_nextpage(scan->rs_base.rs_rd, + scan->rs_parallelworkerdata, + (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel); + scan->rs_inited = true; + } + else + { + scan->rs_prefetch_block = table_block_parallelscan_nextpage(scan->rs_base.rs_rd, + scan->rs_parallelworkerdata, (ParallelBlockTableScanDesc) + scan->rs_base.rs_parallel); + } + + return scan->rs_prefetch_block; +} + +/* + * Streaming read API callback for serial sequential and TID range scans. + * Returns the next block the caller wants from the read stream or + * InvalidBlockNumber when done. 
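+ *
+ * Once this callback reports InvalidBlockNumber, the stream is exhausted
+ * and read_stream_next_buffer() returns InvalidBuffer; a rescan must go
+ * through read_stream_reset() before the callback is consulted again.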
+ */ +static BlockNumber +tdeheap_scan_stream_read_next_serial(ReadStream *stream, + void *callback_private_data, + void *per_buffer_data) +{ + HeapScanDesc scan = (HeapScanDesc) callback_private_data; + + if (unlikely(!scan->rs_inited)) + { + scan->rs_prefetch_block = tdeheapgettup_initial_block(scan, scan->rs_dir); + scan->rs_inited = true; + } + else + scan->rs_prefetch_block = tdeheapgettup_advance_block(scan, + scan->rs_prefetch_block, + scan->rs_dir); + + return scan->rs_prefetch_block; +} + +/* ---------------- + * initscan - scan code common to tdeheap_beginscan and tdeheap_rescan + * ---------------- + */ +static void +initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock) +{ + ParallelBlockTableScanDesc bpscan = NULL; + bool allow_strat; + bool allow_sync; + + /* + * Determine the number of blocks we have to scan. + * + * It is sufficient to do this once at scan start, since any tuples added + * while the scan is in progress will be invisible to my snapshot anyway. + * (That is not true when using a non-MVCC snapshot. However, we couldn't + * guarantee to return tuples added after scan start anyway, since they + * might go into pages we already scanned. To guarantee consistent + * results for a non-MVCC snapshot, the caller must hold some higher-level + * lock that ensures the interesting tuple(s) won't change.) + */ + if (scan->rs_base.rs_parallel != NULL) + { + bpscan = (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel; + scan->rs_nblocks = bpscan->phs_nblocks; + } + else + scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_base.rs_rd); + + /* + * If the table is large relative to NBuffers, use a bulk-read access + * strategy and enable synchronized scanning (see syncscan.c). Although + * the thresholds for these features could be different, we make them the + * same so that there are only two behaviors to tune rather than four. + * (However, some callers need to be able to disable one or both of these + * behaviors, independently of the size of the table; also there is a GUC + * variable that can disable synchronized scanning.) + * + * Note that table_block_parallelscan_initialize has a very similar test; + * if you change this, consider changing that one, too. + */ + if (!RelationUsesLocalBuffers(scan->rs_base.rs_rd) && + scan->rs_nblocks > NBuffers / 4) + { + allow_strat = (scan->rs_base.rs_flags & SO_ALLOW_STRAT) != 0; + allow_sync = (scan->rs_base.rs_flags & SO_ALLOW_SYNC) != 0; + } + else + allow_strat = allow_sync = false; + + if (allow_strat) + { + /* During a rescan, keep the previous strategy object. */ + if (scan->rs_strategy == NULL) + scan->rs_strategy = GetAccessStrategy(BAS_BULKREAD); + } + else + { + if (scan->rs_strategy != NULL) + FreeAccessStrategy(scan->rs_strategy); + scan->rs_strategy = NULL; + } + + if (scan->rs_base.rs_parallel != NULL) + { + /* For parallel scan, believe whatever ParallelTableScanDesc says. */ + if (scan->rs_base.rs_parallel->phs_syncscan) + scan->rs_base.rs_flags |= SO_ALLOW_SYNC; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + } + else if (keep_startblock) + { + /* + * When rescanning, we want to keep the previous startblock setting, + * so that rewinding a cursor doesn't generate surprising results. + * Reset the active syncscan setting, though. 
+ */ + if (allow_sync && synchronize_seqscans) + scan->rs_base.rs_flags |= SO_ALLOW_SYNC; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + } + else if (allow_sync && synchronize_seqscans) + { + scan->rs_base.rs_flags |= SO_ALLOW_SYNC; + scan->rs_startblock = ss_get_location(scan->rs_base.rs_rd, scan->rs_nblocks); + } + else + { + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + scan->rs_startblock = 0; + } + + scan->rs_numblocks = InvalidBlockNumber; + scan->rs_inited = false; + scan->rs_ctup.t_data = NULL; + ItemPointerSetInvalid(&scan->rs_ctup.t_self); + scan->rs_cbuf = InvalidBuffer; + scan->rs_cblock = InvalidBlockNumber; + + /* + * Initialize to ForwardScanDirection because it is most common and + * because heap scans go forward before going backward (e.g. CURSORs). + */ + scan->rs_dir = ForwardScanDirection; + scan->rs_prefetch_block = InvalidBlockNumber; + + /* page-at-a-time fields are always invalid when not rs_inited */ + + /* + * copy the scan key, if appropriate + */ + if (key != NULL && scan->rs_base.rs_nkeys > 0) + memcpy(scan->rs_base.rs_key, key, scan->rs_base.rs_nkeys * sizeof(ScanKeyData)); + + /* + * Currently, we only have a stats counter for sequential heap scans (but + * e.g for bitmap scans the underlying bitmap index scans will be counted, + * and for sample scans we update stats for tuple fetches). + */ + if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN) + pgstat_count_tdeheap_scan(scan->rs_base.rs_rd); +} + +/* + * tdeheap_setscanlimits - restrict range of a heapscan + * + * startBlk is the page to start at + * numBlks is number of pages to scan (InvalidBlockNumber means "all") + */ +void +tdeheap_setscanlimits(TableScanDesc sscan, BlockNumber startBlk, BlockNumber numBlks) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + Assert(!scan->rs_inited); /* else too late to change */ + /* else rs_startblock is significant */ + Assert(!(scan->rs_base.rs_flags & SO_ALLOW_SYNC)); + + /* Check startBlk is valid (but allow case of zero blocks...) */ + Assert(startBlk == 0 || startBlk < scan->rs_nblocks); + + scan->rs_startblock = startBlk; + scan->rs_numblocks = numBlks; +} + +/* + * Per-tuple loop for tdeheap_prepare_pagescan(). Pulled out so it can be called + * multiple times, with constant arguments for all_visible, + * check_serializable. + */ +pg_attribute_always_inline +static int +page_collect_tuples(HeapScanDesc scan, Snapshot snapshot, + Page page, Buffer buffer, + BlockNumber block, int lines, + bool all_visible, bool check_serializable) +{ + int ntup = 0; + OffsetNumber lineoff; + + for (lineoff = FirstOffsetNumber; lineoff <= lines; lineoff++) + { + ItemId lpp = PageGetItemId(page, lineoff); + HeapTupleData loctup; + bool valid; + + if (!ItemIdIsNormal(lpp)) + continue; + + loctup.t_data = (HeapTupleHeader) PageGetItem(page, lpp); + loctup.t_len = ItemIdGetLength(lpp); + loctup.t_tableOid = RelationGetRelid(scan->rs_base.rs_rd); + ItemPointerSet(&(loctup.t_self), block, lineoff); + + if (all_visible) + valid = true; + else + valid = HeapTupleSatisfiesVisibility(&loctup, snapshot, buffer); + + if (check_serializable) + HeapCheckForSerializableConflictOut(valid, scan->rs_base.rs_rd, + &loctup, buffer, snapshot); + + if (valid) + { + scan->rs_vistuples[ntup] = lineoff; + ntup++; + } + } + + Assert(ntup <= MaxHeapTuplesPerPage); + + return ntup; +} + +/* + * tdeheap_prepare_pagescan - Prepare current scan page to be scanned in pagemode + * + * Preparation currently consists of 1. prune the scan's rs_cbuf page, and 2. 
+ * fill the rs_vistuples[] array with the OffsetNumbers of visible tuples. + */ +void +tdeheap_prepare_pagescan(TableScanDesc sscan) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + Buffer buffer = scan->rs_cbuf; + BlockNumber block = scan->rs_cblock; + Snapshot snapshot; + Page page; + int lines; + bool all_visible; + bool check_serializable; + + Assert(BufferGetBlockNumber(buffer) == block); + + /* ensure we're not accidentally being used when not in pagemode */ + Assert(scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE); + snapshot = scan->rs_base.rs_snapshot; + + /* + * Prune and repair fragmentation for the whole page, if possible. + */ + tdeheap_page_prune_opt(scan->rs_base.rs_rd, buffer); + + /* + * We must hold share lock on the buffer content while examining tuple + * visibility. Afterwards, however, the tuples we have found to be + * visible are guaranteed good as long as we hold the buffer pin. + */ + LockBuffer(buffer, BUFFER_LOCK_SHARE); + + page = BufferGetPage(buffer); + lines = PageGetMaxOffsetNumber(page); + + /* + * If the all-visible flag indicates that all tuples on the page are + * visible to everyone, we can skip the per-tuple visibility tests. + * + * Note: In hot standby, a tuple that's already visible to all + * transactions on the primary might still be invisible to a read-only + * transaction in the standby. We partly handle this problem by tracking + * the minimum xmin of visible tuples as the cut-off XID while marking a + * page all-visible on the primary and WAL log that along with the + * visibility map SET operation. In hot standby, we wait for (or abort) + * all transactions that can potentially may not see one or more tuples on + * the page. That's how index-only scans work fine in hot standby. A + * crucial difference between index-only scans and heap scans is that the + * index-only scan completely relies on the visibility map where as heap + * scan looks at the page-level PD_ALL_VISIBLE flag. We are not sure if + * the page-level flag can be trusted in the same way, because it might + * get propagated somehow without being explicitly WAL-logged, e.g. via a + * full page write. Until we can prove that beyond doubt, let's check each + * tuple for visibility the hard way. + */ + all_visible = PageIsAllVisible(page) && !snapshot->takenDuringRecovery; + check_serializable = + CheckForSerializableConflictOutNeeded(scan->rs_base.rs_rd, snapshot); + + /* + * We call page_collect_tuples() with constant arguments, to get the + * compiler to constant fold the constant arguments. Separate calls with + * constant arguments, rather than variables, are needed on several + * compilers to actually perform constant folding. + */ + if (likely(all_visible)) + { + if (likely(!check_serializable)) + scan->rs_ntuples = page_collect_tuples(scan, snapshot, page, buffer, + block, lines, true, false); + else + scan->rs_ntuples = page_collect_tuples(scan, snapshot, page, buffer, + block, lines, true, true); + } + else + { + if (likely(!check_serializable)) + scan->rs_ntuples = page_collect_tuples(scan, snapshot, page, buffer, + block, lines, false, false); + else + scan->rs_ntuples = page_collect_tuples(scan, snapshot, page, buffer, + block, lines, false, true); + } + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); +} + +/* + * tdeheap_fetch_next_buffer - read and pin the next block from MAIN_FORKNUM. + * + * Read the next block of the scan relation from the read stream and save it + * in the scan descriptor. It is already pinned. 
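+ *
+ * On return, scan->rs_cbuf is InvalidBuffer if the stream is exhausted;
+ * otherwise scan->rs_cblock is updated to the pinned buffer's block number.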
+ */ +static inline void +tdeheap_fetch_next_buffer(HeapScanDesc scan, ScanDirection dir) +{ + Assert(scan->rs_read_stream); + + /* release previous scan buffer, if any */ + if (BufferIsValid(scan->rs_cbuf)) + { + ReleaseBuffer(scan->rs_cbuf); + scan->rs_cbuf = InvalidBuffer; + } + + /* + * Be sure to check for interrupts at least once per page. Checks at + * higher code levels won't be able to stop a seqscan that encounters many + * pages' worth of consecutive dead tuples. + */ + CHECK_FOR_INTERRUPTS(); + + /* + * If the scan direction is changing, reset the prefetch block to the + * current block. Otherwise, we will incorrectly prefetch the blocks + * between the prefetch block and the current block again before + * prefetching blocks in the new, correct scan direction. + */ + if (unlikely(scan->rs_dir != dir)) + { + scan->rs_prefetch_block = scan->rs_cblock; + read_stream_reset(scan->rs_read_stream); + } + + scan->rs_dir = dir; + + scan->rs_cbuf = read_stream_next_buffer(scan->rs_read_stream, NULL); + if (BufferIsValid(scan->rs_cbuf)) + scan->rs_cblock = BufferGetBlockNumber(scan->rs_cbuf); +} + +/* + * tdeheapgettup_initial_block - return the first BlockNumber to scan + * + * Returns InvalidBlockNumber when there are no blocks to scan. This can + * occur with empty tables and in parallel scans when parallel workers get all + * of the pages before we can get a chance to get our first page. + */ +static pg_noinline BlockNumber +tdeheapgettup_initial_block(HeapScanDesc scan, ScanDirection dir) +{ + Assert(!scan->rs_inited); + Assert(scan->rs_base.rs_parallel == NULL); + + /* When there are no pages to scan, return InvalidBlockNumber */ + if (scan->rs_nblocks == 0 || scan->rs_numblocks == 0) + return InvalidBlockNumber; + + if (ScanDirectionIsForward(dir)) + { + return scan->rs_startblock; + } + else + { + /* + * Disable reporting to syncscan logic in a backwards scan; it's not + * very likely anyone else is doing the same thing at the same time, + * and much more likely that we'll just bollix things for forward + * scanners. + */ + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + + /* + * Start from last page of the scan. Ensure we take into account + * rs_numblocks if it's been adjusted by tdeheap_setscanlimits(). + */ + if (scan->rs_numblocks != InvalidBlockNumber) + return (scan->rs_startblock + scan->rs_numblocks - 1) % scan->rs_nblocks; + + if (scan->rs_startblock > 0) + return scan->rs_startblock - 1; + + return scan->rs_nblocks - 1; + } +} + + +/* + * tdeheapgettup_start_page - helper function for tdeheapgettup() + * + * Return the next page to scan based on the scan->rs_cbuf and set *linesleft + * to the number of tuples on this page. Also set *lineoff to the first + * offset to scan with forward scans getting the first offset and backward + * getting the final offset on the page. 
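+ *
+ * For example, a page holding 10 line pointers yields *linesleft = 10, with
+ * *lineoff = 1 (FirstOffsetNumber) for a forward scan and 10 for a backward
+ * scan.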
+ */ +static Page +tdeheapgettup_start_page(HeapScanDesc scan, ScanDirection dir, int *linesleft, + OffsetNumber *lineoff) +{ + Page page; + + Assert(scan->rs_inited); + Assert(BufferIsValid(scan->rs_cbuf)); + + /* Caller is responsible for ensuring buffer is locked if needed */ + page = BufferGetPage(scan->rs_cbuf); + + *linesleft = PageGetMaxOffsetNumber(page) - FirstOffsetNumber + 1; + + if (ScanDirectionIsForward(dir)) + *lineoff = FirstOffsetNumber; + else + *lineoff = (OffsetNumber) (*linesleft); + + /* lineoff now references the physically previous or next tid */ + return page; +} + + +/* + * tdeheapgettup_continue_page - helper function for tdeheapgettup() + * + * Return the next page to scan based on the scan->rs_cbuf and set *linesleft + * to the number of tuples left to scan on this page. Also set *lineoff to + * the next offset to scan according to the ScanDirection in 'dir'. + */ +static inline Page +tdeheapgettup_continue_page(HeapScanDesc scan, ScanDirection dir, int *linesleft, + OffsetNumber *lineoff) +{ + Page page; + + Assert(scan->rs_inited); + Assert(BufferIsValid(scan->rs_cbuf)); + + /* Caller is responsible for ensuring buffer is locked if needed */ + page = BufferGetPage(scan->rs_cbuf); + + if (ScanDirectionIsForward(dir)) + { + *lineoff = OffsetNumberNext(scan->rs_coffset); + *linesleft = PageGetMaxOffsetNumber(page) - (*lineoff) + 1; + } + else + { + /* + * The previous returned tuple may have been vacuumed since the + * previous scan when we use a non-MVCC snapshot, so we must + * re-establish the lineoff <= PageGetMaxOffsetNumber(page) invariant + */ + *lineoff = Min(PageGetMaxOffsetNumber(page), OffsetNumberPrev(scan->rs_coffset)); + *linesleft = *lineoff; + } + + /* lineoff now references the physically previous or next tid */ + return page; +} + +/* + * tdeheapgettup_advance_block - helper for tdeheap_fetch_next_buffer() + * + * Given the current block number, the scan direction, and various information + * contained in the scan descriptor, calculate the BlockNumber to scan next + * and return it. If there are no further blocks to scan, return + * InvalidBlockNumber to indicate this fact to the caller. + * + * This should not be called to determine the initial block number -- only for + * subsequent blocks. + * + * This also adjusts rs_numblocks when a limit has been imposed by + * tdeheap_setscanlimits(). + */ +static inline BlockNumber +tdeheapgettup_advance_block(HeapScanDesc scan, BlockNumber block, ScanDirection dir) +{ + Assert(scan->rs_base.rs_parallel == NULL); + + if (likely(ScanDirectionIsForward(dir))) + { + block++; + + /* wrap back to the start of the heap */ + if (block >= scan->rs_nblocks) + block = 0; + + /* + * Report our new scan position for synchronization purposes. We don't + * do that when moving backwards, however. That would just mess up any + * other forward-moving scanners. + * + * Note: we do this before checking for end of scan so that the final + * state of the position hint is back at the start of the rel. That's + * not strictly necessary, but otherwise when you run the same query + * multiple times the starting position would shift a little bit + * backwards on every invocation, which is confusing. We don't + * guarantee any specific ordering in general, though. 
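+ *
+ * Illustrative numbers: with rs_nblocks = 100 and rs_startblock = 70, a
+ * forward scan visits blocks 70..99, wraps around to 0..69, and stops when
+ * the incremented block equals rs_startblock again.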
+ */ + if (scan->rs_base.rs_flags & SO_ALLOW_SYNC) + ss_report_location(scan->rs_base.rs_rd, block); + + /* we're done if we're back at where we started */ + if (block == scan->rs_startblock) + return InvalidBlockNumber; + + /* check if the limit imposed by tdeheap_setscanlimits() is met */ + if (scan->rs_numblocks != InvalidBlockNumber) + { + if (--scan->rs_numblocks == 0) + return InvalidBlockNumber; + } + + return block; + } + else + { + /* we're done if the last block is the start position */ + if (block == scan->rs_startblock) + return InvalidBlockNumber; + + /* check if the limit imposed by tdeheap_setscanlimits() is met */ + if (scan->rs_numblocks != InvalidBlockNumber) + { + if (--scan->rs_numblocks == 0) + return InvalidBlockNumber; + } + + /* wrap to the end of the heap when the last page was page 0 */ + if (block == 0) + block = scan->rs_nblocks; + + block--; + + return block; + } +} + +/* ---------------- + * tdeheapgettup - fetch next heap tuple + * + * Initialize the scan if not already done; then advance to the next + * tuple as indicated by "dir"; return the next tuple in scan->rs_ctup, + * or set scan->rs_ctup.t_data = NULL if no more tuples. + * + * Note: the reason nkeys/key are passed separately, even though they are + * kept in the scan descriptor, is that the caller may not want us to check + * the scankeys. + * + * Note: when we fall off the end of the scan in either direction, we + * reset rs_inited. This means that a further request with the same + * scan direction will restart the scan, which is a bit odd, but a + * request with the opposite scan direction will start a fresh scan + * in the proper direction. The latter is required behavior for cursors, + * while the former case is generally undefined behavior in Postgres + * so we don't care too much. + * ---------------- + */ +static void +tdeheapgettup(HeapScanDesc scan, + ScanDirection dir, + int nkeys, + ScanKey key) +{ + HeapTuple tuple = &(scan->rs_ctup); + Page page; + OffsetNumber lineoff; + int linesleft; + + if (likely(scan->rs_inited)) + { + /* continue from previously returned page/tuple */ + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE); + page = tdeheapgettup_continue_page(scan, dir, &linesleft, &lineoff); + goto continue_page; + } + + /* + * advance the scan until we find a qualifying tuple or run out of stuff + * to scan + */ + while (true) + { + tdeheap_fetch_next_buffer(scan, dir); + + /* did we run out of blocks to scan? */ + if (!BufferIsValid(scan->rs_cbuf)) + break; + + Assert(BufferGetBlockNumber(scan->rs_cbuf) == scan->rs_cblock); + + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE); + page = tdeheapgettup_start_page(scan, dir, &linesleft, &lineoff); +continue_page: + + /* + * Only continue scanning the page while we have lines left. + * + * Note that this protects us from accessing line pointers past + * PageGetMaxOffsetNumber(); both for forward scans when we resume the + * table scan, and for when we start scanning a new page. 
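+ *
+ * (ScanDirection is -1 for backward and +1 for forward scans, so the
+ * "lineoff += dir" step below advances one line pointer per iteration in
+ * the scan's direction.)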
+ */ + for (; linesleft > 0; linesleft--, lineoff += dir) + { + bool visible; + ItemId lpp = PageGetItemId(page, lineoff); + + if (!ItemIdIsNormal(lpp)) + continue; + + tuple->t_data = (HeapTupleHeader) PageGetItem(page, lpp); + tuple->t_len = ItemIdGetLength(lpp); + ItemPointerSet(&(tuple->t_self), scan->rs_cblock, lineoff); + + visible = HeapTupleSatisfiesVisibility(tuple, + scan->rs_base.rs_snapshot, + scan->rs_cbuf); + + HeapCheckForSerializableConflictOut(visible, scan->rs_base.rs_rd, + tuple, scan->rs_cbuf, + scan->rs_base.rs_snapshot); + + /* skip tuples not visible to this snapshot */ + if (!visible) + continue; + + /* skip any tuples that don't match the scan key */ + if (key != NULL && + !HeapKeyTest(tuple, RelationGetDescr(scan->rs_base.rs_rd), + nkeys, key)) + continue; + + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK); + scan->rs_coffset = lineoff; + return; + } + + /* + * if we get here, it means we've exhausted the items on this page and + * it's time to move to the next. + */ + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK); + } + + /* end of scan */ + if (BufferIsValid(scan->rs_cbuf)) + ReleaseBuffer(scan->rs_cbuf); + + scan->rs_cbuf = InvalidBuffer; + scan->rs_cblock = InvalidBlockNumber; + scan->rs_prefetch_block = InvalidBlockNumber; + tuple->t_data = NULL; + scan->rs_inited = false; +} + +/* ---------------- + * tdeheapgettup_pagemode - fetch next heap tuple in page-at-a-time mode + * + * Same API as tdeheapgettup, but used in page-at-a-time mode + * + * The internal logic is much the same as tdeheapgettup's too, but there are some + * differences: we do not take the buffer content lock (that only needs to + * happen inside tdeheap_prepare_pagescan), and we iterate through just the + * tuples listed in rs_vistuples[] rather than all tuples on the page. Notice + * that lineindex is 0-based, where the corresponding loop variable lineoff in + * tdeheapgettup is 1-based. + * ---------------- + */ +static void +tdeheapgettup_pagemode(HeapScanDesc scan, + ScanDirection dir, + int nkeys, + ScanKey key) +{ + HeapTuple tuple = &(scan->rs_ctup); + Page page; + int lineindex; + int linesleft; + + if (likely(scan->rs_inited)) + { + /* continue from previously returned page/tuple */ + page = BufferGetPage(scan->rs_cbuf); + + lineindex = scan->rs_cindex + dir; + if (ScanDirectionIsForward(dir)) + linesleft = scan->rs_ntuples - lineindex; + else + linesleft = scan->rs_cindex; + /* lineindex now references the next or previous visible tid */ + + goto continue_page; + } + + /* + * advance the scan until we find a qualifying tuple or run out of stuff + * to scan + */ + while (true) + { + tdeheap_fetch_next_buffer(scan, dir); + + /* did we run out of blocks to scan? */ + if (!BufferIsValid(scan->rs_cbuf)) + break; + + Assert(BufferGetBlockNumber(scan->rs_cbuf) == scan->rs_cblock); + + /* prune the page and determine visible tuple offsets */ + tdeheap_prepare_pagescan((TableScanDesc) scan); + page = BufferGetPage(scan->rs_cbuf); + linesleft = scan->rs_ntuples; + lineindex = ScanDirectionIsForward(dir) ? 
0 : linesleft - 1; + + /* lineindex now references the next or previous visible tid */ +continue_page: + + for (; linesleft > 0; linesleft--, lineindex += dir) + { + ItemId lpp; + OffsetNumber lineoff; + + lineoff = scan->rs_vistuples[lineindex]; + lpp = PageGetItemId(page, lineoff); + Assert(ItemIdIsNormal(lpp)); + + tuple->t_data = (HeapTupleHeader) PageGetItem(page, lpp); + tuple->t_len = ItemIdGetLength(lpp); + ItemPointerSet(&(tuple->t_self), scan->rs_cblock, lineoff); + + /* skip any tuples that don't match the scan key */ + if (key != NULL && + !HeapKeyTest(tuple, RelationGetDescr(scan->rs_base.rs_rd), + nkeys, key)) + continue; + + scan->rs_cindex = lineindex; + return; + } + } + + /* end of scan */ + if (BufferIsValid(scan->rs_cbuf)) + ReleaseBuffer(scan->rs_cbuf); + scan->rs_cbuf = InvalidBuffer; + scan->rs_cblock = InvalidBlockNumber; + scan->rs_prefetch_block = InvalidBlockNumber; + tuple->t_data = NULL; + scan->rs_inited = false; +} + + +/* ---------------------------------------------------------------- + * heap access method interface + * ---------------------------------------------------------------- + */ + + +TableScanDesc +tdeheap_beginscan(Relation relation, Snapshot snapshot, + int nkeys, ScanKey key, + ParallelTableScanDesc parallel_scan, + uint32 flags) +{ + HeapScanDesc scan; + + /* + * increment relation ref count while scanning relation + * + * This is just to make really sure the relcache entry won't go away while + * the scan has a pointer to it. Caller should be holding the rel open + * anyway, so this is redundant in all normal scenarios... + */ + RelationIncrementReferenceCount(relation); + + /* + * allocate and initialize scan descriptor + */ + scan = (HeapScanDesc) palloc(sizeof(HeapScanDescData)); + + scan->rs_base.rs_rd = relation; + scan->rs_base.rs_snapshot = snapshot; + scan->rs_base.rs_nkeys = nkeys; + scan->rs_base.rs_flags = flags; + scan->rs_base.rs_parallel = parallel_scan; + scan->rs_strategy = NULL; /* set in initscan */ + scan->rs_vmbuffer = InvalidBuffer; + scan->rs_empty_tuples_pending = 0; + + /* + * Disable page-at-a-time mode if it's not a MVCC-safe snapshot. + */ + if (!(snapshot && IsMVCCSnapshot(snapshot))) + scan->rs_base.rs_flags &= ~SO_ALLOW_PAGEMODE; + + /* + * For seqscan and sample scans in a serializable transaction, acquire a + * predicate lock on the entire relation. This is required not only to + * lock all the matching tuples, but also to conflict with new insertions + * into the table. In an indexscan, we take page locks on the index pages + * covering the range specified in the scan qual, but in a heap scan there + * is nothing more fine-grained to lock. A bitmap scan is a different + * story, there we have already scanned the index and locked the index + * pages covering the predicate. But in that case we still have to lock + * any matching heap tuples. For sample scan we could optimize the locking + * to be at least page-level granularity, but we'd need to add per-tuple + * locking for that. + */ + if (scan->rs_base.rs_flags & (SO_TYPE_SEQSCAN | SO_TYPE_SAMPLESCAN)) + { + /* + * Ensure a missing snapshot is noticed reliably, even if the + * isolation mode means predicate locking isn't performed (and + * therefore the snapshot isn't used here). 
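+ *
+ * (Outside SERIALIZABLE, PredicateLockRelation() returns without doing
+ * anything, which is why the Assert below is needed to notice a missing
+ * snapshot reliably.)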
+ */ + Assert(snapshot); + PredicateLockRelation(relation, snapshot); + } + + /* we only need to set this up once */ + scan->rs_ctup.t_tableOid = RelationGetRelid(relation); + + /* + * Allocate memory to keep track of page allocation for parallel workers + * when doing a parallel scan. + */ + if (parallel_scan != NULL) + scan->rs_parallelworkerdata = palloc(sizeof(ParallelBlockTableScanWorkerData)); + else + scan->rs_parallelworkerdata = NULL; + + /* + * we do this here instead of in initscan() because tdeheap_rescan also calls + * initscan() and we don't want to allocate memory again + */ + if (nkeys > 0) + scan->rs_base.rs_key = (ScanKey) palloc(sizeof(ScanKeyData) * nkeys); + else + scan->rs_base.rs_key = NULL; + + initscan(scan, key, false); + + scan->rs_read_stream = NULL; + + /* + * Set up a read stream for sequential scans and TID range scans. This + * should be done after initscan() because initscan() allocates the + * BufferAccessStrategy object passed to the read stream API. + */ + if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN || + scan->rs_base.rs_flags & SO_TYPE_TIDRANGESCAN) + { + ReadStreamBlockNumberCB cb; + + if (scan->rs_base.rs_parallel) + cb = tdeheap_scan_stream_read_next_parallel; + else + cb = tdeheap_scan_stream_read_next_serial; + + scan->rs_read_stream = read_stream_begin_relation(READ_STREAM_SEQUENTIAL, + scan->rs_strategy, + scan->rs_base.rs_rd, + MAIN_FORKNUM, + cb, + scan, + 0); + } + + + return (TableScanDesc) scan; +} + +void +tdeheap_rescan(TableScanDesc sscan, ScanKey key, bool set_params, + bool allow_strat, bool allow_sync, bool allow_pagemode) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + if (set_params) + { + if (allow_strat) + scan->rs_base.rs_flags |= SO_ALLOW_STRAT; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_STRAT; + + if (allow_sync) + scan->rs_base.rs_flags |= SO_ALLOW_SYNC; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC; + + if (allow_pagemode && scan->rs_base.rs_snapshot && + IsMVCCSnapshot(scan->rs_base.rs_snapshot)) + scan->rs_base.rs_flags |= SO_ALLOW_PAGEMODE; + else + scan->rs_base.rs_flags &= ~SO_ALLOW_PAGEMODE; + } + + /* + * unpin scan buffers + */ + if (BufferIsValid(scan->rs_cbuf)) + ReleaseBuffer(scan->rs_cbuf); + + if (BufferIsValid(scan->rs_vmbuffer)) + { + ReleaseBuffer(scan->rs_vmbuffer); + scan->rs_vmbuffer = InvalidBuffer; + } + + /* + * Reset rs_empty_tuples_pending, a field only used by bitmap heap scan, + * to avoid incorrectly emitting NULL-filled tuples from a previous scan + * on rescan. + */ + scan->rs_empty_tuples_pending = 0; + + /* + * The read stream is reset on rescan. This must be done before + * initscan(), as some state referred to by read_stream_reset() is reset + * in initscan(). + */ + if (scan->rs_read_stream) + read_stream_reset(scan->rs_read_stream); + + /* + * reinitialize scan descriptor + */ + initscan(scan, key, true); +} + +void +tdeheap_endscan(TableScanDesc sscan) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + /* Note: no locking manipulations needed */ + + /* + * unpin scan buffers + */ + if (BufferIsValid(scan->rs_cbuf)) + ReleaseBuffer(scan->rs_cbuf); + + if (BufferIsValid(scan->rs_vmbuffer)) + ReleaseBuffer(scan->rs_vmbuffer); + + /* + * Must free the read stream before freeing the BufferAccessStrategy. 
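+ * (The stream was created with scan->rs_strategy in tdeheap_beginscan(),
+ * so it must not outlive that strategy object.)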
+ */ + if (scan->rs_read_stream) + read_stream_end(scan->rs_read_stream); + + /* + * decrement relation reference count and free scan descriptor storage + */ + RelationDecrementReferenceCount(scan->rs_base.rs_rd); + + if (scan->rs_base.rs_key) + pfree(scan->rs_base.rs_key); + + if (scan->rs_strategy != NULL) + FreeAccessStrategy(scan->rs_strategy); + + if (scan->rs_parallelworkerdata != NULL) + pfree(scan->rs_parallelworkerdata); + + if (scan->rs_base.rs_flags & SO_TEMP_SNAPSHOT) + UnregisterSnapshot(scan->rs_base.rs_snapshot); + + pfree(scan); +} + +HeapTuple +tdeheap_getnext(TableScanDesc sscan, ScanDirection direction) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + /* + * This is still widely used directly, without going through table AM, so + * add a safety check. It's possible we should, at a later point, + * downgrade this to an assert. The reason for checking the AM routine, + * rather than the AM oid, is that this allows to write regression tests + * that create another AM reusing the heap handler. + */ + if (unlikely(sscan->rs_rd->rd_tableam != GetPGTdeamTableAmRoutine())) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg_internal("only pg_tde AM is supported"))); + + /* + * We don't expect direct calls to tdeheap_getnext with valid CheckXidAlive + * for catalog or regular tables. See detailed comments in xact.c where + * these variables are declared. Normally we have such a check at tableam + * level API but this is called from many places so we need to ensure it + * here. + */ + if (unlikely(TransactionIdIsValid(CheckXidAlive) && !bsysscan)) + elog(ERROR, "unexpected tdeheap_getnext call during logical decoding"); + + /* Note: no locking manipulations needed */ + + if (scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) + tdeheapgettup_pagemode(scan, direction, + scan->rs_base.rs_nkeys, scan->rs_base.rs_key); + else + tdeheapgettup(scan, direction, + scan->rs_base.rs_nkeys, scan->rs_base.rs_key); + + if (scan->rs_ctup.t_data == NULL) + return NULL; + + /* + * if we get here it means we have a new current scan tuple, so point to + * the proper return buffer and return the tuple. + */ + + pgstat_count_tdeheap_getnext(scan->rs_base.rs_rd); + + return &scan->rs_ctup; +} + +bool +tdeheap_getnextslot(TableScanDesc sscan, ScanDirection direction, TupleTableSlot *slot) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + + /* Note: no locking manipulations needed */ + + if (sscan->rs_flags & SO_ALLOW_PAGEMODE) + tdeheapgettup_pagemode(scan, direction, sscan->rs_nkeys, sscan->rs_key); + else + tdeheapgettup(scan, direction, sscan->rs_nkeys, sscan->rs_key); + + if (scan->rs_ctup.t_data == NULL) + { + ExecClearTuple(slot); + return false; + } + + /* + * if we get here it means we have a new current scan tuple, so point to + * the proper return buffer and return the tuple. + */ + + pgstat_count_tdeheap_getnext(scan->rs_base.rs_rd); + + PGTdeExecStoreBufferHeapTuple(sscan->rs_rd, &scan->rs_ctup, slot, + scan->rs_cbuf); + return true; +} + +void +tdeheap_set_tidrange(TableScanDesc sscan, ItemPointer mintid, + ItemPointer maxtid) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + BlockNumber startBlk; + BlockNumber numBlks; + ItemPointerData highestItem; + ItemPointerData lowestItem; + + /* + * For relations without any pages, we can simply leave the TID range + * unset. There will be no tuples to scan, therefore no tuples outside + * the given TID range. 
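+ *
+ * A hypothetical usage sketch (mintid, maxtid and slot are illustrative):
+ *
+ * tdeheap_set_tidrange(sscan, &mintid, &maxtid);
+ * while (tdeheap_getnextslot_tidrange(sscan, ForwardScanDirection, slot))
+ * ... process each tuple in the range ...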
+ */ + if (scan->rs_nblocks == 0) + return; + + /* + * Set up some ItemPointers which point to the first and last possible + * tuples in the heap. + */ + ItemPointerSet(&highestItem, scan->rs_nblocks - 1, MaxOffsetNumber); + ItemPointerSet(&lowestItem, 0, FirstOffsetNumber); + + /* + * If the given maximum TID is below the highest possible TID in the + * relation, then restrict the range to that, otherwise we scan to the end + * of the relation. + */ + if (ItemPointerCompare(maxtid, &highestItem) < 0) + ItemPointerCopy(maxtid, &highestItem); + + /* + * If the given minimum TID is above the lowest possible TID in the + * relation, then restrict the range to only scan for TIDs above that. + */ + if (ItemPointerCompare(mintid, &lowestItem) > 0) + ItemPointerCopy(mintid, &lowestItem); + + /* + * Check for an empty range and protect from would be negative results + * from the numBlks calculation below. + */ + if (ItemPointerCompare(&highestItem, &lowestItem) < 0) + { + /* Set an empty range of blocks to scan */ + tdeheap_setscanlimits(sscan, 0, 0); + return; + } + + /* + * Calculate the first block and the number of blocks we must scan. We + * could be more aggressive here and perform some more validation to try + * and further narrow the scope of blocks to scan by checking if the + * lowestItem has an offset above MaxOffsetNumber. In this case, we could + * advance startBlk by one. Likewise, if highestItem has an offset of 0 + * we could scan one fewer blocks. However, such an optimization does not + * seem worth troubling over, currently. + */ + startBlk = ItemPointerGetBlockNumberNoCheck(&lowestItem); + + numBlks = ItemPointerGetBlockNumberNoCheck(&highestItem) - + ItemPointerGetBlockNumberNoCheck(&lowestItem) + 1; + + /* Set the start block and number of blocks to scan */ + tdeheap_setscanlimits(sscan, startBlk, numBlks); + + /* Finally, set the TID range in sscan */ + ItemPointerCopy(&lowestItem, &sscan->rs_mintid); + ItemPointerCopy(&highestItem, &sscan->rs_maxtid); +} + +bool +tdeheap_getnextslot_tidrange(TableScanDesc sscan, ScanDirection direction, + TupleTableSlot *slot) +{ + HeapScanDesc scan = (HeapScanDesc) sscan; + ItemPointer mintid = &sscan->rs_mintid; + ItemPointer maxtid = &sscan->rs_maxtid; + + /* Note: no locking manipulations needed */ + for (;;) + { + if (sscan->rs_flags & SO_ALLOW_PAGEMODE) + tdeheapgettup_pagemode(scan, direction, sscan->rs_nkeys, sscan->rs_key); + else + tdeheapgettup(scan, direction, sscan->rs_nkeys, sscan->rs_key); + + if (scan->rs_ctup.t_data == NULL) + { + ExecClearTuple(slot); + return false; + } + + /* + * tdeheap_set_tidrange will have used tdeheap_setscanlimits to limit the + * range of pages we scan to only ones that can contain the TID range + * we're scanning for. Here we must filter out any tuples from these + * pages that are outside of that range. + */ + if (ItemPointerCompare(&scan->rs_ctup.t_self, mintid) < 0) + { + ExecClearTuple(slot); + + /* + * When scanning backwards, the TIDs will be in descending order. + * Future tuples in this direction will be lower still, so we can + * just return false to indicate there will be no more tuples. + */ + if (ScanDirectionIsBackward(direction)) + return false; + + continue; + } + + /* + * Likewise for the final page, we must filter out TIDs greater than + * maxtid. + */ + if (ItemPointerCompare(&scan->rs_ctup.t_self, maxtid) > 0) + { + ExecClearTuple(slot); + + /* + * When scanning forward, the TIDs will be in ascending order. 
+ * Future tuples in this direction will be higher still, so we can + * just return false to indicate there will be no more tuples. + */ + if (ScanDirectionIsForward(direction)) + return false; + continue; + } + + break; + } + + /* + * if we get here it means we have a new current scan tuple, so point to + * the proper return buffer and return the tuple. + */ + pgstat_count_tdeheap_getnext(scan->rs_base.rs_rd); + + PGTdeExecStoreBufferHeapTuple(sscan->rs_rd, &scan->rs_ctup, slot, scan->rs_cbuf); + return true; +} + +/* + * tdeheap_fetch - retrieve tuple with given tid + * + * On entry, tuple->t_self is the TID to fetch. We pin the buffer holding + * the tuple, fill in the remaining fields of *tuple, and check the tuple + * against the specified snapshot. + * + * If successful (tuple found and passes snapshot time qual), then *userbuf + * is set to the buffer holding the tuple and true is returned. The caller + * must unpin the buffer when done with the tuple. + * + * If the tuple is not found (ie, item number references a deleted slot), + * then tuple->t_data is set to NULL, *userbuf is set to InvalidBuffer, + * and false is returned. + * + * If the tuple is found but fails the time qual check, then the behavior + * depends on the keep_buf parameter. If keep_buf is false, the results + * are the same as for the tuple-not-found case. If keep_buf is true, + * then tuple->t_data and *userbuf are returned as for the success case, + * and again the caller must unpin the buffer; but false is returned. + * + * tdeheap_fetch does not follow HOT chains: only the exact TID requested will + * be fetched. + * + * It is somewhat inconsistent that we ereport() on invalid block number but + * return false on invalid item number. There are a couple of reasons though. + * One is that the caller can relatively easily check the block number for + * validity, but cannot check the item number without reading the page + * himself. Another is that when we are following a t_ctid link, we can be + * reasonably confident that the page number is valid (since VACUUM shouldn't + * truncate off the destination page without having killed the referencing + * tuple first), but the item number might well not be good. + */ +bool +tdeheap_fetch(Relation relation, + Snapshot snapshot, + HeapTuple tuple, + Buffer *userbuf, + bool keep_buf) +{ + ItemPointer tid = &(tuple->t_self); + ItemId lp; + Buffer buffer; + Page page; + OffsetNumber offnum; + bool valid; + + /* + * Fetch and pin the appropriate page of the relation. + */ + buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid)); + + /* + * Need share lock on buffer to examine tuple commit status. + */ + LockBuffer(buffer, BUFFER_LOCK_SHARE); + page = BufferGetPage(buffer); + + /* + * We'd better check for out-of-range offnum in case of VACUUM since the + * TID was obtained. + */ + offnum = ItemPointerGetOffsetNumber(tid); + if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page)) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + ReleaseBuffer(buffer); + *userbuf = InvalidBuffer; + tuple->t_data = NULL; + return false; + } + + /* + * get the item line pointer corresponding to the requested tid + */ + lp = PageGetItemId(page, offnum); + + /* + * Must check for deleted tuple. 
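+ * (ItemIdIsNormal() is false for unused, redirected, and dead line
+ * pointers; any such TID is reported as "tuple not found".)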
+ */ + if (!ItemIdIsNormal(lp)) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + ReleaseBuffer(buffer); + *userbuf = InvalidBuffer; + tuple->t_data = NULL; + return false; + } + + /* + * fill in *tuple fields + */ + tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp); + tuple->t_len = ItemIdGetLength(lp); + tuple->t_tableOid = RelationGetRelid(relation); + + /* + * check tuple visibility, then release lock + */ + valid = HeapTupleSatisfiesVisibility(tuple, snapshot, buffer); + + if (valid) + PredicateLockTID(relation, &(tuple->t_self), snapshot, + HeapTupleHeaderGetXmin(tuple->t_data)); + + HeapCheckForSerializableConflictOut(valid, relation, tuple, buffer, snapshot); + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + if (valid) + { + /* + * All checks passed, so return the tuple as valid. Caller is now + * responsible for releasing the buffer. + */ + *userbuf = buffer; + + return true; + } + + /* Tuple failed time qual, but maybe caller wants to see it anyway. */ + if (keep_buf) + *userbuf = buffer; + else + { + ReleaseBuffer(buffer); + *userbuf = InvalidBuffer; + tuple->t_data = NULL; + } + + return false; +} + +/* + * tdeheap_hot_search_buffer - search HOT chain for tuple satisfying snapshot + * + * On entry, *tid is the TID of a tuple (either a simple tuple, or the root + * of a HOT chain), and buffer is the buffer holding this tuple. We search + * for the first chain member satisfying the given snapshot. If one is + * found, we update *tid to reference that tuple's offset number, and + * return true. If no match, return false without modifying *tid. + * + * heapTuple is a caller-supplied buffer. When a match is found, we return + * the tuple here, in addition to updating *tid. If no match is found, the + * contents of this buffer on return are undefined. + * + * If all_dead is not NULL, we check non-visible tuples to see if they are + * globally dead; *all_dead is set true if all members of the HOT chain + * are vacuumable, false if not. + * + * Unlike tdeheap_fetch, the caller must already have pin and (at least) share + * lock on the buffer; it is still pinned/locked at exit. + */ +bool +tdeheap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer, + Snapshot snapshot, HeapTuple heapTuple, + bool *all_dead, bool first_call) +{ + Page page = BufferGetPage(buffer); + TransactionId prev_xmax = InvalidTransactionId; + BlockNumber blkno; + OffsetNumber offnum; + bool at_chain_start; + bool valid; + bool skip; + GlobalVisState *vistest = NULL; + + /* If this is not the first call, previous call returned a (live!) 
tuple */ + if (all_dead) + *all_dead = first_call; + + blkno = ItemPointerGetBlockNumber(tid); + offnum = ItemPointerGetOffsetNumber(tid); + at_chain_start = first_call; + skip = !first_call; + + /* XXX: we should assert that a snapshot is pushed or registered */ + Assert(TransactionIdIsValid(RecentXmin)); + Assert(BufferGetBlockNumber(buffer) == blkno); + + /* Scan through possible multiple members of HOT-chain */ + for (;;) + { + ItemId lp; + + /* check for bogus TID */ + if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page)) + break; + + lp = PageGetItemId(page, offnum); + + /* check for unused, dead, or redirected items */ + if (!ItemIdIsNormal(lp)) + { + /* We should only see a redirect at start of chain */ + if (ItemIdIsRedirected(lp) && at_chain_start) + { + /* Follow the redirect */ + offnum = ItemIdGetRedirect(lp); + at_chain_start = false; + continue; + } + /* else must be end of chain */ + break; + } + + /* + * Update heapTuple to point to the element of the HOT chain we're + * currently investigating. Having t_self set correctly is important + * because the SSI checks and the *Satisfies routine for historical + * MVCC snapshots need the correct tid to decide about the visibility. + */ + heapTuple->t_data = (HeapTupleHeader) PageGetItem(page, lp); + heapTuple->t_len = ItemIdGetLength(lp); + heapTuple->t_tableOid = RelationGetRelid(relation); + ItemPointerSet(&heapTuple->t_self, blkno, offnum); + + /* + * Shouldn't see a HEAP_ONLY tuple at chain start. + */ + if (at_chain_start && HeapTupleIsHeapOnly(heapTuple)) + break; + + /* + * The xmin should match the previous xmax value, else chain is + * broken. + */ + if (TransactionIdIsValid(prev_xmax) && + !TransactionIdEquals(prev_xmax, + HeapTupleHeaderGetXmin(heapTuple->t_data))) + break; + + /* + * When first_call is true (and thus, skip is initially false) we'll + * return the first tuple we find. But on later passes, heapTuple + * will initially be pointing to the tuple we returned last time. + * Returning it again would be incorrect (and would loop forever), so + * we skip it and return the next match we find. + */ + if (!skip) + { + /* If it's visible per the snapshot, we must return it */ + valid = HeapTupleSatisfiesVisibility(heapTuple, snapshot, buffer); + HeapCheckForSerializableConflictOut(valid, relation, heapTuple, + buffer, snapshot); + + if (valid) + { + ItemPointerSetOffsetNumber(tid, offnum); + PredicateLockTID(relation, &heapTuple->t_self, snapshot, + HeapTupleHeaderGetXmin(heapTuple->t_data)); + if (all_dead) + *all_dead = false; + return true; + } + } + skip = false; + + /* + * If we can't see it, maybe no one else can either. At caller + * request, check whether all chain members are dead to all + * transactions. + * + * Note: if you change the criterion here for what is "dead", fix the + * planner's get_actual_variable_range() function to match. + */ + if (all_dead && *all_dead) + { + if (!vistest) + vistest = GlobalVisTestFor(relation); + + if (!HeapTupleIsSurelyDead(heapTuple, vistest)) + *all_dead = false; + } + + /* + * Check to see if HOT chain continues past this tuple; if so fetch + * the next offnum and loop around. 
+ */ + if (HeapTupleIsHotUpdated(heapTuple)) + { + Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) == + blkno); + offnum = ItemPointerGetOffsetNumber(&heapTuple->t_data->t_ctid); + at_chain_start = false; + prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple->t_data); + } + else + break; /* end of chain */ + } + + return false; +} + +/* + * tdeheap_get_latest_tid - get the latest tid of a specified tuple + * + * Actually, this gets the latest version that is visible according to the + * scan's snapshot. Create a scan using SnapshotDirty to get the very latest, + * possibly uncommitted version. + * + * *tid is both an input and an output parameter: it is updated to + * show the latest version of the row. Note that it will not be changed + * if no version of the row passes the snapshot test. + */ +void +tdeheap_get_latest_tid(TableScanDesc sscan, + ItemPointer tid) +{ + Relation relation = sscan->rs_rd; + Snapshot snapshot = sscan->rs_snapshot; + ItemPointerData ctid; + TransactionId priorXmax; + + /* + * table_tuple_get_latest_tid() verified that the passed in tid is valid. + * Assume that t_ctid links are valid however - there shouldn't be invalid + * ones in the table. + */ + Assert(ItemPointerIsValid(tid)); + + /* + * Loop to chase down t_ctid links. At top of loop, ctid is the tuple we + * need to examine, and *tid is the TID we will return if ctid turns out + * to be bogus. + * + * Note that we will loop until we reach the end of the t_ctid chain. + * Depending on the snapshot passed, there might be at most one visible + * version of the row, but we don't try to optimize for that. + */ + ctid = *tid; + priorXmax = InvalidTransactionId; /* cannot check first XMIN */ + for (;;) + { + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp; + HeapTupleData tp; + bool valid; + + /* + * Read, pin, and lock the page. + */ + buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(&ctid)); + LockBuffer(buffer, BUFFER_LOCK_SHARE); + page = BufferGetPage(buffer); + + /* + * Check for bogus item number. This is not treated as an error + * condition because it can happen while following a t_ctid link. We + * just assume that the prior tid is OK and return it unchanged. + */ + offnum = ItemPointerGetOffsetNumber(&ctid); + if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page)) + { + UnlockReleaseBuffer(buffer); + break; + } + lp = PageGetItemId(page, offnum); + if (!ItemIdIsNormal(lp)) + { + UnlockReleaseBuffer(buffer); + break; + } + + /* OK to access the tuple */ + tp.t_self = ctid; + tp.t_data = (HeapTupleHeader) PageGetItem(page, lp); + tp.t_len = ItemIdGetLength(lp); + tp.t_tableOid = RelationGetRelid(relation); + + /* + * After following a t_ctid link, we might arrive at an unrelated + * tuple. Check for XMIN match. + */ + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(tp.t_data))) + { + UnlockReleaseBuffer(buffer); + break; + } + + /* + * Check tuple visibility; if visible, set it as the new result + * candidate. + */ + valid = HeapTupleSatisfiesVisibility(&tp, snapshot, buffer); + HeapCheckForSerializableConflictOut(valid, relation, &tp, buffer, snapshot); + if (valid) + *tid = ctid; + + /* + * If there's a valid t_ctid link, follow it, else we're done. 
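+ * The chain ends when xmax is invalid or was only a locker, when the
+ * tuple was moved to another partition, or when t_ctid points back at
+ * the tuple itself.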
+ */ + if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) || + HeapTupleHeaderIsOnlyLocked(tp.t_data) || + HeapTupleHeaderIndicatesMovedPartitions(tp.t_data) || + ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid)) + { + UnlockReleaseBuffer(buffer); + break; + } + + ctid = tp.t_data->t_ctid; + priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data); + UnlockReleaseBuffer(buffer); + } /* end of loop */ +} + + +/* + * UpdateXmaxHintBits - update tuple hint bits after xmax transaction ends + * + * This is called after we have waited for the XMAX transaction to terminate. + * If the transaction aborted, we guarantee the XMAX_INVALID hint bit will + * be set on exit. If the transaction committed, we set the XMAX_COMMITTED + * hint bit if possible --- but beware that that may not yet be possible, + * if the transaction committed asynchronously. + * + * Note that if the transaction was a locker only, we set HEAP_XMAX_INVALID + * even if it commits. + * + * Hence callers should look only at XMAX_INVALID. + * + * Note this is not allowed for tuples whose xmax is a multixact. + */ +static void +UpdateXmaxHintBits(HeapTupleHeader tuple, Buffer buffer, TransactionId xid) +{ + Assert(TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple), xid)); + Assert(!(tuple->t_infomask & HEAP_XMAX_IS_MULTI)); + + if (!(tuple->t_infomask & (HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID))) + { + if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) && + TransactionIdDidCommit(xid)) + HeapTupleSetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + xid); + else + HeapTupleSetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + } +} + + +/* + * GetBulkInsertState - prepare status object for a bulk insert + */ +BulkInsertState +GetBulkInsertState(void) +{ + BulkInsertState bistate; + + bistate = (BulkInsertState) palloc(sizeof(BulkInsertStateData)); + bistate->strategy = GetAccessStrategy(BAS_BULKWRITE); + bistate->current_buf = InvalidBuffer; + bistate->next_free = InvalidBlockNumber; + bistate->last_free = InvalidBlockNumber; + bistate->already_extended_by = 0; + return bistate; +} + +/* + * FreeBulkInsertState - clean up after finishing a bulk insert + */ +void +FreeBulkInsertState(BulkInsertState bistate) +{ + if (bistate->current_buf != InvalidBuffer) + ReleaseBuffer(bistate->current_buf); + FreeAccessStrategy(bistate->strategy); + pfree(bistate); +} + +/* + * ReleaseBulkInsertStatePin - release a buffer currently held in bistate + */ +void +ReleaseBulkInsertStatePin(BulkInsertState bistate) +{ + if (bistate->current_buf != InvalidBuffer) + ReleaseBuffer(bistate->current_buf); + bistate->current_buf = InvalidBuffer; + + /* + * Despite the name, we also reset bulk relation extension state. + * Otherwise we can end up erroring out due to looking for free space in + * ->next_free of one partition, even though ->next_free was set when + * extending another partition. It could obviously also be bad for + * efficiency to look at existing blocks at offsets from another + * partition, even if we don't error out. + */ + bistate->next_free = InvalidBlockNumber; + bistate->last_free = InvalidBlockNumber; +} + + +/* + * tdeheap_insert - insert tuple into a heap + * + * The new tuple is stamped with current transaction ID and the specified + * command ID. + * + * See table_tuple_insert for comments about most of the input flags, except + * that this routine directly takes a tuple rather than a slot. 
+ *
+ * There's corresponding HEAP_INSERT_ options to all the TABLE_INSERT_
+ * options, and there additionally is HEAP_INSERT_SPECULATIVE which is used to
+ * implement table_tuple_insert_speculative().
+ *
+ * On return the header fields of *tup are updated to match the stored tuple;
+ * in particular tup->t_self receives the actual TID where the tuple was
+ * stored. But note that any toasting of fields within the tuple data is NOT
+ * reflected into *tup.
+ */
+void
+tdeheap_insert(Relation relation, HeapTuple tup, CommandId cid,
+ int options, BulkInsertState bistate)
+{
+ TransactionId xid = GetCurrentTransactionId();
+ HeapTuple heaptup;
+ Buffer buffer;
+ Buffer vmbuffer = InvalidBuffer;
+ bool all_visible_cleared = false;
+
+ /* Cheap, simplistic check that the tuple matches the rel's rowtype. */
+ Assert(HeapTupleHeaderGetNatts(tup->t_data) <=
+ RelationGetNumberOfAttributes(relation));
+
+ /*
+ * Fill in tuple header fields and toast the tuple if necessary.
+ *
+ * Note: below this point, heaptup is the data we actually intend to store
+ * into the relation; tup is the caller's original untoasted data.
+ */
+ heaptup = tdeheap_prepare_insert(relation, tup, xid, cid, options);
+
+ /*
+ * Find buffer to insert this tuple into. If the page is all visible,
+ * this will also pin the requisite visibility map page.
+ */
+ buffer = tdeheap_RelationGetBufferForTuple(relation, heaptup->t_len,
+ InvalidBuffer, options, bistate,
+ &vmbuffer, NULL,
+ 0);
+
+ /*
+ * We're about to do the actual insert -- but check for conflict first, to
+ * avoid possibly having to roll back work we've just done.
+ *
+ * This is safe without a recheck as long as there is no possibility of
+ * another process scanning the page between this check and the insert
+ * being visible to the scan (i.e., an exclusive buffer content lock is
+ * continuously held from this point until the tuple insert is visible).
+ *
+ * For a heap insert, we only need to check for table-level SSI locks. Our
+ * new tuple can't possibly conflict with existing tuple locks, and heap
+ * page locks are only consolidated versions of tuple locks; they do not
+ * lock "gaps" as index page locks do. So we don't need to specify a
+ * buffer when making the call, which makes for a faster check.
+ */
+ CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber);
+
+ /*
+ * Make sure the relation keys are in the cache to avoid pallocs in
+ * the critical section.
+ */
+ GetHeapBaiscRelationKey(relation->rd_locator);
+
+ /* NO EREPORT(ERROR) from here till changes are logged */
+ START_CRIT_SECTION();
+
+ tdeheap_RelationPutHeapTuple(relation, buffer, heaptup,
+ (options & HEAP_INSERT_TDE_NO_ENCRYPT) == 0,
+ (options & HEAP_INSERT_SPECULATIVE) != 0);
+
+ if (PageIsAllVisible(BufferGetPage(buffer)))
+ {
+ all_visible_cleared = true;
+ PageClearAllVisible(BufferGetPage(buffer));
+ tdeheap_visibilitymap_clear(relation,
+ ItemPointerGetBlockNumber(&(heaptup->t_self)),
+ vmbuffer, VISIBILITYMAP_VALID_BITS);
+ }
+
+ /*
+ * XXX Should we set PageSetPrunable on this page ?
+ *
+ * The inserting transaction may eventually abort thus making this tuple
+ * DEAD and hence available for pruning. Though we don't want to optimize
+ * for aborts, if no other tuple in this page is UPDATEd/DELETEd, the
+ * aborted tuple will never be pruned until next vacuum is triggered.
+ *
+ * If you do add PageSetPrunable here, add it in tdeheap_xlog_insert too.
+ */ + + MarkBufferDirty(buffer); + + /* XLOG stuff */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_insert xlrec; + xl_tdeheap_header xlhdr; + XLogRecPtr recptr; + Page page = BufferGetPage(buffer); + uint8 info = XLOG_HEAP_INSERT; + int bufflags = 0; + PageHeader phdr; + + /* + * If this is a catalog, we need to transmit combo CIDs to properly + * decode, so log that as well. + */ + if (RelationIsAccessibleInLogicalDecoding(relation)) + log_tdeheap_new_cid(relation, heaptup); + + /* + * If this is the single and first tuple on page, we can reinit the + * page instead of restoring the whole thing. Set flag, and hide + * buffer references from XLogInsert. + */ + if (ItemPointerGetOffsetNumber(&(heaptup->t_self)) == FirstOffsetNumber && + PageGetMaxOffsetNumber(page) == FirstOffsetNumber) + { + info |= XLOG_HEAP_INIT_PAGE; + bufflags |= REGBUF_WILL_INIT; + } + + xlrec.offnum = ItemPointerGetOffsetNumber(&heaptup->t_self); + xlrec.flags = 0; + if (all_visible_cleared) + xlrec.flags |= XLH_INSERT_ALL_VISIBLE_CLEARED; + if (options & HEAP_INSERT_SPECULATIVE) + xlrec.flags |= XLH_INSERT_IS_SPECULATIVE; + Assert(ItemPointerGetBlockNumber(&heaptup->t_self) == BufferGetBlockNumber(buffer)); + + /* + * For logical decoding, we need the tuple even if we're doing a full + * page write, so make sure it's included even if we take a full-page + * image. (XXX We could alternatively store a pointer into the FPW). + */ + if (RelationIsLogicallyLogged(relation) && + !(options & HEAP_INSERT_NO_LOGICAL)) + { + xlrec.flags |= XLH_INSERT_CONTAINS_NEW_TUPLE; + bufflags |= REGBUF_KEEP_DATA; + + if (IsToastRelation(relation)) + xlrec.flags |= XLH_INSERT_ON_TOAST_RELATION; + } + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapInsert); + + xlhdr.t_infomask2 = heaptup->t_data->t_infomask2; + xlhdr.t_infomask = heaptup->t_data->t_infomask; + xlhdr.t_hoff = heaptup->t_data->t_hoff; + + /* + * note we mark xlhdr as belonging to buffer; if XLogInsert decides to + * write the whole page to the xlog, we don't need to store + * xl_tdeheap_header in the xlog. + */ + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD | bufflags); + XLogRegisterBufData(0, (char *) &xlhdr, SizeOfHeapHeader); + /* register encrypted tuple data from the buffer */ + phdr = (PageHeader) BufferGetPage(buffer); + /* PG73FORMAT: write bitmap [+ padding] [+ oid] + data */ + XLogRegisterBufData(0, + ((char *) phdr) + phdr->pd_upper + SizeofHeapTupleHeader, + heaptup->t_len - SizeofHeapTupleHeader); + + /* filtering by origin on a row level is much more efficient */ + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); + + recptr = XLogInsert(RM_HEAP_ID, info); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + UnlockReleaseBuffer(buffer); + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + + /* + * If tuple is cachable, mark it for invalidation from the caches in case + * we abort. Note it is OK to do this after releasing the buffer, because + * the heaptup data structure is all in local memory, not in the shared + * buffer. + */ + CacheInvalidateHeapTuple(relation, heaptup, NULL); + + /* Note: speculative insertions are counted too, even if aborted later */ + pgstat_count_tdeheap_insert(relation, 1); + + /* + * If heaptup is a private copy, release it. Don't forget to copy t_self + * back to the caller's image, too. + */ + if (heaptup != tup) + { + tup->t_self = heaptup->t_self; + tdeheap_freetuple(heaptup); + } +} + +/* + * Subroutine for tdeheap_insert(). Prepares a tuple for insertion. 
This sets the + * tuple header fields and toasts the tuple if necessary. Returns a toasted + * version of the tuple if it was toasted, or the original tuple if not. Note + * that in any case, the header fields are also set in the original tuple. + */ +static HeapTuple +tdeheap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid, + CommandId cid, int options) +{ + /* + * To allow parallel inserts, we need to ensure that they are safe to be + * performed in workers. We have the infrastructure to allow parallel + * inserts in general except for the cases where inserts generate a new + * CommandId (eg. inserts into a table having a foreign key column). + */ + if (IsParallelWorker()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_STATE), + errmsg("cannot insert tuples in a parallel worker"))); + + tup->t_data->t_infomask &= ~(HEAP_XACT_MASK); + tup->t_data->t_infomask2 &= ~(HEAP2_XACT_MASK); + tup->t_data->t_infomask |= HEAP_XMAX_INVALID; + HeapTupleHeaderSetXmin(tup->t_data, xid); + if (options & HEAP_INSERT_FROZEN) + HeapTupleHeaderSetXminFrozen(tup->t_data); + + HeapTupleHeaderSetCmin(tup->t_data, cid); + HeapTupleHeaderSetXmax(tup->t_data, 0); /* for cleanliness */ + tup->t_tableOid = RelationGetRelid(relation); + + /* + * If the new tuple is too big for storage or contains already toasted + * out-of-line attributes from some other relation, invoke the toaster. + */ + if (relation->rd_rel->relkind != RELKIND_RELATION && + relation->rd_rel->relkind != RELKIND_MATVIEW) + { + /* toast table entries should never be recursively toasted */ + Assert(!HeapTupleHasExternal(tup)); + return tup; + } + else if (HeapTupleHasExternal(tup) || tup->t_len > TOAST_TUPLE_THRESHOLD) + return tdeheap_toast_insert_or_update(relation, tup, NULL, options); + else + return tup; +} + +/* + * Helper for tdeheap_multi_insert() that computes the number of entire pages + * that inserting the remaining heaptuples requires. Used to determine how + * much the relation needs to be extended by. + */ +static int +tdeheap_multi_insert_pages(HeapTuple *heaptuples, int done, int ntuples, Size saveFreeSpace) +{ + size_t page_avail = BLCKSZ - SizeOfPageHeaderData - saveFreeSpace; + int npages = 1; + + for (int i = done; i < ntuples; i++) + { + size_t tup_sz = sizeof(ItemIdData) + MAXALIGN(heaptuples[i]->t_len); + + if (page_avail < tup_sz) + { + npages++; + page_avail = BLCKSZ - SizeOfPageHeaderData - saveFreeSpace; + } + page_avail -= tup_sz; + } + + return npages; +} + +/* + * tdeheap_multi_insert - insert multiple tuples into a heap + * + * This is like tdeheap_insert(), but inserts multiple tuples in one operation. + * That's faster than calling tdeheap_insert() in a loop, because when multiple + * tuples can be inserted on a single page, we can write just a single WAL + * record covering all of them, and only need to lock/unlock the page once. + * + * Note: this leaks memory into the current memory context. You can create a + * temporary context before calling this, if that's a problem. 
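+ *
+ * Callers normally reach this through the table AM multi_insert
+ * callback (for example, COPY FROM) rather than invoking it directly.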
+ */ +void +tdeheap_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples, + CommandId cid, int options, BulkInsertState bistate) +{ + TransactionId xid = GetCurrentTransactionId(); + HeapTuple *heaptuples; + int i; + int ndone; + PGAlignedBlock scratch; + Page page; + Buffer vmbuffer = InvalidBuffer; + bool needwal; + Size saveFreeSpace; + bool need_tuple_data = RelationIsLogicallyLogged(relation); + bool need_cids = RelationIsAccessibleInLogicalDecoding(relation); + bool starting_with_empty_page = false; + int npages = 0; + int npages_used = 0; + + /* currently not needed (thus unsupported) for tdeheap_multi_insert() */ + Assert(!(options & HEAP_INSERT_NO_LOGICAL)); + + needwal = RelationNeedsWAL(relation); + saveFreeSpace = RelationGetTargetPageFreeSpace(relation, + HEAP_DEFAULT_FILLFACTOR); + + /* Toast and set header data in all the slots */ + heaptuples = palloc(ntuples * sizeof(HeapTuple)); + for (i = 0; i < ntuples; i++) + { + HeapTuple tuple; + + tuple = ExecFetchSlotHeapTuple(slots[i], true, NULL); + slots[i]->tts_tableOid = RelationGetRelid(relation); + tuple->t_tableOid = slots[i]->tts_tableOid; + heaptuples[i] = tdeheap_prepare_insert(relation, tuple, xid, cid, + options); + } + + /* + * We're about to do the actual inserts -- but check for conflict first, + * to minimize the possibility of having to roll back work we've just + * done. + * + * A check here does not definitively prevent a serialization anomaly; + * that check MUST be done at least past the point of acquiring an + * exclusive buffer content lock on every buffer that will be affected, + * and MAY be done after all inserts are reflected in the buffers and + * those locks are released; otherwise there is a race condition. Since + * multiple buffers can be locked and unlocked in the loop below, and it + * would not be feasible to identify and lock all of those buffers before + * the loop, we must do a final check at the end. + * + * The check here could be omitted with no loss of correctness; it is + * present strictly as an optimization. + * + * For heap inserts, we only need to check for table-level SSI locks. Our + * new tuples can't possibly conflict with existing tuple locks, and heap + * page locks are only consolidated versions of tuple locks; they do not + * lock "gaps" as index page locks do. So we don't need to specify a + * buffer when making the call, which makes for a faster check. + */ + CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber); + + ndone = 0; + while (ndone < ntuples) + { + Buffer buffer; + bool all_visible_cleared = false; + bool all_frozen_set = false; + int nthispage; + + CHECK_FOR_INTERRUPTS(); + + /* + * Compute number of pages needed to fit the to-be-inserted tuples in + * the worst case. This will be used to determine how much to extend + * the relation by in tdeheap_RelationGetBufferForTuple(), if needed. If we + * filled a prior page from scratch, we can just update our last + * computation, but if we started with a partially filled page, + * recompute from scratch, the number of potentially required pages + * can vary due to tuples needing to fit onto the page, page headers + * etc. + */ + if (ndone == 0 || !starting_with_empty_page) + { + npages = tdeheap_multi_insert_pages(heaptuples, ndone, ntuples, + saveFreeSpace); + npages_used = 0; + } + else + npages_used++; + + /* + * Find buffer where at least the next tuple will fit. If the page is + * all-visible, this will also pin the requisite visibility map page. 
+ *
+ * Also pin visibility map page if COPY FREEZE inserts tuples into an
+ * empty page. See all_frozen_set below.
+ */
+ buffer = tdeheap_RelationGetBufferForTuple(relation, heaptuples[ndone]->t_len,
+ InvalidBuffer, options, bistate,
+ &vmbuffer, NULL,
+ npages - npages_used);
+ page = BufferGetPage(buffer);
+
+ starting_with_empty_page = PageGetMaxOffsetNumber(page) == 0;
+
+ if (starting_with_empty_page && (options & HEAP_INSERT_FROZEN))
+ all_frozen_set = true;
+
+ /*
+ * Make sure the relation keys are in the cache to avoid pallocs in
+ * the critical section.
+ */
+ GetHeapBaiscRelationKey(relation->rd_locator);
+
+ /* NO EREPORT(ERROR) from here till changes are logged */
+ START_CRIT_SECTION();
+
+ /*
+ * tdeheap_RelationGetBufferForTuple has ensured that the first tuple fits.
+ * Put that on the page, and then as many other tuples as fit.
+ */
+ tdeheap_RelationPutHeapTuple(relation, buffer, heaptuples[ndone], true, false);
+
+ /*
+ * For logical decoding we need combo CIDs to properly decode the
+ * catalog.
+ */
+ if (needwal && need_cids)
+ log_tdeheap_new_cid(relation, heaptuples[ndone]);
+
+ for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
+ {
+ HeapTuple heaptup = heaptuples[ndone + nthispage];
+
+ if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
+ break;
+
+ tdeheap_RelationPutHeapTuple(relation, buffer, heaptup, true, false);
+
+ /*
+ * For logical decoding we need combo CIDs to properly decode the
+ * catalog.
+ */
+ if (needwal && need_cids)
+ log_tdeheap_new_cid(relation, heaptup);
+ }
+
+ /*
+ * If the page is all visible, need to clear that, unless we're only
+ * going to add further frozen rows to it.
+ *
+ * If we're only adding already frozen rows to a previously empty
+ * page, mark it as all-visible.
+ */
+ if (PageIsAllVisible(page) && !(options & HEAP_INSERT_FROZEN))
+ {
+ all_visible_cleared = true;
+ PageClearAllVisible(page);
+ tdeheap_visibilitymap_clear(relation,
+ BufferGetBlockNumber(buffer),
+ vmbuffer, VISIBILITYMAP_VALID_BITS);
+ }
+ else if (all_frozen_set)
+ PageSetAllVisible(page);
+
+ /*
+ * XXX Should we set PageSetPrunable on this page ? See tdeheap_insert()
+ */
+
+ MarkBufferDirty(buffer);
+
+ /* XLOG stuff */
+ if (needwal)
+ {
+ XLogRecPtr recptr;
+ xl_tdeheap_multi_insert *xlrec;
+ uint8 info = XLOG_HEAP2_MULTI_INSERT;
+ char *tupledata;
+ int totaldatalen;
+ char *scratchptr = scratch.data;
+ bool init;
+ int bufflags = 0;
+
+ /*
+ * If the page was previously empty, we can reinit the page
+ * instead of restoring the whole thing.
+ */
+ init = starting_with_empty_page;
+
+ /* allocate xl_tdeheap_multi_insert struct from the scratch area */
+ xlrec = (xl_tdeheap_multi_insert *) scratchptr;
+ scratchptr += SizeOfHeapMultiInsert;
+
+ /*
+ * Allocate offsets array. Unless we're reinitializing the page,
+ * in that case the tuples are stored in order starting at
+ * FirstOffsetNumber and we don't need to store the offsets
+ * explicitly.
+ */
+ if (!init)
+ scratchptr += nthispage * sizeof(OffsetNumber);
+
+ /* the rest of the scratch space is used for tuple data */
+ tupledata = scratchptr;
+
+ /* check that the mutually exclusive flags are not both set */
+ Assert(!(all_visible_cleared && all_frozen_set));
+
+ xlrec->flags = 0;
+ if (all_visible_cleared)
+ xlrec->flags = XLH_INSERT_ALL_VISIBLE_CLEARED;
+ if (all_frozen_set)
+ xlrec->flags = XLH_INSERT_ALL_FROZEN_SET;
+
+ xlrec->ntuples = nthispage;
+
+ /*
+ * Write out an xl_multi_insert_tuple and the tuple data itself
+ * for each tuple.
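+ * Note that the tuple data is copied from the page, where it is
+ * already encrypted, rather than from the in-memory tuple.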
+ */ + for (i = 0; i < nthispage; i++) + { + HeapTuple heaptup = heaptuples[ndone + i]; + xl_multi_insert_tuple *tuphdr; + int datalen; + char *tup_data_on_page; + + if (!init) + xlrec->offsets[i] = ItemPointerGetOffsetNumber(&heaptup->t_self); + /* xl_multi_insert_tuple needs two-byte alignment. */ + tuphdr = (xl_multi_insert_tuple *) SHORTALIGN(scratchptr); + scratchptr = ((char *) tuphdr) + SizeOfMultiInsertTuple; + + tuphdr->t_infomask2 = heaptup->t_data->t_infomask2; + tuphdr->t_infomask = heaptup->t_data->t_infomask; + tuphdr->t_hoff = heaptup->t_data->t_hoff; + + /* Point to an encrypted tuple data in the Buffer */ + tup_data_on_page = (char *) page + ItemIdGetOffset(PageGetItemId(page, heaptup->t_self.ip_posid)); + /* write bitmap [+ padding] [+ oid] + data */ + datalen = heaptup->t_len - SizeofHeapTupleHeader; + memcpy(scratchptr, + tup_data_on_page + SizeofHeapTupleHeader, + datalen); + tuphdr->datalen = datalen; + scratchptr += datalen; + } + totaldatalen = scratchptr - tupledata; + Assert((scratchptr - scratch.data) < BLCKSZ); + + if (need_tuple_data) + xlrec->flags |= XLH_INSERT_CONTAINS_NEW_TUPLE; + + /* + * Signal that this is the last xl_tdeheap_multi_insert record + * emitted by this call to tdeheap_multi_insert(). Needed for logical + * decoding so it knows when to cleanup temporary data. + */ + if (ndone + nthispage == ntuples) + xlrec->flags |= XLH_INSERT_LAST_IN_MULTI; + + if (init) + { + info |= XLOG_HEAP_INIT_PAGE; + bufflags |= REGBUF_WILL_INIT; + } + + /* + * If we're doing logical decoding, include the new tuple data + * even if we take a full-page image of the page. + */ + if (need_tuple_data) + bufflags |= REGBUF_KEEP_DATA; + + XLogBeginInsert(); + XLogRegisterData((char *) xlrec, tupledata - scratch.data); + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD | bufflags); + + XLogRegisterBufData(0, tupledata, totaldatalen); + + /* filtering by origin on a row level is much more efficient */ + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); + + recptr = XLogInsert(RM_HEAP2_ID, info); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + /* + * If we've frozen everything on the page, update the visibilitymap. + * We're already holding pin on the vmbuffer. + */ + if (all_frozen_set) + { + Assert(PageIsAllVisible(page)); + Assert(tdeheap_visibilitymap_pin_ok(BufferGetBlockNumber(buffer), vmbuffer)); + + /* + * It's fine to use InvalidTransactionId here - this is only used + * when HEAP_INSERT_FROZEN is specified, which intentionally + * violates visibility rules. + */ + tdeheap_visibilitymap_set(relation, BufferGetBlockNumber(buffer), buffer, + InvalidXLogRecPtr, vmbuffer, + InvalidTransactionId, + VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN); + } + + UnlockReleaseBuffer(buffer); + ndone += nthispage; + + /* + * NB: Only release vmbuffer after inserting all tuples - it's fairly + * likely that we'll insert into subsequent heap pages that are likely + * to use the same vm page. + */ + } + + /* We're done with inserting all tuples, so release the last vmbuffer. */ + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + + /* + * We're done with the actual inserts. Check for conflicts again, to + * ensure that all rw-conflicts in to these inserts are detected. Without + * this final check, a sequential scan of the heap may have locked the + * table after the "before" check, missing one opportunity to detect the + * conflict, and then scanned the table before the new tuples were there, + * missing the other chance to detect the conflict. 
+ * + * For heap inserts, we only need to check for table-level SSI locks. Our + * new tuples can't possibly conflict with existing tuple locks, and heap + * page locks are only consolidated versions of tuple locks; they do not + * lock "gaps" as index page locks do. So we don't need to specify a + * buffer when making the call. + */ + CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber); + + /* + * If tuples are cachable, mark them for invalidation from the caches in + * case we abort. Note it is OK to do this after releasing the buffer, + * because the heaptuples data structure is all in local memory, not in + * the shared buffer. + */ + if (IsCatalogRelation(relation)) + { + for (i = 0; i < ntuples; i++) + CacheInvalidateHeapTuple(relation, heaptuples[i], NULL); + } + + /* copy t_self fields back to the caller's slots */ + for (i = 0; i < ntuples; i++) + slots[i]->tts_tid = heaptuples[i]->t_self; + + pgstat_count_tdeheap_insert(relation, ntuples); +} + +/* + * simple_tdeheap_insert - insert a tuple + * + * Currently, this routine differs from tdeheap_insert only in supplying + * a default command ID and not allowing access to the speedup options. + * + * This should be used rather than using tdeheap_insert directly in most places + * where we are modifying system catalogs. + */ +void +simple_tdeheap_insert(Relation relation, HeapTuple tup) +{ + tdeheap_insert(relation, tup, GetCurrentCommandId(true), 0, NULL); +} + +/* + * Given infomask/infomask2, compute the bits that must be saved in the + * "infobits" field of xl_tdeheap_delete, xl_tdeheap_update, xl_tdeheap_lock, + * xl_tdeheap_lock_updated WAL records. + * + * See fix_infomask_from_infobits. + */ +static uint8 +compute_infobits(uint16 infomask, uint16 infomask2) +{ + return + ((infomask & HEAP_XMAX_IS_MULTI) != 0 ? XLHL_XMAX_IS_MULTI : 0) | + ((infomask & HEAP_XMAX_LOCK_ONLY) != 0 ? XLHL_XMAX_LOCK_ONLY : 0) | + ((infomask & HEAP_XMAX_EXCL_LOCK) != 0 ? XLHL_XMAX_EXCL_LOCK : 0) | + /* note we ignore HEAP_XMAX_SHR_LOCK here */ + ((infomask & HEAP_XMAX_KEYSHR_LOCK) != 0 ? XLHL_XMAX_KEYSHR_LOCK : 0) | + ((infomask2 & HEAP_KEYS_UPDATED) != 0 ? + XLHL_KEYS_UPDATED : 0); +} + +/* + * Given two versions of the same t_infomask for a tuple, compare them and + * return whether the relevant status for a tuple Xmax has changed. This is + * used after a buffer lock has been released and reacquired: we want to ensure + * that the tuple state continues to be the same it was when we previously + * examined it. + * + * Note the Xmax field itself must be compared separately. + */ +static inline bool +xmax_infomask_changed(uint16 new_infomask, uint16 old_infomask) +{ + const uint16 interesting = + HEAP_XMAX_IS_MULTI | HEAP_XMAX_LOCK_ONLY | HEAP_LOCK_MASK; + + if ((new_infomask & interesting) != (old_infomask & interesting)) + return true; + + return false; +} + +/* + * tdeheap_delete - delete a tuple + * + * See table_tuple_delete() for an explanation of the parameters, except that + * this routine directly takes a tuple rather than a slot. + * + * In the failure cases, the routine fills *tmfd with the tuple's t_ctid, + * t_xmax (resolving a possible MultiXact, if necessary), and t_cmax (the last + * only for TM_SelfModified, since we cannot obtain cmax from a combo CID + * generated by another transaction). 
+ */ +TM_Result +tdeheap_delete(Relation relation, ItemPointer tid, + CommandId cid, Snapshot crosscheck, bool wait, + TM_FailureData *tmfd, bool changingPart) +{ + TM_Result result; + TransactionId xid = GetCurrentTransactionId(); + ItemId lp; + HeapTupleData tp; + Page page; + BlockNumber block; + Buffer buffer; + Buffer vmbuffer = InvalidBuffer; + TransactionId new_xmax; + uint16 new_infomask, + new_infomask2; + bool have_tuple_lock = false; + bool iscombo; + bool all_visible_cleared = false; + HeapTuple old_key_tuple = NULL; /* replica identity of the tuple */ + bool old_key_copied = false; + HeapTuple decrypted_tuple; + + Assert(ItemPointerIsValid(tid)); + + /* + * Forbid this during a parallel operation, lest it allocate a combo CID. + * Other workers might need that combo CID for visibility checks, and we + * have no provision for broadcasting it to them. + */ + if (IsInParallelMode()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_STATE), + errmsg("cannot delete tuples during a parallel operation"))); + + block = ItemPointerGetBlockNumber(tid); + buffer = ReadBuffer(relation, block); + page = BufferGetPage(buffer); + + /* + * Before locking the buffer, pin the visibility map page if it appears to + * be necessary. Since we haven't got the lock yet, someone else might be + * in the middle of changing this, so we'll need to recheck after we have + * the lock. + */ + if (PageIsAllVisible(page)) + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + + lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid)); + Assert(ItemIdIsNormal(lp)); + + tp.t_tableOid = RelationGetRelid(relation); + tp.t_data = (HeapTupleHeader) PageGetItem(page, lp); + tp.t_len = ItemIdGetLength(lp); + tp.t_self = *tid; + +l1: + + /* + * If we didn't pin the visibility map page and the page has become all + * visible while we were busy locking the buffer, we'll have to unlock and + * re-lock, to avoid holding the buffer lock across an I/O. That's a bit + * unfortunate, but hopefully shouldn't happen often. + */ + if (vmbuffer == InvalidBuffer && PageIsAllVisible(page)) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + } + + result = HeapTupleSatisfiesUpdate(&tp, cid, buffer); + + if (result == TM_Invisible) + { + UnlockReleaseBuffer(buffer); + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("attempted to delete invisible tuple"))); + } + else if (result == TM_BeingModified && wait) + { + TransactionId xwait; + uint16 infomask; + + /* must copy state data before unlocking buffer */ + xwait = HeapTupleHeaderGetRawXmax(tp.t_data); + infomask = tp.t_data->t_infomask; + + /* + * Sleep until concurrent transaction ends -- except when there's a + * single locker and it's our own transaction. Note we don't care + * which lock mode the locker has, because we need the strongest one. + * + * Before sleeping, we need to acquire tuple lock to establish our + * priority for the tuple (see tdeheap_lock_tuple). LockTuple will + * release us when we are next-in-line for the tuple. + * + * If we are forced to "start over" below, we keep the tuple lock; + * this arranges that we stay at the head of the line while rechecking + * tuple state. 
+ */
+ if (infomask & HEAP_XMAX_IS_MULTI)
+ {
+ bool current_is_member = false;
+
+ if (DoesMultiXactIdConflict((MultiXactId) xwait, infomask,
+ LockTupleExclusive, &current_is_member))
+ {
+ LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+
+ /*
+ * Acquire the lock, if necessary (but skip it when we're
+ * requesting a lock and already have one; avoids deadlock).
+ */
+ if (!current_is_member)
+ tdeheap_acquire_tuplock(relation, &(tp.t_self), LockTupleExclusive,
+ LockWaitBlock, &have_tuple_lock);
+
+ /* wait for multixact */
+ MultiXactIdWait((MultiXactId) xwait, MultiXactStatusUpdate, infomask,
+ relation, &(tp.t_self), XLTW_Delete,
+ NULL);
+ LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+ /*
+ * If xwait had just locked the tuple then some other xact
+ * could update this tuple before we get to this point. Check
+ * for xmax change, and start over if so.
+ *
+ * We also must start over if we didn't pin the VM page, and
+ * the page has become all visible.
+ */
+ if ((vmbuffer == InvalidBuffer && PageIsAllVisible(page)) ||
+ xmax_infomask_changed(tp.t_data->t_infomask, infomask) ||
+ !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tp.t_data),
+ xwait))
+ goto l1;
+ }
+
+ /*
+ * You might think the multixact is necessarily done here, but not
+ * so: it could have surviving members, namely our own xact or
+ * other subxacts of this backend. It is legal for us to delete
+ * the tuple in either case, however (the latter case is
+ * essentially a situation of upgrading our former shared lock to
+ * exclusive). We don't bother changing the on-disk hint bits
+ * since we are about to overwrite the xmax altogether.
+ */
+ }
+ else if (!TransactionIdIsCurrentTransactionId(xwait))
+ {
+ /*
+ * Wait for regular transaction to end; but first, acquire tuple
+ * lock.
+ */
+ LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+ tdeheap_acquire_tuplock(relation, &(tp.t_self), LockTupleExclusive,
+ LockWaitBlock, &have_tuple_lock);
+ XactLockTableWait(xwait, relation, &(tp.t_self), XLTW_Delete);
+ LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+ /*
+ * xwait is done, but if xwait had just locked the tuple then some
+ * other xact could update this tuple before we get to this point.
+ * Check for xmax change, and start over if so.
+ *
+ * We also must start over if we didn't pin the VM page, and the
+ * page has become all visible.
+ */
+ if ((vmbuffer == InvalidBuffer && PageIsAllVisible(page)) ||
+ xmax_infomask_changed(tp.t_data->t_infomask, infomask) ||
+ !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tp.t_data),
+ xwait))
+ goto l1;
+
+ /* Otherwise check if it committed or aborted */
+ UpdateXmaxHintBits(tp.t_data, buffer, xwait);
+ }
+
+ /*
+ * We may overwrite if previous xmax aborted, or if it committed but
+ * only locked the tuple without updating it.
+ */ + if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) || + HEAP_XMAX_IS_LOCKED_ONLY(tp.t_data->t_infomask) || + HeapTupleHeaderIsOnlyLocked(tp.t_data)) + result = TM_Ok; + else if (!ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid)) + result = TM_Updated; + else + result = TM_Deleted; + } + + /* sanity check the result HeapTupleSatisfiesUpdate() and the logic above */ + if (result != TM_Ok) + { + Assert(result == TM_SelfModified || + result == TM_Updated || + result == TM_Deleted || + result == TM_BeingModified); + Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID)); + Assert(result != TM_Updated || + !ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid)); + } + + if (crosscheck != InvalidSnapshot && result == TM_Ok) + { + /* Perform additional check for transaction-snapshot mode RI updates */ + if (!HeapTupleSatisfiesVisibility(&tp, crosscheck, buffer)) + result = TM_Updated; + } + + if (result != TM_Ok) + { + tmfd->ctid = tp.t_data->t_ctid; + tmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data); + if (result == TM_SelfModified) + tmfd->cmax = HeapTupleHeaderGetCmax(tp.t_data); + else + tmfd->cmax = InvalidCommandId; + UnlockReleaseBuffer(buffer); + if (have_tuple_lock) + UnlockTupleTuplock(relation, &(tp.t_self), LockTupleExclusive); + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + return result; + } + + /* + * We're about to do the actual delete -- check for conflict first, to + * avoid possibly having to roll back work we've just done. + * + * This is safe without a recheck as long as there is no possibility of + * another process scanning the page between this check and the delete + * being visible to the scan (i.e., an exclusive buffer content lock is + * continuously held from this point until the tuple delete is visible). + */ + CheckForSerializableConflictIn(relation, tid, BufferGetBlockNumber(buffer)); + + /* replace cid with a combo CID if necessary */ + HeapTupleHeaderAdjustCmax(tp.t_data, &cid, &iscombo); + + /* + * Compute replica identity tuple before entering the critical section so + * we don't PANIC upon a memory allocation failure. + * + * ExtractReplicaIdentity has to get a decrypted tuple, otherwise it + * won't be able to extract varlen attributes. + */ + decrypted_tuple = tdeheap_copytuple(&tp); + PG_TDE_DECRYPT_TUPLE(&tp, decrypted_tuple, GetHeapBaiscRelationKey(relation->rd_locator)); + + old_key_tuple = ExtractReplicaIdentity(relation, decrypted_tuple, true, &old_key_copied); + + /* + * If this is the first possibly-multixact-able operation in the current + * transaction, set my per-backend OldestMemberMXactId setting. We can be + * certain that the transaction will never become a member of any older + * MultiXactIds than that. (We have to do this even if we end up just + * using our own TransactionId below, since some other backend could + * incorporate our XID into a MultiXact immediately afterwards.) + */ + MultiXactIdSetOldestMember(); + + compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(tp.t_data), + tp.t_data->t_infomask, tp.t_data->t_infomask2, + xid, LockTupleExclusive, true, + &new_xmax, &new_infomask, &new_infomask2); + + START_CRIT_SECTION(); + + /* + * If this transaction commits, the tuple will become DEAD sooner or + * later. Set flag that this page is a candidate for pruning once our xid + * falls below the OldestXmin horizon. If the transaction finally aborts, + * the subsequent page pruning will be a no-op and the hint will be + * cleared. 
+ */ + PageSetPrunable(page, xid); + + if (PageIsAllVisible(page)) + { + all_visible_cleared = true; + PageClearAllVisible(page); + tdeheap_visibilitymap_clear(relation, BufferGetBlockNumber(buffer), + vmbuffer, VISIBILITYMAP_VALID_BITS); + } + + /* store transaction information of xact deleting the tuple */ + tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + tp.t_data->t_infomask |= new_infomask; + tp.t_data->t_infomask2 |= new_infomask2; + HeapTupleHeaderClearHotUpdated(tp.t_data); + HeapTupleHeaderSetXmax(tp.t_data, new_xmax); + HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo); + /* Make sure there is no forward chain link in t_ctid */ + tp.t_data->t_ctid = tp.t_self; + + /* Signal that this is actually a move into another partition */ + if (changingPart) + HeapTupleHeaderSetMovedPartitions(tp.t_data); + + MarkBufferDirty(buffer); + + /* + * XLOG stuff + * + * NB: tdeheap_abort_speculative() uses the same xlog record and replay + * routines. + */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_delete xlrec; + xl_tdeheap_header xlhdr; + XLogRecPtr recptr; + + /* + * For logical decode we need combo CIDs to properly decode the + * catalog + */ + if (RelationIsAccessibleInLogicalDecoding(relation)) + log_tdeheap_new_cid(relation, &tp); + + xlrec.flags = 0; + if (all_visible_cleared) + xlrec.flags |= XLH_DELETE_ALL_VISIBLE_CLEARED; + if (changingPart) + xlrec.flags |= XLH_DELETE_IS_PARTITION_MOVE; + xlrec.infobits_set = compute_infobits(tp.t_data->t_infomask, + tp.t_data->t_infomask2); + xlrec.offnum = ItemPointerGetOffsetNumber(&tp.t_self); + xlrec.xmax = new_xmax; + + if (old_key_tuple != NULL) + { + if (relation->rd_rel->relreplident == REPLICA_IDENTITY_FULL) + xlrec.flags |= XLH_DELETE_CONTAINS_OLD_TUPLE; + else + xlrec.flags |= XLH_DELETE_CONTAINS_OLD_KEY; + } + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapDelete); + + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + + /* + * Log replica identity of the deleted tuple if there is one + */ + if (old_key_tuple != NULL) + { + xlhdr.t_infomask2 = old_key_tuple->t_data->t_infomask2; + xlhdr.t_infomask = old_key_tuple->t_data->t_infomask; + xlhdr.t_hoff = old_key_tuple->t_data->t_hoff; + + XLogRegisterData((char *) &xlhdr, SizeOfHeapHeader); + XLogRegisterData((char *) old_key_tuple->t_data + + SizeofHeapTupleHeader, + old_key_tuple->t_len + - SizeofHeapTupleHeader); + } + + /* filtering by origin on a row level is much more efficient */ + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_DELETE); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + + /* + * If the tuple has toasted out-of-line attributes, we need to delete + * those items too. We have to do this before releasing the buffer + * because we need to look at the contents of the tuple, but it's OK to + * release the content lock on the buffer first. 
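+ * For pg_tde this works on the decrypted copy of the tuple, since the
+ * external TOAST pointers cannot be extracted from the encrypted
+ * on-page image.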
+ */
+ if (relation->rd_rel->relkind != RELKIND_RELATION &&
+ relation->rd_rel->relkind != RELKIND_MATVIEW)
+ {
+ /* toast table entries should never be recursively toasted */
+ Assert(!HeapTupleHasExternal(&tp));
+ }
+ else if (HeapTupleHasExternal(&tp))
+ {
+ /*
+ * tdeheap_toast_delete needs the decrypted tuple to extract external
+ * attributes
+ */
+ tdeheap_toast_delete(relation, decrypted_tuple, false);
+ }
+
+ tdeheap_freetuple(decrypted_tuple);
+
+ /*
+ * Mark tuple for invalidation from system caches at next command
+ * boundary. We have to do this before releasing the buffer because we
+ * need to look at the contents of the tuple.
+ */
+ CacheInvalidateHeapTuple(relation, &tp, NULL);
+
+ /* Now we can release the buffer */
+ ReleaseBuffer(buffer);
+
+ /*
+ * Release the lmgr tuple lock, if we had it.
+ */
+ if (have_tuple_lock)
+ UnlockTupleTuplock(relation, &(tp.t_self), LockTupleExclusive);
+
+ pgstat_count_tdeheap_delete(relation);
+
+ if (old_key_tuple != NULL && old_key_copied)
+ tdeheap_freetuple(old_key_tuple);
+
+ return TM_Ok;
+}
+
+/*
+ * simple_tdeheap_delete - delete a tuple
+ *
+ * This routine may be used to delete a tuple when concurrent updates of
+ * the target tuple are not expected (for example, because we have a lock
+ * on the relation associated with the tuple). Any failure is reported
+ * via ereport().
+ */
+void
+simple_tdeheap_delete(Relation relation, ItemPointer tid)
+{
+ TM_Result result;
+ TM_FailureData tmfd;
+
+ result = tdeheap_delete(relation, tid,
+ GetCurrentCommandId(true), InvalidSnapshot,
+ true /* wait for commit */ ,
+ &tmfd, false /* changingPart */ );
+ switch (result)
+ {
+ case TM_SelfModified:
+ /* Tuple was already updated in current command? */
+ elog(ERROR, "tuple already updated by self");
+ break;
+
+ case TM_Ok:
+ /* done successfully */
+ break;
+
+ case TM_Updated:
+ elog(ERROR, "tuple concurrently updated");
+ break;
+
+ case TM_Deleted:
+ elog(ERROR, "tuple concurrently deleted");
+ break;
+
+ default:
+ elog(ERROR, "unrecognized tdeheap_delete status: %u", result);
+ break;
+ }
+}
+
+/*
+ * tdeheap_update - replace a tuple
+ *
+ * See table_tuple_update() for an explanation of the parameters, except that
+ * this routine directly takes a tuple rather than a slot.
+ *
+ * In the failure cases, the routine fills *tmfd with the tuple's t_ctid,
+ * t_xmax (resolving a possible MultiXact, if necessary), and t_cmax (the last
+ * only for TM_SelfModified, since we cannot obtain cmax from a combo CID
+ * generated by another transaction).
+ */ +TM_Result +tdeheap_update(Relation relation, ItemPointer otid, HeapTuple newtup, + CommandId cid, Snapshot crosscheck, bool wait, + TM_FailureData *tmfd, LockTupleMode *lockmode, + TU_UpdateIndexes *update_indexes) +{ + TM_Result result; + TransactionId xid = GetCurrentTransactionId(); + Bitmapset *hot_attrs; + Bitmapset *sum_attrs; + Bitmapset *key_attrs; + Bitmapset *id_attrs; + Bitmapset *interesting_attrs; + Bitmapset *modified_attrs; + ItemId lp; + HeapTupleData oldtup; + HeapTupleData oldtup_decrypted; + void* oldtup_data; + HeapTuple heaptup; + HeapTuple old_key_tuple = NULL; + bool old_key_copied = false; + Page page; + BlockNumber block; + MultiXactStatus mxact_status; + Buffer buffer, + newbuf, + vmbuffer = InvalidBuffer, + vmbuffer_new = InvalidBuffer; + bool need_toast; + Size newtupsize, + pagefree; + bool have_tuple_lock = false; + bool iscombo; + bool use_hot_update = false; + bool summarized_update = false; + bool key_intact; + bool all_visible_cleared = false; + bool all_visible_cleared_new = false; + bool checked_lockers; + bool locker_remains; + bool id_has_external = false; + TransactionId xmax_new_tuple, + xmax_old_tuple; + uint16 infomask_old_tuple, + infomask2_old_tuple, + infomask_new_tuple, + infomask2_new_tuple; + + Assert(ItemPointerIsValid(otid)); + + /* Cheap, simplistic check that the tuple matches the rel's rowtype. */ + Assert(HeapTupleHeaderGetNatts(newtup->t_data) <= + RelationGetNumberOfAttributes(relation)); + + /* + * Forbid this during a parallel operation, lest it allocate a combo CID. + * Other workers might need that combo CID for visibility checks, and we + * have no provision for broadcasting it to them. + */ + if (IsInParallelMode()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_STATE), + errmsg("cannot update tuples during a parallel operation"))); + + /* + * Fetch the list of attributes to be checked for various operations. + * + * For HOT considerations, this is wasted effort if we fail to update or + * have to put the new tuple on a different page. But we must compute the + * list before obtaining buffer lock --- in the worst case, if we are + * doing an update on one of the relevant system catalogs, we could + * deadlock if we try to fetch the list later. In any case, the relcache + * caches the data so this is usually pretty cheap. + * + * We also need columns used by the replica identity and columns that are + * considered the "key" of rows in the table. + * + * Note that we get copies of each bitmap, so we need not worry about + * relcache flush happening midway through. + */ + hot_attrs = RelationGetIndexAttrBitmap(relation, + INDEX_ATTR_BITMAP_HOT_BLOCKING); + sum_attrs = RelationGetIndexAttrBitmap(relation, + INDEX_ATTR_BITMAP_SUMMARIZED); + key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY); + id_attrs = RelationGetIndexAttrBitmap(relation, + INDEX_ATTR_BITMAP_IDENTITY_KEY); + interesting_attrs = NULL; + interesting_attrs = bms_add_members(interesting_attrs, hot_attrs); + interesting_attrs = bms_add_members(interesting_attrs, sum_attrs); + interesting_attrs = bms_add_members(interesting_attrs, key_attrs); + interesting_attrs = bms_add_members(interesting_attrs, id_attrs); + + block = ItemPointerGetBlockNumber(otid); + buffer = ReadBuffer(relation, block); + page = BufferGetPage(buffer); + + /* + * Before locking the buffer, pin the visibility map page if it appears to + * be necessary. 
Since we haven't got the lock yet, someone else might be
+ * in the middle of changing this, so we'll need to recheck after we have
+ * the lock.
+ */
+ if (PageIsAllVisible(page))
+ tdeheap_visibilitymap_pin(relation, block, &vmbuffer);
+
+ LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+ lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
+ Assert(ItemIdIsNormal(lp));
+
+ /*
+ * Fill in enough data in oldtup for HeapDetermineColumnsInfo to work
+ * properly.
+ */
+ oldtup.t_tableOid = RelationGetRelid(relation);
+ oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+ oldtup_data = oldtup.t_data;
+ oldtup.t_len = ItemIdGetLength(lp);
+ oldtup.t_self = *otid;
+ /* decrypt the old tuple */
+ {
+ char* new_ptr = NULL;
+ new_ptr = MemoryContextAlloc(CurTransactionContext, oldtup.t_len);
+ memcpy(new_ptr, oldtup.t_data, oldtup.t_data->t_hoff);
+ /* copy only the necessary part, i.e. the tuple header */
+ oldtup_decrypted.t_data = (HeapTupleHeader) new_ptr;
+ }
+ PG_TDE_DECRYPT_TUPLE(&oldtup, &oldtup_decrypted,
+ GetHeapBaiscRelationKey(relation->rd_locator));
+
+ /*
+ * Point oldtup at the decrypted copy now. We can't do it earlier, as
+ * PG_TDE_DECRYPT_TUPLE uses the t_data address in its calculations.
+ */
+ oldtup.t_data = oldtup_decrypted.t_data;
+
+ /* the new tuple is ready, except for this: */
+ newtup->t_tableOid = RelationGetRelid(relation);
+
+ /*
+ * Determine columns modified by the update. Additionally, identify
+ * whether any of the unmodified replica identity key attributes in the
+ * old tuple is externally stored or not. This is required because for
+ * such attributes the flattened value won't be WAL logged as part of the
+ * new tuple so we must include it as part of the old_key_tuple. See
+ * ExtractReplicaIdentity.
+ */
+ modified_attrs = HeapDetermineColumnsInfo(relation, interesting_attrs,
+ id_attrs, &oldtup,
+ newtup, &id_has_external);
+
+ /*
+ * If we're not updating any "key" column, we can grab a weaker lock type.
+ * This allows for more concurrency when we are running simultaneously
+ * with foreign key checks.
+ *
+ * Note that if a column gets detoasted while executing the update, but
+ * the value ends up being the same, this test will fail and we will use
+ * the stronger lock. This is acceptable; the important case to optimize
+ * is updates that don't manipulate key columns, not those that
+ * serendipitously arrive at the same key values.
+ */
+ if (!bms_overlap(modified_attrs, key_attrs))
+ {
+ *lockmode = LockTupleNoKeyExclusive;
+ mxact_status = MultiXactStatusNoKeyUpdate;
+ key_intact = true;
+
+ /*
+ * If this is the first possibly-multixact-able operation in the
+ * current transaction, set my per-backend OldestMemberMXactId
+ * setting. We can be certain that the transaction will never become a
+ * member of any older MultiXactIds than that. (We have to do this
+ * even if we end up just using our own TransactionId below, since
+ * some other backend could incorporate our XID into a MultiXact
+ * immediately afterwards.)
+ */
+ MultiXactIdSetOldestMember();
+ }
+ else
+ {
+ *lockmode = LockTupleExclusive;
+ mxact_status = MultiXactStatusUpdate;
+ key_intact = false;
+ }
+
+ /*
+ * Note: beyond this point, use oldtup not otid to refer to old tuple.
+ * otid may very well point at newtup->t_self, which we will overwrite
+ * with the new tuple's location, so there's great risk of confusion if we
+ * use otid anymore.
+ */
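+
+ /*
+ * Restore the on-page (encrypted) tuple pointer: the decrypted copy was
+ * needed only for HeapDetermineColumnsInfo() above, while the visibility
+ * and locking checks below must examine the buffer version of the tuple.
+ */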
+	 */
+
+	oldtup.t_data = oldtup_data;
+
+l2:
+	checked_lockers = false;
+	locker_remains = false;
+	result = HeapTupleSatisfiesUpdate(&oldtup, cid, buffer);
+
+	/* see below about the "no wait" case */
+	Assert(result != TM_BeingModified || wait);
+
+	if (result == TM_Invisible)
+	{
+		UnlockReleaseBuffer(buffer);
+		ereport(ERROR,
+				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				 errmsg("attempted to update invisible tuple")));
+	}
+	else if (result == TM_BeingModified && wait)
+	{
+		TransactionId xwait;
+		uint16		infomask;
+		bool		can_continue = false;
+
+		/*
+		 * XXX note that we don't consider the "no wait" case here.  This
+		 * isn't a problem currently because no caller uses that case, but it
+		 * should be fixed if such a caller is introduced.  It wasn't a
+		 * problem previously because this code would always wait, but now
+		 * that some tuple locks do not conflict with one of the lock modes we
+		 * use, it is possible that this case is interesting to handle
+		 * specially.
+		 *
+		 * This may cause failures with third-party code that calls
+		 * tdeheap_update directly.
+		 */
+
+		/* must copy state data before unlocking buffer */
+		xwait = HeapTupleHeaderGetRawXmax(oldtup.t_data);
+		infomask = oldtup.t_data->t_infomask;
+
+		/*
+		 * Now we have to do something about the existing locker.  If it's a
+		 * multi, sleep on it; we might be awakened before it is completely
+		 * gone (or even not sleep at all in some cases); we need to preserve
+		 * it as locker, unless it is gone completely.
+		 *
+		 * If it's not a multi, we need to check for sleeping conditions
+		 * before actually going to sleep.  If the update doesn't conflict
+		 * with the locks, we just continue without sleeping (but making sure
+		 * it is preserved).
+		 *
+		 * Before sleeping, we need to acquire tuple lock to establish our
+		 * priority for the tuple (see tdeheap_lock_tuple).  LockTuple will
+		 * release us when we are next-in-line for the tuple.  Note we must
+		 * not acquire the tuple lock until we're sure we're going to sleep;
+		 * otherwise we're open for race conditions with other transactions
+		 * holding the tuple lock which sleep on us.
+		 *
+		 * If we are forced to "start over" below, we keep the tuple lock;
+		 * this arranges that we stay at the head of the line while rechecking
+		 * tuple state.
+		 */
+		if (infomask & HEAP_XMAX_IS_MULTI)
+		{
+			TransactionId update_xact;
+			int			remain;
+			bool		current_is_member = false;
+
+			if (DoesMultiXactIdConflict((MultiXactId) xwait, infomask,
+										*lockmode, &current_is_member))
+			{
+				LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+
+				/*
+				 * Acquire the lock, if necessary (but skip it when we're
+				 * requesting a lock and already have one; avoids deadlock).
+				 */
+				if (!current_is_member)
+					tdeheap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
+											LockWaitBlock, &have_tuple_lock);
+
+				/* wait for multixact */
+				MultiXactIdWait((MultiXactId) xwait, mxact_status, infomask,
+								relation, &oldtup.t_self, XLTW_Update,
+								&remain);
+				checked_lockers = true;
+				locker_remains = remain != 0;
+				LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+				/*
+				 * If xwait had just locked the tuple then some other xact
+				 * could update this tuple before we get to this point.  Check
+				 * for xmax change, and start over if so.
+				 */
+				if (xmax_infomask_changed(oldtup.t_data->t_infomask,
+										  infomask) ||
+					!TransactionIdEquals(HeapTupleHeaderGetRawXmax(oldtup.t_data),
+										 xwait))
+					goto l2;
+			}
+
+			/*
+			 * Note that the multixact may not be done by now.
It could have + * surviving members; our own xact or other subxacts of this + * backend, and also any other concurrent transaction that locked + * the tuple with LockTupleKeyShare if we only got + * LockTupleNoKeyExclusive. If this is the case, we have to be + * careful to mark the updated tuple with the surviving members in + * Xmax. + * + * Note that there could have been another update in the + * MultiXact. In that case, we need to check whether it committed + * or aborted. If it aborted we are safe to update it again; + * otherwise there is an update conflict, and we have to return + * TableTuple{Deleted, Updated} below. + * + * In the LockTupleExclusive case, we still need to preserve the + * surviving members: those would include the tuple locks we had + * before this one, which are important to keep in case this + * subxact aborts. + */ + if (!HEAP_XMAX_IS_LOCKED_ONLY(oldtup.t_data->t_infomask)) + update_xact = HeapTupleGetUpdateXid(oldtup.t_data); + else + update_xact = InvalidTransactionId; + + /* + * There was no UPDATE in the MultiXact; or it aborted. No + * TransactionIdIsInProgress() call needed here, since we called + * MultiXactIdWait() above. + */ + if (!TransactionIdIsValid(update_xact) || + TransactionIdDidAbort(update_xact)) + can_continue = true; + } + else if (TransactionIdIsCurrentTransactionId(xwait)) + { + /* + * The only locker is ourselves; we can avoid grabbing the tuple + * lock here, but must preserve our locking information. + */ + checked_lockers = true; + locker_remains = true; + can_continue = true; + } + else if (HEAP_XMAX_IS_KEYSHR_LOCKED(infomask) && key_intact) + { + /* + * If it's just a key-share locker, and we're not changing the key + * columns, we don't need to wait for it to end; but we need to + * preserve it as locker. + */ + checked_lockers = true; + locker_remains = true; + can_continue = true; + } + else + { + /* + * Wait for regular transaction to end; but first, acquire tuple + * lock. + */ + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + tdeheap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode, + LockWaitBlock, &have_tuple_lock); + XactLockTableWait(xwait, relation, &oldtup.t_self, + XLTW_Update); + checked_lockers = true; + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * xwait is done, but if xwait had just locked the tuple then some + * other xact could update this tuple before we get to this point. + * Check for xmax change, and start over if so. 
+ */ + if (xmax_infomask_changed(oldtup.t_data->t_infomask, infomask) || + !TransactionIdEquals(xwait, + HeapTupleHeaderGetRawXmax(oldtup.t_data))) + goto l2; + + /* Otherwise check if it committed or aborted */ + UpdateXmaxHintBits(oldtup.t_data, buffer, xwait); + if (oldtup.t_data->t_infomask & HEAP_XMAX_INVALID) + can_continue = true; + } + + if (can_continue) + result = TM_Ok; + else if (!ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid)) + result = TM_Updated; + else + result = TM_Deleted; + } + + /* Sanity check the result HeapTupleSatisfiesUpdate() and the logic above */ + if (result != TM_Ok) + { + Assert(result == TM_SelfModified || + result == TM_Updated || + result == TM_Deleted || + result == TM_BeingModified); + Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID)); + Assert(result != TM_Updated || + !ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid)); + } + + if (crosscheck != InvalidSnapshot && result == TM_Ok) + { + /* Perform additional check for transaction-snapshot mode RI updates */ + if (!HeapTupleSatisfiesVisibility(&oldtup, crosscheck, buffer)) + result = TM_Updated; + } + + if (result != TM_Ok) + { + tmfd->ctid = oldtup.t_data->t_ctid; + tmfd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data); + if (result == TM_SelfModified) + tmfd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data); + else + tmfd->cmax = InvalidCommandId; + UnlockReleaseBuffer(buffer); + if (have_tuple_lock) + UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode); + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + *update_indexes = TU_None; + + bms_free(hot_attrs); + bms_free(sum_attrs); + bms_free(key_attrs); + bms_free(id_attrs); + bms_free(modified_attrs); + bms_free(interesting_attrs); + return result; + } + + /* + * If we didn't pin the visibility map page and the page has become all + * visible while we were busy locking the buffer, or during some + * subsequent window during which we had it unlocked, we'll have to unlock + * and re-lock, to avoid holding the buffer lock across an I/O. That's a + * bit unfortunate, especially since we'll now have to recheck whether the + * tuple has been locked or updated under us, but hopefully it won't + * happen very often. + */ + if (vmbuffer == InvalidBuffer && PageIsAllVisible(page)) + { + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + goto l2; + } + + /* Fill in transaction status data */ + + /* + * If the tuple we're updating is locked, we need to preserve the locking + * info in the old tuple's Xmax. Prepare a new Xmax value for this. + */ + compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data), + oldtup.t_data->t_infomask, + oldtup.t_data->t_infomask2, + xid, *lockmode, true, + &xmax_old_tuple, &infomask_old_tuple, + &infomask2_old_tuple); + + /* + * And also prepare an Xmax value for the new copy of the tuple. If there + * was no xmax previously, or there was one but all lockers are now gone, + * then use InvalidTransactionId; otherwise, get the xmax from the old + * tuple. (In rare cases that might also be InvalidTransactionId and yet + * not have the HEAP_XMAX_INVALID bit set; that's fine.) 
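+	 *
+	 * For example, if a still-running transaction holds a FOR KEY SHARE
+	 * lock on the row, its locker (a plain xid or a MultiXactId) is carried
+	 * over into the new tuple version's xmax, so the lock keeps protecting
+	 * the row after the update.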
+ */ + if ((oldtup.t_data->t_infomask & HEAP_XMAX_INVALID) || + HEAP_LOCKED_UPGRADED(oldtup.t_data->t_infomask) || + (checked_lockers && !locker_remains)) + xmax_new_tuple = InvalidTransactionId; + else + xmax_new_tuple = HeapTupleHeaderGetRawXmax(oldtup.t_data); + + if (!TransactionIdIsValid(xmax_new_tuple)) + { + infomask_new_tuple = HEAP_XMAX_INVALID; + infomask2_new_tuple = 0; + } + else + { + /* + * If we found a valid Xmax for the new tuple, then the infomask bits + * to use on the new tuple depend on what was there on the old one. + * Note that since we're doing an update, the only possibility is that + * the lockers had FOR KEY SHARE lock. + */ + if (oldtup.t_data->t_infomask & HEAP_XMAX_IS_MULTI) + { + GetMultiXactIdHintBits(xmax_new_tuple, &infomask_new_tuple, + &infomask2_new_tuple); + } + else + { + infomask_new_tuple = HEAP_XMAX_KEYSHR_LOCK | HEAP_XMAX_LOCK_ONLY; + infomask2_new_tuple = 0; + } + } + + /* + * Prepare the new tuple with the appropriate initial values of Xmin and + * Xmax, as well as initial infomask bits as computed above. + */ + newtup->t_data->t_infomask &= ~(HEAP_XACT_MASK); + newtup->t_data->t_infomask2 &= ~(HEAP2_XACT_MASK); + HeapTupleHeaderSetXmin(newtup->t_data, xid); + HeapTupleHeaderSetCmin(newtup->t_data, cid); + newtup->t_data->t_infomask |= HEAP_UPDATED | infomask_new_tuple; + newtup->t_data->t_infomask2 |= infomask2_new_tuple; + HeapTupleHeaderSetXmax(newtup->t_data, xmax_new_tuple); + + /* + * Replace cid with a combo CID if necessary. Note that we already put + * the plain cid into the new tuple. + */ + HeapTupleHeaderAdjustCmax(oldtup.t_data, &cid, &iscombo); + + /* + * If the toaster needs to be activated, OR if the new tuple will not fit + * on the same page as the old, then we need to release the content lock + * (but not the pin!) on the old tuple's buffer while we are off doing + * TOAST and/or table-file-extension work. We must mark the old tuple to + * show that it's locked, else other processes may try to update it + * themselves. + * + * We need to invoke the toaster if there are already any out-of-line + * toasted values present, or if the new tuple is over-threshold. + */ + if (relation->rd_rel->relkind != RELKIND_RELATION && + relation->rd_rel->relkind != RELKIND_MATVIEW) + { + /* toast table entries should never be recursively toasted */ + Assert(!HeapTupleHasExternal(&oldtup)); + Assert(!HeapTupleHasExternal(newtup)); + need_toast = false; + } + else + need_toast = (HeapTupleHasExternal(&oldtup) || + HeapTupleHasExternal(newtup) || + newtup->t_len > TOAST_TUPLE_THRESHOLD); + + pagefree = PageGetHeapFreeSpace(page); + + newtupsize = MAXALIGN(newtup->t_len); + + if (need_toast || newtupsize > pagefree) + { + TransactionId xmax_lock_old_tuple; + uint16 infomask_lock_old_tuple, + infomask2_lock_old_tuple; + bool cleared_all_frozen = false; + + /* + * To prevent concurrent sessions from updating the tuple, we have to + * temporarily mark it locked, while we release the page-level lock. + * + * To satisfy the rule that any xid potentially appearing in a buffer + * written out to disk, we unfortunately have to WAL log this + * temporary modification. We can reuse xl_tdeheap_lock for this + * purpose. If we crash/error before following through with the + * actual update, xmax will be of an aborted transaction, allowing + * other sessions to proceed. + */ + + /* + * Compute xmax / infomask appropriate for locking the tuple. 
This has + * to be done separately from the combo that's going to be used for + * updating, because the potentially created multixact would otherwise + * be wrong. + */ + compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data), + oldtup.t_data->t_infomask, + oldtup.t_data->t_infomask2, + xid, *lockmode, false, + &xmax_lock_old_tuple, &infomask_lock_old_tuple, + &infomask2_lock_old_tuple); + + Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple)); + + START_CRIT_SECTION(); + + /* Clear obsolete visibility flags ... */ + oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + HeapTupleClearHotUpdated(&oldtup); + /* ... and store info about transaction updating this tuple */ + Assert(TransactionIdIsValid(xmax_lock_old_tuple)); + HeapTupleHeaderSetXmax(oldtup.t_data, xmax_lock_old_tuple); + oldtup.t_data->t_infomask |= infomask_lock_old_tuple; + oldtup.t_data->t_infomask2 |= infomask2_lock_old_tuple; + HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo); + + /* temporarily make it look not-updated, but locked */ + oldtup.t_data->t_ctid = oldtup.t_self; + + /* + * Clear all-frozen bit on visibility map if needed. We could + * immediately reset ALL_VISIBLE, but given that the WAL logging + * overhead would be unchanged, that doesn't seem necessarily + * worthwhile. + */ + if (PageIsAllVisible(page) && + tdeheap_visibilitymap_clear(relation, block, vmbuffer, + VISIBILITYMAP_ALL_FROZEN)) + cleared_all_frozen = true; + + MarkBufferDirty(buffer); + + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_lock xlrec; + XLogRecPtr recptr; + + XLogBeginInsert(); + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + + xlrec.offnum = ItemPointerGetOffsetNumber(&oldtup.t_self); + xlrec.xmax = xmax_lock_old_tuple; + xlrec.infobits_set = compute_infobits(oldtup.t_data->t_infomask, + oldtup.t_data->t_infomask2); + xlrec.flags = + cleared_all_frozen ? XLH_LOCK_ALL_FROZEN_CLEARED : 0; + XLogRegisterData((char *) &xlrec, SizeOfHeapLock); + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_LOCK); + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + /* + * Let the toaster do its thing, if needed. + * + * Note: below this point, heaptup is the data we actually intend to + * store into the relation; newtup is the caller's original untoasted + * data. + */ + if (need_toast) + { + /* Note we always use WAL and FSM during updates */ + heaptup = tdeheap_toast_insert_or_update(relation, newtup, &oldtup_decrypted, 0); + newtupsize = MAXALIGN(heaptup->t_len); + } + else + heaptup = newtup; + + /* + * Now, do we need a new page for the tuple, or not? This is a bit + * tricky since someone else could have added tuples to the page while + * we weren't looking. We have to recheck the available space after + * reacquiring the buffer lock. But don't bother to do that if the + * former amount of free space is still not enough; it's unlikely + * there's more free now than before. + * + * What's more, if we need to get a new page, we will need to acquire + * buffer locks on both old and new pages. To avoid deadlock against + * some other backend trying to get the same two locks in the other + * order, we must be consistent about the order we get the locks in. + * We use the rule "lock the lower-numbered page of the relation + * first". To implement this, we must do tdeheap_RelationGetBufferForTuple + * while not holding the lock on the old page, and we must rely on it + * to get the locks on both pages in the correct order. 
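+	 *
+	 * (Concretely: if the old tuple sits on block 10 and the new one would
+	 * go on block 7, every backend must lock block 7 before block 10;
+	 * tdeheap_RelationGetBufferForTuple encapsulates that rule.)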
+ * + * Another consideration is that we need visibility map page pin(s) if + * we will have to clear the all-visible flag on either page. If we + * call tdeheap_RelationGetBufferForTuple, we rely on it to acquire any such + * pins; but if we don't, we have to handle that here. Hence we need + * a loop. + */ + for (;;) + { + if (newtupsize > pagefree) + { + /* It doesn't fit, must use tdeheap_RelationGetBufferForTuple. */ + newbuf = tdeheap_RelationGetBufferForTuple(relation, heaptup->t_len, + buffer, 0, NULL, + &vmbuffer_new, &vmbuffer, + 0); + /* We're all done. */ + break; + } + /* Acquire VM page pin if needed and we don't have it. */ + if (vmbuffer == InvalidBuffer && PageIsAllVisible(page)) + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + /* Re-acquire the lock on the old tuple's page. */ + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + /* Re-check using the up-to-date free space */ + pagefree = PageGetHeapFreeSpace(page); + if (newtupsize > pagefree || + (vmbuffer == InvalidBuffer && PageIsAllVisible(page))) + { + /* + * Rats, it doesn't fit anymore, or somebody just now set the + * all-visible flag. We must now unlock and loop to avoid + * deadlock. Fortunately, this path should seldom be taken. + */ + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + } + else + { + /* We're all done. */ + newbuf = buffer; + break; + } + } + } + else + { + /* No TOAST work needed, and it'll fit on same page */ + newbuf = buffer; + heaptup = newtup; + } + + /* + * We're about to do the actual update -- check for conflict first, to + * avoid possibly having to roll back work we've just done. + * + * This is safe without a recheck as long as there is no possibility of + * another process scanning the pages between this check and the update + * being visible to the scan (i.e., exclusive buffer content lock(s) are + * continuously held from this point until the tuple update is visible). + * + * For the new tuple the only check needed is at the relation level, but + * since both tuples are in the same relation and the check for oldtup + * will include checking the relation level, there is no benefit to a + * separate check for the new tuple. + */ + CheckForSerializableConflictIn(relation, &oldtup.t_self, + BufferGetBlockNumber(buffer)); + + /* + * At this point newbuf and buffer are both pinned and locked, and newbuf + * has enough space for the new tuple. If they are the same buffer, only + * one pin is held. + */ + + if (newbuf == buffer) + { + /* + * Since the new tuple is going into the same page, we might be able + * to do a HOT update. Check if any of the index columns have been + * changed. + */ + if (!bms_overlap(modified_attrs, hot_attrs)) + { + use_hot_update = true; + + /* + * If none of the columns that are used in hot-blocking indexes + * were updated, we can apply HOT, but we do still need to check + * if we need to update the summarizing indexes, and update those + * indexes if the columns were updated, or we may fail to detect + * e.g. value bound changes in BRIN minmax indexes. + */ + if (bms_overlap(modified_attrs, sum_attrs)) + summarized_update = true; + } + } + else + { + /* Set a hint that the old page could use prune/defrag */ + PageSetFull(page); + } + + /* + * Compute replica identity tuple before entering the critical section so + * we don't PANIC upon a memory allocation failure. + * ExtractReplicaIdentity() will return NULL if nothing needs to be + * logged. Pass old key required as true only if the replica identity key + * columns are modified or it has external data. 
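+	 *
+	 * For instance, with a replica identity over (id), the old key tuple is
+	 * logged only when the update changes id, or when the old id value is
+	 * stored externally (TOASTed), since the flattened value would otherwise
+	 * be missing from the new tuple's WAL image.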
+	 */
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs) ||
+										   id_has_external,
+										   &old_key_copied);
+
+	/*
+	 * Make sure the relation key is in the cache, to avoid pallocs in the
+	 * critical section.
+	 */
+	GetHeapBaiscRelationKey(relation->rd_locator);
+
+	/* NO EREPORT(ERROR) from here till changes are logged */
+	START_CRIT_SECTION();
+
+	/*
+	 * If this transaction commits, the old tuple will become DEAD sooner or
+	 * later.  Set flag that this page is a candidate for pruning once our xid
+	 * falls below the OldestXmin horizon.  If the transaction finally aborts,
+	 * the subsequent page pruning will be a no-op and the hint will be
+	 * cleared.
+	 *
+	 * XXX Should we set hint on newbuf as well?  If the transaction aborts,
+	 * there would be a prunable tuple in the newbuf; but for now we choose
+	 * not to optimize for aborts.  Note that tdeheap_xlog_update must be kept in
+	 * sync if this decision changes.
+	 */
+	PageSetPrunable(page, xid);
+
+	if (use_hot_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+	}
+	else
+	{
+		/* Make sure tuples are correctly marked as not-HOT */
+		HeapTupleClearHotUpdated(&oldtup);
+		HeapTupleClearHeapOnly(heaptup);
+		HeapTupleClearHeapOnly(newtup);
+	}
+
+	tdeheap_RelationPutHeapTuple(relation, newbuf, heaptup, true, false); /* insert new tuple */
+
+	/* Clear obsolete visibility flags, possibly set by ourselves above... */
+	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
+	/* ... and store info about transaction updating this tuple */
+	Assert(TransactionIdIsValid(xmax_old_tuple));
+	HeapTupleHeaderSetXmax(oldtup.t_data, xmax_old_tuple);
+	oldtup.t_data->t_infomask |= infomask_old_tuple;
+	oldtup.t_data->t_infomask2 |= infomask2_old_tuple;
+	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
+
+	/* record address of new tuple in t_ctid of old one */
+	oldtup.t_data->t_ctid = heaptup->t_self;
+
+	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
+	if (PageIsAllVisible(BufferGetPage(buffer)))
+	{
+		all_visible_cleared = true;
+		PageClearAllVisible(BufferGetPage(buffer));
+		tdeheap_visibilitymap_clear(relation, BufferGetBlockNumber(buffer),
+									vmbuffer, VISIBILITYMAP_VALID_BITS);
+	}
+	if (newbuf != buffer && PageIsAllVisible(BufferGetPage(newbuf)))
+	{
+		all_visible_cleared_new = true;
+		PageClearAllVisible(BufferGetPage(newbuf));
+		tdeheap_visibilitymap_clear(relation, BufferGetBlockNumber(newbuf),
+									vmbuffer_new, VISIBILITYMAP_VALID_BITS);
+	}
+
+	if (newbuf != buffer)
+		MarkBufferDirty(newbuf);
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(relation))
+	{
+		XLogRecPtr	recptr;
+
+		/*
+		 * For logical decoding we need combo CIDs to properly decode the
+		 * catalog.
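+		 *
+		 * (A combo CID arises when the same transaction both creates and
+		 * then updates or deletes a row: cmin and cmax share a single
+		 * header field, so the mapping is logged here for decoding to
+		 * recover the original command IDs.)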
+ */ + if (RelationIsAccessibleInLogicalDecoding(relation)) + { + log_tdeheap_new_cid(relation, &oldtup); + log_tdeheap_new_cid(relation, heaptup); + } + + recptr = log_tdeheap_update(relation, buffer, + newbuf, &oldtup, heaptup, + old_key_tuple, + all_visible_cleared, + all_visible_cleared_new); + if (newbuf != buffer) + { + PageSetLSN(BufferGetPage(newbuf), recptr); + } + PageSetLSN(BufferGetPage(buffer), recptr); + } + + END_CRIT_SECTION(); + + if (newbuf != buffer) + LockBuffer(newbuf, BUFFER_LOCK_UNLOCK); + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + /* + * Mark old tuple for invalidation from system caches at next command + * boundary, and mark the new tuple for invalidation in case we abort. We + * have to do this before releasing the buffer because oldtup is in the + * buffer. (heaptup is all in local memory, but it's necessary to process + * both tuple versions in one call to inval.c so we can avoid redundant + * sinval messages.) + */ + CacheInvalidateHeapTuple(relation, &oldtup, heaptup); + + /* Now we can release the buffer(s) */ + if (newbuf != buffer) + ReleaseBuffer(newbuf); + ReleaseBuffer(buffer); + if (BufferIsValid(vmbuffer_new)) + ReleaseBuffer(vmbuffer_new); + if (BufferIsValid(vmbuffer)) + ReleaseBuffer(vmbuffer); + + /* + * Release the lmgr tuple lock, if we had it. + */ + if (have_tuple_lock) + UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode); + + pgstat_count_tdeheap_update(relation, use_hot_update, newbuf != buffer); + + /* + * If heaptup is a private copy, release it. Don't forget to copy t_self + * back to the caller's image, too. + */ + if (heaptup != newtup) + { + newtup->t_self = heaptup->t_self; + tdeheap_freetuple(heaptup); + } + + /* + * If it is a HOT update, the update may still need to update summarized + * indexes, lest we fail to update those summaries and get incorrect + * results (for example, minmax bounds of the block may change with this + * update). + */ + if (use_hot_update) + { + if (summarized_update) + *update_indexes = TU_Summarizing; + else + *update_indexes = TU_None; + } + else + *update_indexes = TU_All; + + if (old_key_tuple != NULL && old_key_copied) + tdeheap_freetuple(old_key_tuple); + + bms_free(hot_attrs); + bms_free(sum_attrs); + bms_free(key_attrs); + bms_free(id_attrs); + bms_free(modified_attrs); + bms_free(interesting_attrs); + + return TM_Ok; +} + +/* + * Check if the specified attribute's values are the same. Subroutine for + * HeapDetermineColumnsInfo. + */ +static bool +tdeheap_attr_equals(TupleDesc tupdesc, int attrnum, Datum value1, Datum value2, + bool isnull1, bool isnull2) +{ + Form_pg_attribute att; + + /* + * If one value is NULL and other is not, then they are certainly not + * equal + */ + if (isnull1 != isnull2) + return false; + + /* + * If both are NULL, they can be considered equal. + */ + if (isnull1) + return true; + + /* + * We do simple binary comparison of the two datums. This may be overly + * strict because there can be multiple binary representations for the + * same logical value. But we should be OK as long as there are no false + * positives. Using a type-specific equality operator is messy because + * there could be multiple notions of equality in different operator + * classes; furthermore, we cannot safely invoke user-defined functions + * while holding exclusive buffer lock. 
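+ *
+ * For example, the same logical value stored compressed in one tuple and
+ * uncompressed in the other compares as "not equal" here; such a false
+ * negative merely costs an optimization (a non-HOT update, or a stronger
+ * lock than strictly needed), never a wrong answer.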
+ */ + if (attrnum <= 0) + { + /* The only allowed system columns are OIDs, so do this */ + return (DatumGetObjectId(value1) == DatumGetObjectId(value2)); + } + else + { + Assert(attrnum <= tupdesc->natts); + att = TupleDescAttr(tupdesc, attrnum - 1); + return datumIsEqual(value1, value2, att->attbyval, att->attlen); + } +} + +/* + * Check which columns are being updated. + * + * Given an updated tuple, determine (and return into the output bitmapset), + * from those listed as interesting, the set of columns that changed. + * + * has_external indicates if any of the unmodified attributes (from those + * listed as interesting) of the old tuple is a member of external_cols and is + * stored externally. + */ +static Bitmapset * +HeapDetermineColumnsInfo(Relation relation, + Bitmapset *interesting_cols, + Bitmapset *external_cols, + HeapTuple oldtup, HeapTuple newtup, + bool *has_external) +{ + int attidx; + Bitmapset *modified = NULL; + TupleDesc tupdesc = RelationGetDescr(relation); + + attidx = -1; + while ((attidx = bms_next_member(interesting_cols, attidx)) >= 0) + { + /* attidx is zero-based, attrnum is the normal attribute number */ + AttrNumber attrnum = attidx + FirstLowInvalidHeapAttributeNumber; + Datum value1, + value2; + bool isnull1, + isnull2; + + /* + * If it's a whole-tuple reference, say "not equal". It's not really + * worth supporting this case, since it could only succeed after a + * no-op update, which is hardly a case worth optimizing for. + */ + if (attrnum == 0) + { + modified = bms_add_member(modified, attidx); + continue; + } + + /* + * Likewise, automatically say "not equal" for any system attribute + * other than tableOID; we cannot expect these to be consistent in a + * HOT chain, or even to be set correctly yet in the new tuple. + */ + if (attrnum < 0) + { + if (attrnum != TableOidAttributeNumber) + { + modified = bms_add_member(modified, attidx); + continue; + } + } + + /* + * Extract the corresponding values. XXX this is pretty inefficient + * if there are many indexed columns. Should we do a single + * tdeheap_deform_tuple call on each tuple, instead? But that doesn't + * work for system columns ... + */ + value1 = tdeheap_getattr(oldtup, attrnum, tupdesc, &isnull1); + value2 = tdeheap_getattr(newtup, attrnum, tupdesc, &isnull2); + if (!tdeheap_attr_equals(tupdesc, attrnum, value1, + value2, isnull1, isnull2)) + { + modified = bms_add_member(modified, attidx); + continue; + } + + /* + * No need to check attributes that can't be stored externally. Note + * that system attributes can't be stored externally. + */ + if (attrnum < 0 || isnull1 || + TupleDescAttr(tupdesc, attrnum - 1)->attlen != -1) + continue; + + /* + * Check if the old tuple's attribute is stored externally and is a + * member of external_cols. + */ + if (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value1)) && + bms_is_member(attidx, external_cols)) + *has_external = true; + } + + return modified; +} + +/* + * simple_tdeheap_update - replace a tuple + * + * This routine may be used to update a tuple when concurrent updates of + * the target tuple are not expected (for example, because we have a lock + * on the relation associated with the tuple). Any failure is reported + * via ereport(). 
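+ *
+ * A minimal usage sketch (caller names are hypothetical):
+ *
+ *		TU_UpdateIndexes update_indexes;
+ *
+ *		simple_tdeheap_update(rel, &tup->t_self, newtup, &update_indexes);
+ *		if (update_indexes != TU_None)
+ *			my_reindex_tuple(rel, newtup, update_indexes);
+ *
+ * where my_reindex_tuple stands in for whatever index maintenance the
+ * caller performs, driven by the returned TU_UpdateIndexes value.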
+ */ +void +simple_tdeheap_update(Relation relation, ItemPointer otid, HeapTuple tup, + TU_UpdateIndexes *update_indexes) +{ + TM_Result result; + TM_FailureData tmfd; + LockTupleMode lockmode; + + result = tdeheap_update(relation, otid, tup, + GetCurrentCommandId(true), InvalidSnapshot, + true /* wait for commit */ , + &tmfd, &lockmode, update_indexes); + switch (result) + { + case TM_SelfModified: + /* Tuple was already updated in current command? */ + elog(ERROR, "tuple already updated by self"); + break; + + case TM_Ok: + /* done successfully */ + break; + + case TM_Updated: + elog(ERROR, "tuple concurrently updated"); + break; + + case TM_Deleted: + elog(ERROR, "tuple concurrently deleted"); + break; + + default: + elog(ERROR, "unrecognized tdeheap_update status: %u", result); + break; + } +} + + +/* + * Return the MultiXactStatus corresponding to the given tuple lock mode. + */ +static MultiXactStatus +get_mxact_status_for_lock(LockTupleMode mode, bool is_update) +{ + int retval; + + if (is_update) + retval = tupleLockExtraInfo[mode].updstatus; + else + retval = tupleLockExtraInfo[mode].lockstatus; + + if (retval == -1) + elog(ERROR, "invalid lock tuple mode %d/%s", mode, + is_update ? "true" : "false"); + + return (MultiXactStatus) retval; +} + +/* + * tdeheap_lock_tuple - lock a tuple in shared or exclusive mode + * + * Note that this acquires a buffer pin, which the caller must release. + * + * Input parameters: + * relation: relation containing tuple (caller must hold suitable lock) + * tid: TID of tuple to lock + * cid: current command ID (used for visibility test, and stored into + * tuple's cmax if lock is successful) + * mode: indicates if shared or exclusive tuple lock is desired + * wait_policy: what to do if tuple lock is not available + * follow_updates: if true, follow the update chain to also lock descendant + * tuples. + * + * Output parameters: + * *tuple: all fields filled in + * *buffer: set to buffer holding tuple (pinned but not locked at exit) + * *tmfd: filled in failure cases (see below) + * + * Function results are the same as the ones for table_tuple_lock(). + * + * In the failure cases other than TM_Invisible, the routine fills + * *tmfd with the tuple's t_ctid, t_xmax (resolving a possible MultiXact, + * if necessary), and t_cmax (the last only for TM_SelfModified, + * since we cannot obtain cmax from a combo CID generated by another + * transaction). + * See comments for struct TM_FailureData for additional info. + * + * See README.tuplock for a thorough explanation of this mechanism. + */ +TM_Result +tdeheap_lock_tuple(Relation relation, HeapTuple tuple, + CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy, + bool follow_updates, + Buffer *buffer, TM_FailureData *tmfd) +{ + TM_Result result; + ItemPointer tid = &(tuple->t_self); + ItemId lp; + Page page; + Buffer vmbuffer = InvalidBuffer; + BlockNumber block; + TransactionId xid, + xmax; + uint16 old_infomask, + new_infomask, + new_infomask2; + bool first_time = true; + bool skip_tuple_lock = false; + bool have_tuple_lock = false; + bool cleared_all_frozen = false; + + *buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid)); + block = ItemPointerGetBlockNumber(tid); + + /* + * Before locking the buffer, pin the visibility map page if it appears to + * be necessary. Since we haven't got the lock yet, someone else might be + * in the middle of changing this, so we'll need to recheck after we have + * the lock. 
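+	 *
+	 * (We pin before taking the content lock because
+	 * tdeheap_visibilitymap_pin may have to read the visibility map page in,
+	 * and we must not perform I/O while holding the buffer lock; see the
+	 * recheck and retry further down.)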
+ */ + if (PageIsAllVisible(BufferGetPage(*buffer))) + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + page = BufferGetPage(*buffer); + lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid)); + Assert(ItemIdIsNormal(lp)); + + tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp); + tuple->t_len = ItemIdGetLength(lp); + tuple->t_tableOid = RelationGetRelid(relation); + +l3: + result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer); + + if (result == TM_Invisible) + { + /* + * This is possible, but only when locking a tuple for ON CONFLICT + * UPDATE. We return this value here rather than throwing an error in + * order to give that case the opportunity to throw a more specific + * error. + */ + result = TM_Invisible; + goto out_locked; + } + else if (result == TM_BeingModified || + result == TM_Updated || + result == TM_Deleted) + { + TransactionId xwait; + uint16 infomask; + uint16 infomask2; + bool require_sleep; + ItemPointerData t_ctid; + + /* must copy state data before unlocking buffer */ + xwait = HeapTupleHeaderGetRawXmax(tuple->t_data); + infomask = tuple->t_data->t_infomask; + infomask2 = tuple->t_data->t_infomask2; + ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid); + + LockBuffer(*buffer, BUFFER_LOCK_UNLOCK); + + /* + * If any subtransaction of the current top transaction already holds + * a lock as strong as or stronger than what we're requesting, we + * effectively hold the desired lock already. We *must* succeed + * without trying to take the tuple lock, else we will deadlock + * against anyone wanting to acquire a stronger lock. + * + * Note we only do this the first time we loop on the HTSU result; + * there is no point in testing in subsequent passes, because + * evidently our own transaction cannot have acquired a new lock after + * the first time we checked. + */ + if (first_time) + { + first_time = false; + + if (infomask & HEAP_XMAX_IS_MULTI) + { + int i; + int nmembers; + MultiXactMember *members; + + /* + * We don't need to allow old multixacts here; if that had + * been the case, HeapTupleSatisfiesUpdate would have returned + * MayBeUpdated and we wouldn't be here. + */ + nmembers = + GetMultiXactIdMembers(xwait, &members, false, + HEAP_XMAX_IS_LOCKED_ONLY(infomask)); + + for (i = 0; i < nmembers; i++) + { + /* only consider members of our own transaction */ + if (!TransactionIdIsCurrentTransactionId(members[i].xid)) + continue; + + if (TUPLOCK_from_mxstatus(members[i].status) >= mode) + { + pfree(members); + result = TM_Ok; + goto out_unlocked; + } + else + { + /* + * Disable acquisition of the heavyweight tuple lock. + * Otherwise, when promoting a weaker lock, we might + * deadlock with another locker that has acquired the + * heavyweight tuple lock and is waiting for our + * transaction to finish. + * + * Note that in this case we still need to wait for + * the multixact if required, to avoid acquiring + * conflicting locks. 
+ */ + skip_tuple_lock = true; + } + } + + if (members) + pfree(members); + } + else if (TransactionIdIsCurrentTransactionId(xwait)) + { + switch (mode) + { + case LockTupleKeyShare: + Assert(HEAP_XMAX_IS_KEYSHR_LOCKED(infomask) || + HEAP_XMAX_IS_SHR_LOCKED(infomask) || + HEAP_XMAX_IS_EXCL_LOCKED(infomask)); + result = TM_Ok; + goto out_unlocked; + case LockTupleShare: + if (HEAP_XMAX_IS_SHR_LOCKED(infomask) || + HEAP_XMAX_IS_EXCL_LOCKED(infomask)) + { + result = TM_Ok; + goto out_unlocked; + } + break; + case LockTupleNoKeyExclusive: + if (HEAP_XMAX_IS_EXCL_LOCKED(infomask)) + { + result = TM_Ok; + goto out_unlocked; + } + break; + case LockTupleExclusive: + if (HEAP_XMAX_IS_EXCL_LOCKED(infomask) && + infomask2 & HEAP_KEYS_UPDATED) + { + result = TM_Ok; + goto out_unlocked; + } + break; + } + } + } + + /* + * Initially assume that we will have to wait for the locking + * transaction(s) to finish. We check various cases below in which + * this can be turned off. + */ + require_sleep = true; + if (mode == LockTupleKeyShare) + { + /* + * If we're requesting KeyShare, and there's no update present, we + * don't need to wait. Even if there is an update, we can still + * continue if the key hasn't been modified. + * + * However, if there are updates, we need to walk the update chain + * to mark future versions of the row as locked, too. That way, + * if somebody deletes that future version, we're protected + * against the key going away. This locking of future versions + * could block momentarily, if a concurrent transaction is + * deleting a key; or it could return a value to the effect that + * the transaction deleting the key has already committed. So we + * do this before re-locking the buffer; otherwise this would be + * prone to deadlocks. + * + * Note that the TID we're locking was grabbed before we unlocked + * the buffer. For it to change while we're not looking, the + * other properties we're testing for below after re-locking the + * buffer would also change, in which case we would restart this + * loop above. + */ + if (!(infomask2 & HEAP_KEYS_UPDATED)) + { + bool updated; + + updated = !HEAP_XMAX_IS_LOCKED_ONLY(infomask); + + /* + * If there are updates, follow the update chain; bail out if + * that cannot be done. + */ + if (follow_updates && updated) + { + TM_Result res; + + res = tdeheap_lock_updated_tuple(relation, tuple, &t_ctid, + GetCurrentTransactionId(), + mode); + if (res != TM_Ok) + { + result = res; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + } + + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * Make sure it's still an appropriate lock, else start over. + * Also, if it wasn't updated before we released the lock, but + * is updated now, we start over too; the reason is that we + * now need to follow the update chain to lock the new + * versions. + */ + if (!HeapTupleHeaderIsOnlyLocked(tuple->t_data) && + ((tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED) || + !updated)) + goto l3; + + /* Things look okay, so we can skip sleeping */ + require_sleep = false; + + /* + * Note we allow Xmax to change here; other updaters/lockers + * could have modified it before we grabbed the buffer lock. + * However, this is not a problem, because with the recheck we + * just did we ensure that they still don't conflict with the + * lock we want. 
+ */ + } + } + else if (mode == LockTupleShare) + { + /* + * If we're requesting Share, we can similarly avoid sleeping if + * there's no update and no exclusive lock present. + */ + if (HEAP_XMAX_IS_LOCKED_ONLY(infomask) && + !HEAP_XMAX_IS_EXCL_LOCKED(infomask)) + { + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * Make sure it's still an appropriate lock, else start over. + * See above about allowing xmax to change. + */ + if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_data->t_infomask) || + HEAP_XMAX_IS_EXCL_LOCKED(tuple->t_data->t_infomask)) + goto l3; + require_sleep = false; + } + } + else if (mode == LockTupleNoKeyExclusive) + { + /* + * If we're requesting NoKeyExclusive, we might also be able to + * avoid sleeping; just ensure that there no conflicting lock + * already acquired. + */ + if (infomask & HEAP_XMAX_IS_MULTI) + { + if (!DoesMultiXactIdConflict((MultiXactId) xwait, infomask, + mode, NULL)) + { + /* + * No conflict, but if the xmax changed under us in the + * meantime, start over. + */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + if (xmax_infomask_changed(tuple->t_data->t_infomask, infomask) || + !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple->t_data), + xwait)) + goto l3; + + /* otherwise, we're good */ + require_sleep = false; + } + } + else if (HEAP_XMAX_IS_KEYSHR_LOCKED(infomask)) + { + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + /* if the xmax changed in the meantime, start over */ + if (xmax_infomask_changed(tuple->t_data->t_infomask, infomask) || + !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple->t_data), + xwait)) + goto l3; + /* otherwise, we're good */ + require_sleep = false; + } + } + + /* + * As a check independent from those above, we can also avoid sleeping + * if the current transaction is the sole locker of the tuple. Note + * that the strength of the lock already held is irrelevant; this is + * not about recording the lock in Xmax (which will be done regardless + * of this optimization, below). Also, note that the cases where we + * hold a lock stronger than we are requesting are already handled + * above by not doing anything. + * + * Note we only deal with the non-multixact case here; MultiXactIdWait + * is well equipped to deal with this situation on its own. + */ + if (require_sleep && !(infomask & HEAP_XMAX_IS_MULTI) && + TransactionIdIsCurrentTransactionId(xwait)) + { + /* ... but if the xmax changed in the meantime, start over */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + if (xmax_infomask_changed(tuple->t_data->t_infomask, infomask) || + !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple->t_data), + xwait)) + goto l3; + Assert(HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_data->t_infomask)); + require_sleep = false; + } + + /* + * Time to sleep on the other transaction/multixact, if necessary. + * + * If the other transaction is an update/delete that's already + * committed, then sleeping cannot possibly do any good: if we're + * required to sleep, get out to raise an error instead. + * + * By here, we either have already acquired the buffer exclusive lock, + * or we must wait for the locking transaction or multixact; so below + * we ensure that we grab buffer lock after the sleep. + */ + if (require_sleep && (result == TM_Updated || result == TM_Deleted)) + { + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + else if (require_sleep) + { + /* + * Acquire tuple lock to establish our priority for the tuple, or + * die trying. LockTuple will release us when we are next-in-line + * for the tuple. 
We must do this even if we are share-locking, + * but not if we already have a weaker lock on the tuple. + * + * If we are forced to "start over" below, we keep the tuple lock; + * this arranges that we stay at the head of the line while + * rechecking tuple state. + */ + if (!skip_tuple_lock && + !tdeheap_acquire_tuplock(relation, tid, mode, wait_policy, + &have_tuple_lock)) + { + /* + * This can only happen if wait_policy is Skip and the lock + * couldn't be obtained. + */ + result = TM_WouldBlock; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + + if (infomask & HEAP_XMAX_IS_MULTI) + { + MultiXactStatus status = get_mxact_status_for_lock(mode, false); + + /* We only ever lock tuples, never update them */ + if (status >= MultiXactStatusNoKeyUpdate) + elog(ERROR, "invalid lock mode in tdeheap_lock_tuple"); + + /* wait for multixact to end, or die trying */ + switch (wait_policy) + { + case LockWaitBlock: + MultiXactIdWait((MultiXactId) xwait, status, infomask, + relation, &tuple->t_self, XLTW_Lock, NULL); + break; + case LockWaitSkip: + if (!ConditionalMultiXactIdWait((MultiXactId) xwait, + status, infomask, relation, + NULL)) + { + result = TM_WouldBlock; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + break; + case LockWaitError: + if (!ConditionalMultiXactIdWait((MultiXactId) xwait, + status, infomask, relation, + NULL)) + ereport(ERROR, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("could not obtain lock on row in relation \"%s\"", + RelationGetRelationName(relation)))); + + break; + } + + /* + * Of course, the multixact might not be done here: if we're + * requesting a light lock mode, other transactions with light + * locks could still be alive, as well as locks owned by our + * own xact or other subxacts of this backend. We need to + * preserve the surviving MultiXact members. Note that it + * isn't absolutely necessary in the latter case, but doing so + * is simpler. + */ + } + else + { + /* wait for regular transaction to end, or die trying */ + switch (wait_policy) + { + case LockWaitBlock: + XactLockTableWait(xwait, relation, &tuple->t_self, + XLTW_Lock); + break; + case LockWaitSkip: + if (!ConditionalXactLockTableWait(xwait)) + { + result = TM_WouldBlock; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + break; + case LockWaitError: + if (!ConditionalXactLockTableWait(xwait)) + ereport(ERROR, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("could not obtain lock on row in relation \"%s\"", + RelationGetRelationName(relation)))); + break; + } + } + + /* if there are updates, follow the update chain */ + if (follow_updates && !HEAP_XMAX_IS_LOCKED_ONLY(infomask)) + { + TM_Result res; + + res = tdeheap_lock_updated_tuple(relation, tuple, &t_ctid, + GetCurrentTransactionId(), + mode); + if (res != TM_Ok) + { + result = res; + /* recovery code expects to have buffer lock held */ + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto failed; + } + } + + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * xwait is done, but if xwait had just locked the tuple then some + * other xact could update this tuple before we get to this point. + * Check for xmax change, and start over if so. 
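+			 *
+			 * (The recheck is cheap: xmax_infomask_changed() is expected to
+			 * compare only the xmax-related infomask bits, so unrelated
+			 * changes such as hint-bit updates should not force a restart.)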
+ */ + if (xmax_infomask_changed(tuple->t_data->t_infomask, infomask) || + !TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple->t_data), + xwait)) + goto l3; + + if (!(infomask & HEAP_XMAX_IS_MULTI)) + { + /* + * Otherwise check if it committed or aborted. Note we cannot + * be here if the tuple was only locked by somebody who didn't + * conflict with us; that would have been handled above. So + * that transaction must necessarily be gone by now. But + * don't check for this in the multixact case, because some + * locker transactions might still be running. + */ + UpdateXmaxHintBits(tuple->t_data, *buffer, xwait); + } + } + + /* By here, we're certain that we hold buffer exclusive lock again */ + + /* + * We may lock if previous xmax aborted, or if it committed but only + * locked the tuple without updating it; or if we didn't have to wait + * at all for whatever reason. + */ + if (!require_sleep || + (tuple->t_data->t_infomask & HEAP_XMAX_INVALID) || + HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_data->t_infomask) || + HeapTupleHeaderIsOnlyLocked(tuple->t_data)) + result = TM_Ok; + else if (!ItemPointerEquals(&tuple->t_self, &tuple->t_data->t_ctid)) + result = TM_Updated; + else + result = TM_Deleted; + } + +failed: + if (result != TM_Ok) + { + Assert(result == TM_SelfModified || result == TM_Updated || + result == TM_Deleted || result == TM_WouldBlock); + + /* + * When locking a tuple under LockWaitSkip semantics and we fail with + * TM_WouldBlock above, it's possible for concurrent transactions to + * release the lock and set HEAP_XMAX_INVALID in the meantime. So + * this assert is slightly different from the equivalent one in + * tdeheap_delete and tdeheap_update. + */ + Assert((result == TM_WouldBlock) || + !(tuple->t_data->t_infomask & HEAP_XMAX_INVALID)); + Assert(result != TM_Updated || + !ItemPointerEquals(&tuple->t_self, &tuple->t_data->t_ctid)); + tmfd->ctid = tuple->t_data->t_ctid; + tmfd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data); + if (result == TM_SelfModified) + tmfd->cmax = HeapTupleHeaderGetCmax(tuple->t_data); + else + tmfd->cmax = InvalidCommandId; + goto out_locked; + } + + /* + * If we didn't pin the visibility map page and the page has become all + * visible while we were busy locking the buffer, or during some + * subsequent window during which we had it unlocked, we'll have to unlock + * and re-lock, to avoid holding the buffer lock across I/O. That's a bit + * unfortunate, especially since we'll now have to recheck whether the + * tuple has been locked or updated under us, but hopefully it won't + * happen very often. + */ + if (vmbuffer == InvalidBuffer && PageIsAllVisible(page)) + { + LockBuffer(*buffer, BUFFER_LOCK_UNLOCK); + tdeheap_visibilitymap_pin(relation, block, &vmbuffer); + LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE); + goto l3; + } + + xmax = HeapTupleHeaderGetRawXmax(tuple->t_data); + old_infomask = tuple->t_data->t_infomask; + + /* + * If this is the first possibly-multixact-able operation in the current + * transaction, set my per-backend OldestMemberMXactId setting. We can be + * certain that the transaction will never become a member of any older + * MultiXactIds than that. (We have to do this even if we end up just + * using our own TransactionId below, since some other backend could + * incorporate our XID into a MultiXact immediately afterwards.) + */ + MultiXactIdSetOldestMember(); + + /* + * Compute the new xmax and infomask to store into the tuple. 
Note we do + * not modify the tuple just yet, because that would leave it in the wrong + * state if multixact.c elogs. + */ + compute_new_xmax_infomask(xmax, old_infomask, tuple->t_data->t_infomask2, + GetCurrentTransactionId(), mode, false, + &xid, &new_infomask, &new_infomask2); + + START_CRIT_SECTION(); + + /* + * Store transaction information of xact locking the tuple. + * + * Note: Cmax is meaningless in this context, so don't set it; this avoids + * possibly generating a useless combo CID. Moreover, if we're locking a + * previously updated tuple, it's important to preserve the Cmax. + * + * Also reset the HOT UPDATE bit, but only if there's no update; otherwise + * we would break the HOT chain. + */ + tuple->t_data->t_infomask &= ~HEAP_XMAX_BITS; + tuple->t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + tuple->t_data->t_infomask |= new_infomask; + tuple->t_data->t_infomask2 |= new_infomask2; + if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask)) + HeapTupleHeaderClearHotUpdated(tuple->t_data); + HeapTupleHeaderSetXmax(tuple->t_data, xid); + + /* + * Make sure there is no forward chain link in t_ctid. Note that in the + * cases where the tuple has been updated, we must not overwrite t_ctid, + * because it was set by the updater. Moreover, if the tuple has been + * updated, we need to follow the update chain to lock the new versions of + * the tuple as well. + */ + if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask)) + tuple->t_data->t_ctid = *tid; + + /* Clear only the all-frozen bit on visibility map if needed */ + if (PageIsAllVisible(page) && + tdeheap_visibilitymap_clear(relation, block, vmbuffer, + VISIBILITYMAP_ALL_FROZEN)) + cleared_all_frozen = true; + + + MarkBufferDirty(*buffer); + + /* + * XLOG stuff. You might think that we don't need an XLOG record because + * there is no state change worth restoring after a crash. You would be + * wrong however: we have just written either a TransactionId or a + * MultiXactId that may never have been seen on disk before, and we need + * to make sure that there are XLOG entries covering those ID numbers. + * Else the same IDs might be re-used after a crash, which would be + * disastrous if this page made it to disk before the crash. Essentially + * we have to enforce the WAL log-before-data rule even in this case. + * (Also, in a PITR log-shipping or 2PC environment, we have to have XLOG + * entries for everything anyway.) + */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_lock xlrec; + XLogRecPtr recptr; + + XLogBeginInsert(); + XLogRegisterBuffer(0, *buffer, REGBUF_STANDARD); + + xlrec.offnum = ItemPointerGetOffsetNumber(&tuple->t_self); + xlrec.xmax = xid; + xlrec.infobits_set = compute_infobits(new_infomask, + tuple->t_data->t_infomask2); + xlrec.flags = cleared_all_frozen ? XLH_LOCK_ALL_FROZEN_CLEARED : 0; + XLogRegisterData((char *) &xlrec, SizeOfHeapLock); + + /* we don't decode row locks atm, so no need to log the origin */ + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_LOCK); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + result = TM_Ok; + +out_locked: + LockBuffer(*buffer, BUFFER_LOCK_UNLOCK); + +out_unlocked: + if (BufferIsValid(vmbuffer)) + ReleaseBuffer(vmbuffer); + + /* + * Don't update the visibility map here. Locking a tuple doesn't change + * visibility info. + */ + + /* + * Now that we have successfully marked the tuple as locked, we can + * release the lmgr tuple lock, if we had it. 
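+	 *
+	 * The heavyweight lock only served to establish our place in the queue
+	 * while we waited; the lock is now recorded in the tuple's xmax, which
+	 * is what other backends will see from here on.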
+ */ + if (have_tuple_lock) + UnlockTupleTuplock(relation, tid, mode); + + return result; +} + +/* + * Acquire heavyweight lock on the given tuple, in preparation for acquiring + * its normal, Xmax-based tuple lock. + * + * have_tuple_lock is an input and output parameter: on input, it indicates + * whether the lock has previously been acquired (and this function does + * nothing in that case). If this function returns success, have_tuple_lock + * has been flipped to true. + * + * Returns false if it was unable to obtain the lock; this can only happen if + * wait_policy is Skip. + */ +static bool +tdeheap_acquire_tuplock(Relation relation, ItemPointer tid, LockTupleMode mode, + LockWaitPolicy wait_policy, bool *have_tuple_lock) +{ + if (*have_tuple_lock) + return true; + + switch (wait_policy) + { + case LockWaitBlock: + LockTupleTuplock(relation, tid, mode); + break; + + case LockWaitSkip: + if (!ConditionalLockTupleTuplock(relation, tid, mode)) + return false; + break; + + case LockWaitError: + if (!ConditionalLockTupleTuplock(relation, tid, mode)) + ereport(ERROR, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("could not obtain lock on row in relation \"%s\"", + RelationGetRelationName(relation)))); + break; + } + *have_tuple_lock = true; + + return true; +} + +/* + * Given an original set of Xmax and infomask, and a transaction (identified by + * add_to_xmax) acquiring a new lock of some mode, compute the new Xmax and + * corresponding infomasks to use on the tuple. + * + * Note that this might have side effects such as creating a new MultiXactId. + * + * Most callers will have called HeapTupleSatisfiesUpdate before this function; + * that will have set the HEAP_XMAX_INVALID bit if the xmax was a MultiXactId + * but it was not running anymore. There is a race condition, which is that the + * MultiXactId may have finished since then, but that uncommon case is handled + * either here, or within MultiXactIdExpand. + * + * There is a similar race condition possible when the old xmax was a regular + * TransactionId. We test TransactionIdIsInProgress again just to narrow the + * window, but it's still possible to end up creating an unnecessary + * MultiXactId. Fortunately this is harmless. + */ +static void +compute_new_xmax_infomask(TransactionId xmax, uint16 old_infomask, + uint16 old_infomask2, TransactionId add_to_xmax, + LockTupleMode mode, bool is_update, + TransactionId *result_xmax, uint16 *result_infomask, + uint16 *result_infomask2) +{ + TransactionId new_xmax; + uint16 new_infomask, + new_infomask2; + + Assert(TransactionIdIsCurrentTransactionId(add_to_xmax)); + +l5: + new_infomask = 0; + new_infomask2 = 0; + if (old_infomask & HEAP_XMAX_INVALID) + { + /* + * No previous locker; we just insert our own TransactionId. + * + * Note that it's critical that this case be the first one checked, + * because there are several blocks below that come back to this one + * to implement certain optimizations; old_infomask might contain + * other dirty bits in those cases, but we don't really care. 
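+	 *
+	 * For example, locking a previously unlocked tuple FOR SHARE takes this
+	 * branch: new_xmax becomes our own xid, with HEAP_XMAX_LOCK_ONLY and
+	 * HEAP_XMAX_SHR_LOCK set in the infomask.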
+ */ + if (is_update) + { + new_xmax = add_to_xmax; + if (mode == LockTupleExclusive) + new_infomask2 |= HEAP_KEYS_UPDATED; + } + else + { + new_infomask |= HEAP_XMAX_LOCK_ONLY; + switch (mode) + { + case LockTupleKeyShare: + new_xmax = add_to_xmax; + new_infomask |= HEAP_XMAX_KEYSHR_LOCK; + break; + case LockTupleShare: + new_xmax = add_to_xmax; + new_infomask |= HEAP_XMAX_SHR_LOCK; + break; + case LockTupleNoKeyExclusive: + new_xmax = add_to_xmax; + new_infomask |= HEAP_XMAX_EXCL_LOCK; + break; + case LockTupleExclusive: + new_xmax = add_to_xmax; + new_infomask |= HEAP_XMAX_EXCL_LOCK; + new_infomask2 |= HEAP_KEYS_UPDATED; + break; + default: + new_xmax = InvalidTransactionId; /* silence compiler */ + elog(ERROR, "invalid lock mode"); + } + } + } + else if (old_infomask & HEAP_XMAX_IS_MULTI) + { + MultiXactStatus new_status; + + /* + * Currently we don't allow XMAX_COMMITTED to be set for multis, so + * cross-check. + */ + Assert(!(old_infomask & HEAP_XMAX_COMMITTED)); + + /* + * A multixact together with LOCK_ONLY set but neither lock bit set + * (i.e. a pg_upgraded share locked tuple) cannot possibly be running + * anymore. This check is critical for databases upgraded by + * pg_upgrade; both MultiXactIdIsRunning and MultiXactIdExpand assume + * that such multis are never passed. + */ + if (HEAP_LOCKED_UPGRADED(old_infomask)) + { + old_infomask &= ~HEAP_XMAX_IS_MULTI; + old_infomask |= HEAP_XMAX_INVALID; + goto l5; + } + + /* + * If the XMAX is already a MultiXactId, then we need to expand it to + * include add_to_xmax; but if all the members were lockers and are + * all gone, we can do away with the IS_MULTI bit and just set + * add_to_xmax as the only locker/updater. If all lockers are gone + * and we have an updater that aborted, we can also do without a + * multi. + * + * The cost of doing GetMultiXactIdMembers would be paid by + * MultiXactIdExpand if we weren't to do this, so this check is not + * incurring extra work anyhow. + */ + if (!MultiXactIdIsRunning(xmax, HEAP_XMAX_IS_LOCKED_ONLY(old_infomask))) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(old_infomask) || + !TransactionIdDidCommit(MultiXactIdGetUpdateXid(xmax, + old_infomask))) + { + /* + * Reset these bits and restart; otherwise fall through to + * create a new multi below. + */ + old_infomask &= ~HEAP_XMAX_IS_MULTI; + old_infomask |= HEAP_XMAX_INVALID; + goto l5; + } + } + + new_status = get_mxact_status_for_lock(mode, is_update); + + new_xmax = MultiXactIdExpand((MultiXactId) xmax, add_to_xmax, + new_status); + GetMultiXactIdHintBits(new_xmax, &new_infomask, &new_infomask2); + } + else if (old_infomask & HEAP_XMAX_COMMITTED) + { + /* + * It's a committed update, so we need to preserve him as updater of + * the tuple. + */ + MultiXactStatus status; + MultiXactStatus new_status; + + if (old_infomask2 & HEAP_KEYS_UPDATED) + status = MultiXactStatusUpdate; + else + status = MultiXactStatusNoKeyUpdate; + + new_status = get_mxact_status_for_lock(mode, is_update); + + /* + * since it's not running, it's obviously impossible for the old + * updater to be identical to the current one, so we need not check + * for that case as we do in the block above. + */ + new_xmax = MultiXactIdCreate(xmax, status, add_to_xmax, new_status); + GetMultiXactIdHintBits(new_xmax, &new_infomask, &new_infomask2); + } + else if (TransactionIdIsInProgress(xmax)) + { + /* + * If the XMAX is a valid, in-progress TransactionId, then we need to + * create a new MultiXactId that includes both the old locker or + * updater and our own TransactionId. 
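+	 *
+	 * For example, if another transaction still holds a (non-conflicting)
+	 * FOR KEY SHARE lock and we are updating non-key columns, we end up in
+	 * MultiXactIdCreate() with the old xid as ForKeyShare and our own xid
+	 * as NoKeyUpdate.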
+ */ + MultiXactStatus new_status; + MultiXactStatus old_status; + LockTupleMode old_mode; + + if (HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)) + { + if (HEAP_XMAX_IS_KEYSHR_LOCKED(old_infomask)) + old_status = MultiXactStatusForKeyShare; + else if (HEAP_XMAX_IS_SHR_LOCKED(old_infomask)) + old_status = MultiXactStatusForShare; + else if (HEAP_XMAX_IS_EXCL_LOCKED(old_infomask)) + { + if (old_infomask2 & HEAP_KEYS_UPDATED) + old_status = MultiXactStatusForUpdate; + else + old_status = MultiXactStatusForNoKeyUpdate; + } + else + { + /* + * LOCK_ONLY can be present alone only when a page has been + * upgraded by pg_upgrade. But in that case, + * TransactionIdIsInProgress() should have returned false. We + * assume it's no longer locked in this case. + */ + elog(WARNING, "LOCK_ONLY found for Xid in progress %u", xmax); + old_infomask |= HEAP_XMAX_INVALID; + old_infomask &= ~HEAP_XMAX_LOCK_ONLY; + goto l5; + } + } + else + { + /* it's an update, but which kind? */ + if (old_infomask2 & HEAP_KEYS_UPDATED) + old_status = MultiXactStatusUpdate; + else + old_status = MultiXactStatusNoKeyUpdate; + } + + old_mode = TUPLOCK_from_mxstatus(old_status); + + /* + * If the lock to be acquired is for the same TransactionId as the + * existing lock, there's an optimization possible: consider only the + * strongest of both locks as the only one present, and restart. + */ + if (xmax == add_to_xmax) + { + /* + * Note that it's not possible for the original tuple to be + * updated: we wouldn't be here because the tuple would have been + * invisible and we wouldn't try to update it. As a subtlety, + * this code can also run when traversing an update chain to lock + * future versions of a tuple. But we wouldn't be here either, + * because the add_to_xmax would be different from the original + * updater. + */ + Assert(HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)); + + /* acquire the strongest of both */ + if (mode < old_mode) + mode = old_mode; + /* mustn't touch is_update */ + + old_infomask |= HEAP_XMAX_INVALID; + goto l5; + } + + /* otherwise, just fall back to creating a new multixact */ + new_status = get_mxact_status_for_lock(mode, is_update); + new_xmax = MultiXactIdCreate(xmax, old_status, + add_to_xmax, new_status); + GetMultiXactIdHintBits(new_xmax, &new_infomask, &new_infomask2); + } + else if (!HEAP_XMAX_IS_LOCKED_ONLY(old_infomask) && + TransactionIdDidCommit(xmax)) + { + /* + * It's a committed update, so we gotta preserve him as updater of the + * tuple. + */ + MultiXactStatus status; + MultiXactStatus new_status; + + if (old_infomask2 & HEAP_KEYS_UPDATED) + status = MultiXactStatusUpdate; + else + status = MultiXactStatusNoKeyUpdate; + + new_status = get_mxact_status_for_lock(mode, is_update); + + /* + * since it's not running, it's obviously impossible for the old + * updater to be identical to the current one, so we need not check + * for that case as we do in the block above. + */ + new_xmax = MultiXactIdCreate(xmax, status, add_to_xmax, new_status); + GetMultiXactIdHintBits(new_xmax, &new_infomask, &new_infomask2); + } + else + { + /* + * Can get here iff the locking/updating transaction was running when + * the infomask was extracted from the tuple, but finished before + * TransactionIdIsInProgress got to run. Deal with it as if there was + * no locker at all in the first place. + */ + old_infomask |= HEAP_XMAX_INVALID; + goto l5; + } + + *result_infomask = new_infomask; + *result_infomask2 = new_infomask2; + *result_xmax = new_xmax; +} + +/* + * Subroutine for tdeheap_lock_updated_tuple_rec. 
+ * + * Given a hypothetical multixact status held by the transaction identified + * with the given xid, does the current transaction need to wait, fail, or can + * it continue if it wanted to acquire a lock of the given mode? "needwait" + * is set to true if waiting is necessary; if it can continue, then TM_Ok is + * returned. If the lock is already held by the current transaction, return + * TM_SelfModified. In case of a conflict with another transaction, a + * different HeapTupleSatisfiesUpdate return code is returned. + * + * The held status is said to be hypothetical because it might correspond to a + * lock held by a single Xid, i.e. not a real MultiXactId; we express it this + * way for simplicity of API. + */ +static TM_Result +test_lockmode_for_conflict(MultiXactStatus status, TransactionId xid, + LockTupleMode mode, HeapTuple tup, + bool *needwait) +{ + MultiXactStatus wantedstatus; + + *needwait = false; + wantedstatus = get_mxact_status_for_lock(mode, false); + + /* + * Note: we *must* check TransactionIdIsInProgress before + * TransactionIdDidAbort/Commit; see comment at top of heapam_visibility.c + * for an explanation. + */ + if (TransactionIdIsCurrentTransactionId(xid)) + { + /* + * The tuple has already been locked by our own transaction. This is + * very rare but can happen if multiple transactions are trying to + * lock an ancient version of the same tuple. + */ + return TM_SelfModified; + } + else if (TransactionIdIsInProgress(xid)) + { + /* + * If the locking transaction is running, what we do depends on + * whether the lock modes conflict: if they do, then we must wait for + * it to finish; otherwise we can fall through to lock this tuple + * version without waiting. + */ + if (DoLockModesConflict(LOCKMODE_from_mxstatus(status), + LOCKMODE_from_mxstatus(wantedstatus))) + { + *needwait = true; + } + + /* + * If we set needwait above, then this value doesn't matter; + * otherwise, this value signals to caller that it's okay to proceed. + */ + return TM_Ok; + } + else if (TransactionIdDidAbort(xid)) + return TM_Ok; + else if (TransactionIdDidCommit(xid)) + { + /* + * The other transaction committed. If it was only a locker, then the + * lock is completely gone now and we can return success; but if it + * was an update, then what we do depends on whether the two lock + * modes conflict. If they conflict, then we must report error to + * caller. But if they don't, we can fall through to allow the current + * transaction to lock the tuple. + * + * Note: the reason we worry about ISUPDATE here is because as soon as + * a transaction ends, all its locks are gone and meaningless, and + * thus we can ignore them; whereas its updates persist. In the + * TransactionIdIsInProgress case, above, we don't need to check + * because we know the lock is still "alive" and thus a conflict needs + * always be checked. + */ + if (!ISUPDATE_from_mxstatus(status)) + return TM_Ok; + + if (DoLockModesConflict(LOCKMODE_from_mxstatus(status), + LOCKMODE_from_mxstatus(wantedstatus))) + { + /* bummer */ + if (!ItemPointerEquals(&tup->t_self, &tup->t_data->t_ctid)) + return TM_Updated; + else + return TM_Deleted; + } + + return TM_Ok; + } + + /* Not in progress, not aborted, not committed -- must have crashed */ + return TM_Ok; +} + + +/* + * Recursive part of tdeheap_lock_updated_tuple + * + * Fetch the tuple pointed to by tid in rel, and mark it as locked by the given + * xid with the given mode; if this tuple is updated, recurse to lock the new + * version as well. 
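+ *
+ * (Despite "recurse" above, the implementation below is iterative: each
+ * pass of the loop locks one tuple version, then follows t_ctid to the
+ * next version until the end of the update chain is reached.)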
+ */ +static TM_Result +tdeheap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid, + LockTupleMode mode) +{ + TM_Result result; + ItemPointerData tupid; + HeapTupleData mytup; + Buffer buf; + uint16 new_infomask, + new_infomask2, + old_infomask, + old_infomask2; + TransactionId xmax, + new_xmax; + TransactionId priorXmax = InvalidTransactionId; + bool cleared_all_frozen = false; + bool pinned_desired_page; + Buffer vmbuffer = InvalidBuffer; + BlockNumber block; + + ItemPointerCopy(tid, &tupid); + + for (;;) + { + new_infomask = 0; + new_xmax = InvalidTransactionId; + block = ItemPointerGetBlockNumber(&tupid); + ItemPointerCopy(&tupid, &(mytup.t_self)); + + if (!tdeheap_fetch(rel, SnapshotAny, &mytup, &buf, false)) + { + /* + * if we fail to find the updated version of the tuple, it's + * because it was vacuumed/pruned away after its creator + * transaction aborted. So behave as if we got to the end of the + * chain, and there's no further tuple to lock: return success to + * caller. + */ + result = TM_Ok; + goto out_unlocked; + } + +l4: + CHECK_FOR_INTERRUPTS(); + + /* + * Before locking the buffer, pin the visibility map page if it + * appears to be necessary. Since we haven't got the lock yet, + * someone else might be in the middle of changing this, so we'll need + * to recheck after we have the lock. + */ + if (PageIsAllVisible(BufferGetPage(buf))) + { + tdeheap_visibilitymap_pin(rel, block, &vmbuffer); + pinned_desired_page = true; + } + else + pinned_desired_page = false; + + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + + /* + * If we didn't pin the visibility map page and the page has become + * all visible while we were busy locking the buffer, we'll have to + * unlock and re-lock, to avoid holding the buffer lock across I/O. + * That's a bit unfortunate, but hopefully shouldn't happen often. + * + * Note: in some paths through this function, we will reach here + * holding a pin on a vm page that may or may not be the one matching + * this page. If this page isn't all-visible, we won't use the vm + * page, but we hold onto such a pin till the end of the function. + */ + if (!pinned_desired_page && PageIsAllVisible(BufferGetPage(buf))) + { + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + tdeheap_visibilitymap_pin(rel, block, &vmbuffer); + LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE); + } + + /* + * Check the tuple XMIN against prior XMAX, if any. If we reached the + * end of the chain, we're done, so return success. + */ + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(HeapTupleHeaderGetXmin(mytup.t_data), + priorXmax)) + { + result = TM_Ok; + goto out_locked; + } + + /* + * Also check Xmin: if this tuple was created by an aborted + * (sub)transaction, then we already locked the last live one in the + * chain, thus we're done, so return success. + */ + if (TransactionIdDidAbort(HeapTupleHeaderGetXmin(mytup.t_data))) + { + result = TM_Ok; + goto out_locked; + } + + old_infomask = mytup.t_data->t_infomask; + old_infomask2 = mytup.t_data->t_infomask2; + xmax = HeapTupleHeaderGetRawXmax(mytup.t_data); + + /* + * If this tuple version has been updated or locked by some concurrent + * transaction(s), what we do depends on whether our lock mode + * conflicts with what those other transactions hold, and also on the + * status of them. 
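+ *
+ * For example (illustrative): key-share lockers in xmax don't conflict
+ * with another requested LockTupleKeyShare, so no sleep is needed; an
+ * in-progress updater member makes test_lockmode_for_conflict() set
+ * needwait, and we drop the buffer lock before sleeping on that xid.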
+ */ + if (!(old_infomask & HEAP_XMAX_INVALID)) + { + TransactionId rawxmax; + bool needwait; + + rawxmax = HeapTupleHeaderGetRawXmax(mytup.t_data); + if (old_infomask & HEAP_XMAX_IS_MULTI) + { + int nmembers; + int i; + MultiXactMember *members; + + /* + * We don't need a test for pg_upgrade'd tuples: this is only + * applied to tuples after the first in an update chain. Said + * first tuple in the chain may well be locked-in-9.2-and- + * pg_upgraded, but that one was already locked by our caller, + * not us; and any subsequent ones cannot be because our + * caller must necessarily have obtained a snapshot later than + * the pg_upgrade itself. + */ + Assert(!HEAP_LOCKED_UPGRADED(mytup.t_data->t_infomask)); + + nmembers = GetMultiXactIdMembers(rawxmax, &members, false, + HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)); + for (i = 0; i < nmembers; i++) + { + result = test_lockmode_for_conflict(members[i].status, + members[i].xid, + mode, + &mytup, + &needwait); + + /* + * If the tuple was already locked by ourselves in a + * previous iteration of this (say tdeheap_lock_tuple was + * forced to restart the locking loop because of a change + * in xmax), then we hold the lock already on this tuple + * version and we don't need to do anything; and this is + * not an error condition either. We just need to skip + * this tuple and continue locking the next version in the + * update chain. + */ + if (result == TM_SelfModified) + { + pfree(members); + goto next; + } + + if (needwait) + { + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + XactLockTableWait(members[i].xid, rel, + &mytup.t_self, + XLTW_LockUpdated); + pfree(members); + goto l4; + } + if (result != TM_Ok) + { + pfree(members); + goto out_locked; + } + } + if (members) + pfree(members); + } + else + { + MultiXactStatus status; + + /* + * For a non-multi Xmax, we first need to compute the + * corresponding MultiXactStatus by using the infomask bits. + */ + if (HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)) + { + if (HEAP_XMAX_IS_KEYSHR_LOCKED(old_infomask)) + status = MultiXactStatusForKeyShare; + else if (HEAP_XMAX_IS_SHR_LOCKED(old_infomask)) + status = MultiXactStatusForShare; + else if (HEAP_XMAX_IS_EXCL_LOCKED(old_infomask)) + { + if (old_infomask2 & HEAP_KEYS_UPDATED) + status = MultiXactStatusForUpdate; + else + status = MultiXactStatusForNoKeyUpdate; + } + else + { + /* + * LOCK_ONLY present alone (a pg_upgraded tuple marked + * as share-locked in the old cluster) shouldn't be + * seen in the middle of an update chain. + */ + elog(ERROR, "invalid lock status in tuple"); + } + } + else + { + /* it's an update, but which kind? */ + if (old_infomask2 & HEAP_KEYS_UPDATED) + status = MultiXactStatusUpdate; + else + status = MultiXactStatusNoKeyUpdate; + } + + result = test_lockmode_for_conflict(status, rawxmax, mode, + &mytup, &needwait); + + /* + * If the tuple was already locked by ourselves in a previous + * iteration of this (say tdeheap_lock_tuple was forced to + * restart the locking loop because of a change in xmax), then + * we hold the lock already on this tuple version and we don't + * need to do anything; and this is not an error condition + * either. We just need to skip this tuple and continue + * locking the next version in the update chain. 
+ */ + if (result == TM_SelfModified) + goto next; + + if (needwait) + { + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + XactLockTableWait(rawxmax, rel, &mytup.t_self, + XLTW_LockUpdated); + goto l4; + } + if (result != TM_Ok) + { + goto out_locked; + } + } + } + + /* compute the new Xmax and infomask values for the tuple ... */ + compute_new_xmax_infomask(xmax, old_infomask, mytup.t_data->t_infomask2, + xid, mode, false, + &new_xmax, &new_infomask, &new_infomask2); + + if (PageIsAllVisible(BufferGetPage(buf)) && + tdeheap_visibilitymap_clear(rel, block, vmbuffer, + VISIBILITYMAP_ALL_FROZEN)) + cleared_all_frozen = true; + + START_CRIT_SECTION(); + + /* ... and set them */ + HeapTupleHeaderSetXmax(mytup.t_data, new_xmax); + mytup.t_data->t_infomask &= ~HEAP_XMAX_BITS; + mytup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED; + mytup.t_data->t_infomask |= new_infomask; + mytup.t_data->t_infomask2 |= new_infomask2; + + MarkBufferDirty(buf); + + /* XLOG stuff */ + if (RelationNeedsWAL(rel)) + { + xl_tdeheap_lock_updated xlrec; + XLogRecPtr recptr; + Page page = BufferGetPage(buf); + + XLogBeginInsert(); + XLogRegisterBuffer(0, buf, REGBUF_STANDARD); + + xlrec.offnum = ItemPointerGetOffsetNumber(&mytup.t_self); + xlrec.xmax = new_xmax; + xlrec.infobits_set = compute_infobits(new_infomask, new_infomask2); + xlrec.flags = + cleared_all_frozen ? XLH_LOCK_ALL_FROZEN_CLEARED : 0; + + XLogRegisterData((char *) &xlrec, SizeOfHeapLockUpdated); + + recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_LOCK_UPDATED); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + +next: + /* if we find the end of update chain, we're done. */ + if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID || + HeapTupleHeaderIndicatesMovedPartitions(mytup.t_data) || + ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) || + HeapTupleHeaderIsOnlyLocked(mytup.t_data)) + { + result = TM_Ok; + goto out_locked; + } + + /* tail recursion */ + priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data); + ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid); + UnlockReleaseBuffer(buf); + } + + result = TM_Ok; + +out_locked: + UnlockReleaseBuffer(buf); + +out_unlocked: + if (vmbuffer != InvalidBuffer) + ReleaseBuffer(vmbuffer); + + return result; +} + +/* + * tdeheap_lock_updated_tuple + * Follow update chain when locking an updated tuple, acquiring locks (row + * marks) on the updated versions. + * + * The initial tuple is assumed to be already locked. + * + * This function doesn't check visibility, it just unconditionally marks the + * tuple(s) as locked. If any tuple in the updated chain is being deleted + * concurrently (or updated with the key being modified), sleep until the + * transaction doing it is finished. + * + * Note that we don't acquire heavyweight tuple locks on the tuples we walk + * when we have to wait for other transactions to release them, as opposed to + * what tdeheap_lock_tuple does. The reason is that having more than one + * transaction walking the chain is probably uncommon enough that risk of + * starvation is not likely: one of the preconditions for being here is that + * the snapshot in use predates the update that created this tuple (because we + * started at an earlier version of the tuple), but at the same time such a + * transaction cannot be using repeatable read or serializable isolation + * levels, because that would lead to a serializability failure. 
+ */ +static TM_Result +tdeheap_lock_updated_tuple(Relation rel, HeapTuple tuple, ItemPointer ctid, + TransactionId xid, LockTupleMode mode) +{ + /* + * If the tuple has not been updated, or has moved into another partition + * (effectively a delete) stop here. + */ + if (!HeapTupleHeaderIndicatesMovedPartitions(tuple->t_data) && + !ItemPointerEquals(&tuple->t_self, ctid)) + { + /* + * If this is the first possibly-multixact-able operation in the + * current transaction, set my per-backend OldestMemberMXactId + * setting. We can be certain that the transaction will never become a + * member of any older MultiXactIds than that. (We have to do this + * even if we end up just using our own TransactionId below, since + * some other backend could incorporate our XID into a MultiXact + * immediately afterwards.) + */ + MultiXactIdSetOldestMember(); + + return tdeheap_lock_updated_tuple_rec(rel, ctid, xid, mode); + } + + /* nothing to lock */ + return TM_Ok; +} + +/* + * tdeheap_finish_speculative - mark speculative insertion as successful + * + * To successfully finish a speculative insertion we have to clear speculative + * token from tuple. To do so the t_ctid field, which will contain a + * speculative token value, is modified in place to point to the tuple itself, + * which is characteristic of a newly inserted ordinary tuple. + * + * NB: It is not ok to commit without either finishing or aborting a + * speculative insertion. We could treat speculative tuples of committed + * transactions implicitly as completed, but then we would have to be prepared + * to deal with speculative tokens on committed tuples. That wouldn't be + * difficult - no-one looks at the ctid field of a tuple with invalid xmax - + * but clearing the token at completion isn't very expensive either. + * An explicit confirmation WAL record also makes logical decoding simpler. + */ +void +tdeheap_finish_speculative(Relation relation, ItemPointer tid) +{ + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + + buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid)); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + page = (Page) BufferGetPage(buffer); + + offnum = ItemPointerGetOffsetNumber(tid); + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(ERROR, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* NO EREPORT(ERROR) from here till changes are logged */ + START_CRIT_SECTION(); + + Assert(HeapTupleHeaderIsSpeculative(htup)); + + MarkBufferDirty(buffer); + + /* + * Replace the speculative insertion token with a real t_ctid, pointing to + * itself like it does on regular tuples. + */ + htup->t_ctid = *tid; + + /* XLOG stuff */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_confirm xlrec; + XLogRecPtr recptr; + + xlrec.offnum = ItemPointerGetOffsetNumber(tid); + + XLogBeginInsert(); + + /* We want the same filtering on this as on a plain insert */ + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); + + XLogRegisterData((char *) &xlrec, SizeOfHeapConfirm); + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_CONFIRM); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + UnlockReleaseBuffer(buffer); +} + +/* + * tdeheap_abort_speculative - kill a speculatively inserted tuple + * + * Marks a tuple that was speculatively inserted in the same command as dead, + * by setting its xmin as invalid. 
That makes it immediately appear as dead + * to all transactions, including our own. In particular, it makes + * HeapTupleSatisfiesDirty() regard the tuple as dead, so that another backend + * inserting a duplicate key value won't unnecessarily wait for our whole + * transaction to finish (it'll just wait for our speculative insertion to + * finish). + * + * Killing the tuple prevents "unprincipled deadlocks", which are deadlocks + * that arise due to a mutual dependency that is not user visible. By + * definition, unprincipled deadlocks cannot be prevented by the user + * reordering lock acquisition in client code, because the implementation level + * lock acquisitions are not under the user's direct control. If speculative + * inserters did not take this precaution, then under high concurrency they + * could deadlock with each other, which would not be acceptable. + * + * This is somewhat redundant with tdeheap_delete, but we prefer to have a + * dedicated routine with stripped down requirements. Note that this is also + * used to delete the TOAST tuples created during speculative insertion. + * + * This routine does not affect logical decoding as it only looks at + * confirmation records. + */ +void +tdeheap_abort_speculative(Relation relation, ItemPointer tid) +{ + TransactionId xid = GetCurrentTransactionId(); + ItemId lp; + HeapTupleData tp; + Page page; + BlockNumber block; + Buffer buffer; + + Assert(ItemPointerIsValid(tid)); + + block = ItemPointerGetBlockNumber(tid); + buffer = ReadBuffer(relation, block); + page = BufferGetPage(buffer); + + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + + /* + * Page can't be all visible, we just inserted into it, and are still + * running. + */ + Assert(!PageIsAllVisible(page)); + + lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid)); + Assert(ItemIdIsNormal(lp)); + + tp.t_tableOid = RelationGetRelid(relation); + tp.t_data = (HeapTupleHeader) PageGetItem(page, lp); + tp.t_len = ItemIdGetLength(lp); + tp.t_self = *tid; + + /* + * Sanity check that the tuple really is a speculatively inserted tuple, + * inserted by us. + */ + if (tp.t_data->t_choice.t_heap.t_xmin != xid) + elog(ERROR, "attempted to kill a tuple inserted by another transaction"); + if (!(IsToastRelation(relation) || HeapTupleHeaderIsSpeculative(tp.t_data))) + elog(ERROR, "attempted to kill a non-speculative tuple"); + Assert(!HeapTupleHeaderIsHeapOnly(tp.t_data)); + + /* + * No need to check for serializable conflicts here. There is never a + * need for a combo CID, either. No need to extract replica identity, or + * do anything special with infomask bits. + */ + + START_CRIT_SECTION(); + + /* + * The tuple will become DEAD immediately. Flag that this page is a + * candidate for pruning by setting xmin to TransactionXmin. While not + * immediately prunable, it is the oldest xid we can cheaply determine + * that's safe against wraparound / being older than the table's + * relfrozenxid. To defend against the unlikely case of a new relation + * having a newer relfrozenxid than our TransactionXmin, use relfrozenxid + * if so (vacuum can't subsequently move relfrozenxid to beyond + * TransactionXmin, so there's no race here). 
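+ *
+ * Concretely (made-up values): with TransactionXmin = 700 and
+ * relfrozenxid = 650, the block below sets prune_xid = 700; for a newly
+ * created relation with relfrozenxid = 710 it would use 710 instead.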
+ */
+ Assert(TransactionIdIsValid(TransactionXmin));
+ {
+ TransactionId relfrozenxid = relation->rd_rel->relfrozenxid;
+ TransactionId prune_xid;
+
+ if (TransactionIdPrecedes(TransactionXmin, relfrozenxid))
+ prune_xid = relfrozenxid;
+ else
+ prune_xid = TransactionXmin;
+ PageSetPrunable(page, prune_xid);
+ }
+
+ /* store transaction information of xact deleting the tuple */
+ tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+ tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
+
+ /*
+ * Set the tuple header xmin to InvalidTransactionId. This makes the
+ * tuple immediately invisible to everyone. (In particular, to any
+ * transactions waiting on the speculative token, woken up later.)
+ */
+ HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
+
+ /* Clear the speculative insertion token too */
+ tp.t_data->t_ctid = tp.t_self;
+
+ MarkBufferDirty(buffer);
+
+ /*
+ * XLOG stuff
+ *
+ * The WAL records generated here match tdeheap_delete(). The same recovery
+ * routines are used.
+ */
+ if (RelationNeedsWAL(relation))
+ {
+ xl_tdeheap_delete xlrec;
+ XLogRecPtr recptr;
+
+ xlrec.flags = XLH_DELETE_IS_SUPER;
+ xlrec.infobits_set = compute_infobits(tp.t_data->t_infomask,
+ tp.t_data->t_infomask2);
+ xlrec.offnum = ItemPointerGetOffsetNumber(&tp.t_self);
+ xlrec.xmax = xid;
+
+ XLogBeginInsert();
+ XLogRegisterData((char *) &xlrec, SizeOfHeapDelete);
+ XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+ /* No replica identity & replication origin logged */
+
+ recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_DELETE);
+
+ PageSetLSN(page, recptr);
+ }
+
+ END_CRIT_SECTION();
+
+ LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+
+ if (HeapTupleHasExternal(&tp))
+ {
+ Assert(!IsToastRelation(relation));
+ tdeheap_toast_delete(relation, &tp, true);
+ }
+
+ /*
+ * Never need to mark tuple for invalidation, since catalogs don't support
+ * speculative insertion
+ */
+
+ /* Now we can release the buffer */
+ ReleaseBuffer(buffer);
+
+ /* count deletion, as we counted the insertion too */
+ pgstat_count_tdeheap_delete(relation);
+}
+
+/*
+ * tdeheap_inplace_update - update a tuple "in place" (ie, overwrite it)
+ *
+ * Overwriting violates both MVCC and transactional safety, so the uses
+ * of this function in Postgres are extremely limited. Nonetheless we
+ * find some places to use it.
+ *
+ * The tuple cannot change size, and therefore it's reasonable to assume
+ * that its null bitmap (if any) doesn't change either. So we just
+ * overwrite the data portion of the tuple without touching the null
+ * bitmap or any of the header fields.
+ *
+ * tuple is an in-memory tuple structure containing the data to be written
+ * over the target tuple. Also, tuple->t_self identifies the target tuple.
+ *
+ * Note that the tuple updated here had better not come directly from the
+ * syscache if the relation has a toast relation as this tuple could
+ * include toast values that have been expanded, causing a failure here.
+ */
+void
+tdeheap_inplace_update(Relation relation, HeapTuple tuple)
+{
+ Buffer buffer;
+ Page page;
+ OffsetNumber offnum;
+ ItemId lp = NULL;
+ HeapTupleHeader htup;
+ uint32 oldlen;
+ uint32 newlen;
+
+ /*
+ * For now, we don't allow parallel updates. Unlike a regular update,
+ * this should never create a combo CID, so it might be possible to relax
+ * this restriction, but not without more thought and testing. It's not
+ * clear that it would be useful, anyway.
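+ *
+ * (For context: this is the mechanism core PostgreSQL has used for,
+ * e.g., VACUUM's pg_class statistics updates; callers of this variant
+ * are expected to fit the same narrow mold.)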
+ */ + if (IsInParallelMode()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_STATE), + errmsg("cannot update tuples during a parallel operation"))); + + INJECTION_POINT("inplace-before-pin"); + buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(&(tuple->t_self))); + LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); + page = (Page) BufferGetPage(buffer); + + offnum = ItemPointerGetOffsetNumber(&(tuple->t_self)); + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(ERROR, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + oldlen = ItemIdGetLength(lp) - htup->t_hoff; + newlen = tuple->t_len - tuple->t_data->t_hoff; + if (oldlen != newlen || htup->t_hoff != tuple->t_data->t_hoff) + elog(ERROR, "wrong tuple length"); + + /* NO EREPORT(ERROR) from here till changes are logged */ + START_CRIT_SECTION(); + + memcpy((char *) htup + htup->t_hoff, + (char *) tuple->t_data + tuple->t_data->t_hoff, + newlen); + + MarkBufferDirty(buffer); + + /* XLOG stuff */ + if (RelationNeedsWAL(relation)) + { + xl_tdeheap_inplace xlrec; + XLogRecPtr recptr; + + xlrec.offnum = ItemPointerGetOffsetNumber(&tuple->t_self); + + XLogBeginInsert(); + XLogRegisterData((char *) &xlrec, SizeOfHeapInplace); + + XLogRegisterBuffer(0, buffer, REGBUF_STANDARD); + XLogRegisterBufData(0, (char *) htup + htup->t_hoff, newlen); + + /* inplace updates aren't decoded atm, don't log the origin */ + + recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_INPLACE); + + PageSetLSN(page, recptr); + } + + END_CRIT_SECTION(); + + UnlockReleaseBuffer(buffer); + + /* + * Send out shared cache inval if necessary. Note that because we only + * pass the new version of the tuple, this mustn't be used for any + * operations that could change catcache lookup keys. But we aren't + * bothering with index updates either, so that's true a fortiori. + */ + if (!IsBootstrapProcessingMode()) + CacheInvalidateHeapTuple(relation, tuple, NULL); +} + +#define FRM_NOOP 0x0001 +#define FRM_INVALIDATE_XMAX 0x0002 +#define FRM_RETURN_IS_XID 0x0004 +#define FRM_RETURN_IS_MULTI 0x0008 +#define FRM_MARK_COMMITTED 0x0010 + +/* + * FreezeMultiXactId + * Determine what to do during freezing when a tuple is marked by a + * MultiXactId. + * + * "flags" is an output value; it's used to tell caller what to do on return. + * "pagefrz" is an input/output value, used to manage page level freezing. + * + * Possible values that we can set in "flags": + * FRM_NOOP + * don't do anything -- keep existing Xmax + * FRM_INVALIDATE_XMAX + * mark Xmax as InvalidTransactionId and set XMAX_INVALID flag. + * FRM_RETURN_IS_XID + * The Xid return value is a single update Xid to set as xmax. + * FRM_MARK_COMMITTED + * Xmax can be marked as HEAP_XMAX_COMMITTED + * FRM_RETURN_IS_MULTI + * The return value is a new MultiXactId to set as new Xmax. + * (caller must obtain proper infomask bits using GetMultiXactIdHintBits) + * + * Caller delegates control of page freezing to us. In practice we always + * force freezing of caller's page unless FRM_NOOP processing is indicated. + * We help caller ensure that XIDs < FreezeLimit and MXIDs < MultiXactCutoff + * can never be left behind. We freely choose when and how to process each + * Multi, without ever violating the cutoff postconditions for freezing. + * + * It's useful to remove Multis on a proactive timeline (relative to freezing + * XIDs) to keep MultiXact member SLRU buffer misses to a minimum. 
It can also + * be cheaper in the short run, for us, since we too can avoid SLRU buffer + * misses through eager processing. + * + * NB: Creates a _new_ MultiXactId when FRM_RETURN_IS_MULTI is set, though only + * when FreezeLimit and/or MultiXactCutoff cutoffs leave us with no choice. + * This can usually be put off, which is usually enough to avoid it altogether. + * Allocating new multis during VACUUM should be avoided on general principle; + * only VACUUM can advance relminmxid, so allocating new Multis here comes with + * its own special risks. + * + * NB: Caller must maintain "no freeze" NewRelfrozenXid/NewRelminMxid trackers + * using tdeheap_tuple_should_freeze when we haven't forced page-level freezing. + * + * NB: Caller should avoid needlessly calling tdeheap_tuple_should_freeze when we + * have already forced page-level freezing, since that might incur the same + * SLRU buffer misses that we specifically intended to avoid by freezing. + */ +static TransactionId +FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, + const struct VacuumCutoffs *cutoffs, uint16 *flags, + HeapPageFreeze *pagefrz) +{ + TransactionId newxmax; + MultiXactMember *members; + int nmembers; + bool need_replace; + int nnewmembers; + MultiXactMember *newmembers; + bool has_lockers; + TransactionId update_xid; + bool update_committed; + TransactionId FreezePageRelfrozenXid; + + *flags = 0; + + /* We should only be called in Multis */ + Assert(t_infomask & HEAP_XMAX_IS_MULTI); + + if (!MultiXactIdIsValid(multi) || + HEAP_LOCKED_UPGRADED(t_infomask)) + { + *flags |= FRM_INVALIDATE_XMAX; + pagefrz->freeze_required = true; + return InvalidTransactionId; + } + else if (MultiXactIdPrecedes(multi, cutoffs->relminmxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found multixact %u from before relminmxid %u", + multi, cutoffs->relminmxid))); + else if (MultiXactIdPrecedes(multi, cutoffs->OldestMxact)) + { + TransactionId update_xact; + + /* + * This old multi cannot possibly have members still running, but + * verify just in case. If it was a locker only, it can be removed + * without any further consideration; but if it contained an update, + * we might need to preserve it. + */ + if (MultiXactIdIsRunning(multi, + HEAP_XMAX_IS_LOCKED_ONLY(t_infomask))) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u from before multi freeze cutoff %u found to be still running", + multi, cutoffs->OldestMxact))); + + if (HEAP_XMAX_IS_LOCKED_ONLY(t_infomask)) + { + *flags |= FRM_INVALIDATE_XMAX; + pagefrz->freeze_required = true; + return InvalidTransactionId; + } + + /* replace multi with single XID for its updater? */ + update_xact = MultiXactIdGetUpdateXid(multi, t_infomask); + if (TransactionIdPrecedes(update_xact, cutoffs->relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u contains update XID %u from before relfrozenxid %u", + multi, update_xact, + cutoffs->relfrozenxid))); + else if (TransactionIdPrecedes(update_xact, cutoffs->OldestXmin)) + { + /* + * Updater XID has to have aborted (otherwise the tuple would have + * been pruned away instead, since updater XID is < OldestXmin). + * Just remove xmax. 
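+ *
+ * (Illustrative: with OldestXmin = 800, an updater xid 750 that never
+ * committed is discarded below via FRM_INVALIDATE_XMAX; had it
+ * committed, the tuple would have been pruned instead, so finding a
+ * committed update here is reported as corruption.)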
+ */
+ if (TransactionIdDidCommit(update_xact))
+ ereport(ERROR,
+ (errcode(ERRCODE_DATA_CORRUPTED),
+ errmsg_internal("multixact %u contains committed update XID %u from before removable cutoff %u",
+ multi, update_xact,
+ cutoffs->OldestXmin)));
+ *flags |= FRM_INVALIDATE_XMAX;
+ pagefrz->freeze_required = true;
+ return InvalidTransactionId;
+ }
+
+ /* Have to keep updater XID as new xmax */
+ *flags |= FRM_RETURN_IS_XID;
+ pagefrz->freeze_required = true;
+ return update_xact;
+ }
+
+ /*
+ * Some member(s) of this Multi may be below FreezeLimit xid cutoff, so we
+ * need to walk the whole members array to figure out what to do, if
+ * anything.
+ */
+ nmembers =
+ GetMultiXactIdMembers(multi, &members, false,
+ HEAP_XMAX_IS_LOCKED_ONLY(t_infomask));
+ if (nmembers <= 0)
+ {
+ /* Nothing worth keeping */
+ *flags |= FRM_INVALIDATE_XMAX;
+ pagefrz->freeze_required = true;
+ return InvalidTransactionId;
+ }
+
+ /*
+ * The FRM_NOOP case is the only case where we might need to ratchet back
+ * FreezePageRelfrozenXid or FreezePageRelminMxid. It is also the only
+ * case where our caller might ratchet back its NoFreezePageRelfrozenXid
+ * or NoFreezePageRelminMxid "no freeze" trackers to deal with a multi.
+ * FRM_NOOP handling should result in the NewRelfrozenXid/NewRelminMxid
+ * trackers managed by VACUUM being ratcheted back by xmax to the degree
+ * required to make it safe to leave xmax undisturbed, independent of
+ * whether or not page freezing is triggered somewhere else.
+ *
+ * Our policy is to force freezing in every case other than FRM_NOOP,
+ * which obviates the need to maintain either set of trackers, anywhere.
+ * Every other case will reliably execute a freeze plan for xmax that
+ * either replaces xmax with an XID/MXID >= OldestXmin/OldestMxact, or
+ * sets xmax to an InvalidTransactionId XID, rendering xmax fully frozen.
+ * (VACUUM's NewRelfrozenXid/NewRelminMxid trackers are initialized with
+ * OldestXmin/OldestMxact, so later values never need to be tracked here.)
+ */
+ need_replace = false;
+ FreezePageRelfrozenXid = pagefrz->FreezePageRelfrozenXid;
+ for (int i = 0; i < nmembers; i++)
+ {
+ TransactionId xid = members[i].xid;
+
+ Assert(!TransactionIdPrecedes(xid, cutoffs->relfrozenxid));
+
+ if (TransactionIdPrecedes(xid, cutoffs->FreezeLimit))
+ {
+ /* Can't violate the FreezeLimit postcondition */
+ need_replace = true;
+ break;
+ }
+ if (TransactionIdPrecedes(xid, FreezePageRelfrozenXid))
+ FreezePageRelfrozenXid = xid;
+ }
+
+ /* Can't violate the MultiXactCutoff postcondition, either */
+ if (!need_replace)
+ need_replace = MultiXactIdPrecedes(multi, cutoffs->MultiXactCutoff);
+
+ if (!need_replace)
+ {
+ /*
+ * vacuumlazy.c might ratchet back NewRelminMxid, NewRelfrozenXid, or
+ * both together to make it safe to retain this particular multi after
+ * freezing its page
+ */
+ *flags |= FRM_NOOP;
+ pagefrz->FreezePageRelfrozenXid = FreezePageRelfrozenXid;
+ if (MultiXactIdPrecedes(multi, pagefrz->FreezePageRelminMxid))
+ pagefrz->FreezePageRelminMxid = multi;
+ pfree(members);
+ return multi;
+ }
+
+ /*
+ * Do a more thorough second pass over the multi to figure out which
+ * member XIDs actually need to be kept. Checking the precise status of
+ * individual members might even show that we don't need to keep anything.
+ * That is quite possible even though the Multi must be >= OldestMxact,
+ * since our second pass only keeps member XIDs when it's truly necessary;
+ * even member XIDs >= OldestXmin often won't be kept by second pass.
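+ *
+ * (For instance, a locker XID that has already committed is dropped by
+ * the second pass even when it is >= OldestXmin: locker members are kept
+ * only while still running, and updater members only when running or
+ * committed.)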
+ */ + nnewmembers = 0; + newmembers = palloc(sizeof(MultiXactMember) * nmembers); + has_lockers = false; + update_xid = InvalidTransactionId; + update_committed = false; + + /* + * Determine whether to keep each member xid, or to ignore it instead + */ + for (int i = 0; i < nmembers; i++) + { + TransactionId xid = members[i].xid; + MultiXactStatus mstatus = members[i].status; + + Assert(!TransactionIdPrecedes(xid, cutoffs->relfrozenxid)); + + if (!ISUPDATE_from_mxstatus(mstatus)) + { + /* + * Locker XID (not updater XID). We only keep lockers that are + * still running. + */ + if (TransactionIdIsCurrentTransactionId(xid) || + TransactionIdIsInProgress(xid)) + { + if (TransactionIdPrecedes(xid, cutoffs->OldestXmin)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u contains running locker XID %u from before removable cutoff %u", + multi, xid, + cutoffs->OldestXmin))); + newmembers[nnewmembers++] = members[i]; + has_lockers = true; + } + + continue; + } + + /* + * Updater XID (not locker XID). Should we keep it? + * + * Since the tuple wasn't totally removed when vacuum pruned, the + * update Xid cannot possibly be older than OldestXmin cutoff unless + * the updater XID aborted. If the updater transaction is known + * aborted or crashed then it's okay to ignore it, otherwise not. + * + * In any case the Multi should never contain two updaters, whatever + * their individual commit status. Check for that first, in passing. + */ + if (TransactionIdIsValid(update_xid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u has two or more updating members", + multi), + errdetail_internal("First updater XID=%u second updater XID=%u.", + update_xid, xid))); + + /* + * As with all tuple visibility routines, it's critical to test + * TransactionIdIsInProgress before TransactionIdDidCommit, because of + * race conditions explained in detail in heapam_visibility.c. + */ + if (TransactionIdIsCurrentTransactionId(xid) || + TransactionIdIsInProgress(xid)) + update_xid = xid; + else if (TransactionIdDidCommit(xid)) + { + /* + * The transaction committed, so we can tell caller to set + * HEAP_XMAX_COMMITTED. (We can only do this because we know the + * transaction is not running.) + */ + update_committed = true; + update_xid = xid; + } + else + { + /* + * Not in progress, not committed -- must be aborted or crashed; + * we can ignore it. + */ + continue; + } + + /* + * We determined that updater must be kept -- add it to pending new + * members list + */ + if (TransactionIdPrecedes(xid, cutoffs->OldestXmin)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("multixact %u contains committed update XID %u from before removable cutoff %u", + multi, xid, cutoffs->OldestXmin))); + newmembers[nnewmembers++] = members[i]; + } + + pfree(members); + + /* + * Determine what to do with caller's multi based on information gathered + * during our second pass + */ + if (nnewmembers == 0) + { + /* Nothing worth keeping */ + *flags |= FRM_INVALIDATE_XMAX; + newxmax = InvalidTransactionId; + } + else if (TransactionIdIsValid(update_xid) && !has_lockers) + { + /* + * If there's a single member and it's an update, pass it back alone + * without creating a new Multi. (XXX we could do this when there's a + * single remaining locker, too, but that would complicate the API too + * much; moreover, the case with the single updater is more + * interesting, because those are longer-lived.) 
+ */ + Assert(nnewmembers == 1); + *flags |= FRM_RETURN_IS_XID; + if (update_committed) + *flags |= FRM_MARK_COMMITTED; + newxmax = update_xid; + } + else + { + /* + * Create a new multixact with the surviving members of the previous + * one, to set as new Xmax in the tuple + */ + newxmax = MultiXactIdCreateFromMembers(nnewmembers, newmembers); + *flags |= FRM_RETURN_IS_MULTI; + } + + pfree(newmembers); + + pagefrz->freeze_required = true; + return newxmax; +} + +/* + * tdeheap_prepare_freeze_tuple + * + * Check to see whether any of the XID fields of a tuple (xmin, xmax, xvac) + * are older than the OldestXmin and/or OldestMxact freeze cutoffs. If so, + * setup enough state (in the *frz output argument) to enable caller to + * process this tuple as part of freezing its page, and return true. Return + * false if nothing can be changed about the tuple right now. + * + * Also sets *totally_frozen to true if the tuple will be totally frozen once + * caller executes returned freeze plan (or if the tuple was already totally + * frozen by an earlier VACUUM). This indicates that there are no remaining + * XIDs or MultiXactIds that will need to be processed by a future VACUUM. + * + * VACUUM caller must assemble HeapTupleFreeze freeze plan entries for every + * tuple that we returned true for, and then execute freezing. Caller must + * initialize pagefrz fields for page as a whole before first call here for + * each heap page. + * + * VACUUM caller decides on whether or not to freeze the page as a whole. + * We'll often prepare freeze plans for a page that caller just discards. + * However, VACUUM doesn't always get to make a choice; it must freeze when + * pagefrz.freeze_required is set, to ensure that any XIDs < FreezeLimit (and + * MXIDs < MultiXactCutoff) can never be left behind. We help to make sure + * that VACUUM always follows that rule. + * + * We sometimes force freezing of xmax MultiXactId values long before it is + * strictly necessary to do so just to ensure the FreezeLimit postcondition. + * It's worth processing MultiXactIds proactively when it is cheap to do so, + * and it's convenient to make that happen by piggy-backing it on the "force + * freezing" mechanism. Conversely, we sometimes delay freezing MultiXactIds + * because it is expensive right now (though only when it's still possible to + * do so without violating the FreezeLimit/MultiXactCutoff postcondition). + * + * It is assumed that the caller has checked the tuple with + * HeapTupleSatisfiesVacuum() and determined that it is not HEAPTUPLE_DEAD + * (else we should be removing the tuple, not freezing it). + * + * NB: This function has side effects: it might allocate a new MultiXactId. + * It will be set as tuple's new xmax when our *frz output is processed within + * tdeheap_execute_freeze_tuple later on. If the tuple is in a shared buffer + * then caller had better have an exclusive lock on it already. 
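+ *
+ * A minimal caller sketch (hypothetical local names; VACUUM's real loop
+ * carries more state):
+ *
+ * if (tdeheap_prepare_freeze_tuple(htup, cutoffs, &pagefrz,
+ * &frz[nfrozen], &totally_frozen))
+ * frz[nfrozen++].offset = offnum;
+ *
+ * with tdeheap_pre_freeze_checks and tdeheap_freeze_prepared_tuples run
+ * afterwards if the page is actually frozen.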
+ */ +bool +tdeheap_prepare_freeze_tuple(HeapTupleHeader tuple, + const struct VacuumCutoffs *cutoffs, + HeapPageFreeze *pagefrz, + HeapTupleFreeze *frz, bool *totally_frozen) +{ + bool xmin_already_frozen = false, + xmax_already_frozen = false; + bool freeze_xmin = false, + replace_xvac = false, + replace_xmax = false, + freeze_xmax = false; + TransactionId xid; + + frz->xmax = HeapTupleHeaderGetRawXmax(tuple); + frz->t_infomask2 = tuple->t_infomask2; + frz->t_infomask = tuple->t_infomask; + frz->frzflags = 0; + frz->checkflags = 0; + + /* + * Process xmin, while keeping track of whether it's already frozen, or + * will become frozen iff our freeze plan is executed by caller (could be + * neither). + */ + xid = HeapTupleHeaderGetXmin(tuple); + if (!TransactionIdIsNormal(xid)) + xmin_already_frozen = true; + else + { + if (TransactionIdPrecedes(xid, cutoffs->relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found xmin %u from before relfrozenxid %u", + xid, cutoffs->relfrozenxid))); + + /* Will set freeze_xmin flags in freeze plan below */ + freeze_xmin = TransactionIdPrecedes(xid, cutoffs->OldestXmin); + + /* Verify that xmin committed if and when freeze plan is executed */ + if (freeze_xmin) + frz->checkflags |= HEAP_FREEZE_CHECK_XMIN_COMMITTED; + } + + /* + * Old-style VACUUM FULL is gone, but we have to process xvac for as long + * as we support having MOVED_OFF/MOVED_IN tuples in the database + */ + xid = HeapTupleHeaderGetXvac(tuple); + if (TransactionIdIsNormal(xid)) + { + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + Assert(TransactionIdPrecedes(xid, cutoffs->OldestXmin)); + + /* + * For Xvac, we always freeze proactively. This allows totally_frozen + * tracking to ignore xvac. + */ + replace_xvac = pagefrz->freeze_required = true; + + /* Will set replace_xvac flags in freeze plan below */ + } + + /* Now process xmax */ + xid = frz->xmax; + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + /* Raw xmax is a MultiXactId */ + TransactionId newxmax; + uint16 flags; + + /* + * We will either remove xmax completely (in the "freeze_xmax" path), + * process xmax by replacing it (in the "replace_xmax" path), or + * perform no-op xmax processing. The only constraint is that the + * FreezeLimit/MultiXactCutoff postcondition must never be violated. + */ + newxmax = FreezeMultiXactId(xid, tuple->t_infomask, cutoffs, + &flags, pagefrz); + + if (flags & FRM_NOOP) + { + /* + * xmax is a MultiXactId, and nothing about it changes for now. + * This is the only case where 'freeze_required' won't have been + * set for us by FreezeMultiXactId, as well as the only case where + * neither freeze_xmax nor replace_xmax are set (given a multi). + * + * This is a no-op, but the call to FreezeMultiXactId might have + * ratcheted back NewRelfrozenXid and/or NewRelminMxid trackers + * for us (the "freeze page" variants, specifically). That'll + * make it safe for our caller to freeze the page later on, while + * leaving this particular xmax undisturbed. + * + * FreezeMultiXactId is _not_ responsible for the "no freeze" + * NewRelfrozenXid/NewRelminMxid trackers, though -- that's our + * job. A call to tdeheap_tuple_should_freeze for this same tuple + * will take place below if 'freeze_required' isn't set already. + * (This repeats work from FreezeMultiXactId, but allows "no + * freeze" tracker maintenance to happen in only one place.) 
+ */ + Assert(!MultiXactIdPrecedes(newxmax, cutoffs->MultiXactCutoff)); + Assert(MultiXactIdIsValid(newxmax) && xid == newxmax); + } + else if (flags & FRM_RETURN_IS_XID) + { + /* + * xmax will become an updater Xid (original MultiXact's updater + * member Xid will be carried forward as a simple Xid in Xmax). + */ + Assert(!TransactionIdPrecedes(newxmax, cutoffs->OldestXmin)); + + /* + * NB -- some of these transformations are only valid because we + * know the return Xid is a tuple updater (i.e. not merely a + * locker.) Also note that the only reason we don't explicitly + * worry about HEAP_KEYS_UPDATED is because it lives in + * t_infomask2 rather than t_infomask. + */ + frz->t_infomask &= ~HEAP_XMAX_BITS; + frz->xmax = newxmax; + if (flags & FRM_MARK_COMMITTED) + frz->t_infomask |= HEAP_XMAX_COMMITTED; + replace_xmax = true; + } + else if (flags & FRM_RETURN_IS_MULTI) + { + uint16 newbits; + uint16 newbits2; + + /* + * xmax is an old MultiXactId that we have to replace with a new + * MultiXactId, to carry forward two or more original member XIDs. + */ + Assert(!MultiXactIdPrecedes(newxmax, cutoffs->OldestMxact)); + + /* + * We can't use GetMultiXactIdHintBits directly on the new multi + * here; that routine initializes the masks to all zeroes, which + * would lose other bits we need. Doing it this way ensures all + * unrelated bits remain untouched. + */ + frz->t_infomask &= ~HEAP_XMAX_BITS; + frz->t_infomask2 &= ~HEAP_KEYS_UPDATED; + GetMultiXactIdHintBits(newxmax, &newbits, &newbits2); + frz->t_infomask |= newbits; + frz->t_infomask2 |= newbits2; + frz->xmax = newxmax; + replace_xmax = true; + } + else + { + /* + * Freeze plan for tuple "freezes xmax" in the strictest sense: + * it'll leave nothing in xmax (neither an Xid nor a MultiXactId). + */ + Assert(flags & FRM_INVALIDATE_XMAX); + Assert(!TransactionIdIsValid(newxmax)); + + /* Will set freeze_xmax flags in freeze plan below */ + freeze_xmax = true; + } + + /* MultiXactId processing forces freezing (barring FRM_NOOP case) */ + Assert(pagefrz->freeze_required || (!freeze_xmax && !replace_xmax)); + } + else if (TransactionIdIsNormal(xid)) + { + /* Raw xmax is normal XID */ + if (TransactionIdPrecedes(xid, cutoffs->relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found xmax %u from before relfrozenxid %u", + xid, cutoffs->relfrozenxid))); + + /* Will set freeze_xmax flags in freeze plan below */ + freeze_xmax = TransactionIdPrecedes(xid, cutoffs->OldestXmin); + + /* + * Verify that xmax aborted if and when freeze plan is executed, + * provided it's from an update. (A lock-only xmax can be removed + * independent of this, since the lock is released at xact end.) + */ + if (freeze_xmax && !HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + frz->checkflags |= HEAP_FREEZE_CHECK_XMAX_ABORTED; + } + else if (!TransactionIdIsValid(xid)) + { + /* Raw xmax is InvalidTransactionId XID */ + Assert((tuple->t_infomask & HEAP_XMAX_IS_MULTI) == 0); + xmax_already_frozen = true; + } + else + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found raw xmax %u (infomask 0x%04x) not invalid and not multi", + xid, tuple->t_infomask))); + + if (freeze_xmin) + { + Assert(!xmin_already_frozen); + + frz->t_infomask |= HEAP_XMIN_FROZEN; + } + if (replace_xvac) + { + /* + * If a MOVED_OFF tuple is not dead, the xvac transaction must have + * failed; whereas a non-dead MOVED_IN tuple must mean the xvac + * transaction succeeded. 
+ */ + Assert(pagefrz->freeze_required); + if (tuple->t_infomask & HEAP_MOVED_OFF) + frz->frzflags |= XLH_INVALID_XVAC; + else + frz->frzflags |= XLH_FREEZE_XVAC; + } + if (replace_xmax) + { + Assert(!xmax_already_frozen && !freeze_xmax); + Assert(pagefrz->freeze_required); + + /* Already set replace_xmax flags in freeze plan earlier */ + } + if (freeze_xmax) + { + Assert(!xmax_already_frozen && !replace_xmax); + + frz->xmax = InvalidTransactionId; + + /* + * The tuple might be marked either XMAX_INVALID or XMAX_COMMITTED + + * LOCKED. Normalize to INVALID just to be sure no one gets confused. + * Also get rid of the HEAP_KEYS_UPDATED bit. + */ + frz->t_infomask &= ~HEAP_XMAX_BITS; + frz->t_infomask |= HEAP_XMAX_INVALID; + frz->t_infomask2 &= ~HEAP_HOT_UPDATED; + frz->t_infomask2 &= ~HEAP_KEYS_UPDATED; + } + + /* + * Determine if this tuple is already totally frozen, or will become + * totally frozen (provided caller executes freeze plans for the page) + */ + *totally_frozen = ((freeze_xmin || xmin_already_frozen) && + (freeze_xmax || xmax_already_frozen)); + + if (!pagefrz->freeze_required && !(xmin_already_frozen && + xmax_already_frozen)) + { + /* + * So far no previous tuple from the page made freezing mandatory. + * Does this tuple force caller to freeze the entire page? + */ + pagefrz->freeze_required = + tdeheap_tuple_should_freeze(tuple, cutoffs, + &pagefrz->NoFreezePageRelfrozenXid, + &pagefrz->NoFreezePageRelminMxid); + } + + /* Tell caller if this tuple has a usable freeze plan set in *frz */ + return freeze_xmin || replace_xvac || replace_xmax || freeze_xmax; +} + +/* + * tdeheap_execute_freeze_tuple + * Execute the prepared freezing of a tuple with caller's freeze plan. + * + * Caller is responsible for ensuring that no other backend can access the + * storage underlying this tuple, either by holding an exclusive lock on the + * buffer containing it (which is what lazy VACUUM does), or by having it be + * in private storage (which is what CLUSTER and friends do). + */ +static inline void +tdeheap_execute_freeze_tuple(HeapTupleHeader tuple, HeapTupleFreeze *frz) +{ + HeapTupleHeaderSetXmax(tuple, frz->xmax); + + if (frz->frzflags & XLH_FREEZE_XVAC) + HeapTupleHeaderSetXvac(tuple, FrozenTransactionId); + + if (frz->frzflags & XLH_INVALID_XVAC) + HeapTupleHeaderSetXvac(tuple, InvalidTransactionId); + + tuple->t_infomask = frz->t_infomask; + tuple->t_infomask2 = frz->t_infomask2; +} + +/* + * Perform xmin/xmax XID status sanity checks before actually executing freeze + * plans. + * + * tdeheap_prepare_freeze_tuple doesn't perform these checks directly because + * pg_xact lookups are relatively expensive. They shouldn't be repeated by + * successive VACUUMs that each decide against freezing the same page. 
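+ *
+ * The expected call order for a page being frozen is, in sketch form:
+ *
+ * tdeheap_pre_freeze_checks(buffer, frozen, nfrozen);
+ * START_CRIT_SECTION();
+ * tdeheap_freeze_prepared_tuples(buffer, frozen, nfrozen);
+ * MarkBufferDirty(buffer);
+ * (emit WAL if the relation needs it)
+ * END_CRIT_SECTION();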
+ */ +void +tdeheap_pre_freeze_checks(Buffer buffer, + HeapTupleFreeze *tuples, int ntuples) +{ + Page page = BufferGetPage(buffer); + + for (int i = 0; i < ntuples; i++) + { + HeapTupleFreeze *frz = tuples + i; + ItemId itemid = PageGetItemId(page, frz->offset); + HeapTupleHeader htup; + + htup = (HeapTupleHeader) PageGetItem(page, itemid); + + /* Deliberately avoid relying on tuple hint bits here */ + if (frz->checkflags & HEAP_FREEZE_CHECK_XMIN_COMMITTED) + { + TransactionId xmin = HeapTupleHeaderGetRawXmin(htup); + + Assert(!HeapTupleHeaderXminFrozen(htup)); + if (unlikely(!TransactionIdDidCommit(xmin))) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("uncommitted xmin %u needs to be frozen", + xmin))); + } + + /* + * TransactionIdDidAbort won't work reliably in the presence of XIDs + * left behind by transactions that were in progress during a crash, + * so we can only check that xmax didn't commit + */ + if (frz->checkflags & HEAP_FREEZE_CHECK_XMAX_ABORTED) + { + TransactionId xmax = HeapTupleHeaderGetRawXmax(htup); + + Assert(TransactionIdIsNormal(xmax)); + if (unlikely(TransactionIdDidCommit(xmax))) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("cannot freeze committed xmax %u", + xmax))); + } + } +} + +/* + * Helper which executes freezing of one or more heap tuples on a page on + * behalf of caller. Caller passes an array of tuple plans from + * tdeheap_prepare_freeze_tuple. Caller must set 'offset' in each plan for us. + * Must be called in a critical section that also marks the buffer dirty and, + * if needed, emits WAL. + */ +void +tdeheap_freeze_prepared_tuples(Buffer buffer, HeapTupleFreeze *tuples, int ntuples) +{ + Page page = BufferGetPage(buffer); + + for (int i = 0; i < ntuples; i++) + { + HeapTupleFreeze *frz = tuples + i; + ItemId itemid = PageGetItemId(page, frz->offset); + HeapTupleHeader htup; + + htup = (HeapTupleHeader) PageGetItem(page, itemid); + tdeheap_execute_freeze_tuple(htup, frz); + } +} + +/* + * tdeheap_freeze_tuple + * Freeze tuple in place, without WAL logging. + * + * Useful for callers like CLUSTER that perform their own WAL logging. + */ +bool +tdeheap_freeze_tuple(HeapTupleHeader tuple, + TransactionId relfrozenxid, TransactionId relminmxid, + TransactionId FreezeLimit, TransactionId MultiXactCutoff) +{ + HeapTupleFreeze frz; + bool do_freeze; + bool totally_frozen; + struct VacuumCutoffs cutoffs; + HeapPageFreeze pagefrz; + + cutoffs.relfrozenxid = relfrozenxid; + cutoffs.relminmxid = relminmxid; + cutoffs.OldestXmin = FreezeLimit; + cutoffs.OldestMxact = MultiXactCutoff; + cutoffs.FreezeLimit = FreezeLimit; + cutoffs.MultiXactCutoff = MultiXactCutoff; + + pagefrz.freeze_required = true; + pagefrz.FreezePageRelfrozenXid = FreezeLimit; + pagefrz.FreezePageRelminMxid = MultiXactCutoff; + pagefrz.NoFreezePageRelfrozenXid = FreezeLimit; + pagefrz.NoFreezePageRelminMxid = MultiXactCutoff; + + do_freeze = tdeheap_prepare_freeze_tuple(tuple, &cutoffs, + &pagefrz, &frz, &totally_frozen); + + /* + * Note that because this is not a WAL-logged operation, we don't need to + * fill in the offset in the freeze record. + */ + + if (do_freeze) + tdeheap_execute_freeze_tuple(tuple, &frz); + return do_freeze; +} + +/* + * For a given MultiXactId, return the hint bits that should be set in the + * tuple's infomask. + * + * Normally this should be called for a multixact that was just created, and + * so is on our local cache, so the GetMembers call is fast. 
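+ *
+ * For example (illustrative): a multi holding a key-share locker plus a
+ * no-key updater yields HEAP_XMAX_IS_MULTI | HEAP_XMAX_EXCL_LOCK (the
+ * strongest member mode is LockTupleNoKeyExclusive, and has_update
+ * suppresses HEAP_XMAX_LOCK_ONLY) with nothing set in *new_infomask2.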
+ */ +static void +GetMultiXactIdHintBits(MultiXactId multi, uint16 *new_infomask, + uint16 *new_infomask2) +{ + int nmembers; + MultiXactMember *members; + int i; + uint16 bits = HEAP_XMAX_IS_MULTI; + uint16 bits2 = 0; + bool has_update = false; + LockTupleMode strongest = LockTupleKeyShare; + + /* + * We only use this in multis we just created, so they cannot be values + * pre-pg_upgrade. + */ + nmembers = GetMultiXactIdMembers(multi, &members, false, false); + + for (i = 0; i < nmembers; i++) + { + LockTupleMode mode; + + /* + * Remember the strongest lock mode held by any member of the + * multixact. + */ + mode = TUPLOCK_from_mxstatus(members[i].status); + if (mode > strongest) + strongest = mode; + + /* See what other bits we need */ + switch (members[i].status) + { + case MultiXactStatusForKeyShare: + case MultiXactStatusForShare: + case MultiXactStatusForNoKeyUpdate: + break; + + case MultiXactStatusForUpdate: + bits2 |= HEAP_KEYS_UPDATED; + break; + + case MultiXactStatusNoKeyUpdate: + has_update = true; + break; + + case MultiXactStatusUpdate: + bits2 |= HEAP_KEYS_UPDATED; + has_update = true; + break; + } + } + + if (strongest == LockTupleExclusive || + strongest == LockTupleNoKeyExclusive) + bits |= HEAP_XMAX_EXCL_LOCK; + else if (strongest == LockTupleShare) + bits |= HEAP_XMAX_SHR_LOCK; + else if (strongest == LockTupleKeyShare) + bits |= HEAP_XMAX_KEYSHR_LOCK; + + if (!has_update) + bits |= HEAP_XMAX_LOCK_ONLY; + + if (nmembers > 0) + pfree(members); + + *new_infomask = bits; + *new_infomask2 = bits2; +} + +/* + * MultiXactIdGetUpdateXid + * + * Given a multixact Xmax and corresponding infomask, which does not have the + * HEAP_XMAX_LOCK_ONLY bit set, obtain and return the Xid of the updating + * transaction. + * + * Caller is expected to check the status of the updating transaction, if + * necessary. + */ +static TransactionId +MultiXactIdGetUpdateXid(TransactionId xmax, uint16 t_infomask) +{ + TransactionId update_xact = InvalidTransactionId; + MultiXactMember *members; + int nmembers; + + Assert(!(t_infomask & HEAP_XMAX_LOCK_ONLY)); + Assert(t_infomask & HEAP_XMAX_IS_MULTI); + + /* + * Since we know the LOCK_ONLY bit is not set, this cannot be a multi from + * pre-pg_upgrade. + */ + nmembers = GetMultiXactIdMembers(xmax, &members, false, false); + + if (nmembers > 0) + { + int i; + + for (i = 0; i < nmembers; i++) + { + /* Ignore lockers */ + if (!ISUPDATE_from_mxstatus(members[i].status)) + continue; + + /* there can be at most one updater */ + Assert(update_xact == InvalidTransactionId); + update_xact = members[i].xid; +#ifndef USE_ASSERT_CHECKING + + /* + * in an assert-enabled build, walk the whole array to ensure + * there's no other updater. + */ + break; +#endif + } + + pfree(members); + } + + return update_xact; +} + +/* + * HeapTupleGetUpdateXid + * As above, but use a HeapTupleHeader + * + * See also HeapTupleHeaderGetUpdateXid, which can be used without previously + * checking the hint bits. + */ +TransactionId +HeapTupleGetUpdateXid(HeapTupleHeader tuple) +{ + return MultiXactIdGetUpdateXid(HeapTupleHeaderGetRawXmax(tuple), + tuple->t_infomask); +} + +/* + * Does the given multixact conflict with the current transaction grabbing a + * tuple lock of the given strength? + * + * The passed infomask pairs up with the given multixact in the tuple header. + * + * If current_is_member is not NULL, it is set to 'true' if the current + * transaction is a member of the given multixact. 
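+ *
+ * For example (per the logic below): a live key-share locker from another
+ * backend conflicts with a caller that wants LockTupleExclusive, so we
+ * return true. If that same locker is our own transaction, it is skipped
+ * instead, and we return false while setting *current_is_member.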
+ */
+static bool
+DoesMultiXactIdConflict(MultiXactId multi, uint16 infomask,
+ LockTupleMode lockmode, bool *current_is_member)
+{
+ int nmembers;
+ MultiXactMember *members;
+ bool result = false;
+ LOCKMODE wanted = tupleLockExtraInfo[lockmode].hwlock;
+
+ if (HEAP_LOCKED_UPGRADED(infomask))
+ return false;
+
+ nmembers = GetMultiXactIdMembers(multi, &members, false,
+ HEAP_XMAX_IS_LOCKED_ONLY(infomask));
+ if (nmembers >= 0)
+ {
+ int i;
+
+ for (i = 0; i < nmembers; i++)
+ {
+ TransactionId memxid;
+ LOCKMODE memlockmode;
+
+ if (result && (current_is_member == NULL || *current_is_member))
+ break;
+
+ memlockmode = LOCKMODE_from_mxstatus(members[i].status);
+
+ /* ignore members from current xact (but track their presence) */
+ memxid = members[i].xid;
+ if (TransactionIdIsCurrentTransactionId(memxid))
+ {
+ if (current_is_member != NULL)
+ *current_is_member = true;
+ continue;
+ }
+ else if (result)
+ continue;
+
+ /* ignore members that don't conflict with the lock we want */
+ if (!DoLockModesConflict(memlockmode, wanted))
+ continue;
+
+ if (ISUPDATE_from_mxstatus(members[i].status))
+ {
+ /* ignore aborted updaters */
+ if (TransactionIdDidAbort(memxid))
+ continue;
+ }
+ else
+ {
+ /* ignore lockers-only that are no longer in progress */
+ if (!TransactionIdIsInProgress(memxid))
+ continue;
+ }
+
+ /*
+ * Whatever remains are either live lockers that conflict with our
+ * wanted lock, or updaters that are not aborted. Those conflict
+ * with what we want. Set up to return true, but keep going to
+ * look for the current transaction among the multixact members,
+ * if needed.
+ */
+ result = true;
+ }
+ pfree(members);
+ }
+
+ return result;
+}
+
+/*
+ * Do_MultiXactIdWait
+ * Actual implementation for the two functions below.
+ *
+ * 'multi', 'status' and 'infomask' indicate what to sleep on (the status is
+ * needed to ensure we only sleep on conflicting members, and the infomask is
+ * used to optimize multixact access in case it's a lock-only multi); 'nowait'
+ * indicates whether to use conditional lock acquisition, to allow callers to
+ * fail if lock is unavailable. 'rel', 'ctid' and 'oper' are used to set up
+ * context information for error messages. 'remaining', if not NULL, receives
+ * the number of members that are still running, including any (non-aborted)
+ * subtransactions of our own transaction.
+ *
+ * We do this by sleeping on each member using XactLockTableWait. Any
+ * members that belong to the current backend are *not* waited for, however;
+ * this would not merely be useless but would lead to Assert failure inside
+ * XactLockTableWait. By the time this returns, it is certain that all
+ * transactions *of other backends* that were members of the MultiXactId
+ * that conflict with the requested status are dead (and no new ones can have
+ * been added, since it is not legal to add members to an existing
+ * MultiXactId).
+ *
+ * But by the time we finish sleeping, someone else may have changed the Xmax
+ * of the containing tuple, so the caller needs to iterate on us somehow.
+ *
+ * Note that in case we return false, the number of remaining members is
+ * not to be trusted.
+ */
+static bool
+Do_MultiXactIdWait(MultiXactId multi, MultiXactStatus status,
+ uint16 infomask, bool nowait,
+ Relation rel, ItemPointer ctid, XLTW_Oper oper,
+ int *remaining)
+{
+ bool result = true;
+ MultiXactMember *members;
+ int nmembers;
+ int remain = 0;
+
+ /* for pre-pg_upgrade tuples, no need to sleep at all */
+ nmembers = HEAP_LOCKED_UPGRADED(infomask) ? -1 :
+ GetMultiXactIdMembers(multi, &members, false,
+ HEAP_XMAX_IS_LOCKED_ONLY(infomask));
+
+ if (nmembers >= 0)
+ {
+ int i;
+
+ for (i = 0; i < nmembers; i++)
+ {
+ TransactionId memxid = members[i].xid;
+ MultiXactStatus memstatus = members[i].status;
+
+ if (TransactionIdIsCurrentTransactionId(memxid))
+ {
+ remain++;
+ continue;
+ }
+
+ if (!DoLockModesConflict(LOCKMODE_from_mxstatus(memstatus),
+ LOCKMODE_from_mxstatus(status)))
+ {
+ if (remaining && TransactionIdIsInProgress(memxid))
+ remain++;
+ continue;
+ }
+
+ /*
+ * This member conflicts with our multi, so we have to sleep (or
+ * return failure, if asked to avoid waiting).
+ *
+ * Note that we don't set up an error context callback ourselves,
+ * but instead we pass the info down to XactLockTableWait. This
+ * might seem a bit wasteful because the context is set up and
+ * torn down for each member of the multixact, but in reality it
+ * should be barely noticeable, and it avoids duplicate code.
+ */
+ if (nowait)
+ {
+ result = ConditionalXactLockTableWait(memxid);
+ if (!result)
+ break;
+ }
+ else
+ XactLockTableWait(memxid, rel, ctid, oper);
+ }
+
+ pfree(members);
+ }
+
+ if (remaining)
+ *remaining = remain;
+
+ return result;
+}
+
+/*
+ * MultiXactIdWait
+ * Sleep on a MultiXactId.
+ *
+ * By the time we finish sleeping, someone else may have changed the Xmax
+ * of the containing tuple, so the caller needs to iterate on us somehow.
+ *
+ * We return (in *remaining, if not NULL) the number of members that are still
+ * running, including any (non-aborted) subtransactions of our own transaction.
+ */
+static void
+MultiXactIdWait(MultiXactId multi, MultiXactStatus status, uint16 infomask,
+ Relation rel, ItemPointer ctid, XLTW_Oper oper,
+ int *remaining)
+{
+ (void) Do_MultiXactIdWait(multi, status, infomask, false,
+ rel, ctid, oper, remaining);
+}
+
+/*
+ * ConditionalMultiXactIdWait
+ * As above, but only lock if we can get the lock without blocking.
+ *
+ * By the time we finish sleeping, someone else may have changed the Xmax
+ * of the containing tuple, so the caller needs to iterate on us somehow.
+ *
+ * If the multixact is now all gone, return true. Returns false if some
+ * transactions might still be running.
+ *
+ * We return (in *remaining, if not NULL) the number of members that are still
+ * running, including any (non-aborted) subtransactions of our own transaction.
+ */
+static bool
+ConditionalMultiXactIdWait(MultiXactId multi, MultiXactStatus status,
+ uint16 infomask, Relation rel, int *remaining)
+{
+ return Do_MultiXactIdWait(multi, status, infomask, true,
+ rel, NULL, XLTW_None, remaining);
+}
+
+/*
+ * tdeheap_tuple_needs_eventual_freeze
+ *
+ * Check to see whether any of the XID fields of a tuple (xmin, xmax, xvac)
+ * will eventually require freezing (if tuple isn't removed by pruning first).
+ */
+bool
+tdeheap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
+{
+ TransactionId xid;
+
+ /*
+ * If xmin is a normal transaction ID, this tuple is definitely not
+ * frozen.
+ */
+ xid = HeapTupleHeaderGetXmin(tuple);
+ if (TransactionIdIsNormal(xid))
+ return true;
+
+ /*
+ * If xmax is a valid xact or multixact, this tuple is also not frozen.
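+ * (Put differently: only a tuple whose xmin is already frozen and whose
+ * xmax and xvac fields carry no normal XID or valid multixact gets a
+ * "false" from this function.)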
+ */ + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + MultiXactId multi; + + multi = HeapTupleHeaderGetRawXmax(tuple); + if (MultiXactIdIsValid(multi)) + return true; + } + else + { + xid = HeapTupleHeaderGetRawXmax(tuple); + if (TransactionIdIsNormal(xid)) + return true; + } + + if (tuple->t_infomask & HEAP_MOVED) + { + xid = HeapTupleHeaderGetXvac(tuple); + if (TransactionIdIsNormal(xid)) + return true; + } + + return false; +} + +/* + * tdeheap_tuple_should_freeze + * + * Return value indicates if tdeheap_prepare_freeze_tuple sibling function would + * (or should) force freezing of the heap page that contains caller's tuple. + * Tuple header XIDs/MXIDs < FreezeLimit/MultiXactCutoff trigger freezing. + * This includes (xmin, xmax, xvac) fields, as well as MultiXact member XIDs. + * + * The *NoFreezePageRelfrozenXid and *NoFreezePageRelminMxid input/output + * arguments help VACUUM track the oldest extant XID/MXID remaining in rel. + * Our working assumption is that caller won't decide to freeze this tuple. + * It's up to caller to only ratchet back its own top-level trackers after the + * point that it fully commits to not freezing the tuple/page in question. + */ +bool +tdeheap_tuple_should_freeze(HeapTupleHeader tuple, + const struct VacuumCutoffs *cutoffs, + TransactionId *NoFreezePageRelfrozenXid, + MultiXactId *NoFreezePageRelminMxid) +{ + TransactionId xid; + MultiXactId multi; + bool freeze = false; + + /* First deal with xmin */ + xid = HeapTupleHeaderGetXmin(tuple); + if (TransactionIdIsNormal(xid)) + { + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + if (TransactionIdPrecedes(xid, *NoFreezePageRelfrozenXid)) + *NoFreezePageRelfrozenXid = xid; + if (TransactionIdPrecedes(xid, cutoffs->FreezeLimit)) + freeze = true; + } + + /* Now deal with xmax */ + xid = InvalidTransactionId; + multi = InvalidMultiXactId; + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + multi = HeapTupleHeaderGetRawXmax(tuple); + else + xid = HeapTupleHeaderGetRawXmax(tuple); + + if (TransactionIdIsNormal(xid)) + { + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + /* xmax is a non-permanent XID */ + if (TransactionIdPrecedes(xid, *NoFreezePageRelfrozenXid)) + *NoFreezePageRelfrozenXid = xid; + if (TransactionIdPrecedes(xid, cutoffs->FreezeLimit)) + freeze = true; + } + else if (!MultiXactIdIsValid(multi)) + { + /* xmax is a permanent XID or invalid MultiXactId/XID */ + } + else if (HEAP_LOCKED_UPGRADED(tuple->t_infomask)) + { + /* xmax is a pg_upgrade'd MultiXact, which can't have updater XID */ + if (MultiXactIdPrecedes(multi, *NoFreezePageRelminMxid)) + *NoFreezePageRelminMxid = multi; + /* tdeheap_prepare_freeze_tuple always freezes pg_upgrade'd xmax */ + freeze = true; + } + else + { + /* xmax is a MultiXactId that may have an updater XID */ + MultiXactMember *members; + int nmembers; + + Assert(MultiXactIdPrecedesOrEquals(cutoffs->relminmxid, multi)); + if (MultiXactIdPrecedes(multi, *NoFreezePageRelminMxid)) + *NoFreezePageRelminMxid = multi; + if (MultiXactIdPrecedes(multi, cutoffs->MultiXactCutoff)) + freeze = true; + + /* need to check whether any member of the mxact is old */ + nmembers = GetMultiXactIdMembers(multi, &members, false, + HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)); + + for (int i = 0; i < nmembers; i++) + { + xid = members[i].xid; + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + if (TransactionIdPrecedes(xid, *NoFreezePageRelfrozenXid)) + *NoFreezePageRelfrozenXid = xid; + if (TransactionIdPrecedes(xid, 
cutoffs->FreezeLimit)) + freeze = true; + } + if (nmembers > 0) + pfree(members); + } + + if (tuple->t_infomask & HEAP_MOVED) + { + xid = HeapTupleHeaderGetXvac(tuple); + if (TransactionIdIsNormal(xid)) + { + Assert(TransactionIdPrecedesOrEquals(cutoffs->relfrozenxid, xid)); + if (TransactionIdPrecedes(xid, *NoFreezePageRelfrozenXid)) + *NoFreezePageRelfrozenXid = xid; + /* tdeheap_prepare_freeze_tuple forces xvac freezing */ + freeze = true; + } + } + + return freeze; +} + +/* + * Maintain snapshotConflictHorizon for caller by ratcheting forward its value + * using any committed XIDs contained in 'tuple', an obsolescent heap tuple + * that caller is in the process of physically removing, e.g. via HOT pruning + * or index deletion. + * + * Caller must initialize its value to InvalidTransactionId, which is + * generally interpreted as "definitely no need for a recovery conflict". + * Final value must reflect all heap tuples that caller will physically remove + * (or remove TID references to) via its ongoing pruning/deletion operation. + * ResolveRecoveryConflictWithSnapshot() is passed the final value (taken from + * caller's WAL record) by REDO routine when it replays caller's operation. + */ +void +HeapTupleHeaderAdvanceConflictHorizon(HeapTupleHeader tuple, + TransactionId *snapshotConflictHorizon) +{ + TransactionId xmin = HeapTupleHeaderGetXmin(tuple); + TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple); + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (tuple->t_infomask & HEAP_MOVED) + { + if (TransactionIdPrecedes(*snapshotConflictHorizon, xvac)) + *snapshotConflictHorizon = xvac; + } + + /* + * Ignore tuples inserted by an aborted transaction or if the tuple was + * updated/deleted by the inserting transaction. + * + * Look for a committed hint bit, or if no xmin bit is set, check clog. + */ + if (HeapTupleHeaderXminCommitted(tuple) || + (!HeapTupleHeaderXminInvalid(tuple) && TransactionIdDidCommit(xmin))) + { + if (xmax != xmin && + TransactionIdFollows(xmax, *snapshotConflictHorizon)) + *snapshotConflictHorizon = xmax; + } +} + +#ifdef USE_PREFETCH +/* + * Helper function for tdeheap_index_delete_tuples. Issues prefetch requests for + * prefetch_count buffers. The prefetch_state keeps track of all the buffers + * we can prefetch, and which have already been prefetched; each call to this + * function picks up where the previous call left off. + * + * Note: we expect the deltids array to be sorted in an order that groups TIDs + * by heap block, with all TIDs for each block appearing together in exactly + * one group. + */ +static void +index_delete_prefetch_buffer(Relation rel, + IndexDeletePrefetchState *prefetch_state, + int prefetch_count) +{ + BlockNumber cur_hblkno = prefetch_state->cur_hblkno; + int count = 0; + int i; + int ndeltids = prefetch_state->ndeltids; + TM_IndexDelete *deltids = prefetch_state->deltids; + + for (i = prefetch_state->next_item; + i < ndeltids && count < prefetch_count; + i++) + { + ItemPointer htid = &deltids[i].tid; + + if (cur_hblkno == InvalidBlockNumber || + ItemPointerGetBlockNumber(htid) != cur_hblkno) + { + cur_hblkno = ItemPointerGetBlockNumber(htid); + PrefetchBuffer(rel, MAIN_FORKNUM, cur_hblkno); + count++; + } + } + + /* + * Save the prefetch position so that next time we can continue from that + * position. + */ + prefetch_state->next_item = i; + prefetch_state->cur_hblkno = cur_hblkno; +} +#endif + +/* + * Helper function for tdeheap_index_delete_tuples. 
Checks for index corruption + * involving an invalid TID in index AM caller's index page. + * + * This is an ideal place for these checks. The index AM must hold a buffer + * lock on the index page containing the TIDs we examine here, so we don't + * have to worry about concurrent VACUUMs at all. We can be sure that the + * index is corrupt when htid points directly to an LP_UNUSED item or + * heap-only tuple, which is not the case during standard index scans. + */ +static inline void +index_delete_check_htid(TM_IndexDeleteOp *delstate, + Page page, OffsetNumber maxoff, + ItemPointer htid, TM_IndexStatus *istatus) +{ + OffsetNumber indexpagehoffnum = ItemPointerGetOffsetNumber(htid); + ItemId iid; + + Assert(OffsetNumberIsValid(istatus->idxoffnum)); + + if (unlikely(indexpagehoffnum > maxoff)) + ereport(ERROR, + (errcode(ERRCODE_INDEX_CORRUPTED), + errmsg_internal("heap tid from index tuple (%u,%u) points past end of heap page line pointer array at offset %u of block %u in index \"%s\"", + ItemPointerGetBlockNumber(htid), + indexpagehoffnum, + istatus->idxoffnum, delstate->iblknum, + RelationGetRelationName(delstate->irel)))); + + iid = PageGetItemId(page, indexpagehoffnum); + if (unlikely(!ItemIdIsUsed(iid))) + ereport(ERROR, + (errcode(ERRCODE_INDEX_CORRUPTED), + errmsg_internal("heap tid from index tuple (%u,%u) points to unused heap page item at offset %u of block %u in index \"%s\"", + ItemPointerGetBlockNumber(htid), + indexpagehoffnum, + istatus->idxoffnum, delstate->iblknum, + RelationGetRelationName(delstate->irel)))); + + if (ItemIdHasStorage(iid)) + { + HeapTupleHeader htup; + + Assert(ItemIdIsNormal(iid)); + htup = (HeapTupleHeader) PageGetItem(page, iid); + + if (unlikely(HeapTupleHeaderIsHeapOnly(htup))) + ereport(ERROR, + (errcode(ERRCODE_INDEX_CORRUPTED), + errmsg_internal("heap tid from index tuple (%u,%u) points to heap-only tuple at offset %u of block %u in index \"%s\"", + ItemPointerGetBlockNumber(htid), + indexpagehoffnum, + istatus->idxoffnum, delstate->iblknum, + RelationGetRelationName(delstate->irel)))); + } +} + +/* + * heapam implementation of tableam's index_delete_tuples interface. + * + * This helper function is called by index AMs during index tuple deletion. + * See tableam header comments for an explanation of the interface implemented + * here and a general theory of operation. Note that each call here is either + * a simple index deletion call, or a bottom-up index deletion call. + * + * It's possible for this to generate a fair amount of I/O, since we may be + * deleting hundreds of tuples from a single index block. To amortize that + * cost to some degree, this uses prefetching and combines repeat accesses to + * the same heap block. 
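+ * For instance, because deltids is sorted by TID below, ten deletable
+ * index entries that all point into one heap block cost a single buffer
+ * read rather than ten.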
+ */ +TransactionId +tdeheap_index_delete_tuples(Relation rel, TM_IndexDeleteOp *delstate) +{ + /* Initial assumption is that earlier pruning took care of conflict */ + TransactionId snapshotConflictHorizon = InvalidTransactionId; + BlockNumber blkno = InvalidBlockNumber; + Buffer buf = InvalidBuffer; + Page page = NULL; + OffsetNumber maxoff = InvalidOffsetNumber; + TransactionId priorXmax; +#ifdef USE_PREFETCH + IndexDeletePrefetchState prefetch_state; + int prefetch_distance; +#endif + SnapshotData SnapshotNonVacuumable; + int finalndeltids = 0, + nblocksaccessed = 0; + + /* State that's only used in bottom-up index deletion case */ + int nblocksfavorable = 0; + int curtargetfreespace = delstate->bottomupfreespace, + lastfreespace = 0, + actualfreespace = 0; + bool bottomup_final_block = false; + + InitNonVacuumableSnapshot(SnapshotNonVacuumable, GlobalVisTestFor(rel)); + + /* Sort caller's deltids array by TID for further processing */ + index_delete_sort(delstate); + + /* + * Bottom-up case: resort deltids array in an order attuned to where the + * greatest number of promising TIDs are to be found, and determine how + * many blocks from the start of sorted array should be considered + * favorable. This will also shrink the deltids array in order to + * eliminate completely unfavorable blocks up front. + */ + if (delstate->bottomup) + nblocksfavorable = bottomup_sort_and_shrink(delstate); + +#ifdef USE_PREFETCH + /* Initialize prefetch state. */ + prefetch_state.cur_hblkno = InvalidBlockNumber; + prefetch_state.next_item = 0; + prefetch_state.ndeltids = delstate->ndeltids; + prefetch_state.deltids = delstate->deltids; + + /* + * Determine the prefetch distance that we will attempt to maintain. + * + * Since the caller holds a buffer lock somewhere in rel, we'd better make + * sure that isn't a catalog relation before we call code that does + * syscache lookups, to avoid risk of deadlock. + */ + if (IsCatalogRelation(rel)) + prefetch_distance = maintenance_io_concurrency; + else + prefetch_distance = + get_tablespace_maintenance_io_concurrency(rel->rd_rel->reltablespace); + + /* Cap initial prefetch distance for bottom-up deletion caller */ + if (delstate->bottomup) + { + Assert(nblocksfavorable >= 1); + Assert(nblocksfavorable <= BOTTOMUP_MAX_NBLOCKS); + prefetch_distance = Min(prefetch_distance, nblocksfavorable); + } + + /* Start prefetching. */ + index_delete_prefetch_buffer(rel, &prefetch_state, prefetch_distance); +#endif + + /* Iterate over deltids, determine which to delete, check their horizon */ + Assert(delstate->ndeltids > 0); + for (int i = 0; i < delstate->ndeltids; i++) + { + TM_IndexDelete *ideltid = &delstate->deltids[i]; + TM_IndexStatus *istatus = delstate->status + ideltid->id; + ItemPointer htid = &ideltid->tid; + OffsetNumber offnum; + + /* + * Read buffer, and perform required extra steps each time a new block + * is encountered. Avoid refetching if it's the same block as the one + * from the last htid. + */ + if (blkno == InvalidBlockNumber || + ItemPointerGetBlockNumber(htid) != blkno) + { + /* + * Consider giving up early for bottom-up index deletion caller + * first. (Only prefetch next-next block afterwards, when it + * becomes clear that we're at least going to access the next + * block in line.) + * + * Sometimes the first block frees so much space for bottom-up + * caller that the deletion process can end without accessing any + * more blocks. It is usually necessary to access 2 or 3 blocks + * per bottom-up deletion operation, though. 
+ */ + if (delstate->bottomup) + { + /* + * We often allow caller to delete a few additional items + * whose entries we reached after the point that space target + * from caller was satisfied. The cost of accessing the page + * was already paid at that point, so it made sense to finish + * it off. When that happened, we finalize everything here + * (by finishing off the whole bottom-up deletion operation + * without needlessly paying the cost of accessing any more + * blocks). + */ + if (bottomup_final_block) + break; + + /* + * Give up when we didn't enable our caller to free any + * additional space as a result of processing the page that we + * just finished up with. This rule is the main way in which + * we keep the cost of bottom-up deletion under control. + */ + if (nblocksaccessed >= 1 && actualfreespace == lastfreespace) + break; + lastfreespace = actualfreespace; /* for next time */ + + /* + * Deletion operation (which is bottom-up) will definitely + * access the next block in line. Prepare for that now. + * + * Decay target free space so that we don't hang on for too + * long with a marginal case. (Space target is only truly + * helpful when it allows us to recognize that we don't need + * to access more than 1 or 2 blocks to satisfy caller due to + * agreeable workload characteristics.) + * + * We are a bit more patient when we encounter contiguous + * blocks, though: these are treated as favorable blocks. The + * decay process is only applied when the next block in line + * is not a favorable/contiguous block. This is not an + * exception to the general rule; we still insist on finding + * at least one deletable item per block accessed. See + * bottomup_nblocksfavorable() for full details of the theory + * behind favorable blocks and heap block locality in general. + * + * Note: The first block in line is always treated as a + * favorable block, so the earliest possible point that the + * decay can be applied is just before we access the second + * block in line. The Assert() verifies this for us. + */ + Assert(nblocksaccessed > 0 || nblocksfavorable > 0); + if (nblocksfavorable > 0) + nblocksfavorable--; + else + curtargetfreespace /= 2; + } + + /* release old buffer */ + if (BufferIsValid(buf)) + UnlockReleaseBuffer(buf); + + blkno = ItemPointerGetBlockNumber(htid); + buf = ReadBuffer(rel, blkno); + nblocksaccessed++; + Assert(!delstate->bottomup || + nblocksaccessed <= BOTTOMUP_MAX_NBLOCKS); + +#ifdef USE_PREFETCH + + /* + * To maintain the prefetch distance, prefetch one more page for + * each page we read. + */ + index_delete_prefetch_buffer(rel, &prefetch_state, 1); +#endif + + LockBuffer(buf, BUFFER_LOCK_SHARE); + + page = BufferGetPage(buf); + maxoff = PageGetMaxOffsetNumber(page); + } + + /* + * In passing, detect index corruption involving an index page with a + * TID that points to a location in the heap that couldn't possibly be + * correct. We only do this with actual TIDs from caller's index page + * (not items reached by traversing through a HOT chain). + */ + index_delete_check_htid(delstate, page, maxoff, htid, istatus); + + if (istatus->knowndeletable) + Assert(!delstate->bottomup && !istatus->promising); + else + { + ItemPointerData tmp = *htid; + HeapTupleData heapTuple; + + /* Are any tuples from this HOT chain non-vacuumable? 
*/ + if (tdeheap_hot_search_buffer(&tmp, rel, buf, &SnapshotNonVacuumable, + &heapTuple, NULL, true)) + continue; /* can't delete entry */ + + /* Caller will delete, since whole HOT chain is vacuumable */ + istatus->knowndeletable = true; + + /* Maintain index free space info for bottom-up deletion case */ + if (delstate->bottomup) + { + Assert(istatus->freespace > 0); + actualfreespace += istatus->freespace; + if (actualfreespace >= curtargetfreespace) + bottomup_final_block = true; + } + } + + /* + * Maintain snapshotConflictHorizon value for deletion operation as a + * whole by advancing current value using heap tuple headers. This is + * loosely based on the logic for pruning a HOT chain. + */ + offnum = ItemPointerGetOffsetNumber(htid); + priorXmax = InvalidTransactionId; /* cannot check first XMIN */ + for (;;) + { + ItemId lp; + HeapTupleHeader htup; + + /* Sanity check (pure paranoia) */ + if (offnum < FirstOffsetNumber) + break; + + /* + * An offset past the end of page's line pointer array is possible + * when the array was truncated + */ + if (offnum > maxoff) + break; + + lp = PageGetItemId(page, offnum); + if (ItemIdIsRedirected(lp)) + { + offnum = ItemIdGetRedirect(lp); + continue; + } + + /* + * We'll often encounter LP_DEAD line pointers (especially with an + * entry marked knowndeletable by our caller up front). No heap + * tuple headers get examined for an htid that leads us to an + * LP_DEAD item. This is okay because the earlier pruning + * operation that made the line pointer LP_DEAD in the first place + * must have considered the original tuple header as part of + * generating its own snapshotConflictHorizon value. + * + * Relying on XLOG_HEAP2_PRUNE_VACUUM_SCAN records like this is + * the same strategy that index vacuuming uses in all cases. Index + * VACUUM WAL records don't even have a snapshotConflictHorizon + * field of their own for this reason. + */ + if (!ItemIdIsNormal(lp)) + break; + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* + * Check the tuple XMIN against prior XMAX, if any + */ + if (TransactionIdIsValid(priorXmax) && + !TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax)) + break; + + HeapTupleHeaderAdvanceConflictHorizon(htup, + &snapshotConflictHorizon); + + /* + * If the tuple is not HOT-updated, then we are at the end of this + * HOT-chain. No need to visit later tuples from the same update + * chain (they get their own index entries) -- just move on to + * next htid from index AM caller. + */ + if (!HeapTupleHeaderIsHotUpdated(htup)) + break; + + /* Advance to next HOT chain member */ + Assert(ItemPointerGetBlockNumber(&htup->t_ctid) == blkno); + offnum = ItemPointerGetOffsetNumber(&htup->t_ctid); + priorXmax = HeapTupleHeaderGetUpdateXid(htup); + } + + /* Enable further/final shrinking of deltids for caller */ + finalndeltids = i + 1; + } + + UnlockReleaseBuffer(buf); + + /* + * Shrink deltids array to exclude non-deletable entries at the end. This + * is not just a minor optimization. Final deltids array size might be + * zero for a bottom-up caller. Index AM is explicitly allowed to rely on + * ndeltids being zero in all cases with zero total deletable entries. 
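+ *
+ * (finalndeltids only ever advances to i + 1 for entries the loop above
+ * actually examined, so a bottom-up caller that gave up early reports
+ * just the prefix it processed.)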
+ */ + Assert(finalndeltids > 0 || delstate->bottomup); + delstate->ndeltids = finalndeltids; + + return snapshotConflictHorizon; +} + +/* + * Specialized inlineable comparison function for index_delete_sort() + */ +static inline int +index_delete_sort_cmp(TM_IndexDelete *deltid1, TM_IndexDelete *deltid2) +{ + ItemPointer tid1 = &deltid1->tid; + ItemPointer tid2 = &deltid2->tid; + + { + BlockNumber blk1 = ItemPointerGetBlockNumber(tid1); + BlockNumber blk2 = ItemPointerGetBlockNumber(tid2); + + if (blk1 != blk2) + return (blk1 < blk2) ? -1 : 1; + } + { + OffsetNumber pos1 = ItemPointerGetOffsetNumber(tid1); + OffsetNumber pos2 = ItemPointerGetOffsetNumber(tid2); + + if (pos1 != pos2) + return (pos1 < pos2) ? -1 : 1; + } + + Assert(false); + + return 0; +} + +/* + * Sort deltids array from delstate by TID. This prepares it for further + * processing by tdeheap_index_delete_tuples(). + * + * This operation becomes a noticeable consumer of CPU cycles with some + * workloads, so we go to the trouble of specialization/micro optimization. + * We use shellsort for this because it's easy to specialize, compiles to + * relatively few instructions, and is adaptive to presorted inputs/subsets + * (which are typical here). + */ +static void +index_delete_sort(TM_IndexDeleteOp *delstate) +{ + TM_IndexDelete *deltids = delstate->deltids; + int ndeltids = delstate->ndeltids; + int low = 0; + + /* + * Shellsort gap sequence (taken from Sedgewick-Incerpi paper). + * + * This implementation is fast with array sizes up to ~4500. This covers + * all supported BLCKSZ values. + */ + const int gaps[9] = {1968, 861, 336, 112, 48, 21, 7, 3, 1}; + + /* Think carefully before changing anything here -- keep swaps cheap */ + StaticAssertDecl(sizeof(TM_IndexDelete) <= 8, + "element size exceeds 8 bytes"); + + for (int g = 0; g < lengthof(gaps); g++) + { + for (int hi = gaps[g], i = low + hi; i < ndeltids; i++) + { + TM_IndexDelete d = deltids[i]; + int j = i; + + while (j >= hi && index_delete_sort_cmp(&deltids[j - hi], &d) >= 0) + { + deltids[j] = deltids[j - hi]; + j -= hi; + } + deltids[j] = d; + } + } +} + +/* + * Returns how many blocks should be considered favorable/contiguous for a + * bottom-up index deletion pass. This is a number of heap blocks that starts + * from and includes the first block in line. + * + * There is always at least one favorable block during bottom-up index + * deletion. In the worst case (i.e. with totally random heap blocks) the + * first block in line (the only favorable block) can be thought of as a + * degenerate array of contiguous blocks that consists of a single block. + * tdeheap_index_delete_tuples() will expect this. + * + * Caller passes blockgroups, a description of the final order that deltids + * will be sorted in for tdeheap_index_delete_tuples() bottom-up index deletion + * processing. Note that deltids need not actually be sorted just yet (caller + * only passes deltids to us so that we can interpret blockgroups). + * + * You might guess that the existence of contiguous blocks cannot matter much, + * since in general the main factor that determines which blocks we visit is + * the number of promising TIDs, which is a fixed hint from the index AM. + * We're not really targeting the general case, though -- the actual goal is + * to adapt our behavior to a wide variety of naturally occurring conditions. + * The effects of most of the heuristics we apply are only noticeable in the + * aggregate, over time and across many _related_ bottom-up index deletion + * passes. 
+ * + * Deeming certain blocks favorable allows heapam to recognize and adapt to + * workloads where heap blocks visited during bottom-up index deletion can be + * accessed contiguously, in the sense that each newly visited block is the + * neighbor of the block that bottom-up deletion just finished processing (or + * close enough to it). It will likely be cheaper to access more favorable + * blocks sooner rather than later (e.g. in this pass, not across a series of + * related bottom-up passes). Either way it is probably only a matter of time + * (or a matter of further correlated version churn) before all blocks that + * appear together as a single large batch of favorable blocks get accessed by + * _some_ bottom-up pass. Large batches of favorable blocks tend to either + * appear almost constantly or not even once (it all depends on per-index + * workload characteristics). + * + * Note that the blockgroups sort order applies a power-of-two bucketing + * scheme that creates opportunities for contiguous groups of blocks to get + * batched together, at least with workloads that are naturally amenable to + * being driven by heap block locality. This doesn't just enhance the spatial + * locality of bottom-up heap block processing in the obvious way. It also + * enables temporal locality of access, since sorting by heap block number + * naturally tends to make the bottom-up processing order deterministic. + * + * Consider the following example to get a sense of how temporal locality + * might matter: There is a heap relation with several indexes, each of which + * is low to medium cardinality. It is subject to constant non-HOT updates. + * The updates are skewed (in one part of the primary key, perhaps). None of + * the indexes are logically modified by the UPDATE statements (if they were + * then bottom-up index deletion would not be triggered in the first place). + * Naturally, each new round of index tuples (for each heap tuple that gets a + * tdeheap_update() call) will have the same heap TID in each and every index. + * Since these indexes are low cardinality and never get logically modified, + * heapam processing during bottom-up deletion passes will access heap blocks + * in approximately sequential order. Temporal locality of access occurs due + * to bottom-up deletion passes behaving very similarly across each of the + * indexes at any given moment. This keeps the number of buffer misses needed + * to visit heap blocks to a minimum. + */ +static int +bottomup_nblocksfavorable(IndexDeleteCounts *blockgroups, int nblockgroups, + TM_IndexDelete *deltids) +{ + int64 lastblock = -1; + int nblocksfavorable = 0; + + Assert(nblockgroups >= 1); + Assert(nblockgroups <= BOTTOMUP_MAX_NBLOCKS); + + /* + * We tolerate heap blocks that will be accessed only slightly out of + * physical order. Small blips occur when a pair of almost-contiguous + * blocks happen to fall into different buckets (perhaps due only to a + * small difference in npromisingtids that the bucketing scheme didn't + * quite manage to ignore). We effectively ignore these blips by applying + * a small tolerance. The precise tolerance we use is a little arbitrary, + * but it works well enough in practice. 
+ */ + for (int b = 0; b < nblockgroups; b++) + { + IndexDeleteCounts *group = blockgroups + b; + TM_IndexDelete *firstdtid = deltids + group->ifirsttid; + BlockNumber block = ItemPointerGetBlockNumber(&firstdtid->tid); + + if (lastblock != -1 && + ((int64) block < lastblock - BOTTOMUP_TOLERANCE_NBLOCKS || + (int64) block > lastblock + BOTTOMUP_TOLERANCE_NBLOCKS)) + break; + + nblocksfavorable++; + lastblock = block; + } + + /* Always indicate that there is at least 1 favorable block */ + Assert(nblocksfavorable >= 1); + + return nblocksfavorable; +} + +/* + * qsort comparison function for bottomup_sort_and_shrink() + */ +static int +bottomup_sort_and_shrink_cmp(const void *arg1, const void *arg2) +{ + const IndexDeleteCounts *group1 = (const IndexDeleteCounts *) arg1; + const IndexDeleteCounts *group2 = (const IndexDeleteCounts *) arg2; + + /* + * Most significant field is npromisingtids (which we invert the order of + * so as to sort in desc order). + * + * Caller should have already normalized npromisingtids fields into + * power-of-two values (buckets). + */ + if (group1->npromisingtids > group2->npromisingtids) + return -1; + if (group1->npromisingtids < group2->npromisingtids) + return 1; + + /* + * Tiebreak: desc ntids sort order. + * + * We cannot expect power-of-two values for ntids fields. We should + * behave as if they were already rounded up for us instead. + */ + if (group1->ntids != group2->ntids) + { + uint32 ntids1 = pg_nextpower2_32((uint32) group1->ntids); + uint32 ntids2 = pg_nextpower2_32((uint32) group2->ntids); + + if (ntids1 > ntids2) + return -1; + if (ntids1 < ntids2) + return 1; + } + + /* + * Tiebreak: asc offset-into-deltids-for-block (offset to first TID for + * block in deltids array) order. + * + * This is equivalent to sorting in ascending heap block number order + * (among otherwise equal subsets of the array). This approach allows us + * to avoid accessing the out-of-line TID. (We rely on the assumption + * that the deltids array was sorted in ascending heap TID order when + * these offsets to the first TID from each heap block group were formed.) + */ + if (group1->ifirsttid > group2->ifirsttid) + return 1; + if (group1->ifirsttid < group2->ifirsttid) + return -1; + + pg_unreachable(); + + return 0; +} + +/* + * tdeheap_index_delete_tuples() helper function for bottom-up deletion callers. + * + * Sorts deltids array in the order needed for useful processing by bottom-up + * deletion. The array should already be sorted in TID order when we're + * called. The sort process groups heap TIDs from deltids into heap block + * groupings. Earlier/more-promising groups/blocks are usually those that are + * known to have the most "promising" TIDs. + * + * Sets new size of deltids array (ndeltids) in state. deltids will only have + * TIDs from the BOTTOMUP_MAX_NBLOCKS most promising heap blocks when we + * return. This often means that deltids will be shrunk to a small fraction + * of its original size (we eliminate many heap blocks from consideration for + * caller up front). + * + * Returns the number of "favorable" blocks. See bottomup_nblocksfavorable() + * for a definition and full details. 
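+ *
+ * The return value also feeds back into prefetching: the caller caps its
+ * initial prefetch distance at the number of favorable blocks (see the
+ * bottom-up branch of tdeheap_index_delete_tuples).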
+ */ +static int +bottomup_sort_and_shrink(TM_IndexDeleteOp *delstate) +{ + IndexDeleteCounts *blockgroups; + TM_IndexDelete *reordereddeltids; + BlockNumber curblock = InvalidBlockNumber; + int nblockgroups = 0; + int ncopied = 0; + int nblocksfavorable = 0; + + Assert(delstate->bottomup); + Assert(delstate->ndeltids > 0); + + /* Calculate per-heap-block count of TIDs */ + blockgroups = palloc(sizeof(IndexDeleteCounts) * delstate->ndeltids); + for (int i = 0; i < delstate->ndeltids; i++) + { + TM_IndexDelete *ideltid = &delstate->deltids[i]; + TM_IndexStatus *istatus = delstate->status + ideltid->id; + ItemPointer htid = &ideltid->tid; + bool promising = istatus->promising; + + if (curblock != ItemPointerGetBlockNumber(htid)) + { + /* New block group */ + nblockgroups++; + + Assert(curblock < ItemPointerGetBlockNumber(htid) || + !BlockNumberIsValid(curblock)); + + curblock = ItemPointerGetBlockNumber(htid); + blockgroups[nblockgroups - 1].ifirsttid = i; + blockgroups[nblockgroups - 1].ntids = 1; + blockgroups[nblockgroups - 1].npromisingtids = 0; + } + else + { + blockgroups[nblockgroups - 1].ntids++; + } + + if (promising) + blockgroups[nblockgroups - 1].npromisingtids++; + } + + /* + * We're about ready to sort block groups to determine the optimal order + * for visiting heap blocks. But before we do, round the number of + * promising tuples for each block group up to the next power-of-two, + * unless it is very low (less than 4), in which case we round up to 4. + * npromisingtids is far too noisy to trust when choosing between a pair + * of block groups that both have very low values. + * + * This scheme divides heap blocks/block groups into buckets. Each bucket + * contains blocks that have _approximately_ the same number of promising + * TIDs as each other. The goal is to ignore relatively small differences + * in the total number of promising entries, so that the whole process can + * give a little weight to heapam factors (like heap block locality) + * instead. This isn't a trade-off, really -- we have nothing to lose. It + * would be foolish to interpret small differences in npromisingtids + * values as anything more than noise. + * + * We tiebreak on nhtids when sorting block group subsets that have the + * same npromisingtids, but this has the same issues as npromisingtids, + * and so nhtids is subject to the same power-of-two bucketing scheme. The + * only reason that we don't fix nhtids in the same way here too is that + * we'll need accurate nhtids values after the sort. We handle nhtids + * bucketization dynamically instead (in the sort comparator). + * + * See bottomup_nblocksfavorable() for a full explanation of when and how + * heap locality/favorable blocks can significantly influence when and how + * heap blocks are accessed. 
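+ *
+ * A worked example of the bucketing applied below: block groups with 3,
+ * 5 and 9 promising TIDs are bucketed to 4, 8 and 16 respectively, so
+ * the group with 9 sorts first; groups with 5 and 6 promising TIDs both
+ * land in the bucket of 8 and fall through to the (likewise bucketed)
+ * ntids tiebreak.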
+ */
+ for (int b = 0; b < nblockgroups; b++)
+ {
+ IndexDeleteCounts *group = blockgroups + b;
+
+ /* Better off falling back on nhtids with low npromisingtids */
+ if (group->npromisingtids <= 4)
+ group->npromisingtids = 4;
+ else
+ group->npromisingtids =
+ pg_nextpower2_32((uint32) group->npromisingtids);
+ }
+
+ /* Sort groups and rearrange caller's deltids array */
+ qsort(blockgroups, nblockgroups, sizeof(IndexDeleteCounts),
+ bottomup_sort_and_shrink_cmp);
+ reordereddeltids = palloc(delstate->ndeltids * sizeof(TM_IndexDelete));
+
+ nblockgroups = Min(BOTTOMUP_MAX_NBLOCKS, nblockgroups);
+ /* Determine number of favorable blocks at the start of final deltids */
+ nblocksfavorable = bottomup_nblocksfavorable(blockgroups, nblockgroups,
+ delstate->deltids);
+
+ for (int b = 0; b < nblockgroups; b++)
+ {
+ IndexDeleteCounts *group = blockgroups + b;
+ TM_IndexDelete *firstdtid = delstate->deltids + group->ifirsttid;
+
+ memcpy(reordereddeltids + ncopied, firstdtid,
+ sizeof(TM_IndexDelete) * group->ntids);
+ ncopied += group->ntids;
+ }
+
+ /* Copy final grouped and sorted TIDs back into start of caller's array */
+ memcpy(delstate->deltids, reordereddeltids,
+ sizeof(TM_IndexDelete) * ncopied);
+ delstate->ndeltids = ncopied;
+
+ pfree(reordereddeltids);
+ pfree(blockgroups);
+
+ return nblocksfavorable;
+}
+
+/*
+ * Perform XLogInsert for a heap-visible operation. 'tdeheap_buffer' is the
+ * heap buffer being marked all-visible, and vm_buffer is the buffer containing
+ * the corresponding visibility map block. Both should have already been
+ * modified and dirtied.
+ *
+ * snapshotConflictHorizon comes from the largest xmin on the page being
+ * marked all-visible. REDO routine uses it to generate recovery conflicts.
+ *
+ * If checksums or wal_log_hints are enabled, we may also generate a full-page
+ * image of tdeheap_buffer. Otherwise, we optimize away the FPI (by specifying
+ * REGBUF_NO_IMAGE for the heap buffer), in which case the caller should *not*
+ * update the heap page's LSN.
+ */
+XLogRecPtr
+log_tdeheap_visible(Relation rel, Buffer tdeheap_buffer, Buffer vm_buffer,
+ TransactionId snapshotConflictHorizon, uint8 vmflags)
+{
+ xl_tdeheap_visible xlrec;
+ XLogRecPtr recptr;
+ uint8 flags;
+
+ Assert(BufferIsValid(tdeheap_buffer));
+ Assert(BufferIsValid(vm_buffer));
+
+ xlrec.snapshotConflictHorizon = snapshotConflictHorizon;
+ xlrec.flags = vmflags;
+ if (RelationIsAccessibleInLogicalDecoding(rel))
+ xlrec.flags |= VISIBILITYMAP_XLOG_CATALOG_REL;
+ XLogBeginInsert();
+ XLogRegisterData((char *) &xlrec, SizeOfHeapVisible);
+
+ XLogRegisterBuffer(0, vm_buffer, 0);
+
+ flags = REGBUF_STANDARD;
+ if (!XLogHintBitIsNeeded())
+ flags |= REGBUF_NO_IMAGE;
+ XLogRegisterBuffer(1, tdeheap_buffer, flags);
+
+ recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_VISIBLE);
+
+ return recptr;
+}
+
+/*
+ * Perform XLogInsert for a heap-update operation. Caller must already
+ * have modified the buffer(s) and marked them dirty.
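+ *
+ * When the old and new versions are on the same page, only the changed
+ * middle of the new tuple is logged: e.g. an UPDATE that changes one
+ * column of a wide row emits just the modified bytes plus one or two
+ * uint16 prefix/suffix lengths (see the prefix/suffix logic below).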
+ */
+static XLogRecPtr
+log_tdeheap_update(Relation reln, Buffer oldbuf,
+ Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+ HeapTuple old_key_tuple,
+ bool all_visible_cleared, bool new_all_visible_cleared)
+{
+ xl_tdeheap_update xlrec;
+ xl_tdeheap_header xlhdr;
+ xl_tdeheap_header xlhdr_idx;
+ uint8 info;
+ uint16 prefix_suffix[2];
+ uint16 prefixlen = 0,
+ suffixlen = 0;
+ XLogRecPtr recptr;
+ Page page = BufferGetPage(newbuf);
+ PageHeader phdr = (PageHeader) page;
+ bool need_tuple_data = RelationIsLogicallyLogged(reln);
+ bool init;
+ int bufflags;
+
+ /* Caller should not call me on a non-WAL-logged relation */
+ Assert(RelationNeedsWAL(reln));
+
+ XLogBeginInsert();
+
+ if (HeapTupleIsHeapOnly(newtup))
+ info = XLOG_HEAP_HOT_UPDATE;
+ else
+ info = XLOG_HEAP_UPDATE;
+
+ /*
+ * If the old and new tuple are on the same page, we only need to log the
+ * parts of the new tuple that were changed. That saves on the amount of
+ * WAL we need to write. Currently, we just count any unchanged bytes in
+ * the beginning and end of the tuple. That's quick to check, and
+ * perfectly covers the common case that only one field is updated.
+ *
+ * We could do this even if the old and new tuple are on different pages,
+ * but only if we don't make a full-page image of the old page, which is
+ * difficult to know in advance. Also, if the old tuple is corrupt for
+ * some reason, it would allow the corruption to propagate to the new
+ * page, so it seems best to avoid. Under the general assumption that most
+ * updates tend to create the new tuple version on the same page, there
+ * isn't much to be gained by doing this across pages anyway.
+ *
+ * Skip this if we're taking a full-page image of the new page, as we
+ * don't include the new tuple in the WAL record in that case. Also
+ * disable if wal_level='logical', as logical decoding needs to be able to
+ * read the new tuple in whole from the WAL record alone.
+ */
+ if (oldbuf == newbuf && !need_tuple_data &&
+ !XLogCheckBufferNeedsBackup(newbuf))
+ {
+ char *oldp = (char *) oldtup->t_data + oldtup->t_data->t_hoff;
+ char *newp = (char *) newtup->t_data + newtup->t_data->t_hoff;
+ int oldlen = oldtup->t_len - oldtup->t_data->t_hoff;
+ int newlen = newtup->t_len - newtup->t_data->t_hoff;
+
+ /* Check for common prefix between old and new tuple */
+ for (prefixlen = 0; prefixlen < Min(oldlen, newlen); prefixlen++)
+ {
+ if (newp[prefixlen] != oldp[prefixlen])
+ break;
+ }
+
+ /*
+ * Storing the length of the prefix takes 2 bytes, so we need to save
+ * at least 3 bytes or there's no point.
+ */
+ if (prefixlen < 3)
+ prefixlen = 0;
+
+ /* Same for suffix */
+ for (suffixlen = 0; suffixlen < Min(oldlen, newlen) - prefixlen; suffixlen++)
+ {
+ if (newp[newlen - suffixlen - 1] != oldp[oldlen - suffixlen - 1])
+ break;
+ }
+ if (suffixlen < 3)
+ suffixlen = 0;
+ }
+
+ /* Prepare main WAL data chain */
+ xlrec.flags = 0;
+ if (all_visible_cleared)
+ xlrec.flags |= XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED;
+ if (new_all_visible_cleared)
+ xlrec.flags |= XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED;
+ if (prefixlen > 0)
+ xlrec.flags |= XLH_UPDATE_PREFIX_FROM_OLD;
+ if (suffixlen > 0)
+ xlrec.flags |= XLH_UPDATE_SUFFIX_FROM_OLD;
+ if (need_tuple_data)
+ {
+ xlrec.flags |= XLH_UPDATE_CONTAINS_NEW_TUPLE;
+ if (old_key_tuple)
+ {
+ if (reln->rd_rel->relreplident == REPLICA_IDENTITY_FULL)
+ xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_TUPLE;
+ else
+ xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
+ }
+ }
+
+ /* If new tuple is the single and first tuple on page... */
+ if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
+ PageGetMaxOffsetNumber(page) == FirstOffsetNumber)
+ {
+ info |= XLOG_HEAP_INIT_PAGE;
+ init = true;
+ }
+ else
+ init = false;
+
+ /* Prepare WAL data for the old page */
+ xlrec.old_offnum = ItemPointerGetOffsetNumber(&oldtup->t_self);
+ xlrec.old_xmax = HeapTupleHeaderGetRawXmax(oldtup->t_data);
+ xlrec.old_infobits_set = compute_infobits(oldtup->t_data->t_infomask,
+ oldtup->t_data->t_infomask2);
+
+ /* Prepare WAL data for the new page */
+ xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
+ xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+
+ bufflags = REGBUF_STANDARD;
+ if (init)
+ bufflags |= REGBUF_WILL_INIT;
+ if (need_tuple_data)
+ bufflags |= REGBUF_KEEP_DATA;
+
+ XLogRegisterBuffer(0, newbuf, bufflags);
+ if (oldbuf != newbuf)
+ XLogRegisterBuffer(1, oldbuf, REGBUF_STANDARD);
+
+ XLogRegisterData((char *) &xlrec, SizeOfHeapUpdate);
+
+ /*
+ * Prepare WAL data for the new tuple.
+ */
+ if (prefixlen > 0 || suffixlen > 0)
+ {
+ if (prefixlen > 0 && suffixlen > 0)
+ {
+ prefix_suffix[0] = prefixlen;
+ prefix_suffix[1] = suffixlen;
+ XLogRegisterBufData(0, (char *) &prefix_suffix, sizeof(uint16) * 2);
+ }
+ else if (prefixlen > 0)
+ {
+ XLogRegisterBufData(0, (char *) &prefixlen, sizeof(uint16));
+ }
+ else
+ {
+ XLogRegisterBufData(0, (char *) &suffixlen, sizeof(uint16));
+ }
+ }
+
+ xlhdr.t_infomask2 = newtup->t_data->t_infomask2;
+ xlhdr.t_infomask = newtup->t_data->t_infomask;
+ xlhdr.t_hoff = newtup->t_data->t_hoff;
+ Assert(SizeofHeapTupleHeader + prefixlen + suffixlen <= newtup->t_len);
+
+ /*
+ * PG73FORMAT: write bitmap [+ padding] [+ oid] + data
+ *
+ * The 'data' doesn't include the common prefix or suffix.
+ */
+ /* We write the encrypted new tuple data from the buffer */
+ XLogRegisterBufData(0, (char *) &xlhdr, SizeOfHeapHeader);
+ if (prefixlen == 0)
+ {
+ XLogRegisterBufData(0,
+ ((char *) phdr) + phdr->pd_upper + SizeofHeapTupleHeader,
+ newtup->t_len - SizeofHeapTupleHeader - suffixlen);
+ }
+ else
+ {
+ /*
+ * Have to write the null bitmap and data after the common prefix as
+ * two separate rdata entries.
+ */
+ /* bitmap [+ padding] [+ oid] */
+ if (newtup->t_data->t_hoff - SizeofHeapTupleHeader > 0)
+ {
+ XLogRegisterBufData(0,
+ ((char *) phdr) + phdr->pd_upper + SizeofHeapTupleHeader,
+ newtup->t_data->t_hoff - SizeofHeapTupleHeader);
+ }
+
+ /* data after common prefix */
+ XLogRegisterBufData(0,
+ ((char *) phdr) + phdr->pd_upper + newtup->t_data->t_hoff + prefixlen,
+ newtup->t_len - newtup->t_data->t_hoff - prefixlen - suffixlen);
+ }
+
+ /* We need to log a tuple identity */
+ if (need_tuple_data && old_key_tuple)
+ {
+ /* don't really need this, but it's more comfy to decode */
+ xlhdr_idx.t_infomask2 = old_key_tuple->t_data->t_infomask2;
+ xlhdr_idx.t_infomask = old_key_tuple->t_data->t_infomask;
+ xlhdr_idx.t_hoff = old_key_tuple->t_data->t_hoff;
+
+ XLogRegisterData((char *) &xlhdr_idx, SizeOfHeapHeader);
+
+ /* PG73FORMAT: write bitmap [+ padding] [+ oid] + data */
+ XLogRegisterData((char *) old_key_tuple->t_data + SizeofHeapTupleHeader,
+ old_key_tuple->t_len - SizeofHeapTupleHeader);
+ }
+
+ /* filtering by origin on a row level is much more efficient */
+ XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+ recptr = XLogInsert(RM_HEAP_ID, info);
+
+ return recptr;
+}
+
+/*
+ * Perform XLogInsert of an XLOG_HEAP2_NEW_CID record
+ *
+ * This is only used in wal_level >= WAL_LEVEL_LOGICAL, and only for catalog
+ * tuples.
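+ *
+ * Example: a catalog tuple inserted and then deleted by the same
+ * transaction carries a combo CID, so the record logs both cmin and cmax
+ * (plus the raw combo command id), letting logical decoding reconstruct
+ * the tuple's visibility within that transaction.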
+ */
+static XLogRecPtr
+log_tdeheap_new_cid(Relation relation, HeapTuple tup)
+{
+ xl_tdeheap_new_cid xlrec;
+
+ XLogRecPtr recptr;
+ HeapTupleHeader hdr = tup->t_data;
+
+ Assert(ItemPointerIsValid(&tup->t_self));
+ Assert(tup->t_tableOid != InvalidOid);
+
+ xlrec.top_xid = GetTopTransactionId();
+ xlrec.target_locator = relation->rd_locator;
+ xlrec.target_tid = tup->t_self;
+
+ /*
+ * If the tuple got inserted & deleted in the same TX we definitely have a
+ * combo CID, set cmin and cmax.
+ */
+ if (hdr->t_infomask & HEAP_COMBOCID)
+ {
+ Assert(!(hdr->t_infomask & HEAP_XMAX_INVALID));
+ Assert(!HeapTupleHeaderXminInvalid(hdr));
+ xlrec.cmin = HeapTupleHeaderGetCmin(hdr);
+ xlrec.cmax = HeapTupleHeaderGetCmax(hdr);
+ xlrec.combocid = HeapTupleHeaderGetRawCommandId(hdr);
+ }
+ /* No combo CID, so only cmin or cmax can be set by this TX */
+ else
+ {
+ /*
+ * Tuple inserted.
+ *
+ * We need to check for LOCK ONLY because multixacts might be
+ * transferred to the new tuple in case of FOR KEY SHARE updates in
+ * which case there will be an xmax, although the tuple just got
+ * inserted.
+ */
+ if (hdr->t_infomask & HEAP_XMAX_INVALID ||
+ HEAP_XMAX_IS_LOCKED_ONLY(hdr->t_infomask))
+ {
+ xlrec.cmin = HeapTupleHeaderGetRawCommandId(hdr);
+ xlrec.cmax = InvalidCommandId;
+ }
+ /* Tuple from a different tx updated or deleted. */
+ else
+ {
+ xlrec.cmin = InvalidCommandId;
+ xlrec.cmax = HeapTupleHeaderGetRawCommandId(hdr);
+ }
+ xlrec.combocid = InvalidCommandId;
+ }
+
+ /*
+ * Note that we don't need to register the buffer here, because this
+ * operation does not modify the page. The insert/update/delete that
+ * called us certainly did, but that's WAL-logged separately.
+ */
+ XLogBeginInsert();
+ XLogRegisterData((char *) &xlrec, SizeOfHeapNewCid);
+
+ /* will be looked at irrespective of origin */
+
+ recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_NEW_CID);
+
+ return recptr;
+}
+
+/*
+ * Build a heap tuple representing the configured REPLICA IDENTITY to represent
+ * the old tuple in an UPDATE or DELETE.
+ *
+ * Returns NULL if there's no need to log an identity or if there's no suitable
+ * key defined.
+ *
+ * Pass key_required true if any replica identity columns changed value, or if
+ * any of them have any external data. Delete must always pass true.
+ *
+ * *copy is set to true if the returned tuple is a modified copy rather than
+ * the same tuple that was passed in.
+ */
+static HeapTuple
+ExtractReplicaIdentity(Relation relation, HeapTuple tp, bool key_required,
+ bool *copy)
+{
+ TupleDesc desc = RelationGetDescr(relation);
+ char replident = relation->rd_rel->relreplident;
+ Bitmapset *idattrs;
+ HeapTuple key_tuple;
+ bool nulls[MaxHeapAttributeNumber];
+ Datum values[MaxHeapAttributeNumber];
+
+ *copy = false;
+
+ if (!RelationIsLogicallyLogged(relation))
+ return NULL;
+
+ if (replident == REPLICA_IDENTITY_NOTHING)
+ return NULL;
+
+ if (replident == REPLICA_IDENTITY_FULL)
+ {
+ /*
+ * When logging the entire old tuple, it very well could contain
+ * toasted columns. If so, force them to be inlined.
+ */
+ if (HeapTupleHasExternal(tp))
+ {
+ *copy = true;
+ tp = toast_flatten_tuple(tp, desc);
+ }
+ return tp;
+ }
+
+ /* if the key isn't required and we're only logging the key, we're done */
+ if (!key_required)
+ return NULL;
+
+ /* find out the replica identity columns */
+ idattrs = RelationGetIndexAttrBitmap(relation,
+ INDEX_ATTR_BITMAP_IDENTITY_KEY);
+
+ /*
+ * If there are no defined replica identity columns, treat as !key_required.
+ * (This case should not be reachable from tdeheap_update, since that should
+ * calculate key_required accurately. But tdeheap_delete just passes
+ * constant true for key_required, so we can hit this case in deletes.)
+ */
+ if (bms_is_empty(idattrs))
+ return NULL;
+
+ /*
+ * Construct a new tuple containing only the replica identity columns,
+ * with nulls elsewhere. While we're at it, assert that the replica
+ * identity columns aren't null.
+ */
+ tdeheap_deform_tuple(tp, desc, values, nulls);
+
+ for (int i = 0; i < desc->natts; i++)
+ {
+ if (bms_is_member(i + 1 - FirstLowInvalidHeapAttributeNumber,
+ idattrs))
+ Assert(!nulls[i]);
+ else
+ nulls[i] = true;
+ }
+
+ key_tuple = tdeheap_form_tuple(desc, values, nulls);
+ *copy = true;
+
+ bms_free(idattrs);
+
+ /*
+ * If the tuple, which by here only contains indexed columns, still has
+ * toasted columns, force them to be inlined. This is somewhat unlikely
+ * since there are limits on the size of indexed columns, so we don't
+ * duplicate toast_flatten_tuple()'s functionality in the above loop over
+ * the indexed columns, even if it would be more efficient.
+ */
+ if (HeapTupleHasExternal(key_tuple))
+ {
+ HeapTuple oldtup = key_tuple;
+
+ key_tuple = toast_flatten_tuple(oldtup, desc);
+ tdeheap_freetuple(oldtup);
+ }
+
+ return key_tuple;
+}
+
+/*
+ * Given an "infobits" field from an XLog record, set the correct bits in the
+ * given infomask and infomask2 for the tuple touched by the record.
+ *
+ * (This is the reverse of compute_infobits).
+ */
+static void
+fix_infomask_from_infobits(uint8 infobits, uint16 *infomask, uint16 *infomask2)
+{
+ *infomask &= ~(HEAP_XMAX_IS_MULTI | HEAP_XMAX_LOCK_ONLY |
+ HEAP_XMAX_KEYSHR_LOCK | HEAP_XMAX_EXCL_LOCK);
+ *infomask2 &= ~HEAP_KEYS_UPDATED;
+
+ if (infobits & XLHL_XMAX_IS_MULTI)
+ *infomask |= HEAP_XMAX_IS_MULTI;
+ if (infobits & XLHL_XMAX_LOCK_ONLY)
+ *infomask |= HEAP_XMAX_LOCK_ONLY;
+ if (infobits & XLHL_XMAX_EXCL_LOCK)
+ *infomask |= HEAP_XMAX_EXCL_LOCK;
+ /* note HEAP_XMAX_SHR_LOCK isn't considered here */
+ if (infobits & XLHL_XMAX_KEYSHR_LOCK)
+ *infomask |= HEAP_XMAX_KEYSHR_LOCK;
+
+ if (infobits & XLHL_KEYS_UPDATED)
+ *infomask2 |= HEAP_KEYS_UPDATED;
+}
+
+static void
+tdeheap_xlog_delete(XLogReaderState *record)
+{
+ XLogRecPtr lsn = record->EndRecPtr;
+ xl_tdeheap_delete *xlrec = (xl_tdeheap_delete *) XLogRecGetData(record);
+ Buffer buffer;
+ Page page;
+ ItemId lp = NULL;
+ HeapTupleHeader htup;
+ BlockNumber blkno;
+ RelFileLocator target_locator;
+ ItemPointerData target_tid;
+
+ XLogRecGetBlockTag(record, 0, &target_locator, NULL, &blkno);
+ ItemPointerSetBlockNumber(&target_tid, blkno);
+ ItemPointerSetOffsetNumber(&target_tid, xlrec->offnum);
+
+ /*
+ * The visibility map may need to be fixed even if the heap page is
+ * already up-to-date.
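+ * (The heap page and the VM page are written out independently, so the
+ * heap page's LSN being past this record implies nothing about the VM
+ * bit.)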
+ */ + if (xlrec->flags & XLH_DELETE_ALL_VISIBLE_CLEARED) + { + Relation reln = CreateFakeRelcacheEntry(target_locator); + Buffer vmbuffer = InvalidBuffer; + + tdeheap_visibilitymap_pin(reln, blkno, &vmbuffer); + tdeheap_visibilitymap_clear(reln, blkno, vmbuffer, VISIBILITYMAP_VALID_BITS); + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + if (XLogReadBufferForRedo(record, 0, &buffer) == BLK_NEEDS_REDO) + { + page = BufferGetPage(buffer); + + if (PageGetMaxOffsetNumber(page) >= xlrec->offnum) + lp = PageGetItemId(page, xlrec->offnum); + + if (PageGetMaxOffsetNumber(page) < xlrec->offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + htup->t_infomask2 &= ~HEAP_KEYS_UPDATED; + HeapTupleHeaderClearHotUpdated(htup); + fix_infomask_from_infobits(xlrec->infobits_set, + &htup->t_infomask, &htup->t_infomask2); + if (!(xlrec->flags & XLH_DELETE_IS_SUPER)) + HeapTupleHeaderSetXmax(htup, xlrec->xmax); + else + HeapTupleHeaderSetXmin(htup, InvalidTransactionId); + HeapTupleHeaderSetCmax(htup, FirstCommandId, false); + + /* Mark the page as a candidate for pruning */ + PageSetPrunable(page, XLogRecGetXid(record)); + + if (xlrec->flags & XLH_DELETE_ALL_VISIBLE_CLEARED) + PageClearAllVisible(page); + + /* Make sure t_ctid is set correctly */ + if (xlrec->flags & XLH_DELETE_IS_PARTITION_MOVE) + HeapTupleHeaderSetMovedPartitions(htup); + else + htup->t_ctid = target_tid; + PageSetLSN(page, lsn); + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); +} + +static void +tdeheap_xlog_insert(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_insert *xlrec = (xl_tdeheap_insert *) XLogRecGetData(record); + Buffer buffer; + Page page; + union + { + HeapTupleHeaderData hdr; + char data[MaxHeapTupleSize]; + } tbuf; + HeapTupleHeader htup; + xl_tdeheap_header xlhdr; + uint32 newlen; + Size freespace = 0; + RelFileLocator target_locator; + BlockNumber blkno; + ItemPointerData target_tid; + XLogRedoAction action; + + XLogRecGetBlockTag(record, 0, &target_locator, NULL, &blkno); + ItemPointerSetBlockNumber(&target_tid, blkno); + ItemPointerSetOffsetNumber(&target_tid, xlrec->offnum); + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. + */ + if (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED) + { + Relation reln = CreateFakeRelcacheEntry(target_locator); + Buffer vmbuffer = InvalidBuffer; + + tdeheap_visibilitymap_pin(reln, blkno, &vmbuffer); + tdeheap_visibilitymap_clear(reln, blkno, vmbuffer, VISIBILITYMAP_VALID_BITS); + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + /* + * If we inserted the first and only tuple on the page, re-initialize the + * page from scratch. 
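+	 * (XLOG_HEAP_INIT_PAGE is set by the insert path whenever the new
+	 * tuple is the first and only one on its page, so there are no prior
+	 * page contents to redo against.)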
+ */ + if (XLogRecGetInfo(record) & XLOG_HEAP_INIT_PAGE) + { + buffer = XLogInitBufferForRedo(record, 0); + page = BufferGetPage(buffer); + PageInit(page, BufferGetPageSize(buffer), 0); + action = BLK_NEEDS_REDO; + } + else + action = XLogReadBufferForRedo(record, 0, &buffer); + if (action == BLK_NEEDS_REDO) + { + Size datalen; + char *data; + + page = BufferGetPage(buffer); + + if (PageGetMaxOffsetNumber(page) + 1 < xlrec->offnum) + elog(PANIC, "invalid max offset number"); + + data = XLogRecGetBlockData(record, 0, &datalen); + + newlen = datalen - SizeOfHeapHeader; + Assert(datalen > SizeOfHeapHeader && newlen <= MaxHeapTupleSize); + memcpy((char *) &xlhdr, data, SizeOfHeapHeader); + data += SizeOfHeapHeader; + + htup = &tbuf.hdr; + MemSet((char *) htup, 0, SizeofHeapTupleHeader); + /* PG73FORMAT: get bitmap [+ padding] [+ oid] + data */ + memcpy((char *) htup + SizeofHeapTupleHeader, + data, + newlen); + newlen += SizeofHeapTupleHeader; + htup->t_infomask2 = xlhdr.t_infomask2; + htup->t_infomask = xlhdr.t_infomask; + htup->t_hoff = xlhdr.t_hoff; + HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record)); + HeapTupleHeaderSetCmin(htup, FirstCommandId); + htup->t_ctid = target_tid; + + if (TDE_PageAddItem(target_locator, blkno, page, (Item) htup, newlen, xlrec->offnum, + true, true) == InvalidOffsetNumber) + elog(PANIC, "failed to add tuple"); + + freespace = PageGetHeapFreeSpace(page); /* needed to update FSM below */ + + PageSetLSN(page, lsn); + + if (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED) + PageClearAllVisible(page); + + /* XLH_INSERT_ALL_FROZEN_SET implies that all tuples are visible */ + if (xlrec->flags & XLH_INSERT_ALL_FROZEN_SET) + PageSetAllVisible(page); + + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); + + /* + * If the page is running low on free space, update the FSM as well. + * Arbitrarily, our definition of "low" is less than 20%. We can't do much + * better than that without knowing the fill-factor for the table. + * + * XXX: Don't do this if the page was restored from full page image. We + * don't bother to update the FSM in that case, it doesn't need to be + * totally accurate anyway. + */ + if (action == BLK_NEEDS_REDO && freespace < BLCKSZ / 5) + XLogRecordPageWithFreeSpace(target_locator, blkno, freespace); +} + +/* + * Handles UPDATE and HOT_UPDATE + */ +static void +tdeheap_xlog_update(XLogReaderState *record, bool hot_update) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_update *xlrec = (xl_tdeheap_update *) XLogRecGetData(record); + RelFileLocator rlocator; + BlockNumber oldblk; + BlockNumber newblk; + ItemPointerData newtid; + Buffer obuffer, + nbuffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleData oldtup; + HeapTupleHeader htup; + uint16 prefixlen = 0, + suffixlen = 0; + char *newp; + union + { + HeapTupleHeaderData hdr; + char data[MaxHeapTupleSize]; + } tbuf; + xl_tdeheap_header xlhdr; + uint32 newlen; + Size freespace = 0; + XLogRedoAction oldaction; + XLogRedoAction newaction; + + /* initialize to keep the compiler quiet */ + oldtup.t_data = NULL; + oldtup.t_len = 0; + + XLogRecGetBlockTag(record, 0, &rlocator, NULL, &newblk); + if (XLogRecGetBlockTagExtended(record, 1, NULL, NULL, &oldblk, NULL)) + { + /* HOT updates are never done across pages */ + Assert(!hot_update); + } + else + oldblk = newblk; + + ItemPointerSet(&newtid, newblk, xlrec->new_offnum); + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. 
+ */ + if (xlrec->flags & XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED) + { + Relation reln = CreateFakeRelcacheEntry(rlocator); + Buffer vmbuffer = InvalidBuffer; + + tdeheap_visibilitymap_pin(reln, oldblk, &vmbuffer); + tdeheap_visibilitymap_clear(reln, oldblk, vmbuffer, VISIBILITYMAP_VALID_BITS); + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + /* + * In normal operation, it is important to lock the two pages in + * page-number order, to avoid possible deadlocks against other update + * operations going the other way. However, during WAL replay there can + * be no other update happening, so we don't need to worry about that. But + * we *do* need to worry that we don't expose an inconsistent state to Hot + * Standby queries --- so the original page can't be unlocked before we've + * added the new tuple to the new page. + */ + + /* Deal with old tuple version */ + oldaction = XLogReadBufferForRedo(record, (oldblk == newblk) ? 0 : 1, + &obuffer); + if (oldaction == BLK_NEEDS_REDO) + { + page = BufferGetPage(obuffer); + offnum = xlrec->old_offnum; + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + oldtup.t_data = htup; + oldtup.t_len = ItemIdGetLength(lp); + + htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + htup->t_infomask2 &= ~HEAP_KEYS_UPDATED; + if (hot_update) + HeapTupleHeaderSetHotUpdated(htup); + else + HeapTupleHeaderClearHotUpdated(htup); + fix_infomask_from_infobits(xlrec->old_infobits_set, &htup->t_infomask, + &htup->t_infomask2); + HeapTupleHeaderSetXmax(htup, xlrec->old_xmax); + HeapTupleHeaderSetCmax(htup, FirstCommandId, false); + /* Set forward chain link in t_ctid */ + htup->t_ctid = newtid; + + /* Mark the page as a candidate for pruning */ + PageSetPrunable(page, XLogRecGetXid(record)); + + if (xlrec->flags & XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED) + PageClearAllVisible(page); + + PageSetLSN(page, lsn); + MarkBufferDirty(obuffer); + } + + /* + * Read the page the new tuple goes into, if different from old. + */ + if (oldblk == newblk) + { + nbuffer = obuffer; + newaction = oldaction; + } + else if (XLogRecGetInfo(record) & XLOG_HEAP_INIT_PAGE) + { + nbuffer = XLogInitBufferForRedo(record, 0); + page = (Page) BufferGetPage(nbuffer); + PageInit(page, BufferGetPageSize(nbuffer), 0); + newaction = BLK_NEEDS_REDO; + } + else + newaction = XLogReadBufferForRedo(record, 0, &nbuffer); + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. 
+ */ + if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED) + { + Relation reln = CreateFakeRelcacheEntry(rlocator); + Buffer vmbuffer = InvalidBuffer; + + tdeheap_visibilitymap_pin(reln, newblk, &vmbuffer); + tdeheap_visibilitymap_clear(reln, newblk, vmbuffer, VISIBILITYMAP_VALID_BITS); + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + /* Deal with new tuple */ + if (newaction == BLK_NEEDS_REDO) + { + char *recdata; + char *recdata_end; + Size datalen; + Size tuplen; + + recdata = XLogRecGetBlockData(record, 0, &datalen); + recdata_end = recdata + datalen; + + page = BufferGetPage(nbuffer); + + offnum = xlrec->new_offnum; + if (PageGetMaxOffsetNumber(page) + 1 < offnum) + elog(PANIC, "invalid max offset number"); + + if (xlrec->flags & XLH_UPDATE_PREFIX_FROM_OLD) + { + Assert(newblk == oldblk); + memcpy(&prefixlen, recdata, sizeof(uint16)); + recdata += sizeof(uint16); + } + if (xlrec->flags & XLH_UPDATE_SUFFIX_FROM_OLD) + { + Assert(newblk == oldblk); + memcpy(&suffixlen, recdata, sizeof(uint16)); + recdata += sizeof(uint16); + } + + memcpy((char *) &xlhdr, recdata, SizeOfHeapHeader); + recdata += SizeOfHeapHeader; + + tuplen = recdata_end - recdata; + Assert(tuplen <= MaxHeapTupleSize); + + htup = &tbuf.hdr; + MemSet((char *) htup, 0, SizeofHeapTupleHeader); + + /* + * Reconstruct the new tuple using the prefix and/or suffix from the + * old tuple, and the data stored in the WAL record. + */ + newp = (char *) htup + SizeofHeapTupleHeader; + if (prefixlen > 0) + { + int len; + + /* copy bitmap [+ padding] [+ oid] from WAL record */ + len = xlhdr.t_hoff - SizeofHeapTupleHeader; + memcpy(newp, recdata, len); + recdata += len; + newp += len; + + /* copy prefix from old tuple */ + memcpy(newp, (char *) oldtup.t_data + oldtup.t_data->t_hoff, prefixlen); + newp += prefixlen; + + /* copy new tuple data from WAL record */ + len = tuplen - (xlhdr.t_hoff - SizeofHeapTupleHeader); + memcpy(newp, recdata, len); + recdata += len; + newp += len; + } + else + { + /* + * copy bitmap [+ padding] [+ oid] + data from record, all in one + * go + */ + memcpy(newp, recdata, tuplen); + recdata += tuplen; + newp += tuplen; + } + Assert(recdata == recdata_end); + + /* copy suffix from old tuple */ + if (suffixlen > 0) + memcpy(newp, (char *) oldtup.t_data + oldtup.t_len - suffixlen, suffixlen); + + newlen = SizeofHeapTupleHeader + tuplen + prefixlen + suffixlen; + htup->t_infomask2 = xlhdr.t_infomask2; + htup->t_infomask = xlhdr.t_infomask; + htup->t_hoff = xlhdr.t_hoff; + + HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record)); + HeapTupleHeaderSetCmin(htup, FirstCommandId); + HeapTupleHeaderSetXmax(htup, xlrec->new_xmax); + /* Make sure there is no forward chain link in t_ctid */ + htup->t_ctid = newtid; + + offnum = TDE_PageAddItem(rlocator, newblk, page, (Item) htup, newlen, offnum, true, true); + if (offnum == InvalidOffsetNumber) + elog(PANIC, "failed to add tuple"); + + if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED) + PageClearAllVisible(page); + + freespace = PageGetHeapFreeSpace(page); /* needed to update FSM below */ + + PageSetLSN(page, lsn); + MarkBufferDirty(nbuffer); + } + + if (BufferIsValid(nbuffer) && nbuffer != obuffer) + UnlockReleaseBuffer(nbuffer); + if (BufferIsValid(obuffer)) + UnlockReleaseBuffer(obuffer); + + /* + * If the new page is running low on free space, update the FSM as well. + * Arbitrarily, our definition of "low" is less than 20%. We can't do much + * better than that without knowing the fill-factor for the table. 
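+	 * With the default 8 kB block size, that threshold works out to
+	 * BLCKSZ / 5 = 1638 bytes of remaining free space.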
+ * + * However, don't update the FSM on HOT updates, because after crash + * recovery, either the old or the new tuple will certainly be dead and + * prunable. After pruning, the page will have roughly as much free space + * as it did before the update, assuming the new tuple is about the same + * size as the old one. + * + * XXX: Don't do this if the page was restored from full page image. We + * don't bother to update the FSM in that case, it doesn't need to be + * totally accurate anyway. + */ + if (newaction == BLK_NEEDS_REDO && !hot_update && freespace < BLCKSZ / 5) + XLogRecordPageWithFreeSpace(rlocator, newblk, freespace); +} + +static void +tdeheap_xlog_confirm(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_confirm *xlrec = (xl_tdeheap_confirm *) XLogRecGetData(record); + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + + if (XLogReadBufferForRedo(record, 0, &buffer) == BLK_NEEDS_REDO) + { + page = BufferGetPage(buffer); + + offnum = xlrec->offnum; + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + /* + * Confirm tuple as actually inserted + */ + ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum); + + PageSetLSN(page, lsn); + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); +} + +static void +tdeheap_xlog_lock(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_lock *xlrec = (xl_tdeheap_lock *) XLogRecGetData(record); + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + + /* + * The visibility map may need to be fixed even if the heap page is + * already up-to-date. + */ + if (xlrec->flags & XLH_LOCK_ALL_FROZEN_CLEARED) + { + RelFileLocator rlocator; + Buffer vmbuffer = InvalidBuffer; + BlockNumber block; + Relation reln; + + XLogRecGetBlockTag(record, 0, &rlocator, NULL, &block); + reln = CreateFakeRelcacheEntry(rlocator); + + tdeheap_visibilitymap_pin(reln, block, &vmbuffer); + tdeheap_visibilitymap_clear(reln, block, vmbuffer, VISIBILITYMAP_ALL_FROZEN); + + ReleaseBuffer(vmbuffer); + FreeFakeRelcacheEntry(reln); + } + + if (XLogReadBufferForRedo(record, 0, &buffer) == BLK_NEEDS_REDO) + { + page = (Page) BufferGetPage(buffer); + + offnum = xlrec->offnum; + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED); + htup->t_infomask2 &= ~HEAP_KEYS_UPDATED; + fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask, + &htup->t_infomask2); + + /* + * Clear relevant update flags, but only if the modified infomask says + * there's no update. 
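+		 * (That is, only when HEAP_XMAX_IS_LOCKED_ONLY() holds for the
+		 * infomask we just computed; a genuine update must keep its
+		 * forward t_ctid link.)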
+ */ + if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask)) + { + HeapTupleHeaderClearHotUpdated(htup); + /* Make sure there is no forward chain link in t_ctid */ + ItemPointerSet(&htup->t_ctid, + BufferGetBlockNumber(buffer), + offnum); + } + HeapTupleHeaderSetXmax(htup, xlrec->xmax); + HeapTupleHeaderSetCmax(htup, FirstCommandId, false); + PageSetLSN(page, lsn); + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); +} + +static void +tdeheap_xlog_inplace(XLogReaderState *record) +{ + XLogRecPtr lsn = record->EndRecPtr; + xl_tdeheap_inplace *xlrec = (xl_tdeheap_inplace *) XLogRecGetData(record); + Buffer buffer; + Page page; + OffsetNumber offnum; + ItemId lp = NULL; + HeapTupleHeader htup; + uint32 oldlen; + Size newlen; + + if (XLogReadBufferForRedo(record, 0, &buffer) == BLK_NEEDS_REDO) + { + char *newtup = XLogRecGetBlockData(record, 0, &newlen); + + page = BufferGetPage(buffer); + + offnum = xlrec->offnum; + if (PageGetMaxOffsetNumber(page) >= offnum) + lp = PageGetItemId(page, offnum); + + if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp)) + elog(PANIC, "invalid lp"); + + htup = (HeapTupleHeader) PageGetItem(page, lp); + + oldlen = ItemIdGetLength(lp) - htup->t_hoff; + if (oldlen != newlen) + elog(PANIC, "wrong tuple length"); + + memcpy((char *) htup + htup->t_hoff, newtup, newlen); + + PageSetLSN(page, lsn); + MarkBufferDirty(buffer); + } + if (BufferIsValid(buffer)) + UnlockReleaseBuffer(buffer); +} + +void +tdeheap_redo(XLogReaderState *record) +{ + uint8 info = XLogRecGetInfo(record) & ~XLR_INFO_MASK; + + /* + * These operations don't overwrite MVCC data so no conflict processing is + * required. The ones in heap2 rmgr do. + */ + + switch (info & XLOG_HEAP_OPMASK) + { + case XLOG_HEAP_INSERT: + tdeheap_xlog_insert(record); + break; + case XLOG_HEAP_DELETE: + tdeheap_xlog_delete(record); + break; + case XLOG_HEAP_UPDATE: + tdeheap_xlog_update(record, false); + break; + case XLOG_HEAP_TRUNCATE: + + /* + * TRUNCATE is a no-op because the actions are already logged as + * SMGR WAL records. TRUNCATE WAL record only exists for logical + * decoding. + */ + break; + case XLOG_HEAP_HOT_UPDATE: + tdeheap_xlog_update(record, true); + break; + case XLOG_HEAP_CONFIRM: + tdeheap_xlog_confirm(record); + break; + case XLOG_HEAP_LOCK: + tdeheap_xlog_lock(record); + break; + case XLOG_HEAP_INPLACE: + tdeheap_xlog_inplace(record); + break; + default: + elog(PANIC, "pg_tde_redo: unknown op code %u", info); + } +} + +/* + * Mask a heap page before performing consistency checks on it. + */ +void +tdeheap_mask(char *pagedata, BlockNumber blkno) +{ + Page page = (Page) pagedata; + OffsetNumber off; + + mask_page_lsn_and_checksum(page); + + mask_page_hint_bits(page); + mask_unused_space(page); + + for (off = 1; off <= PageGetMaxOffsetNumber(page); off++) + { + ItemId iid = PageGetItemId(page, off); + char *page_item; + + page_item = (char *) (page + ItemIdGetOffset(iid)); + + if (ItemIdIsNormal(iid)) + { + HeapTupleHeader page_htup = (HeapTupleHeader) page_item; + + /* + * If xmin of a tuple is not yet frozen, we should ignore + * differences in hint bits, since they can be set without + * emitting WAL. + */ + if (!HeapTupleHeaderXminFrozen(page_htup)) + page_htup->t_infomask &= ~HEAP_XACT_MASK; + else + { + /* Still we need to mask xmax hint bits. */ + page_htup->t_infomask &= ~HEAP_XMAX_INVALID; + page_htup->t_infomask &= ~HEAP_XMAX_COMMITTED; + } + + /* + * During replay, we set Command Id to FirstCommandId. Hence, mask + * it. 
See tdeheap_xlog_insert() for details. + */ + page_htup->t_choice.t_heap.t_field3.t_cid = MASK_MARKER; + + /* + * For a speculative tuple, tdeheap_insert() does not set ctid in the + * caller-passed heap tuple itself, leaving the ctid field to + * contain a speculative token value - a per-backend monotonically + * increasing identifier. Besides, it does not WAL-log ctid under + * any circumstances. + * + * During redo, tdeheap_xlog_insert() sets t_ctid to current block + * number and self offset number. It doesn't care about any + * speculative insertions on the primary. Hence, we set t_ctid to + * current block number and self offset number to ignore any + * inconsistency. + */ + if (HeapTupleHeaderIsSpeculative(page_htup)) + ItemPointerSet(&page_htup->t_ctid, blkno, off); + + /* + * NB: Not ignoring ctid changes due to the tuple having moved + * (i.e. HeapTupleHeaderIndicatesMovedPartitions), because that's + * important information that needs to be in-sync between primary + * and standby, and thus is WAL logged. + */ + } + + /* + * Ignore any padding bytes after the tuple, when the length of the + * item is not MAXALIGNed. + */ + if (ItemIdHasStorage(iid)) + { + int len = ItemIdGetLength(iid); + int padlen = MAXALIGN(len) - len; + + if (padlen > 0) + memset(page_item + len, MASK_MARKER, padlen); + } + } +} + +/* + * HeapCheckForSerializableConflictOut + * We are reading a tuple. If it's not visible, there may be a + * rw-conflict out with the inserter. Otherwise, if it is visible to us + * but has been deleted, there may be a rw-conflict out with the deleter. + * + * We will determine the top level xid of the writing transaction with which + * we may be in conflict, and ask CheckForSerializableConflictOut() to check + * for overlap with our own transaction. + * + * This function should be called just about anywhere in heapam.c where a + * tuple has been read. The caller must hold at least a shared lock on the + * buffer, because this function might set hint bits on the tuple. There is + * currently no known reason to call this function from an index AM. + */ +void +HeapCheckForSerializableConflictOut(bool visible, Relation relation, + HeapTuple tuple, Buffer buffer, + Snapshot snapshot) +{ + TransactionId xid; + HTSV_Result htsvResult; + + if (!CheckForSerializableConflictOutNeeded(relation, snapshot)) + return; + + /* + * Check to see whether the tuple has been written to by a concurrent + * transaction, either to create it not visible to us, or to delete it + * while it is visible to us. The "visible" bool indicates whether the + * tuple is visible to us, while HeapTupleSatisfiesVacuum checks what else + * is going on with it. + * + * In the event of a concurrently inserted tuple that also happens to have + * been concurrently updated (by a separate transaction), the xmin of the + * tuple will be used -- not the updater's xid. 
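+	 * (When the tuple is not visible to us, the rw-conflict of interest
+	 * is with the inserting transaction, not with whoever updated the
+	 * tuple afterwards.)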
+ */ + htsvResult = HeapTupleSatisfiesVacuum(tuple, TransactionXmin, buffer); + switch (htsvResult) + { + case HEAPTUPLE_LIVE: + if (visible) + return; + xid = HeapTupleHeaderGetXmin(tuple->t_data); + break; + case HEAPTUPLE_RECENTLY_DEAD: + case HEAPTUPLE_DELETE_IN_PROGRESS: + if (visible) + xid = HeapTupleHeaderGetUpdateXid(tuple->t_data); + else + xid = HeapTupleHeaderGetXmin(tuple->t_data); + + if (TransactionIdPrecedes(xid, TransactionXmin)) + { + /* This is like the HEAPTUPLE_DEAD case */ + Assert(!visible); + return; + } + break; + case HEAPTUPLE_INSERT_IN_PROGRESS: + xid = HeapTupleHeaderGetXmin(tuple->t_data); + break; + case HEAPTUPLE_DEAD: + Assert(!visible); + return; + default: + + /* + * The only way to get to this default clause is if a new value is + * added to the enum type without adding it to this switch + * statement. That's a bug, so elog. + */ + elog(ERROR, "unrecognized return value from HeapTupleSatisfiesVacuum: %u", htsvResult); + + /* + * In spite of having all enum values covered and calling elog on + * this default, some compilers think this is a code path which + * allows xid to be used below without initialization. Silence + * that warning. + */ + xid = InvalidTransactionId; + } + + Assert(TransactionIdIsValid(xid)); + Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin)); + + /* + * Find top level xid. Bail out if xid is too early to be a conflict, or + * if it's our own xid. + */ + if (TransactionIdEquals(xid, GetTopTransactionIdIfAny())) + return; + xid = SubTransGetTopmostTransaction(xid); + if (TransactionIdPrecedes(xid, TransactionXmin)) + return; + + CheckForSerializableConflictOut(relation, xid, snapshot); +} diff --git a/contrib/pg_tde/src17/access/pg_tdeam_handler.c b/contrib/pg_tde/src17/access/pg_tdeam_handler.c new file mode 100644 index 00000000000..01cbbed9aab --- /dev/null +++ b/contrib/pg_tde/src17/access/pg_tdeam_handler.c @@ -0,0 +1,2719 @@ +/*------------------------------------------------------------------------- + * + * pg_tdeam_handler.c + * heap table access method code + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/access/heap/pg_tdeam_handler.c + * + * + * NOTES + * This files wires up the lower level heapam.c et al routines with the + * tableam abstraction. 
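+ *	It mirrors the upstream heapam_handler.c, substituting the tdeheap_*
+ *	routines that implement pg_tde's transparent data encryption. Typical
+ *	usage once the extension is installed (assuming the handler is
+ *	registered as the tde_heap access method):
+ *
+ *		CREATE TABLE t (a int) USING tde_heap;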
+ * + *------------------------------------------------------------------------- + */ + +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tde_slot.h" + +#include "access/pg_tdeam.h" +#include "access/pg_tdetoast.h" +#include "access/pg_tde_rewrite.h" +#include "access/pg_tde_tdemap.h" + +#include "encryption/enc_tde.h" + +#include "access/genam.h" +#include "access/multixact.h" +#include "access/syncscan.h" +#include "access/tableam.h" +#include "access/tsmapi.h" +#include "access/visibilitymap.h" +#include "access/xact.h" +#include "catalog/catalog.h" +#include "catalog/index.h" +#include "catalog/storage.h" +#include "catalog/storage_xlog.h" +#include "commands/progress.h" +#include "executor/executor.h" +#include "miscadmin.h" +#include "pgstat.h" +#include "storage/bufmgr.h" +#include "storage/bufpage.h" +#include "storage/lmgr.h" +#include "storage/predicate.h" +#include "storage/procarray.h" +#include "storage/smgr.h" +#include "utils/builtins.h" +#include "utils/rel.h" + +PG_FUNCTION_INFO_V1(pg_tdeam_basic_handler); +#ifdef PERCONA_EXT +PG_FUNCTION_INFO_V1(pg_tdeam_handler); +#endif + + +static void reform_and_rewrite_tuple(HeapTuple tuple, + Relation OldHeap, Relation NewHeap, + Datum *values, bool *isnull, RewriteState rwstate); + +static bool SampleHeapTupleVisible(TableScanDesc scan, Buffer buffer, + HeapTuple tuple, + OffsetNumber tupoffset); + +static BlockNumber pg_tdeam_scan_get_blocks_done(HeapScanDesc hscan); + +static const TableAmRoutine pg_tdeam_methods; + + +/* ------------------------------------------------------------------------ + * Slot related callbacks for heap AM + * ------------------------------------------------------------------------ + */ + +static const TupleTableSlotOps * +pg_tdeam_slot_callbacks(Relation relation) +{ + return &TTSOpsTDEBufferHeapTuple; +} + + +/* ------------------------------------------------------------------------ + * Index Scan Callbacks for heap AM + * ------------------------------------------------------------------------ + */ + +static IndexFetchTableData * +pg_tdeam_index_fetch_begin(Relation rel) +{ + IndexFetchHeapData *hscan = palloc0(sizeof(IndexFetchHeapData)); + + hscan->xs_base.rel = rel; + hscan->xs_cbuf = InvalidBuffer; + + return &hscan->xs_base; +} + +static void +pg_tdeam_index_fetch_reset(IndexFetchTableData *scan) +{ + IndexFetchHeapData *hscan = (IndexFetchHeapData *) scan; + + if (BufferIsValid(hscan->xs_cbuf)) + { + ReleaseBuffer(hscan->xs_cbuf); + hscan->xs_cbuf = InvalidBuffer; + } +} + +static void +pg_tdeam_index_fetch_end(IndexFetchTableData *scan) +{ + IndexFetchHeapData *hscan = (IndexFetchHeapData *) scan; + + pg_tdeam_index_fetch_reset(scan); + + pfree(hscan); +} + +static bool +pg_tdeam_index_fetch_tuple(struct IndexFetchTableData *scan, + ItemPointer tid, + Snapshot snapshot, + TupleTableSlot *slot, + bool *call_again, bool *all_dead) +{ + IndexFetchHeapData *hscan = (IndexFetchHeapData *) scan; + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot; + bool got_tdeheap_tuple; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + + /* We can skip the buffer-switching logic if we're in mid-HOT chain. 
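+	 * (A HOT chain never leaves its page, so when *call_again is set the
+	 * buffer pinned on the previous call is already the right one.)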
*/ + if (!*call_again) + { + /* Switch to correct buffer if we don't have it already */ + Buffer prev_buf = hscan->xs_cbuf; + + hscan->xs_cbuf = ReleaseAndReadBuffer(hscan->xs_cbuf, + hscan->xs_base.rel, + ItemPointerGetBlockNumber(tid)); + + /* + * Prune page, but only if we weren't already on this page + */ + if (prev_buf != hscan->xs_cbuf) + tdeheap_page_prune_opt(hscan->xs_base.rel, hscan->xs_cbuf); + } + + /* Obtain share-lock on the buffer so we can examine visibility */ + LockBuffer(hscan->xs_cbuf, BUFFER_LOCK_SHARE); + got_tdeheap_tuple = tdeheap_hot_search_buffer(tid, + hscan->xs_base.rel, + hscan->xs_cbuf, + snapshot, + &bslot->base.tupdata, + all_dead, + !*call_again); + bslot->base.tupdata.t_self = *tid; + LockBuffer(hscan->xs_cbuf, BUFFER_LOCK_UNLOCK); + + if (got_tdeheap_tuple) + { + /* + * Only in a non-MVCC snapshot can more than one member of the HOT + * chain be visible. + */ + *call_again = !IsMVCCSnapshot(snapshot); + + slot->tts_tableOid = RelationGetRelid(scan->rel); + PGTdeExecStoreBufferHeapTuple(scan->rel, &bslot->base.tupdata, slot, hscan->xs_cbuf); + } + else + { + /* We've reached the end of the HOT chain. */ + *call_again = false; + } + + return got_tdeheap_tuple; +} + + +/* ------------------------------------------------------------------------ + * Callbacks for non-modifying operations on individual tuples for heap AM + * ------------------------------------------------------------------------ + */ + +static bool +pg_tdeam_fetch_row_version(Relation relation, + ItemPointer tid, + Snapshot snapshot, + TupleTableSlot *slot) +{ + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot; + Buffer buffer; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + + bslot->base.tupdata.t_self = *tid; + if (tdeheap_fetch(relation, snapshot, &bslot->base.tupdata, &buffer, false)) + { + /* store in slot, transferring existing pin */ + PGTdeExecStorePinnedBufferHeapTuple(relation, &bslot->base.tupdata, slot, buffer); + slot->tts_tableOid = RelationGetRelid(relation); + + return true; + } + + return false; +} + +static bool +pg_tdeam_tuple_tid_valid(TableScanDesc scan, ItemPointer tid) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + + return ItemPointerIsValid(tid) && + ItemPointerGetBlockNumber(tid) < hscan->rs_nblocks; +} + +static bool +pg_tdeam_tuple_satisfies_snapshot(Relation rel, TupleTableSlot *slot, + Snapshot snapshot) +{ + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot; + bool res; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + Assert(BufferIsValid(bslot->buffer)); + + /* + * We need buffer pin and lock to call HeapTupleSatisfiesVisibility. + * Caller should be holding pin, but not lock. + */ + LockBuffer(bslot->buffer, BUFFER_LOCK_SHARE); + res = HeapTupleSatisfiesVisibility(bslot->base.tuple, snapshot, + bslot->buffer); + LockBuffer(bslot->buffer, BUFFER_LOCK_UNLOCK); + + return res; +} + + +/* ---------------------------------------------------------------------------- + * Functions for manipulations of physical tuples for heap AM. 
+ * ---------------------------------------------------------------------------- + */ + +static void +pg_tdeam_tuple_insert(Relation relation, TupleTableSlot *slot, CommandId cid, + int options, BulkInsertState bistate) +{ + bool shouldFree = true; + HeapTuple tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree); + + /* Update the tuple with table oid */ + slot->tts_tableOid = RelationGetRelid(relation); + tuple->t_tableOid = slot->tts_tableOid; + + /* Perform the insertion, and copy the resulting ItemPointer */ + tdeheap_insert(relation, tuple, cid, options, bistate); + ItemPointerCopy(&tuple->t_self, &slot->tts_tid); + + if (shouldFree) + pfree(tuple); +} + +static void +pg_tdeam_tuple_insert_speculative(Relation relation, TupleTableSlot *slot, + CommandId cid, int options, + BulkInsertState bistate, uint32 specToken) +{ + bool shouldFree = true; + HeapTuple tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree); + + /* Update the tuple with table oid */ + slot->tts_tableOid = RelationGetRelid(relation); + tuple->t_tableOid = slot->tts_tableOid; + + HeapTupleHeaderSetSpeculativeToken(tuple->t_data, specToken); + options |= HEAP_INSERT_SPECULATIVE; + + /* Perform the insertion, and copy the resulting ItemPointer */ + tdeheap_insert(relation, tuple, cid, options, bistate); + ItemPointerCopy(&tuple->t_self, &slot->tts_tid); + + if (shouldFree) + pfree(tuple); +} + +static void +pg_tdeam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot, + uint32 specToken, bool succeeded) +{ + bool shouldFree = true; + HeapTuple tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree); + + /* adjust the tuple's state accordingly */ + if (succeeded) + tdeheap_finish_speculative(relation, &slot->tts_tid); + else + tdeheap_abort_speculative(relation, &slot->tts_tid); + + if (shouldFree) + pfree(tuple); +} + +static TM_Result +pg_tdeam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid, + Snapshot snapshot, Snapshot crosscheck, bool wait, + TM_FailureData *tmfd, bool changingPart) +{ + /* + * Currently Deleting of index tuples are handled at vacuum, in case if + * the storage itself is cleaning the dead tuples by itself, it is the + * time to call the index tuple deletion also. + */ + return tdeheap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart); +} + + +static TM_Result +pg_tdeam_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot, + CommandId cid, Snapshot snapshot, Snapshot crosscheck, + bool wait, TM_FailureData *tmfd, + LockTupleMode *lockmode, TU_UpdateIndexes *update_indexes) +{ + bool shouldFree = true; + HeapTuple tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree); + TM_Result result; + + /* Update the tuple with table oid */ + slot->tts_tableOid = RelationGetRelid(relation); + tuple->t_tableOid = slot->tts_tableOid; + + result = tdeheap_update(relation, otid, tuple, cid, crosscheck, wait, + tmfd, lockmode, update_indexes); + ItemPointerCopy(&tuple->t_self, &slot->tts_tid); + + /* + * Decide whether new index entries are needed for the tuple + * + * Note: tdeheap_update returns the tid (location) of the new tuple in the + * t_self field. + * + * If the update is not HOT, we must update all indexes. If the update is + * HOT, it could be that we updated summarized columns, so we either + * update only summarized indexes, or none at all. 
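+	 * (Summarizing indexes such as BRIN store per-range summaries rather
+	 * than per-row TIDs, so a HOT update that changed a summarized column
+	 * must still update them.)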
+ */ + if (result != TM_Ok) + { + Assert(*update_indexes == TU_None); + *update_indexes = TU_None; + } + else if (!HeapTupleIsHeapOnly(tuple)) + Assert(*update_indexes == TU_All); + else + Assert((*update_indexes == TU_Summarizing) || + (*update_indexes == TU_None)); + + if (shouldFree) + pfree(tuple); + + return result; +} + +static TM_Result +pg_tdeam_tuple_lock(Relation relation, ItemPointer tid, Snapshot snapshot, + TupleTableSlot *slot, CommandId cid, LockTupleMode mode, + LockWaitPolicy wait_policy, uint8 flags, + TM_FailureData *tmfd) +{ + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot; + TM_Result result; + Buffer buffer; + HeapTuple tuple = &bslot->base.tupdata; + bool follow_updates; + + follow_updates = (flags & TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS) != 0; + tmfd->traversed = false; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + +tuple_lock_retry: + tuple->t_self = *tid; + result = tdeheap_lock_tuple(relation, tuple, cid, mode, wait_policy, + follow_updates, &buffer, tmfd); + + if (result == TM_Updated && + (flags & TUPLE_LOCK_FLAG_FIND_LAST_VERSION)) + { + /* Should not encounter speculative tuple on recheck */ + Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data)); + + ReleaseBuffer(buffer); + + if (!ItemPointerEquals(&tmfd->ctid, &tuple->t_self)) + { + SnapshotData SnapshotDirty; + TransactionId priorXmax; + + /* it was updated, so look at the updated version */ + *tid = tmfd->ctid; + /* updated row should have xmin matching this xmax */ + priorXmax = tmfd->xmax; + + /* signal that a tuple later in the chain is getting locked */ + tmfd->traversed = true; + + /* + * fetch target tuple + * + * Loop here to deal with updated or busy tuples + */ + InitDirtySnapshot(SnapshotDirty); + for (;;) + { + if (ItemPointerIndicatesMovedPartitions(tid)) + ereport(ERROR, + (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE), + errmsg("tuple to be locked was already moved to another partition due to concurrent update"))); + + tuple->t_self = *tid; + if (tdeheap_fetch(relation, &SnapshotDirty, tuple, &buffer, true)) + { + /* + * If xmin isn't what we're expecting, the slot must have + * been recycled and reused for an unrelated tuple. This + * implies that the latest version of the row was deleted, + * so we need do nothing. (Should be safe to examine xmin + * without getting buffer's content lock. We assume + * reading a TransactionId to be atomic, and Xmin never + * changes in an existing tuple, except to invalid or + * frozen, and neither of those can match priorXmax.) + */ + if (!TransactionIdEquals(HeapTupleHeaderGetXmin(tuple->t_data), + priorXmax)) + { + ReleaseBuffer(buffer); + return TM_Deleted; + } + + /* otherwise xmin should not be dirty... */ + if (TransactionIdIsValid(SnapshotDirty.xmin)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("t_xmin %u is uncommitted in tuple (%u,%u) to be updated in table \"%s\"", + SnapshotDirty.xmin, + ItemPointerGetBlockNumber(&tuple->t_self), + ItemPointerGetOffsetNumber(&tuple->t_self), + RelationGetRelationName(relation)))); + + /* + * If tuple is being updated by other transaction then we + * have to wait for its commit/abort, or die trying. 
+ */ + if (TransactionIdIsValid(SnapshotDirty.xmax)) + { + ReleaseBuffer(buffer); + switch (wait_policy) + { + case LockWaitBlock: + XactLockTableWait(SnapshotDirty.xmax, + relation, &tuple->t_self, + XLTW_FetchUpdated); + break; + case LockWaitSkip: + if (!ConditionalXactLockTableWait(SnapshotDirty.xmax)) + /* skip instead of waiting */ + return TM_WouldBlock; + break; + case LockWaitError: + if (!ConditionalXactLockTableWait(SnapshotDirty.xmax)) + ereport(ERROR, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("could not obtain lock on row in relation \"%s\"", + RelationGetRelationName(relation)))); + break; + } + continue; /* loop back to repeat tdeheap_fetch */ + } + + /* + * If tuple was inserted by our own transaction, we have + * to check cmin against cid: cmin >= current CID means + * our command cannot see the tuple, so we should ignore + * it. Otherwise tdeheap_lock_tuple() will throw an error, + * and so would any later attempt to update or delete the + * tuple. (We need not check cmax because + * HeapTupleSatisfiesDirty will consider a tuple deleted + * by our transaction dead, regardless of cmax.) We just + * checked that priorXmax == xmin, so we can test that + * variable instead of doing HeapTupleHeaderGetXmin again. + */ + if (TransactionIdIsCurrentTransactionId(priorXmax) && + HeapTupleHeaderGetCmin(tuple->t_data) >= cid) + { + tmfd->xmax = priorXmax; + + /* + * Cmin is the problematic value, so store that. See + * above. + */ + tmfd->cmax = HeapTupleHeaderGetCmin(tuple->t_data); + ReleaseBuffer(buffer); + return TM_SelfModified; + } + + /* + * This is a live tuple, so try to lock it again. + */ + ReleaseBuffer(buffer); + goto tuple_lock_retry; + } + + /* + * If the referenced slot was actually empty, the latest + * version of the row must have been deleted, so we need do + * nothing. + */ + if (tuple->t_data == NULL) + { + Assert(!BufferIsValid(buffer)); + return TM_Deleted; + } + + /* + * As above, if xmin isn't what we're expecting, do nothing. + */ + if (!TransactionIdEquals(HeapTupleHeaderGetXmin(tuple->t_data), + priorXmax)) + { + ReleaseBuffer(buffer); + return TM_Deleted; + } + + /* + * If we get here, the tuple was found but failed + * SnapshotDirty. Assuming the xmin is either a committed xact + * or our own xact (as it certainly should be if we're trying + * to modify the tuple), this must mean that the row was + * updated or deleted by either a committed xact or our own + * xact. If it was deleted, we can ignore it; if it was + * updated then chain up to the next version and repeat the + * whole process. + * + * As above, it should be safe to examine xmax and t_ctid + * without the buffer content lock, because they can't be + * changing. We'd better hold a buffer pin though. 
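+				 * (The pin prevents concurrent pruning from defragmenting
+				 * the page under us, since that requires a buffer cleanup
+				 * lock.)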
+ */ + if (ItemPointerEquals(&tuple->t_self, &tuple->t_data->t_ctid)) + { + /* deleted, so forget about it */ + ReleaseBuffer(buffer); + return TM_Deleted; + } + + /* updated, so look at the updated row */ + *tid = tuple->t_data->t_ctid; + /* updated row should have xmin matching this xmax */ + priorXmax = HeapTupleHeaderGetUpdateXid(tuple->t_data); + ReleaseBuffer(buffer); + /* loop back to fetch next in chain */ + } + } + else + { + /* tuple was deleted, so give up */ + return TM_Deleted; + } + } + + slot->tts_tableOid = RelationGetRelid(relation); + tuple->t_tableOid = slot->tts_tableOid; + + /* store in slot, transferring existing pin */ + PGTdeExecStorePinnedBufferHeapTuple(relation, tuple, slot, buffer); + + return result; +} + + +/* ------------------------------------------------------------------------ + * DDL related callbacks for heap AM. + * ------------------------------------------------------------------------ + */ + +static void +pg_tdeam_relation_set_new_filelocator(Relation rel, + const RelFileLocator *newrlocator, + char persistence, + TransactionId *freezeXid, + MultiXactId *minmulti) +{ + SMgrRelation srel; +#ifdef PERCONA_EXT + RelFileLocator oldlocator = rel->rd_locator; +#endif + + /* + * Initialize to the minimum XID that could put tuples in the table. We + * know that no xacts older than RecentXmin are still running, so that + * will do. + */ + *freezeXid = RecentXmin; + + /* + * Similarly, initialize the minimum Multixact to the first value that + * could possibly be stored in tuples in the table. Running transactions + * could reuse values from their local cache, so we are careful to + * consider all currently running multis. + * + * XXX this could be refined further, but is it worth the hassle? + */ + *minmulti = GetOldestMultiXactId(); + +#ifdef PERCONA_EXT + srel = RelationCreateStorage(oldlocator, *newrlocator, persistence, true); +#else + srel = RelationCreateStorage(*newrlocator, persistence, true); +#endif + + /* + * If required, set up an init fork for an unlogged table so that it can + * be correctly reinitialized on restart. Recovery may remove it while + * replaying, for example, an XLOG_DBASE_CREATE* or XLOG_TBLSPC_CREATE + * record. Therefore, logging is necessary even if wal_level=minimal. + */ + if (persistence == RELPERSISTENCE_UNLOGGED) + { + Assert(rel->rd_rel->relkind == RELKIND_RELATION || + rel->rd_rel->relkind == RELKIND_MATVIEW || + rel->rd_rel->relkind == RELKIND_TOASTVALUE); +#ifdef PERCONA_EXT + smgrcreate(oldlocator, srel, INIT_FORKNUM, false); +#else + smgrcreate(srel, INIT_FORKNUM, false); +#endif + log_smgrcreate(newrlocator, INIT_FORKNUM); + } + + smgrclose(srel); + + /* Update TDE filemap */ + if (rel->rd_rel->relkind == RELKIND_RELATION || + rel->rd_rel->relkind == RELKIND_MATVIEW || + rel->rd_rel->relkind == RELKIND_TOASTVALUE) + { + ereport(DEBUG1, + (errmsg("creating key file for relation %s", RelationGetRelationName(rel)))); + + pg_tde_create_heap_basic_key(newrlocator); + } +} + +static void +pg_tdeam_relation_nontransactional_truncate(Relation rel) +{ + RelationTruncate(rel, 0); +} + +static void +pg_tdeam_relation_copy_data(Relation rel, const RelFileLocator *newrlocator) +{ + SMgrRelation dstrel; + + /* + * Since we copy the file directly without looking at the shared buffers, + * we'd better first flush out any pages of the source relation that are + * in shared buffers. We assume no new changes will be made while we are + * holding exclusive lock on the rel. 
+ */ + FlushRelationBuffers(rel); + + /* + * Create and copy all forks of the relation, and schedule unlinking of + * old physical files. + * + * NOTE: any conflict in relfilenumber value will be caught in + * RelationCreateStorage(). + */ +#ifdef PERCONA_EXT + dstrel = RelationCreateStorage(rel->rd_locator, *newrlocator, rel->rd_rel->relpersistence, true); +#else + dstrel = RelationCreateStorage(*newrlocator, rel->rd_rel->relpersistence, true); +#endif + + /* copy main fork */ + RelationCopyStorage(RelationGetSmgr(rel), dstrel, MAIN_FORKNUM, + rel->rd_rel->relpersistence); + + /* copy those extra forks that exist */ + for (ForkNumber forkNum = MAIN_FORKNUM + 1; + forkNum <= MAX_FORKNUM; forkNum++) + { + if (smgrexists(RelationGetSmgr(rel), forkNum)) + { +#ifdef PERCONA_EXT + smgrcreate(rel->rd_locator, dstrel, forkNum, false); +#else + smgrcreate(dstrel, forkNum, false); +#endif + + /* + * WAL log creation if the relation is persistent, or this is the + * init fork of an unlogged relation. + */ + if (RelationIsPermanent(rel) || + (rel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED && + forkNum == INIT_FORKNUM)) + log_smgrcreate(newrlocator, forkNum); + RelationCopyStorage(RelationGetSmgr(rel), dstrel, forkNum, + rel->rd_rel->relpersistence); + } + } + + pg_tde_move_rel_key(newrlocator, &rel->rd_locator); + + /* drop old relation, and close new one */ + RelationDropStorage(rel); + smgrclose(dstrel); +} + +static void +pg_tdeam_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap, + Relation OldIndex, bool use_sort, + TransactionId OldestXmin, + TransactionId *xid_cutoff, + MultiXactId *multi_cutoff, + double *num_tuples, + double *tups_vacuumed, + double *tups_recently_dead) +{ + RewriteState rwstate; + IndexScanDesc indexScan; + TableScanDesc tableScan; + HeapScanDesc heapScan; + bool is_system_catalog; + Tuplesortstate *tuplesort; + TupleDesc oldTupDesc = RelationGetDescr(OldHeap); + TupleDesc newTupDesc = RelationGetDescr(NewHeap); + TupleTableSlot *slot; + int natts; + Datum *values; + bool *isnull; + BufferHeapTupleTableSlot *hslot; + BlockNumber prev_cblock = InvalidBlockNumber; + + /* Remember if it's a system catalog */ + is_system_catalog = IsSystemRelation(OldHeap); + + /* + * Valid smgr_targblock implies something already wrote to the relation. + * This may be harmless, but this function hasn't planned for it. + */ + Assert(RelationGetTargetBlock(NewHeap) == InvalidBlockNumber); + + /* Preallocate values/isnull arrays */ + natts = newTupDesc->natts; + values = (Datum *) palloc(natts * sizeof(Datum)); + isnull = (bool *) palloc(natts * sizeof(bool)); + + /* Initialize the rewrite operation */ + rwstate = begin_tdeheap_rewrite(OldHeap, NewHeap, OldestXmin, *xid_cutoff, + *multi_cutoff); + + + /* Set up sorting if wanted */ + if (use_sort) + tuplesort = tuplesort_begin_cluster(oldTupDesc, OldIndex, + maintenance_work_mem, + NULL, TUPLESORT_NONE); + else + tuplesort = NULL; + + /* + * Prepare to scan the OldHeap. To ensure we see recently-dead tuples + * that still need to be copied, we scan with SnapshotAny and use + * HeapTupleSatisfiesVacuum for the visibility test. 
+ */ + if (OldIndex != NULL && !use_sort) + { + const int ci_index[] = { + PROGRESS_CLUSTER_PHASE, + PROGRESS_CLUSTER_INDEX_RELID + }; + int64 ci_val[2]; + + /* Set phase and OIDOldIndex to columns */ + ci_val[0] = PROGRESS_CLUSTER_PHASE_INDEX_SCAN_HEAP; + ci_val[1] = RelationGetRelid(OldIndex); + pgstat_progress_update_multi_param(2, ci_index, ci_val); + + tableScan = NULL; + heapScan = NULL; + indexScan = index_beginscan(OldHeap, OldIndex, SnapshotAny, 0, 0); + index_rescan(indexScan, NULL, 0, NULL, 0); + } + else + { + /* In scan-and-sort mode and also VACUUM FULL, set phase */ + pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE, + PROGRESS_CLUSTER_PHASE_SEQ_SCAN_HEAP); + + tableScan = table_beginscan(OldHeap, SnapshotAny, 0, (ScanKey) NULL); + heapScan = (HeapScanDesc) tableScan; + indexScan = NULL; + + /* Set total heap blocks */ + pgstat_progress_update_param(PROGRESS_CLUSTER_TOTAL_HEAP_BLKS, + heapScan->rs_nblocks); + } + + slot = table_slot_create(OldHeap, NULL); + hslot = (BufferHeapTupleTableSlot *) slot; + + /* + * Scan through the OldHeap, either in OldIndex order or sequentially; + * copy each tuple into the NewHeap, or transiently to the tuplesort + * module. Note that we don't bother sorting dead tuples (they won't get + * to the new table anyway). + */ + for (;;) + { + HeapTuple tuple; + Buffer buf; + bool isdead; + + CHECK_FOR_INTERRUPTS(); + + if (indexScan != NULL) + { + if (!index_getnext_slot(indexScan, ForwardScanDirection, slot)) + break; + + /* Since we used no scan keys, should never need to recheck */ + if (indexScan->xs_recheck) + elog(ERROR, "CLUSTER does not support lossy index conditions"); + } + else + { + if (!table_scan_getnextslot(tableScan, ForwardScanDirection, slot)) + { + /* + * If the last pages of the scan were empty, we would go to + * the next phase while tdeheap_blks_scanned != tdeheap_blks_total. + * Instead, to ensure that tdeheap_blks_scanned is equivalent to + * tdeheap_blks_total after the table scan phase, this parameter + * is manually updated to the correct value when the table + * scan finishes. + */ + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_BLKS_SCANNED, + heapScan->rs_nblocks); + break; + } + + /* + * In scan-and-sort mode and also VACUUM FULL, set heap blocks + * scanned + * + * Note that heapScan may start at an offset and wrap around, i.e. + * rs_startblock may be >0, and rs_cblock may end with a number + * below rs_startblock. To prevent showing this wraparound to the + * user, we offset rs_cblock by rs_startblock (modulo rs_nblocks). + */ + if (prev_cblock != heapScan->rs_cblock) + { + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_BLKS_SCANNED, + (heapScan->rs_cblock + + heapScan->rs_nblocks - + heapScan->rs_startblock + ) % heapScan->rs_nblocks + 1); + prev_cblock = heapScan->rs_cblock; + } + } + + tuple = ExecFetchSlotHeapTuple(slot, false, NULL); + buf = hslot->buffer; + + LockBuffer(buf, BUFFER_LOCK_SHARE); + + switch (HeapTupleSatisfiesVacuum(tuple, OldestXmin, buf)) + { + case HEAPTUPLE_DEAD: + /* Definitely dead */ + isdead = true; + break; + case HEAPTUPLE_RECENTLY_DEAD: + *tups_recently_dead += 1; + /* fall through */ + case HEAPTUPLE_LIVE: + /* Live or recently dead, must copy it */ + isdead = false; + break; + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * Since we hold exclusive lock on the relation, normally the + * only way to see this is if it was inserted earlier in our + * own transaction. However, it can happen in system + * catalogs, since we tend to release write lock before commit + * there. 
Give a warning if neither case applies; but in any + * case we had better copy it. + */ + if (!is_system_catalog && + !TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tuple->t_data))) + elog(WARNING, "concurrent insert in progress within table \"%s\"", + RelationGetRelationName(OldHeap)); + /* treat as live */ + isdead = false; + break; + case HEAPTUPLE_DELETE_IN_PROGRESS: + + /* + * Similar situation to INSERT_IN_PROGRESS case. + */ + if (!is_system_catalog && + !TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(tuple->t_data))) + elog(WARNING, "concurrent delete in progress within table \"%s\"", + RelationGetRelationName(OldHeap)); + /* treat as recently dead */ + *tups_recently_dead += 1; + isdead = false; + break; + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + isdead = false; /* keep compiler quiet */ + break; + } + + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + + if (isdead) + { + *tups_vacuumed += 1; + /* heap rewrite module still needs to see it... */ + if (rewrite_tdeheap_dead_tuple(rwstate, tuple)) + { + /* A previous recently-dead tuple is now known dead */ + *tups_vacuumed += 1; + *tups_recently_dead -= 1; + } + continue; + } + + *num_tuples += 1; + if (tuplesort != NULL) + { + tuplesort_putheaptuple(tuplesort, tuple); + + /* + * In scan-and-sort mode, report increase in number of tuples + * scanned + */ + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_TUPLES_SCANNED, + *num_tuples); + } + else + { + const int ct_index[] = { + PROGRESS_CLUSTER_HEAP_TUPLES_SCANNED, + PROGRESS_CLUSTER_HEAP_TUPLES_WRITTEN + }; + int64 ct_val[2]; + + reform_and_rewrite_tuple(tuple, OldHeap, NewHeap, + values, isnull, rwstate); + + /* + * In indexscan mode and also VACUUM FULL, report increase in + * number of tuples scanned and written + */ + ct_val[0] = *num_tuples; + ct_val[1] = *num_tuples; + pgstat_progress_update_multi_param(2, ct_index, ct_val); + } + } + + if (indexScan != NULL) + index_endscan(indexScan); + if (tableScan != NULL) + table_endscan(tableScan); + if (slot) + ExecDropSingleTupleTableSlot(slot); + + /* + * In scan-and-sort mode, complete the sort, then read out all live tuples + * from the tuplestore and write them to the new relation. + */ + if (tuplesort != NULL) + { + double n_tuples = 0; + + /* Report that we are now sorting tuples */ + pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE, + PROGRESS_CLUSTER_PHASE_SORT_TUPLES); + + tuplesort_performsort(tuplesort); + + /* Report that we are now writing new heap */ + pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE, + PROGRESS_CLUSTER_PHASE_WRITE_NEW_HEAP); + + for (;;) + { + HeapTuple tuple; + + CHECK_FOR_INTERRUPTS(); + + tuple = tuplesort_getheaptuple(tuplesort, true); + if (tuple == NULL) + break; + + n_tuples += 1; + reform_and_rewrite_tuple(tuple, + OldHeap, NewHeap, + values, isnull, + rwstate); + /* Report n_tuples */ + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_TUPLES_WRITTEN, + n_tuples); + } + + tuplesort_end(tuplesort); + } + + /* Write out any remaining tuples, and fsync if needed */ + end_tdeheap_rewrite(rwstate); + + /* Clean up */ + pfree(values); + pfree(isnull); +} + +/* + * Prepare to analyze the next block in the read stream. Returns false if + * the stream is exhausted and true otherwise. The scan must have been started + * with SO_TYPE_ANALYZE option. + * + * This routine holds a buffer pin and lock on the heap page. They are held + * until pg_tdeam_scan_analyze_next_tuple() returns false. 
That is until all the + * items of the heap page are analyzed. + */ +static bool +pg_tdeam_scan_analyze_next_block(TableScanDesc scan, ReadStream *stream) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + + /* + * We must maintain a pin on the target page's buffer to ensure that + * concurrent activity - e.g. HOT pruning - doesn't delete tuples out from + * under us. It comes from the stream already pinned. We also choose to + * hold sharelock on the buffer throughout --- we could release and + * re-acquire sharelock for each tuple, but since we aren't doing much + * work per tuple, the extra lock traffic is probably better avoided. + */ + hscan->rs_cbuf = read_stream_next_buffer(stream, NULL); + if (!BufferIsValid(hscan->rs_cbuf)) + return false; + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + + hscan->rs_cblock = BufferGetBlockNumber(hscan->rs_cbuf); + hscan->rs_cindex = FirstOffsetNumber; + return true; +} + +static bool +pg_tdeam_scan_analyze_next_tuple(TableScanDesc scan, TransactionId OldestXmin, + double *liverows, double *deadrows, + TupleTableSlot *slot) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + Page targpage; + OffsetNumber maxoffset; + BufferHeapTupleTableSlot *hslot; + + Assert(TTS_IS_TDE_BUFFERTUPLE(slot)); + + hslot = (BufferHeapTupleTableSlot *) slot; + targpage = BufferGetPage(hscan->rs_cbuf); + maxoffset = PageGetMaxOffsetNumber(targpage); + + /* Inner loop over all tuples on the selected page */ + for (; hscan->rs_cindex <= maxoffset; hscan->rs_cindex++) + { + ItemId itemid; + HeapTuple targtuple = &hslot->base.tupdata; + bool sample_it = false; + + itemid = PageGetItemId(targpage, hscan->rs_cindex); + + /* + * We ignore unused and redirect line pointers. DEAD line pointers + * should be counted as dead, because we need vacuum to run to get rid + * of them. Note that this rule agrees with the way that + * tdeheap_page_prune_and_freeze() counts things. + */ + if (!ItemIdIsNormal(itemid)) + { + if (ItemIdIsDead(itemid)) + *deadrows += 1; + continue; + } + + ItemPointerSet(&targtuple->t_self, hscan->rs_cblock, hscan->rs_cindex); + + targtuple->t_tableOid = RelationGetRelid(scan->rs_rd); + targtuple->t_data = (HeapTupleHeader) PageGetItem(targpage, itemid); + targtuple->t_len = ItemIdGetLength(itemid); + + switch (HeapTupleSatisfiesVacuum(targtuple, OldestXmin, + hscan->rs_cbuf)) + { + case HEAPTUPLE_LIVE: + sample_it = true; + *liverows += 1; + break; + + case HEAPTUPLE_DEAD: + case HEAPTUPLE_RECENTLY_DEAD: + /* Count dead and recently-dead rows */ + *deadrows += 1; + break; + + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * Insert-in-progress rows are not counted. We assume that + * when the inserting transaction commits or aborts, it will + * send a stats message to increment the proper count. This + * works right only if that transaction ends after we finish + * analyzing the table; if things happen in the other order, + * its stats update will be overwritten by ours. However, the + * error will be large only if the other transaction runs long + * enough to insert many tuples, so assuming it will finish + * after us is the safer option. + * + * A special case is that the inserting transaction might be + * our own. In this case we should count and sample the row, + * to accommodate users who load a table and analyze it in one + * transaction. (pgstat_report_analyze has to adjust the + * numbers we report to the cumulative stats system to make + * this come out right.) 
+ */ + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(targtuple->t_data))) + { + sample_it = true; + *liverows += 1; + } + break; + + case HEAPTUPLE_DELETE_IN_PROGRESS: + + /* + * We count and sample delete-in-progress rows the same as + * live ones, so that the stats counters come out right if the + * deleting transaction commits after us, per the same + * reasoning given above. + * + * If the delete was done by our own transaction, however, we + * must count the row as dead to make pgstat_report_analyze's + * stats adjustments come out right. (Note: this works out + * properly when the row was both inserted and deleted in our + * xact.) + * + * The net effect of these choices is that we act as though an + * IN_PROGRESS transaction hasn't happened yet, except if it + * is our own transaction, which we assume has happened. + * + * This approach ensures that we behave sanely if we see both + * the pre-image and post-image rows for a row being updated + * by a concurrent transaction: we will sample the pre-image + * but not the post-image. We also get sane results if the + * concurrent transaction never commits. + */ + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(targtuple->t_data))) + *deadrows += 1; + else + { + sample_it = true; + *liverows += 1; + } + break; + + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + break; + } + + if (sample_it) + { + PGTdeExecStoreBufferHeapTuple(scan->rs_rd, targtuple, slot, hscan->rs_cbuf); + hscan->rs_cindex++; + + /* note that we leave the buffer locked here! */ + return true; + } + } + + /* Now release the lock and pin on the page */ + UnlockReleaseBuffer(hscan->rs_cbuf); + hscan->rs_cbuf = InvalidBuffer; + /* also prevent old slot contents from having pin on page */ + ExecClearTuple(slot); + + return false; +} + +static double +pg_tdeam_index_build_range_scan(Relation heapRelation, + Relation indexRelation, + IndexInfo *indexInfo, + bool allow_sync, + bool anyvisible, + bool progress, + BlockNumber start_blockno, + BlockNumber numblocks, + IndexBuildCallback callback, + void *callback_state, + TableScanDesc scan) +{ + HeapScanDesc hscan; + bool is_system_catalog; + bool checking_uniqueness; + HeapTuple heapTuple; + Datum values[INDEX_MAX_KEYS]; + bool isnull[INDEX_MAX_KEYS]; + double reltuples; + ExprState *predicate; + TupleTableSlot *slot; + EState *estate; + ExprContext *econtext; + Snapshot snapshot; + bool need_unregister_snapshot = false; + TransactionId OldestXmin; + BlockNumber previous_blkno = InvalidBlockNumber; + BlockNumber root_blkno = InvalidBlockNumber; + OffsetNumber root_offsets[MaxHeapTuplesPerPage]; + + /* + * sanity checks + */ + Assert(OidIsValid(indexRelation->rd_rel->relam)); + + /* Remember if it's a system catalog */ + is_system_catalog = IsSystemRelation(heapRelation); + + /* See whether we're verifying uniqueness/exclusion properties */ + checking_uniqueness = (indexInfo->ii_Unique || + indexInfo->ii_ExclusionOps != NULL); + + /* + * "Any visible" mode is not compatible with uniqueness checks; make sure + * only one of those is requested. + */ + Assert(!(anyvisible && checking_uniqueness)); + + /* + * Need an EState for evaluation of index expressions and partial-index + * predicates. Also a slot to hold the current tuple. 
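+	 *
+	 * For a partial index (say one built with a WHERE qual; the example is
+	 * illustrative), indexInfo->ii_Predicate carries that qual, and the
+	 * ExecQual() call further down decides per tuple whether an index entry
+	 * is formed at all.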
+ */ + estate = CreateExecutorState(); + econtext = GetPerTupleExprContext(estate); + slot = table_slot_create(heapRelation, NULL); + + /* Arrange for econtext's scan tuple to be the tuple under test */ + econtext->ecxt_scantuple = slot; + + /* Set up execution state for predicate, if any. */ + predicate = ExecPrepareQual(indexInfo->ii_Predicate, estate); + + /* + * Prepare for scan of the base relation. In a normal index build, we use + * SnapshotAny because we must retrieve all tuples and do our own time + * qual checks (because we have to index RECENTLY_DEAD tuples). In a + * concurrent build, or during bootstrap, we take a regular MVCC snapshot + * and index whatever's live according to that. + */ + OldestXmin = InvalidTransactionId; + + /* okay to ignore lazy VACUUMs here */ + if (!IsBootstrapProcessingMode() && !indexInfo->ii_Concurrent) + OldestXmin = GetOldestNonRemovableTransactionId(heapRelation); + + if (!scan) + { + /* + * Serial index build. + * + * Must begin our own heap scan in this case. We may also need to + * register a snapshot whose lifetime is under our direct control. + */ + if (!TransactionIdIsValid(OldestXmin)) + { + snapshot = RegisterSnapshot(GetTransactionSnapshot()); + need_unregister_snapshot = true; + } + else + snapshot = SnapshotAny; + + scan = table_beginscan_strat(heapRelation, /* relation */ + snapshot, /* snapshot */ + 0, /* number of keys */ + NULL, /* scan key */ + true, /* buffer access strategy OK */ + allow_sync); /* syncscan OK? */ + } + else + { + /* + * Parallel index build. + * + * Parallel case never registers/unregisters own snapshot. Snapshot + * is taken from parallel heap scan, and is SnapshotAny or an MVCC + * snapshot, based on same criteria as serial case. + */ + Assert(!IsBootstrapProcessingMode()); + Assert(allow_sync); + snapshot = scan->rs_snapshot; + } + + hscan = (HeapScanDesc) scan; + + /* + * Must have called GetOldestNonRemovableTransactionId() if using + * SnapshotAny. Shouldn't have for an MVCC snapshot. (It's especially + * worth checking this for parallel builds, since ambuild routines that + * support parallel builds must work these details out for themselves.) + */ + Assert(snapshot == SnapshotAny || IsMVCCSnapshot(snapshot)); + Assert(snapshot == SnapshotAny ? TransactionIdIsValid(OldestXmin) : + !TransactionIdIsValid(OldestXmin)); + Assert(snapshot == SnapshotAny || !anyvisible); + + /* Publish number of blocks to scan */ + if (progress) + { + BlockNumber nblocks; + + if (hscan->rs_base.rs_parallel != NULL) + { + ParallelBlockTableScanDesc pbscan; + + pbscan = (ParallelBlockTableScanDesc) hscan->rs_base.rs_parallel; + nblocks = pbscan->phs_nblocks; + } + else + nblocks = hscan->rs_nblocks; + + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_TOTAL, + nblocks); + } + + /* set our scan endpoints */ + if (!allow_sync) + tdeheap_setscanlimits(scan, start_blockno, numblocks); + else + { + /* syncscan can only be requested on whole relation */ + Assert(start_blockno == 0); + Assert(numblocks == InvalidBlockNumber); + } + + reltuples = 0; + + /* + * Scan all tuples in the base relation. + */ + while ((heapTuple = tdeheap_getnext(scan, ForwardScanDirection)) != NULL) + { + bool tupleIsAlive; + + CHECK_FOR_INTERRUPTS(); + + /* Report scan progress, if asked to. 
*/ + if (progress) + { + BlockNumber blocks_done = pg_tdeam_scan_get_blocks_done(hscan); + + if (blocks_done != previous_blkno) + { + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE, + blocks_done); + previous_blkno = blocks_done; + } + } + + /* + * When dealing with a HOT-chain of updated tuples, we want to index + * the values of the live tuple (if any), but index it under the TID + * of the chain's root tuple. This approach is necessary to preserve + * the HOT-chain structure in the heap. So we need to be able to find + * the root item offset for every tuple that's in a HOT-chain. When + * first reaching a new page of the relation, call + * tdeheap_get_root_tuples() to build a map of root item offsets on the + * page. + * + * It might look unsafe to use this information across buffer + * lock/unlock. However, we hold ShareLock on the table so no + * ordinary insert/update/delete should occur; and we hold pin on the + * buffer continuously while visiting the page, so no pruning + * operation can occur either. + * + * In cases with only ShareUpdateExclusiveLock on the table, it's + * possible for some HOT tuples to appear that we didn't know about + * when we first read the page. To handle that case, we re-obtain the + * list of root offsets when a HOT tuple points to a root item that we + * don't know about. + * + * Also, although our opinions about tuple liveness could change while + * we scan the page (due to concurrent transaction commits/aborts), + * the chain root locations won't, so this info doesn't need to be + * rebuilt after waiting for another transaction. + * + * Note the implied assumption that there is no more than one live + * tuple per HOT-chain --- else we could create more than one index + * entry pointing to the same root tuple. + */ + if (hscan->rs_cblock != root_blkno) + { + Page page = BufferGetPage(hscan->rs_cbuf); + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + tdeheap_get_root_tuples(page, root_offsets); + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + root_blkno = hscan->rs_cblock; + } + + if (snapshot == SnapshotAny) + { + /* do our own time qual check */ + bool indexIt; + TransactionId xwait; + + recheck: + + /* + * We could possibly get away with not locking the buffer here, + * since caller should hold ShareLock on the relation, but let's + * be conservative about it. (This remark is still correct even + * with HOT-pruning: our pin on the buffer prevents pruning.) + */ + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + + /* + * The criteria for counting a tuple as live in this block need to + * match what analyze.c's pg_tdeam_scan_analyze_next_tuple() does, + * otherwise CREATE INDEX and ANALYZE may produce wildly different + * reltuples values, e.g. when there are many recently-dead + * tuples. + */ + switch (HeapTupleSatisfiesVacuum(heapTuple, OldestXmin, + hscan->rs_cbuf)) + { + case HEAPTUPLE_DEAD: + /* Definitely dead, we can ignore it */ + indexIt = false; + tupleIsAlive = false; + break; + case HEAPTUPLE_LIVE: + /* Normal case, index and unique-check it */ + indexIt = true; + tupleIsAlive = true; + /* Count it as live, too */ + reltuples += 1; + break; + case HEAPTUPLE_RECENTLY_DEAD: + + /* + * If tuple is recently deleted then we must index it + * anyway to preserve MVCC semantics. (Pre-existing + * transactions could try to use the index after we finish + * building it, and may need to see such tuples.) + * + * However, if it was HOT-updated then we must only index + * the live tuple at the end of the HOT-chain. 
Since this + * breaks semantics for pre-existing snapshots, mark the + * index as unusable for them. + * + * We don't count recently-dead tuples in reltuples, even + * if we index them; see pg_tdeam_scan_analyze_next_tuple(). + */ + if (HeapTupleIsHotUpdated(heapTuple)) + { + indexIt = false; + /* mark the index as unsafe for old snapshots */ + indexInfo->ii_BrokenHotChain = true; + } + else + indexIt = true; + /* In any case, exclude the tuple from unique-checking */ + tupleIsAlive = false; + break; + case HEAPTUPLE_INSERT_IN_PROGRESS: + + /* + * In "anyvisible" mode, this tuple is visible and we + * don't need any further checks. + */ + if (anyvisible) + { + indexIt = true; + tupleIsAlive = true; + reltuples += 1; + break; + } + + /* + * Since caller should hold ShareLock or better, normally + * the only way to see this is if it was inserted earlier + * in our own transaction. However, it can happen in + * system catalogs, since we tend to release write lock + * before commit there. Give a warning if neither case + * applies. + */ + xwait = HeapTupleHeaderGetXmin(heapTuple->t_data); + if (!TransactionIdIsCurrentTransactionId(xwait)) + { + if (!is_system_catalog) + elog(WARNING, "concurrent insert in progress within table \"%s\"", + RelationGetRelationName(heapRelation)); + + /* + * If we are performing uniqueness checks, indexing + * such a tuple could lead to a bogus uniqueness + * failure. In that case we wait for the inserting + * transaction to finish and check again. + */ + if (checking_uniqueness) + { + /* + * Must drop the lock on the buffer before we wait + */ + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + XactLockTableWait(xwait, heapRelation, + &heapTuple->t_self, + XLTW_InsertIndexUnique); + CHECK_FOR_INTERRUPTS(); + goto recheck; + } + } + else + { + /* + * For consistency with + * pg_tdeam_scan_analyze_next_tuple(), count + * HEAPTUPLE_INSERT_IN_PROGRESS tuples as live only + * when inserted by our own transaction. + */ + reltuples += 1; + } + + /* + * We must index such tuples, since if the index build + * commits then they're good. + */ + indexIt = true; + tupleIsAlive = true; + break; + case HEAPTUPLE_DELETE_IN_PROGRESS: + + /* + * As with INSERT_IN_PROGRESS case, this is unexpected + * unless it's our own deletion or a system catalog; but + * in anyvisible mode, this tuple is visible. + */ + if (anyvisible) + { + indexIt = true; + tupleIsAlive = false; + reltuples += 1; + break; + } + + xwait = HeapTupleHeaderGetUpdateXid(heapTuple->t_data); + if (!TransactionIdIsCurrentTransactionId(xwait)) + { + if (!is_system_catalog) + elog(WARNING, "concurrent delete in progress within table \"%s\"", + RelationGetRelationName(heapRelation)); + + /* + * If we are performing uniqueness checks, assuming + * the tuple is dead could lead to missing a + * uniqueness violation. In that case we wait for the + * deleting transaction to finish and check again. + * + * Also, if it's a HOT-updated tuple, we should not + * index it but rather the live tuple at the end of + * the HOT-chain. However, the deleting transaction + * could abort, possibly leaving this tuple as live + * after all, in which case it has to be indexed. The + * only way to know what to do is to wait for the + * deleting transaction to finish and check again. 
+ */ + if (checking_uniqueness || + HeapTupleIsHotUpdated(heapTuple)) + { + /* + * Must drop the lock on the buffer before we wait + */ + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + XactLockTableWait(xwait, heapRelation, + &heapTuple->t_self, + XLTW_InsertIndexUnique); + CHECK_FOR_INTERRUPTS(); + goto recheck; + } + + /* + * Otherwise index it but don't check for uniqueness, + * the same as a RECENTLY_DEAD tuple. + */ + indexIt = true; + + /* + * Count HEAPTUPLE_DELETE_IN_PROGRESS tuples as live, + * if they were not deleted by the current + * transaction. That's what + * pg_tdeam_scan_analyze_next_tuple() does, and we want + * the behavior to be consistent. + */ + reltuples += 1; + } + else if (HeapTupleIsHotUpdated(heapTuple)) + { + /* + * It's a HOT-updated tuple deleted by our own xact. + * We can assume the deletion will commit (else the + * index contents don't matter), so treat the same as + * RECENTLY_DEAD HOT-updated tuples. + */ + indexIt = false; + /* mark the index as unsafe for old snapshots */ + indexInfo->ii_BrokenHotChain = true; + } + else + { + /* + * It's a regular tuple deleted by our own xact. Index + * it, but don't check for uniqueness nor count in + * reltuples, the same as a RECENTLY_DEAD tuple. + */ + indexIt = true; + } + /* In any case, exclude the tuple from unique-checking */ + tupleIsAlive = false; + break; + default: + elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result"); + indexIt = tupleIsAlive = false; /* keep compiler quiet */ + break; + } + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + if (!indexIt) + continue; + } + else + { + /* tdeheap_getnext did the time qual check */ + tupleIsAlive = true; + reltuples += 1; + } + + MemoryContextReset(econtext->ecxt_per_tuple_memory); + + /* Set up for predicate or expression evaluation */ + PGTdeExecStoreBufferHeapTuple(heapRelation, heapTuple, slot, hscan->rs_cbuf); + + /* + * In a partial index, discard tuples that don't satisfy the + * predicate. + */ + if (predicate != NULL) + { + if (!ExecQual(predicate, econtext)) + continue; + } + + /* + * For the current heap tuple, extract all the attributes we use in + * this index, and note which are null. This also performs evaluation + * of any expressions needed. + */ + FormIndexDatum(indexInfo, + slot, + estate, + values, + isnull); + + /* + * You'd think we should go ahead and build the index tuple here, but + * some index AMs want to do further processing on the data first. So + * pass the values[] and isnull[] arrays, instead. + */ + + if (HeapTupleIsHeapOnly(heapTuple)) + { + /* + * For a heap-only tuple, pretend its TID is that of the root. See + * src/backend/access/heap/README.HOT for discussion. + */ + ItemPointerData tid; + OffsetNumber offnum; + + offnum = ItemPointerGetOffsetNumber(&heapTuple->t_self); + + /* + * If a HOT tuple points to a root that we don't know about, + * obtain root items afresh. If that still fails, report it as + * corruption. 
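+			 *
+			 * Illustration: for a HOT chain whose root line pointer is at
+			 * offset 2 and whose heap-only member sits at offset 5,
+			 * tdeheap_get_root_tuples() sets root_offsets[5 - 1] = 2, and
+			 * the index entry is built with TID (block, 2) rather than
+			 * (block, 5).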
+ */ + if (root_offsets[offnum - 1] == InvalidOffsetNumber) + { + Page page = BufferGetPage(hscan->rs_cbuf); + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + tdeheap_get_root_tuples(page, root_offsets); + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + } + + if (!OffsetNumberIsValid(root_offsets[offnum - 1])) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("failed to find parent tuple for heap-only tuple at (%u,%u) in table \"%s\"", + ItemPointerGetBlockNumber(&heapTuple->t_self), + offnum, + RelationGetRelationName(heapRelation)))); + + ItemPointerSet(&tid, ItemPointerGetBlockNumber(&heapTuple->t_self), + root_offsets[offnum - 1]); + + /* Call the AM's callback routine to process the tuple */ + callback(indexRelation, &tid, values, isnull, tupleIsAlive, + callback_state); + } + else + { + /* Call the AM's callback routine to process the tuple */ + callback(indexRelation, &heapTuple->t_self, values, isnull, + tupleIsAlive, callback_state); + } + } + + /* Report scan progress one last time. */ + if (progress) + { + BlockNumber blks_done; + + if (hscan->rs_base.rs_parallel != NULL) + { + ParallelBlockTableScanDesc pbscan; + + pbscan = (ParallelBlockTableScanDesc) hscan->rs_base.rs_parallel; + blks_done = pbscan->phs_nblocks; + } + else + blks_done = hscan->rs_nblocks; + + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE, + blks_done); + } + + table_endscan(scan); + + /* we can now forget our snapshot, if set and registered by us */ + if (need_unregister_snapshot) + UnregisterSnapshot(snapshot); + + ExecDropSingleTupleTableSlot(slot); + + FreeExecutorState(estate); + + /* These may have been pointing to the now-gone estate */ + indexInfo->ii_ExpressionsState = NIL; + indexInfo->ii_PredicateState = NULL; + + return reltuples; +} + +static void +pg_tdeam_index_validate_scan(Relation heapRelation, + Relation indexRelation, + IndexInfo *indexInfo, + Snapshot snapshot, + ValidateIndexState *state) +{ + TableScanDesc scan; + HeapScanDesc hscan; + HeapTuple heapTuple; + Datum values[INDEX_MAX_KEYS]; + bool isnull[INDEX_MAX_KEYS]; + ExprState *predicate; + TupleTableSlot *slot; + EState *estate; + ExprContext *econtext; + BlockNumber root_blkno = InvalidBlockNumber; + OffsetNumber root_offsets[MaxHeapTuplesPerPage]; + bool in_index[MaxHeapTuplesPerPage]; + BlockNumber previous_blkno = InvalidBlockNumber; + + /* state variables for the merge */ + ItemPointer indexcursor = NULL; + ItemPointerData decoded; + bool tuplesort_empty = false; + + /* + * sanity checks + */ + Assert(OidIsValid(indexRelation->rd_rel->relam)); + + /* + * Need an EState for evaluation of index expressions and partial-index + * predicates. Also a slot to hold the current tuple. + */ + estate = CreateExecutorState(); + econtext = GetPerTupleExprContext(estate); + slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRelation), + &TTSOpsHeapTuple); + + /* Arrange for econtext's scan tuple to be the tuple under test */ + econtext->ecxt_scantuple = slot; + + /* Set up execution state for predicate, if any. */ + predicate = ExecPrepareQual(indexInfo->ii_Predicate, estate); + + /* + * Prepare for scan of the base relation. We need just those tuples + * satisfying the passed-in reference snapshot. We must disable syncscan + * here, because it's critical that we read from block zero forward to + * match the sorted TIDs. 
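+	 *
+	 * (state->tuplesort yields the index's TIDs in ascending order, so the
+	 * single-pass merge below only works if the heap is likewise visited in
+	 * physical block order.)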
+ */ + scan = table_beginscan_strat(heapRelation, /* relation */ + snapshot, /* snapshot */ + 0, /* number of keys */ + NULL, /* scan key */ + true, /* buffer access strategy OK */ + false); /* syncscan not OK */ + hscan = (HeapScanDesc) scan; + + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_TOTAL, + hscan->rs_nblocks); + + /* + * Scan all tuples matching the snapshot. + */ + while ((heapTuple = tdeheap_getnext(scan, ForwardScanDirection)) != NULL) + { + ItemPointer heapcursor = &heapTuple->t_self; + ItemPointerData rootTuple; + OffsetNumber root_offnum; + + CHECK_FOR_INTERRUPTS(); + + state->htups += 1; + + if ((previous_blkno == InvalidBlockNumber) || + (hscan->rs_cblock != previous_blkno)) + { + pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE, + hscan->rs_cblock); + previous_blkno = hscan->rs_cblock; + } + + /* + * As commented in table_index_build_scan, we should index heap-only + * tuples under the TIDs of their root tuples; so when we advance onto + * a new heap page, build a map of root item offsets on the page. + * + * This complicates merging against the tuplesort output: we will + * visit the live tuples in order by their offsets, but the root + * offsets that we need to compare against the index contents might be + * ordered differently. So we might have to "look back" within the + * tuplesort output, but only within the current page. We handle that + * by keeping a bool array in_index[] showing all the + * already-passed-over tuplesort output TIDs of the current page. We + * clear that array here, when advancing onto a new heap page. + */ + if (hscan->rs_cblock != root_blkno) + { + Page page = BufferGetPage(hscan->rs_cbuf); + + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + tdeheap_get_root_tuples(page, root_offsets); + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + memset(in_index, 0, sizeof(in_index)); + + root_blkno = hscan->rs_cblock; + } + + /* Convert actual tuple TID to root TID */ + rootTuple = *heapcursor; + root_offnum = ItemPointerGetOffsetNumber(heapcursor); + + if (HeapTupleIsHeapOnly(heapTuple)) + { + root_offnum = root_offsets[root_offnum - 1]; + if (!OffsetNumberIsValid(root_offnum)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("failed to find parent tuple for heap-only tuple at (%u,%u) in table \"%s\"", + ItemPointerGetBlockNumber(heapcursor), + ItemPointerGetOffsetNumber(heapcursor), + RelationGetRelationName(heapRelation)))); + ItemPointerSetOffsetNumber(&rootTuple, root_offnum); + } + + /* + * "merge" by skipping through the index tuples until we find or pass + * the current root tuple. + */ + while (!tuplesort_empty && + (!indexcursor || + ItemPointerCompare(indexcursor, &rootTuple) < 0)) + { + Datum ts_val; + bool ts_isnull; + + if (indexcursor) + { + /* + * Remember index items seen earlier on the current heap page + */ + if (ItemPointerGetBlockNumber(indexcursor) == root_blkno) + in_index[ItemPointerGetOffsetNumber(indexcursor) - 1] = true; + } + + tuplesort_empty = !tuplesort_getdatum(state->tuplesort, true, + false, &ts_val, &ts_isnull, + NULL); + Assert(tuplesort_empty || !ts_isnull); + if (!tuplesort_empty) + { + itemptr_decode(&decoded, DatumGetInt64(ts_val)); + indexcursor = &decoded; + } + else + { + /* Be tidy */ + indexcursor = NULL; + } + } + + /* + * If the tuplesort has overshot *and* we didn't see a match earlier, + * then this tuple is missing from the index, so insert it. 
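+		 *
+		 * Example: if the sorted index TIDs on this page are (B,1) and (B,4)
+		 * while the current root tuple is (B,3), the cursor skips past (B,1)
+		 * and stops at (B,4), overshooting (B,3); since in_index[3 - 1] was
+		 * never set during the skip, (B,3) is treated as missing and is
+		 * inserted.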
+ */ + if ((tuplesort_empty || + ItemPointerCompare(indexcursor, &rootTuple) > 0) && + !in_index[root_offnum - 1]) + { + MemoryContextReset(econtext->ecxt_per_tuple_memory); + + /* Set up for predicate or expression evaluation */ + ExecStoreHeapTuple(heapTuple, slot, false); + + /* + * In a partial index, discard tuples that don't satisfy the + * predicate. + */ + if (predicate != NULL) + { + if (!ExecQual(predicate, econtext)) + continue; + } + + /* + * For the current heap tuple, extract all the attributes we use + * in this index, and note which are null. This also performs + * evaluation of any expressions needed. + */ + FormIndexDatum(indexInfo, + slot, + estate, + values, + isnull); + + /* + * You'd think we should go ahead and build the index tuple here, + * but some index AMs want to do further processing on the data + * first. So pass the values[] and isnull[] arrays, instead. + */ + + /* + * If the tuple is already committed dead, you might think we + * could suppress uniqueness checking, but this is no longer true + * in the presence of HOT, because the insert is actually a proxy + * for a uniqueness check on the whole HOT-chain. That is, the + * tuple we have here could be dead because it was already + * HOT-updated, and if so the updating transaction will not have + * thought it should insert index entries. The index AM will + * check the whole HOT-chain and correctly detect a conflict if + * there is one. + */ + + index_insert(indexRelation, + values, + isnull, + &rootTuple, + heapRelation, + indexInfo->ii_Unique ? + UNIQUE_CHECK_YES : UNIQUE_CHECK_NO, + false, + indexInfo); + + state->tups_inserted += 1; + } + } + + table_endscan(scan); + + ExecDropSingleTupleTableSlot(slot); + + FreeExecutorState(estate); + + /* These may have been pointing to the now-gone estate */ + indexInfo->ii_ExpressionsState = NIL; + indexInfo->ii_PredicateState = NULL; +} + +/* + * Return the number of blocks that have been read by this scan since + * starting. This is meant for progress reporting rather than be fully + * accurate: in a parallel scan, workers can be concurrently reading blocks + * further ahead than what we report. + */ +static BlockNumber +pg_tdeam_scan_get_blocks_done(HeapScanDesc hscan) +{ + ParallelBlockTableScanDesc bpscan = NULL; + BlockNumber startblock; + BlockNumber blocks_done; + + if (hscan->rs_base.rs_parallel != NULL) + { + bpscan = (ParallelBlockTableScanDesc) hscan->rs_base.rs_parallel; + startblock = bpscan->phs_startblock; + } + else + startblock = hscan->rs_startblock; + + /* + * Might have wrapped around the end of the relation, if startblock was + * not zero. + */ + if (hscan->rs_cblock > startblock) + blocks_done = hscan->rs_cblock - startblock; + else + { + BlockNumber nblocks; + + nblocks = bpscan != NULL ? bpscan->phs_nblocks : hscan->rs_nblocks; + blocks_done = nblocks - startblock + + hscan->rs_cblock; + } + + return blocks_done; +} + + +/* ------------------------------------------------------------------------ + * Miscellaneous callbacks for the heap AM + * ------------------------------------------------------------------------ + */ + +/* + * Check to see whether the table needs a TOAST table. It does only if + * (1) there are any toastable attributes, and (2) the maximum length + * of a tuple could exceed TOAST_TUPLE_THRESHOLD. (We don't want to + * create a toast table for something like "f1 varchar(20)".) 
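+ *
+ * For instance (sizes approximate), a lone "f1 varchar(20)" column has a
+ * bounded maximum width of a few dozen bytes, far below
+ * TOAST_TUPLE_THRESHOLD (about 2 kB with the default 8 kB block size), so no
+ * TOAST table is created even though varchar is a toastable type.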
+ */ +static bool +pg_tdeam_relation_needs_toast_table(Relation rel) +{ + int32 data_length = 0; + bool maxlength_unknown = false; + bool has_toastable_attrs = false; + TupleDesc tupdesc = rel->rd_att; + int32 tuple_length; + int i; + + for (i = 0; i < tupdesc->natts; i++) + { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + + if (att->attisdropped) + continue; + data_length = att_align_nominal(data_length, att->attalign); + if (att->attlen > 0) + { + /* Fixed-length types are never toastable */ + data_length += att->attlen; + } + else + { + int32 maxlen = type_maximum_size(att->atttypid, + att->atttypmod); + + if (maxlen < 0) + maxlength_unknown = true; + else + data_length += maxlen; + if (att->attstorage != TYPSTORAGE_PLAIN) + has_toastable_attrs = true; + } + } + if (!has_toastable_attrs) + return false; /* nothing to toast? */ + if (maxlength_unknown) + return true; /* any unlimited-length attrs? */ + tuple_length = MAXALIGN(SizeofHeapTupleHeader + + BITMAPLEN(tupdesc->natts)) + + MAXALIGN(data_length); + return (tuple_length > TOAST_TUPLE_THRESHOLD); +} + +/* + * TOAST tables for heap relations are just heap relations. + */ +static Oid +pg_tdeam_relation_toast_am(Relation rel) +{ + return rel->rd_rel->relam; +} + + +/* ------------------------------------------------------------------------ + * Planner related callbacks for the heap AM + * ------------------------------------------------------------------------ + */ + +#define HEAP_OVERHEAD_BYTES_PER_TUPLE \ + (MAXALIGN(SizeofHeapTupleHeader) + sizeof(ItemIdData)) +#define HEAP_USABLE_BYTES_PER_PAGE \ + (BLCKSZ - SizeOfPageHeaderData) + +static void +pg_tdeam_estimate_rel_size(Relation rel, int32 *attr_widths, + BlockNumber *pages, double *tuples, + double *allvisfrac) +{ + table_block_relation_estimate_size(rel, attr_widths, pages, + tuples, allvisfrac, + HEAP_OVERHEAD_BYTES_PER_TUPLE, + HEAP_USABLE_BYTES_PER_PAGE); +} + + +/* ------------------------------------------------------------------------ + * Executor related callbacks for the heap AM + * ------------------------------------------------------------------------ + */ + +static bool +pg_tdeam_scan_bitmap_next_block(TableScanDesc scan, + TBMIterateResult *tbmres) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + BlockNumber block = tbmres->blockno; + Buffer buffer; + Snapshot snapshot; + int ntup; + + hscan->rs_cindex = 0; + hscan->rs_ntuples = 0; + + /* + * We can skip fetching the heap page if we don't need any fields from the + * heap, the bitmap entries don't need rechecking, and all tuples on the + * page are visible to our transaction. + */ + if (!(scan->rs_flags & SO_NEED_TUPLES) && + !tbmres->recheck && + VM_ALL_VISIBLE(scan->rs_rd, tbmres->blockno, &hscan->rs_vmbuffer)) + { + /* can't be lossy in the skip_fetch case */ + Assert(tbmres->ntuples >= 0); + Assert(hscan->rs_empty_tuples_pending >= 0); + + hscan->rs_empty_tuples_pending += tbmres->ntuples; + + return true; + } + + /* + * Ignore any claimed entries past what we think is the end of the + * relation. It may have been extended after the start of our scan (we + * only hold an AccessShareLock, and it could be inserts from this + * backend). We don't take this optimization in SERIALIZABLE isolation + * though, as we need to examine all invisible tuples reachable by the + * index. + */ + if (!IsolationIsSerializable() && block >= hscan->rs_nblocks) + return false; + + /* + * Acquire pin on the target heap page, trading in any pin we held before. 
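+	 * (ReleaseAndReadBuffer() keeps the existing pin when the old buffer
+	 * already holds the requested block, so repeated bitmap hits on the same
+	 * page avoid a buffer-manager round trip.)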
+ */ + hscan->rs_cbuf = ReleaseAndReadBuffer(hscan->rs_cbuf, + scan->rs_rd, + block); + hscan->rs_cblock = block; + buffer = hscan->rs_cbuf; + snapshot = scan->rs_snapshot; + + ntup = 0; + + /* + * Prune and repair fragmentation for the whole page, if possible. + */ + tdeheap_page_prune_opt(scan->rs_rd, buffer); + + /* + * We must hold share lock on the buffer content while examining tuple + * visibility. Afterwards, however, the tuples we have found to be + * visible are guaranteed good as long as we hold the buffer pin. + */ + LockBuffer(buffer, BUFFER_LOCK_SHARE); + + /* + * We need two separate strategies for lossy and non-lossy cases. + */ + if (tbmres->ntuples >= 0) + { + /* + * Bitmap is non-lossy, so we just look through the offsets listed in + * tbmres; but we have to follow any HOT chain starting at each such + * offset. + */ + int curslot; + + for (curslot = 0; curslot < tbmres->ntuples; curslot++) + { + OffsetNumber offnum = tbmres->offsets[curslot]; + ItemPointerData tid; + HeapTupleData heapTuple; + + ItemPointerSet(&tid, block, offnum); + if (tdeheap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot, + &heapTuple, NULL, true)) + hscan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid); + } + } + else + { + /* + * Bitmap is lossy, so we must examine each line pointer on the page. + * But we can ignore HOT chains, since we'll check each tuple anyway. + */ + Page page = BufferGetPage(buffer); + OffsetNumber maxoff = PageGetMaxOffsetNumber(page); + OffsetNumber offnum; + + for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum)) + { + ItemId lp; + HeapTupleData loctup; + bool valid; + + lp = PageGetItemId(page, offnum); + if (!ItemIdIsNormal(lp)) + continue; + loctup.t_data = (HeapTupleHeader) PageGetItem(page, lp); + loctup.t_len = ItemIdGetLength(lp); + loctup.t_tableOid = scan->rs_rd->rd_id; + ItemPointerSet(&loctup.t_self, block, offnum); + valid = HeapTupleSatisfiesVisibility(&loctup, snapshot, buffer); + if (valid) + { + hscan->rs_vistuples[ntup++] = offnum; + PredicateLockTID(scan->rs_rd, &loctup.t_self, snapshot, + HeapTupleHeaderGetXmin(loctup.t_data)); + } + HeapCheckForSerializableConflictOut(valid, scan->rs_rd, &loctup, + buffer, snapshot); + } + } + + LockBuffer(buffer, BUFFER_LOCK_UNLOCK); + + Assert(ntup <= MaxHeapTuplesPerPage); + hscan->rs_ntuples = ntup; + + return ntup > 0; +} + +static bool +pg_tdeam_scan_bitmap_next_tuple(TableScanDesc scan, + TBMIterateResult *tbmres, + TupleTableSlot *slot) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + OffsetNumber targoffset; + Page page; + ItemId lp; + + if (hscan->rs_empty_tuples_pending > 0) + { + /* + * If we don't have to fetch the tuple, just return nulls. + */ + ExecStoreAllNullTuple(slot); + hscan->rs_empty_tuples_pending--; + return true; + } + + /* + * Out of range? If so, nothing more to look at on this page + */ + if (hscan->rs_cindex < 0 || hscan->rs_cindex >= hscan->rs_ntuples) + return false; + + targoffset = hscan->rs_vistuples[hscan->rs_cindex]; + page = BufferGetPage(hscan->rs_cbuf); + lp = PageGetItemId(page, targoffset); + Assert(ItemIdIsNormal(lp)); + + hscan->rs_ctup.t_data = (HeapTupleHeader) PageGetItem(page, lp); + hscan->rs_ctup.t_len = ItemIdGetLength(lp); + hscan->rs_ctup.t_tableOid = scan->rs_rd->rd_id; + ItemPointerSet(&hscan->rs_ctup.t_self, hscan->rs_cblock, targoffset); + + pgstat_count_tdeheap_fetch(scan->rs_rd); + + /* + * Set up the result slot to point to this tuple. Note that the slot + * acquires a pin on the buffer. 
+ */ + PGTdeExecStoreBufferHeapTuple(scan->rs_rd, &hscan->rs_ctup, + slot, + hscan->rs_cbuf); + + hscan->rs_cindex++; + + return true; +} + +static bool +pg_tdeam_scan_sample_next_block(TableScanDesc scan, SampleScanState *scanstate) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + TsmRoutine *tsm = scanstate->tsmroutine; + BlockNumber blockno; + + /* return false immediately if relation is empty */ + if (hscan->rs_nblocks == 0) + return false; + + /* release previous scan buffer, if any */ + if (BufferIsValid(hscan->rs_cbuf)) + { + ReleaseBuffer(hscan->rs_cbuf); + hscan->rs_cbuf = InvalidBuffer; + } + + if (tsm->NextSampleBlock) + blockno = tsm->NextSampleBlock(scanstate, hscan->rs_nblocks); + else + { + /* scanning table sequentially */ + + if (hscan->rs_cblock == InvalidBlockNumber) + { + Assert(!hscan->rs_inited); + blockno = hscan->rs_startblock; + } + else + { + Assert(hscan->rs_inited); + + blockno = hscan->rs_cblock + 1; + + if (blockno >= hscan->rs_nblocks) + { + /* wrap to beginning of rel, might not have started at 0 */ + blockno = 0; + } + + /* + * Report our new scan position for synchronization purposes. + * + * Note: we do this before checking for end of scan so that the + * final state of the position hint is back at the start of the + * rel. That's not strictly necessary, but otherwise when you run + * the same query multiple times the starting position would shift + * a little bit backwards on every invocation, which is confusing. + * We don't guarantee any specific ordering in general, though. + */ + if (scan->rs_flags & SO_ALLOW_SYNC) + ss_report_location(scan->rs_rd, blockno); + + if (blockno == hscan->rs_startblock) + { + blockno = InvalidBlockNumber; + } + } + } + + hscan->rs_cblock = blockno; + + if (!BlockNumberIsValid(blockno)) + { + hscan->rs_inited = false; + return false; + } + + Assert(hscan->rs_cblock < hscan->rs_nblocks); + + /* + * Be sure to check for interrupts at least once per page. Checks at + * higher code levels won't be able to stop a sample scan that encounters + * many pages' worth of consecutive dead tuples. + */ + CHECK_FOR_INTERRUPTS(); + + /* Read page using selected strategy */ + hscan->rs_cbuf = ReadBufferExtended(hscan->rs_base.rs_rd, MAIN_FORKNUM, + blockno, RBM_NORMAL, hscan->rs_strategy); + + /* in pagemode, prune the page and determine visible tuple offsets */ + if (hscan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) + tdeheap_prepare_pagescan(scan); + + hscan->rs_inited = true; + return true; +} + +static bool +pg_tdeam_scan_sample_next_tuple(TableScanDesc scan, SampleScanState *scanstate, + TupleTableSlot *slot) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + TsmRoutine *tsm = scanstate->tsmroutine; + BlockNumber blockno = hscan->rs_cblock; + bool pagemode = (scan->rs_flags & SO_ALLOW_PAGEMODE) != 0; + + Page page; + bool all_visible; + OffsetNumber maxoffset; + + /* + * When not using pagemode, we must lock the buffer during tuple + * visibility checks. + */ + if (!pagemode) + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE); + + page = (Page) BufferGetPage(hscan->rs_cbuf); + all_visible = PageIsAllVisible(page) && + !scan->rs_snapshot->takenDuringRecovery; + maxoffset = PageGetMaxOffsetNumber(page); + + for (;;) + { + OffsetNumber tupoffset; + + CHECK_FOR_INTERRUPTS(); + + /* Ask the tablesample method which tuples to check on this page. 
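+		 * It returns one offset at a time, in any order it likes, and
+		 * InvalidOffsetNumber once it is done with this page.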
*/ + tupoffset = tsm->NextSampleTuple(scanstate, + blockno, + maxoffset); + + if (OffsetNumberIsValid(tupoffset)) + { + ItemId itemid; + bool visible; + HeapTuple tuple = &(hscan->rs_ctup); + + /* Skip invalid tuple pointers. */ + itemid = PageGetItemId(page, tupoffset); + if (!ItemIdIsNormal(itemid)) + continue; + + tuple->t_data = (HeapTupleHeader) PageGetItem(page, itemid); + tuple->t_len = ItemIdGetLength(itemid); + ItemPointerSet(&(tuple->t_self), blockno, tupoffset); + + + if (all_visible) + visible = true; + else + visible = SampleHeapTupleVisible(scan, hscan->rs_cbuf, + tuple, tupoffset); + + /* in pagemode, tdeheap_prepare_pagescan did this for us */ + if (!pagemode) + HeapCheckForSerializableConflictOut(visible, scan->rs_rd, tuple, + hscan->rs_cbuf, scan->rs_snapshot); + + /* Try next tuple from same page. */ + if (!visible) + continue; + + /* Found visible tuple, return it. */ + if (!pagemode) + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + + PGTdeExecStoreBufferHeapTuple(scan->rs_rd, tuple, slot, hscan->rs_cbuf); + + /* Count successfully-fetched tuples as heap fetches */ + pgstat_count_tdeheap_getnext(scan->rs_rd); + + return true; + } + else + { + /* + * If we get here, it means we've exhausted the items on this page + * and it's time to move to the next. + */ + if (!pagemode) + LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK); + ExecClearTuple(slot); + return false; + } + } + + Assert(0); +} + + +/* ---------------------------------------------------------------------------- + * Helper functions for the above. + * ---------------------------------------------------------------------------- + */ + +/* + * Reconstruct and rewrite the given tuple + * + * We cannot simply copy the tuple as-is, for several reasons: + * + * 1. We'd like to squeeze out the values of any dropped columns, both + * to save space and to ensure we have no corner-case failures. (It's + * possible for example that the new table hasn't got a TOAST table + * and so is unable to store any large values of dropped cols.) + * + * 2. The tuple might not even be legal for the new table; this is + * currently only known to happen as an after-effect of ALTER TABLE + * SET WITHOUT OIDS. + * + * So, we must reconstruct the tuple from component Datums. + */ +static void +reform_and_rewrite_tuple(HeapTuple tuple, + Relation OldHeap, Relation NewHeap, + Datum *values, bool *isnull, RewriteState rwstate) +{ + TupleDesc oldTupDesc = RelationGetDescr(OldHeap); + TupleDesc newTupDesc = RelationGetDescr(NewHeap); + HeapTuple copiedTuple; + int i; + + tdeheap_deform_tuple(tuple, oldTupDesc, values, isnull); + + /* Be sure to null out any dropped columns */ + for (i = 0; i < newTupDesc->natts; i++) + { + if (TupleDescAttr(newTupDesc, i)->attisdropped) + isnull[i] = true; + } + + copiedTuple = tdeheap_form_tuple(newTupDesc, values, isnull); + + /* The heap rewrite module does the rest */ + rewrite_tdeheap_tuple(rwstate, tuple, copiedTuple); + + tdeheap_freetuple(copiedTuple); +} + +/* + * Check visibility of the tuple. + */ +static bool +SampleHeapTupleVisible(TableScanDesc scan, Buffer buffer, + HeapTuple tuple, + OffsetNumber tupoffset) +{ + HeapScanDesc hscan = (HeapScanDesc) scan; + + if (scan->rs_flags & SO_ALLOW_PAGEMODE) + { + /* + * In pageatatime mode, tdeheap_prepare_pagescan() already did visibility + * checks, so just look at the info it left in rs_vistuples[]. + * + * We use a binary search over the known-sorted array. 
Note: we could + * save some effort if we insisted that NextSampleTuple select tuples + * in increasing order, but it's not clear that there would be enough + * gain to justify the restriction. + */ + int start = 0, + end = hscan->rs_ntuples - 1; + + while (start <= end) + { + int mid = (start + end) / 2; + OffsetNumber curoffset = hscan->rs_vistuples[mid]; + + if (tupoffset == curoffset) + return true; + else if (tupoffset < curoffset) + end = mid - 1; + else + start = mid + 1; + } + + return false; + } + else + { + /* Otherwise, we have to check the tuple individually. */ + return HeapTupleSatisfiesVisibility(tuple, scan->rs_snapshot, + buffer); + } +} + + +/* ------------------------------------------------------------------------ + * Definition of the heap table access method. + * ------------------------------------------------------------------------ + */ + +static const TableAmRoutine pg_tdeam_methods = { + .type = T_TableAmRoutine, + + .slot_callbacks = pg_tdeam_slot_callbacks, + + .scan_begin = tdeheap_beginscan, + .scan_end = tdeheap_endscan, + .scan_rescan = tdeheap_rescan, + .scan_getnextslot = tdeheap_getnextslot, + + .scan_set_tidrange = tdeheap_set_tidrange, + .scan_getnextslot_tidrange = tdeheap_getnextslot_tidrange, + + .parallelscan_estimate = table_block_parallelscan_estimate, + .parallelscan_initialize = table_block_parallelscan_initialize, + .parallelscan_reinitialize = table_block_parallelscan_reinitialize, + + .index_fetch_begin = pg_tdeam_index_fetch_begin, + .index_fetch_reset = pg_tdeam_index_fetch_reset, + .index_fetch_end = pg_tdeam_index_fetch_end, + .index_fetch_tuple = pg_tdeam_index_fetch_tuple, + + .tuple_insert = pg_tdeam_tuple_insert, + .tuple_insert_speculative = pg_tdeam_tuple_insert_speculative, + .tuple_complete_speculative = pg_tdeam_tuple_complete_speculative, + .multi_insert = tdeheap_multi_insert, + .tuple_delete = pg_tdeam_tuple_delete, + .tuple_update = pg_tdeam_tuple_update, + .tuple_lock = pg_tdeam_tuple_lock, + + .tuple_fetch_row_version = pg_tdeam_fetch_row_version, + .tuple_get_latest_tid = tdeheap_get_latest_tid, + .tuple_tid_valid = pg_tdeam_tuple_tid_valid, + .tuple_satisfies_snapshot = pg_tdeam_tuple_satisfies_snapshot, + .index_delete_tuples = tdeheap_index_delete_tuples, + + .relation_set_new_filelocator = pg_tdeam_relation_set_new_filelocator, + .relation_nontransactional_truncate = pg_tdeam_relation_nontransactional_truncate, + .relation_copy_data = pg_tdeam_relation_copy_data, + .relation_copy_for_cluster = pg_tdeam_relation_copy_for_cluster, + .relation_vacuum = tdeheap_vacuum_rel, + .scan_analyze_next_block = pg_tdeam_scan_analyze_next_block, + .scan_analyze_next_tuple = pg_tdeam_scan_analyze_next_tuple, + .index_build_range_scan = pg_tdeam_index_build_range_scan, + .index_validate_scan = pg_tdeam_index_validate_scan, + + .relation_size = table_block_relation_size, + .relation_needs_toast_table = pg_tdeam_relation_needs_toast_table, + .relation_toast_am = pg_tdeam_relation_toast_am, + .relation_fetch_toast_slice = tdeheap_fetch_toast_slice, + + .relation_estimate_size = pg_tdeam_estimate_rel_size, + + .scan_bitmap_next_block = pg_tdeam_scan_bitmap_next_block, + .scan_bitmap_next_tuple = pg_tdeam_scan_bitmap_next_tuple, + .scan_sample_next_block = pg_tdeam_scan_sample_next_block, + .scan_sample_next_tuple = pg_tdeam_scan_sample_next_tuple +}; + +const TableAmRoutine * +GetPGTdeamTableAmRoutine(void) +{ + return &pg_tdeam_methods; +} + +Datum +pg_tdeam_basic_handler(PG_FUNCTION_ARGS) +{ + PG_RETURN_POINTER(&pg_tdeam_methods); +} + 
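+/*
+ * Illustrative sketch, not part of this patch: callers do not invoke the
+ * methods above directly.  The executor dispatches through the routine
+ * table returned by the handler, roughly like this:
+ *
+ *		const TableAmRoutine *am = rel->rd_tableam;
+ *		TableScanDesc scan = am->scan_begin(rel, snapshot, 0, NULL,
+ *											NULL, flags);
+ *
+ *		while (am->scan_getnextslot(scan, ForwardScanDirection, slot))
+ *			... process the returned slot ...
+ *		am->scan_end(scan);
+ */
+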
+#ifdef PERCONA_EXT
+Datum
+pg_tdeam_handler(PG_FUNCTION_ARGS)
+{
+	PG_RETURN_POINTER(GetHeapamTableAmRoutine());
+}
+#endif
+
+bool
+is_tdeheap_rel(Relation rel)
+{
+	return (rel->rd_tableam == (TableAmRoutine *) &pg_tdeam_methods);
+}
diff --git a/contrib/pg_tde/src17/access/pg_tdeam_visibility.c b/contrib/pg_tde/src17/access/pg_tdeam_visibility.c
new file mode 100644
index 00000000000..ca0879033d1
--- /dev/null
+++ b/contrib/pg_tde/src17/access/pg_tdeam_visibility.c
@@ -0,0 +1,1791 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tdeam_visibility.c
+ *	  Tuple visibility rules for tuples stored in heap.
+ *
+ * NOTE: all the HeapTupleSatisfies routines will update the tuple's
+ * "hint" status bits if we see that the inserting or deleting transaction
+ * has now committed or aborted (and it is safe to set the hint bits).
+ * If the hint bits are changed, MarkBufferDirtyHint is called on
+ * the passed-in buffer.  The caller must hold not only a pin, but at least
+ * shared buffer content lock on the buffer containing the tuple.
+ *
+ * NOTE: When using a non-MVCC snapshot, we must check
+ * TransactionIdIsInProgress (which looks in the PGPROC array) before
+ * TransactionIdDidCommit (which looks in pg_xact).  Otherwise we have a race
+ * condition: we might decide that a just-committed transaction crashed,
+ * because none of the tests succeed.  xact.c is careful to record
+ * commit/abort in pg_xact before it unsets MyProc->xid in the PGPROC array.
+ * That fixes that problem, but it also means there is a window where
+ * TransactionIdIsInProgress and TransactionIdDidCommit will both return true.
+ * If we check only TransactionIdDidCommit, we could consider a tuple
+ * committed when a later GetSnapshotData call will still think the
+ * originating transaction is in progress, which leads to application-level
+ * inconsistency.  The upshot is that we gotta check TransactionIdIsInProgress
+ * first in all code paths, except for a few cases where we are looking at
+ * subtransactions of our own main transaction and so there can't be any race
+ * condition.
+ *
+ * We can't use TransactionIdDidAbort here because it won't treat transactions
+ * that were in progress during a crash as aborted.  We determine that
+ * transactions aborted/crashed through process of elimination instead.
+ *
+ * When using an MVCC snapshot, we rely on XidInMVCCSnapshot rather than
+ * TransactionIdIsInProgress, but the logic is otherwise the same: do not
+ * check pg_xact until after deciding that the xact is no longer in progress.
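+ *
+ * In code form, the safe ordering described above is roughly (illustrative
+ * only; the real routines below also handle hint bits and subtransactions):
+ *
+ *		if (TransactionIdIsCurrentTransactionId(xid))
+ *			... consult command ids ...
+ *		else if (TransactionIdIsInProgress(xid))
+ *			... treat as still in progress ...
+ *		else if (TransactionIdDidCommit(xid))
+ *			... committed ...
+ *		else
+ *			... aborted or crashed, inferred by elimination ...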
+ * + * + * Summary of visibility functions: + * + * HeapTupleSatisfiesMVCC() + * visible to supplied snapshot, excludes current command + * HeapTupleSatisfiesUpdate() + * visible to instant snapshot, with user-supplied command + * counter and more complex result + * HeapTupleSatisfiesSelf() + * visible to instant snapshot and current command + * HeapTupleSatisfiesDirty() + * like HeapTupleSatisfiesSelf(), but includes open transactions + * HeapTupleSatisfiesVacuum() + * visible to any running transaction, used by VACUUM + * HeapTupleSatisfiesNonVacuumable() + * Snapshot-style API for HeapTupleSatisfiesVacuum + * HeapTupleSatisfiesToast() + * visible unless part of interrupted vacuum, used for TOAST + * HeapTupleSatisfiesAny() + * all tuples are visible + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/backend/access/heap/pg_tdeam_visibility.c + * + *------------------------------------------------------------------------- + */ + +#include "pg_tde_defines.h" + +#include "postgres.h" + +#include "access/pg_tdeam.h" + +#include "access/htup_details.h" +#include "access/multixact.h" +#include "access/tableam.h" +#include "access/transam.h" +#include "access/xact.h" +#include "access/xlog.h" +#include "storage/bufmgr.h" +#include "storage/procarray.h" +#include "utils/builtins.h" +#include "utils/snapmgr.h" + + +/* + * SetHintBits() + * + * Set commit/abort hint bits on a tuple, if appropriate at this time. + * + * It is only safe to set a transaction-committed hint bit if we know the + * transaction's commit record is guaranteed to be flushed to disk before the + * buffer, or if the table is temporary or unlogged and will be obliterated by + * a crash anyway. We cannot change the LSN of the page here, because we may + * hold only a share lock on the buffer, so we can only use the LSN to + * interlock this if the buffer's LSN already is newer than the commit LSN; + * otherwise we have to just refrain from setting the hint bit until some + * future re-examination of the tuple. + * + * We can always set hint bits when marking a transaction aborted. (Some + * code in pg_tdeam.c relies on that!) + * + * Also, if we are cleaning up HEAP_MOVED_IN or HEAP_MOVED_OFF entries, then + * we can always set the hint bits, since pre-9.0 VACUUM FULL always used + * synchronous commits and didn't move tuples that weren't previously + * hinted. (This is not known by this subroutine, but is applied by its + * callers.) Note: old-style VACUUM FULL is gone, but we have to keep this + * module's support for MOVED_OFF/MOVED_IN flag bits for as long as we + * support in-place update from pre-9.0 databases. + * + * Normal commits may be asynchronous, so for those we need to get the LSN + * of the transaction and then check whether this is flushed. + * + * The caller should pass xid as the XID of the transaction to check, or + * InvalidTransactionId if no check is needed. + */ +static inline void +SetHintBits(HeapTupleHeader tuple, Buffer buffer, + uint16 infomask, TransactionId xid) +{ + if (TransactionIdIsValid(xid)) + { + /* NB: xid must be known committed here! 
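+		 * With synchronous_commit = off, its commit record may not be
+		 * flushed yet; XLogNeedsFlush() below detects that case, and unless
+		 * the page LSN already covers the commit we leave the hint unset.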
*/ + XLogRecPtr commitLSN = TransactionIdGetCommitLSN(xid); + + if (BufferIsPermanent(buffer) && XLogNeedsFlush(commitLSN) && + BufferGetLSNAtomic(buffer) < commitLSN) + { + /* not flushed and no LSN interlock, so don't set hint */ + return; + } + } + + tuple->t_infomask |= infomask; + MarkBufferDirtyHint(buffer, true); +} + +/* + * HeapTupleSetHintBits --- exported version of SetHintBits() + * + * This must be separate because of C99's brain-dead notions about how to + * implement inline functions. + */ +void +HeapTupleSetHintBits(HeapTupleHeader tuple, Buffer buffer, + uint16 infomask, TransactionId xid) +{ + SetHintBits(tuple, buffer, infomask, xid); +} + + +/* + * HeapTupleSatisfiesSelf + * True iff heap tuple is valid "for itself". + * + * See SNAPSHOT_MVCC's definition for the intended behaviour. + * + * Note: + * Assumes heap tuple is valid. + * + * The satisfaction of "itself" requires the following: + * + * ((Xmin == my-transaction && the row was updated by the current transaction, and + * (Xmax is null it was not deleted + * [|| Xmax != my-transaction)]) [or it was deleted by another transaction] + * || + * + * (Xmin is committed && the row was modified by a committed transaction, and + * (Xmax is null || the row has not been deleted, or + * (Xmax != my-transaction && the row was deleted by another transaction + * Xmax is not committed))) that has not been committed + */ +static bool +HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return false; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return false; + if (!TransactionIdIsInProgress(xvac)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (TransactionIdIsInProgress(xvac)) + return false; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return true; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) /* not deleter */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + /* updating subtransaction must have aborted */ + if (!TransactionIdIsCurrentTransactionId(xmax)) + return true; + else + return false; + } + + if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + /* deleting subtransaction must have aborted */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + return false; + } + else if 
(TransactionIdIsInProgress(HeapTupleHeaderGetRawXmin(tuple))) + return false; + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + + /* by here, the inserting transaction has committed */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid or aborted */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_COMMITTED) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + return false; /* updated by other */ + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + return false; + if (TransactionIdIsInProgress(xmax)) + return true; + if (TransactionIdDidCommit(xmax)) + return false; + /* it must have aborted or crashed */ + return true; + } + + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + return false; + } + + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + return true; + + if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + /* xmax transaction committed */ + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + return false; +} + +/* + * HeapTupleSatisfiesAny + * Dummy "satisfies" routine: any tuple satisfies SnapshotAny. + */ +static bool +HeapTupleSatisfiesAny(HeapTuple htup, Snapshot snapshot, Buffer buffer) +{ + return true; +} + +/* + * HeapTupleSatisfiesToast + * True iff heap tuple is valid as a TOAST row. + * + * See SNAPSHOT_TOAST's definition for the intended behaviour. + * + * This is a simplified version that only checks for VACUUM moving conditions. + * It's appropriate for TOAST usage because TOAST really doesn't want to do + * its own time qual checks; if you can see the main table row that contains + * a TOAST reference, you should be able to see the TOASTed value. However, + * vacuuming a TOAST table is independent of the main table, and in case such + * a vacuum fails partway through, we'd better do this much checking. + * + * Among other things, this means you can't do UPDATEs of rows in a TOAST + * table. 
+ */ +static bool +HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return false; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return false; + if (!TransactionIdIsInProgress(xvac)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (TransactionIdIsInProgress(xvac)) + return false; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + } + + /* + * An invalid Xmin can be left behind by a speculative insertion that + * is canceled by super-deleting the tuple. This also applies to + * TOAST tuples created during speculative insertion. + */ + else if (!TransactionIdIsValid(HeapTupleHeaderGetXmin(tuple))) + return false; + } + + /* otherwise assume the tuple is valid for TOAST. */ + return true; +} + +/* + * HeapTupleSatisfiesUpdate + * + * This function returns a more detailed result code than most of the + * functions in this file, since UPDATE needs to know more than "is it + * visible?". It also allows for user-supplied CommandId rather than + * relying on CurrentCommandId. + * + * The possible return codes are: + * + * TM_Invisible: the tuple didn't exist at all when the scan started, e.g. it + * was created by a later CommandId. + * + * TM_Ok: The tuple is valid and visible, so it may be updated. + * + * TM_SelfModified: The tuple was updated by the current transaction, after + * the current scan started. + * + * TM_Updated: The tuple was updated by a committed transaction (including + * the case where the tuple was moved into a different partition). + * + * TM_Deleted: The tuple was deleted by a committed transaction. + * + * TM_BeingModified: The tuple is being updated by an in-progress transaction + * other than the current transaction. (Note: this includes the case where + * the tuple is share-locked by a MultiXact, even if the MultiXact includes + * the current transaction. Callers that want to distinguish that case must + * test for it themselves.) 
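+ *
+ * A caller typically dispatches on the result, along these lines
+ * (illustrative only):
+ *
+ *		switch (HeapTupleSatisfiesUpdate(tup, cid, buf))
+ *		{
+ *			case TM_Ok:
+ *				... safe to proceed with the update ...
+ *			case TM_BeingModified:
+ *				... wait for the other transaction, then recheck ...
+ *			case TM_Updated:
+ *			case TM_Deleted:
+ *				... follow the update chain or fail, per isolation level ...
+ *			...
+ *		}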
+ */ +TM_Result +HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return TM_Invisible; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return TM_Invisible; + if (!TransactionIdIsInProgress(xvac)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return TM_Invisible; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (TransactionIdIsInProgress(xvac)) + return TM_Invisible; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return TM_Invisible; + } + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (HeapTupleHeaderGetCmin(tuple) >= curcid) + return TM_Invisible; /* inserted after scan started */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return TM_Ok; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + TransactionId xmax; + + xmax = HeapTupleHeaderGetRawXmax(tuple); + + /* + * Careful here: even though this tuple was created by our own + * transaction, it might be locked by other transactions, if + * the original version was key-share locked when we updated + * it. + */ + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + if (MultiXactIdIsRunning(xmax, true)) + return TM_BeingModified; + else + return TM_Ok; + } + + /* + * If the locker is gone, then there is nothing of interest + * left in this Xmax; otherwise, report the tuple as + * locked/updated. 
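+ *
+ * (A locked-only Xmax can never turn into an update, so once the
+ * locker is no longer running there is no need to consult the clog:
+ * whether it committed or aborted, the lock itself is gone.)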
+ */ + if (!TransactionIdIsInProgress(xmax)) + return TM_Ok; + return TM_BeingModified; + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + /* deleting subtransaction must have aborted */ + if (!TransactionIdIsCurrentTransactionId(xmax)) + { + if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), + false)) + return TM_BeingModified; + return TM_Ok; + } + else + { + if (HeapTupleHeaderGetCmax(tuple) >= curcid) + return TM_SelfModified; /* updated after scan started */ + else + return TM_Invisible; /* updated before scan started */ + } + } + + if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + /* deleting subtransaction must have aborted */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return TM_Ok; + } + + if (HeapTupleHeaderGetCmax(tuple) >= curcid) + return TM_SelfModified; /* updated after scan started */ + else + return TM_Invisible; /* updated before scan started */ + } + else if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmin(tuple))) + return TM_Invisible; + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return TM_Invisible; + } + } + + /* by here, the inserting transaction has committed */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid or aborted */ + return TM_Ok; + + if (tuple->t_infomask & HEAP_XMAX_COMMITTED) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return TM_Ok; + if (!ItemPointerEquals(&htup->t_self, &tuple->t_ctid)) + return TM_Updated; /* updated by other */ + else + return TM_Deleted; /* deleted by other */ + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + if (HEAP_LOCKED_UPGRADED(tuple->t_infomask)) + return TM_Ok; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), true)) + return TM_BeingModified; + + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, InvalidTransactionId); + return TM_Ok; + } + + xmax = HeapTupleGetUpdateXid(tuple); + if (!TransactionIdIsValid(xmax)) + { + if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + return TM_BeingModified; + } + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + { + if (HeapTupleHeaderGetCmax(tuple) >= curcid) + return TM_SelfModified; /* updated after scan started */ + else + return TM_Invisible; /* updated before scan started */ + } + + if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + return TM_BeingModified; + + if (TransactionIdDidCommit(xmax)) + { + if (!ItemPointerEquals(&htup->t_self, &tuple->t_ctid)) + return TM_Updated; + else + return TM_Deleted; + } + + /* + * By here, the update in the Xmax is either aborted or crashed, but + * what about the other members? + */ + + if (!MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + { + /* + * There's no member, even just a locker, alive anymore, so we can + * mark the Xmax as invalid. 
+ */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return TM_Ok; + } + else + { + /* There are lockers running */ + return TM_BeingModified; + } + } + + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return TM_BeingModified; + if (HeapTupleHeaderGetCmax(tuple) >= curcid) + return TM_SelfModified; /* updated after scan started */ + else + return TM_Invisible; /* updated before scan started */ + } + + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + return TM_BeingModified; + + if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return TM_Ok; + } + + /* xmax transaction committed */ + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return TM_Ok; + } + + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + if (!ItemPointerEquals(&htup->t_self, &tuple->t_ctid)) + return TM_Updated; /* updated by other */ + else + return TM_Deleted; /* deleted by other */ +} + +/* + * HeapTupleSatisfiesDirty + * True iff heap tuple is valid including effects of open transactions. + * + * See SNAPSHOT_DIRTY's definition for the intended behaviour. + * + * This is essentially like HeapTupleSatisfiesSelf as far as effects of + * the current transaction and committed/aborted xacts are concerned. + * However, we also include the effects of other xacts still in progress. + * + * A special hack is that the passed-in snapshot struct is used as an + * output argument to return the xids of concurrent xacts that affected the + * tuple. snapshot->xmin is set to the tuple's xmin if that is another + * transaction that's still in progress; or to InvalidTransactionId if the + * tuple's xmin is committed good, committed dead, or my own xact. + * Similarly for snapshot->xmax and the tuple's xmax. If the tuple was + * inserted speculatively, meaning that the inserter might still back down + * on the insertion without aborting the whole transaction, the associated + * token is also returned in snapshot->speculativeToken. 
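+ *
+ * Caller sketch (cf. _bt_check_unique(), which waits for the reported
+ * xid and then retries):
+ *
+ *     SnapshotData dirty;
+ *
+ *     InitDirtySnapshot(dirty);
+ *     ... fetch the conflicting tuple under &dirty ...
+ *     if (TransactionIdIsValid(dirty.xmin))
+ *         XactLockTableWait(dirty.xmin, rel, &tup->t_self,
+ *                           XLTW_InsertIndexUnique);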
+ */ +static bool +HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + snapshot->xmin = snapshot->xmax = InvalidTransactionId; + snapshot->speculativeToken = 0; + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return false; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return false; + if (!TransactionIdIsInProgress(xvac)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (TransactionIdIsInProgress(xvac)) + return false; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return true; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) /* not deleter */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + /* updating subtransaction must have aborted */ + if (!TransactionIdIsCurrentTransactionId(xmax)) + return true; + else + return false; + } + + if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + /* deleting subtransaction must have aborted */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + return false; + } + else if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmin(tuple))) + { + /* + * Return the speculative token to caller. Caller can worry about + * xmax, since it requires a conclusively locked row version, and + * a concurrent update to this tuple is a conflict of its + * purposes. + */ + if (HeapTupleHeaderIsSpeculative(tuple)) + { + snapshot->speculativeToken = + HeapTupleHeaderGetSpeculativeToken(tuple); + + Assert(snapshot->speculativeToken != 0); + } + + snapshot->xmin = HeapTupleHeaderGetRawXmin(tuple); + /* XXX shouldn't we fall through to look at xmax? 
*/ + return true; /* in insertion by other */ + } + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + + /* by here, the inserting transaction has committed */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid or aborted */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_COMMITTED) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + return false; /* updated by other */ + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + return false; + if (TransactionIdIsInProgress(xmax)) + { + snapshot->xmax = xmax; + return true; + } + if (TransactionIdDidCommit(xmax)) + return false; + /* it must have aborted or crashed */ + return true; + } + + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + return false; + } + + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + { + if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + snapshot->xmax = HeapTupleHeaderGetRawXmax(tuple); + return true; + } + + if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + /* xmax transaction committed */ + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + return false; /* updated by other */ +} + +/* + * HeapTupleSatisfiesMVCC + * True iff heap tuple is valid for the given MVCC snapshot. + * + * See SNAPSHOT_MVCC's definition for the intended behaviour. + * + * Notice that here, we will not update the tuple status hint bits if the + * inserting/deleting transaction is still running according to our snapshot, + * even if in reality it's committed or aborted by now. This is intentional. + * Checking the true transaction state would require access to high-traffic + * shared data structures, creating contention we'd rather do without, and it + * would not change the result of our visibility check anyway. The hint bits + * will be updated by the first visitor that has a snapshot new enough to see + * the inserting/deleting transaction as done. In the meantime, the cost of + * leaving the hint bits unset is basically that each HeapTupleSatisfiesMVCC + * call will need to run TransactionIdIsCurrentTransactionId in addition to + * XidInMVCCSnapshot (but it would have to do the latter anyway). In the old + * coding where we tried to set the hint bits as soon as possible, we instead + * did TransactionIdIsInProgress in each call --- to no avail, as long as the + * inserting/deleting transaction was still running --- which was more cycles + * and more contention on ProcArrayLock. 
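+ *
+ * Concrete example: suppose the tuple's xmin is 100 and our snapshot
+ * has xmin = 90, xmax = 110, with xid 100 listed in snapshot->xip.
+ * Then XidInMVCCSnapshot(100, snapshot) returns true and we report the
+ * tuple invisible without consulting the clog, even if xid 100 has in
+ * fact committed by now; its hint bits are left for a later visitor.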
+ */ +static bool +HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return false; + + /* Used by pre-9.0 binary upgrades */ + if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return false; + if (!XidInMVCCSnapshot(xvac, snapshot)) + { + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (!TransactionIdIsCurrentTransactionId(xvac)) + { + if (XidInMVCCSnapshot(xvac, snapshot)) + return false; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (HeapTupleHeaderGetCmin(tuple) >= snapshot->curcid) + return false; /* inserted after scan started */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return true; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) /* not deleter */ + return true; + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + /* updating subtransaction must have aborted */ + if (!TransactionIdIsCurrentTransactionId(xmax)) + return true; + else if (HeapTupleHeaderGetCmax(tuple) >= snapshot->curcid) + return true; /* updated after scan started */ + else + return false; /* updated before scan started */ + } + + if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + /* deleting subtransaction must have aborted */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + if (HeapTupleHeaderGetCmax(tuple) >= snapshot->curcid) + return true; /* deleted after scan started */ + else + return false; /* deleted before scan started */ + } + else if (XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot)) + return false; + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return false; + } + } + else + { + /* xmin is committed, but maybe not according to our snapshot */ + if (!HeapTupleHeaderXminFrozen(tuple) && + XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot)) + return false; /* treat as still in progress */ + } + + /* by here, the inserting transaction has committed */ + + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid or aborted */ + return true; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax; + + /* already checked above */ + Assert(!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)); + + xmax = 
HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + { + if (HeapTupleHeaderGetCmax(tuple) >= snapshot->curcid) + return true; /* deleted after scan started */ + else + return false; /* deleted before scan started */ + } + if (XidInMVCCSnapshot(xmax, snapshot)) + return true; + if (TransactionIdDidCommit(xmax)) + return false; /* updating transaction committed */ + /* it must have aborted or crashed */ + return true; + } + + if (!(tuple->t_infomask & HEAP_XMAX_COMMITTED)) + { + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple))) + { + if (HeapTupleHeaderGetCmax(tuple) >= snapshot->curcid) + return true; /* deleted after scan started */ + else + return false; /* deleted before scan started */ + } + + if (XidInMVCCSnapshot(HeapTupleHeaderGetRawXmax(tuple), snapshot)) + return true; + + if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + { + /* it must have aborted or crashed */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return true; + } + + /* xmax transaction committed */ + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + } + else + { + /* xmax is committed, but maybe not according to our snapshot */ + if (XidInMVCCSnapshot(HeapTupleHeaderGetRawXmax(tuple), snapshot)) + return true; /* treat as still in progress */ + } + + /* xmax transaction committed */ + + return false; +} + + +/* + * HeapTupleSatisfiesVacuum + * + * Determine the status of tuples for VACUUM purposes. Here, what + * we mainly want to know is if a tuple is potentially visible to *any* + * running transaction. If so, it can't be removed yet by VACUUM. + * + * OldestXmin is a cutoff XID (obtained from + * GetOldestNonRemovableTransactionId()). Tuples deleted by XIDs >= + * OldestXmin are deemed "recently dead"; they might still be visible to some + * open transaction, so we can't remove them, even if we see that the deleting + * transaction has committed. + */ +HTSV_Result +HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin, + Buffer buffer) +{ + TransactionId dead_after = InvalidTransactionId; + HTSV_Result res; + + res = HeapTupleSatisfiesVacuumHorizon(htup, buffer, &dead_after); + + if (res == HEAPTUPLE_RECENTLY_DEAD) + { + Assert(TransactionIdIsValid(dead_after)); + + if (TransactionIdPrecedes(dead_after, OldestXmin)) + res = HEAPTUPLE_DEAD; + } + else + Assert(!TransactionIdIsValid(dead_after)); + + return res; +} + +/* + * Work horse for HeapTupleSatisfiesVacuum and similar routines. + * + * In contrast to HeapTupleSatisfiesVacuum this routine, when encountering a + * tuple that could still be visible to some backend, stores the xid that + * needs to be compared with the horizon in *dead_after, and returns + * HEAPTUPLE_RECENTLY_DEAD. The caller then can perform the comparison with + * the horizon. This is e.g. useful when comparing with different horizons. + * + * Note: HEAPTUPLE_DEAD can still be returned here, e.g. if the inserting + * transaction aborted. + */ +HTSV_Result +HeapTupleSatisfiesVacuumHorizon(HeapTuple htup, Buffer buffer, TransactionId *dead_after) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + Assert(dead_after != NULL); + + *dead_after = InvalidTransactionId; + + /* + * Has inserting transaction committed? 
+ * + * If the inserting transaction aborted, then the tuple was never visible + * to any other transaction, so we can delete it immediately. + */ + if (!HeapTupleHeaderXminCommitted(tuple)) + { + if (HeapTupleHeaderXminInvalid(tuple)) + return HEAPTUPLE_DEAD; + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_OFF) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return HEAPTUPLE_DELETE_IN_PROGRESS; + if (TransactionIdIsInProgress(xvac)) + return HEAPTUPLE_DELETE_IN_PROGRESS; + if (TransactionIdDidCommit(xvac)) + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return HEAPTUPLE_DEAD; + } + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + } + /* Used by pre-9.0 binary upgrades */ + else if (tuple->t_infomask & HEAP_MOVED_IN) + { + TransactionId xvac = HeapTupleHeaderGetXvac(tuple); + + if (TransactionIdIsCurrentTransactionId(xvac)) + return HEAPTUPLE_INSERT_IN_PROGRESS; + if (TransactionIdIsInProgress(xvac)) + return HEAPTUPLE_INSERT_IN_PROGRESS; + if (TransactionIdDidCommit(xvac)) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + InvalidTransactionId); + else + { + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return HEAPTUPLE_DEAD; + } + } + else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple))) + { + if (tuple->t_infomask & HEAP_XMAX_INVALID) /* xid invalid */ + return HEAPTUPLE_INSERT_IN_PROGRESS; + /* only locked? run infomask-only check first, for performance */ + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) || + HeapTupleHeaderIsOnlyLocked(tuple)) + return HEAPTUPLE_INSERT_IN_PROGRESS; + /* inserted and then deleted by same xact */ + if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(tuple))) + return HEAPTUPLE_DELETE_IN_PROGRESS; + /* deleting subtransaction must have aborted */ + return HEAPTUPLE_INSERT_IN_PROGRESS; + } + else if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmin(tuple))) + { + /* + * It'd be possible to discern between INSERT/DELETE in progress + * here by looking at xmax - but that doesn't seem beneficial for + * the majority of callers and even detrimental for some. We'd + * rather have callers look at/wait for xmin than xmax. It's + * always correct to return INSERT_IN_PROGRESS because that's + * what's happening from the view of other backends. + */ + return HEAPTUPLE_INSERT_IN_PROGRESS; + } + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple))) + SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED, + HeapTupleHeaderGetRawXmin(tuple)); + else + { + /* + * Not in Progress, Not Committed, so either Aborted or crashed + */ + SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, + InvalidTransactionId); + return HEAPTUPLE_DEAD; + } + + /* + * At this point the xmin is known committed, but we might not have + * been able to set the hint bit yet; so we can no longer Assert that + * it's set. + */ + } + + /* + * Okay, the inserter committed, so it was good at some point. Now what + * about the deleting transaction? + */ + if (tuple->t_infomask & HEAP_XMAX_INVALID) + return HEAPTUPLE_LIVE; + + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + { + /* + * "Deleting" xact really only locked it, so the tuple is live in any + * case. However, we should make sure that either XMAX_COMMITTED or + * XMAX_INVALID gets set once the xact is gone, to reduce the costs of + * examining the tuple for future xacts. 
+ */ + if (!(tuple->t_infomask & HEAP_XMAX_COMMITTED)) + { + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + /* + * If it's a pre-pg_upgrade tuple, the multixact cannot + * possibly be running; otherwise have to check. + */ + if (!HEAP_LOCKED_UPGRADED(tuple->t_infomask) && + MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), + true)) + return HEAPTUPLE_LIVE; + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, InvalidTransactionId); + } + else + { + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + return HEAPTUPLE_LIVE; + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + } + } + + /* + * We don't really care whether xmax did commit, abort or crash. We + * know that xmax did lock the tuple, but it did not and will never + * actually update it. + */ + + return HEAPTUPLE_LIVE; + } + + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + TransactionId xmax = HeapTupleGetUpdateXid(tuple); + + /* already checked above */ + Assert(!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsInProgress(xmax)) + return HEAPTUPLE_DELETE_IN_PROGRESS; + else if (TransactionIdDidCommit(xmax)) + { + /* + * The multixact might still be running due to lockers. Need to + * allow for pruning if below the xid horizon regardless -- + * otherwise we could end up with a tuple where the updater has to + * be removed due to the horizon, but is not pruned away. It's + * not a problem to prune that tuple, because any remaining + * lockers will also be present in newer tuple versions. + */ + *dead_after = xmax; + return HEAPTUPLE_RECENTLY_DEAD; + } + else if (!MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + { + /* + * Not in Progress, Not Committed, so either Aborted or crashed. + * Mark the Xmax as invalid. + */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, InvalidTransactionId); + } + + return HEAPTUPLE_LIVE; + } + + if (!(tuple->t_infomask & HEAP_XMAX_COMMITTED)) + { + if (TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuple))) + return HEAPTUPLE_DELETE_IN_PROGRESS; + else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuple))) + SetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED, + HeapTupleHeaderGetRawXmax(tuple)); + else + { + /* + * Not in Progress, Not Committed, so either Aborted or crashed + */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, + InvalidTransactionId); + return HEAPTUPLE_LIVE; + } + + /* + * At this point the xmax is known committed, but we might not have + * been able to set the hint bit yet; so we can no longer Assert that + * it's set. + */ + } + + /* + * Deleter committed, allow caller to check if it was recent enough that + * some open transactions could still see the tuple. + */ + *dead_after = HeapTupleHeaderGetRawXmax(tuple); + return HEAPTUPLE_RECENTLY_DEAD; +} + + +/* + * HeapTupleSatisfiesNonVacuumable + * + * True if tuple might be visible to some transaction; false if it's + * surely dead to everyone, ie, vacuumable. + * + * See SNAPSHOT_NON_VACUUMABLE's definition for the intended behaviour. + * + * This is an interface to HeapTupleSatisfiesVacuum that's callable via + * HeapTupleSatisfiesSnapshot, so it can be used through a Snapshot. + * snapshot->vistest must have been set up with the horizon to use. 
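+ *
+ * Setup sketch (cf. get_actual_variable_range() in selfuncs.c):
+ *
+ *     SnapshotData SnapshotNonVacuumable;
+ *
+ *     InitNonVacuumableSnapshot(SnapshotNonVacuumable,
+ *                               GlobalVisTestFor(heapRel));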
+ */ +static bool +HeapTupleSatisfiesNonVacuumable(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + TransactionId dead_after = InvalidTransactionId; + HTSV_Result res; + + res = HeapTupleSatisfiesVacuumHorizon(htup, buffer, &dead_after); + + if (res == HEAPTUPLE_RECENTLY_DEAD) + { + Assert(TransactionIdIsValid(dead_after)); + + if (GlobalVisTestIsRemovableXid(snapshot->vistest, dead_after)) + res = HEAPTUPLE_DEAD; + } + else + Assert(!TransactionIdIsValid(dead_after)); + + return res != HEAPTUPLE_DEAD; +} + + +/* + * HeapTupleIsSurelyDead + * + * Cheaply determine whether a tuple is surely dead to all onlookers. + * We sometimes use this in lieu of HeapTupleSatisfiesVacuum when the + * tuple has just been tested by another visibility routine (usually + * HeapTupleSatisfiesMVCC) and, therefore, any hint bits that can be set + * should already be set. We assume that if no hint bits are set, the xmin + * or xmax transaction is still running. This is therefore faster than + * HeapTupleSatisfiesVacuum, because we consult neither procarray nor CLOG. + * It's okay to return false when in doubt, but we must return true only + * if the tuple is removable. + */ +bool +HeapTupleIsSurelyDead(HeapTuple htup, GlobalVisState *vistest) +{ + HeapTupleHeader tuple = htup->t_data; + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + /* + * If the inserting transaction is marked invalid, then it aborted, and + * the tuple is definitely dead. If it's marked neither committed nor + * invalid, then we assume it's still alive (since the presumption is that + * all relevant hint bits were just set moments ago). + */ + if (!HeapTupleHeaderXminCommitted(tuple)) + return HeapTupleHeaderXminInvalid(tuple); + + /* + * If the inserting transaction committed, but any deleting transaction + * aborted, the tuple is still alive. + */ + if (tuple->t_infomask & HEAP_XMAX_INVALID) + return false; + + /* + * If the XMAX is just a lock, the tuple is still alive. + */ + if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return false; + + /* + * If the Xmax is a MultiXact, it might be dead or alive, but we cannot + * know without checking pg_multixact. + */ + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + return false; + + /* If deleter isn't known to have committed, assume it's still running. */ + if (!(tuple->t_infomask & HEAP_XMAX_COMMITTED)) + return false; + + /* Deleter committed, so tuple is dead if the XID is old enough. */ + return GlobalVisTestIsRemovableXid(vistest, + HeapTupleHeaderGetRawXmax(tuple)); +} + +/* + * Is the tuple really only locked? That is, is it not updated? + * + * It's easy to check just infomask bits if the locker is not a multi; but + * otherwise we need to verify that the updating transaction has not aborted. + * + * This function is here because it follows the same visibility rules laid out + * at the top of this file. + */ +bool +HeapTupleHeaderIsOnlyLocked(HeapTupleHeader tuple) +{ + TransactionId xmax; + + /* if there's no valid Xmax, then there's obviously no update either */ + if (tuple->t_infomask & HEAP_XMAX_INVALID) + return true; + + if (tuple->t_infomask & HEAP_XMAX_LOCK_ONLY) + return true; + + /* invalid xmax means no update */ + if (!TransactionIdIsValid(HeapTupleHeaderGetRawXmax(tuple))) + return true; + + /* + * if HEAP_XMAX_LOCK_ONLY is not set and not a multi, then this must + * necessarily have been updated + */ + if (!(tuple->t_infomask & HEAP_XMAX_IS_MULTI)) + return false; + + /* ... 
but if it's a multi, then perhaps the updating Xid aborted. */ + xmax = HeapTupleGetUpdateXid(tuple); + + /* not LOCKED_ONLY, so it has to have an xmax */ + Assert(TransactionIdIsValid(xmax)); + + if (TransactionIdIsCurrentTransactionId(xmax)) + return false; + if (TransactionIdIsInProgress(xmax)) + return false; + if (TransactionIdDidCommit(xmax)) + return false; + + /* + * not current, not in progress, not committed -- must have aborted or + * crashed + */ + return true; +} + +/* + * check whether the transaction id 'xid' is in the pre-sorted array 'xip'. + */ +static bool +TransactionIdInArray(TransactionId xid, TransactionId *xip, Size num) +{ + return num > 0 && + bsearch(&xid, xip, num, sizeof(TransactionId), xidComparator) != NULL; +} + +/* + * See the comments for HeapTupleSatisfiesMVCC for the semantics this function + * obeys. + * + * Only usable on tuples from catalog tables! + * + * We don't need to support HEAP_MOVED_(IN|OFF) for now because we only support + * reading catalog pages which couldn't have been created in an older version. + * + * We don't set any hint bits in here as it seems unlikely to be beneficial as + * those should already be set by normal access and it seems to be too + * dangerous to do so as the semantics of doing so during timetravel are more + * complicated than when dealing "only" with the present. + */ +static bool +HeapTupleSatisfiesHistoricMVCC(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + HeapTupleHeader tuple = htup->t_data; + TransactionId xmin = HeapTupleHeaderGetXmin(tuple); + TransactionId xmax = HeapTupleHeaderGetRawXmax(tuple); + + Assert(ItemPointerIsValid(&htup->t_self)); + Assert(htup->t_tableOid != InvalidOid); + + /* inserting transaction aborted */ + if (HeapTupleHeaderXminInvalid(tuple)) + { + Assert(!TransactionIdDidCommit(xmin)); + return false; + } + /* check if it's one of our txids, toplevel is also in there */ + else if (TransactionIdInArray(xmin, snapshot->subxip, snapshot->subxcnt)) + { + bool resolved; + CommandId cmin = HeapTupleHeaderGetRawCommandId(tuple); + CommandId cmax = InvalidCommandId; + + /* + * another transaction might have (tried to) delete this tuple or + * cmin/cmax was stored in a combo CID. So we need to lookup the + * actual values externally. + */ + resolved = ResolveCminCmaxDuringDecoding(HistoricSnapshotGetTupleCids(), snapshot, + htup, buffer, + &cmin, &cmax); + + /* + * If we haven't resolved the combo CID to cmin/cmax, that means we + * have not decoded the combo CID yet. That means the cmin is + * definitely in the future, and we're not supposed to see the tuple + * yet. + * + * XXX This only applies to decoding of in-progress transactions. In + * regular logical decoding we only execute this code at commit time, + * at which point we should have seen all relevant combo CIDs. So + * ideally, we should error out in this case but in practice, this + * won't happen. If we are too worried about this then we can add an + * elog inside ResolveCminCmaxDuringDecoding. + * + * XXX For the streaming case, we can track the largest combo CID + * assigned, and error out based on this (when unable to resolve combo + * CID below that observed maximum value). + */ + if (!resolved) + return false; + + Assert(cmin != InvalidCommandId); + + if (cmin >= snapshot->curcid) + return false; /* inserted after scan started */ + /* fall through */ + } + /* committed before our xmin horizon. Do a normal visibility check. 
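+ * (Recall that in a historic snapshot the array semantics are inverted
+ * relative to a regular MVCC snapshot: xip holds *committed* xids
+ * between xmin and xmax, while subxip holds all xids of our own
+ * replayed transaction, including the toplevel xid.)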
*/ + else if (TransactionIdPrecedes(xmin, snapshot->xmin)) + { + Assert(!(HeapTupleHeaderXminCommitted(tuple) && + !TransactionIdDidCommit(xmin))); + + /* check for hint bit first, consult clog afterwards */ + if (!HeapTupleHeaderXminCommitted(tuple) && + !TransactionIdDidCommit(xmin)) + return false; + /* fall through */ + } + /* beyond our xmax horizon, i.e. invisible */ + else if (TransactionIdFollowsOrEquals(xmin, snapshot->xmax)) + { + return false; + } + /* check if it's a committed transaction in [xmin, xmax) */ + else if (TransactionIdInArray(xmin, snapshot->xip, snapshot->xcnt)) + { + /* fall through */ + } + + /* + * none of the above, i.e. between [xmin, xmax) but hasn't committed. I.e. + * invisible. + */ + else + { + return false; + } + + /* at this point we know xmin is visible, go on to check xmax */ + + /* xid invalid or aborted */ + if (tuple->t_infomask & HEAP_XMAX_INVALID) + return true; + /* locked tuples are always visible */ + else if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)) + return true; + + /* + * We can see multis here if we're looking at user tables or if somebody + * SELECT ... FOR SHARE/UPDATE a system table. + */ + else if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) + { + xmax = HeapTupleGetUpdateXid(tuple); + } + + /* check if it's one of our txids, toplevel is also in there */ + if (TransactionIdInArray(xmax, snapshot->subxip, snapshot->subxcnt)) + { + bool resolved; + CommandId cmin; + CommandId cmax = HeapTupleHeaderGetRawCommandId(tuple); + + /* Lookup actual cmin/cmax values */ + resolved = ResolveCminCmaxDuringDecoding(HistoricSnapshotGetTupleCids(), snapshot, + htup, buffer, + &cmin, &cmax); + + /* + * If we haven't resolved the combo CID to cmin/cmax, that means we + * have not decoded the combo CID yet. That means the cmax is + * definitely in the future, and we're still supposed to see the + * tuple. + * + * XXX This only applies to decoding of in-progress transactions. In + * regular logical decoding we only execute this code at commit time, + * at which point we should have seen all relevant combo CIDs. So + * ideally, we should error out in this case but in practice, this + * won't happen. If we are too worried about this then we can add an + * elog inside ResolveCminCmaxDuringDecoding. + * + * XXX For the streaming case, we can track the largest combo CID + * assigned, and error out based on this (when unable to resolve combo + * CID below that observed maximum value). + */ + if (!resolved || cmax == InvalidCommandId) + return true; + + if (cmax >= snapshot->curcid) + return true; /* deleted after scan started */ + else + return false; /* deleted before scan started */ + } + /* below xmin horizon, normal transaction state is valid */ + else if (TransactionIdPrecedes(xmax, snapshot->xmin)) + { + Assert(!(tuple->t_infomask & HEAP_XMAX_COMMITTED && + !TransactionIdDidCommit(xmax))); + + /* check hint bit first */ + if (tuple->t_infomask & HEAP_XMAX_COMMITTED) + return false; + + /* check clog */ + return !TransactionIdDidCommit(xmax); + } + /* above xmax horizon, we cannot possibly see the deleting transaction */ + else if (TransactionIdFollowsOrEquals(xmax, snapshot->xmax)) + return true; + /* xmax is between [xmin, xmax), check known committed array */ + else if (TransactionIdInArray(xmax, snapshot->xip, snapshot->xcnt)) + return false; + /* xmax is between [xmin, xmax), but known not to have committed yet */ + else + return true; +} + +/* + * HeapTupleSatisfiesVisibility + * True iff heap tuple satisfies a time qual. 
+ *
+ * Notes:
+ *	Assumes heap tuple is valid, and buffer at least share locked.
+ *
+ *	Hint bits in the HeapTuple's t_infomask may be updated as a side effect;
+ *	if so, the indicated buffer is marked dirty.
+ */
+bool
+HeapTupleSatisfiesVisibility(HeapTuple htup, Snapshot snapshot, Buffer buffer)
+{
+	switch (snapshot->snapshot_type)
+	{
+		case SNAPSHOT_MVCC:
+			return HeapTupleSatisfiesMVCC(htup, snapshot, buffer);
+		case SNAPSHOT_SELF:
+			return HeapTupleSatisfiesSelf(htup, snapshot, buffer);
+		case SNAPSHOT_ANY:
+			return HeapTupleSatisfiesAny(htup, snapshot, buffer);
+		case SNAPSHOT_TOAST:
+			return HeapTupleSatisfiesToast(htup, snapshot, buffer);
+		case SNAPSHOT_DIRTY:
+			return HeapTupleSatisfiesDirty(htup, snapshot, buffer);
+		case SNAPSHOT_HISTORIC_MVCC:
+			return HeapTupleSatisfiesHistoricMVCC(htup, snapshot, buffer);
+		case SNAPSHOT_NON_VACUUMABLE:
+			return HeapTupleSatisfiesNonVacuumable(htup, snapshot, buffer);
+	}
+
+	return false;				/* keep compiler quiet */
+}
diff --git a/contrib/pg_tde/src17/access/pg_tdetoast.c b/contrib/pg_tde/src17/access/pg_tdetoast.c
new file mode 100644
index 00000000000..a24b21eb7a8
--- /dev/null
+++ b/contrib/pg_tde/src17/access/pg_tdetoast.c
@@ -0,0 +1,1262 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_tdetoast.c
+ *	  Heap-specific definitions for external and compressed storage
+ *	  of variable size attributes. Based on the PostgreSQL heap AM's
+ *	  src/backend/access/heap/heaptoast.c.
+ *
+ * Copyright (c) 2000-2024, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  contrib/pg_tde/src17/access/pg_tdetoast.c
+ *
+ *
+ * INTERFACE ROUTINES
+ *	tdeheap_toast_insert_or_update -
+ *		Try to make a given tuple fit into one page by compressing
+ *		or moving off attributes
+ *
+ *	tdeheap_toast_delete -
+ *		Reclaim toast storage when a tuple is deleted
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "pg_tde_defines.h"
+
+#include "postgres.h"
+
+#include "access/pg_tdeam.h"
+#include "access/pg_tdetoast.h"
+
+#include "access/detoast.h"
+#include "access/genam.h"
+#include "access/toast_helper.h"
+#include "access/toast_internals.h"
+#include "miscadmin.h"
+#include "utils/fmgroids.h"
+#include "utils/snapmgr.h"
+#include "encryption/enc_tde.h"
+
+#define TDE_TOAST_COMPRESS_HEADER_SIZE	(VARHDRSZ_COMPRESSED - VARHDRSZ)
+
+static void tdeheap_toast_tuple_externalize(ToastTupleContext *ttc,
+											int attribute, int options);
+static Datum tdeheap_toast_save_datum(Relation rel, Datum value,
+									  struct varlena *oldexternal,
+									  int options);
+static void tdeheap_toast_encrypt(Pointer dval, Oid valueid, RelKeyData *keys);
+static bool toastrel_valueid_exists(Relation toastrel, Oid valueid);
+static bool toastid_valueid_exists(Oid toastrelid, Oid valueid);
+
+
+/* ----------
+ * tdeheap_toast_delete -
+ *
+ *	Cascaded delete toast-entries on DELETE
+ * ----------
+ */
+void
+tdeheap_toast_delete(Relation rel, HeapTuple oldtup, bool is_speculative)
+{
+	TupleDesc	tupleDesc;
+	Datum		toast_values[MaxHeapAttributeNumber];
+	bool		toast_isnull[MaxHeapAttributeNumber];
+
+	/*
+	 * We should only ever be called for tuples of plain relations or
+	 * materialized views --- recursing on a toast rel is bad news.
+	 */
+	Assert(rel->rd_rel->relkind == RELKIND_RELATION ||
+		   rel->rd_rel->relkind == RELKIND_MATVIEW);
+
+	/*
+	 * Get the tuple descriptor and break down the tuple into fields.
+	 *
+	 * NOTE: it's debatable whether to use tdeheap_deform_tuple() here or just
+	 * tdeheap_getattr() on the varlena columns only.
The latter could win if there + * are few varlena columns and many non-varlena ones. However, + * tdeheap_deform_tuple costs only O(N) while the tdeheap_getattr way would cost + * O(N^2) if there are many varlena columns, so it seems better to err on + * the side of linear cost. (We won't even be here unless there's at + * least one varlena column, by the way.) + */ + tupleDesc = rel->rd_att; + + Assert(tupleDesc->natts <= MaxHeapAttributeNumber); + tdeheap_deform_tuple(oldtup, tupleDesc, toast_values, toast_isnull); + + /* Do the real work. */ + toast_delete_external(rel, toast_values, toast_isnull, is_speculative); +} + + +/* ---------- + * tdeheap_toast_insert_or_update - + * + * Delete no-longer-used toast-entries and create new ones to + * make the new tuple fit on INSERT or UPDATE + * + * Inputs: + * newtup: the candidate new tuple to be inserted + * oldtup: the old row version for UPDATE, or NULL for INSERT + * options: options to be passed to tdeheap_insert() for toast rows + * Result: + * either newtup if no toasting is needed, or a palloc'd modified tuple + * that is what should actually get stored + * + * NOTE: neither newtup nor oldtup will be modified. This is a change + * from the pre-8.1 API of this routine. + * ---------- + */ +HeapTuple +tdeheap_toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup, + int options) +{ + HeapTuple result_tuple; + TupleDesc tupleDesc; + int numAttrs; + + Size maxDataLen; + Size hoff; + + bool toast_isnull[MaxHeapAttributeNumber]; + bool toast_oldisnull[MaxHeapAttributeNumber]; + Datum toast_values[MaxHeapAttributeNumber]; + Datum toast_oldvalues[MaxHeapAttributeNumber]; + ToastAttrInfo toast_attr[MaxHeapAttributeNumber]; + ToastTupleContext ttc; + + /* + * Ignore the INSERT_SPECULATIVE option. Speculative insertions/super + * deletions just normally insert/delete the toast values. It seems + * easiest to deal with that here, instead on, potentially, multiple + * callers. + */ + options &= ~HEAP_INSERT_SPECULATIVE; + + /* + * We should only ever be called for tuples of plain relations or + * materialized views --- recursing on a toast rel is bad news. + */ + Assert(rel->rd_rel->relkind == RELKIND_RELATION || + rel->rd_rel->relkind == RELKIND_MATVIEW); + + /* + * Get the tuple descriptor and break down the tuple(s) into fields. 
+ */ + tupleDesc = rel->rd_att; + numAttrs = tupleDesc->natts; + + Assert(numAttrs <= MaxHeapAttributeNumber); + tdeheap_deform_tuple(newtup, tupleDesc, toast_values, toast_isnull); + if (oldtup != NULL) + tdeheap_deform_tuple(oldtup, tupleDesc, toast_oldvalues, toast_oldisnull); + + /* ---------- + * Prepare for toasting + * ---------- + */ + ttc.ttc_rel = rel; + ttc.ttc_values = toast_values; + ttc.ttc_isnull = toast_isnull; + if (oldtup == NULL) + { + ttc.ttc_oldvalues = NULL; + ttc.ttc_oldisnull = NULL; + } + else + { + ttc.ttc_oldvalues = toast_oldvalues; + ttc.ttc_oldisnull = toast_oldisnull; + } + ttc.ttc_attr = toast_attr; + toast_tuple_init(&ttc); + + /* ---------- + * Compress and/or save external until data fits into target length + * + * 1: Inline compress attributes with attstorage EXTENDED, and store very + * large attributes with attstorage EXTENDED or EXTERNAL external + * immediately + * 2: Store attributes with attstorage EXTENDED or EXTERNAL external + * 3: Inline compress attributes with attstorage MAIN + * 4: Store attributes with attstorage MAIN external + * ---------- + */ + + /* compute header overhead --- this should match tdeheap_form_tuple() */ + hoff = SizeofHeapTupleHeader; + if ((ttc.ttc_flags & TOAST_HAS_NULLS) != 0) + hoff += BITMAPLEN(numAttrs); + hoff = MAXALIGN(hoff); + /* now convert to a limit on the tuple data size */ + maxDataLen = RelationGetToastTupleTarget(rel, TOAST_TUPLE_TARGET) - hoff; + + /* + * Look for attributes with attstorage EXTENDED to compress. Also find + * large attributes with attstorage EXTENDED or EXTERNAL, and store them + * external. + */ + while (tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull) > maxDataLen) + { + int biggest_attno; + + biggest_attno = toast_tuple_find_biggest_attribute(&ttc, true, false); + if (biggest_attno < 0) + break; + + /* + * Attempt to compress it inline, if it has attstorage EXTENDED + */ + if (TupleDescAttr(tupleDesc, biggest_attno)->attstorage == TYPSTORAGE_EXTENDED) + toast_tuple_try_compression(&ttc, biggest_attno); + else + { + /* + * has attstorage EXTERNAL, ignore on subsequent compression + * passes + */ + toast_attr[biggest_attno].tai_colflags |= TOASTCOL_INCOMPRESSIBLE; + } + + /* + * If this value is by itself more than maxDataLen (after compression + * if any), push it out to the toast table immediately, if possible. + * This avoids uselessly compressing other fields in the common case + * where we have one long field and several short ones. + * + * XXX maybe the threshold should be less than maxDataLen? + */ + if (toast_attr[biggest_attno].tai_size > maxDataLen && + rel->rd_rel->reltoastrelid != InvalidOid) + tdeheap_toast_tuple_externalize(&ttc, biggest_attno, options); + } + + /* + * Second we look for attributes of attstorage EXTENDED or EXTERNAL that + * are still inline, and make them external. But skip this if there's no + * toast table to push them to. 
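+ * (With the default 8 kB block size, maxDataLen here is still roughly
+ * 2 kB; it is raised only for the final MAIN pass further below.)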
+ */ + while (tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull) > maxDataLen && + rel->rd_rel->reltoastrelid != InvalidOid) + { + int biggest_attno; + + biggest_attno = toast_tuple_find_biggest_attribute(&ttc, false, false); + if (biggest_attno < 0) + break; + tdeheap_toast_tuple_externalize(&ttc, biggest_attno, options); + } + + /* + * Round 3 - this time we take attributes with storage MAIN into + * compression + */ + while (tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull) > maxDataLen) + { + int biggest_attno; + + biggest_attno = toast_tuple_find_biggest_attribute(&ttc, true, true); + if (biggest_attno < 0) + break; + + toast_tuple_try_compression(&ttc, biggest_attno); + } + + /* + * Finally we store attributes of type MAIN externally. At this point we + * increase the target tuple size, so that MAIN attributes aren't stored + * externally unless really necessary. + */ + maxDataLen = TOAST_TUPLE_TARGET_MAIN - hoff; + + while (tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull) > maxDataLen && + rel->rd_rel->reltoastrelid != InvalidOid) + { + int biggest_attno; + + biggest_attno = toast_tuple_find_biggest_attribute(&ttc, false, true); + if (biggest_attno < 0) + break; + + tdeheap_toast_tuple_externalize(&ttc, biggest_attno, options); + } + + /* + * In the case we toasted any values, we need to build a new heap tuple + * with the changed values. + */ + if ((ttc.ttc_flags & TOAST_NEEDS_CHANGE) != 0) + { + HeapTupleHeader olddata = newtup->t_data; + HeapTupleHeader new_data; + int32 new_header_len; + int32 new_data_len; + int32 new_tuple_len; + + /* + * Calculate the new size of the tuple. + * + * Note: we used to assume here that the old tuple's t_hoff must equal + * the new_header_len value, but that was incorrect. The old tuple + * might have a smaller-than-current natts, if there's been an ALTER + * TABLE ADD COLUMN since it was stored; and that would lead to a + * different conclusion about the size of the null bitmap, or even + * whether there needs to be one at all. + */ + new_header_len = SizeofHeapTupleHeader; + if ((ttc.ttc_flags & TOAST_HAS_NULLS) != 0) + new_header_len += BITMAPLEN(numAttrs); + new_header_len = MAXALIGN(new_header_len); + new_data_len = tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull); + new_tuple_len = new_header_len + new_data_len; + + /* + * Allocate and zero the space needed, and fill HeapTupleData fields. + */ + result_tuple = (HeapTuple) palloc0(HEAPTUPLESIZE + new_tuple_len); + result_tuple->t_len = new_tuple_len; + result_tuple->t_self = newtup->t_self; + result_tuple->t_tableOid = newtup->t_tableOid; + new_data = (HeapTupleHeader) ((char *) result_tuple + HEAPTUPLESIZE); + result_tuple->t_data = new_data; + + /* + * Copy the existing tuple header, but adjust natts and t_hoff. + */ + memcpy(new_data, olddata, SizeofHeapTupleHeader); + HeapTupleHeaderSetNatts(new_data, numAttrs); + new_data->t_hoff = new_header_len; + + /* Copy over the data, and fill the null bitmap if needed */ + tdeheap_fill_tuple(tupleDesc, + toast_values, + toast_isnull, + (char *) new_data + new_header_len, + new_data_len, + &(new_data->t_infomask), + ((ttc.ttc_flags & TOAST_HAS_NULLS) != 0) ? + new_data->t_bits : NULL); + } + else + result_tuple = newtup; + + toast_tuple_cleanup(&ttc); + + return result_tuple; +} + + +/* ---------- + * toast_flatten_tuple - + * + * "Flatten" a tuple to contain no out-of-line toasted fields. + * (This does not eliminate compressed or short-header datums.) 
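+ *
+ * Expected call pattern (sketch):
+ *
+ *     if (HeapTupleHasExternal(tup))
+ *         tup = toast_flatten_tuple(tup, RelationGetDescr(rel));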
+ * + * Note: we expect the caller already checked HeapTupleHasExternal(tup), + * so there is no need for a short-circuit path. + * ---------- + */ +HeapTuple +toast_flatten_tuple(HeapTuple tup, TupleDesc tupleDesc) +{ + HeapTuple new_tuple; + int numAttrs = tupleDesc->natts; + int i; + Datum toast_values[MaxTupleAttributeNumber]; + bool toast_isnull[MaxTupleAttributeNumber]; + bool toast_free[MaxTupleAttributeNumber]; + + /* + * Break down the tuple into fields. + */ + Assert(numAttrs <= MaxTupleAttributeNumber); + tdeheap_deform_tuple(tup, tupleDesc, toast_values, toast_isnull); + + memset(toast_free, 0, numAttrs * sizeof(bool)); + + for (i = 0; i < numAttrs; i++) + { + /* + * Look at non-null varlena attributes + */ + if (!toast_isnull[i] && TupleDescAttr(tupleDesc, i)->attlen == -1) + { + struct varlena *new_value; + + new_value = (struct varlena *) DatumGetPointer(toast_values[i]); + if (VARATT_IS_EXTERNAL(new_value)) + { + new_value = detoast_external_attr(new_value); + toast_values[i] = PointerGetDatum(new_value); + toast_free[i] = true; + } + } + } + + /* + * Form the reconfigured tuple. + */ + new_tuple = tdeheap_form_tuple(tupleDesc, toast_values, toast_isnull); + + /* + * Be sure to copy the tuple's identity fields. We also make a point of + * copying visibility info, just in case anybody looks at those fields in + * a syscache entry. + */ + new_tuple->t_self = tup->t_self; + new_tuple->t_tableOid = tup->t_tableOid; + + new_tuple->t_data->t_choice = tup->t_data->t_choice; + new_tuple->t_data->t_ctid = tup->t_data->t_ctid; + new_tuple->t_data->t_infomask &= ~HEAP_XACT_MASK; + new_tuple->t_data->t_infomask |= + tup->t_data->t_infomask & HEAP_XACT_MASK; + new_tuple->t_data->t_infomask2 &= ~HEAP2_XACT_MASK; + new_tuple->t_data->t_infomask2 |= + tup->t_data->t_infomask2 & HEAP2_XACT_MASK; + + /* + * Free allocated temp values + */ + for (i = 0; i < numAttrs; i++) + if (toast_free[i]) + pfree(DatumGetPointer(toast_values[i])); + + return new_tuple; +} + + +/* ---------- + * toast_flatten_tuple_to_datum - + * + * "Flatten" a tuple containing out-of-line toasted fields into a Datum. + * The result is always palloc'd in the current memory context. + * + * We have a general rule that Datums of container types (rows, arrays, + * ranges, etc) must not contain any external TOAST pointers. Without + * this rule, we'd have to look inside each Datum when preparing a tuple + * for storage, which would be expensive and would fail to extend cleanly + * to new sorts of container types. + * + * However, we don't want to say that tuples represented as HeapTuples + * can't contain toasted fields, so instead this routine should be called + * when such a HeapTuple is being converted into a Datum. + * + * While we're at it, we decompress any compressed fields too. This is not + * necessary for correctness, but reflects an expectation that compression + * will be more effective if applied to the whole tuple not individual + * fields. We are not so concerned about that that we want to deconstruct + * and reconstruct tuples just to get rid of compressed fields, however. + * So callers typically won't call this unless they see that the tuple has + * at least one external field. + * + * On the other hand, in-line short-header varlena fields are left alone. + * If we "untoasted" them here, they'd just get changed back to short-header + * format anyway within tdeheap_fill_tuple. 
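+ *
+ * Usage sketch (cf. heap_copy_tuple_as_datum() in core):
+ *
+ *     if (HeapTupleHasExternal(tuple))
+ *         return toast_flatten_tuple_to_datum(tuple->t_data,
+ *                                             tuple->t_len,
+ *                                             tupleDesc);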
+ * ---------- + */ +Datum +toast_flatten_tuple_to_datum(HeapTupleHeader tup, + uint32 tup_len, + TupleDesc tupleDesc) +{ + HeapTupleHeader new_data; + int32 new_header_len; + int32 new_data_len; + int32 new_tuple_len; + HeapTupleData tmptup; + int numAttrs = tupleDesc->natts; + int i; + bool has_nulls = false; + Datum toast_values[MaxTupleAttributeNumber]; + bool toast_isnull[MaxTupleAttributeNumber]; + bool toast_free[MaxTupleAttributeNumber]; + + /* Build a temporary HeapTuple control structure */ + tmptup.t_len = tup_len; + ItemPointerSetInvalid(&(tmptup.t_self)); + tmptup.t_tableOid = InvalidOid; + tmptup.t_data = tup; + + /* + * Break down the tuple into fields. + */ + Assert(numAttrs <= MaxTupleAttributeNumber); + tdeheap_deform_tuple(&tmptup, tupleDesc, toast_values, toast_isnull); + + memset(toast_free, 0, numAttrs * sizeof(bool)); + + for (i = 0; i < numAttrs; i++) + { + /* + * Look at non-null varlena attributes + */ + if (toast_isnull[i]) + has_nulls = true; + else if (TupleDescAttr(tupleDesc, i)->attlen == -1) + { + struct varlena *new_value; + + new_value = (struct varlena *) DatumGetPointer(toast_values[i]); + if (VARATT_IS_EXTERNAL(new_value) || + VARATT_IS_COMPRESSED(new_value)) + { + new_value = detoast_attr(new_value); + toast_values[i] = PointerGetDatum(new_value); + toast_free[i] = true; + } + } + } + + /* + * Calculate the new size of the tuple. + * + * This should match the reconstruction code in + * tdeheap_toast_insert_or_update. + */ + new_header_len = SizeofHeapTupleHeader; + if (has_nulls) + new_header_len += BITMAPLEN(numAttrs); + new_header_len = MAXALIGN(new_header_len); + new_data_len = tdeheap_compute_data_size(tupleDesc, + toast_values, toast_isnull); + new_tuple_len = new_header_len + new_data_len; + + new_data = (HeapTupleHeader) palloc0(new_tuple_len); + + /* + * Copy the existing tuple header, but adjust natts and t_hoff. + */ + memcpy(new_data, tup, SizeofHeapTupleHeader); + HeapTupleHeaderSetNatts(new_data, numAttrs); + new_data->t_hoff = new_header_len; + + /* Set the composite-Datum header fields correctly */ + HeapTupleHeaderSetDatumLength(new_data, new_tuple_len); + HeapTupleHeaderSetTypeId(new_data, tupleDesc->tdtypeid); + HeapTupleHeaderSetTypMod(new_data, tupleDesc->tdtypmod); + + /* Copy over the data, and fill the null bitmap if needed */ + tdeheap_fill_tuple(tupleDesc, + toast_values, + toast_isnull, + (char *) new_data + new_header_len, + new_data_len, + &(new_data->t_infomask), + has_nulls ? new_data->t_bits : NULL); + + /* + * Free allocated temp values + */ + for (i = 0; i < numAttrs; i++) + if (toast_free[i]) + pfree(DatumGetPointer(toast_values[i])); + + return PointerGetDatum(new_data); +} + + +/* ---------- + * toast_build_flattened_tuple - + * + * Build a tuple containing no out-of-line toasted fields. + * (This does not eliminate compressed or short-header datums.) + * + * This is essentially just like tdeheap_form_tuple, except that it will + * expand any external-data pointers beforehand. + * + * It's not very clear whether it would be preferable to decompress + * in-line compressed datums while at it. For now, we don't. 
+ * ---------- + */ +HeapTuple +toast_build_flattened_tuple(TupleDesc tupleDesc, + Datum *values, + bool *isnull) +{ + HeapTuple new_tuple; + int numAttrs = tupleDesc->natts; + int num_to_free; + int i; + Datum new_values[MaxTupleAttributeNumber]; + Pointer freeable_values[MaxTupleAttributeNumber]; + + /* + * We can pass the caller's isnull array directly to tdeheap_form_tuple, but + * we potentially need to modify the values array. + */ + Assert(numAttrs <= MaxTupleAttributeNumber); + memcpy(new_values, values, numAttrs * sizeof(Datum)); + + num_to_free = 0; + for (i = 0; i < numAttrs; i++) + { + /* + * Look at non-null varlena attributes + */ + if (!isnull[i] && TupleDescAttr(tupleDesc, i)->attlen == -1) + { + struct varlena *new_value; + + new_value = (struct varlena *) DatumGetPointer(new_values[i]); + if (VARATT_IS_EXTERNAL(new_value)) + { + new_value = detoast_external_attr(new_value); + new_values[i] = PointerGetDatum(new_value); + freeable_values[num_to_free++] = (Pointer) new_value; + } + } + } + + /* + * Form the reconfigured tuple. + */ + new_tuple = tdeheap_form_tuple(tupleDesc, new_values, isnull); + + /* + * Free allocated temp values + */ + for (i = 0; i < num_to_free; i++) + pfree(freeable_values[i]); + + return new_tuple; +} + +/* + * Fetch a TOAST slice from a heap table. + * + * toastrel is the relation from which chunks are to be fetched. + * valueid identifies the TOAST value from which chunks are being fetched. + * attrsize is the total size of the TOAST value. + * sliceoffset is the byte offset within the TOAST value from which to fetch. + * slicelength is the number of bytes to be fetched from the TOAST value. + * result is the varlena into which the results should be written. + */ +void +tdeheap_fetch_toast_slice(Relation toastrel, Oid valueid, int32 attrsize, + int32 sliceoffset, int32 slicelength, + struct varlena *result) +{ + Relation *toastidxs; + ScanKeyData toastkey[3]; + TupleDesc toasttupDesc = toastrel->rd_att; + int nscankeys; + SysScanDesc toastscan; + HeapTuple ttup; + int32 expectedchunk; + int32 totalchunks = ((attrsize - 1) / TOAST_MAX_CHUNK_SIZE) + 1; + int startchunk; + int endchunk; + int num_indexes; + int validIndex; + SnapshotData SnapshotToast; + char decrypted_data[TOAST_MAX_CHUNK_SIZE]; + RelKeyData *key = GetHeapBaiscRelationKey(toastrel->rd_locator); + char iv_prefix[16] = {0,}; + + + /* Look for the valid index of toast relation */ + validIndex = toast_open_indexes(toastrel, + AccessShareLock, + &toastidxs, + &num_indexes); + + startchunk = sliceoffset / TOAST_MAX_CHUNK_SIZE; + endchunk = (sliceoffset + slicelength - 1) / TOAST_MAX_CHUNK_SIZE; + Assert(endchunk <= totalchunks); + + /* Set up a scan key to fetch from the index. */ + ScanKeyInit(&toastkey[0], + (AttrNumber) 1, + BTEqualStrategyNumber, F_OIDEQ, + ObjectIdGetDatum(valueid)); + + /* + * No additional condition if fetching all chunks. Otherwise, use an + * equality condition for one chunk, and a range condition otherwise. 
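+ *
+ * For example, with the default 8 kB block size (TOAST_MAX_CHUNK_SIZE is
+ * then 1996 bytes), a slice with sliceoffset = 5000 and slicelength = 100
+ * gives startchunk = endchunk = 2, so the equality condition on chunk_seq
+ * is used.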
+ */ + if (startchunk == 0 && endchunk == totalchunks - 1) + nscankeys = 1; + else if (startchunk == endchunk) + { + ScanKeyInit(&toastkey[1], + (AttrNumber) 2, + BTEqualStrategyNumber, F_INT4EQ, + Int32GetDatum(startchunk)); + nscankeys = 2; + } + else + { + ScanKeyInit(&toastkey[1], + (AttrNumber) 2, + BTGreaterEqualStrategyNumber, F_INT4GE, + Int32GetDatum(startchunk)); + ScanKeyInit(&toastkey[2], + (AttrNumber) 2, + BTLessEqualStrategyNumber, F_INT4LE, + Int32GetDatum(endchunk)); + nscankeys = 3; + } + + /* Prepare for scan */ + init_toast_snapshot(&SnapshotToast); + toastscan = systable_beginscan_ordered(toastrel, toastidxs[validIndex], + &SnapshotToast, nscankeys, toastkey); + + memcpy(iv_prefix, &valueid, sizeof(Oid)); + + /* + * Read the chunks by index + * + * The index is on (valueid, chunkidx) so they will come in order + */ + expectedchunk = startchunk; + while ((ttup = systable_getnext_ordered(toastscan, ForwardScanDirection)) != NULL) + { + int32 curchunk; + Pointer chunk; + bool isnull; + char *chunkdata; + int32 chunksize; + int32 expected_size; + int32 chcpystrt; + int32 chcpyend; + int32 encrypt_offset; + + /* + * Have a chunk, extract the sequence number and the data + */ + curchunk = DatumGetInt32(fastgetattr(ttup, 2, toasttupDesc, &isnull)); + Assert(!isnull); + chunk = DatumGetPointer(fastgetattr(ttup, 3, toasttupDesc, &isnull)); + Assert(!isnull); + if (!VARATT_IS_EXTENDED(chunk)) + { + chunksize = VARSIZE(chunk) - VARHDRSZ; + chunkdata = VARDATA(chunk); + } + else if (VARATT_IS_SHORT(chunk)) + { + /* could happen due to tdeheap_form_tuple doing its thing */ + chunksize = VARSIZE_SHORT(chunk) - VARHDRSZ_SHORT; + chunkdata = VARDATA_SHORT(chunk); + } + else + { + /* should never happen */ + elog(ERROR, "found toasted toast chunk for toast value %u in %s", + valueid, RelationGetRelationName(toastrel)); + chunksize = 0; /* keep compiler quiet */ + chunkdata = NULL; + } + + /* + * Some checks on the data we've found + */ + if (curchunk != expectedchunk) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("unexpected chunk number %d (expected %d) for toast value %u in %s", + curchunk, expectedchunk, valueid, + RelationGetRelationName(toastrel)))); + if (curchunk > endchunk) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("unexpected chunk number %d (out of range %d..%d) for toast value %u in %s", + curchunk, + startchunk, endchunk, valueid, + RelationGetRelationName(toastrel)))); + expected_size = curchunk < totalchunks - 1 ? TOAST_MAX_CHUNK_SIZE + : attrsize - ((totalchunks - 1) * TOAST_MAX_CHUNK_SIZE); + if (chunksize != expected_size) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("unexpected chunk size %d (expected %d) in chunk %d of %d for toast value %u in %s", + chunksize, expected_size, + curchunk, totalchunks, valueid, + RelationGetRelationName(toastrel)))); + + /* + * Copy the data into proper place in our result + */ + chcpystrt = 0; + chcpyend = chunksize - 1; + if (curchunk == startchunk) + chcpystrt = sliceoffset % TOAST_MAX_CHUNK_SIZE; + if (curchunk == endchunk) + chcpyend = (sliceoffset + slicelength - 1) % TOAST_MAX_CHUNK_SIZE; + + /* + * If TOAST is compressed, the first TDE_TOAST_COMPRESS_HEADER_SIZE (4 bytes) is + * not encrypted and contains compression info. It should be added to the + * result as it is and the rest should be decrypted. 
The encryption offset in
+ * that case is 0 for the first chunk (although the encrypted data starts
+ * at offset TDE_TOAST_COMPRESS_HEADER_SIZE, we encrypted it without the
+ * compression header) and `chunk start offset - 4` for the subsequent
+ * chunks.
+ */
+ encrypt_offset = chcpystrt;
+ if (VARATT_IS_COMPRESSED(result))
+ {
+ if (curchunk == 0)
+ {
+ memcpy(VARDATA(result), chunkdata + chcpystrt, TDE_TOAST_COMPRESS_HEADER_SIZE);
+ chcpystrt += TDE_TOAST_COMPRESS_HEADER_SIZE;
+ }
+ else
+ encrypt_offset -= TDE_TOAST_COMPRESS_HEADER_SIZE;
+ }
+
+ /* Decrypt the data chunk by chunk here */
+
+ PG_TDE_DECRYPT_DATA(iv_prefix, (curchunk * TOAST_MAX_CHUNK_SIZE) + encrypt_offset,
+ chunkdata + chcpystrt,
+ (chcpyend - chcpystrt) + 1,
+ decrypted_data, key);
+
+ memcpy(VARDATA(result) +
+ (curchunk * TOAST_MAX_CHUNK_SIZE - sliceoffset) + chcpystrt,
+ decrypted_data,
+ (chcpyend - chcpystrt) + 1);
+
+ expectedchunk++;
+ }
+
+ /*
+ * Final checks that we successfully fetched the datum
+ */
+ if (expectedchunk != (endchunk + 1))
+ ereport(ERROR,
+ (errcode(ERRCODE_DATA_CORRUPTED),
+ errmsg_internal("missing chunk number %d for toast value %u in %s",
+ expectedchunk, valueid,
+ RelationGetRelationName(toastrel))));
+
+ /* End scan and close indexes. */
+ systable_endscan_ordered(toastscan);
+ toast_close_indexes(toastidxs, num_indexes, AccessShareLock);
+}
+
+/* TODO: these should be in their own file so we can properly autoupdate them */
+
+/* pg_tde extension */
+static void
+tdeheap_toast_encrypt(Pointer dval, Oid valueid, RelKeyData *key)
+{
+ int32 data_size = 0;
+ char *data_p;
+ char *encrypted_data;
+ char iv_prefix[16] = {0,};
+
+ /*
+ * Pick data_p and data_size so that we avoid encrypting the compression
+ * info.
+ * See https://github.com/percona/pg_tde/commit/dee6e357ef05d217a4c4df131249a80e5e909163
+ */
+ if (VARATT_IS_SHORT(dval))
+ {
+ data_p = VARDATA_SHORT(dval);
+ data_size = VARSIZE_SHORT(dval) - VARHDRSZ_SHORT;
+ }
+ else if (VARATT_IS_COMPRESSED(dval))
+ {
+ data_p = VARDATA_4B_C(dval);
+ data_size = VARSIZE(dval) - VARHDRSZ_COMPRESSED;
+ }
+ else
+ {
+ data_p = VARDATA(dval);
+ data_size = VARSIZE(dval) - VARHDRSZ;
+ }
+
+ /* Now encrypt the data and replace it in place */
+ encrypted_data = (char *) palloc(data_size);
+
+ memcpy(iv_prefix, &valueid, sizeof(Oid));
+ PG_TDE_ENCRYPT_DATA(iv_prefix, 0, data_p, data_size, encrypted_data, key);
+
+ memcpy(data_p, encrypted_data, data_size);
+ pfree(encrypted_data);
+}
+
+/*
+ * Move an attribute to external storage.
+ *
+ * Copied from PG src/backend/access/table/toast_helper.c
+ */
+static void
+tdeheap_toast_tuple_externalize(ToastTupleContext *ttc, int attribute, int options)
+{
+ Datum *value = &ttc->ttc_values[attribute];
+ Datum old_value = *value;
+ ToastAttrInfo *attr = &ttc->ttc_attr[attribute];
+
+ attr->tai_colflags |= TOASTCOL_IGNORE;
+ *value = tdeheap_toast_save_datum(ttc->ttc_rel, old_value, attr->tai_oldexternal,
+ options);
+ if ((attr->tai_colflags & TOASTCOL_NEEDS_FREE) != 0)
+ pfree(DatumGetPointer(old_value));
+ attr->tai_colflags |= TOASTCOL_NEEDS_FREE;
+ ttc->ttc_flags |= (TOAST_NEEDS_CHANGE | TOAST_NEEDS_FREE);
+}
+
+/* ----------
+ * tdeheap_toast_save_datum -
+ *
+ * Save a single datum into the secondary relation and return
+ * a Datum reference for it.
+ * It also encrypts the toasted data.
+ *
+ * rel: the main relation we're working with (not the toast rel!)
+ * value: datum to be pushed to toast storage + * oldexternal: if not NULL, toast pointer previously representing the datum + * options: options to be passed to tdeheap_insert() for toast rows + * + * based on toast_save_datum from PG src/backend/access/common/toast_internals.c + * ---------- + */ +static Datum +tdeheap_toast_save_datum(Relation rel, Datum value, + struct varlena *oldexternal, int options) +{ + Relation toastrel; + Relation *toastidxs; + HeapTuple toasttup; + TupleDesc toasttupDesc; + Datum t_values[3]; + bool t_isnull[3]; + CommandId mycid = GetCurrentCommandId(true); + struct varlena *result; + struct varatt_external toast_pointer; + union + { + struct varlena hdr; + /* this is to make the union big enough for a chunk: */ + char data[TOAST_MAX_CHUNK_SIZE + VARHDRSZ]; + /* ensure union is aligned well enough: */ + int32 align_it; + } chunk_data; + int32 chunk_size; + int32 chunk_seq = 0; + char *data_p; + int32 data_todo; + Pointer dval = DatumGetPointer(value); + int num_indexes; + int validIndex; + + + Assert(!VARATT_IS_EXTERNAL(value)); + + /* + * Open the toast relation and its indexes. We can use the index to check + * uniqueness of the OID we assign to the toasted item, even though it has + * additional columns besides OID. + */ + toastrel = table_open(rel->rd_rel->reltoastrelid, RowExclusiveLock); + toasttupDesc = toastrel->rd_att; + + /* Open all the toast indexes and look for the valid one */ + validIndex = toast_open_indexes(toastrel, + RowExclusiveLock, + &toastidxs, + &num_indexes); + + /* + * Get the data pointer and length, and compute va_rawsize and va_extinfo. + * + * va_rawsize is the size of the equivalent fully uncompressed datum, so + * we have to adjust for short headers. + * + * va_extinfo stored the actual size of the data payload in the toast + * records and the compression method in first 2 bits if data is + * compressed. + */ + if (VARATT_IS_SHORT(dval)) + { + data_p = VARDATA_SHORT(dval); + data_todo = VARSIZE_SHORT(dval) - VARHDRSZ_SHORT; + toast_pointer.va_rawsize = data_todo + VARHDRSZ; /* as if not short */ + toast_pointer.va_extinfo = data_todo; + } + else if (VARATT_IS_COMPRESSED(dval)) + { + data_p = VARDATA(dval); + data_todo = VARSIZE(dval) - VARHDRSZ; + /* rawsize in a compressed datum is just the size of the payload */ + toast_pointer.va_rawsize = VARDATA_COMPRESSED_GET_EXTSIZE(dval) + VARHDRSZ; + + /* set external size and compression method */ + VARATT_EXTERNAL_SET_SIZE_AND_COMPRESS_METHOD(toast_pointer, data_todo, + VARDATA_COMPRESSED_GET_COMPRESS_METHOD(dval)); + /* Assert that the numbers look like it's compressed */ + Assert(VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer)); + } + else + { + data_p = VARDATA(dval); + data_todo = VARSIZE(dval) - VARHDRSZ; + toast_pointer.va_rawsize = VARSIZE(dval); + toast_pointer.va_extinfo = data_todo; + } + + /* + * Insert the correct table OID into the result TOAST pointer. + * + * Normally this is the actual OID of the target toast table, but during + * table-rewriting operations such as CLUSTER, we have to insert the OID + * of the table's real permanent toast table instead. rd_toastoid is set + * if we have to substitute such an OID. + */ + if (OidIsValid(rel->rd_toastoid)) + toast_pointer.va_toastrelid = rel->rd_toastoid; + else + toast_pointer.va_toastrelid = RelationGetRelid(toastrel); + + /* + * Choose an OID to use as the value ID for this toast value. + * + * Normally we just choose an unused OID within the toast table. 
But
+ * during table-rewriting operations where we are preserving an existing
+ * toast table OID, we want to preserve toast value OIDs too. So, if
+ * rd_toastoid is set and we had a prior external value from that same
+ * toast table, re-use its value ID. If we didn't have a prior external
+ * value (which is a corner case, but possible if the table's attstorage
+ * options have been changed), we have to pick a value ID that doesn't
+ * conflict with either new or existing toast value OIDs.
+ */
+ if (!OidIsValid(rel->rd_toastoid))
+ {
+ /* normal case: just choose an unused OID */
+ toast_pointer.va_valueid =
+ GetNewOidWithIndex(toastrel,
+ RelationGetRelid(toastidxs[validIndex]),
+ (AttrNumber) 1);
+ }
+ else
+ {
+ /* rewrite case: check to see if value was in old toast table */
+ toast_pointer.va_valueid = InvalidOid;
+ if (oldexternal != NULL)
+ {
+ struct varatt_external old_toast_pointer;
+
+ Assert(VARATT_IS_EXTERNAL_ONDISK(oldexternal));
+ /* Must copy to access aligned fields */
+ VARATT_EXTERNAL_GET_POINTER(old_toast_pointer, oldexternal);
+ if (old_toast_pointer.va_toastrelid == rel->rd_toastoid)
+ {
+ /* This value came from the old toast table; reuse its OID */
+ toast_pointer.va_valueid = old_toast_pointer.va_valueid;
+
+ /*
+ * There is a corner case here: the table rewrite might have
+ * to copy both live and recently-dead versions of a row, and
+ * those versions could easily reference the same toast value.
+ * When we copy the second or later version of such a row,
+ * reusing the OID will mean we select an OID that's already
+ * in the new toast table. Check for that, and if so, just
+ * fall through without writing the data again.
+ *
+ * While annoying and ugly-looking, this is a good thing
+ * because it ensures that we wind up with only one copy of
+ * the toast value when there is only one copy in the old
+ * toast table. Before we detected this case, we'd have made
+ * multiple copies, wasting space; and what's worse, the
+ * copies belonging to already-deleted heap tuples would not
+ * be reclaimed by VACUUM.
+ */
+ if (toastrel_valueid_exists(toastrel,
+ toast_pointer.va_valueid))
+ {
+ /* Match, so short-circuit the data storage loop below */
+ data_todo = 0;
+ }
+ }
+ }
+ if (toast_pointer.va_valueid == InvalidOid)
+ {
+ /*
+ * new value; must choose an OID that doesn't conflict in either
+ * old or new toast table
+ */
+ do
+ {
+ toast_pointer.va_valueid =
+ GetNewOidWithIndex(toastrel,
+ RelationGetRelid(toastidxs[validIndex]),
+ (AttrNumber) 1);
+ } while (toastid_valueid_exists(rel->rd_toastoid,
+ toast_pointer.va_valueid));
+ }
+ }
+
+ /*
+ * Encrypt toast data.
+ */
+ tdeheap_toast_encrypt(dval, toast_pointer.va_valueid, GetHeapBaiscRelationKey(toastrel->rd_locator));
+
+ /*
+ * Initialize constant parts of the tuple data
+ */
+ t_values[0] = ObjectIdGetDatum(toast_pointer.va_valueid);
+ t_values[2] = PointerGetDatum(&chunk_data);
+ t_isnull[0] = false;
+ t_isnull[1] = false;
+ t_isnull[2] = false;
+
+ /*
+ * Split up the item into chunks
+ */
+ while (data_todo > 0)
+ {
+ int i;
+
+ CHECK_FOR_INTERRUPTS();
+
+ /*
+ * Calculate the size of this chunk
+ */
+ chunk_size = Min(TOAST_MAX_CHUNK_SIZE, data_todo);
+
+ /*
+ * Build a tuple and store it
+ */
+ t_values[1] = Int32GetDatum(chunk_seq++);
+ SET_VARSIZE(&chunk_data, chunk_size + VARHDRSZ);
+ memcpy(VARDATA(&chunk_data), data_p, chunk_size);
+ toasttup = tdeheap_form_tuple(toasttupDesc, t_values, t_isnull);
+
+ /*
+ * The tuple should be inserted without encrypting it again:
+ * the TOAST data is already encrypted.
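+ * (HEAP_INSERT_TDE_NO_ENCRYPT, defined in pg_tdeam.h, makes
+ * tdeheap_insert skip its usual TDE encryption; presumably this avoids
+ * encrypting the already-encrypted chunk data a second time.)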
+ */ + options |= HEAP_INSERT_TDE_NO_ENCRYPT; + tdeheap_insert(toastrel, toasttup, mycid, options, NULL); + + /* + * Create the index entry. We cheat a little here by not using + * FormIndexDatum: this relies on the knowledge that the index columns + * are the same as the initial columns of the table for all the + * indexes. We also cheat by not providing an IndexInfo: this is okay + * for now because btree doesn't need one, but we might have to be + * more honest someday. + * + * Note also that there had better not be any user-created index on + * the TOAST table, since we don't bother to update anything else. + */ + for (i = 0; i < num_indexes; i++) + { + /* Only index relations marked as ready can be updated */ + if (toastidxs[i]->rd_index->indisready) + index_insert(toastidxs[i], t_values, t_isnull, + &(toasttup->t_self), + toastrel, + toastidxs[i]->rd_index->indisunique ? + UNIQUE_CHECK_YES : UNIQUE_CHECK_NO, + false, NULL); + } + + /* + * Free memory + */ + tdeheap_freetuple(toasttup); + + /* + * Move on to next chunk + */ + data_todo -= chunk_size; + data_p += chunk_size; + } + + /* + * Done - close toast relation and its indexes but keep the lock until + * commit, so as a concurrent reindex done directly on the toast relation + * would be able to wait for this transaction. + */ + toast_close_indexes(toastidxs, num_indexes, NoLock); + table_close(toastrel, NoLock); + + /* + * Create the TOAST pointer value that we'll return + */ + result = (struct varlena *) palloc(TOAST_POINTER_SIZE); + SET_VARTAG_EXTERNAL(result, VARTAG_ONDISK); + memcpy(VARDATA_EXTERNAL(result), &toast_pointer, sizeof(toast_pointer)); + + return PointerGetDatum(result); +} + +/* ---------- + * toastrel_valueid_exists - + * + * Test whether a toast value with the given ID exists in the toast relation. + * For safety, we consider a value to exist if there are either live or dead + * toast rows with that ID; see notes for GetNewOidWithIndex(). + * + * copy from PG src/backend/access/common/toast_internals.c + * ---------- + */ +static bool +toastrel_valueid_exists(Relation toastrel, Oid valueid) +{ + bool result = false; + ScanKeyData toastkey; + SysScanDesc toastscan; + int num_indexes; + int validIndex; + Relation *toastidxs; + + /* Fetch a valid index relation */ + validIndex = toast_open_indexes(toastrel, + RowExclusiveLock, + &toastidxs, + &num_indexes); + + /* + * Setup a scan key to find chunks with matching va_valueid + */ + ScanKeyInit(&toastkey, + (AttrNumber) 1, + BTEqualStrategyNumber, F_OIDEQ, + ObjectIdGetDatum(valueid)); + + /* + * Is there any such chunk? 
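+ *
+ * (Note we scan with SnapshotAny below: per the function header, dead
+ * rows count as existing too, so their value IDs are not reused.)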
+ */ + toastscan = systable_beginscan(toastrel, + RelationGetRelid(toastidxs[validIndex]), + true, SnapshotAny, 1, &toastkey); + + if (systable_getnext(toastscan) != NULL) + result = true; + + systable_endscan(toastscan); + + /* Clean up */ + toast_close_indexes(toastidxs, num_indexes, RowExclusiveLock); + + return result; +} + +/* ---------- + * toastid_valueid_exists - + * + * As above, but work from toast rel's OID not an open relation + * + * copy from PG src/backend/access/common/toast_internals.c + * ---------- + */ +static bool +toastid_valueid_exists(Oid toastrelid, Oid valueid) +{ + bool result; + Relation toastrel; + + toastrel = table_open(toastrelid, AccessShareLock); + + result = toastrel_valueid_exists(toastrel, valueid); + + table_close(toastrel, AccessShareLock); + + return result; +} diff --git a/contrib/pg_tde/src17/include/access/pg_tde_io.h b/contrib/pg_tde/src17/include/access/pg_tde_io.h new file mode 100644 index 00000000000..58fccc3fcf2 --- /dev/null +++ b/contrib/pg_tde/src17/include/access/pg_tde_io.h @@ -0,0 +1,62 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_io.h + * POSTGRES heap access method input/output definitions. + * + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/hio.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_IO_H +#define PG_TDE_IO_H + +#include "access/htup.h" +#include "storage/buf.h" +#include "utils/relcache.h" + +/* + * state for bulk inserts --- private to heapam.c and hio.c + * + * If current_buf isn't InvalidBuffer, then we are holding an extra pin + * on that buffer. + * + * "typedef struct BulkInsertStateData *BulkInsertState" is in heapam.h + */ +typedef struct BulkInsertStateData +{ + BufferAccessStrategy strategy; /* our BULKWRITE strategy object */ + Buffer current_buf; /* current insertion target page */ + + /* + * State for bulk extensions. + * + * last_free..next_free are further pages that were unused at the time of + * the last extension. They might be in use by the time we use them + * though, so rechecks are needed. + * + * XXX: Eventually these should probably live in RelationData instead, + * alongside targetblock. + * + * already_extended_by is the number of pages that this bulk inserted + * extended by. If we already extended by a significant number of pages, + * we can be more aggressive about extending going forward. 
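+ *
+ * A hypothetical multi-row caller would use the state roughly as:
+ *
+ *     BulkInsertState bistate = GetBulkInsertState();
+ *     for each tuple:
+ *         tdeheap_insert(rel, tup, mycid, options, bistate);
+ *     FreeBulkInsertState(bistate);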
+ */ + BlockNumber next_free; + BlockNumber last_free; + uint32 already_extended_by; +} BulkInsertStateData; + + +extern void tdeheap_RelationPutHeapTuple(Relation relation, Buffer buffer, + HeapTuple tuple, bool encrypt, bool token); +extern Buffer tdeheap_RelationGetBufferForTuple(Relation relation, Size len, + Buffer otherBuffer, int options, + BulkInsertStateData *bistate, + Buffer *vmbuffer, Buffer *vmbuffer_other, + int num_pages); + +#endif /* PG_TDE_IO_H */ diff --git a/contrib/pg_tde/src17/include/access/pg_tde_rewrite.h b/contrib/pg_tde/src17/include/access/pg_tde_rewrite.h new file mode 100644 index 00000000000..8f03d442e66 --- /dev/null +++ b/contrib/pg_tde/src17/include/access/pg_tde_rewrite.h @@ -0,0 +1,57 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_rewrite.h + * Declarations for heap rewrite support functions + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994-5, Regents of the University of California + * + * src/include/access/rewriteheap.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_REWRITE_H +#define PG_TDE_REWRITE_H + +#include "access/htup.h" +#include "storage/itemptr.h" +#include "storage/relfilelocator.h" +#include "utils/relcache.h" + +/* struct definition is private to rewriteheap.c */ +typedef struct RewriteStateData *RewriteState; + +extern RewriteState begin_tdeheap_rewrite(Relation old_heap, Relation new_heap, + TransactionId oldest_xmin, TransactionId freeze_xid, + MultiXactId cutoff_multi); +extern void end_tdeheap_rewrite(RewriteState state); +extern void rewrite_tdeheap_tuple(RewriteState state, HeapTuple old_tuple, + HeapTuple new_tuple); +extern bool rewrite_tdeheap_dead_tuple(RewriteState state, HeapTuple old_tuple); + +/* + * On-Disk data format for an individual logical rewrite mapping. 
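+ *
+ * Read as: the tuple that was at old_tid in old_locator now lives at
+ * new_tid in new_locator (e.g., hypothetically, (10,3) -> (2,1)).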
+ */ +typedef struct LogicalRewriteMappingData +{ + RelFileLocator old_locator; + RelFileLocator new_locator; + ItemPointerData old_tid; + ItemPointerData new_tid; +} LogicalRewriteMappingData; + +/* --- + * The filename consists of the following, dash separated, + * components: + * 1) database oid or InvalidOid for shared relations + * 2) the oid of the relation + * 3) upper 32bit of the LSN at which a rewrite started + * 4) lower 32bit of the LSN at which a rewrite started + * 5) xid we are mapping for + * 6) xid of the xact performing the mapping + * --- + */ +#define LOGICAL_REWRITE_FORMAT "map-%x-%x-%X_%X-%x-%x" +extern void CheckPointLogicalRewriteHeap(void); + +#endif /* PG_TDE_REWRITE_H */ diff --git a/contrib/pg_tde/src17/include/access/pg_tde_visibilitymap.h b/contrib/pg_tde/src17/include/access/pg_tde_visibilitymap.h new file mode 100644 index 00000000000..1d47403ee14 --- /dev/null +++ b/contrib/pg_tde/src17/include/access/pg_tde_visibilitymap.h @@ -0,0 +1,42 @@ +/*------------------------------------------------------------------------- + * + * tdeheap_visibilitymap.h + * visibility map interface + * + * + * Portions Copyright (c) 2007-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/pg_tde_visibilitymap.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_VISIBILITYMAP_H +#define PG_TDE_VISIBILITYMAP_H + +#include "access/visibilitymapdefs.h" +#include "access/xlogdefs.h" +#include "storage/block.h" +#include "storage/buf.h" +#include "utils/relcache.h" + +/* Macros for visibilitymap test */ +#define VM_ALL_VISIBLE(r, b, v) \ + ((tdeheap_visibilitymap_get_status((r), (b), (v)) & VISIBILITYMAP_ALL_VISIBLE) != 0) +#define VM_ALL_FROZEN(r, b, v) \ + ((tdeheap_visibilitymap_get_status((r), (b), (v)) & VISIBILITYMAP_ALL_FROZEN) != 0) + +extern bool tdeheap_visibilitymap_clear(Relation rel, BlockNumber heapBlk, + Buffer vmbuf, uint8 flags); +extern void tdeheap_visibilitymap_pin(Relation rel, BlockNumber heapBlk, + Buffer *vmbuf); +extern bool tdeheap_visibilitymap_pin_ok(BlockNumber heapBlk, Buffer vmbuf); +extern void tdeheap_visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf, + XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid, + uint8 flags); +extern uint8 tdeheap_visibilitymap_get_status(Relation rel, BlockNumber heapBlk, Buffer *vmbuf); +extern void tdeheap_visibilitymap_count(Relation rel, BlockNumber *all_visible, BlockNumber *all_frozen); +extern BlockNumber tdeheap_visibilitymap_prepare_truncate(Relation rel, + BlockNumber nheapblocks); + +#endif /* PG_TDE_VISIBILITYMAP_H */ diff --git a/contrib/pg_tde/src17/include/access/pg_tdeam.h b/contrib/pg_tde/src17/include/access/pg_tdeam.h new file mode 100644 index 00000000000..bf7ec9b3ec0 --- /dev/null +++ b/contrib/pg_tde/src17/include/access/pg_tdeam.h @@ -0,0 +1,432 @@ +/*------------------------------------------------------------------------- + * + * pg_tdeam.h + * POSTGRES heap access method definitions. 
+ * + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/heapam.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDEAM_H +#define PG_TDEAM_H + +#include "access/relation.h" /* for backward compatibility */ +#include "access/relscan.h" +#include "access/sdir.h" +#include "access/skey.h" +#include "access/table.h" /* for backward compatibility */ +#include "access/tableam.h" +#include "nodes/lockoptions.h" +#include "nodes/primnodes.h" +#include "storage/bufpage.h" +#include "storage/dsm.h" +#include "storage/lockdefs.h" +#include "storage/read_stream.h" +#include "storage/shm_toc.h" +#include "utils/relcache.h" +#include "utils/snapshot.h" + + +/* "options" flag bits for tdeheap_insert */ +#define HEAP_INSERT_SKIP_FSM TABLE_INSERT_SKIP_FSM +#define HEAP_INSERT_FROZEN TABLE_INSERT_FROZEN +#define HEAP_INSERT_NO_LOGICAL TABLE_INSERT_NO_LOGICAL +#define HEAP_INSERT_SPECULATIVE 0x0010 +#define HEAP_INSERT_TDE_NO_ENCRYPT 0x2000 /* to specify rare cases when NO TDE enc */ + +/* "options" flag bits for tdeheap_page_prune_and_freeze */ +#define HEAP_PAGE_PRUNE_MARK_UNUSED_NOW (1 << 0) +#define HEAP_PAGE_PRUNE_FREEZE (1 << 1) + +typedef struct BulkInsertStateData *BulkInsertState; +struct TupleTableSlot; +struct VacuumCutoffs; + +#define MaxLockTupleMode LockTupleExclusive + +/* + * Descriptor for heap table scans. + */ +typedef struct HeapScanDescData +{ + TableScanDescData rs_base; /* AM independent part of the descriptor */ + + /* state set up at initscan time */ + BlockNumber rs_nblocks; /* total number of blocks in rel */ + BlockNumber rs_startblock; /* block # to start at */ + BlockNumber rs_numblocks; /* max number of blocks to scan */ + /* rs_numblocks is usually InvalidBlockNumber, meaning "scan whole rel" */ + + /* scan current state */ + bool rs_inited; /* false = scan not init'd yet */ + OffsetNumber rs_coffset; /* current offset # in non-page-at-a-time mode */ + BlockNumber rs_cblock; /* current block # in scan, if any */ + Buffer rs_cbuf; /* current buffer in scan, if any */ + /* NB: if rs_cbuf is not InvalidBuffer, we hold a pin on that buffer */ + + BufferAccessStrategy rs_strategy; /* access strategy for reads */ + + HeapTupleData rs_ctup; /* current tuple in scan, if any */ + + /* For scans that stream reads */ + ReadStream *rs_read_stream; + + /* + * For sequential scans and TID range scans to stream reads. The read + * stream is allocated at the beginning of the scan and reset on rescan or + * when the scan direction changes. The scan direction is saved each time + * a new page is requested. If the scan direction changes from one page to + * the next, the read stream releases all previously pinned buffers and + * resets the prefetch block. + */ + ScanDirection rs_dir; + BlockNumber rs_prefetch_block; + + /* + * For parallel scans to store page allocation data. NULL when not + * performing a parallel scan. + */ + ParallelBlockTableScanWorkerData *rs_parallelworkerdata; + + /* + * These fields are only used for bitmap scans for the "skip fetch" + * optimization. Bitmap scans needing no fields from the heap may skip + * fetching an all visible block, instead using the number of tuples per + * block reported by the bitmap to determine how many NULL-filled tuples + * to return. 
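+ * (For example, a bitmap heap scan feeding a plain count(*) needs no
+ * heap columns, so an all-visible page need not be fetched at all.)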
+ */ + Buffer rs_vmbuffer; + int rs_empty_tuples_pending; + + /* these fields only used in page-at-a-time mode and for bitmap scans */ + int rs_cindex; /* current tuple's index in vistuples */ + int rs_ntuples; /* number of visible tuples on page */ + OffsetNumber rs_vistuples[MaxHeapTuplesPerPage]; /* their offsets */ +} HeapScanDescData; +typedef struct HeapScanDescData *HeapScanDesc; + +/* + * Descriptor for fetches from heap via an index. + */ +typedef struct IndexFetchHeapData +{ + IndexFetchTableData xs_base; /* AM independent part of the descriptor */ + + Buffer xs_cbuf; /* current heap buffer in scan, if any */ + /* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */ +} IndexFetchHeapData; + +/* Result codes for HeapTupleSatisfiesVacuum */ +typedef enum +{ + HEAPTUPLE_DEAD, /* tuple is dead and deletable */ + HEAPTUPLE_LIVE, /* tuple is live (committed, no deleter) */ + HEAPTUPLE_RECENTLY_DEAD, /* tuple is dead, but not deletable yet */ + HEAPTUPLE_INSERT_IN_PROGRESS, /* inserting xact is still in progress */ + HEAPTUPLE_DELETE_IN_PROGRESS, /* deleting xact is still in progress */ +} HTSV_Result; + +/* + * tdeheap_prepare_freeze_tuple may request that tdeheap_freeze_execute_prepared + * check any tuple's to-be-frozen xmin and/or xmax status using pg_xact + */ +#define HEAP_FREEZE_CHECK_XMIN_COMMITTED 0x01 +#define HEAP_FREEZE_CHECK_XMAX_ABORTED 0x02 + +/* tdeheap_prepare_freeze_tuple state describing how to freeze a tuple */ +typedef struct HeapTupleFreeze +{ + /* Fields describing how to process tuple */ + TransactionId xmax; + uint16 t_infomask2; + uint16 t_infomask; + uint8 frzflags; + + /* xmin/xmax check flags */ + uint8 checkflags; + /* Page offset number for tuple */ + OffsetNumber offset; +} HeapTupleFreeze; + +/* + * State used by VACUUM to track the details of freezing all eligible tuples + * on a given heap page. + * + * VACUUM prepares freeze plans for each page via tdeheap_prepare_freeze_tuple + * calls (every tuple with storage gets its own call). This page-level freeze + * state is updated across each call, which ultimately determines whether or + * not freezing the page is required. + * + * Aside from the basic question of whether or not freezing will go ahead, the + * state also tracks the oldest extant XID/MXID in the table as a whole, for + * the purposes of advancing relfrozenxid/relminmxid values in pg_class later + * on. Each tdeheap_prepare_freeze_tuple call pushes NewRelfrozenXid and/or + * NewRelminMxid back as required to avoid unsafe final pg_class values. Any + * and all unfrozen XIDs or MXIDs that remain after VACUUM finishes _must_ + * have values >= the final relfrozenxid/relminmxid values in pg_class. This + * includes XIDs that remain as MultiXact members from any tuple's xmax. + * + * When 'freeze_required' flag isn't set after all tuples are examined, the + * final choice on freezing is made by vacuumlazy.c. It can decide to trigger + * freezing based on whatever criteria it deems appropriate. However, it is + * recommended that vacuumlazy.c avoid early freezing when freezing does not + * enable setting the target page all-frozen in the visibility map afterwards. + */ +typedef struct HeapPageFreeze +{ + /* Is tdeheap_prepare_freeze_tuple caller required to freeze page? */ + bool freeze_required; + + /* + * "Freeze" NewRelfrozenXid/NewRelminMxid trackers. + * + * Trackers used when tdeheap_freeze_execute_prepared freezes, or when there + * are zero freeze plans for a page. 
It is always valid for vacuumlazy.c + * to freeze any page, by definition. This even includes pages that have + * no tuples with storage to consider in the first place. That way the + * 'totally_frozen' results from tdeheap_prepare_freeze_tuple can always be + * used in the same way, even when no freeze plans need to be executed to + * "freeze the page". Only the "freeze" path needs to consider the need + * to set pages all-frozen in the visibility map under this scheme. + * + * When we freeze a page, we generally freeze all XIDs < OldestXmin, only + * leaving behind XIDs that are ineligible for freezing, if any. And so + * you might wonder why these trackers are necessary at all; why should + * _any_ page that VACUUM freezes _ever_ be left with XIDs/MXIDs that + * ratchet back the top-level NewRelfrozenXid/NewRelminMxid trackers? + * + * It is useful to use a definition of "freeze the page" that does not + * overspecify how MultiXacts are affected. tdeheap_prepare_freeze_tuple + * generally prefers to remove Multis eagerly, but lazy processing is used + * in cases where laziness allows VACUUM to avoid allocating a new Multi. + * The "freeze the page" trackers enable this flexibility. + */ + TransactionId FreezePageRelfrozenXid; + MultiXactId FreezePageRelminMxid; + + /* + * "No freeze" NewRelfrozenXid/NewRelminMxid trackers. + * + * These trackers are maintained in the same way as the trackers used when + * VACUUM scans a page that isn't cleanup locked. Both code paths are + * based on the same general idea (do less work for this page during the + * ongoing VACUUM, at the cost of having to accept older final values). + */ + TransactionId NoFreezePageRelfrozenXid; + MultiXactId NoFreezePageRelminMxid; + +} HeapPageFreeze; + +/* + * Per-page state returned by tdeheap_page_prune_and_freeze() + */ +typedef struct PruneFreezeResult +{ + int ndeleted; /* Number of tuples deleted from the page */ + int nnewlpdead; /* Number of newly LP_DEAD items */ + int nfrozen; /* Number of tuples we froze */ + + /* Number of live and recently dead tuples on the page, after pruning */ + int live_tuples; + int recently_dead_tuples; + + /* + * all_visible and all_frozen indicate if the all-visible and all-frozen + * bits in the visibility map can be set for this page, after pruning. + * + * vm_conflict_horizon is the newest xmin of live tuples on the page. The + * caller can use it as the conflict horizon when setting the VM bits. It + * is only valid if we froze some tuples (nfrozen > 0), and all_frozen is + * true. + * + * These are only set if the HEAP_PRUNE_FREEZE option is set. + */ + bool all_visible; + bool all_frozen; + TransactionId vm_conflict_horizon; + + /* + * Whether or not the page makes rel truncation unsafe. This is set to + * 'true', even if the page contains LP_DEAD items. VACUUM will remove + * them before attempting to truncate. + */ + bool hastup; + + /* + * LP_DEAD items on the page after pruning. Includes existing LP_DEAD + * items. 
+ */ + int lpdead_items; + OffsetNumber deadoffsets[MaxHeapTuplesPerPage]; +} PruneFreezeResult; + +/* 'reason' codes for tdeheap_page_prune_and_freeze() */ +typedef enum +{ + PRUNE_ON_ACCESS, /* on-access pruning */ + PRUNE_VACUUM_SCAN, /* VACUUM 1st heap pass */ + PRUNE_VACUUM_CLEANUP, /* VACUUM 2nd heap pass */ +} PruneReason; + +/* ---------------- + * function prototypes for heap access method + * + * tdeheap_create, tdeheap_create_with_catalog, and tdeheap_drop_with_catalog + * are declared in catalog/heap.h + * ---------------- + */ + + +/* + * HeapScanIsValid + * True iff the heap scan is valid. + */ +#define HeapScanIsValid(scan) PointerIsValid(scan) + +extern TableScanDesc tdeheap_beginscan(Relation relation, Snapshot snapshot, + int nkeys, ScanKey key, + ParallelTableScanDesc parallel_scan, + uint32 flags); +extern void tdeheap_setscanlimits(TableScanDesc sscan, BlockNumber startBlk, + BlockNumber numBlks); +extern void tdeheap_prepare_pagescan(TableScanDesc sscan); +extern void tdeheap_rescan(TableScanDesc sscan, ScanKey key, bool set_params, + bool allow_strat, bool allow_sync, bool allow_pagemode); +extern void tdeheap_endscan(TableScanDesc sscan); +extern HeapTuple tdeheap_getnext(TableScanDesc sscan, ScanDirection direction); +extern bool tdeheap_getnextslot(TableScanDesc sscan, + ScanDirection direction, struct TupleTableSlot *slot); +extern void tdeheap_set_tidrange(TableScanDesc sscan, ItemPointer mintid, + ItemPointer maxtid); +extern bool tdeheap_getnextslot_tidrange(TableScanDesc sscan, + ScanDirection direction, + TupleTableSlot *slot); +extern bool tdeheap_fetch(Relation relation, Snapshot snapshot, + HeapTuple tuple, Buffer *userbuf, bool keep_buf); +extern bool tdeheap_hot_search_buffer(ItemPointer tid, Relation relation, + Buffer buffer, Snapshot snapshot, HeapTuple heapTuple, + bool *all_dead, bool first_call); + +extern void tdeheap_get_latest_tid(TableScanDesc sscan, ItemPointer tid); + +extern BulkInsertState GetBulkInsertState(void); +extern void FreeBulkInsertState(BulkInsertState); +extern void ReleaseBulkInsertStatePin(BulkInsertState bistate); + +extern void tdeheap_insert(Relation relation, HeapTuple tup, CommandId cid, + int options, BulkInsertState bistate); +extern void tdeheap_multi_insert(Relation relation, struct TupleTableSlot **slots, + int ntuples, CommandId cid, int options, + BulkInsertState bistate); +extern TM_Result tdeheap_delete(Relation relation, ItemPointer tid, + CommandId cid, Snapshot crosscheck, bool wait, + struct TM_FailureData *tmfd, bool changingPart); +extern void tdeheap_finish_speculative(Relation relation, ItemPointer tid); +extern void tdeheap_abort_speculative(Relation relation, ItemPointer tid); +extern TM_Result tdeheap_update(Relation relation, ItemPointer otid, + HeapTuple newtup, + CommandId cid, Snapshot crosscheck, bool wait, + struct TM_FailureData *tmfd, LockTupleMode *lockmode, + TU_UpdateIndexes *update_indexes); +extern TM_Result tdeheap_lock_tuple(Relation relation, HeapTuple tuple, + CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy, + bool follow_updates, + Buffer *buffer, struct TM_FailureData *tmfd); + +extern void tdeheap_inplace_update(Relation relation, HeapTuple tuple); +extern bool tdeheap_prepare_freeze_tuple(HeapTupleHeader tuple, + const struct VacuumCutoffs *cutoffs, + HeapPageFreeze *pagefrz, + HeapTupleFreeze *frz, bool *totally_frozen); + +extern void tdeheap_pre_freeze_checks(Buffer buffer, + HeapTupleFreeze *tuples, int ntuples); +extern void tdeheap_freeze_prepared_tuples(Buffer 
buffer, + HeapTupleFreeze *tuples, int ntuples); +extern bool tdeheap_freeze_tuple(HeapTupleHeader tuple, + TransactionId relfrozenxid, TransactionId relminmxid, + TransactionId FreezeLimit, TransactionId MultiXactCutoff); +extern bool tdeheap_tuple_should_freeze(HeapTupleHeader tuple, + const struct VacuumCutoffs *cutoffs, + TransactionId *NoFreezePageRelfrozenXid, + MultiXactId *NoFreezePageRelminMxid); +extern bool tdeheap_tuple_needs_eventual_freeze(HeapTupleHeader tuple); + +extern void simple_tdeheap_insert(Relation relation, HeapTuple tup); +extern void simple_tdeheap_delete(Relation relation, ItemPointer tid); +extern void simple_tdeheap_update(Relation relation, ItemPointer otid, + HeapTuple tup, TU_UpdateIndexes *update_indexes); + +extern TransactionId tdeheap_index_delete_tuples(Relation rel, + TM_IndexDeleteOp *delstate); + +/* in heap/pruneheap.c */ +struct GlobalVisState; +extern void tdeheap_page_prune_opt(Relation relation, Buffer buffer); +extern void tdeheap_page_prune_and_freeze(Relation relation, Buffer buffer, + struct GlobalVisState *vistest, + int options, + struct VacuumCutoffs *cutoffs, + PruneFreezeResult *presult, + PruneReason reason, + OffsetNumber *off_loc, + TransactionId *new_relfrozen_xid, + MultiXactId *new_relmin_mxid); +extern void tdeheap_page_prune_execute(Relation rel, Buffer buffer, bool lp_truncate_only, + OffsetNumber *redirected, int nredirected, + OffsetNumber *nowdead, int ndead, + OffsetNumber *nowunused, int nunused); +extern void tdeheap_get_root_tuples(Page page, OffsetNumber *root_offsets); +extern void log_tdeheap_prune_and_freeze(Relation relation, Buffer buffer, + TransactionId conflict_xid, + bool cleanup_lock, + PruneReason reason, + HeapTupleFreeze *frozen, int nfrozen, + OffsetNumber *redirected, int nredirected, + OffsetNumber *dead, int ndead, + OffsetNumber *unused, int nunused); + +/* in heap/vacuumlazy.c */ +struct VacuumParams; +extern void tdeheap_vacuum_rel(Relation rel, + struct VacuumParams *params, BufferAccessStrategy bstrategy); + +/* in heap/pg_tdeam_visibility.c */ +extern bool HeapTupleSatisfiesVisibility(HeapTuple htup, Snapshot snapshot, + Buffer buffer); +extern TM_Result HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid, + Buffer buffer); +extern HTSV_Result HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin, + Buffer buffer); +extern HTSV_Result HeapTupleSatisfiesVacuumHorizon(HeapTuple htup, Buffer buffer, + TransactionId *dead_after); +extern void HeapTupleSetHintBits(HeapTupleHeader tuple, Buffer buffer, + uint16 infomask, TransactionId xid); +extern bool HeapTupleHeaderIsOnlyLocked(HeapTupleHeader tuple); +extern bool HeapTupleIsSurelyDead(HeapTuple htup, + struct GlobalVisState *vistest); + +/* + * To avoid leaking too much knowledge about reorderbuffer implementation + * details this is implemented in reorderbuffer.c not pg_tdeam_visibility.c + */ +struct HTAB; +extern bool ResolveCminCmaxDuringDecoding(struct HTAB *tuplecid_data, + Snapshot snapshot, + HeapTuple htup, + Buffer buffer, + CommandId *cmin, CommandId *cmax); +extern void HeapCheckForSerializableConflictOut(bool visible, Relation relation, HeapTuple tuple, + Buffer buffer, Snapshot snapshot); + +/* Defined in pg_tdeam_handler.c */ +extern bool is_tdeheap_rel(Relation rel); + +const TableAmRoutine * +GetPGTdeamTableAmRoutine(void); + +#endif /* PG_TDEAM_H */ diff --git a/contrib/pg_tde/src17/include/access/pg_tdeam_xlog.h b/contrib/pg_tde/src17/include/access/pg_tdeam_xlog.h new file mode 100644 index 
00000000000..34ef08d1e1d --- /dev/null +++ b/contrib/pg_tde/src17/include/access/pg_tdeam_xlog.h @@ -0,0 +1,502 @@ +/*------------------------------------------------------------------------- + * + * pg_tdeam_xlog.h + * POSTGRES pg_tde access XLOG definitions. + * + * + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/access/heapam_xlog.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDEAM_XLOG_H +#define PG_TDEAM_XLOG_H + +#include "access/htup.h" +#include "access/xlogreader.h" +#include "lib/stringinfo.h" +#include "storage/buf.h" +#include "storage/bufpage.h" +#include "storage/relfilelocator.h" +#include "utils/relcache.h" + + +/* + * WAL record definitions for pg_tdeam.c's WAL operations + * + * XLOG allows to store some information in high 4 bits of log + * record xl_info field. We use 3 for opcode and one for init bit. + */ +#define XLOG_HEAP_INSERT 0x00 +#define XLOG_HEAP_DELETE 0x10 +#define XLOG_HEAP_UPDATE 0x20 +#define XLOG_HEAP_TRUNCATE 0x30 +#define XLOG_HEAP_HOT_UPDATE 0x40 +#define XLOG_HEAP_CONFIRM 0x50 +#define XLOG_HEAP_LOCK 0x60 +#define XLOG_HEAP_INPLACE 0x70 + +#define XLOG_HEAP_OPMASK 0x70 +/* + * When we insert 1st item on new page in INSERT, UPDATE, HOT_UPDATE, + * or MULTI_INSERT, we can (and we do) restore entire page in redo + */ +#define XLOG_HEAP_INIT_PAGE 0x80 +/* + * We ran out of opcodes, so pg_tdeam.c now has a second RmgrId. These opcodes + * are associated with RM_HEAP2_ID, but are not logically different from + * the ones above associated with RM_HEAP_ID. XLOG_HEAP_OPMASK applies to + * these, too. + * + * There's no difference between XLOG_HEAP2_PRUNE_ON_ACCESS, + * XLOG_HEAP2_PRUNE_VACUUM_SCAN and XLOG_HEAP2_PRUNE_VACUUM_CLEANUP records. + * They have separate opcodes just for debugging and analysis purposes, to + * indicate why the WAL record was emitted. + */ +#define XLOG_HEAP2_REWRITE 0x00 +#define XLOG_HEAP2_PRUNE_ON_ACCESS 0x10 +#define XLOG_HEAP2_PRUNE_VACUUM_SCAN 0x20 +#define XLOG_HEAP2_PRUNE_VACUUM_CLEANUP 0x30 +#define XLOG_HEAP2_VISIBLE 0x40 +#define XLOG_HEAP2_MULTI_INSERT 0x50 +#define XLOG_HEAP2_LOCK_UPDATED 0x60 +#define XLOG_HEAP2_NEW_CID 0x70 + +/* + * xl_tdeheap_insert/xl_tdeheap_multi_insert flag values, 8 bits are available. + */ +/* PD_ALL_VISIBLE was cleared */ +#define XLH_INSERT_ALL_VISIBLE_CLEARED (1<<0) +#define XLH_INSERT_LAST_IN_MULTI (1<<1) +#define XLH_INSERT_IS_SPECULATIVE (1<<2) +#define XLH_INSERT_CONTAINS_NEW_TUPLE (1<<3) +#define XLH_INSERT_ON_TOAST_RELATION (1<<4) + +/* all_frozen_set always implies all_visible_set */ +#define XLH_INSERT_ALL_FROZEN_SET (1<<5) + +/* + * xl_tdeheap_update flag values, 8 bits are available. + */ +/* PD_ALL_VISIBLE was cleared */ +#define XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED (1<<0) +/* PD_ALL_VISIBLE was cleared in the 2nd page */ +#define XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED (1<<1) +#define XLH_UPDATE_CONTAINS_OLD_TUPLE (1<<2) +#define XLH_UPDATE_CONTAINS_OLD_KEY (1<<3) +#define XLH_UPDATE_CONTAINS_NEW_TUPLE (1<<4) +#define XLH_UPDATE_PREFIX_FROM_OLD (1<<5) +#define XLH_UPDATE_SUFFIX_FROM_OLD (1<<6) + +/* convenience macro for checking whether any form of old tuple was logged */ +#define XLH_UPDATE_CONTAINS_OLD \ + (XLH_UPDATE_CONTAINS_OLD_TUPLE | XLH_UPDATE_CONTAINS_OLD_KEY) + +/* + * xl_tdeheap_delete flag values, 8 bits are available. 
+ */ +/* PD_ALL_VISIBLE was cleared */ +#define XLH_DELETE_ALL_VISIBLE_CLEARED (1<<0) +#define XLH_DELETE_CONTAINS_OLD_TUPLE (1<<1) +#define XLH_DELETE_CONTAINS_OLD_KEY (1<<2) +#define XLH_DELETE_IS_SUPER (1<<3) +#define XLH_DELETE_IS_PARTITION_MOVE (1<<4) + +/* convenience macro for checking whether any form of old tuple was logged */ +#define XLH_DELETE_CONTAINS_OLD \ + (XLH_DELETE_CONTAINS_OLD_TUPLE | XLH_DELETE_CONTAINS_OLD_KEY) + +/* This is what we need to know about delete */ +typedef struct xl_tdeheap_delete +{ + TransactionId xmax; /* xmax of the deleted tuple */ + OffsetNumber offnum; /* deleted tuple's offset */ + uint8 infobits_set; /* infomask bits */ + uint8 flags; +} xl_tdeheap_delete; + +#define SizeOfHeapDelete (offsetof(xl_tdeheap_delete, flags) + sizeof(uint8)) + +/* + * xl_tdeheap_truncate flag values, 8 bits are available. + */ +#define XLH_TRUNCATE_CASCADE (1<<0) +#define XLH_TRUNCATE_RESTART_SEQS (1<<1) + +/* + * For truncate we list all truncated relids in an array, followed by all + * sequence relids that need to be restarted, if any. + * All rels are always within the same database, so we just list dbid once. + */ +typedef struct xl_tdeheap_truncate +{ + Oid dbId; + uint32 nrelids; + uint8 flags; + Oid relids[FLEXIBLE_ARRAY_MEMBER]; +} xl_tdeheap_truncate; + +#define SizeOfHeapTruncate (offsetof(xl_tdeheap_truncate, relids)) + +/* + * We don't store the whole fixed part (HeapTupleHeaderData) of an inserted + * or updated tuple in WAL; we can save a few bytes by reconstructing the + * fields that are available elsewhere in the WAL record, or perhaps just + * plain needn't be reconstructed. These are the fields we must store. + */ +typedef struct xl_tdeheap_header +{ + uint16 t_infomask2; + uint16 t_infomask; + uint8 t_hoff; +} xl_tdeheap_header; + +#define SizeOfHeapHeader (offsetof(xl_tdeheap_header, t_hoff) + sizeof(uint8)) + +/* This is what we need to know about insert */ +typedef struct xl_tdeheap_insert +{ + OffsetNumber offnum; /* inserted tuple's offset */ + uint8 flags; + + /* xl_tdeheap_header & TUPLE DATA in backup block 0 */ +} xl_tdeheap_insert; + +#define SizeOfHeapInsert (offsetof(xl_tdeheap_insert, flags) + sizeof(uint8)) + +/* + * This is what we need to know about a multi-insert. + * + * The main data of the record consists of this xl_tdeheap_multi_insert header. + * 'offsets' array is omitted if the whole page is reinitialized + * (XLOG_HEAP_INIT_PAGE). + * + * In block 0's data portion, there is an xl_multi_insert_tuple struct, + * followed by the tuple data for each tuple. There is padding to align + * each xl_multi_insert_tuple struct. + */ +typedef struct xl_tdeheap_multi_insert +{ + uint8 flags; + uint16 ntuples; + OffsetNumber offsets[FLEXIBLE_ARRAY_MEMBER]; +} xl_tdeheap_multi_insert; + +#define SizeOfHeapMultiInsert offsetof(xl_tdeheap_multi_insert, offsets) + +typedef struct xl_multi_insert_tuple +{ + uint16 datalen; /* size of tuple data that follows */ + uint16 t_infomask2; + uint16 t_infomask; + uint8 t_hoff; + /* TUPLE DATA FOLLOWS AT END OF STRUCT */ +} xl_multi_insert_tuple; + +#define SizeOfMultiInsertTuple (offsetof(xl_multi_insert_tuple, t_hoff) + sizeof(uint8)) + +/* + * This is what we need to know about update|hot_update + * + * Backup blk 0: new page + * + * If XLH_UPDATE_PREFIX_FROM_OLD or XLH_UPDATE_SUFFIX_FROM_OLD flags are set, + * the prefix and/or suffix come first, as one or two uint16s. + * + * After that, xl_tdeheap_header and new tuple data follow. 
The new tuple + * data doesn't include the prefix and suffix, which are copied from the + * old tuple on replay. + * + * If XLH_UPDATE_CONTAINS_NEW_TUPLE flag is given, the tuple data is + * included even if a full-page image was taken. + * + * Backup blk 1: old page, if different. (no data, just a reference to the blk) + */ +typedef struct xl_tdeheap_update +{ + TransactionId old_xmax; /* xmax of the old tuple */ + OffsetNumber old_offnum; /* old tuple's offset */ + uint8 old_infobits_set; /* infomask bits to set on old tuple */ + uint8 flags; + TransactionId new_xmax; /* xmax of the new tuple */ + OffsetNumber new_offnum; /* new tuple's offset */ + + /* + * If XLH_UPDATE_CONTAINS_OLD_TUPLE or XLH_UPDATE_CONTAINS_OLD_KEY flags + * are set, xl_tdeheap_header and tuple data for the old tuple follow. + */ +} xl_tdeheap_update; + +#define SizeOfHeapUpdate (offsetof(xl_tdeheap_update, new_offnum) + sizeof(OffsetNumber)) + +/* + * These structures and flags encode VACUUM pruning and freezing and on-access + * pruning page modifications. + * + * xl_tdeheap_prune is the main record. The XLHP_HAS_* flags indicate which + * "sub-records" are included and the other XLHP_* flags provide additional + * information about the conditions for replay. + * + * The data for block reference 0 contains "sub-records" depending on which of + * the XLHP_HAS_* flags are set. See xlhp_* struct definitions below. The + * sub-records appear in the same order as the XLHP_* flags. An example + * record with every sub-record included: + * + *----------------------------------------------------------------------------- + * Main data section: + * + * xl_tdeheap_prune + * uint8 flags + * TransactionId snapshot_conflict_horizon + * + * Block 0 data section: + * + * xlhp_freeze_plans + * uint16 nplans + * [2 bytes of padding] + * xlhp_freeze_plan plans[nplans] + * + * xlhp_prune_items + * uint16 nredirected + * OffsetNumber redirected[2 * nredirected] + * + * xlhp_prune_items + * uint16 ndead + * OffsetNumber nowdead[ndead] + * + * xlhp_prune_items + * uint16 nunused + * OffsetNumber nowunused[nunused] + * + * OffsetNumber frz_offsets[sum([plan.ntuples for plan in plans])] + *----------------------------------------------------------------------------- + * + * NOTE: because the record data is assembled from many optional parts, we + * have to pay close attention to alignment. In the main data section, + * 'snapshot_conflict_horizon' is stored unaligned after 'flags', to save + * space. In the block 0 data section, the freeze plans appear first, because + * they contain TransactionId fields that require 4-byte alignment. All the + * other fields require only 2-byte alignment. This is also the reason that + * 'frz_offsets' is stored separately from the xlhp_freeze_plan structs. + */ +typedef struct xl_tdeheap_prune +{ + uint8 reason; + uint8 flags; + + /* + * If XLHP_HAS_CONFLICT_HORIZON is set, the conflict horizon XID follows, + * unaligned + */ +} xl_tdeheap_prune; + +#define SizeOfHeapPrune (offsetof(xl_tdeheap_prune, flags) + sizeof(uint8)) + +/* to handle recovery conflict during logical decoding on standby */ +#define XLHP_IS_CATALOG_REL (1 << 1) + +/* + * Does replaying the record require a cleanup-lock? + * + * Pruning, in VACUUM's first pass or when otherwise accessing a page, + * requires a cleanup lock. For freezing, and VACUUM's second pass which + * marks LP_DEAD line pointers as unused without moving any tuple data, an + * ordinary exclusive lock is sufficient. 
+ */ +#define XLHP_CLEANUP_LOCK (1 << 2) + +/* + * If we remove or freeze any entries that contain xids, we need to include a + * snapshot conflict horizon. It's used in Hot Standby mode to ensure that + * there are no queries running for which the removed tuples are still + * visible, or which still consider the frozen XIDs as running. + */ +#define XLHP_HAS_CONFLICT_HORIZON (1 << 3) + +/* + * Indicates that an xlhp_freeze_plans sub-record and one or more + * xlhp_freeze_plan sub-records are present. + */ +#define XLHP_HAS_FREEZE_PLANS (1 << 4) + +/* + * XLHP_HAS_REDIRECTIONS, XLHP_HAS_DEAD_ITEMS, and XLHP_HAS_NOW_UNUSED_ITEMS + * indicate that xlhp_prune_items sub-records with redirected, dead, and + * unused item offsets are present. + */ +#define XLHP_HAS_REDIRECTIONS (1 << 5) +#define XLHP_HAS_DEAD_ITEMS (1 << 6) +#define XLHP_HAS_NOW_UNUSED_ITEMS (1 << 7) + +/* + * xlhp_freeze_plan describes how to freeze a group of one or more heap tuples + * (appears in xl_tdeheap_prune's xlhp_freeze_plans sub-record) + */ +/* 0x01 was XLH_FREEZE_XMIN */ +#define XLH_FREEZE_XVAC 0x02 +#define XLH_INVALID_XVAC 0x04 + +typedef struct xlhp_freeze_plan +{ + TransactionId xmax; + uint16 t_infomask2; + uint16 t_infomask; + uint8 frzflags; + + /* Length of individual page offset numbers array for this plan */ + uint16 ntuples; +} xlhp_freeze_plan; + +/* + * This is what we need to know about a block being frozen during vacuum + * + * The backup block's data contains an array of xlhp_freeze_plan structs (with + * nplans elements). The individual item offsets are located in an array at + * the end of the entire record with nplans * (each plan's ntuples) members + * Those offsets are in the same order as the plans. The REDO routine uses + * the offsets to freeze the corresponding heap tuples. + * + * (As of PostgreSQL 17, XLOG_HEAP2_PRUNE_VACUUM_SCAN records replace the + * separate XLOG_HEAP2_FREEZE_PAGE records.) + */ +typedef struct xlhp_freeze_plans +{ + uint16 nplans; + xlhp_freeze_plan plans[FLEXIBLE_ARRAY_MEMBER]; +} xlhp_freeze_plans; + +/* + * Generic sub-record type contained in block reference 0 of an xl_tdeheap_prune + * record and used for redirect, dead, and unused items if any of + * XLHP_HAS_REDIRECTIONS/XLHP_HAS_DEAD_ITEMS/XLHP_HAS_NOW_UNUSED_ITEMS are + * set. Note that in the XLHP_HAS_REDIRECTIONS variant, there are actually 2 + * * length number of OffsetNumbers in the data. 
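+ *
+ * For example (values hypothetical), with redirections ntargets = 2 and
+ * data = {2, 5, 7, 9}, item 2 was redirected to item 5 and item 7 to
+ * item 9.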
+ */ +typedef struct xlhp_prune_items +{ + uint16 ntargets; + OffsetNumber data[FLEXIBLE_ARRAY_MEMBER]; +} xlhp_prune_items; + + +/* flags for infobits_set */ +#define XLHL_XMAX_IS_MULTI 0x01 +#define XLHL_XMAX_LOCK_ONLY 0x02 +#define XLHL_XMAX_EXCL_LOCK 0x04 +#define XLHL_XMAX_KEYSHR_LOCK 0x08 +#define XLHL_KEYS_UPDATED 0x10 + +/* flag bits for xl_tdeheap_lock / xl_tdeheap_lock_updated's flag field */ +#define XLH_LOCK_ALL_FROZEN_CLEARED 0x01 + +/* This is what we need to know about lock */ +typedef struct xl_tdeheap_lock +{ + TransactionId xmax; /* might be a MultiXactId */ + OffsetNumber offnum; /* locked tuple's offset on page */ + uint8 infobits_set; /* infomask and infomask2 bits to set */ + uint8 flags; /* XLH_LOCK_* flag bits */ +} xl_tdeheap_lock; + +#define SizeOfHeapLock (offsetof(xl_tdeheap_lock, flags) + sizeof(uint8)) + +/* This is what we need to know about locking an updated version of a row */ +typedef struct xl_tdeheap_lock_updated +{ + TransactionId xmax; + OffsetNumber offnum; + uint8 infobits_set; + uint8 flags; +} xl_tdeheap_lock_updated; + +#define SizeOfHeapLockUpdated (offsetof(xl_tdeheap_lock_updated, flags) + sizeof(uint8)) + +/* This is what we need to know about confirmation of speculative insertion */ +typedef struct xl_tdeheap_confirm +{ + OffsetNumber offnum; /* confirmed tuple's offset on page */ +} xl_tdeheap_confirm; + +#define SizeOfHeapConfirm (offsetof(xl_tdeheap_confirm, offnum) + sizeof(OffsetNumber)) + +/* This is what we need to know about in-place update */ +typedef struct xl_tdeheap_inplace +{ + OffsetNumber offnum; /* updated tuple's offset on page */ +} xl_tdeheap_inplace; + +#define SizeOfHeapInplace (offsetof(xl_tdeheap_inplace, offnum) + sizeof(OffsetNumber)) + +/* + * This is what we need to know about setting a visibility map bit + * + * Backup blk 0: visibility map buffer + * Backup blk 1: heap buffer + */ +typedef struct xl_tdeheap_visible +{ + TransactionId snapshotConflictHorizon; + uint8 flags; +} xl_tdeheap_visible; + +#define SizeOfHeapVisible (offsetof(xl_tdeheap_visible, flags) + sizeof(uint8)) + +typedef struct xl_tdeheap_new_cid +{ + /* + * store toplevel xid so we don't have to merge cids from different + * transactions + */ + TransactionId top_xid; + CommandId cmin; + CommandId cmax; + CommandId combocid; /* just for debugging */ + + /* + * Store the relfilelocator/ctid pair to facilitate lookups. 
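+ * (These records are only emitted when wal_level = logical; logical
+ * decoding uses the pair to find the catalog tuple a cmin/cmax belongs to
+ * when reconstructing combo CID information.)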
+ */ + RelFileLocator target_locator; + ItemPointerData target_tid; +} xl_tdeheap_new_cid; + +#define SizeOfHeapNewCid (offsetof(xl_tdeheap_new_cid, target_tid) + sizeof(ItemPointerData)) + +/* logical rewrite xlog record header */ +typedef struct xl_tdeheap_rewrite_mapping +{ + TransactionId mapped_xid; /* xid that might need to see the row */ + Oid mapped_db; /* DbOid or InvalidOid for shared rels */ + Oid mapped_rel; /* Oid of the mapped relation */ + off_t offset; /* How far have we written so far */ + uint32 num_mappings; /* Number of in-memory mappings */ + XLogRecPtr start_lsn; /* Insert LSN at begin of rewrite */ +} xl_tdeheap_rewrite_mapping; + +extern void HeapTupleHeaderAdvanceConflictHorizon(HeapTupleHeader tuple, + TransactionId *snapshotConflictHorizon); + +extern void tdeheap_redo(XLogReaderState *record); +extern void tdeheap_desc(StringInfo buf, XLogReaderState *record); +extern const char *tdeheap_identify(uint8 info); +extern void tdeheap_mask(char *pagedata, BlockNumber blkno); +extern void tdeheap2_redo(XLogReaderState *record); +extern void tdeheap2_desc(StringInfo buf, XLogReaderState *record); +extern const char *tdeheap2_identify(uint8 info); +extern void tdeheap_xlog_logical_rewrite(XLogReaderState *r); + +extern XLogRecPtr log_tdeheap_visible(Relation rel, Buffer tdeheap_buffer, + Buffer vm_buffer, + TransactionId snapshotConflictHorizon, + uint8 vmflags); + +/* in heapdesc.c, so it can be shared between frontend/backend code */ +extern void heap_xlog_deserialize_prune_and_freeze(char *cursor, uint8 flags, + int *nplans, xlhp_freeze_plan **plans, + OffsetNumber **frz_offsets, + int *nredirected, OffsetNumber **redirected, + int *ndead, OffsetNumber **nowdead, + int *nunused, OffsetNumber **nowunused); + +#endif /* PG_TDEAM_XLOG_H */ diff --git a/contrib/pg_tde/src17/include/access/pg_tdetoast.h b/contrib/pg_tde/src17/include/access/pg_tdetoast.h new file mode 100644 index 00000000000..a9e98617e88 --- /dev/null +++ b/contrib/pg_tde/src17/include/access/pg_tdetoast.h @@ -0,0 +1,149 @@ +/*------------------------------------------------------------------------- + * + * heaptoast.h + * Heap-specific definitions for external and compressed storage + * of variable size attributes. + * + * Copyright (c) 2000-2024, PostgreSQL Global Development Group + * + * src/include/access/heaptoast.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_TDE_TOAST_H +#define PG_TDE_TOAST_H + +#include "access/htup_details.h" +#include "storage/lockdefs.h" +#include "utils/relcache.h" + +/* + * Find the maximum size of a tuple if there are to be N tuples per page. + */ +#define MaximumBytesPerTuple(tuplesPerPage) \ + MAXALIGN_DOWN((BLCKSZ - \ + MAXALIGN(SizeOfPageHeaderData + (tuplesPerPage) * sizeof(ItemIdData))) \ + / (tuplesPerPage)) + +/* + * These symbols control toaster activation. If a tuple is larger than + * TOAST_TUPLE_THRESHOLD, we will try to toast it down to no more than + * TOAST_TUPLE_TARGET bytes through compressing compressible fields and + * moving EXTENDED and EXTERNAL data out-of-line. + * + * The numbers need not be the same, though they currently are. It doesn't + * make sense for TARGET to exceed THRESHOLD, but it could be useful to make + * it be smaller. + * + * Currently we choose both values to match the largest tuple size for which + * TOAST_TUPLES_PER_PAGE tuples can fit on a heap page. 
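+ * (For illustration, with the default 8 kB BLCKSZ, a 24-byte page header,
+ * 4-byte line pointers and 8-byte MAXALIGN: MaximumBytesPerTuple(4) =
+ * MAXALIGN_DOWN((8192 - MAXALIGN(24 + 4 * 4)) / 4) = MAXALIGN_DOWN(2038) =
+ * 2032 bytes, i.e. the familiar ~2 kB TOAST threshold.)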
+ * + * XXX while these can be modified without initdb, some thought needs to be + * given to needs_toast_table() in toasting.c before unleashing random + * changes. Also see LOBLKSIZE in large_object.h, which can *not* be + * changed without initdb. + */ +#define TOAST_TUPLES_PER_PAGE 4 + +#define TOAST_TUPLE_THRESHOLD MaximumBytesPerTuple(TOAST_TUPLES_PER_PAGE) + +#define TOAST_TUPLE_TARGET TOAST_TUPLE_THRESHOLD + +/* + * The code will also consider moving MAIN data out-of-line, but only as a + * last resort if the previous steps haven't reached the target tuple size. + * In this phase we use a different target size, currently equal to the + * largest tuple that will fit on a heap page. This is reasonable since + * the user has told us to keep the data in-line if at all possible. + */ +#define TOAST_TUPLES_PER_PAGE_MAIN 1 + +#define TOAST_TUPLE_TARGET_MAIN MaximumBytesPerTuple(TOAST_TUPLES_PER_PAGE_MAIN) + +/* + * If an index value is larger than TOAST_INDEX_TARGET, we will try to + * compress it (we can't move it out-of-line, however). Note that this + * number is per-datum, not per-tuple, for simplicity in index_form_tuple(). + */ +#define TOAST_INDEX_TARGET (MaxHeapTupleSize / 16) + +/* + * When we store an oversize datum externally, we divide it into chunks + * containing at most TOAST_MAX_CHUNK_SIZE data bytes. This number *must* + * be small enough that the completed toast-table tuple (including the + * ID and sequence fields and all overhead) will fit on a page. + * The coding here sets the size on the theory that we want to fit + * EXTERN_TUPLES_PER_PAGE tuples of maximum size onto a page. + * + * NB: Changing TOAST_MAX_CHUNK_SIZE requires an initdb. + */ +#define EXTERN_TUPLES_PER_PAGE 4 /* tweak only this */ + +#define EXTERN_TUPLE_MAX_SIZE MaximumBytesPerTuple(EXTERN_TUPLES_PER_PAGE) + +#define TOAST_MAX_CHUNK_SIZE \ + (EXTERN_TUPLE_MAX_SIZE - \ + MAXALIGN(SizeofHeapTupleHeader) - \ + sizeof(Oid) - \ + sizeof(int32) - \ + VARHDRSZ) + +/* ---------- + * tdeheap_toast_insert_or_update - + * + * Called by tdeheap_insert() and tdeheap_update(). + * ---------- + */ +extern HeapTuple tdeheap_toast_insert_or_update(Relation rel, HeapTuple newtup, + HeapTuple oldtup, int options); + +/* ---------- + * tdeheap_toast_delete - + * + * Called by tdeheap_delete(). + * ---------- + */ +extern void tdeheap_toast_delete(Relation rel, HeapTuple oldtup, + bool is_speculative); + +/* ---------- + * toast_flatten_tuple - + * + * "Flatten" a tuple to contain no out-of-line toasted fields. + * (This does not eliminate compressed or short-header datums.) + * ---------- + */ +extern HeapTuple toast_flatten_tuple(HeapTuple tup, TupleDesc tupleDesc); + +/* ---------- + * toast_flatten_tuple_to_datum - + * + * "Flatten" a tuple containing out-of-line toasted fields into a Datum. + * ---------- + */ +extern Datum toast_flatten_tuple_to_datum(HeapTupleHeader tup, + uint32 tup_len, + TupleDesc tupleDesc); + +/* ---------- + * toast_build_flattened_tuple - + * + * Build a tuple containing no out-of-line toasted fields. + * (This does not eliminate compressed or short-header datums.) + * ---------- + */ +extern HeapTuple toast_build_flattened_tuple(TupleDesc tupleDesc, + Datum *values, + bool *isnull); + +/* ---------- + * tdeheap_fetch_toast_slice + * + * Fetch a slice from a toast value stored in a heap table. 
+ * ----------
+ */
+extern void tdeheap_fetch_toast_slice(Relation toastrel, Oid valueid,
+ int32 attrsize, int32 sliceoffset,
+ int32 slicelength, struct varlena *result);
+
+#endif /* PG_TDE_TOAST_H */
diff --git a/contrib/pg_tde/sysbench/bulk_insert.lua b/contrib/pg_tde/sysbench/bulk_insert.lua
new file mode 100755
index 00000000000..8be93bfb7da
--- /dev/null
+++ b/contrib/pg_tde/sysbench/bulk_insert.lua
@@ -0,0 +1,56 @@
+#!/usr/bin/sysbench
+-- -------------------------------------------------------------------------- --
+-- Bulk insert benchmark: do multi-row INSERTs concurrently in --threads
+-- threads with each thread inserting into its own table. The number of INSERTs
+-- executed by each thread is controlled by either --time or --events.
+-- -------------------------------------------------------------------------- --
+
+cursize=0
+
+function thread_init()
+ drv = sysbench.sql.driver()
+ con = drv:connect()
+end
+
+function prepare()
+ local i
+
+ local drv = sysbench.sql.driver()
+ local con = drv:connect()
+
+ for i = 1, sysbench.opt.threads do
+ print("Creating table 'sbtest" .. i .. "'...")
+ con:query(string.format([[
+ CREATE TABLE IF NOT EXISTS sbtest%d (
+ id INTEGER NOT NULL,
+ k INTEGER DEFAULT '0' NOT NULL,
+ PRIMARY KEY (id))]], i))
+ end
+end
+
+function event()
+ if (cursize == 0) then
+ con:bulk_insert_init("INSERT INTO sbtest" .. thread_id+1 .. " VALUES")
+ end
+
+ cursize = cursize + 1
+
+ con:bulk_insert_next("(" .. cursize .. "," .. cursize .. ")")
+end
+
+function thread_done(thread_id)
+ con:bulk_insert_done()
+ con:disconnect()
+end
+
+function cleanup()
+ local i
+
+ local drv = sysbench.sql.driver()
+ local con = drv:connect()
+
+ for i = 1, sysbench.opt.threads do
+ print("Dropping table 'sbtest" .. i .. "'...")
+ con:query("DROP TABLE IF EXISTS sbtest" .. i )
+ end
+end
diff --git a/contrib/pg_tde/sysbench/oltp_common.lua b/contrib/pg_tde/sysbench/oltp_common.lua
new file mode 100644
index 00000000000..c42d432e1db
--- /dev/null
+++ b/contrib/pg_tde/sysbench/oltp_common.lua
@@ -0,0 +1,503 @@
+-- Copyright (C) 2006-2018 Alexey Kopytov
+
+-- This program is free software; you can redistribute it and/or modify
+-- it under the terms of the GNU General Public License as published by
+-- the Free Software Foundation; either version 2 of the License, or
+-- (at your option) any later version.
+
+-- This program is distributed in the hope that it will be useful,
+-- but WITHOUT ANY WARRANTY; without even the implied warranty of
+-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+-- GNU General Public License for more details.
+
+-- You should have received a copy of the GNU General Public License
+-- along with this program; if not, write to the Free Software
+-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+-- -----------------------------------------------------------------------------
+-- Common code for OLTP benchmarks.
+-- -----------------------------------------------------------------------------
+
+function init()
+ assert(event ~= nil,
+ "this script is meant to be included by other OLTP scripts and " ..
+ "should not be called directly.")
+end
+
+if sysbench.cmdline.command == nil then
+ error("Command is required. Supported commands: prepare, prewarm, run, " ..
+ "cleanup, help") +end + +-- Command line options +sysbench.cmdline.options = { + table_size = + {"Number of rows per table", 10000}, + range_size = + {"Range size for range SELECT queries", 100}, + tables = + {"Number of tables", 1}, + point_selects = + {"Number of point SELECT queries per transaction", 10}, + simple_ranges = + {"Number of simple range SELECT queries per transaction", 1}, + sum_ranges = + {"Number of SELECT SUM() queries per transaction", 1}, + order_ranges = + {"Number of SELECT ORDER BY queries per transaction", 1}, + distinct_ranges = + {"Number of SELECT DISTINCT queries per transaction", 1}, + index_updates = + {"Number of UPDATE index queries per transaction", 1}, + non_index_updates = + {"Number of UPDATE non-index queries per transaction", 1}, + delete_inserts = + {"Number of DELETE/INSERT combinations per transaction", 1}, + range_selects = + {"Enable/disable all range SELECT queries", true}, + auto_inc = + {"Use AUTO_INCREMENT column as Primary Key (for MySQL), " .. + "or its alternatives in other DBMS. When disabled, use " .. + "client-generated IDs", true}, + skip_trx = + {"Don't start explicit transactions and execute all queries " .. + "in the AUTOCOMMIT mode", false}, + secondary = + {"Use a secondary index in place of the PRIMARY KEY", false}, + create_secondary = + {"Create a secondary index in addition to the PRIMARY KEY", true}, + mysql_storage_engine = + {"Storage engine, if MySQL is used", "innodb"}, + pgsql_variant = + {"Use this PostgreSQL variant when running with the " .. + "PostgreSQL driver. The only currently supported " .. + "variant is 'redshift'. When enabled, " .. + "create_secondary is automatically disabled, and " .. + "delete_inserts is set to 0"} +} + +-- Prepare the dataset. This command supports parallel execution, i.e. will +-- benefit from executing with --threads > 1 as long as --tables > 1 +function cmd_prepare() + local drv = sysbench.sql.driver() + local con = drv:connect() + + for i = sysbench.tid % sysbench.opt.threads + 1, sysbench.opt.tables, + sysbench.opt.threads do + create_table(drv, con, i) + end +end + +-- Preload the dataset into the server cache. This command supports parallel +-- execution, i.e. will benefit from executing with --threads > 1 as long as +-- --tables > 1 +-- +-- PS. Currently, this command is only meaningful for MySQL/InnoDB benchmarks +function cmd_prewarm() + local drv = sysbench.sql.driver() + local con = drv:connect() + + assert(drv:name() == "mysql", "prewarm is currently MySQL only") + + -- Do not create on disk tables for subsequent queries + con:query("SET tmp_table_size=2*1024*1024*1024") + con:query("SET max_heap_table_size=2*1024*1024*1024") + + for i = sysbench.tid % sysbench.opt.threads + 1, sysbench.opt.tables, + sysbench.opt.threads do + local t = "sbtest" .. i + print("Prewarming table " .. t) + con:query("ANALYZE TABLE sbtest" .. i) + con:query(string.format( + "SELECT AVG(id) FROM " .. + "(SELECT * FROM %s FORCE KEY (PRIMARY) " .. + "LIMIT %u) t", + t, sysbench.opt.table_size)) + con:query(string.format( + "SELECT COUNT(*) FROM " .. 
+ "(SELECT * FROM %s WHERE k LIKE '%%0%%' LIMIT %u) t", + t, sysbench.opt.table_size)) + end +end + +-- Implement parallel prepare and prewarm commands +sysbench.cmdline.commands = { + prepare = {cmd_prepare, sysbench.cmdline.PARALLEL_COMMAND}, + prewarm = {cmd_prewarm, sysbench.cmdline.PARALLEL_COMMAND} +} + + +-- Template strings of random digits with 11-digit groups separated by dashes + +-- 10 groups, 119 characters +local c_value_template = "###########-###########-###########-" .. + "###########-###########-###########-" .. + "###########-###########-###########-" .. + "###########" + +-- 5 groups, 59 characters +local pad_value_template = "###########-###########-###########-" .. + "###########-###########" + +function get_c_value() + return sysbench.rand.string(c_value_template) +end + +function get_pad_value() + return sysbench.rand.string(pad_value_template) +end + +function create_table(drv, con, table_num) + local id_index_def, id_def + local engine_def = "" + local extra_table_options = "" + local query + + if sysbench.opt.secondary then + id_index_def = "KEY xid" + else + id_index_def = "PRIMARY KEY" + end + + if drv:name() == "mysql" or drv:name() == "attachsql" or + drv:name() == "drizzle" + then + if sysbench.opt.auto_inc then + id_def = "INTEGER NOT NULL AUTO_INCREMENT" + else + id_def = "INTEGER NOT NULL" + end + engine_def = "/*! ENGINE = " .. sysbench.opt.mysql_storage_engine .. " */" + extra_table_options = mysql_table_options or "" + elseif drv:name() == "pgsql" + then + if not sysbench.opt.auto_inc then + id_def = "INTEGER NOT NULL" + elseif pgsql_variant == 'redshift' then + id_def = "INTEGER IDENTITY(1,1)" + else + id_def = "SERIAL" + end + else + error("Unsupported database driver:" .. drv:name()) + end + + print(string.format("Creating table 'sbtest%d'...", table_num)) + + query = string.format([[ +CREATE TABLE sbtest%d( + id %s, + k INTEGER DEFAULT '0' NOT NULL, + c CHAR(120) DEFAULT '' NOT NULL, + pad CHAR(60) DEFAULT '' NOT NULL, + %s (id) +), + table_num, id_def, id_index_def, engine_def, extra_table_options) + + con:query(query) + + if (sysbench.opt.table_size > 0) then + print(string.format("Inserting %d records into 'sbtest%d'", + sysbench.opt.table_size, table_num)) + end + + if sysbench.opt.auto_inc then + query = "INSERT INTO sbtest" .. table_num .. "(k, c, pad) VALUES" + else + query = "INSERT INTO sbtest" .. table_num .. "(id, k, c, pad) VALUES" + end + + con:bulk_insert_init(query) + + local c_val + local pad_val + + for i = 1, sysbench.opt.table_size do + + c_val = get_c_value() + pad_val = get_pad_value() + + if (sysbench.opt.auto_inc) then + query = string.format("(%d, '%s', '%s')", + sb_rand(1, sysbench.opt.table_size), c_val, + pad_val) + else + query = string.format("(%d, %d, '%s', '%s')", + i, sb_rand(1, sysbench.opt.table_size), c_val, + pad_val) + end + + con:bulk_insert_next(query) + end + + con:bulk_insert_done() + + if sysbench.opt.create_secondary then + print(string.format("Creating a secondary index on 'sbtest%d'...", + table_num)) + con:query(string.format("CREATE INDEX k_%d ON sbtest%d(k)", + table_num, table_num)) + end +end + +local t = sysbench.sql.type +local stmt_defs = { + point_selects = { + "SELECT c FROM sbtest%u WHERE id=?", + t.INT}, + simple_ranges = { + "SELECT c FROM sbtest%u WHERE id BETWEEN ? AND ?", + t.INT, t.INT}, + sum_ranges = { + "SELECT SUM(k) FROM sbtest%u WHERE id BETWEEN ? AND ?", + t.INT, t.INT}, + order_ranges = { + "SELECT c FROM sbtest%u WHERE id BETWEEN ? AND ? 
ORDER BY c", + t.INT, t.INT}, + distinct_ranges = { + "SELECT DISTINCT c FROM sbtest%u WHERE id BETWEEN ? AND ? ORDER BY c", + t.INT, t.INT}, + index_updates = { + "UPDATE sbtest%u SET k=k+1 WHERE id=?", + t.INT}, + non_index_updates = { + "UPDATE sbtest%u SET c=? WHERE id=?", + {t.CHAR, 120}, t.INT}, + deletes = { + "DELETE FROM sbtest%u WHERE id=?", + t.INT}, + inserts = { + "INSERT INTO sbtest%u (id, k, c, pad) VALUES (?, ?, ?, ?)", + t.INT, t.INT, {t.CHAR, 120}, {t.CHAR, 60}}, +} + +function prepare_begin() + stmt.begin = con:prepare("BEGIN") +end + +function prepare_commit() + stmt.commit = con:prepare("COMMIT") +end + +function prepare_for_each_table(key) + for t = 1, sysbench.opt.tables do + stmt[t][key] = con:prepare(string.format(stmt_defs[key][1], t)) + + local nparam = #stmt_defs[key] - 1 + + if nparam > 0 then + param[t][key] = {} + end + + for p = 1, nparam do + local btype = stmt_defs[key][p+1] + local len + + if type(btype) == "table" then + len = btype[2] + btype = btype[1] + end + if btype == sysbench.sql.type.VARCHAR or + btype == sysbench.sql.type.CHAR then + param[t][key][p] = stmt[t][key]:bind_create(btype, len) + else + param[t][key][p] = stmt[t][key]:bind_create(btype) + end + end + + if nparam > 0 then + stmt[t][key]:bind_param(unpack(param[t][key])) + end + end +end + +function prepare_point_selects() + prepare_for_each_table("point_selects") +end + +function prepare_simple_ranges() + prepare_for_each_table("simple_ranges") +end + +function prepare_sum_ranges() + prepare_for_each_table("sum_ranges") +end + +function prepare_order_ranges() + prepare_for_each_table("order_ranges") +end + +function prepare_distinct_ranges() + prepare_for_each_table("distinct_ranges") +end + +function prepare_index_updates() + prepare_for_each_table("index_updates") +end + +function prepare_non_index_updates() + prepare_for_each_table("non_index_updates") +end + +function prepare_delete_inserts() + prepare_for_each_table("deletes") + prepare_for_each_table("inserts") +end + +function thread_init() + drv = sysbench.sql.driver() + con = drv:connect() + + -- Create global nested tables for prepared statements and their + -- parameters. We need a statement and a parameter set for each combination + -- of connection/table/query + stmt = {} + param = {} + + for t = 1, sysbench.opt.tables do + stmt[t] = {} + param[t] = {} + end + + -- This function is a 'callback' defined by individual benchmark scripts + prepare_statements() +end + +-- Close prepared statements +function close_statements() + for t = 1, sysbench.opt.tables do + for k, s in pairs(stmt[t]) do + stmt[t][k]:close() + end + end + if (stmt.begin ~= nil) then + stmt.begin:close() + end + if (stmt.commit ~= nil) then + stmt.commit:close() + end +end + +function thread_done() + close_statements() + con:disconnect() +end + +function cleanup() + local drv = sysbench.sql.driver() + local con = drv:connect() + + for i = 1, sysbench.opt.tables do + print(string.format("Dropping table 'sbtest%d'...", i)) + con:query("DROP TABLE IF EXISTS sbtest" .. 
i ) + end +end + +local function get_table_num() + return sysbench.rand.uniform(1, sysbench.opt.tables) +end + +local function get_id() + return sysbench.rand.default(1, sysbench.opt.table_size) +end + +function begin() + stmt.begin:execute() +end + +function commit() + stmt.commit:execute() +end + +function execute_point_selects() + local tnum = get_table_num() + local i + + for i = 1, sysbench.opt.point_selects do + param[tnum].point_selects[1]:set(get_id()) + + stmt[tnum].point_selects:execute() + end +end + +local function execute_range(key) + local tnum = get_table_num() + + for i = 1, sysbench.opt[key] do + local id = get_id() + + param[tnum][key][1]:set(id) + param[tnum][key][2]:set(id + sysbench.opt.range_size - 1) + + stmt[tnum][key]:execute() + end +end + +function execute_simple_ranges() + execute_range("simple_ranges") +end + +function execute_sum_ranges() + execute_range("sum_ranges") +end + +function execute_order_ranges() + execute_range("order_ranges") +end + +function execute_distinct_ranges() + execute_range("distinct_ranges") +end + +function execute_index_updates() + local tnum = get_table_num() + + for i = 1, sysbench.opt.index_updates do + param[tnum].index_updates[1]:set(get_id()) + + stmt[tnum].index_updates:execute() + end +end + +function execute_non_index_updates() + local tnum = get_table_num() + + for i = 1, sysbench.opt.non_index_updates do + param[tnum].non_index_updates[1]:set_rand_str(c_value_template) + param[tnum].non_index_updates[2]:set(get_id()) + + stmt[tnum].non_index_updates:execute() + end +end + +function execute_delete_inserts() + local tnum = get_table_num() + + for i = 1, sysbench.opt.delete_inserts do + local id = get_id() + local k = get_id() + + param[tnum].deletes[1]:set(id) + + param[tnum].inserts[1]:set(id) + param[tnum].inserts[2]:set(k) + param[tnum].inserts[3]:set_rand_str(c_value_template) + param[tnum].inserts[4]:set_rand_str(pad_value_template) + + stmt[tnum].deletes:execute() + stmt[tnum].inserts:execute() + end +end + +-- Re-prepare statements if we have reconnected, which is possible when some of +-- the listed error codes are in the --mysql-ignore-errors list +function sysbench.hooks.before_restart_event(errdesc) + if errdesc.sql_errno == 2013 or -- CR_SERVER_LOST + errdesc.sql_errno == 2055 or -- CR_SERVER_LOST_EXTENDED + errdesc.sql_errno == 2006 or -- CR_SERVER_GONE_ERROR + errdesc.sql_errno == 2011 -- CR_TCP_CONNECTION + then + close_statements() + prepare_statements() + end +end diff --git a/contrib/pg_tde/sysbench/oltp_common_tde.lua b/contrib/pg_tde/sysbench/oltp_common_tde.lua new file mode 100644 index 00000000000..a740bc9105b --- /dev/null +++ b/contrib/pg_tde/sysbench/oltp_common_tde.lua @@ -0,0 +1,503 @@ +-- Copyright (C) 2006-2018 Alexey Kopytov + +-- This program is free software; you can redistribute it and/or modify +-- it under the terms of the GNU General Public License as published by +-- the Free Software Foundation; either version 2 of the License, or +-- (at your option) any later version. + +-- This program is distributed in the hope that it will be useful, +-- but WITHOUT ANY WARRANTY; without even the implied warranty of +-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +-- GNU General Public License for more details. 
+ +-- You should have received a copy of the GNU General Public License +-- along with this program; if not, write to the Free Software +-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + +-- ----------------------------------------------------------------------------- +-- Common code for OLTP benchmarks. +-- ----------------------------------------------------------------------------- + +function init() + assert(event ~= nil, + "this script is meant to be included by other OLTP scripts and " .. + "should not be called directly.") +end + +if sysbench.cmdline.command == nil then + error("Command is required. Supported commands: prepare, prewarm, run, " .. + "cleanup, help") +end + +-- Command line options +sysbench.cmdline.options = { + table_size = + {"Number of rows per table", 10000}, + range_size = + {"Range size for range SELECT queries", 100}, + tables = + {"Number of tables", 1}, + point_selects = + {"Number of point SELECT queries per transaction", 10}, + simple_ranges = + {"Number of simple range SELECT queries per transaction", 1}, + sum_ranges = + {"Number of SELECT SUM() queries per transaction", 1}, + order_ranges = + {"Number of SELECT ORDER BY queries per transaction", 1}, + distinct_ranges = + {"Number of SELECT DISTINCT queries per transaction", 1}, + index_updates = + {"Number of UPDATE index queries per transaction", 1}, + non_index_updates = + {"Number of UPDATE non-index queries per transaction", 1}, + delete_inserts = + {"Number of DELETE/INSERT combinations per transaction", 1}, + range_selects = + {"Enable/disable all range SELECT queries", true}, + auto_inc = + {"Use AUTO_INCREMENT column as Primary Key (for MySQL), " .. + "or its alternatives in other DBMS. When disabled, use " .. + "client-generated IDs", true}, + skip_trx = + {"Don't start explicit transactions and execute all queries " .. + "in the AUTOCOMMIT mode", false}, + secondary = + {"Use a secondary index in place of the PRIMARY KEY", false}, + create_secondary = + {"Create a secondary index in addition to the PRIMARY KEY", true}, + mysql_storage_engine = + {"Storage engine, if MySQL is used", "innodb"}, + pgsql_variant = + {"Use this PostgreSQL variant when running with the " .. + "PostgreSQL driver. The only currently supported " .. + "variant is 'redshift'. When enabled, " .. + "create_secondary is automatically disabled, and " .. + "delete_inserts is set to 0"} +} + +-- Prepare the dataset. This command supports parallel execution, i.e. will +-- benefit from executing with --threads > 1 as long as --tables > 1 +function cmd_prepare() + local drv = sysbench.sql.driver() + local con = drv:connect() + + for i = sysbench.tid % sysbench.opt.threads + 1, sysbench.opt.tables, + sysbench.opt.threads do + create_table(drv, con, i) + end +end + +-- Preload the dataset into the server cache. This command supports parallel +-- execution, i.e. will benefit from executing with --threads > 1 as long as +-- --tables > 1 +-- +-- PS. Currently, this command is only meaningful for MySQL/InnoDB benchmarks +function cmd_prewarm() + local drv = sysbench.sql.driver() + local con = drv:connect() + + assert(drv:name() == "mysql", "prewarm is currently MySQL only") + + -- Do not create on disk tables for subsequent queries + con:query("SET tmp_table_size=2*1024*1024*1024") + con:query("SET max_heap_table_size=2*1024*1024*1024") + + for i = sysbench.tid % sysbench.opt.threads + 1, sysbench.opt.tables, + sysbench.opt.threads do + local t = "sbtest" .. i + print("Prewarming table " .. 
t) + con:query("ANALYZE TABLE sbtest" .. i) + con:query(string.format( + "SELECT AVG(id) FROM " .. + "(SELECT * FROM %s FORCE KEY (PRIMARY) " .. + "LIMIT %u) t", + t, sysbench.opt.table_size)) + con:query(string.format( + "SELECT COUNT(*) FROM " .. + "(SELECT * FROM %s WHERE k LIKE '%%0%%' LIMIT %u) t", + t, sysbench.opt.table_size)) + end +end + +-- Implement parallel prepare and prewarm commands +sysbench.cmdline.commands = { + prepare = {cmd_prepare, sysbench.cmdline.PARALLEL_COMMAND}, + prewarm = {cmd_prewarm, sysbench.cmdline.PARALLEL_COMMAND} +} + + +-- Template strings of random digits with 11-digit groups separated by dashes + +-- 10 groups, 119 characters +local c_value_template = "###########-###########-###########-" .. + "###########-###########-###########-" .. + "###########-###########-###########-" .. + "###########" + +-- 5 groups, 59 characters +local pad_value_template = "###########-###########-###########-" .. + "###########-###########" + +function get_c_value() + return sysbench.rand.string(c_value_template) +end + +function get_pad_value() + return sysbench.rand.string(pad_value_template) +end + +function create_table(drv, con, table_num) + local id_index_def, id_def + local engine_def = "" + local extra_table_options = "" + local query + + if sysbench.opt.secondary then + id_index_def = "KEY xid" + else + id_index_def = "PRIMARY KEY" + end + + if drv:name() == "mysql" or drv:name() == "attachsql" or + drv:name() == "drizzle" + then + if sysbench.opt.auto_inc then + id_def = "INTEGER NOT NULL AUTO_INCREMENT" + else + id_def = "INTEGER NOT NULL" + end + engine_def = "/*! ENGINE = " .. sysbench.opt.mysql_storage_engine .. " */" + extra_table_options = mysql_table_options or "" + elseif drv:name() == "pgsql" + then + if not sysbench.opt.auto_inc then + id_def = "INTEGER NOT NULL" + elseif pgsql_variant == 'redshift' then + id_def = "INTEGER IDENTITY(1,1)" + else + id_def = "SERIAL" + end + else + error("Unsupported database driver:" .. drv:name()) + end + + print(string.format("Creating table 'sbtest%d'...", table_num)) + + query = string.format([[ +CREATE TABLE sbtest%d( + id %s, + k INTEGER DEFAULT '0' NOT NULL, + c CHAR(120) DEFAULT '' NOT NULL, + pad CHAR(60) DEFAULT '' NOT NULL, + %s (id) +) USING tde_heap_basic %s %s]], + table_num, id_def, id_index_def, engine_def, extra_table_options) + + con:query(query) + + if (sysbench.opt.table_size > 0) then + print(string.format("Inserting %d records into 'sbtest%d'", + sysbench.opt.table_size, table_num)) + end + + if sysbench.opt.auto_inc then + query = "INSERT INTO sbtest" .. table_num .. "(k, c, pad) VALUES" + else + query = "INSERT INTO sbtest" .. table_num .. 
"(id, k, c, pad) VALUES" + end + + con:bulk_insert_init(query) + + local c_val + local pad_val + + for i = 1, sysbench.opt.table_size do + + c_val = get_c_value() + pad_val = get_pad_value() + + if (sysbench.opt.auto_inc) then + query = string.format("(%d, '%s', '%s')", + sb_rand(1, sysbench.opt.table_size), c_val, + pad_val) + else + query = string.format("(%d, %d, '%s', '%s')", + i, sb_rand(1, sysbench.opt.table_size), c_val, + pad_val) + end + + con:bulk_insert_next(query) + end + + con:bulk_insert_done() + + if sysbench.opt.create_secondary then + print(string.format("Creating a secondary index on 'sbtest%d'...", + table_num)) + con:query(string.format("CREATE INDEX k_%d ON sbtest%d(k)", + table_num, table_num)) + end +end + +local t = sysbench.sql.type +local stmt_defs = { + point_selects = { + "SELECT c FROM sbtest%u WHERE id=?", + t.INT}, + simple_ranges = { + "SELECT c FROM sbtest%u WHERE id BETWEEN ? AND ?", + t.INT, t.INT}, + sum_ranges = { + "SELECT SUM(k) FROM sbtest%u WHERE id BETWEEN ? AND ?", + t.INT, t.INT}, + order_ranges = { + "SELECT c FROM sbtest%u WHERE id BETWEEN ? AND ? ORDER BY c", + t.INT, t.INT}, + distinct_ranges = { + "SELECT DISTINCT c FROM sbtest%u WHERE id BETWEEN ? AND ? ORDER BY c", + t.INT, t.INT}, + index_updates = { + "UPDATE sbtest%u SET k=k+1 WHERE id=?", + t.INT}, + non_index_updates = { + "UPDATE sbtest%u SET c=? WHERE id=?", + {t.CHAR, 120}, t.INT}, + deletes = { + "DELETE FROM sbtest%u WHERE id=?", + t.INT}, + inserts = { + "INSERT INTO sbtest%u (id, k, c, pad) VALUES (?, ?, ?, ?)", + t.INT, t.INT, {t.CHAR, 120}, {t.CHAR, 60}}, +} + +function prepare_begin() + stmt.begin = con:prepare("BEGIN") +end + +function prepare_commit() + stmt.commit = con:prepare("COMMIT") +end + +function prepare_for_each_table(key) + for t = 1, sysbench.opt.tables do + stmt[t][key] = con:prepare(string.format(stmt_defs[key][1], t)) + + local nparam = #stmt_defs[key] - 1 + + if nparam > 0 then + param[t][key] = {} + end + + for p = 1, nparam do + local btype = stmt_defs[key][p+1] + local len + + if type(btype) == "table" then + len = btype[2] + btype = btype[1] + end + if btype == sysbench.sql.type.VARCHAR or + btype == sysbench.sql.type.CHAR then + param[t][key][p] = stmt[t][key]:bind_create(btype, len) + else + param[t][key][p] = stmt[t][key]:bind_create(btype) + end + end + + if nparam > 0 then + stmt[t][key]:bind_param(unpack(param[t][key])) + end + end +end + +function prepare_point_selects() + prepare_for_each_table("point_selects") +end + +function prepare_simple_ranges() + prepare_for_each_table("simple_ranges") +end + +function prepare_sum_ranges() + prepare_for_each_table("sum_ranges") +end + +function prepare_order_ranges() + prepare_for_each_table("order_ranges") +end + +function prepare_distinct_ranges() + prepare_for_each_table("distinct_ranges") +end + +function prepare_index_updates() + prepare_for_each_table("index_updates") +end + +function prepare_non_index_updates() + prepare_for_each_table("non_index_updates") +end + +function prepare_delete_inserts() + prepare_for_each_table("deletes") + prepare_for_each_table("inserts") +end + +function thread_init() + drv = sysbench.sql.driver() + con = drv:connect() + + -- Create global nested tables for prepared statements and their + -- parameters. 
We need a statement and a parameter set for each combination + -- of connection/table/query + stmt = {} + param = {} + + for t = 1, sysbench.opt.tables do + stmt[t] = {} + param[t] = {} + end + + -- This function is a 'callback' defined by individual benchmark scripts + prepare_statements() +end + +-- Close prepared statements +function close_statements() + for t = 1, sysbench.opt.tables do + for k, s in pairs(stmt[t]) do + stmt[t][k]:close() + end + end + if (stmt.begin ~= nil) then + stmt.begin:close() + end + if (stmt.commit ~= nil) then + stmt.commit:close() + end +end + +function thread_done() + close_statements() + con:disconnect() +end + +function cleanup() + local drv = sysbench.sql.driver() + local con = drv:connect() + + for i = 1, sysbench.opt.tables do + print(string.format("Dropping table 'sbtest%d'...", i)) + con:query("DROP TABLE IF EXISTS sbtest" .. i ) + end +end + +local function get_table_num() + return sysbench.rand.uniform(1, sysbench.opt.tables) +end + +local function get_id() + return sysbench.rand.default(1, sysbench.opt.table_size) +end + +function begin() + stmt.begin:execute() +end + +function commit() + stmt.commit:execute() +end + +function execute_point_selects() + local tnum = get_table_num() + local i + + for i = 1, sysbench.opt.point_selects do + param[tnum].point_selects[1]:set(get_id()) + + stmt[tnum].point_selects:execute() + end +end + +local function execute_range(key) + local tnum = get_table_num() + + for i = 1, sysbench.opt[key] do + local id = get_id() + + param[tnum][key][1]:set(id) + param[tnum][key][2]:set(id + sysbench.opt.range_size - 1) + + stmt[tnum][key]:execute() + end +end + +function execute_simple_ranges() + execute_range("simple_ranges") +end + +function execute_sum_ranges() + execute_range("sum_ranges") +end + +function execute_order_ranges() + execute_range("order_ranges") +end + +function execute_distinct_ranges() + execute_range("distinct_ranges") +end + +function execute_index_updates() + local tnum = get_table_num() + + for i = 1, sysbench.opt.index_updates do + param[tnum].index_updates[1]:set(get_id()) + + stmt[tnum].index_updates:execute() + end +end + +function execute_non_index_updates() + local tnum = get_table_num() + + for i = 1, sysbench.opt.non_index_updates do + param[tnum].non_index_updates[1]:set_rand_str(c_value_template) + param[tnum].non_index_updates[2]:set(get_id()) + + stmt[tnum].non_index_updates:execute() + end +end + +function execute_delete_inserts() + local tnum = get_table_num() + + for i = 1, sysbench.opt.delete_inserts do + local id = get_id() + local k = get_id() + + param[tnum].deletes[1]:set(id) + + param[tnum].inserts[1]:set(id) + param[tnum].inserts[2]:set(k) + param[tnum].inserts[3]:set_rand_str(c_value_template) + param[tnum].inserts[4]:set_rand_str(pad_value_template) + + stmt[tnum].deletes:execute() + stmt[tnum].inserts:execute() + end +end + +-- Re-prepare statements if we have reconnected, which is possible when some of +-- the listed error codes are in the --mysql-ignore-errors list +function sysbench.hooks.before_restart_event(errdesc) + if errdesc.sql_errno == 2013 or -- CR_SERVER_LOST + errdesc.sql_errno == 2055 or -- CR_SERVER_LOST_EXTENDED + errdesc.sql_errno == 2006 or -- CR_SERVER_GONE_ERROR + errdesc.sql_errno == 2011 -- CR_TCP_CONNECTION + then + close_statements() + prepare_statements() + end +end diff --git a/contrib/pg_tde/sysbench/oltp_delete.lua b/contrib/pg_tde/sysbench/oltp_delete.lua new file mode 100755 index 00000000000..23668b2c6b8 --- /dev/null +++ 
b/contrib/pg_tde/sysbench/oltp_delete.lua
@@ -0,0 +1,34 @@
+#!/usr/bin/sysbench
+-- Copyright (C) 2006-2017 Alexey Kopytov
+
+-- This program is free software; you can redistribute it and/or modify
+-- it under the terms of the GNU General Public License as published by
+-- the Free Software Foundation; either version 2 of the License, or
+-- (at your option) any later version.
+
+-- This program is distributed in the hope that it will be useful,
+-- but WITHOUT ANY WARRANTY; without even the implied warranty of
+-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+-- GNU General Public License for more details.
+
+-- You should have received a copy of the GNU General Public License
+-- along with this program; if not, write to the Free Software
+-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+-- ----------------------------------------------------------------------
+-- Delete-Only OLTP benchmark
+-- ----------------------------------------------------------------------
+
+require("oltp_common")
+
+function prepare_statements()
+ prepare_for_each_table("deletes")
+end
+
+function event()
+ local tnum = sysbench.rand.uniform(1, sysbench.opt.tables)
+ local id = sysbench.rand.default(1, sysbench.opt.table_size)
+
+ param[tnum].deletes[1]:set(id)
+ stmt[tnum].deletes:execute()
+end
diff --git a/contrib/pg_tde/sysbench/oltp_insert.lua b/contrib/pg_tde/sysbench/oltp_insert.lua
new file mode 100755
index 00000000000..af8bd1f6d5a
--- /dev/null
+++ b/contrib/pg_tde/sysbench/oltp_insert.lua
@@ -0,0 +1,65 @@
+#!/usr/bin/sysbench
+-- Copyright (C) 2006-2017 Alexey Kopytov
+
+-- This program is free software; you can redistribute it and/or modify
+-- it under the terms of the GNU General Public License as published by
+-- the Free Software Foundation; either version 2 of the License, or
+-- (at your option) any later version.
+
+-- This program is distributed in the hope that it will be useful,
+-- but WITHOUT ANY WARRANTY; without even the implied warranty of
+-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+-- GNU General Public License for more details.
+
+-- You should have received a copy of the GNU General Public License
+-- along with this program; if not, write to the Free Software
+-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+-- ----------------------------------------------------------------------
+-- Insert-Only OLTP benchmark
+-- ----------------------------------------------------------------------
+
+require("oltp_common")
+
+sysbench.cmdline.commands.prepare = {
+ function ()
+ if (not sysbench.opt.auto_inc) then
+ -- Create empty tables on prepare when --auto-inc is off, since IDs
+ -- generated on prepare may collide later with values generated by
+ -- sysbench.rand.unique()
+ sysbench.opt.table_size=0
+ end
+
+ cmd_prepare()
+ end,
+ sysbench.cmdline.PARALLEL_COMMAND
+}
+
+function prepare_statements()
+ -- We do not use prepared statements here, but oltp_common.lua expects this
+ -- function to be defined
+end
+
+function event()
+ local table_name = "sbtest" .. sysbench.rand.uniform(1, sysbench.opt.tables)
+ local k_val = sysbench.rand.default(1, sysbench.opt.table_size)
+ local c_val = get_c_value()
+ local pad_val = get_pad_value()
+
+ if (drv:name() == "pgsql" and sysbench.opt.auto_inc) then
+ con:query(string.format("INSERT INTO %s (k, c, pad) VALUES " ..
+ "(%d, '%s', '%s')", + table_name, k_val, c_val, pad_val)) + else + if (sysbench.opt.auto_inc) then + i = 0 + else + -- Convert a uint32_t value to SQL INT + i = sysbench.rand.unique() - 2147483648 + end + + con:query(string.format("INSERT INTO %s (id, k, c, pad) VALUES " .. + "(%d, %d, '%s', '%s')", + table_name, i, k_val, c_val, pad_val)) + end +end diff --git a/contrib/pg_tde/sysbench/oltp_point_select.lua b/contrib/pg_tde/sysbench/oltp_point_select.lua new file mode 100755 index 00000000000..b82cb071170 --- /dev/null +++ b/contrib/pg_tde/sysbench/oltp_point_select.lua @@ -0,0 +1,34 @@ +#!/usr/bin/sysbench +-- Copyright (C) 2006-2017 Alexey Kopytov + +-- This program is free software; you can redistribute it and/or modify +-- it under the terms of the GNU General Public License as published by +-- the Free Software Foundation; either version 2 of the License, or +-- (at your option) any later version. + +-- This program is distributed in the hope that it will be useful, +-- but WITHOUT ANY WARRANTY; without even the implied warranty of +-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +-- GNU General Public License for more details. + +-- You should have received a copy of the GNU General Public License +-- along with this program; if not, write to the Free Software +-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + +-- ---------------------------------------------------------------------- +-- OLTP Point Select benchmark +-- ---------------------------------------------------------------------- + +require("oltp_common") + +function prepare_statements() + -- use 1 query per event, rather than sysbench.opt.point_selects which + -- defaults to 10 in other OLTP scripts + sysbench.opt.point_selects=1 + + prepare_point_selects() +end + +function event() + execute_point_selects() +end diff --git a/contrib/pg_tde/sysbench/oltp_read_only.lua b/contrib/pg_tde/sysbench/oltp_read_only.lua new file mode 100755 index 00000000000..1c9ab05a720 --- /dev/null +++ b/contrib/pg_tde/sysbench/oltp_read_only.lua @@ -0,0 +1,57 @@ +#!/usr/bin/sysbench +-- Copyright (C) 2006-2017 Alexey Kopytov + +-- This program is free software; you can redistribute it and/or modify +-- it under the terms of the GNU General Public License as published by +-- the Free Software Foundation; either version 2 of the License, or +-- (at your option) any later version. + +-- This program is distributed in the hope that it will be useful, +-- but WITHOUT ANY WARRANTY; without even the implied warranty of +-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +-- GNU General Public License for more details. 
+ +-- You should have received a copy of the GNU General Public License +-- along with this program; if not, write to the Free Software +-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + +-- ---------------------------------------------------------------------- +-- Read-Only OLTP benchmark +-- ---------------------------------------------------------------------- + +require("oltp_common") + +function prepare_statements() + prepare_point_selects() + + if not sysbench.opt.skip_trx then + prepare_begin() + prepare_commit() + end + + if sysbench.opt.range_selects then + prepare_simple_ranges() + prepare_sum_ranges() + prepare_order_ranges() + prepare_distinct_ranges() + end +end + +function event() + if not sysbench.opt.skip_trx then + begin() + end + + execute_point_selects() + + if sysbench.opt.range_selects then + execute_simple_ranges() + execute_sum_ranges() + execute_order_ranges() + execute_distinct_ranges() + end + + if not sysbench.opt.skip_trx then + commit() + end +end diff --git a/contrib/pg_tde/sysbench/oltp_read_write.lua b/contrib/pg_tde/sysbench/oltp_read_write.lua new file mode 100755 index 00000000000..b3ec02e0ede --- /dev/null +++ b/contrib/pg_tde/sysbench/oltp_read_write.lua @@ -0,0 +1,65 @@ +#!/usr/bin/sysbench +-- Copyright (C) 2006-2017 Alexey Kopytov + +-- This program is free software; you can redistribute it and/or modify +-- it under the terms of the GNU General Public License as published by +-- the Free Software Foundation; either version 2 of the License, or +-- (at your option) any later version. + +-- This program is distributed in the hope that it will be useful, +-- but WITHOUT ANY WARRANTY; without even the implied warranty of +-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +-- GNU General Public License for more details. 
+ +-- You should have received a copy of the GNU General Public License +-- along with this program; if not, write to the Free Software +-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + +-- ---------------------------------------------------------------------- +-- Read/Write OLTP benchmark +-- ---------------------------------------------------------------------- + +require("oltp_common") + +function prepare_statements() + if not sysbench.opt.skip_trx then + prepare_begin() + prepare_commit() + end + + prepare_point_selects() + + if sysbench.opt.range_selects then + prepare_simple_ranges() + prepare_sum_ranges() + prepare_order_ranges() + prepare_distinct_ranges() + end + + prepare_index_updates() + prepare_non_index_updates() + prepare_delete_inserts() +end + +function event() + if not sysbench.opt.skip_trx then + begin() + end + + execute_point_selects() + + if sysbench.opt.range_selects then + execute_simple_ranges() + execute_sum_ranges() + execute_order_ranges() + execute_distinct_ranges() + end + + execute_index_updates() + execute_non_index_updates() + execute_delete_inserts() + + if not sysbench.opt.skip_trx then + commit() + end +end diff --git a/contrib/pg_tde/sysbench/oltp_update_index.lua b/contrib/pg_tde/sysbench/oltp_update_index.lua new file mode 100755 index 00000000000..39ae347a1b4 --- /dev/null +++ b/contrib/pg_tde/sysbench/oltp_update_index.lua @@ -0,0 +1,30 @@ +#!/usr/bin/sysbench +-- Copyright (C) 2006-2017 Alexey Kopytov + +-- This program is free software; you can redistribute it and/or modify +-- it under the terms of the GNU General Public License as published by +-- the Free Software Foundation; either version 2 of the License, or +-- (at your option) any later version. + +-- This program is distributed in the hope that it will be useful, +-- but WITHOUT ANY WARRANTY; without even the implied warranty of +-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +-- GNU General Public License for more details. + +-- You should have received a copy of the GNU General Public License +-- along with this program; if not, write to the Free Software +-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + +-- ---------------------------------------------------------------------- +-- Update-Index OLTP benchmark +-- ---------------------------------------------------------------------- + +require("oltp_common") + +function prepare_statements() + prepare_index_updates() +end + +function event() + execute_index_updates(con) +end diff --git a/contrib/pg_tde/sysbench/oltp_update_non_index.lua b/contrib/pg_tde/sysbench/oltp_update_non_index.lua new file mode 100755 index 00000000000..a504de57e7b --- /dev/null +++ b/contrib/pg_tde/sysbench/oltp_update_non_index.lua @@ -0,0 +1,30 @@ +#!/usr/bin/sysbench +-- Copyright (C) 2006-2017 Alexey Kopytov + +-- This program is free software; you can redistribute it and/or modify +-- it under the terms of the GNU General Public License as published by +-- the Free Software Foundation; either version 2 of the License, or +-- (at your option) any later version. + +-- This program is distributed in the hope that it will be useful, +-- but WITHOUT ANY WARRANTY; without even the implied warranty of +-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +-- GNU General Public License for more details. 
+ +-- You should have received a copy of the GNU General Public License +-- along with this program; if not, write to the Free Software +-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + +-- ---------------------------------------------------------------------- +-- Update-Non-Index OLTP benchmark +-- ---------------------------------------------------------------------- + +require("oltp_common") + +function prepare_statements() + prepare_non_index_updates() +end + +function event() + execute_non_index_updates() +end diff --git a/contrib/pg_tde/sysbench/oltp_write_only.lua b/contrib/pg_tde/sysbench/oltp_write_only.lua new file mode 100755 index 00000000000..1bf814f382b --- /dev/null +++ b/contrib/pg_tde/sysbench/oltp_write_only.lua @@ -0,0 +1,47 @@ +#!/usr/bin/sysbench +-- Copyright (C) 2006-2017 Alexey Kopytov + +-- This program is free software; you can redistribute it and/or modify +-- it under the terms of the GNU General Public License as published by +-- the Free Software Foundation; either version 2 of the License, or +-- (at your option) any later version. + +-- This program is distributed in the hope that it will be useful, +-- but WITHOUT ANY WARRANTY; without even the implied warranty of +-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +-- GNU General Public License for more details. + +-- You should have received a copy of the GNU General Public License +-- along with this program; if not, write to the Free Software +-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + +-- ---------------------------------------------------------------------- +-- Write-Only OLTP benchmark +-- ---------------------------------------------------------------------- + +require("oltp_common") + +function prepare_statements() + if not sysbench.opt.skip_trx then + prepare_begin() + prepare_commit() + end + + prepare_index_updates() + prepare_non_index_updates() + prepare_delete_inserts() +end + +function event() + if not sysbench.opt.skip_trx then + begin() + end + + execute_index_updates() + execute_non_index_updates() + execute_delete_inserts() + + if not sysbench.opt.skip_trx then + commit() + end +end diff --git a/contrib/pg_tde/sysbench/select_random_points.lua b/contrib/pg_tde/sysbench/select_random_points.lua new file mode 100755 index 00000000000..386a62a4f7b --- /dev/null +++ b/contrib/pg_tde/sysbench/select_random_points.lua @@ -0,0 +1,72 @@ +#!/usr/bin/sysbench +-- This test is designed for testing MariaDB's key_cache_segments for MyISAM, +-- and should work with other storage engines as well. +-- +-- For details about key_cache_segments please refer to: +-- http://kb.askmonty.org/v/segmented-key-cache +-- + +require("oltp_common") + +-- Add random_points to the list of standard OLTP options +sysbench.cmdline.options.random_points = + {"Number of random points in the IN() clause in generated SELECTs", 10} + +-- Override standard prepare/cleanup OLTP functions, as this benchmark does not +-- support multiple tables +oltp_prepare = prepare +oltp_cleanup = cleanup + +function prepare() + assert(sysbench.opt.tables == 1, "this benchmark does not support " .. + "--tables > 1") + oltp_prepare() +end + +function cleanup() + assert(sysbench.opt.tables == 1, "this benchmark does not support " .. + "--tables > 1") + oltp_cleanup() +end + +function thread_init() + drv = sysbench.sql.driver() + con = drv:connect() + + local points = string.rep("?, ", sysbench.opt.random_points - 1) .. "?" 
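+ -- For example, with --random-points=3 the string built above is
+ -- "?, ?, ?", so the statement prepared below reads:
+ -- SELECT id, k, c, pad FROM sbtest1 WHERE k IN (?, ?, ?)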
+ + stmt = con:prepare(string.format([[ + SELECT id, k, c, pad + FROM sbtest1 + WHERE k IN (%s) + ]], points)) + + params = {} + for j = 1, sysbench.opt.random_points do + params[j] = stmt:bind_create(sysbench.sql.type.INT) + end + + stmt:bind_param(unpack(params)) + + rlen = sysbench.opt.table_size / sysbench.opt.threads + + thread_id = sysbench.tid % sysbench.opt.threads +end + +function thread_done() + stmt:close() + con:disconnect() +end + +function event() + -- To prevent overlapping of our range queries we need to partition the whole + -- table into 'threads' segments and then make each thread work with its + -- own segment. + for i = 1, sysbench.opt.random_points do + local rmin = rlen * thread_id + local rmax = rmin + rlen + params[i]:set(sb_rand(rmin, rmax)) + end + + stmt:execute() +end diff --git a/contrib/pg_tde/sysbench/select_random_ranges.lua b/contrib/pg_tde/sysbench/select_random_ranges.lua new file mode 100755 index 00000000000..f74c1410b93 --- /dev/null +++ b/contrib/pg_tde/sysbench/select_random_ranges.lua @@ -0,0 +1,77 @@ +#!/usr/bin/sysbench +-- This test is designed for testing MariaDB's key_cache_segments for MyISAM, +-- and should work with other storage engines as well. +-- +-- For details about key_cache_segments please refer to: +-- http://kb.askmonty.org/v/segmented-key-cache +-- + +require("oltp_common") + +-- Add --number-of-ranges and --delta to the list of standard OLTP options +sysbench.cmdline.options.number_of_ranges = + {"Number of random BETWEEN ranges per SELECT", 10} +sysbench.cmdline.options.delta = + {"Size of BETWEEN ranges", 5} + +-- Override standard prepare/cleanup OLTP functions, as this benchmark does not +-- support multiple tables +oltp_prepare = prepare +oltp_cleanup = cleanup + +function prepare() + assert(sysbench.opt.tables == 1, "this benchmark does not support " .. + "--tables > 1") + oltp_prepare() +end + +function cleanup() + assert(sysbench.opt.tables == 1, "this benchmark does not support " .. + "--tables > 1") + oltp_cleanup() +end + +function thread_init() + drv = sysbench.sql.driver() + con = drv:connect() + + local ranges = string.rep("k BETWEEN ? AND ? OR ", + sysbench.opt.number_of_ranges - 1) .. + "k BETWEEN ? AND ?" + + stmt = con:prepare(string.format([[ + SELECT count(k) + FROM sbtest1 + WHERE %s]], ranges)) + + params = {} + for j = 1, sysbench.opt.number_of_ranges*2 do + params[j] = stmt:bind_create(sysbench.sql.type.INT) + end + + stmt:bind_param(unpack(params)) + + rlen = sysbench.opt.table_size / sysbench.opt.threads + + thread_id = sysbench.tid % sysbench.opt.threads +end + +function thread_done() + stmt:close() + con:disconnect() +end + +function event() + -- To prevent overlapping of our range queries we need to partition the whole + -- table into 'threads' segments and then make each thread work with its + -- own segment. 
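+ -- (e.g. with --table-size=10000 and --threads=4, rlen is 2500, so thread 0
+ -- draws range starts from [0, 2500], thread 1 from [2500, 5000], and so on.)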
+ for i = 1, sysbench.opt.number_of_ranges*2, 2 do
+ local rmin = rlen * thread_id
+ local rmax = rmin + rlen
+ local val = sb_rand(rmin, rmax)
+ params[i]:set(val)
+ params[i+1]:set(val + sysbench.opt.delta)
+ end
+
+ stmt:execute()
+end
diff --git a/contrib/pg_tde/t/001_basic.pl b/contrib/pg_tde/t/001_basic.pl
new file mode 100644
index 00000000000..914f4af9ca7
--- /dev/null
+++ b/contrib/pg_tde/t/001_basic.pl
@@ -0,0 +1,98 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+use File::Basename;
+use File::Compare;
+use File::Copy;
+use Test::More;
+use lib 't';
+use pgtde;
+
+# Get file name and CREATE out file name and dirs WHERE required
+PGTDE::setup_files_dir(basename($0));
+
+# CREATE new PostgreSQL node and do initdb
+my $node = PGTDE->pgtde_init_pg();
+my $pgdata = $node->data_dir;
+
+# UPDATE postgresql.conf to include/load pg_tde library
+open my $conf, '>>', "$pgdata/postgresql.conf";
+print $conf "shared_preload_libraries = 'pg_tde'\n";
+close $conf;
+
+# Start server
+my $rt_value = $node->start;
+ok($rt_value == 1, "Start Server");
+
+# CREATE EXTENSION and change out file permissions
+my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']);
+ok($cmdret == 0, "CREATE PGTDE EXTENSION");
+PGTDE::append_to_file($stdout);
+
+($cmdret, $stdout, $stderr) = $node->psql('postgres', 'SELECT extname, extversion FROM pg_extension WHERE extname = \'pg_tde\';', extra_params => ['-a']);
+ok($cmdret == 0, "SELECT PGTDE VERSION");
+PGTDE::append_to_file($stdout);
+
+$rt_value = $node->psql('postgres', 'CREATE TABLE test_enc(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']);
+ok($rt_value == 3, "Failing query");
+
+
+# Restart the server
+PGTDE::append_to_file("-- server restart");
+$node->stop();
+
+$rt_value = $node->start();
+ok($rt_value == 1, "Restart Server");
+
+$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per');", extra_params => ['-a']);
+$rt_value = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault');", extra_params => ['-a']);
+
+$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']);
+PGTDE::append_to_file($stdout);
+
+$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc (k) VALUES (\'foobar\'),(\'barfoo\');', extra_params => ['-a']);
+PGTDE::append_to_file($stdout);
+
+$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']);
+PGTDE::append_to_file($stdout);
+
+# Restart the server
+PGTDE::append_to_file("-- server restart");
+$rt_value = $node->stop();
+$rt_value = $node->start();
+
+$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']);
+PGTDE::append_to_file($stdout);
+
+# Verify that we can't see the data in the file
+my $tablefile = $node->safe_psql('postgres', 'SHOW data_directory;');
+$tablefile .= '/';
+$tablefile .= $node->safe_psql('postgres', 'SELECT pg_relation_filepath(\'test_enc\');');
+
+my $strings = 'TABLEFILE FOUND: ';
+$strings .= `(ls $tablefile >/dev/null && echo yes) || echo no`;
+PGTDE::append_to_file($strings);
+
+$strings = 'CONTAINS FOO (should be empty): ';
+$strings .= `strings $tablefile | grep foo`;
+PGTDE::append_to_file($strings);
+
+$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc;', extra_params => ['-a']);
+PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "DROP PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); +# Stop the server +$node->stop(); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. +is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. +done_testing(); diff --git a/contrib/pg_tde/t/002_rotate_key.pl b/contrib/pg_tde/t/002_rotate_key.pl new file mode 100644 index 00000000000..75fb943d180 --- /dev/null +++ b/contrib/pg_tde/t/002_rotate_key.pl @@ -0,0 +1,103 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use File::Basename; +use File::Compare; +use File::Copy; +use Test::More; +use lib 't'; +use pgtde; + +# Get file name and CREATE out file name and dirs WHERE requried +PGTDE::setup_files_dir(basename($0)); + +# CREATE new PostgreSQL node and do initdb +my $node = PGTDE->pgtde_init_pg(); +my $pgdata = $node->data_dir; + +# UPDATE postgresql.conf to include/load pg_tde library +open my $conf, '>>', "$pgdata/postgresql.conf"; +print $conf "shared_preload_libraries = 'pg_tde'\n"; +close $conf; + +# Start server +my $rt_value = $node->start; +ok($rt_value == 1, "Start Server"); + +# CREATE EXTENSION and change out file permissions +my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + + +$rt_value = $node->psql('postgres', 'CREATE TABLE test_enc(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']); +ok($rt_value == 3, "Failing query"); + + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$node->stop(); + +$rt_value = $node->start(); +ok($rt_value == 1, "Restart Server"); + +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per');", extra_params => ['-a']); +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-2','/tmp/pg_tde_test_keyring_2.per');", extra_params => ['-a']); +$rt_value = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault');", extra_params => ['-a']); + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc (k) VALUES (5),(6);', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +#rotate key +PGTDE::append_to_file("-- ROTATE KEY pg_tde_rotate_principal_key('rotated-principal-key','file-2');"); +$rt_value = $node->psql('postgres', "SELECT pg_tde_rotate_principal_key('rotated-principal-key','file-2');", extra_params => ['-a']); +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$rt_value = $node->stop(); +$rt_value = $node->start(); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); 
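+# (Note: the rows are expected to stay readable after the rotation and the
+# restart, because rotating the principal key re-encrypts only the stored
+# internal keys; the table data itself is not re-encrypted.)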
+PGTDE::append_to_file($stdout); + +#Again rotate key +PGTDE::append_to_file("-- ROTATE KEY pg_tde_rotate_principal_key();"); +$rt_value = $node->psql('postgres', "SELECT pg_tde_rotate_principal_key();", extra_params => ['-a']); +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$rt_value = $node->stop(); +$rt_value = $node->start(); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "DROP PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); +# Stop the server +$node->stop(); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. +is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. +done_testing(); diff --git a/contrib/pg_tde/t/003_remote_config.pl b/contrib/pg_tde/t/003_remote_config.pl new file mode 100644 index 00000000000..aa22c1bacc4 --- /dev/null +++ b/contrib/pg_tde/t/003_remote_config.pl @@ -0,0 +1,112 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use File::Basename; +use File::Compare; +use File::Copy; +use Test::More; +use lib 't'; +use pgtde; + +# Get file name and CREATE out file name and dirs WHERE requried +PGTDE::setup_files_dir(basename($0)); + +# CREATE new PostgreSQL node and do initdb +my $node = PGTDE->pgtde_init_pg(); +my $pgdata = $node->data_dir; + +{ +package MyWebServer; + +use HTTP::Server::Simple::CGI; +use base qw(HTTP::Server::Simple::CGI); + +my %dispatch = ( + '/hello' => \&resp_hello, + # ... 
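+    # Further endpoints can be registered the same way; a hypothetical
+    # handler (not used by this test) would be wired up as:
+    #   '/goodbye' => \&resp_goodbye,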
+); + +sub handle_request { + my $self = shift; + my $cgi = shift; + + my $path = $cgi->path_info(); + my $handler = $dispatch{$path}; + + if (ref($handler) eq "CODE") { + print "HTTP/1.0 200 OK\r\n"; + $handler->($cgi); + + } else { + print "HTTP/1.0 404 Not found\r\n"; + print $cgi->header, + $cgi->start_html('Not found'), + $cgi->h1('Not found'), + $cgi->end_html; + } +} + +sub resp_hello { + my $cgi = shift; + print $cgi->header, + "/tmp/http_datafile\r\n"; +} + +} +my $pid = MyWebServer->new(8888)->background(); + + +# UPDATE postgresql.conf to include/load pg_tde library +open my $conf, '>>', "$pgdata/postgresql.conf"; +print $conf "shared_preload_libraries = 'pg_tde'\n"; +close $conf; + +my $rt_value = $node->stop(); +$rt_value = $node->start(); +ok($rt_value == 1, "Restart Server"); + +my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-provider', json_object( 'type' VALUE 'remote', 'url' VALUE 'http://localhost:8888/hello' ));", extra_params => ['-a']); +$rt_value = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-provider');", extra_params => ['-a']); + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc2(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc2 (k) VALUES (5),(6);', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc2 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$rt_value = $node->stop(); +$rt_value = $node->start(); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc2 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc2;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "DROP PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); +# Stop the server +$node->stop(); + +system("kill $pid"); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. +is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. 
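+# The test above relies on the server fetching the configured URL and using
+# the response body as the keyring data file path. A quick manual check of
+# the endpoint (a hypothetical debugging aid, not part of the test) could be:
+#
+#   use LWP::Simple;
+#   print LWP::Simple::get('http://localhost:8888/hello');   # "/tmp/http_datafile"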
+done_testing(); diff --git a/contrib/pg_tde/t/004_file_config.pl b/contrib/pg_tde/t/004_file_config.pl new file mode 100644 index 00000000000..411aa7f3f04 --- /dev/null +++ b/contrib/pg_tde/t/004_file_config.pl @@ -0,0 +1,74 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use File::Basename; +use File::Compare; +use File::Copy; +use Test::More; +use lib 't'; +use pgtde; + +# Get file name and CREATE out file name and dirs WHERE requried +PGTDE::setup_files_dir(basename($0)); + +# CREATE new PostgreSQL node and do initdb +my $node = PGTDE->pgtde_init_pg(); +my $pgdata = $node->data_dir; + +copy("$pgdata/postgresql.conf", "$pgdata/postgresql.conf.bak"); + +# UPDATE postgresql.conf to include/load pg_tde library +open my $conf, '>>', "$pgdata/postgresql.conf"; +print $conf "shared_preload_libraries = 'pg_tde'\n"; +close $conf; + +open my $conf2, '>>', "/tmp/datafile-location"; +print $conf2 "/tmp/keyring_data_file\n"; +close $conf2; + +my $rt_value = $node->start(); +ok($rt_value == 1, "Start Server"); + +my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-provider', json_object( 'type' VALUE 'file', 'path' VALUE '/tmp/datafile-location' ));", extra_params => ['-a']); +$rt_value = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-provider');", extra_params => ['-a']); + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc1(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc1 (k) VALUES (5),(6);', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc1 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$rt_value = $node->stop(); +$rt_value = $node->start(); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc1 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc1;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "DROP PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); +# Stop the server +$node->stop(); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. +is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. 
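+# Note on the provider configuration above: the 'file' provider is set up
+# through an indirection, i.e. /tmp/datafile-location is read first and its
+# contents ("/tmp/keyring_data_file") become the actual keyring path. A
+# direct equivalent using the two-argument form seen in other tests would be
+# (sketch, not executed here):
+#
+#   SELECT pg_tde_add_key_provider_file('file-provider', '/tmp/keyring_data_file');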
+done_testing(); diff --git a/contrib/pg_tde/t/005_multiple_extensions.pl b/contrib/pg_tde/t/005_multiple_extensions.pl new file mode 100644 index 00000000000..e8b499e3ba6 --- /dev/null +++ b/contrib/pg_tde/t/005_multiple_extensions.pl @@ -0,0 +1,158 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use File::Basename; +use File::Compare; +use File::Copy; +use Test::More; +use lib 't'; +use pgtde; + +# Get file name and CREATE out file name and dirs WHERE requried +PGTDE::setup_files_dir(basename($0)); + +my $PG_VERSION_STRING = `pg_config --version`; + +if (index(lc($PG_VERSION_STRING), lc("Percona Distribution")) == -1) +{ + plan skip_all => "pg_tde test case only for PPG server package install with extensions."; +} + +# CREATE new PostgreSQL node and do initdb +my $node = PGTDE->pgtde_init_pg(); +my $pgdata = $node->data_dir; + +copy("$pgdata/postgresql.conf", "$pgdata/postgresql.conf.bak"); + +# UPDATE postgresql.conf to include/load pg_stat_monitor library +open my $conf, '>>', "$pgdata/postgresql.conf"; +print $conf "shared_preload_libraries = 'pg_tde, pg_stat_monitor, pgaudit, set_user, pg_repack'\n"; +print $conf "pg_stat_monitor.pgsm_bucket_time = 360000\n"; +print $conf "pg_stat_monitor.pgsm_normalized_query = 'yes'\n"; +close $conf; + +open my $conf2, '>>', "/tmp/datafile-location"; +print $conf2 "/tmp/keyring_data_file\n"; +close $conf2; + +# Start server +my $rt_value = $node->start; +ok($rt_value == 1, "Start Server"); + +# Create PGSM extension +my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_stat_monitor;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGSM EXTENSION"); +PGTDE::append_to_debug_file($stdout); + +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'SELECT pg_stat_monitor_reset();', extra_params => ['-a', '-Pformat=aligned','-Ptuples_only=off']); +ok($cmdret == 0, "Reset PGSM EXTENSION"); +PGTDE::append_to_debug_file($stdout); + +# Create pg_tde extension +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + +# Create Other extensions +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS pgaudit;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE pgaudit EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS set_user;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE set_user EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS pg_repack;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE pg_repack EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = $node->psql('postgres', "SET pgaudit.log = 'none'; CREATE EXTENSION IF NOT EXISTS postgis; SET pgaudit.log = 'all';", extra_params => ['-a']); +ok($cmdret == 0, "CREATE postgis EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS postgis_raster;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE postgis_raster EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS postgis_sfcgal;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE postgis_sfcgal EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = 
$node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE fuzzystrmatch EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS address_standardizer;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE address_standardizer EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS address_standardizer_data_us;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE address_standardizer_data_us EXTENSION"); +PGTDE::append_to_debug_file($stdout); +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE postgis_tiger_geocoder EXTENSION"); +PGTDE::append_to_debug_file($stdout); + +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-provider', json_object( 'type' VALUE 'file', 'path' VALUE '/tmp/datafile-location' ));", extra_params => ['-a']); +$rt_value = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-provider');", extra_params => ['-a']); + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc1(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc1 (k) VALUES (5),(6);', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc1 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$rt_value = $node->stop(); +$rt_value = $node->start(); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc1 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc1;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Print PGSM settings +($cmdret, $stdout, $stderr) = $node->psql('postgres', "SELECT name, setting, unit, context, vartype, source, min_val, max_val, enumvals, boot_val, reset_val, pending_restart FROM pg_settings WHERE name='pg_stat_monitor.pgsm_query_shared_buffer';", extra_params => ['-a', '-Pformat=aligned','-Ptuples_only=off']); +ok($cmdret == 0, "Print PGTDE EXTENSION Settings"); +PGTDE::append_to_debug_file($stdout); + +# Create example database and run pgbench init +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE database example;', extra_params => ['-a']); +print "cmdret $cmdret\n"; +ok($cmdret == 0, "CREATE Database example"); +PGTDE::append_to_debug_file($stdout); + +my $port = $node->port; +print "port $port \n"; + +my $out = system ("pgbench -i -s 20 -p $port example"); +print " out: $out \n"; +ok($cmdret == 0, "Perform pgbench init"); + +$out = system ("pgbench -c 10 -j 2 -t 5000 -p $port example"); +print " out: $out \n"; +ok($cmdret == 0, "Run pgbench"); + +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'SELECT datname, substr(query,0,150) AS query, SUM(calls) AS calls FROM pg_stat_monitor GROUP BY datname, query ORDER BY datname, query, calls;', extra_params => ['-a', '-Pformat=aligned','-Ptuples_only=off']); +ok($cmdret == 0, "SELECT XXX FROM pg_stat_monitor"); +PGTDE::append_to_debug_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP 
EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "DROP PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_stat_monitor;', extra_params => ['-a']); +ok($cmdret == 0, "DROP PGTDE EXTENSION"); +PGTDE::append_to_debug_file($stdout); + +# Stop the server +$node->stop(); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. +is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. +done_testing(); diff --git a/contrib/pg_tde/t/006_remote_vault_config.pl b/contrib/pg_tde/t/006_remote_vault_config.pl new file mode 100644 index 00000000000..bacf80bd7f7 --- /dev/null +++ b/contrib/pg_tde/t/006_remote_vault_config.pl @@ -0,0 +1,120 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use File::Basename; +use File::Compare; +use File::Copy; +use Test::More; +use lib 't'; +use pgtde; +use Env; + +# Get file name and CREATE out file name and dirs WHERE requried +PGTDE::setup_files_dir(basename($0)); + +# CREATE new PostgreSQL node and do initdb +my $node = PGTDE->pgtde_init_pg(); +my $pgdata = $node->data_dir; + +{ +package MyWebServer; + +use HTTP::Server::Simple::CGI; +use base qw(HTTP::Server::Simple::CGI); + +my %dispatch = ( + '/token' => \&resp_token, + '/url' => \&resp_url, + # ... +); + +sub handle_request { + my $self = shift; + my $cgi = shift; + + my $path = $cgi->path_info(); + my $handler = $dispatch{$path}; + + if (ref($handler) eq "CODE") { + print "HTTP/1.0 200 OK\r\n"; + $handler->($cgi); + + } else { + print "HTTP/1.0 404 Not found\r\n"; + print $cgi->header, + $cgi->start_html('Not found'), + $cgi->h1('Not found'), + $cgi->end_html; + } +} + +sub resp_token { + my $cgi = shift; + print $cgi->header, + "$ENV{'ROOT_TOKEN'}\r\n"; +} + +sub resp_url { + my $cgi = shift; + print $cgi->header, + "http://127.0.0.1:8200\r\n"; +} + +} +my $pid = MyWebServer->new(8888)->background(); + + +# UPDATE postgresql.conf to include/load pg_tde library +open my $conf, '>>', "$pgdata/postgresql.conf"; +print $conf "shared_preload_libraries = 'pg_tde'\n"; +close $conf; + +my $rt_value = $node->stop(); +$rt_value = $node->start(); +ok($rt_value == 1, "Restart Server"); + +my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_vault_v2('vault-provider', json_object( 'type' VALUE 'remote', 'url' VALUE 'http://localhost:8888/token' ), json_object( 'type' VALUE 'remote', 'url' VALUE 'http://localhost:8888/url' ), to_json('secret'::text), NULL);", extra_params => ['-a']); +$rt_value = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','vault-provider');", extra_params => ['-a']); + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc2(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc2 (k) VALUES (5),(6);', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc2 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- 
server restart"); +$rt_value = $node->stop(); +$rt_value = $node->start(); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc2 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc2;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "DROP PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); +# Stop the server +$node->stop(); + +system("kill $pid"); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. +is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. +done_testing(); diff --git a/contrib/pg_tde/t/007_access_control.pl b/contrib/pg_tde/t/007_access_control.pl new file mode 100644 index 00000000000..0cd2bce459d --- /dev/null +++ b/contrib/pg_tde/t/007_access_control.pl @@ -0,0 +1,129 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use File::Basename; +use File::Compare; +use File::Copy; +use Test::More; +use lib 't'; +use pgtde; + +# Get file name and CREATE out file name and dirs WHERE requried +PGTDE::setup_files_dir(basename($0)); + +# CREATE new PostgreSQL node and do initdb +my $node = PGTDE->pgtde_init_pg(); +my $pgdata = $node->data_dir; + +# UPDATE postgresql.conf to include/load pg_tde library +open my $conf, '>>', "$pgdata/postgresql.conf"; +print $conf "shared_preload_libraries = 'pg_tde'\n"; +close $conf; + +# Start server +my $rt_value = $node->start; +ok($rt_value == 1, "Start Server"); + +# CREATE EXTENSION and change out file permissions +my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + + +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE USER test_access;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE test_access USER"); +PGTDE::append_to_file($stdout); + +($cmdret, $stdout, $stderr) = $node->psql('postgres', 'grant all ON database postgres TO test_access;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE test_access USER"); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$node->stop(); + +$rt_value = $node->start(); +ok($rt_value == 1, "Restart Server"); + +# TRY performing operations without permission +PGTDE::append_to_file("-- pg_tde_add_key_provider_file should throw access denied"); +($cmdret, $stdout, $stderr) = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per');", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stderr); + +PGTDE::append_to_file("-- pg_tde_set_principal_key should also fail"); +($cmdret, $stdout, $stderr) = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault');", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stderr); + +PGTDE::append_to_file("-- pg_tde_rotate_principal_key should give access denied error"); +($cmdret, $stdout, $stderr) = $node->psql('postgres', "SELECT pg_tde_rotate_principal_key('rotated-principal-key','file-2');", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stderr); + + +# now give key management access to 
test_access user +PGTDE::append_to_file("-- grant key management access to test_access"); +$stdout = $node->safe_psql('postgres', "select pg_tde_grant_key_management_to_role('test_access');", extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# TRY performing key operation with permission +$stdout = $node->safe_psql('postgres', "SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per');", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($cmdret); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', "SELECT pg_tde_add_key_provider_file('file-2','/tmp/pg_tde_test_keyring_2.per');", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault');", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', "SELECT pg_tde_rotate_principal_key('rotated-principal-key','file-2');", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', "SELECT principal_key_name,key_provider_name,key_provider_id,principal_key_internal_name, principal_key_version from pg_tde_principal_key_info();", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($cmdret); + + +$stdout = $node->safe_psql('postgres', "SELECT pg_tde_list_all_key_providers();", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stdout); + +# Now revoke the view access from test_access user +$stdout = $node->safe_psql('postgres', "select pg_tde_revoke_key_viewer_from_role('test_access');", extra_params => ['-a']); + +# verify the view access is revoked + +PGTDE::append_to_file("-- pg_tde_list_all_key_providers should also fail"); +($cmdret, $stdout, $stderr) = $node->psql('postgres', "SELECT pg_tde_list_all_key_providers();", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stderr); + +PGTDE::append_to_file("-- pg_tde_principal_key_info should also fail"); +($cmdret, $stdout, $stderr) = $node->psql('postgres', "SELECT principal_key_name,key_provider_name,key_provider_id,principal_key_internal_name, principal_key_version from pg_tde_principal_key_info();", extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stderr); + + +$stdout = $node->safe_psql('postgres', 'CREATE SCHEMA test_access;', extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_access.test_enc1(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_access.test_enc1 (k) VALUES (5),(6);', extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_access.test_enc1 ORDER BY id ASC;', extra_params => ['-a', '-U', 'test_access']); +PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_tde CASCADE;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Stop the server +$node->stop(); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. 
+is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. +done_testing(); diff --git a/contrib/pg_tde/t/008_tde_heap.pl b/contrib/pg_tde/t/008_tde_heap.pl new file mode 100644 index 00000000000..6db26aa25d3 --- /dev/null +++ b/contrib/pg_tde/t/008_tde_heap.pl @@ -0,0 +1,235 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use File::Basename; +use File::Compare; +use File::Copy; +use Test::More; +use lib 't'; +use pgtde; + +# Get file name and CREATE out file name and dirs WHERE requried +PGTDE::setup_files_dir(basename($0)); + +my $PG_VERSION_STRING = `pg_config --version`; + +if (index(lc($PG_VERSION_STRING), lc("Percona Server")) == -1) +{ + plan skip_all => "pg_tde test case only for Percona Server for PostgreSQL"; +} + +# CREATE new PostgreSQL node and do initdb +my $node = PGTDE->pgtde_init_pg(); +my $pgdata = $node->data_dir; + +# UPDATE postgresql.conf to include/load pg_tde library +open my $conf, '>>', "$pgdata/postgresql.conf"; +print $conf "shared_preload_libraries = 'pg_tde'\n"; +close $conf; + +# Start server +my $rt_value = $node->start; +ok($rt_value == 1, "Start Server"); + +# CREATE EXTENSION and change out file permissions +my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + + +$rt_value = $node->psql('postgres', 'CREATE TABLE test_enc(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap;', extra_params => ['-a']); +ok($rt_value == 3, "Failing query"); + + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$node->stop(); + +$rt_value = $node->start(); +ok($rt_value == 1, "Restart Server"); + +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per');", extra_params => ['-a']); +$rt_value = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault');", extra_params => ['-a']); + + + +######################### test_enc1 (simple create table w tde_heap) + + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc1(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)) USING tde_heap;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc1 (k) VALUES (\'foobar\'),(\'barfoo\');', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc1 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +######################### test_enc2 (create heap + alter to tde_heap) + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc2(id SERIAL,k VARCHAR(32),PRIMARY KEY (id));', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc2 (k) VALUES (\'foobar\'),(\'barfoo\');', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'ALTER TABLE test_enc2 SET ACCESS METHOD tde_heap;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc2 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +######################### test_enc3 (default_table_access_method) + +$stdout = $node->safe_psql('postgres', 'SET default_table_access_method = "tde_heap"; CREATE TABLE test_enc3(id SERIAL,k 
VARCHAR(32),PRIMARY KEY (id));', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc3 (k) VALUES (\'foobar\'),(\'barfoo\');', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc3 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +######################### test_enc4 (create heap + alter default) + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc4(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)) USING heap;', extra_params => ['-a']); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc4 (k) VALUES (\'foobar\'),(\'barfoo\');', extra_params => ['-a']); +PGTDE::append_to_file($stdout); +$stdout = $node->safe_psql('postgres', 'SET default_table_access_method = "tde_heap"; ALTER TABLE test_enc4 SET ACCESS METHOD DEFAULT;', extra_params => ['-a']); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc4 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + + +######################### test_enc5 (create tde_heap + truncate) + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc5(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)) USING tde_heap;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc5 (k) VALUES (\'foobar\'),(\'barfoo\');', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'CHECKPOINT;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'TRUNCATE test_enc5;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc5 (k) VALUES (\'foobar\'),(\'barfoo\');', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc5 ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$rt_value = $node->stop(); +$rt_value = $node->start(); + +sub verify_table +{ + PGTDE::append_to_file('###########################'); + + my ($table) = @_; + + my $tablefile = $node->safe_psql('postgres', 'SHOW data_directory;'); + $tablefile .= '/'; + $tablefile .= $node->safe_psql('postgres', 'SELECT pg_relation_filepath(\''.$table.'\');'); + + $stdout = $node->safe_psql('postgres', 'SELECT * FROM ' . $table . ' ORDER BY id ASC;', extra_params => ['-a']); + PGTDE::append_to_file($stdout); + + my $strings = 'TABLEFILE FOR ' . $table . 
' FOUND: '; + $strings .= `(ls $tablefile >/dev/null && echo -n yes) || echo -n no`; + PGTDE::append_to_file($strings); + + $strings = 'CONTAINS FOO (should be empty): '; + $strings .= `strings $tablefile | grep foo`; + PGTDE::append_to_file($strings); +} + +verify_table('test_enc1'); +verify_table('test_enc2'); +verify_table('test_enc3'); +verify_table('test_enc4'); +verify_table('test_enc5'); + +# Verify that we can't see the data in the file +my $tablefile2 = $node->safe_psql('postgres', 'SHOW data_directory;'); +$tablefile2 .= '/'; +$tablefile2 .= $node->safe_psql('postgres', 'SELECT pg_relation_filepath(\'test_enc2\');'); + +my $strings = 'TABLEFILE2 FOUND: '; +$strings .= `(ls $tablefile2 >/dev/null && echo yes) || echo no`; +PGTDE::append_to_file($strings); + +$strings = 'CONTAINS FOO (should be empty): '; +$strings .= `strings $tablefile2 | grep foo`; +PGTDE::append_to_file($strings); + + + + +# Verify that we can't see the data in the file +my $tablefile3 = $node->safe_psql('postgres', 'SHOW data_directory;'); +$tablefile3 .= '/'; +$tablefile3 .= $node->safe_psql('postgres', 'SELECT pg_relation_filepath(\'test_enc3\');'); + +$strings = 'TABLEFILE3 FOUND: '; +$strings .= `(ls $tablefile3 >/dev/null && echo yes) || echo no`; +PGTDE::append_to_file($strings); + +$strings = 'CONTAINS FOO (should be empty): '; +$strings .= `strings $tablefile3 | grep foo`; +PGTDE::append_to_file($strings); + + + + +# Verify that we can't see the data in the file +my $tablefile4 = $node->safe_psql('postgres', 'SHOW data_directory;'); +$tablefile4 .= '/'; +$tablefile4 .= $node->safe_psql('postgres', 'SELECT pg_relation_filepath(\'test_enc4\');'); + +$strings = 'TABLEFILE4 FOUND: '; +$strings .= `(ls $tablefile4 >/dev/null && echo yes) || echo no`; +PGTDE::append_to_file($strings); + +$strings = 'CONTAINS FOO (should be empty): '; +$strings .= `strings $tablefile4 | grep foo`; +PGTDE::append_to_file($strings); + + + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc1;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc2;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc3;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc4;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc5;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "DROP PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); +# Stop the server +$node->stop(); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. +is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. 
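+# A side note on the expected output: test_enc5 reports ids 3 and 4 after
+# the TRUNCATE because plain TRUNCATE does not reset SERIAL sequences; that
+# would require TRUNCATE ... RESTART IDENTITY.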
+done_testing();
diff --git a/contrib/pg_tde/t/009_key_rotate_tablespace.pl b/contrib/pg_tde/t/009_key_rotate_tablespace.pl
new file mode 100644
index 00000000000..815a51eedbb
--- /dev/null
+++ b/contrib/pg_tde/t/009_key_rotate_tablespace.pl
@@ -0,0 +1,96 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+use File::Basename;
+use File::Compare;
+use File::Copy;
+use Test::More;
+use lib 't';
+use pgtde;
+
+# Get the file name and create the out file name and dirs where required
+PGTDE::setup_files_dir(basename($0));
+
+my ($cmdret, $stdout);
+
+# Create a new PostgreSQL node and do initdb
+my $node = PGTDE->pgtde_init_pg();
+my $pgdata = $node->data_dir;
+
+# Update postgresql.conf to include/load the pg_tde library
+open my $conf, '>>', "$pgdata/postgresql.conf";
+print $conf "shared_preload_libraries = 'pg_tde'\n";
+close $conf;
+
+# Start server
+my $rt_value = $node->start;
+ok($rt_value == 1, "Start Server");
+
+$node->safe_psql('postgres',
+	q{
+SET allow_in_place_tablespaces = true;
+CREATE TABLESPACE test_tblspace LOCATION '';
+CREATE DATABASE tbc TABLESPACE = test_tblspace;
+});
+
+$stdout = $node->safe_psql('tbc',
+	q{
+CREATE EXTENSION pg_tde;
+SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per');
+SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault');
+
+CREATE TABLE country_table (
+    country_id serial primary key,
+    country_name text unique not null,
+    continent text not null
+) USING tde_heap_basic;
+
+INSERT INTO country_table (country_name, continent)
+    VALUES ('Japan', 'Asia'),
+           ('UK', 'Europe'),
+           ('USA', 'North America');
+
+SELECT * FROM country_table;
+
+}, extra_params => ['-a']);
+PGTDE::append_to_file($stdout);
+
+
+$cmdret = $node->psql('tbc', "SELECT pg_tde_rotate_principal_key('new-k', 'file-vault');", extra_params => ['-a']);
+ok($cmdret == 0, "ROTATE KEY");
+PGTDE::append_to_file($stdout);
+
+# Restart the server
+PGTDE::append_to_file("-- server restart");
+$node->stop();
+
+$rt_value = $node->start();
+ok($rt_value == 1, "Restart Server");
+
+$stdout = $node->safe_psql('tbc', 'SELECT * FROM country_table;', extra_params => ['-a']);
+PGTDE::append_to_file($stdout);
+
+
+# DROP EXTENSION; use psql() in list context so the return code we assert on
+# actually belongs to this statement
+($cmdret, $stdout) = $node->psql('tbc', 'DROP EXTENSION pg_tde CASCADE;', extra_params => ['-a']);
+ok($cmdret == 0, "DROP PGTDE EXTENSION");
+PGTDE::append_to_file($stdout);
+
+($cmdret, $stdout) = $node->psql('postgres', q{
+DROP DATABASE tbc;
+DROP TABLESPACE test_tblspace;
+}, extra_params => ['-a']);
+ok($cmdret == 0, "DROP DATABASE");
+PGTDE::append_to_file($stdout);
+# Stop the server
+$node->stop();
+
+# compare the expected and out files
+my $compare = PGTDE->compare_results();
+
+# Test/check if the expected and result/out files match. If yes, the test passes.
+is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files.");
+
+# Done testing for this testcase file.
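+# Note: allow_in_place_tablespaces is a developer option; together with
+# LOCATION '' it creates the tablespace directory inside pg_tblspc/ itself,
+# which keeps this test self-contained across build environments.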
+done_testing(); diff --git a/contrib/pg_tde/t/010_alter_keyring.pl b/contrib/pg_tde/t/010_alter_keyring.pl new file mode 100644 index 00000000000..a6e8ea55497 --- /dev/null +++ b/contrib/pg_tde/t/010_alter_keyring.pl @@ -0,0 +1,107 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use File::Basename; +use File::Compare; +use File::Copy; +use Test::More; +use lib 't'; +use pgtde; + +# Get file name and CREATE out file name and dirs WHERE requried +PGTDE::setup_files_dir(basename($0)); + +# CREATE new PostgreSQL node and do initdb +my $node = PGTDE->pgtde_init_pg(); +my $pgdata = $node->data_dir; + +# UPDATE postgresql.conf to include/load pg_tde library +open my $conf, '>>', "$pgdata/postgresql.conf"; +print $conf "shared_preload_libraries = 'pg_tde'\n"; +close $conf; + +# Start server +my $rt_value = $node->start; +ok($rt_value == 1, "Start Server"); + +# CREATE EXTENSION and change out file permissions +my ($cmdret, $stdout, $stderr) = $node->psql('postgres', 'CREATE EXTENSION pg_tde;', extra_params => ['-a']); +ok($cmdret == 0, "CREATE PGTDE EXTENSION"); +PGTDE::append_to_file($stdout); + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$node->stop(); + +$rt_value = $node->start(); +ok($rt_value == 1, "Restart Server"); + +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-vault-1','/tmp/pg_tde_keyring_1.per');", extra_params => ['-a']); +$rt_value = $node->psql('postgres', "SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault-1');", extra_params => ['-a']); + +$stdout = $node->safe_psql('postgres', 'CREATE TABLE test_enc(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'INSERT INTO test_enc (k) VALUES (5),(6);', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +#rename the keyring file to make it inaccessible +PGTDE::append_to_file("--moving keyring file--"); +move("/tmp/pg_tde_keyring_1.per", "/tmp/pg_tde_keyring_2.per") + or die "move failed: $!"; + +#restart the server and now the table should become inaccessible +PGTDE::append_to_file("-- server restart"); +$node->stop(); + +$rt_value = $node->start(); +ok($rt_value == 1, "Restart Server"); + +# this should fail +($cmdret, $stdout, $stderr) = $node->psql('postgres', "SELECT * FROM test_enc ORDER BY id ASC;", extra_params => ['-a']); +PGTDE::append_to_file($stderr); + +#$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); +#PGTDE::append_to_file($stdout); + + +#create a new key provider pointing to the moved keyring file +PGTDE::append_to_file("-- creating new key provider pointing to the moved file --"); +$rt_value = $node->psql('postgres', "SELECT pg_tde_add_key_provider_file('file-vault-2','/tmp/pg_tde_keyring_2.per');", extra_params => ['-a']); +#update principal key to use the new key provider +PGTDE::append_to_file("-- Alter principal key to use new provider --"); +$rt_value = $node->psql('postgres', "SELECT pg_tde_alter_principal_key_keyring('file-vault-2');", extra_params => ['-a']); + +# this should work now +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + + +# Restart the server +PGTDE::append_to_file("-- server restart"); +$rt_value = $node->stop(); 
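+# (This restart forces the principal key to be re-read from the new
+# 'file-vault-2' provider at startup, showing that the keyring change
+# survives a full server lifecycle and not just the current session.)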
+$rt_value = $node->start(); + +$stdout = $node->safe_psql('postgres', 'SELECT * FROM test_enc ORDER BY id ASC;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +$stdout = $node->safe_psql('postgres', 'DROP TABLE test_enc;', extra_params => ['-a']); +PGTDE::append_to_file($stdout); + +# DROP EXTENSION +$stdout = $node->safe_psql('postgres', 'DROP EXTENSION pg_tde CASCADE;', extra_params => ['-a']); +# Stop the server +$node->stop(); + +# compare the expected and out file +my $compare = PGTDE->compare_results(); + +# Test/check if expected and result/out file match. If Yes, test passes. +is($compare,0,"Compare Files: $PGTDE::expected_filename_with_path and $PGTDE::out_filename_with_path files."); + +# Done testing for this testcase file. +done_testing(); diff --git a/contrib/pg_tde/t/expected/001_basic.out b/contrib/pg_tde/t/expected/001_basic.out new file mode 100644 index 00000000000..ab41e680414 --- /dev/null +++ b/contrib/pg_tde/t/expected/001_basic.out @@ -0,0 +1,18 @@ +CREATE EXTENSION pg_tde; +SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_tde'; +pg_tde|1.0-beta2 +-- server restart +CREATE TABLE test_enc(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)) USING tde_heap_basic; +INSERT INTO test_enc (k) VALUES ('foobar'),('barfoo'); +SELECT * FROM test_enc ORDER BY id ASC; +1|foobar +2|barfoo +-- server restart +SELECT * FROM test_enc ORDER BY id ASC; +1|foobar +2|barfoo +TABLEFILE FOUND: yes + +CONTAINS FOO (should be empty): +DROP TABLE test_enc; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/t/expected/002_rotate_key.out b/contrib/pg_tde/t/expected/002_rotate_key.out new file mode 100644 index 00000000000..5819294f755 --- /dev/null +++ b/contrib/pg_tde/t/expected/002_rotate_key.out @@ -0,0 +1,25 @@ +CREATE EXTENSION pg_tde; +-- server restart +CREATE TABLE test_enc(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic; +INSERT INTO test_enc (k) VALUES (5),(6); +SELECT * FROM test_enc ORDER BY id ASC; +1|5 +2|6 +-- ROTATE KEY pg_tde_rotate_principal_key('rotated-principal-key','file-2'); +SELECT * FROM test_enc ORDER BY id ASC; +1|5 +2|6 +-- server restart +SELECT * FROM test_enc ORDER BY id ASC; +1|5 +2|6 +-- ROTATE KEY pg_tde_rotate_principal_key(); +SELECT * FROM test_enc ORDER BY id ASC; +1|5 +2|6 +-- server restart +SELECT * FROM test_enc ORDER BY id ASC; +1|5 +2|6 +DROP TABLE test_enc; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/t/expected/003_remote_config.out b/contrib/pg_tde/t/expected/003_remote_config.out new file mode 100644 index 00000000000..c6a75f18996 --- /dev/null +++ b/contrib/pg_tde/t/expected/003_remote_config.out @@ -0,0 +1,12 @@ +CREATE EXTENSION pg_tde; +CREATE TABLE test_enc2(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic; +INSERT INTO test_enc2 (k) VALUES (5),(6); +SELECT * FROM test_enc2 ORDER BY id ASC; +1|5 +2|6 +-- server restart +SELECT * FROM test_enc2 ORDER BY id ASC; +1|5 +2|6 +DROP TABLE test_enc2; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/t/expected/004_file_config.out b/contrib/pg_tde/t/expected/004_file_config.out new file mode 100644 index 00000000000..7879231cdf6 --- /dev/null +++ b/contrib/pg_tde/t/expected/004_file_config.out @@ -0,0 +1,12 @@ +CREATE EXTENSION pg_tde; +CREATE TABLE test_enc1(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic; +INSERT INTO test_enc1 (k) VALUES (5),(6); +SELECT * FROM test_enc1 ORDER BY id ASC; +1|5 +2|6 +-- server restart +SELECT * FROM test_enc1 ORDER BY id ASC; +1|5 +2|6 +DROP TABLE test_enc1; +DROP EXTENSION pg_tde; diff --git 
a/contrib/pg_tde/t/expected/005_multiple_extensions.out b/contrib/pg_tde/t/expected/005_multiple_extensions.out new file mode 100644 index 00000000000..7879231cdf6 --- /dev/null +++ b/contrib/pg_tde/t/expected/005_multiple_extensions.out @@ -0,0 +1,12 @@ +CREATE EXTENSION pg_tde; +CREATE TABLE test_enc1(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic; +INSERT INTO test_enc1 (k) VALUES (5),(6); +SELECT * FROM test_enc1 ORDER BY id ASC; +1|5 +2|6 +-- server restart +SELECT * FROM test_enc1 ORDER BY id ASC; +1|5 +2|6 +DROP TABLE test_enc1; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/t/expected/006_remote_vault_config.out b/contrib/pg_tde/t/expected/006_remote_vault_config.out new file mode 100644 index 00000000000..c6a75f18996 --- /dev/null +++ b/contrib/pg_tde/t/expected/006_remote_vault_config.out @@ -0,0 +1,12 @@ +CREATE EXTENSION pg_tde; +CREATE TABLE test_enc2(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic; +INSERT INTO test_enc2 (k) VALUES (5),(6); +SELECT * FROM test_enc2 ORDER BY id ASC; +1|5 +2|6 +-- server restart +SELECT * FROM test_enc2 ORDER BY id ASC; +1|5 +2|6 +DROP TABLE test_enc2; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/t/expected/007_access_control.out b/contrib/pg_tde/t/expected/007_access_control.out new file mode 100644 index 00000000000..e00cf14c7e6 --- /dev/null +++ b/contrib/pg_tde/t/expected/007_access_control.out @@ -0,0 +1,37 @@ +CREATE EXTENSION pg_tde; +CREATE USER test_access; +grant all ON database postgres TO test_access; +-- server restart +-- pg_tde_add_key_provider_file should throw access denied +psql::1: ERROR: permission denied for function pg_tde_add_key_provider_file +-- pg_tde_set_principal_key should also fail +psql::1: ERROR: permission denied for function pg_tde_set_principal_key +-- pg_tde_rotate_principal_key should give access denied error +psql::1: ERROR: permission denied for function pg_tde_rotate_principal_key +-- grant key management access to test_access +select pg_tde_grant_key_management_to_role('test_access'); +t +3 +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +1 +SELECT pg_tde_add_key_provider_file('file-2','/tmp/pg_tde_test_keyring_2.per'); +2 +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); +t +SELECT pg_tde_rotate_principal_key('rotated-principal-key','file-2'); +t +3 +SELECT pg_tde_list_all_key_providers(); +(1,file-vault,file,"{""type"" : ""file"", ""path"" : ""/tmp/pg_tde_test_keyring.per""}") +(2,file-2,file,"{""type"" : ""file"", ""path"" : ""/tmp/pg_tde_test_keyring_2.per""}") +-- pg_tde_list_all_key_providers should also fail +psql::1: ERROR: permission denied for function pg_tde_list_all_key_providers +-- pg_tde_principal_key_info should also fail +psql::1: ERROR: permission denied for function pg_tde_principal_key_info +CREATE SCHEMA test_access; +CREATE TABLE test_access.test_enc1(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic; +INSERT INTO test_access.test_enc1 (k) VALUES (5),(6); +SELECT * FROM test_access.test_enc1 ORDER BY id ASC; +1|5 +2|6 +DROP EXTENSION pg_tde CASCADE; diff --git a/contrib/pg_tde/t/expected/008_tde_heap.out b/contrib/pg_tde/t/expected/008_tde_heap.out new file mode 100644 index 00000000000..5b16269bd84 --- /dev/null +++ b/contrib/pg_tde/t/expected/008_tde_heap.out @@ -0,0 +1,76 @@ +CREATE EXTENSION pg_tde; +-- server restart +CREATE TABLE test_enc1(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)) USING tde_heap; +INSERT INTO test_enc1 (k) VALUES ('foobar'),('barfoo'); +SELECT * FROM test_enc1 
ORDER BY id ASC; +1|foobar +2|barfoo +CREATE TABLE test_enc2(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)); +INSERT INTO test_enc2 (k) VALUES ('foobar'),('barfoo'); +ALTER TABLE test_enc2 SET ACCESS METHOD tde_heap; +SELECT * FROM test_enc2 ORDER BY id ASC; +1|foobar +2|barfoo +SET default_table_access_method = "tde_heap"; CREATE TABLE test_enc3(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)); +INSERT INTO test_enc3 (k) VALUES ('foobar'),('barfoo'); +SELECT * FROM test_enc3 ORDER BY id ASC; +1|foobar +2|barfoo +INSERT INTO test_enc4 (k) VALUES ('foobar'),('barfoo'); +SELECT * FROM test_enc4 ORDER BY id ASC; +1|foobar +2|barfoo +CREATE TABLE test_enc5(id SERIAL,k VARCHAR(32),PRIMARY KEY (id)) USING tde_heap; +INSERT INTO test_enc5 (k) VALUES ('foobar'),('barfoo'); +CHECKPOINT; +TRUNCATE test_enc5; +INSERT INTO test_enc5 (k) VALUES ('foobar'),('barfoo'); +SELECT * FROM test_enc5 ORDER BY id ASC; +3|foobar +4|barfoo +-- server restart +########################### +SELECT * FROM test_enc1 ORDER BY id ASC; +1|foobar +2|barfoo +TABLEFILE FOR test_enc1 FOUND: yes +CONTAINS FOO (should be empty): +########################### +SELECT * FROM test_enc2 ORDER BY id ASC; +1|foobar +2|barfoo +TABLEFILE FOR test_enc2 FOUND: yes +CONTAINS FOO (should be empty): +########################### +SELECT * FROM test_enc3 ORDER BY id ASC; +1|foobar +2|barfoo +TABLEFILE FOR test_enc3 FOUND: yes +CONTAINS FOO (should be empty): +########################### +SELECT * FROM test_enc4 ORDER BY id ASC; +1|foobar +2|barfoo +TABLEFILE FOR test_enc4 FOUND: yes +CONTAINS FOO (should be empty): +########################### +SELECT * FROM test_enc5 ORDER BY id ASC; +3|foobar +4|barfoo +TABLEFILE FOR test_enc5 FOUND: yes +CONTAINS FOO (should be empty): +TABLEFILE2 FOUND: yes + +CONTAINS FOO (should be empty): +TABLEFILE3 FOUND: yes + +CONTAINS FOO (should be empty): +TABLEFILE4 FOUND: yes + +CONTAINS FOO (should be empty): +DROP TABLE test_enc1; +DROP TABLE test_enc2; +DROP TABLE test_enc3; +DROP TABLE test_enc4; +DROP TABLE test_enc5; +DROP EXTENSION pg_tde; diff --git a/contrib/pg_tde/t/expected/009_key_rotate_tablespace.out b/contrib/pg_tde/t/expected/009_key_rotate_tablespace.out new file mode 100644 index 00000000000..463024ff021 --- /dev/null +++ b/contrib/pg_tde/t/expected/009_key_rotate_tablespace.out @@ -0,0 +1,44 @@ +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +1 +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); +t +CREATE TABLE country_table ( + country_id serial primary key, + country_name text unique not null, + continent text not null +) USING tde_heap_basic; +INSERT INTO country_table (country_name, continent) + VALUES ('Japan', 'Asia'), + ('UK', 'Europe'), + ('USA', 'North America'); +SELECT * FROM country_table; +1|Japan|Asia +2|UK|Europe +3|USA|North America +CREATE EXTENSION pg_tde; +SELECT pg_tde_add_key_provider_file('file-vault','/tmp/pg_tde_test_keyring.per'); +1 +SELECT pg_tde_set_principal_key('test-db-principal-key','file-vault'); +t +CREATE TABLE country_table ( + country_id serial primary key, + country_name text unique not null, + continent text not null +) USING tde_heap_basic; +INSERT INTO country_table (country_name, continent) + VALUES ('Japan', 'Asia'), + ('UK', 'Europe'), + ('USA', 'North America'); +SELECT * FROM country_table; +1|Japan|Asia +2|UK|Europe +3|USA|North America +-- server restart +SELECT * FROM country_table; +1|Japan|Asia +2|UK|Europe +3|USA|North America +DROP EXTENSION pg_tde CASCADE; +DROP DATABASE 
tbc;
+DROP TABLESPACE test_tblspace;
diff --git a/contrib/pg_tde/t/expected/010_alter_keyring.out b/contrib/pg_tde/t/expected/010_alter_keyring.out
new file mode 100644
index 00000000000..0f6c37e0588
--- /dev/null
+++ b/contrib/pg_tde/t/expected/010_alter_keyring.out
@@ -0,0 +1,20 @@
+CREATE EXTENSION pg_tde;
+-- server restart
+CREATE TABLE test_enc(id SERIAL,k INTEGER,PRIMARY KEY (id)) USING tde_heap_basic;
+INSERT INTO test_enc (k) VALUES (5),(6);
+SELECT * FROM test_enc ORDER BY id ASC;
+1|5
+2|6
+--moving keyring file--
+-- server restart
+psql::1: ERROR: failed to retrieve principal key. Create one using pg_tde_set_principal_key before using encrypted tables.
+-- creating new key provider pointing to the moved file --
+-- Alter principal key to use new provider --
+SELECT * FROM test_enc ORDER BY id ASC;
+1|5
+2|6
+-- server restart
+SELECT * FROM test_enc ORDER BY id ASC;
+1|5
+2|6
+DROP TABLE test_enc;
diff --git a/contrib/pg_tde/t/pgtde.pm b/contrib/pg_tde/t/pgtde.pm
new file mode 100644
index 00000000000..67b1709aad7
--- /dev/null
+++ b/contrib/pg_tde/t/pgtde.pm
@@ -0,0 +1,146 @@
+package PGTDE;
+
+use File::Basename;
+use File::Compare;
+use Test::More;
+
+our @ISA = qw( Exporter );
+
+# These CAN be exported.
+our @EXPORT = qw( pgtde_init_pg pgtde_start_pg pgtde_stop_pg pgtde_psql_cmd pgtde_setup_pg_tde pgtde_create_extension pgtde_drop_extension );
+
+# Instance of the pg server that will be spawned by TAP testing. A new server is created for each TAP test.
+our $pg_node;
+
+# Expected .out filename of the TAP testcase being executed. These are already part of the repo under t/expected/*.
+our $expected_filename_with_path;
+
+# Major version of the PG server that we are using.
+our $PG_MAJOR_VERSION;
+
+# Result .out filename of the TAP testcase being executed. Where needed, a new *.out is created for each TAP test.
+our $out_filename_with_path;
+
+# Runtime output file that is used only for debugging purposes, e.g. for comparison to PGSS, blocks and timings.
+our $debug_out_filename_with_path;
+
+BEGIN {
+	# Get the PG server major version from pg_config
+	$PG_MAJOR_VERSION = `pg_config --version | awk {'print \$2'} | cut -f1 -d"." | sed -e 's/[^0-9].*\$//g'`;
+	$PG_MAJOR_VERSION =~ s/^\s+|\s+$//g;
+
+	# Depending on the PG server version, load the required module at runtime when pgtde.pm is loaded.
+	my $node_module = $PG_MAJOR_VERSION > 14 ? "PostgreSQL::Test::Cluster" : "PostgresNode";
+	my $node_module_file = $node_module;
+	$node_module_file =~ s[::][/]g;
+	$node_module_file .= '.pm';
+	require $node_module_file;
+	$node_module->import;
+}
+
+sub pgtde_init_pg
+{
+	print "Postgres major version: $PG_MAJOR_VERSION \n";
+
+	# For server version 15 & above, spawn the server using PostgreSQL::Test::Cluster
+	if ($PG_MAJOR_VERSION > 14) {
+		$pg_node = PostgreSQL::Test::Cluster->new('pgtde_regression');
+	}
+	# For server version 14 & below, spawn the server using PostgresNode
+	elsif ($PG_MAJOR_VERSION < 15) {
+		$pg_node = PostgresNode->get_new_node('pgtde_regression');
+	}
+
+	$pg_node->dump_info;
+	$pg_node->init;
+	return $pg_node;
+}
+
+sub append_to_file
+{
+	my ($str) = @_;
+
+	# For server version 15 & above, use PostgreSQL::Test::Utils to write to files
+	if ($PG_MAJOR_VERSION > 14) {
+		PostgreSQL::Test::Utils::append_to_file($out_filename_with_path, $str . "\n");
+	}
+	# For server version 14 & below, use TestLib to write to files
+	elsif ($PG_MAJOR_VERSION < 15) {
+		TestLib::append_to_file($out_filename_with_path, $str . 
"\n"); + } + chmod(0640 , $out_filename_with_path) + or die("unable to set permissions for $out_filename_with_path"); + + return; +} + +sub append_to_debug_file +{ + my ($str) = @_; + + # For Server version 15 & above, use PostgreSQL::Test::Utils to write to files + if ($PG_MAJOR_VERSION > 14) { + PostgreSQL::Test::Utils::append_to_file($debug_out_filename_with_path, $str . "\n"); + } + # For Server version 14 & below, use PostgresNode to write to files + elsif ($PG_MAJOR_VERSION < 15) { + TestLib::append_to_file($debug_out_filename_with_path, $str . "\n"); + } + chmod(0640 , $debug_out_filename_with_path) + or die("unable to set permissions for $debug_out_filename_with_path"); + + return; +} + +sub setup_files_dir +{ + my ($perlfilename) = @_; + + # Expected folder where expected output will be present + my $expected_folder = "t/expected"; + + # Results/out folder where generated results files will be placed + my $results_folder = "t/results"; + + # Check if results folder exists or not, create if it doesn't + unless (-d $results_folder) + { + mkdir $results_folder or die "Can't create folder $results_folder: $!\n"; + } + + # Check if expected folder exists or not, bail out if it doesn't + unless (-d $expected_folder) + { + BAIL_OUT "Expected files folder $expected_folder doesn't exist: \n"; + } + + #Remove .pl from filename and store in a variable + my @split_arr = split /\./, $perlfilename; + + my $filename_without_extension = $split_arr[0]; + + # Create expected filename with path + my $expected_filename = "${filename_without_extension}.out"; + + $expected_filename_with_path = "${expected_folder}/${expected_filename}"; + + # Create results filename with path + my $out_filename = "${filename_without_extension}.out"; + $out_filename_with_path = "${results_folder}/${out_filename}"; + + # Delete already existing result out file, if it exists. 
+    if ( -f $out_filename_with_path)
+    {
+        unlink($out_filename_with_path) or die "Can't delete existing $out_filename_with_path: $!\n";
+    }
+
+    $debug_out_filename_with_path = "${results_folder}/${out_filename}.debug";
+}
+
+sub compare_results
+{
+    # Compare the expected and result files and return the result
+    return compare($expected_filename_with_path, $out_filename_with_path);
+}
+
+1;
diff --git a/contrib/pg_tde/tools/repl.sed b/contrib/pg_tde/tools/repl.sed
new file mode 100644
index 00000000000..2cac8e82899
--- /dev/null
+++ b/contrib/pg_tde/tools/repl.sed
@@ -0,0 +1,25 @@
+# These first few lines are only for the initial run, but should be harmless in later runs
+s/\theap_/\ttdeheap_/g
+s/\t\*heap_/\t*tdeheap_/g
+s/ heap_/ tdeheap_/g
+s/ \*heap_/ *tdeheap_/g
+s/(heap_/ (tdeheap_/g
+s/^heap_/tdeheap_/g
+s/_heap_/_tdeheap_/g
+s/-heap_/-tdeheap_/g
+s/+heap_/+tdeheap_/g
+s/!heap_/!tdeheap_/g
+s/heapam_/pg_tdeam_/g
+s/heap2_/tdeheap2_/g
+s/heapgettup/tdeheapgettup/g
+s/heapgetpage/tdeheapgetpage/g
+s/visibilitymap_/tdeheap_visibilitymap_/g
+s/RelationPutHeapTuple/tdeheap_RelationPutHeapTuple/g
+s/RelationGetBufferForTuple/tdeheap_RelationGetBufferForTuple/g
+s/TTSOpsBufferHeapTuple/TTSOpsTDEBufferHeapTuple/g
+s/TTS_IS_BUFFERTUPLE/TTS_IS_TDE_BUFFERTUPLE/g
+s/toast_tuple_externalize/tdeheap_toast_tuple_externalize/g
+# Repair names that the earlier rules renamed by mistake
+s/num_tdeheap_tuples/num_heap_tuples/g
+s/pgstat_update_tdeheap_dead_tuples/pgstat_update_heap_dead_tuples/g
+s/tdeheap_xlog_deserialize_prune_and_freeze/heap_xlog_deserialize_prune_and_freeze/g
\ No newline at end of file
diff --git a/contrib/pg_tde/tools/tool.py b/contrib/pg_tde/tools/tool.py
new file mode 100644
index 00000000000..a66b0efb589
--- /dev/null
+++ b/contrib/pg_tde/tools/tool.py
@@ -0,0 +1,198 @@
+# Simple helper script for upstream merges to the copied heap code.
+# It implements a few simple steps which can be used to automate
+# most operations.
+#
+# Generally this script assumes that pg_tde is checked out as a
+# submodule inside postgres, in the contrib/pg_tde directory.
+#
+# Most methods interact with the currently checked out version
+# of postgres; this part is not automated at all. Select the
+# correct commit before executing functions!
+#
+# == copy <dstdir>
+#
+# Copies the required heapam source files from the postgres repo
+# to the specified <dstdir> inside the pg_tde repo. Also
+# renames the files, places them in the correct directory, and
+# runs the automatic sed replacement script.
+#
+# The sed replacements only cover the name changes, mainly changing "heap"
+# to "tdeheap". They don't apply the actual encryption changes!
+#
+# It also creates a file named "COMMIT" in the directory, which contains the
+# commit hash used.
+#
+# == diff <src> <dst> <diffdir>
+#
+# Runs diff on the tdeheap files between <src> and <dst>, and places
+# the results into <diffdir>.
+#
+# The assumption is that <src> contains the copied, but not TDEfied,
+# version of the files, while <dst> is the actual current TDEfied code,
+# and that way this command creates the "tde patch" for the given commit.
+#
+# For example, assuming that we have the PG16 tde sources in the src16
+# directory, these steps create a diff for the current sources:
+# 1. check out the src16/COMMIT commit
+# 2. run `copy tmp_16dir`
+# 3. run `diff tmp_16dir src16 diff16`
+# 4. delete the tmp_16dir directory
+#
+# == apply <dst> <diffdir>
+#
+# Applies the diffs created by the diff command from the <diffdir> to the
+# <dst> source directory.
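+#
+# A hypothetical invocation (src17 and diff16 are example directory names
+# relative to contrib/pg_tde, not directories that exist in the repo):
+#
+#   python3 contrib/pg_tde/tools/tool.py apply src17 diff16
+#
+# This merges each saved <name>.patch from diff16 into the matching
+# renamed file under src17.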
+#
+# When the diff can't be applied cleanly and there are conflicts, it still
+# writes the file with conflicts, using the diff3 format (the usual git
+# conflict markers), which can be resolved manually.
+#
+# The recommended action in this case is to first create a commit with the
+# conflicts as-is, and then create a separate commit with the conflicts
+# resolved and the code working.
+#
+# This is mainly intended for version upgrades.
+# For example, if the current version is 16, and the goal is creating the 17
+# version:
+# 1. create the src16 diff using the steps described in the `diff` section
+# 2. check out the 17 version in the postgres repo
+# 3. use the copy command to create a base directory for the 17 version
+# 4. create a commit with the src17 base files
+# 5. use the apply command to apply the patches
+# 6. commit things with conflicts
+# 7. resolve the conflicts as needed
+# 8. commit resolved/working sources
+
+
+import shutil
+import os
+import subprocess
+import sys
+
+tools_directory = os.path.dirname(os.path.realpath(__file__))
+
+pg_root = tools_directory + "/../../../"
+heapam_src_dir = pg_root + "src/backend/access/heap/"
+heapam_inc_dir = pg_root + "src/include/access/"
+
+tde_root = tools_directory + "/../"
+
+heapam_headers = {
+    "visibilitymap.h": "pg_tde_visibilitymap.h",
+    "rewriteheap.h": "pg_tde_rewrite.h",
+    "heapam_xlog.h": "pg_tdeam_xlog.h",
+    "hio.h": "pg_tde_io.h",
+    "heapam.h": "pg_tdeam.h",
+    "heaptoast.h": "pg_tdetoast.h"
+}
+
+heapam_sources = {
+    "heapam.c": "pg_tdeam.c",
+    "heapam_handler.c": "pg_tdeam_handler.c",
+    "heapam_visibility.c": "pg_tdeam_visibility.c",
+    "heaptoast.c": "pg_tdetoast.c",
+    "hio.c": "pg_tde_io.c",
+    "pruneheap.c": "pg_tde_prune.c",
+    "rewriteheap.c": "pg_tde_rewrite.c",
+    "vacuumlazy.c": "pg_tde_vacuumlazy.c",
+    "visibilitymap.c": "pg_tde_visibilitymap.c",
+}
+
+def copy_and_sed_things(files, src, dst):
+    os.makedirs(dst, exist_ok=True)
+    for original,copy in files.items():
+        print(" - ", original, "=>", copy)
+        shutil.copyfile(src+original, dst+copy)
+        subprocess.call(["sed", "-i", "-f", tools_directory + "/repl.sed", dst+copy])
+
+def copy_upstream_things(dstdir):
+    print("Processing headers")
+    copy_and_sed_things(heapam_headers, heapam_inc_dir, tde_root + dstdir + "/include/access/")
+    print("Processing sources")
+    copy_and_sed_things(heapam_sources, heapam_src_dir, tde_root + dstdir + "/access/")
+    # Also create a commit file recording the upstream commit the copy was made from
+    cwd = os.getcwd()
+    os.chdir(pg_root)
+    commit_hash = subprocess.check_output(["git", "rev-parse", "HEAD"])
+    os.chdir(cwd)
+    f = open(tde_root + dstdir + "/COMMIT", "w")
+    f.write(commit_hash.decode("utf-8"))
+    f.close()
+
+
+def save_diffs(files, src, dst, diffdir):
+    os.makedirs(tde_root + "/" + diffdir, exist_ok=True)
+    for _,copy in files.items():
+        print(" - ", copy + ".patch")
+        diff = subprocess.run(["diff", "-u", tde_root+src+"/"+copy, tde_root+dst+"/"+copy], stdout = subprocess.PIPE, stderr=subprocess.PIPE, check=False)
+        f = open(tde_root + "/" + diffdir + "/" + copy + ".patch", "w")
+        f.write(diff.stdout.decode("utf-8"))
+        f.close()
+
+def diff_things(src, dst, diffdir):
+    print("Processing headers")
+    save_diffs(heapam_headers, src + "/include/access/", dst + "/include/access/", diffdir)
+    print("Processing sources")
+    save_diffs(heapam_sources, src + "/access/", dst + "/access/", diffdir)
+
+def apply_diffs(files, dst, diffdir):
+    for _,copy in files.items():
+        print(" - ", copy + ".patch")
+        # patch flags: --merge=diff3 keeps diff3-style conflict markers in the
+        # target file, -l ignores whitespace differences, and
+        # --no-backup-if-mismatch avoids creating .orig backup files.
+        patch = subprocess.run(["patch", "--merge=diff3", "-l", "--no-backup-if-mismatch", tde_root+dst+"/"+copy, tde_root+"/"+diffdir+"/"+copy+".patch"], stdout = subprocess.PIPE, stderr=subprocess.PIPE, check=False)
+        print(patch.stdout.decode("utf-8"))
+        print(patch.stderr.decode("utf-8"))
+
+def apply_things(dst, diffdir):
+    print("Processing headers")
+    apply_diffs(heapam_headers, dst + "/include/access/", diffdir)
+    print("Processing sources")
+    apply_diffs(heapam_sources, dst + "/access/", diffdir)
+
+def rm_files(files, src):
+    for _,copy in files.items():
+        print(" - RM ", copy)
+        os.remove(tde_root+src+"/"+copy)
+
+def rm_things(srcdir):
+    print("Processing headers")
+    rm_files(heapam_headers, srcdir + "/include/access/")
+    print("Processing sources")
+    rm_files(heapam_sources, srcdir + "/access/")
+
+if len(sys.argv) < 2:
+    print("No command given! Commands:")
+    print(" - copy")
+    print(" - diff")
+    print(" - apply")
+    print(" - rm <dir>")
+    exit()
+
+if sys.argv[1] == "copy":
+    if len(sys.argv) < 3:
+        print("No target directory given!")
+        print("Usage: tool.py copy <dstdir>")
+        exit()
+    copy_upstream_things(sys.argv[2])
+
+if sys.argv[1] == "diff":
+    if len(sys.argv) < 5:
+        print("Not enough parameters!")
+        print("Usage: tool.py diff <src> <dst> <diffdir>")
+        exit()
+    diff_things(sys.argv[2], sys.argv[3], sys.argv[4])
+
+if sys.argv[1] == "apply":
+    if len(sys.argv) < 4:
+        print("Not enough parameters!")
+        print("Usage: tool.py apply <dst> <diffdir>")
+        exit()
+    apply_things(sys.argv[2], sys.argv[3])
+
+
+
+if sys.argv[1] == "rm":
+    if len(sys.argv) < 3:
+        print("No target directory given!")
+        print("Usage: tool.py rm <dir>")
+        exit()
+    rm_things(sys.argv[2])
\ No newline at end of file
diff --git a/contrib/pg_tde/typedefs.list b/contrib/pg_tde/typedefs.list
new file mode 100644
index 00000000000..848bebc767d
--- /dev/null
+++ b/contrib/pg_tde/typedefs.list
@@ -0,0 +1,113 @@
+BulkInsertStateData
+BulkInsertStateData
+BulkInsertStateData
+BulkInsertStateData
+CurlString
+FileKeyring
+GenericKeyring
+HeapPageFreeze
+HeapPageFreeze
+HeapScanDescData
+HeapScanDescData
+HeapScanDescData
+HeapScanDescData
+HeapTupleFreeze
+HeapTupleFreeze
+IndexDeleteCounts
+IndexDeleteCounts
+IndexFetchHeapData
+IndexFetchHeapData
+InternalKey
+JsonKeringSemState
+JsonKeyringField
+JsonKeyringState
+JsonVaultRespField
+JsonVaultRespSemState
+JsonVaultRespState
+KeyProviders
+KeyringProvideRecord
+KeyringProviderXLRecord
+KeyringReturnCode
+LVPagePruneState
+LVRelState
+LVRelState
+LVSavedErrInfo
+LVSavedErrInfo
+LogicalRewriteMappingData
+LogicalRewriteMappingData
+PendingMapEntryDelete
+ProviderScanType
+PruneFreezeResult
+RelKeyCache
+RelKeyCacheRec
+RelKeyData
+RewriteMappingDataEntry
+RewriteMappingDataEntry
+RewriteMappingFile
+RewriteMappingFile
+RewriteStateData
+RewriteStateData
+RewriteStateData
+RewriteState
+TDEBufferHeapTupleTableSlot
+TDEFileHeader
+TDEKeyringRoutine
+TDELocalState
+TDELockTypes
+TDEMapEntry
+TDEMapFilePath
+TDEPrincipalKey
+TDEPrincipalKeyId
+TDEPrincipalKeyInfo
+TDEShmemSetupRoutine
+TdeCreateEvent
+TdeCreateEventType
+TdeKeyProviderInfoSharedState
+TdePrincipalKeySharedState
+TdePrincipalKeylocalState
+TdeSharedState
+VaultV2Keyring
+XLogExtensionInstall
+XLogPrincipalKeyRotate
+XLogRelKey
+itemIdCompactData
+itemIdCompactData
+keyData
+keyInfo
+keyName
+xl_multi_insert_tuple
+xl_multi_insert_tuple
+xl_tdeheap_confirm
+xl_tdeheap_confirm
+xl_tdeheap_delete
+xl_tdeheap_delete
+xl_tdeheap_freeze_page
+xl_tdeheap_freeze_plan
+xl_tdeheap_header
+xl_tdeheap_header
+xl_tdeheap_inplace
+xl_tdeheap_inplace
+xl_tdeheap_insert
+xl_tdeheap_insert
+xl_tdeheap_lock
+xl_tdeheap_lock
+xl_tdeheap_lock_updated
+xl_tdeheap_lock_updated
+xl_tdeheap_multi_insert
+xl_tdeheap_multi_insert
+xl_tdeheap_new_cid
+xl_tdeheap_new_cid
+xl_tdeheap_prune
+xl_tdeheap_prune
+xl_tdeheap_rewrite_mapping
+xl_tdeheap_rewrite_mapping
+xl_tdeheap_truncate
+xl_tdeheap_truncate
+xl_tdeheap_update
+xl_tdeheap_update
+xl_tdeheap_vacuum
+xl_tdeheap_visible
+xl_tdeheap_visible
+xlhp_freeze_plan
+xlhp_freeze_plans
+xlhp_prune_items
\ No newline at end of file
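
As a quick sanity check of the repl.sed rename rules above, here is a minimal sketch in Python; the sample statement is invented for illustration, and the relative script path assumes the PostgreSQL checkout root as the working directory:

    import subprocess

    # Feed one upstream-style line through the rename script; the
    # "s/^heap_/tdeheap_/g" rule rewrites the leading "heap_" prefix.
    sample = "heap_insert(relation, tup, cid, options, bistate);\n"
    result = subprocess.run(
        ["sed", "-f", "contrib/pg_tde/tools/repl.sed"],
        input=sample, capture_output=True, text=True, check=True)
    print(result.stdout, end="")
    # prints: tdeheap_insert(relation, tup, cid, options, bistate);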