Mirror of https://github.com/postgres/postgres
Tree: 4f1b6e5bb4

Branches:
REL2_0B
REL6_4
REL6_5_PATCHES
REL7_0_PATCHES
REL7_1_STABLE
REL7_2_STABLE
REL7_3_STABLE
REL7_4_STABLE
REL8_0_STABLE
REL8_1_STABLE
REL8_2_STABLE
REL8_3_STABLE
REL8_4_STABLE
REL8_5_ALPHA1_BRANCH
REL8_5_ALPHA2_BRANCH
REL8_5_ALPHA3_BRANCH
REL9_0_ALPHA4_BRANCH
REL9_0_ALPHA5_BRANCH
REL9_0_STABLE
REL9_1_STABLE
REL9_2_STABLE
REL9_3_STABLE
REL9_4_STABLE
REL9_5_STABLE
REL9_6_STABLE
REL_10_STABLE
REL_11_STABLE
REL_12_STABLE
REL_13_STABLE
REL_14_STABLE
REL_15_STABLE
REL_16_STABLE
REL_17_STABLE
REL_18_STABLE
Release_1_0_3
WIN32_DEV
ecpg_big_bison
master

Tags:
PG95-1_01
PG95-1_08
PG95-1_09
REL2_0
REL6_1
REL6_1_1
REL6_2
REL6_2_1
REL6_3
REL6_3_2
REL6_4_2
REL6_5
REL6_5_1
REL6_5_2
REL6_5_3
REL7_0
REL7_0_2
REL7_0_3
REL7_1
REL7_1_1
REL7_1_2
REL7_1_3
REL7_1_BETA
REL7_1_BETA2
REL7_1_BETA3
REL7_2
REL7_2_1
REL7_2_2
REL7_2_3
REL7_2_4
REL7_2_5
REL7_2_6
REL7_2_7
REL7_2_8
REL7_2_BETA1
REL7_2_BETA2
REL7_2_BETA3
REL7_2_BETA4
REL7_2_BETA5
REL7_2_RC1
REL7_2_RC2
REL7_3
REL7_3_1
REL7_3_10
REL7_3_11
REL7_3_12
REL7_3_13
REL7_3_14
REL7_3_15
REL7_3_16
REL7_3_17
REL7_3_18
REL7_3_19
REL7_3_2
REL7_3_20
REL7_3_21
REL7_3_3
REL7_3_4
REL7_3_5
REL7_3_6
REL7_3_7
REL7_3_8
REL7_3_9
REL7_4
REL7_4_1
REL7_4_10
REL7_4_11
REL7_4_12
REL7_4_13
REL7_4_14
REL7_4_15
REL7_4_16
REL7_4_17
REL7_4_18
REL7_4_19
REL7_4_2
REL7_4_20
REL7_4_21
REL7_4_22
REL7_4_23
REL7_4_24
REL7_4_25
REL7_4_26
REL7_4_27
REL7_4_28
REL7_4_29
REL7_4_3
REL7_4_30
REL7_4_4
REL7_4_5
REL7_4_6
REL7_4_7
REL7_4_8
REL7_4_9
REL7_4_BETA1
REL7_4_BETA2
REL7_4_BETA3
REL7_4_BETA4
REL7_4_BETA5
REL7_4_RC1
REL7_4_RC2
REL8_0_0
REL8_0_0BETA1
REL8_0_0BETA2
REL8_0_0BETA3
REL8_0_0BETA4
REL8_0_0BETA5
REL8_0_0RC1
REL8_0_0RC2
REL8_0_0RC3
REL8_0_0RC4
REL8_0_0RC5
REL8_0_1
REL8_0_10
REL8_0_11
REL8_0_12
REL8_0_13
REL8_0_14
REL8_0_15
REL8_0_16
REL8_0_17
REL8_0_18
REL8_0_19
REL8_0_2
REL8_0_20
REL8_0_21
REL8_0_22
REL8_0_23
REL8_0_24
REL8_0_25
REL8_0_26
REL8_0_3
REL8_0_4
REL8_0_5
REL8_0_6
REL8_0_7
REL8_0_8
REL8_0_9
REL8_1_0
REL8_1_0BETA1
REL8_1_0BETA2
REL8_1_0BETA3
REL8_1_0BETA4
REL8_1_0RC1
REL8_1_1
REL8_1_10
REL8_1_11
REL8_1_12
REL8_1_13
REL8_1_14
REL8_1_15
REL8_1_16
REL8_1_17
REL8_1_18
REL8_1_19
REL8_1_2
REL8_1_20
REL8_1_21
REL8_1_22
REL8_1_23
REL8_1_3
REL8_1_4
REL8_1_5
REL8_1_6
REL8_1_7
REL8_1_8
REL8_1_9
REL8_2_0
REL8_2_1
REL8_2_10
REL8_2_11
REL8_2_12
REL8_2_13
REL8_2_14
REL8_2_15
REL8_2_16
REL8_2_17
REL8_2_18
REL8_2_19
REL8_2_2
REL8_2_20
REL8_2_21
REL8_2_22
REL8_2_23
REL8_2_3
REL8_2_4
REL8_2_5
REL8_2_6
REL8_2_7
REL8_2_8
REL8_2_9
REL8_2_BETA1
REL8_2_BETA2
REL8_2_BETA3
REL8_2_RC1
REL8_3_0
REL8_3_1
REL8_3_10
REL8_3_11
REL8_3_12
REL8_3_13
REL8_3_14
REL8_3_15
REL8_3_16
REL8_3_17
REL8_3_18
REL8_3_19
REL8_3_2
REL8_3_20
REL8_3_21
REL8_3_22
REL8_3_23
REL8_3_3
REL8_3_4
REL8_3_5
REL8_3_6
REL8_3_7
REL8_3_8
REL8_3_9
REL8_3_BETA1
REL8_3_BETA2
REL8_3_BETA3
REL8_3_BETA4
REL8_3_RC1
REL8_3_RC2
REL8_4_0
REL8_4_1
REL8_4_10
REL8_4_11
REL8_4_12
REL8_4_13
REL8_4_14
REL8_4_15
REL8_4_16
REL8_4_17
REL8_4_18
REL8_4_19
REL8_4_2
REL8_4_20
REL8_4_21
REL8_4_22
REL8_4_3
REL8_4_4
REL8_4_5
REL8_4_6
REL8_4_7
REL8_4_8
REL8_4_9
REL8_4_BETA1
REL8_4_BETA2
REL8_4_RC1
REL8_4_RC2
REL8_5_ALPHA1
REL8_5_ALPHA2
REL8_5_ALPHA3
REL9_0_0
REL9_0_1
REL9_0_10
REL9_0_11
REL9_0_12
REL9_0_13
REL9_0_14
REL9_0_15
REL9_0_16
REL9_0_17
REL9_0_18
REL9_0_19
REL9_0_2
REL9_0_20
REL9_0_21
REL9_0_22
REL9_0_23
REL9_0_3
REL9_0_4
REL9_0_5
REL9_0_6
REL9_0_7
REL9_0_8
REL9_0_9
REL9_0_ALPHA4
REL9_0_ALPHA5
REL9_0_BETA1
REL9_0_BETA2
REL9_0_BETA3
REL9_0_BETA4
REL9_0_RC1
REL9_1_0
REL9_1_1
REL9_1_10
REL9_1_11
REL9_1_12
REL9_1_13
REL9_1_14
REL9_1_15
REL9_1_16
REL9_1_17
REL9_1_18
REL9_1_19
REL9_1_2
REL9_1_20
REL9_1_21
REL9_1_22
REL9_1_23
REL9_1_24
REL9_1_3
REL9_1_4
REL9_1_5
REL9_1_6
REL9_1_7
REL9_1_8
REL9_1_9
REL9_1_ALPHA1
REL9_1_ALPHA2
REL9_1_ALPHA3
REL9_1_ALPHA4
REL9_1_ALPHA5
REL9_1_BETA1
REL9_1_BETA2
REL9_1_BETA3
REL9_1_RC1
REL9_2_0
REL9_2_1
REL9_2_10
REL9_2_11
REL9_2_12
REL9_2_13
REL9_2_14
REL9_2_15
REL9_2_16
REL9_2_17
REL9_2_18
REL9_2_19
REL9_2_2
REL9_2_20
REL9_2_21
REL9_2_22
REL9_2_23
REL9_2_24
REL9_2_3
REL9_2_4
REL9_2_5
REL9_2_6
REL9_2_7
REL9_2_8
REL9_2_9
REL9_2_BETA1
REL9_2_BETA2
REL9_2_BETA3
REL9_2_BETA4
REL9_2_RC1
REL9_3_0
REL9_3_1
REL9_3_10
REL9_3_11
REL9_3_12
REL9_3_13
REL9_3_14
REL9_3_15
REL9_3_16
REL9_3_17
REL9_3_18
REL9_3_19
REL9_3_2
REL9_3_20
REL9_3_21
REL9_3_22
REL9_3_23
REL9_3_24
REL9_3_25
REL9_3_3
REL9_3_4
REL9_3_5
REL9_3_6
REL9_3_7
REL9_3_8
REL9_3_9
REL9_3_BETA1
REL9_3_BETA2
REL9_3_RC1
REL9_4_0
REL9_4_1
REL9_4_10
REL9_4_11
REL9_4_12
REL9_4_13
REL9_4_14
REL9_4_15
REL9_4_16
REL9_4_17
REL9_4_18
REL9_4_19
REL9_4_2
REL9_4_20
REL9_4_21
REL9_4_22
REL9_4_23
REL9_4_24
REL9_4_25
REL9_4_26
REL9_4_3
REL9_4_4
REL9_4_5
REL9_4_6
REL9_4_7
REL9_4_8
REL9_4_9
REL9_4_BETA1
REL9_4_BETA2
REL9_4_BETA3
REL9_4_RC1
REL9_5_0
REL9_5_1
REL9_5_10
REL9_5_11
REL9_5_12
REL9_5_13
REL9_5_14
REL9_5_15
REL9_5_16
REL9_5_17
REL9_5_18
REL9_5_19
REL9_5_2
REL9_5_20
REL9_5_21
REL9_5_22
REL9_5_23
REL9_5_24
REL9_5_25
REL9_5_3
REL9_5_4
REL9_5_5
REL9_5_6
REL9_5_7
REL9_5_8
REL9_5_9
REL9_5_ALPHA1
REL9_5_ALPHA2
REL9_5_BETA1
REL9_5_BETA2
REL9_5_RC1
REL9_6_0
REL9_6_1
REL9_6_10
REL9_6_11
REL9_6_12
REL9_6_13
REL9_6_14
REL9_6_15
REL9_6_16
REL9_6_17
REL9_6_18
REL9_6_19
REL9_6_2
REL9_6_20
REL9_6_21
REL9_6_22
REL9_6_23
REL9_6_24
REL9_6_3
REL9_6_4
REL9_6_5
REL9_6_6
REL9_6_7
REL9_6_8
REL9_6_9
REL9_6_BETA1
REL9_6_BETA2
REL9_6_BETA3
REL9_6_BETA4
REL9_6_RC1
REL_10_0
REL_10_1
REL_10_10
REL_10_11
REL_10_12
REL_10_13
REL_10_14
REL_10_15
REL_10_16
REL_10_17
REL_10_18
REL_10_19
REL_10_2
REL_10_20
REL_10_21
REL_10_22
REL_10_23
REL_10_3
REL_10_4
REL_10_5
REL_10_6
REL_10_7
REL_10_8
REL_10_9
REL_10_BETA1
REL_10_BETA2
REL_10_BETA3
REL_10_BETA4
REL_10_RC1
REL_11_0
REL_11_1
REL_11_10
REL_11_11
REL_11_12
REL_11_13
REL_11_14
REL_11_15
REL_11_16
REL_11_17
REL_11_18
REL_11_19
REL_11_2
REL_11_20
REL_11_21
REL_11_22
REL_11_3
REL_11_4
REL_11_5
REL_11_6
REL_11_7
REL_11_8
REL_11_9
REL_11_BETA1
REL_11_BETA2
REL_11_BETA3
REL_11_BETA4
REL_11_RC1
REL_12_0
REL_12_1
REL_12_10
REL_12_11
REL_12_12
REL_12_13
REL_12_14
REL_12_15
REL_12_16
REL_12_17
REL_12_18
REL_12_19
REL_12_2
REL_12_20
REL_12_21
REL_12_22
REL_12_3
REL_12_4
REL_12_5
REL_12_6
REL_12_7
REL_12_8
REL_12_9
REL_12_BETA1
REL_12_BETA2
REL_12_BETA3
REL_12_BETA4
REL_12_RC1
REL_13_0
REL_13_1
REL_13_10
REL_13_11
REL_13_12
REL_13_13
REL_13_14
REL_13_15
REL_13_16
REL_13_17
REL_13_18
REL_13_19
REL_13_2
REL_13_20
REL_13_21
REL_13_22
REL_13_23
REL_13_3
REL_13_4
REL_13_5
REL_13_6
REL_13_7
REL_13_8
REL_13_9
REL_13_BETA1
REL_13_BETA2
REL_13_BETA3
REL_13_RC1
REL_14_0
REL_14_1
REL_14_10
REL_14_11
REL_14_12
REL_14_13
REL_14_14
REL_14_15
REL_14_16
REL_14_17
REL_14_18
REL_14_19
REL_14_2
REL_14_20
REL_14_3
REL_14_4
REL_14_5
REL_14_6
REL_14_7
REL_14_8
REL_14_9
REL_14_BETA1
REL_14_BETA2
REL_14_BETA3
REL_14_RC1
REL_15_0
REL_15_1
REL_15_10
REL_15_11
REL_15_12
REL_15_13
REL_15_14
REL_15_15
REL_15_2
REL_15_3
REL_15_4
REL_15_5
REL_15_6
REL_15_7
REL_15_8
REL_15_9
REL_15_BETA1
REL_15_BETA2
REL_15_BETA3
REL_15_BETA4
REL_15_RC1
REL_15_RC2
REL_16_0
REL_16_1
REL_16_10
REL_16_11
REL_16_2
REL_16_3
REL_16_4
REL_16_5
REL_16_6
REL_16_7
REL_16_8
REL_16_9
REL_16_BETA1
REL_16_BETA2
REL_16_BETA3
REL_16_RC1
REL_17_0
REL_17_1
REL_17_2
REL_17_3
REL_17_4
REL_17_5
REL_17_6
REL_17_7
REL_17_BETA1
REL_17_BETA2
REL_17_BETA3
REL_17_RC1
REL_18_0
REL_18_1
REL_18_BETA1
REL_18_BETA2
REL_18_BETA3
REL_18_RC1
Release_1_0_2
Release_2_0
Release_2_0_0
release-6-3

60554 commits as of 4f1b6e5bb4fe9bc74395d30d689b28e9cda654a5.

Recent commits (SHA1, message, age):

4f1b6e5bb4 (10 months ago): Remove unstable test suite added by 525392d57
The 'cached-plan-inval' test suite, introduced in …

f8d7f29b3e (10 months ago): Allow lwlocks to be disowned
To implement AIO writes, the backend initiating a write needs to transfer lock ownership to the AIO subsystem, so that the lock held during the write can be released in another backend. Other backends need to be able to "complete" an asynchronously started IO to avoid deadlocks (consider e.g. one backend starting IO for a buffer and then waiting for a heavyweight lock held by another backend, followed by the current holder of the heavyweight lock waiting for the IO to complete).
To that end, this commit adds LWLockDisown() and LWLockReleaseDisowned(). If code uses LWLockDisown(), it is that code's responsibility to ensure that the lock is released in case of errors.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/1f6b50a7-38ef-4d87-8246-786d39f46ab9@iki.fi

44cbba9a7f (10 months ago): Adjust EXPLAIN test case to filter out "Actual Rows" values
Per the buildfarm, these tests appear to be unstable in the wake of commit …

98fc31d649 (10 months ago): Avoid race condition between "GRANT role" and "DROP ROLE"
Concurrently dropping either the granted role or the grantee does not stop GRANT from completing, instead resulting in a dangling role reference in pg_auth_members. That's relatively harmless in the short run, but inconsistent catalog entries are not a good thing.
This patch solves the problem by adding the granted and grantee roles as explicit shared dependencies of the pg_auth_members entry. That's a bit indirect, but it works because the pg_shdepend code applies the necessary locking and rechecking.
Commit …

ddb17e387a (10 months ago): Allow EXPLAIN to indicate fractional rows
When nloops > 1, we now display two digits after the decimal point, rather than none. This is important because what we print is actually planstate->instrument->ntuples / nloops, and sometimes what you want to know is planstate->instrument->ntuples. You can estimate that by multiplying the displayed row count by the displayed nloops value, but the fact that the displayed value is rounded makes that inexact. It's still inexact even if we show these two extra decimal places, but less so. Perhaps we will agree on a way to further improve this output later, but for now this seems better than doing nothing.
Author: Ibrar Ahmed <ibrar.ahmad@gmail.com>
Author: Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Greg Stark <stark@mit.edu>
Reviewed-by: Naeem Akhter <akhternaeem@gmail.com>
Reviewed-by: Hamid Akhtar <hamid.akhtar@percona.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrei Lepikhov <a.lepikhov@postgrespro.ru>
Reviewed-by: Guillaume Lelarge <guillaume@lelarge.info>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Alena Rybakina <a.rybakina@postgrespro.ru>
Discussion: http://postgr.es/m/603c8f070905281830g2e5419c4xad2946d149e21f9d%40mail.gmail.com
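
A quick way to see the new output is a nested loop whose inner side runs more than once; a minimal sketch (table names and the quoted plan fragment are illustrative, not taken from the commit):

```sql
CREATE TABLE outer_t (id int);
CREATE TABLE inner_t (id int);
INSERT INTO outer_t SELECT generate_series(1, 2);
INSERT INTO inner_t SELECT generate_series(1, 100);

-- Force a nested loop so the inner scan reports loops=2.
SET enable_hashjoin = off;
SET enable_mergejoin = off;

EXPLAIN ANALYZE
SELECT * FROM outer_t o JOIN inner_t i ON i.id > o.id;
-- The inner scan now shows an average per loop such as "rows=98.50"
-- instead of a rounded "rows=98"; multiplying by loops=2 recovers the
-- 197 total tuples (99 + 98) more accurately.
```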

78d3f48895 (10 months ago): Add test 005_char_signedness.pl to meson.build
Oversight in …

29d75b25b5 (10 months ago): Fix pg_dumpall to cope with dangling OIDs in pg_auth_members
There is a race condition between "GRANT role" and "DROP ROLE", which allows GRANT to install pg_auth_members entries that refer to dropped roles. (Commit …)
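
Until a cluster has both fixes, such dangling references can also be spotted by hand; a hedged diagnostic sketch against the shared catalogs:

```sql
-- Find pg_auth_members rows whose granted role or grantee no longer
-- exists in pg_authid, i.e. the dangling OIDs pg_dumpall now skips.
SELECT m.roleid, m.member, m.grantor
FROM pg_auth_members m
WHERE NOT EXISTS (SELECT 1 FROM pg_authid a WHERE a.oid = m.roleid)
   OR NOT EXISTS (SELECT 1 FROM pg_authid a WHERE a.oid = m.member);
```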

dfd8e6c73e (10 months ago): Fix an issue with index scans using pg_trgm due to char signedness on different architectures
GIN and GiST indexes utilizing pg_trgm's opclasses store sorted trigrams within index tuples. When comparing and sorting trigrams, pg_trgm treats each one as a 'char[3]' in C. However, the char type in C can be interpreted as either signed char or unsigned char, depending on the platform, if the signedness is not explicitly specified. Consequently, during replication between different CPU architectures, index scans on standby servers could fail to locate matching index tuples due to the differing treatment of character signedness.
This change introduces comparison functions for trgm that explicitly handle signed char and unsigned char. The appropriate comparison function is selected dynamically based on the character signedness stored in the control file. Upgraded clusters can therefore use such indexes without rebuilding them, provided the cluster upgrade occurs on a platform with the same character signedness as the one on which the original cluster was initialized.
The default char signedness information was introduced in …
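
The affected setups are trigram indexes whose stored keys were sorted using the platform's native char signedness; a minimal sketch of one (names and the sample pattern are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE docs (body text);
CREATE INDEX docs_body_trgm ON docs USING gin (body gin_trgm_ops);

-- If a standby's CPU treats plain 'char' with the opposite signedness
-- from the primary (e.g. x86 vs. ARM), index scans like this one could
-- previously miss matching rows for trigrams containing bytes >= 0x80.
SELECT * FROM docs WHERE body LIKE '%résumé%';
```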

1aab680591 (10 months ago): pg_upgrade: Add --set-char-signedness to set the default char signedness of the new cluster
This change adds a new option, --set-char-signedness, to pg_upgrade, letting the user set the signedness explicitly during the upgrade. This helps users who, for example, copied a v17 source cluster from x86 (signedness=true) to ARM (signedness=false) and want to pg_upgrade properly without first having to acquire an x86 VM.
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/CB11ADBC-0C3F-4FE0-A678-666EE80CBB07%40amazon.com

a8238f87f9 (10 months ago): pg_upgrade: Preserve default char signedness value from old cluster
Commit …

30666d1857 (10 months ago): pg_resetwal: Add --char-signedness option to change the default char signedness
With the newly added option --char-signedness, pg_resetwal updates the default char signedness flag in the control file. This option is primarily intended for an upcoming patch in which pg_upgrade preserves the default char signedness during upgrades, and is not meant for manual operation.
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/CB11ADBC-0C3F-4FE0-A678-666EE80CBB07%40amazon.com

44fe30fdab (10 months ago): Add default_char_signedness field to ControlFileData
The signedness of the 'char' type in C is implementation-dependent. For instance, 'signed char' is used by default on x86 CPUs, while 'unsigned char' is used on aarch CPUs. Previously, we accidentally let C implementation signedness affect persistent data. This led to inconsistent results when comparing char data across different platforms.
This commit introduces a new 'default_char_signedness' field in ControlFileData to store the signedness of the 'char' type. While this change does not encourage the use of 'char' without explicitly specifying its signedness, this field can be used as a hint to ensure consistent behavior for pre-v18 data files that store data sorted by the 'char' type on disk (e.g., GIN and GiST indexes), especially in cross-platform replication scenarios.
Newly created database clusters unconditionally set the default char signedness to true. pg_upgrade (with an upcoming commit) changes this flag for clusters if the source database cluster has signedness=false. As a result, the signedness=false setting will become rare over time. If we had known about the problem during the last development cycle that forced initdb (v8.3), we would have made all clusters signed or all clusters unsigned. Making pg_upgrade the only source of signedness=false will cause the population of database clusters to converge toward that retrospective ideal.
Bump catalog version (for the catalog changes) and PG_CONTROL_VERSION (for the additions in ControlFileData).
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/CB11ADBC-0C3F-4FE0-A678-666EE80CBB07%40amazon.com

901a1cf8b4 (10 months ago): doc: clarify default checksum behavior in non-master branches
Also simplify and correct data checksum wording in master now that it is the default. PG 13 did not have the awkward wording.
Reported-by: Felix <afripowered@gmail.com>
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/173928241056.707.3989867022954178032@wrigleys.postgresql.org
Backpatch-through: 14

6ea0734e41 (10 months ago): doc: remove non-breaking space in SGML files, causes make error

32ce58e9e9 (10 months ago): Make test portlock logic work with meson
Previously the portlock logic, added in …

665cafe8a4 (10 months ago): Fix cross-version upgrades with XMLSERIALIZE(NO INDENT)
Dumps from versions older than v16 do not know about NO INDENT in an XMLSERIALIZE() clause. This commit adjusts AdjustUpgrade.pm so that NO INDENT is discarded in the contents of the new dump adjusted for comparison when the old version is v15 or older. This should be enough to make the cross-version upgrade tests pass.
Per report from buildfarm member crake. Oversight in …

329304c901 (10 months ago): Support text position search functions with nondeterministic collations
This allows using text position search functions with nondeterministic collations. These functions, which all use common internal infrastructure, are:
- position, strpos
- replace
- split_part
- string_to_array
- string_to_table
There was previously no internal implementation of this, so it was met with a not-supported error. This adds the internal implementation and removes the error.
Unlike with deterministic collations, the search cannot use any byte-by-byte optimized techniques but has to go substring by substring. We also need to consider that the found match could have a different length than the needle and that there could be substrings of different length matching at a position. In most cases, we need to find the longest such substring (greedy semantics), but this can be configured by each caller.
Reviewed-by: Euler Taveira <euler@eulerto.com>
Discussion: https://www.postgresql.org/message-id/flat/582b2613-0900-48ca-8b0d-340c06f4d400@eisentraut.org
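
For example, with an ICU case-insensitive (hence nondeterministic) collation, assuming ICU support is built in:

```sql
CREATE COLLATION ci (provider = icu, locale = 'und-u-ks-level2',
                     deterministic = false);

-- Both of these previously failed with a not-supported error;
-- the search now proceeds substring by substring.
SELECT strpos('ABC' COLLATE ci, 'b');        -- 2
SELECT replace('ABC' COLLATE ci, 'b', 'x');  -- AxC
```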

41336bf085 (10 months ago): doc: Add links to olsen93 and ong90 in bibliography
The bibliography entries for olsen93 and ong90 lacked links to online copies. While ong90 is available in digital form, the olsen93 thesis is only available as a physical copy in the UCB library. To save people from searching for it, we still link to it via the UCB library page.
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxFcJYdRvzgt59N26XjFp2tFFUXu+VN+x8Uo0NbDUCMCbw@mail.gmail.com

b4e0d0c53f (10 months ago): Fix a WARNING for data origin discrepancies
Previously, a WARNING was issued at the time of defining a subscription with origin=NONE only when the publisher subscribed to the same table from other publishers, indicating that the data might originate from different origins. However, the publisher can also subscribe to the partition ancestors or partition children of the table from other publishers, which could likewise result in mixed-origin data inclusion. So, give a WARNING in those cases as well.
Reported-by: Sergey Tatarintsev <s.tatarintsev@postgrespro.ru>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16, where it was introduced
Discussion: https://postgr.es/m/5eda6a9c-63cf-404d-8a49-8dcb116a29f3@postgrespro.ru
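
The warning is raised when the subscription is defined; a sketch of the shape involved (connection strings and names are placeholders):

```sql
-- On the subscriber: request only changes that originated locally
-- on the publisher.
CREATE SUBSCRIPTION sub_parts
    CONNECTION 'host=pub.example dbname=postgres'
    PUBLICATION pub_parts
    WITH (origin = none);
-- If the publisher itself subscribes to the same table, or (with this
-- fix) to one of its partition ancestors or children, CREATE
-- SUBSCRIPTION now emits the mixed-origin WARNING.
```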

984410b923 (10 months ago): Add missing deparsing of [NO] INDENT to XMLSERIALIZE()
NO INDENT is the default, and is added if no explicit indentation flag was provided with XMLSERIALIZE().
Oversight in …
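
For reference, the clause being deparsed looks like this (a minimal sketch):

```sql
SELECT xmlserialize(DOCUMENT '<r><e/></r>'::xml AS text INDENT);
SELECT xmlserialize(DOCUMENT '<r><e/></r>'::xml AS text NO INDENT);
-- NO INDENT is the default; with this fix, deparsing (e.g. of a view
-- definition) spells it out when no indentation flag was given.
```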

7d6d2c4bbd (10 months ago): Drop opcintype from index AM strategy translation API
The type argument wasn't really necessary. It was a remnant of converting the API of the gist strategy translation from using opclass to using opfamily+opcintype (commits …)

7202d72787 (10 months ago): backend launchers void * arguments for binary data
Change backend launcher functions to take void * for binary data instead of char *. This removes the need for numerous casts.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://www.postgresql.org/message-id/flat/fd1fcedb-3492-4fc8-9e3e-74b97f2db6c7%40eisentraut.org

b50a554cc8 (10 months ago): Fix for pg_restore_attribute_stats()
Use RelationGetIndexExpressions() rather than rd_indexprs directly.
Author: Corey Huinker <corey.huinker@gmail.com>

41625ab8ea (10 months ago): psql: Add support for pipelines
With \bind, \parse, \bind_named and \close, it is possible to issue queries from psql using the extended protocol. However, it was not possible to send these queries using libpq's pipeline mode. This feature has two advantages:
- Testing. Pipeline tests were previously only possible with pgbench, using TAP tests. It now becomes possible to have more SQL tests that are able to stress the backend with pipelines and extended queries. More tests, discussed on some other threads, will be added in a follow-up commit. Some external projects in the community had to implement their own facilities to work around this limitation.
- Emulation of custom workloads, with more control over the actions taken by a client with libpq APIs. It is possible to emulate more workload patterns to bottleneck the backend with the extended query protocol.
This patch adds six new meta-commands to control pipelines:
* \startpipeline starts a new pipeline. All extended queries are queued until the end of the pipeline is reached or a sync request is sent and processed.
* \endpipeline ends an existing pipeline. All queued commands are sent to the server and all responses are processed by psql.
* \syncpipeline queues a synchronisation request, without flushing the commands to the server; the equivalent of PQsendPipelineSync().
* \flush, the equivalent of PQflush().
* \flushrequest, the equivalent of PQsendFlushRequest().
* \getresults reads the server's results for the queries in a pipeline. Unsent data is automatically pushed when \getresults is called. It is possible to control the number of results read in a single meta-command execution with an optional parameter; 0 means that all the results should be read.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/CAO6_XqroE7JuMEm1sWz55rp9fAYX2JwmcP_3m_v51vnOFdsLiQ@mail.gmail.com
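
Put together, a pipelined psql script might look like this (a sketch based on the meta-commands listed above; the \bind plus \g usage mirrors the pre-existing extended-protocol commands):

```psql
\startpipeline
SELECT $1 \bind 42 \g
SELECT $1::int + $2::int \bind 2 3 \g
\flushrequest
\getresults
\syncpipeline
\endpipeline
```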

40af897eb7 (10 months ago): Add braces for if block with large comment in psql's common.c
A patch touching this area of the code is under review, and this format makes the code slightly harder to read. Extracted from a larger patch by the same author.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/CAO6_XqroE7JuMEm1sWz55rp9fAYX2JwmcP_3m_v51vnOFdsLiQ@mail.gmail.com

2c53dec7f4 (10 months ago): Add missing entry to oauth_validator test .gitignore
Commit …

3e4d868615 (10 months ago): Remove various unnecessary (char *) casts
Remove a number of (char *) casts that are unnecessary. Or in some cases, rewrite the code to make the purpose of the cast clearer.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://www.postgresql.org/message-id/flat/fd1fcedb-3492-4fc8-9e3e-74b97f2db6c7%40eisentraut.org

ab84d0ff80 (10 months ago): Trial fix for old cross-version upgrades
Per buildfarm and reports, it seems that 9.X to 18 upgrades were failing after commit …

8e4d72573c (10 months ago): Ignore blank lines in pgindent exclude files
Currently a blank line matches everything, which is almost never what someone would want. If they really want that, they can use a wildcard regex to do it.
Author: Zsolt Parragi <zsolt.parragi@percona.com>
Discussion: https://postgr.es/m/CAN4CZFNka+2q3=-Dithr4w65RJfwPaV92T62spEzLn+T4MgcMg@mail.gmail.com

9d9a71002a (10 months ago): cirrus: Temporarily fix libcurl link error
On FreeBSD the ftp/curl port appears to be missing a minimum version dependency on libssh2, so the following starts showing up after upgrading to curl 8.11.1_1:
libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"
Awaiting an upgrade of the FreeBSD CI images to version 14, work around the issue.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAOYmi+kZAka0sdxCOBxsQc2ozEZGZKHWU_9nrPXg3sG1NJ-zJw@mail.gmail.com

b3f0be788a (10 months ago): Add support for OAUTHBEARER SASL mechanism
This commit implements OAUTHBEARER, RFC 7628, and OAuth 2.0 Device Authorization Grants, RFC 8628. In order to use this there is a new pg_hba auth method called oauth. When speaking to an OAuth-enabled server, it looks a bit like this:
$ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG
Device authorization is currently the only supported flow, so the OAuth issuer must support it in order for users to authenticate. Third-party clients may however extend this and provide their own flows. The built-in device authorization flow is currently not supported on Windows.
In order for validation to happen server side, a new framework for plugging in OAuth validation modules is added. As validation is implementation specific, with no default specified in the standard, PostgreSQL does not ship with one built-in. Each pg_hba entry can specify a specific validator or be left blank for the validator installed as default.
This adds a requirement on libcurl for the client side support, which is optional to build, but the server side has no additional build requirements. In order to run the tests, Python is required, as this adds an https server written in Python. Tests are gated behind PG_TEST_EXTRA as they open ports.
This patch has been a multi-year project with many contributors involved with reviews and in-depth discussions: Michael Paquier, Heikki Linnakangas, Zhihong Yu, Mahendrakar Srinivasarao, Andrey Chudnovsky and Stephen Frost to name a few. While Jacob Champion is the main author, there have been some levels of hacking by others. Daniel Gustafsson contributed the validation module and various bits and pieces; Thomas Munro wrote the client side support for kqueue.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: Kashif Zeeshan <kashi.zeeshan@gmail.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com

1fd1bd8710 (10 months ago): Transfer statistics during pg_upgrade
Add support to pg_dump for dumping stats, and use that during pg_upgrade so that statistics are transferred during upgrade. In most cases this removes the need for a costly re-analyze after upgrade. Some statistics are not transferred, such as extended statistics or statistics with a custom stakind.
Now pg_dump accepts the options --schema-only, --no-schema, --data-only, --no-data, --statistics-only, and --no-statistics, which allow all combinations of schema, data, and/or stats. The options are named this way to preserve compatibility with the previous --schema-only and --data-only options.
Statistics are in SECTION_DATA, unless the object itself is in SECTION_POST_DATA. The stats are represented as calls to pg_restore_relation_stats() and pg_restore_attribute_stats().
Author: Corey Huinker, Jeff Davis
Reviewed-by: Jian He
Discussion: https://postgr.es/m/CADkLM=fzX7QX6r78fShWDjNN3Vcr4PVAnvXxQ4DiGy6V=0bCUA@mail.gmail.com
Discussion: https://postgr.es/m/CADkLM%3DcB0rF3p_FuWRTMSV0983ihTRpsH%2BOCpNyiqE7Wk0vUWA%40mail.gmail.com

7da344b9f8 (10 months ago): Improve errdetail message added by ac0e33136a
Make it consistent with other similar messages.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/20250220.140839.1444694904721968348.horikyota.ntt@gmail.com

525392d572 (10 months ago): Don't lock partitions pruned by initial pruning
Before executing a cached generic plan, AcquireExecutorLocks() in plancache.c locks all relations in a plan's range table to ensure the plan is safe for execution. However, this locks runtime-prunable relations that will later be pruned during "initial" runtime pruning, introducing unnecessary overhead.
This commit defers locking for such relations to executor startup and ensures that if the CachedPlan is invalidated due to concurrent DDL during this window, replanning is triggered. Deferring these locks avoids unnecessary locking overhead for pruned partitions, resulting in significant speedup, particularly when many partitions are pruned during initial runtime pruning.
* Changes to locking when executing generic plans: AcquireExecutorLocks() now locks only unprunable relations, that is, those found in PlannedStmt.unprunableRelids (introduced in commit …)
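
The targeted case is a cached generic plan over a partitioned table, where a parameter prunes most partitions at executor startup; a minimal sketch:

```sql
CREATE TABLE measurements (ts date, v int) PARTITION BY RANGE (ts);
CREATE TABLE m2024 PARTITION OF measurements
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
CREATE TABLE m2025 PARTITION OF measurements
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');

SET plan_cache_mode = force_generic_plan;
PREPARE q(date) AS SELECT * FROM measurements WHERE ts = $1;

-- With this commit, executing the generic plan locks only the
-- partition(s) surviving initial pruning, not every partition in the
-- plan's range table.
EXECUTE q('2025-06-01');
```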

4aa6fa3cd0 (10 months ago): Include schema/table publications even with exclude options in dump
The current implementation inconsistently includes the public schema but not information_schema when those are specified in FOR TABLES IN SCHEMA ... Apart from that, the current behavior for publications w.r.t. the exclude-table and exclude-schema options (--exclude-table, --exclude-schema) differs from what we do elsewhere. We try to avoid including publications for the corresponding tables or schemas when an exclude-table or exclude-schema option is given, unlike what we do for views using functions defined in a particular schema or a subscription pointing to publications with their corresponding exclude options.
I decided not to backpatch this as it leads to a behavior change and we don't see any field report for the current behavior.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/1270733.1734134272@sss.pgh.pa.us

f11674f8df (10 months ago): doc: Fix typo in section "WAL configuration"
pg_stat_io has an attribute named fsync_time, not sync_time.
Oversight in …

4538bd3f1d (10 months ago): doc: Add details about object "wal" in pg_stat_io
This commit adds a short description of what kind of activity is tracked in pg_stat_io for the object "wal", with a link pointing to the section "WAL configuration" that has a lot of details on the matter.
This should perhaps have been added in …

2f70871c2b (10 months ago): doc: Recommend pg_stat_io rather than pg_stat_wal in WAL configuration
Since …

71f17823ba (10 months ago): Fix FATAL message for invalid recovery timeline at beginning of recovery
If the requested recovery timeline is not reachable, the logged checkpoint and timeline should be the values read from the backup_label when it is defined. The message generated used the values from the control file in this case, which is fine when recovering from the control file without a backup_label, but not if there is a backup_label.
Issue introduced in …

d38bab5edd (10 months ago): pgbench: Increase RLIMIT_NOFILE if necessary
pgbench already had code to check if the soft rlimit is too low for the specified number of connections. If too low, it errored out, telling the user to increase the limit. However, we can do better: if the hard limit allows, increase the soft limit so that it is sufficient for the number of connections. It is common for the soft limit to be considerably lower than the hard limit, due to the danger of soft limits > 1024 breaking programs that use select(2), as explained in [1].
[1]: https://0pointer.net/blog/file-descriptor-limits.html
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAGECzQQh6VSy3KG4pN1d%3Dh9J%3DD1rStFCMR%2Bt7yh_Kwj-g87aLQ%40mail.gmail.com

9b1cb58c5f (10 months ago): test_escape: Fix output of --help
The short option name -f was not listed, only its long option name --force-unsupported.
Author: Japin Li
Discussion: https://postgr.es/m/ME0P300MB04452BD1FB1B277D4C1C20B9B6C52@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
Backpatch-through: 13

9ba7bcc894 (10 months ago): Correct relation size estimate with low fillfactor
Since commit …

e596e077bb (10 months ago): Assert that ExecOpenIndices and ExecCloseIndices are not repeated
These functions should be called at most once per ResultRelInfo; it's wasteful to do otherwise, and certainly the pattern of opening twice and then closing twice is a bad idea. Moreover, aminsertcleanup functions might not be prepared to be called twice, as the just-hardened code in BRIN demonstrates.
This amounts to an API change, since such coding patterns were safe even if wasteful before v17. Hence, apply to HEAD only. (Extension code violating this new rule faces some risk in v17, but we just fixed brininsertcleanup and there are probably few other aminsertcleanup functions as yet. So the odds of breaking usable code seem higher than the odds of doing something useful with a back-patch.)
Bug: #18815
Reported-by: Sergey Belyashov <sergey.belyashov@gmail.com>
Discussion: https://postgr.es/m/18815-2a0407cc7f40b327@postgresql.org

9ff68679b5 (10 months ago): Fix crash in brininsertcleanup during logical replication
Logical replication crashes if the subscriber's partitioned table has a BRIN index. There are two independently blamable causes, and this patch fixes both:
1. brininsertcleanup fails if called twice for the same IndexInfo, because it half-destroys its BrinInsertState but leaves it still linked from ii_AmCache. brininsert would also fail in that state, so it's pretty hard to see any advantage to this coding. Fully remove the BrinInsertState, instead, so that a new brininsert call would create a new cache.
2. A logical replication subscriber sometimes does ExecOpenIndices twice on the same ResultRelInfo, followed by doing ExecCloseIndices twice; the second call reaches the brininsertcleanup bug. Quite aside from tickling unexpected cases in aminsertcleanup methods, this seems very wasteful, because the IndexInfos built in the first ExecOpenIndices call are just lost during the second call, and have to be rebuilt at possibly-nontrivial cost. We should establish a coding rule that you don't do that.
The problematic coding is that when the target table is partitioned, apply_handle_tuple_routing calls ExecFindPartition, which does ExecOpenIndices (and expects that ExecCleanupTupleRouting will close the indexes again). Using the ResultRelInfo made by ExecFindPartition, it calls apply_handle_delete_internal or apply_handle_insert_internal, both of which think they need to do ExecOpenIndices/ExecCloseIndices for themselves. They do in the main non-partitioned code paths, but not here. The simplest fix is to pull their ExecOpenIndices/ExecCloseIndices calls out and put them in the call sites for the non-partitioned cases. (We could have refactored apply_handle_update_internal similarly, but I did not do so today because there's no bug there: the partitioned code path doesn't call it.) Also, remove the always-duplicative open/close calls within apply_handle_tuple_routing itself.
Since brininsertcleanup and indeed the whole aminsertcleanup mechanism are new in v17, there's no observable bug in older branches. A case could be made for trying to avoid these duplicative open/close calls in the older branches, but for now it seems not worth the trouble and risk of new bugs.
Bug: #18815
Reported-by: Sergey Belyashov <sergey.belyashov@gmail.com>
Discussion: https://postgr.es/m/18815-2a0407cc7f40b327@postgresql.org
Backpatch-through: 17
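
A sketch of the subscriber-side setup that crashed (names and the connection string are placeholders):

```sql
-- Subscriber: partitioned target table with a BRIN index.
CREATE TABLE t (id int, ts timestamptz) PARTITION BY RANGE (id);
CREATE TABLE t_p0 PARTITION OF t FOR VALUES FROM (0) TO (1000);
CREATE INDEX ON t USING brin (ts);

CREATE SUBSCRIPTION s
    CONNECTION 'host=pub.example dbname=postgres'
    PUBLICATION p;
-- Before the fix, replicated inserts or deletes routed through the
-- partitioned table hit brininsertcleanup twice and crashed.
```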

a1b4f289be (10 months ago): Consider BufFiles when adjusting hashjoin parameters
Until now ExecChooseHashTableSize() considered only the size of the in-memory hash table, and ignored the memory needed for the batch files. That can be a significant amount, because each batch needs two BufFiles (each with a BLCKSZ buffer). The same issue applies to increasing the number of batches during execution.
It's also possible to trigger a "batch explosion", e.g. due to duplicate values or skew. We've seen reports of joins with hundreds of thousands (or even millions) of batches, consuming gigabytes of memory, triggering OOM errors. These cases may be fairly rare, but it's clearly possible to hit them. These issues can't be prevented during planning. Even if we improve that, it does not help with execution-time batch explosion. We can however reduce the impact and use as little memory as possible.
This patch improves the behavior by adjusting how the memory is divided between the hash table and batch files. It may be better to use fewer batch files, even if it means the hash table will exceed the limit. The capacity of the hash node may be increased either by doubling the number of batches, or doubling the size of the in-memory hash table. The outcome is the same, but the memory usage may be very different. For low nbatch values it's better to add batches, for high nbatch values it's better to allow a larger hash table. The patch considers both options, both during the initial sizing and then during execution, to minimize how much the limit gets exceeded.
It might seem this patch is relaxing the memory limit, allowing it to be exceeded. But that's not really the case. It has always been like that, except the memory used by batches was ignored. Allowing the hash table to grow may also prevent the batch explosion. If there's a large batch that can't be split (due to hash collisions or duplicate values), at some point the memory limit will increase enough for the batch to fit into the hash table.
This patch was in the works for a long time. The early versions were posted in 2019, and revived every year or two when we happened to get the next report of OOM due to a hashjoin batch explosion. Each of those patch versions was reviewed by a couple of people. I'm mentioning only Melanie Plageman and Robert Haas, because they reviewed the last version, and the older patches are very different.
Reviewed-by: Melanie Plageman, Robert Haas
Discussion: https://postgr.es/m/7bed6c08-72a0-4ab9-a79c-e01fcdd0940f@vondra.me
Discussion: https://postgr.es/m/20190504003414.bulcbnge3rhwhcsh%40development
Discussion: https://postgr.es/m/20190428141901.5dsbge2ka3rxmpk6%40development
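
The overhead being accounted for is easy to estimate: with the default 8 kB BLCKSZ, each batch costs two BufFile buffers, so 256k batches alone need roughly 2 × 262144 × 8 kB ≈ 4 GB, far beyond any sensible work_mem. A hedged sketch of how this surfaces in EXPLAIN (table names and the quoted figures are illustrative):

```sql
SET work_mem = '4MB';

-- A join with heavily duplicated keys can keep splitting batches;
-- each split doubles nbatch, and thus the BufFile buffer memory.
EXPLAIN (ANALYZE, COSTS OFF)
SELECT * FROM big_fact f JOIN big_dim d USING (key);
-- A Hash node reporting "Batches: 262144" would imply ~4 GB of batch
-- buffers; the sizing logic now weighs that against simply letting
-- the in-memory hash table grow past work_mem.
```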

8b886a4e34 (10 months ago): tests: BackgroundPsql: Fix potential for lost errors on windows
This addresses various corner cases in BackgroundPsql:
- On windows stdout and stderr may arrive out of order, leading to errors not being reported, or attributed to the wrong statement. To fix, emit the "query-separation banner" on both stdout and stderr and wait for both.
- Very occasionally the "query-separation banner" would not get removed, because we waited until the banner arrived, but then replaced the banner plus newline. To fix, wait for banner and newline.
- For interactive psql replacing $banner\n is not sufficient; interactive psql outputs \r\n.
- For interactive psql, where commands are echoed to stdout, the \echo command, rather than its output, would be matched. This would sometimes lead to output from the prior query, or wait_connect(), being returned in the next command. This also affected wait_connect(), leading to sometimes sending queries to psql before the connection actually was established.
While debugging these issues I also found that it's hard to know whether a query separation banner was attributed to the right query. Make that easier by counting the queries each BackgroundPsql instance has emitted and including the number in the banner. Also emit psql stdout/stderr in query() and wait_connect() as Test::More notes; without that it's rather hard to debug some issues in CI and buildfarm.
As this can cause issues not just in to-be-added tests, but also existing ones, backpatch the fix to all supported versions.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/wmovm6xcbwh7twdtymxuboaoarbvwj2haasd3sikzlb3dkgz76@n45rzycluzft
Backpatch-through: 13

80d7f99049 (10 months ago): Add ATAlterConstraint struct for ALTER .. CONSTRAINT
Replace the use of Constraint with a new ATAlterConstraint struct, which allows us to pass additional information. No functionality is added by this commit. This is necessary for future work that allows altering constraints in other ways.
I (Álvaro) took the liberty of restructuring the code for ALTER CONSTRAINT beyond what Amul did. The original coding before Amul's patch was unnecessarily baroque, and this change makes things simpler by removing one level of subroutine. Also, partly remove the assumption that only partitioned tables are relevant (by passing sensible 'recurse' arguments) and no longer ignore whether ONLY was specified. I say 'partly' because the current coding only walks down via the 'conparentid' relationship, which is only used for partitioned tables; but future patches could handle ONLY or not for other types of constraint changes for legacy inheritance trees too.
Author: Amul Sul <sulamul@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CAAJ_b94bfgPV-8Mw_HwSBeheVwaK9=5s+7+KbBj_NpwXQFgDGg@mail.gmail.com

e983ee9380 (10 months ago): Improve statistics estimation for single-column GROUP BY in sub-queries
This commit follows the idea of the …

8a695d7998 (10 months ago): Add a test for commit ac0e33136a using the injection point
This test uses an injection point to bypass the time overhead caused by the idle_replication_slot_timeout GUC, which has a minimum value of one minute.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Author: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com

302cf15759 (10 months ago): Add support for LIKE in CREATE FOREIGN TABLE
LIKE enables the creation of foreign tables based on the column definitions, constraints and objects of the defined source relation(s). This feature mirrors the behavior of CREATE TABLE LIKE, but ignores the INCLUDING sub-options that do not make sense for foreign tables: INDEXES, COMPRESSION, IDENTITY and STORAGE. The supported sub-options are COMMENTS, CONSTRAINTS, DEFAULTS, GENERATED and STATISTICS, mapping with the clauses already supported by the command.
Note that the restriction with LIKE in CREATE FOREIGN TABLE was added in …
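
A minimal sketch of the new syntax (the server and table names are placeholders):

```sql
CREATE TABLE local_src (
    id   int,
    note text DEFAULT 'n/a',
    CHECK (id >= 0)
);

-- Copies column definitions from local_src; of the INCLUDING options,
-- only COMMENTS, CONSTRAINTS, DEFAULTS, GENERATED and STATISTICS are
-- honored for foreign tables.
CREATE FOREIGN TABLE ft_copy
    (LIKE local_src INCLUDING DEFAULTS INCLUDING CONSTRAINTS)
    SERVER remote_srv;
```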