mirror of https://github.com/postgres/postgres
Branch: REL_17_STABLE

Branches:
REL2_0B
REL6_4
REL6_5_PATCHES
REL7_0_PATCHES
REL7_1_STABLE
REL7_2_STABLE
REL7_3_STABLE
REL7_4_STABLE
REL8_0_STABLE
REL8_1_STABLE
REL8_2_STABLE
REL8_3_STABLE
REL8_4_STABLE
REL8_5_ALPHA1_BRANCH
REL8_5_ALPHA2_BRANCH
REL8_5_ALPHA3_BRANCH
REL9_0_ALPHA4_BRANCH
REL9_0_ALPHA5_BRANCH
REL9_0_STABLE
REL9_1_STABLE
REL9_2_STABLE
REL9_3_STABLE
REL9_4_STABLE
REL9_5_STABLE
REL9_6_STABLE
REL_10_STABLE
REL_11_STABLE
REL_12_STABLE
REL_13_STABLE
REL_14_STABLE
REL_15_STABLE
REL_16_STABLE
REL_17_STABLE
REL_18_STABLE
Release_1_0_3
WIN32_DEV
ecpg_big_bison
master
Tags:

PG95-1_01
PG95-1_08
PG95-1_09
REL2_0
REL6_1
REL6_1_1
REL6_2
REL6_2_1
REL6_3
REL6_3_2
REL6_4_2
REL6_5
REL6_5_1
REL6_5_2
REL6_5_3
REL7_0
REL7_0_2
REL7_0_3
REL7_1
REL7_1_1
REL7_1_2
REL7_1_3
REL7_1_BETA
REL7_1_BETA2
REL7_1_BETA3
REL7_2
REL7_2_1
REL7_2_2
REL7_2_3
REL7_2_4
REL7_2_5
REL7_2_6
REL7_2_7
REL7_2_8
REL7_2_BETA1
REL7_2_BETA2
REL7_2_BETA3
REL7_2_BETA4
REL7_2_BETA5
REL7_2_RC1
REL7_2_RC2
REL7_3
REL7_3_1
REL7_3_10
REL7_3_11
REL7_3_12
REL7_3_13
REL7_3_14
REL7_3_15
REL7_3_16
REL7_3_17
REL7_3_18
REL7_3_19
REL7_3_2
REL7_3_20
REL7_3_21
REL7_3_3
REL7_3_4
REL7_3_5
REL7_3_6
REL7_3_7
REL7_3_8
REL7_3_9
REL7_4
REL7_4_1
REL7_4_10
REL7_4_11
REL7_4_12
REL7_4_13
REL7_4_14
REL7_4_15
REL7_4_16
REL7_4_17
REL7_4_18
REL7_4_19
REL7_4_2
REL7_4_20
REL7_4_21
REL7_4_22
REL7_4_23
REL7_4_24
REL7_4_25
REL7_4_26
REL7_4_27
REL7_4_28
REL7_4_29
REL7_4_3
REL7_4_30
REL7_4_4
REL7_4_5
REL7_4_6
REL7_4_7
REL7_4_8
REL7_4_9
REL7_4_BETA1
REL7_4_BETA2
REL7_4_BETA3
REL7_4_BETA4
REL7_4_BETA5
REL7_4_RC1
REL7_4_RC2
REL8_0_0
REL8_0_0BETA1
REL8_0_0BETA2
REL8_0_0BETA3
REL8_0_0BETA4
REL8_0_0BETA5
REL8_0_0RC1
REL8_0_0RC2
REL8_0_0RC3
REL8_0_0RC4
REL8_0_0RC5
REL8_0_1
REL8_0_10
REL8_0_11
REL8_0_12
REL8_0_13
REL8_0_14
REL8_0_15
REL8_0_16
REL8_0_17
REL8_0_18
REL8_0_19
REL8_0_2
REL8_0_20
REL8_0_21
REL8_0_22
REL8_0_23
REL8_0_24
REL8_0_25
REL8_0_26
REL8_0_3
REL8_0_4
REL8_0_5
REL8_0_6
REL8_0_7
REL8_0_8
REL8_0_9
REL8_1_0
REL8_1_0BETA1
REL8_1_0BETA2
REL8_1_0BETA3
REL8_1_0BETA4
REL8_1_0RC1
REL8_1_1
REL8_1_10
REL8_1_11
REL8_1_12
REL8_1_13
REL8_1_14
REL8_1_15
REL8_1_16
REL8_1_17
REL8_1_18
REL8_1_19
REL8_1_2
REL8_1_20
REL8_1_21
REL8_1_22
REL8_1_23
REL8_1_3
REL8_1_4
REL8_1_5
REL8_1_6
REL8_1_7
REL8_1_8
REL8_1_9
REL8_2_0
REL8_2_1
REL8_2_10
REL8_2_11
REL8_2_12
REL8_2_13
REL8_2_14
REL8_2_15
REL8_2_16
REL8_2_17
REL8_2_18
REL8_2_19
REL8_2_2
REL8_2_20
REL8_2_21
REL8_2_22
REL8_2_23
REL8_2_3
REL8_2_4
REL8_2_5
REL8_2_6
REL8_2_7
REL8_2_8
REL8_2_9
REL8_2_BETA1
REL8_2_BETA2
REL8_2_BETA3
REL8_2_RC1
REL8_3_0
REL8_3_1
REL8_3_10
REL8_3_11
REL8_3_12
REL8_3_13
REL8_3_14
REL8_3_15
REL8_3_16
REL8_3_17
REL8_3_18
REL8_3_19
REL8_3_2
REL8_3_20
REL8_3_21
REL8_3_22
REL8_3_23
REL8_3_3
REL8_3_4
REL8_3_5
REL8_3_6
REL8_3_7
REL8_3_8
REL8_3_9
REL8_3_BETA1
REL8_3_BETA2
REL8_3_BETA3
REL8_3_BETA4
REL8_3_RC1
REL8_3_RC2
REL8_4_0
REL8_4_1
REL8_4_10
REL8_4_11
REL8_4_12
REL8_4_13
REL8_4_14
REL8_4_15
REL8_4_16
REL8_4_17
REL8_4_18
REL8_4_19
REL8_4_2
REL8_4_20
REL8_4_21
REL8_4_22
REL8_4_3
REL8_4_4
REL8_4_5
REL8_4_6
REL8_4_7
REL8_4_8
REL8_4_9
REL8_4_BETA1
REL8_4_BETA2
REL8_4_RC1
REL8_4_RC2
REL8_5_ALPHA1
REL8_5_ALPHA2
REL8_5_ALPHA3
REL9_0_0
REL9_0_1
REL9_0_10
REL9_0_11
REL9_0_12
REL9_0_13
REL9_0_14
REL9_0_15
REL9_0_16
REL9_0_17
REL9_0_18
REL9_0_19
REL9_0_2
REL9_0_20
REL9_0_21
REL9_0_22
REL9_0_23
REL9_0_3
REL9_0_4
REL9_0_5
REL9_0_6
REL9_0_7
REL9_0_8
REL9_0_9
REL9_0_ALPHA4
REL9_0_ALPHA5
REL9_0_BETA1
REL9_0_BETA2
REL9_0_BETA3
REL9_0_BETA4
REL9_0_RC1
REL9_1_0
REL9_1_1
REL9_1_10
REL9_1_11
REL9_1_12
REL9_1_13
REL9_1_14
REL9_1_15
REL9_1_16
REL9_1_17
REL9_1_18
REL9_1_19
REL9_1_2
REL9_1_20
REL9_1_21
REL9_1_22
REL9_1_23
REL9_1_24
REL9_1_3
REL9_1_4
REL9_1_5
REL9_1_6
REL9_1_7
REL9_1_8
REL9_1_9
REL9_1_ALPHA1
REL9_1_ALPHA2
REL9_1_ALPHA3
REL9_1_ALPHA4
REL9_1_ALPHA5
REL9_1_BETA1
REL9_1_BETA2
REL9_1_BETA3
REL9_1_RC1
REL9_2_0
REL9_2_1
REL9_2_10
REL9_2_11
REL9_2_12
REL9_2_13
REL9_2_14
REL9_2_15
REL9_2_16
REL9_2_17
REL9_2_18
REL9_2_19
REL9_2_2
REL9_2_20
REL9_2_21
REL9_2_22
REL9_2_23
REL9_2_24
REL9_2_3
REL9_2_4
REL9_2_5
REL9_2_6
REL9_2_7
REL9_2_8
REL9_2_9
REL9_2_BETA1
REL9_2_BETA2
REL9_2_BETA3
REL9_2_BETA4
REL9_2_RC1
REL9_3_0
REL9_3_1
REL9_3_10
REL9_3_11
REL9_3_12
REL9_3_13
REL9_3_14
REL9_3_15
REL9_3_16
REL9_3_17
REL9_3_18
REL9_3_19
REL9_3_2
REL9_3_20
REL9_3_21
REL9_3_22
REL9_3_23
REL9_3_24
REL9_3_25
REL9_3_3
REL9_3_4
REL9_3_5
REL9_3_6
REL9_3_7
REL9_3_8
REL9_3_9
REL9_3_BETA1
REL9_3_BETA2
REL9_3_RC1
REL9_4_0
REL9_4_1
REL9_4_10
REL9_4_11
REL9_4_12
REL9_4_13
REL9_4_14
REL9_4_15
REL9_4_16
REL9_4_17
REL9_4_18
REL9_4_19
REL9_4_2
REL9_4_20
REL9_4_21
REL9_4_22
REL9_4_23
REL9_4_24
REL9_4_25
REL9_4_26
REL9_4_3
REL9_4_4
REL9_4_5
REL9_4_6
REL9_4_7
REL9_4_8
REL9_4_9
REL9_4_BETA1
REL9_4_BETA2
REL9_4_BETA3
REL9_4_RC1
REL9_5_0
REL9_5_1
REL9_5_10
REL9_5_11
REL9_5_12
REL9_5_13
REL9_5_14
REL9_5_15
REL9_5_16
REL9_5_17
REL9_5_18
REL9_5_19
REL9_5_2
REL9_5_20
REL9_5_21
REL9_5_22
REL9_5_23
REL9_5_24
REL9_5_25
REL9_5_3
REL9_5_4
REL9_5_5
REL9_5_6
REL9_5_7
REL9_5_8
REL9_5_9
REL9_5_ALPHA1
REL9_5_ALPHA2
REL9_5_BETA1
REL9_5_BETA2
REL9_5_RC1
REL9_6_0
REL9_6_1
REL9_6_10
REL9_6_11
REL9_6_12
REL9_6_13
REL9_6_14
REL9_6_15
REL9_6_16
REL9_6_17
REL9_6_18
REL9_6_19
REL9_6_2
REL9_6_20
REL9_6_21
REL9_6_22
REL9_6_23
REL9_6_24
REL9_6_3
REL9_6_4
REL9_6_5
REL9_6_6
REL9_6_7
REL9_6_8
REL9_6_9
REL9_6_BETA1
REL9_6_BETA2
REL9_6_BETA3
REL9_6_BETA4
REL9_6_RC1
REL_10_0
REL_10_1
REL_10_10
REL_10_11
REL_10_12
REL_10_13
REL_10_14
REL_10_15
REL_10_16
REL_10_17
REL_10_18
REL_10_19
REL_10_2
REL_10_20
REL_10_21
REL_10_22
REL_10_23
REL_10_3
REL_10_4
REL_10_5
REL_10_6
REL_10_7
REL_10_8
REL_10_9
REL_10_BETA1
REL_10_BETA2
REL_10_BETA3
REL_10_BETA4
REL_10_RC1
REL_11_0
REL_11_1
REL_11_10
REL_11_11
REL_11_12
REL_11_13
REL_11_14
REL_11_15
REL_11_16
REL_11_17
REL_11_18
REL_11_19
REL_11_2
REL_11_20
REL_11_21
REL_11_22
REL_11_3
REL_11_4
REL_11_5
REL_11_6
REL_11_7
REL_11_8
REL_11_9
REL_11_BETA1
REL_11_BETA2
REL_11_BETA3
REL_11_BETA4
REL_11_RC1
REL_12_0
REL_12_1
REL_12_10
REL_12_11
REL_12_12
REL_12_13
REL_12_14
REL_12_15
REL_12_16
REL_12_17
REL_12_18
REL_12_19
REL_12_2
REL_12_20
REL_12_21
REL_12_22
REL_12_3
REL_12_4
REL_12_5
REL_12_6
REL_12_7
REL_12_8
REL_12_9
REL_12_BETA1
REL_12_BETA2
REL_12_BETA3
REL_12_BETA4
REL_12_RC1
REL_13_0
REL_13_1
REL_13_10
REL_13_11
REL_13_12
REL_13_13
REL_13_14
REL_13_15
REL_13_16
REL_13_17
REL_13_18
REL_13_19
REL_13_2
REL_13_20
REL_13_21
REL_13_22
REL_13_23
REL_13_3
REL_13_4
REL_13_5
REL_13_6
REL_13_7
REL_13_8
REL_13_9
REL_13_BETA1
REL_13_BETA2
REL_13_BETA3
REL_13_RC1
REL_14_0
REL_14_1
REL_14_10
REL_14_11
REL_14_12
REL_14_13
REL_14_14
REL_14_15
REL_14_16
REL_14_17
REL_14_18
REL_14_19
REL_14_2
REL_14_20
REL_14_21
REL_14_22
REL_14_3
REL_14_4
REL_14_5
REL_14_6
REL_14_7
REL_14_8
REL_14_9
REL_14_BETA1
REL_14_BETA2
REL_14_BETA3
REL_14_RC1
REL_15_0
REL_15_1
REL_15_10
REL_15_11
REL_15_12
REL_15_13
REL_15_14
REL_15_15
REL_15_16
REL_15_17
REL_15_2
REL_15_3
REL_15_4
REL_15_5
REL_15_6
REL_15_7
REL_15_8
REL_15_9
REL_15_BETA1
REL_15_BETA2
REL_15_BETA3
REL_15_BETA4
REL_15_RC1
REL_15_RC2
REL_16_0
REL_16_1
REL_16_10
REL_16_11
REL_16_12
REL_16_13
REL_16_2
REL_16_3
REL_16_4
REL_16_5
REL_16_6
REL_16_7
REL_16_8
REL_16_9
REL_16_BETA1
REL_16_BETA2
REL_16_BETA3
REL_16_RC1
REL_17_0
REL_17_1
REL_17_2
REL_17_3
REL_17_4
REL_17_5
REL_17_6
REL_17_7
REL_17_8
REL_17_9
REL_17_BETA1
REL_17_BETA2
REL_17_BETA3
REL_17_RC1
REL_18_0
REL_18_1
REL_18_2
REL_18_3
REL_18_BETA1
REL_18_BETA2
REL_18_BETA3
REL_18_RC1
Release_1_0_2
Release_2_0
Release_2_0_0
release-6-3
59992 Commits (REL_17_STABLE)
**d54e754415** · Don't call CheckAttributeType() with InvalidOid on dropped cols · 6 hours ago

If CheckAttributeType() is called with InvalidOid, it performs a bunch of pointless, futile syscache lookups with InvalidOid, but ultimately tolerates it and has no effect. We were calling it with InvalidOid on dropped columns, but it seems accidental that it works, so let's stop doing it.

Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/93ce56cd-02a6-4db1-8224-c8999372facc@iki.fi
Backpatch-through: 14
**54343f6f90** · Don't allow composite type to be member of itself via multirange · 6 hours ago

CheckAttributeType() checks that a composite type is not made a member of itself with ALTER TABLE ADD COLUMN or ALTER TYPE ADD ATTRIBUTE, even indirectly via a domain, array, another composite type or a range type. But it missed checking for multiranges. That was a simple oversight when multiranges were added.

Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/93ce56cd-02a6-4db1-8224-c8999372facc@iki.fi
Backpatch-through: 14

**9dda30dd3d** · catcache.c: use C_COLLATION_OID for texteqfast/texthashfast. · 1 day ago

The problem report was about setting GUCs in the startup packet for a physical replication connection. Setting the GUC required an ACL check, which performed a lookup on pg_parameter_acl.parname. The catalog cache was hardwired to use DEFAULT_COLLATION_OID for texteqfast() and texthashfast(), but the database default collation was uninitialized because it's a physical walsender and never connects to a database. In versions 18 and later, this resulted in a NULL pointer dereference, while in version 17 it resulted in an ERROR. As the comments stated, using DEFAULT_COLLATION_OID was arbitrary anyway: if the collation actually mattered, it should have used the column's actual collation. (In the catalog, some text columns use the default collation and some use "C".) Fix by using C_COLLATION_OID, which doesn't require any initialization and is always available. When any deterministic collation will do, it's best to consistently use the simplest and fastest one, so this is a good idea anyway. Another problem was raised in the thread, which this commit doesn't fix (see second discussion link).

Reported-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/D18AD72A-5004-4EF8-AF80-10732AF677FA@yandex-team.ru
Discussion: https://postgr.es/m/4524ed61a015d3496fc008644dcb999bb31916a7.camel%40j-davis.com
Backpatch-through: 17
**c97a286185** · Guard against overly-long numeric formatting symbols from locale. · 1 day ago

to_char() allocates its output buffer with 8 bytes per formatting code in the pattern. If the locale's currency symbol, thousands separator, or decimal or sign symbol is more than 8 bytes long, in principle we could overrun the output buffer. No such locales exist in the real world, so it seems sufficient to truncate the symbol if we do see it's too long.

Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/638232.1776790821@sss.pgh.pa.us
Backpatch-through: 14
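The defensive truncation this commit describes can be sketched outside PostgreSQL as a simple bounded copy. This is not the actual formatting.c code: the function name and fixed budget constant are invented for illustration, with the 8-byte-per-format-code budget taken from the commit message.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: to_char() budgets a fixed number of output bytes
 * per formatting code, so a locale-supplied symbol (currency sign,
 * thousands separator, ...) is clamped to that budget rather than being
 * allowed to overrun the output buffer. */
#define SYMBOL_BUDGET 8

static size_t
copy_symbol_truncated(char *dst, const char *symbol)
{
    size_t len = strlen(symbol);

    if (len > SYMBOL_BUDGET)
        len = SYMBOL_BUDGET;    /* truncate overlong locale symbols */
    memcpy(dst, symbol, len);
    dst[len] = '\0';
    return len;
}
```

A symbol within budget is copied verbatim; an overlong one is silently cut at the budget, which is acceptable here because no real-world locale supplies such symbols.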
**ea5f0d176a** · Prevent some buffer overruns in spell.c's parsing of affix files. · 1 day ago

parse_affentry() and addCompoundAffixFlagValue() each collect fields from an affix file into working buffers of size BUFSIZ. They failed to defend against overlength fields, so that a malicious affix file could cause a stack smash. BUFSIZ (typically 8K) is certainly way longer than any reasonable affix field, but let's fix this while we're closing holes in this area. I chose to do this by silently truncating the input before it can overrun the buffer, using logic comparable to the existing logic in get_nextfield(). Certainly there's at least as good an argument for raising an error, but for now let's follow the existing precedent.

Reported-by: Igor Stepansky <igor.stepansky@orca.security>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/864123.1776810909@sss.pgh.pa.us
Backpatch-through: 14

**a5426dbf84** · Prevent buffer overrun in spell.c's CheckAffix(). · 1 day ago

This function writes into a caller-supplied buffer of length 2 * MAXNORMLEN, which should be plenty in real-world cases. However a malicious affix file could supply an affix long enough to overrun that. Defend by just rejecting the match if it would overrun the buffer. I also inserted a check of the input word length against Affix->replen, just to be sure we won't index off the buffer, though it would be caller error for that not to be true. Also make the actual copying steps a bit more readable, and remove an unnecessary requirement for the whole input word to fit into the output buffer (even though it always will with the current caller). The lack of documentation in this code makes my head hurt, so I also reverse-engineered a basic header comment for CheckAffix.

Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/641711.1776792744@sss.pgh.pa.us
Backpatch-through: 14
**becf6d2696** · Allow ALTER INDEX .. ATTACH PARTITION to validate a parent index · 2 days ago

This commit tweaks ALTER INDEX .. ATTACH PARTITION to attempt a validation of a parent index in the case where an index is already attached but the parent is not yet valid. This occurs in cases where a parent index was created invalid, such as with CREATE INDEX ONLY, but was left invalid after an invalid child index was attached (partitioned indexes set indisvalid to false if at least one partition is !indisvalid; indisvalid is true in a partitioned table iff all partitions are indisvalid). This could leave a partition tree in a situation where a user could not bring the parent index back to valid after fixing the child index, as there is no built-in mechanism to do so. This commit relies on the fact that repeated ATTACH PARTITION commands on the same index silently succeed. An invalid parent index is more than just a passive issue: it causes, for example, ON CONFLICT failures on a partitioned table if the invalid parent index is used to enforce a unique constraint. Some test cases are added to track some of the problematic patterns, using a set of partition trees with combinations of invalid indexes and ATTACH PARTITION.

Reported-by: Mohamed Ali <moali.pg@gmail.com>
Author: Sami Imseih <sanmimseih@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Haibo Yan <tristan.yim@gmail.com>
Discussion: http://postgr.es/m/CAGnOmWqi1D9ycBgUeOGf6mOCd2Dcf=6sKhbf4sHLs5xAcKVCMQ@mail.gmail.com
Backpatch-through: 14

**94f1409847** · Make plpgsql_trap test more robust and less resource-intensive. · 2 days ago

We were using "select count(*) into x from generate_series(1, 1_000_000_000_000)" to waste one second waiting for a statement timeout trap. Aside from consuming CPU to little purpose, this could easily eat several hundred MB of temporary file space, which has been observed to cause out-of-disk-space errors in the buildfarm. Let's just use "pg_sleep(10)", which is far less resource-intensive. Also update the "when others" exception handler so that if it does ever again trap an error, it will tell us what error. The cause of these intermittent buildfarm failures had been obscure for a while.

Discussion: https://postgr.es/m/557992.1776779694@sss.pgh.pa.us
Backpatch-through: 14

**9d6208939a** · Fix incorrect NEW references to generated columns in rule rewriting · 3 days ago

When a rule action or rule qualification references NEW.col where col is a generated column (stored or virtual), the rewriter produces incorrect results. rewriteTargetListIU removes generated columns from the query's target list, since stored generated columns are recomputed by the executor and virtual ones store nothing. However, ReplaceVarsFromTargetList then cannot find these columns when resolving NEW references during rule rewriting. For UPDATE, the REPLACEVARS_CHANGE_VARNO fallback redirects NEW.col to the original target relation, making it read the pre-update value (same as OLD.col). For INSERT, REPLACEVARS_SUBSTITUTE_NULL replaces it with NULL. Both are wrong when the generated column depends on columns being modified. Fix by building target list entries for generated columns from their generation expressions, pre-resolving the NEW.attribute references within those expressions against the query's targetlist, and passing them together with the query's targetlist to ReplaceVarsFromTargetList. Back-patch to all supported branches. Virtual generated columns were added in v18, so the back-patches in pre-v18 branches only handle stored generated columns.

Reported-by: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>
Author: Richard Guo <guofenglinux@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDexGTmCZzx=73gXkY2ZADS6LRhpnU+-8Y_QmrdTS6yUhA@mail.gmail.com
Backpatch-through: 14
**e381843cfa** · Fix orphaned processes when startup process fails during PM_STARTUP · 3 days ago

When the startup process exits with a FATAL error during PM_STARTUP, the postmaster called ExitPostmaster() directly, assuming that no other processes are running at this stage. Since …
**19fcb75c81** · doc: Correct context description for some JIT support GUCs · 3 days ago

The documentation for jit_debugging_support and jit_profiling_support previously stated that these parameters can only be set at server start. However, both parameters use the PGC_SU_BACKEND context, meaning they can be set at session start by superusers or users granted the appropriate SET privilege, but cannot be changed within an active session. This commit updates the documentation to reflect the actual behavior. Backpatch to all supported versions.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAHGQGwEpMDpB-K8SSUVRRHg6L6z3pLAkekd9aviOS=ns0EC=+Q@mail.gmail.com
Backpatch-through: 14

**53cb4ec1de** · Fix relid-set clobber during join removal. · 3 days ago

Commit …
**766d402866** · Clean up all relid fields of RestrictInfos during join removal. · 3 days ago

The original implementation of remove_rel_from_restrictinfo() thought it could skate by with removing no-longer-valid relid bits from only the clause_relids and required_relids fields. This is quite bogus, although somehow we had not run across a counterexample before now. At minimum, the left_relids and right_relids fields need to be fixed because they will be examined later by clause_sides_match_join(). But it seems pretty foolish not to fix all the relid fields, so do that. This needs to be back-patched as far as v16, because the bug report shows a planner failure that does not occur before v16. I'm a little nervous about back-patching, because this could cause unexpected plan changes due to opening up join possibilities that were rejected before. But it's hard to argue that this isn't a regression. Also, the fact that this changes no existing regression test results suggests that the scope of changes may be fairly narrow. I'll refrain from back-patching further though, since no adverse effects have been demonstrated in older branches.

Bug: #19460
Reported-by: François Jehl <francois.jehl@pigment.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/19460-5625143cef66012f@postgresql.org
Backpatch-through: 16

**88d7fdcc9a** · Flush statistics during idle periods in parallel apply worker. · 4 days ago

Parallel apply workers previously failed to report statistics while waiting for new work in the main loop. This resulted in the stats from the most recent transaction remaining unbuffered, leading to arbitrary reporting delays, particularly when streamed transactions were infrequent. This commit ensures that statistics are explicitly flushed when the worker is idle, providing timely visibility into accumulated worker activity.

Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16, where it was introduced
Discussion: https://postgr.es/m/TYRPR01MB1419579F217CC4332B615589594202@TYRPR01MB14195.jpnprd01.prod.outlook.com
**0d09492a74** · doc: Improve description of pg_ctl -l log file permissions · 7 days ago

The documentation stated only that the log file created by pg_ctl -l is inaccessible to other users by default. However, since commit …

**29d8bd9085** · Update .abi-compliance-history for change to enum ProcSignalReason · 1 week ago

As noted in the commit message for …
**ec61832231** · Fix comments for Korean encodings in encnames.c · 1 week ago

* JOHAB: replace the incorrect "simplified Chinese" description with a correct one that identifies it as the Korean combining (Johab) encoding standardized in KS X 1001 annex 3.
* EUC_KR: drop a stray space before the comma in the existing comment, and note that the encoding covers the KS X 1001 precomposed (Wansung) form.
* UHC: spell out "Unified Hangul Code", clarify that it is Microsoft Windows CodePage 949, and describe its relationship to EUC-KR (superset covering all 11,172 precomposed Hangul syllables).

Backpatch-through: 14
Author: Henson Choi <assam258@gmail.com>
Discussion: https://postgr.es/m/CAAAe_zAFz1v-3b7Je4L%2B%3DwZM3UGAczXV47YVZfZi9wbJxspxeA%40mail.gmail.com
**1a2d60cc04** · Fix incorrect comment in JsonTablePlanJoinNextRow() · 1 week ago

The comment on the return-false path when both UNION siblings are exhausted said "there are more rows," which is the opposite of what the code does. The code itself is correct, returning false to signal no more rows, but the misleading comment could tempt a reader into "fixing" the return value, which would cause UNION plans to loop indefinitely. Back-patch to 17, where JSON_TABLE was introduced.

Author: Chuanwen Hu <463945512@qq.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/tencent_4CC6316F02DECA61ACCF22F933FEA5C12806@qq.com
Backpatch-through: 17

**a756067a0e** · Check for unterminated strings when calling uloc_getLanguage(). · 1 week ago

Missed by commit …
**c78947badc** · Add tests for low-level PGLZ [de]compression routines · 1 week ago

The goal of this module is to provide an entry point for the coverage of the low-level compression and decompression PGLZ routines. The new test is moved to a new parallel group, with all the existing compression-related tests added to it. This includes tests for the cases detected by fuzzing that emulate corrupted compressed data, as fixed by 2b5ba2a0a141:

- Set control bit with read of a match tag, where no data follows.
- Set control bit with read of a match tag, where 1 byte follows.
- Set control bit with match tag where length nibble is 3 bytes (extended case).

While on it, some tests are added for compress/decompress roundtrips, and for check_complete=false/true. Like …
**91741b7cb7** · Fix excessive logging in idle slotsync worker. · 2 weeks ago

The slotsync worker was incorrectly identifying no-op states as successful updates, triggering a busy loop to sync slots that logged messages every 200ms. This patch corrects the logic to properly classify these states, enabling the worker to respect normal sleep intervals when no work is performed.

Reported-by: Fujii Masao <masao.fujii@gmail.com>
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/CAHGQGwF6zG9Z8ws1yb3hY1VqV-WT7hR0qyXCn2HdbjvZQKufDw@mail.gmail.com

**a4fefb3e0d** · Honor passed-in database OIDs in pgstat_database.c · 2 weeks ago

Three routines in pgstat_database.c incorrectly ignore the database OID provided by their caller, using MyDatabaseId instead: pgstat_report_connect(), pgstat_report_disconnect() and pgstat_reset_database_timestamp(). The first two functions, for connection and disconnection, each have a single caller that already passes MyDatabaseId. This was harmless, but still incorrect. The timestamp reset function also has a single caller, but in this case the issue has a real impact: it fails to reset the timestamp for the shared-database entry (datid=0) when operating on shared objects. This situation can occur, for example, when resetting counters for shared relations via pg_stat_reset_single_table_counters(). There is currently one test in the tree that checks the reset of a shared relation, for pg_shdescription; we rely on it to check what is stored in pg_stat_database. As stats_reset may be NULL, two resets are done to provide a baseline for comparison.

Author: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Dapeng Wang <wangdp20191008@gmail.com>
Discussion: https://postgr.es/m/ABBD5026-506F-4006-A569-28F72C188693@gmail.com
Backpatch-through: 15
**93ed187201** · Fix estimate_array_length error with set-operation array coercions · 2 weeks ago

When a nested set operation's output type doesn't match the parent's expected type, recurse_set_operations builds a projection target list using generate_setop_tlist with varno 0. If the required type coercion involves an ArrayCoerceExpr, estimate_array_length could be called on such a Var, and would pass it to examine_variable, which errors in find_base_rel because varno 0 has no valid relation entry. Fix by skipping the statistics lookup for Vars with varno 0. Bug introduced by commit …
**c05c3baf16** · Fix heap-buffer-overflow in pglz_decompress() on corrupt input. · 2 weeks ago

When decoding a match tag, pglz_decompress() reads 2 bytes (or 3 for extended-length matches) from the source buffer before checking whether enough data remains. The existing bounds check (sp > srcend) occurs after the reads, so truncated compressed data that ends mid-tag causes a read past the allocated buffer. Fix by validating that sufficient source bytes are available before reading each part of the match tag. The post-read sp > srcend check is no longer needed and is removed. Found by fuzz testing with libFuzzer and AddressSanitizer.

Backpatch-through: 14
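The check-before-read pattern this fix describes can be illustrated with a small stand-alone helper. This is not the actual pglz code; the function name is invented, and the 2-versus-3-byte tag sizes are taken from the commit message.

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch: before decoding a match tag, verify that its bytes actually
 * exist between the read cursor sp and the end of the source buffer,
 * instead of reading first and comparing the cursor to srcend after. */
static bool
match_tag_fits(const unsigned char *sp, const unsigned char *srcend,
               bool extended_length)
{
    size_t need = extended_length ? 3 : 2;  /* tag size in bytes */

    return (size_t) (srcend - sp) >= need;  /* validate BEFORE reading */
}
```

With this guard in place, truncated input that ends mid-tag is rejected rather than read past the end of the buffer.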
**2e373785ec** · Fix incremental JSON parser numeric token reassembly across chunks. · 2 weeks ago

When the incremental JSON parser splits a numeric token across chunk boundaries, it accumulates continuation characters into the partial token buffer. The accumulator's switch statement unconditionally accepted '+', '-', '.', 'e', and 'E' as valid numeric continuations regardless of position, which violated JSON number grammar (-? int [frac] [exp]). For example, input "4-" fed in single-byte chunks would accumulate the '-' into the numeric token, producing an invalid token that later triggered an assertion failure during re-lexing. Fix by tracking parser state (seen_dot, seen_exp, prev character) across the existing partial token and incoming bytes, so that each character class is accepted only in its grammatically valid position.

Backpatch-through: 17
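The position-aware acceptance the commit describes can be sketched as a small state machine over the seen_dot/seen_exp/prev state. The struct and function names here are invented for illustration, not the parser's actual API.

```c
#include <ctype.h>
#include <stdbool.h>

/* Sketch of grammar-aware acceptance of numeric continuation characters,
 * per -? int [frac] [exp]: '.' only once and only after a digit, 'e'/'E'
 * only once and only after a digit, '+'/'-' only right after 'e'/'E'. */
typedef struct
{
    bool seen_dot;
    bool seen_exp;
    char prev;      /* previous accepted character, '\0' at start */
} NumState;

static bool
accept_num_char(NumState *st, char c)
{
    bool ok;

    if (isdigit((unsigned char) c))
        ok = true;
    else if (c == '.')
        ok = !st->seen_dot && !st->seen_exp &&
             isdigit((unsigned char) st->prev);
    else if (c == 'e' || c == 'E')
        ok = !st->seen_exp && isdigit((unsigned char) st->prev);
    else if (c == '+' || c == '-')
        /* a sign is only valid immediately after the exponent marker;
         * a leading '-' is consumed before continuation begins */
        ok = (st->prev == 'e' || st->prev == 'E');
    else
        ok = false;

    if (ok)
    {
        if (c == '.')
            st->seen_dot = true;
        if (c == 'e' || c == 'E')
            st->seen_exp = true;
        st->prev = c;
    }
    return ok;
}
```

Feeding "4" then "-" now rejects the '-', which is exactly the reported failure mode ("4-" in single-byte chunks).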
**492c386b4d** · Zero-fill private_data when attaching an injection point · 2 weeks ago

InjectionPointAttach() did not initialize the private_data buffer of the shared memory entry before (perhaps partially) overwriting it. When the private data is set to NULL by the caller, the buffer was left uninitialized. If set, it could have stale contents. The buffer is now zero-initialized, so that the contents recorded when a point is attached are deterministic.

Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0tsGHu2h6YLnVu4HiK05q+gTE_9WVUAqihW2LSscAYS-g@mail.gmail.com
Backpatch-through: 17
**f8736f8bc5** · Fix integer overflow in nodeWindowAgg.c · 2 weeks ago

In nodeWindowAgg.c, the calculations for frame start and end positions in ROWS and GROUPS modes were performed using simple integer addition. If a user-supplied offset was sufficiently large (close to INT64_MAX), adding it to the current row or group index could cause a signed integer overflow, wrapping the result to a negative number. This led to incorrect behavior where frame boundaries that should have extended indefinitely (or beyond the partition end) were treated as falling at the first row, or where valid rows were incorrectly marked as out-of-frame. Depending on the specific query and data, these overflows can result in incorrect query results, execution errors, or assertion failures. To fix, use overflow-aware integer addition (ie, pg_add_s64_overflow) to check for overflows during these additions. If an overflow is detected, the boundary is now clamped to INT64_MAX. This ensures the logic correctly treats the boundary as extending to the end of the partition.

Bug: #19405
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/19405-1ecf025dda171555@postgresql.org
Backpatch-through: 14
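The clamp-on-overflow behavior can be sketched with compiler builtins. PostgreSQL uses pg_add_s64_overflow(); here GCC/Clang's __builtin_add_overflow stands in for it, and the function name is invented for illustration.

```c
#include <stdint.h>

/* Sketch of overflow-aware frame-boundary arithmetic: if adding the
 * user-supplied offset to the current position would overflow int64,
 * clamp to INT64_MAX, i.e. treat the boundary as extending to the end
 * of the partition instead of wrapping to a negative value. */
static int64_t
frame_boundary(int64_t current_pos, int64_t offset)
{
    int64_t result;

    if (__builtin_add_overflow(current_pos, offset, &result))
        return INT64_MAX;   /* clamp instead of wrapping negative */
    return result;
}
```

Without the guard, an offset near INT64_MAX would wrap the boundary negative and place it before the first row; with it, the boundary saturates.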
**586f4266fb** · Fix ABI break by moving PROCSIG_SLOTSYNC_MESSAGE in ProcSignalReason · 2 weeks ago

Commit …
**15910b1c36** · Fix slotsync worker blocking promotion when stuck in wait · 2 weeks ago

Previously, on standby promotion, the startup process sent SIGUSR1 to the slotsync worker (or a backend performing slot synchronization) and waited for it to exit. This worked in most cases, but if the process was blocked waiting for a response from the primary (e.g., due to a network failure), SIGUSR1 would not interrupt the wait. As a result, the process could remain stuck, causing the startup process to wait for a long time and delaying promotion. This commit fixes the issue by introducing a new procsignal reason, PROCSIG_SLOTSYNC_MESSAGE. On promotion, the startup process sends this signal, and the handler sets interrupt flags so the process exits (or errors out) promptly at CHECK_FOR_INTERRUPTS(), allowing promotion to complete without delay. Backpatch to v17, where slotsync was introduced.

Author: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwFzNYroAxSoyJhqTU-pH=t4Ej6RyvhVmBZ91Exj_TPMMQ@mail.gmail.com
Backpatch-through: 17

**4bed04d395** · Enhance slot synchronization API to respect promotion signal. · 2 weeks ago

Previously, during a promotion, only the slot synchronization worker was signaled to shut down. The backend executing slot synchronization via the pg_sync_replication_slots() SQL function was not signaled, allowing it to complete its synchronization cycle before exiting. An upcoming patch improves pg_sync_replication_slots() to wait until replication slots are fully persisted before finishing. This behaviour requires the backend to exit promptly if a promotion occurs. This patch ensures that, during promotion, a signal is also sent to the backend running pg_sync_replication_slots(), allowing it to be interrupted and exit immediately. This change was originally committed to master only. However, it is backpatched to v17, where slot synchronization was introduced, because it is required for an upcoming bug fix addressing slotsync (including pg_sync_replication_slots()) blocking promotion when stuck in a wait.

Author: Ajin Cherian <itsajin@gmail.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAFPTHDZAA%2BgWDntpa5ucqKKba41%3DtXmoXqN3q4rpjO9cdxgQrw%40mail.gmail.com
Backpatch-through: 17
**681a91d29d** · Avoid unsafe access to negative index in a TupleDesc. · 2 weeks ago

Commit …
**d6c9432cb5** · Fix null-bitmap combining in array_agg_array_combine(). · 2 weeks ago

This code missed the need to update the combined state's null bitmap if state1 already had a bitmap but state2 didn't: the existing bitmap must be extended with 1's, but wasn't. This could result in wrong output from a parallelized array_agg(anyarray) calculation, if the input has a mix of null and non-null elements. The errors depended on timing of the parallel workers, and therefore would vary from one run to another. Also install guards against integer overflow when calculating the combined object's sizes, and make some trivial cosmetic improvements.

Author: Dmytro Astapov <dastapov@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFQUnFj2pQ1HbGp69+w2fKqARSfGhAi9UOb+JjyExp7kx3gsqA@mail.gmail.com
Backpatch-through: 16
|
|
0a2291b59f |
jit: No backport::SectionMemoryManager for LLVM 22.
LLVM 22 has the fix that we copied into our tree in commit |
3 weeks ago |
|
|
b6d0cddbe2 |
jit: Stop emitting lifetime.end for LLVM 22.
The lifetime.end intrinsic can now only be used for stack memory allocated with alloca[1][2][3]. We use it to tell LLVM about the lifetime of function arguments/isnull values that we keep in palloc'd memory, so that it can avoid spilling registers to memory. We might need to rearrange things and put them on the stack, but that'll take some research. In the meantime, unbreak the build on LLVM 22. [1] https://github.com/llvm/llvm-project/pull/149310 [2] https://llvm.org/docs/LangRef.html#llvm-lifetime-end-intrinsic [3] https://llvm.org/docs/LangRef.html#i-alloca Backpatch-through: 14 Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com> (earlier attempt) Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> (earlier attempt) Reviewed-by: Andres Freund <andres@anarazel.de> (earlier attempt) Discussion: https://postgr.es/m/CA%2BhUKGJTumad75o8Zao-LFseEbt%3DenbUFCM7LZVV%3Dc8yg2i7dg%40mail.gmail.com |
3 weeks ago |
|
|
a6ab311c70 |
doc: Add missing description for DROP SUBSCRIPTION IF EXISTS.
Oversight in commit
|
3 weeks ago |
|
|
1f5b6a5e5d |
Be more careful to preserve consistency of a tuplestore.
Several places in tuplestore.c would leave the tuplestore data structure effectively corrupt if some subroutine were to throw an error. Notably, if WRITETUP() failed after some number of successful calls within dumptuples(), the tuplestore would contain some memtuples pointers that were apparently live entries but in fact pointed to pfree'd chunks. In most cases this sort of thing is fine because transaction abort cleanup is not too picky about the contents of memory that it's going to throw away anyway. There's at least one exception though: if a Portal has a holdStore, we're going to call tuplestore_end() on that, even during transaction abort. So it's not cool if that tuplestore is corrupt, and that means tuplestore.c has to be more careful. This oversight demonstrably leads to crashes in v15 and before, if a holdable cursor fails to persist its data due to an undersized temp_file_limit setting. Very possibly the same thing can happen in v16 and v17 as well, though the specific test case submitted failed to fail there (cf. |
3 weeks ago |
|
|
0c8b4e9cfc |
Detect pfree or repalloc of a previously-freed memory chunk.
Before the major rewrite in commit
|
3 weeks ago |
|
|
d29808e35d |
Fix datum_image_*()'s inability to detect sign-extension variations
Functions such as hash_numeric() are not careful to use the correct PG_RETURN_*() macro according to the return type of that function as defined in pg_proc. Because that function is meant to return int32, when the hashed value exceeds 2^31, the 64-bit Datum value won't wrap to a negative number, which means the Datum won't have the same value as it would have had it been cast to int32 on a two's complement machine. This isn't harmless as both datum_image_eq() and datum_image_hash() may receive a Datum that's been formed and deformed from a tuple in some cases, and not in other cases. When formed into a tuple, the Datum value will be coerced into an integer according to the attlen as specified by the TupleDesc. This can result in two Datums that should be equal being classed as not equal, which could result in (but not limited to) an error such as: ERROR: could not find memoization table entry Here we fix this by ensuring we cast the Datum value to a signed integer according to the typLen specified in the datum_image_eq/datum_image_hash function call before comparing or hashing. Author: David Rowley <dgrowleyml@gmail.com> Reported-by: Tender Wang <tndrwang@gmail.com> Backpatch-through: 14 Discussion: https://postgr.es/m/CAHewXNmcXVFdB9_WwA8Ez0P+m_TQy_KzYk5Ri5dvg+fuwjD_yw@mail.gmail.com |
4 weeks ago |
|
|
f1298a4c20 |
Fix multiple bugs in astreamer pipeline code.
astreamer_tar_parser_content() sent the wrong data pointer when forwarding MEMBER_TRAILER padding to the next streamer. After astreamer_buffer_until() buffers the padding bytes, the 'data' pointer has been advanced past them, but the code passed 'data' instead of bbs_buffer.data. This caused the downstream consumer to receive bytes from after the padding rather than the padding itself, and could read past the end of the input buffer. astreamer_gzip_decompressor_content() only checked for Z_STREAM_ERROR from inflate(), silently ignoring Z_DATA_ERROR (corrupted data) and Z_MEM_ERROR (out of memory). Fix by treating any return other than Z_OK, Z_STREAM_END, and Z_BUF_ERROR as fatal. astreamer_gzip_decompressor_free() missed calling inflateEnd() to release zlib's internal decompression state. astreamer_tar_parser_free() neglected to pfree() the streamer struct itself, leaking it. astreamer_extractor_content() did not check the return value of fclose() when closing an extracted file. A deferred write error (e.g., disk full on buffered I/O) would be silently lost. Discussion: https://postgr.es/m/results/98c6b630-acbb-44a7-97fa-1692ce2b827c@dunslane.net Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us> Backpatch-through: 15 |
4 weeks ago |
|
|
351e59f344 |
Avoid memory leak on error while parsing pg_stat_statements dump file
By using palloc() instead of raw malloc(). Reported-by: Gaurav Singh <gaurav.singh@yugabyte.com> Reviewed-by: Lukas Fittl <lukas@fittl.com> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Discussion: https://www.postgresql.org/message-id/CAEcQ1bYR9s4eQLFDjzzJHU8fj-MTbmRpW-9J-r2gsCn+HEsynw@mail.gmail.com Backpatch-through: 14 |
4 weeks ago |
|
|
fdce5de552 |
Fix premature NULL lag reporting in pg_stat_replication
pg_stat_replication is documented to keep the last measured lag values for a short time after the standby catches up, and then set them to NULL when there is no WAL activity. However, previously lag values could become NULL prematurely even while WAL activity was ongoing, especially in logical replication. This happened because the code cleared lag when two consecutive reply messages indicated that the apply location had caught up with the send location. It did not verify that the reported positions were unchanged, so lag could be cleared even when positions had advanced between messages. In logical replication, where the apply location often quickly catches up, this issue was more likely to occur. This commit fixes the issue by clearing lag only when the standby reports that it has fully replayed WAL (i.e., both flush and apply locations have caught up with the send location) and the write/flush/apply positions remain unchanged across two consecutive reply messages. The second message with unchanged positions typically results from wal_receiver_status_interval, so lag values are cleared after that interval when there is no activity. This avoids showing stale lag data while preventing premature NULL values. Even with this fix, lag may rarely become NULL during activity if identical position reports are sent repeatedly. Eliminating such duplicate messages would address this fully, but that change is considered too invasive for stable branches and will be handled in master only later. Backpatch to all supported branches. Author: Shinya Kato <shinya11.kato@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Reviewed-by: Fujii Masao <masao.fujii@gmail.com> Discussion: https://postgr.es/m/CAOzEurTzcUrEzrH97DD7+Yz=HGPU81kzWQonKZvqBwYhx2G9_A@mail.gmail.com Backpatch-through: 14 |
4 weeks ago |
|
|
791ff1df1e |
Fix copy-paste error in test_ginpostinglist
The check for a mismatch on the second decoded item pointer
was an exact copy of the first item pointer check, comparing
orig_itemptrs[0] with decoded_itemptrs[0] instead of orig_itemptrs[1]
with decoded_itemptrs[1]. The error message also reported (0, 1) as
the expected value instead of (blk, off). As a result, any decoding
error in the second item pointer (where the varbyte delta encoding
is exercised) would go undetected.
This has been wrong since commit
|
1 month ago |
|
|
1ca3850321 |
Fix multixact backwards-compatibility with CHECKPOINT race condition
If a CHECKPOINT record with nextMulti N is written to the WAL before
the CREATE_ID record for N, and N happens to be the first multixid on
an offset page, the backwards compatibility logic to tolerate WAL
generated by older minor versions (before commit
|
1 month ago |
|
|
6ccfc44922 |
Fix finalization of decompressor astreamers.
Send the correct amount of data to the next astreamer, not the whole allocated buffer size. This bug escaped detection because in present uses the next astreamer is always a tar-file parser which is insensitive to trailing garbage. But that may not be true in future uses. Author: Andrew Dunstan <andrew@dunslane.net> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/2178517.1774064942@sss.pgh.pa.us Backpatch-through: 15 |
1 month ago |
|
|
876fa84a27 |
Fix dependency on FDW handler.
ALTER FOREIGN DATA WRAPPER could drop the dependency on the handler function if it wasn't explicitly specified. Reviewed-by: Nathan Bossart <nathandbossart@gmail.com> Discussion: https://postgr.es/m/35c44a4b7fb76d35418c4d66b775a88f4ce60c86.camel@j-davis.com Backpatch-through: 14 |
1 month ago |
|
|
8ee536c895 |
Fix WAL flush LSN used by logical walsender during shutdown
Commit |
1 month ago |
|
|
2fa42feb79 |
Tighten asserts on ParallelWorkerNumber
The comment about ParallelWorkerNumber in parallel.c says: In parallel workers, it will be set to a value >= 0 and < the number of workers before any user code is invoked; each parallel worker will get a different parallel worker number. However, asserts in various places that collect instrumentation allowed (ParallelWorkerNumber == num_workers). That would be a bug, as the value is used as an index into an array with num_workers entries. Fixed by adjusting the asserts accordingly. Backpatch to all supported versions. Discussion: https://postgr.es/m/5db067a1-2cdf-4afb-a577-a04f30b69167@vondra.me Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Backpatch-through: 14 |
1 month ago |
|
|
6ef36bb358 |
Use GetXLogInsertEndRecPtr in gistGetFakeLSN
The function used GetXLogInsertRecPtr() to generate the fake LSN. Most
of the time this is the same as what XLogInsert() would return, and so
it works fine with the XLogFlush() call. But if the last record ends at
a page boundary, GetXLogInsertRecPtr() returns an LSN pointing past the
page header. In such cases XLogFlush() fails with errors like this:
ERROR: xlog flush request 0/01BD2018 is not satisfied --- flushed only to 0/01BD2000
Such failures are very hard to trigger, particularly outside aggressive
test scenarios.
Fixed by introducing GetXLogInsertEndRecPtr(), returning the correct LSN
without skipping the header. This is the same as GetXLogInsertRecPtr(),
except that it calls XLogBytePosToEndRecPtr().
Initial investigation by me, root cause identified by Andres Freund.
This is a long-standing bug in gistGetFakeLSN(), probably introduced by
|
1 month ago |
|
|
63aa4342da |
xml2: Fix failure with xslt_process() under -fsanitize=undefined
The logic of xslt_process() has never considered the fact that
xsltSaveResultToString() would return NULL for an empty string (the
upstream code has always done so, with a string length of 0). This
would cause memcpy() to be called with a NULL pointer, something
forbidden by POSIX.
Like
|
1 month ago |
|
|
076bc57fa4 |
Prevent restore of incremental backup from bloating VM fork.
When I (rhaas) wrote the WAL summarizer code, I incorrectly believed that XLOG_SMGR_TRUNCATE truncates all forks to the same length. In fact, what other parts of the code do is compute the truncation length for the FSM and VM forks from the truncation length used for the main fork. But, because I was confused, I coded the WAL summarizer to set the limit block for the VM fork to the same value as for the main fork. (Incremental backup always copies FSM forks in full, so there is no similar issue in that case.) Doing that doesn't directly cause any data corruption, as far as I can see. However, it does create a serious risk of consuming a large amount of extra disk space, because pg_combinebackup's reconstruct.c believes that the reconstructed file should always be at least as long as the limit block value. We might want to be smarter about that at some point in the future, because it's always safe to omit all-zeroes blocks at the end of the last segment of a relation, and doing so could save disk space, but the current algorithm will rarely waste enough disk space to worry about unless we believe that a relation has been truncated to a length much longer than its actual length on disk, which is exactly what happens as a result of the problem mentioned in the previous paragraph. To fix, create a new visibilitymap helper function and use it to include the right limit block in the summary files. Incremental backups taken with existing summary files will still have this issue, but this should improve the situation going forward. Diagnosed-by: Oleg Tkachenko <oatkachenko@gmail.com> Diagnosed-by: Amul Sul <sulamul@gmail.com> Discussion: http://postgr.es/m/CAAJ_b97PqG89hvPNJ8cGwmk94gJ9KOf_pLsowUyQGZgJY32o9g@mail.gmail.com Discussion: http://postgr.es/m/6897DAF7-B699-41BF-A6FB-B818FCFFD585%40gmail.com Backpatch-through: 17 |
2 months ago |