During the calculation of the maximum number of buckets, take into
account that we later round that number up to the next power of 2.
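A hedged sketch of the idea, using ExecChooseHashTableSize-style
naming (the exact clamping site is an assumption; pg_prevpower2_size_t
is from pg_bitutils.h):

    max_pointers = hash_table_bytes / sizeof(HashJoinTuple);
    /* Round the cap DOWN to a power of 2: nbuckets gets rounded UP to
     * the next power of 2 later, and must still fit under the cap. */
    max_pointers = pg_prevpower2_size_t(max_pointers);
    nbuckets = Min(nbuckets, (int) max_pointers);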
Reported-by: Karen Talarico
Bug: #16925
Discussion: https://postgr.es/m/16925-ec96d83529d0d629%40postgresql.org
Author: Thomas Munro, Andrei Lepikhov, Alexander Korotkov
Reviewed-by: Alena Rybakina
Backpatch-through: 12
Commit 31966b151e introduced a typo when updating the state of a
local buffer as a temporary relation is extended, for the case of a
block within the extended relation range that is already present in
the hash table holding the local buffers. In this case, BM_VALID
should be cleared, but the buffer state was changed so that BM_VALID
remained set while the other flags were cleared.
As reported on the thread, it was possible to corrupt the state of the
local buffers on ENOSPC, but the state would be corrupted on any kind
of ERROR during the relation extension (like partial writes or some
other errno).
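A hedged sketch of the intended state transition (the flag names are
real; the exact expression is an assumption):

    /* The block's old contents must not be trusted if the extension
     * later fails, so clear validity along with the other flags. */
    buf_state &= ~(BM_VALID | BM_DIRTY);
    /* The buggy form cleared the other flags but left BM_VALID set,
     * leaving a stale-but-"valid" buffer behind on ERROR. */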
Reported-by: Alexander Lakhin
Author: Tender Wang
Reviewed-by: Richard Guo, Alexander Lakhin, Michael Paquier
Discussion: https://postgr.es/m/18259-6e256429825dd435@postgresql.org
Backpatch-through: 16
1349d2790 added code to allow DISTINCT and ORDER BY aggregates to work
more efficiently by using presorted input. That commit added some code
that made use of the AggState's tmpcontext and adjusted the
ecxt_outertuple and ecxt_innertuple slots before checking if the current
row is distinct from the previously seen row. That code forgot to set the
TupleTableSlots back to what they were originally, which could result in
errors such as:
ERROR: attribute 1 of type record has wrong type
This only affects aggregate functions which have multiple arguments when
DISTINCT is used. For example: string_agg(DISTINCT col, ', ')
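A minimal sketch of the repair, using the field names mentioned above
(the surrounding logic is simplified):

    ExprContext *tmpcontext = aggstate->tmpcontext;
    TupleTableSlot *save_outer = tmpcontext->ecxt_outertuple;
    TupleTableSlot *save_inner = tmpcontext->ecxt_innertuple;

    tmpcontext->ecxt_outertuple = slot1;    /* previously seen row */
    tmpcontext->ecxt_innertuple = slot2;    /* current row */
    /* ... evaluate the row-comparison expression ... */

    /* Restore the slots: the step the original code forgot. */
    tmpcontext->ecxt_outertuple = save_outer;
    tmpcontext->ecxt_innertuple = save_inner;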
Thanks to Tom Lane for identifying the breaking commit.
Bug: #18264
Reported-by: Vojtěch Beneš
Discussion: https://postgr.es/m/18264-e363593d7e9feb7d@postgresql.org
Backpatch-through: 16, where 1349d2790 was added
CheckPWChallengeAuth() would return STATUS_ERROR if the user does not
exist or has no password assigned, even if the client disconnected
without responding to the password challenge (as libpq often will,
for example). We should return STATUS_EOF in that case, and the
lower-level functions do, but this code level got it wrong since the
refactoring done in 7ac955b34. This breaks the intent of not logging
anything for EOF cases (cf. comments in auth_failed()) and might
also confuse users of ClientAuthentication_hook.
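A hedged sketch of the corrected mapping (the lower-level exchange
call is hypothetical):

    auth_result = password_or_sasl_exchange(port);  /* hypothetical */

    if (auth_result == STATUS_EOF)
        return STATUS_EOF;      /* client hung up: auth_failed() stays quiet */
    if (auth_result != STATUS_OK)
        return STATUS_ERROR;    /* genuine authentication failure */
    return STATUS_OK;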
Per report from Liu Lang. Back-patch to all supported versions.
Discussion: https://postgr.es/m/b725238c-539d-cb09-2bff-b5e6cb2c069c@esgyn.cn
set_config_option() bails out early if it detects that the option to
be set is PGC_BACKEND or PGC_SU_BACKEND class and we're reading the
config file in a postmaster child; we don't want to apply any new
value in such a case. That's fine as far as it goes, but it fails
to consider the requirements of the pg_file_settings view: for that,
we need to check validity of the value even though we have no
intention to apply it. Because we didn't, even very silly values
for affected GUCs would be reported as valid by the view. There
are only half a dozen such GUCs, which perhaps explains why this
got overlooked for so long.
Fix by continuing when changeVal is false; this parallels the logic
in some other early-exit paths.
Also, the check added by commit 924bcf4f1 to prevent GUC changes in
parallel workers seems a few bricks shy of a load: it's evidently
assuming that ereport(elevel, ...) won't return. Make sure we
bail out if it does. The lack of trouble reports suggests that
this is only a latent bug, i.e. parallel workers don't actually
reach here with elevel < ERROR. (Per the code coverage report,
we never reach here at all in the regression suite.) But we clearly
don't want to risk proceeding if that does happen.
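A hedged sketch of both repairs, in guc.c-style naming (the exact
return codes are assumptions):

    if ((record->context == PGC_BACKEND || record->context == PGC_SU_BACKEND) &&
        context == PGC_SIGHUP && IsUnderPostmaster)
    {
        if (changeVal)
            return -1;          /* cannot apply a new value now... */
        /* ...but when changeVal is false (pg_file_settings), fall
         * through and validate the value anyway */
    }

    if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE)
    {
        ereport(elevel,
                (errcode(ERRCODE_INVALID_TRANSACTION_STATE),
                 errmsg("cannot set parameters during a parallel operation")));
        return 0;               /* ereport may return when elevel < ERROR */
    }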
Per report from Rıdvan Korkmaz. These are ancient bugs, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/2089235.1703617353@sss.pgh.pa.us
This is necessary when spgcanreturn() is invoked on a partitioned
index, and the failure might be reachable in other scenarios as
well. The rest of what spgGetCache() does is perfectly sensible
for a partitioned index, so we should allow it to go through.
I think the main takeaway from this is that we lack sufficient test
coverage for non-btree partitioned indexes. Therefore, I added
simple test cases for brin and gin as well as spgist (hash and
gist AMs were covered already in indexing.sql).
Per bug #18256 from Alexander Lakhin. Although the known test case
only fails since v16 (3c569049b), I've got no faith at all that there
aren't other ways to reach this problem; so back-patch to all
supported branches.
Discussion: https://postgr.es/m/18256-0b0e1b6e4a620f1b@postgresql.org
Fix a bug in MERGE when a cross-partition update is attempted on a
partitioned table with a BEFORE DELETE ROW trigger that returns NULL
to prevent the update. This would cause an error to be thrown, or an
assertion failure in an assert-enabled build.
This was an oversight in 9321c79c86, which failed to properly
distinguish a DELETE prevented by a trigger from one prevented by a
concurrent update. Fix by having ExecDelete() return the TM_Result
status to ExecCrossPartitionUpdate(), so that it can distinguish the
two cases, and make ExecCrossPartitionUpdate() return the TM_Result
status to ExecUpdateAct(), so that it can return the correct status
from a concurrent update.
In addition, ensure that the command tag is correctly updated by
having ExecMergeMatched() pass canSetTag to ExecUpdateAct(), rather
than passing false, so that it updates the command tag if it does a
cross-partition update, making this code path in ExecMergeMatched()
consistent with ExecUpdate().
Per bug #18238 from Alexander Lakhin. Back-patch to v15, where MERGE
was introduced.
Dean Rasheed, reviewed by Richard Guo and Jian He.
Discussion: https://postgr.es/m/18238-2f2bdc7f720180b9%40postgresql.org
In v16 and up (since commit afbfc0298), large object ownership
checking has been broken because object_ownercheck() didn't take care
of the discrepancy between our object-address representation of large
objects (classId == LargeObjectRelationId) and the catalog where their
ownership info is actually stored (LargeObjectMetadataRelationId).
This resulted in failures such as "unrecognized class ID: 2613"
when trying to update blob properties as a non-superuser.
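A minimal sketch of the required translation (both relation OIDs are
real; the placement inside object_ownercheck() is an assumption):

    /* Large objects are addressed with classId LargeObjectRelationId,
     * but their ownership lives in pg_largeobject_metadata. */
    if (classid == LargeObjectRelationId)
        classid = LargeObjectMetadataRelationId;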
Poking around for related bugs, I found that AlterObjectOwner_internal
would pass the wrong classId to the PostAlterHook in the no-op code
path where the large object already has the desired owner. Also,
recordExtObjInitPriv checked for the wrong classId; that bug is only
latent because the stanza is dead code anyway, but as long as we're
carrying it around it should be less wrong. These bugs are quite old.
In HEAD, we can reduce the scope for future bugs of this ilk by
changing AlterObjectOwner_internal's API to let the translation happen
inside that function, rather than requiring callers to know about it.
A more bulletproof fix, perhaps, would be to start using
LargeObjectMetadataRelationId as the dependency and object-address
classId for blobs. However that has substantial risk of breaking
third-party code; even within our own code, it'd create hassles
for pg_dump which would have to cope with a version-dependent
representation. For now, keep the status quo.
Discussion: https://postgr.es/m/2650449.1702497209@sss.pgh.pa.us
This fixes an error introduced by efb0ef909f, which changed the
description of this field to "shared/local" even though these I/O
timings relate only to shared buffers. This information is available when
track_io_timing is enabled. Note that HEAD has added new counters for
local buffers in 295c36c0c1, so there is no need to touch it. The
description is updated to "shared" to be compatible with HEAD.
Per discussion with Nazir Bilal Yavuz and Hubert Depesz Lubaczewski,
whose EXPLAIN analyzer tool was not actually able to parse the previous
term because of the slash character.
Discussion: https://postgr.es/m/ZTCTiUqm_H3iBihl@paquier.xyz
Backpatch-through: 15
Dead tuples are ignored and are not marked as dead during recovery, as
doing so can lead to MVCC issues on a standby because its xmin may not
match the primary's. This information is tracked by a field called
"xactStartedInRecovery" in the transaction state data, switched on when
starting a transaction in recovery.
Unfortunately, this information was not correctly tracked when starting
a subtransaction, because the transaction state used for the
subtransaction did not update "xactStartedInRecovery" based on the state
of its parent. This would cause index scans done in subtransactions to
return inconsistent data, depending on how the xmin of the primary
and/or the standby evolved.
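A hedged sketch of the fix in the subtransaction push path (the field
name is from the text above; the surrounding code is an assumption):

    /* The child transaction state must inherit the flag from its
     * parent, instead of leaving it at its default value. */
    s->xactStartedInRecovery = p->xactStartedInRecovery;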
This is broken since the introduction of hot standby in efc16ea520, so
backpatch all the way down.
Author: Fei Changhong
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/tencent_C4D907A5093C071A029712E73B43C6512706@qq.com
Backpatch-through: 12
OpenSSL will sometimes return SSL_ERROR_SYSCALL without having set
errno; this is apparently a reflection of recv(2)'s habit of not
setting errno when reporting EOF. Ensure that we treat such cases
the same as read EOF. Previously, we'd frequently report them like
"could not accept SSL connection: Success" which is confusing, or
worse report them with an unrelated errno left over from some
previous syscall.
To fix, ensure that errno is zeroed immediately before the call,
and report its value only when it's not zero afterwards; otherwise
report EOF.
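The resulting coding pattern, as a hedged sketch:

    errno = 0;
    r = SSL_accept(port->ssl);
    if (r <= 0 && SSL_get_error(port->ssl, r) == SSL_ERROR_SYSCALL)
    {
        if (errno == 0)
        {
            /* treat as unexpected EOF, not "Success" */
        }
        else
        {
            /* report strerror(errno): it belongs to this call now */
        }
    }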
For consistency, I've applied the same coding pattern in libpq's
pqsecure_raw_read(). Bare recv(2) shouldn't really return -1 without
setting errno, but in case it does we might as well cope.
Per report from Andres Freund. Back-patch to all supported versions.
Discussion: https://postgr.es/m/20231208181451.deqnflwxqoehhxpe@awork3.anarazel.de
The apply worker needs to update the state of the subscription tables to
'READY' during the synchronization phase which requires locking the
corresponding subscription. The apply worker also waits for the
subscription tables to reach the 'SYNCDONE' state after holding the locks
on the subscription and the wait is done using WaitLatch. The 'SYNCDONE'
state is changed by tablesync workers again by locking the corresponding
subscription. Both the state updates use AccessShareLock mode to lock the
subscription, so they can't block each other. However, a backend can
simultaneously try to acquire a lock on the same subscription using
AccessExclusiveLock mode to alter the subscription. Now, the backend's
wait on the lock can sneak in between the apply worker and the
tablesync worker, causing a deadlock.
In other words, apply_worker waits for tablesync worker which waits for
backend, and backend waits for apply worker. This is not detected by the
deadlock detector because apply worker uses WaitLatch.
The fix is to release existing locks in apply worker before it starts to
wait for tablesync worker to change the state.
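A hedged sketch of the fix (the lock and wait primitives are real;
the exact call site is an assumption):

    /* Don't hold the subscription lock across the latch wait: a
     * backend queueing for AccessExclusiveLock could otherwise wedge
     * between the apply worker and the tablesync worker. */
    UnlockSharedObject(SubscriptionRelationId, subid, 0, AccessShareLock);
    (void) WaitLatch(MyLatch,
                     WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                     1000L, WAIT_EVENT_LOGICAL_SYNC_STATE_CHANGE);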
Reported-by: Tomas Vondra
Author: Shlok Kyal
Reviewed-by: Amit Kapila, Peter Smith
Backpatch-through: 12
Discussion: https://postgr.es/m/d291bb50-12c4-e8af-2af2-7bb9bb4d8e3e@enterprisedb.com
While checking if a record could fit in the circular WAL decoding
buffer, the coding from commit 3f1ce973 used arithmetic that could
overflow. 64 bit systems were unaffected for various technical reasons,
which probably explains the lack of problem reports. Likewise for 32
bit systems running known 32 bit kernels. The systems at risk of
problems appear to be 32 bit processes running on 64 bit kernels, with
unlucky placement in memory.
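The general repair pattern, as a hedged sketch (names are
illustrative):

    /* Unsafe: "start + size" can wrap around in a 32 bit process when
     * the buffer sits high in the address space. */
    bool fits_unsafe = (start + size <= end);

    /* Safe: end >= start by construction, so subtract instead. */
    bool fits_safe = (size <= (size_t) (end - start));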
Per complaint from GCC -fsanitize=undefined -m32, while testing
variations of 039_end_of_wal.pl.
Back-patch to 15.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGKH0oRPOX7DhiQ_b51sM8HqcPp2J3WA-Oen%3DdXog%2BAGGQ%40mail.gmail.com
Commit 5a991ef869 accidentally reversed the order of the tuples
and fields parameters, making the error message incorrectly refer
to 3 tuples with 1 field when IDENTIFY_SYSTEM returns 1 tuple and
3 or 4 fields. Fix by changing the order of the parameters. This
also adds a comment describing why we check for < 3 fields when
postgres has been sending 4 fields since 9.4.
Backpatch all the way since the bug is almost a decade old.
Author: Tomonari Katsumata <t.katsumata1122@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Bug: #18224
Backpatch-through: v12
When creating a partitioned index, the partition key must be a subset
of the index's columns. But the existing check doesn't verify that the
collations of the partition key and the index definition match.
So you can construct a unique index that fails to enforce uniqueness.
(This would most likely involve a nondeterministic collation, so it
would have to be crafted explicitly and is not something that would
just happen by accident.)
This patch adds the required collation check. As a result, any
previously allowed unique index that has a collation mismatch would no
longer be allowed to be created.
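A hedged sketch of the added check, in DefineIndex-style naming (the
array names and the message wording are assumptions):

    if (key->partcollation[i] != collationIds[j])
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("collation of index column must match collation of the partition key column")));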
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/3327cb54-f7f1-413b-8fdb-7a9dceebb938%40eisentraut.org
Commit 7389aad6 removed the notion of backends started from inside a
signal handler. A stray comment still referred to them, while
explaining the need for a call to set_stack_base(). That leads to the
question of whether we still need the call in !EXEC_BACKEND builds.
There doesn't seem to be much point in suppressing it now, as it doesn't
hurt and probably helps to measure the stack base from the exact same
place in EXEC_BACKEND and !EXEC_BACKEND builds.
Back-patch to 16.
Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reported-by: Tristan Partin <tristan@neon.tech>
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKG%2BEJHcevNGNOxVWxTNFbuB%3Dvjf4U68%2B85rAC_Sxvy2zEQ%40mail.gmail.com
We should have done it this way all along, but we accidentally got
away with using the wrong BIO field up until OpenSSL 3.2. There,
the library's BIO routines that we rely on use the "data" field
for their own purposes, and our conflicting use causes assorted
weird behaviors up to and including core dumps when SSL connections
are attempted. Switch to using the approved field for the purpose,
i.e. app_data.
While at it, remove our configure probes for BIO_get_data as well
as the fallback implementation. BIO_{get,set}_app_data have been
there since long before any OpenSSL version that we still support,
even in the back branches.
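The swap, sketched with the real OpenSSL accessors:

    /* Before: the "data" slot now belongs to OpenSSL 3.2's own BIO
     * routines, so storing our Port pointer there corrupted state. */
    BIO_set_data(bio, port);

    /* After: app_data is reserved for the application. */
    BIO_set_app_data(bio, port);
    myport = BIO_get_app_data(bio);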
Also, update src/test/ssl/t/001_ssltests.pl to allow for a minor
change in an error message spelling that evidently came in with 3.2.
Tristan Partin and Bo Andreson. Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAN55FZ1eDDYsYaL7mv+oSLUij2h_u6hvD4Qmv-7PK7jkji0uyQ@mail.gmail.com
If the tuple being updated is not visible to the crosscheck snapshot,
we return TM_Updated but the assertions would not hold in that case.
Move them to before the cross-check.
Fixes bug #17893. Backpatch to all supported versions.
Author: Alexander Lakhin
Backpatch-through: 12
Discussion: https://www.postgresql.org/message-id/17893-35847009eec517b5%40postgresql.org
52e4f0cd4 introduced a bug in pgoutput in which missing values in tuples
were incorrectly filled in with NULL. The problem was the use of
CreateTupleDescCopy where CreateTupleDescCopyConstr was required, as the
former drops the constraints in the tuple description (specifically, the
default value constraint) on the floor.
The bug could result in incorrectness when a table replicated via
`REPLICA IDENTITY FULL` underwent a schema change that added a column
with a default value. In such cases, updates filled the missing
columns of old tuples with NULL instead of the default value, so on
the subscriber we failed to find a matching tuple and missed updating
the required row.
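The crux of the fix, sketched with the real function names (the call
site in pgoutput is simplified):

    /* CreateTupleDescCopy() drops constraints, including column
     * defaults; the Constr variant preserves them, so missing columns
     * get their defaults rather than NULL. */
    tupdesc = CreateTupleDescCopyConstr(RelationGetDescr(relation));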
Author: Nikhil Benesch
Reviewed-by: Hou Zhijie, Amit Kapila
Backpatch-through: 15
Discussion: http://postgr.es/m/CAPWqQZTEpZQamYsGMn6ZDRvVywwpVPiKH6OY4KSgA+NmeqFNzA@mail.gmail.com
When using GSSAPI encryption in non-blocking mode, libpq sometimes
failed with "GSSAPI caller failed to retransmit all data needing
to be retried". The cause is that pqPutMsgEnd rounds its transmit
request down to an even multiple of 8K, and sometimes that can lead
to not requesting a write of data that was requested to be written
(but reported as not written) earlier. That can upset pg_GSS_write's
logic for dealing with not-yet-written data, since it's possible
the data in question had already been incorporated into an encrypted
packet that we weren't able to send during the previous call.
We could fix this with a one-or-two-line hack to disable pqPutMsgEnd's
round-down behavior, but that seems like making the caller work around
a behavior that pg_GSS_write shouldn't expose in this way. Instead,
adjust pg_GSS_write to never report a partial write: it either
reports a complete write, or reflects the failure of the lower-level
pqsecure_raw_write call. The requirement still exists for the caller
to present at least as much data as on the previous call, but with
the caller-visible write start point not moving there is no temptation
for it to present less. We lose some ability to reclaim buffer space
early, but I doubt that that will make much difference in practice.
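A hedged sketch of the resulting contract (simplified; the real code
tracks the encrypted-packet buffer internally):

    ssize_t n = pg_GSS_write(port, ptr, len);

    /* Either the whole caller-visible request was written, or the
     * lower-level pqsecure_raw_write failure is reflected as-is;
     * 0 < n < len is no longer possible. */
    Assert(n < 0 || n == len);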
This also gets rid of a rather dubious assumption that "any
interesting failure condition (from pqsecure_raw_write) will recur
on the next try". We've not seen failure reports traceable to that,
but I've never trusted it particularly and am glad to remove it.
Make the same adjustments to the equivalent backend routine
be_gssapi_write(). It is probable that there's no bug on the backend
side, since we don't have a notion of nonblock mode there; but we
should keep the logic the same to ease future maintenance.
Per bug #18210 from Lars Kanis. Back-patch to all supported branches.
Discussion: https://postgr.es/m/18210-4c6d0b14627f2eb8@postgresql.org
If an error is thrown after calling CreateWaitEventSet(), the memory
of the WaitEventSet is freed, as it's allocated in a short-lived
memory context, but the file descriptor (on epoll- or kqueue-based
systems) or handles (on Windows) that it contains are leaked.
Use PG_TRY-FINALLY to ensure it gets freed. (On master, I will apply a
better fix, using ResourceOwners to track the WaitEventSet, but that's
not backpatchable.)
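A minimal sketch of the pattern, assuming the usual call shape in the
affected executor code:

    WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, 1);

    PG_TRY();
    {
        /* ... add events and wait; anything here may elog(ERROR) ... */
    }
    PG_FINALLY();
    {
        FreeWaitEventSet(set);  /* closes the epoll/kqueue fd or handles */
    }
    PG_END_TRY();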
The added test doesn't check for leaking resources, so it passed even
before this commit. But at least it covers the code path.
In passing, fix a misleading comment about what the 'nevents' argument
to WaitEventSetWait means.
Report by Alexander Lakhin, analysis and suggestion for the fix by Tom
Lane. Fixes bug #17828. Backpatch to v14 where async execution was
introduced, but master gets a different fix.
Discussion: https://www.postgresql.org/message-id/17828-122da8cba23236be@postgresql.org
Discussion: https://www.postgresql.org/message-id/472235.1678387869@sss.pgh.pa.us
The COPY command formed for initial sync used parentheses for tables
with no columns, leading to a syntax error. This patch avoids adding
parentheses for such tables.
Reported-by: Justin G
Author: Vignesh C
Reviewed-by: Peter Smith, Amit Kapila
Backpatch-through: 15
Discussion: http://postgr.es/m/18203-df37fe354b626670@postgresql.org
The DROP STATISTICS code failed to properly lock the table, leading to
ERROR: tuple concurrently deleted
when executed concurrently with ANALYZE.
Fixed by modifying RemoveStatisticsById() to acquire the same lock as
ANALYZE. This function is called only by DROP STATISTICS, as ANALYZE
calls RemoveStatisticsDataById() directly.
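A hedged sketch of the fix inside RemoveStatisticsById() (the lookup
of the statistics object's table is elided):

    /* Lock the table the statistics object is defined on, with the
     * same lock level ANALYZE uses, before removing the catalog row. */
    relid = ...;    /* stxrelid from the pg_statistic_ext tuple */
    rel = table_open(relid, ShareUpdateExclusiveLock);
    /* ... delete the pg_statistic_ext row ... */
    table_close(rel, NoLock);   /* keep the lock until commit */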
Reported by Justin Pryzby, fix by me. Backpatch through 12. The code was
like this since it was introduced in 10, but older releases are EOL.
Reported-by: Justin Pryzby
Reviewed-by: Tom Lane
Backpatch-through: 12
Discussion: https://postgr.es/m/ZUuk-8CfbYeq6g_u@pryzbyj2023
Commits 146604ec43 and a898b409f6 added overflow checks to
interval_mul(), but not to interval_div(), which contains almost
identical code, and so is susceptible to the same kinds of
overflows. In addition, those checks did not catch all possible
overflow conditions.
Add additional checks to the "cascade down" code in interval_mul(),
and copy all the overflow checks over to the corresponding code in
interval_div(), so that they both generate "interval out of range"
errors, rather than returning bogus results.
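The shape of the added checks, as a hedged sketch (the macros are
real; the exact expressions are assumptions):

    result_double = span->month * factor;
    if (isnan(result_double) || !FLOAT8_FITS_IN_INT32(result_double))
        ereport(ERROR,
                (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                 errmsg("interval out of range")));
    result->month = (int32) result_double;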
Given that these errors are relatively easy to hit, back-patch to all
supported branches.
Per bug #18200 from Alexander Lakhin, and subsequent investigation.
Discussion: https://postgr.es/m/18200-5ea288c7b2d504b1%40postgresql.org
When performing inlining, LLVM unfortunately "leaks" types (the
types survive and are usable, but a new round of inlining will
recreate new structurally equivalent types). Over time this
accumulation amounts to a memory leak which for some queries
can be large enough to trigger the OOM process killer.
To avoid accumulation of types, all IR related data is stored
in an LLVMContextRef which is dropped and recreated in order
to release all types. Dropping and recreating incurs overhead,
so it will be done only after 100 queries. This is a heuristic
which might be revisited, but until we can get the size of the
context from LLVM we are flying a bit blind.
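A hedged sketch of the recycling heuristic (the LLVM-C calls are
real; the counter handling is an assumption):

    #define LLVMJIT_LLVM_CONTEXT_REUSE_MAX 100  /* heuristic */

    if (llvm_context_reuse_count > LLVMJIT_LLVM_CONTEXT_REUSE_MAX &&
        llvm_jit_context_in_use_count == 0)
    {
        LLVMContextDispose(llvm_context);   /* releases accumulated types */
        llvm_context = LLVMContextCreate();
        llvm_context_reuse_count = 0;
    }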
This issue has been reported several times, there may be more
references to it in the archives on top of the threads linked
below.
This is a backpatch of 9dce22033d to all supported branches.
Reported-By: Justin Pryzby <pryzby@telsasoft.com>
Reported-By: Kurt Roeckx <kurt@roeckx.be>
Reported-By: Jaime Casanova <jcasanov@systemguards.com.ec>
Reported-By: Lauri Laanmets <pcspets@gmail.com>
Author: Andres Freund and Daniel Gustafsson
Discussion: https://postgr.es/m/7acc8678-df5f-4923-9cf6-e843131ae89d@www.fastmail.com
Discussion: https://postgr.es/m/20201218235607.GC30237@telsasoft.com
Discussion: https://postgr.es/m/CAPH-tTxLf44s3CvUUtQpkDr1D8Hxqc2NGDzGXS1ODsfiJ6WSqA@mail.gmail.com
Backpatch-through: v12
contain_mutable_functions and contain_volatile_functions give
reliable answers only after expression preprocessing (specifically
eval_const_expressions). Some places understand this, but some did
not get the memo --- which is not entirely their fault, because the
problem is documented only in places far away from those functions.
Introduce wrapper functions that allow doing the right thing easily,
and add commentary in hopes of preventing future mistakes from
copy-and-paste of code that's only conditionally safe.
Two actual bugs of this ilk are fixed here. We failed to preprocess
column GENERATED expressions before checking mutability, so that the
code could fail to detect the use of a volatile function
default-argument expression, or it could reject a polymorphic function
that is actually immutable on the datatype of interest. Likewise,
column DEFAULT expressions weren't preprocessed before determining if
it's safe to apply the attmissingval mechanism. A false negative
would just result in an unnecessary table rewrite, but a false
positive could allow the attmissingval mechanism to be used in a case
where it should not be, resulting in unexpected initial values in a
new column.
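The safe pattern that the new wrappers encapsulate, as a hedged
sketch:

    /* Preprocess first; only then do mutability checks give reliable
     * answers (polymorphic functions resolved, defaults expanded). */
    expr = (Node *) expression_planner((Expr *) expr);
    if (contain_volatile_functions(expr))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                 errmsg("generation expression is not immutable")));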
In passing, re-order the steps in ComputePartitionAttrs so that its
checks for invalid column references are done before applying
expression_planner, rather than after. The previous coding would
not complain if a partition expression contains a disallowed column
reference that gets optimized away by constant folding, which seems
to me to be a behavior we do not want.
Per bug #18097 from Jim Keener. Back-patch to all supported versions.
Discussion: https://postgr.es/m/18097-ebb179674f22932f@postgresql.org
It's clearly stated in the comments that ginFindParents() must keep
the pin on the index's root page that's associated with the topmost
GinBtreeStack item. However, the code path for the case that the
desired downlink has been pushed down to the next index level
ignored this proviso, and would release the pin anyway if we were
still examining the root level. That led to an assertion failure
or "buffer NNNN is not owned by resource owner" error later, when
we try to release the pin again at the end of the insertion.
This is quite hard to reproduce, since it can only happen if an
index root page split occurs concurrently with our own insertion.
Thanks to Jeff Janes for finding a test case that triggers it
often enough to allow investigation.
This has been there since the beginning of GIN, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/CAMkU=1yCAKtv86dMrD__Ja-7KzjE=uMeKX8y__cx5W-OEWy2ow@mail.gmail.com
Commit b7eda3e0e moved XidInMVCCSnapshot() from tqual.c into snapmgr.c,
but follow-up commit c91560def incorrectly updated this reference. We
could fix it, but as pointed out by Daniel Gustafsson, 1) the reader can
easily find the file that contains the definition of that function, e.g.
by grepping, and 2) this kind of reference is prone to going stale; so
let's just remove it.
Back-patch to all supported branches.
Reviewed by Daniel Gustafsson.
Discussion: https://postgr.es/m/CAPmGK145VdKkPBLWS2urwhgsfidbSexwY-9zCL6xSUJH%2BBTUUg%40mail.gmail.com
We can simplify FieldSelect on a whole-row Var into a plain Var
for the selected field. However, we should copy the whole-row Var's
varnullingrels when we do so, because the new Var is clearly nullable
by exactly the same rels as the original. Failure to do this led to
errors like "wrong varnullingrels (b) (expected (b 3)) for Var 2/2".
Richard Guo, per bug #18184 from Marian Krucina. Back-patch to
v16 where varnullingrels was introduced.
Discussion: https://postgr.es/m/18184-5868dd258782058e@postgresql.org
When executing a MERGE UPDATE action, if the UPDATE is turned into a
cross-partition DELETE then INSERT, do not attempt to invoke AFTER
UPDATE ROW triggers, or any of the other post-update actions in
ExecUpdateEpilogue().
For consistency with a plain UPDATE command, such triggers should not
be fired (and typically fail anyway), and similarly, other post-update
actions, such as WCO/RLS checks should not be executed, and might also
lead to unexpected failures.
Therefore, as with ExecUpdate(), make ExecMergeMatched() return
immediately if ExecUpdateAct() reports that a cross-partition update
was done, to be sure that no further processing is done for that
tuple.
Back-patch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWjBgagyNZs02vgDF0DvASYj-iHTFtXG2-nP3orZhmtcw%40mail.gmail.com
false_positive_rate is a parameter that can be set with the bloom
opclass in BRIN, and setting it to a value of exactly 0.25 would trigger
an assertion in the first INSERT done on an index with that value set.
The assertion changed here relied on BLOOM_{MIN|MAX}_FALSE_POSITIVE_RATE,
which are somewhat arbitrary values, and specifying an out-of-range value
would already trigger a failure when defining such an index. So, as-is,
the assertion was just duplicating the reloption's min-max check.
Following a suggestion by Tom Lane, the assertion is instead broadened
to check that the value is a valid fraction.
Author: Alexander Lakhin
Reviewed-by: Tom Lane, Shihao Zhong
Discussion: https://postgr.es/m/17969-a6c54de48026d694@postgresql.org
Backpatch-through: 14
array_set_element() and related functions allow an array to be
enlarged by assigning to subscripts outside the current array bounds.
While these places were careful to check that the new bounds are
allowable, they neglected to consider the risk of integer overflow
in computing the new bounds. In edge cases, we could compute new
bounds that are invalid but get past the subsequent checks,
allowing bad things to happen. Memory stomps that are potentially
exploitable for arbitrary code execution are possible, and so is
disclosure of server memory.
To fix, perform the hazardous computations using overflow-detecting
arithmetic routines, which fortunately exist in all still-supported
branches.
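The repair pattern, as a hedged sketch using the overflow-aware
helpers from src/include/common/int.h:

    int32 newdim;

    /* Unsafe: "dim + addedbefore" can wrap past INT_MAX and then slip
     * past the subsequent range checks; detect overflow explicitly. */
    if (pg_add_s32_overflow(dim, addedbefore, &newdim))
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("array size exceeds the maximum allowed")));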
The test cases added for this generate (after patching) errors that
mention the value of MaxArraySize, which is platform-dependent.
Rather than introduce multiple expected-files, use psql's VERBOSITY
parameter to suppress the printing of the message text. v11 psql
lacks that parameter, so omit the tests in that branch.
Our thanks to Pedro Gallegos for reporting this problem.
Security: CVE-2023-5869
transformAggregateCall() captures the datatypes of the aggregate's
arguments immediately to construct the Aggref.aggargtypes list.
This seems reasonable because the arguments have already been
transformed --- but there is an edge case where they haven't been.
Specifically, if we have an unknown-type literal in an ANY argument
position, nothing will have been done with it earlier. But if we
also have DISTINCT, then addTargetToGroupList() converts the literal
to "text" type, resulting in the aggargtypes list not matching the
actual runtime type of the argument. The end result is that the
aggregate tries to interpret a "text" value as being of type
"unknown", that is a zero-terminated C string. If the text value
contains no zero bytes, this could result in disclosure of server
memory following the text literal value.
To fix, move the collection of the aggargtypes list to the end
of transformAggregateCall(), after DISTINCT has been handled.
This requires slightly more code, but not a great deal.
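A hedged sketch of the reordering (list and type helpers are real;
the surrounding transform is simplified):

    /* Collect argument datatypes only now, after DISTINCT/ORDER BY
     * processing may have coerced unknown-type literals to text. */
    agg->aggargtypes = NIL;
    foreach(lc, agg->args)
    {
        TargetEntry *tle = (TargetEntry *) lfirst(lc);

        agg->aggargtypes = lappend_oid(agg->aggargtypes,
                                       exprType((Node *) tle->expr));
    }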
Our thanks to Jingzhou Fu for reporting this problem.
Security: CVE-2023-5868
It was always false in single-user mode, in autovacuum workers, and in
background workers. This had no specifically-identified security
consequences, but non-core code or future work might make it
security-relevant. Back-patch to v11 (all supported versions).
Jelte Fennema-Nio. Reported by Jelte Fennema-Nio.
Documentation says it cannot signal "a backend owned by a superuser".
On the contrary, it could signal background workers, including the
logical replication launcher. It could signal autovacuum workers and
the autovacuum launcher. Block all that. Signaling autovacuum workers
and those two launchers doesn't stall progress beyond what one could
achieve other ways. If a cluster uses a non-core extension with a
background worker that does not auto-restart, this could create a denial
of service with respect to that background worker. A background worker
with bugs in its code for responding to terminations or cancellations
could experience those bugs at a time the pg_signal_backend member
chooses. Back-patch to v11 (all supported versions).
Reviewed by Jelte Fennema-Nio. Reported by Hemanth Sandrana and
Mahendrakar Srinivasarao.
Security: CVE-2023-5870
get_explain_guc_options() crashed if a string GUC marked GUC_EXPLAIN
has a NULL boot_val. Nosing around found a couple of other places
that seemed insufficiently cautious about NULL string values, although
those are likely unreachable in practice. Add some commentary
defining the expectations for NULL values of string variables,
in hopes of forestalling future additions of more such bugs.
Xing Guo, Aleksander Alekseev, Tom Lane
Discussion: https://postgr.es/m/CACpMh+AyDx5YUpPaAgzVwC1d8zfOL4JoD-uyFDnNSa1z0EsDQQ@mail.gmail.com
pgstatindex failed with ERRCODE_DATA_CORRUPTED, of the "can't-happen"
class XX. The other functions succeeded on an empty index; they might
have malfunctioned if the failed index build left torn I/O or other
complex state. Report an ERROR in statistics functions pgstatindex,
pgstatginindex, pgstathashindex, and pgstattuple. Report DEBUG1 and
skip all index I/O in maintenance functions brin_desummarize_range,
brin_summarize_new_values, brin_summarize_range, and
gin_clean_pending_list. Back-patch to v11 (all supported versions).
Discussion: https://postgr.es/m/20231001195309.a3@google.com
When looping around after finding that the set-returning function
returned zero rows for the current input tuple, ExecProjectSet
neglected to reset either of the two memory contexts it's
responsible for cleaning out. Typically this wouldn't cause much
problem, because once the SRF does return at least one row, the
contexts would get reset on the next call. However, if the SRF
returns no rows for many input tuples in succession, quite a lot
of memory could be transiently consumed.
To fix, make sure we reset both contexts while looping around.
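The fix amounts to resetting both contexts each time around the loop,
roughly:

    /* Reset per-tuple memory before retrying with the next input row;
     * previously this was skipped when the SRF returned zero rows. */
    ResetExprContext(econtext);
    MemoryContextReset(node->argcontext);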
Per bug #18172 from Sergei Kornilov. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18172-9b8c5fc1d676ded3@postgresql.org
When calculating distance for interval values, the code mostly mimicked
interval_mi, i.e. it built a new interval value for the difference.
That however does not work for sufficiently distant interval values,
when the difference overflows the interval range.
Instead, we can calculate the distance directly, without constructing
the intermediate (and unnecessary) interval value.
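A hedged sketch of the direct computation (the constants are real;
the formula approximates month lengths as the actual code does):

    /* Convert each endpoint to an approximate count of microseconds,
     * as double, and subtract: no intermediate interval, no overflow. */
    double a = (double) ia->time +
        ((double) ia->day + (double) ia->month * DAYS_PER_MONTH) * USECS_PER_DAY;
    double b = (double) ib->time +
        ((double) ib->day + (double) ib->month * DAYS_PER_MONTH) * USECS_PER_DAY;

    PG_RETURN_FLOAT8(b - a);    /* ib sorts after ia, so this is >= 0 */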
Backpatch to 14, where minmax-multi indexes were introduced.
Reported-by: Dean Rasheed
Reviewed-by: Ashutosh Bapat, Dean Rasheed
Backpatch-through: 14
Discussion: https://postgr.es/m/eef0ea8c-4aaa-8d0d-027f-58b1f35dd170@enterprisedb.com
Make sure that infinite values in date/timestamp columns are treated as
being at infinite distance. Infinite values should not be merged with
other values, and should instead be left as outliers. The code however
returned distance 0 in this case, so that infinite values were merged
first. While this does not break the index (i.e. it still produces
correct query results), it may make it much less efficient.
We don't need explicit handling of infinite date/timestamp values when
calculating distances, because those values are represented as extreme
but regular values (e.g. INT64_MIN/MAX for the timestamp type).
We don't need an exact distance, just a value that is much larger than
the distances between regular values. With the added cast to double,
we can simply subtract the values.
The regression test queries a value in the "gap" and checks the range
was properly eliminated by the BRIN index.
This only affects minmax-multi indexes on timestamp/date columns with
infinite values, which is not very common in practice. The affected
indexes may need to be rebuilt.
Backpatch to 14, where minmax-multi indexes were introduced.
Reported-by: Ashutosh Bapat
Reviewed-by: Ashutosh Bapat, Dean Rasheed
Backpatch-through: 14
Discussion: https://postgr.es/m/eef0ea8c-4aaa-8d0d-027f-58b1f35dd170@enterprisedb.com
When calculating the distance between date values, make sure to subtract
them in the right order, i.e. (larger - smaller).
The distance is used to determine which values to merge, and is expected
to be a positive value. The code unfortunately did the subtraction in
the opposite order, i.e. (smaller - larger), thus producing negative
values and merging the most distant values first.
The resulting index is correct (i.e. produces correct results), but may
be significantly less efficient. This affects all minmax-multi indexes
on date columns.
Backpatch to 14, where minmax-multi indexes were introduced.
Reported-by: Ashutosh Bapat
Reviewed-by: Ashutosh Bapat, Dean Rasheed
Backpatch-through: 14
Discussion: https://postgr.es/m/eef0ea8c-4aaa-8d0d-027f-58b1f35dd170@enterprisedb.com
When calculating distances for timestamp values for BRIN minmax-multi
indexes, we need to be careful about overflows for extreme values. If
the value overflows into a negative value, the index may be inefficient.
The new regression test checks this for the timestamp type by adding a
table with enough values to force range compaction/merging. The values
are close to min/max, which means a risk of overflow.
Fixed by converting the int64 values to double first, before calculating
the distance. This prevents the overflow. We may lose some precision, of
course, but that's good enough. In the worst case we build a slightly
less efficient index, but for large distances this won't matter.
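The resulting computation, as a hedged sketch:

    /* Subtract as doubles: int64 subtraction of values near
     * INT64_MIN/MAX could overflow into a negative "distance". */
    double delta = (double) dt2 - (double) dt1;

    Assert(delta >= 0);         /* dt2 sorts after dt1 */
    PG_RETURN_FLOAT8(delta);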
This only affects minmax-multi indexes on timestamp columns, with ranges
containing values sufficiently distant to cause an overflow. That seems
like a fairly rare case in practice.
Backpatch to 14, where minmax-multi indexes were introduced.
Reported-by: Ashutosh Bapat
Reviewed-by: Ashutosh Bapat, Dean Rasheed
Backpatch-through: 14
Discussion: https://postgr.es/m/eef0ea8c-4aaa-8d0d-027f-58b1f35dd170@enterprisedb.com