Commit c4d5cb71d2 introduced a couple of asserts in the fast-path locking
code, upsetting Coverity.
The assert in InitProcGlobal() is clearly wrong, as it assigns instead
of checking the value. This is harmless, but doesn't check anything.
The asserts in FAST_PATH_ macros are written as if for signed values,
but the macros are only called for unsigned ones. That makes the check
for (val >= 0) useless. Checks written as ((uint32) x < max) work for
both signed and unsigned values, because negative values wrap to values
greater than INT32_MAX.
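In sketch form, the idiom looks like this (a minimal illustration, not
the actual FAST_PATH_ macros):

#include <assert.h>
#include <stdint.h>

/*
 * Equivalent to (index >= 0 && index < max) for signed inputs: the
 * cast makes a negative value wrap above INT32_MAX, so one unsigned
 * comparison covers both bounds.
 */
static inline void
check_slot_index(int32_t index, uint32_t max)
{
    assert((uint32_t) index < max);
}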
Per Coverity, report by Tom Lane.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/2891628.1727019959@sss.pgh.pa.us
Replace the fixed-size array of fast-path locks with arrays, sized on
startup based on max_locks_per_transaction. This allows using fast-path
locking for workloads that need more locks.
The fast-path locking introduced in 9.2 allowed each backend to acquire
a small number (16) of weak relation locks cheaply. If a backend needs
to hold more locks, it has to insert them into the shared lock table.
This is considerably more expensive, and may be subject to contention
(especially on many-core systems).
The limit of 16 fast-path locks was always rather low, because we have
to lock all relations - not just tables, but also indexes, views, etc.
For planning we need to lock all relations that might be used in the
plan, not just those that actually get used in the final plan. So even
with rather simple queries and schemas, we often need significantly more
than 16 locks.
As partitioning gets used more widely, and the number of partitions
increases, this limit is trivial to hit. Complex queries may easily use
hundreds or even thousands of locks. For workloads doing a lot of I/O
this is not noticeable, but for workloads accessing only data in RAM,
the access to the shared lock table may be a serious issue.
This commit removes the hard-coded limit of the number of fast-path
locks. Instead, the size of the fast-path arrays is calculated at
startup, and can be set much higher than the original 16-lock limit.
The overall fast-path locking protocol remains unchanged.
The variable-sized fast-path arrays can no longer be part of PGPROC, but
are allocated as a separate chunk of shared memory and then referenced
from the PGPROC entries.
The fast-path slots are organized as a 16-way set associative cache. You
can imagine it as a hash table of 16-slot "groups". Each relation is
mapped to exactly one group using hash(relid), and the group is then
processed using linear search, just like the original fast-path cache.
With only 16 entries this is cheap, with good locality.
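As a hedged illustration of the scheme (a simplified sketch; the names
and the hash_uint32() usage are assumptions, not the actual macros):

#define FP_LOCK_SLOTS_PER_GROUP 16

/* Sketch: find a relation's fast-path slot, or -1 if not cached. */
static int
fast_path_find_slot(Oid *fpRelids, uint32 numGroups, Oid relid)
{
    /* each relation maps to exactly one 16-slot group */
    uint32  group = hash_uint32((uint32) relid) % numGroups;
    uint32  base = group * FP_LOCK_SLOTS_PER_GROUP;

    /* linear search within the group, like the original fast-path */
    for (int i = 0; i < FP_LOCK_SLOTS_PER_GROUP; i++)
    {
        if (fpRelids[base + i] == relid)
            return (int) (base + i);
    }
    return -1;
}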
Treating this as a simple hash table with open addressing would not be
efficient, especially once the hash table gets almost full. The usual
remedy is to grow the table, but we can't do that here easily. The
access would also be more random, with worse locality.
The fast-path arrays are sized using the max_locks_per_transaction GUC.
We try to have enough capacity for the number of locks specified in the
GUC, using the traditional 2^n formula, with an upper limit of 1024 lock
groups (i.e. 16k locks). The default value of max_locks_per_transaction
is 64, which means those instances will have 64 fast-path slots.
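A hedged sketch of that sizing rule (hypothetical function; assumes the
16 slots per group described above):

static uint32
FastPathLockGroups(int max_locks_per_transaction)
{
    uint32  groups = 1;

    /* 2^n groups of 16 slots, enough to cover the GUC value */
    while (groups * 16 < (uint32) max_locks_per_transaction)
        groups *= 2;

    /* cap at 1024 groups, i.e. 16k fast-path slots */
    return Min(groups, 1024);
}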
The main purpose of the max_locks_per_transaction GUC is to size the
shared lock table. It is often set to the "average" number of locks
needed by backends, with some backends using significantly more locks.
This should not be a major issue, however. Some backends may have to
insert locks into the shared lock table, but there can't be too many of
them, limiting the contention.
If many backends need more locks than the fast-path capacity, the only
solution is to increase the GUC, even if the shared lock table
already has sufficient capacity. That is not free, especially in terms
of memory usage (the shared lock table entries are fairly large). It
should only happen on machines with plenty of memory, though.
In the future we may consider a separate GUC for the number of fast-path
slots, but let's try without one first.
Reviewed-by: Robert Haas, Jakub Wartak
Discussion: https://postgr.es/m/510b887e-c0ce-4a0c-a17a-2c6abb8d9a5c@enterprisedb.com
Commits 041b9680 and 6377e12a changed the interface of
scan_analyze_next_block() to take a ReadStream instead of a BlockNumber
and a BufferAccessStrategy, and to return a value to indicate when the
stream has run out of blocks.
This caused integration problems for at least one known extension that
uses specially encoded BlockNumber values that map to different
underlying storage, because acquire_sample_rows() sets up the stream so
that read_stream_next_buffer() reads blocks from the main fork of the
relation's SMgrRelation.
Provide read_stream_next_block(), as a way for such an extension to
access the stream of raw BlockNumbers directly and forward them to its
own ReadBuffer() calls after decoding, as it could in earlier releases.
The new function returns the BlockNumber and BufferAccessStrategy that
were previously passed directly to scan_analyze_next_block().
Alternatively, an extension could wrap the stream of BlockNumbers in
another ReadStream with a callback that performs any decoding required
to arrive at real storage manager BlockNumber values, so that it could
benefit from the I/O combining and concurrency provided by
read_stream.c.
Another class of table access method that does nothing in
scan_analyze_next_block() because it is not block-oriented could use
this function to control the number of block sampling loops. It could
match the previous behavior with "return read_stream_next_block(stream,
&bas) != InvalidBlockNumber".
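For illustration, the direct-use pattern could look like this (hedged
sketch; decode_block() stands in for the extension's own decoding, and
rel/stream are assumed to come from the surrounding ANALYZE code):

BlockNumber blkno;
BufferAccessStrategy bas;

while ((blkno = read_stream_next_block(stream, &bas)) != InvalidBlockNumber)
{
    /* decode_block() is hypothetical: map the encoded value to storage */
    BlockNumber real_blkno = decode_block(blkno);
    Buffer      buf = ReadBufferExtended(rel, MAIN_FORKNUM, real_blkno,
                                         RBM_NORMAL, bas);

    /* ... sample rows from the buffer ... */
    ReleaseBuffer(buf);
}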
Ongoing work is expected to provide better ANALYZE support for table
access methods that don't behave like heapam with respect to storage
blocks, but that will be for future releases.
Back-patch to 17.
Reported-by: Mats Kindahl <mats@timescale.com>
Reviewed-by: Mats Kindahl <mats@timescale.com>
Discussion: https://postgr.es/m/CA%2B14425%2BCcm07ocG97Fp%2BFrD9xUXqmBKFvecp0p%2BgV2YYR258Q%40mail.gmail.com
myLargeObjectExists() and LargeObjectExists() had nearly identical code,
except for handling snapshots. This commit renames myLargeObjectExists()
to LargeObjectExistsWithSnapshot() and refactors LargeObjectExists()
to call it internally, reducing duplication.
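The resulting wrapper is presumably along these lines (hedged sketch):

bool
LargeObjectExists(Oid loid)
{
    /* NULL snapshot: fall back to the default catalog snapshot */
    return LargeObjectExistsWithSnapshot(loid, NULL);
}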
Author: Yugo Nagata
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20240702163444.ab586f6075e502eb84f11b1a@sranhm.sraoss.co.jp
This term used inconsistent casing between the code and the
documentation: "CommitTsSLRU" in wait_event_names.txt and
"CommitTSSLRU" in the code.
Let's update the term in the code to reflect what's in the
documentation, "CommitTs" being more commonly used, so that
pg_stat_activity shows the same term as the documentation.
Oversight in 53c2a97a92.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/f7e514cf-2446-21f1-a5d2-8c089a6e2168@gmail.com
Backpatch-through: 17
Commit 6654bb920 added macOS's equivalent of POSIX_FADV_WILLNEED, and
changed some explicit references to posix_fadvise to use this more
general name for the concept. Update some remaining references.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/0827edec-1317-4917-a186-035eb1e3241d%40eisentraut.org
The typo fixes in 005_negotiate_encryption.pl and pg_combinebackup.c
are backpatched to v17, where the typos were introduced.
Backpatch-through: v17
Discussion: https://postgr.es/m/Ztaj7BkN4658OMxF@paquier.xyz
Similarly to 2065ddf5e3, this introduces a define for "pg_tblspc".
This makes the style more consistent with the existing PG_STAT_TMP_DIR,
for example.
One difference from the other cases is the introduction of
PG_TBLSPC_DIR_SLASH, required in two places for recovery and backups.
Author: Bertrand Drouvot
Reviewed-by: Ashutosh Bapat, Álvaro Herrera, Yugo Nagata, Michael
Paquier
Discussion: https://postgr.es/m/ZryVvjqS9SnV1GPP@ip-10-97-1-34.eu-west-3.compute.internal
Commit b5a9b18c introduced block streaming infrastructure with a special
fast path for all-cached scans, and commit b7b0f3f2 connected the
infrastructure up to sequential scans. One of the fast path
micro-optimizations had an unintended consequence: it interfered with
parallel sequential scan's block range allocator (from commit 56788d21),
which has its own ramp-up and ramp-down algorithm when handing out
groups of pages to workers. A scan of an all-cached table could give
extra blocks to one worker, when others had finished. In some plans
(probably already very bad plans, such as the one reported by
Alexander), the unfairness could be magnified.
Fix by removing the internal buffer of 16 block numbers, keeping just a
single block buffer for technical reasons.
Back-patch to 17.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/63a63690-dd92-c809-0b47-af05459e95d1%40gmail.com
To make them follow the usual naming convention where
FoobarShmemSize() calculates the amount of shared memory needed by
Foobar subsystem, and FoobarShmemInit() performs the initialization.
I didn't rename CreateLWLocks() and InitShmemIndex(), because they are
a little special. They need to be called before any of the other
ShmemInit() functions, because they set up the shared memory
bookkeeping itself. I also didn't rename InitProcGlobal(), because
unlike the other ShmemInit functions, it's not called by individual
backends.
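A hedged illustration of the convention, with a hypothetical subsystem
name:

/* how much shared memory the Foobar subsystem needs */
Size FoobarShmemSize(void);

/* allocate and initialize Foobar's shared memory state */
void FoobarShmemInit(void);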
Reviewed-by: Andreas Karlsson
Discussion: https://www.postgresql.org/message-id/c09694ff-2453-47e5-b26c-32a16cd75ce6@iki.fi
Split the shared and local initialization to separate functions, and
follow the common naming conventions. With this, we no longer create
the LockMethodLocalHash hash table in the postmaster process, which
was always pointless.
Reviewed-by: Andreas Karlsson
Discussion: https://www.postgresql.org/message-id/c09694ff-2453-47e5-b26c-32a16cd75ce6@iki.fi
Previously, (auto)analyze used global variables VacuumPageHit,
VacuumPageMiss, and VacuumPageDirty to track buffer usage. However,
pgBufferUsage provides a more generic way to track buffer usage with
support functions.
This change replaces those global variables with pgBufferUsage in
analyze. Since analyze was the sole user of those variables, their
declarations are removed. Vacuum previously used them too, but they
were replaced with pgBufferUsage as part of a bug fix, commit
5cd72cc0c.
Additionally, it adjusts the buffer usage message in both vacuum and
analyze for better consistency.
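For reference, a hedged sketch of the pgBufferUsage pattern
(BufferUsageAccumDiff() and the field names are from instrument.h; the
surrounding code is illustrative):

BufferUsage startusage = pgBufferUsage;
BufferUsage diff;

memset(&diff, 0, sizeof(diff));

/* ... run the analyze work ... */

BufferUsageAccumDiff(&diff, &pgBufferUsage, &startusage);
elog(LOG, "buffer usage: %lld hits, %lld reads, %lld dirtied",
     (long long) diff.shared_blks_hit,
     (long long) diff.shared_blks_read,
     (long long) diff.shared_blks_dirtied);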
Author: Anthonin Bonnefoy
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA%40mail.gmail.com
pg_wal_replay_wait() is to be used on standby and waits until the given
WAL location has been replayed. This is useful when the user makes some
data changes on the primary and needs a guarantee that these changes are
visible on the standby.
The queue of waiters is stored in the shared memory as an LSN-ordered pairing
heap, where the waiter with the nearest LSN stays on the top. During
the replay of WAL, waiters whose LSNs have already been replayed are deleted
from the shared memory pairing heap and woken up by setting their latches.
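A hedged sketch of the wakeup side (the type and field names are
hypothetical; the real data structures differ):

/* After replaying WAL up to replayedLSN, wake the satisfied waiters. */
while (!pairingheap_is_empty(waitersHeap))
{
    WaitLSNWaiter *waiter =
        pairingheap_container(WaitLSNWaiter, heapNode,
                              pairingheap_first(waitersHeap));

    if (waiter->targetLSN > replayedLSN)
        break;                  /* the nearest waiter is not reached yet */

    pairingheap_remove_first(waitersHeap);
    SetLatch(&waiter->latch);   /* wake up the waiting backend */
}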
pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise,
the snapshot could prevent the replay of WAL records, implying a kind of
self-deadlock. This is why it is only possible to implement
pg_wal_replay_wait() as a procedure working without an active snapshot,
not a function.
Catversion is bumped.
Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
Reviewed-by: Heikki Linnakangas, Kyotaro Horiguchi
A later change will require atomic support, so it wouldn't make sense
for a hypothetical new system not to be able to implement spinlocks.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (concept, not the patch)
Reviewed-by: Andres Freund <andres@anarazel.de> (concept, not the patch)
Discussion: https://postgr.es/m/3351991.1697728588%40sss.pgh.pa.us
Commit 9d9b9d46f3 added spinlocks to protect the fields in ProcSignal
flags, but in EmitProcSignalBarrier(), the spinlock was released
twice. With most spinlock implementations, releasing a lock that's not
held is not easy to notice, because most of the time it does nothing,
but if the spinlock was concurrently acquired by another process, it
could lead to more serious issues. Fortunately, with the
--disable-spinlocks emulation implementation, it caused more visible
failures.
In passing, fix a typo in a comment and add an assertion that the
procNumber passed to SendProcSignal looks valid.
Discussion: https://www.postgresql.org/message-id/b8ce284c-18a2-4a79-afd3-1991a2e7d246@iki.fi
Move responsibility of generating the cancel key to the backend
process. The cancel key is now generated after forking, and the
backend advertises it in the ProcSignal array. When a cancel request
arrives, the backend handling it scans the ProcSignal array to find
the target pid and cancel key. This is similar to how this previously
worked in the EXEC_BACKEND case with the ShmemBackendArray, just
reusing the ProcSignal array.
One notable change is that we no longer generate cancellation keys for
non-backend processes. We generated them before just to prevent a
malicious user from canceling them; the keys for non-backend processes
were never actually given to anyone. There is now an explicit flag
indicating whether a process has a valid key or not.
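A hedged sketch of the lookup (the cancel-key fields are hypothetical;
pss_pid and psh_slot exist in procsignal.c):

for (int i = 0; i < NumProcSignalSlots; i++)
{
    ProcSignalSlot *slot = &ProcSignal->psh_slot[i];

    if (slot->pss_pid == cancel_pid &&
        slot->pss_cancel_key_valid &&
        slot->pss_cancel_key == cancel_key)
    {
        kill(cancel_pid, SIGINT);   /* deliver the query cancel */
        break;
    }
}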
I wrote this originally in preparation for supporting longer cancel
keys, but it's a nice cleanup on its own.
Reviewed-by: Jelte Fennema-Nio
Discussion: https://www.postgresql.org/message-id/508d0505-8b7a-4864-a681-e7e5edfe32aa@iki.fi
The documentation for System V IPC parameters provides complicated
formulas to determine the appropriate values for SEMMNI and SEMMNS.
Furthermore, these formulas have often been wrong because folks
forget to update them (e.g., when adding a new auxiliary process).
This commit introduces a new runtime-computed GUC named
num_os_semaphores that reports the number of semaphores needed for
the configured number of allowed connections, worker processes,
etc. This new GUC allows us to simplify the formulas in the
documentation, and it should help prevent future inaccuracies.
Like the other runtime-computed GUCs, users can view it with
"postgres -C" before starting the server, which is useful for
preconfiguring the necessary operating system resources.
Reviewed-by: Tom Lane, Sami Imseih, Andres Freund, Robert Haas
Discussion: https://postgr.es/m/20240517164452.GA1914161%40nathanxps13
clog.c, async.c and predicate.c still handled some SLRU page numbers as
4-byte integers, while int64 should be used for this purpose. These
oversights were introduced in 4ed8f0913b, which switched SLRU page
numbers to 8-byte integers but missed the code paths updated by this
commit.
Reported-by: Noah Misch
Author: Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/20240626002747.dc.nmisch@google.com
Backpatch-through: 17
While this doesn't significantly change runtime now, it arranges for
STRATEGY=WAL_LOG to benefit automatically from future optimizations to
the read_stream subsystem. For large tables in the template database,
this does read 16x as many bytes per system call. Platforms with high
per-call overhead, if any, may see an immediate benefit.
Nazir Bilal Yavuz
Discussion: https://postgr.es/m/CAN55FZ0JKL6vk1xQp6rfOXiNFV1u1H0tJDPPGHWoiO3ea2Wc=A@mail.gmail.com
Winsock only signals an FD_CLOSE event once if the other end of the
socket shuts down gracefully. Because each WaitLatchOrSocket() call
constructs and destroys a new event handle every time, with unlucky
timing we can lose it and hang. We get away with this only if the other
end disconnects non-gracefully, because FD_CLOSE is repeatedly signaled
in that case.
To fix this design flaw in our Windows socket support fundamentally,
we'd probably need to rearchitect it so that a single event handle
exists for the lifetime of a socket, or switch to completely different
multiplexing or async I/O APIs. That's going to be a bigger job
and probably wouldn't be back-patchable.
This brute force kludge closes the race by explicitly polling with
MSG_PEEK before sleeping.
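In sketch form (not the actual latch.c change):

#include <winsock2.h>
#include <stdbool.h>

/*
 * Peek at the socket before sleeping, so a graceful FD_CLOSE whose
 * one-shot event was already consumed cannot be lost.
 */
static bool
socket_is_readable(SOCKET sock)
{
    char    c;
    int     rc = recv(sock, &c, 1, MSG_PEEK);

    if (rc > 0)
        return true;            /* data is waiting */
    if (rc == 0)
        return true;            /* graceful shutdown: report readable */
    /* report real errors as readable too; spurious wakeups are harmless */
    return WSAGetLastError() != WSAEWOULDBLOCK;
}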
Back-patch to all supported releases. This should hopefully clear up
some random build farm and CI hang failures reported over the years. It
might also allow us to try using graceful shutdown in more places again
(reverted in commit 29992a6) to fix instability in the transmission of
FATAL error messages, but that isn't done by this commit.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/176008.1715492071%40sss.pgh.pa.us
Since commit 3a9b18b309, roles with privileges of pg_signal_backend
cannot signal autovacuum workers. Many users treated the ability
to signal autovacuum workers as a feature instead of a bug, so we
are reintroducing it via a new predefined role. Having privileges
of this new role, named pg_signal_autovacuum_worker, only permits
signaling autovacuum workers. It does not permit signaling other
types of superuser backends.
Bumps catversion.
Author: Kirill Reshke
Reviewed-by: Anthony Leung, Michael Paquier, Andrey Borodin
Discussion: https://postgr.es/m/CALdSSPhC4GGmbnugHfB9G0%3DfAxjCSug_-rmL9oUh0LTxsyBfsg%40mail.gmail.com
Since commit 5764f611e1, we've been using the ilist.h functions for
handling the linked list. There's no need for 'links' to be the first
element of the struct anymore, except for one call in InitProcess
where we used a straight cast from the 'dlist_node *' to PGPROC *,
without the dlist_container() macro. That was just an oversight in
commit 5764f611e1; fix it.
There is no imminent need to move 'links' from being the first field,
but let's be tidy.
Reviewed-by: Aleksander Alekseev, Andres Freund
Discussion: https://www.postgresql.org/message-id/22aa749e-cc1a-424a-b455-21325473a794@iki.fi
e85662df44 made GetRunningTransactionData() calculate the oldest running
transaction id within the current database. This commit optimizes that
calculation by performing the cheap transaction id comparison before
fetching the process database id, since the latter can cause extra cache
misses.
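In sketch form, the reordered test looks roughly like this (simplified;
variable names assumed from the procarray.c context):

if (TransactionIdIsValid(xid) &&
    TransactionIdPrecedes(xid, oldestDatabaseRunningXid) &&
    proc->databaseId == MyDatabaseId)   /* fetched only when needed */
    oldestDatabaseRunningXid = xid;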
Reported-by: Noah Misch
Discussion: https://postgr.es/m/20240630231816.bf.nmisch%40google.com
e85662df44 made GetRunningTransactionData() calculate the oldest running
transaction id within the current database. However, because of a typo,
the new code used oldestRunningXid instead of oldestDatabaseRunningXid
in the comparison before updating oldestDatabaseRunningXid. This commit
fixes that issue.
Reported-by: Noah Misch
Discussion: https://postgr.es/m/20240630231816.bf.nmisch%40google.com
Backpatch-through: 17
Both BufFileSize() and BufFileAppend() contained Asserts to ensure the
given BufFile(s) had a valid fileset. A valid fileset isn't required in
either of these functions, so remove the Asserts and adjust the
comments accordingly.
This was noticed while work was being done on a new patch to call
BufFileSize() on a BufFile without a valid fileset. It seems there's
currently no code in the tree which could trigger these Asserts, so no
need to backpatch this, for now.
Reviewed-by: Peter Geoghegan, Matthias van de Meent, Tom Lane
Discussion: https://postgr.es/m/CAApHDvofgZT0VzydhyGH5MMb-XZzNDqqAbzf1eBZV5HDm3%2BosQ%40mail.gmail.com
We have in launch_backend.c:
/*
* The following need to be available to the save/restore_backend_variables
* functions. They are marked NON_EXEC_STATIC in their home modules.
*/
extern slock_t *ShmemLock;
extern slock_t *ProcStructLock;
extern PGPROC *AuxiliaryProcs;
extern PMSignalData *PMSignalState;
extern pg_time_t first_syslogger_file_time;
extern struct bkend *ShmemBackendArray;
extern bool redirection_done;
That comment is not completely true: ShmemLock, ShmemBackendArray, and
redirection_done are not in fact NON_EXEC_STATIC. ShmemLock once was,
but was then needed elsewhere. ShmemBackendArray was static inside
postmaster.c before launch_backend.c was created. redirection_done
was never static.
This patch moves the declaration of ShmemLock and redirection_done to
a header file.
ShmemBackendArray gets a NON_EXEC_STATIC. This doesn't make a
difference, since it only exists if EXEC_BACKEND anyway, but it makes
it consistent.
After that, the comment is now correct.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
This old CPU architecture hasn't been produced in decades, and
whatever instances might still survive are surely too underpowered
for anyone to consider running Postgres on in production. We'd
nonetheless continued to carry code support for it (largely at my
insistence), because its unique implementation of spinlocks seemed
like a good edge case for our spinlock infrastructure. However,
our last buildfarm animal of this type was retired last year, and
it seems quite unlikely that another will emerge. Without the ability
to run tests, the argument that this is useful test code fails to
hold water. Furthermore, carrying code support for an untestable
architecture has costs not to be ignored. So, remove HPPA-specific
code, in the same vein as commits 718aa43a4 and 92d70b77e.
Discussion: https://postgr.es/m/3351991.1697728588@sss.pgh.pa.us
Commit 5b562644fe added a comment that
SetRelationHasSubclass() callers must hold this lock. When commit
17f206fbc8 extended use of this column to
partitioned indexes, it didn't take the lock. As the latter commit
message mentioned, we currently never reset a partitioned index to
relhassubclass=f. That largely avoids harm from the lock omission. The
cause for fixing this now is to unblock introducing a rule about locks
required to heap_update() a pg_class row. This might cause more
deadlocks. It gives minor user-visible benefits:
- If an ALTER INDEX SET TABLESPACE runs concurrently with ALTER TABLE
ATTACH PARTITION or CREATE PARTITION OF, one transaction blocks
instead of failing with "tuple concurrently updated". (Many cases of
DDL concurrency still fail that way.)
- Match ALTER INDEX ATTACH PARTITION in choosing to lock the index.
While not user-visible today, we'll need this if we ever make something
set the flag to false for a partitioned index, like ANALYZE does today
for tables. Back-patch to v12 (all supported versions), the plan for
the commit relying on the new rule. In back branches, add
LockOrStrongerHeldByMe() instead of adding a LockHeldByMe() parameter.
Reviewed (in an earlier version) by Robert Haas.
Discussion: https://postgr.es/m/20240611024525.9f.nmisch@google.com
We did not recover the subtransaction IDs of prepared transactions
when starting a hot standby from a shutdown checkpoint. As a result,
such subtransactions were considered as aborted, rather than
in-progress. That would lead to hint bits being set incorrectly, and
the subtransactions suddenly becoming visible to old snapshots when
the prepared transaction was committed.
To fix, update pg_subtrans with the prepared transactions' subxids when
starting hot standby from a shutdown checkpoint. The snapshots taken
from that state need to be marked as "suboverflowed", so that we also
check pg_subtrans.
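A hedged sketch of the fix (hypothetical variables; hdr stands for a
prepared transaction's state):

/* treat each prepared transaction's subxids as in-progress */
for (int i = 0; i < hdr->nsubxacts; i++)
    SubTransSetParent(subxids[i], hdr->xid);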
Backport to all supported versions.
Discussion: https://www.postgresql.org/message-id/6b852e98-2d49-4ca1-9e95-db419a2696e0@iki.fi
Commit 210622c6 accidentally zeroed out pages even if they were found in
the buffer pool. It should always lock the page, but it should only
zero pages that were not already valid. Otherwise, concurrent readers
that hold only a pin could see corrupted page contents changing under
their feet.
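A hedged sketch of the corrected behavior (simplified; the real code
goes through the BM_IO_IN_PROGRESS machinery mentioned below, and
'valid' is a hypothetical local):

LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
if (!valid)     /* the page was not already in the buffer pool */
    memset(BufferGetPage(buffer), 0, BLCKSZ);
/* a page found valid is never zeroed: concurrent pins may observe it */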
While here, rename ZeroAndLockBuffer() to match the RBM_ flag name.
Also restore some useful comments lost by 210622c6's refactoring, and
add some new ones to clarify why we need to use the BM_IO_IN_PROGRESS
infrastructure despite not doing I/O.
Reported-by: Noah Misch <noah@leadboat.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> (earlier version)
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier version)
Discussion: https://postgr.es/m/20240512171658.7e.nmisch@google.com
Discussion: https://postgr.es/m/7ed10231-ce47-03d5-d3f9-4aea0dc7d5a4%40gmail.com
The assertion at the beginning of mdwritev(), which is enabled only by
defining -DCHECK_WRITE_VS_EXTEND (as mdnblocks() is costly), forgot to
account for the total number of blocks to write at the location
specified by the caller. The calculation is fixed to account for that,
and uses casts to uint64 to ensure a proper check should the number of
blocks overflow.
Using a cast is a suggestion from Tom Lane.
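The corrected check is presumably along these lines (hedged sketch,
using mdwritev()'s blocknum and nblocks arguments):

Assert((uint64) blocknum + (uint64) nblocks <=
       (uint64) mdnblocks(reln, forknum));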
Oversight in 4908c58720.
Author: Xing Guo
Discussion: https://postgr.es/m/CACpMh+BM-VgKeO7suPG-VHTtpzJ+zsbDPwVHu42PLp-iTk0z+A@mail.gmail.com
After further review, we want to move in the direction of always
quoting GUC names in error messages, rather than the previous (PG16)
wildly mixed practice or the intermittent (mid-PG17) idea of doing
this depending on how possibly confusing the GUC name is.
This commit applies appropriate quotes to (almost?) all mentions of
GUC names in error messages. It partially supersedes a243569bf6 and
8d9978a717, which had moved things a bit in the opposite direction
but which then were abandoned in a partial state.
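A hedged example of the resulting style (illustrative message, not
taken from the patch):

ereport(ERROR,
        (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
         errmsg("invalid value for parameter \"%s\": %d",
                "work_mem", newval)));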
Author: Peter Smith <smithpb2250@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHut%2BPv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w%40mail.gmail.com