Our repo already contained an .editorconfig file, but it was not kept
up to date with .gitattributes. This adds a script that keeps these
files in sync. A big advantage of the editorconfig file is that
many editors/IDEs get automatically configured to trim trailing
whitespace and add a final newline on save, while .gitattributes only
complains about these problems instead of automatically fixing them.
This also adds rules to .gitattributes for Python files as well as for
C files in the pg_bsd_indent directory (which have a different tab_width
than most C files due to being vendored in).
Author: Jelte Fennema-Nio <github-tech@jeltef.nl>
Discussion: https://www.postgresql.org/message-id/flat/CAGECzQQGzbroAXi+Yicp3HvcCo4=g84kaOgjuvQ5MW9F0ubOGg@mail.gmail.com
Whilst working on commit 041e8b95b I happened to notice that
parallel_vacuum_main() assigns directly to the maintenance_work_mem
GUC. This is definitely not per project conventions, so I tried to
fix it to use SetConfigOption(). But that fails with "parameter
cannot be set during a parallel operation". It doesn't seem worth
working on a cleaner answer, at least not till we have a few more
instances of similar problems. But add some commentary, just so
nobody gets the idea that this is an approved way to set a GUC.
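For reference, the pattern under discussion looks roughly like this
(the variable names here are placeholders, not the actual code):

    /* what parallel_vacuum_main() does: assign the GUC's global variable */
    maintenance_work_mem = shared_maintenance_work_mem;

    /* the conventional route, which in a parallel worker fails with
     * "parameter cannot be set during a parallel operation" */
    SetConfigOption("maintenance_work_mem", value_string,
                    PGC_USERSET, PGC_S_SESSION);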
When syslogger.c was first written, we didn't want to assume that
all platforms have 64-bit ftello. But we've been assuming that
since v13 (cf commit 799d22461), so let's use that in syslogger.c
and allow log_rotation_size to range up to INT_MAX kilobytes.
The old code effectively limited log_rotation_size to 2GB regardless
of platform. While nobody's complained, that doesn't seem too far
away from what might be thought reasonable these days.
I noticed this while searching for instances of "1024L" in connection
with commit 041e8b95b. These were the last such instances.
(We still have instances of L-suffixed literals, but most of them
are associated with wait intervals for pg_usleep or similar functions.
I don't see any urgent reason to change that.)
Consistently use "Size" (or size_t, or in some places int64 or double)
as the type for variables holding memory allocation sizes. In most
places variables' data types were fine already, but we had an ancient
habit of computing bytes from kilobytes-units GUCs with code like
"work_mem * 1024L". That risks overflow on Win64 where they did not
make "long" as wide as "size_t". We worked around that by restricting
such GUCs' ranges, so you couldn't set work_mem et al higher than 2GB
on Win64. This patch removes that restriction, after replacing such
calculations with "work_mem * (Size) 1024" or variants of that.
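In simplified form, the change is:

    /* before: the multiplication happens in "long", only 32 bits on Win64 */
    mem_bytes = work_mem * 1024L;

    /* after: force the arithmetic to be done in Size (size_t) */
    mem_bytes = work_mem * (Size) 1024;

(mem_bytes here stands for an illustrative variable of type Size.)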
It should be noted that this patch was constructed by searching
outwards from the GUCs that have MAX_KILOBYTES as upper limit.
So I can't positively guarantee there are no other places doing
memory-size arithmetic in int or long variables. I do however feel
pretty confident that increasing MAX_KILOBYTES on Win64 is safe now.
Also, nothing in our code should be dealing in multiple-gigabyte
allocations without authorization from a relevant GUC, so it seems
pretty likely that this search caught everything that could be at
risk of overflow.
Author: Vladlen Popolitov <v.popolitov@postgrespro.ru>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
Prior to this patch, the require_auth implementation assumed that
the AuthenticationSASL protocol message was using SCRAM-SHA-256.
In preparation for future SASL mechanisms, like OAUTHBEARER, split
the implementation into two tiers: the first checks the acceptable
AUTH_REQ_* codes, and the second checks acceptable mechanisms if
AUTH_REQ_SASL et al. are permitted.
conn->allowed_sasl_mechs contains a list of pointers to acceptable
mechanisms, and pg_SASL_init() will bail if the selected mechanism
isn't contained in this array.
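Conceptually, the check in pg_SASL_init() amounts to the following sketch
(selected_mechanism is a placeholder; the real code differs in detail):

    bool        found = false;

    /* reject the server's chosen mechanism unless require_auth allows it */
    for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
    {
        if (conn->allowed_sasl_mechs[i] == selected_mechanism)
        {
            found = true;
            break;
        }
    }
    if (!found)
        goto error;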
Since there's only one mechanism supported right now, one branch
of the second tier cannot be exercised yet and is protected by an
Assert(false) call. This assertion will need to be removed when
the next mechanism is added.
This patch is extracted from a larger body of work aimed at adding
support for OAUTHBEARER in libpq.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CAOYmi+kJqzo6XsR9TEhvVfeVNQ-TyFM5LATypm9yoQVYk=4Wrw@mail.gmail.com
Future SASL mechanism, like OAUTHBEARER, will use this as a limit on
token messages coming from the client, so promote it to the header
file to make it available.
This patch is extracted from a larger body of work aimed at adding
support for OAUTHBEARER in libpq.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CAOYmi+kJqzo6XsR9TEhvVfeVNQ-TyFM5LATypm9yoQVYk=4Wrw@mail.gmail.com
Use <function> rather than <structname> in the <title> to be consistent
with how other functions in this module are documented. Also suffix the
function name with () for consistency.
Backpatch to v17 where pg_buffercache_evict was introduced.
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAExHW5uKWH8CuZc9NCb8XxSQc6uzvACV0cScebm54kF763ERAw@mail.gmail.com
Backpatch-through: 17
This commit builds on the prior change that moved PartitionPruneInfos
out of individual plan nodes into a list in PlannedStmt, making it
possible to initialize PartitionPruneStates without traversing the
plan tree and perform runtime initial pruning before ExecInitNode()
initializes the plan trees. These tasks are now handled in a new
routine, ExecDoInitialPruning(), which is called by InitPlan()
before calling ExecInitNode() on various plan trees.
ExecDoInitialPruning() performs the initial pruning and saves the
result -- a Bitmapset of indexes for surviving child subnodes -- in
es_part_prune_results, a list in EState.
PartitionPruneStates created for initial pruning are stored in
es_part_prune_states, another list in EState, for later use during
exec pruning. Both lists are parallel to es_part_prune_infos, which
holds the PartitionPruneInfos from PlannedStmt, enabling shared
indexing.
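In rough outline (the helper names below are illustrative, not the actual
ones), ExecDoInitialPruning() does:

    ListCell   *lc;

    foreach(lc, estate->es_part_prune_infos)
    {
        PartitionPruneInfo *pruneinfo = lfirst(lc);
        PartitionPruneState *prunestate;
        Bitmapset  *validsubplans;

        /* build a state containing only the initial pruning contexts */
        prunestate = CreateInitialPruneState(estate, pruneinfo);
        /* run the initial pruning steps */
        validsubplans = PerformInitialPruning(prunestate);

        /* keep both, index-parallel with es_part_prune_infos */
        estate->es_part_prune_states =
            lappend(estate->es_part_prune_states, prunestate);
        estate->es_part_prune_results =
            lappend(estate->es_part_prune_results, validsubplans);
    }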
PartitionPruneStates initialized in ExecDoInitialPruning() now
include only the PartitionPruneContexts for initial pruning steps.
Exec pruning contexts are initialized later in
ExecInitPartitionExecPruning() when the parent plan node is
initialized, as the exec pruning step expressions depend on the parent
node's PlanState.
The existing function PartitionPruneFixSubPlanMap() has been
repurposed for this initialization, to avoid duplicating the loop
that locates the PartitionedRelPruningData whose exec pruning
contexts need to be initialized. It has been renamed to
InitExecPruningContexts() to reflect its new primary responsibility.
The original logic to "fix subplan maps" remains intact but is now
encapsulated within the renamed function.
This commit removes two obsolete Asserts in partkey_datum_from_expr().
The ExprContext used for pruning expression evaluation is now
independent of the parent PlanState, making these Asserts unnecessary.
By centralizing pruning logic and decoupling it from the plan
initialization step (ExecInitNode()), this change sets the stage for
future patches that will use the result of initial pruning to
save the overhead of redundant processing for pruned partitions.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
Once a replication slot is invalidated, it cannot be altered or used to
fetch changes. However, a process could still acquire an invalid slot and
fail later.
For example, if a process acquires a logical slot that was invalidated due
to wal_removed, it will eventually fail in CreateDecodingContext() when
attempting to access the removed WAL. Similarly, for physical replication
slots, even if the slot is invalidated and invalidation_reason is set to
wal_removed, the walsender does not currently check for invalidation when
starting physical replication. Instead, replication starts, and an error
is only reported later while trying to access WAL. Similarly, we prohibit
modifying slot properties for invalid slots, but that error is reported
only after the slot has been acquired.
This patch improves error handling by detecting invalid slots earlier, at
the time of slot acquisition, which is the first step of any slot
operation. This also allows unifying the ERROR messages previously issued
in different places into one consistent message for invalid slots, which
means the message for invalid slots becomes a more generic one.
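A sketch of the new acquisition-time check (the error wording here is
illustrative):

    /* reject invalidated slots as soon as they are acquired */
    if (slot->data.invalidated != RS_INVAL_NONE)
        ereport(ERROR,
                errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                errmsg("can no longer access replication slot \"%s\"",
                       NameStr(slot->data.name)));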
This will also be helpful for future patches that plan to invalidate slots
for additional reasons, such as idle_timeout, since we won't have to modify
multiple places for each new reason or risk missing one of them.
Author: Nisha Moond <nisha.moond412@gmail.com>
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CABdArM6pBL5hPnSQ+5nEVMANcF4FCH7LQmgskXyiLY75TMnKpw@mail.gmail.com
This allows users of the cumulative statistics to drop entries in the
shared stats hash table, also deleting the local references. Callers of
this function can optionally define a callback able to filter which
entries to drop, similarly to pgstat_reset_matching_entries() with its
callback do_reset().
pgstat_drop_all_entries() is refactored so that it uses this new function.
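A hypothetical example of such a filtering callback, assuming its signature
mirrors do_reset() of pgstat_reset_matching_entries():

    /* drop only the entries belonging to a given database */
    static bool
    match_dboid(PgStatShared_HashEntry *entry, Datum match_data)
    {
        return entry->key.dboid == DatumGetObjectId(match_data);
    }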
Author: Lukas Fitti
Discussion: https://postgr.es/m/CAP53PkwuFbo3NkwZgxwNRMjMfqPEqidD-SggaoQ4ijotBVLJAA@mail.gmail.com
This change adapts these functions to the machine's address width
without depending on "long" to be the right size. (It isn't on
Win64, for example.) While it seems unlikely anyone would care
to run with a stack depth limit exceeding 2GB, this is part of a
general push to avoid using type "long" to represent memory sizes.
It's convenient to use ssize_t rather than the perhaps-more-obvious
choice of size_t/Size, because the code involved depends on working
with a signed data type. Our MAX_KILOBYTES limit already ensures
that ssize_t will be sufficient to represent the maximum value of
max_stack_depth.
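The conversion now looks like this (simplified):

    /* max_stack_depth is in kilobytes; do the math in ssize_t, not long */
    ssize_t     max_stack_depth_bytes = (ssize_t) max_stack_depth * 1024;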
Extracted from a larger patch by Vladlen, plus additional hackery
by me.
Author: Vladlen Popolitov <v.popolitov@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
smgrDoPendingSyncs had two distinct risks of integer overflow while
deciding which way to ensure durability of a newly-created relation.
First, it accumulated the total size of all forks in a variable of
type BlockNumber (uint32). While we restrict an individual fork's
size to fit in that, I don't believe there's such a restriction on
all of them added together. Second, it proceeded to multiply the
sum by BLCKSZ, which most certainly could overflow a uint32.
(The exact expression is total_blocks * BLCKSZ / 1024. The
compiler might choose to optimize that to total_blocks * 8,
which is not at quite as much risk of overflow as a literal
reading would be, but it's still wrong.)
If an overflow did occur it could lead to a poor choice to
shove a very large relation into WAL instead of fsync'ing it.
This wouldn't be fatal, but it could be inefficient.
Change total_blocks to uint64 which should be plenty, and
rearrange the comparison calculation to be overflow-safe.
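One overflow-safe way to phrase the comparison (illustrative; the committed
form may differ slightly):

    uint64      total_blocks = 0;

    /* ... accumulate the block count of every fork ... */

    /* BLCKSZ is a multiple of 1024 and total_blocks is 64 bits wide, so
     * comparing against the kilobytes threshold this way cannot overflow */
    if (total_blocks * (BLCKSZ / 1024) >= (uint64) wal_skip_threshold)
        ... /* too large to copy into WAL; fsync the relation instead */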
I noticed this while looking for ramifications of the proposed
change in MAX_KILOBYTES. It's not entirely clear to me why
wal_skip_threshold is limited to MAX_KILOBYTES in the
first place, but in any case this code is unsafe regardless
of the range of wal_skip_threshold.
Oversight in c6b92041d which introduced wal_skip_threshold,
so back-patch to v13.
Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
Backpatch-through: 13
Add BitmapTableScanSetup(), a helper which contains all of the work that
must be done on each scan of the table in a bitmap table scan. This
includes scanning the index, building the bitmap, and setting up the
scan descriptors.
Pushing this setup into a helper function makes BitmapHeapNext() more
readable.
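With the helper in place, the top of BitmapHeapNext() reduces to something
like:

    /* first fetch on this scan: run the index scan, build the bitmap,
     * and set up the scan descriptors */
    if (!node->initialized)
        BitmapTableScanSetup(node);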
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ1vXu%2BZdT0_MM-i1vbTdfHHf0KR3cK6R5gs6dNNNpyrJw%40mail.gmail.com
Instead of deciding at runtime whether to read from casetest.value
or caseValue_datum, split EEOP_CASE_TESTVAL into two opcodes and
make the decision during expression compilation. Similarly for
EEOP_DOMAIN_TESTVAL. This actually results in net less code,
mainly because llvmjit_expr.c's code for handling these opcodes
gets shorter. The performance gain is doubtless negligible, but
this seems worth changing anyway on grounds of simplicity and
understandability.
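The compile-time choice looks roughly like this (the name of the second
opcode is illustrative):

    if (state->innermost_caseval != NULL)
        scratch.opcode = EEOP_CASE_TESTVAL;        /* read casetest.value */
    else
        scratch.opcode = EEOP_CASE_TESTVAL_EXT;    /* read caseValue_datum from
                                                    * the ExprContext */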
Author: Andreas Karlsson <andreas@proxel.se>
Co-authored-by: Xing Guo <higuoxing@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CACpMh+AiBYAWn+D1aU7Rsy-V1tox06Cbc0H3qA7rwL5zdJ=anQ@mail.gmail.com
This moves PartitionPruneInfo from plan nodes to PlannedStmt,
simplifying traversal by centralizing all PartitionPruneInfo
structures in a single list in it, which holds all instances for the
main query and its subqueries. Instead of plan nodes (Append or
MergeAppend) storing PartitionPruneInfo pointers, they now reference
an index in this list.
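Looking one up then becomes an index access into the PlannedStmt-level list,
roughly (field names illustrative):

    PartitionPruneInfo *pruneinfo =
        list_nth_node(PartitionPruneInfo, plannedstmt->partPruneInfos,
                      append->part_prune_index);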
A bitmapset field is added to PartitionPruneInfo to store the RT
indexes corresponding to the apprelids field in Append or MergeAppend.
This allows execution pruning logic to verify that it operates on the
correct plan node, mainly to facilitate debugging.
Duplicated code in set_append_references() and
set_mergeappend_references() is refactored into a new function,
register_pruneinfo(), which updates RT indexes by applying rtoffset
and adds the PartitionPruneInfo to the global list in PlannerGlobal.
By allowing pruning to be performed without traversing the plan tree,
this change lays the groundwork for runtime initial pruning to occur
independently of plan tree initialization.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> (earlier version)
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
In view of the issue fixed in commit 0da39aa76, it no longer seems
like a great idea for coerce_to_domain() to offer to perform a lookup
that its caller probably should have done already. The caller should
be providing a value of the domain's base type, so it's hard to
envision a valid case where it hasn't looked up that type. After
0da39aa76 there is only one caller using the option for internal
lookup, and that one can trivially be rearranged to not do that.
So this seems more like a bug-encouraging misfeature than a useful
shortcut; let's get rid of it (in HEAD only, there's no need to
break any external callers in back branches).
Discussion: https://postgr.es/m/1865579.1738113656@sss.pgh.pa.us
If a column is omitted in an INSERT, and there's no column default,
the code in preptlist.c generates a NULL Const to be inserted.
Furthermore, if the column is of a domain type, we wrap the Const
in CoerceToDomain, so as to throw a run-time error if the domain
has a NOT NULL constraint. That's fine as far as it goes, but
there are two problems:
1. We're being sloppy about the type/typmod that the Const is
labeled with. It really should have the domain's base type/typmod,
since it's the input to CoerceToDomain not the output. This can
result in coerce_to_domain inserting a useless length-coercion
function (useless because it's being applied to a null). The
coercion would typically get const-folded away later, but it'd
be better not to create it in the first place.
2. We're not applying expression preprocessing (specifically,
eval_const_expressions) to the resulting expression tree.
The planner's primary expression-preprocessing pass already happened,
so that means the length coercion step and CoerceToDomain node miss
preprocessing altogether.
This is at the least inefficient, since it means the length coercion
and CoerceToDomain will actually be executed for each inserted row,
though they could be const-folded away in most cases. Worse, it
seems possible that missing preprocessing for the length coercion
could result in an invalid plan (for example, due to failing to
perform default-function-argument insertion). I'm not aware of
any live bug of that sort with core datatypes, and it might be
unreachable for extension types as well because of restrictions of
CREATE CAST, but I'm not entirely convinced that it's unreachable.
Hence, it seems worth back-patching the fix (although I only went
back to v14, as the patch doesn't apply cleanly at all in v13).
There are several places in the rewriter that are building null
domain constants the same way as preptlist.c. While those are
before the planner and hence don't have any reachable bug, they're
still applying a length coercion that will be const-folded away
later, uselessly wasting cycles. Hence, make a utility routine
that all of these places can call to do it right.
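In essence, the routine labels the NULL with the base type/typmod before
wrapping it, along these lines (a simplified sketch, not the committed code;
domaintypid and domaincollation are placeholders):

    int32       basetypmod = -1;
    Oid         basetypid = getBaseTypeAndTypmod(domaintypid, &basetypmod);
    Const      *nullconst = makeNullConst(basetypid, basetypmod,
                                          domaincollation);

    return coerce_to_domain((Node *) nullconst,
                            basetypid, basetypmod, domaintypid,
                            COERCION_IMPLICIT, COERCE_IMPLICIT_CAST,
                            -1, false);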
Making this code more careful about the typmod assigned to the
generated NULL constant has visible but cosmetic effects on some
of the plans shown in contrib/postgres_fdw's regression tests.
Discussion: https://postgr.es/m/1865579.1738113656@sss.pgh.pa.us
Backpatch-through: 14
When running on Windows, canonicalize_path() converts '\' to '/'
to prevent confusing the Windows command processor. It was
doing that in a non-encoding-aware fashion; but in SJIS there
are valid two-byte characters whose second byte matches '\'.
So encoding corruption ensues if such a character is used in
the path.
We can fairly easily fix this if we know which encoding is
in use, but a lot of our utilities don't have much of a clue
about that. After some discussion we decided we'd settle for
fixing this only in psql, and assuming that its value of
client_encoding matches what the user is typing.
It seems hopeless to get the server to deal with the problematic
characters in database path names, so we'll just declare that
case to be unsupported. That means nothing need be done in
the server, nor in utility programs whose only contact with
file path names is for database paths. But psql frequently
deals with client-side file paths, so it'd be good if it
didn't mess those up.
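The essence of an encoding-aware conversion (an illustrative sketch, not the
committed code):

    /* step through the string by character, per the client encoding, so an
     * SJIS trailing byte that happens to equal '\\' is not touched */
    for (char *p = path; *p; p += PQmblen(p, encoding))
    {
        if (*p == '\\')
            *p = '/';
    }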
Bug: #18735
Reported-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com>
Discussion: https://postgr.es/m/18735-4acdb3998bb9f2b1@postgresql.org
Backpatch-through: 13
These functions tried to check the state of the buffer's content lock
even for local buffers. Since we don't use the content lock for a
local buffer, that would lead to a "false" result from
LWLockHeldByMeInMode, which would mean a misleading "false" answer
from BufferIsExclusiveLocked (we'd rather that case always return
"true") or an assertion failure in BufferIsDirty.
The core code never applies these two functions to local buffers,
and apparently no extensions do either, since we've not heard
complaints. Still, in the name of future-proofing, let's fix
them to act as though a pinned local buffer is content-locked.
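For example, BufferIsExclusiveLocked() can now simply short-circuit (sketch):

    /* local buffers have no content lock; treat a pinned one as locked */
    if (BufferIsLocal(buffer))
        return true;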
Author: Srinath Reddy <srinath2133@gmail.com>
Discussion: https://postgr.es/m/19396ef77f8.1098c4a1810508.2255483659262451647@zohocorp.com
The following functions were not covered by any tests:
- abbrev(inet)
- set_masklen(cidr)
- set_masklen(inet)
- netmask(inet)
- hostmask(inet)
While at it, this improves the output of some of the existing queries in
the inet test to use better aliases.
Author: Aleksander Alekseev
Reviewed-by: Jacob Champion, Keisuke Kuroda, Tom Lane
Discussion: https://postgr.es/m/CAJ7c6TOyZ9bGNrDK6Z3Q0gr9ow8ZpOm+=+01mpE0dsdH4C+u9A@mail.gmail.com
The column added in commit e65dbc9927, pubgencols_type, was inconsistent
with the naming conventions of other columns in the pg_publication
catalog.
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://postgr.es/m/CALDaNm1u-ufVOW-RUsXSooqzkpohxfZYy=z78fbcr_9Pq5hbCg@mail.gmail.com
This commit adds four fields to the statistics of relations, aggregating
the amount of time spent for each operation on a relation:
- total_vacuum_time, for manual vacuum.
- total_autovacuum_time, for vacuum done by the autovacuum daemon.
- total_analyze_time, for manual analyze.
- total_autoanalyze_time, for analyze done by the autovacuum daemon.
This gives users the option to derive the average time spent for these
operations with the help of the related "count" fields.
Bump catalog version (for the catalog changes) and PGSTAT_FILE_FORMAT_ID
(for the additions in PgStat_StatTabEntry).
Author: Sami Imseih
Reviewed-by: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/CAA5RZ0uVOGBYmPEeGF2d1B_67tgNjKx_bKDuL+oUftuoz+=Y1g@mail.gmail.com
The installation documentation stated that using Meson is
experimental. But since this is the only way to build using Visual
Studio on Windows, this would imply that that whole build procedure is
experimental, which isn't true. So qualify this statement a bit more.
We keep the statement that Meson is experimental on other platforms,
since it doesn't have full, confirmed feature parity with the make
build system.
Author: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/a3e76618-4cb5-4d54-a71c-da4fb8ba571b@eisentraut.org
A ParseState has existed in ATPrepAlterColumnType() since its introduction
in 077db40fa1, but it never carried a query string that could be used to
point at a location in the origin string on error.
The output of some regression tests is updated, showing the error
location where applicable. Six error messages now gain an error
location.
Author: Jian He
Discussion: https://postgr.es/m/CACJufxGfbPfWLjcEz33G9eW_epDW0UDi2H05i9eSTPKGJ4rxSA@mail.gmail.com
For SSPI auth, extra users need to be explicitly allowed; otherwise we get
"SSPI authentication failed" instead of the expected "role does not
exist" error.
This report also means that the test has never worked on Windows since
its introduction in 9706092839, because it has always tripped over an
authentication failure rather than an error about the role not existing.
Oversight in eef4a33f62, that has added a pattern check on the error
generated by the command.
Per report from Tom Lane, via buildfarm member drongo.
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/379085.1737734611@sss.pgh.pa.us
Most executables already get that testing. To occupy the customary
001_basic.pl name, this renumbers the new-in-October tests of
src/test/postmaster/t.
Reviewed by Thomas Munro.
Discussion: https://postgr.es/m/20241215022701.a1.nmisch@google.com
For commit b663b9436e I thought this was useless, but it turns out not to
be, for the case where a partitioned table has two identical foreign key
constraints which can both be matched by the same constraint in a
partition during attach. This CCI makes the match search for the second
constraint in the parent ignore the constraint in the child that has
already been matched by the first constraint in the parent.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/c599253c-1ccd-4161-80fc-c9065e037a09@gmail.com
The right mix of DDL and VACUUM could corrupt a catalog page header such
that PageIsVerified() durably fails, requiring a restore from backup.
This affects only catalogs that both have a syscache and have DDL code
that uses syscache tuples to construct updates. One of the test
permutations shows a variant not yet fixed.
This makes !TransactionIdIsValid(TM_FailureData.xmax) possible with
TM_Deleted. I think core and PGXN are indifferent to that.
Per bug #17821 from Alexander Lakhin. Back-patch to v13 (all supported
versions). The test case is v17+, since it uses INJECTION_POINT.
Discussion: https://postgr.es/m/17821-dd8c334263399284@postgresql.org
Directory "injection_points" has specified NO_INSTALLCHECK since before
commit c35f419d6e added the specs, but
that commit neglected to disable the corresponding meson runningcheck.
The alternative would be to enable "make installcheck" for ISOLATION,
but the GNU make build system lacks a concept of setting NO_INSTALLCHECK
for REGRESS without also setting it for ISOLATION. Back-patch to v17,
where that commit first appeared, to avoid surprises when back-patching
additional specs.
Discussion: https://postgr.es/m/17821-dd8c334263399284@postgresql.org
Since commit 757fb0e5a9, these
Informix-compat functions return 0 without changing the output
parameter. Initialize the output parameter before the test call, making
that obvious. Before this, the expected test output has been depending
on freed stack memory. "gcc -ftrivial-auto-var-init=pattern" revealed
that. Back-patch to v13 (all supported versions).
Discussion: https://postgr.es/m/20250106192748.cf.nmisch@google.com
This practice avoids possible problems caused by non-default psql
options, such as disabling AUTOCOMMIT.
Author: Shinya Kato <Shinya11.Kato@oss.nttdata.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/96ff23a5d858ff72ca8e823a014d16fe@oss.nttdata.com
Backpatch-through: 13
The main motivation for this change is to have a process that can serialize
stats after all other processes have terminated. Serializing stats already
happens in checkpointer, even though walsenders can be active longer.
The only reason the current shutdown sequence does not actively cause problems
is that walsender currently does not generate any stats. However, there is an
upcoming patch changing that.
Another need for this change originates in the AIO patchset, where IO
workers (which, in some edge cases, can emit stats of their own) need to run
while the shutdown checkpoint is being written.
This commit changes the shutdown sequence so that checkpointer is signalled
(via SIGINT) to write the shutdown checkpoint without also being told to
exit. Once checkpointer has written the shutdown checkpoint, it notifies
postmaster via PMSIGNAL_XLOG_IS_SHUTDOWN and waits for the termination
signal (SIGUSR2, as before). Checkpointer is now terminated after all other
children, except dead-end children and logger, have been terminated; this
is tracked using the new PM_WAIT_CHECKPOINTER PMState.
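From checkpointer's point of view the handshake is roughly (sketch, details
simplified):

    /* on SIGINT from postmaster: write the shutdown checkpoint, report
     * completion, but do not exit yet */
    CreateCheckPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);
    SendPostmasterSignal(PMSIGNAL_XLOG_IS_SHUTDOWN);
    /* keep running; exit only once SIGUSR2 arrives */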
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/kgng5nrvnlv335evmsuvpnh354rw7qyazl73kdysev2cr2v5zu@m3cfzxicm5kp
Instead of checking just the first letter, match the whole string
using pg_strcasecmp. Per the documentation, we allow either just
the first letter (e.g. "c") or the whole name ("custom"); but we
will no longer accept random variations such as "chump". This
matches pg_dump's longstanding parsing code for the same option.
Also for consistency with pg_dump, recognize "p"/"plain". We don't
support it, but we can give a more helpful error message than
"unrecognized archive format".
Author: Srinath Reddy <srinath2133@gmail.com>
Discussion: https://postgr.es/m/CAFC+b6pfK-BGcWW1kQmtxVrCh-JGjB2X02rLPQs_ZFaDGjZDsQ@mail.gmail.com
The sizeof() call should reference buffer.data, because that's the
buffer we're reading data into, not the whole PGAlignedBuffer union.
This was introduced by 44cac93464, which replaced the simple buffer
with a PGAlignedBuffer field.
It's benign, because the buffer is the largest field of the union, so
the sizes are the same. But it's easy to trip over this in a patch, so
fix and backpatch. Commit 44cac93464 went into 12, but that's EOL.
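The fix, in short (the surrounding read() call is illustrative):

    PGAlignedBuffer buffer;

    /* size the read by the member we are actually reading into */
    rc = read(fd, buffer.data, sizeof(buffer.data));    /* not sizeof(buffer) */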
Backpatch-through: 13
Discussion: https://postgr.es/m/928bdab1-6567-449f-98c4-339cd2203b87@vondra.me
Useful for caseless matching. Similar to LOWER(), but avoids edge-case
problems with using LOWER() for caseless matching.
For collations that support it, CASEFOLD() handles characters with
more than two case variations or multi-character case variations. Some
characters may fold to uppercase. The results of case folding are also
more stable across Unicode versions than LOWER() or UPPER().
Discussion: https://postgr.es/m/a1886ddfcd8f60cb3e905c93009b646b4cfb74c5.camel%40j-davis.com
Reviewed-by: Ian Lawrence Barwick
Comments and code stated that we expect checkpointer to have been signalled in
case of immediate shutdown / fatal errors, but didn't treat archiver and
walsenders the same. That doesn't seem right.
I had started digging through the history to see where this oddity was
introduced, but it's not the fault of a single commit.
Instead treat archiver, checkpointer, and walsenders the same.
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/kgng5nrvnlv335evmsuvpnh354rw7qyazl73kdysev2cr2v5zu@m3cfzxicm5kp
This includes some behavioral changes:
- Previously PM_WAIT_XLOG_ARCHIVAL wasn't handled in HandleFatalError(), which
doesn't seem quite right.
- Previously a fatal error in PM_WAIT_XLOG_SHUTDOWN led to jumping back to
PM_WAIT_BACKENDS; now we go to PM_WAIT_DEAD_END. Jumping backwards doesn't
seem quite right, and we didn't do so when checkpointer failed to fork during
a shutdown.
- Previously a checkpointer fork failure didn't call SetQuitSignalReason(),
which would lead to quickdie() reporting
"terminating connection because of unexpected SIGQUIT signal"
which seems even worse than the PMQUIT_FOR_CRASH message. If I saw that in
the log, I'd suspect somebody outside of postgres had sent SIGQUITs.
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/kgng5nrvnlv335evmsuvpnh354rw7qyazl73kdysev2cr2v5zu@m3cfzxicm5kp
There are two places switching to FatalError mode, behaving somewhat
differently. An upcoming commit will introduce a third. That doesn't seem
like a good idea.
This commit just moves the FatalError related code from HandleChildCrash()
into its own function, a subsequent commit will evolve the state machine
change to be suitable for other callers.
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/kgng5nrvnlv335evmsuvpnh354rw7qyazl73kdysev2cr2v5zu@m3cfzxicm5kp
Previously HandleChildCrash() skipped logging and signalling child exits if
already in an immediate shutdown or in FatalError state, but still
transitioned server state in response to a crash. That's redundant.
In the other place we transition to FatalError, we do take care to not do so
when already in FatalError state.
To make it easier to combine different paths for entering FatalError state,
only do so once in HandleChildCrash().
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/kgng5nrvnlv335evmsuvpnh354rw7qyazl73kdysev2cr2v5zu@m3cfzxicm5kp
The motivation for this change is that a future commit will use SIGINT for
another purpose (postmaster requesting WAL access to be shut down) and that
there are no other signals we could readily use (see code comment for the
reason why SIGTERM shouldn't be used). But it's also a tad nicer / more
efficient to use SetLatch(), as it avoids sending signals when checkpointer
is already busy.
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/kgng5nrvnlv335evmsuvpnh354rw7qyazl73kdysev2cr2v5zu@m3cfzxicm5kp