We do not want to trigger some tasks by default, to avoid using too many
compute credits. These tasks have to be triggered manually instead. But
for e.g. cfbot we do have sufficient resources, so we always want to
start those tasks.
With this commit, an individual repository can be configured to trigger
them automatically using an environment variable defined under
"Repository Settings", for example:
REPO_CI_AUTOMATIC_TRIGGER_TASKS="mingw netbsd openbsd"
This will enable cfbot to turn them on by default when running tests for the
Commitfest app.
Backpatch this back to PG 15, even though PG 15 does not have any
manually triggered tasks. Keeping the CI infrastructure the same across
branches seems advantageous.
Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/20240413021221.hg53rvqlvldqh57i%40awork3.anarazel.de
Backpatch-through: 15
As highlighted by the prior commit, writing correct escape functions is less
trivial than one might hope.
This test module tries to verify that different escaping functions
behave reasonably. Among other things, it tests that:
- Invalidly encoded input to an escape function leads to invalidly encoded
output
- Trailing incomplete multi-byte characters are handled sensibly
- Escaped strings are parsed as a single statement by psql's parser
(which derives from the backend parser)
There are further tests that would be good to add. But even in its
current state the module was rather useful for writing the fix in the
prior commit.
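A minimal standalone sketch of the first property above, using libpq's
PQescapeLiteral; this is only an illustration, the real module
exercises several escape functions and also drives psql's parser:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");  /* params from environment */
        /* 0xC3 starts a two-byte UTF-8 sequence that is never completed */
        const char  invalid[] = {'f', 'o', 'o', (char) 0xC3, '\0'};
        char       *escaped;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        escaped = PQescapeLiteral(conn, invalid, strlen(invalid));

        /*
         * Invalidly encoded input must not come back validly encoded,
         * where it could confuse a parser; reporting an error
         * (returning NULL) is also acceptable.
         */
        if (escaped == NULL)
            fprintf(stderr, "escaping failed: %s", PQerrorMessage(conn));
        else
        {
            printf("escaped: %s\n", escaped);
            PQfreemem(escaped);
        }
        PQfinish(conn);
        return 0;
    }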
Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 13
Security: CVE-2025-1094
If a new catalog tuple is inserted that belongs to a catcache list
entry, and cache invalidation happens while the list entry is being
built, the list entry might miss the newly inserted tuple.
To fix, change the way we detect concurrent invalidations while a
catcache entry is being built. Keep a stack of entries that are being
built, and apply cache invalidation to those entries in addition to
the real catcache entries. This is similar to the in-progress list in
relcache.c.
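A toy model of that shape, with invented names (the real code lives in
catcache.c):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct BuildInProgress
    {
        struct BuildInProgress *next;   /* enclosing build, if any */
        unsigned    hashValue;          /* identifies the entry being built */
        bool        invalidated;        /* set by concurrent invalidation */
    } BuildInProgress;

    static BuildInProgress *buildStack;

    static void
    begin_build(BuildInProgress *e, unsigned hashValue)
    {
        e->next = buildStack;
        e->hashValue = hashValue;
        e->invalidated = false;
        buildStack = e;
    }

    /* invalidation applies to in-progress entries, like to real ones */
    static void
    invalidate(unsigned hashValue)
    {
        for (BuildInProgress *e = buildStack; e; e = e->next)
            if (e->hashValue == hashValue)
                e->invalidated = true;
    }

    /* returns true if the finished entry is usable, false to rebuild */
    static bool
    end_build(BuildInProgress *e)
    {
        buildStack = e->next;
        return !e->invalidated;
    }

    int
    main(void)
    {
        BuildInProgress b;

        begin_build(&b, 42);
        invalidate(42);         /* concurrent inval during the build */
        printf("usable: %d\n", end_build(&b));  /* 0: must rebuild */
        return 0;
    }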
Back-patch to all supported versions.
Reviewed-by: Noah Misch
Discussion: https://www.postgresql.org/message-id/2234dc98-06fe-42ed-b5db-ac17384dc880@iki.fi
As I'd feared, commit 5c9d8636d was still a few bricks shy of a load.
We can't just leave pulled-up lateral-reference Vars with no new
nullingrels: we have to carefully compute what subset of the
to-be-replaced Var's nullingrels apply to them, else we still get
"wrong varnullingrels" errors. This is a bit tedious, but it looks
like we can use the nullingrel data this patch computes for other
purposes, enabling better optimization. We don't want to inject
unnecessary plan changes into stable branches though, so leave that
idea for a later HEAD-only patch.
Patch by me, but thanks to Richard Guo for devising a test case that
broke 5c9d8636d, and for preliminary investigation about how to fix
it. As before, back-patch to v16.
Discussion: https://postgr.es/m/E1tGn4j-0003zi-MP@gemulon.postgresql.org
1. The error reporting of "port setrequested list-of-packages..."
changed, hiding errors we were relying on to know if all packages in our
list were already installed. Use a loop instead.
2. The cached MacPorts installation was shared between PostgreSQL
major branches, though each branch wanted different packages. Add the
list of packages to the cache key, so that different branches, when
tested in one github account/repo such as postgres/postgres, stop
fighting with each other by repeatedly adding and removing packages.
Back-patch to 15 where CI began.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Suggested-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/au2uqfuy2nf43nwy2txmc5t2emhwij7kzupygto3d2ffgtrdgr%40ckvrlwyflnh2
Previously, in unlucky cases, it was possible for pg_rewind to remove
certain WAL segments from the rewound demoted primary. In particular
this happens if those files have been marked for archival (i.e., their
.ready files were created) but not yet archived; the newly promoted node
no longer has such files because they have been recycled, but they are
likely critical for recovery on the demoted node. If pg_rewind removes
them, recovery is no longer possible.
Fix this by maintaining, during the scan that looks for the checkpoint,
a hash table of files in this situation; the decide_file_actions phase
consults it so that it knows to preserve them.
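A toy model of the keep-list idea, with invented names and a trivial
array standing in for the hash table:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_KEEP 16

    static const char *keep_list[MAX_KEEP];
    static int  nkeep;

    /* called from the scan that looks for the checkpoint record */
    static void
    remember_wal_with_ready_file(const char *segname)
    {
        if (nkeep < MAX_KEEP)
            keep_list[nkeep++] = segname;
    }

    /* consulted by the decide_file_actions phase */
    static bool
    must_preserve(const char *segname)
    {
        for (int i = 0; i < nkeep; i++)
            if (strcmp(keep_list[i], segname) == 0)
                return true;
        return false;
    }

    int
    main(void)
    {
        remember_wal_with_ready_file("000000010000000000000003");
        printf("%s\n", must_preserve("000000010000000000000003")
               ? "preserve" : "remove");
        return 0;
    }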
Backpatch to 14. The problem also exists in 13, but that branch was not
blessed with commit eb00f1d4bf, so this patch is difficult to apply
there. Users of older releases will just have to continue to be extra
careful when rewinding.
Co-authored-by: Полина Бунгина (Polina Bungina) <bungina@gmail.com>
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://postgr.es/m/CAAtGL4AhzmBRsEsaDdz7065T+k+BscNadfTqP1NcPmsqwA5HBw@mail.gmail.com
If a CTE, subquery, sublink, security invoker view, or coercion
projection references a table with row-level security policies, we
neglected to mark the plan as potentially dependent on which role
is executing it. This could lead to later executions in the same
session returning or hiding rows that should have been hidden or
returned instead.
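A toy model of the reuse rule the fix enforces; PostgreSQL's real
mechanism records this in PlannerGlobal.dependsOnRole and the plan
cache, but the names below are invented:

    #include <stdbool.h>
    #include <stdio.h>

    typedef unsigned int Oid;

    typedef struct CachedPlan
    {
        bool        dependsOnRole;  /* does output depend on role? */
        Oid         planRoleId;     /* role the plan was built for */
    } CachedPlan;

    static bool
    plan_is_reusable(const CachedPlan *plan, Oid current_role)
    {
        /* a role-dependent plan is only reusable by the same role */
        return !plan->dependsOnRole || plan->planRoleId == current_role;
    }

    int
    main(void)
    {
        CachedPlan  plan = {.dependsOnRole = true, .planRoleId = 10};

        printf("same role ok: %d\n", plan_is_reusable(&plan, 10));
        printf("other role ok: %d\n", plan_is_reusable(&plan, 11));
        return 0;
    }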
Reported-by: Wolfgang Walther
Reviewed-by: Noah Misch
Security: CVE-2024-10976
Backpatch-through: 12
Supply a new memory manager for RuntimeDyld, to avoid crashes in
generated code caused by memory placement that can overflow a 32 bit
data type. This is a drop-in replacement for the
llvm::SectionMemoryManager class in the LLVM library, with Michael
Smith's proposed fix from
https://www.github.com/llvm/llvm-project/pull/71968.
We hereby slurp it into our own source tree, after moving into a new
namespace llvm::backport and making some minor adjustments so that it
can be compiled with older LLVM versions as far back as 12. It's harder
to make it work on even older LLVM versions, but it doesn't seem likely
that people are really using them so that is not investigated for now.
The problem could also be addressed by switching to JITLink instead of
RuntimeDyld, and that is the LLVM project's recommended solution as
the latter is about to be deprecated. We'll have to do that soon enough
anyway, and then when the LLVM version support window advances far
enough in a few years we'll be able to delete this code. Unfortunately
that wouldn't be enough for PostgreSQL today: in most relevant versions
of LLVM, JITLink is missing or incomplete.
Several other projects have already back-ported this fix into their fork
of LLVM, which is a vote of confidence despite the lack of commit into
LLVM as of today. We don't have our own copy of LLVM so we can't do
exactly what they've done; instead we have a copy of the whole patched
class so we can pass an instance of it to RuntimeDyld.
The LLVM project hasn't chosen to commit the fix yet, and even if it
did, it wouldn't be back-ported into the releases of LLVM that most of
our users care about, so there is not much point in waiting any longer
for that. If they make further changes and commit it to LLVM 19 or 20,
we'll still need this for older versions, but we may want to
resynchronize our copy and update some comments.
The changes that we've had to make to our copy can be seen by diffing
our SectionMemoryManager.{h,cpp} files against the ones in the tree of
the pull request. Per the LLVM project's license requirements, a copy
is in SectionMemoryManager.LICENSE.
This should fix the spate of crash reports we've been receiving lately
from users on large memory ARM systems.
Back-patch to all supported releases.
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se> (license aspects)
Reported-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/CAO6_Xqr63qj%3DSx7HY6ZiiQ6R_JbX%2B-p6sTPwDYwTWZjUmjsYBg%40mail.gmail.com
We had been using "diff -upd", which evidently works for most people,
but Solaris's diff doesn't like it. (We'd not noticed because the
Solaris buildfarm animals weren't running this test until they were
upgraded to the latest buildfarm client script.) Change to "diff -U3"
which is what pg_regress has used for ages.
Per buildfarm (and off-list discussion with Noah Misch).
Back-patch to v16 where this test was added. In v16,
also back-patch the relevant part of 628c1d1f2 so that
the test script looks about the same in all branches.
This reverts commit 95c5acb3fc (v17) and
counterparts in each other non-master branch. If released, that commit
would have caused a worst-in-years minor release regression, via
undetected LWLock self-deadlock. This commit and its self-deadlock fix
warrant more bake time in the master branch.
Reported by Alexander Lakhin.
Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com
The inplace update survives ROLLBACK. The inval didn't, so another
backend's DDL could then update the row without incorporating the
inplace update. In the test this fixes, a mix of CREATE INDEX and ALTER
TABLE resulted in a table with an index, yet relhasindex=f. That is a
source of index corruption. Back-patch to v12 (all supported versions).
The back branch versions don't change WAL, because those branches just
added end-of-recovery SIResetAll(). All branches change the ABI of
extern function PrepareToInvalidateCacheTuple(). No PGXN extension
calls that, and there's no apparent use case in extensions.
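A toy model of why the inval must be nontransactional (names invented):

    #include <stdbool.h>
    #include <stdio.h>

    static bool queued_inval;   /* transactional: dropped on abort */
    static bool sent_inval;     /* nontransactional: survives abort */

    static void
    inplace_update(bool fixed)
    {
        /* the heap change itself is immediate and survives ROLLBACK */
        if (fixed)
            sent_inval = true;      /* fixed behavior: send right away */
        else
            queued_inval = true;    /* old behavior: queue transactionally */
    }

    int
    main(void)
    {
        inplace_update(false);
        queued_inval = false;       /* ROLLBACK discards the queue */
        printf("old: other backends told: %d\n", queued_inval || sent_inval);

        inplace_update(true);
        printf("fixed: other backends told: %d\n", sent_inval);
        return 0;
    }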
Reviewed by Nitin Motiani and (in earlier versions) Andres Freund.
Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com
... to fix bugs when the referenced table is partitioned.
The catalog representation we chose for foreign keys connecting
partitioned tables (in commit f56f8f8da6) is inconvenient, in the
sense that a standalone table has a different way to represent the
constraint when referencing a partitioned table than when the same
table becomes a partition (and vice versa). Because of this, we need to
create additional catalog rows on detach (pg_constraint and pg_trigger),
and remove them on attach. We were doing some of those things, but not
all of them, leading to missing catalog rows in certain cases.
The worst problem seems to be that we are missing action triggers after
detaching a partition, which means that you could update or delete rows
from the referenced partitioned table even though rows referencing them
still existed in that table, with the server failing to throw the
required errors.
!!!
Note that this means existing databases with FKs that reference
partitioned tables might have rows that break relational integrity, on
tables that were once partitions on the referencing side of the FK.
Another possible problem is that trying to reattach a table that had
been detached would fail with an error indicating that internal
triggers cannot be found, which from the user's point of view is
nonsensical.
In branches 15 and above, we fix this by creating a new helper function
addFkConstraint() which is in charge of creating a standalone
pg_constraint row, and repurposing addFkRecurseReferencing() and
addFkRecurseReferenced() so that each is only the recursive routine for
one side of the FK; they call addFkConstraint() to create
pg_constraint at each partitioning level and add the necessary triggers.
These new routines can be used during partition creation, partition
attach and detach, and foreign key creation. This reduces redundant
code and simplifies the flow.
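A toy model of the refactored flow; the function names follow the
routines above, but the signatures and the partition tree are invented
for the example:

    #include <stdio.h>

    typedef struct Rel
    {
        const char *name;
        struct Rel *parts[2];   /* child partitions, if any */
    } Rel;

    /* counterpart of addFkConstraint(): one pg_constraint row per level */
    static void
    addFkConstraint(const Rel *rel)
    {
        printf("pg_constraint row on %s\n", rel->name);
    }

    /* counterpart of addFkRecurseReferenced(): recursion for one side */
    static void
    addFkRecurseReferenced(const Rel *rel)
    {
        addFkConstraint(rel);
        for (int i = 0; i < 2 && rel->parts[i]; i++)
            addFkRecurseReferenced(rel->parts[i]);
    }

    int
    main(void)
    {
        Rel         p1 = {"measurement_2024", {NULL, NULL}};
        Rel         p2 = {"measurement_2025", {NULL, NULL}};
        Rel         parent = {"measurement", {&p1, &p2}};

        addFkRecurseReferenced(&parent);
        return 0;
    }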
In branches 14 and 13, we have a much simpler fix that consists in
simply removing the constraint on detach. The reason is that those
branches are missing commit f4566345cf, which reworked this area in a
way that we didn't consider back-patchable at the time.
We opted to leave branch 12 alone, because it differs enough from
branch 13 that the fix doesn't apply; and because it is going into EOL
mode very soon, patching it now might be worse, since there's no way to
undo the damage if it goes wrong.
Existing databases might need to be repaired.
In the future we might want to rethink the catalog representation to
avoid this problem, but for now the code seems to do what's required to
make the constraints operate correctly.
Co-authored-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Guillaume Lelarge <guillaume@lelarge.info>
Reported-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Reported-by: Thomas Baehler (SBB CFF FFS) <thomas.baehler2@sbb.ch>
Discussion: https://postgr.es/m/20230420144344.40744130@karst
Discussion: https://postgr.es/m/20230705233028.2f554f73@karst
Discussion: https://postgr.es/m/GVAP278MB02787E7134FD691861635A8BC9032@GVAP278MB0278.CHEP278.PROD.OUTLOOK.COM
Discussion: https://postgr.es/m/18541-628a61bc267cd2d3@postgresql.org
Commit 149ac7d455, which re-implemented pgindent in Perl, explicitly
imported the devnull function from File::Spec, but the module does not
export anything. In recent versions of Perl, calling a missing import
function causes a warning, which combined with warnings being fatal
causes pgindent to error out.
Backpatch to all supported versions.
Author: Erik Wienhold <ewie@ewie.name>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/2372cd74-11b0-46f9-b28e-8f9627215d19@ewie.name
Backpatch-through: 12
MacPorts version 2.9.3 started failing in our ci_macports_packages.sh
script, for reasons not fully determined, but plausibly linked to the
release of 2.10.1. Since 2.10.1 seems to work, let's switch to it.
Back-patch to 15, where CI began.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/81f104e8-f0a9-43c0-85bd-2bbbf590a5b8%40eisentraut.org
Commit d01ce180 invented a new way to find the latest MacPorts version.
By bad luck, a new beta release has just been published, and it seems
to lack some packages we need. Go back to searching for this specific
version for now. We still search with a pattern so that we can find the
package for the running version of macOS, but for now we always look for
2.9.3. The code to do that had been anticipated already in a
commented-out line; I just didn't expect to have to use it so soon...
Also include the whole MacPorts installation script in the cache key, so
that changes to the script cause a fresh installation. This should make
it a bit easier to reason about the effect of changes on cached state in
github accounts using CI, when we make adjustments.
Back-patch to 15, like d01ce180.
Discussion: https://postgr.es/m/CA%2BhUKGLqJdv6RcwyZ_0H7khxtLTNJyuK%2BvDFzv3uwYbn8hKH6A%40mail.gmail.com
1. Previously we were using ghcr.io/cirruslabs/macos-XXX-base:latest
images, but Cirrus has started ignoring that and using a particular
image, currently ghcr.io/cirruslabs/macos-runner:sonoma, for github
accounts using free CI resources (as opposed to dedicated runner
machines, as cfbot uses). Let's just ask for that image anyway, to stay
in sync.
2. Instead of hard-coding a MacPorts installation URL, deduce it from
the running macOS version and the available releases. This removes the
need to keep ci_macports_packages.sh in sync with .cirrus.tasks.yml,
and to advance the MacPorts version from time to time.
3. Change the cache key we use to cache the whole MacPorts installation
across builds to include the OS major version, to trigger a fresh
installation when appropriate.
Back-patch to 15 where CI began.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGLqJdv6RcwyZ_0H7khxtLTNJyuK%2BvDFzv3uwYbn8hKH6A%40mail.gmail.com
Test result files might be checked out using Unix- or Windows-style
line endings, depending on git flags, so on Windows we use the
--strip-trailing-cr flag to tell diff to ignore line-ending
differences.
The flag is added to the diff invocation for the test_json_parser
module tests and the pg_bsd_indent tests. In pg_regress.c we replace
the current use of the "-w" flag, which ignores all whitespace
differences, with this one, which only ignores line-ending differences.
Discussion: https://postgr.es/m/20240707052030.r77hbdkid3mwksop@awork3.anarazel.de
macOS 15's SDK pulls in headers related to <regex.h> when we include
<xlocale.h>. This causes our own regex_t implementation to clash with
the OS's regex_t implementation. Luckily our function names already had
pg_ prefixes, but the macros and typenames did not.
Include <regex.h> explicitly on all POSIX systems, and fix everything
that breaks. Then we can prove that we are capable of fully hiding and
replacing the system regex API with our own.
1. Deal with standard-clobbering macros by undefining them all first.
POSIX says they are "symbolic constants". If they are macros, this
allows us to redefine them. If they are enums or variables, our macros
will hide them.
2. Deal with standard-clobbering types by giving our types pg_
prefixes, and then using macros to redirect xxx_t -> pg_xxx_t.
After including our "regex/regex.h", the system <regex.h> is hidden,
because we've replaced all the standard names. The PostgreSQL source
tree and extensions can continue to use the standard prefix-less type
and macro names, and still reach our implementation, as long as they
include our "regex/regex.h" header.
Back-patch to all supported branches, so that macOS 15's tool chain can
build them.
Reported-by: Stan Hu <stanhu@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/CAMBWrQnEwEJtgOv7EUNsXmFw2Ub4p5P%2B5QTBEgYwiyjy7rAsEQ%40mail.gmail.com
The standby_slot_names GUC allows the specification of physical standby
slots that must be synchronized before the logical walsenders associated
with logical failover slots. However, for this purpose, the GUC name is
too generic.
Author: Hou Zhijie
Reviewed-by: Bertrand Drouvot, Masahiko Sawada
Backpatch-through: 17
Discussion: https://postgr.es/m/ZnWeUgdHong93fQN@momjian.us
This could have caused spurious failures only on SPARC Linux, because
the tree's only todo_start today tests for that platform. Back-patch
to v16,
where Meson support first appeared.
Reviewed by Robert Haas.
Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com
We did not recover the subtransaction IDs of prepared transactions
when starting a hot standby from a shutdown checkpoint. As a result,
such subtransactions were considered aborted, rather than
in-progress. That would lead to hint bits being set incorrectly, and
the subtransactions suddenly becoming visible to old snapshots when
the prepared transaction was committed.
To fix, update pg_subtrans with the prepared transactions' subxids when
starting hot standby from a shutdown checkpoint. The snapshots taken
from that state need to be marked as "suboverflowed", so that we also
check pg_subtrans.
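A toy model of the fix, with invented names and an array standing in
for pg_subtrans:

    #include <stdio.h>

    #define MAX_XID 16

    static unsigned parent_of[MAX_XID];  /* stand-in for pg_subtrans */
    static int  snapshot_suboverflowed;

    /* at hot-standby start, for each prepared transaction */
    static void
    recover_prepared(unsigned xid, const unsigned *subxids, int nsub)
    {
        for (int i = 0; i < nsub; i++)
            parent_of[subxids[i]] = xid;    /* was skipped before the fix */
        snapshot_suboverflowed = 1;         /* force pg_subtrans lookups */
    }

    /* is xid part of the still-running top-level transaction topxid? */
    static int
    is_running(unsigned xid, unsigned topxid)
    {
        if (xid == topxid)
            return 1;
        return snapshot_suboverflowed && parent_of[xid] == topxid;
    }

    int
    main(void)
    {
        unsigned    subxids[] = {7, 8};

        recover_prepared(5, subxids, 2);
        printf("subxact 7 in progress: %d\n", is_running(7, 5));    /* 1 */
        return 0;
    }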
Backport to all supported versions.
Discussion: https://www.postgresql.org/message-id/6b852e98-2d49-4ca1-9e95-db419a2696e0@iki.fi
Commit f5e4dedfa exposed libpq's internal function PQsocketPoll
without a lot of thought about whether that was an API we really
wanted to chisel in stone. The main problem with it is the use of
time_t to specify the timeout. While we do want an absolute time
so that a loop around PQsocketPoll doesn't have problems with
timeout slippage, time_t has only 1-second resolution. That's
already problematic for libpq's own internal usage --- for example,
pqConnectDBComplete has long had a kluge to treat "connect_timeout=1"
as 2 seconds so that it doesn't accidentally round to nearly zero.
And it's even less likely to be satisfactory for external callers.
Hence, let's change this while we still can.
The best idea seems to be to use an int64 count of microseconds since
the epoch --- basically the same thing as the backend's TimestampTz,
but let's use the standard Unix epoch (1970-01-01) since that's more
likely to be easy for clients to calculate. Millisecond resolution
would be plenty for foreseeable uses, but maybe the day will come that
we're glad we used microseconds.
Also, since time(2) isn't especially helpful for computing timeouts
defined this way, introduce a new function PQgetCurrentTimeUSec
to get the current time in this form.
Remove the hack in pqConnectDBComplete, so that "connect_timeout=1"
now means what you'd expect.
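A usage sketch of the revised API: wait up to 1.5 seconds for the
connection's socket to become readable, using an absolute end time in
microseconds:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");  /* params from environment */
        pg_usec_time_t end_time;
        int         rc;

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        end_time = PQgetCurrentTimeUSec() + 1500000;    /* now + 1.5s */
        rc = PQsocketPoll(PQsocket(conn), 1 /* forRead */ , 0, end_time);
        printf("poll result: %d\n", rc);    /* >0 ready, 0 timed out */
        PQfinish(conn);
        return 0;
    }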
We can also remove the "#include <time.h>" that f5e4dedfa added to
libpq-fe.h, since there's no longer a need for time_t in that header.
It seems better for v17 not to enlarge libpq-fe.h's include footprint
from what it's historically been, anyway.
I also failed to resist the temptation to do some wordsmithing
on PQsocketPoll's documentation.
Patch by me, per complaint from Dominique Devienne.
Discussion: https://postgr.es/m/913559.1718055575@sss.pgh.pa.us
Make sure that function declarations use names that exactly match the
corresponding names from function definitions in pg_bsd_indent.
This commit was written with help from clang-tidy, by mechanically
applying the same rules as similar clean-up commits.
Discussion: https://postgr.es/m/CAH2-WzkaBS8w-vCbG5M5Bx7XikC0WhNLJV_+Z_YAWW9Kef6OBQ@mail.gmail.com
0452b461bc made the optimizer explore alternative orderings of group-by
pathkeys. The PathKeyInfo data structure was used to store a particular
ordering of group-by pathkeys and the corresponding clauses. It turns
out that PathKeyInfo is not the best name for that purpose. This commit
renames the data structure to GroupByOrdering, and revises its comment.
Discussion: https://postgr.es/m/db0fc3a4-966c-4cec-a136-94024d39212d%40postgrespro.ru
Reported-by: Tom Lane
Author: Andrei Lepikhov
Reviewed-by: Alexander Korotkov, Pavel Borisov
This commit introduces a new data structure, BtreeLastVisibleEntry,
comprising information about the last visible heap entry with the
current value of the key. Using this data structure allows us to avoid
passing all this information as individual function arguments.
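A sketch of the refactoring pattern; the field set here is invented,
see the real BtreeLastVisibleEntry for the actual members:

    #include <stdio.h>

    typedef struct BtreeLastVisibleEntry
    {
        unsigned    blkno;      /* heap block of last visible entry */
        unsigned    offnum;     /* offset within that block */
        unsigned    xmin;       /* inserting transaction */
    } BtreeLastVisibleEntry;

    /* before: check_entry(blkno, offnum, xmin); after: */
    static void
    check_entry(const BtreeLastVisibleEntry *lVis)
    {
        printf("last visible: block %u offset %u xmin %u\n",
               lVis->blkno, lVis->offnum, lVis->xmin);
    }

    int
    main(void)
    {
        BtreeLastVisibleEntry lVis = {10, 3, 742};

        check_entry(&lVis);
        return 0;
    }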
Reported-by: Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/CAPpHfdsVbB9ToriaB1UHuOKwjKxiZmTFQcEF%3DjuzzC_nby31uA%40mail.gmail.com
Author: Pavel Borisov, Alexander Korotkov
In commit da256a4a7, I manually added some typedef names to the
buildfarm-generated list so as not to cause any formatting regressions
compared to the prior manually-updated list.
About half of the additions were injection-point-related names.
It turns out that those were missing because none of the buildfarm
animals contributing typedef lists were building with
--enable-injection-points. I rectified that on my animal sifaka,
and now those are in the list available from the buildfarm.
The other half were typedefs that didn't show up in the generated list
because our method for collecting that doesn't catch names that are
not used to declare any C variables or fields. Such a typedef name
doesn't really add a lot of value, so we decided to get rid of them,
and that's now been done in commits 110eb4aef and be5942aee. (Note:
I'm pretty sure there are some remaining cases of that, but we've
already accepted the ensuing odd formatting of the typedef declaration
itself. The present fixes only dealt with typedefs that had been
manually added to typedefs.list during the v17 development cycle.)
Hence, we can now install a verbatim copy of the buildfarm's list
and not have it affect anything. The only change is to add
InjectionPointCallback, which I'd omitted from da256a4a7 because
it chanced not to affect any pgindent decisions.
Discussion: https://postgr.es/m/1919000.1715815925@sss.pgh.pa.us
There were a few typedefs that were never used to define a variable or
field. This had the effect that the buildfarm's typedefs.list would
not include them, and so they would need to be re-added manually to
keep the overall pgindent result perfectly clean.
We can easily get rid of these typedefs to avoid the issue. In a few
cases, we just let the enum or struct stand on its own without a
typedef around it. In one case, an enum was abused to define flag
bits; that's better done with macro definitions.
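A sketch of the flag-bits case, with invented names:

    #include <stdio.h>

    /*
     * Before: typedef enum { MY_FLAG_A = 1 << 0, MY_FLAG_B = 1 << 1 } MyFlags;
     * The values get OR'd together, so variables end up being plain
     * ints and the typedef name goes unused.  After:
     */
    #define MY_FLAG_A (1 << 0)
    #define MY_FLAG_B (1 << 1)

    int
    main(void)
    {
        int         flags = MY_FLAG_A | MY_FLAG_B;

        printf("flags = 0x%x\n", flags);
        return 0;
    }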
This fixes all the remaining issues with missing entries in the
buildfarm's typedefs.list.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/1919000.1715815925@sss.pgh.pa.us
This enum was used to determine the first ID to use when assigning a
custom wait event for extensions, which is always 1. It was kept so
that it would be possible to add new in-core wait events in the category
"Extension". There is no such thing currently, so let's remove this
enum until a case justifying it pops up. This makes the code simpler
and easier to understand.
This has the effect of switching autoprewarm.c back to using
PG_WAIT_EXTENSION rather than WAIT_EVENT_EXTENSION, on par with v16 and
older stable branches.
Thinko in c9af054653.
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/195c6c45-abce-4331-be6a-e87724e1d060@eisentraut.org
This README explains how to run pgindent, but it was written
for our former practice of running pgindent only infrequently.
Nowadays the plan is to keep the tree indent-clean all the time,
so the typical thing is to do an incremental pgindent run with
each commit. Revise to explain that as the normal case, and
the infrequent full-on process as a separate thing.
For now, pgperltidy is still a run-it-infrequently item.
Run pgindent, pgperltidy, and reformat-dat-files.
The pgindent part of this is pretty small, consisting mainly of
fixing up self-inflicted formatting damage from patches that
hadn't bothered to add their new typedefs to typedefs.list.
In order to keep it from making anything worse, I manually added
a dozen or so typedefs that appeared in the existing typedefs.list
but not in the buildfarm's list. Perhaps we should formalize that,
or better find a way to get those typedefs into the automatic list.
pgperltidy is as opinionated as always, and reformat-dat-files too.
This commit fixes a race condition between injection point run and
detach, where a point detached by a backend and concurrently running in
a second backend could cause the second backend to do an incorrect
condition check. This issue happens because the second backend
retrieves the callback information from the shmem hash table for
injection points in a first step, and then the condition within the
callback in a second step. If the point is detached between these two
steps, the condition is removed, causing the point to run when it
should not. Storing the condition in the new private_data area
introduced in
33181b48fd ensures that the condition retrieved is consistent with its
callback.
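A toy model of the fix's shape, with invented names: one lookup hands
back the callback and its condition together, so a concurrent detach
cannot separate them:

    #include <stdio.h>

    typedef struct PointEntry
    {
        void        (*callback) (int only_for_pid);
        int         only_for_pid;   /* condition kept with the callback */
    } PointEntry;

    static void
    my_callback(int only_for_pid)
    {
        printf("condition: run only in pid %d\n", only_for_pid);
    }

    static void
    run_point(const PointEntry *snapshot)
    {
        /*
         * Both fields came from a single hash table lookup, so a
         * concurrent detach cannot leave us with a callback but no
         * condition.
         */
        snapshot->callback(snapshot->only_for_pid);
    }

    int
    main(void)
    {
        PointEntry  entry = {my_callback, 12345};

        run_point(&entry);
        return 0;
    }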
This commit leads to a lot of simplifications in the module
injection_points, as there is no need to handle the runtime conditions
inside it anymore. Runtime conditions no longer have a maximum number.
Per discussion with Noah Misch.
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/20240509031553.47@rfd.leadboat.com
94985c210 added code to detect when WindowFuncs were monotonic and
allowed additional quals to be "pushed down" into the subquery to be
used as WindowClause runConditions in order to short-circuit execution
in nodeWindowAgg.c.
The Node representation of runConditions wasn't well chosen, and
because we do qual pushdown before planning the subquery, the planning
of the subquery could perform subquery pull-up of nested subqueries.
For WindowFuncs with args, the arguments could be changed after pushing
the qual down to the subquery.
This was made more difficult by the fact that the code duplicated the
WindowFunc inside an OpExpr to include in the WindowClause's
runCondition field. This could result in duplication of subqueries, and
a pull-up of such a subquery could cause another initplan parameter to
be issued for the 2nd version of the subplan, producing errors such as:
ERROR: WindowFunc not found in subplan target lists
To fix this, we change the node representation of these run conditions:
instead of storing an OpExpr containing the WindowFunc in a list
inside WindowClause, we now store a new node type named
WindowFuncRunCondition within a new field in the WindowFunc. These get
transformed into OpExprs later in planning once subquery pull-up has been
performed.
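A simplified sketch of the new representation; the fields are trimmed
and the OIDs are only examples, see primnodes.h for the real node:

    #include <stdio.h>

    typedef unsigned int Oid;

    /* one piece of a "wfunc OP value" qual, kept unassembled */
    typedef struct WindowFuncRunCondition
    {
        Oid         opno;       /* comparison operator */
        int         wfunc_left; /* is the window function the left arg? */
        /* the other operand is omitted in this sketch */
    } WindowFuncRunCondition;

    typedef struct WindowFunc
    {
        Oid         winfnoid;   /* which window function */
        WindowFuncRunCondition *runCondition;   /* moved from WindowClause */
    } WindowFunc;

    int
    main(void)
    {
        WindowFuncRunCondition rc = {97, 1};    /* e.g. int4 "<" */
        WindowFunc  wf = {3100, &rc};           /* e.g. row_number() */

        printf("wfunc %u compared with op %u\n",
               wf.winfnoid, wf.runCondition->opno);
        return 0;
    }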
This problem did exist in v15 and v16, but that was fixed by 9d36b883b
and e5d20bbd.
Catversion bump due to the new node type and the modified WindowFunc
struct.
Bug: #18305
Reported-by: Zuming Jiang
Discussion: https://postgr.es/m/18305-33c49b4c830b37b3%40postgresql.org
JsonNonTerminal and JsonParserSem were added in commit 3311ea86ed.
The names of these two enums are not actually used, so there is no need
for the typedefs. Instead, use plain enums to declare the constants.
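The pattern in question, sketched with an invented name:

    #include <stdio.h>

    /* Before: typedef enum JsonXyz { JSON_SEM_A, JSON_SEM_B } JsonXyz;
     * The name JsonXyz was never used anywhere.  After: */
    enum JsonXyz
    {
        JSON_SEM_A,
        JSON_SEM_B
    };

    int
    main(void)
    {
        printf("%d\n", JSON_SEM_B);     /* the constants remain usable */
        return 0;
    }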
Noticed by Alvaro Herrera.
We missed performing table sync if an invalidation happened while the
non-ready tables list was being prepared. This occurred because the
sync state was set to valid at the end of non-ready table list
preparation, irrespective of any invalidations processed while the list
was being prepared.
Fix it by changing the boolean variable to a tri-state enum and by
setting the table state to valid only if no invalidations have occurred
while the list was being prepared.
Reported-by: Alexander Lakhin
Diagnosed-by: Alexander Lakhin
Author: Vignesh C
Reviewed-by: Hou Zhijie, Alexander Lakhin, Ajin Cherian, Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/711a6afe-edb7-1211-cc27-1bef8239eec7@gmail.com
While keeping the API the same, this commit provides a way for
block-level table AMs to reuse the existing acquire_sample_rows() by
providing custom callbacks for getting the next block and the next
tuple.
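A sketch of the callback shape with invented names; the real API lives
in the table AM interface:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct SampleState
    {
        int         next_block;
        int         nblocks;
        int         tuples_left;
    } SampleState;

    /* AM-supplied: advance to the next block to sample */
    static bool
    block_sampling_next_block(SampleState *s)
    {
        if (s->next_block >= s->nblocks)
            return false;
        s->next_block++;
        s->tuples_left = 2;     /* pretend each block has two tuples */
        return true;
    }

    /* AM-supplied: fetch the next tuple from the current block */
    static bool
    block_sampling_next_tuple(SampleState *s)
    {
        return s->tuples_left-- > 0;
    }

    /* the shared acquire_sample_rows() loop, driven by the callbacks */
    int
    main(void)
    {
        SampleState s = {0, 2, 0};
        int         sampled = 0;

        while (block_sampling_next_block(&s))
            while (block_sampling_next_tuple(&s))
                sampled++;
        printf("sampled %d tuples\n", sampled);
        return 0;
    }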
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20240407214001.jgpg5q3yv33ve6y3%40awork3.anarazel.de
Reviewed-by: Pavel Borisov