The workload to generate multixids before upgrade is very slow on
buildfarm members running with JIT enabled. The workload runs a lot of
small queries, so it's unsurprising that JIT makes it slower. On my
laptop it nevertheless runs in under 10 s even with JIT enabled, while
some buildfarm members have been hitting the 180 s timeout. That seems
extreme, but I suppose it's still expected on very slow and busy
buildfarm animals. The timeout applies to the BackgroundPsql sessions
as a whole rather than to the individual queries.
Bump up the timeout to avoid the test failures. Add periodic progress
reports to the test output so that we get a better picture of just how
slow the test is.
In passing, also fix comments about how many multixids and members
the workload generates. The comments were written based on 10 parallel
connections, but it actually uses 20.
Discussion: https://www.postgresql.org/message-id/b7faf07c-7d2c-4f35-8c43-392e057153ef@gmail.com
In the server, check explicitly for multixids with zero members. We
used to have an assertion for it; commit d4b7bde418 replaced it with
more extensive runtime checks, but missed the original case of zero
members.
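For illustration, the check is of roughly this shape (a hedged sketch;
the variable names are assumptions, not necessarily the committed code):

    if (nmembers <= 0)
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 errmsg("multixact %u has invalid number of members %d",
                        multi, nmembers)));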
In the upgrade code, a negative length never makes sense, so better
check for it explicitly. Commit d4b7bde418 added a similar sanity
check to the corresponding server code on master, and in backbranches,
the 'length' is passed to palloc(), which would fail with an "invalid
memory alloc request size" error. Clarify the comments on what kind of
invalid entries are tolerated by the upgrade code and which ones are
reported as fatal errors.
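A hedged sketch of the added check (the names are illustrative):

    if (length < 0)
        pg_fatal("invalid multixact member count %d", length);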
Coverity complained about 'length' in the upgrade code being
tainted. That's bogus because we trust the data on disk at least to
some extent, but hopefully this will silence the complaint. If not,
I'll dismiss it manually.
Discussion: https://www.postgresql.org/message-id/7b505284-c6e9-4c80-a7ee-816493170abc@iki.fi
We have tried to stabilize them several times already, but they are very
flaky -- apparently there's some intrinsic instability that's hard to
solve with the isolationtester framework. They are very noisy in CI
runs (whereas buildfarm has not registered any such failures). They may
need to be rewritten completely. In the meantime just comment them out
in Makefile/meson.build, leaving the spec files around.
Per complaint from Andres Freund.
Discussion: https://postgr.es/m/202512112014.icpomgc37zx4@alvherre.pgsql
HAVE__STATIC_ASSERT was really a test for GCC statement expressions,
as needed for StaticAssertExpr() now that _Static_assert could be
assumed to be available through our C11 requirement. This
artificially prevented Visual Studio from being able to use
static_assert() in other contexts.
Instead, make a new test for HAVE_STATEMENT_EXPRESSIONS, and use that
to control only whether StaticAssertExpr() uses fallback code, not the
other variants. This improves the quality of failure messages in the
(much more common) other variants under Visual Studio.
Also get rid of the two separate implementations for C++, since the C
implementation is also valid as C++11. While it is a stretch to
apply HAVE_STATEMENT_EXPRESSIONS tested with $CC to a C++ compiler,
the previous C++ coding assumed that the C++ compiler had them
unconditionally, so it isn't a new stretch. In practice, the C and
C++ compilers are very likely to agree, and if a combination is ever
reported that falsifies this assumption we can always reconsider that.
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGKvr0x_oGmQTUkx%3DODgSksT2EtgCA6LmGx_jQFG%3DsDUpg%40mail.gmail.com
Coverity complained that offset cannot be 0 here because there's an
explicit check for "offset == 0" earlier in the function, but it
didn't see the possibility that offset could've wrapped around to 0.
The code is correct, but clarify the comment about it.
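The subtlety is plain unsigned wraparound, for illustration:

    MultiXactOffset offset = PG_UINT32_MAX;    /* MultiXactOffset is uint32 */
    offset += 1;                               /* wraps around to 0 */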
The same code exists in backbranches in the server
GetMultiXactIdMembers() function and in 'master' in the pg_upgrade
GetOldMultiXactIdSingleMember() function. In backbranches Coverity
didn't complain about it because the check was merely an assertion,
but change the comment in all supported branches for consistency.
Per Tom Lane's suggestion.
Discussion: https://www.postgresql.org/message-id/1827755.1765752936@sss.pgh.pa.us
Previously, pg_sync_replication_slots() would finish without synchronizing
slots that didn't meet requirements, rather than failing outright. This
could leave some failover slots unsynchronized if required catalog rows or
WAL segments were missing or at risk of removal, while the standby
continued removing needed data.
To address this, the function now waits for the primary slot to advance to
a position where all required data is available on the standby before
completing synchronization. It retries cyclically until all failover slots
that existed on the primary at the start of the call are synchronized.
Slots created after the function begins are not included. If the standby
is promoted during this wait, the function exits gracefully and the
temporary slots will be removed.
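Schematically, the new behavior amounts to a retry loop of this shape
(a hedged sketch; the helpers marked hypothetical are paraphrases, not
the actual code):

    for (;;)
    {
        if (PromoteIsTriggered())
            break;                          /* promotion: exit gracefully */
        synchronize_slots(wrconn);
        if (all_initial_slots_synced())     /* hypothetical helper */
            break;
        wait_before_retry();                /* hypothetical: latch wait */
    }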
Author: Ajin Cherian <itsajin@gmail.com>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Yilin Zhang <jiezhilove@126.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAFPTHDZAA%2BgWDntpa5ucqKKba41%3DtXmoXqN3q4rpjO9cdxgQrw%40mail.gmail.com
I have fat-fingered an error message related to an offset while
switching the code to use pgoff_t. Let's switch to the same error
message used in the rest of the tree for similar failures with fseeko(),
instead.
Per buildfarm members running macOS: longfin, sifaka and indri.
gist_page_items() opens its target relation with index_open(), but
closes it using relation_close() instead of index_close(). This was
harmless, because index_close() and relation_close() do exactly the
same work, but it was inconsistent with the rest of the tree, where
routines that open and close a relation based on its relkind are
expected to match, at least in name.
Author: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2=bL41WWcD-4Fxx-buS2Y2G5=9PjkxZbHeFMR6Uy2WNvw@mail.gmail.com
This commit builds upon 4ba012a8ed, giving an example of what can be
achieved with the new callbacks. This provides coverage for the new
pgstats APIs, while serving as a reference template.
Note that built-in stats kinds could use them; we just don't have a
use case there yet.
Author: Sami Imseih <samimseih@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0s9SDOu+Z6veoJCHWk+kDeTktAtC-KY9fQ9Z6BJdDUirQ@mail.gmail.com
Cumulative stats kinds gain the capability to write additional per-entry
data when flushing the stats at shutdown, and read this data when
loading back the stats at startup. This is a good fit, for example,
for variable-length data (like normalized query strings), as it becomes
possible to link the shared memory stats entries to data that is
stored in a different area, like a DSA segment.
Three new optional callbacks are added to PgStat_KindInfo, available to
variable-numbered stats kinds:
* to_serialized_data: writes auxiliary data for an entry.
* from_serialized_data: reads auxiliary data for an entry.
* finish: performs actions after read/write/discard operations. This is
invoked after processing all the entries of a kind, allowing extensions
to close file handles and clean up resources.
Stats kinds have the option to store this data in the existing pgstats
file, but can as well store it in one or more additional files whose
names can be built upon the entry keys. The new serialization
callbacks are called once an entry's key has been read from or written
to the main stats file. A file descriptor to the main pgstats file is
available in the
arguments of the callbacks.
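As a hedged sketch, a variable-numbered stats kind could wire these up
as follows (only the new fields are shown; the implementations are
hypothetical):

    static const PgStat_KindInfo my_stats_kind = {
        .name = "my_stats",
        /* ... other PgStat_KindInfo fields elided ... */
        .to_serialized_data = my_stats_to_serialized_data,
        .from_serialized_data = my_stats_from_serialized_data,
        .finish = my_stats_finish,
    };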
Author: Sami Imseih <samimseih@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0s9SDOu+Z6veoJCHWk+kDeTktAtC-KY9fQ9Z6BJdDUirQ@mail.gmail.com
The current list from the buildfarm includes quite a few typedef
names that it used to miss. The reason is a bit obscure, but it
seems likely to have something to do with our recent increased
use of palloc_object and palloc_array. In any case, this makes
the relevant struct declarations be much more nicely formatted,
so I'll take it. Install the current list and re-run pgindent
to update affected code.
Syncing with the current list also removes some obsolete
typedef names and fixes some alphabetization errors.
Discussion: https://postgr.es/m/1681301.1765742268@sss.pgh.pa.us
There doesn't seem to be any great reason why this has been a macro
rather than a typedef. But doing it like that means our buildfarm
typedef tooling doesn't capture the name as a typedef. That would
result in pgindent glitches, except that we've seemingly kept it
in typedefs.list manually. That's obviously error-prone, so let's
convert it to a typedef now.
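That is, a change of this shape (the name is hypothetical, for
illustration only):

    /* before: a macro, invisible to the buildfarm's typedef scanner */
    #define ExampleHandle struct ExampleData *

    /* after: a typedef, picked up automatically */
    typedef struct ExampleData *ExampleHandle;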
Discussion: https://postgr.es/m/1681301.1765742268@sss.pgh.pa.us
While commit 5b275a6e1 fixed a few unhappy buildfarm animals,
it looks like the remainder simply don't have any es_ES locale
at all. There's little point in running the test in that case,
so minimize the number of variant expected-files by bailing out.
Also emit a log entry so that it's possible to tell from buildfarm
postmaster logs which case occurred.
Possibly, the scope of this testing could be improved by providing
additional translations. But I think it's likely that the failing
animals have no non-C locales installed at all.
In passing, update typedefs.list so that koel doesn't think
regress.c is misformatted.
Discussion: https://postgr.es/m/E1vUpNU-000kcQ-1D@gemulon.postgresql.org
The private refcount entry for a buffer is often looked up repeatedly,
e.g. to pin and then unpin the same buffer. Benchmarking shows that it's
worthwhile to have a one-entry cache for that case. With that cache in place,
it's worth splitting GetPrivateRefCountEntry() into a small inline
portion (for the cache hit case) and an out-of-line helper for the rest.
This is helpful for some workloads today, but becomes more important in an
upcoming patch that will utilize the private refcount infrastructure to also
store whether the buffer is currently locked, as that increases the rate of
lookups substantially.
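The resulting shape, as a hedged sketch (the cache variable and the
slow-path name are assumptions):

    static inline PrivateRefCountEntry *
    GetPrivateRefCountEntry(Buffer buffer, bool do_move)
    {
        /* fast path: one-entry cache covers repeated lookups */
        if (PrivateRefCountCachedEntry &&
            PrivateRefCountCachedEntry->buffer == buffer)
            return PrivateRefCountCachedEntry;

        /* slow path, kept out of line */
        return GetPrivateRefCountEntrySlow(buffer, do_move);
    }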
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/6rgb2nvhyvnszz4ul3wfzlf5rheb2kkwrglthnna7qhe24onwr@vw27225tkyar
This makes lookups faster by allowing them to be auto-vectorized. It is also
beneficial for an upcoming patch, independent of auto-vectorization, as the
upcoming patch wants to track more information for each pinned buffer, making
the existing loop, iterating over an array of PrivateRefCountEntry, more
expensive due to increasing its size.
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
While CI testing in advance of commit 8c498479d suggested that all
Unix-ish platforms would accept 'es_ES.UTF-8', the buildfarm has
a different opinion. Let's dynamically select something that works,
if possible.
Discussion: https://postgr.es/m/E1vUpNU-000kcQ-1D@gemulon.postgresql.org
Coverity complained about this, not without reason:
OldMultiXactReader *state = state = pg_malloc(sizeof(*state));
(I'm surprised this is even legal C ... why is "state" in-scope
in its initialization expression?)
While at it, convert to use our newly-preferred "pg_malloc_object"
macro instead of an explicit sizeof().
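With both fixes, the line becomes simply:

    OldMultiXactReader *state = pg_malloc_object(OldMultiXactReader);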
We've never actually had a formal test for this facility.
It seems worth adding one now, mainly because we are starting
to depend on gettext() being able to handle the PRI* macros
from <inttypes.h>, and it's not all that certain that that
works everywhere. So the test goes to a bit of effort to
check all the PRI* macros we are likely to use.
(This effort has indeed found one problem already, now fixed
in commit f8715ec86.)
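The pattern being exercised, for illustration (the message text is
made up):

    /* the msgid contains whatever the PRI* macro expands to on this
     * platform, and gettext() must cope with that */
    printf(_("largest int64 value: %" PRId64 "\n"), PG_INT64_MAX);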
Discussion: https://postgr.es/m/3098752.1765221796@sss.pgh.pa.us
Discussion: https://postgr.es/m/292844.1765315339@sss.pgh.pa.us
Change WAIT_LSN_TYPE_COUNT from an enum sentinel to a macro definition,
in a similar way to the IOObject, IOContext, and BackendType enums.
Remove the explicit enum value assignments as well.
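For illustration, the resulting pattern mirrors the IOObject one (the
member names here are hypothetical):

    typedef enum WaitLSNType
    {
        WAIT_LSN_TYPE_EXAMPLE_A,    /* hypothetical members */
        WAIT_LSN_TYPE_EXAMPLE_B
    } WaitLSNType;

    #define WAIT_LSN_TYPE_COUNT (WAIT_LSN_TYPE_EXAMPLE_B + 1)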
Author: Xuneng Zhou <xunengzhou@gmail.com>
Commits f2e4cc4279 and 4b3d173629 implemented the ALTER TABLE ...
MERGE/SPLIT PARTITION(s) commands. In several places, these commits
used plain palloc() where palloc_object() or palloc_array() would be
more appropriate. This commit switches those places over.
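That is, replacements of this shape (a hedged sketch; the particular
variables are illustrative):

    /* before */
    bound = (PartitionBoundSpec *) palloc(sizeof(PartitionBoundSpec));
    datums = (Datum *) palloc(ndatums * sizeof(Datum));

    /* after */
    bound = palloc_object(PartitionBoundSpec);
    datums = palloc_array(Datum, ndatums);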
Reported-by: Man Zeng <zengman@halodbtech.com>
Discussion: https://postgr.es/m/tencent_3661BB522D5466B33EA33666%40qq.com
This new DDL command splits a single partition into several partitions. Just
like the ALTER TABLE ... MERGE PARTITIONS ... command, new partitions are
created using the createPartitionTable() function with the parent partition
as the template.
This commit comprises a quite naive implementation which works in a single
process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all
the operations, including the tuple routing. This is why the new DDL command
can't be recommended for large partitioned tables under high load. However,
this implementation comes in handy in certain cases, even as it is. Also, it
could serve as a foundation for future implementations with less locking and
possibly parallelism.
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval <d.koval@postgrespro.ru>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
Co-authored-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Co-authored-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Robert Haas <rhaas@postgresql.org>
Reviewed-by: Stephane Tachoires <stephane.tachoires@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Daniel Gustafsson <dgustafsson@postgresql.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
This new DDL command merges several partitions into a single partition of the
target table. The target partition is created using the new
createPartitionTable() function with the parent partition as the template.
This commit comprises a quite naive implementation which works in a single
process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all
the operations, including the tuple routing. This is why this new DDL
command can't be recommended for large partitioned tables under high load.
However, this implementation comes in handy in certain cases, even as it is.
Also, it could serve as a foundation for future implementations with less
locking and possibly parallelism.
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval <d.koval@postgrespro.ru>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
Co-authored-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Co-authored-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Robert Haas <rhaas@postgresql.org>
Reviewed-by: Stephane Tachoires <stephane.tachoires@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Daniel Gustafsson <dgustafsson@postgresql.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
The reference to the test module test_custom_stats should have been
added under the section "Custom Cumulative Statistics", but the section
"Injection Points" has been updated instead, reversing the references
for both test modules.
d52c24b0f8 has removed a paragraph that was correct, and 31280d96a6
has added a paragraph that was incorrect.
Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0s4heX926+ZNh63u12gLd9jgauU6yiirKc7xGo1G01PXQ@mail.gmail.com
In commit b61aa76e4 I added an assumption in jsonb_object_agg_finalfn
that it'd be okay to apply uniqueifyJsonbObject repeatedly to a
JsonbValue. I should have studied that code more closely first,
because in skip_nulls mode it removed leading nulls by changing the
"pairs" array start pointer. This broke the data structure's
invariants in two ways: pairs no longer references a repalloc-able
chunk, and the distance from pairs to the end of its array is less
than parseState->size. So any subsequent addition of more pairs is
at high risk of clobbering memory and/or causing repalloc to crash.
Unfortunately, adding more pairs is exactly what will happen when the
aggregate is being used as a window function.
Fix by rewriting uniqueifyJsonbObject to not do that. The prior
coding had little to recommend it anyway.
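The problematic pattern, schematically (a hedged sketch, not the exact
removed code):

    /* skipping leading nulls by advancing the start pointer breaks
     * the invariants: "pairs" no longer points to the palloc'd chunk,
     * and the space from pairs to the array's end is less than
     * parseState->size */
    while (object->val.object.nPairs > 0 &&
           object->val.object.pairs[0].value.type == jbvNull)
    {
        object->val.object.pairs++;
        object->val.object.nPairs--;
    }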
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/ec5e96fb-ee49-4e5f-8a09-3f72b4780538@gmail.com
When relptr.h was added (commit fbc1c12a94), there was no check for
HAVE_TYPEOF, so it used HAVE__BUILTIN_TYPES_COMPATIBLE_P, which
already existed (commit ea473fb2de) and which was thought to cover
approximately the same compilers. But the guarded code can also work
without HAVE__BUILTIN_TYPES_COMPATIBLE_P, and we now have a check for
HAVE_TYPEOF (commit 4cb824699e), so let's fix this up to use the
correct logic.
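In essence, the guard changes like this (sketch; the accessor bodies
are elided):

    /* before */
    #ifdef HAVE__BUILTIN_TYPES_COMPATIBLE_P
    /* ... typeof()-based relptr accessors ... */
    #endif

    /* after: test for the feature actually used */
    #ifdef HAVE_TYPEOF
    /* ... typeof()-based relptr accessors ... */
    #endif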
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGL7trhWiJ4qxpksBztMMTWDyPnP1QN%2BLq341V7QL775DA%40mail.gmail.com
It's as pointless as ASC/DESC and NULLS FIRST/LAST are, so reject all of
them in the same way. While at it, normalize the others' error messages
to have fewer translatable strings. Add tests for these errors.
Noticed while reviewing recent INSERT ON CONFLICT patches.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/202511271516.oiefpvn3z27m@alvherre.pgsql
Before this commit, when multixid wraparound happens,
MultiXactState->nextMXact goes to 0, which is invalid. All the readers
need to deal with that possibility and skip over the 0. That's
error-prone and we've missed it a few times in the past. This commit
changes the responsibility so that all the writers of
MultiXactState->nextMXact skip over the zero already, and readers can
trust that it's never 0.
We were already doing that for MultiXactState->oldestMultiXactId; none
of its writers would set it to 0. ReadMultiXactIdRange() was
nevertheless checking for that possibility. For clarity, remove that
check.
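The writer-side convention, as a hedged sketch:

    /* writers skip the invalid zero value on wraparound */
    MultiXactState->nextMXact++;
    if (MultiXactState->nextMXact < FirstMultiXactId)
        MultiXactState->nextMXact = FirstMultiXactId;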
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Discussion: https://www.postgresql.org/message-id/3624730d-6dae-42bf-9458-76c4c965fb27@iki.fi
The fix for concurrent index operations in bc32a12e0d started
considering indexes that are not yet marked indisvalid as arbiters for
INSERT ON CONFLICT. For partitioned tables, this leads to including
indexes that may not exist in partitions, causing a trivially
reproducible "invalid arbiter index list" error to be thrown because of
failure to match the index. To fix, it suffices to ignore !indisvalid
indexes on partitioned tables. There should be no risk that the set of
indexes will change for concurrent transactions, because in order for
such an index to be marked valid, an ALTER INDEX ATTACH PARTITION must
run, which requires AccessExclusiveLock.
Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/17622f79-117a-4a44-aa8e-0374e53faaf0%40gmail.com
It's not far-fetched that we'd try to read a multixid with an invalid
offset in case of bugs or corruption, or if you call
pg_get_multixact_members() after a crash that left behind invalid but
unused multixids. Better to get a somewhat descriptive error message
if that happens.
Discussion: https://www.postgresql.org/message-id/3624730d-6dae-42bf-9458-76c4c965fb27@iki.fi
Previously, c.h made <assert.h> only available in frontends (#ifdef
FRONTEND), which was probably reasonable, because the only thing it
would give you is assert(), which you generally shouldn't use in the
backend. But with C11, <assert.h> also makes available
static_assert(), which would be useful everywhere. So this patch
moves <assert.h> to the commonly available header files in c.h and
fixes a small complication in regcustom.h that resulted from that.
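Schematically, in c.h:

    /* before */
    #ifdef FRONTEND
    #include <assert.h>
    #endif

    /* after: included unconditionally, so C11 static_assert() is
     * available in both frontend and backend code */
    #include <assert.h>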
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CA%2BhUKGKvr0x_oGmQTUkx%3DODgSksT2EtgCA6LmGx_jQFG%3DsDUpg%40mail.gmail.com
This is the last batch of changes suggested by the author, this part
covering the non-trivial ones. Some of the suggested changes have been
discarded because they seemed to lead to more generated instructions,
leaving only the parts that qualify as in-place replacements.
Similar work has been done in 1b105f9472, 0c3c5c3b06 and
31d3847a37.
Author: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/ad0748d4-3080-436e-b0bc-ac8f86a3466a@gmail.com
The code over-allocated the memory required for os_page_status, using
sizeof(uint64) instead of sizeof(int) for its element size, hence
doubling what was required. This could mean quite a lot of memory if
dealing with a
lot of NUMA pages.
Oversight in ba2a3c2302.
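That is, roughly (the count variable's name is an assumption):

    /* before: doubles the allocation, as the elements are ints */
    os_page_status = palloc(sizeof(uint64) * os_page_count);
    /* after */
    os_page_status = palloc(sizeof(int) * os_page_count);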
Author: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/ad0748d4-3080-436e-b0bc-ac8f86a3466a@gmail.com
Backpatch-through: 18
Previously, during a promotion, only the slot synchronization worker was
signaled to shut down. The backend executing slot synchronization via the
pg_sync_replication_slots() SQL function was not signaled, allowing it to
complete its synchronization cycle before exiting.
An upcoming patch improves pg_sync_replication_slots() to wait until
replication slots are fully persisted before finishing. This behaviour
requires the backend to exit promptly if a promotion occurs.
This patch ensures that, during promotion, a signal is also sent to the
backend running pg_sync_replication_slots(), allowing it to be interrupted
and exit immediately.
Author: Ajin Cherian <itsajin@gmail.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAFPTHDZAA%2BgWDntpa5ucqKKba41%3DtXmoXqN3q4rpjO9cdxgQrw%40mail.gmail.com
Make it clear why _bt_killitems sorts the scan's so->killedItems[]
array. Also add an assertion to the _bt_killitems loop (that iterates
through this array) to verify it accesses tuples in leaf page order.
Follow-up to commit bfb335df58.
Author: Peter Geoghegan <pg@bowt.ie>
Suggested-by: Victor Yegorov <vyegorov@gmail.com>
Discussion: https://postgr.es/m/CAGnEboirgArezZDNeFrR8FOGvKF-Xok333s2iVwWi65gZf8MEA@mail.gmail.com
An array of LLVMBasicBlockRef was allocated using the size of
"LLVMBasicBlockRef *" rather than "LLVMBasicBlockRef" for each element.
LLVMBasicBlockRef is itself a pointer type, so this did not directly
cause a problem because both should have the same size, but it is
still incorrect.
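That is (hedged; the surrounding names are assumptions):

    /* before: sizeof the wrong type (same size only because
     * LLVMBasicBlockRef is itself a pointer) */
    opblocks = palloc(sizeof(LLVMBasicBlockRef *) * nblocks);
    /* after */
    opblocks = palloc(sizeof(LLVMBasicBlockRef) * nblocks);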
This issue has been spotted while reviewing a different patch, and
exists since 2a0faed9d7, so backpatch all the way down.
Discussion: https://postgr.es/m/CA+hUKGLngd9cKHtTUuUdEo2eWEgUcZ_EQRbP55MigV2t_zTReg@mail.gmail.com
Backpatch-through: 14
Although clang claims to be compatible with gcc's printf format
archetypes, this appears to be a falsehood: it likes __syslog__
(which gcc does not, on most platforms) and doesn't accept
gnu_printf. This means that if you try to use gcc with clang++
or clang with g++, you get compiler warnings when compiling
printf-like calls in our C++ code. This has been true for quite
awhile, but it's gotten more annoying with the recent appearance
of several buildfarm members that are configured like this.
To fix, run separate probes for the format archetype to use with the
C and C++ compilers, and conditionally define PG_PRINTF_ATTRIBUTE
depending on __cplusplus.
(We could alternatively insist that you not mix-and-match C and
C++ compilers; but if the case works otherwise, this is a poor
reason to insist on that.)
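The end result is along these lines (a hedged sketch; the archetype
values shown are merely examples of possible probe results):

    #ifdef __cplusplus
    #define PG_PRINTF_ATTRIBUTE printf       /* e.g., what clang++ accepts */
    #else
    #define PG_PRINTF_ATTRIBUTE gnu_printf   /* e.g., what gcc accepts */
    #endif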
No back-patch for now, but we may want to do that if this
patch survives buildfarm testing.
Discussion: https://postgr.es/m/986485.1764825548@sss.pgh.pa.us
Always return TIDs in descending order when returning groups of TIDs
from an nbtree posting list tuple during nbtree backwards scans. This
makes backwards scans tend to require fewer buffer hits, since the scan
is less likely to repeatedly pin and unpin the same heap page/buffer
(we'll get exactly as many buffer hits as we get with a similar forwards
scan case).
Commit 0d861bbb, which added nbtree deduplication, originally did things
this way to avoid interfering with _bt_killitems's approach to setting
LP_DEAD bits on posting list tuples. _bt_killitems makes a soft
assumption that it can always iterate through posting lists in ascending
TID order, finding corresponding killItems[]/so->currPos.items[] entries
in that same order. This worked out because of the prior _bt_readpage
backwards scan behavior. If we just changed the backwards scan posting
list logic in _bt_readpage, without altering _bt_killitems itself, it
would break its soft assumption.
Avoid that problem by sorting the so->killedItems[] array at the start
of _bt_killitems. That way the order that dead items are saved in from
btgettuple can't matter; so->killedItems[] will always be in the same
order as so->currPos.items[] in the end. Since so->currPos.items[] is
now always in leaf page order, regardless of the scan direction used
within _bt_readpage, and since so->killedItems[] is always in that same
order, the _bt_killitems loop can continue to make a uniform assumption
about everything being in page order. In fact, sorting like this makes
the previous soft assumption about item order into a hard invariant.
Also deduplicate the so->killedItems[] array after it is sorted. That
way there's no risk of the _bt_killitems loop becoming confused by a
duplicate dead item/TID. This was possible in cases that involved a
scrollable cursor that encountered the same dead TID more than once
(within the same leaf page/so->currPos context). This doesn't come up
very much in practice, but it seems best to be as consistent as possible
about how and when _bt_killitems will LP_DEAD-mark index tuples.
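In outline (a hedged sketch; the comparator name is hypothetical):

    /* order so->killedItems[] to match so->currPos.items[], then drop
     * duplicate dead items so each TID is processed exactly once */
    qsort(so->killedItems, so->numKilled, sizeof(int), _bt_killitems_cmp);
    so->numKilled = (int) qunique(so->killedItems, so->numKilled,
                                  sizeof(int), _bt_killitems_cmp);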
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Mircea Cadariu <cadariu.mircea@gmail.com>
Reviewed-by: Victor Yegorov <vyegorov@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wz=Wut2pKvbW-u3hJ_LXwsYeiXHiW8oN1GfbKPavcGo8Ow@mail.gmail.com
It's only useful for an ILIKE optimization with the libc provider using
a single-byte encoding and a non-C locale, but it creates significant
internal complexity.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/450ceb6260cad30d7afdf155d991a9caafee7c0d.camel@j-davis.com
The test seemed to incorrectly think that query_safe() takes an
argument that describes what the query does, similar to e.g.
command_ok(). Until commit bd8d9c9bdf the extra arguments were
harmless and were just ignored, but when commit bd8d9c9bdf introduced
a new optional argument to query_safe(), the extra arguments started
clashing with that, causing the test to fail.
Backpatch to v17, which is the oldest branch where the test exists. The
extra arguments didn't cause any trouble on the older branches, but
they were clearly bogus anyway.
1. The test initially focuses on the "parent" table, then switches to
the "part" table, and goes back to the "parent" table. That seems a
little weird, so move the tests around so that all the commands on the
"parent" table are done first, followed by the "part" table.
2. ALTER TABLE ALTER COLUMN SET EXPRESSION was not tested, so add
that.
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/CACJufxFDi7fnwB-8xXd_ExML7-7pKbTaK4j46AJ=4-14DXvtVg@mail.gmail.com
Phase I vacuum gives the page a once-over after pruning and freezing to
check that the values of all_visible and all_frozen agree with the
result of heap_page_is_all_visible(). This is meant to keep the logic in
phase I for determining visibility in sync with the logic in phase III.
Rewrite the assertion to avoid an Assert(false).
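Schematically (hedged; the variable names are illustrative):

    /* before */
    if (all_visible != debug_all_visible || all_frozen != debug_all_frozen)
        Assert(false);

    /* after: assert the conditions directly, so a failure identifies
     * which expectation was violated */
    Assert(all_visible == debug_all_visible);
    Assert(all_frozen == debug_all_frozen);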
Suggested by Andres Freund.
Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/mhf4vkmh3j57zx7vuxp4jagtdzwhu3573pgfpmnjwqa6i6yj5y%40sy4ymcdtdklo