pgbench would sometimes get stuck waiting forever after its last
client thread terminated, because it failed to check whether anything
was left to wait for.
Bug introduced during refactoring in v10 (I didn't bother to try to
assign blame to a specific commit). It's already repaired in
HEAD/v12 thanks to commit 3bac77c48, but v10 and v11 need this fix.
Fabien Coelho, per report from Tomas Vondra; reviewed by
Yoshikazu Imai
Discussion: https://postgr.es/m/cc5d76c1-6144-bbed-ad1b-961d13d88f3b@2ndquadrant.com
Some platforms create a file named "localtime" in the system
timezone directory, making it a copy or link to the active time
zone file. If Postgres is built with --with-system-tzdata, initdb
will see that file as an exact match to localtime(3)'s behavior,
and it may decide that "localtime" is the most preferred spelling of
the active zone. That's a very bad choice though, because it's
neither informative, nor portable, nor stable if someone changes
the system timezone setting. Extend the preference logic added by
commit e3846a00c so that we will prefer any other zone file that
matches localtime's behavior over "localtime".
On the same logic, also discriminate against "posixrules", which
is another not-really-a-zone file that is often present in the
timezone directory. (Since we install "posixrules" but not
"localtime", this change can affect the behavior of Postgres
with or without --with-system-tzdata.)
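To illustrate the extended preference rule, here is a minimal sketch
(hypothetical function names, not initdb's actual code) of how the
tie-break between zone files that all match localtime's behavior can
discriminate against the pseudo-zones:

    #include <stdbool.h>
    #include <string.h>

    /*
     * Hypothetical sketch, not initdb's code: decide whether "newname"
     * should replace the current best candidate "bestname", given that
     * both zone files match localtime(3)'s behavior.  Pseudo-zones such
     * as "localtime" and "posixrules" lose to any real zone name.
     */
    static bool
    zone_name_is_pseudo(const char *name)
    {
        return strcmp(name, "localtime") == 0 ||
               strcmp(name, "posixrules") == 0;
    }

    static bool
    prefer_new_zone_name(const char *newname, const char *bestname)
    {
        bool    newpseudo = zone_name_is_pseudo(newname);
        bool    bestpseudo = zone_name_is_pseudo(bestname);

        if (newpseudo != bestpseudo)
            return !newpseudo;      /* any real zone beats a pseudo-zone */

        /* otherwise fall back to the pre-existing arbitrary rules */
        return strlen(newname) < strlen(bestname);
    }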
Note that this change doesn't prevent anyone from choosing these
pseudo-zones if they really want to (i.e., by setting TZ for initdb,
or modifying the timezone GUC later on). It just prevents initdb
from preferring these zone names when there are multiple matches to
localtime's behavior.
Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.
Discussion: https://postgr.es/m/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com
Discussion: https://postgr.es/m/27991.1560984458@sss.pgh.pa.us
describeOneTableDetails issued a partition-constraint-fetching query
for every table, even ones it knows perfectly well are not partitions.
To add insult to injury, it then proceeded to leak the empty PGresult
if the table wasn't a partition. Doing that a lot of times might
amount to a meaningful leak, so this seems like a back-patchable bug.
Fix that, and also fix a related PGresult leak in the partition-parent
case (though that leak would occur only if we got no row, which is
unexpected).
Minor code beautification too, to make this code look more like the
pre-existing code around it.
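The shape of the fix, as a minimal hedged sketch (conn, is_partition,
and reloid stand in for psql's own state; this is not the actual
describe.c code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <libpq-fe.h>

    /*
     * Hypothetical sketch, not psql's describe.c: only issue the
     * partition-constraint query when the table is already known to be
     * a partition, and free the PGresult even if it comes back empty.
     */
    static void
    show_partition_constraint(PGconn *conn, bool is_partition,
                              unsigned int reloid)
    {
        char        query[128];
        PGresult   *res;

        if (!is_partition)
            return;                 /* skip the query entirely */

        snprintf(query, sizeof(query),
                 "SELECT pg_catalog.pg_get_partition_constraintdef(%u)",
                 reloid);
        res = PQexec(conn, query);
        if (res && PQresultStatus(res) == PGRES_TUPLES_OK &&
            PQntuples(res) > 0)
            printf("Partition constraint: %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);           /* previously leaked when no row came back */
    }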
Back-patch the whole change into v12. However, the fact that we already
know whether the table is a partition dates only to commit 1af25ca0c;
back-patching the relevant changes from that is probably more churn
than is justified in released branches. Hence, in v11 and v10, just
do the minimum to fix the PGresult leaks.
Noted while messing around with adjacent code for yesterday's \d
improvements.
The logic in ATExecDropColumn that rejects dropping partition key
columns is quite an inadequate defense, because it doesn't execute
in cases where a column needs to be dropped due to cascade from
something that only the column, not the whole partitioned table,
depends on. That leaves us with a badly broken partitioned table;
even an attempt to load its relcache entry will fail.
We really need to have explicit pg_depend entries that show that the
column can't be dropped without dropping the whole table. Hence,
add those entries. In v12 and HEAD, bump catversion to ensure that
partitioned tables will have such entries. We can't do that in
released branches of course, so in v10 and v11 this patch affords
protection only to partitioned tables created after the patch is
installed. Given the lack of field complaints (this bug was found
by fuzz-testing not by end users), that's probably good enough.
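The added dependencies look roughly like the sketch below; the loop
shape and the use of DEPENDENCY_INTERNAL are assumptions on my part,
and rel, partnatts, and partattrs are taken to be the surrounding
function's variables, so treat this as an illustration of the idea
rather than the committed code.

    /*
     * Sketch only (assumed details, not the committed code): make each
     * simple partition key column an internal dependent of the whole
     * partitioned table, so it cannot be dropped without dropping the
     * table itself.
     */
    ObjectAddress   tableobj,
                    keycol;
    int             i;

    ObjectAddressSet(tableobj, RelationRelationId, RelationGetRelid(rel));

    for (i = 0; i < partnatts; i++)
    {
        if (partattrs[i] == 0)
            continue;               /* skip expression columns */

        ObjectAddressSubSet(keycol, RelationRelationId,
                            RelationGetRelid(rel), partattrs[i]);
        /* assumed dependency type; the point is that a pg_depend entry exists */
        recordDependencyOn(&keycol, &tableobj, DEPENDENCY_INTERNAL);
    }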
In passing, fix ATExecDropColumn and ATPrepAlterColumnType
messages to be more specific about which partition key column
they're complaining about.
Per report from Manuel Rigger. Back-patch to v10 where partitioned
tables were added.
Discussion: https://postgr.es/m/CA+u7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg@mail.gmail.com
Discussion: https://postgr.es/m/31920.1562526703@sss.pgh.pa.us
Don't think that the context "UPDATE tab SET var =" is a GUC-setting
command.
If we have "SET var =" but the "var" is not a known GUC variable,
don't offer any completions. The most likely explanation is that
we've misparsed the context and it's not really a GUC-setting command.
Per gripe from Ken Tanzer. Back-patch to 9.6. The issue exists
further back, but before 9.6 the code looks very different and it
doesn't actually know whether the "var" name matches anything,
so I desisted from trying to fix it.
Discussion: https://postgr.es/m/CAD3a31XpXzrZA9TT3BqLSHghdTK+=cXjNCE+oL2Zn4+oWoc=qA@mail.gmail.com
tzdb 2019a made "UCT" a link to the "UTC" zone rather than a separate
zone with its own abbreviation. Unfortunately, our code for choosing a
timezone in initdb has an arbitrary preference for names earlier in
the alphabet, and so it would choose the spelling "UCT" over "UTC"
when the system is running on a UTC zone.
Commit 23bd3cec6 was backpatched in order to address this issue, but
that code helps only when /etc/localtime exists as a symlink, and does
nothing to help on systems where /etc/localtime is a copy of a zone
file (as is the standard setup on FreeBSD and probably some other
platforms too) or when /etc/localtime is simply absent (giving UTC as
the default).
Accordingly, add a preference for the spelling "UTC", such that if
multiple zone names have equally good content matches, we prefer that
name before applying the existing arbitrary rules. Also add a slightly
lower preference for "Etc/UTC"; lower because that preserves the
previous behaviour of choosing the shorter name, while still letting us
choose "Etc/UTC" over "Etc/UCT" when both exist but "UTC" does
not (not common, but I've seen it happen).
Backpatch all the way, because the tzdb change that sparked this issue
is in those branches too.
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database. Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones(). This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases. (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter. More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)
If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.
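A minimal sketch of the symlink shortcut (not the actual initdb code;
the zoneinfo path is an assumption, and the real logic still verifies
that the named zone matches localtime's observed behavior):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Hypothetical sketch: if /etc/localtime is a symlink into the IANA
     * database, its target already names the zone, e.g.
     * /usr/share/zoneinfo/Asia/Tokyo -> "Asia/Tokyo".
     */
    static const char *
    zone_name_from_localtime(char *buf, size_t buflen)
    {
        const char *prefix = "/usr/share/zoneinfo/";    /* assumed location */
        ssize_t     len = readlink("/etc/localtime", buf, buflen - 1);

        if (len < 0)
            return NULL;        /* not a symlink or unreadable: fall back */
        buf[len] = '\0';        /* readlink() does not NUL-terminate */

        if (strncmp(buf, prefix, strlen(prefix)) != 0)
            return NULL;        /* target not in the expected directory */
        return buf + strlen(prefix);
    }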
Also, tweak initdb so that it prints the zone name it selected.
In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.
Back-patch of commit 23bd3cec6. The original intention was to not
back-patch, since this can result in cosmetic behavioral changes ---
for example, on my own workstation initdb now chooses "America/New_York",
where it used to prefer "US/Eastern" which is equivalent and shorter.
However, our hand has been more or less forced by tzdb update 2019a,
which made the "UCT" zone fully equivalent to "UTC". Our old code
now prefers "UCT" on the grounds of it being alphabetically first,
and that's making nobody happy. Choosing the alias indicated by
/etc/localtime is a more defensible behavior. (Users who don't like
the results can always force the decision by setting the TZ environment
variable before running initdb.)
Patch by me, per a suggestion from Robert Haas; review by Michael Paquier
Discussion: https://postgr.es/m/7408.1525812528@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190604085735.GD24018@msg.df7cb.de
Using PARTITION OF can result in column ordering being changed from the
database being dumped, if the partition uses a column layout different
from the parent's. It's not pg_dump's job to editorialize on table
definitions, so this is not acceptable; back-patch all the way back to
pg10, where partitioned tables were introduced.
This change also ensures that partitions end up in the correct
tablespace, if different from the parent's; this is an oversight in
ca4103025d (in pg12 only). Partitioned indexes (in pg11) don't have
this problem, because they're already created as independent indexes and
attached to their parents afterwards.
This change also has the advantage that the partition is restorable from
the dump (as a standalone table) even if its parent table isn't
restored.
The original commits (3b23552ad8 in branch master) failed to cover
subsidiary column elements correctly, such as NOT NULL constraints and
CHECK constraints, as reported by Rushabh Lathia (initially as a failure
to restore serial columns). They were reverted. This recapitulation
commit fixes those problems.
Add some pg_dump tests to verify these things more exhaustively,
including constraints with legacy-inheritance tables, which were not
tested originally. In branches 10 and 11, add a local constraint to the
pg_dump test partition that was added by commit 2d7eeb1b14 to master.
Author: Álvaro Herrera, David Rowley
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f_1c260nOt_vBJ067AZ3JXptXVRohDVMLEBmudX1YEx-A@mail.gmail.com
Discussion: https://postgr.es/m/20190423185007.GA27954@alvherre.pgsql
Discussion: https://postgr.es/m/CAGPqQf0iQV=PPOv2Btog9J9AwOQp6HmuVd6SbGTR_v3Zp2XT1w@mail.gmail.com
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a
race condition in "make -j check-world". If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously. Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs. This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.
Buildfarm client REL_10, released fifty-four days ago, supports saving
regression.diffs from its new location. When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.
Reviewed (in earlier versions) by Andrew Dunstan.
Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
This allows "vcregress upgradecheck" to pass twice in immediate
succession, and it's more like how $(prove_check) works. Back-patch to
9.5, where pg_upgrade moved to src/bin.
Discussion: https://postgr.es/m/20190520012436.GA1480421@rfd.leadboat.com
On master (after 700538) the old version's installed psql was used,
even when the old version might not actually be installed, or might be
installed only into a temporary directory. That is commonly the case
when just executing make check for pg_upgrade, since $oldbindir is then
just the current version's $bindir.
In the back branches, with --install specified, psql from the new
version's temporary installation was used; without --install (e.g. for
NO_TEMP_INSTALL, cf. 47b3c26642), the new version's installed psql was
used (which might or might not exist).
Author: Andres Freund
Discussion: https://postgr.es/m/20190522175150.c26f4jkqytahajdg@alap3.anarazel.de
This uses a method similar to 68a7c24f and now b8c6014 (applied for
database creation), which guarantees that GRANT commands using WITH
GRANT OPTION are dumped in a way such that cascading dependencies are
respected. Note that tablespaces do not have support for initial
privileges via pg_init_privs, so the same method needs to be applied
again. It would be nice to merge all the logic generating ACL queries
in dumps under the same banner, but this requires extending the support
of pg_init_privs to objects that cannot use it yet, so this is left as
future work.
Discussion: https://postgr.es/m/20190522071555.GB1278@paquier.xyz
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Backpatch-through: 9.6
This uses a method similar to 68a7c24f, which guarantees that GRANT
commands using WITH GRANT OPTION are dumped in a way such that cascading
dependencies are respected. As databases do not have support for
initial privileges via pg_init_privs, we need to apply the same
ACL reordering method here as well.
ACLs for databases were moved from pg_dumpall to pg_dump in v11, so
this impacts pg_dump for v11 and above, and pg_dumpall for v9.6 and
v10.
Discussion: https://postgr.es/m/15788-4e18847520ebcc75@postgresql.org
Author: Nathan Bossart
Reviewed-by: Haribabu Kommi
Backpatch-through: 9.6
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a
race condition in "make -j check-world". If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously. Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs. This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.
Buildfarm client REL_10, released forty-five days ago, supports saving
regression.diffs from its new location. When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.
Reviewed by Andrew Dunstan.
Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
The function had been interpreting SQL_ASCII messages as UTF8, throwing
an error when they were invalid UTF8. The new behavior is consistent
with pg_do_encoding_conversion(). This affects LOG_DESTINATION_STDERR
and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to
write() and ReportEventA(). On buildfarm member bowerbird, enabling
log_connections caused an error whenever the role name was not valid
UTF8. Back-patch to 9.4 (all supported versions).
Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com
The buildfarm client uses TEMP_CONFIG to implement its extra_config
setting. Except for stats_temp_directory, extra_config now applies to
TAP suites; extra_config values seen in the past month are compatible
with this. Back-patch to 9.6, where PostgresNode was introduced, so the
buildfarm can rely on it sooner.
Reviewed by Andrew Dunstan and Tom Lane.
Discussion: https://postgr.es/m/20181229021950.GA3302966@rfd.leadboat.com
When failing to reindex a table or an index, reindexdb would generate an
extra error message related to a database failure, which is misleading.
Backpatch all the way down, as this was introduced by 85e9a5a0.
Discussion: https://postgr.es/m/CAOBaU_Yo61RwNO3cW6WVYWwH7EYMPuexhKqufb2nFGOdunbcHw@mail.gmail.com
Author: Julien Rouhaud
Reviewed-by: Daniel Gustafsson, Álvaro Herrera, Tom Lane, Michael
Paquier
Backpatch-through: 9.4
When running a batch of VACUUM or ANALYZE commands on a given database,
there were cases where it was possible for vacuumdb not to report an
error when it actually should, leading to incorrect status results.
Author: Julien Rouhaud
Reviewed-by: Amit Kapila, Michael Paquier
Discussion: https://postgr.es/m/CAOBaU_ZuTwz7CtqLYJ1Ouuh272bTQPLN8b1bAPk0bCBm4PDMTQ@mail.gmail.com
Backpatch-through: 9.5
Using PARTITION OF can result in column ordering being changed from the
database being dumped, if the partition uses a column layout different
from the parent's. It's not pg_dump's job to editorialize on table
definitions, so this is not acceptable; back-patch all the way back to
pg10, where partitioned tables were introduced.
This change also ensures that partitions end up in the correct
tablespace, if different from the parent's; this is an oversight in
ca4103025d (in pg12 only). Partitioned indexes (in pg11) don't have
this problem, because they're already created as independent indexes and
attached to their parents afterwards.
This change also has the advantage that the partition is restorable from
the dump (as a standalone table) even if its parent table isn't
restored.
Author: David Rowley
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f_1c260nOt_vBJ067AZ3JXptXVRohDVMLEBmudX1YEx-A@mail.gmail.com
Discussion: https://postgr.es/m/20190423185007.GA27954@alvherre.pgsql
For some reason the dump test with names with high bits set fails on
Msys2 (although not Msys1). Disable the tests for now, so that other
tests can run.
Renaming varchar_transform to varchar_support had a side effect
I hadn't foreseen: the core regression tests leave around a
transform object that relies on that function, so the name
change breaks cross-version upgrade tests, because the name
used in the older branches doesn't match.
Since the dependency on varchar_transform was chosen with the
aid of a dartboard anyway (it would surely not work as a
language transform support function), fix by just choosing
a different random builtin function with the right signature.
Also add some comments explaining why this isn't horribly unsafe.
I chose to make the same substitution in a couple of other
copied-and-pasted test cases, for consistency, though those
aren't directly contributing to the testing problem.
Per buildfarm. Back-patch, else it doesn't fix the problem.
warn_or_exit_horribly() was blithely passing a potentially-NULL
string pointer to a %s format specifier. That works (at least
to the extent of not crashing) on some platforms, but not all,
and since we switched to our own snprintf.c it doesn't work
for us anywhere.
Of the three string fields being handled this way here, I think
that only "owner" is supposed to be nullable ... but considering
that this is error-reporting code, it has very little business
assuming anything, so put in defenses for all three.
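The defense amounts to this kind of pattern (a generic sketch with
made-up names, not the actual warn_or_exit_horribly code):

    #include <stdio.h>

    /*
     * Generic sketch, hypothetical names: never hand a possibly-NULL
     * pointer to a %s specifier; some platforms, and our own snprintf.c,
     * will not tolerate it.
     */
    static const char *
    nullable(const char *s)
    {
        return s ? s : "(null)";
    }

    static void
    report_entry(const char *name, const char *namespc, const char *owner)
    {
        fprintf(stderr, "entry \"%s\" (namespace \"%s\", owner \"%s\")\n",
                nullable(name), nullable(namespc), nullable(owner));
    }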
Per a crash observed on buildfarm member crake and then
reproduced here. Because of the portability aspect,
back-patch to all supported versions.
The modules RewindTest.pm and ServerSetup.pm are really only useful for
TAP tests, so they belong in the TAP test directories. In
addition, ServerSetup.pm is renamed to SSLServer.pm.
The test scripts have their own directories added to the search path so
that the relocated modules will be found, regardless of where the tests
are run from, even on modern perl where "." is no longer in the
search path.
Discussion: https://postgr.es/m/e4b0f366-269c-73c3-9c90-d9cb0f4db1f9@2ndQuadrant.com
Backpatch as appropriate to 9.5
Commit 62215de29 turns out to have been not quite on the mark.
When we are forced to postpone dumping of a materialized view into
the dump's post-data section (because it depends on a unique index
that isn't created till that section), we may also have to postpone
dumping other matviews that depend on said matview. The previous fix
didn't reliably work for such cases: it'd break the dependency loops
properly, producing a workable object ordering, but it didn't
necessarily mark all the matviews as "postponed_def". This led to
harmless bleating about "archive items not in correct section order",
as reported by Tom Cassidy in bug #15602. Less harmlessly,
selective-restore options such as --section might misbehave due to
the matview dump objects not being properly labeled.
The right way to fix it is to consider that each pre-data dependency
we break amounts to moving the no-longer-dependent object into
post-data, and hence we should mark that object if it's a matview.
Back-patch to all supported versions, since the issue's been there
since matviews were introduced.
Discussion: https://postgr.es/m/15602-e895445f73dc450b@postgresql.org
Previously, \g would successfully execute the COPY command, but
the target specification if any was ignored, so that the data was
always dumped to the regular query output target. This seems like
a clear bug, so let's not just fix it but back-patch it.
While at it, adjust the documentation for \copy to recommend
"COPY ... TO STDOUT \g foo" as a plausible alternative.
Back-patch to 9.5. The problem exists much further back, but the
code associated with \g was refactored enough in 9.5 that we'd
need a significantly different patch for 9.4, and it doesn't
seem worth the trouble.
Daniel Vérité, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/15dadc39-e050-4d46-956b-dcc4ed098753@manitou-mail.org
reap_child() basically ignored the possibility of either an error in
waitpid() itself or a child process failure on signal. We don't really
need to do more than report and crash hard, but proceeding as though
nothing is wrong is definitely Not Acceptable. The error report for
nonzero child exit status was pretty off-point, as well.
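The kind of checking that was missing looks like this (a standalone
POSIX sketch with a hypothetical helper, not the actual reap_child
code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /*
     * Sketch only: report and exit hard on waitpid() failure, death by
     * signal, or a nonzero exit status, rather than carrying on.
     */
    static void
    wait_for_child(pid_t child)
    {
        int         status;

        if (waitpid(child, &status, 0) != child)
        {
            perror("waitpid");
            exit(1);
        }
        if (WIFSIGNALED(status))
        {
            fprintf(stderr, "child was terminated by signal %d\n",
                    WTERMSIG(status));
            exit(1);
        }
        if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
        {
            fprintf(stderr, "child exited with status %d\n",
                    WEXITSTATUS(status));
            exit(1);
        }
    }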
Noted while fooling around with child-process failure detection
logic elsewhere. It's been like this a long time, so back-patch to
all supported branches.
Unfortunately ac218aa4f6 missed the fact that a reference to
'pg_catalog.regnamespace'::regclass wouldn't work before that type is
known. Fix that, by replacing the regtype usage with a join to
pg_type.
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us
Backpatch: 9.5-, like ac218aa4f6
When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were
added in 9.5, pg_upgrade's check for reg* types wasn't updated. While
regrole currently is safe, regnamespace is not.
It seems unlikely that anybody uses regnamespace inside catalog tables
across a pg_upgrade, but the tests should be correct nevertheless.
While at it, reorder the types checked in the query to be
alphabetical. Otherwise it's annoying to compare the existing list
against the types being tested for.
Author: Andres Freund
Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com
Backpatch: 9.5-, as regrole/regnamespace
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
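In terms of the documented libpq API, the recommended shape of a
notification loop is roughly as follows (a minimal sketch, not psql's
code):

    #include <stdio.h>
    #include <libpq-fe.h>

    /*
     * Minimal sketch: absorb any waiting input before each PQnotifies()
     * call, so notifications that span more than one packet are seen.
     */
    static void
    print_notifications(PGconn *conn)
    {
        PGnotify   *notify;

        if (!PQconsumeInput(conn))
        {
            fprintf(stderr, "connection trouble: %s", PQerrorMessage(conn));
            return;
        }
        while ((notify = PQnotifies(conn)) != NULL)
        {
            printf("NOTIFY \"%s\" with payload \"%s\" from PID %d\n",
                   notify->relname, notify->extra, notify->be_pid);
            PQfreemem(notify);
            PQconsumeInput(conn);   /* there may be more data on the socket */
        }
    }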
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result. However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions. Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.
The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.
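For reference, the macro added to c.h is essentially the following,
and the replacement pattern looks like this (illustrative function,
not a particular call site):

    #include <string.h>

    /* The c.h macro is essentially this: safe even when var == INT_MIN. */
    #define INVERT_COMPARE_RESULT(var) \
        ((var) = ((var) < 0) ? 1 : -(var))

    /* Illustrative use: a descending-order string comparator. */
    static int
    reverse_strcmp(const char *a, const char *b)
    {
        int         res = strcmp(a, b);

        /* "return -res;" would overflow if res happened to be INT_MIN */
        INVERT_COMPARE_RESULT(res);
        return res;
    }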
To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN". That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.
This is a longstanding portability hazard, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
96e1cb4 added support for --no-publications in pg_dump, pg_dumpall
and pg_restore, but forgot that publication tables also need to
be ignored when this option is used.
Author: Gilles Darold
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/3f48e812-b0fa-388e-2043-9a176bdee27e@dalibo.com
Backpatch-through: 10, where publications were added.
This removes a difference between the standard IsUnderPostmaster
execution environment and that of --boot and --single. In a stand-alone
backend, "SELECT random()" always started at the same seed.
On a system capable of using posix shared memory, initdb could still
conclude "selecting dynamic shared memory implementation ... sysv".
Crashed --boot or --single postgres processes orphaned shared memory
objects having names that collided with the not-actually-random names
that initdb probed. The sysv fallback appeared after ten crashes of
--boot or --single postgres. Since --boot and --single are rare in
production use, systems used for PostgreSQL development are the
principal candidate to notice this symptom.
Back-patch to 9.3 (all supported versions). PostgreSQL 9.4 introduced
dynamic shared memory, but 9.3 does share the "SELECT random()" problem.
Reviewed by Tom Lane and Kyotaro HORIGUCHI.
Discussion: https://postgr.es/m/20180915221546.GA3159382@rfd.leadboat.com
You can't use "FOR TABLE" as a single Matches argument, because readline
will consider that input to be two words not one. It's necessary to make
the pattern contain two arguments.
The case accidentally worked anyway because the words_after_create
test fired ... but only for the first such table name.
Noted by Edmund Horner, though this isn't exactly his proposed fix.
Backpatch to v10 where the faulty code came in.
Discussion: https://postgr.es/m/CAMyN-kDe=gBmHgxWwUUaXuwK+p+7g1vChR7foPHRDLE592nJPQ@mail.gmail.com
Our general practice in frontend code is to accept input with either
Unix-style newlines (\n) or DOS-style (\r\n). pgbench was mostly down
with that, but its rule for line continuations (backslash-newline) was
not. This had been masked on Windows buildfarm machines before commit
0ba06e0bf by use of Windows text mode to read files. We could have fixed
it by forcing text mode again, but it's better to fix the parsing code
so that Windows-style text files on Unix systems don't cause problems.
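The parsing rule amounts to stripping an optional carriage return
before looking for the continuation backslash; a minimal sketch (not
pgbench's actual code):

    #include <stdbool.h>
    #include <string.h>

    /*
     * Sketch only: decide whether a just-read line ends in a backslash
     * continuation, accepting both Unix (\n) and DOS (\r\n) line endings.
     */
    static bool
    line_is_continued(char *line)
    {
        size_t      len = strlen(line);

        if (len > 0 && line[len - 1] == '\n')
            line[--len] = '\0';
        if (len > 0 && line[len - 1] == '\r')   /* DOS-style ending */
            line[--len] = '\0';
        return (len > 0 && line[len - 1] == '\\');
    }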
Back-patch to v10 where pgbench grew line continuations.
Discussion: https://postgr.es/m/17194.1537191697@sss.pgh.pa.us
There's a project policy against using plain "char buf[BLCKSZ]" local
or static variables as page buffers; preferred style is to palloc or
malloc each buffer to ensure it is MAXALIGN'd. However, that policy's
been ignored in an increasing number of places. We've apparently got
away with it so far, probably because (a) relatively few people use
platforms on which misalignment causes core dumps and/or (b) the
variables chance to be sufficiently aligned anyway. But this is not
something to rely on. Moreover, even if we don't get a core dump,
we might be paying a lot of cycles for misaligned accesses.
To fix, invent new union types PGAlignedBlock and PGAlignedXLogBlock
that the compiler must allocate with sufficient alignment, and use
those in place of plain char arrays.
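The union types are essentially as follows (BLCKSZ is hard-coded here
only to keep the sketch self-contained; Postgres uses its configured
block size and its own int64 typedef):

    #include <stdint.h>
    #include <string.h>

    #define BLCKSZ 8192             /* stand-in for the configured value */

    /*
     * Essentially the definition added to c.h: the non-char members
     * force the whole object to get at least their alignment.
     */
    typedef union PGAlignedBlock
    {
        char        data[BLCKSZ];
        double      force_align_d;
        int64_t     force_align_i64;
    } PGAlignedBlock;

    /* Usage: instead of "char buf[BLCKSZ]", declare a PGAlignedBlock. */
    static void
    zero_a_block(void)
    {
        PGAlignedBlock buf;         /* suitably aligned automatically */

        memset(buf.data, 0, sizeof(buf.data));
    }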
I used these types even for variables where there's no risk of a
misaligned access, since ensuring proper alignment should make
kernel data transfers faster. I also changed some places where
we had been palloc'ing short-lived buffers, for coding style
uniformity and to save palloc/pfree overhead.
Since this seems to be a live portability hazard (despite the lack
of field reports), back-patch to all supported versions.
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de