testtablespace is an extra path used as the tablespace location in the main
regression test suite; it is computed from --outputdir as defined by the
caller of pg_regress (the current directory if undefined).
This special handling was introduced in f10589e specifically for MSVC,
since pg_regress's Makefile handles this cleanup in other environments.
This commit moves the cleanup into the MSVC script that runs the
regression tests, for the targets that need it: check, installcheck and
upgradecheck. I have also checked this patch on MSVC with repeated runs
of each target.
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20200219.142519.437573253063431435.horikyota.ntt@gmail.com
Per POSIX this case should produce +0, but buildfarm member fossa
(with icc (ICC) 19.0.5.281 20190815) is reporting -0. icc has a
boatload of unsafe floating-point optimizations, with a corresponding
boatload of not-too-well-documented compiler switches, and it seems our
default use of "-mp1" isn't whacking it hard enough to keep it from
misoptimizing the stanza in dpow() that checks whether y is odd.
There's nothing wrong with that code (seeing that no other buildfarm
member has trouble with it), so I'm content to blame this on the
compiler. But without access to the compiler I'm not going to guess at
what switches might be needed to fix it. For now, tweak the test case
so it will accept either -0 or +0 as a correct answer.
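For illustration, one way to write a sign-of-zero-insensitive check in SQL
(a sketch, not necessarily the exact tweak applied to the test) is:
-- both -0 and +0 compare equal to 0, so this yields true either way
SELECT power(float8 '-inf', float8 '-2') = float8 '0' AS ok;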
Discussion: https://postgr.es/m/E1jkyFX-0005RR-1Q@gemulon.postgresql.org
Buildfarm results for commit decbe2bfb show that AIX and illumos
have non-POSIX-compliant pow() functions, as do ancient NetBSD
and HPUX releases. While it's dubious how much we should care
about the latter two platforms, the former two are probably enough
reason to put in manual handling of infinite-input cases. Hence,
do so, and clean up the post-pow() error handling to reflect its
now-more-limited scope. (Notably, while we no longer expect to
ever see EDOM from pow(), report it as a domain error if we do.
The former coding had the net effect of expensively converting the
error to ERANGE, which seems highly questionable: if pow() wanted
to report ERANGE, it would have done so.)
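A few representative cases of the POSIX-mandated behavior now handled
explicitly (illustrative only, not the full test set):
SELECT power(float8 'inf', float8 '-3');   -- 0
SELECT power(float8 '-inf', float8 '3');   -- -Infinity
SELECT power(float8 '-inf', float8 '-2');  -- 0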
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/E1jkU7H-00024V-NZ@gemulon.postgresql.org
Previously, these functions tended to throw underflow errors for
negative-infinity exponents. The correct thing per POSIX is to
return 0, so let's do that instead. (Note that the SQL standard
is silent on such issues, as it lacks the concepts of either Inf
or NaN; so our practice is to follow POSIX whenever a corresponding
C-library function exists.)
Also, add a bunch of test cases verifying that exp() and power()
actually do follow POSIX for Inf and NaN inputs. While this patch
should guarantee that exp() passes the tests, power() will not unless
the platform's pow(3) is fully POSIX-compliant. I already know that
gaur fails some of the tests, and I am suspicious that the Windows
animals will too; the extent of compliance of other old platforms
remains to be seen. We might choose to drop failing test cases, or
to work harder at overriding pow(3) for these cases, but first let's
see just how good or bad the situation is.
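For reference, the POSIX behavior the new tests expect from exp() looks
like this (a few representative cases):
SELECT exp(float8 'inf');   -- Infinity
SELECT exp(float8 '-inf');  -- 0
SELECT exp(float8 'nan');   -- NaN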
Discussion: https://postgr.es/m/582552.1591917752@sss.pgh.pa.us
var_samp(numeric) and stddev_samp(numeric) disagreed with their float
cousins about what to do for a single non-null input value that is NaN.
The float versions return NULL on the grounds that the calculation is
only defined for more than one non-null input, which seems like the
right answer. But the numeric versions returned NaN, as a result of
dealing with edge cases in the wrong order. Fix that. The patch
also gets rid of an insignificant memory leak in such cases.
This inconsistency is of long standing, but on the whole it seems best
not to back-patch the change into stable branches; nobody's complained
and it's such an obscure point that nobody's likely to complain.
(Note that v13 and v12 now contain test cases that will notice if we
accidentally back-patch this behavior change in future.)
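A minimal example of the now-consistent behavior (both results are NULL,
matching the float aggregates):
SELECT var_samp(x), stddev_samp(x)
FROM (VALUES ('nan'::numeric)) AS v(x);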
Report and patch by me; thanks to Dean Rasheed for review.
Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
When there is just one non-null input value, and it is infinity or NaN,
aggregates such as stddev_pop and covar_pop should produce a NaN
result, because the calculation is not well-defined. They used to do
so, but since we adopted Youngs-Cramer aggregation in commit e954a727f,
they produced zero instead. That's an oversight, so fix it. Add tests
exercising these edge cases.
Affected aggregates are
var_pop(double precision)
stddev_pop(double precision)
var_pop(real)
stddev_pop(real)
regr_sxx(double precision,double precision)
regr_syy(double precision,double precision)
regr_sxy(double precision,double precision)
regr_r2(double precision,double precision)
regr_slope(double precision,double precision)
regr_intercept(double precision,double precision)
covar_pop(double precision,double precision)
corr(double precision,double precision)
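For example (illustrative only), a single infinite input now yields NaN
rather than zero:
SELECT var_pop(x), stddev_pop(x)
FROM (VALUES ('infinity'::float8)) AS v(x);
-- both columns are NaN, since the calculation is not well-defined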
Back-patch to v12 where the behavior change was accidentally introduced.
Report and patch by me; thanks to Dean Rasheed for review.
Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
When --outputdir points to a custom output directory, pg_regress has never
created by default the sql/ and expected/ subdirectories (whose contents
are generated from input/ and output/, respectively) if they do not exist,
even though the base output directory does get created when missing. If
sql/ and expected/ were absent, pg_regress would fail on the missing
paths, requiring test scripts to create those extra directories
themselves. This commit changes pg_regress so that both get created by
default if they do not exist, removing the need for external test scripts
to do so.
This allows cleaning up two code paths in the tree, for the pg_upgrade
tests on MSVC and in environments able to use test.sh: sql/ and expected/
were created as part of each test script, but this is not needed anymore
as pg_regress handles the work now.
Author: Roman Zharkov, Daniel Gustafsson
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/16484-4d89e9cc11241996@postgresql.org
When merging two NumericAggStates, the code missed adding the new
state's NaNcount unless its N was also nonzero; since those counts
are independent, this is wrong.
This would only have visible effect if some partial aggregate scans
found only NaNs while earlier ones found only non-NaNs; then we could
end up falsely deciding that there were no NaNs and fail to return a
NaN final result as expected. That's pretty improbable, so it's no
surprise this hasn't been reported from the field. Still, it's a bug.
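The invariant the combine step must preserve is simply that any NaN input
makes the final result NaN; a trivial (non-parallel) illustration:
SELECT sum(x) FROM (VALUES (1::numeric), ('nan'::numeric)) AS v(x);  -- NaN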
I didn't try to produce a regression test that would show the bug,
but I did notice that these functions weren't being reached at all
in our regression tests, so I improved the tests to at least
exercise them. With these additions, I see pretty complete code
coverage on the aggregation-related functions in numeric.c.
Back-patch to 9.6 where this code was introduced. (I only added
the improved test case as far back as v10, though, since the
relevant part of aggregates.sql isn't there at all in 9.6.)
Eliminate enable_groupingsets_hash_disk, which was primarily useful
for testing grouping sets that use HashAgg and spill. Instead, hack
the table stats to convince the planner to choose hashed aggregation
for grouping sets that will spill to disk. Suggested by Melanie
Plageman.
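One way to "hack the table stats" is to overwrite reltuples/relpages
directly, so that the planner believes the table is large enough for a
hashed grouping-set plan to spill under a small work_mem. A sketch with a
hypothetical table name, not necessarily what the regression test does:
SET work_mem = '64kB';
-- pretend gs_spill_test is big, without actually populating it
UPDATE pg_class SET reltuples = 1000000, relpages = 10000
WHERE relname = 'gs_spill_test';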
Rename enable_hashagg_disk to hashagg_avoid_disk_plan, and invert the
meaning of on/off. The new name indicates more strongly that it only
affects the planner. Also, the word "avoid" is less definite, which
should avoid surprises when HashAgg still needs to use the
disk. Change suggested by Justin Pryzby, though I chose a different
GUC name.
Discussion: https://postgr.es/m/CAAKRu_aisiENMsPM2gC4oUY1hHG3yrCwY-fXUg22C6_MJUwQdA%40mail.gmail.com
Discussion: https://postgr.es/m/20200610021544.GA14879@telsasoft.com
Backpatch-through: 13
SSI's HeapCheckForSerializableConflictOut() test failed to correctly
handle conditions involving a concurrently inserted tuple which is later
concurrently updated by a separate transaction. A SELECT statement
that called HeapCheckForSerializableConflictOut() could end up using the
same XID (updater's XID) for both the original tuple, and the successor
tuple, missing the XID of the xact that created the original tuple
entirely. This only happened when neither tuple from the chain was
visible to the transaction's MVCC snapshot.
The observable symptoms of this bug were subtle. A pair of transactions
could commit, with the later transaction failing to observe the effects
of the earlier transaction (because of the confusion created by the
update to the non-visible row). This bug dates all the way back to
commit dafaa3ef, which added SSI.
To fix, make sure that we check the xmin of concurrently inserted tuples
that happen to also have been updated concurrently.
Author: Peter Geoghegan
Reported-By: Kyle Kingsbury
Reviewed-By: Thomas Munro
Discussion: https://postgr.es/m/db7b729d-0226-d162-a126-8a8ab2dc4443@jepsen.io
Backpatch: All supported versions
Commit 0c882e52a tried to force table atest12 to have more-accurate-
than-default statistics; but transiently setting default_statistics_target
isn't enough for that, because autovacuum could come along and overwrite
the stats later. This evidently explains some intermittent buildfarm
failures we've seen since then. Repair by disabling autovac on this table.
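Disabling autovacuum for a single table is done through a reloption, along
the lines of:
ALTER TABLE atest12 SET (autovacuum_enabled = off);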
Thanks to David Rowley for correctly diagnosing the cause.
Discussion: https://postgr.es/m/CA+hUKG+OUkQSOUTg=qo=S=fWa_tbm99i7rB7mfbHz1SYm4v-jQ@mail.gmail.com
Since 9.4, WAL senders are able to use a database connection, and such a
connection can also be used for physical replication. This commit fixes a
crash when starting physical replication with a WAL sender using a
database connection, caused by the refactoring done in 850196b.
There have been discussions about forbidding the use of physical
replication on a database connection, but this is left for later; this
commit only takes care of the crash, which is new to 13.
While on it, add a test to check for a failure when attempting logical
replication if the WAL sender does not have a database connection. This
part is extracted from a larger patch by Kyotaro Horiguchi.
Reported-by: Vladimir Sitnikov
Author: Michael Paquier, Kyotaro Horiguchi
Reviewed-by: Kyotaro Horiguchi, Álvaro Herrera
Discussion: https://postgr.es/m/CAB=Je-GOWMj1PTPkeUhjqQp-4W3=nW-pXe2Hjax6rJFffB5_Aw@mail.gmail.com
Backpatch-through: 13
ineq_histogram_selectivity() can be invoked in situations where the
ordering we care about is not that of the column's histogram. We could
be considering some other collation, or even more drastically, the
query operator might not agree at all with what was used to construct
the histogram. (We'll get here for anything using scalarineqsel-based
estimators, so that's quite likely to happen for extension operators.)
Up to now we just ignored this issue and assumed we were dealing with
an operator/collation whose sort order exactly matches the histogram,
possibly resulting in junk estimates if the binary search gets confused.
It's past time to improve that, since the use of nondefault collations
is increasing. What we can do is verify that the given operator and
collation match what's recorded in pg_statistic, and use the existing
code only if so. When they don't match, instead execute the operator
against each histogram entry, and take the fraction of successes as our
selectivity estimate. This gives an estimate that is probably good to
about 1/histogram_size, with no assumptions about ordering. (The quality
of the estimate is likely to degrade near the ends of the value range,
since the two orderings probably don't agree on what is an extremal value;
but this is surely going to be more reliable than what we did before.)
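For example, if 37 of 100 histogram entries satisfy the operator, the
estimate is 37/100 = 0.37, with a resolution of about 0.01 --- the
1/histogram_size granularity mentioned above.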
At some point we might further improve matters by storing more than one
histogram calculated according to different orderings. But this code
would still be good fallback logic when no matches exist, so that is
not an argument for not doing this.
While here, also improve get_variable_range() to deal more honestly
with non-default collations.
This isn't back-patchable, because it requires adding another argument
to ineq_histogram_selectivity, and because it might have significant
impact on the estimation results for extension operators relying on
scalarineqsel --- mostly for the better, one hopes, but in any case
destabilizing plan choices in back branches is best avoided.
Per investigation of a report from James Lucas.
Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
DES has been deprecated in OpenSSL 3.0.0, which makes loading keys
encrypted with DES fail with "fetch failed". Fix by changing the cipher
used to aes256, which has been supported since 1.0.1 (and is more
realistic to use anyway).
Note that the minimum supported OpenSSL version is 1.0.1 as of
7b283d0e1d, so this does not introduce
any new version requirements.
Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se
If the flag value is lost, logical decoding would work the same way as
REPLICA IDENTITY NOTHING, meaning that old tuple values would no longer be
included in the changes produced by logical decoding.
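For reference, the per-table setting can be inspected via
pg_class.relreplident ('d' default, 'n' nothing, 'f' full, 'i' index), for
instance (hypothetical table name):
SELECT relreplident FROM pg_class WHERE relname = 'mytab';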
Author: Michael Paquier
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/20200603065340.GK89559@paquier.xyz
Backpatch-through: 12
It's intentional that we don't allow values greater than 24 hours,
while we do allow "24:00:00" as well as "23:59:60" as inputs.
However, the range check was miscoded in such a way that it would
accept "23:59:60.nnn" with a nonzero fraction. For time or timetz,
the stored result would then be greater than "24:00:00" which would
fail dump/reload, not to mention possibly confusing other operations.
Fix by explicitly calculating the result and making sure it does not
exceed 24 hours. (This calculation is redundant with what will happen
later in tm2time or tm2timetz. Maybe someday somebody will find that
annoying enough to justify refactoring to avoid the duplication; but
that seems too invasive for a back-patched bug fix, and the cost is
probably unmeasurable anyway.)
Note that this change also rejects such input as the time portion
of a timestamp(tz) value.
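Illustrative examples of the intended behavior (a sketch, not the exact
regression tests):
SELECT time '24:00:00';      -- allowed
SELECT time '23:59:60';      -- allowed, reads back as 24:00:00
SELECT time '23:59:60.5';    -- now rejected as out of range
SELECT timetz '23:59:60.5';  -- likewise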
Back-patch to v10. The bug is far older, but to change this pre-v10
we'd need to ensure that the logic behaves sanely with float timestamps,
which is possibly nontrivial due to roundoff considerations.
Doesn't really seem worth troubling with.
Per report from Christoph Berg.
Discussion: https://postgr.es/m/20200520125807.GB296739@msg.df7cb.de
The preferred terminology has been support "function", not procedure,
for some time, so change that over. The command stays \dAp, since
\dAf is already something else.
The recipe was previously given in comments in the module's test
script, but now we have an explicit recipe in the Makefile. The now
redundant comments in the script are removed.
This recipe shouldn't be needed in normal use, as the certificate and
key are in git and don't need to be regenerated.
Discussion: https://postgr.es/m/ae8f21fc-95cb-c98a-f241-1936133f466f@2ndQuadrant.com
A relation that has no storage initializes rd_tableam to NULL, which
caused those two functions to crash because of a pointer dereference.
Note that in 11 and older versions, this has always failed with a
confusing error "could not open file".
These two functions are used by the Postgres ODBC driver, which requires
them only when connecting to a backend strictly older than 8.1. When
connected to 8.2 or a newer version, the driver instead uses a RETURNING
clause, whose support was added in 8.2, so it should be possible to just
remove both functions in the future. This is left as an issue to address
later.
While on it, add more regression tests for those functions as we never
really had coverage for them, and for aggregates of TIDs.
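A hedged example of the sort of call involved, using a relation without
storage (a view); after this fix it is expected to fail with a regular
error rather than crash:
CREATE VIEW storage_less AS SELECT 1 AS a;
SELECT currtid2('storage_less'::text, '(0,1)'::tid);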
Reported-by: Jaime Casanova, via sqlsmith
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAJGNTeO93u-5APMga6WH41eTZ3Uee9f3s8dCpA-GSSqNs1b=Ug@mail.gmail.com
Backpatch-through: 12
This commit changes the ORDER BY clause of the \dAo and \dAp psql commands
as follows.
* Operators for the same types are grouped together.
* Same-class operators and procedures are listed before cross-class operators
and procedures.
Modifying the ORDER BY clause for \dAp required removing the DISTINCT
clause, which doesn't seem to affect anything.
Discussion: https://postgr.es/m/20200511210856.GA18368%40alvherre.pgsql
Author: Alvaro Herrera revised by me
Reviewed-by: Alexander Korotkov, Nikita Glukhov
d140f2f3 has renamed receivedUpto to flushedUpto, and has added
writtenUpto to the WAL receiver's shared memory information, but
pg_stat_wal_receiver was not consistent with that. This commit renames
received_lsn to flushed_lsn, and adds a new column called written_lsn.
Bump catalog version.
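With this change the view exposes both positions, e.g.:
SELECT written_lsn, flushed_lsn FROM pg_stat_wal_receiver;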
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20200515090817.GA212736@paquier.xyz
In a logical replication subscriber, a table using REPLICA IDENTITY FULL
that also has a primary key would try to use the primary key's index to
scan for a tuple, but an assertion only treated as valid the case of an
index associated with REPLICA IDENTITY USING INDEX. This commit corrects
the assertion so that the use of a primary key index is accepted as a
valid case.
Reported-by: Dilip Kumar
Analyzed-by: Dilip Kumar
Author: Euler Taveira
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/CAFiTN-u64S5bUiPL1q5kwpHNd0hRnf1OE-bzxNiOs5zo84i51w@mail.gmail.com
Backpatch-through: 10
It's just weird that this name wasn't chosen to look like an
identifier. The suspicion that it wasn't thought about too
hard is reinforced by the fact that it wasn't documented in
the pg_locks view (until I did so, a day or two back).
Update, and add a comment reminding future adjusters of this
array to fix the docs too.
Do some desultory wordsmithing on various entries in the wait
events tables.
Discussion: https://postgr.es/m/24595.1589326879@sss.pgh.pa.us
In commit 850196b610 I (Álvaro) failed to handle the case of walsender
shutting down on an error before setting up its 'xlogreader' pointer;
the error handling code dereferences the pointer, causing a crash.
Fix by testing the pointer before trying to dereference it.
Kyotaro authored the code fix; I adopted Nathan's test case to be used
by the TAP tests and added the necessary PostgresNode change.
Reported-by: Nathan Bossart <bossartn@amazon.com>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/C04FC24E-903D-4423-B312-6910E4D846E5@amazon.com
Includes some manual cleanup of places that pgindent messed up,
most of which weren't per project style anyway.
Notably, it seems some people didn't absorb the style rules of
commit c9d297751, because there were a bunch of new occurrences
of function calls with a newline just after the left paren, all
with faulty expectations about how the rest of the call would get
indented.
pg_recvlogical merely called PQfinish(), so the backend sent messages
after the disconnect. When that caused EPIPE in internal_flush(),
before a LogicalConfirmReceivedLocation(), the next pg_recvlogical would
repeat already-acknowledged records. Whether or not the defect causes
EPIPE, post-disconnect messages could contain an ErrorResponse that the
user should see. One properly ends PGRES_COPY_OUT by repeating
PQgetCopyData() until it returns a negative value. Augment one of the
tests to cover the case of WAL past --endpos. Back-patch to v10, where
commit 7c030783a5 first appeared. Before
that commit, pg_recvlogical never reached PGRES_COPY_OUT.
Reported by Thomas Munro.
Discussion: https://postgr.es/m/CAEepm=1MzM2Z_xNe4foGwZ1a+MO_2S9oYDq3M5D11=JDU_+0Nw@mail.gmail.com
The explain format used by incremental sort was somewhat inconsistent
with other nodes, making it harder to parse and understand. This commit
addresses that by
- adding an extra space to better separate groups of values
- using colons instead of equal signs to separate keys from values
- properly capitalizing the first letter of each key
- using separate lines for full and pre-sorted groups
These changes were proposed by Justin Pryzby and mostly copy the final
explain format used to report WAL usage.
Author: Justin Pryzby
Reviewed-by: James Coleman
Discussion: https://postgr.es/m/20200419023625.GP26953@telsasoft.com
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta
tasks specified by RELEASE_CHANGES. For reference, the command was
./renumber_oids.pl --first-mapped-oid=8000 --target-oid=5032
Also run reformat_dat_file.pl while I'm here.
Renumbering recently-added types changed some results in the opr_sanity
test. To make those a bit easier to eyeball-verify, change the queries
to show regtype not just bare type OIDs. (I think we didn't have
regtype when these queries were written.)
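For instance, casting a bare type OID to regtype makes such output
self-explanatory:
SELECT 23::regtype;    -- integer
SELECT 1700::regtype;  -- numeric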
Somehow we'd never noticed this oversight, even though it means
that such basic columns as pg_proc.proargtypes were not being
validated by the oidjoins test. Correct the query and update
the test script with the newly-found dependencies.
We seem to have forgotten to do this in the v12 cycle, so add it as
a task in the RELEASE_CHANGES list, in hopes we won't forget again.
While here, fix findoidjoins.c so that it actually works in the
new dispensation where OID is a regular column, and change it to only
consider system relations (this avoids being fooled by the OID column
in the brintest test table).
Also tweak the largeobject test so that the somewhat-recently-added
manual creation of a LO with an OID in the system range doesn't
fool findoidjoins.c. For the moment I just made that use an unused
OID, but we might have to find a more robust solution someday.
checkcondition_str() failed to report multiple matches for a prefix
pattern correctly: it would dutifully merge the match positions, but
then after exiting that loop, if the last prefix-matching word had
had no suitable positions, it would report there were no matches.
The upshot would be failing to recognize a match that the query
should match.
It looks like you need all of these conditions to see the bug:
* a phrase search (else we don't ask for match position details)
* a prefix search item (else we don't get to this code)
* a weight restriction (else checkclass_str won't fail)
Noted while investigating a problem report from Pavel Borisov,
though this is distinct from the issue he was on about.
Back-patch to 9.6 where phrase search was added.
Add test coverage for the nbtutils.c routines concerned with IndexScans
that have native ScalarArrayOpExpr quals. The ScalarArrayOpExpr
specialized mark and restore routines, and the "find extreme element"
routine now have some test coverage.
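For context, a "native" ScalarArrayOpExpr qual is the kind of condition
shown below, which a btree index scan can evaluate directly (hypothetical
table and index):
-- assumes a btree index on t(col)
SELECT * FROM t WHERE col = ANY (ARRAY[1, 42, 1000]);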
These functions are probably infrequently exercised by real world
queries, so having some coverage seems like a good idea. The mark and
restore routines were originally added by a bugfix that came several
weeks after the first stable release of Postgres 9.2 (see commit
70bc583319).
The libpq parameters ssl{max|min}protocolversion are renamed to use
underscores, to become ssl_{max|min}_protocol_version. The related
environment variables still use the names introduced in commit ff8ca5f
that added the feature.
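With the new spelling, a connection string looks like, for example
(hypothetical host):
host=db.example.com sslmode=require ssl_min_protocol_version=TLSv1.2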
Per complaint from Peter Eisentraut (this was also mentioned by me in
the original patch review but the issue got discarded).
Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/b319e449-318d-e691-4997-1327e166fcc4@2ndquadrant.com
Checkpointer uses its MyLatch to wake up when a checkpoint request is
received. But before commit c655077639 the latch was not used for
anything else, so the code could just go to sleep after each loop
without rechecking the sleeping condition. That commit added a separate
ResetLatch in its code path[1], which can cause a checkpoint to go
unnoticed for potentially a long time.
Fix by skipping sleep if any checkpoint flags are set. Also add a test
to verify this; authored by Kyotaro Horiguchi.
[1] CreateCheckPoint -> InvalidateObsoleteReplicationSlots ->
ConditionVariableTimeSleep
Report and diagnosis by Kyotaro Horiguchi.
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200408.141956.891237856186513376.horikyota.ntt@gmail.com
The pg_statio_all_tables view appears to have a wrong grouping; as a
result, the numbers of toast index blocks read and hit were multiplied by
the number of table indexes. This commit fixes the view definition.
Backpatching this appears difficult: we don't have a mechanism to patch a
system catalog of existing instances in a minor upgrade. We could write a
release-note instruction to do this manually, but per discussion this is
probably not a critical enough bug to justify such an intrusive fix.
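For reference, the affected counters are the per-table toast index
columns, e.g. (hypothetical table name):
SELECT relname, tidx_blks_read, tidx_blks_hit
FROM pg_statio_all_tables WHERE relname = 'some_table';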
Reported-by: Andrei Zubkov
Discussion: https://postgr.es/m/CAPpHfdtMYkkNudLMG9G0dxX_B%3Dn5sfKzOyxxrvWYtSicaGW0Lw%40mail.gmail.com
This part of the test was included originally in 4e87c48, but got
reverted as of f9c1b8d because of timing issues in the test: after more
analysis, we found that the standbys may not have recovered from the
archives all the segments needed by the test. This stabilizes the test by
making sure that the standbys replay up to the position returned by
pg_switch_wal(), meaning that all segments are recovered before the
manual checkpoint that tests WAL segment recycling and its interactions
with archive status files.
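The waiting logic boils down to comparing each standby's replay position
against the LSN returned on the primary, roughly (a sketch, not the exact
TAP code):
-- on the primary
SELECT pg_switch_wal();
-- on each standby, poll until this returns true; '0/3000000' stands for
-- the value returned by pg_switch_wal() above
SELECT pg_last_wal_replay_lsn() >= '0/3000000'::pg_lsn;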
Author: Jehan-Guillaume de Rorthais
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20200424034248.GL33034@paquier.xyz