This provides a handy way to get, say, the last field of the string.
Use of a negative index in this way has precedent in the nearby
left() and right() functions.
The implementation scans the string twice when N < -1, but it seems
likely that N = -1 will be the vast majority of actual use cases,
so I'm not really excited about adding complexity to avoid that.
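For example, assuming the function in question is split_part() (the
excerpt doesn't name it, but the described behavior matches):
SELECT split_part('www.postgresql.org', '.', -1);  -- 'org', the last field
SELECT split_part('www.postgresql.org', '.', -2);  -- 'postgresql'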
Nikhil Benesch, reviewed by Jacob Champion; cosmetic tweakage by me
Discussion: https://postgr.es/m/cbb7f861-6162-3a51-9823-97bc3aa0b638@gmail.com
Summarily changing the STYPE of regression-test aggregates that
depend on array_append or array_cat is an issue for the buildfarm's
cross-version-upgrade tests, because those aggregates (as defined
in the back branches) now won't load into HEAD. Although this seems
like only a minimal risk for genuine user-defined aggregates, we
need to do something for the buildfarm. Hence, adjust the aggregate
definitions, in both HEAD and the back branches.
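A hypothetical sketch of the kind of adjustment involved (the aggregate
name is invented here): an aggregate whose transition function is
array_append must now declare the matching polymorphic STYPE:
CREATE AGGREGATE accum_values(anycompatible) (
    sfunc = array_append,
    stype = anycompatiblearray,
    initcond = '{}'
);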
Discussion: https://postgr.es/m/1401824.1604537031@sss.pgh.pa.us
Discussion: https://postgr.es/m/E1kaQ2c-0005lx-Eg@gemulon.postgresql.org
If an interactive psql session used \gset when querying a compromised
server, the attacker could execute arbitrary code as the operating
system account running psql. Using a prefix not found among specially
treated variables, e.g. every lowercase string, precluded the attack.
Fix by issuing a warning and setting no variable for the column in
question. Users wanting the old behavior can use a prefix and then a
meta-command like "\set HISTSIZE :prefix_HISTSIZE". Back-patch to 9.5
(all supported versions).
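Following the workaround described above (the value 500 is just an
illustration):
SELECT 500 AS "HISTSIZE" \gset prefix_
\set HISTSIZE :prefix_HISTSIZE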
Reviewed by Robert Haas. Reported by Nick Cleaton.
Security: CVE-2020-25696
Specifically, this blocks DECLARE ... WITH HOLD and firing of deferred
triggers within index expressions and materialized view queries. An
attacker having permission to create non-temp objects in at least one
schema could execute arbitrary SQL functions under the identity of the
bootstrap superuser. One can work around the vulnerability by disabling
autovacuum and not manually running ANALYZE, CLUSTER, REINDEX, CREATE
INDEX, VACUUM FULL, or REFRESH MATERIALIZED VIEW. (Don't restore from
pg_dump, since it runs some of those commands.) Plain VACUUM (without
FULL) is safe, and all commands are fine when a trusted user owns the
target object. Performance may degrade quickly under this workaround,
however. Back-patch to 9.5 (all supported versions).
Reviewed by Robert Haas. Reported by Etienne Stalmans.
Security: CVE-2020-25695
This back-patches commit 20d3fe900 into the v12 and v13 branches.
At the time I thought that commit was not fixing any observable
bug, but Bertrand Drouvot showed otherwise: adding a dropped column
to the previously-considered scenario crashes v12 and v13, unless the
dropped column happens to be an integer. That is, of course, because
the tupdesc we derive from the plan output tlist fails to describe
the dropped column accurately, so that we'll do the wrong thing with
a tuple in which that column isn't NULL.
There is no bug in pre-v12 branches because they already did use
the table's real tuple descriptor for any trigger-returned tuple.
It seems that this set of bugs can be blamed on the changes that
removed es_trig_tuple_slot, though I've not attempted to pin that
down precisely.
Although there's no code change needed in HEAD, update the test case
to include a dropped column there too.
Discussion: https://postgr.es/m/db5d97c8-f48a-51e2-7b08-b73d5434d425@amazon.com
Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
brin_form_tuple failed to consider that the values may be toasted,
inserting the toast pointer into the index. This may easily result in
index corruption, as the toast data may be deleted and cleaned up by
vacuum. The cleanup, however, does not care about indexes, leaving
invalid toast pointers behind, which triggers errors like this:
ERROR: missing chunk number 0 for toast value 16433 in pg_toast_16426
A less severe consequence is inconsistent failures due to the index row
being too large, depending on whether brin_form_tuple operated on the
plain or toasted version of the row. For example,
CREATE TABLE t (val TEXT);
INSERT INTO t VALUES ('... long value ...');
CREATE INDEX idx ON t USING brin (val);
would likely succeed, as the row would likely include a toast pointer.
Switching the order of INSERT and CREATE INDEX would likely fail:
ERROR: index row size 8712 exceeds maximum 8152 for index "idx"
because this happens before the row values are toasted.
The bug exists since PostgreSQL 9.5 where BRIN indexes were introduced.
So backpatch all the way back.
Author: Tomas Vondra
Reviewed-by: Alvaro Herrera
Backpatch-through: 9.5
Discussion: https://postgr.es/m/20201001184133.oq5uq75sb45pu3aw@development
Discussion: https://postgr.es/m/20201104010544.zexj52mlldagzowv%40development
Revert 59ab4ac32, as well as the followup fix 33862cb9c, in all
branches. We need to think a bit harder about what the behavior
of LOCK TABLE on views should be, and there's no time for that
before next week's releases. We'll take another crack at this
later.
Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
LOCK TABLE has complained about "infinite recursion" when applied
to a self-referential view, ever since we made it recurse into views
in v11. However, that breaks pg_dump's new assumption that it's
okay to lock every relation. There doesn't seem to be any good
reason to throw an error: if we just abandon the recursion, we've
still satisfied the requirement of locking every referenced relation.
Per bug #16703 from Andrew Bille (via Alexander Lakhin).
Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
float4_div and float8_div correctly produced zero for zero divided
by infinity, but threw an underflow error for nonzero finite values
divided by infinity. This seems wrong; at the very least it's
inconsistent with the behavior recently implemented for numeric
infinities. Remove the error and allow zero to be returned.
This patch also removes a useless isinf() test from the overflow
checks in these functions (non-Inf divided by Inf can't produce Inf).
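For example, after this change:
SELECT 1.5::float8 / 'infinity'::float8;  -- now 0, previously an underflow error
SELECT 0::float8 / 'infinity'::float8;    -- 0, as before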
Extracted from a larger patch; this seems significant outside the
context of geometric operators, so it deserves its own commit.
Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAGf+fX70rWFOk5cd00uMfa__0yP+vtQg5ck7c2Onb-Yczp0URA@mail.gmail.com
Convert array_append, array_prepend, array_cat, array_position,
array_positions, array_remove, array_replace, and width_bucket
to use anycompatiblearray. This is a simple extension of commit
5c292e6b9 to hit some other places where there's a pretty obvious
gain in usability from doing so.
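For example, a call that previously failed with "function
array_append(integer[], numeric) does not exist" now resolves through
anycompatiblearray:
SELECT array_append(ARRAY[1,2], 2.5);  -- integer[] and numeric resolve to numeric[]: {1,2,2.5}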
Ideally we'd also modify other functions taking multiple old-style
polymorphic arguments. But most of the remainder are tied into one
or more operator classes, making any such change a much larger can of
worms than I desire to open right now.
Discussion: https://postgr.es/m/77675130-89da-dab1-51dd-492c93dcf5d1@postgresfriends.org
This allows use of a "default" expression that doesn't slavishly
match the data column's type. Formerly you got something like
"function lag(numeric, integer, integer) does not exist", which
is not just unhelpful but actively misleading.
The SQL spec suggests that the default should be coerced to the data
column's type, but this implementation instead chooses the common
supertype, which seems at least as reasonable.
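For example, a numeric data column with an integer default previously
drew the error quoted above; now the default is coerced:
SELECT lag(x, 1, 0) OVER (ORDER BY x)
FROM (VALUES (1.5), (2.5)) AS v(x);  -- 0 becomes numeric, the common supertype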
(Note: I took the opportunity to run "make reformat-dat-files" on
pg_proc.dat, so this commit includes some cosmetic changes to
recently-added entries that aren't related to lead/lag.)
Vik Fearing
Discussion: https://postgr.es/m/77675130-89da-dab1-51dd-492c93dcf5d1@postgresfriends.org
The SQL spec calls out nonstandard syntax for certain function calls,
for example substring() with numeric position info is supposed to be
spelled "SUBSTRING(string FROM start FOR count)". We accept many
of these things, but up to now would not print them in the same format,
instead simplifying down to "substring"(string, start, count).
That's long annoyed me because it creates an interoperability
problem: we're gratuitously injecting Postgres-specific syntax into
what might otherwise be a perfectly spec-compliant view definition.
However, the real reason for addressing it right now is to support
a planned change in the semantics of EXTRACT() a/k/a date_part().
When we switch that to returning numeric, we'll have the parser
translate EXTRACT() to some new function name (might as well be
"extract" if you ask me) and then teach ruleutils.c to reverse-list
that per SQL spec. In this way existing calls to date_part() will
continue to have the old semantics.
To implement this, invent a new CoercionForm value COERCE_SQL_SYNTAX,
and make the parser insert that rather than COERCE_EXPLICIT_CALL when
the input has SQL-spec decoration. (But if the input has the form of
a plain function call, continue to mark it COERCE_EXPLICIT_CALL, even
if it's calling one of these functions.) Then ruleutils.c recognizes
COERCE_SQL_SYNTAX as a cue to emit SQL call syntax. It can know
which decoration to emit using hard-wired knowledge about the
functions that could be called this way. (While this solution isn't
extensible without manual additions, neither is the grammar, so this
doesn't seem unmaintainable.) Notice that this solution will
reverse-list a function call with SQL decoration only if it was
entered that way; so dump-and-reload will not by itself produce any
changes in the appearance of views.
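For example (the view name is arbitrary):
CREATE VIEW v AS SELECT substring('abcdef' FROM 2 FOR 3) AS s;
-- ruleutils.c now reverse-lists the expression as
--   SUBSTRING('abcdef' FROM 2 FOR 3)
-- rather than "substring"('abcdef', 2, 3)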
This requires adding a CoercionForm field to struct FuncCall.
(I couldn't resist the temptation to rearrange that struct's
field order a tad while I was at it.) FuncCall doesn't appear
in stored rules, so that change isn't a reason for a catversion
bump, but I did one anyway because the new enum value for
CoercionForm fields could confuse old backend code.
Possible future work:
* Perhaps CoercionForm should now be renamed to DisplayForm,
or something like that, to reflect its more general meaning.
This'd require touching a couple hundred places, so it's not
clear it's worth the code churn.
* The SQLValueFunction node type, which was invented partly for
the same goal of improving SQL-compatibility of view output,
could perhaps be replaced with regular function calls marked
with COERCE_SQL_SYNTAX. It's unclear if this would be a net
code savings, however.
Discussion: https://postgr.es/m/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
hash_array_extended() needs to pass PG_GET_COLLATION() to the hash
function of the element type. Otherwise, the hash function of a
collation-aware data type such as text will error out, since the
introduction of nondeterministic collation made hash functions require
a collation, too.
As a consequence, hash partitioning using an array over text in the
partition key did not work before this change.
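For example (table names are hypothetical):
CREATE TABLE t (a text[]) PARTITION BY HASH (a);
CREATE TABLE t_p0 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE t_p1 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 1);
INSERT INTO t VALUES (ARRAY['x']);  -- previously errored out when hashing the text elements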
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/32c1fdae-95c6-5dc6-058a-a90330a3b621%40enterprisedb.com
When considering Incremental Sort below a Gather Merge, we need to be
a bit more careful when matching pathkeys to EC members. It's not enough
to find a member whose Vars are all in the current relation's target;
volatile expressions in particular need to be contained in the target,
otherwise it's too early to use the pathkey.
Reported-by: Jaime Casanova
Author: James Coleman
Reviewed-by: Tomas Vondra
Backpatch-through: 13, where the incremental sort code was added
Discussion: https://postgr.es/m/CAJGNTeNaxpXgBVcRhJX%2B2vSbq%2BF2kJqGBcvompmpvXb7pq%2BoFA%40mail.gmail.com
The current implementation cannot handle this correctly, so just
forbid it for now.
GENERATED clauses must be attached to the column definition and cannot
be added later like DEFAULT, so if a child table has a generation
expression that the parent does not have, the child column will
necessarily be an attislocal column. So to implement ALTER TABLE ONLY /
DROP EXPRESSION, we'd need extra code to update attislocal of the
direct child tables, somewhat similar to how DROP COLUMN does it, so
that the resulting state can be properly dumped and restored.
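A sketch of the now-forbidden form (table and column names invented):
CREATE TABLE parent (
    a int,
    b int GENERATED ALWAYS AS (a * 2) STORED
) PARTITION BY LIST (a);
CREATE TABLE child PARTITION OF parent FOR VALUES IN (1);
ALTER TABLE ONLY parent ALTER COLUMN b DROP EXPRESSION;  -- now raises an error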
Discussion: https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
Remove old containment operators @ and ~ for built-in geometry data
types. These have been deprecated; use <@ and @> instead.
(Some contrib modules still contain the same deprecated operators.
That will be dealt with separately.)
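For example, the replacements look like:
SELECT point '(1,1)' <@ box '((0,0),(2,2))';  -- instead of the removed @
SELECT box '((0,0),(2,2))' @> point '(1,1)';  -- instead of the removed ~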
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://www.postgresql.org/message-id/flat/20201027032511.GF9241@telsasoft.com
Commit 3a9ae3d206 (back in 2006) deprecated TG_RELNAME
in favor of TG_TABLE_NAME, but the existing usage in test
cases has remained till today. Change to use TG_TABLE_NAME
instead (TG_RELNAME is still covered by a test case).
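For example, a trigger function should now be spelled like this (sketch):
CREATE FUNCTION log_change() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'change on table %', TG_TABLE_NAME;  -- not the deprecated TG_RELNAME
    RETURN NEW;
END;
$$;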
Record the current version of dependent collations in pg_depend when
creating or rebuilding an index. When accessing the index later, warn
that the index may be corrupted if the current version doesn't match.
Thanks to Douglas Doole, Peter Eisentraut, Christoph Berg, Laurenz Albe,
Michael Paquier, Robert Haas, Tom Lane and others for very helpful
discussion.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com> (earlier versions)
Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
This model couldn't be extended to cover the default collation, and
didn't have any information about the affected database objects when the
version changed. Remove, in preparation for a follow-up commit that
will add a new mechanism.
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
This was broken recently by a929e17e5. I'd failed to remember that
parallel tests should have their EXPLAIN output run through the
explain_parallel_append function so that the output is stable when
parallel workers fail to start.
fairywren was first to notice.
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/20201102062951.GB15770@paquier.xyz
Previously we only tagged on the required information to allow the
executor to perform run-time partition pruning for Append/MergeAppend
nodes belonging to base relations. It was thought that nested
Append/MergeAppend nodes were just about always pulled up into the
top-level Append/MergeAppend and that making the run-time pruning info for
any sub Append/MergeAppend nodes was a waste of time. However, that was
likely badly thought through.
Some examples of cases where we're unable to pull up nested
Append/MergeAppends are: 1) a mix of parallel and non-parallel paths
feeding into a Parallel Append; 2) when planning an ordered Append scan,
an unordered sub-partition may require a nested MergeAppend path so that
its tuples are fed into the top-level Append in the correct order.
Unfortunately, it was not just as simple as removing the lines in
createplan.c which were purposefully not building the run-time pruning
info for anything but RELOPT_BASEREL relations. The code in
add_paths_to_append_rel() was far too sloppy about which partitioned_rels
it included for the Append/MergeAppend paths. The original code there
would always assume accumulate_append_subpath() would pull each sub-Append
and sub-MergeAppend path into the top-level path. While it does not
appear that there were any actual bugs caused by having the additional
partitioned table RT indexes recorded, it did mean that later in
planning, when we built the run-time pruning info, we wasted effort
building PartitionedRelPruneInfos for partitioned tables that had no
subpaths for the executor to run-time prune.
Here we tighten that up so that partitioned_rels only ever contains the RT
index for partitioned tables which actually have subpaths in the given
Append/MergeAppend. We can now Assert that every PartitionedRelPruneInfo
has a non-empty present_parts. That should allow us to catch any weird
corner cases that have been missed.
In passing, it seems there is no longer a good reason for the
AppendPath's and MergeAppendPath's partitioned_rels fields to be a
List of IntLists.
We can simply have a List of Relids instead. This is more compact in
memory and faster to add new members to. We still know which is the root
level partition as these always have a lower relid than their children.
Previously this field was used for more things, but run-time partition
pruning now remains the only user of it and it has no need for a List of
IntLists.
Here we also get rid of the RelOptInfo partitioned_child_rels field. This
is what was previously used to (sometimes incorrectly) set the
Append/MergeAppend path's partitioned_rels field. That was the only usage
of that field, so we can happily just remove it.
I also couldn't resist changing some nearby code to make use of the newly
added for_each_from macro so we can skip the first element in the list
without checking if the current item was the first one on each
iteration.
A bug report from Andreas Kretschmer prompted all this work, however,
after some consideration, I'm not personally classing this as a bug fix.
So no backpatch. In Andreas' test case, it just wasn't that clear that
there was a nested Append since the top-level Append just had a single
sub-path which was pulled up a level, per 8edd0e794.
Author: David Rowley
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/flat/CAApHDvqSchs%2BubdybcfFaSPB%2B%2BEA7kqMaoqajtP0GtZvzOOR3g%40mail.gmail.com
Statistics associated with an index were lost after running REINDEX
CONCURRENTLY, while the non-concurrent case preserves them correctly.
The concurrent and non-concurrent operations need to be consistent for
the end-user, and missing statistics would force waiting for a new
ANALYZE to happen, which could take some time depending on the activity
of the existing autovacuum workers. This issue is fixed by copying any
existing entries in pg_statistic associated with the old index to the
new one. Note that the equivalent copy is already done for the index's
data in the stats collector.
Reported-by: Fabrízio de Royes Mello
Author: Michael Paquier, Fabrízio de Royes Mello
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAFcNs+qpFPmiHd1oTXvcPdvAHicJDA9qBUSujgAhUMJyUMb+SA@mail.gmail.com
Backpatch-through: 12
The timetz test cases I added in commit a9632830b were unintentionally
sensitive to whether or not DST is active in the PST8PDT time zone.
Thus, they'll start failing this coming weekend, as reported by
Bernhard M. Wiedemann in bug #16689. Fortunately, DST-awareness is
not significant to the purpose of these test cases, so we can just
force them all to PDT (DST hours) to preserve stability of the
results.
Back-patch to v10, as the prior patch was.
Discussion: https://postgr.es/m/16689-57701daa23b377bf@postgresql.org
It's unsafe to do this at parse time because addition of generated
columns to a table would not invalidate stored rules containing
UPDATEs on the table ... but there might now be dependent generated
columns that were not there when the rule was made. This also fixes
an oversight that rewriteTargetView failed to update extraUpdatedCols
when transforming an UPDATE on an updatable view. (Since the new
calculation is downstream of that, rewriteTargetView doesn't actually
need to do anything; but before, there was a demonstrable bug there.)
In v13 and HEAD, this leads to easily-visible bugs because (since
commit c6679e4fc) we won't recalculate generated columns that aren't
listed in extraUpdatedCols. In v12 this bitmap is mostly just used
for trigger-firing decisions, so you'd only notice a problem if a
trigger cared whether a generated column had been updated.
I'd complained about this back in May, but then forgot about it
until bug #16671 from Michael Paul Killian revived the issue.
Back-patch to v12 where this field was introduced. If existing
stored rules contain any extraUpdatedCols values, they'll be
ignored because the rewriter will overwrite them, so the bug will
be fixed even for existing rules. (But note that if someone were
to update to 13.1 or 12.5, store some rules with UPDATEs on tables
having generated columns, and then downgrade to a prior minor version,
they might observe issues similar to what this patch fixes. That
seems unlikely enough to not be worth going to a lot of effort to fix.)
Discussion: https://postgr.es/m/10206.1588964727@sss.pgh.pa.us
Discussion: https://postgr.es/m/16671-2fa55851859fb166@postgresql.org
get_foreign_key_join_selectivity() looks for join clauses that equate
the two sides of the FK constraint. However, if we have a query like
"WHERE fktab.a = pktab.a and fktab.a = 1", it won't find any such join
clause, because equivclass.c replaces the given clauses with "fktab.a
= 1 and pktab.a = 1", which can be enforced at the scan level, leaving
nothing to be done for column "a" at the join level.
We can fix that expectation without much trouble, but then a new problem
arises: applying the foreign-key-based selectivity rule produces a
rowcount underestimate, because we're effectively double-counting the
selectivity of the "fktab.a = 1" clause. So we have to cancel that
selectivity out of the estimate.
To fix, refactor process_implied_equality() so that it can pass back the
new RestrictInfo to its callers in equivclass.c, allowing the generated
"fktab.a = 1" clause to be saved in the EquivalenceClass's ec_derives
list. Then it's not much trouble to dig out the relevant RestrictInfo
when we need to adjust an FK selectivity estimate. (While at it, we
can also remove the expensive use of initialize_mergeclause_eclasses()
to set up the new RestrictInfo's left_ec and right_ec pointers.
The equivclass.c code can set those basically for free.)
This seems like clearly a bug fix, but I'm hesitant to back-patch it,
first because there's some API/ABI risk for extensions and second because
we're usually loath to destabilize plan choices in stable branches.
Per report from Sigrid Ehrenreich.
Discussion: https://postgr.es/m/1019549.1603770457@sss.pgh.pa.us
Discussion: https://postgr.es/m/AM6PR02MB5287A0ADD936C1FA80973E72AB190@AM6PR02MB5287.eurprd02.prod.outlook.com
This makes use of CheckBuffer() introduced in c780a7a, adding a SQL
wrapper able to do checks for all the pages of a relation. By default,
all the fork types of a relation are checked, and it is possible to
check only a given relation fork. Note that if the relation given in
input has no physical storage or is temporary, then no errors are
generated, allowing full-database checks when coupled with a simple
scan of pg_class, for example. This is not limited to clusters with
data checksums enabled: clusters without data checksums can still apply
checks on pages using the page headers or by detecting pages full of
zeros.
This function returns a set of tuples consisting of:
- The physical file where a broken page has been detected (without the
segment number as that can be AM-dependent, which can be guessed from
the block number for heap). A relative path from PGDATA is used.
- The block number of the broken page.
By default, only superusers have access to this function, but
execution rights can be granted to other users.
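A usage sketch, assuming the SQL wrapper is exposed as
pg_relation_check_pages(regclass, fork text), taking the relation and
an optional fork name (the excerpt doesn't spell out the name):
SELECT * FROM pg_relation_check_pages('my_table');          -- all forks
SELECT * FROM pg_relation_check_pages('my_table', 'main');  -- main fork only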
The feature introduced here is still minimal, and more improvements
could be done, like:
- Addition of a start and end block number to run checks on a range
of blocks, which would apply only if one fork type is checked.
- Addition of some progress reporting.
- Throttling, with configuration parameters in function input or
potentially some cost-based GUCs.
Regression tests are added for positive cases in the main regression
test suite, and TAP tests are added for cases involving the emulation of
page corruptions.
Bump catalog version.
Author: Julien Rouhaud, Michael Paquier
Reviewed-by: Masahiko Sawada, Justin Pryzby
Discussion: https://postgr.es/m/CAOBaU_aVvMjQn=ge5qPiJOPMmOj5=ii3st5Q0Y+WuLML5sR17w@mail.gmail.com
The restriction that only tables and views can be locked by LOCK TABLE
is quite arbitrary, since the underlying mechanism can lock any relation
type. Drop the restriction so that programs such as pg_dump can lock
all relations they're interested in, preventing schema changes that
could cause a dump to fail after expending much effort.
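A sketch of what the relaxed rule permits (relation name invented):
LOCK TABLE my_sequence IN ACCESS SHARE MODE;  -- previously rejected, as a sequence is not a table or view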
Backpatch to 9.5.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Wells Oliver <wells.oliver@gmail.com>
Discussion: https://postgr.es/m/20201021200659.GA32358@alvherre.pgsql
If the old row has any "missing" attributes that are supposed to
be retrieved from an associated tuple descriptor, the wrong things
happened because the trigger result is shoved directly into an
executor slot that lacks the missing-attribute data. Notably,
CHECK-constraint verification would incorrectly see those columns
as NULL, and so would RETURNING-list evaluation.
Band-aid around this by forcibly expanding the tuple before passing
it to the trigger function. (IMO it was a fundamental misdesign to
put the missing-attribute data into tuple constraints, which so
much of the system considers to be optional. But we're probably
stuck with that now, and will have to continue to apply band-aids
as we find other places with similar issues.)
Back-patch to v12. v11 would also have the issue, except that
commit 920311ab1 already applied a similar band-aid. That forced
expansion in more cases than seem really necessary, though, so
this isn't a directly equivalent fix.
Amit Langote, with some cosmetic changes by me
Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
More precisely, correctly handle the ONLY flag indicating not to
recurse. This was implemented in 86f575948c by recursing in
trigger.c, but that's the wrong place; use ATSimpleRecursion instead,
which behaves properly. However, because legacy inheritance has never
recursed in that situation, make sure to do that only for new-style
partitioning.
I noticed this problem while testing a fix for another bug in the
vicinity.
This has been wrong all along, so backpatch to 11.
Discussion: https://postgr.es/m/20201016235925.GA29829@alvherre.pgsql
Since commit 913bbd88d, check_sql_fn_retval() can either insert type
coercion steps in-line in the Query that produces the SQL function's
results, or generate a new top-level Query to perform the coercions,
if modifying the Query's output in-place wouldn't be safe. However,
it appears that the latter case has never actually worked, because
the code tried to inject the new Query back into the query list it was
passed ... which is not the list that will be used for later processing
when we execute the SQL function "normally" (without inlining it).
So we ended up with no coercion happening at run-time, leading to
wrong results or crashes depending on the datatypes involved.
While the regression tests look like they cover this area well enough,
through a remarkable stroke of bad luck all the test cases that exercise the
separate-Query path were checking either inline-able cases (which
accidentally didn't have the bug) or cases that are no-ops at runtime
(e.g., varchar to text), so that the failure to perform the coercion
wasn't obvious. The fact that the cases that don't work weren't
allowed at all before v13 probably contributed to not noticing the
problem sooner, too.
To fix, get rid of the separate "flat" list of Query nodes and instead
pass the real two-level list that is going to be used later. I chose
to make the same change in check_sql_fn_statements(), although that has
no actual bug, just so that we don't need that data structure at all.
This is an API change, as evidenced by the adjustments needed to
callers outside functions.c. That's a bit scary to be doing in a
released branch, but so far as I can tell from a quick search,
there are no outside callers of these functions (and they are
sufficiently specific to our semantics for SQL-language functions that
it's not apparent why any extension would need to call them). In any
case, v13 already changed the API of check_sql_fn_retval() compared to
prior branches.
Per report from pinker. Back-patch to v13 where this code came in.
Discussion: https://postgr.es/m/1603050466566-0.post@n3.nabble.com
Maintaining 'es_result_relation_info' correctly at all times has become
cumbersome, especially with partitioning where each partition gets its
own result relation info. Having to set and reset it across arbitrary
operations has caused bugs in the past.
This changes all the places that used 'es_result_relation_info' to
instead receive the currently active ResultRelInfo via function
parameters.
Author: Amit Langote
Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com
Our infinite_recurse() test to verify sane stack-overrun behavior
is affected by a bug of the Linux kernel on PPC64: it will get SIGSEGV
if it receives a signal when the stack depth is (a) over 1MB and
(b) within a few kB of filling the current physical stack allocation.
See https://bugzilla.kernel.org/show_bug.cgi?id=205183.
Since this test is a bit time-consuming and we run it in parallel with
test scripts that do a lot of DDL, it can be expected to get an sinval
catchup interrupt at some point, leading to failure if the timing is
wrong. This has caused more than 100 buildfarm failures over the
past year or so.
While a fix exists for the kernel bug, it might be years before that
propagates into all production kernels, particularly in some of the
older distros we have in the buildfarm. For now, let's just back off
and not run this test on Linux PPC64; that loses nothing in test
coverage so far as our own code is concerned.
To do that, split this test into a new script infinite_recurse.sql
and skip the test when the platform name is powerpc64...-linux-gnu.
Back-patch to v12. Branches before that have not been seen to get
this failure. No doubt that's because the "errors" test was not
run in parallel with other tests before commit 798070ec0, greatly
reducing the odds of an sinval catchup being necessary.
I also back-patched 3c8553547 into v12, just so the new regression
script would look the same in all branches having it.
Discussion: https://postgr.es/m/3479046.1602607848@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190723162703.GM22387%40telsasoft.com
Adjust the existing cycle detection example and test queries to put
the cycle column before the path column. This is mainly because the
SQL-standard CYCLE clause puts them in that order, so if we add that
feature, the sequence of examples will be more consistent and easier
to follow.
Discussion: https://www.postgresql.org/message-id/c5603982-0088-7f14-0caa-fdcd0c837b57@2ndquadrant.com
Multiply before dividing, not the reverse, so that cases that should
produce exact results do produce exact results. (width_bucket_float8
got this right already.) Even when the result is inexact, this avoids
making it more inexact, since only the division step introduces any
imprecision.
While at it, fix compute_bucket() to not uselessly repeat the sign
check already done by its caller, and avoid duplicating the
multiply/divide steps by adjusting variable usage.
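Conceptually (a sketch of the arithmetic, not the exact code):
-- now:    bucket = floor((operand - low) * count / (high - low)) + 1
-- before: the division was done first, so results that should be exact
--         could pick up rounding error
SELECT width_bucket(4.0, 0.0, 10.0, 5);  -- 4.0 lies exactly on a bucket boundary; expect 3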
Per complaint from Martin Visser. Although this seems like a bug fix,
I'm hesitant to risk changing width_bucket()'s results in stable
branches, so no back-patch.
Discussion: https://postgr.es/m/6FA5117D-6AED-4656-8FEF-B74AC18FAD85@brytlyt.com
While the calculation is not well-defined if the bounds arguments are
infinite, there is a perfectly sane outcome if the test operand is
infinite: it's just like any other value that's before the first bucket
or after the last one. width_bucket_float8() got this right, but
I was too hasty about the case when adding infinities to numerics
(commit a57d312a7), so that width_bucket_numeric() just rejected it.
Fix that, and sync the relevant error message strings.
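For example, after this fix:
SELECT width_bucket('infinity'::numeric, 0, 10, 5);   -- 6, i.e. after the last bucket
SELECT width_bucket('-infinity'::numeric, 0, 10, 5);  -- 0, i.e. before the first bucket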
No back-patch needed, since infinities-in-numeric haven't shipped yet.
Discussion: https://postgr.es/m/2465409.1602170063@sss.pgh.pa.us
The date-vs-timestamp, date-vs-timestamptz, and timestamp-vs-timestamptz
comparators all worked by promoting the first type to the second and
then doing a simple same-type comparison. This works fine, except
when the conversion result is out of range, in which case we throw an
entirely avoidable error. The sources of such failures are
(a) type date can represent dates much farther in the future than
the timestamp types can;
(b) timezone rotation might cause a just-in-range timestamp value to
become a just-out-of-range timestamptz value.
Up to now we just ignored these corner-case issues, but now we have
an actual user complaint (bug #16657 from Huss EL-Sheikh), so let's
do something about it.
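For example, case (a) previously threw an avoidable error:
SELECT date '5874897-12-31' > timestamp '2020-01-01';
-- before: out-of-range error from the date-to-timestamp promotion
-- after:  true, with no intermediate conversion failure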
It turns out that commit 52ad1e659 already built all the necessary
infrastructure to support error-free comparisons, but neglected to
actually use it in the main-line code paths. Fix that, do a little
bit of code style review, and remove the now-duplicate logic in
jsonpath_exec.c.
Back-patch to v13 where 52ad1e659 came in. We could take this back
further by back-patching said infrastructure, but given the small
number of complaints so far, I don't feel a great need to.
Discussion: https://postgr.es/m/16657-cde2f876d8cc7971@postgresql.org
Unlike for functions, OUT parameters for procedures are part of the
signature. Therefore, they have to be listed in pg_proc.proargtypes
as well as mentioned in ALTER PROCEDURE and DROP PROCEDURE.
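For example (a sketch; names invented):
CREATE PROCEDURE ptest(OUT a int) LANGUAGE plpgsql AS
$$ BEGIN a := 1; END; $$;
CALL ptest(NULL);           -- OUT arguments still take a placeholder in CALL
DROP PROCEDURE ptest(OUT int);  -- the OUT parameter participates in the lookup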
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/2b8490fe-51af-e671-c504-47359dc453c5@2ndquadrant.com
I noticed while trying to run the regression tests under a low
geqo_threshold that one query on information_schema.columns had
unstable (as in, variable from one run to the next) output order.
This is pretty unsurprising given the complexity of the underlying
plan. Interestingly, of this test's three nigh-identical queries on
information_schema.columns, the other two already had ORDER BY clauses
guaranteeing stable output. Let's make this one look the same.
Back-patch to v10 where this test was added. We've not heard field
reports of the test failing, but this experience shows that it can
happen when testing under even slightly unusual conditions.
The handling of those options was inconsistent: the processing directly
used the value assigned to the option when checking whether it was
redundant, allowing a pattern like this one to succeed (note that false
is specified first):
COPY hoge to '/path/to/file/' (header off, header on);
And the opposite would fail, correctly (note that true is first here):
COPY hoge to '/path/to/file/' (header on, header off);
While at it, add some tests to check for all redundant patterns with
the options of COPY. I have gone through the code and did not notice
similar mistakes for other commands.
"header" got it wrong since b63990c, and "freeze" was wrong from the
start as of 8de72b6. No backpatch is done per the lack of complaints.
Reported-by: Rémi Lapeyre
Discussion: https://postgr.es/m/20200929072433.GA15570@paquier.xyz
Discussion: https://postgr.es/m/0B55BD07-83E4-439F-AACC-FA2D7CF50532@lenstra.fr
This view shows statistics about WAL activity. Currently it has only
two columns: wal_buffers_full and stats_reset. The wal_buffers_full
column indicates the number of times WAL data was written to disk
because the WAL buffers were full, which is useful information when
tuning wal_buffers. The stats_reset column indicates the time at which
these statistics were last reset.
The pg_stat_wal view is also the basic infrastructure for exposing
other statistics about WAL activity later.
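For example:
SELECT wal_buffers_full, stats_reset FROM pg_stat_wal;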
Bump PGSTAT_FILE_FORMAT_ID due to the change in pgstat format.
Bump catalog version.
Author: Masahiro Ikeda
Reviewed-by: Takayuki Tsunakawa, Kyotaro Horiguchi, Amit Kapila, Fujii Masao
Discussion: https://postgr.es/m/188bd3f2d2233cf97753b5ced02bb050@oss.nttdata.com
Previously, a conversion such as
to_date('-44-02-01','YYYY-MM-DD')
would result in '0045-02-01 BC', as the code attempted to interpret
the negative year as BC, but failed to apply the correction needed
for our internal handling of BC years. Fix the off-by-one problem.
Also, arrange for the combination of a negative year and an
explicit "BC" marker to cancel out and produce AD. This is how
the negative-century case works, so it seems sane to do likewise.
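For example, after this commit:
SELECT to_date('-44-02-01', 'YYYY-MM-DD');        -- 0044-02-01 BC (previously 0045-02-01 BC)
SELECT to_date('-44-02-01 BC', 'YYYY-MM-DD BC');  -- minus sign and BC cancel out: 0044-02-01 AD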
Continue to read "year 0000" as 1 BC. Oracle would throw an error,
but we've accepted that case for a long time so I'm hesitant to
change it in a back-patch.
Per bug #16419 from Saeed Hubaishan. Back-patch to all supported
branches.
Dar Alathar-Yemen and Tom Lane
Discussion: https://postgr.es/m/16419-d8d9db0a7553f01b@postgresql.org
Previously we threw an error. But make_date already allowed the case,
so it is inconsistent as well as unhelpful for make_timestamp not to.
Both functions continue to reject year zero.
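For example:
SELECT make_timestamp(-44, 3, 15, 12, 30, 0);  -- now yields 0044-03-15 12:30:00 BC instead of an error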
Code and test fixes by Peter Eisentraut, doc changes by me
Discussion: https://postgr.es/m/13c0992c-f15a-a0ca-d839-91d3efd965d9@2ndquadrant.com
The SQL standard doesn't require the jsonpath .datetime() method to
support the ISO 8601 format. But our to_json[b]() functions convert
timestamps to text in the ISO 8601 format for the sake of compatibility
with JavaScript. So, we add support for ISO 8601 to the jsonpath
.datetime() method, likewise for the sake of compatibility with
to_json[b]().
The standard mode of datetime parsing currently supports just template
patterns and separators in the format string. In order to implement
ISO 8601, we have to add support for double quotes in the format string
to the standard parsing mode.
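For example, an ISO 8601 timestamp now parses:
SELECT jsonb_path_query('"2017-03-10T12:34:00+03:00"', '$.datetime()');
-- returns the value as a timestamp with time zone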
Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru
Author: Nikita Glukhov, revised by me
Backpatch-through: 13
bffe1bd684 introduced the jsonpath .datetime() method, but the default
formats for time and timestamp contained an excess space between the
time and the time zone. This commit removes that excess space, making
the behavior of the .datetime() method standard-compliant.
Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru
Author: Nikita Glukhov
Backpatch-through: 13
Adding \d+ to the test in commit 2dfa3fea8 was ill-advised,
because the partitions' names are such that their sort order
is locale dependent. We could rename them to avoid that,
but it doesn't seem worth the trouble; just take \d+ out again.
Per buildfarm.