When I consolidated two copies of the HOT-chain search logic in commit
4da99ea423, I introduced a behavior
change: the old code wouldn't necessarily traverse the entire chain
if the most recently returned tuple was updated while the HOT chain
traversal was in progress. The new behavior seems more correct, but
unfortunately, the code here relies on a scan with SnapshotNow failing
to see its own updates. That seems pretty shaky even with the old HOT
chain traversal behavior, since there's no guarantee that these
updates will always be HOT, but it's trivial to provoke a failure with
the new HOT search logic. Fix by updating just the first matching
pg_constraint tuple, rather than all of them, since there should be
only one anyway. But since nobody has reproduced this failure on older
versions, no back-patch for now.
Report and test case by Alex Hunsaker; tablecmds.c changes by me.
Use of an index-only scan bollixes the test, because the test expects
the idx_tup_fetch counter to increase, which won't happen if heap
fetches were avoided. Per buildfarm results.
While at it, let's just make sure that enable_seqscan and enable_indexscan
are ON for this test ...
When a btree index contains all columns required by the query, and the
visibility map shows that all tuples on a target heap page are
visible-to-all, we don't need to fetch that heap page. This patch depends
on the previous patches that made the visibility map reliable.
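For illustration, a hedged sketch (the table and query are invented
here, not taken from this commit's tests): once the heap is
all-visible, a query needing only indexed columns can skip the heap
pages entirely.

    CREATE TABLE t (a int);
    INSERT INTO t SELECT generate_series(1, 100000);
    CREATE INDEX t_a_idx ON t (a);
    VACUUM t;  -- sets the visibility map bits
    EXPLAIN SELECT a FROM t WHERE a < 100;
    -- expected to show: Index Only Scan using t_a_idx on t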
There's a fair amount left to do here, notably trying to figure out a less
chintzy way of estimating the cost of an index-only scan, but the core
functionality seems ready to commit.
Robert Haas and Ibrar Ahmed, with some previous work by Heikki Linnakangas.
We'll now use "exists" for EXISTS(SELECT ...), "array" for ARRAY(SELECT
...), or the sub-select's own result column name for a simple expression
sub-select. Previously, you usually got "?column?" in such cases.
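A quick hedged illustration (these examples are mine, not the
commit's):

    SELECT EXISTS(SELECT 1);   -- result column is named "exists"
    SELECT ARRAY(SELECT 1);    -- result column is named "array"
    SELECT (SELECT 1 AS x);    -- takes the sub-select's column name, "x"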
Marti Raudsepp, reviewed by Kyotaro Horiguchi
We now report errors raised by the just-unblocked and the unblocking
transactions identically; this should fix relatively common buildfarm
failures in which the error shows up in the "wrong" session.
In commit c1d9579dd8, I changed things so
that the output of the Agg node that feeds the window functions would not
list any ungrouped Vars directly. Formerly, for example, the Agg tlist
might have included both "x" and "sum(x)", which is not really valid if
"x" isn't a grouping column. If we then had a window function ordering on
something like "sum(x) + 1", prepare_sort_from_pathkeys would find no exact
match for this in the Agg tlist, and would conclude that it must recompute
the expression. But it would break the expression down to just the Var
"x", which it would find in the tlist, and then rebuild the ORDER BY
expression using a reference to the subplan's "x" output. Now, after the
above-referenced changes, "x" isn't in the Agg tlist if it's not a grouping
column, so that prepare_sort_from_pathkeys fails with "could not find
pathkey item to sort", as reported by Bricklen Anderson.
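The failing query shape looks roughly like this (a hedged sketch with
an invented table; note that x is not a grouping column):

    SELECT ntile(2) OVER (ORDER BY sum(x) + 1)
    FROM t
    GROUP BY y;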
The fix is to not break down Aggrefs into their component parts, but just
treat them as irreducible expressions to be sought in the subplan tlist.
This is definitely OK for the window-function case in grouping_planner,
since it just built the tlist being used on the same basis. AFAICT it
is safe for other uses too; most of the other call sites
couldn't encounter Aggrefs anyway.
Per bug #6205, reported by Abel Abraham Camarillo Ojeda. This isn't a
particularly elegant fix, but I'm trying to minimize the chances of
causing yet another round of breakage.
Adjust regression tests to exercise this case.
Rewrite plancache.c so that a "cached plan" (which is rather a misnomer
at this point) can support generation of custom, parameter-value-dependent
plans, and can make an intelligent choice between using custom plans and
the traditional generic-plan approach. The specific choice algorithm
implemented here can probably be improved in future, but this commit is
all about getting the mechanism in place, not the policy.
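The user-visible effect can be sketched with a prepared statement
(invented table; whether a custom plan is chosen depends on the new
heuristics):

    PREPARE q(int) AS SELECT * FROM t WHERE a = $1;
    EXPLAIN EXECUTE q(42);
    -- may now show a plan specialized for a = 42, rather than
    -- always reusing a single generic plan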
In addition, restructure the API to greatly reduce the amount of extraneous
data copying needed. The main compromise needed to make that possible was
to split the initial creation of a CachedPlanSource into two steps. It's
worth noting in particular that SPI_saveplan is now deprecated in favor of
SPI_keepplan, which accomplishes the same end result with zero data
copying, and no need to then spend even more cycles throwing away the
original SPIPlan. The risk of long-term memory leaks while manipulating
SPIPlans has also been greatly reduced. Most of this improvement is based
on use of the recently-added MemoryContextSetParent primitive.
Trailing-zero stripping applied by the FM specifier could strip zeroes
to the left of the decimal point, for a format with no digit positions
after the decimal point (such as "FM999.").
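A hedged illustration of the fixed case (my example, not the commit's
own test):

    SELECT to_char(100, 'FM999.');
    -- now yields '100.'; previously the zeroes left of the decimal
    -- point could be stripped as though they were trailing zeroes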
Reported and diagnosed by Marti Raudsepp, though I didn't use his patch.
1. Use the new dropdb --if-exists option, to avoid alarming the user if
the database being dropped doesn't exist.
2. Bail out if createdb fails.
3. Exit with status 1 if the checks fail.
4. Make it executable.
Josh Kupershmidt, with some kibitzing by me.
Previously, 'yesterday 04:00:00'::timestamp didn't do the same thing as
'04:00:00 yesterday'::timestamp, and the return value from the latter
was midnight rather than the specified time.
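In other words (a hedged sketch):

    SELECT '04:00:00 yesterday'::timestamp;
    -- now equivalent to 'yesterday 04:00:00'::timestamp:
    -- yesterday at 04:00:00, not yesterday at midnight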
Dean Rasheed, with some stylistic changes
This was deemed unnecessary initially, but in later discussion it was
agreed otherwise.
Original file from Kevin Grittner, allegedly from Dan Ports.
I had to clean up whitespace a bit per changes from Heikki.
Rather than dumping out the raw array as PostgreSQL represents it
internally, we now print it out in a format similar to the one in
which the user input it, which seems a lot more user friendly.
Shigeru Hanada
Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER
ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger
fired for the same update. Fix by not trying to overload the use of
estate->es_trig_tuple_slot. Per report from Yoran Heling.
Back-patch to 9.0, when trigger WHEN conditions were introduced.
Now that we have a test that requires nondefault settings to pass, it seems
like we'd better mention that detail in the directions about how to run the
tests.
Also do some very minor copy-editing.
Add a regression test that exercises different orderings of begin,
prepare, and commit of three concurrent transactions that have
conflicts between them.
The test runs for quite a long time, and the expected output file is
huge, but this test caught some serious bugs during development, so it
seems worthwhile to keep. The test uses prepared transactions, so it
fails if the
server has max_prepared_transactions=0. Because of that, it's marked as
"ignore" in the schedule file.
Dan Ports
A PlaceHolderVar's expression might contain another, lower-level
PlaceHolderVar. If the outer PlaceHolderVar is used, the inner one
certainly will be also, and so we have to make sure that both of them get
into the placeholder_list with correct ph_may_need values during the
initial pre-scan of the query (before deconstruct_jointree starts).
We did this correctly for PlaceHolderVars appearing in the query quals,
but overlooked the issue for those appearing in the top-level targetlist;
with the result that nested placeholders referenced only in the targetlist
did not work correctly, as illustrated in bug #6154.
While at it, add some error checking to find_placeholder_info to ensure
that we don't try to create new placeholders after it's too late to do so;
they have to all be created before deconstruct_jointree starts.
Back-patch to 8.4 where the PlaceHolderVar mechanism was introduced.
The relevant backslash commands already exist, so we're just adding an
additional column. With this commit, all objects that have psql backslash
commands and accept comments should now display those comments at least
in verbose mode.
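For example (a hedged sketch; the object and command here are
illustrative, not from the commit):

    COMMENT ON CONVERSION myconv IS 'client-side conversion';
    -- in psql, \dc+ should now show this in its Description column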
Josh Kupershmidt, with doc additions by me.
I broke this in commit 5da79169d3, which
was obviously insufficiently well tested. Add some regression tests
in the hope of making future slip-ups more likely to be noticed.
Previously, xpath() simply returned an empty array if the expression did
not yield a node set. This is useless for expressions that return scalars,
such as one with name() at the top level. Arrange to return the scalar
value as a single-element xml array, instead. (String values will be
suitably escaped.)
This change will also cause xpath_exists() to return true, not false,
for such expressions.
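A hedged illustration (my example, not the commit's test case):

    SELECT xpath('name(/*)', '<root/>'::xml);
    -- now returns the single-element array {root} instead of {}
    SELECT xpath_exists('name(/*)', '<root/>'::xml);
    -- correspondingly now returns true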
Florian Pflug, reviewed by Radoslaw Smogura
Without this it's possible for the output to not be legal XML, as
illustrated by the added regression test cases.
NB: this change will need to be called out as an incompatibility in the
9.2 release notes, since it's possible somebody was relying on the old
behavior, even though it's clearly wrong.
Florian Pflug, reviewed by Radoslaw Smogura
This requires a new shared catalog, pg_shseclabel.
Along the way, fix the security_label regression tests so that they
don't monkey with the labels of any pre-existing objects. This is
unlikely to matter in practice, since only the label for the "dummy"
provider was being manipulated. But this way still seems cleaner.
KaiGai Kohei, with fairly extensive hacking by me.
libxml reports some errors (like invalid xmlns attributes) via the error
handler hook, but still returns a success indicator to the library caller.
This causes us to miss some errors that are important to report. Since the
"generic" error handler hook doesn't know whether the message it's getting
is for an error, warning, or notice, stop using that and instead start
using the "structured" error handler hook, which gets enough information
to be useful.
While at it, arrange to save and restore the error handler hook setting in
each libxml-using function, rather than assuming we can set and forget the
hook. This should improve the odds of working nicely with third-party
libraries that also use libxml.
In passing, volatile-ize some local variables that get modified within
PG_TRY blocks. I noticed this while testing with an older gcc version
than I'd previously tried to compile xml.c with.
Florian Pflug and Tom Lane, with extensive review/testing by Noah Misch
Noah Misch diagnosed the buildfarm problems in the isolation tests
partly as failure to differentiate backends properly; the old code was
using backend IDs, which is not good enough because a new backend might
reuse an ID that an earlier backend had. Use PIDs instead.
Also, the code was purposely careless about other concurrent activity,
because none is expected; and in fact it doesn't matter the vast
majority of the time. However, autovacuum can be observed to hold
locks on tables long enough to cause sporadic failures. The new code
accounts for that by ignoring locks held by processes not explicitly
declared in our spec file.
Author: Noah Misch
The previous value of 20ms is dangerously close to the time actually
spent just waiting for the deadlock to happen, so on occasion it causes
the test to fail simply because the other session didn't get to run
early enough, not managing to cause the deadlock that needs to be
detected. With this new value, it's expected that most machines under
normal load will be able to pass the test.
Author: Noah Misch
These new files allow the new FK tests in isolationtester to pass at the
serializable and repeatable read isolation levels (which are untested
by the buildfarm).
Author: Kevin Grittner
Reviewed by Noah Misch
This is more SQL-spec-compliant, more easily extensible, and better
performing than the old method of inventing special variables.
Pavel Stehule, reviewed by Shigeru Hanada and David Wheeler
When an AccessShareLock, RowShareLock, or RowExclusiveLock is requested
on an unshared database relation, and we can verify that no conflicting
locks can possibly be present, record the lock in a per-backend queue,
stored within the PGPROC, rather than in the primary lock table. This
eliminates a great deal of contention on the lock manager LWLocks.
This patch also refactors the interface between GetLockStatusData() and
pg_lock_status() to be a bit more abstract, so that we don't rely so
heavily on the lock manager's internal representation details. The new
fast path lock structures don't have a LOCK or PROCLOCK structure to
return, so we mustn't depend on that for purposes of listing outstanding
locks.
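The visible side of that can be sketched as follows (assuming, per
this work, that pg_locks reports whether a lock was taken via the
fast path):

    SELECT locktype, relation::regclass, mode, fastpath
    FROM pg_locks;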
Review by Jeff Davis.
Regular aggregate functions in combination with, or within the arguments
of, window functions are OK per spec; they have the semantics that the
aggregate output rows are computed and then we run the window functions
over that row set. (Thus, this combination is not really useful unless
there's a GROUP BY so that more than one aggregate output row is possible.)
The case without GROUP BY could fail, as recently reported by Jeff Davis,
because sloppy construction of the Agg node's targetlist resulted in extra
references to possibly-ungrouped Vars appearing outside the aggregate
function calls themselves. See the added regression test case for an
example.
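A minimal sketch of the no-GROUP-BY shape (invented table t):

    SELECT sum(count(*)) OVER () FROM t;
    -- count(*) collapses all rows into one aggregate output row,
    -- then the window function runs over that single row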
Fixing this requires modifying the API of flatten_tlist and its underlying
function pull_var_clause. I chose to make pull_var_clause's API for
aggregates identical to what it was already doing for placeholders, since
the useful behaviors turn out to be the same (error, report node as-is, or
recurse into it). I also tightened the error checking in this area a bit:
if it was ever valid to see an uplevel Var, Aggref, or PlaceHolderVar here,
that was a long time ago, so complain instead of ignoring them.
Backpatch into 9.1. The failure exists in 8.4 and 9.0 as well, but seeing
that it only occurs in a basically-useless corner case, it doesn't seem
worth the risks of changing a function API in a minor release. There might
be third-party code using pull_var_clause.
This enables us to test that blocking commands (such as foreign key
checks that conflict with some other lock) act as intended. The set of
tests that this adds is pretty minimal, but can easily be extended by
adding new specs.
The intention is that this will serve as a basis for ensuring that
further tweaks of locking implementation preserve (or improve) existing
behavior.
Author: Noah Misch
If there's a dangerous structure T0 ---> T1 ---> T2, and T2 commits first,
we need to abort something. If T2 commits before both conflicts appear,
then it should be caught by OnConflict_CheckForSerializationFailure. If
both conflicts appear before T2 commits, it should be caught by
PreCommit_CheckForSerializationFailure. But that is actually run when
T2 *prepares*. Fix that in OnConflict_CheckForSerializationFailure, by
treating a prepared T2 as if it committed already.
This is mostly a problem for prepared transactions, which remain in the
prepared state for some time, but it also affects regular transactions,
since they too pass briefly through the prepared state in the SSI code
when they're committed.
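Sketched as a schedule (hedged; the session names are illustrative and
the reads/writes forming the rw-conflicts are elided):

    -- session T2, at SERIALIZABLE isolation:
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- ... T2's reads and writes ...
    PREPARE TRANSACTION 't2';
    -- from here the check treats T2 as committed; if the conflicts
    -- T0 ---> T1 ---> T2 complete now, one of T0/T1 must fail
    -- with a serialization error
    COMMIT PREPARED 't2';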
Kevin Grittner and Dan Ports