Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own. The children are called
partitions and contain all of the actual data. Each partition has an
implicit partitioning constraint. Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed. Partitions
can't have extra columns and may not allow nulls unless the parent
does. Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.
Currently, tables can be range-partitioned or list-partitioned. List
partitioning is limited to a single column, but range partitioning can
involve multiple columns. A partitioning "column" can be an
expression.
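For illustration, the new syntax looks roughly like this (table and
partition names are hypothetical):

    CREATE TABLE measurement (
        city_id int NOT NULL,
        logdate date NOT NULL
    ) PARTITION BY RANGE (logdate);

    CREATE TABLE measurement_y2016 PARTITION OF measurement
        FOR VALUES FROM ('2016-01-01') TO ('2017-01-01');

    INSERT INTO measurement VALUES (1, '2016-06-01');  -- routed to measurement_y2016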
Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations. The tuple routing that this patch performs based on
the implicit partitioning constraints is an example of this, but it
seems likely that many other useful optimizations are also possible.
Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others. Minor revisions by me.
We have had support for restrictive RLS policies since 9.5, but they
were only available through extensions which use the appropriate hooks.
This adds support into the grammar, catalog, psql and pg_dump for
restrictive RLS policies, thus reducing the cases where an extension is
necessary.
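For example, the added syntax looks like this (policy, table, and column
names are hypothetical):

    CREATE POLICY p_owner_only ON accounts
        AS RESTRICTIVE
        USING (current_user = account_owner);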
In passing, also move away from using "AND"d and "OR"d in comments.
As pointed out by Alvaro, it's not really appropriate to attempt
to make verbs out of "AND" and "OR", so reword those comments which
attempted to.
Reviewed By: Jeevan Chalke, Dean Rasheed
Discussion: https://postgr.es/m/20160901063404.GY4028@tamriel.snowman.net
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT,
we refuse the drop if there are any pending trigger events on the named
table; this ensures that we won't remove the pg_trigger row that will be
consulted by those events. But we should make the same check for the
referenced relation, else we might remove a due-to-be-referenced pg_trigger
row for that relation too, resulting in "could not find trigger NNN" or
"relation NNN has no triggers" errors at commit. Per bug #14431 from
Benjie Gillam. Back-patch to all supported branches.
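One way the failure could previously be reached, using a deferrable
constraint and default constraint naming (a sketch, not the exact reported
case):

    CREATE TABLE pk (id int PRIMARY KEY);
    CREATE TABLE fk (id int REFERENCES pk DEFERRABLE INITIALLY DEFERRED);
    INSERT INTO pk VALUES (1);
    BEGIN;
    DELETE FROM pk WHERE id = 1;         -- queues a deferred event whose trigger is on pk
    ALTER TABLE fk DROP CONSTRAINT fk_id_fkey;  -- fk itself has no pending events
    COMMIT;                              -- previously: "could not find trigger NNN"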
Report: <20161124114911.6530.31200@wrigleys.postgresql.org>
Previously, the right-hand side of a multiple-column assignment, if it
wasn't a sub-SELECT, had to be a simple parenthesized expression list,
because gram.y was responsible for "bursting" the construct into
independent column assignments. This had the minor defect that you
couldn't write ROW (though you should be able to, since the standard says
this is a row constructor), and the rather larger defect that unlike other
uses of row constructors, we would not expand a "foo.*" item into multiple
columns.
Fix that by changing the RHS to be just "a_expr" in the grammar, leaving
it to transformMultiAssignRef to separate the elements of a RowExpr;
which it will do only after performing standard transformation of the
RowExpr, so that "foo.*" behaves as expected.
The key reason we didn't do that before was the hard-wired handling of
DEFAULT tokens (SetToDefault nodes). This patch deals with that issue by
allowing DEFAULT in any a_expr and having parse analysis throw an error
if SetToDefault is found in an unexpected place. That's an improvement
anyway since the error can be more specific than just "syntax error".
The SQL standard suggests that the RHS could be any a_expr yielding a
suitable row value. This patch doesn't really move the goal posts in that
respect --- you're still limited to RowExpr or a sub-SELECT --- but it does
fix the grammar restriction, so it provides some tangible progress towards
a full implementation. And the limitation is now documented by an explicit
error message rather than an unhelpful "syntax error".
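For example, both of these forms are now accepted (table and column names
hypothetical; the sub-SELECT form was already legal, the ROW spelling was
not):

    UPDATE accounts SET (first_name, last_name) = ROW ('Jane', 'Doe');
    UPDATE accounts SET (first_name, last_name) =
        (SELECT fname, lname FROM people WHERE people.id = accounts.person_id);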
Discussion: <8542.1479742008@sss.pgh.pa.us>
Because we use transformTargetList() for UPDATE as well as SELECT
tlists, the code accidentally tried to expand a "*" reference into
several columns. This is nonsensical, because the UPDATE syntax
provides exactly one target column to put the value into. The
immediate result was that transformUpdateTargetList() got confused
and reported "UPDATE target count mismatch --- internal error".
It seems better to treat such a reference as a plain whole-row
variable, as it would be in other contexts. (This could produce
useful results when the target column is of composite type.)
Fix by tweaking transformTargetList() to perform *-expansion only
conditionally, depending on its exprKind parameter.
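A sketch of the now-accepted usage, assuming a hypothetical composite-typed
target column:

    CREATE TABLE detail (id int, payload text);
    CREATE TABLE summary (id int, snapshot detail);
    UPDATE summary SET snapshot = d.* FROM detail d WHERE d.id = summary.id;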
Back-patch to 9.3. The problem exists further back, but a fix would be
much more invasive before that, because transformTargetList() wasn't
told what kind of list it was working on. Doesn't seem worth the
trouble given the lack of field reports. (I only noticed it because
I was checking the code while trying to improve the documentation about
how we handle "foo.*".)
Discussion: <4308.1479595330@sss.pgh.pa.us>
Like pg_tables, pg_views, and others, this view contains information
about sequences in a way that is independent of the system catalog
layout but more comprehensive than the information schema.
To help implement the view, add a new internal function
pg_sequence_last_value() to return the last value of a sequence. This
is kept separate from pg_sequence_parameters() to separate querying
run-time state from catalog-like information.
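For example (sequence name hypothetical):

    SELECT * FROM pg_sequences WHERE sequencename = 'accounts_id_seq';
    SELECT pg_sequence_last_value('accounts_id_seq'::regclass);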
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
We just pass the data to the INSTEAD trigger.
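A minimal sketch of what now works (names hypothetical):

    CREATE TABLE base (x int);
    CREATE VIEW v AS SELECT x FROM base;
    CREATE FUNCTION v_ins() RETURNS trigger LANGUAGE plpgsql AS
        $$ BEGIN INSERT INTO base VALUES (NEW.x); RETURN NEW; END $$;
    CREATE TRIGGER v_ins_trg INSTEAD OF INSERT ON v
        FOR EACH ROW EXECUTE PROCEDURE v_ins();
    COPY v FROM STDIN;   -- each input row is handed to the INSTEAD OF trigger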
Haribabu Kommi, reviewed by Dilip Kumar
Patch: <CAJrrPGcSQkrNkO+4PhLm4B8UQQQmU9YVUuqmtgM=pmzMfxWaWQ@mail.gmail.com>
We really ought to make StdRdOptions and the other decoded forms of
reloptions self-identifying, but for the moment, assume that only plain
relations could possibly be user_catalog_tables. Fixes problem with bogus
"ON CONFLICT is not supported on table ... used as a catalog table" error
when target is a view with cascade option.
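The kind of statement that drew the bogus error (names hypothetical; whether
the insert then succeeds is governed by the usual updatable-view rules):

    CREATE TABLE t (id int PRIMARY KEY, val text);
    CREATE VIEW v AS SELECT * FROM t WHERE id > 0 WITH CASCADED CHECK OPTION;
    INSERT INTO v VALUES (1, 'x') ON CONFLICT (id) DO NOTHING;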
Discussion: <26681.1477940227@sss.pgh.pa.us>
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI). I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so that there
was no bug before that.
Per bug #14414 from Marcos Castedo. Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.
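A sketch along the lines of the reported example (names hypothetical):

    CREATE FUNCTION is_positive(i int) RETURNS boolean
        LANGUAGE plpgsql AS $$ BEGIN RETURN i > 0; END $$;
    CREATE DOMAIN posint AS int CHECK (is_positive(VALUE));
    CREATE FUNCTION use_domain() RETURNS posint LANGUAGE plpgsql AS
        $$ DECLARE d posint; BEGIN d := 42; RETURN d; END $$;
    SELECT use_domain();   -- the assignment to d previously failed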
Report: <20161106010947.1387.66380@wrigleys.postgresql.org>
QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c',
which is all well and good, but it tries to do that to NOT nodes as well,
so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to
be done only on AND and OR nodes, and add a test case illustrating the bug.
In passing, provide some comments for the sadly naked functions in
tsquery_util.c, and simplify some baroque logic in QTNFree(), which
I think may have been leaking some items it intended to free.
Noted while investigating a complaint from Andreas Seltenreich.
Back-patch to all supported versions.
The code to change the deferrability properties of a foreign-key constraint
updated all the associated triggers to match; but a moment's examination of
the code that creates those triggers in the first place shows that only
some of them should track the constraint's deferrability properties. This
leads to odd failures in subsequent exercise of the foreign key, as the
triggers are fired at the wrong times. Fix that, and add a regression test
comparing the trigger properties produced by ALTER CONSTRAINT with those
you get by creating the constraint as-intended to begin with.
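For example (table and constraint names hypothetical):

    ALTER TABLE orders
        ALTER CONSTRAINT orders_customer_id_fkey
        DEFERRABLE INITIALLY DEFERRED;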
Per report from James Parks. Back-patch to 9.4 where this ALTER
functionality was introduced.
Report: <CAJ3Xv+jzJ8iNNUcp4RKW8b6Qp1xVAxHwSXVpjBNygjKxcVuE9w@mail.gmail.com>
A transaction that conflicts against itself, for example
INSERT INTO t(pk) VALUES (1),(1) ON CONFLICT DO NOTHING;
should behave the same regardless of isolation level. It certainly
shouldn't throw a serialization error, as retrying will not help.
We got this wrong due to the ON CONFLICT logic not considering the case,
as reported by Jason Dusek.
Core of this patch is by Peter Geoghegan (based on an earlier patch by
Thomas Munro), though I didn't take his proposed code refactoring for fear
that it might have unexpected side-effects. Test cases by Thomas Munro
and myself.
Report: <CAO3NbwOycQjt2Oqy2VW-eLTq2M5uGMyHnGm=RNga4mjqcYD7gQ@mail.gmail.com>
Related-Discussion: <57EE93C8.8080504@postgrespro.ru>
Despite the argumentation I wrote in commit 7a2fe85b0, it's unsafe to do
this, because in corner cases it's possible for HeapTupleSatisfiesSelf
to try to set hint bits on the target tuple; and at least since 8.2 we
have required the buffer content lock to be held while setting hint bits.
The added regression test exercises one such corner case. Unpatched, it
causes an assertion failure in assert-enabled builds, or otherwise would
cause a hint bit change in a buffer we don't hold lock on, which given
the right race condition could result in checksum failures or other data
consistency problems. The odds of a problem in the field are probably
pretty small, but nonetheless back-patch to all supported branches.
Report: <19391.1477244876@sss.pgh.pa.us>
On my system, this improves coverage for src/backend/access/hash from
61.3% of lines to 88.2% of lines, and from 83.5% of functions to 97.5%
of functions, which is pretty good for 36 lines of tests.
Mithun Cy, reviewed by Amit Kapila and Álvaro Herrera
It's not good for an inherited child constraint to be marked connoinherit;
that would result in the constraint not propagating to grandchild tables,
if any are created later. The code mostly prevented this from happening
but there was one case that was missed.
This is somewhat related to commit e55a946a8, which also tightened checks
on constraint merging. Hence, back-patch to 9.2 like that one. This isn't
so much because there's a concrete feature-related reason to stop there,
as to avoid having more distinct behaviors than we have to in this area.
Amit Langote
Discussion: <b28ee774-7009-313d-dd55-5bdd81242c41@lab.ntt.co.jp>
Commit 0b62fd036 did a fairly sloppy job of refactoring setPath()
to support jsonb_insert() along with jsonb_set(). In its defense,
though, there was no regression test case exercising the case of
replacing an existing element in a jsonb array.
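For example, replacing an existing array element now behaves as expected:

    SELECT jsonb_set('[0, 1, 2]'::jsonb, '{1}', '"new"');
    -- expected result: [0, "new", 2]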
Per bug #14366 from Peng Sun. Back-patch to 9.6 where bug was introduced.
Report: <20161012065349.1412.47858@wrigleys.postgresql.org>
Upcoming changes to the hash table code used, among others, for grouping
and set operations will change the output order for a few queries. To
make it less likely that actual bugs are hidden between regression test
ordering changes, and to make the tests robust against platform
dependent ordering, add ORDER BYs guaranteeing the output order.
As it's possible that some of the changes expose platform-dependent
ordering, push this earlier, to let the buildfarm shake out potentially
unstable results.
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
The transition function was told that its first argument and result were
of the window function output type, not the aggregate state type.
This'd only matter if the transfunction consults get_fn_expr_argtype,
which typically only polymorphic functions would do.
Although we have several regression tests around polymorphic aggs,
none of them detected this mistake --- in fact, they still didn't
fail when I injected the same mistake into nodeAgg.c. So add some
more tests covering both plain agg and window-function-agg cases.
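For instance, a polymorphic aggregate exercised both as a plain aggregate
and as a window function (an illustration, not the exact tests added):

    SELECT array_agg(x) FROM generate_series(1, 3) AS t(x);
    SELECT array_agg(x) OVER (ORDER BY x) FROM generate_series(1, 3) AS t(x);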
Per report from Sebastian Luque. Back-patch to 9.6 where the error
was introduced (by sloppy refactoring in commit 804163bc2, looks like).
Report: <87int2qkat.fsf@gmail.com>
Historically, we've allowed users to add a CHECK constraint to a child
table and then add an identical CHECK constraint to the parent. This
results in "merging" the two constraints so that the pre-existing
child constraint ends up with both conislocal = true and coninhcount > 0.
However, if you tried to do it in the other order, you got a duplicate
constraint error. This is problematic for pg_dump, which needs to issue
separate ADD CONSTRAINT commands in some cases, but has no good way to
ensure that the constraints will be added in the required order.
And it's more than a bit arbitrary, too. The goal of complaining about
duplicated ADD CONSTRAINT commands can be served if we reject the case of
adding a constraint when the existing one already has conislocal = true;
but if it has conislocal = false, let's just make the ADD CONSTRAINT set
conislocal = true. In this way, either order of adding the constraints
has the same end result.
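A sketch of the two orders that now behave identically (names hypothetical):

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    -- child first, then parent: historically merged
    ALTER TABLE child  ADD CONSTRAINT a_pos CHECK (a > 0);
    ALTER TABLE parent ADD CONSTRAINT a_pos CHECK (a > 0);
    -- the reverse order previously drew a duplicate-constraint error,
    -- and now also merges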
Another problem was that the code allowed creation of a parent constraint
marked convalidated that is merged with a child constraint that is
!convalidated. In this case, an inheritance scan of the parent table could
emit some rows violating the constraint condition, which would be an
unexpected result given the marking of the parent constraint as validated.
Hence, forbid merging of constraints in this case. (Note: valid child and
not-valid parent seems fine, so continue to allow that.)
Per report from Benedikt Grundmann. Back-patch to 9.2 where we introduced
possibly-not-valid check constraints. The second bug obviously doesn't
apply before that, and I think the first doesn't either, because pg_dump
only gets into this situation when dealing with not-valid constraints.
Report: <CADbMkNPT-Jz5PRSQ4RbUASYAjocV_KHUWapR%2Bg8fNvhUAyRpxA%40mail.gmail.com>
Discussion: <22108.1475874586@sss.pgh.pa.us>
Attempting to COPY a subset of columns from a table with RLS enabled
would fail due to an invalid query being constructed (using a single
ColumnRef with the list of fields to extract in 'fields', but that's for
the different levels of an indirection for a single column, not for
specifying multiple columns).
Correct this by building a ColumnRef and then a ResTarget for each column
being requested and then adding those to the targetList for the select
query. Include regression tests to hopefully catch if this is broken
again in the future.
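A sketch of the previously failing case (names hypothetical; the policy and
non-owner role needed to make RLS actually apply are omitted):

    ALTER TABLE docs ENABLE ROW LEVEL SECURITY;
    COPY docs (title, body) TO STDOUT;   -- column subset previously failed under RLS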
Patch-By: Adam Brightwell
Reviewed-By: Michael Paquier
Historically, something like to_date('2009-06-40','YYYY-MM-DD') would
return '2009-07-10' because there was no prohibition on out-of-range
month or day numbers. This has been widely panned, and it also turns
out that Oracle throws an error in such cases. Since these functions
are nominally Oracle-compatibility features, let's change that.
There's no particular restriction on year (modulo the fact that the
scanner may not believe that more than 4 digits are year digits,
a matter to be addressed separately if at all). But we now check month,
day, hour, minute, second, and fractional-second fields, as well as
day-of-year and second-of-day fields if those are used.
Currently, no checks are made on ISO-8601-style week numbers or day
numbers; it's not very clear what the appropriate rules would be there,
and they're probably so little used that it's not worth sweating over.
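For example:

    SELECT to_date('2009-06-40', 'YYYY-MM-DD');
    -- previously returned 2009-07-10; now raises an out-of-range error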
Artur Zakirov, reviewed by Amul Sul, further adjustments by me
Discussion: <1873520224.1784572.1465833145330.JavaMail.yahoo@mail.yahoo.com>
See-Also: <57786490.9010201@wars-nicht.de>
Pushing an upper-level restriction clause into an unflattened
subquery-in-FROM is okay when the subquery contains no SRFs in its
targetlist, or when it does but the SRFs are unreferenced by the clause
*and the clause is not volatile*. Otherwise, we're changing the number
of times the clause is evaluated, which is bad for volatile quals, and
possibly changing the result, since a volatile qual might succeed for some
SRF output rows and not others despite not referencing any of the changing
columns. (Indeed, if the clause is something like "random() > 0.5", the
user is probably expecting exactly that behavior.)
We had most of these restrictions down, but not the one about the upper
clause not being volatile. Fix that, and add a regression test to
illustrate the expected behavior.
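An illustration of the behavior being preserved (the volatile qual must be
evaluated once per SRF output row, not pushed into the subquery):

    SELECT * FROM (SELECT generate_series(1, 10) AS g) ss
    WHERE random() > 0.5;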
Although this is definitely a bug, it doesn't seem like back-patch
material, since possibly some users don't realize that the broken
behavior is broken and are relying on what happens now. Also, while
the added test is quite cheap in the wake of commit a4c35ea1c, it would
be much more expensive (or else messier) in older branches.
Per report from Tom van Tilburg.
Discussion: <CAP3PPDiucxYCNev52=YPVkrQAPVF1C5PFWnrQPT7iMzO1fiKFQ@mail.gmail.com>
The money type input function did not have any overflow checks at all.
There were some regression tests that purported to check for overflow,
but they actually checked for the overflow behavior of the int8 type
before casting to money. Remove those unnecessary checks and add some
that actually check the money input function.
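For example, a value beyond the range of the money type is now rejected by
the input function itself (the exact threshold follows from the int8 range
and the two decimal places):

    SELECT '123456789012345678'::money;   -- now raises an overflow error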
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Teach the parser to reject misplaced set-returning functions during parse
analysis using p_expr_kind, in much the same way as we do for aggregates
and window functions (cf commit eaccfded9). While this isn't complete
(it misses nesting-based restrictions), it's much better than the previous
error reporting for such cases, and it allows elimination of assorted
ad-hoc expression_returns_set() error checks. We could add nesting checks
later if it seems important to catch all cases at parse time.
There is one case for which the parser will now throw an error although previous
versions allowed it, which is SRFs in the tlist of an UPDATE. That never
behaved sensibly (since it's ill-defined which generated row should be
used to perform the update) and it's hard to see why it should not be
treated as an error. It's a release-note-worthy change though.
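For example, a statement of this shape is now rejected at parse analysis
(table and column names hypothetical):

    UPDATE t SET x = generate_series(1, 10);   -- SRF in UPDATE targetlist: error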
Also, add a new Query field hasTargetSRFs reporting whether there are
any SRFs in the targetlist (including GROUP BY/ORDER BY expressions).
The parser can now set that basically for free during parse analysis,
and we can use it in a number of places to avoid expression_returns_set
searches. (There will be more such checks soon.) In some places, this
allows decontorting the logic since it's no longer expensive to check for
SRFs in the tlist --- so I made the checks parallel to the handling of
hasAggs/hasWindowFuncs wherever it seemed appropriate.
catversion bump because adding a Query field changes stored rules.
Andres Freund and Tom Lane
Discussion: <24639.1473782855@sss.pgh.pa.us>
We're considering changing the implementation of targetlist SRFs
considerably, and a lot of the current behaviour isn't tested in our
regression tests. Thus it seems useful to increase coverage to avoid
accidental behaviour changes.
It's quite possible that some of the plans here will require adjustments
to avoid falling afoul of ordering differences (e.g. hashed group
bys). The buildfarm will tell us.
Reviewed-By: Tom Lane
Discussion: <20160827214829.zo2dfb5jaikii5nw@alap3.anarazel.de>
When heap_lock_tuple decided to follow the update chain, it tried to
also lock any version of the tuple that was created by an update that
was subsequently rolled back. This is pointless, since for all intents
and purposes that tuple exists no more; and moreover it causes
misbehavior, as reported independently by Marko Tiikkaja and Marti
Raudsepp: some SELECT FOR UPDATE/SHARE queries may fail to return
the tuples, and assertion-enabled builds crash.
Fix by having heap_lock_updated_tuple test the xmin and return success
immediately if the tuple was created by an aborted transaction.
The condition where tuples become invisible occurs when an updated tuple
chain is followed by heap_lock_updated_tuple, which reports the problem
as HeapTupleSelfUpdated to its caller heap_lock_tuple, which in turn
propagates that code outwards possibly leading the calling code
(ExecLockRows) to believe that the tuple exists no longer.
Backpatch to 9.3. Only on 9.5 and newer does this lead to a visible
failure, because of commit 27846f02c176; before that, heap_lock_tuple
skips the whole dance when the tuple is already locked by the same
transaction, because of the ancient HeapTupleSatisfiesUpdate behavior.
Still, the buggy condition may also exist in more convoluted scenarios
involving concurrent transactions, so it seems safer to fix the bug in
the old branches too.
Discussion: https://www.postgresql.org/message-id/CABRT9RC81YUf1=jsmWopcKJEro=VoeG2ou6sPwyOUTx_qteRsg@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/48d3eade-98d3-8b9a-477e-1a8dc32a724d@joh.to
Not much to be said about this patch: it does what it says on the tin.
In passing, rename AlterEnumStmt.skipIfExists to skipIfNewValExists
to clarify what it actually does. In the discussion of this patch
we considered supporting other similar options, such as IF EXISTS
on the type as a whole or IF NOT EXISTS on the target name. This
patch doesn't actually add any such feature, but it might happen later.
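The feature added here is renaming an existing enum value, roughly (type and
labels hypothetical):

    ALTER TYPE mood RENAME VALUE 'sad' TO 'unhappy';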
Dagfinn Ilmari Mannsåker, reviewed by Emre Hasegeli
Discussion: <CAO=2mx6uvgPaPDf-rHqG8=1MZnGyVDMQeh8zS4euRyyg4D35OQ@mail.gmail.com>
Previously, we failed to recognize Unicode characters above U+7FF as
being members of locale-dependent character classes such as [[:alpha:]].
(Actually, the same problem occurs for large pg_wchar values in any
multibyte encoding, but UTF8 is the only case people have actually
complained about.) It's impractical to get Spencer's original code to
handle character classes or ranges containing many thousands of characters,
because it insists on considering each member character individually at
regex compile time, whether or not the character will ever be of interest
at run time. To fix, choose a cutoff point MAX_SIMPLE_CHR below which
we process characters individually as before, and deal with entire ranges
or classes as single entities above that. We can actually make things
cheaper than before for chars below the cutoff, because the color map can
now be a simple linear array for those chars, rather than the multilevel
tree structure Spencer designed. It's more expensive than before for
chars above the cutoff, because we must do a binary search in a list of
high chars and char ranges used in the regex pattern, plus call iswalpha()
and friends for each locale-dependent character class used in the pattern.
However, multibyte encodings are normally designed to give smaller codes
to popular characters, so that we can expect that the slow path will be
taken relatively infrequently. In any case, the speed penalty appears
minor except when we have to apply iswalpha() etc. to high character codes
at runtime --- and the previous coding gave wrong answers for those cases,
so whether it was faster is moot.
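For example, with a UTF8 database (the result also depends on the collation's
notion of alphabetic characters):

    SELECT '漢' ~ '[[:alpha:]]';   -- characters above U+7FF can now match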
Tom Lane, reviewed by Heikki Linnakangas
Discussion: <15563.1471913698@sss.pgh.pa.us>
To prevent possibly breaking indexes on enum columns, we must keep
uncommitted enum values from getting stored in tables, unless we
can be sure that any such column is new in the current transaction.
Formerly, we enforced this by disallowing ALTER TYPE ... ADD VALUE
from being executed at all in a transaction block, unless the target
enum type had been created in the current transaction. This patch
removes that restriction, and instead insists that an uncommitted enum
value can't be referenced unless it belongs to an enum type created
in the same transaction as the value. Per discussion, this should be
a bit less onerous. It does require each function that could possibly
return a new enum value to SQL operations to check this restriction,
but there aren't so many of those that this seems unmaintainable.
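A sketch of the new behavior, assuming a pre-existing enum type:

    CREATE TYPE mood AS ENUM ('sad', 'happy');
    BEGIN;
    ALTER TYPE mood ADD VALUE 'ecstatic';   -- now allowed in a transaction block
    SELECT 'ecstatic'::mood;  -- still rejected (uncommitted value), aborting the transaction
    COMMIT;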
Andrew Dunstan and Tom Lane
Discussion: <4075.1459088427@sss.pgh.pa.us>
After further reflection about the mess cleaned up in commit 39b691f25,
I decided the main bit of test coverage that was still missing was to
check that the non-default abbreviation-set files we supply are usable.
Add that.
Back-patch to supported branches, just because it seems like a good
idea to keep this all in sync.
Previously, we threw an error if a dynamic timezone abbreviation did not
match any abbreviation recorded in the referenced IANA time zone entry.
That seemed like a good consistency check at the time, but it turns out
that a number of the abbreviations in the IANA database are things that
Olson and crew made up out of whole cloth. Their current policy is to
remove such names in favor of using simple numeric offsets. Perhaps
unsurprisingly, a lot of these made-up abbreviations have varied in meaning
over time, which meant that our commit b2cbced9e and later changes made
them into dynamic abbreviations. So with newer IANA database versions
that don't mention these abbreviations at all, we fail, as reported in bug
#14307 from Neil Anderson. It's worse than just a few unused-in-the-wild
abbreviations not working, because the pg_timezone_abbrevs view stops
working altogether (since its underlying function tries to compute the
whole view result in one call).
We considered deleting these abbreviations from our abbreviations list, but
the problem with that is that we can't stay ahead of possible future IANA
changes. Instead, let's leave the abbreviations list alone, and treat any
"orphaned" dynamic abbreviation as just meaning the referenced time zone.
It will behave a bit differently than it used to, in that you can't any
longer override the zone's standard vs. daylight rule by using the "wrong"
abbreviation of a pair, but that's better than failing entirely. (Also,
this solution can be interpreted as adding a small new feature, which is
that any abbreviation a user wants can be defined as referencing a time
zone name.)
Back-patch to all supported branches, since this problem affects all
of them when using tzdata 2016f or newer.
Report: <20160902031551.15674.67337@wrigleys.postgresql.org>
Discussion: <6189.1472820913@sss.pgh.pa.us>
This introduces a numeric sum accumulator, which performs better than
repeatedly calling add_var(). The performance comes from using wider digits
and delaying carry propagation, tallying positive and negative values
separately, and avoiding a round of palloc/pfree on every value. This
speeds up SUM(), as well as other standard aggregates like AVG() and
STDDEV() that also calculate a sum internally.
Reviewed-by: Andrey Borodin
Discussion: <c0545351-a467-5b76-6d46-4840d1ea8aa4@iki.fi>
Commit f0c7b789a added a test case in case.sql that creates and then drops
both an '=' operator and the type it's for. Given the right timing, that
can cause a "cache lookup failed for type" failure in concurrent sessions,
which see the '=' operator as a potential match for '=' in a query, but
then the type is gone by the time they inquire into its properties.
It might be nice to make that behavior more robust someday, but as a
back-patchable solution, adjust the new test case so that the operator
is never visible to other sessions. Like the previous commit, back-patch
to all supported branches.
Discussion: <5983.1471371667@sss.pgh.pa.us>
ExecReScanAgg's check for whether it could re-use a previously calculated
hashtable neglected the possibility that the Agg node might reference
PARAM_EXEC Params that are not referenced by its input plan node. That's
okay if the Params are in upper tlist or qual expressions; but if one
appears in aggregate input expressions, then the hashtable contents need
to be recomputed when the Param's value changes.
To avoid unnecessary performance degradation in the case of a Param that
isn't within an aggregate input, add logic to the planner to determine
which Params are within aggregate inputs. This requires a new field in
struct Agg, but fortunately we never write plans to disk, so this isn't
an initdb-forcing change.
Per report from Jeevan Chalke. This has been broken since forever,
so back-patch to all supported branches.
Andrew Gierth, with minor adjustments by me
Report: <CAM2+6=VY8ykfLT5Q8vb9B6EbeBk-NGuLbT6seaQ+Fq4zXvrDcA@mail.gmail.com>
This seems to offer significantly better search performance than the
existing GiST opclass for inet/cidr, at least on data with a wide mix
of network mask lengths. (That may suggest that the data splitting
heuristics in the GiST opclass could be improved.)
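For example (table and index names hypothetical):

    CREATE TABLE hosts (addr inet);
    CREATE INDEX hosts_addr_spgist ON hosts USING spgist (addr);
    SELECT * FROM hosts WHERE addr << inet '10.0.0.0/8';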
Emre Hasegeli, with mostly-cosmetic adjustments by me
Discussion: <CAE2gYzxtth9qatW_OAqdOjykS0bxq7AYHLuyAQLPgT7H9ZU0Cw@mail.gmail.com>
Add a variant of txid_current() that returns NULL if no transaction ID
is assigned. This version can be used even on a standby server,
although it will always return NULL since no transaction IDs can be
assigned during recovery.
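A sketch of its use (function name as it appears in the released feature;
table name hypothetical):

    BEGIN;
    SELECT txid_current_if_assigned();   -- NULL: no xid assigned yet
    INSERT INTO t DEFAULT VALUES;        -- forces xid assignment
    SELECT txid_current_if_assigned();   -- now returns the transaction's xid
    COMMIT;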
Craig Ringer, per suggestion from Jim Nasby. Reviewed by Petr Jelinek
and by me.
Remove the plpgsql wrapping that hides the context. So now the test
will fail if the work doesn't actually happen in a parallel worker. Run
the test in its own test group to ensure it won't run out of resources
for that.
regexp_match() is like regexp_matches(), but it disallows the 'g' flag
and in consequence does not need to return a set. Instead, it returns
a simple text array value, or NULL if there's no match. Previously people
usually got that behavior with a sub-select, but this way is considerably
more efficient.
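For example:

    SELECT regexp_match('foobarbequebaz', '(bar)(beque)');
    -- {bar,beque}
    SELECT regexp_match('foobarbequebaz', 'nosuchpattern');
    -- NULL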
Documentation adjusted so that regexp_match() is presented first and then
regexp_matches() is introduced as a more complicated version. This is
a bit historically revisionist but seems pedagogically better.
Still TODO: extend contrib/citext to support this function.
Emre Hasegeli, reviewed by David Johnston
Discussion: <CAE2gYzy42sna2ME_e3y1KLQ-4UBrB-eVF0SWn8QG39sQSeVhEw@mail.gmail.com>
Per discussion, we should provide such functions to replace the lost
ability to discover AM properties by inspecting pg_am (cf commit
65c5fcd35). The added functionality is also meant to displace any code
that was looking directly at pg_index.indoption, since we'd rather not
believe that the bit meanings in that field are part of any client API
contract.
As future-proofing, define the SQL API to not assume that properties that
are currently AM-wide or index-wide will remain so unless they logically
must be; instead, expose them only when inquiring about a specific index
or even specific index column. Also provide the ability for an index
AM to override the behavior.
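The added functions, as they appear in the released feature, are queried
like this (index name hypothetical):

    SELECT pg_indexam_has_property(am.oid, 'can_order')
      FROM pg_am am WHERE am.amname = 'btree';
    SELECT pg_index_has_property('my_index'::regclass, 'clusterable');
    SELECT pg_index_column_has_property('my_index'::regclass, 1, 'asc');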
In passing, document pg_am.amtype, overlooked in commit 473b93287.
Andrew Gierth, with kibitzing by me and others
Discussion: <87mvl5on7n.fsf@news-spur.riddles.org.uk>
Commit 874fe3aea changed the command tag returned for CREATE MATVIEW/CREATE
TABLE AS ... WITH NO DATA, but missed that there was code in spi.c that
expected the command tag to always be "SELECT". Fortunately, the
consequence was only an Assert failure, so this oversight should have no
impact in production builds.
Since this code path was evidently un-exercised, add a regression test.
Per report from Shivam Saxena. Back-patch to 9.3, like the previous commit.
Michael Paquier
Report: <97218716-480B-4527-B5CD-D08D798A0C7B@dresources.com>
ExecEvalCase() tried to save a cycle or two by passing
&econtext->caseValue_isNull as the isNull argument to its sub-evaluation of
the CASE value expression. If that subexpression itself contained a CASE,
then *isNull was an alias for econtext->caseValue_isNull within the
recursive call of ExecEvalCase(), leading to confusion about whether the
inner call's caseValue was null or not. In the worst case this could lead
to a core dump due to dereferencing a null pointer. Fix by not assigning
to the global variable until control comes back from the subexpression.
Also, avoid using the passed-in isNull pointer transiently for evaluation
of WHEN expressions. (Either one of these changes would have been
sufficient to fix the known misbehavior, but it's clear now that each of
these choices was in itself dangerous coding practice and best avoided.
There do not seem to be any similar hazards elsewhere in execQual.c.)
Also, it was possible for inlining of a SQL function that implements the
equality operator used for a CASE comparison to result in one CASE
expression's CaseTestExpr node being inserted inside another CASE
expression. This would certainly result in wrong answers since the
improperly nested CaseTestExpr would be caused to return the inner CASE's
comparison value not the outer's. If the CASE values were of different
data types, a crash might result; moreover such situations could be abused
to allow disclosure of portions of server memory. To fix, teach
inline_function to check for "bare" CaseTestExpr nodes in the arguments of
a function to be inlined, and avoid inlining if there are any.
Heikki Linnakangas, Michael Paquier, Tom Lane
Report: https://github.com/greenplum-db/gpdb/pull/327
Report: <4DDCEEB8.50602@enterprisedb.com>
Security: CVE-2016-5423
In arguments, these meta-commands wrongly treated each pair of double
quotes as closing the double-quoted string. Make the behavior match the
documentation.
This is a compatibility break, but I expect to find software with untested
reliance on the documented behavior more often than software reliant on
today's behavior. Back-patch to 9.1 (all supported versions).
Reviewed by Tom Lane and Peter Eisentraut.
Security: CVE-2016-5424
This is required for the result to be a legal tsvector value.
Noted while fooling with Andreas Seltenreich's ts_delete() crash.
Discussion: <87invhoj6e.fsf@credativ.de>
Such cases either failed an Assert, or produced a corrupt tsvector in
non-Assert builds, as reported by Andreas Seltenreich. The reason is
that tsvector_delete_by_indices() just assumed that its input array had
no duplicates. Fix by explicitly de-duping.
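For example, an array containing duplicate entries is now handled safely:

    SELECT ts_delete('a:1 b:2 c:3'::tsvector, ARRAY['b', 'b']);
    -- expected result: 'a':1 'c':3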
In passing, improve some comments, and fix a number of tests for null
values to use ERRCODE_NULL_VALUE_NOT_ALLOWED not
ERRCODE_INVALID_PARAMETER_VALUE.
Discussion: <87invhoj6e.fsf@credativ.de>
Previously, if an INSERT with multiple rows of VALUES had indirection
(array subscripting or field selection) in its target-columns list, the
parser handled that by applying transformAssignedExpr() to each element
of each VALUES row independently. This led to having ArrayRef assignment
nodes or FieldStore nodes in each row of the VALUES RTE. That works for
simple cases, but in bug #14265 Nuri Boardman points out that it fails
if there are multiple assignments to elements/fields of the same target
column. For such cases to work, rewriteTargetListIU() has to nest the
ArrayRefs or FieldStores together to produce a single expression to be
assigned to the column. But it failed to find them in the top-level
targetlist and issued an error about "multiple assignments to same column".
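A sketch of the failing case from the report (names hypothetical):

    CREATE TABLE arrs (a int[]);
    INSERT INTO arrs (a[1], a[2]) VALUES (1, 2), (3, 4);
    -- previously: "multiple assignments to same column" once more than one
    -- VALUES row was present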
We could possibly fix this by teaching the rewriter to apply
rewriteTargetListIU to each VALUES row separately, but that would be messy
(it would change the output rowtype of the VALUES RTE, for example) and
inefficient. Instead, let's fix the parser so that the VALUES RTE outputs
are just the user-specified values, cast to the right type if necessary,
and then the ArrayRefs or FieldStores are applied in the top-level
targetlist to Vars representing the RTE's outputs. This is the same
parsetree representation already used for similar cases with INSERT/SELECT
syntax, so it allows simplifications in ruleutils.c, which no longer needs
to treat INSERT-from-multiple-VALUES as its own special case.
This implementation works by applying transformAssignedExpr to the VALUES
entries as before, and then stripping off any ArrayRefs or FieldStores it
adds. With lots of VALUES rows it would be noticeably more efficient to
not add those nodes in the first place. But that's just an optimization
not a bug fix, and there doesn't seem to be any good way to do it without
significant refactoring. (A non-invasive answer would be to apply
transformAssignedExpr + stripping to just the first VALUES row, and then
just forcibly cast remaining rows to the same data types exposed in the
first row. But this way would lead to different, not-INSERT-specific
errors being reported in casting failure cases, so it doesn't seem very
nice.) So leave that for later; this patch at least isn't making the
per-row parsing work worse, and it does make the finished parsetree
smaller, saving rewriter and planner work.
Catversion bump because stored rules containing such INSERTs would need
to change. Because of that, no back-patch, even though this is a very
long-standing bug.
Report: <20160727005725.7438.26021@wrigleys.postgresql.org>
Discussion: <9578.1469645245@sss.pgh.pa.us>