getrlimit() is in SUSv2 and all targeted systems have it.
Windows doesn't have it. We could just use #ifndef WIN32, but for a
little more explanation about why we're making things conditional, let's
retain the HAVE_GETRLIMIT macro. It's defined in port.h for Unix systems.
On systems that have it, it's not necessary to test for RLIMIT_CORE,
RLIMIT_STACK or RLIMIT_NOFILE macros, since SUSv2 requires those and all
targeted systems have them. Also remove references to a pre-historic
alternative spelling of RLIMIT_NOFILE, and coding that seemed to believe
that Cygwin didn't have it.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ3LHeP9w5Fgzdr4G8AnEtJ=z=p6hGDEm4qYGEUX5B6fQ@mail.gmail.com
Adjusting this function was overlooked in commit 94aa7cc5f. The only
visible symptom (so far) is that INSERT ... ON CONFLICT could go into
an endless loop when inserting a null that has a conflict.
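For instance, with a NULLS NOT DISTINCT unique constraint (the feature
added by that commit), a minimal sketch of the reported scenario:
CREATE TABLE t (a int UNIQUE NULLS NOT DISTINCT);
INSERT INTO t VALUES (NULL);
INSERT INTO t VALUES (NULL) ON CONFLICT (a) DO NOTHING;  -- looped before this fix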
Richard Guo and Tom Lane, per bug #17558 from Andrew Kesper
Discussion: https://postgr.es/m/17558-3f6599ffcf52fd4a@postgresql.org
Remove the test case added by commit fac1b470a, which never actually
worked to expose the problem it claimed to test. Replace it with
a case that does expose the problem, and also covers the SRF-not-
at-the-top deficiency repaired in 1aa8dad41.
Richard Guo, with some editorialization by me
Discussion: https://postgr.es/m/17564-c7472c2f90ef2da3@postgresql.org
The sto_using_cursor and sto_using_select tests were coded to exercise
every permutation of their test steps, but AFAICS there is no value in
exercising more than one. This matters because each permutation costs
about six seconds, thanks to the "pg_sleep(6)". Perhaps we could
reduce that, but the useless permutations seem worth getting rid of
in any case. (Note that sto_using_hash_index got it right already.)
While here, clean up some other sloppiness such as an unused table.
This doesn't make too much difference in interactive testing, since the
wasted time is typically masked by parallelization with other tests.
However, the buildfarm runs this as a serial step, which means we can
expect to shave ~40 seconds from every buildfarm run. That makes it
worth back-patching.
Discussion: https://postgr.es/m/2515192.1659454702@sss.pgh.pa.us
The TAP tests for logical replication in src/test/subscription are using
the following code in many places to make sure that the subscription is
synchronized with the publisher:
$node_publisher->wait_for_catchup('tap_sub');
$node_subscriber->poll_query_until('postgres',
qq[SELECT count(1) = 0
FROM pg_subscription_rel
WHERE srsubstate NOT IN ('r', 's')]);
The new function wait_for_subscription_sync() can be used to replace the
above code. This eliminates duplicated code and makes it easier to write
future tests.
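For instance, the two steps above should collapse to a single call
along these lines:
$node_subscriber->wait_for_subscription_sync($node_publisher, 'tap_sub');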
Author: Masahiko Sawada
Reviewed by: Amit Kapila, Shi yu
Discussion: https://postgr.es/m/CAD21AoC-fvAkaKHa4t1urupwL8xbAcWRePeETvshvy80f6WV1A@mail.gmail.com
Previously, a byte with the high bit set was just transmitted
as-is by charin() and charout(). This is problematic if the
database encoding is multibyte, because the result of charout()
won't be validly encoded, which breaks various stuff that
expects all text strings to be validly encoded. We've
previously decided to enforce encoding validity rather than try
to individually harden each place that might have a problem with
such strings, so it's time to do something about "char".
To fix, represent high-bit-set characters as \ooo (backslash
and three octal digits), following the ancient "escape" format
for bytea. charin() will continue to accept the old way as well,
though that is only reachable in single-byte encodings.
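For illustration, under the new rule:
SELECT '\377'::"char";  -- input accepted, and output is rendered as \377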
Add some test cases just so there is coverage for this code.
We'll otherwise leave this question undocumented as it was before,
because we don't really want to encourage end-user use of "char".
For the moment, back-patch into v15 so that this change appears
in 15beta3. If there's not great pushback we should consider
absorbing this change into the older branches.
Discussion: https://postgr.es/m/2318797.1638558730@sss.pgh.pa.us
ORDER BY / DISTINCT aggregates have, ever since they were implemented
in Postgres, been executed by always performing a sort in nodeAgg.c to
sort the tuples in the current group into the correct order before
calling the transition function on the sorted tuples.  This was not
great, as often there might be an index that could have provided
pre-sorted input and allowed the transition functions to be called as
the rows come in, rather than having to store them in a tuplestore in
order to sort them once all the tuples for the group have arrived.
Here we change the planner so that it requests a path with a sort
order that supports the largest number of ORDER BY / DISTINCT aggregate
functions, and add new code to the executor to allow it to process
ORDER BY / DISTINCT aggregates where the tuples are already sorted in
the correct order.
Since there can be many ORDER BY / DISTINCT aggregates in any given
query level, it's very possible that we can't find an order that suits
all of them.  The sort order that the planner chooses is simply the one
that suits the most aggregate functions.  We take the most strictly
sorted variation of each order and see how many aggregate functions can
use that, then we try again with the orders of the remaining aggregates
to see if another order would suit more aggregate functions.  For
example:
SELECT agg(a ORDER BY a),agg2(a ORDER BY a,b) ...
would request the sort order to be {a, b} because {a} is a subset of the
sort order of {a,b}, but;
SELECT agg(a ORDER BY a),agg2(a ORDER BY c) ...
would just pick a plan ordered by {a} (we give precedence to aggregates
which are earlier in the targetlist).
SELECT agg(a ORDER BY a),agg2(a ORDER BY b),agg3(a ORDER BY b) ...
would choose to order by {b} since two aggregates suit that vs just one
that requires input ordered by {a}.
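As an illustrative sketch (table and index names hypothetical):
CREATE INDEX tab_a_idx ON tab (a);
SELECT string_agg(a::text, ',' ORDER BY a) FROM tab;
Here the aggregate can now be fed by an index scan on (a), skipping the
sort inside nodeAgg.c.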
Author: David Rowley
Reviewed-by: Ronan Dunklau, James Coleman, Ranier Vilela, Richard Guo, Tom Lane
Discussion: https://postgr.es/m/CAApHDvpHzfo92%3DR4W0%2BxVua3BUYCKMckWAmo-2t_KiXN-wYH%3Dw%40mail.gmail.com
The select_outer_pathkeys_for_merge function attempted to build the
merge join pathkeys in the same order as query_pathkeys, since doing so
could let the upper planner avoid a sort for an ORDER BY or GROUP BY
clause.  However, this restriction was overly strict: we had to match
the query_pathkeys in their entirety, or we didn't bother putting the
merge join pathkeys in that order at all.
Here we relax this rule so that we use a prefix of the query_pathkeys,
provided that prefix matches all of the join quals.  This may provide
the upper planner with partially sorted input, which allows the use of
incremental sorts instead of full sorts.
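For illustration (hypothetical tables joined on a single column):
SELECT * FROM t1 JOIN t2 ON t1.a = t2.a ORDER BY t1.a, t1.b;
The merge join pathkeys can now be built from the prefix {t1.a} of the
query_pathkeys, so an Incremental Sort on (a, b) above the join can
replace a full Sort.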
Author: David Rowley
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/CAApHDvrtZu0PHVfDPFM4Yx3jNR2Wuwosv+T2zqa7LrhhBr2rRg@mail.gmail.com
This commit adds some test coverage for ee79647 (prevent BASE_BACKUP
from running in the middle of another base backup) and b24b2be
(BASE_BACKUP cancellation followed by pg_backup_start), both issues
arising from the interaction of replication and SQL commands in a
logical replication connection in a WAL sender.
The second test uses a design close to what has been introduced in
0475a97f, where BASE_BACKUP is throttled to give enough room for a
cancellation, though this time we rely on psql with multiple -c
switches to keep a connection around for the second query.
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/Ys/NCI4Eo9300GnQ@paquier.xyz
In the short time this function has existed, it's already proven to be
a nontrivial maintenance burden, since it has to be updated whenever a
node tag is added or removed. Although in principle we could now
automate that, I see little justification for having such functionality
here at all. The function is only being applied to utility statements,
for which we already have infrastructure for obtaining string names.
Moreover, that infrastructure produces already-familiar-to-users names,
unlike nodetag_to_string().
So, remove this function and use the existing infrastructure instead.
That saves over a thousand lines of largely-unreachable code.
Back-patch to v15 where this code came in. Although it seems unlikely
that v15's nodetag list will change anymore, we might as well keep the
two branches looking and acting alike; otherwise back-patching any
test-results changes in this area will be painful.
Discussion: https://postgr.es/m/843818.1659218928@sss.pgh.pa.us
The code tried to access ARR_DIMS(v)[0] and ARR_LBOUND(v)[0]
whether or not those values exist. This made the range check
on the "n" argument unstable --- it might or might not fail, and
if it did it would report garbage for the allowed upper limit.
These bogus accesses would probably annoy Valgrind, and if you
were very unlucky even lead to SIGSEGV.
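(The function in question is trim_array(), going by the v14 addition
matching this description; a minimal reproducer for illustration:)
SELECT trim_array(ARRAY[]::int[], 1);  -- zero-dimensional array input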
Report and fix by Martin Kalcher. Back-patch to v14 where this
function was added.
Discussion: https://postgr.es/m/baaeb413-b8a8-4656-5757-ef347e5ec11f@aboutsource.net
These flavors of ALTER TABLE were already shaped to report the
ObjectAddress of the partition attached or detached, but this data was
not added to what is collected for event triggers. The tests of
test_ddl_deparse are updated to show the modification in the data
reported.
Author: Hou Zhijie
Reviewed-by: Álvaro Herrera, Amit Kapila, Hayato Kuroda, Michael Paquier
Discussion: https://postgr.es/m/OS0PR01MB571626984BD099DADF53F38394899@OS0PR01MB5716.jpnprd01.prod.outlook.com
This module is expanded to track the description of the objects changed
in the subcommands of ALTER TABLE by reworking the function
get_altertable_subcmdtypes() (now named get_altertable_subcmdinfo) used
in the event trigger of the test. It now returns a set of rows made of
(subcommand type, object description) instead of a text array with only
the information about the subcommand type.
The tests have been lacking a lot of the subcommands added to
AlterTableType over the years.  All the missing subcommands are added,
and the code is now structured so that the addition of a new subcommand
is detected, by removing the default clause previously used in the
switch over the subcommand types.
The coverage of the module is increased from roughly 30% to 50%. More
could be done but this is already a nice improvement.
Author: Michael Paquier, Hou Zhijie
Reviewed-by: Álvaro Herrera, Amit Kapila, Hayato Kuroda
Discussion: https://postgr.es/m/OS0PR01MB571626984BD099DADF53F38394899@OS0PR01MB5716.jpnprd01.prod.outlook.com
It's allowed for an installation to remove postgresql.auto.conf,
so don't rely on that being present. Instead probe whether we can
read postmaster.pid. (If you've removed that, you broke the data
directory's multiple-postmaster interlock, not to mention pg_ctl.)
Per gripe from Michael Paquier.
Discussion: https://postgr.es/m/YuSZTsoBMObyY+vT@paquier.xyz
The new test is from commit 9e4f914b5e.
With this setting, messages include SQL error numbers, so the pattern
being looked for needs to allow for them.
There wasn't an especially nice way to read all of a file while
passing missing_ok = true. Add an additional overloaded variant
to support that use-case.
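For example, the new two-argument form:
SELECT pg_read_file('postmaster.pid', true);  -- NULL, not an error, if absent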
While here, refactor the C code to avoid a rats-nest of PG_NARGS
checks, instead handling the argument collection in the outer
wrapper functions. It's a bit longer this way, but far more
straightforward.
(Upon looking at the code coverage report for genfile.c, I was
impelled to also add a test case for pg_stat_file() -- tgl)
Kyotaro Horiguchi
Discussion: https://postgr.es/m/20220607.160520.1984541900138970018.horikyota.ntt@gmail.com
On FreeBSD, the new test fails due to a WAL file being removed before
the standby has had the chance to copy it. Fix by adding a replication
slot to prevent the removal until after the standby has connected.
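For illustration, the sort of statement involved (slot name
hypothetical):
SELECT pg_create_physical_replication_slot('standby_slot', true);
-- immediately_reserve = true keeps the WAL around until the standby connects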
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reported-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAEze2Wj5nau_qpjbwihvmXLfkAWOZ5TKdbnqOc6nKSiRJEoPyQ@mail.gmail.com
Crash recovery on standby may encounter missing directories
when replaying database-creation WAL records. Prior to this
patch, the standby would fail to recover in such a case;
however, the directories could be legitimately missing.
Consider the following sequence of commands:
CREATE DATABASE
DROP DATABASE
DROP TABLESPACE
If, after replaying the last WAL record and removing the
tablespace directory, the standby crashes and has to replay the
create database record again, crash recovery must be able to continue.
A fix for this problem was already attempted in 49d9cfc68b, but it
was reverted because of design issues. This new version is based
on Robert Haas' proposal: any missing tablespaces are created
during recovery before reaching consistency. Tablespaces
are created as real directories, and should be deleted
by later replay. CheckRecoveryConsistency ensures
they have disappeared.
The problems detected by this new code are reported as PANIC,
except when allow_in_place_tablespaces is set to ON, in which
case they are WARNING. Apart from making tests possible, this
gives users an escape hatch in case things don't go as planned.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Asim R Praveen <apraveen@pivotal.io>
Author: Paul Guo <paulguo@gmail.com>
Reviewed-by: Anastasia Lubennikova <lubennikovaav@gmail.com> (older versions)
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> (older versions)
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Diagnosed-by: Paul Guo <paulguo@gmail.com>
Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
As before, we start by prepending one underscore (truncating the
base name if necessary). But if there is a conflict, then instead of
prepending more and more underscores, append an underscore and some
digits, in much the same way that ChooseRelationName does. While
the previous logic could be driven to fail by creating a lot of
types with long names differing only near the end, this version seems
certain enough to eventually succeed that we can remove the failure
code path that was there before.
While at it, undo 6df7a9698's decision to split this code out of
makeArrayTypeName. That wasn't actually accomplishing anything,
because no other function was using it --- and it would have been
wrong to do so. The convention that a prefix "_" means an array,
not something else, is too ancient to mess with.
Andrey Lepikhov and Dmitry Koval, reviewed by Masahiko Sawada and myself
Discussion: https://postgr.es/m/b84cd82c-cc67-198a-8b1c-60f44e1259ad@postgrespro.ru
This addresses a couple of bugs in the REINDEX grammar, introduced by
83011ce:
- The object name was never passed down for DATABASE/SYSTEM, even if
the query included one.  This caused such REINDEX queries to accept any
object name, whereas we should complain if the object name specified
does not match the name of the database we are connected to.  A test is
added for this case in the main regression test suite, provided by
Álvaro.
- REINDEX SYSTEM CONCURRENTLY [name] was getting rejected in the
parser. Concurrent rebuilds are not supported for catalogs but the
error provided at execution time is more helpful for the user, and
allowing this flavor results in a simplification of the parsing logic.
- REINDEX DATABASE CONCURRENTLY was rebuilding the indexes in a
non-concurrent way, as the option was not being appended correctly to
the list of DefElems in ReindexStmt (REINDEX (CONCURRENTLY) DATABASE
was working fine).  A test is added in the TAP tests of reindexdb for
this case, where we already have a REINDEX DATABASE CONCURRENTLY query
running on a small-ish instance.  This relies on the work done in
2cbc3c1 for SYSTEM, but here we check whether the OIDs of the index
relations match after the concurrent rebuild.  Note that in order to
get this part to work, I had to tweak the tests so that the index OIDs
and names are saved separately.  This change does not affect the
reliability or coverage of the existing tests.
While at it, I have implemented a tweak in the grammar to remove one
parsing branch, simplifying things even more.
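For illustration, after this commit:
REINDEX DATABASE CONCURRENTLY;     -- now rebuilds concurrently, as requested
REINDEX (CONCURRENTLY) DATABASE;   -- was already behaving correctly
REINDEX SYSTEM CONCURRENTLY;       -- passes the parser, fails with a clearer error at execution
REINDEX DATABASE other_db;         -- fails unless other_db is the current database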
Author: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/YttqI6O64wDxGn0K@paquier.xyz
The setting controls the maximum length of the header line in expanded
format output.  Possible settings are full, column, page, or an
integer.  The default is full, the current behaviour, in which the
header line is the length of the widest line of output.  column
truncates the header to the width of the first column, page truncates
it to the width of the terminal page, and an integer truncates it to
that value.  If the full width is less than the page or integer value,
no truncation occurs.  If given without an argument, this option prints
its current setting.
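Assuming this landed as psql's xheader_width pset option (the option
name is not stated above), usage looks like:
\pset xheader_width page
\pset xheader_width 40
\pset xheader_width          -- prints the current setting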
Platon Pronko, somewhat modified by me.
Discussion: https://postgr.es/m/f03d38a3-db96-a56e-d1bc-dbbc80bbde4d@gmail.com
Commit e2f65f425 added contrib/pg_prewarm to the prerequisites for
running the src/test/recovery suite, but did not bother to update
the documentation about that.
The dependency logic failed to register a column-level dependency
when a view or rule contains a reference to a specific column of
the result of a function-returning-composite. That meant you could
drop the column from the composite type, causing trouble for future
executions of the view. We've known about this for years, but never
summoned the energy to actually fix it, instead installing various
low-level defenses to prevent crashing on references to dropped columns.
We had to do that to plug the hole in stable branches, where there might
be pre-existing broken references; but let's fix the root cause today.
To do that, add some logic (borrowed from get_rte_attribute_is_dropped)
to find_expr_references_walker, to check whether a Var referencing an
RTE_FUNCTION RTE is referencing a column of a composite type, and if
so add the proper dependency.
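A sketch of the newly-recorded dependency (names hypothetical):
CREATE TYPE comp AS (f1 int, f2 text);
CREATE FUNCTION getc() RETURNS comp LANGUAGE sql AS $$ SELECT (1, 'x')::comp $$;
CREATE VIEW v AS SELECT f2 FROM getc();
ALTER TYPE comp DROP ATTRIBUTE f2;  -- now blocked: v depends on column f2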
However ... it seems mighty unwise to remove said low-level defenses,
since there could be other bugs now or in the future that allow
reaching them. By the same token, letting those defenses go untested
seems unwise. Hence, rather than just dropping the associated test
cases, hack them to continue working by the expedient of manually
dropping the pg_depend entries that this fix installs.
Back-patch into v15. I don't want to risk changing this behavior
in stable branches, but it seems not too late for v15. (Since
we have already forced initdb for beta3, we can be sure that all
production v15 installations will have these added dependencies.)
Discussion: https://postgr.es/m/182492.1658431155@sss.pgh.pa.us
This allows users to omit the statistics name in a CREATE STATISTICS
command, letting the system auto-generate a sensible, unique name and
place the statistics object in the same schema as the table.
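For example (names hypothetical):
CREATE STATISTICS ON a, b FROM t;  -- auto-generates a name such as t_a_b_stat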
Simon Riggs, reviewed by Matthias van de Meent.
Discussion: https://postgr.es/m/CANbhV-FGD2d_C3zFTfT2aRfX_TaPSgOeKES58RLZx5XzQp5NhA@mail.gmail.com
Due to lack of concern for the case in the dependency code, it's
possible to drop a column of a composite type even though stored
queries have references to the dropped column via functions-in-FROM
that return the composite type. There are "soft" references,
namely FROM-clause aliases for such columns, and "hard" references,
that is actual Vars referring to them. The right fix for hard
references is to add dependencies preventing the drop; something
we've known for many years and not done (and this commit still doesn't
address it). A "soft" reference shouldn't prevent a drop though.
We've been around on this before (cf. 9b35ddce9, 2c4debbd0), but
nobody had noticed that the current behavior can result in dump/reload
failures, because ruleutils.c can print more column aliases than the
underlying composite type now has. So we need to rejigger the
column-alias-handling code to treat such columns as dropped and not
print aliases for them.
Rather than writing new code for this, I used expandRTE() which already
knows how to figure out which function result columns are dropped.
I'd initially thought maybe we could use expandRTE() in all cases, but
that fails for EXPLAIN's purposes, because the planner strips a lot of
RTE infrastructure that expandRTE() needs. So this patch just uses it
for unplanned function RTEs and otherwise does things the old way.
If there is a hard reference (Var), then removing the column alias
causes us to fail to print the Var, since there's no longer a name
to print. Failing seems less desirable than printing a made-up
name, so I made it print "?dropped?column?" instead.
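A sketch of the dump/reload hazard (names hypothetical):
CREATE TYPE comp AS (f1 int, f2 text);
CREATE FUNCTION fc() RETURNS SETOF comp LANGUAGE sql AS $$ SELECT (1, 'x')::comp $$;
CREATE VIEW v AS SELECT f1 FROM fc() AS t(f1, f2);
ALTER TYPE comp DROP ATTRIBUTE f2;  -- allowed: only a "soft" alias reference
Previously ruleutils.c still printed the alias list t(f1, f2), which no
longer matched the composite type; it now treats f2 as dropped and
omits its alias.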
Per report from Timo Stolz. Back-patch to all supported branches.
Discussion: https://postgr.es/m/5c91267e-3b6d-5795-189c-d15a55d61dbb@nullachtvierzehn.de
This patch adds a new SUBSCRIPTION parameter "origin". It specifies
whether the subscription will request the publisher to only send changes
that don't have an origin or send changes regardless of origin. Setting it
to "none" means that the subscription will request the publisher to only
send changes that have no origin associated. Setting it to "any" means
that the publisher sends changes regardless of their origin. The default
is "any".
Usage:
CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres port=9999'
PUBLICATION pub1 WITH (origin = none);
This can be used to avoid loops (infinite replication of the same data)
among replication nodes.
This feature allows filtering only the replication data originating in
WAL; for the initial sync (the initial copy of table data) we don't
have such a facility, as we can only distinguish data based on origin
from WAL.  As a follow-up patch, we are planning to forbid the initial
sync if the origin is specified as none and we notice that the
publication tables were also replicated from other publishers, to
avoid duplicate data or loops.
We forbid creating replication origins named 'none' or 'any', to avoid
confusion with the option values of the same names.
Author: Vignesh C, Amit Kapila
Reviewed-By: Peter Smith, Amit Kapila, Dilip Kumar, Shi yu, Ashutosh Bapat, Hayato Kuroda
Discussion: https://postgr.es/m/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9=9orqubhjcQ@mail.gmail.com
This allows aliases for sub-SELECTs and VALUES clauses in the FROM
clause to be omitted.
This is an extension of the SQL standard, supported by some other
database systems, and so eases the transition from such systems, as
well as removing the minor inconvenience caused by requiring these
aliases.
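For example, both of these are now accepted:
SELECT * FROM (SELECT 1 AS x);
SELECT * FROM (VALUES (1), (2));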
Patch by me, reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAEZATCUCGCf82=hxd9N5n6xGHPyYpQnxW8HneeH+uP7yNALkWA@mail.gmail.com
Detail and hint messages should be full sentences and should end with a
period, but some of the messages newly-introduced in v15 did not follow
that.
Author: Justin Pryzby
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20220719120948.GF12702@telsasoft.com
Backpatch-through: 15
We allow users to set the values of not-yet-loaded extension GUCs,
remembering those values in "placeholder" GUC entries. When/if
the extension is loaded later in the session, we need to verify that
the user had permissions to set the GUC. That was done correctly
before commit a0ffa885e, but as of that commit, we'd check the
permissions of the active role when the LOAD happens, not the role
that had set the value. (This'd be a security bug if it had made it
into a released version.)
In principle this is simple enough to fix: we just need to remember
the exact role OID that set each GUC value, and use that, not
GetUserId(), when verifying permissions.  Maintaining that data in
the guc.c data structures is slightly tedious, but fortunately it's
all basically just copy-n-paste of the logic for tracking the
GucSource of each setting, as we were already doing.
Another oversight is that validate_option_array_item() hadn't
been taught to check for granted GUC privileges. This appears
to manifest only in that ALTER ROLE/DATABASE RESET ALL will
fail to reset settings that the user should be allowed to reset.
Patch by myself and Nathan Bossart, per report from Nathan Bossart.
Back-patch to v15 where the faulty code came in.
Discussion: https://postgr.es/m/20220706224727.GA2158260@nathanxps13
setrefs.c contains logic to discard no-op SubqueryScan nodes, that is,
ones that have no qual to check and copy the input targetlist unchanged.
(Formally it's not very nice to be applying such optimizations so late
in the planner, but there are practical reasons for it; mostly that we
can't unify relids between the subquery and the parent query until we
flatten the rangetable during setrefs.c.) This behavior falsifies our
previous cost estimates, since we would've charged cpu_tuple_cost per
row just to pass data through the node. Most of the time that's little
enough to not matter, but there are cases where this effect visibly
changes the plan compared to what you would've gotten with no
sub-select.
To improve the situation, make the callers of cost_subqueryscan tell
it whether they think the targetlist is trivial. cost_subqueryscan
already has the qual list, so it can check the other half of the
condition easily. It could make its own determination of tlist
triviality too, but doing so would be repetitive (for callers that
may call it several times) or unnecessarily expensive (for callers
that can determine this more cheaply than a general test would do).
This isn't a 100% solution, because createplan.c also does things
that can falsify any earlier estimate of whether the tlist is
trivial. However, it fixes nearly all cases in practice, if results
for the regression tests are anything to go by.
setrefs.c also contains logic to discard no-op Append and MergeAppend
nodes. We did have knowledge of that behavior at costing time, but
somebody failed to update it when a check on parallel-awareness was
added to the setrefs.c logic. Fix that while we're here.
These changes result in two minor changes in query plans shown in
our regression tests. Neither is relevant to the purposes of its
test case AFAICT.
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/2581077.1651703520@sss.pgh.pa.us
Commit e3fcca0d0d reverted modifications to HOT for BRIN, but it also
removed a couple unrelated tests from stats.sql. Reinstate those tests.
Reported-by: Peter Eisentraut
This is in preparation for defaulting to -fvisibility=hidden in extensions,
instead of relying on all symbols in extensions to be exported.
This should have been committed before 089480c077, but something in my commit
scripts went wrong.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20211101020311.av6hphdl6xbjbuif@alap3.anarazel.de
Some of the test cases added by commit 3a0e38504 are failing
intermittently in CI testing. It looks like, when a connection
attempt fails, it's possible for psql to exit and the test script
to slurp up the postmaster's log file before the connected backend
has managed to write the log entry we're expecting to see.
It's not clear whether that's fixable in any robust way. Pending
more thought, just comment out the log_like checks. The ones in
connect_ok tests should be fine, since surely the log entry should
be emitted before we complete the client auth sequence. I took
out all the ones in connect_fails tests though.
Discussion: https://postgr.es/m/E1oCNLk-000LCH-Af@gemulon.postgresql.org
This replaces all MemSet() calls with struct initialization where that
is easily and obviously possible. (For example, some cases have to
worry about padding bits, so I left those.)
(The same could be done with appropriate memset() calls, but this
patch is part of an effort to phase out MemSet(), so it doesn't touch
memset() calls.)
Reviewed-by: Ranier Vilela <ranier.vf@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/9847b13c-b785-f4e2-75c3-12ec77a3b05c@enterprisedb.com
Currently, debugging client certificate verification failures is
mostly limited to looking at the TLS alert code on the client side.
For simple deployments, sometimes it's enough to see "sslv3 alert
certificate revoked" and know exactly what needs to be fixed, but if
you add any more complexity (multiple CA layers, misconfigured CA
certificates, etc.), trying to debug what happened based on the TLS
alert alone can be an exercise in frustration.
Luckily, the server has more information about exactly what failed in
the chain, and we already have the requisite callback implemented as a
stub. We fill that in, collect the data, and pass the constructed
error message back to the main code via a static variable. This lets
us add our error details directly to the final "could not accept SSL
connection" log message, as opposed to issuing intermediate LOGs.
It ends up looking like
LOG: connection received: host=localhost port=43112
LOG: could not accept SSL connection: certificate verify failed
DETAIL: Client certificate verification failed at depth 1: unable to get local issuer certificate.
Failed certificate data (unverified): subject "/CN=Test CA for PostgreSQL SSL regression test client certs", serial number 2315134995201656577, issuer "/CN=Test root CA for PostgreSQL SSL regression test suite".
The length of the Subject and Issuer strings is limited to prevent
malicious client certs from spamming the logs. In case the truncation
makes things ambiguous, the certificate's serial number is also
logged.
Author: Jacob Champion <pchampion@vmware.com>
Discussion: https://www.postgresql.org/message-id/flat/d13c4a5787c2a3f83705124f0391e0738c796751.camel@vmware.com
In what must have been a copy-and-paste mistake, all the flag tests use
the same flag rather than a different flag each.  The bug is not
surprising, considering that it's dead code; add a minimal, testimonial
line to cover it.
This is all pretty inconsequential, because this is just example code,
but it had better be correct.
Discussion: https://postgr.es/m/20220712152059.fwli2majwgzdmh4r@alvherre.pgsql
Previously, the STORAGE specification was only available in ALTER
TABLE. This makes it available in CREATE TABLE as well.
Also make the code and the documentation for STORAGE and COMPRESSION
attributes consistent.
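For example:
CREATE TABLE t (a text STORAGE EXTERNAL);
replaces the previous two-step dance of CREATE TABLE followed by
ALTER TABLE t ALTER COLUMN a SET STORAGE EXTERNAL.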
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: wenjing zeng <wjzeng2012@gmail.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/de83407a-ae3d-a8e1-a788-920eb334f25b@sigaev.ru
Commit f10a025cfe added support for List to store Xids, but didn't
handle the new type in all cases. Add some obviously necessary pieces.
As far as I am aware, this is all dead code as far as core code is
concerned, but it seems unacceptable not to have it in case third-party
code wants to rely on this type of list. (Some parts of the List API
remain unimplemented, but that can be fixed as and when needed -- see
lack of list_intersection_oid, list_deduplicate_int as precedents.)
Discussion: https://postgr.es/m/20220708164534.nbejhgt4ajz35p65@alvherre.pgsql
This addresses two issues in the tests of test_oat_hooks:
- The role regress_test_user was being left behind, preventing the test
from succeeding on repeated runs.  It makes sense to leave some objects
behind to have more coverage for pg_upgrade (as does test_pg_dump), but
the role dropped here does not own any objects so there is no reason to
keep it.
- GRANT SET ON PARAMETER is issued, creating an entry in
pg_parameter_acl without cleaning up the entry created. This causes
an overlap with unsafe_tests as both use work_mem, making the latter
fail. This commit adds an extra REVOKE SET ON PARAMETER to clean the
contents of pg_parameter_acl, switching to maintenance_work_mem rather
than work_mem to avoid an overlap between both tests.
The tests of test_oat_hooks cannot use installcheck yet as these are
proving to be unstable with caching and the namespace search hooks, so
the issues fixed here cannot be reached yet, but they would be once the
hook issue is addressed and installcheck is allowed again in
test_oat_hooks.
Discussion: https://postgr.es/m/YrpVkADAY0knF6vM@paquier.xyz
Backpatch-through: 15
In the same vein as commit 251154beb, make it clear that we never
instantiate PlanState.
Also mark MemoryContextData as abstract. This has no effect right now,
since memnodes.h isn't one of the files fed to gen_node_support.pl.
But it seems like good documentation and future-proofing.
PostgreSQL contains an implementation of the red-black tree. The
red-black tree is an ordered data structure, and one of its advantages
is the ability to perform inequality searches. This commit adds
rbt_find_less() and rbt_find_great(), which implement these searches.
While these searches aren't yet used in core code, they might be useful
for extensions.
Discussion: https://postgr.es/m/CAGRrpzYE8-7GCoaPjOiL9T_HY605MRax-2jgTtLq236uksZ1Sw%40mail.gmail.com
Author: Steve Chavez, Alexander Korotkov
Reviewed-by: Alexander Korotkov