Debian unstable is shipping a broken version of libedit: it de-escapes
words before passing them to the application's tab completion function,
preventing us from recognizing backslash commands. Fortunately,
we have enough information available to dig the original text out of
rl_line_buffer, so ignore the string argument and do that.
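
A minimal sketch of the idea, assuming the standard readline globals and
the start/end offsets that psql's completion hook already receives (the
function name here is illustrative, not the committed code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <readline/readline.h>

    /* Rebuild the word being completed from rl_line_buffer, so that a
     * libedit which has already stripped the backslash cannot hide it
     * from us. */
    static char *
    recover_original_word(int start, int end)
    {
        char   *word = malloc(end - start + 1);

        if (word == NULL)
            return NULL;
        memcpy(word, rl_line_buffer + start, end - start);
        word[end - start] = '\0';
        return word;
    }
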
I view this as a temporary workaround to get the affected buildfarm
members back to green in the wake of 7c015045b. I hope we can get
rid of it once somebody fixes Debian's libedit; hence, no back-patch,
at least for now.
Discussion: https://postgr.es/m/20200103110128.GA28967@msg.df7cb.de
Up to now, psql's tab-complete.c has had exactly no regression test
coverage. This patch is an experimental attempt to add some.
This needs Perl's IO::Pty module, which isn't installed everywhere,
so the test script just skips all tests if that's not present.
There may be other portability gotchas too, so I await buildfarm
results with interest.
So far this just covers a few very basic keyword-completion and
query-driven-completion scenarios, which should be enough to let us
get a feel for whether this is practical at all from a portability
standpoint. If it is, there's lots more that can be done.
Discussion: https://postgr.es/m/10967.1577562752@sss.pgh.pa.us
Since d6c55de1, src/port/snprintf.c supports %m, removing the need to
call strerror() explicitly. A couple of utilities in src/bin/ have
already done the switch, so do it now for pg_waldump as well, as this
reduces the workload for translators.
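
As a hedged illustration of the pattern (not the exact pg_waldump hunks),
an error call that used to pass strerror(errno) explicitly can now lean
on %m, which keeps the format string shared across call sites:

    #include "postgres_fe.h"
    #include "common/logging.h"

    static void
    report_open_failure(const char *fname)
    {
        /* before: pg_log_error("could not open file \"%s\": %s",
         *                      fname, strerror(errno)); */
        pg_log_error("could not open file \"%s\": %m", fname);
    }
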
Note that more could be done, particularly with pgbench. Thanks to
Kyotaro Horiguchi for the discussion.
Discussion: https://postgr.es/m/20191129065115.GM2505@paquier.xyz
The refactoring done in a4fd3aa for query cancellation messed up the
logic in psql by mixing CancelRequested and cancel_pressed, breaking
for example \watch. The former is switched to true if a cancellation
request has been attempted and actually succeeded, while the latter
tracks whether a cancellation attempt has been made.
This commit brings psql's code back to a state consistent with what
it was before a4fd3aa, without giving up on the refactoring pieces
introduced. It should actually be possible to merge the two flags
further, as their concepts are close enough; however, note that psql's
--single-step mode relies on cancel_pressed always being set, so this
requires more careful analysis, left for later.
While on it, fix the declarations of CancelRequested (in cancel.c) and
cancel_pressed (in psql) to be volatile sig_atomic_t. Previously,
both were declared as booleans, which should be fine on modern
platforms, but the C standard recommends using sig_atomic_t for
variables accessed in signal handlers. Note that since its introduction
in a1792320, the CancelRequested declaration was not volatile.
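
For reference, a sketch of the corrected declarations (the exact headers
and surrounding comments in the tree differ):

    #include <signal.h>

    /* volatile keeps the compiler from caching the value across the
     * signal handler's update; sig_atomic_t guarantees that reads and
     * writes are atomic with respect to signal delivery. */
    extern volatile sig_atomic_t CancelRequested;   /* fe_utils/cancel.c */
    extern volatile sig_atomic_t cancel_pressed;    /* psql */
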
Reported-by: Jeff Janes
Author: Michael Paquier
Discussion: https://postgr.es/m/CAMkU=1zpoUDGKqWKuMWkj7t-bOCaJDx0r=5te_-d0B2HVLABXg@mail.gmail.com
Prefer to call "rl_filename_completion_function" and
"rl_completion_matches", rather than using the names without the rl_
prefix. This matches Readline's documentation, and makes our code
a little clearer about which names are external. On platforms that
only have the un-prefixed names (just some very ancient versions of
libedit, AFAICT), reverse the direction of the compatibility macro
definitions to match.
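
A hedged sketch of the reversed mapping, assuming configure probes along
the lines of HAVE_RL_COMPLETION_MATCHES and
HAVE_RL_FILENAME_COMPLETION_FUNCTION:

    /* Only very ancient libedit lacks the rl_-prefixed names; map the
     * modern spellings onto the legacy ones there, instead of the
     * other way around. */
    #ifndef HAVE_RL_COMPLETION_MATCHES
    #define rl_completion_matches completion_matches
    #endif
    #ifndef HAVE_RL_FILENAME_COMPLETION_FUNCTION
    #define rl_filename_completion_function filename_completion_function
    #endif
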
Also, remove our extern declaration of "filename_completion_function";
whatever libedit versions may have failed to declare that are surely
dead and buried.
Discussion: https://postgr.es/m/23608.1576248145@sss.pgh.pa.us
gcc-based Windows buildfarm animals are not happy about a multiline
regular expression I added recently. Try to accommodate; existing
pg_basebackup tests suggest that \n should work instead of a bare
newline, but throw in \r also. This being Perl, TIMTOWTDI.
Also remove the pointless $ at the end of the pattern, for extra luck.
(If this doesn't work, I'll probably just split the regex in two.)
Per buildfarm members jacana and fairywren.
Discussion: https://postgr.es/m/3562.1576161217@sss.pgh.pa.us
This makes such log entries more useful, since the cause of the error
can be dependent on the parameter values.
Author: Alexey Bashtanov, Álvaro Herrera
Discussion: https://postgr.es/m/0146a67b-a22a-0519-9082-bc29756b93a2@imap.cc
Reviewed-by: Peter Eisentraut, Andres Freund, Tom Lane
On Windows, we use CMD.EXE to redirect the postmaster's stdout/stderr
into a log file. CMD.EXE will open that file with non-sharing-friendly
parameters, and the file will remain open for a short time after the
postmaster has removed postmaster.pid. This can result in an
ERROR_SHARING_VIOLATION failure if we attempt to start a new postmaster
immediately with the same log file (e.g. during "pg_ctl restart").
This seems to explain intermittent buildfarm failures we've been seeing
on Windows machines.
To fix, just open and close the log file using our own pgwin32_open(),
which will wait if necessary to avoid the failure. (Perhaps someday
we should stop using CMD.EXE, but that would be a far more complex
patch, and it doesn't seem worth the trouble ... yet.)
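
A hedged sketch of the approach (not the committed hunk): touch the log
file through our own fopen(), which on Windows is redirected to
pgwin32_fopen() and therefore waits out any lingering
ERROR_SHARING_VIOLATION before CMD.EXE is pointed at the file again.

    #include <stdio.h>

    static void
    touch_logfile(const char *logfile)
    {
        FILE   *fd = fopen(logfile, "a");

        if (fd != NULL)
            fclose(fd);
    }
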
Back-patch to v12. This only solves the problem when frontend fopen()
is redirected to pgwin32_fopen(), which has only been true since commit
0ba06e0bf. Hence, no point in back-patching further, unless we care
to back-patch that change too.
Diagnosis and patch by Alexander Lakhin (bug #16154).
Discussion: https://postgr.es/m/16154-1ccf0b537b24d5e0@postgresql.org
When restoring database schemas on a new cluster, database "template1"
is processed first, followed by all other databases in parallel,
including "postgres". Both "postgres" and "template1" have some extra
handling to propagate each one's properties, but comments were confusing
regarding which one is processed where.
Author: Julien Rouhaud
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CAOBaU_a2iviTG7FE10yO_gcW+zQCHNFhRA_NDiktf3UR65BHdw@mail.gmail.com
Add a new function ReceiveCopyData that does just that, taking a
callback as an argument to specify what should be done with each chunk
as it is received. This allows a single copy of the logic to be shared
between ReceiveTarFile and ReceiveAndUnpackTarFile, and eliminates
a few #ifdef conditions based on HAVE_LIBZ.
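
A hedged sketch of the shape of the new function; the real typedef and
signature in pg_basebackup.c may differ in detail:

    #include <stdlib.h>
    #include "libpq-fe.h"

    typedef void (*WriteDataCallback) (size_t nbytes, char *buf,
                                       void *callback_data);

    static void
    ReceiveCopyData(PGconn *conn, WriteDataCallback callback,
                    void *callback_data)
    {
        for (;;)
        {
            char   *copybuf = NULL;
            int     r = PQgetCopyData(conn, &copybuf, 0);

            if (r == -1)        /* end of the COPY data stream */
                break;
            if (r == -2)        /* connection lost or other error */
                exit(1);
            callback(r, copybuf, callback_data);
            PQfreemem(copybuf);
        }
    }
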
While this is slightly more code, it's arguably clearer, and
there is a pending patch that introduces additional calls to
ReceiveCopyData.
This commit is not intended to result in any functional change.
Discussion: http://postgr.es/m/CA+TgmoYZDTHbSpwZtW=JDgAhwVAYvmdSrRUjOd+AYdfNNXVBDg@mail.gmail.com
This is similar to what pg_basebackup and pg_rewind do when reporting
cumulative data, and that's more user-friendly. Carriage returns are
now used when stderr points to a terminal, and newlines are used in
other cases, like a redirection to a log file.
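
A hedged illustration of the convention (names are made up): overwrite
the progress line in place when stderr is a terminal, and emit discrete
lines otherwise, e.g. under a redirection to a log file.

    #include <stdio.h>
    #include <unistd.h>

    static void
    report_progress(long long done, long long total)
    {
        char    eol = isatty(fileno(stderr)) ? '\r' : '\n';

        fprintf(stderr, "%lld of %lld tuples done%c", done, total, eol);
    }
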
Author: Amit Langote
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/CA+HiwqFNwEjPeVaQsp2L7DyCPv1Eg1guwhrVhzMYqUJUk8ULKg@mail.gmail.com
This variable is now part of the refactored code for query cancellation
in fe_utils. This fixes an oversight in commit a4fd3aa. While on it,
improve some header includes in bin/scripts/.
Author: Michael Paquier
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/20191203101625.GF1634@paquier.xyz
On Windows, all the hosts spawned by the TAP tests bind to 127.0.0.1.
Hence, if there is a port conflict, starting a cluster would immediately
fail. One of the test scripts of pg_ctl initializes a node without
PostgresNode.pm, using the default port 5432. This could cause
unexpected startup failures in the tests if an independent server was up
and running on the same host (the reverse is also possible, though less
likely). Fix this issue by properly assigning a free port to the node
being configured, in the same range used for the other nodes in the
tests.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20191202031444.GC1696@paquier.xyz
Backpatch-through: 11
Originally, this code was duplicated in src/bin/psql/ and
src/bin/scripts/, but it can be useful for other frontend applications,
like pgbench. This refactoring offers the possibility to set up a custom
callback that gets called from the signal handler for SIGINT, or when
console interrupt events happen on Windows.
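
A hedged usage sketch, assuming the callback-taking setup function
described above (the exact contract lives in fe_utils/cancel.h):

    #include "postgres_fe.h"
    #include "fe_utils/cancel.h"

    static void
    on_cancel(void)
    {
        /* only async-signal-safe work here, e.g. setting a flag */
    }

    int
    main(int argc, char **argv)
    {
        setup_cancel_handler(on_cancel);
        /* ... run queries, checking CancelRequested in long loops ... */
        return 0;
    }
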
Author: Fabien Coelho, with contributions from Michael Paquier
Reviewed-by: Álvaro Herrera, Ibrar Ahmed
Discussion: https://postgr.es/m/alpine.DEB.2.21.1910311939430.27369@lancre
This commit reverts the commits that added a test case exercising the
'force' option when there is an active backend connected to the database
being dropped.
This feature internally sends SIGTERM to all the backends connected to
the database being dropped, and the outcome is then reported to the
client. We found that on Windows, the client end of the socket is not
able to read the data once we close the socket on the server side, which
leads to the error message being lost, which is not what we expect. We
also observed similar behavior in other cases like pg_terminate_backend()
and pg_ctl kill TERM <pid>; there are probably a few others like that.
The fix for this requires further study.
Discussion: https://postgr.es/m/E1iaD8h-0004us-K9@gemulon.postgresql.org
This clarifies instructions on how to dump/reload databases that use
incompatible isn versions.
Reported-by: Alvaro Herrera
Discussion: https://postgr.es/m/20191114190652.GA23586@alvherre.pgsql
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Backpatch-through: 13
XLogReader, walsender and pg_waldump all had their own routines to read
data from WAL files to memory, with slightly different approaches
according to the particular conditions of each environment. There's a
lot of commonality, so we can refactor that into a single routine
WALRead in XLogReader, and move the differences to a separate (simpler)
callback that just opens the next WAL segment. This results in a
clearer (ahem) code flow.
The error reporting needs are covered by filling in a new error-info
struct, WALReadError, and it's the caller's responsibility to act on it.
The backend has WALReadRaiseError() to do so.
We no longer ever need to seek in this interface; switch to using
pg_pread().
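
A hedged sketch of the new error-handling contract; the exact WALRead()
argument list lives in xlogreader.h and is abbreviated here:

    WALReadError errinfo;

    if (!WALRead(state, buf, startptr, count, tli, &errinfo))
    {
        /* The caller decides how to report the failure; in the backend
         * this helper turns the filled-in WALReadError into an
         * ereport(). */
        WALReadRaiseError(&errinfo);
    }
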
Author: Antonin Houska, with contributions from Álvaro Herrera
Reviewed-by: Michaël Paquier, Kyotaro Horiguchi
Discussion: https://postgr.es/m/14984.1554998742@spoje.net
Up to now, whatever you'd edited was put back into the query buffer
but not redisplayed, which is less than user-friendly. But we can
improve that just by printing the text along with a prompt, if we
enforce that the editing result ends with a newline (which it
typically would anyway). You then continue typing more lines if
you want, or you can type ";" or do \g or \r or another \e.
This is intentionally divorced from readline's processing,
for simplicity and so that it works the same with or without
readline enabled. We discussed possibly integrating things
more closely with readline; but that seems difficult, uncertainly
portable across different readline and libedit versions, and
of limited real benefit anyway. Let's try the simple way and
see if it's good enough.
Patch by me, thanks to Fabien Coelho and Laurenz Albe for review
Discussion: https://postgr.es/m/13192.1572318028@sss.pgh.pa.us
Specifying '-f' will add the 'force' option to the DROP DATABASE command
sent to the server. This will try to terminate all existing connections
to the target database before dropping it.
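
A hedged sketch of the query construction (function and variable names
are illustrative); the point is simply that -f appends the FORCE option
to the command sent to the server:

    #include "postgres_fe.h"
    #include "fe_utils/string_utils.h"
    #include "pqexpbuffer.h"

    static void
    append_drop_command(PQExpBuffer sql, const char *dbname,
                        bool if_exists, bool force)
    {
        appendPQExpBuffer(sql, "DROP DATABASE %s%s%s;",
                          if_exists ? "IF EXISTS " : "",
                          fmtId(dbname),
                          force ? " WITH (FORCE)" : "");
    }
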
Author: Pavel Stehule
Reviewed-by: Vignesh C and Amit Kapila
Discussion: https://postgr.es/m/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com
pg_upgrade needs to check whether certain non-upgradable data types
appear anywhere on-disk in the source cluster. It knew that it had
to check for these types being contained inside domains and composite
types; but it somehow overlooked that they could be contained in
arrays and ranges, too. Extend the existing recursive-containment
query to handle those cases.
We probably should have noticed this oversight while working on
commit 0ccfc2822 and follow-ups, but we failed to :-(. The whole
thing's possibly a bit overdesigned, since we don't really expect
that any of these types will appear on disk; but if we're going to
the effort of doing a recursive search then it's silly not to cover
all the possibilities.
While at it, refactor so that we have only one copy of the search
logic, not three-and-counting. Also, to keep the branches looking
more alike, back-patch the output wording change of commit 1634d3615.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/31473.1573412838@sss.pgh.pa.us
This new option terminates the other sessions connected to the target
database and then drops it. To terminate other sessions, the current
user must have the required permissions (the same as for
pg_terminate_backend()). We don't allow terminating the sessions if
prepared transactions, active logical replication slots or subscriptions
are present in the target database.
Author: Pavel Stehule with changes by me
Reviewed-by: Dilip Kumar, Vignesh C, Ibrar Ahmed, Anthony Nowocien,
Ryan Lambert and Amit Kapila
Discussion: https://postgr.es/m/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com
This patch adopts the overflow check logic introduced by commit cbdb8b4c0
into two more places. interval_mul() failed to notice if it computed a
new microseconds value that was one more than INT64_MAX, and pgbench's
double-to-int64 logic had the same sorts of edge-case problems that
cbdb8b4c0 fixed in the core code.
To make this easier to get right in future, put the guts of the checks
into new macros in c.h, and add commentary about how to use the macros
correctly.
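
A hedged sketch of the edge case the new macros encapsulate (the macro
name here is illustrative, not necessarily the one added to c.h):
INT64_MAX is not exactly representable as a double, so comparing against
(double) INT64_MAX rounds up and lets 2^63 slip through; comparing
against -((double) INT64_MIN), which is exactly 2^63, is safe because
powers of two are exact.

    #include <stdint.h>

    #define DOUBLE_FITS_IN_INT64(num) \
        ((num) >= (double) INT64_MIN && (num) < -((double) INT64_MIN))
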
Back-patch to all supported branches, as we did with the previous fix.
Yuya Watari
Discussion: https://postgr.es/m/CAJ2pMkbkkFw2hb9Qb1Zj8d06EhWAQXFLy73St4qWv6aX=vqnjw@mail.gmail.com
This commit allows pgbench's --init-steps option to accept the "G"
character, meaning server-side data generation, as an initialization
step. With "G", only a limited number of queries are sent from the
pgbench client, and the data is then actually generated in the server.
This might make the initialization phase faster if the bandwidth between
the pgbench client and the server is low.
Author: Fabien Coelho
Reviewed-by: Anna Endo, Ibrar Ahmed, Fujii Masao
Discussion: https://postgr.es/m/alpine.DEB.2.21.1904061826420.3678@lancre
There are plenty of places in frontend code that could benefit from a
string buffer implementation, some because it yields simpler and
faster code, and others because of the desire to share code
between backend and frontend.
While there is a string buffer implementation available to frontend
code, libpq's PQExpBuffer, it is clunkier than stringinfo, it
introduces a libpq dependency, doesn't allow for sharing between
frontend and backend code, and has a higher API/ABI stability
requirement due to being exposed via libpq.
Therefore it seems best to just make StringInfo usable by
frontend code. There's not much to do for that, except rewriting
two subsequent elog/ereport calls into other types of error
reporting, and deciding on a maximum string length.
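
A minimal sketch of what frontend code can now do, assuming the header
stays in lib/ as noted below (the helper itself is made up):

    #include "postgres_fe.h"
    #include "lib/stringinfo.h"

    static char *
    build_copy_command(const char *table)
    {
        StringInfoData buf;

        initStringInfo(&buf);
        appendStringInfo(&buf, "COPY %s TO stdout", table);
        return buf.data;        /* caller releases with pfree() */
    }
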
For the maximum string size I decided to privately define MaxAllocSize
to the same value as used in the backend. It seems likely that we'll
want to reconsider this for both backend and frontend code in the not
too far away future.
For now I've left stringinfo.h in lib/, rather than common/, to reduce
the likelihood of unnecessary breakage. We could alternatively decide
to provide a redirecting stringinfo.h in lib/, or just not provide
compatibility.
Author: Andres Freund
Reviewed-By: Kyotaro Horiguchi, Daniel Gustafsson
Discussion: https://postgr.es/m/20190920051857.2fhnvhvx4qdddviz@alap3.anarazel.de
When maintaining or merging patches, one of the most common sources
of conflicts is the lists of objects in makefiles. Especially when
the splitting across lines has been changed on both sides, which is
somewhat common due to attempts to stay below 80 columns, those
conflicts are unnecessarily laborious to resolve.
By splitting, and alphabetically sorting, OBJS style lines into one
object per line, conflicts should be less frequent, and easier to
resolve when they still occur.
Author: Andres Freund
Discussion: https://postgr.es/m/20191029200901.vww4idgcxv74cwes@alap3.anarazel.de
The code only compared two triggers' names and namespaces (the latter
being the owning table's schema). This could result in falling back
to an OID-based sort of similarly-named triggers on different tables.
We prefer to avoid that, so add a comparison of the table names too.
(The sort order is thus table namespace, trigger name, table name,
which is a bit odd, but it doesn't seem worth contorting the code
to work around that.)
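
A hedged sketch of the extra tie-breaker (the variable and field names
only approximate pg_dump's sort code):

    /* After comparing trigger name and the owning table's schema, also
     * compare the owning table's name, so similarly-named triggers on
     * different tables sort deterministically without an OID fallback. */
    cmpval = strcmp(tobj1->tgtable->dobj.name,
                    tobj2->tgtable->dobj.name);
    if (cmpval != 0)
        return cmpval;
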
Likewise for policy objects, in 9.5 and up.
Complaint and fix by Benjie Gillam. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/CAMThMzEEt2mvBbPgCaZ1Ap1N-moGn=Edxmadddjq89WG4NpPtQ@mail.gmail.com
The additional newline seems to have accidentally been introduced in
2c03216d83, in 9.5. The newline is only issued when an FPW is
present for the block reference.
While there could be an argument that removing the newlines in the
back branches could cause a problem for somebody parsing the
pg_waldump output, the likelihood of that seems small enough. It seems
at least equally likely that the randomness of when newlines are
issued causes problems.
Author: Andres Freund
Discussion: https://postgr.es/m/20191029233341.4gnyau7e5v2lh5sc@alap3.anarazel.de
Backpatch: 9.5, like 2c03216d83.
Historically, psql consulted COMSPEC to spawn a shell in its \! command,
but we just invoked "cmd" when spawning shells in pg_ctl and pg_regress.
It seems better to rely on the environment variable, if it's set,
in all cases.
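
A hedged sketch of the shared lookup, with an illustrative fallback
value:

    #include <stdlib.h>

    static const char *
    get_shell_name(void)
    {
        const char *comspec = getenv("COMSPEC");

        return (comspec != NULL) ? comspec : "CMD";
    }
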
It's debatable whether this is a bug fix or just a behavioral change,
so no back-patch.
Juan José Santamaría Flecha
Discussion: https://postgr.es/m/16080-5d7f03222469f717@postgresql.org
9155580 changed the value of the first fake LSN for unlogged relations
from 1 to FirstNormalUnloggedLSN (aka 1000), since GiST requires a
non-zero LSN on some pages for its interlocking logic to work, but the
counter was still initialized to 1 at the beginning of recovery and
after running pg_resetwal. This fixes the initialization for both code
paths.
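
A hedged one-line sketch of the fix (locking elided; the field name
follows my reading of xlog.c):

    XLogCtl->unloggedLSN = FirstNormalUnloggedLSN;
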
Author: Takayuki Tsunakawa
Reviewed-by: Dilip Kumar, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/OSBPR01MB2503CE851940C17DE44AE3D9FE6F0@OSBPR01MB2503.jpnprd01.prod.outlook.com
Backpatch-through: 12
Commit a524f50d0f added
old_11_check_for_sql_identifier_data_type_usage(), but it did not use
the clearer database error list format added to the master branch in
commit 1634d36157. This commit fixes that.
Backpatch-through: master
8ae0d47 marked those options as obsolete back in 2005, and removed them
from the documentation. This removes the last references to both options
in the code, which were kept around for compatibility with past
commands.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/5da284a2-62d9-e338-88d1-26ee5009d93e@gmail.com
When an FK constraint is created, it needs the index on the referenced
table to exist and be valid. When doing a parallel pg_restore and the
referenced table is partitioned, this condition can sometimes not be
met, because pg_dump didn't emit sufficient object dependencies to
ensure it; this means that parallel pg_restore would fail under certain
conditions. Fix by having pg_dump make the FK constraint object
dependent on the partition attachment objects for the constraint's
referenced index.
This has been broken since f56f8f8da6, so backpatch to Postgres 12.
Discussion: https://postgr.es/m/20191005224333.GA9738@alvherre.pgsql