Instead of hard-wiring specific verbosity levels into the option
processing of client applications, invent pg_logging_increase_verbosity()
and encourage clients to implement --verbose by calling that. Then,
the common convention that more -v's gets you more verbosity just works.
In particular, this allows resurrection of the debug-grade messages that
have long existed in pg_dump and its siblings. They were unreachable
before this commit due to lack of a way to select PG_LOG_DEBUG logging
level. (It appears that they may have been unreachable for some time
before common/logging.c was introduced, too, so I'm not specifically
blaming cc8d41511 for the oversight. One reason for thinking that is
that it's now apparent that _allocAH()'s message needs a null-pointer
guard. Testing might have failed to reveal that before 96bf88d52.)
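A minimal sketch of the intended client-side pattern (the option
letter and program skeleton are illustrative, not taken from any
specific tool):

    #include "postgres_fe.h"
    #include "common/logging.h"
    #include <unistd.h>

    int
    main(int argc, char *argv[])
    {
        int         c;

        pg_logging_init(argv[0]);

        while ((c = getopt(argc, argv, "v")) != -1)
        {
            if (c == 'v')
                pg_logging_increase_verbosity();    /* one step chattier per -v */
        }

        /* reachable only with enough -v's to select PG_LOG_DEBUG */
        pg_log_debug("debug-level message");
        return 0;
    }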
Discussion: https://postgr.es/m/1173106.1600116625@sss.pgh.pa.us
It is not possible to get a list of foreign schemas, as the remote
server is not known, so this provides instead a list of local schemas,
which is more useful than nothing when using a loopback server or when
schema names match between the local and remote servers.
Author: Jeff Janes
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CAMkU=1wr7Roj41q-XiJs=Uyc2xCmHhcGGy7J-peJQK-e+w=ghw@mail.gmail.com
Adjust the whitespace in the emitted files so that it matches
what pgindent would do. This makes the generated files look
like they match project style, and avoids confusion if someone
does run pgindent on the generated files.
Also, add probes.h to pgindent's exclusion list, because it can
confuse pgindent, plus there's not much point in processing it.
Daniel Gustafsson, additional fixes by me
Discussion: https://postgr.es/m/79ed5348-be7a-b647-dd40-742207186a22@2ndquadrant.com
Those extra queries are not necessary when doing a data-only dump. With
this change, the dependencies between CHECK/DEFAULT and the parent
table are no longer tracked for a data-only dump. However, these
dependencies are only used for schema generation, and we have never
guaranteed that a dump can be reloaded if a CHECK constraint uses a
custom function whose behavior changes while the data is being loaded,
such as when the CHECK function makes cross-table references.
Author: Julien Rouhaud
Reviewed-by: Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/20200712054850.GA92357@nol
If there are no objects of a certain type, there is no need to do an
allocation for a set of DumpableObject items. The previous coding did
an allocation of 1 byte instead as per the fallback of pg_malloc() in
the event of an allocation size of zero. This commit assigns NULL
instead for an empty set of dumpable objects.
A similar rule already applied in findObjectByOid(), so this makes the
code more defensive: if a caller passes a non-existing, positive OID,
we now just fail with a pointer dereference instead of attempting to
use some incorrect data.
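A sketch of the resulting pattern, with a hypothetical helper name;
the point is returning NULL rather than letting pg_malloc(0) hand back
a 1-byte allocation:

    #include "postgres_fe.h"        /* pg_malloc() */

    /* hypothetical helper: allocate an object array, or NULL when empty */
    static void *
    alloc_object_array(size_t nobjs, size_t objsize)
    {
        if (nobjs == 0)
            return NULL;            /* previously: pg_malloc(0), one byte */
        return pg_malloc(nobjs * objsize);
    }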
Author: Daniel Gustafsson
Reviewed-by: Julien Rouhaud, Ranier Vilela
Discussion: https://postgr.es/m/26C43E58-BDD0-4F1A-97CC-4A07B52E32C5@yesql.se
The stats target can be set since commit d06215d03, but wasn't shown by
psql.
Author: Justin Pryzby <justin@telsasoft.com>
Discussion: https://postgr.es/m/20200831050047.GG5450@telsasoft.com
Reviewed-by: Georgios Kokolatos <gkokolatos@protonmail.com>
Reviewed-by: Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>
Letting the caller provide a StringInfo to read into is helpful when
the caller needs to merge lines or otherwise modify the data after
it's been read. Notably, now the code added by commit 8f8154a50
can use pg_get_line_append() instead of having its own copy of that
logic. A follow-on commit will also make use of this.
Also, since StringInfo buffers are a minimum of 1KB long, blindly
using pg_get_line() in a loop can eat a lot more memory than one would
expect. I discovered for instance that commit e0f05cd5b caused initdb
to consume circa 10MB to read postgres.bki, even though that's under
1MB worth of data. A less memory-hungry alternative is to re-use the
same StringInfo for all lines and pg_strdup the results.
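A sketch of that memory-frugal loop, assuming the frontend environment
(the function name and the processing step are hypothetical):

    #include "postgres_fe.h"
    #include "lib/stringinfo.h"
    #include "common/string.h"      /* pg_get_line_append() */

    static void
    read_lines(FILE *fp)
    {
        StringInfoData buf;

        initStringInfo(&buf);
        while (pg_get_line_append(fp, &buf))
        {
            char       *line = pg_strdup(buf.data); /* right-sized copy */

            /* ... stash or process "line" ... */
            resetStringInfo(&buf);  /* reuse the big buffer for the next line */
        }
        pfree(buf.data);
    }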
Discussion: https://postgr.es/m/1315832.1599345736@sss.pgh.pa.us
This is duplicative of an lstat that was just done by the calling
function (traverse_datadir), besides which we weren't really doing
anything with the results. There's not much point in checking to
see if someone removed the file since the previous lstat, since the
FILE_ACTION_REMOVE code would have to deal with missing-file cases
anyway. Moreover, the "exists = false" assignment was a dead store;
nothing was done with that value later.
A syscall saved is a syscall earned, so back-patch to 9.5
where this code was introduced.
Discussion: https://postgr.es/m/1221796.1599329320@sss.pgh.pa.us
At some point back in the last century, somebody felt that reading
all of pg_type twice was cheaper, or at least easier, than using
repalloc() to resize the Typ[] array dynamically. That seems like an
entirely wacko proposition, so rewrite the code to do it the other
way. (To add insult to injury, there were two not-quite-identical
copies of said code.)
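The single-pass shape, sketched with plain libc (the actual code uses
the server's allocators and data structures; all names here are
hypothetical, and error checks are elided):

    #include <stdlib.h>

    typedef struct { int oid; } Item;   /* stand-in element type */

    static Item *
    read_items(int *count, int (*next)(Item *))
    {
        int         nalloc = 64;
        int         nused = 0;
        Item       *items = malloc(nalloc * sizeof(Item));
        Item        it;

        while (next(&it))               /* a single pass over the input */
        {
            if (nused >= nalloc)
            {
                nalloc *= 2;            /* amortized O(1) per append */
                items = realloc(items, nalloc * sizeof(Item));
            }
            items[nused++] = it;
        }
        *count = nused;
        return items;
    }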
initdb.c's readfile() function had the same disease of preferring
to do double the I/O to avoid resizing its output array. Here,
we can make things easier by using the just-invented pg_get_line()
function to handle reading individual lines without a predetermined
notion of how long they are.
On my machine, it's difficult to detect any net change in the
overall runtime of initdb from these changes; but they should
help on slower buildfarm machines (especially since a buildfarm
cycle involves a lot of initdb's these days).
My attention was drawn to these places by scan-build complaints,
but on inspection they needed a lot more work than just suppressing
dead stores :-(
If this data is not collected, pg_dump segfaults if asked for column
inserts.
Fix by Fabrízio de Royes Mello
Backpatch to release 12 where the bug was introduced.
This patch started out with the goal of harmonizing various arbitrary
limits on password length, but after awhile a better idea emerged:
let's just get rid of those fixed limits.
recv_password_packet() has an arbitrary limit on the packet size,
which we don't really need, so just drop it. (Note that this doesn't
really affect anything for MD5 or SCRAM password verification, since
those will hash the user's password to something shorter anyway.
It does matter for auth methods that require a cleartext password.)
Likewise remove the arbitrary error condition in pg_saslprep().
The remaining limits are mostly in client-side code that prompts
for passwords. To improve those, refactor simple_prompt() so that
it allocates its own result buffer that can be made as big as
necessary. Actually, it proves best to make a separate routine
pg_get_line() that has essentially the semantics of fgets(), except
that it allocates a suitable result buffer and hence will never
return a truncated line. (pg_get_line has a lot of potential
applications to replace randomly-sized fgets buffers elsewhere,
but I'll leave that for another patch.)
I built pg_get_line() atop stringinfo.c, which requires moving
that code to src/common/; but that seems fine since it was a poor
fit for src/port/ anyway.
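A sketch of the fgets-like contract, assuming the frontend
environment; this hypothetical helper only shows the allocation
semantics and does not hide input the way simple_prompt() does:

    #include "postgres_fe.h"
    #include "common/string.h"      /* pg_get_line(), pg_strip_crlf() */

    static char *
    read_one_line(FILE *fp)
    {
        char       *line = pg_get_line(fp);     /* never truncated */

        if (line != NULL)
            (void) pg_strip_crlf(line);         /* drop the trailing newline */
        return line;                            /* malloc'd; caller frees */
    }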
This patch is mostly mine, but it owes a good deal to Nathan Bossart
who pressed for a solution to the password length problem and
created a predecessor patch. Also thanks to Peter Eisentraut and
Stephen Frost for ideas and discussion.
Discussion: https://postgr.es/m/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com
To add support for streaming of in-progress transactions into the
built-in logical replication, we need to do three things:
* Extend the logical replication protocol to identify in-progress
transactions, and to allow adding additional bits of information (e.g.
XID of subtransactions).
* Modify the output plugin (pgoutput) to implement the new stream
API callbacks, by leveraging the extended replication protocol.
* Modify the replication apply worker to properly handle streamed
in-progress transactions by spilling the data to disk and then
replaying them on commit.
We must, however, explicitly disable streaming during replication
slot creation, even if the plugin supports it. We don't need to
replicate the changes accumulated during this phase, and moreover we
don't have a replication connection open, so we have nowhere to send
the data anyway.
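For output plugin authors, the stream API amounts to filling in a few
extra callbacks; a sketch against the extended OutputPluginCallbacks
(the my_stream_* functions are hypothetical and their definitions are
elided):

    #include "postgres.h"
    #include "replication/output_plugin.h"

    void
    _PG_output_plugin_init(OutputPluginCallbacks *cb)
    {
        /* ... the usual begin/change/commit callbacks ... */

        /* all of these must be provided for streaming to be usable */
        cb->stream_start_cb = my_stream_start;
        cb->stream_stop_cb = my_stream_stop;
        cb->stream_abort_cb = my_stream_abort;
        cb->stream_commit_cb = my_stream_commit;
        cb->stream_change_cb = my_stream_change;
    }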
Author: Tomas Vondra, Dilip Kumar and Amit Kapila
Reviewed-by: Amit Kapila, Kuntal Ghosh and Ajin Cherian
Tested-by: Neha Sharma, Mahendra Singh Thalor and Ajin Cherian
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
Listing a full set of relations with those psql meta-commands, without
a matching pattern, has never shown the access method associated with
each relation. This commit adds the access method of tables, indexes
and matviews, masking it for relation kinds where it does not apply.
Note that when HIDE_TABLEAM is enabled, the information does not show
up. This is available only when connecting to a server of version 12
or newer, where table AMs were introduced.
Author: Georgios Kokolatos
Reviewed-by: Vignesh C, Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/svaS1VTOEscES9CLKVTeKItjJP1EEJuBhTsA0ESOdlnbXeQSgycYwVlliL5zt8Jwcfo4ATYDXtEqsExxjkSkkhCSTCL8fnRgaCAJdr0unUg=@protonmail.com
If a commit or abort record includes "dropped relfilenodes", then replaying
the record will remove data files. That is surely a "special rel update",
but the records were not marked as such. Fix that, teach pg_rewind to
expect and ignore them, and add a test case to cover it.
It's always been like this, but this is not back-patched, for fear of
breaking existing applications. If an application parsed the WAL but was not
handling commit/abort records, it would stop working. That might be a good
thing if it really needed to handle the dropped rels, but it will be caught
when the application is updated to work with PostgreSQL v14 anyway.
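A WAL-reading tool that, like pg_rewind, only cares whether a record
performs special rel updates can now rely on the flag; a sketch
assuming the xlogreader API:

    #include "postgres.h"
    #include "access/xlogreader.h"
    #include "access/xlogrecord.h"

    static bool
    record_does_special_rel_update(XLogReaderState *record)
    {
        /* commit/abort records that drop relfilenodes now set this too */
        return (XLogRecGetInfo(record) & XLR_SPECIAL_REL_UPDATE) != 0;
    }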
Discussion: https://www.postgresql.org/message-id/07b33e2c-46a6-86a1-5f9e-a7da73fddb95%40iki.fi
Reviewed-by: Amit Kapila, Michael Paquier
A number of client programs have a "--progress" option that, when
printing to a TTY, updates the current line by printing a '\r' and
overwriting it.
After the last line, '\n' needs to be printed to move the cursor to the
next line. pg_basebackup and pgbench got this right, but pg_rewind and
pg_checksums were slightly wrong. pg_rewind printed the newline to stdout
instead of stderr, and pg_checksums printed the newline even when not
printing to a TTY. Fix them, and also add a 'finished' argument to
pg_basebackup's progress_report() function, to keep it consistent with
the other programs.
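A generic sketch of the convention the programs now share (this is
not the actual progress_report() code of any of them):

    #include <stdio.h>
    #include <stdbool.h>
    #include <unistd.h>

    static void
    progress_report(long done, long total, bool finished)
    {
        fprintf(stderr, "%ld/%ld done", done, total);
        if (isatty(fileno(stderr)))
            /* overwrite this line next time; newline only at the very end */
            fputc(finished ? '\n' : '\r', stderr);
        else
            fputc('\n', stderr);
    }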
Backpatch to v12. pg_rewind's newline was broken with the logging changes
in commit cc8d415117 in v12, and pg_checksums was introduced in v12.
Discussion: https://www.postgresql.org/message-id/82b539e5-ae33-34b0-1aee-22b3379fd3eb@iki.fi
Reuse cautionary language from src/test/ssl/README in
src/test/kerberos/README. SLRUs have had access to six-character
segment names since commit 73c986adde, and recovery stopped calling
HeapTupleHeaderAdvanceLatestRemovedXid() in commit 558a9165e0. The
other corrections are more self-evident.
Parallel-restoring a foreign key that references a partitioned table
with several levels of partitions can fail:
    pg_restore: while PROCESSING TOC:
    pg_restore: from TOC entry 6684; 2606 29166 FK CONSTRAINT fk fk_a_fkey postgres
    pg_restore: error: could not execute query: ERROR:  there is no unique constraint matching given keys for referenced table "pk"
    Command was: ALTER TABLE fkpart3.fk
        ADD CONSTRAINT fk_a_fkey FOREIGN KEY (a) REFERENCES fkpart3.pk(a);
This happens in parallel restore mode because some index partitions
aren't yet attached to the topmost partitioned index that the FK uses,
and so the index is still invalid. The current code marks the FK as
dependent only on the first level of index-attach dump objects; the
bug is fixed by recursively marking the FK as dependent on their
children as well.
Backpatch to 12, where FKs to partitioned tables were introduced.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/3170626.1594842723@sss.pgh.pa.us
Backpatch: 12-master
Including "Full" in the variable names duplicates the type information
and leads to overly long names. As a FullTransactionId cannot
accidentally be cast to TransactionId, that does not seem necessary.
Author: Andres Freund
Discussion: https://postgr.es/m/20200724011143.jccsyvsvymuiqfxu@alap3.anarazel.de
This allows the tab completion of REINDEX to handle an optional
parenthesized list of options. This case is more complicated than
VACUUM or ANALYZE because of CONCURRENTLY and the different object types
to consider with the reindex.
Author: Justin Pryzby
Reviewed-by: Alexey Kondratov, Michael Paquier
Discussion: https://postgr.es/m/20200403182712.GR14618@telsasoft.com
Any libpq client can use the header. Clients include backend components
postgres_fdw, dblink, and logical replication apply worker. Back-patch
to v10, because another fix needs this. In released branches, just copy
the header and keep the original.
Coverity pointed out, not unreasonably, that we checked fseeko's
result at every other call site but these. Failure to seek in the
temp file (note this is NOT pg_dump's output file) seems quite
unlikely, and even if it did happen the file length cross-check
further down would probably detect the problem. Still, that's a
poor excuse for not checking the result of a system call.
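The shape of the added checks, sketched with plain libc error
reporting (the reporting style at the real call sites may differ):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/types.h>

    static void
    seek_or_die(FILE *fp, off_t offset)
    {
        if (fseeko(fp, offset, SEEK_SET) != 0)
        {
            fprintf(stderr, "could not seek in temporary file: %s\n",
                    strerror(errno));
            exit(EXIT_FAILURE);
        }
    }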
A moment's examination of these queries is sufficient to see that
they do not produce duplicate rows, unless perhaps there's
catalog corruption. Using DISTINCT anyway is inefficient and
confusing; moreover it sets a poor example for anyone who
refers to psql -E output to see how to query the catalogs.
The type-name pattern in \dAc and \dAf was matched only to the actual
pg_type.typname string, which is fairly user-unfriendly in cases where
that is not what's shown to the user by format_type (compare "_int4"
and "integer[]"). Make this code match what \dT does, i.e. match the
pattern against either typname or format_type() output. Also fix its
broken handling of schema-name restrictions. (IOW, make these
processSQLNamePattern calls match \dT's.) While here, adjust
whitespace to make the query a little prettier in -E output, too.
Also improve some inaccuracies and shaky grammar in the related
documentation.
Noted while working on a patch for intarray's opclasses; I wondered
why I couldn't get a match to "integer*" for the input type name.
This allows AM-specific knowledge to be applied during creation of
pg_amop and pg_amproc entries. Specifically, the AM knows better than
core code which entries to consider as required or optional. Giving
the latter entries the appropriate sort of dependency allows them to
be dropped without taking out the whole opclass or opfamily; which
is something we'd like to have to correct obsolescent entries in
extensions.
This callback also opens the door to performing AM-specific validity
checks during opclass creation, rather than hoping that an opclass
developer will remember to test with "amvalidate". For the most part
I've not actually added any such checks yet; that can happen in a
follow-on patch. (Note that we shouldn't remove any tests from
"amvalidate", as those are still needed to cross-check manually
constructed entries in the initdb data. So adding tests to
"amadjustmembers" will be somewhat duplicative, but it seems like
a good idea anyway.)
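A hypothetical amadjustmembers implementation, sketched against the
new callback's signature; the policy shown (making support functions
soft dependencies of the opfamily) is illustrative, not what any
in-core AM necessarily does:

    #include "postgres.h"
    #include "access/amapi.h"
    #include "nodes/pg_list.h"

    static void
    myam_adjustmembers(Oid opfamilyoid, Oid opclassoid,
                       List *operators, List *functions)
    {
        ListCell   *lc;

        foreach(lc, functions)
        {
            OpFamilyMember *member = (OpFamilyMember *) lfirst(lc);

            /* allow optional support procs to be dropped independently */
            member->ref_is_hard = false;
            member->ref_is_family = true;   /* depend on the opfamily */
        }
    }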
Patch by me, reviewed by Alexander Korotkov, Hamid Akhtar, and
Anastasia Lubennikova.
Discussion: https://postgr.es/m/4578.1565195302@sss.pgh.pa.us
The relkinds that support indexing are the same as the ones supporting
VACUUM, so the code gets refactored a bit to share the completion
query used for CLUSTER, though there is no change for CLUSTER in this
commit.
Author: Justin Pryzby
Reviewed-by: Fujii Masao, Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/20200728170408.GI20393@telsasoft.com
When fast promotion was introduced in 9.3, non-fast promotion became
an undocumented feature, basically unavailable to ordinary users.
However, we decided not to remove non-fast promotion at that time, so
as to leave it around for a release or two, for debugging purposes or
as an emergency method in case fast promotion turned out to have
issues, and to remove it later. Several versions have now been
released since that decision, and there is no longer any reason to
keep supporting non-fast promotion. Therefore this commit removes it.
Author: Fujii Masao
Reviewed-by: Hamid Akhtar, Kyotaro Horiguchi
Discussion: https://postgr.es/m/76066434-648f-f567-437b-54853b43398f@oss.nttdata.com
This corrects and simplifies $subject in a number of ways:
- Remove from the completion the pre-9.0 grammar still supported for
compatibility purposes. This simplifies the code and makes it easier
to extend with new patterns.
- Add completion for the options of FORMAT within a WITH clause.
- Complete WHERE and WITH clauses correctly depending on whether TO or
FROM is used, WHERE being only available with COPY FROM.
Author: Vignesh C, Michael Paquier
Reviewed-by: Ahsan Hadi
Discussion: https://postgr.es/m/CALDaNm3zWr=OmxeNqOqfT=uZTSdam_j-gkX94CL8eTNfgUtf6A@mail.gmail.com
max_slot_wal_keep_size, which was added in v13, and wal_keep_segments
are the GUC parameters that specify how much WAL to retain for standby
servers. While max_slot_wal_keep_size accepts a number of bytes of
WAL, wal_keep_segments accepts a number of WAL files. This difference
in units between two similar parameters could be confusing to users.
To alleviate this, this commit renames wal_keep_segments to
wal_keep_size, and makes users specify a WAL size in it instead of a
number of WAL files.
There was also the idea, in the discussion, of renaming
max_slot_wal_keep_size to max_slot_wal_keep_segments. But we have
been moving away from measuring in segments; for example,
checkpoint_segments was replaced by max_wal_size. So we concluded to
rename wal_keep_segments to wal_keep_size.
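For instance, a configuration previously retaining 64 segments of the
default 16MB size would now be spelled like this (a sketch, not taken
from the commit):

    # postgresql.conf
    # old:  wal_keep_segments = 64
    wal_keep_size = '1GB'           # 64 segments x 16MB each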
Back-patch to v13 where max_slot_wal_keep_size was added.
Author: Fujii Masao
Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, David Steele
Discussion: https://postgr.es/m/574b4ea3-e0f9-b175-ead2-ebea7faea855@oss.nttdata.com
This patch adds a "binary" option to CREATE/ALTER SUBSCRIPTION.
When that's set, the publisher will send data using the data type's
typsend function if any, rather than typoutput. This is generally
faster, if slightly less robust.
As committed, we won't try to transfer user-defined array or composite
types in binary, for fear that type OIDs won't match at the subscriber.
This might be changed later, but it seems like fit material for a
follow-on patch.
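A sketch of the new syntax (subscription, connection, and publication
names are hypothetical):

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (binary = true);

    ALTER SUBSCRIPTION mysub SET (binary = false);  -- back to text mode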
Dave Cramer, reviewed by Daniel Gustafsson, Petr Jelinek, and others;
adjusted some by me
Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com
pg_dump produces custom-format archive files that lack data offsets
when it is unable to seek its output. Up to now that's been a hazard
for pg_restore. But if pg_restore is able to seek in the archive
file, there is no reason to throw up our hands when asked to restore
data blocks out of order. Instead, whenever we are searching for a
data block, record the locations of the blocks we passed over (that
is, fill in the missing data-offset fields in our in-memory copy of
the TOC data). Then, when we hit a case that requires going
backwards, we can just seek back.
Also track the furthest point that we've searched to, and seek back
to there when beginning a search for a new data block. This avoids
possible O(N^2) time consumption, by ensuring that each data block
is examined at most twice. (On Unix systems, that's at most twice
per parallel-restore job; but since Windows uses threads here, the
threads can share block location knowledge, reducing the amount of
duplicated work.)
We can also improve the code a bit by using fseeko() to skip over
data blocks during the search.
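Reduced to a sketch, the technique looks like this (block framing and
helpers are hypothetical; the real code works with pg_dump's TOC
structures and also resumes scanning from the furthest point searched
so far, which is what bounds the work at two passes per block):

    #include <stdio.h>
    #include <stdbool.h>
    #include <sys/types.h>

    #define MAX_BLOCKS 1024

    static off_t blockpos[MAX_BLOCKS];      /* 0 = offset not yet known */

    extern int  read_block_id(FILE *fp);    /* hypothetical: -1 at EOF */
    extern void skip_block_data(FILE *fp);  /* hypothetical: fseeko()s past payload */

    static bool
    find_block(FILE *fp, int want)
    {
        if (blockpos[want] != 0)            /* passed over before: seek back */
            return fseeko(fp, blockpos[want], SEEK_SET) == 0;

        for (;;)                            /* else scan forward, remembering */
        {
            off_t       here = ftello(fp);
            int         id = read_block_id(fp);

            if (id < 0)
                return false;               /* EOF: the block isn't there */
            blockpos[id] = here;            /* fill in the missing offset */
            if (id == want)
                return fseeko(fp, here, SEEK_SET) == 0; /* rewind to start */
            skip_block_data(fp);
        }
    }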
This is all of some use even in simple restores, but it's really
significant for parallel pg_restore. In that case, we require
seekability of the input already, and we will very probably need
to do out-of-order restores.
Back-patch to v12, as this fixes a regression introduced by commit
548e50976. Before that, parallel restore avoided requesting
out-of-order restores, so it would work on a data-offset-less
archive. Now it will again.
Ideally this patch would include some test coverage, but there are
other open bugs that need to be fixed before we can extend our
coverage of parallel restore very much. Plan to revisit that later.
David Gilman and Tom Lane; reviewed by Justin Pryzby
Discussion: https://postgr.es/m/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com
We do not really need to track the file position by hand. We were
already relying on ftello() whenever the archive file is seekable,
while if it's not seekable we don't need the file position info
anyway because we're not going to be able to re-write the TOC.
Moreover, that tracking was buggy since it failed to account for
the effects of fseeko(). Somewhat remarkably, that seems not to
have made for any live bugs up to now. We could fix the oversights,
but it seems better to just get rid of the whole error-prone mess.
In itself this is merely code cleanup. However, it's necessary
infrastructure for an upcoming bug-fix patch (because that code
*does* need valid file position after fseeko). The bug fix
needs to go back as far as v12; hence, back-patch that far.
Discussion: https://postgr.es/m/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com
Windows has junction points which function as symbolic links for
directories. This patch introduces a new function TestLib::dir_symlink()
which creates a junction point on Windows and a standard Unix type
symbolic link elsewhere.
The function TestLib::perl2host is also modified, first to use cygpath
where it's available (e.g. msys2), and second to allow it to succeed
if the grandparent directory exists but the parent does not.
Given these changes the only symlink tests that need to be skipped on
Windows are those related to permissions or to use of readlink. The
relevant tests for pg_basebackup and pg_rewind are therefore adjusted
accordingly.
Andrew Dunstan, reviewed by Peter Eisentraut and Michael Paquier.
Discussion: https://postgr.es/m/c50a646c-d9bb-7c62-a4bf-8256ff6ff338@2ndquadrant.com
pg_test_fsync has always opened files in text mode on Windows, as this
is the default mode used if not enforced by _setmode().
This fixes a failure when running pg_test_fsync in 12 and newer, where
O_DSYNC and text mode are not able to work together nicely. We fixed
the handling of O_DSYNC in 12~ for the tool by switching to the
concurrent-safe version of fopen() in src/port/ with 0ba06e0. And
40cfe86, by enforcing text mode for compatibility reasons if O_TEXT
or O_BINARY is not specified by the caller, broke pg_test_fsync. For
all versions, this avoids any translation overhead, and pg_test_fsync
should be testing binary writes anyway, so it is a gain in all cases.
Note that O_DSYNC is still not handled correctly in ~11, leading
pg_test_fsync to show insanely high numbers for open_datasync() (using
this property, it is easy to notice that binary mode is much faster).
Fixing that would require back-patching 0ba06e0 and 40cfe86, which
could potentially break existing applications, so it is left out.
There are no TAP tests for this tool yet, so I have checked all builds
manually using MSVC. We could invent a new option to run a single
transaction instead of using a duration of 1s to keep the test runs
as short as possible, but this is left as future work.
Thanks to Bruce Momjian for the discussion.
Reported-by: Jeff Janes
Author: Michael Paquier
Discussion: https://postgr.es/m/16526-279ded30a230d275@postgresql.org
Backpatch-through: 9.5
When working with an online source cluster, pg_rewind gets the list of
all the files in the source data directory using a WITH RECURSIVE
query, which returns a NULL result for a file's metadata if the file
is removed between the moment it is listed in a directory and the
moment its metadata is obtained with pg_stat_file() (say, a recycled
WAL segment). The query result was processed in such a way that, for
each tuple, we checked only whether the first tuple's metadata was
NULL. This could have two consequences, both resulting in a failure
of the rewind:
- If the first tuple referred to a removed file, all files from the
source would be ignored.
- Any file actually missing would not be considered as such.
While on it, slightly rework the code so that no values are saved if
we know that a file is going to be skipped.
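The corrected shape of the result processing, sketched against libpq
(the column numbering follows the description above):

    #include "libpq-fe.h"

    static void
    process_filelist(PGresult *res)
    {
        for (int i = 0; i < PQntuples(res); i++)
        {
            if (PQgetisnull(res, i, 1)) /* was effectively (res, 0, 1) */
                continue;               /* file vanished between list and stat */
            /* ... use PQgetvalue(res, i, ...) for files that still exist ... */
        }
    }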
Issue introduced by b36805f, so backpatch down to 9.5.
Author: Justin Pryzby, Michael Paquier
Reviewed-by: Daniel Gustafsson, Masahiko Sawada
Discussion: https://postgr.es/m/20200713061010.GC23581@telsasoft.com
Backpatch-through: 9.5
* Strategy number and purpose are essential information for an
opfamily operator, so show those columns in non-verbose output.
* The "Left/right arg type" column names of \dAp are confusing,
because those types don't necessarily match the function arguments.
Rename them to "Registered left/right type".
* Replace manual assembly of operator/procedure names with casts to
regoperator/regprocedure.
* Add schema qualification for pg_catalog functions and tables.
Reported-by: Peter Eisentraut, Tom Lane
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2edc7b27-031f-b2b6-0db2-864241c91cb9%402ndquadrant.com
Backpatch-through: 13
Parallel pg_restore has always supposed that ACL items for different
objects are independent and can be restored in parallel without
conflicts. However, there is one case where this fails: because
REVOKE on a table is defined to also revoke the privilege(s) at
column level, we can't restore per-column ACLs till after we restore
any table-level privileges on their table. Failure to honor this
restriction can lead to "tuple concurrently updated" errors during
parallel restore, or even to the per-column ACLs silently disappearing
because the table-level REVOKE is executed afterwards.
To fix, add a dependency from each column-level ACL item to its table's
ACL item, if there is one. Note that this doesn't fix the hazard
for pre-existing archive files, only for ones made with a corrected
pg_dump. Given that the bug's been there quite awhile without
field reports, I think this is acceptable.
This requires changing the API of pg_dump's dumpACL() function.
To keep its argument list from getting even longer, I removed the
"CatalogId objCatId" argument, which has been unused for ages.
Per report from Justin Pryzby. Back-patch to all supported branches.
Discussion: https://postgr.es/m/20200706050129.GW4107@telsasoft.com
The previous approach was to search-and-destroy all \r occurrences
no matter what. That seems more likely to hide bugs than anything
else; indeed it seems to be hiding one now. Fix things so that
we only transform \r\n to \n.
Side effects: must do this before, not after, chomp'ing if we're
going to chomp, else we'd fail to clean up a trailing \r\n. Also,
remove safe_psql's redundant repetition of what psql already did;
else it might reduce \r\r\n to \n, which is exactly the scenario
I'm hoping to expose.
Perhaps this should be back-patched, but for now I'm content to
see what happens in HEAD.
Discussion: https://postgr.es/m/412ae8da-76bb-640f-039a-f3513499e53d@gmx.net