At the moment, pg_upgrade stores whether it is doing a "live check"
(i.e., the user specified --check and the old server is still
running) in a local variable scoped to main(). This live_check
variable is passed to several functions. To further complicate
matters, a few call sites provide a hard-coded "false" as the
live_check argument. Specifically, this is done when calling these
functions for the new cluster, for which any live-check-only paths
won't apply.
This commit moves the live_check variable to the global user_opts
variable, which stores information about the options the user
specified on the command line. This allows us to remove the
live_check parameter from several functions. For the functions
with callers that provide a hard-coded "false" as the live_check
argument (e.g., get_control_data()), we verify the given cluster is
the old cluster before taking any live-check-only paths.
This small refactoring effort helps simplify some proposed changes
that would parallelize many of pg_upgrade's once-in-each-database
tasks using libpq's asynchronous APIs. By removing the live_check
parameter, we can more easily convert the functions to callbacks
for the new parallel system.
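As a rough sketch of the resulting shape (the variable and field names here simply follow the description above, so treat them as illustrative):

    /* user_opts is the existing global holding command-line options */
    if (user_opts.live_check && cluster == &old_cluster)
    {
        /* live-check-only path; never taken for the new cluster */
    }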
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20240516211638.GA1688936%40nathanxps13
Presently, pg_upgrade obtains the number of subscriptions in the
to-be-upgraded cluster by first querying pg_subscription in every
database for the number of subscriptions in only that database.
Then, in count_old_cluster_subscriptions(), it adds all the values
collected in the first step. This is expensive, especially when
there are many databases.
Fortunately, there is a better way to retrieve the subscription
count. Since pg_subscription is a shared catalog, we only need to
connect to a single database and query it once. This commit
modifies pg_upgrade to use that approach, which also allows us to
trim several lines of code. In passing, move the call to
get_db_subscription_count(), which has been renamed to
get_subscription_count(), from get_db_rel_and_slot_infos() to the
dedicated >= v17 section in check_and_dump_old_cluster().
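A minimal libpq sketch of the single-query approach (assuming conn is a connection to any one database; the actual code uses pg_upgrade's own connection and query helpers):

    /* pg_subscription is a shared catalog, so one query suffices */
    PGresult   *res = PQexec(conn,
                             "SELECT count(*) FROM pg_catalog.pg_subscription");
    int         nsubs = atoi(PQgetvalue(res, 0, 0));

    PQclear(res);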
We may be able to make similar improvements to
get_old_cluster_logical_slot_infos(), but that is left as a future
exercise.
Reviewed-by: Michael Paquier, Amit Kapila
Discussion: https://postgr.es/m/ZprQJv_TxccN3tkr%40nathan
Backpatch-through: 17
After further review, we want to move in the direction of always
quoting GUC names in error messages, rather than the previous (PG16)
wildly mixed practice or the intermittent (mid-PG17) idea of doing
this depending on how potentially confusing the GUC name is.
This commit applies appropriate quotes to (almost?) all mentions of
GUC names in error messages. It partially supersedes a243569bf6 and
8d9978a717, which had moved things a bit in the opposite direction
but which then were abandoned in a partial state.
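For illustration only (not a message taken from the patch), the preferred style keeps the GUC name out of the translatable string and quotes it via a placeholder:

    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("invalid value for parameter \"%s\": \"%s\"",
                    "default_toast_compression", value)));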
Author: Peter Smith <smithpb2250@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHut%2BPv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w%40mail.gmail.com
Run pgindent, pgperltidy, and reformat-dat-files.
The pgindent part of this is pretty small, consisting mainly of
fixing up self-inflicted formatting damage from patches that
hadn't bothered to add their new typedefs to typedefs.list.
In order to keep it from making anything worse, I manually added
a dozen or so typedefs that appeared in the existing typedefs.list
but not in the buildfarm's list. Perhaps we should formalize that,
or, better, find a way to get those typedefs into the automatic list.
pgperltidy is as opinionated as always, and reformat-dat-files too.
Coverity identified this as a resource leak. It's surely of no
consequence given that the function is called only once per run, but
freeing the storage is no more work than dismissing the complaint.
Minor oversight in commit 347758b12.
The checks for data type usage were each connecting to all databases
in the cluster and running their query. On clusters with many
databases this can become unnecessarily expensive. This moves the
checks to run in a single connection instead to minimize setup and
teardown overhead.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/BB4C76F-D416-4F9F-949E-DBE950D37787@yesql.se
Most callers of strerror() are removed from the backend code. The
remaining callers require special handling with a saved errno from a
previous system call. The frontend code still needs strerror() where
error states need to be handled outside of fprintf.
Note that pg_regress is not changed to use %m as the TAP output may
clobber errno, since those functions call fprintf() and friends before
evaluating the format string.
Support for %m in src/port/snprintf.c has been added in d6c55de1f9,
hence all the stable branches currently supported include it.
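An illustrative frontend call (not taken from the patch itself); with %m handled by src/port/snprintf.c, the saved errno is formatted inside the printf machinery:

    /* instead of: pg_log_error("... \"%s\": %s", path, strerror(errno)); */
    pg_log_error("could not open file \"%s\": %m", path);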
Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87sf13jhuw.fsf@wibble.ilmari.org
The copy_file_range() system call is available on at least Linux and
FreeBSD, and asks the kernel to use efficient ways to copy ranges of a
file. Options available to the kernel include sharing block ranges
(similar to --clone mode), and pushing down block copies to the storage
layer.
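A minimal sketch of a copy loop built on copy_file_range() (Linux shown; error handling and the fallback to a plain read/write loop are omitted):

    #define _GNU_SOURCE
    #include <unistd.h>

    /* copy src_size bytes; the kernel may share blocks or offload the copy */
    off_t       remaining = src_size;

    while (remaining > 0)
    {
        ssize_t     nwritten = copy_file_range(src_fd, NULL, dst_fd, NULL,
                                               remaining, 0);

        if (nwritten < 0)
            pg_fatal("error while copying file: %m");
        remaining -= nwritten;
    }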
For automated testing, see PG_TEST_PG_UPGRADE_MODE. (Perhaps in a later
commit we could consider setting this mode for one of the CI targets.)
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CA%2BhUKGKe7Hb0-UNih8VD5UNZy5-ojxFb3Pr3xSBBL8qj2M2%3DdQ%40mail.gmail.com
This feature will allow us to replicate the changes on subscriber nodes
after the upgrade.
Previously, only the subscription metadata information was preserved.
Without the list of relations and their state, it's not possible to
re-enable the subscriptions without missing some records as the list of
relations can only be refreshed after enabling the subscription (and
therefore starting the apply worker). Even if we added a way to refresh
the subscription while enabling a publication, we still wouldn't know
which relations are new on the publication side, and therefore should be
fully synced, and which shouldn't.
To preserve the subscription relations, this patch teaches pg_dump to
restore the content of pg_subscription_rel from the old cluster by using
the binary_upgrade_add_sub_rel_state() SQL function. This is supported only
in binary upgrade mode.
The subscription's replication origin is needed to ensure that we don't
replicate anything twice.
To preserve the replication origins, this patch teaches pg_dump to update
the replication origin along with creating a subscription by using the
binary_upgrade_replorigin_advance() SQL function to restore the
underlying replication origin remote LSN. This is supported only in
binary upgrade mode.
pg_upgrade will check that all the subscription relations are in 'i'
(init) or in 'r' (ready) state and will error out if that's not the case,
logging the reason for the failure. This helps to avoid the risk of any
dangling slot or origin after the upgrade.
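A simplified sketch of that check, using pg_upgrade's executeQueryOrDie() helper (the real query and error reporting are more elaborate):

    res = executeQueryOrDie(conn,
                            "SELECT srrelid::pg_catalog.regclass, srsubstate "
                            "FROM pg_catalog.pg_subscription_rel "
                            "WHERE srsubstate NOT IN ('i', 'r')");
    if (PQntuples(res) > 0)
    {
        /* log each offending relation and its state, then fail the check */
    }
    PQclear(res);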
Author: Vignesh C, Julien Rouhaud, Shlok Kyal
Reviewed-by: Peter Smith, Masahiko Sawada, Michael Paquier, Amit Kapila, Hayato Kuroda
Discussion: https://postgr.es/m/20230217075433.u5mjly4d5cr4hcfe@jrouhaud
While reading information from the old cluster, a list of logical
slots is fetched. At a later stage of the upgrade, pg_upgrade revisits the
list and restores slots by executing pg_create_logical_replication_slot()
on the new cluster. Migration of logical replication slots is only
supported when the old cluster is version 17.0 or later.
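Conceptually, the restore step issues something like the following for each recorded slot (a simplified sketch; slotname and plugin are assumed field names, and the real code also escapes identifiers and passes slot options such as two_phase):

    PQclear(executeQueryOrDie(conn,
            "SELECT pg_catalog.pg_create_logical_replication_slot('%s', '%s')",
            slot->slotname, slot->plugin));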
If the old node has invalid slots or slots with unconsumed WAL records,
pg_upgrade fails. These checks are needed to prevent data loss.
The significant advantage of this commit is that it makes it easy to
continue logical replication even after upgrading the publisher node.
Previously, pg_upgrade allowed copying publications to a new node. With
this patch, adjusting the connection string to the new publisher will
cause the apply worker on the subscriber to connect to the new publisher
automatically. This enables seamless continuation of logical replication,
even after an upgrade.
Author: Hayato Kuroda, Hou Zhijie
Reviewed-by: Peter Smith, Bharath Rupireddy, Dilip Kumar, Vignesh C, Shlok Kyal
Discussion: http://postgr.es/m/TYAPR01MB58664C81887B3AF2EB6B16E3F5939@TYAPR01MB5866.jpnprd01.prod.outlook.com
Discussion: http://postgr.es/m/CAA4eK1+t7xYcfa0rEQw839=b2MzsfvYDPz3xbD+ZqOdP3zpKYg@mail.gmail.com
Commit cda6a8d01d removed a few datatypes, but didn't update
pg_upgrade --check to throw an error if these types are used. So users
find that pg_upgrade --check tells them that everything is fine, only to
fail when the real upgrade is attempted.
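The added check is essentially a per-database query for columns of the removed types, along these lines (a sketch; it assumes the types in question are the long-deprecated abstime/reltime/tinterval):

    res = executeQueryOrDie(conn,
                            "SELECT c.relname, a.attname "
                            "FROM pg_catalog.pg_class c, pg_catalog.pg_attribute a "
                            "WHERE c.oid = a.attrelid AND NOT a.attisdropped AND "
                            "      a.atttypid IN ('pg_catalog.abstime'::pg_catalog.regtype, "
                            "                     'pg_catalog.reltime'::pg_catalog.regtype, "
                            "                     'pg_catalog.tinterval'::pg_catalog.regtype)");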
Reviewed-by: Tristan Partin <tristan@neon.tech>
Reviewed-by: Suraj Kharage <suraj.kharage@enterprisedb.com>
Discussion: https://postgr.es/m/202309201654.ng4ksea25mti@alvherre.pgsql
The new_cluster parameter in check_for_new_tablespace_dir was
shadowing the globally defined new_cluster variable, causing
compiler warnings when running with -Wshadow. The function is
only applicable to the new cluster, so remove the parameter rather
than renaming it, matching check_new_cluster_is_empty() which also
only applies to the new cluster.
Author: Peter Smith <peter.b.smith@fujitsu.com>
Discussion: https://postgr.es/m/CAHut+PvS_PHLntWy1yTgXv0O1tWm4iVcKBQFzpoQRDsm2Ce_Fg@mail.gmail.com
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
The check for the number of roles in the target cluster for an upgrade
selects the existing roles and performs a COUNT(*) over the result. A
result of one is expected, indicating that only the install user is
present in the new cluster. The result was converted with the function
for converting a string containing an Oid into a numeric value, which
avoids potential overflow but makes the code less readable since it's
not actually an Oid at all.
Discussion: https://postgr.es/m/41AB5F1F-4389-4B25-9668-5C430375836C@yesql.se
Previously, pg_upgrade checked that the old and new clusters were
compatible, including the locale and encoding. But the new cluster was
just created, and only template0 from the new cluster will be
preserved (template1 and postgres are both recreated during the
upgrade process).
Because template0 is not sensitive to locale or encoding, just update
the pg_database entry to be the same as template0 from the original
cluster.
This commit makes it easier to change the default initdb locale or
encoding settings without causing needless incompatibilities.
Discussion: https://postgr.es/m/d62b2874-729b-d26a-2d0a-0d64f509eca4@enterprisedb.com
Reviewed-by: Peter Eisentraut
When checking for roles with a pg_ prefix in <= 9.5 servers, report
all roles found in a text file, as other checks do, instead of just
reporting that they exist. Role names are printed with their Oids to
match other reports.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/4BB2735C-C347-4CF7-AFB1-8573BE930066@yesql.se
We're running out of bits for new permissions. This change doubles the
number of permissions we can accommodate from 16 to 32, so the
forthcoming new ones for vacuum/analyze don't exhaust the pool.
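Roughly speaking (a simplified sketch, not the exact definitions), the widened type keeps the lower half for privilege bits and the upper half for the matching grant options:

    typedef uint64 AclMode;     /* was uint32 */

    #define ACLITEM_GET_PRIVS(item)     ((item).ai_privs & 0xFFFFFFFF)
    #define ACLITEM_GET_GOPTIONS(item)  (((item).ai_privs >> 32) & 0xFFFFFFFF)
    #define ACL_GRANT_OPTION_FOR(privs) (((AclMode) (privs) & 0xFFFFFFFF) << 32)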
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
This change concerns a couple of .txt files (for internal state checks)
that were still written in the path where the binary is executed, and
not in the subdirectory located in the target cluster. Like the other
.txt files that already do so (such as loadable_libraries.txt), these
are now saved in the base output directory. Note that on failure, the logs
report the full path to the .txt file generated, so these are easy to
find.
Oversight in 38bfae3.
Author: Daniel Gustafsson
Reviewed-by: Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/181A6DA8-3B7F-4B71-82D5-363FF0146820@yesql.se
Backpatch-through: 15
When reporting failure in check_ functions there is (typically) a text
file mentioned in the error report which contains further details. Some
check_ functions kept a separate flag variable to indicate failure, and
some just checked the state of the filehandle as it's guaranteed to be
open when the check failed. This refactors the functions to consistently
do the same check on error reporting. As the error report contains the
filepath, it makes more sense to check the filehandle state and skip the
flag variable.
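In rough terms, the common shape after this change looks like the following sketch (not the exact code):

    FILE       *script = NULL;

    /* for each problem found, open the report file lazily */
    if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
        pg_fatal("could not open file \"%s\": %s",
                 output_path, strerror(errno));
    fprintf(script, "problem details ...\n");

    /* after scanning: the open filehandle doubles as the failure flag */
    if (script)
    {
        fclose(script);
        pg_fatal("... details are in the file:\n    %s", output_path);
    }
    else
        check_ok();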
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Discussion: https://postgr.es/m/595759F6-625B-4ED7-8125-91AF00437F83@yesql.se
pg_upgrade does not use common/logging.c, which is unfortunate
but changing it to do so seems like more work than is justified.
However, we really need to make it work more like common/logging.c
in one respect: the latter expects supplied message strings to not
end with a newline, instead adding one internally. As it stands,
pg_upgrade's logging facilities expect a caller-supplied newline
in some cases and not others, which is already an invitation to bugs,
but the inconsistency with our other frontend code makes it worse.
There are already several places with missing or extra newlines,
and it's inevitable that there will be more if we let this stand.
Hence, run around and get rid of all trailing newlines in message
strings, and add an Assert that there's not one, similar to the
existing Assert in common/logging.c. Adjust the logging functions
to supply a newline at the right places.
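The added guard is essentially the same one common/logging.c already uses, something along the lines of:

    /* message strings must not end with a newline; the logger adds it */
    Assert(fmt[strlen(fmt) - 1] != '\n');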
(Some of these strings also have a *leading* newline, which would
be a good thing to get rid of too; but this patch doesn't attempt
that.)
There are some consequent minor changes in output. The ones that
aren't outright bug fixes are generally removal of extra blank
lines that the original coding intentionally inserted. It didn't
seem worth being bug-compatible with that.
Patch by me, reviewed by Kyotaro Horiguchi and Peter Eisentraut
Discussion: https://postgr.es/m/113191.1655233060@sss.pgh.pa.us
When we changed some built-in functions to use anycompatiblearray
instead of anyarray, we created a dump/restore hazard for user-defined
operators and aggregates relying on those functions: the user objects
have to be modified to change their signatures similarly. This causes
pg_upgrade to fail partway through if the source installation contains
such objects. We generally try to have pg_upgrade detect such hazards
and fail before it does anything exciting, so add logic to detect
this case too.
Back-patch to v14 where the change was made.
Justin Pryzby, reviewed by Andrey Borodin
Discussion: https://postgr.es/m/3383880.QJadu78ljV@vejsadalnx
38bfae3 has moved the contents written to files by pg_upgrade under a
new directory called pg_upgrade_output.d/ located in the new cluster's
data folder, and it used a simple structure made of two subdirectories
leading to a fixed structure: log/ and dump/. This design made
pg_upgrade weaker on repeated calls, as we could get failures when
creating one or more of those directories, while potentially losing the
logs of a previous run (logs are retained automatically on failure, and
cleaned up on success unless --retain is specified). So a user would
need to clean up pg_upgrade_output.d/ as an extra step for any repeated
calls of pg_upgrade. The most common scenario here is --check followed
by the actual upgrade, but one could see a failure when specifying an
incorrect input argument value. Removing the logs entirely would have
the disadvantage of losing all the past information, even if --retain
was specified at some earlier step.
This result is annoying for a lot of users and automated upgrade flows.
So, rather than requiring a manual removal of pg_upgrade_output.d/, this
redesigns the set of output directories in a more dynamic way, based on
a suggestion from Tom Lane and Daniel Gustafsson. pg_upgrade_output.d/
is still the base path, but a second directory level is added, mostly
named after an ISO-8601-formatted timestamp (in short, human-readable,
with milliseconds appended to the name to avoid any conflicts). The
logs and dumps are saved within the same subdirectories as previously,
namely log/ and dump/, but these are located inside the subdirectory
named after the timestamp.
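For example (the timestamp shown is purely illustrative), a run now produces a layout along these lines:

    $PGDATA/pg_upgrade_output.d/
        20220401T120000.000/
            dump/
            log/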
The logs of a given run are removed only after a successful run if
--retain is not used, and pg_upgrade_output.d/ is kept if there are any
logs from a previous run. Note that previously, pg_upgrade would have
kept the logs even after a successful --check, but that was inconsistent
with the case without --check when using --retain. The code in
charge of the removal of the output directories is now refactored into a
single routine.
Two TAP tests are added with some --check commands (one failure case and
one success case) to cover the issue fixed here. Note that the tests
had to be tweaked a bit to fit with the new directory structure so
that they can find any logs generated on failure. This is still going
to require a change in the buildfarm client for the case where
pg_upgrade is tested without the TAP test, but I'll tackle that with a
separate patch where needed.
Reported-by: Tushar Ahuja
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Justin Pryzby
Discussion: https://postgr.es/m/77e6ecaa-2785-97aa-f229-4b6e047cbd2b@enterprisedb.com
"\r" (for progress output) must not be inside a translatable string
(gettext gets upset).
In passing, move the minimum supported version number to a separate
argument, so that we don't have to retranslate this string every year
now.
The check for datallowconn being properly set on all databases in the
old cluster errored out on the first instance, rather than reporting
the set of problematic databases. This adds reporting to a text file,
as many other checks do. While there, also add a comment to the
function matching how other checks are commented.
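A minimal sketch of the reporting shape (the real check also verifies that template0 itself does not allow connections):

    res = executeQueryOrDie(conn,
                            "SELECT datname FROM pg_catalog.pg_database "
                            "WHERE datname != 'template0' AND NOT datallowconn");
    for (int i = 0; i < PQntuples(res); i++)
        fprintf(script, "%s\n", PQgetvalue(res, i, 0));     /* collect, don't bail */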
This check won't catch the case where template1 isn't allowing connections, since
that's used for connecting in the first place. That error remains as
it is today:
connection to server on socket ".." failed: FATAL: database "template1" is not currently accepting connections
Author: Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>
Reviewed-by: Suraj Kharage <suraj.kharage@enterprisedb.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CAOgcT0McABqF_iFFQuuRf9is+gmYMsmUu_SWNikyr=2VGyP9Jw@mail.gmail.com
This adds the option to use ICU as the default locale provider for
either the whole cluster or a database. New options for initdb,
createdb, and CREATE DATABASE are used to select this.
Since some (legacy) code still uses the libc locale facilities
directly, we still need to set the libc global locale settings even if
ICU is otherwise selected. So pg_database now has three
locale-related fields: the existing datcollate and datctype, which are
always set, and a new daticulocale, which is only set if ICU is
selected. A similar change is made in pg_collation for consistency,
but in that case, only the libc-related fields or the ICU-related
field is set, never both.
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
Historically, the location of any files generated by pg_upgrade, such as
the per-database logs and internal dumps, has been the current working
directory, leaving all those files behind when using --retain or on a
failure.
Putting all those contents in a dedicated subdirectory makes the whole
thing easier to debug, and simplifies the code in charge of cleaning up
the logs. Note that another reason is that this facilitates the move of
pg_upgrade to TAP, with a fixed location for all the logs to grab if the
test fails repeatedly.
Initially, we thought about being able to specify the output directory
with a new option, but we have settled on using a subdirectory located
at the root of the new cluster's data folder, "pg_upgrade_output.d",
instead, since in the end the new data directory is the location of all
the data generated by pg_upgrade. There is a catch with group
permissions here, though: if the new data folder has been initialized
with this option, we need to create all the files and paths with the
correct permissions, or a base backup taken after a pg_upgrade --retain
would fail. This means that GetDataDirectoryCreatePerm() has to be
called before creating the log paths, before a couple of sanity checks
on the clusters, and before getting the socket directory for the
cluster's host settings.
The idea of the new location is based on a suggestion from Peter
Eisentraut.
Also thanks to Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson, Tom
Lane and Bruce Momjian for the discussion (in alphabetical order).
Author: Justin Pryzby
Discussion: https://postgr.es/m/20211212025017.GN17618@telsasoft.com
Per discussion, we'll limit support for old servers to those branches
that can still be built easily on modern platforms, which as of now
is 9.2 and up.
Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
Also "make reformat-dat-files".
The only change worthy of note is that pgindent messed up the formatting
of launcher.c's struct LogicalRepWorkerId, which led me to notice that
that struct wasn't used at all anymore, so I just took it out.
Don't advocate dropping a whole table when dropping a column would
serve. While at it, try to make the layout of these messages a
bit cleaner and more consistent.
Per suggestion from Daniel Gustafsson. No back-patch, as we generally
don't like to churn translatable messages in released branches.
Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
Commits 29aeda6e4 et al closed up some oversights involving not checking
for non-upgradable types within container types, such as arrays and
ranges. However, I only looked at version.c, failing to notice that
there were substantially-equivalent tests in check.c. (The division
of responsibility between those files is less than clear...)
In addition, because genbki.pl does not guarantee that auto-generated
rowtype OIDs will hold still across versions, we need to consider that
the composite type associated with a system catalog or view is
non-upgradable. It seems unlikely that someone would have a user
column declared that way, but if they did, trying to read it in another
PG version would likely draw "no such pg_type OID" failures, thanks
to the type OID embedded in composite Datums.
To support the composite and reg*-type cases, extend the recursive
query that does the search to allow any base query that returns
a column of pg_type OIDs, rather than limiting it to exactly one
starting type.
As before, back-patch to all supported branches.
Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
With the 'noError' argument, you can try to convert a buffer without
knowing the character boundaries beforehand. The functions now need to
return the number of input bytes successfully converted.
This is a backwards-incompatible change if you have created a custom
encoding conversion with CREATE CONVERSION. This adds a check to
pg_upgrade for that, refusing the upgrade if there are any user-defined
encoding conversions. Custom conversions are very rare; there are no
commonly used extensions that I know of that use that feature. No other
objects can depend on conversions, so if you do have one, you can fairly
easily drop it before upgrading, and recreate it after the upgrade with
an updated version.
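The check itself boils down to looking for conversions created after initdb, roughly like this sketch (16384 being FirstNormalObjectId, the usual cutoff for user-created objects):

    res = executeQueryOrDie(conn,
                            "SELECT n.nspname, c.conname "
                            "FROM pg_catalog.pg_conversion c "
                            "JOIN pg_catalog.pg_namespace n ON n.oid = c.connamespace "
                            "WHERE c.oid >= 16384");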
Add regression tests for built-in encoding conversions. This doesn't cover
every conversion, but it covers all the internal functions in conv.c that
are used to implement the conversions.
Reviewed-by: John Naylor
Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01%40iki.fi
The patch needs test cases, reorganization, and cfbot testing.
Technically reverts commits 5c31afc49d..e35b2bad1a (exclusive/inclusive)
and 08db7c63f3..ccbe34139b.
Reported-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/E1ktAAG-0002V2-VB@gemulon.postgresql.org
This adds a key management system that stores (currently) two data
encryption keys of length 128, 192, or 256 bits. The data keys are
AES256 encrypted using a key encryption key, and validated via GCM
cipher mode. A command to obtain the key encryption key must be
specified at initdb time, and will be run at every database server
start. New parameters allow a file descriptor open to the terminal to
be passed. pg_upgrade support has also been added.
Discussion: https://postgr.es/m/CA+fd4k7q5o6Nc_AaX6BcYM9yqTbC6_pnH-6nSD=54Zp6NBQTCQ@mail.gmail.com
Discussion: https://postgr.es/m/20201202213814.GG20285@momjian.us
Author: Masahiko Sawada, me, Stephen Frost