Split out "ConfigOptionIsVisible" to perform the privilege
check for GUC_SUPERUSER_ONLY GUCs (which these days can also
be read by pg_read_all_settings role members), and move the
should-we-show-it checks from GetConfigOptionValues to its
sole caller.
This commit also removes get_explain_guc_options's check of
GUC_NO_SHOW_ALL, which seems to have got cargo-culted in there.
While there's no obvious use-case for marking a GUC both
GUC_EXPLAIN and GUC_NO_SHOW_ALL, if it were set up that way
one would expect EXPLAIN to show it --- if that's not what
you want, then don't set GUC_EXPLAIN.
In passing, simplify the loop logic in show_all_settings.
Nitin Jadhav, Bharath Rupireddy, Tom Lane
Discussion: https://postgr.es/m/CAMm1aWYgfekpRK-Jz5=pM_bV+Om=ktGq1vxTZ_dr1Z6MV-qokA@mail.gmail.com
Avoid having walreceiver code know explicitly about the precision
and underlying datatype of TimestampTz. (There is still one
calculation that knows that, which should be replaced with use of
TimestampDifferenceMilliseconds; but we need to figure out what to do
about overflow cases first.)
In support of this, provide a TimestampTzPlusSeconds macro, as well
as TIMESTAMP_INFINITY and TIMESTAMP_MINUS_INFINITY macros. (We could
have used the existing DT_NOEND and DT_NOBEGIN symbols, but I judged
those too opaque and confusing.)
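For illustration, a minimal sketch of these macros, assuming the int64
microseconds-since-epoch representation of TimestampTz (the point being
that this knowledge now lives in one header rather than in callers):

    #define TIMESTAMP_INFINITY        PG_INT64_MAX
    #define TIMESTAMP_MINUS_INFINITY  PG_INT64_MIN
    #define TimestampTzPlusSeconds(tz,s) \
        ((TimestampTz) (tz) + ((s) * (int64) 1000000))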
Move GetCurrentTimestamp calls so that it's more obvious that we
are not using stale values of "now" anyplace. This doesn't result
in net more calls, and might indeed make for net fewer.
Avoid having a dummy value in the WalRcvWakeupReason enum, so that
we can hope for the compiler to catch overlooked switch cases.
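One way to keep an array-size constant without a dummy enum member is a
#define tied to the last real value; a sketch of the pattern (member
names abbreviated):

    typedef enum WalRcvWakeupReason
    {
        WALRCV_WAKEUP_TERMINATE,
        WALRCV_WAKEUP_PING,
        /* ... further real reasons ... */
        WALRCV_WAKEUP_HSFEEDBACK,
    #define NUM_WALRCV_WAKEUPS (WALRCV_WAKEUP_HSFEEDBACK + 1)
    } WalRcvWakeupReason;

Since NUM_WALRCV_WAKEUPS is no longer an enum member, a switch over the
enum that omits a real case can draw a compiler warning.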
Nathan Bossart and Tom Lane
Discussion: https://postgr.es/m/20230125235004.GA1327755@nathanxps13
The eager freezing strategy avoids large build-ups of all-visible pages. It
makes VACUUM trigger page-level freezing whenever doing so will enable
the page to become all-frozen in the visibility map. This is useful for
tables that experience continual growth, particularly strict append-only
tables such as pgbench's history table. Eager freezing significantly
improves performance stability by spreading out the cost of freezing
over time, rather than doing most freezing during aggressive VACUUMs.
It complements the insert autovacuum mechanism added by commit b07642db.
VACUUM determines its freezing strategy based on the value of the new
vacuum_freeze_strategy_threshold GUC (or reloption) for logged tables.
Tables that exceed the size threshold use the eager freezing strategy.
Unlogged tables and temp tables always use the eager freezing strategy,
since the added cost is negligible there. Non-permanent relations won't
incur any extra overhead in WAL written (for the obvious reason), nor in
pages dirtied (since any extra freezing will only take place on pages
whose PD_ALL_VISIBLE bit needed to be set either way).
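In outline, the strategy decision described above amounts to something
like the following (a hypothetical sketch; the real identifiers and the
threshold's units may differ):

    bool    eager_freeze;

    if (!RelationIsPermanent(rel))
        eager_freeze = true;    /* unlogged/temp: extra cost negligible */
    else
        eager_freeze = (table_size >= vacuum_freeze_strategy_threshold);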
VACUUM uses the lazy freezing strategy for logged tables that fall under the
GUC size threshold. Page-level freezing triggers based on the criteria
established in commit 1de58df4, which added basic page-level freezing.
Eager freezing is strictly more aggressive than lazy freezing. Settings
like vacuum_freeze_min_age still get applied in just the same way in
every VACUUM, independent of the strategy in use. The only mechanical
difference between eager and lazy freezing strategies is that only the
former applies its own additional criteria to trigger freezing pages.
Note that even the lazy freezing strategy will trigger freezing whenever a
page happens to have required that an FPI be written during pruning,
provided that the page will thereby become all-frozen in the visibility
map afterwards (due to the FPI optimization from commit 1de58df4).
The vacuum_freeze_strategy_threshold default setting is 4GB. This is a
relatively low setting that prioritizes performance stability. It will
be reviewed at the end of the Postgres 16 beta period.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkFok_6EAHuK39GaW4FjEFQsY=3J0AAd6FXk93u-Xq3Fg@mail.gmail.com
Invent separate macros for "invalid" values of these types, so that
we needn't embed knowledge of their representations into calling code.
These are all zeroes anyway ATM, so this is not fixing any live bug,
but it makes the code cleaner and more future-proof.
I (tgl) also chose to move DSM_HANDLE_INVALID into dsm_impl.h,
since it seems like it should live beside the typedef for dsm_handle.
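Given that the representations are all zeroes today, the macros can be
as simple as, for example (a sketch):

    #define DSM_HANDLE_INVALID  ((dsm_handle) 0)

Callers then test against the macro instead of hard-coding 0.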
Hou Zhijie, Nathan Bossart, Kyotaro Horiguchi, Tom Lane
Discussion: https://postgr.es/m/OS0PR01MB5716860B1454C34E5B179B6694C99@OS0PR01MB5716.jpnprd01.prod.outlook.com
This adds combine, serial and deserial functions for the array_agg() and
string_agg() aggregate functions, thus allowing these aggregates to
partake in partial aggregation. This allows parallel aggregation to take
place when these aggregates are present, and also allows additional
partition-wise aggregation plan shapes, including plans that require
additional aggregation once the partially aggregated results from the
partitions have been combined.
Author: David Rowley
Reviewed-by: Andres Freund, Tomas Vondra, Stephen Frost, Tom Lane
Discussion: https://postgr.es/m/CAKJS1f9sx_6GTcvd6TMuZnNtCh0VhBzhX6FZqw17TgVFH-ga_A@mail.gmail.com
This will ease a follow-up change that will generate this code
automatically. The C file is renamed, for consistency with the
node-related files whose code is generated by gen_node_support.pl:
- queryjumble.c -> queryjumblefuncs.c
- utils/queryjumble.h -> nodes/queryjumble.h
Per a suggestion from Peter Eisentraut.
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/Y5BHOUhX3zTH/ig6@paquier.xyz
Currently, for large transactions, the publisher sends the data in
multiple streams (changes divided into chunks depending upon
logical_decoding_work_mem), and then on the subscriber-side, the apply
worker writes the changes into temporary files and once it receives the
commit, it reads from those files and applies the entire transaction. To
improve the performance of such transactions, we can instead allow them to
be applied via parallel workers.
In this approach, we assign a new parallel apply worker (if available) as
soon as the xact's first stream is received and the leader apply worker
will send changes to this new worker via shared memory. The parallel apply
worker will directly apply the change instead of writing it to temporary
files. However, if the leader apply worker times out while attempting to
send a message to the parallel apply worker, it will switch to
"partial serialize" mode - in this mode, the leader serializes all
remaining changes to a file and notifies the parallel apply workers to
read and apply them at the end of the transaction. Messages are sent
from the leader apply worker to the parallel apply worker in a
non-blocking way, to avoid deadlocks. We keep the parallel apply worker
assigned until the transaction commit is received and also wait for the
worker to finish at commit. This preserves commit ordering and avoids
writing to and reading from files in most cases. We still need to spill
if there is no worker available.
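A sketch of the leader's send path (identifiers such as winfo are
approximate, not the patch's exact code):

    /* try to hand the change to the parallel apply worker */
    result = shm_mq_send(winfo->mq_handle, nbytes, data,
                         true /* nowait */ , true /* force_flush */ );

    if (result == SHM_MQ_WOULD_BLOCK && timeout_exceeded)
    {
        /*
         * Switch to "partial serialize" mode: spill the remaining
         * changes to a file that the worker reads at end of xact.
         */
        winfo->serialize_changes = true;
    }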
This patch also extends the SUBSCRIPTION 'streaming' parameter so that the
user can control whether to apply the streaming transaction in a parallel
apply worker or spill the change to disk. The user can set the streaming
parameter to 'on/off', or 'parallel'. The parameter value 'parallel' means
the streaming will be applied via a parallel apply worker, if available.
The parameter value 'on' means the streaming transaction will be spilled
to disk. The default value is 'off' (same as current behaviour).
In addition, the patch extends the logical replication STREAM_ABORT
message so that abort_lsn and abort_time can also be sent which can be
used to update the replication origin in parallel apply worker when the
streaming transaction is aborted. Because this message extension is needed
to support parallel streaming, parallel streaming is not supported for
publications on servers < PG16.
Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
This is not really complete, but it catches most cases of practical
interest. The main omissions are:
* regtype, regprocedure, and regoperator parse type names by
calling the main grammar, so any grammar-detected syntax error
will still be a hard error. Also, if one includes a type
modifier in such a type specification, errors detected by the
typmodin function will be hard errors.
* Lookup errors are handled just by passing missing_ok = true
to the relevant catalog lookup function. Because we've used
quite a restrictive definition of "missing_ok", this means that
edge cases such as "the named schema exists, but you lack
USAGE permission on it" are still hard errors.
It would make sense to me to replace most/all missing_ok
parameters with an escontext parameter and then allow these
additional lookup failure cases to be trapped too. But that's
a job for some other day.
Discussion: https://postgr.es/m/3342239.1671988406@sss.pgh.pa.us
Historically these input functions just called strtoul or strtoull
and returned the result, with no error detection whatever. Upgrade
them to reject garbage input and out-of-range values, similarly to
our other numeric input routines.
To share the code for this with type oid, adjust the existing
"oidin_subr" to be agnostic about the SQL name of the type it is
handling, and move it to numutils.c; then clone it for 64-bit types.
Because the xid types previously accepted hex and octal input by
reason of calling strtoul[l] with third argument zero, I made the
common subroutine do that too, with the consequence that type oid
now also accepts hex and octal input. In view of 6fcda9aba, that
seems like a good thing.
While at it, simplify the existing over-complicated handling of
syntax errors from strtoul: we only need one ereturn not three.
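The shared subroutine follows the usual pattern for strict integer
input; a sketch of the 64-bit variant (names and details approximate;
the real code also deals with leading whitespace and sign handling):

    uint64
    uint64in_subr(const char *s, char **endloc,
                  const char *typname, Node *escontext)
    {
        uint64      result;
        char       *endptr;

        errno = 0;
        result = strtoull(s, &endptr, 0);  /* base 0: also hex, octal */

        if (errno == ERANGE)
            ereturn(escontext, 0,
                    (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                     errmsg("value \"%s\" is out of range for type %s",
                            s, typname)));
        if (endptr == s || (endloc == NULL && *endptr != '\0'))
            ereturn(escontext, 0,
                    (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                     errmsg("invalid input syntax for type %s: \"%s\"",
                            typname, s)));
        if (endloc)
            *endloc = endptr;
        return result;
    }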
Discussion: https://postgr.es/m/3526121.1672000729@sss.pgh.pa.us
This introduces palloc_aligned() and MemoryContextAllocAligned() which
allow callers to obtain memory which is allocated to the given size and
also aligned to the specified alignment boundary. The alignment
boundaries may be any power-of-2 value. Currently, the alignment is
capped at 2^26; however, we don't expect values anything like that large.
The primary expected use case is to align allocations to perhaps CPU
cache line size or to maybe I/O page size. Certain use cases can benefit
from having aligned memory by either having better performance or more
predictable performance.
The alignment is achieved by requesting 'alignto' additional bytes from
the underlying allocator function and then aligning the address that is
returned to the requested alignment. This obviously does waste some
memory, so alignments should be kept as small as required.
It's also important to note that these alignment bytes eat into the
maximum allocation size. So something like:
palloc_aligned(MaxAllocSize, 64, 0);
will not work as we cannot request MaxAllocSize + 64 bytes.
Additionally, because we simply request the given size plus the
alignment requirement from the given MemoryContext, this is not going to
work if that context is the Slab allocator, since slab can only provide
chunks of the size specified when the slab context was created. Slab
will generate an error to indicate that the requested size is not
supported.
The alignment that is requested in palloc_aligned() is stored along with
the allocated memory. This allows the alignment to remain intact through
repalloc() calls.
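The mechanics are the classic align-up trick over an over-allocated
block; a sketch (omitting the bookkeeping that lets pfree/repalloc
recover the underlying chunk):

    /* over-allocate, then round the address up to the boundary */
    void   *unaligned = MemoryContextAlloc(context, size + alignto);
    char   *aligned = (char *) TYPEALIGN(alignto, unaligned);

TYPEALIGN rounds a value up to the next multiple of a power-of-2
alignment.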
Author: Andres Freund, David Rowley
Reviewed-by: Maxim Orlov, Andres Freund, John Naylor
Discussion: https://postgr.es/m/CAApHDvpxLPUMV1mhxs6g7GNwCP6Cs6hfnYQL5ffJQTuFAuxt8A%40mail.gmail.com
This is the guts of float4in, callable as a routine to input floats,
which will be useful in an upcoming patch for allowing soft errors in
the seg module's input function.
A similar operation was performed some years ago for float8in in
commit 50861cd683.
Reviewed by Tom Lane
Discussion: https://postgr.es/m/cee4e426-d014-c0b7-aa22-a659f2cd9130@dunslane.net
A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.
Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.
Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
This is mostly straightforward, except that if the range type
has a canonical function, that might throw an error during range
input. (Such errors probably only occur for edge cases: in the
in-core canonical functions, it happens only if a bound has the
maximum valid value for the underlying type.) Hence, this patch
extends the soft-error regime to allow canonical functions to
return errors softly as well. Extensions implementing range
canonical functions will need modification anyway because of the
API change for range_serialize(); while at it, they might want
to do something similar to what's been done here in the in-core
canonical functions.
Discussion: https://postgr.es/m/3284599.1671075185@sss.pgh.pa.us
This requires a bit of further infrastructure-extension to allow
trapping errors reported by numeric_in and pg_unicode_to_server,
but otherwise it's pretty straightforward.
In the case of jsonb_in, we are only capturing errors reported
during the initial "parse" phase. The value-construction phase
(JsonbValueToJsonb) can also throw errors if assorted implementation
limits are exceeded. We should improve that, but it seems like a
separable project.
Andrew Dunstan and Tom Lane
Discussion: https://postgr.es/m/3bac9841-fe07-713d-fa42-606c225567d6@dunslane.net
Replace the error trapping scheme introduced in 5bc450629 with our
shiny new errsave/ereturn mechanism. This doesn't have any real
functional impact (although I think that the new coding is able
to report a few more errors softly than v15 did). And I doubt
there's any measurable performance difference either. But this
gets rid of an ad-hoc, one-of-a-kind design in favor of a mechanism
that will be widely used going forward, so it should be a net win
for code readability.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
This patch converts the input functions for date, time, timetz,
timestamp, timestamptz, and interval to the new soft-error style.
There's some related stuff in formatting.c that remains to be
cleaned up, but that seems like a separable project.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Pay down some ancient technical debt (dating to commit 022fd9966):
fix a couple of places in datetime parsing that were throwing
ereport's immediately instead of returning a DTERR code that could be
interpreted by DateTimeParseError. The reason for that was that there
was no mechanism for passing any auxiliary data (such as a zone name)
to DateTimeParseError, and these errors seemed to really need it.
Up to now it didn't matter that much just where the error got thrown,
but now we'd like to have a hard policy that datetime parse errors
get thrown from just the one place.
Hence, invent a "DateTimeErrorExtra" struct that can be used to
carry any extra values needed for specific DTERR codes. Perhaps
in the future somebody will be motivated to use this to improve
the specificity of other DateTimeParseError messages, but for now
just deal with the timezone-error cases.
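For now the struct needs only to carry the offending zone name; a
minimal sketch:

    typedef struct DateTimeErrorExtra
    {
        const char *dtee_timezone;   /* incorrect time zone name */
    } DateTimeErrorExtra;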
This is on the way to making the datetime input functions report
parse errors softly; but it's really an independent change, so
commit separately.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
More could be done in this line, but I just grabbed some low-hanging
fruit. Principal objective was to remove the need for several ugly
unconstify() usages in formatting.c.
This patch converts the input functions for bool, int2, int4, int8,
float4, float8, numeric, and contrib/cube to the new soft-error style.
array_in and record_in are also converted. There's lots more to do,
but this is enough to provide proof-of-concept that the soft-error
API is usable, as well as reference examples for how to convert
input functions.
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Postgres' standard mechanism for reporting errors (ereport() or elog())
is used for all sorts of error conditions. This means that throwing
an exception via ereport(ERROR) requires an expensive transaction or
subtransaction abort and cleanup, since the exception catcher dare not
make many assumptions about what has gone wrong. There are situations
where we would rather have a lighter-weight mechanism for dealing
with errors that are known to be safe to recover from without a full
transaction cleanup. This commit creates infrastructure to let us
adapt existing error-reporting code for that purpose. See the
included documentation changes for details. Follow-on commits will
provide test code and usage examples.
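To give the flavor: in an input function converted to this style, a
validation failure becomes roughly the following (looks_valid and the
type name are placeholders):

    if (!looks_valid(str))
        ereturn(escontext, (Datum) 0,
                (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                 errmsg("invalid input syntax for type %s: \"%s\"",
                        "widget", str)));

If escontext is an ErrorSaveContext, the error details are stashed
there and the function returns the dummy value; otherwise this
degenerates to a normal ereport(ERROR).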
The near-term plan is to convert most if not all datatype input
functions to report invalid input "softly". This will enable
implementing some SQL/JSON features cleanly and without the cost
of subtransactions, and it will also allow creating COPY options
to deal with bad input without cancelling the whole COPY.
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks also to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
The USER SET flag specifies that the variable should be set on behalf
of an ordinary role. That lets ordinary roles set placeholder variables,
whose permission requirements are not yet known. Such a value won't be
applied if the variable later turns out to require superuser privileges.
The new flags are stored in the pg_db_role_setting.setuser array. Catversion
is bumped.
This commit is inspired by the previous work by Steve Chavez.
Discussion: https://postgr.es/m/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ%40mail.gmail.com
Author: Alexander Korotkov, Steve Chavez
Reviewed-by: Pavel Borisov, Steve Chavez
Currently a table can only be vacuumed or analyzed by its owner or
a superuser. This can now be extended to any user by means of an
appropriate GRANT.
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
A few places in the code already assume that the initial recursion
nesting depth is 0, and the recent changes in hba.c (mainly 783e8c6)
rely on this assumption in more places. The maximum recursion nesting
level is assumed to be 10 for hba.c and GUCs.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20221124090724.n7amf5kpdhx6vb76@jrouhaud
We're running out of bits for new permissions. This change doubles the
number of permissions we can accommodate from 16 to 32, so the
forthcoming new ones for vacuum/analyze don't exhaust the pool.
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
The lwlock wait queue scalability issue fixed in a4adc31f69 was quite hard to
find because of the exponential backoff and because we adjust spins_per_delay
over time within a backend.
To make it easier to find similar issues in the future, add a wait event for
the pg_usleep() in perform_spin_delay(). Showing a wait event while spinning
without sleeping would increase the overhead of spinlocks, which we do not
want.
We may at some later point want to have more granular wait events, but that'd
be a substantial amount of work. This provides at least some insights into
something currently hard to observe.
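Concretely, the sleep is bracketed by wait-event reporting, along these
lines (a sketch; the event name is illustrative):

    pgstat_report_wait_start(WAIT_EVENT_SPIN_DELAY);
    pg_usleep(status->cur_delay);
    pgstat_report_wait_end();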
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
https://postgr.es/m/20221120204310.xywrhyxyytsajuuq@awork3.anarazel.de
This switch impacts 9 patterns related to a SQL-mandated special syntax
for function calls:
- LOCALTIME [ ( typmod ) ]
- LOCALTIMESTAMP [ ( typmod ) ]
- CURRENT_TIME [ ( typmod ) ]
- CURRENT_TIMESTAMP [ ( typmod ) ]
- CURRENT_DATE
Five new entries are added to pg_proc to compensate for the removal of
SQLValueFunction, providing backward compatibility and making this
change transparent for the end user (for example for the attribute name
generated when a keyword is specified in a SELECT or in a FROM clause
without an alias, or when something other than an Iconst is passed to
the parser).
The parser included a set of checks that came from the files holding
the C functions used for the SQLValueFunction calls (as of
transformSQLValueFunction()); these are now moved into each function's
execution path, which reduces the dependencies between the parsing and
execution steps. With this change, all the SQL keywords use the same
paths for their work, relying only on COERCE_SQL_SYNTAX. Like fb32748,
no performance difference has been noticed, while the perf profiles get
simpler with ExecEvalSQLValueFunction() gone.
Bump catalog version.
Reviewed-by: Corey Huinker, Ted Yu
Discussion: https://postgr.es/m/YzaG3MoryCguUOym@paquier.xyz
Similar to how the INHERIT option controls whether or not the
permissions of the granted role are automatically available to the
grantee, the new SET permission controls whether or not the grantee
may use the SET ROLE command to assume the privileges of the granted
role.
In addition, the new SET permission controls whether or not it
is possible to transfer ownership of objects to the target role
or to create new objects owned by the target role using commands
such as CREATE DATABASE .. OWNER. We could alternatively have made
this controlled by the INHERIT option, or allow it when either
option is given. An advantage of this approach is that if you
are granted a predefined role with INHERIT TRUE, SET FALSE, you
can't go and create objects owned by that role.
The underlying theory here is that the ability to create objects
as a target role is not a privilege per se, and thus does not
depend on whether you inherit the target role's privileges. However,
it's surely something you could do anyway if you could SET ROLE
to the target role, and thus making it contingent on whether you
have that ability is reasonable.
Design review by Nathan Bossart, Wolfgang Walther, Jeff Davis,
Peter Eisentraut, and Stephen Frost.
Discussion: http://postgr.es/m/CA+Tgmob+zDSRS6JXYrgq0NWdzCXuTNzT5eK54Dn2hhgt17nm8A@mail.gmail.com
Instead of dozens of mostly-duplicate pg_foo_aclcheck() functions,
write one common function object_aclcheck() that can handle almost all
of them. We already have all the information we need, such as which
system catalog corresponds to which catalog table and which column is
the ACL column.
There are a few pg_foo_aclcheck() functions that don't work via the generic
function and have special APIs, so those stay as is.
I also changed most pg_foo_aclmask() functions to static functions,
since they are not used outside of aclchk.c.
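A call site then looks like, for example (a sketch, assuming an
object_aclcheck(classid, objectid, roleid, mode) signature):

    /* replaces a former pg_namespace_aclcheck() call */
    aclresult = object_aclcheck(NamespaceRelationId, nspOid,
                                GetUserId(), ACL_USAGE);
    if (aclresult != ACLCHECK_OK)
        aclcheck_error(aclresult, OBJECT_SCHEMA,
                       get_namespace_name(nspOid));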
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com
Instead of dozens of mostly-duplicate pg_foo_ownercheck() functions,
write one common function object_ownercheck() that can handle almost
all of them. We already have all the information we need, such as
which system catalog corresponds to which catalog table and which
column is the owner column.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com
The code in charge of listing and classifying a set of configuration
files in a directory was located in guc-file.l, being used currently for
GUCs under "include_dir". This code is planned to be used for an
upcoming feature able to include configuration files for ident and HBA
files from a directory, similarly to GUCs. In both cases, the file
names, suffixed by ".conf", have to be ordered alphabetically. This
logic is moved to a new file, called conffiles.c, so that it is easier to
share this facility between GUCs and the HBA/ident parsing logic.
Author: Julien Rouhaud, Michael Paquier
Discussion: https://postgr.es/m/Y2IgaH5YzIq2b+iR@paquier.xyz
We have various places that use a dlist_head and also need to keep
track of the number of items in the list. Traditionally, this has been
done by maintaining a counter variable in the calling code. Here we tidy this up
by adding "dclist", which is very similar to dlist but also keeps track of
the number of items stored in the list.
Callers may use the new dclist_count() function when they need to know how
many items are stored. Obtaining the count is an O(1) operation.
For simplicity reasons, dclist and dlist both use dlist_node as their node
type and dlist_iter/dlist_mutable_iter as their iterator type. dclists
have all of the same functionality as dlists except there is no function
named dclist_delete(). To remove an item from a list dclist_delete_from()
must be used. This requires knowing which dclist the given item is stored
in.
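Usage mirrors dlist, with the O(1) count on top; a sketch (variable
names are illustrative):

    dclist_head waiters;

    dclist_init(&waiters);
    dclist_push_tail(&waiters, &item->node);  /* count kept internally */

    if (dclist_count(&waiters) > max_waiters)
        dclist_delete_from(&waiters, &item->node);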
Additionally, here we also convert some dlists where additional code
exists to keep track of the number of items stored and to make these use
dclists instead.
Author: David Rowley
Reviewed-by: Bharath Rupireddy, Aleksander Alekseev
Discussion: https://postgr.es/m/CAApHDvrtVxr+FXEX0VbViCFKDGxA3tWDgw9oFewNXCJMmwLjLg@mail.gmail.com
This is similar to 7d25958, and this commit takes care of all the
remaining inconsistencies between the initial value used in the C
variable associated with a GUC and its default value stored in the GUC
tables (as of pg_settings.boot_val).
Some of the initial values of the GUCs updated rely on a compile-time
default. These are refactored so that the GUC table and the C
declaration use the same values. This makes everything consistent with
the places that already do so, for example backend_flush_after,
bgwriter_flush_after, port, and checkpoint_flush_after.
Extracted from a larger patch by Peter Smith. The spots updated in the
modules are from me.
Author: Peter Smith, Michael Paquier
Reviewed-by: Nathan Bossart, Tom Lane, Justin Pryzby
Discussion: https://postgr.es/m/CAHut+PtHE0XSfjjRQ6D4v7+dqzCw=d+1a64ujra4EX8aoc_Z+w@mail.gmail.com
Here we add a new 'copy' parameter to tuplesort_getdatum so that we can
instruct the function not to datumCopy() byref Datums before returning.
Similar to 91e9e89dc, this can provide significant performance
improvements in nodeSort when sorting by a single byref column and the
sort's targetlist contains only that column.
This allows us to re-enable Datum sorts for byref types which was disabled
in 3a5817695 due to a reported memory leak.
Additionally, here we slightly optimize DISTINCT aggregates so that we no
longer perform any datumCopy() when we find the current value not to be
distinct from the previous value. Previously the code would always take a
copy of the most recent Datum and pfree the previous value, even when the
values were the same. Testing shows a small but noticeable performance
increase when aggregate transitions are skipped due to the current
transition value being the same as the prior one.
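The DISTINCT-aggregate change boils down to deferring the copy until
the new value is known to be distinct; in sketch form (the real code
compares using the aggregate's sort operator rather than datumIsEqual):

    if (!haveOldValue ||
        !datumIsEqual(newVal, oldVal, typByVal, typLen))
    {
        /* distinct from prior input: replace the stored copy */
        if (!typByVal && haveOldValue)
            pfree(DatumGetPointer(oldVal));
        oldVal = datumCopy(newVal, typByVal, typLen);
        haveOldValue = true;
        /* ... advance the transition function on newVal ... */
    }
    /* else: equal to the previous input, so no copy and no pfree */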
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvqS6wC5U==k9Hd26E4EQXH3QR67-T4=Q1rQ36NGvjfVSg@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvqHonfe9G1cVaKeHbDx70R_zCrM3qP2AGXpGrieSKGnhA@mail.gmail.com