/*-------------------------------------------------------------------------
 *
 * postinit.c
 *	  postgres initialization utilities
 *
 * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/utils/init/postinit.c
 *
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <ctype.h>
#include <fcntl.h>
#include <unistd.h>

#include "access/genam.h"
#include "access/heapam.h"
#include "access/htup_details.h"
#include "access/session.h"
#include "access/sysattr.h"
#include "access/tableam.h"
#include "access/xact.h"
#include "access/xlog.h"
#include "access/xloginsert.h"
#include "catalog/catalog.h"
#include "catalog/namespace.h"
#include "catalog/pg_authid.h"
#include "catalog/pg_collation.h"
#include "catalog/pg_database.h"
#include "catalog/pg_db_role_setting.h"
#include "catalog/pg_tablespace.h"
#include "libpq/auth.h"
#include "libpq/libpq-be.h"
#include "mb/pg_wchar.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "postmaster/autovacuum.h"
#include "postmaster/postmaster.h"
#include "replication/slot.h"
#include "replication/slotsync.h"
#include "replication/walsender.h"
#include "storage/bufmgr.h"
#include "storage/fd.h"
#include "storage/ipc.h"
#include "storage/lmgr.h"
#include "storage/proc.h"
#include "storage/procarray.h"
#include "storage/procsignal.h"
#include "storage/sinvaladt.h"
#include "storage/smgr.h"
#include "storage/sync.h"
#include "tcop/tcopprot.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc_hooks.h"
#include "utils/memutils.h"
#include "utils/pg_locale.h"
#include "utils/portal.h"
#include "utils/ps_status.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"
#include "utils/timeout.h"

static HeapTuple GetDatabaseTuple(const char *dbname);
static HeapTuple GetDatabaseTupleByOid(Oid dboid);
static void PerformAuthentication(Port *port);
static void CheckMyDatabase(const char *name, bool am_superuser, bool override_allow_connections);
static void ShutdownPostgres(int code, Datum arg);
static void StatementTimeoutHandler(void);
static void LockTimeoutHandler(void);
static void IdleInTransactionSessionTimeoutHandler(void);
static void TransactionTimeoutHandler(void);
static void IdleSessionTimeoutHandler(void);
static void IdleStatsUpdateTimeoutHandler(void);
static void ClientCheckTimeoutHandler(void);
static bool ThereIsAtLeastOneRole(void);
static void process_startup_options(Port *port, bool am_superuser);
static void process_settings(Oid databaseid, Oid roleid);


/*** InitPostgres support ***/


/*
 * GetDatabaseTuple -- fetch the pg_database row for a database
 *
 * This is used during backend startup when we don't yet have any access to
 * system catalogs in general.  In the worst case, we can seqscan pg_database
 * using nothing but the hard-wired descriptor that relcache.c creates for
 * pg_database.  In more typical cases, relcache.c was able to load
 * descriptors for both pg_database and its indexes from the shared relcache
 * cache file, and so we can do an indexscan.  criticalSharedRelcachesBuilt
 * tells whether we got the cached descriptors.
 */
static HeapTuple
GetDatabaseTuple(const char *dbname)
{
    HeapTuple   tuple;
    Relation    relation;
    SysScanDesc scan;
    ScanKeyData key[1];

    /*
     * form a scan key
     */
    ScanKeyInit(&key[0],
                Anum_pg_database_datname,
                BTEqualStrategyNumber, F_NAMEEQ,
                CStringGetDatum(dbname));

    /*
     * Open pg_database and fetch a tuple.  Force heap scan if we haven't yet
     * built the critical shared relcache entries (i.e., we're starting up
     * without a shared relcache cache file).
     */
    relation = table_open(DatabaseRelationId, AccessShareLock);
    scan = systable_beginscan(relation, DatabaseNameIndexId,
                              criticalSharedRelcachesBuilt,
                              NULL,
                              1, key);

    tuple = systable_getnext(scan);

    /* Must copy tuple before releasing buffer */
    if (HeapTupleIsValid(tuple))
        tuple = heap_copytuple(tuple);

    /* all done */
    systable_endscan(scan);
    table_close(relation, AccessShareLock);

    return tuple;
}

/*
 * GetDatabaseTupleByOid -- as above, but search by database OID
 */
static HeapTuple
GetDatabaseTupleByOid(Oid dboid)
{
    HeapTuple   tuple;
    Relation    relation;
    SysScanDesc scan;
    ScanKeyData key[1];

    /*
     * form a scan key
     */
    ScanKeyInit(&key[0],
                Anum_pg_database_oid,
                BTEqualStrategyNumber, F_OIDEQ,
                ObjectIdGetDatum(dboid));

    /*
     * Open pg_database and fetch a tuple.  Force heap scan if we haven't yet
     * built the critical shared relcache entries (i.e., we're starting up
     * without a shared relcache cache file).
     */
    relation = table_open(DatabaseRelationId, AccessShareLock);
    scan = systable_beginscan(relation, DatabaseOidIndexId,
                              criticalSharedRelcachesBuilt,
                              NULL,
                              1, key);

    tuple = systable_getnext(scan);

    /* Must copy tuple before releasing buffer */
    if (HeapTupleIsValid(tuple))
        tuple = heap_copytuple(tuple);

    /* all done */
    systable_endscan(scan);
    table_close(relation, AccessShareLock);

    return tuple;
}
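
/*
 * Editorial note (descriptive comment, not from the original sources): the
 * two lookup functions above follow the standard catalog-scan idiom used
 * throughout the backend -- ScanKeyInit() to build the key,
 * systable_beginscan() (index or heap scan depending on
 * criticalSharedRelcachesBuilt), systable_getnext(), heap_copytuple()
 * before the scan releases its buffer pin, then systable_endscan() and
 * table_close().
 */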


/*
 * PerformAuthentication -- authenticate a remote client
 *
 * returns: nothing.  Will not return at all if there's any failure.
 */
static void
PerformAuthentication(Port *port)
{
    /* This should be set already, but let's make sure */
    ClientAuthInProgress = true;    /* limit visibility of log messages */

    /*
     * In EXEC_BACKEND case, we didn't inherit the contents of pg_hba.conf
     * etcetera from the postmaster, and have to load them ourselves.
     *
     * FIXME: [fork/exec] Ugh.  Is there a way around this overhead?
     */
#ifdef EXEC_BACKEND

    /*
     * load_hba() and load_ident() want to work within the PostmasterContext,
     * so create that if it doesn't exist (which it won't).  We'll delete it
     * again later, in PostgresMain.
     */
    if (PostmasterContext == NULL)
        PostmasterContext = AllocSetContextCreate(TopMemoryContext,
                                                  "Postmaster",
                                                  ALLOCSET_DEFAULT_SIZES);

    if (!load_hba())
    {
        /*
         * It makes no sense to continue if we fail to load the HBA file,
         * since there is no way to connect to the database in this case.
         */
        ereport(FATAL,
        /* translator: %s is a configuration file */
                (errmsg("could not load %s", HbaFileName)));
    }

    if (!load_ident())
    {
        /*
         * It is ok to continue if we fail to load the IDENT file, although
         * it means that you cannot log in using any of the authentication
         * methods that need a user name mapping.  load_ident() already
         * logged the details of the error to the log.
         */
    }
#endif

    /*
     * Set up a timeout in case a buggy or malicious client fails to respond
     * during authentication.  Since we're inside a transaction and might do
     * database access, we have to use the statement_timeout infrastructure.
     */
    enable_timeout_after(STATEMENT_TIMEOUT, AuthenticationTimeout * 1000);

    /*
     * Now perform authentication exchange.
     */
    set_ps_display("authentication");
    ClientAuthentication(port);     /* might not return, if failure */

    /*
     * Done with authentication.  Disable the timeout, and log if needed.
     */
    disable_timeout(STATEMENT_TIMEOUT, false);

    if (Log_connections)
    {
        StringInfoData logmsg;

        initStringInfo(&logmsg);
        if (am_walsender)
            appendStringInfo(&logmsg, _("replication connection authorized: user=%s"),
                             port->user_name);
        else
            appendStringInfo(&logmsg, _("connection authorized: user=%s"),
                             port->user_name);
        if (!am_walsender)
            appendStringInfo(&logmsg, _(" database=%s"), port->database_name);

        if (port->application_name != NULL)
            appendStringInfo(&logmsg, _(" application_name=%s"),
                             port->application_name);

#ifdef USE_SSL
        if (port->ssl_in_use)
            appendStringInfo(&logmsg, _(" SSL enabled (protocol=%s, cipher=%s, bits=%d)"),
                             be_tls_get_version(port),
                             be_tls_get_cipher(port),
                             be_tls_get_cipher_bits(port));
#endif
#ifdef ENABLE_GSS
        if (port->gss)
        {
            const char *princ = be_gssapi_get_princ(port);

            if (princ)
                appendStringInfo(&logmsg,
                                 _(" GSS (authenticated=%s, encrypted=%s, delegated_credentials=%s, principal=%s)"),
                                 be_gssapi_get_auth(port) ? _("yes") : _("no"),
                                 be_gssapi_get_enc(port) ? _("yes") : _("no"),
                                 be_gssapi_get_delegation(port) ? _("yes") : _("no"),
                                 princ);
            else
                appendStringInfo(&logmsg,
                                 _(" GSS (authenticated=%s, encrypted=%s, delegated_credentials=%s)"),
                                 be_gssapi_get_auth(port) ? _("yes") : _("no"),
                                 be_gssapi_get_enc(port) ? _("yes") : _("no"),
                                 be_gssapi_get_delegation(port) ? _("yes") : _("no"));
        }
#endif

        ereport(LOG, errmsg_internal("%s", logmsg.data));
        pfree(logmsg.data);
    }

    set_ps_display("startup");

    ClientAuthInProgress = false;   /* client_min_messages is active now */
}
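
/*
 * Illustrative sketch of the log lines assembled above (the message shapes
 * follow the format strings in PerformAuthentication; the concrete user,
 * database, and cipher values here are assumed, not from this file):
 *
 *	LOG:  connection authorized: user=alice database=appdb application_name=psql
 *	LOG:  replication connection authorized: user=repl SSL enabled (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256)
 */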


/*
 * CheckMyDatabase -- fetch information from the pg_database entry for our DB
 */
static void
CheckMyDatabase(const char *name, bool am_superuser, bool override_allow_connections)
{
    HeapTuple   tup;
    Form_pg_database dbform;
    Datum       datum;
    bool        isnull;
    char       *collate;
    char       *ctype;
    char       *iculocale;

    /* Fetch our pg_database row normally, via syscache */
    tup = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(MyDatabaseId));
    if (!HeapTupleIsValid(tup))
        elog(ERROR, "cache lookup failed for database %u", MyDatabaseId);
    dbform = (Form_pg_database) GETSTRUCT(tup);

    /* This recheck is strictly paranoia */
    if (strcmp(name, NameStr(dbform->datname)) != 0)
        ereport(FATAL,
                (errcode(ERRCODE_UNDEFINED_DATABASE),
                 errmsg("database \"%s\" has disappeared from pg_database",
                        name),
                 errdetail("Database OID %u now seems to belong to \"%s\".",
                           MyDatabaseId, NameStr(dbform->datname))));

    /*
     * Check permissions to connect to the database.
     *
     * These checks are not enforced when in standalone mode, so that there
     * is a way to recover from disabling all access to all databases, for
     * example "UPDATE pg_database SET datallowconn = false;".
     *
     * We do not enforce them for autovacuum worker processes either.
     */
    if (IsUnderPostmaster && !IsAutoVacuumWorkerProcess())
    {
        /*
         * Check that the database is currently allowing connections.
         */
        if (!dbform->datallowconn && !override_allow_connections)
            ereport(FATAL,
                    (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                     errmsg("database \"%s\" is not currently accepting connections",
                            name)));

        /*
         * Check privilege to connect to the database.  (The am_superuser
         * test is redundant, but since we have the flag, might as well check
         * it and save a few cycles.)
         */
        if (!am_superuser &&
            object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(),
                            ACL_CONNECT) != ACLCHECK_OK)
            ereport(FATAL,
                    (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                     errmsg("permission denied for database \"%s\"", name),
                     errdetail("User does not have CONNECT privilege.")));

        /*
         * Check connection limit for this database.
         *
         * There is a race condition here --- we create our PGPROC before
         * checking for other PGPROCs.  If two backends did this at about the
         * same time, they might both think they were over the limit, while
         * ideally one should succeed and one fail.  Getting that to work
         * exactly seems more trouble than it is worth, however; instead we
         * just document that the connection limit is approximate.
         */
        if (dbform->datconnlimit >= 0 &&
            !am_superuser &&
            CountDBConnections(MyDatabaseId) > dbform->datconnlimit)
            ereport(FATAL,
                    (errcode(ERRCODE_TOO_MANY_CONNECTIONS),
                     errmsg("too many connections for database \"%s\"",
                            name)));
    }

    /*
     * OK, we're golden.  Next to-do item is to save the encoding info out of
     * the pg_database tuple.
     */
    SetDatabaseEncoding(dbform->encoding);
    /* Record it as a GUC internal option, too */
    SetConfigOption("server_encoding", GetDatabaseEncodingName(),
                    PGC_INTERNAL, PGC_S_DYNAMIC_DEFAULT);
    /* If we have no other source of client_encoding, use server encoding */
    SetConfigOption("client_encoding", GetDatabaseEncodingName(),
                    PGC_BACKEND, PGC_S_DYNAMIC_DEFAULT);

    /* assign locale variables */
    datum = SysCacheGetAttrNotNull(DATABASEOID, tup, Anum_pg_database_datcollate);
    collate = TextDatumGetCString(datum);
    datum = SysCacheGetAttrNotNull(DATABASEOID, tup, Anum_pg_database_datctype);
    ctype = TextDatumGetCString(datum);

    if (pg_perm_setlocale(LC_COLLATE, collate) == NULL)
        ereport(FATAL,
                (errmsg("database locale is incompatible with operating system"),
                 errdetail("The database was initialized with LC_COLLATE \"%s\", "
                           " which is not recognized by setlocale().", collate),
                 errhint("Recreate the database with another locale or install the missing locale.")));

    if (pg_perm_setlocale(LC_CTYPE, ctype) == NULL)
        ereport(FATAL,
                (errmsg("database locale is incompatible with operating system"),
                 errdetail("The database was initialized with LC_CTYPE \"%s\", "
                           " which is not recognized by setlocale().", ctype),
                 errhint("Recreate the database with another locale or install the missing locale.")));

    if (strcmp(ctype, "C") == 0 ||
        strcmp(ctype, "POSIX") == 0)
        database_ctype_is_c = true;

    if (dbform->datlocprovider == COLLPROVIDER_ICU)
    {
        char       *icurules;

        datum = SysCacheGetAttrNotNull(DATABASEOID, tup, Anum_pg_database_daticulocale);
        iculocale = TextDatumGetCString(datum);

        datum = SysCacheGetAttr(DATABASEOID, tup, Anum_pg_database_daticurules, &isnull);
        if (!isnull)
            icurules = TextDatumGetCString(datum);
        else
            icurules = NULL;

        make_icu_collator(iculocale, icurules, &default_locale);
    }
    else
        iculocale = NULL;

    default_locale.provider = dbform->datlocprovider;

    /*
     * Default locale is currently always deterministic.  Nondeterministic
     * locales currently don't support pattern matching, which would break a
     * lot of things if applied globally.
     */
    default_locale.deterministic = true;

    /*
     * Check collation version.  See similar code in
     * pg_newlocale_from_collation().  Note that here we warn instead of
     * error in any case, so that we don't prevent connecting.
     */
    datum = SysCacheGetAttr(DATABASEOID, tup, Anum_pg_database_datcollversion,
                            &isnull);
    if (!isnull)
    {
        char       *actual_versionstr;
        char       *collversionstr;

        collversionstr = TextDatumGetCString(datum);

        actual_versionstr = get_collation_actual_version(dbform->datlocprovider, dbform->datlocprovider == COLLPROVIDER_ICU ? iculocale : collate);
        if (!actual_versionstr)
            /* should not happen */
            elog(WARNING,
                 "database \"%s\" has no actual collation version, but a version was recorded",
                 name);
        else if (strcmp(actual_versionstr, collversionstr) != 0)
            ereport(WARNING,
                    (errmsg("database \"%s\" has a collation version mismatch",
                            name),
                     errdetail("The database was created using collation version %s, "
                               "but the operating system provides version %s.",
                               collversionstr, actual_versionstr),
                     errhint("Rebuild all objects in this database that use the default collation and run "
                             "ALTER DATABASE %s REFRESH COLLATION VERSION, "
                             "or build PostgreSQL with the right library version.",
                             quote_identifier(name))));
    }

    ReleaseSysCache(tup);
}
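
/*
 * Recovery sketch for the datallowconn lockout mentioned above (assumed
 * invocation, not part of this file): because these connection checks are
 * skipped in standalone mode, an administrator who has run
 * "UPDATE pg_database SET datallowconn = false;" can undo it with, e.g.,
 *
 *	postgres --single -D $PGDATA postgres
 *	backend> UPDATE pg_database SET datallowconn = true;
 */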


/*
 * pg_split_opts -- split a string of options and append it to an argv array
 *
 * The caller is responsible for ensuring the argv array is large enough.
 * The maximum possible number of arguments added by this routine is
 * (strlen(optstr) + 1) / 2.
 *
 * Because some option values can contain spaces we allow escaping using
 * backslashes, with \\ representing a literal backslash.
 */
void
pg_split_opts(char **argv, int *argcp, const char *optstr)
{
    StringInfoData s;

    initStringInfo(&s);

    while (*optstr)
    {
        bool        last_was_escape = false;

        resetStringInfo(&s);

        /* skip over leading space */
        while (isspace((unsigned char) *optstr))
            optstr++;

        if (*optstr == '\0')
            break;

        /*
         * Parse a single option, stopping at the first space, unless it's
         * escaped.
         */
        while (*optstr)
        {
            if (isspace((unsigned char) *optstr) && !last_was_escape)
                break;

            if (!last_was_escape && *optstr == '\\')
                last_was_escape = true;
            else
            {
                last_was_escape = false;
                appendStringInfoChar(&s, *optstr);
            }

            optstr++;
        }

        /* now store the option in the next argv[] position */
        argv[(*argcp)++] = pstrdup(s.data);
    }

    pfree(s.data);
}
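
/*
 * Worked example of the escaping rules above (the option string is assumed,
 * for illustration only): given optstr
 *
 *	-c search_path=public -c application_name=my\ app
 *
 * the loop yields four argv entries: "-c", "search_path=public", "-c", and
 * "application_name=my app".  A doubled backslash ("\\") in optstr would be
 * stored as a single literal backslash in the argument.
 */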

/*
 * Initialize MaxBackends value from config options.
 *
 * This must be called after modules have had the chance to alter GUCs in
 * shared_preload_libraries and before shared memory size is determined.
 *
 * Note that in EXEC_BACKEND environment, the value is passed down from
 * postmaster to subprocesses via BackendParameters in SubPostmasterMain; only
 * postmaster itself and processes not under postmaster control should call
 * this.
 */
void
InitializeMaxBackends(void)
{
    Assert(MaxBackends == 0);

    /* the extra unit accounts for the autovacuum launcher */
    MaxBackends = MaxConnections + autovacuum_max_workers + 1 +
        max_worker_processes + max_wal_senders;

    /* internal error because the values were all checked previously */
    if (MaxBackends > MAX_BACKENDS)
        elog(ERROR, "too many backends configured");
}
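
/*
 * Worked example of the sum above, using the stock configuration defaults
 * (assumed values, not stated in this file: max_connections=100,
 * autovacuum_max_workers=3, max_worker_processes=8, max_wal_senders=10):
 *
 *	MaxBackends = 100 + 3 + 1 + 8 + 10 = 122
 */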

/*
 * GUC check_hook for max_connections
 */
bool
check_max_connections(int *newval, void **extra, GucSource source)
{
    if (*newval + autovacuum_max_workers + 1 +
        max_worker_processes + max_wal_senders > MAX_BACKENDS)
        return false;
    return true;
}

/*
 * GUC check_hook for autovacuum_max_workers
 */
bool
check_autovacuum_max_workers(int *newval, void **extra, GucSource source)
{
    if (MaxConnections + *newval + 1 +
        max_worker_processes + max_wal_senders > MAX_BACKENDS)
        return false;
    return true;
}

/*
 * GUC check_hook for max_worker_processes
 */
bool
check_max_worker_processes(int *newval, void **extra, GucSource source)
{
    if (MaxConnections + autovacuum_max_workers + 1 +
        *newval + max_wal_senders > MAX_BACKENDS)
        return false;
    return true;
}

/*
 * GUC check_hook for max_wal_senders
 */
bool
check_max_wal_senders(int *newval, void **extra, GucSource source)
{
    if (MaxConnections + autovacuum_max_workers + 1 +
        max_worker_processes + *newval > MAX_BACKENDS)
        return false;
    return true;
}
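
/*
 * Editorial note: each check hook above guards the same sum that
 * InitializeMaxBackends() computes, substituting its own *newval, so no
 * combination of settings accepted here can push MaxBackends past
 * MAX_BACKENDS (0x3FFFF in the headers -- an assumed value for this note,
 * not something stated in this file).
 */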
|
|
|
|
|
|
|
|

/*
 * Early initialization of a backend (either standalone or under postmaster).
 * This happens even before InitPostgres.
 *
 * This is separate from InitPostgres because it is also called by auxiliary
 * processes, such as the background writer process, which may not call
 * InitPostgres at all.
 */
void
BaseInit(void)
{
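	/*
	 * MyProc should already have been set up, by InitProcess() or
	 * InitAuxiliaryProcess().
	 */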
	Assert(MyProc != NULL);

	/*
	 * Initialize our input/output/debugging file descriptors.
	 */
	DebugFileOpen();

	/*
	 * Initialize file access. Done early so other subsystems can access
	 * files.
	 */
	InitFileAccess();

	/*
	 * Initialize statistics reporting. This needs to happen early to ensure
	 * that pgstat's shutdown callback runs after the shutdown callbacks of
	 * all subsystems that can produce stats (like e.g. transaction commits
	 * can).
	 */
	pgstat_initialize();

	/* Do local initialization of storage and buffer managers */
	InitSync();
	smgrinit();
	InitBufferPoolAccess();

	/*
	 * Initialize temporary file access after pgstat, so that the temporary
	 * file shutdown hook can report temporary file statistics.
	 */
	InitTemporaryFileAccess();

	/*
	 * Initialize local buffers for WAL record construction, in case we ever
	 * try to insert XLOG.
	 */
	InitXLogInsert();

	/*
	 * Initialize replication slots after pgstat. The exit hook might need to
	 * drop ephemeral slots, which in turn triggers stats reporting.
	 */
	ReplicationSlotInitialize();
}

/* --------------------------------
 * InitPostgres
 *		Initialize POSTGRES.
 *
 * Parameters:
 *	in_dbname, dboid: specify database to connect to, as described below
 *	username, useroid: specify role to connect as, as described below
 *	flags:
 *	- INIT_PG_LOAD_SESSION_LIBS to honor [session|local]_preload_libraries.
 *	- INIT_PG_OVERRIDE_ALLOW_CONNS to connect despite !datallowconn.
 *	- INIT_PG_OVERRIDE_ROLE_LOGIN to connect despite !rolcanlogin.
 *	out_dbname: optional output parameter, see below; pass NULL if not used
 *
 * The database can be specified by name, using the in_dbname parameter, or by
 * OID, using the dboid parameter. Specify NULL or InvalidOid respectively
 * for the unused parameter. If dboid is provided, the actual database
 * name can be returned to the caller in out_dbname. If out_dbname isn't
 * NULL, it must point to a buffer of size NAMEDATALEN.
 *
 * Similarly, the role can be passed by name, using the username parameter,
 * or by OID using the useroid parameter.
 *
 * In bootstrap mode the database and username parameters are NULL/InvalidOid.
 * The autovacuum launcher process doesn't specify these parameters either,
 * because it only goes far enough to be able to read pg_database; it doesn't
 * connect to any particular database. An autovacuum worker specifies a
 * database but not a username; conversely, a physical walsender specifies
 * username but not database.
 *
 * By convention, INIT_PG_LOAD_SESSION_LIBS should be passed in "flags" in
 * "interactive" sessions (including standalone backends), but not in
 * background processes such as autovacuum. Note in particular that it
 * shouldn't be true in parallel worker processes; those have another
 * mechanism for replicating their leader's set of loaded libraries.
 *
 * We expect that InitProcess() was already called, so we already have a
 * PGPROC struct ... but it's not completely filled in yet.
 *
 * Note:
 *		Be very careful with the order of calls in the InitPostgres function.
 * --------------------------------
 */
void
InitPostgres(const char *in_dbname, Oid dboid,
			 const char *username, Oid useroid,
			 bits32 flags,
			 char *out_dbname)
{
	bool		bootstrap = IsBootstrapProcessingMode();
	bool		am_superuser;
	char	   *fullpath;
	char		dbname[NAMEDATALEN];
	int			nfree = 0;

	elog(DEBUG3, "InitPostgres");

	/*
	 * Add my PGPROC struct to the ProcArray.
	 *
	 * Once I have done this, I am visible to other backends!
	 */
	InitProcessPhase2();

	/*
	 * Initialize my entry in the shared-invalidation manager's array of
	 * per-backend data.
	 *
	 * Sets up MyBackendId, a unique backend identifier.
	 */
	MyBackendId = InvalidBackendId;
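
	/*
	 * SharedInvalBackendInit() is what assigns MyBackendId; valid IDs run
	 * from 1 to MaxBackends, which the check below verifies.
	 */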
	SharedInvalBackendInit(false);

	if (MyBackendId > MaxBackends || MyBackendId <= 0)
		elog(FATAL, "bad backend ID: %d", MyBackendId);

	/* Now that we have a BackendId, we can participate in ProcSignal */
	ProcSignalInit(MyBackendId);

	/*
	 * Also set up timeout handlers needed for backend operation. We need
	 * these in every case except bootstrap.
	 */
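	/*
	 * (RegisterTimeout only installs the handler for each timeout; the
	 * timeouts themselves are armed later, via the enable_timeout_*
	 * functions, as they become relevant.)
	 */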
	if (!bootstrap)
	{
		RegisterTimeout(DEADLOCK_TIMEOUT, CheckDeadLockAlert);
		RegisterTimeout(STATEMENT_TIMEOUT, StatementTimeoutHandler);
		RegisterTimeout(LOCK_TIMEOUT, LockTimeoutHandler);
		RegisterTimeout(IDLE_IN_TRANSACTION_SESSION_TIMEOUT,
						IdleInTransactionSessionTimeoutHandler);
		RegisterTimeout(TRANSACTION_TIMEOUT, TransactionTimeoutHandler);
		RegisterTimeout(IDLE_SESSION_TIMEOUT, IdleSessionTimeoutHandler);
		RegisterTimeout(CLIENT_CONNECTION_CHECK_TIMEOUT, ClientCheckTimeoutHandler);
		RegisterTimeout(IDLE_STATS_UPDATE_TIMEOUT,
						IdleStatsUpdateTimeoutHandler);
	}

	/*
	 * If this is either a bootstrap process or a standalone backend, start
	 * up the XLOG machinery, and register to have it closed down at exit.
	 * In other cases, the startup process is responsible for starting up
	 * the XLOG machinery, and the checkpointer for closing it down.
	 */
	if (!IsUnderPostmaster)
	{
		/*
		 * We don't yet have an aux-process resource owner, but StartupXLOG
		 * and ShutdownXLOG will need one. Hence, create said resource owner
		 * (and register a callback to clean it up after ShutdownXLOG runs).
		 */
		CreateAuxProcessResourceOwner();

		StartupXLOG();
		/* Release (and warn about) any buffer pins leaked in StartupXLOG */
		ReleaseAuxProcessResources(true);
		/* Reset CurrentResourceOwner to nothing for the moment */
		CurrentResourceOwner = NULL;

		/*
		 * Use before_shmem_exit() so that ShutdownXLOG() can rely on DSM
		 * segments etc to work (which in turn is required for pgstats).
		 */
		before_shmem_exit(pgstat_before_server_shutdown, 0);
		before_shmem_exit(ShutdownXLOG, 0);
	}

	/*
	 * Initialize the relation cache and the system catalog caches. Note that
	 * no catalog access happens here; we only set up the hashtable
	 * structure. We must do this before starting a transaction because
	 * transaction abort would try to touch these hashtables.
	 */
	RelationCacheInitialize();
	InitCatalogCache();
	InitPlanCache();

	/* Initialize portal manager */
	EnablePortalManager();

	/* Initialize status reporting */
	pgstat_beinit();

	/*
	 * Load relcache entries for the shared system catalogs. This must create
	 * at least entries for pg_database and catalogs used for authentication.
	 */
	RelationCacheInitializePhase2();

	/*
	 * Set up process-exit callback to do pre-shutdown cleanup. This is one
	 * of the first before_shmem_exit callbacks we register; thus, this will
	 * be one of the last things we do before low-level modules like the
	 * buffer manager begin to close down. We need to have this in place
	 * before we begin our first transaction --- if we fail during the
	 * initialization transaction, as is entirely possible, we need the
	 * AbortTransaction call to clean up.
	 */
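	/* (before_shmem_exit callbacks run in reverse order of registration.) */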
	before_shmem_exit(ShutdownPostgres, 0);

	/* The autovacuum launcher is done here */
	if (IsAutoVacuumLauncherProcess())
	{
		/* report this backend in the PgBackendStatus array */
		pgstat_bestart();
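
		/*
		 * The launcher never connects to a specific database (it reads
		 * pg_database directly), so the remaining, database-specific
		 * initialization below is skipped.
		 */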
		return;
	}

	/*
	 * Start a new transaction here before first access to db, and get a
	 * snapshot. We don't have a use for the snapshot itself, but we're
	 * interested in the secondary effect that it sets RecentGlobalXmin.
	 * (This is critical for anything that reads heap pages, because HOT may
	 * decide to prune them even if the process doesn't attempt to modify
	 * any tuples.)
	 *
	 * FIXME: This comment is inaccurate / the code buggy. A snapshot that is
	 * not pushed/active does not reliably prevent HOT pruning (->xmin could
	 * e.g. be cleared when cache invalidations are processed).
	 */
	if (!bootstrap)
	{
		/* statement_timestamp must be set for timeouts to work correctly */
		SetCurrentStatementStartTimestamp();
		StartTransactionCommand();

		/*
		 * transaction_isolation will have been set to the default by the
		 * above. If the default is "serializable", and we are in hot
		 * standby, we will fail if we don't change it to something lower.
		 * Fortunately, "read committed" is plenty good enough.
		 */
		XactIsoLevel = XACT_READ_COMMITTED;

		(void) GetTransactionSnapshot();
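
		/*
		 * This transaction stays open while the catalog accesses below are
		 * performed; InitPostgres commits it near the end of startup.
		 */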
	}

	/*
	 * Perform client authentication if necessary, then figure out our
	 * postgres user ID, and see if we are a superuser.
	 *
	 * In standalone mode, for autovacuum worker processes, and for the slot
	 * sync worker process, we use a fixed ID; otherwise we figure it out
	 * from the authenticated user name.
	 */
	if (bootstrap || IsAutoVacuumWorkerProcess() || IsLogicalSlotSyncWorker())
	{
		InitializeSessionUserIdStandalone();
		am_superuser = true;
	}
	else if (!IsUnderPostmaster)
	{
		InitializeSessionUserIdStandalone();
		am_superuser = true;
		if (!ThereIsAtLeastOneRole())
			ereport(WARNING,
					(errcode(ERRCODE_UNDEFINED_OBJECT),
					 errmsg("no roles are defined in this database system"),
					 errhint("You should immediately run CREATE USER \"%s\" SUPERUSER;.",
							 username != NULL ? username : "postgres")));
	}
	else if (IsBackgroundWorker)
	{
		if (username == NULL && !OidIsValid(useroid))
		{
			InitializeSessionUserIdStandalone();
			am_superuser = true;
		}
		else
		{
			InitializeSessionUserId(username, useroid,
									(flags & INIT_PG_OVERRIDE_ROLE_LOGIN) != 0);
			am_superuser = superuser();
		}
	}
	else
	{
		/* normal multiuser case */
		Assert(MyProcPort != NULL);
		PerformAuthentication(MyProcPort);
		InitializeSessionUserId(username, useroid, false);
		/*
		 * Set SYSTEM_USER only when an authentication identity was
		 * recorded; auth_method is only valid if authn_id is not NULL.
		 */
		if (MyClientConnectionInfo.authn_id)
			InitializeSystemUser(MyClientConnectionInfo.authn_id,
								 hba_authname(MyClientConnectionInfo.auth_method));
		am_superuser = superuser();
	}

	/*
	 * Binary upgrade mode is only allowed for superuser connections.
	 */
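	/*
	 * (IsBinaryUpgrade is set when the server was started in binary-upgrade
	 * mode, as done by pg_upgrade.)
	 */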
	if (IsBinaryUpgrade && !am_superuser)
	{
		ereport(FATAL,
				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
				 errmsg("must be superuser to connect in binary upgrade mode")));
	}

	/*
	 * The last few connection slots are reserved for superusers and roles
	 * with privileges of pg_use_reserved_connections. Replication
	 * connections are drawn from slots reserved with max_wal_senders and are
	 * not limited by max_connections, superuser_reserved_connections, or
	 * reserved_connections.
	 *
	 * Note: At this point, the new backend has already claimed a proc
	 * struct, so we must check whether the number of free slots is strictly
	 * less than the reserved connection limits.
	 */
Move max_wal_senders out of max_connections for connection slot handling
Since its introduction, max_wal_senders has been counted as part of
max_connections when it comes to defining how many connection slots can
be used for replication connections with a WAL sender context. This can
lead to confusion for some users, as a base backup or replication could
be blocked because the other backend slots are already taken for other
purposes by an application, and superuser-only connection slots are not
a correct solution to handle that case.
This commit makes max_wal_senders independent of max_connections for its
handling of PGPROC entries in ProcGlobal, meaning that connection slots
for WAL senders are handled using their own free queue, like autovacuum
workers and bgworkers.
One compatibility issue that this change creates is that a standby now
requires a value of max_wal_senders at least equal to its primary's.
So, if a standby enforces a lower value of max_wal_senders than its
primary, failovers could break. Normally this should not be an issue,
as a standby's settings are inherited from its primary: postgresql.conf
normally gets copied as part of a base backup, so the parameters stay
consistent.
Author: Alexander Kukushkin
Reviewed-by: Kyotaro Horiguchi, Petr Jelínek, Masahiko Sawada, Oleksii
Kliukin
Discussion: https://postgr.es/m/CAFh8B=nBzHQeYAu0b8fjK-AF1X4+_p6GRtwG+cCgs6Vci2uRuQ@mail.gmail.com
6 years ago
	if (!am_superuser && !am_walsender &&
		(SuperuserReservedConnections + ReservedConnections) > 0 &&
		!HaveNFreeProcs(SuperuserReservedConnections + ReservedConnections, &nfree))
	{
		if (nfree < SuperuserReservedConnections)
			ereport(FATAL,
					(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
					 errmsg("remaining connection slots are reserved for roles with the %s attribute",
							"SUPERUSER")));

		if (!has_privs_of_role(GetUserId(), ROLE_PG_USE_RESERVED_CONNECTIONS))
			ereport(FATAL,
					(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
					 errmsg("remaining connection slots are reserved for roles with privileges of the \"%s\" role",
							"pg_use_reserved_connections")));
	}

	/* Check replication permissions needed for walsender processes. */
	if (am_walsender)
	{
		Assert(!bootstrap);

		if (!has_rolreplication(GetUserId()))
			ereport(FATAL,
					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
					 errmsg("permission denied to start WAL sender"),
					 errdetail("Only roles with the %s attribute may start a WAL sender process.",
							   "REPLICATION")));
	}

	/*
	 * If this is a plain walsender only supporting physical replication, we
	 * don't want to connect to any particular database.  Just finish the
	 * backend startup by processing any options from the startup packet, and
	 * we're done.
	 */
	if (am_walsender && !am_db_walsender)
	{
		/* process any options passed in the startup packet */
		if (MyProcPort != NULL)
			process_startup_options(MyProcPort, am_superuser);

		/* Apply PostAuthDelay as soon as we've read all options */
		if (PostAuthDelay > 0)
			pg_usleep(PostAuthDelay * 1000000L);

		/* initialize client encoding */
		InitializeClientEncoding();

		/* report this backend in the PgBackendStatus array */
		pgstat_bestart();

		/* close the transaction we started above */
		CommitTransactionCommand();

		return;
	}

	/*
	 * Set up the global variables holding database id and default tablespace.
	 * But note we won't actually try to touch the database just yet.
	 *
	 * We take a shortcut in the bootstrap case, otherwise we have to look up
	 * the db's entry in pg_database.
	 */
	if (bootstrap)
	{
		dboid = Template1DbOid;
		MyDatabaseTableSpace = DEFAULTTABLESPACE_OID;
	}
	else if (in_dbname != NULL)
	{
		HeapTuple	tuple;
		Form_pg_database dbform;

		tuple = GetDatabaseTuple(in_dbname);
		if (!HeapTupleIsValid(tuple))
			ereport(FATAL,
					(errcode(ERRCODE_UNDEFINED_DATABASE),
					 errmsg("database \"%s\" does not exist", in_dbname)));
		dbform = (Form_pg_database) GETSTRUCT(tuple);
		dboid = dbform->oid;
	}
	else if (!OidIsValid(dboid))
	{
		/*
		 * If this is a background worker not bound to any particular
		 * database, we're done now.  Everything that follows only makes sense
		 * if we are bound to a specific database.  We do need to close the
		 * transaction we started before returning.
		 */
		if (!bootstrap)
		{
			pgstat_bestart();
			CommitTransactionCommand();
		}
		return;
	}

	/*
	 * Now, take a writer's lock on the database we are trying to connect to.
	 * If there is a concurrently running DROP DATABASE on that database, this
	 * will block us until it finishes (and has committed its update of
	 * pg_database).
	 *
	 * Note that the lock is not held long, only until the end of this startup
	 * transaction.  This is OK since we will advertise our use of the
	 * database in the ProcArray before dropping the lock (in fact, that's the
	 * next thing to do).  Anyone trying a DROP DATABASE after this point will
	 * see us in the array once they have the lock.  Ordering is important for
	 * this because we don't want to advertise ourselves as being in this
	 * database until we have the lock; otherwise we create what amounts to a
	 * deadlock with CountOtherDBBackends().
	 *
	 * Note: use of RowExclusiveLock here is reasonable because we envision
	 * our session as being a concurrent writer of the database.  If we had a
	 * way of declaring a session as being guaranteed-read-only, we could use
	 * AccessShareLock for such sessions and thereby not conflict against
	 * CREATE DATABASE.
	 */
	if (!bootstrap)
		LockSharedObject(DatabaseRelationId, dboid, 0, RowExclusiveLock);

	/*
	 * Recheck pg_database to make sure the target database hasn't gone away.
	 * If there was a concurrent DROP DATABASE, this ensures we will die
	 * cleanly without creating a mess.
	 */
	if (!bootstrap)
	{
		HeapTuple	tuple;
		Form_pg_database datform;

		tuple = GetDatabaseTupleByOid(dboid);
		if (HeapTupleIsValid(tuple))
			datform = (Form_pg_database) GETSTRUCT(tuple);

		if (!HeapTupleIsValid(tuple) ||
			(in_dbname && namestrcmp(&datform->datname, in_dbname)))
		{
			if (in_dbname)
				ereport(FATAL,
						(errcode(ERRCODE_UNDEFINED_DATABASE),
						 errmsg("database \"%s\" does not exist", in_dbname),
						 errdetail("It seems to have just been dropped or renamed.")));
			else
				ereport(FATAL,
						(errcode(ERRCODE_UNDEFINED_DATABASE),
						 errmsg("database %u does not exist", dboid)));
		}

		strlcpy(dbname, NameStr(datform->datname), sizeof(dbname));

		if (database_is_invalid_form(datform))
		{
			ereport(FATAL,
					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
					errmsg("cannot connect to invalid database \"%s\"", dbname),
					errhint("Use DROP DATABASE to drop invalid databases."));
		}

		MyDatabaseTableSpace = datform->dattablespace;
Add support for event triggers on authenticated login
This commit introduces a trigger on the login event, allowing actions to
be fired right at user connection. This can be useful for logging or
connection-check purposes as well as for some personalization of the
environment. Usage details are described in the documentation included,
but in short, usage is the same as for other triggers: create a function
returning event_trigger and then create an event trigger on the login
event.
In order to prevent connection-time overhead when there are no triggers,
the commit introduces the pg_database.dathasloginevt flag, which
indicates whether the database has active login triggers. This flag is
set by the CREATE/ALTER EVENT TRIGGER command, and unset at connection
time when no active triggers are found.
Author: Konstantin Knizhnik, Mikhail Gribkov
Discussion: https://postgr.es/m/0d46d29f-4558-3af9-9c85-7774e14a7709%40postgrespro.ru
Reviewed-by: Pavel Stehule, Takayuki Tsunakawa, Greg Nancarrow, Ivan Panchenko
Reviewed-by: Daniel Gustafsson, Teodor Sigaev, Robert Haas, Andres Freund
Reviewed-by: Tom Lane, Andrey Sokolov, Zhihong Yu, Sergey Shinderuk
Reviewed-by: Gregory Stark, Nikita Malakhov, Ted Yu
2 years ago
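A simplified sketch of the connection-time fast path described above; the function name and body are purely illustrative, not the actual implementation, but MyDatabaseHasLoginEventTriggers is the real flag cached from pg_database just below.

#include "postgres.h"
#include "miscadmin.h"			/* MyDatabaseHasLoginEventTriggers (assumed declared here) */

/*
 * Hypothetical sketch: fire login triggers only when the database's
 * dathasloginevt flag was set, so databases without login triggers pay
 * no catalog-lookup cost at connect time.
 */
static void
fire_login_triggers_sketch(void)
{
	if (!MyDatabaseHasLoginEventTriggers)
		return;					/* common case: no login triggers */

	/* ... look up pg_event_trigger and fire login triggers here ... */
}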
		MyDatabaseHasLoginEventTriggers = datform->dathasloginevt;

		/* pass the database name back to the caller */
		if (out_dbname)
			strcpy(out_dbname, dbname);
	}

	/*
	 * Now that we rechecked, we are certain to be connected to a database and
	 * thus can set MyDatabaseId.
	 *
	 * It is important that MyDatabaseId only be set once we are sure that the
	 * target database can no longer be concurrently dropped or renamed.  For
	 * example, without this guarantee, pgstat_update_dbstats() could create
	 * entries for databases that were just dropped in the pgstat shutdown
	 * callback, which could confuse other code paths like the autovacuum
	 * scheduler.
	 */
	MyDatabaseId = dboid;

	/*
	 * Now we can mark our PGPROC entry with the database ID.
	 *
	 * We assume this is an atomic store so no lock is needed; though actually
	 * things would work fine even if it weren't atomic.  Anyone searching the
	 * ProcArray for this database's ID should hold the database lock, so they
	 * would not be executing concurrently with this store.  A process looking
	 * for another database's ID could in theory see a chance match if it read
	 * a partially-updated databaseId value; but as long as all such searches
	 * wait and retry, as in CountOtherDBBackends(), they will certainly see
	 * the correct value on their next try.
	 */
	MyProc->databaseId = MyDatabaseId;

	/*
	 * We established a catalog snapshot while reading pg_authid and/or
	 * pg_database; but until we have set up MyDatabaseId, we won't react to
	 * incoming sinval messages for unshared catalogs, so we won't realize it
	 * if the snapshot has been invalidated.  Assume it's no good anymore.
	 */
	InvalidateCatalogSnapshot();

	/*
	 * Now we should be able to access the database directory safely.  Verify
	 * it's there and looks reasonable.
	 */
	fullpath = GetDatabasePath(MyDatabaseId, MyDatabaseTableSpace);

	if (!bootstrap)
	{
		if (access(fullpath, F_OK) == -1)
		{
			if (errno == ENOENT)
				ereport(FATAL,
						(errcode(ERRCODE_UNDEFINED_DATABASE),
						 errmsg("database \"%s\" does not exist",
								dbname),
						 errdetail("The database subdirectory \"%s\" is missing.",
								   fullpath)));
			else
				ereport(FATAL,
						(errcode_for_file_access(),
						 errmsg("could not access directory \"%s\": %m",
								fullpath)));
		}

		ValidatePgVersion(fullpath);
	}

	SetDatabasePath(fullpath);
	pfree(fullpath);

	/*
	 * It's now possible to do real access to the system catalogs.
	 *
	 * Load relcache entries for the system catalogs.  This must create at
	 * least the minimum set of "nailed-in" cache entries.
	 */
	RelationCacheInitializePhase3();

	/* set up ACL framework (so CheckMyDatabase can check permissions) */
	initialize_acl();

	/*
	 * Re-read the pg_database row for our database, check permissions and set
	 * up database-specific GUC settings.  We can't do this until all the
	 * database-access infrastructure is up.  (Also, it wants to know if the
	 * user is a superuser, so the above stuff has to happen first.)
	 */
	if (!bootstrap)
		CheckMyDatabase(dbname, am_superuser,
						(flags & INIT_PG_OVERRIDE_ALLOW_CONNS) != 0);

	/*
	 * Now process any command-line switches and any additional GUC variable
	 * settings passed in the startup packet.  We couldn't do this before
	 * because we didn't know if the client is a superuser.
	 */
	if (MyProcPort != NULL)
		process_startup_options(MyProcPort, am_superuser);

	/* Process pg_db_role_setting options */
	process_settings(MyDatabaseId, GetSessionUserId());

	/* Apply PostAuthDelay as soon as we've read all options */
	if (PostAuthDelay > 0)
		pg_usleep(PostAuthDelay * 1000000L);

	/*
	 * Initialize various default states that can't be set up until we've
	 * selected the active user and gotten the right GUC settings.
	 */

	/* set default namespace search path */
	InitializeSearchPath();

	/* initialize client encoding */
	InitializeClientEncoding();

	/* Initialize this backend's session state. */
	InitializeSession();
Process session_preload_libraries within InitPostgres's transaction.
Previously we did this after InitPostgres, at a somewhat randomly chosen
place within PostgresMain. However, since commit a0ffa885e doing this
outside a transaction can cause a crash, if we need to check permissions
while replacing a placeholder GUC. (Besides which, a preloaded library
could itself want to do database access within _PG_init.)
To avoid needing an additional transaction start/end in every session,
move the process_session_preload_libraries call to within InitPostgres's
transaction. That requires teaching the code not to call it when
InitPostgres is called from somewhere other than PostgresMain, since
we don't want session_preload_libraries to affect background workers.
The most future-proof solution here seems to be to add an additional
flag parameter to InitPostgres; fortunately, we're not yet very worried
about API stability for v15.
Doing this also exposed the fact that we're currently honoring
session_preload_libraries in walsenders, even those not connected to
any database. This seems, at minimum, a POLA violation: walsenders
are not interactive sessions. Let's stop doing that.
(All these comments also apply to local_preload_libraries, of course.)
Per report from Gurjeet Singh (thanks also to Nathan Bossart and Kyotaro
Horiguchi for review). Backpatch to v15 where a0ffa885e came in.
Discussion: https://postgr.es/m/CABwTF4VEpwTHhRQ+q5MiC5ucngN-whN-PdcKeufX7eLSoAfbZA@mail.gmail.com
3 years ago

	/*
	 * If this is an interactive session, load any libraries that should be
	 * preloaded at backend start.  Since those are determined by GUCs, this
	 * can't happen until GUC settings are complete, but we want it to happen
	 * during the initial transaction in case anything that requires database
	 * access needs to be done.
	 */
	if ((flags & INIT_PG_LOAD_SESSION_LIBS) != 0)
		process_session_preload_libraries();

	/* report this backend in the PgBackendStatus array */
	if (!bootstrap)
		pgstat_bestart();

	/* close the transaction we started above */
	if (!bootstrap)
		CommitTransactionCommand();
}

/*
 * Process any command-line switches and any additional GUC variable
 * settings passed in the startup packet.
 */
static void
process_startup_options(Port *port, bool am_superuser)
{
	GucContext	gucctx;
	ListCell   *gucopts;

	gucctx = am_superuser ? PGC_SU_BACKEND : PGC_BACKEND;

	/*
	 * First process any command-line switches that were included in the
	 * startup packet, if we are in a regular backend.
	 */
	if (port->cmdline_options != NULL)
	{
		/*
		 * The maximum possible number of commandline arguments that could
		 * come from port->cmdline_options is (strlen + 1) / 2; see
		 * pg_split_opts().
		 */
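		/*
		 * Illustration: single-character options separated by single spaces
		 * hit that bound exactly; "a b c" has length 5 and splits into
		 * (5 + 1) / 2 = 3 arguments.
		 */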
		char	  **av;
		int			maxac;
		int			ac;

		maxac = 2 + (strlen(port->cmdline_options) + 1) / 2;

		av = (char **) palloc(maxac * sizeof(char *));
		ac = 0;

		av[ac++] = "postgres";

		pg_split_opts(av, &ac, port->cmdline_options);

		av[ac] = NULL;

		Assert(ac < maxac);

		(void) process_postgres_switches(ac, av, gucctx, NULL);
	}

	/*
	 * Process any additional GUC variable settings passed in startup packet.
	 * These are handled exactly like command-line variables.
	 */
	gucopts = list_head(port->guc_options);
	while (gucopts)
	{
		char	   *name;
		char	   *value;

		name = lfirst(gucopts);
Represent Lists as expansible arrays, not chains of cons-cells.
Originally, Postgres Lists were a more or less exact reimplementation of
Lisp lists, which consist of chains of separately-allocated cons cells,
each having a value and a next-cell link. We'd hacked that once before
(commit d0b4399d8) to add a separate List header, but the data was still
in cons cells. That makes some operations -- notably list_nth() -- O(N),
and it's bulky because of the next-cell pointers and per-cell palloc
overhead, and it's very cache-unfriendly if the cons cells end up
scattered around rather than being adjacent.
In this rewrite, we still have List headers, but the data is in a
resizable array of values, with no next-cell links. Now we need at
most two palloc's per List, and often only one, since we can allocate
some values in the same palloc call as the List header. (Of course,
extending an existing List may require repalloc's to enlarge the array.
But this involves just O(log N) allocations not O(N).)
Of course this is not without downsides. The key difficulty is that
addition or deletion of a list entry may now cause other entries to
move, which it did not before.
For example, that breaks foreach() and sister macros, which historically
used a pointer to the current cons-cell as loop state. We can repair
those macros transparently by making their actual loop state be an
integer list index; the exposed "ListCell *" pointer is no longer state
carried across loop iterations, but is just a derived value. (In
practice, modern compilers can optimize things back to having just one
loop state value, at least for simple cases with inline loop bodies.)
In principle, this is a semantics change for cases where the loop body
inserts or deletes list entries ahead of the current loop index; but
I found no such cases in the Postgres code.
The change is not at all transparent for code that doesn't use foreach()
but chases lists "by hand" using lnext(). The largest share of such
code in the backend is in loops that were maintaining "prev" and "next"
variables in addition to the current-cell pointer, in order to delete
list cells efficiently using list_delete_cell(). However, we no longer
need a previous-cell pointer to delete a list cell efficiently. Keeping
a next-cell pointer doesn't work, as explained above, but we can improve
matters by changing such code to use a regular foreach() loop and then
using the new macro foreach_delete_current() to delete the current cell.
(This macro knows how to update the associated foreach loop's state so
that no cells will be missed in the traversal.)
There remains a nontrivial risk of code assuming that a ListCell *
pointer will remain good over an operation that could now move the list
contents. To help catch such errors, list.c can be compiled with a new
define symbol DEBUG_LIST_MEMORY_USAGE that forcibly moves list contents
whenever that could possibly happen. This makes list operations
significantly more expensive so it's not normally turned on (though it
is on by default if USE_VALGRIND is on).
There are two notable API differences from the previous code:
* lnext() now requires the List's header pointer in addition to the
current cell's address.
* list_delete_cell() no longer requires a previous-cell argument.
These changes are somewhat unfortunate, but on the other hand code using
either function needs inspection to see if it is assuming anything
it shouldn't, so it's not all bad.
Programmers should be aware of these significant performance changes:
* list_nth() and related functions are now O(1); so there's no
major access-speed difference between a list and an array.
* Inserting or deleting a list element now takes time proportional to
the distance to the end of the list, due to moving the array elements.
(However, it typically *doesn't* require palloc or pfree, so except in
long lists it's probably still faster than before.) Notably, lcons()
used to be about the same cost as lappend(), but that's no longer true
if the list is long. Code that uses lcons() and list_delete_first()
to maintain a stack might usefully be rewritten to push and pop at the
end of the list rather than the beginning.
* There are now list_insert_nth...() and list_delete_nth...() functions
that add or remove a list cell identified by index. These have the
data-movement penalty explained above, but there's no search penalty.
* list_concat() and variants now copy the second list's data into
storage belonging to the first list, so there is no longer any
sharing of cells between the input lists. The second argument is
now declared "const List *" to reflect that it isn't changed.
This patch just does the minimum needed to get the new implementation
in place and fix bugs exposed by the regression tests. As suggested
by the foregoing, there's a fair amount of followup work remaining to
do.
Also, the ENABLE_LIST_COMPAT macros are finally removed in this
commit. Code using those should have been gone a dozen years ago.
Patch by me; thanks to David Rowley, Jesper Pedersen, and others
for review.
Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
6 years ago
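As a self-contained illustration of the foreach()/foreach_delete_current() idiom described above (the drop-odd-integers rule and function name are purely hypothetical):

#include "postgres.h"
#include "nodes/pg_list.h"

/*
 * Sketch: delete cells while iterating.  foreach_delete_current() adjusts
 * the loop's internal index, so no cell is skipped when the array contents
 * shift left after a deletion.
 */
static List *
drop_odd_ints(List *ints)
{
	ListCell   *lc;

	foreach(lc, ints)
	{
		if (lfirst_int(lc) % 2 != 0)
			ints = foreach_delete_current(ints, lc);
	}
	return ints;
}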
		gucopts = lnext(port->guc_options, gucopts);

		value = lfirst(gucopts);
		gucopts = lnext(port->guc_options, gucopts);

		SetConfigOption(name, value, gucctx, PGC_S_CLIENT);
	}
}

/*
 * Load GUC settings from pg_db_role_setting.
 *
 * We try specific settings for the database/role combination, as well as
 * general settings for this database and for this user.
 */
static void
process_settings(Oid databaseid, Oid roleid)
{
	Relation	relsetting;
Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row. In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result. This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.
The major issue that has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow. However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads. To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed. The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all. Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.
Patch by me. Review by Michael Paquier and Andres Freund.
12 years ago
	Snapshot	snapshot;

	if (!IsUnderPostmaster)
		return;

	relsetting = table_open(DbRoleSettingRelationId, AccessShareLock);

	/* read all the settings under the same snapshot for efficiency */
	snapshot = RegisterSnapshot(GetCatalogSnapshot(DbRoleSettingRelationId));

	/* Later settings are ignored if set earlier. */
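	/*
	 * Hence the calls below run from most specific to most generic:
	 * database-and-role, then role-only, then database-only, and finally
	 * entries with neither database nor role set (ALTER ROLE ALL SET).
	 */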
	ApplySetting(snapshot, databaseid, roleid, relsetting, PGC_S_DATABASE_USER);
	ApplySetting(snapshot, InvalidOid, roleid, relsetting, PGC_S_USER);
	ApplySetting(snapshot, databaseid, InvalidOid, relsetting, PGC_S_DATABASE);
	ApplySetting(snapshot, InvalidOid, InvalidOid, relsetting, PGC_S_GLOBAL);

	UnregisterSnapshot(snapshot);
	table_close(relsetting, AccessShareLock);
}

/*
 * Backend-shutdown callback.  Do cleanup that we want to be sure happens
 * before all the supporting modules begin to nail their doors shut via
 * their own callbacks.
 *
 * User-level cleanup, such as temp-relation removal and UNLISTEN, happens
 * via separate callbacks that execute before this one.  We don't combine the
 * callbacks because we still want this one to happen if the user-level
 * cleanup fails.
 */
static void
ShutdownPostgres(int code, Datum arg)
{
	/* Make sure we've killed any active transaction */
	AbortOutOfAnyTransaction();

	/*
	 * User locks are not released by transaction end, so be sure to release
	 * them explicitly.
	 */
	LockReleaseAll(USER_LOCKMETHOD, true);
}

Introduce timeout handling framework
Management of timeouts was getting a little cumbersome; what we
originally had was more than enough back when we were only concerned
about deadlocks and query cancel; however, when we added timeouts for
standby processes, the code got considerably messier. Since there are
plans to add more complex timeouts, this seems a good time to introduce
a central timeout handling module.
External modules register their timeout handlers during process
initialization, and later enable and disable them as they see fit using
a simple API; timeout.c is in charge of keeping track of which timeouts
are in effect at any time, installing a common SIGALRM signal handler,
and calling setitimer() as appropriate to ensure timely firing of
external handlers.
timeout.c additionally supports pluggable modules to add their own
timeouts, though this capability isn't exercised anywhere yet.
Additionally, as of this commit, walsender processes are aware of
timeouts; we had a preexisting bug there that made those ignore SIGALRM,
thus being subject to unhandled deadlocks, particularly during the
authentication phase. This has already been fixed in back branches in
commit 0bf8eb2a, which see for more details.
Main author: Zoltán Böszörményi
Some review and cleanup by Álvaro Herrera
Extensive reworking by Tom Lane
13 years ago
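The registration API this message introduces can be sketched as follows; RegisterTimeout(), enable_timeout_after(), and the USER_TIMEOUT slot are the real timeout.c facilities, while the handler, flag, and five-second delay are illustrative only.

#include "postgres.h"

#include <signal.h>

#include "utils/timeout.h"

static TimeoutId my_timeout_id;
static volatile sig_atomic_t my_timeout_fired = false;

/* Timeout handlers run in SIGALRM handler context: only set flags here. */
static void
my_timeout_handler(void)
{
	my_timeout_fired = true;
}

/* Once at process startup: claim the slot reserved for add-on code. */
static void
register_my_timeout(void)
{
	my_timeout_id = RegisterTimeout(USER_TIMEOUT, my_timeout_handler);
}

/* Later, whenever a one-shot timeout is wanted. */
static void
arm_my_timeout(void)
{
	enable_timeout_after(my_timeout_id, 5000);	/* fire in 5 seconds */
}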

/*
 * STATEMENT_TIMEOUT handler: trigger a query-cancel interrupt.
 */
static void
StatementTimeoutHandler(void)
{
	int			sig = SIGINT;

	/*
	 * During authentication the timeout is used to deal with
	 * authentication_timeout - we want to quit in response to such timeouts.
	 */
	if (ClientAuthInProgress)
		sig = SIGTERM;
#ifdef HAVE_SETSID
	/* try to signal whole process group */
	kill(-MyProcPid, sig);
#endif
	kill(MyProcPid, sig);
}

/*
 * LOCK_TIMEOUT handler: trigger a query-cancel interrupt.
 */
static void
LockTimeoutHandler(void)
{
#ifdef HAVE_SETSID
	/* try to signal whole process group */
	kill(-MyProcPid, SIGINT);
#endif
	kill(MyProcPid, SIGINT);
}
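
/*
 * The handlers below all follow the same pattern: set a pending flag plus
 * InterruptPending, then wake the process latch so the mainline code
 * notices the timeout at its next interrupt check.
 */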

static void
TransactionTimeoutHandler(void)
{
	TransactionTimeoutPending = true;
	InterruptPending = true;
	SetLatch(MyLatch);
}

static void
IdleInTransactionSessionTimeoutHandler(void)
{
	IdleInTransactionSessionTimeoutPending = true;
	InterruptPending = true;
	SetLatch(MyLatch);
}
static void
IdleSessionTimeoutHandler(void)
{
	IdleSessionTimeoutPending = true;
	InterruptPending = true;
	SetLatch(MyLatch);
}

static void
IdleStatsUpdateTimeoutHandler(void)
{
	IdleStatsUpdateTimeoutPending = true;
	InterruptPending = true;
	SetLatch(MyLatch);
}

static void
ClientCheckTimeoutHandler(void)
{
	CheckClientConnectionPending = true;
	InterruptPending = true;
	SetLatch(MyLatch);
}

/*
 * Returns true if at least one role is defined in this database cluster.
 */
static bool
ThereIsAtLeastOneRole(void)
{
	Relation	pg_authid_rel;
tableam: Add and use scan APIs.
To allow table accesses to not be directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on,
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends needs
to be set up without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
initialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AM's
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
would also have needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will do so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
6 years ago
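A minimal sketch of the slot-based scan pattern the message describes; the counting function is illustrative, while table_slot_create(), table_beginscan_catalog(), table_scan_getnextslot(), and table_endscan() are the real entry points.

#include "postgres.h"
#include "access/tableam.h"
#include "executor/tuptable.h"

/* Sketch: count visible tuples in a catalog via the AM-independent API. */
static uint64
count_tuples(Relation rel)
{
	TableScanDesc scan;
	TupleTableSlot *slot;
	uint64		ntuples = 0;

	/* slot type comes from the relation's AM via table_slot_callbacks() */
	slot = table_slot_create(rel, NULL);
	scan = table_beginscan_catalog(rel, 0, NULL);

	while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
		ntuples++;

	table_endscan(scan);
	ExecDropSingleTupleTableSlot(slot);
	return ntuples;
}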
	TableScanDesc scan;
	bool		result;

	pg_authid_rel = table_open(AuthIdRelationId, AccessShareLock);

	scan = table_beginscan_catalog(pg_authid_rel, 0, NULL);
	result = (heap_getnext(scan, ForwardScanDirection) != NULL);

	table_endscan(scan);
	table_close(pg_authid_rel, AccessShareLock);

	return result;
}