/*-------------------------------------------------------------------------
 *
 * portal.h
 *	  POSTGRES portal definitions.
 *
 * A portal is an abstraction which represents the execution state of
 * a running or runnable query.  Portals support both SQL-level CURSORs
 * and protocol-level portals.
 *
 * Scrolling (nonsequential access) and suspension of execution are allowed
 * only for portals that contain a single SELECT-type query.  We do not want
 * to let the client suspend an update-type query partway through!  Because
 * the query rewriter does not allow arbitrary ON SELECT rewrite rules,
 * only queries that were originally update-type could produce multiple
 * plan trees; so the restriction to a single query is not a problem
 * in practice.
 *
 * For SQL cursors, we support three kinds of scroll behavior:
 *
 * (1) Neither NO SCROLL nor SCROLL was specified: to remain backward
 *	   compatible, we allow backward fetches here, unless it would
 *	   impose additional runtime overhead to do so.
 *
 * (2) NO SCROLL was specified: don't allow any backward fetches.
 *
 * (3) SCROLL was specified: allow all kinds of backward fetches, even
 *	   if we need to take a performance hit to do so.  (The planner sticks
 *	   a Materialize node atop the query plan if needed.)
 *
 * Case #1 is converted to #2 or #3 by looking at the query itself and
 * determining if scrollability can be supported without additional
 * overhead.
 *
 * Protocol-level portals have no nonsequential-fetch API and so the
 * distinction doesn't matter for them.  They are always initialized
 * to look like NO SCROLL cursors.
 *
 *
 * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/utils/portal.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef PORTAL_H
#define PORTAL_H

#include "datatype/timestamp.h"
#include "executor/execdesc.h"
#include "tcop/cmdtag.h"
#include "utils/plancache.h"
#include "utils/resowner.h"
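
/*
 * For illustration only (standard PostgreSQL cursor syntax, not part of
 * this header's API): the three scroll behaviors described above are
 * requested as follows.
 *
 *		DECLARE c1 CURSOR FOR SELECT ...;              -- case (1)
 *		DECLARE c2 NO SCROLL CURSOR FOR SELECT ...;    -- case (2)
 *		DECLARE c3 SCROLL CURSOR FOR SELECT ...;       -- case (3)
 */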
/*
 * We have several execution strategies for Portals, depending on what
 * query or queries are to be executed.  (Note: in all cases, a Portal
 * executes just a single source-SQL query, and thus produces just a
 * single result from the user's viewpoint.  However, the rule rewriter
 * may expand the single source query to zero or many actual queries.)
 *
 * PORTAL_ONE_SELECT: the portal contains one single SELECT query.  We run
 * the Executor incrementally as results are demanded.  This strategy also
 * supports holdable cursors (the Executor results can be dumped into a
 * tuplestore for access after transaction completion).
 *
 * PORTAL_ONE_RETURNING: the portal contains a single INSERT/UPDATE/DELETE/
 * MERGE query with a RETURNING clause (plus possibly auxiliary queries added
 * by rule rewriting).  On first execution, we run the portal to completion
 * and dump the primary query's results into the portal tuplestore; the
 * results are then returned to the client as demanded.  (We can't support
 * suspension of the query partway through, because the AFTER TRIGGER code
 * can't cope, and also because we don't want to risk failing to execute
 * all the auxiliary queries.)
 *
 * PORTAL_ONE_MOD_WITH: the portal contains one single SELECT query, but
 * it has data-modifying CTEs.  This is currently treated the same as the
 * PORTAL_ONE_RETURNING case because of the possibility of needing to fire
 * triggers.  It may act more like PORTAL_ONE_SELECT in future.
 *
 * PORTAL_UTIL_SELECT: the portal contains a utility statement that returns
 * a SELECT-like result (for example, EXPLAIN or SHOW).  On first execution,
 * we run the statement and dump its results into the portal tuplestore;
 * the results are then returned to the client as demanded.
 *
 * PORTAL_MULTI_QUERY: all other cases.  Here, we do not support partial
 * execution: the portal's queries will be run to completion on first call.
 */
typedef enum PortalStrategy
{
	PORTAL_ONE_SELECT,
	PORTAL_ONE_RETURNING,
	PORTAL_ONE_MOD_WITH,
	PORTAL_UTIL_SELECT,
	PORTAL_MULTI_QUERY,
} PortalStrategy;
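
/*
 * Illustrative sketch, not part of this header's API: consumers of a
 * Portal typically branch on the strategy roughly as below.  The helper
 * name is hypothetical and the guard macro is never defined; the code is
 * shown for exposition only.
 */
#ifdef PORTAL_H_EXAMPLES
static bool
PortalRunsToCompletionOnFirstCall(PortalStrategy strategy)
{
	switch (strategy)
	{
		case PORTAL_ONE_SELECT:
			return false;		/* Executor runs incrementally on demand */
		case PORTAL_ONE_RETURNING:
		case PORTAL_ONE_MOD_WITH:
		case PORTAL_UTIL_SELECT:
		case PORTAL_MULTI_QUERY:
			return true;		/* results are materialized up front */
	}
	return true;				/* unreachable, but keeps compilers quiet */
}
#endif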

/*
 * A portal is always in one of these states.  It is possible to transit
 * from ACTIVE back to READY if the query is not run to completion;
 * otherwise we never back up in status.
 */
typedef enum PortalStatus
{
	PORTAL_NEW,					/* freshly created */
	PORTAL_DEFINED,				/* PortalDefineQuery done */
	PORTAL_READY,				/* PortalStart complete, can run it */
	PORTAL_ACTIVE,				/* portal is running (can't delete it) */
	PORTAL_DONE,				/* portal is finished (don't re-run it) */
	PORTAL_FAILED,				/* portal got error (can't re-run it) */
} PortalStatus;
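
/*
 * Illustrative sketch (hypothetical helper, not in portalmem.c), assuming
 * the enum order above encodes forward progress: the only backward
 * transition the rule above permits is ACTIVE -> READY.  The guard macro
 * is never defined.
 */
#ifdef PORTAL_H_EXAMPLES
static bool
PortalStatusTransitionIsLegal(PortalStatus from, PortalStatus to)
{
	if (to >= from)
		return true;			/* never back up in status... */
	/* ...except to suspend a query that is not run to completion */
	return (from == PORTAL_ACTIVE && to == PORTAL_READY);
}
#endif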

typedef struct PortalData *Portal;

typedef struct PortalData
{
	/* Bookkeeping data */
	const char *name;			/* portal's name */
	const char *prepStmtName;	/* source prepared statement (NULL if none) */
	MemoryContext portalContext;	/* subsidiary memory for portal */
	ResourceOwner resowner;		/* resources owned by portal */
	void		(*cleanup) (Portal portal); /* cleanup hook */

	/*
	 * State data for remembering which subtransaction(s) the portal was
	 * created or used in.  If the portal is held over from a previous
	 * transaction, both subxids are InvalidSubTransactionId.  Otherwise,
	 * createSubid is the creating subxact and activeSubid is the last
	 * subxact in which we ran the portal.
	 */
	SubTransactionId createSubid;	/* the creating subxact */
	SubTransactionId activeSubid;	/* the last subxact with activity */
	int			createLevel;	/* creating subxact's nesting level */
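
	/*
	 * Example (directly from the rule above): a portal held over from a
	 * previous transaction can be recognized by testing
	 *		portal->createSubid == InvalidSubTransactionId
	 * (activeSubid is then InvalidSubTransactionId as well).
	 */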

	/* The query or queries the portal will execute */
	const char *sourceText;		/* text of query (as of 8.4, never NULL) */
	CommandTag	commandTag;		/* command tag for original query */
	QueryCompletion qc;			/* command completion data for executed query */
	List	   *stmts;			/* list of PlannedStmts */
	CachedPlan *cplan;			/* CachedPlan, if stmts are from one */

	ParamListInfo portalParams; /* params to pass to query */
	QueryEnvironment *queryEnv; /* environment for query */

	/* Features/options */
	PortalStrategy strategy;	/* see above */
	int			cursorOptions;	/* DECLARE CURSOR option bits */
	bool		run_once;		/* portal will only be run once */

	/* Status data */
	PortalStatus status;		/* see above */
	bool		portalPinned;	/* a pinned portal can't be dropped */
	bool		autoHeld;		/* was automatically converted from pinned to
								 * held (see HoldPinnedPortals()) */

	/* If not NULL, Executor is active; call ExecutorEnd eventually: */
	QueryDesc  *queryDesc;		/* info needed for executor invocation */

	/* If portal returns tuples, this is their tupdesc: */
	TupleDesc	tupDesc;		/* descriptor for result tuples */
	/* and these are the format codes to use for the columns: */
	int16	   *formats;		/* a format code for each column */
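	/*
	 * For illustration: in the frontend/backend protocol these format codes
	 * are 0 for text output and 1 for binary output, per column.
	 */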

	/*
	 * Outermost ActiveSnapshot for execution of the portal's queries.  For
	 * all but a few utility commands, we require such a snapshot to exist.
	 * This ensures that TOAST references in query results can be detoasted,
	 * and helps to reduce thrashing of the process's exposed xmin.
	 */
	Snapshot	portalSnapshot; /* active snapshot, or NULL if none */

	/*
	 * Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING,
	 * PORTAL_ONE_MOD_WITH, or PORTAL_UTIL_SELECT query.  (A cursor held past
	 * the end of its transaction no longer has any active executor state.)
	 */
	Tuplestorestate *holdStore; /* store for holdable cursors */
	MemoryContext holdContext;	/* memory containing holdStore */

	/*
	 * Snapshot under which tuples in the holdStore were read.  We must keep
	 * a reference to this snapshot if there is any possibility that the
	 * tuples contain TOAST references, because releasing the snapshot could
	 * allow recently-dead rows to be vacuumed away, along with any toast
	 * data belonging to them.  In the case of a held cursor, we avoid
	 * needing to keep such a snapshot by forcibly detoasting the data.
	 */
	Snapshot	holdSnapshot;	/* registered snapshot, or NULL if none */

	/*
	 * atStart, atEnd and portalPos indicate the current cursor position.
	 * portalPos is zero before the first row, N after fetching N'th row of
	 * query.  After we run off the end, portalPos = # of rows in query, and
	 * atEnd is true.  Note that atStart implies portalPos == 0, but not the
	 * reverse: we might have backed up only as far as the first row, not to
	 * the start.  Also note that various code inspects atStart and atEnd,
	 * but only the portal movement routines should touch portalPos.
	 */
	bool		atStart;
	bool		atEnd;
	uint64		portalPos;

	/* Presentation data, primarily used by the pg_cursors system view */
	TimestampTz creation_time;	/* time at which this portal was defined */
	bool		visible;		/* include this portal in pg_cursors? */
} PortalData;

/*
 * PortalIsValid
 *		True iff portal is valid.
 */
#define PortalIsValid(p) PointerIsValid(p)
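
/*
 * Typical use, for illustration (GetPortalByName() returns NULL when no
 * portal of that name exists; the error report shown is only a sketch):
 *
 *		Portal		portal = GetPortalByName(name);
 *
 *		if (!PortalIsValid(portal))
 *			ereport(ERROR,
 *					(errcode(ERRCODE_UNDEFINED_CURSOR),
 *					 errmsg("cursor \"%s\" does not exist", name)));
 */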

/* Prototypes for functions in utils/mmgr/portalmem.c */
extern void EnablePortalManager(void);
extern bool PreCommit_Portals(bool isPrepare);
extern void AtAbort_Portals(void);
extern void AtCleanup_Portals(void);
extern void PortalErrorCleanup(void);
extern void AtSubCommit_Portals(SubTransactionId mySubid,
								SubTransactionId parentSubid,
								int parentLevel,
								ResourceOwner parentXactOwner);
extern void AtSubAbort_Portals(SubTransactionId mySubid,
							   SubTransactionId parentSubid,
							   ResourceOwner myXactOwner,
							   ResourceOwner parentXactOwner);
extern void AtSubCleanup_Portals(SubTransactionId mySubid);
extern Portal CreatePortal(const char *name, bool allowDup, bool dupSilent);
extern Portal CreateNewPortal(void);
extern void PinPortal(Portal portal);
extern void UnpinPortal(Portal portal);
extern void MarkPortalActive(Portal portal);
extern void MarkPortalDone(Portal portal);
extern void MarkPortalFailed(Portal portal);
extern void PortalDrop(Portal portal, bool isTopCommit);
extern Portal GetPortalByName(const char *name);
extern void PortalDefineQuery(Portal portal,
							  const char *prepStmtName,
							  const char *sourceText,
							  CommandTag commandTag,
							  List *stmts,
							  CachedPlan *cplan);
extern PlannedStmt *PortalGetPrimaryStmt(Portal portal);
extern void PortalCreateHoldStore(Portal portal);
extern void PortalHashTableDeleteAll(void);
extern bool ThereAreNoReadyPortals(void);
extern void HoldPinnedPortals(void);
extern void ForgetPortalSnapshots(void);
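
/*
 * Illustrative lifecycle sketch (hypothetical function and variable names;
 * PortalStart() and PortalRun() are assumed to come from tcop/pquery.h and
 * are elided here).  The guard macro is never defined; this is exposition,
 * not a definitive usage pattern.
 */
#ifdef PORTAL_H_EXAMPLES
static void
run_unnamed_portal_example(List *stmts, const char *query_string)
{
	/* create the unnamed portal, silently replacing any existing one */
	Portal		portal = CreatePortal("", true, true);

	PortalDefineQuery(portal, NULL, query_string,
					  CMDTAG_SELECT, stmts, NULL);
	/* ... PortalStart() and PortalRun() would go here ... */
	PortalDrop(portal, false);
}
#endif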

#endif							/* PORTAL_H */