/*-------------------------------------------------------------------------
 *
 * readfuncs.c
 *    Reader functions for Postgres tree nodes.
 *
 * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *    src/backend/nodes/readfuncs.c
 *
 * NOTES
 *    Path nodes do not have any readfuncs support, because we never
 *    have occasion to read them in.  (There was once code here that
 *    claimed to read them, but it was broken as well as unused.)  We
 *    never read executor state trees, either.
 *
 *    Parse location fields are written out by outfuncs.c, but only for
 *    debugging use.  When reading a location field, we normally discard
 *    the stored value and set the location field to -1 (ie, "unknown").
 *    This is because nodes coming from a stored rule should not be thought
 *    to have a known location in the current query's text.
 *
 *    However, if restore_location_fields is true, we do restore location
 *    fields from the string.  This is currently intended only for use by the
 *    WRITE_READ_PARSE_PLAN_TREES test code, which doesn't want to cause
 *    any change in the node contents.
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <math.h>

#include "miscadmin.h"
#include "nodes/bitmapset.h"
#include "nodes/readfuncs.h"


/*
 * Macros to simplify reading of different kinds of fields.  Use these
 * wherever possible to reduce the chance for silly typos.  Note that these
 * hard-wire conventions about the names of the local variables in a Read
 * routine.
 */

/* Macros for declaring appropriate local variables */

/* A few guys need only local_node */
#define READ_LOCALS_NO_FIELDS(nodeTypeName) \
    nodeTypeName *local_node = makeNode(nodeTypeName)

/* And a few guys need only the pg_strtok support fields */
#define READ_TEMP_LOCALS() \
    const char *token; \
    int         length

/* ... but most need both */
#define READ_LOCALS(nodeTypeName) \
    READ_LOCALS_NO_FIELDS(nodeTypeName); \
    READ_TEMP_LOCALS()

/* Read an integer field (anything written as ":fldname %d") */
#define READ_INT_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = atoi(token)

/* Read an unsigned integer field (anything written as ":fldname %u") */
#define READ_UINT_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = atoui(token)

/* Read an unsigned integer field (anything written using UINT64_FORMAT) */
#define READ_UINT64_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = strtou64(token, NULL, 10)

/* Read a long integer field (anything written as ":fldname %ld") */
#define READ_LONG_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = atol(token)

/* Read an OID field (don't hard-wire assumption that OID is same as uint) */
#define READ_OID_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = atooid(token)

/* Read a char field (ie, one ascii character) */
#define READ_CHAR_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    /* avoid overhead of calling debackslash() for one char */ \
    local_node->fldname = (length == 0) ? '\0' : (token[0] == '\\' ? token[1] : token[0])

/* Read an enumerated-type field that was written as an integer code */
#define READ_ENUM_FIELD(fldname, enumtype) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = (enumtype) atoi(token)

/* Read a float field */
#define READ_FLOAT_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = atof(token)

/* Read a boolean field */
#define READ_BOOL_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = strtobool(token)

/* Read a character-string field */
#define READ_STRING_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = nullable_string(token, length)

/* Read a parse location field (and possibly throw away the value) */
#ifdef WRITE_READ_PARSE_PLAN_TREES
#define READ_LOCATION_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    local_node->fldname = restore_location_fields ? atoi(token) : -1
#else
#define READ_LOCATION_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    token = pg_strtok(&length);     /* get field value */ \
    (void) token;               /* in case not used elsewhere */ \
    local_node->fldname = -1    /* set field to "unknown" */
#endif

/* Read a Node field */
#define READ_NODE_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    (void) token;               /* in case not used elsewhere */ \
    local_node->fldname = nodeRead(NULL, 0)

/* Read a bitmapset field */
#define READ_BITMAPSET_FIELD(fldname) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    (void) token;               /* in case not used elsewhere */ \
    local_node->fldname = _readBitmapset()

/* Read an attribute number array */
#define READ_ATTRNUMBER_ARRAY(fldname, len) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    local_node->fldname = readAttrNumberCols(len)

/* Read an oid array */
#define READ_OID_ARRAY(fldname, len) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    local_node->fldname = readOidCols(len)

/* Read an int array */
#define READ_INT_ARRAY(fldname, len) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    local_node->fldname = readIntCols(len)

/* Read a bool array */
#define READ_BOOL_ARRAY(fldname, len) \
    token = pg_strtok(&length);     /* skip :fldname */ \
    local_node->fldname = readBoolCols(len)

/* Routine exit */
#define READ_DONE() \
    return local_node
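
/*
 * For illustration, a complete Read routine is just a composition of the
 * macros above.  Sketch for a hypothetical node type "Foo" (the type and
 * field names here are invented, not from this file):
 *
 *    static Foo *
 *    _readFoo(void)
 *    {
 *        READ_LOCALS(Foo);
 *
 *        READ_INT_FIELD(bar);
 *        READ_NODE_FIELD(baz);
 *
 *        READ_DONE();
 *    }
 *
 * READ_LOCALS declares local_node, token, and length; each READ_*_FIELD
 * consumes the ":fldname value" token pair emitted by outfuncs.c, and
 * READ_DONE returns the filled-in node.
 */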


/*
 * NOTE: use atoi() to read values written with %d, or atoui() to read
 * values written with %u in outfuncs.c.  An exception is OID values,
 * for which use atooid().  (As of 7.1, outfuncs.c writes OIDs as %u,
 * but this will probably change in the future.)
 */
#define atoui(x)  ((unsigned int) strtoul((x), NULL, 10))

#define strtobool(x)  ((*(x) == 't') ? true : false)

#define nullable_string(token,length)  \
    ((length) == 0 ? NULL : debackslash(token, length))


/*
 * _readBitmapset
 */
static Bitmapset *
_readBitmapset(void)
{
    Bitmapset  *result = NULL;

    READ_TEMP_LOCALS();

    token = pg_strtok(&length);
    if (token == NULL)
        elog(ERROR, "incomplete Bitmapset structure");
    if (length != 1 || token[0] != '(')
        elog(ERROR, "unrecognized token: \"%.*s\"", length, token);

    token = pg_strtok(&length);
    if (token == NULL)
        elog(ERROR, "incomplete Bitmapset structure");
    if (length != 1 || token[0] != 'b')
        elog(ERROR, "unrecognized token: \"%.*s\"", length, token);

    for (;;)
    {
        int         val;
        char       *endptr;

        token = pg_strtok(&length);
        if (token == NULL)
            elog(ERROR, "unterminated Bitmapset structure");
        if (length == 1 && token[0] == ')')
            break;
        val = (int) strtol(token, &endptr, 10);
        if (endptr != token + length)
            elog(ERROR, "unrecognized integer: \"%.*s\"", length, token);
        result = bms_add_member(result, val);
    }

    return result;
}
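
/*
 * For example, reading the serialized form "(b 3 5 11)" yields a Bitmapset
 * with members 3, 5, and 11, while "(b)" yields NULL, the empty set.
 */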

/*
 * for use by extensions which define extensible nodes
 */
Bitmapset *
readBitmapset(void)
{
    return _readBitmapset();
}

#include "readfuncs.funcs.c"


/*
 * Support functions for nodes with custom_read_write attribute or
 * special_read_write attribute
 */

static Query *
_readQuery(void)
{
    READ_LOCALS(Query);

    READ_ENUM_FIELD(commandType, CmdType);
    READ_ENUM_FIELD(querySource, QuerySource);
    local_node->queryId = UINT64CONST(0);   /* not saved in output format */
    READ_BOOL_FIELD(canSetTag);
    READ_NODE_FIELD(utilityStmt);
    READ_INT_FIELD(resultRelation);
    READ_BOOL_FIELD(hasAggs);
    READ_BOOL_FIELD(hasWindowFuncs);
    READ_BOOL_FIELD(hasTargetSRFs);
    READ_BOOL_FIELD(hasSubLinks);
    READ_BOOL_FIELD(hasDistinctOn);
    READ_BOOL_FIELD(hasRecursive);
    READ_BOOL_FIELD(hasModifyingCTE);
    READ_BOOL_FIELD(hasForUpdate);
    READ_BOOL_FIELD(hasRowSecurity);
    READ_BOOL_FIELD(isReturn);
    READ_NODE_FIELD(cteList);
    READ_NODE_FIELD(rtable);
    READ_NODE_FIELD(jointree);
    READ_NODE_FIELD(targetList);
    READ_ENUM_FIELD(override, OverridingKind);
    READ_NODE_FIELD(onConflict);
    READ_NODE_FIELD(returningList);
    READ_NODE_FIELD(groupClause);
    READ_BOOL_FIELD(groupDistinct);
    READ_NODE_FIELD(groupingSets);
    READ_NODE_FIELD(havingQual);
    READ_NODE_FIELD(windowClause);
    READ_NODE_FIELD(distinctClause);
    READ_NODE_FIELD(sortClause);
    READ_NODE_FIELD(limitOffset);
    READ_NODE_FIELD(limitCount);
    READ_ENUM_FIELD(limitOption, LimitOption);
    READ_NODE_FIELD(rowMarks);
    READ_NODE_FIELD(setOperations);
    READ_NODE_FIELD(constraintDeps);
    READ_NODE_FIELD(withCheckOptions);
    READ_NODE_FIELD(mergeActionList);
    READ_BOOL_FIELD(mergeUseOuterJoin);
    READ_LOCATION_FIELD(stmt_location);
    READ_INT_FIELD(stmt_len);

    READ_DONE();
}

static Const *
_readConst(void)
{
    READ_LOCALS(Const);

    READ_OID_FIELD(consttype);
    READ_INT_FIELD(consttypmod);
    READ_OID_FIELD(constcollid);
    READ_INT_FIELD(constlen);
    READ_BOOL_FIELD(constbyval);
    READ_BOOL_FIELD(constisnull);
    READ_LOCATION_FIELD(location);

    token = pg_strtok(&length); /* skip :constvalue */
    if (local_node->constisnull)
        token = pg_strtok(&length); /* skip "<>" */
    else
        local_node->constvalue = readDatum(local_node->constbyval);

    READ_DONE();
}
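
/*
 * As a worked example (byte values are machine-dependent, so this is only
 * illustrative): a non-null int4 Const with value 1 might be serialized
 * with ":constvalue 4 [ 1 0 0 0 ]", which readDatum() below reassembles
 * into a Datum, while a null Const is serialized with ":constvalue <>".
 */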

static BoolExpr *
_readBoolExpr(void)
{
    READ_LOCALS(BoolExpr);

    /* do-it-yourself enum representation */
    token = pg_strtok(&length); /* skip :boolop */
    token = pg_strtok(&length); /* get field value */
    if (strncmp(token, "and", 3) == 0)
        local_node->boolop = AND_EXPR;
    else if (strncmp(token, "or", 2) == 0)
        local_node->boolop = OR_EXPR;
    else if (strncmp(token, "not", 3) == 0)
        local_node->boolop = NOT_EXPR;
    else
        elog(ERROR, "unrecognized boolop \"%.*s\"", length, token);

    READ_NODE_FIELD(args);
    READ_LOCATION_FIELD(location);

    READ_DONE();
}

static RangeTblEntry *
_readRangeTblEntry(void)
{
    READ_LOCALS(RangeTblEntry);

    /* put alias + eref first to make dump more legible */
    READ_NODE_FIELD(alias);
    READ_NODE_FIELD(eref);
    READ_ENUM_FIELD(rtekind, RTEKind);

    switch (local_node->rtekind)
    {
        case RTE_RELATION:
            READ_OID_FIELD(relid);
            READ_CHAR_FIELD(relkind);
            READ_INT_FIELD(rellockmode);
            READ_NODE_FIELD(tablesample);
            break;
        case RTE_SUBQUERY:
            READ_NODE_FIELD(subquery);
            READ_BOOL_FIELD(security_barrier);
            break;
        case RTE_JOIN:
            READ_ENUM_FIELD(jointype, JoinType);
            READ_INT_FIELD(joinmergedcols);
            READ_NODE_FIELD(joinaliasvars);
            READ_NODE_FIELD(joinleftcols);
            READ_NODE_FIELD(joinrightcols);
            READ_NODE_FIELD(join_using_alias);
            break;
        case RTE_FUNCTION:
            READ_NODE_FIELD(functions);
            READ_BOOL_FIELD(funcordinality);
            break;
        case RTE_TABLEFUNC:
            READ_NODE_FIELD(tablefunc);
            /* The RTE must have a copy of the column type info, if any */
            if (local_node->tablefunc)
            {
                TableFunc  *tf = local_node->tablefunc;

                local_node->coltypes = tf->coltypes;
                local_node->coltypmods = tf->coltypmods;
                local_node->colcollations = tf->colcollations;
            }
            break;
        case RTE_VALUES:
            READ_NODE_FIELD(values_lists);
            READ_NODE_FIELD(coltypes);
            READ_NODE_FIELD(coltypmods);
            READ_NODE_FIELD(colcollations);
            break;
        case RTE_CTE:
            READ_STRING_FIELD(ctename);
            READ_UINT_FIELD(ctelevelsup);
            READ_BOOL_FIELD(self_reference);
            READ_NODE_FIELD(coltypes);
            READ_NODE_FIELD(coltypmods);
            READ_NODE_FIELD(colcollations);
            break;
        case RTE_NAMEDTUPLESTORE:
            READ_STRING_FIELD(enrname);
            READ_FLOAT_FIELD(enrtuples);
            READ_OID_FIELD(relid);
            READ_NODE_FIELD(coltypes);
            READ_NODE_FIELD(coltypmods);
            READ_NODE_FIELD(colcollations);
            break;
        case RTE_RESULT:
            /* no extra fields */
            break;
        default:
            elog(ERROR, "unrecognized RTE kind: %d",
                 (int) local_node->rtekind);
            break;
    }

    READ_BOOL_FIELD(lateral);
    READ_BOOL_FIELD(inh);
    READ_BOOL_FIELD(inFromCl);
    READ_UINT_FIELD(requiredPerms);
    READ_OID_FIELD(checkAsUser);
    READ_BITMAPSET_FIELD(selectedCols);
    READ_BITMAPSET_FIELD(insertedCols);
    READ_BITMAPSET_FIELD(updatedCols);
    READ_BITMAPSET_FIELD(extraUpdatedCols);
    READ_NODE_FIELD(securityQuals);

    READ_DONE();
}

static ExtensibleNode *
_readExtensibleNode(void)
{
    const ExtensibleNodeMethods *methods;
    ExtensibleNode *local_node;
    const char *extnodename;

    READ_TEMP_LOCALS();

    token = pg_strtok(&length); /* skip :extnodename */
    token = pg_strtok(&length); /* get extnodename */

    extnodename = nullable_string(token, length);
    if (!extnodename)
        elog(ERROR, "extnodename has to be supplied");
    methods = GetExtensibleNodeMethods(extnodename, false);

    local_node = (ExtensibleNode *) newNode(methods->node_size,
                                            T_ExtensibleNode);
    local_node->extnodename = extnodename;

    /* deserialize the private fields */
    methods->nodeRead(local_node);

    READ_DONE();
}


/*
 * parseNodeString
 *
 * Given a character string representing a node tree, parseNodeString creates
 * the internal node structure.
 *
 * The string to be read must already have been loaded into pg_strtok().
 */
|
|
|
|
Node *
|
|
|
|
parseNodeString(void)
|
|
|
|
{
|
|
|
|
void *return_value;
|
|
|
|
|
|
|
|
READ_TEMP_LOCALS();
|
|
|
|
|
|
|
|
/* Guard against stack overflow due to overly complex expressions */
|
|
|
|
check_stack_depth();
|
|
|
|
|
|
|
|
token = pg_strtok(&length);
|
|
|
|
|
|
|
|
#define MATCH(tokname, namelen) \
|
|
|
|
(length == namelen && memcmp(token, tokname, namelen) == 0)
|
|
|
|
|

	if (false)
		;
#include "readfuncs.switch.c"
	else
	{
		elog(ERROR, "badly formatted node string \"%.32s\"...", token);
		return_value = NULL;	/* keep compiler quiet */
	}

	return (Node *) return_value;
}
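
/*
 * For orientation: the generated readfuncs.switch.c included above is a
 * chain of "else if" arms extending the dangling "if (false)", one per
 * readable node tag, roughly of this shape (two representative arms shown;
 * see the generated file for the authoritative list):
 *
 *		else if (MATCH("QUERY", 5))
 *			return_value = _readQuery();
 *		else if (MATCH("CONST", 5))
 *			return_value = _readConst();
 */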

/*
 * readDatum
 *
 * Given a string representation of a constant, recreate the appropriate
 * Datum.  The string representation embeds length info, but not byValue,
 * so we must be told that.
 */
Datum
readDatum(bool typbyval)
{
	Size		length,
				i;
	int			tokenLength;
	const char *token;
	Datum		res;
	char	   *s;

	/*
	 * read the actual length of the value
	 */
	token = pg_strtok(&tokenLength);
	length = atoui(token);

	token = pg_strtok(&tokenLength);	/* read the '[' */
	if (token == NULL || token[0] != '[')
		elog(ERROR, "expected \"[\" to start datum, but got \"%s\"; length = %zu",
			 token ? token : "[NULL]", length);

	if (typbyval)
	{
		if (length > (Size) sizeof(Datum))
			elog(ERROR, "byval datum but length = %zu", length);
		res = (Datum) 0;
		s = (char *) (&res);
		for (i = 0; i < (Size) sizeof(Datum); i++)
		{
			token = pg_strtok(&tokenLength);
			s[i] = (char) atoi(token);
		}
	}
	else if (length <= 0)
		res = (Datum) NULL;
	else
	{
		s = (char *) palloc(length);
		for (i = 0; i < length; i++)
		{
			token = pg_strtok(&tokenLength);
			s[i] = (char) atoi(token);
		}
		res = PointerGetDatum(s);
	}

	token = pg_strtok(&tokenLength);	/* read the ']' */
	if (token == NULL || token[0] != ']')
		elog(ERROR, "expected \"]\" to end datum, but got \"%s\"; length = %zu",
			 token ? token : "[NULL]", length);

	return res;
}
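
/*
 * Illustration, not part of this file: for a pass-by-value int4 Datum with
 * value 42 on a little-endian 64-bit machine, outfuncs.c emits
 *
 *		:constvalue 4 [ 42 0 0 0 0 0 0 0 ]
 *
 * and a read function consumes it along these lines (mirroring _readConst):
 *
 *		token = pg_strtok(&length);			(skip the :constvalue label)
 *		local_node->constvalue = readDatum(local_node->constbyval);
 *
 * The stored length is 4 (the type's typlen), yet eight byte tokens follow,
 * because byval Datums are always written at full sizeof(Datum) width.
 */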

/*
 * readAttrNumberCols
 */
AttrNumber *
readAttrNumberCols(int numCols)
{
	int			tokenLength,
				i;
	const char *token;
	AttrNumber *attr_vals;

	if (numCols <= 0)
		return NULL;

	attr_vals = (AttrNumber *) palloc(numCols * sizeof(AttrNumber));
	for (i = 0; i < numCols; i++)
	{
		token = pg_strtok(&tokenLength);
		attr_vals[i] = atoi(token);
	}

	return attr_vals;
}
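
/*
 * readAttrNumberCols above and the readOidCols, readIntCols, and
 * readBoolCols functions below back the READ_ATTRNUMBER_ARRAY and related
 * READ_*_ARRAY macros.  As a usage sketch (field names taken from the Sort
 * plan node), the count field must be read before the arrays, since it
 * tells these readers how many tokens to consume:
 *
 *		READ_INT_FIELD(numCols);
 *		READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
 *		READ_OID_ARRAY(sortOperators, local_node->numCols);
 *		READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
 */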

/*
 * readOidCols
 */
Oid *
readOidCols(int numCols)
{
	int			tokenLength,
				i;
	const char *token;
	Oid		   *oid_vals;

	if (numCols <= 0)
		return NULL;

	oid_vals = (Oid *) palloc(numCols * sizeof(Oid));
	for (i = 0; i < numCols; i++)
	{
		token = pg_strtok(&tokenLength);
		oid_vals[i] = atooid(token);
	}

	return oid_vals;
}

/*
 * readIntCols
 */
int *
readIntCols(int numCols)
{
	int			tokenLength,
				i;
	const char *token;
	int		   *int_vals;

	if (numCols <= 0)
		return NULL;

	int_vals = (int *) palloc(numCols * sizeof(int));
	for (i = 0; i < numCols; i++)
	{
		token = pg_strtok(&tokenLength);
		int_vals[i] = atoi(token);
	}

	return int_vals;
}

/*
 * readBoolCols
 */
bool *
readBoolCols(int numCols)
{
	int			tokenLength,
				i;
	const char *token;
	bool	   *bool_vals;

	if (numCols <= 0)
		return NULL;

	bool_vals = (bool *) palloc(numCols * sizeof(bool));
	for (i = 0; i < numCols; i++)
	{
		token = pg_strtok(&tokenLength);
		bool_vals[i] = strtobool(token);
	}

	return bool_vals;
}