/*-------------------------------------------------------------------------
 *
 * params.c
 *	  Support for finding the values associated with Param nodes.
 *
 *
 * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *	  src/backend/nodes/params.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include "access/xact.h"
#include "mb/stringinfo_mb.h"
#include "nodes/params.h"
#include "parser/parse_node.h"
#include "storage/shmem.h"
#include "utils/datum.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"

static void paramlist_parser_setup(ParseState *pstate, void *arg);
static Node *paramlist_param_ref(ParseState *pstate, ParamRef *pref);

/*
 * Allocate and initialize a new ParamListInfo structure.
 *
 * To make a new structure for the "dynamic" way (with hooks), pass 0 for
 * numParams and set numParams manually.
 *
 * A default parserSetup function is supplied automatically.  Callers may
 * override it if they choose.  (Note that most use-cases for ParamListInfos
 * will never use the parserSetup function anyway.)
 */
ParamListInfo
makeParamList(int numParams)
{
	ParamListInfo retval;
	Size		size;

	size = offsetof(ParamListInfoData, params) +
		numParams * sizeof(ParamExternData);

	retval = (ParamListInfo) palloc(size);
	retval->paramFetch = NULL;
	retval->paramFetchArg = NULL;
	retval->paramCompile = NULL;
	retval->paramCompileArg = NULL;
	retval->parserSetup = paramlist_parser_setup;
	retval->parserSetupArg = (void *) retval;
	retval->paramValuesStr = NULL;
	retval->numParams = numParams;

	return retval;
}
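
/*
 * Usage sketch: how a caller fills in a fixed (non-dynamic) parameter list
 * returned by makeParamList.  The example_* helper below is hypothetical and
 * exists only to illustrate the calling pattern; real callers do the
 * equivalent inline while binding query parameters.
 */
static ParamListInfo
example_make_one_param(Oid ptype, Datum value, bool isnull)
{
	ParamListInfo params = makeParamList(1);

	/* makeParamList uses palloc, not palloc0, so set every field */
	params->params[0].value = value;
	params->params[0].isnull = isnull;
	params->params[0].pflags = PARAM_FLAG_CONST;	/* value won't change */
	params->params[0].ptype = ptype;

	return params;
}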

/*
 * Copy a ParamListInfo structure.
 *
 * The result is allocated in CurrentMemoryContext.
 *
 * Note: the intent of this function is to make a static, self-contained
 * set of parameter values.  If dynamic parameter hooks are present, we
 * intentionally do not copy them into the result.  Rather, we forcibly
 * instantiate all available parameter values and copy the datum values.
 *
 * paramValuesStr is not copied, either.
 */
ParamListInfo
copyParamList(ParamListInfo from)
{
	ParamListInfo retval;

	if (from == NULL || from->numParams <= 0)
		return NULL;

	retval = makeParamList(from->numParams);

	for (int i = 0; i < from->numParams; i++)
	{
		ParamExternData *oprm;
		ParamExternData *nprm = &retval->params[i];
		ParamExternData prmdata;
		int16		typLen;
		bool		typByVal;

		/* give hook a chance in case parameter is dynamic */
		if (from->paramFetch != NULL)
			oprm = from->paramFetch(from, i + 1, false, &prmdata);
		else
			oprm = &from->params[i];

		/* flat-copy the parameter info */
		*nprm = *oprm;

		/* need datumCopy in case it's a pass-by-reference datatype */
		if (nprm->isnull || !OidIsValid(nprm->ptype))
			continue;
		get_typlenbyval(nprm->ptype, &typLen, &typByVal);
		nprm->value = datumCopy(nprm->value, typByVal, typLen);
	}

	return retval;
}
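
/*
 * Usage sketch: flattening a possibly-dynamic parameter list into a chosen
 * memory context, e.g. so a longer-lived object such as a portal can keep
 * its own self-contained copy.  The example_* helper is hypothetical; it
 * only shows how copyParamList interacts with CurrentMemoryContext.
 */
static ParamListInfo
example_flatten_params(ParamListInfo src, MemoryContext targetcxt)
{
	MemoryContext oldcxt;
	ParamListInfo flat;

	/* copyParamList allocates in CurrentMemoryContext, so switch first */
	oldcxt = MemoryContextSwitchTo(targetcxt);
	flat = copyParamList(src);	/* returns NULL if src is NULL or empty */
	MemoryContextSwitchTo(oldcxt);

	return flat;
}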

/*
 * Set up to parse a query containing references to parameters
 * sourced from a ParamListInfo.
 */
static void
paramlist_parser_setup(ParseState *pstate, void *arg)
{
	pstate->p_paramref_hook = paramlist_param_ref;
	/* no need to use p_coerce_param_hook */
	pstate->p_ref_hook_state = arg;
}

/*
 * Transform a ParamRef using parameter type data from a ParamListInfo.
 */
static Node *
paramlist_param_ref(ParseState *pstate, ParamRef *pref)
{
	ParamListInfo paramLI = (ParamListInfo) pstate->p_ref_hook_state;
	int			paramno = pref->number;
	ParamExternData *prm;
	ParamExternData prmdata;
	Param	   *param;

	/* check parameter number is valid */
	if (paramno <= 0 || paramno > paramLI->numParams)
		return NULL;

	/* give hook a chance in case parameter is dynamic */
	if (paramLI->paramFetch != NULL)
		prm = paramLI->paramFetch(paramLI, paramno, false, &prmdata);
	else
		prm = &paramLI->params[paramno - 1];

	if (!OidIsValid(prm->ptype))
		return NULL;

	param = makeNode(Param);
	param->paramkind = PARAM_EXTERN;
	param->paramid = paramno;
	param->paramtype = prm->ptype;
	param->paramtypmod = -1;
	param->paramcollid = get_typcollation(param->paramtype);
	param->location = pref->location;

	return (Node *) param;
}
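
/*
 * Usage sketch: letting the default parserSetup hook installed by
 * makeParamList resolve $N symbols during parse analysis.  This is a
 * hypothetical illustration only: it assumes parser/analyze.h (for
 * transformTopLevelStmt), which params.c itself does not include, and it
 * omits the rewriting and planning steps a real caller would perform.
 */
static Query *
example_parse_with_params(RawStmt *rawstmt, const char *query_string,
						  ParamListInfo params)
{
	ParseState *pstate = make_parsestate(NULL);
	Query	   *query;

	pstate->p_sourcetext = query_string;

	/* installs paramlist_param_ref via the list's parserSetup hook */
	params->parserSetup(pstate, params->parserSetupArg);

	query = transformTopLevelStmt(pstate, rawstmt);
	free_parsestate(pstate);

	return query;
}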

/*
 * Estimate the amount of space required to serialize a ParamListInfo.
 */
Size
EstimateParamListSpace(ParamListInfo paramLI)
{
	int			i;
	Size		sz = sizeof(int);

	if (paramLI == NULL || paramLI->numParams <= 0)
		return sz;

	for (i = 0; i < paramLI->numParams; i++)
	{
		ParamExternData *prm;
		ParamExternData prmdata;
		Oid			typeOid;
		int16		typLen;
		bool		typByVal;

		/* give hook a chance in case parameter is dynamic */
		if (paramLI->paramFetch != NULL)
			prm = paramLI->paramFetch(paramLI, i + 1, false, &prmdata);
		else
			prm = &paramLI->params[i];

		typeOid = prm->ptype;

		sz = add_size(sz, sizeof(Oid)); /* space for type OID */
		sz = add_size(sz, sizeof(uint16));	/* space for pflags */

		/* space for datum/isnull */
		if (OidIsValid(typeOid))
			get_typlenbyval(typeOid, &typLen, &typByVal);
		else
		{
			/* If no type OID, assume by-value, like copyParamList does. */
			typLen = sizeof(Datum);
			typByVal = true;
		}
		sz = add_size(sz,
					  datumEstimateSpace(prm->value, prm->isnull, typByVal, typLen));
	}

	return sz;
}

/*
 * Serialize a ParamListInfo structure into caller-provided storage.
 *
 * We write the number of parameters first, as a 4-byte integer, and then
 * write details for each parameter in turn.  The details for each parameter
 * consist of a 4-byte type OID, 2 bytes of flags, and then the datum as
 * serialized by datumSerialize().  The caller is responsible for ensuring
 * that there is enough storage to store the number of bytes that will be
 * written; use EstimateParamListSpace to find out how many will be needed.
 * *start_address is updated to point to the byte immediately following those
 * written.
 *
 * RestoreParamList can be used to recreate a ParamListInfo based on the
 * serialized representation; this will be a static, self-contained copy
 * just as copyParamList would create.
 *
 * paramValuesStr is not included.
 */
void
SerializeParamList(ParamListInfo paramLI, char **start_address)
{
	int			nparams;
	int			i;

	/* Write number of parameters. */
	if (paramLI == NULL || paramLI->numParams <= 0)
		nparams = 0;
	else
		nparams = paramLI->numParams;
	memcpy(*start_address, &nparams, sizeof(int));
	*start_address += sizeof(int);

	/* Write each parameter in turn. */
	for (i = 0; i < nparams; i++)
	{
		ParamExternData *prm;
		ParamExternData prmdata;
		Oid			typeOid;
		int16		typLen;
		bool		typByVal;

		/* give hook a chance in case parameter is dynamic */
		if (paramLI->paramFetch != NULL)
			prm = paramLI->paramFetch(paramLI, i + 1, false, &prmdata);
		else
			prm = &paramLI->params[i];

		typeOid = prm->ptype;

		/* Write type OID. */
		memcpy(*start_address, &typeOid, sizeof(Oid));
		*start_address += sizeof(Oid);

		/* Write flags. */
		memcpy(*start_address, &prm->pflags, sizeof(uint16));
		*start_address += sizeof(uint16);

		/* Write datum/isnull. */
		if (OidIsValid(typeOid))
			get_typlenbyval(typeOid, &typLen, &typByVal);
		else
		{
			/* If no type OID, assume by-value, like copyParamList does. */
			typLen = sizeof(Datum);
			typByVal = true;
		}
		datumSerialize(prm->value, prm->isnull, typByVal, typLen,
					   start_address);
	}
}

/*
 * Restore a ParamListInfo structure serialized by SerializeParamList.
 *
 * The result is allocated in CurrentMemoryContext.
 *
 * Note: like copyParamList, this produces a static, self-contained set of
 * parameter values; no dynamic parameter hooks are installed.  All datum
 * values are materialized directly from the serialized representation.
 */
ParamListInfo
RestoreParamList(char **start_address)
{
	ParamListInfo paramLI;
	int			nparams;

	memcpy(&nparams, *start_address, sizeof(int));
	*start_address += sizeof(int);

	paramLI = makeParamList(nparams);

	for (int i = 0; i < nparams; i++)
	{
		ParamExternData *prm = &paramLI->params[i];

		/* Read type OID. */
		memcpy(&prm->ptype, *start_address, sizeof(Oid));
		*start_address += sizeof(Oid);

		/* Read flags. */
		memcpy(&prm->pflags, *start_address, sizeof(uint16));
		*start_address += sizeof(uint16);

		/* Read datum/isnull. */
		prm->value = datumRestore(start_address, &prm->isnull);
	}

	return paramLI;
}
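
/*
 * Usage sketch: a complete serialize/restore round trip, of the kind
 * parallel query performs when shipping parameter values to worker
 * processes.  The example_* helper is hypothetical and only demonstrates how
 * EstimateParamListSpace, SerializeParamList and RestoreParamList fit
 * together.
 */
static ParamListInfo
example_param_roundtrip(ParamListInfo params)
{
	Size		size = EstimateParamListSpace(params);
	char	   *buf = palloc(size);
	char	   *ptr = buf;
	ParamListInfo restored;

	/* SerializeParamList advances ptr past the bytes it writes */
	SerializeParamList(params, &ptr);
	Assert((Size) (ptr - buf) <= size); /* estimate is an upper bound */

	/* restore from the start of the buffer */
	ptr = buf;
	restored = RestoreParamList(&ptr);

	pfree(buf);
	return restored;
}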

/*
 * BuildParamLogString
 *		Return a string that represents the parameter list, for logging.
 *
 * If caller already knows textual representations for some parameters, it can
 * pass an array of exactly params->numParams values as knownTextValues, which
 * can contain NULLs for any unknown individual values.  NULL can be given if
 * no parameters are known.
 *
 * If maxlen is >= 0, that's the maximum number of bytes of any one
 * parameter value to be printed; an ellipsis is added if the string is
 * longer.  (Added quotes are not considered in this calculation.)
 */
char *
BuildParamLogString(ParamListInfo params, char **knownTextValues, int maxlen)
{
	MemoryContext tmpCxt,
				oldCxt;
	StringInfoData buf;

	/*
	 * NB: think not of returning params->paramValuesStr!  It may have been
	 * generated with a different maxlen, and so be unsuitable.  Besides that,
	 * this is the function used to create that string.
	 */

	/*
	 * No work if the param fetch hook is in use.  Also, it's not possible to
	 * do this in an aborted transaction.  (It might be possible to improve on
	 * this last point when some knownTextValues exist, but it seems tricky.)
	 */
	if (params->paramFetch != NULL ||
		IsAbortedTransactionBlockState())
		return NULL;

	/* Initialize the output stringinfo, in caller's memory context */
	initStringInfo(&buf);

	/* Use a temporary context to call output functions, just in case */
	tmpCxt = AllocSetContextCreate(CurrentMemoryContext,
								   "BuildParamLogString",
								   ALLOCSET_DEFAULT_SIZES);
	oldCxt = MemoryContextSwitchTo(tmpCxt);

	for (int paramno = 0; paramno < params->numParams; paramno++)
	{
		ParamExternData *param = &params->params[paramno];

		appendStringInfo(&buf,
						 "%s$%d = ",
						 paramno > 0 ? ", " : "",
						 paramno + 1);

		if (param->isnull || !OidIsValid(param->ptype))
			appendStringInfoString(&buf, "NULL");
		else
		{
			if (knownTextValues != NULL && knownTextValues[paramno] != NULL)
				appendStringInfoStringQuoted(&buf, knownTextValues[paramno],
											 maxlen);
			else
			{
				Oid			typoutput;
				bool		typisvarlena;
				char	   *pstring;

				getTypeOutputInfo(param->ptype, &typoutput, &typisvarlena);
				pstring = OidOutputFunctionCall(typoutput, param->value);
				appendStringInfoStringQuoted(&buf, pstring, maxlen);
			}
		}
	}

	MemoryContextSwitchTo(oldCxt);
	MemoryContextDelete(tmpCxt);

	return buf.data;
}

/*
 * ParamsErrorCallback - callback for printing parameters in error context
 *
 * Note that this is a no-op unless BuildParamLogString has been called
 * beforehand.
 */
void
ParamsErrorCallback(void *arg)
{
	ParamsErrorCbData *data = (ParamsErrorCbData *) arg;

	if (data == NULL ||
		data->params == NULL ||
		data->params->paramValuesStr == NULL)
		return;

	if (data->portalName && data->portalName[0] != '\0')
		errcontext("extended query \"%s\" with parameters: %s",
				   data->portalName, data->params->paramValuesStr);
	else
		errcontext("extended query with parameters: %s",
				   data->params->paramValuesStr);
}
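
/*
 * Usage sketch: wiring BuildParamLogString and ParamsErrorCallback together
 * so that errors raised while running a query also report the parameter
 * values.  The example_* helper and the maxlen of 64 are hypothetical; the
 * extended-query protocol code follows essentially this pattern.
 */
static void
example_report_params_on_error(ParamListInfo params, const char *portalName)
{
	ParamsErrorCbData cbdata;
	ErrorContextCallback errcallback;

	/* precompute the log string; ParamsErrorCallback is a no-op without it */
	params->paramValuesStr = BuildParamLogString(params, NULL, 64);

	/* push an error-context callback that prints the parameters */
	cbdata.portalName = portalName;
	cbdata.params = params;
	errcallback.callback = ParamsErrorCallback;
	errcallback.arg = &cbdata;
	errcallback.previous = error_context_stack;
	error_context_stack = &errcallback;

	/* ... run the query here; any error report now includes the parameters */

	/* pop the callback when done */
	error_context_stack = errcallback.previous;
}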