/*-------------------------------------------------------------------------
*
* nodes.h
* Definitions for tagged nodes.
*
*
* Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/nodes/nodes.h
*
*-------------------------------------------------------------------------
*/
#ifndef NODES_H
#define NODES_H
/*
* The first field of every node is NodeTag. Each node created (with makeNode)
* will have one of the following tags as the value of its first field.
*
* Note that the numbers of the node tags are not contiguous. We left holes
* here so that we can add more tags without changing the existing enums'
* values. (Since node tag numbers never exist outside backend memory,
* there's no real harm in renumbering; it just costs a full rebuild ...)
*/
typedef enum NodeTag
{
T_Invalid = 0,
/*
* TAGS FOR EXECUTOR NODES (execnodes.h)
*/
T_IndexInfo = 10,
T_ExprContext,
T_ProjectionInfo,
T_JunkFilter,
T_ResultRelInfo,
T_EState,
T_TupleTableSlot,
/*
* TAGS FOR PLAN NODES (plannodes.h)
*/
T_Plan = 100,
T_Result,
T_ModifyTable,
T_Append,
T_MergeAppend,
T_RecursiveUnion,
T_BitmapAnd,
T_BitmapOr,
T_Scan,
T_SeqScan,
T_SampleScan,
T_IndexScan,
T_IndexOnlyScan,
T_BitmapIndexScan,
T_BitmapHeapScan,
T_TidScan,
T_SubqueryScan,
T_FunctionScan,
T_ValuesScan,
T_CteScan,
T_WorkTableScan,
T_ForeignScan,
T_CustomScan,
T_Join,
T_NestLoop,
T_MergeJoin,
T_HashJoin,
T_Material,
T_Sort,
T_Group,
T_Agg,
T_WindowAgg,
T_Unique,
T_Gather,
T_Hash,
T_SetOp,
T_LockRows,
T_Limit,
/* these aren't subclasses of Plan: */
T_NestLoopParam,
T_PlanRowMark,
T_PlanInvalItem,
/*
* TAGS FOR PLAN STATE NODES (execnodes.h)
*
* These should correspond one-to-one with Plan node types.
*/
T_PlanState = 200,
T_ResultState,
T_ModifyTableState,
T_AppendState,
T_MergeAppendState,
T_RecursiveUnionState,
T_BitmapAndState,
T_BitmapOrState,
T_ScanState,
T_SeqScanState,
T_SampleScanState,
T_IndexScanState,
T_IndexOnlyScanState,
T_BitmapIndexScanState,
T_BitmapHeapScanState,
T_TidScanState,
T_SubqueryScanState,
T_FunctionScanState,
T_ValuesScanState,
T_CteScanState,
T_WorkTableScanState,
T_ForeignScanState,
T_CustomScanState,
T_JoinState,
T_NestLoopState,
T_MergeJoinState,
T_HashJoinState,
T_MaterialState,
T_SortState,
T_GroupState,
T_AggState,
T_WindowAggState,
T_UniqueState,
T_GatherState,
T_HashState,
T_SetOpState,
T_LockRowsState,
T_LimitState,
/*
* TAGS FOR PRIMITIVE NODES (primnodes.h)
*/
T_Alias = 300,
T_RangeVar,
T_Expr,
T_Var,
T_Const,
T_Param,
T_Aggref,
T_GroupingFunc,
T_WindowFunc,
T_ArrayRef,
T_FuncExpr,
T_NamedArgExpr,
T_OpExpr,
T_DistinctExpr,
T_NullIfExpr,
T_ScalarArrayOpExpr,
T_BoolExpr,
T_SubLink,
T_SubPlan,
T_AlternativeSubPlan,
T_FieldSelect,
T_FieldStore,
T_RelabelType,
T_CoerceViaIO,
T_ArrayCoerceExpr,
T_ConvertRowtypeExpr,
T_CollateExpr,
T_CaseExpr,
T_CaseWhen,
T_CaseTestExpr,
T_ArrayExpr,
T_RowExpr,
T_RowCompareExpr,
T_CoalesceExpr,
T_MinMaxExpr,
T_XmlExpr,
T_NullTest,
T_BooleanTest,
T_CoerceToDomain,
T_CoerceToDomainValue,
T_SetToDefault,
T_CurrentOfExpr,
T_InferenceElem,
T_TargetEntry,
T_RangeTblRef,
T_JoinExpr,
T_FromExpr,
T_OnConflictExpr,
T_IntoClause,
/*
* TAGS FOR EXPRESSION STATE NODES (execnodes.h)
*
* These correspond (not always one-for-one) to primitive nodes derived
* from Expr.
*/
T_ExprState = 400,
T_GenericExprState,
T_WholeRowVarExprState,
T_AggrefExprState,
T_GroupingFuncExprState,
T_WindowFuncExprState,
T_ArrayRefExprState,
T_FuncExprState,
T_ScalarArrayOpExprState,
T_BoolExprState,
T_SubPlanState,
T_AlternativeSubPlanState,
T_FieldSelectState,
T_FieldStoreState,
T_CoerceViaIOState,
T_ArrayCoerceExprState,
T_ConvertRowtypeExprState,
T_CaseExprState,
T_CaseWhenState,
T_ArrayExprState,
T_RowExprState,
T_RowCompareExprState,
T_CoalesceExprState,
T_MinMaxExprState,
T_XmlExprState,
T_NullTestState,
T_CoerceToDomainState,
T_DomainConstraintState,
/*
* TAGS FOR PLANNER NODES (relation.h)
*/
T_PlannerInfo = 500,
T_PlannerGlobal,
T_RelOptInfo,
T_IndexOptInfo,
T_ParamPathInfo,
T_Path,
T_IndexPath,
T_BitmapHeapPath,
T_BitmapAndPath,
T_BitmapOrPath,
T_TidPath,
T_SubqueryScanPath,
T_ForeignPath,
T_CustomPath,
T_NestPath,
T_MergePath,
T_HashPath,
T_AppendPath,
T_MergeAppendPath,
T_ResultPath,
T_MaterialPath,
T_UniquePath,
T_GatherPath,
T_ProjectionPath,
T_SortPath,
T_GroupPath,
T_UpperUniquePath,
T_AggPath,
T_GroupingSetsPath,
T_MinMaxAggPath,
T_WindowAggPath,
T_SetOpPath,
T_RecursiveUnionPath,
T_LockRowsPath,
T_ModifyTablePath,
T_LimitPath,
/* these aren't subclasses of Path: */
T_EquivalenceClass,
T_EquivalenceMember,
T_PathKey,
T_PathTarget,
T_RestrictInfo,
T_PlaceHolderVar,
T_SpecialJoinInfo,
T_AppendRelInfo,
T_PlaceHolderInfo,
T_MinMaxAggInfo,
T_PlannerParamItem,
/*
* TAGS FOR MEMORY NODES (memnodes.h)
*/
T_MemoryContext = 600,
T_AllocSetContext,
/*
* TAGS FOR VALUE NODES (value.h)
*/
T_Value = 650,
T_Integer,
T_Float,
T_String,
T_BitString,
T_Null,
/*
* TAGS FOR LIST NODES (pg_list.h)
*/
T_List,
T_IntList,
T_OidList,
/*
* TAGS FOR EXTENSIBLE NODES (extensible.h)
*/
T_ExtensibleNode,
/*
* TAGS FOR STATEMENT NODES (mostly in parsenodes.h)
*/
T_Query = 700,
T_PlannedStmt,
T_InsertStmt,
T_DeleteStmt,
T_UpdateStmt,
T_SelectStmt,
T_AlterTableStmt,
T_AlterTableCmd,
T_AlterDomainStmt,
T_SetOperationStmt,
T_GrantStmt,
T_GrantRoleStmt,
T_AlterDefaultPrivilegesStmt,
T_ClosePortalStmt,
T_ClusterStmt,
T_CopyStmt,
T_CreateStmt,
T_DefineStmt,
T_DropStmt,
T_TruncateStmt,
T_CommentStmt,
T_FetchStmt,
T_IndexStmt,
T_CreateFunctionStmt,
T_AlterFunctionStmt,
T_DoStmt,
T_RenameStmt,
T_RuleStmt,
T_NotifyStmt,
T_ListenStmt,
T_UnlistenStmt,
T_TransactionStmt,
T_ViewStmt,
T_LoadStmt,
T_CreateDomainStmt,
T_CreatedbStmt,
T_DropdbStmt,
T_VacuumStmt,
T_ExplainStmt,
T_CreateTableAsStmt,
T_CreateSeqStmt,
T_AlterSeqStmt,
T_VariableSetStmt,
T_VariableShowStmt,
T_DiscardStmt,
T_CreateTrigStmt,
T_CreatePLangStmt,
T_CreateRoleStmt,
T_AlterRoleStmt,
T_DropRoleStmt,
T_LockStmt,
T_ConstraintsSetStmt,
T_ReindexStmt,
T_CheckPointStmt,
T_CreateSchemaStmt,
T_AlterDatabaseStmt,
T_AlterDatabaseSetStmt,
T_AlterRoleSetStmt,
T_CreateConversionStmt,
T_CreateCastStmt,
T_CreateOpClassStmt,
T_CreateOpFamilyStmt,
T_AlterOpFamilyStmt,
T_PrepareStmt,
T_ExecuteStmt,
T_DeallocateStmt,
T_DeclareCursorStmt,
T_CreateTableSpaceStmt,
T_DropTableSpaceStmt,
T_AlterObjectSchemaStmt,
T_AlterOwnerStmt,
T_AlterOperatorStmt,
T_DropOwnedStmt,
T_ReassignOwnedStmt,
T_CompositeTypeStmt,
T_CreateEnumStmt,
T_CreateRangeStmt,
T_AlterEnumStmt,
T_AlterTSDictionaryStmt,
T_AlterTSConfigurationStmt,
T_CreateFdwStmt,
T_AlterFdwStmt,
T_CreateForeignServerStmt,
T_AlterForeignServerStmt,
T_CreateUserMappingStmt,
T_AlterUserMappingStmt,
T_DropUserMappingStmt,
T_AlterTableSpaceOptionsStmt,
T_AlterTableMoveAllStmt,
T_SecLabelStmt,
T_CreateForeignTableStmt,
T_ImportForeignSchemaStmt,
T_CreateExtensionStmt,
T_AlterExtensionStmt,
T_AlterExtensionContentsStmt,
T_CreateEventTrigStmt,
T_AlterEventTrigStmt,
T_RefreshMatViewStmt,
T_ReplicaIdentityStmt,
T_AlterSystemStmt,
T_CreatePolicyStmt,
T_AlterPolicyStmt,
T_CreateTransformStmt,
/*
* TAGS FOR PARSE TREE NODES (parsenodes.h)
*/
T_A_Expr = 900,
T_ColumnRef,
T_ParamRef,
T_A_Const,
T_FuncCall,
T_A_Star,
T_A_Indices,
T_A_Indirection,
T_A_ArrayExpr,
T_ResTarget,
T_MultiAssignRef,
T_TypeCast,
T_CollateClause,
T_SortBy,
T_WindowDef,
T_RangeSubselect,
T_RangeFunction,
T_RangeTableSample,
T_TypeName,
T_ColumnDef,
T_IndexElem,
T_Constraint,
T_DefElem,
T_RangeTblEntry,
T_RangeTblFunction,
T_TableSampleClause,
T_WithCheckOption,
T_SortGroupClause,
T_GroupingSet,
T_WindowClause,
T_FuncWithArgs,
T_AccessPriv,
T_CreateOpClassItem,
T_TableLikeClause,
T_FunctionParameter,
T_LockingClause,
T_RowMarkClause,
T_XmlSerialize,
T_WithClause,
T_InferClause,
T_OnConflictClause,
T_CommonTableExpr,
T_RoleSpec,
/*
* TAGS FOR REPLICATION GRAMMAR PARSE NODES (replnodes.h)
*/
T_IdentifySystemCmd,
T_BaseBackupCmd,
T_CreateReplicationSlotCmd,
T_DropReplicationSlotCmd,
T_StartReplicationCmd,
T_TimeLineHistoryCmd,
/*
* TAGS FOR RANDOM OTHER STUFF
*
* These are objects that aren't part of parse/plan/execute node tree
* structures, but we give them NodeTags anyway for identification
* purposes (usually because they are involved in APIs where we want to
* pass multiple object types through the same pointer).
*/
T_TriggerData = 950, /* in commands/trigger.h */
T_EventTriggerData, /* in commands/event_trigger.h */
T_ReturnSetInfo, /* in nodes/execnodes.h */
T_WindowObjectData, /* private in nodeWindowAgg.c */
T_TIDBitmap, /* in nodes/tidbitmap.h */
T_InlineCodeBlock, /* in nodes/parsenodes.h */
T_FdwRoutine, /* in foreign/fdwapi.h */
T_IndexAmRoutine, /* in access/amapi.h */
T_TsmRoutine /* in access/tsmapi.h */
} NodeTag;
/*
* The first field of a node of any type is guaranteed to be the NodeTag.
* Hence the type of any node can be gotten by casting it to Node. Declaring
* a variable to be of Node * (instead of void *) can also facilitate
* debugging.
*/
typedef struct Node
{
NodeTag type;
} Node;
#define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)
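/*
 * For illustration only (an added sketch, not part of the upstream header):
 * because every node begins with its NodeTag, generic code can dispatch on
 * nodeTag() without knowing the concrete struct. The NODES_H_EXAMPLE guard
 * and the function below are hypothetical, so this sketch is never compiled.
 */
#ifdef NODES_H_EXAMPLE
static const char *
node_kind_name(const void *nodeptr)
{
	switch (nodeTag(nodeptr))
	{
		case T_Query:
			return "parsed query";
		case T_PlannedStmt:
			return "finished plan";
		default:
			return "some other node";
	}
}
#endif   /* NODES_H_EXAMPLE */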
/*
* newNode -
* create a new node of the specified size and tag the node with the
* specified tag.
*
* !WARNING!: Avoid using newNode directly. You should be using the
* macro makeNode. eg. to create a Query node, use makeNode(Query)
*
* Note: the size argument should always be a compile-time constant, so the
* apparent risk of multiple evaluation doesn't matter in practice.
*/
#ifdef __GNUC__
/* With GCC, we can use a compound statement within an expression */
#define newNode(size, tag) \
({ Node *_result; \
AssertMacro((size) >= sizeof(Node)); /* need the tag, at least */ \
_result = (Node *) palloc0fast(size); \
_result->type = (tag); \
_result; \
})
#else
/*
* There is no way to dereference the palloc'ed pointer to assign the
* tag, and also return the pointer itself, so we need a holder variable.
* Fortunately, this macro isn't recursive so we just define
* a global variable for this purpose.
*/
extern PGDLLIMPORT Node *newNodeMacroHolder;
#define newNode(size, tag) \
( \
AssertMacro((size) >= sizeof(Node)), /* need the tag, at least */ \
newNodeMacroHolder = (Node *) palloc0fast(size), \
newNodeMacroHolder->type = (tag), \
newNodeMacroHolder \
)
#endif /* __GNUC__ */
#define makeNode(_type_) ((_type_ *) newNode(sizeof(_type_),T_##_type_))
#define NodeSetTag(nodeptr,t) (((Node*)(nodeptr))->type = (t))
#define IsA(nodeptr,_type_) (nodeTag(nodeptr) == T_##_type_)
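/*
 * For illustration only (an added sketch, not part of the upstream header):
 * typical construction and type-testing of a node. makeNode palloc0's the
 * struct and sets its tag, so every field other than the tag starts out
 * zero/NULL. The guard and function are hypothetical; RangeVar lives in
 * primnodes.h, which this header does not include.
 */
#ifdef NODES_H_EXAMPLE
#include "nodes/primnodes.h"	/* would be needed for RangeVar */
static Node *
make_example_node(void)
{
	RangeVar *rv = makeNode(RangeVar);	/* zeroed, with tag T_RangeVar */
	Assert(IsA(rv, RangeVar));	/* IsA just compares nodeTag to T_RangeVar */
	return (Node *) rv;
}
#endif   /* NODES_H_EXAMPLE */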
/* ----------------------------------------------------------------
* extern declarations follow
* ----------------------------------------------------------------
*/
/*
* nodes/{outfuncs.c,print.c}
*/
extern char *nodeToString(const void *obj);
struct Bitmapset; /* not to include bitmapset.h here */
struct StringInfoData; /* not to include stringinfo.h here */
extern void outToken(struct StringInfoData *str, const char *s);
extern void outBitmapset(struct StringInfoData *str,
const struct Bitmapset *bms);
/*
* nodes/{readfuncs.c,read.c}
*/
extern void *stringToNode(char *str);
extern struct Bitmapset *readBitmapset(void);
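/*
 * For illustration only (an added sketch, not part of the upstream header):
 * nodeToString and stringToNode round-trip a node tree through its text
 * representation; this is how node trees such as stored rules are kept in
 * the system catalogs. The guard and function are hypothetical.
 */
#ifdef NODES_H_EXAMPLE
static void *
roundtrip_node_tree(const void *tree)
{
	char *repr = nodeToString(tree);	/* serialize to text form */
	return stringToNode(repr);	/* ... and rebuild a fresh tree from it */
}
#endif   /* NODES_H_EXAMPLE */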
/*
* nodes/copyfuncs.c
*/
extern void *copyObject(const void *obj);
/*
* nodes/equalfuncs.c
*/
extern bool equal(const void *a, const void *b);
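/*
 * For illustration only (an added sketch, not part of the upstream header):
 * copyObject and equal are the generic deep-copy and deep-comparison entry
 * points for node trees; a fresh copy should always compare equal to its
 * original. The guard and function are hypothetical.
 */
#ifdef NODES_H_EXAMPLE
static void *
copy_and_check(const void *tree)
{
	void *dup = copyObject(tree);	/* recursive copy of the whole tree */
	Assert(equal(tree, dup));	/* recursive field-by-field comparison */
	return dup;
}
#endif   /* NODES_H_EXAMPLE */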
/*
* Typedefs for identifying qualifier selectivities and plan costs as such.
* These are just plain "double"s, but declaring a variable as Selectivity
* or Cost makes the intent more obvious.
*
* These could have gone into plannodes.h or some such, but many files
* depend on them...
*/
typedef double Selectivity; /* fraction of tuples a qualifier will pass */
typedef double Cost; /* execution cost (in page-access units) */
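/*
 * For illustration only (an added sketch, not part of the upstream header):
 * the typedefs above exist purely to document intent, e.g. when a row
 * estimate is scaled by a qual's selectivity. The guard and function are
 * hypothetical.
 */
#ifdef NODES_H_EXAMPLE
static double
example_output_rows(double input_rows, Selectivity sel)
{
	/* a Selectivity is the fraction of input tuples the qual passes */
	return input_rows * sel;
}
#endif   /* NODES_H_EXAMPLE */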
/*
* CmdType -
* enums for type of operation represented by a Query or PlannedStmt
*
* This is needed in both parsenodes.h and plannodes.h, so put it here...
*/
typedef enum CmdType
{
CMD_UNKNOWN,
CMD_SELECT, /* select stmt */
CMD_UPDATE, /* update stmt */
CMD_INSERT, /* insert stmt */
CMD_DELETE, /* delete stmt */
CMD_UTILITY, /* cmds like create, destroy, copy, vacuum, etc. */
CMD_NOTHING /* dummy command for instead nothing rules with qual */
} CmdType;
/*
* JoinType -
* enums for types of relation joins
*
* JoinType determines the exact semantics of joining two relations using
* a matching qualification. For example, it tells what to do with a tuple
* that has no match in the other relation.
*
* This is needed in both parsenodes.h and plannodes.h, so put it here...
*/
typedef enum JoinType
{
/*
* The canonical kinds of joins according to the SQL JOIN syntax. Only
* these codes can appear in parser output (e.g., JoinExpr nodes).
*/
JOIN_INNER, /* matching tuple pairs only */
JOIN_LEFT, /* pairs + unmatched LHS tuples */
JOIN_FULL, /* pairs + unmatched LHS + unmatched RHS */
JOIN_RIGHT, /* pairs + unmatched RHS tuples */
/*
* Semijoins and anti-semijoins (as defined in relational theory) do not
* appear in the SQL JOIN syntax, but there are standard idioms for
* representing them (e.g., using EXISTS). The planner recognizes these
* cases and converts them to joins. So the planner and executor must
* support these codes. NOTE: in JOIN_SEMI output, it is unspecified
* which matching RHS row is joined to. In JOIN_ANTI output, the row is
* guaranteed to be null-extended.
*/
JOIN_SEMI, /* 1 copy of each LHS row that has match(es) */
JOIN_ANTI, /* 1 copy of each LHS row that has no match */
/*
* These codes are used internally in the planner, but are not supported
* by the executor (nor, indeed, by most of the planner).
*/
JOIN_UNIQUE_OUTER, /* LHS path must be made unique */
JOIN_UNIQUE_INNER /* RHS path must be made unique */
/*
* We might need additional join types someday.
*/
} JoinType;
/*
* OUTER joins are those for which pushed-down quals must behave differently
* from the join's own quals. This is in fact everything except INNER and
* SEMI joins. However, this macro must also exclude the JOIN_UNIQUE symbols
* since those are temporary proxies for what will eventually be an INNER
* join.
*
* Note: semijoins are a hybrid case, but we choose to treat them as not
* being outer joins. This is okay principally because the SQL syntax makes
* it impossible to have a pushed-down qual that refers to the inner relation
* of a semijoin; so there is no strong need to distinguish join quals from
* pushed-down quals. This is convenient because for almost all purposes,
* quals attached to a semijoin can be treated the same as innerjoin quals.
*/
#define IS_OUTER_JOIN(jointype) \
(((1 << (jointype)) & \
((1 << JOIN_LEFT) | \
(1 << JOIN_FULL) | \
(1 << JOIN_RIGHT) | \
(1 << JOIN_ANTI))) != 0)
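/*
 * For illustration only (an added sketch, not part of the upstream header):
 * IS_OUTER_JOIN is a pure bit test against a mask of the outer-join codes,
 * so e.g. JOIN_LEFT passes while JOIN_SEMI does not. The guard and function
 * are hypothetical.
 */
#ifdef NODES_H_EXAMPLE
static bool
example_is_outer(JoinType jointype)
{
	/* true for LEFT/FULL/RIGHT/ANTI; false for INNER/SEMI/UNIQUE_* */
	return IS_OUTER_JOIN(jointype);
}
#endif   /* NODES_H_EXAMPLE */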
/*
* AggStrategy -
* overall execution strategies for Agg plan nodes
*
* This is needed in both plannodes.h and relation.h, so put it here...
*/
typedef enum AggStrategy
{
AGG_PLAIN, /* simple agg across all input rows */
AGG_SORTED, /* grouped agg, input must be sorted */
AGG_HASHED /* grouped agg, use internal hashtable */
} AggStrategy;
/*
* SetOpCmd and SetOpStrategy -
* overall semantics and execution strategies for SetOp plan nodes
*
* This is needed in both plannodes.h and relation.h, so put it here...
*/
typedef enum SetOpCmd
{
SETOPCMD_INTERSECT,
SETOPCMD_INTERSECT_ALL,
SETOPCMD_EXCEPT,
SETOPCMD_EXCEPT_ALL
} SetOpCmd;
typedef enum SetOpStrategy
{
SETOP_SORTED, /* input must be sorted */
SETOP_HASHED /* use internal hashtable */
} SetOpStrategy;
/*
* OnConflictAction -
* "ON CONFLICT" clause type of query
*
* This is needed in both parsenodes.h and plannodes.h, so put it here...
*/
typedef enum OnConflictAction
{
ONCONFLICT_NONE, /* No "ON CONFLICT" clause */
ONCONFLICT_NOTHING, /* ON CONFLICT ... DO NOTHING */
ONCONFLICT_UPDATE /* ON CONFLICT ... DO UPDATE */
} OnConflictAction;
#endif /* NODES_H */