mirror of https://github.com/postgres/postgres
Tree: 2f39106a20

Branches:
REL2_0B
REL6_4
REL6_5_PATCHES
REL7_0_PATCHES
REL7_1_STABLE
REL7_2_STABLE
REL7_3_STABLE
REL7_4_STABLE
REL8_0_STABLE
REL8_1_STABLE
REL8_2_STABLE
REL8_3_STABLE
REL8_4_STABLE
REL8_5_ALPHA1_BRANCH
REL8_5_ALPHA2_BRANCH
REL8_5_ALPHA3_BRANCH
REL9_0_ALPHA4_BRANCH
REL9_0_ALPHA5_BRANCH
REL9_0_STABLE
REL9_1_STABLE
REL9_2_STABLE
REL9_3_STABLE
REL9_4_STABLE
REL9_5_STABLE
REL9_6_STABLE
REL_10_STABLE
REL_11_STABLE
REL_12_STABLE
REL_13_STABLE
REL_14_STABLE
REL_15_STABLE
REL_16_STABLE
REL_17_STABLE
REL_18_STABLE
Release_1_0_3
WIN32_DEV
ecpg_big_bison
master

Tags:
PG95-1_01
PG95-1_08
PG95-1_09
REL2_0
REL6_1
REL6_1_1
REL6_2
REL6_2_1
REL6_3
REL6_3_2
REL6_4_2
REL6_5
REL6_5_1
REL6_5_2
REL6_5_3
REL7_0
REL7_0_2
REL7_0_3
REL7_1
REL7_1_1
REL7_1_2
REL7_1_3
REL7_1_BETA
REL7_1_BETA2
REL7_1_BETA3
REL7_2
REL7_2_1
REL7_2_2
REL7_2_3
REL7_2_4
REL7_2_5
REL7_2_6
REL7_2_7
REL7_2_8
REL7_2_BETA1
REL7_2_BETA2
REL7_2_BETA3
REL7_2_BETA4
REL7_2_BETA5
REL7_2_RC1
REL7_2_RC2
REL7_3
REL7_3_1
REL7_3_10
REL7_3_11
REL7_3_12
REL7_3_13
REL7_3_14
REL7_3_15
REL7_3_16
REL7_3_17
REL7_3_18
REL7_3_19
REL7_3_2
REL7_3_20
REL7_3_21
REL7_3_3
REL7_3_4
REL7_3_5
REL7_3_6
REL7_3_7
REL7_3_8
REL7_3_9
REL7_4
REL7_4_1
REL7_4_10
REL7_4_11
REL7_4_12
REL7_4_13
REL7_4_14
REL7_4_15
REL7_4_16
REL7_4_17
REL7_4_18
REL7_4_19
REL7_4_2
REL7_4_20
REL7_4_21
REL7_4_22
REL7_4_23
REL7_4_24
REL7_4_25
REL7_4_26
REL7_4_27
REL7_4_28
REL7_4_29
REL7_4_3
REL7_4_30
REL7_4_4
REL7_4_5
REL7_4_6
REL7_4_7
REL7_4_8
REL7_4_9
REL7_4_BETA1
REL7_4_BETA2
REL7_4_BETA3
REL7_4_BETA4
REL7_4_BETA5
REL7_4_RC1
REL7_4_RC2
REL8_0_0
REL8_0_0BETA1
REL8_0_0BETA2
REL8_0_0BETA3
REL8_0_0BETA4
REL8_0_0BETA5
REL8_0_0RC1
REL8_0_0RC2
REL8_0_0RC3
REL8_0_0RC4
REL8_0_0RC5
REL8_0_1
REL8_0_10
REL8_0_11
REL8_0_12
REL8_0_13
REL8_0_14
REL8_0_15
REL8_0_16
REL8_0_17
REL8_0_18
REL8_0_19
REL8_0_2
REL8_0_20
REL8_0_21
REL8_0_22
REL8_0_23
REL8_0_24
REL8_0_25
REL8_0_26
REL8_0_3
REL8_0_4
REL8_0_5
REL8_0_6
REL8_0_7
REL8_0_8
REL8_0_9
REL8_1_0
REL8_1_0BETA1
REL8_1_0BETA2
REL8_1_0BETA3
REL8_1_0BETA4
REL8_1_0RC1
REL8_1_1
REL8_1_10
REL8_1_11
REL8_1_12
REL8_1_13
REL8_1_14
REL8_1_15
REL8_1_16
REL8_1_17
REL8_1_18
REL8_1_19
REL8_1_2
REL8_1_20
REL8_1_21
REL8_1_22
REL8_1_23
REL8_1_3
REL8_1_4
REL8_1_5
REL8_1_6
REL8_1_7
REL8_1_8
REL8_1_9
REL8_2_0
REL8_2_1
REL8_2_10
REL8_2_11
REL8_2_12
REL8_2_13
REL8_2_14
REL8_2_15
REL8_2_16
REL8_2_17
REL8_2_18
REL8_2_19
REL8_2_2
REL8_2_20
REL8_2_21
REL8_2_22
REL8_2_23
REL8_2_3
REL8_2_4
REL8_2_5
REL8_2_6
REL8_2_7
REL8_2_8
REL8_2_9
REL8_2_BETA1
REL8_2_BETA2
REL8_2_BETA3
REL8_2_RC1
REL8_3_0
REL8_3_1
REL8_3_10
REL8_3_11
REL8_3_12
REL8_3_13
REL8_3_14
REL8_3_15
REL8_3_16
REL8_3_17
REL8_3_18
REL8_3_19
REL8_3_2
REL8_3_20
REL8_3_21
REL8_3_22
REL8_3_23
REL8_3_3
REL8_3_4
REL8_3_5
REL8_3_6
REL8_3_7
REL8_3_8
REL8_3_9
REL8_3_BETA1
REL8_3_BETA2
REL8_3_BETA3
REL8_3_BETA4
REL8_3_RC1
REL8_3_RC2
REL8_4_0
REL8_4_1
REL8_4_10
REL8_4_11
REL8_4_12
REL8_4_13
REL8_4_14
REL8_4_15
REL8_4_16
REL8_4_17
REL8_4_18
REL8_4_19
REL8_4_2
REL8_4_20
REL8_4_21
REL8_4_22
REL8_4_3
REL8_4_4
REL8_4_5
REL8_4_6
REL8_4_7
REL8_4_8
REL8_4_9
REL8_4_BETA1
REL8_4_BETA2
REL8_4_RC1
REL8_4_RC2
REL8_5_ALPHA1
REL8_5_ALPHA2
REL8_5_ALPHA3
REL9_0_0
REL9_0_1
REL9_0_10
REL9_0_11
REL9_0_12
REL9_0_13
REL9_0_14
REL9_0_15
REL9_0_16
REL9_0_17
REL9_0_18
REL9_0_19
REL9_0_2
REL9_0_20
REL9_0_21
REL9_0_22
REL9_0_23
REL9_0_3
REL9_0_4
REL9_0_5
REL9_0_6
REL9_0_7
REL9_0_8
REL9_0_9
REL9_0_ALPHA4
REL9_0_ALPHA5
REL9_0_BETA1
REL9_0_BETA2
REL9_0_BETA3
REL9_0_BETA4
REL9_0_RC1
REL9_1_0
REL9_1_1
REL9_1_10
REL9_1_11
REL9_1_12
REL9_1_13
REL9_1_14
REL9_1_15
REL9_1_16
REL9_1_17
REL9_1_18
REL9_1_19
REL9_1_2
REL9_1_20
REL9_1_21
REL9_1_22
REL9_1_23
REL9_1_24
REL9_1_3
REL9_1_4
REL9_1_5
REL9_1_6
REL9_1_7
REL9_1_8
REL9_1_9
REL9_1_ALPHA1
REL9_1_ALPHA2
REL9_1_ALPHA3
REL9_1_ALPHA4
REL9_1_ALPHA5
REL9_1_BETA1
REL9_1_BETA2
REL9_1_BETA3
REL9_1_RC1
REL9_2_0
REL9_2_1
REL9_2_10
REL9_2_11
REL9_2_12
REL9_2_13
REL9_2_14
REL9_2_15
REL9_2_16
REL9_2_17
REL9_2_18
REL9_2_19
REL9_2_2
REL9_2_20
REL9_2_21
REL9_2_22
REL9_2_23
REL9_2_24
REL9_2_3
REL9_2_4
REL9_2_5
REL9_2_6
REL9_2_7
REL9_2_8
REL9_2_9
REL9_2_BETA1
REL9_2_BETA2
REL9_2_BETA3
REL9_2_BETA4
REL9_2_RC1
REL9_3_0
REL9_3_1
REL9_3_10
REL9_3_11
REL9_3_12
REL9_3_13
REL9_3_14
REL9_3_15
REL9_3_16
REL9_3_17
REL9_3_18
REL9_3_19
REL9_3_2
REL9_3_20
REL9_3_21
REL9_3_22
REL9_3_23
REL9_3_24
REL9_3_25
REL9_3_3
REL9_3_4
REL9_3_5
REL9_3_6
REL9_3_7
REL9_3_8
REL9_3_9
REL9_3_BETA1
REL9_3_BETA2
REL9_3_RC1
REL9_4_0
REL9_4_1
REL9_4_10
REL9_4_11
REL9_4_12
REL9_4_13
REL9_4_14
REL9_4_15
REL9_4_16
REL9_4_17
REL9_4_18
REL9_4_19
REL9_4_2
REL9_4_20
REL9_4_21
REL9_4_22
REL9_4_23
REL9_4_24
REL9_4_25
REL9_4_26
REL9_4_3
REL9_4_4
REL9_4_5
REL9_4_6
REL9_4_7
REL9_4_8
REL9_4_9
REL9_4_BETA1
REL9_4_BETA2
REL9_4_BETA3
REL9_4_RC1
REL9_5_0
REL9_5_1
REL9_5_10
REL9_5_11
REL9_5_12
REL9_5_13
REL9_5_14
REL9_5_15
REL9_5_16
REL9_5_17
REL9_5_18
REL9_5_19
REL9_5_2
REL9_5_20
REL9_5_21
REL9_5_22
REL9_5_23
REL9_5_24
REL9_5_25
REL9_5_3
REL9_5_4
REL9_5_5
REL9_5_6
REL9_5_7
REL9_5_8
REL9_5_9
REL9_5_ALPHA1
REL9_5_ALPHA2
REL9_5_BETA1
REL9_5_BETA2
REL9_5_RC1
REL9_6_0
REL9_6_1
REL9_6_10
REL9_6_11
REL9_6_12
REL9_6_13
REL9_6_14
REL9_6_15
REL9_6_16
REL9_6_17
REL9_6_18
REL9_6_19
REL9_6_2
REL9_6_20
REL9_6_21
REL9_6_22
REL9_6_23
REL9_6_24
REL9_6_3
REL9_6_4
REL9_6_5
REL9_6_6
REL9_6_7
REL9_6_8
REL9_6_9
REL9_6_BETA1
REL9_6_BETA2
REL9_6_BETA3
REL9_6_BETA4
REL9_6_RC1
REL_10_0
REL_10_1
REL_10_10
REL_10_11
REL_10_12
REL_10_13
REL_10_14
REL_10_15
REL_10_16
REL_10_17
REL_10_18
REL_10_19
REL_10_2
REL_10_20
REL_10_21
REL_10_22
REL_10_23
REL_10_3
REL_10_4
REL_10_5
REL_10_6
REL_10_7
REL_10_8
REL_10_9
REL_10_BETA1
REL_10_BETA2
REL_10_BETA3
REL_10_BETA4
REL_10_RC1
REL_11_0
REL_11_1
REL_11_10
REL_11_11
REL_11_12
REL_11_13
REL_11_14
REL_11_15
REL_11_16
REL_11_17
REL_11_18
REL_11_19
REL_11_2
REL_11_20
REL_11_21
REL_11_22
REL_11_3
REL_11_4
REL_11_5
REL_11_6
REL_11_7
REL_11_8
REL_11_9
REL_11_BETA1
REL_11_BETA2
REL_11_BETA3
REL_11_BETA4
REL_11_RC1
REL_12_0
REL_12_1
REL_12_10
REL_12_11
REL_12_12
REL_12_13
REL_12_14
REL_12_15
REL_12_16
REL_12_17
REL_12_18
REL_12_19
REL_12_2
REL_12_20
REL_12_21
REL_12_22
REL_12_3
REL_12_4
REL_12_5
REL_12_6
REL_12_7
REL_12_8
REL_12_9
REL_12_BETA1
REL_12_BETA2
REL_12_BETA3
REL_12_BETA4
REL_12_RC1
REL_13_0
REL_13_1
REL_13_10
REL_13_11
REL_13_12
REL_13_13
REL_13_14
REL_13_15
REL_13_16
REL_13_17
REL_13_18
REL_13_19
REL_13_2
REL_13_20
REL_13_21
REL_13_22
REL_13_23
REL_13_3
REL_13_4
REL_13_5
REL_13_6
REL_13_7
REL_13_8
REL_13_9
REL_13_BETA1
REL_13_BETA2
REL_13_BETA3
REL_13_RC1
REL_14_0
REL_14_1
REL_14_10
REL_14_11
REL_14_12
REL_14_13
REL_14_14
REL_14_15
REL_14_16
REL_14_17
REL_14_18
REL_14_19
REL_14_2
REL_14_20
REL_14_3
REL_14_4
REL_14_5
REL_14_6
REL_14_7
REL_14_8
REL_14_9
REL_14_BETA1
REL_14_BETA2
REL_14_BETA3
REL_14_RC1
REL_15_0
REL_15_1
REL_15_10
REL_15_11
REL_15_12
REL_15_13
REL_15_14
REL_15_15
REL_15_2
REL_15_3
REL_15_4
REL_15_5
REL_15_6
REL_15_7
REL_15_8
REL_15_9
REL_15_BETA1
REL_15_BETA2
REL_15_BETA3
REL_15_BETA4
REL_15_RC1
REL_15_RC2
REL_16_0
REL_16_1
REL_16_10
REL_16_11
REL_16_2
REL_16_3
REL_16_4
REL_16_5
REL_16_6
REL_16_7
REL_16_8
REL_16_9
REL_16_BETA1
REL_16_BETA2
REL_16_BETA3
REL_16_RC1
REL_17_0
REL_17_1
REL_17_2
REL_17_3
REL_17_4
REL_17_5
REL_17_6
REL_17_7
REL_17_BETA1
REL_17_BETA2
REL_17_BETA3
REL_17_RC1
REL_18_0
REL_18_1
REL_18_BETA1
REL_18_BETA2
REL_18_BETA3
REL_18_RC1
Release_1_0_2
Release_2_0
Release_2_0_0
release-6-3
8426 Commits (2f39106a209e647d7b1895331fca115f9bb6ec8d)
4f3b38fe2b
Get rid of explicit argument-count markings in tab-complete.c.
This replaces the "TailMatchesN" macros with just "TailMatches", and likewise "HeadMatchesN" becomes "HeadMatches" and "MatchesN" becomes "Matches". The various COMPLETE_WITH_LISTn macros are reduced to COMPLETE_WITH, and the single-item COMPLETE_WITH_CONST also gets folded into that. This eliminates a lot of minor annoyance in writing tab-completion rules. Usefully, the compiled code also gets a bit smaller (10% or so, on my machine). The implementation depends on variadic macros, so we couldn't have done this before we required C99. Andres Freund and Thomas Munro; some cosmetic cleanup by me. Discussion: https://postgr.es/m/d8jo9djvm7h.fsf@dalvik.ping.uio.no |
7 years ago
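
Illustrative aside for 4f3b38fe2b: the commit relies on C99 variadic macros so that a single `Matches` macro can replace the old fixed-arity `MatchesN` family. The sketch below shows the general technique only; `VA_NARGS`, `words_match`, and the sample word list are invented for the example, not taken from tab-complete.c.

```c
#include <stdarg.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Count the string arguments via a compound literal, then forward them,
 * so callers never have to spell out an arity suffix. */
#define VA_NARGS(...) (sizeof((const char *[]){__VA_ARGS__}) / sizeof(const char *))
#define Matches(...)  words_match((int) VA_NARGS(__VA_ARGS__), __VA_ARGS__)

/* Pretend these are the words already typed on the command line. */
static const char *prev_words[] = {"ALTER", "TABLE", "foo"};
static const int n_prev_words = 3;

static bool
words_match(int npatterns, ...)
{
    va_list args;
    bool result = (npatterns == n_prev_words);

    va_start(args, npatterns);
    for (int i = 0; i < npatterns; i++)
    {
        const char *pattern = va_arg(args, const char *);

        if (result && strcmp(pattern, prev_words[i]) != 0)
            result = false;
    }
    va_end(args);
    return result;
}

int
main(void)
{
    /* One macro regardless of arity, instead of Matches2/Matches3/... */
    printf("%d\n", Matches("ALTER", "TABLE", "foo"));   /* prints 1 */
    printf("%d\n", Matches("CREATE", "TABLE"));         /* prints 0 */
    return 0;
}
```
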
f025bd2ddd
Use size_t consistently in dsa.{ch}.
Takeshi Ideriha complained that there is a mixture of Size and size_t in dsa.c and corresponding header. Let's use size_t. Back-patch to 10 where dsa.c landed, to make future back-patching easy. Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F19ABD9%40G01JPEXMBKW04 |
7 years ago

b09a64d602
Add missing pg_description strings for pg_type entries.
I noticed that all non-composite, non-array entries in pg_type.dat had descr strings, except for "json" and the pseudo-types. The lack for json seems certainly an oversight, and there's surely little reason to not have entries for the pseudo-types either. So add some. "make reformat-dat-files" turned up some formatting issues in pg_amop.dat, too, so fix those in passing. No catversion bump since the backend doesn't care too much what is in pg_description. |
7 years ago

3dc820c43e
Teach genbki.pl to auto-generate pg_type entries for array types.
This eliminates some more tedium in adding new catalog entries, specifically the need to set up an array type when adding a new built-in data type. Now it's sufficient to assign an OID for the array type and write it in an "array_type_oid" metadata field. You don't have to fill the base type's typarray link explicitly, either. No catversion bump since the contents of pg_type aren't changed. (Well, their order might be different, but that doesn't matter.) John Naylor, reviewed and whacked around a bit by Dagfinn Ilmari Mannsåker, and some more by me. Discussion: https://postgr.es/m/CAJVSVGVTb6m9pJF49b3SuA8J+T-THO9c0hxOmoyv-yGKh-FbNg@mail.gmail.com |
7 years ago

2a6368343f
Add support for nearest-neighbor (KNN) searches to SP-GiST
Previously, KNN searches were supported only by GiST; SP-GiST is also capable of supporting them, and this commit implements that support. The SP-GiST scan stack is replaced with a queue, which serves as a stack if no ordering is specified. KNN support is provided for three SP-GiST opclasses: quad_point_ops, kd_point_ops and poly_ops (catversion is bumped). Some common parts between GiST and SP-GiST KNN are extracted into separate functions. Discussion: https://postgr.es/m/570825e8-47d0-4732-2bf6-88d67d2d51c8%40postgrespro.ru Author: Nikita Glukhov, Alexander Korotkov based on GSoC work by Vlad Sterzhanov Review: Andrey Borodin, Alexander Korotkov
7 years ago

d0cfc3d6a4
Add a debugging option to stress-test outfuncs.c and readfuncs.c.
In the normal course of operation, query trees will be serialized only if they are stored as views or rules; and plan trees will be serialized only if they get passed to parallel-query workers. This leaves an awful lot of opportunity for bugs/oversights to not get detected, as indeed we've just been reminded of the hard way. To improve matters, this patch adds a new compile option WRITE_READ_PARSE_PLAN_TREES, which is modeled on the longstanding option COPY_PARSE_PLAN_TREES; but instead of passing all parse and plan trees through copyObject, it passes them through nodeToString + stringToNode. Enabling this option in a buildfarm animal or two will catch problems at least for cases that are exercised by the regression tests. A small problem with this idea is that readfuncs.c historically has discarded location fields, on the reasonable grounds that parse locations in a retrieved view are not relevant to the current query. But doing that in WRITE_READ_PARSE_PLAN_TREES breaks pg_stat_statements, and it could cause problems for future improvements that might try to report error locations at runtime. To fix that, provide a variant behavior in readfuncs.c that makes it restore location fields when told to. In passing, const-ify the string arguments of stringToNode and its subsidiary functions, just because it annoyed me that they weren't const already. Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us |
7 years ago
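
Illustrative aside for d0cfc3d6a4: the new WRITE_READ_PARSE_PLAN_TREES option amounts to forcing every parse and plan tree through an outfuncs/readfuncs round trip. The sketch below shows that round trip using the real nodeToString()/stringToNode() entry points; the wrapper function itself is hypothetical, not the code the commit adds.

```c
#include "postgres.h"

#include "nodes/nodes.h"

/* Hypothetical helper: serialize a node tree and rebuild it, in the spirit
 * of the WRITE_READ_PARSE_PLAN_TREES debugging option. */
static Node *
write_read_copy(Node *tree)
{
    char   *str = nodeToString(tree);           /* outfuncs.c */
    Node   *copy = (Node *) stringToNode(str);  /* readfuncs.c */

    /* A tree that does not survive the round trip points at missing or
     * inconsistent outfuncs/readfuncs support for some node type. */
    if (!equal(tree, copy))
        elog(WARNING, "outfuncs/readfuncs round trip changed the node tree");

    pfree(str);
    return copy;
}
```
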
db1071d4ee
Fix some minor issues exposed by outfuncs/readfuncs testing.
A test patch to pass parse and plan trees through outfuncs + readfuncs exposed several issues that need to be fixed to get clean matches: Query.withCheckOptions failed to get copied; it's intentionally ignored by outfuncs/readfuncs on the grounds that it'd always be NIL anyway in stored rules. This seems less than future-proof, and it's not even saving very much, so just undo the decision and treat the field like all others. Several places that convert a view RTE into a subquery RTE, or similar manipulations, failed to clear out fields that were specific to the original RTE type and should be zero in a subquery RTE. Since readfuncs.c will leave such fields as zero, equalfuncs.c thinks the nodes are different leading to a reported mismatch. It seems like a good idea to clear out the no-longer-needed fields, even though in principle nothing should look at them; the node ought to be indistinguishable from how it would look if we'd built a new node instead of scribbling on the old one. BuildOnConflictExcludedTargetlist randomly set the resname of some TargetEntries to "" not NULL. outfuncs/readfuncs don't distinguish those cases, and so the string will read back in as NULL ... but equalfuncs.c does distinguish. Perhaps we ought to try to make things more consistent in this area --- but it's just useless extra code space for BuildOnConflictExcludedTargetlist to not use NULL here, so I fixed it for now by making it do that. catversion bumped because the change in handling of Query.withCheckOptions affects stored rules. Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us |
7 years ago

09991e5a47
Fix some probably-minor oversights in readfuncs.c.
The system expects TABLEFUNC RTEs to have coltypes, coltypmods, and colcollations lists, but outfuncs doesn't dump them and readfuncs doesn't restore them. This doesn't cause obvious failures, because the only things that look at those fields are expandRTE() and get_rte_attribute_type(), which are mostly used during parse analysis, before anything would've passed the parsetree through outfuncs/readfuncs. But expandRTE() is used in build_physical_tlist(), which means that that function will return a wrong answer for a TABLEFUNC RTE that came from a view. Very accidentally, this doesn't cause serious problems, because what it will return is NIL which callers will interpret as "couldn't build a physical tlist because of dropped columns". So you still get a plan that works, though it's marginally less efficient than it could be. There are also some other expandRTE() calls associated with transformation of whole-row Vars in the planner. I have been unable to exhibit misbehavior from that, and it may be unreachable in any case that anyone would care about ... but I'm not entirely convinced, so this seems like something we should back- patch a fix for. Fortunately, we can fix it without forcing a change of stored rules and a catversion bump, because we can just copy these lists from the subsidiary TableFunc object. readfuncs.c was also missing support for NamedTuplestoreScan plan nodes. This accidentally fails to break parallel query because a query using a named tuplestore would never be considered parallel-safe anyway. However, project policy since parallel query came in is that all plan node types should have outfuncs/readfuncs support, so this is clearly an oversight that should be repaired. Noted while fooling around with a patch to test outfuncs/readfuncs more thoroughly. That exposed some other issues too, but these are the only ones that seem worth back-patching. Back-patch to v10 where both of these features came in. Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us |
7 years ago

1d6fbc38d9
Refactor routines for subscription and publication lookups
Those routines gain a missing_ok argument, allowing a caller to get a NULL result instead of an error if set to true. This is part of a larger refactoring effort for objectaddress.c where trying to check for non-existing objects does not result in cache lookup failures. Author: Michael Paquier Reviewed-by: Aleksander Alekseev, Álvaro Herrera Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com |
7 years ago

07a3af0ff8
Fix parsetree representation of XMLTABLE(XMLNAMESPACES(DEFAULT ...)).
The original coding for XMLTABLE thought it could represent a default namespace by a T_String Value node with a null string pointer. That's not okay, though; in particular outfuncs.c/readfuncs.c are not on board with such a representation, meaning you'll get a null pointer crash if you try to store a view or rule containing this construct. To fix, change the parsetree representation so that we have a NULL list element, instead of a bogus Value node. This isn't really a functional limitation since default XML namespaces aren't yet implemented in the executor; you'd just get "DEFAULT namespace is not supported" anyway. But crashes are not nice, so back-patch to v10 where this syntax was added. Ordinarily we'd consider a parsetree representation change to be un-backpatchable; but since existing releases would crash on the way to storing such constructs, there can't be any existing views/rules to be incompatible with. Per report from Andrey Lepikhov. Discussion: https://postgr.es/m/3690074f-abd2-56a9-144a-aa5545d7a291@postgrespro.ru |
7 years ago

1f4a920b73
Fix failure with initplans used conditionally during EvalPlanQual rechecks.
The EvalPlanQual machinery assumes that any initplans (that is, uncorrelated sub-selects) used during an EPQ recheck would have already been evaluated during the main query; this is implicit in the fact that execPlan pointers are not copied into the EPQ estate's es_param_exec_vals. But it's possible for that assumption to fail, if the initplan is only reached conditionally. For example, a sub-select inside a CASE expression could be reached during a recheck when it had not been previously, if the CASE test depends on a column that was just updated. This bug is old, appearing to date back to my rewrite of EvalPlanQual in commit |
7 years ago

6b78231d91
Move PartitionDispatchData struct definition to execPartition.c
There's no reason to expose the struct definition, so don't. Author: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> Discussion: https://postgr.es/m/d3fa24c1-bc65-7133-81df-6474387ccc4f@lab.ntt.co.jp |
7 years ago

75f9c4ca5a
Don't allow LIMIT/OFFSET clause within sub-selects to be pushed to workers.
Allowing sub-select containing LIMIT/OFFSET in workers can lead to inconsistent results at the top-level as there is no guarantee that the row order will be fully deterministic. The fix is to prohibit pushing LIMIT/OFFSET within sub-selects to workers. Reported-by: Andrew Fletcher Bug: 15324 Author: Amit Kapila Reviewed-by: Dilip Kumar Backpatch-through: 9.6 Discussion: https://postgr.es/m/153417684333.10284.11356259990921828616@wrigleys.postgresql.org |
7 years ago

0ba06e0bfb
Allow concurrent-safe open() and fopen() in frontend code for Windows
PostgreSQL uses a custom wrapper for open() and fopen() which is concurrent-safe, allowing multiple processes to open and work on the same file. This has a couple of advantages:
- pg_test_fsync does not handle O_DSYNC correctly otherwise, leading to false claims that disks are unsafe.
- TAP tests can run into race conditions when a postmaster and pg_ctl open postmaster.pid, fixing some random failures in the buildfarm.
pg_upgrade is one frontend tool using workarounds to bypass file locking issues with the log files it generates; however, the interactions with pg_ctl are proving to be tedious to get rid of, so this is left for later. Author: Laurenz Albe Reviewed-by: Michael Paquier, Kuntal Ghosh Discussion: https://postgr.es/m/1527846213.2475.31.camel@cybertec.at Discussion: https://postgr.es/m/16922.1520722108@sss.pgh.pa.us
7 years ago

14ea365203
Hide a static inline from FRONTEND code.
For some reason pg_waldump is including tuptable.h, and the recent addition of a static inline function to it is causing problems on older buildfarm members that fail to optimize such functions away completely. I wonder if this situation doesn't mean that some header refactoring is called for ... but as a band-aid, wrap the static function in "#ifndef FRONTEND". Discussion: https://postgr.es/m/20180824154237.mabsv6fsz5q37bma@alap3.anarazel.de |
7 years ago
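
Illustrative aside for 14ea365203: the band-aid is simply to compile the function only for backend code. A minimal sketch of the pattern (the helper name is a placeholder, not the actual tuptable.h function):

```c
/* Backend-only static inline, hidden from frontend translation units. */
#ifndef FRONTEND
static inline void
backend_only_helper(void)
{
    /* body that references backend-only symbols goes here */
}
#endif                          /* FRONTEND */
```
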
ed0cdf0e05
Install a check for mis-linking of src/port and src/common functions.
On ELF-based platforms (and maybe others?) it's possible for a shared library, when dynamically loaded into the backend, to call the backend versions of src/port and src/common functions rather than the frontend versions that are actually linked into the shlib. This is definitely not what we want, because the frontend versions often behave slightly differently. Up to now it's been "slight" enough that nobody noticed; but with the addition of SCRAM support functions in src/common, we're observing crashes due to the difference between palloc and malloc memory allocation rules, as reported in bug #15367 from Jeremy Evans. The purpose of this patch is to create a direct test for this type of mis-linking, so that we know whether any given platform requires extra measures to prevent using the wrong functions. If the test fails, it will lead to connection failures in the contrib/postgres_fdw regression test. At the moment, *BSD platforms using ELF format are known to have the problem and can be expected to fail; but we need to know whether anything else does, and we need a reliable ongoing check for future platforms. Actually fixing the problem will be the subject of later commit(s). Discussion: https://postgr.es/m/153626613985.23143.4743626885618266803@wrigleys.postgresql.org |
7 years ago

361844fe56
Save/restore SPI's global variables in SPI_connect() and SPI_finish().
This patch removes two sources of interference between nominally independent functions when one SPI-using function calls another, perhaps without knowing that it does so. Chapman Flack pointed out that xml.c's query_to_xml_internal() expects SPI_tuptable and SPI_processed to stay valid across datatype output function calls; but it's possible that such a call could involve re-entrant use of SPI. It seems likely that there are similar hazards elsewhere, if not in the core code then in third-party SPI users. Previously SPI_finish() reset SPI's API globals to zeroes/nulls, which would typically make for a crash in such a situation. Restoring them to the values they had at SPI_connect() seems like a considerably more useful behavior, and it still meets the design goal of not leaving any dangling pointers to tuple tables of the function being exited. Also, cause SPI_connect() to reset these variables to zeroes/nulls after saving them. This prevents interference in the opposite direction: it's possible that a SPI-using function that's only ever been tested standalone contains assumptions that these variables start out as zeroes. That was the case as long as you were the outermost SPI user, but not so much for an inner user. Now it's consistent. Report and fix suggestion by Chapman Flack, actual patch by me. Back-patch to all supported branches. Discussion: https://postgr.es/m/9fa25bef-2e4f-1c32-22a4-3ad0723c4a17@anastigmatix.net |
7 years ago
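
Illustrative aside for 361844fe56: the hazard is an outer SPI user that keeps consuming SPI_tuptable/SPI_processed while something it calls re-enters SPI. A rough sketch of such an outer function is below; with the save/restore behavior described above, the inner SPI_finish() puts the outer function's result globals back. The query and function are invented for the example.

```c
#include "postgres.h"

#include "executor/spi.h"

static void
outer_spi_user(void)
{
    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");

    SPI_execute("SELECT relname FROM pg_class", true, 0);

    for (uint64 i = 0; i < SPI_processed; i++)
    {
        /* Suppose this output-function call re-enters SPI internally.
         * After the commit above, SPI_tuptable and SPI_processed are
         * restored by the inner SPI_finish(), so the loop still reads
         * its own result set. */
        (void) SPI_getvalue(SPI_tuptable->vals[i],
                            SPI_tuptable->tupdesc, 1);
    }

    SPI_finish();
}
```
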
f868a8143a
Fix longstanding recursion hazard in sinval message processing.
LockRelationOid and sibling routines supposed that, if our session already
holds the lock they were asked to acquire, they could skip calling
AcceptInvalidationMessages on the grounds that we must have already read
any remote sinval messages issued against the relation being locked.
This is normally true, but there's a critical special case where it's not:
processing inside AcceptInvalidationMessages might attempt to access system
relations, resulting in a recursive call to acquire a relation lock.
Hence, if the outer call had acquired that same system catalog lock, we'd
fall through, despite the possibility that there's an as-yet-unread sinval
message for that system catalog. This could, for example, result in
failure to access a system catalog or index that had just been processed
by VACUUM FULL. This is the explanation for buildfarm failures we've been
seeing intermittently for the past three months. The bug is far older
than that, but commits
7 years ago

842cb9fa62
Refactor dlopen() support
Nowadays, all platforms except Windows and older HP-UX have standard dlopen() support. So having a separate implementation per platform under src/backend/port/dynloader/ is a bit excessive. Instead, treat dlopen() like other library functions that happen to be missing sometimes and put a replacement implementation under src/port/. Discussion: https://www.postgresql.org/message-id/flat/e11a49cb-570a-60b7-707d-7084c8de0e61%402ndquadrant.com#54e735ae37476a121abb4e33c2549b03 |
7 years ago
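
Illustrative aside for 842cb9fa62: the point is that callers just use the standard dlopen()/dlsym()/dlclose() interface, with any emulation hidden in src/port. A stand-alone POSIX sketch (the library and symbol names are platform-specific examples, not PostgreSQL code):

```c
#include <dlfcn.h>
#include <stdio.h>

int
main(void)
{
    /* "libm.so.6" is a Linux-specific name; adjust for other platforms. */
    void   *handle = dlopen("libm.so.6", RTLD_NOW | RTLD_GLOBAL);

    if (handle == NULL)
    {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    double (*cosine) (double) = (double (*) (double)) dlsym(handle, "cos");

    if (cosine != NULL)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```
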
ac27c74def
Fix the overrun in hash index metapage for smaller block sizes.
The commit
7 years ago

ae5205c8a8
Make argument names of pg_get_object_address consistent, and fix docs.
pg_get_object_address and pg_identify_object_as_address are supposed to be inverses, but they disagreed as to the names of the arguments representing the textual form of an object address. Moreover, the documented argument names didn't agree with reality at all, either for these functions or pg_identify_object. In HEAD and v11, I think we can get away with renaming the input arguments of pg_get_object_address to match the outputs of pg_identify_object_as_address. In theory that might break queries using named-argument notation to call pg_get_object_address, but it seems really unlikely that anybody is doing that, or that they'd have much trouble adjusting if they were. In older branches, we'll just live with the lack of consistency. Aside from fixing the documentation of these functions to match reality, I couldn't resist the temptation to do some copy-editing. Per complaint from Jean-Pierre Pelletier. Back-patch to 9.5 where these functions were introduced. (Before v11, this is a documentation change only.) Discussion: https://postgr.es/m/CANGqjDnWH8wsTY_GzDUxbt4i=y-85SJreZin4Hm8uOqv1vzRQA@mail.gmail.com |
7 years ago

17b7c302b5
Fully enforce uniqueness of constraint names.
It's been true for a long time that we expect names of table and domain constraints to be unique among the constraints of that table or domain. However, the enforcement of that has been pretty haphazard, and it missed some corner cases such as creating a CHECK constraint and then an index constraint of the same name (as per recent report from André Hänsel). Also, due to the lack of an actual unique index enforcing this, duplicates could be created through race conditions. Moreover, the code that searches pg_constraint has been quite inconsistent about how to handle duplicate names if one did occur: some places checked and threw errors if there was more than one match, while others just processed the first match they came to. To fix, create a unique index on (conrelid, contypid, conname). Since either conrelid or contypid is zero, this will separately enforce uniqueness of constraint names among constraints of any one table and any one domain. (If we ever implement SQL assertions, and put them into this catalog, more thought might be needed. But it'd be at least as reasonable to put them into a new catalog; having overloaded this one catalog with two kinds of constraints was a mistake already IMO.) This index can replace the existing non-unique index on conrelid, though we need to keep the one on contypid for query performance reasons. Having done that, we can simplify the logic in various places that either coped with duplicates or neglected to, as well as potentially improve lookup performance when searching for a constraint by name. Also, as per our usual practice, install a preliminary check so that you get something more friendly than a unique-index violation report in the case complained of by André. And teach ChooseIndexName to avoid choosing autogenerated names that would draw such a failure. While it's not possible to make such a change in the back branches, it doesn't seem quite too late to put this into v11, so do so. Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de |
7 years ago

c076f3d74a
Remove pg_constraint.conincluding
This column was added in commit
7 years ago

4ddd8f5f55
Fix memory leak in TRUNCATE decoding
When decoding a TRUNCATE record, the relids array was being allocated in the main ReorderBuffer memory context, but not released with the change, resulting in a memory leak. The array was also ignored when serializing/deserializing the change, assuming all the information is stored in the change itself. So when spilling the change to disk, we serialized only the pointer to the relids array. Thanks to never releasing the array, the pointer, however, remained valid even after loading the change back to memory, preventing an actual crash. This fixes both the memory leak and (de)serialization. The relids array is still allocated in the main ReorderBuffer memory context (none of the existing ones seems like a good match, and adding an extra context seems like overkill). The allocation is wrapped in a new ReorderBuffer API function, to keep the details within reorderbuffer.c, just like the other ReorderBufferGet methods do. Author: Tomas Vondra Discussion: https://www.postgresql.org/message-id/flat/66175a41-9342-2845-652f-1bd4c3ee50aa%402ndquadrant.com Backpatch: 11, where decoding of TRUNCATE was introduced
7 years ago

44cac93464
Avoid using potentially-under-aligned page buffers.
There's a project policy against using plain "char buf[BLCKSZ]" local or static variables as page buffers; preferred style is to palloc or malloc each buffer to ensure it is MAXALIGN'd. However, that policy's been ignored in an increasing number of places. We've apparently got away with it so far, probably because (a) relatively few people use platforms on which misalignment causes core dumps and/or (b) the variables chance to be sufficiently aligned anyway. But this is not something to rely on. Moreover, even if we don't get a core dump, we might be paying a lot of cycles for misaligned accesses. To fix, invent new union types PGAlignedBlock and PGAlignedXLogBlock that the compiler must allocate with sufficient alignment, and use those in place of plain char arrays. I used these types even for variables where there's no risk of a misaligned access, since ensuring proper alignment should make kernel data transfers faster. I also changed some places where we had been palloc'ing short-lived buffers, for coding style uniformity and to save palloc/pfree overhead. Since this seems to be a live portability hazard (despite the lack of field reports), back-patch to all supported versions. Patch by me; thanks to Michael Paquier for review. Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de |
7 years ago
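
Illustrative aside for 44cac93464: the union trick forces the buffer to the strictest alignment of its members. A simplified stand-alone sketch (the real PGAlignedBlock/PGAlignedXLogBlock types live in src/include/c.h; the names here are generic):

```c
#include <stdint.h>
#include <stdio.h>

#define BLCKSZ 8192

typedef union AlignedBlock
{
    char        data[BLCKSZ];       /* the actual page image */
    double      force_align_d;      /* forces alignment suitable for ... */
    int64_t     force_align_i64;    /* ... double and 8-byte integer access */
} AlignedBlock;

int
main(void)
{
    AlignedBlock buf;               /* instead of "char buf[BLCKSZ]" */

    printf("buffer at %p, %zu bytes\n", (void *) buf.data, sizeof(buf.data));
    return 0;
}
```
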
ec74369931
Implement "pg_ctl logrotate" command
Currently there are two ways to trigger log rotation in logging collector process: call pg_rotate_logfile() SQL-function or send SIGUSR1 signal directly to logging collector process. However, it's nice to have more suitable way for external tools to do that, which wouldn't require SQL connection or knowledge of logging collector pid. This commit implements triggering log rotation by "pg_ctl logrotate" command. Discussion: https://postgr.es/m/20180416.115435.28153375.horiguchi.kyotaro%40lab.ntt.co.jp Author: Kyotaro Horiguchi, Alexander Kuzmenkov, Alexander Korotkov |
7 years ago

5758685c9f
Fix 8a934d677 for libc++ and make more include order resistant.
The previous definition was used in C++ mode, which causes problems when using clang with libc++ (rather than libstdc++), due to bugs therein. So just avoid it in C++ mode. A second problem is that, depending on include order and implicit includes, the previous definition did not guarantee that the current hack was effective by the time isinf was used; fix that by forcing math.h to be included. This can cause clang-using builds, or gcc-using ones with JIT enabled, to slow down noticeably. It's likely that we will at some point want a better solution for the performance problem, but while this hack is in place it should at least work. Reported-By: Steven Winfield Bug: #15270 Discussion: https://postgr.es/m/153116283147.1401.360416241833049560@wrigleys.postgresql.org Author: Andres Freund Backpatch: 11, like the previous commit.
7 years ago

8c62d9d16f
Make checksum_impl.h safe to compile with -fstrict-aliasing.
In general, Postgres requires -fno-strict-aliasing with compilers that implement C99 strict aliasing rules. There's little hope of getting rid of that overall. But it seems like it would be a good idea if storage/checksum_impl.h in particular didn't depend on it, because that header is explicitly intended to be included by external programs. We don't have a lot of control over the compiler switches that an external program might use, as shown by Michael Banck's report of failure in a privately-modified version of pg_verify_checksums. Hence, switch to using a union in place of willy-nilly pointer casting inside this file. I think this makes the code a bit more readable anyway. checksum_impl.h hasn't changed since it was introduced in 9.3, so back-patch all the way. Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de |
7 years ago
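
Illustrative aside for 8c62d9d16f: reading the block through a union sidesteps the char-pointer casts that strict-aliasing compilers may miscompile. A stand-alone sketch with a toy checksum (the real algorithm in checksum_impl.h is FNV-based; nothing below is the actual implementation):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192

typedef union PageBytes
{
    char        data[BLCKSZ];
    uint32_t    words[BLCKSZ / sizeof(uint32_t)];
} PageBytes;

static uint32_t
toy_checksum(const PageBytes *page)
{
    uint32_t    sum = 0;

    /* Access the page as 32-bit words via the union, not via casts. */
    for (size_t i = 0; i < BLCKSZ / sizeof(uint32_t); i++)
        sum ^= page->words[i];
    return sum;
}

int
main(void)
{
    PageBytes   page;

    memset(page.data, 0xAB, sizeof(page.data));
    printf("checksum = %08x\n", (unsigned) toy_checksum(&page));
    return 0;
}
```
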
7cfdc77023
Disable support for partitionwise joins in problematic cases.
Commit
7 years ago

7061e03319
Add semicolons to end of internally run queries
This ensures that the --echo output of various tools (under scripts) is valid multiline SQL. Author: Tatsuro Yamada <yamada.tatsuro@lab.ntt.co.jp> |
7 years ago

a4a232b1e7
Error position support for defaults and check constraints
Add support for error position reporting for the expressions contained in defaults and check constraint definitions. This currently works only for CREATE TABLE, not ALTER TABLE, because the latter is not set up to pass around the original query string. Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr> |
7 years ago

4b035841a1
Fix IndexInfo comments.
Recently, ii_KeyAttrNumbers was renamed to ii_IndexAttrNumbers, and ii_Am field was added, but the comments were not updated. Author: Yugo Nagata Discussion: https://www.postgresql.org/message-id/20180830134831.e35a91b8b978b248c16c8f7b@sraoss.co.jp |
7 years ago

ee0e2745c2
Code review for simplehash.h.
Fix reference to non-existent file in comment. Add SH_ prefix to the EMPTY and IN_USE tokens, to reduce likelihood of collisions with unrelated macros. Add include guards around the function definitions that are not "parameterized", so the header can be used again in the same translation unit. Undefine SH_EQUAL macro where other "parameter" macros are undefined, for the same reason. Author: Thomas Munro Reviewed-by: Tom Lane Discussion: https://postgr.es/m/CAEepm%3D1LdXZ3mMTM8tHt_b%3DK1kREit%3Dp8sikesak%3DkzHHM07Nw%40mail.gmail.com |
7 years ago
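
Illustrative aside for ee0e2745c2: simplehash.h is a "parameterized header", which is why the include guards and SH_-prefixed tokens matter; the file is included once per generated hash table type. An abbreviated sketch of typical usage follows; the element type, prefix, and hash expression are invented for the example.

```c
#include "postgres.h"

typedef struct DemoEntry
{
    int32       key;
    int32       value;
    char        status;         /* simplehash requires a status member */
} DemoEntry;

#define SH_PREFIX demohash
#define SH_ELEMENT_TYPE DemoEntry
#define SH_KEY_TYPE int32
#define SH_KEY key
#define SH_HASH_KEY(tb, k) ((uint32) (k) * 2654435761U)
#define SH_EQUAL(tb, a, b) ((a) == (b))
#define SH_SCOPE static inline
#define SH_DECLARE
#define SH_DEFINE
#include "lib/simplehash.h"

/* The include above generates demohash_create(), demohash_insert(),
 * demohash_lookup(), demohash_delete(), and so on. */
```
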
9b39b799db
Add some not null constraints to catalogs
Use BKI_FORCE_NOT_NULL on some catalog field declarations that are never null (according to the source code that accesses them). |
7 years ago

a556549d7e
Improve VACUUM and ANALYZE by avoiding early lock queue
A caller of VACUUM can perform early lookup obtention which can cause other sessions to block on the request done, causing potentially DOS attacks as even a non-privileged user can attempt a vacuum fill of a critical catalog table to block even all incoming connection attempts. Contrary to TRUNCATE, a client could attempt a system-wide VACUUM after building the list of relations to VACUUM, which can cause vacuum_rel() or analyze_rel() to try to lock the relation but the operation would just block. When the client specifies a list of relations and the relation needs to be skipped, ownership checks are done when building the list of relations to work on, preventing a later lock attempt. vacuum_rel() already had the sanity checks needed, except that those were applied too late. This commit refactors the code so as relation skips are checked beforehand, making it safer to avoid too early locks, for both manual VACUUM with and without a list of relations specified. An isolation test is added emulating the fact that early locks do not happen anymore, issuing a WARNING message earlier if the user calling VACUUM is not a relation owner. When a partitioned table is listed in a manual VACUUM or ANALYZE command, its full list of partitions is fetched, all partitions get added to the list to work on, and then each one of them is processed one by one, with ownership checks happening at the later phase of vacuum_rel() or analyze_rel(). Trying to do early ownership checks for each partition is proving to be tedious as this would result in deadlock risks with lock upgrades, and skipping all partitions if the listed partitioned table is not owned would result in a behavior change compared to how Postgres 10 has implemented vacuum for partitioned tables. The original problem reported related to early lock queue for critical relations is fixed anyway, so priority is given to avoiding a backward-incompatible behavior. Reported-by: Lloyd Albin, Jeremy Schneider Author: Michael Paquier Reviewed by: Nathan Bossart, Kyotaro Horiguchi Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org Discussion: https://postgr.es/m/20180812222142.GA6097@paquier.xyz |
7 years ago

e75733d46c
Comment fix for rewriteheap.h.
The description of the filename for mapping files did not match the code. |
7 years ago

8ecdefc261
Remove test for VA_ARGS, implied by C99.
This simplifies logic / reduces duplication in a few headers. Author: Andres Freund Discussion: https://postgr.es/m/97d4b165-192d-3605-749c-f614a0c4e783@2ndquadrant.com |
7 years ago

cb92520563
LLVMJIT: LLVMGetHostCPUFeatures is now upstream, use LLVM version if available.
Noticed thanks to buildfarm animal seawasp. Author: Andres Freund Backpatch: v11-, where LLVM-based JIT compilation was introduced.
7 years ago

88ebd62fcc
Deduplicate code between slot_getallattrs() and slot_getsomeattrs().
Code in slot_getallattrs() is the same as if slot_getsomeattrs() is called with number of attributes specified in the tuple descriptor. Implement it that way instead of duplicating the code between those two functions. This is part of a patchseries abstracting TupleTableSlots so they can store arbitrary forms of tuples, but is a nice enough cleanup on its own. Author: Ashutosh Bapat Reviewed-By: Andres Freund Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de |
7 years ago
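
Illustrative aside for 88ebd62fcc: the deduplication boils down to expressing "fetch all attributes" as "fetch the first natts attributes". A sketch (the function name is invented; the real code lives in execTuples.c):

```c
#include "postgres.h"

#include "executor/tuptable.h"

/* Equivalent of slot_getallattrs(), expressed via slot_getsomeattrs(). */
static void
slot_getallattrs_sketch(TupleTableSlot *slot)
{
    slot_getsomeattrs(slot, slot->tts_tupleDescriptor->natts);
}
```
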
b19495772e
doc: Update uses of the word "procedure"
Historically, the term procedure was used as a synonym for function in Postgres/PostgreSQL. Now we have procedures as separate objects from functions, so we need to clean up the documentation to not mix those terms. In particular, mentions of "trigger procedures" are changed to "trigger functions", and access method "support procedures" are changed to "support functions". (The latter already used FUNCTION in the SQL syntax anyway.) Also, the terminology in the SPI chapter has been cleaned up. A few tests, examples, and code comments are also adjusted to be consistent with documentation changes, but not everything. Reported-by: Peter Geoghegan <pg@bowt.ie> Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com> |
7 years ago

6771c932cf
Ensure schema qualification in pg_restore DISABLE/ENABLE TRIGGER commands.
Previously, this code blindly followed the common coding pattern of passing PQserverVersion(AH->connection) as the server-version parameter of fmtQualifiedId. That works as long as we have a connection; but in pg_restore with text output, we don't. Instead we got a zero from PQserverVersion, which fmtQualifiedId interpreted as "server is too old to have schemas", and so the name went unqualified. That still accidentally managed to work in many cases, which is probably why this ancient bug went undetected for so long. It only became obvious in the wake of the changes to force dump/restore to execute with restricted search_path. In HEAD/v11, let's deal with this by ripping out fmtQualifiedId's server- version behavioral dependency, and just making it schema-qualify all the time. We no longer support pg_dump from servers old enough to need the ability to omit schema name, let alone restoring to them. (Also, the few callers outside pg_dump already didn't work with pre-schema servers.) In older branches, that's not an acceptable solution, so instead just tweak the DISABLE/ENABLE TRIGGER logic to ensure it will schema-qualify its output regardless of server version. Per bug #15338 from Oleg somebody. Back-patch to all supported branches. Discussion: https://postgr.es/m/153452458706.1316.5328079417086507743@wrigleys.postgresql.org |
7 years ago

c4c3400885
Use the built-in float datatypes to implement geometric types
This patch makes the geometric operators and functions use the exported function of the float4/float8 datatypes. The main reason of doing so is to check for underflow and overflow, and to handle NaNs consciously. The float datatypes consider NaNs values to be equal and greater than all non-NaN values. This change considers NaNs equal only for equality operators. The placement operators, contains, overlaps, left/right of etc. continue to return false when NaNs are involved. We don't need to worry about them being considered greater than any-NaN because there aren't any basic comparison operators like less/greater than for the geometric datatypes. The changes may be summarised as: * Check for underflow, overflow and division by zero * Consider NaN values to be equal * Return NULL when the distance is NaN for all closest point operators * Favour not-NaN over NaN where it makes sense The patch also replaces all occurrences of "double" as "float8". They are the same, but were used inconsistently in the same file. Author: Emre Hasegeli Reviewed-by: Kyotaro Horiguchi, Tomas Vondra Discussion: https://www.postgresql.org/message-id/CAE2gYzxF7-5djV6-cEvqQu-fNsnt%3DEqbOURx7ZDg%2BVv6ZMTWbg%40mail.gmail.com |
7 years ago
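
Illustrative aside for c4c3400885: two of the behaviors described above, overflow checking and NaN-aware equality, look roughly like this. The sketch is stand-alone C with invented helper names; in the tree the work is done by the exported float8 helper functions.

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

static double
checked_add(double a, double b)
{
    double      result = a + b;

    /* Finite inputs producing a non-finite result means overflow. */
    if (isinf(result) && !isinf(a) && !isinf(b))
        fprintf(stderr, "value out of range: overflow\n");
    return result;
}

static bool
nan_aware_equal(double a, double b)
{
    /* Equality treats NaN as equal to NaN, matching the float8 datatype. */
    if (isnan(a) || isnan(b))
        return isnan(a) && isnan(b);
    return a == b;
}

int
main(void)
{
    printf("%d\n", nan_aware_equal(NAN, NAN));  /* prints 1 */
    printf("%g\n", checked_add(1e308, 1e308));  /* reports overflow */
    return 0;
}
```
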
02dc7466ba
Remove duplicate function declarations.
Christoph Berg Discussion: https://postgr.es/m/20180814165536.GB21152@msg.df7cb.de |
7 years ago

351855fc4e
Remove obsolete linux dynloader code
This has been obsolete probably since the late 1990s. |
7 years ago

246a6c8f7b
Make autovacuum more aggressive to remove orphaned temp tables
Commit
7 years ago

07172d5aff
Avoid query-lifetime memory leaks in XMLTABLE (bug #15321)
Multiple calls to XMLTABLE in a query (e.g. laterally applying it to a table with an xml column, an important use-case) were leaking large amounts of memory into the per-query context, blowing up memory usage. Repair by reorganizing memory context usage in nodeTableFuncscan; use the usual per-tuple context for row-by-row evaluations instead of perValueCxt, and use the explicitly created context -- renamed from perValueCxt to perTableCxt -- for arguments and state for each individual table-generation operation. Backpatch to PG10 where this code was introduced. Original report by IRC user begriffs; analysis and patch by me. Reviewed by Tom Lane and Pavel Stehule. Discussion: https://postgr.es/m/153394403528.10284.7530399040974170549@wrigleys.postgresql.org |
7 years ago

46b5e7c4b5
Revert "Distinguish printf-like functions that support %m from those that don't."
This reverts commit
7 years ago

a2a8acd152
Produce compiler errors if errno is referenced inside elog/ereport calls.
It's often unsafe to reference errno within an elog/ereport call, because there are a lot of sub-functions involved and they might not all preserve errno. (This is why we support the %m format spec: it works off a value of errno captured before we execute any potentially-unsafe functions in the arguments.) Therefore, we have a project policy not to use errno there. This patch adds a hack to cause an (admittedly obscure) compiler error for such unsafe usages. With the current code, the error will only be seen on Linux, macOS, and FreeBSD, but that should certainly be enough to catch mistakes in the buildfarm if they somehow get missed earlier. In addition, fix some places in src/common/exec.c that trip the error. I think these places are actually all safe, but it's simple enough to avoid the error by capturing errno manually, and doing so is good future-proofing in case these call sites get any more complicated. Thomas Munro (exec.c fixes by me) Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us |
7 years ago
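
Illustrative aside for a2a8acd152: the two safe patterns are to let %m pick up the saved errno, or to capture errno yourself before building the report. A sketch (the functions and messages are invented; the ereport/elog calls use the normal error-reporting API):

```c
#include "postgres.h"

#include <errno.h>
#include <string.h>

static void
report_open_failure(const char *path)
{
    /* Preferred: %m formats errno as captured by the ereport machinery,
     * so the arguments never reference errno directly. */
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not open file \"%s\": %m", path)));
}

static void
remember_errno_first(const char *path)
{
    /* Alternative (as done in src/common/exec.c): save errno explicitly
     * before evaluating anything else. */
    int         save_errno = errno;

    elog(LOG, "could not stat \"%s\": %s", path, strerror(save_errno));
}
```
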
3a60c8ff89
Distinguish printf-like functions that support %m from those that don't.
The elog/ereport family of functions certainly support the %m format spec,
because they implement it "by hand". But elsewhere we have printf wrappers
that might or might not allow it depending on whether the platform's printf
does. (Most non-glibc versions don't, and notably, src/port/snprintf.c
doesn't.) Hence, rather than using the gnu_printf format archetype
interchangeably for all these functions, use it only for elog/ereport.
This will allow us to get compiler warnings for mistakes like the ones
fixed in commit
7 years ago

4974d7f87e
Handle parallel index builds on mapped relations.
Commit |
7 years ago