mirror of https://github.com/postgres/postgres
Tree: 552fcebff0

Branches:
REL2_0B
REL6_4
REL6_5_PATCHES
REL7_0_PATCHES
REL7_1_STABLE
REL7_2_STABLE
REL7_3_STABLE
REL7_4_STABLE
REL8_0_STABLE
REL8_1_STABLE
REL8_2_STABLE
REL8_3_STABLE
REL8_4_STABLE
REL8_5_ALPHA1_BRANCH
REL8_5_ALPHA2_BRANCH
REL8_5_ALPHA3_BRANCH
REL9_0_ALPHA4_BRANCH
REL9_0_ALPHA5_BRANCH
REL9_0_STABLE
REL9_1_STABLE
REL9_2_STABLE
REL9_3_STABLE
REL9_4_STABLE
REL9_5_STABLE
REL9_6_STABLE
REL_10_STABLE
REL_11_STABLE
REL_12_STABLE
REL_13_STABLE
REL_14_STABLE
REL_15_STABLE
REL_16_STABLE
REL_17_STABLE
REL_18_STABLE
Release_1_0_3
WIN32_DEV
ecpg_big_bison
master

Tags:
PG95-1_01
PG95-1_08
PG95-1_09
REL2_0
REL6_1
REL6_1_1
REL6_2
REL6_2_1
REL6_3
REL6_3_2
REL6_4_2
REL6_5
REL6_5_1
REL6_5_2
REL6_5_3
REL7_0
REL7_0_2
REL7_0_3
REL7_1
REL7_1_1
REL7_1_2
REL7_1_3
REL7_1_BETA
REL7_1_BETA2
REL7_1_BETA3
REL7_2
REL7_2_1
REL7_2_2
REL7_2_3
REL7_2_4
REL7_2_5
REL7_2_6
REL7_2_7
REL7_2_8
REL7_2_BETA1
REL7_2_BETA2
REL7_2_BETA3
REL7_2_BETA4
REL7_2_BETA5
REL7_2_RC1
REL7_2_RC2
REL7_3
REL7_3_1
REL7_3_10
REL7_3_11
REL7_3_12
REL7_3_13
REL7_3_14
REL7_3_15
REL7_3_16
REL7_3_17
REL7_3_18
REL7_3_19
REL7_3_2
REL7_3_20
REL7_3_21
REL7_3_3
REL7_3_4
REL7_3_5
REL7_3_6
REL7_3_7
REL7_3_8
REL7_3_9
REL7_4
REL7_4_1
REL7_4_10
REL7_4_11
REL7_4_12
REL7_4_13
REL7_4_14
REL7_4_15
REL7_4_16
REL7_4_17
REL7_4_18
REL7_4_19
REL7_4_2
REL7_4_20
REL7_4_21
REL7_4_22
REL7_4_23
REL7_4_24
REL7_4_25
REL7_4_26
REL7_4_27
REL7_4_28
REL7_4_29
REL7_4_3
REL7_4_30
REL7_4_4
REL7_4_5
REL7_4_6
REL7_4_7
REL7_4_8
REL7_4_9
REL7_4_BETA1
REL7_4_BETA2
REL7_4_BETA3
REL7_4_BETA4
REL7_4_BETA5
REL7_4_RC1
REL7_4_RC2
REL8_0_0
REL8_0_0BETA1
REL8_0_0BETA2
REL8_0_0BETA3
REL8_0_0BETA4
REL8_0_0BETA5
REL8_0_0RC1
REL8_0_0RC2
REL8_0_0RC3
REL8_0_0RC4
REL8_0_0RC5
REL8_0_1
REL8_0_10
REL8_0_11
REL8_0_12
REL8_0_13
REL8_0_14
REL8_0_15
REL8_0_16
REL8_0_17
REL8_0_18
REL8_0_19
REL8_0_2
REL8_0_20
REL8_0_21
REL8_0_22
REL8_0_23
REL8_0_24
REL8_0_25
REL8_0_26
REL8_0_3
REL8_0_4
REL8_0_5
REL8_0_6
REL8_0_7
REL8_0_8
REL8_0_9
REL8_1_0
REL8_1_0BETA1
REL8_1_0BETA2
REL8_1_0BETA3
REL8_1_0BETA4
REL8_1_0RC1
REL8_1_1
REL8_1_10
REL8_1_11
REL8_1_12
REL8_1_13
REL8_1_14
REL8_1_15
REL8_1_16
REL8_1_17
REL8_1_18
REL8_1_19
REL8_1_2
REL8_1_20
REL8_1_21
REL8_1_22
REL8_1_23
REL8_1_3
REL8_1_4
REL8_1_5
REL8_1_6
REL8_1_7
REL8_1_8
REL8_1_9
REL8_2_0
REL8_2_1
REL8_2_10
REL8_2_11
REL8_2_12
REL8_2_13
REL8_2_14
REL8_2_15
REL8_2_16
REL8_2_17
REL8_2_18
REL8_2_19
REL8_2_2
REL8_2_20
REL8_2_21
REL8_2_22
REL8_2_23
REL8_2_3
REL8_2_4
REL8_2_5
REL8_2_6
REL8_2_7
REL8_2_8
REL8_2_9
REL8_2_BETA1
REL8_2_BETA2
REL8_2_BETA3
REL8_2_RC1
REL8_3_0
REL8_3_1
REL8_3_10
REL8_3_11
REL8_3_12
REL8_3_13
REL8_3_14
REL8_3_15
REL8_3_16
REL8_3_17
REL8_3_18
REL8_3_19
REL8_3_2
REL8_3_20
REL8_3_21
REL8_3_22
REL8_3_23
REL8_3_3
REL8_3_4
REL8_3_5
REL8_3_6
REL8_3_7
REL8_3_8
REL8_3_9
REL8_3_BETA1
REL8_3_BETA2
REL8_3_BETA3
REL8_3_BETA4
REL8_3_RC1
REL8_3_RC2
REL8_4_0
REL8_4_1
REL8_4_10
REL8_4_11
REL8_4_12
REL8_4_13
REL8_4_14
REL8_4_15
REL8_4_16
REL8_4_17
REL8_4_18
REL8_4_19
REL8_4_2
REL8_4_20
REL8_4_21
REL8_4_22
REL8_4_3
REL8_4_4
REL8_4_5
REL8_4_6
REL8_4_7
REL8_4_8
REL8_4_9
REL8_4_BETA1
REL8_4_BETA2
REL8_4_RC1
REL8_4_RC2
REL8_5_ALPHA1
REL8_5_ALPHA2
REL8_5_ALPHA3
REL9_0_0
REL9_0_1
REL9_0_10
REL9_0_11
REL9_0_12
REL9_0_13
REL9_0_14
REL9_0_15
REL9_0_16
REL9_0_17
REL9_0_18
REL9_0_19
REL9_0_2
REL9_0_20
REL9_0_21
REL9_0_22
REL9_0_23
REL9_0_3
REL9_0_4
REL9_0_5
REL9_0_6
REL9_0_7
REL9_0_8
REL9_0_9
REL9_0_ALPHA4
REL9_0_ALPHA5
REL9_0_BETA1
REL9_0_BETA2
REL9_0_BETA3
REL9_0_BETA4
REL9_0_RC1
REL9_1_0
REL9_1_1
REL9_1_10
REL9_1_11
REL9_1_12
REL9_1_13
REL9_1_14
REL9_1_15
REL9_1_16
REL9_1_17
REL9_1_18
REL9_1_19
REL9_1_2
REL9_1_20
REL9_1_21
REL9_1_22
REL9_1_23
REL9_1_24
REL9_1_3
REL9_1_4
REL9_1_5
REL9_1_6
REL9_1_7
REL9_1_8
REL9_1_9
REL9_1_ALPHA1
REL9_1_ALPHA2
REL9_1_ALPHA3
REL9_1_ALPHA4
REL9_1_ALPHA5
REL9_1_BETA1
REL9_1_BETA2
REL9_1_BETA3
REL9_1_RC1
REL9_2_0
REL9_2_1
REL9_2_10
REL9_2_11
REL9_2_12
REL9_2_13
REL9_2_14
REL9_2_15
REL9_2_16
REL9_2_17
REL9_2_18
REL9_2_19
REL9_2_2
REL9_2_20
REL9_2_21
REL9_2_22
REL9_2_23
REL9_2_24
REL9_2_3
REL9_2_4
REL9_2_5
REL9_2_6
REL9_2_7
REL9_2_8
REL9_2_9
REL9_2_BETA1
REL9_2_BETA2
REL9_2_BETA3
REL9_2_BETA4
REL9_2_RC1
REL9_3_0
REL9_3_1
REL9_3_10
REL9_3_11
REL9_3_12
REL9_3_13
REL9_3_14
REL9_3_15
REL9_3_16
REL9_3_17
REL9_3_18
REL9_3_19
REL9_3_2
REL9_3_20
REL9_3_21
REL9_3_22
REL9_3_23
REL9_3_24
REL9_3_25
REL9_3_3
REL9_3_4
REL9_3_5
REL9_3_6
REL9_3_7
REL9_3_8
REL9_3_9
REL9_3_BETA1
REL9_3_BETA2
REL9_3_RC1
REL9_4_0
REL9_4_1
REL9_4_10
REL9_4_11
REL9_4_12
REL9_4_13
REL9_4_14
REL9_4_15
REL9_4_16
REL9_4_17
REL9_4_18
REL9_4_19
REL9_4_2
REL9_4_20
REL9_4_21
REL9_4_22
REL9_4_23
REL9_4_24
REL9_4_25
REL9_4_26
REL9_4_3
REL9_4_4
REL9_4_5
REL9_4_6
REL9_4_7
REL9_4_8
REL9_4_9
REL9_4_BETA1
REL9_4_BETA2
REL9_4_BETA3
REL9_4_RC1
REL9_5_0
REL9_5_1
REL9_5_10
REL9_5_11
REL9_5_12
REL9_5_13
REL9_5_14
REL9_5_15
REL9_5_16
REL9_5_17
REL9_5_18
REL9_5_19
REL9_5_2
REL9_5_20
REL9_5_21
REL9_5_22
REL9_5_23
REL9_5_24
REL9_5_25
REL9_5_3
REL9_5_4
REL9_5_5
REL9_5_6
REL9_5_7
REL9_5_8
REL9_5_9
REL9_5_ALPHA1
REL9_5_ALPHA2
REL9_5_BETA1
REL9_5_BETA2
REL9_5_RC1
REL9_6_0
REL9_6_1
REL9_6_10
REL9_6_11
REL9_6_12
REL9_6_13
REL9_6_14
REL9_6_15
REL9_6_16
REL9_6_17
REL9_6_18
REL9_6_19
REL9_6_2
REL9_6_20
REL9_6_21
REL9_6_22
REL9_6_23
REL9_6_24
REL9_6_3
REL9_6_4
REL9_6_5
REL9_6_6
REL9_6_7
REL9_6_8
REL9_6_9
REL9_6_BETA1
REL9_6_BETA2
REL9_6_BETA3
REL9_6_BETA4
REL9_6_RC1
REL_10_0
REL_10_1
REL_10_10
REL_10_11
REL_10_12
REL_10_13
REL_10_14
REL_10_15
REL_10_16
REL_10_17
REL_10_18
REL_10_19
REL_10_2
REL_10_20
REL_10_21
REL_10_22
REL_10_23
REL_10_3
REL_10_4
REL_10_5
REL_10_6
REL_10_7
REL_10_8
REL_10_9
REL_10_BETA1
REL_10_BETA2
REL_10_BETA3
REL_10_BETA4
REL_10_RC1
REL_11_0
REL_11_1
REL_11_10
REL_11_11
REL_11_12
REL_11_13
REL_11_14
REL_11_15
REL_11_16
REL_11_17
REL_11_18
REL_11_19
REL_11_2
REL_11_20
REL_11_21
REL_11_22
REL_11_3
REL_11_4
REL_11_5
REL_11_6
REL_11_7
REL_11_8
REL_11_9
REL_11_BETA1
REL_11_BETA2
REL_11_BETA3
REL_11_BETA4
REL_11_RC1
REL_12_0
REL_12_1
REL_12_10
REL_12_11
REL_12_12
REL_12_13
REL_12_14
REL_12_15
REL_12_16
REL_12_17
REL_12_18
REL_12_19
REL_12_2
REL_12_20
REL_12_21
REL_12_22
REL_12_3
REL_12_4
REL_12_5
REL_12_6
REL_12_7
REL_12_8
REL_12_9
REL_12_BETA1
REL_12_BETA2
REL_12_BETA3
REL_12_BETA4
REL_12_RC1
REL_13_0
REL_13_1
REL_13_10
REL_13_11
REL_13_12
REL_13_13
REL_13_14
REL_13_15
REL_13_16
REL_13_17
REL_13_18
REL_13_19
REL_13_2
REL_13_20
REL_13_21
REL_13_22
REL_13_23
REL_13_3
REL_13_4
REL_13_5
REL_13_6
REL_13_7
REL_13_8
REL_13_9
REL_13_BETA1
REL_13_BETA2
REL_13_BETA3
REL_13_RC1
REL_14_0
REL_14_1
REL_14_10
REL_14_11
REL_14_12
REL_14_13
REL_14_14
REL_14_15
REL_14_16
REL_14_17
REL_14_18
REL_14_19
REL_14_2
REL_14_20
REL_14_3
REL_14_4
REL_14_5
REL_14_6
REL_14_7
REL_14_8
REL_14_9
REL_14_BETA1
REL_14_BETA2
REL_14_BETA3
REL_14_RC1
REL_15_0
REL_15_1
REL_15_10
REL_15_11
REL_15_12
REL_15_13
REL_15_14
REL_15_15
REL_15_2
REL_15_3
REL_15_4
REL_15_5
REL_15_6
REL_15_7
REL_15_8
REL_15_9
REL_15_BETA1
REL_15_BETA2
REL_15_BETA3
REL_15_BETA4
REL_15_RC1
REL_15_RC2
REL_16_0
REL_16_1
REL_16_10
REL_16_11
REL_16_2
REL_16_3
REL_16_4
REL_16_5
REL_16_6
REL_16_7
REL_16_8
REL_16_9
REL_16_BETA1
REL_16_BETA2
REL_16_BETA3
REL_16_RC1
REL_17_0
REL_17_1
REL_17_2
REL_17_3
REL_17_4
REL_17_5
REL_17_6
REL_17_7
REL_17_BETA1
REL_17_BETA2
REL_17_BETA3
REL_17_RC1
REL_18_0
REL_18_1
REL_18_BETA1
REL_18_BETA2
REL_18_BETA3
REL_18_RC1
Release_1_0_2
Release_2_0
Release_2_0_0
release-6-3
5281 Commits (552fcebff04699103cefd2efa71fae1274893dbe)
ce77abe63c (6 years ago)
Include information on buffer usage during planning phase, in EXPLAIN output, take two.
When the BUFFERS option is enabled, the EXPLAIN command includes information on buffer usage for each plan node in its output. In addition to that, this commit makes EXPLAIN also include information on buffer usage during the planning phase. This feature makes it easier to discern cases where a lot of buffer access happens during planning. This commit revives the original commit

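A minimal illustration of the behavior described above; the query is arbitrary and only serves to produce a plan.

```sql
-- With BUFFERS, EXPLAIN output also reports buffers touched while the plan
-- was being built, in addition to the per-node counters.
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;
```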
e41955faf0 (6 years ago)
Fix bugs in gin_fuzzy_search_limit processing.
entryGetItem()'s three code paths each contained bugs associated with filtering the entries for gin_fuzzy_search_limit. The posting-tree path failed to advance "advancePast" after having decided to filter an item. If we ran out of items on the current page and needed to advance to the next, what would actually happen is that entryLoadMoreItems() would re-load the same page. Eventually, the random dropItem() test would accept one of the same items it'd previously rejected, and we'd move on --- but it could take a while with small gin_fuzzy_search_limit. To add insult to injury, this case would inevitably cause entryLoadMoreItems() to decide it needed to re-descend from the root, making things even slower. The posting-list path failed to implement gin_fuzzy_search_limit filtering at all, so that all entries in the posting list would be returned. The bitmap-result path used a "gotitem" variable that it failed to update in the one place where it'd actually make a difference, ie at the one "continue" statement. I think this was unreachable in practice, because if we'd looped around then it shouldn't be the case that the entries on the new page are before advancePast. Still, the "gotitem" variable was contributing nothing to either clarity or correctness, so get rid of it. Refactor all three loops so that the termination conditions are more alike and less unreadable. The code coverage report showed that we had no coverage at all for the re-descend-from-root code path in entryLoadMoreItems(), which seems like a very bad thing, so add a test case that exercises it. We also had exactly no coverage for gin_fuzzy_search_limit, so add a simplistic test case that at least hits those code paths a little bit.
Back-patch to all supported branches.
Adé Heyward and Tom Lane
Discussion: https://postgr.es/m/CAEknJCdS-dE1Heddptm7ay2xTbSeADbkaQ8bU2AXRCVC2LdtKQ@mail.gmail.com

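For context, a sketch of how the setting involved in this fix is typically exercised; the table, index and query are made up.

```sql
-- gin_fuzzy_search_limit caps (approximately, by random sampling) how many
-- items a GIN index scan returns; 0 (the default) disables the limit.
CREATE TABLE docs (body tsvector);
CREATE INDEX docs_body_idx ON docs USING gin (body);
SET gin_fuzzy_search_limit = 100;
SELECT count(*) FROM docs WHERE body @@ to_tsquery('english', 'postgres');
```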
c0885c4c30 (6 years ago)
Improve stability of explain regression test.
The explain regression test runs EXPLAIN commands via the function that filters unstable outputs. To produce more stable test output, this commit improves the function so that it also filters out text-mode Buffers lines. This is necessary because text-mode Buffers lines vary depending on the system state.
This improvement will get rid of the regression test failure that the commit

19db23bcbd (6 years ago)
Revert "Include information on buffer usage during planning phase, in EXPLAIN output."
This reverts commit

ed7a509571 (6 years ago)
Include information on buffer usage during planning phase, in EXPLAIN output.
When the BUFFERS option is enabled, the EXPLAIN command includes information on buffer usage for each plan node in its output. In addition to that, this commit makes EXPLAIN also include information on buffer usage during the planning phase. This feature makes it easier to discern cases where a lot of buffer access happens during planning.
Author: Julien Rouhaud, slightly revised by Fujii Masao
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org

7cb0a423f9 (6 years ago)
Improve stability fix for partition_aggregate test.
Instead of disabling autovacuum on these test tables, adjust the partition boundaries so that the child partitions are not all the same size. That should cause the planner to use a predictable ordering of the per-partition scan nodes even in cases where autovacuum causes the rowcount estimates to be off a bit. Moreover, this also lets these tests show that the planner does properly order the tables in descending size order, something that wasn't being proven before. The pagg_tab1 and pagg_tab2 partitions are still all the same size, but that should be fine, because those tables are so small that (1) autovacuum won't fire on them, and (2) even if it did, it couldn't change the reltuples value --- with only one page, it can't see just part of the relation.
Discussion: https://postgr.es/m/24467.1585838693@sss.pgh.pa.us

cefb82d49e (6 years ago)
Attempt to stabilize partitionwise_aggregate test
In

2991ac5fc9 (6 years ago)
Add SQL functions for Unicode normalization
This adds SQL expressions NORMALIZE() and IS NORMALIZED to convert and check Unicode normal forms, per SQL standard. To support fast IS NORMALIZED tests, we pull in a new data file DerivedNormalizationProps.txt from Unicode and build a lookup table from that, using techniques similar to ones already used for other Unicode data. make update-unicode will keep it up to date. We only build and use these tables for the NFC and NFKC forms, because they are too big for NFD and NFKD and the improvement is not significant enough there.
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/c1909f27-c269-2ed9-12f8-3ab72c8caf7a@2ndquadrant.com

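A quick sketch of the new expressions, using a combining-accent example; results assume a UTF-8 database.

```sql
-- U&'\0061\0301' is 'a' plus a combining acute accent; NFC folds it into
-- the single precomposed character U&'\00E9'.
SELECT normalize(U&'\0061\0301', NFC) = U&'\00E9' AS same;
SELECT U&'\0061\0301' IS NFC NORMALIZED AS already_nfc;   -- false
```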
28cac71bd3 (6 years ago)
Collect statistics about SLRU caches
There are a number of SLRU caches used to access important data like clog, commit timestamps, multixact, asynchronous notifications, etc. Until now we had no easy way to monitor these shared caches, compute hit ratios, number of reads/writes etc. This commit extends the statistics collector to track this information for a predefined list of SLRUs, and also introduces a new system view pg_stat_slru displaying the data. The list of built-in SLRUs is fixed, but additional SLRUs may be defined in extensions. Unfortunately, there's no suitable registry of SLRUs, so this patch simply defines a fixed list of SLRUs with entries for the built-in ones and one entry for all additional SLRUs. Extensions adding their own SLRU are fairly rare, so this seems acceptable. This patch only allows monitoring of SLRUs, not tuning. The SLRU sizes are still fixed (hard-coded in the code) and it's not entirely clear which of the SLRUs might need a GUC to tune size. In a way, allowing us to determine that is one of the goals of this patch. Bump catversion as the patch introduces new functions and system view.
Author: Tomas Vondra
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/flat/20200119143707.gyinppnigokesjok@development

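A sketch of how the new view might be queried; the column names follow the documentation for the release that introduced it.

```sql
-- Rough hit ratio per SLRU; NULLIF guards against division by zero.
SELECT name,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / NULLIF(blks_hit + blks_read, 0), 2) AS hit_ratio
FROM pg_stat_slru
ORDER BY name;
```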
0936d1b6ff (6 years ago)
Still another try at stabilizing stats_ext test results.
The stats_ext test is not expecting that autovacuum will touch any of its tables; an expectation falsified by commit

02a5786df2 (6 years ago)
Improve error reporting in opclasscmds.c
This commit improves error reporting introduced by

de3bbfcc96 (6 years ago)
Fix INSERT OVERRIDING USER VALUE behavior
The original implementation disallowed using OVERRIDING USER VALUE on identity columns defined as GENERATED ALWAYS, which is not per standard. So allow that now. Expand documentation and tests around this.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://www.postgresql.org/message-id/flat/CAEZATCVrh2ufCwmzzM%3Dk_OfuLhTTPBJCdFkimst2kry4oHepuQ%40mail.gmail.com

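A sketch of the now-permitted behavior; the table and values are made up.

```sql
CREATE TABLE orders (id int GENERATED ALWAYS AS IDENTITY, label text);
-- Previously rejected for GENERATED ALWAYS identity columns; now the
-- user-supplied 42 is simply overridden by the next sequence value.
INSERT INTO orders (id, label) OVERRIDING USER VALUE VALUES (42, 'widget');
```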
911e702077 (6 years ago)
Implement operator class parameters
PostgreSQL provides a set of template index access methods, where opclasses have much freedom in the semantics of indexing. These index AMs are GiST, GIN, SP-GiST and BRIN. In these AMs, opclasses define the representation of keys, operations on them, and the supported search strategies. So it's natural that opclasses may be faced with tradeoffs that require a user-side decision. This commit implements opclass parameters, allowing users to set values that tell the opclass how to index the particular dataset.
This commit doesn't introduce new storage in the system catalog. Instead it uses pg_attribute.attoptions, which is used for table column storage options but was unused for index attributes. In order to avoid changing the signature of each opclass support function, we implement a unified way to pass options to opclass support functions: options are set in fn_expr as a constant bytea expression. This is possible because opclass support functions are executed outside of expressions, so fn_expr is unused for them.
This commit comes with some examples of opclass options usage. We parametrize the signature length in GiST, which applies to multiple opclasses: tsvector_ops, gist__intbig_ops, gist_ltree_ops, gist__ltree_ops, gist_trgm_ops and gist_hstore_ops. We also parametrize the maximum number of integer ranges for gist__int_ops. However, the main future usage of this feature is expected to be JSON, where users would be able to specify which way to index particular JSON parts. Catversion is bumped.
Discussion: https://postgr.es/m/d22c3a18-31c7-1879-fc11-4c1ce2f5e5af%40postgrespro.ru
Author: Nikita Glukhov, revised by me
Reviewed-by: Nikolay Shaplov, Robert Haas, Tom Lane, Tomas Vondra, Alvaro Herrera

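One of the examples the commit mentions, sketched as SQL: the signature length of a pg_trgm GiST index becomes an opclass parameter. Table and index names are illustrative.

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE TABLE messages (body text);
-- siglen is the opclass parameter; larger signatures trade index size for precision.
CREATE INDEX messages_body_trgm ON messages
    USING gist (body gist_trgm_ops (siglen = 64));
```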
1d53432ff9 (6 years ago)
Allow using Unix-domain sockets on Windows in tests
The test suites currently don't use Unix-domain sockets on Windows. This optionally allows enabling that by setting the environment variable PG_TEST_USE_UNIX_SOCKETS. This should currently be considered experimental. In particular, pg_regress.c contains some comments that the cleanup code for Unix-domain sockets doesn't work correctly under Windows, which hasn't been a problem until now. But it's good enough for locally supervised testing of the functionality.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/54bde68c-d134-4eb8-5bd3-8af33b72a010@2ndquadrant.com

24566b359d (6 years ago)
Attempt to fix unstable regression tests, take 2
Following up on

9cedb16660 (6 years ago)
pg_regress: Observe TMPDIR
Put the temporary socket directory under TMPDIR, if that environment variable is set, instead of the hardcoded /tmp. This allows running the tests if there is no /tmp at all (for example on Windows, although running the tests with Unix-domain sockets is not enabled on Windows yet). We also use TMPDIR everywhere else /tmp is hardcoded, so this makes the behavior consistent.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/54bde68c-d134-4eb8-5bd3-8af33b72a010@2ndquadrant.com

2dc16efedc (6 years ago)
Attempt to fix unstable regression tests

4083f445c0 (6 years ago)
Improve the performance and accuracy of numeric sqrt() and ln().
Instead of using Newton's method to compute numeric square roots, use the Karatsuba square root algorithm, which performs better for numbers of all sizes. In practice, this is 3-5 times faster for inputs with just a few digits and up to around 10 times faster for larger inputs. Also, the new algorithm guarantees that the final digit of the result is correctly rounded, since it computes an integer square root with truncation, containing at least 1 extra decimal digit before rounding. The former algorithm would occasionally round the wrong way because it rounded both the intermediate and final results.
In addition, arrange for sqrt_var() to explicitly support negative rscale values (rounding before the decimal point). This allows the argument reduction phase of ln_var() to be optimised for large inputs, since it only needs to compute square roots with a few more digits than the final ln() result, rather than computing all the digits before the decimal point. For very large inputs, this can be many thousands of times faster. In passing, optimise div_var_fast() in a couple of places where it was doing unnecessary work.
Patch by me, reviewed by Tom Lane and Tels.
Discussion: https://postgr.es/m/CAEZATCV1A7+jD3P30Zu31KjaxeSEyOn3v9d6tYegpxcq3cQu-g@mail.gmail.com

8f3ec75de4 (6 years ago)
Enable Unix-domain sockets support on Windows
As of Windows 10 version 1803, Unix-domain sockets are supported on Windows. But it's not automatically detected by configure because it looks for struct sockaddr_un and Windows doesn't define that. So we just make our own definition on Windows and override the configure result.
Set DEFAULT_PGSOCKET_DIR to empty on Windows so by default no Unix-domain socket is used, because there is no good standard location.
In pg_upgrade, we have to do some extra tweaking to preserve the existing behavior of not using Unix-domain sockets on Windows. Adding support would be desirable, but it needs further work, in particular a way to select whether to use Unix-domain sockets from the command-line or with a run-time test.
The pg_upgrade test script needs a fix. The previous code passed "localhost" to postgres -k, which only happened to work because Windows used to ignore the -k argument value altogether. We instead need to pass an empty string to get the desired effect.
The test suites will continue to not use Unix-domain sockets on Windows. This requires a small tweak in pg_regress.c. The TAP tests don't need to be changed because they decide by the operating system rather than HAVE_UNIX_SOCKETS.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/54bde68c-d134-4eb8-5bd3-8af33b72a010@2ndquadrant.com

87779aa474 (6 years ago)
Prevent functional dependency estimates from exceeding column estimates.
Formerly we applied a functional dependency "a => b with dependency degree f" using the formula
    P(a,b) = P(a) * [f + (1-f)*P(b)]
This leads to the possibility that the combined selectivity P(a,b) could exceed P(b), which is not ideal. The addition of support for IN and OR clauses (commits

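A small worked illustration of the overshoot, using only the formula quoted above; the numbers are arbitrary and not from the commit.

```latex
% With a perfect dependency f = 1 and illustrative P(a) = 0.5, P(b) = 0.1:
P(a,b) = P(a)\,[\,f + (1-f)\,P(b)\,] = 0.5 \times 1 = 0.5 > 0.1 = P(b)
```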
b07642dbcd (6 years ago)
Trigger autovacuum based on number of INSERTs
Traditionally autovacuum has only ever invoked a worker based on the estimated number of dead tuples in a table and for anti-wraparound purposes. For the latter, with certain classes of tables such as insert-only tables, anti-wraparound vacuums could be the first vacuum that the table ever receives. This could often lead to autovacuum workers being busy for extended periods of time due to having to potentially freeze every page in the table. This could be particularly bad for very large tables. New clusters, or recently pg_restored clusters could suffer even more as many large tables may have the same relfrozenxid, which could result in large numbers of tables requiring an anti-wraparound vacuum all at once.
Here we aim to reduce the work required by anti-wraparound and aggressive vacuums in general, by triggering autovacuum when the table has received enough INSERTs. This is controlled by adding two new GUCs and reloptions; autovacuum_vacuum_insert_threshold and autovacuum_vacuum_insert_scale_factor. These work exactly the same as the existing scale factor and threshold controls, only they base themselves on the number of inserts since the last vacuum, rather than the number of dead tuples. New controls were added rather than reusing the existing controls, to allow these new vacuums to be tuned independently and perhaps even completely disabled altogether, which can be done by setting autovacuum_vacuum_insert_threshold to -1.
We make no attempt to skip index cleanup operations on these vacuums as they may trigger for an insert-mostly table which continually doesn't have enough dead tuples to trigger an autovacuum for the purpose of removing those dead tuples. If we were to skip cleaning the indexes in this case, then it is possible for the index(es) to become bloated over time.
There are additional benefits to triggering autovacuums based on inserts, as tables which never contain enough dead tuples to trigger an autovacuum are now more likely to receive a vacuum, which can mark more of the table as "allvisible" and encourage the query planner to make use of Index Only Scans.
Currently, we still obey vacuum_freeze_min_age when triggering these new autovacuums based on INSERTs. For large insert-only tables, it may be beneficial to lower the table's autovacuum_freeze_min_age so that tuples are eligible to be frozen sooner. Here we've opted not to zero that for these types of vacuums, since the table may just be insert-mostly and we may otherwise freeze tuples that are still destined to be updated or removed in the near future.
There was some debate as to what exactly the new scale factor and threshold should default to. For now, these are set to 0.2 and 1000, respectively. There may be some motivation to adjust these before the release.
Author: Laurenz Albe, Darafei Praliaskouski
Reviewed-by: Alvaro Herrera, Masahiko Sawada, Chris Travers, Andres Freund, Justin Pryzby
Discussion: https://postgr.es/m/CAC8Q8t%2Bj36G_bLF%3D%2B0iMo6jGNWnLnWb1tujXuJr-%2Bx8ZCCTqoQ%40mail.gmail.com

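A sketch of how the new settings can be applied per table; the table names and numbers are arbitrary, and the same parameters also exist as server-wide GUCs.

```sql
ALTER TABLE measurements SET (
    autovacuum_vacuum_insert_threshold    = 10000,
    autovacuum_vacuum_insert_scale_factor = 0.05
);
-- Setting the threshold to -1 disables insert-driven autovacuum for a table.
ALTER TABLE archive_log SET (autovacuum_vacuum_insert_threshold = -1);
```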
1e6148032e (6 years ago)
Allow walreceiver configuration to change on reload
The parameters primary_conninfo, primary_slot_name and wal_receiver_create_temp_slot can now be changed with a simple "reload" signal, no longer requiring a server restart. This is achieved by signalling the walreceiver process to terminate and having it start again with the new values.
Thanks to Andres Freund, Kyotaro Horiguchi, Fujii Masao for discussion.
Author: Sergei Kornilov <sk@zsrv.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/19513901543181143@sas1-19a94364928d.qloud-c.yandex.net

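A sketch of the workflow this enables on a standby; the connection details are placeholders.

```sql
-- Repoint the walreceiver without restarting the standby.
ALTER SYSTEM SET primary_conninfo = 'host=new-primary port=5432 user=replicator';
SELECT pg_reload_conf();  -- walreceiver restarts with the new settings
```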
958aa438aa (6 years ago)
Further fixes for ssl_passphrase_callback test module.
The Makefile should set TAP_TESTS = 1, not implement the infrastructure for itself. For one thing, it missed the appropriate "make clean" steps. For another, the buildfarm isn't running this test because it wasn't hooked into "make installcheck" either.

e984fb341f (6 years ago)
Don't listen to localhost in ssl_passphrase_callback test
Commit

13c98bdfc4 (6 years ago)
Fix assorted portability issues in commit 896fcdb23.
Some platforms require libssl to be linked explicitly in the new SSL test module. Borrow contrib/sslinfo's code for that. Since src/test/modules/Makefile now has a variable SUBDIRS list, it needs to follow the ALWAYS_SUBDIRS protocol for that (cf. comments in Makefile.global.in). Blindly try to fix MSVC build failures by adding PGDLLIMPORT.

896fcdb230 (6 years ago)
Provide a TLS init hook
The default hook function sets the default password callback function. In order to allow preloaded libraries to have an opportunity to override the default, TLS initialization is now delayed slightly until after shared preloaded libraries have been loaded. A test module is provided which contains a trivial example that decodes an obfuscated password for an SSL certificate.
Author: Andrew Dunstan
Reviewed By: Andreas Karlsson, Asaba Takanori
Discussion: https://postgr.es/m/04116472-818b-5859-1d74-3d995aab2252@2ndQuadrant.com

76df765e88 (6 years ago)
Reduce test time for disk-based Hash Aggregation.
Discussion: https://postgr.es/m/23196.1584943506@sss.pgh.pa.us

67e0adfb3f (6 years ago)
Report NULL as total backup size if it's not estimated.
Previously 0 was reported in pg_stat_progress_basebackup.total_backup if the total backup size was not estimated. Per discussion, our consensus is that NULL is a better choice as the value in total_backup in that case. So this commit makes the pg_stat_progress_basebackup view report NULL in the total_backup column if the estimation is disabled. Bump catversion.
Author: Fujii Masao
Reviewed-by: Amit Langote, Magnus Hagander, Alvaro Herrera
Discussion: https://postgr.es/m/CABUevExnhOD89zBDuPvfAAh243RzNpwCPEWNLtMYpKHMB8gbAQ@mail.gmail.com

f801ceb696 (6 years ago)
Add regression tests for constraint errors in partitioned tables.
While #16293 only applied to 11 (and 10 to some degree), it seems best to add tests to all branches with partitioning support.
Reported-By: Daniel WM
Author: Andres Freund
Bug: #16293
Discussion: https://postgr.es/m/16293-26f5777d10143a66@postgresql.org
Backpatch: 10-

79dfa8afb2 (6 years ago)
Add bound checks for ssl_min_protocol_version and ssl_max_protocol_version
Mixing incorrect bounds in the SSL context leads to confusing error messages generated by OpenSSL which are hard to act on. New range checks are added when both min/max parameters are loaded in the context of an SSL reload to improve the error reporting. Note that this does not make use of the GUC hook machinery contrary to

de9396326e (6 years ago)
Revert "Skip WAL for new relfilenodes, under wal_level=minimal."
This reverts commit

cb2fd7eac2 (6 years ago)
Skip WAL for new relfilenodes, under wal_level=minimal.
Until now, only selected bulk operations (e.g. COPY) did this. If a given relfilenode received both a WAL-skipping COPY and a WAL-logged operation (e.g. INSERT), recovery could lose tuples from the COPY. See src/backend/access/transam/README section "Skipping WAL for New RelFileNode" for the new coding rules. Maintainers of table access methods should examine that section.
To maintain data durability, just before commit, we choose between an fsync of the relfilenode and copying its contents to WAL. A new GUC, wal_skip_threshold, guides that choice. If this change slows a workload that creates small, permanent relfilenodes under wal_level=minimal, try adjusting wal_skip_threshold. Users setting a timeout on COMMIT may need to adjust that timeout, and log_min_duration_statement analysis will reflect time consumption moving to COMMIT from commands like COPY.
Internally, this requires a reliable determination of whether RollbackAndReleaseCurrentSubTransaction() would unlink a relation's current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the specification of rd_createSubid such that the field is zero when a new rel has an old rd_node. Make relcache.c retain entries for certain dropped relations until end of transaction.
Back-patch to 9.5 (all supported versions). This introduces a new WAL record type, XLOG_GIST_ASSIGN_LSN, without bumping XLOG_PAGE_MAGIC. As always, update standby systems before master systems. This changes sizeof(RelationData) and sizeof(IndexStmt), breaking binary compatibility for affected extensions. (The most recent commit to affect the same class of extensions was 089e4d405d0f3b94c74a2c6a54357a84a681754b.)
Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert Haas. Heikki Linnakangas and Michael Paquier implemented earlier designs that materially clarified the problem. Reviewed, in earlier designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane, Fujii Masao, and Simon Riggs.
Reported by Martijn van Oosterhout.
Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org

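A sketch of the tuning knob introduced here; the value is arbitrary and only matters under wal_level = minimal.

```sql
-- Per the description above: new relfilenodes smaller than this are copied
-- to WAL at commit, while larger ones are fsync'ed instead.
ALTER SYSTEM SET wal_skip_threshold = '4MB';
SELECT pg_reload_conf();
```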
24e2885ee3 (6 years ago)
Introduce "anycompatible" family of polymorphic types.
This patch adds the pseudo-types anycompatible, anycompatiblearray, anycompatiblenonarray, and anycompatiblerange. They work much like anyelement, anyarray, anynonarray, and anyrange respectively, except that the actual input values need not match precisely in type. Instead, if we can find a common supertype (using the same rules as for UNION/CASE type resolution), then the parser automatically promotes the input values to that type. For example, "myfunc(anycompatible, anycompatible)" can match a call with one integer and one bigint argument, with the integer automatically promoted to bigint. With anyelement in the definition, the user would have had to cast the integer explicitly.
The new types also provide a second, independent set of type variables for function matching; thus with "myfunc(anyelement, anyelement, anycompatible) returns anycompatible" the first two arguments are constrained to be the same type, but the third can be some other type, and the result has the type of the third argument. The need for more than one set of type variables was foreseen back when we first invented the polymorphic types, but we never did anything about it.
Pavel Stehule, revised a bit by me
Discussion: https://postgr.es/m/CAFj8pRDna7VqNi8gR+Tt2Ktmz0cq5G93guc3Sbn_NVPLdXAkqA@mail.gmail.com

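A sketch mirroring the myfunc example in the message above: both arguments are promoted to their common supertype, and the result takes that type.

```sql
CREATE FUNCTION myfunc(a anycompatible, b anycompatible)
    RETURNS anycompatible
    LANGUAGE sql
    AS 'SELECT greatest(a, b)';

SELECT myfunc(1, 2::bigint);   -- both arguments promoted to bigint
SELECT myfunc(1.5, 2);         -- both arguments promoted to numeric
```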
1f39bce021 (6 years ago)
Disk-based Hash Aggregation.
While performing hash aggregation, track memory usage when adding new groups to a hash table. If the memory usage exceeds work_mem, enter "spill mode". In spill mode, new groups are not created in the hash table(s), but existing groups continue to be advanced if input tuples match. Tuples that would cause a new group to be created are instead spilled to a logical tape to be processed later. The tuples are spilled in a partitioned fashion. When all tuples from the outer plan are processed (either by advancing the group or spilling the tuple), finalize and emit the groups from the hash table. Then, create new batches of work from the spilled partitions, and select one of the saved batches and process it (possibly spilling recursively).
Author: Jeff Davis
Reviewed-by: Tomas Vondra, Adam Lee, Justin Pryzby, Taylor Vesely, Melanie Plageman
Discussion: https://postgr.es/m/507ac540ec7c20136364b5272acbcd4574aa76ef.camel@j-davis.com

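A sketch of how the spilling behavior can be observed; the data volume and work_mem value are arbitrary, and whether a given run actually spills depends on group sizes.

```sql
SET work_mem = '1MB';
-- With many distinct groups the hash table may exceed work_mem, in which
-- case the aggregate spills batches to disk and EXPLAIN ANALYZE reports it.
EXPLAIN (ANALYZE, COSTS OFF)
SELECT g % 100000 AS k, count(*)
FROM generate_series(1, 1000000) AS g
GROUP BY k;
```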
487e9861d0 (6 years ago)
Enable BEFORE row-level triggers for partitioned tables
... with the limitation that the tuple must remain in the same partition.
Reviewed-by: Ashutosh Bapat
Discussion: https://postgr.es/m/20200227165158.GA2071@alvherre.pgsql

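A sketch of what is now allowed (names invented); note the limitation stated above that the trigger must not move the row to a different partition.

```sql
CREATE TABLE events (id int, created date NOT NULL) PARTITION BY RANGE (created);
CREATE TABLE events_2020 PARTITION OF events
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');

CREATE FUNCTION events_default_id() RETURNS trigger
    LANGUAGE plpgsql AS $$
BEGIN
    NEW.id := coalesce(NEW.id, 0);   -- must not change the partition-key column
    RETURN NEW;
END;
$$;

-- Defining a BEFORE row trigger directly on the partitioned parent now works.
CREATE TRIGGER events_before_insert BEFORE INSERT ON events
    FOR EACH ROW EXECUTE FUNCTION events_default_id();
```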
a2b1faa0f2 (6 years ago)
Implement type regcollation
This will be helpful for a following commit and it's also just generally useful, like the other reg* types.
Author: Julien Rouhaud
Reviewed-by: Thomas Munro and Michael Paquier
Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com

ccaa3569f5 (6 years ago)
Recognize some OR clauses as compatible with functional dependencies
Since commit

e6c178b5b7 (6 years ago)
Refactor our checks for valid function and aggregate signatures.
pg_proc.c and pg_aggregate.c had near-duplicate copies of the logic to decide whether a function or aggregate's signature is legal. This seems like a bad thing even without the problem that the upcoming "anycompatible" patch would roughly double the complexity of that logic. Hence, refactor so that the rules are localized in new subroutines supplied by parse_coerce.c. (One could quibble about just where to add that code, but putting it beside enforce_generic_type_consistency seems not totally unreasonable.) The fact that the rules are about to change would mandate some changes in the wording of the associated error messages in any case. I ended up spelling things out in a fairly literal fashion in the errdetail messages, eg "A result of type anyelement requires at least one input of type anyelement, anyarray, anynonarray, anyenum, or anyrange." Perhaps this is overkill, but once there's more than one subgroup of polymorphic types, people might get confused by more-abstract messages.
Discussion: https://postgr.es/m/24137.1584139352@sss.pgh.pa.us

77ec5affbc (6 years ago)
Adjust handling of an ANYARRAY actual input for an ANYARRAY argument.
Ordinarily it's impossible for an actual input of a function to have declared type ANYARRAY, since we'd resolve that to a concrete array type before doing argument type resolution for the function. But an exception arises for functions applied to certain columns of pg_statistic or pg_stats, since we abuse the "anyarray" pseudotype by using it to declare those columns. So parse_coerce.c has to deal with the case.
Previously we allowed an ANYARRAY actual input to match an ANYARRAY polymorphic argument, but only if no other argument or result was declared ANYELEMENT. When that logic was written, those were the only two polymorphic types, and I fear nobody thought carefully about how it ought to extend to the other ones that came along later. But actually it was wrong even then, because if a function has two ANYARRAY arguments, it should be able to expect that they have identical element types, and we'd not be able to ensure that.
The correct generalization is that we can match an ANYARRAY actual input to an ANYARRAY polymorphic argument only if no other argument or result is of any polymorphic type, so that no promises are being made about element type compatibility. check_generic_type_consistency can't test that condition, but it seems better anyway to accept such matches there and then throw an error if needed in enforce_generic_type_consistency. That way we can produce a specific error message rather than an unintuitive "function does not exist" complaint. (There's some risk perhaps of getting new ambiguous-function complaints, but I think that any set of functions for which that could happen would be ambiguous against ordinary array columns as well.)
While we're at it, we can improve the message that's produced in cases that the code did already object to, as shown in the regression test changes. Also, remove a similar test that got cargo-culted in for ANYRANGE; there are no catalog columns of type ANYRANGE, and I hope we never create any, so that's not needed. (It was incomplete anyway.)
While here, update some comments and rearrange the code a bit in preparation for upcoming additions of more polymorphic types.
In practical situations I believe this is just exchanging one error message for another, hopefully better, one. So it doesn't seem needful to back-patch, even though the mistake is ancient.
Discussion: https://postgr.es/m/21569.1584314271@sss.pgh.pa.us

9d9784c840 (6 years ago)
Remove bogus assertion about polymorphic SQL function result.
It is possible to reach check_sql_fn_retval() with an unresolved polymorphic rettype, resulting in an assertion failure as demonstrated by one of the added test cases. However, the code following that throws what seems an acceptable error message, so just remove the Assert and adjust commentary. While here, I thought it'd be a good idea to provide some parallel tests of SQL-function and PL/pgSQL-function polymorphism behavior. Some of these cases are perhaps duplicative of tests elsewhere, but we hadn't any organized coverage of the topic AFAICS. Although that assertion's been wrong all along, it won't have any effect in production builds, so I'm not bothering to back-patch.
Discussion: https://postgr.es/m/21569.1584314271@sss.pgh.pa.us

b4570d33aa (6 years ago)
Avoid holding a directory FD open across assorted SRF calls.
This extends the fixes made in commit

70a7b4776b (6 years ago)
Add backend type to csvlog and optionally log_line_prefix
The backend type, which corresponds to what pg_stat_activity.backend_type shows, is added as a column to the csvlog and can optionally be added to log_line_prefix using the new %b placeholder.
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://www.postgresql.org/message-id/flat/c65e5196-4f04-4ead-9353-6088c19615a3@2ndquadrant.com

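A sketch of enabling the new placeholder; the prefix format here is just an example.

```sql
-- %b expands to the backend type (e.g. "client backend", "autovacuum worker").
ALTER SYSTEM SET log_line_prefix = '%m [%p] %b: ';
SELECT pg_reload_conf();
```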
d8cfa82d51 (6 years ago)
Improve test coverage for multi-column MCV lists
The regression tests for extended statistics were not testing a couple of important cases for the MCV lists:
* IS NOT NULL clauses - We did have queries with IS NULL clauses, but not the negative case.
* clauses with variable on the right - All the clauses had the Var on the left, i.e. (Var op Const), so this adds (Const op Var) too.
* columns with fixed-length types passed by reference - All columns were using either by-value or varlena types, so add a test with UUID columns too. This matters for (de)serialization.
* NULL-only dimension - When one of the columns contains only NULL values, we treat it as a special case during (de)serialization.
* arrays containing NULL - When the constant parameter contains a NULL value, we need to handle it correctly during estimation, for all IN, ANY and ALL clauses.
Discussion: https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development
Author: Tomas Vondra

f9696782c7 (6 years ago)
Improve test coverage for functional dependencies
The regression tests for functional dependencies were only using clauses of the form (Var op Const), i.e. with Var on the left side. This adds a couple of queries with Var on the right, to test other code paths. It also prints one of the functional dependencies, to test the data type output function. The functional dependencies are "perfect" with degree of 1.0 so this should be stable.
Discussion: https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development
Author: Tomas Vondra

4dbcb3f844 (6 years ago)
Restructure polymorphic-type resolution in funcapi.c.
resolve_polymorphic_tupdesc() and resolve_polymorphic_argtypes() failed to cover the case of having to resolve anyarray given only an anyrange input. The bug was masked if anyelement was also used (as either input or output), which probably helps account for our not having noticed. While looking at this I noticed that resolve_generic_type() would produce the wrong answer if asked to make that same resolution. ISTM that resolve_generic_type() is confusingly defined and overly complex, so rather than fix it, let's just make funcapi.c do the specific lookups it requires for itself. With this change, resolve_generic_type() is not used anywhere, so remove it in HEAD. In the back branches, leave it alone (complete with bug) just in case any external code is using it.
While we're here, make some other refactoring adjustments in funcapi.c with an eye to upcoming future expansion of the set of polymorphic types:
* Simplify quick-exit tests by adding an overall have_polymorphic_result flag. This is about a wash now but will be a win when there are more flags.
* Reduce duplication of code between resolve_polymorphic_tupdesc() and resolve_polymorphic_argtypes().
* Don't bother to validate correct matching of anynonarray or anyenum; the parser should have done that, and even if it didn't, just doing "return false" here would lead to a very confusing, off-point error message. (Really, "return false" in these two functions should only occur if the call_expr isn't supplied or we can't obtain data type info from it.)
* For the same reason, throw an elog rather than "return false" if we fail to resolve a polymorphic type.
The bug's been there since we added anyrange, so back-patch to all supported branches.
Discussion: https://postgr.es/m/6093.1584202130@sss.pgh.pa.us

e83daa7e33 (6 years ago)
Use multi-variate MCV lists to estimate ScalarArrayOpExpr
Commit

8f321bd16c (6 years ago)
Use functional dependencies to estimate ScalarArrayOpExpr
Until now functional dependencies supported only simple equality clauses and clauses that can be trivially translated to equalities. This commit allows estimation of some ScalarArrayOpExpr (IN/ANY) clauses.
For IN clauses we can do this thanks to using an operator with equality semantics, which means an IN clause
    WHERE c IN (1, 2, ..., N)
can be translated to
    WHERE (c = 1 OR c = 2 OR ... OR c = N)
IN clauses are now considered compatible with functional dependencies, and rely on the same assumption of consistency of queries with data (which is an assumption we already used for simple equality clauses). This applies also to ANY clauses with an equality operator, which can be considered equivalent to an IN clause.
ALL clauses are still considered incompatible, although there's some discussion about maybe relaxing this in the future.
Author: Pierre Ducroquet
Reviewed-by: Tomas Vondra, Dean Rasheed
Discussion: https://www.postgresql.org/message-id/flat/13902317.Eha0YfKkKy%40pierred-pdoc

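A sketch of the kind of query that benefits; the schema and values are invented. The dependencies statistics let the planner avoid treating the two columns as independent when estimating the combined clauses.

```sql
CREATE TABLE addresses (city text, zip text);
CREATE STATISTICS addr_city_zip (dependencies) ON city, zip FROM addresses;
ANALYZE addresses;

-- The IN list is handled like the OR-of-equalities rewrite shown above.
EXPLAIN SELECT *
FROM addresses
WHERE city = 'Prague' AND zip IN ('11000', '12000', '13000');
```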
1cc9c2412c (6 years ago)
Preserve replica identity index across ALTER TABLE rewrite
If an index was explicitly set as replica identity index, this setting was lost when a table was rewritten by ALTER TABLE. Because this setting is part of pg_index but actually controlled by ALTER TABLE (not part of CREATE INDEX, say), we have to do some extra work to restore it.
Based-on-patch-by: Quan Zongliang <quanzongliang@gmail.com>
Reviewed-by: Euler Taveira <euler.taveira@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/c70fcab2-4866-0d9f-1d01-e75e189db342@gmail.com

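A sketch of the scenario being fixed (names invented): the replica identity marking on the index now survives a rewriting ALTER TABLE.

```sql
CREATE TABLE stock (id bigint NOT NULL, payload text);
CREATE UNIQUE INDEX stock_id_key ON stock (id);
ALTER TABLE stock REPLICA IDENTITY USING INDEX stock_id_key;

-- Changing a column type forces a table rewrite.
ALTER TABLE stock ALTER COLUMN payload TYPE varchar(100);

-- Previously this flag came back false after the rewrite.
SELECT indisreplident FROM pg_index
WHERE indexrelid = 'stock_id_key'::regclass;
```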
a029a0641c (6 years ago)
Fix test case instability introduced in 085b6b667.
I forgot that the WAL directory might hold other files besides WAL segments, notably including new segments still being filled. That means a blind test for the first file's size being 16MB can fail. Restrict based on file name length to make it more robust. Per buildfarm.

b08dee24a5 (6 years ago)
Add pg_dump support for ALTER obj DEPENDS ON EXTENSION
pg_dump is oblivious to this kind of dependency, so such dependencies are lost on dump/restore (and pg_upgrade). Have pg_dump emit ALTER lines so that they're preserved. Add some pg_dump tests for the whole thing, also.
Reviewed-by: Tom Lane (offlist)
Reviewed-by: Ibrar Ahmed
Reviewed-by: Ahsan Hadi (who also reviewed commit

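For reference, the kind of statement whose effect pg_dump now preserves; the function and extension here are placeholders.

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE FUNCTION app_sha256(t text) RETURNS text
    LANGUAGE sql
    AS $$ SELECT encode(digest(t, 'sha256'), 'hex') $$;

-- Marks the function as dependent on the extension, so dropping the
-- extension also drops the function; pg_dump previously lost this marker.
ALTER FUNCTION app_sha256(text) DEPENDS ON EXTENSION pgcrypto;
```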