We've made multiple attempts to stabilize the plans shown by commit
1bc0100d2, with little success so far. The reason for the remaining
instability seems to be that if a transaction (such as auto-analyze)
is running concurrently with the test, then get_actual_variable_range may
return a maximum value for "T 1"."C 1" that's far away from the actual max,
as a result of our having transiently inserted such a value earlier in
the test. Because we use a non-MVCC snapshot to fetch the value (for
performance reasons), the presence of other transactions can cause that
function to return entries that are actually dead.
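As a rough illustration of the failure mode, here is a hypothetical SQL sketch (not part of the regression test; it assumes ft2's base data tops out well below 9999, and it glosses over the foreign-table indirection to "T 1"):

    -- Insert, then kill, an extreme value for c1.
    INSERT INTO ft2 (c1,c2,c3) VALUES (9999,999,'foo');
    DELETE FROM ft2 WHERE c1 = 9999;   -- the 9999 tuple is now dead
    -- If another transaction (such as auto-analyze) is still running, the
    -- dead tuple is not yet removable, and the non-MVCC index probe in
    -- get_actual_variable_range() may still return it.  The planner then
    -- believes max(c1) is about 9999 rather than the real maximum, which
    -- can flip the plan for range conditions near the top of c1:
    EXPLAIN SELECT * FROM ft2 WHERE c1 > 1100;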
To fix, use a less extreme value in the earlier transient insertion, so
that whether it is visible or not won't affect the selectivity estimate.
The value 9999 there seems to have been picked with the aid of a
dartboard anyway, rather than for any specific reason.
Discussion: https://postgr.es/m/16962.1523551784@sss.pgh.pa.us
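To see informally why 1200 is innocuous, one could inspect the histogram for the underlying column (a hypothetical check, assuming the test's remote-side table is "S 1"."T 1" and its base data spans roughly c1 = 1..1000 plus a few later insertions):

    -- 1200 sits just past the top of "C 1"'s existing range, so the
    -- selectivity estimate for the test's queries comes out essentially
    -- the same whether or not the transient row's index entry is visible
    -- to get_actual_variable_range()'s probe.
    SELECT histogram_bounds
    FROM pg_stats
    WHERE schemaname = 'S 1' AND tablename = 'T 1' AND attname = 'C 1';

The change in the test and its expected output: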
-   Output: 9999, 999, NULL::integer, 'foo'::text, NULL::timestamp with time zone, NULL::timestamp without time zone, NULL::character varying, 'ft2       '::character(10), NULL::user_enum
+   Output: 1200, 999, NULL::integer, 'foo'::text, NULL::timestamp with time zone, NULL::timestamp without time zone, NULL::character varying, 'ft2       '::character(10), NULL::user_enum
 (5 rows)
 
-INSERT INTO ft2 (c1,c2,c3) VALUES (9999,999,'foo') RETURNING tableoid::regclass;
+INSERT INTO ft2 (c1,c2,c3) VALUES (1200,999,'foo') RETURNING tableoid::regclass;
  tableoid 
 ----------
  ft2
 (1 row)
 
 EXPLAIN (verbose, costs off)
-UPDATE ft2 SET c3 = 'bar' WHERE c1 = 9999 RETURNING tableoid::regclass; -- can be pushed down
+UPDATE ft2 SET c3 = 'bar' WHERE c1 = 1200 RETURNING tableoid::regclass; -- can be pushed down