Increase timeout in multixid_conversion upgrade test

The workload to generate multixids before upgrade is very slow on
buildfarm members running with JIT enabled. The workload runs a lot of
small queries, so it's unsurprising that JIT makes it slower. On my
laptop it nevertheless runs in under 10 s even with JIT enabled, while
some buildfarm members have been hitting the 180 s timeout. That seems
extreme, but I suppose it's still expected on very slow and busy
buildfarm animals. The timeout applies to the BackgroundPsql sessions
as a whole rather than to the individual queries.
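The distinction matters because every query on a connection draws down the same budget. A toy model of that behavior, in plain Python with hypothetical names (this is not the BackgroundPsql API, just an illustration of a per-session deadline versus a per-query one):

```python
import time

class SessionWithDeadline:
    """Toy model: one deadline for the whole session, set once when
    the session is opened, shared by every query run on it."""

    def __init__(self, timeout_secs):
        self.deadline = time.monotonic() + timeout_secs

    def run_query(self, work_secs):
        # Simulate one small query; the shared budget keeps draining.
        time.sleep(work_secs)
        if time.monotonic() > self.deadline:
            raise TimeoutError("session deadline exceeded")

session = SessionWithDeadline(timeout_secs=0.05)
session.run_query(0.01)  # individually fast queries...
session.run_query(0.01)
try:
    while True:          # ...still exhaust the one shared budget
        session.run_query(0.01)
except TimeoutError as err:
    print(err)
```

With a per-query timeout none of these queries would come close to the limit; with a per-session deadline, a long run of cheap queries eventually trips it, which is the failure mode described above.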

Bump up the timeout to avoid the test failures. Add periodic progress
reports to the test output so that we get a better picture of just how
slow the test is.

In passing, also fix the comments about how many multixids and members
the workload generates. The comments were written based on 10 parallel
connections, but the workload actually uses 20.
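The corrected member count follows directly from the connection count. A back-of-the-envelope sketch of the arithmetic (illustration only; the variable names are not from the test itself):

```python
# Each locking query creates one multixid whose members are the XIDs
# of all transactions open at the time -- roughly one per connection.
nclients = 20       # parallel connections the workload keeps open
iterations = 3000   # locking/updating queries issued round-robin

multixids = iterations
members_total = multixids * nclients

print(multixids, members_total)  # 3000 60000
```

The same arithmetic with the old assumption of 10 connections gives the 30000 members the stale comment claimed, which is how the discrepancy crept in.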

Discussion: https://www.postgresql.org/message-id/b7faf07c-7d2c-4f35-8c43-392e057153ef@gmail.com
branch: master
Heikki Linnakangas, 6 hours ago
parent: ecb553ae82
commit: bd43940b02
src/bin/pg_upgrade/t/007_multixact_conversion.pl (25 changed lines)

@@ -26,8 +26,9 @@ my $tempdir = PostgreSQL::Test::Utils::tempdir;
 # upgrading them. The workload is a mix of KEY SHARE locking queries
 # and UPDATEs, and commits and aborts, to generate a mix of multixids
 # with different statuses. It consumes around 3000 multixids with
-# 30000 members. That's enough to span more than one multixids
-# 'offsets' page, and more than one 'members' segment.
+# 60000 members in total. That's enough to span more than one
+# multixids 'offsets' page, and more than one 'members' segment with
+# the default block size.
 #
 # The workload leaves behind a table called 'mxofftest' containing a
 # small number of rows referencing some of the generated multixids.
@@ -68,6 +69,12 @@ sub mxact_workload
 	# verbose by setting this.
 	my $verbose = 0;
+
+	# Bump the timeout on the connections to avoid false negatives on
+	# slow test systems. The timeout covers the whole duration that
+	# the connections are open rather than the individual queries.
+	my $connection_timeout_secs =
+	  4 * $PostgreSQL::Test::Utils::timeout_default;
 
 	# Open multiple connections to the database. Start a transaction
 	# in each connection.
 	for (0 .. $nclients)
@@ -75,8 +82,10 @@ sub mxact_workload
 		# Use the psql binary from the new installation. The
 		# BackgroundPsql functionality doesn't work with older psql
 		# versions.
-		my $conn = $binnode->background_psql('',
-			connstr => $node->connstr('postgres'));
+		my $conn = $binnode->background_psql(
+			'',
+			connstr => $node->connstr('postgres'),
+			timeout => $connection_timeout_secs);
 		$conn->query_safe("SET log_statement=none", verbose => $verbose)
 		  unless $verbose;
@@ -88,12 +97,14 @@ sub mxact_workload
 	# Run queries using cycling through the connections in a
 	# round-robin fashion. We keep a transaction open in each
-	# connection at all times, and lock/update the rows. With 10
+	# connection at all times, and lock/update the rows. With 20
 	# connections, each SELECT FOR KEY SHARE query generates a new
-	# multixid, containing the 10 XIDs of all the transactions running
-	# at the time.
+	# multixid, containing the XIDs of all the transactions running at
+	# the time, ie. around 20 XIDs.
 	for (my $i = 0; $i < 3000; $i++)
 	{
+		note "generating multixids $i / 3000\n" if ($i % 100 == 0);
 		my $conn = $connections[ $i % $nclients ];
 		my $sql = ($i % $abort_every == 0) ? "ABORT" : "COMMIT";
