How the SQL client can influence performance
From a recent discussion on the Postgres performance mailing list, I learned that the configuration of the SQL client has an influence on the execution plan chosen by Postgres.
In a nutshell: a SQL client that limits the number of rows returned prevents Postgres from using parallel execution for the query.
In the mentioned mail thread, DBeaver was used as the SQL client, but SQL Workbench/J can also limit the number of rows returned.
The confusing part is that an explain (analyze) will show parallel execution even if “Max. Rows” is set to a non-zero value. The reason is that the analyzed statement is not executed by the SQL client, but by the server.
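For example, running the statement manually in a SQL tab illustrates this (a sketch using the order_line table from the plans below):

```sql
-- The client's "Max. Rows" setting does not affect this statement:
-- the server runs the query as part of EXPLAIN and only sends back
-- the plan text, so the protocol-level row limit never applies.
explain (analyze) select count(*) from order_line;
```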
If we enable the auto_explain module, the logged plan shows that a query reported to use parallel execution by explain (analyze) does not actually use it when run directly:
Query Text: select count(*) from order_line
Finalize Aggregate  (cost=48189.61..48189.62 rows=1 width=8) (actual time=411.339..411.339 rows=1 loops=1)
  ->  Gather  (cost=48189.29..48189.60 rows=3 width=8) (actual time=411.334..411.334 rows=1 loops=1)
        Workers Planned: 3
        Workers Launched: 0
        ->  Partial Aggregate  (cost=47189.29..47189.30 rows=1 width=8) (actual time=411.333..411.333 rows=1 loops=1)
              ->  Parallel Seq Scan on order_line  (cost=0.00..43157.03 rows=1612903 width=0) (actual time=0.014..246.931 rows=5000000 loops=1)
As you can see in the above execution plan, Postgres planned for parallel execution (“Workers Planned: 3”) but then did not use it (“Workers Launched: 0”) due to the client configuration. As no parallel workers were used, we also don’t see any details about each worker in the plan.
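Independently of the client-side limit, the number of workers Postgres plans for is bounded by the server's parallel query settings, which can be inspected like this (values shown depend entirely on your configuration):

```sql
-- "Workers Planned" is capped by max_parallel_workers_per_gather;
-- the workers actually available at run time are further limited
-- by max_parallel_workers.
show max_parallel_workers_per_gather;
show max_parallel_workers;
```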
As soon as the number of rows is not limited (e.g. by removing the value in the input field “Max. Rows” in a SQL tab), Postgres does use parallel execution:
Query Text: select count(*) from order_line
Finalize Aggregate  (cost=48189.61..48189.62 rows=1 width=8) (actual time=219.892..219.892 rows=1 loops=1)
  Output: count(*)
  ->  Gather  (cost=48189.29..48189.60 rows=3 width=8) (actual time=219.198..247.511 rows=4 loops=1)
        Output: (PARTIAL count(*))
        Workers Planned: 3
        Workers Launched: 3
        ->  Partial Aggregate  (cost=47189.29..47189.30 rows=1 width=8) (actual time=180.225..180.225 rows=1 loops=4)
              Output: PARTIAL count(*)
              Worker 0: actual time=166.970..166.970 rows=1 loops=1
              Worker 1: actual time=175.749..175.749 rows=1 loops=1
              Worker 2: actual time=159.863..159.863 rows=1 loops=1
              ->  Parallel Seq Scan on order_line  (cost=0.00..43157.03 rows=1612903 width=0) (actual time=0.027..120.299 rows=1250000 loops=4)
                    Output: order_id, product_id, pieces
                    Worker 0: actual time=0.031..115.322 rows=1073370 loops=1
                    Worker 1: actual time=0.037..124.710 rows=1444850 loops=1
                    Worker 2: actual time=0.030..110.901 rows=1055985 loops=1
“Workers Launched” now shows 3, and because I enabled the “verbose” option of the auto_explain module, the plan also shows details for each worker.
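For reference, auto_explain can be enabled per session roughly like this (a sketch using the module's documented settings; it can also be loaded globally via shared_preload_libraries in postgresql.conf):

```sql
-- load the auto_explain module for the current session
load 'auto_explain';
-- log the plan of every statement (threshold in milliseconds)
set auto_explain.log_min_duration = 0;
-- include actual run times, as in the plans above
set auto_explain.log_analyze = on;
-- include Output lines and per-worker details
set auto_explain.log_verbose = on;
```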
Tom Lane explained why this happens:
The trouble with the protocol-level limit (setMaxRows) is that it requires being able to suspend the query and resume fetching rows later. We don’t allow that for parallel query because it would involve tying up vastly more resources, ie a bunch of worker processes, not just some extra memory in the client’s own backend.
So if you want to be sure that all your queries run with the best performance, avoid setting “Max. Rows” in SQL Workbench/J.