Canton Optimisation - Optimizing single participant node throughput

Hi team,

May I know how to optimize the throughput of a single Canton participant node running on Postgres?

Thanks and Regards,
Jean-Paul

I’m working something out. Will get back to you.

I hope this article answers your question:

https://www.canton.io/docs/dev/user-manual/usermanual/FAQ.html#how-to-setup-canton-to-get-best-performance


This is AMAZING!

Great article @MatthiasSchmalz . Thank you for writing this up!

@jaypeeda : below are two configurations that we’ve used for a customer’s high-throughput setup. You might find them useful to tune your Postgres server.

For sustained-throughput tests simulating 5-day runs that ingest 10M workflow instances per day at a sustained rate of 700 workflows/s, we use the following config. The workflows create and archive roughly 10 contracts per workflow instance on average.

listen_addresses=*
max_connections=300
effective_cache_size=12GB
checkpoint_completion_target=0.93
wal_buffers=16MB
default_statistics_target=100
random_page_cost=1.1
effective_io_concurrency=2
work_mem=40MB
min_wal_size=2GB
max_wal_size=180GB
max_worker_processes=24
max_parallel_workers_per_gather=8
max_parallel_workers=24
max_parallel_maintenance_workers=8
shared_buffers=40GB
maintenance_work_mem=4GB
checkpoint_timeout=30min

Setting a large shared_buffers value is important to keep key indices in memory and to reduce both read and write amplification. Moreover, the DB writes a lot of data, and overly frequent Postgres checkpointing can hurt throughput. That is why max_wal_size is set to 180 GB.
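As a rough sanity check on the checkpointing behaviour described above (hypothetical commands, assuming you can reach the server with psql as a superuser; column names are valid up to Postgres 16, where checkpoint stats live in pg_stat_bgwriter), you can verify whether checkpoints are being forced by WAL volume rather than running on the checkpoint_timeout schedule:

```shell
# Checkpoints triggered by WAL volume (checkpoints_req) vs. the
# checkpoint_timeout schedule (checkpoints_timed). A high requested
# count suggests max_wal_size is too small for the write load.
psql -c "SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;"

# Confirm the effective settings at runtime:
psql -c "SHOW max_wal_size;"
psql -c "SHOW shared_buffers;"
```

If checkpoints_req grows much faster than checkpoints_timed during a load test, raising max_wal_size (as in the config above) is usually the first knob to turn.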

For testing the same workload on less beefy machines when starting from an empty DB, we use the following config.

listen_addresses=*
max_connections=300
effective_cache_size=12GB
checkpoint_completion_target=0.9
wal_buffers=16MB
default_statistics_target=100
random_page_cost=1.1
effective_io_concurrency=2
work_mem=40MB
min_wal_size=2GB
max_wal_size=8GB
max_worker_processes=4
max_parallel_workers_per_gather=2
max_parallel_workers=4
max_parallel_maintenance_workers=2
shared_buffers=8GB
maintenance_work_mem=2GB

Thanks @Simon_Meier !

Hi Simon,

Is it possible to provide the configuration files (*.conf) with those parameters set?

That would make them easier for us to use.

Thanks and Regards,

Jean-Paul

Hi @jaypeeda ,

What Postgres configuration to use ultimately depends on the Postgres version, the server hardware, and the workload that you are optimizing for. Every Postgres installation comes with a default postgresql.conf, which you can modify. The Postgres docs provide good guidance on how to do so here: PostgreSQL: Documentation: 14: 20.1. Setting Parameters

I have not attached full .conf files, as the parameters I’ve given above can be pasted as-is into your postgresql.conf for your server. The obvious caveat is that they might not fit your hardware. You might also be interested in A Performance Cheat Sheet for PostgreSQL | Severalnines for learning more about tuning Postgres performance.
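For illustration, here are the two standard ways of applying such parameters (hypothetical values and data-directory path; adapt them to your hardware). Editing postgresql.conf directly is what the snippets above assume; ALTER SYSTEM is an alternative that writes to postgresql.auto.conf:

```shell
# Option A: paste the parameters into postgresql.conf, then reload.
# Parameters like work_mem take effect on reload; shared_buffers does not.
psql -c "SELECT pg_reload_conf();"

# Option B: set parameters via ALTER SYSTEM (writes postgresql.auto.conf,
# which overrides postgresql.conf).
psql -c "ALTER SYSTEM SET max_wal_size = '180GB';"
psql -c "ALTER SYSTEM SET shared_buffers = '40GB';"

# shared_buffers (and a few other settings) only take effect after a
# full server restart:
pg_ctl restart -D /path/to/data
```

Either way, SHOW <parameter> afterwards confirms what the server is actually running with.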
