Questions about DAML performance

Hi, I’m using Sandbox (in-memory) and daml-on-sql (PostgreSQL), and I’m testing contract creation performance through the DAML HTTP JSON API.

First of all, I expected high TPS based on the following document.

Our test environment is as follows:

  • OS: Ubuntu
  • DAML SDK / daml-on-sql version: 1.11.1
  • PostgreSQL version: 13 (on Google Cloud)
  • Test tool (load test): http://naver.github.io/ngrinder
  • Test code: the IOU contract example provided with DAML
  • Test topology: ngrinder <-> HAProxy <-> DAML JSON API (5 nodes) <-> DAML Sandbox or PostgreSQL
  • Test servers:
    DAML JSON API and Sandbox/daml-on-sql: 8 cores, 32 GB RAM, 100 GB SSD (1 box, on Google Cloud)
    PostgreSQL: 4 cores, 32 GB RAM, 100 GB SSD (1 box, on Google Cloud)
  • Execution condition: creating a contract (/v1/create)
  • Test run: load test run for 10 minutes with 200 virtual users
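For reference, a minimal sketch of the kind of /v1/create request body such a load test issues against the HTTP JSON API. The template ID and field names follow the IOU example shipped with the DAML SDK, but the exact values here are illustrative placeholders, not the thread author's actual test data:

```python
import json

def build_create_request(issuer, owner, amount):
    """Build an HTTP JSON API /v1/create body for an IOU-style template.

    Field names follow the SDK's IOU example; adjust templateId and payload
    fields to match the actual DAR under test.
    """
    return {
        "templateId": "Iou:Iou",
        "payload": {
            "issuer": issuer,
            "owner": owner,
            "currency": "USD",
            # Decimal fields are conventionally sent as strings in the JSON API.
            "amount": str(amount),
            "observers": [],
        },
    }

body = build_create_request("Alice", "Alice", 100)
print(json.dumps(body))
```

The real test would POST this body to `/v1/create` with an `Authorization: Bearer <JWT>` header.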

Expected test results:

  • 5,000 to 10,000 TPS or more

Actual test results:

  • Sandbox (in-memory): 280 TPS
  • PostgreSQL: 130 TPS

The results are much lower than expected. Is there anything wrong with our test conditions? (During the test, the PostgreSQL server was not heavily loaded.)

In fact, this level of performance is only slightly higher than that of a blockchain.

The results above did not change much even when we changed the server specifications or the settings.

If there is any way to further increase performance under our current conditions, we would appreciate advice.

Thank you :upside_down_face:

daml-on-sql was executed as follows:

```
java -jar daml-on-sql.jar iou.dar --ledgerid=ledger --max-parallel-submissions=2048 --events-page-size=2048 --max-commands-in-flight=2048 --sql-backend-jdbcurl='jdbc:postgresql://{{host}}/daml?user=daml&password=daml&initialSize=200&minidle=200&maxActive=200' --auth-jwt-rs256-crt=ledger.crt
```

It’s probably worth repeating the test using the native Ledger API instead of going through the HTTP JSON API, which simplifies interacting with the active contract set but adds cost to the process (e.g. JSON encoding and decoding). Also note that the size of contract creation and exercise arguments is a factor to keep in mind, so very large, chunky contracts could suffer a performance penalty compared to tests run on relatively small contracts.
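To get a rough feel for the JSON encode/decode overhead mentioned above, here is a small, self-contained timing sketch. The payload is just an illustrative IOU-like record, and the absolute numbers will vary by machine; the point is only that this cost is paid per request, on both the client and the JSON API server, on top of the gRPC work underneath:

```python
import json
import time

# An illustrative IOU-like request body; real contracts may be larger or nested.
payload = {
    "templateId": "Iou:Iou",
    "payload": {"issuer": "Alice", "owner": "Bob", "currency": "USD",
                "amount": "100.0", "observers": []},
}

n = 100_000
start = time.perf_counter()
for _ in range(n):
    # One encode + decode round trip, as the JSON API layer must do per request.
    decoded = json.loads(json.dumps(payload))
elapsed = time.perf_counter() - start

per_request_us = elapsed / n * 1e6
print(f"encode+decode: {per_request_us:.1f} microseconds per request")
```

Multiply that by the payload size and request rate of a load test to estimate how much of the budget the JSON layer consumes relative to the native Ledger API.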

We are currently working heavily on performance, particularly on the participant node component that serves the Ledger API, so I would recommend keeping an eye on upcoming releases and continuing to monitor performance to make sure it matches your expectations.

Thanks for the answer. We also considered the Ledger API, but scaling up did not change performance much either. And we were never told that using the JSON API would cost this much performance (usually a REST layer does not lose much).
What we want to know is this: DAML must have gone through a lot of testing before being applied in the enterprise (as we are doing now), so a certain baseline performance should be guaranteed. But our test results fall far short of that, so we suspect we have the test conditions wrong, or that there is a better way.
We hope DAML can provide some guidance on this.
Thanks

The benchmarks referenced in the paper are from a run on Digital Asset’s own blockchain platform, which is no longer under development. That platform was highly horizontally scalable, which allowed transaction throughputs in the tens of thousands.

The Daml Driver for PostgreSQL 1.x is a single process, so it only scales vertically, with more CPU and memory. You should already be able to squeeze a lot more out of it by using a beefier machine and hitting the Ledger API with small requests. You’ll also see significant performance improvements in the upcoming releases that @stefanobaghino-da hinted at above.
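Keeping many small requests in flight concurrently, rather than submitting them one at a time, is usually what lets a client actually saturate the Ledger API. A minimal sketch of a bounded-concurrency load generator follows; the `submit` function here is a stub that only simulates a fixed round-trip latency and stands in for a real command submission (e.g. a gRPC SubmitAndWait call or a JSON API POST), not any actual DAML API:

```python
import asyncio

async def submit(i):
    # Stub standing in for a real command submission; simulates a 10 ms
    # round trip instead of talking to a ledger.
    await asyncio.sleep(0.01)
    return i

async def run_load(total, concurrency):
    """Issue `total` submissions, keeping at most `concurrency` in flight."""
    sem = asyncio.Semaphore(concurrency)

    async def bounded(i):
        async with sem:
            return await submit(i)

    return await asyncio.gather(*(bounded(i) for i in range(total)))

results = asyncio.run(run_load(total=200, concurrency=50))
print(len(results))
```

With a 10 ms round trip, 50 in-flight requests yield roughly 5,000 submissions/s on the client side, which illustrates why in-flight concurrency (see also `--max-commands-in-flight`) matters as much as raw server speed.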
