JSON-API Errors while doing large file processing

Hi Daml team,

For our healthcare application, we have been uploading an 8K file containing up to 2000 records, which are posted to the Daml ledger as one unit of work. This has caused some instability in our environment, and we have seen the following errors from the JSON API during processing:

16:34:37.882 [http-json-ledger-api-akka.actor.default-dispatcher-6] ERROR com.daml.http.Endpoints - Future failed

io.grpc.StatusRuntimeException: ABORTED: Invalid ledger time: Causal monotonicity violated

And also:

16:34:40.368 [http-json-ledger-api-akka.actor.default-dispatcher-6] ERROR com.daml.http.Endpoints - Future failed

io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: gRPC message exceeds maximum size 4194304: 4580238

Any insights on what these errors mean and how we can accommodate large file processing like this?

Some details of our environment:

  • SDK 1.7
  • Daml for Postgres deployed via BTP Sextant
  • BTP Sextant is running 1 Cluster on AWS EKS (mid-size box - m5.large)

Let me start with the second error: gRPC imposes a maximum size on incoming messages (but not on outgoing ones). This applies in both directions, meaning the ledger imposes a maximum size on requests sent to it, and the JSON API imposes a maximum size on the messages (primarily transactions) it receives from the ledger. From the log statement you’ve shown, it is slightly unclear to me which direction that error is coming from (side note: I recommend upgrading at least your JSON API, if not everything, to SDK 1.17.1 or newer, which has much better logging and should make that clearer). Both the JSON API and Daml on SQL have a --max-inbound-message-size flag that lets you raise the limit and allow larger incoming messages.
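For reference, here is a minimal sketch of where that flag goes when starting the two components by hand. The hosts, ports, JDBC URL, jar name and the 16 MiB value below are placeholders; under Sextant the processes are launched for you, so the equivalent setting would need to be applied through its configuration rather than on a command line:

# JSON API: raise the limit on messages it receives from the ledger
daml json-api \
  --ledger-host localhost \
  --ledger-port 6865 \
  --http-port 7575 \
  --max-inbound-message-size 16777216

# Daml on SQL: raise the limit on requests sent to the ledger
java -jar daml-on-sql-<version>.jar \
  --sql-backend-jdbcurl "jdbc:postgresql://localhost/daml?user=daml&password=<password>" \
  --max-inbound-message-size 16777216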

As for the causal monotonicity error, I don’t have enough context on how the Sextant setup works to say this with certainty, but there was a bug that affected Sandbox, VMBC and some other ledgers a few releases back (1.7 definitely includes it), which caused contract key races (two concurrent transactions operating on the same contract key, with at least one of them modifying the assignment of that key) to be misreported as this error instead of an InconsistentKeys error. I’ve reached out internally to verify whether that bug could also affect your Sextant setup.


Thanks @cocreature
We will send over the full log files for your review.

We have updated the gRPC max size, but that did not seem to make an impact.