io.grpc.StatusRuntimeException: ABORTED: PARTICIPANT_BACKPRESSURE(2,0): The participant is overloaded: participant rate limit exceeded (maximum rate: 200 commands/s)

Not sure if this is a Canton-specific question or not, but:

I’m doing load testing, and running into the issue that when I

  1. have auth turned on, and
  2. send more than 20 commands per second

I get the error “The participant is overloaded: participant rate limit exceeded (maximum rate: 200 commands/s)”.

This happens regardless of whether:

  1. I’m using only one party and one gRPC stream to send the requests
  2. I’m using multiple parties on the same gRPC stream to send the requests
  3. I’m using multiple parties on gRPC streams initialized with different tokens to send the requests

I’m using the Scala bindings :see_no_evil:

The code is:

    // res (the (party, token) -> client pairs) and clq (the queue collecting
    // latencies) are set up elsewhere in the test harness.
    val start_time = System.nanoTime
    for (case ((party, token), client) <- res) {
      val iou = Iou(party, party, "timbucks", 100, Nil)
      val result = for {
        lc <- client
        _ <- lc.commandServiceClient.submitAndWaitForTransaction(
          command_service.SubmitAndWaitRequest(
            Some(
              Commands(
                ledgerId = lc.ledgerId.unwrap,
                workflowId = UUID.randomUUID().toString,
                commandId = UUID.randomUUID().toString,
                party = party.unwrap,
                commands = Seq(iou.create.command)
              )
            )
          ),
          token
        )
        end_time   = System.nanoTime
        difference = (end_time - start_time) / 1e6 // latency in milliseconds
      } yield {
        clq.add(difference)
        difference
      }
    }

My questions are twofold:

  1. why is this happening? and
  2. how can I get past it? I haven’t successfully located the commands per second config option, which seems like it might help.

I believe maximum rate refers to this setting:

    participant1.resources.set_resource_limits(
      ResourceLimits(
        // Allow for submitting at most 200 commands per second
        maxRate = Some(200),

        // Limit the number of in-flight requests to 500.
        // A "request" includes every transaction that needs to be validated by participant1:
        // - transactions originating from commands submitted to participant1
        // - transactions originating from commands submitted to different participants
        // The chosen configuration allows for processing up to 100 requests per second
        // with an average latency of 5 seconds.
        maxDirtyRequests = Some(500),
      )
    )
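
If I remember the console API correctly, you can also read the limits back to confirm what the participant is currently enforcing:

    // Returns the resource limits currently set on participant1.
    participant1.resources.resource_limits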

See the Scaling and Performance page in the Daml SDK 2.4.0 documentation.

As for a solution, have you tried batching commands? I did some small-scale testing where that seemed to help, but YMMV.
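
Roughly what I mean, reusing the Iou, client, and token setup from your snippet (a sketch only; adjust to your bindings version): a single Commands message can carry several create commands, so one submission, counted once against maxRate, creates many contracts.

    // Sketch: same lc / party / token as in the snippet above.
    val ious = (1 to 10).map(_ => Iou(party, party, "timbucks", 100, Nil))
    lc.commandServiceClient.submitAndWaitForTransaction(
      command_service.SubmitAndWaitRequest(
        Some(
          Commands(
            ledgerId = lc.ledgerId.unwrap,
            workflowId = UUID.randomUUID().toString,
            commandId = UUID.randomUUID().toString,
            party = party.unwrap,
            commands = ious.map(_.create.command) // 10 creates, one submission
          )
        )
      ),
      token
    )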

Well, I’m doing perf testing for Daml Hub, so batching commands unfortunately defeats the point. I will try bumping up the max rate and seeing if that makes a difference.
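
Concretely, I plan to re-run the console command from above with a higher rate (same shape, just a bigger maxRate):

    participant1.resources.set_resource_limits(
      ResourceLimits(
        maxRate = Some(400),
        maxDirtyRequests = Some(500),
      )
    )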

This does raise the question though: am I actually sending 200 commands without knowing it? It’s not apparent to me from the participant logs. How can I determine that?
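
For now, the best I can do is count on the client side, with something like this rough sketch wrapped around the load loop:

    import java.util.concurrent.atomic.AtomicLong

    // Call submitted.incrementAndGet() right before each submitAndWaitForTransaction
    // in the load loop, so this counts what the client actually sends.
    val submitted = new AtomicLong(0)

    // Background reporter: prints how many submissions went out in the last second.
    val reporter = new Thread(() => {
      var last = 0L
      while (true) {
        Thread.sleep(1000)
        val now = submitted.get()
        println(s"client-side submissions in the last second: ${now - last}")
        last = now
      }
    })
    reporter.setDaemon(true)
    reporter.start()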

I have now confirmed that if I increase maxRate to 400, I hit the limit at 40 requests per second.

Either

  1. my code has an off by 10 error I can’t see,
  2. the client is silently sending 9 additional commands for each one of mine, or
  3. there’s some very unintuitive behavior with this maxRate command.

Not knowing the answer to this question is blocking for the task I’m working on: it leaves me unable to trust the numbers I’m producing. I’m also pretty sure it’s not #1.

I’ve determined that it’s #3: very unintuitive behavior. In addition to maxRate, there’s a burst rate derived from it by dividing by 10, and I was exceeding that burst rate. Gonna file a GH issue about this.

Sorry that you’re having trouble with this.

The RateLimiter processes commands in time windows of at least 100ms. If the limit is 200 commands/s, it will accept up to 20 commands within every time slice of 100ms. Thus, if you continuously keep submitting commands, it will accept 200 commands/s.

In your test, it accepts only 20 commands, because you submit all 200 commands at once and then you give up.
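
To make the arithmetic concrete, here is a toy sketch of how such a windowed limiter behaves (an illustration only, not Canton’s actual implementation):

    // Toy model: the budget refills every 100ms with maxRate / 10 permits.
    final class WindowedLimiter(maxRatePerSecond: Int) {
      private val windowMillis = 100L
      private val perWindow    = maxRatePerSecond / 10 // 200/s => 20 per 100ms window
      private var windowStart  = System.currentTimeMillis()
      private var used         = 0

      def tryAccept(): Boolean = synchronized {
        val now = System.currentTimeMillis()
        if (now - windowStart >= windowMillis) { windowStart = now; used = 0 }
        if (used < perWindow) { used += 1; true } else false
      }
    }

    // Submitting 200 commands in a single burst: the whole burst fits into one
    // 100ms window, so only the first 20 are accepted and the rest are rejected.
    val limiter = new WindowedLimiter(200)
    println((1 to 200).count(_ => limiter.tryAccept())) // prints 20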
