While uploading multiple DAR files to my Canton participant, one of the uploads failed with the following error: An exception was thrown during the upload-dar command - GRPCIOTimeout


The whole setup is running on minikube.
We recently encountered an issue while uploading DAR files to our Canton participant, and I wanted to share my experience.

Issue:

While uploading multiple DAR files to my Canton participant, one of the uploads failed with the following error:

An exception was thrown during the upload-dar command
- GRPCIOTimeout

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/init/.daml/dist/sor-mvp-init-1.2.22.dar

Uploading /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/init/.daml/dist/sor-mvp-init-1.2.22.dar to 127.0.0.1:4001

Error

An exception was thrown during the upload-dar command

  • GRPCIOTimeout

One reason for this to occur is if the size of DAR file being uploaded exceeds the gRPC maximum message size. The default value for this is 4Mb, but it may be increased when the ledger is (re)started. Please check with your ledger operator.
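For reference, a quick way to compare the failing DAR against that 4 MiB (4194304-byte) default is a small shell check. This is only a sketch, reusing the path from the command above; note that BSD/macOS stat and GNU stat take different flags:

    # Print the DAR size in bytes and compare it against the 4 MiB gRPC default
    DAR=/Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/init/.daml/dist/sor-mvp-init-1.2.22.dar
    SIZE=$(stat -f%z "$DAR" 2>/dev/null || stat -c%s "$DAR")   # macOS first, then GNU/Linux
    echo "DAR size: $SIZE bytes"
    [ "$SIZE" -gt 4194304 ] && echo "Exceeds the 4 MiB default gRPC message size"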

At the same time, the port-forward command fails:

kubectl port-forward -n canton svc/zsor-participant-canton-participant 4001:4001 4002:4002

Error

The connection is lost and the ledger participant pod restarts when we upload the DAR:

E0324 19:23:33.020323 27363 portforward.go:424] "Unhandled Error" err=<
an error occurred forwarding 4001 -> 4001: error forwarding port 4001 to pod 83fdf736d871db5723da1ca3a0e9de176c1e2ff99bcdb4540ad8170183146e97, uid : exit status 1: 2025/03/24 13:53:33 socat[28467] E connect(5, AF=2 127.0.0.1:4001, 16): Connection refused

error: lost connection to pod
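For what it is worth, kubectl port-forward does not re-establish itself when the target pod restarts, so the forward dying together with the pod is expected. One common workaround is to wrap it in a retry loop; a minimal sketch, reusing the namespace and service from the command above:

    # Re-establish the port-forward whenever it drops (e.g. when the pod restarts)
    while true; do
      kubectl port-forward -n canton svc/zsor-participant-canton-participant 4001:4001 4002:4002
      echo "port-forward exited, retrying in 2s..." >&2
      sleep 2
    done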

And when I run this command:

daml packages list --host 127.0.0.1 --port 4001

daml-helper: GRPCIOBadStatusCode StatusResourceExhausted (StatusDetails {unStatusDetails = "Received message larger than max (6050326 vs. 4194304)"}) fromList
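Those two numbers are byte counts: the response to daml packages list was 6050326 bytes (about 5.8 MiB), while the client's own gRPC limit is still the 4194304-byte (4 MiB) default, so this particular error is hit on the CLI side rather than on the participant. Some SDK versions expose a flag to raise the client-side limit; the flag name below is an assumption, so check the command's --help output before relying on it:

    # Hypothetical: raise the CLI-side gRPC message limit, if your SDK supports this flag
    # (verify with `daml ledger upload-dar --help` first)
    daml ledger upload-dar --host=127.0.0.1 --port=4001 \
      --max-inbound-message-size=209715200 \
      /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/init/.daml/dist/sor-mvp-init-1.2.22.dar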

My Setup:

  • Participant Configuration:
ledger-api {
    address = "0.0.0.0"
    port = 4001
    postgres-data-source.synchronous-commit = off
    max-inbound-message-size = 209715200
    command-service.max-commands-in-flight = 100
    max-contract-state-cache-size = 100
    max-contract-key-state-cache-size = 100
    max-transactions-in-memory-fan-out-buffer-size = 100
}
  • DAR File Sizes (a size-check sketch follows this list):
    • sor-mvp-data-dependencies-1.2.6.dar (uploaded successfully)
    • sor-mvp-models-1.2.22.dar (uploaded successfully)
    • sor-mvp-integration-acord-1.2.0.dar (uploaded successfully)
    • sor-mvp-init-1.2.22.dar (failed with GRPCIOTimeout)
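As mentioned above, a minimal sketch for checking all of these sizes at once, assuming the DARs live under the same SOR-DAML/.packages tree used in the upload commands further down:

    # List every built DAR with its size in bytes (stat flags differ on macOS vs GNU/Linux)
    find /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages \
      -path '*/.daml/dist/*.dar' | while read -r dar; do
        size=$(stat -f%z "$dar" 2>/dev/null || stat -c%s "$dar")
        printf '%10s bytes  %s\n' "$size" "$dar"
    done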

Possible Causes:

1. Increase gRPC Maximum Message Size

I have set max-inbound-message-size = 209715200 as we discussed, and we are using the port-forward command while we upload the DAR:

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/init/.daml/dist/sor-mvp-init-1.2.22.dar

Previously this command returned the package list shown below, but now it no longer works:

daml packages list --host 127.0.0.1 --port 4001

Available packages:

057eed1fd48c238491b8ea06b9b5bf85a5d4c9275dd3f6183e0e6b01730cc2ba

10e0333b52bba1ff147fc408a6b7d68465b157635ee230493bd6029b750dcb05 (daml-stdlib-DA-Action-State-Type-1.0.0)

11bc90545e0175ecc241a88eb3adc8eeccf66f196c8f01da7afa30070a0fb2f0 (daml-script-2.8.1)

38e6274601b21d7202bb995bc5ec147decda5a01b68d57dda422425038772af7 (daml-prim-DA-Internal-NatSyn-1.0.0)

3f4deaf145a15cdcfa762c058005e2edb9baa75bb7f95a4f8f6f937378e86415 (daml-prim-DA-Exception-AssertionFailed-1.0.0)

40f452260bef3f29dede136108fc08a88d5a5250310281067087da6f0baddff7

518032f41fd0175461b35ae0c9691e08b4aea55e62915f8360af2cc7a1f2ba6c

57b5c520512c24035057aa4c783cb7ac7f3f49db29806280962e188be7aadb66 (daml-prim-0.0.0)

5921708ce82f4255deb1b26d2c05358b548720938a5a325718dc69f381ba47ff (daml-stdlib-DA-Stack-Types-1.0.0)

65921e553a353588e950cbc87e98a127730e63295f7ad8d3adae952ef0133b3e (AdminWorkflows-0.27.0)

6839a6d3d430c569b2425e9391717b44ca324b88ba621d597778811b2d05031d

6c2c0667393c5f92f1885163068cd31800d2264eb088eb6fc740e11241b2bf06

733e38d36a2759688a4b2c4cec69d48e7b55ecc8dedc8067b815926c917a182a

76bf0fd12bd945762a01f8fc5bbcdfa4d0ff20f8762af490f8f41d6237c6524f

852d8e3a8ccf952acc73e17522846bc1eb41498e840d637e519ddcca7dbc7671 (daml-stdlib-1.16.0.20210802.7499.0)

86828b9843465f419db1ef8a8ee741d1eef645df02375ebf509cdc8c3ddd16cb (daml-prim-DA-Exception-GeneralError-1.0.0)

8a7806365bbd98d88b4c13832ebfa305f6abaeaf32cfa2b7dd25c4fa489b79fb

90cba7c57711ef02ef53935f378e4282e4c17f3693e5758bf7886f68232b72d9 (daml-stdlib-2.8.1)

97b883cd8a2b7f49f90d5d39c981cf6e110cf1f1c64427a28a6d58ec88c43657 (daml-stdlib-DA-Set-Types-1.0.0)

99a2705ed38c1c26cbb8fe7acf36bbf626668e167a33335de932599219e0a235

a566728bb2d4ad0103eb11ff8140296f4cea4fc94f1f95ddc6c3e4f983d107f1 (daml-prim-0.0.0)

bfcd37bd6b84768e86e432f5f6c33e25d9e7724a9d42e33875ff74f6348e733f

c1f1f00558799eec139fb4f4c76f95fb52fa1837a5dd29600baa1c8ed1bdccfd

cb0552debf219cc909f51cbb5c3b41e9981d39f8f645b1f35e2ef5be2e0b858a (daml-prim-DA-Exception-ArithmeticError-1.0.0)

cc348d369011362a5190fe96dd1f0dfbc697fdfd10e382b9e9666f0da05961b7

d14e08374fc7197d6a0de468c968ae8ba3aadbf9315476fd39071831f5923662

d58cf9939847921b2aab78eaa7b427dc4c649d25e6bee3c749ace4c3f52f5c97

db1ed657133ff274c6a63c54df8cebfba63113c6ccac38d4b504994b8896b00b (AdminWorkflowsWithVacuuming-2.8.1)

e22bce619ae24ca3b8e6519281cb5a33b64b3190cc763248b4c3f9ad5087a92c

e491352788e56ca4603acc411ffe1a49fefd76ed8b163af86cf5ee5f4c38645b

e4cc67c3264eba4a19c080cac5ab32d87551578e0f5f58b6a9460f91c7abc254 (daml-stdlib-DA-Random-Types-1.0.0)

f20de1e4e37b92280264c08bf15eca0be0bc5babd7a7b5e574997f154c00cb78 (daml-prim-DA-Exception-PreconditionFailed-1.0.0)

I run the following commands; some uploads succeed and some fail (a scripted version is sketched after this list of commands):

Data dependencies

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/data-dependencies/.daml/dist/sor-mvp-data-dependencies-1.2.6.dar

Models

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/models/.daml/dist/sor-mvp-models-1.2.22.dar

ACORD integration

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/integrations/acord/.daml/dist/sor-mvp-integration-acord-1.2.0.dar

Init

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/init/.daml/dist/sor-mvp-init-1.2.22.dar

Utilities

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/utilities/.daml/dist/sor-mvp-utilities-1.2.0.dar

ACORD translation

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/acord-translation/.daml/dist/sor-mvp-acord-translation-1.2.22.dar

Lifecycle

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/lifecycle/.daml/dist/sor-mvp-lifecycle-1.2.22.dar

Policy

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/policy/.daml/dist/sor-mvp-policy-1.2.22.dar

Product

daml ledger upload-dar --host=127.0.0.1 --port=4001 /Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages/product/.daml/dist/sor-mvp-product-1.2.22.dar
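As noted above, a sketch of running these uploads as a single script so it is immediately clear which DAR fails first; the paths and ordering are exactly the ones from the commands above:

    # Upload the DARs in order and stop at the first failure
    BASE=/Users/zinnia_india/Documents/qa_deployment/deleting_daml_backup/SOR-DAML/.packages
    for dar in \
      "$BASE/data-dependencies/.daml/dist/sor-mvp-data-dependencies-1.2.6.dar" \
      "$BASE/models/.daml/dist/sor-mvp-models-1.2.22.dar" \
      "$BASE/integrations/acord/.daml/dist/sor-mvp-integration-acord-1.2.0.dar" \
      "$BASE/init/.daml/dist/sor-mvp-init-1.2.22.dar" \
      "$BASE/utilities/.daml/dist/sor-mvp-utilities-1.2.0.dar" \
      "$BASE/acord-translation/.daml/dist/sor-mvp-acord-translation-1.2.22.dar" \
      "$BASE/lifecycle/.daml/dist/sor-mvp-lifecycle-1.2.22.dar" \
      "$BASE/policy/.daml/dist/sor-mvp-policy-1.2.22.dar" \
      "$BASE/product/.daml/dist/sor-mvp-product-1.2.22.dar"
    do
      echo "Uploading $dar"
      daml ledger upload-dar --host=127.0.0.1 --port=4001 "$dar" || { echo "Failed on $dar" >&2; break; }
    done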

What is the size of sor-mvp-init-1.2.22.dar?

The package size is 5.1 MB.

Thank you for confirming the size of the large DAR file. It looks like your max-inbound-message-size can accommodate that.

You also mention…

I think the next clue might be available in the logs for the ledger participant. Does the participant log mention why it is restarting?
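Not an authoritative recipe, but these are the standard kubectl commands that usually reveal why a pod restarted; the namespace is taken from the port-forward command above, and <participant-pod-name> is a placeholder for the actual pod name:

    # Find the participant pod and check the restart reason
    kubectl get pods -n canton
    # Last state, exit code (137 usually means OOMKilled) and recent events
    kubectl describe pod -n canton <participant-pod-name>
    # Logs from the previous (crashed) container instance
    kubectl logs -n canton <participant-pod-name> --previous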

What I observed is that the other DAR files are in the range of 2 to 3 MB, but DARs of 4 MB or larger (like this 5 MB one) fail to upload.

These are the logs, but I could not find any error in them:

Bind(stmt=S_2,portal=null)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.109Z”,“@version”:“1”,“message”:" FE=> Execute(portal=null,limit=1)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.109Z”,“@version”:“1”,“message”:" FE=> Sync",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.109Z”,“@version”:“1”,“message”:" <=BE BindComplete [unnamed]“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.109Z”,“@version”:“1”,“message”:" <=BE EmptyQuery",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.109Z”,“@version”:“1”,“message”:" <=BE ReadyForQuery(I)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.615Z”,“@version”:“1”,“message”:“Scheduled checking…”,“logger_name”:“c.d.c.p.i.h.PollingChecker:participant=participant”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.616Z”,“@version”:“1”,“message”:“Acquiring lock PGLockId(696192769) Exclusive”,“logger_name”:“c.d.c.p.i.h.HaCoordinator$:participant=participant”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.619Z”,“@version”:“1”,“message”:" simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@2a0883a9, maxRows=0, fetchSize=0, flags=16",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.621Z”,“@version”:“1”,“message”:" FE=> Bind(stmt=S_1,portal=null,$1=<696192769>,type=INT8)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.621Z”,“@version”:“1”,“message”:" FE=> Execute(portal=null,limit=0)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.621Z”,“@version”:“1”,“message”:" FE=> Sync",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.622Z”,“@version”:“1”,“message”:" <=BE BindComplete [unnamed]“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.623Z”,“@version”:“1”,“message”:" <=BE DataRow(len=1)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.623Z”,“@version”:“1”,“message”:" <=BE CommandStatus(SELECT 1)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.623Z”,“@version”:“1”,“message”:" <=BE ReadyForQuery(I)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.623Z”,“@version”:“1”,“message”:" getObject columnIndex: 1",“logger_name”:“o.postgresql.jdbc.PgConnection”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.623Z”,“@version”:“1”,“message”:" getBoolean columnIndex: 1",“logger_name”:“o.postgresql.jdbc.PgConnection”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.624Z”,“@version”:“1”,“message”:“Check successful.”,“logger_name”:“c.d.c.p.i.h.PollingChecker:participant=participant”,“thread_name”:“ha-polling-checker-timer-thread”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.629Z”,“@version”:“1”,“message”:“Checking lock status of 696192806 at 2025-03-24T15:48:44.628585Z”,“logger_name”:“c.d.c.r.DbLockPostgres:participant=participant/connId=pool-2/lockId=696192806”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.629Z”,“@version”:“1”,“message”:“#1: StreamingInvokerAction$HeadOptionAction [select 1 from pg_locks where locktype = ‘advisory’ and objid = ? and granted = true and pid = pg_backend_pid() limit 1]”,“logger_name”:“s.basic.BasicBackend.action”,“thread_name”:“participant-wallclock-0”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.629Z”,“@version”:“1”,“message”:“Preparing statement: select 1 from pg_locks where locktype = ‘advisory’ and objid = ? and granted = true and pid = pg_backend_pid() limit 1”,“logger_name”:“s.jdbc.JdbcBackend.statement”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:“Executing prepared statement: select 1 from pg_locks where locktype = ‘advisory’ and objid = 696192806 and granted = true and pid = pg_backend_pid() limit 1”,“logger_name”:“s.jdbc.JdbcBackend.statement”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:“Executing prepared statement: select 1 from pg_locks where locktype = ‘advisory’ and objid = 696192806 and granted = true and pid = pg_backend_pid() limit 1”,“logger_name”:“s.j.J.statementAndParameter”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:“/-----------\”,“logger_name”:“s.jdbc.JdbcBackend.parameter”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:“| 1 |”,“logger_name”:“s.jdbc.JdbcBackend.parameter”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:“| Int |”,“logger_name”:“s.jdbc.JdbcBackend.parameter”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:“|-----------|”,“logger_name”:“s.jdbc.JdbcBackend.parameter”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:“| 696192806 |”,“logger_name”:“s.jdbc.JdbcBackend.parameter”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:“\-----------/”,“logger_name”:“s.jdbc.JdbcBackend.parameter”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:" simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@662db403, maxRows=1, fetchSize=0, flags=16",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:" FE=> Bind(stmt=S_2,portal=null,$1=<696192806>,type=INT4)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:" FE=> Execute(portal=null,limit=1)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.630Z”,“@version”:“1”,“message”:" FE=> Sync",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:" <=BE BindComplete [unnamed]“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:" <=BE DataRow(len=4)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:" <=BE PortalSuspended",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:" <=BE ReadyForQuery(I)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:“Execution of prepared statement took 859µs”,“logger_name”:“s.jdbc.JdbcBackend.benchmark”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:" getObject columnIndex: 1",“logger_name”:“o.postgresql.jdbc.PgConnection”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:" getInt columnIndex: 1",“logger_name”:“o.postgresql.jdbc.PgConnection”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:" getInt columnIndex: 1",“logger_name”:“o.postgresql.jdbc.PgConnection”,“thread_name”:“db-lock-pool-ec-2”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:“/----------\”,“logger_name”:“s.jdbc.StatementInvoker.result”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:“| 1 |”,“logger_name”:“s.jdbc.StatementInvoker.result”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:“| ?column? |”,“logger_name”:“s.jdbc.StatementInvoker.result”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:“|----------|”,“logger_name”:“s.jdbc.StatementInvoker.result”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:“| 1 |”,“logger_name”:“s.jdbc.StatementInvoker.result”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:“\----------/”,“logger_name”:“s.jdbc.StatementInvoker.result”,“thread_name”:“db-lock-pool-ec-2”,“level”:“DEBUG”,“level_value”:10000}
{“@timestamp”:“2025-03-24T15:48:44.631Z”,“@version”:“1”,“message”:“The operation ‘com.digitalasset.canton.resource.DbLock.checkLock’ was successful. No need to retry. “,“logger_name”:“c.d.c.r.DbLockPostgres:participant=participant/connId=pool-2/lockId=696192806”,“thread_name”:“canton-env-ec-157”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp”:“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:“Lock 696192806 still acquired at 2025-03-24T15:48:44.628585Z”,“logger_name”:“c.d.c.r.DbLockPostgres:participant=participant/connId=pool-2/lockId=696192806”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:“Checking if connection com.digitalasset.canton.resource.KeepAliveConnection@1d1ce9f1 is valid”,“logger_name”:“c.d.c.r.DbLockedConnection:participant=participant/connId=pool-2”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:" simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@2def5453, maxRows=0, fetchSize=0, flags=20",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:" FE=> Bind(stmt=S_1,portal=null)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:" FE=> Execute(portal=null,limit=1)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:" FE=> Sync",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:" <=BE BindComplete [unnamed]“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:" <=BE EmptyQuery",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:" <=BE ReadyForQuery(I)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:“Connection com.digitalasset.canton.resource.KeepAliveConnection@1d1ce9f1 is valid”,“logger_name”:“c.d.c.r.DbLockedConnection:participant=participant/connId=pool-2”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.632Z”,“@version”:“1”,“message”:“Locked connection is healthy”,“logger_name”:“c.d.c.r.DbLockedConnection:participant=participant/connId=pool-2”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:44.633Z”,“@version”:“1”,“message”:“Checking storage health at 2025-03-24T15:48:44.632964Z”,“logger_name”:“c.d.c.r.DbStorageMulti:participant=participant”,“thread_name”:“participant-wallclock-0”,“level”:“DEBUG”,“level_value”:10000,“trace-id”:“0f75faeeee97423ccce4f0640fc73437”}
{“@timestamp”:“2025-03-24T15:48:44.633Z”,“@version”:“1”,“message”:“Scheduling the next health check at 2025-03-24T15:48:49.633355Z”,“logger_name”:“c.d.c.r.DbStorageMulti:participant=participant”,“thread_name”:“participant-wallclock-0”,“level”:“DEBUG”,“level_value”:10000,“trace-id”:“0f75faeeee97423ccce4f0640fc73437”}
{“@timestamp”:“2025-03-24T15:48:44.633Z”,“@version”:“1”,“message”:“Checking connection pool health at 2025-03-24T15:48:44.633618Z”,“logger_name”:“c.d.c.r.DbLockedConnectionPool:participant=participant”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000,“trace-id”:“3dd01ee78f1c0d92323d33c638eae91e”}
{“@timestamp”:“2025-03-24T15:48:44.633Z”,“@version”:“1”,“message”:“Connection pool remains active”,“logger_name”:“c.d.c.r.DbLockedConnectionPool:participant=participant”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000,“trace-id”:“3dd01ee78f1c0d92323d33c638eae91e”}
{“@timestamp”:“2025-03-24T15:48:44.633Z”,“@version”:“1”,“message”:“Connection pool is healthy”,“logger_name”:“c.d.c.r.DbLockedConnectionPool:participant=participant”,“thread_name”:“participant-wallclock-0”,“level”:“TRACE”,“level_value”:5000,“trace-id”:“3dd01ee78f1c0d92323d33c638eae91e”}
{“@timestamp”:“2025-03-24T15:48:45.103Z”,“@version”:“1”,“message”:" simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@357dde6f, maxRows=0, fetchSize=0, flags=20",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-indexer#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:45.103Z”,“@version”:“1”,“message”:" FE=> Bind(stmt=S_2,portal=null)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-indexer#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:45.104Z”,“@version”:“1”,“message”:" FE=> Execute(portal=null,limit=1)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-indexer#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:45.104Z”,“@version”:“1”,“message”:" FE=> Sync",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-indexer#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:45.104Z”,“@version”:“1”,“message”:" <=BE BindComplete [unnamed]“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-indexer#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:45.104Z”,“@version”:“1”,“message”:" <=BE EmptyQuery",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-indexer#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:45.104Z”,“@version”:“1”,“message”:" <=BE ReadyForQuery(I)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-indexer#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:45.110Z”,“@version”:“1”,“message”:" simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@679e72, maxRows=0, fetchSize=0, flags=20",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:45.110Z”,“@version”:“1”,“message”:" FE=> Bind(stmt=S_2,portal=null)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:45.110Z”,“@version”:“1”,“message”:" FE=> Execute(portal=null,limit=1)“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:45.110Z”,“@version”:“1”,“message”:" FE=> Sync",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:45.110Z”,“@version”:“1”,“message”:" <=BE BindComplete [unnamed]“,“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{”@timestamp":“2025-03-24T15:48:45.111Z”,“@version”:“1”,“message”:" <=BE EmptyQuery",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}
{“@timestamp”:“2025-03-24T15:48:45.111Z”,“@version”:“1”,“message”:" <=BE ReadyForQuery(I)",“logger_name”:“o.p.core.v3.QueryExecutorImpl”,“thread_name”:“DataSourceConnectionProvider-api-server#healthPoller”,“level”:“TRACE”,“level_value”:5000}

The pod goes into CrashLoopBackOff, then exits with code 137 (meaning it needs more memory), and then comes back to a normal state.
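Exit code 137 means the container was killed for exceeding its memory limit (OOMKilled). A minimal sketch for inspecting and raising that limit, assuming the participant runs as a Deployment named after the service above; in a Helm-based install you would normally change this through the chart's values instead:

    # Inspect the current requests/limits on the participant container
    kubectl get deployment -n canton zsor-participant-canton-participant \
      -o jsonpath='{.spec.template.spec.containers[0].resources}'
    # Raise them (example values, not a sizing recommendation)
    kubectl set resources deployment -n canton zsor-participant-canton-participant \
      --requests=memory=4Gi --limits=memory=6Gi

Keep in mind that the JVM heap configured for the participant should stay comfortably below the container limit, otherwise the kernel can still OOM-kill the pod even after the limit is raised.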


Please share the size of the DAR file in MB.

This is not (yet?) my area of expertise. I wonder if the guidance at Assign Memory Resources to Containers and Pods | Kubernetes would help?

This means we have to increase memory to avoid restarts. I have currently given 6 GB of memory to the participant ledger server, but the Daml team recommends that 2 GB should be enough. Since our DAR files are in the 5.6 MB range, we increased it to avoid this exit code. Doesn't that mean we are consuming more memory than we should?