Canton `participant.db.migrate` questions

Just looking for a sanity check from team Canton:

While getting Canton up and running on Daml Hub, we upgraded Canton to 2.2.0, which requires a schema migration. Because we weren’t handling migrations, this broke our existing Canton test instances.

To rectify this, my plan is to modify our participant + domain bootstrap scripts to include

participant.stop
participant.db.migrate
participant.start

so that Canton schema changes are applied to our database automatically.

Are there obvious problems with this approach, or a better way? I didn’t see any way to do this without starting the participant, stopping it, and then starting it again.

In later iterations we’d like to implement rolling updates and give users visibility into when downtime will occur, rather than doing this as part of a startup script, but for now we’re just going for “the simplest thing that will reliably work.”

Hi Daniel,

If there are pending migrations, then starting a participant node will fail. I suggest you start Canton with the CLI option --manual-start, which prevents the participant nodes from starting automatically. Then, in your bootstrap script, run participant.db.migrate and then start the participant (and any other nodes you may have in the config).
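
A minimal sketch of that flow, assuming the node is called participant1 as later in this thread and that the script is passed in at startup via the --bootstrap option (the file name migrate-and-start.canton is just illustrative, and the daemon invocation should be adjusted to your own setup):

bin/canton daemon --config daml-hub-participant.conf --manual-start --bootstrap migrate-and-start.canton

with migrate-and-start.canton containing just:

participant1.db.migrate
participant1.start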

And as you said about maintenance windows, we do recommend running the DB migrations explicitly rather than automatically on each start.


For posterity’s sake:

I ended up creating a file called migrate.scala that looked like:

participant1.db.migrate

and then running the command:

bin/canton run --config daml-hub-participant.conf --manual-start migrate.scala

This needs to be run while the node is not operational.
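
If the config also defines a local domain node (the original post mentions participant + domain bootstrap scripts), the same script can migrate it in the same pass. A sketch only, assuming a domain named mydomain that exposes the same db.migrate command:

participant1.db.migrate
mydomain.db.migrate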

What’s the best way to do this migration when the participant is running in a Kubernetes pod?

Running into this error when running with --manual-start:

{"timeStamp":"2022-10-04T16:26:54.896Z","message":"Running CoordinatedShutdown with reason [ActorSystemTerminateReason]","logger":"akka.actor.CoordinatedShutdown","thread":"canton-env-execution-context-17","level":"INFO"}

{"timeStamp":"2022-10-04T16:27:14.845Z","message":"Starting Canton version 2.3.4","logger":"com.digitalasset.canton.CantonEnterpriseApp$","thread":"main","level":"INFO"}

{"timeStamp":"2022-10-04T16:27:15.787Z","message":"Can not determine user home directory using the java system property user.home \n (is set to /home/app). Please set it \non jvm startup using -Duser.home=…","logger":"com.digitalasset.canton.console.AmmoniteConsoleConfig$","thread":"main","level":"WARN"}

{"timeStamp":"2022-10-04T16:27:16.410Z","message":"Deriving 2 as number of threads from 'sys.runtime.availableProcessors()'. Please use '-Dscala.concurrent.context.numThreads' to override.","logger":"com.digitalasset.canton.environment.EnterpriseEnvironment","thread":"main","level":"INFO"}

{"timeStamp":"2022-10-04T16:27:16.763Z","message":"Slf4jLogger started","logger":"akka.event.slf4j.Slf4jLogger","thread":"canton-env-execution-context-17","level":"INFO"}

{"timeStamp":"2022-10-04T16:27:18.246Z","message":"Manual start requested.","logger":"com.digitalasset.canton.environment.EnterpriseEnvironment","thread":"main","level":"INFO"}

Compiling (synthetic)/ammonite/predef/ArgsPredef.sc

Compiling /(console)

sh: /dev/tty: No such device or address

Nonzero exit value: 1

This is probably best as another question. (Un)fortunately, we run ours in Kubernetes and I haven’t run into this. It looks like it’s unable to run because it needs user.home set, so you could try setting that.

Thanks Daniel, I created a new ticket here: Canton Manual Start Failing in Kubernetes Pod

When I tested locally without setting user.home, it was still able to run and start up. What’s the best way to set this?

(For posterity’s sake again: per Canton Manual Start Failing in Kubernetes Pod - #2 by stephenwsun, you can pass the --no-tty argument when running Canton to prevent this.)
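
Combined with the earlier command, the migration step run inside the pod would then look something like this (same config and script names as above, with --no-tty added):

bin/canton run --config daml-hub-participant.conf --manual-start --no-tty migrate.scala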