this also seems to execute successfully, but then when I run a health status check, I see that the participant ID hash is not the public one from the key file.
I feel I am missing something here but I couldn’t find any other steps for importing a key in the docs.
Exporting the private key is indeed the essential first step. But when you start the participant again, you need to configure it to skip "auto-init". You can do that by setting `auto-init = false` in the participant's `init` configuration section.
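Concretely, in the configuration file this looks as follows (using the path confirmed later in this thread, `canton.participants.participant1.init.auto-init`):

```
canton {
  participants {
    participant1 {
      init.auto-init = false
    }
  }
}
```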
Otherwise, the participant will automatically perform the steps below (including creating new keys):
// I'm using the modified simple-topology.conf
nodes.local.start
// export participant1 secret key and load it into participant2
val secret = participant1.keys.secret.list(filterName = "participant1-identity").head
val namespace = secret.publicKey.fingerprint
participant1.keys.secret.`export`(namespace, Some("secret.key"))
// load secret key
participant2.keys.secret.load("secret.key", Some("idm key"))
// create root certificate (self-signed)
participant2.topology.namespace_delegations.authorize(TopologyChangeOp.Add, namespace, namespace, true)
// init id - run this after you created the namespace delegation, as otherwise
// the system will complain about being unable to vet the admin workflow
// packages
// note, the name string can be chosen freely
participant2.topology.init_id("mateus", namespace)
// create signing and encryption keys
val enc = participant2.keys.secret.generate_encryption_key()
val sig = participant2.keys.secret.generate_signing_key()
// assign new keys to this participant
Seq(enc, sig).foreach{ key =>
participant2.topology.owner_to_key_mappings.authorize(TopologyChangeOp.Add,
participant2.id, key.fingerprint, key.purpose)
}
// test to ensure that it works
participant2.domains.connect_local(mydomain)
participant2.health.ping(participant2)
I hope this is clear now. Let me know if you have questions. The topology management system is quite flexible and can be configured in many more ways than the "vanilla" auto-init setup. However, you need to be a bit careful if you step off the standard paths.
I get this error when creating the secret variable:
@ val secret = participant1.keys.secret.list(filterName = "participant1-identity").head
java.util.NoSuchElementException: head of empty list
scala.collection.immutable.Nil$.head(List.scala:629)
scala.collection.immutable.Nil$.head(List.scala:628)
ammonite.$sess.cmd2$.<clinit>(cmd2.sc:1)
I’m using the simple topology with the addition of canton.participants.participant1.init.auto-init = false
If you invoke list without the filterName, you should see four keys. Each of these keys has a "name" attached so that you have a hint about how that key is used, and when you list keys you can filter by that name.
The key created during init for the namespace is named "participant1-identity", so I used that name to filter for the key and used .head to take the first element of the resulting list.
Therefore, check using list without the filterName to see which keys are actually on the participant.
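For context, the `head of empty list` error above is just Scala's `List.head` failing on an empty result. Here is a plain-Scala sketch (nothing Canton-specific, purely illustrative) of the safer `headOption` pattern you can use while debugging which keys match:

```scala
// Plain Scala illustration: List.head throws NoSuchElementException on
// an empty list, which is exactly the error above when no key matches
// the given filterName. headOption lets you fail with a readable
// message instead.
object KeyLookup {
  // Return the first matching key name, or a diagnostic hint.
  def firstKey(keys: List[String]): String =
    keys.headOption.getOrElse("no key found: check the filterName")
}

object Main {
  def main(args: Array[String]): Unit = {
    println(KeyLookup.firstKey(Nil))                           // no match
    println(KeyLookup.firstKey(List("participant1-identity"))) // match
  }
}
```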
Please note that in my example I've mimicked the restart by simply exporting the key from p1 and adding it to p2, to illustrate the necessary steps.
However, you have already exported and imported the key, so you only need the steps after importing the key.
@ participant2.topology.namespace_delegations.authorize(TopologyChangeOp.Add, namespace, namespace, true)
cmd6.sc:1: value topology is not a member of com.digitalasset.canton.console.LocalParticipantReference
val res6 = participant2.topology.namespace_delegations.authorize(TopologyChangeOp.Add, namespace, namespace, true)
^
cmd6.sc:1: not found: value TopologyChangeOp
val res6 = participant2.topology.namespace_delegations.authorize(TopologyChangeOp.Add, namespace, namespace, true)
^
Compilation Failed
Is there anything I need to import? I’m using Canton Enterprise 0.27.0.
We’ve renamed identity to topology in v1.0, as we are managing much more than just identities with that component of the system. So the example I’ve given you will work with v1.0.0 and later.
You can replace [T|t]opology with [I|i]dentity in the commands, but it may be better to use the new release candidate canton-enterprise-1.0.0-rc1.tar.gz instead.
A question came up after using this setup for a while: after the key was imported and assigned to the participant, if we restart the node using the same approach, it fails as follows:
ERROR c.d.c.c.EnterpriseConsoleEnvironment - Request failed for participant.
GrpcClientError: INVALID_ARGUMENT/MappingAlreadyExists(CN18007-2): A matching topology mapping authorized with the same key already exists in this state; existing=IdentityStateElement(
id = FOSuwWcaOGtSl8LfilXouBCpNgW4Mn1c,
mapping = NamespaceDelegation(
namespace = "namespace",
target = SigningPublicKey(id = "key", format = Tink, scheme = Ed25519),
isRootDelegation = true
)
), authKey="key", participant=participantIssuer
Request: AuthorizeNamespaceDelegation(Add,None,"key",true)
Trailers: Metadata(content-type=application/grpc)
ERROR c.d.c.ServerRunner - Command execution failed.
(the namespace and key were replaced in this snippet)
The error doesn’t surprise me, however I’d like to know how to restart a participant with a key already imported.
I guess you are restarting the participant by invoking the bootstrap script again. Generally, bootstrap scripts are just scripts and aren’t idempotent; you only need them when initializing.
So you can re-write the script to be idempotent, i.e. check whether the node is already set up and, if it is, skip the initialisation.
@Matheus By default, all nodes auto-start, and the participants will attempt to reconnect to all registered domains (those that do not have manual-start = yes configured).
The exception is the simple-topology.conf, which sets:
canton {
  parameters {
    manual-start = yes
  }
}
in order to show how to manually start & stop nodes.
Admittedly, it’s a bit confusing, and rewriting the getting started guide to skip this is on our radar.
Thank you for this thread. It was very useful material.
With Daml 2.0 there was a change in the API.
The export method is now called download, and load is now called upload.
Additionally, the name of the secret key has been changed from "participant-identity" to "participant-namespace".
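In terms of the commands used earlier in this thread, the renames look roughly like this (a sketch only; these lines assume a running Canton console and are not runnable standalone):

```scala
// Canton console sketch of the 2.0 renames (names taken from this thread):
// old: participant1.keys.secret.`export`(fingerprint, Some("secret.key"))
participant1.keys.secret.download(fingerprint, Some("secret.key"))
// old: participant1.keys.secret.load("secret.key", Some("participant-identity"))
participant1.keys.secret.upload("secret.key", Some("participant-namespace"))
```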
For anyone who stumbles over this thread in the future, here is an idempotent bootstrap script that uses the new API.
It checks if the participant is already initialized.
If yes, then it skips the entire initialization.
If the secret.key file is not found on disk (e.g. during the initial startup of the participant) it will create all relevant keys and export the namespace key.
If the secret.key file is found on disk, it will upload it, and finish the initialization.
// skip if participant1 is already initialized. This means that
// if participant1 is restarted without a database wipe, it can
// continue without having to reload the namespace key.
if (!participant1.health.initialized) {
logger.info("Initializing Participant")
val secretKeyFileName = "secret.key"
val keyName = "participant1-namespace"
// If there is a backup of the secret key present, then load it.
// Otherwise create a new namespace key, and export it and its fingerprint.
if (new java.io.File(secretKeyFileName).exists()) {
logger.info("Loading namespace key from disk")
// load secret key
participant1.keys.secret.upload(secretKeyFileName, Some(keyName))
}
else {
logger.info("Creating new namespace key")
// create new namespace key
val key = participant1.keys.secret.generate_signing_key(keyName)
val keyFingerprint = key.fingerprint
logger.info(s"Exporting new namespace key to disk ($secretKeyFileName)")
//save namespace key to disk for storage in the key vault
participant1.keys.secret.download(keyFingerprint, Some(secretKeyFileName))
}
val identityKey = participant1.keys.secret.list(filterName = keyName).head
val namespace = identityKey.publicKey.fingerprint
// create root certificate (self-signed)
// This makes the identityKey the new root key
logger.info("Creating root certificate")
participant1.topology.namespace_delegations.authorize(
TopologyChangeOp.Add,
namespace,
namespace,
isRootDelegation = true
)
// init id - run this after you created the namespace delegation (i.e. root certificate),
// as otherwise the system will complain about being unable to vet the admin workflow
// packages
// note, the name string can be chosen freely
logger.info("Initializing Id")
participant1.topology.init_id("participant1", namespace)
// create signing and encryption keys
logger.info("Creating signing and encryption keys")
val enc = participant1.keys.secret.generate_encryption_key()
val sig = participant1.keys.secret.generate_signing_key()
// assign new keys to this participant
logger.info("Assigning keys to participant")
Seq(enc, sig).foreach{ key =>
participant1.topology.owner_to_key_mappings.authorize(TopologyChangeOp.Add,
participant1.id, key.fingerprint, key.purpose)
}
// connect to domain
logger.info("Connecting to domain(s)")
participant1.domains.connect_local(mydomain)
// test to ensure that it works
participant1.health.ping(participant1)
}
Hi @Ratko_Veprek and @Matheus, I am trying to follow this post to do the same. I’m stuck at the first step of exporting the key; this is the error I got. I now suspect I am using the incorrect namespace. Where should I get the namespace in a Canton setup?
The Canton setup is local; participant1 is connected to mydomain.
@
participant1.keys.secret.download("12205c9…", Some("/participantKey.key"))
java.lang.IllegalArgumentException: Problem while exporting key pair. Error: Error retrieving private key [12205c91c88a…] no private key found for [12205c91c88a…]
com.digitalasset.canton.console.commands.LocalSecretKeyAdministration.run(VaultAdministration.scala:104)
com.digitalasset.canton.console.commands.LocalSecretKeyAdministration.$anonfun$download$1(VaultAdministration.scala:193)
com.digitalasset.canton.tracing.TraceContext$.withNewTraceContext(TraceContext.scala:128)
com.digitalasset.canton.console.commands.LocalSecretKeyAdministration.download(VaultAdministration.scala:170)
ammonite.$sess.cmd76$.<clinit>(cmd76.sc:1)
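Based on the earlier posts in this thread, the fingerprint passed to download must belong to a key whose private half is actually stored on the participant. A hedged sketch of how to find it (Canton console, not runnable standalone; the key name is the one used earlier in this thread and may differ in your setup):

```scala
// List all secret keys on the participant to see which fingerprints
// actually have a private key stored locally.
participant1.keys.secret.list().foreach { k =>
  println(s"${k.name} -> ${k.publicKey.fingerprint}")
}

// Then download using a fingerprint taken from that listing, e.g. the
// namespace key created during init (name per the earlier posts):
val key = participant1.keys.secret.list(filterName = "participant1-namespace").head
participant1.keys.secret.download(key.publicKey.fingerprint, Some("secret.key"))
```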