Pointing to private IP and port combination using domains.connect within Azure VMs on linked vnets

Hi all,

I’m trying to connect a participant node to a domain node, each hosted on an Azure Virtual Machine in one of two peered VNets, using the domains.connect console command.

When I try to specify the privateIP:port combination in the command,
e.g. domains.connect("domainAlias","")

I receive an error:
java.net.URISyntaxException: Illegal character in scheme name at index 0:

The private IP address cannot be used with an http:// prefix, and it seems the Canton console command does not accept the IP:port combination without one.

The privateIP:port combination is proven to work for remote console access when I include it as parameters in the Canton .conf file, like so:

address =
port = 5050
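
For context, a remote-node entry like this usually sits inside the canton block of the .conf; the sketch below uses a hypothetical node name, private IP and ports, following the remote-participant layout from the Canton configuration docs:

```hocon
canton {
  remote-participants {
    // hypothetical node name, address and ports; substitute your own
    remoteParticipant1 {
      admin-api {
        address = "10.0.0.4"
        port = 5012
      }
      ledger-api {
        address = "10.0.0.4"
        port = 5011
      }
    }
  }
}
```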

so I’m wondering how one can get domains.connect to accept it as well.

Is there a way to specify the port number when using a private IP address in the domains.connect command, similar to how one would with a URL carrying an http:// prefix?
e.g. domains.connect("domainAlias","http://localhost:5050")?

Thank you.

What error do you get if you try this?


I don’t see why a private IP address should not work there.

Using participant1.domains.connect("","")

ERROR c.d.c.c.EnterpriseConsoleEnvironment - Request failed for participant1.
  GrpcClientError: UNAVAILABLE/DOMAIN_IS_NOT_AVAILABLE(1,34fdba02): Cannot connect to domain Domain 'domain'
  Request: ConnectDomain(Domain 'domain',false)
  CorrelationId: 34fdba02896418e4a73843e7a5ba9ca6
  RetryIn: 1 second
  Context: HashMap(participant -> participant1, domain -> domain, reason -> Request failed for domain. Is the server running? Did you configure the server address as Are you using the right TLS settings?
  GrpcServiceUnavailable: UNAVAILABLE/io exception
  Request: get domain id
  Causes: Connection refused: /
    Connection refused, alias -> Domain 'domain')
  Command ParticipantAdministration$domains$.connect invoked from cmd1.sc:1
com.digitalasset.canton.console.CommandFailure: Command execution failed.

My bad, I just noticed that the request does get routed to so it seems that it does detect it as a legitimate address, it just resolves to port 443 instead of port 5050. Would there be any reason this happens when connecting the participant node to the domain?

I didn’t experience this issue when bootstrapping the separate domain nodes or when I accessed the individual nodes using a remote node, all on separate VMs with the same network connectivity privileges that exist between the participant node and domain node I am trying to connect to.

This sounds like there is already a configuration stored in either the console or the database. You should probably follow the remote sequencer connection steps, where you have full control over the protocol, IP and port:

Section Connect Using Register
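
For reference, the register route from that section looks roughly like the sketch below. The alias and address are hypothetical, and the exact constructor signatures are an assumption based on the Canton 2.x console API, so check the docs for your version:

```scala
// Hypothetical alias and private IP; assumes the Canton 2.x console API.
val connection = GrpcSequencerConnection.tryCreate("http://10.0.0.4:5050")
val config = DomainConnectionConfig(DomainAlias.tryCreate("mydomain"), connection)
participant1.domains.register(config)
// Afterwards, connect by alias only:
participant1.domains.reconnect("mydomain")
```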

Edit: could you inspect the registered domains?

// Stored configuration
// Probably empty if you have no working connections
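
Assuming the standard Canton console command for this (and a participant reference named participant1), the inspection would look like:

```scala
// Shows the domain connection configurations stored in the participant's database.
participant1.domains.list_registered()
```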

Thank you @davidd, that sounds about right. I attempted to establish a connection with a participant node using an in-memory storage configuration instead, and it connected successfully.

I’ll drop all my databases, recreate them and retry again. Will update here on the results.

I encountered another issue below when I attempted a domain.setup.bootstrap command from the remote console connected to the sequencer, mediator and domain manager nodes.

ERROR c.d.c.c.EnterpriseConsoleEnvironment - Request failed for remoteDomain.
  GrpcRequestRefusedByServer: NOT_FOUND/NO_APPROPRIATE_SIGNING_KEY_IN_STORE(11,7d5dae9b): Could not find an appropriate signing key to issue the topology transaction
  Request: AuthorizeOwnerToKeyMapping(Add,None,SEQ::domain::122008a032ae...,122046a3a3c4...,signing,false)
  CorrelationId: 7d5dae9bfe722354e7dcf74793a8b93b
  Context: Map(candidates -> List(), domain-manager -> domain)
  Command SetupAdministration$Setup.bootstrap_domain invoked from cmd4.sc:1
com.digitalasset.canton.console.CommandFailure: Command execution failed.

This happened when I ran the bootstrap command shortly after recreating my databases and bringing all my nodes back up.

I did not restart the remote console during this process.

Is this potentially caused by stale values in the remote console’s in-memory storage, or by references to values now missing from the database that are used during the bootstrap and which, I assume, are populated in the nodes’ databases as the nodes spin up?

I’ll reattempt a fresh connection regardless, just wanted to make sure I didn’t break anything while dropping the databases.


Were you successful when you restarted the remote console as well?

Thanks @davidd, that was indeed the issue! Using a fresh database resolved the problem.

Hi @Mate_Varga yes that resolved the issue.
In addition to restarting the remote console, I also performed a health check to ensure the nodes were reachable from the remote console before attempting the bootstrap.
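
For anyone following along, such a pre-bootstrap check can be done from the console; a minimal sketch, assuming the standard Canton health command:

```scala
// Prints the status of every node this console knows about,
// including whether each remote node is reachable.
health.status
```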