Hi @Michal_Konopka, the second option that you outlined is what we use in our drivers: each Fabric node also has a Daml Ledger API server (and a few other components) deployed alongside it.
To go a bit more in depth, there are a few different architectures for Daml Drivers, depending on the underlying ledgers and their properties. For Fabric we have two different architectures.
The first one, very roughly, is a setup where the Fabric peer node has custom chaincode deployed that is part of the Daml Driver for Fabric. It's essential that we deploy the Ledger API Server alongside each node for a few reasons: one is local validation of all Daml logic; another is to ensure that all Daml ledgers expose a unified Daml API, so users can port and/or migrate applications from one ledger to another (a bit more background on this here and here).
The second architecture is based on https://www.canton.io/ and is a two-layer network. In that architecture there are two types of nodes: domain nodes and participant nodes. The domain node can be thought of as the first layer (L1) and includes the Fabric node and the Daml Driver for Fabric, while the second layer (L2) contains the participant nodes running the Ledger API server.
So to summarize, Daml itself is a smart contract language, but it doesn't transpile to chaincode (in theory we could do that, but it would have many limitations). Conceptually we do something similar to Fabric's model, but instead of validating the smart contract directly in chaincode, we deploy a chaincode onto the Fabric peer that communicates over RPC with a local Daml engine, which runs the validation.
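To make the delegation pattern concrete, here's a minimal, purely illustrative sketch (not the actual Daml Driver code, and not Fabric's real chaincode API): a "chaincode" handler forwards the transaction to a local "engine" service over RPC and only commits if the engine accepts it. All names (`validate`, `chaincode_invoke`, the payload) are made up for the example.

```python
# Conceptual sketch of chaincode delegating validation to a local engine
# over RPC. None of these names come from the real Daml Driver for Fabric.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# --- "Daml engine" side: a local service that runs validation ---
def validate(transaction: str) -> bool:
    # Stand-in for real Daml interpretation of the submitted transaction.
    return bool(transaction)

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(validate, "validate")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# --- "chaincode" side: delegates instead of reimplementing Daml logic ---
def chaincode_invoke(transaction: str) -> str:
    engine = ServerProxy(f"http://127.0.0.1:{port}")
    if not engine.validate(transaction):
        raise ValueError("engine rejected the transaction")
    return "committed"

print(chaincode_invoke("tx-payload"))  # -> committed
```

The point of the pattern is that the chaincode stays thin and generic, while all Daml semantics live in one engine shared across every Daml ledger integration.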
Hope this helps!