@Leonid_Rozenberg Well, the question is quick indeed but I’m afraid a full answer wouldn’t be. I’ll do my best to keep it brief nevertheless.
There is no deep reason why the content addressing scheme couldn’t be more fine-grained, in a fashion similar to the Unison language. In fact, we had a very fine-grained scheme in the past with DAML 0.x. This was before the days of our code generation tools. Without such tools, the content addressing was not particularly pleasant to use since every template pretty much had its own hash and you constantly had to update these hashes in your client applications during development. IIRC, this was one of the main reasons why we changed to the very coarse-grained scheme we have now.
However, a fine-grained content addressing scheme is not valuable in its own right. It is only valuable if it comes with a certain hash stability guarantee: recompiling a package after a modification leaves the hashes of all entities that are not (transitively) impacted by the modification unchanged. Ideally, this would even hold across compiler versions. Apart from being hard to achieve with our current GHC-based setup, such an approach has implications that might be undesirable when it comes to performance.
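To make the distinction concrete, here is a toy sketch (not DAML's actual scheme, which hashes a canonical DAML-LF serialization with a cryptographic hash): a hypothetical mini-AST stands in for compiled definitions, and we compare hashing the whole package versus hashing each definition individually.

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word64)

-- Hypothetical mini-AST standing in for compiled definitions.
data Expr = Lit Int | Var String | Add Expr Expr
  deriving (Show)

-- FNV-1a over the printed form; a real scheme would use e.g. SHA-256
-- over a canonical binary serialization.
fnv1a :: String -> Word64
fnv1a = foldl step 0xcbf29ce484222325
  where step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3

-- Fine-grained: one hash per definition.
hashDef :: Expr -> Word64
hashDef = fnv1a . show

-- Coarse-grained: one hash for the whole package.
packageHash :: [Expr] -> Word64
packageHash = fnv1a . concatMap show

main :: IO ()
main = do
  let f  = Add (Var "x") (Lit 1)   -- a definition we are about to edit
      f' = Add (Var "x") (Lit 2)   -- the edited version
      g  = Lit 42                  -- an unrelated definition
      before = map hashDef [f , g] -- hashes from the first compilation
      after  = map hashDef [f', g] -- hashes after editing f
  -- Coarse-grained: the edit invalidates the one package-wide hash.
  print (packageHash [f, g] == packageHash [f', g])  -- False
  -- Fine-grained: only f's hash changes; g's hash stays stable.
  print (zipWith (==) before after)                  -- [False,True]
```

The stability guarantee discussed above is exactly the `[False,True]` pattern: entities untouched by the edit keep their hashes, provided the compiler keeps emitting the same code for them.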
Obviously, DAML is a language that allows for very sophisticated abstractions, which don't translate well into how processors execute programs. Thus, achieving decent execution performance for DAML, both in terms of runtime and memory consumption, requires a fair amount of code optimization. Such optimizations can happen in two places: in the compiler or in the runtime.
If these optimizations are performed in the compiler, then improving an existing optimization or adding a new one is very likely to change the hashes of all (value-level) entities in a package. This pretty much defeats the purpose of a fine-grained content addressing scheme.
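A sketch of why, reusing the toy hashing setup from above (again, assumed names, not DAML's real pipeline): even a trivial optimization such as constant folding preserves semantics but changes the serialized form, and therefore the content-derived hash.

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word64)

data Expr = Lit Int | Var String | Add Expr Expr
  deriving (Show, Eq)

fnv1a :: String -> Word64
fnv1a = foldl step 0xcbf29ce484222325
  where step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3

-- A tiny compiler-side optimization: fold additions of literals.
constFold :: Expr -> Expr
constFold (Add a b) =
  case (constFold a, constFold b) of
    (Lit x, Lit y) -> Lit (x + y)
    (a', b')       -> Add a' b'
constFold e = e

-- Naive evaluator (free variables default to 0), just to demonstrate
-- that the optimization preserves meaning.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Var _)   = 0
eval (Add a b) = eval a + eval b

main :: IO ()
main = do
  let e  = Add (Lit 1) (Lit 2)
      e' = constFold e
  print (eval e == eval e')                  -- same semantics: True
  print (fnv1a (show e) == fnv1a (show e'))  -- same hash: False
```

Enabling this pass in a new compiler version would re-hash every definition it touches, even though nothing changed semantically.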
Performing the optimizations in the runtime is a very risky endeavor. As described in my answer above, changing the semantics of an existing active contract is a big no-go for DAML-like smart contracts. In other words, two different versions of the runtime must give all code they both understand the exact same semantics. In fact, this is also a prerequisite for being able to validate transactions submitted by other network participants or transactions that were recorded in the past. Such strong backward-compatibility requirements make a runtime that is as “dumb” as possible very desirable: every optimization you perform in the runtime carries the risk of accidentally changing the semantics of existing contracts. IMO, minimizing this risk is crucial for positioning DAML as a secure smart contract language.
Obviously, implementing optimizations in the compiler carries exactly the same risk of accidentally introducing semantic bugs. However, the impact of such bugs is significantly smaller. First, we consider DAML-LF the ultimate source of truth regarding the meaning of a contract, since DAML-LF is what is deployed to the ledger and executed by the runtime. Second, if the compiler starts producing different DAML-LF for the same DAML source than it did in the past, the content addresses necessarily change as well. This means that the semantics of existing contracts on a ledger remain completely unchanged even if we accidentally introduce compiler bugs after their underlying packages were deployed.
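The isolation property can be pictured as an append-only, content-addressed store (a deliberately simplified model, with made-up package contents): a recompile with a buggy compiler produces a fresh address and a fresh entry, while contracts created against the old address keep resolving to the old, unchanged code.

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word64)
import qualified Data.Map as M

fnv1a :: String -> Word64
fnv1a = foldl step 0xcbf29ce484222325
  where step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3

-- Content-addressed package store: hash -> compiled package (as text).
type Store = M.Map Word64 String

deploy :: String -> Store -> (Word64, Store)
deploy pkg store = let h = fnv1a pkg in (h, M.insert h pkg store)

main :: IO ()
main = do
  let oldLF = "template T: obligation = 10"   -- output of compiler v1
      newLF = "template T: obligation = 100"  -- buggy output of compiler v2
      (h1, s1) = deploy oldLF M.empty
      (h2, s2) = deploy newLF s1
  -- A contract created against h1 still resolves to the original code:
  print (M.lookup h1 s2 == Just oldLF)  -- True
  -- The buggy build landed under a fresh address of its own:
  print (h1 /= h2)                      -- True
```

Nothing in the store is ever overwritten, which is exactly why a post-deployment compiler bug cannot retroactively change what an active contract means.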
I would summarize everything said above as:
Stable fine-grained content addressing, sophisticated abstraction capabilities, decent execution performance, stable semantics of active contracts - pick three!
By making DAML a Haskell-like language, we've clearly picked the second. IMHO, not having the first is “only” unpleasant; not having (or at least not being able to achieve) the latter two would be unacceptable. Well, maybe that qualifies as a deeper reason why we don't have a very fine-grained content addressing scheme in DAML.