I want to measure transaction round-trip time as a function of transaction size. An obvious proxy for actual transaction size would be the serialized byte size of the command message I’m sending, ignoring any transaction annotation overhead, which I’ll treat as constant. But this doesn’t seem straightforward to do (or is it?).
Now I’m thinking: since I’m only interested in the relation between size and round-trip time, and not so much in the absolute sizes, I might as well just use a payload text field on my contract and take the size of that as a proxy. All other things kept equal, once this field gets large enough it should be the dominant factor in transaction size, and it should let me draw a valid conclusion about the relation between transaction size and latency.
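For concreteness, here’s a minimal sketch of what I have in mind; the template and field names are just made up for illustration:

```
template SizedPayload
  with
    owner : Party
    payload : Text  -- the field I'd grow until it dominates transaction size
  where
    signatory owner
```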
Do you think this is a viable approximation? Or are there aspects I’m not considering that will falsify my results?
It somewhat depends on what you mean by size. Having multiple nodes in your transaction tree probably has a much bigger impact than just the size in bytes.
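Concretely, with the (hypothetical) template you sketched: one command that creates n small contracts produces n root nodes in the transaction tree, while a single create with a big payload produces one node, even if the total byte sizes are comparable. Something along these lines, in Daml Script:

```
import Daml.Script

-- n small creates submitted as one command:
-- n root nodes in the transaction tree.
createManySmall : Party -> Int -> Script [ContractId SizedPayload]
createManySmall owner n =
  submit owner $ forA [1..n] $ \_ ->
    createCmd SizedPayload with
      owner
      payload = "x"
```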
If you do just want the size in bytes, then Text is reasonable. We use UTF-16 internally (Java strings), so if you stick to ASCII you get 2 bytes per character, and the caveat mentioned by @Luciano doesn’t apply.
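So, sticking to ASCII, something like this (again reusing your hypothetical SizedPayload template) gives you a payload whose byte contribution scales linearly and predictably with n:

```
import Daml.Script
import DA.Text (implode)

-- One create whose Text payload is n ASCII characters,
-- i.e. roughly 2 * n bytes in the internal UTF-16 representation.
createWithPayloadSize : Party -> Int -> Script (ContractId SizedPayload)
createWithPayloadSize owner n =
  submit owner $ createCmd SizedPayload with
    owner
    payload = implode (replicate n "a")
```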
There is no binary data type in DAML (or DAML-LF) at the moment.
You mean if my command has complex effects, like other creates and exercises? For now I’m just doing bare creates, so the tree complexity should be constant.