I want to measure transaction round-trip time as a function of transaction size. An obvious proxy for the actual transaction size would be the serialized byte size of the command message I’m sending, ignoring any transaction annotation overhead, which I’ll treat as constant. But this doesn’t seem straightforward to do (or is it?).
Now I’m thinking: since I’m only interested in the relation between size and round-trip time, and not so much in the absolute sizes, I might as well just use a payload text field on my contract and take its size as the proxy. All other things kept equal, once this field gets large enough it should be the dominant factor in transaction size, and the measurements should support a valid conclusion about the relation between transaction size and latency.
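For what it’s worth, the sweep I have in mind would look roughly like the sketch below. `submit_and_wait` is a hypothetical placeholder for the actual command-submission call (here it just sleeps so the sketch runs end to end); the payload sizes are spaced on a log scale so the text field eventually dominates everything else in the transaction.

```python
import random
import string
import time


def submit_and_wait(payload: str) -> None:
    """Placeholder for the real ledger submission call (hypothetical).

    Replace the body with the actual command submission that creates a
    contract carrying `payload` and blocks until the transaction lands.
    """
    time.sleep(0.001)  # simulate some round-trip work


def measure_round_trip(payload_size: int, repetitions: int = 5) -> float:
    """Return the median round-trip time for a payload of `payload_size` chars."""
    payload = "".join(random.choices(string.ascii_letters, k=payload_size))
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        submit_and_wait(payload)
        timings.append(time.perf_counter() - start)
    timings.sort()
    return timings[len(timings) // 2]  # median is more robust than mean


# Sweep payload sizes 10 .. 100,000 characters on a log scale.
sizes = [10 ** k for k in range(1, 6)]
results = {size: measure_round_trip(size) for size in sizes}
for size, rtt in results.items():
    print(f"{size:>8} chars -> {rtt * 1e3:.2f} ms")
```

Taking the median over a few repetitions per size is meant to dampen jitter from unrelated system load, which would otherwise mask the size effect at the small end of the sweep.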
Do you think this is a viable approximation? Or are there aspects I’m not considering that would invalidate my results?