I would like to submit and wait a maximum of 5 seconds; if there is no response or the transaction is not complete, I would like to return a reference that can be used to look up the status of that transaction in the future.
I am reviewing the Ledger API, but I cannot seem to find a data point that could be used for this.
The synchronous command service is a simplifying facade; more complex behavior, such as what you describe, must be implemented with the submission service and the completion service. An overview starts here.
Depending on how stateful your calculation of commands is, you may also want to dig into how deduplication works (also briefly summarized in the aforementioned overview) for details on when and how to resubmit commands once you have lost the ability to associate their submission with their completion.
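To make that last point concrete, here is a small, hedged illustration: if the command ID is derived deterministically from a business key, a resubmission after a crash reuses the same ID and can be caught by ledger-side command deduplication instead of producing a duplicate effect. The helper name and key format below are hypothetical, not part of any bindings API.

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

// Hypothetical helper: derive a stable command ID from a business key
// (e.g. "create-organization:<name>") so that a resubmission after a crash
// reuses the same ID and is caught by ledger-side command deduplication.
public final class CommandIds {
    private CommandIds() {}

    public static String stableCommandId(String businessKey) {
        // Same business key -> same command ID on every retry.
        return UUID.nameUUIDFromBytes(businessKey.getBytes(StandardCharsets.UTF_8)).toString();
    }
}
```

If no stable business key exists, persisting the randomly generated command ID before submitting achieves the same thing: you can still correlate the completion after a restart.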
Based on the description here, if a submission is made I would do something like:
Get the current ledger end offset (this could be cached and refreshed every N period).
SubmitAndWait.
If there is no response from the ledger after 5 seconds, return to the client the commandId and the ledger offset captured at submission time.
The client can query for the status of the command using the commandId + offset; on the Java side you run the completion client for the submitting party from that offset, then inspect each response in the stream for the command ID.
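A minimal sketch of that flow, assuming a hypothetical `LedgerClient` wrapper around whatever bindings you use; `getLedgerEnd` and `submitAndWait` here are illustrative names, not the actual bindings API:

```java
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical wrapper around the Ledger API bindings; these names are illustrative.
interface LedgerClient {
    String getLedgerEnd();                                        // current ledger end offset
    CompletableFuture<String> submitAndWait(String commandId, Object commands); // completes with a transaction id
}

// The reference handed back to the client when the 5-second budget runs out.
record PendingCommand(String commandId, String offsetAtSubmission) {}

// Either the transaction id (finished in time) or a pending reference, never both.
record SubmissionOutcome(Optional<String> transactionId, Optional<PendingCommand> pending) {}

final class TimeBoxedSubmitter {
    private final LedgerClient ledger;

    TimeBoxedSubmitter(LedgerClient ledger) { this.ledger = ledger; }

    SubmissionOutcome submit(Object commands) throws Exception {
        String offset = ledger.getLedgerEnd();                    // 1. capture the offset (could be cached)
        String commandId = UUID.randomUUID().toString();          //    client-chosen command ID
        CompletableFuture<String> result = ledger.submitAndWait(commandId, commands); // 2. submit
        try {
            // 3. wait up to 5 seconds for the synchronous answer
            return new SubmissionOutcome(Optional.of(result.get(5, TimeUnit.SECONDS)), Optional.empty());
        } catch (TimeoutException timedOut) {
            // 4. no answer in time: return something the client can use to look the command up later
            return new SubmissionOutcome(Optional.empty(), Optional.of(new PendingCommand(commandId, offset)));
        }
    }
}
```

The `PendingCommand` value is exactly the reference you would serialize and hand back to the client.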
Your description sounds like it could work. However, I recommend double-checking whether this is really the right approach for your app.
Usually things become much easier, both on the server side and on the client side, if you either handle requests synchronously or handle them asynchronously. What you are proposing is to handle them synchronously if they take <= 5s, but asynchronously if they take longer. That just seems to make things more complex everywhere, and I’m not sure what you get in return.
As @cocreature suggests, the benefit of using the command service is that you don’t have to correlate between the submission and completion services. But you never get that benefit, because your semantics require you to always be prepared to perform that correlation yourself. So you might as well always do it, use only the submission and completion services, and set aside the synchronous command service entirely.
If you want some kind of simulated synchronous-but-only-for-the-first-five-seconds interface, I believe that would best be implemented client-side. But, yet again, once your clients are required to always be prepared to deal with an asynchronous response, the benefits of that synchronous facade more or less vanish.
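For concreteness, a client-side version of that facade could look like the sketch below. The `AsyncOrgApi` interface is purely hypothetical and stands in for whatever asynchronous endpoint your server exposes; the point is that the caller still has to be ready for the "reference only" outcome.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Optional;

// Hypothetical client-side view of a purely asynchronous server API.
interface AsyncOrgApi {
    String submitCreateOrganization(String name, String owner);   // returns a reference immediately
    Optional<String> lookupResult(String reference);              // empty until the command has completed
}

final class SyncFacade {
    private final AsyncOrgApi api;

    SyncFacade(AsyncOrgApi api) { this.api = api; }

    /** Poll for up to 5 seconds; after that, just hand the reference back to the caller. */
    String createOrganization(String name, String owner) throws InterruptedException {
        String reference = api.submitCreateOrganization(name, owner);
        Instant deadline = Instant.now().plus(Duration.ofSeconds(5));
        while (Instant.now().isBefore(deadline)) {
            Optional<String> result = api.lookupResult(reference);
            if (result.isPresent()) {
                return result.get();     // fast path: behaves synchronously when the ledger is quick
            }
            Thread.sleep(250);           // simple polling interval
        }
        return reference;                // slow path: the caller must be prepared for this anyway
    }
}
```

Note that the caller still has to distinguish a final result from a reference, which is the same asymmetry the server-side hybrid would impose.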
Let me use a story as an example to show why I was using both sync and async:
You want to create an Organization. It has a Name and an owner (party).
You submit over HTTP for the creation of the Organization, which can return the Organization.
It may take a long time to submit (due to activity), but arguably that is an edge case.
The UI can submit for the creation of the Organization and receive the needed information in a single request. Or it would submit asynchronously and poll for this information to eventually arrive (assume no web sockets).
Having a sync response means you either return the desired object or return a stub that will allow you to retrieve it later on (201 vs 202 status code).
Having an async response means the client always receives a 202 and polls through an additional connection.
I would assume that in a business application the client does not care, or is not aware, that the data is ledger-backed. They want to create the Organization and get the result. The stub is the fallback that allows the client to specifically query for that command result if required. In most CRUD UIs you would just receive a 201 (even if no data was returned) and refresh the “datagrid” / page for the latest data.
This model would also make HTTP requests much more accessible: single requests in Postman, curl, etc., rather than dealing with a “Submit > Query-Loop” for each request (and without using async HTTP connections that hold the connection open until the result is returned, which additionally provides some “offline” reconnection benefits).
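A compact, framework-agnostic sketch of that 201-vs-202 endpoint; `HttpResponse` and `LedgerFacade` below are stand-ins for whatever HTTP layer and bindings wrapper you actually use, not real library types.

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Stand-in types: a real implementation would use your HTTP framework's response type
// and a real wrapper over the bindings instead of LedgerFacade.
record HttpResponse(int status, String body) {}

interface LedgerFacade {
    // Hypothetical: submits the create command and completes with the created Organization as JSON.
    CompletableFuture<String> createOrganization(String commandId, String requestJson);
}

final class OrganizationEndpoint {
    private final LedgerFacade ledger;

    OrganizationEndpoint(LedgerFacade ledger) { this.ledger = ledger; }

    HttpResponse handleCreate(String requestJson) throws Exception {
        String commandId = UUID.randomUUID().toString();
        CompletableFuture<String> created = ledger.createOrganization(commandId, requestJson);
        try {
            // 201: the ledger answered within the budget, return the Organization itself.
            return new HttpResponse(201, created.get(5, TimeUnit.SECONDS));
        } catch (TimeoutException stillRunning) {
            // 202: return a stub the client can use to poll for the result later.
            return new HttpResponse(202, "{\"commandId\": \"" + commandId + "\"}");
        }
    }
}
```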
I don’t have a good counter to the Postman / curl argument, but when you control the client, there’s no reason why you can’t build the same UX over an async connection as over a synchronous one: either way, if that particular request is taking a long time, you have to do something with your UI to show that something is running.
Even without web sockets, you have long polling as an option. At the code level, if you’re in JS, you’ll code individual HTTP requests as async anyway, either through callbacks or through promises / async/await. Otherwise your entire page freezes while you’re waiting for a response.
If you’re coding in Java, you have more leeway with threads, but even there it’s a limited resource and you’re probably better off sticking to async everywhere (unless you have a use-case where you can actually get away with sync everywhere, but those are rare).
While this doesn’t line up with the likely probability distribution of sync/async cases in practice, from an architecture perspective, I think it makes more sense (and will be less likely to frustrate downstreams who think they can just ignore the async case) to turn this idea around. It is really the async case that is the “standard” response, and as an optimization, the endpoint will sometimes return the result immediately.
From an implementation perspective, it makes the most sense to use the submission/completion services unconditionally in this case. The data you use to internally correlate the submission and completion is very similar to what you’ll have to encode and pass back to the client in case you require them to look up their completion later. You can also decide whether you want to help your clients out with automatic retries or various other niceties.
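As a sketch of what that internal correlation can look like (all types here are hypothetical placeholders for whatever your completion-stream consumer delivers): keep one pending future per in-flight command, keyed by the same client-chosen commandId you would pass back to a client that has to look its completion up later.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical shape of what the completion stream delivers; adapt to the bindings you use.
record Completion(String commandId, boolean succeeded, String transactionIdOrError) {}

final class CompletionCorrelator {
    // One pending future per in-flight command, keyed by the client-chosen command ID.
    private final Map<String, CompletableFuture<Completion>> pending = new ConcurrentHashMap<>();

    /** Call right before submitting a command with this ID. */
    CompletableFuture<Completion> register(String commandId) {
        return pending.computeIfAbsent(commandId, id -> new CompletableFuture<>());
    }

    /** Call from the completion-stream subscriber for every completion received. */
    void onCompletion(Completion completion) {
        CompletableFuture<Completion> future = pending.remove(completion.commandId());
        if (future != null) {
            future.complete(completion);
        }
        // No pending future (e.g. after a restart)? The persisted commandId + offset
        // reference is what lets you recover the status by re-reading completions.
    }
}
```

If the process restarts and this map is lost, the commandId-plus-offset reference you persisted (and possibly handed to the client) is what lets you recover by reading completions from that offset again.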
The semantics you’ve described don’t really provide this accessibility. They might provide it often, but according to what you’ve described, every client must always be prepared to handle the async case, because it might happen for any request where completion takes too long.
To expand on @Gary_Verhaegen’s comment, I think it is worth reconsidering how your client sides will deal with different kinds of HTTP request flows in practice, and what your endpoints need to do to satisfy those. The create and exercise endpoints of the JSON API only provide a synchronous, “as long as it takes” response for this reason, after all.