Asynchronous Method Invocation (AMI) in Java
Asynchronous Method Invocation (AMI) is the term used to describe the client-side support for the asynchronous programming model. AMI supports both oneway and twoway requests, but unlike their synchronous counterparts, AMI requests never block the calling thread. When a client issues an AMI request, the Ice runtime hands the message off to the local transport buffer or, if the buffer is currently full, queues the request for later delivery. The application can then continue its activities and poll or wait for completion of the invocation, or receive a callback when the invocation completes.
AMI is transparent to the server: there is no way for the server to tell whether a client sent a request synchronously or asynchronously.
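As a minimal sketch of the basic pattern, assume a hypothetical proxy of type ExamplePrx with an operation doSomething (the same illustrative names used in the flow-control example later on this page):

ExamplePrx proxy = ...;

// The Async method returns immediately; the invocation proceeds in the background.
CompletableFuture<Result> f = proxy.doSomethingAsync();

// Either register an action that runs when the invocation completes...
f.whenComplete((result, ex) -> {
    // use result, or handle ex if it is non-null
});

// ...or block only at the point where the result is actually needed.
Result r = f.join();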
Asynchronous Exception Semantics
If an invocation throws an exception, the exception can be obtained from the future in several ways:
- Call get on the future; get throws ExecutionException with the actual exception available via getCause()
- Call join on the future; join throws CompletionException with the actual exception available via getCause()
- Use chaining methods such as exceptionally, handle, or whenComplete to execute custom actions
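For illustration, here is a sketch of all three approaches, again using the hypothetical ExamplePrx proxy; the error-handling bodies are only outlines:

ExamplePrx proxy = ...;

// get: throws the checked ExecutionException
try {
    proxy.doSomethingAsync().get();
} catch (java.util.concurrent.ExecutionException ex) {
    Throwable cause = ex.getCause(); // the actual Ice run-time or user exception
} catch (InterruptedException ex) {
    // ...
}

// join: throws the unchecked CompletionException
try {
    proxy.doSomethingAsync().join();
} catch (java.util.concurrent.CompletionException ex) {
    Throwable cause = ex.getCause();
}

// whenComplete: run a custom action once the future completes
proxy.doSomethingAsync().whenComplete((result, ex) -> {
    if (ex != null) {
        // handle the exception
    } else {
        // use the result
    }
});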
The exception is provided by the future, even if the actual error condition for the exception was encountered during the call to the Async method ("on the way out"). The advantage of this behavior is that all exception handling is located with the code that handles the future (instead of being present twice, once where the Async method is called, and again where the future is handled).
There are two exceptions to this rule:
- If you destroy the communicator and then make an asynchronous invocation, the Async method throws CommunicatorDestroyedException directly.
- A call to an Async method can throw TwowayOnlyException. An Async method throws this exception if you call an operation that has a return value or out-parameters on a oneway proxy.
InvocationFuture Class
The CompletableFuture<T> object that is returned by asynchronous proxy methods can be downcast to InvocationFuture<T> when an application requires more control over an invocation.
Polling for Completion
The InvocationFuture methods allow you to poll for call completion. Polling is useful in a variety of cases. As an example, consider the following simple interface to transfer files from client to server:
interface FileTransfer
{
    void send(int offset, ByteSeq bytes);
}
The client repeatedly calls send to send a chunk of the file, indicating at which offset in the file the chunk belongs. A naïve way to transmit a file would be along the following lines:
FileHandle file = open(...);
FileTransferPrx ft = ...;
int chunkSize = ...;
int offset = 0;
while (!file.eof()) {
    byte[] bs;
    bs = file.read(chunkSize); // Read a chunk
    ft.send(offset, bs); // Send the chunk
    offset += bs.length;
}
This works, but not very well: because the client makes synchronous calls, it writes each chunk on the wire and then waits for the server to receive the data, process it, and return a reply before writing the next chunk. This means that both client and server spend much of their time doing nothing — the client does nothing while the server processes the data, and the server does nothing while it waits for the client to send the next chunk.
Using asynchronous calls, we can improve on this considerably:
FileHandle file = open(...);
FileTransferPrx ft = ...;
int chunkSize = ...;
int offset = 0;
var results = new LinkedList<InvocationFuture<Void>>();
int numRequests = 5;
while (!file.eof()) {
    byte[] bs;
    bs = file.read(chunkSize);
    // Send up to numRequests + 1 chunks asynchronously.
    CompletableFuture<Void> f = ft.sendAsync(offset, bs);
    offset += bs.length;
    // Wait until this request has been passed to the transport.
    var i = (InvocationFuture<Void>)f;
    i.waitForSent();
    results.add(i);
    // Once there are more than numRequests, wait for the least recent one to
    // complete.
    while (results.size() > numRequests) {
        i = results.getFirst();
        results.removeFirst();
        i.join();
    }
}
// Wait for any remaining requests to complete.
while (results.size() > 0) {
    InvocationFuture<Void> i = results.getFirst();
    results.removeFirst();
    i.join();
}
With this code, the client sends up to numRequests + 1 chunks before it waits for the least recent one of these requests to complete. In other words, the client sends the next request without waiting for the preceding request to complete, up to the limit set by numRequests. In effect, this allows the client to "keep the pipe to the server full of data": the client keeps sending data, so both client and server continuously do work.
Obviously, the correct chunk size and value of numRequests depend on the bandwidth of the network as well as the amount of time taken by the server to process each request. However, with a little testing, you can quickly zoom in on the point where making the requests larger or queuing more requests no longer improves performance. With this technique, you can realize the full bandwidth of the link to within a percent or two of the theoretical bandwidth limit of a native socket connection.
Asynchronous Oneway Invocations
You can invoke operations via oneway proxies asynchronously, provided the operation has void return type, does not have any out-parameters, and does not throw user exceptions. If you call an asynchronous proxy method on a oneway proxy for an operation that returns values or throws a user exception, the Async method throws TwowayOnlyException.
The future returned for a oneway invocation completes as soon as the request is successfully written to the client-side transport. The future completes exceptionally if an error occurs before the request is successfully written.
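As a brief sketch, assume a oneway proxy for a hypothetical Hello interface whose sayHello operation returns void and has no out-parameters:

HelloPrx oneway = ...; // a proxy configured for oneway invocations

CompletableFuture<Void> f = oneway.sayHelloAsync();
// The future completes as soon as the request has been written to the
// client-side transport, not when the server has processed it.
f.whenComplete((ignored, ex) -> {
    if (ex != null) {
        // an error occurred before the request could be written
    }
});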
Flow Control
Asynchronous method invocations never block the thread that calls the asynchronous proxy method. The Ice runtime checks to see whether it can write the request to the local transport. If it can, it does so immediately in the caller's thread. (In that case, InvocationFuture.sentSynchronously returns true.) Alternatively, if the local transport does not have sufficient buffer space to accept the request, the Ice runtime queues the request internally for later transmission in the background. (In that case, InvocationFuture.sentSynchronously returns false.)
This creates a potential problem: if a client sends many asynchronous requests at a time when the server is too busy to keep up with them, the requests pile up in the client-side runtime until, eventually, the client runs out of memory.
The InvocationFuture class provides a way for you to implement flow control by counting the number of queued requests so that, if that number exceeds some threshold, the client stops invoking more operations until some of the queued requests have drained out of the local transport:
ExamplePrx proxy = ...;
CompletableFuture<Result> f = proxy.doSomethingAsync();
var i = (InvocationFuture<Result>)f;
i.whenSent((sentSynchronously, ex) -> {
    if (ex != null) {
        // handle errors...
    } else {
        // this request was sent, send another!
    }
});
The whenSent method has the following semantics:
- If the Ice runtime was able to pass the entire request to the local transport immediately, the action will be invoked from the current thread and the sentSynchronously argument will be true.
- If Ice wasn't able to write the entire request without blocking, the action will eventually be invoked from an Ice thread pool thread and the sentSynchronously argument will be false.
If you need more control over the execution environment of your action, you can use one of the whenSentAsync methods instead. The sentSynchronously argument still behaves as described above, but your executor's implementation will determine the threading behavior.
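Putting these pieces together, here is one possible sketch of such a flow-control scheme; the maxQueued limit, the semaphore, and the moreWorkToDo helper are illustrative application code, not part of the Ice API:

ExamplePrx proxy = ...;
int maxQueued = 10;
java.util.concurrent.Semaphore pending = new java.util.concurrent.Semaphore(maxQueued);

while (moreWorkToDo()) {
    // Block while maxQueued requests have not yet been passed to the transport.
    pending.acquireUninterruptibly();
    var f = (InvocationFuture<Result>)proxy.doSomethingAsync();
    f.whenSent((sentSynchronously, ex) -> {
        // The request left the client-side queue (or failed); allow another one.
        pending.release();
    });
}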
Canceling an Asynchronous Invocation
CompletableFuture provides a cancel method that you can call to cancel an invocation. If the future hasn't already completed either successfully or exceptionally, canceling the future causes it to complete with an instance of java.util.concurrent.CancellationException.
Cancellation prevents a queued invocation from being sent or, if the invocation has already been sent, ignores a reply if the server sends one. Cancellation is a local operation and has no effect on the server.
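A short sketch of cancellation, again with the hypothetical ExamplePrx proxy:

ExamplePrx proxy = ...;
CompletableFuture<Result> f = proxy.doSomethingAsync();

// Later, if the application no longer needs the result:
f.cancel(false);

// Anything waiting on the future now receives a CancellationException.
try {
    Result r = f.join();
} catch (java.util.concurrent.CancellationException ex) {
    // the invocation was canceled locally; the server is not affected
}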
Concurrency Semantics for AMI
When an invocation completes, the Ice runtime calls complete or completeExceptionally on the future from an Ice thread pool thread. The thread in which your own action executes depends on the completion status of the future and the manner in which you registered the action. Here are some examples:
- Suppose you configure an action using whenComplete. If the future is already complete at the time you call whenComplete, the action will execute immediately in the calling thread. If the future is not yet complete when you call whenComplete, the action will eventually execute in an Ice thread pool thread.
- Now suppose you configure an action using one of the whenCompleteAsync methods. Regardless of the thread in which Ice completes the future, your executor's implementation will determine the thread context in which the action is invoked. The Ice thread pool can be used as an executor; you can obtain the executor by calling the ice_executor proxy method. With the Ice thread pool executor, the action is always queued to be executed by the Ice thread pool.
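For example, the following sketch forces the completion action into an application-supplied single-thread executor (a standard java.util.concurrent executor, used here purely for illustration):

java.util.concurrent.ExecutorService executor =
    java.util.concurrent.Executors.newSingleThreadExecutor();

ExamplePrx proxy = ...;
proxy.doSomethingAsync().whenCompleteAsync((result, ex) -> {
    // Always runs in the application's executor thread,
    // regardless of which Ice thread completed the future.
}, executor);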