Asynchronous Method Invocation (AMI) in C++
Asynchronous Method Invocation (AMI) is the term used to describe the client-side support for the asynchronous programming model. AMI supports both oneway and twoway requests, but unlike their synchronous counterparts, AMI requests never block the calling thread. When a client issues an AMI request, the Ice runtime hands the message off to the local transport buffer or, if the buffer is currently full, queues the request for later delivery. The application can then continue its activities and poll or wait for completion of the invocation, or receive a callback when the invocation completes.
AMI is transparent to the server: there is no way for the server to tell whether a client sent a request synchronously or asynchronously.
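For example, here is a minimal sketch of a future-based asynchronous invocation. The Employees proxy and its getName operation are placeholders, borrowed from the examples later in this section:

EmployeesPrx e = ...; // get an Employees proxy

// The Async function returns a std::future immediately, without blocking.
auto fut = e.getNameAsync(99);

// ... do other work while the invocation is in progress ...

string name = fut.get(); // blocks here only if the result is not ready yet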
Asynchronous Exception Semantics
If an invocation throws an exception, the exception is reported by the exception callback or by the future, even if the actual error condition for the exception was encountered during the call to the Async function ("on the way out"). The advantage of this behavior is that all exception handling is located in the same place (instead of being present twice, once where you call the Async function, and again where you retrieve the result).
There are two exceptions to this rule:
- If you destroy the communicator and then make an asynchronous invocation, the Async function throws CommunicatorDestroyedException. This is necessary because, once the communicator is destroyed, its client thread pool is no longer available.
- A call to an Async function can throw TwowayOnlyException. An Async function throws this exception if you call an operation that has a return value or out-parameters on a oneway proxy.
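Apart from these two cases, any error, including a connection failure encountered while the request is being sent, is reported when the invocation completes. A minimal sketch with the future-based API, again using the hypothetical Employees proxy:

try
{
    auto fut = e.getNameAsync(99); // does not throw on network errors
    string name = fut.get();       // an invocation exception surfaces here
}
catch (const Ice::LocalException& ex)
{
    cerr << "invocation failed: " << ex.what() << endl;
}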
Asynchronous Oneway Invocations
You can invoke operations via oneway proxies asynchronously, provided the operation has void return type, does not have any out-parameters, and does not throw user exceptions. If you call an Async function on a oneway proxy for an operation that returns values or throws a user exception, the Async function throws TwowayOnlyException.
With the callback API, an asynchronous oneway invocation does not call the response callback; you use the sent callback to verify that the invocation was sent successfully. With the future-based API, the returned future is a future<void>, and this future becomes ready when the invocation is sent.
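For example, here is a sketch of an asynchronous oneway invocation with both APIs. The Logger proxy and its log operation are hypothetical; the operation returns void and has no out-parameters, so it can be invoked as oneway:

auto oneway = logger.ice_oneway(); // oneway version of the hypothetical proxy

// Future-based API: the returned future<void> becomes ready once the
// request has been sent.
oneway.logAsync("starting up").get();

// Callback API: the response callback is never called for a oneway
// invocation; the sent callback tells you the request was written to
// the transport (or queued for later delivery).
oneway.logAsync(
    "shutting down",
    [] { /* response callback: never called for a oneway invocation */ },
    [](exception_ptr) { /* handle a failure to send */ },
    [](bool sentSynchronously) { /* request was sent */ });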
Canceling an Asynchronous Invocation
The Async function with callback parameters returns a cancel function-object (a std::function<void()>). You can use this function-object to cancel the invocation, for example:
EmployeesPrx e = ...; // get an Employees proxy

auto cancel = e.getNameAsync(
    99,
    [](string name) { cout << "Employee name is: " << name << endl; });

cancel(); // no longer interested in this name
Calling this cancel function-object prevents a queued invocation from being sent or, if the invocation has already been sent, ignores a reply if the server sends one. This cancelation is purely local and has no effect on the server.
Canceling an invocation that has already completed has no effect. Otherwise, a canceled invocation is considered to be completed, meaning the exception callback (if provided) receives an Ice::InvocationCanceledException.
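For instance, here is a sketch of an exception callback that distinguishes cancelation from other failures:

auto cancel = e.getNameAsync(
    99,
    [](string name) { /* not called if the cancelation was in time */ },
    [](exception_ptr ex)
    {
        try
        {
            std::rethrow_exception(ex);
        }
        catch (const Ice::InvocationCanceledException&)
        {
            // the invocation was canceled locally
        }
        catch (const std::exception&)
        {
            // some other failure
        }
    });

cancel();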
Polling for Completion
The future-based Async functions allow you to poll for call completion. Polling is useful in a variety of cases.
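For instance, you can check whether the future is ready and continue with other work in the meantime. A minimal sketch, once more using the hypothetical Employees proxy:

auto fut = e.getNameAsync(99);

// Poll without blocking: wait_for with a zero timeout just reports the status.
while (fut.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
{
    // ... do other work ...
}

string name = fut.get();

As a more elaborate example, consider the following simple interface to transfer files from client to server: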
interface FileTransfer
{
    void send(int offset, ByteSeq bytes);
}
The client repeatedly calls send to send a chunk of the file, indicating at which offset in the file the chunk belongs. A naïve way to transmit a file would be along the following lines:
FileHandle file = open(...);
FileTransferPrx ft = ...;
const int chunkSize = ...;
int offset = 0;

while (!file.eof())
{
    ByteSeq bs;
    bs = file.read(chunkSize); // Read a chunk
    ft.send(offset, bs);       // Send the chunk
    offset += bs.size();
}
This works, but not very well: because the client makes synchronous calls, it writes each chunk on the wire and then waits for the server to receive the data, process it, and return a reply before writing the next chunk. This means that both client and server spend much of their time doing nothing — the client does nothing while the server processes the data, and the server does nothing while it waits for the client to send the next chunk.
Using asynchronous calls, we can improve on this considerably:
FileHandle file = open(...);
FileTransferPrx ft = ...;
const int chunkSize = ...;
int offset = 0;

deque<future<void>> results;
const int numRequests = 5;

while (!file.eof())
{
    ByteSeq bs;
    bs = file.read(chunkSize); // Read a chunk

    // Send up to numRequests + 1 chunks asynchronously.
    auto fut = ft.sendAsync(offset, bs);
    offset += bs.size();
    results.push_back(std::move(fut));

    // Once there are more than numRequests, wait for the least
    // recent one to complete.
    while (results.size() > numRequests)
    {
        results.front().get();
        results.pop_front();
    }
}

// Wait for any remaining requests to complete.
while (!results.empty())
{
    results.front().get();
    results.pop_front();
}
With this code, the client sends up to numRequests + 1 chunks before it waits for the least recent of these requests to complete. In other words, the client sends the next request without waiting for the preceding request to complete, up to the limit set by numRequests. In effect, this allows the client to "keep the pipe to the server full of data": the client keeps sending data, so both client and server continuously do work.
Obviously, the correct chunk size and value of numRequests depend on the bandwidth of the network as well as the amount of time taken by the server to process each request. However, with a little testing, you can quickly zoom in on the point where making the requests larger or queuing more requests no longer improves performance. With this technique, you can realize the full bandwidth of the link to within a percent or two of the theoretical bandwidth limit of a native socket connection.
Flow Control
Asynchronous method invocations never block the thread that calls the Async function: the Ice runtime checks to see whether it can write the request to the local transport. If it can, it does so immediately in the caller's thread. Alternatively, if the local transport does not have sufficient buffer space to accept the request, the Ice runtime queues the request internally for later transmission in the background.
This creates a potential problem: if a client sends many asynchronous requests while the server is too busy to keep up with them, the requests pile up in the client-side runtime until, eventually, the client runs out of memory.
The callback API provides a way for you to implement flow control by counting the number of requests that are queued: if that number exceeds some threshold, the client stops invoking more operations until some of the queued requests have drained out of the local transport.
For example:
EmployeesPrx e = ...; // get an Employees proxy

e.getNameAsync(
    99,
    [](string name) { ... handle name ... },
    [](exception_ptr ex) { ... handle exception ... },
    [](bool) { ... increase sent counter ... });
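The snippet above only sketches where the counting hooks belong. Below is one possible helper for this technique; the class and all of its names are our own invention, not part of the Ice API. It blocks the calling thread while too many requests are queued, and the callbacks release a slot exactly once per request (a request can also fail after it was sent, so the exception callback alone is not a reliable signal):

#include <condition_variable>
#include <memory>
#include <mutex>

// Hypothetical helper: limits the number of asynchronous requests that
// have been invoked but not yet sent.
class RequestThrottle
{
public:
    explicit RequestThrottle(int maxQueued) : _maxQueued(maxQueued) {}

    // Blocks until fewer than maxQueued requests are outstanding,
    // then claims a slot.
    void acquire()
    {
        std::unique_lock<std::mutex> lock(_mutex);
        _cond.wait(lock, [this] { return _queued < _maxQueued; });
        ++_queued;
    }

    // Releases one slot; call this once the request was sent or failed.
    void release()
    {
        std::lock_guard<std::mutex> lock(_mutex);
        --_queued;
        _cond.notify_one();
    }

private:
    const int _maxQueued;
    int _queued = 0;
    std::mutex _mutex;
    std::condition_variable _cond;
};

A possible usage, assuming the throttle outlives all outstanding invocations:

RequestThrottle throttle(10); // allow at most 10 queued requests

throttle.acquire(); // may block until a slot is free
auto released = std::make_shared<std::once_flag>();
auto release = [&throttle, released]
{
    // Release the slot exactly once, whichever callback fires first.
    std::call_once(*released, [&throttle] { throttle.release(); });
};
e.getNameAsync(
    99,
    [](string name) { /* handle name */ },
    [release](exception_ptr) { release(); },
    [release](bool) { release(); });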