
Asynchronous Method Dispatch (AMD) in Python

Asynchronous Method Dispatch (AMD) is the server-side equivalent of AMI. With AMD, you can process dispatches asynchronously, allowing the server to optimize resource usage and serve more clients compared to processing all dispatches synchronously.

In Python, however, concurrency has an additional restriction: only one Python thread can execute at a time because of the Global Interpreter Lock (GIL). This makes it especially important to avoid synchronous blocking calls in dispatch operations.

For example, consider the following synchronous dispatch:

PY
def greet(self, name: str, current: Ice.Current) -> str:
  return self._db.getGreet(name)

If _db.getGreet blocks while waiting for the database, the thread handling this dispatch cannot perform other work until the call returns.

With AMD, you can avoid this blocking:

PY
async def greet(self, name: str, current: Ice.Current) -> str:
  return await self._db.getGreetAsync(name)

In this version, the thread does not remain blocked while getGreetAsync runs. Instead, it can execute other tasks, and the coroutine resumes on the appropriate thread once the database result becomes available.
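The benefit can be demonstrated with a stdlib-only simulation (no Ice APIs here; get_greet_async stands in for a hypothetical awaitable database call):

```python
import asyncio
import time

# Hypothetical stand-in for self._db.getGreetAsync: an awaitable
# database call, simulated with asyncio.sleep.
async def get_greet_async(name: str) -> str:
    await asyncio.sleep(0.1)  # simulated database latency
    return f"Hello, {name}!"

# AMD-style dispatch: the thread is released at each await.
async def greet(name: str) -> str:
    return await get_greet_async(name)

async def main() -> None:
    start = time.monotonic()
    # Three dispatches overlap on one thread instead of running back
    # to back, so total time stays close to a single 0.1 s latency.
    results = await asyncio.gather(
        greet("alice"), greet("bob"), greet("carol"))
    elapsed = time.monotonic() - start
    print(results)
    assert elapsed < 0.3  # well under 3 x 0.1 s sequential time

asyncio.run(main())
```

With synchronous blocking calls, the same three dispatches would occupy the thread for the full combined latency.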

AMD Mapping

Annotating operations with the ["amd"] metadata directive has no effect in the Python mapping. The mappings for synchronous and asynchronous dispatch are nearly identical; the only difference is whether the implementation returns the result directly or returns an awaitable that produces it:

  • An operation has asynchronous semantics if it is implemented as an async method or if it returns an awaitable object.

  • Otherwise, the operation has synchronous semantics.

The parameter passing rules for in parameters are the same in both cases.
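The rule above can be illustrated in plain Python: what gives a dispatch asynchronous semantics is that calling the implementation yields an awaitable. A minimal stdlib-only sketch (the servant classes and dispatch helper are illustrative, not Ice APIs):

```python
import asyncio
import inspect

class SyncChatbot:
    def greet(self, name: str) -> str:
        return f"Hello, {name}!"

class AsyncChatbot:
    async def greet(self, name: str) -> str:
        await asyncio.sleep(0)  # yield control, as a real async call would
        return f"Hello, {name}!"

def dispatch(servant, name: str) -> str:
    result = servant.greet(name)
    if inspect.isawaitable(result):
        # Asynchronous semantics: schedule the awaitable and
        # complete the dispatch when it finishes.
        return asyncio.run(result)
    # Synchronous semantics: the result is available immediately.
    return result

print(dispatch(SyncChatbot(), "alice"))  # Hello, alice!
print(dispatch(AsyncChatbot(), "bob"))   # Hello, bob!
```

In both cases the caller passes the same in parameters; only the nature of the returned value differs.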

Consider the Greeter example:

SLICE
module VisitorCenter
{
    interface Greeter
    {
        string greet(string name);
    }
}

The server can choose to implement the operation synchronously or asynchronously.

Synchronous version:

PY
def greet(self, name: str, current: Ice.Current) -> str:
  print(f"Dispatching greet request {{ name = '{name}' }}")
  return f"Hello, {name}!"

Asynchronous version (coroutine):

PY
async def greet(self, name: str, current: Ice.Current) -> str:
  await asyncio.sleep(1)
  print(f"Dispatching greet request {{ name = '{name}' }}")
  return f"Hello, {name}!"

The coroutine is executed according to the configured event loop adapter—for example, on the asyncio event loop thread when the communicator is initialized with an asyncio event loop.

asyncio Integration

Ice provides seamless integration with Python’s asyncio library.

If you supply an asyncio event loop during communicator initialization using the eventLoop parameter of Ice.initialize, asynchronous dispatch will run on the asyncio event loop. This allows you to await asynchronous invocations directly within the dispatch implementation.

The same mechanism can be used to integrate Ice with other asynchronous event loop frameworks. To do so, instead of passing an asyncio event loop directly, implement the Ice.EventLoopAdapter abstract base class for your event loop of choice and provide it during communicator initialization via the InitializationData.eventLoopAdapter member.
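The essential hand-off such an integration performs can be shown with the standard library alone: a coroutine created on one thread is scheduled onto an event loop running on another thread, and the scheduling thread waits for the result. This sketch uses only asyncio, not any Ice API:

```python
import asyncio
import threading

# Run an asyncio event loop on a dedicated thread, as an
# event-loop-integrated runtime might.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

async def greet(name: str) -> str:
    await asyncio.sleep(0.01)  # simulated asynchronous work
    return f"Hello, {name}!"

# From a non-event-loop thread (e.g., a dispatch thread), schedule
# the coroutine on the loop and wait for its result.
future = asyncio.run_coroutine_threadsafe(greet("alice"), loop)
result = future.result(timeout=1)
print(result)  # Hello, alice!

loop.call_soon_threadsafe(loop.stop)
thread.join(timeout=1)
```

asyncio.run_coroutine_threadsafe is the standard thread-safe bridge for this kind of cross-thread scheduling.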

Chaining Asynchronous Invocations

Because proxy invocations return awaitables and asynchronous dispatch methods may also return awaitables, it’s straightforward to chain calls—provided the operations have the same result type and compatible user-exception sets.

Continuing with the Greeter example, the servant can delegate directly to another Greeter:

PY
def greet(self, name: str, current: Ice.Current) -> str:
  return self._greeter.greetAsync(name)

Or, using async/await:

PY
# Coroutine implementation (AMD semantics)
async def greet(self, name: str, current: Ice.Current) -> str:
    return await self._greeter.greetAsync(name)

The greet dispatch is implemented by delegating to another Greeter server, and we return the result of the nested asynchronous invocation directly.
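The two delegation styles produce the same result; a stdlib-only sketch makes this concrete (greet_async here simulates the proxy's greetAsync invocation):

```python
import asyncio

# Stand-in for the remote proxy's greetAsync invocation.
async def greet_async(name: str) -> str:
    await asyncio.sleep(0)  # simulated network round trip
    return f"Hello, {name}!"

# Style 1: a plain method returns the awaitable directly; the caller
# awaits it, giving the operation asynchronous semantics.
def greet_plain(name: str):
    return greet_async(name)

# Style 2: a coroutine awaits the nested invocation itself.
async def greet_coroutine(name: str) -> str:
    return await greet_async(name)

async def main() -> None:
    a = await greet_plain("alice")
    b = await greet_coroutine("alice")
    assert a == b == "Hello, alice!"
    print(a)  # Hello, alice!

asyncio.run(main())
```

This chaining only works cleanly when the delegating operation and the nested invocation have the same result type and compatible user-exception sets, as noted above.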

See Also