
Inter-process communication guidelines

Inter-process communication (IPC) refers to methods of delivering information between different processes on a device or between devices on a network. Common methods for IPC include saving data into files, socket connections, message queues, and shared memory segments. As a Linux-based system, a Harmattan device can use many of the usual IPC techniques in Linux. In addition to the direct methods described above, Harmattan devices can also use D-Bus, which is a higher-level point-to-point protocol for IPC.

The following table presents different inter-process communication approaches and their performance. From a performance point of view, every communication approach can be described in terms of latency and throughput: how quickly the IPC type reacts to events and how much data it can transfer. Latency is usually not adjustable, but throughput is affected by other factors, such as the connection type (local or remote) and the device hardware. It is also common to combine multiple IPC approaches, for example, to deliver event notifications and data separately (see the sketch after the table below).

Different IPC approaches and their performance
IPC name        Latency    Throughput   Description
Signal          Low        n/a          Can be used only for notification, traditionally to prompt a process to change its state.
Socket          Moderate   Moderate     The only mechanism that also works between remote nodes; not very fast, but universal.
D-Bus           High       High         A high-level protocol that adds latency and reduces throughput compared to raw sockets, but gains greatly in ease of use.
Shared memory   n/a        High         Data is preserved between process runs. Access latency can be non-constant because the memory may be swapped out.
Mapped files    n/a        High         Data can be preserved across device boots.
Message queue   Low        Moderate     Data is preserved between process runs. Message size is limited, but per-message handling overhead is low.
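
To illustrate how approaches can be combined, the following sketch delivers the payload through an anonymous shared mapping and uses a pipe only as a one-byte "data ready" notification. It is a minimal, fork-based POSIX example; the buffer size and payload string are arbitrary placeholders, not part of any Harmattan API.

// Sketch: pipe for notification, shared memory for the payload (POSIX, fork-based).
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    const size_t size = 4096;   // arbitrary payload buffer size

    // Anonymous shared mapping: visible to both parent and child after fork().
    char *shared = static_cast<char *>(mmap(NULL, size, PROT_READ | PROT_WRITE,
                                            MAP_SHARED | MAP_ANONYMOUS, -1, 0));
    if (shared == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    int notify[2];              // pipe used only to say "the data is ready"
    if (pipe(notify) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {             // child: producer
        std::strcpy(shared, "large payload written via shared memory");
        char token = 1;
        write(notify[1], &token, 1);    // one byte of notification, not the data
        _exit(0);
    }

    // Parent: consumer. Block on the pipe, then read the data from shared memory.
    char token;
    read(notify[0], &token, 1);
    std::printf("received: %s\n", shared);

    waitpid(pid, NULL, 0);
    munmap(shared, size);
    return 0;
}

The same pattern applies when the notification travels over D-Bus or a socket instead of a pipe.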

As a general guideline, design your applications to use IPC as sparingly as possible, because each additional communication path adds overhead and complexity. For more information about IPC considerations, see Guidelines for IPC optimisation.

For a more detailed overview on how inter-process communications work on UNIX systems, see Beej's Guide to Unix IPC.

D-Bus in Harmattan

D-Bus is an inter-process communication method for local communication between processes. While it is not as quick as direct socket connections or similar methods, D-Bus is a lightweight component with good performance and flexibility. D-Bus has language bindings for Qt, GLib, Python, and other languages supported in Harmattan.

D-Bus operates through a dbus daemon that runs the bus on which the inter-process messages are transported. Applications that connect to D-Bus are considered clients, and their communication points on the bus are called objects.
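
As an illustration of the Qt bindings mentioned above, the following sketch calls a method on a D-Bus service from a QtDBus client. The service name, object path, interface, and method (com.example.*, GetStatus) are hypothetical placeholders rather than a real Harmattan service.

// Sketch: a simple QtDBus client (service, path, interface, and method are placeholders).
#include <QtCore/QCoreApplication>
#include <QtCore/QDebug>
#include <QtDBus/QDBusConnection>
#include <QtDBus/QDBusInterface>
#include <QtDBus/QDBusReply>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // A proxy object for one remote object on the session bus.
    QDBusInterface iface("com.example.Service",     // service (bus) name - placeholder
                         "/com/example/Object",     // object path - placeholder
                         "com.example.Interface",   // interface name - placeholder
                         QDBusConnection::sessionBus());

    if (!iface.isValid()) {
        qWarning() << "Cannot connect to the service:" << iface.lastError().message();
        return 1;
    }

    // Blocking call; latency-sensitive code should prefer asynchronous calls.
    QDBusReply<QString> reply = iface.call("GetStatus");
    if (reply.isValid())
        qDebug() << "Status:" << reply.value();
    else
        qWarning() << "Call failed:" << reply.error().message();

    return 0;
}

In a Qt 4 project such as a Harmattan application, QtDBus is enabled by adding QT += dbus to the project (.pro) file.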

For more information on D-Bus, see the D-Bus articles on freedesktop.org.

For examples of using D-Bus in Harmattan applications, see the related example sections in this documentation.

Guidelines for IPC optimisation

The following list provides guidelines for IPC optimisation:

  • Avoid communication as much as possible; cache data locally.
  • Profile data exchange carefully. For example, strace is an excellent tool for examining the system-call activity of one or several processes.
  • If you have to send a large amount of data, do the following:
    • If there is only one consumer, you can use a point-to-point connection (D-Bus, sockets). You can also use sendfile to minimise the load on the sender side (a sendfile sketch follows this list).
    • If there are multiple consumers, you can provide a key for data access. A common mistake is to use D-Bus to share a large array, avatars, or pictures. You can instead prepare a temporary file and share the path to that file, which prevents jamming the device when clients try to access your databases or allocate memory for communication.
  • Batching, or concatenating, messages saves resources, usually CPU and use time, but it may also have an impact on other factors, such as latency and memory. Consider the effects before using this approach.
  • In most cases, message broadcasting places a heavy load on the system because it affects many components. It also makes it difficult to locate which component is not working correctly. Because message handling requires a number of applications to be kept in memory, broadcasting leads to active exchange between RAM and swap, which sometimes causes "swap thrashing" (constant swapping for minimal added value).
  • In addition to the performance rules, use time is easily affected by sending or receiving messages if you communicate outside the device and need to wake up the WiFi or GSM hardware. In these cases, use batching and the heartbeat service.
  • Adjust the required system settings. For example, you can improve the latency of sockets and D-Bus by as much as five times by using smaller buffers: rmem_default = 64K and wmem_default = 16K.
  • Implement API-specific optimisations (minimal sketches for each case follow this list):
    • Signals - the signal handler must avoid memory allocation, so ideally you write the event into a pipe or a variable and handle it in the main application loop.
    • Sockets - the setsockopt function is useful for adjusting latency and throughput.
    • Shared memory - the memory can be swapped out. In very critical cases, you may need constant access latency, which you can achieve by locking the memory (in other words, making it non-swappable) with the mlock call. However, because locking large amounts of memory with mlock can be dangerous for the rest of the system, avoid using mlock unless it is required for security reasons (for example, for private keys in cryptographic programs).
    • Mapped files - to improve memory handling performance, adjust the mmap flags according to the use case and use madvise.
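
The sendfile sketch referred to in the large-data item above: the kernel copies the file contents directly to a connected socket, so the sending process never reads the data into user space. This is only a sketch; sockfd is assumed to be an already-connected TCP or UNIX domain socket, and the helper name send_whole_file is illustrative.

// Sketch: sending a file to an already-connected socket with sendfile (Linux).
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

// Sends the whole file at 'path' to 'sockfd' without copying it into user space.
bool send_whole_file(int sockfd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1) {
        perror("open");
        return false;
    }

    struct stat st;
    if (fstat(fd, &st) == -1) {
        perror("fstat");
        close(fd);
        return false;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        // The kernel copies directly from the page cache to the socket buffers.
        ssize_t sent = sendfile(sockfd, fd, &offset, st.st_size - offset);
        if (sent == -1) {
            perror("sendfile");
            close(fd);
            return false;
        }
    }

    close(fd);
    return true;
}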
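
The signal-handling rule above ("write the event into a pipe ... handle it in the main application loop") is commonly known as the self-pipe trick. The sketch below shows a minimal version: the handler performs only an async-signal-safe write, and all real work happens in the normal program flow.

// Sketch: the self-pipe trick - the handler only writes one byte, the main loop does the work.
#include <signal.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

static int sig_pipe[2];   // [0] read end (main loop), [1] write end (handler)

// Async-signal-safe handler: no allocation, no printf, only write().
static void on_sigterm(int)
{
    char token = 1;
    write(sig_pipe[1], &token, 1);
}

int main()
{
    if (pipe(sig_pipe) == -1) {
        perror("pipe");
        return 1;
    }

    struct sigaction sa;
    std::memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigterm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGTERM, &sa, NULL);

    // Main loop: block on the pipe. In a real application this descriptor would
    // be added to the event loop, for example with poll().
    char token;
    while (read(sig_pipe[0], &token, 1) == 1) {
        std::printf("SIGTERM received, handled safely outside the handler\n");
        break;
    }
    return 0;
}

In a Qt application, the read end of the pipe would typically be watched with a QSocketNotifier instead of a blocking read.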
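
For the socket item, the sketch below shows two common setsockopt adjustments: TCP_NODELAY to lower the latency of small messages and SO_RCVBUF / SO_SNDBUF to control buffering. The 16 KB buffer size is only an example value, not a Harmattan recommendation.

// Sketch: adjusting socket latency and throughput with setsockopt.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <cstdio>

// Applies example tuning options to a connected TCP socket descriptor.
bool tune_socket(int sockfd)
{
    // Disable Nagle's algorithm: small messages are sent immediately (lower latency).
    int nodelay = 1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay)) == -1) {
        perror("TCP_NODELAY");
        return false;
    }

    // Smaller buffers trade throughput for memory; 16 KB is an example value only.
    int bufsize = 16 * 1024;
    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) == -1 ||
        setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) == -1) {
        perror("SO_RCVBUF/SO_SNDBUF");
        return false;
    }
    return true;
}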
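
For the shared memory item, the sketch below locks a small buffer into RAM with mlock so that its pages are never swapped out. As noted above, reserve this for cases that genuinely need it; the buffer here is a placeholder for sensitive data such as key material.

// Sketch: locking a small, sensitive buffer into RAM with mlock.
#include <sys/mman.h>
#include <cstdio>
#include <cstring>

int main()
{
    static unsigned char key_material[4096];   // placeholder for sensitive data

    // Prevent the pages backing this buffer from being swapped out.
    if (mlock(key_material, sizeof(key_material)) == -1) {
        perror("mlock");        // may fail if RLIMIT_MEMLOCK is too low
        return 1;
    }

    // ... use the buffer: its access latency is now constant and it never hits swap ...

    // Wipe and unlock when done.
    std::memset(key_material, 0, sizeof(key_material));
    munlock(key_material, sizeof(key_material));
    return 0;
}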
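
For the mapped files item, the sketch below maps a file read-only and uses madvise to tell the kernel that access will be sequential, allowing more aggressive read-ahead. The file name is a placeholder, and MADV_SEQUENTIAL is only one of several possible hints (for example MADV_RANDOM, MADV_WILLNEED, MADV_DONTNEED).

// Sketch: mapping a file and advising the kernel about the access pattern.
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    const char *path = "/tmp/example.dat";     // placeholder file name

    int fd = open(path, O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    struct stat st;
    if (fstat(fd, &st) == -1) {
        perror("fstat");
        close(fd);
        return 1;
    }

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    // Tell the kernel we will read the mapping from start to end, so it can read ahead.
    madvise(map, st.st_size, MADV_SEQUENTIAL);

    // ... process the mapped data sequentially ...

    munmap(map, st.st_size);
    close(fd);
    return 0;
}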