The growth of Internet of Things (IoT) solutions creates vast new opportunities for developers of embedded systems: connectivity can be added to almost any physical object, including medical devices, household appliances, home automation, industrial controls, even clothing and light bulbs. This collection of billions of end devices, from the tiniest ultra-efficient connected end nodes to high-performance gateways, drives a continuously growing demand in the embedded systems industry for sophisticated software design that can efficiently support the demanding applications running on IoT devices.

Internet of Things (IoT) system-on-chip (SoC) designers face some difficult choices about storing data. They usually have to decide how much memory to include for the major SoC functions, whether to add on-chip or off-chip memory, and whether data needs to be programmed once, a few times, or many times. These options often seem mutually exclusive, especially when the system does not provide an efficient memory-management algorithm. Because of the high-volume, low-price expectations for IoT-enabled systems, cost is a major concern: given the pressure on average unit cost, adding hardware components such as an MMU to a SoC is almost always ruled out.

Dynamic functionality in embedded systems is normally discouraged due to resource constraints. Yet some types of applications inherently require dynamic allocation. A real-life example is a distributed sensor network in an agricultural environment, whose nodes typically forward messages at non-deterministic times. This type of application involves time-critical operations, such as reading sensor values, as well as non-critical operations, such as message forwarding.

A simple FIFO queue is not always an optimal solution, because forwarding a message may involve multiple actions with delays (e.g. transmission acknowledgments). Hence, many communication protocols require dynamic allocation to hold incoming messages until they are successfully forwarded. In such a scenario, the protocol would rather handle multiple messages at the same time, raising the possibility that a message received later is discarded first. This type of application merges the non-deterministic character of Internet-of-Things devices with the time-critical requirements of sensor systems. Unfortunately, general-purpose dynamic memory schemes, such as malloc/free, are not suitable for embedded systems like these.
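To make the scenario concrete, the following sketch (all names hypothetical, not from any particular protocol stack) models several messages held "in flight" at once, where a message received later may be released before an earlier one — something a plain FIFO cannot express:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical illustration: a small static pool of message slots.
 * A forwarding protocol may hold several messages in flight at once,
 * and the one whose acknowledgment arrives first is released first,
 * regardless of arrival order. */

#define MAX_MSGS 4
#define MSG_SIZE 32

struct msg_slot {
    int  in_use;
    char payload[MSG_SIZE];
};

static struct msg_slot pool[MAX_MSGS];

/* Claim a free slot for an incoming message; NULL if all are in flight. */
struct msg_slot *msg_acquire(const char *data)
{
    for (size_t i = 0; i < MAX_MSGS; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            strncpy(pool[i].payload, data, MSG_SIZE - 1);
            pool[i].payload[MSG_SIZE - 1] = '\0';
            return &pool[i];
        }
    }
    return NULL;
}

/* Release a slot once the message has been forwarded and acknowledged. */
void msg_release(struct msg_slot *m)
{
    m->in_use = 0;
}
```

A node might acquire slots for messages A and B, receive the acknowledgment for B first, and release B while A is still pending — the out-of-order lifetime that motivates dynamic allocation here.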

By definition, real-time applications function within a time frame that the user perceives as immediate or current. There are two main categories of real-time systems, hard and soft real-time, both of which have to comply with some fundamental principles summarized in the review by M. Stonebraker et al. [8].

One essential requirement of real-time applications is that response latency should be as low as possible. The foremost responsibility of a memory-management algorithm is to comply with this requirement by providing the amount of memory an application requests within a good average response time, as described in the TLSF algorithm by Masmano et al. [1]. However, a balance must be kept between response time and memory consistency, especially when there is extensive use of dynamic allocation and de-allocation, which can lead to memory fragmentation and, in turn, unexpected behavior of the whole system. Most developers try to avoid dynamic memory entirely for this reason, as noted in the review by I. Puaut et al. [9], but doing so conflicts with the requirements of IoT-enabled devices.
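TLSF achieves its constant response time by mapping a requested size to a pair of segregated-list indices using only bit operations. A minimal sketch of that two-level mapping (the index computation only, not the full allocator; `SL_BITS` and the small-size handling are simplified here, and sizes are assumed to be at least 2^SL_BITS bytes):

```c
#include <stdint.h>

/* Sketch of TLSF's two-level index computation.  The first-level index
 * (fl) is the position of the most significant set bit of the size; the
 * second-level index (sl) subdivides that power-of-two range into
 * 2^SL_BITS linear sub-ranges.  Both are computed in constant time,
 * which is what keeps allocation and deallocation O(1). */

#define SL_BITS 4  /* 16 second-level sub-ranges per first-level class */

static int fls32(uint32_t x)   /* index of highest set bit, -1 if x == 0 */
{
    int n = -1;
    while (x) { x >>= 1; n++; }
    return n;
}

/* Assumes size >= (1 << SL_BITS); real TLSF treats smaller sizes
 * separately. */
void tlsf_mapping(uint32_t size, int *fl, int *sl)
{
    *fl = fls32(size);
    /* drop the top bit, keep the next SL_BITS bits */
    *sl = (int)((size >> (*fl - SL_BITS)) & ((1u << SL_BITS) - 1));
}
```

For example, a 460-byte request maps to first-level class 8 (the 256–511 range) and second-level sub-range 12, so the allocator can jump straight to the right free list without searching.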

Nonetheless, when dynamic memory is used in real-time embedded devices and Internet-of-Things devices, it must be deterministic: the time taken to allocate memory should be predictable, and the memory pool should not become fragmented.
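One common way to satisfy both properties is a fixed-block pool: because every block has the same size, allocation and release are single pointer operations on a free list, and external fragmentation cannot occur. A minimal sketch with hypothetical names (this illustrates the general technique, not any specific product):

```c
#include <stddef.h>

/* Fixed-size block pool: O(1) allocate/free via a singly linked free
 * list threaded through the unused blocks.  Every block is the same
 * size, so the pool cannot fragment, and both operations take the
 * same, predictable time on every call. */

#define NUM_BLOCKS 8

union block {
    union block  *next;     /* valid only while the block is free */
    unsigned char data[32]; /* payload area when the block is in use */
};

static union block  pool[NUM_BLOCKS];
static union block *free_list;

void pool_init(void)
{
    for (size_t i = 0; i + 1 < NUM_BLOCKS; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

void *pool_alloc(void)           /* O(1): pop the free-list head */
{
    union block *b = free_list;
    if (b)
        free_list = b->next;
    return b;
}

void pool_free(void *p)          /* O(1): push back onto the free list */
{
    union block *b = p;
    b->next = free_list;
    free_list = b;
}
```

The trade-off is that block size is fixed at compile time, so requests are rounded up to the block size; many real-time kernels offer several pools of different block sizes for this reason.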
