Computation offloading in multi-hop networks

Recent years have witnessed significant advances in the hardware and software of wireless mobile nodes. However, mobile nodes are battery-powered, which limits their ability to run highly energy-consuming applications such as video processing, voice recognition, and 3D localization/mapping. One emerging solution to this problem is mobile cloud computing, which migrates computational tasks and data storage from battery-powered mobile nodes to resource-rich cloud servers. A recent study by Cisco Inc. shows that cloud applications such as video/audio streaming, online gaming, social networking, and online storage will account for most mobile data traffic by 2019.

One major advantage of computation offloading is that it relaxes the storage and computation requirements of future mobile nodes while still supporting computationally intensive applications. If minimizing the total network energy is the objective, a trade-off arises between offloading tasks and computing them locally. The energy for computing a task locally is determined by its required number of CPU cycles, whereas the energy for offloading it is determined by the amount of data that must be transmitted to the cloud. Both the number of CPU cycles and the amount of transmitted data are task-type dependent. Accordingly, tasks that require few CPU cycles but a large amount of offloading data should rather be computed locally, whereas tasks with a small amount of offloading data are preferably offloaded. Because radio resources are scarce, only a subset of nodes can offload simultaneously, while the rest must compute locally to meet the energy and delay constraints of their computation tasks. This project therefore focuses on developing smart, distributed offloading decision algorithms.
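The trade-off above can be sketched as a minimal per-task decision rule. This is an illustrative toy model, not the project's algorithm: the function name and the per-cycle and per-bit energy constants are assumptions chosen only to make the two regimes visible.

```python
def offload_decision(cpu_cycles, data_bits,
                     energy_per_cycle=1e-9,   # J per CPU cycle (assumed value)
                     energy_per_bit=1e-7):    # J per transmitted bit (assumed value)
    """Return 'offload' if transmitting the task to the cloud costs less
    energy than computing it locally, else 'local'."""
    e_local = cpu_cycles * energy_per_cycle    # energy to compute the task locally
    e_offload = data_bits * energy_per_bit     # energy to transmit the task's data
    return "offload" if e_offload < e_local else "local"

# A compute-heavy task with a small payload tends to be offloaded:
print(offload_decision(cpu_cycles=5e9, data_bits=1e4))   # -> offload
# A data-heavy task with few CPU cycles is computed locally:
print(offload_decision(cpu_cycles=1e6, data_bits=1e8))   # -> local
```

A full solution would additionally account for the shared radio resources, i.e., the constraint that only a subset of nodes can offload at the same time, which this per-node rule ignores.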