Context-Aware Decision Making in Wireless Networks: Optimization and Machine Learning Approaches
In future wireless networks, an enormous number of heterogeneous devices will be connected, leading to a dramatic increase in data traffic. At the same time, future applications will have significantly higher requirements with respect to data rates, reliability, and latency. Conventional approaches, which aim only at improving the communication capabilities of wireless networks, will not be sufficient to satisfy the more demanding requirements arising in the future. Hence, a paradigm shift is needed. While conventionally perceived as pure communication networks, wireless networks can provide not only communication resources, but also computation, caching, data collection, and even user resources. Such resources can be part of the network infrastructure as well as of the wirelessly connected devices and their users. This radically different view of wireless networks as networks of distributed connected resources calls for the development of new techniques that jointly consider and leverage different types of resources in order to improve the system performance.
In this project, we show that such new techniques that jointly consider and leverage different types of resources require context-aware decision making. This is because, first, resources need to be shared; secondly, trade-offs between different types of resources exist; and thirdly, the optimal resource allocation may depend not only on network conditions, but also on other node-related, user-related, or externally given conditions, the so-called context. We provide an overview of context-aware decision making by discussing context awareness, architectures of decision making, and designs of decision agents. Designing a context-aware decision-making framework requires formulating a context-aware system model. In particular, decision agents responsible for resource allocation need to be identified. These agents may be part of a centralized, decentralized, or hierarchical architecture of decision making, and a suitable architecture needs to be selected. Finally, designing decision agents requires modeling and classifying the problem to be solved and developing an appropriate method according to which decision agents take decisions. We emphasize two designs relevant for context-aware decision making in wireless networks, namely, optimization-based approaches and machine-learning-based approaches, in the latter case specifically the framework of multi-armed bandits.
Moreover, in this project, we study three candidate techniques for wireless networks that jointly consider and leverage different types of resources, namely, computation offloading in multi-hop wireless networks, caching at the edge of wireless networks, and mobile crowdsourcing. For each technique, we identify a fundamental problem requiring context-aware decision making, propose a novel framework for context-aware decision making, and solve the problem using the proposed framework.
Computation offloading allows wirelessly connected devices to offload computation tasks to resource-rich servers. This may reduce the devices' task completion times and their energy consumption. Computation offloading hence trades off computation resources against communication resources. In this project, for the first time, we study computation offloading in multi-hop wireless networks, where wirelessly connected devices assist each other as relay nodes. We identify the fundamental problem of context-aware computation offloading for energy minimization in multi-hop wireless networks. We propose a novel model that takes into account channel conditions, computing capabilities of the devices, task characteristics, and battery constraints at relay nodes, since the effect of computation offloading on the devices' energy consumption depends on these context factors. Based on this model, we take an optimization-based approach and formulate the considered problem as a multi-dimensional knapsack problem, which takes into account that offloading decisions in multi-hop networks are non-trivially coupled, as communication resources of relay nodes need to be shared. Finally, we propose a novel context-aware greedy heuristic algorithm for computation offloading in multi-hop networks. Based on its centralized architecture of decision making, this algorithm enables a central entity to take offloading decisions using centrally collected context information. We show that despite its centralized architecture, the algorithm has a small communication overhead. Numerical results demonstrate that the offloading solution found by the proposed algorithm on average reduces the network energy consumption by 13% compared to the case when no computation offloading is used. Moreover, the proposed algorithm yields near-optimal results in the considered offloading scenarios, with a maximal deviation of less than 6% from the global optimum.
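To illustrate the flavor of such a greedy heuristic for the multi-dimensional knapsack formulation, the following sketch treats each device's offloading decision as an item whose value is its energy saving and whose weights are its bandwidth demands on the shared relay links; each relay's capacity forms one knapsack dimension. All identifiers and numbers are illustrative assumptions, not the project's actual model or algorithm.

```python
# Hypothetical greedy heuristic for offloading as a multi-dimensional
# knapsack: items are candidate offloading decisions, knapsack
# dimensions are the relays' shared communication capacities.

def greedy_offloading(items, capacities):
    """items: list of (device_id, energy_saving, demands) where demands
    maps relay_id -> required capacity; capacities: relay_id -> budget."""
    remaining = dict(capacities)
    # Rank candidates by energy saving per unit of total resource demand.
    ranked = sorted(items,
                    key=lambda it: it[1] / max(sum(it[2].values()), 1e-9),
                    reverse=True)
    selected = []
    for device, saving, demands in ranked:
        # Accept a device only if every relay it uses still has capacity;
        # this is where offloading decisions become coupled across devices.
        if all(demands[r] <= remaining[r] for r in demands):
            for r in demands:
                remaining[r] -= demands[r]
            selected.append(device)
    return selected

items = [
    ("d1", 5.0, {"r1": 2, "r2": 1}),
    ("d2", 3.0, {"r1": 3}),
    ("d3", 4.0, {"r2": 4}),
]
print(greedy_offloading(items, {"r1": 4, "r2": 4}))  # → ['d1']
```

Note how d3 is rejected once d1 has consumed part of relay r2's capacity, even though d3 would fit on its own: exactly the coupling through shared relay resources described above.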
Caching at the edge allows popular content to be cached close to mobile users in order to serve user requests locally, thus reducing backhaul and cellular traffic as well as the latency for the user. Hence, caching at the edge exploits caching resources in order to save communication resources. In this project, we identify the fundamental problem of context-aware proactive caching for maximizing the number of cache hits under missing knowledge about content popularity. We introduce a new model for context-aware proactive caching that takes into account that different users may favor different content and that the users' preferences may depend on their contexts. Using a machine-learning-based approach based on contextual multi-armed bandits (contextual MAB), we propose a novel online learning algorithm for context-aware proactive caching. Based on its decentralized architecture of decision making, this algorithm enables the controller of a local cache to learn context-specific content popularity, which is typically not available a priori, online over time. The proposed algorithm takes the cache operator's objective into account by allowing for service differentiation. We analyze the computational complexity as well as the memory and communication requirements of the algorithm, and we show how the algorithm can be extended to meet practical requirements. Moreover, we derive a sublinear upper bound on the regret of the algorithm, which characterizes the learning speed and proves that the algorithm converges to the optimal cache content placement strategy. Simulations based on real data show that, depending on the cache size, the proposed algorithm achieves up to 27% more cache hits than the best algorithm taken from the literature.
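A minimal sketch of the contextual-MAB idea behind such a cache controller is given below: the controller discretizes user context, maintains per-context hit estimates for each file, and uses a UCB-style score to balance exploring files of unknown popularity against caching the estimated most popular ones. The context model, reward signal, and score form are assumptions for illustration only, not the project's actual algorithm.

```python
import math

# Illustrative contextual-bandit cache placement: learn context-specific
# content popularity online, which is not available a priori.
class ContextualCacheBandit:
    def __init__(self, files, cache_size):
        self.files = files
        self.m = cache_size
        self.counts = {}   # (context, file) -> times this file was cached
        self.hits = {}     # (context, file) -> sum of observed hit rates
        self.t = 0

    def place(self, context):
        """Choose which files to cache for the given (discretized) context."""
        self.t += 1
        def score(f):
            n = self.counts.get((context, f), 0)
            if n == 0:
                return float("inf")  # unexplored file: force exploration
            mean = self.hits[(context, f)] / n
            # UCB-style bonus shrinks as a file is cached more often.
            return mean + math.sqrt(2 * math.log(self.t) / n)
        return sorted(self.files, key=score, reverse=True)[:self.m]

    def update(self, context, cached, hit_rates):
        """Feed back the observed hit rates of the cached files."""
        for f, r in zip(cached, hit_rates):
            self.counts[(context, f)] = self.counts.get((context, f), 0) + 1
            self.hits[(context, f)] = self.hits.get((context, f), 0.0) + r
```

After enough rounds in a given context, the files that actually generate hits in that context dominate the placement, while the exploration bonus guarantees that no file is permanently ignored.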
Mobile crowdsourcing (MCS) allows task owners to outsource tasks via a mobile crowdsourcing platform (MCSP) to a set of workers. Hence, MCS exploits user resources for task solving. In this project, we identify the fundamental problem of context-aware worker selection for maximizing the worker performance in MCS under missing knowledge about expected worker performance. We present a novel model for context-aware worker selection in MCS that can cope with different task types and that explicitly allows worker performance to be a non-linear function of both task and worker context. Using a machine-learning-based approach based on contextual MABs, we propose a new context-aware hierarchical online learning algorithm for worker selection in MCS. Based on the proposed hierarchical architecture of decision making, this algorithm splits information collection and decision making among several entities. Local controllers (LCs) in the workers' mobile devices learn the workers' context-specific performances online over time. The MCSP centrally assigns workers to tasks based on a regular information exchange with the LCs. This novel approach addresses two critical aspects. First, personal worker context is kept locally in the LCs, which reduces communication overhead and preserves the privacy of the workers, who may not want to share personal context with the MCSP. Secondly, the MCSP is enabled to select the most capable workers for each task based on what the LCs learn about their workers' context-specific performances, which are typically unknown a priori. We analyze the computational complexity and derive upper bounds on the local memory requirements of the algorithm and on the number of times the quality of each worker must be assessed. Moreover, we show that the more access to worker context is granted to the LCs, the lower are the communication requirements of the proposed algorithm compared to an equivalent centralized approach.
In addition, we derive a sublinear upper bound on the regret, which characterizes the learning speed and proves that the algorithm converges to the optimal worker selection strategy. Finally, we show in simulations based on synthetic and real data that, depending on the availability of workers, the proposed algorithm achieves up to 49% higher cumulative worker performance than the best algorithm from the literature.
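The hierarchical split described above can be sketched as follows: each LC keeps its worker's context-specific performance statistics privately and reports only a score, while the MCSP ranks workers by those scores and assigns the task to the top ones. The context discretization, the UCB-style score, and all identifiers are illustrative assumptions, not the project's actual algorithm.

```python
import math

# Hypothetical local controller (LC): personal worker context and the
# learned statistics never leave the worker's device.
class LocalController:
    def __init__(self, worker_id):
        self.worker_id = worker_id
        self.stats = {}  # (task_ctx, worker_ctx) -> (n, reward_sum)

    def score(self, task_ctx, worker_ctx, t):
        """Report a score for the current round; raw context stays local."""
        n, s = self.stats.get((task_ctx, worker_ctx), (0, 0.0))
        if n == 0:
            return float("inf")  # unassessed in this context: explore
        return s / n + math.sqrt(2 * math.log(t) / n)

    def observe(self, task_ctx, worker_ctx, reward):
        n, s = self.stats.get((task_ctx, worker_ctx), (0, 0.0))
        self.stats[(task_ctx, worker_ctx)] = (n + 1, s + reward)

def mcsp_select(lcs, task_ctx, worker_ctxs, t, k):
    """MCSP side: rank workers by the LC-reported scores and assign the
    task to the top-k workers, without ever seeing personal context."""
    scored = sorted(
        lcs,
        key=lambda lc: lc.score(task_ctx, worker_ctxs[lc.worker_id], t),
        reverse=True)
    return [lc.worker_id for lc in scored[:k]]
```

The privacy and overhead benefits come from the interface: only a single scalar per worker crosses from the LCs to the MCSP each round, rather than the worker's context and full performance history.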