Authors: Chaima Dhahri, Tomoaki Ohtsuki
DOI: 10.1109/VETECS.2012.6240208
Keywords:
Abstract: In open-access non-stationary femtocell networks, cellular users (also known as macro users, MUs) may join, through a handover procedure, one of the neighboring femtocells so as to enhance their communications and increase their respective channel capacities. To avoid frequent communication disruptions owing to effects such as the ping-pong effect, it is necessary to ensure the effectiveness of the cell selection method. Traditionally, cell selection is based on a measured channel/cell quality metric such as the capacity, the load of the candidate cell, the received signal strength (RSS), etc. However, the problem with such approaches is that present performance does not necessarily reflect future performance, hence the need for novel selection methods that can predict performance over a longer horizon. Accordingly, in this paper we propose reinforcement learning (RL), i.e., a Q-learning algorithm, as a generic solution to the cell selection problem in a non-stationary femtocell network. After comparing our solution with different methods from the literature (least loaded (LL), random, and capacity-based), simulation results demonstrate the benefits of our approach in terms of gained capacity and number of handovers.
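The paper's details are not reproduced in this entry, so the following is only a hypothetical sketch of the general idea the abstract describes: tabular Q-learning where actions are candidate cells and the reward trades achieved capacity against a handover penalty. All names, parameter values, and the reward shape are illustrative assumptions, not the authors' exact formulation.

```python
import random

def q_learning_cell_selection(mean_capacity, steps=5000, alpha=0.1,
                              gamma=0.9, epsilon=0.1, handover_cost=0.5,
                              seed=0):
    """Toy Q-learning cell selection (illustrative, not the paper's model).

    State  = currently serving cell index.
    Action = cell to select for the next period.
    Reward = noisy achieved capacity minus a penalty if a handover occurs.
    """
    rng = random.Random(seed)
    n = len(mean_capacity)
    q = [[0.0] * n for _ in range(n)]  # Q-table: q[state][action]
    state = 0
    for _ in range(steps):
        # Epsilon-greedy action selection over candidate cells.
        if rng.random() < epsilon:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a: q[state][a])
        # Assumed reward: stochastic capacity minus handover penalty.
        capacity = rng.gauss(mean_capacity[action], 0.1)
        reward = capacity - (handover_cost if action != state else 0.0)
        # Standard Q-learning update with a bootstrapped next-state value.
        best_next = max(q[action])
        q[state][action] += alpha * (reward + gamma * best_next
                                     - q[state][action])
        state = action
    return q
```

With three hypothetical cells of mean capacities `[0.5, 1.0, 2.0]`, the learned greedy policy keeps the user attached to the highest-capacity cell rather than ping-ponging, which mirrors the capacity/handover trade-off the abstract refers to.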