Authors: Chaima Dhahri, Tomoaki Ohtsuki
DOI: 10.1109/GLOCOM.2012.6503908
Keywords:
Abstract: In this paper, we focus on user-centered handover decision making in open-access non-stationary femtocell networks. Traditionally, such a mechanism is based on a measured channel/cell quality metric such as the channel capacity (between the user and the target cell). However, the throughput experienced by the user is time-varying because of the channel condition, i.e., owing to propagation effects or receiver location. In this context, the decision can depend not only on the current state of the network, but also on its possible future states (horizon). To this end, we need to implement a learning algorithm that can predict, based on past experience, the best performing cell in the future. We present in this paper a reinforcement learning (RL) framework as a generic solution for the cell selection problem. In a non-stationary femtocell network, the framework selects a target cell, without prior knowledge about the environment, by exploring the cells' past behavior and predicting their potential future performance with a Q-learning algorithm. Our framework aims at balancing the number of handovers while taking into account the dynamic change of the environment. Simulation results demonstrate that our solution offers an opportunistic-like performance with fewer handovers.
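To make the Q-learning idea concrete, below is a minimal, hypothetical sketch of Q-learning-driven cell selection. It is not the authors' exact formulation: the single-state setup, the epsilon-greedy policy, the Gaussian throughput noise, and all constants (`ALPHA`, `GAMMA`, `EPSILON`) are illustrative assumptions, with the noisy reward standing in for the time-varying channel condition the abstract describes.

```python
import random

# Illustrative hyperparameters (not from the paper):
# learning rate, discount factor, exploration probability.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def q_learning_cell_selection(mean_throughputs, episodes=2000, seed=0):
    """Learn which candidate cell tends to give the best noisy throughput.

    mean_throughputs: hypothetical mean throughput per candidate cell.
    Returns the learned Q-value for each cell (single-state MDP sketch).
    """
    rng = random.Random(seed)
    q = [0.0] * len(mean_throughputs)
    for _ in range(episodes):
        # Epsilon-greedy action selection over the candidate cells.
        if rng.random() < EPSILON:
            cell = rng.randrange(len(q))
        else:
            cell = max(range(len(q)), key=lambda c: q[c])
        # Noisy reward models the time-varying channel condition.
        reward = mean_throughputs[cell] + rng.gauss(0.0, 0.5)
        # Standard Q-learning update; the "next state" is the same
        # single state in this simplified sketch.
        q[cell] += ALPHA * (reward + GAMMA * max(q) - q[cell])
    return q

# Three hypothetical cells with mean throughputs 1.0, 3.0, 2.0.
q = q_learning_cell_selection([1.0, 3.0, 2.0])
best = max(range(len(q)), key=lambda c: q[c])
```

After training, `best` identifies the cell whose noisy throughput history yields the highest learned value; a fuller treatment would encode channel/location state and penalize frequent handovers in the reward, reflecting the handover/performance balance the abstract targets.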