Abstract

In this paper, we address the cross-layer problem of long-term average utility maximization in energy-efficient cognitive radio networks supporting packetized data traffic, subject to a constraint on the collision rate with licensed users. Utility is defined in terms of the number of packets transmitted successfully per unit of consumed power and the buffer occupancy. We formulate the problem as a constrained Markov decision process (CMDP), a dynamic programming framework, and employ a reinforcement learning (RL) approach to find a near-optimal policy in an unknown environment. The learned policy guides the transmitter in accessing available channels and selecting a proper transmission rate at the beginning of each frame so as to achieve its long-term goals. Several implementation issues of the RL approach are discussed. First, state-space compaction is used to cope with the so-called curse of dimensionality caused by the large state space of the formulated CMDP. Second, action-set reduction is presented to reduce the number of candidate actions in certain system states. Finally, the CMDP is converted into a corresponding unconstrained Markov decision process (UMDP) via the Lagrangian multiplier approach, and a golden-section search is proposed to find the proper multiplier. To evaluate the performance of the policy learned by RL, we present two naive policies and compare all three by simulation.
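The abstract outlines a Lagrangian conversion of the CMDP into a UMDP together with a golden-section search over the multiplier; the sketch below illustrates how these two pieces might fit together. It is a minimal illustration under stated assumptions, not the paper's implementation: the environment interface `env`, the reward shaping, and all hyperparameters are hypothetical, and since the abstract does not name the RL algorithm, tabular Q-learning is shown here as a common choice.

```python
import random

# Hypothetical sketch: `env`, its methods, and all parameters are
# assumptions for illustration, not the paper's actual interface.

GOLDEN = (5 ** 0.5 - 1) / 2  # golden ratio conjugate, ~0.618


def q_learning(env, lam, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Learn a policy for the UMDP whose per-step reward is the
    Lagrangian relaxation: utility - lam * collision_indicator."""
    q = {}
    total_collisions, steps = 0.0, 0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            actions = env.actions(s)  # action-set reduction: feasible actions only
            if random.random() < eps:
                a = random.choice(actions)  # explore
            else:
                a = max(actions, key=lambda a_: q.get((s, a_), 0.0))  # exploit
            s2, utility, collision, done = env.step(s, a)
            r = utility - lam * collision  # Lagrangian reward
            best_next = max((q.get((s2, a_), 0.0) for a_ in env.actions(s2)),
                            default=0.0)
            q[(s, a)] = (1 - alpha) * q.get((s, a), 0.0) + alpha * (r + gamma * best_next)
            total_collisions += collision
            steps += 1
            s = s2
    return q, total_collisions / max(steps, 1)


def golden_section_lambda(env, target_collision, lo=0.0, hi=10.0, tol=1e-2):
    """Golden-section search for the multiplier whose learned policy
    meets the collision-rate constraint as tightly as possible."""
    def gap(lam):
        _, coll = q_learning(env, lam)
        return abs(coll - target_collision)  # distance from the constraint

    a, b = lo, hi
    while b - a > tol:
        c = b - GOLDEN * (b - a)
        d = a + GOLDEN * (b - a)
        if gap(c) < gap(d):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2
```

The outer search brackets the multiplier and shrinks the interval by the golden ratio each iteration, re-learning a policy at each candidate value; the inner loop is an ordinary Q-learning update on the shaped (unconstrained) reward.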

  • Publication date: 2009-10

Full text