Abstract

In distributed network service systems with multiple service nodes, such as streaming-media systems and resource-sharing systems, admission control (AC) is an essential technique for enhancing performance. Model-based optimization approaches are well suited to analyzing and deriving the optimal AC policy. However, owing to the curse of dimensionality, computing such a policy for practical systems is a difficult task. In this paper, we consider a general model of distributed network service systems and address the problem of designing an optimal AC policy. For the system with fixed parameters, an analytical model is formulated as a semi-Markov decision process (SMDP). We design an event-driven AC policy and adopt the stationary randomized policy as the policy structure. To solve the SMDP, both a state-aggregation approach and a reinforcement-learning (RL) method with an online policy optimization algorithm are applied. We then extend the problem to systems with time-varying parameters, where the arrival rates of requests at each service node may change over time. For this setting, an AC policy switching mechanism is presented, which allows the system to decide whether to adjust its AC policy according to a policy switching rule. To maximize the gain of the system, that is, to obtain the optimal AC policy switching rule, another RL-based algorithm is applied. Numerical experiments assess the effectiveness of the SMDP-based AC policy and the policy switching mechanism, comparing the performance of the optimal policies obtained by the proposed methods with that of classical AC policies. The simulation results show that the SMDP model and RL-based algorithms proposed in this paper achieve higher performance and computational efficiency.