Abstract

A new approach for developing recurrent neural-network models of nonlinear circuits is presented, overcoming the conventional limitation that training information depends on the shapes of circuit waveforms and/or circuit terminations. Using only a finite set of waveforms for model training, our technique enables the trained model to respond accurately to test waveforms of unknown shapes. To relate the information in training waveforms to that in test waveforms, we exploit an internal space of a recurrent neural network, called the internal input-neuron space. We formulate a new circuit block combining a generic load and a generic excitation to terminate the circuit. By sweeping the coefficients of the proposed circuit block, we obtain a rich set of training waveforms that effectively covers the region of interest in the internal input-neuron space. We also present a new method to reduce the amount of training data while maintaining the necessary modeling information. The proposed method is demonstrated through examples of recurrent neural-network modeling of high-speed drivers and an RF amplifier. It is confirmed that, for different terminations and test waveforms, the model trained with the proposed technique has better accuracy and robustness than models trained with existing methods.
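To make the coverage idea concrete, the following minimal Python sketch is illustrative only and not from the paper: it assumes the RNN's input neurons receive the present and a few delayed samples of the port voltage and current, stubs the circuit simulation with an idealized ramp source and resistive load, and sweeps hypothetical coefficients (source amplitude `A`, rise time `tr`, load resistance `RL`) of a combined load/excitation termination to accumulate the training data's footprint in the internal input-neuron space.

```python
import itertools
import numpy as np

def input_neuron_vectors(v, i, delay=2):
    """Map sampled port waveforms (voltage v, current i) to points in the
    internal input-neuron space: each point stacks the present and `delay`
    past samples that feed the RNN's input neurons at one time step."""
    n = len(v)
    pts = [np.concatenate([v[t - delay:t + 1], i[t - delay:t + 1]])
           for t in range(delay, n)]
    return np.asarray(pts)

def simulate(A, tr, RL, n=200, dt=1e-11):
    # Placeholder for a circuit simulation of the terminated device:
    # an idealized ramp excitation into a purely resistive load.
    t = np.arange(n) * dt
    v = A * np.clip(t / tr, 0.0, 1.0)   # ramp with amplitude A, rise time tr
    i = v / RL                           # resistive load current
    return v, i

# Sweep the hypothetical coefficients of the combined load/excitation block;
# each setting contributes one training waveform pair.
coverage = []
for A, tr, RL in itertools.product([0.5, 1.0], [0.1e-9, 0.5e-9], [25, 50, 100]):
    v, i = simulate(A, tr, RL)
    coverage.append(input_neuron_vectors(v, i))

# Union over all swept cases: the training set's footprint in the internal
# input-neuron space, which should cover the region visited by test waveforms.
footprint = np.vstack(coverage)
print(footprint.shape)
```

In this picture, the data-reduction step mentioned in the abstract would correspond to discarding points of `footprint` that add no new coverage, though the paper's specific criterion is not reproduced here.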

  • Publication date: June 2009