Abstract

Timing jitter is a major factor limiting data throughput in high-speed serial interfaces, which makes an accurate analysis of its impact on system performance essential. Histogram-based methods have been developed for this purpose; they directly relate measured jitter distributions to the bit-error rate (BER). However, real measurements suffer from hardware-imposed limitations such as finite sample size, a discrete number of histogram bins, and process variations. In this paper we investigate the performance of a widely used, powerful class of fitting methods when applied to built-in jitter measurements (BIJM). We derive equations that specify minimum requirements for sample size and time resolution, and provide empirical relations to estimate the error statistics for typical test distributions. These results allow designers to characterize the key parameters of a BIJM design and to find an optimal trade-off between hardware cost and system accuracy. A typical design example is provided, and the validity of the empirical equations is demonstrated with experimental jitter measurements. The equations can serve as tools for configuring a BIJM system and assist in realizing production tests and on-chip diagnostics.
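As a concrete illustration of the histogram-based approach the abstract describes, the sketch below bins synthetic jitter samples, fits a Gaussian to each histogram tail, and extrapolates total jitter to a target BER using the standard dual-Dirac relation TJ(BER) = DJ(dd) + 2·Q(BER)·σ_RJ, where BER = ½·erfc(Q/√2). This is not the paper's specific fitting method; the jitter model, all parameter values (RJ_SIGMA, DJ_PP, N_SAMPLES, N_BINS), and the helper names are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, special

# Minimal sketch (assumption: NOT the paper's actual algorithm) of a
# histogram tail fit on a dual-Dirac jitter model. Values are illustrative.
rng = np.random.default_rng(0)
RJ_SIGMA = 1.0e-12               # assumed random jitter, 1 ps RMS
DJ_PP = 8.0e-12                  # assumed deterministic jitter, 8 ps pk-pk
N_SAMPLES, N_BINS = 100_000, 128  # finite sample size and bin count,
                                  # the hardware limits the paper analyzes

# Synthetic jitter: Gaussian RJ plus a two-level (dual-Dirac) DJ component.
samples = (rng.normal(0.0, RJ_SIGMA, N_SAMPLES)
           + rng.choice([-DJ_PP / 2, DJ_PP / 2], N_SAMPLES))

counts, edges = np.histogram(samples, bins=N_BINS)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, mu, sigma, a):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Locate the two histogram modes and fit a Gaussian to each outer tail,
# where the dual-Dirac model predicts purely Gaussian behavior.
mid = N_BINS // 2
mu_l0 = centers[:mid][np.argmax(counts[:mid])]
mu_r0 = centers[mid:][np.argmax(counts[mid:])]

def fit_tail(mask, mu0):
    popt, _ = optimize.curve_fit(gauss, centers[mask], counts[mask],
                                 p0=(mu0, RJ_SIGMA, counts.max()))
    return popt

mu_l, sig_l, _ = fit_tail((centers <= mu_l0) & (counts > 0), mu_l0)
mu_r, sig_r, _ = fit_tail((centers >= mu_r0) & (counts > 0), mu_r0)

# Standard dual-Dirac extrapolation: TJ(BER) = DJ(dd) + 2*Q(BER)*RJ,
# with BER = 0.5*erfc(Q/sqrt(2)), i.e. Q = sqrt(2)*erfcinv(2*BER).
dj_dd = mu_r - mu_l
rj = 0.5 * (abs(sig_l) + abs(sig_r))
ber = 1e-12
q = np.sqrt(2.0) * special.erfcinv(2.0 * ber)   # ~7.03 at BER = 1e-12
tj = dj_dd + 2.0 * q * rj
print(f"DJ(dd) = {dj_dd*1e12:.2f} ps, RJ = {rj*1e12:.3f} ps, "
      f"TJ@BER=1e-12 = {tj*1e12:.2f} ps")
```

Re-running the sketch with a smaller N_SAMPLES or coarser N_BINS illustrates the paper's central point: limited sample size and bin count tend to inflate the error of the tail fit and hence of the extrapolated total jitter.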

  • Publication date: 2012-10