Abstract

Classical unidimensional scaling poses a difficult combinatorial task. A procedure formulated as a nonlinear programming (NLP) model is proposed to solve this problem. The new method can be implemented with standard mathematical programming software. Unlike traditional procedures, which minimize either the sum of squared errors (the L2 norm) or the sum of absolute errors (the L1 norm), the proposed method can minimize the error under any Lp norm for 1 ≤ p < ∞. Extensions of the NLP formulation to the multidimensional scaling problem under the city-block model are also discussed.
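To make the objective concrete, the sketch below illustrates the kind of Lp-norm loss described above: given a dissimilarity matrix D, find coordinates x on a line so that |x_i − x_j| approximates D[i, j], minimizing the sum of |‖x_i − x_j| − D[i, j]|^p. This is a generic multi-start local-search illustration using SciPy, not the paper's actual NLP formulation; the function name and restart scheme are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize

def unidimensional_scaling(D, p=2, restarts=10, seed=0):
    """Illustrative Lp-norm unidimensional scaling.

    Minimizes sum_{i<j} | |x_i - x_j| - D[i, j] | ** p over coordinates x
    via multi-start Nelder-Mead. NOTE: this is a local-search sketch, not
    the NLP formulation proposed in the paper; unidimensional scaling has
    many local optima, so restarts only mitigate, not solve, that issue.
    """
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    iu = np.triu_indices(n, k=1)  # upper-triangular (i < j) index pairs

    def loss(x):
        fitted = np.abs(x[:, None] - x[None, :])  # pairwise line distances
        return np.sum(np.abs(fitted[iu] - D[iu]) ** p)

    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        x0 = rng.standard_normal(n) * (D.max() + 1.0)
        res = minimize(loss, x0, method="Nelder-Mead",
                       options={"maxiter": 50000,
                                "xatol": 1e-9, "fatol": 1e-12})
        if best is None or res.fun < best.fun:
            best = res
    # Center the solution: the loss is invariant to translation (and sign).
    return best.x - best.x.mean(), best.fun
```

Setting p=1 recovers the absolute-error criterion and p=2 the squared-error criterion mentioned above; any intermediate 1 ≤ p < ∞ can be passed directly, which is the flexibility the NLP approach is designed to provide.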