Abstract

Linear matrix equations such as the Sylvester equation, Lyapunov equation, Stein equation, and a variety of their generalizations are of significant importance in many applications. A conversion to a classical linear system via the Kronecker product is generally regarded as the last resort because it significantly increases the size of the problem and ignores any underlying structure. Conventional Krylov subspace methods such as GMRES or CGNR may not require the vectorization explicitly, but an otherwise well-established preconditioner encounters the difficulty that it must be disassembled and redistributed over the original matrix coefficients in order to completely evade the Kronecker vectorization. Many other techniques for solving linear matrix equations have therefore been developed, but they are usually problem dependent and hard to generalize when the equation changes. In contrast, motivated by the notion of order-4 tensor equations, this paper proposes casting any linear matrix equation into the common framework of a generalized normal equation and applying low-precision gradient dynamics to obtain a high-precision solution. A single computational paradigm therefore serves to handle all types of linear matrix equations. The flow approach has the advantages of being straightforward to implement, uniform in theory, versatile in application, working directly with the original problem sizes without Kronecker vectorization, avoiding matrix inversion or factorization, and being amenable to convergence analysis. This paper outlines the theory, exemplifies a collection of applications, suggests a simple implementation, and reports some numerical evidence.
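As a minimal illustration of the gradient-dynamics idea described above (a sketch for the Sylvester case only, assuming the standard Frobenius-norm residual objective rather than the paper's general formulation), consider the Sylvester equation $AX + XB = C$ with residual $R(X) = AX + XB - C$ and objective $f(X) = \tfrac{1}{2}\|R(X)\|_F^2$:
% Illustrative sketch (not the paper's general construction): gradient flow
% for the Sylvester equation, assuming f(X) = (1/2) ||AX + XB - C||_F^2.
\[
  \dot{X}(t) \;=\; -\nabla f\bigl(X(t)\bigr)
  \;=\; -A^{\top}\bigl(AX + XB - C\bigr) \;-\; \bigl(AX + XB - C\bigr)B^{\top},
\]
% whose stationary points satisfy the associated generalized normal equation
\[
  A^{\top}\bigl(AX + XB - C\bigr) \;+\; \bigl(AX + XB - C\bigr)B^{\top} \;=\; 0 .
\]
The flow operates directly on the $m \times n$ unknown $X$, so no Kronecker vectorization, inversion, or factorization is required; analogous residual-gradient flows can be written for the other equation types mentioned in the abstract.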