Abstract

Known algorithms for learning PDFA can only be shown to run in time polynomial in the so-called distinguishability mu of the target machine, in addition to the number of states and the usual accuracy and confidence parameters. We show that the dependence on mu is necessary in the worst case for every algorithm whose structure resembles existing ones. As a technical tool, we define a new variant of Statistical Queries, termed L-infinity-queries. We show how to simulate L-infinity-queries using classical Statistical Queries, and show that known PAC algorithms for learning PDFA are in fact statistical query algorithms. Our results include a lower bound: every algorithm that learns PDFA with queries using a reasonable tolerance must make Omega(1/mu^(1-c)) queries for every c > 0. Finally, we describe an adaptive algorithm that PAC-learns with respect to another measure of complexity. This yields better efficiency in many cases, while retaining the same inevitable worst-case behavior. Our algorithm requires fewer input parameters than previously existing ones, and has a better sample bound.
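Roughly speaking, an L-infinity-query asks for the sup-norm distance between two distributions, and the distinguishability mu bounds this distance between distributions generated from distinct states. The following is a minimal illustrative sketch (our own, not the paper's definition) of estimating that distance empirically from samples:

```python
from collections import Counter

def empirical_linf_distance(sample_a, sample_b):
    """Estimate the L-infinity (sup-norm) distance between two distributions
    from finite samples: max over observed outcomes of the difference in
    empirical frequencies. Illustrative sketch; function name is ours."""
    n_a, n_b = len(sample_a), len(sample_b)
    freq_a, freq_b = Counter(sample_a), Counter(sample_b)
    support = set(freq_a) | set(freq_b)
    # Counter returns 0 for missing keys, so absent outcomes are handled.
    return max(abs(freq_a[x] / n_a - freq_b[x] / n_b) for x in support)

# Example: two samples over the outcomes {"a", "b"}.
d = empirical_linf_distance(["a", "a", "b", "b"], ["a", "b", "b", "b"])
# Empirical frequencies: (0.5, 0.5) vs (0.25, 0.75), so d == 0.25.
```

A query of this kind returns such a distance only up to some tolerance; the paper's lower bound concerns how many such queries are needed as mu shrinks.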

  • Publication date: 2013-2-18