Abstract

In this paper, a new spatio-temporal method for adaptively detecting events, based on Allen temporal algebra and external information support, is presented. Temporal information is captured by representing events as temporal sequences built from a lexicon of non-ambiguous temporal patterns. These sequences are then exploited to mine previously undiscovered sequences, supported by external text information, using a class association rule mining technique. Because each pattern is modeled with a linguistic part and a perceptual part that work independently and are connected via a transformer, the method can easily be deployed to new domains (e.g., baseball, basketball, tennis) with only a few changes to the perceptual part and the transformer. Thus, the proposed method not only works well in poorly structured environments but can also adapt itself to new domains with little or no external re-programming, re-configuring, or re-adjusting. The results of the automatic event detection process are tailored to personalized retrieval in a click-and-see style, using either a conceptual or a conceptual-visual query scheme. Experiments carried out on a soccer video corpus of more than 30 hours, captured from different broadcasters under varying conditions, together with comparisons against well-known related methods, demonstrate the efficiency, effectiveness, and robustness of the proposed method in both offline and online processing.
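To make the Allen temporal relations underlying the pattern lexicon concrete, the minimal Python sketch below classifies the relation holding between two detected event intervals. The `Interval` type, the relation labels, and the event timings in the usage example are illustrative assumptions for exposition only; they are not the paper's actual lexicon or implementation.

```python
from typing import NamedTuple


class Interval(NamedTuple):
    """A temporal interval [start, end] for one detected event (seconds)."""
    start: float
    end: float


def allen_relation(x: Interval, y: Interval) -> str:
    """Return the Allen interval relation that holds between x and y.

    Covers all 13 basic relations of Allen's interval algebra,
    using plain string labels for readability.
    """
    if x.end < y.start:
        return "before"
    if y.end < x.start:
        return "after"            # inverse of before
    if x.end == y.start:
        return "meets"
    if y.end == x.start:
        return "met-by"           # inverse of meets
    if x.start == y.start and x.end == y.end:
        return "equals"
    if x.start == y.start:
        return "starts" if x.end < y.end else "started-by"
    if x.end == y.end:
        return "finishes" if x.start > y.start else "finished-by"
    if y.start < x.start and x.end < y.end:
        return "during"
    if x.start < y.start and y.end < x.end:
        return "contains"         # inverse of during
    return "overlaps" if x.start < y.start else "overlapped-by"


# Hypothetical example: a "pass" segment immediately followed by a "shot" segment,
# and a "play" segment that fully contains a "goal" segment.
print(allen_relation(Interval(12.0, 15.5), Interval(15.5, 17.0)))  # -> "meets"
print(allen_relation(Interval(10.0, 20.0), Interval(12.0, 18.0)))  # -> "contains"
```

Encoding each pair of event intervals with one of these unambiguous relation labels is what allows an event to be written as a temporal sequence over the pattern lexicon before the rule-mining step.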

  • Publication date: 2010-10