Abstract

Shoe marks are valuable clues that are frequently found at crime scenes, where forensic experts use them to investigate crimes and identify offenders. Developing a robust method for matching shoeprints is therefore of critical importance. In this paper, a method is proposed for retrieving shoe marks based on a block-based sparse representation technique. In the proposed method, the query image is divided into two blocks, and a sparse representation is computed for each block by an approximate l1-minimization method. The reference database is likewise split into two parts, from which two separate dictionaries are built. For each class, the coefficients belonging to the other classes are set to zero and the reconstruction errors of the two blocks are summed to obtain the total class error; the class with the smallest total error is selected. The performance of the proposed method was compared with the following approaches: Wright's sparse representation, extraction of local and global shoeprint features by the Fourier transform, extraction of shoeprint features by the Gabor transform after image rotation, and extraction of shoeprint corners by Hessian and Harris multi-scale detectors with SIFT descriptors. Accuracy was measured as the ratio of correctly retrieved images to the total number of test images, and the identification rate was reported as the cumulative match score for the first n matches. Simulation results show that the proposed method is highly effective and efficient in retrieving whole shoeprints as well as partial toe and heel prints, and that it outperforms the compared methods: it correctly recognized 99.47% of whole shoeprints, 80.53% of partial toe prints, and 79.47% of partial heel prints at the first rank. The proposed method was also compared with the other methods under rotation and scale distortions, and the results indicate that it is resistant to both.
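
The following is a minimal sketch of the block-based classification rule described above. It is not the authors' implementation: the ISTA solver stands in for the approximate l1 minimization, and the function names, regularization weight, and iteration count are illustrative assumptions.

```python
import numpy as np

def ista_l1(D, y, lam=0.01, n_iter=200):
    """Approximate l1 minimization: argmin_x 0.5*||D x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

def classify_blocks(blocks, dicts, labels):
    """blocks: query feature vectors for the two blocks (e.g. toe and heel halves).
    dicts:  matching dictionaries whose columns are reference features.
    labels: class label of each dictionary column (shared by both dictionaries)."""
    classes = np.unique(labels)
    total_residual = np.zeros(len(classes))
    for y, D in zip(blocks, dicts):
        x = ista_l1(D, y)
        for i, c in enumerate(classes):
            # Keep only the coefficients of class c (zero the others) and
            # accumulate the reconstruction error over both blocks.
            x_c = np.where(labels == c, x, 0.0)
            total_residual[i] += np.linalg.norm(y - D @ x_c)
    # Return the class whose per-class reconstruction has the smallest total error.
    return classes[np.argmin(total_residual)]
```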

  • Publication date: 2017-08