Abstract

Illumination is an important part of the external imaging environment of a video, revealing the physical conditions under which the video was taken. Realistic illumination is difficult to simulate, even with sophisticated computer graphics models. Exploiting this difference in external imaging environment between real and Deepfake videos, we propose a method for detecting Deepfake videos based on the consistency of illumination directions. Specifically, we employ the Lambertian illumination model to estimate the 2-D illumination direction of each video frame, and determine the authenticity of a video by examining how smoothly this direction changes across the entire video. Experiments on the public test datasets TIMIT and FaceForensics++ show that our method can effectively distinguish real videos from Deepfake videos. Because it involves no model training stage, the method has low computational complexity, making it well suited for real-time applications.
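To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the two core steps: estimating a 2-D illumination direction under the Lambertian model by linear least squares from surface normals and pixel intensities, and measuring the smoothness of the direction sequence across frames. The function names, the ambient-light term, and any smoothness threshold are illustrative assumptions.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Estimate the 2-D light direction (in radians) under a Lambertian model.

    normals     : (n, 2) array of 2-D surface normals (e.g. along a face contour)
    intensities : (n,)   array of observed pixel intensities

    Lambert's law with an ambient term gives I ≈ N·L + a, which is linear in
    (Lx, Ly, a), so we solve it by ordinary least squares.
    """
    A = np.column_stack([normals, np.ones(len(normals))])  # [Nx, Ny, 1]
    v, *_ = np.linalg.lstsq(A, intensities, rcond=None)    # v = [Lx, Ly, a]
    return np.arctan2(v[1], v[0])                          # direction angle

def direction_smoothness(angles):
    """Largest wrapped angular change between consecutive frames.

    A large value indicates an abrupt jump in illumination direction,
    which (per the method's premise) suggests a manipulated video.
    """
    diffs = np.angle(np.exp(1j * np.diff(angles)))  # wrap to (-pi, pi]
    return np.max(np.abs(diffs))
```

For example, sampling normals along a contour lit from a known direction of 0.7 rad with a small ambient term, `estimate_light_direction` recovers 0.7, and a per-frame sequence of such angles with one abrupt jump yields a large `direction_smoothness` value, flagging the inconsistency.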