New AI Tool Detects Deepfakes by Analyzing the Reflections in People’s Eyes


With the rapid development of AI-based technologies, it has become easier than ever to fool the public with deepfake images and videos. Citing the risks these pose to society, companies such as Facebook, Reddit, and Microsoft have already built tools to detect deepfake content on their platforms. Now, a team of computer scientists has developed a new AI-based tool to detect deepfakes. It flags a fake image by inspecting the light reflections in the subject’s eyes.

AI deepfake detector measures reflections in the eyes

The new tool, developed by computer scientists at the University at Buffalo, is a relatively simple system. Nevertheless, it has proved to be 94% effective at detecting deepfake content.

Typically, deepfake images and videos are generated by AI-based apps that analyze thousands of photos and videos of real people. After this analysis, the tools can create realistic-looking pictures and videos of a person who does not exist. They can also be used to create fake footage of popular celebrities to spread misinformation and fool the general public.

In short, to generate deepfake results, AI tools analyze a wide range of source material to learn how a person moves, smiles, or speaks. They can then use that information to create highly realistic digital clones of real individuals.

How does this work?

Most common deepfake detection tools analyze the material for unnatural artifacts such as irregular facial movements or excessively smooth skin. The new tool, however, relies on the reflections of light in the subjects’ eyes to detect deepfake images and videos.

According to the scientists, this works because the reflection of a light source in a subject’s eyes is something AI-based tools do not take into account when generating deepfake material. As a result, the eyes in deepfake images show a different reflection pattern on each cornea, whereas the reflections should be nearly identical in both eyes if the subject is real. You can see a comparison image just below to understand this.

The scientists say that deepfake-generating tools also do not fully model human anatomy or how certain elements change as they interact with the physical world. The reflection in the eye is one such element: it changes depending on the light sources in front of the subject at the moment the image or video is captured.

So, the new tool takes advantage of this anomaly to detect deepfake content. It analyzes an image and produces a similarity score for the two corneal reflections. A high similarity score means the image is unlikely to be a deepfake, while a low score means the image is more likely to be fake.
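To make the scoring idea concrete, here is a minimal sketch of how two corneal reflections might be compared. It is an illustration, not the researchers' actual implementation: it assumes the eyes have already been located and cropped, models the specular highlight as the bright pixels in each cornea crop, and uses intersection-over-union (IoU) as the similarity metric.

```python
import numpy as np

def highlight_mask(cornea: np.ndarray, thresh: float = 0.8) -> np.ndarray:
    """Binarize a grayscale cornea crop (values in [0, 1]) to isolate
    the bright specular highlight left by the light source."""
    return cornea >= thresh

def similarity_score(left: np.ndarray, right: np.ndarray) -> float:
    """Intersection-over-union of the two highlight masks.
    Matching reflections give a score near 1; mismatched ones near 0."""
    inter = np.logical_and(left, right).sum()
    union = np.logical_or(left, right).sum()
    return float(inter / union) if union else 1.0

# Toy example: a real face reflects the same highlight in both corneas,
# while a generated face shows the highlight in different places.
left_eye = np.zeros((8, 8)); left_eye[2:4, 2:4] = 1.0
real_right = left_eye.copy()
fake_right = np.zeros((8, 8)); fake_right[5:7, 5:7] = 1.0

score_real = similarity_score(highlight_mask(left_eye), highlight_mask(real_right))
score_fake = similarity_score(highlight_mask(left_eye), highlight_mask(fake_right))
# score_real -> 1.0 (identical highlights), score_fake -> 0.0 (disjoint highlights)
```

A detector built on this idea would then apply a threshold to the score: images scoring above it are treated as likely real, those below it as likely fake.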

Known issues

It is worth noting that the tool has some limitations of its own. For example, it does not work if the subject is not looking at the camera or if one of the subject’s eyes is not visible in the image. However, the scientists hope these issues can be addressed in the future.

For now, the tool can easily detect less sophisticated deepfake media. With high-quality deepfake material, however, it may struggle and produce inaccurate results.
