DIVID Emerges as a Powerful AI Detection Tool, Identifying AI-Generated Videos with up to 93.7% Accuracy

Artificial Intelligence Fraud Incident

Artificial intelligence (AI) has changed how we live, work, and think. Most of us are familiar with AI tools and how they can be used to generate creative content faster and boost productivity on a global scale. But this technological leap has a downside: AI-assisted fraud. Earlier this year, a multinational company was duped into transferring $25 million to fraudsters.

The scammers used AI-generated video likenesses of the company's CFO and other coworkers, giving their carefully planned scheme enough apparent legitimacy to succeed. The incident underscores the pressing need for effective ways of detecting AI-generated content and stopping such fraud.

The convenience of artificial intelligence in production

AI has transformed numerous industries by taking over mundane, repetitive tasks, analyzing vast datasets, and creating artistic content. In creative fields, models such as OpenAI's GPT-3 and DALL-E produce high-quality text, images, and music, augmenting human creativity and productivity.

From simple customer service chatbots to advanced data analysis, businesses use AI extensively as a force multiplier that improves efficiency in almost every decision-making process.

Yet the threat AI poses is as vast and severe as its potential for positive impact. Who can forget the employees at the multinational company who wired $25 million to scammers?

The fraudsters employed artificial intelligence to craft convincing video clips of the CFO and other corporate executives. The clips resembled reality so closely that the employees could not tell they were being duped. It is a testament to the sophistication of AI-produced content and its potential to be weaponized for fraud.

Transcendent Artificial Intelligence

AI has now advanced to the point where it is difficult for humans to distinguish what is real from what is synthetic. Advanced models such as generative adversarial networks (GANs) and diffusion models achieve this level of realism. A GAN pits two neural networks against each other: a generator that produces fake data and a discriminator that tries to tell it apart from real content.

Driven by this adversarial feedback loop, the networks iterate over time to produce incredibly realistic synthetic videos. Diffusion models, in contrast, create images and videos by progressively denoising random noise through a well-defined sequence of steps, yielding high-quality visuals.
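To make the adversarial idea concrete, here is a minimal, hedged sketch of a single GAN training step in PyTorch. The toy network sizes, the random stand-in for "real" data, and the variable names are illustrative assumptions, not part of any system described in this article.

```python
# Illustrative GAN training step (toy example; sizes and data are assumptions).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(batch, data_dim)  # stand-in for real samples

# Discriminator step: learn to separate real samples from generated ones.
fake_batch = generator(torch.randn(batch, latent_dim)).detach()
d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
         bce(discriminator(fake_batch), torch.zeros(batch, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: learn to produce samples the discriminator labels as real.
fake_batch = generator(torch.randn(batch, latent_dim))
g_loss = bce(discriminator(fake_batch), torch.ones(batch, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Repeating these two steps is what gradually pushes the generator's output toward realism, and that same adversarial pressure is what makes the resulting fakes so hard for humans to spot.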

To safeguard against the increasing threat of AI-powered fraud, researchers at Columbia Engineering led by computer science professor Junfeng Yang have developed a new tool called DIVID (DIffusion-generated VIdeo Detector). DIVID extends the capabilities of their earlier tool, Raidar, which recognizes machine-generated text.

Unlike Raidar, which analyzes text directly without needing access to a large language model's internals, DIVID identifies AI-crafted videos by analyzing tells such as odd pixel arrangements and unnatural motion between frames.

DIVID detection technology

DIVID uses advanced detection methods to flag generated videos, even those produced by older AI models such as GANs. It searches for subtle discrepancies or artifacts that tend to appear in AI-generated videos but not in naturally shot footage, including individual pixel intensities, texture patterns, and the noise characteristics of video frames.

DIVID's success rests on its ability to detect anomalies that arise from the statistical averaging operations underlying AI-generated content.
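As an illustration of the kind of low-level signals such a detector might examine, the sketch below computes simple per-frame statistics (intensity distribution, texture energy, residual noise) with NumPy and SciPy. The function name and the particular statistics are assumptions chosen for illustration; they do not reproduce DIVID's actual implementation.

```python
# Hedged illustration of low-level frame statistics a detector might inspect.
import numpy as np
from scipy.ndimage import uniform_filter

def frame_statistics(frame: np.ndarray) -> dict:
    """Simple statistics for a grayscale frame with values in [0, 1]."""
    smoothed = uniform_filter(frame, size=3)   # local average of each pixel's neighborhood
    noise_residual = frame - smoothed          # high-frequency residual ("noise")
    grad_y, grad_x = np.gradient(frame)        # intensity gradients as a texture proxy
    return {
        "mean_intensity": float(frame.mean()),
        "intensity_std": float(frame.std()),
        "texture_energy": float(np.mean(grad_x**2 + grad_y**2)),
        "noise_std": float(noise_residual.std()),
    }

# Toy usage on a synthetic frame; a real pipeline would loop over decoded video frames.
print(frame_statistics(np.random.rand(128, 128)))
```

A classifier trained on statistics like these could, in principle, learn systematic differences between camera footage and generated frames; DIVID's own features and model are more sophisticated.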

The latest generative AI tools, such as Sora, Runway Gen-2, and Pika, use diffusion models to create videos. These new models are challenging detection technologies with their ability to generate high-quality visuals.

DIRE detection technology

To address this challenge, Yang's research team builds DIVID on DIRE (Diffusion Reconstruction Error). DIRE measures the disparity between an input image and the corresponding image reconstructed by a pretrained diffusion model; a small disparity signals likely AI involvement.
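The following is a minimal sketch of that reconstruction-error idea, assuming a hypothetical `reconstruct_with_diffusion` placeholder in place of a real pretrained diffusion model; it is not DIRE's actual implementation.

```python
# Hedged sketch of a DIRE-style score: reconstruct a frame with a diffusion
# model, then measure how far the reconstruction drifts from the input.
import numpy as np

def reconstruct_with_diffusion(frame: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder: a real implementation would invert the frame
    # into the diffusion model's noise space and denoise it back to pixels.
    return frame + np.random.normal(scale=0.01, size=frame.shape)

def reconstruction_error(frame: np.ndarray) -> float:
    """Mean absolute difference between a frame and its diffusion reconstruction."""
    reconstruction = reconstruct_with_diffusion(frame)
    return float(np.abs(frame - reconstruction).mean())

# Frames a diffusion model can reproduce almost exactly (low error) are more
# likely to have been generated by such a model in the first place.
print(reconstruction_error(np.random.rand(128, 128)))
```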

The process parallels Raidar's approach to detecting generated text: when a language model is asked to rewrite a passage, few edits suggest the passage was machine-generated, while heavier editing suggests human authorship.
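A hedged sketch of that rewrite-and-compare intuition follows, using a hypothetical `rewrite_with_llm` placeholder and a simple `difflib` similarity ratio; Raidar's actual prompts and scoring differ.

```python
# Illustrative rewrite-and-compare check (placeholder LLM call; not Raidar's code).
import difflib

def rewrite_with_llm(text: str) -> str:
    # Hypothetical placeholder: a real pipeline would prompt a language model
    # to rewrite the passage and return its output.
    return text  # pretend the model returned the passage nearly unchanged

def edit_similarity(original: str, rewritten: str) -> float:
    """1.0 means no edits; lower values mean heavier rewriting."""
    return difflib.SequenceMatcher(None, original, rewritten).ratio()

sample = "The quarterly report shows steady growth across all regions."
score = edit_similarity(sample, rewrite_with_llm(sample))
# Few edits (score near 1.0) hint at machine-generated text; heavy edits
# (a lower score) hint at human authorship.
print(score)
```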

On a benchmark dataset of diffusion-generated videos created by tools such as Stable Video Diffusion, Sora, Pika, and Gen-2, DIVID reached a detection accuracy of up to 93.7%, even on videos outside its training distribution.

This level of performance marks an important milestone in the fight against AI-powered fraud. By identifying AI-generated deepfakes, DIVID helps protect people and organizations from fraudulent activity.

The future development direction of DIVID

Developers can use DIVID as a command-line tool to analyze videos and determine their authenticity. The research team aims to expand its use by turning DIVID into a plugin for video conferencing applications like Zoom, which will help identify fake calls in real time.

Additionally, they plan to create a website or browser plugin to make DIVID's functionality accessible to everyday users.

Yang said this underscores the need for AI detection to keep pace with evolving generative technology. The Columbia Engineering research team intends to refine the DIVID framework further so it can handle synthetic videos produced by other open-source generation tools.

This continual evolution is essential for keeping pace with advances in generative AI and its expanding use across content creation.

Summary

The development of AI has significantly transformed various industries, improving creativity and productivity and accelerating innovation. However, the same technology that makes consumers' lives easier can also be used to mislead people and create real risks.

It is essential to harness AI innovation responsibly and to mitigate these risks; tools like Columbia Engineering's DIVID are a necessary part of that effort. By remaining vigilant and investing in robust detection technologies, we can fully benefit from AI while preventing its misuse.
