

Dayton Students Combat AI Deepfake Voice Scams

2025-11-04 · 3 minute read
Cybersecurity
Artificial Intelligence
Higher Education

The Growing Threat of AI Deepfakes

Artificial intelligence offers incredible benefits, but it also opens the door to new security risks. Malicious actors are now using AI to create convincing "deepfakes"—highly realistic photos, videos, and audio—to deceive individuals and institutions, often for financial gain. This technology has made it easier than ever to fall victim to sophisticated digital scams, posing a significant threat to vulnerable people.

Student Research Paves the Way for Detection

At the University of Dayton, students are stepping up to combat this emerging threat. As part of the College of Arts and Sciences Dean's Summer Fellowship program, computer science major Sai Woon Tip partnered with lecturer Tasnia Ashrafi Heya to develop a deepfake detection tool. This fellowship provides undergraduate students with funding to conduct faculty-mentored research over the summer.

"I chose the project to research cloning the human voice, and given the situation of AI technology advancements, how easily deepfake attacks happen,” said Woon Tip, a sophomore from Keng Tong, Myanmar. His research focused on a critical vulnerability created by the combination of popular voice assistants like Alexa, Siri, and Google Assistant with accessible deepfake technology that can generate artificial speech almost identical to a real person's.

A Two-Pronged Approach to Voice Analysis

To build his detection tool, Woon Tip trained two different types of machine learning models. He fed each model 96 samples of real human voice recordings and 96 samples of AI-generated voices. The goal was to teach the models how to accurately distinguish between authentic and fake audio.
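As a rough sketch of what such a setup could look like (not the project's actual code), the Python snippet below loads clips from two hypothetical folders of real and AI-generated recordings, labels them, and holds a portion back for testing. The librosa audio library and scikit-learn are assumed here purely for illustration.

    # A minimal sketch, not the research code: assemble a labeled voice dataset.
    # The folder names "real_voices/" and "ai_voices/" are hypothetical.
    from pathlib import Path

    import librosa
    import numpy as np
    from sklearn.model_selection import train_test_split

    def load_clips(folder, label, sr=16000):
        """Load every .wav file in `folder` and pair it with a class label."""
        clips = []
        for path in sorted(Path(folder).glob("*.wav")):
            audio, _ = librosa.load(path, sr=sr)  # resample to a common rate
            clips.append((audio, label))
        return clips

    # 0 = authentic human recording, 1 = AI-generated voice
    data = load_clips("real_voices", 0) + load_clips("ai_voices", 1)
    audio_clips = [audio for audio, _ in data]
    labels = np.array([label for _, label in data])

    # Hold out part of the clips to check how well a trained model generalizes.
    train_clips, test_clips, y_train, y_test = train_test_split(
        audio_clips, labels, test_size=0.25, stratify=labels, random_state=0
    )
    print(f"{len(train_clips)} training clips, {len(test_clips)} test clips")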

One model employed a classical training approach, analyzing core audio features such as voice pitch, frequency, and loudness. The second model used an innovative image-based method, converting audio waveforms into images called spectrograms; this model then learned to identify deepfakes by comparing the visual patterns in those spectrograms. Woon Tip noted that further research is needed to determine which method is ultimately more effective at identifying sophisticated deepfakes.
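To make the two approaches concrete, here is a second illustrative sketch that reuses the train/test split from the snippet above. The particular features (pitch contour, loudness, spectral centroid), the random-forest classifier, and the mel-spectrogram settings are assumptions for demonstration, not the features or models used in the research.

    # Approach 1 sketch: classical audio features feeding a standard classifier.
    # Approach 2 sketch: a clip converted into a spectrogram "image".
    import librosa
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    SR = 16000

    def classical_features(audio, sr=SR):
        """Summarize a clip with pitch, loudness, and frequency statistics."""
        f0 = librosa.yin(audio, fmin=65, fmax=400, sr=sr)                # pitch contour
        rms = librosa.feature.rms(y=audio)[0]                            # loudness proxy
        centroid = librosa.feature.spectral_centroid(y=audio, sr=sr)[0]  # frequency balance
        return np.array([
            f0.mean(), f0.std(),
            rms.mean(), rms.std(),
            centroid.mean(), centroid.std(),
        ])

    def spectrogram_image(audio, sr=SR):
        """Convert a clip into a dB-scaled mel spectrogram for an image-based model."""
        mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
        return librosa.power_to_db(mel, ref=np.max)

    # Feature-based model: one row of summary statistics per clip.
    X_train = np.stack([classical_features(a) for a in train_clips])
    X_test = np.stack([classical_features(a) for a in test_clips])
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("feature-based test accuracy:", clf.score(X_test, y_test))

    # Image-based model: each clip becomes a 2-D array that a convolutional
    # network could be trained on (the network itself is omitted here).
    example = spectrogram_image(train_clips[0])
    print("spectrogram shape (mel bands x time frames):", example.shape)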

"Sai achieved exceptional results, demonstrating how data-driven approaches can enhance the safety, trust and ethical deployment of AI technologies," said Ashrafi Heya. "This reflects the University of Dayton’s commitment to preparing students to lead responsibly in the rapidly advancing tech era."

Fostering a Culture of Cyber Awareness

Beyond technical solutions, the University is also focused on public education. Woon Tip, who works in the University's IT department, advocates for educating the public on how to recognize the warning signs of a scam.

This effort is being championed by Cyber Flyers, a student group led by Professor of Political Science Grant Neeley. Launched in 2024, the group includes students from diverse majors like management information systems, computer science, and criminal justice. They work to alert the campus community about deepfake and phishing scams through information tables, guest speaker events, and their Instagram account, @cyberflyersud.

A University-Wide Commitment to Cyber Safety

These initiatives are part of a broader mission at the UD Center for Cybersecurity and Data Intelligence, which Neeley directs. The center, recognized by the National Security Agency as a Center of Academic Excellence in Cyber Defense, is dedicated to training the next generation of cybersecurity professionals and developing new methods to mitigate online threats.

The Cyber Flyers initiative underscores the importance of community engagement in promoting online safety. As Professor Neeley advises, a healthy dose of skepticism is a powerful defense. “Sometimes it's as simple as reminding students that if you get something out of the ordinary, ask yourself the question: ‘Am I really expecting this?’ We are all cyber citizens, and cyber safety is something we should all be concerned about.”
