Security Flaw Found in AI Image Recognition

Summary: A recent study reveals a flaw in AI image recognition systems: they often ignore the alpha channel, which controls image transparency. Researchers created "AlphaDog," an attack strategy that manipulates the transparency of images, allowing hackers to alter images such as road signs and medical scans in ways that humans and AI perceive differently.

Tested across 100 AI models, AlphaDog exploits this transparency blind spot, posing significant risks to road safety and medical diagnostics. By highlighting these flaws in how image transparency is handled, the research urges improvements to AI models to protect critical sectors.

To address the problem and protect image recognition systems, the researchers are collaborating with major technology companies. The vulnerability underscores the importance of comprehensive security in AI development.

Essential Information

  • AlphaDog manipulates image transparency, misleading AI models in fields like road safety and healthcare.
  • Most AI systems ignore the alpha channel, which is critical for correctly handling image transparency.
  • Researchers are collaborating with technology companies to integrate alpha channel processing into AI systems.

Source: UT San Antonio

According to a recent study, artificial intelligence can help people process and understand large amounts of data with precision, but the image recognition platforms and computer vision models built into today's AI often overlook an important back-end feature: the alpha channel, which controls the transparency of images.

To investigate how hackers could exploit this oversight, researchers at The University of Texas at San Antonio (UTSA) developed a proprietary attack called AlphaDog.

Their findings are described in a paper by Guenevere Chen, an associate professor in the UTSA Department of Electrical and Computer Engineering, and her former doctoral student, Qi Xia '24, published at the Network and Distributed System Security Symposium 2025.

In the paper, the UTSA researchers describe the technology gap and offer recommendations for addressing this specific type of cyber threat.

"We have two targets. One is a human, and one is AI," Chen explained.

By creating AlphaDog, the researchers identified and exploited an alpha channel attack on images in order to assess the risk. The attack causes humans to perceive images differently from the way machines do; it works by altering the transparency of images.

The researchers generated 6,500 AlphaDog attack images and tested them across 100 AI models, including 80 open-source systems and 20 cloud-based AI platforms such as ChatGPT.

They discovered that AlphaDog is exceptionally effective at targeting grayscale regions within an image, allowing hackers to compromise both fully grayscale images and color images that contain grayscale regions.
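
To illustrate why grayscale regions are convenient targets, here is a minimal sketch of how such an image could be constructed. It assumes a white display background and grayscale content, and it illustrates the general idea rather than the paper's published construction: the payload the AI should read is stored in the RGB channels, and a per-pixel alpha value is solved for so that the composited result matches what the human should see.

```python
import numpy as np

def alphadog_style_image(payload: np.ndarray, human_view: np.ndarray) -> np.ndarray:
    """Illustrative sketch (not the paper's code): hide a grayscale payload.

    payload:    (H, W) uint8 grayscale image an RGB-only model will read.
    human_view: (H, W) uint8 grayscale image a human should see when the
                result is composited over a white background.

    Per pixel, solve  alpha * payload + (1 - alpha) * 255 = human_view,
    which has a valid alpha in [0, 1] wherever payload <= human_view.
    """
    p = payload.astype(float)
    h = human_view.astype(float)
    alpha = np.clip((255.0 - h) / np.maximum(255.0 - p, 1e-6), 0.0, 1.0)
    # Store the payload in R, G, and B, and the solved alpha in the 4th channel.
    rgba = np.stack([p, p, p, alpha * 255.0], axis=-1)
    return rgba.round().astype(np.uint8)
```

Wherever the payload is darker than the intended human view, a valid alpha exists, which is one reason dark grayscale content on light backgrounds, such as road signs and medical scans, is especially susceptible.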

The researchers tested the attack images in a range of everyday scenarios.

They discovered gaps in AI that pose a significant threat to road safety. Using AlphaDog, for example, they could manipulate the grayscale elements of road signs, which could potentially mislead autonomous vehicles.

In addition, they discovered that they could alter grayscale images such as X-rays, MRIs, and CT scans, potentially putting telehealth and medical imaging at risk of misdiagnosis.

This could also compromise patient safety and enable fraud, such as fraudulent insurance claims based on altered X-rays that make a normal leg appear broken.

Additionally, they discovered a way to alter images of people: the UTSA researchers could disrupt facial recognition systems by targeting the alpha channel.

AlphaDog exploits the differences in how AI and humans interpret image transparency. Digital images are typically encoded with red, green, blue, and alpha (RGBA) channels, with the alpha values defining the opacity of each pixel.

The alpha channel indicates how transparent each pixel is, allowing an image to be blended with a background image into a composite that has the appearance of transparency.
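
In code, this standard "over" compositing, which is what a viewer application performs when it displays a transparent image, looks roughly like the following sketch (assuming an opaque background and 8-bit channels):

```python
import numpy as np

def composite_over(fg_rgba: np.ndarray, bg_rgb: np.ndarray) -> np.ndarray:
    """What a human sees: blend an RGBA foreground over an opaque background."""
    rgb = fg_rgba[..., :3].astype(float)
    alpha = fg_rgba[..., 3:4].astype(float) / 255.0  # 0 = transparent, 1 = opaque
    blended = alpha * rgb + (1.0 - alpha) * bg_rgb.astype(float)
    return blended.round().astype(np.uint8)
```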

However, using AlphaDog, the researchers found that the AI models they tested do not read all four RGBA channels; instead, they read data only from the RGB channels.
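
The effect is easy to reproduce with a common imaging library. In the Pillow snippet below (an illustration, not the tested models' actual loading code), a fully transparent pixel still carries an RGB payload, and converting to RGB discards the alpha channel rather than compositing it:

```python
from PIL import Image
import numpy as np

# A fully transparent red pixel: a viewer compositing over a white page
# shows white, but the RGB payload (255, 0, 0) is still stored in the file.
rgba = Image.new("RGBA", (64, 64), (255, 0, 0, 0))

# A typical preprocessing step in vision pipelines: force three channels.
# Pillow's RGBA -> RGB conversion drops the alpha channel outright, so the
# model is fed the hidden payload, not the image a human would see.
rgb = rgba.convert("RGB")
print(np.array(rgb)[0, 0])  # -> [255 0 0]
```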

"AI is created by humans, and the people who wrote the code focused on RGB but left out the alpha channel. In other words, they wrote code for AI models to read image files without the alpha channel," said Chen. "That's the vulnerability. The absence of the alpha channel in these platforms leads to data poisoning."

She added, "AI is important. It's changing our world, and we have so many concerns."

Chen and Xia are working with several key stakeholders, including Google, Amazon, and Microsoft, to mitigate AlphaDog's ability to compromise systems.

About this AI research news

Author: Andrea Ari Castaneda
Source: UT San Antonio
Contact: Andrea Ari Castaneda – UT San Antonio
Image: The image is credited to Neuroscience News
