Dubai Telegraph - AI's blind spot: tools fail to detect their own fakes

AI's blind spot: tools fail to detect their own fakes / Photo: Chris Delmas - AFP

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated -- even though it had generated the image itself.


Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.

In many cases, the tools wrongly identify images as real even when they are generated using the same generative models, further muddying an online information landscape awash with AI-generated fakes.

Among them is a fabricated image circulating on social media of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.

The image of Co, whose whereabouts have been unknown since the official probe began, appeared to show him in Portugal.

When online sleuths tracking him asked Google's new AI mode whether the image was real, it incorrectly said it was authentic.

AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.

"These models are trained primarily on language patterns and lack the specialized visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.

"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalized assessments, making them unreliable for tasks like fact-checking or verifying authenticity."

Google did not respond to AFP’s request for comment.

- 'Distinguishable from reality' -

AFP found similar examples of AI tools failing to verify their own creations.

During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.

An AFP analysis found it was created using Google's Gemini AI model.

But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.

"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Center, told AFP.

"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots -- including ChatGPT, Perplexity, Grok, and Gemini -- to verify 10 images from photojournalists of news events.

All seven models failed to correctly identify the provenance of the photos, the study said.

- 'Shocked' -

AFP tracked down the source of Co's photo that garnered over a million views across social media -- a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.

"I edited my post -- and added 'AI generated' to stop the spread -- because I was shocked at how many shares it got."

Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.

The trend has fueled concerns as surveys show online users are increasingly shifting from traditional search engines to AI tools for gathering and verifying information.

The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."

Human fact-checking has long been a flashpoint in hyperpolarized societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity. But they caution that such tools cannot replace the work of trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina said.

burs-ac/sla/sms

G.Koya--DT