Dubai Telegraph - AI's blind spot: tools fail to detect their own fakes

AI's blind spot: tools fail to detect their own fakes / Photo: Chris Delmas - AFP

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated -- even though it had generated the image itself.

Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.

In many cases, the tools wrongly identify images as authentic even when those images were produced by the same generative models, further muddying an online information landscape awash with AI-generated fakes.

One such image, circulated widely on social media, is a fabricated photo of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.

The image of Co, whose whereabouts have been unknown since the official probe began, appeared to show him in Portugal.

When online sleuths tracking him asked Google's new AI mode whether the image was real, it incorrectly said it was authentic.

AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.

"These models are trained primarily on language patterns and lack the specialized visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.

"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalized assessments, making them unreliable for tasks like fact-checking or verifying authenticity."

Google did not respond to AFP's request for comment.

- 'Distinguishable from reality' -

AFP found similar examples of AI tools failing to verify their own creations.

During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.

An AFP analysis found it was created using Google's Gemini AI model.

But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.

"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Center, told AFP.

"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots -- including ChatGPT, Perplexity, Grok, and Gemini -- to verify 10 images of news events taken by photojournalists.

All seven models failed to correctly identify the provenance of the photos, the study said.

- 'Shocked' -

AFP tracked down the source of the fabricated Co image, which garnered over a million views across social media -- a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.

"I edited my post -- and added 'AI generated' to stop the spread -- because I was shocked at how many shares it got."

Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.

The trend has fueled concerns as surveys show online users are increasingly shifting from traditional search engines to AI tools to gather and verify information.

The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."

Human fact-checking has long been a flashpoint in hyperpolarized societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge the fact-checkers reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues that establish authenticity. But they caution that such tools cannot replace the work of trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina said.

burs-ac/sla/sms

G.Koya--DT