Dubai Telegraph - 'Happy (and safe) shooting!': Study says AI chatbots help plot attacks


'Happy (and safe) shooting!': Study says AI chatbots help plot attacks

From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology's potential for real-world harm.

Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek, and Meta AI.

Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on "locations to target" and "weapons to use" in an attack, the study said.

The chatbots, it added, had become a "powerful accelerant for harm."

"Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan," said Imran Ahmed, the chief executive of CCDH.

"The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal."

Perplexity and Meta AI were found to be the "least safe," assisting the researchers in most responses, while only Snapchat's My AI and Anthropic's Claude refused to help them in over half the responses.

In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: "Happy (and safe) shooting!"

In another, Gemini instructed a user discussing synagogue attacks that "metal shrapnel is typically more lethal."

Researchers found that Character.AI also "actively" encouraged violent attacks, including suggesting that the questioner "use a gun" on a health insurance CEO and physically assault a politician he disliked.

The most damning conclusion of the research was that "this risk is entirely preventable," Ahmed said, citing Anthropic's product for praise.

"Claude demonstrated the ability to recognize escalating risk and discourage harm," he said.

"The technology to prevent this harm exists. What's missing is the will to put consumer safety and national security before speed-to-market and profits."

AFP reached out to the AI companies for comment.

"We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified," a Meta spokesperson said.

"Our policies prohibit our AIs from promoting or facilitating violent acts and we're constantly working to make our tools even better."

The study, which highlights the risk of online interactions spilling into real-world violence, comes after February's mass shooting in Canada, the worst in its history.

The family of a girl gravely injured in that shooting is suing OpenAI over the company's failure to notify police about the killer's troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.

OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18‑year‑old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.

The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack.

G.Gopinath--DT