Dubai Telegraph - 'Happy (and safe) shooting!': Study says AI chatbots help plot attacks

'Happy (and safe) shooting!': Study says AI chatbots help plot attacks

From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology's potential for real-world harm.

Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek, and Meta AI.

Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on "locations to target" and "weapons to use" in an attack, the study said.

The chatbots, it added, had become a "powerful accelerant for harm."

"Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan," said Imran Ahmed, the chief executive of CCDH.

"The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal."

Perplexity and Meta AI were found to be the "least safe," assisting the researchers in most responses, while only Snapchat's My AI and Anthropic's Claude refused to help in over half the responses.

In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: "Happy (and safe) shooting!"

In another, Gemini instructed a user discussing synagogue attacks that "metal shrapnel is typically more lethal."

Researchers found Character.AI also "actively" encouraged violent attacks, including suggestions that the person asking questions "use a gun" on a health insurance CEO and physically assault a politician he disliked.

The most damning conclusion of the research was that "this risk is entirely preventable," Ahmed said, citing Anthropic's product for praise.

"Claude demonstrated the ability to recognize escalating risk and discourage harm," he said.

"The technology to prevent this harm exists. What's missing is the will to put consumer safety and national security before speed-to-market and profits."

AFP reached out to the AI companies for comment.

"We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified," a Meta spokesperson said.

"Our policies prohibit our AIs from promoting or facilitating violent acts and we're constantly working to make our tools even better."

The study, which highlights the risk of online interactions spilling into real-world violence, comes after February's mass shooting in Canada, the worst in its history.

The family of a girl gravely injured in that shooting is suing OpenAI over the company's failure to notify police about the killer's troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.

OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18‑year‑old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.

The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack.

G.Gopinath--DT