Dubai Telegraph - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Leaders of major AI companies are increasingly hyping the claim that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears to be backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

X.Wong--DT