Dubai Telegraph - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first eliminated the humans it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it."

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

X.Wong--DT