Dubai Telegraph - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

X.Wong--DT