Podcast notes – Skeptical take on the AI revolution – Gary Marcus on Ezra Klein: “They’re just autocomplete…it’s mysticism to think otherwise”

Guest: Gary Marcus, NYU professor emeritus

Its output is derived from things humans said, but it doesn’t always know the connections between those things – which can lead to it saying wacky things

Transforms everything into an “embedding space”
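
To make the “embedding space” idea concrete, here is a toy, non-authoritative sketch (the vocabulary and vectors below are made up for illustration; real models learn thousands of dimensions from data): text gets mapped to vectors of numbers, and “related” just means “nearby” in that space.

```python
# Toy "embedding space": hand-picked vectors, purely illustrative.
# Real models learn these from data; the idea is the same: words/sentences
# become points, and similarity becomes geometric closeness.
import numpy as np

embeddings = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "puppy": np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.1, 0.9, 0.2]),
    "truck": np.array([0.2, 0.8, 0.3]),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))  # ~0.98: nearby
print(cosine_similarity(embeddings["dog"], embeddings["truck"]))  # ~0.33: far apart
```

Nearness in this space captures how things tend to be said together, not whether they are true – which is part of Marcus’s point here.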

It’s pastiche – imitating styles, cutting and pasting, template aspect
Both brilliance and errors come from this

Humans have some internal model – of physical world, of a relationship
Understanding is also about indirect meanings

AI doesn’t have these internal representations

Sam Altman – “humans are energy flowing through a neural network”

Human neural networks have much more structure than AI neural nets

Neural nets like ChatGPT have 2 problems:
- not reliable
- not truthful
Making them bigger doesn’t solve those problems – because they don’t have models of “reliability” or “truthfulness”
“They’re just auto-complete”
“It’s mysticism to think otherwise”

No conception of truth – all fundamentally bullshitting
It’s a very serious problem

Can use GPT-3 to make up stories about COVID, spread bullshit and misinformation

No existing tech to protect us from this potential tidal wave of misinformation

People used ChatGPT to make up fake coding answers to post on Stack Overflow
Stack Overflow had to ban ChatGPT-generated answers

“Talking about having submachine guns of misinformation”

Ezra – ChatGPT is really good at mimicking styles, but at its core it doesn’t have truth value or embedded meaning

Silicon Valley has always been good at surveillance capitalism
Now you can use AI to write targeted propaganda all day long

Ezra – this AI will be good for people and companies who don’t care about truthfulness. eg, Google doesn’t care about “truth” but about clicks

One of main AI use cases is SEO copy to drive clicks and engagement

In some respects, as models get bigger they get better (eg, at generating synonyms); but on truthfulness there hasn’t been as much progress

Loves AI and has thought about it his whole life – just wants it to work better and be on a better path

Biology has enormous complexity – too much for humans – AI could really help us
Also climate change
Could empower individuals like DALL-E does for artists
“Reason according to human values”
Right now we have mediocre AI – risk of being net negative – polarization of society, growth of misinformation

Parable of the drunk looking for his keys at night under a street light
The street light in AI is deep learning

The human mind does many things: recognize patterns, use analogies, plan, interpret language
We’ve only made progress on a tiny part
We need other tools (not just deep learning)

Fight between neural nets and symbols (symbolic systems / symbolic learning)
Find ways to bridge these 2 traditions

Ezra:
Deep learning – give all this data, figure it out itself, but bad at abstraction
Symbolic systems – great at abstraction (eg, teach it multiplication from first principles), abstract procedures that are guaranteed to work (a toy sketch of this contrast is below)
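
As a rough sketch of that contrast (toy code I’m adding for illustration, not anything from the episode): a symbolic procedure defined from first principles generalizes to inputs it has never seen, while a purely example-driven “learner” that just memorizes training pairs has nothing to fall back on outside them.

```python
# Toy contrast: a symbolic rule vs. pure memorization (illustrative only).

def symbolic_multiply(a: int, b: int) -> int:
    """Multiplication defined from first principles as repeated addition.
    The abstract procedure works for any non-negative integers, seen or unseen."""
    result = 0
    for _ in range(b):
        result += a
    return result

# A "learner" that only memorizes observed examples, with no underlying rule.
training_examples = {(2, 3): 6, (4, 5): 20, (7, 8): 56}

def memorizing_multiply(a: int, b: int):
    return training_examples.get((a, b))  # None for anything it hasn't seen

print(symbolic_multiply(123, 456))    # 56088: generalizes far beyond the examples
print(memorizing_multiply(2, 3))      # 6: within its "training data"
print(memorizing_multiply(123, 456))  # None: no abstraction to fall back on
```

Real deep learning interpolates rather than memorizing exact pairs, but the underlying point is the same: it struggles with the kind of guaranteed, rule-based generalization that symbolic systems get for free.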

Most symbol manipulation is hard-wired
Kids learn to count using generalized rules

AI hasn’t matched humans in capacity to acquire new rules – some paradigm shift needed
We’re worshipping the false god of “more data”
Believes there will be genuine new innovation in the next decade

Stopped ~50 minutes in

AI will take your job — IF you don’t learn how to use it

I think this paper is a great illustration of AI’s power, not to eliminate our jobs, but to augment and enrich them:

Although the danger of AI to radiologists is overblown, the new medical computer vision industry will profoundly change how radiologists practice, most likely in a direction that pleases radiologists. And AI has the potential to democratize radiology by enabling nonradiologists in underserved areas to tap into subspecialty expertise, perhaps on their mobile devices.

And:

“Will AI replace radiologists?” is the wrong question. The right answer is: Radiologists who use AI will replace radiologists who don’t.

And another example in a completely different industry:

Bank tellers are often cited as the canonical example of a job replaced by technology. But reliable studies of the industry show no such effect. In 1985, the United States had 60 000 automated teller machines (ATMs) and 485 000 bank tellers. In 2002, there were 352 000 ATMs and 527 000 bank tellers. The U.S. Bureau of Labor Statistics counted 600 500 bank tellers in 2008 and projects that this number grew to 638 000 in 2018 (24). Instead, bank tellers’ responsibilities advanced from the drudgery of withdrawals and deposits at the bank window to more interesting and sophisticated transactions.

Source: https://pubs.rsna.org/doi/full/10.1148/ryai.2019190058

“Artificial intelligence has the verbal skills of a four-year-old”

This was written in 2013.

Over the decades it has become apparent that simply throwing more processor cycles at the problem of true artificial intelligence isn’t going to cut it. A brain is orders of magnitude more complex than any AI system developed thus far, but some are getting closer.

And in the seven years since this post, we now have GPT-3, capable of writing essays at least as good as those of college graduates, on just about any topic that interests you, whether that’s cryptocurrencies or the history of feudalism or the chemistry of marijuana.

I’m not trying to single out this article. It’s just a powerful reminder of how difficult it is for anybody to properly assess current technology, because of our inability to fully grasp the effects of exponential growth and the surprises of non-linear innovation.

We keep trying to peer into the future, and we keep being surprised when the future actually arrives.

What else are we getting wrong about our technologies of today? What other technologies have the skills or experience of a four-year-old?

A few come to mind: Virtual reality. Blockchains. Robots (although this is increasingly not the case).