Podcast notes – Skeptical take on the AI revolution – Gary Marcus on Ezra Klein: “They’re just autocomplete…it’s mysticism to think otherwise”

Guest: Gary Marcus, NYU professor emeritus

Derived from things humans said, but it doesn’t always know the connections between those things – which can lead it to say wacky things

Transforms everything into an “embedding space”
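
Illustration (not from the episode): a toy sketch of what “embedding space” means – words become vectors, and similar usage means nearby vectors. The 3-d vectors below are made up; real models learn thousands of dimensions.

```python
# Illustrative only: a made-up 3-d "embedding space".
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.1, 0.4]),
    "queen": np.array([0.8, 0.2, 0.5]),
    "apple": np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a, b):
    # Similarity = how closely two vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words used in similar contexts end up near each other in the space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.99 (near)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.28 (far)
```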

It’s pastiche – imitating styles, cutting and pasting, template aspect
Both brilliance and errors come from this

Humans have some internal model – of physical world, of a relationship
Understanding is also about indirect meanings

AI doesn’t have these internal representations

Sam Altman – “humans are energy flowing through a neural network”

Human neural networks have much more structure than AI neural nets

Neural nets like ChatGPT have 2 problems:
- not reliable
- not truthful
Making them bigger doesn’t solve those problems – because they don’t have models of “reliability” or “truthfulness”
“They’re just auto-complete”
“It’s mysticism to think otherwise”
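
Illustration (not from the episode) of the “just autocomplete” point: generation is repeatedly predicting a likely next token from the preceding context, and nothing in the loop checks truth. The bigram table below is a made-up stand-in for a learned model.

```python
# Toy sketch of next-token generation. A real LLM uses a learned
# distribution over a huge vocabulary; this table is invented.
toy_model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def autocomplete(prompt, steps=3):
    tokens = prompt.split()
    for _ in range(steps):
        options = toy_model.get(tokens[-1])
        if not options:
            break
        # Greedy decoding: pick the most probable next token.
        # Note: nothing here checks whether the continuation is *true*.
        tokens.append(max(options, key=options.get))
    return " ".join(tokens)

print(autocomplete("the"))  # "the cat sat down"
```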

No conception of truth – all fundamentally bullshitting
It’s a very serious problem

Can use GPT-3 to make up stories about COVID, spread bullshit and misinformation

No existing tech to protect us from this potential tidal wave of misinformation

People used ChatGPT to make up fake coding answers and post them on Stack Overflow
Stack Overflow had to ban ChatGPT-generated answers

“Talking about having submachine guns of misinformation”

Ezra – ChatGPT is really good at mimicking styles, but core doesn’t have truth value or embedded meaning

Silicon Valley always good at surveillance capitalism
Now you can use AI to write targeted propaganda all day long

Ezra – this AI will be good for people and companies who don’t care about truthfulness – eg, Google doesn’t care about “truth” but about clicks

One of main AI use cases is SEO copy to drive clicks and engagement

In some respects, as models get bigger they get better (eg, at generating synonyms); but on truthfulness there hasn’t been as much progress

Loves AI and has thought about it his whole life – just wants it to work better, be on a better path

Biology has enormous complexity – too much for humans – AI could really help us
Also climate change
Could empower individuals like DALL-E does for artists
“Reason according to human values”
Right now we have mediocre AI – risk of being net negative – polarization of society, growth of misinformation

Parable of drunk looking for keys at night around a street light
The street light in AI is deep learning

Human mind does many things: recognize patterns, use analogies, plan things, interpret language
We’ve only made progress on a tiny part
We need other tools (not just deep learning)

Fight between neural nets and symbols (symbolic systems / symbolic learning)
Find ways to bridge these 2 traditions
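
Illustration (not from the episode) of what a bridge could look like: a neural component handles fuzzy perception, a symbolic component does exact reasoning over its outputs. recognize() here is a hypothetical stub, not any real library.

```python
# Sketch of a neurosymbolic split: neural for perception, symbolic for
# reasoning. recognize() is a placeholder for a trained classifier.
def recognize(image):
    # Hypothetical neural step: map raw pixels to a symbol + confidence.
    # (Stubbed out here; a real system would run a trained network.)
    return {"symbol": 3, "confidence": 0.97}

def symbolic_sum(images):
    # Symbolic step: once inputs are symbols, arithmetic is exact and
    # guaranteed – no amount of extra training data is needed.
    return sum(recognize(img)["symbol"] for img in images)

print(symbolic_sum(["img_a", "img_b"]))  # 6
```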

Ezra:
Deep learning – give it all this data and it figures things out itself, but it’s bad at abstraction
Symbolic systems – great at abstraction (eg, teach it multiplication from first principles), abstract procedures that are guaranteed to work (see sketch below)
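
Illustration (not from the episode) of the abstraction point: define multiplication from first principles, Peano-style, and the procedure works for any inputs with zero training data.

```python
# Peano-style arithmetic: rules, not examples.
def add(a, b):
    # a + 0 = a;  a + succ(b) = succ(a + b)
    return a if b == 0 else add(a, b - 1) + 1

def multiply(a, b):
    # a * 0 = 0;  a * succ(b) = a * b + a
    return 0 if b == 0 else add(multiply(a, b - 1), a)

# Works for numbers never "seen" before – the rule itself generalizes.
print(multiply(7, 6))    # 42
print(multiply(123, 4))  # 492
```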

Most symbol manipulation is hardwired
Kids learn to count using generalized rules

AI hasn’t matched humans in capacity to acquire new rules – some paradigm shift needed
We’re worshipping the false god of “more data”
Believes there will be genuine new innovation in the next decade

Stopped ~50 minutes in