Lecture notes: Gary Marcus on the human mind as a kluge (Talks at Google) – “Evolution is a tinkerer using spare parts”

New book: Kluge: The Haphazard Construction of the Human Mind

Hamlet: “What a piece of work is a man…how noble in reason, how infinite in faculty”

Bertrand Russell: “It has been said that man is a rational animal. All my life I have been searching for evidence which could support this.”

Argues man is the RATIONALIZING animal (not the “rational animal”) – searching for reasons to explain why we do what we do

Default thinking is that natural selection leads to optimality in humans
But how to reconcile with the manifest clumsiness of the human mind?

Visual system evolving for a billion years
But distinctively human things – like talking, rational deliberation – these are much more recent (eg, 50-100K years ago)

Hill climbing metaphor – issue of local maxima
Evolution won’t necessarily lead to superlative adaptation
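
A minimal sketch of the local-maximum problem (my illustration, not from the talk; the two-peak landscape and step size are arbitrary choices):

```python
# Hill climbing on a bumpy fitness landscape: the climber only ever
# accepts uphill steps, so it stops at the nearest peak -- which may
# be a local maximum, not the global one.
import math

def fitness(x):
    # Two peaks: a small one near x=1, a taller one near x=4.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        uphill = max(x - step, x + step, key=fitness)
        if fitness(uphill) <= fitness(x):
            break  # no uphill neighbor: we're stuck on a peak
        x = uphill
    return x

print(hill_climb(0.0))  # starts near the small peak -> stuck around x=1
print(hill_climb(3.0))  # starts near the tall peak  -> reaches x~4
```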

Human spine is a kluge – not the best way to support a bipedal creature; leads to back pain / fragility – eg, multiple columns would be a better solution

Evolution builds kluges because it has neither foresight nor hindsight – it’s effectively blind
Evolution is always in a hurry – shipping deadline of “now”

Darwin didn’t coin “survival of the fittest” (Herbert Spencer did) – and what it really means is fittest of the available options
It’s really “Descent with modification”

Bird’s wing is a modification of the forelimb common to all 4-legged creatures – it wasn’t designed from scratch as the best possible wing

Concept of “evolutionary inertia”
Genes are conserved
Evolution is a tinkerer using spare parts

Kluge is about 3 things
1. Limits of human mind
2. Where they came from
3. What we can do about them

Computer memory – there’s a master map of where everything is stored – like a series of safe deposit boxes
Stored once, saved forever
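
A toy sketch of this (mine, not Marcus’s): location-addressed storage, where a master map makes retrieval exact:

```python
# Computer memory as safe deposit boxes: a master map (the dict's keys)
# records exactly where each item lives, so retrieval is exact and
# nothing degrades over time.
memory = {}
memory[0x4A] = "keys on the hall table"  # stored once...
print(memory[0x4A])                      # ...retrieved perfectly, every time
```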

Human memories are nowhere near as reliable – there’s no master map
Human memory balances 2 things: RECENCY and FREQUENCY
That’s why you often forget where you put your keys, where you parked your car
That’s why pilots use checklists to make sure they do all the required tasks
Evolution didn’t bless us with an erase feature

Brain uses a broadcast system to retrieve info and memories – doesn’t know where individual memories are

Memory is context dependent – if you study while stoned, you might be better off taking the test while stoned
Even your posture (eg, seated or standing) can affect memory recall (!)
Seems to apply to rat + maze experiments too
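
A caricature of cue-driven recall in code (my sketch; the linear score and weights are assumptions, not anything Marcus gives): retrieval broadcasts a cue and the best-matching trace wins, with recency, frequency, and context all weighing in.

```python
# Toy model of cue-driven memory: no addresses, just traces competing
# on recency, frequency, and context overlap. Weights are arbitrary.
from dataclasses import dataclass, field

@dataclass
class Trace:
    content: str
    last_used: int  # timestep of last retrieval (recency)
    uses: int       # how often retrieved (frequency)
    context: set = field(default_factory=set)

def recall(traces, cue_context, now):
    def score(t):
        recency = 1.0 / (1 + now - t.last_used)
        frequency = t.uses
        overlap = len(t.context & cue_context)
        return recency + 0.1 * frequency + overlap
    return max(traces, key=score)  # "broadcast" the cue; best match wins

traces = [
    Trace("parked on level 2", last_used=9, uses=1, context={"garage"}),
    Trace("parked on level 5", last_used=3, uses=40, context={"garage"}),  # the usual spot
]
# Today's spot loses to the habitual one -- why we "forget" where we parked.
print(recall(traces, {"garage"}, now=10).content)
```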

Eyewitness testimony is particularly subject to these memory errors
Shooting incident with 30 eyewitnesses who couldn’t agree about what happened

Framing: “estate tax” versus “death tax”

Garbage in → garbage out

We take biased samples of data and reason from those limited samples

Confirmation bias – eg, religious beliefs, presidential elections

Depression – being depressed makes you more likely to have depressing thoughts, a negative spiral – normal brains have the ability to stop this process

Languages have lots of ambiguities – “the spy shot the cop with the revolver” (did the spy use the revolver, or was the cop holding it?)

Moral dilemmas – trolley problem – a person’s judgment can be affected by how messy the experimental room is!

Podcast notes – “A skeptical take on the AI revolution” – Gary Marcus on Ezra Klein: “They’re just autocomplete…it’s mysticism to think otherwise”

Guest: Gary Marcus, NYU professor emeritus

ChatGPT’s output is derived from things humans have said, but it doesn’t always know the connections between those things – which can lead it to say wacky things

It transforms everything into an “embedding space”
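
Rough sketch of what an embedding space means (my illustration; these 3-d vectors are invented, real models learn hundreds or thousands of dimensions):

```python
# Words as points in a vector space: similar usage -> nearby vectors.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb["king"], emb["queen"]))  # high: nearby in the space
print(cosine(emb["king"], emb["apple"]))  # low: far apart
```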

It’s pastiche – imitating styles, cutting and pasting, a template aspect
Both its brilliance and its errors come from this

Humans have some internal model – of physical world, of a relationship
Understanding is also about indirect meanings

AI doesn’t have these internal representations

Sam Altman – “humans are energy flowing through a neural network”

Human neural networks have much more structure than AI neural nets

Neural nets like ChatGPT have 2 problems:
-not reliable
-not truthful
Making them bigger doesn’t solve those problems – because they don’t have models of “reliability” or “truthfulness”
“They’re just auto-complete”
“It’s mysticism to think otherwise”
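
To make “just autocomplete” concrete, a deliberately tiny caricature (mine; it assumes a toy bigram table, where real LLMs condition on long contexts with a transformer):

```python
# "Autocomplete" in miniature: pick the most likely next word given the
# previous one, over and over. Nothing here checks truth -- only what
# tends to follow what. (Toy bigram counts, invented for illustration.)
bigram_counts = {
    "the":  {"spy": 5, "cop": 3},
    "spy":  {"shot": 4, "ran": 1},
    "shot": {"the": 6},
    "cop":  {"shot": 2, "ran": 3},
}

def autocomplete(word, length=5):
    out = [word]
    for _ in range(length):
        nxt = bigram_counts.get(out[-1])
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))  # greediest plausible continuation
    return " ".join(out)

print(autocomplete("the"))  # -> "the spy shot the spy shot": fluent-ish, truth-free
```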

No conception of truth – all fundamentally bullshitting
It’s a very serious problem

Can use GPT-3 to make up stories about COVID, spread bullshit and misinformation

No existing tech to protect us from this potential tidal wave of misinformation

People used ChatGPT to make up fake coding answers to post on Stack Overflow
Stack Overflow had to ban ChatGPT-generated answers

“Talking about having submachine guns of misinformation”

Ezra – ChatGPT is really good at mimicking styles, but at its core it has no truth value or embedded meaning

Silicon Valley has always been good at surveillance capitalism
Now you can use AI to write targeted propaganda all day long

Ezra – this AI will be good for people and companies who don’t care about truthfulness. eg, Google doesn’t care about “truth” but about clicks

One of main AI use cases is SEO copy to drive clicks and engagement

In some respects, models get better as they get bigger (eg, at generating synonyms); but on truthfulness there hasn’t been as much progress

Loves AI and has thought about it his whole life – just wants it to work better, be on a better path

Biology has enormous complexity – too much for humans – AI could really help us
Also climate change
Could empower individuals like DALL-E does for artists
“Reason according to human values”
Right now we have mediocre AI – risk of being net negative – polarization of society, growth of misinformation

Parable of drunk looking for keys at night around a street light
The street light in AI is deep learning

Human mind does many things: recognize patterns, use analogies, plan things, interpret language
We’ve only made progress on a tiny part
We need other tools (not just deep learning)

Fight between neural nets and symbols (symbolic systems / symbolic learning)
Find ways to bridge these 2 traditions

Ezra:
Deep learning – you give it all this data and it figures things out itself, but it’s bad at abstraction
Symbolic systems – great at abstraction (eg, teach it multiplication from first principles), abstract procedures that are guaranteed to work
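
A sketch of that contrast (my example, not from the episode): a symbolic rule generalizes by construction; pure memorization only covers what it has seen.

```python
# Symbolic multiplication from first principles: defined by a rule
# (a*b = a + a*(b-1)), so it is guaranteed to work for any naturals.
def mul(a, b):
    return 0 if b == 0 else a + mul(a, b - 1)

# "Learned" multiplication as pure memorization of seen examples:
# perfect on the training data, helpless one step outside it.
seen = {(a, b): a * b for a in range(10) for b in range(10)}

print(mul(12, 34))          # 408 -- the rule generalizes
print(seen.get((12, 34)))   # None -- outside the memorized table
```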

Most symbol manipulation is hard-wired
Kids learn to count using generalized rules

AI hasn’t matched humans in capacity to acquire new rules – some paradigm shift needed
We’re worshipping the false god of “more data”
Believes there will be genuine new innovation in the next decade

Stopped ~50 minutes in