// everything is paraphrased from Sam’s perspective unless otherwise noted
Base model is useful, but adding RLHF – taking human feedback (eg, of two outputs, which is better) – works remarkably well with remarkably little data to make the model more useful
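The pairwise "which output is better" feedback described above is typically turned into a reward-model training signal. A minimal sketch of the standard Bradley-Terry style pairwise loss (my own illustration, not OpenAI's actual code):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise loss for a reward model: given two outputs where a human
    preferred the first, the loss is small only when the model scores the
    chosen output above the rejected one. Equals -log(sigmoid(margin))."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss shrinks as the reward model agrees more strongly with the human label, which is part of why a little comparison data goes a long way.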
Pre-training dataset – lots of open-source databases, partnerships – a lot of the work is building a great dataset
“We should be in awe that we got to this level” (re GPT 4)
Eval = how to measure a model after you’ve trained it
Compressing all of the web into an organized box of human knowledge
“I suspect too much processing power is using model as database” (versus as a reasoning engine)
Every time we put out new model – outside world teaches us a lot – shape technology with us
ChatGPT bias – “not something I felt proud of”
Answer will be to give users more personalized, granular control
Hope these models bring more nuance to world
Important for progress on alignment to increase faster than progress on capabilities
GPT4 = most capable and most aligned model they’ve done
RLHF is important component of alignment
Better alignment leads to better capabilities, and vice versa
Tuned GPT4 to follow system message (prompt) closely
There are people who spend 12 hours/day, treat it like debugging software, get a feel for model, how prompts work together
Dialogue and iterating with AI / computer as a partner tool – that’s a really big deal
Dream scenario: have a US constitutional convention for AI, agree on rules and system, democratic process, builders have this baked in, each country and user can set own rules / boundaries
Doesn’t like being scolded by a computer — “has a visceral response”
At OpenAI, we’re good at finding lots of small wins, the detail and care applied — the multiplicative impact is large
People getting caught up in parameter count race, similar to gigahertz processor race
OpenAI focuses on just doing whatever works (eg, their focus on scaling LLMs)
We need to expand on GPT paradigm to discover novel new science
If we don’t build AGI but make humans super great — still a huge win
Most programmers think GPT is amazing, makes them 10x more productive
AI can deliver extraordinary increase in quality of life
People want status, drama, people want to create, AI won’t eliminate that
Eliezer Yudkowsky’s AI criticisms – wrote a good blog post on AI alignment, despite much of the writing being hard to understand / having logical flaws
Need a tight feedback loop – continue to learn from what we learn
Surprised a bit by ChatGPT reception – thought it would be, eg, 10th fastest growing software product, not 1st
Knew GPT4 would be good – remarkable that we’re even debating whether it’s AGI or not
Re: AI takeoff, believes in slow takeoff, short timelines
Lex: believes GPT4 can fake consciousness
Ilya S said if you trained a model that had no data or training examples whatsoever related to consciousness, yet it could immediately understand when a user described what consciousness felt like – that would be some evidence of consciousness
Lex on Ex Machina: consciousness is when you smile for no audience, experience for its own sake
If he interviews job applicants who say their goal is movies or some other thing, instant rejection – YouTube should be THE goal
$100K is around max prize giveaway that most captures attention – above that, diminishing returns
First time in western social media you can go viral on all platforms with same content (short viral video)
Viral videos is a teachable skill
Lex: crucial part of your success is idea generation – others don’t put enough ideas on paper
Every 6 months, you should look back and be embarrassed by your old videos – otherwise you’re not learning enough
Need to be endlessly learning – will go on walks and call people to learn stuff
If new channel, what should you do?
“Just fail”
Your first 10 videos will not get views
Just start uploading. Make 100 videos and improve something every time – better script, better editing, better thumbnails
Maybe by 101st video you’ll have some views and can get serious
No such thing as a perfect video – every little thing can be improved
Create best ideas – then determine if they’re doable (don’t let practicality stop you)
eg, when they gave away a $10M island, really had to get creative to find a worthwhile island
It’s about intuition – you know your viewers the best, you spent most time on your content, trust your gut
His typical viewer: “a teenage boy that plays video games”
But if 30% of his viewers are women, and a video gets 100M views, that’s still 30M women (!)
Elon said “We want to limit the amount of regrettable minutes people spend on Twitter”
Lex: “I follow the thread of curiosity”
Lex: “I’m against centralized censorship and shadow banning”
Beast: agrees shadow banning should be transparent, you should let people know
Antarctica video
-during summertime the sun never goes down
-named a mountain that wasn’t named
-lucked out with warm weather
Process, eg, Stand in circle for 100 days
-figure out idea, act on inspiration
-need independent crew for those 100 days
-need 10 cameras rolling all times – trailer, cameras, house
Process, 100 adults v 100 kids
-did 100 boys v 100 girls, people loved it, wanted to do more
-lots of shooting problems eg, room with bad acoustics
–
Earlier on, if video did bad, he’d be devastated, cry over it
Now he’s much better about it, just figure out how to improve and move on
Earlier on, spent 24 hours on deserted island – but didn’t like the footage
So scrapped it, went back and spent another 24 hours LOL
Videos where weather is very hot, or filmed on water – he suffers a lot more (gets sea sick)
“Once you get over fear that you’ll wake up one day and be irrelevant”
Some creators go a little mentally insane during this process
Subscribers is vanity metric – doesn’t really correlate with views
If goal is to be a super successful entrepreneur, you either need to be WORKING or recharging / resting to recover
Need to find balance
Used to overwork and hate rest / downtime
Optimal day – going down list of his 8 companies, go thru biggest pain points for each (eg, Beast Burgers, Beast Charity, etc)
Delegated most day to day to his teams
For younger first time biz owners
-whenever he hires from traditional industries (eg, Disney) – they just don’t get it – YT is its own new thing
-doesn’t wanna get trapped in bubble of what works today
-shouldn’t start experimenting only when you plateau / start declining – could get even worse
–imperative to experiment while you’re still growing / winning
Beast Burgers
-just started as an experiment, didn’t plan to run a restaurant chain
Always wanted to do Feastables, hasn’t been any innovation in American snacks in a long time
“Feastables is just crushing”
Wal-Mart – didn’t think they’d do this kind of revenue
Had to stop promoting for a while due to supply chain issues
Expect to 10x in 2023
The Bitcoin Layer – a new-ish addition, I’m a regular reader of Nik’s newsletter, and the podcast offers a low-frills analysis of macro and specifically the US bond market, interest rates, and the Fed; he also tends to invite guests who are slightly less featured than the usual podcast-guest-circuit (eg, the Rogan-Friedman-Huberman axis, or the Ferriss-Rose-Vaynerchuk axis)
Scriptnotes – a long-time sub; swings back and forth between industry insider gossip and screenwriting 101; invites great guests (eg, recent episode with The Daniels)
Iced Coffee Hour – surprised by how much I enjoyed their chat with Tai Lopez, helped to humanize the guy behind his omnipresent Lambo and books; the show provides (me with) valuable insight into how a certain kind of Gen Z influencer / ambitious individual thinks
TWIML – been dipping more than a toe into the waters of AI and ML recently, and this podcast has frequent topical interviews that are the right mix of accessible and technical for me
My First Million – a guilty pleasure, but having been a steady subscriber since its early days, I now find that it relies a bit too much on Shaan and Sam’s personal stories (so after listening to eg, 10 episodes, it’ll start to feel like a family reunion where gregarious grandpa tells you about that one time he did X)
Bankless – comfort food for Ethereum fans, with occasional entrees of delicious deep dives, gotta appreciate Ryan + David’s chemistry
What Bitcoin Did – comfort food for Bitcoin fans; more philosophical as of late; always enjoy his Lyn Alden interviews; for me, it scratches a similar itch to Preston’s weekly Bitcoin interviews
Lex Fridman – in a class by itself; he’s the only interviewer who can bring world class guests (like this one w/ game designer Todd Howard) to spend 3 hours chatting about anything and everything; enhanced by Lex’s mix of patience and skeptical kindness
This podcast made me feel very stupid, and very inspired.
—
Turing test – in the 1950s, Turing didn’t mean it to be a rigorous formal test, more of a philosophical thought experiment – didn’t specify things like the parameters of the test, how long the test should last, etc
More modalities than just language to express intelligence eg physical movement
Played chess at 4, earnings from winning a chess competition let him buy a computer
Bought programming books, started making games, felt they were a magical extension of your mind
“AI is ultimate expression of what a machine can do or learn”
At 12, he reached chess master level; the process of learning chess made him think a lot about thinking and about brains
“The Chess Computer Handbook” by David Levy – explained how chess programs were made
First AI program he built was on his Amiga, programmed Othello
Wrote game called “Theme Park” with a core AI, sandbox game, reacted to players, every game was unique
He designed and wrote AI for games in 90s – at the time, game industry was cutting edge of tech (GPUs for game graphics, AI, John Carmack)
“Black & White” game – train a pet, and depending on how you train it, it would be more or less kind to others; powerful example of reinforcement learning
DeepMind – core part of strategy from start was to use games to test how well AI is doing, if the ideas are working
Eg, Go – clear rules and win conditions, humans have played for thousands of years, easy to test how good is your system vs human players
Part of why their AI has progressed so quickly – by developing against games
“Chess is the drosophila of intelligence” – Garry Kasparov
Many AI researchers have written chess AI programs
Deep Blue beating Kasparov was a huge moment – he was in college at the time – came away more impressed with Kasparov’s mind than with Deep Blue (because Kasparov could play almost at the AI level, but could also do all these other things as a human, while Deep Blue at that time couldn’t even play tic-tac-toe)
What makes chess compelling as a game? Creative tension between bishop and knight – leads to a lot of dynamism
Chess has evolved to balance those two more or less equally (worth 3 points each)
Balanced by humanity over hundreds of years
Different levels of creativity
1. Lowest level is interpolation – averaging everything you see (eg, “an average looking cat”)
2. Next is extrapolation – AI coming up with a new move in Go that no one’s seen
3. Out of the box innovation – coming up with a new game entirely – AI nowhere close to this yet
Currently AI can do 1 and 2 but not 3
For 3, if you were to instruct an AI to create a game, you’d say “come up with a game that takes 5 minutes to learn, but lifetimes to master, aesthetically beautiful, and can be completed in 3-4 hours”
We can’t abstract high level notions like that to AIs (yet)
AI could be used to make current games better by taking a game system, playing it millions of times, and then improving the balance of rules and parameters – give it a base set + Monte Carlo tree search – it takes humans many years and thousands of testers to do this
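The balance-tuning loop described above can be caricatured in a few lines. This is a toy sketch of my own: a single win-probability formula stands in for real self-play rollouts (no actual Monte Carlo tree search here), and a rule parameter is nudged until neither side has an edge:

```python
import random

def win_rate(bonus: float, n: int = 20000, seed: int = 0) -> float:
    """Simulate n games of a toy game where a rule parameter `bonus`
    inflates player 1's chance of winning above the fair 50%."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n) if rng.random() < 0.5 + bonus)
    return wins / n

def tune_balance(bonus: float, steps: int = 30, lr: float = 0.5) -> float:
    """Nudge the rule parameter until player 1's simulated win rate
    approaches 50% -- a stand-in for AI-driven playtesting at scale."""
    for _ in range(steps):
        bonus -= lr * (win_rate(bonus) - 0.5)  # simple feedback step
    return bonus
```

Starting from a lopsided rule (bonus = 0.2, ie, ~70% first-player wins), the loop drives the parameter back toward a fair game – the same idea, at vastly larger scale, as the millions-of-playthroughs balancing Demis describes.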
His first big game was Theme Park, simulating an amusement park – sims scaled up to whole cities – and Will Wright’s SimEarth simulated the whole Earth
“Simulation theory” – doesn’t believe it, in sense that we’re in a computer simulation / game, but does think best way to understand physics and universe is from computation perspective – information as fundamental unit of reality instead of energy or matter
Understanding physics as information theory could be valuable
Roger Penrose – The Emperor’s New Mind – he believes we need quantum effects, something more, to explain consciousness in the brain
Most neuroscientists / mainstream biologists haven’t found any evidence of this
Meanwhile classical Turing machines (computers) keep improving – and DeepMind / Demis’s work is a champion of this approach
Thinks universal Turing machines can eventually mimic human brain without Penrose need for something more
Something profoundly beautiful and amazing about our brains – incredibly efficient machines, in awe of it
Building AI and comparing to human mind will help us unlock what’s truly unique about our minds – consciousness, dreaming, creativity
Philosophy of mind – there haven’t been the tools, but today we increasingly have them
Lex – Universe built human mind which built computers to help us understand universe and human minds
Protein folding – AlphaFold 2 solved it
Proteins = essential to all life – workhorses of biology, amazing bio-nano machines, specified by genetic sequence, in the body they fold into 3D structure (like string of beads folded into a ball)
The 3D structure determines what it does – and drugs must understand this to interact with it
Structure maps to function, and is specified by amino acid sequence
Unique mapping for every protein – but it’s not obvious – and almost infinite possibilities
Can you by studying the sequence, predict the 3D structure?
Takes 1 PhD student an entire PhD to predict one protein. But AlphaFold 2 can do it in seconds now – over Christmas it could run over the entire human proteome (!!)
Biologists can now look up a protein’s 3D structure as easily as a Google search
AlphaFold was most complex and meaningful system they’ve built so far
Started on games (AlphaGo, AlphaZero), to bootstrap general learning systems
His passion is scientific challenges – AlphaFold is the first proof point; ~30 component algorithms were needed to crack protein folding
About 150K protein structures had been found – that was their training set
Would put some of AF’s best predictions back into training set to accelerate training
AF2 was truly end to end – from amino acid sequence directly to 3D structure, without needing all the intermediary steps – system is better at learning the constraints on its own instead of guiding it
AlphaGo – learning system but trained only to learn Go
AlphaZero – removed need to learn from human games – just plays against itself
MuZero – didn’t even need to be given the rules, just let it learn on its own
Started DeepMind in 2010 – back then no one was talking about AI, people mostly thought it doesn’t work (even at MIT)
If all professors tell you you’re mad, at least you know you’re on a unique track
Founding tenets / trends
-Algorithmic advances (reinforcement learning)
-Understanding about human brain (architectures, algos) improving
-Compute and GPUs improving
-Mathematical and theoretical definitions of intelligence
Early days – ideas were most important – deep reinforcement learning, transformers, scaling those up
As we get closer to AGI, engineering and data become more important
For large models – scale is clearly necessary but perhaps not sufficient
DeepMind – purposely built multi-disciplinary organization – neuroscience + machine learning + mathematics + gaming, and now philosophers and ethicists too
“A new type of Bell Labs”
DeepMind itself is a learning machine building a learning machine
Top things to apply AI – biology and curing diseases (AlphaFold), but it’s just beginning
Eventually simulate a virtual cell (maybe in 10 years) – “that’s my dream”
Drugs take 10 years – target to drug candidate – maybe it can be shortened to 1 year with this, AlphaFold as first proof point
Math is the perfect description language for physics; AI could be the perfect description language for biology (!)
Open-sourced AlphaFold (including data) – max benefit to humanity – so many downstream applications, better to accelerate research and discovery, used by 500K researchers (almost every biologist in the world!), amazing fundamental research, almost every pharma company is using it, “gateway drug to biology”
Also open-sourced MuJoCo – purchased it explicitly to open source it
One day an AI system could come up with something like General Relativity (!)
Big new breakthroughs will come at intersection of different subject areas (DeepMind = neuroscience + AI engineering)
We just don’t understand what it’d be like to hold the entire internet in your head (imagine reading all of Wikipedia, but much much greater) – no one knows what will result
Nuclear fusion – believe AI can help
In any new field, talk to domain experts for collaboration
What are all the bottleneck problems? Think from 1st principles
Which AI methods can help
Problem of plasma control is great example – plasma is unstable (mini-sun in a reactor), want to predict what plasma will do next, to best model and control it
They’ve largely solved it with AI, and now looking for other problems in fusion
Simulating properties of electrons – if you do it, you can describe how elements and materials work (fundamental to materials science)
Would like to simulate large materials – approximate Schrödinger’s equation
His ultimate aim for AI – to build a tool to help us understand the universe – to test the limits of physics
A true scientist – the more you find out, the more you realize you don’t know
Time, consciousness, gravity, life – fundamental things of nature – we don’t really know what they are
We treat them as fact and box them off – but there’s a lot of uncertainty about what they are
Use of AI is to accelerate science to the maximum – imagine a tree of all knowledge – we’ve barely scratched surface, and AI will turbocharge all of it – understanding and finding patterns, and then building tools
If you’re good at chess, you still can’t come up with a move like Garry Kasparov, but he can explain the move to you – potentially AI systems could understand things we could never by ourselves, and then explain it and make it useful for us
We’re already symbiotic with our phones and computers, Neuralink, and could augment / integrate with these AI
His current feeling is we are alone (no aliens)
We could easily be a million years ahead or behind in our evolution – eg, if the meteor that destroyed the dinosaurs came earlier or later – and in a few hundred years, imagine where we’ll be: AI, space travel – we’ll be spreading across the stars; it would only take ~1M years for von Neumann probes to populate the galaxy with that tech
We should have heard a cacophony of voices – but we haven’t heard anything
“We’ve searched enough – it should be everywhere”
If we’re alone, somewhat comforting re: Great Filter (maybe we’ve passed it)
Wouldn’t be surprised if we found single cell alien life – but multi-cellular seems incredibly hard
Another large leap is conscious intelligence
General intelligence is costly to begin with – the brain uses 20% of the body’s energy – a game of professional chess burns energy like an F1 race
Hard to justify evolutionarily – which is why it’s only been done once (on Earth)
AI systems – easy to craft specific solutions, but hard to do generally – at first general systems are way worse
Do AGI systems need consciousness? Consciousness and intelligence are double dissociable – can have one without the other in both ways
Eg, dogs have consciousness and are self-aware but not very intelligent; most animals are pretty specialized
Eg, some AI are amazingly smart at playing certain games or doing certain tasks, but don’t seem conscious
May be our responsibility to build systems that are not sentient
None of our systems today have one iota of consciousness or sentience – way too premature
Re: Google engineer who believed their language system was sentient – Demis believes it’s more a projection of our own minds, our desire to construct narrative and agency even within inanimate systems
Eliza AI chat bots in 1960s – already fooled some people
Neuroscience – certain pre-reqs may be required for consciousness, like self-awareness, coherent preferences over time
Turing test is important, but there’s a second difference with machines: they’re not running on the same substrate (humans are carbon-based squishy life forms)
Language models – we don’t understand them well enough yet to deploy them at scale
Should AI be required to announce that it is AI?
re: AI ethics, important to look at theology, philosophy, arts & humanities
Heading into an area of radical abundance and knowledge breakthroughs if we do it right – but also huge risks
Need careful controlled testing instead of just releasing into the wild, the harms could be fast and huge
Better to first build these AI systems as tools – carefully experiment and understand – before we really focus on sentience
How to prevent being corrupted by this AI power:
-Important to remain grounded and humble
-Being multi-disciplinary keeps you humble – because always better experts
-Have good colleagues who are also grounded
AI can learn for itself most of its knowledge, but will have residue of culture / values from who builds it
Globally we can’t seem to cooperate well eg, climate change
Need to remove scarcity to help promote world peace – radical abundance
AI should belong to the world and humanity, everyone should have a say
Advice for young
-what are your true passions? explore as many things as possible; find connections between things
-understand yourself – how do you deal with pressure, hone your uniqueness and skills
Perfect day in Demis’ life, habits
-10 years ago: whole day of research + programming, reading lots of papers, reading sci-fi at night or playing games
-today: very structured, complete night-owl, 11am-7pm work (back to back meetings, meet as many people as possible), go home and spend time with family, 10pm-4am do individual work, long stretches of thinking and planning and some email
-quiet hours of morning – love that time (1-3am), inspiring music, think deep thoughts, read philosophy books, do all his creative thinking, get into flow, sometimes will go to 6am next day, and pay for it the next day (but it’s worth it)
Always been a generalist – too many interesting things to spend time on just one
Lex: Why are we here?
Demis: To gain knowledge and understand the universe, understand world around us, humanity, and all these things flow from it: compassion, self-knowledge
Feel like universe is almost structured to let itself be understood and learned – why are computers even possible?
If Demis could ask one question of true AGI: “What’s true nature of reality?”
Answer could be a more fundamental explanation of physics, and how to prove them out
A deeper, simpler explanation of things – leading to consciousness, dreaming, life, gravity
Guest: Ariel Ekblaw, director of MIT Space Initiative
Host: Lex Fridman
Parents are ex-Air Force, both pilots
Legacy of pilots > astronauts
Dad was huge sci-fi fan, Heinlein, Asimov
Love affair with civilization scale space exploration
Apollo program leapt far ahead, now rest of space industry is catching up
Favorite book now is Neal Stephenson’s Seveneves
Cycle between authors and engineers – authors dream it up, engineers build it, inspire the next generation of authors, and the cycle continues
Long Now foundation – what does society need to do now for a long and prosperous future?
Instead of abandoning Earth, how do we use space tech to keep Earth livable?
Harsh conditions (such as those in space) are great forcing function for innovation and survival
Future of space habitats = intelligent structures + swarm robots (micro robots that can inspect, repair)
Distributed systems are critical for redundancy – eg, Space Station as a structure of decentralized nodes where, if an emergency happens, people can isolate it and move to another area
Tesserae – her PhD research topic – self assembling space structures – tiles with magnets for autonomous assembly, a decentralized system with sensors and self-determination
Building human sized tiles now
Want to build tiles large enough to form a buckyball (10m in diameter) – much bigger than current ISS modules
Buckyball is the most efficient surface-area-to-volume shape
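The efficiency claim is the classic isoperimetric intuition: for a fixed amount of hull material (surface area), a sphere – which the buckyball approximates – encloses the most volume. A quick sanity check against a cube of equal surface area (illustrative numbers of my own, not from the episode):

```python
import math

A = 100.0                              # fixed hull surface area, arbitrary units

r = math.sqrt(A / (4 * math.pi))       # sphere radius giving surface area A
v_sphere = (4.0 / 3.0) * math.pi * r**3

s = math.sqrt(A / 6.0)                 # cube side giving surface area A
v_cube = s**3

ratio = v_sphere / v_cube              # sphere encloses ~38% more volume
```

Per kilogram of pressurized hull launched, the rounder shape buys noticeably more habitable volume.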
What’s purpose of next gen space architecture? Can we give you goosebumps?
Programmable matter – she’s focused on big scale, but there’s other research on tiny self assembling structures
What’s a space cathedral look like? Long sight lines, stunning architecture, more organic (vs geometric), like a nautilus seashell
Truncated octahedrons as a great shape / structure for this self-assembly – one could be sleeping quarters, one a storage depot, etc
Unlike modern space stations, these can be re-configured, incorporate more space ships and crew and arrangements
Another favorite book – Ringworld (scifi novel by Larry Niven)
Microgravity vs zero gravity – no such thing as zero gravity, always gravity between two objects
Microgravity is essentially floating, but you’re actually in free fall
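The "floating but actually falling" point is easy to quantify: gravity at ISS altitude is still almost 90% of surface gravity; astronauts float because the station is in continuous free fall around the Earth. A back-of-envelope check with standard constants (my own sketch):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of Earth, kg
R = 6.371e6          # mean radius of Earth, m
h = 4.2e5            # ISS altitude, ~420 km

g_surface = G * M / R**2       # gravitational acceleration at the surface
g_iss = G * M / (R + h)**2     # still strong at orbital altitude
ratio = g_iss / g_surface      # ~0.88: nowhere near "zero gravity"
```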
Flew 9 times on Vomit Comet
True feeling of weightlessness, like flying in a dream
Instructors tell you to make a memory while you’re experiencing it, because it’s so novel and time flies by so quickly
Takes 3 years round-trip to fly to Mars – the journey takes 6-9 months each way, then you wait for the planets to reach favorable alignment before you can fly back (!)
If you stay in orbit a long time, will be lots of physiological changes
“Deep duration space missions”
Problems include:
–Radiation is biggest risk (on Earth you’re protected by magnetosphere)
–Mental health (small space, long duration, few people)
Space food – all freeze dried
Researching fermented food in space – lots of tasty foods (beer, wine, umami), good for microbiome
After several days to weeks, astronauts can adapt to space / micro gravity
Physical changes – Bone density, muscle atrophy, eyeball shape
There’s water on the moon – can use it for drinking water, propellant
SHERLOC experiment on Mars – searching for signs of past habitability, organic life potential
Search for alien life is profoundly exciting – may not be carbon based life, how do you build a detector for non-carbon (eg, silicon) life?
If life is as prolific as we hope, why haven’t we heard from it yet? The challenge of the Fermi Paradox
Shadow biosphere – concept of potentially alien / unrecognizable life that already exists on Earth
Lex: Challenge of human unpredictability / motivations – if you like someone too much, problems arise; if you don’t like someone at all, also problems arise
We want artificial gravity eventually – so for example there could be a treadmill section that’s near 1G (Earth gravity) where astronauts can spend part of the day
Many questions about sex in space – how does fetus evolve in low g?
Mars is not a good home currently for humanity
Atmosphere very thin, hard to grow crops
Small outpost is definitely possible – like in Antarctica (McMurdo)
likely early 2030s for first such mission
No supply chain for broken electronic parts, no grocery stores
Thinks floating space cities are more likely – built incrementally
ISS – joint effort of 18 countries, great example of international cooperation even during periods of eg US-Russia conflict
Lex: Space exploration unites us and gives us hope; Look up to the stars and dream
MIT going back to moon as early as this year or 2023
Testing swarm robots that ride on a Rover
Each robot has 4 magnetic wheels
Bill Anders: We came all the way to discover the Moon, and what we really discovered is the Earth