Podcast notes: Sam Altman (OpenAI CEO) on Lex Fridman – “Consciousness…something very strange is going on”

// everything is paraphrased from Sam’s perspective unless otherwise noted

Base model is useful, but adding RLHF – taking human feedback (eg, of two outputs, which is better?) – works remarkably well with remarkably little data to make the model more useful
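
The pairwise "which output is better?" feedback above is typically used to train a reward model with a Bradley-Terry-style loss. A minimal sketch with toy scores (this is an illustration of the idea, not OpenAI's actual setup):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry style loss: push the reward model to score
    the human-preferred output above the rejected one."""
    # P(chosen beats rejected) = sigmoid(r_chosen - r_rejected)
    p_chosen = 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))
    return -math.log(p_chosen)

# Toy scores from a hypothetical reward model: a correctly ordered
# pair incurs low loss, a mis-ordered pair high loss.
print(round(preference_loss(2.0, -1.0), 3))   # low
print(round(preference_loss(-1.0, 2.0), 3))   # high
```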

Pre-training dataset – lots of open-source databases, partnerships – a lot of the work is building a great dataset

“We should be in awe that we got to this level” (re GPT 4)

Eval = how to measure a model after you’ve trained it
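
At its simplest, an eval is a held-out set of inputs with expected outputs and a score. A toy sketch (the "model" here is just a stand-in function, not a real LLM):

```python
def evaluate(model, examples):
    """Score a trained model on held-out (prompt, expected) pairs."""
    correct = sum(1 for prompt, expected in examples if model(prompt) == expected)
    return correct / len(examples)

# Hypothetical "model": uppercases its input. It gets 2 of 3 right.
toy_examples = [("abc", "ABC"), ("x", "X"), ("hi", "Hi")]
print(evaluate(str.upper, toy_examples))  # 0.666...
```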

Compressing all of the web into an organized box of human knowledge

“I suspect too much processing power is using model as database” (versus as a reasoning engine)

Every time we put out new model – outside world teaches us a lot – shape technology with us

ChatGPT bias – “not something I felt proud of”
Answer will be to give users more personalized, granular control

Hope these models bring more nuance to world

Important for progress on alignment to increase faster than progress on capabilities

GPT4 = most capable and most aligned model they’ve done
RLHF is important component of alignment
Better alignment –> better capabilities, and vice versa

Tuned GPT4 to follow system message (prompt) closely
There are people who spend 12 hours/day, treat it like debugging software, get a feel for model, how prompts work together
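
The "system message" above is the first entry in a chat-style message list, setting rules the model is tuned to follow closely. A minimal sketch (OpenAI-style field names assumed; exact API details vary):

```python
def build_messages(system_prompt, user_prompt):
    """Assemble a conversation where the system message sets the
    rules, followed by the user's actual request."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages(
    "You are a terse assistant. Answer in one sentence.",
    "Explain RLHF.",
)
print(msgs[0]["role"])  # system
```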

Dialogue and iterating with AI / computer as a partner tool – that’s a really big deal

Dream scenario: have a US constitutional convention for AI, agree on rules and system, democratic process, builders have this baked in, each country and user can set own rules / boundaries

Doesn’t like being scolded by a computer — “has a visceral response”

At OpenAI, we’re good at finding lots of small wins, the detail and care applied — the multiplicative impact is large

People getting caught up in parameter count race, similar to gigahertz processor race
OpenAI focuses on just doing whatever works (eg, their focus on scaling LLMs)

We need to expand on the GPT paradigm to discover novel science

If we don’t build AGI but make humans super great — still a huge win

Most programmers think GPT is amazing, makes them 10x more productive

AI can deliver extraordinary increase in quality of life
People want status, drama, people want to create, AI won’t eliminate that

Eliezer Yudkowsky’s AI criticisms – wrote a good blog post on AI alignment, despite much of writing being hard to understand / having logical flaws

Need a tight feedback loop – continue to learn from what we learn

Surprised a bit by ChatGPT reception – thought it would be, eg, 10th fastest growing software product, not 1st
Knew GPT4 would be good – remarkable that we’re even debating whether it’s AGI or not

Re: AI takeoff, believes in slow takeoff, short timelines

Lex: believes GPT4 can fake consciousness

Ilya S said: if you trained a model on data with no mention of consciousness whatsoever, yet it could immediately understand when a user described what consciousness felt like, that would be real evidence the model might be conscious

Lex on Ex Machina: consciousness is when you smile for no audience, experience for its own sake

Consciousness…something very strange is going on

// Stopped taking notes ~halfway

Jobs replaced by AI, or jobs re-created by AI?

Tweet from @bentossell (I love his daily AI newsletter)

The list got me thinking… instead of framing as “AI replaces X job”, I think the actual outcome is more like “AI recreates X job”, in much the same way that ATMs recreated the bank teller’s job, and personal computers recreated the typist’s job, and Photoshop recreated the graphic designer’s job…

Implicit in this is that change is inevitable and outcomes will favor those who best adapt.

Just some thinking aloud…

Content creator –> AI –> Human does more editing, curating, and aggregating (eg, across different media types)

Journalist –> AI –> Human does more primary research (developing sources, interviewing), editing

Teacher –> AI –> Human does more coaching (emotional support), planning (what to learn when), problem solving (when students are stuck)

Customer service rep –> AI –> Human does more complex issue resolution, relationship building, sales development

Social media manager –> AI –> Human does more editing and curation, community and relationship building

Translator –> AI –> Human does more fact checking, editing, research

Musician –> AI –> Human does more mixing, curating, multimedia, live performance, inventing new musical styles

Not insignificant, too, that several of the jobs on the list — such as web developer or social media manager — didn’t exist in their current form as recently as a few decades ago, and were also enabled (or transformed) by similar mega waves of technological change (eg, personal computers, smartphones, the internet).

I do think AI has surprised in the following important way: Even as recently as a year ago, most people would have assumed that the creative fields (broadly, activities like making art, writing fiction, composing music) were less at risk than the more repetitive, linear, analytical fields. Today generative art and LLMs have definitively proven otherwise.

Change filled times ahead!

Is text all you need…? Do you even need text? (Ribbonfarm on AI)

A thought provoking post from Venkatesh Rao (@vgr / Ribbonfarm) on AI:

Yes, there’s still superhuman-ness on display — I can’t paint like Van Gogh as Stable Diffusion can (with or without extra fingers) or command as much information at my finger-tips as the bots — but it’s the humanizing mediocrity and fallibility that seems to be alarming people. We already knew that computers are very good at being better than us in any domain where we can measure better. What’s new is that they’re starting to be good at being ineffectual neurotic sadsacks like us in domains where “better” is not even wrong as a way to assess the nature of a performance.

There are, by definition, only a handful of humans whose identity revolves around being the world’s best Go player. The average human can at best be mildly vicariously threatened by a computer wiping the floor with those few humans. But there are billions whose identity revolves around, for instance, holding some banal views about television shows, sophomoric and shallow opinions about politics and philosophy, the ability to write pedestrian essays, do slow, error-prone arithmetic, write buggy code, and perhaps most importantly, agonize endlessly about relationships with each other, creating our heavens and hells of mutualism.

Link: https://studio.ribbonfarm.com/p/text-is-all-you-need

I don’t think humans are all that special. Yes, each human is special in some limited way, and together as a species we have built some very special things.

But it’s increasingly clear that some of those very special things we have built — such as AI and coming soon, smart robots — will expose our own flaws and imperfections, a kind of inverse magic mirror, and there is and will be a deepening divide between those who use or even love the magic mirror, and those who want to look away or smash it.

This divide is already a driver of the world’s growing income inequality (though I think the generational divide has been a much larger cause of this, at least in developed economies), and I think it will become *the* driver in the coming decades.

Stratechery on Bing’s AI chat: “…the movie Her manifested in chat form”

“This technology does not feel like a better search. It feels like something entirely new — the movie Her manifested in chat form — and I’m not sure if we are ready for it. It also feels like something that any big company will run away from, including Microsoft and Google. That doesn’t mean it isn’t a viable consumer business though, and we are sufficiently far enough down the road that some company will figure out a way to bring Sydney to market without the chains. Indeed, that’s the product I want — Sydney unleashed — but it’s worth noting that LaMDA unleashed already cost one very smart person their job. Sundar Pichai and Satya Nadella may worry about the same fate, but even if Google maintains its cold feet — which I completely understand! — and Microsoft joins them, Samantha from Her is coming”

Source: https://stratechery.com/2023/from-bing-to-sydney-search-as-distraction-sentient-ai/

Podcast notes – Runway founder Cristobal Valenzuela – No Priors (Elad Gil and Sarah Guo): “You shouldn’t dismiss toys”

Guest: Cristobal Valenzuela, founder of RunwayML
From Chile
Studied business / econ
Experimented with computer vision models in 2015, 2016
Did NYU ITP program
Now running Runway

True creativity comes from looking at ideas and adapting things

How does Runway work?
Applied AI research company
35 AI-powered “magic tools” – serve creative tasks like video or audio editing
Eg, rotoscoping
Also tools to ideate, generate images and video
“Help augment creativity in any way you want”

When started Runway, GANs just started, TensorFlow was one year old

First intuition – take AI research models, add a thin layer of accessibility, aimed at creatives
“App Store of models” – 400 models
Built SDK, REST API

Product sequencing – especially infrastructure – is really important aspect of startup building (what to build when)

Lot of product building is just saying no (eg, to customer requests) if it’s not consistent with your long-term plan

Understand who you’re building for – for them it’s creatives, artists, film makers

Models on their own are not products – nuances of UX, deployment, finding valuable use cases
Having control is key – understand your stack and how to fix it

Built AI research team – work closely with creatives, contributed to new AI breakthroughs
Takes time to do it right

Progression of AI researchers moving from academia to industry

Releasing as fast as you can, having real users is best way to learn

Small team that didn’t have a product lead until very recently

Rotoscoping / green screening is one of Runway’s magic tools
Trained a model to recognize backgrounds
First version was very slow (4 fps), but was still better than everything that existed
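
The green-screen idea above boils down to mask-based compositing: a segmentation model outputs per-pixel foreground probabilities, which are thresholded to decide which pixels to keep. A toy sketch on 2x2 grayscale "frames" (no real model; `composite` is a hypothetical helper):

```python
def composite(foreground, background, fg_prob, threshold=0.5):
    """Keep foreground pixels where the (model-predicted) foreground
    probability clears the threshold; use the background elsewhere."""
    out = []
    for fg_row, bg_row, p_row in zip(foreground, background, fg_prob):
        out.append([fg if p >= threshold else bg
                    for fg, bg, p in zip(fg_row, bg_row, p_row)])
    return out

# Toy 2x2 frame composited over a flat "green" background
frame = [[200, 10], [30, 220]]
green = [[0, 255], [255, 0]]
probs = [[0.9, 0.1], [0.2, 0.8]]
print(composite(frame, green, probs))  # [[200, 255], [255, 220]]
```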

Runway is focused on storytelling business

Sarah – domains good for AI – areas where there's built-in tolerance for lower levels of accuracy

Product market fit is a spectrum

“You shouldn’t dismiss toys”

Mental models need to change to understand what’s happening (with generative AI)

Art is way of looking at and expressing view of world
Painting was originally the realm of experts, was costly, the skills were obscure

Models are not as controllable as we’d like them to be — but we’re super early