Podcast notes: Sam Altman (OpenAI) on AI – “One of the genuine new tech platforms since mobile”

Interviewer: Reid Hoffman
Guest: Sam Altman

No trillion-dollar “take on Google” startups yet – but AI will be a serious challenge to Google for the first time

eg, a human-level chatbot interface that actually works – new medical services, new education services

Idea of a language interface where you say what you want in natural language, in dialogue, and the computer just does it for you

Very powerful models will be one of the genuine new tech platforms since mobile

How to create an enduring, differentiated business:
-a small handful of large base models will win – skeptical of startups training their own small models
-the middle layer will become really important – take a large model, fine-tune it, create a model for medicine or a model for an AI friend – these will have a data flywheel (rough sketch below)
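As a rough sketch of what that middle layer could look like in practice – fine-tuning a small open base model on a domain corpus with Hugging Face transformers. The model choice (“gpt2”) and the medical_notes.txt corpus are stand-ins of mine, not anything Altman named:

```python
# Minimal "middle layer" sketch: fine-tune a general base model on a
# domain corpus. "gpt2" stands in for a large base model, and
# medical_notes.txt is a hypothetical domain dataset.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

base = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

dataset = load_dataset("text", data_files={"train": "medical_notes.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medicine-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False -> causal LM objective; labels are the shifted inputs
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In this framing, the durable asset is less the training script than the proprietary domain data feeding the flywheel.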

Lots of AI experts think these models won’t generate net new knowledge for humanity – he thinks they’ll be wrong and surprised

AI in science:
1. Science-dedicated products, eg AlphaFold – will see a lot more; bio companies will do amazing things
2. Tools that make us more productive – improve the net output of scientists and engineers – eg, Copilot
3. AI that can be an AI scientist and self-improve – automate our own jobs, go off and test new science and research – teaching AI to do that

What is Alignment Problem?
A powerful system that has goals in conflict with ours
How do we build AGI that does things in best interests of humanity
How to avoid accidental or intentional mis-use
AI could eventually help us do alignment research itself
Reid: will be able to tell an agent “don’t be racist” and let it figure it out

AI moonshots?
-language models will go much further than people think – so much algorithmic progress still to come, even if we run out of compute or data
-true multimodal models – every modality, fluidly moving between them
-continuous-learning models
Each of these three would be a huge victory

OpenAI – focus on next thing where we have high confidence, let 10% of company go and explore
Can’t plan for greatness, but sometimes breakthroughs will happen

AI will seep in everywhere
Marginal cost of intelligence and energy will rapidly trend towards zero – will touch almost everything

Metaverse will become like the iPhone – a new container for software
AI will be the new technological revolution – more a question of how the metaverse will fit into AI than vice-versa

Low cost + fast cycle times is how you compete as a startup

In bio – simulators are bad, AI could help

What are the best utopian sci-fi universes so far?
-Star Trek is pretty good
-The Last Question (Asimov) is an incredible short story
-Reid: Iain Banks – Culture series
-tried to write his own sci-fi story, was a lot of fun

Having a lot of kids is great – wants to do it

Won’t be doing prompt engineering in 5 years
Will be text / voice in natural language to get computer to do what you want
eg, Be my therapist and make my life better; Teach me something I want to know

Reid: a great visual thinker can get more out of DALL-E — there will be an evolving set of human talents for going that extra mile

How to define AGI
Equivalent of a median human that you can hire as a coworker – be a doctor, be a coder
Meta-skill of getting good at whatever you need

Super intelligence = smarter than all of humanity put together

Economic impacts will be huge in 20-30 years
Society may not tolerate that change – what is the new social contract
How to fairly distribute wealth
How to ensure access to AI systems (“commodity of the realm”)
Not worried about human fulfillment – we’ll always solve it
But concepts of wealth and access and governance will all change

Running the largest UBI experiment in the world – a 5-year project

Tools for creatives — will be the great application for AI in short-term
Mostly not replacing, but enhancing their jobs

How will these LLMs differentiate from each other?
The middle layer is what will differentiate them – the startups fine-tuning the base models; it’s about the data flywheel, and could include prompt engineering

Podcast notes – Evolution of NLP – Oren Etzioni – TWIML: “Deep learning is the ultimate prediction engine”

Oren Etzioni – founder of AI2

Late Microsoft cofounder Paul Allen wanted to create the Allen Institute for AI – hired Oren to make it happen
Paul had a vision of the computer revolution, a relentless focus on the prize of understanding intelligence and the brain

AI2’s mission is “AI for the common good”

AI2’s incubator – 20+ companies in pre-seed stage
Natural part of university lifecycle – ideas that can then grow with right resources

Created Semantic Scholar – free search engine for scientific content
New tool – helps make PDFs easier to read, auto-creates TL;DRs for science papers

Skylight – computer vision to fight illegal fishing

Deep learning for climate modeling – why use a neural network? “Deep learning is the ultimate prediction engine”

“Common Sense” project – holy grail for AI – how to endow computers with common sense
Common sense ethics are very important
eg, the paper clip creator that takes over humanity to maximize paper clip production
“Alignment problem” is part of it
Are neural nets enough? Do you need to create symbolic knowledge?
Yejin Choi’s team, Mosaic – common sense repository – a collection of common-sense statements about the universe
What about when people disagree? Can relativize answers, eg, “if you’re conservative, you would think X; if liberal, think Y”, etc

“Never trust an AI demo” – need to kick tires and ask right questions
eg, Siri / Alexa – slight changes create very different responses

“You shall know a word by the company it keeps” (J.R. Firth) – the underlying principle of NLP
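A toy illustration of that principle, with a corpus and window size of my own choosing – words that keep similar company end up with similar context vectors:

```python
# Build context-count vectors from a tiny corpus and compare them with
# cosine similarity. "cat" and "dog" appear in similar contexts, so they
# score higher with each other than either does with "drinks".
from collections import Counter, defaultdict
from math import sqrt

corpus = ("the cat drinks milk . the dog drinks water . "
          "the cat chases the dog").split()

window = 2
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1  # count each neighboring word

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

print(cosine(vectors["cat"], vectors["dog"]))     # ~0.88
print(cosine(vectors["cat"], vectors["drinks"]))  # ~0.72
```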

Used to think encoding grammar rules was important
But today’s tech is good at approximating those rules

What is the nature of human level intelligence?
How do we collect and understand human knowledge?

Tech that gets you to space station is different from going to Mars, different from leaving Solar System, etc

Large language models (LLMs) “hallucinate” and are not very robust (different wording leads to different answers)
Eg, who was the US president in 1492? “Columbus”

Is it a game of whack-a-mole? Or is there some fundamental paradigm of human intelligence?

Some experts believe our current algorithms – backpropagation, supervised learning, etc – are the foundation for a more sophisticated architecture that could get us there
Eg, neural nets are very simple brain models

Disagrees strongly with Elon Musk’s views on AI — doesn’t believe we’re “summoning the demon” — it’s hype, not rooted in data

Neural net tuning – like a billion dials on a stereo

Science is hampered if there are third rails you’re not allowed to study or question

Steadfast in support of open inquiry

Researchers are cautious about releasing language models to public – easy to generate controversial outputs

Surprised by the progress of the technology – but again, never trust an AI demo
Think about what’s under the hood, and the implications for society

The Venn of blockchain and AI

I’ve been thinking about the relationship between blockchains and AI lately. Both are emerging foundational technologies and I think it’s no accident they are both coming of age at the same time.

Multiple writers have already expressed this view:

AIs can be used to generate “deep fakes” while cryptographic techniques can be used to reliably authenticate things against such fakery. Flipping it around, crypto is a target-rich environment for scammers and hackers, and machine learning can be used to audit crypto code for vulnerabilities. I am convinced there is something deeper going on here. This reeks of real yin-yangery that extends to the roots of computing somehow.

From Venkatesh Rao: https://studio.ribbonfarm.com/p/the-dawn-of-mediocre-computing

I think AI and Web3 are two sides of the same coin. As machines increasingly do the work that humans used to do, we will need tools to manage our identity and our humanity. Web3 is producing those tools and some of us are already using them to write, tweet/cast, make and collect art, and do a host of other things that machines can also do. Web3 will be the human place to do these things when machines start corrupting the traditional places we do/did these things.

From Fred Wilson: https://avc.com/2022/12/sign-everything/

In both writers’ examples, blockchain helps solve some of the problems that AI creates, and vice-versa. I’m reminded of Kevin Kelly, who said, “Each new technology creates more problems than it solves.”
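To make Fred’s “sign everything” idea concrete, here’s a minimal sketch of signing a piece of content with an Ed25519 key pair via Python’s `cryptography` package – the post text and key handling are purely illustrative:

```python
# "Sign everything" in miniature: a human signs a post with a private
# key; anyone holding the public key can verify it came from that key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # stays with the human author
public_key = private_key.public_key()       # published, e.g. on-chain

post = b"I wrote this, not a machine."
signature = private_key.sign(post)

try:
    public_key.verify(signature, post)  # raises if post or signature changed
    print("authentic")
except InvalidSignature:
    print("forged or tampered with")
```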

Blockchains and AI have a sort of weird and emergent technological symbiosis and I’m here for it.

So the brain flatulence below is just my way to think aloud, using the writing process to work through the question(s).

*Note: when I say “blockchain”, I include what Fred Wilson calls web3 and what Venkatesh calls crypto; there are just a few canonical applications that we’re all familiar with (namely, bitcoin and ethereum); and when I say “AI”, I am thinking about the most popular machine learning models like GPT-3 and Stable Diffusion

*Note also: I am just a humble user of these new and powerful AI tools, and can barely understand the abstract of a typical machine learning research paper; so part of the reason why I’m writing this is to find out where I’m wrong a la Cunningham’s law

A blockchain is a tool for individual sovereignty; while an AI is a tool for individual creativity

A blockchain operates at maximum transparency; while an AI operates largely as a black box

A blockchain clearly shows the chain of ownership and history; while an AI… (does something like the opposite in the way it aggregates and melds and mutates as much data as possible?)

A blockchain is “trustless”, in the sense that what you see on-chain is the agreed-upon “truth” of all its users; while an AI is (?), in the sense that what it generates is more or less unique to the specific prompt / question / user (and even this can change as the model is updated, or new data is added)

An AI is much easier to use than a blockchain

An AI can create vast quantities of content, very cheaply; while a (truly “decentralized”) blockchain is limited by scalability and cost

An AI is centralized (to a specific company, or model, or data set) in the sense that decision making rests with a team or company; while a blockchain is decentralized and decision making is distributed

A surprising user experience – as in, an unexpected but delightful output – is typically net positive for a user of AI, while seeing something happen on a blockchain that you don’t expect would generally be pretty bad (yes, of course there are airdrops)

Blockchains are a competitive threat to industries with a high degree of centralization (such as fiat currency issuance, and payment networks); AI is a competitive threat to many individual online workers (such as language translators, and freelance writers, and basic QA/QC employees)

Both blockchains and AI have multiple open source products that can be forked by developers

Both blockchains and AI are platforms upon which many other products and services can be built

Both blockchains and AI are technologies that exploded into the popular consciousness in the last 10 years

Both “blockchain” and “AI” are very broad suitcase words, in part because they are both the product of many technologies combined in innovative ways: for blockchain that is everything from cryptography to smart contract programming to PoW mining to distributed consensus mechanisms; for AI that’s, uh…well everything listed here and more, I suppose

I’ll end here for now, but let me know what I got wrong, what I’m missing, and what questions or ideas this might inspire

Addendum #1: I asked ChatGPT

Addendum #2: This NYT article notes that SBF (“Sam Bankscam Fraud”) donated at least $500M to organizations researching AI alignment and AI safety. Not exactly the kind of symbiosis I want to explore, but worth noting.

Addendum #3:

Blockchains can only give precise answers, while AI can give approximate answers or even fabricate answers

Blockchains are censorship resistant, while AIs are centralized (most are created by small doxxed teams) and have implemented restrictions on usage (most have rules against, for example, CSAM or nudity)

Interesting snippets from State of AI Report 2022

[image: output from Playground AI]

Full report here: https://www.stateof.ai/

I’m far from an AI expert, just an interested student who gets the tingly feels every time I use Stable Diffusion or see output from ChatGPT.

Snippets (copied verbatim):

The chasm between academia and industry in large scale AI work is potentially beyond repair: almost 0% of work is done in academia.

Finding faster matrix multiplication algorithms, a seemingly simple and well-studied problem, has been stale for decades. DeepMind’s approach not only helps speed up research in the field, but also boosts matrix multiplication based technology, that is AI, imaging, and essentially everything happening on our phones.
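(An aside of mine, not from the report: the classic example of a faster scheme is Strassen’s 1969 algorithm, which multiplies 2x2 block matrices with 7 multiplications instead of the naive 8 – AlphaTensor searches for constructions of this kind.)

```python
# Strassen's construction: 7 multiplications instead of 8 for a 2x2
# (block) matrix product. Shown only to make "faster matrix
# multiplication algorithms" concrete.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]] -- matches the naive 8-multiplication product
```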

The authors argue that the ensuing reproducibility failures in ML-based science are systemic: they study 20 reviews across 17 science fields examining errors in ML-based science and find that data leakage errors happened in every one of the 329 papers the reviews span
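(Another aside of mine, not from the report: a toy sketch of the most common leakage error – fitting a preprocessing step on the full dataset before the train/test split, so test-set statistics leak into training. Dataset and model are arbitrary stand-ins.)

```python
# Data leakage in miniature: the "leaky" pipeline fits the scaler on all
# rows, including the future test set; the correct pipeline splits first.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)

# Leaky: the scaler sees the test rows before the split happens
X_scaled_all = StandardScaler().fit_transform(X)  # uses test-set statistics

# Correct: split first, then fit the scaler on training data only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)               # training statistics only
clf = LogisticRegression().fit(scaler.transform(X_tr), y_tr)
print(clf.score(scaler.transform(X_te), y_te))
```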

many LLM capabilities emerge unpredictably when models reach a critical size. These acquired capabilities are exciting, but the emergence phenomenon makes evaluating model safety more difficult.

Alternatively, deploying LLMs on real-world tasks at larger scales is more uncertain as unsafe and undesirable abilities can emerge. Alongside the brittle nature of ML models, this is another feature practitioners will need to account for.

Landmark models from OpenAI and DeepMind have been implemented/cloned/improved by the open source community much faster than we’d have expected.

Compared to US AI research, Chinese papers focus more on surveillance related-tasks. These include autonomy, object detection, tracking, scene understanding, action and speaker recognition.

NVIDIA’s chips are the most popular in AI research papers…and by a massive margin

“We think the most benefits will go to whoever has the biggest computer” – Greg Brockman, OpenAI CTO

As such, the AI could reliably remove 36.4% of normal chest X-rays from a primary health care population data set with a minimal number of false negatives, leading to effectively no compromise on patient safety and a potential significant reduction of workload.

The US leads by the number of AI unicorns, followed by China & the UK; The US has created 292 AI unicorns, with the combined enterprise value of $4.6T.

The compute requirements for large-scale AI experiments has increased >300,000x in the last decade. Over the same period, the % of these projects run by academics has plummeted from ~60% to almost 0%. If the AI community is to continue scaling models, this chasm of “have” and “have nots” creates significant challenges for AI safety, pursuing diverse ideas, talent concentration, and more.

Decentralized research projects are gaining members, funding and momentum. They are succeeding at ambitious large-scale model and data projects that were previously thought to be only possible in large centralised technology companies – most visibly demonstrated by the public release of Stable Diffusion.