Every week, I share my notes from great TED talks. Here’s the complete list (sorry, it takes a while to load).
This week features two talks from Nick Bostrom on super intelligent computers. One is a TED talk and the other is an author talk @ Google.
Nick Bostrom: What happens when our computers get smarter than we are?
- human brains are largely similar to those of apes (only in the last 250K-1M years did ours begin to differ)
- AI used to be commands in a box
- now there’s been a paradigm shift: today it’s about machine learning, about algorithms that learn from raw data much like an infant does
- because of this, AI is not domain-limited
- machine substrate is hands down superior to biological tissue: no speed or size limitations
- AI will evolve similarly to human intelligence: it took animals millions of years to develop basic intelligence, but complex intelligence developed faster by orders of magnitude (“AI train won’t stop at human…will swoosh right by”)
- believes superintelligence will have preferences and will – to some extent – get what it wants
- what are those preferences?
- we must avoid anthropomorphizing (i.e., we can’t assume it wants what people want)
- there will be unintended consequences no matter what goals we specify (e.g., say we ask it to solve a math problem, and it realizes the best solution is to reorganize the planet into a giant supercomputer to solve the problem!)
- mentions the King Midas myth (everything he touched turned to gold) as an analogy for getting exactly what you asked for, with disastrous results
- what we should do now is specify precisely our constraints, goals, and design principles to guide AI’s development
Nick Bostrom: “Superintelligence”
- how do we control AI?
- control and limit its capabilities – for example, only allow it to exist in a box with no ethernet connection, or only let it print output to a screen. but to realize its full potential, you must give it full access, and a superintelligence can by definition outsmart you
- control its motivation – change its motivations, preferences, and end-goals
- the difference between normal risks (e.g., war) and existential risks (e.g., give everyone the ability to make and detonate a nuclear bomb)
- AI/superintelligence is an existential risk
- “we only get one shot at it…but humans are mostly bad at getting things right the first time”
- we need to develop control mechanisms, the right ways to think about and constrain and optimize the problem, before AI gets ahead of us
- when experts were asked to estimate the median time until the development of human-level machine intelligence, the estimate was roughly 40 years out (around 2050)
Here’s the full list of TED notes!