Oren Etzioni – founder of AI2
Late Microsoft cofounder Paul Allen wanted to create the Allen Institute for AI – hired Oren to make it happen
Paul had vision of computer revolution, relentless focus on prize of understanding intelligence and the brain
AI2’s mission is “AI for the common good”
AI2’s incubator – 20+ companies in pre-seed stage
Natural part of university lifecycle – ideas that can then grow with right resources
Created Semantic Scholar – free search engine for scientific content
New tool – help make PDFs easier to read, auto-create TLDRs for science papers
Skylight – computer vision to fight illegal fishing
Deep learning for climate modeling – why use a neural network? “Deep learning is the ultimate prediction engine”
“Common Sense” project – holy grail for AI – how to endow computers with common sense
Common sense ethics are very important
eg, the paper clip creator that takes over humanity to maximize paper clip production
“Alignment problem” is part of it
Are neural nets enough? Do you need to create symbolic knowledge?
Yejin Choi’s team, Mosaic – common sense repository – a collection of common sense statements about the universe
What about when people disagree? Can relativize answers, eg, “if you’re conservative, you would think X; if liberal, think Y”, etc
—
“Never trust an AI demo” – need to kick tires and ask right questions
eg, Siri / Alexa – slight changes create very different responses
“You shall know a word by the company it keeps” – underlying principle of NLP
Used to think encoding grammar rules was important
But today’s tech is good at approximating those rules
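The distributional principle above ("know a word by the company it keeps") can be sketched in a few lines. This is a toy illustration with a made-up corpus and raw co-occurrence counts – not how modern embeddings are actually trained – but it shows why words sharing contexts end up looking similar:

```python
from collections import Counter
import math

# Toy corpus (invented for illustration): the distributional hypothesis
# says words appearing in similar contexts have similar meanings.
corpus = [
    "the cat drinks milk",
    "the dog drinks water",
    "the cat chases the dog",
    "stocks rise on market news",
    "stocks fall on market news",
]

def context_vector(word, window=2):
    """Count the neighbors of `word` within `window` positions, corpus-wide."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" keep similar company ("the", "drinks"), so they score
# higher with each other than either does with "stocks".
print(cosine(context_vector("cat"), context_vector("dog")))     # high
print(cosine(context_vector("cat"), context_vector("stocks")))  # 0.0 here
```

Real systems replace raw counts with learned dense vectors, but the similarity-from-shared-context idea is the same.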
What is the nature of human level intelligence?
How do we collect and understand human knowledge?
Tech that gets you to space station is different from going to Mars, different from leaving Solar System, etc
Large language models (LLMs) “hallucinate” and aren’t very robust – different wording leads to different answers
Eg, who was US president in 1492? “Columbus” (there was no US president in 1492)
Is it a game of whack-a-mole? Or is there some fundamental paradigm of human intelligence?
Some experts believe our current algorithms – back propagation, supervised learning, etc – are the foundation for more sophisticated architectures that could get us there
Eg, neural nets are very simple brain models
Disagrees strongly with Elon Musk’s views on AI — doesn’t believe we’re “summoning the demon” — it’s hype, not rooted in data
Neural net tuning – like a billion dials on a stereo
Science is hampered if there are third rails you’re not allowed to study or question
Steadfast in support of open inquiry
Researchers are cautious about releasing language models to public – easy to generate controversial outputs
Surprised by progress of the technology – but again, never trust an AI demo
Think about what’s under hood, implications for society