The killer app comes first (in both crypto and AI)

In crypto there’s a constant chicken and egg debate of apps versus infrastructure. Which is more important? Which accrues more value? As an entrepreneur, which should I build?

For me, this 2018 article effectively settles the debate: https://www.usv.com/writing/2018/10/the-myth-of-the-infrastructure-phase/

The answer: It depends what part of the cycle we’re in.

But when you look at the history of general technologies, the killer app comes first. The infrastructure follows.

For example, light bulbs (the app) were invented before there was an electric grid (the infrastructure). You don’t need the electric grid to have light bulbs. But to have broad consumer adoption of light bulbs, you do need the electric grid. So the breakout app that is the light bulb came first in 1879, followed by the electric grid starting in 1882. (The USV team book club is now reading The Last Days of Night, about the invention of the light bulb.)

You could say a series of technological breakthroughs (eg, the right filaments, the right glass container) enabled the first “killer app” (💡💡💡) which then incentivized the infra.

Another example:

Planes (the app) were invented before there were airports (the infrastructure). You don’t need airports to have planes. But to have broad consumer adoption of planes, you do need airports. So the breakout app that is the airplane came first in 1903, inspiring a phase where people built airlines in 1919, airports in 1928, and air traffic control in 1930, all only after there were planes.

Same pattern here: a series of new technologies (lightweight engines, proper control mechanisms) enabled the first “killer app” (🛫🛫🛫) which then incentivized the infra.

Crypto’s first killer app is right under our noses: Bitcoin itself.

The killer app was Bitcoin! And what it represents: a sovereign store of value tied to an uncensorable payment network.

Satoshi’s technology breakthrough enabled the killer app (Bitcoin) which has now enabled more than a decade of crypto infrastructure buildout, from alternative Layer 1s to smart contracts to new blockchain primitives.

In generative AI, I think a similar pattern is unfolding:

ChatGPT was the first AI killer app. The lightbulb moment. 100M+ users within months of launch, making it one of the fastest-growing consumer apps of all time.

ChatGPT opened investors’ eyes, blew users’ minds, and now everyone from Google to SoftBank to the CCP is spending billions ($7 trillion??) to build and buy AI infrastructure.

And steadily and surely, much of this infrastructure investment and innovation will make AI better, faster, and cheaper. Then more killer apps will be built atop all the GPUs, foundation models, and SDKs. Which then begets more infra. And the cycle continues.

8 thought leaders on the intersection of AI + crypto — Fred Wilson: “AI and Web3 are two sides of the same coin. AI will help make web3 usable for mainstream applications and web3 will help us trust AI”

I posted the original thread here:

Fred Wilson

AI and Web3 are two sides of the same coin. AI will help make web3 usable for mainstream applications and web3 will help us trust AI. Together they will lead to a more powerful, more resilient, more trusted, and more equitable Internet

https://avc.xyz/what-will-happen-in-2024

Vitalik Buterin

It’s a reasonable question: crypto and AI are the two main deep (software) technology trends of the past decade, and it just feels like there must be some kind of connection between the two. It’s easy to come up with synergies at a superficial vibe level: crypto decentralization can balance out AI centralization, AI is opaque and crypto brings transparency, AI needs data and blockchains are good for storing and tracking data.

https://vitalik.eth.limo/general/2024/01/30/cryptoai.html

Arthur Hayes

“Any company that can be attacked in the analogue human legal field will be attacked by those who believe a for-company-profit AI implementation used their data without payment,” he continued. “It is an impossible problem to solve — how do you adequately pay every entity for their data?”

“The only way to create AIs as economic entities is for the ownership to be dispersed wide and far, such that there is no single centralized structure to attack in the traditional legal arena,” he added. “The market will quickly come to realize the entire lifecycle of an AI must be decentralized, which will in turn benefit networks such as Ethereum. Ethereum is the most robust decentralized computer in existence, and I fully expect it to peer power the future AI / human economy.”

https://www.theblock.co/post/271501/bitmex-co-founder-arthur-hayes-joins-decentralized-ai-platform-ritual

Casey Caruso

Since computational strength grows with resource consolidation, AI naturally fosters centralization, where those with more computing power progressively dominate. This introduces a risk to our rate of innovation. I believe decentralization and Web3 stand as contenders to keep AI open.

https://www.caseycaruso.com/thoughts/decentralized-ai

Crypto, Distilled

Divides the web3 AI stack into: Agents; AI analytics; Authentication; Privacy; Data; Compute; Models

Blockchain = provable fairness; AI = unparalleled productivity

https://x.com/DistilledCrypto/status/1753300276298289169?s=20

Travis Kling

AI is a clear opportunity for crypto, but I am wary about crypto’s ability to execute on that opportunity this cycle

https://twitter.com/Travis_Kling/status/1753455596462878815

Varun Mathur

Centralized AI entities have consolidated immense power, regulatory capture, and using their growing network effects, there now exists a period of at most a year, before they cannot be competed against. The world they present to users is that of biased and limited interfaces, where the $20/month “pro” features are far outside the reach of say a college student in India.

https://twitter.com/varun_mathur/status/1754305144630440089

Binance Research

Funding for AI-related web3 projects surged in 2023, reaching US$298M. This is more than the collective funding amount raised for AI projects from 2016 to 2022, at US$148.5M.

Areas of note: DePIN; Zero Knowledge; Consumer dapps; Data analytics

https://www.binance.com/en/research/analysis/ai-x-crypto-latest-data-and-developments/

Venkatesh Rao

AI+blockchains point to a dystopia of impersonal and faceless interchangeable-parts humanity that’s more industrial than the industrial age.

https://studio.ribbonfarm.com/p/brains-chains-and-vibemobiles

The English language will increase its dominance in an AI world

Language is itself a technology, and like many technologies, it exhibits a classic network effect: each additional speaker of a language increases that language’s utility for all other speakers. The more “users” who speak and write English, the more valuable it is to know and use English in just about all affairs.
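The network effect described above can be made concrete with a Metcalfe-style back-of-the-envelope: if any two speakers of a language can communicate, a community of n speakers supports roughly n(n-1)/2 possible conversations, so utility grows quadratically while the speaker count grows only linearly. A minimal sketch (the pairwise-links function is an illustrative proxy, not an empirical model of language value):

```python
def pairwise_links(n: int) -> int:
    """Number of distinct speaker pairs in a community of n speakers.

    Metcalfe-style proxy for a language's network value: n choose 2.
    Illustrative only; real language utility is far messier.
    """
    return n * (n - 1) // 2

# Doubling the community roughly quadruples its pairwise utility:
for speakers in [1_000, 2_000, 4_000]:
    print(speakers, pairwise_links(speakers))
```

Under this toy model, going from 1,000 to 2,000 speakers roughly quadruples (not doubles) the number of possible conversations, which is the intuition behind "each additional speaker increases the language's utility for all other speakers."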

One obvious example is in software programming. Though there is a lot of symbolic and mathematical notation in programming languages, most would agree that English is head-and-shoulders more valuable to know (relative to the 2nd or 3rd most popular language) if you want to be a good programmer. It’s better for troubleshooting, for reading documentation, for scouring StackOverflow for copy-paste code, and now for getting ChatGPT or CoPilot to write code for you.

My belief is that as AI proliferates, English will only increase its lead. English is already in the lead with 1.4B speakers (though this number varies significantly depending on how you measure fluency), and Mandarin Chinese is second at 1.1B.

Why?

AI models need data. English comprises a majority of the available online training data. It helps that the largest economy in the world (the US) and the most populous country in the world (India, which, depending on your reference, surpassed China’s population this year) are both largely English-speaking markets.

The largest content-generating internet platforms — from Google to Facebook to Twitter to Wikipedia and on and on — are dominated by English speakers. An AI model’s output quality is directly correlated with the quantity of its training data, and there is simply more English data available than any other language, including Mandarin Chinese. Thus GPT-4, Llama, and so forth are “smartest” in English.

There are multiple reasons why Mandarin Chinese lags behind, beyond the fact that the breakthrough innovations in AI research and productization happened first in the US and UK. Among them are the Great Firewall, the highly regulated and controlled nature of Chinese data, and China’s pervasive digital censorship. (For example, more than 500 words are banned on many Chinese UGC websites because they are perceived as unfavorable nicknames for President Xi Jinping.)

Thus Chinese online training data lags English in both quantity and, likely, quality. There are also reasons related to the languages themselves: English is a more explicit language, while Chinese is more contextual.

English’s initial data lead is a self-reinforcing feedback loop: the more that people use English to interact with services like CharacterAI and ChatGPT, the more data the LLMs have to refine and improve (in English), leaving other languages in the dust, especially long-tail ones like Icelandic or Khmer.

As AI agents increasingly interact with each other, I’m guessing they will develop their own unique protocols for AI-to-AI communication. Not dissimilar to how computers communicate via highly structured network requests, only more complex and perhaps unique. AI will eventually create its own AI lingua franca. However, it’s also necessary that some human-readable component be built into this AI-ese (because at a minimum, developers will want to know where to debug and fix errors). English will likely be chosen for that AI-to-AI interface.
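One way to picture such a protocol is a structured envelope whose payload is dense and machine-oriented, with an English summary field riding along for the developers who have to debug it. Everything below is hypothetical — the field names, the agents, and the message shape are invented for illustration, not any real agent framework:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    """Hypothetical agent-to-agent envelope: a structured machine payload
    paired with an English summary that humans can read when debugging."""
    sender: str
    recipient: str
    payload: dict   # dense, machine-oriented content the agents act on
    summary: str    # human-readable English rendering, for debugging

    def to_wire(self) -> str:
        # Serialize the whole envelope for transport between agents.
        return json.dumps(asdict(self))

msg = AgentMessage(
    sender="travel-agent-7",
    recipient="booking-agent-2",
    payload={"intent": "book_flight", "from": "SFO", "to": "JFK", "max_usd": 450},
    summary="Book the cheapest SFO to JFK flight under $450.",
)
wire = msg.to_wire()
# The receiving agent parses `payload`; a developer tailing traffic reads `summary`.
print(json.loads(wire)["summary"])
```

The design choice mirrors the argument above: the machine-to-machine part can become arbitrarily dense or opaque, but a mandatory English field keeps a window open for humans to inspect and fix things.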

Of course, AI is an amazing and broad innovation that will benefit speakers of all languages. It will help to preserve and distribute rarer languages, and enable faster and better language-to-language education and translation. Whether you speak Vietnamese or Icelandic, there will be an AI model for you. I’m simply arguing that these secondary languages won’t be anywhere NEAR as good as the leading English models, and I would venture that even if English isn’t your first or even second language, you will probably still get better results using broken English to interact with ChatGPT than, say, French.

I could be very wrong here. As with any emerging technology, second and third order effects are by their nature unpredictable and chaotic. And the technology still has a long way to evolve and mature. Let’s see how it all plays out. I’m especially curious about what kinds of AI-to-AI communications will emerge, whether exposed through a human-readable interface or otherwise.

Ok that’s it, over and out good sers and madams! OpenAI wow!

“AI has a far greater likelihood of being dangerous in a government’s control”

Complex adaptive systems have like 20 different inputs (that you know of) and 4 different Chesterton’s Fences in them that will produce about 14 different 2nd-order effects, 21 different 3rd-order effects, and 11 different 4th-order effects you didn’t even know could happen.

Models for CAS that I disrespect: the economy, the climate, and any ecosystem with a massive amount of independent and dependent inputs and outputs. Before 2020 I wouldn’t have bucketed epidemiology in here, but it’s a worthy new addition to the “you really don’t know what you’re talking about, do you?” team.

And:

AI has a far greater likelihood of being dangerous in a government’s control than it does in the hands of genuinely brilliant engineers trying to create a transcendent technology for humanity. e/acc.

Thought provoking.

Technology as the ultimate non-zero-sum game

I read two quotes recently that I think are related in a very deep and abstract way:

I think a reasonable case can be made that the discovery and facilitation of non-zero-sum games is both objectively (i.e., metaphysically) and subjectively valuable. Furthermore, I think a reasonable case can be made that we have literally evolved to find this process deeply meaningful and to socially reward people who are very good at engaging in it.

The above is from Brett Andersen’s Substack. If we think about all of the things we love – from art to sports to our best institutions, from religions to businesses – they are all prime examples of ultimate success at non-zero-sum games.

Soon after I read this quote:

It would not surprise me if we saw another axial awakening someday, powered by another flood of technology. I find it hard to believe that we could manufacture robots that actually worked and not have them disturb our ideas of religion and God. Someday we will make other minds, and they will surprise us.

That is from uber mensch Kevin Kelly.

With Apple launching their AR headset, with AI dominating every tech headline, with self-driving actually working in major cities, with Boston Dynamics robots doing Olympic caliber back flips, it seems we are on the cusp of an awakening of some sort. A technological revolution in both mind (AI) and body (robots / physical reality). AI alone is already disturbing society’s ideas about relationships and intelligence and emotion.

One of the best definitions I’ve ever heard of technology is “technology is anything that breaks a constraint.” And what is a constraint if not a zero-sum boundary condition of some sort.

Thanks for listening to my ted talk. Cheers