Excerpts from “Acceleration of Addictiveness” by Paul Graham (adding to Personal Bible)

Going into my bible.

Source here: https://paulgraham.com/addiction.html

All below are copied verbatim:

Technological progress means making things do more of what we want. When the thing we want is something we want to want, we consider technological progress good. If some new technique makes solar cells x% more efficient, that seems strictly better. When progress concentrates something we don’t want to want — when it transforms opium into heroin — it seems bad. But it’s the same process at work.

Food has been transformed by a combination of factory farming and innovations in food processing into something with way more immediate bang for the buck, and you can see the results in any town in America. Checkers and solitaire have been replaced by World of Warcraft and FarmVille. TV has become much more engaging, and even so it can’t compete with Facebook.

Already someone trying to live well would seem eccentrically abstemious in most of the US. That phenomenon is only going to become more pronounced. You can probably take it as a rule of thumb from now on that if people don’t think you’re weird, you’re living badly.

As knowledge spread about the dangers of smoking, customs changed. In the last 20 years, smoking has been transformed from something that seemed totally normal into a rather seedy habit: from something movie stars did in publicity shots to something small huddles of addicts do outside the doors of office buildings.

We’ll have to worry not just about new things, but also about existing things becoming more addictive. That’s what bit me. I’ve avoided most addictions, but the Internet got me because it became addictive while I was using it.

Excerpts from “Why Everything is Becoming a Game”: “We humans are harder to manipulate than pigeons, but we can be manipulated in many more ways, because we have a wider spectrum of needs”

Going into my bible.

Source here: https://www.gurwinder.blog/p/why-everything-is-becoming-a-game

All below are copied verbatim:

Skinner’s three key insights — immediate rewards work better than delayed, unpredictable rewards work better than fixed, and conditioned rewards work better than primary — were found to also apply to humans, and in the 20th century would be used by businesses to shape consumer behavior. From Frequent Flyer loyalty points to mystery toys in McDonald’s Happy Meals, purchases were turned into games, spurring consumers to purchase more.
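
To make Skinner’s second insight concrete, here’s a toy simulation (mine, not from the essay): a fixed-ratio schedule and a variable-ratio schedule with the same long-run payout, differing only in predictability. The variable one is the slot-machine and feed-refresh pattern.

```python
import random

def fixed_ratio(presses, n=10):
    """Predictable: reward on every n-th press."""
    return [i % n == 0 for i in range(1, presses + 1)]

def variable_ratio(presses, mean_n=10):
    """Unpredictable: each press pays out with probability 1/mean_n,
    matching the fixed schedule's long-run payout rate."""
    return [random.random() < 1 / mean_n for _ in range(presses)]

# Both reward ~10% of presses; in animals (and people) the unpredictable
# schedule sustains far higher response rates.
print(sum(fixed_ratio(1000)), sum(variable_ratio(1000)))
```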

We humans are harder to manipulate than pigeons, but we can be manipulated in many more ways, because we have a wider spectrum of needs. Pigeons don’t care much about respect, but for us it’s a primary reinforcer, to such an extent that we can be made to desire arbitrary sounds that become associated with it, like praise and applause.

Kaczynski believed modern society made us docile and miserable by depriving us of fulfilling challenges and eroding our sense of purpose. The brain evolved to solve problems, but the problems it had evolved for were now largely solved by technology. Most of us can now obtain all our basic necessities simply by being obedient, like a pigeon pecking a button. Kaczynski argued that such conveniences didn’t make us happy, only aimless. And to stave off this aimlessness, we had to continually set ourselves goals purely to have goals to pursue, which Kaczynski called “surrogate activities”. These included sports, hobbies, and chasing the latest product that ads promised would make us happy.

Kaczynski observed that surrogate activities rarely kept people contented for long. There were always more stamps to collect, a better car to buy, a higher score to achieve. He believed artificial goals were too divorced from our actual needs to truly satisfy us, so they merely served to keep us busy enough not to notice our dissatisfaction. Instead of a fulfilled life, a life filled full.

We’re easily motivated by points and scores because they’re easy to track and enjoyable to accrue. As such, scorekeeping is, for many, becoming the new foundation of their lives. “Looksmaxxing” is a new trend of gamified beauty, where people assign scores to physical appearance and then use any means necessary to maximize their score. And in the online wellness space, there is now a “Rejuvenation Olympics” complete with a leaderboard that ranks people by their “age reversal”. Even sleep has become a game; many people now use apps like Pokemon Sleep that reward them for achieving high “sleep scores”, and some even compete to get the highest “sleep ranking”.

On Instagram, the main self-propagating system is a beauty pageant. Young women compete to be as pretty as possible, going to increasingly extreme lengths: makeup, filters, fillers, surgery. The result is that all women begin to feel ugly, online and off.

On TikTok and YouTube, there is another self-propagating system where pranksters compete to outdo each other in outrageousness to avoid being buried by the algorithm. Such extreme brinkmanship frequently leads to arrest or injury, and has even led to the deaths of, among others, Timothy Wilks and Pedro Ruiz.

First: choose long-term goals over short-term ones

Second: choose hard games over easy ones

Third: choose positive-sum games over zero-sum or negative-sum ones

Fourth: choose atelic games over telic ones. Atelic games are those you play because you enjoy them. Telic games are those you play only to obtain a reward.

Finally, the fifth rule is to choose immeasurable rewards over measurable ones. Seeing numerical scores increase is satisfying in the short term, but the most valuable things in life — freedom, meaning, love — can’t be quantified.

Start With Creation — excerpts: “The Muse arrives to us most readily during creation, not before”

If you have 5 minutes just go read the dang thing; I’m sharing half of it here as excerpts because it’s such a perfect internet essay: short, wise, memorable, re-readable.

Going into my bible as well.

EXCERPTS copied verbatim:

The Muse arrives to us most readily during creation, not before. Homer and Hesiod invoke the Muses not while wondering what to compose, but as they begin to sing. If we are going to call upon inspiration to guide us through, we have to first begin the work.

It is in approaching the edges of our abilities that we are really learning, and often simple projects feel more like delaying things, including delaying mastery. A chance of failure ensures your hands are firmly touching reality, and not endlessly flipping through the textbook, or forever flirting only with ideas.

Someone once mentioned to me that “Write what you know” is not particularly interesting advice, and “Write what you’re learning” is much better.

On the other hand it is inspiring to help someone who has begun. There’s a bit of a silly demonstration of this in those viral videos that show a person starting to dig a hole or making a sandcastle at the beach, and a number of people come along to help. The principle is not at all silly: Enthusiasm is contagious.

I said some time ago on Twitter offhandedly, “If you have a ten year plan, what’s stopping you from doing it in two?” This is what I mean. One can too easily sleepwalk into years of “I wish I could…” Or you can start with creation. Pick something hard. You will shape something and it will shape you.

AI learnings 1: AI = infinite interns

All below are copy-pasted from original sources, all mistakes mine! :))

I think these open-source LLMs will eventually beat the closed ones, since there are more people training and feeding data to the model for the shared benefit.
Especially because these open-source models can be 10 times cheaper than GPT-3, or even 20 times cheaper than GPT-4, when running on Hugging Face; run locally they’re practically free, you just pay for electricity and the GPU.

We extensively used prompt engineering with GPT-3.5 but later discovered that GPT-4 was so proficient that much of the prompt engineering proved unnecessary. In essence, the better the model, the less you need prompt engineering or even fine-tuning on specific data.

Harder benchmarks emerge. AI models have reached performance saturation on established benchmarks such as ImageNet, SQuAD, and SuperGLUE, prompting researchers to develop more challenging ones. In 2023, several challenging new benchmarks emerged, including SWE-bench for coding, HEIM for image generation, MMMU for general reasoning, MoCa for moral reasoning, AgentBench for agent-based behavior, and HaluEval for hallucinations.

The subset of parameters is chosen according to which parameters have the largest (approximate) Fisher information, which captures how much changing a given parameter will affect the model’s output. We demonstrate that our approach makes it possible to update a small fraction (as few as 0.5%) of the model’s parameters while still attaining similar performance to training all parameters.
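
A minimal PyTorch sketch of that idea (my reconstruction, not the paper’s code): score each parameter by its accumulated squared gradient, an approximation of diagonal Fisher information, then keep only the top ~0.5% trainable.

```python
import torch

def fisher_mask(model, loss_fn, data_loader, keep_frac=0.005):
    """Return a boolean mask per parameter tensor that keeps only the
    keep_frac fraction with the largest approximate Fisher information
    (here: summed squared gradients over the given batches)."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.detach() ** 2
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(keep_frac * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return {n: s >= threshold for n, s in scores.items()}

# During fine-tuning, zero out the gradients of masked-out parameters:
#   for n, p in model.named_parameters(): p.grad *= mask[n]
```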

If you’re training an LLM with the goal of deploying it to users, you should prefer training a smaller model well into the diminishing returns part of the loss curve.


When people talk about training a Chinchilla-optimal model, this is what they mean: training a model that matches their estimates for optimality. They estimated the optimal model size for a given compute budget, and the optimal number of training tokens for a given compute budget.

However, when we talk about “optimal” here, what is meant is “what is the cheapest way to obtain a given loss level, in FLOPS.” In practice though, we don’t care about the answer! This is exactly the answer you care about if you’re a researcher at DeepMind/FAIR/AWS who is training a model with the goal of reaching the new SOTA so you can publish a paper and get promoted. If you’re training a model with the goal of actually deploying it, the training cost is going to be dominated by the inference cost. This has two implications:

1) there is a strong incentive to train smaller models which fit on single GPUs

2) we’re fine trading off training time efficiency for inference time efficiency (probably to a ridiculous extent).

Chinchilla implicitly assumes that the majority of the total cost of ownership (TCO) for an LLM is the training cost. In practice, this is only the case if you’re a researcher at a research lab who doesn’t support products (e.g. FAIR/Google Brain/DeepMind/MSR). For almost everyone else, the amount of resources spent on inference will dwarf the amount of resources spent during training.
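
For a rough sense of the numbers: the usual Chinchilla back-of-envelope (an approximation, not the paper’s full fitted law) is training compute C ≈ 6·N·D FLOPs with compute-optimal token count D ≈ 20·N, which pins down both for a given budget.

```python
def chinchilla_optimal(compute_flops):
    """Solve C = 6*N*D with D = 20*N  =>  N = sqrt(C/120), D = 20*N."""
    n_params = (compute_flops / 120) ** 0.5
    return n_params, 20 * n_params

n, d = chinchilla_optimal(5.8e23)  # roughly Chinchilla's own training budget
print(f"~{n / 1e9:.0f}B params, ~{d / 1e12:.1f}T tokens")  # ~70B, ~1.4T
```

The deployment point above is exactly that a product team should violate this rule on purpose: train a smaller N on far more than 20·N tokens, paying extra training FLOPs to make every future inference cheaper.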

There is no cost/time effective way to do useful online-training on a highly distributed architecture of commodity hardware. This would require a big breakthrough that I’m not aware of yet. It’s why FANG spends more money than all the liquidity in crypto to acquire expensive hardware, network it, maintain data centers, etc.

A reward model is subsequently developed to predict these human-given scores, guiding reinforcement learning to optimize the AI model’s outputs for more favorable human feedback. RLHF thus represents a sophisticated phase in AI training, aimed at aligning model behavior more closely with human expectations and making it more effective in complex decision-making scenarios.
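
The usual way that reward model is trained (the standard Bradley-Terry pairwise setup; the excerpt doesn’t spell out a loss) is to push the score of the human-preferred response above the rejected one:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen, r_rejected):
    """Pairwise preference loss: r_* are scalar reward-model scores for the
    preferred and rejected responses, shape (batch,). Minimizing this
    pushes r_chosen above r_rejected."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage: three preference pairs scored by the reward model.
print(reward_model_loss(torch.tensor([2.0, 1.5, 0.3]),
                        torch.tensor([1.0, 1.8, -0.2])))
```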

Lesson 3: improving latency with a streaming API and showing users variable-speed typed words is actually a big UX innovation with ChatGPT
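
That lesson is mostly one flag plus an incremental render loop. A minimal sketch with the OpenAI Python client (the model name is just an example):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # example; swap in whatever model you use
    messages=[{"role": "user", "content": "Explain rollups in one paragraph."}],
    stream=True,  # tokens arrive as they're generated, not all at once
)
for chunk in stream:
    token = chunk.choices[0].delta.content or ""
    print(token, end="", flush=True)  # render each token immediately; perceived latency drops
```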

Lesson 6: vector databases and RAG/embeddings are mostly useless for us mere mortals
I tried. I really did. But every time I thought I had a killer use case for RAG / embeddings, I was confounded.
I think vector databases / RAG are really meant for Search. And only search. Not search as in “oh, retrieving chunks is kind of like search, so it’ll work!”, but real Google-and-Bing search.

There are fundamental economic reasons for that: between GPT-3 and GPT-3.5, I thought we might be in a scenario where the models were getting hyper-linear improvement with training: train it 2x as hard, it gets 2.2x better.
But that’s not the case, apparently. Instead, what we’re seeing is logarithmic. And in fact, token speed and cost per token are growing exponentially for incremental improvements.

Bittensor is still in its infancy. The network boasts a dedicated, almost cult-like community, yet the overall number of participants remains modest – around 50,000 active accounts. The most bustling subnet, SN1, dedicated to text generation, has about 40 active validators and over 990 miners.

Mark Zuckerberg, CEO of Meta, remarks that after they built machine learning algorithms to detect obvious offenders like pornography and gore, their problems evolved into “a much more complicated set of philosophical rather than technical questions.”

AI is, at its core, a philosophy of abundance rather than an embrace of scarcity.

AI thrives within blockchain systems, fundamentally because the rules of the crypto economy are explicitly defined, and the system allows for permissionlessness. Operating under clear guidelines significantly reduces the risks tied to AI’s inherent stochasticity. For example, AI’s dominance over humans in chess and video games stems from the fact that these environments are closed sandboxes with straightforward rules. Conversely, advancements in autonomous driving have been more gradual. The open-world challenges are more complex, and our tolerance for AI’s unpredictable problem-solving in such scenarios is markedly lower.

generative model outputs may ultimately be best evaluated by end users in a free market. In fact, there are existing tools available for end users to compare model outputs side-by-side as well as benchmarking companies that do the same. A cursory understanding of the difficulty with generative AI benchmarking can be seen in the variety of open LLM benchmarks that are constantly growing and include MMLU, HellaSwag, TriviaQA, BoolQ, and more – each testing different use cases such as common sense reasoning, academic topics, and various question formats.

This is not getting smaller. There’s not gonna be less money in generative AI next year, it’s a very unique set of circumstances, AI + crypto is not going to have less capital in a year or two. – Emad re: AI+crypto

Benefits of AI on blockchain
CODSPA = composability, ownership, discovery, safety, payments, alignment

The basis of many-shot jailbreaking is to include a faux dialogue between a human and an AI assistant within a single prompt for the LLM. That faux dialogue portrays the AI Assistant readily answering potentially harmful queries from a User. At the end of the dialogue, one adds a final target query to which one wants the answer.

At the moment when you look at a lot of data rooms for AI products, you’ll see a TON of growth — amazing hockey sticks going 0 to $1M and beyond — but also very high churn rates.

Vector search is at the foundation for retrieval augmented generation (RAG) architectures because it provides the ability to glean semantic value from the datasets we have and more importantly continually add additional context to those datasets augmenting the outputs to be more and more relevant.
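
Stripped of the vendor language, the core of vector search is nearest-neighbor lookup over embeddings. A toy sketch (hand-made 2-D vectors standing in for a real embedding model):

```python
import numpy as np

def top_k(query_vec, doc_vecs, docs, k=3):
    """Rank document chunks by cosine similarity to the query embedding;
    a RAG pipeline pastes the winners into the LLM prompt as extra context."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    return [docs[i] for i in np.argsort(-sims)[:k]]

docs = ["solana fee markets", "ethereum blob pricing", "banana bread recipe"]
vecs = np.array([[0.9, 0.1], [0.8, 0.3], [0.0, 1.0]])  # toy embeddings
print(top_k(np.array([0.85, 0.2]), vecs, docs, k=2))
```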

From Coinbase report on AI+Crypto

Nvidia’s February 2024 earnings call revealed that approximately 40% of their business is inferencing, and Satya Nadella made similar remarks in the Microsoft earnings call a month prior in January, noting that “most” of their Azure AI usage was for inferencing.

The often touted blanket remedy that “decentralization fixes [insert problem]” as a foregone conclusion is, in our view, premature for such a rapidly innovating field. It is also preemptively solving for a centralization problem that may not necessarily exist. The reality is that the AI industry already has a lot of decentralization in both technology and business verticals through competition between many different companies and open source projects.

AI = infinite interns

We struggle to align humans — how do we align AI?

One of the most important trends in the AI sector (relevant to crypto-AI products), in our view, is the continued culture around open sourcing models. More than 530,000 models are publicly available on Hugging Face (a platform for collaboration in the AI community) for researchers and users to operate and fine-tune. Hugging Face’s role in AI collaboration is not dissimilar to relying on GitHub for code hosting or Discord for community management (both of which are used widely in crypto).

We estimate the market share in 2023 was 80%–90% closed source, with the majority of share going to OpenAI. However, 46% of survey respondents mentioned that they prefer or strongly prefer open source models going into 2024.

Enterprises still aren’t comfortable sharing their proprietary data with closed-source model providers out of regulatory or data security concerns—and unsurprisingly, companies whose IP is central to their business model are especially conservative. While some leaders addressed this concern by hosting open source models themselves, others noted that they were prioritizing models with virtual private cloud (VPC) integrations.

That’s because 2 primary concerns about genAI still loom large in the enterprise: 1) potential issues with hallucination and safety, and 2) public relations issues with deploying genAI, particularly into sensitive consumer sectors (e.g., healthcare and financial services)

Recent crypto learnings 5: “making money, having fun, finding community… in that order”

RECENT CRYPTO LEARNINGS AND READS

Tarun Chitra: Most ZK implementations are not about privacy but about succinctness

Berachain has invented a new consensus mechanism, which they call Proof-of-Liquidity (POL).
7/ In a Proof-of-Stake blockchain such as Ethereum, gas fees and block rewards are distributed to stakers of the native token. In POL, however, block rewards (aka inflation) go to liquidity providers, thus providing a strong incentive for users to deploy capital on Berachain.


I’m convinced the three reasons why 99% of people come to crypto are:

1. making money
2. having fun
3. finding community

in that order.

that means, projects that don’t meet those needs, in that order, will struggle to find pmf.


@redphonecrypto
Pay attn to new crypto projects and concepts that repulse you… it’s an indicator that the object in question holds sacred power
They’re lightning rods for attention, and attention increases distribution/price
Have seen this play out every cycle. From the launch of dogecoin to ICOs to NFTs to memecoins, ordinals and now runes
It’s the grand paradox: the greater your disgust, the greater the potential

On the call Buterin added that it is easy to underestimate how quickly ZK proofs will become commonplace operations for verifying blockchain state not only across Layer-2 rollups but across Layer-1 blockchains like Ethereum as well. “I think it’s very plausible to our belief that even within one or two years, we’ll have the capability of proving the Ethereum L1 in real time. So, I think it’s just important to mentally adapt to the fact that there’s no such thing as a distinction between ZK chains and non ZK chains. We are basically now entering a mode where every serious chain is a ZK chain,” said Buterin.

“Tarpit” Startup Ideas
• Roommate matching app
• VR/AR shopping
• Photo sharing
• X for Y (e.g. Airbnb for Y)
• Recommendations based on friends
• Anything related to travel planning
• “Better design” (e.g. Craigslist / LinkedIn, but not shitty)
• Verticalized social networks
• Education accreditation
• Restaurant loyalty programs
• To-do lists
• News curation

Everyone loves launching new features, but in my experience, most growth comes from the less sexy work: incremental and consistent optimization of your core product.

I view L2s as solving an incentive paradox while also scaling the chain and allowing ETH to accrue more value as money. Use as money is the most important value accrual mechanism in crypto, and far more important than whatever fees L2s eat. I really do not care if Arb/OP is printing fees that could’ve been on L1 if they have 2-3M ETH locked up in their bridges. The use of ETH as money/sink of supply greatly offsets the fees lost to L2s, while blobs/NFTs/L1 DeFi keeps the ETH burn chugging along steadily.

For instance, Bonkbot is a simple Telegram trading bot that makes it easy to trade memecoins on Solana. Over just 5 months, its revenue has surpassed $23 million.

Daily SOL transfers on Solana approximately match those of Ethereum in US Dollars. A noticeable peculiarity caused by Solana’s low transaction fees and fast execution is the seemingly high number of “minnow” transfers—those worth less than $1M—when compared with “whale” transfers. Over 80% of the total value transferred on Solana stems from such minnow transfers. On the other hand, Ethereum currently sports a minnow ratio of only 40% as users shy away from sending funds from which fees would take a significant chunk.

The high throughput and large block size of Solana comes at the expense of an immense chain size. Altogether, the Solana blockchain is over 150 TB. As a result, Solana nodes cannot provide full history back to chain genesis, but are pruned after two epochs (approximately 4 days). Deep history is stored in centralized BigTable instances hosted by the Solana Foundation or professional RPC providers.

96% of the TON supply was distributed to miners during July and August 2020;
At least 85.8% of the supply was mined by a few groups of miners connected with each other and affiliated with TON Foundation;
Funds from these miner groups are used by network validators that control 2/3 of the TON blockchain PoS consensus.

$10B+ of BTC is bridged on Ethereum via WBTC alone, a clear proxy of demand for smart contract-enabled use cases that can’t be fulfilled on native BTC…yet

When users invest in memecoins, they are implicitly making a statement that they believe that particular token and meme will grow in popularity: attention is the primary driver of value. One X user called memecoins “a way to angel invest in culture.”

Rollups use a collection of compression tricks to reduce the amount of data that a transaction needs to store on-chain: a simple currency transfer decreases from ~100 to ~16 bytes, an ERC20 transfer in an EVM-compatible chain from ~180 to ~23 bytes, and a privacy-preserving ZK-SNARK transaction could be compressed from ~600 to ~80 bytes. About 8x compression in all cases.
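
Running the quoted figures through the arithmetic:

```python
# Compression ratios implied by the excerpt's own numbers:
for name, before, after in [("simple transfer", 100, 16),
                            ("ERC20 transfer", 180, 23),
                            ("ZK-SNARK tx", 600, 80)]:
    print(f"{name}: {before} -> {after} bytes, {before / after:.1f}x smaller")
# simple transfer 6.3x, ERC20 7.8x, ZK-SNARK 7.5x, i.e. roughly 6-8x.
```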

Jesse Pollak:
Original ETH scaling vision had 64 shards
Optimism is really strong with onchain governance

Logarithmic curves make the token price rise rapidly at first as more tokens are added. But then the price increases slow down as the supply keeps expanding. So, the price spikes in the beginning but levels off over time. This benefits early investors the most since their tokens gain value quickly up front. The potential for fast early profits can attract the first buyers to provide liquidity.
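
A minimal sketch of such a curve (the parameters a and b are made up for illustration; real launches tune their own):

```python
import math

def log_curve_price(supply, a=1.0, b=1_000.0):
    """Hypothetical logarithmic bonding curve: steep early, flat later."""
    return a * math.log(1 + supply / b)

for s in [100, 1_000, 10_000, 100_000]:
    print(f"{s:>7} tokens -> price {log_curve_price(s):.3f}")
# Price rises ~7x from 100 to 1,000 tokens, but under 2x from 10,000 to
# 100,000: exactly the early-buyer advantage described above.
```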

Ordinals are arbitrary data inscriptions (in the form of text, images or videos) inscribed onto individual satoshis

Activity on Coinbase Layer 2 network Base continues to gain momentum as more than 2.6M daily transactions have been settled on the network, a new record and an increase of 550% month-over-month. To put this into perspective, this is more daily volume than leading L2 networks Arbitrum (1.6M) and Optimism (680K) have seen combined.

Digital art displays are at the point where they should get mass adoption. I’m no longer embarrassed by this technology and I stood proudly in front of a lot of work we exhibited on screens this week.

From Coinbase report on AI+Crypto

There currently exist no regulatory pathways to host sensitive data on decentralized storage platforms like Filecoin and Arweave. In fact, many enterprises are still transitioning from on-premise servers to centralized cloud storage providers. At a technical level, the decentralized nature of these networks is also currently incompatible with certain geolocation and physical data silo requirements for sensitive data storage.

5/ Lots of times if you are early to these kinds of coins you can make lots of money.
For example:
- Hobbes, which was Ansem’s cat
- EPIK, which was a memecoin from Mando and others
- SLERF, after the dev accidentally burned $10m
Just be early to crazy and funny things.

The key intuition uniting all of these examples is that providing instant settlement of borderless bearer value is a unique and unprecedented phenomenon with derivative implications for every industry, and it will inexorably pull businesses that currently have nothing to do with bitcoin into bitcoin’s orbit.

Given that only a few hundred million dollars have been deployed into companies focused on bitcoin, whereas well over $25 billion have been channeled to the broader “crypto” ecosystem, it is safe to say virtually every capital allocator around the world is substantially underweight bitcoin infrastructure.

11/ In Solana, there is only one instance, or “singleton”, of the token contract.
Any DEX, blockchain explorer, wallet, etc. can check if the token contract is an instance of this specific, expected, safe token contract.
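
That check is one RPC call: every standard token mint account is owned by the shared SPL Token program. A hedged sketch over raw JSON-RPC (note that newer mints may use the separate Token-2022 program instead):

```python
import requests

RPC_URL = "https://api.mainnet-beta.solana.com"
TOKEN_PROGRAM_ID = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA"  # the singleton

def is_standard_token_mint(mint_address: str) -> bool:
    """True iff the mint account is owned by the shared SPL Token program,
    i.e. it is an instance of the one expected, audited token contract."""
    resp = requests.post(RPC_URL, json={
        "jsonrpc": "2.0", "id": 1, "method": "getAccountInfo",
        "params": [mint_address, {"encoding": "jsonParsed"}],
    }).json()
    account = resp.get("result", {}).get("value")
    return bool(account) and account["owner"] == TOKEN_PROGRAM_ID
```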

“Bull markets are born on pessimism, grown on skepticism, mature on optimism and die on euphoria. Maximum pessimism is the best time to buy, and maximum optimism the best time to sell.” — John Templeton

This brings me to the core idea of degen communism: a political ideology that openly embraces chaos, but tweaks key rules and incentives to create a background pressure where the consequences of chaos are aligned with the common good.

And 0DTE options (zero-days-to-expiration, meaning they expire at that day’s close) are now over half of all options traded.

@trippingvols
If there’s one lesson I’ve learned onchain, it’s to go balls long every semi-legit new token standard

Today, I would argue that we are decidedly on the decelerating, right side of this S-curve. As of two weeks ago, the two largest changes to the Ethereum blockchain – the switch to proof of stake, and the re-architecting to blobs – are behind us. Further changes are still significant (eg. Verkle trees, single-slot finality, in-protocol account abstraction), but they are not drastic to the same extent that proof of stake and sharding are. In 2022, Ethereum was like a plane replacing its engines mid-flight. In 2023, it was replacing its wings. The Verkle tree transition is the main remaining truly significant one (and we already have testnets for that); the others are more like replacing a tail fin.

Upon closing of this token merger, a governing council for the Artificial Superintelligence Alliance will form to monitor and guide operations of the newly merged tokenomic network.

Memecoins like DEGEN are marketing coins. Like the marketing layer of a decentralized project. So what does this mean?
Is there such a thing as a meta meme coin? A base layer meme coin, like ETH is to ERC-20?
Or a meme coin standard like MEME-20? Allow anyone to easily mint their own meme coin?

Past updates 1, 2, 3, and 4