Recent startup, tech, AI, crypto learnings: “The surest sign of a midcurved institution is its insistence on an ability to control, predict, and dictate to complex adaptive systems”

I have zero insight into if/when Indian policies will change, but I honestly feel like the fact that the most populous country on earth has banned crypto is not discussed enough
Utterly massive opportunity when/if this changes

With a simple meme, you can make millions of people laugh all over the world. I can tweet a joke that gets 15 million views, and I can do it from my toilet. That’s scale. And as every good entrepreneur knows, wherever there’s scale, there’s a chance to turn a massive profit.

If there were one rule to unite all memelords, it would be this: capitalize on the current thing.

Mr. Beast —
Anytime we do something that no other creator can do, that separates us in their mind and makes our videos more special to them. It changes how they see us, and it does make them watch more videos and engage more with the brand. You can’t track the “wow factor,” but I can describe it: anything that no other youtuber can do. And it’s important we never lose our wow.

Let’s say we have a 10-minute video about a guy surviving weeks in the woods. Instead of making the first 3 minutes of the video about his first day and then progressing from there, like a logical filmmaker would, we’d try to cover multiple days in the first 3 minutes of the video so the viewer is now super invested in the story.

But in general once you have someone for 6 minutes they are super invested in the story and probably in what I call a “lull”. They are watching the video without even realizing they are watching a video.

There were days back in the early 2000s when you would have no idea what to expect from a Bernanke or Greenspan-led Fed. The FOMC meetings were actually quite riveting because you simply had no idea what to expect. But not the Powell-led Fed. They often explicitly tell you what they will do, but in rare cases when they don’t, they leak it to their favorite Wall Street Journal puppet, Nick Timiraos, who spells it out for you.

Lest you think I’m being unfair to our ivory-tower friends, here’s the lore of the term “capitalism”: born of a socialist French intellectual and popularized by Marx. Lol. Yeah. The guys who don’t like the natural system named the natural system something that’s kinda pejorative.

I want you to help me build a media empire, where we make (formerly) obscure scholars famous, compile the real time history of ASI, and produce the very best intellectual content in the world.

Content, confidence, and context—these are the three dimensions Shreyas Doshi discovered traditional companies use to promote employees. “This is unfortunately the cause of a lot [of] persistent frustration for otherwise-talented people who are GREAT at content, but repeatedly get passed over for promotion to higher levels… it is usually because they are not projecting as much confidence as they ought to for the next level and they are not as attuned to the context of the org & the company.”

SWIFT’s SwiftNet private key infrastructure and banking messaging standards like ISO 20022 are used by 11,000+ banks globally to facilitate the communication of payment instructions between banks.

On Mr. Beast as Buzzfeed 2.0
(he can only sell very generic products like chocolate because his audience is so poor and so broad) — and he is in the most competitive part of the ecosystem / general entertainment.

The best ideas come as jokes. Make your thinking as funny as possible.

Do not address your readers as though they were gathered together in a stadium. When people read your copy, they are alone.

This “secret privatization” of the entire North Korean economy has been incredibly thorough. It’s estimated that around 80 percent of all goods and services in North Korea are provided in secret and in shadow. It’s capitalism as an extremophile species of lichen, colonizing the cracks and crevices of the official society, and keeping the whole system afloat.

“The concern was that 200 phones traveling at 800 kilometers per hour in a plane could rapidly connect to many towers at once, overloading the infrastructure. At least that’s what the FCC thought could happen. So, they banned cell phone use in flight in 1991. But there’s a problem with this theory—a plane is a big metal enclosure, essentially a Faraday cage. So, it should block almost all electromagnetic signals.”

After analyzing value spreads throughout his career, AQR Capital cofounder Cliff Asness concludes that markets are becoming less informationally efficient. “You’d be forgiven if, like me, your initial whiggish assumption is that markets would get more efficient over time. After all, over the last 20-40 years the ubiquity and speed of available information has continuously grown, and at the same time trading costs have come rapidly down. But like me initially, you’d be mistaking speed for accuracy.”

In the 1950s alone, America built five generations of fighter jets, three generations of manned bombers, two classes of aircraft carriers, submarine-launched ballistic missiles, and nuclear-powered attack submarines.

In 2020/21, very loose monetary conditions and huge money printing combined with two major breakout innovations: DeFi and NFTs. People were given stimmies, fiat was massively devalued again, and the prospect of decentralized finance being the genuine future of finance, alongside NFTs being the future of digital property, caused enormous retail participation.

“Your highly questionable parenting vision,” responded Nicole, who was a day away from labor. “One, no school or college. Two, separate apartment in childhood. Three, move out at 16. Four, learn to drive all machines as early as possible. Five, leave the family fortune to one child. Six, children have to fly in economy while we are in business.” Luckey also believes strongly in (legally obtained) child labor (permits), that having fewer than 2.1 children would make him a traitor to the nation, and that children as young as 2 are fully capable of walking several miles without a stroller (“History shows it,” he says).

“At some point, in business and in life and in romance, you have to commit to a path,” said the 31-year-old Luckey. “A lot of my peers in the tech industry do not share this philosophy … They’re always pursuing everything with optionality.

By 2023, the cold war between these tribes had escalated into open conflict as hedge fund billionaires led the charge to oust Ivy League presidents and The New York Times sued OpenAI. Incursions into enemy territory are treated with alarm, like when Google’s AI model Gemini was criticized by Riverians for reflecting distinctively Villagey political attitudes.

Villagers see themselves as being clearly right on the most important big-picture questions of the day, from climate change to gay and trans rights. So they view the Riverian inclination to poke holes in arguments and “just ask questions” as being a waste of time at best, and as potentially empowering a wake of bad-faith actors and bigots.

Bitcoin is worth a trillion bucks and half of Wall Street owns it at this point. All the rest of crypto is worth another trillion. Tether owns more Treasuries than Germany. There’s been more than $20bn of venture capital poured into this space in the last four years. We’re not that early.

The surest sign of a midcurved institution is its insistence on an ability to control, predict, and dictate to complex adaptive systems. You don’t control them, they control you.

Envision interest rates as futures for dollars.
Interest rates = price of money
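That framing can be made concrete with a little arithmetic. A minimal sketch, assuming simple annual compounding; the function names are illustrative, not from any finance library:

```python
def future_value(dollars_today: float, annual_rate: float, years: int) -> float:
    """What today's dollars 'trade' for when delivered in the future."""
    return dollars_today * (1 + annual_rate) ** years

def present_value(dollars_future: float, annual_rate: float, years: int) -> float:
    """What a future dollar costs today -- the discount the rate implies."""
    return dollars_future / (1 + annual_rate) ** years

print(future_value(100, 0.05, 1))           # → 105.0
print(round(present_value(1, 0.05, 1), 4))  # → 0.9524
```

At a 5% rate, a dollar delivered a year from now costs about 95 cents today: the interest rate is literally the exchange rate between present and future dollars.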

I trust GPT4 more than I trust our politicians. In the coming years AI models will become so much more capable that their judgment will start being used to mediate disputes – first inside companies, but then legally. Lawyers already use it constantly.

China was The State.
Crypto was The Individual.
It’s the Machine that will overthrow the plutocracy, because the core of the plutocracy – its super bubble – was the false insistence that it was a machine.

Who’s in the loop? How humans create AI that then creates itself

If you think about the approximate lifecycle of AI that’s being built today, it goes something like this:

1. Write algorithms (eg, neural nets)
2. Scrape data (eg, text and images)
3. Train (1) algorithms on (2) scraped data to create models (eg, GPT-4, Stable Diffusion)
4. Use human feedback (eg, RLHF) to fine-tune (3) models – including addition of explicit rules / handicaps to prevent abuse
5. Build products using those (4) fine-tuned models – both end-user products (like MidJourney) and API endpoints (like OpenAI’s API)
6. Let users do things with the (5) products (eg, write essays, suggest code, translate languages). Inputs > Outputs
7. Users and AI owners then evaluate the (6) results against objectives like profitability, usefulness, controllability, etc. Based on these evaluations, steps (1) through (6) are further refined and rebuilt and improved
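The seven steps above can be sketched as a loop. This is a toy, runnable sketch: every function body is a stub standing in for work that humans (or, increasingly, AI) perform, and none of it calls a real ML library:

```python
# Toy sketch of the AI lifecycle. Each stub marks a step that humans
# currently do and that is steadily being automated away.

def write_algorithms():              # 1. eg, a neural-net architecture
    return "transformer"

def scrape_data():                   # 2. eg, text and images
    return ["text", "images"]

def train(algorithm, data):          # 3. algorithms + data -> base model
    return {"algorithm": algorithm, "data": data, "tuned": False}

def fine_tune(model, feedback):      # 4. eg, RLHF plus explicit safety rules
    return {**model, "tuned": True, "feedback": feedback}

def build_product(model):            # 5. end-user app or API endpoint
    return lambda prompt: f"output for {prompt!r}"

def evaluate(outputs, objectives):   # 7. score results against objectives
    return {obj: len(outputs) for obj in objectives}  # stub scoring

def lifecycle(objectives):
    model = train(write_algorithms(), scrape_data())
    model = fine_tune(model, feedback="human (or synthetic) preferences")
    product = build_product(model)
    outputs = [product(p) for p in ("essay", "code")]  # 6. users do things
    return evaluate(outputs, objectives)               # feeds back into 1-6

print(lifecycle(["usefulness", "controllability"]))
```

The point of the sketch is the shape of the loop: evaluation results in step (7) feed back into refining steps (1) through (6), which is exactly where removing humans compounds.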

Each of those steps initially involved humans. Many humans doing many things. Humans wrote the math and code that went into the machine learning algorithms. Humans wrote the scripts and tools that scraped the data. Etc.

And very steadily, very incrementally, very interestingly, humans are automating and removing themselves from each of those steps.

AI agents are one example of this. Self-directed AI agents can take roughly defined goals and execute multi-step action plans, removing humans from steps (6) and (7).

Data scraping (2) is mostly automated. And I think AI and automation can already do much of the cleaning and labeling (eg, ETL) better, cheaper, and faster than humans.

AI is being taught how to write and train its own algorithms (steps 1 and 3).

I’m not sure about the state of the art for steps (4) and (5). Step 4 (human feedback) seems hardest because, well, ipso facto. But there are early signs “human feedback” is not all that unique, whether using AI to generate synthetic data, or to perform tasks by “acting like humans” (eg, acting like a therapist), or labeling images, etc.

Step (5) is definitely within reach, given all the viral Twitter threads we’ve seen where AI can build websites and apps and debug code.

So eventually we’ll have AI that can do most if not all of steps 1-7. AI that can write itself, train itself, go out and do stuff in the world, evaluate how well it’s done, and then improve on all of the above. All at digital speed, scale, and with incremental costs falling to zero.

Truly something to behold. And in that world, where will humans be most useful, if anywhere?

Just a fascinating thought experiment, is all. 🧐🧐🧐

These times are only gettin’ weirder.

Using ChatGPT (GPT-4) to study Chinese song lyrics

Recently I wanted to understand the lyrics for 青花瓷 (“Blue and White Porcelain”), but I couldn’t find good translations through Google since the writing is fairly dense and symbolic. For me it reads like a Tang poem or something. Google Translate was nearly meaningless.

So I turned to ChatGPT (using GPT-4) and boy did it deliver! I was giddy when I saw the first reply to my simple prompt:

[screenshot: ChatGPT (GPT-4) translating and explaining the song lyrics]

Wow! It’s got everything I need.

I really want to use ChatGPT more. One of the downsides of being in my late 30s is that I’m so *comfortable* with my existing tech habits that it takes more consistent reminding and constant pushing to build a new one.

But this leap feels to me like it’s bigger than when internet search first became fairly good. I’m thinking back to like, the improvement that was Altavista, let alone Google.

Podcast notes: Sam Altman (OpenAI CEO) on Lex Fridman – “Consciousness…something very strange is going on”

// everything is paraphrased from Sam’s perspective unless otherwise noted

Base model is useful, but adding RLHF – take human feedback (eg, of two outputs, which is better) – works remarkably well with remarkably little data to make model more useful

Pre-training dataset – lots of open source DBs, partnerships – a lot of work is building a great dataset

“We should be in awe that we got to this level” (re GPT 4)

Eval = how to measure a model after you’ve trained it

Compressing all of the web into an organized box of human knowledge

“I suspect too much processing power is using model as database” (versus as a reasoning engine)

Every time we put out new model – outside world teaches us a lot – shape technology with us

ChatGPT bias – “not something I felt proud of”
Answer will be to give users more personalized, granular control

Hope these models bring more nuance to world

Important for progress on alignment to increase faster than progress on capabilities

GPT4 = most capable and most aligned model they’ve done
RLHF is important component of alignment
Better alignment > better capabilities and vice-versa

Tuned GPT4 to follow system message (prompt) closely
There are people who spend 12 hours/day, treat it like debugging software, get a feel for model, how prompts work together

Dialogue and iterating with AI / computer as a partner tool – that’s a really big deal

Dream scenario: have a US constitutional convention for AI, agree on rules and system, democratic process, builders have this baked in, each country and user can set own rules / boundaries

Doesn’t like being scolded by a computer — “has a visceral response”

At OpenAI, we’re good at finding lots of small wins, the detail and care applied — the multiplicative impact is large

People getting caught up in parameter count race, similar to gigahertz processor race
OpenAI focuses on just doing whatever works (eg, their focus on scaling LLMs)

We need to expand on GPT paradigm to discover novel new science

If we don’t build AGI but make humans super great — still a huge win

Most programmers think GPT is amazing, makes them 10x more productive

AI can deliver extraordinary increase in quality of life
People want status, drama, people want to create, AI won’t eliminate that

Eliezer Yudkowsky’s AI criticisms – wrote a good blog post on AI alignment, despite much of writing being hard to understand / having logical flaws

Need a tight feedback loop – continue to learn from what we learn

Surprised a bit by ChatGPT reception – thought it would be, eg, 10th fastest growing software product, not 1st
Knew GPT4 would be good – remarkable that we’re even debating whether it’s AGI or not

Re: AI takeoff, believes in slow takeoff, short timelines

Lex: believes GPT4 can fake consciousness

Ilya S said that if you trained a model that had no data or training examples whatsoever related to consciousness, yet it could immediately understand when a user described what consciousness felt like, that would be a sign of something real there

Lex on Ex Machina: consciousness is when you smile for no audience, experience for its own sake

Consciousness…something very strange is going on

// Stopped taking notes ~halfway