US taxpayers keep paying for US banks’ (and the Fed’s) incompetence

From Jesse Myers’ Substack about this week’s JPMorgan takeover of the failing First Republic Bank.

All I can say is le sigh:

80% of losses on the assumed loans will be “shared” by the government, meaning they are funded ultimately by the taxpayer. Similarly, the $50B FDIC loan at an undisclosed fixed rate is risk borne ultimately by the taxpayer, in order to fatten the deal enough for JPMorgan to expect a 20% IRR on what would have been massively unattractive without taxpayer-funded incentives.

Privatized gains, socialized losses. 2008, but bigger, and fewer people care. Homo domesticus.

The Fed prints $3T; oops, we have 10% inflation.

Let’s fix this by rapidly jacking rates; oops, we have bank failures.

Let’s blame it on crypto and supply shocks and anything but our own self-serving incompetence.

Come on, people, just buy BTC and ETH and at least partially opt the f out!

Source: https://jessemyers.substack.com/p/651-may-2023-market-update-on-bitcoin

Homo domesticus (excerpt from Against the Grain, which is a fantastic read)

I was struck by this passage from Against the Grain (by James Scott, who also wrote Seeing Like A State; thanks to my friend Cathy for the rec):

Domesticated animals—especially sheep and goats, in this case—can be seen in the same light. They are our dedicated, four-footed (or, in the cases of chickens, ducks, and geese, two-footed) servant foragers. Thanks to their gut bacteria, they can digest plants that we cannot find and/or break down and can bring them back to us, as it were, in their “cooked” form as fat and protein, which we both crave and can digest. We selectively breed these domesticates for the qualities we desire: rapid reproduction, toleration of confinement, docility, meat, and milk and wool production.

Isn’t this what governments and large institutions are doing to us? Whether intentional or purely through the invisible logic of incentives…

The main exception to the above list seems to be rapid reproduction, as fertility rates in developed countries are very clearly declining. But confinement…docility…production… one only needs to look at the inexorable march of capitalism and centralized power structures to see some worrying trends…

Two Degens – George’s interview with Daniel Hwang, and more 5 Minute Crypto

Howdy, how is everyone?

Just some Two Degens updates to share. George had a great chat with his friend and crypto OG Daniel Hwang (who’s been in crypto since early bitcoin mining days, worked at Terra among others, and just knows a TON). Pardon the bit of background noise there.

I’ve also been doing the daily 5 minute crypto update. With a few breaks here and there lol

Check ’em out, and please let us know how we can do better. I have to admit it’s been a joy to get back to making content. We’ll probably add video soon so you can see our filtered faces ;)

Who’s in the loop? How humans create AI that then creates itself

If you think about the approximate lifecycle of AI that’s being built today, it goes something like this:

1. Write algorithms (eg, neural nets)
2. Scrape data (eg, text and images)
3. Train (1) algorithms on (2) scraped data to create models (eg, GPT-4, Stable Diffusion)
4. Use human feedback (eg, RLHF) to fine-tune (3) models – including addition of explicit rules / handicaps to prevent abuse
5. Build products using those (4) fine-tuned models – both end-user products (like Midjourney) and API endpoints (like OpenAI’s API)
6. Let users do things with the (5) products (eg, write essays, suggest code, translate languages). Inputs → outputs
7. Users and AI owners then evaluate the (6) results against objectives like profitability, usefulness, controllability, etc. Based on these evaluations, steps (1) through (6) are further refined and rebuilt and improved
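The seven steps above are really one feedback loop. A toy Python sketch of that loop (every function name here is a placeholder I made up for illustration, not a real training pipeline):

```python
# Toy sketch of the AI lifecycle as a feedback loop.
# Each function is a named placeholder, not a real implementation.

def write_algorithms():          # step 1: design the algorithms (eg, neural nets)
    return "neural-net code"

def scrape_data():               # step 2: gather training data (largely automated)
    return ["text", "images"]

def train(algos, data):          # step 3: produce a base model from (1) and (2)
    return {"base_model": (algos, data)}

def fine_tune(model, feedback):  # step 4: eg RLHF, plus explicit guardrails
    model["tuned_with"] = feedback
    return model

def build_products(model):       # step 5: end-user apps and API endpoints
    return {"product": model}

def use(product, user_input):    # step 6: inputs in, outputs out
    return f"output for: {user_input}"

def evaluate(output):            # step 7: score vs profitability/usefulness/control
    return len(output)           # placeholder metric

# One turn of the loop; in reality the step-7 score feeds back into steps 1-6.
model = fine_tune(train(write_algorithms(), scrape_data()), "human feedback")
score = evaluate(use(build_products(model), "write an essay"))
```

The point of the sketch is just the shape: each function is a box that humans currently sit inside, and automation is steadily replacing the humans box by box.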

Each of those steps initially involved humans. Many humans doing many things. Humans wrote the math and code that went into the machine learning algorithms. Humans wrote the scripts and tools that scraped the data. Etc.

And very steadily, very incrementally, very interestingly, humans are automating and removing themselves from each of those steps.

AI agents are one example of this. Self-directed AI agents can take roughly defined goals and execute multi-step action plans, removing humans from steps (6) and (7).

Data scraping (step 2) is mostly automated. And I think AI and automation can already do much of the cleaning and labeling (eg, ETL), in ways that are better, cheaper, and faster than humans.

AI is being taught how to write and train its own algorithms (steps 1 and 3).

I’m not sure about the state of the art for steps (4) and (5). Step 4 (human feedback) seems hardest because, well, ipso facto. But there are early signs “human feedback” is not all that unique, whether using AI to generate synthetic data, or to perform tasks by “acting like humans” (eg, acting like a therapist), or labeling images, etc.

Step (5) is definitely within reach, given all the viral Twitter threads we’ve seen where AI can build websites and apps and debug code.

So eventually we’ll have AI that can do most if not all of steps 1-7. AI that can write itself, train itself, go out and do stuff in the world, evaluate how well it’s done, and then improve on all of the above. All at digital speed, scale, and with incremental costs falling to zero.

Truly something to behold. And in that world, where will humans be most useful, if anywhere?

Just a fascinating thought experiment, is all. 🧐🧐🧐

These times are only gettin’ weirder.

Using ChatGPT (GPT-4) to study Chinese song lyrics

Recently I wanted to understand the lyrics for 青花瓷, but I couldn’t find good translations through Google since the writing is fairly dense and symbolic. For me it reads like a Tang poem or something. Google Translate was nearly meaningless.

So I turned to ChatGPT (using GPT-4) and boy did it deliver! I was giddy when I saw the first reply to my simple prompt:

[Screenshot: ChatGPT (GPT-4) analyzing the 青花瓷 lyrics]

Wow! It’s got everything I need.
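I did this in the ChatGPT web app, but the same kind of query is scriptable. A minimal sketch using OpenAI’s Python SDK (assumes the `openai` package and an `OPENAI_API_KEY` in the environment; the prompt wording is my own, not the exact one from the screenshot):

```python
# Sketch: asking GPT-4 for a line-by-line translation of dense, symbolic lyrics.
# Assumes openai >= 1.0 is installed and OPENAI_API_KEY is set in the environment.

def build_messages(lyrics: str) -> list[dict]:
    """Build a chat request asking for translation plus notes on the imagery."""
    return [
        {"role": "system",
         "content": "You are an expert translator of classical-style Chinese lyrics."},
        {"role": "user",
         "content": f"Translate these lyrics line by line and explain the imagery:\n{lyrics}"},
    ]

def explain_lyrics(lyrics: str) -> str:
    # Import deferred so the sketch can be read/tested without the SDK installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(lyrics),
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(explain_lyrics("天青色等烟雨 而我在等你"))
```

Nothing fancy; the system message is doing the work of my original prompt, steering the model toward translation-plus-commentary rather than a bare Google Translate-style rendering.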

I really want to use ChatGPT more. One of the downsides of being in my late 30s is that I’m so *comfortable* with my existing tech habits that it takes more consistent reminding and constant pushing to build a new one.

But this leap feels to me like it’s bigger than when internet search first became fairly good. I’m thinking back to, like, the improvement that was AltaVista, let alone Google.