Technology as the ultimate non-zero-sum game

I read two quotes recently that I think are related in a very deep and abstract way:

I think a reasonable case can be made that the discovery and facilitation of non-zero-sum games is both objectively (i.e., metaphysically) and subjectively valuable. Furthermore, I think a reasonable case can be made that we have literally evolved to find this process deeply meaningful and to socially reward people who are very good at engaging in it.

The above is from Brett Andersen’s Substack. If we think about all of the things we love – from art to sports to the best institutions, whether religions or businesses – they are all prime examples of ultimate success at non-zero-sum games.

Soon after I read this quote:

It would not surprise me if we saw another axial awakening someday, powered by another flood of technology. I find it hard to believe that we could manufacture robots that actually worked and not have them disturb our ideas of religion and God. Someday we will make other minds, and they will surprise us.

That is from übermensch Kevin Kelly.

With Apple launching their AR headset, with AI dominating every tech headline, with self-driving cars actually working in major cities, with Boston Dynamics robots doing Olympic-caliber backflips, it seems we are on the cusp of an awakening of some sort. A technological revolution in both mind (AI) and body (robots / physical reality). AI alone is already disturbing society’s ideas about relationships and intelligence and emotion.

One of the best definitions I’ve ever heard of technology is “technology is anything that breaks a constraint.” And what is a constraint if not a zero-sum boundary condition of some sort?

Thanks for listening to my TED talk. Cheers!

Who’s in the loop? How humans create AI that then creates itself

If you think about the approximate lifecycle of AI that’s being built today, it goes something like this:

1. Write algorithms (eg, neural nets)
2. Scrape data (eg, text and images)
3. Train (1) algorithms on (2) scraped data to create models (eg, GPT-4, Stable Diffusion)
4. Use human feedback (eg, RLHF) to fine-tune (3) models – including addition of explicit rules / handicaps to prevent abuse
5. Build products using those (4) fine-tuned models – both end-user products (like MidJourney) and API endpoints (like OpenAI’s API)
6. Let users do things with the (5) products (eg, write essays, suggest code, translate languages). Inputs → outputs
7. Users and AI owners then evaluate the (6) results against objectives like profitability, usefulness, controllability, etc. Based on these evaluations, steps (1) through (6) are further refined and rebuilt and improved
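The seven steps above can be sketched as one pass through a loop. This is a toy sketch only – every function below is a hypothetical stub standing in for a step, not a real library call:

```python
# Toy sketch of the AI lifecycle described above.
# Each function is a hypothetical stub for one step of the pipeline.

def write_algorithms():                # step 1: humans (or AI) write the algorithms
    return "neural net architecture"

def scrape_data():                     # step 2: gather training data
    return ["text", "images"]

def train(algorithm, data):            # step 3: train models on scraped data
    return {"model": algorithm, "trained_on": data}

def fine_tune(model, feedback):        # step 4: align via human feedback (eg, RLHF)
    model["feedback"] = feedback
    return model

def build_product(model):              # step 5: wrap the model in a product or API
    return {"product": "API endpoint", "model": model}

def use_product(product, user_input):  # step 6: users turn inputs into outputs
    return f"output for {user_input!r}"

def evaluate(outputs):                 # step 7: score results against objectives
    return {"usefulness": 0.8, "profitability": 0.5}

# One iteration of the loop; in the real lifecycle, step 7's scores
# feed back into refining steps 1 through 6 on the next pass.
algo = write_algorithms()
data = scrape_data()
model = fine_tune(train(algo, data), feedback="human ratings")
product = build_product(model)
output = use_product(product, "write an essay")
scores = evaluate([output])
```

The point of the sketch is the shape, not the stubs: automating any one function removes humans from that step, and automating `evaluate` plus the feedback edge closes the loop entirely.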

Each of those steps initially involved humans. Many humans doing many things. Humans wrote the math and code that went into the machine learning algorithms. Humans wrote the scripts and tools that scraped the data. Etc.

And very steadily, very incrementally, very interestingly, humans are automating and removing themselves from each of those steps.

AI agents are one example of this. Self-directed AI agents can take roughly defined goals and execute multi-step action plans, removing humans from steps (6) and (7).

Data scraping is mostly automated (2). And I think AI and automation can already do much of the cleaning and labeling (eg, ETL), in ways that are better, cheaper, and faster than humans.

AI is being taught how to write and train its own algorithms (steps 1 and 3).

I’m not sure about the state of the art for steps (4) and (5). Step 4 (human feedback) seems hardest because, well, it involves humans by definition. But there are early signs that “human feedback” is not all that unique, whether it’s using AI to generate synthetic data, to perform tasks by “acting like humans” (eg, acting like a therapist), to label images, etc.

Step (5) is definitely within reach, given all the viral Twitter threads we’ve seen where AI can build websites and apps and debug code.

So eventually we’ll have AI that can do most if not all of steps 1-7. AI that can write itself, train itself, go out and do stuff in the world, evaluate how well it’s done, and then improve on all of the above. All at digital speed, scale, and with incremental costs falling to zero.

Truly something to behold. And in that world, where will humans be most useful, if anywhere?

Just a fascinating thought experiment, is all. šŸ§šŸ§šŸ§

These times are only gettin’ weirder.

Using ChatGPT (GPT-4) to study Chinese song lyrics

Recently I wanted to understand the lyrics for 青花瓷, but I couldn’t find good translations through Google, since the writing is fairly dense and symbolic. For me it reads like a Tang poem or something. Google Translate was nearly meaningless.

So I turned to ChatGPT (using GPT-4) and boy did it deliver! I was giddy when I saw the first reply to my simple prompt:

[Screenshot: ChatGPT (GPT-4) analyzing the song lyrics]

Wow! It’s got everything I need.

I really want to use ChatGPT more. One of the downsides of being in my late 30s is that I’m so *comfortable* with my existing tech habits that it takes more consistent reminding and constant pushing to build a new one.

But this leap feels to me like it’s bigger than when internet search first became fairly good. I’m thinking back to, like, the improvement that was AltaVista, let alone Google.

5 Minute Crypto – šŸ¤– GPT-4 rewriting my show notes šŸ¤–

[DALL·E image: cute robot taking notes while listening to a podcast, cartoon anime style]
I’ve been recording a daily 5 minute crypto news update for the last few weeks. Today I asked GPT-4 to rewrite the show notes (which you can see here) into a “simple concise article”. Here’s what it gave me – color me impressed!

Title: Crypto Daily Update: Onchain Billionaires, Ethereum’s Next Upgrade, Dune Dashboards, Market Structure, and Arthur Hayes

On Tuesday, April 25th, Bitcoin is trading at around $27,300, and Ethereum is at $1,800, both experiencing a 1% drop in the last 24 hours.

Blockworks recently published an article discussing “onchain billionaires” who own identifiable crypto wallets with assets worth more than $1 billion. Among them are Ethereum’s Vitalik Buterin, Ripple’s Jed McCaleb, TRON’s Justin Sun, and the mysterious Bitcoin inventor Satoshi Nakamoto.

Ethereum’s next upgrade, called Cancun-Deneb, was announced by the core developers. The upgrade will consist of two parts: Cancun for the execution layer and Deneb for the consensus layer. The most significant part of the upgrade is EIP-4844, dubbed proto-danksharding, which will lower transaction costs for layer 2 solutions like Optimism and Arbitrum.

Dune dashboard data provided by 0xkofi reveals insights into rollup economics. Arbitrum has more than double Optimism’s total transaction fee revenues, with ZKSync coming in third. Arbitrum also posts twice the amount of transaction data and has had nearly seven times more builders deploying smart contracts this month compared to Optimism.

A tweet thread by Zero IKA covers basic market structures, including bullish, bearish, and neutral/ranging. Moving averages can be utilized to understand market shifts, such as 9-day or 200-day moving averages.
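As an illustration of the moving-average idea from that thread – a minimal sketch with made-up prices, computing a simple moving average over a 9-period window:

```python
def simple_moving_average(prices, window):
    """Average of the last `window` prices, for each position
    where a full window of data is available."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

# Made-up daily closes, purely for illustration.
closes = [100, 102, 101, 105, 107, 106, 108, 110, 109, 111]
sma_9 = simple_moving_average(closes, 9)
```

The same function with `window=200` gives the 200-day moving average; traders often read price holding above a rising long-term average as bullish structure, and below a falling one as bearish.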

Q1 insights from Electric Capital highlight the growth of crypto developers. Notable takeaways include the presence of 7,000 full-time developers, a dip in the number of new developers and repositories, and high growth ecosystems such as Aztec Network, Metamask, and Hyperledger.

GOP Majority Whip Tom Emmer criticizes SEC Chair Gary Gensler in a tweet thread, accusing him of incompetence, abuse of power, and contradictory statements that create chaos in the market.

Ethereum is a unique triple-point asset, combining properties of capital assets, transformable assets, and store-of-value assets. This unprecedented combination creates a new investment and ownership paradigm.

Arthur Hayes shares his latest essay, “Exit Liquidity,” discussing the future of trade, the use of multiple currencies, and potential roles of gold and Bitcoin in the global economy.

(written by GPT-4; art by DALL-E)

Sycophancy and sandbagging šŸ¤”

…they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated.

Kinda like people, no?

From the same paper I mentioned before: https://cims.nyu.edu/~sbowman/eightthings.pdf