5 Minute Crypto – 🤖 GPT-4 rewriting my show notes 🤖

[DALL·E image: a cute robot taking notes while listening to a podcast, cartoon anime style]
I’ve been recording a daily 5 minute crypto news update for the last few weeks. Today I asked GPT-4 to rewrite the show notes (which you can see here) into a “simple concise article”. Here’s what it gave me – color me impressed!

Title: Crypto Daily Update: Onchain Billionaires, Ethereum’s Next Upgrade, Dune Dashboards, Market Structure, and Arthur Hayes

On Tuesday, April 25th, Bitcoin is trading at around $27,300, and Ethereum is at $1,800, both experiencing a 1% drop in the last 24 hours.

Blockworks recently published an article discussing “onchain billionaires” who own identifiable crypto wallets with assets worth more than $1 billion. Among them are Ethereum’s Vitalik Buterin, Ripple’s Jed McCaleb, TRON’s Justin Sun, and the mysterious Bitcoin inventor Satoshi Nakamoto.

Ethereum’s next upgrade, called Cancun-Deneb, was announced by the core developers. The upgrade will consist of two parts: Cancun covering the execution-layer changes and Deneb the consensus-layer changes. The most significant piece is EIP-4844, dubbed proto-danksharding, which will lower transaction costs for layer 2 solutions like Optimism and Arbitrum.

Dune dashboard data provided by 0xkofi reveals insights into rollup economics. Arbitrum has more than double Optimism’s total transaction fee revenues, with ZKSync coming in third. Arbitrum also posts twice the amount of transaction data and has had nearly seven times more builders deploying smart contracts this month compared to Optimism.

A tweet thread by Zero IKA covers basic market structures, including bullish, bearish, and neutral/ranging. Moving averages, such as the 9-day or 200-day, can be used to spot shifts in market structure.
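(A quick illustration from me, not part of GPT-4’s article: a simple moving average is just the mean of the last N daily closes.)

```python
# Simple moving average sketch; the closes below are made-up BTC prices.
def sma(prices: list[float], window: int) -> list[float]:
    """Mean of each trailing `window` of closing prices."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

closes = [27100.0, 27250.0, 27300.0, 27280.0, 27350.0,
          27400.0, 27320.0, 27290.0, 27310.0, 27300.0]
print(sma(closes, 9))  # 9-day SMA; a 200-day SMA just uses window=200
```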

Q1 insights from Electric Capital highlight the growth of crypto developers. Notable takeaways include the presence of 7,000 full-time developers, a dip in the number of new developers and repositories, and high growth ecosystems such as Aztec Network, Metamask, and Hyperledger.

GOP Majority Whip Tom Emmer criticizes SEC Chair Gary Gensler in a tweet thread, accusing him of incompetence, abuse of power, and contradictory statements that create chaos in the market.

Ethereum is a unique triple-point asset, combining properties of capital assets, transformable assets, and store-of-value assets. This unprecedented combination creates a new investment and ownership paradigm.

Arthur Hayes shares his latest essay, “Exit Liquidity,” discussing the future of trade, the use of multiple currencies, and potential roles of gold and Bitcoin in the global economy.

(written by GPT-4; art by DALL-E)

Sycophancy and sandbagging 🤔

…they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated.

Kinda like people, no?

From the same paper I mentioned before: https://cims.nyu.edu/~sbowman/eightthings.pdf

8 (fascinating) things about large language models: “Specific important behaviors in LLMs tend to emerge unpredictably as a byproduct of increasing investment”

From this paper: https://cims.nyu.edu/~sbowman/eightthings.pdf

Below are some selections from the list (quoted verbatim):

1. LLMs predictably get more capable with increasing investment, even without targeted innovation

There are substantial innovations that distinguish these three models [GPT, GPT-2, and GPT-3], but they are almost entirely restricted to infrastructural innovations in high-performance computing rather than model-design work that is specific to language technology.
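(An aside from me, not a quote: the “predictably” above comes from scaling laws, which fit test loss as a smooth power law in training compute, so a bigger run’s loss can be forecast before it is trained. A toy sketch with invented constants:)

```python
# Toy power-law scaling curve: loss falls smoothly as compute grows.
# The constants a and b are invented for illustration, not fitted values.
def predicted_loss(compute_flops: float, a: float = 50.0, b: float = 0.05) -> float:
    return a * compute_flops ** -b

for c in (1e18, 1e20, 1e22):
    print(f"{c:.0e} FLOPs -> predicted loss {predicted_loss(c):.2f}")
```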

2. Specific important behaviors in LLMs tend to emerge unpredictably as a byproduct of increasing investment

They’re justifiably confident that they’ll get a variety of economically valuable new capabilities, but they can make few confident predictions about what those capabilities will be or what preparations they’ll need to make to be able to deploy them responsibly.

4. There are no reliable techniques for steering the behavior of LLMs

In particular, models can misinterpret ambiguous prompts or incentives in unreasonable ways, including in situations that appear unambiguous to humans, leading them to behave unexpectedly.

6. Human performance on a task isn’t an upper bound on LLM performance

…they are trained on far more data than any human sees, giving them much more information to memorize and potentially synthesize.

Podcast notes: Sam Altman (OpenAI CEO) on Lex Fridman – “Consciousness…something very strange is going on”

// everything is paraphrased from Sam’s perspective unless otherwise noted

Base model is useful, but adding RLHF – taking human feedback (eg, of two outputs, which is better?) – works remarkably well with remarkably little data to make the model more useful
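To make that concrete, here’s a minimal sketch (mine, not OpenAI’s actual pipeline) of the pairwise-preference idea: train a reward model so the output humans preferred scores higher than the one they rejected.

```python
# Reward-model sketch for pairwise feedback ("of two outputs, which is
# better?"). TinyRewardModel and the random tensors are toy stand-ins;
# real reward models are fine-tuned LLMs scored on full transcripts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Scores a response embedding: higher means better."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Embeddings of two candidate outputs; a human labeled `chosen` as better.
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

# Bradley-Terry style loss: push score(chosen) above score(rejected).
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
opt.step()
print(f"pairwise preference loss: {loss.item():.3f}")
```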

Pre-training dataset – lots of open-source databases, partnerships – a lot of the work is building a great dataset

“We should be in awe that we got to this level” (re GPT 4)

Eval = how to measure a model after you’ve trained it
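In the simplest case that’s just a fixed question set plus a scoring rule – a sketch, where ask_model is a hypothetical stand-in for the model under test:

```python
# Tiny eval sketch: run the model on fixed questions, score the answers.
def ask_model(question: str) -> str:
    # Hypothetical stand-in for calling the trained model.
    return "4" if question == "What is 2 + 2?" else "not sure"

eval_set = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
]

correct = sum(ask_model(q).strip() == answer for q, answer in eval_set)
print(f"accuracy: {correct}/{len(eval_set)}")  # accuracy: 1/2
```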

Compressing all of the web into an organized box of human knowledge

“I suspect too much processing power is using model as database” (versus as a reasoning engine)

Every time we put out new model – outside world teaches us a lot – shape technology with us

ChatGPT bias – “not something I felt proud of”
Answer will be to give users more personalized, granular control

Hope these models bring more nuance to the world

Important for progress on alignment to increase faster than progress on capabilities

GPT4 = most capable and most aligned model they’ve done
RLHF is important component of alignment
Better alignment –> better capabilities, and vice-versa

Tuned GPT4 to follow system message (prompt) closely
There are people who spend 12 hours/day, treat it like debugging software, get a feel for model, how prompts work together
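For a sense of what “following the system message” means in practice, a sketch using the 2023-era openai Python library (the key and prompt text are placeholders):

```python
# Steering GPT-4 via a system message with the ChatCompletion endpoint
# of the pre-1.0 `openai` library. The API key below is a placeholder.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message sets persistent behavior; GPT-4 was tuned
        # to weight it heavily relative to user turns.
        {"role": "system", "content": "Answer in exactly one short sentence."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)
print(response.choices[0].message.content)
```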

Dialogue and iterating with AI / computer as a partner tool – that’s a really big deal

Dream scenario: have a US constitutional convention for AI, agree on rules and system, democratic process, builders have this baked in, each country and user can set own rules / boundaries

Doesn’t like being scolded by a computer — “has a visceral response”

At OpenAI, we’re good at finding lots of small wins, the detail and care applied — the multiplicative impact is large

People getting caught up in parameter count race, similar to gigahertz processor race
OpenAI focuses on just doing whatever works (eg, their focus on scaling LLMs)

We need to expand on GPT paradigm to discover novel new science

If we don’t build AGI but make humans super great — still a huge win

Most programmers think GPT is amazing, makes them 10x more productive

AI can deliver extraordinary increase in quality of life
People want status, drama, people want to create, AI won’t eliminate that

Eliezer Yudkowsky’s AI criticisms – he wrote a good blog post on AI alignment, despite much of his writing being hard to understand / having logical flaws

Need a tight feedback loop – continue to learn from what we learn

Surprised a bit by ChatGPT reception – thought it would be, eg, 10th fastest growing software product, not 1st
Knew GPT4 would be good – remarkable that we’re even debating whether it’s AGI or not

Re: AI takeoff, believes in slow takeoff, short timelines

Lex: believes GPT4 can fake consciousness

Ilya S said if you trained a model on data with no mentions or training examples whatsoever related to consciousness, yet it could immediately understand when a user described what consciousness felt like – that might count as evidence of consciousness

Lex on Ex Machina: consciousness is when you smile for no audience, experience for its own sake

Consciousness…something very strange is going on

// Stopped taking notes ~halfway

Jobs replaced by AI, or jobs re-created by AI?

Tweet from @bentossell (I love his daily AI newsletter)

The list got me thinking… instead of framing as “AI replaces X job”, I think the actual outcome is more like “AI recreates X job”, in much the same way that ATMs recreated the bank teller’s job, and personal computers recreated the typist’s job, and Photoshop recreated the graphic designer’s job…

Implicit in this is that change is inevitable and that outcomes will favor those who best adapt.

Just some thinking aloud…

Content creator –> AI –> Human does more editing, curating, and aggregating (eg, across different media types)

Journalist –> AI –> Human does more primary research (developing sources, interviewing), editing

Teacher –> AI –> Human does more coaching (emotional support), planning (what to learn when), problem solving (when students are stuck)

Customer service rep –> AI –> Human does more complex issue resolution, relationship building, sales development

Social media manager –> AI –> Human does more editing and curation, community and relationship building

Translator –> AI –> Human does more fact checking, editing, research

Musician –> AI –> Human does more mixing, curating, multimedia, live performance, inventing new musical styles

Not insignificant, too, that several of the jobs on the list — such as web developer or social media manager — didn’t exist in their current form as recently as a few decades ago, and were also enabled (or transformed) by similar mega waves of technological change (eg, personal computers, smartphones, the internet).

I do think AI has surprised in the following important way: Even as recently as a year ago, most people would have assumed that the creative fields (broadly, activities like making art, writing fiction, composing music) were less at risk than the more repetitive, linear, analytical fields. Today generative art and LLMs have definitively proven otherwise.

Change filled times ahead!