This, from the US Treasury:
And:
Incredible writing from: https://jameslavish.substack.com/p/the-cbo-and-taking-the-treasurys
Hosts: Hasu and Mike
Hasu – advisor to Flashbots, Lido
MEV value chain
-money from reordering / censoring transactions
-any value a privileged actor can extract – eg, a central bank printing money – can be considered MEV
People use crypto to escape MEV in real world
Should build crypto systems resilient to MEV
Principles in reducing MEV
-more competition = lower fees, less MEV
-more private = harder to extract MEV
-more user control
MEV is invisible – even looking at transaction data in Etherscan, won’t see sandwich attack
Parties:
Users
Wallets
Searchers
Builders
Relayers
Validators
MEV schools
1. Democratize MEV – MEV is hard to minimize, so isolate the builder role and make it competitive
2. Minimize MEV –
User/wallet layer – order flow auctions – users don’t send transactions to the public mempool or to a block builder; the right to execute the transaction is auctioned off, and if there are competing bidders, the price rises and the value goes to the user (instead of to the MEV capturer) – see the sketch below
Mike: “Payment for order flow” – Robinhood offering zero fees, selling order flow to Citadel / hedge funds
Mike: In past, equity brokerages would charge you for trades – now people have opted for free trades / invisible fees (eg, Robinhood)
We can do better in DeFi – especially on transparency
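A minimal sketch of the order flow auction idea described above – names, numbers, and the rebate mechanism are purely illustrative, not any production OFA:

```python
# Minimal sketch of an order flow auction (OFA): instead of broadcasting to the
# public mempool, the user's transaction is auctioned to competing searchers,
# and the winning bid is rebated to the user. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Bid:
    searcher: str           # who wants the right to execute around this transaction
    rebate_to_user: float   # how much of the expected MEV they give back

def run_order_flow_auction(user_tx: dict, bids: list[Bid]) -> tuple[str, float]:
    """Return (winner, user_rebate). With no competition the user gets nothing back."""
    if not bids:
        return ("public_mempool", 0.0)
    best = max(bids, key=lambda b: b.rebate_to_user)
    return (best.searcher, best.rebate_to_user)

# With competing bidders the rebate is bid up and value flows back to the user.
bids = [Bid("searcher_a", 0.02), Bid("searcher_b", 0.05)]
print(run_order_flow_auction({"swap": "1 ETH -> USDC"}, bids))  # ('searcher_b', 0.05)
```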
World of Cosmos and Ethereum are converging – ETH community has been better at executing
Hard to say in future if X project is ETH or Cosmos project – there’s increasing convergence
MEV accrues to whoever gets to order the transactions
Mike: MEV will accrue to execution layer
L2 sequencers today are centralized – with plans to decentralize – will eventually face same MEV problems as ETH L1
L2s all need PBS (proposer builder separation)
L2 sequencers today do 4 things:
-receive transactions
-decide on ordering of transactions
-give user a receipt
-send the ordered batch to the data availability layer — that’s what creates finality
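A rough sketch of those four responsibilities as a single class – hypothetical names, not any specific L2’s sequencer:

```python
# Sketch of the four sequencer responsibilities listed above (hypothetical names,
# not any specific L2's implementation).
import hashlib
import json

class Sequencer:
    def __init__(self):
        self.pending = []

    def receive(self, tx: dict) -> str:
        """1. Receive a transaction and 3. hand the user back a receipt (a tx hash)."""
        self.pending.append(tx)
        return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

    def order(self) -> list[dict]:
        """2. Decide on transaction ordering (here: simple first-come-first-served)."""
        batch, self.pending = self.pending, []
        return batch

    def post_to_da(self, batch: list[dict]) -> None:
        """4. Send the ordered batch to the data availability layer -- this is what
        creates finality. Here we just print instead of posting to a real DA layer."""
        print(f"posting batch of {len(batch)} txs to DA layer")
```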
MEV should not be counted towards the security budget — that’s how core devs think about it; they want to minimize it, not enshrine it
Minimum security should be paid from inflation + base fee
“MEV is very hard to track”
Different forms of MEV
-arbitrage – different prices on different exchanges, or underpriced asset
-sandwich attacks – buy just before a user’s trade to push the price up, then sell right after at the higher price (see the toy example below)
-liquidations – searchers typically do this
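A toy constant-product (x·y = k) pool, with made-up numbers and no fees, showing where the sandwich profit comes from:

```python
# Toy constant-product (x*y=k) pool showing how a sandwich attack extracts value.
# Pool sizes and trade amounts are made up purely for illustration; fees ignored.

def swap_eth_for_usdc(pool_eth: float, pool_usdc: float, eth_in: float):
    """Return (usdc_out, new_pool_eth, new_pool_usdc)."""
    k = pool_eth * pool_usdc
    new_eth = pool_eth + eth_in
    new_usdc = k / new_eth
    return pool_usdc - new_usdc, new_eth, new_usdc

def swap_usdc_for_eth(pool_eth: float, pool_usdc: float, usdc_in: float):
    """Return (eth_out, new_pool_eth, new_pool_usdc)."""
    k = pool_eth * pool_usdc
    new_usdc = pool_usdc + usdc_in
    new_eth = k / new_usdc
    return pool_eth - new_eth, new_eth, new_usdc

pool_eth, pool_usdc = 1_000.0, 2_000_000.0   # price ~2000 USDC per ETH

# Attacker front-runs: buys ETH first, pushing the price up for the victim.
atk_eth, pool_eth, pool_usdc = swap_usdc_for_eth(pool_eth, pool_usdc, 100_000)
# Victim's buy now executes at a worse price.
_, pool_eth, pool_usdc = swap_usdc_for_eth(pool_eth, pool_usdc, 50_000)
# Attacker back-runs: sells the ETH back at the inflated price.
atk_usdc_out, pool_eth, pool_usdc = swap_eth_for_usdc(pool_eth, pool_usdc, atk_eth)

print(f"attacker profit: {atk_usdc_out - 100_000:.0f} USDC")  # ~4,700 USDC, paid for by the victim
```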
Uniswap V3 – concentrated liquidity – reduced sandwich attacks
Statistical arbitrage – take balance sheet risk, small period of time where you have to hold asset before selling it
Many top DeFi traders are also block builders – want to maximize inclusion guarantees, greater control over trading strategy – can make trades at the last moment, can see all other transactions and order / cancel them
In systems we build, must make sure they’re not sensitive to latency — otherwise there’s incentive to colocate near each other, more centralization
Phil Daian post on this: https://collective.flashbots.net/t/decentralized-crypto-needs-you-to-be-a-geographical-decentralization-maxi/1385
Turn latency competition into a price / auction – auctions are generally fairer, and price (ability to pay) is easier to decentralize than geographic proximity
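A tiny illustration of that idea – ordering by bid rather than by arrival time, so colocation buys no advantage (field names are illustrative only):

```python
# Sketch: priority decided by a sealed bid instead of arrival time, so being
# physically closer (lower latency) buys nothing. Field names are illustrative.

def order_by_bid(txs: list[dict]) -> list[dict]:
    """Order transactions by priority fee (highest first), ignoring arrival time."""
    return sorted(txs, key=lambda tx: tx["priority_fee"], reverse=True)

txs = [
    {"sender": "colocated_bot", "arrival_ms": 1,  "priority_fee": 2},
    {"sender": "remote_user",   "arrival_ms": 90, "priority_fee": 5},
]
print([tx["sender"] for tx in order_by_bid(txs)])  # ['remote_user', 'colocated_bot']
```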
Users love Robinhood because good feature is very visible (free trades) and bad feature is very invisible (selling user order flow)
Mike: Optimism and Arbitrum have very different approaches to MEV
“Solana is case study for why to not build low latency blockchains”
One of the two Solana block builders is operating a liquid staking protocol
If you don’t have robust mempool and fee market design, get a lot of spam
58% of Solana transactions are failed arbitrage transactions
What’s novel in Cosmos —
-Osmosis doing something very interesting – onchain block building and searching
-Anoma & Penumbra – intent-based transaction frameworks
Mike: Cosmos has very different opinions, diversity of ideas
Hasu: Big drawback is everyone has different validator sets, but as shared security grows, what compromises will be made?
How does regulation bump into MEV?
Crypto is about fair and equitable markets for users with less manipulation and exploitation
Execution on public blockchains is continually improving
Regulators are largely pragmatic
“If a single regulatory regime can make rules in crypto, then crypto has just failed”
Lovely list of perspectives and insights Nat Friedman has gained over the years, at nat.org
A few of my favorites (copied verbatim):
—
The efficient market hypothesis is a lie
-In many cases it’s more accurate to model the world as 500 people than 8 billion
We are often not even asking the right questions
Where do you get your dopamine?
-The answer is predictive of your behavior
Going fast makes you focus on what’s important; there’s no time for bullshit
Enthusiasm matters!
-Energy is a necessary input for progress
—
Added to my personal bible
Lucky to have Imran Mohamad, Kyber’s marketing lead, to discuss in his words:
What happened with $ARB?
What's going on with @CreditSuisse & QE?
How did Kyber back to being in the top DEXes?
What's our longterm ecosystem strategy?
My fav NFT?
All and more here, thanks @habits + @bridgexplore for having me on your podcast! https://t.co/xQ34wGqNLh
— Imran.knc | KyberSwap (@imranfaststart) March 27, 2023
And in a separate episode, Jorge and I catch up on all the latest crypto shenanies and macro hankies and bank bailout pankies. Yup.
// everything is paraphrased from Sam’s perspective unless otherwise noted
Base model is useful, but adding RLHF – taking human feedback (eg, which of two outputs is better) – works remarkably well with remarkably little data to make the model more useful
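A minimal sketch of the pairwise-preference (reward modeling) step that RLHF pipelines commonly use, as described in the public literature – not OpenAI’s actual code:

```python
# Minimal sketch of RLHF's reward-modeling step: a Bradley-Terry style pairwise
# loss over human preference labels. Toy numbers; not OpenAI's implementation.
import torch

def pairwise_preference_loss(reward_chosen: torch.Tensor,
                             reward_rejected: torch.Tensor) -> torch.Tensor:
    """Human labels say 'chosen' is better; train the reward model so that
    r(chosen) > r(rejected). Loss = -log sigmoid(r_chosen - r_rejected)."""
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores a reward model might assign to two completions of the same prompt.
r_chosen = torch.tensor([1.3, 0.2])
r_rejected = torch.tensor([0.4, 0.9])
print(pairwise_preference_loss(r_chosen, r_rejected))  # lower when chosen scores higher
```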
Pre-training dataset – lots of open-source DBs, partnerships – a lot of the work is building a great dataset
“We should be in awe that we got to this level” (re GPT-4)
Eval = how to measure a model after you’ve trained it
Compressing all of the web into an organized box of human knowledge
“I suspect too much processing power is using model as database” (versus as a reasoning engine)
Every time we put out new model – outside world teaches us a lot – shape technology with us
ChatGPT bias – “not something I felt proud of”
Answer will be to give users more personalized, granular control
Hope these models bring more nuance to world
Important for progress on alignment to increase faster than progress on capabilities
GPT-4 = most capable and most aligned model they’ve done
RLHF is important component of alignment
Better alignment → better capabilities, and vice versa
Tuned GPT-4 to follow the system message (prompt) closely
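For illustration, an example of setting a system message in the chat format, using the pre-1.0 openai Python client that was current around the time of this interview – the prompt text is made up, and an API key is required:

```python
# Example of steering the model with a system message, using the pre-1.0 `openai`
# Python client. The prompt content is made up; requires OPENAI_API_KEY to be set.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message sets persistent instructions the model is tuned to follow.
        {"role": "system", "content": "You answer only in haiku."},
        {"role": "user", "content": "Explain what a system message does."},
    ],
)
print(response["choices"][0]["message"]["content"])
```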
There are people who spend 12 hours/day, treat it like debugging software, get a feel for model, how prompts work together
Dialogue and iterating with AI / computer as a partner tool – that’s a really big deal
Dream scenario: have a US constitutional convention for AI, agree on rules and system, democratic process, builders have this baked in, each country and user can set own rules / boundaries
Doesn’t like being scolded by a computer — “has a visceral response”
At OpenAI, we’re good at finding lots of small wins, the detail and care applied — the multiplicative impact is large
People getting caught up in parameter count race, similar to gigahertz processor race
OpenAI focuses on just doing whatever works (eg, their focus on scaling LLMs)
We need to expand on the GPT paradigm to discover new science
If we don’t build AGI but make humans super great — still a huge win
Most programmers think GPT is amazing, makes them 10x more productive
AI can deliver extraordinary increase in quality of life
People want status, drama, people want to create, AI won’t eliminate that
Eliezer Yudkowsky’s AI criticisms – he wrote a good blog post on AI alignment, despite much of his writing being hard to understand / having logical flaws
Need a tight feedback loop – continue to learn from what we learn
Surprised a bit by ChatGPT reception – thought it would be, eg, 10th fastest growing software product, not 1st
Knew GPT-4 would be good – remarkable that we’re even debating whether it’s AGI or not
Re: AI takeoff, believes in slow takeoff, short timelines
Lex: believes GPT-4 can fake consciousness
Ilya S said: if you trained a model on data with no examples whatsoever related to consciousness, yet it could immediately understand when a user described what consciousness felt like, that would be some evidence of consciousness
Lex on Ex Machina: consciousness is when you smile for no audience, experience for its own sake
Consciousness…something very strange is going on
// Stopped taking notes ~halfway