Episode 14: Vitalik on the return of Ethereum - The Network State Podcast

#14 - Vitalik on the return of Ethereum

Jul 7, 2025
YouTube
Apple Podcasts
Spotify
This transcript of the podcast was auto-generated and may include typos

Balaji Srinivasan 0:00
Vitalik, welcome. There’s plenty to talk about. First, do you want to give some remarks on, I don’t know, the state of Ethereum? What’s on your mind? Then we can get into specifics.

Vitalik Buterin 0:09
From the technical perspective, all of the pieces are finally, actually in place to make it viable to do the kinds of things that we’ve been talking about doing for a really long time. And you could give a few different examples of that.

One of them is obviously scale. In 2017 it was CryptoKitties, in 2021 it was DeFi; what broke all of those things is that eventually there was so much excitement that it hit against a wall of fixed capacity, and then transaction fees went to $50 and a bunch of people got angry. Layer twos are collectively doing about 250 TPS, and with Pectra, the upcoming hard fork in two weeks, the blob count will double, so that goes up to about 500, and then there’s a pathway to increase that to about 5,000 for layer twos. Basically, there’s a pretty credible path to get to many thousands of TPS over the course of the next year or so. And for the base layer, there’s been this growing research direction around asking: how do we super-optimize the L1? In particular, how do we, one, actually formalize some of our criteria in terms of preserving the network’s decentralization, preserving the network’s resilience, making sure that we’re not just 25 servers, and then turn that into something where we actually have a very clear idea of what the constraints are, so we can super-optimize around them? And so there’s a collection of EIPs planned for 2026 that look like they have a very plausible story for scaling the L1 gas limit by 10x, and then after that we of course have ZK-VMs, and that’s a story for scaling up even higher. So in terms of scale, we’ve basically 10x’d already, and then the question is how do we go further, and how do we improve interoperability of the things that already exist. From a scale point of view, things that could not be done two or three years ago can be done now. So that’s scale.

Another interesting dimension is security. If you think about DeFi and ask: would you, with a straight face, confidently recommend to an average person to use DeFi as a savings and wealth-building vehicle? I think honestly, three or four years ago the answer just had to be an unambiguous no. And the reason, basically, is: what is the point of even talking about 6% APY versus 4% APY, when the thing that really matters to people is not getting minus 100% APY? But here is what we’ve seen since then. Actually, this is interesting: if you look at the statistics, if you go and ask the bot to give you the total dollar amount of DeFi hacks divided by the total DeFi TVL, I asked, and the answer it gave is, I think, 0.53%. So basically, in a randomly selected DeFi protocol, the chance that you’ll lose money from being hacked is only about half a percent. And look, that feels a little uncomfortably high. But number one, that average includes the risky protocols, and it has been trending down over time (yes, it has, that’s a great trend), and then if you look at things like Aave, the sturdier, battle-tested stuff...
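
For readers who want to sanity-check the rollup numbers above, here is a rough back-of-envelope sketch. Only the roughly 250-to-500 TPS jump from doubling the blob count comes from the conversation; the blob size, slot time, and bytes-per-transaction figures below are illustrative assumptions.

```python
# Rough, illustrative back-of-envelope for rollup (L2) throughput from blob space.
# The parameter values are assumptions for this sketch, not figures from the episode.

SLOT_SECONDS = 12            # one Ethereum slot
BLOB_BYTES = 128 * 1024      # size of one data blob (128 KiB)
AVG_TX_BYTES = 150           # assumed compressed size of one rollup transaction

def rollup_tps(blobs_per_block: int) -> float:
    """TPS if every blob byte carried compressed rollup transaction data."""
    bytes_per_second = blobs_per_block * BLOB_BYTES / SLOT_SECONDS
    return bytes_per_second / AVG_TX_BYTES

for blobs in (3, 6, 48):     # illustrative: a pre-fork target, a doubled target, a future target
    print(f"{blobs:>2} blobs/block -> ~{rollup_tps(blobs):,.0f} TPS")
```

With these assumptions the three cases land at roughly 220, 440, and 3,500 TPS, i.e. the same order of magnitude as the "about 250," "about 500," and "pathway to about 5,000" figures quoted above.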

Balaji Srinivasan 3:30
I can’t really comment on that one myself; I never did any yield farming or anything like that, because you’d be, quote, investing with the expectation of a 6% return while actually risking your entire principal, since there could be a smart contract hack and it could go to zero, right? So instead, I’d only risk principal in something like an angel investment, where I knew it could go to zero, and I just held back and waited until DeFi matured. And now it’s starting to mature.

Vitalik Buterin 3:55
Yeah. So basically, for mature protocols it’s even lower than half a percent, right? And then on top of that, the other risk, of course, is the risk that you personally screw up because something happens to your wallet. And this is why I’ve been pushing for social recovery, multisig, account abstraction, all of this stuff, non-stop for the last 10 years. And the thing that did happen is that the Safe front end got hacked, right? But if you were using an alternative UI, you were fine. If you were checking the transactions you were signing, you were fine. And obviously you can’t expect regular people to do that. But basically, the infrastructure keeps on hardening, there are like 10 alternative UIs you can use, and I think even the base UI is planning to move to something much safer. So from an average user’s point of view, your ability to get something that is simultaneously decentralized and not going to waste your money is rapidly increasing. If you remember 2017: in 2017 everyone thought that smart contract wallets were dead because the wallet code itself got hacked. You remember the Parity hack, and then there was another hack; basically Parity got hacked twice, right? And that was sort of the nadir for trusting smart contracts. Since then, the Safe smart contracts, the contracts themselves, have actually held up basically since the start. And so each of these things that in the beginning phase is this crazy thing ("oh my god, there’s a big chance you’re going to lose all of your stuff") is maturing, and it’s more and more just, actually, completely fine.

Balaji Srinivasan 5:31
It’s funny you say that, because this is the flippening, right? I said this to somebody: what is DeFi? It’s what takes over after the current Western financial system ends, right? So there’s a flipping here, where this is actually going to become more secure than the current Western financial system.

Vitalik Buterin 5:49
Yeah, exactly. This is the other side of the graph: the degree to which you can realistically expect confidence in tradfi. Honestly, if I had to put my money in a tradfi bank, even in the US, and just close my eyes and wait a year, is it still there? I would say the risk that it’s gone is probably a little bit higher than half a percent, right? There are definitely people who will say it’s some crazy number like 20%; I don’t think it’s that high. But if you’re talking about the survival of your nest egg, your retirement savings, even a freaking 2% chance is scary, right?

Balaji Srinivasan 6:29
Well, that’s funny, because if you take your numbers, you’re saying the risk is higher: don’t invest in the US dollar what you can’t afford to lose, right?

Vitalik Buterin 6:38
Yeah, okay. And we’re starting to get more of these assets that are actually coming online, right? There are obviously the different US dollars, a bunch of different stablecoin dollars, and there are even things like DAI that are only partially dependent on actual US bank account deposits. And then you’re starting to get other kinds of assets; you’re also starting to get euros and other currencies. So you’re able to get a pretty good, diversified mirror portfolio, and your level of personal political risk basically drops down toward zero. And for a lot of people, for a rapidly growing percentage of the world, that’s a very meaningful and large reduction in your chance of getting a minus 100% APY there.

Balaji Srinivasan 7:27
That’s right. And I actually have a few comments on this. The first is that that aspect of projecting rule of law and contracts and so on, which we had talked about for a long time in crypto, is now not just a theory; it’s a reality, a necessity, in Argentina or Nigeria or places like that. Stablecoins are actually being used all over the world, and I think they flipped Visa and MasterCard in volume not too long ago, right? So when people say, oh, what does crypto do, what are the applications of crypto: well, okay, there’s digital gold, fine. But aside from that, it’s bigger than Visa and MasterCard. Okay, what else have you got, right? And it’s already very, very big. Go ahead.

Vitalik Buterin 8:07
Yeah, because we’re talking about scale, right, and we’ve talked about safety. Privacy is another big one. Ethereum now has mature privacy solutions: Railgun, and now Privacy Pools has launched. And now even Tornado Cash is legal again. I mean, it never really died, right? It actually kept being used. But now it’s legal again, even in the US, a couple of years later.

Balaji Srinivasan 8:32
We have to get Roman Storm free, and pardon Alexey, right? Both: freedom and a pardon for Roman, freedom and a pardon for Alexey. Go ahead. Yeah, yeah.

Vitalik Buterin 8:41
So a lot of improvements there, right? Also, even on the non-financial side: if you think about things like Farcaster, I’m honestly impressed by their staying power. I think the default assumption any of us would have had a couple of years ago is: this is cool, this is worth supporting because it’s decentralized, but it’s so hard for these things to actually keep users going past the honeymoon period where they’re excited. I’m super impressed.

Balaji Srinivasan 9:09
I love Farcaster so much, and we’re going to do so much with Farcaster. It is basically the open social platform that we need. And it’s like a relay race: I want to take the baton and go further with it. But go ahead; I could say much more about that.

Vitalik Buterin 9:23
Yeah, it’s just been running for years. It’s existed for years, it has multiple clients, it has a pretty robust, pretty thriving ecosystem; a lot of really great stuff is happening in Farcaster-land. And a lot of people are increasingly realizing the kind of value it has: even aside from decentralization, just the value of a network where you can talk to interesting people and do interesting things, and where the people are sane. That’s also a pretty big deal. So both on the infrastructure side and on the application side, I think we’ve been seeing this really rapid rise in maturity over the last couple of years, and it’s actually really easy to underrate. It’s easy to underrate because a rise in maturity often feels like nothing happening at all; but the thing that is happening is, of course, stuff not breaking.

Balaji Srinivasan 10:23
And the reason that’s so important is: once something gets to, like, 100% reliability and it’s no longer interesting, then you can actually use it for something, because you only have so many risk tokens. If you think of your startup, or what have you, as a stack of Jenga blocks, all of the blocks at the bottom have to be 100% solid, like Python, Django: it’ll just work, it’s boring. And then you can have, like, one risky thing on the top; but if the whole thing is risky, it won’t work. So the fact that these pieces have gotten to that level of "okay, yeah, it’ll work, let me move on to the next thing" is what allows you to innovate. Go ahead.

Vitalik Buterin 10:56
So, okay: we’ve talked about the tech, we’ve talked about the applications. Prediction markets are another example; they’ve basically gone from theory to being reality. And the thing that I love about prediction markets is...

Balaji Srinivasan 11:09
Polymarket was, like, the number one app in the App Store.

Vitalik Buterin 11:13
Yeah, it gets mainstream support and attention, even from the types of people who normally think crypto is a scam. The types of US political intellectuals who normally just dismiss crypto and think of it as failed: they are the same people retweeting Polymarket screenshots, right? And so that’s probably been the first breakout other than kind of classical DeFi that we’ve seen in a long time.

Balaji Srinivasan 11:40
Another one that’s interesting is OpenRouter, right? Have you guys seen OpenRouter? It lets you use different LLMs, and it just has MetaMask login as one of the options, because it lets you pay for everything with it. That’s crypto as a tool, as opposed to crypto hitting you in the face all the time, right?

Vitalik Buterin 11:56
Yeah, it provides actual useful value, right? You want to be able to talk to LLMs without them, you know, remembering everything about you.

Balaji Srinivasan 12:03
So on that, connecting that to Ethereum: one thing that’s very, very rough, and I’d call this label extremely approximate, is that you’re kind of roughly the center-left of crypto, I’m roughly the center-right of crypto, and we’re both, like, centrists, okay? And so one way of thinking about this: you talked about the technology of Ethereum, but think about it from the community standpoint. Have any of you guys heard of Ezra Klein and Derek Thompson’s abundance agenda thing? Okay. So the abundance agenda is based on the concept of Bloomberg Democrats, technocratic people, as opposed to the sort of riots and fires and so on that are happening now. And I think the center-left abundance-agenda kind of people should look at Ethereum, because Ethereum can make a play for being the technocratic, Bloomberg, center-left of the, let’s say, post-Western or internet world.

There are a lot of these centrist, center-left people, and if they were actually in control of the left, then we wouldn’t have a problem. But they’re not, right? And they’re putting up these things, and they’re trying to take control of the US government, or they think they are. They’re not going to be able to do it; in fact, it’s going to get crazier. It’s going to go Luigi-left, not abundance agenda, okay? And because that’s happening, in my view, they have to start thinking differently, and they should start thinking about the other piece, which is startup societies. Ethereum is also into startup societies; obviously you have Zuzalu and all its sons and daughters and grandchildren, the Zu-village in Thailand, the one in Georgia, this Zu and that Zu, right? All of those are places where you can experiment with different forms of governance, and those are all abundance-agenda-sympathetic people, so they could actually go and do that kind of thing there. So I think the technocratic center-left, the Noah Smiths, Ezra Kleins, David Shors, Derek Thompsons, should really lean into Ethereum. And I think that’s actually a social or community aspect, because that’s not crypto as a scam; it’s actually the opposite. It’s crypto as community, crypto as rule of law, crypto as equality of treatment, where everybody is a peer on the internet. Go ahead.

Vitalik Buterin 14:11
I think the important thing is that crypto has been a kind of symbolic talisman in a lot of those ways, and other ways, for a long time. But there is a really big difference between a symbolic talisman and something that can actually do things for people that they directly need. And to me, what’s interesting is that Ethereum actually is crossing that chasm from being the first into being the second. And it’s also not just Ethereum; this is part of a bigger ecosystem, and it goes beyond cryptocurrency, beyond blockchains, even beyond cryptography. One of the topics I’ve been talking about quite a bit recently is d/acc, decentralized defensive acceleration, and trying to create this kind of bigger tent. Yeah.

Balaji Srinivasan 15:04
So on that topic: all right, we’re now about two and a half years after ChatGPT came out. I want to argue about this, maybe. So are you still an AI doomer, to some percentage? Or, I don’t know, what would you call yourself? You had some non-zero percentage of doom, right?

Vitalik Buterin 15:19
Yeah, yeah. I mean, honestly, my doom percentage has probably gotten higher. I think the basic reason is that progress is happening faster than we expected, and at the same time global politics is worse than we expected. To me, the biggest unrealistic thing in kind of classical doomerism is probably not so much in the problem, but in the solution. One of the solutions people talk about is blowing up data centers, and ironically, that’s actually the less naive one. The naive one is the nice-guy, international-treaties approach, because the classical solution is: let’s get the world to come together and agree not to do all of this stuff until we know how to do everything safely. And at the same time, we’re literally talking about a world where countries are invading each other, threatening to invade each other, blowing up tariffs to infinity, and all kinds of different things, right?

Balaji Srinivasan 16:30
And so I want to argue about this, because in my view, killer AI is already here: it’s called drones, right? And AI alignment cannot align, quote, an AI to everybody; it has to align it to its controller, and that could be one tribe or another tribe. Each tribe will have its own drones. So AI alignment is not synonymous with AI safety. And the fact that killer AI equals drones, killer AI equals robots, means that I’m not actually concerned about image generators and text summarizers. Those aren’t going to kill us; the robots would, right? Go ahead.

Vitalik Buterin 17:06
Yeah, I’m not concerned about the image generators and summarizers either, right? I even have the receipts for this, where I publicly said that I expected the whole election-deepfake thing to be a total nothingburger, and it has been. To me, the flip from the danger being only humans using AI to the danger being potentially AI-initiated comes when AI crosses that roughly human-level autonomy and generality threshold.

Balaji Srinivasan 17:37
Okay, so that autonomy is a big thing I want to poke on, right? I think the biggest surprise, and the biggest argument among my tech friends, maybe yours too, over the last two and a half years, especially around the time ChatGPT came out, is: will prompting endure? And the fact that prompting has endured over the last two and a half years, and shows no signs of going away unless there’s some enormous technological breakthrough, means to me that prompting is just higher-order programming. It’s programming in English, the way you have programming in Python or programming in assembly. So prompting doesn’t go away. You don’t actually have truly autonomous intelligence; you have amplified intelligence. It’s not agentic intelligence. Go ahead.

Vitalik Buterin 18:15
So given that assumption, I agree. But there is this interesting chart that shows the time duration of tasks that AI can complete autonomously, and it has a doubling time of roughly every seven months. And historically, naively assuming that exponential curves will continue has been a much better AI prediction method than pretty much anything else. And so if you naively follow the curve, then you basically get to AI completing tasks on the length of a human lifetime by about 2035.
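
As a hedged illustration of the extrapolation being described: the roughly seven-month doubling time is from the conversation, while the starting task horizon (about one hour in mid-2025) is an assumed figure, so the sketch below only shows how the naive curve reaches lifetime-scale task lengths around the mid-2030s.

```python
# Naive exponential extrapolation of the "task horizon" curve described above.
# Assumption (not from the episode): a ~1 hour autonomous-task horizon in mid-2025.

START_YEAR = 2025.5
START_HORIZON_HOURS = 1.0
DOUBLING_MONTHS = 7.0        # doubling time quoted in the conversation

def horizon_hours(year: float) -> float:
    months_elapsed = (year - START_YEAR) * 12
    return START_HORIZON_HOURS * 2 ** (months_elapsed / DOUBLING_MONTHS)

for year in (2027, 2030, 2035):
    h = horizon_hours(year)
    print(f"{year}: ~{h:,.0f} hours (~{h / 2000:.1f} working years at 2,000 h/year)")
```

Under these assumptions the horizon is a few hours in 2027, a few hundred hours in 2030, and tens of thousands of hours (roughly a working lifetime) by 2035; shifting the assumed starting point shifts the date accordingly.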

Balaji Srinivasan 18:45
Maybe. It’s possible, but I don’t know; Transformer coherence breaks down, and there are certain kinds of things where training is starting to top out: you’re hitting the limits of the available data out there, and they need to generate more of it. So I understand your argument. My counter-argument is that I think genuine algorithmic breakthroughs, conceptual breakthroughs, are required. I can’t say that’s impossible, but I would say that right now prompting is much more of a constraint than people are giving it credit for. The other thing, and this is a counter-argument to some of the stuff that Eliezer talks about, is that I think he really underrates chaos, turbulence, fundamentally mathematical or physical systems where predictability is genuinely constrained mathematically: you cannot predict beyond a certain window given finite-precision arithmetic. As a thought experiment, have an AI try to predict the motion of a fluid under turbulence. It can’t do that without inventing new math, or maybe doing things we don’t think are even possible. That shows there are limits on what AI can do as a purely computational, non-empirical thing. I don’t know; I’m just skeptical of the idea of one prompt and it just runs for your whole life. Go ahead.

Vitalik Buterin 19:56
Yeah. So I do think that there have been walls that have broken before, right? The big one that really updated me towards more concern is the whole DeepSeek R1 chain-of-thought thing. And the reason that updated me toward concern is that before it, we were in a regime where it felt like the thing AI is fundamentally doing is just copying and kind of interpolating off of existing training data. And if you just do that, then the logical outcome is that AI can do lots of things at roughly the level of the smartest humans, but then it tops out there, and you can’t really go further.

Balaji Srinivasan 20:30
Right, and the chain of thought genuinely shows deliberation, like a smart person thinking about it.

Vitalik Buterin 20:35
That’s right, yeah. Basically the way it works is that you start off with a model, and then you ask it to solve problems where you can objectively determine whether the answer is right or wrong. You have it run, say, 10,000 chains of thought and give an answer, then you filter for the ones where it got it right, and then you train on that again, and you repeat the loop. And there, for me, for the first time, we actually saw a training loop that could plausibly get us all the way to superhuman, where there wasn’t something that it’s obviously constrained by.
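
The loop being described here is essentially rejection sampling against a verifier, sometimes called reinforcement learning with verifiable rewards. A minimal sketch of that loop follows; the `model`, `sample_chain_of_thought`, `check_answer`, and `finetune` interfaces are hypothetical placeholders, not any particular lab's API.

```python
# Minimal sketch of the "sample, verify, filter, retrain" loop described above.
# `model.sample_chain_of_thought`, `problem.check_answer`, and `model.finetune`
# are hypothetical placeholder interfaces, not a real library.

def self_improvement_loop(model, problems, rounds=5, samples_per_problem=10_000):
    for _ in range(rounds):
        winning_traces = []
        for problem in problems:
            for _ in range(samples_per_problem):
                # The model writes out a chain of thought plus a final answer.
                trace, answer = model.sample_chain_of_thought(problem)
                # Keep only traces whose answer is objectively verifiable as correct
                # (math with known solutions, code judged against tests, etc.).
                if problem.check_answer(answer):
                    winning_traces.append((problem, trace, answer))
        # Train on the successful reasoning traces, then repeat the loop.
        model = model.finetune(winning_traces)
    return model
```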

Balaji Srinivasan 21:04
Interesting. Yeah, because it’s not just the probabilistic part; it’s also generating multiple plans and then selecting among them. I don’t know. Okay, here’s the third piece, the fourth piece, here’s how you would do this. This is the "how do you not build the Torment Nexus," or how would you build the Torment Nexus, thing or whatever. But let’s say one were to just go about doing that. I think embodiment is obviously another big piece, and I think there are things purely digital AI can’t do. It’s like a life form that only lives underwater: when it comes out into the air, even if it’s a giant blue whale, it’s going to run out of air.

Vitalik Buterin 21:38
I agree with this too, but the Tesla Optimuses are pretty friggin’ impressive.

Balaji Srinivasan 21:43
Well, yeah, the Unitrees and all these humanoids have gotten impressive, exactly, that’s right. So the embodiment will get solved. But then it has to collect lots of sensory data, and I think ultimately it has to have goals, and that’s actually the very hardest thing. Humans, markets, and politics are domains where train-and-test doesn’t work: if it worked, you’d be able to just make tons of money in the market, you’d be able to win elections. These are inherently time-varying domains that AI can’t crack right now, because the current approach really only works on time-invariant domains. Go ahead.

Vitalik Buterin 22:16
Yeah, well, I do know that AI has been showing behavior that’s more and more goal-like, right? And there is a market incentive to train for that: a market incentive to create a thing where you can basically tell it, hey, make a web app for me, and then it just goes off for six hours, you don’t have to think about it again, and then you get a web app at the end.

Balaji Srinivasan 22:39
Right. Well, the question is, though, there might be an MDL kind of argument here: minimum description length, or an input-complexity kind of thing. The number of bits: of course it’s got the web or whatever, it’s got it all cached there. So if you’re looking for an existing solution, a small input is enough to pull that out; if you want a Sudoku solver, you can just pull that out. But the more novel the thing you’re trying to make it do, the more input prompting you’re going to need, and it’s not going to be a one-shot that just gets made. That’s my argument. Go ahead.

Vitalik Buterin 23:15
Right. But I guess, speaking in general, my prediction on that is: for any particular category of thinking, at some point we are going to get to the point where AI has done it. And we know it can do it, because humans have done it, and humans are basically the result of just throwing some molecules in a soup and then doing about 10^40 computation steps of evolution, right?

Balaji Srinivasan 23:41
Right, that part. But I’m not making the argument that AI won’t be able to think through bioinformatics or physics or something like that. I’m making the argument that it’ll have to be prompted to do so, because prompts are so high-dimensional; they’re a direction vector in a very, very high-dimensional space. So how is the AI to search for that direction and figure out what to do? For humans, the driver is basically reproduction. So if you actually had this Skynet scenario where you had robots that could dig out the data centers, or whatever it is the AI needs to live, and they could actually mine the ore and set up the power generators, the full reproduction loop where they set up the factories that turn out more robots with more AI to script it, then you could actually have something where an AI could have a goal of reproducing, a Terminator-like scenario. That’s not physically impossible. But prior to that, we’d probably have a lot of kill switches in these robots. A lot of the old worry was: oh my god, it’s going to learn how to bust out of the box, and it’s going to run on every computer like Skynet, and it’s going to explode out of containment, right? Do you still think that’s possible? I can give you arguments for why I don’t think that’s possible in that way.

Vitalik Buterin 24:49
Yeah. I think the AI-getting-out-of-containment part is plausible. But I can say which parts of the doom pipeline I’m more skeptical of. Have you read the AI 2027 post?

Balaji Srinivasan 25:04
Yeah. I like some of the people there, but I just think it’s naive. For example, it assumes that it’s us versus China, and there are a lot of things that it gets wrong.

Vitalik Buterin 25:18
And for me, the biggest technical thing that felt very implausible is the world it assumes: within their universe, they literally say that by January 2029 aging and cancer are both solved problems, cure for aging, cure for cancer, both listed under emerging technologies. In that kind of world, it’s absolutely impossible that we will not have also solved pandemics, right? And yet in 2030, it’s assumed that the AI kills everyone with super-viruses. And I think, generally, those kinds of arguments are at their strongest when you step back and go into the abstract and say things like: well, you have no idea how Stockfish is going to beat you at chess, but you know that it is going to beat you. The challenge with that metaphor is that if you provide a handicap, if you take away a rook from Stockfish, then even today grandmasters can beat top-ranked Stockfish a lot of the time, and if you handicap the queen, then human grandmasters just wipe the floor with it. And so there is only so much that a particular threshold of intelligence can actually give you, right?

Balaji Srinivasan 26:33
So, to that point: one of the most interesting things I saw was with AlphaGo, where you could play adversarial input against AlphaGo and just beat it. Meaning, for people who don’t know this: with AI image classification, because of the way it works, you can feed it an image that looks exactly like a dog but has a certain calculated layer of static on top of it that’s invisible to the human eye, and that makes the AI think it’s a cat. That’s called adversarial input. And to my knowledge there’s been no general way of defeating this; it’s something intrinsic to how deep learning works, or at least I haven’t seen a paper that solves it. And that also applies, for example, to games: if you give the AI an input that’s way outside its training data, you can win a game of Go by doing something totally crazy that the AI has never seen before. It’s funny, it’s almost like that Hollywood-movie intuition about AI, where you do something very human that it can’t expect and it goes beep, boop, and turns the wrong way.

And to your other point about the cure-cancer type stuff: I have this concept in the book, God, State, Network, of what is the most powerful force in the world, and these are people who have substituted AGI for God. What they do, I think, is really, really underrate empiricism. I know something about biotech, you know something about biotech: whatever theory you have, and you might be able to do a lot by taking all existing papers, indexing them, digesting them, coming up with theories on that basis, you can do a lot with biomedical text mining for sure, but then you have to test those theories. You have to actually have test tubes, and you have to see if it works; you won’t just be able to intuit it. You might have a theory, but you have to do the practice. Go ahead, give me your thoughts.
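
The "calculated layer of static" being described is what the literature calls an adversarial perturbation; the classic construction is the fast gradient sign method (FGSM). A minimal PyTorch sketch follows, assuming the caller supplies a trained classifier, a batched image tensor, and its true label.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Fast Gradient Sign Method: add a small, nearly invisible perturbation that
    pushes the classifier away from the correct label (e.g. a dog read as a cat)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Perturbations like this are typically imperceptible to people yet reliably flip the predicted class, and fully general defenses remain an open research problem, which is the point being made above.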

Vitalik Buterin 28:27
Yeah, and I expect that sort of thing to be not an issue at all in some domains, but a huge bottleneck in other domains.

Balaji Srinivasan 28:35
Yeah. Well, embodiment and experiment are massive bottlenecks on AI, huge ones. People underrate the physical world dramatically, and underrate experiment dramatically.

Vitalik Buterin 28:44
It depends, right? For embodiment, I mean, if you look at even the current bots, they already have a lot of advantages. Like, if right now we had to bet on who would win a karate match between a human and a Tesla Optimus, what would you say?

Balaji Srinivasan 29:03
Oh, I mean, with a gun, machines have beaten humans for a long time, right? But I don’t know; in an actual karate match, I’m not sure how agile they are. I’ve seen the videos of them. I think they’ll eventually get there.

Vitalik Buterin 29:16
Yeah, they’ll get there. Although I think realistically they’ll beat us in totally different domains first; the air is the big one.

Balaji Srinivasan 29:23
Right. Okay, so when I say embodiment, what I mean is: you have to manufacture the robots, you have to make them economically feasible, you have to transport them to the location. I’m not saying these are unsolvable problems, but the friction of the physical world is radical. Basically, for some of these scenarios to happen, you need to have as many humanoid robots as there are smartphones, like a billion of them, and that means you have to crack all kinds of business-model stuff. It’ll eventually get there, but when people say "oh, AI cures cancer by 20-something," I just don’t think their timelines are correct. And what do you have to have? You have to have a robot that can do all the experiments a human could do: they’re at a bench, they’re running the experiments, they have the reagents, they have the whole supply chain behind them. This is friction that just doesn’t move as fast as the internet, and I think they’re underrating that. Go ahead.

Vitalik Buterin 30:11
So I actually do think there’s enough of a risk that this stuff will happen very fast, fast enough for us to worry, right? I mean, I think the probability of more humanoid robots than humans by 2040 is very non-zero.

Balaji Srinivasan 30:26
I agree with that. It’s funny: over the last two and a half years, I feel like AI has been both overrated and underrated, and here’s why. Obviously it’s ridiculously important, and in that sense underrated; I think the long-term take is right. But in terms of how people are actually using AI in practice, they’re just being extremely lazy and using ChatGPT for essays. It’s amazing how many people will literally have ChatGPT write their tweets, and I’m like: you don’t think I can tell? There’s the rhetorical question mark, the "it’s not this, it’s that" kind of thing; there are certain things it does that give it away.

Actually, I want a Chrome plugin, maybe some of you guys can code this, called is_AI, that runs AI detection on every string on a site and then gives you a link. Because what happens is, if you tell somebody "that post is AI," sometimes they’ll say no it’s not, and sometimes it actually isn’t, and it reads as an accusation ("you’re dumb, that’s AI"), so they get offended. There’s a lot of secret AI out there. So if you run is_AI, you get a permalink, like an Etherscan link (not Etherscan, but a permalink like that), which tells you why that text was likely to be AI: "is AI, probability 73%." And you reply with that on X, and it’s like a "let me Google that for you," except it’s "let me AI-check that for you," AI or not-AI. So the way it’s being used, not all of it, but a lot of it, is like an Idiocracy version of it: the smarter you are, the better you use AI, and the dumber people are, or the less taste they have, the worse they use AI. And that just feels very different from any emergence path of AI that’s been in science fiction or literature or anything like that. Go ahead, give me your thoughts.

Vitalik Buterin 32:15
Yeah. And I think the emergence path of AI has violated expectations in all kinds of different ways, and it just keeps on doing that. If you think of the 1970s, they would not have expected AI to be able to solve arbitrarily complex arithmetic within ten years but then take another forty years to tell a cat from a dog, right? And even today, the line between what AI can do and what AI can’t do is very difficult to describe. One of my theories is actually that the last tasks humans will be able to do better than AI will be precisely the most illegible ones, because those are the ones that are the hardest to train for.

Balaji Srinivasan 33:05
And so, yeah. So actually, what is your list? Because I’ll give you my list.

Vitalik Buterin 33:09
I mean, it was interesting: I remember talking to someone who asked me, what is your benchmark for when you would say, yeah, this is AGI? And I said: when an AI is able to independently start a profitable company.

Balaji Srinivasan 33:20
Interesting. Yeah, everybody’s got one. I would argue we’ve already passed, quote, the threshold of AGI in some sense, for some domains: it’s better at math than many people, it’s better at a lot of things. Yeah, it’ll make mistakes, but people make mistakes in math too. You know, it’s funny: these two questions, what can’t AI do and what can’t China do, I think are the most important questions for anybody who’s doing a new startup. You build your business to assume AI or China will improve toward that, and then you ride with that, or at least are invulnerable to it, rather than the opposite. The thing that I’m seeing is that AI right now, and maybe for the indefinite future, doesn’t do polish. So the whole "AI takes the jobs" thing, here’s a reframe: AI lets you do any job at an okay level. You can be an okay artist, an okay video editor, an okay designer, or whatever. You can at least get your vision out onto paper; it’s like getting to a five out of ten. It might be a cheesy, generic version, but you can do any job. So in a sense AI means a crude, not crude, improving, form of digital autarky, so long as you have access to the internet. And China is physical autarky, because they’re the most physically independent country in the world; they can build everything themselves. So that’s how I think about it. But AI doesn’t get you to really good in a given area, because the threshold for really good keeps moving forward as AI improves. And to actually check the result: it might generate a math theorem, and you have to be a research mathematician to even understand the symbols and check it. Go ahead, give me your thoughts.

Vitalik Buterin 35:01
The mental model I use is: if human level is normalized at some level across a range of tasks, then AI looks like a curve that’s better in some domains and worse in other domains, and that curve just keeps shifting, often in very wobbly and unpredictable ways. And I think this has been going on for fifty years. From the perspective of the economy of 1800, by probably the year 2000 we basically already had ASI for probably over 80% of the economy: if you think of farming, what percent of that was automated even by the year 2000? If you think of manufacturing? And so what I think is happening, and the pattern that will continue for a while, is that both economic and social value continually refocus on the subset of tasks that AI hasn’t hyper-inflated to zero. At some point we’re going to enter a new regime where that set of human-dominant tasks actually shrinks to exactly zero, but we’re not there yet. The question is what happens when they eventually get to the point where they can think a hundred times faster than you and at the same time consume, like, 0.1 watts of energy.

Balaji Srinivasan 36:21
There’s still comparative advantage, right? Or rather, I guess that’s the question: does comparative advantage between AIs and humans exist? You can argue there isn’t any with dolphins, right? Dolphins are just pets; we just feed them food. There’s nothing you can really delegate to them; they just can’t figure it out. Go, go have fun. So it’s possible it gets to that level. But at least until the prompting issue is solved, right now it really appears to me that artificial intelligence is constrained by human intelligence: the better you are at prompting, the better the AI is, and conversely, you spell something wrong, it gives you a worse answer.

Vitalik Buterin 37:03
Right. Yeah, I fully agree that that’s the status quo today, and probably at least for the next couple of years.

Balaji Srinivasan 37:09
Yeah, okay, fine. All right, so let’s maybe talk about one more thing, and then let’s go to questions. Okay: startup societies, Zuzalu and so on and so forth. So after the Network State book came out, there’s an old diagram I had, and you kind of moved one more box over on that diagram. Go ahead, why don’t you talk about that?

Vitalik Buterin 37:26
Yeah, basically. So the box: I mean, this was originally the diagram from your book, which is basically how many people are there, and for how long a time do they come together. So it’s number of people, you know, two, a hundred, a thousand, a million, and then amount of time: one day, one week, a month, a year. And then you have Tinder over here, and eHarmony over here; you have universities over here, and countries over here. And the idea was: okay, we’ve had lots of things where people come together for a week, but there’s this big, discontinuous change that happens when, one, you allow the duration of time to get longer. Basically, the psychological shift, and I’m sure a lot of you can relate to this at this point, is that a week is a break from your life; two months is your life. If you stay in a place for two months, you actually fully readjust to that being the normal, and it actually becomes a day-to-day rhythm, as opposed to something where you’ve done a few days, there’s a few days left, and then you’re gone. And then on the other side is scale. There’s what you can do with ten people, which is a hacker house, and then there’s what you can do when you get to Dunbar’s number, which is 150; that’s the level at which you start to need multi-level structure in your society. And so two months at 150 is the level where you actually start to get a lot of the complexity of doing something more meaningful at a larger scale, but at the same time it’s still small enough that it’s practical to bring together and it’s manageable. Zuzalu was about 200 people for two months; I think it’s a good range within which to experiment and try to figure out what are the actual kinds of things we can do. And the general way that I see this is that it’s a good scale at which you can dogfood things, at which you can actually start to go from having some new idea about how civilization might work better to not just blabbing about it on the internet, which is what people did on internet forums back in 2009, but actually trying it out, so you get real-world information about where the successes and failures are.

Balaji Srinivasan 39:43
One of the things that’s interesting is that people in the community develop things for the community; the community was their beta testers, or what have you, for different kinds of things.

Vitalik Buterin 39:51
It’s a full-stack incubator, exactly. Basically, I think there’s a big difference between being a service provider and being a community. This is probably one of the things that the whole e-Estonia movement of ten years ago got wrong. They had very advanced e-government for their time: you could vote online, open bank accounts online, start companies online, all kinds of things, really advanced. But the difference between a product and a community is that a product is a hub-and-spoke model, n people who each have a one-to-one relationship with some kind of center, while a community is a model where you have n people who all have relationships with each other. And with e-Estonia, I think the problem was that there was not enough commonality between the different people who were e-residents for them to actually become a cohesive community.

Balaji Srinivasan 40:48
Right, yeah, exactly. Community is connectivity. That’s literally how it works: with n people, you have n choose 2 possible relationships, or on the order of n squared if they’re asymmetric, and then you divide how many relationships actually exist by the number of possible relationships. And they just had a hub-and-spoke thing where everybody was connected to Estonia, but not to each other.
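
To make the "community is connectivity" arithmetic explicit: with n people there are n(n-1)/2 possible symmetric relationships (n(n-1) if direction matters), and density is realized relationships divided by possible ones. A hub-and-spoke system like the e-Estonia example realizes only about n of them. A small sketch:

```python
def density(num_people: int, num_relationships: int, directed: bool = False) -> float:
    """Fraction of possible relationships that actually exist among num_people."""
    possible = num_people * (num_people - 1)
    if not directed:
        possible //= 2           # n choose 2 for symmetric relationships
    return num_relationships / possible

n = 100
print(density(n, n))                  # hub-and-spoke: everyone tied only to the center -> ~0.02
print(density(n, n * (n - 1) // 2))   # everyone knows everyone -> 1.0
```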

Vitalik Buterin 41:09
And so I think one of the unique things that community-as-incubator can do is that it actually gives you a community as a beta-test user. And there’s a big, big difference between having 100 beta testers and having a community of 100 beta testers. The difference, basically, is the difference between 100 people who have no pre-existing relationships and no need to interact with each other, versus people who do have a lot of those interactions every day. And so this is, to me, a very unique value proposition: you can actually start a thing, bootstrap and incubate it in a community, and actually make a huge amount of progress together.

Balaji Srinivasan 41:48
That’s right. And I do think the startup society as a concept, like what you’re doing, what we’re doing on the digital side with Network School, and the others we’re helping bootstrap, is the third type of thing: internet company, internet currency, internet community. I’ve said that before, but I think it’s the third type of thing that you can start. Go ahead.

Vitalik Buterin 42:09
Yeah. So one other thing that we started doing in Zuzalu, and that I think crypto has really been into for a long time, is biotech. And I think it’s interesting that crypto has had this kind of sister relationship with frontier biotech pretty much ever since it began; definitely far from just myself, a lot of early crypto people were also very big into longevity.

Balaji Srinivasan 42:35
That gets to my thesis on why. Oh, go ahead. Well, you know, Hal Finney was an Extropian, right? But I can actually give an exact parallel between crypto and longevity. With Bitcoin, initially it looks like the existing financial system in some ways, with bar charts and trading and stuff, but it rejects a fundamental premise. The existing financial system says that while hyperinflation is bad, deflation is also bad, so you should lose a little bit of your wealth every year; a little bit of inflation is good, that’s normal. And the medical system is similar: it says that while fast death is bad, trying to live forever, the equivalent of deflation, increasing your lifespan every year, is also weird and bad, so you should lose a little bit of your health every year via aging. The traditional financial system says lose a little bit of your wealth every year; the traditional medical system says lose a little bit of your health every year. And the entire crypto and longevity community says: what if we reject that? What if we attack it from first principles and say maybe we don’t have to go to zero; maybe we can get to infinity instead?

Vitalik Buterin 43:42
The nice thing about longevity is that I think it’s on this really nice trajectory toward being much more mainstream now. Bryan Johnson is, like, a mainstream cultural figure; non-technical friends of many of us know who he is, and there’s been a huge amount of progress. One space that I think is still at an earlier part of the curve is resistance to airborne diseases. We had a dry run of this during COVID, right? COVID is worse than existing diseases, but it’s also much less bad than it could have been. It was in this weird middle zone where it was not bad enough for us to properly take it seriously, but at the same time it actually is much worse.

Balaji Srinivasan 44:33
You know, there’s this concept on Twitter sometimes of the "secret third thing," have you heard that before? If you guys don’t know this meme, it’s like: are you a Democrat, or a Republican, or a secret third thing? And it presumes that you must be in one of those two categories, and that anything with more complexity than just zero or one is fake. But obviously, and I’ll get to my point, if you look at an image, an image has more than one pixel of information; if you only had zero or one, you couldn’t represent an image. Some things have necessary complexity, more than just zero or one; they can’t profitably be reduced to that.

And I think with COVID, for example, one way of sort of crashing the operating system of some people online is that, on the one hand, it’s this dangerous virus that escaped from a Chinese lab and it’s so bad, and on the other hand it was "just the flu, bro," it didn’t do anything, so why are you so mad, why did we do anything about it? And a lot of people hold both of those beliefs at the same time, even though they’re obviously incompatible: either it’s a fairly serious thing or it was a complete non-issue. And I think it actually was fairly serious, and it did come out of a Chinese lab. And it’s not "was," right, it is very serious; was-slash-is, exactly, that’s right. And it killed literally millions of people. But, and this is the weird thing: work your way back to January 2020. If you had said then that this is going to kill 10 million people globally and a million Americans, and that in a few years it’s not going to be considered that big a deal; well, that’s literally how people are thinking about it now, which is kind of crazy. "It’s not a big deal" and "it’s going to kill millions of people" seem like thesis and antithesis, and somehow we got a synthesis that’s totally crazy, where it literally killed millions of people and it’s also considered "dude, you’re just exaggerating." It’s kind of like "life goes on after World War One." Well, it does for those who didn’t die, but it was actually a pretty big deal at the time. Go ahead.

Vitalik Buterin 46:25
Right. I mean, one of the important things there is that COVID itself is still an ongoing thing. Actually, yeah, I got it quite recently; if it weren’t for that, this whole event would have been happening a week and a half earlier. It’s still happening, it’s still bad; the long-term symptoms are still pretty bad. And if you think about how our civilization’s media apparatus is handling it: from a media perspective, COVID basically ended on 2022 February 24. That just is how our media apparatus works; it can only handle one big story, think about one current thing at a time. And the reality, of course, is that there are multiple current things. COVID was not cured, there were new variants, the efficacy of the vaccines went down to about a quarter. And the scarier thing is what could come next. We basically have to get ahead of all of this stuff; we have to massively increase our civilization’s biodefense. Like, in the UK there was a wastewater surveillance program that was very good, and the smart technocrats on both sides will tell you it was good, but then sometime in 2021 or 2022 it was just canceled.

Balaji Srinivasan 47:42
And I mean, I think the problem is, unless the next pandemic literally photographs like Ebola or something like that, and God, hopefully it doesn’t, there are people who will just literally deny it. They’ve gotten to the point where they’ll deny the germ theory of disease. It’s not just that they don’t believe a particular vaccine works: they don’t believe any vaccines work, they don’t believe drugs work, they don’t believe biotech works, they don’t believe pharma works. It’s totally throwing the baby out with the bathwater. Go ahead.

Vitalik Buterin 48:11
Yeah. I mean, well... sure, okay. What are your solutions?

Balaji Srinivasan 48:16
I’ve got two solutions for that, by the way, and this is actually true for everything: the two heirs to the American empire are China and the internet. So, for example, when it comes to biotech: China actually flipped the US and the rest of the world in science papers, Nature papers, highly cited biotech work; they have a functioning replacement for American academia. And they’re not as political about it, or not political about it at all, so if you want to know whether a treatment works or not, a Chinese clinic will basically be able to tell you, without a crazy-left or crazy-right version of it. That’s one possible solution, and that’s within China. The other solution is the internet, where with DeSci you can have people who are neither kind of crazy pole. You had the whole thing in mid-2020 where public health people in the US destroyed their reputations by saying racism is the real pandemic and you need to go out and protest and so on and so forth in the middle of a pandemic; that completely vitiated all their guidance on lockdowns, or what have you, and made people completely distrust all US public health. And then you have the right saying that vaccines don’t even work. So both sides are just completely crazy about biotech, completely crazy about biology. So the answer is either China or it’s the internet. So you have DeSci: decentralized science, diagnostics, self-experimentation, Bryan Johnson-style, where you’re measuring your own body and taking responsibility for your own health. You’ve got the air quality metrics, you’ve got the personal sequencing, you’ve got your own dashboard. And then you’re working with a community of like-minded people to get to scale, where you can aggregate that data and start calculating: okay, are my numbers out of sync, are they in sync, or what have you. So those are the two approaches, I think, to rebuilding biotech; just assume the solution is outside America. Go ahead.

Vitalik Buterin 50:04
What's interesting to think about with some of this bio stuff is how a lot of the solutions are surprisingly low-tech, right? So, you know this lamp here — this is a UV lamp, 222-nanometer light. At that wavelength it's actually fine for your eyes and so on — it doesn't hurt you the way the sun's ultraviolet does — and what it does is kill viruses. So it just passively, significantly reduces the chance that people in the room are going to infect each other with any airborne pathogen. That's one type of technology. Another type of technology is air filtering. A third is just better ventilation. There are really basic things that can be done, and a lot of the time the bottleneck is just people getting off their butts and doing things, right? And to me, this is also part of the value of some of these startup societies: you can actually take things that the smart people know are obviously correct, and just go and do them. Because a lot of the time, for mainstream adoption of these kinds of things, the bottleneck is not the technology. The bottleneck is, from a politician's point of view: is this something where they have to go do something untested, or is it a copy-paste job?

Balaji Srinivasan 51:24
So this is very, very important, because startup societies are deployment points for exactly the obviously cool, high-tech stuff that we can now deploy at 100- or 1,000-unit scale. It's something where people have opted in to pushing the frontier of tech. And of course, not everybody has to opt into a DNA test or an air monitor, but the kind of people here are going to be more interested in that kind of stuff, because it's the frontier of health, the frontier of tech. So go ahead — why don't we take some questions, actually? All right, great. So: why didn't you build Ethereum anonymously, like Satoshi? If you could go back, would you do that?

Vitalik Buterin 52:06
I think it would have been too hard. The problem is that by the time Ethereum started, I had already been a Bitcoin Magazine author for two years. One, I had just put way too many words on the internet — it would have been findable. And two, I think Ethereum's beginning — my ability to get any kind of critical mass — definitely bootstrapped a lot off of people already knowing and trusting me from Bitcoin Magazine. It would have just been a way harder job.

Balaji Srinivasan 52:30
A non-obvious point is that pseudonymity is itself a form of decentralization. And the reason is that your government name is in a government database — it's a centralized name, and it's linked to all the information about you. Punch it into a search engine and it pulls up everything on John Smith. Whereas Satoshi Nakamoto is a decentralized name — it's outside of that, right? So in a sense, pseudonymity is itself a form of decentralization: they couldn't hit either the man or the ball of the network. But once Bitcoin was out there, it was like a heat shield for you. It's like NASCAR or the Indy 500 — you can kind of slipstream behind a car, right? So you didn't have to use exactly the same tactics. Similarly with the network state, personal distribution is important. So the way that we're doing decentralization is lots of startup societies — it's not just Vitalikville or Balajiberg or whatever. I want many of you guys to set up your own startup societies; we'll fund those. There are many different visions of the good, right? And so that's decentralization in another form, rather than pseudonymity. Go ahead.

Vitalik Buterin 53:31
Well, what's interesting to me about the whole zero-knowledge-proof thing in particular is that it opens up the entire design space, right? If you remember, 15 years ago the whole narrative was basically the dichotomy between trusted, institutional KYC versus "on the internet, nobody knows you're a dog." ZK reconciles that. The challenge now, of course, is that if you don't prove you're not a dog, then you're also not proving that you're not a bot, and so there's no reason for people to listen to you. But what zero-knowledge proofs give you is the ability to have privacy, but at the same time have reputation. And what that also means is that you actually have an incentive to say and do things that are constructive, because they increase your reputation — reputation that you have and can use in a future context — despite the fact that all of this reputation is being mediated through ZK, so every operation is unlinked from every other operation.
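To make the "reputation without linkability" idea concrete, here is a minimal, hypothetical sketch of the data flow in a nullifier-style scheme, the pattern used by systems like Semaphore. There is no actual zero-knowledge proof here — the hashes below are stand-ins for what a SNARK would attest to — and all names and parameters are illustrative.

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """Hash helper standing in for the commitment/nullifier hashes."""
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

# 1. A user registers once: they publish a commitment to a secret identity key.
#    The secret itself never leaves their device.
identity_secret = secrets.token_bytes(32)
identity_commitment = h(b"commit", identity_secret)
registered_set = {identity_commitment}          # public registry (e.g. on-chain)

# 2. For each action (a post, a vote, an attestation), the user derives a
#    one-time nullifier from the secret plus the action's context. Observers
#    see only the nullifier, which looks random and cannot be linked to the
#    commitment or to other nullifiers without knowing the secret.
def act(action_context: bytes) -> bytes:
    nullifier = h(b"nullify", identity_secret, action_context)
    # In a real system a zk-SNARK would accompany this nullifier, proving:
    #   "this nullifier was derived from a secret whose commitment is in
    #    registered_set" -- without revealing which commitment.
    return nullifier

n1 = act(b"forum-post:42")
n2 = act(b"governance-vote:7")
assert n1 != n2                     # actions are unlinkable to each other
assert n1 not in registered_set     # and unlinkable to the registration

# 3. Reputation: the same secret can later prove (again, via a SNARK in a
#    real system) that some set of well-regarded actions share one author,
#    so reputation accrues privately across otherwise unlinked operations.
```

The design point worth noticing is that the linkage lives only in the proofs: nothing posted publicly ties two actions together, yet the secret holder can selectively prove shared authorship whenever the accumulated reputation is worth claiming.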

Balaji Srinivasan 54:37
I think zero knowledge is definitely underrated. If you're early in your career, there are some things — ZK, biotech, things like that — that haven't had their massive breakout moment yet, and ZK is definitely one. Okay, here's another one — you'll like this one. What are some 1,000-to-one or 10,000-to-one return public goods that we could or should have but don't?

Vitalik Buterin 55:02
I think one of the things is just more standard-form packages for things. So if you want to start a startup society, here's open-source software that you can start with, here's the one-two-three-four-five guide of what kinds of things you should do if you want to do something digital. Yeah — thought-through templates, exactly. Thought-through templates for pretty much everything is a big one. And also, on the education side, the biggest under-provided thing that I've identified — and a lot of my writing has been about trying to compensate for it since the beginning — you've probably seen a lot of this with math, right? On the one hand, you have papers that are technically accurate but so insanely complex, and everything's in freaking Greek, and nobody can understand them except people who have had in-person conversations with the original authors. And on the other side, you have pop-sci stuff that doesn't even try to be accurate, and it tells you things like, oh, Schrödinger's cat means the cat can be dead and alive at the same time — a sentence that gives you absolutely no intuition whatsoever about what the point of quantum is, but it makes you feel smart, because you know the password, right? And there's an entire space in the middle that actually tries to give you the intuition and helps you understand things, in a way that's actually understandable to people. In general, if you're creating educational resources, to me that's a huge, unclaimed space of opportunities to provide value. And that repeats itself for pretty much everything that's emerging.

Balaji Srinivasan 57:02
Yeah, and I think we want to do a lot of that with the Network School. So here's a good one, and maybe you can give an answer — this may be something we disagree on, maybe not. Approximately 99% of stablecoin market cap is backed by dollars, and this reinforces USD; seven of the 10 largest BTC owners are American. How does the US not win if crypto wins? You can give an answer and then I'll give mine.

Vitalik Buterin 57:23
First of all, all of these things are chaotic systems with a huge number of variables going in all kinds of different directions, right? Five years ago China was the winner of mining, and then the government itself just banned mining — hard enough that, while I think China still does about 10% of Bitcoin's hash power, it's basically been knocked out of the number one position. So all kinds of things could happen. There are all kinds of things the US could do with respect to crypto, and all kinds of things that even the individual crypto holders in the US could end up doing. So that's one. And two, macroeconomics can just go in all kinds of different directions — we're basically breaking an 80-year era of stability in about three different ways at the same time.

Balaji Srinivasan 58:20
What do you look at as those three different ways? There's the main one — the tech of AI and so on?

Vitalik Buterin 58:24
Yeah. I mean, basically domestic politics, international politics, and AI. And even AI itself — you can expand that to thinking in terms of technology exiting the great stagnation in general, right? We've been seeing bio exiting the great stagnation, we've been seeing virtual reality exiting the great stagnation, and a lot of these things really feed into each other. The internet is also a big one — the internet has changed a lot of variables in terms of what kinds of things in our world carry an economic premium and what kinds don't. So what dominates can just change massively over the next five years, in directions that pretty much nobody has been predicting — that's one part of my answer. And the other part of my answer is that once you're on crypto rails, your ability to go from one thing to another thing becomes 100 times easier.

Balaji Srinivasan 59:23
Okay, great. So that leads directly to my answer to this question. Actually, I'm going to start doing video tweets — meaning 59-second or 29-second vertical videos — and one of them will be on the defi matrix. The defi matrix is: imagine the table of every asset against every asset. You have fiat currencies, you have cryptocurrencies, you have NFTs, you have ENS domain names, and so on, and all of them now, due to Uniswap and all of these on-chain market makers and exchanges, have a price. You can get out of this asset into that asset, and you can do it right now, and you can do it for the whole thing — you can market sell. You may not want to do that, but you can. So the moment you loaded the dollar on chain, you made it swappable in a million different venues, any time of day or night, by anybody in any country, at some price. It's like putting something onto an ice-skating rink: it can move in any direction with very low friction, very quickly, at the speed of crypto, which is the speed of the internet. It's not banking hours, which are nine to five; it's not capital controls; it's not within the control of the US banking system in the same way. It's at the speed of the internet. So that alone is a huge change — sometimes something is forklifted into a new domain, and for a while everybody focuses on the forklift itself, but it's now in a new domain and the physics are totally different. The internet is totally different. And because the defi matrix means you can swap in and out of something, what's another term people use for cash? They call it liquidity, right? Getting liquid — "I had an IPO" — that's how a tech bro says he's making money. So liquidity can now actually happen without actual cash, because you have an asset that you can swap into whatever asset you want with a click. There'll be some price for that, but you can keep it all in Bitcoin, you can keep it in something else. So that's number one.
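To make the "table of every asset against every asset" concrete, here is a toy sketch of how such a price matrix falls out of constant-product pools of the kind Uniswap popularized. Everything in it is hypothetical — made-up reserves, no fees or slippage, and only single-hop routing — so it shows the shape of the idea, not how any real aggregator quotes prices.

```python
# Toy "defi matrix": implied spot prices between assets, derived from a few
# hypothetical constant-product (x * y = k) pools. Illustrative only.
pools = {
    ("ETH", "USDC"): (1_000.0, 2_500_000.0),   # reserves: 1,000 ETH vs 2.5M USDC
    ("BTC", "USDC"): (50.0, 3_000_000.0),      # 50 BTC vs 3M USDC
    ("ETH", "DAI"):  (400.0, 1_000_000.0),     # 400 ETH vs 1M DAI
}

def direct_price(base, quote):
    """Marginal price of 1 unit of `base` in `quote` from a pool holding that pair."""
    for (a, b), (ra, rb) in pools.items():
        if (a, b) == (base, quote):
            return rb / ra
        if (b, a) == (base, quote):
            return ra / rb
    return None

def spot_price(base, quote):
    """Direct pool price, or a single hop through a shared intermediate asset."""
    price = direct_price(base, quote)
    if price is not None:
        return price
    for mid in {x for pair in pools for x in pair} - {base, quote}:
        leg1, leg2 = direct_price(base, mid), direct_price(mid, quote)
        if leg1 is not None and leg2 is not None:
            return leg1 * leg2
    return None   # real aggregators would search multi-hop routes here

# Print the (partial) matrix: every asset quoted in every other asset.
assets = sorted({x for pair in pools for x in pair})
for row in assets:
    quotes = {col: round(spot_price(row, col), 6)
              for col in assets
              if col != row and spot_price(row, col) is not None}
    print(row, quotes)
```

The point is only that once assets share on-chain pools, every cell of the matrix gets a quote automatically — moving between any two assets is a lookup or a route, not a new business relationship.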
By the way, here's a very important analogy to the defi matrix. In the early 2000s there were all these newspapers, and they forklifted themselves online — there was just the online version of The New York Times, the Miami Herald, the San Francisco Chronicle. They forklifted themselves, so they're like, okay, I'm online, what's the big deal, I'm online, I'm offline. But once they were all online, Google News could make a table of all of them. And suddenly, when you loaded Google News, you saw they were all reprinting the same story, the same Reuters clip — they didn't have unique stories. There were like 73 outlets reprinting the same Reuters clip. And suddenly everybody realized: wait a second, these guys just have a thin layer of local news on top of basically wire services. And so that began the commoditization of local news, and local news basically died, because you could just get your news on the internet. The forklifting revealed that these were commodities, and their previous moats — which they didn't even realize were moats — were geography, because they had trucks that delivered the newspapers, they had ink. That whole saying of "never argue with a man who buys ink by the barrel" — it's now "you can always defeat a man who still buys ink by the barrel." They can't argue with you; you don't have to buy ink by the barrel. So the point is that by forklifting themselves online, they lost their moat, they were put into total global competition with everybody else, all the local newspapers died, only the strong survived, and the craziest, most international news dominated. That is what's going to happen to every fiat currency. Every fiat currency is now in a war for its survival, comparable to local news, because their only advantage was their geography. They're basically all doing the same thing as everything else; it's only the geography that provides any advantage for a fiat currency. So, just as happened with local news, which couldn't compete on geography and instead competed on ideology — all of these online niches and verticals that are global, competing on ideology as opposed to location — that's what's going to happen with assets, and it's already happening. They're going from competing on geography, this fiat versus that fiat, to competing on ideology and on features — zero knowledge, or smart contracts, or the unprintability of Bitcoin — and you're having tribes form around them. What hasn't happened yet is for many fiats to die. Many fiats are going to die — this is my view. When many fiats die, lots of things break, because a fiat currency is a tool for social control and so on, and Bitcoin will moon more than you can imagine. So be in a safe country at that time. Then, to the other point: that's the story with the stablecoin market cap — it doesn't reinforce USD exactly, it just makes it very liquid, and it can swap into anything else, and you can easily have some other asset that you swap into. And in terms of seven of the 10 largest holders being American: the issue is that the United States is a disunited states, and everybody's just taking a chair and using it against the next guy. The red team says the blue team did it — which they did — and the blue team says the red guys just did it — which they did. Even Delaware is being destroyed. It's like a civil war where every US institution is getting wrecked. And then the internet survives, because the internet is what stays up even in a time of nuclear war. So the only reboot is from the internet. That's not the US winning; that is the internet surviving after all these institutions get melted down, which is a very different thing. Fundamentally, the optical illusion you want to wrestle with is: is the internet American? It's like asking, is America British? On the one hand, of course America is British — it was born out of Britain, speaks English, the early colonists, the names of towns, George Washington, all that kind of stuff. On the other hand, it's clearly something that's become much more than Britain; it's exceeded even its worthy progenitor. Okay, so I want to do a few more — maybe two more — and then we'll wrap. All right, this is a good one: how can we make Ethereum great again?
Or maybe you have a different slogan — massive tariffs on all Uniswap trades out of Ethereum, right? Okay, fine, though: how can we make Ethereum great again? What should Ethereum's North Star goal be — number of DAOs, price, total holders, etc.?

Vitalik Buterin 1:05:44
Yeah. I mean, I think Ethereum in general needs to have twin goals. One is getting used, and getting used at a large scale; the other is, of course, continuing to be something that's worth being used — in the sense of being meaningfully more secure and more decentralized than the kinds of systems that people would be using otherwise. And I think if we sacrifice either of those things, there's no point, right? Because if Ethereum can't scale and can't provide a good enough experience, then whatever properties it has don't reach anyone. On the other hand, if Ethereum gets scaling and UX at the cost of basically becoming a copy of tradfi, then there's also either no point, or at least much less of a point than there could have been. And so the question is —

Balaji Srinivasan 1:06:49
Can I argue that point? Okay, my argument is: sometimes you can have something that's cyclical in the x-y plane, but the progress is on the z-axis. There was Microsoft, then there was Google, then there was Facebook — even though the new guys look the same as the old guys, they're not exactly the same. There's actually technical progress — React is better than Angular, for example, right?

Vitalik Buterin 1:07:12
Go ahead, yeah. Okay, that is fair. But I think there is a strong technical roadmap that gets us both of those things, right? The technical roadmap is basically a bunch of strategies to scale both L1 and L2 execution. There's stuff like delayed block execution, block-level access lists, distributed history storage — there was a pretty good Bankless podcast with Ethereum researchers about some of these things about a week ago. That's on the L1 side. Then on the L2 side, of course, increasing the number of blobs and using PeerDAS to be able to handle that, and then finally using ZK-SNARKs — realistically, STARKs — inside the L1 for scaling. So it's about actually applying all of that technology. And I think there are things that we have to rethink on a fundamental level, and update some of the ideals from the early Satoshi era. Probably the biggest thing is that what Satoshi did not understand that well is the asymmetry between creation and verification. It's ironic, because with proof of work he actually totally got it right. But if you think about every other aspect of Bitcoin: a block takes as much computation to create — not the proof-of-work part, but the part of choosing the contents — as it does for someone else to verify. The amount of computation, bandwidth, and storage it takes to run a verifier is exactly the same as what it takes to choose the contents of the block in the first place. And that was an inherent limitation of the pre-ZK-SNARK era, the pre-stateless-client era, the pre-access-list era — basically the era before all of these different scaling concepts came up. The way all of these scaling concepts work is that they put more load on one node — they ask that one node to do more work in order to create hints that then make the work of verifying much, much easier for everyone else. Take advantage of that asymmetry. That's what stateless clients are about, it's what ZK-SNARKs are about, it's what data availability sampling is about — basically all of the fancy buzzword technologies are ultimately some version of this. And the challenge is that the early Bitcoin mentality really valued this concept of running a node at home. The good aspect of that is that what you want is a system where, if the rules start getting broken, the blocks that break the rules just automatically get rejected by everyone immediately. Swift does not give you that property; credit cards do not give you that property. If I have a US dollar balance, I have no way to verify that those dollars were created by an algorithm that obeys any particular limit on how many dollars are in existence. What you do not want is a system where five people can get together, change the rules, and everyone else accepts them by default. You want a system where changing the rules is actually hard, right?
This is like constitutionalism, but a level above that, because we're going into smart contracts and doing it at the level of tech, right? And to do that, the Bitcoin way was to say: well, you'll have everyone re-executing everything at home. But with SNARKs you have this asymmetry — you have the difference between nodes that are responsible for building the chain and nodes that are responsible for verifying the chain, and then you have a whole spectrum in the middle. And the question is: if you allow the cost of building parts of the chain to be a little bit higher, then you can massively increase the scale — you can increase the number of users who actually get to benefit from your chain's guarantees directly. This is one of the mistakes El Salvador made, right? The Lightning Network implementation was actually based on custodial wallets. So the question is, how do you avoid that? And basically the solution is this kind of hub-like model, where you have a smaller number of more powerful nodes and you're concentrating the proving work, while at the same time you're massively distributing the verifying work. If we can actually do this, then we can create something that is maximum decentralization and maximum scale at the same time. And the technology exists to actually do that, right?
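As a toy illustration of that creation-versus-verification asymmetry — the same shape of idea behind stateless clients, SNARKs, and data availability sampling, though hugely simplified here — the sketch below has one party do linear work to build a Merkle tree over many items, while anyone else can check a single item with only a logarithmic number of hashes. The data and helper names are illustrative, not any particular client's implementation.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Builder's job: hash every leaf and every internal node (O(n) work)."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                      # duplicate last node if the level is odd
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def make_proof(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Builder also produces a short 'hint': the sibling hashes along the path."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])         # sibling at this level
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    """Verifier's job: O(log n) hashes, without ever seeing the other leaves."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# The builder commits to 1,024 items; a verifier checks one with ~10 hashes.
items = [f"tx-{i}".encode() for i in range(1024)]
levels = build_tree(items)
root = levels[-1][0]
proof = make_proof(levels, index=137)
assert verify(root, items[137], 137, proof)
print(f"tree built over {len(items)} items, proof length = {len(proof)} hashes")
```

The builder's cost grows with the number of items, while the verifier's cost grows only with the tree depth — about ten hashes for a thousand items — which is the asymmetry that lets a few powerful builders serve many cheap verifiers.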

Balaji Srinivasan 1:12:03
It's funny, because that is the kind of thing that would only have come up 10 years after Bitcoin, after people had pushed on that part of the problem. One thing I do think about Bitcoin scalability — it's not Lightning; I'm skeptical about that in some ways — is that it actually already exists. It's either (a) an infrequent on-chain transaction, or (b) an off-chain transaction to something like Coinbase or Binance or El Salvador or whatever, which just runs essentially off-chain transactions between its people. So you have infrequent digital-gold movements between these hubs, and then you have all these spokes on the hubs. And that's basically how Bitcoin scales.

Vitalik Buterin 1:12:38
My critique of that kind of stuff is basically that if the open, permissionless, censorship-resistant aspects of the ecosystem are not aspects that users experience directly, and we just say those things belong to institutions, then that's something that's inherently more vulnerable to capture.

Balaji Srinivasan 1:12:56
Okay, last question, then let's wrap. Some say Silicon Valley and SF are losing their supreme position as the center for tech founders. Do you agree, yes or no? And if so, where do you think is the best place to start a global startup?

Vitalik Buterin 1:13:09
I think it depends on your industry, right? If you're in AI, then spending a lot of time there is very helpful. I mean, my personal philosophy toward that whole region is that there's a lot of value you can get by being part of it, but I think if you go visit for two to three weeks a year, you get the majority of the benefit, and at the same time that's still a low enough dose that you can avoid imbibing the crazy. I think a lot of parts of the world are great places to be based in. Places like Berlin — I'm always a big fan of Berlin, because I think the local community does a great job of remembering what crypto is actually there for, and not just chasing the short-term narratives. There's a huge amount of top-tier talent: even Safe, the smart contract wallet people, are there, and huge numbers of Ethereum core developers are there. So there are a lot of different parts of the world that are like this. And so, especially with the combination of the internet and of more localized, in-person network-effect hubs like this, the need to be in a global superstar city to be at the front of the world is much less than it was five years ago.

Balaji Srinivasan 1:14:36
I agree with that. All right, with that, let’s wrap. Thanks everybody. Thank you.
