#15 - Jeremy Howard on Fast.ai
Balaji Srinivasan 0:00
Jeremy, welcome to the Network State podcast. We've been friends, or friendly online, for a while. You're the founder of fast.ai, which is this incredible course that's online. We've both taught large online courses, so we've talked about that. You're the founder of answer.ai; before that, I think you were at Kaggle, right? And you're Australian, you have an interest in biomedicine, and I think we're both also into peace and trade broadly, internationalism and so on. Give me the spiel. Did I nail everything? Or give me Jeremy on Jeremy.
Speaker 1 0:34
Yeah, pretty much. I'll say, with fast.ai, most people know us for the course, because that's how most people interact with us, but that was only one quarter of it. fast.ai was all about trying to avoid a kind of massive centralization of power and inequality due to what my wife and I saw in 2012 as the likely rapid growth of AI. So we wanted something similar to OpenAI's mission in theory, except we actually were open; well, their initial mission anyway. We basically decided to get AI into the hands of as many people as possible, including people with few resources. And so we did a lot of research to figure out how to make AI more accessible, because at that time only about five labs in the world could do this, and the techniques to actually use AI in practice were not published; they were kind of like closely held tricks.
My wife Rachel actually asked someone who was presenting, in like 2012 or something, about some of his work: okay, so how did you actually do that bit? What weights did you use? What fine-tuning did you use? He was like, oh, we don't publish any of that. That's our bag of tricks. So we were like, okay, this is not okay. This technology is going to change the world, and it requires a bag of tricks that you have to go to Stanford to learn, you know? So we figured out all the tricks and built a lot more tricks of our own. And then, you know, everybody tried to make it all about money. So then Google eventually started creating TPUs and stuff, and saying, like, oh, you can't. I remember Jeff Dean saying there's no point trying to do stuff with AI unless you're at Google, because only we have the compute. And we beat them in a global competition to train ImageNet. At Kaggle? No, not that, fast.ai. Oh, really? I didn't actually know that. Yeah, that was a global competition called DAWNBench, and we competed against Intel; they had, like, a cluster. DAWNBench, D-A-W-N-B-E-N-C-H.
Balaji Srinivasan 2:37
By the way, I love, I'm friendly with Jeff Dean, I think he's amazing. So that's actually pretty impressive. I mean, I'm sure he was impressed that you were able to do so much.
Speaker 1 2:46
Oh yeah, he was great about it. They published a post, they published a paper, and they credited us. There were no hard feelings, you know. We just wanted to say, like, no, you don't have to be a rich Google person to do this.
Balaji Srinivasan 3:02
How was that successful, actually? Maybe you can talk about that, because that's a little surprising to me. Obviously DeepSeek has brought costs down recently, but back then, Google had massive amounts of clean data and huge compute resources and so on. How could student projects be competitive with Google during DAWNBench?
Speaker 1 3:28
Because these big labs suffer from being over-resourced. In fact, it's not as bad now, but particularly around that time and in the next few years at Google, you were explicitly rewarded for using more compute. Whereas we were like, hey, we don't have much money. We made no revenue. We had no grants. It was just my wife and I putting our own money into fast.ai.
Balaji Srinivasan 3:51
Interesting. How were they rewarded?
Speaker 1 3:54
Basically, if you could use more TPUs, that was like a good tick on your performance. No, really? Wow, okay. So, you know, we came along and said, hey, so, for example,
Balaji Srinivasan 4:08
it’s because they wanted people to use the TPU, since they
Speaker 1 4:10
wanted to, like, show off how big their rig was. Look at our big rig, and these people using our big rig to do these big things. So, for example, DAWNBench was an image recognition competition: be as fast as you can to train a model. And the images were 224 by 224 pixels. And we thought, okay, for the first 90% of training, we're going to train on 64 by 64 pixel downsized versions. Yeah, makes perfect sense; they look the same. Then for the last 10% we used bigger ones, a 4x or 16x delta. Nobody else thought of that, and it was one of the major tricks we used. And why would anybody at, like, an OpenAI or Google try to do that? Because it's like, oh, now we're not using our amazing TPUs.
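To make the progressive-resizing idea described here concrete, the sketch below trains mostly on 64 by 64 downsized images and then finishes on full 224 by 224 images. It is a minimal illustration in plain PyTorch and torchvision, not the actual fast.ai DAWNBench code; the dataset path, the epoch split, the ResNet-50 choice, and the optimizer settings are all assumptions made for illustration.

```python
# Sketch of progressive resizing: train most of the time on small images,
# then finish on full-size images. Paths and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

def make_loader(size, batch_size=256):
    # Resize every image to `size` x `size`; smaller images mean much faster epochs.
    tfm = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
    ])
    ds = datasets.ImageFolder("path/to/imagenet/train", transform=tfm)  # hypothetical path
    return DataLoader(ds, batch_size=batch_size, shuffle=True, num_workers=8)

def train(model, loader, epochs, device="cuda"):
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

model = models.resnet50(num_classes=1000)
train(model, make_loader(64), epochs=9)    # roughly 90% of training on downsized images
train(model, make_loader(224), epochs=1)   # finish on full-resolution images
```

This works at all because a ResNet ends in adaptive average pooling, so the same weights can consume 64-pixel and 224-pixel inputs without any architectural change.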
Balaji Srinivasan 5:00
Well, it's interesting, because I'm actually going to put out a little comic on this. You know that meme about "a secret third thing"? People will say, oh, you're not an X or a Y, but a secret third thing. And they'll say it sarcastically: oh, you must be a Democrat or a Republican, not a secret third thing, right? But actually, if you think about an image as zeros and ones, one pixel is not enough to describe the complexity of an image. You need not just a secret third thing, but a secret fourth and fifth and 1,000th and millionth pixel and so on, right? But there is a minimum necessary complexity. And it's interesting, because obviously if you take the number of pixels all the way down to just one, you're not going to get enough. So it's an empirical question: going from 256 to 64 still works. I don't know, maybe going to 32 still works. Maybe going down to a favicon even kind of still works. I don't know if you did that.
Speaker 1 5:59
We absolutely did. And I first just did it visually, you know: I just downscaled it and looked, and I was like, can I still see what that is? And if I couldn't see it, then I figured the computer probably won't be able to
Balaji Srinivasan 6:11
do as well. What was it? Was it 16? Was it 32?
Speaker 1 6:16
64. Yeah, at 32, if you squint, you can kind of see it's maybe a dog, but you can't see what kind of dog it is. I see, interesting.
Balaji Srinivasan 6:27
Yeah, okay. I love this. So first of all, I want to show you something we've done that I think is a complement to fast.ai; we'll jump around or whatever. So I taught a MOOC in 2013 called Startup Engineering. I'm a big fan of it. Okay, great. I did that with Vijay Pande, my colleague. A big fan of Vijay as well. Great. So he's now at the bio fund; we invested in a lot of bio stuff together, so we have that overlap
Speaker 1 6:54
as well. That's interesting. So you and Steve Huffman created those two fantastic courses; I don't know if you've had a look at Steve Huffman's course. Similar thing: they were both kind of end to end, like, how to make stuff. Oh, okay, got it. And
Balaji Srinivasan 7:09
that's the Reddit founder, yeah. He's my friend also. I didn't actually know that. Yeah, of course. And neither of them is really available anymore, you know, a free web development course by Steve Huffman. Interesting. We need a modern one. All right, okay, so how about this: maybe I'll do a refresher, and we can have the fast.ai people put it online, something like that. I do think a 2025 version is worth doing. So actually, let me tell you what I'm planning to do next on this. The reason I taught that course, very similar in some ways to your kind of thing, is I know there's a lot of talent on the internet, and really around the world. And you know how, with dark matter and the Hubble telescope, you can find the dark matter in the universe, right? Like gravitational lensing, yeah, exactly, that's right. You need a special telescope to see it. So, by analogy, just a fun analogy: if the Hubble telescope allows us to find the dark matter, then the mobile telescope, the phones that billions of people now have, allows us to find the dark talent around the world. Basically, people who really have nothing other than their phone and their hunger to learn. And we can offer them a course, and that's like a skyhook and a bootstrap.
Speaker 1 8:28
That's what fast.ai was about as well. We really reached out to parts of India and Africa and places that had nothing. We had a guy from the Ivory Coast asking, is there some way to get this on CDs, because we don't have internet here? And it turned out one of our biggest markets was in Lagos.
Balaji Srinivasan 8:50
It's amazing. So actually, I have a fair number of folks in Nigeria, basically anywhere there are Anglophones around the world: India, Nigeria, the Philippines. There are all these Anglophones, meaning English speakers. I do want to translate into other languages and so on, but I think that's the v1.
Speaker 1 9:07
Right, absolutely. There is all this talent around the world, and it drives me crazy that it's not being used; they're, like, picking coffee beans or whatever. And, as you say, so many of them were saying, particularly when Google Colab came along, I'm training a neural net on my phone through Colab, can you help me do this or that? And I'm just like, oh, this is great. And there was a young woman from Bangladesh, in one of our first courses, who contacted me, and she was like, Jeremy, you probably don't even know who I am, but I'm in Bangladesh, and I'm a teenager. And she said, I want to know if what I'm doing is okay, because I feel shame. She said, I don't know anybody else in my province who does anything with AI. I don't know any other girls who use computers. Everybody thinks I'm weird. I want to know if you think it's okay for me to do AI.
Balaji Srinivasan 10:16
Oh, she just needed the social encouragement.
Speaker 1 10:19
And I wrote back and said, not only is it okay, but you're going to put your province on the map, you know? And you know what, a couple of years later, she wrote to me from Google in Silicon Valley. Wow. Thanks to you, I'm now a Google Scholar. They flew me over to San Francisco.
Balaji Srinivasan 10:37
What I like to do is find these folks, mentor them, train them, stand them up, and then they're leaders in their own communities. It's, quote, teach a man to fish, or teach a man to recognize an image of a fish, so to speak, right? Actually, you can use that; that's a good one-liner, because you open with the bird thing from XKCD. So teach a man, or woman, to recognize an image of a fish, you know?
Speaker 1 10:59
Right. The fish specifically you need to know is the tench. The tench! Anybody who understands computer vision knows about the tench, because the tench is the first ImageNet category. So: teach a man to recognize a tench, yes.
Balaji Srinivasan 11:17
Yeah, that's good. That's right; it's like it replaced Lena. Exactly. Okay, so let's see. Now, give me the Jeremy life story. Like, before: I know fast.ai, I know Kaggle, I know answer.ai, I know the COVID and masks stuff. What's, uh,
Speaker 1 11:35
before Kaggle, yes. So Anthony and I kind of got Kaggle started in Melbourne, in Australia, and then we flew out here. He had this crazy idea that venture capitalists in America would put money into our little startup. I thought it was crazy, I thought there was no way, but he was right and I was wrong. I was like, okay, I'll come, I'll give it a go.
Balaji Srinivasan 12:03
Is Kaggle an Australian word, or is it just a funny made-up word?
Speaker 1 12:09
A made-up word, yeah, like Google, Kaggle. And we spoke to some of your old colleagues. We spoke to Marc Andreessen, and it was interesting: at that time Andreessen Horowitz hadn't done anything in machine learning, and in the end they were very good about it. They passed on our round and said, look, we don't know anything about machine learning. Maybe it's going to be a big deal, but we don't have anybody here who can judge that. So we ended up with folks like Khosla and others putting the money in. But before that, I had two startups that I ran out of Australia. One was called FastMail, which became a very popular global email company, and the other was called Optimal Decisions, which, if you're in insurance, you would definitely know, and if you're not, you definitely wouldn't. It basically changed how insurance companies price, away from using just actuarial methods to using optimization-based
Balaji Srinivasan 13:06
methods. Like convex optimization or something like that?
Speaker 1 13:09
Yeah, just pretty classic optimization. But the key thing was to model elasticity and competitor pricing, not just risk. Because if all you do is model risk, all you can do is cost-plus pricing, which, as you know, is economically very suboptimal. So we made insurance companies a lot more profitable, which I take no pride in; in hindsight, I don't know why I spent years of my life working on that. But originally, coming out of school, I was a bit lost, to be honest, because I was interested in stuff that nobody else was interested in. I was interested in spreadsheets and databases and PCs. This is a bit over 30 years ago, and I didn't know any other adults or kids in Australia who were interested in any of those things. And there weren't any university courses you could go to that were about data. So I ended up doing philosophy, but I actually ended up not going to any classes, because I happened to get a job at McKinsey and Company, where they really appreciated this odd set of skills I
Balaji Srinivasan 14:21
had. So tell me about that, because McKinsey is actually interesting to me. Let me give the negative and the positive view of McKinsey. The negative view is, oh, you're hiring overpriced consultants to tell you to fire people, and blah, blah, blah. And the positive view is that it takes young people and gives them lots of different kinds of business experience, lets them actually see the real numbers of lots of businesses, and trains people to make, of course, good slide decks and good presentations, but really to communicate well and understand the gears and nuts and bolts of businesses. And actually, when I've hired former McKinsey and Bain people, they've done fairly well. They're very good non-technical athletes, power users, or what have you. I don't know, give me your thoughts on that.
Speaker 1 15:13
Oh, I mean, I would say it's unusual. Sorry to be negative. No, I love it, please, challenge me. Okay, good. If I say something worth challenging, challenge me, because otherwise it's boring for everybody listening, and boring for me. Look, I started there when I was 19.
Balaji Srinivasan 15:29
So, oh, really, wow, that’s interesting. Yeah.
Speaker 1 15:33
So I was years younger than everybody else, and for me it was eye-opening, and it was great, because suddenly there were people who cared about what I did. And you're right, they're generally non-technical people, which is one of the reasons why, as a 19-year-old, I could be really successful there.
Balaji Srinivasan 15:55
did you feel you leveled up when you were there?
Speaker 1 15:57
Yes and no. It’s funny. You say it’s this kind of polarizing thing. It
was polarizing in my life too, right? Because at one level, it’s like, I
felt like, Okay, I need to learn business, because I didn’t know any of
that stuff and I wanted to create my own companies.
Balaji Srinivasan 16:11
Yeah, you're very commercial for a professor type.
Speaker 1 16:15
Yeah. Well, I mean, I've never been a professional academic in my life, right?
Balaji Srinivasan 16:21
But you've got the, I think we both have that disposition.
Speaker 1 16:24
Yeah, sure, absolutely. So I was trying to learn business, and by being at McKinsey I did learn a lot about how business worked, but in a lot of ways it's a very conservative organization. I was telling my colleagues at the time, this was the very early 90s, hey, this new internet thing, I think it's going to be big. And they'd be like, I don't know, Jeremy, this computer stuff is pretty nerdy. What's it for? I don't know exactly, but I feel like it's going to impact business. And they're just like, no, look, let me explain how business works. Business is about relationships and strategy and capital. And in the end they were wrong, you know. But I didn't have the trust in myself at the time. You didn't know whether you were wrong? I was sure I was wrong. I just kept trying to figure out why I was so wrong. I felt really upset with myself for being stupid, because everybody else could see it, it was so obvious, and they're just like, look, Jeremy, let me try to explain it. I just couldn't get it. So I wish, you know, I stayed in consulting for 10 years. Oh, really? Well, I should have done just two, because that's enough. And what I really learned there was sales. It's really great for learning sales.
Balaji Srinivasan 17:47
Well, what did you, I don't know, what are the top three to five things you learned at McKinsey? Like sales?
Speaker 1 17:53
Yeah, so I ended up at A.T. Kearney too; I went from McKinsey to A.T. Kearney. What I learned was, okay, it's all about change and influence. So it's not just sales, but it's a kind of sales: you're trying to sell an idea, or you're trying to sell a piece of work, whatever. We were very careful about mapping out the organization. So it's like, okay, we want to sell this piece of work next, or we want to help our client sell this idea: who is everybody in the organization who's in any way a stakeholder, who could have an opinion, who could cause this to succeed, who could cause this to fail? Okay, who do we know who knows that person? An extremely careful and optimized process of creating change through human management and human connections. We brought professional actors in to play the role of different types of clients, and we would interact with them and then talk about what the results were. It was just way more intense human optimization than I'd ever conceived of. I'd always thought of that human side as being like, oh, some people are charismatic, or some people are just good at convincing people. It's like, no, they're skills. There's a science, there's a logic; it's a different kind of logic from programming a computer. But if you want to get an organization to do a thing, you have to know how to map it out and how to react, you know.
Balaji Srinivasan 19:33
In some ways it's a graph traversal, yeah.
Speaker 1 19:37
Yes, but in some ways it felt cold and calculating and horrible, like, oh, this human being: I'm not seeing them as a human being, I'm seeing them as a cog in this machine, and I'm going to use this process. But it totally worked. And so after a while I changed my view of it. I was like, you know what, getting organizations to do things is important. And if that sometimes involves treating people as machine parts, because humans are very predictable, so be it. So you learn how to manage different types of humans in different types of situations. You get the one person to be your kind of inside champion or whatever, and they've recognized that they can use you to advance their career, so you talk to them specifically about how they can advance their career, and then they tell you who's going to get in the way. Then you get three more people, and you use that to put pressure on the fifth person, who is well known to be somebody who likes following rather than leading. And you structure it out, and it'll play out. And at the end it's like, okay, it happened.
Balaji Srinivasan 20:52
You know, it's funny. Do you know Mark Cranney at a16z? I don't know if you know him. He's a very different personality than you, he's like a gruff Mormon of few words, but he's a sales genius, actually. And very similarly, the way I think about it that kind of reconciles all of it is: it's a nested set of win-win relationships all the way up to the organization level. The best kind of sales is when you are genuinely selling them something that will improve their business, or their product, or something in some way. And then, at a nested level, it will also improve the career of the person who approves it, and so on. It's almost like a venture investment all the way through. And that, I think, is why it works: it's the most consistent kind of thing, where even if you're flipping them to do it, they will like it in the medium to long run.
Speaker 1 21:39
Yeah. And if you're trying to make a dent in the world, and you've got good ideas and you develop good things, but you're unable to influence anybody to buy them or use them, then you're not going to make a dent in
Balaji Srinivasan 21:53
the world. That's actually, it's funny, there are a lot of great things about your course, but one of the best is the domain name, fast.ai. Like, I learn AI fast: amazing, okay, that's what I want. So that's an example of a sort of inbuilt marketing thing, which is great. And I'm sure there was some thought put into that, because lots
Speaker 1 22:11
of people could have named it something else. Oh yeah, we did a lot of marketing stuff there. Also, as far as I know, we were the first company in the world to do A/B tests on our homepage. Oh, is that right? Interesting. I think we were also the first where, for all the free email accounts, a little footer would be added to every email message, marketing the service. We did a lot of little things like that, little viral things that today are everywhere.
Balaji Srinivasan 22:36
Yes. Okay, great. Actually, I want to show you something. Let me describe the problem and then the solution and get your thoughts. So you and I have both taught large online courses, and the typical thing that happens with a large online course is, it's a little bit like signing up for a workout. People aspirationally want to do it, and then they want to have done it. They want to have done it, yes, exactly, that's right. And then they
Speaker 1 23:08
want to be the kind of person who would have done that. There's
Balaji Srinivasan 23:11
something good in that, right? But what happens is they sign up, and then the problem is allocating the time, or, if they have the time, the energy, or they get discouraged, or what have you. There have been various mechanisms to try to address that, like cohort-based learning and so on, and those things work to an extent. Cohorts are great, yes, so that can work. But let me show you something that we did, which we call a learnathon.
Speaker 2 23:37
When should you use a random forest? What is the confusion matrix? Dunno. What about collaborative filtering? Dunno.
Balaji Srinivasan 24:04
When could you use a random forest? Tabular data, especially if you have a lot of noisy features. What is a confusion matrix? A table of actual answers against predicted answers, comparing how often the model gets it right, and when and how much it gets it wrong. What is collaborative filtering? A recommendation approach that works by clustering people or items by similarity. So basically, we're going to do a virtual, updated version of that, but essentially we took the fast.ai course, which is what, about 10 or 11 hours of videos, right? And over two days we said, okay, you really want to do fast.ai? Sign up, come here at 9am on Saturday morning, nine to nine Saturday, nine to nine Sunday, and they watch every single video start to finish. No phones. And then when it was time to go and type things in, laptops out.
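For readers who want the definitions above made concrete, here is a minimal scikit-learn sketch: a random forest on synthetic, noisy tabular data, evaluated with a confusion matrix of actual versus predicted labels. The dataset and the parameter choices are illustrative assumptions, not anything from the course itself.

```python
# A random forest on noisy tabular data, plus a confusion matrix comparing
# predicted labels against actual labels. The data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Tabular data with many uninformative (noisy) features.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rows are actual classes, columns are predicted classes; the diagonal counts
# the predictions the model got right, everything off-diagonal is an error.
print(confusion_matrix(y_test, model.predict(X_test)))
```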
Speaker 1 24:58
Do that, absolutely. And it drives me crazy, because so many people tell me, oh, Jeremy, I started your course, I meant to finish it, I've tried three times and haven't managed to finish. I always think, look, you could just put aside one weekend and binge it, you know, get it done.
Balaji Srinivasan 25:15
Yes, exactly. And I want to, did I show you the fellowship video? Okay, hold on, take a look at this. Global meritocracy is finally here, because we're awarding $100,000 in funding for the new Network School fellowship, and anyone from anywhere can apply. Now you might well ask how. Well, you see, we've set up shop on an island right off the coast of Singapore, in the new Special Economic Zone, and it has an enlightened immigration policy. That means it's the perfect place to assemble a global community of tech founders and AI creators. And that's what we've done. We've set up housing, food, co-working, fitness classes, yoga, fast Wi-Fi, office pods, a state-of-the-art gym, healthy snacks, Starlink, a makerspace, a content studio, guest lectures from the most successful founders and investors in the world, nomad visas, and help with everything else you might need. And we have funding too, if you're good. So go and apply for the Network School fellowship [email protected] The only connection you need is an internet connection.
Speaker 1 26:16
That's very inspiring. I want to come. Also, Malaysia is awesome.
Balaji Srinivasan 26:23
That's right. So basically, the combination of Singapore, Malaysia, and the new Singapore-Johor Special Economic Zone: it was one of those things where there was a theory, and then somebody had to put it into practice. The theory is that Singapore has a lot of capital but doesn't have a lot of land, and Malaysia is actually improving a lot, but it doesn't,
Speaker 1 26:41
Malaysia’s got a good education system. It’s a strong,
Balaji Srinivasan 26:45
yeah, very underrated, and it's improving a lot, and you can basically live a pretty good life there, I think. And it's right next door, right? So Malaysia has land but less capital. You can literally drive there; I literally drive back and forth all the time. In fact, we're just, like, 30 minutes from Singapore: you just go over the bridge, pop, and you can see Singapore directly from it. And we'll probably have a ferry or something back and forth that'll get it down to, like, 15 minutes. I want, like, these autonomous boat kind of things, right? So why not, let's get that. So this is something: what you're seeing in the video is something I've wanted to do for more than 10 years, and you just have to build it all, the overnight thing that's 10 years in the making. So certainly, anybody who's doing fast.ai, who's taking the deep learning courses: we're looking for the kinds of people who've completed your course, and we can fund them and help them build things. And in particular, let me explain kind of the motivation behind what we're doing with Network School. So, A, it's very hard now, obviously, to get student visas or skilled worker visas into the US. I mean, even people on tourist visas are getting strip-searched, or crazy things happen. You saw there was actually some Australian, or what have you, and some terrible thing happened to them.
Speaker 1 27:57
Right. And almost every country now has some story of citizens of their country who have been screwed around on
Balaji Srinivasan 28:05
tourist visas, student visas, skilled worker visas, like
Speaker 1 28:09
in the US. And in Southeast Asia, these countries are now competing for that talent with their digital visas, with their startup visas.
Balaji Srinivasan 28:17
Exactly, it's so smart. That's right. And this is the thing I was
Speaker 1 28:21
saying: I want Australia to get on that boat too. We've had this Global Talent visa in Australia, which is pretty good. So, yeah, I have it. Everybody
Balaji Srinivasan 28:29
needs to do this. You know, countries are offering digital nomad visas, right? So there's this weird thing where the US is taking itself out of the global economy just as everybody
Speaker 1 28:38
else, everybody else is diving in, exactly, and all of America’s big
value creators are tech,
Balaji Srinivasan 28:46
that’s right, exactly. And they’re globally mobile, because there’s no
silicon in Silicon Valley. No, we’re not like
Speaker 1 28:50
mining. So our team at answer.ai is fully distributed: Turkey, Japan, Australia, Ireland. If
Balaji Srinivasan 29:01
you ever want to co-locate them, we can host them at Network School for a week or a month or something like that. One of the things we want to do is co-location for remote teams.
Speaker 1 29:09
That's a nice idea, because we got together for the first time ever in person here in Singapore. Oh, great. And we're all like, oh, it's so nice to spend a week together. Eric Ries and I, at answer.ai, did something a bit unusual. We decided to only have one policy. And our only policy at answer.ai is to only have one policy.
Balaji Srinivasan 29:33
Okay, what is that policy? The
Unknown Speaker 29:35
policy is to only have one policy. Oh,
Balaji Srinivasan 29:39
It's very meta, like one of those recursive kind of things. Go ahead.
Speaker 1 29:42
The point is that we only have one policy, and it's to only have one policy. So you can't have no policies, because that's a policy.
Balaji Srinivasan 29:50
okay, okay,
Speaker 1 29:51
So we have no policies other than the policy that we're only going to have one policy. I see, okay, got it, and why? Well, policies are like ideologies: they're these fixed things that say, oh, you can turn your brain off now, because we've decided X, and in this situation this is how you're meant to behave. I am deeply skeptical of ideologies and policies and all of these cognitive shortcuts that basically say, oh, I believe in this thing because that's what my ideology says.
Balaji Srinivasan 30:26
Yes. So let me give an analogy, or a way of thinking about this, that I have from The Network State book. Think of programming paradigms: you can have imperative programming, functional programming, declarative programming, and so on. For certain problem domains, a certain style just makes it very easy and concise to solve that domain. But then you also want a multi-paradigm language like Python. With something like Haskell, you can just do everything as f of g of h of x, and you can actually get far with that, but it's sometimes nice to do things in an imperative style, or what have you. And that's how I think about political paradigms. Or another analogy: I'm not a big UFC guy, but in the Ultimate Fighting Championship, some people are using grappling, some boxing, some Muay Thai, and it's situational. Do I solve this with a kick or a punch? Do I solve this as functional or imperative? And I think Lee Kuan Yew was someone like that: he understood many different political schools of thought, and then he applied the right technique, self-consistent within that school of thought, for that situation. So that's the beyond-ideology thing: you're aware of a lot of these different approaches, you situationally figure out which one is appropriate, and then you use it.
Speaker 1 31:50
Because you're constantly curious and interested, and what you care about is doing a good job, rather than being consistent with other members of your tribe. Most humans are mainly interested in being consistent with other members of their tribe. That's right, the number one driving force.
Balaji Srinivasan 32:11
And the thing about that is, there's a meta-rationality to it. It's kind of like evolutionary game theory. You can imagine two populations of people, conformists and dissidents, so to speak. The dissidents are constantly exploring: they're taking high risk, and sometimes they fall off a cliff, and sometimes they get the reward and the tribe follows them. And the conformists are, it's like one group is risk capital and the other is stay-at-home money, so to speak. So you can make a portfolio-strategy argument for why you want a small number of dissidents, or contrarians, or whatever you want to call them, entrepreneurs, who are sometimes wrong, while most people follow the tribe so they don't run off a cliff, but the dissidents could find a better pasture over here. That's one way of thinking about the respective balance. Go ahead.
Speaker 1 33:04
Yeah, I mean, I'm kind of curious about this, because globally, somehow, every jurisdiction has settled on the same education system, and that education system teaches children to be conformist. The tests test whether you can feed back the things you were taught in the way you were taught them; you get rewarded if you do what you're told. And I'm kind of curious how much of what we see in the world is because every single child, in the Western world at least, has learned these same behaviors. Have you
Balaji Srinivasan 33:43
heard of the Prussian educational system? Yeah. Okay, do you know what preceded it? No. Okay, so there's this great book, we can put it on screen, called The Craft Apprentice. And one of my macro theories of the world is that history is running in reverse. I can show you a bunch of graphs on that, but it's literally like a U-curve, where in many ways our future is more like our past, more like, say, the 1850s, and then eventually the 1750s, than the 1950s. There are a lot of U-curves that have their minimum or maximum around 1950, and I can show you some graphs on that. So one premise is that the Prussian educational system, which is what we currently know as K through 12 and so on, was set up, inspired by Bismarck after German unification, to have all the children get basically the same software in their heads. It's like how with Windows you have the default install that comes from the factory, and then you have, you know, Windows Premium or Ultimate, maybe, for college graduates, and then you have the service packs from mainstream media. That's how I kind of think about it, right? And there's a reason for that, because then everybody has the same references: they salute the flag, they've got the same basic install, and they can interoperate. There's a rationale for it; it's the software part of constructing a nation. In fact, arguably that's even as important as, quote, the hardware part, which is the physical territory and the people and so on. But before that, there was a different system, which was all based on apprenticeship. People would start working from an early age, and they would learn practical skills very, very early on. Or it'd be like Jebediah and Abigail have 12 kids, and they'd all be working on the farm, like mini industrial robots, so to speak, picking fruit or mending fences very early on. So the entire concept of extended adolescence wasn't there; the concept of being on your parents' health insurance until 26 or whatever wasn't there. And the reason that stuff got introduced, in part, is because, I think, in the late 1800s, with the advent of industrialization and factories, these kids were no longer under the supervision of their parents; if anything, the parents knew they were under the supervision of factory owners who would push them too hard. These were the child-labor factories, and so on. That was a misalignment between the interests of the factory owner and the kids, and that's when the child labor laws were passed and so on.
Speaker 1 36:06
I mean, that took a long time. What was it, like 60, 70 years? Britain was the first in the world to introduce child labor laws. But yes, it still took much longer than it should
Balaji Srinivasan 36:16
have. That's right, this old Dickensian kind of era, or what have you. So there was a good to that at first, but it's also what led to the modern era of adolescence: you know, having fun as a kid for a long period of time. And now we have this extremely extended adolescence and training period, where some people are students, doctors, say, all the way up into their 30s before they start their career. They're almost middle-aged before they start, you know. And I think the corrective to that, because everything good can be overdone, is this: you can go from, quote, opposing child labor to not allowing people to even work until they're in their 30s, as a doctor, for example. So I think the thesis-antithesis-synthesis is: the kid is at home, under the supervision of their parent, but they're able to start earning online by doing software development and so on. Even 10, 12 years ago, some of my best students at Stanford were kids who had actually earned their first dollar doing online programming in their teens. And it's not even so much about the amount of money; it's that the market is the grader. Have you seen the grade-inflation graphs? We can put that on screen, but it's kind of crazy: everybody gets a 4.0, basically, because students are the customers, so they're basically buying a job. So how do you deal with that? My answer is, the market is the grader. So now you have kids doing software. They can't hurt themselves like in a factory; they're under supervision because they're working remotely at home, but they're also apprenticing. And at Network School we also want to make that happen, where they're in a friendly environment among a bunch of other adults. They can run around and roam, and they can level up: they can be next to an electrical engineer, next to a mechanical engineer, as they're building robots and stuff, help them with small things, and start to see what adults are doing, that it's not just sitting at a desk the whole day. So let me pause there. That's kind of how I'm thinking about part of the future of education. Maybe you have some thoughts.
Speaker 1 38:23
I have a lot of thoughts, yeah. So I know a lot of kids in that kind of interesting group who are basically ready to go to university when they're, like, 11 or 12, and adults all try to stop them. Oh, interesting. For some reason, the vast majority of adults I deal with don't want children to learn when they're ready to learn; they have to learn at the speed at which they're expected to learn. They want a speed limit. Yes. And they assume that for any kid who's keen to learn more, it must be the parents' fault, that they're pushing the kid. Kids are not allowed to have curiosity and drive and passion. But actually, not every kid learns everything at the same speed. So I'm very interested in how we help that talent at a much younger age, not because I want to make them more productive or whatever, but just because I know so many of these kids are deeply unhappy when they're artificially held back, and I want to help them all have the opportunity to have that excitement of feeling like they're achieving their potential, that they're really happy with the things they're building. So I've got a kid, she's nine, and we let her basically have whatever opportunities she wants. She chooses her curriculum, she chooses what she does, and she's happy for us to provide some guidance as well, but we don't force her to do anything. And she's got this great cohort of friends all around the world now who learn in this way, all doing it at their own speed. Obviously, with AI, there are a lot of opportunities to help more and more of these kinds of kids develop as they're ready, and to get a much more customized, personalized, dynamic education experience, one that's not focused on conformity or authority. Sometimes my daughter comes back, she does lots and lots of extracurricular things, one of them is trampolining, she comes back from trampolining and sometimes she'll be like, oh, I got a gold star for good behavior, isn't that great? And I always say, I don't know, I'm not sure I want you to have great behavior. Why do you think it's so important to have great behavior?
Balaji Srinivasan 41:02
Well, of course, it depends. Obviously a layer of dissidence and so on, on top of a fundamentally pro-social attitude, is good; but if people are antisocial and they're littering or yelling in the street, that's
Speaker 1 41:14
exactly it. Being the best-behaved kid in the class and getting the gold star that week is not necessarily the great thing, and it's not something I want her to be proud of, right? She's incredibly pro-social, she's incredibly kind, she's incredibly generous, but that doesn't mean she has to do everything she's told as soon as she's told
Balaji Srinivasan 41:36
to do it. That's right. And it's funny, because
Speaker 1 41:38
basically, particularly for a girl. Girls are particularly taught to fit in and do what they're told, and I don't want her to be somebody in society who just fits in and does what she's told.
Balaji Srinivasan 41:52
I think this concept of balance is right: as you said, they're pro-social and they're kind, but they also don't obey every single command, and so,
Speaker 1 42:04
yeah, I tend to focus on empathy with my daughter, which maybe ends up in a similar place. Particularly for younger kids, empathy doesn't necessarily come easily. So I have to kind of say, okay, you thought that was funny; now can you try to imagine what that person's situation was? Do you think they would have found it funny if you were them, in that situation? Has anything similar happened to you before? And eventually she's like, oh wow, did I just do the thing to them that another person did to me that made me sad? Oh wow, I feel so sad, I didn't want to upset that person.
Balaji Srinivasan 42:39
It's funny, because, just like with religions, you can often get to a similar behavior pattern from different starting points. I had a recent tweet that went a little bit viral on exactly that topic of empathy. Essentially what I said, because I was talking to conservatives, is: look, empathy is actually a useful concept even for a completely cold-blooded capitalist. Why? Because you have to understand the other guy's point of view and their win. And a lot of people, especially in today's America, have gotten themselves into a mental state where they think everybody's exploiting them, everybody's ripping them off: Australia is an enemy, Canada's an enemy, Vietnam is an enemy, whatever. And it's like, lots of people are just neutral. They're just business partners, or they're just living their lives. You don't have to fight, and you can't fight the entire world, and you should also have some understanding of, okay, what's their win, and how can we get to a win-win? Often a win-win is more profitable for both parties involved, and so on and so forth. So,
Speaker 1 43:48
and actually, altruism is programmed into us; this is something we've discovered. Evolutionarily, it's been programmed into all of us, so to not be altruistic is to fight against your basic instincts, and that's really dangerous, because when you fight against things that evolution has programmed you to do, you're creating a new, unstable equilibrium. Why did that happen? Well, presumably there were plenty of groups that had no altruism in their villages; genetically, they just didn't have it as part of their DNA. They didn't cooperate, and they died out. So we as a species, we're not perfect, but you don't want to underestimate the power of what we're born with. Altruism is not weakness; altruism is strength. These are the people who survived. And if you want to fight against that, you're fighting against a basic survival instinct. Also, it's nigh on impossible to design and organize such a complex system; they arise over a very long period of time to create these marvelously stable equilibria. And this is what kind of terrifies me at the moment: there are so many opportunities to destabilize that equilibrium right now, with the technology and connectivity we have, and historically, each time a previously stable equilibrium is damaged, you sometimes end up with hundreds of years of societal misery. So I'm definitely very keen to see change and growth, but I want people to understand the power of where we're at, to know how hard it was to get there, and to know enough history to know that destabilizing an equilibrium creates a power vacuum. And there are certain people who are extremely motivated and good at taking advantage of power vacuums, and they're the people you definitely don't want in power, you know? And, well, I don't know, somehow Singapore did an amazing job, like the one country in the world that, I think they just got lucky with Lee Kuan Yew. Do you know what I mean? They ended up with a guy who's kind of incorruptible. He doesn't have a huge chip on his shoulder; he just cares about outcomes. Most places around the world, in that situation, end up with a deeply insecure, chip-on-their-shoulder, power-hungry person.
Balaji Srinivasan 46:33
What's funny about Lee Kuan Yew, and I think it's very underappreciated, is that he could argue his case in English. I think this is the most underappreciated aspect of Lee Kuan Yew: he would argue his case in English, he could argue on the global stage, and other people understood at least his point of view. He could make it cogently, in short form and in long form, sound bites and then long speeches, extemporaneously or in policy papers, and he made sure that Singapore won the argument. And if you win the argument, then you often don't have to fight, because there's that swing vote in the middle who's like, you know what, he has a point here, we should do it his way, and so on. And I feel that, for example, there are other folks in East Asia who delivered comparable economic results to LKY, in South Korea or in Taiwan or what have you, but they couldn't make their argument in English. That's a really exceptional aspect: they could speak in Korean, they could speak in Chinese, but they couldn't make their case on a global stage. And I think that's very underrated, and it's something I think about a lot. So let me actually slightly counter-argue with you on the power vacuum thing. I think we are about to enter a period where the future is China versus the internet. Should I elaborate on what I mean by that? China versus the internet, right. So the 20th century was sort of a symmetric thing, almost like basketball: the Final Four plays out, and it ends up as US versus USSR, everybody slugs it out. Sean McMeekin has this book called Stalin's War, where he kind of makes the point that World War One and World War Two can be seen almost as a Thirty Years' War, an extended bar brawl with people smashing chairs over each other's heads all around the world, and then it lands up as the US versus the USSR, with Japan and Germany eliminated, and other powers too: the UK, France, and so on. I think this century is going to be different, where it's not a symmetric thing but asymmetric. China and the internet are, I think, the balancing forces. China is obvious; I think the internet is not obvious, and I'll explain what I mean. But China is obvious: if you take the, quote, American empire, I think China inherits the manufacturing and the military, not all the money, but the manufacturing, the military, and really the might of it globally, the alliances and so on. The world is, after this tariff thing, recentralizing around China, totally, quickly.
Speaker 1 49:11
Interesting to see how imminent that is, but there's something very deep happening there, yeah.
Balaji Srinivasan 49:16
So, so I think what’s going to happen, and it’s not just
economically,
Speaker 1 49:19
also culturally. You know, America's cultural power has been enormous. It has been, that's right. So now in Australia, I'm seeing people being like, oh, America's kind of cringe now.
Balaji Srinivasan 49:30
Cringe now, that's right. But I think the other heir, the less visible but just as important heir, is the internet, which has the people, the values, and the language. And the reason I say that is that the only thing with economic scale comparable to China is actually the internet. Why am I into crypto? I'm into crypto because everybody on the internet is equal, meaning you're peer to peer. You can send packets back and forth. You have the same property rights, you have the same contract law, you have the same monetary policy. And so whatever you were born into, you can opt in to a system of law that is superior to the one you were born into. It's like emigrating to at least half of what a government is: it's not the land, it's not the physical territory, I'll come to that, but it's at least the property rights. And you have to make some sacrifice: you buy some of the coin, or whatever, and you start interacting with this. Now you have a system of law that's often superior to the one you inherited, whether that was in Nigeria or Lebanon or somewhere like that. These places have destroyed currencies; they don't guard property rights. Now you can finally save, because the blockchain protects your savings. So I think the internet has half of what we want: it has a system of government, and with all these blockchains, multiple systems of government. One of the ways I think about it is this: early America didn't actually think of itself as America at first. They were British colonists, the Virginia colony, the Massachusetts colony, and they had a land and they had a people, but they didn't have a government, because the government was in London. It took a while for them to develop a sense of national consciousness and realize, oh, that's actually not our government; our government is here. So they had land, people, government, and they became America. I think the internet is evolving in the opposite order: it has the people, and it actually has a government, in the form of the blockchain, but it doesn't yet have land. I think that's the
Speaker 1 51:26
next step, and hopefully it won't be "versus." Unfortunately, Xi Jinping has moved into a power vacuum in China. Prior to that, actually, China was much more of a democracy than people realized. Talk about this. Well, I think a lot of people don't understand how the political situation in China worked. There was a lot of voting, but unlike most Western democracies, the voting was entirely within the party. The party, yep. Now people might think, oh, that's not very big.
Balaji Srinivasan 51:59
It's actually about 100 million people; the Chinese party is very big. And then you
Speaker 1 52:02
go, and, so, I spent a lot of time in China, with a lot of really great people there, young people, and for the vast majority of the best of them, what they most wanted to do was to get into the party. So, not commenting on whether this is good or bad, but it ends up as that kind of a democracy of, you know, the hardest-working, most intellectually capable people.
Balaji Srinivasan 52:29
Can I make a provocative comment? There's a book called The Party Decides. The point of that book was that the American uniparty decides who's actually running on the Democrat and Republican side. For many years people have said it's "a choice, not an echo," or whatever, right? And there's a similarity there, where there were, quote, smoke-filled rooms in which the candidate was determined. Certainly with a recent Democrat primary, it was basically the party that determined who was running, and so on and so forth, and then there was a whole disaster, the whole Biden-Kamala thing. So for many years there was more similarity to that system than some would argue, with essentially a uniparty deciding who the candidates were. And now, I'd say, in a sense we've had true democracy burst forth, but that's how some people conceptualize this democracy. Let me pause
Speaker 1 53:14
there, yeah. So, yeah, that's another whole connection I'll leave
aside for a moment, which is that actually, yeah, there are
actually a lot more conspiracies in the world than people realize.
There are a lot of smoke-filled rooms; I've been in plenty of them. Yes. The
thing I just wanted to mention, the thing that was missing in what you
said, is the key issue for me, which is the
presence of positive feedback loops. Now, when I say positive feedback
loop, I don't mean a good feedback loop. I mean a feedback loop which feeds
back and causes more of itself, like viral
reproduction, something like that. Power and wealth are
naturally positive feedback loops. Getting more power puts you in a
position to be able to get more power. Getting more wealth puts you in a
position to get more wealth, and then you've got the cross-correlation:
getting more power helps you get more wealth, getting more wealth helps
you get more power. I talked earlier about the importance of a
stable equilibrium. How can you get a stable equilibrium in a situation
where somebody getting ahead lets them get further ahead, right? It's like
compounding interest. There's a huge tension here, right? And this
is where democracy and capitalism and the market economy run into a
huge problem, right? Which is, if you allow those positive
feedback loops to happen, then you end up with people who have
incredible riches and incredible power because they're on the right
side
Balaji Srinivasan 54:46
of that feedback loop. Yes, okay,
Speaker 1 54:49
it’s a natural disequilibrium, and it’s not compatible with actual
market forces or with democracy. Because you’re now in a situation where
you can, like you can buy the media, you know you can, or nowadays, like
the social networks or whatever you can, you know, stick all the odds in
your favor. And that is not again, that’s not a resilient state to be
in. So somehow, many societies in the world have managed to create
sophisticated, complex equilibria that have avoided this for decades,
you know, but it’s not the natural state of things. The natural state of
things is for there to be, you know, one incredibly wealthy and powerful
person that you know is there because of the power of
Balaji Srinivasan 55:40
positive feedback. Okay, so let me disagree with that in two
ways, and then you can counter-argue. The first is,
there's a saying, shirt sleeves to shirt sleeves in three
generations, right? Which is to say that, like, this guy starts a
factory, his son inherits it, and his grandson puts the fortune
up his nose and, you know, does drugs and, you know, basically
spends down the whole thing, right? And this is like the resource curse
concept, where when people get too wealthy or too powerful, they get
extremely lazy. They forget cause and effect, especially if they're two
or three generations out; they don't even know what hard work resulted
in that fortune in the first place. And they just blow the whole thing
up. And that's actually what's happening with the US right now. Like, in
many ways, I think the people currently running the US government
are not founders, they're heirs. They've inherited the system that, like,
better people set up decades and decades ago. They don't even
understand how it works. It's like a factory they've inherited, and they
don't understand how it produces widgets, or how it maintains global
order, global peace. They just think, I'm big and powerful, and they
don't understand why it
Speaker 1 56:45
exists. I think that’s true, but it doesn’t matter, because the what the
data shows is that over multiple hundreds of years periods, the wealthy
families stay the wealthy families and and at like, the highest levels
of power you know, you like, if you look at the history of the, you
know, English royal family, whatever, or of Chinese emperors, like, they
stay there for hundreds of years, you know, and they create, they create
feudal systems underneath themselves, which are critical for
establishing loyalty and all that. That’s the more natural state of
things that things fall into, unless you can maintain that
equilibrium.
Balaji Srinivasan 57:29
Okay, so I’m on a counterarguance from an argument that I think is
interesting, at least, maybe to you know, maybe you’ll disagree. So if
you have an heir, or you, let’s say you have a like a Genghis Khan,
right? They have two like they have a child, they’ve got half their DNA,
then another child, they’ve got a fourth, then another child, they’ve
got an eighth, right? And most of the time, people don’t have an
exponentially increasing number of children. So that means that that
fortune, for example, would or whatever it is. It’s very hard to pass a
fortune down many generations, number one and number two is that person
almost doesn’t even exist anymore because their genes are being split
up, diluted, like Does the person even in what sense is somebody who’s
only 116
Speaker 1 58:10
part of the same family, right? I think, though, you're dramatically
overemphasizing the importance of genes over context
Balaji Srinivasan 58:20
so, like, they’re four generations down. How is it they’ve got a bunch
of descendants, right? The vast majority of their descendants must like,
what does it even mean to say a family across four or five generations
that family doesn’t like?
Speaker 1 58:34
Arguably that's not how power is transferred, right? Power is transferred
by picking an heir, and they have an heir, and they have an heir, and then
as soon as there's, like, a lack of a clear heir, then you get a hundred years of
war, and then somebody wins, and now they have another, you know, heir,
heir, heir. That's the thing: they generate this system of
hierarchical loyalty, and, like, you can see historically, sure,
that people do maintain it,
Balaji Srinivasan 59:06
but I had made two points. First, most of their heirs are not
inheriting that fortune, so the majority of the family or the
descendants or whatever are not, right? Because it would be divided. The
second is, even this fourth- or fifth-generation guy is now, like, 1/32
Genghis Khan, or what have you. And so they may just not have the
zeal or the energy of the original Genghis, right? Say they lose, and
then there's a new guy who takes over, right? So basically, what
I'm saying is it's almost like there's a huge tax, like a 50%
tax every generation, that makes it very hard to keep concentrating the
same stuff in the same family, because the same people don't even exist three or
four generations out.
Speaker 1 59:48
you’ve got the premise wrong. Premise is that what better there is the
genes. And what I’m saying is no Balaji. What matters is the power of
the positive feedback loop. Power gets begets power. Yeah, it doesn’t
matter if my I’m five generations away from Genghis Khan. What matters
is I’m the king of England or I am the king of France, right? But you
know, like, if, like you saw what happened in China, hundreds of years
of terrible emperors, opium addicts, destroying the country. They still
maintain the power, right? And the country went from like during the
Tang Dynasty, the, you know, the vast majority of GDP in the world was
in China. Cultural Center was was in China. Scientific center was in
China. And then, through power concentration, the civilization died, you
know, sure. So we don’t want that to happen,
Balaji Srinivasan 1:00:46
I guess so. Let me agree with you on that, and I do think that there need
to be alternatives and so on and so forth. I'll just make one other
point, which is, if that person is only 1/32 or 1/64 Genghis Khan, then
there were 31 or 63 other people or families that rose, so, like, the
mobility is actually there. If it's a sufficiently
exogamous society, then all these folks did rise to become rulers,
because their bloodlines actually did get up there. So essentially what
I'm agreeing with you on is that the title
got passed down, but the family doesn't even exist beyond five, six, whatever
number of generations, right? The family just gets diluted out. Does
that make any sense?
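A minimal arithmetic sketch of the dilution point above, under the idealized assumption that ancestral lines never overlap (no pedigree collapse); the 1/32 and 31-other-families figures come from the conversation, the rest is illustration:

    # Idealized ancestry dilution: each generation halves the share of genes
    # inherited from one famous ancestor and doubles the number of ancestral
    # lines at that depth (ignores pedigree collapse in real family trees).
    for g in range(1, 7):
        share_denom = 2 ** g        # descendant is 1/2**g of the famous ancestor
        other_lines = 2 ** g - 1    # other ancestral lines contributing equally
        print(f"generation {g}: 1/{share_denom} of the ancestor, {other_lines} other lines")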
Speaker 1 1:01:31
Yeah, but that’s what I’m saying. It doesn’t matter, right? What matters
is that you the positive feedback loop created a power and wealth
concentration that was maintained for hundreds of years, and most people
in the country suffered, right? And that’s the thing that we want to
avoid, and it’s incredibly difficult to avoid, because that’s the
natural state of things. It’s positive feedback loops,
Balaji Srinivasan 1:01:57
I guess maybe this is an empirical question, and we can look at
different trajectories, but I think it is difficult to maintain that
power and wealth concentration without zeal, and if that zeal is not
there, people get fat and happy a few generations out. Like, we've
seen that. Maybe we're just thinking of different kinds of
examples, right? For example, in tech, it's almost entirely, quote,
new money, right? And what I find is that people who've inherited
fortunes are just lethargic, right? They don't have that energy. So we
are seeing this internet disruption, right, this dark talent that's
hungrier. I would always invest in that, I would always back that,
because it's hungrier and it wants it, right? So I'm always
seeing anti-compounding, I guess.
Speaker 1 1:02:43
Yeah, no, and I agree with all that. But I'm trying to get you to think about the
end state, okay? Like, maybe I agree with everything you're
saying, right? But what I'm trying to say is, okay, consider the
positive feedback loop here, right? With AI, now you've got the
ability to create more power, you know, and more wealth, and we're
more connected. Like, we could literally end up with a global dictator,
and we could literally end up with a permanent underclass representing
99.99%
Balaji Srinivasan 1:03:19
of the world. So let’s talk about how we prevent that, right? Because I,
because this is something I do think about, right? So my view is, and
you may may or disagree with this or not, or is that we got people got
more left than they expected. Now they’re getting more right than they
expect, more Maga, and then they’re gonna get more China than they
expected. Like, basically, I think what’s gonna happen is China’s
rolling up a lot of alliances, like the EU is doing deals with China,
all its historical rivals in Southeast Asia are now just all folding in.
So the whole global economy is recentralizing around China and America
has not just become isolationist. They’ve isolated itself from the world
and the most punishing, they’ve sort of self imposed the most punishing
sanctions of all time on themselves, like a rogue state North Korea,
Iran would face this kind of embargo, but it was, like, self imposed
because they think it’s going to make them strong. It’s really kind of
crazy stuff, Maga Maoism or whatever, right? So, as a consequence, I
think a lot of power gets centralized in China. And
Speaker 1 1:04:14
along with that, interestingly, you're seeing this kind of
cultural isolationism happening in America, yes, which is also, like, quite
difficult to undo, potentially extremely difficult, because they could
end up like Japan pre the Meiji Restoration, no? They thought they were
powerful, they thought they were strong, but actually they separated
themselves from the world and
Balaji Srinivasan 1:04:36
become weak. That’s a good outcome. I actually, I think it’s quite
that’s a good outcome. Yeah, fair enough. I think, I mean, because
that’s actually something where they give up the Empire, but they’re
just like, you know, a country, or
Speaker 1 1:04:47
they would stay isolationist, except they've got nuclear weapons. Well,
that's, that's a
Balaji Srinivasan 1:04:51
problem. The thing is, I think, you know, there are a lot of people,
actually, both on the left and the right, who will say we need to be, you
know, a republic, not an empire, or we need to shut it down, you
know? And the problem is... well, first of all, maybe you'll agree with
these things. I'll give a view, and then maybe you shoot at it, right? I
think the first thing, at least, that I start with, is: American empire is
real, and it was spectacular, in the sense of being, arguably, for all its
faults, one of the greatest of all time. Absolutely, it did bring
capitalism, democracy, world peace, in many ways. Then it lost its way,
especially recently. And now you've got a very common kind of thing
where folks on the left think, oh, the US is bombing lots of countries,
it should stop doing that. Folks on the right think the US is being
exploited by all these foreigners abroad, it's being cheated, we've
deindustrialized, we need to stop all that, bring all those jobs back. Okay,
fine. So this group thinks the US is harming the world. This group
thinks the world is harming the US. Both of them want to shut
down the empire, bring the troops home, you know, and so on. Okay?
Speaker 1 1:05:51
Remember also, like, during that heyday of the '50s, you know, the American top
marginal tax rate was like 80%, like 90%. Yeah, that's, like, working
very hard to avoid this positive feedback loop I mentioned, you
know, redistributing the wealth. So,
Balaji Srinivasan 1:06:06
okay, so on that point, just to talk about that: at that time,
though, power was completely centralized in the US government, right? So
you almost have, like, a toothpaste-tube squeeze, where if you
avoid centralization on one axis, you often get it on another, that kind of
thing, right? So, um,
Speaker 1 1:06:24
Because the people who want power will find ways to get it.
Balaji Srinivasan 1:06:28
Yeah, you can have total centralization of government power, or you can have total
centralization of corporate power, or maybe military power, or you can have
checks and balances. And where I think the world is going to go is a
billion-person Chinese superstate, and then eventually, like, a thousand
million-person network states. And then I think India is going to be
in the middle, I think other countries are going to be in the
middle, and so on and so forth. But that's where I think things
go by, like, 2040 or so, right? And so hopefully that gives... I'm not
saying they're all million-person networks. Some might be bigger, some
might be smaller. But I do think that we'll have a lot of choice
of jurisdictions.
Unknown Speaker 1:07:04
I mean, that would be nice. That’s at
Balaji Srinivasan 1:07:07
least a hope, perhaps. Yeah, good.
Speaker 1 1:07:09
I've just got to say, keep thinking about the positive feedback problem,
because I think it still has it, you know. It feels, you know, rosy to
a level that, well, that seems not in line with
power
Balaji Srinivasan 1:07:24
dynamics, I guess. I guess my biggest argument against that is
arbitrage, because it's very difficult to get... Or let me give a game-theoretic
argument, right, going back to your sales example, right?
If you have two people, you have four possible outcomes in a win/lose
setting: you can have win-win, win-lose, lose-win, lose-lose,
right? If you have three people, you have two cubed, so eight possible
outcomes: win-win-win, win-win-lose, and so on, right? And if you have k people,
you have two to the k possible outcomes, where, you know, some number j of them win
and k minus j lose, and so on for any value of j and k. Okay, so
this is how I think about, like, managing a startup, right? With a
startup, if you have 100 people, what you don't want is political
behavior where some subset of them loses and the other subset wins. You
want to have a single thing which aligns everybody, and that's like
equity, and that's like the exit, so they all know: if we work together,
we all get the maximum payoff when it's win, win, win, win across
the board, right? However, there are limits to how large you can make
that, right? You might make that 100 people, you might make that 1,000, you
might make it even a million people; cryptocurrencies are getting
it to tens or hundreds of millions of people, right? But I don't think
you can get to everybody. And the reason you can't get to everybody is
that at some point there is an incentive to break away, to disalign, which is what
I call network defect, right? And so that is a counterweight to, kind of,
I think, what you're saying about infinite compounding. It's
actually...
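A minimal sketch of the counting in the argument above: with k participants who each end up winning or losing, there are 2^k joint outcomes, and exactly one of them is the fully aligned everybody-wins outcome. The 2^k figure is from the conversation; the enumeration below is just an illustration of that count, not a claim about how likely any outcome is.

    from itertools import product

    # Enumerate the 2**k joint win/lose outcomes for k participants and count
    # how many are the fully aligned "everybody wins" case (always exactly one).
    for k in (2, 3, 10):
        outcomes = list(product(("win", "lose"), repeat=k))
        aligned = sum(1 for o in outcomes if all(x == "win" for x in o))
        print(f"k={k}: {len(outcomes)} outcomes, {aligned} fully aligned")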
Speaker 1 1:08:50
if you’re allowed to go ahead, if you’re allowed to, like, I mean, like,
yeah, it’s like, oh, you know, the people in Wessex could have left, or
whatever. It’s like, no, they’re in a feudal state, and they would have
got killed, and there’s violence and, right? And, like, if you add AI in
the mix, then you can have, like, absolute global surveillance and power
and total control, right? So now, okay, so here’s it’s fine. In theory,
you could go and do something else. In practice, if you even talk about
it, you get shot in the face, yeah.
Balaji Srinivasan 1:09:16
Right. So the practical case where I do agree with you is the
Chinese drone armada, right? Because they can manufacture huge numbers
of robots, and those robots are no longer like human beings who
can defect, right? Because they can't defect, all these concepts I've
been talking about, the game theory, the principal-agent problem, go away,
and it's just one guy pushing a button, and it's like a machine that just
enacts that action around the world, right? That is definitely something
which changes these dynamics. That is actually something where you could
have centralization of power for a long time, and that is actually
something we should think of as the most important thing to build
counterweights to, going three to five years out.
Speaker 1 1:09:58
So I think your network states idea can hit that, too. So
Balaji Srinivasan 1:10:01
fast.ai. You’ve got this practical deep learning for coders, part one,
part two.
Speaker 1 1:10:06
We’ve done a new course called How to solve it with code, and we built a
whole new platform for it, which we basically beta tested it. We opened
up sign ups for 24 hours, kind of reasonably quietly, 1000 people signed
up within 24 hours. So then we closed it, we did that, and the reactions
we got were amazing, like we’ve had hundreds of people come back and
say, this changed my life. I’ve got a new job. Well, it’s not open for
everybody, but it’s it’s solve it.fast.ai. So we’re trying to figure out
how to now make the most of this. Because we’ve come up we’ve created
something clearly extraordinary, basically the fundamental idea. I don’t
know how familiar with the polio book, but it’s basically
Balaji Srinivasan 1:10:53
like it’s a bag of tricks for solving math problems. Yeah, but it’s a
bag of tricks.
Speaker 1 1:10:57
It’s actually a fundamental idea, which is that to do things
iteratively, step by step, and when you apply that idea to coding, and
then you bring AI into the mix as well, you can, we’ve kind of come up
with this way of solving problems with code and AI, where you’re
constantly in control of the AI. You never get into that situation where
the AI is kind of controlling you, yeah? So, yeah. So we, like, I say
we, this was from months ago. We haven’t let anybody use it for months
because we’ve been running it and testing it, yeah. So it’s a bit of a
long story, but basically it’s a whole different way of thinking about
problem solving, which is the exact opposite of the whole
Balaji Srinivasan 1:11:39
vibe coding kind of thing. It's like, let's think step by step for humans.
Yeah, let's
Speaker 1 1:11:44
think. Let’s think step by step for human plus AI together. The AI sees
all of your thinking. You see, the AI is thinking. You write code. The
AI write code. You’re constantly focused on learning and iteratively
improving. You know, vibe coding, it’s just like one shot thing where
you don’t learn anything, you get up more more technical debt. So it’s
actually, it’s interesting, like my co founder, Eric Ries, has this Lean
Startup approach, which it turns out, is really similar to the Polya
approach. Again, it’s like highly iterative learning based. So we’re
hoping that through this solve it course, that we’re going to eventually
build something like your startup engineering. Oh, great, but, but using
this solve it approach and with, with the help of AI, to allow and then
to create, like, 1000 new startups from that course, and then work with
investors to give each of them, you know, a start financially and maybe
hopefully build the next generation of founders.
Balaji Srinivasan 1:12:45
Amazing. And I think, you know, it would be good... I want to actually talk
about the Network School fellowship with your fast.ai folks. I think a
lot of them could benefit from applying. So, okay, awesome. Thank you
very much.
Unknown Speaker 1:12:57
Jeremy, great.