Class #4 | MS&E 435: Economics of the AI Supercycle | Stanford University | Spring '26 | Apoorv Agrawal
Summary
In this Stanford MS&E 435 guest lecture, Databricks CEO Ali Ghodsi argues that "AGI is already here" but enterprises cannot capture value because models lack organizational context, and that the AI productivity lag mirrors the 40-year lag from electrification — a "human refactoring" problem, not a model-capability problem. His major substantive citations check out: the MIT NANDA "GenAI Divide" report's 95% pilot-failure figure, Paul David's 1990 Stanford paper "The Dynamo and the Computer," the 40-year electrification productivity lag, the Solow productivity paradox, Databricks' ~20,000 customers, Anthropic's ~$30B run-rate, and US healthcare at ~17–18% of GDP. Several smaller details slip: the Berkeley AMPLab he names was launched in 2011, not 2009 (its predecessor RAD Lab was 2005); he attributes the famous "computer age everywhere but in the productivity statistics" quote to "Richard" — it is from Robert Solow; Airbnb was founded in 2008, not 2009; Kimi K2.6 went GA on April 20, 2026, not "two days ago" relative to a May 6 taping. None of these errors change the substance of his argument.
Claims Analyzed (12)
Source Quality
Conversational lecture; most citations from memory. Substantive academic and industry references (MIT NANDA, Paul David 1990, Solow, Hamilton Helmer's Seven Powers) are real and traceable. Errors are date and name slips characteristic of extemporaneous speaking, not invented or misleading framing. No anonymous attribution; no statistical sleight of hand.
Transcript
Class #4 — Economics of the AI Supercycle
Stanford MS&E 435 (Spring 2026) — guest: Ali Ghodsi, co-founder & CEO of Databricks. Interviewed by Apoorv Agrawal.
"We already have AGI"
Asked for a state-of-the-union view, Ali Ghodsi opens with a deliberate provocation aimed at a roomful of stressed Stanford students: chill out. The frenzy in Silicon Valley — xAI versus Cursor, OpenAI versus Anthropic, the race for "superintelligence" — is, in his view, unwarranted. Worse, the stress chases people into tunnel vision and bad work. Twenty-two-year-olds asking him whether they have already missed the boat on AGI are asking the wrong question.
His provocation: we already have artificial general intelligence. Ghodsi tests the audience: how many of you think we have AGI? About 10 percent raise their hands — the same ratio he reports getting every time he asks. Then: how many of you interact with people who are clearly less capable than the smartest models you use? Most hands go up. The point lands. By the definition AI researchers were using when Ghodsi joined Berkeley as a visiting scholar in 2009 (he names AMPLab, though that lab officially launched in 2011 as a successor to the earlier RAD Lab), we have already cleared the bar. The goalposts have simply moved.
Why none of it is working
If we already have AGI, why is no one capturing the value? Ghodsi cites the MIT NANDA report that found 95 percent of enterprise generative-AI pilots are failing. He won't defend the exact number — maybe it is 75 — but he insists the direction is right. Walk into any large enterprise and the picture is the same: humans shuffling TPS reports, sales teams hired from old-school companies, processes that look exactly like Office Space. Even inside the AI labs themselves, he says, things run in old-school ways.
His diagnosis is a single word: context. Inside every organisation there is a John or a Jane who has been there fifteen, twenty, thirty years and holds the entire process in their head. Models do not have access to that context. Without it, agents make stupid mistakes and are useless — not because they are not intelligent, but because they are missing what the long-tenured humans know. The work that matters, in his framing, is not training bigger models. It is figuring out how to download the carbon-based brain into the silicon.
The dynamo, the computer, and AI
Ghodsi reaches for a historical analogy: Paul David's 1990 Stanford paper The Dynamo and the Computer. Robert Solow — a Nobel laureate, though Ghodsi misnames him "Richard" in the moment — famously quipped in 1987 that you could see the computer age everywhere except in the productivity statistics. People bought PCs and used them as typewriters. The output was printed, filed, and indexed by assistants. No productivity gain showed up.
The same pattern played out four decades earlier with electrification. Factories built around centralised steam power and overhead line shafts simply replaced the steam engine with a dynamo and changed nothing else. It took roughly 40 years — from the 1880 commercialisation of the electric motor to the 1920s — before factories rebuilt their floor plans around unit drive (one motor per machine, distributed power, big horizontal layouts outside the cities) and finally captured the productivity gains. The technology was not the bottleneck. The rewiring of the organisation was.
That, Ghodsi says, is exactly what is happening now. He is meeting CEOs of major banks who insist AI is amazing, that they need it, and yet they cannot find the productivity gains in their own organisations. He tells them: you have AGI. They are confused. The brain is here. The body — the hands, the legs, the processes — has not been rebuilt yet.
A connector built in two days, or in nine months
Ghodsi gives a Databricks anecdote to make the point concrete. Databricks builds production data connectors to systems like Salesforce, Workday, and NetSuite. A single production-grade connector takes the team three quarters — nine months — to ship. As LLMs improved, Ghodsi tried it himself and built one in two days. He took it to the team. Two weeks later they came back: yes, with AI we can compress nine months down to seven and a half. That's it.
A different engineer, working from first principles, looked at the same problem and came back with: we can ship seven connectors in one quarter. The difference was not AI. The first quarter, in the original process, was Stanford-trained product managers flying out to customers to gather requirements and producing 60-page reports. The new approach: write the requirements in a week and iterate, because software is now cheap to rewrite. Outsource the test-environment setup (Salesforce instances, Workday tenants) to specialists running in parallel. Replace the one-engineer-per-connector structure (a "bus factor of one") with seven engineers working on all seven connectors. The breakthrough was process refactoring, not model capability. GPT-7 would not have helped.
Is software dead?
The class wants to know whether they should buy the dip on software stocks. Ghodsi pushes back on the premise. If software is dead, then OpenAI, Anthropic, and Nvidia — all software-and-people businesses — should also be dead, and they obviously are not. Two things have changed: barriers to entry are lower, because writing software is cheaper, and switching costs are lower, because in an agent-mediated world you no longer have to learn a new UI to switch tools.
But the moats from the Hamilton Helmer playbook — economies of scale, brand (Ferrari, Rolex), trust and security, network effects, switching costs, and proprietary data — are intact. Ghodsi's grade for software companies: if you have data and you sit in a core operational loop, you are robust. The middle-of-the-grade workflow companies, with stale UX and no innovation, are in trouble. But even those companies, if they have customers and data and they get serious about AI, can fight their way through. The companies that look exactly the same as ten years ago and have just been riding revenue growth — those are the ones that should be afraid.
Where the value accrues
Asked how he would allocate $100 across Jensen Huang's five-layer stack — energy, chips, infrastructure, models, applications — Ghodsi puts it at the top. Applications win. He frames it through his own PhD experience in networking in the early 2000s, when the smartest people in the field were certain that multicast — efficiently broadcasting one source to the whole world — was the most important problem in computing. They were wrong. Bandwidth got cheap, fibre got laid, and multicast was solved by brute capacity. The real winners on the internet turned out to be ideas that sounded ridiculous in 2000: a taxi app (Uber), selling books online (Amazon), renting your bedroom (Airbnb), short text messages (Twitter). Anyone pitching those in 2000 would have been called insane.
The same will be true here. Today everyone is focused on chips and infrastructure — Nvidia, OpenAI, Anthropic, DeepMind. The trillion-dollar AI companies of the next decade will be in unexpected applications. Ghodsi names two: healthcare (~17–18% of US GDP, willingness to pay is essentially infinite, and the current product is poor), and education (despite VC consensus that it is uninvestable).
Frontier vs open source
Open-source models are closing the gap. The blue line on the standard chart, which used to lag frontier by three to four months, is now roughly a month behind. Two days before this lecture, Moonshot's Kimi K2.6 dropped — Ghodsi calls it "the best model ever in the history of mankind, if it had been released in January." (The K2.6 general-availability release was actually April 20, 2026, a few weeks earlier than the "two days ago" framing — but the substantive point about open-source closing the gap is correct.)
His take on the model layer: it will be valuable, but it will look like the cloud. Token factories at scale, thin gross margins, thin operating margins, Amazon-book-selling economics. A few players, not many.
Closing advice to the class
Ghodsi closes with the same theme he opened on. Do not be stressed. He thought the world was ending during his PhD in the early 2000s and worked on what everyone "knew" was the most important problem in computing — multicast — which turned out not to be a problem at all. Airbnb was started in 2009 (the company was actually founded in 2008, joining Y Combinator's January 2009 batch), but, he points out, Airbnb could just as easily have been started in 2001. Nothing in the technology stack changed. It just took nine years for someone — Brian Chesky, who needed a place to crash at a conference — to come up with the idea. Good ideas, in Ghodsi's reading, are rare and slow.
His parting line: think long term, like Bezos did when he left Wall Street and bet on the secular trend of the internet, starting with the most boring possible category — books. Do not be swayed by whatever Twitter is screaming about today. There is a high probability it is the multicast of our time.
Raw transcript
You know I thought where we'd start Ali is you know a lot of talk right before you joined about there's world's moving fast xAI, Cursor, OpenAI fighting Anthropic you know you guys have done such a great job of stacking you know going from a data business to a lakehouse business to now an AI business just state of the union >> view from the top what are you what are you seeing like frame the landscape for us um what are the biggest things that you're thinking about and you know I've got a bunch of questions that we can talk about but I thought would just open it up to like what is the biggest thing on your mind as you as you think about AI? >> Yeah, I think that uh you know I think you guys can chill out. Don't be stressed. You know, I think times are crazy and uh I think it's not warranted basically and I think the stress uh makes people do stupid things and chase just uh you know, whatever happens to be the crazy thing that everybody's talking about on Twitter. Uh I think it makes people have tunnel vision and not work on the right stuff. >> Yeah. Uh and I think that's what I see with like the current generation like every year we have interns coming to
Databricks and the interns I do always like a session with them an hour or 90 minutes or something they can ask questions and last two years have been just insane. Before they would ask for like good career advice and you would give them good career advice. Now they're like 22 year olds who are like, "Oh my god, should I like start start my own company and be a CEO?" Or if I like delay that by six months working on something, have I ruined my career and life is over and you know, AGI is going to happen and I'm going to miss the boat and like what am I going to do? I'm like so I'm like just trying to tell people like calm down, take a deep breath, things take time. >> Yeah. >> You know, so that's what I would say. I would say actually I think also in Silicon Valley if you think about it right now what's happening is I think uh and you might disagree with some of this so feel free to push back. You guys can might disagree too. You can push back as well. Uh but there's this quest for super intelligence which I think is unwarranted. >> Yeah. >> Because first of all, they're not even defining what super intelligence is. But it's this kind of like godlike, you know, it's like I think people reading Kurzweil and you know it's takeoff singularity, you know, this thing that
comes and like you know recursive self-improvement and you know cures all the diseases and GDP jumps by like 10% and unemployment goes to 20% and there's no more jobs and UBI to everyone and so on. No, I think they believe it. I think it's not needed. I think we already have AGI. >> So, we already have artificial general intelligence. >> Uh, you know, um, okay, this is always equally fun. How many people think we have AGI already? >> Okay, it's always the same. It's always like 10%. Okay. Uh, how many of you think that a lot of people that you interact with are not as smart as the smartest models that you use? Okay. Now, let's start all over. How many of you think we don't have AGI yet? >> By the way, it always works. It's like, you know, see, it's like the hypnosis is working. For some reason, they've gotten the whole world to believe we don't have AGI, but it's like you just answered it that you have it, you know, right? >> But yet, nah, no. Uh, you want to move the goalpost. By the way, I was at the
research lab in 2009 at UC Berkeley called AMPLab. It was probably the biggest uh most active kind of important AI lab of its time in 2009 >> and um you know the you know the god of AI was working in that lab which is Michael Jordan his name is actually that so he's like the Michael Jordan of AI uh and um back then our definition of AGI artificial general intelligence um you know we've hit that >> like anything we imagined would be AGI we already hit that and those are all the leading AI researchers in the United States many of them were working in that lab. But I was just I wanted to see like if I'm just full of it. So I went and asked some of those people that were there at the time and I asked them as hey do you agree and they all said yeah according to that definition in 2009 for sure we've hit that but you know and then there's always some you know stupid but we moved the goalpost or we want to change it or we want to have some other definition or it did or the AI at some there's some example that you know it couldn't count the number of Rs in strawberry or something so therefore we don't have AGI um we already have AGI
okay it's already smarter than many of the people that you interact with that is general intelligence it is artificial it's not exact a human it's not the way human brain works so we already have that so in some sense uh you know blowing a lot of money on GPUs and data centers and all of that kind of stuff is not really needed >> okay then there is at the same time so you ask for the state of the union on the other hand you have like the MIT tech report that says that 95% of the POCs are failing right >> uh it's kind of right directionally I don't know if the 95% might be wrong maybe it's just 75% who knows u but if you go inside of an enterprise and or inside of an organization you go into any company and you look at how they're using stuff, the reality is that there's no like lots of agentic co-workers running around doing all the work, you know, blending with humans. That's not happening. Okay? It's just humans shuffling TPS reports. >> Okay? It's like Office Space the movie is still like how the world runs. >> This the reality. This is just the truth. Like
>> even inside the AI companies, that's how they run them. It's like they they like to think but they're hiring sales people from old school companies and they're running things in old school ways and I don't see like that futuristic thing. >> So then what's going on? We have AGI but on the other hand uh none of this is working and no company is using it. What the hell is going on? >> Uh I think it's very simple. Um, if you don't get all the context that exists inside of these organizations and how humans work and everything, all the context we have in our heads, if you don't get that to the models and the agents, they're going to do lots of stupid mistakes and they're useless. And that's what's happening right now. The models just don't have or the agents don't have the context that humans have inside of organizations. Therefore, they're useless. They do stupid mistakes because they don't know all the stuff that we know. Mhm. >> You know, inside of every company, there's always like this one guy or this one gal >> who's like, "Oh, go ask John or Jane." Like she knows everything, you know, and everybody's like tapping on that person's, you know, and that's the one person you can't lose in the company. If you lose that person, the whole company collapses. >> Yeah. >> That one person has that one person exists in every department, in every
company, in every organization. And that one person has all the context in their head. >> And that person what they have in their head is not inside of the model. >> So therefore, the model can't operate. It just doesn't know a lot of the stuff that's sort of usually John or Jane in that company have been there for 10 years, 15 years, 20 years, sometimes 30, 40 years. Um, you need to get that transferred to the AI. If you don't, the AIs, doesn't matter if you get super intelligence and you can solve really difficult, you know, math questions. Um, uh, you know, and if you can get that context into the AI, we already have AGI and they can already crack the problem. So my uh urge to you guys would be uh you know if you want to have impact in the world figure out how to get that context into the AIs uh inside of an or like take an organization how do you transform how old school business is happening and how do you get those processes into the agents then you will have massive impact because AGI is already here >> right that's my state of the union >> AGI is already here >> you got to download the brain
>> into the silicon >> get the carbon to talk to the silicon >> yes >> you know Actually, we were just talking about this. >> And by the way, queue up your your like push backs. I'm very curious to hear. >> I'm sure a majority disagrees. How many disagree with this? >> Oh, not that many. I was hoping. Okay, I'm going to be more provocative. Okay, we need more push back. Okay, so you know, before we before we go into AI, you know, there's a shadow of AI. >> Uhhuh. >> Software is dead. >> Mhm. >> Software has been dead for a while. We've had this >> four times. It happens. Every time it happens, it bounces back. >> Yeah. >> Some macro reason. Brexit, taper tantrum, inflation, this time it's AI and the question that class is asking is should we be loading up on on software stocks? So, so is software dead? Is this a buy the dip situation? >> Uhhuh. >> And you know, I can't think of a better person to ask because depending on the day you ask, there's like software, you know, we AI company, software company speak about that is software. >> I think you know better. You're an investor. I'm not an investor. Also, I don't give financial advice. Uh but
Having said that, if all software is dead, then isn't OpenAI, Anthropic dead? >> They're just software companies with a bunch of researchers >> writing software. >> So those companies would be dead, too, right? So they shouldn't have trillion dollar valuations. SpaceX might make sense because they make rockets, but everybody else should be dead. Um, Nvidia should be dead because they just have really smart people who create chip designs, humans that use some software to create chip designs and then they ship them over the internet probably over to TSMC which is a real company creating actual chips. But then Nvidia would be dead as well. So the world's most valuable company should be dead as well cuz software is dead, right? So software obviously isn't dead and it's not going to be dead and Nvidia and OpenAI and Anthropic are not going to be dead companies uh, you know, because of whatever SAS apocalypse or whatever we want to call it. Yeah. >> Um but um I do think two things are true. I think that um two uh big changes have happened which is one is barriers to entry >> uh have significantly gone down and then
switching costs have significantly gone down. So let's talk about those. Um barriers to entry because it's easier than ever to write software. >> Mhm. >> Um so that's like a new weapon. >> Yeah. Yeah, anyone can produce software uh very cheaply >> almost at zero cost. It's not quite zero cost and it will never be zero cost but much much cheaper than before >> but that weapon is available to everyone. >> Mhm. >> So also the people that create software now also have that weapon. It's not like only some some new players have that everyone now. >> Mhm. >> So including Databricks like we're a software company but we also have that weapon and it's an awesome weapon using Have you used that weapon and substituted any of your core software expenses like your CRM, your IT help desk, your office of the CFO software? >> No, I think that's stupid. Uh, also I think switching costs are lowered because it's easier to switch between UIs. >> Mhm. >> You know, humans get locked into software. They get you I don't know if you're Are you How many use Android? >> Oh, no one. Okay. Wow. How many use >> guys? >> Okay. Wow. Okay. All right.
Um, you don't want to switch to Android. Why? You don't know the UI. You don't know how to use it, right? It's like a different UI. You would have to also transfer all your data, your phone contacts, all that. It's too much inertia switching cost. That's a switching cost, right? But if in the future you're just talking to an agent, that switching cost gets eliminated because you're just talking to an agent. So, who cares if the agent is instrumenting your Android or your iPhone or your Gmail or Outlook or your Salesforce or the competitor or whatever it is. So, that's like the switching costs coming down as well. M >> um so yeah I think it's going to be more competition >> um so I think software companies will have to run more efficiently >> uh that I think is going to happen >> um but software is not the only moat there's a good book you should read it's called the seven powers how many have read the seven powers >> okay a bunch of people here okay yeah so there's like there are moats that are not just software right I mean like economies of scale if you can do things at scale better than anyone else uh so that you can afford crazy fixed costs because you're amortizing them away because of your scale you know Amazon
AWS um you know uh then that's a moat >> uh if you have a brand like Ferrari or Rolex you know that's a that's a moat people writing cheap software can't just come replace that brand people care about that brand >> uh trust you know like I'm the only one providing you can trust my company we don't get hacked we have like really secure software we have special certification maybe we had patents that remains a moat that you cannot break that that easily. So, um all these remain uh there's a bunch of other ones. Switching costs on so um data is a big moat. If you have special data that no one else has >> um that only you have, >> that's a moat. Doesn't matter if they can write cheap software. So, so I think it's the answer is in between. >> Yeah. >> The way I say it is if if a company has uh been around for 10 years and they have not innovated, if their software looks the same as 10 years ago, but the revenue has been going up, >> Yeah. they should be worried. Yeah. >> Because they have not been innovating and it's probably easier for a company
that starts today to then with you know barriers of entry being lower write software quickly that's much better than that company because that company hasn't done anything for 10 years. >> Yeah. >> They should be really afraid. >> Yeah. >> Uh and probably they don't have the innovation muscle anymore because they you know they're not innovating. So obviously they're not they don't have innovators. Um those kind of companies are going to be wiped out. >> Yeah. But there's going to be other companies that have been innovating the last 10 years and they're software companies or companies that now get their together because they're nervous. >> Yeah. >> And they'll be fine too. >> Perfect. >> So what do you think? I mean you're an investor. >> I mean I think um there's a grade exactly as you said like if I was to give the grades to types of software companies I would say if you have got a lot of data like you said if you've got you know some cyber like you're like in some core loop you're probably most robust and immune from it. Somewhere in the middle is all the workflow software like which has not innovated. The UX still looks same and you're like scrunching down on your shoulder and typing I met Ali Ghodsi today. These are the notes like that stuff's probably gone.
>> Yeah. >> If you you were exactly you said no innovation you were a part of the old habit >> but any one of those they have customers they have data. If they build great AI and start innovating they can keep the keep keep on going. They might have to change their pricing structure and their cost basis but they'll be fine. In fact, they have a lot of advantages against incumbents. They have data, they have customers, and they have some scale. So, they have some economies of scale going, >> but they do have to get their together. And that's, you know, easier said than done. >> Yeah. Yeah. Yeah. I'll um flash a chart. Have you guys seen this uh chart from Ethan Mollick? He talks about AI is very good at some things. He calls it the jagged frontier. This is customer support. software engineering would be like this would be the frontier of like maybe software engineering, maybe this is customer support or whatever. But then there's a lot of stuff that's like terrible at like like this skill or this skill or and so on. And you know you Ali, you see a lot of >> we're here man
already. >> We're over there. >> We're here. We're here. >> Um >> they kind of admitted to it. >> That's right. That's right. That's right. >> Reluctantly. >> Yeah. >> Yeah. So, you know, you you've got what, like 6,000 7,000 customers? Those are 10,000 customers. >> No, we have probably 20,000 customers. 20,000 customers, sorry. >> Plus, yeah. >> As you see this and you have that's a very good sample of the entire universe of what's happening. So, in that sample of 20,000 customers, what are what are areas where AI is like hitting home runs and like working as advertised? >> Yep. >> And what are areas where the frontier is still uh rough and it's not working? The POCs are failing. >> Yeah. Look, it's not AI's fault. I mean, most companies are somewhere here. >> I think we have AGI, but I think most companies, if you look at how much are they, maybe they're having AI is helping me in some tasks. That's what most companies are doing. That's just how it is. >> And uh it's because that context isn't there in the model. >> So, the model can't do it. >> Take support, which everybody said, okay, that's going to be dead. Support is like gone. Yeah. >> Right. Support is very hard.
>> Support are literally the things that humans don't know what to do. Like they they get stuck. So, take Databricks. Databricks offers support. >> Databricks is a company that offers. >> It's a platform, advanced platform where you can do data science, machine learning, you can do advanced things on the platform. These are smart people who make, you know, big salaries. They have education, you know, they have data science education. They're trying to use Databricks and maybe they get stuck. So, their machine learning models doesn't have, you know, the right it's not getting the right F1 score or, you know, something like that >> and they're stuck and they tried everything. They call our support. So it's pretty hard to automate that. You can't actually give it to none of the current support automation. We've tried them all >> companies all of them immediately even actually when they start talking to us as soon as they know who we are like whoa whoa we can't help you like you go do get out of here you know uh so uh so yeah most of the world is over here >> but it's because we don't have the context. If the AI could have all the context of how our support engineers at Databricks operate
>> then the AI could do it. >> Yeah. It just doesn't have it. >> Yeah. You know, one of the things we used to say at panel is your AI strategy starts at your data strategy. Yes. You got to get the roads paved and have the data flowing and >> is that you know if you were to bucket the best enterprises who are like maybe like starting to head towards the right in your customer base of 20,000 >> what is common between the ones who are making it work >> and the Ferraris are flying >> um and and and and ones where I'm I'm guessing it's a context problem for the ones that it's not working and what does it take to get the context working? >> Yeah, it's very hard. It's a human problem like it's not an AI problem. we already have AGI. It's a human problem. I I don't see anyone really doing an excellent job at this. >> You have to kind of rewire all your processes in the organization uh to to be able to do it. This is like well known. I mean my favorite is there's an article actually that I recommend people reading from 1990 uh produced by actually a Stanford professor or researcher. Um it's called I think um you know from the dynamo to the computer. >> Okay, check it out. So dynamo to
computer and it looks at different uh sort of u technological revolutions and how long it h how long it took for them to have impact on productivity of econ of the economy >> and it's just you know takes just forever like when the PCs came out the joke was the Nobel laureate economist um you know uh um uh Richard said that >> computers or PCs you can find them everywhere except in uh the productivity statistics you know, like it just doesn't show up in the statistics. Um, >> why people were buying PCs and they were using them as typewriters. >> So, they would have people type on PCs but then print out the sheets and then put them in folders and then have assistants that like index them and do things. So, like you didn't see any productivity gains from it. >> And same thing with if you look at the industrial revolution, same thing happened. Uh, you know, we had these steam engines and the steam factories were sort of super dense and they were
running like with these, you know, they were called the line uh shafts >> which were like these things that rotate. >> Mhm. >> When the electric engine came, that's a dynamo. >> Uh, it took 40 years before they saw any productivity gains in the in the economy. >> Wow. >> Yeah. Check it out. This is that article. It took from 1880. The diffusion took 40 years >> from 1880 to 1920 when the electric engine came >> uh to see impact. So what they were doing is they were going to these factories that already were these line shaft factories >> that were these dense factories where you have a steam engine that's rotating this line shaft and it's rotating these belts and then everything is working. You have these multiple stories. Um and all they did is just like the PC they use as a typewriter. They would replace the steam engine with an electric engine. M >> and that doesn't just like replacing the PC with you don't get any productivity gains. It took till 1920 >> by maybe it was 1915 but I'm roughly right. Um until they realized wait we have to change the whole factory floor. >> We have to move the factories out of the cities.
>> We have to have floor plans that are much bigger, because now we can >> distribute the electricity. Electricity doesn't have the inefficiency of torque over a line shaft, right? We can spread it out, we can have floor plans that are big, and we can run different parts of the factory at different rates — unit drive versus group drive. It took a very long time. >> That's what's going to happen now — the same thing. Rewiring. I know it because I have 20,000 customers and I talk to them. I was late to this meeting because I was meeting the CEO of one of the big banks, and he has the same problem. All the organizations I work with have the same problem. They're all like: AI is amazing, it's coming, I need it. But they're like: I don't see any productivity gains in my organization. What the hell am I doing wrong? >> And I tell them we have AGI, and they're like, what? >> Like, that is not true. We don't see anything. >> It's a very tough problem, cuz you're like: hey, I got the brain, but I've got to rebuild the human body. >> Yeah. >> The hands, the legs. >> Yeah. Let me give you the body.
>> Yeah. Let me give you an example from Databricks. So Databricks helps you get data from all the different systems, like Salesforce, Workday, and so on, collect it in one place, >> secure it, and then do AI on it — you can do predictions, you can build predictive models. That's what Databricks is. >> So we built connectors to all these systems. >> It would take us three quarters to build a production connector. And we're good at this — it's what we do for a living. We can build a connector from Databricks to Salesforce, production-ready. >> Mhm. >> It would take us three quarters — so nine months — to do that, >> shipped, secure. >> Nice. >> So that's what we did. >> So as the LLMs got faster and faster, I started experimenting with this myself, and I was like: oh, I could write a connector in two days. So I went to the team that builds these and said: hey, I can do this in two days — how come it takes you guys three quarters? They're like: okay, great point, let us come back to you. So they went and thought about it, and they came back in two weeks and said: okay, you're right — but you're also not
right. We looked at it, and yeah, this AI is useful. We can compress it down from three quarters by a month and a half. So we can get it from nine months to seven and a half months. >> That's it. >> That's it. I'm like: well, I can do it in two days. And they're like: no, no — no offense to you, but this is production code that really actually works, we have customer feedback, it's secure, and you wrote some toy, God knows what. I mean, no offense, you're great, but let us handle it. >> A missing link. >> So I was like: man, this is kind of depressing — but yeah, I'll take the month-and-a-half improvement, maybe it's something, and maybe I'm just stupid and don't get it. >> Yeah. >> Then I found another guy in the company. We went to him and said: hey, can you look at this problem? He's very first-principles. He's a very smart guy, and he doesn't care about the fluff — he cuts through it. And he cut through the fluff, worked with the team, and came back and said: hey, after looking at the problem, we can do seven connectors in one quarter. >> Boom. >> Yeah.
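As a back-of-envelope check on the numbers in this story — the three figures are the ones quoted in the lecture; normalizing everything to connectors per quarter is just an illustrative way to compare them:

```python
# Back-of-envelope throughput comparison for the connector story.
# All figures are the ones quoted in the lecture; normalizing
# everything to connectors-per-quarter is an illustrative assumption.

QUARTER_MONTHS = 3

# Old process: one person ships one connector in three quarters (9 months).
old_rate = 1 / (9 / QUARTER_MONTHS)        # ~0.33 connectors/quarter

# AI-assisted but otherwise unchanged process: 9 months shrinks to 7.5.
ai_only_rate = 1 / (7.5 / QUARTER_MONTHS)  # 0.4 connectors/quarter

# Rewired process: a seven-person team ships 7 connectors in one quarter.
rewired_rate = 7 / 1                       # 7 connectors/quarter

print(f"AI alone: {ai_only_rate / old_rate:.1f}x")  # 1.2x
print(f"Rewired:  {rewired_rate / old_rate:.0f}x")  # 21x
```

The arithmetic is the point of the anecdote: dropping the model into the old process buys about a 1.2x speedup, while rewiring the process buys about 21x.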
>> Let's go. What is the difference? >> So what's the difference? Okay. What he did is he went from first principles with some team members, and they looked at it and said: okay, in the first quarter we're just sending our very expensive, very smart, Stanford-educated product managers out to the customers to collect feedback — what exactly are your requirements, how do you use Salesforce, and so on. That takes a full quarter. At the end of that quarter, our amazing, smart product managers come back with a 60-, 70-, 80-page super nice report on exactly all the requirements. >> Okay, so you're blocked for a whole quarter. So for sure you can't compress it below that. >> Then code writing starts, but we have to test this stuff. Testing requires you to set up Salesforce, Workday, NetSuite — but those are not software by Databricks, so we're not very good at that, and it's hard to find people at Databricks to do it. So that again is a process that takes a long time to stand up, and it's very error-prone. So we couldn't compress that either. And then we have one person for each connector. >> They go on vacation, they get sick, you
know, and so on. So all of that. What he did is, from first principles, he looked at it and said: we're going to rewire all of this. And lots of people didn't like this; they were unhappy about it. But he said: the product requirements — instead of one quarter, we're just going to take one week and quickly write down whatever we have. We might get things wrong, but because the software is so fast to write, we can rewrite it again. >> So let's iterate faster. Standing up the Salesforce instances — let's outsource that to firms that can do it for us; we pay them, and they do it in parallel. >> So we can shrink that as well. And one person per connector — let's change that. Let's have seven people and seven connectors, and they all work on all the connectors together. >> So we don't have what's called bus factor one — if someone is hit by a bus, >> the whole project is not stopped. >> Right. >> So, yeah — got it all done in one quarter, and seven connectors shipped. But this really didn't have anything to do with AI or AGI or smarter models or superintelligence. Like
you can have the next GPT-7 or Opus 6 — it would not have helped us >> do this better. We needed to make those changes. >> And that's a human refactoring problem, a process change. This is what the whole world is going through. >> So that's what you need to do well if you want to succeed. Some are doing it better, others are not. >> Hamilton Helmer actually talks about this quite a bit. So for all of you who are picking assignment option one and want to be investors, Hamilton Helmer's book Seven Powers is a must-read on process power. >> We were debating this before: Ali, if you had $100 to invest across what Jensen calls the five-layer stack — energy, chips, infra, models, and apps — where does value accrue? If you were to put a hundred bucks in an index of energy, chips, and so on, with, let's say, a long-term time frame, where would you put it? How would you allocate the $100?
Um, >> and why? >> I'm a computer scientist, I'm not an investor, I don't give financial advice, but — >> but you're allocating, >> you know, your $5,500. >> Yeah, you are allocating money — Databricks' time, right? Databricks is across three of these. >> Yeah. I would just say, look, it's obvious that the applications are going to be the winners, >> right? >> So I would put it at the top. >> And I'll give you some guesses — but who knows, actually, it's very hard to predict. So I would go early stage: I would have a seed strategy, invest in many, many startups, get most of them wrong, but a few would actually make it, and they would be the next Google or whatever. But, you know, when I did my PhD in the early 2000s, >> mhm, >> I was in the networking field. Networking was the cool thing to do. It was the advanced thing, cuz the internet was the big thing at the time — you wanted to work on it — and the coolest thing on the internet was >> networking.
>> Yeah. >> And it was the hardest problem — the smartest math brains were working on it at the time. We all knew what the future would look like. >> Everybody knew what the most important problem everyone was going to work on was: what's called the multicast problem. >> Which is — yeah, see, >> it's problematic that no one knows what that is today. We were clearly wrong. So multicast is: you want to broadcast from one source — say a soccer game or a football game or a basketball game — to the whole world, cuz everybody wants to watch it at the same time. >> We didn't know how to solve that efficiently. >> So all the smartest brains in the world were trying to work on this problem, and bandwidth was scarce >> while we were doing this. And by the way, we actually had pretty good solutions, and I started a company on this >> — we had a great solution. Unfortunately, the cost of bandwidth just plummeted. They deployed so much fiber that this problem was never a problem. >> So no one needed to buy this software. It was a complete waste of time. And at that time we thought the hardest, most interesting things to work on were Cisco routers, routing, BGP —
Border Gateway Protocol — Internet Protocol, queuing theory, quality of service, these kinds of things. Those seemed like the most interesting things, because we had tunnel vision on the internet. >> And what was the internet? Well, at the time it was the internet protocols and those things. No apps really existed, right? So we were all focused on that. And today, everybody's focused, I would say, on >> chips — >> chips and, I think, infrastructure. >> Yeah. Right now the hot new thing is NVIDIA, OpenAI, Anthropic, DeepMind — these are the things everybody's focused on. AGI, superintelligence. That's >> what I said at the beginning. But on the internet, there were really weird things >> that took off. The really weird things that took off were, like, the taxi business, >> which is Uber. >> Uber, yeah. >> Or selling books, >> which is the lamest thing ever, but that became Amazon, which became AWS. >> Yeah. >> Or renting your bedroom to people — that's Airbnb.
Or sending people short texts, >> which became Twitter, >> right? >> You know, right? >> And if you had said them in those words to people in 2000, people would say you're out of your mind. You're insane, you're full of it. But those were the great ideas of the time. So I think it's the same thing here. >> To throw a few of them out there: I think healthcare is like 17% of US GDP. >> Yeah. >> We all still, unfortunately, will die, and we all care about our health and the health of our loved ones. There's a huge propensity to pay for this — we'd pay anything to save the lives of our loved ones, or for our own lives, our own health issues. And it's not particularly well done today. Surprise, surprise — healthcare is not awesome. So imagine a company that has seen a million patients — >> like, I have seen 100 million patients with your kind of genetic composition
and the kinds of issues you might have in the future, and I can help you — >> but what are you willing to pay for me to help you with that? >> That could be a company worth trillions of dollars, >> right? >> To take something out of left field that people think is really not interesting — take education. >> In the VC space, the consensus has always been that education is a terrible investment, right? People say: oh, never invest in education, it's like shit. >> What's the last public-market company, you know? >> Yeah. What's the last trillion-dollar education company? >> Not even $100 billion. Yeah. >> Yeah. Anything, right? But most people have kids, >> and more kids keep being born, and they do need to get an education — whether people believe it or not. Yeah. >> And people do care whether the education their kids get is good or not. Elections are won and lost; there are cultural issues on these things — what you're allowed to teach my kids or not, right? Elections are won and lost on that. Not because it's a
stupid topic — because it matters. What you're teaching my kids matters. Are my >> kids being brainwashed to do the right thing or the wrong thing? Are they well equipped to get the jobs of the future? Yeah. >> I think if there's a company that can provide amazing education >> using AI, I think >> a lot of people will pay for that — if it's proven that it does a better job than whatever they're getting right now. Just two flavors of obvious companies that I think will exist, and they could be trillion-dollar companies. >> Yeah. >> If they do it well, they will have a data moat. >> Yeah. >> They will have an economies-of-scale moat. There are winner-take-all dynamics in those markets, at least >> within countries, within geos. >> Yeah. >> So I think the value accrues to the top. >> Yeah. >> We can't wait for that to happen. >> Would you push back? >> No. I mean, look, I've written extensively about this, eagerly waiting for what I call the blue triangle to invert. I don't
know if you've seen this, but basically this is >> all of the money in AI. >> Yeah. >> It's with one guy. >> Yeah. That's why Jensen's so happy all the time, as you know. >> Yeah. >> Um, >> these guys are fighting for >> dollars. There's like no money there. There's very little money here. I mean, people are making some money here, so we'll see. But that's the bet. The bet is that this thing will look more sustainable. >> Yeah, it will go that way. I mean, all value in Silicon Valley, in tech, moves up the stack all the time. >> Yeah. Look at the greatest companies: the company that created the PC, IBM, had the greatest market cap, and all the value accrued there. But then that became commoditized, and it became the software on top of it — the operating systems, the Microsofts of the world, and so on. Then, here at Stanford actually, a while back — it was like 20 years ago — VMware: how do you virtualize that software? And that became commoditized too. So it keeps moving up the stack all the time. >> That's how it's going to be here too. >> 100%. And one of the big forces commoditizing this — you've
spoken about this — is open source. >> Open source is getting pretty good. >> Yep. >> This blue line is open source. >> The gap is closing. This was like, what, three, four months. >> Yeah. >> This gap is now like a month. >> Yeah. >> And still people are spending so much money on these frontier models. >> Yeah. >> People cannot wait to get their hands on 4.7 from Claude or 5.5 from GPT. But >> then there's this whole economy of very good open-source models. What do you make of all this? On one side you've got people earning what, $30 billion now — or maybe $40 billion — at Anthropic, but on the other side this open-source stuff is nearly free. Obviously you've got to pay for the hosting. >> Yeah. >> How do you think this shakes out? Will that proprietary model layer accrue any value? >> No, I think it's going to be valuable, >> and I think people will want it, whether it's open source or not. Let's put that aside for a second. I think there will be token factories >> which serve this stuff up. It's just like the cloud, >> right? I think it would be foolish to say you'll all have your
own little mini data center in your living room, running your own PCs with GPU cards you bought at home, doing this yourselves — >> or on your phone, or MacBook, or the edge. >> Some of that will exist — it will come to the edges — but I do think there'll be big >> centralized data centers where this happens. >> But we haven't discussed: are they running open-source models or proprietary models? And here's a fun fact. So Moonshot, the Chinese company, released Kimi >> — very good model — >> 2.6. Two days ago. Tuesday. >> Yeah, Tuesday. So two days ago they released 2.6; in January they released 2.5. >> And here's a fun fact: 2.6, which they released on Tuesday, would have been the best model ever produced in the history of mankind — frontier or non-frontier — if it had just been released in January. >> Yeah. >> But open source will be here. >> Yeah. And it will apply pricing pressure. And this business of frontier models — >> that core business of providing frontier models is going to be an economies-of-scale
game. >> And you will have to do it at small margins. It's like the Amazon.com book-selling business. >> That's what it's going to look like in the future. Therefore, there are not going to be that many people doing it. >> Yeah. >> It's just like an Amazon.com. Gross margins are going to be tiny and operating margins are going to be small. That's my >> Yeah. >> take. >> Yeah, I think so too. Three rapid-fire questions before we wrap. >> Your favorite AI product that you use every day? >> I don't know, that's a tough one. I mean, I use all of these. >> Yeah. >> I actually like Cursor. I know everybody loves Claude Code. >> I like the diffs and how it works. So on coding I use a combo of those. >> I still kind of like it. Yeah. >> Are you still using it after Elon owns it? >> No, I stopped. Of course. Yeah. >> Because you're going to lose access to Anthropic and OpenAI tokens through Cursor, I presume. >> Yeah. Yeah. >> Good, good — good supporter. >> Um, >> the truth is, I do use Databricks
Genie products — this is the truth — >> because most of my >> decisions inside Databricks are numerical and quantitative in nature: should we do this? What's the ROI on this? What's it going to cost us? So I need something that can understand numerical data and time-series data, and Genie is really good for that. So that's what I honestly go to >> quite often. >> Right. >> The future for Databricks — you've been at this for 15 years or so. What is your vision for the next decade for Databricks? >> Well, I think the cost of software is going down. >> Yeah. >> And so barriers to entry and switching costs are going down. So there is a SaaS apocalypse of sorts — but not all software is going to be dead. >> Yeah. We would love to partake in that and >> right >> kill some software. >> Right. >> Any advice for students in the room who are about to make career decisions? >> Yeah — don't be worried about the fear-mongering, don't be stressed out, take it easy. I think
that — I was very stressed doing my PhD in the early 2000s. I thought the world was ending with the internet and everything, >> and I was working on this most important problem that we all knew was the most important problem — the multicast problem, which none of you have heard of — >> and it turned out not to be a problem. But one interesting thing: in 2000 we had the internet; in 2009 Airbnb was started, >> right? >> Okay, but there's no reason why Airbnb had to start in 2009. I've made this argument to you: Airbnb could have started in 2001. >> There's nothing additional we needed to happen in the world. >> Airbnb could have happened and disrupted hotel businesses in 2001. Yet it took nine years for someone to have that idea, >> right? >> And that was Brian. And by the way, Brian didn't sit there taking a Stanford class thinking about a case-study project. >> Brian needed, like, a bed and breakfast, right? >> He was at some conference and he's like: why is this so hard? Can't I just solve this myself? So it took nine years to come up with that
good idea. So I think good ideas are very hard to come by. Actually, I think humans are very bad at coming up with great ideas, >> right? >> We have this tunnel vision, and we focus on the wrong problems — like we did with multicast in my PhD, which was really stupid. So chill out, take a long-term perspective, and work on the things that you think will have long-term good impact. I think Jeff Bezos did it pretty well when he was an investment banker on Wall Street. He said: hey, zooming out, what's the big thing that's happening? It's the internet. >> Yeah. >> And then he said: let's just make a secular bet — there's going to be more and more internet, so it's slowly, over time, going to disrupt things. >> Then he said: okay, in the long run, purchasing can probably move more to the net — maybe not right now. And he was very modest: he started with kind of the dumbest, unsexiest thing you could possibly pick, which was a
complete commodity that looks identical, with no differentiation — which is books. >> Yeah. >> And he just started with that, >> bet on that secular trend, and every year it was more and more right. >> And now it's everything on the planet — it's the everything store. Yeah. So think long term like that, and don't be swayed by the coolest thing everybody's making lots of noise about on Twitter right now, because chances are it's probably something like multicast. >> Yeah. Awesome. Well, thank you so much for staying longer, folks. Thank you. >> Thank you. Awesome. Thank you so much.