The Reasoning Show
The Reasoning Show - Reasoning through the AI Revolution
The industry's leading AI podcast explores how leaders think through AI, technology, and transformation. Each week, founders, investors, and operators unpack the decisions behind the systems shaping modern business.
Hosts: Aaron Delp and Brian Gracely
Evaluating AI Models in 2026
Aaron and Brian review some of the latest AI model releases and discuss how they would evaluate them through the lens of an Enterprise AI Architect.
SHOW: 1003
SHOW TRANSCRIPT: The Cloudcast #1003 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST: "CLOUDCAST BASICS"
SHOW NOTES:
- Last Week in AI Podcast #234
- ArtificialAnalysis.ai
- Opus 4.6 Release
- GPT Codex 5.3 Release
- GLM-5 Release
- OpenAI Preparedness Framework
- Sam’s Tweet that 5.3 Codex hit “high” ranking for cybersecurity
- Fortune Article on 5.3 high ranking
TAKEAWAYS
- The frequency of AI model releases can lead to numbness among users.
- Evaluating AI models requires understanding their specific use cases and benchmarks.
- Enterprises must consider the compatibility and integration of new models with existing systems.
- Benchmarks are becoming more accessible but still require careful interpretation.
- The rapid pace of AI development creates challenges for enterprise adoption and integration.
- Companies need to be proactive in managing the versioning of AI models.
- The industry may need to establish clearer standards for evaluating AI performance.
- Efficiency and cost-effectiveness are becoming critical metrics for AI adoption.
- The timing of model releases can impact their market reception and user adoption.
- Businesses must adapt to the fast-paced changes in AI technology to remain competitive.
FEEDBACK?
- Email: show @ reasoning dot show
- Bluesky: @reasoningshow.bsky.social
- Twitter/X: @ReasoningShow
- Instagram: @reasoningshow
- TikTok: @reasoningshow
The Cloudcast (00:01.322)
Three, two, one. Good morning, good evening, wherever you are. And welcome back to the Cloudcast. We are coming to you live from the massive Cloudcast studios here in Raleigh, North Carolina. Aaron, good to see you, man. Good to see you back in the States. How have you been?
Aaron Delp (00:13.612)
I'm good. I'm good. Yeah, we hadn't talked in a little while. It's good to see you.
The Cloudcast (00:17.17)
I know, I know. I think tonight we're gonna do kind of the equivalent of a show that you and I do, which is like when you and I decide to get together and go have a beer or get some breakfast together. And we just, we have some topics that we sort of have some insight on, some opinions on, but probably as many questions as we have answers for stuff. And we're going to try and figure some stuff out. So what have you brought to the Waffle House or to the IHOP today?
Aaron Delp (00:40.75)
Yeah, think of this as the thinking out loud podcast. And yeah, this is a perfect analogy or a perfect way to put it. It's like, whenever we get together, it's like, hey, have you heard of this? Or what do you think about this? I don't know that we have any answers today. I mean, we might have some. We might have some opinions. But anyway, this is what's been kind of kicking around in my head. So there's another podcast that's out there,
the Last Week in AI podcast. I really like that podcast, by the way. You know, go consider giving them a subscribe. And they were talking about all of the model releases that have happened recently. And I don't know, I kind of want your feedback on this. I feel like the model releases, I've just kind of gotten a little, I don't know, almost numb to them. I mean, I knew some happened, and
then there's all this like, this is the biggest release ever, but every release is the biggest release ever. And all the numbers just keep getting bigger. And then all of them release all at the same time. And some people say they do that intentionally, so no one, you know, one-ups each other. I don't know. I was just kind of like, I had a bunch of thoughts about models and model releases and why should people care, quite frankly.
The Cloudcast (01:54.206)
Yeah, no, I think it's a good question. The more I was thinking about it, like, let me just look really quick at all the ones that we have sort of listed, just so people know: Anthropic Opus 4.6 came out, GPT Codex 5.3 from OpenAI, GLM-5, which is more of an open source one. I mean, there were several that came out that were, you know, considered, whether they're considered frontier or close to frontier.
I mean, they're coming from very high level, well-funded labs. I think the thing that I started thinking about, as you threw these out there, and we can go through, there's a couple of other good links in the show notes that talk about kind of how they get evaluated, how they get ranked and so forth. I thought what might be sort of valuable to the audience is to, instead of us sort of saying like, hey, well, I've heard this, or do we think...
Do we think these are commodities now? Is there a moat? All that kind of stuff, which we can dig into. I thought it might be useful to kind of look at this from the perspective of if I was an end customer, if I was a business user, and we were trying to bring some AI into our business for whatever we're doing, whether it's software development or analyzing systems or doing research or whatever it might be, how do we go about looking at this? Because obviously,
running a business, you're not in the business of just immediately swapping to the next version of new software, whatever it is, whether it's an AI model or a database or a version of Kubernetes or whatever the heck it is. So I thought it might be interesting to sort of think through like, okay, given the pace of this stuff, how would we even go about thinking about dealing with it or evaluating it or making sure it's compatible with
the latest version of an application that I built, all the stuff that, to a certain extent, this stuff is not new in the idea of, hey, there is a software cadence that's now much faster than it was before. But I do think there are ramifications of this that would be worth digging into. How does that sound for framing? So here's my first question. So every time these models come out, they
Aaron Delp (04:02.561)
Yeah, yeah, I love it. Love it. Sounds good.
The Cloudcast (04:12.23)
They tell you a bunch of things. They tell you everything from what it was built upon. You get these 4B, 5B, 10B numbers, billions of parameters. They talk about the different types of tasks they're typically good at. So it could be coding, it could be research, and now we're seeing some stuff around work and office tasks and all that sort of stuff.
Like, what's the, do you have a first set of things that you sort of look for to get some sense of, like, okay, not just is this big, but, you know, how do I frame it versus other stuff? Like, what are the first couple of things that you tend to look for, you know, when something new comes out?
Aaron Delp (04:58.315)
Yeah, so the biggest thing I look for personally is, you know, and I think I've talked about this on the show before, but I kind of see AI as two very distinct and separating markets over time. There's, you know, the consumer/prosumer aspect of it, of like, hey, a new model came out today. Is it in my dashboard and can I immediately start using it right now? Right. Just immediate consumption of it.
And then there is the probably more, I'm going to say, specialized enterprise use case of like, a model came out, but I can't use it off the shelf. Like, I've got to fine-tune it. I've got to train it. I have to put it in my RAG pipeline. So there's going to be this, can you use it day one? And then there is this, does it have to go through, you know, like I almost see an enterprise IT go back to old school app days of like
The Cloudcast (05:46.986)
Okay, yeah.
Aaron Delp (05:56.162)
Hey, the new version of, I don't know, make up an app, Salesforce or SAP came out and the IT department has to test it and certify it and roll it out to you and you don't see it for another two or three months.
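To make that certify-then-roll-out gating concrete: a minimal sketch, assuming an OpenAI-compatible API (the client calls are real, but the model IDs are only illustrative), of pinning an exact model snapshot so a new release doesn't silently change behavior before IT has re-certified it.

```python
# Minimal sketch, assuming an OpenAI-compatible API: pin an exact model
# snapshot rather than a floating alias, so an upstream release doesn't
# change behavior until it has been re-certified. Model IDs are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ALIAS_MODEL = "gpt-4o"              # floating: provider can remap it any time
PINNED_MODEL = "gpt-4o-2024-08-06"  # pinned: behavior stays fixed

def ask(prompt: str, model: str = PINNED_MODEL) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("Summarize our returns policy in one sentence."))
```

The design point is simply that the alias is the consumer experience (whatever is in the drop-down today) while the pinned snapshot is the enterprise one.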
The Cloudcast (06:10.666)
Right. Yeah. And I think the way that's going to sort of manifest itself, if you're thinking about this, if you are, like you said, sort of a consumer, you know, you pull up Claude Code or something like that, the immediateness is going to literally be that little drop-down menu that says, like, you know, this version, or fast version, or research version, or whatever it is. Like, they've sort of gone through and done the immediacy for you.
I think on the enterprise side of things, what that's probably going to manifest itself into is either, if your company has some sort of restriction on it, your drop-down menu might be different than the consumer one. Or, if you're doing something like model as a service, so maybe you're building an application that's going to talk to an API that talks to some model, you're probably going to get some sort of notification that, like, a new model is available. And you're going to look in your
Model as a Service portal. But the assumption there is that your company has probably done a certain amount of secondary analysis, whether it's putting some guardrails around it or, like you said, fine-tuning it with certain stuff, attaching it to certain RAG vector databases. Those things are probably going to have to take place before you're going to get that immediately on the enterprise side, just because
there's always the possibility of, like, what was this trained on? Do I make sure that if this is a customer-facing application, they can't either overuse it or ask it to sell you a car for a penny? All that kind of stuff that we've seen start to come along. Okay, so that's the first thing. The second thing that always jumps out at me is there's always a list of benchmarks. And the benchmarks
feel like they're getting, so a couple things. Number one, there's always a whole bunch of benchmarks. It's not even like, I bought a new computer, what do I think about? Well, I think about, you know, CPU speed, like, how fast is it? And how much memory does it have? Right? Those are always the two. This is maybe, like, 10 different benchmarks. And you sort of look at them. And I think they're getting
Aaron Delp (08:14.208)
Here's the one test.
The Cloudcast (08:33.918)
I think the industry is getting better at this, because before, the benchmarks were very much listed as if they were, like, math theorems or something. Like, you looked at them and they weren't in any way obvious unless you absolutely lived in the model building world. And now they feel like they're getting a little more, you know, like, normies can kind of understand them. They'll say something along the lines of, like, you know, this is for coding. This is for, like,
deep research, this is for, I don't know, whatever. So you're getting a little bit of sense of that. But it's still kind of weird to kind of be like, OK, I guess the first question about benchmarks is, if you were looking at them, do you trust the ones from the vendors themselves? Or do you expect some third party to do that? And if it's a third party, how does that work?
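A middle path between trusting vendor numbers and waiting on a third party is a small in-house spot check against your own tasks. Here is a minimal sketch, again assuming an OpenAI-compatible API; the two test cases and the naive substring grading are purely illustrative.

```python
# Minimal in-house spot check, assuming an OpenAI-compatible API: run a
# small, task-specific test set through candidate models and compare pass
# rates. The cases and the naive substring grader are purely illustrative.
from openai import OpenAI

client = OpenAI()

CASES = [  # (prompt, substring a correct answer must contain)
    ("What is the capital of France? Answer in one word.", "paris"),
    ("What is 17 * 3? Answer with the number only.", "51"),
]

def pass_rate(model: str) -> float:
    hits = 0
    for prompt, expected in CASES:
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        if expected in resp.choices[0].message.content.lower():
            hits += 1
    return hits / len(CASES)

for model in ("gpt-4o-2024-08-06", "gpt-4o-mini"):  # illustrative IDs
    print(f"{model}: {pass_rate(model):.0%} on our tasks")
```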
Aaron Delp (09:28.576)
Yeah, yeah. So first of all, I'll do a shout out: ArtificialAnalysis.ai. Really, really good site. I mean, there's them and there's LLM Arena. LLM Arena just rebranded themselves into, like, Arena.ai or something like that within the last week or so. So those were the two I was aware of, of like, you know, basically companies can go out and submit standards to them, or, you know,
The Cloudcast (09:46.13)
Okay. Yep.
Aaron Delp (09:58.455)
they're trying to be somewhat of an independent benchmarking kind of firm. But here's the thing. How much does that really matter anymore? And what I mean is, like, unless, to your point about normies, unless you have this one benchmark you really, really care about, like, when a lot of the models come out, it's like, we've ranked higher than everyone else on 10 different benchmarks. And then it lasts for, like, three days.
The Cloudcast (10:28.35)
Yeah. Yeah. Right.
Aaron Delp (10:28.702)
until the next model comes out and it scores the highest. And so you can't even necessarily... I mean, it's such a snapshot point in time. And even like that GLM-5 release that I put in the notes, it was so funny because...
They did all their benchmarking, but they did it against Opus 4.5. And then they released their stuff, but they released it like the day after Opus 4.6 came out, so they had to release their benchmarks on 4.5, and everyone was like, you're not on 4.6, and it's like, it's been 24 hours, people.
The Cloudcast (11:03.602)
Yeah, yeah. It's a little bit, I feel like it's a little bit like when you look at, and again, this is probably a terrible analogy, but I think people could relate to it. Like if you were trying to make an investment and you just said, look, I'm not really a professional investor, so I'm going to invest in, like, a mutual fund, right? There are organizations like Morningstar that will evaluate, you know,
mutual funds. Essentially what they're looking at is, like, you know, what's in the fund, which is a bunch of stocks or bonds or something. How risky is it? And then they tell you some basic things that you can kind of understand, like, this is about the fees that they're going to charge you. And I feel like it's a similar type of thing, but there's still another level of simplicity that's sort of needed there. Because, I mean, the other part of it is like,
yes, this one came out and it does 85.3% in this benchmark and it's better than the other vendor one or the other open one that was 81.7. And it's sort of like, well, okay, if I just compare those two numbers, A obviously looks better than B, but you almost want somebody or some other way to go, yeah, but if this one is 14% bigger, what does that mean in terms of, like,
GPU sizing or overall cost? Or is there some sort of context window thing here, is there a tunable feature there, that can make this work? So, yeah.
Aaron Delp (12:35.563)
Yeah.
Aaron Delp (12:40.627)
And also to compare it against the standard of, like, hey, maybe it benchmarks at 84, but the task I'm doing is a 70. And I could pick any of them. And to your point, like, I can pick the most efficient model. And so I don't feel like we're there yet as an industry. All we're doing is min-maxing against these benchmarks, of, like, up and to the right, up and to the right. Every model is more powerful. Every model is better. But at the same time, is it?
The Cloudcast (12:49.17)
Right, right.
Aaron Delp (13:09.875)
I really like your point of, like, is there an efficiency standpoint? Is there something else I need to be considering? Is what I have good enough? You know, like, at some point we've got to stop chasing.
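Aaron's "good enough" point can be made concrete: rather than taking the top leaderboard score, pick the cheapest model that clears the bar your task actually needs. A minimal sketch; every model name and number here is invented for illustration.

```python
# Minimal sketch of "good enough" selection: pick the cheapest model that
# clears the quality bar the task actually needs, instead of the top
# leaderboard score. All names and numbers are invented for illustration.
candidates = [
    # (name, score on our task eval, USD per 1M tokens)
    ("frontier-large", 85.3, 15.00),
    ("frontier-small", 81.7, 3.00),
    ("open-weights", 78.0, 0.60),
]

REQUIRED_SCORE = 70.0  # what this workload needs, per our own evals

good_enough = [c for c in candidates if c[1] >= REQUIRED_SCORE]
name, score, cost = min(good_enough, key=lambda c: c[2])
print(f"Pick {name}: scores {score} (bar: {REQUIRED_SCORE}) at ${cost}/1M tokens")
```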
The Cloudcast (13:22.762)
Well, yeah, or at least understand what value the chasing does for us, right? So you talked about ArtificialAnalysis.ai, LLM Arena. I mean, that's what Hugging Face does as well, to a certain extent. I can never quite understand what Hugging Face does, but I know they at least publish a bunch of these benchmarks. It's interesting that the industry is so okay with this stuff, because the idea of, like,
Aaron Delp (13:28.617)
Yeah. Yeah, absolutely.
The Cloudcast (13:51.092)
published benchmarks has always been a little bit taboo, you know, at least in the IT industry. Because it's like, did I build, there's a term that people may or may not have heard called "lab queen," which is, I built this environment and I tuned my environment or my situation to be perfect for that. Like, it's like, you know, drag racing a car, you only go straight as fast as you can. And then you compare it to something else that you just leave the defaults on. And it's like, look at how much better I did than the defaults.
Aaron Delp (13:57.407)
Yes. Yeah.
The Cloudcast (14:20.522)
So.
Aaron Delp (14:21.043)
Yeah, yeah. I mean, quick, quick side story. I don't know if I ever told this on the podcast. When I was at IBM, one of my first jobs out of college, I actually got to talk to an IBM distinguished engineer, and he was the head of benchmarks for IBM hardware. So, you know, in the height of benchmarking days, he was like the dude. And the tricks they used to do to make the hard drives faster, like...
The Cloudcast (14:40.852)
Right. Yeah.
Aaron Delp (14:50.879)
They would factor in the rotational speeds and then only put the data on the first round of the cylinders and never put any data on the edge, because you'd go faster. Just crazy, crazy shit. I almost feel like the AI industry as a whole is a little bit in that lab queen phase. And there's even thoughts of, like, I actually just saw it with some of the...
The Cloudcast (14:56.564)
That's right.
Aaron Delp (15:17.479)
Opus 4.6 and 5.3 releases, there were some questions of, like, have the models started to realize they're being benchmarked? And are they starting to adjust their behavior? And if so, ain't shit you can do about it, because it's in the model.
The Cloudcast (15:25.31)
Yeah.
The Cloudcast (15:30.76)
Right. Right. Right. Yeah, I mean, I guess the third thing, so we have sort of, you know, how fast can you use it? The second is, like, what am I looking for in terms of, you know, whether it's a benchmark or something. I think the next thing that jumps to mind is timing of this, right? And in sort of two ways. One, I think you mentioned it earlier, like, it feels like we get a lot of them at the same time, which,
I gotta imagine, to a certain extent, again, a lot of these researchers all came out of the same places, right? They came out of OpenAI or they came out of Google Brain or Google DeepMind, or they came out of the same academic universities. And so they're all kind of probably back-channeling with each other, like, hey, how's it going? They're looking at reports and all sorts, not that there's collusion going on, but again, it's a small community of people they sort of know. But it does make me wonder,
you know, does it do the industry any good that they're all in sync like this? Or do companies eventually start figuring out, like, it doesn't do me any good to always release on the February 17 date. I want to own, like, the summer, or I want to own the fall, when people are starting to plan their budget for the next year. Or, I mean, right now, they're just literally in a constant drag race with each other of, like, which one's fastest and so forth. But
I do wonder at some point, will they start figuring out the timing as they get beyond just the race to headlines or the race to funding or something? Because right now it doesn't feel like their timing has any rhyme or reason to it, or at least it's not necessarily like end user advantageous. It's just simply the game they're playing within their own game.
Aaron Delp (17:21.087)
Yeah, yeah, no, it definitely feels that way.
I like your idea there, of, like, it's just a race to release. Versus, I mean, again, we're kind of comparing it. I guess maybe this is a little bit of old man yells at cloud here, but I'm comparing it against my background of, like, hey, you kind of do releases around trade shows. You kind of do releases around your event, or you kind of build these moments in the industry where you kind of want to get a bunch of buzz or a bunch of press. But...
this is, man, just go as fast as you can and release it the day it's done. You kick it out the door, because it'll be, you know, obsolete in a couple weeks kind of thing.
The Cloudcast (18:00.234)
Well, yeah, and I gotta imagine it drives some behavior that they don't necessarily want, but that happens automatically. I mean, if you're sitting there and you're writing code and you're like, oh, GPT Codex 5.3 came out and Opus 4.6 came out, I mean, the first thing you're going to do is take some project and dump it into both of them, just to be like, all right, let's see what came out of those things. Like, you're going to get these immediate sort of switching situations, because they both came out at the exact same time.
You know, it's like going to a pie shop and there's like 10 pies. You're like, I don't know, I'll just take a bite of each one of those. Like, if you don't create any scarcity, you create sort of a push to want to switch to things just out of curiosity. Although the flip side of that, if I put my enterprise hat on, if I was an enterprise architect, this feels like a very untenable, unsustainable situation, right?
The Cloudcast (18:58.774)
We've talked in the past about various non-SaaS things. So if all this stuff that you're consuming is SaaS, it's not that big of a deal, because you don't have to maintain it. You don't have to patch it and all this sort of stuff. But if this is something that you're building internally, or it somehow interacts with something, I imagine this sort of goes to the idea of, like,
if you didn't get cloud right, and you didn't get CI/CD right, and you didn't get dynamic, frequent releases right, AI is gonna be a nightmare for you, because it's already going really fast. And yeah, at some point a couple of years from now, some of these companies will flame out and they won't be around. But I mean, there's still gonna be enough of them around that they're gonna be chasing each other.
Aaron Delp (19:34.654)
Yeah. Yeah.
The Cloudcast (19:50.794)
This feels like a time when, even if you're still experimenting or doing POCs, you better be figuring out, like, how are we going to be doing versioning? How are we going to handle incompatibility between the application and the model? Is it just going to be like, hey, we're all writing to the OpenAI API? You're going to have to be dealing with, like, MCP versions. You're going to be having to deal with other stuff's versions. So if you're not getting ahead of that right now...
The Cloudcast (20:17.034)
Like, if you weren't good at fast-moving software before, you better figure out a way to make those investments, whether they're around people or process or whatever, because this stuff is even faster. And, you know, because it's not deterministic, it's not a database, right? Like, you're going to have to figure out guardrails and you're going to have to figure out tuning. It won't just be like, cool, the new version came out and then we're going to ship it a week later. You're going to have to do a bunch of work, you know, that you may not expect to do.
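One way to get ahead of that is to make the certification step explicit in code: keep a map of which model snapshots each application release has actually been tested against, and reject anything outside it. A minimal sketch; the app names and model IDs are hypothetical.

```python
# Minimal sketch of an explicit certification gate: track which model
# snapshots each app release has passed regression evals against, and
# reject anything uncertified. App names and model IDs are hypothetical.
CERTIFIED: dict[str, set[str]] = {
    "billing-bot-2.4": {"gpt-4o-2024-08-06"},
    "billing-bot-2.5": {"gpt-4o-2024-08-06", "gpt-4o-2024-11-20"},
}

def resolve_model(app_release: str, requested: str) -> str:
    allowed = CERTIFIED.get(app_release, set())
    if requested not in allowed:
        raise ValueError(
            f"{requested!r} is not certified for {app_release}; "
            f"certified: {sorted(allowed)}"
        )
    return requested

print(resolve_model("billing-bot-2.5", "gpt-4o-2024-11-20"))
```

The same table is where guardrail configs and eval results would hang off of, so "a new model came out" becomes a pull request rather than a surprise.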
Aaron Delp (20:29.993)
Yeah.
Aaron Delp (20:47.038)
Well, and I'll give you listeners out there another quick, you know, for-instance story on that. I have a good friend at a reseller that I was talking to a couple of weeks ago, and we were just kind of talking about AI and enterprise adoption of AI in general. And he said, yeah, the biggest problem we have right now is just the velocity of everything: velocity of the GPUs and how fast the GPUs come out, velocity of the models.
He's like, yeah, I remember when Kubernetes, you just did, like, what was it, every three months, four months? Yeah, three months. And he's like, remember what a nightmare that was? You know, basically the enterprise was like, hey, you need to slow down a little bit. He's like, yeah, now it's even worse, between all the models and everything going on. And he said the hardest problem with closing a deal right now is...
The Cloudcast (21:20.404)
Three months. Yeah. Yeah.
The Cloudcast (21:31.987)
Yeah.
Aaron Delp (21:41.053)
you can't get through the negotiation process and the planning and the system integration and all this other stuff before everything changes. And then it's basically like, well, that stuff's old now. And so just closing, you know, on the vendor side of the house, just getting folks to make a commitment, it's almost like a constant analysis paralysis.
The Cloudcast (22:04.136)
Well, yeah, I mean, I could recall some more stories as well about, you know, like, hey, this is how long the testing cycle took for this version, and this is the scope of it. And then you were like, well, but a new version is going to come out before we get done with that thing. And you've, you know, you've written a contract with the people you're working with, or whatever scope, to be tied to a specific version of something. That becomes an absolute nightmare. Like, you are in a loop that can never complete itself
and then deploy and then go make money or generate revenue or reduce cost or whatever that thing is. So yeah, that's another thing to sort of be careful of as you're figuring out who you're working with or how you're working with stuff. If you've got any sort of dependencies, whether they are sign-off dependencies, contract dependencies, whatever, be careful about tying them to a specific release or a specific version. Because, you know,
you can get yourself into situations in which you want some flexibility there that contracts don't necessarily give you very much of. All right. So we covered, I think, a bunch of things. I think the thing that we're sort of coming back to, the kind of open-ended question, is, I think we keep asking ourselves, do these models feel like they have any sort of moats to them? And I feel like there's a part of the industry that's sort of like,
Aaron Delp (23:09.385)
Yeah, agreed.
The Cloudcast (23:30.812)
nope, you know, like, the open ones are getting almost as good as some of the closed ones. The closed ones, you know, there's three or four options out there, so people make it fairly easy to switch. And I can buy into that, that the moats are going to get smaller and smaller over time. They're all training on the same sort of data. But it does feel like there are certain models that are pretty good at certain tasks. You know, we can just use, like,
Claude as an example, right? We went from talking about all sorts of IDEs and different plugins to sort of, people are kind of coming around to being like, Claude kind of feels like the de facto now. It's sort of taken over from Copilot. You don't hear people talking about Cursor even all that much. So I mean, I do feel like there are going to be certain focused moats. It'll be interesting to see, though, if there are kind of anything-for-anybody moats or if that kind of...
wanes and kind of comes and goes over time. I think that's still a really, really open-ended question, even with all these models coming out all the time. But yeah, I think that. I mean, I think if I was sort of making a mental checklist, and then we can wrap it up, I think I would be looking for, because again, to a certain extent, we treat these models as if they're just
Aaron Delp (24:38.299)
Agreed. Agreed. Yeah.
The Cloudcast (24:59.218)
fungible. We're like, yeah, I bounced around from this one, I tried that one, I tried that one. And that's sort of nice. But at some point, people are going to want some characteristics of being like, do I trust this thing for a couple of years? And it's not so much a specific model, but do I trust a specific lab? Do I think they're going to be around? Are they well-funded? Are they maintaining their core engineers? Have they been poached recently? So I think there's a stability thing that people are going to have to figure out. I think there's a
benchmark thing that people are gonna have to figure out, that is both simpler than it is today and also has sort of a financial characteristic to it. It's, again, what can I avoid paying for that I don't have to pay for? Or how can I get good enough on the hardware that I have? I think that's gonna be important. And then I think, kind of going through the other stuff in the checklist that we had, it's gonna be, how adaptable can I...
It's not really on the model side. It's like, how adaptable can I be to this fast-paced change of stuff? Because again, just like people were like, hey, Kubernetes people, can you go slower? And it was like, nope, we're not going to do that. The users had to adapt to the speed of the changes of stuff. And I think that's going to be a big thing to watch over the next couple of years. Because those are not muscles that you just develop overnight. Those are things that either you
developed previously with cloud and got pretty good at, or you're going to have to start figuring out pretty quickly. So that's my sort of mental checklist, I think.
Aaron Delp (26:30.257)
Yeah, yeah. No, agreed. And I was just going to add one more to it. I mean, you kind of touched on it with the benchmarking thing, but I think there really does need to be, at some point, this idea of the efficiency versus power thing. You know, at what point do they get good enough and you're going to stop chasing the latest things? And, you know, is it, I don't know,
The Cloudcast (26:49.277)
Yeah.
Aaron Delp (26:57.961)
cost per token? Is it power per token? You know, what ends up being some of the metrics, right? Your equivalent of, like, the speed and the memory capacity comment earlier, right? Like, what's gonna end up being the gold standards of sizing? You know, at the end of the day, this is enterprise sizing.
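The sizing math Aaron is gesturing at becomes comparable once everything is normalized per token. A minimal sketch; all of the cost, power, and throughput figures are invented for illustration.

```python
# Minimal sketch of per-token sizing math: normalize candidate models onto
# cost-per-token and energy-per-token axes so "good enough" can be compared
# on something other than leaderboard rank. All figures are invented.
models = {
    # name: (USD per 1M output tokens, watts drawn, tokens generated/sec)
    "frontier-large": (15.00, 5600.0, 40.0),
    "open-weights": (0.60, 700.0, 55.0),
}

for name, (usd_per_m, watts, tok_per_s) in models.items():
    usd_per_token = usd_per_m / 1_000_000
    joules_per_token = watts / tok_per_s  # W / (tok/s) = joules per token
    print(f"{name}: ${usd_per_token:.6f}/token, {joules_per_token:.1f} J/token")
```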
The Cloudcast (27:14.386)
Right. Well, and if you're doing this in any sort of, even your own data center or even a colo or something, you know, something that doesn't seem unlimited, you need to know that stuff a few years in advance, because you can't run out of it and then just call something up. I mean, it's a step function to get to that next level of capacity. And so, yeah, people are going to have to start planning. It's going to be a weird thing to do, because it's going to feel like it's a couple-year planning cycle. And then you've got this hardware that,
you know, sort of depreciates faster than normal hardware. And it's just a whole different level of, it's a different type of calculus to start working on, that I don't think people are totally ready for. Or at least we haven't made it simple enough for everybody to kind of grok the big picture of it. Maybe that's something we need to keep digging into, make it simpler for people. So, all right, folks, with that, we're going to wrap it up. Hopefully you got a little bit of insight into how Aaron and I work our way through some topics that we
have an inkling of information on. We try and kind of ping-pong off each other and then go figure out where we've got gaps, we can go learn some stuff, and those become new shows. So appreciate you going along on the ride with us. And we probably got a lot more of these coming up over the next months and years ahead. So with that, we'll wrap it up. Thank you all for listening. Thanks for telling a friend, and we will talk to you next week.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Software Defined Talk
Software Defined Talk LLC
Dithering Preview
Ben Thompson and John Gruber
Everyday AI Podcast – An AI and ChatGPT Podcast
Everyday AI
Prof G Markets
Vox Media Podcast Network
Acquired
Ben Gilbert and David Rosenthal
Decoder with Nilay Patel
The Verge
theCUBE
SiliconANGLE Media