The Deepdive
Join Allen and Ida as they dive deep into the world of tech, unpacking the latest trends, innovations, and disruptions in an engaging, thought-provoking conversation. Whether you’re a tech enthusiast or just curious about how technology shapes our world, The Deepdive is your go-to podcast for insightful analysis and passionate discussion.
Tune in for fresh perspectives, dynamic debates, and the tech talk you didn’t know you needed!
Orbit Edge: Building AGI Off-World
Start with a number that doesn’t feel real: $40 billion aimed at building enough compute to chase AGI on a 2026 timeline. Now ask a simple question—where do you put a million H100-class GPUs when the grid is straining, cooling is expensive, and latency kills real-time AI? We take you above the clouds to explore orbital edge computing, where satellites stop acting like dumb mirrors and start thinking for themselves.
We walk through the shift from bent pipe architectures to on-orbit inference, showing how smart satellites can delete useless data, trigger real-time alerts, and deliver answers faster than ground clouds. Low Earth orbit provides the latency profile that real-time applications need, while laser intersatellite links unlock bandwidth 10 to 100 times higher than radio and even beat undersea fiber on some routes. With optical terminals becoming a standard and constellations scaling into the hundreds, space turns into a high-speed backbone for global AI.
From COTS accelerators adapted for radiation to redundancy that shrugs off single event upsets and latch-up, we dig into what it takes to compute in vacuum. Then we connect the dots: real-time ground services that feel instant, federated learning that trains in orbit, and a plausible path to terawatt-scale compute that Earth simply can’t host. Along the way, we confront the hardest challenge—the software that schedules, routes, and heals a moving, laser-linked data center circling the planet.
If AI truly needs more power, less latency, and a global footprint, space may be the only address left. Tune in, think bigger, and decide for yourself whether this is hype or the next logical step for intelligence. If you enjoyed the conversation, follow the show, share it with a friend, and leave a quick review to help others find it.
We're kicking off this deep dive with a number that is so big, it's almost not a number. It's, um, it's more like a statement of intent. $40 billion.
Ida:Yeah, that's the approximate war chest Elon Musk has put together for xAI. And a huge chunk of that, $20 billion, comes from a very recent, very aggressive funding round.
Allan:And that money is not for, you know, nice office chairs or fancy coffee. It's buying one thing, raw processing power.
Ida:On a scale that honestly kind of breaks our existing infrastructure. The whole point is to build a GPU cluster, they're calling it Colossus, that will have more than one million H100 equivalents.
Allan:A million. And the timeline, I mean, this is the wildest part. Musk has basically thrown down the gauntlet and said artificial general intelligence, AGI, arrives in 2026.
Ida:I know, 2026. It's just around the corner, which leads to a very practical problem.
Allan:Okay, right. Let's unpack that. If you're building the biggest, most power-hungry computer humanity has ever conceived, where do you physically put it? Our power grids are already straining.
Ida:And you need cooling, and every single millisecond of delay matters. And that's our mission for this deep dive: to follow the compute away from Earth, away from the cloud, and straight into orbit.
Allan:We're talking about a paradigm shift called orbital edge computing, or OEC.
Ida:This is where the uh insane thirst of AI for more power crashes right into the physical limits of our planet. For decades, satellites were pretty dumb. They used what's called a bent pipe architecture.
Allan:Bent pipe. Okay, what does that mean?
Ida:Think of a remote sensing satellite. It takes a huge high-res picture of, say, a weather system. The satellite itself does zero thinking. It just acts like a bent pipe in the sky, collecting all that raw data and blasting the whole fire hose down to a ground station.
Allan:Which sounds incredibly inefficient. You're flooding the ground with petabytes of data, and I bet 90% of it is useless, like pictures of clouds when you wanted to see the ground.
Ida:Exactly. You get this massive communication bottleneck and you're burning energy and time transmitting data, you're just going to delete anyway.
Allan:So OEC flips that script.
Ida:It completely flips it. It pushes the actual computation, the thinking, up to the satellite itself. So instead of beaming down terabytes of raw images, the satellite analyzes them right there in orbit.
Allan:It can do real-time disaster monitoring or just figure out which images are cloudy and throw them away on the spot. You're turning the satellite from a dumb camera into a smart edge server.
Ida:And the performance jump is just, it's dramatic. The sources we looked at showed that for some tasks, the processing time in orbit was almost three times lower than doing it on the ground.
Allan:Three times faster. So you get your answer quicker and you pay way less to get it.
Ida:And for real-time AI, that speed is everything. Which brings us to where in space you have to do this. You can't just stick a supercomputer anywhere. It's all about low latency.
Allan:Right. And that means you have to be in low Earth orbit EO. We're talking altitudes between, what, 300 and 1500 kilometers up?
Ida:That's the sweet spot. If you look at the old geostationary satellites, the GEOs, they're way up at 35,000 kilometers.
Allan:Great for parking a satellite over one spot, like for TV broadcasts.
Ida:But the time it takes for a signal to go up and back is about 270 milliseconds. That's almost a third of a second. You can't run a real-time application with that kind of lag.
Allan:No way. It's useless. But down in LEO, that delay just collapses. You're looking at latencies as low as five milliseconds, maybe 35 on the high end. That's the foundation for all of this.
Ida:What's really cool is how OEC is more than just, you know, putting a server in space. It's actually better than its terrestrial equivalent in some key ways.
Allan:You're talking about comparing it to mobile edge computing, or MEC, the stuff in 5G towers.
Ida:Exactly. When you put them side by side, the differences are stark. First, coverage. MEC on the ground, even with huge investment, might only cover 20% of remote areas. OEC gives you 100% global coverage by definition.
Allan:You could be on a boat in the middle of the Pacific and get edge compute power.
Ida:That's the idea. But the really critical difference, the one that network engineers obsess over, is mobility.
Allan:Okay, wait, drill down on that for me. You've got satellites screaming around the planet at 17,000 miles an hour. That sounds like the definition of chaotic, unpredictable movement.
Ida:It sounds like it, but it's not. That's the paradox. Yes, they are fast, but their mobility is highly predictable. They're in fixed orbits. We can calculate their exact position years from now.
Allan:Ah, so it's manage chaos. Unlike people on the ground with their phones who are just wandering around randomly.
Ida:Precisely. That randomness makes managing a ground network incredibly complex. With satellites, you can model the entire network topology in advance, which makes everything way more efficient.
Allan:Okay, that makes perfect sense. But but there's a problem. You can't run a global AI on radio waves. The spectrum would just you'll be gone in a second. You need a communication revolution.
Ida:And we have it. It's laser communication or uh optical intersatellite links.
Allan:OASL. LaserCom. And NASA's analogy for this is perfect. They said it's like switching from dial-up to high-speed internet.
Ida:It's not an exaggeration. We're talking 10 to 100 times the bandwidth of traditional radio frequency systems. And because it uses near infrared light, it doesn't touch the radio spectrum at all. Problem solved.
Allan:So you can move massive amounts of data between satellites instantly. But it gets better because it's not just for space. This tech can actually make the internet faster here on Earth.
Ida:Okay, yeah, get this. The sources showed an example for communication between New York and London. Using laser links in space, the round trip time is only 50 milliseconds.
Allan:50. And what is it for the undersea fiber optic cable?
Ida:Well, the absolute speed of light limit for light traveling through that glass fiber is about 55 milliseconds.
Allan:Wait a minute. You're telling me the network in space is faster than the fiber network we buried in the ocean.
Ida:Yes, because light travels faster in a vacuum than it does through glass. It's a straighter, faster path. That five millisecond difference is an eternity in things like high frequency trading.
Allan:That's mind-blowing. And what about security? A laser seems pretty secure.
Ida:Incredibly secure. The beam is extremely narrow. Think of it like a laser pointer hitting a tiny target thousands of miles away. You basically have to be physically inside the beam to intercept it, which makes it almost impossible to jam or eavesdrop on.
Allan:And this isn't just theory, right? People are using this now.
Ida:Oh yeah. Starlink is using inner satellite links extensively. But the real driver for the market right now is the U.S. Space Development Agency, the SDA.
Allan:What are they doing?
Ida:They're standardizing the hardware, the optical communications terminal, and they're planning a constellation of three to five hundred interoperable satellites, each needing three to five laser links. That kind of guaranteed order is what makes the technology cheap enough to scale for everyone.
Allan:So we have the demand from AI, we have the location in LEO, and we have the hyperfast network with LaserCom. Let's talk about the actual computers, the smart satellites.
Ida:This is where we see commercial off-the-shelf hardware, C OTS, being adapted for space. And it's already solving that bent pipe problem we talked about.
Allan:Give me an example.
Ida:The European Space Agency satellite, SAT 1, it carried a tiny little chip, a Movidious VPU. It used less than one watt of power to deliver about two TOPS of processing.
Allan:Two TOPS. That's not a lot, but what could it do with it?
Ida:It did one thing perfectly. It ran a neural network that detected cloud cover. If an image was too cloudy, the satellite just deleted it on the spot. It didn't waste time or money sending useless data back to Earth.
Allan:That is actually genius. It's the ultimate data filter right at the source. And we're seeing more powerful stuff go up now, too. Definitely.
Ida:China's Tianzi No. The compute density is just getting wild.
Allan:But space is a tough landlord. You can't just send your laptop up there. The radiation alone must be a nightmare.
Ida:It's the biggest challenge. Powerful radiation, wild temperature swings, limited power. Let's focus on the radiation and something called single event effects, or C.
Allan:Single event effect. Sounds like a polite way of saying a cosmic ray just fried your chip.
Ida:That is exactly what it is. A charged particle slams into the semiconductor and it can cause two kinds of failures.
Allan:Okay, let's start with the less scary one, the soft failure.
Ida:That's a single event upset or SEU. A particle hits a memory cell and flips a bit. A one becomes a zero. It can cause a temporary glitch, maybe some corrupt data, but the system can usually correct it and move on.
Allan:And the hard failure, the one that ends the mission.
Ida:That's the single event latch-up, or SEL. This is catastrophic. The particle doesn't just flip a bit, it triggers a runaway current. It creates a short circuit inside the chip.
Allan:Wait, a short circuit? What does that actually do to the satellite?
Ida:It can literally melt the component. It heats up uncontrollably. The only way to stop it is to cut all power to that part of the satellite, basically a full reboot, to clear the short. If you don't catch it instantly, the mission is over.
Allan:So you have to build in redundancy. There's no other way.
Ida:Redundancy is everything. Starlink satellites, for example, have four other satellites that can immediately take over functions if one of them suffers a major failure. It's the only way to guarantee uptime in that environment.
Allan:Okay, so we have these fast, radiation-hardened computers in orbit. What are the killer apps? What does this actually enable?
Ida:Two big areas jump out immediately. First, real-time ground services. Think about things that need zero lag, like truly immersive AR and VR, or streaming Ultra HD video to places with no terrestrial internet.
Allan:And the performance is actually better from space?
Ida:For some things, yes. One test bed showed that for video conferencing, the round trip time from a satellite edge server was 16 milliseconds. From a regular cloud server on the ground, it was 46. That's the difference between feeling instant and feeling laggy. And the second area is processing the data that's already in orbit using techniques like satellite federated learning.
Allan:Federated learning. So the AI model trains in space.
Ida:Exactly. The satellites collect all this data, right? Instead of sending petabytes of raw data down to Earth to train a model, they keep the data on board. They train their local part of the model and only send down the tiny optimized updates.
Allan:So you get this massive collaborative AI training swarm happening entirely in orbit.
Ida:And with the laser links, the whole constellation can update the global model on the ground, even if only one satellite has a connection at that moment. It saves an incredible amount of time and energy.
Allan:Which brings us right back to Musk and that$40 billion war chest.
Ida:Yeah.
Allan:He's not just doing this for better internet. He sees space as the only place to build the future of AGI.
Ida:For three very fundamental reasons. Number one, continuous solar energy, no night, no clouds, just 247 power.
Allan:Two, unlimited heat dissipation. You just radiate all that waste heat from the GPUs into the cold vacuum of space. No cooling towers needed.
Ida:And number three, and this one really tells you about the scale he's thinking of. No competition for electricity with Earth. He's talking about needing terawatts of power for AGI. You can't just pull that from the grid on Earth without collapsing it. In space, it's basically limitless.
Allan:The ambition is just off the charts, and it's all tied to that 2026 timeline. He also predicts that by 2030, AI will be smarter than all humans combined.
Ida:And that this will lead to the creation of over 10 billion humanoid Optimus robots by 2040.
Allan:10 billion robots. The amount of data processing required to manage that is it's just unfathomable.
Ida:And he's already said what the biggest bottleneck, the biggest black hole for all that compute power will be video processing, high-resolution video from 10 billion robots or self-driving cars or orbital sensors.
Allan:So to summarize where we are, we have this massive AI demand about to come online. The solution is OEC, moving compute into space, and it's all held together by the incredible speed of LaserCom.
Ida:Right. It's this huge global effort to turn satellites from dumb relays into really powerful thinking edge servers. It's the only way to scale AI beyond the physical limits of our own planet.
Allan:The hardware is going up, the network is being built, but the software to manage a planet-spanning network where every node is moving at orbital speed, that seems like a monumental challenge.
Ida:It is. Which leaves us with a final thought, a final question for you to chew on. The algorithms to reliably optimize these massive, dynamic constellations are still a huge challenge, these are NP-hard problems. And Musk himself says video processing will be the biggest drain on compute. So how quickly can we actually develop the software, the test beds, the orbital operating system needed to manage a multi-terawatt space compute infrastructure? If that AGI singularity really is coming in 2026, it feels like we've got about two years to solve it. That's a whole new kind of space race.