
Intrinsic is making software-defined robotics production-ready

By Andrew Zigler

For 25 years, Brian Gerkey has been waiting for robotics to have its moment. As CTO of Intrinsic and co-founder of the Robot Operating System (ROS), Gerkey has watched hardware capabilities race ahead of the software needed to make robots truly useful. That gap is finally closing, not because robot arms suddenly got better, but because software can now handle the messy, unpredictable real world.

Intrinsic recently transitioned from an Alphabet moonshot to an integrated unit within Google, positioning the company to accelerate the transfer of frontier AI research into deployable industrial automation. For engineering leaders navigating the convergence of AI and physical systems, Gerkey's insights reveal what it takes to move from impressive demos to production-grade reliability.

Modern software is making physical AI useful

Robotics has reached an inflection point, but not in the way most people assume. The hardware, including robot arms, sensors, and actuators, has been mature for decades. What changed is the software's ability to perceive, plan, and control in variable environments. Gerkey notes that after a 25-year wait, robotics is finally becoming a hot topic for a very practical reason: it is becoming genuinely useful.

Historically, robots operated in rigidly engineered environments where all variation was eliminated: parts had to be picked from exactly this spot and placed at exactly that angle. The robot was essentially blind, repeating the same motion in a controlled loop. This worked for high-volume manufacturing but left enormous potential untapped.

The breakthrough lies in software progress across three core areas: perception, planning, and control. Modern neural networks excel at perception, allowing robots to identify objects, recognize obstacles, and adapt to scene variation without pre-programming every possibility. Planning algorithms determine safe paths through dynamic environments. Control systems execute those plans with precision.

Gerkey is careful to note that modern AI accelerates long-standing robotics challenges rather than replacing classical techniques entirely. Many production systems still rely on a hybrid approach, combining neural network perception with proven motion planning and control methods. The key is knowing which tool fits which problem.
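
To make that hybrid split concrete, here is a minimal sketch of a perceive-plan-act cycle. Every function below is a hypothetical stand-in rather than any particular framework's API; the point is the division of labor, with a learned model handling perception while classical algorithms handle planning and control.

```python
"""A minimal perceive-plan-act sketch. All names are hypothetical."""

def detect_object(frame, target):
    """Perception stub: a real system runs neural inference here and
    returns the part's pose, tolerating scene variation."""
    return (0.42, -0.10, 0.05)  # (x, y, z) in meters, cell frame

def plan_motion(current_pose, goal_pose):
    """Planning stub: a real system uses a classical planner (e.g.,
    sampling-based) to produce a collision-free path."""
    return [current_pose, goal_pose]  # trivial two-waypoint path

def execute(path):
    """Control stub: a real controller tracks the trajectory within
    joint limits and velocity bounds."""
    for waypoint in path:
        print(f"tracking waypoint {waypoint}")

def control_cycle(frame, current_pose, target):
    goal = detect_object(frame, target)     # neural perception
    path = plan_motion(current_pose, goal)  # classical planning
    execute(path)                           # classical control

control_cycle(frame=None, current_pose=(0.0, 0.0, 0.3), target="bracket")
```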

Intrinsic's closer integration with Google and DeepMind creates a tighter research-to-production pipeline. Research models must be adapted, fine-tuned, and hardened for real-world deployment, a substantial engineering effort that benefits from direct collaboration rather than technology transfer across organizational boundaries.

Software-defined robotics eliminates one-off automation rebuilds

The traditional model of industrial automation treats each task as a bespoke engineering project. Change the product, rebuild the robot cell. Change the process, reprogram from scratch. Gerkey describes each of these new tasks as its own bespoke snowflake.

Software-defined robotics inverts this model. The same hardware becomes a reconfigurable resource. As Gerkey explains, hardware can become a software-defined resource where its utility is a direct result of the code you put into it.

This shift mirrors the evolution of computing infrastructure. Just as virtual machines and containers made compute resources flexible and reusable, software-defined robotics makes physical automation adaptable across changing manufacturing requirements. A robot arm that assembled one phone model can be reconfigured through software updates to handle a different product without tearing down the entire cell.

The advantage extends beyond flexibility. Software-defined systems can respond to positional variation in real time. If a part arrives slightly offset from its expected location, the robot perceives the difference and adjusts its motion plan accordingly rather than failing because conditions deviated from a preset routine.

Gerkey emphasizes that this approach aligns robotics development with modern software engineering practices like modular decomposition, reusable components, version control, and continuous integration. Teams can separate concerns, allowing perception specialists, motion planners, and domain experts to contribute independently without rebuilding the entire stack.

Digital-first workflows amplify these benefits. Simulation and digital twins enable teams to choose hardware, place sensors, and validate behavior before physical deployment. This reduces iteration cost and compresses the timeline from concept to production, though Gerkey is quick to note that simulation cannot fully replace validation on real hardware.

Open source robotics accelerates commercial adoption

Open collaboration has been a defining force in robotics adoption, with ROS serving as shared infrastructure across research labs and commercial deployments worldwide. The ecosystem's growth reflects a deliberate tool-building philosophy. Gerkey attributes the platform's success to a distributed systems approach that lets people contribute just the part they are specifically good at.

ROS succeeded not as a monolithic framework but as a modular distributed system. Contributors can add independent packages such as device drivers, perception algorithms, and motion planners without modifying a core codebase. This architecture scales because it lowers barriers to participation. A researcher adding support for a new camera does not need to understand the entire stack, just how to publish sensor data in a standard format.
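
As a rough sketch of how low that barrier is, this is approximately what a minimal camera publisher looks like in ROS 2's Python client library (the camera read-out itself is elided, since it would come from a vendor SDK). A contributor ships this as an independent package, and any node subscribed to the standard sensor_msgs/Image topic can consume its output unchanged:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image  # the standard ROS image message


class CameraDriver(Node):
    """Publishes frames from a (hypothetical) camera on a standard topic."""

    def __init__(self):
        super().__init__('camera_driver')
        # Any node subscribing to 'image_raw' can consume these frames
        # without knowing anything about this particular camera.
        self.publisher = self.create_publisher(Image, 'image_raw', 10)
        self.timer = self.create_timer(1.0 / 30.0, self.publish_frame)

    def publish_frame(self):
        msg = Image()
        # A real driver would fill msg.data, width, height, and encoding
        # from the vendor SDK here.
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'camera_link'
        self.publisher.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(CameraDriver())


if __name__ == '__main__':
    main()
```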

Licensing choices proved equally critical. Permissive licenses like BSD and Apache 2.0 let organizations adopt, modify, and commercialize ROS-based systems without heavy constraints. Companies could build on the platform knowing they could freely use it in their business.

This commercial-friendly approach created a virtuous cycle. Companies could build businesses on ROS, which increased investment in the ecosystem, which improved the platform for everyone. The result is a global community contributing everything from mobile robot navigation to space station manipulators.

The ecosystem view extends beyond ROS alone. Interoperability with adjacent platforms matters. NVIDIA's emphasis on the OpenUSD standard for 3D environments creates opportunities for interchange between ROS's native representations and newer formats. Gerkey's approach is pragmatic: support multiple standards, enable conversion, and let users vote with their feet.

Skill-based programming turns robotics expertise into reusable software

Skills are reusable software abstractions that package complex robotics capabilities into callable building blocks. This model separates infrastructure from application logic, enabling domain experts to contribute without mastering every subsystem. Gerkey explains that a skill simply takes necessary functionality and turns it into a basic, callable function.

Consider object pose estimation, a computer vision problem that uses neural networks to determine an object's 3D location and orientation from camera images. In Intrinsic's framework, this becomes a specific skill, estimate_pose, that developers can drag and drop into workflows. Input: camera feed and object model. Output: pose in 3D space. The underlying implementation remains hidden, allowing application developers to focus on orchestration rather than algorithm internals.

Motion execution works similarly. A move_robot skill accepts a target pose and handles the complexity of motion planning by checking for collisions, respecting joint limits, and generating smooth trajectories. The developer specifies the goal, and the skill manages the execution.
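
To make the abstraction concrete, here is a hedged sketch of how those two skills might compose into application logic. The skill names come from the examples above, but the signatures, the Pose type, and the stub bodies are illustrative assumptions, not Intrinsic's actual SDK:

```python
from dataclasses import dataclass


@dataclass
class Pose:
    """Position (meters) and orientation (quaternion) in the cell frame."""
    x: float
    y: float
    z: float
    qx: float = 0.0
    qy: float = 0.0
    qz: float = 0.0
    qw: float = 1.0


def estimate_pose(camera_feed, object_model) -> Pose:
    """Perception skill: neural pose estimation behind a plain function
    call. Stubbed here; a real skill runs inference on the camera image."""
    return Pose(x=0.42, y=-0.10, z=0.05)


def move_robot(target: Pose) -> None:
    """Motion skill: plans a collision-free path, respects joint limits,
    and generates a smooth trajectory. Stubbed here."""
    print(f"moving to ({target.x:.2f}, {target.y:.2f}, {target.z:.2f})")


def pick_part(camera_feed, object_model) -> None:
    """Application logic: orchestrate skills, never their internals."""
    pose = estimate_pose(camera_feed, object_model)
    pose.z += 0.02  # approach from 2 cm above the detected part
    move_robot(pose)


pick_part(camera_feed=None, object_model="connector_v2")
```

The orchestration layer reads like ordinary business logic, which is exactly the point: swapping in a better pose estimator changes nothing above the skill boundary.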

This abstraction layer becomes especially powerful when domain experts encode process knowledge. Welding, for instance, requires tight feedback loops and a deep understanding of materials, expertise that robotics infrastructure teams rarely possess. A welding expert can capture that process knowledge in a skill that controls the torch with precision, incorporating years of practical experience without rebuilding perception, planning, and control from scratch. The skill then becomes reusable across applications and shareable across teams.

Gerkey connects skill-based programming to broader democratization trends. Lower-cost hardware, accessible simulation platforms, and modular software frameworks reduce the barrier to entry. Builders can experiment in digital environments before deploying to physical systems, compressing the learning curve and expanding the pool of potential contributors.

Production robotics demands years of reliability hardening

The gap between a compelling demo and a production system is measured in years and orders of magnitude of reliability improvement. Gerkey points out that there is a significant gulf between a video of a robot succeeding a few times in a lab and a product that a customer can rely on for their entire business.

Demonstrations celebrate what is possible. Production systems must deliver what is dependable. Manufacturing environments demand reliability thresholds far above the levels that generate excitement in research contexts. While a robot performing a dexterous manipulation task with 90 percent accuracy is an extraordinary research achievement, Gerkey notes that for a manufacturing customer, a 90 percent reliability rate is essentially useless.

Ninety percent reliability means one failure in ten attempts, which is unacceptable when that failure halts a production line or damages expensive components. Industrial customers need multiple nines of uptime, and achieving that requires sustained engineering effort.
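
The arithmetic makes the gap vivid. Assuming a hypothetical line running 1,200 cycles per eight-hour shift (an illustrative figure, not one from the conversation), expected failures scale like this:

```python
# Expected failures per shift at different reliability levels.
# The 1,200 cycles/shift figure is an assumed example.
cycles_per_shift = 1_200

for reliability in (0.90, 0.99, 0.999, 0.9999):
    failures = cycles_per_shift * (1 - reliability)
    print(f"{reliability:.2%} success rate -> ~{failures:.1f} failures per shift")
```

At 90 percent reliability that is 120 stoppages per shift; even three nines still means more than one failure per shift, which is why hardening continues well past the point where demos look flawless.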

Hardening research advances for customer environments involves adapting models trained in controlled labs to the variability of shop floors, including lighting changes, material variations, vibrations, dust, and temperature fluctuations. Simulation and digital twins reduce iteration cost by allowing teams to test scenarios and edge cases before physical deployment, but they cannot fully replace validation on real hardware. Physics engines approximate reality; they do not replicate it perfectly.

Preparing for the software-defined robotics era

Gerkey points to autonomous vehicles as a case study in the timeline from breakthrough to deployment. DARPA challenges in the mid-2000s demonstrated feasibility. Commercial deployment began roughly 15 years later, after extensive refinement, testing, and operational learning. Robotics in manufacturing follows a similar arc, though Gerkey suggests accelerating technology may compress these timelines.

Rod Brooks, co-founder of iRobot and Gerkey's mentor, formulated a law: moving from a lab demo to 99.9 percent reliability takes ten years, and every additional nine requires another decade. While Gerkey believes modern tools may shorten these periods, he still measures production readiness in years, not quarters.
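
Read as arithmetic (a paraphrase of the law, not a formula Gerkey states), the timeline grows linearly with each nine of reliability:

```python
def years_to_nines(nines: int) -> int:
    """Brooks's law, paraphrased: ~10 years from lab demo to three
    nines (99.9%), plus ~10 more for each additional nine."""
    assert nines >= 3, "the law starts at 99.9% (three nines)"
    return 10 + 10 * (nines - 3)

for n in (3, 4, 5):
    print(f"{1 - 10**-n:.5f} reliability -> ~{years_to_nines(n)} years")
```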

The transformation of robotics from fixed automation to software-defined systems represents a fundamental shift in how we build and deploy physical AI. For engineering leaders, the lessons are clear: invest in modular architectures, embrace open ecosystems, separate infrastructure from domain expertise, and respect the gap between demonstration and production.

Intrinsic's AI for Industry Challenge, focused on cable handling in electronics manufacturing, offers a practical entry point for teams exploring these concepts. Participants can experiment in simulation, submit policies for evaluation, and progress to physical hardware testing, experiencing firsthand the journey from algorithm to deployment.

As Gerkey's 25-year wait comes to fruition, the question for engineering leaders is no longer whether software-defined robotics will arrive, but how quickly they can adapt their organizations to capitalize on it.

To hear more about the future of physical AI and software-defined robotics, catch Brian Gerkey's full episode on the Dev Interrupted podcast.

Andrew Zigler

Andrew Zigler is a developer advocate and host of the Dev Interrupted podcast, where engineering leadership meets real-world insight. With a background in Classics from The University of Texas at Austin and early years spent teaching in Japan, he brings a humanistic lens to the tech world. Andrew's work bridges the gap between technical excellence and team wellbeing.
