A Conversation with Kendall Lowrey on Our Robotics Future
Modular Robotics, Event-Based Camera Perception and more
In a world where robotics and automation continue to advance at an unprecedented pace, few thinkers are as deeply immersed in the nuances of this transformation as Kendall Lowrey. A postdoctoral researcher in robotics and machine learning at the University of Washington, Lowrey has dedicated his career to developing intelligent systems that autonomously discover solutions and master complex tasks through experience. He co-founded the Seattle Laboratory for Robotics and has worked extensively on AI-driven perception systems, event-based cameras, and real-time control methods that enhance the adaptability and efficiency of robotic platforms. His academic journey includes a PhD in robotics from UW under the mentorship of Emo Todorov and an undergraduate degree in electrical & computer engineering and biomedical engineering from Carnegie Mellon University. He is also the Co-Founder of Et Cetera Robotics, an early-stage startup developing motion perception software for robotic use cases.
Our conversation ranged from the practical applications of robotics in manufacturing and logistics to the broader implications on labor, economic structures, and even the way we assign value to human effort.
Robotics as the Next Platform
Lowrey draws an insightful comparison between robotics today and the early days of smartphones. Much like how the iPhone absorbed multiple devices—GPS units, cameras, music players—into a single platform, he believes robotics is moving toward a more flexible, modular future. Instead of highly specialized machines built for single tasks, we may soon see general-purpose robots with swappable hardware and software, allowing them to transition seamlessly between roles.
“If your iPhone had legs and could walk around your house, what new use cases would emerge?” Lowrey asks.
The shift toward modular robotics represents more than just technological convenience—it suggests a different future for the robotics value chain. Instead of monolithic manufacturers producing highly specialized machines, we may see a more distributed ecosystem emerge, where hardware and software providers operate in a dynamic, plug-and-play market.
“This is already happening in drones,” Lowrey explains. “You have companies like DJI making fully integrated solutions, but there’s also a huge aftermarket of plug-and-play modules that let people customize their drones for specific use cases.”
This shift could lead to the rise of an entirely new market, where companies build ‘robotic app stores,’ allowing customers to upgrade their robots with new capabilities on demand. Industries such as manufacturing, agriculture, and logistics would benefit from unprecedented customization, enabling businesses to quickly adapt their robotic fleets without replacing entire systems. However, this transformation would require new standards and interoperability protocols to ensure seamless integration across different robotic components and software platforms.
“Hardware standardization and modular design will be key to unlocking mass adoption,” Lowrey predicts. “Imagine a robotic workforce that can be upgraded just like software—where a farmer downloads a new ‘grape-picking’ module onto a general-purpose agricultural robot, or a logistics company swaps in high-precision gripping arms for warehouse automation.”
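The plug-and-play idea Lowrey describes can be sketched as a common software interface that swappable capability modules implement. This is a purely illustrative toy, not any real robotics middleware; every class and method name here (`TaskModule`, `Robot.install`, and so on) is hypothetical.

```python
class TaskModule:
    """Common interface every swappable capability must implement."""
    name = "base"

    def run(self, target):
        raise NotImplementedError


class GrapePicker(TaskModule):
    """The farmer's downloadable 'grape-picking' module."""
    name = "grape-picking"

    def run(self, target):
        return f"picking grapes at {target}"


class WarehouseGripper(TaskModule):
    """A high-precision gripping module for warehouse work."""
    name = "high-precision-gripping"

    def run(self, target):
        return f"gripping parcel {target}"


class Robot:
    """A general-purpose platform whose role is set by its installed module."""

    def __init__(self):
        self.module = None

    def install(self, module):
        # "Downloading" a new capability swaps behavior without new hardware.
        self.module = module

    def work(self, target):
        return self.module.run(target)


robot = Robot()
robot.install(GrapePicker())
print(robot.work("row 7"))        # same robot, agricultural role
robot.install(WarehouseGripper())
print(robot.work("A-42"))         # same robot, logistics role
```

The point of the interface is that the `Robot` class never changes when a new capability ships—only the module does, which is the software analogue of the hardware standardization Lowrey predicts.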
Event-Based Cameras: Unlocking Real-Time Perception
Lowrey believes his work in event-based cameras is one of the most significant breakthroughs in real-time robotic perception. Unlike traditional cameras, which capture full frames at a fixed rate, event-based cameras only register changes in the environment—drastically reducing data processing needs and latency.
“Our system can detect and track objects at 200 Hz, all while running on a tiny Raspberry Pi,” Lowrey notes. “Conventional AI-driven perception stacks struggle to match that efficiency even with power-hungry Nvidia edge computers.” To put this in perspective, traditional frame-based cameras capture images at a fixed rate, typically between 30 and 60 frames per second, and require significantly more computational power to process. Event-based cameras instead report changes in the scene as they occur, rather than waiting for the next frame. This allows Lowrey’s system to detect motion with near-instantaneous reaction times—comparable to how human vision prioritizes movement rather than scanning an entire static image.
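The core difference can be illustrated in a few lines: where a frame camera emits every pixel of every frame, an event camera emits a sparse stream of (x, y, polarity, t) tuples only for pixels whose brightness changed. The sketch below simulates that by diffing two frames—this is a simplified model of the sensor principle, not Lowrey’s actual pipeline, and the function name and threshold value are assumptions for illustration.

```python
def frames_to_events(prev_frame, curr_frame, t, threshold=15):
    """Turn the difference between two grayscale frames into events.

    prev_frame, curr_frame: 2D lists of pixel intensities (0-255).
    Returns a list of (x, y, polarity, t) tuples; polarity is +1 for a
    brightening pixel, -1 for a darkening one. Unchanged pixels emit nothing.
    """
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            diff = c - p
            if abs(diff) >= threshold:
                events.append((x, y, 1 if diff > 0 else -1, t))
    return events


# A mostly static 3x3 scene: one pixel darkens, one brightens.
prev = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
curr = [[10, 10, 10], [10,  50, 10], [10, 10, 120]]
events = frames_to_events(prev, curr, t=0.005)
# Only 2 of 9 pixels produce any output at all.
```

Here only the two changed pixels generate data, while a frame-based pipeline would ship and process all nine every cycle—that sparsity is what lets an event pipeline run at high rates on hardware as modest as a Raspberry Pi.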
Event-based cameras unlock new capabilities, particularly in high-speed applications like drone interception, where reaction time is critical.
“Robotics perception today is still built on a frame-based paradigm that’s fundamentally inefficient for real-world dynamics,” Lowrey explains. “Event-based vision changes that—it’s closer to how biological vision systems work, allowing robots to process motion in real time rather than playing catch-up with static images.”
Despite the promise of modular robotics and event-based cameras, Lowrey notes that progress is slower than it could be.
“There’s a ton of momentum behind the way things have always been done,” he says. “We’re still using perception systems that were designed for static environments when what we really need are systems that can react instantly, like the human eye.”
The Labor Impacts of Robotics and Undervalued Roles
One of the broader implications of robotics, according to Lowrey, is its effect on labor markets and how society values different types of work. While automation has traditionally been framed as a threat to blue-collar jobs, Lowrey believes the conversation needs to be more nuanced.
“There are a lot of roles that are undervalued not because they aren’t important, but because their contributions are difficult to quantify in traditional economic terms,” he says. “Skilled trades like construction, welding, and firefighting require a level of adaptability and physical intelligence that is extremely hard to automate.”
Lowrey notes that much of the discussion around automation assumes that robots will seamlessly replace human labor in a one-to-one fashion. However, he believes that many roles will evolve rather than disappear entirely. “You might have robots assisting in certain physical tasks, but the knowledge, decision-making, and situational awareness of skilled workers will still be crucial.”
Beyond traditional labor sectors, Lowrey also points out that many socially beneficial roles—such as environmental monitoring, urban maintenance, and disaster relief—are currently undervalued because they don’t directly generate economic returns in the short term. “We have the technology to deploy autonomous systems to plant trees, monitor forests, or manage fire risks,” he explains, “but unless there's a direct profit incentive, we don’t see large-scale investment in these applications.”
By reframing how we assign value to different forms of labor, Lowrey argues that automation could enable a shift where society prioritizes work that has long-term benefits beyond immediate financial return.
The promise of robotics lies not in replacing human ingenuity, but in amplifying it. "If we continue to measure economic value purely in terms of wage-based labor, we risk missing the broader transformation that robotics will bring," Lowrey warns. If embraced correctly, modular and perceptive autonomous systems can free humans from drudgery and elevate roles that emphasize creativity, decision-making, and complex problem-solving. The greatest test, therefore, is not in building more capable machines but in adapting our cultural and economic frameworks to ensure that progress benefits society as a whole, rather than merely optimizing for profit or efficiency.
“We often frame automation as a zero-sum game—either robots take jobs, or they don’t,” Lowrey says. “But the reality is far more complex. The real challenge is figuring out what we want humans to do, not just what we want robots to do.”
This requires a reassessment of what we value as a society. As machines take on more physical tasks, the emphasis shifts to creativity, problem-solving, and human interaction—areas that are difficult to quantify in traditional economic terms.
"A robot can stack bricks, but it can’t understand why a building should be beautiful," Lowrey notes. "It can weld metal, but it can’t inspire a team of workers."
This theme — the reallocation of human intelligence — is one that I am fascinated by and will be the focus of many upcoming essays.