Request for Startups — Teleoperation for Robotics
Why Teleoperation is the Key to Unlocking the $10T Robotics Stack
by Elad Verbin and Eyal Baroz, Lunar Ventures.
Teleoperation is the Blindspot of Robotics
The conventional wisdom about robotics is that we’re waiting for AI to solve autonomy. Companies are racing to build the most capable robots and the best foundation models. Once the AI is good enough, the robots will automate the world. Until then, we wait.
We think this framing is unproductive and harmful. There is a way to get many more robots on the ground, immediately, and it’s hiding in plain sight.
Here’s a fact that should get way more attention: when you watch a humanoid robot demo, you’re probably watching a teleoperated robot. The rule of thumb: “If a humanoid demo is not explicitly advertised as autonomous — one should assume it’s tele-ops.”
This isn’t a scandal. It’s the secret hiding in plain sight.
In 2025, investors poured over $10 billion into robotics. Figure AI alone has raised $1.9 billion total. The humanoid market is projected to reach $5 trillion by 2050. And almost none of this money is going towards building up the teleoperation stack — the infrastructure that makes all those demos actually work. Analysts put the current teleoperation market at $890 million. A rounding error.
We think this is a blindspot. Here’s our thesis: teleoperation isn’t a stopgap while we wait for “real” autonomy. Teleoperation is the path to autonomy. It’s an apprenticeship: robots learning from human operators, one task at a time. The companies that figured this out first were autonomous car companies, which needed to get cars on the road. Waymo, Cruise before it suspended operations — they didn’t solve autonomous driving by waiting for perfect AI. They “cheated” in the best way possible: they put remote operators in the loop. When a Waymo car encountered something it couldn’t handle (a construction zone, a cop waving traffic through, a guy on a unicycle), a human teleoperator took over. The AI handled everything else. Over time, the “everything else” bucket grew larger and the interventions grew rarer.
This is the way. And it requires a whole lot of software: to manage the gradual handoff from 1:1 human-to-robot ratio, toward 1:5, and eventually 1:100 and 1:1000. A lot of value will already be created at 1:1 (see more on that below), but when we get to 1:10 or 1:100, the economics will transform radically, and teleoperation companies will be multi-billion dollar companies. This is the path robotics will follow to automate the world, to take over manufacturing, logistics, defense, nursing, agriculture, public works, and everything else.
At Lunar Ventures, we’re actively looking to invest in startups building teleoperation systems for robotics. We invest cheques of €500K-€1.5M, leading investment rounds across Europe and the US in pre-seed startups that don’t yet have any product or revenue: sometimes just two founders with an idea and a 1-pager, pre-incorporation.
If you’re building in this space, email us at robotics@lunar.vc. (Or directly at our personal emails: Elad and Eyal.) And if you know someone building in this space, or who might like this article, send it to them! Also, subscribe to get future writing from us!
Our thesis in brief: When a human teleoperates a robot, the resulting data is perfectly native — no simulation-to-real gap, no domain adaptation problem. Train an AI on that data, and you get automation that actually works. Teleoperation is the apprenticeship that produces AI. Every intervention is a lesson. The companies that embrace this human-machine hybrid will generate the data, revenue, and learning to cross to full autonomy. (We expand this argument in Section 5.)
We’re interested in the entire teleoperation field, but here are the opportunities we find most compelling: (We expand on each of these in the next section.)
The Control Layer: the software that makes it work — interfaces, latency compensation, fleet coordination, managing the gradual handoff from human to autonomous control, and the interworking of humans with robots on the same workstreams.
Vertically Integrated Robotics: companies that own both hardware and software in a particular vertical, building a teleoperation flywheel on the ground.
Full-Stack Universal Teleoperable Manipulators, especially for Zero-Human Environments: companies building full-stack hardware+software solutions: highly reliable, multi-purpose machines that can operate in zero-human environments. This includes universal teleoperable manipulators: machines that can be teleoperated and can do many generic tasks, which come both in cheap medium-reliability flavors and in expensive ultra-high-reliability versions.
In the following, Sections 1-3 cover which companies we believe should be built immediately. Sections 4-8 dive into our reasoning — why we believe this thesis.
1. The Three Biggest Opportunities for Building in Teleoperations
1.1 The Control Layer: Software for Teleoperation
The most interesting opportunity is the control layer — the software that lets humans and AI collaborate during the long transition to autonomy.
Here’s what makes this interesting: it’s not an AI problem. It’s a systems problem. The question isn’t “how do we build smarter robots?” It’s “how do we manage Centaur teams1 — hybrid human-AI workforces — through a multi-year transition?”
This requires control interfaces that don’t make operators exhausted after an eight-hour shift. It requires latency compensation that works in production, not just in demos where you can cherry-pick the good takes. It requires fleet coordination across robots that weren’t designed to talk to each other. And it requires software that manages the gradual handoff from 1:1 human-to-robot ratios toward one-to-many.
This is closer to building air traffic control than building a neural network. The hardest unsolved problems are attention and exceptions: How does one operator maintain awareness across ten robots? What happens when three robots hit edge cases simultaneously? How do you hand off control without losing context? We expand on these in Section 8, but the short version is: the software to solve them well doesn’t exist yet, and the companies that build it will own a critical chokepoint.
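To make the exception-handling problem concrete, here is a deliberately minimal sketch (all names, severity scores, and the dispatcher design are our own illustrative assumptions, not any existing product’s API): a fleet-wide queue that orders simultaneous edge cases by severity, so one operator always pulls the most urgent robot first, together with the context snapshot needed for a warm handoff:

```python
import heapq
import itertools

class ExceptionDispatcher:
    """Toy dispatcher: robots raise exceptions with a severity score;
    an idle operator always pulls the most severe pending one, with the
    context needed to take over without losing situational awareness."""

    def __init__(self):
        self._queue = []               # min-heap of (-severity, seq, event)
        self._seq = itertools.count()  # tie-breaker: FIFO within a severity

    def raise_exception(self, robot_id, severity, context):
        event = {"robot": robot_id, "severity": severity, "context": context}
        heapq.heappush(self._queue, (-severity, next(self._seq), event))

    def next_for_operator(self):
        """Pop the highest-severity pending exception, or None if the fleet is clear."""
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[2]

# Three robots hit edge cases "simultaneously":
d = ExceptionDispatcher()
d.raise_exception("r07", severity=2, context="pallet misaligned")
d.raise_exception("r12", severity=9, context="human entered cell")
d.raise_exception("r03", severity=5, context="gripper slip")

print(d.next_for_operator()["robot"])  # r12 -- the safety event jumps the queue
```

A real control layer would add operator assignment, timeouts, and escalation on top; the point of the sketch is only that triage across a fleet is a scheduling problem, not an AI problem.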
Some people who understand this best are:
Engineers who’ve run remote operations at autonomous car companies. They’ve felt the pain firsthand.
Gaming engine developers: they’ve built low-latency real-time systems.
Roboticists who’ve actually designed systems where humans and machines work together, not just systems where machines work alone.
1.2 Vertically-Integrated Robotics
Vertically-integrated operations that own both hardware and software in a particular vertical are probably an even better business than pure software plays, though harder to build.
Why? Because you can optimize the whole system, not just your slice of it: not just the hardware or the software. You sell outcomes, not tools: your value proposition is “this task gets done”, rather than selling tools that require assembly. And you build expertise and reputation in a particular vertical. Buyers, especially in manufacturing, are tired of integrating six vendors’ products and having no one to call when it breaks. And crucially: you can generate revenue while building toward automation.
There is a hard trade-off here: this is much more capital-intensive than building tools, and it’s slower to scale. Capital needs are larger, and the path involves manufacturing partnerships and long integration cycles. But the opportunities are just as large as, or larger than, those for building tools.
1.3 Full-Stack Robotics for Zero-Human Environments
Here we build tools and systems, but not necessarily vertically-integrated ones. The important thing is to make them as universal as possible, i.e. able to do many tasks, and to build them for the use cases where the pain is greatest. The places where robots are most needed are the places where humans literally can’t go. For these, we need highly reliable, multi-purpose machines. And in those environments, the robots that are most needed are the robots that maintain other robots, and fix problems when they arise. These are machines that can handle dozens of tasks, use general tools, and prioritize reliability over speed.
In this direction, we actually need two kinds of machines. We need the simple machines that are still universal: these are your cheap, generic, Universal Teleoperable Manipulators: machines able to use many tools in a modular way, and to perform any operation that a human would do. (These machines might split into segments depending on the “resolution” of the operation needed: some machines deal with big boxes weighing tens of kilos, and some are able to solder tiny circuits.) These universal manipulators don’t have to be entirely robust, since they have other machines to fix them — more on this next. But they do need to be affordable, and universal.
On top of that, we need the ultra-high-robustness universal teleoperable manipulators that are in charge of fixing the others when they break. These can be as expensive as they need to be, but they need many nines of robustness, and they need to withstand conditions that pose systemic challenges to the whole system, e.g. high temperatures and hazardous materials.
This is the hardest category to build for, but also the one with the clearest value proposition. When you literally cannot send a human — a space vehicle, a forward operating base under fire, a contaminated facility, an underwater datacenter — the alternative to a working robot isn’t a cheaper robot. It’s nothing. That changes the economics entirely. Buyers will pay for reliability in ways they won’t when “just send a technician” is an option. Once they get battle-tested and cheaper at scale, the universal manipulators built for such environments will also find ample use in situations where humans are present.
There’s genuine value in dual-use applications: what works for a forward operating base also works for a remote mining facility. Military customers fund the R&D, and commercial applications provide scale. The challenge is that this is a smaller market with tough regulatory dynamics, long sales cycles, and a very high technical bar. You need to be comfortable with years of development before significant revenue. (And you need investors, like Lunar Ventures, who are comfortable with this as well.)
Some people best positioned to build this are:
Engineers who’ve built high-reliability systems for harsh environments — who’ve internalized that “99% reliable” means “fails constantly” when you’re running 24/7.
Roboticists with experience in space, defense, or remote industrial operations.
People who’ve worked on systems where failure isn’t an option — and have opinions about how to build them better and faster than incumbents.
2. Founders Who’ve Seen It Firsthand
The common thread across these opportunities: the best founders are people who have lived the problem. You know what makes teleoperation hard because you’ve built it yourself — in autonomous driving, gaming, defense robotics, space systems, or industrial automation. You’ve felt the frustration of interfaces that don’t work and latency that drives operators crazy. And you understand the path from human control to full autonomy because you’ve walked it before in a different domain.
This isn’t a space where you can pattern-match your way to success from the outside. The problems are too specific and the failure modes too non-obvious. The best founding teams here have at least one member who has seen the inside of these systems and has strong opinions about what’s broken and how to fix it. Founder-startup fit is a real thing. We’re always happy to talk to anyone smart and ambitious; but for your startup to succeed, we recommend you ask: “are we a unique fit to solve this particular problem?” If you are, talk to us! And if you’re not, pivot to a problem where you are a unique fit — we’re happy to discuss and help you find one.
3. Let’s Talk!
If you’re building in any of these areas, please email us at robotics@lunar.vc (or at our personal emails) and let’s talk!
We’ve invested in many companies that are pre-incorporation: two founders with a great idea. That’s the stage where we’re most helpful, because all the most interesting work has not yet been done. Send us a deck, or just a 1-pager about what you’re building and why you’re the perfect team for the job. We invest at pre-seed, typically €500K-€1.5M, in Europe and North America. We’re a deep-tech fund — we care about algorithms and computer science applied to hard problems, and we prioritize building a strong, technically-differentiated system over making quick revenue and getting “locked in” to a target that’s too small.
Know someone building in this space? Share this with them.
Working on something adjacent? Reach out anyway.
What follows is the argument behind our thesis. If you’ve read this far and want to understand why we believe teleoperation is the path to autonomy, this is for you. Our goal isn’t to share conclusions; it’s to show reasoning you can poke holes in.
4. Teleoperation is Not a Stopgap — It’s an Apprenticeship
The skeptic’s view of teleoperation is that it’s a temporary fix while we wait for real AI. We believe this misses the point entirely.
Fact 1: When a human teleoperates a robot, the data has no domain adaptation problem. There’s no gap between where the data comes from and where it needs to work. The inputs are exactly what the robot’s sensors see; the outputs are exactly what the actuators should do. This is what makes teleoperation data native — robot-shaped from the start.
Fact 2: If you train an AI to perfectly mimic those teleoperators, you’ve built perfect automation. Not simulation-to-real transfer. Not video-to-robot translation. This is full automation: AI directly imitating successful robot operation.
From these two facts, we see that teleoperation provides an apprenticeship period, to gradually turn robots into real domain-expert workers. The robot works “under” the human master, learns what they do, and gradually takes over tasks as it proves ready. The master doesn’t disappear — they step back. Every intervention is a lesson. This is what happened (and is still happening) in self-driving, and it’s what’s going to happen in manufacturing, logistics, and all physical endeavors.
Waymo’s disengagement data tells this story clearly. In 2015, Waymo cars needed human intervention roughly every 1,250 miles. By 2020, that number had stretched to 30,000 miles. But the number didn’t keep rising; in fact, it decreased somewhat: today Waymo still averages 17,000 miles between critical interventions — but with a rapidly-expanding service area. Waymo pushed the technology until a certain economically-viable “intervention frequency” was reached, and then started to gradually expand its service area, at a rate that keeps that intervention frequency controlled.2
How did Waymo achieve this? It wasn’t just breakthroughs in AI, but also laborious data collection: five years of teleoperators, some of them sitting in call centers in the Philippines while operating (or “assisting”) cars on the streets of Phoenix, handling every edge case the AI couldn’t solve. Each intervention generated exactly the data needed to make the next intervention less likely. The humans weren’t a fallback; they were the masters training the apprentice. And at one intervention every 17,000 miles, the economics make sense, so they can expand their service area to eventually cover the entire globe. (It turns out the likely main bottleneck to expansion is neither the robustness of their AI nor the need for teleoperators, but rather their ability to procure cars, and stifling regulation.)
Imagine you’re one of those teleoperators. It’s 3pm in Manila (11pm in San Francisco). On your screen, a Waymo is approaching a nightlife area in Haight-Ashbury. The AI has flagged uncertainty — the street is in disarray, and drunk revelers are running into the street in ways the model can’t predict. You take control, navigate the car through, hand it back to autonomy. That fifteen-second intervention just taught the system something it will remember forever. Multiply that by thousands of operators, millions of miles, years of edge cases. That’s how you build autonomous systems that actually work. And you’ve achieved this at non-San-Francisco labor costs, with teleoperators spread around the world, each working their own timezone’s working hours.
This dynamic creates a flywheel that’s both technically and economically powerful. You start with fully human-controlled operations, then gradually automate. First you automate the routine tasks. After that, humans get to just monitor and occasionally step in. After this is robust, monitoring subsides, and humans get called only for exceptions. Eventually you achieve nearly full autonomy, and then actual full autonomy. Each step generates revenue (you’re delivering value from day one) and generates data (every human action teaches the AI something). You don’t need to solve full autonomy before you have a business.
There’s also an immediate economic edge that has nothing to do with AI training. In today’s world of immigration restrictions and tariff wars, teleoperation enables a new kind of globalization. Production can happen anywhere while workers stay where they are. A TSMC factory in Arizona staffed by operators in Taiwan. Teleoperation is both a bridge and a destination.
The bottom line: robotics companies that skip this apprenticeship phase are risking completely missing the AI transition. They will build core technology, while their competitors are operating, and gradually automating as they go. The companies that embrace the human-machine hybrid will generate the data, the revenue, and the learning that lets them cross to full autonomy.
So why now? What changed?
5. The Best Time for Teleoperation was 2025, the Second Best Time is Now
Several forces are converging to make teleoperation strategically urgent.
Starting with geopolitics. The US defense establishment is in full war-preparedness mode. They’re hoping full-scale global war does not break out; but the best way to minimize the chance of such a war is to be maximally prepared for one. (Check out John Antal’s take on this and on being prepared for the next war.) In pushing for war readiness, military leaders have found that America’s industrial base can’t sustain a prolonged conflict. Modern warfare requires sustained production, not stockpiled munitions — but the current manufacturing base can’t support this, and the labor pool can’t either. Robots have to step up. Teleoperation is how you get robots that actually work while autonomy catches up.
Layer on immigration. The US is restricting immigration more aggressively than at any point in decades. The traditional model — ship workers to where the labor is needed — is breaking down. But teleoperation inverts this: workers in Korea or Poland operate robots in Ohio without visas, without relocation. The labor pool becomes global even as borders close.
Then add reshoring pressure. Tariffs and supply chain anxiety are pushing manufacturing back to the US and Europe. But these regions don’t have the workers, and the workforce is getting older anyway. You can’t build a chip fab in Arizona and staff it with Arizona residents — there aren’t enough with the right skills. Teleoperation makes it work: domestic production, global labor, no one has to move.3
These three forces all point the same direction: demand for robots that work now, not in five years when autonomy is “solved”.
There’s also a technology case. The missing piece for robotics was never hardware — it was cognition. You could build arms, legs, sensors. But you couldn’t build judgment. Large language models changed this.4 They gave machines the ability to reason through unstructured situations. With cognition, robots adapt instead of breaking. Teleoperation is now the on-ramp to an inevitable destination, not an expensive dead end.
The UX technology itself also unlocked quite recently: VR headsets matured (Quest 3, Vision Pro), and edge AI got cheap enough for real-time control.5
Meanwhile, money is chasing “Physical AI” as the next frontier. The robotics market is projected to grow from $90 billion (2024) to over $200 billion by 2030.6 Teleoperation catches tailwinds from this wave — it’s the infrastructure that makes these bets work.
The geopolitical pressures create urgency, the technology creates possibility, and the capital creates fuel. And teleoperation remains overlooked.
What if we don’t build teleoperation? What if robotics keeps getting big funding, but the control layer doesn’t get built? Imagine a manufacturing executive who just spent $50 million on humanoid robots. In production, they hit edge cases constantly. Without good teleoperation infrastructure, exceptions require on-site technicians. Utilization craters. Multiply this across enough companies, and you get a trough of disillusionment for physical AI — not because the robots can’t work, but because the human-machine interface never got built. Robotics is deemed a “disappointment”, and we go into “Robotics winter”. The world, with its increasingly aging population, can’t afford this. The Teleoperation Stack is needed immediately.
So where do we expect the first wins to come from?
6. Timing the Teleoperation Adoption Verticals
Timing adoption is hard, but let’s try.
Teleoperation already had one big rollout (still ongoing), in autonomous driving. This sector is unique in that it’s an enormous segment of the economy, with automation challenges that are very difficult, but quite uniform across the sector. So it makes sense that it was the first to roll out. What sector is going to be next?
We believe defense and space will move early. They are not the largest, but likely the early movers. It’s worth understanding why — because the same logic will tell us which other verticals follow and when.
In zero-human environments, you can’t send a technician when something goes wrong. That constraint alone forces a willingness to pay for reliability that doesn’t exist in most markets. A robot on a lunar base or a forward operating base needs to work, full stop — and the buyer will pay accordingly. The US defense budget dedicates $13.4 billion to autonomy and AI systems in FY2026, and they’re not waiting for prices to come down. This is where the early revenues are: premium pricing, motivated buyers, and environments where the “just send a person” option doesn’t exist.
Manufacturing is where the volume comes from — but the adoption curve there is steeper. Assembly lines, light manufacturing, electronics production: the labor shortage is real (nearly half a million US manufacturing jobs sit unfilled) and the economic case for automation is already compelling. But manufacturing buyers are ruthlessly practical. They won’t deploy until systems hit a reliability threshold that lets them run continuous operations without constant babysitting. The bottleneck isn’t demand — it’s proving the technology works at manufacturing-grade uptime. Once that threshold is crossed, adoption will be fast, because the economics have been ready for years. This is where the platform winners will emerge.
Other verticals — nursing, agriculture, general physical labor — will follow later, maybe five to ten years out. They lack both the premium pricing of defense and the concentrated deployment environments of manufacturing. They’ll adopt once the technology has been battle-tested elsewhere and the costs have come down. This is fine; the first two markets are big enough to build substantial companies.
But what exactly needs to get built? What are the hard problems where solutions don’t exist yet?
7. The Most Urgent Problem: Software for the Control Layer
What are the biggest challenges, and what do their likely solutions look like? We covered some big problems in Section 1; here are some additional challenges that don’t cleanly translate into individual startups, but are more of a “feature, not a product”:
Control interfaces are still clunky. VR headsets help, but operating a robotic arm remotely for eight hours without fatigue is hard.7 Most current systems feel like flying a drone through molasses. Latency drives humans crazy: 200 milliseconds of lag makes tasks significantly harder.8 Gaming solved parts of this with prediction and local simulation, but teleoperation has higher stakes and a physical ground truth that can’t be faked. The right answer probably combines predictive models, edge processing, and clever UI design that hides unavoidable lag.
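The predictive side can be illustrated with a deliberately minimal sketch (the function, numbers, and linear model are our own assumptions, not any production system): dead-reckoning that extrapolates the robot’s last reported state across the measured latency, so the operator’s display approximates where the robot is now rather than 200 milliseconds ago:

```python
def predicted_pose(last_pose, last_velocity, latency_s):
    """Linear dead-reckoning: extrapolate the last telemetry pose forward
    by the measured latency, so the operator sees an estimate of the
    present state instead of a stale snapshot."""
    return tuple(p + v * latency_s for p, v in zip(last_pose, last_velocity))

# Last packet: arm tip at (0.50, 0.20) m, moving at (0.10, -0.05) m/s,
# measured latency 0.2 s (all figures hypothetical).
print(predicted_pose((0.50, 0.20), (0.10, -0.05), 0.2))  # ~ (0.52, 0.19)
```

Real systems layer richer dynamics models and UI tricks on top of this, but the principle is the same: show a prediction, then reconcile when ground truth arrives.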
Fleet coordination is messy. Picture 20 robots on a factory floor: some human-controlled, some autonomous, some managed by different systems, with workers walking among them. How do you prevent collisions? Maintain situational awareness? Hand off control smoothly? These systems problems don’t have good solutions yet.
The biggest unlock is going from one-to-one to one-to-many. To be clear: one-to-one teleoperation is already valuable. It enables 24/7 factory operations without night shifts. It reduces the need for worker migration. It allows labor cost arbitrage. And every session generates training data. But the economics transform at one-to-many. At one-to-ten, you’re actually scaling human productivity.
The software to enable smooth operator switching, attention management across multiple robots, and exception handling across a fleet doesn’t exist yet. This is the problem that, when solved well, will transform the economics of the entire space.
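The one-to-many economics above can be made concrete with a back-of-envelope model (all dollar figures are illustrative assumptions, not market data): operator labor is shared across the fleet one person supervises, while per-robot amortized cost stays fixed:

```python
def cost_per_robot_hour(operator_wage, robots_per_operator, robot_amortized_cost):
    """Toy unit economics: the operator's hourly wage is split across the
    robots they supervise; robot capex/maintenance is amortized per robot-hour."""
    return operator_wage / robots_per_operator + robot_amortized_cost

wage, capex = 30.0, 5.0  # $/hour, hypothetical figures
for ratio in (1, 10, 100):
    print(f"1:{ratio} -> ${cost_per_robot_hour(wage, ratio, capex):.2f}/robot-hour")
# 1:1 -> $35.00, 1:10 -> $8.00, 1:100 -> $5.30
```

At 1:1 operator labor dominates the cost; by 1:100 the robot’s own amortized cost does, which is exactly the transformation the thesis describes.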
These are real, unsolved problems. They’re also exactly the problems that create defensible businesses for the companies that solve them. And being software, they’re cheaper to build than hardware or vertical solutions.
8. Conclusion: The Path Forward
Every humanoid demo you’re watching? There’s probably a teleoperator behind it. The question is: who’s building the infrastructure that makes that teleoperator effective in production — and that eventually makes them unnecessary?
We’ve laid out our thesis here, but we’re certain we’re wrong about some of it. We think the window is now, but we might be two years early or late. And while we think the control layer is the most interesting opportunity, it’s possible it gets absorbed as a feature by full-stack robotics companies — the way AWS subsumed many dev tools.
If you’re seeing dynamics we’re missing, or if you think our reasoning is flawed in ways we haven’t anticipated, please leave us a comment. Please take this as an invitation for discussion rather than a declaration of truth.
The future of robotics isn’t being built in AI labs. It’s being built by teleoperators — in call centers in Manila, in control rooms monitoring factory floors, and in the startups building the software to make this all work. We’re looking for the founders who understand this and are building the systems those teleoperators will use.
If that’s you, email us: robotics@lunar.vc or at our personal emails: Elad and Eyal.
Dr. Elad Verbin is a Computer Scientist, based in Zagreb and Berlin. He was originally a researcher building algorithms, and is now an investor in computer-science-based startups. He is a founding partner at Lunar Ventures.
Eyal Baroz is a technologist and operator turned investor. He spent over 25 years in semiconductors, telecom, e-commerce, and re-commerce, serving as CTO/CPO at startups and unicorns and co-founding a robotics company. He is a Partner at Lunar Ventures.
Lunar Ventures is a pre-seed fund investing in deep-tech and algorithm-driven startups. We’re scientists and engineers ourselves — a fund built by technical people, for technical people. Our funds manage €100M, and we invest cheques of €500K-€1.5M, leading investment rounds across Europe and North America.
We use the term “Centaur teams” to denote humans and robots working together. This term comes from “Centaur chess,” a format pioneered by Garry Kasparov in 1998 after he lost to IBM’s Deep Blue. Kasparov proposed a new kind of chess where humans could partner with computers. The most famous result came in 2005: in a freestyle chess tournament, the winners weren’t grandmasters with supercomputers — they were two amateur players with three ordinary laptops. Their edge was process: when the computers disagreed, the humans “coached” them to investigate further. As Kasparov put it: “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” The term Centaur Team has since been adopted in other fields, in defense for example, to denote cases where the best results are achieved when humans and computers work together as a team. We think robotics is entering its Centaur era: the best results will come not from the best robots or the best operators, but from the best human-machine processes. Also check out Dario Amodei’s take on Centaur teams in AI.
There is a whole debate right now on whether Waymo is using humans to “remotely drive” cars, on whether these remote drivers are based only in the US or also in the Philippines, and on whether that’s a good thing. Waymo has recently confirmed they are using remote “assistance personnel”, but they say these personnel are only “providing advice and support to Waymo robotaxis, but do not directly control, steer, or drive the vehicle”. Waymo released this statement in response to US Senator Ed Markey, who is very concerned about the concept of non-US-citizens remotely driving vehicles in American cities. (It bears mentioning that these same Philippine residents are in fact allowed to drive vehicles in US cities, when they are physically present in the US.) In any case, the issue of teleoperation is increasingly being debated in the public sphere, which seems like a good sign that it is indeed becoming a part of the future.
This point, and this whole article, is not a normative or moral claim, just a practical one. Regardless of how we feel about it, this is the world we live in.
The connection between language models and robotics isn’t obvious — LLMs process text, while robots manipulate objects. So why do we claim that LLMs unlock robotics? The reason is that the largest bottleneck to truly-autonomous robotics was never hardware, it was judgment: the ability to handle novel situations that weren’t explicitly programmed. LLMs enable machines to have judgment. When humanity built LLMs, we found that sufficiently-large LLMs, as part of their training process, become “world models” which cognitively understand the world in the presence of fuzziness and uncertainty. They “think like humans”. This is exactly the cognitive capability robotics was waiting for. This is true even for pure-language models, which already have a deep understanding of the world that they were not explicitly programmed with. But researchers are also working on vision-language models which extend this to things that are not captured in language: they can interpret what they see, and apply cognitive processes to it. With LLMs and vision-language models, when a robot encounters something unexpected, it can reason about what to do rather than failing or calling for human intervention. This doesn’t solve manipulation (you still need motor skills) or perception (you still need sensors), but it solves the reasoning gap that made robots brittle. In Dario Amodei’s mental model of LLMs — “a country of geniuses in a datacenter” — that country of geniuses can easily be looped in whenever robots don’t know what to do, and unblock the system. This capability was simply non-existent until the last few years.
Platforms like NVIDIA’s Jetson Orin deliver 275 TOPS at 60W with sub-30ms inference latency — enough to run transformer-based control policies in real time.
Current analyst estimates put the teleoperation segment at just $890 million, growing 24% YoY to $4 billion by 2032. We think this dramatically underestimates the opportunity, because it treats teleoperation as a niche rather than as enabling infrastructure for the broader robotics market.
30-80% of VR users experience motion sickness. Studies show teleoperation under latency significantly increases cognitive load as measured by fNIRS brain imaging.
Researchers found that latency below 170ms has minor impact, but beyond 300ms operations become genuinely challenging. The 200-300ms range is where performance degrades noticeably.


