He’s Got The Whole Virtual World In His Hands

Adam Berenzweig of CTRL-labs has been working for most of his life on helping humans and machines communicate with one another. His latest project is a revolutionary interface that will bring much more of our bodies into virtual realities, starting with our hands.

GiN: When talking with people who are driven pioneers in technology, we always like to ask what influenced them to take that path. Was there some cool video game or movie or experience that made you want to follow a life developing technology?

CTRL-labs’ Adam Berenzweig

Adam Berenzweig: I played video games and did some programming on a computer growing up, but my passion for interface technology was actually born out of music. As a musician, I spent a lot of time in the 90s tinkering with ways to build new kinds of instruments that engage the human body, and I was keenly aware of the tradeoff between sensitivity and expressiveness in musical instruments. This curiosity ultimately extended into the technology world, as I began to think more broadly about how tech could capture natural movements of the body. I stumbled upon some research being done by Japanese hardware hackers on EMG. They were working on technology that didn’t use camera-based systems (like Wii controllers and other mainstream devices today) but that could pick up muscle contraction strength. That’s where I really began my deep dive into this sort of body movement-centric technology.

GiN: Can you tell us a little bit about your professional background, and how you originally got involved in things like machine learning and especially extended reality or XR?

Adam Berenzweig: I did some speech recognition work in grad school, and ever since then, machine learning has been the driving force behind my decades-long commitment to exploring human-machine interactions.

I have an Electrical Engineering PhD from Columbia University. My research focused on how to compute music similarity using signal processing and machine learning. I was also the founding CTO of Clarifai, an early pioneer in deep learning and image recognition. Prior to that, I spent 10 years as a software engineer at Google, where I led the music recommendation team that launched Google Play Music and built products around machine learning across several projects, including Google News, Goggles, and Realtime Search.

GiN: Your background is heavily into machine learning, artificial intelligence and technologies like that, yet now you are working with CTRL-labs, which is trying to make strides into XR. How closely are the two technologies related, and how much does one depend on the other?

Adam Berenzweig: Machine learning has been the basis of my work with human-machine interactions, which has naturally carried over into the XR space. As CTRL-labs’ Director of R&D, I’m leveraging deep learning to create an interface that fills a glaring gap in immersive technology. So far, human-machine interactions in XR have favored interpreting users’ head motions and developing HUDs, but our hands have sort of been left in the dust. I’m using my expertise to give users their hands back and extend the immersive experience to this long-overlooked mode of output.

GiN: What are some of the limitations holding back XR today?

Virtual Reality by European Space Agency (CC BY-SA 3.0 IGO)

Adam Berenzweig: There’s no doubt that XR today can give us fantastic, immersive experiences visually. But within this amazing world, we’re stuck with using controllers instead of our hands, which severely hinders our entire experience. As impressive as these activities or environments are, our experience is really defined by the type of control we have. When you think about it, there’s not much control left when our hands — and therefore, our ability to engage with the world — are reduced to these two very non-human sticks.

GiN: Can you talk a little bit about CTRL-labs, your goals, and why the company was founded? Also, why is it named that?

Adam Berenzweig: CTRL-labs was founded four years ago by three Columbia PhD neuroscientists: Dr. Patrick Kaifosh, Dr. Tim Machado, and Dr. Thomas Reardon, creator of Internet Explorer. Our work covers a wide range of scientific disciplines — we’re at the intersection of computational neuroscience, statistics, machine learning, biophysics, and hardware. Our goal is ultimately to give control back to people, and we plan to do this by giving the neuroscience and developer communities the tools they need to reimagine humans’ relationships with machines.

GiN: Tell us about this new neural interface your company is developing. How does it work, and why is it going to be better than the input devices we use today?

Adam Berenzweig: You may have heard of some controversial brain-machine interface projects out there that require invasive surgery. Or you’re probably aware of some non-invasive systems that are camera-based — but even those end up being completely useless if the field of view or hand visibility is compromised. We’ve removed these major roadblocks of existing input devices and made an interface that’s wireless, non-invasive, independent of cameras, and comfortably worn around the wrist. CTRL-kit is a lightweight and durable electromyography device that uses surface EMG sensors and advanced machine learning to decode and translate neural signals into control — and even better, it’s ready to be explored by developers right out of the box.
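To make that pipeline concrete, here is a minimal, hypothetical sketch of the general pattern he describes: windowing a multi-channel surface-EMG stream, extracting simple features, and training a classifier that maps muscle activity to discrete control events. The channel count, window size, feature set, and gesture labels are illustrative assumptions, not the actual CTRL-kit SDK or CTRL-labs’ decoding pipeline.

```python
# Illustrative sketch only: NOT the CTRL-kit SDK or CTRL-labs' real pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_CHANNELS = 16   # hypothetical electrode count around the wrist
WINDOW = 200      # samples per analysis window (e.g. 100 ms at 2 kHz)

def emg_features(window: np.ndarray) -> np.ndarray:
    """Classic per-channel surface-EMG features: mean absolute value, RMS, zero crossings."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zero_crossings = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, rms, zero_crossings])

# Synthetic stand-in for recorded, labeled EMG windows
# (labels: 0 = rest, 1 = pinch, 2 = index-finger press).
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=300)
X = np.stack([emg_features(rng.normal(scale=1.0 + label, size=(WINDOW, N_CHANNELS)))
              for label in labels])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# At runtime, each incoming window becomes a control event the application can consume.
new_window = rng.normal(scale=2.0, size=(WINDOW, N_CHANNELS))
prediction = int(clf.predict(emg_features(new_window).reshape(1, -1))[0])
print("decoded control event:", {0: "rest", 1: "pinch", 2: "index_press"}[prediction])
```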

GiN: What is the new interface able to do today, and what are the next steps in its development?

Adam Berenzweig: Today, CTRL-kit has the capacity to recreate the movements in XR we’d typically associate with motion capture, without the use of a camera, joystick, or button. We’re giving people their hands back in XR — we’re able to recreate a lifelike, low-latency rendering of hands with a high dynamic range. That means you can engage muscle contractions to pinch, apply force, and use individual fingers, all without a controller. Looking ahead, we’re going to continue iterating on CTRL-kit to improve its functionality and design, and provide higher-level control signals like the equivalent of buttons and joysticks to increase the number of applications both within XR and beyond.

GiN: What kinds of XR activities will we be able to perform in the future using the new interface which are impossible or difficult today?

Adam Berenzweig: The new realm of possibilities here is really endless and up to the human imagination. Controller-free XR is now within reach. Our hands are free to explore and engage with this space in a way that’s a natural extension of our thoughts and intentions. You can now actually pet the dog, turn the doorknob, and pick up the coins in XR. And beyond these more immediate types of interactions, “getting your hands back” in VR also serves a social function. The way we use our hands when we talk and interact with other people plays a critical role in our communication — neural interfaces can help ensure we don’t lose that social presence in a VR world.

Going even further, there’s a whole class of experiences that are harder to imagine because there’s nothing currently like them. Imagine being able to control an animal or a puppet as easily as you can move your own hand; or having a new limb or a tail; or telekinetic powers with tiny, imperceptible hand twitches. It’s hard to imagine these scenarios (even in the XR world) because we haven’t had technology that gives us that kind of control yet. But neural interfaces are bridging that gap to make these experiences not only imaginable, but actually attainable.

GiN: Do you anticipate a need for other types of interfaces to further advance XR?

Adam Berenzweig: There are a lot of very interesting possibilities for new types of interfaces in the future — the challenge looking forward will be getting all of these potential interfaces to cooperate, and work in sync with each other. Looking far ahead into the future, I can imagine a multi-modal approach where we have interfaces that extend to other parts of the body. But right now, we’re laser-focused on the brain, and we will be for a while. Neural interfaces truly are the logical, pragmatic solution to our XR problems, and are the inevitable future of these kinds of interactions.

GiN: Because we are a gaming publication, I have to ask you some game questions. Have you thought of any games or game applications that could make use of your new controller?

Adam Berenzweig: Of course. We’ve already been able to reshape arcade classics like Asteroids. There are videos out there of my colleague playing the game (and winning!) without even physically touching a controller. Think of the countless games where you use an arcade-style joystick or the arrow/WASD keys to navigate, and a big red button or the space bar to “shoot.” Now think of how much more comfortable you’d be — and how much more in control you’d feel — playing the same games with your arms folded or hands in your pockets. Asteroids, Pac-Man, you name it. You don’t have to move your hands anymore, because your intention is the controller!

Now imagine this level of control in more complex console games with 3D environments. For example — if your left hand is already preoccupied in “Press-and-Hold” mode while running away from something, why should you have to momentarily disarm your right hand’s actions just to pick up an item off the ground? What if you could do all these things simultaneously and more naturally with just a slight, imperceptible twitch of your hand? For every hand motion or muscle you can think about activating, there’s a potential corresponding game control you can execute without moving or compromising control.
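As a rough illustration of that idea, here is a hypothetical sketch of how decoded per-hand gesture events might be routed to independent game actions, so a left-hand hold and a right-hand pickup can happen in the same frame. The gesture names, the GestureEvent type, and the decode_events() stub are invented for this example and are not part of any real CTRL-labs API.

```python
# Hypothetical sketch: routing decoded per-hand gesture events to game actions.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class GestureEvent:
    hand: str      # "left" or "right"
    gesture: str   # e.g. "fist_hold", "index_twitch", "pinch"

# Each (hand, gesture) pair maps to an independent game action, so the left
# hand can stay in a press-and-hold sprint while the right hand picks up an item.
BINDINGS: Dict[Tuple[str, str], Callable[[], None]] = {
    ("left", "fist_hold"):     lambda: print("hold sprint"),
    ("right", "index_twitch"): lambda: print("fire"),
    ("right", "pinch"):        lambda: print("pick up item"),
}

def decode_events() -> List[GestureEvent]:
    """Stand-in for a real decoder; returns whatever was detected this frame."""
    return [GestureEvent("left", "fist_hold"), GestureEvent("right", "pinch")]

def game_tick() -> None:
    # Every event detected in a frame is dispatched, so simultaneous intents
    # from both hands translate into simultaneous in-game actions.
    for event in decode_events():
        action = BINDINGS.get((event.hand, event.gesture))
        if action:
            action()

game_tick()   # prints "hold sprint" and "pick up item" in the same frame
```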

GiN: Advancing a little bit more into the future, movies like Ready Player One show a vision of an XR that is nearly like real-life, one where it’s almost possible to live inside the virtual reality, or at least forget about real life for a little while. How close are we to ever getting to something like that, or is it even possible?

Ready Player One Movie Poster

Adam Berenzweig: I think it’s definitely possible. Thus far, living inside an XR feels like just that — living inside an XR. You’re wearing clunky goggles, holding onto dehumanizing stick controllers, and very much aware of the situation. As immersive and impressive as these environments may be, they’re nowhere near real life, because out here you actually have continuous control of how you engage with the world around you.

That being said, I think full reality simulations are neither necessary nor sufficient to be interesting to people. The truth is, we’ll probably never get quite there — a completely lifelike XR simulation — but that doesn’t really matter, because within that XR space we’d be able to do other, new things that are so much more amazing and interesting than reality!

GiN: Obviously we are not going to be jumping into realistic virtual worlds anytime soon, but what do you think the next milestone in XR development will be, where people can see that it’s a huge step in the right direction?

Adam Berenzweig: I think the next milestone starts with the question, “what’s the biggest problem in XR today?” Of course, that problem is control; the same problem that’s hindering our broader relationship with technology. We take in an astronomical amount of information from the digital world every day — from phone screens, to video games, to whatever’s playing on your VR goggles — but our output is way behind the curve. That’s where the next step in XR needs to be, and it’ll be driven by neural interfaces. In order for these virtual worlds to feel more realistic, we have to make our interactions more realistic. People will see a huge leap forward once we begin to blur the line between human action in the real world and human action in the virtual world.

GiN: Anything else you want to add about CTRL-labs or XR in general that we should know about or expect?

Adam Berenzweig: We have a broader near-term vision that goes beyond XR. Everyone at CTRL-labs is definitely excited to reshape this space — especially for gamers — but we also have our eyes on robotics, productivity, and clinical research. Neural interfaces can revolutionize immersion and control in a variety of applications, so we want all sorts of developers to use the CTRL-kit SDK and API to begin integrating neural control into their respective fields.

In the long term, neural interfaces will go mainstream. Our scientific groundwork and developer partnerships will pave the way for mass consumer adoption of this tech. We’ll know we’ve done our job when the world starts to ask, “Wait, people back in the day actually had to move their hands to control their devices?”
