Magic Leap revealed a mixed reality headset that it believes will reinvent the way people interact with computers and reality.
Unlike the opaque diver’s masks of virtual reality – which replace the real world with a virtual one – Magic Leap’s device, called Lightwear, resembles goggles that users can see through, as if wearing a special pair of glasses. The goggles are tethered to a powerful pocket-sized computer, called the Lightpack, and can inject lifelike, moving, reactive people, robots, spaceships – anything – into a person’s view of the real world.
The whole company rides on the back of founder Rony Abovitz, a bombastic bioengineer who helped design the surgery-assisting robot arms of Mako Surgical Corp. The sale of that company for $1.65 billion funded nearly all of Magic Leap’s first four years.
The last time the company spoke publicly in any great detail was about a year ago, when it invited Wired magazine to its South Florida headquarters to see the tech in action, but not to write about what the hardware looked like. Earlier this month, Glixel received a similar invitation. Abovitz invited me down to visit the company headquarters in Fort Lauderdale to write about the science of the technology and to finally detail how the first consumer headgear works and what it looks and feels like.
This revelation – the first real look at what the secretive, multi-billion dollar company has been working on all these years – is the first step toward the 2018 release of the company’s first consumer product. It also adds some insight into why major companies like Google and Alibaba have invested hundreds of millions of dollars into Magic Leap, and why some researchers believe the creation could be as significant as the birth of the Internet. Technology like this “is moving us toward a new medium of human-computing interaction,” said David Nelson, creative director of the MxR Lab at USC Institute for Creative Technologies. “It’s the death of reality.”

The Leap
My first experience with Magic Leap’s technology in action occurred in a sort of sound stage, inside a building separated from the rest of the complex. This is where the company tests out experiences that might one day be built for use in a theme park or other large area. As with almost all of the demos I experienced over the course of an hour or so, I can describe the intent and my own thoughts, but I agreed not to divulge the specifics of the characters or IP. In many cases, these are experiences that will never see the light of day; instead, they were constructed to give visitors who pass through the facility – under non-disclosure agreements – a chance to see the magic in action.
This first, oversized demo dropped me into a science-fiction world, playing out an entire scene that was, in this one case, augmented with powerful hidden fans, building-shaking speakers and an array of computer-controlled, colorful lighting. It was a powerful experience, demonstrating how a theme park could potentially craft rides with no walls or waits. Most importantly, it took place among the set-dressing of the stage – the real-world props that cluttered the ground and walls around me – and while it wasn’t indistinguishable from reality, it was close. To see those creations appearing not on the physical world around me, like some sort of animated sticker, but in it, was startling.

Next, Sam Miller, senior director of systems engineering, and Shanna De Iuliis, senior technical marketing manager, walked me through the process of launching three screens – essentially computer monitors – into my world. They looked like large, extremely flat television screens or monitors. More importantly, they stayed put, so I could pop them up wherever I wanted, even creating an array of them that I could use just as I would any multi-monitor setup. In this case, I had three screens up, each spread out just enough that I had to turn my head to look at them. And Gimble, the small helper robot summoned earlier in the demos, still hovered off to the side.
After the screens, I tried a little demo that created a floating, four-sided television, each side showing live TV. I could walk around the object, watching different channels. All of the channels kept playing whether I was watching them or not.
At another point, a wall in the room suddenly showed the outline of a door with bright white light shining through it. The door opened and a woman walked in.
She walked up to me, stopping a few feet away. The level of detail was impressive, though I wouldn’t mistake her for a real person – there was something about her luminescence, her design, that gave her away. While she didn’t talk or react to what I was saying, she has that ability. Miller had her on manual control, running her face through a series of emotions: smiling, angry, disgusted. I noticed that when I moved or looked around, her eyes tracked mine. The cameras inside the Lightwear were feeding her data so she could maintain eye contact. It was a little unnerving, and I eventually found myself breaking eye contact to avoid being rude.
One day, this human construct will be your Apple Siri, Amazon Alexa, or OK Google, but she won’t just be a disembodied voice. She will walk with you, look at you, and deliver AI-powered, embodied assistance.
The demonstrations were an odd collection of ideas, including a massive comic that you could walk up to and view as if looking through a window.

In another example, they showed me a quick demonstration of something they called volumetric capture. The team went to some capture studios and had live performances of actors recorded with special equipment. They then dropped an actor into the system, essentially placing the live performance into whatever room the user happens to be standing in. While some of the finer points of the capture were rough – the area between nose and lip was a little too rounded together – the overall impression was of being able to watch, up close, and walk around a live performance of real people. I can’t describe what the capture showed, only that it was of someone moving very quickly, and my view of it kept up. There were no stutters or slowdowns, even as I walked around the performance, up close and far away. And, I was told, the performance could be made larger or shrunk down to fit in the palm of my hand.
Light Fields
“I call this the cockroach of the industry because it just never dies and it needs to just stop,” Abovitz says. We’re looking at a monitor in his office that’s displaying the image of a stereoscope from the 1830s. A stereoscope takes two pictures of the same thing, each shot from a slightly different angle, and places them a short distance from your eyes. You hold the device up to your face – a piece of wood blocks each eye from seeing anything but the image in front of it – and stare out at the two pictures. The result is a sort of 3D. But Abovitz is a bit more clinical in his description of the technology that he thinks has run its course. “It causes an accommodation conflict so your eyes can’t function normally,” he says. “By looking at these two images in an unnatural way, you create this unnatural 3D image.”
Everything that is 3D tends to come from this 1838 technology. “It’s sort of distressing to me,” Abovitz says. “It’s from the 1800s, but it keeps reappearing. It showed up in the red-and-blue glasses in movie theaters. It showed up in the Sixties, [through the] 2000s. When VR was coming back again, it’s like the same thing; it’s like we’re still using this same idea.”

Where the 1830s technology uses two flat images, virtual reality essentially uses two screens. Abovitz thought there had to be a better way. He was uninterested in improving virtual reality; instead, he sought a better way to create images that could be placed into a person’s view of the real world. In short, he was interested in mixed reality. Where virtual reality recreates everything you see, and augmented reality can inject images into your world, mixed reality adds images to the real world while also being aware of you and the world you inhabit. So, for instance, a mixed reality horse would know where your couch is and not to walk through a wall in your bedroom.
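To make that scene awareness concrete, here’s a toy sketch – my illustration of the principle, not Magic Leap’s code – of a virtual character consulting a scanned map of the real room before it moves:

```python
# A toy sketch of scene awareness: a virtual character consults a map of the
# real room before moving. The occupancy grid and helper below are
# illustrative assumptions, not Magic Leap's actual API.

# Grid of the room built from the headset's depth sensing; True marks a cell
# occupied by something real (a couch, a wall).
room = [
    [False, True,  False, False],
    [False, True,  False, False],
    [False, False, False, False],
]

def can_step(x, y):
    """A virtual character may only enter cells that are free in the real room."""
    in_bounds = 0 <= y < len(room) and 0 <= x < len(room[0])
    return in_bounds and not room[y][x]

# The horse at (0, 0) tries to walk right, into the couch at (1, 0):
pos = (0, 0)
nxt = (pos[0] + 1, pos[1])
if can_step(*nxt):
    pos = nxt
print(pos)  # (0, 0) -- it stops rather than clipping through the couch
```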
As Abovitz started to look into how “we got stuck in this mess and what the way out was,” he discovered that there were two things going on in the world of mixed reality and perception that he wanted to study.
The first was something called the analog light field signal. The light field is essentially all of the light bouncing off all of the objects in a world. When you take a picture, you’re capturing a very thin slice of that light field. The eye, however, sees much more of that light field, allowing a person to perceive depth, movement and a lot of other visual subtleties. The other thing that Abovitz wanted to figure out was how that light field signal makes its way into your brain, through the eye and into the visual cortex. “The world you perceive is actually built in your visual cortex,” he says. “The idea is that your visual cortex and a good part of the brain is like a rendering engine and that the world you see outside is being rendered by roughly 100 trillion neural connections.”
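Vision researchers have a standard formalism for this idea – the plenoptic function of Adelson and Bergen. It comes from the graphics and vision literature, not from Magic Leap, but it makes the “thin slice” point precise:

```latex
% The plenoptic function: the light field as the radiance P arriving at
% every position (x, y, z), from every direction (theta, phi), at every
% wavelength lambda, at every moment t.
P(x,\, y,\, z,\, \theta,\, \phi,\, \lambda,\, t)
```

A photograph fixes the viewpoint and the instant, collapsing those seven variables to a flat two-dimensional slice; an eye moving through the world samples far more of the function.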
The current thinking is that about 40 percent of your neurological power is being used for visual processing. That can jump up to 70 to 80 percent when you’re doing something like playing a sport. “You’re basically creating the visual world,” he says. “You’re really co-creating it with this massive visual signal which we call the dynamic analog light field signal. That is sort of our term for the totality of the photon wavefront and particle light field everywhere in the universe. It’s like this gigantic ocean; it’s everywhere. It’s an infinite signal and it contains a massive amount of information.”
The massive amount of complicated information involved in creating an artificial light field makes it hard to do, let alone in a way that includes motion. In 2011, Abovitz was trying to puzzle this out with a friend of his who studied theoretical physics at Caltech. They came up with the idea that the human visual system was acting like a filter. The eye wasn’t really seeing anything, but instead was filtering out a thinner stream of light from that massive field and then feeding it to the visual cortex. “At this point, we were sort of on our own,” he says of the theory. “We were way off the grid.”
The conclusion they eventually reached was that the visual cortex functions a lot like a graphics processor in a computer. It takes the information fed to it by the eyes and renders a world for the person to perceive. And that it only really needs to be fed a very sparse amount of data to do that. “Maybe we all have genetically passed on versions of the world and all we do is intake sparse change data to update that model, but we have a persistent model,” Abovitz says. “That seems to really hold together with how people have evolved in the sense of how we need to survive, reproduce and build shelter and that all of our visual-spatial systems seems to be around from eyeball to fingertip and fingertip to midfield and midfield to farfield. This kind of whole structure where a tiger very far away looks like a flat piece of cardboard, but a tiger really close is incredibly detailed and spatial.”
What that would mean is that the brain grabs more information and renders more detail when it needs to. And that completely changed the way Abovitz and his team were thinking about the light field problem. Suddenly, if the theory was right, technology didn’t need to capture the entirety of the light field and recreate it; it just needed to grab the right bits of that light field and feed them to the visual cortex through the eye. Abovitz calls it a systems engineering view of the brain. “Our thought was, if we could figure out this signal and/or approximate it, maybe it would be really cool to encode that into a wafer,” he says. “That we could make a small wafer that could emit the digital light field signal back through the front again. That was the key idea.”
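If the visual cortex only needs sparse data scaled to what the viewer can resolve – the cardboard tiger far away, the detailed tiger up close – then the rendering burden shrinks with it. A minimal sketch of that level-of-detail idea, my illustration of the principle with made-up thresholds rather than Magic Leap’s algorithm:

```python
import math

# Feed the eye only as much detail as the object's apparent size warrants.
# The thresholds and level names are illustrative assumptions.

def angular_size_deg(width_m, distance_m):
    """Visual angle, in degrees, that an object subtends at the eye."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

def detail_level(width_m, distance_m):
    angle = angular_size_deg(width_m, distance_m)
    if angle < 1:         # far away: the "flat piece of cardboard" tiger
        return "billboard"
    if angle < 10:        # midfield: simplified geometry
        return "low-poly"
    return "full-detail"  # up close: "incredibly detailed and spatial"

print(detail_level(2.0, 200.0))  # billboard
print(detail_level(2.0, 3.0))    # full-detail
```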
Suddenly, Abovitz went from trying to solve the problem to needing to engineer the solution. He was sure that if they could create a chip that would deliver the right parts of a light field to the brain, they could trick it into thinking it was seeing real things that weren’t there. The realization meant that they were trying to get rid of the display and just use what humans already have. “There were two core zen ideas: the no-display-is-the-best-display and what’s-outside-is-actually-inside. And they turned out to be, at least from what we’ve seen so far, completely true. Everything you think is outside of you is completely rendered internally by you, co-created by you plus the analog light field signal.
“Everyone is inherently creative because everyone is constantly making their own Avatar world. The world you are living in, you are creating constantly; co-creating constantly, which is super exciting.”
The next step was making something that could prove the theory.
Hello World
Magic Leap’s “Hello World” moment may be lost on others, but for the team, who at that point had been working for four years to prove their theory, it was euphoric.
Also, it was a single pixel.
“The first real moment, which no one will care about, is when we had a pixel, and we were using a joystick, and we are just moving a pixel around the room,” Abovitz says. “It was like Pong in 1970 or something. Well, less sophisticated than Pong. It was just a little dot that we were moving around the room and it was like, ‘Whoa, did we just do that?’”

Abovitz calls those early years at Magic Leap, before their 2014 breakthrough, “wandering in the desert.” In 2013 they started building what would become the first working prototype. Abovitz shows me a picture of the thing, which they call the Bench. I tell him it looks a bit like a massive, steampunk guillotine, but he corrects me. It’s more like something out of A Clockwork Orange, he says. You put your head under a suspended mass of electronics and wires, in a device that locks your entire head in place – and you sit there, very still, and wait as the team attempts to use a signal generator to trick your brain into seeing something that isn’t there.
The process was slow-going and very frustrating. Abovitz said he frequently traveled to Kitty Hawk and the Saturn V building for inspiration. At this point, the team was a motley crew that included people from NASA, computer scientists, physicists and comic book creators. The team kept iterating and iterating on their idea until finally, as Abovitz puts it, “we had our dot.”
The dot proved the theory, and after that the innovations came much faster. Soon, they were dropping characters from a comic book idea into the real world and discussing the idea of a game called Monster Battle. The concept would have had kids going out to a real playground and having their giant monsters fight it out with each other overhead.

The timing was fortunate. Abovitz knew that he would eventually run out of the money earmarked for the start-up and that external funding was a must. Luckily, the pixel and those two characters were enough to show Google and some others that Magic Leap was heading in the right direction. Magic Leap managed to raise $540 million in venture funding by the end of 2014.
They used the money to move out of the single-room office and begin working on their first wearable version of the tech, which the company now refers to affectionately as the Cheesehead. “That was like, let’s take the light field signal generator stuff and put computer vision stuff on it and rig it up and start walking around,” Abovitz says. “And it weighed like tens of pounds. And that was this moment where we were like, we need to combine motion and high-end computer vision.”
The Cheesehead showed that they could extract those key elements of the signal, down to the nanostructure, and place them into a wafer that would create the digital light field signal used to help people view this new mixed reality world. The large, oddly shaped device also allowed the company’s growing software team to test out the code they were working on.
Over the next two years, the team continued to iterate on software, hardware, science and design. To speed up production and testing cycles, they moved into a massive space outside of Fort Lauderdale and built a wafer fabrication plant on an underground floor. “We went on this really crazy sprint from basically October 2014 to December 2017,” he says.
The Magic
In the catacombs of Magic Leap’s massive complex, tucked away in underground clean rooms, robotic arms and bunny-suited humans quietly collaborate, assembling a steady stream of the photonic chips that empower the company’s new, perhaps better reality. Abovitz guides me down a long cement corridor, stopping occasionally at a window to point out the work being done inside. Paul Greco, Magic Leap’s senior vice president in charge of hardware and engineering programs, explains that the entire underground floor had to be gutted and rebuilt to turn it into the sort of facility that could do what they needed. There’s not a lot that anyone at Magic Leap will tell me about the wafers being manufactured on site, likely because they seem to be what puts the magic in Magic Leap. Abovitz calls the smallish, translucent rectangles photonic wafers. But if he hadn’t said anything, I’d have described them as a sort of lens.
“Up until this point, we’ve been kind of in the woodshed, first developing the notion of the signal and then trying to invent the transistor of that signal,” Abovitz says. “We’re not moving electrons around with transistors; we are moving photons, a photonic signal, with a three-dimensional array of nanostructures. We don’t really have a name for them yet, so that is what I’ve been calling Sea Monkeys, but that is not a name we could use. I don’t want the Sea Monkey people to get mad at us. So we’re going to come up with a cool name for our structures.”
The wafer, he says, moves photons around in the 3D nanostructure in a way that allows it to output a very particular digital light field signal. Those photonic wafers eventually make their way into a larger, rounder lens, which is then built into a sleek pair of goggles. My first close look at the full Magic Leap hardware comes in a secluded space upstairs that resembles a fashion showroom. All of the components of Magic Leap’s device are tied together with a similar design language, says Gary Natsume, senior vice president of design at the company: speckled greys that Natsume calls moon dust, and perfect circles. This system – the ninth generation of the hardware – is made up of three components: a headset and a small pod-like computer, connected by a single long cable, plus a controller, known simply as Control. The headset looks almost like a pair of goggles held in place with a thick strap. They’re lightweight, modern-looking, if not exactly stylish, and certainly much sleeker than anything virtual reality has to offer. “The lenses are a very iconic form,” Natsume says. “The aspiration is that eventually this will become like glasses and people will wear them every day.”

The headband that holds the goggles in place uses a “crown temple” design, Natsume says. “It came from our study on how to distribute weight evenly around your head.” To put on the goggles, you hold either side of the plastic crown and pull; the crown spreads apart into left, right and back pieces. Then you slide it onto your head like a headband. Two short cables come out of the back of the headband and merge into one before running four or five feet down to the system’s Lightpack. The Lightpack is two rounded pods joined smoothly at one end, leaving a gap between them. It’s designed, Natsume says, to clip onto your pocket or onto a shoulder strap that Abovitz describes as a sort of guitar strap.
The goggles will come in two sizes, and the forehead pad, nose pieces, and temple pads can all be customized to tweak the comfort and fit. By the time they launch, the company will also take prescription details to build into the lenses for those who typically wear glasses.
The controller is a rounded bit of plastic that sits comfortably in your hand and features an array of buttons, six-degrees-of-freedom motion sensing, haptics, and a touchpad.
The Lightwear and Lightpack are almost toy-like in their design, not because they feel cheap – they don’t – but because they’re so light and there seems to be so little to them. Abovitz, though, is quick to point out just how much is packed into that small space. “This is a self-contained computer,” he says. “Think about something close to like a MacBook Pro or an Alienware PC. It’s got a powerful CPU and GPU. It’s got a drive, WiFi, all kinds of electronics, so it’s like a computer folded up onto itself.”

Then he points to the Lightwear goggles. “There is another powerful computer in here, which is a real-time computer that’s sensing the world and does computer vision processing and has machine learning capability so it can constantly be aware of the world outside of you,” he says. “So you have the least amount of weight with what is like a satellite of engineering up here.”
The headset can also sense the sound around a user through four built-in microphones, and it uses a real-time computer vision processor along with – I counted six – external cameras to track the wearer and the world they’re in, in real time. Tiny, high-end speakers built into the temples of the device provide spatial sound that can react to your movement and the movement of the creations with which you’re interacting. “This isn’t a pair of glasses with a camera on it,” he says. “It’s what we think of as spatial computing. It has full awareness.”
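For a sense of what sound that reacts to your movement has to compute, here is a back-of-envelope sketch using generic audio math – inverse-distance rolloff plus constant-power panning – rather than anything Magic Leap has disclosed:

```python
import math

# Per-ear gain from a source's distance and bearing. Generic audio math,
# not Magic Leap's actual pipeline.

def spatial_gains(source_xy, listener_xy, facing_rad):
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    loudness = 1.0 / max(dist, 1.0)             # inverse-distance rolloff
    rel = math.atan2(dy, dx) - facing_rad       # bearing relative to gaze
    pan = -math.sin(rel)                        # -1 = hard left, +1 = hard right
    left = loudness * math.sqrt((1 - pan) / 2)  # constant-power panning
    right = loudness * math.sqrt((1 + pan) / 2)
    return left, right

# A robot hovering two meters to the right of a listener facing "north":
print(spatial_gains((2, 0), (0, 0), facing_rad=math.pi / 2))  # (0.0, 0.5)
```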
Abovitz declines to say what the headset’s GPU, CPU or other specs are, nor will he say what the battery life is. They need to hold something back to release later, he says; besides, they’re still working on battery optimization. As we leave, I notice a long table shrouded in a white sheet. What’s under there? I ask. The next prototypes, Abovitz says.
Sigur Ros Music and Weta Robots
As things wrapped up in the demo room, Miller asked me what I thought, and I told him: The goggles were so comfortable I almost forgot I was wearing them. The computer attachment fit neatly into my pocket, and its tether to the headset never got in my way. The controller felt intuitive almost immediately. The sound was both accurate and powerful. But I had one concern: the field of view.
Like Microsoft’s HoloLens, which uses a different sort of technology to create mixed reality, Magic Leap’s Lightwear doesn’t offer a field of view that matches your eyes’. Instead, the Magic Leap creations appear in a field of view roughly the shape of a rectangle lying on its side. Because the view is floating in space, I couldn’t measure it, so I did the next best thing: I spent a few minutes holding out first a credit card and then my hands in front of my face, trying to gauge how big that invisible frame is. The credit card was much too small. I ended up with this: The viewing space is about the size of a VHS tape held in front of you with your arms half extended. It’s much larger than the HoloLens’s, but the frame is still there.
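Simple trigonometry turns that comparison into rough numbers. A standard VHS tape measures about 18.7 by 10.2 centimeters; the roughly 35-centimeter viewing distance is my guess at “arms half extended”:

```python
import math

# Converting the "VHS tape, arms half extended" comparison into an angle.

def fov_deg(size_cm, distance_cm):
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

print(round(fov_deg(18.7, 35)))  # ~30 degrees horizontal
print(round(fov_deg(10.2, 35)))  # ~17 degrees vertical
```

Treat those figures as illustration rather than specification; Magic Leap hasn’t published official numbers.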
“I can say that our future-gen hardware tech significantly expands the field of view,” Miller says. “So the field of view that you are looking at on these devices is the field of view this will ship with. For the next generation product, it will be significantly bigger. We have that stuff working in the labs.”
De Iuliis adds that developers have the option to fade the edges so that there won’t be such a harsh break where the image stops, “and your brain will kind of naturally fill in the gaps for you.”
Miller wanted to show me one other neat trick. He walked to the far end of the large room and asked me to launch Gimble. The robot obediently appeared in the distance, floating next to Miller. Miller then walked into the same space as the robot and promptly disappeared. Well, mostly disappeared, I could still see his legs jutting out from the bottom of the robot.
My first reaction was, “Of course that’s what happens.” But then I realized I was seeing a fictional thing created by Magic Leap technology completely obscure a real-world human being. My eyes were seeing two things existing in the same place, and they had decided that the creation, not the engineer, was the real thing and simply ignored Miller – at least, that’s how Abovitz later explained it to me.
Finally, I went to a separate room to see an experience that I can talk about in full detail. Icelandic experimental rock band Sigur Ros has been quietly working with some folks at Magic Leap to create an experience that they like to call a soundscape. For this particular demo, the team had me put on earbuds plugged into the goggles. “What you are about to see is a project called Tonandi,” Mike Tucker, technical lead on the project, tells me. “What you’re going to see is not a recorded piece of music but an interactive soundscape. That’s how they like to describe it.”
Tonandi starts by creating a ring of ethereal trees around you and then waiting to see what you do next. Inside, floating all around me, are these sorts of wisps dancing in the air. As I wave my hands at them, they create a sort of humming music, vanishing or shifting around me. Over time, different sorts of creations appear, and I touch them, wave at them, tap them, waiting to see what sort of music the interaction will add to the growing orchestral choir that surrounds me. Soon pods erupt from the ground on long stalks, and grass springs from the carpet and coffee table. The pods open like flowering buds, and I notice stingray-like creatures made of colorful lights floating around me. My movements don’t just change this pocket world unfolding around me; they let me co-create the music I hear, combining my actions with Sigur Ros’ sounds.
Experiencing Tonandi was effortless; the sort of surreal, magic-realism-infused musical creation that you could hand over to anyone to try. But behind the scenes, a lot was going on. Tucker says the project uses a lot of the tech built into Magic Leap’s gear. “We’re using a bunch of unique features to Magic Leap,” he says. “We’re using the meshing of the room, we’re using eye tracking, and you’re going to use gesture, our input system for most of the experience.”
Later, over lunch in a conference room, Abovitz says that the team did once experiment with a horror experience. “It was terrifying,” he says. “People would not go into the room anymore. It was very, very, very scary, like almost life-threateningly scary, so we kind of said, ‘OK, let’s put that aside for now.’”

There was much more to see and do, but not in the time I had. One experience I had hoped to check out was based on something created by Weta Workshop, the special effects and prop company behind movies like The Lord of the Rings, Blade Runner 2049 and Thor: Ragnarok. But it wasn’t available to test.
The in-progress game, the first created by Weta’s newly formed Weta Gameshop division, is designed around the universe of Dr. Grordbort, a fantastical universe created by Weta concept designer Greg Broadmore and owned by Weta co-founders Richard Taylor and Tania Rodger. There are about 55 people now working on the game in the new division, which is headed up by Broadmore, Taylor tells Glixel. “We have been working on it for about five years,” Taylor says. “We’ve been developing the game in direct correlation with the hardware and software development at Magic Leap.”
The game, which Taylor believes will be available when the hardware launches next year, is a sort of first-person shooter set in the off-kilter, tongue-in-cheek world of Dr. Grordbort. In the fiction of the game, a planet of robots has figured out a way to fashion portals to Earth and is set on invading the planet. The player has to stop them from coming through the portals and wreaking destruction, using the system’s controller, which appears as a ray gun during play. “When you enter the game, portals open up in your lounge wall or bedroom wall, and you can look through into the robot planet,” Taylor says. “It starts out calmly enough with Gimble [the robot shown during one of the demos], and then Dr. Grordbort has a chat with you and all hell breaks loose. It’s the most frenetic real-world experience.”
The Persistence of Reality
The billion-dollar technology of Magic Leap seems so effortless at times that it would be easy to underestimate it. And in some ways, that’s one of the key innovations of the technology. It can feel like it’s not there.
One of the fundamental problems that Abovitz and his team at Magic Leap were hoping to solve was the discomfort that some people experience while using virtual reality headsets, and that nearly everyone finds in the prolonged use of screens of any type. “So our goal is to ultimately build spatial computing into something that a lot of people in the world can use all day every day all the time everywhere,” Abovitz tells me. “That’s the ambitious goal; it’ll take time to get there. But part of all day is that you need something that is light and comfortable. It has to fit you like socks and shoes have to fit you. It has to be really well tuned for your face, well tuned for your body. And I think a fundamental part of all day is the signal has to be very compatible with you.”
Finding a way to recreate a light field, he says, means that the result is a viewing experience as natural and comfortable as looking around you. That, he says, is the bedrock upon which Magic Leap’s work is built. “You don’t ever want to think about it again,” he says. “You just want to know that we took care of it, and we think that’s an important first step.”
While the bedrock of that technology may be solved, Abovitz acknowledges the delivery of the experience still hasn’t been perfected when I bring up the limited field of view. He says that the company is still fine-tuning the experience. “Field of view we think is, we’d call it workable and good for ML1,” he says of what will be the first consumer headset. “It is one of the things we will continue to iterate on in Magic Leap 2 and 3 and beyond. And there is sort of a point where you hit a form factor and a field of view … where you are sort of done there.”

Abovitz doesn’t really answer my questions about another major sticking point with the tech: the ability to deliver multiple focal points. In theory, a light field should allow you to look past a created image to the reality behind it and have that closer image lose some focus. The demonstrations I went through didn’t really present an opportunity to see if the goggles could do that effectively. So I asked if the technology supported multiple focal planes. “Magic Leap’s Lightwear utilizes our proprietary Digital Lightfield technology, which is our digital version of an analog lightfield signal,” he told me in a follow-up email. “Developers may create applications and experiences with characters and objects that appear correctly in space and allow a user to focus naturally on an object of interest, as they would in the real world.”
When I pushed for a clearer answer, Abovitz declined, citing concern over proprietary information.
Abovitz’s view that this first release of hardware is “workable and good” could explain why they’re calling it the Magic Leap One: Creator Edition. To Magic Leap, creators are developers, brands and agencies, but also early-adopter consumers. “The consumers who bought the first Mac, or the first PCs,” he says. “Everyone who would have bought the first iPod. It’s that kind of group. But it’s definitely not just a development kit. If you’re a consumer-creator you are going to be happy.”
Abovitz also declined to give me a ship date or price. But he did say that there is no doubt that the first version will ship in 2018. As for the cost: “So we have an internal price, but we are not talking about that yet,” he says. “Pre-order and pricing will come together. I would say we are more of a premium computing system. We are more of a premium artisanal computer.”
Despite not answering a number of key questions, the day spent wandering the hallways and byways of Magic Leap left me with a much clearer sense of what the company was up to – that it wasn’t just about the headset, or even the light field tech that’s driving it. Magic Leap, in releasing their system to the world, is combining a slew of technologies into something that could one day reinvent the way we deal with all technology.

The light field photonics, which can line up a fake reality within your naturally lit real one, may be the most obvious of the innovations on display, but there’s much more. The visual perception system actively tracks the world you’re moving through, noting things like flat surfaces, walls and objects. The result is a headset that sees what you do and can have its creations behave appropriately, whether that means hanging a mixed reality monitor next to your real one or making sure the floating fish in your living room don’t drift through a couch. That room mapping is also used to keep track of the things you place in your world, so they’re there waiting for you when you come back. Line up six monitors above your desk and go to sleep; the next day they’ll be exactly where you left them (a sketch of that bookkeeping appears below).

While we don’t know the specifications of the miniature computer built into the Lightpack, we do know that it is designed to run video games in a world it can see and react to. The Control is a straightforward way to interact with the system, but the system can also use Magic Leap’s own gesture tracking, which covers not just hands and fingers but your head position, voice, eye tracking and more. And finally, the technology delivers not just a light field but a sound field – the sort of aware stereo that can track your movements and react to make sure the audio is always coming from the object, no matter where you stand. It can even convey distance and intensity.

This is just the hardware and software used to create the baseline of Magic Leap One and its Lightwear, Lightpack and Control. Early next year, the company plans to open up a creator portal and deliver access to its software development kit. Then it won’t just be Magic Leap and its partners – folks like Weta, Sigur Ros, ILMxLAB and Twilio – working on new experiences. It will be everyone and anyone.
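To make the persistence idea concrete, here is a minimal sketch of the bookkeeping involved: placed objects stored in room-anchored coordinates, keyed to a recognized space, so they can be restored on the next session. The schema and file format are my illustrative assumptions, not Magic Leap’s SDK:

```python
import json

# Placed objects are stored per recognized room so they reappear where you
# left them. All names here are hypothetical.

def save_layout(room_id, objects, path="placements.json"):
    """Persist the objects placed in a room (e.g., six virtual monitors)."""
    with open(path, "w") as f:
        json.dump({room_id: objects}, f)

def restore_layout(room_id, path="placements.json"):
    """When the headset recognizes the room again, reload its objects."""
    with open(path) as f:
        return json.load(f).get(room_id, [])

save_layout("home-office", [
    {"type": "monitor", "pos": [x * 0.6, 1.4, -0.8]} for x in range(6)
])
print(len(restore_layout("home-office")))  # 6 -- right where you left them
```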
As things wrap up, the team walks me by Abovitz’s office to say goodbye. He insists on taking me to the entrance himself. Abovitz stops as he’s walking me through the cavernous space of Magic Leap toward the exit. On a wall near a stairwell hangs a lone painting. The Salvador Dali print is entitled “Spectacles with Holograms and Computers for Seeing Imagined Objects,” and Abovitz says he thinks it represents the work Magic Leap has been doing in recreating light fields. With his finger he traces the similarities, pointing to the math-like scribbling under the image, the glasses, the illustration of a field of view and what appears to be a lit up visual cortex.
Where I see an external image being absorbed and processed by the visual cortex and the brain, Abovitz sees the opposite: a painting that shows a computer-generated image being injected into the brain and then projected into the user’s world.
It is Magic Leap by Dali – Magic Realism seen through the eye of a surrealist.