The world is filled with depth-sensing camera tech now: in phones, in AR headsets such as the HoloLens and Magic Leap, and in robots and drones. But Structure Core, the first in-house depth-sensing camera from Boulder, Colorado-based Occipital, aims to be a self-contained component that could let researchers or creators build their own solutions fast.
The module connects to Windows, Linux, Android and Macs (sorry, iOS developers). It also makes some key improvements over the original 2013-era Structure sensor, a version of which clipped onto the iPad and offered an early look at where AR technology would head next.
Occipital developed that original Structure sensor with PrimeSense, the Israeli company whose tech also powered the original Microsoft Kinect. Apple acquired PrimeSense in 2013, which led to the Face ID-sensing TrueDepth camera on iPhones and the iPad Pro. According to Occipital, the original Structure sensor sold close to 100,000 units and attracted nearly 9,000 developers, who built 120 apps for the platform.
Structure Core is available to preorder now, in both a self-enclosed version and a module designed to be embedded into another piece of electronics, like an AR headset or robot prototype. Occipital will sell the sensor in waves: an early-access run arriving in “the next two weeks” will cost $599; a second wave hitting in January will cost $499. The main release, targeted for March of next year, will cost $399.
Up close and personal with Structure Core
I got a quick demo of what Structure Core could do with Occipital’s Vikas Reddy and Adam Rodnitzky. Compared to the first Structure camera, the new Structure Core can sense motion far better, and pull depth information faster.
The Structure Core senses 3D detail and depth out to around 5 meters (16 feet) via infrared projection, and can track even farther with its other cameras. There are dual infrared cameras, plus a 160-degree black-and-white camera and an 85-degree color camera, along with self-contained six-degrees-of-freedom motion tracking from a built-in six-axis IMU. That makes the new sensor, essentially, a self-contained motion-sensing computer vision camera rig.
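To give a rough sense of what software does with that kind of depth output, here's a minimal sketch of back-projecting a depth image into a 3D point cloud using the standard pinhole camera model. This is generic computer-vision math, not Occipital's SDK; the intrinsics (fx, fy, cx, cy) and the tiny depth image are made-up illustrative values.

```python
# Back-project a per-pixel depth image into 3D points with the pinhole
# camera model. Generic computer-vision math for illustration only --
# the intrinsics below are hypothetical, not Structure Core's actual
# calibration.

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: 2D list of per-pixel depth in meters (0 = no reading).
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip pixels with no depth return
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny 2x2 "depth image": one invalid pixel, three pixels at 2 meters.
cloud = depth_to_points([[0.0, 2.0], [2.0, 2.0]],
                        fx=300.0, fy=300.0, cx=0.5, cy=0.5)
print(len(cloud))  # 3 valid points
```

Scanning apps like the lobby demo described below essentially accumulate and align millions of such points, frame after frame, using the IMU and cameras to track how the sensor moved between frames.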
In a few moments, co-founder Vikas Reddy connected Structure Core and quickly scanned our office lobby, creating 3D scans of the floor, walls and furniture, and even building a map of the room’s core features.
Occipital imagines Structure Core being used in drones, where the USB 3-connected camera is light and fast enough to be part of a rapid navigation system. Or in robots: Misty, the home robotics company that spun off from Sphero (and is also based in Boulder), is using a version of Occipital's Structure Core in the head of its Misty II robot. Occipital is already working with farmers exploring the camera's use for improving robotic berry picking, and on drones that create flying scans of hard-to-reach locations for projects such as repair work.
For anyone dreaming of making a mixed-reality headset, or playing around with making autonomous robots or vehicles or drones, or for researchers looking to have advanced computer-vision cameras, a module like Structure Core could be a useful and fascinating tool.
And, ultimately, it has to be. Competition in the space is heating up, with the likes of Google (owner of Lytro's light-field camera technology), Apple (with the iPhone's TrueDepth camera and ARKit's progress), Microsoft (with HoloLens and its mixed-reality advancements) and even smaller companies like Skydio (autonomous consumer drones) ramping up increasingly impressive computer-vision products as the calendar counts down to what will undoubtedly be an even more competitive 2019.