Richard Newcombe, PhD.

I am a Postdoctoral Associate at the University of Washington, where I work on computer vision advised by Steve Seitz and Dieter Fox.

I researched robot vision for my PhD with Andrew Davison and Murray Shanahan at Imperial College London. Before that I studied with Owen Holland at the University of Essex, where I received my BSc and MSc in robotics, machine learning and embedded systems.

Ideas in perception, robotics, augmented realities and consciousness are what get me going every day, and I enjoy bringing new ideas that can be made to work into the world. In particular, I enjoy thinking of solutions that span hardware and algorithms, making use of whatever new tools, computing machinery and ways of thinking I can get access to.

newcombe@cs.washington.edu [Google Scholar]

PhD. Thesis: Dense Visual SLAM

Richard A. Newcombe, Imperial College, London, 2012

With the availability of massively parallel commodity computing hardware, we demonstrate new algorithms that achieve high-quality incremental dense reconstruction within online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications utilising online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition.

DART: Dense Articulated Real-time Tracking

Tanner Schmidt, Richard Newcombe, Dieter Fox (RSS 2014)

DART is a general framework for tracking articulated objects composed of rigid bodies connected in a kinematic chain. DART can track a broad set of objects encountered in indoor environments, including furniture, tools, human bodies, human hands, and robot manipulators.
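The kinematic-chain idea above can be sketched with forward kinematics: one vector of joint angles, composed through per-joint rigid transforms, determines the pose of every rigid body in the chain. This is a minimal illustrative example (a planar revolute chain), not DART's actual model or code:

```python
# Minimal forward-kinematics sketch: a pose vector of joint angles fixes
# the world pose of every rigid body in a chain of revolute joints.
# All names and the planar 1-DoF joint model are illustrative assumptions.
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about z (a single revolute joint)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    """4x4 homogeneous translation (a fixed link offset)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def forward_kinematics(joint_angles, link_length=1.0):
    """Return the world pose of each rigid body in a planar revolute chain."""
    T = np.eye(4)  # pose of the chain base
    poses = []
    for theta in joint_angles:
        # Each body's pose composes the previous pose, a joint rotation,
        # and the fixed offset along the link.
        T = T @ rot_z(theta) @ translate(link_length, 0.0, 0.0)
        poses.append(T.copy())
    return poses

# Two unit links: first joint bent 90 degrees, second straight,
# so the chain tip ends up at (0, 2, 0).
poses = forward_kinematics([np.pi / 2, 0.0])
tip = poses[-1][:3, 3]
```

A tracker in this style optimises the joint angles so that the predicted body poses best explain the observed depth data.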

SLAM++: Simultaneous Localisation and Mapping at the Level of Objects

Renato F. Salas-Moreno, Richard A. Newcombe, Hauke Strasdat, Paul H. J. Kelly and Andrew J. Davison (CVPR 2013)

We present the major advantages of a new object-oriented 3D SLAM paradigm, which exploits, in the loop, the prior knowledge that many scenes consist of repeated, domain-specific objects and structures. We demonstrate real-time incremental SLAM, including loop closure, relocalisation and the detection of moved objects, enabling real-time generation of object-level scene descriptions.

Real-Time Surface Lightfield Capture for Augmentation of Planar Specular Surfaces

Jan Jachnik, Richard Newcombe, Andrew J. Davison (ISMAR 2012)

We present an algorithm for real-time surface lightfield capture from a single hand-held camera, which is able to capture dense illumination information for general specular surfaces. Our system incorporates a guidance mechanism to help the user interactively during capture. We then split the light-field into its diffuse and specular components, and show that the specular component can be used for estimation of an environment map.
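The diffuse/specular split described above can be illustrated with a toy decomposition: the diffuse colour of a surface point is view-independent, so a robust per-point minimum over the captured views approximates it, and the view-dependent residual is the specular part. This is a hedged sketch of the general idea, not the paper's actual algorithm:

```python
# Illustrative (not the paper's) diffuse/specular separation: treat the
# per-point minimum over views as the view-independent diffuse colour,
# and the non-negative residual as the view-dependent specular layer.
import numpy as np

def split_lightfield(samples):
    """samples: (n_views, n_points) observed intensities per surface point."""
    diffuse = samples.min(axis=0)   # view-independent estimate per point
    specular = samples - diffuse    # view-dependent residual per view
    return diffuse, specular

# One surface point seen from 4 viewpoints; view 2 catches a highlight.
obs = np.array([[0.30], [0.31], [0.90], [0.30]])
diffuse, specular = split_lightfield(obs)
```

In a full system the specular residuals, indexed by viewing direction, are what make it possible to estimate an environment map.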

Real-Time Camera Tracking: When is High Frame-Rate Best?

Ankur Handa, Richard Newcombe, Adrien Angeli, and Andrew J. Davison (ECCV 2012)

How are application-dependent performance requirements of accuracy, robustness and computational cost optimised as frame-rate varies? Using 3D camera tracking as our test problem, we analyse a dense whole-image alignment approach, trading off computation against image capture, using photorealistic ray-traced video of a detailed scene. Our experiments lead to quantitative conclusions about frame-rate selection that are crucial to pushing tracking performance.
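The core tension studied here can be made concrete with back-of-envelope arithmetic: under a fixed per-second compute budget, raising the frame rate shrinks the inter-frame motion (making alignment easier) but also shrinks the optimisation budget available to each frame. The numbers below are illustrative assumptions, not results from the paper:

```python
# Toy illustration of the frame-rate trade-off: a fixed compute budget per
# second is split across frames, while inter-frame motion shrinks with fps.
# Budget and camera-speed values are made up for illustration.

def per_frame_budget(budget_ms_per_sec, fps):
    """Milliseconds of tracking computation available for each frame."""
    return budget_ms_per_sec / fps

def inter_frame_motion(camera_speed_deg_per_sec, fps):
    """Camera rotation (degrees) between consecutive frames."""
    return camera_speed_deg_per_sec / fps

for fps in (30, 60, 120, 240):
    budget = per_frame_budget(1000.0, fps)   # whole second spent tracking
    motion = inter_frame_motion(90.0, fps)   # a fast 90 deg/s camera sweep
    print(f"{fps:3d} fps: {budget:6.2f} ms/frame, {motion:5.2f} deg/frame")
```

The paper's question is where on this curve a dense whole-image tracker performs best for a given accuracy and robustness requirement.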

KinectFusion: Real-Time Dense Surface Mapping and Tracking

Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Andrew Fitzgibbon (ISMAR 2011, Best paper award)

We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. Modelling natural scenes in real time with only commodity sensor and GPU hardware promises an exciting step forward in augmented reality (AR): in particular, it allows dense surfaces to be reconstructed in real time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.
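Systems in this family fuse depth maps into a voxel grid of truncated signed distances (TSDF), where each voxel keeps a distance estimate and a weight and new measurements are folded in as a running weighted average. The sketch below shows that fusion step in its simplest per-voxel form; the names and truncation value are illustrative, not taken from the paper:

```python
# Hedged sketch of TSDF fusion: each voxel stores a truncated signed
# distance to the nearest surface plus a weight; each new depth
# measurement updates it as a weighted running average.
import numpy as np

TRUNC = 0.05  # truncation band in metres (illustrative value)

def fuse(tsdf, weight, measured_sdf, max_weight=100.0):
    """Fold one new signed-distance measurement into voxel state."""
    d = np.clip(measured_sdf, -TRUNC, TRUNC)   # truncate far-field values
    new_weight = np.minimum(weight + 1.0, max_weight)
    tsdf = (tsdf * weight + d) / new_weight    # weighted running average
    return tsdf, new_weight

# One voxel, three noisy observations of a surface ~1 cm in front of it:
# fusion converges toward the mean of the measurements.
tsdf, w = np.zeros(1), np.zeros(1)
for obs in (0.012, 0.009, 0.010):
    tsdf, w = fuse(tsdf, w, obs)
```

Averaging over many frames is what suppresses individual depth-sensor noise and yields the smooth surfaces these systems extract at the TSDF zero crossing.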

KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera

Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, and Andrew Fitzgibbon (UIST 2011)

In this paper we introduce novel extensions to the core KinectFusion pipeline to demonstrate object segmentation and user interaction directly in front of the sensor. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any reconstructed physical surface to be appropriated for touch.

DTAM: Dense Tracking and Mapping in Real-Time

Richard Newcombe, Steven Lovegrove, Andrew Davison (ICCV 2011)

DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but on dense, every-pixel methods. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state-of-the-art method using features, and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.
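The "every pixel" idea can be illustrated in one dimension: instead of matching a sparse set of features, dense tracking scores a candidate camera pose by the summed photometric error over all pixels after warping the live image against the model. The 1-D shift below is a toy stand-in for the full projective warp, and the exhaustive search stands in for gradient-based optimisation; none of it is DTAM's actual implementation:

```python
# Toy dense photometric tracking: score every candidate "pose" (here a
# 1-D pixel shift) by the summed squared intensity error over ALL pixels,
# and pick the minimiser. The shift/search are illustrative stand-ins.
import numpy as np

def photometric_cost(reference, live, shift):
    """Sum of squared intensity differences after shifting `live`."""
    warped = np.roll(live, shift)
    return float(np.sum((reference - warped) ** 2))

def track(reference, live, search=range(-3, 4)):
    """Exhaustively pick the shift (toy 'pose') minimising the dense cost."""
    return min(search, key=lambda s: photometric_cost(reference, live, s))

ref = np.array([0., 0., 1., 2., 1., 0., 0., 0.])
live = np.roll(ref, 2)   # the camera 'moved' the image by 2 pixels
best = track(ref, live)  # recovers shift -2, undoing the motion
```

Because every pixel contributes to the cost, the estimate stays well constrained even when individual pixels are blurred or noisy, which is the intuition behind dense tracking's robustness under rapid motion.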

Live Dense Reconstruction with a Single Moving Camera

Richard Newcombe, Andrew Davison (CVPR 2010)

We present a method which enables rapid and dense reconstruction of scenes browsed by a single live camera. Real-time monocular dense reconstruction opens up many application areas, and we demonstrate both real-time novel view synthesis and advanced augmented reality where augmentations interact physically with the 3D scene and are correctly clipped by occlusions.