%html
:sass
body
background-color: #c0c0c0
font-family: 'Lato', sans-serif
#main
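      // center the fixed-width column horizontally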
max-width: 800px
margin: 0 auto
#pic
float: left
margin: 0 1em 0 0
width: 330px
#paperpic
float: left
margin: 0 1em 0 0
width: 190px
border-radius: 0px
.section
float: left
background-color: white
margin: 1em
border-radius: 0px
width: 100%
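      // bottom-only drop shadow: the -6px negative spread keeps the 6px blur from showing on the sides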
-webkit-box-shadow: 0 8px 6px -6px black
-moz-box-shadow: 0 8px 6px -6px black
box-shadow: 0 8px 6px -6px black
.footer
background-color: #c0c0c0
margin: 1em
text-align: right
width: 100%
.text
float: left
width: 450px
font-size: 85%
a
text-decoration: none
a:hover
text-decoration: underline
#paper p
margin: 0.3em 0
font-size: 90%
    #paper .p2
margin: 0.3em 0
font-size: 80%
#paper h3
padding-top: 1em
margin: 0
font-size: 100%
#paper #links
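      // widen the gaps between the inline pdf / bibtex / video links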
word-spacing: 1.5em
    #paper .p4
margin: 0.3em 0
font-size: 90%
word-spacing: 1.5em
h2
margin-left: 1em
%head
%title Richard Newcombe
    %link(href="http://fonts.googleapis.com/css?family=Lato" rel="stylesheet" type="text/css")
    :javascript
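      // Standard Google Analytics snippet: define the ga() command queue and load analytics.js asynchronously.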
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
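      // Register the tracker for this site's property and record a pageview.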
ga('create', 'UA-59777432-1', 'auto');
ga('send', 'pageview');
%body
#main
.section
%img#pic(src="img/newcombe_peaks.jpg")
.text
:markdown
**Richard Newcombe**, PhD.
            I hold a **Postdoctoral Associate** position at the **University of Washington**, working on **computer
            vision**, advised by **Steve Seitz** and **Dieter Fox**.
I researched **Robot Vision** for my PhD. with **Andrew Davison** and **Murray Shanahan**
            at **Imperial College, London**, and before that I studied with **Owen Holland** at the **University of Essex**,
            where I received my BSc. and MSc. in robotics, machine learning and embedded systems.
            Ideas in perception, robotics, augmented realities and consciousness are what get me going every day, and I enjoy
            bringing new ideas that can be made to work into the world. In particular, I enjoy devising solutions that span
            hardware and algorithm spaces, making use of whatever new tools, computing machinery and ways of thinking I can get access to.
%p
%a(target="_blank" href="mailto:newcombe@cs.washington.edu") newcombe@cs.washington.edu
%a(target="_blank" href="https://scholar.google.com/citations?user=MhowvPkAAAAJ&hl=en") [Google Scholar]
.section
#paper
%img#paperpic(src="img/Huxley2.jpg")
%h3
%a(href="papers/Newcombe-RA-Thesis-2014-compressed.pdf") PhD. Thesis: Dense Visual SLAM
          %p.p2 Richard A. Newcombe, Imperial College, London, 2012
%p
%em
              With the availability of massively parallel commodity computing hardware, we demonstrate
new algorithms that achieve high quality incremental dense reconstruction within
online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes
possible numerous applications that can utilise online surface modelling, for instance: planning
robot interactions with unknown objects, augmented reality with characters that interact
with the scene, or providing enhanced data for object recognition.
%p#links
%a(href="papers/Newcombe-RA-Thesis-2014-compressed.pdf") pdf(32Mb)
%a(href="papers/Newcombe-RA-Thesis-2014.pdf") pdf(190Mb)
%a(target="_blank" href="https://www.youtube.com/watch?v=63Eflo8tLnI") video
.section
#paper
%img#paperpic(src="img/dart_thumb.jpg")
%h3
%a(href="papers/schmidt_etal_rss2014.pdf") DART: Dense Articulated Real-time Tracking
          %p.p2 Tanner Schmidt, Richard Newcombe, Dieter Fox (RSS 2014)
%p
%em
              DART is a general framework for tracking articulated objects composed of
              rigid bodies connected by a kinematic chain. DART can track a
broad set of objects encountered in indoor environments, including furniture,
tools, human bodies, human hands, and robot manipulators.
%p#links
%a(href="papers/schmidt_etal_rss2014.pdf") pdf
%a(href="papers/bib/Schmidt_DART_2014.bib") bibtex
%a(href="http://homes.cs.washington.edu/~tws10/RSS.mp4") video
.section
#paper
%img#paperpic(src="img/slampp.jpg")
%h3
%a(href="papers/Salas-Moreno_etal_cvpr2013.pdf") SLAM++: Simultaneous Localisation and Mapping at the Level of Objects
          %p.p2 Renato F. Salas-Moreno, Richard A. Newcombe, Hauke Strasdat, Paul H. J. Kelly and Andrew J. Davison (CVPR 2013)
%p
%em
              We present the major advantages of a new object-oriented
              3D SLAM paradigm, which exploits,
              in the loop, prior knowledge that many scenes consist
              of repeated domain-specific objects and structures.
We demonstrate real-time incremental SLAM, including loop closure, relocalisation
and the detection of moved objects enabling real-time generation of object level scene descriptions.
%p#links
%a(href="papers/Salas-Moreno_etal_cvpr2013.pdf") pdf
%a(href="papers/bib/Salas-Moreno_SLAMPP_2013.bib") bibtex
%a(target="_blank" href="https://www.youtube.com/watch?v=tmrAh1CqCRo") video
%a(target="_blank" href="https://www.youtube.com/watch?v=6IU4e8yUdis") demo
.section
#paper
%img#paperpic(src="img/lightfield.jpg")
%h3
%a(href="papers/jachnik_etal_ismar2012.pdf") Real-Time Surface Lightfield Capture for Augmentation of Planar Specular Surfaces
          %p.p2 Jan Jachnik, Richard Newcombe, Andrew J. Davison (ISMAR 2012)
%p
%em
We present an algorithm for real-time surface lightfield capture from a
single hand-held camera, which is able to capture dense illumination
information for general specular surfaces. Our system incorporates a
guidance mechanism to help the user interactively during capture.
              We then split the lightfield into its diffuse and specular components,
              and show that the specular component can be used to estimate an environment map.
%p#links
%a(href="papers/jachnik_etal_ismar2012.pdf") pdf
%a(href="papers/bib/Jachnik_Lightfields_2012.bib") bibtex
%a(target="_blank" href="https://www.youtube.com/watch?v=pky822zG4hM") video
.section
#paper
%img#paperpic(src="img/highframerate.jpg")
%h3
%a(href="papers/handa_etal_eccv2012.pdf") Real-Time Camera Tracking: When is High Frame-Rate Best?
          %p.p2 Ankur Handa, Richard Newcombe, Adrien Angeli, and Andrew J. Davison (ECCV 2012)
%p
%em
How are application-dependent performance requirements of accuracy, robustness and computational cost optimised as
              frame-rate varies? Using 3D camera tracking as our test problem, we analyse a dense whole-image
              alignment approach to tracking, trading off computation against image capture, using photorealistic
              ray-traced video of a detailed scene. Our experiments lead to quantitative conclusions about frame-rate
              selection that are crucial to pushing tracking performance.
%p#links
%a(href="papers/handa_etal_eccv2012.pdf") pdf
%a(href="papers/bib/Handa_realtime_2012.bib") bibtex
%a(target="_blank" href="https://www.youtube.com/watch?v=ytOIrGyEE64") video
.section
#paper
%img#paperpic(src="img/steveIsfused.jpg")
%h3
%a(href="papers/newcombe_etal_ismar2011.pdf") KinectFusion: Real-Time Dense Surface Mapping and Tracking
          %p.p2 Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Andrew Fitzgibbon (ISMAR 2011, Best Paper Award)
%p
%em
We present a system for accurate real-time mapping of complex and
arbitrary indoor scenes in variable lighting conditions,
using only a moving low-cost depth camera and commodity graphics hardware.
              Modelling of natural scenes in real time, with only commodity
              sensor and GPU hardware, promises an exciting step forward
              in augmented reality (AR); in particular, it allows dense surfaces to
              be reconstructed in real time, with a level of detail and robustness
              beyond any solution yet presented using passive computer vision.
%p#links
%a(href="papers/newcombe_etal_ismar2011.pdf") pdf
%a(href="papers/bib/Newcombe_kfusion_2011.bib") bibtex
%a(target="_blank" href="https://www.youtube.com/watch?v=quGhaggn3cQ") video
.section
#paper
%img#paperpic(src="img/kinectuist.jpg")
%h3
%a(href="papers/Izadi_etal_uist2011.pdf") KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera
          %p.p2 Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, and Andrew Fitzgibbon (UIST 2011)
%p
%em
In this paper we introduce novel extensions to the core KinectFusion pipeline to demonstrate object segmentation and user
interaction directly in front of the sensor. These extensions are used to enable real-time multi-touch interactions anywhere,
allowing any reconstructed physical surface to be appropriated for touch.
%p#links
%a(href="papers/Izadi_etal_uist2011.pdf") pdf
%a(href="papers/bib/Izadi_kfusion_app_2011.bib") bibtex
%a(target="_blank" href="https://www.youtube.com/watch?v=quGhaggn3cQ") video
.section
#paper
%img#paperpic(src="img/dtam2.jpg")
%h3
%a(href="papers/newcombe_etal_iccv2011.pdf") DTAM: Dense Tracking and Mapping in Real-Time
          %p.p2 Richard Newcombe, Steven Lovegrove, Andrew Davison (ICCV 2011)
%p
%em
              DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but on dense,
              every-pixel methods. We demonstrate that a dense model permits superior tracking performance
              under rapid motion compared to a state-of-the-art method using features;
and also show the additional usefulness of the dense model for real-time scene interaction
in a physics-enhanced augmented reality application.
%p#links
%a(href="papers/newcombe_etal_iccv2011.pdf") pdf
%a(href="papers/bib/Newcombe_DTAM_2011.bib") bibtex
%a(target="_blank" href="https://www.youtube.com/watch?v=Df9WhgibCQA") video
.section
#paper
%img#paperpic(src="img/ldr.jpg")
%h3
%a(href="papers/newcombe_davison_cvpr2010.pdf") Live Dense Reconstruction with a single moving camera
          %p.p2 Richard Newcombe, Andrew Davison (CVPR 2010)
%p
%em
              We present a method which enables rapid and dense reconstruction of scenes browsed by a single live camera.
Real-time monocular dense reconstruction opens up many application areas, and we demonstrate both real-time
novel view synthesis and advanced augmented reality where augmentations interact physically with the 3D scene
and are correctly clipped by occlusions.
%p#links
%a(href="papers/newcombe_davison_cvpr2010.pdf") pdf
%a(href="papers/bib/Newcombe_LDR_2010.bib") bibtex
%a(target="_blank" href="https://www.youtube.com/watch?v=CZiSK7OMANw") video
.footer
%a(href="index.haml") source code