Chengde Wan Thomas Probst Luc Van Gool Angela Yao

Abstract
We present a simple and effective method for 3D hand pose estimation from a single depth frame. As opposed to previous state-of-the-art methods based on holistic 3D regression, our method works on dense pixel-wise estimation. This is achieved by careful design choices in pose parameterization, which leverage both 2D and 3D properties of the depth map. Specifically, we decompose the pose parameters into a set of per-pixel estimations, i.e., 2D heat maps, 3D heat maps and unit 3D directional vector fields. The 2D/3D joint heat maps and 3D joint offsets are estimated via a multi-task network cascade, which is trained end-to-end. The pixel-wise estimations can be directly translated into a vote-casting scheme. A variant of mean shift is then used to aggregate local votes while enforcing, by design, consensus between the estimated 3D pose and the pixel-wise 2D and 3D estimations. Our method is efficient and highly accurate. On the MSRA and NYU hand datasets, our method outperforms all previous state-of-the-art approaches by a large margin. On the ICVL hand dataset, our method achieves accuracy similar to the best previously reported (nearly saturated) result and outperforms various other proposed methods. Code is available.
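The vote aggregation step described above can be illustrated with a small sketch. The snippet below is a hypothetical, simplified weighted mean-shift over per-pixel 3D joint votes (each pixel's vote would come from its depth-derived 3D position plus the predicted offset, weighted by its heat-map confidence); the function name, bandwidth value, and stopping rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def aggregate_votes(votes, weights, bandwidth=10.0, iters=10):
    """Estimate one joint's 3D position from per-pixel votes.

    votes:   (N, 3) array of 3D position votes (one per pixel).
    weights: (N,) array of vote confidences (e.g. heat-map values).
    bandwidth: Gaussian kernel width (assumed, in the same units as votes).
    """
    # Initialize at the confidence-weighted centroid of all votes.
    est = np.average(votes, axis=0, weights=weights)
    for _ in range(iters):
        # Re-weight each vote by a Gaussian kernel around the current
        # estimate; distant (outlier) votes are suppressed.
        d2 = np.sum((votes - est) ** 2, axis=1)
        k = weights * np.exp(-d2 / (2.0 * bandwidth ** 2))
        if k.sum() < 1e-12:  # all votes too far away; stop
            break
        est = np.average(votes, axis=0, weights=k)
    return est

# Usage sketch: 50 consistent votes near one point plus 5 outliers.
rng = np.random.default_rng(0)
inliers = np.array([10.0, 20.0, 30.0]) + rng.normal(0, 1.0, (50, 3))
outliers = np.full((5, 3), 200.0)
votes = np.vstack([inliers, outliers])
weights = np.ones(len(votes))
joint = aggregate_votes(votes, weights)  # close to (10, 20, 30)
```

The mean-shift iteration is what makes the estimate robust: outlier votes (e.g. from background pixels) are exponentially down-weighted once the estimate settles near the dominant cluster.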
Benchmarks
| Benchmark | Methodology | Average 3D Error (mm) |
|---|---|---|
| hand-pose-estimation-on-icvl-hands | Dense Pixel-wise Estimation | 7.3 |
| hand-pose-estimation-on-msra-hands | Dense Pixel-wise Estimation | 7.2 |
| hand-pose-estimation-on-nyu-hands | Dense Pixel-wise Estimation | 10.2 |