Voxel level [18]F-FDG PET/MRI unsupervised segmentation of the tumor microenvironment

Katiyar, P., Divine, M., Pichler, B., Disselhorst, J.

World Molecular Imaging Conference, 2014 (poster)

ei

[BibTex]


Efficient nearest neighbors via robust sparse hashing

Cherian, A., Sra, S., Morellas, V., Papanikolopoulos, N.

IEEE Transactions on Image Processing, 23(8):3646-3655, 2014 (article)

ei

DOI Project Page [BibTex]


Fast Newton methods for the group fused lasso

Wytock, M., Sra, S., Kolter, J.

In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, pages: 888-897, (Editors: Zhang, N. L. and Tian, J.), AUAI Press, UAI, 2014 (inproceedings)

ei

link (url) Project Page [BibTex]


Towards an optimal stochastic alternating direction method of multipliers

Azadi, S., Sra, S.

Proceedings of the 31st International Conference on Machine Learning, 32, pages: 620-628, (Editors: Xing, E. P. and Jebara, T.), JMLR, ICML, 2014 (conference)

ei

link (url) [BibTex]


Nonconvex Proximal Splitting with Computational Errors

Sra, S.

In Regularization, Optimization, Kernels, and Support Vector Machines, pages: 83-102, 4, (Editors: Suykens, J. A. K., Signoretto, M. and Argyriou, A.), CRC Press, 2014 (inbook)

ei

[BibTex]


Localized Complexities for Transductive Learning

Tolstikhin, I., Blanchard, G., Kloft, M.

In Proceedings of the 27th Conference on Learning Theory, 35, pages: 857-884, (Editors: Balcan, M.-F. and Feldman, V. and Szepesvári, C.), JMLR, COLT, 2014 (inproceedings)

ei

link (url) [BibTex]


Learning Economic Parameters from Revealed Preferences

Balcan, M., Daniely, A., Mehta, R., Urner, R., Vazirani, V.

In Web and Internet Economics - 10th International Conference, 8877, pages: 338-353, Lecture Notes in Computer Science, (Editors: Liu, T.-Y. and Qi, Q. and Ye, Y.), WINE, 2014 (inproceedings)

ei

link (url) DOI [BibTex]


Active Learning - Modern Learning Theory

Balcan, M., Urner, R.

In Encyclopedia of Algorithms, (Editors: Kao, M.-Y.), Springer Berlin Heidelberg, 2014 (incollection)

ei

link (url) DOI [BibTex]


Domain adaptation-can quantity compensate for quality?

Ben-David, S., Urner, R.

Annals of Mathematics and Artificial Intelligence, 70(3):185-202, 2014 (article)

ei

link (url) DOI [BibTex]


The sample complexity of agnostic learning under deterministic labels

Ben-David, S., Urner, R.

In Proceedings of the 27th Conference on Learning Theory, 35, pages: 527-542, (Editors: Balcan, M.-F. and Feldman, V. and Szepesvári, C.), JMLR, COLT, 2014 (inproceedings)

ei

link (url) [BibTex]


Full Dynamics LQR Control of a Humanoid Robot: An Experimental Study on Balancing and Squatting

Mason, S., Righetti, L., Schaal, S.

In 14th IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2014 (inproceedings)

am

Project Page [BibTex]


Cost-Sensitive Active Learning With Lookahead: Optimizing Field Surveys for Remote Sensing Data Classification

Persello, C., Boularias, A., Dalponte, M., Gobakken, T., Naesset, E., Schölkopf, B.

IEEE Transactions on Geoscience and Remote Sensing, 52(10):6652-6664, 2014 (article)

ei

DOI [BibTex]


Epidural electrocorticography for monitoring of arousal in locked-in state

Martens, S., Bensch, M., Halder, S., Hill, J., Nijboer, F., Ramos-Murguialday, A., Schölkopf, B., Birbaumer, N., Gharabaghi, A.

Frontiers in Human Neuroscience, 8(861), 2014 (article)

ei

DOI [BibTex]


Wenn es was zu sagen gibt

(Klaus Tschira Award 2014 in Computer Science)

Trimpe, S.

Bild der Wissenschaft, pages: 20-23, November 2014, (popular science article in German) (article)

am

PDF Project Page [BibTex]


The Feasibility of Causal Discovery in Complex Systems: An Examination of Climate Change Attribution and Detection

Lacosse, E.

Graduate Training Centre of Neuroscience, University of Tübingen, Germany, 2014 (mastersthesis)

ei

[BibTex]


Causal Discovery in the Presence of Time-Dependent Relations or Small Sample Size

Huang, B.

Graduate Training Centre of Neuroscience, University of Tübingen, Germany, 2014 (mastersthesis)

ei

[BibTex]


Development of advanced methods for improving astronomical images

Schmeißer, N.

Eberhard Karls Universität Tübingen, Germany, 2014 (diplomathesis)

ei

[BibTex]


A global analysis of extreme events and consequences for the terrestrial carbon cycle

Zscheischler, J.

Diss. No. 22043, ETH Zurich, Switzerland, 2014 (phdthesis)

ei

[BibTex]


Analysis of Distance Functions in Graphs

Alamgir, M.

University of Hamburg, Germany, 2014 (phdthesis)

ei

[BibTex]


Two numerical models designed to reproduce Saturn ring temperatures as measured by Cassini-CIRS

Altobelli, N., Lopez-Paz, D., Pilorz, S., Spilker, L., Morishima, R., Brooks, S., Leyrat, C., Deau, E., Edgington, S., Flandes, A.

Icarus, 238:205-220, 2014 (article)

ei

Web link (url) DOI [BibTex]


Quantifying the effect of intertrial dependence on perceptual decisions

Fründ, I., Wichmann, F., Macke, J.

Journal of Vision, 14(7):1-16, 2014 (article)

ei

Web PDF link (url) DOI Project Page [BibTex]


Segmentation of Biomedical Images Using Active Contour Model with Robust Image Feature and Shape Prior

Yeo, S. Y., Xie, X., Sazonov, I., Nithiarasu, P.

International Journal for Numerical Methods in Biomedical Engineering, 30(2):232-248, 2014 (article)

Abstract
In this article, a new level set model is proposed for the segmentation of biomedical images. The image energy of the proposed model is derived from a robust image gradient feature which gives the active contour a global representation of the geometric configuration, making it more robust in dealing with image noise, weak edges, and initial configurations. Statistical shape information is incorporated using a nonparametric shape density distribution, which allows the shape model to handle relatively large shape variations. The segmentation of various shapes from both synthetic and real images depicts the robustness and efficiency of the proposed method.

ps

[BibTex]


Automatic 4D Reconstruction of Patient-Specific Cardiac Mesh with 1-to-1 Vertex Correspondence from Segmented Contours Lines

Lim, C. W., Su, Y., Yeo, S. Y., Ng, G. M., Nguyen, V. T., Zhong, L., Tan, R. S., Poh, K. K., Chai, P.

PLOS ONE, 9(4), 2014 (article)

Abstract
We propose an automatic algorithm for the reconstruction of patient-specific cardiac mesh models with 1-to-1 vertex correspondence. In this framework, a series of 3D meshes depicting the endocardial surface of the heart at each time step is constructed, based on a set of border delineated magnetic resonance imaging (MRI) data of the whole cardiac cycle. The key contribution in this work involves a novel reconstruction technique to generate a 4D (i.e., spatial–temporal) model of the heart with 1-to-1 vertex mapping throughout the time frames. The reconstructed 3D model from the first time step is used as a base template model and then deformed to fit the segmented contours from the subsequent time steps. A method to determine a tree-based connectivity relationship is proposed to ensure robust mapping during mesh deformation. The novel feature is the ability to handle intra- and inter-frame 2D topology changes of the contours, which manifests as a series of merging and splitting of contours when the images are viewed either in a spatial or temporal sequence. Our algorithm has been tested on five acquisitions of cardiac MRI and can successfully reconstruct the full 4D heart model in around 30 minutes per subject. The generated 4D heart model conforms very well with the input segmented contours and the mesh element shape is of reasonably good quality. The work is important in the support of downstream computational simulation activities.

ps

[BibTex]


Left Ventricle Segmentation by Dynamic Shape Constrained Random Walk

Yang, X., Su, Y., Wan, M., Yeo, S. Y., Lim, C., Wong, S. T., Zhong, L., Tan, R. S.

In Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014 (inproceedings)

Abstract
Accurate and robust extraction of the left ventricle (LV) cavity is a key step for quantitative analysis of cardiac functions. In this study, we propose an improved LV cavity segmentation method that incorporates a dynamic shape constraint into the weighting function of the random walks algorithm. The method involves an iterative process that updates an intermediate result to the desired solution. The shape constraint restricts the solution space of the segmentation result, such that the robustness of the algorithm is increased to handle misleading information that emanates from noise, weak boundaries, and clutter. Our experiments on real cardiac magnetic resonance images demonstrate that the proposed method obtains better segmentation performance than standard method.

ps

[BibTex]
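
A rough, illustrative sketch of the iterative idea described in the abstract above, not the authors' implementation: a standard random-walker segmentation is alternated with a toy shape constraint. The use of scikit-image's random_walker, the convex-hull "prior", and all seed and parameter choices are assumptions made only for this example.

import numpy as np
from skimage.segmentation import random_walker
from skimage.morphology import convex_hull_image, binary_erosion, binary_dilation, disk

def shape_constrained_random_walker(image, init_mask, n_iter=5, beta=130):
    """Toy iterative segmentation: random walker + convex-hull shape constraint."""
    mask = init_mask.astype(bool)
    for _ in range(n_iter):
        # Seeds: 1 = object (well inside the current shape), 2 = background (well outside),
        # 0 = unlabeled pixels to be resolved by the random walker.
        labels = np.zeros(image.shape, dtype=np.uint8)
        labels[binary_erosion(mask, disk(3))] = 1
        labels[~binary_dilation(mask, disk(6))] = 2
        prob = random_walker(image, labels, beta=beta, return_full_prob=True)[0]
        # Toy "shape constraint": force the new estimate to stay a convex region.
        mask = convex_hull_image(prob > 0.5)
    return mask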


A Novel Causal Inference Method for Time Series

Shajarisales, N.

Eberhard Karls Universität Tübingen, Germany, 2014 (mastersthesis)

ei

PDF [BibTex]


Quantifying statistical dependency

Besserve, M.

Research Network on Learning Systems Summer School, 2014 (talk)

ei

[BibTex]


Unsupervised identification of neural events in local field potentials

Besserve, M., Schölkopf, B., Logothetis, N.

44th Annual Meeting of the Society for Neuroscience (Neuroscience), 2014 (talk)

ei

[BibTex]


Dynamical source analysis of hippocampal sharp-wave ripple episodes

Ramirez-Villegas, J., Logothetis, N., Besserve, M.

Bernstein Conference, 2014 (poster)

ei

DOI [BibTex]


CAM: Causal Additive Models, high-dimensional order search and penalized regression

Bühlmann, P., Peters, J., Ernest, J.

Annals of Statistics, 42(6):2526-2556, 2014 (article)

ei

DOI [BibTex]


Identifiability of Gaussian Structural Equation Models with Equal Error Variances

Peters, J., Bühlmann, P.

Biometrika, 101(1):219-228, 2014 (article)

ei

DOI [BibTex]


Assessing attention and cognitive function in completely locked-in state with event-related brain potentials and epidural electrocorticography

Bensch, M., Martens, S., Halder, S., Hill, J., Nijboer, F., Ramos, A., Birbaumer, N., Bogdan, M., Kotchoubey, B., Rosenstiel, W., Schölkopf, B., Gharabaghi, A.

Journal of Neural Engineering, 11(2):026006, 2014 (article)

Abstract
Objective. Patients in the completely locked-in state (CLIS), due to, for example, amyotrophic lateral sclerosis (ALS), no longer possess voluntary muscle control. Assessing attention and cognitive function in these patients during the course of the disease is a challenging but essential task for both nursing staff and physicians. Approach. An electrophysiological cognition test battery, including auditory and semantic stimuli, was applied in a late-stage ALS patient at four different time points during a six-month epidural electrocorticography (ECoG) recording period. Event-related cortical potentials (ERP), together with changes in the ECoG signal spectrum, were recorded via 128 channels that partially covered the left frontal, temporal and parietal cortex. Main results. Auditory but not semantic stimuli induced significant and reproducible ERP projecting to specific temporal and parietal cortical areas. N1/P2 responses could be detected throughout the whole study period. The highest P3 ERP was measured immediately after the patient's last communication through voluntary muscle control, which was paralleled by low theta and high gamma spectral power. Three months after the patient's last communication, i.e., in the CLIS, P3 responses could no longer be detected. At the same time, increased activity in low-frequency bands and a sharp drop of gamma spectral power were recorded. Significance. Cortical electrophysiological measures indicate at least partially intact attention and cognitive function during sparse volitional motor control for communication. Although the P3 ERP and frequency-specific changes in the ECoG spectrum may serve as indicators for CLIS, a close-meshed monitoring will be required to define the exact time point of the transition.

ei

DOI [BibTex]


Image-based 4-d Reconstruction Using 3-d Change Detection

Ulusoy, A., Mundy, J.

In Computer Vision – ECCV 2014, pages: 31-45, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
This paper describes an approach to reconstruct the complete history of a 3-d scene over time from imagery. The proposed approach avoids rebuilding 3-d models of the scene at each time instant. Instead, the approach employs an initial 3-d model which is continuously updated with changes in the environment to form a full 4-d representation. This updating scheme is enabled by a novel algorithm that infers 3-d changes with respect to the model at one time step from images taken at a subsequent time step. This algorithm can effectively detect changes even when the illumination conditions between image collections are significantly different. The performance of the proposed framework is demonstrated on four challenging datasets in terms of 4-d modeling accuracy as well as quantitative evaluation of 3-d change detection.

ps

video pdf supplementary DOI [BibTex]


Evaluation of feature-based 3-d registration of probabilistic volumetric scenes

Restrepo, M., Ulusoy, A., Mundy, J.

In ISPRS Journal of Photogrammetry and Remote Sensing, 98:1-18, 2014 (inproceedings)

Abstract
Automatic estimation of the world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.

ps

Publisher site link (url) DOI [BibTex]


Human Pose Estimation from Video and Inertial Sensors

Pons-Moll, G.

Ph.D. Thesis, 2014 (book)

Abstract
The analysis and understanding of human movement is central to many applications such as sports science, medical diagnosis and movie production. The ability to automatically monitor human activity in security sensitive areas such as airports, lobbies or borders is of great practical importance. Furthermore, automatic pose estimation from images leverages the processing and understanding of massive digital libraries available on the Internet. We build upon a model based approach where the human shape is modelled with a surface mesh and the motion is parametrized by a kinematic chain. We then seek the pose of the model that best explains the available observations coming from different sensors. In a first scenario, we consider a calibrated multi-view setup in an indoor studio. To obtain very accurate results, we propose a novel tracker that combines information coming from video and a small set of Inertial Measurement Units (IMUs). We do so by locally optimizing a joint energy consisting of a term that measures the likelihood of the video data and a term for the IMU data. This is the first work to successfully combine video and IMU information for full body pose estimation. When compared to commercial marker based systems the proposed solution is more cost efficient and less intrusive for the user. In a second scenario, we relax the assumption of an indoor studio and we tackle outdoor scenes with background clutter, illumination changes, large recording volumes and difficult motions of people interacting with objects. Again, we combine information from video and IMUs. Here we employ a particle based optimization approach that allows us to be more robust to tracking failures. To satisfy the orientation constraints imposed by the IMUs, we derive an analytic Inverse Kinematics (IK) procedure to sample from the manifold of valid poses. The generated hypotheses come from a lower dimensional manifold and therefore the computational cost can be reduced. Experiments on challenging sequences suggest the proposed tracker can be applied to capture in outdoor scenarios. Furthermore, the proposed IK sampling procedure can be used to integrate any kind of constraints derived from the environment. Finally, we consider the most challenging possible scenario: pose estimation of monocular images. Here, we argue that estimating the pose to the degree of accuracy as in an engineered environment is too ambitious with the current technology. Therefore, we propose to extract meaningful semantic information about the pose directly from image features in a discriminative fashion. In particular, we introduce posebits which are semantic pose descriptors about the geometric relationships between parts in the body. The experiments show that the intermediate step of inferring posebits from images can improve pose estimation from monocular imagery. Furthermore, posebits can be very useful as input feature for many computer vision algorithms.

ps

pdf [BibTex]


MoSh: Motion and Shape Capture from Sparse Markers

Loper, M., Mahmood, N., Black, M. J.

ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 33(6):220:1-220:13, ACM, New York, NY, USA, November 2014 (article)

Abstract
Marker-based motion capture (mocap) is widely criticized as producing lifeless animations. We argue that important information about body surface motion is present in standard marker sets but is lost in extracting a skeleton. We demonstrate a new approach called MoSh (Motion and Shape capture), that automatically extracts this detail from mocap data. MoSh estimates body shape and pose together using sparse marker data by exploiting a parametric model of the human body. In contrast to previous work, MoSh solves for the marker locations relative to the body and estimates accurate body shape directly from the markers without the use of 3D scans; this effectively turns a mocap system into an approximate body scanner. MoSh is able to capture soft tissue motions directly from markers by allowing body shape to vary over time. We evaluate the effect of different marker sets on pose and shape accuracy and propose a new sparse marker set for capturing soft-tissue motion. We illustrate MoSh by recovering body shape, pose, and soft-tissue motion from archival mocap data and using this to produce animations with subtlety and realism. We also show soft-tissue motion retargeting to new characters and show how to magnify the 3D deformations of soft tissue to create animations with appealing exaggerations.

ps

pdf video data pdf from publisher link (url) DOI Project Page [BibTex]
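
For intuition only (MoSh itself also solves for the marker placement relative to the body and uses a learned parametric body model): the bare-bones sketch below fits pose and shape parameters of a hypothetical body_model(pose, shape) -> (V, 3) vertex function to observed 3D markers, assuming fixed marker-to-vertex assignments.

import numpy as np
from scipy.optimize import least_squares

def fit_to_markers(body_model, marker_vertex_ids, observed_markers, n_pose=72, n_shape=10):
    """Fit pose/shape of a (hypothetical) parametric body model to sparse 3D markers."""
    def residuals(params):
        pose, shape = params[:n_pose], params[n_pose:]
        verts = body_model(pose, shape)             # (V, 3) model surface vertices
        pred = verts[marker_vertex_ids]             # predicted marker positions
        return (pred - observed_markers).ravel()    # pure data term; MoSh adds more
    x0 = np.zeros(n_pose + n_shape)
    res = least_squares(residuals, x0)
    return res.x[:n_pose], res.x[n_pose:]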


Probabilistic Progress Bars

Kiefel, M., Schuler, C., Hennig, P.

In German Conference on Pattern Recognition (GCPR), 8753, pages: 331-341, Lecture Notes in Computer Science, (Editors: Jiang, X., Hornegger, J., and Koch, R.), Springer, GCPR, September 2014 (inproceedings)

Abstract
Predicting the time at which the integral over a stochastic process reaches a target level is a value of interest in many applications. Often, such computations have to be made at low cost, in real time. As an intuitive example that captures many features of this problem class, we choose progress bars, a ubiquitous element of computer user interfaces. These predictors are usually based on simple point estimators, with no error modelling. This leads to fluctuating behaviour confusing to the user. It also does not provide a distribution prediction (risk values), which are crucial for many other application areas. We construct and empirically evaluate a fast, constant cost algorithm using a Gauss-Markov process model which provides more information to the user.

ei ps pn

website+code pdf DOI Project Page [BibTex]
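
To make the "distribution instead of a point estimate" idea concrete, here is a small stand-in under a simplifying assumption that is not the paper's: if the observed progress increments are modelled as Brownian motion with drift, the remaining time to completion is inverse-Gaussian distributed and its quantiles can be reported instead of a single ETA.

import numpy as np
from scipy.stats import invgauss

def remaining_time_quantiles(times, progress, target=1.0, qs=(0.1, 0.5, 0.9)):
    """Quantiles of the predicted remaining time until `progress` reaches `target`."""
    rates = np.diff(progress) / np.diff(times)        # observed progress per unit time
    drift, sigma = max(rates.mean(), 1e-9), rates.std() + 1e-9
    distance = target - progress[-1]                  # progress still to be made
    # First-passage time of Brownian motion (drift, sigma) to level `distance` is
    # inverse-Gaussian with mean m = distance/drift and shape lam = (distance/sigma)**2;
    # scipy parameterises this as invgauss(m/lam, scale=lam).
    m, lam = distance / drift, (distance / sigma) ** 2
    return invgauss.ppf(list(qs), m / lam, scale=lam)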


Modeling Blurred Video with Layers

Wulff, J., Black, M. J.

In Computer Vision – ECCV 2014, 8694, pages: 236-252, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Videos contain complex spatially-varying motion blur due to the combination of object motion, camera motion, and depth variation with finite shutter speeds. Existing methods to estimate optical flow, deblur the images, and segment the scene fail in such cases. In particular, boundaries between differently moving objects cause problems, because here the blurred images are a combination of the blurred appearances of multiple surfaces. We address this with a novel layered model of scenes in motion. From a motion-blurred video sequence, we jointly estimate the layer segmentation and each layer's appearance and motion. Since the blur is a function of the layer motion and segmentation, it is completely determined by our generative model. Given a video, we formulate the optimization problem as minimizing the pixel error between the blurred frames and images synthesized from the model, and solve it using gradient descent. We demonstrate our approach on synthetic and real sequences.

ps

pdf Supplemental Video Data DOI Project Page [BibTex]


Optical Flow Estimation with Channel Constancy

Sevilla-Lara, L., Sun, D., Learned-Miller, E., Black, M. J.

In Computer Vision – ECCV 2014, 8689, pages: 423-438, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Large motions remain a challenge for current optical flow algorithms. Traditionally, large motions are addressed using multi-resolution representations like Gaussian pyramids. To deal with large displacements, many pyramid levels are needed and, if an object is small, it may be invisible at the highest levels. To address this we decompose images using a channel representation (CR) and replace the standard brightness constancy assumption with a descriptor constancy assumption. CRs can be seen as an over-segmentation of the scene into layers based on some image feature. If the appearance of a foreground object differs from the background then its descriptor will be different and they will be represented in different layers. We create a pyramid by smoothing these layers, without mixing foreground and background or losing small objects. Our method estimates more accurate flow than the baseline on the MPI-Sintel benchmark, especially for fast motions and near motion boundaries.

ps

pdf DOI Project Page [BibTex]
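
A generic sketch of a channel representation as described above (the paper's exact encoding, channel count and smoothing differ): every pixel is softly assigned to a few intensity "layers", and each layer is smoothed separately, so a pyramid built per channel does not mix foreground and background or wash out small objects.

import numpy as np
from scipy.ndimage import gaussian_filter

def channel_decompose(image, n_channels=8, sigma=2.0):
    """Return (n_channels, H, W): soft intensity layers, each smoothed separately."""
    centers = np.linspace(image.min(), image.max(), n_channels)
    width = centers[1] - centers[0] + 1e-9
    # Triangular (linear B-spline) soft assignment of each pixel to the channel centers.
    channels = np.maximum(0.0, 1.0 - np.abs(image[None] - centers[:, None, None]) / width)
    return np.stack([gaussian_filter(c, sigma) for c in channels])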


Tracking using Multilevel Quantizations

Hong, Z., Wang, C., Mei, X., Prokhorov, D., Tao, D.

In Computer Vision – ECCV 2014, 8694, pages: 155-171, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Most object tracking methods only exploit a single quantization of an image space: pixels, superpixels, or bounding boxes, each of which has advantages and disadvantages. It is highly unlikely that a common optimal quantization level, suitable for tracking all objects in all environments, exists. We therefore propose a hierarchical appearance representation model for tracking, based on a graphical model that exploits shared information across multiple quantization levels. The tracker aims to find the most possible position of the target by jointly classifying the pixels and superpixels and obtaining the best configuration across all levels. The motion of the bounding box is taken into consideration, while Online Random Forests are used to provide pixel- and superpixel-level quantizations and progressively updated on-the-fly. By appropriately considering the multilevel quantizations, our tracker exhibits not only excellent performance in non-rigid object deformation handling, but also its robustness to occlusions. A quantitative evaluation is conducted on two benchmark datasets: a non-rigid object tracking dataset (11 sequences) and the CVPR2013 tracking benchmark (50 sequences). Experimental results show that our tracker overcomes various tracking challenges and is superior to a number of other popular tracking methods.

ps

pdf DOI [BibTex]


OpenDR: An Approximate Differentiable Renderer

Loper, M., Black, M. J.

In Computer Vision – ECCV 2014, 8695, pages: 154-169, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Inverse graphics attempts to take sensor data and infer 3D geometry, illumination, materials, and motions such that a graphics renderer could realistically reproduce the observed scene. Renderers, however, are designed to solve the forward process of image synthesis. To go in the other direction, we propose an approximate differentiable renderer (DR) that explicitly models the relationship between changes in model parameters and image observations. We describe a publicly available OpenDR framework that makes it easy to express a forward graphics model and then automatically obtain derivatives with respect to the model parameters and to optimize over them. Built on a new autodifferentiation package and OpenGL, OpenDR provides a local optimization method that can be incorporated into probabilistic programming frameworks. We demonstrate the power and simplicity of programming with OpenDR by using it to solve the problem of estimating human body shape from Kinect depth and RGB data.

ps

pdf Code Chumpy Supplementary video of talk DOI Project Page [BibTex]


Intrinsic Video

Kong, N., Gehler, P., Black, M. J.

In Computer Vision – ECCV 2014, 8690, pages: 360-375, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Intrinsic images such as albedo and shading are valuable for later stages of visual processing. Previous methods for extracting albedo and shading use either single images or images together with depth data. Instead, we define intrinsic video estimation as the problem of extracting temporally coherent albedo and shading from video alone. Our approach exploits the assumption that albedo is constant over time while shading changes slowly. Optical flow aids in the accurate estimation of intrinsic video by providing temporal continuity as well as putative surface boundaries. Additionally, we find that the estimated albedo sequence can be used to improve optical flow accuracy in sequences with changing illumination. The approach makes only weak assumptions about the scene and we show that it substantially outperforms existing single-frame intrinsic image methods. We evaluate this quantitatively on synthetic sequences as well as on challenging natural sequences with complex geometry, motion, and illumination.

ps

pdf Supplementary Video DOI Project Page [BibTex]
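
A deliberately crude illustration of the temporal-constancy assumption only; it ignores the optical flow alignment, the priors, and the actual estimator in the paper. In the log domain I = A + S, so for an already-aligned grayscale sequence a per-pixel temporal median is a rough stand-in for the time-constant albedo, and the residual for the slowly varying shading.

import numpy as np

def crude_intrinsic_video(aligned_frames, eps=1e-6):
    """aligned_frames: (T, H, W) grayscale video in [0, 1], already motion-compensated."""
    log_frames = np.log(aligned_frames + eps)
    log_albedo = np.median(log_frames, axis=0)    # assumed constant over time
    log_shading = log_frames - log_albedo         # assumed to change slowly over time
    return np.exp(log_albedo), np.exp(log_shading)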


Robot Arm Pose Estimation through Pixel-Wise Part Classification

Bohg, J., Romero, J., Herzog, A., Schaal, S.

In IEEE International Conference on Robotics and Automation (ICRA), pages: 3143-3150, June 2014 (inproceedings)

Abstract
We propose to frame the problem of marker-less robot arm pose estimation as a pixel-wise part classification problem. As input, we use a depth image in which each pixel is classified to be either from a particular robot part or the background. The classifier is a random decision forest trained on a large number of synthetically generated and labeled depth images. From all the training samples ending up at a leaf node, a set of offsets is learned that votes for relative joint positions. Pooling these votes over all foreground pixels and subsequent clustering gives us an estimate of the true joint positions. Due to the intrinsic parallelism of pixel-wise classification, this approach can run in super real-time and is more efficient than previous ICP-like methods. We quantitatively evaluate the accuracy of this approach on synthetic data. We also demonstrate that the method produces accurate joint estimates on real data despite being purely trained on synthetic data.

am ps

video code pdf DOI Project Page [BibTex]
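
The toy sketch below mirrors only the shape of the pipeline in the abstract (per-pixel depth features, a random forest over parts, a pooled estimate per part); the depth-difference features, forest settings and the centroid pooling that stands in for the learned offset voting and clustering are all assumptions made for the example.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def depth_features(depth, offsets=((0, 5), (5, 0), (0, -5), (-5, 0))):
    """Simple per-pixel depth-difference features (assumed feature choice)."""
    feats = [depth - np.roll(np.roll(depth, dy, axis=0), dx, axis=1) for dy, dx in offsets]
    return np.stack(feats, axis=-1).reshape(-1, len(offsets))

def train_and_estimate(train_depth, train_part_labels, test_depth):
    """train_part_labels: (H, W) integer part ids, 0 = background."""
    clf = RandomForestClassifier(n_estimators=50, max_depth=12)
    clf.fit(depth_features(train_depth), train_part_labels.ravel())
    pred = clf.predict(depth_features(test_depth)).reshape(test_depth.shape)
    # Crude stand-in for offset voting + clustering: centroid of each predicted part.
    return {p: np.argwhere(pred == p).mean(axis=0) for p in np.unique(pred) if p != 0}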


Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points

Tzionas, D., Srikantha, A., Aponte, P., Gall, J.

In German Conference on Pattern Recognition (GCPR), pages: 1-13, Lecture Notes in Computer Science, Springer, GCPR, September 2014 (inproceedings)

Abstract
Hand motion capture has been an active research topic in recent years, following the success of full-body pose tracking. Despite similarities, hand tracking proves to be more challenging, characterized by a higher dimensionality, severe occlusions and self-similarity between fingers. For this reason, most approaches rely on strong assumptions, like hands in isolation or expensive multi-camera systems, that limit the practical use. In this work, we propose a framework for hand tracking that can capture the motion of two interacting hands using only a single, inexpensive RGB-D camera. Our approach combines a generative model with collision detection and discriminatively learned salient points. We quantitatively evaluate our approach on 14 new sequences with challenging interactions.

ps

pdf Supplementary pdf Supplementary Material Project Page DOI Project Page [BibTex]


Human Pose Estimation with Fields of Parts

Kiefel, M., Gehler, P.

In Computer Vision – ECCV 2014, LNCS 8693, pages: 331-346, Lecture Notes in Computer Science, (Editors: Fleet, David and Pajdla, Tomas and Schiele, Bernt and Tuytelaars, Tinne), Springer, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
This paper proposes a new formulation of the human pose estimation problem. We present the Fields of Parts model, a binary Conditional Random Field model designed to detect human body parts of articulated people in single images. The Fields of Parts model is inspired by the idea of Pictorial Structures; it models local appearance and joint spatial configuration of the human body. However, the underlying graph structure is entirely different. The idea is simple: we model the presence and absence of a body part at every possible position, orientation, and scale in an image with a binary random variable. This results in a vast number of random variables; however, we show that approximate inference in this model is efficient. Moreover, we can encode the very same appearance and spatial structure as in Pictorial Structures models. This approach allows us to combine ideas from segmentation and pose estimation into a single model. The Fields of Parts model can use evidence from the background, include local color information, and it is connected more densely than a kinematic chain structure. On the challenging Leeds Sports Poses dataset we improve over the Pictorial Structures counterpart by 5.5% in terms of Average Precision of Keypoints (APK).

ei ps

website pdf DOI Project Page [BibTex]


Discovering Object Classes from Activities

Srikantha, A., Gall, J.

In European Conference on Computer Vision, 8694, pages: 415-430, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
In order to avoid an expensive manual labeling process or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in these videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.

ps

pdf anno poster DOI Project Page [BibTex]


Automated Detection of New or Evolving Melanocytic Lesions Using a 3D Body Model

Bogo, F., Romero, J., Peserico, E., Black, M. J.

In Medical Image Computing and Computer-Assisted Intervention (MICCAI), 8673, pages: 593-600, Lecture Notes in Computer Science, (Editors: Golland, Polina and Hata, Nobuhiko and Barillot, Christian and Hornegger, Joachim and Howe, Robert), Springer International Publishing, Medical Image Computing and Computer-Assisted Intervention (MICCAI), September 2014 (inproceedings)

Abstract
Detection of new or rapidly evolving melanocytic lesions is crucial for early diagnosis and treatment of melanoma. We propose a fully automated pre-screening system for detecting new lesions or changes in existing ones, on the order of 2-3 mm, over almost the entire body surface. Our solution is based on a multi-camera 3D stereo system. The system captures 3D textured scans of a subject at different times and then brings these scans into correspondence by aligning them with a learned, parametric, non-rigid 3D body model. This means that captured skin textures are in accurate alignment across scans, facilitating the detection of new or changing lesions. The integration of lesion segmentation with a deformable 3D body model is a key contribution that makes our approach robust to changes in illumination and subject pose.

ps

pdf Poster DOI Project Page [BibTex]


A freely-moving monkey treadmill model

Foster, J., Nuyujukian, P., Freifeld, O., Gao, H., Walker, R., Ryu, S., Meng, T., Murmann, B., Black, M. J., Shenoy, K.

Journal of Neural Engineering, 11(4):046020, 2014 (article)

Abstract
Objective: Motor neuroscience and brain-machine interface (BMI) design is based on examining how the brain controls voluntary movement, typically by recording neural activity and behavior from animal models. Recording technologies used with these animal models have traditionally limited the range of behaviors that can be studied, and thus the generality of science and engineering research. We aim to design a freely-moving animal model using neural and behavioral recording technologies that do not constrain movement. Approach: We have established a freely-moving rhesus monkey model employing technology that transmits neural activity from an intracortical array using a head-mounted device and records behavior through computer vision using markerless motion capture. We demonstrate the excitability and utility of this new monkey model, including the first recordings from motor cortex while rhesus monkeys walk quadrupedally on a treadmill. Main results: Using this monkey model, we show that multi-unit threshold-crossing neural activity encodes the phase of walking and that the average firing rate of the threshold crossings covaries with the speed of individual steps. On a population level, we find that neural state-space trajectories of walking at different speeds have similar rotational dynamics in some dimensions that evolve at the step rate of walking, yet robustly separate by speed in other state-space dimensions. Significance: Freely-moving animal models may allow neuroscientists to examine a wider range of behaviors and can provide a flexible experimental paradigm for examining the neural mechanisms that underlie movement generation across behaviors and environments. For BMIs, freely-moving animal models have the potential to aid prosthetic design by examining how neural encoding changes with posture, environment, and other real-world context changes. Understanding this new realm of behavior in more naturalistic settings is essential for overall progress of basic motor neuroscience and for the successful translation of BMIs to people with paralysis.

ps

pdf Supplementary DOI Project Page [BibTex]
