Rubén Gómez Ojeda
PhD student in Computer Vision and Mobile Robotics
I am Rubén Gómez Ojeda, a PhD student in the Machine Perception and Intelligent Robotics group (MAPIR) at the University of Málaga (Spain). My research interests include:
- Computer Vision: Visual odometry and SLAM, and deep learning-based place recognition.
- Mobile Robotics: Obstacle avoidance for autonomous vehicles.
- Unmanned Aerial Vehicles (UAVs).
- Sensor Fusion.
I was born in Málaga (Spain) in 1988. I received a B.Sc.-M.Sc. in "Ingeniería Industrial" (Industrial Engineering, a general engineering degree covering mechanics, electrical and electronic engineering, computer science, etc.) from the University of Málaga in 2012. In October 2013 I joined the Machine Perception and Intelligent Robotics (MAPIR) group as a researcher in computer vision, and I received an M.Sc. in Mechatronics in 2014. I was awarded a grant (DPI2014-55826-R) from the Spanish National Plan of Research to pursue a four-year PhD under the supervision of Prof. Javier González-Jiménez, which I started in November 2015. From October 2016 to February 2017 I was a visiting researcher at the Robotics and Perception Group of the University of Zurich, under the supervision of Prof. Davide Scaramuzza.
Deep Image Enhancement for VO in HDR Environments: One of the main open challenges in visual odometry (VO) is robustness to difficult illumination conditions and high dynamic range (HDR) environments. We address this problem from a deep learning perspective, proposing two different networks: a very deep model combining CNNs and LSTMs, and a smaller one capable of running in real time on a GPU. Both networks transform a sequence of RGB images into more informative ones that are also robust to changes in illumination, exposure time, gamma correction, etc. We validate the enhanced representations by evaluating the sequences produced by the two architectures in several state-of-the-art VO algorithms, such as ORB-SLAM and DSO.
Arxiv draft: https://arxiv.org/abs/1707.01274
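The enhancement networks themselves are learned, but the property they aim for can be illustrated with a simple hand-crafted stand-in (this is only a sketch of the goal, not the paper's method): local contrast normalization produces a representation that is invariant to affine intensity changes, which approximates robustness to exposure or gain variations.

```python
import numpy as np

def local_contrast_normalize(img, patch=8, eps=1e-6):
    """Zero-mean, unit-variance normalization over local patches.

    A non-learned stand-in for the enhancement networks: the output is
    (nearly) invariant to affine intensity changes I -> a*I + b, which
    mimics robustness to exposure-time or gain variations.
    """
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = img[i:i+patch, j:j+patch].astype(np.float64)
            out[i:i+patch, j:j+patch] = (block - block.mean()) / (block.std() + eps)
    return out

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (64, 64))
bright = 0.5 * img + 0.2          # simulated exposure/gain change
a = local_contrast_normalize(img)
b = local_contrast_normalize(bright)
print(np.abs(a - b).max())        # near zero: the representation is invariant
```

A learned network can of course go far beyond this, e.g. hallucinating detail in saturated regions, which no fixed normalization can do.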
PL-SLAM: In this work we propose a stereo visual SLAM system based on the simultaneous use of point and line segment features, as in our previous approaches to visual odometry, that is capable of working robustly in a wide variety of scenarios. As a consequence, we also obtain meaningful maps that can be further exploited to extract valuable information from structured scenarios. The camera motion is recovered through non-linear minimization of the projection errors of both point and line segment features, with an ad-hoc implementation of bundle adjustment for this combined case.
Arxiv draft: https://arxiv.org/abs/1705.09479
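The two residual types entering that minimization can be sketched as follows (a minimal numpy illustration with hypothetical intrinsics, not the paper's implementation): a point contributes its 2-D reprojection error, while a 3-D segment contributes the distances of its projected endpoints to the observed image line.

```python
import numpy as np

# Minimal sketch of the combined point + line-segment reprojection errors
# minimized in point-and-line SLAM (names and intrinsics are illustrative).

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])      # hypothetical pinhole intrinsics

def project(X):
    """Project a 3D point (camera frame) with the pinhole model."""
    u = K @ X
    return u[:2] / u[2]

def point_residual(X, obs):
    """Standard 2-D reprojection error of a point feature."""
    return project(X) - obs

def line_residual(P, Q, l):
    """For a 3D segment (P, Q): distance of each projected endpoint
    to the observed infinite image line l = (a, b, c), with |(a,b)| = 1."""
    d = lambda p: l[:2] @ p + l[2]
    return np.array([d(project(P)), d(project(Q))])

# Example: a point and a segment observed exactly -> both residuals vanish.
X = np.array([0.1, -0.2, 2.0])
obs = project(X)
P, Q = np.array([-0.5, 0.0, 3.0]), np.array([0.5, 0.0, 3.0])
p, q = project(P), project(Q)
n = np.array([-(q - p)[1], (q - p)[0]]); n /= np.linalg.norm(n)
l = np.append(n, -n @ p)          # normalized line through both projections
print(point_residual(X, obs), line_residual(P, Q, l))
```

In bundle adjustment, both residual types are stacked into one non-linear least-squares problem over camera poses and landmarks.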
Deep Place Recognition: Place recognition is still an open problem in computer vision, and its difficulty increases under changes in scenario, viewpoint, illumination, or weather conditions. We propose a Convolutional Neural Network (CNN) to recognize the same location under severe weather or illumination variations, seasonal changes, etc. In contrast to previous approaches that rely on visual descriptors, our algorithm works with the complete image, providing a better estimate of place similarity and avoiding the unnecessary errors introduced by subsequent feature-matching processes.
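The whole-image idea can be sketched in a few lines (illustrative only: a random projection stands in for the trained CNN): each image is mapped to a single global descriptor, and place similarity is one descriptor comparison, with no per-feature matching step.

```python
import numpy as np

# Sketch of whole-image place matching. The "network" here is a
# hypothetical random linear map standing in for a learned CNN that
# produces a global descriptor per image.

rng = np.random.default_rng(1)
W = rng.standard_normal((128, 64 * 64))   # stand-in for the trained model

def describe(img):
    """Map a whole image to one L2-normalized global descriptor."""
    d = W @ img.ravel()
    return d / np.linalg.norm(d)

def similarity(img_a, img_b):
    """Place similarity = cosine similarity of the two descriptors."""
    return float(describe(img_a) @ describe(img_b))

place = rng.standard_normal((64, 64))
same_place_darker = 0.6 * place            # simulated illumination (gain) change
other_place = rng.standard_normal((64, 64))
print(similarity(place, same_place_darker))   # high (the gain change cancels out)
print(similarity(place, other_place))         # low
```

A trained CNN replaces the random map with features that also survive viewpoint, seasonal, and weather changes, which a linear projection cannot.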
Monocular Visual Odometry: In this work, we extend a popular semi-direct approach to monocular visual odometry, known as SVO, to work with line segments, obtaining a more robust system capable of dealing with both textured and structured environments. The proposed system allows for fast tracking of line segments, since it eliminates the need to continuously extract and match features between subsequent frames. The method has a higher computational burden than the original SVO, but it still runs at 60 Hz on a personal computer while performing robustly in a wider variety of scenarios.
Code: Here (GitHub)
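The semi-direct idea for segments can be shown in miniature (a toy 1-D sketch, not the actual system): rather than re-detecting and matching lines in every frame, points sampled along the predicted segment are refined by minimizing photometric error directly against the new image.

```python
import numpy as np

# Toy 1-D version of semi-direct segment tracking: a smooth vertical edge
# plays the role of the segment, and its position is refined along the
# x-axis by direct photometric alignment instead of re-detection/matching.

x = np.linspace(0, 10, 200)
edge = lambda shift: 1.0 / (1.0 + np.exp(-(x - 5.0 - shift) * 4))  # edge model

cur = edge(0.3)                     # current frame: the edge moved by 0.3
samples = slice(80, 120)            # "points sampled along the segment"

# Grid search over the shift minimizing the photometric (intensity) error;
# the real system uses Gauss-Newton on the image intensities instead.
shifts = np.linspace(-1, 1, 401)
costs = [np.sum((edge(s)[samples] - cur[samples]) ** 2) for s in shifts]
best = shifts[int(np.argmin(costs))]
print(best)                         # recovers the 0.3 displacement
```

This is why the approach is fast: tracking reduces to a small photometric optimization per segment, with no detector or descriptor matching in the loop.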
Stereo Visual Odometry: A common strategy for stereo visual odometry, known as feature-based, tracks relevant features (traditionally keypoints) in a sequence of stereo images and then estimates the pose increment between those frames by imposing rigid-body constraints between the features. However, in low-textured scenes it is often difficult to find a large set of point features, or they may be poorly distributed over the image, so that the performance of point-based algorithms deteriorates. To address this, we propose a probabilistic stereo visual odometry algorithm based on the combination of keypoints and line segments, which provide complementary information, making it capable of working in a wide variety of environments.
Code: Here (GitHub)
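The probabilistic combination can be sketched on a toy problem (illustrative numpy code, not the released implementation): each feature contributes residuals weighted by its inverse noise variance, and points and lines enter one weighted least-squares system together. Here the unknown is a 2-D translation; points constrain both axes, while each line constrains only the direction of its normal.

```python
import numpy as np

# Toy weighted least-squares combination of point and line constraints,
# as in a probabilistic point+line odometry back-end (all values synthetic).

rng = np.random.default_rng(2)
t_true = np.array([3.0, -1.5])      # translation to recover

J, r, w = [], [], []
for _ in range(20):                 # point features: full 2-D constraint
    noise = 0.05 * rng.standard_normal(2)
    J += [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    r += list(t_true + noise)
    w += [1 / 0.05**2] * 2          # weight = inverse noise variance
for _ in range(20):                 # line features: constraint along normal n
    n = rng.standard_normal(2); n /= np.linalg.norm(n)
    J.append(n)
    r.append(n @ t_true + 0.02 * rng.standard_normal())
    w.append(1 / 0.02**2)

J, r, W = np.array(J), np.array(r), np.diag(w)
t_hat = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)   # weighted normal equations
print(t_hat)                                         # close to t_true
```

The complementary nature of the two feature types shows up directly: when point features are scarce, the line rows keep the normal equations well conditioned.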
Here you can find a list of my publications with the author's PDF version and BibTeX references:
Rubén Gómez Ojeda
Dpto. Ingeniería de Sistemas y Automática
E.T.S.I. Informática - Telecomunicaciones
Universidad de Málaga
Campus Universitario de Teatinos
29071 Málaga, Spain
Phone: +34 952 13 3362
e-mail: rubengooj [at] gmail.com