Social media and crowdsourcing for social good
The student will contribute to a multidisciplinary initiative that uses social media and mobile crowdsourcing for social good. Several projects are available; specific topics include:
* Social media analytics
* Visualization of social and crowdsourced data
* Smartphone apps for mobile crowdsourcing
Students will work with social computing researchers studying European and developing cities.
Contact: Prof. Daniel Gatica-Perez [email protected]
Mobile interface for the generation of movements with variations in robots
The aim of the project is to extend the standard spline-based approach used in robotics into an interface that lets the user define not only the keypoints the robot should pass through, but also the variations allowed around each keypoint. This will be achieved with a model predictive control implementation of a Bézier curve (see the reference below). The project will exploit a humanoid robot and a Lenovo Phab 2 Pro mobile phone, both available at Idiap. The Phab 2 Pro works with Tango, Google's augmented reality platform (see links below). A basic interface between the mobile phone and the robot, based on the ROS middleware, is already available for the project.
The goal of the project is to extend the existing approach to 3D paths, by developing an interface that lets the user define 3D ellipsoids through their center and three principal axes. This interface will then be used to move the robot's hands along the desired paths with natural variations. The developed approach will be evaluated against the baseline approach of defining a single Bézier spline through 3D coordinates. The effect of the variations will be evaluated by letting a group of users observe several repetitions of movements with natural variations, and contrasting these with repetitions of a single trajectory.
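As a toy illustration of keypoints with allowed variations (not the model predictive control formulation of the reference below, which the project would actually build on), one can sample perturbed keypoints and evaluate a Bézier curve through them with De Casteljau's algorithm. All function names and sigma values here are illustrative assumptions:

```python
import random

def bezier(points, t):
    """Evaluate a Bezier curve at parameter t via De Casteljau's algorithm."""
    pts = [list(p) for p in points]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def vary_keypoints(keypoints, sigmas, rng=random):
    """Sample new keypoints, perturbing each by its allowed variation."""
    return [[c + rng.gauss(0.0, s) for c in kp]
            for kp, s in zip(keypoints, sigmas)]

# Nominal keypoints and per-keypoint variation (sigma = 0 pins a point).
keypoints = [[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]]
sigmas = [0.0, 0.3, 0.3, 0.0]

# One sampled repetition of the movement, discretized over 21 time steps.
trajectory = [bezier(vary_keypoints(keypoints, sigmas), t / 20)
              for t in range(21)]
```

Each call to `vary_keypoints` yields a different but similar curve, which is the "natural variations" effect to be evaluated in the user study.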
Contact: Dr Sylvain Calinon
Keywords: robot learning, robot interface
Reference: Berio, D., Calinon, S. and Leymarie, F.F. (2017). Generating Calligraphic Trajectories with Model Predictive Control. In Proc. of the 43rd Conf. on Graphics Interface.
Pose sketching interface to control a humanoid robot
The aim of the project is to create an interface that lets a user draw a stick figure corresponding to the pose of a humanoid robot (arms and head), which is then used to move the robot to the desired pose. The project will exploit a humanoid robot and a Lenovo Phab 2 Pro mobile phone, both available at Idiap. The Phab 2 Pro works with Tango, Google's augmented reality platform (see links below). A basic interface between the mobile phone and the robot, based on the ROS middleware, is already available for the project.
The first step of the project will be to develop an algorithm that transforms the 2D sketch into the closest corresponding 3D pose. Existing algorithms developed for computer graphics interfaces will serve as a starting point. The second step will be to run this algorithm on the mobile phone to control a humanoid robot. The last step will be to evaluate the algorithm and the interface by comparing them to the baseline approach of moving the robot's articulations one by one to achieve a desired upper-body pose.
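One classical starting point for the 2D-to-3D step (a sketch under simplifying assumptions, not the project's prescribed method) is Taylor-style lifting: under an orthographic camera, if a limb's 3D length is known, the depth offset of the child joint follows from the foreshortening of its 2D projection. The function name and the `flip` flag below are illustrative:

```python
import math

def lift_limb(p2d_parent, p2d_child, length, z_parent=0.0, flip=False):
    """Recover the child joint's depth from an orthographic 2D sketch,
    assuming the limb's 3D length is known (Taylor-style lifting)."""
    dx = p2d_child[0] - p2d_parent[0]
    dy = p2d_child[1] - p2d_parent[1]
    dz2 = length ** 2 - dx ** 2 - dy ** 2
    dz = math.sqrt(max(dz2, 0.0))   # clamp if the sketched limb is too long
    if flip:                        # depth sign is ambiguous: e.g. ask the user
        dz = -dz
    return (p2d_child[0], p2d_child[1], z_parent + dz)

# Example: the parent is at the origin and the child is sketched at (3, 0),
# but the limb is 5 units long, so the child must extend 4 units in depth.
child_3d = lift_limb((0.0, 0.0), (3.0, 0.0), 5.0)
```

Applied joint by joint down the kinematic chain (shoulders, elbows, wrists, head), this yields a full candidate 3D pose; the remaining sign ambiguities are exactly where an interactive mobile interface can help.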
Contact: Dr Sylvain Calinon
Keywords: robot learning, robot interface
(MS or Semester project) Robotic microscopy platform integration
As part of Idiap’s ongoing effort to develop a microscopy platform that combines a custom light-sheet fluorescence microscope (LSFM or SPIM), a 4D moving stage, multispectral lasers/LEDs, and robotic arms to acquire data from moving objects with moving sensors and illumination, we are creating a multi-modal abstraction layer over the hardware to easily discover and reproduce acquisition and processing techniques, in particular:
* Structured illumination
* Time-lapse cardiac blood flow imaging
* Three-dimensional reconstruction
* Deblurring
* Computer vision
* Temporal and spatial superresolution
The student will help develop a programming interface connecting sensors, light sources, and moving elements with machine learning and signal processing algorithms, and will investigate acquisition protocols for microscopy.
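One way such an abstraction layer could look (a sketch with invented class and method names, not Idiap's actual API) is a set of device interfaces plus a protocol object that records every hardware call, so that an acquisition can be reproduced exactly:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """A camera or other detector on the platform."""
    @abstractmethod
    def acquire(self):
        """Return one frame of data."""

class Stage(ABC):
    """A motorized 4D stage (x, y, z, rotation angle)."""
    @abstractmethod
    def move_to(self, x, y, z, angle):
        """Move the sample to the given pose."""

class LightSource(ABC):
    """A laser or LED, possibly with a structured-illumination pattern."""
    @abstractmethod
    def illuminate(self, pattern):
        """Switch on with the given pattern."""

class Protocol:
    """Drives the devices and logs each step for reproducibility."""
    def __init__(self, sensor, stage, source):
        self.sensor, self.stage, self.source = sensor, stage, source
        self.log = []

    def step(self, pose, pattern):
        self.stage.move_to(*pose)
        self.source.illuminate(pattern)
        frame = self.sensor.acquire()
        self.log.append((pose, pattern))
        return frame

# Dummy implementations standing in for real device drivers.
class DummySensor(Sensor):
    def acquire(self):
        return "frame"

class DummyStage(Stage):
    def move_to(self, x, y, z, angle):
        pass

class DummyLight(LightSource):
    def illuminate(self, pattern):
        pass

protocol = Protocol(DummySensor(), DummyStage(), DummyLight())
frame = protocol.step((0, 0, 0, 0), "uniform")
```

Because real drivers and dummies share one interface, processing algorithms can be developed and tested offline and then run unchanged against the hardware.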
(MS or Semester project) Superresolution methods in Optical Projection Tomography (OPT)
Optical projection tomography is a form of tomography involving optical microscopy; it is in many ways the optical equivalent of X-ray computed tomography, the medical CT scan. The essential mathematics and reconstruction algorithms are similar for CT and OPT: for example, the Radon transform, or iterative reconstruction based on projection data. Both medical CT and OPT compute 3D volumes from the transmission of photons through the material of interest. OPT is popular due to the common availability of optical components. The drawbacks of the method are a lack of spatial resolution, caused by the requirement of low numerical aperture optics, and a lower temporal resolution, caused by the need for images from multiple views. Using computational methods such as deconvolution, guided interpolation, structured illumination, and deep learning, the student will investigate new superresolution techniques, both in the spatial resolution of reconstructions and in the temporal resolution of time-lapse movies.
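To make the CT/OPT analogy concrete, here is a deliberately naive, pure-Python sketch of parallel-beam forward projection (a toy Radon transform with nearest-neighbour sampling) and unfiltered backprojection on a tiny phantom; a real reconstruction would use filtered backprojection or an iterative solver, and the grid size and angles below are arbitrary choices:

```python
import math

def project(image, theta):
    """Parallel-beam projection of a square image at angle theta (radians):
    sum nearest-neighbour samples along each ray -- a toy Radon transform."""
    n = len(image)
    c = (n - 1) / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    proj = [0.0] * n
    for r in range(n):           # detector bin
        for s in range(n):       # position along the ray
            x = (r - c) * cos_t - (s - c) * sin_t + c
            y = (r - c) * sin_t + (s - c) * cos_t + c
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < n and 0 <= yi < n:
                proj[r] += image[yi][xi]
    return proj

def backproject(sinogram, thetas, n):
    """Unfiltered backprojection: smear each projection back over the grid."""
    recon = [[0.0] * n for _ in range(n)]
    c = (n - 1) / 2.0
    for proj, theta in zip(sinogram, thetas):
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        for yi in range(n):
            for xi in range(n):
                r = (xi - c) * cos_t + (yi - c) * sin_t + c
                ri = int(round(r))
                if 0 <= ri < n:
                    recon[yi][xi] += proj[ri]
    return recon

# Toy phantom: a single bright pixel at the center of a 5x5 grid.
phantom = [[0.0] * 5 for _ in range(5)]
phantom[2][2] = 1.0
angles = [i * math.pi / 8 for i in range(8)]   # 8 views over 180 degrees
sinogram = [project(phantom, a) for a in angles]
reconstruction = backproject(sinogram, angles, 5)
```

The reconstruction peaks at the true pixel but is blurred by the missing filtering step and the small number of views, which is precisely where the superresolution methods mentioned above (deconvolution, interpolation between views, learned priors) would intervene.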