publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
Journals
2020
-
Cleaning Tasks Knowledge Transfer Between Heterogeneous Robots: a Deep Learning Approach Jaeseok Kim, Nino Cauli, Pedro Vicente, Bruno D. Damas, Alexandre Bernardino, José Santos-Victor, and Filippo Cavallo Journal of Intelligent & Robotic Systems 2020 [Abstract] [DOI] [PDF] [Bibtex]
Nowadays, autonomous service robots are becoming an important topic in robotic research. Differently from typical industrial scenarios, with highly controlled environments, service robots must show an additional robustness to task perturbations and changes in the characteristics of their sensory feedback. In this paper, a robot is taught to perform two different cleaning tasks over a table, using a learning from demonstration paradigm. However, differently from other approaches, a convolutional neural network is used to generalize the demonstrations to different, not yet seen dirt or stain patterns on the same table using only visual feedback, and to perform cleaning movements accordingly. Robustness to robot posture and illumination changes is achieved using data augmentation techniques and camera images transformation. This robustness allows the transfer of knowledge regarding execution of cleaning tasks between heterogeneous robots operating in different environmental settings. To demonstrate the viability of the proposed approach, a network trained in Lisbon to perform cleaning tasks, using the iCub robot, is successfully employed by the DoRo robot in Peccioli, Italy.
@article{kim2020jint, author = {Kim, Jaeseok and Cauli, Nino and Vicente, Pedro and Damas, Bruno D. and Bernardino, Alexandre and Santos{-}Victor, Jos{\'{e}} and Cavallo, Filippo}, title = {Cleaning Tasks Knowledge Transfer Between Heterogeneous Robots: a Deep Learning Approach}, journal = {Journal of Intelligent {\&} Robotic Systems}, volume = {98}, number = {1}, pages = {191--205}, year = {2020}, url = {https://doi.org/10.1007/s10846-019-01072-4}, doi = {10.1007/s10846-019-01072-4}, pdf = {jkim_jint2020.pdf} }
2018
-
Markerless Eye-Hand Kinematic Calibration on the iCub Humanoid Robot Pedro Vicente, Lorenzo Jamone, and Alexandre Bernardino Frontiers in Robotics and AI 2018 [Abstract] [DOI] [PDF] [Bibtex]
Humanoid robots are resourceful platforms and can be used in diverse application scenarios. However, their high number of degrees of freedom (i.e., moving arms, head and eyes) deteriorates the precision of eye-hand coordination. A good kinematic calibration is often difficult to achieve, due to several factors, e.g., unmodeled deformations of the structure or backlash in the actuators. This is particularly challenging for very complex robots such as the iCub humanoid robot, which has 12 degrees of freedom and cable-driven actuation in the serial chain from the eyes to the hand. The exploitation of real-time robot sensing is of paramount importance to increase the accuracy of the coordination, for example, to realize precise grasping and manipulation tasks. In this code paper, we propose an online and markerless solution to the eye-hand kinematic calibration of the iCub humanoid robot. We have implemented a sequential Monte Carlo algorithm estimating kinematic calibration parameters (joint offsets) which improve the eye-hand coordination based on the proprioception and vision sensing of the robot. We have shown the usefulness of the developed code and its accuracy on simulation and real-world scenarios. The code is written in C++ and CUDA, where we exploit the GPU to increase the speed of the method. The code is made available online along with a dataset for testing purposes.
@article{10.3389/frobt.2018.00046, author = {Vicente, Pedro and Jamone, Lorenzo and Bernardino, Alexandre}, title = {Markerless Eye-Hand Kinematic Calibration on the iCub Humanoid Robot}, journal = {Frontiers in Robotics and AI}, volume = {5}, pages = {46}, year = {2018}, url = {https://www.frontiersin.org/article/10.3389/frobt.2018.00046}, doi = {10.3389/frobt.2018.00046}, issn = {2296-9144}, pdf = {pvicente-frontiers2018.pdf} }
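The calibration above is, at its core, a sequential Monte Carlo filter over constant joint offsets, weighted by how well the offset-corrected kinematics predicts where the hand is seen by the cameras. Below is a minimal illustrative sketch of that idea; the toy `forward_kinematics` and the noise values are hypothetical stand-ins, not the released C++/CUDA code.

```python
import numpy as np

# Hypothetical stand-in for the robot model: a toy 2-link planar arm.
# The real system renders the iCub hand and detects it in the camera images.
def forward_kinematics(joint_angles, joint_offsets):
    q = joint_angles + joint_offsets
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def smc_offset_estimation(measurements, joint_trajectory,
                          n_particles=500, obs_noise=0.02, drift=0.005):
    """Sequential Monte Carlo estimate of constant joint offsets."""
    dim = joint_trajectory.shape[1]
    particles = np.random.normal(0.0, 0.1, size=(n_particles, dim))
    weights = np.full(n_particles, 1.0 / n_particles)
    for q, z in zip(joint_trajectory, measurements):
        # artificial dynamics keep the particle set from collapsing
        particles += np.random.normal(0.0, drift, particles.shape)
        pred = np.array([forward_kinematics(q, p) for p in particles])
        err = np.linalg.norm(pred - z, axis=1)
        loglik = -0.5 * (err / obs_noise) ** 2
        weights *= np.exp(loglik - loglik.max())     # numerically safe update
        weights /= weights.sum()
        # resample when the effective sample size drops
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = np.random.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return particles.T @ weights                     # weighted-mean offsets

# toy usage: recover constant offsets of [0.05, -0.03] rad from 50 poses
qs = np.random.uniform(-1.0, 1.0, size=(50, 2))
zs = np.array([forward_kinematics(q, np.array([0.05, -0.03])) for q in qs])
print(smc_offset_estimation(zs, qs))
```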
2016
-
Robotic Hand Pose Estimation Based on Stereo Vision and GPU-enabled Internal Graphical Simulation Pedro Vicente, Lorenzo Jamone, and Alexandre Bernardino Journal of Intelligent & Robotic Systems 2016 [Abstract] [DOI] [PDF] [Bibtex]
Humanoid robots have complex kinematic chains whose modeling is error prone. If the robot model is not well calibrated, its hand pose cannot be determined precisely from the encoder readings, and this affects reaching and grasping accuracy. In our work, we propose a novel method to simultaneously i) estimate the pose of the robot hand, and ii) calibrate the robot kinematic model. This is achieved by combining stereo vision, proprioception, and a 3D computer graphics model of the robot. Notably, the use of GPU programming allows the estimation and calibration to be performed in real time during the execution of arm reaching movements. Proprioceptive information is exploited to generate hypotheses about the visual appearance of the hand in the camera images, using the 3D computer graphics model of the robot that includes both kinematic and texture information. These hypotheses are compared with the actual visual input using particle filtering, to obtain both i) the best estimate of the hand pose and ii) a set of joint offsets to calibrate the kinematics of the robot model. We evaluate two different approaches to estimate the 6D pose of the hand from vision (silhouette segmentation and edges extraction) and show experimentally that the pose estimation error is considerably reduced with respect to the nominal robot model. Moreover, the GPU implementation ensures performance about 3 times faster than the CPU implementation, allowing real-time operation.
@article{vicente2016jint, author = {Vicente, Pedro and Jamone, Lorenzo and Bernardino, Alexandre}, title = {Robotic Hand Pose Estimation Based on Stereo Vision and GPU-enabled Internal Graphical Simulation}, journal = {Journal of Intelligent {\&} Robotic Systems}, year = {2016}, month = sep, day = {01}, volume = {83}, number = {3}, pages = {339--358}, issn = {1573-0409}, doi = {10.1007/s10846-016-0376-6}, url = {https://doi.org/10.1007/s10846-016-0376-6}, pdf = {vicente16gpuPoseEst_jint.pdf} }
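Both vision-based variants evaluated above (silhouette segmentation and edge extraction) come down to scoring a rendered hand hypothesis against the camera image. For edges, a common choice is a chamfer-style distance-transform score, sketched generically below; the binary edge maps are assumed given (e.g. from the renderer and an edge detector), and this is not the paper's GPU implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_distance_score(rendered_edges, observed_edges):
    """Chamfer-style score between a rendered hand edge map and the edges
    extracted from the camera image (both binary HxW arrays). Lower is
    better; exp(-score) can be used as a particle weight."""
    # distance from every pixel to the nearest observed edge pixel
    dist_to_observed = distance_transform_edt(~observed_edges.astype(bool))
    on = rendered_edges.astype(bool)
    if not on.any():
        return np.inf
    # average that distance over the rendered edge pixels
    return float(dist_to_observed[on].mean())

# toy usage with random edge maps standing in for renderer / edge-detector output
rng = np.random.default_rng(0)
observed = rng.random((240, 320)) > 0.99
rendered = rng.random((240, 320)) > 0.99
weight = np.exp(-edge_distance_score(rendered, observed))
```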
-
Online Body Schema Adaptation Based on Internal Mental Simulation and Multisensory Feedback Pedro Vicente, Lorenzo Jamone, and Alexandre Bernardino Frontiers in Robotics and AI 2016 [Abstract] [DOI] [PDF] [Bibtex]
In this paper, we describe a novel approach to obtain automatic adaptation of the robot body schema and to improve the robot perceptual and motor skills based on this body knowledge. Predictions obtained through a mental simulation of the body are combined with the real sensory feedback to achieve two objectives simultaneously: body schema adaptation and markerless 6D hand pose estimation. The body schema consists of a computer graphics simulation of the robot, which includes the arm and head kinematics (adapted online during the movements) and an appearance model of the hand shape and texture. The mental simulation process generates predictions on how the hand will appear in the robot camera images, based on the body schema and the proprioceptive information (i.e. motor encoders). These predictions are compared to the actual images using Sequential Monte Carlo techniques to feed a particle-based Bayesian estimation method to estimate the parameters of the body schema. The updated body schema will improve the estimates of the 6D hand pose, which is then used in a closed-loop control scheme (i.e. visual servoing), enabling precise reaching. We report experiments with the iCub humanoid robot that support the validity of our approach. A number of simulations with precise ground-truth were performed to evaluate the estimation capabilities of the proposed framework. Then, we show how the use of high-performance GPU programming and an edge-based algorithm for visual perception allow for real-time implementation in real world scenarios.
@article{vicente2016frontiers, author = {Vicente, Pedro and Jamone, Lorenzo and Bernardino, Alexandre}, title = {Online Body Schema Adaptation Based on Internal Mental Simulation and Multisensory Feedback}, journal = {Frontiers in Robotics and AI}, volume = {3}, pages = {7}, year = {2016}, url = {https://www.frontiersin.org/article/10.3389/frobt.2016.00007}, doi = {10.3389/frobt.2016.00007}, issn = {2296-9144}, pdf = {pvicente-frontiers2016.pdf} }
Conferences
2020
-
Active Robot Learning for Efficient Body-Schema Online Adaptation Gonçalo Cunha, Alexandre Bernardino, Pedro Vicente, Ricardo Ribeiro, and Plínio Moreno In Portuguese Conference on Pattern Recognition (RecPad) 2020 [Abstract] [PDF] [Bibtex]
This work proposes an active learning approach for estimating the Denavit-Hartenberg parameters of 7 joints of the iCub arm in a simulation environment, using observations of the end-effector’s pose and knowing the values from proprioceptive sensors. Cost-sensitive active learning aims to reduce the number of measurements taken and the total movement performed by the robot while calibrating, thus reducing energy consumption, mechanical fatigue, and wear. The estimation of the arm’s parameters is done using the Extended Kalman Filter, and the active exploration is guided by the A-Optimality criterion. The results show that cost-sensitive active learning can perform similarly to the straightforward active learning approach, while significantly reducing the necessary movement.
@inproceedings{gcunha2020recpad, title = {{Active Robot Learning for Efficient Body-Schema Online Adaptation}}, author = {Cunha, Gonçalo and Bernardino, Alexandre and Vicente, Pedro and Ribeiro, Ricardo and Moreno, Plínio}, booktitle = {Portuguese Conference on Pattern Recognition (RecPad)}, year = {2020}, pdf = {acunha-recpad2020.pdf}, bestp = {Best Poster Award} }
[Best Poster Award]
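The exploration strategy above amounts to predicting, for every candidate arm configuration, how much an EKF update would shrink the parameter covariance (A-optimality, i.e. the trace of the posterior covariance), traded off against the cost of moving there. A minimal sketch of that selection step follows; `jacobian_fn`, the noise values and the candidate set are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

def a_optimality_next_config(candidates, current_q, P, jacobian_fn,
                             meas_noise=1e-4, move_cost_weight=0.05):
    """Pick the next joint configuration for calibration: simulate the EKF
    covariance update for each candidate and score it by the trace of the
    posterior covariance (A-optimality) plus a movement-cost term.
    `jacobian_fn(q)` is assumed to return the 3 x n_params Jacobian of the
    end-effector position w.r.t. the estimated DH parameters at q."""
    best_q, best_score = None, np.inf
    R = meas_noise * np.eye(3)                       # end-effector position noise
    for q in candidates:
        H = jacobian_fn(q)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P_post = (np.eye(P.shape[0]) - K @ H) @ P
        score = np.trace(P_post) + move_cost_weight * np.linalg.norm(q - current_q, 1)
        if score < best_score:
            best_q, best_score = q, score
    return best_q, best_score

# toy usage: 10 parameters, 7 joints, random linearised measurement models
rng = np.random.default_rng(0)
cands = [rng.uniform(-1.0, 1.0, 7) for _ in range(20)]
q_next, _ = a_optimality_next_config(cands, np.zeros(7), 0.01 * np.eye(10),
                                     lambda q: rng.standard_normal((3, 10)))
```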
-
2D Visual Servoing meets Rapidly-exploring Random Trees for collision avoidance Miguel Nascimento, Pedro Vicente, and Alexandre Bernardino In IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) 2020 [Abstract] [DOI] [PDF] [Bibtex]
Visual Servoing is a well-known subject in robotics. However, there are still some challenges in the visual control of robots for applications in human environments. In this article, we propose a method for path planning and correction of kinematic errors using visual servoing. 3D information provided by external cameras is used to segment the environment and detect the obstacles in the scene. Rapidly-exploring Random Trees are then used to calculate a path through the obstacles to a given, previously calculated, end-effector goal pose. This allows for model-free path planning in cluttered environments by using a point cloud representation of the environment. The proposed path is then followed by the robot in open loop. Error correction is performed near the goal pose by using real-time calculated image features as control points for an Image-Based Visual Servoing controller that drives the end-effector towards the desired goal pose. With this method, we intend to achieve the navigation of a robotic arm through a cluttered environment towards a goal pose, with error correction performed at the end of the trajectory to mitigate the weaknesses of both Image-Based Visual Servoing and open-loop trajectory following. We performed several experiments to validate our approach, evaluating each main component of our solution (environment segmentation, trajectory calculation, and error correction through visual servoing). Furthermore, our solution was implemented in ROS using the Baxter Research Robot.
@inproceedings{mnascimento2020icarsc, author = {{Nascimento}, Miguel and {Vicente}, Pedro and {Bernardino}, Alexandre}, booktitle = {IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)}, title = {2D Visual Servoing meets Rapidly-exploring Random Trees for collision avoidance}, year = {2020}, pages = {227-232}, doi = {10.1109/ICARSC49921.2020.9096133}, bestp = {Highly Commended Paper Award}, pdf = {mnascimento2020icarsc.pdf} }
[Highly Commended Paper Award]
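The error-correction stage above relies on a classical Image-Based Visual Servoing law, v = -λ L⁺ e, where L is the interaction matrix of the tracked point features. The snippet below is the textbook formulation of that law (in normalized image coordinates), not the exact controller of the paper, and the feature values in the usage example are invented.

```python
import numpy as np

def ibvs_velocity(features, desired_features, depths, gain=0.5):
    """Classical IBVS law v = -gain * pinv(L) @ e for point features given in
    normalized image coordinates, with one depth estimate per feature."""
    e = (features - desired_features).reshape(-1)
    rows = []
    for (x, y), Z in zip(features, depths):
        # interaction matrix of an image point (Chaumette & Hutchinson)
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    L = np.array(rows)
    return -gain * np.linalg.pinv(L) @ e             # 6-DoF camera twist

# toy usage: four coplanar points at 0.5 m, slightly offset from the goal
feats = np.array([[0.11, 0.10], [-0.10, 0.12], [-0.10, -0.10], [0.10, -0.09]])
goal = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
v = ibvs_velocity(feats, goal, depths=[0.5] * 4)
```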
2019
-
Robotic Interactive Physics Parameters Estimator (RIPPE) Atabak Dehban, Carlos Cardoso, Pedro Vicente, Alexandre Bernardino, and José Santos-Victor In Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) 2019 [Abstract] [PDF] [Bibtex]
The ability to reason about the natural laws of an environment directly contributes to successful performance in it. In this work, we present RIPPE, a framework that allows a robot to leverage existing physics simulators as its knowledge base for learning interactions with inanimate objects. To achieve this, the robot needs to initially interact with its surrounding environment and observe the effects of its behaviours. Relying on the simulator to efficiently solve the partial differential equations describing these physical interactions, the robot infers consistent physical parameters of its surroundings by repeating the same actions in simulation and evaluating how closely they match its real observations. The learning process is performed using Bayesian Optimisation techniques to sample the parameter space efficiently. We assess the utility of these inferred parameters by measuring how well they can explain physical interactions using previously unseen actions and tools.
@inproceedings{adehban2019icdl, title = {{Robotic Interactive Physics Parameters Estimator (RIPPE)}}, author = {Dehban, Atabak and Cardoso, Carlos and Vicente, Pedro and Bernardino, Alexandre and Santos-Victor, José}, booktitle = {Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)}, year = {2019}, organization = {IEEE}, pdf = {adehban-icdl2019.pdf} }
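The inference loop described above can be pictured as plain Bayesian optimisation over the simulator's physical parameters, with the objective being the discrepancy between simulated and real effects. Here is a minimal, generic sketch using a Gaussian-process surrogate with expected improvement; `simulation_discrepancy` is a hypothetical stand-in for replaying the robot's action in the physics engine, and none of the names correspond to RIPPE's actual code.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulation_discrepancy(params):
    """Hypothetical objective: replay the recorded action in simulation with
    `params` (e.g. friction, restitution) and compare with the real effect.
    Here a simple quadratic bowl stands in for that comparison."""
    return float(np.sum((params - np.array([0.4, 0.6])) ** 2))

def bayes_opt(bounds, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, len(bounds)))
    y = np.array([simulation_discrepancy(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, len(bounds)))
        mu, sigma = gp.predict(cand, return_std=True)
        # expected improvement for minimisation
        z = (y.min() - mu) / np.maximum(sigma, 1e-9)
        ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, simulation_discrepancy(x_next))
    return X[np.argmin(y)]

best_params = bayes_opt(np.array([[0.0, 1.0], [0.0, 1.0]]))
```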
2018
-
Finding safe 3D robot grasps through efficient haptic exploration with unscented Bayesian optimization and collision penalty João Castanheira, Pedro Vicente, Ruben Martinez-Cantin, Lorenzo Jamone, and Alexandre Bernardino In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018 [Abstract] [PDF] [Bibtex]
Robust grasping is a major, and still unsolved, problem in robotics. Information about the 3D shape of an object can be obtained either from prior knowledge (e.g., accurate models of known objects or approximate models of familiar objects) or real-time sensing (e.g., partial point clouds of unknown objects) and can be used to identify good potential grasps. However, due to modeling and sensing inaccuracies, local exploration is often needed to refine such grasps and successfully apply them in the real world. The recently proposed unscented Bayesian optimization technique can make such exploration safer by selecting grasps that are robust to uncertainty in the input space (e.g., inaccuracies in the grasp execution). Extending our previous work on 2D optimization, in this paper we propose a 3D haptic exploration strategy that combines unscented Bayesian optimization with a novel collision penalty heuristic to find safe grasps in a very efficient way: while by augmenting the search-space to 3D we are able to find better grasps, the collision penalty heuristic allows us to do so without increasing the number of exploration steps.
@inproceedings{jcastanheira2018iros, title = {{Finding safe 3D robot grasps through efficient haptic exploration with unscented Bayesian optimization and collision penalty}}, author = {Castanheira, João and Vicente, Pedro and Martinez-Cantin, Ruben and Jamone, Lorenzo and Bernardino, Alexandre}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, year = {2018}, organization = {IEEE}, pdf = {jcastanheira-iros2018.pdf} }
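The key idea above is that a candidate grasp is scored not at a single point but as an expectation under execution noise, approximated with an unscented transform, with an extra penalty when sigma points collide. The sketch below illustrates that scoring step only; `quality_fn` and `collision_fn` are hypothetical callables (e.g. a surrogate model of grasp quality and a collision check), and the weights follow the standard unscented transform.

```python
import numpy as np

def sigma_points(x, cov, kappa=1.0):
    """Unscented-transform sigma points and weights around a candidate grasp x."""
    n = len(x)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [x] + [x + L[:, i] for i in range(n)] + [x - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_score(x, input_cov, quality_fn, collision_fn, penalty=1.0):
    """Expected grasp quality under execution noise, minus a collision penalty."""
    pts, w = sigma_points(np.asarray(x, dtype=float), input_cov)
    quality = np.array([quality_fn(p) for p in pts])
    collisions = np.array([collision_fn(p) for p in pts], dtype=float)
    return float(w @ quality - penalty * (w @ collisions))

# toy usage in a 2D grasp-parameter space
score = unscented_score([0.2, 0.1], 0.01 * np.eye(2),
                        quality_fn=lambda p: 1.0 - np.linalg.norm(p),
                        collision_fn=lambda p: p[0] > 0.25)
```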
-
Incremental adaptation of a robot body schema based on touch events Rodrigo Zenha, Pedro Vicente, Lorenzo Jamone, and Alexandre Bernardino In Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) 2018 [Abstract] [PDF] [Bibtex]
The term ‘body schema’ refers to a computational representation of a physical body: the neural representation of a human body, or the numerical representation of a robot body. In both humans and robots, such a representation is crucial to accurately control body movements. While humans learn and continuously adapt their body schema based on multimodal perception and neural plasticity, robots are typically assigned a fixed analytical model (e.g., the robot kinematics) which describes their bodies. However, there are always discrepancies between a model and the real robot, and they vary over time, thus affecting the accuracy of movement control. In this work, we equip a humanoid robot with the ability to incrementally estimate such model inaccuracies by touching known planar surfaces (e.g., walls) in its vicinity through motor babbling exploration, effectively adapting its own body schema based on the contact information alone. The problem is formulated as an adaptive parameter estimation (Extended Kalman Filter) which makes use of planar constraints obtained at each contact detection. We compare different incremental update methods through an extensive set of experiments with a realistic simulation of the iCub humanoid robot, showing that the model inaccuracies can be reduced by more than 80%.
@inproceedings{rzenha2018icdl, title = {{Incremental adaptation of a robot body schema based on touch events}}, author = {Zenha, Rodrigo and Vicente, Pedro and Jamone, Lorenzo and Bernardino, Alexandre}, booktitle = {Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)}, year = {2018}, organization = {IEEE}, pdf = {rzenha_icdl2018.pdf} }
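Each touch event above yields one scalar constraint: at contact, the fingertip position predicted by the (mis-calibrated) kinematics should lie on the known plane. A minimal sketch of the corresponding EKF update is given below, assuming `fingertip_fn` and `jac_fn` stand in for the robot's forward kinematics and its Jacobian with respect to the estimated offsets; none of this is the paper's code.

```python
import numpy as np

def ekf_touch_update(theta, P, plane_n, plane_d, fingertip_fn, jac_fn, noise=1e-4):
    """One EKF update from a touch event: the signed distance of the predicted
    fingertip to the known plane n.p + d = 0 is used as a zero-valued measurement."""
    p = fingertip_fn(theta)                          # predicted fingertip position
    h = plane_n @ p + plane_d                        # predicted signed distance
    H = (plane_n @ jac_fn(theta)).reshape(1, -1)     # 1 x n measurement Jacobian
    S = H @ P @ H.T + noise
    K = P @ H.T / S                                  # Kalman gain, n x 1
    theta_new = theta + (K * (0.0 - h)).ravel()
    P_new = (np.eye(len(theta)) - K @ H) @ P
    return theta_new, P_new

# toy usage: two offset parameters, linearised fingertip model p = p0 + J @ theta
p0 = np.array([0.0, 0.0, 0.12])
J = np.array([[0.1, 0.0], [0.0, 0.1], [0.2, 0.1]])
theta, P = np.zeros(2), 0.1 * np.eye(2)
theta, P = ekf_touch_update(theta, P, plane_n=np.array([0.0, 0.0, 1.0]),
                            plane_d=-0.1, fingertip_fn=lambda t: p0 + J @ t,
                            jac_fn=lambda t: J)
```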
-
Autonomous table-cleaning from kinesthetic demonstrations using Deep Learning Nino Cauli, Pedro Vicente, Jaeseok Kim, Bruno Damas, Alexandre Bernardino, Filippo Cavallo, and José Santos-Victor In Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) 2018 [Abstract] [PDF] [Bibtex]
We address the problem of teaching a robot how to autonomously perform table-cleaning tasks in a robust way. In particular, we focus on wiping and sweeping a table with a tool (e.g., a sponge). For the training phase, we use a set of kinesthetic demonstrations performed over a table. The recorded 2D table-space trajectories, together with the images acquired by the robot, are used to train a deep convolutional network that automatically learns the parameters of a Gaussian Mixture Model that represents the hand movement. After the learning stage, the network is fed with the current image showing the location/shape of the dirt or stain to clean. The robot is able to perform cleaning arm-movements, obtained through Gaussian Mixture Regression using the mixture parameters provided by the network. Invariance to the robot posture is achieved by applying a plane-projective transformation before inputting the images to the neural network; robustness to illumination changes and other disturbances is increased by considering an augmented data set. This improves the generalization properties of the neural network, enabling for instance its use with the left arm after being trained using trajectories acquired with the right arm. The system was tested on the iCub robot, generating a cleaning behaviour similar to that of the human demonstrators.
@inproceedings{cauli2018autonomous, title = {{Autonomous table-cleaning from kinesthetic demonstrations using Deep Learning}}, author = {Cauli, Nino and Vicente, Pedro and Kim, Jaeseok and Damas, Bruno and Bernardino, Alexandre and Cavallo, Filippo and Santos-Victor, Jos{\'e}}, booktitle = {Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)}, year = {2018}, organization = {IEEE}, pdf = {ncauli_icdl2018.pdf} }
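Once the network has produced the mixture parameters, the cleaning trajectory is generated through Gaussian Mixture Regression: each Gaussian is conditioned on the current time step and the conditional means are blended by the component responsibilities. The snippet below is a generic GMR step over (time, x, y), shown only as an illustration; in the paper the mixture parameters come from the network, not hand-written values as here.

```python
import numpy as np

def gmr(t, priors, means, covs):
    """Gaussian Mixture Regression: expected (x, y) hand position at time t,
    given a mixture over (t, x, y). `means` is (K, 3) and `covs` is (K, 3, 3),
    with index 0 = time and indices 1:3 = table coordinates."""
    K = len(priors)
    h = np.zeros(K)
    cond_means = np.zeros((K, 2))
    for k in range(K):
        mu_t, mu_xy = means[k, 0], means[k, 1:]
        s_tt = covs[k, 0, 0]
        s_xyt = covs[k, 1:, 0]
        # responsibility of component k for this time step
        h[k] = priors[k] * np.exp(-0.5 * (t - mu_t) ** 2 / s_tt) / np.sqrt(2 * np.pi * s_tt)
        # conditional mean of (x, y) given t under component k
        cond_means[k] = mu_xy + s_xyt / s_tt * (t - mu_t)
    h /= h.sum()
    return h @ cond_means

# toy usage: three components spread along a 1-second motion
priors = np.array([0.3, 0.4, 0.3])
means = np.array([[0.1, 0.0, 0.0], [0.5, 0.2, 0.1], [0.9, 0.4, 0.0]])
covs = np.array([np.diag([0.02, 0.01, 0.01])] * 3)
print(gmr(0.5, priors, means, covs))
```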
-
“iCub, clean the table!” A robot learning from demonstration approach using deep neural networks Jaeseok Kim, Nino Cauli, Pedro Vicente, Bruno Damas, Filippo Cavallo, and José Santos-Victor In IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) 2018 [Abstract] [DOI] [PDF] [Bibtex]
Autonomous service robots have become a key research topic in robotics, particularly for household chores. A typical home scenario is highly unconstrained and a service robot needs to adapt constantly to new situations. In this paper, we address the problem of autonomous cleaning tasks in uncontrolled environments. In our approach, a human instructor uses kinesthetic demonstrations to teach a robot how to perform different cleaning tasks on a table. Then, we use Task-Parametrized Gaussian Mixture Models (TP-GMMs) to encode the demonstrations' variability, while providing appropriate generalization abilities. TP-GMMs extend Gaussian Mixture Models with an auxiliary set of reference frames, in order to extrapolate the demonstrations to different task parameters such as movement locations, amplitude or orientations. However, the reference frames (that parametrize TP-GMMs) can be very difficult to extract in practice, as it may require segmenting the cluttered images of the working table-top. Instead, in this work the reference frames are automatically extracted from robot camera images, using a deep neural network that was trained during human demonstrations of a cleaning task. This approach has two main benefits: (i) it takes the human completely out of the loop while performing complex cleaning tasks; and (ii) the network is able to identify the specific task to be performed directly from image data, thus also enabling automatic task selection from a set of previously demonstrated tasks. The system was implemented on the iCub humanoid robot. During the tests, the robot was able to successfully clean a table with two different types of dirt (wiping a marker’s scribble or sweeping clusters of lentils).
@inproceedings{kim2018icub, title = {{``iCub, clean the table!'' A robot learning from demonstration approach using deep neural networks}}, author = {Kim, Jaeseok and Cauli, Nino and Vicente, Pedro and Damas, Bruno and Cavallo, Filippo and Santos-Victor, Jos{\'e}}, booktitle = {IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)}, pages = {3--9}, year = {2018}, organization = {IEEE}, pdf = {jkim-icarsc2018.pdf}, bestp = {Best Conference Paper Award}, doi = {10.1109/ICARSC.2018.8374152} }
[Best Conference Paper Award]
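In a TP-GMM, the reference frames extracted by the network enter the model as linear transforms: each mixture component is expressed in every frame, and the frame-specific Gaussians are fused into one world-frame Gaussian by a product of Gaussians. The sketch below shows that standard fusion step (following Calinon's TP-GMM formulation) for a single component; the frames and local parameters are assumed given and the code is not taken from the paper.

```python
import numpy as np

def tpgmm_component_in_world(mus_local, sigmas_local, frames):
    """Fuse one TP-GMM component, expressed in several task frames, into a
    single world-frame Gaussian via a product of Gaussians. `frames` is a
    list of (A, b) transforms, e.g. extracted by the network from the image."""
    precision_sum, weighted_mean_sum = 0.0, 0.0
    for (A, b), mu, sigma in zip(frames, mus_local, sigmas_local):
        mu_w = A @ mu + b                            # component mean in world frame
        sigma_w = A @ sigma @ A.T                    # component covariance in world frame
        precision = np.linalg.inv(sigma_w)
        precision_sum = precision_sum + precision
        weighted_mean_sum = weighted_mean_sum + precision @ mu_w
    sigma_hat = np.linalg.inv(precision_sum)
    return sigma_hat @ weighted_mean_sum, sigma_hat  # fused mean and covariance

# toy usage: one component seen from two reference frames
A1, b1 = np.eye(2), np.zeros(2)
A2, b2 = np.array([[0.0, -1.0], [1.0, 0.0]]), np.array([0.1, 0.0])
mu_w, sigma_w = tpgmm_component_in_world(
    [np.array([0.2, 0.0]), np.array([0.0, 0.2])],
    [0.01 * np.eye(2), 0.02 * np.eye(2)],
    [(A1, b1), (A2, b2)])
```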
2017
-
Learning at the ends: From hand to tool affordances in humanoid robots G. Saponaro, P. Vicente, A. Dehban, L. Jamone, A. Bernardino, and J. Santos-Victor In Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) 2017 [Abstract] [DOI] [PDF] [Bibtex]
One of the open challenges in designing robots that operate successfully in the unpredictable human environment is how to make them able to predict what actions they can perform on objects, and what their effects will be, i.e., the ability to perceive object affordances. Since modeling all the possible world interactions is unfeasible, learning from experience is required, posing the challenge of collecting a large amount of experiences (i.e., training data). Typically, a manipulative robot operates on external objects by using its own hands (or similar end-effectors), but in some cases the use of tools may be desirable; nevertheless, it is reasonable to assume that while a robot can collect many sensorimotor experiences using its own hands, this cannot happen for all possible human-made tools. Therefore, in this paper we investigate the developmental transition from hand to tool affordances: what sensorimotor skills that a robot has acquired with its bare hands can be employed for tool use? By employing a visual and motor imagination mechanism to represent different hand postures compactly, we propose a probabilistic model to learn hand affordances, and we show how this model can generalize to estimate the affordances of previously unseen tools, ultimately supporting planning, decision-making and tool selection tasks in humanoid robots. We present experimental results with the iCub humanoid robot, and we publicly release the collected sensorimotor data in the form of a hand posture affordances dataset.
@inproceedings{saponaro2017icdl, author = {Saponaro, G. and Vicente, P. and Dehban, A. and Jamone, L. and Bernardino, A. and Santos-Victor, J.}, booktitle = {Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)}, title = {{Learning at the ends: From hand to tool affordances in humanoid robots}}, year = {2017}, volume = {}, number = {}, pages = {331-337}, keywords = {end effectors;humanoid robots;human-robot interaction;learning (artificial intelligence);learning systems;tool affordances;humanoid robots;open challenges;unpredictable human environment;object affordances;possible world interactions;training data;manipulative robot;external objects;similar end-effectors;sensorimotor experiences;sensorimotor skills;bare hands;visual motor imagination mechanism;different hand postures;probabilistic model;unseen tools;tool selection tasks;iCub humanoid robot;collected sensorimotor data;hand posture affordances dataset;possible human-made tools;Tools;Robot sensing systems;Visualization;Humanoid robots;Solid modeling;Shape}, doi = {10.1109/DEVLRN.2017.8329826}, issn = {2161-9484}, pdf = {gsaponaro-icdlepirob2017.pdf}, month = sep }
-
Towards markerless visual servoing of grasping tasks for humanoid robots P. Vicente, L. Jamone, and A. Bernardino In IEEE International Conference on Robotics and Automation (ICRA) 2017 [Abstract] [DOI] [PDF] [Bibtex]
Vision-based grasping for humanoid robots is a challenging problem due to a multitude of factors. First, humanoid robots use an “eye-to-hand” kinematics configuration that, contrary to the more common “eye-in-hand” configuration, demands a precise estimate of the position of the robot’s hand. Second, humanoid robots have a long kinematic chain from the eyes to the hands, prone to accumulate the calibration errors of the kinematics model, which offsets the measured hand-to-object relative pose from the real one. In this paper, we propose a method able to solve these two issues jointly. A robust pose estimation of the robot’s hand is achieved via a 3D model-based stereo-vision algorithm, using an edge-based distance transform metric and synthetically generated images of a robot’s arm-hand internal computer-graphics model (kinematics and appearance). Then, a particle-based optimisation method adapts on-line the robot’s internal model to match the real and the synthetically generated images, effectively compensating the kinematics calibration errors. We evaluate the proposed approach using a position-based visual-servoing method on the iCub robot, showing the importance of the continuous visual feedback in humanoid grasping tasks.
@inproceedings{vicente2017icra, author = {Vicente, P. and Jamone, L. and Bernardino, A.}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, title = {{Towards markerless visual servoing of grasping tasks for humanoid robots}}, year = {2017}, volume = {}, number = {}, pages = {3811-3816}, keywords = {computer graphics;error analysis;humanoid robots;manipulator kinematics;optimisation;pose estimation;robot vision;stereo image processing;3D model-based stereo-vision algorithm;edge-based distance transform metric;eye-to-hand kinematics configuration;grasping task visual servoing;hand-to-object relative pose measurement;humanoid grasping tasks;iCub robot;kinematic calibration errors;position estimation;robot arm-hand internal computer-graphic model;robust robot pose estimation;synthetically generated images;vision-based grasping;visual feedback;Calibration;Grasping;Humanoid robots;Kinematics;Solid modeling;Visualization}, doi = {10.1109/ICRA.2017.7989441}, issn = {}, pdf = {pvicente_ICRA2017.pdf}, month = may }
-
Wedding robotics: A case study P. Vicente, and A. Bernardino In IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) 2017 [Abstract] [DOI] [PDF] [Bibtex]
In this work, we propose to study a social robot in a wedding context, where it plays the role of a wedding ring bearer. We focus on the interaction with the audience, their expectations, and reactions, rather than on technical details. We collect data from 121 individuals belonging to two different groups: those who have seen the robot behaviour (live or recorded versions) and those who did not see the robot performance. We divide the study into three parts: i) the reactions of the guests at the wedding, ii) a comparison between subjects who were or were not exposed to the robot behaviour, and iii) a within-subjects experiment in which, after filling in a survey, participants are asked to watch the recorded robot behaviour. The guests reacted positively to the experiment. The robot was considered likeable, lively and safe by the majority of the participants in the study. The group that observed the robot’s behaviour had a better opinion on the use of robots in wedding ceremonies than the group that did not observe the experience. This may suggest that a higher presence of robots in social activities will increase the acceptance of robots in society.
@inproceedings{vicente2017icarsc, author = {Vicente, P. and Bernardino, A.}, booktitle = {IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)}, title = {{Wedding robotics: A case study}}, year = {2017}, volume = {}, number = {}, pages = {140-145}, keywords = {mobile robots;robot behaviour;robot performance;social robot;technical details;wedding guest reactions;wedding ring bearer;wedding robotics;Conferences;Context;Humanoid robots;Robot sensing systems;Service robots;Software;case study;human-robot interaction;humanoid robot;social experiment;social robotics}, doi = {10.1109/ICARSC.2017.7964066}, pdf = {pvicente-icarsc2017.pdf}, month = apr }
2015
-
GPU-Enabled Particle Based Optimization for Robotic-Hand Pose Estimation and Self-Calibration Pedro Vicente, Ricardo Ferreira, Lorenzo Jamone, and Alexandre Bernardino In IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) 2015 [Abstract] [DOI] [PDF] [Bibtex]
Humanoid robots have complex kinematic chains that are difficult to model with the precision required to reach and/or grasp objects properly. In this paper we propose a GPU-enabled, vision-based 3D hand pose estimation method that runs during robotic reaching tasks to calibrate in real time the kinematic chain of the robot arm. This is achieved by combining: i) proprioceptive and visual sensing; and ii) a kinematic and computer graphics model of the system. We use proprioceptive input to create visual hypotheses about the hand appearance in the image using a 3D CAD model inside the game engine from Unity Technologies. These hypotheses are compared with the actual visual input using particle filter techniques. The outcome of this processing is the best hypothesis for the hand pose and a set of joint offsets to calibrate the arm. We tested our approach in a simulation environment and verified that the angular error is reduced 3 times and the position error about 12 times compared with the non-calibrated case (proprioception only). The GPU implementation techniques used ensure performance 2.5 times faster than performing the computations on the CPU.
@inproceedings{vicente2015icarsc, author = {Vicente, Pedro and Ferreira, Ricardo and Jamone, Lorenzo and Bernardino, Alexandre}, booktitle = {IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)}, title = {{GPU-Enabled Particle Based Optimization for Robotic-Hand Pose Estimation and Self-Calibration}}, year = {2015}, pages = {3-8}, keywords = {CAD;control engineering computing;graphics processing units;humanoid robots;manipulator kinematics;optimisation;particle filtering (numerical methods);pose estimation;robot vision;solid modelling;3D CAD model;3D hand pose estimation method;GPU-enabled particle based optimization;GPU-enabled vision;Unity Technologies;computer graphics model;game engine;humanoid robots;kinematic chains;particle filter techniques;proprioceptive sensing;robot arm;robotic-hand pose estimation;robotic-hand self-calibration;visual hypotheses;visual sensing;Computational modeling;Estimation;Graphics processing units;Joints;Kinematics;Robots;Visualization;3D model based tracking;GPU;humanoid robot;reaching;robot self-calibration;robotic-hand pose estimation}, doi = {10.1109/ICARSC.2015.25}, pdf = {pvicente-ICARSC15.pdf}, month = apr }
2014
-
Eye-hand online adaptation during reaching tasks in a humanoid robot Pedro Vicente, Ricardo Ferreira, Lorenzo Jamone, and Alexandre Bernardino In Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) 2014 [Abstract] [DOI] [PDF] [Bibtex]
In this paper we propose a method for the online adaptation of a humanoid robot’s arm kinematics, using its visual and proprioceptive sensors. A typical reaching movement starts with a ballistic open-loop phase to bring the hand to the vicinity of the object. During this phase, as soon as the hand of the robot enters the field of view of one of its cameras, a vision-based 3D hand pose estimation method feeds a particle filter that gradually adjusts the arm kinematics’ parameters. Our method makes use of a 3D CAD model of the robot hand (geometry and texture) whose predicted position in the image is compared at each time step with the cameras’ incoming information. When the hand gets close to the object, the kinematic errors have been significantly reduced and a better control of grasping can eventually be achieved. We have tested the method both in simulation and with the real robot and verified that the error decreases by a factor of 3 during a typical reaching time span.
@inproceedings{vicente2014icdl, author = {Vicente, Pedro and Ferreira, Ricardo and Jamone, Lorenzo and Bernardino, Alexandre}, booktitle = {Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)}, title = {{Eye-hand online adaptation during reaching tasks in a humanoid robot}}, year = {2014}, pages = {175-180}, organization = {IEEE}, keywords = {CAD;cameras;dexterous manipulators;humanoid robots;image sensors;manipulator kinematics;particle filtering (numerical methods);pose estimation;robot vision;solid modelling;ballistic open-loop phase;cameras;eye-hand online adaptation;humanoid robot arm kinematics;kinematic errors;particle filter;proprioceptive sensors;reaching tasks;robot hand 3D CAD model;vision based 3D hand pose estimation method;visual sensors;Cameras;Joints;Kinematics;Robot vision systems;Visualization;3D model based tracking;Online adaptation;humanoid robot;internal model learning;reaching}, doi = {10.1109/DEVLRN.2014.6982978}, pdf = {vicente14icdlEyeHand.pdf}, month = oct }
Under review
2021
-
From rocks to walls: a model-free reinforcement learning approach to dry stacking with irregular rocks André Menezes, Pedro Vicente, Alexandre Bernardino, and Rodrigo Ventura In IEEE International Conference on Robotics and Automation (ICRA) 2021 [Abstract] [Bibtex]
In-situ resource utilization (ISRU) is a key aspect of efficient human exploration of extraterrestrial environments. A cost-effective method for the construction of preliminary structures is dry stacking with locally found unprocessed rocks, which is a challenging task. This work focuses on learning this task from scratch. Former approaches rely on previously acquired models, which may be hard to obtain in the context of a mission. As an alternative, we propose a model-free, data-driven approach. We formulate an abstraction of the problem as the task of selecting the position to place each rock, presented to the robot in a sequence, on top of the currently built structure. The goal is to assemble a wall that approximates a target volume, given the 3D perception of the currently built structure, the next object and the target volume. An agent is developed to learn this task using reinforcement learning. The deep Q-networks (DQN) algorithm is used, where the Q-network outputs a value map corresponding to the expected return of placing the object in each position of a top-view depth image. The learned Q-function is able to capture the goal and dynamics of the environment. The emerged behaviour is, to some extent, consistent with dry stacking theory. The learned policy outperforms engineered heuristics, both in terms of stability of the structure and similarity with the target volume. Despite the simplification of the task, the policy learned with this approach could be applied in a realistic setting as the high-level planner in an autonomous construction pipeline.
@inproceedings{amenezes2021icra, title = {{From rocks to walls: a model-free reinforcement learning approach to dry stacking with irregular rocks}}, author = {Menezes, André and Vicente, Pedro and Bernardino, Alexandre and Ventura, Rodrigo}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, year = {2021} }
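Because the Q-network above outputs a dense value map over placement positions, acting and learning reduce to picking a pixel from that map and regressing towards the usual one-step DQN target. The snippet below sketches those two steps under that assumption, with a random array standing in for the network output; it is not the authors' implementation.

```python
import numpy as np

def select_placement(q_map, epsilon=0.1, rng=np.random.default_rng()):
    """Epsilon-greedy choice of a placement pixel from the (H x W) Q-value map
    the network produces for the current top-view depth image."""
    if rng.random() < epsilon:
        return tuple(int(rng.integers(0, s)) for s in q_map.shape)
    return tuple(int(i) for i in np.unravel_index(np.argmax(q_map), q_map.shape))

def dqn_target(reward, done, q_map_next, gamma=0.99):
    """Standard one-step DQN target y = r + gamma * max_a' Q_target(s', a'),
    where `q_map_next` is the target network's value map for the next structure."""
    return reward if done else reward + gamma * float(q_map_next.max())

# toy usage with a random value map as a stand-in for the network output
q = np.random.default_rng(0).random((64, 64))
row, col = select_placement(q)
y = dqn_target(reward=0.3, done=False, q_map_next=q)
```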