260 results for "robot manipulation"
Search Results
2. Desktop-scale robot tape manipulation for additive manufacturing
- Author
- Tushar, Nahid, Wu, Rencheng, She, Yu, Zhou, Wenchao, and Shou, Wan
- Published
- 2025
- Full Text
- View/download PDF
3. A Novel Moving Strategy of Teleoperating Manipulation via Exo-Gloves
- Author
- Chen, Yanjun, Li, Yipeng, Du, Wenfu, Zeng, Puyi, Cheng, Zhongjiang, Zhang, Yang, Lan, Xuguang, editor, Mei, Xuesong, editor, Jiang, Caigui, editor, Zhao, Fei, editor, and Tian, Zhiqiang, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Few-Shot Transfer Learning for Deep Reinforcement Learning on Robotic Manipulation Tasks
- Author
- He, Yuanzhi, Wallbridge, Christopher D., Hernández, Juan D., Colombo, Gualtiero B., Huda, M. Nazmul, editor, Wang, Mingfeng, editor, and Kalganova, Tatiana, editor
- Published
- 2025
- Full Text
- View/download PDF
5. RoLD: Robot Latent Diffusion for Multi-task Policy Modeling
- Author
- Tan, Wenhui, Liu, Bei, Zhang, Junbo, Song, Ruihua, Fu, Jianlong, Ide, Ichiro, editor, Kompatsiaris, Ioannis, editor, Xu, Changsheng, editor, Yanai, Keiji, editor, Chu, Wei-Ta, editor, Nitta, Naoko, editor, Riegler, Michael, editor, and Yamasaki, Toshihiko, editor
- Published
- 2025
- Full Text
- View/download PDF
6. Robo-ABC: Affordance Generalization Beyond Categories via Semantic Correspondence for Robot Manipulation
- Author
- Ju, Yuanchen, Hu, Kaizhe, Zhang, Guowei, Zhang, Gu, Jiang, Mingrun, Xu, Huazhe, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Reliable and robust robotic handling of microplates via computer vision and touch feedback.
- Author
- Scamarcio, Vincenzo, Tan, Jasper, Stellacci, Francesco, and Hughes, Josie
- Subjects
- LIFE sciences, COMPUTER vision, MOBILE robots, MICROPLATES, ROBOTICS
- Abstract
Laboratory automation requires reliable and precise handling of microplates, but existing robotic systems often struggle to achieve this, particularly when navigating around the dynamic and variable nature of laboratory environments. This work introduces a novel method integrating simultaneous localization and mapping (SLAM), computer vision, and tactile feedback for the precise and autonomous placement of microplates. Implemented on a bi-manual mobile robot, the method achieves fine-positioning accuracies of ± 1.2 mm and ± 0.4°. The approach was validated through experiments using both mockup and real laboratory instruments, demonstrating at least a 95% success rate across varied conditions and robust performance in a multi-stage protocol. Compared to existing methods, our framework effectively generalizes to different instruments without compromising efficiency. These findings highlight the potential for enhanced robotic manipulation in laboratory automation, paving the way for more reliable and reproducible experimental workflows. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
8. One image for one strategy: human grasping with deep reinforcement based on small-sample representative data.
- Author
- Wang, Fei, Shi, Manyi, Chen, Chao, Zhu, Jinbiao, Liu, Yue, and Chu, Hao
- Abstract
As the first step in grasping operations, vision-guided grasping actions play a crucial role in enabling intelligent robots to perform complex interactive tasks. To overcome the difficulties of dataset preparation and the consumption of computing resources before and during network training, we introduce a method for training human grasping strategies based on small-sample representative datasets, learning a human grasping strategy from only one depth image. Our key idea is to use the entire human grasping area instead of multiple grasping gestures, which greatly reduces dataset preparation. The grasping strategy is then trained through a Q-learning framework; the agent is allowed to continuously explore the environment so that it can overcome the lack of data annotation and prediction in the early stage of the visual network, and then successfully map the human strategy into visual prediction. Considering the cluttered environments widespread in real tasks, we introduce push actions and adopt a staged reward function conducive to grasping. Finally, we learned the human grasping strategy, applied it successfully, and stably executed it on objects not seen before, improving the convergence speed and grasping effect while reducing the consumption of computing resources. We conducted experiments on a Doosan robotic arm equipped with an Intel RealSense camera and a two-finger gripper, and achieved human-strategy grasping with a high success rate in cluttered scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
9. Manipulator Control of the Robotized TMS System with Incurved TMS Coil Case.
- Author
- Kim, Jaewoo and Yang, Gi-Hun
- Subjects
- TORQUE control, TRANSCRANIAL magnetic stimulation, SUBJECT headings, ROBOT control systems, MAGNETIC control
- Abstract
Featured Application: The force/torque control, considering the incurved shape of the TMS coil shape, improved the adherence between the coil and the subject's head. It also enhanced the accuracy of the coil. This paper proposes the force/torque control strategy for the robotized transcranial magnetic stimulation (TMS) system, considering the shape of the TMS coil case. Hybrid position/force control is used to compensate for the error between the current and target position of the coil and to maintain the contact between the coil and the subject's head. The desired force magnitude of the force control part of the hybrid controller is scheduled by the error between the current and target position of the TMS coil for fast error reduction and the comfort of the subject. Additionally, the torque proportional to the torque acting on the coil's center is generated to stabilize the contact. Compliance control, which makes the robot adaptive to the environment, stabilizes the coil and head interaction during force/torque control. The experimental results showed that the force controller made the coil generate a relatively large force for a short time (less than 10 s) for the fast error reduction, and a relatively small interaction force was maintained for the contact. They showed that the torque controller made the contact area inside the coil. The experiment also showed that the proposed strategy could be used for tracking a new target point estimated by the neuronavigation system when the head moved slightly. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Efficient Robot Manipulation via Reinforcement Learning with Dynamic Movement Primitives-Based Policy.
- Author
- Li, Shangde, Huang, Wenjun, Miao, Chenyang, Xu, Kun, Chen, Yidong, Sun, Tianfu, and Cui, Yunduan
- Subjects
- REINFORCEMENT learning, ROBOT control systems, ROBOTS, PRIOR learning, LEARNING
- Abstract
Reinforcement learning (RL) that autonomously explores optimal control policies has become a crucial direction for developing intelligent robots while Dynamic Movement Primitives (DMPs) serve as a powerful tool for efficiently expressing robot trajectories. This article explores an efficient integration of RL and DMP to enhance the learning efficiency and control performance of reinforcement learning in robot manipulation tasks by focusing on the forms of control actions and their smoothness. A novel approach, DDPG-DMP, is proposed to address the efficiency and feasibility issues in the current RL approaches that employ DMP to generate control actions. The proposed method naturally integrates a DMP-based policy into the actor–critic framework of the traditional RL approach Deep Deterministic Policy Gradient (DDPG) and derives the corresponding update formulas to learn the networks that properly decide the parameters of DMPs. A novel inverse controller is further introduced to adaptively learn the translation from observed states into various robot control signals through DMPs, eliminating the requirement for human prior knowledge. Evaluated on five robot arm control benchmark tasks, DDPG-DMP demonstrates significant advantages in control performance, learning efficiency, and smoothness of robot actions compared to related baselines, highlighting its potential in complex robot control applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. UHTP: A User-Aware Hierarchical Task Planning Framework for Communication-Free, Mutually-Adaptive Human-Robot Collaboration.
- Author
- Ramachandruni, Kartik, Kent, Cassandra, and Chernova, Sonia
- Subjects
- ROBOTS, HUMAN beings
- Abstract
Collaborative human-robot task execution approaches require mutual adaptation, allowing both the human and robot partners to take active roles in action selection and role assignment to achieve a single shared goal. Prior works have utilized a leader-follower paradigm in which either agent must follow the actions specified by the other agent. We introduce the User-aware Hierarchical Task Planning (UHTP) framework, a communication-free human-robot collaborative approach for adaptive execution of multi-step tasks that moves beyond the leader-follower paradigm. Specifically, our approach enables the robot to observe the human, perform actions that support the human's decisions, and actively select actions that maximize the expected efficiency of the collaborative task. In turn, the human chooses actions based on their observation of the task and the robot, without being dictated by a scheduler or the robot. We evaluate UHTP both in simulation and in a human subjects experiment of a collaborative drill assembly task. Our results show that UHTP achieves more efficient task plans and shorter task completion times than non-adaptive baselines across a wide range of human behaviors, that interacting with a UHTP-controlled robot reduces the human's cognitive workload, and that humans prefer to work with our adaptive robot over a fixed-policy alternative. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Reliable and robust robotic handling of microplates via computer vision and touch feedback
- Author
- Vincenzo Scamarcio, Jasper Tan, Francesco Stellacci, and Josie Hughes
- Subjects
- robot manipulation, automation, computer vision, life science, mobile robotics, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Laboratory automation requires reliable and precise handling of microplates, but existing robotic systems often struggle to achieve this, particularly when navigating around the dynamic and variable nature of laboratory environments. This work introduces a novel method integrating simultaneous localization and mapping (SLAM), computer vision, and tactile feedback for the precise and autonomous placement of microplates. Implemented on a bi-manual mobile robot, the method achieves fine-positioning accuracies of ±1.2 mm and ±0.4°. The approach was validated through experiments using both mockup and real laboratory instruments, demonstrating at least a 95% success rate across varied conditions and robust performance in a multi-stage protocol. Compared to existing methods, our framework effectively generalizes to different instruments without compromising efficiency. These findings highlight the potential for enhanced robotic manipulation in laboratory automation, paving the way for more reliable and reproducible experimental workflows.
- Published
- 2025
- Full Text
- View/download PDF
13. A novel framework inspired by human behavior for peg-in-hole assembly
- Author
- Guo, Peng, Si, Weiyong, and Yang, Chenguang
- Published
- 2024
- Full Text
- View/download PDF
14. Deep Bayesian-Assisted Keypoint Detection for Pose Estimation in Assembly Automation.
- Author
- Shi, Debo, Rahimpour, Alireza, Ghafourian, Amin, Naddaf Shargh, Mohammad, Upadhyay, Devesh, Soltani Bozchalooi, Iman, and Lasky, Ty
- Subjects
- AI, assembly automation, convolutional neural networks, deep learning, keypoint detection, manufacturing automation, pose estimation, robot manipulation, robotics, Bayes Theorem, Automation, Machine Learning
- Abstract
Pose estimation is crucial for automating assembly tasks, yet achieving sufficient accuracy for assembly automation remains challenging and part-specific. This paper presents a novel, streamlined approach to pose estimation that facilitates automation of assembly tasks. Our proposed method employs deep learning on a limited number of annotated images to identify a set of keypoints on the parts of interest. To compensate for network shortcomings and enhance accuracy we incorporated a Bayesian updating stage that leverages our detailed knowledge of the assembly part design. This Bayesian updating step refines the network output, significantly improving pose estimation accuracy. For this purpose, we utilized a subset of network-generated keypoint positions with higher quality as measurements, while for the remaining keypoints, the network outputs only serve as priors. The geometry data aid in constructing likelihood functions, which in turn result in enhanced posterior distributions of keypoint pixel positions. We then employed the maximum a posteriori (MAP) estimates of keypoint locations to obtain a final pose, allowing for an update to the nominal assembly trajectory. We evaluated our method on a 14-point snap-fit dash trim assembly for a Ford Mustang dashboard, demonstrating promising results. Our approach does not require tailoring to new applications, nor does it rely on extensive machine learning expertise or large amounts of training data. This makes our method a scalable and adaptable solution for the production floors.
- Published
- 2023
15. A Novel Grasp Detection Algorithm with Multi-Target Semantic Segmentation for a Robot to Manipulate Cluttered Objects.
- Author
- Zhong, Xungao, Chen, Yijun, Luo, Jiaguo, Shi, Chaoquan, and Hu, Huosheng
- Subjects
- TRANSFORMER models, OBJECT recognition (Computer vision), ROBOTS, ALGORITHMS, GENERALIZATION, ROBOT hands
- Abstract
Objects in cluttered environments may have similar sizes and shapes, which remains a huge challenge for robot grasping manipulation. The existing segmentation methods, such as Mask R-CNN and Yolo-v8, tend to lose the shape details of objects when dealing with messy scenes, and this loss of detail limits the grasp performance of robots in complex environments. This paper proposes a high-performance grasp detection algorithm with a multi-target semantic segmentation model, which can effectively improve a robot's grasp success rate in cluttered environments. The algorithm consists of two cascades: Semantic Segmentation and Grasp Detection modules (SS-GD), in which the backbone network of the semantic segmentation module is developed by using the state-of-the-art Swin Transformer structure. It can extract the detailed features of objects in cluttered environments and enable a robot to understand the position and shape of the candidate object. To construct the grasp schema SS-GD focused on important vision features, a grasp detection module is designed based on the Squeeze-and-Excitation (SE) attention mechanism, to predict the corresponding grasp configuration accurately. The grasp detection experiments were conducted on an actual UR5 robot platform to verify the robustness and generalization of the proposed SS-GD method in cluttered environments. A best grasp success rate of 91.7% was achieved for cluttered multi-target workspaces. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Force-Balanced 2 Degree of Freedom Robot Manipulator Based on Four Bar Linkages
- Author
- Vyas, Yash, Tognon, Marco, Cocuzza, Silvio, Secchi, Cristian, editor, and Marconi, Lorenzo, editor
- Published
- 2024
- Full Text
- View/download PDF
17. Hybrid Robotic Control for Flexible Element Disassembly
- Author
- Tapia Sal Paz, Benjamín, Sorrosal, Gorka, Mancisidor, Aitziber, Secchi, Cristian, editor, and Marconi, Lorenzo, editor
- Published
- 2024
- Full Text
- View/download PDF
18. Dual Vision-Based Reinforcement Learning: Solving Robot Manipulation Task with Both Static-View and Active-View Cameras
- Author
- Liu, Chenchen, Wang, Ruo Xuan, Zhang, Zhengshen, Ang, Marcelo H., Jr., Lu, Wen Feng, Tay, Francis E. H., Park, Ji Su, editor, Yang, Laurence T., editor, Pan, Yi, editor, and Park, James J., editor
- Published
- 2024
- Full Text
- View/download PDF
19. CNN - Based Object Detection for Robot Grasping in Cluttered Environment
- Author
- Ćirić, Ivan, Ivačko, Nikola, Lalić, Stefan, Nejković, Valentina, Milošević, Maša, Stojiljković, Dušan, Jevtić, Dušan, Trajanovic, Miroslav, editor, Filipovic, Nenad, editor, and Zdravkovic, Milan, editor
- Published
- 2024
- Full Text
- View/download PDF
20. Opening Doors and Drawers by a UR5 Robot with Force Control
- Author
- Dukić, Jana, Vulić, Lukrecia, Šimundić, Valentin, Pejić, Petra, Cupec, Robert, Keser, Tomislav, editor, Ademović, Naida, editor, Desnica, Eleonora, editor, and Grgić, Ivan, editor
- Published
- 2024
- Full Text
- View/download PDF
21. Manipulator Control of the Robotized TMS System with Incurved TMS Coil Case
- Author
- Jaewoo Kim and Gi-Hun Yang
- Subjects
- hybrid position/force control, robot manipulation, torque control, transcranial magnetic stimulation, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
This paper proposes the force/torque control strategy for the robotized transcranial magnetic stimulation (TMS) system, considering the shape of the TMS coil case. Hybrid position/force control is used to compensate for the error between the current and target position of the coil and to maintain the contact between the coil and the subject’s head. The desired force magnitude of the force control part of the hybrid controller is scheduled by the error between the current and target position of the TMS coil for fast error reduction and the comfort of the subject. Additionally, the torque proportional to the torque acting on the coil’s center is generated to stabilize the contact. Compliance control, which makes the robot adaptive to the environment, stabilizes the coil and head interaction during force/torque control. The experimental results showed that the force controller made the coil generate a relatively large force for a short time (less than 10 s) for the fast error reduction, and a relatively small interaction force was maintained for the contact. They showed that the torque controller made the contact area inside the coil. The experiment also showed that the proposed strategy could be used for tracking a new target point estimated by the neuronavigation system when the head moved slightly.
- Published
- 2024
- Full Text
- View/download PDF
22. Mask-Attention A3C: Visual Explanation of Action–State Value in Deep Reinforcement Learning
- Author
- Hidenori Itaya, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi, and Komei Sugiura
- Subjects
- Deep reinforcement learning, explainable AI, visual explanation, video games, robot manipulation, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Deep reinforcement learning (DRL) can learn an agent’s optimal behavior from the experience it gains through interacting with its environment. However, since the decision-making process of DRL agents is a black-box, it is difficult for users to understand the reasons for the agents’ actions. To date, conventional visual explanation methods for DRL agents have focused only on the policy and not on the state value. In this work, we propose a DRL method called Mask-Attention A3C (Mask A3C) to analyze agents’ decision-making by focusing on both the policy and value branches, which have different outputs. Inspired by the Actor-Critic method, our method introduces an Attention mechanism that applies mask processing to the feature map of the policy and value branches using mask-attention, which is a heat-map representation of the basis for judging the policy and state values. We also propose the introduction of a Mask-attention Loss to obtain highly interpretable mask-attention. By introducing this loss function, the agent learns not to gaze at regions that do not affect its decision-making. Our evaluations with Atari 2600 as a video game strategy task and robot manipulation as a robot control task showed that visualizing the mask-attention of an agent during its action selection facilitates the analysis of the agent’s decision-making. We also investigated the effect of Mask-attention Loss and confirmed that it is useful for analyzing agents’ decision-making. In addition, we showed that these mask-attentions are highly interpretable to the user by conducting a user survey on the prediction of the agent’s behavior.
- Published
- 2024
- Full Text
- View/download PDF
23. Efficient Robot Manipulation via Reinforcement Learning with Dynamic Movement Primitives-Based Policy
- Author
- Shangde Li, Wenjun Huang, Chenyang Miao, Kun Xu, Yidong Chen, Tianfu Sun, and Yunduan Cui
- Subjects
- reinforcement learning, robot manipulation, dynamic movement primitives, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
Reinforcement learning (RL) that autonomously explores optimal control policies has become a crucial direction for developing intelligent robots while Dynamic Movement Primitives (DMPs) serve as a powerful tool for efficiently expressing robot trajectories. This article explores an efficient integration of RL and DMP to enhance the learning efficiency and control performance of reinforcement learning in robot manipulation tasks by focusing on the forms of control actions and their smoothness. A novel approach, DDPG-DMP, is proposed to address the efficiency and feasibility issues in the current RL approaches that employ DMP to generate control actions. The proposed method naturally integrates a DMP-based policy into the actor–critic framework of the traditional RL approach Deep Deterministic Policy Gradient (DDPG) and derives the corresponding update formulas to learn the networks that properly decide the parameters of DMPs. A novel inverse controller is further introduced to adaptively learn the translation from observed states into various robot control signals through DMPs, eliminating the requirement for human prior knowledge. Evaluated on five robot arm control benchmark tasks, DDPG-DMP demonstrates significant advantages in control performance, learning efficiency, and smoothness of robot actions compared to related baselines, highlighting its potential in complex robot control applications.
- Published
- 2024
- Full Text
- View/download PDF
24. Interacting with Obstacles Using a Bio-Inspired, Flexible, Underactuated Multilink Manipulator.
- Author
- Prigozin, Amit and Degani, Amir
- Subjects
- BIOLOGICALLY inspired computing, MANIPULATORS (Machinery), SYSTEMS design, ROBOTICS
- Abstract
With the increasing demand for robotic manipulators to operate in complex environments, it is important to develop designs that work in obstacle-rich environments and can navigate around obstacles. This paper aims to demonstrate the capabilities of a bio-inspired, underactuated multilink manipulator in environments with fixed and/or movable obstacles. To simplify the system design, a single rotational actuator is used at the base of the manipulator. We present a modeling method for flexible, multilink underactuated manipulators, including their interaction with obstacles. We also demonstrate how to plan a trajectory for the manipulator in environments with fixed obstacles. The robustness of the manipulator is examined by analyzing the effects of uncertainty in its initial state and the position of obstacles. Next, we demonstrate the performance of the manipulator in environments with movable obstacles and show the advantages of controlling the obstacles' radii and positions. Lastly, we showcase the process of picking up an object in workspaces with obstacles. All the findings are supported by simulations as well as hardware experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Learning high-level robotic manipulation actions with visual predictive model.
- Author
- Ma, Anji, Chi, Guoyi, Ivaldi, Serena, and Chen, Lipeng
- Subjects
- PREDICTION models, VISUAL learning, ROBOT motion, ROBOTICS, ROBOT programming, VISUAL perception
- Abstract
Learning visual predictive models has great potential for real-world robot manipulations. Visual predictive models serve as a model of real-world dynamics to comprehend the interactions between the robot and objects. However, prior works in the literature have focused mainly on low-level elementary robot actions, which typically result in lengthy, inefficient, and highly complex robot manipulation. In contrast, humans usually employ top–down thinking of high-level actions rather than bottom–up stacking of low-level ones. To address this limitation, we present a novel formulation for robot manipulation that can be accomplished by pick-and-place, a commonly applied high-level robot action, through grasping. We propose a novel visual predictive model that combines an action decomposer and a video prediction network to learn the intrinsic semantic information of high-level actions. Experiments show that our model can accurately predict the object dynamics (i.e., the object movements under robot manipulation) while trained directly on observations of high-level pick-and-place actions. We also demonstrate that, together with a sampling-based planner, our model achieves a higher success rate using high-level actions on a variety of real robot manipulation tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Trampoline Stiffness Estimation by Using Robotic System for Quantitative Evaluation of Jumping Exercises.
- Author
- Park, Gunseok, Choi, Seung-Hwan, Kim, Chang-Hyun, Kim, Min Young, and Lee, Suwoong
- Subjects
- TRAMPOLINES, STANDARD deviations, DIABETIC foot, ROBOTIC exoskeletons, POSITION sensors, FOOT orthoses
- Abstract
Trampolines are recognized as a valuable tool in exercise and rehabilitation due to their unique properties like elasticity, rebound force, low-impact exercise, and enhancement of posture, balance, and cardiopulmonary function. To quantitatively assess the effects of trampoline exercises, it is essential to estimate factors such as stiffness, elements influencing jump dynamics, and user safety. Previous studies assessing trampoline characteristics had limitations in performing repetitive experiments at various locations on the trampoline. Therefore, this research introduces a robotic system equipped with foot-shaped jigs to evaluate trampoline stiffness and quantitatively measure exercise effects. This system, through automated, repetitive movements at various locations on the trampoline, accurately measures the elastic coefficient and vertical forces. The robot maneuvers based on the coordinates of the trampoline, as determined by its torque and position sensors. The force sensor measures data related to the force exerted, along with the vertical force data at X, Y, and Z coordinates. The model's accuracy was evaluated using linear regression based on Hooke's Law, with Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Correlation Coefficient Squared (R-squared) metrics. In the analysis including only the distance between X and the foot-shaped jigs, the average MAE, RMSE, and R-squared values were 17.9702, 21.7226, and 0.9840, respectively. Notably, expanding the model to include distances in X, Y, and between the foot-shaped jigs resulted in a decrease in MAE to 15.7347, RMSE to 18.8226, and an increase in R-squared to 0.9854. The integrated model, including distances in X, Y, and between the foot-shaped jigs, showed improved predictive capability with lower MAE and RMSE and higher R-squared, indicating its effectiveness in more accurately predicting trampoline dynamics, vital in fitness and rehabilitation fields. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Text2Motion: from natural language instructions to feasible plans.
- Author
- Lin, Kevin, Agia, Christopher, Migimatsu, Toki, Pavone, Marco, and Bohg, Jeannette
- Abstract
We propose Text2Motion, a language-based planning framework enabling robots to solve sequential manipulation tasks that require long-horizon reasoning. Given a natural language instruction, our framework constructs both a task- and motion-level plan that is verified to reach inferred symbolic goals. Text2Motion uses feasibility heuristics encoded in Q-functions of a library of skills to guide task planning with Large Language Models. Whereas previous language-based planners only consider the feasibility of individual skills, Text2Motion actively resolves geometric dependencies spanning skill sequences by performing geometric feasibility planning during its search. We evaluate our method on a suite of problems that require long-horizon reasoning, interpretation of abstract goals, and handling of partial affordance perception. Our experiments show that Text2Motion can solve these challenging problems with a success rate of 82%, while prior state-of-the-art language-based planning methods only achieve 13%. Text2Motion thus provides promising generalization characteristics to semantically diverse sequential manipulation tasks with geometric dependencies between skills. Qualitative results are made available at https://sites.google.com/stanford.edu/text2motion. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
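The core mechanism in the Text2Motion abstract is ranking candidate skill sequences by feasibility scores read off per-skill Q-functions, rolled out along the sequence. A toy sketch of that idea; the skill names, Q-functions, and symbolic state here are illustrative assumptions, not the paper's actual skill library:

```python
# Toy sketch: use per-skill Q-values as feasibility scores to rank candidate
# skill sequences (as an LLM planner might propose them). All skills and
# states below are hypothetical.

def q_pick(state):   # feasibility of "pick(cup)" in this state
    return 0.9 if state["cup_reachable"] else 0.1

def q_place(state):  # feasibility of "place(cup, shelf)"
    return 0.8 if state["holding_cup"] else 0.05

def simulate(state, skill):
    """Crude rollout: how a skill changes the symbolic state."""
    state = dict(state)
    if skill == "pick":
        state["holding_cup"] = True
    if skill == "place":
        state["holding_cup"] = False
    return state

Q = {"pick": q_pick, "place": q_place}

def sequence_feasibility(state, skills):
    """Product of Q-values along the rollout, as a joint feasibility score."""
    score = 1.0
    for s in skills:
        score *= Q[s](state)
        state = simulate(state, s)
    return score

# Rank two proposed plans; the geometrically sensible ordering wins.
start = {"cup_reachable": True, "holding_cup": False}
plans = [["place", "pick"], ["pick", "place"]]
best = max(plans, key=lambda p: sequence_feasibility(start, p))
```

Scoring whole rollouts rather than individual skills is what lets dependencies spanning a sequence (here, needing to hold the cup before placing it) surface during search.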
28. Adapted Mapping Estimator in Visual Servoing Control for Model-Free Robotics Manipulator
- Author
-
Tian, Jun, Zhong, Xungao, Luo, Jiaguo, Peng, Xiafu, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Yang, Huayong, editor, Liu, Honghai, editor, Zou, Jun, editor, Yin, Zhouping, editor, Liu, Lianqing, editor, Yang, Geng, editor, Ouyang, Xiaoping, editor, and Wang, Zhiyong, editor
- Published
- 2023
- Full Text
- View/download PDF
29. Artificial Intelligence in Autonomous Systems. A Collection of Projects in Six Problem Classes
- Author
-
Pareigis, Stephan, Tiedemann, Tim, Schönherr, Nils, Mihajlov, Denisz, Denecke, Eric, Tran, Justin, Koch, Sven, Abdelkarim, Awab, Mang, Maximilian, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Unger, Herwig, editor, and Schaible, Marcel, editor
- Published
- 2023
- Full Text
- View/download PDF
30. Hierarchical Knowledge Representation of Complex Tasks Based on Dynamic Motion Primitives
- Author
-
Miao, Shengyi, Zhong, Daming, Miao, Runqing, Sun, Fuchun, Wen, Zhenkun, Huang, Haiming, Zhang, Xiaodong, Wang, Na, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Sun, Fuchun, editor, Cangelosi, Angelo, editor, Zhang, Jianwei, editor, Yu, Yuanlong, editor, Liu, Huaping, editor, and Fang, Bin, editor
- Published
- 2023
- Full Text
- View/download PDF
31. Post-facto Misrecognition Filter Based on Resumable Interruptions for Coping with Real World Uncertainty in the Development of Reactive Robotic Behaviors
- Author
-
de Campos Affonso, Guilherme, Okada, Kei, Inaba, Masayuki, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Petrovic, Ivan, editor, Menegatti, Emanuele, editor, and Marković, Ivan, editor
- Published
- 2023
- Full Text
- View/download PDF
32. Learning high-level robotic manipulation actions with visual predictive model
- Author
-
Anji Ma, Guoyi Chi, Serena Ivaldi, and Lipeng Chen
- Subjects
Robot manipulation ,Visual foresight ,Visual perception ,Deep learning ,Grasp planning ,Electronic computers. Computer science ,QA75.5-76.95 ,Information technology ,T58.5-58.64 - Abstract
Abstract Learning visual predictive models has great potential for real-world robot manipulation. Visual predictive models serve as a model of real-world dynamics to comprehend the interactions between the robot and objects. However, prior works in the literature have focused mainly on low-level elementary robot actions, which typically result in lengthy, inefficient, and highly complex robot manipulation. In contrast, humans usually employ top–down thinking of high-level actions rather than bottom–up stacking of low-level ones. To address this limitation, we present a novel formulation in which robot manipulation is accomplished by pick-and-place, a commonly applied high-level robot action, through grasping. We propose a novel visual predictive model that combines an action decomposer and a video prediction network to learn the intrinsic semantic information of high-level actions. Experiments show that our model can accurately predict the object dynamics (i.e., the object movements under robot manipulation) while trained directly on observations of high-level pick-and-place actions. We also demonstrate that, together with a sampling-based planner, our model achieves a higher success rate using high-level actions on a variety of real robot manipulation tasks.
- Published
- 2023
- Full Text
- View/download PDF
33. A Novel Grasp Detection Algorithm with Multi-Target Semantic Segmentation for a Robot to Manipulate Cluttered Objects
- Author
-
Xungao Zhong, Yijun Chen, Jiaguo Luo, Chaoquan Shi, and Huosheng Hu
- Subjects
robot manipulation ,grasp detection ,semantic segmentation ,cluttered objects ,Mechanical engineering and machinery ,TJ1-1570 - Abstract
Objects in cluttered environments may have similar sizes and shapes, which remains a huge challenge for robot grasping manipulation. Existing segmentation methods, such as Mask R-CNN and Yolo-v8, tend to lose the shape details of objects when dealing with messy scenes, and this loss of detail limits the grasp performance of robots in complex environments. This paper proposes a high-performance grasp detection algorithm with a multi-target semantic segmentation model, which can effectively improve a robot’s grasp success rate in cluttered environments. The algorithm consists of two cascaded modules: Semantic Segmentation and Grasp Detection (SS-GD). The backbone network of the semantic segmentation module is developed using the state-of-the-art Swin Transformer structure; it can extract the detailed features of objects in cluttered environments and enable a robot to understand the position and shape of the candidate object. To make the grasp schema of SS-GD focus on important visual features, a grasp detection module is designed based on the Squeeze-and-Excitation (SE) attention mechanism to predict the corresponding grasp configuration accurately. Grasp detection experiments were conducted on an actual UR5 robot platform to verify the robustness and generalization of the proposed SS-GD method in cluttered environments. A best grasp success rate of 91.7% was achieved for cluttered multi-target workspaces.
- Published
- 2024
- Full Text
- View/download PDF
34. Editorial: Rising stars in field robotics: 2022
- Author
-
Dimitrios Kanoulas, Shehryar Khattak, and Giuseppe Loianno
- Subjects
robotics ,automation ,legged robot ,robot manipulation ,LLM ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Published
- 2024
- Full Text
- View/download PDF
35. Safe Trajectory Path Planning Algorithm Based on RRT* While Maintaining Moderate Margin From Obstacles.
- Author
-
Lim, Subin and Jin, Sangrok
- Abstract
This paper presents Ex-RRT*, a modification of the rapidly-exploring random tree star (RRT*) algorithm that allows the robot to avoid obstacles with a margin. RRT* generates the shortest path to a destination while avoiding obstacles. However, if the robot's embedded trajectory generation algorithm interpolates the waypoints generated by RRT* to produce a motion, collisions may occur with the edges or overhangs of obstacles. The proposed algorithm adds a cost function for the distance from each node to the nearest obstacle, ensuring that the waypoints generated by the path planner maintain an appropriate margin from obstacles. It is designed to provide safer control against collisions when the robot's embedded trajectory generation algorithm operates by interpolating waypoints derived from path planning. Through simulation, we compare the proposed Ex-RRT* and conventional RRT* using performance indices such as total distance traveled and a collision avoidance norm. Experiments are conducted on the task of moving an object inside a box with a commercial robot to validate the proposed algorithm. The proposed algorithm generates paths with improved safety and can be applied to various robotic arms and mobile platforms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
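The Ex-RRT* modification described above amounts to augmenting the usual path-length cost with a penalty on proximity to the nearest obstacle, so that rewiring prefers waypoints with clearance. A minimal sketch of such a cost term; the point-obstacle layout, weight, and safety distance are illustrative assumptions, not the paper's parameters:

```python
import math

# Sketch of an obstacle-clearance cost term in the style of Ex-RRT*.
OBSTACLES = [(5.0, 5.0)]   # point obstacles, for simplicity
W_CLEAR = 2.0              # weight of the clearance penalty (assumed)
D_SAFE = 1.5               # beyond this clearance, no penalty applies (assumed)

def clearance(node):
    """Distance from a node to the nearest obstacle."""
    return min(math.dist(node, ob) for ob in OBSTACLES)

def node_cost(parent_cost, parent, node):
    """RRT* edge cost (Euclidean distance) plus a clearance penalty that
    grows as the node approaches an obstacle."""
    edge = math.dist(parent, node)
    penalty = W_CLEAR * max(0.0, D_SAFE - clearance(node))
    return parent_cost + edge + penalty
```

During the RRT* choose-parent and rewire steps, using `node_cost` in place of pure edge length biases the tree away from obstacle edges and overhangs while still minimizing travel distance elsewhere.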
36. Scalable Lifelong Imitation Learning for Robot Fleets
- Author
-
Hoque, Ryan
- Subjects
Computer science ,Robotics ,Artificial intelligence ,Fleet Learning ,Imitation Learning ,Robot Manipulation - Abstract
Recent breakthroughs in deep learning have revolutionized natural language processing, computer vision, and robotics. Nevertheless, reliable robot autonomy in unstructured environments remains elusive. Without the Internet-scale data available for language and vision, robotics faces a unique chicken-and-egg problem: robot learning requires large datasets from deployment at scale, but robot learning is not yet reliable enough for deployment at scale. We propose a scalable human-in-the-loop learning paradigm as a potential solution to this paradox, and we argue that it is the key ingredient behind the recent growth of large-scale robot deployments in applications such as autonomous driving and e-commerce order fulfillment. We develop novel formalisms, algorithms, benchmarks, systems, and applications for this setting and evaluate its performance in extensive simulation and physical experiments. This dissertation is composed of three complementary parts. In Part I, we propose novel algorithms and systems for interactive imitation learning, in which autonomous robots can actively query human supervisors for assistance when needed. In Part II, we introduce interactive fleet learning, which generalizes interactive imitation learning to multiple robots and multiple human supervisors. In Part III, we introduce and study systems for remote supervision of robot fleets over the Internet, enabling interactive fleet learning at a distance. Throughout this thesis, we design algorithms and systems with an emphasis on scalability in terms of the number of robots, number of humans, amount of human supervision required, dataset size, and distribution of physical locations. We conclude with a discussion of limitations and opportunities for future work.
- Published
- 2024
37. Review on human‐like robot manipulation using dexterous hands
- Author
-
Suhas Kadalagere Sampath, Ning Wang, Hao Wu, and Chenguang Yang
- Subjects
dexterous hand ,learning‐based manipulation ,robot manipulation ,Computer engineering. Computer hardware ,TK7885-7895 ,Computer applications to medicine. Medical informatics ,R858-859.7 - Abstract
Abstract In recent years, human hand‐based robotic hands or dexterous hands have gained attention due to their enormous capabilities of handling soft materials compared to traditional grippers. Back in the earlier days, the development of a hand model close to that of a human was an impossible task but with the advancements made in technology, dexterous hands with three, four or five‐fingered robotic hands have been developed to mimic human hand nature. However, human‐like manipulation of dexterous hands to this date remains a challenge. Thus, this review focuses on (a) the history and motivation behind the development of dexterous hands, (b) a brief overview of the available multi‐fingered hands, and (c) learning‐based methods such as traditional and data‐driven learning methods for manipulating dexterous hands. Additionally, it discusses the challenges faced in terms of the manipulation of multi‐fingered or dexterous hands.
- Published
- 2023
- Full Text
- View/download PDF
38. On-ground validation of orbital GNC: Visual navigation assessment in robotic testbed facility
- Author
-
Muralidharan, Vivek, Makhdoomi, Mohatashem Reyaz, Žinys, Augustinas, Razgus, Bronislovas, Klimavičius, Marius, Olivares-Mendez, Miguel, and Martinez, Carol
- Published
- 2024
- Full Text
- View/download PDF
39. CLUE-AI: A Convolutional Three-Stream Anomaly Identification Framework for Robot Manipulation
- Author
-
Dogan Altan and Sanem Sariel
- Subjects
Cognitive robots ,robot manipulation ,robot safety ,anomaly identification ,robot learning ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Despite the great promise of service robots in everyday tasks, many roboethics issues remain to be addressed before these robots can physically work in human environments. Robot safety is one of the essential concerns for roboethics which is not just a design-time issue. It is also crucial to devise the required onboard monitoring and control strategies to enable robots to be aware of and react to anomalies (i.e., unexpected deviations from intended outcomes) that arise during their operations in the real world. The detection and identification of these anomalies is an essential first step toward fulfilling these requirements. Although several architectures have been proposed for anomaly detection; identification has not yet been thoroughly investigated. This task is challenging since indicators may appear long before anomalies are detected. In this paper, we propose a ConvoLUtional threE-stream Anomaly Identification (CLUE-AI) framework to address this problem. The framework fuses visual, auditory and proprioceptive data streams to identify everyday object manipulation anomalies. A stream of 2D images gathered through an RGB-D camera placed on the head of the robot is processed within a self-attention-enabled visual stage to capture visual anomaly indicators. The auditory modality provided by the microphone placed on the robot’s lower torso is processed within a designed convolutional neural network (CNN) in the auditory stage. Last, the force applied by the gripper and the gripper state is processed within a CNN to obtain proprioceptive features. These outputs are then combined with a late fusion scheme. Our novel three-stream framework design is analyzed on everyday object manipulation tasks with a Baxter humanoid robot in a semi-structured setting. The results indicate that CLUE-AI achieves an f-score of 94%, outperforming the other baselines in classifying anomalies.
- Published
- 2023
- Full Text
- View/download PDF
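The late-fusion scheme in the CLUE-AI abstract combines per-stream feature vectors (visual, auditory, proprioceptive) before a shared classifier head. A structural sketch with numpy, where the stream encoders are stubbed by random features and all dimensions and weights are assumptions:

```python
import numpy as np

# Sketch of a three-stream late-fusion step: per-stream feature vectors are
# produced by separate networks (stubbed here), then concatenated and fed to
# a shared classifier head. Dimensions and weights are illustrative.
rng = np.random.default_rng(1)
D_VIS, D_AUD, D_PROP, N_CLASSES = 128, 64, 16, 5

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Stand-ins for the three stream encoders' outputs on one episode.
f_visual = rng.standard_normal(D_VIS)
f_audio = rng.standard_normal(D_AUD)
f_proprio = rng.standard_normal(D_PROP)

# Late fusion: concatenate stream features, then classify the anomaly type.
fused = np.concatenate([f_visual, f_audio, f_proprio])
W = rng.standard_normal((N_CLASSES, fused.size)) * 0.01
probs = softmax(W @ fused)
```

Fusing after each stream has its own encoder (rather than at the raw-input level) lets each modality keep an architecture suited to its signal, which is the design choice the paper's three-stage pipeline reflects.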
40. ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application
- Author
-
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, and Katsushi Ikeuchi
- Subjects
Task planning ,robot manipulation ,large language models ,ChatGPT ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
This paper introduces a novel method for translating natural-language instructions into executable robot actions using OpenAI’s ChatGPT in a few-shot setting. We propose customizable input prompts for ChatGPT that can easily integrate with robot execution systems or visual recognition programs, adapt to various environments, and create multi-step task plans while mitigating the impact of the token limit imposed on ChatGPT. In our approach, ChatGPT receives both instructions and textual environmental data, and outputs a task plan and an updated environment. These environmental data are reused in subsequent task planning, thus eliminating the extensive record-keeping of prior task plans within the prompts of ChatGPT. Experimental results demonstrated the effectiveness of these prompts across various domestic environments, such as manipulations in front of a shelf, a fridge, and a drawer. The conversational capability of ChatGPT allows users to adjust the output via natural-language feedback. Additionally, a quantitative evaluation using VirtualHome showed that our results are comparable to previous studies. Specifically, 36% of task plans met both executability and correctness, and the rate approached 100% after several rounds of feedback. Our experiments revealed that ChatGPT can reasonably plan tasks and estimate post-operation environments without actual experience in object manipulation. Despite the allure of ChatGPT-based task planning in robotics, a standardized methodology remains elusive, making our work a substantial contribution. These prompts can serve as customizable templates, offering practical resources for the robotics research community. Our prompts and source code are open source and publicly available at https://github.com/microsoft/ChatGPT-Robot-Manipulation-Prompts.
- Published
- 2023
- Full Text
- View/download PDF
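The key prompting pattern in the abstract above is that each call carries the instruction plus the *current* textual environment, and the reply returns both a task plan and an updated environment that seeds the next call, so prior plans never accumulate in the prompt. A structural sketch of that loop; `ask_llm` is a stub standing in for the actual ChatGPT call, and the environment encoding and skill names are illustrative assumptions:

```python
import json

# Sketch of the plan/update-environment prompting loop. The real system calls
# ChatGPT with the authors' customizable templates; here a stub returns a
# canned reply so the data flow is visible.

def ask_llm(prompt):
    # Stub: pretend the model planned to move the cup from shelf to table.
    env = json.loads(prompt.split("ENV:")[1])
    env["cup"] = "table"
    return json.dumps({"plan": ["grasp(cup)", "place(cup, table)"], "env": env})

def plan_step(instruction, environment):
    prompt = f"INSTRUCTION: {instruction}\nENV:{json.dumps(environment)}"
    reply = json.loads(ask_llm(prompt))
    # The updated environment, not the full dialogue history, is what gets
    # fed into the next planning round -- keeping prompts within token limits.
    return reply["plan"], reply["env"]

env = {"cup": "shelf"}
plan, env = plan_step("put the cup on the table", env)
```

Because only the compact environment state is carried forward, the prompt size stays roughly constant across long multi-step tasks instead of growing with every turn.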
41. Viewpoint Selection for the Efficient Teleoperation of a Robot Arm Using Reinforcement Learning
- Author
-
Haoxiang Liu, Ren Komatsu, Shinsuke Nakashima, Hiroyuki Hamada, Nobuto Matsuhira, Hajime Asama, and Atsushi Yamashita
- Subjects
Deep reinforcement learning ,human interface ,robot manipulation ,teleoperation ,viewpoint selection ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
In this study, we developed a novel method to determine the optimal viewpoint from which an operator can realize faster and more accurate robot teleoperation, using reinforcement learning. The reinforcement learning model was trained from scratch using images obtained from several candidate viewpoints, and the viewpoint at which the model achieved the highest rewards was considered the optimal viewpoint. The target robot, task, and environment were modeled in computer simulation, and the candidate viewpoint images were obtained from those simulations. We employed the world model as our reinforcement learning model to maximize rewards in a robot-arm reaching task. The reward function was designed to encourage the robot arm to reach the target position both quickly and accurately. The experimental results validated the choice of the world model as the reinforcement learning model. Moreover, subject experiments were conducted in which subjects operated a robot arm remotely to reach the target position. These experiments produced results that strongly aligned with the performance obtained through computer simulations, indicating that the proposed method is capable of selecting the optimal viewpoint without handcrafted design or subject experiments.
- Published
- 2023
- Full Text
- View/download PDF
42. Neural-Based Detection and Segmentation of Articulated Objects for Robotic Interaction in a Household Environment.
- Author
-
Mula, Arkadiusz, Młodzikowski, Kamil, Gawron, Patryk, and Belter, Dominik
- Subjects
ROBOTICS ,HOUSEHOLDS ,POINT cloud ,ROBOTS - Abstract
Robots operating in household environments should detect and estimate the properties of articulated objects to efficiently perform tasks given by human operators. This paper presents the design and implementation of a system for estimating a point-cloud-based model of a scene, enhanced with information about articulated objects, based on a single RGB-D image. This article describes the neural method used to detect handles, detect and extract fronts, detect rotational joints, and build a point cloud model. It compares various architectures of neural networks to detect handles, the fronts of drawers and cabinets, and estimate rotational joints. In the end, the results are merged to build a 3D model of articulated objects in the environment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. MURM: Utilization of Multi-Views for Goal-Conditioned Reinforcement Learning in Robotic Manipulation.
- Author
-
Jang, Seongwon, Jeong, Hyemi, and Yang, Hyunseok
- Subjects
REINFORCEMENT learning ,SUPERVISED learning ,ROBOTICS ,ROBOT motion ,CARTESIAN coordinates - Abstract
We present a novel framework, multi-view unified reinforcement learning for robotic manipulation (MURM), which efficiently utilizes multiple camera views to train a goal-conditioned policy for a robot to perform complex tasks. The MURM framework consists of three main phases: (i) demo collection from an expert, (ii) representation learning, and (iii) offline reinforcement learning. In the demo collection phase, we design a scripted expert policy that uses privileged information, such as Cartesian coordinates of a target and goal, to solve the tasks. We add noise to the expert policy to provide sufficient interactive information about the environment, as well as suboptimal behavioral trajectories. We designed three tasks in a Pybullet simulation environment, including placing an object in a desired goal position and picking up various objects that are randomly positioned in the environment. In the representation learning phase, we use a vector-quantized variational autoencoder (VQVAE) to learn a more structured latent representation that makes it feasible to train for RL compared to high-dimensional raw images. We train VQVAE models for each distinct camera view and define the best viewpoint settings for training. In the offline reinforcement learning phase, we use the Implicit Q-learning (IQL) algorithm as our baseline and introduce a separated Q-functions method and dropout method that can be implemented in multi-view settings to train the goal-conditioned policy with supervised goal images. We conduct experiments in simulation and show that the single-view baseline fails to solve complex tasks, whereas MURM is successful. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
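The MURM abstract trains goal-conditioned policies against latent representations of goal images. A common way to express that conditioning, shown here as a minimal sketch, is a reward equal to the negative distance between state and goal embeddings; the paper uses trained VQVAE encoders, whereas a random linear projection stands in below, and all dimensions are assumptions:

```python
import numpy as np

# Sketch of goal-conditioning via latent distance. A random projection stands
# in for the trained per-view VQVAE encoder; image size (3072 = 32*32*3) and
# latent size (32) are illustrative.
rng = np.random.default_rng(2)
ENC = rng.standard_normal((32, 3072)) / np.sqrt(3072)  # encoder stub

def encode(image_flat):
    """Map a flattened image to its latent code."""
    return ENC @ image_flat

def goal_reward(state_img, goal_img):
    """Reward is highest (zero) when the state's latent matches the goal's."""
    return -np.linalg.norm(encode(state_img) - encode(goal_img))

goal = rng.random(3072)
```

With one encoder per camera view, as in MURM, a multi-view reward can combine such per-view distances; offline RL (IQL in the paper) then optimizes the policy against these supervised goal images.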
44. Mixed-Reality-Based Contactless Collaborative Robot Manipulation System.
- Author
-
Seokbin Hwang and Sungmin Kim
- Subjects
INDUSTRIAL robots ,MIXED reality ,AUGMENTED reality ,VIRTUAL reality ,HUMAN-robot interaction ,POSE estimation (Computer vision) ,TECHNOLOGICAL progress - Abstract
Extended reality (XR), which encompasses Augmented reality (AR), Virtual reality (VR), and Mixed reality (MR), has been extending its value in various fields. The collaborative robot, also known as a cobot, has become a major technical trend in manufacturing due to its productivity and efficiency. This study proposes an integrated system of MR and a collaborative robot using vision-based camera pose estimation and the Vuforia SDK. It utilizes the hand and gesture recognition of an MR device to ensure intuitive manipulation. The proposed system is expected to be developed into a collision avoidance system using MR techniques in the future, and other types of robots that support vision techniques can likewise be employed in such an intuitive manipulation system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. Self-supervised Interactive Object Segmentation Through a Singulation-and-Grasping Approach
- Author
-
Yu, Houjian, Choi, Changhyun, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF
46. Automated harvesting by a dual-arm fruit harvesting robot
- Author
-
Takeshi Yoshida, Yuki Onishi, Takuya Kawahara, and Takanori Fukao
- Subjects
Harvesting robot ,Robot manipulation ,Deep learning ,Technology ,Mechanical engineering and machinery ,TJ1-1570 ,Control engineering systems. Automatic machinery (General) ,TJ212-225 ,Machine design and drawing ,TJ227-240 ,Technology (General) ,T1-995 ,Industrial engineering. Management engineering ,T55.4-60.8 ,Automation ,T59.5 ,Information technology ,T58.5-58.64 - Abstract
Abstract In this study, we propose a method to automate fruit harvesting with a fruit harvesting robot equipped with robotic arms. Given the future growth of the world population, food shortages are expected to accelerate. Since much of Japan’s agriculture is dependent on imports, it is expected to be greatly affected by this upcoming food shortage. In recent years, the number of agricultural workers in Japan has been decreasing and the workforce is aging. As a result, there is a need to automate and reduce labor in agricultural work using agricultural machinery. In particular, fruit cultivation requires a great deal of manual labor due to the variety of orchard conditions and tree shapes, causing mechanization and automation to lag behind. In this study, a dual-armed fruit harvesting robot was designed and fabricated to reach most of the fruits on a joint V-shaped trellis that was cultivated and adjusted for the robot. To harvest the fruit, the robot uses sensors and computer vision to detect and estimate the position of the fruit and then inserts its end-effectors into the lower part of the fruit. During this process, there is a possibility of collision within the robot itself or with other fruits, depending on the position of the fruit to be harvested. In this study, inverse kinematics and a fast path-planning method using random sampling are used to harvest fruits with the robot arms. This method makes it possible to control the robot arms without interfering with the fruit or the other robot arm by treating them as obstacles. Through experiments, this study showed that these methods can be used to detect pears and apples outdoors and automatically harvest them using the robot arms.
- Published
- 2022
- Full Text
- View/download PDF
47. A Fast 6DOF Visual Selective Grasping System Using Point Clouds.
- Author
-
de Oliveira, Daniel Moura and Conceicao, Andre Gustavo Scolari
- Subjects
POINT cloud ,SINGLE-degree-of-freedom systems ,DEEP learning ,TIME management - Abstract
Visual object grasping can be complex when dealing with different shapes, points of view, and environments since the robotic manipulator must estimate the most feasible place to grasp. This work proposes a new selective grasping system using only point clouds of objects. For the selection of the object of interest, a deep learning network for object classification is proposed, named Point Encoder Convolution (PEC). The network is trained with a dataset obtained in a realistic simulator and uses an autoencoder with 1D convolution. The developed grasping algorithm used in the system uses geometry primitives and lateral curvatures to estimate the best region to grasp without previously knowing the object's point cloud. Experimental results show a success ratio of 94% for a dataset with five classes, and the proposed visual selective grasping system can be executed in around 0.004 s, suitable for tasks that require a low execution time or use low-cost hardware. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Semantic Representation of Robot Manipulation with Knowledge Graph.
- Author
-
Miao, Runqing, Jia, Qingxuan, Sun, Fuchun, Chen, Gang, Huang, Haiming, and Miao, Shengyi
- Subjects
-
KNOWLEDGE graphs ,KNOWLEDGE base ,CONVOLUTIONAL neural networks ,ROBOTS - Abstract
Autonomous indoor service robots are affected by multiple factors when they are directly involved in manipulation tasks in daily life, such as scenes, objects, and actions. It is of self-evident importance to properly parse these factors and interpret intentions according to human cognition and semantics. In this study, the design of a semantic representation framework based on a knowledge graph is presented, including (1) a multi-layer knowledge-representation model, (2) a multi-module knowledge-representation system, and (3) a method to extract manipulation knowledge from multiple sources of information. Moreover, with the aim of generating semantic representations of entities and relations in the knowledge base, a knowledge-graph-embedding method based on graph convolutional neural networks is proposed in order to provide high-precision predictions of factors in manipulation tasks. Through the prediction of action sequences via this embedding method, robots in real-world environments can be effectively guided by the knowledge framework to complete task planning and object-oriented transfer. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Review on human‐like robot manipulation using dexterous hands.
- Author
-
Kadalagere Sampath, Suhas, Wang, Ning, Wu, Hao, and Yang, Chenguang
- Subjects
HUMANOID robots ,ROBOT hands ,MATERIALS handling - Abstract
In recent years, human hand‐based robotic hands or dexterous hands have gained attention due to their enormous capabilities of handling soft materials compared to traditional grippers. Back in the earlier days, the development of a hand model close to that of a human was an impossible task but with the advancements made in technology, dexterous hands with three, four or five‐fingered robotic hands have been developed to mimic human hand nature. However, human‐like manipulation of dexterous hands to this date remains a challenge. Thus, this review focuses on (a) the history and motivation behind the development of dexterous hands, (b) a brief overview of the available multi‐fingered hands, and (c) learning‐based methods such as traditional and data‐driven learning methods for manipulating dexterous hands. Additionally, it discusses the challenges faced in terms of the manipulation of multi‐fingered or dexterous hands. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. Recent Advancements in Agriculture Robots: Benefits and Challenges.
- Author
-
Cheng, Chao, Fu, Jun, Su, Hang, and Ren, Luquan
- Subjects
AGRICULTURAL robots ,AGRICULTURAL technology ,INDUSTRIAL robots ,ARTIFICIAL intelligence ,ROBOTS ,COMPUTER science ,AGRICULTURE - Abstract
In the development of digital agriculture, agricultural robots play a unique role and confer numerous advantages in farming production. Since the invention of the first industrial robots in the 1950s, robots have captured the attention of both research and industry. Thanks to recent advancements in computer science, sensing, and control approaches, agricultural robots have experienced a rapid evolution, relying on various cutting-edge technologies for different application scenarios. Indeed, significant refinements have been achieved by integrating perception, decision-making, control, and execution techniques. However, most agricultural robots still lack integration with artificial intelligence, limiting them to small-scale applications without mass production. Therefore, to help researchers and engineers grasp the current research status of agricultural robots, this review draws on more than 100 pieces of literature, organized according to the category of agricultural robot under discussion. In this context, we bring together the research status and applications of diverse agricultural robots and discuss the benefits and challenges involved in further applications. Finally, directional indications are put forward with respect to research trends relating to agricultural robots. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF