1. Visual and corresponding tactile dataset of flexible material for robots and cross modal perception.
- Author
Xu S, Xu H, Mao F, Ji M, and Yang W
- Abstract
Humans primarily understand the world around them through visual perception and touch, so visual and tactile information play crucial roles in the interaction between humans and their environment. Establishing a correlation between what is seen and what is felt on the same object, particularly on flexible objects (such as textile, leather, or skin) whose quality humans often judge by touch as well as sight, requires a new dataset that includes both visual and tactile information. This motivated us to create a dataset that combines visual images with corresponding tactile data to explore the potential of cross-modal data fusion. We chose leather as our object of focus due to its widespread use in everyday life. The proposed dataset consists of visual images depicting leather of various colours and with defects, alongside corresponding tactile data collected from the same region of the leather; the tactile data comprises components along the X, Y, and Z axes. To demonstrate the relationship between visual and tactile data on the same object region, the tactile data is aligned with the visual data and visualized through interpolation. With computer-vision applications in mind, we manually labelled the defect regions in each visual-tactile sample. The dataset comprises a total of 687 records; each sample includes visual images, image representations of the tactile data (referred to as tactile images for simplicity), and segmentation images highlighting the defect regions, all at the same resolution. (© 2024 The Author(s).)
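The abstract describes aligning scattered tactile readings (X, Y, Z components) with the visual image and rendering them as "tactile images" of the same resolution via interpolation. The following is a minimal sketch of that kind of rasterisation step, assuming scattered readings in pixel coordinates and linear interpolation with SciPy; the function name, variable names, and interpolation choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: interpolate scattered tactile samples (X/Y/Z force
# components) onto the pixel grid of the matching visual image, producing a
# three-channel "tactile image" at the same resolution.
import numpy as np
from scipy.interpolate import griddata


def tactile_to_image(sample_xy, forces_xyz, height, width):
    """Rasterise tactile samples onto an (height, width, 3) grid.

    sample_xy  : (N, 2) array of sampling positions in pixel coordinates.
    forces_xyz : (N, 3) array of force components along the X, Y and Z axes.
    """
    # Pixel grid matching the visual image resolution.
    grid_y, grid_x = np.mgrid[0:height, 0:width]

    channels = []
    for c in range(3):  # one channel per force axis
        # Linear interpolation inside the convex hull of the samples.
        channel = griddata(
            points=sample_xy,
            values=forces_xyz[:, c],
            xi=(grid_x, grid_y),
            method="linear",
        )
        # Fill pixels outside the hull (NaNs) with nearest-neighbour values.
        nearest = griddata(sample_xy, forces_xyz[:, c], (grid_x, grid_y),
                           method="nearest")
        channels.append(np.where(np.isnan(channel), nearest, channel))

    return np.stack(channels, axis=-1)


# Usage with synthetic readings on a 480x640 image grid (illustrative only).
rng = np.random.default_rng(0)
xy = rng.uniform([0, 0], [640, 480], size=(200, 2))
fxyz = rng.normal(size=(200, 3))
tactile_image = tactile_to_image(xy, fxyz, height=480, width=640)
print(tactile_image.shape)  # (480, 640, 3)
```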
- Published
2024