Search for author "Mayol-Cuevas, Walterio": 307 results.

Search Results

1. Re-localization acceleration with Medoid Silhouette Clustering

2. Are you Struggling? Dataset and Baselines for Struggle Determination in Assembly Videos

3. SuperTran: Reference Based Video Transformer for Enhancing Low Bitrate Streams in Real Time

4. AROS: Affordance Recognition with One-Shot Human Stances

5. Rebellion and Disobedience as Useful Tools in Human-Robot Interaction Research -- The Handheld Robotics Case

6. On-Sensor Binarized Fully Convolutional Neural Network with A Pixel Processor Array

7. The Object at Hand: Automated Editing for Mixed Reality Video Guidance from Hand-Object Interactions

8. Egocentric Hand-object Interaction Detection and Application

9. Understanding Egocentric Hand-Object Interactions from Hand Pose Estimation

10. Direct Servo Control from In-Sensor CNN Inference with A Pixel Processor Array

11. Bringing A Robot Simulator to the SCAMP Vision System

12. Filter Distribution Templates in Convolutional Networks for Image Classification Tasks

13. Towards Efficient Convolutional Network Models with Filter Distribution Templates

14. Agile Reactive Navigation for A Non-Holonomic Mobile Robot Using A Pixel Processor Array

15. Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays

16. Action Modifiers: Learning from Adverbs in Instructional Videos

17. Reach Out and Help: Assisted Remote Collaboration through a Handheld Robot

18. A Camera That CNNs: Towards Embedded Neural Networks on Pixel Processor Arrays

19. Egocentric affordance detection with the one-shot geometry-driven Interaction Tensor

20. Rebellion and Obedience: The Effects of Intention Prediction in Cooperative Handheld Robots

21. The Pros and Cons: Rank-aware Temporal Attention for Skill Determination in Long Videos

22. What can I do here? Leveraging Deep 3D saliency and geometry for fast and scalable multiple affordance detection

23. Towards Intention Prediction for Handheld Robots: a Case of Simulated Block Copying

24. I Can See Your Aim: Estimating User Attention From Gaze For Handheld Robot Collaboration

25. Towards CNN map representation and compression for camera relocalisation

26. Geometric Affordances from a Single Example via the Interaction Tensor

27. Who's Better? Who's Best? Pairwise Deep Ranking for Skill Determination

28. Trespassing the Boundaries: Labeling Temporal Bounds for Object Interactions in Egocentric Video

29. Improving Classification by Improving Labelling: Introducing Probabilistic Multi-Label Object Interaction Recognition

30. Towards CNN Map Compression for camera relocalisation

31. Automated capture and delivery of assistive task guidance with an eyewear computer: The GlaciAR system

32. SEMBED: Semantic Embedding of Egocentric Action Videos

33. Towards an objective evaluation of underactuated gripper designs

34. You-Do, I-Learn: Unsupervised Multi-User egocentric Approach Towards Video-Based Guidance

36. Instance-level Object Recognition Using Deep Temporal Coherence

39. Wearable visual robots

40. Towards Autonomous Flight of Low-Cost MAVs by Using a Probabilistic Visual Odometry Approach

41. Multi-User Egocentric Online System for Unsupervised Assistance on Object Usage

47. Visual Mapping and Multi-modal Localisation for Anywhere AR Authoring

49. Egocentric Visual Event Classification with Location-Based Priors
