4 results on "Pablo Vicente-Moñivar"
Search Results
2. The Sixth Visual Object Tracking VOT2018 Challenge Results.
- Author
-
Matej Kristan, Ales Leonardis, Jiri Matas, Michael Felsberg, Roman P. Pflugfelder, Luka Cehovin Zajc, Tomás Vojír, Goutam Bhat, Alan Lukezic, Abdelrahman Eldesokey, Gustavo Fernández, Álvaro García-Martín, Álvaro Iglesias-Arias, A. Aydin Alatan, Abel González-García, Alfredo Petrosino, Alireza Memarmoghadam, Andrea Vedaldi, Andrej Muhic, Anfeng He, Arnold W. M. Smeulders, Asanka G. Perera, Bo Li 0114, Boyu Chen, Changick Kim, Changsheng Xu, Changzhen Xiong, Cheng Tian, Chong Luo, Chong Sun, Cong Hao, Daijin Kim 0001, Deepak Mishra 0002, Deming Chen, Dong Wang 0004, Dongyoon Wee, Efstratios Gavves, Erhan Gundogdu, Erik Velasco-Salido, Fahad Shahbaz Khan, Fan Yang 0035, Fei Zhao, Feng Li 0031, Francesco Battistone, George De Ath, Gorthi R. K. Sai Subrahmanyam, Guilherme Sousa Bastos, Haibin Ling, Hamed Kiani Galoogahi, Hankyeol Lee, Haojie Li, Haojie Zhao, Heng Fan 0001, Honggang Zhang 0002, Horst Possegger, Houqiang Li, Huchuan Lu, Hui Zhi, Huiyun Li, Hyemin Lee, Hyung Jin Chang, Isabela Drummond, Jack Valmadre, Jaime Spencer Martin, Javaan Singh Chahl, Jin Young Choi 0002, Jing Li 0036, Jinqiao Wang, Jinqing Qi, Jinyoung Sung, Joakim Johnander, João F. Henriques, Jongwon Choi, Joost van de Weijer 0001, Jorge Rodríguez Herranz, José M. Martínez 0001, Josef Kittler, Junfei Zhuang, Junyu Gao 0002, Klemen Grm, Lichao Zhang, Lijun Wang, Lingxiao Yang, Litu Rout, Liu Si, Luca Bertinetto, Lutao Chu, Manqiang Che, Mario Edoardo Maresca, Martin Danelljan, Ming-Hsuan Yang 0001, Mohamed H. Abdelpakey, Mohamed S. Shehata, Myunggu Kang, Namhoon Lee, Ning Wang 0020, Ondrej Miksik, Payman Moallem, Pablo Vicente-Moñivar, Pedro Senna, Peixia Li, Philip H. S. Torr, Priya Mariam Raju, Ruihe Qian, Qiang Wang 0051, Qin Zhou, Qing Guo 0005, Rafael Martin Nieto, Rama Krishna Sai Subrahmanyam Gorthi, Ran Tao 0004, Richard Bowden, Richard M. Everson, Runling Wang, Sangdoo Yun, Seokeon Choi, Sergio Vivas, Shuai Bai, Shuangping Huang, Sihang Wu, Simon Hadfield, Siwen Wang, Stuart Golodetz, Ming Tang 0001, Tianyang Xu, Tianzhu Zhang, Tobias Fischer 0001, Vincenzo Santopietro, Vitomir Struc, Wei Wang 0335, Wangmeng Zuo, Wei Feng 0005, Wei Wu 0021, Wei Zou, Weiming Hu, Wengang Zhou, Wenjun Zeng, Xiaofan Zhang 0002, Xiaohe Wu, Xiao-Jun Wu 0001, Xinmei Tian 0001, Yan Li 0014, Yan Lu 0001, Yee Wei Law, Yi Wu 0001, Yiannis Demiris, Yicai Yang, Yifan Jiao, Yuhong Li, Yunhua Zhang, Yuxuan Sun, Zheng Zhang 0022, Zheng Zhu, Zhen-Hua Feng, Zhihui Wang 0001, and Zhiqun He
- Published
- 2018
- Full Text
- View/download PDF
3. The Sixth Visual Object Tracking VOT2018 Challenge Results
- Author
-
Houqiang Li, Huchuan Lu, Siwen Wang, Rafael Martin-Nieto, Efstratios Gavves, Feng Li, Manqiang Che, Erhan Gundogdu, Priya Mariam Raju, Xiaofan Zhang, Roman Pflugfelder, Yan Lu, Xinmei Tian, Martin Danelljan, Deepak Mishra, Guilherme Sousa Bastos, Honggang Zhang, Heng Fan, Mohamed H. Abdelpakey, Zhen-Hua Feng, Wang Wei, Andrej Muhič, Wengang Zhou, Deming Chen, Haojie Zhao, Sihang Wu, Richard M. Everson, Junfei Zhuang, Qin Zhou, Myunggu Kang, Abel Gonzalez-Garcia, Pablo Vicente-Moñivar, Richard Bowden, Horst Possegger, Yicai Yang, Andrea Vedaldi, Jaime Spencer Martin, Jongwon Choi, Yunhua Zhang, Yiannis Demiris, Seokeon Choi, Alireza Memarmoghadam, Wangmeng Zuo, Changzhen Xiong, Yuxuan Sun, Daijin Kim, Yuhong Li, Qing Guo, Tang Ming, Arnold W. M. Smeulders, Hamed Kiani Galoogahi, Zhihui Wang, Asanka G. Perera, Fahad Shahbaz Khan, George De Ath, Shuangping Huang, Qian Ruihe, Philip H. S. Torr, Haojie Li, Zhiqun He, João F. Henriques, Namhoon Lee, Chong Sun, Jorge Rodríguez Herranz, Vincenzo Santopietro, Lijun Wang, Qiang Wang, Gustavo Fernandez, Shuai Bai, Weiming Hu, Ondrej Miksik, Dongyoon Wee, Xiaohe Wu, Goutam Bhat, Yifan Jiao, A. Aydin Alatan, Alfredo Petrosino, Ran Tao, Tianyang Xu, Sergio Vivas, Cheng Tian, Yee Wei Law, Wei Feng, José M. Martínez, Luca Bertinetto, Runling Wang, Liu Si, Tianzhu Zhang, Tomas Vojir, Mario Edoardo Maresca, Lichao Zhang, Changick Kim, Luka Čehovin Zajc, Lingxiao Yang, Yan Li, Javaan Chahl, Simon Hadfield, Chong Luo, Jiří Matas, Ales Leonardis, Jack Valmadre, Pedro Senna, Josef Kittler, Klemen Grm, Cong Hao, Haibin Ling, Isabela Drummond, Zheng Zhang, Fan Yang, Joakim Johnander, Tobias Fischer, Gorthi R. K. Sai Subrahmanyam, Jinyoung Sung, Jin-Young Choi, Bo Li, Hui Zhi, Álvaro Iglesias-Arias, Joost van de Weijer, Hyung Jin Chang, Jinqing Qi, Michael Felsberg, Francesco Battistone, Sangdoo Yun, Wei Zou, Huiyun Li, Boyu Chen, Zheng Zhu, Jing Li, Abdelrahman Eldesokey, Litu Rout, Matej Kristan, Mohamed Shehata, Fei Zhao, Changsheng Xu, Alan Lukežič, Yi Wu, Wenjun Zeng, Lutao Chu, Vitomir Struc, Stuart Golodetz, Alvaro Garcia-Martin, Dong Wang, Junyu Gao, Hankyeol Lee, Hyemin Lee, Ning Wang, Wei Wu, Anfeng He, Xiaojun Wu, Rama Krishna Sai Subrahmanyam Gorthi, Payman Moallem, Peixia Li, Jinqiao Wang, Erik Velasco-Salido, and Ming-Hsuan Yang
- Subjects
Source code, Computer science, Video tracking, dataset, tracker benchmarking activity, Computer vision, Artificial intelligence, Artificial Intelligence & Image Processing, Computer Vision and Robotics (Autonomous Systems)
- Abstract
The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net). Funding agencies: Slovenian Research Agency [P2-0214, P2-0094, J2-8175]; Czech Science Foundation [GACR P103/12/G084]; WASP; VR (EMC2); SSF (SymbiCloud); SNIC; AIT Strategic Research Programme 2017 Visua
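The VOT accuracy measure referenced by this methodology is based on region overlap (intersection over union) between predicted and ground-truth bounding boxes while the tracker is on target. A minimal sketch of that measure for axis-aligned boxes follows; it is illustrative only and not the VOT toolkit implementation (which also handles rotated regions, failure re-initialization, and burn-in frames):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis (zero if the boxes are disjoint).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def accuracy(pred_boxes, gt_boxes):
    """Mean per-frame overlap over frames where the tracker reported a box.

    A prediction of None marks a frame where the tracker reported no target
    (e.g. after a failure); such frames are skipped in this sketch.
    """
    overlaps = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes) if p is not None]
    return sum(overlaps) / len(overlaps) if overlaps else 0.0
```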
- Published
- 2019
4. Towards a Professional Gesture Recognition with RGB-D from Smartphone
- Author
-
Sotiris Manitsaris, Alina Glushkova, and Pablo Vicente Moñivar
- Subjects
Gesture recognition, Computer science, Deep learning, Pose estimation, Smartphone, Mobile phone, Depth map, RGB color model, Computer vision, Artificial intelligence, Hidden Markov model, Pose, Gesture
- Abstract
The goal of this work is to build the basis for a smartphone application that provides functionalities for recording human motion data, training machine learning algorithms, and recognizing professional gestures. First, we take advantage of new mobile-phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose estimation algorithm based on deep learning extracts the 2D human skeleton and derives the third dimension from the depth map. Finally, we use a gesture recognition engine based on K-means and Hidden Markov Models (HMMs). The performance of the machine learning algorithm has been tested on professional gestures using silk-weaving and TV-assembly datasets.
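The recognition stage this abstract describes, K-means vector quantization of pose features feeding per-gesture discrete HMMs, can be sketched as below. All function names, parameters, and model shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain K-means on feature vectors X (n, d); returns (k, d) centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def quantize(X, centroids):
    """Map each pose feature vector to the index of its nearest centroid."""
    return np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete symbol
    sequence under an HMM (pi: initial probs, A: transitions, B: emissions)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

def classify(obs, models):
    """Pick the gesture whose HMM assigns the observation sequence the
    highest likelihood; models maps gesture name -> (pi, A, B)."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))
```

In use, skeleton features from each frame would be quantized into a symbol sequence, and one HMM per professional gesture would be trained (e.g. with Baum-Welch, omitted here) before `classify` scores a new recording.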
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library