71 results for "Gorthi R. K. Sai Subrahmanyam"
Search Results
2. An eigenvector approach for obtaining scale and orientation invariant classification in convolutional neural networks
- Author
-
Swetha Velluva Chathoth, Asish Kumar Mishra, Deepak Mishra, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2022
- Full Text
- View/download PDF
3. Scale and Rotation Corrected CNNs (SRC-CNNs) for Scale and Rotation Invariant Character Recognition: SRC-CNN for Scale and Rotation Invariant Character Recognition.
- Author
-
V. C. Swetha, Deepak Mishra 0002, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2018
- Full Text
- View/download PDF
4. The Sixth Visual Object Tracking VOT2018 Challenge Results.
- Author
-
Matej Kristan, Ales Leonardis, Jiri Matas, Michael Felsberg, Roman P. Pflugfelder, Luka Cehovin Zajc, Tomás Vojír, Goutam Bhat, Alan Lukezic, Abdelrahman Eldesokey, Gustavo Fernández, Álvaro García-Martín, Álvaro Iglesias-Arias, A. Aydin Alatan, Abel González-García, Alfredo Petrosino, Alireza Memarmoghadam, Andrea Vedaldi, Andrej Muhic, Anfeng He, Arnold W. M. Smeulders, Asanka G. Perera, Bo Li 0114, Boyu Chen, Changick Kim, Changsheng Xu, Changzhen Xiong, Cheng Tian, Chong Luo, Chong Sun, Cong Hao, Daijin Kim 0001, Deepak Mishra 0002, Deming Chen, Dong Wang 0004, Dongyoon Wee, Efstratios Gavves, Erhan Gundogdu, Erik Velasco-Salido, Fahad Shahbaz Khan, Fan Yang 0035, Fei Zhao, Feng Li 0031, Francesco Battistone, George De Ath, Gorthi R. K. Sai Subrahmanyam, Guilherme Sousa Bastos, Haibin Ling, Hamed Kiani Galoogahi, Hankyeol Lee, Haojie Li, Haojie Zhao, Heng Fan 0001, Honggang Zhang 0002, Horst Possegger, Houqiang Li, Huchuan Lu, Hui Zhi, Huiyun Li, Hyemin Lee, Hyung Jin Chang, Isabela Drummond, Jack Valmadre, Jaime Spencer Martin, Javaan Singh Chahl, Jin Young Choi 0002, Jing Li 0036, Jinqiao Wang, Jinqing Qi, Jinyoung Sung, Joakim Johnander, João F. Henriques, Jongwon Choi, Joost van de Weijer 0001, Jorge Rodríguez Herranz, José M. Martínez 0001, Josef Kittler, Junfei Zhuang, Junyu Gao 0002, Klemen Grm, Lichao Zhang, Lijun Wang, Lingxiao Yang, Litu Rout, Liu Si, Luca Bertinetto, Lutao Chu, Manqiang Che, Mario Edoardo Maresca, Martin Danelljan, Ming-Hsuan Yang 0001, Mohamed H. Abdelpakey, Mohamed S. Shehata, Myunggu Kang, Namhoon Lee, Ning Wang 0020, Ondrej Miksik, Payman Moallem, Pablo Vicente-Moñivar, Pedro Senna, Peixia Li, Philip H. S. Torr, Priya Mariam Raju, Ruihe Qian, Qiang Wang 0051, Qin Zhou, Qing Guo 0005, Rafael Martin Nieto, Rama Krishna Sai Subrahmanyam Gorthi, Ran Tao 0004, Richard Bowden, Richard M. Everson, Runling Wang, Sangdoo Yun, Seokeon Choi, Sergio Vivas, Shuai Bai, Shuangping Huang, Sihang Wu, Simon Hadfield, Siwen Wang, Stuart Golodetz, Ming Tang 0001, Tianyang Xu, Tianzhu Zhang, Tobias Fischer 0001, Vincenzo Santopietro, Vitomir Struc, Wei Wang 0335, Wangmeng Zuo, Wei Feng 0005, Wei Wu 0021, Wei Zou, Weiming Hu, Wengang Zhou, Wenjun Zeng, Xiaofan Zhang 0002, Xiaohe Wu, Xiao-Jun Wu 0001, Xinmei Tian 0001, Yan Li 0014, Yan Lu 0001, Yee Wei Law, Yi Wu 0001, Yiannis Demiris, Yicai Yang, Yifan Jiao, Yuhong Li, Yunhua Zhang, Yuxuan Sun, Zheng Zhang 0022, Zheng Zhu, Zhen-Hua Feng, Zhihui Wang 0001, and Zhiqun He
- Published
- 2018
- Full Text
- View/download PDF
5. Bayesian Approach for Landslide Identification from High-Resolution Satellite Images.
- Author
-
Pilli Madalasa, Gorthi R. K. Sai Subrahmanyam, Tapas Ranjan Martha, Rama Rao Nidamanuri, and Deepak Mishra 0002
- Published
- 2017
- Full Text
- View/download PDF
6. Classification of Breast Masses Using Convolutional Neural Network as Feature Extractor and Classifier.
- Author
-
Pinaki Ranjan Sarkar, Deepak Mishra 0002, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2017
- Full Text
- View/download PDF
7. Rotation Invariant Digit Recognition Using Convolutional Neural Network.
- Author
-
Ayushi Jain, Gorthi R. K. Sai Subrahmanyam, and Deepak Mishra 0002
- Published
- 2017
- Full Text
- View/download PDF
8. Learning-Based Fuzzy Fusion of Multiple Classifiers for Object-Oriented Classification of High Resolution Images.
- Author
-
Rajeswari Balasubramaniam, Gorthi R. K. Sai Subrahmanyam, and Rama Rao Nidamanuri
- Published
- 2017
- Full Text
- View/download PDF
9. Stochastic Assimilation Technique for Cloud Motion Analysis.
- Author
-
Kalamraju Mounika, J. Sheeba Rani, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2017
- Full Text
- View/download PDF
10. The Visual Object Tracking VOT2017 Challenge Results.
- Author
-
Matej Kristan, Ales Leonardis, Jiri Matas, Michael Felsberg, Roman P. Pflugfelder, Luka Cehovin Zajc, Tomas Vojir, Gustav Häger, Alan Lukezic, Abdelrahman Eldesokey, Gustavo Fernández, álvaro García-Martín, Andrej Muhic, Alfredo Petrosino, Alireza Memarmoghadam, Andrea Vedaldi, Antoine Manzanera, Antoine Tran, A. Aydin Alatan, Bogdan Mocanu, Boyu Chen, Chang Huang, Changsheng Xu, Chong Sun, Dalong Du, David Zhang 0001, Dawei Du, Deepak Mishra 0002, Erhan Gundogdu, Erik Velasco-Salido, Fahad Shahbaz Khan, Francesco Battistone, Gorthi R. K. Sai Subrahmanyam, Goutam Bhat, Guan Huang, Guilherme Sousa Bastos, Guna Seetharaman, Hongliang Zhang, Houqiang Li, Huchuan Lu, Isabela Drummond, Jack Valmadre, Jae-chan Jeong, Jaeil Cho, Jae-Yeong Lee, Jana Noskova, Jianke Zhu, Jin Gao, Jingyu Liu 0004, Ji-Wan Kim, João F. Henriques, José M. Martínez 0001, Junfei Zhuang, Junliang Xing, Junyu Gao 0002, Kai Chen 0023, Kannappan Palaniappan, Karel Lebeda, Ke Gao, Kris M. Kitani, Lei Zhang 0006, Lijun Wang, Lingxiao Yang, Longyin Wen, Luca Bertinetto, Mahdieh Poostchi, Martin Danelljan, Matthias Mueller 0001, Mengdan Zhang, Ming-Hsuan Yang 0001, Nianhao Xie, Ning Wang 0020, Ondrej Miksik, Payman Moallem, Pallavi M. Venugopal, Pedro Senna, Philip H. S. Torr, Qiang Wang 0051, Qifeng Yu, Qingming Huang, Rafael Martin Nieto, Richard Bowden, Risheng Liu, Ruxandra Tapu, Simon Hadfield, Siwei Lyu, Stuart Golodetz, Sunglok Choi, Tianzhu Zhang, Titus B. Zaharia, Vincenzo Santopietro, Wei Zou, Weiming Hu, Wenbing Tao, Wenbo Li 0001, Wengang Zhou, Xianguo Yu, Xiao Bian, Yang Li 0041, Yifan Xing, Yingruo Fan, Zheng Zhu, Zhipeng Zhang, and Zhiqun He
- Published
- 2017
- Full Text
- View/download PDF
11. Computationally efficient deep tracker: Guided MDNet.
- Author
-
Pallavi M. Venugopal, Deepak Mishra 0002, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2017
- Full Text
- View/download PDF
12. Improved Transfer Learning through Shallow Network Embedding for Classification of Leukemia Cells.
- Author
-
Kaushik S. Kalmady, Adithya S. Kamath, G. Gopakumar 0003, Gorthi R. K. Sai Subrahmanyam, and Gorthi Sai Siva
- Published
- 2017
- Full Text
- View/download PDF
13. Stacked Features Based CNN for Rotation Invariant Digit Classification.
- Author
-
Ayushi Jain, Gorthi R. K. Sai Subrahmanyam, and Deepak Mishra 0002
- Published
- 2017
- Full Text
- View/download PDF
14. Applicability of Self-Organizing Maps in Content-Based Image Classification.
- Author
-
Kumar Rohit, Gorthi R. K. Sai Subrahmanyam, and Deepak Mishra 0002
- Published
- 2016
- Full Text
- View/download PDF
15. Automatic detection of Malaria infected RBCs from a focus stack of bright field microscope slide images.
- Author
-
G. Gopakumar 0002, M. Swetha, Gorthi Sai Siva, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2016
- Full Text
- View/download PDF
16. A differential excitation based rotational invariance for convolutional neural networks.
- Author
-
Haribabu Kandi, Deepak Mishra 0002, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2016
- Full Text
- View/download PDF
17. Application of transfer learning in RGB-D object recognition.
- Author
-
Abhishek Kumar, S. Nithin Shrivatsav, Gorthi R. K. Sai Subrahmanyam, and Deepak Mishra 0002
- Published
- 2016
- Full Text
- View/download PDF
18. The Visual Object Tracking VOT2016 Challenge Results.
- Author
-
Matej Kristan, Ales Leonardis, Jiri Matas, Michael Felsberg, Roman P. Pflugfelder, Luka Cehovin, Tomás Vojír, Gustav Häger, Alan Lukezic, Gustavo Fernández, Abhinav Gupta 0001, Alfredo Petrosino, Alireza Memarmoghadam, Álvaro García-Martín, Andrés Solís Montero, Andrea Vedaldi, Andreas Robinson, Andy Jinhua Ma, Anton Varfolomieiev, A. Aydin Alatan, Aykut Erdem, Bernard Ghanem, Bin Liu 0014, Bohyung Han, Brais Martínez, Chang-Ming Chang, Changsheng Xu, Chong Sun, Daijin Kim 0001, Dapeng Chen, Dawei Du, Deepak Mishra 0002, Dit-Yan Yeung, Erhan Gundogdu, Erkut Erdem, Fahad Shahbaz Khan, Fatih Porikli, Fei Zhao, Filiz Bunyak, Francesco Battistone, Gao Zhu, Giorgio Roffo, Gorthi R. K. Sai Subrahmanyam, Guilherme Sousa Bastos, Guna Seetharaman, Henry Medeiros 0001, Hongdong Li, Honggang Qi, Horst Bischof, Horst Possegger, Huchuan Lu, Hyemin Lee, Hyeonseob Nam, Hyung Jin Chang, Isabela Drummond, Jack Valmadre, Jae-chan Jeong, Jaeil Cho, Jae-Yeong Lee, Jianke Zhu, Jiayi Feng, Jin Gao, Jin Young Choi 0002, Jingjing Xiao, Ji-Wan Kim, Jiyeoup Jeong, João F. Henriques, Jochen Lang 0001, Jongwon Choi, José M. Martínez 0001, Junliang Xing, Junyu Gao 0002, Kannappan Palaniappan, Karel Lebeda, Ke Gao, Krystian Mikolajczyk, Lei Qin, Lijun Wang, Longyin Wen, Luca Bertinetto, Madan Kumar Rapuru, Mahdieh Poostchi, Mario Edoardo Maresca, Martin Danelljan, Matthias Mueller 0001, Mengdan Zhang, Michael Arens, Michel F. Valstar, Ming Tang 0001, Mooyeol Baek, Muhammad Haris Khan, Naiyan Wang, Nana Fan, Noor Al-Shakarji, Ondrej Miksik, Osman Akin, Payman Moallem, Pedro Senna, Philip H. S. Torr, Pong C. Yuen, Qingming Huang, Rafael Martin Nieto, Rengarajan Pelapur, Richard Bowden, Robert Laganière, Rustam Stolkin, Ryan Walsh, Sebastian Bernd Krah, Shengkun Li, Shengping Zhang, Shizeng Yao, Simon Hadfield, Simone Melzi, Siwei Lyu, Siyi Li, Stefan Becker, Stuart Golodetz, Sumithra Kakanuru, Sunglok Choi, Tao Hu, Thomas Mauthner, Tianzhu Zhang, Tony P. Pridmore, Vincenzo Santopietro, Weiming Hu, Wenbo Li 0001, Wolfgang Hübner 0001, Xiangyuan Lan, Xiaomeng Wang, Xin Li 0034, Yang Li 0041, Yiannis Demiris, Yifan Wang 0004, Yuankai Qi, Zejian Yuan, Zexiong Cai, Zhan Xu, Zhenyu He 0001, and Zhizhen Chi
- Published
- 2016
- Full Text
- View/download PDF
19. Compressive Sensing framework for simultaneous compression and despeckling of SAR images.
- Author
-
Preetha C, Gorthi R. K. Sai Subrahmanyam, and Deepak Mishra 0002
- Published
- 2015
- Full Text
- View/download PDF
20. Integrated algorithm for different tracking challenges.
- Author
-
Jay Shah, Gorthi R. K. Sai Subrahmanyam, and Deepak Mishra 0002
- Published
- 2015
- Full Text
- View/download PDF
21. Phase unwrapping with Kalman filter based denoising in digital holographic interferometry.
- Author
-
P. Ram Sukumar, Rahul G. Waghmare, Rakesh Kumar Singh, Gorthi R. K. Sai Subrahmanyam, and Deepak Mishra 0002
- Published
- 2015
- Full Text
- View/download PDF
22. Enhancing Face Recognition Under Unconstrained Background Clutter Using Color Based Segmentation.
- Author
-
Ankush Chatterjee, Deepak Mishra 0002, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2015
- Full Text
- View/download PDF
23. Morphology based Classification of Leukemia Cell lines: K562 and MOLT in a Microfluidics based Imaging Flow Cytometer.
- Author
-
G. Gopakumar 0002, Gorthi R. K. Sai Subrahmanyam, and Gorthi Sai Siva
- Published
- 2014
- Full Text
- View/download PDF
24. Edge-preserving unscented Kalman filter for speckle reduction.
- Author
-
Gorthi R. K. Sai Subrahmanyam, A. N. Rajagopalan 0001, Rangarajan Aravind, and Gerhard Rigoll
- Published
- 2008
- Full Text
- View/download PDF
25. Unscented Kalman Filter for Image Estimation in Film-Grain Noise.
- Author
-
Gorthi R. K. Sai Subrahmanyam, A. N. Rajagopalan 0001, and Rangarajan Aravind
- Published
- 2007
- Full Text
- View/download PDF
26. A New Extension of Kalman Filter to Non-Gaussian Priors.
- Author
-
Gorthi R. K. Sai Subrahmanyam, A. N. Rajagopalan 0001, and Rangarajan Aravind
- Published
- 2006
- Full Text
- View/download PDF
27. An eigenvector approach for obtaining scale and orientation invariant classification in convolutional neural networks
- Author
-
Swetha Velluva Chathoth, Asish Kumar Mishra, Deepak Mishra, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2021
- Full Text
- View/download PDF
28. Incorporating rotational invariance in convolutional neural network architecture
- Author
-
Ayushi Jain, Gorthi R. K. Sai Subrahmanyam, Haribabu Kandi, Swetha Velluva Chathoth, and Deepak Mishra
- Subjects
Computer science, Deep learning, Invariant (physics), Convolutional neural network, Nonlinear system, Artificial intelligence, Rotational invariance, Image processing, Computer vision and pattern recognition, Architecture, Scaling, Algorithm, Free parameter - Abstract
Convolutional neural networks (CNNs) are deep learning architectures capable of learning a complex set of nonlinear features that effectively represent the structure of the input to the network. Existing CNN architectures are invariant to small distortions, translations, and scaling, but are sensitive to rotation. In this paper, unlike approaches that include training samples at different orientations, we propose a new architecture in which the addition of a rotation-invariant map yields a several-fold improvement in the network's rotational invariance. We also propose an improved architecture in which rotational invariance is achieved by rotationally varying the convolutional maps. We show that the proposed methods give better invariance to rotation than conventional training of a CNN architecture (where the network is trained without considering different orientations of the training samples). The methods achieve rotation-independent classification by introducing a few modifications of conventional CNNs, but add no trainable parameters to the network, keeping the number of free parameters/weights constant. We demonstrate the performance of the proposed rotation-invariant architectures on handwritten digit and texture datasets.
- Published
- 2018
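The rotation-invariant-map idea in the abstract above can be illustrated with a short sketch. This is not the paper's architecture: the gradient-magnitude map merely stands in for a rotation-invariant input channel (the paper's own map differs), and all function names are illustrative.

```python
import numpy as np

def rotation_invariant_map(img):
    # Gradient magnitude: a per-pixel quantity that is unchanged (up to
    # resampling) when the image rotates, unlike the raw pixel grid.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def stack_invariant_channel(img):
    # Feed the CNN a 2-channel input: raw image + rotation-invariant map.
    img = img.astype(float)
    return np.stack([img, rotation_invariant_map(img)], axis=-1)
```

For exact 90-degree rotations the map commutes with the rotation, which is the property the extra channel is meant to contribute; for arbitrary angles it holds only approximately because of resampling.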
29. Guided MDNet tracker with guided samples
- Author
-
Pallavi Venugopal Minimol, Deepak Mishra, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2021
- Full Text
- View/download PDF
30. Efficient directionality-driven dictionary learning for compressive sensing magnetic resonance imaging reconstruction
- Author
-
Anupama Arun, Thomas James Thomas, J. Sheeba Rani, and Gorthi R. K. Sai Subrahmanyam
- Published
- 2020
- Full Text
- View/download PDF
31. Satellite Image Resolution Enhancement Using Nonsubsampled Contourlet Transform and Clustering on Subbands
- Author
-
S. S. Aneesh Raj, Gorthi R. K. Sai Subrahmanyam, and Madhu S. Nair
- Subjects
Geography, planning and development, Resolution, Contourlet, Band-pass filter, Satellite image, Earth and planetary sciences, Computer vision, Satellite, Artificial intelligence, Cluster analysis, Interpolation - Abstract
In this paper, two new schemes for resolution enhancement (RE) of satellite images are proposed, based on the Nonsubsampled Contourlet Transform (NSCT). The first is based on interpolating the band-pass images obtained by applying the NSCT to the input low-resolution image. Similar to Demirel and Anbarjafari (IEEE Trans Geosci Remote Sens 49(6):1997–2004, 2011), as an intermediate step, the difference between the approximation band and the input low-resolution image is added to all band-pass directional subbands to obtain a sharper image. This method is simple and computationally efficient but does not recover edges sharply, owing to the interpolation of the band-pass images. To overcome this, a second method is proposed to obtain the difference layer, in which a dictionary is built from patches extracted from the subbands of high-resolution training images; similar patches from the dictionary are then clustered together. This method gives a much sharper image than the first. Subjective and objective analyses of the proposed methods reveal their superiority over conventional and other state-of-the-art RE methods.
- Published
- 2017
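The flow of the first scheme can be sketched very loosely. This is emphatically not the paper's NSCT pipeline: a one-level Haar split stands in for the NSCT band-pass decomposition, nearest-neighbour repetition stands in for interpolation, and all function names are invented for illustration only.

```python
import numpy as np

def haar2(img):
    # One-level 2-D Haar-like split into an approximation band and three
    # detail subbands (a crude stand-in for an NSCT decomposition).
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4,              # approximation
            [(a + b - c - d) / 4,             # horizontal detail
             (a - b + c - d) / 4,             # vertical detail
             (a - b - c + d) / 4])            # diagonal detail

def upsample2(band):
    # Nearest-neighbour 2x upsampling (stand-in for bicubic interpolation).
    return band.repeat(2, axis=0).repeat(2, axis=1)

def enhance2x(lr):
    # Difference layer: approximation band minus the input LR image,
    # added to every detail subband before upsampling and recombination.
    approx, details = haar2(lr)
    diff = (upsample2(approx) - lr)[0::2, 0::2]   # match subband size
    hr = upsample2(lr).astype(float)
    for band in details:
        hr += upsample2(upsample2(band + diff))
    return hr
```

The Haar split above is exactly invertible (the four subbands sum back to the decimated image), which makes the role of the added difference layer easy to inspect.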
32. Towards Automated Breast Mass Classification using Deep Learning Framework
- Author
-
Priya Prabhakar, Pinaki Ranjan Sarkar, Gorthi R. K. Sai Subrahmanyam, and Deepak Mishra
- Subjects
Receiver operating characteristic, Computer science, Deep learning, Wavelet transform, Pattern recognition, Computer-aided detection (CAD), Medical imaging, Identification, Wavelet, Sensitivity, Mass classification - Abstract
Because of their high variability in shape, structure, and occurrence, non-palpable breast masses are often missed even by experienced radiologists. To aid more accurate identification, computer-aided detection (CAD) systems are widely used. Most existing CAD systems rely on complex handcrafted features, which makes further performance improvement difficult. Deep, high-level features extracted with deep learning models have already proven superior to low- and middle-level handcrafted features. In this paper, we propose an automated deep CAD system that performs both mass detection and classification. Our framework is composed of three cascaded stages: suspicious region identification, mass/no-mass detection, and mass classification. To detect suspicious regions in a breast mammogram, we use a deep hierarchical mass prediction network. We then decide whether the predicted lesions contain abnormal masses using high-level CNN features computed from augmented intensity and wavelet features. Afterwards, mass classification is carried out, only for abnormal cases, with the same CNN structure. The whole process of breast mass classification, including the extraction of wavelet features, is automated in this work. We tested the proposed model on the widely used DDSM and INbreast databases, on which the mass prediction network achieved sensitivities of 0.94 and 0.96, followed by mass/no-mass detection with areas under the receiver operating characteristic (ROC) curve of 0.9976 and 0.9922, respectively. Finally, the classification network obtained an accuracy of 98.05% on DDSM and 98.14% on INbreast, which we believe is the best reported so far.
- Published
- 2019
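The three-stage cascade described in the abstract above can be sketched as a simple orchestration. The stage functions here are toy stand-ins (thresholded blobs, mean/std rules) for the paper's deep networks; every name and rule is illustrative, not the authors' implementation.

```python
import numpy as np

def predict_suspicious_regions(mammogram):
    # Stage 1 stand-in: propose candidate (row, col, half-size) regions;
    # here, simply the first few bright pixels above a threshold.
    ys, xs = np.where(mammogram > 0.8)
    return [(y, x, 16) for y, x in zip(ys[:3], xs[:3])]

def is_mass(patch):
    # Stage 2 stand-in for the mass/no-mass CNN.
    return patch.mean() > 0.5

def classify_mass(patch):
    # Stage 3 stand-in for the benign/malignant CNN classifier.
    return "malignant" if patch.std() > 0.2 else "benign"

def cad_pipeline(mammogram):
    # Cascade: region proposal -> mass/no-mass filter -> benign/malignant label.
    results = []
    for y, x, s in predict_suspicious_regions(mammogram):
        patch = mammogram[max(0, y - s):y + s, max(0, x - s):x + s]
        if is_mass(patch):
            results.append(((y, x), classify_mass(patch)))
    return results
```

The point of the cascade is that each later, more expensive stage only ever sees candidates that survived the previous one.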
33. Deep Learning Applications to Cytopathology: A Study on the Detection of Malaria and on the Classification of Leukaemia Cell-Lines
- Author
-
Gorthi R. K. Sai Subrahmanyam and G. Gopakumar
- Subjects
Artificial neural network, Computer science, Deep learning, Leukaemia cell, Machine learning, Convolutional neural network, Cytopathology, Artificial intelligence, Transfer learning, Classifier, Malaria - Abstract
This chapter discusses applications of deep learning networks in cytopathology, specifically the detection of malaria from slide images of blood smears and the classification of leukaemia cell-lines. The chapter starts with the relevant theory of traditional (deep) multi-layer neural networks with back-propagation, followed by the motivation, theory, and training of Convolutional Neural Networks (CNNs), the currently dominant deep-learning classifier. The detection of malaria from blood smear slide images using a CNN is addressed, followed by a discussion of the transfer learning capability of CNNs, taking the classification of the leukaemia cell-lines K562, MOLT, and HL60 as an example. The transfer learning capability of CNNs is of particular interest when only a very limited number of training samples is available for building a stand-alone deep CNN classifier.
- Published
- 2019
34. The Sixth Visual Object Tracking VOT2018 Challenge Results
- Author
-
Houqiang Li, Huchuan Lu, Siwen Wang, Rafael Martin-Nieto, Efstratios Gavves, Feng Li, Manqiang Che, Erhan Gundogdu, Priya Mariam Raju, Xiaofan Zhang, Roman Pflugfelder, Yan Lu, Xinmei Tian, Martin Danelljan, Deepak Mishra, Guilherme Sousa Bastos, Honggang Zhang, Heng Fan, Mohamed H. Abdelpakey, Zhen-Hua Feng, Wang Wei, Andrej Muhič, Wengang Zhou, Deming Chen, Haojie Zhao, Sihang Wu, Richard M. Everson, Junfei Zhuang, Qin Zhou, Myunggu Kang, Abel Gonzalez-Garcia, Pablo Vicente-Moñivar, Richard Bowden, Horst Possegger, Yicai Yang, Andrea Vedaldi, Jaime Spencer Martin, Jongwon Choi, Yunhua Zhang, Yiannis Demiris, Seokeon Choi, Alireza Memarmoghadam, Wangmeng Zuo, Changzhen Xiong, Yuxuan Sun, Daijin Kim, Yuhong Li, Qing Guo, Tang Ming, Arnold W. M. Smeulders, Hamed Kiani Galoogahi, Zhihui Wang, Asanka G. Perera, Fahad Shahbaz Khan, George De Ath, Shuangping Huang, Qian Ruihe, Philip H. S. Torr, Haojie Li, Zhiqun He, João F. Henriques, Namhoon Lee, Chong Sun, Jorge Rodríguez Herranz, Vincenzo Santopietro, Lijun Wang, Qiang Wang, Gustavo Fernandez, Shuai Bai, Weiming Hu, Ondrej Miksik, Dongyoon Wee, Xiaohe Wu, Goutam Bhat, Yifan Jiao, A. Aydin Alatan, Alfredo Petrosino, Ran Tao, Tianyang Xu, Sergio Vivas, Cheng Tian, Yee Wei Law, Wei Feng, José M. Martínez, Luca Bertinetto, Runling Wang, Liu Si, Tianzhu Zhang, Tomas Vojir, Mario Edoardo Maresca, Lichao Zhang, Changick Kim, Luka Čehovin Zajc, Lingxiao Yang, Yan Li, Javaan Chahl, Simon Hadfield, Chong Luo, Jiří Matas, Ales Leonardis, Jack Valmadre, Pedro Senna, Josef Kittler, Klemen Grm, Cong Hao, Haibin Ling, Isabela Drummond, Zheng Zhang, Fan Yang, Joakim Johnander, Tobias Fischer, Gorthi R. K. Sai Subrahmanyam, Jinyoung Sung, Jin-Young Choi, Bo Li, Hui Zhi, Álvaro Iglesias-Arias, Joost van de Weijer, Hyung Jin Chang, Jinqing Qi, Michael Felsberg, Francesco Battistone, Sangdoo Yun, Wei Zou, Huiyun Li, Boyu Chen, Zheng Zhu, Jing Li, Abdelrahman Eldesokey, Litu Rout, Matej Kristan, Mohamed Shehata, Fei Zhao, Changsheng Xu, Alan Lukežič, Yi Wu, Wenjun Zeng, Lutao Chu, Vitomir Struc, Stuart Golodetz, Alvaro Garcia-Martin, Dong Wang, Junyu Gao, Hankyeol Lee, Hyemin Lee, Ning Wang, Wei Wu, Anfeng He, Xiaojun Wu, Rama Krishna Sai Subrahmanyam Gorthi, Payman Moallem, Peixia Li, Jinqiao Wang, Erik Velasco-Salido, and Ming-Hsuan Yang
- Subjects
Source code, Computer science, Video tracking, Dataset, Computer vision, Artificial intelligence, Image processing, Tracker benchmarking, Computer vision and robotics (autonomous systems) - Abstract
The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as a "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled, and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
- Published
- 2019
35. Bag of Visual Words based Correlation Filter Tracker (BoVW-CFT)
- Author
-
Deepak Mishra, Priya Mariam Raju, and Gorthi R. K. Sai Subrahmanyam
- Subjects
Computer science, Detector, Tracking, Discriminative model, Robustness, Bag-of-words model in computer vision, Video tracking, Classifier, Computer vision, Artificial intelligence - Abstract
Accurate and robust visual object tracking is one of the most challenging computer vision problems. Recently, discriminative correlation filter trackers have shown promising results on benchmark datasets, with continuous improvements in tracking accuracy and robustness. Still, these algorithms fail to track when the target object and background conditions undergo drastic changes over time. They are also incapable of resuming tracking once the target is lost, limiting their ability to track long term. The proposed BoVW-CFT is a classifier-based, generic technique for handling tracking uncertainties in correlation filter trackers. Tracking failures are identified automatically, and an image classifier with training, testing, and online update stages, built on Bag of Visual Words (BoVW) features, serves as a detector in the tracking scenario. The proposed detector is a parts-based model and is well suited to the tracking framework. Further, the online training stage, with an updated model and training samples, incorporates temporal information, helping to detect rotated, blurred, and scaled versions of the target. On detecting a target loss in the correlation tracker, the trained classifier, referred to as the detector, is invoked to re-initialize the tracker with the actual target location. For each tracking uncertainty, two output patches are therefore obtained, one each from the base tracker and the classifier. The final target location is estimated using the normalized cross-correlation with the initial target patch. The method mitigates model drift in correlation trackers and learns a robust model that tracks long term. Extensive experimental results demonstrate improvements of 4.1% in expected overlap, 1.86% in accuracy, and 15.46% in robustness on VOT2016, and of 1.82% in overlap precision, 2.32% in AUC, and 2.87% in success rate on OTB100.
- Published
- 2018
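The final arbitration step in the abstract above, choosing between the tracker's patch and the detector's patch by normalized cross-correlation with the initial target patch, can be sketched as follows. The function names are illustrative, and the zero-mean NCC here is a standard formulation rather than the authors' exact code.

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized cross-correlation between equal-size patches,
    # in [-1, 1]; 1 means a perfect (linear) match.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def resolve_uncertainty(initial_patch, tracker_patch, detector_patch):
    # On a tracking failure, keep whichever candidate patch correlates
    # better with the target patch from the first frame.
    return ("tracker" if ncc(initial_patch, tracker_patch) >=
            ncc(initial_patch, detector_patch) else "detector")
```

Because the reference is the first-frame patch rather than a recently updated template, this arbitration is less sensitive to the model drift the method aims to mitigate.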
36. Variable Patch Dictionaries for efficient Compressed Sensing based MRI Reconstruction
- Author
-
Anupama Arun, Gorthi R. K. Sai Subrahmanyam, Thomas James Thomas, and J. Sheeba Rani
- Subjects
Basis (linear algebra), Image quality, Computer science, Image processing, Pattern recognition, Compressed sensing, Sampling (signal processing), Frequency domain, Artificial intelligence - Abstract
Compressed Sensing (CS) is a framework that allows under-sampling of the Fourier space in Magnetic Resonance Imaging (MRI) acquisition without significant loss in image quality. Conventional techniques in Compressed Sensing MRI (CS-MRI) employ fixed analytic transforms as the sparsifying basis, which can sparsely represent only certain types of image features. Adaptive transforms learnt using dictionary learning (DL) techniques have emerged as an interesting alternative, as they are tailored to a class of MR images. However, owing to the complexity of the learning process, DL-based techniques in MRI have been restricted to learning, in an online fashion, from small patches within the image. In this work, a recently proposed dictionary learning framework called Trainlets is extended to efficiently learn similar and higher-order dictionaries from a vast database of MR images belonging to a particular scan type. Moreover, the proposed Trainlets-based CS-MRI framework incorporates multiple offline-learned dictionaries corresponding to varying patch sizes, to adaptively denoise different regions of the image, overcoming the degradation associated with choosing a fixed patch size. The proposed variable-patch-size CS-MRI scheme is shown to achieve superior performance, with up to 5 dB improvement in PSNR and consistent gains in SSIM, and much faster reconstructions than popular DL-based CS-MRI schemes, even when the sampling percentage is higher.
- Published
- 2018
37. Batch-Mode Active Learning-Based Superpixel Library Generation for Very High-Resolution Aerial Image Classification
- Author
-
Rama Rao Nidamanuri, Srivalsan Namboodiri, Gorthi R. K. Sai Subrahmanyam, and Rajeswari Balasubramaniam
- Subjects
Very high resolution, Contextual image classification, Pixel, Computer science, Detector, Softmax function, Batch processing, Pattern recognition, Artificial intelligence, Classifier, Aerial image - Abstract
In this paper, we introduce active learning-based generation of an object training library for a multi-classifier object-oriented image analysis (OOIA) system. Given a sufficient number of training samples, supervised classification is the method of choice for image classification. However, this strategy becomes computationally expensive as the number of classes or the number of images to be classified grows. While several active learning approaches exist for pixel-based training library generation and for hyperspectral image classification, there is no standard training library generation strategy for OOIA of very high spatial resolution images. The proposed method addresses this by generating an optimized training library of objects (superpixels) through a batch-mode active learning (AL) approach. A softmax classifier is used as a detector, which helps determine the right samples to choose when updating the library. With this library, we construct a multi-classifier system with max-voting decisions to classify an image at the pixel level. The algorithm was applied to three very high-resolution airborne datasets of varying complexity in geographical context, sensors, illumination, and view angles. Our method empirically outperformed traditional OOIA, producing equivalent accuracy with a training library that is orders of magnitude smaller. The algorithm is most distinctive on the most heterogeneous dataset, where its accuracy is around twice that of the traditional method.
- Published
- 2018
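The batch-mode selection step described in the abstract above can be sketched as uncertainty sampling over the softmax detector's outputs. The sketch below is illustrative only (the function names and toy scores are ours, not the paper's): it picks the unlabeled superpixels the detector is least confident about.

```python
import numpy as np

def softmax(scores):
    """Row-wise softmax over raw classifier scores."""
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def select_batch(scores, batch_size):
    """Batch-mode uncertainty sampling: return the indices of the
    `batch_size` unlabeled samples with the lowest top-class softmax
    probability (the least confident ones)."""
    probs = softmax(np.asarray(scores, dtype=float))
    confidence = probs.max(axis=1)          # top-class probability per sample
    return np.argsort(confidence)[:batch_size]

# Toy softmax-detector scores for 4 unlabeled superpixels over 3 classes.
scores = np.array([[4.0, 0.1, 0.2],   # confident
                   [1.0, 0.9, 1.1],   # uncertain
                   [3.0, 0.2, 0.1],   # confident
                   [0.5, 0.6, 0.4]])  # uncertain
picked = select_batch(scores, 2)      # -> the two uncertain rows, 1 and 3
```

The selected samples would then be labeled and appended to the training library before the next AL round.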
38. Bayesian Approach for Landslide Identification from High-Resolution Satellite Images
- Author
-
Gorthi R. K. Sai Subrahmanyam, Deepak Mishra, Rama Rao Nidamanuri, Tapas R. Martha, and Pilli Madalasa
- Subjects
Naive Bayes classifier ,Identification (information) ,Computer science ,Bayesian probability ,Multispectral image ,Segmentation ,Landslide ,Satellite ,Change detection ,Remote sensing - Abstract
Landslides are among the severe natural catastrophes that affect thousands of lives and cause colossal damage to infrastructure at local to regional scales. Detection of landslides is a prerequisite for damage assessment. We propose a novel method based on object-oriented image analysis using bi-temporal satellite images and a DEM. The methodology involves segmentation, followed by extraction of spatial and spectral features of landslides and classification with a supervised Bayesian classifier. The framework rests on change detection over spatial features that capture the spatial attributes of landslides. The methodology has been applied to the detection and mapping of landslides of different sizes at selected study sites in Himachal Pradesh and Uttarakhand, India, using high-resolution multispectral images from the IRS LISS-IV sensor and a DEM from Cartosat-1. The resulting landslide maps are compared and validated against inventory landslide maps. The results show that the proposed methodology can identify medium- and large-scale landslides efficiently.
- Published
- 2018
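The supervised Bayesian classification step described above can be illustrated with a minimal Gaussian naive Bayes classifier. This is a generic sketch, not the paper's implementation: the toy two-feature data merely stands in for the spatial/spectral change-detection features of candidate segments.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means/variances
    plus class priors, prediction by maximum log-posterior."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(c|x) ∝ log prior + sum of per-feature Gaussian log-likelihoods
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None] - self.mu[:, None, :]) ** 2 / self.var[:, None, :])
        score = np.log(self.prior)[:, None] + ll.sum(axis=2)
        return self.classes[np.argmax(score, axis=0)]

# Toy data: class 1 = "landslide" segments (large bi-temporal change),
# class 0 = stable segments.
X = np.array([[0.1, 0.2], [0.0, 0.1], [0.9, 1.0], [1.0, 0.8]])
y = np.array([0, 0, 1, 1])
model = GaussianNB().fit(X, y)
pred = model.predict(np.array([[0.05, 0.15], [0.95, 0.9]]))   # -> [0, 1]
```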
39. Stochastic Assimilation Technique for Cloud Motion Analysis
- Author
-
Gorthi R. K. Sai Subrahmanyam, Kalamraju Mounika, and J. Sheeba Rani
- Subjects
Motion analysis ,Computer science ,business.industry ,Robustness (computer science) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Cloud computing ,Kalman filter ,Filter (signal processing) ,Image warping ,business ,Algorithm ,Motion (physics) - Abstract
Cloud motion analysis plays a key role in analyzing climatic changes. Recent work shows that the Classic-NL approach outperforms many other conventional motion analysis techniques. This paper presents an efficient approach for the assimilation of satellite images using a recursive stochastic filter, the Weighted Ensemble Transform Kalman Filter (WETKF), with an appropriate dynamical model and an image warping-based non-linear measurement model. Cloud motion is analyzed in the presence of occlusions, missing information, and unexpected merging and splitting of clouds. This paves the way for automatic analysis of motion fields and for drawing inferences about local and global motion over several years. The paper also demonstrates the efficacy and robustness of WETKF over the Classic-NL-based approach (Bibin Johnson J et al., International Conference on Computer Vision and Image Processing, 2016) [1].
- Published
- 2018
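The weighted-ensemble idea behind the filter above can be conveyed with a much-simplified, importance-weighting sketch. Note that this is a particle-filter-style stand-in for illustration only, not the actual WETKF transform; the scalar state and all names below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def assimilate(ensemble, obs, obs_std):
    """One weighted-ensemble assimilation step: weight each member by the
    Gaussian likelihood of the observation, then resample members in
    proportion to their weights. (A coarse stand-in for the WETKF update.)"""
    w = np.exp(-0.5 * ((ensemble - obs) / obs_std) ** 2)
    w /= w.sum()
    idx = rng.choice(len(ensemble), size=len(ensemble), p=w)
    return ensemble[idx]

# Toy scalar state, e.g. one component of a cloud-motion field.
ensemble = np.linspace(0.0, 4.0, 101)   # broad prior spread
updated = assimilate(ensemble, obs=1.0, obs_std=0.5)
```

After the step, the ensemble concentrates around the observed value, which is the behavior the recursive filter exploits frame after frame.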
40. Learning-Based Fuzzy Fusion of Multiple Classifiers for Object-Oriented Classification of High Resolution Images
- Author
-
Rajeswari Balasubramaniam, Rama Rao Nidamanuri, and Gorthi R. K. Sai Subrahmanyam
- Subjects
Object-oriented programming ,Contextual image classification ,Pixel ,Computer science ,business.industry ,Segmentation ,Pattern recognition ,Artificial intelligence ,Variance (accounting) ,Construct (python library) ,business ,Task (project management) ,Image (mathematics) - Abstract
In remote sensing, multi-classifier systems (MCS) have found use for efficient pixel-level image classification. A current challenge for the RS community is the classification of very high resolution (VHR) satellite/aerial images. Despite the abundance of data, certain inherent difficulties limit the performance of existing pixel-based models. Hence, the trend for classification of VHR imagery has shifted to object-oriented image analysis (OOIA), which works at the object level. We propose a shift of paradigm to object-oriented MCS (OOMCS) for efficient classification of VHR imagery. Our system uses the modern computer vision concept of superpixels for the segmentation stage of OOIA. On top of this, we construct a learning-based decision fusion method that integrates the decisions of the MCS at the superpixel level for the classification task. Detailed experimentation shows that our method outperforms a variety of traditional OOIA decision systems. It also remains robust under two typical artefacts, namely unbalanced samples and high intra-class variance.
- Published
- 2018
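As a point of reference, the max-voting fusion that learning-based decision fusion is typically benchmarked against is easy to sketch (names and toy data below are ours): each superpixel takes the majority label across the classifiers in the MCS, and every pixel inherits its superpixel's fused label.

```python
from collections import Counter

def max_vote(decisions):
    """Fuse per-classifier labels for one superpixel by majority vote
    (the max-voting baseline that learned fusion is compared against)."""
    return Counter(decisions).most_common(1)[0][0]

def classify_image(superpixel_labels, votes_per_superpixel):
    """Assign every pixel the fused label of the superpixel covering it."""
    fused = {sp: max_vote(v) for sp, v in votes_per_superpixel.items()}
    return [[fused[sp] for sp in row] for row in superpixel_labels]

# Toy 2x3 image covered by two superpixels, with three classifiers voting.
segments = [[0, 0, 1],
            [0, 1, 1]]
votes = {0: ["road", "road", "building"], 1: ["tree", "tree", "tree"]}
labels = classify_image(segments, votes)
# -> [["road", "road", "tree"], ["road", "tree", "tree"]]
```

A learned fusion replaces `max_vote` with a model trained on the classifiers' joint outputs, which is what lets it cope with unbalanced samples and high intra-class variance.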
41. Rotation Invariant Digit Recognition Using Convolutional Neural Network
- Author
-
Deepak Mishra, Gorthi R. K. Sai Subrahmanyam, and Ayushi Jain
- Subjects
CAPTCHA ,Computational complexity theory ,business.industry ,Computer science ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,02 engineering and technology ,Invariant (physics) ,computer.software_genre ,Convolutional neural network ,Nonlinear system ,Discriminative model ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Scaling ,computer - Abstract
Deep learning architectures use a set of layers to learn hierarchical features from the input. The learnt features are discriminative and can therefore be used for classification tasks. Convolutional neural networks (CNNs) are among the most widely used deep learning architectures. A CNN extracts prominent features from the input by passing it through layers of convolution and nonlinear activation. These features are invariant to scaling and to small distortions in the input image, but they offer rotation invariance only for small angles of rotation. We propose using multiple instances of a CNN to enhance the rotation-invariant capabilities of the architecture even for larger rotations of the input image. The architecture is then applied to handwritten digit classification and CAPTCHA recognition. The proposed method requires fewer images for training and therefore reduces the training time. Moreover, it offers the additional advantage of estimating the approximate orientation of the object in an image without additional computational complexity.
- Published
- 2018
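The core idea above, running the same classifier on several rotated copies of the input and keeping the most confident answer (which also yields an approximate orientation), can be sketched as follows. The 90-degree rotation steps and the toy template "classifier" are our simplifications, not the paper's CNN.

```python
import numpy as np

def predict_rotation_invariant(img, classifier, n_rot=4):
    """Classify several rotated copies of `img` with the same classifier
    and keep the most confident prediction; the winning rotation gives an
    approximate object orientation.  `classifier` is any callable
    returning a (label, confidence) pair."""
    best = max(((classifier(np.rot90(img, k)), k) for k in range(n_rot)),
               key=lambda t: t[0][1])
    (label, conf), k = best
    return label, conf, 90 * k   # label, confidence, estimated angle

# Toy classifier: scores how well a 3x3 patch matches a vertical bar.
template = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
def toy_classifier(img):
    return ("bar", float((img * template).sum()) / template.sum())

horizontal_bar = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]])
label, conf, angle = predict_rotation_invariant(horizontal_bar, toy_classifier)
# The horizontal bar is recognized at full confidence after a 90-degree turn.
```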
42. Classification of Breast Masses Using Convolutional Neural Network as Feature Extractor and Classifier
- Author
-
Deepak Mishra, Pinaki Ranjan Sarkar, and Gorthi R. K. Sai Subrahmanyam
- Subjects
Computer science ,business.industry ,Feature extraction ,Pattern recognition ,CAD ,02 engineering and technology ,Cad system ,Convolutional neural network ,Computer aided detection ,030218 nuclear medicine & medical imaging ,Extractor ,03 medical and health sciences ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Classifier (UML) - Abstract
Because micro-calcification clusters are difficult for radiologists to detect, computer-aided detection (CAD) systems are much needed. Many researchers have taken up the challenge of building an efficient CAD system, and several feature extraction methods have been proposed. Most of them extract low- or mid-level features, which limits the accuracy of the overall classification. We observe that high-level features lead to a better diagnosis, and the convolutional neural network (CNN) is the best-known model for extracting high-level features. In this paper, we propose a CNN architecture that performs both feature extraction and classification. The proposed network was applied to both the MIAS and DDSM databases, achieving accuracies of \(99.074\%\) and \(99.267\%\), respectively, which we believe are the best reported so far.
- Published
- 2018
43. Improved Transfer Learning through Shallow Network Embedding for Classification of Leukemia Cells
- Author
-
Gorthi R. K. Sai Subrahmanyam, Sai Siva Gorthi, Adithya S Kamath, G. Gopakumar, and Kaushik S. Kalmady
- Subjects
Artificial neural network ,business.industry ,Computer science ,Deep learning ,Feature extraction ,Feature selection ,02 engineering and technology ,Image segmentation ,021001 nanoscience & nanotechnology ,Machine learning ,computer.software_genre ,01 natural sciences ,Ensemble learning ,010309 optics ,0103 physical sciences ,Artificial intelligence ,0210 nano-technology ,Transfer of learning ,business ,Throughput (business) ,computer - Abstract
One of the most crucial parts of diagnosing a wide variety of ailments is cytopathological testing. The process is laborious, time-consuming and requires skill, constraints that have spurred interest in automating it. Several deep learning-based methods have been proposed in this domain to give machines human-level expertise. In this paper, we investigate the effectiveness of transfer learning using fine-tuned features from modified deep neural architectures, together with certain ensemble learning methods, for classifying the leukemia cell lines HL60, MOLT, and K562. Microfluidics-based imaging flow cytometry (mIFC) is used to obtain the images instead of image cytometry, because mIFC guarantees significantly higher throughput and is easy to set up at minimal expense. We find that transfer learning with fine-tuned features from a modified deep neural network provides a substantial improvement in performance over earlier works. We also find that, even without fine-tuning, feature selection using ensemble methods on the deep features provides comparable performance on the considered leukemia cell classification problem. These results show that automated methods can be a valuable guide in cytopathological testing, especially in resource-limited settings.
- Published
- 2017
44. Framework for morphometric classification of cells in imaging flow cytometry
- Author
-
G. Gopakumar, Veerendra Kalyan Jagannadh, Sai Siva Gorthi, and Gorthi R. K. Sai Subrahmanyam
- Subjects
Histology ,Computer science ,Microfluidics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Pathology and Forensic Medicine ,Flow cytometry ,010309 optics ,020204 information systems ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Segmentation ,Image resolution ,Mass screening ,Active contour model ,medicine.diagnostic_test ,business.industry ,Pattern recognition ,Support vector machine ,Flow (mathematics) ,Data mining ,Artificial intelligence ,business ,computer - Abstract
Imaging flow cytometry is an emerging technology that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy. It allows high-throughput imaging of cells with good spatial resolution while they are in flow. This paper proposes a general framework for the processing and classification of cells imaged with an imaging flow cytometer. Each cell is localized by finding an accurate cell contour; then features reflecting cell size, circularity and complexity are extracted for classification with an SVM. Unlike conventional iterative, semi-automatic segmentation algorithms such as active contours, we propose a non-iterative, fully automatic graph-based cell localization. To evaluate the performance of the proposed framework, we successfully classified unstained, label-free leukaemia cell lines MOLT, K562 and HL60 from video streams captured with a custom-fabricated, cost-effective microfluidics-based imaging flow cytometer. The proposed system is a significant step towards a cost-effective cell analysis platform that would enable affordable mass screening camps examining cellular morphology for disease diagnosis.
- Published
- 2015
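The size and circularity features mentioned in the abstract above can be illustrated on a binary cell mask. This is a generic sketch with our own coarse pixel-edge perimeter estimate, not the paper's contour-based feature extraction; circularity is the standard 4*pi*A / P^2 ratio (1 for a perfect circle).

```python
import math

def morphometric_features(mask):
    """Area, perimeter and circularity (4*pi*A / P^2) from a binary mask.
    The perimeter is estimated by counting exposed pixel edges, a coarse
    approximation to the true contour length."""
    h, w = len(mask), len(mask[0])
    area, perimeter = 0, 0
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            area += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w and mask[ni][nj]):
                    perimeter += 1   # edge exposed to background or border
    circularity = 4 * math.pi * area / perimeter ** 2
    return {"area": area, "perimeter": perimeter, "circularity": circularity}

# A filled 4x4 square "cell": area 16, 16 exposed edges, circularity pi/4.
square = [[1] * 4 for _ in range(4)]
feats = morphometric_features(square)
```

Feature vectors like this, one per localized cell, are what a classifier such as an SVM would consume.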
45. The Visual Object Tracking VOT2017 Challenge Results
- Author
-
Ruxandra Tapu, Tianzhu Zhang, Jaeil Cho, Dalong Du, Philip H. S. Torr, Kris M. Kitani, Deepak Mishra, Wenbing Tao, Fahad Shahbaz Khan, Luka Čehovin Zajc, Boyu Chen, Jae-chan Jeong, Andrea Vedaldi, Dawei Du, Jianke Zhu, Bogdan Mocanu, Weiming Hu, Alvaro Garcia-Martin, Jingyu Liu, João F. Henriques, Yang Li, Kai Chen, Junliang Xing, Luca Bertinetto, Chang Huang, Jiri Matas, Nianhao Xie, Risheng Liu, Payman Moallem, Guan Huang, Chong Sun, Qiang Wang, Roman Pflugfelder, David Zhang, Yifan Xing, Titus Zaharia, Gustavo Fernandez, Erhan Gundogdu, Karel Lebeda, Lingxiao Yang, Francesco Battistone, Guilherme Sousa Bastos, Junfei Zhuang, Matej Kristan, Zhipeng Zhang, Changsheng Xu, Vincenzo Santopietro, Matthias Mueller, Ning Wang, Ke Gao, Gustav Häger, Andrej Muhič, Pedro Senna, Richard Bowden, Wengang Zhou, Zhiqun He, Ming-Hsuan Yang, Qifeng Yu, Alireza Memarmoghadam, Jin Gao, Ondrej Miksik, Lei Zhang, Zheng Zhu, Alfredo Petrosino, Ales Leonardis, Tomas Vojir, Yingruo Fan, Siwei Lyu, Houqiang Li, Pallavi Venugopal M, Gorthi R. K. Sai Subrahmanyam, Longyin Wen, Xiao Bian, José M. Martínez, Antoine Tran, Michael Felsberg, Wei Zou, Wenbo Li, Jana Noskova, Sunglok Choi, Isabela Drummond, Xianguo Yu, Alan Lukezic, Stuart Golodetz, Abdelrahman Eldesokey, Lijun Wang, Erik Velasco-Salido, Huchuan Lu, Antoine Manzanera, Simon Hadfield, Ji-Wan Kim, Qingming Huang, Mengdan Zhang, Rafael Martin-Nieto, Goutam Bhat, Jae-Yeong Lee, Martin Danelljan, A. Aydin Alatan, Kannappan Palaniappan, Jack Valmadre, Guna Seetharaman, Junyu Gao, Hongliang Zhang, and Mahdieh Poostchi
- Subjects
Source code ,Computer Sciences ,business.industry ,Computer science ,media_common.quotation_subject ,020207 software engineering ,Image processing ,02 engineering and technology ,Tracking (particle physics) ,Visualization ,Datavetenskap (datalogi) ,Data visualization ,Video tracking ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,media_common - Abstract
The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative. Results of 51 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies, plus a new "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. Performance of the tested trackers typically exceeds standard baselines by far. The source code for most of the trackers is publicly available from the VOT page. VOT2017 goes beyond its predecessors by (i) improving the VOT public dataset and introducing a separate VOT2017 sequestered dataset, (ii) introducing a real-time tracking experiment and (iii) releasing a redesigned toolkit that supports complex experiments. The dataset, the evaluation kit and the results are publicly available at the challenge website. Funding Agencies|Slovenian research agency research programs [P2-0214, P2-0094]; Slovenian research agency project [J2-8175]; Czech Science Foundation Project [GACR P103/12/G084]; WASP; VR (EMC2); SSF (SymbiCloud); SNIC; AIT Strategic Research Programme Visual Surveillance and Insight; Faculty of Computer Science, University of Ljubljana, Slovenia
- Published
- 2017
46. Correlation-Based Tracker-Level Fusion for Robust Visual Tracking
- Author
-
Madan Kumar Rapuru, Pallavi M. Venugopal, Gorthi R. K. Sai Subrahmanyam, Sumithra Kakanuru, and Deepak Mishra
- Subjects
business.industry ,Computer science ,Detector ,Tracking system ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,Visualization ,Correlation ,Robustness (computer science) ,020204 information systems ,Video tracking ,0202 electrical engineering, electronic engineering, information engineering ,Eye tracking ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Software - Abstract
Although visual object tracking algorithms can handle various challenging scenarios individually, none is robust enough to handle all the challenges simultaneously. For any online tracking-by-detection method, the key issue lies in detecting the target over the whole frame and systematically updating a target model based on the last detected appearance, so as to avoid the drift phenomenon. This paper proposes a novel robust tracking algorithm that fuses the frame-level detection strategy of Tracking-Learning-Detection (TLD) with the systematic model-update strategy of the Kernelized Correlation Filter (KCF) tracker. The risk of drift is mitigated by the fact that model updates are primarily driven by detections that occur in the spatial neighborhood of the latest detections. The motivation behind the selection of trackers is their complementary nature in handling tracking challenges. The proposed algorithm efficiently combines the two state-of-the-art trackers using a conservative correspondence measure with strategic model updates, taking advantage of both and compensating for the weaknesses of each with the strengths of the other. Extensive evaluation of the proposed method with different metrics is carried out on the ALOV300++, Visual Tracker Benchmark, and Visual Object Tracking datasets. We demonstrate its performance in terms of robustness and success rate by comparison with state-of-the-art trackers.
- Published
- 2017
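The spatial-neighborhood gating of model updates described above can be sketched with a simple IoU gate: trust the frame-level detection only when it agrees spatially with the correlation filter's prediction, and update the model only then. This is a simplified stand-in for the paper's conservative correspondence measure; the box format and threshold are our assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def fuse(detector_box, filter_box, gate=0.3):
    """Tracker-level fusion sketch: accept the detector's box (and allow a
    model update) only when it lies in the spatial neighborhood of the
    correlation filter's prediction; otherwise keep the filter output and
    skip the update, which limits drift."""
    if detector_box is not None and iou(detector_box, filter_box) >= gate:
        return detector_box, True    # (box to report, update model)
    return filter_box, False

box, update = fuse((10, 10, 20, 20), (12, 11, 20, 20))  # nearby -> accepted
```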
47. Computationally efficient deep tracker: Guided MDNet
- Author
-
Deepak Mishra, Gorthi R. K. Sai Subrahmanyam, and M Pallavi Venugopal
- Subjects
Ground truth ,BitTorrent tracker ,Computer science ,business.industry ,Computation ,Frame (networking) ,Tracking system ,02 engineering and technology ,010501 environmental sciences ,Tracking (particle physics) ,01 natural sciences ,Convolutional neural network ,0202 electrical engineering, electronic engineering, information engineering ,Clutter ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,0105 earth and related environmental sciences - Abstract
The main objective of this paper is to propose an essential improvement to the existing Multi-Domain Convolutional Neural Network tracker (MDNet), which tracks an unknown object in a video stream. MDNet handles the major tracking challenges, such as fast motion, background clutter, out-of-view targets and scale variations, through offline training and online tracking. The Convolutional Neural Network (CNN) is pre-trained offline on many videos with ground truth to obtain a target representation in the network. During online tracking, MDNet evaluates a large number of randomly sampled windows around the previous target to estimate the target in the current frame, which makes tracking computationally expensive at test time. The major contribution of this paper is to feed MDNet guided samples rather than random samples, so that the computation and time required by the CNN during tracking are greatly reduced. The proposed algorithm is evaluated on videos from the ALOV300++ and VOT datasets, and the results are compared with state-of-the-art trackers.
- Published
- 2017
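The guided-sampling idea above can be sketched as drawing candidate windows around the previous target shifted by an estimated motion vector, rather than uniformly at random; with candidates concentrated near the expected location, far fewer of them (and hence far fewer CNN evaluations) are needed. Everything below (names, the Gaussian spread, the motion input) is our illustrative assumption, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def guided_samples(prev_box, motion, n, spread=2.0):
    """Draw n candidate windows (cx, cy, w, h) around the previous target
    box shifted by an estimated motion vector, with a small Gaussian
    spread, instead of sampling windows at random over a wide region."""
    x, y, w, h = prev_box
    cx = x + motion[0] + rng.normal(0, spread, n)
    cy = y + motion[1] + rng.normal(0, spread, n)
    return np.stack([cx, cy, np.full(n, w), np.full(n, h)], axis=1)

# Previous target at (50, 40), size 16x16, estimated motion (+3, -1):
# candidates cluster around (53, 39) instead of covering the whole frame.
cands = guided_samples((50.0, 40.0, 16.0, 16.0), motion=(3.0, -1.0), n=32)
```

Each candidate window would then be scored by the CNN, exactly as in MDNet's original pipeline, only over a much smaller, better-placed set.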
48. Cytopathological image analysis using deep-learning networks in microfluidic microscopy
- Author
-
Sai Siva Gorthi, Deepak Mishra, G. Gopakumar, Gorthi R. K. Sai Subrahmanyam, and K. Hari Babu
- Subjects
0301 basic medicine ,Computer science ,Feature extraction ,Microfluidics ,HL-60 Cells ,01 natural sciences ,Convolutional neural network ,Domain (software engineering) ,010309 optics ,Machine Learning ,03 medical and health sciences ,Deep belief network ,Optics ,0103 physical sciences ,Image Processing, Computer-Assisted ,Humans ,Instrumentation Applied Physics ,Segmentation ,Throughput (business) ,Microscopy ,Artificial neural network ,business.industry ,Deep learning ,Pattern recognition ,Precursor Cell Lymphoblastic Leukemia-Lymphoma ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,030104 developmental biology ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Neural Networks, Computer ,business ,K562 Cells ,Algorithms - Abstract
Cytopathologic testing is one of the most critical steps in the diagnosis of diseases, including cancer. However, the task is laborious and demands skill. The associated high cost and low throughput have drawn considerable interest in automating the testing process. Several neural network architectures have been designed to bring human expertise to machines. In this paper, we explore and demonstrate the feasibility of using deep-learning networks for cytopathologic analysis by classifying three important unlabeled, unstained leukemia cell lines (K562, MOLT, and HL60). The cell images used in the classification are captured using a low-cost, high-throughput cell imaging technique: microfluidics-based imaging flow cytometry. We demonstrate that, without conventional fine segmentation followed by explicit feature extraction, the proposed deep-learning algorithms effectively classify the coarsely localized cell lines. We show that the designed deep belief network as well as the deeply pre-trained convolutional neural network outperform conventionally used decision systems, which is important in the medical domain, where labeled data for training are scarce. We hope this work enables the development of a clinically significant high-throughput microfluidic microscopy-based tool for disease screening/triaging, especially in resource-limited settings. (C) 2016 Optical Society of America.
- Published
- 2017
49. Convolutional neural network-based malaria diagnosis from focus stack of blood smear images acquired using custom-built slide scanner
- Author
-
Gorthi Sai Siva, Murali Swetha, Gorthi R. K. Sai Subrahmanyam, and G. Gopakumar
- Subjects
0301 basic medicine ,Scanner ,Pathology ,medicine.medical_specialty ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Physics and Astronomy ,02 engineering and technology ,Convolutional neural network ,General Biochemistry, Genetics and Molecular Biology ,03 medical and health sciences ,Stack (abstract data type) ,parasitic diseases ,0202 electrical engineering, electronic engineering, information engineering ,Image Processing, Computer-Assisted ,Leukocytes ,Medicine ,Humans ,General Materials Science ,Segmentation ,Computer vision ,Instrumentation (computer programming) ,Sensitivity (control systems) ,Malaria, Falciparum ,business.industry ,General Engineering ,General Chemistry ,Focus stacking ,030104 developmental biology ,020201 artificial intelligence & image processing ,Artificial intelligence ,Neural Networks, Computer ,Focus (optics) ,business - Abstract
This paper introduces a focus stacking-based approach for automated quantitative detection of Plasmodium falciparum malaria from blood smears. For the detection, a custom-designed convolutional neural network (CNN) operating on a focus stack of images is used. The cell counting problem is addressed as a segmentation problem, and we propose a 2-level segmentation strategy. The use of a CNN operating on a focus stack for malaria detection is the first of its kind; it not only improves detection accuracy (sensitivity of 97.06% and specificity of 98.50%) but also enables processing on cell patches, avoiding the need for hand-engineered features. The slide images are acquired with a custom-built portable slide scanner made from low-cost, off-the-shelf components, suitable for point-of-care diagnostics. The proposed approach of combining sophisticated algorithmic processing with inexpensive instrumentation can potentially help clinicians perform malaria diagnosis.
- Published
- 2017
50. Automatic detection of Malaria infected RBCs from a focus stack of bright field microscope slide images
- Author
-
Gorthi R. K. Sai Subrahmanyam, G. Gopakumar, Gorthi Sai Siva, and M. Swetha
- Subjects
0301 basic medicine ,biology ,Computer science ,business.industry ,Microscope slide ,Cell segmentation ,Plasmodium falciparum ,biology.organism_classification ,medicine.disease ,Convolutional neural network ,03 medical and health sciences ,030104 developmental biology ,Death toll ,parasitic diseases ,medicine ,Computer vision ,Artificial intelligence ,Detection rate ,business ,Malaria - Abstract
Malaria is a deadly infectious disease affecting red blood cells in humans, caused by protozoa of the genus Plasmodium. In 2015, an estimated 438,000 deaths occurred among the 214 million malaria cases reported worldwide. Building an accurate automatic system for detecting malarial cases is therefore beneficial and of great medical value. This paper addresses the detection of Plasmodium falciparum-infected RBCs from Leishman-stained microscope slide images. Unlike the traditional approach of examining a single focused image to detect the parasite, we make use of a focus stack of images collected with a bright field microscope. Rather than extracting hand-crafted features, we opt for a Convolutional Neural Network that can operate directly on images. We work with image patches at the suspected parasite locations, thereby avoiding the need for cell segmentation. We report and compare the detection rates obtained when only a single focused image is used and when operating on the focus stack of images. Altogether, the proposed novel approach results in highly accurate malaria detection.
- Published
- 2016