8 results for "Bharath Ramesh"
Search Results
2. A Hybrid Neuromorphic Object Tracking and Classification Framework for Real-Time Systems
- Author
- Andres Ussa, Chockalingam Senthil Rajen, Tarun Pulluri, Deepak Singla, Jyotibdha Acharya, Gideon Fu Chuanrong, Arindam Basu, and Bharath Ramesh
- Subjects
- Artificial Intelligence, Computer Networks and Communications, Software, Computer Science Applications
- Published
- 2023
- Full Text
- View/download PDF
3. EBBINNOT: A Hardware-Efficient Hybrid Event-Frame Tracker for Stationary Dynamic Vision Sensors
- Author
- Vivek Mohan, Deepak Singla, Tarun Pulluri, Andres Ussa, Pradeep Kumar Gopalakrishnan, Pao-Sheng Sun, Bharath Ramesh, and Arindam Basu
- Subjects
- Computer Networks and Communications, Hardware and Architecture, Signal Processing, Computer Science Applications, Information Systems
- Published
- 2022
- Full Text
- View/download PDF
4. e-TLD: Event-Based Framework for Dynamic Object Tracking
- Author
- Bharath Ramesh, Matthew Ong, Cheng Xiang, Shihao Zhang, Garrick Orchard, Andrés Ussa, and Hong Yang
- Subjects
- Ground truth, Event (computing), Computer science, Image Processing and Computer Vision, Representation (systemics), Tracking (particle physics), Object (computer science), Discriminative model, Sliding window protocol, Video tracking, Media Technology, Computer vision, Artificial intelligence, Electrical and Electronic Engineering
- Abstract
This paper presents a long-term object tracking framework with a moving event camera under general tracking conditions. A first of its kind for these revolutionary cameras, the tracking framework uses a discriminative representation for the object with online learning, and detects and re-tracks the object when it comes back into the field of view. One of the key novelties is the use of an event-based local sliding window technique that tracks reliably in scenes with cluttered and textured backgrounds. In addition, Bayesian bootstrapping is used to assist real-time processing and boost the discriminative power of the object representation. When the object re-enters the field of view of the camera, a data-driven, global sliding window detector locates the object for subsequent tracking. Extensive experiments demonstrate the ability of the proposed framework to track and detect arbitrary objects of various shapes and sizes, including dynamic objects such as a human. This is a significant improvement over earlier works that simply track objects for as long as they remain visible against simpler backgrounds. Using ground truth locations for five different objects under three motion settings, namely translation, rotation, and 6-DOF, quantitative measurements are reported for the event-based tracking framework, with critical insights on various performance issues. Finally, a real-time implementation in C++ highlights tracking ability under scale, rotation, viewpoint, and occlusion scenarios in a lab setting. (A minimal code sketch of the local sliding-window idea follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
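The local sliding-window idea from the e-TLD abstract can be illustrated with a short sketch: events are accumulated into a binary patch, and candidate windows around the last known position are scored against an object template. This is a hypothetical simplification, not the authors' implementation; the helper names (events_to_patch, track_local_window), the binary accumulation, and the correlation score are all assumptions made for illustration.

```python
# Minimal sketch of an event-based local sliding-window tracker.
# NOT the e-TLD implementation; all names and rules are hypothetical.
import numpy as np

def events_to_patch(events, center, size):
    """Accumulate (x, y) events inside a square window into a binary
    patch; 'events' is an (N, 2) array of pixel coordinates."""
    half = size // 2
    patch = np.zeros((size, size), dtype=np.float32)
    for x, y in events:
        dx, dy = int(x - center[0] + half), int(y - center[1] + half)
        if 0 <= dx < size and 0 <= dy < size:
            patch[dy, dx] = 1.0
    return patch

def track_local_window(events, last_center, template, search_radius=8):
    """Slide the template over a local neighborhood of the last known
    position and return the best-matching center and its score."""
    size = template.shape[0]
    best_score, best_center = -np.inf, last_center
    for ox in range(-search_radius, search_radius + 1, 2):
        for oy in range(-search_radius, search_radius + 1, 2):
            c = (last_center[0] + ox, last_center[1] + oy)
            patch = events_to_patch(events, c, size)
            score = (patch * template).sum() / (patch.sum() + 1e-6)
            if score > best_score:
                best_score, best_center = score, c
    return best_center, best_score

# Usage: build the template from a synthetic event burst, then track.
rng = np.random.default_rng(0)
events = rng.integers(90, 110, size=(500, 2))   # synthetic events
template = events_to_patch(events, (100, 100), 32)
center, score = track_local_window(events, (100, 100), template)
print(center, round(float(score), 3))
```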
5. Single Image Deraining Integrating Physics Model and Density-Oriented Conditional GAN Refinement
- Author
- Zhi Gao, Jinqiang Cui, Bharath Ramesh, Min Cao, and Tiancan Mei
- Subjects
- Generalization, Applied Mathematics, Feature extraction, Atmospheric model, Density estimation, Robustness (computer science), Logic gate, Signal Processing, Quality (business), Electrical and Electronic Engineering, Algorithm, Image restoration
- Abstract
Although advanced single image deraining methods have been proposed, their generalization to real-world images is usually limited, especially when dealing with rain patterns of different densities, shapes, and directions. To improve the robustness and generalization of these deraining methods, we propose a novel density-aware single image deraining method with gated multi-scale feature fusion, which consists of two stages. In the first stage, a sophisticated physics model is leveraged for initial deraining, and a network branch is utilized for rain density estimation to guide the subsequent refinement. The second stage of model-independent refinement is realized using a conditional Generative Adversarial Network (cGAN), which aims to eliminate artifacts and improve restoration quality. Extensive experiments have been conducted on representative synthetic rain datasets and real rain scenes, demonstrating the effectiveness and generalization ability of our method, which outperforms state-of-the-art approaches. (A sketch of the two-stage structure follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
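The two-stage structure described above (a physics-based first pass with a density-estimation branch, followed by conditional refinement) can be sketched as a skeleton. This is a minimal PyTorch sketch under assumed module shapes; DensityNet, RefineGenerator, and physics_initial_derain are hypothetical stand-ins, not the paper's architecture.

```python
# Skeleton of a density-guided two-stage deraining pipeline.
# Module sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class DensityNet(nn.Module):
    """Stage-1 side branch: predicts a per-image rain-density score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

class RefineGenerator(nn.Module):
    """Stage 2: generator conditioned on the coarse derained image and
    the density score, broadcast as an extra input channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, coarse, density):
        d = density.view(-1, 1, 1, 1).expand(-1, 1, *coarse.shape[2:])
        return coarse + self.net(torch.cat([coarse, d], dim=1))

def physics_initial_derain(rainy):
    """Placeholder for the model-based first stage; a real system
    would invert a rain-streak/atmospheric model here."""
    return rainy.clamp(0, 1)

rainy = torch.rand(2, 3, 64, 64)
coarse = physics_initial_derain(rainy)
density = DensityNet()(rainy)          # guides the refinement stage
refined = RefineGenerator()(coarse, density)
print(refined.shape)                   # torch.Size([2, 3, 64, 64])
```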
6. Vehicle Detection in Remote Sensing Images Leveraging on Simultaneous Super-Resolution
- Author
- Hong Ji, Tiancan Mei, Bharath Ramesh, and Zhi Gao
- Subjects
- Computer science, Feature extraction, Detector, Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology, Convolutional neural network, Image resolution, Object detection, Image (mathematics), Remote sensing
- Abstract
Owing to the relatively small size of vehicles in remote sensing images, which lack sufficient appearance detail to distinguish vehicles from similar objects, detection performance is still far from satisfactory compared with detection results on everyday images. Inspired by the positive effect of super-resolution convolutional neural networks (SRCNN) on object detection and the stunning success of deep CNN techniques, we apply a generative adversarial network framework to realize simultaneous super-resolution and vehicle detection in an end-to-end manner, where the detection loss is backpropagated into the SRCNN during training to facilitate detection. In particular, our work is unsupervised and bypasses the requirement for low-/high-resolution image pairs during the training stage, achieving increased generality and applicability. Extensive experiments on representative data sets demonstrate that our method outperforms state-of-the-art detectors. (The source code will be made available after the review process.) A toy sketch of the end-to-end loss coupling follows this record.
- Published
- 2020
- Full Text
- View/download PDF
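The end-to-end coupling described above, where the detection loss is backpropagated into the super-resolution network, can be demonstrated with a toy example. The tiny modules below are illustrative stand-ins under assumed shapes, not the paper's GAN framework; the point is only that one backward pass sends gradients through the detector into the SR parameters.

```python
# Toy demonstration: detection loss backpropagates into an SR network.
import torch
import torch.nn as nn

sr_net = nn.Sequential(                  # 2x super-resolution stand-in
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
    nn.Conv2d(3, 3, 3, padding=1))
detector = nn.Sequential(                # tiny stand-in classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

low_res = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, 2, (4,))       # vehicle / not-vehicle

sr_img = sr_net(low_res)                 # 64x64 super-resolved images
det_loss = nn.functional.cross_entropy(detector(sr_img), labels)
det_loss.backward()                      # gradients reach sr_net too
print(sr_net[1].weight.grad.abs().sum() > 0)   # tensor(True)
```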
7. EOVNet: Earth-Observation Image-Based Vehicle Detection Network
- Author
- Hong Ji, Zhi Gao, Bharath Ramesh, Xiaodong Liu, and Tiancan Mei
- Subjects
- Atmospheric Science, Artificial neural network, Computer science, Deep learning, Feature extraction, Pattern recognition, Convolutional neural network, Object detection, Feature (computer vision), Benchmark (computing), Pyramid (image processing), Artificial intelligence, Computers in Earth Sciences
- Abstract
Vehicle detection from earth-observation (EO) images has been attracting remarkable attention for its critical value in a variety of applications. Encouraged by the stunning success of deep learning techniques based on convolutional neural networks (CNNs), which have revolutionized the visual data processing community and achieved state-of-the-art performance in a variety of classification and recognition tasks on benchmark datasets, we propose a network, called EOVNet (EO image-based vehicle detection network), to bridge the gap between advanced deep learning research on object detection and the specific task of vehicle detection in EO images. Our network integrates nearly all advanced techniques, including very deep residual networks for feature extraction, a feature pyramid to fuse multiscale features, a proposal-generation network with feature sharing, and hard example mining. Moreover, our novel designs of probability-based localization and homography-based data augmentation have been investigated, resulting in further improvement of detection performance. For performance evaluation, we have collected nearly all the representative EO datasets associated with vehicle detection. Extensive experiments on these representative datasets demonstrate that our method outperforms the state-of-the-art object detection approach Faster R-CNN++ (an improved variant of the Faster R-CNN framework) by 5% in average precision. The source code will be made available after the review process. A hedged sketch of the baseline components follows this record.
- Published
- 2019
- Full Text
- View/download PDF
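The components the abstract enumerates (a deep residual backbone, a feature pyramid fusing multiscale features, and a proposal network with feature sharing) correspond to the standard Faster R-CNN detector with a ResNet-FPN backbone shipped in torchvision. The sketch below shows that baseline, not EOVNet itself, which adds probability-based localization and homography-based augmentation on top; num_classes=2 (vehicle vs. background) and the torchvision >= 0.13 keyword style are assumptions.

```python
# Baseline ResNet-50 + FPN + RPN detector (Faster R-CNN) from
# torchvision, as a stand-in for the components EOVNet builds on.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)         # vehicle vs. background
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 256, 256)])   # one EO image tile
print(preds[0].keys())                   # dict with boxes, labels, scores
```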
8. Synergizing Appearance and Motion With Low Rank Representation for Vehicle Counting and Traffic Flow Analysis
- Author
- Ruifang Zhai, Zhi Gao, Bharath Ramesh, Xu Yan, Hailong Qin, Yazhe Tang, and Pengfei Wang
- Subjects
- Vehicle counting, Pixel, Computer science, Mechanical Engineering, Traffic flow analysis, Computer Science Applications, Matrix decomposition, Robustness (computer science), Vehicle detection, Automotive Engineering, Computer vision, Artificial intelligence, Robust principal component analysis, Motion flow
- Abstract
Appearance and motion, which are complementary, account for a dominant proportion of visual information. We propose to synergize them using a low-rank representation framework for the estimation and analysis of traffic flow. Taking advantage of the downward-looking camera configuration, we process only the measurement line, called the virtual gantry, instead of dealing with the whole frame, resulting in much improved efficiency. Enforcing a low-rank constraint on the spatiotemporal image, which is generated by stacking the pixels on the virtual gantry over time, we introduce a block-sparse robust principal component analysis algorithm in which the motion cue is leveraged to highlight the foreground and realize vehicle detection with high accuracy. The motion flow is further exploited for size normalization to classify vehicles into light, small, medium, and large categories. Benefiting from the low-rank representation, our method is parameter insensitive, robust to illumination changes, and requires no training. We perform extensive experiments on 24/7 videos collected on highways in China and Singapore, obtaining nearly 100% accuracy. Meanwhile, insightful observations on the obtained traffic information are given, which could be very valuable to users, especially traffic management authorities. A minimal sketch of the virtual-gantry decomposition follows this record.
- Published
- 2018
- Full Text
- View/download PDF
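The virtual-gantry decomposition described above can be sketched with NumPy: pixels on one scan line are stacked over time into a spatiotemporal matrix, which is then split into a low-rank background and a sparse foreground. The crude alternating SVD/thresholding loop below stands in for the paper's block-sparse robust PCA; the rank, threshold, and synthetic data are assumptions for illustration.

```python
# Minimal virtual-gantry sketch: low-rank background + sparse vehicles.
import numpy as np

def simple_rpca(M, rank=1, sparse_thresh=0.3, iters=20):
    """Alternate a rank-'rank' SVD fit (background) with soft
    thresholding of the residual (foreground)."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank background
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - sparse_thresh, 0)
    return L, S

# Synthetic gantry: a constant road row stacked over 200 frames, plus
# two "vehicles" crossing the line as short bright blocks.
T, W = 200, 64
gantry = np.full((T, W), 0.4)
gantry[50:58, 10:20] += 1.0
gantry[120:130, 40:55] += 1.0

L, S = simple_rpca(gantry)
crossings = np.abs(S).sum(axis=1) > 1.0     # frames with a foreground blob
count = int(np.diff(crossings.astype(int)).clip(min=0).sum())
print(count)                                # rising edges; expected: 2
```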