218 results for "Suo, Jinli"
Search Results
202. Retrieving Object Motions from Coded Shutter Snapshot in Dark Environment_supp3-3280010.avi
- Author
- Suo, Jinli, primary
- Full Text
- View/download PDF
203. Retrieving Object Motions from Coded Shutter Snapshot in Dark Environment_supp2-3280010.avi
- Author
- Suo, Jinli, primary
- Full Text
- View/download PDF
204. High-resolution multispectral imaging using a photodiode
- Author
- Tsia, Kevin K., Goda, Keisuke, Bian, Liheng, Suo, Jinli, Chen, Feng, and Dai, Qionghai
- Published
- 2018
- Full Text
- View/download PDF
205. HOPE: High-Order Polynomial Expansion of Black-Box Neural Networks.
- Author
- Xiao T, Zhang W, Cheng Y, and Suo J
- Abstract
Despite their remarkable performance, deep neural networks remain mostly "black boxes", lacking interpretability and thus hindering their adoption in fields that require rational decision-making. Here we introduce HOPE (High-order Polynomial Expansion), a method for expanding a network into a high-order Taylor polynomial on a reference input. Specifically, we derive the high-order derivative rule for composite functions and extend the rule to neural networks to obtain their high-order derivatives quickly and accurately. From these derivatives, we can then derive the Taylor polynomial of the neural network, which provides an explicit expression of the network's local interpretations. We combine the Taylor polynomials obtained under different reference inputs to obtain the global interpretation of the neural network. Numerical analysis confirms the high accuracy, low computational complexity, and good convergence of the proposed method. Moreover, we demonstrate HOPE's wide applications built on deep learning, including function discovery, fast inference, and feature selection. We compare HOPE with other XAI methods and demonstrate its advantages.
- Published
- 2024
- Full Text
- View/download PDF
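The high-order chain rule at the heart of HOPE can be illustrated on a toy two-layer tanh "network". This is a hand-derived sketch, not the paper's general rule: the weights `w1`, `w2` and the reference input `x0` are hypothetical, and derivatives are taken only up to third order.

```python
import math

def tanh_derivs(x):
    # value and first three derivatives of tanh at x
    t = math.tanh(x)
    d1 = 1 - t * t
    d2 = -2 * t * d1
    d3 = -2 * d1 * d1 - 2 * t * d2
    return t, d1, d2, d3

def compose_derivs(w1, w2, x0):
    # derivatives of h(x) = tanh(w2 * tanh(w1 * x)) at x0 via the
    # high-order chain rule for composite functions (Faa di Bruno)
    f, f1, f2, f3 = tanh_derivs(w1 * x0)
    f1, f2, f3 = f1 * w1, f2 * w1 ** 2, f3 * w1 ** 3      # inner layer w.r.t. x
    g, g1, g2, g3 = tanh_derivs(w2 * f)
    g1, g2, g3 = g1 * w2, g2 * w2 ** 2, g3 * w2 ** 3      # outer layer w.r.t. f
    h0 = g
    h1 = g1 * f1
    h2 = g2 * f1 ** 2 + g1 * f2
    h3 = g3 * f1 ** 3 + 3 * g2 * f1 * f2 + g1 * f3
    return h0, h1, h2, h3

def taylor(x, x0, derivs):
    # Taylor polynomial built from the derivatives at the reference input
    return sum(d * (x - x0) ** k / math.factorial(k) for k, d in enumerate(derivs))

w1, w2, x0 = 0.7, 1.3, 0.2
derivs = compose_derivs(w1, w2, x0)
exact_ref = math.tanh(w2 * math.tanh(w1 * x0))            # network output at x0
exact = math.tanh(w2 * math.tanh(w1 * (x0 + 0.1)))        # network output near x0
approx = taylor(x0 + 0.1, x0, derivs)                     # local polynomial surrogate
```

Near the reference input, the third-order polynomial tracks the network closely, which is the sense in which the expansion provides a local interpretation.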
206. Sharing massive biomedical data at magnitudes lower bandwidth using implicit neural function.
- Author
- Yang R, Xiao T, Cheng Y, Li A, Qu J, Liang R, Bao S, Wang X, Wang J, Suo J, Luo Q, and Dai Q
- Subjects
- Humans, Data Compression methods, Deep Learning, Biomedical Research methods, Information Dissemination methods, Neural Networks, Computer
- Abstract
Efficient storage and sharing of massive biomedical data would open up their wide accessibility to different institutions and disciplines. However, compressors tailored for natural photos/videos are rapidly limited for biomedical data, while emerging deep learning-based methods demand huge training data and are difficult to generalize. Here, we propose to conduct Biomedical data compRession with Implicit nEural Function (BRIEF) by representing the target data with compact neural networks, which are data specific and thus have no generalization issues. Benefiting from the strong representation capability of implicit neural function, BRIEF achieves 2-3 orders of magnitude compression on diverse biomedical data at significantly higher fidelity than existing techniques. Besides, BRIEF is of consistent performance across the whole data volume, and supports customized spatially varying fidelity. BRIEF's multifold advantageous features also serve reliable downstream tasks at low bandwidth. Our approach will facilitate low-bandwidth data sharing and promote collaboration and progress in the biomedical field., Competing Interests: The authors declare no competing interest.
- Published
- 2024
- Full Text
- View/download PDF
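The core idea of this abstract — replace a raw data volume with the parameters of a compact function fitted to it — can be sketched with a toy stand-in: Fourier features plus plain gradient descent in place of a deep implicit network. All sizes, the signal, and the feature choice are hypothetical.

```python
import math

# target "data volume": N samples of a smooth 1-D signal
N = 64
data = [math.sin(2 * math.pi * t / N) + 0.5 * math.cos(6 * math.pi * t / N)
        for t in range(N)]

# compact implicit representation: value(t) ~ sum_k w_k * feature_k(t)
def features(t):
    u = t / N
    return [1.0, math.sin(2 * math.pi * u), math.cos(2 * math.pi * u),
            math.sin(6 * math.pi * u), math.cos(6 * math.pi * u)]

K = len(features(0))
w = [0.0] * K
lr = 0.5
for _ in range(200):                      # fit the parameters by gradient descent
    grad = [0.0] * K
    for t in range(N):
        phi = features(t)
        err = sum(w[k] * phi[k] for k in range(K)) - data[t]
        for k in range(K):
            grad[k] += 2 * err * phi[k] / N
    w = [w[k] - lr * grad[k] for k in range(K)]

# only the K parameters need to be stored/shared, not the N samples
max_err = max(abs(sum(w[k] * f for k, f in enumerate(features(t))) - data[t])
              for t in range(N))
ratio = N / K
```

The compression ratio here is simply samples stored versus parameters stored; BRIEF's neural networks play the role of the feature model, scaled to real volumetric data.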
207. An event-oriented diffusion-refinement method for sparse events completion.
- Author
- Zhang B, Han Y, Suo J, and Dai Q
- Abstract
Event cameras or dynamic vision sensors (DVS) record asynchronous response to brightness changes instead of conventional intensity frames, and feature ultra-high sensitivity at low bandwidth. The new mechanism demonstrates great advantages in challenging scenarios with fast motion and large dynamic range. However, the recorded events might be highly sparse due to either limited hardware bandwidth or extreme photon starvation in harsh environments. To unlock the full potential of event cameras, we propose an inventive event sequence completion approach conforming to the unique characteristics of event data in both the processing stage and the output form. Specifically, we treat event streams as 3D event clouds in the spatiotemporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to maintain the temporal resolution of raw data successfully. To validate the effectiveness of our method comprehensively, we perform extensive experiments on three widely used public datasets with different spatial resolutions, and additionally collect a novel event dataset covering diverse scenarios with highly dynamic motions and under harsh illumination. Besides generating high-quality dense events, our method can benefit downstream applications such as object classification and intensity frame reconstruction., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
208. iSMOD: an integrative browser for image-based single-cell multi-omics data.
- Author
- Zhang W, Suo J, Yan Y, Yang R, Lu Y, Jin Y, Gao S, Li S, Gao J, Zhang M, and Dai Q
- Subjects
- In Situ Hybridization, Fluorescence, Genomics methods, Gene Expression Profiling, Proteomics, Multiomics
- Abstract
Genomic and transcriptomic image data, represented by DNA and RNA fluorescence in situ hybridization (FISH), respectively, together with proteomic data, particularly that related to nuclear proteins, can help elucidate gene regulation in relation to the spatial positions of chromatins, messenger RNAs, and key proteins. However, methods for image-based multi-omics data collection and analysis are lacking. To this end, we aimed to develop the first integrative browser called iSMOD (image-based Single-cell Multi-omics Database) to collect and browse comprehensive FISH and nucleus proteomics data based on the title, abstract, and related experimental figures, which integrates multi-omics studies focusing on the key players in the cell nucleus from 20 000+ (still growing) published papers. We have also provided several exemplar demonstrations to show iSMOD's wide applications: profiling multi-omics research to reveal the molecular target for diseases; exploring the working mechanism behind biological phenomena using multi-omics interactions; and integrating the 3D multi-omics data in a virtual cell nucleus. iSMOD is a cornerstone for delineating a global view of relevant research to enable the integration of scattered data and thus provides new insights regarding the missing components of molecular pathway mechanisms and facilitates improved and efficient scientific research., (© The Author(s) 2023. Published by Oxford University Press on behalf of Nucleic Acids Research.)
- Published
- 2023
- Full Text
- View/download PDF
209. Handheld snapshot multi-spectral camera at tens-of-megapixel resolution.
- Author
- Zhang W, Suo J, Dong K, Li L, Yuan X, Pei C, and Dai Q
- Abstract
Multi-spectral imaging is a fundamental tool characterizing the constituent energy of scene radiation. However, current multi-spectral video cameras cannot scale up beyond megapixel resolution due to optical constraints and the complexity of the reconstruction algorithms. To circumvent the above issues, we propose a tens-of-megapixel handheld multi-spectral videography approach (THETA), with a proof-of-concept camera achieving 65-megapixel videography of 12 wavebands within visible light range. The high performance is brought by multiple designs: We propose an imaging scheme to fabricate a thin mask for encoding spatio-spectral data using a conventional film camera. Afterwards, a fiber optic plate is introduced for building a compact prototype supporting pixel-wise encoding with a large space-bandwidth product. Finally, a deep-network-based algorithm is adopted for large-scale multi-spectral data decoding, with the coding pattern specially designed to facilitate efficient coarse-to-fine model training. Experimentally, we demonstrate THETA's advantages and wide applicability in outdoor imaging of large macroscopic scenes., (© 2023. Springer Nature Limited.)
- Published
- 2023
- Full Text
- View/download PDF
210. INFWIDE: Image and Feature Space Wiener Deconvolution Network for Non-Blind Image Deblurring in Low-Light Conditions.
- Author
- Zhang Z, Cheng Y, Suo J, Bian L, and Dai Q
- Abstract
In low-light environments, handheld photography suffers from severe camera shake under long exposure settings. Although existing deblurring algorithms have shown promising performance on well-exposed blurry images, they still cannot cope with low-light snapshots. Sophisticated noise and saturation regions are two dominating challenges in practical low-light deblurring: the former violates the Gaussian or Poisson assumption widely used in most existing algorithms and thus degrades their performance badly, while the latter introduces non-linearity to the classical convolution-based blurring model and makes the deblurring task even more challenging. In this work, we propose a novel non-blind deblurring method dubbed image and feature space Wiener deconvolution network (INFWIDE) to tackle these problems systematically. In terms of algorithm design, INFWIDE proposes a two-branch architecture, which explicitly removes noise and hallucinates saturated regions in the image space and suppresses ringing artifacts in the feature space, and integrates the two complementary outputs with a subtle multi-scale fusion network for high-quality night photograph deblurring. For effective network training, we design a set of loss functions integrating a forward imaging model and backward reconstruction to form a closed-loop regularization to secure good convergence of the deep neural network. Further, to optimize INFWIDE's applicability in real low-light conditions, a physical-process-based low-light noise model is employed to synthesize realistic noisy night photographs for model training. Taking advantage of the traditional Wiener deconvolution algorithm's physically driven characteristics and deep neural network's representation ability, INFWIDE can recover fine details while suppressing the unpleasant artifacts during deblurring. Extensive experiments on synthetic data and real data demonstrate the superior performance of the proposed approach.
- Published
- 2023
- Full Text
- View/download PDF
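The classical Wiener deconvolution at INFWIDE's core has a closed form in the frequency domain, X = conj(H) * Y / (|H|^2 + NSR). A self-contained 1-D sketch with a hypothetical circular blur kernel (naive DFT, no external libraries; the real method operates on 2-D images inside a deep network):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

def wiener_deconv(blurred, kernel, nsr):
    # Wiener filter: X = conj(H) * Y / (|H|^2 + NSR), where NSR is the
    # noise-to-signal ratio (a small constant in this noise-free toy)
    Y, H = dft(blurred), dft(kernel)
    X = [h.conjugate() * y / (abs(h) ** 2 + nsr) for y, h in zip(Y, H)]
    return idft(X)

signal = [0.0, 1.0, 3.0, 2.0, 0.0, 0.0, 1.0, 0.0]
kernel = [0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2]   # circular blur, sums to 1
n = len(signal)
# simulate the blur: circular convolution of signal with kernel
blurred = [sum(kernel[(k - m) % n] * signal[m] for m in range(n)) for k in range(n)]
recovered = wiener_deconv(blurred, kernel, 1e-9)
err = max(abs(r - s) for r, s in zip(recovered, signal))
```

With noise present, a larger NSR trades sharpness for noise suppression, which is exactly the regime where INFWIDE's learned branches take over.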
211. Retrieving Object Motions From Coded Shutter Snapshot in Dark Environment.
- Author
- Dong K, Guo Y, Yang R, Cheng Y, Suo J, and Dai Q
- Subjects
- Motion, Signal-To-Noise Ratio, Learning, Lighting
- Abstract
Video object detection is a widely studied topic and has made significant progress in the past decades. However, the feature extraction and calculations in existing video object detectors demand decent imaging quality and avoidance of severe motion blur. Under extremely dark scenarios, due to limited sensor sensitivity, we have to trade off signal-to-noise ratio for motion blur compensation or vice versa, and thus suffer from performance deterioration. To address this issue, we propose to temporally multiplex a frame sequence into one snapshot and extract the cues characterizing object motion for trajectory retrieval. For effective encoding, we build a prototype for encoded capture by mounting a highly compatible programmable shutter. Correspondingly, in terms of decoding, we design an end-to-end deep network called detection from coded snapshot (DECENT) to retrieve sequential bounding boxes from the coded blurry measurements of dynamic scenes. For effective network learning, we generate quasi-real data by incorporating physically-driven noise into the temporally coded imaging model, which circumvents the unavailability of training data and generalizes well to real dark videos. The approach offers multiple advantages, including low bandwidth, low cost, compact setup, and high accuracy. The effectiveness of the proposed approach is experimentally validated under low-illumination vision and provides a feasible way for night surveillance.
- Published
- 2023
- Full Text
- View/download PDF
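The temporal multiplexing described in this abstract — many frames, one programmable shutter code, one snapshot — reduces to a short forward-model sketch. The toy 1-D frames and the random binary code are hypothetical; decoding (DECENT) would invert this many-to-one mapping with a learned network and is not shown.

```python
import random

random.seed(0)
T, N = 8, 4   # frames in the sequence, pixels per frame (toy 1-D "image")
frames = [[random.random() for _ in range(N)] for _ in range(T)]
code = [random.randint(0, 1) for _ in range(T)]   # shutter: open (1) / closed (0) per frame

# coded snapshot: frames captured while the shutter is open accumulate on the sensor
snapshot = [sum(code[t] * frames[t][p] for t in range(T)) for p in range(N)]
```

Because the code is known, the blur in the snapshot carries structured motion cues rather than a uniform smear, which is what makes trajectory retrieval possible.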
212. Plug-and-Play Algorithms for Video Snapshot Compressive Imaging.
- Author
- Yuan X, Liu Y, Suo J, Durand F, and Dai Q
- Abstract
We consider the reconstruction problem of video snapshot compressive imaging (SCI), which captures high-speed videos using a low-speed 2D sensor (detector). The underlying principle of SCI is to modulate sequential high-speed frames with different masks and then these encoded frames are integrated into a snapshot on the sensor and thus the sensor can be of low-speed. On one hand, video SCI enjoys the advantages of low-bandwidth, low-power and low-cost. On the other hand, applying SCI to large-scale problems (HD or UHD videos) in our daily life is still challenging and one of the bottlenecks lies in the reconstruction algorithm. Existing algorithms are either too slow (iterative optimization algorithms) or not flexible to the encoding process (deep learning based end-to-end networks). In this paper, we develop fast and flexible algorithms for SCI based on the plug-and-play (PnP) framework. In addition to the PnP-ADMM method, we further propose the PnP-GAP (generalized alternating projection) algorithm with a lower computational workload. We first employ the image deep denoising priors to show that PnP can recover a UHD color video with 30 frames from a snapshot measurement. Since videos have strong temporal correlation, by employing the video deep denoising priors, we achieve a significant improvement in the results. Furthermore, we extend the proposed PnP algorithms to the color SCI system using mosaic sensors, where each pixel only captures the red, green or blue channels. A joint reconstruction and demosaicing paradigm is developed for flexible and high quality reconstruction of color video SCI systems. Extensive results on both simulation and real datasets verify the superiority of our proposed algorithm.
- Published
- 2022
- Full Text
- View/download PDF
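The SCI forward model and one generalized alternating projection (GAP) update described above can be sketched in a few lines. The deep denoiser prior is replaced by an identity stand-in, and the sizes and masks are hypothetical toys, so this only demonstrates the data-fidelity projection, not the full PnP-GAP reconstruction.

```python
import random

random.seed(1)
T, N = 3, 5                                   # high-speed frames, pixels (toy 1-D scene)
truth = [[random.random() for _ in range(N)] for _ in range(T)]
masks = [[float(random.randint(0, 1)) for _ in range(N)] for _ in range(T)]
for p in range(N):
    masks[p % T][p] = 1.0                     # ensure every pixel is sensed at least once

# snapshot measurement: mask-modulated frames integrate on the low-speed sensor
y = [sum(masks[t][p] * truth[t][p] for t in range(T)) for p in range(N)]

def forward(x):
    return [sum(masks[t][p] * x[t][p] for t in range(T)) for p in range(N)]

def denoise(x):
    return x                                  # stand-in for the plugged-in deep denoiser

x = [[0.0] * N for _ in range(T)]
norm = [sum(masks[t][p] ** 2 for t in range(T)) for p in range(N)]
for _ in range(10):                           # GAP: project onto {x : forward(x) = y}, then denoise
    fx = forward(x)
    x = [[x[t][p] + masks[t][p] * (y[p] - fx[p]) / norm[p] for p in range(N)]
         for t in range(T)]
    x = denoise(x)

residual = max(abs(yp - fp) for yp, fp in zip(y, forward(x)))
```

In the actual PnP framework, swapping `denoise` for an image or video deep denoiser regularizes the many solutions consistent with the snapshot toward natural video content.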
213. Snapshot compressive imaging based digital image correlation: temporally super-resolved full-resolution deformation measurement.
- Author
- Chen W, Zhang B, Gu L, Liu H, Suo J, and Shao X
- Abstract
The limited throughput of a digital image correlation (DIC) system hampers measuring deformations at both high spatial resolution and high temporal resolution. To address this dilemma, in this paper we propose to integrate snapshot compressive imaging (SCI)-a recently proposed computational imaging approach-into DIC for high-speed, high-resolution deformation measurement. Specifically, an SCI-DIC system is established to encode a sequence of fast changing speckle patterns into a snapshot and a high-accuracy speckle decompress SCI (Sp-DeSCI) algorithm is proposed for computational reconstruction of the speckle sequence. To adapt SCI reconstruction to the unique characteristics of speckle patterns, we propose three techniques under SCI reconstruction framework to secure high-precision reconstruction, including the normalized sum squared difference criterion, speckle-adaptive patch search strategy, and adaptive group aggregation. For efficacy validation of the proposed Sp-DeSCI, we conducted extensive simulated experiments and a four-point bending SCI-DIC experiment on real data. Both simulation and real experiments verify that the Sp-DeSCI successfully removes the deviations of reconstructed speckles in DeSCI and provides the highest displacement accuracy among existing algorithms. The SCI-DIC system together with the Sp-DeSCI algorithm can offer temporally super-resolved deformation measurement at full spatial resolution, and can potentially replace conventional high-speed DIC in real measurements.
- Published
- 2022
- Full Text
- View/download PDF
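The normalized sum squared difference criterion mentioned above compares intensity-normalized patches, making speckle matching robust to brightness scaling between frames. One common form is shown here without mean subtraction; the patch values are hypothetical, and the paper's exact definition may differ in detail.

```python
import math

def nssd(f, g):
    # normalized sum of squared differences between two speckle patches:
    # each patch is scaled to unit L2 norm before comparison, so a uniform
    # intensity change between frames does not affect the score
    nf = math.sqrt(sum(v * v for v in f))
    ng = math.sqrt(sum(v * v for v in g))
    return sum((a / nf - b / ng) ** 2 for a, b in zip(f, g))

patch = [1.0, 2.0, 3.0, 4.0]
same_scaled = [2.0 * v for v in patch]   # same speckle pattern, doubled brightness
other = [4.0, 1.0, 3.0, 2.0]             # different arrangement of the same intensities
```

A matching patch scores near zero even under a brightness change, while a mismatched patch scores clearly higher.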
214. Plug-and-play pixel super-resolution phase retrieval for digital holography.
- Author
- Chang X, Bian L, Gao Y, Cao L, Suo J, and Zhang J
- Abstract
In order to increase signal-to-noise ratio in optical imaging, most detectors sacrifice resolution to increase pixel size in a confined area, which impedes further development of high throughput holographic imaging. Although the pixel super-resolution technique (PSR) enables resolution enhancement, it suffers from the trade-off between reconstruction quality and super-resolution ratio. In this work, we report a high-fidelity PSR phase retrieval method with plug-and-play optimization, termed PNP-PSR. It decomposes PSR reconstruction into independent sub-problems based on generalized alternating projection framework. An alternating projection operator and an enhancing neural network are employed to tackle the measurement fidelity and statistical prior regularization, respectively. PNP-PSR incorporates the advantages of individual operators, achieving both high efficiency and noise robustness. Extensive experiments show that PNP-PSR outperforms the existing techniques in both resolution enhancement and noise suppression.
- Published
- 2022
- Full Text
- View/download PDF
215. Weighted sampling-adaptive single-pixel sensing.
- Author
- Zhan X, Zhu C, Suo J, and Bian L
- Subjects
- Neural Networks, Computer
- Abstract
The novel single-pixel sensing technique that uses an end-to-end neural network for joint optimization achieves high-level semantic sensing, which is effective but computation-consuming for varied sampling rates. In this Letter, we report a weighted optimization technique for sampling-adaptive single-pixel sensing, which only needs to train the network once for any dynamic sampling rate. Specifically, we innovatively introduce a weighting scheme in the encoding process to characterize different patterns' modulation efficiencies, in which the modulation patterns and their corresponding weights are updated iteratively. The optimal pattern series with the highest weights is employed for light modulation in the experimental implementation, thus achieving highly efficient sensing. Experiments validated that once the network is trained with a sampling rate of 1, the single-target classification accuracy reaches up to 95.00% at a sampling rate of 0.03 on the MNIST dataset and 90.20% at a sampling rate of 0.07 on the CCPD dataset for multi-target sensing.
- Published
- 2022
- Full Text
- View/download PDF
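The weighting scheme described above can be sketched as ranking modulation patterns by learned weights and keeping only the top few at a low sampling rate. Here the weights are random stand-ins for trained values, and each single-pixel measurement is the inner product of a pattern with the scene.

```python
import random

random.seed(2)
N = 16        # pixels in the (flattened) scene
full = 8      # patterns available after one training run
keep = 3      # patterns kept at a low sampling rate

scene = [random.random() for _ in range(N)]
patterns = [[float(random.randint(0, 1)) for _ in range(N)] for _ in range(full)]
weights = [random.random() for _ in range(full)]  # stand-ins for learned efficiencies

# keep the highest-weight patterns, then measure: each single-pixel
# measurement is the inner product of one pattern with the scene
order = sorted(range(full), key=lambda i: weights[i], reverse=True)[:keep]
measurements = [sum(patterns[i][p] * scene[p] for p in range(N)) for i in order]
```

The same trained pattern/weight set serves any sampling rate: lowering the rate just truncates the ranked list, which is why the network only needs to be trained once.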
216. High-axial-resolution single-molecule localization under dense excitation with a multi-channel deep U-Net.
- Author
- Zhang W, Zhang Z, Bian L, Wang H, Suo J, and Dai Q
- Abstract
Single-molecule localization microscopy (SMLM) can bypass the diffraction limit of optical microscopes and greatly improve the resolution in fluorescence microscopy. By introducing the point spread function (PSF) engineering technique, we can customize depth-varying PSFs to achieve higher axial resolution. However, most existing 3D single-molecule localization algorithms require excited fluorescent molecules to be sparse and captured at high signal-to-noise ratios, which results in a long acquisition time and precludes SMLM's further applications in many potential fields. To address this problem, we propose a novel 3D single-molecule localization method based on a multi-channel U-Net. By leveraging the deep network's great advantages in feature extraction, the proposed network can reliably discriminate dense fluorescent molecules with overlapped PSFs corrupted by sensor noise. Both simulated and real experiments demonstrate its superior performance in PSF-engineered microscopes with short exposure and dense excitations, which holds great potential in fast 3D super-resolution microscopy.
- Published
- 2021
- Full Text
- View/download PDF
217. High-dimensional camera shake removal with given depth map.
- Author
- Yue T, Suo J, and Dai Q
- Abstract
Camera motion blur is drastically nonuniform for large depth-range scenes: the nonuniformity caused by camera translation is depth dependent, while that caused by camera rotation is not. To restore the blurry images of large-depth-range scenes deteriorated by arbitrary camera motion, we build an image blur model considering the 6 degrees of freedom (DoF) of camera motion with a given scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy to reduce the number of variables and an effective method to estimate high-dimensional camera motion as well. The number of variables is reduced by a temporal sampling motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To effectively estimate the high-dimensional camera motion parameters, we construct the probabilistic motion density function (PMDF) to describe the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, PMDF is computed through a back projection from 2D local blur kernels to 6D camera motion parameter space and robust voting. We conduct a series of experiments on both synthetic and real captured data, and validate that our method achieves better performance than existing uniform methods and nonuniform methods on large-depth-range scenes.
- Published
- 2014
- Full Text
- View/download PDF
218. A compositional and dynamic model for face aging.
- Author
- Suo J, Zhu SC, Shan S, and Chen X
- Subjects
- Algorithms, Analysis of Variance, Biometric Identification, Computer Simulation, Humans, Markov Chains, Stochastic Processes, Face anatomy & histology, Hair anatomy & histology, Models, Biological, Skin Aging physiology
- Abstract
In this paper, we present a compositional and dynamic model for face aging. The compositional model represents faces in each age group by a hierarchical And-Or graph, in which And nodes decompose a face into parts to describe details (e.g., hair, wrinkles, etc.) crucial for age perception and Or nodes represent large diversity of faces by alternative selections. Then a face instance is a transverse of the And-Or graph-parse graph. Face aging is modeled as a Markov process on the parse graph representation. We learn the parameters of the dynamic model from a large annotated face data set and the stochasticity of face aging is modeled in the dynamics explicitly. Based on this model, we propose a face aging simulation and prediction algorithm. Inversely, an automatic age estimation algorithm is also developed under this representation. We study two criteria to evaluate the aging results using human perception experiments: 1) the accuracy of simulation: whether the aged faces are perceived of the intended age group, and 2) preservation of identity: whether the aged faces are perceived as the same person. Quantitative statistical analysis validates the performance of our aging model and age estimation algorithm.
- Published
- 2010
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library