22 results for "Zheng Qu"
Search Results
2. Coordinated Motion Generation and Object Placement: A Reactive Planning and Landing Approach
- Author
-
Riddhiman Laha, Jonathan Vorndamme, Luis F.C. Figueredo, Zheng Qu, Abdalla Swikir, Christoph Jähne, and Sami Haddadin
- Published
- 2021
- Full Text
- View/download PDF
3. Rumor Detection of COVID-19 Pandemic on Online Social Networks
- Author
-
Chen Lyu, Zheng Qu, Anqi Shi, and Qingyao Jia
- Subjects
Value (ethics), Coronavirus disease 2019 (COVID-19), Computer science, Feature extraction, Rumor, Machine learning, Ensemble learning, Task (project management), Pandemic, Task analysis, Artificial intelligence
- Abstract
The new coronavirus disease (COVID-19) has received widespread attention and has caused a health crisis across the world. Massive amounts of information about COVID-19 have emerged on social networks. However, not all information disseminated on social networks is true and reliable. In response to the COVID-19 pandemic, only genuine information is valuable to the authorities and the public. Therefore, detecting COVID-19 rumors on social networks is an essential task. In this paper, we attempt to solve this problem using a machine learning approach on the Weibo platform. First, we extract text features, user-related features, interaction-based features, and emotion-based features from messages spread about COVID-19. Second, by combining these four types of features, we design an intelligent rumor detection model based on ensemble learning. Finally, we conduct extensive experiments on data collected from Weibo. Experimental results indicate that our model significantly improves the accuracy of rumor detection, achieving an accuracy of 91% and an AUC of 0.96.
- Published
- 2020
- Full Text
- View/download PDF
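Editor's note: the abstract above combines four feature groups in an ensemble classifier. The following is a minimal, hypothetical sketch of that idea using scikit-learn with synthetic stand-ins for the text, user, interaction, and emotion features; it is not the authors' implementation, and the feature dimensions and base learners are assumptions.

```python
# Editor's illustration (not the authors' code): a soft-voting ensemble over
# synthetic stand-ins for the four feature groups described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Stand-ins for text, user, interaction, and emotion features (dimensions are arbitrary).
X = np.hstack([rng.normal(size=(n, d)) for d in (50, 8, 6, 4)])
y = rng.integers(0, 2, size=n)  # 1 = rumor, 0 = non-rumor (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across base learners
)
ensemble.fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
print("AUC:", roc_auc_score(y_te, proba))
```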
4. A Network-Centric Hardware/Algorithm Co-Design to Accelerate Distributed Training of Deep Neural Networks
- Author
-
Mohammad Alian, Peitian Pan, Zheng Qu, Yifan Yuan, Ren Wang, Youjie Li, Jongse Park, Nam Sung Kim, Hadi Esmaeilzadeh, and Alexander G. Schwing
- Subjects
Speedup, Artificial neural network, Distributed database, Computer science, Distributed computing, Training system, Network interface, Bottleneck, Embedding, Data compression, Block (data storage)
- Abstract
Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of workers and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without requiring complex mechanisms to avert loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only significantly reduces gradient communication with practically no loss of accuracy, but also has low enough complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces communication time by 70.9-80.7% and offers a 2.2-3.1x speedup over the conventional training system, while achieving the same level of accuracy.
- Published
- 2018
- Full Text
- View/download PDF
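Editor's note: the abstract above hinges on aggressive lossy compression of floating-point gradients. As a generic illustration of why gradients tolerate precision loss (this is not the INCEPTIONN compression scheme), the sketch below zeroes low-order mantissa bits of float32 gradients, which bounds the relative error while making the bit pattern far more compressible.

```python
# Editor's illustration only: truncate low-order mantissa bits of float32
# gradients. A generic precision-reduction trick, not the paper's algorithm.
import numpy as np

def truncate_mantissa(grad: np.ndarray, keep_bits: int = 10) -> np.ndarray:
    """Zero all but the top `keep_bits` of the 23-bit float32 mantissa."""
    assert 0 <= keep_bits <= 23
    bits = grad.astype(np.float32).view(np.uint32)
    # Mask keeps the sign bit, the 8 exponent bits, and the top mantissa bits.
    mask = np.uint32((0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF)
    return (bits & mask).view(np.float32)

g = np.random.default_rng(0).normal(size=8).astype(np.float32) * 1e-3
g_trunc = truncate_mantissa(g, keep_bits=8)
print(np.abs(g - g_trunc).max())  # small absolute error relative to |g|
```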
5. A new hybrid asymmetric and buck-boost fronted converter for SRM with active boost voltage capability
- Author
-
Shaofei Tang, Zheng Qu, Huijun Wang, Xinsheng Wei, and Qi Chen
- Subjects
Engineering, Boosting (machine learning), Buck–boost converter, Converters, Inductor, Switched reluctance motor, Capacitor, Control theory, Boost converter, Electronic engineering, Voltage
- Abstract
In order to improve the performance of a switched reluctance motor (SRM) at high speed, a new hybrid asymmetric and buck-boost fronted converter for SRM with active boost voltage capability is proposed. Unlike existing converters with voltage-boosting capability, the proposed converter can actively regulate the boosted voltage, so that the voltage level is independent of the switching and dwell angles. First, the basic structure and operating modes are introduced and analyzed. Second, the key parameters, such as the inductor and the boosting capacitor, are investigated and designed. Finally, experiments are carried out, and the results validate the proposed method.
- Published
- 2017
- Full Text
- View/download PDF
6. Adaptive output-feedback control for a class of multi-input-multi-output plants with applications to very flexible aircraft
- Author
-
Eugene Lavretsky, Zheng Qu, and Anuradha M. Annaswamy
- Subjects
Engineering, Adaptive control, MIMO, Control engineering, Nonlinear system, Control theory, Parametric model, Actuator, Reference model, Parametric statistics
- Abstract
A dominant presence of parametric model uncertainties motivates an adaptive approach for control of very flexible aircraft (VFA). This paper proposes an adaptive controller that includes a baseline design based on observers and parameter adaptation based on a closed-loop reference model (CRM), and is applicable to a class of multi-input multi-output (MIMO) plants in which the number of outputs exceeds the number of inputs. In particular, the proposed controller allows the plant to have first-order actuator dynamics and parametric uncertainties in both the plant and the actuator dynamics. Conditions are delineated under which this controller guarantees stability and asymptotic reference tracking, and the overall design is validated on a nonlinear VFA model.
- Published
- 2016
- Full Text
- View/download PDF
7. Adaptive output-feedback control for relative degree two systems based on closed-loop reference models
- Author
-
Zheng Qu, Eugene Lavretsky, and Anuradha M. Annaswamy
- Subjects
Engineering, Degree (graph theory), Control theory, MIMO, Control engineering, Adaptation (computer science), Actuator, Reference model, Stability (probability), Parametric statistics
- Abstract
In this paper, a new adaptive output-feedback controller for multi-input-multi-output (MIMO) linear plant models with relative degree two is developed. The adaptive controller includes a baseline design based on observers and parameter adaptation based on a closed-loop reference model (CRM). The overall design guarantees robust stability and tracking performance in the presence of parametric uncertainties that are commonly seen in aircraft applications.
- Published
- 2015
- Full Text
- View/download PDF
8. Control-oriented modeling of spark assisted compression ignition using a double Wiebe function
- Author
-
Joel Oudart, Varun Mittal, Nikhil Ravi, Aleksandar Kojic, Zheng Qu, and Eric Doran
- Subjects
Ignition system, Physics, Homogeneous charge compression ignition, Range (aeronautics), Nuclear engineering, Spark (mathematics), Solid modeling, Sensitivity (control systems), Combustion, Energy (signal processing), Automotive engineering
- Abstract
Spark assisted compression ignition (SACI) is currently under exploration as a combustion strategy to extend the operating range of homogeneous charge compression ignition (HCCI), which provides efficiency benefits over standard spark ignition (SI) combustion. This paper presents a physics-based control-oriented approach to modeling combustion in SACI. A double-Wiebe function is developed to capture the two-stage energy release seen in SACI, where a portion of the fuel is burned through flame propagation initiated by a spark, which then initiates auto-ignition in the remaining fuel. This double-Wiebe function is incorporated into a previously developed continuous-time model of HCCI combustion, and correlations for the Wiebe function parameters are developed based on physical model states. A simpler cycle-by-cycle HCCI model is also extended with a two-step energy release description for SACI combustion. Both models accurately capture the behavior of SACI and its sensitivity to different actuators when compared to experimental data.
- Published
- 2015
- Full Text
- View/download PDF
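Editor's note: for reference, a commonly used double-Wiebe parameterization of the burned mass fraction weights two single Wiebe functions, one for the spark-initiated flame-propagation stage and one for the auto-ignition stage; the symbols below are generic, and the paper's exact parameter correlations may differ.

```latex
% Generic double-Wiebe burned mass fraction; \alpha splits the heat release
% between the flame-propagation and auto-ignition stages (illustrative only).
x_b(\theta) = \alpha \left[ 1 - \exp\!\left( -a_1 \left( \tfrac{\theta - \theta_{0,1}}{\Delta\theta_1} \right)^{m_1 + 1} \right) \right]
            + (1 - \alpha) \left[ 1 - \exp\!\left( -a_2 \left( \tfrac{\theta - \theta_{0,2}}{\Delta\theta_2} \right)^{m_2 + 1} \right) \right]
```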
9. Fast distributed coordinate descent for non-strongly convex losses
- Author
-
Zheng Qu, Olivier Fercoq, Peter Richtárik, and Martin Takáč
- Subjects
Optimization problem, Regular polygon, Supercomputer, Machine Learning (cs.LG), Core (game theory), Rate of convergence, Lasso (statistics), Optimization and Control (math.OC), Applied mathematics, Convex function, Coordinate descent, Mathematics
- Abstract
We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions. The method attains the optimal $O(1/k^2)$ convergence rate, where $k$ is the iteration counter. The core of the work is the theoretical study of stepsize parameters. We have implemented the method on Archer - the largest supercomputer in the UK - and show that the method is capable of solving a (synthetic) LASSO optimization problem with 50 billion variables.
- Published
- 2014
- Full Text
- View/download PDF
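Editor's note: the method above is an accelerated, distributed coordinate descent scheme; as a much simpler serial illustration of the underlying coordinate-descent building block (assumed here for exposition, not taken from the paper), one LASSO update soft-thresholds a single randomly chosen coordinate.

```python
# Editor's illustration: plain serial randomized coordinate descent for
# LASSO, min_x 0.5*||Ax - b||^2 + lam*||x||_1. The paper's method is an
# accelerated, distributed variant; this only shows the basic per-coordinate
# soft-thresholding update.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def rcd_lasso(A, b, lam, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                      # residual, kept up to date incrementally
    L = (A ** 2).sum(axis=0)           # per-coordinate Lipschitz constants ||A_i||^2
    for _ in range(iters):
        i = rng.integers(n)
        g = A[:, i] @ r                # partial gradient along coordinate i
        x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        r += A[:, i] * (x_new - x[i])  # update residual for the single changed coordinate
        x[i] = x_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 500))
x_true = np.zeros(500)
x_true[:10] = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=200)
x_hat = rcd_lasso(A, b, lam=0.1)
print("coordinates above 1e-3:", int((np.abs(x_hat) > 1e-3).sum()))
```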
10. Markov operators on cones and non-commutative consensus
- Author
-
Stéphane Gaubert and Zheng Qu (Centre de Mathématiques Appliquées, École polytechnique (CMAP), CNRS, and Inria Saclay - Île de France)
- Subjects
Discrete mathematics, Matrix (mathematics), Markov kernel, Markov chain, Hilbert space, Markov process, Quantum channel, Contraction (operator theory), Commutative property, Mathematics
- Abstract
The analysis of classical consensus algorithms relies on contraction properties of Markov matrices with respect to the Hilbert semi-norm (infinitesimal version of Hilbert's projective metric) and to the total variation norm. We generalize these properties to the case of operators on cones. This is motivated by the study of “non-commutative consensus”, i.e., of the dynamics of linear maps leaving invariant cones of positive semi-definite matrices. Such maps appear in quantum information (Kraus maps), and in the study of matrix means. We give a characterization of the contraction rate of an abstract Markov operator on a cone, which extends classical formulae obtained by Dœblin and Dobrushin in the case of Markov matrices. In the special case of Kraus maps, we relate the absence of contraction to the positivity of the “zero-error capacity” of a quantum channel. We finally show that a number of decision problems concerning the contraction rate of Kraus maps reduce to finding a rank one matrix in linear spaces satisfying certain conditions and discuss complexity issues.
- Published
- 2013
- Full Text
- View/download PDF
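Editor's note: the classical formula that this entry generalizes is the Doeblin-Dobrushin contraction (ergodicity) coefficient of a row-stochastic matrix $P$, shown below as background; the paper's contribution is its extension to abstract Markov operators on cones.

```latex
% Classical Doeblin-Dobrushin contraction coefficient of a row-stochastic
% matrix P (the finite-dimensional, commutative special case extended in the paper).
\delta(P) = \frac{1}{2}\max_{i,j}\sum_{k}\left|P_{ik}-P_{jk}\right|
          = 1 - \min_{i,j}\sum_{k}\min\!\left(P_{ik},\,P_{jk}\right)
```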
11. The contract net based task allocation algorithm for wireless sensor network
- Author
-
Lin Chen, Zhipeng Gao, Yang Yang, Zheng Qu, and Qiu Xuesong
- Subjects
Traffic flow (computer networking), Task (computing), Key distribution in wireless sensor networks, Brooks–Iyengar algorithm, Computer science, Node (networking), Real-time computing, Mobile wireless sensor network, Energy consumption, Wireless sensor network, Computer network
- Abstract
Since wireless sensor networks have limited resources, it is important to design the task allocation algorithm carefully to reduce energy consumption. The contract net protocol is simple and flexible, so it can meet the needs of wireless sensor networks. In this paper, we introduce an improved C-means algorithm to cluster nodes and thereby decrease the number of bidders, and at the same time adopt the LMS algorithm to predict the bid values of the nodes. The simulation results show that energy consumption and traffic are reduced, and that the bid value more accurately reflects the status of a node when tasks are allocated, which increases the completion rate of network tasks.
- Published
- 2012
- Full Text
- View/download PDF
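Editor's note: the entry above uses the LMS algorithm to predict node bid values. The sketch below is a generic least-mean-squares one-step predictor on a synthetic signal; the signal, filter order, and step size are assumptions for illustration, not values from the paper.

```python
# Editor's illustration: a standard least-mean-squares (LMS) adaptive filter
# predicting the next sample of a sequence from the previous `order` samples.
# The "bid value" signal here is synthetic.
import numpy as np

def lms_predict(signal, order=4, mu=0.01):
    w = np.zeros(order)                 # filter weights, adapted online
    preds = np.zeros_like(signal)
    for t in range(order, len(signal)):
        x = signal[t - order:t][::-1]   # most recent samples first
        preds[t] = w @ x                # one-step-ahead prediction
        err = signal[t] - preds[t]
        w += 2 * mu * err * x           # LMS weight update
    return preds

t = np.arange(500)
bid = 1.0 + 0.3 * np.sin(0.05 * t) + 0.02 * np.random.default_rng(0).normal(size=t.size)
pred = lms_predict(bid)
print("mean absolute prediction error:", np.mean(np.abs(bid[10:] - pred[10:])))
```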
12. Study on multi-item spare parts ordering policies under certain demand
- Author
-
Jian-rong Zhang and Chang-zheng Qu
- Subjects
Operations research, Computer science, Spare part, Joint (building), Materials management, Multi item
- Abstract
The characteristics of spare parts inventory are described, and the use of a multi-item joint ordering policy for spare parts inventory is advocated. Based on an analysis of three policies (independent ordering, unanimous ordering, and joint ordering), a new method is proposed to determine the ordering frequency of different kinds of spare parts in a multi-item inventory. The results show that the aggregation effect of the joint ordering policy is superior to that of the independent ordering and unanimous ordering policies.
- Published
- 2010
- Full Text
- View/download PDF
13. A Comparative Study of Compatibility and Transformation between Probability and Possibility
- Author
-
Fu-Zheng Qu and Li-Ping He
- Subjects
Probability theory, Conditional mutual information, Compatibility (mechanics), Applied probability, Artificial intelligence, Uncertainty quantification, Imprecise probability, Mathematical economics, Probability measure, Mathematics, Possibility theory
- Abstract
In this paper, an annotated survey of approaches to transforming a probability measure into a possibility measure, or conversely, is provided, and the notable properties of these transformations are described. The close relationship between possibility theory and probability theory in quantifying uncertainty is an interesting question that needs further theoretical research. Moreover, the transformations between them are vital for practical applications and are attracting increasing attention. Existing approaches have given little consideration to the basis for such transformations. Hence, the compatibility of possibility and probability theory, viewed through evidence theory, is investigated first; detailed transformations are then presented, and their characteristics are established as a foundation for the transformations. From a systems point of view, this paper also points out that the information semantics inherent in possibility and probability measures may be an elementary factor worth considering. Finally, application areas, technical difficulties, and future work are discussed.
- Published
- 2007
- Full Text
- View/download PDF
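Editor's note: as a concrete example of the kind of transformation this survey discusses, one widely cited probability-to-possibility transformation assigns each outcome the total probability of all outcomes that are no more probable than it. The sketch below is an assumed illustration and not necessarily the specific form analyzed in the paper.

```python
# Editor's illustration: a common probability-to-possibility transformation,
# pi_i = sum of p_j over all j with p_j <= p_i. The most probable outcome
# gets possibility 1, and pi_i >= p_i for every i (consistency).
import numpy as np

def prob_to_poss(p):
    p = np.asarray(p, dtype=float)
    assert np.isclose(p.sum(), 1.0) and (p >= 0).all()
    return np.array([p[p <= pi].sum() for pi in p])

p = np.array([0.5, 0.3, 0.1, 0.1])
print(prob_to_poss(p))   # -> [1.0, 0.5, 0.2, 0.2]
```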
14. Earth Science Mission Concept Design System
- Author
-
Zheng Qu, Meemong Lee, Charles E. Miller, and Annmarie Eldering
- Subjects
Earth system science, Set (abstract data type), Earth's orbit, Computer science, Process (engineering), SIGNAL (programming language), Systems engineering, Satellite, Atmospheric model, Remote sensing
- Abstract
Knowledge about the Earth system (i.e., the physical-chemical processes that describe its evolution) is generated from global measurements of geophysical parameters taken at appropriate mission observation scenarios. Any given instrument transforms the incoming signal into an instrument-dependent output, from which the scientific observable of interest, say a chemical constituent, is eventually retrieved. The ability to simulate high-fidelity incoming geophysical signals, instrument transformations, and retrievals is currently project specific, and this ability is developed after a particular instrument design has been chosen. This seriously limits the process of conceiving the next-generation missions starting from the science questions to be answered, and choosing the appropriate measurement strategy based on the expected accuracy and precision. To enable science-driven mission concept formulation and design validation, the atmospheric scientists at JPL developed a set of parameters for defining the measurement requirements that are verifiable and traceable and a set of metrics for evaluating the mission concepts that are quantifiable and tradable. This paper presents a global atmospheric science mission concept design system that allows scientists to explore a large range of mission concepts integrating the measurement requirement parameters and evaluation metrics.
- Published
- 2007
- Full Text
- View/download PDF
15. Atmospheric correction of Hyperion data and techniques for dynamic scene correction
- Author
-
M. Ferri, B. Kindel, Alexander F. H. Goetz, and Zheng Qu
- Subjects
Data set, Radiance, Calibration, Atmospheric correction, Hyperspectral imaging, Environmental science, Atmospheric model, Atmospheric optics, Multispectral pattern recognition, Remote sensing
- Abstract
Data from Hyperion over several sites in Argentina, Australia and the US were processed with the HATCH-2d atmospheric correction algorithm and compared with ground spectral reflectance measurements as well as with two other correction algorithms, ATREM and ACORN. MNF and water vapor images were used in the evaluations to detect artifacts. A technique has been developed to remove the ubiquitous column striping in the higher MNF bands that is also seen in the water vapor images. It involves normalizing the column means in each MNF band and subsequently transposing the data to create an internally consistent radiance data set. MNF images created from the corrected data set do not exhibit striping.
- Published
- 2003
- Full Text
- View/download PDF
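Editor's note: the destriping technique described above normalizes column means within each MNF band. The sketch below shows a minimal, generic version of that column-mean normalization step; the array layout and the use of the band mean as the target are assumptions for illustration.

```python
# Editor's illustration: remove column striping in a band cube by forcing each
# column's mean to match the band's overall mean. `cube` is assumed to be a
# (bands, rows, cols) array of MNF-transformed values.
import numpy as np

def destripe_column_means(cube):
    out = cube.astype(float)
    for b in range(out.shape[0]):
        band = out[b]
        col_means = band.mean(axis=0)            # per-column mean
        out[b] = band - col_means + band.mean()  # shift every column to the band mean
    return out

cube = np.random.default_rng(0).normal(size=(5, 100, 64))
cube[:, :, 10] += 3.0                            # inject a synthetic stripe in column 10
clean = destripe_column_means(cube)
print(np.abs(clean[0].mean(axis=0) - clean[0].mean()).max())  # ~0: column means equalized
```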
16. Control-oriented modeling of spark assisted compression ignition using a double Wiebe function.
- Author
-
Zheng Qu, Ravi, Nikhil, Oudart, Joel, Doran, Eric, Mittal, Varun, and Kojic, Aleksandar
- Published
- 2015
- Full Text
- View/download PDF
17. Research on Workflow Modeling Based on Object-oriented Colored Petri Net
- Author
-
Jun, Yang, Yong-li, Yu, and Chang-zheng, Qu
- Published
- 2009
- Full Text
- View/download PDF
18. Earth Science Mission Concept Design System.
- Author
-
Lee, M., Miller, C., Eldering, A., and Zheng Qu
- Published
- 2007
- Full Text
- View/download PDF
19. A Comparative Study of Compatibility and Transformation between Probability and Possibility.
- Author
-
Li-Ping He and Fu-Zheng Qu
- Published
- 2007
- Full Text
- View/download PDF
20. The High Accuracy Atmospheric Correction for Hyperspectral Data (HATCH) Model.
- Author
-
Zheng Qu, Kindel, Bruce C., and Goetz, Alexander F.H.
- Subjects
REMOTE sensing, EARTH sciences
- Abstract
The High-accuracy Atmospheric Correction for Hyperspectral Data (HATCH) model was developed for deriving high-quality surface reflectance spectra from remotely sensed hyperspectral imaging data. This paper presents the novel techniques applied in HATCH. An innovative technique, a "smoothness test" for water vapor amount retrieval and for automatic spectral calibration, is developed for HATCH. HATCH also includes an original fast radiative transfer equation solver and a correlated-k gaseous absorption model based on the HITRAN 2000 database. Spectral regions with overlapping absorptions by different gases are handled by precomputing a correlated-k lookup table for various gas mixing ratios. The interaction between multiple scattering and absorption is handled explicitly through the use of the correlated-k method for gaseous absorption. Finally, results are presented for HATCH applied to Airborne Visible/Infrared Imaging Spectroradiometer (AVIRIS) data, together with a comparison of the results from HATCH and the Atmosphere Removal (ATREM) program. The limitations of HATCH include that it assumes a Lambertian surface and does not account for the adjacency effect, and that aerosols are assumed to be spatially homogeneous within a scene.
- Published
- 2003
- Full Text
- View/download PDF
21. HATCH: Results From Simulated Radiances, AVIRIS and Hyperion.
- Author
-
Goetz, Alexander F.H., Kindel, Bruce C., Ferri, Mario, and Zheng Qu
- Subjects
REMOTE sensing, EARTH sciences
- Abstract
The atmospheric correction program High-accuracy Atmospheric Correction for Hyperspectral data (HATCH) has been developed specifically to convert radiance from imaging spectrometer sensors to reflectance on a pixel-by-pixel basis. HATCH was developed to update the previously available model, the Atmosphere Removal model (ATREM). In this paper, we test the HATCH model against model data generated with MODTRAN 4 as well as with Airborne Visible/Infrared Imaging Spectroradiometer (AVIRIS) and Hyperion data for which simultaneous ground reflectances were acquired. We also compare HATCH to the commercially available ACORN model. Results show that HATCH produces smoother-looking spectra than its predecessor ATREM and is less influenced by liquid water in vegetation. Comparisons with MODTRAN were made by calculating above-atmosphere radiances from a hypothetical target with a reflectance of 0.5 at all wavelengths between 400 and 2500 nm and retrieving them with HATCH. The results show a maximum deviation of 10% at several wavelengths, highlighting the differences between the models. Hyperion images contain striping artifacts. We show that optimum removal is obtained by normalizing the means and standard deviations of each column after converting radiance to reflectance with an atmospheric model. HATCH produces water vapor corrections virtually unaffected by vegetation liquid water if the water vapor band at 940 nm is used in the calculation. Retrievals using the 1140-nm band are subject to errors associated with liquid water in vegetation. Retrievals of reflectance from Hyperion radiances require use of the 1140-nm band because the 940-nm band falls in the detector crossover region.
- Published
- 2003
- Full Text
- View/download PDF
22. Atmospheric correction of Hyperion data and techniques for dynamic scene correction.
- Author
-
Goetz, A.F.H., Ferri, M., Kindel, B., and Zheng Qu
- Published
- 2002
- Full Text
- View/download PDF