31,538 results
Search Results
102. Cover 3.
- Subjects
ARTIFICIAL intelligence, DIGITAL Object Identifiers - Published
- 2022
- Full Text
- View/download PDF
103. Real-Time Task Scheduling for Machine Perception in Intelligent Cyber-Physical Systems.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Shao, Huajie, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
CYBER physical systems, ARTIFICIAL intelligence, ARTIFICIAL neural networks, PARTITIONS (Building), RESOURCE allocation - Abstract
This paper explores criticality-based real-time scheduling of neural-network-based machine inference pipelines in cyber-physical systems (CPS) to mitigate the effect of algorithmic priority inversion. We specifically focus on the perception subsystem, an important subsystem feeding other components (e.g., planning and control). In general, priority inversion occurs in real-time systems when computations that are of lower priority are performed together with or ahead of those that are of higher priority. In current machine perception software, significant priority inversion occurs because resource allocation to the underlying neural network models does not differentiate between critical and less critical data within a scene. To remedy this problem, in recent work, we proposed an architecture to partition the input data into regions of different criticality, then formulated a utility-based optimization problem to batch and schedule their processing in a manner that maximizes confidence in perception results, subject to criticality-based time constraints. This journal extension matures the work in several directions: (i) We extend confidence maximization to a generalized utility optimization formulation that accounts for criticality in the utility function itself, offering finer-grained control over resource allocation within the perception pipeline; (ii) we further instantiate and compare two different criticality metrics (distance-based and relative velocity-based) to understand their relative advantages; and (iii) we explore the limitations of the approach, specifically how inaccuracies in criticality-based attention cueing affect performance. All experiments are conducted on the NVIDIA Jetson AGX Xavier platform with a real-world driving dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
104. Artificial Intelligence Aided Automated Design for Reliability of Power Electronic Systems.
- Author
-
Dragicevic, Tomislav, Wheeler, Patrick, and Blaabjerg, Frede
- Subjects
ELECTRONIC systems, ARTIFICIAL intelligence, ARTIFICIAL neural networks, THERMAL stresses, RELIABILITY in engineering, PATTERN matching - Abstract
This paper proposes a new methodology for automated design of power electronic systems realized through the use of artificial intelligence. Existing approaches do not consider the system's reliability as a performance metric or are limited to reliability evaluation for a certain fixed set of design parameters. The method proposed in this paper establishes a functional relationship between design parameters and reliability metrics, and uses them as the basis for optimal design. The first step in this new framework is to create a nonparametric surrogate model of the power converter that can quickly map the variables characterizing the operating conditions (e.g., ambient temperature and irradiation) and design parameters (e.g., switching frequency and dc link voltage) into variables characterizing the thermal stress of a converter (e.g., mean temperature and temperature variation of its devices). This step can be carried out by training a dedicated artificial neural network (ANN) either on experimental or simulation data. The resulting network is named ANN1 and can be deployed as an accurate surrogate converter model. This model can then be used to quickly map the yearly mission profile into a thermal stress profile of any selected device for a large set of design parameter values. The resulting data is then used to train ANN2, which becomes an overall system representation that explicitly maps the design parameters into a yearly lifetime consumption. To verify the proposed methodology, ANN2 is deployed in conjunction with the standard converter design tools on an exemplary grid-connected PV converter case study. This study showed how to find the optimal balance between the reliability and output filter size in the system with respect to several design constraints. This paper is also accompanied by a comprehensive dataset that was used for training the ANNs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
105. Intuitive Adaptive Orientation Control for Enhanced Human–Robot Interaction.
- Author
-
Campeau-Lecours, Alexandre, Cote-Allard, Ulysse, Vu, Dinh-Son, Routhier, Francois, Gosselin, Benoit, and Gosselin, Clement
- Subjects
HUMAN-robot interaction, SURGICAL robots, ARTIFICIAL intelligence, INDUSTRIAL robots, INTERNET surveys - Abstract
Robotic devices can be leveraged to raise the abilities of humans to perform demanding and complex tasks with less effort. Although the first priority of such human–robot interaction (HRI) is safety, robotic devices must also be intuitive and efficient in order to be adopted by a broad range of users. One challenge in the control of such assistive robots is the management of the end-effector orientation, that is not always intuitive for the human operator, especially for neophytes. This paper presents a novel orientation control algorithm designed for robotic arms in the context of HRI. This paper aims at making the control of the robot's orientation easier and more intuitive for the user, both in the fields of rehabilitation (in particular individuals living with upper limb disabilities) and industrial robotics. The performance and intuitiveness of the proposed orientation control algorithm is assessed and improved through two experiments with a JACO assistive robot with 25 able-bodied subjects, an online survey with 117 respondents via the Amazon Mechanical Turk and through two experiments with a UR5 industrial robot with 12 able-bodied subjects. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
106. Enhancing Binocular Depth Estimation Based on Proactive Perception and Action Cyclic Learning for an Autonomous Developmental Robot.
- Author
-
Jin, Yongsik and Lee, Minho
- Subjects
HUMANOID robots, ARTIFICIAL intelligence, HUMAN-robot interaction - Abstract
In humans, perception and action (PA) possess cyclically causal relations. In this paper, we propose a new PA-based cyclic learning framework to autonomously enhance the depth-estimation accuracy of a humanoid robot and perform given behavioral tasks. The proposed method integrates the concepts of sensory invariance-driven action and object-size invariance to autonomously enhance the depth-estimation accuracy. If the depth estimation is reliable, the reinforcement learning framework is used to generate goal-directed actions of a humanoid robot based on a perceived environment. Iterative PA cycles of a robot autonomously refine its depth-estimation. The proposed method is evaluated using a humanoid robot (NAO) with stereo cameras, and the experimental results demonstrate that the proposed framework is effective for autonomously enhancing both the depth-estimation accuracy and the action-generation performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
107. Special Issue on Deep Reinforcement Learning and Adaptive Dynamic Programming.
- Author
-
Zhao, Dongbin, Liu, Derong, Lewis, F. L., Principe, Jose C., and Squartini, Stefano
- Subjects
REINFORCEMENT learning, ARTIFICIAL neural networks - Abstract
In the first issue of Nature 2015, Google DeepMind published a paper “Human-level control through deep reinforcement learning.” Furthermore, in the first issue of Nature 2016, it published a cover paper “Mastering the game of Go with deep neural networks and tree search” and proposed the computer Go program, AlphaGo. In March 2016, AlphaGo beat the world’s top Go player Lee Sedol by 4:1. This becomes a new milestone in artificial intelligence history, the core of which is the algorithm of deep reinforcement learning (RL). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
108. Towards Self-Organizing Swarms of Reconfigurable Self-Aware Robots
- Author
-
Oliver Kosak, Alwin Hoffmann, Alexander Schiendorfer, Hella Seebach, Andreas Angerer, and Constantin Wanninger
- Subjects
Engineering, Human–computer interaction, Multi-agent system, Position paper, Self aware, Robot, Artificial intelligence, Complex adaptive system - Abstract
Designing complex adaptive systems for real world applications is a delicate challenge, especially when support for humans in crucial situations should be achieved. In this position paper, we propose a multi-agent based approach for physically reconfigurable, heterogeneous robot swarms. These can be deployed when there is a need to search, continuously observe and react, e.g. in disaster scenarios. We show first results that validate the feasibility of our approach.
- Published
- 2016
109. Bond graph models for human behavior
- Author
-
Abdelrhman Mahamadi and Shivakumar Sastry
- Subjects
Theoretical computer science, Medical treatment, Computer science, Energy transfer, Short paper, Artificial intelligence, Hydraulic machinery, Focus (optics), Bond graph, Energy exchange, Domain (software engineering) - Abstract
Advances in technology have led to rapid increase in the number and the complexity of engineered systems. Consequently, the need for effective tools and techniques for designing, implementing and analyzing such systems has increased. Bond Graphs were proposed as domain independent approach for modeling dynamic systems in 1960. This approach is a unifying methodology to represent and analyze systems in which there is energy exchange and, hence, one can represent, validate, analyze and generate models for the behavior of electrical, mechanical, chemical, fluid or hydraulic system. In this short paper we describe how to create bond graphs for a system with a focus on modeling Human Behavior. We describe how a model can be used to study the energy transfers in the system and how to obtain a mathematical model for the dynamic behavior of the system from the bond graph model.
- Published
- 2016
110. Combining Unsupervised and Supervised Learning for Discovering Disease Subclasses
- Author
-
Svetlana I. Nihtyanova, Riccardo Bellazzi, Pietro Bosoni, Christopher P. Denton, and Allan Tucker
- Subjects
Connective Tissue Disorder, Supervised learning, Short paper, Disease, Health outcomes, Machine learning, Identification (information), Unsupervised learning, Medicine, Artificial intelligence - Abstract
Diseases are often umbrella terms for many subcategories of disease. The identification of these subcategories is vital if we are to develop personalised treatments that are better focussed on individual patients. In this short paper, we explore the use of a combination of unsupervised learning to identify potential subclasses, and supervised learning to build models for better predicting a number of different health outcomes for patients that suffer from systemic sclerosis, a rare chronic connective tissue disorder - but one that shares many characteristics with other diseases. We explore a number of different algorithms for constructing models that simultaneously predict health outcomes and identify subcategories.
- Published
- 2016
111. Cover 3.
- Subjects
ARTIFICIAL intelligence, DIGITAL Object Identifiers - Published
- 2022
- Full Text
- View/download PDF
112. The Impact of Artificial Intelligence on IoT: The 7th IEEE World Forum on the Internet of Things - WFIoT2021: 20–24 June 2021 // Hilton Riverside Hotel, New Orleans, Louisiana, USA.
- Subjects
ARTIFICIAL intelligence, INTERNET forums, INTERNET of things, PUBLIC sector - Abstract
The IEEE World Forum on the Internet of Things (WFIoT2021) seeks submissions and proposals for original technical papers that address the Internet of Things (IoT), its theoretical and technological building blocks, the applications that drive the growth and evolution of IoT, operational considerations, experimentation, experiences from deployments, and the impacts of IoT on consumers, the public sector, and industrial verticals. The theme for the World Forum is “The Impact of Artificial Intelligence on IoT”. In recognition of the rapid growth of IoT across the world and adoption across almost all verticals we encourage the submission of multi-disciplinary content. Papers should address, but are not limited to, the high-level topics below and a more detailed list found on the WFIoT2021 website that can be downloaded as a PDF document: [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
113. An Overview of Machine Learning Techniques for Radiowave Propagation Modeling.
- Author
-
Seretis, Aristeidis and Sarris, Costas D.
- Subjects
ARTIFICIAL intelligence, WIRELESS communications, MACHINE learning - Abstract
We give an overview of recent developments in the modeling of radiowave propagation, based on machine learning (ML) algorithms. We identify the input and output specification and the architecture of the model as the main challenges associated with ML-driven propagation models. Relevant papers are discussed and categorized based on their approach to each of these challenges. Emphasis is given on presenting the prospects and open problems in this promising and rapidly evolving area. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
114. From HEVC to VVC: The First Development Steps of a Practical Intra Video Encoder.
- Author
-
Viitanen, Marko, Sainio, Joose, Mercat, Alexandre, Lemmetti, Ari, and Vanne, Jarno
- Subjects
VIDEO coding, ARTIFICIAL intelligence, STREAMING media, VIDEOS - Abstract
Versatile Video Coding (VVC/H.266) is an emerging successor to the widespread High Efficiency Video Coding (HEVC/H.265) and is shown to double the coding efficiency for the same subjective visual quality. Nevertheless, VVC still adopts the similar hybrid video coding scheme as HEVC and thereby sets the scene for reusing many HEVC coding tools and techniques as is or with minor modifications. This paper explores the feasibility of developing a practical software VVC intra encoder from our open-source Kvazaar HEVC encoder. The outcome of this work is called uvg266 VVC intra encoder that is distributed under the same permissive 3-clause BSD license as Kvazaar. uvg266 inherits the optimized coding flow of Kvazaar and all upgradable Kvazaar intra coding tools, but it also introduces basic VVC intra coding tools not available in HEVC. To the best of our knowledge, this is the first work to describe the implementation details of upgrading an HEVC encoder to a VVC encoder. The rapid development time with promising coding performance make our proposal a viable approach over the encoder development from scratch. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
115. IEEE Transactions on Robotics information for authors.
- Subjects
AUTHORS, PUBLICATIONS, ARTIFICIAL intelligence, ROBOT kinematics, MECHATRONICS, FUZZY systems - Published
- 2011
- Full Text
- View/download PDF
116. IEEE Transactions on Robotics information for authors.
- Subjects
AUTHOR-publisher relations, ROBOTICS, ROBOT dynamics, INDUSTRIAL robots, KINEMATICS, ARTIFICIAL intelligence, COPYRIGHT, MANUSCRIPT preparation (Authorship) - Published
- 2011
- Full Text
- View/download PDF
117. IEEE Transactions on Neural Networks information for authors.
- Subjects
ARTIFICIAL neural networks, PUBLISHING, COMPUTER software, COMPUTER input-output equipment, ARTIFICIAL intelligence, SELF-organizing systems, ALGORITHMS - Published
- 2011
- Full Text
- View/download PDF
118. Guest Editorial Special Section on the IEEE International Conference on Microelectronic Test Structures.
- Author
-
Mita, Yoshio and Smith, Stewart
- Subjects
MICROELECTRONICS, SEMICONDUCTOR devices, INFORMATION society, ARTIFICIAL intelligence, CONFERENCES & conventions - Abstract
Semiconductor Devices have been, and continue to be, the core of the information society. Together with tiny and inexpensive sensors, huge amounts of physical data will be collected in cyber-systems and analyzed by artificial intelligence. In such cyber-physical system, large-scale, low-power, and reliable semiconductor devices should be integrated with sensors and actuators. In addition to the classical trend of semiconductors, the engineers of mid-2010s must explore many materials for “new functionality,” whose impact on standard LSI system is still unclear. Recent integration technology such as chip-on-chip (3-D stacked IC) increases complexity of the device fabrication and analysis. It is therefore clear that everyone must seek for reliable and productive fabrication to achieve satisfactory yields, and a key component in addressing these issues is the characterization of the technology. Test structures, as well as test methods, play a major role in technology characterization and covers elements such as feature size measurement, parameter extraction, fluctuation assessment in transistors, stability measurement, and analogue parameter characterization. For thirty years the IEEE has annually sponsored the IEEE International Conference on Microelectronic Test Structures to discuss cutting edge methods in characterization. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
119. A Mixed-Pruning Based Framework for Embedded Convolutional Neural Network Acceleration.
- Author
-
Chang, Xuepeng, Pan, Huihui, Lin, Weiyang, and Gao, Huijun
- Subjects
CONVOLUTIONAL neural networks, FIELD programmable gate arrays, PHYSIOLOGICAL effects of acceleration, ARTIFICIAL intelligence, PROBLEM solving, SPACE-time codes, DATA warehousing - Abstract
Convolutional neural networks (CNN) have been proved to be an effective method in the field of artificial intelligence (AI), and large-scale deploying CNN to embedded devices, no doubt, will greatly promote the development and application of AI into the practical industry. However, mainly due to the space-time complexity of CNN, computing power, memory bandwidth and flexibility are performance bottlenecks. In this paper, a framework containing model compression and hardware acceleration is proposed to solve the above problems. This framework consists of a mixed pruning method, data storage optimization for efficient memory utilization and an accelerator for mapping CNN on field programmable gate array (FPGA). The mixed pruning method is used to compress the model, and data bit-width is reduced to 8-bit by data quantization. Accelerator based on FPGA makes it flexible, configurable and efficient for CNN implementation. The model compression is evaluated on NVIDIA RTX2080Ti, and the results illustrate that the VGG16 is compressed by 30× and the fully convolutional network (FCN) is compressed by 11× within 1% accuracy loss. The compressed model is deployed and accelerated on ZCU102, which is up to 1.7× and 24.5× better in energy efficiency compared with RTX2080Ti and Intel i7 7700. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
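The mixed pruning and 8-bit quantization summarized in entry 119 can be illustrated with a minimal sketch. Note this is a generic magnitude-pruning and symmetric per-tensor quantization example of my own construction, not the paper's actual mixed-pruning method; all names and thresholds are illustrative assumptions.

```python
# Hypothetical sketch: magnitude pruning followed by symmetric 8-bit
# quantization, in the spirit of the compression pipeline in entry 119.

def prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of the weights."""
    ranked = sorted(weights, key=abs)
    cutoff = abs(ranked[int(len(ranked) * sparsity)])
    return [0.0 if abs(w) < cutoff else w for w in weights]

def quantize_8bit(weights):
    """Map floats to int8 codes using a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero tensor
    return [round(w / scale) for w in weights], scale

w = [0.9, -0.05, 0.4, 0.01, -0.7]
pruned = prune(w, sparsity=0.4)            # the two smallest weights become 0
codes, scale = quantize_8bit(pruned)       # int8 codes plus dequantization scale
recovered = [c * scale for c in codes]     # approximate reconstruction
```

In a real deployment the pruned, quantized tensors are what get streamed to the FPGA accelerator; the float weights are only needed at training time.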
120. A simple sheet bending recognition for augmenting a two-dimensional marker-based response analyzer
- Author
-
Motoki Miura and Tei Sui
- Subjects
Paper sheet, Spectrum analyzer, Bending (metalworking), Computer science, Computer vision, Artificial intelligence, Fiducial marker, Mobile device, Simulation - Abstract
Conventional response analyzers use various student devices such as clickers and mobile devices, which require the additional burdens of charging and managing. Fiducial marker-based response analyzers have been proposed to relieve these burdens. The IDs and rotations of the fiducial markers are used to identify students and their respective answers. The marker-based approach is straightforward, but transferrable data are limited to the ID and the rotations. To enhance the transferrable data, we introduce a method of extracting curves of the marker sheet. Since a paper sheet is flexible, students can control the shape of the sheet intuitively. We have implemented our method by modifying a conventional fiducial marker recognizer and confirmed its effectiveness.
Presented at the 2015 International Conference on Informatics, Electronics and Vision (ICIEV), Fukuoka, Japan, June 15–18, 2015
- Published
- 2015
121. Real-Time Constant Objective Quality Video Coding Strategy in High Efficiency Video Coding.
- Author
-
Cai, Qi, Chen, Zhifeng, Wu, Dapeng Oliver, and Huang, Bo
- Subjects
VIDEO coding, VIDEO surveillance, ARTIFICIAL intelligence, MACHINE performance, ALGORITHMS, STREAMING media - Abstract
As video data are occupying an increasingly more significant portion of global data traffic, video communication has become an indispensable component for most multimedia applications. As an enabling technology of video communication, although video coding is well standardized, the strategy to control video codec is highly customized to applications. The consistency of video quality is being paid more attention in many emerging applications. For example, in video surveillance, the quality of video frames should be stable in order to ensure the performance of machine intelligence algorithm, such as object detection precision. In this paper, we focus on achieving constant objective reconstruction quality in the process of video coding. To achieve certain rate-distortion performance, bit rate and distortion metric are usually modeled as a function of video content and control parameters of codec. For content modeling in existing work, there is still room for improvement, including the design of more efficient content feature, the compensation for assumption about constant RD characteristics among consecutive frames, and the adjustment of Lagrangian multiplier λ according to content property. The main contributions of this paper are: 1) a robust content adaptive model for residual bit rate modeling based on content statistics called mean absolute partial transformed difference (MAPTD); 2) a content-related header bit rate modeling; 3) preprocessing scheme for robust content feature estimation at scene change; 4) a distortion model consistent with local content; and 5) content adaptive λ determination. The experimental results show that our constant quality control strategy can achieve superior performance compared with the state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
122. Editorial Special Issue on “Memory Devices and Technologies for the Next Decade”.
- Author
-
Compagnoni, Christian Monzio, Kang, Jinfeng, Shih, Yen-Hao, Du, Pei-Ying Penny, Kim, Tae-Hun, Mouli, Chandra, Yang, Joshua, and Roy, Kaushik
- Subjects
COMPUTER storage devices, FLASH memory, 5G networks, ARTIFICIAL intelligence, INTERNET of things, BIG data - Abstract
The expected explosive growth of big data, the Internet of Things, artificial intelligence, and 5G mobile networks will not only challenge but also offer new opportunities to solid-state memories in the next decade. Mainstream technologies such as the 3-D NAND Flash and the 1T-1C DRAM technology will have to keep evolving to prolong their scaling trends and maintain their undisputed leadership in the standalone memory arena. At the same time, other memory technologies may take advantage of the rise of the new market applications, likely changing the balance among cost, performance, and reliability. Phase-change memories (PCM), magnetoresistive random-access memories (MRAM), resistive random-access memories (ReRAM), and ferroelectric memories have the potential to play a role both in the embedded and in the standalone memory market. However, all of them will need innovations to fully demonstrate their long-term performance. Finally, all the memory technologies will have to compete to prove the benefits of new applications and solutions, such as the mixing of storage and computing with in-memory computing, neuromorphic computing, and nonvolatile logic. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
123. LDEF Formalism for Agent-Based Model Development.
- Author
-
Bae, Jang Won and Moon, Il-Chul
- Subjects
MULTIAGENT systems, COMPUTER software reusability - Abstract
As agent-based models (ABMs) are applied to various domains, the efficiency of model development has become an important issue in its applications. The current practice is that many models are developed from scratch, while they could have been built by reusing existing models. Moreover, when models need reconfiguration, they often need to be rebuilt significantly. These problems reduce the development efficiency and ultimately damage the efficacy of ABM. This paper partially resolves the challenges of model reusability from the systems engineering approach. Specifically, we propose a formalism-based ABM development and demonstrate its potential to promote model reuses. Our formalism, named large-scale, dynamic, extensible, and flexible (LDEF) formalism, encourages the building of a larger model by the composition of modularly developed components. Also, LDEF is tailored to the ABM contexts to represent the agent’s action procedure and support the dynamic changes of their interactions. This paper shows that LDEF improves the model reusability in ABM development through its practical examples and theoretical discussions. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
124. Spectrum Sensing Performance in Cognitive Radio Networks With Multiple Primary Users.
- Author
-
Furtado, Antonio, Irio, Luis, Oliveira, Rodolfo, Bernardo, Luis, and Dinis, Rui
- Subjects
COGNITIVE radio, RADIO interference, ARTIFICIAL intelligence, FALSE alarms, RADIO frequency allocation, PARAMETERIZATION - Abstract
Radio spectrum sensing (SS) has been an active topic of research over the past years due to its importance to cognitive radio (CR) systems. However, in CR networks (CRNs) with multiple primary users (PUs), the secondary users (SUs) can often detect PUs that are located outside the sensing range, due to the level of the aggregated interference caused by the PUs. This effect, known as spatial false alarm (SFA), degrades the performance of CRNs because it decreases the SUs' medium access probability. This paper characterizes the SFA effect in a CRN, identifying possible actions to attenuate it. Adopting energy-based sensing (EBS) in each SU, this paper starts to characterize the interference caused by multiple PUs located outside a desired sensing region. The interference formulation is then used to write the probabilities of detection and false alarm, and closed-form expressions are presented and validated through simulation. The first remark to be made is that the SFA can be neglected, depending on the path-loss factor and the number of samples collected by the energy detector to decide the spectrum's occupancy state. However, it is shown that by increasing the number of samples needed to increase the sensing accuracy, the SUs may degrade their throughput, namely, if SUs are equipped with a single radio that is sequentially used for sensing and transmission (split-phase operation). Assuming this scenario, this paper ends by providing a bound for the maximum throughput achieved in a CRN with multiple active PUs and for a given level of PUs' detection inside the SUs' sensing region. The results presented in this paper show the impact of path loss and EBS parameterization on SUs' throughput and are particularly useful to guide the design and parameterization of multihop CRNs, including future ad hoc CRNs considering multiple PUs. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
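The energy-based sensing (EBS) decision described in entry 124 — sum the energy of N samples and compare against a threshold — can be sketched with a small noise-only Monte Carlo estimate of the false-alarm probability. The Gaussian noise model, sample counts, and thresholds here are illustrative assumptions, not the paper's parameterization.

```python
import random

def energy_statistic(samples):
    """Energy detector test statistic: sum of squared samples."""
    return sum(x * x for x in samples)

def false_alarm_rate(n_samples, threshold, trials=5000, seed=1):
    """Noise-only Monte Carlo: fraction of trials whose energy exceeds threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        noise = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
        if energy_statistic(noise) > threshold:
            hits += 1
    return hits / trials

# Unit-variance noise gives an energy with mean n_samples, so a threshold
# well above n_samples yields few false alarms, and one well below yields many.
r_high = false_alarm_rate(50, 80.0)
r_low = false_alarm_rate(50, 30.0)
```

This also makes the paper's trade-off concrete: raising `n_samples` sharpens the statistic and lowers false alarms for a given relative threshold, but in a split-phase radio that sensing time comes out of the transmission budget.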
125. Reducing Under-Frequency Load Shedding in Isolated Power Systems Using Neural Networks. Gran Canaria: A Case Study.
- Author
-
Padron, S., Hernandez, M., and Falcon, A.
- Subjects
ARTIFICIAL neural networks, ELECTRIC power systems, ENERGY storage, PULSED power systems, ELECTRIC potential, DIRECT currents, VOLTAGE-frequency converters - Abstract
Small isolated power systems often experience generator outages, which are responsible for the activation of the under-frequency load shedding scheme with the corresponding negative impact on electricity consumers and, hence, market loss. There are three main causes of this problem: the power system's low inertia, the speed governors' low capacity, and a poor size-ratio between generator and system. The most extensive research line in this area is focused on the optimization of the load shedding scheme, which is a partial solution. Another research line is presented to solve the problem from the point of view of the system operator. This paper proposes an online method to predict and correct possible load shedding by redistributing load dispatching. This proposal uses artificial intelligence techniques, in particular neural networks, and a special-purpose power system simulator. In order to evaluate the proposal, the achieved solution is applied to a real case study: the island of Gran Canaria. This application shows the improvement that might be achieved by implementing this simple method. The method proposed in this paper is strongly recommended for regions that have suitable geographical sites as well as energy problems similar to those of the Canary Islands (see tech. rep. “Map of the Canary Islands Power Systems” by Red Electrica de Espana). [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
126. Artificial Intelligence in Industrial Systems.
- Author
-
Palhares, R. M., Yuan, Y., and Wang, Q.
- Subjects
ARTIFICIAL intelligence, FEATURE extraction - Published
- 2019
- Full Text
- View/download PDF
127. AI Algorithm-Based Two-Stage Optimal Design Methodology of High-Efficiency CLLC Resonant Converters for the Hybrid AC–DC Microgrid Applications.
- Author
-
Zhao, Bin, Zhang, Xin, and Huang, Jingjing
- Subjects
REACTIVE power, AIR gap (Engineering), ARTIFICIAL intelligence, POWER density, ELECTRIC inductance, MAGNETIC flux leakage - Abstract
Thanks to the advantages of high power density and the capacity of bidirectional power transfer, the CLLC resonant converter is widely used in the hybrid ac–dc microgrid as a dc transformer to interlink the ac and dc bus. Since the voltages of ac and dc bus are controlled by the energy management system, the CLLC resonant converter operates under open-loop condition, which means the switching frequency and duty cycle are fixed. As a result, in the hybrid ac–dc microgrid applications, for the CLLC converter, the main concern is not the voltage regulation but the conversion efficiency. This paper focuses on the total power loss optimization and the magnetic design of the CLLC resonant converter based on artificial intelligence (AI) algorithm. In order to optimize the total power loss, an AI algorithm-based two-stage optimal design method is proposed. In the first stage, the total power loss, including the driving loss, turn-off loss, conduction loss of the switches, the power loss of the resonant capacitances, and copper and core loss of the transformer are optimized by the proposed AI algorithm, GA+PSO, and the optimal parameters, including the leakage inductances (Lr1 and Lr2), magnetizing inductance (Lm), and resonant capacitances (Cr1 and Cr2) are derived. In the second stage, the optimal leakage inductances and magnetizing inductance are realized by setting proper distance between the primary winding and the secondary winding (dw), and the thickness of the air gap (da). As for the magnetic design, in this paper, the leakage inductances of a planar transformer are used as the resonant inductances. The equations of dw and da to achieve the optimal leakage inductances and magnetizing inductance are derived. Both the proposed optimal design method and the equations of dw and da are validated by simulations and experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
128. A Framework for Automated Cellular Network Tuning With Reinforcement Learning.
- Author
-
Mismar, Faris B., Choi, Jinseok, and Evans, Brian L.
- Subjects
REINFORCEMENT learning ,NETWORK performance ,CONFIGURATION management - Abstract
Tuning cellular network performance against always-occurring wireless impairments can dramatically improve reliability to end users. In this paper, we formulate cellular network performance tuning as a reinforcement learning (RL) problem and provide a solution to improve the performance for indoor and outdoor environments. By leveraging the ability of $Q$-learning to estimate future performance improvement rewards, we propose two algorithms: 1) closed loop power control (PC) for downlink voice over LTE (VoLTE) and 2) self-organizing network (SON) fault management. The VoLTE PC algorithm uses RL to adjust the indoor base station transmit power so that the signal-to-interference plus noise ratio (SINR) of a user equipment (UE) meets the target SINR. It does so without the UE having to send power control requests. The SON fault management algorithm uses RL to improve the performance of an outdoor base station cluster by resolving faults in the network through configuration management. Both algorithms exploit measurements from the connected users, wireless impairments, and relevant configuration parameters to solve a non-convex performance optimization problem using RL. Simulation results show that our proposed RL-based algorithms outperform today's industry standards in realistic cellular communication environments. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
129. Editorial.
- Author
-
Roska, Tamás
- Subjects
RESEARCH ,PERIODICALS ,ELECTRONIC data processing ,ARTIFICIAL intelligence ,COMPUTERS - Abstract
This article presents information about the journal "IEEE Transactions on Circuits and Systems," as of December 2003. From January 2002 to October 2003, the journal received about 1200 research papers. The number of papers per working day has increased continuously. The rejection ratio for the papers arriving after January 2002, with the review completed, was about 70%. In spite of significant improvement and the efforts of more than 40 associate editors, 9% of the papers received were without a first review after 145 days. The author says that new areas introduced by the journal have not attracted many papers. Therefore, those subject areas have been more precisely defined in the journal. Computer and computing aspects are influencing everyday work, including design and manufacturing.
- Published
- 2003
- Full Text
- View/download PDF
130. Pushing the Limits of Deep CNNs for Pedestrian Detection.
- Author
-
Hu, Qichang, Wang, Peng, Shen, Chunhua, van den Hengel, Anton, and Porikli, Fatih
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS ,IMAGE processing ,BIG data - Abstract
Compared with other applications in computer vision, convolutional neural networks (CNNs) have underperformed on pedestrian detection. A breakthrough was made very recently using sophisticated deep CNN (DCNN) models, with a number of handcrafted features or an explicit occlusion-handling mechanism. In this paper, we show that by reusing the convolutional feature maps of a DCNN model as image features to train an ensemble of boosted decision models, we are able to achieve the best reported accuracy without using specially designed learning algorithms. We empirically identify and disclose important implementation details. We also show that pixel labeling may be simply combined with a detector to boost the detection performance. By adding complementary handcrafted features such as optical flow, the DCNN-based detector can be further improved. We advance the state-of-the-art results by lowering the log-average miss rate from 11.7% to 8.9% on the Caltech data set and from 11.2% to 8.6% on the Inria data set. We also achieve a comparable result to state-of-the-art approaches on the KITTI data set. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
131. Learning-Based Memory Allocation Optimization for Delay-Sensitive Big Data Processing.
- Author
-
Tsai, Linjiun, Franke, Hubertus, Li, Chung-Sheng, and Liao, Wanjiun
- Subjects
BIG data ,MACHINE learning ,ARTIFICIAL intelligence ,DISTRIBUTED computing ,ELECTRONIC data processing - Abstract
Optimal resource provisioning is essential for scalable big data analytics. However, it has been difficult to accurately forecast the resource requirements before the actual deployment of these applications as their resource requirements are heavily application and data dependent. This paper identifies the existence of effective memory resource requirements for most of the big data analytic applications running inside JVMs in distributed Spark environments. Provisioning memory less than the effective memory requirement may result in rapid deterioration of the application execution in terms of its total execution time. A machine learning-based prediction model is proposed in this paper to forecast the effective memory requirement of an application given its service level agreement. This model captures the memory consumption behavior of big data applications and the dynamics of memory utilization in a distributed cluster environment. With an accurate prediction of the effective memory requirement, it is shown that up to 60 percent savings of the memory resource is feasible if an execution time penalty of 10 percent is acceptable. The accuracy of the model is evaluated on a physical Spark cluster with 128 cores and 1TB of total memory. The experiment results show that the proposed solution can predict the minimum required memory size for given acceptable delays with high accuracy, even if the behavior of target applications is unknown during the training of the model. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
132. Hall Thrusters With Permanent Magnets: Current Solutions and Perspectives.
- Author
-
Lorello, Ludovico, Levchenko, Igor, Bazaka, Kateryna, Keidar, Michael, Luxiang Xu, Huang, S., Lim, J. W. M., and Shuyan Xu
- Subjects
TECHNOLOGICAL innovations ,WIRELESS communications ,ARTIFICIAL intelligence ,ENERGY consumption ,PERMANENT magnets - Abstract
We present a focused review of selected design solutions for the permanent magnet-based magnetic circuitry of Hall-type thrusters, with an emphasis on their relevance to miniaturized devices potentially suitable for application in CubeSats and other types of small satellites. Coaxial, cylindrical, and cusped designs of Hall-type thrusters are considered. The issues related to the influence of magnetic configurations on channel wear are also addressed. This paper also outlines the state of the art in high-temperature permanent magnets and offers perspectives on the further development of miniaturized Hall-type thrusters. Several nontrivial design solutions are considered, and schematics of the ones that are potentially promising for the reduction of wear and damage are examined. Overall, this paper demonstrates the usability and several significant advantages of Hall thrusters with permanent magnet systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
133. Overview of Hall Electric Propulsion in China.
- Author
-
Ding Yongjie, Li Hong, Wei Liqiu, Hu Yanlin, Shen Yan, Liu Hui, Ning Zhongxi, Mao Wei, and Yu Daren
- Subjects
COLLOID thrusters ,MAGNETIC properties ,PROPULSION systems ,MAGNETIC fields ,ARTIFICIAL intelligence - Abstract
The Hall thruster is an attractive type of electric propulsion being developed to replace the chemical propulsion deployed in many tasks on satellites. The research on Hall thrusters in China has gradually progressed from basic theory to engineering applications, especially with the experience of on-orbit flight tests in low earth and geostationary equatorial orbits. This paper discusses the progress of research on Hall thrusters in China in recent years. This includes the beam focusing, scaling, high specific impulse, long lifetime, and effective magnetic field excitation techniques, as well as the stabilization technique of low-frequency oscillation, along with other types of Hall thrusters, such as cylindrical Hall, double-stage Hall, cusped field, nested-channel Hall, and anode layer Hall thrusters. In conclusion, this paper discusses future programs focusing on the development of thrusters, their application prospects, and their use in space. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
134. Optimal One-Wafer Cyclic Scheduling of Time-Constrained Hybrid Multicluster Tools via Petri Nets.
- Author
-
Yang, FaJun, Wu, NaiQi, Qiao, Yan, and Zhou, MengChu
- Subjects
PETRI nets ,ARTIFICIAL intelligence - Abstract
Scheduling a multicluster tool with wafer residency time constraints is highly challenging yet important in ensuring high productivity of wafer fabrication. This paper presents a method to find an optimal one-wafer cyclic schedule for it. A Petri net is developed to model the dynamic behavior of the tool. By this model, a schedule of the system is analytically expressed as a function of robots’ waiting time. Based on this model, this paper presents the necessary and sufficient conditions under which a feasible one-wafer cyclic schedule exists. Then, it gives efficient algorithms to find such a schedule that is optimal. These algorithms require determining the robots’ waiting time via simple calculation and thus are efficient. Examples are given to show the application and effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
135. Learning to Compose and Reason with Language Tree Structures for Visual Grounding.
- Author
-
Hong, Richang, Liu, Daqing, Mo, Xiaoyu, He, Xiangnan, and Zhang, Hanwang
- Subjects
NATURAL languages ,ARTIFICIAL intelligence ,TREES ,CONTINUOUS functions ,LANGUAGE & languages - Abstract
Grounding natural language in images, such as localizing “the black dog on the left of the tree”, is one of the core problems in artificial intelligence, as it needs to comprehend the fine-grained language compositions. However, existing solutions merely rely on the association between the holistic language features and visual features, while neglecting the nature of composite reasoning implied in the language. In this paper, we propose a natural language grounding model that can automatically compose a binary tree structure for parsing the language and then perform visual reasoning along the tree in a bottom-up fashion. We call our model RvG-Tree: Recursive Grounding Tree, which is inspired by the intuition that any language expression can be recursively decomposed into two constituent parts, and the grounding confidence score can be recursively accumulated by calculating their grounding scores returned by the two sub-trees. RvG-Tree can be trained end-to-end by using the Straight-Through Gumbel-Softmax estimator that allows the gradients from the continuous score functions passing through the discrete tree construction. Experiments on several benchmarks show that our model achieves state-of-the-art performance with more explainable reasoning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
136. Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning.
- Author
-
Zhang, Yifan, Zhao, Peilin, Wu, Qingyao, Li, Bin, Huang, Junzhou, and Tan, Mingkui
- Subjects
PORTFOLIO management (Investments) ,DEEP learning ,REWARD (Psychology) ,ARTIFICIAL intelligence ,TRANSACTION costs ,REINFORCEMENT learning - Abstract
Portfolio Selection is an important real-world financial task and has attracted extensive attention in artificial intelligence communities. This task, however, has two main difficulties: (i) the non-stationary price series and complex asset correlations make the learning of feature representation very hard; (ii) the practicality principle in financial markets requires controlling both transaction and risk costs. Most existing methods adopt handcrafted features and/or consider no constraints on the costs, which may make them perform unsatisfactorily and fail to control both costs in practice. In this paper, we propose a cost-sensitive portfolio selection method with deep reinforcement learning. Specifically, a novel two-stream portfolio policy network is devised to extract both price series patterns and asset correlations, while a new cost-sensitive reward function is developed to maximize the accumulated return and constrain both costs via reinforcement learning. We theoretically analyze the near-optimality of the proposed reward, which shows that the growth rate of the policy regarding this reward function can approach the theoretical optimum. We also empirically evaluate the proposed method on real-world datasets. Promising results demonstrate the effectiveness and superiority of the proposed method in terms of profitability, cost-sensitivity and representation abilities. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
137. 2021 Index IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 43.
- Subjects
ARTIFICIAL intelligence ,SUBJECT headings ,INDEXES - Abstract
This index covers all technical items - papers, correspondence, reviews, etc. - that appeared in this periodical during the year, and items from previous years that were commented upon or corrected in this year. Departments and other items may also be covered if they have been judged to have archival value. The Author Index contains the primary entry for each item, listed under the first author's name. The primary entry includes the co-authors' names, the title of the paper or other item, and its location, specified by the publication abbreviation, year, month, and inclusive pagination. The Subject Index contains entries describing the item under all appropriate subject headings, plus the first author's name, the publication abbreviation, month, and year, and inclusive pages. Note that the item title is found only under the primary entry in the Author Index. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
138. Coordinated Wide-Area Damping Control Using Deep Neural Networks and Reinforcement Learning.
- Author
-
Gupta, Pooja, Pal, Anamitra, and Vittal, Vijay
- Subjects
REINFORCEMENT learning ,STATIC VAR compensators ,LINEAR matrix inequalities ,DEEP learning ,ARTIFICIAL intelligence - Abstract
This paper proposes the design of two coordinated wide-area damping controllers (CWADCs) for damping low frequency oscillations (LFOs), while accounting for the uncertainties present in the power system. The controllers, based on a Deep Neural Network (DNN) and Deep Reinforcement Learning (DRL), respectively, coordinate the operation of different local damping controls such as power system stabilizers (PSSs), static VAr compensators (SVCs), and supplementary damping controllers for DC lines (DC-SDCs). The DNN-CWADC learns to make control decisions using supervised learning; its training dataset consists of polytopic controllers designed with the help of linear matrix inequality (LMI)-based mixed $H_2/H_\infty$ optimization. The DRL-CWADC learns to adapt to the system uncertainties based on its continuous interaction with the power system environment by employing an advanced version of the state-of-the-art deep deterministic policy gradient (DDPG) algorithm referred to as bounded exploratory control-based DDPG (BEC-DDPG). The studies performed on a 33-machine, 127-bus equivalent model of the Western Electricity Coordinating Council (WECC) system embedded with different types of damping controls demonstrate the effectiveness of the proposed CWADCs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
139. Learning and Guaranteed Cost Control With Event-Based Adaptive Critic Implementation.
- Author
-
Wang, Ding and Liu, Derong
- Subjects
ARTIFICIAL neural networks ,MACHINE learning ,ARTIFICIAL intelligence - Abstract
This paper focuses on the event-triggered guaranteed cost control design of nonlinear systems via a self-learning technique. In brief, an event-based guaranteed cost control strategy for nonlinear systems subject to matched uncertainties is developed, thereby balancing the guaranteed cost performance against the reality of limited communication resources. The original control design is transformed into an optimal control problem with an event-based mechanism, where the relationship of the guaranteed cost performance to the time-based formulation is discussed. A critic neural network is constructed for implementing the event-based optimal control design with a stability guarantee. Simulation experiments are carried out to verify the theoretical results in detail. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
140. Spatial and Temporal Downsampling in Event-Based Visual Classification.
- Author
-
Cohen, Gregory, Afshar, Saeed, Orchard, Garrick, Tapson, Jonathan, Benosman, Ryad, and van Schaik, Andre
- Subjects
SPATIO-temporal variation ,ARTIFICIAL intelligence ,IMAGE processing - Abstract
As the interest in event-based vision sensors for mobile and aerial applications grows, there is an increasing need for high-speed and highly robust algorithms for performing visual tasks using event-based data. As event rate and network structure have a direct impact on the power consumed by such systems, it is important to explore the efficiency of the event-based encoding used by these sensors. The work presented in this paper represents the first study solely focused on the effects of both spatial and temporal downsampling on event-based vision data and makes use of a variety of data sets chosen to fully explore and characterize the nature of downsampling operations. The results show that both spatial downsampling and temporal downsampling produce improved classification accuracy and, additionally, a lower overall data rate. This finding is particularly relevant for bandwidth- and power-constrained systems. For a given network containing 1000 hidden layer neurons, the spatially downsampled systems achieved a best case accuracy of 89.38% on N-MNIST as opposed to 81.03% with no downsampling at the same hidden layer size. On the N-Caltech101 data set, the downsampled system achieved a best case accuracy of 18.25%, compared with 7.43% achieved with no downsampling. The results show that downsampling is an important preprocessing technique in event-based visual processing, especially for applications sensitive to power consumption and transmission bandwidth. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
141. Guest Editorial Introduction to the Special Section on Fog/Edge Computing for Autonomous and Connected Cars.
- Author
-
Pang, Ai-Chun, Au, Edward, Ai, Bo, and Zhuang, Weihua
- Subjects
CLOUD computing ,ARTIFICIAL intelligence ,TECHNOLOGICAL innovations ,INTERNET of things ,MACHINE learning - Abstract
The seven papers in this special section focus on fog computing for autonomous and connected automobiles. With advances in wireless communications, machine learning, and sensing technologies, autonomous and connected cars are becoming a reality. Many potential applications (e.g., augmented reality for information providing through a heads-up display and accident avoidance in autopilot) require significant computing power to process data generated by vehicle sensors for near-real-time responses. Upgrading on-board computers is one option at relatively high cost. Another solution is cloud computing, where the traditional centralized approach suffers from long latency and unstable connections in a vehicular environment and may congest network backhaul with a large amount of data. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
142. Introduction to the Special Section on Nature-Inspired Algorithms for EMC and Signal and Power Integrity Applications.
- Author
-
Liu, E.-X. and Orlandi, A.
- Subjects
ARTIFICIAL intelligence ,TECHNOLOGICAL innovations ,ALGORITHMS ,ELECTROMAGNETIC compatibility ,ARTIFICIAL neural networks ,WIRELESS communications - Abstract
The papers in this special section are focused on one aspect of the current technological wave. As we all know, nature-inspired (NI) computing and algorithms play an ever-increasing role in problem solving by way of optimization, machine intelligence, data mining, and resource management. Nature has evolved over millions of years under a variety of harsh and severe environments and, thus, provides a rich source of inspiration for designing algorithms and approaches to tackle real-world challenging EMC and Signal & Power Integrity (SI/PI) problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
143. A Conceptual Structure for Computer Vision
- Author
-
Sidney Fels, Gregor Miller, and Steve Oldridge
- Subjects
Vision science ,Stereo cameras ,Computer science ,Machine vision ,Decomposition (computer science) ,Position paper ,Computer vision ,Artificial intelligence ,Axiom ,Scope (computer science) ,Abstraction (linguistics) - Abstract
The research presented in this paper represents several novel conceptual contributions to the computer vision literature. In this position paper, our goal is to define the scope of computer vision analysis and discuss a new categorisation of the computer vision problem. We first provide a novel decomposition of computer vision into base components which we term the axioms of vision. These are used to define researcher-level and developer-level access to vision algorithms, in a way which does not require expert knowledge of computer vision. We discuss a new line of thought for computer vision by basing analyses on descriptions of the problem instead of in terms of algorithms. From this an abstraction can be developed to provide a layer above algorithmic details. This is extended to the idea of a formal description language which may be automatically interpreted, thus allowing those not familiar with computer vision techniques to utilise sophisticated methods.
- Published
- 2011
144. On learning with imperfect representations
- Author
-
Peter Stone and Shivaram Kalyanakrishnan
- Subjects
Coping (psychology) ,Computer science ,Multi-task learning ,Approximation algorithm ,Machine learning ,Function approximation ,Human–computer interaction ,Reinforcement learning ,Introspection ,Position paper ,Imperfect ,Artificial intelligence - Abstract
In this paper we present a perspective on the relationship between learning and representation in sequential decision making tasks. We undertake a brief survey of existing real-world applications, which demonstrates that the classical “tabular” representation seldom applies in practice. Specifically, several practical tasks suffer from state aliasing, and most demand some form of generalization and function approximation. Coping with these representational aspects thus becomes an important direction for furthering the advent of reinforcement learning in practice. The central thesis we present in this position paper is that in practice, learning methods specifically developed to work with imperfect representations are likely to perform better than those developed for perfect representations and then applied in imperfect-representation settings. We specify an evaluation criterion for learning methods in practice, and propose a framework for their synthesis. In particular, we highlight the degrees of “representational bias” prevalent in different learning methods. We reference a variety of relevant literature as a background for this introspective essay.
- Published
- 2011
145. Accurate human motion capture in large areas by combining IMU- and laser-based people tracking
- Author
-
Cyrill Stachniss, Jakob Ziegler, Wolfram Burgard, Giorgio Grisetti, and Henrik Kretzschmar
- Subjects
Computer science ,Mobile robot ,Tracking system ,Motion capture ,Inertial measurement unit ,Trajectory ,Robot ,Computer vision ,Artificial intelligence ,Pose ,Computer animation - Abstract
A large number of applications use motion capture systems to track the location and the body posture of people. For instance, the movie industry captures actors to animate virtual characters that perform stunts. Today's tracking systems either operate with statically mounted cameras and thus can be used in confined areas only or rely on inertial sensors that allow for free and large-scale motion but suffer from drift in the pose estimate. This paper presents a novel tracking approach that aims to provide globally aligned full body posture estimates by combining a mobile robot and an inertial motion capture system. In our approach, a mobile robot equipped with a laser scanner is used to anchor the pose estimates of a person given a map of the environment. It uses a particle filter to globally localize a person wearing a motion capture suit and to robustly track the person's position. To obtain a smooth and globally aligned trajectory of the person, we solve a least squares optimization problem formulated from the motion capture suite and tracking data. Our approach has been implemented on a real robot and exhaustively tested. As the experimental evaluation shows, our system is able to provide locally precise and globally aligned estimates of the person's full body posture.
- Published
- 2011
146. Special Issue on Artificial Intelligence at the Edge.
- Author
-
Falcao, Gabriel and Cavallaro, Joseph
- Subjects
ARTIFICIAL intelligence ,EDGE computing ,MICROPROCESSORS ,COMPUTER architecture ,PERSONAL computers - Abstract
The papers in this special section explore cutting edge research on topics that combine artificial intelligence with edge computing, relating to the design, performance, or application of microprocessors and microcomputers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
147. Guest Editorial: IEEE TC Special Issue On Smart Edge Computing and IoT.
- Author
-
Benini, Luca, Benatti, Simone, Jang, Taekwang, and Rahimi, Abbas
- Subjects
ARTIFICIAL intelligence ,EDGE computing ,DISRUPTIVE innovations ,INTERNET of things ,ELECTRONIC data processing ,TEXTILE machinery ,ELECTRIC power distribution grids - Abstract
The papers in this special section focus on smart edge computing and the Internet of Things (IoT). The evolution of the IoT is changing the nature of edge-computing devices. Availability of novel sensor interfaces, efficient digital low power processors, and high-bandwidth low-power communication protocols have generated a perfect storm within the IoT ecosystem. Next generation IoT end-nodes have to support, in place, an increasing range of functionality: multi-sensory data processing and analysis, complex systems control strategies, and, ultimately, artificial intelligence. These new capabilities will enable disruptive innovation in wearable and implantable biomedical devices, autonomous insect-sized drones, and miniaturized devices for environmental sensing and continuous monitoring of buildings, industrial machinery, and power grids. As a result, we witness a paradigm shift towards computationally demanding tasks on tiny form-factor devices at extreme energy efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
148. Cover.
- Subjects
CONTENT-based image retrieval ,INTELLECTUAL property ,ARTIFICIAL intelligence ,OPEN access publishing - Published
- 2021
- Full Text
- View/download PDF
149. Cover.
- Subjects
CONTENT-based image retrieval ,INTELLECTUAL property ,ARTIFICIAL intelligence ,OPEN access publishing - Published
- 2021
- Full Text
- View/download PDF
150. Call for Papers IEEE Transactions on Artificial Intelligence.
- Subjects
ARTIFICIAL intelligence ,DIGITAL Object Identifiers ,EMAIL - Published
- 2020
- Full Text
- View/download PDF