77 results for "Simone Palazzo"
Search Results
2. Deep learning approach for detecting low frequency events on DAS data at Vulcano Island, Italy
- Author
-
Martina Allegra, Gilda Currenti, Flavio Cannavò, Philippe Jousset, Michele Prestifilippo, Rosalba Napoli, Mariangela Sciotto, Giuseppe Di Grazia, Eugenio Privitera, Simone Palazzo, and Charlotte Krawczyk
- Abstract
Since September 2021, signs of unrest at Vulcano Island have been noticed after four years of quiescence, along with CO2 degassing and the occurrence of long-period and very long-period events. With the intention of improving the monitoring activities, a submarine fiber optic telecommunications cable linking Vulcano Island to Sicily was interrogated from 15 January to 14 February 2022. Of particular interest has been the recording of 1488 events with a wide range of waveforms made up of two main frequency bands (from 3 to 5 Hz and from 0.1 to 0.2 Hz). With the aim of automatically detecting seismo-volcanic events, different approaches were explored, particularly investigating whether the application of machine learning could provide the same performance as conventional techniques. Unlike many traditional algorithms, deep learning manages to guarantee a generalized approach by automatically and hierarchically extracting the relevant features from the raw data. Due to their spatio-temporal density, the data acquired by the DAS can be assimilated to a sequence of images; this property has been exploited by re-designing deep learning techniques for image processing, specifically employing Convolutional Neural Networks. The results demonstrate that deep learning not only achieves good performance but even outperforms classical algorithms. Despite providing a generalized approach, Convolutional Neural Networks have been shown to be more effective than traditional techniques in exploiting the high spatial and temporal sampling of the acquired data.
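The abstract describes treating the DAS recordings, dense in both space (fiber channels) and time, as images classified by a CNN. A minimal sketch of that idea for a binary event/no-event classifier; the window size, channel count and layer widths are illustrative, not the authors' architecture:

```python
import torch
import torch.nn as nn

class DASEventCNN(nn.Module):
    """Toy 2D CNN over DAS windows shaped (batch, 1, channels, time).

    Treats the spatial axis (fiber channels) and the temporal axis as
    the two image dimensions, as suggested by the abstract.
    """
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Example: a window of 128 fiber channels x 512 time samples.
model = DASEventCNN()
scores = model(torch.randn(4, 1, 128, 512))  # -> (4, 2) class logits
```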
- Published
- 2023
- Full Text
- View/download PDF
3. Distributed dynamic strain sensing of very long period and long period events on telecom fiber-optic cables at Vulcano, Italy
- Author
-
Gilda Currenti, Martina Allegra, Flavio Cannavò, Philippe Jousset, Michele Prestifilippo, Rosalba Napoli, Mariangela Sciotto, Giuseppe Di Grazia, Eugenio Privitera, Simone Palazzo, and Charlotte Krawczyk
- Subjects
Multidisciplinary
- Abstract
Volcano-seismic signals can help with volcanic hazard estimation and eruption forecasting. However, the underlying mechanism for their low frequency components is still a matter of debate. Here, we show signatures of dynamic strain records from Distributed Acoustic Sensing in the low frequencies of volcanic signals at Vulcano Island, Italy. Signs of unrest have been observed since September 2021, with CO2 degassing and occurrence of long period and very long period events. We interrogated a fiber-optic telecommunication cable on-shore and off-shore linking Vulcano Island to Sicily. We explore various approaches to automatically detect seismo-volcanic events, both adapting conventional algorithms and using machine learning techniques. During one month of acquisition, we found 1488 events with a great variety of waveforms composed of two main frequency bands (from 0.1 to 0.2 Hz and from 3 to 5 Hz) with various relative amplitudes. On the basis of spectral signature and family classification, we propose a model in which gas accumulates in the hydrothermal system and is released through a series of resonating fractures up to the surface. Our findings demonstrate that fiber optic telecom cables, in association with cutting-edge machine learning algorithms, contribute to a better understanding and monitoring of volcanic hydrothermal systems.
- Published
- 2023
- Full Text
- View/download PDF
4. Explainable Deep Learning System for Advanced Silicon and Silicon Carbide Electrical Wafer Defect Map Assessment
- Author
-
Riccardo Emanuele Sarpietro, Carmelo Pino, Salvatore Coffa, Angelo Messina, Simone Palazzo, Sebastiano Battiato, Concetto Spampinato, and Francesco Rundo
- Subjects
General Computer Science, General Engineering, General Materials Science, Electrical and Electronic Engineering
- Published
- 2022
- Full Text
- View/download PDF
5. Hierarchical Domain-Adapted Feature Learning for Video Saliency Prediction
- Author
-
Concetto Spampinato, Daniela Giordano, Francesco Rundo, Giovanni Bellitto, Simone Palazzo, and F. Proietto Salanitri
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Video saliency prediction, Machine learning, Artificial Intelligence, Domain adaptation, Domain-specific learning, Gradient reversal layer, Conspicuity maps, Feature learning, Smoothing
- Abstract
In this work, we propose a 3D fully convolutional architecture for video saliency prediction that employs hierarchical supervision on intermediate maps (referred to as conspicuity maps) generated using features extracted at different abstraction levels. We provide the base hierarchical learning mechanism with two techniques for domain adaptation and domain-specific learning. For the former, we encourage the model to unsupervisedly learn hierarchical general features using gradient reversal at multiple scales, to enhance generalization capabilities on datasets for which no annotations are provided during training. As for domain specialization, we employ domain-specific operations (namely, priors, smoothing and batch normalization) by specializing the learned features on individual datasets in order to maximize performance. The results of our experiments show that the proposed model yields state-of-the-art accuracy on supervised saliency prediction. When the base hierarchical model is empowered with domain-specific modules, performance improves, outperforming state-of-the-art models on three out of five metrics on the DHF1K benchmark and reaching the second-best results on the other two. When, instead, we test it in an unsupervised domain adaptation setting, by enabling hierarchical gradient reversal layers, we obtain performance comparable to supervised state-of-the-art. Source code, trained models and example outputs are publicly available at https://github.com/perceivelab/hd2s.
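The gradient reversal used here for unsupervised domain adaptation is a well-known construction: a layer that is the identity in the forward pass and negates gradients in the backward pass. A minimal PyTorch sketch of such a layer, not the authors' exact implementation:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in
    the backward pass, so the features feeding a domain classifier are
    trained to *fool* it, encouraging domain-invariant representations."""
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

# Usage: domain_logits = domain_head(grad_reverse(features))
```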
- Published
- 2021
- Full Text
- View/download PDF
6. A Hybrid Modulation Technique for Voltage Regulation in LLC Converters in the Presence of Transformer Parasitic Capacitance
- Author
-
Simone Palazzo, Giovanni Busatto, Enzo De Santis, Roberto Giacomobono, Dario Di Ruzza, and Giuseppe Panariello
- Published
- 2022
- Full Text
- View/download PDF
7. Hybrid Deep Learning Pipeline for Advanced Electrical Wafer Defect Maps Assessment
- Author
-
Francesco Rundo, Salvatore Coffa, Michele Calabretta, Riccardo Emanuele Sarpietro, Angelo Messina, Carmelo Pino, Simone Palazzo, and Concetto Spampinato
- Published
- 2022
- Full Text
- View/download PDF
8. Transformer-based image generation from scene graphs
- Author
-
Renato Sortino, Simone Palazzo, Francesco Rundo, and Concetto Spampinato
- Subjects
Signal Processing, Computer Vision and Pattern Recognition, Software
- Published
- 2023
- Full Text
- View/download PDF
9. Effects of Auxiliary Knowledge on Continual Learning
- Author
-
Giovanni Bellitto, Matteo Pennisi, Simone Palazzo, Lorenzo Bonicelli, Matteo Boschini, Simone Calderara, and Concetto Spampinato
- Published
- 2022
- Full Text
- View/download PDF
10. TinyHD: Efficient video saliency prediction with heterogeneous decoders using hierarchical maps distillation
- Author
-
Feiyan Hu, Simone Palazzo, Federica Proietto Salanitri, Giovanni Bellitto, Morteza Moradi, Concetto Spampinato, and Kevin McGuinness
- Subjects
Signal processing, Artificial intelligence, Electronic engineering, Computer Vision and Pattern Recognition (cs.CV), Machine learning, Digital video
- Abstract
Video saliency prediction has recently attracted attention of the research community, as it is an upstream task for several practical applications. However, current solutions are particularly computationally demanding, especially due to the wide usage of spatio-temporal 3D convolutions. We observe that, while different model architectures achieve similar performance on benchmarks, visual variations between predicted saliency maps are still significant. Inspired by this intuition, we propose a lightweight model that employs multiple simple heterogeneous decoders and adopts several practical approaches to improve accuracy while keeping computational costs low, such as hierarchical multi-map knowledge distillation, multi-output saliency prediction, unlabeled auxiliary datasets and channel reduction with teacher assistant supervision. Our approach achieves saliency prediction accuracy on par with or better than state-of-the-art methods on the DHF1K, UCF-Sports and Hollywood2 benchmarks, while significantly enhancing the efficiency of the model. Code is available at https://github.com/feiyanhu/tinyHD (WACV 2023).
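The hierarchical multi-map knowledge distillation named in the abstract amounts to matching the student's intermediate saliency maps against a teacher's. A hedged sketch of one common formulation, a per-map KL divergence between spatially normalized maps; the per-map weighting is illustrative:

```python
import torch

def saliency_kl(student_map, teacher_map, eps=1e-8):
    """KL divergence between two non-negative saliency maps (e.g. after
    a sigmoid), each of shape (batch, 1, H, W), treated as spatial
    probability distributions."""
    s = student_map.flatten(1)
    t = teacher_map.flatten(1)
    s = s / (s.sum(dim=1, keepdim=True) + eps)
    t = t / (t.sum(dim=1, keepdim=True) + eps)
    return (t * (torch.log(t + eps) - torch.log(s + eps))).sum(dim=1).mean()

def distillation_loss(student_maps, teacher_maps, weights=None):
    """Sum weighted KL terms over a hierarchy of intermediate maps."""
    weights = weights or [1.0] * len(student_maps)
    return sum(w * saliency_kl(s, t)
               for w, s, t in zip(weights, student_maps, teacher_maps))
```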
- Published
- 2022
11. Transfer without Forgetting
- Author
-
Matteo Boschini, Lorenzo Bonicelli, Angelo Porrello, Giovanni Bellitto, Matteo Pennisi, Simone Palazzo, Concetto Spampinato, and Simone Calderara
- Subjects
Continual Learning, Lifelong Learning, Pretraining, Experience Replay, Attention, Transfer Learning
- 2022
12. GAN Latent Space Manipulation and Aggregation for Federated Learning in Medical Imaging
- Author
-
Matteo Pennisi, Federica Proietto Salanitri, Simone Palazzo, Carmelo Pino, Francesco Rundo, Daniela Giordano, and Concetto Spampinato
- Published
- 2022
- Full Text
- View/download PDF
13. REactor Safety Analysis ToolboX RESA-TX
- Author
-
Alejandra Cuesta, Simone Palazzo, Stefan Wenzel, and Alexander Kerner
- Subjects
General Medicine - Abstract
The REactor Safety Analysis ToolboX RESA-TX is a software and data package in development that combines the automation of all established procedures for deterministic safety analysis (DSA), the integration of expert know-how and a large database including the most relevant information required for conducting a DSA. In the current state of the art, DSA is a complex and thus error-prone process that is highly time-consuming and repetitive. The reliability of the result is strongly dependent on the availability of plant data and expert know-how. The idea of developing the RESA-TX toolbox arose at GRS to cope with these conditions. The innovative approach proposes an automated and standardised procedure, supported by a large database of plant design characteristics, plant behaviour, regulatory rules and DSA expert knowledge incorporated within the tool. Its application allows the end user to automatically generate and verify an input deck, as well as conduct design basis accident (DBA) calculations for a certain design with highly reduced manual intervention. The databases can be extended depending on available information or other boundary conditions. A heuristic approach is integrated into the model generation process, where users often suffer from a lack of information about the facility under consideration. These heuristics can be replaced when higher information quality is available or enhanced over time, which can lead to more reliable results with increasing usage of the tool. As a result, the application of RESA-TX could greatly increase the efficiency of the DSA process, reducing both repetitiveness and user-induced errors. This in turn will lead to an improvement in the quality of the analysis and the reliability of the results. In consequence, RESA-TX will allow a DSA to be conducted more frequently in situations where time or budget was a limitation before, thereby contributing to an increase in reactor safety.
- Published
- 2023
- Full Text
- View/download PDF
14. Role of Active Clamp Circuit in a DC/AC Isolated Converter based on the principle of Pulsating DC Link
- Author
-
Annunziata Sanseverino, Simone Palazzo, Daniele Marciano, Giovanni Busatto, and Francesco Velardi
- Subjects
Materials science, Electrical engineering, SiC device, Isolated DC/AC converter, AC/AC converter, Inductance, Capacitor, Active clamp, Parasitic capacitance, Overvoltage, Pulsating DC link, Parasitic element, Transformer, Voltage
- Abstract
The operation of an Active Clamp circuit is presented for an Isolated DC/AC Converter based on the principle of the Pulsating DC Link (PDL). This innovative two-stage converter, made of a Phase Shifted Full Bridge and a Three Phase Inverter, is characterized by the pulsating evolution of the voltage on the intermediate DC link resulting from the absence of the output filter of the first stage. The zero voltage phases of the PDL are used to achieve Zero Voltage Transition (ZVT) for all the switches of the second stage. The Active Clamp circuit is used to clamp the voltage overshoots on the Pulsating DC link due to the parasitic inductance which resonates with the stray capacitance of the converter. Moreover, the circuit allows the energy associated with the overvoltage to be stored and returned to the DC link during the subsequent energizing phases. The role of the Active Clamp capacitor is discussed on the basis of a comprehensive simulation analysis and experimental data obtained from a 5 kW prototype.
- Published
- 2021
- Full Text
- View/download PDF
15. Self-improving classification performance through GAN distillation
- Author
-
Simone Palazzo, Matteo Pennisi, and Concetto Spampinato
- Subjects
Computer science, Artificial intelligence, Distillation
- Published
- 2021
- Full Text
- View/download PDF
16. Adversarial Framework for Unsupervised Learning of Motion Dynamics in Videos
- Author
-
Simone Palazzo, Daniela Giordano, Pierluca D'Oro, Concetto Spampinato, and Mubarak Shah
- Subjects
Discriminator, Motion dynamics, Computer science, Supervised learning, Adversarial system, Artificial Intelligence, Motion estimation, Unsupervised learning, Segmentation, Computer Vision and Pattern Recognition, Software
- Abstract
Human behavior understanding in videos is a complex, still unsolved problem and requires accurately modeling motion at both the local (pixel-wise dense prediction) and global (aggregation of motion cues) levels. Current approaches based on supervised learning require large amounts of annotated data, whose scarce availability is one of the main limiting factors to the development of general solutions. Unsupervised learning can instead leverage the vast amount of videos available on the web and is a promising solution for overcoming the existing limitations. In this paper, we propose an adversarial GAN-based framework that learns video representations and dynamics through a self-supervision mechanism in order to perform dense and global prediction in videos. Our approach synthesizes videos by (1) factorizing the process into the generation of static visual content and motion, (2) learning a suitable representation of a motion latent space in order to enforce spatio-temporal coherency of object trajectories, and (3) incorporating motion estimation and pixel-wise dense prediction into the training procedure. Self-supervision is enforced by using motion masks produced by the generator, as a co-product of its generation process, to supervise the discriminator network in performing dense prediction. Performance evaluation, carried out on standard benchmarks, shows that our approach is able to learn, in an unsupervised way, both local and global video dynamics. The learned representations, then, support the training of video object segmentation methods with significantly fewer (about 50%) annotations, giving performance comparable to the state of the art. Furthermore, the proposed method achieves promising performance in generating realistic videos, outperforming state-of-the-art approaches especially on motion-related metrics.
- Published
- 2019
- Full Text
- View/download PDF
17. SurfaceNet: Adversarial SVBRDF Estimation from a Single Image
- Author
-
Giuseppe Vecchio, Simone Palazzo, and Concetto Spampinato
- Subjects
Computer Vision and Pattern Recognition (cs.CV)
- Abstract
In this paper we present SurfaceNet, an approach for estimating spatially-varying bidirectional reflectance distribution function (SVBRDF) material properties from a single image. We pose the problem as an image translation task and propose a novel patch-based generative adversarial network (GAN) that is able to produce high-quality, high-resolution surface reflectance maps. The employment of the GAN paradigm has a twofold objective: 1) allowing the model to recover finer details than standard translation models; 2) reducing the domain shift between synthetic and real data distributions in an unsupervised way. An extensive evaluation, carried out on a public benchmark of synthetic and real images under different illumination conditions, shows that SurfaceNet largely outperforms existing SVBRDF reconstruction methods, both quantitatively and qualitatively. Furthermore, SurfaceNet exhibits a remarkable ability in generating high-quality maps from real samples without any supervision at training time.
- Published
- 2021
18. Deep Recurrent-Convolutional Model for Automated Segmentation of Craniomaxillofacial CT Scans
- Author
-
Concetto Spampinato, Rosalia Leonardi, F. Proietto Salanitri, Francesca Murabito, Ulas Bagci, Francesco Rundo, Daniela Giordano, and Simone Palazzo
- Subjects
Computer science, Deep learning, Anatomical structures, Automated segmentation, Pattern recognition, Image segmentation, Segmentation
- Abstract
In this paper we define a deep learning architecture for automated segmentation of anatomical structures in Craniomaxillofacial (CMF) CT scans that leverages the recent success of encoder-decoder models for semantic segmentation of natural images. In particular, we propose a fully convolutional deep network that combines the advantages of recent fully convolutional models, such as Tiramisu, with squeeze-and-excitation blocks for feature recalibration, integrated with convolutional LSTMs to model spatio-temporal correlations between consecutive slices. The proposed segmentation network shows superior performance and generalization capabilities (to different structures and imaging modalities) compared to state-of-the-art methods on automated segmentation of CMF structures (e.g., mandibles and airways) in several standard benchmarks (e.g., MICCAI datasets) and on new datasets proposed herein, effectively facing shape variability.
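The squeeze-and-excitation blocks mentioned for feature recalibration follow a standard pattern: global-pool a feature map into a channel descriptor, pass it through a small bottleneck MLP, and rescale the channels. A generic 2D sketch (the paper integrates such blocks into a recurrent-convolutional pipeline; this standalone version is only illustrative):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel recalibration for 2D features.
    Assumes channels >= reduction so the bottleneck is non-empty."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: (b, c) descriptor
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel gates
        return x * w                     # recalibrate the feature map
```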
- Published
- 2021
- Full Text
- View/download PDF
19. Deep Multi-stage Model for Automated Landmarking of Craniomaxillofacial CT Scans
- Author
-
L. Prezzavento, Concetto Spampinato, Ulas Bagci, Francesco Rundo, Simone Palazzo, Daniela Giordano, Giovanni Bellitto, and Rosalia Leonardi
- Subjects
Computer science, Pattern recognition, Image segmentation, Multi-stage pipeline, Segmentation, Granularity, Image resolution
- Abstract
In this paper we define a deep multi-stage architecture for automated landmarking of craniomaxillofacial (CMF) CT images. Our model is composed of three subnetworks that first localize, on reduced-resolution images, areas where landmarks may be found and then refine the search, at full-resolution scale, through a hierarchical structure aiming at increasing the granularity of the investigated region. The multi-stage pipeline is designed to deal with full resolution data and does not require any additional pre-processing step to reduce search space, as opposed to existing methods that can be only adopted for searching landmarks located in well-defined anatomical structures (e.g., mandibles). The automated landmarking system is tested on identifying landmarks located in several CMF regions, achieving an average error of 0.8 mm, significantly lower than expert readings. The proposed model also outperforms baselines and is on par with existing models that employ additional upstream segmentation, on state-of-the-art benchmarks.
- Published
- 2021
- Full Text
- View/download PDF
20. A Novel Modulation Technique for Pulsating DC Link Multistage Converter with Zero Voltage Transition Based on Different and Unrelated Switching Frequencies
- Author
-
Annunziata Sanseverino, Giovanni Busatto, Carmine Abbate, D. Tedesco, Francesco Velardi, Daniele Marciano, and Simone Palazzo
- Subjects
Total harmonic distortion, Modulation technique, Pulsating DC link, SiC device, ZVT, Filter (signal processing), Predistortion, Synchronization, Modulation, Control theory, Distortion, Voltage
- Abstract
A novel modulation technique is presented for an Isolated DC/AC multistage Converter, based on the Pulsating DC Link principle. In this converter, the absence of a filter at the output of the first stage provides high reliability, fast dynamic response and Zero Voltage Transitions (ZVT) of the secondary stage switches thanks to the zero voltage time intervals of the pulsating DC link. The proposed modulation technique allows the use of two different switching frequencies for the two converter stages, in such a way that each stage can be modulated with the best switching frequency to improve efficiency, dynamic performance and converter size. Moreover, the modulation technique permits the optimization of the ratio between the switching frequencies of the two power stages, which is fundamental to prevent the pulsating DC link from compromising the Total Harmonic Distortion (THD) of the output voltage. An exhaustive description of the presented modulation logic is provided, highlighting the benefits due to the increase of the switching frequency of the first stage. The synchronization between the power stages is also used to guarantee the ZV commutations of the second stage during the freewheeling phases of the first stage, with a consequent reduction of losses. Furthermore, the effects of the predistortion of modulating signals on the distortion of output voltages are discussed and the simulation results are presented. The experimental results, obtained by the characterization of a 20 kW prototype, are presented and discussed to validate the features of the proposed modulation logic.
- Published
- 2021
21. An accurate switching current measurement based on resistive shunt applied to short circuit GaN HEMT characterization
- Author
-
Carmine Abbate, Simone Palazzo, Giovanni Busatto, Annunziata Sanseverino, Leandro Colella, Roberto Di Folco, Emanuele Martano, and Francesco Velardi
- Subjects
Short circuit tests, High-electron-mobility transistor, Inductor, Switching time, Instrumentation, GaN HEMT, Switching current measurement, Transient (oscillation), Short circuit, Shunt (electrical), Voltage
- Abstract
The use of a resistive shunt is one of the simplest and most used methods for measuring current in an electronic device. Many researchers use this method to measure drain current during short-circuiting of fast devices such as GaN HEMTs. However, the high switching speed of these devices together with the non-ideality of the shunt resistors produces an overestimation of the current in the initial phases of the transient. In this paper, a passive compensation network is proposed, which is formed by adding an inductor to the voltage measurement circuit and allows an accurate measurement of the current using the resistive shunt even in the presence of very fast devices. The proposed method is validated by simulations and experimental measurements.
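The overestimation the abstract attributes to shunt non-ideality follows from a first-order model in which the shunt has resistance R_s and parasitic inductance L_p. The sketch below states that model and the matching condition under which an added first-order low-pass (the inductor-based network in the measurement circuit) cancels it; the symbols are generic, not the paper's notation:

```latex
% Voltage across a real shunt with resistance R_s and parasitic
% inductance L_p, carrying current i(t):
\[
  v_s(t) = R_s\, i(t) + L_p \frac{di}{dt}
  \qquad\Longrightarrow\qquad
  V_s(s) = \left( R_s + s L_p \right) I(s)
\]
% The sL_p zero inflates the apparent current during fast edges.
% A first-order low-pass compensation network
\[
  H(s) = \frac{1}{1 + s\tau}, \qquad \tau = \frac{L_p}{R_s}
\]
% cancels that zero, so H(s) V_s(s) = R_s I(s): the filtered voltage
% is again proportional to the true current.
```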
- Published
- 2021
22. Hierarchical 3D Feature Learning for Pancreas Segmentation
- Author
-
Concetto Spampinato, Ismail Irmakci, Giovanni Bellitto, Ulas Bagci, Simone Palazzo, and Federica Proietto Salanitri
- Subjects
CT and MRI pancreas segmentation, Hierarchical encoder-decoder architecture, Fully convolutional neural networks, Feature learning, Magnetic resonance imaging, Computerized tomography, Medical imaging, Segmentation
- Abstract
Presented at the 12th International Workshop on Machine Learning in Medical Imaging (MLMI 2021), held in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). We propose a novel 3D fully convolutional deep network for automated pancreas segmentation from both MRI and CT scans. More specifically, the proposed model consists of a 3D encoder that learns to extract volume features at different scales; features taken at different points of the encoder hierarchy are then sent to multiple 3D decoders that individually predict intermediate segmentation maps. Finally, all segmentation maps are combined to obtain a unique detailed segmentation mask. We test our model on both CT and MRI imaging data: the publicly available NIH Pancreas-CT dataset (consisting of 82 contrast-enhanced CTs) and a private MRI dataset (consisting of 40 MRI scans). Experimental results show that our model outperforms existing methods on CT pancreas segmentation, obtaining an average Dice score of about 88%, and yields promising segmentation performance on a very challenging MRI data set (average Dice score is about 77%). Additional control experiments demonstrate that the achieved performance is due to the combination of our 3D fully-convolutional deep network and the hierarchical representation decoding, thus substantiating our architectural design.
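The described architecture (one 3D encoder, lightweight decoders fed from different encoder depths, per-scale maps fused into a final mask) can be sketched compactly. A toy version with assumed layer sizes; the actual model is much deeper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.ReLU(), nn.MaxPool3d(2))

class Hierarchical3DSeg(nn.Module):
    """Toy 3D encoder; each scale gets its own 1x1x1 decoder head, and
    the per-scale maps are upsampled to input size and fused."""
    def __init__(self):
        super().__init__()
        self.encoders = nn.ModuleList([block(1, 8), block(8, 16), block(16, 32)])
        self.heads = nn.ModuleList([nn.Conv3d(c, 1, 1) for c in (8, 16, 32)])
        self.fuse = nn.Conv3d(3, 1, 1)  # combine the three intermediate maps

    def forward(self, x):
        size = x.shape[2:]
        maps, h = [], x
        for enc, head in zip(self.encoders, self.heads):
            h = enc(h)
            maps.append(F.interpolate(head(h), size=size,
                                      mode="trilinear", align_corners=False))
        return torch.sigmoid(self.fuse(torch.cat(maps, dim=1)))

# volume: (batch, 1, D, H, W) with sides divisible by 8
mask = Hierarchical3DSeg()(torch.randn(1, 1, 32, 64, 64))
```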
- Published
- 2021
- Full Text
- View/download PDF
23. Correct block-design experiments mitigate temporal correlation bias in EEG classification
- Author
-
Joseph Schmidt, Concetto Spampinato, Daniela Giordano, Mubarak Shah, Isaak Kavasidis, and Simone Palazzo
- Subjects
Visual perception, Electroencephalography, Cognitive neuroscience, Temporal correlation, Pattern recognition, Block design
- Abstract
It is argued in [1] that [2] was able to classify EEG responses to visual stimuli solely because of the temporal correlation that exists in all EEG data and the use of a block design. While one of the analyses in [1] is correct, i.e., that low-frequency slow EEG activity can inflate classifier performance in block-designed studies [2], as we already discussed in [3], we here show that the main claim in [1] is drastically overstated and their other analyses are seriously flawed by wrong methodological choices. Our counter-analyses clearly demonstrate that the data in [2] show small temporal correlation and that such a correlation minimally contributes to classification accuracy. Thus, [1]'s analysis and criticism of block-design studies does not generalize to our case or, possibly, to other cases. To validate our counter-claims, we evaluate the performance of several state-of-the-art classification methods on the dataset in [2] (after properly filtering the data), reaching about 50% classification accuracy over 40 classes, lower than in [2], but still significant. We then investigate the influence of EEG temporal correlation on classification accuracy by testing the same models in two additional experimental settings: one that replicates [1]'s rapid-design experiment, and another one that examines the data between blocks while subjects are shown a blank screen. In both cases, classification accuracy is at or near chance, in contrast to what [1] reports, indicating a negligible contribution of temporal correlation to classification accuracy. We, instead, are able to replicate the results in [1] only when intentionally contaminating our data by inducing a temporal correlation. This suggests that what Li et al. [1] demonstrate is simply that their data are strongly contaminated by temporal correlation and low signal-to-noise ratio. We argue that the reason why Li et al. in [1] observe such high correlation in EEG data is their unconventional experimental design and settings that violate the basic cognitive neuroscience study design recommendations, first and foremost the one of limiting the experiments' duration, as instead done in [2]. The reduced stimulus-driven neural activity, the removal of breaks and the prolonged duration of experiments in [1] removed the very neural responses that one would hope to classify, leaving only the amplified slow EEG activity consistent with a temporal correlation. Furthermore, the influence of temporal correlation on classification performance in [1] is exacerbated by their choice to perform per-subject classification rather than the more commonly-used and appropriate pooled subject classification as in [2]. Our analyses and reasoning in this paper refute the claims of the "perils and pitfalls of block-design" in [1]. Finally, we conclude the paper by examining a number of other oversimplistic statements, inconsistencies, misinterpretations of machine learning concepts, speculations and misleading claims in [1]. Note: This paper was prepared as a response to [1] before its publication and we were not given access to the code (although its authors had agreed, through the PAMI EiC, to share it with us). For this reason, in the experiments presented in this work we employed our own implementation of their model.
- Published
- 2020
- Full Text
- View/download PDF
24. Visual Saliency Detection guided by Neural Signals
- Author
-
Daniela Giordano, Sebastiano Battiato, Simone Palazzo, Concetto Spampinato, and Francesco Rundo
- Subjects
Computer science, Deep learning, Pattern recognition, Convolutional neural network, Visualization, Salient, Task analysis, Artificial intelligence
- Abstract
Saliency detection is a fundamental process of human visual perception, since it allows us to identify the most important parts of a scene, directing our analysis and interpretation capabilities on a reduced set of information and reducing reaction times. However, current approaches for automatic saliency detection either attempt to mimic human capabilities by building attention maps from hand-crafted feature analysis, or employ convolutional neural networks trained as black boxes, without any architectural or information prior from human biology. In this paper, we present an approach for saliency detection that combines the success of deep learning in identifying representations for visual data with a training paradigm aimed at matching neural activity provided directly by brain signals recorded while subjects look at images. We show that our approach is able to capture correspondences between visual elements and neural activities, successfully generalizing to unseen images to identify their most salient regions.
- Published
- 2020
- Full Text
- View/download PDF
25. Decoding Brain Representations by Multimodal Learning of Neural Activity and Visual Features
- Author
-
Mubarak Shah, Daniela Giordano, Concetto Spampinato, Isaak Kavasidis, Joseph Schmidt, and Simone Palazzo
- Subjects
Visual perception, Cognitive neuroscience, Machine Learning (cs.LG), Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence, Salience (neuroscience), Attention, Contextual image classification, Deep learning, Brain, Pattern recognition, Multimodal learning, Unsupervised learning, Neurons and Cognition (q-bio.NC)
- Abstract
This work presents a novel method of exploring human brain-visual representations, with a view towards replicating these processes in machines. The core idea is to learn plausible computational and biological representations by correlating human neural activity and natural images. Thus, we first propose a model, EEG-ChannelNet, to learn a brain manifold for EEG classification. After verifying that visual information can be extracted from EEG data, we introduce a multimodal approach that uses deep image and EEG encoders, trained in a siamese configuration, for learning a joint manifold that maximizes a compatibility measure between visual features and brain representations. We then carry out image classification and saliency detection on the learned manifold. Performance analyses show that our approach satisfactorily decodes visual information from neural signals. This, in turn, can be used to effectively supervise the training of deep learning models, as demonstrated by the high performance of image classification and saliency detection on out-of-training classes. The obtained results show that the learned brain-visual features lead to improved performance and simultaneously bring deep models more in line with cognitive neuroscience work related to visual perception and attention.
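The siamese training maximizes a compatibility measure between image and EEG embeddings. A hedged sketch of one standard choice, a hinge loss over matched versus mismatched pairs; the margin and the use of cosine similarity are assumptions, not necessarily the paper's measure:

```python
import torch
import torch.nn.functional as F

def compatibility_loss(img_emb, eeg_emb, margin: float = 0.2):
    """Hinge loss pushing matched image/EEG pairs (row i of each batch)
    to score higher than any mismatched pair, on cosine similarities."""
    img = F.normalize(img_emb, dim=1)
    eeg = F.normalize(eeg_emb, dim=1)
    scores = img @ eeg.t()              # (batch, batch) similarity matrix
    pos = scores.diag().unsqueeze(1)    # matched-pair scores
    hinge = (margin + scores - pos).clamp(min=0)
    off_diag = 1.0 - torch.eye(scores.size(0), device=scores.device)
    return (hinge * off_diag).mean()    # ignore the diagonal (positives)
```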
- Published
- 2020
26. Unsupervised deep learning on seismic data to detect volcanic unrest
- Author
-
Flavio Cannavò, Andrea Cannata, Simone Palazzo, Concetto Spampinato, Demian Faraci, Giulia Castagnolo, Isaak Kavasidis, Chiara Montagna, and Simone Colucci
- Abstract
The significant efforts of the last years in new monitoring techniques and networks have led to large datasets and improved our capabilities to measure volcano conditions. Thus, nowadays the challenge is to retrieve information from this huge amount of data to significantly improve our capability to automatically recognize signs of potentially hazardous unrest. Unrest detection from unlabeled data is a particularly challenging task, since the lack of annotations on the temporal localization of these phenomena makes it impossible to train a machine learning model in a supervised way. The proposed approach, therefore, aims at learning unsupervised low-dimensional representations of the input signal during normal volcanic activity by training a variational autoencoder (VAE) to compress, reconstruct and synthesize input signals. Thanks to the internal structure of the proposed VAE architecture, with 1-dimensional convolutional layers with residual blocks and an attention mechanism, the representation learned by the model can be employed to detect deviations from normal volcanic activity. In our experiments, we test and evaluate two techniques for unrest detection: a generative approach, with a bank of synthetic signals used to assess the degree of correspondence between normal activity and an input signal; and a discriminative approach, employing unsupervised clustering in the VAE representation space to identify prototypes of normal activity for comparison with an input signal.
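A compact sketch of the kind of model described: a 1D convolutional VAE trained on windows of normal activity, with the reconstruction error of a new window serving as an anomaly score. Layer sizes and window length are illustrative, and the residual blocks and attention mentioned in the abstract are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeismicVAE(nn.Module):
    """Toy 1D conv VAE; anomaly score = reconstruction error of a window."""
    def __init__(self, latent: int = 16, length: int = 256):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 8, 9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(8, 16, 9, stride=4, padding=4), nn.ReLU(), nn.Flatten())
        feat = 16 * (length // 16)
        self.mu, self.logvar = nn.Linear(feat, latent), nn.Linear(feat, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent, feat), nn.Unflatten(1, (16, length // 16)),
            nn.ConvTranspose1d(16, 8, 8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 8, stride=4, padding=2))

    def forward(self, x):                       # x: (batch, 1, length)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def anomaly_score(model, x):
    """Per-window reconstruction error; large values flag deviations
    from the normal activity the VAE was trained on."""
    recon, _, _ = model(x)
    return F.mse_loss(recon, x, reduction="none").mean(dim=(1, 2))
```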
- Published
- 2020
- Full Text
- View/download PDF
27. Top-down saliency detection driven by visual classification
- Author
-
Simone Palazzo, Daniela Giordano, Francesca Murabito, Concetto Spampinato, Michael Riegler, and Konstantin Pogorelov
- Subjects
Computer science, Visual task, Pattern recognition, Benchmarking, Top-down and bottom-up design, Visual attention, Eye tracking, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence
- Abstract
This paper presents an approach for saliency detection able to emulate the integration of the top-down (task-controlled) and bottom-up (sensory information) processes involved in human visual attention. In particular, we first learn how to generate saliency when a specific visual task has to be accomplished. Afterwards, we investigate if and to what extent the learned saliency maps can support visual classification in nontrivial cases. To achieve this, we propose SalClassNet, a CNN framework consisting of two networks jointly trained: a) the first one computing top-down saliency maps from input images, and b) the second one exploiting the computed saliency maps for visual classification. To test our approach, we collected a dataset of eye-gaze maps, using a Tobii T60 eye tracker, by asking several subjects to look at images from the Stanford Dogs dataset, with the objective of distinguishing dog breeds. Performance analysis on our dataset and other saliency benchmarking datasets, such as POET, showed that SalClassNet outperforms state-of-the-art saliency detectors, such as SalNet and SALICON. Finally, we also analyzed the performance of SalClassNet in a fine-grained recognition task and found out that it yields enhanced classification accuracy compared to Inception and VGG-19 classifiers. The achieved results, thus, demonstrate that 1) conditioning saliency detectors with object classes reaches state-of-the-art performance, and 2) explicitly providing top-down saliency maps to visual classifiers enhances accuracy.
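The two jointly trained networks can be sketched as a saliency model whose output map is stacked with the RGB input before classification. A minimal illustration; the backbones are placeholders, not the actual SalClassNet layers:

```python
import torch
import torch.nn as nn

class SalClassNetSketch(nn.Module):
    """Saliency net -> map concatenated with the image -> classifier.
    Both parts are trained jointly, as described in the abstract."""
    def __init__(self, saliency_net: nn.Module, classifier: nn.Module):
        super().__init__()
        self.saliency_net = saliency_net  # RGB (b,3,H,W) -> map (b,1,H,W)
        self.classifier = classifier      # RGB+map (b,4,H,W) -> class logits

    def forward(self, images):
        sal = torch.sigmoid(self.saliency_net(images))
        logits = self.classifier(torch.cat([images, sal], dim=1))
        return logits, sal

# Joint training objective, schematically:
#   loss = cross_entropy(logits, labels) + lam * mse(sal, gaze_maps)
```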
- Published
- 2018
- Full Text
- View/download PDF
28. High Frequency, High Efficiency, and High Power Density GaN-Based LLC Resonant Converter: State-of-the-Art and Perspectives
- Author
-
Simone Palazzo, Giuseppe Panariello, Enzo De Santis, Dario Di Ruzza, Arturo Amendola, Annunziata Sanseverino, Seyed Abolfazl Mortazavizadeh, Francesco Velardi, and Giovanni Busatto
- Subjects
Materials science, Review, High-electron-mobility transistor, Resonant converter, MOSFET, Silicon carbide, Power semiconductor device, Transformer, Power density, GaN HEMT, Converters, LLC
- Abstract
LLC converters provide soft switching for both primary- and secondary-side devices. This resonant converter is an ideal candidate for today's high frequency, high efficiency, and high power density applications such as adapters, Uninterrupted Power Supplies (UPS), Solid State Transformers (SST), electric vehicle battery chargers, renewable energy systems, servers, and telecom systems. Using Gallium Nitride (GaN) power switches in this converter further increases switching frequency, power density, and efficiency. Therefore, the present paper focuses on GaN-based LLC resonant converters. The converter structure, operation regions, design steps, and drive system are described precisely. Then its losses are discussed, and the characteristics of the magnetic components and inductances are investigated. After that, various interleaved topologies, as a solution to improve power density and decrease current ripples, are discussed. Some challenges and concerns related to GaN-based LLC converters are also reviewed. Commercially available power transistors based on various technologies, i.e., GaN HEMT, Silicon (Si) MOSFET, and Silicon Carbide (SiC), are compared. Finally, the LLC resonant converter is simulated in LTspice to highlight the merits of GaN HEMTs compared with Si MOSFETs.
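For reference, the two characteristic frequencies that delimit the LLC operating regions follow from the resonant tank elements: series inductance L_r, magnetizing inductance L_m and resonant capacitance C_r. This is textbook material rather than values from the paper:

```latex
\[
  f_{r1} = \frac{1}{2\pi\sqrt{L_r C_r}}, \qquad
  f_{r2} = \frac{1}{2\pi\sqrt{(L_r + L_m)\, C_r}}
\]
% Switching above f_{r1}, the tank behaves inductively and the primary
% switches achieve ZVS; between f_{r2} and f_{r1}, the secondary
% rectifiers can additionally turn off at zero current, which is the
% usual design region for high-efficiency operation.
```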
- Published
- 2021
- Full Text
- View/download PDF
29. Gamifying Video Object Segmentation
- Author
-
Concetto Spampinato, Daniela Giordano, and Simone Palazzo
- Subjects
Interactive video, Computer Vision and Pattern Recognition (cs.CV), Scale-space segmentation, Segmentation-based object categorization, Image segmentation, Video tracking, Computer vision, Visualization, Artificial Intelligence
- Abstract
Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare their performance with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of human time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging tasks. In particular, our method relies on a web game to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided input. Performance analysis carried out on challenging video datasets with some users playing the game demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
- Published
- 2017
- Full Text
- View/download PDF
30. MASK-RL: Multiagent Video Object Segmentation Framework Through Reinforcement Learning
- Author
-
Daniela Giordano, Giuseppe Vecchio, Francesco Rundo, Simone Palazzo, and Concetto Spampinato
- Subjects
Computer Networks and Communications, Computer science, Machine learning, Artificial Intelligence, Benchmark (computing), Reinforcement learning, Segmentation
- Abstract
Integrating human-provided location priors into video object segmentation has been shown to be an effective strategy to enhance performance, but their application at large scale is unfeasible. Gamification can help reduce the annotation burden, but it still requires user involvement. We propose a video object segmentation framework that leverages the combined advantages of user feedback for segmentation and gamification strategy by simulating multiple game players through a reinforcement learning (RL) model that reproduces human ability to pinpoint moving objects and using the simulated feedback to drive the decisions of a fully convolutional deep segmentation network. Experimental results on the DAVIS-17 benchmark show that: 1) including user-provided prior, even if not precise, yields high performance; 2) our RL agent replicates satisfactorily the same variability of humans in identifying spatiotemporal salient objects; and 3) employing artificially generated priors in an unsupervised video object segmentation model reaches state-of-the-art performance.
- Published
- 2020
31. Domain Adaptation for Outdoor Robot Traversability Estimation from RGB data with Safety-Preserving Loss
- Author
-
Luciano Cantelli, Simone Palazzo, Dario Calogero Guastella, Daniela Giordano, Concetto Spampinato, Paolo Spadaro, Giovanni Muscato, and Francesco Rundo
- Subjects
Computer science, Deep learning, Computer Vision and Pattern Recognition (cs.CV), Perspective (graphical), Mobile robot, Terrain, Task analysis, Robot, Segmentation, Robotics (cs.RO)
- Abstract
Being able to estimate the traversability of the area surrounding a mobile robot is a fundamental task in the design of a navigation algorithm. However, the task is often complex, since it requires evaluating distances from obstacles, type and slope of terrain, and dealing with non-obvious discontinuities in detected distances due to perspective. In this paper, we present an approach based on deep learning to estimate and anticipate the traversing score of different routes in the field of view of an on-board RGB camera. The backbone of the proposed model is based on a state-of-the-art deep segmentation model, which is fine-tuned on the task of predicting route traversability. We then enhance the model's capabilities by a) addressing domain shifts through gradient-reversal unsupervised adaptation, and b) accounting for the specific safety requirements of a mobile robot, by encouraging the model to err on the safe side, i.e., penalizing errors that would cause collisions with obstacles more than those that would cause the robot to stop in advance. Experimental results show that our approach is able to satisfactorily identify traversable areas and to generalize to unseen locations.
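The safety-preserving loss penalizes optimistic mistakes (predicting traversable terrain where there is none) more than pessimistic ones. A hedged sketch of one simple asymmetric formulation; the weighting scheme is illustrative, not the paper's exact loss:

```python
import torch

def safety_preserving_loss(pred, target, unsafe_weight: float = 3.0):
    """Weighted MSE on traversability scores in [0, 1].

    Errors where the model over-estimates traversability (pred > target)
    could cause collisions, so they are weighted more heavily than
    under-estimates, which merely make the robot stop early.
    """
    err = pred - target
    weights = torch.where(err > 0,
                          torch.full_like(err, unsafe_weight),
                          torch.ones_like(err))
    return (weights * err.pow(2)).mean()
```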
- Published
- 2020
32. Physical mechanisms for gate damage induced by heavy ions in SiC power MOSFET
- Author
-
Simone Palazzo, Francesco Velardi, A. Di Pasquale, Giovanni Busatto, Daniele Marciano, and Annunziata Sanseverino
- Subjects
Materials science, Oxide, Ion, Gate oxide, Power MOSFET, Electrical conductor, Quantum tunnelling, Thermal conduction, Optoelectronics
- Abstract
The objective of the paper is to present a trap-assisted conduction mechanism able to explain the creation of a conductive path in the gate oxide of SiC power MOSFETs during the impact of heavy ions. The consequent large current flow through the oxide can damage the gate structure. The proposed mechanism combines Fowler-Nordheim tunneling at the SiC/SiO2 interface with a trap-assisted valence band conduction mechanism based on the Poole-Frenkel effect. The model is based on the results of 2D finite element simulation and is supported by previous works dealing with trap-assisted tunneling hole injection in silicon dioxide.
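For context, the two conduction mechanisms combined in the proposed model have standard textbook forms, reproduced below with generic constants; these are the general expressions, not parameters fitted in the paper:

```latex
% Fowler-Nordheim tunneling current density, driven by the oxide
% field E_ox (A and B are material-dependent constants):
\[
  J_{FN} = A\, E_{ox}^{2} \exp\!\left( -\frac{B}{E_{ox}} \right)
\]
% Poole-Frenkel (trap-assisted) conduction, in which the trap barrier
% \phi_B is lowered by the applied field E:
\[
  J_{PF} \propto E \exp\!\left( -\frac{q\left( \phi_B -
      \sqrt{qE/(\pi\varepsilon)} \right)}{kT} \right)
\]
```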
- Published
- 2020
33. Exploiting structured high-level knowledge for domain-specific visual classification
- Author
-
Mubarak Shah, Francesca Murabito, Carmelo Pino, Concetto Spampinato, Simone Palazzo, Daniela Giordano, and Francesco Rundo
- Subjects
Computer science, Inference, Ontology (information science), Machine learning, Computational ontologies, Belief networks, Contextual image classification, Deep learning, Cognitive neuroscience of visual object recognition, Bayesian network, Domain knowledge, Fine-grained visual classification
- Abstract
In the last decade, deep learning models have yielded impressive performance on visual object recognition and image classification. However, these methods still rely on learning visual data distributions and show difficulties in dealing with complex scenarios where visual appearance alone is not enough to tackle them effectively. This is the case, for instance, of fine-grained image classification in domain-specific applications, for which it is very complex to employ data-driven models because of the lack of large amounts of samples, and which can instead be solved by resorting to specialized human knowledge. However, encoding this specialized knowledge and injecting it into deep models is not trivial. In this paper, we address this problem by: a) employing computational ontologies to model specialized knowledge in a structured representation and, b) building a hybrid visual-semantic classification framework. The classification method performs inference over a Bayesian Network graph, whose structure depends on the knowledge encoded in an ontology, and evidences are built using the outputs of deep networks. We test our approach on a fine-grained classification task, employing an extremely complex dataset containing images from several fruit varieties as well as visual and semantic annotations. Since the classification is done at the variety level (e.g., discriminating between different cherry varieties), appearance changes slightly and expert domain knowledge (making use of contextual information) is required to perform classification accurately. Experimental results show that our approach significantly outperforms standard deep learning–based classification methods over the considered scenario as well as existing methods leveraging semantic information for classification. These results demonstrate, on one hand, the difficulty of purely-visual deep methods in tackling small and highly-specialized datasets and, on the other hand, the capabilities of our approach to effectively encode and use semantic knowledge for enhanced accuracy.
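The core idea, treating deep-network outputs as evidence combined with ontology-derived knowledge, can be illustrated at its simplest with Bayes' rule over hypothetical variety classes; the class names, prior and probabilities below are invented for the example, and the paper's Bayesian network is far richer:

```python
import numpy as np

# Hypothetical fruit varieties and an ontology-derived prior, e.g.
# "this growing region mostly produces variety A".
varieties = ["variety_A", "variety_B", "variety_C"]
prior = np.array([0.6, 0.3, 0.1])

# Softmax output of a purely visual CNN for one image (invented values):
visual_likelihood = np.array([0.30, 0.45, 0.25])

# Posterior combining visual evidence with structured knowledge:
posterior = prior * visual_likelihood
posterior /= posterior.sum()
print(dict(zip(varieties, posterior.round(3))))
# Here the prior flips the decision from variety_B (visual only)
# to variety_A, which is the effect the hybrid framework exploits.
```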
- Published
- 2021
- Full Text
- View/download PDF
34. Recent Advances at the Brain-Driven Computer Vision Workshop 2018
- Author
-
Stavros I. Dimitriadis, Isaak Kavasidis, Simone Palazzo, and Dimitris Kastaniotis
- Subjects
Engineering, Event (computing), Computer vision, Artificial intelligence, Field (computer science)
- Abstract
The 1st edition of the Brain-Driven Computer Vision Workshop, held in Munich in conjunction with the European Conference on Computer Vision 2018, aimed at attracting, promoting and inspiring research on paradigms, methods and tools for computer vision driven or inspired by the human brain. While successful, in terms of the quality of received submissions and audience present at the event, the workshop emphasized some of the factors that currently limit research in this field. In this report, we discuss the success points of the workshop, the characteristics of the presented works, and our considerations on the state of current research and future directions of research in this topic.
- Published
- 2019
- Full Text
- View/download PDF
35. Biodiversity Information Retrieval Through Large Scale Content-Based Identification: A Long-Term Evaluation
- Author
-
Willem-Pier Vellinga, Pierre Bonnet, Hervé Glotin, Jean-Christophe Lombardo, Henning Müller, Robert Planqué, Concetto Spampinato, Hervé Goëau, Simone Palazzo, and Alexis Joly
- Subjects
0106 biological sciences, Information retrieval, Computer science, media_common.quotation_subject, Biodiversity, Context (language use), 02 engineering and technology, 15. Life on land, [SDV.BV.BOT]Life Sciences [q-bio]/Vegetal Biology/Botanics, [SDV.BID.SPT]Life Sciences [q-bio]/Biodiversity/Systematics, Phylogenetics and taxonomy, 010603 evolutionary biology, 01 natural sciences, Clef, Task (project management), Identification (information), [SDV.EE.ECO]Life Sciences [q-bio]/Ecology, environment/Ecosystems, 13. Climate action, Scale (social sciences), [INFO.INFO-IR]Computer Science [cs]/Information Retrieval [cs.IR], 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Earth Summit, [SDE.BE]Environmental Sciences/Biodiversity and Ecology, Diversity (politics), media_common - Abstract
Identifying and naming living plants or animals is usually impossible for the general public and is often a difficult task even for professionals and naturalists. Bridging this gap is a key challenge towards enabling effective biodiversity information retrieval systems. This taxonomic gap was already identified as one of the main ecological challenges to be solved during the Rio de Janeiro United Nations "Earth Summit" in 1992. Since 2011, the LifeCLEF challenges conducted in the context of the CLEF evaluation forum have been boosting and evaluating the advances in this domain. Data collections with an unprecedented volume and diversity have been shared with the scientific community to allow repeatable and long-term experiments. This paper describes the methodology of the conducted evaluation campaigns and provides a synthesis of the main results and lessons learned over the years. Acknowledgements: the organization of the PlantCLEF task is supported by the French project Floris’Tic (Tela Botanica, INRIA, CIRAD, INRA, IRD) funded in the context of the national investment program PIA. The organization of the BirdCLEF task is supported by the Xeno-Canto foundation for nature sounds as well as by the French CNRS project SABIOD.ORG, EADM MADICS, and Floris’Tic. The annotations of some soundscapes were prepared with the late wonderful Lucio Pando at Explorama Lodges, with the support of Pam Bucur, Marie Trone and H. Glotin. The organization of the SeaCLEF task is supported by the Ceta-mada NGO and the French project Floris’Tic.
- Published
- 2019
- Full Text
- View/download PDF
36. An AI-based Framework for Supporting Large Scale Automated Analysis of Video Capsule Endoscopy
- Author
-
Concetto Spampinato, Francesca Murabito, Daniela Giordano, Simone Palazzo, and Carmelo Pino
- Subjects
Computer science, business.industry, Deep learning, Scale (chemistry), 0206 medical engineering, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 02 engineering and technology, Video transmission, Limiting, Machine learning, computer.software_genre, 020601 biomedical engineering, Video capsule endoscopy, Upload, 0202 electrical engineering, electronic engineering, information engineering, Medical imaging, Key (cryptography), 020201 artificial intelligence & image processing, Artificial intelligence, business, computer - Abstract
Video Capsule Endoscopy (VCE) is a diagnostic imaging technology, based on a capsule with a built-in camera, that enables screening of the gastro-intestinal tract while reducing the invasiveness of traditional endoscopy procedures. Although VCE was designed mainly for investigations of the small intestine, its low invasiveness and ease of use make it a powerful tool for supporting large-scale screening. However, each VCE video is typically about eight hours long, and its analysis usually takes an endoscopist about two hours even with simple computational aids, thus limiting its applicability to large-scale studies. In this paper, we propose a novel computational framework that leverages recent advances in artificial intelligence, based on the deep learning paradigm, to effectively support the whole screening procedure, from video transmission to automated lesion identification to reporting. More specifically, our approach handles multiple video uploads at the same time, processes them automatically to identify key video frames with potential lesions (for subsequent analysis by endoscopists), and provides physicians with means to compare the findings with either previously detected lesions or with images and scientific information from relevant retrieved documents, for a more accurate final diagnosis.
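To make the screening step concrete, the following minimal sketch (in PyTorch; not the authors' actual pipeline) scores individual VCE frames with a binary lesion classifier and keeps the high-scoring ones as candidate key frames; the backbone choice, the single-logit head and the 0.8 threshold are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Hypothetical binary classifier: lesion vs. normal mucosa.
    backbone = models.resnet18(weights=None)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)
    backbone = backbone.to(device).eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    @torch.no_grad()
    def select_key_frames(frames, threshold=0.8):
        """Return (index, score) pairs for the PIL frames whose
        estimated lesion probability exceeds `threshold`."""
        selected = []
        for i, frame in enumerate(frames):
            x = preprocess(frame).unsqueeze(0).to(device)
            p = torch.sigmoid(backbone(x)).item()
            if p >= threshold:
                selected.append((i, p))
        return selected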
- Published
- 2019
37. Generating Synthetic Video Sequences by Explicitly Modeling Object Motion
- Author
-
Daniela Giordano, Mubarak Shah, Pierluca D'Oro, Concetto Spampinato, and Simone Palazzo
- Subjects
Computer science, business.industry, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 02 engineering and technology, 010501 environmental sciences, 01 natural sciences, Motion (physics), Visual Objects, Component (UML), 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Point (geometry), Computer vision, Artificial intelligence, business, computer, Encoder, 0105 earth and related environmental sciences, computer.programming_language, Generator (mathematics) - Abstract
Recent GAN-based video generation approaches model videos as the combination of a time-independent scene component and a time-varying motion component, thus factorizing the generation problem into generating background and foreground separately. One of the main limitations of current approaches is that both factors are learned by mapping a single source latent space to videos, which complicates the generation task, as a single data point must be informative of both background and foreground content. In this paper we propose a GAN framework for video generation that, instead, employs two latent spaces in order to structure the generative process in a more natural way: (1) a latent space to generate the static visual content of a scene (background), which remains the same for the whole video, and (2) a latent space where motion is encoded as a trajectory between sampled points and whose dynamics are modeled through an RNN encoder (jointly trained with the generator and the discriminator) and then mapped by the generator to visual objects’ motion. Performance evaluation showed that our approach is able to effectively control the generation process and to synthesize more realistic videos than state-of-the-art methods.
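A minimal sketch of the two-latent-space idea follows (toy dimensions, and a fully connected decoder standing in for the paper's convolutional generator); only the separation between a static background code and an RNN-encoded motion trajectory reflects the abstract, the rest is assumed.

    import torch
    import torch.nn as nn

    class TwoLatentVideoGenerator(nn.Module):
        def __init__(self, z_bg=64, z_motion=16, hidden=64, img=64):
            super().__init__()
            self.img = img
            self.rnn = nn.GRU(z_motion, hidden, batch_first=True)  # motion encoder
            # Toy decoder: [static background code | per-frame motion code] -> frame.
            self.decode = nn.Sequential(
                nn.Linear(z_bg + hidden, 256), nn.ReLU(),
                nn.Linear(256, 3 * img * img), nn.Tanh(),
            )

        def forward(self, z_bg, motion_traj):
            # z_bg: (B, z_bg) static scene code shared by all frames;
            # motion_traj: (B, T, z_motion) sampled trajectory points.
            h, _ = self.rnn(motion_traj)                  # (B, T, hidden)
            z = z_bg.unsqueeze(1).expand(-1, h.size(1), -1)
            x = self.decode(torch.cat([z, h], dim=-1))    # (B, T, 3*img*img)
            return x.view(-1, h.size(1), 3, self.img, self.img)

    # Usage: video = TwoLatentVideoGenerator()(torch.randn(2, 64), torch.randn(2, 16, 16))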
- Published
- 2018
38. Nonparametric label propagation using mutual local similarity in nearest neighbors
- Author
-
Isaak Kavasidis, Daniela Giordano, Simone Palazzo, and Concetto Spampinato
- Subjects
Similarity (geometry), business.industry, Computer science, Data-driven approaches, Big data, Nonparametric statistics, Pattern recognition, Scale (descriptive set theory), computer.software_genre, Image (mathematics), k-nearest neighbors algorithm, Task (computing), ComputingMethodologies_PATTERNRECOGNITION, Computer Science::Computer Vision and Pattern Recognition, Nearest-neighbor, Signal Processing, Parametric model, Computer Vision and Pattern Recognition, Artificial intelligence, Data mining, business, computer, Software - Abstract
Highlights: label propagation by means of nearest-neighbor search and "mutual local similarity"; effectiveness and efficiency of quantized HoG descriptors for large-scale image retrieval; example application to low-quality underwater images and fish classification. The shift from model-based approaches to data-driven ones is opening new frontiers in computer vision. Several tasks which required the development of sophisticated parametric models can now be solved through simple algorithms, by offloading the complexity of the task to the amount of available data. However, in order to develop data-driven approaches, it is necessary to have large annotated datasets. Unfortunately, manual labeling of large-scale datasets is a complex, error-prone and tedious task, especially when dealing with noisy images or with fine-grained visual tasks. In this paper we present an automatic label propagation approach that transfers labels from a small set of manually labeled images to a large set of unlabeled items by means of nearest-neighbor search operating on HoG image descriptors. In particular, we introduce the concept of mutual local similarity between the labeled query image and its nearest neighbors as the condition to be verified for propagating labels. The performance evaluation, carried out on the COREL 5K dataset and on a dataset of 20 million underwater low-quality images, showed how big data combined with simple nonparametric approaches makes it possible to solve complex visual tasks effectively.
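The mutual-similarity gate can be read as a mutual nearest-neighbour test; the sketch below is one plausible interpretation (the value of k and the use of scikit-learn are assumptions, and descriptor extraction is left out), not the authors' exact criterion.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def propagate_labels(X_lab, y_lab, X_unl, k=5):
        """Give each unlabeled HoG descriptor in X_unl the label of its
        nearest labeled neighbour, but only when the two items are mutually
        among each other's nearest neighbours; otherwise return -1."""
        nn_lab = NearestNeighbors(n_neighbors=1).fit(X_lab)
        nn_unl = NearestNeighbors(n_neighbors=k).fit(X_unl)
        _, lab_idx = nn_lab.kneighbors(X_unl)   # nearest labeled item of each unlabeled one
        _, unl_idx = nn_unl.kneighbors(X_lab)   # k nearest unlabeled items of each labeled one
        y_out = np.full(len(X_unl), -1)
        for i, j in enumerate(lab_idx[:, 0]):
            if i in unl_idx[j]:                 # "mutual local similarity" gate
                y_out[i] = y_lab[j]
        return y_out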
- Published
- 2015
- Full Text
- View/download PDF
39. Brain2Image
- Author
-
Mubarak Shah, Concetto Spampinato, Isaak Kavasidis, Simone Palazzo, and Daniela Giordano
- Subjects
Visual perception, Computer science, media_common.quotation_subject, Variational autoencoder, 02 engineering and technology, Electroencephalography, 03 medical and health sciences, 0302 clinical medicine, Neuroimaging, Reading (process), Media Technology, 0202 electrical engineering, electronic engineering, information engineering, medicine, Computer vision, EEG, Representation (mathematics), media_common, Image generation, Computer Graphics and Computer-Aided Design, 1707, Software, medicine.diagnostic_test, business.industry, Deep learning, Pattern recognition, 020201 artificial intelligence & image processing, Noise (video), Artificial intelligence, business, 030217 neurology & neurosurgery, Generative grammar - Abstract
Reading the human mind has been a hot topic in the last decades, and recent research in neuroscience has found evidence of the possibility of decoding, from neuroimaging data, how the human brain works. At the same time, the recent rediscovery of deep learning, combined with the great interest of the scientific community in generative methods, has enabled the generation of realistic images by learning a data distribution from noise. The quality of generated images increases when the input data conveys information on the visual content of the images. Leveraging these recent trends, in this paper we present an approach for generating images using visually-evoked brain signals recorded through an electroencephalograph (EEG). More specifically, we recorded EEG data from several subjects while they observed images on a screen and tried to regenerate the seen images. To achieve this goal, we developed a deep-learning framework consisting of an LSTM stacked with a generative method, which learns a more compact and noise-free representation of EEG data and employs it to generate the visual stimuli evoking specific brain responses. Our Brain2Image approach was trained and tested using EEG data from six subjects while they were looking at images from 40 ImageNet classes. As generative models, we compared variational autoencoders (VAE) and generative adversarial networks (GAN). The results show that, indeed, our approach is able to generate images drawn from the same distribution as the shown images. Furthermore, GANs, despite generating less realistic images, show better performance than VAEs, especially as concerns sharpness. The obtained performance provides useful hints that EEG contains patterns related to visual content and that such patterns can be used to effectively generate images that are semantically coherent with the evoking visual stimuli.
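With assumed tensor shapes, the encoding stage can be sketched as follows: an LSTM compresses a multi-channel EEG segment into a compact, noise-reduced code that a VAE decoder or GAN generator can then turn into an image (dimensions and layer choices are illustrative, not the paper's).

    import torch
    import torch.nn as nn

    class EEGEncoder(nn.Module):
        def __init__(self, channels=128, hidden=128, code=100):
            super().__init__()
            self.lstm = nn.LSTM(channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, code)

        def forward(self, eeg):            # eeg: (B, T, channels)
            out, _ = self.lstm(eeg)
            return self.head(out[:, -1])   # last hidden state -> (B, code)

    # The code can then be concatenated with noise and fed to the generative
    # model, e.g. image = decoder(torch.cat([EEGEncoder()(eeg), noise], dim=1)).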
- Published
- 2017
- Full Text
- View/download PDF
40. Generative Adversarial Networks Conditioned by Brain Signals
- Author
-
Mubarak Shah, Isaak Kavasidis, Concetto Spampinato, Simone Palazzo, and Daniela Giordano
- Subjects
medicine.diagnostic_test, business.industry, Computer science, Representation (systemics), Inference, Pattern recognition, Software, 1707, 02 engineering and technology, Electroencephalography, Object (computer science), Manifold, Visualization, 03 medical and health sciences, 0302 clinical medicine, Recurrent neural network, 0202 electrical engineering, electronic engineering, information engineering, medicine, 020201 artificial intelligence & image processing, Artificial intelligence, business, Set (psychology), 030217 neurology & neurosurgery - Abstract
Recent advancements in generative adversarial networks (GANs), using deep convolutional models, have supported the development of image generation techniques able to reach satisfactory levels of realism. Further improvements have been proposed to condition GANs to generate images matching a specific object category or a short text description. In this work, we build on the latter class of approaches and investigate the possibility of driving and conditioning the image generation process by means of brain signals recorded through an electroencephalograph (EEG) while users look at images from a set of 40 ImageNet object categories, with the objective of generating the seen images. To accomplish this task, we first demonstrate that EEG signals of brain activity encode visually-related information that allows us to accurately discriminate between visual object categories; accordingly, we extract a more compact class-dependent representation of EEG data using recurrent neural networks. Afterwards, we use the learned EEG manifold to condition image generation employing GANs, which, during inference, read EEG signals and convert them into images. We tested our generative approach using EEG signals recorded from six subjects while looking at images of the aforementioned 40 visual classes. The results show that for classes represented by well-defined visual patterns (e.g., pandas, airplanes), the generated images are realistic and closely resemble those evoking the EEG signals used for conditioning the GANs, resulting in an actual reading-the-mind process.
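Conditioning itself can be as simple as concatenating the learned EEG code with the noise vector at the generator input; the sketch below uses a toy fully connected generator with assumed dimensions rather than the paper's architecture.

    import torch
    import torch.nn as nn

    class EEGConditionedGenerator(nn.Module):
        def __init__(self, z_dim=100, eeg_dim=100, img=64):
            super().__init__()
            self.img = img
            self.net = nn.Sequential(
                nn.Linear(z_dim + eeg_dim, 512), nn.ReLU(),
                nn.Linear(512, 3 * img * img), nn.Tanh(),
            )

        def forward(self, z, eeg_code):
            # The EEG embedding steers which object category gets synthesized.
            x = self.net(torch.cat([z, eeg_code], dim=1))
            return x.view(-1, 3, self.img, self.img)

    # Inference-time "mind reading": images = gen(noise, eeg_encoder(eeg_signals)).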
- Published
- 2017
- Full Text
- View/download PDF
41. Analyses of the MYRRHA spallation loop using the system code ATHLET
- Author
-
Katrien Van Tichelen, Kiril Velkov, Simone Palazzo, and Georg Lerchl
- Subjects
Thermal efficiency, Materials science, business.industry, Nuclear engineering, Radioactive waste, Nuclear power, Forced convection, Coolant, Nuclear physics, Nuclear Energy and Engineering, Heat exchanger, Heat transfer, Spallation, business - Abstract
One of the most interesting technologies recognized as promising for the future of nuclear energy is the Accelerator Driven System (ADS). The purpose of the ADS is to function as an element of an integrated nuclear power enterprise, comprising conventional and advanced power reactors, for energy production and for reducing the radiotoxicity of the nuclear waste produced by these power reactors before entombment in a geologic repository (Stanculescu, 2000). The use of new types of coolants such as molten lead or lead–bismuth eutectic alloy (LBE) is another particular feature of the design concept of these reactors, since it permits taking advantage of the higher boiling temperature of Heavy Liquid Metals (HLM) compared to water, leading to an improvement in thermal efficiency. Furthermore, it allows the reactor to be operated at a lower pressure, thus reducing the probability of a Loss-Of-Coolant Accident (LOCA) and, consequently, increasing reactor safety. The proposed use of relatively new types of coolants, especially in combination with new fuel types and cladding materials, demands specific attention to the thermal–hydraulics and core mechanics in normal and abnormal conditions. For that reason, a detailed analysis using system codes is necessary. In this paper, a new version of the ATHLET system code is tested, in which the physical properties of liquid metals like sodium, lead and LBE are implemented. The object of this study is the spallation loop of the MYRRHA facility, in which LBE is circulated by forced convection to remove the heat deposited by a proton beam. A detailed nodalization is set up for performing thermal–hydraulic calculations for both nominal conditions and accident scenarios, in order to obtain a good characterization of the entire loop. The start-up transient verified the correct removal of the heat generated in the target by the foreseen heat exchanger. During accidental transients, it was noted that the level of LBE in the beam line changes in agreement with preliminary studies on the target device. The pump failure test represents the most dangerous scenario, as the temperature in the target area reaches a very high value within a few seconds after the blockage of the pump impeller. The results obtained have subsequently been compared to those achieved with previous numerical simulations performed using a version of the RELAP5/mod 3.3 system code that was modified at the University of Pisa to account for LBE properties. The comparison has confirmed the capability of the new version of the ATHLET code for the analysis of hydraulic circuits cooled by liquid metals, although the two codes use different heat transfer correlations.
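The paper does not reproduce the codes' heat transfer correlations, but the flavour of a liquid-metal correlation can be sketched with the classic Seban–Shimazaki form Nu = 5.0 + 0.025 Pe^0.8 (fully developed pipe flow at constant wall temperature); the LBE property values in the example call are rough assumed figures, not handbook data.

    def htc_liquid_metal(velocity, d_hyd, rho, cp, k, mu):
        """Convective heat transfer coefficient [W/(m^2 K)] for fully
        developed liquid-metal flow in a circular duct."""
        re = rho * velocity * d_hyd / mu   # Reynolds number
        pr = mu * cp / k                   # Prandtl number (<< 1 for liquid metals)
        pe = re * pr                       # Peclet number
        nu = 5.0 + 0.025 * pe ** 0.8       # Seban-Shimazaki correlation
        return nu * k / d_hyd

    # Roughly LBE-like properties in a 5 cm duct at 2 m/s:
    h = htc_liquid_metal(2.0, 0.05, rho=10300.0, cp=146.0, k=12.0, mu=1.7e-3)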
- Published
- 2013
- Full Text
- View/download PDF
42. An innovative web-based collaborative platform for video annotation
- Author
-
Concetto Spampinato, Roberto Di Salvo, Isaak Kavasidis, Simone Palazzo, and Daniela Giordano
- Subjects
Ground truth data, Computer Networks and Communications, Computer science, Image segmentation, Video labeling, 02 engineering and technology, computer.software_genre, Annotation, 0202 electrical engineering, electronic engineering, information engineering, Media Technology, Web application, Image retrieval, Ground truth, Multimedia, business.industry, 020207 software engineering, Usability, Object detection, Hardware and Architecture, Video tracking, 020201 artificial intelligence & image processing, User interface, business, computer, Software - Abstract
Large-scale labeled datasets are of key importance for the development of automatic video analysis tools as they, on the one hand, allow training multi-class classifiers and, on the other hand, support the algorithms' evaluation phase. This is widely recognized by the multimedia and computer vision communities, as witnessed by the growing number of available datasets; however, the research still lacks annotation tools able to meet user needs, since a lot of human concentration is necessary to generate high-quality ground truth data. Nevertheless, it is not feasible to collect large video ground truths, covering as many scenarios and object categories as possible, by exploiting only the effort of isolated research groups. In this paper we present a collaborative web-based platform for video ground truth annotation. It features an easy and intuitive user interface that allows plain video annotation and instant sharing/integration of the generated ground truths, in order not only to alleviate a large part of the effort and time needed, but also to increase the quality of the generated annotations. The tool has been online for the last four months and, to date, we have collected about 70,000 annotations. A comparative performance evaluation has also shown that our system outperforms existing state-of-the-art methods in terms of annotation time, annotation quality and system usability.
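The paper does not publish its data model, but a collaboratively shared ground-truth record of the kind described might look like the following sketch; every field name here is invented for illustration.

    import json

    annotation = {
        "video_id": "clip_0042",
        "annotator": "user_17",
        "frame": 120,
        "objects": [
            {
                "label": "fish",
                # polygon contour as (x, y) pixel pairs
                "contour": [[34, 50], [60, 48], [71, 66], [40, 70]],
            }
        ],
    }
    print(json.dumps(annotation, indent=2))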
- Published
- 2013
- Full Text
- View/download PDF
43. Generating Knowledge-Enriched Image Annotations for Fine-Grained Visual Classification
- Author
-
Simone Palazzo, Concetto Spampinato, Francesca Murabito, and Daniela Giordano
- Subjects
Contextual image classification, business.industry, Computer science, 020207 software engineering, Context (language use), Pattern recognition, 02 engineering and technology, Object (computer science), Visual appearance, Image (mathematics), Knowledge base, Bounding overwatch, 0202 electrical engineering, electronic engineering, information engineering, Key (cryptography), 020201 artificial intelligence & image processing, Artificial intelligence, business - Abstract
Exploiting high-level visual knowledge is the key to a great leap in image classification, in particular, and computer vision, in general. In this paper, we present a tool for generating knowledge-enriched visual annotations and use it to build a benchmarking dataset for a complex classification problem that cannot be solved by learning low- and middle-level visual descriptor distributions only. The resulting VegImage dataset contains 3,872 images of 24 fruit varieties, more than 60,000 bounding boxes (portraying the different varieties of fruits as well as context objects such as leaves, etc.) and a large knowledge base (over 1,000,000 OWL triples) containing a-priori knowledge about object visual appearance. We also tested existing fine-grained and CNN-based classification methods on this dataset, showing the difficulty that purely visual methods have in tackling it.
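As a purely hypothetical illustration of how such an OWL knowledge base could be consumed, here is a sketch using rdflib; the namespace, individuals and predicates are invented, since the paper does not specify its vocabulary.

    from rdflib import Graph, Literal, Namespace

    VEG = Namespace("http://example.org/vegimage#")   # invented namespace
    g = Graph()
    g.add((VEG.GoldenDelicious, VEG.hasColour, Literal("yellow")))
    g.add((VEG.GoldenDelicious, VEG.growsNear, VEG.Leaf))

    # A-priori appearance knowledge retrieved to guide classification:
    q = """
    PREFIX veg: <http://example.org/vegimage#>
    SELECT ?colour WHERE { veg:GoldenDelicious veg:hasColour ?colour . }
    """
    for row in g.query(q):
        print(row.colour)   # -> yellow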
- Published
- 2017
44. Implicit Vs. Explicit Human Feedback for Interactive Video Object Segmentation
- Author
-
Simone Palazzo, Concetto Spampinato, Daniela Giordano, and Francesca Murabito
- Subjects
Exploit, Computer science, Interactive video, business.industry, 05 social sciences, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 02 engineering and technology, Benchmarking, Object (computer science), Set (abstract data type), 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, 0501 psychology and cognitive sciences, Segmentation, Computer vision, Artificial intelligence, business, 050107 human factors, User feedback, Energy (signal processing) - Abstract
This paper investigates how to exploit human feedback for interactive object segmentation in videos. In particular, we present an interactive video object segmentation approach where humans can contribute either explicitly, by clicking on objects of interest in videos, or implicitly, while looking at video sequences. User feedback is then translated into a set of spatio-temporal constraints for an energy-based minimization problem. We tested the method on standard benchmarking datasets using both eye-gaze data and user clicks. The results indicated that our method outperformed existing automated and interactive methods regardless of the type of human feedback (explicit or implicit), and that click-based feedback was more reliable than eye-gaze feedback.
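One plausible reading of how explicit clicks become constraints is to fold them into the unary term of the energy, so that clicked pixels are strongly biased towards the foreground label; the sketch below illustrates that idea under assumed data structures, not the authors' exact formulation.

    import numpy as np

    def add_click_constraints(unary_fg, unary_bg, clicks, weight=1e6):
        """unary_fg/unary_bg are per-pixel label costs of shape (H, W);
        `clicks` is a list of (row, col) foreground clicks. Clicked pixels
        get a prohibitive background cost, acting as hard constraints."""
        unary_bg = unary_bg.copy()
        for r, c in clicks:
            unary_bg[r, c] += weight   # choosing "background" there is ~forbidden
        return unary_fg, unary_bg

    # The constrained unaries then enter the usual spatio-temporal energy
    # minimization (e.g. graph cuts over pixels across frames).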
- Published
- 2017
- Full Text
- View/download PDF
45. Overview of the JET results in support to ITER
- Author
-
Alfredo Pironti, J. Simpson-Hutchinson, Sean Conroy, J. Uljanovs, D. Middleton-Gear, G. Possnert, C. Angioni, R. McAdams, Nicholas Watkins, E. Fortuna-Zalesna, A. Garcia-Carrasco, K. Gałązka, D. Nodwell, Pasquale Gaudio, R.A. Pitts, Svetlana V. Ratynskaia, Seppo Koivuranta, O. J. Kwon, C. Boyd, A. Boboc, M. Reinhart, Igor Lengar, Jarrod Leddy, Hiroyasu Utoh, J. H. Ahn, A. Stevens, J. Lönnroth, U. Kruezi, C. Guillemaut, N. Fonnesu, W. Studholme, Marek Rubel, P. Cahyna, O. McCormack, A. S. Jacobsen, D. Mazon, Gunta Kizane, N. Ashikawa, William Tang, J. Goff, F. Nespoli, Thomas Giegerich, G. Petravich, Angela Busse, Corneliu Porosnicu, M. Bigi, M. Wheatley, Christopher N. Bowman, J. Zacks, Ivan Calvo, U. Losada, H. Weisen, B. Bauvir, Stanislas Pamela, Sylvain Brémond, M.F. Stamp, Scott W. McIntosh, A. Rakha, S. Glöggler, V. Braic, C. Bottereau, S. Murphy, S. Knott, Luigi Fortuna, P. Bunting, N. Vora, S. D. Scott, A. Lazaros, R. Dejarnac, P. Buratti, H.R. Strauss, Gabriele Croci, M. Nocente, A. Hollingsworth, S. Reynolds, D. J. Wilson, D. D. Brown, T.C. Luce, S. Zoletnik, E. Nilsson, L. Laguardia, O. Marchuk, F.P. Orsitto, E. Cecil, V. Huber, J. B. Girardo, Stylianos Varoutis, M. D. Axton, Hyun-Tae Kim, E. Safi, Ch. Day, S. Arshad, J. Rzadkiewicz, P. Prior, A. Meigs, S. Esquembri, P. Gohil, K. Purahoo, Torbjörn Hellsten, N. Tipton, R. Guirlet, E. Joffrin, V. Aldred, Calin Besliu, M. Valentinuzzi, G. T. Jones, J. Edwards, Giuseppe Ambrosino, Laurent Marot, N. Lam, F. Crisanti, G. Verona Rinati, R. Marshal, Michael L. Brown, D. Frigione, D. Chandra, Michaele Freisinger, R. Olney, Jari Varje, S. Whetham, F. Parra Diaz, M. R. Hough, P. Dinca, F. Salzedas, A. Goodyear, R. Gowland, J. A. Wilson, J. Horacek, D. King, K. Flinders, I. R. Merrigan, M. Ghate, R. Michling, F. Saint-Laurent, G. Kocsis, D. Van Eester, C. Young, R. O. Dendy, A. Meakins, N. Pace, C. L. Hunter, D. Alegre, S. Foster, V. Riccardo, M. Bulman, C. Jeong, Marek Szawlowski, B. D. Whitehead, Vasily Kiptily, James Harrison, Hiroshi Tojo, G. T. A. Huijsmans, J. W. Coenen, X. Litaudon, Justin Williams, C. Hidalgo, S. Lesnoj, I.E. Day, A. W. Morris, R. Mooney, Yann Corre, S. Brezinsek, B. Gonçalves, M. Kresina, D. Coombs, F. Köchl, J. L. Gardarein, W. Davis, Aqsa Shabbir, Kanti M. Aggarwal, L. Colas, A. B. Kukushkin, Seppo Sipilä, Elisabeth Rachlew, Leena Aho-Mantila, O. G. Pompilian, E. Viezzer, Shane Cooper, Fabio Villone, P. Blanchard, Patrick Tamain, P. Camp, T. Szabolics, C. Luna, Kalle Heinola, H. G. Esser, V. Bobkov, James Buchanan, Andrew West, Hajime Urano, Roberta Lima Gomes, J.P. Coad, Th. Pütterich, A. Sinha, S. Hollis, R. D. Wood, G. D. Ewart, F. S. Griph, T. Kobuchi, X. Lefebvre, S. Warder, A.J. Thornton, S. Peschanyi, B. Graham, Giuseppe Telesca, M. Kempenaars, J. Bernardo, M. Hughes, Eva Belonohy, S. Schmuck, Kai Nordlund, T. J. Smith, P. Hertout, K. D. Lawson, M. Brix, Matthew Sibbald, Grégoire Hornung, C. Tame, Matthew Carr, S. Wray, P. T. Doyle, A. Somers, Giuseppe Chitarin, D. C. Campling, Mitul Abhangi, I. Jepu, David A. Wood, J. Miettunen, A. Sopplesa, Raffaele Fresa, S. Saarelma, M. Bacharis, J. Pozzi, P. Vallejos Olivares, Teddy Craciunescu, Raffaele Albanese, S. Knipe, Jason P. Byrne, A. C. C. Sips, S. Hazel, V. Kazantzidis, G. Stankūnas, A. Kundu, J. Mailloux, C. Guerard, Pramit Dutta, J. E. Boom, Eduardo Alves, P. Grazier, Saskia Mordijck, V.S. Neverov, Kazuo Hoshino, A. P. Vadgama, P. D. Brennan, P. Innocente, Piergiorgio Sonato, M. Irishkin, M. Berry, D. W. Robson, Dieter Leichtle, Fabio Pisano, P. 
McCullen, T. M. Huddleston, Kensaku Kamiya, D. Pacella, Tommy Ahlgren, A. Kirschner, B. Magesh, A. Ash, J. Mlynář, C. Castaldo, C. Marchetto, D. L. Hillis, M. Incelli, B. Viola, R. J. Robins, E. Andersson Sundén, G. Ramogida, Matthew Reinke, Gerd Meisl, Yannis Kominis, R. Proudfoot, C. Noble, N. J. Conway, V. P. Lo Schiavo, Jorge Luis Rodriguez, Hugo Bufferand, C. H. A. Hogben, B. Evans, R. Sartori, H. Greuner, M. G. Dunne, K. Schöpf, M. I. K. Santala, E. Giovannozzi, A. E. Shevelev, C. Gil, P. Boulting, P. Sagar, A.E. Shumack, P. A. Coates, C. Ayres, R. Prakash, C. Giroud, M. Parsons, J. C. Giacalone, S. Meshchaninov, A. Peackoc, G. De Temmerman, A.C.A. Figueiredo, D. Gallart, P. Santa, Sergey Popovichev, Ivan Lupelli, M. Valovic, Thomas Johnson, Y. Martynova, M. Rack, Olivier Sauter, J. Garcia, P. Siren, I. Balboa, S. Lee, Hans Nordman, R. Roccella, M. Faitsch, Julien Hillairet, Patrick J. McCarthy, C. Reux, Irena Ivanova-Stanik, V. Coccorese, Ye. O. Kazakov, R. El-Jorf, C. Hamlyn-Harris, Matthias Weiszflog, C. F. Maggi, Panagiotis Tolias, N. C. Hawkes, E. Clark, Bruno Santos, B. Sieglin, R. Rodionov, Roch Kwiatkowski, P. Denner, C. Woodley, Hugh Summers, Francesco Pizzo, G. Pucella, D. Croft, F. Di Maio, M. Tomes, D. Molina, A. Fernades, L. Amicucci, Marco Cecconello, A. Bisoffi, Z. Ul-Abidin, J. Wilkinson, H. Maier, S. Rowe, M. Beckers, P.J. Knight, E. Pajuste, Choong-Seock Chang, K. Deakin, M. Enachescu, A. Cobalt, D. Tskhakaya Jun, Michela Gelfusa, Rémy Nouailletas, R. Ragona, N. Bonanomi, D. A. Homfray, K. Riddle, Yann Camenen, J. D. Thomas, R.P. Doerner, Timothy P. Robinson, Y. Miyoshi, Ph. Jacquet, H. T. Lambertz, D. Pulley, A. Bécoulet, E. Tholerus, O. Bogar, M. Peterka, R. Crowe, C. Sommariva, A. R. Talbot, N. K. Butler, N. Reid, R. Zagórski, Gerald Pintsuk, Juri Romazanov, Andre Neto, G. L. Ravera, Paolo Arena, A. Manning, F. Durodié, Maryna Chernyshova, D. Karkinsky, Štefan Matejčík, J. P. Thomas, A. Wilson, L. Joita, R. Naish, P. Strand, M. Balden, M. Kaufman, T. Powell, V. Schmidt, D. Barnes, José Vicente, S. Doswon, Daniel F. Valcarcel, Claudia Corradino, R. Warren, Annette M. Hynes, J. D. Strachan, A. M. Messiaen, M. Kovari, O. Omolayo, D. M. Witts, R. C. Felton, C. Fleming, C. A. Marren, Patrick Maget, J. Galdon-Quiroga, H. R. Koslowski, Bruce Lipschultz, Ana Elisa Bauer de Camargo Silva, J. Waterhouse, R. J. Dumont, M. Schneider, Sara Moradi, K. J. Nicholls, M. Beldishevski, Benedikt Geiger, A. Jardin, A. Ekedahl, A. Lyssoivan, C. Waldon, Davide Galassi, F. Jaulmes, A. Kirk, Yannick Marandet, F. Hasenbeck, Gabor Szepesi, R. C. Pereira, J. Juul Rasmussen, Nobuyuki Aiba, Michelle E. Walker, Gábor Cseh, Scott W. Mosher, R. Bastow, A. Di Siena, E. Lazzaro, M. Curuia, C. D. Challis, Z. Ghani, J. Deane, João M. C. Sousa, Henrik Sjöstrand, T. O'Gorman, H. R. Wilson, P. Devynck, M. Price, C. A. Thompson, Daniele Marocco, A. Cullen, M. Clark, M. Lennholm, D. Carralero, N. Balshaw, Roland Sabot, I. Stepanov, N. Petrella, Filippo Sartori, L. W. Packer, P. Thomas, M. Lungu, A. V. Krasilnikov, R. Young, Jonathan Graves, J. C. Hillesheim, Mǎdǎlina Vlad, Duccio Testa, Pierre Dumortier, Paulo Carvalho, M. Gosk, Yong-Su Na, M. Buckley, Carlos A. Silva, V. Fuchs, K. Vasava, P. A. Tigwell, B. Wakeling, M. Medland, M. Bellinger, K. Gal, Petter Ström, E. Veshchev, F. Nabais, A. Wynn, L. Lauro Taroni, B. Beckett, L. Gil, M. Towndrow, Brian Grierson, Harry M. Meyer, V. Philipps, A. de Castro, D. Kinna, D. Conka, Göran Ericsson, L. Piron, J. Hawkins, D. Cooper, Kenneth Hammond, V.V. 
Parail, Cristian Ruset, G.J. van Rooij, M. N. A. Beurskens, N. Fawlk, G. Evison, M. Van De Mortel, N. Marcenko, B. Slade, Th. Franke, Simone Peruzzo, N. den Harder, D. Baião, A. Martin de Aguilera, Frederic Imbeaux, Carlo Sozzi, J.L. de Pablos, J. Svensson, A. Withycombe, Ane Lasa, H. Sheikh, V.A. Yavorskij, Nick Walkden, E. Lerche, C. S. Gibson, Roberto Zanino, Y. Peysson, David Hatch, B. Bazylev, E. de la Cal, S. Hacquin, T. D. V. Haupt, S. A. Silburn, T.T.C. Jones, Maria Teresa Porfiri, Walid Helou, S. E. Sharapov, M. Zerbini, Ken W Bell, Marco Marinelli, Kyriakos Hizanidis, J. M. Fontdecaba, N. Teplova, K. K. Kirov, S. Vartanian, W. W. Pires de Sa, T. C. Hender, J. K. Blackburn, I. Monakhov, H. Patten, P. A. Simmons, Y. Austin, J. Regana, Stefano Coda, Amanda J. Page, D. Fuller, António J.N. Batista, A. Horton, P. Heesterman, S. Cramp, J. Hobirk, F. Clairet, A. Burckhart, M. Allinson, Larry R. Baylor, W. Leysen, D. B. Gin, P. Nielsen, A. Kantor, Yueqiang Liu, A.V. Stephen, Jose Ramon Martin-Solis, P. Mantica, B. C. Regan, Aleksander Drenik, A. Lukin, L. Thorne, G. Nemtsev, J. Denis, M. E. Graham, D. Rigamonti, W. Van Renterghem, M. Tardocchi, M. Koubiti, A. Malaquias, M. Tsalas, A. Cufar, Giuseppe Prestopino, D. Kogut, N. Pomaro, J. Keep, Jochen Linke, Shimpei Futatani, Boris Breizman, A. Sirinelli, M. Chandler, M. Fortune, F. Degli Agostini, I. Jenkins, T. Spelzini, G. Calabrò, O. N. Kent, A. Lunniss, Etienne Hodille, Z. Vizvary, Volker Naulin, T. Eich, F. Mink, A. Alkseev, P. W. Haydon, Massimo Angelone, Norberto Catarino, J. Lapins, Roberto Pasqualotto, R. Lawless, T. Schlummer, F. Bonelli, M. Wischmeier, Stéphane Devaux, G. Saibene, Dirk Reiser, Y. R. Martin, H. Bergsåker, Jon Godwin, Alessia Santucci, C. Lane, Justyna Grzonka, Ph. Mertens, Claudio Verona, David Moulton, E. Delabie, Anna Salmi, P. G. Smith, T. Bolzonella, Silvio Ceccuzzi, Ulrich Fischer, G. Liu, M. A. Henderson, M. Marinucci, T. Suzuki, Jakub Bielecki, João Figueiredo, M. Afzal, J. Cane, Robert Hager, Luciano Bertalot, M. Firdaouss, G. Tvalashvili, D. Hepple, D. Esteve, M. De Bock, Y. Baranov, R. D'Inca, G. De Tommasi, Ch. Linsmeier, T. Nicolas, I. J. Pearson, P. Finburg, Ireneusz Książek, S. Talebzadeh, A. Czarnecka, A. Botrugno, M. Gethins, Bohdan Bieg, R. Baughan, I. Borodkina, B. Kos, A. Muraro, T. Vasilopoulou, G. Hermon, S.J. Wukitch, Jari Likonen, D. P. Coster, Guglielmo Rubinacci, I. H. Coffey, Justine M. Kent, S. E. Dorling, J. Dankowski, Geert Verdoolaege, Daisuke Nishijima, R. Clarkson, E. R. Solano, M. Stephen, A. Lescinskis, P. Staniec, Karl Schmid, M. Mayer, Peter Lang, T. Franklin, M.I. Williams, C. G. Elsmore, F. Maviglia, C. Di Troia, C. Penot, A. Zarins, Pierre Manas, D. F. Gear, Yu Gao, Philipp Drews, E. Letellier, A. S. Thompson, L. Forsythe, I. Zychor, E. Khilkevich, A. Manzanares, T. Nakano, Paulo Rodrigues, J. Edmond, Sebastián Dormido-Canto, R. Dux, C. Appelbee, L. Moser, Angelo Cenedese, D. Fagan, N. Richardson, Giuseppe Gorini, V. Rohde, R. Paprok, João P. S. Bizarro, P. Aleynikov, M. Sertoli, Ł. Świderski, Simone Palazzo, O. W. Davies, D. Douai, N. Macdonald, M. Baruzzo, J. López-Razola, M. Lungaroni, D. Clatworthy, R. Bravanec, J. Lovell, Ambrogio Fasoli, S.-P. Pehkonen, M. E. Puiatti, P. Papp, G. Bodnar, V. Aslanyan, A. Weckmann, K. A. Taylor, R. Henriques, I. T. Chapman, Ewa Pawelec, Miles M. Turner, Steven J. Meitner, M. Bernert, Ph. Maquet, R. C. Meadows, A. Shaw, N. Vianello, L. Barrera Orte, Tomas Markovic, A. Fil, A. S. Couchman, Inessa Bolshakova, J. Fyvie, Konstantina Mergia, J. 
Gallagher, R.V. Budny, Frank Leipold, C. J. Rapson, R. C. Lobel, Gennady V. Miloshevsky, K.-D. Zastrow, Ph. Duckworth, Gianluca Rubino, G. Withenshaw, S. Maruyama, S. P. Hallworth Cook, M. Newman, Jérôme Bucalossi, P. Drewelow, Nuno Cruz, D. Iglesias, I. Nedzelski, T. Donne, P. Leichuer, R. Cesario, M. D. J. Bright, T. Boyce, N. Imazawa, Per Petersson, R. King, A. Loving, L. Garzotti, Jorge Ferreira, G. Corrigan, D. Sandiford, B. Tal, P. Puglia, Daniel Tegnered, J. Karhunen, James S. Wright, Tom Wauters, J. McKehon, K. Rathod, Olivier Février, Alessandro Formisano, Petra Bilkova, M. Groth, Ricardo Magnus Osorio Galvao, F. Medina, S. Collins, H. J. Boyer, Elena Bruno, Horacio Fernandes, M. J. Stead, R. Paccagnella, J. Kaniewski, Ion E. Stamatelatos, F. Causa, M. F. F. Nave, A. Patel, D. C. McDonald, L. Moreira, Mariano Ruiz, K. Dylst, Raymond A. Shaw, A. Brett, Jane Johnston, P. P. Pereira Puglia, J. Ongena, N. A. Benterman, V. N. Amosov, Christian Grisolia, J. Simpson, C. Perez von Thun, Jan Weiland, P. Tonner, F. Belli, T. Odupitan, T. Dittmar, Edmund Highcock, Taina Kurki-Suonio, I. Uytdenhouwen, Estelle Gauthier, M. Oberkofler, B. Alper, Iris D. Young, S. Soare, Yuji Hatano, D. Reece, D. Borodin, M. Moneti, W. Yanling, S. Mianowski, K. Fenton, Stephen J. Bailey, R. Coelho, Sandra C. Chapman, E. Łaszyńska, A. R. Field, F.J. Martínez, Anders Nielsen, M. Smithies, M. J. Mantsinen, A. J. Capel, N. D. Smith, A. Pires dos Reis, M.-L. Mayoral, T. Loarer, P. Carman, N. Grazier, S. Breton, J. M. A. Bradshaw, Alexandre C. Pereira, Fulvio Auriemma, Fulvio Militello, Barbara Cannas, D. Ulyatt, A. Kappatou, P. Blatchford, R. Scannell, B. I. Oswuigwe, Darren Price, Robert E. Grove, D. Guard, M. Leyland, G. Stubbs, J. W. Banks, V.V. Plyusnin, M. S. J. Rainford, Andrea Murari, Sanjeev Ranjan, A. Huber, V. Krasilnikov, C. Bower, H. Leggate, S. Abduallev, P. Tsavalas, G. Giruzzi, K. Maczewa, Colin Roach, P. Beaumont, R. P. Johnson, Anna Widdowson, L. A. Kogan, A. Baron Wiechec, Markus Airila, J. Morris, Robert Skilton, Katarzyna Słabkowska, M. A. Barnard, Jean-Paul Booth, Alessandro Pau, R. Price, R. Bament, M. Tokitani, I. Turner, T. Vu, P. Huynh, S.N. Gerasimov, D. I. Refy, Yunfeng Liang, Anders Hjalmarsson, S. Dalley, Roberto Ambrosino, O. Hemming, T. R. Blackman, Y. Zhou, Vasile Zoita, P. Vincenzi, A. Loarte, C. Rayner, Martin Imrisek, M. Tripsky, C. Mazzotta, A. Uccello, V. Basiuk, Lide Yao, V. Goloborod'ko, S. Villari, B. P. Duval, N. Bulmer, W. Zhang, L. Hackett, D. N. Borba, M. Halitovs, Mario Pillon, H. Arnichand, Alberto Alfier, A. Lawson, A. Masiello, T. Makkonen, A. Vitins, D. Rendell, D. Paton, L. Avotina, A. Krivska, M. Maslov, Richard Verhoeven, Marc Goniche, A. Broslawski, Marica Rebai, E. de la Luna, E. Militello-Asp, V. Cocilovo, L. Carraro, Michael Fitzgerald, Bernardo B. Carvalho, D. Young, C.G. Lowry, F. J. Casson, L.-G. Eriksson, T. M. Biewer, B. Esposito, F.G. Rimini, J. Fessey, G. Kaveney, S. Hall, Robin Barnsley, Michael Lehnen, N. Bekris, L. F. Ruchko, P. Batistoni, E. Alessi, M. G. O'Mullane, D. S. Darrow, C. N. Grundy, N. Hayter, Ivo S. Carvalho, M. Brombin, Enrico Zilli, M. Valisa, M. Reich, S. Panja, C. Gurl, Charles Harrington, Emmanuele Peluso, M. Porton, Michael Walsh, D. Falie, A. Reed, Jacob Eriksson, P. Macheta, J. M. Faustin, S. Cortes, S. Fietz, P. Piovesan, D. Ciric, Eric Nardon, R. Neu, Bojiang Ding, G.A. Rattá, F. Reimold, R. Craven, M. Cox, J. Orszagh, Aaro Järvinen, A. S. Thrysøe, A. Shepherd, I. Ďuran, Andrew M. Edwards, A. Kinch, J. Beal, M. 
Gherendi, Martin Köppen, D. Samaddar, P. Dalgliesh, I. Vinyar, J. Jansons, Nengchao Wang, J. Wu, John Wright, S. Wiesen, C. King, Alessandra Fanni, L. D. Horton, N. Krawczyk, J. Buch, K. Krieger, Václav Petržílka, D. Schworer, C. Watts, T. Keenan, Andrea Malizia, B. D. Stevens, P. Trimble, C. P. Lungu, V. Prajapati, Marco Ariola, C. Wellstood, S. Gilligan, Mirko Salewski, Michael Barnes, Florin Spineanu, H. Doerk, C. Kennedy, S. Jachmich, J. Caumont, Isabel L. Nunes, A. Petre, A. Kallenbach, M. Anghel, B. Lomanowski, Marco Riva, M. Romanelli, G. De Masi, T. May-Smith, T. Xu, A. Goussarov, S. Romanelli, M. Okabayashi, A. Baker, R. Salmon, T. Tala, Nicolas Fedorczak, S. Lanthaler, Giuliana Sias, J. Risner, Clarisse Bourdelle, M. E. Manso, Fabio Moro, R. Lucock, M. Bassan, M. T. Ogawa, V. Thompson, A. M. Whitehead, S. D. A. Reyes Cortes, Igor Bykov, Gennady Sergienko, E. Stefanikova, Mattia Frasca, H. Dabirikhah, Lorenzo Frassinetti, N. Dzysiuk, D. L. Keeling, Juan Manuel López, M. Turnyanskiy, Daniel Dunai, David Taylor, Arturo Buscarino, Carolina Björkas, A. Baciero, S. Meigh, M. Garcia-Munoz, Massimiliano Mattei, M. Hill, Gwyndaf Evans, S. Minucci, Xiang Gao, A. V. Chankin, Francesco Romanelli, A. Lahtinen, L. Giacomelli, A. Owen, Jesús Vega, Jonathan Citrin, Antti Hakola, Petr Vondracek, Sehyun Kwak, P. Abreu, L. Meneses, S. S. Medley, G. Gervasini, Surya K. Pathak, Kristel Crombé, M. Cleverly, H.S. Kim, C. Stan-Sion, Nobuyuki Asakura, E. Wang, A. Cardinali, L. Fazendeiro, R. Cavazzana, P. J. Lomas, J. Hawes, G. Stables, Silvia Spagnolo, S. P. Hotchin, N. R. Green, Slawomir Jednorog, Ewa Kowalska-Strzęciwilk, A. Martin, Linwei Li, Rajnikant Makwana, Richard Goulding, I. Voitsekhovitch, M. Bowden, I. Kodeli, Peter Hawkins, S. S. Henderson, Ondrej Ficker, Carl Hellesen, D. Yadikin, Fabio Subba, Luka Snoj, Anthony Laing, N. Ben Ayed, Mario Cavinato, M. Goodliffe, C. Clements, D. Kenny, Axel Klix, S. Gee, R. J. E. Smith, P. de Vries, L. Fittill, Min-Gu Yoo, S. Menmuir, K. Cave-Ayland, S. Potzel, D. Grist, K. Blackman, S. A. Robinson, Rodney Walker, David Pfefferlé, W. Broeckx, D. Harting, S. G. J. Tyrrell, F. Binda, L. Horvath, Davide Flammini, P. V. Edappala, Raul Moreno, G. M. D. Hogeweij, P. Card, A. Hagar, Ion Tiseanu, Rita Lorenzini, L. Appel, Jet Contributors, J. Flanagan, C. Paz Soldan, U. Samm, Otto Asunta, F. Eriksson, C. Taliercio, F. S. Zaitsev, G. F. Matthews, Tuomas Koskela, P. J. Howarth, D. Terranova, M. Skiba, Amanda Hubbard, R. Otin, K. G. McClements, M. Park, R. McKean, C. Christopher Klepper, I. Karnowska, Peter J. Pool, G. Ciraolo, and Jennifer M. Lehmann
Wiesen, S, Wilkinson, J, Williams, J, Williams, M, Wilson, A, Wilson, D, Wilson, H, Wilson, J, Wischmeier, M, Withenshaw, G, Withycombe, A, Witts, D, Wood, D, Wood, R, Woodley, C, Wray, S, Wright, J, Wu, J, Wukitch, S, Wynn, A, Xu, T, Yadikin, D, Yanling, W, Yao, L, Yavorskij, V, Yoo, M, Young, C, Young, D, Young, I, Young, R, Zacks, J, Zagorski, R, Zaitsev, F, Zanino, R, Zarins, A, Zastrow, K, Zerbini, M, Zhang, W, Zhou, Y, Zilli, E, Zoita, V, Zoletnik, S, Zychor, I, Materials Physics, Department of Physics, European Commission, Litaudon, X., Abduallev, S., Abhangi, M., Abreu, P., Afzal, M., Aggarwal, K. M., Ahlgren, T., Ahn, J. H., Aho-Mantila, L., Aiba, N., Airila, M., Albanese, R., Aldred, V., Alegre, D., Alessi, E., Aleynikov, P., Alfier, A., Alkseev, A., Allinson, M., Alper, B., Alves, E., Ambrosino, G., Ambrosino, R., Amicucci, L., Amosov, V., Andersson Sundén, E., Angelone, M., Anghel, M., Angioni, C., Appel, L., Appelbee, C., Arena, P., Ariola, M., Arnichand, H., Arshad, S., Ash, A., Ashikawa, N., Aslanyan, V., Asunta, O., Auriemma, F., Austin, Y., Avotina, L., Axton, M. D., Ayres, C., Bacharis, M., Baciero, A., Baião, D., Bailey, S., Baker, A., Balboa, I., Balden, M., Balshaw, N., Bament, R., Banks, J. W., Baranov, Y. F., Barnard, M. A., Barnes, D., Barnes, M., Barnsley, R., Baron Wiechec, A., Barrera Orte, L., Baruzzo, M., Basiuk, V., Bassan, M., Bastow, R., Batista, A., Batistoni, P., Baughan, R., Bauvir, B., Baylor, L., Bazylev, B., Beal, J., Beaumont, P. S., Beckers, M., Beckett, B., Becoulet, A., Bekris, N., Beldishevski, M., Bell, K., Belli, F., Bellinger, M., Belonohy, É., Ben Ayed, N., Benterman, N. A., Bergsåker, H., Bernardo, J., Bernert, M., Berry, M., Bertalot, L., Besliu, C., Beurskens, M., Bieg, B., Bielecki, J., Biewer, T., Bigi, M., Bílková, P., Binda, F., Bisoffi, A., Bizarro, J. P. S., Björkas, C., Blackburn, J., Blackman, K., Blackman, T. R., Blanchard, P., Blatchford, P., Bobkov, V., Boboc, A., Bodnár, G., Bogar, O., Bolshakova, I., Bolzonella, T., Bonanomi, N., Bonelli, F., Boom, J., Booth, J., Borba, D., Borodin, D., Borodkina, I., Botrugno, A., Bottereau, C., Boulting, P., Bourdelle, C., Bowden, M., Bower, C., Bowman, C., Boyce, T., Boyd, C., Boyer, H. J., Bradshaw, J. M. A., Braic, V., Bravanec, R., Breizman, B., Bremond, S., Brennan, P. D., Breton, S., Brett, A., Brezinsek, S., Bright, M. D. J., Brix, M., Broeckx, W., Brombin, M., Brosławski, A., Brown, D. P. D., Brown, M., Bruno, E., Bucalossi, J., Buch, J., Buchanan, J., Buckley, M. A., Budny, R., Bufferand, H., Bulman, M., Bulmer, N., Bunting, P., Buratti, P., Burckhart, A., Buscarino, A., Busse, A., Butler, N. K., Bykov, I., Byrne, J., Cahyna, P., Calabrò, G., Calvo, I., Camenen, Y., Camp, P., Campling, D. C., Cane, J., Cannas, B., Capel, A. J., Card, P. J., Cardinali, A., Carman, P., Carr, M., Carralero, D., Carraro, L., Carvalho, B. B., Carvalho, I., Carvalho, P., Casson, F. J., Castaldo, C., Catarino, N., Caumont, J., Causa, F., Cavazzana, R., Cave-Ayland, K., Cavinato, M., Cecconello, M., Ceccuzzi, S., Cecil, E., Cenedese, A., Cesario, R., Challis, C. D., Chandler, M., Chandra, D., Chang, C. S., Chankin, A., Chapman, I. T., Chapman, S. C., Chernyshova, M., Chitarin, G., Ciraolo, G., Ciric, D., Citrin, J., Clairet, F., Clark, E., Clark, M., Clarkson, R., Clatworthy, D., Clements, C., Cleverly, M., Coad, J. P., Coates, P. A., Cobalt, A., Coccorese, V., Cocilovo, V., Coda, S., Coelho, R., Coenen, J. 
W., Coffey, I., Colas, L., Collins, S., Conka, D., Conroy, S., Conway, N., Coombs, D., Cooper, D., Cooper, S. R., Corradino, C., Corre, Y., Corrigan, G., Cortes, S., Coster, D., Couchman, A. S., Cox, M. P., Craciunescu, T., Cramp, S., Craven, R., Crisanti, F., Croci, G., Croft, D., Crombé, K., Crowe, R., Cruz, N., Cseh, G., Cufar, A., Cullen, A., Curuia, M., Czarnecka, A., Dabirikhah, H., Dalgliesh, P., Dalley, S., Dankowski, J., Darrow, D., Davies, O., Davis, W., Day, C., Day, I. E., De Bock, M., de Castro, A., de la Cal, E., de la Luna, E., De Masi, G., de Pablos, J. L., De Temmerman, G., De Tommasi, G., de Vries, P., Deakin, K., Deane, J., Degli Agostini, F., Dejarnac, R., Delabie, E., den Harder, N., Dendy, R. O., Denis, J., Denner, P., Devaux, S., Devynck, P., Di Maio, F., Di Siena, A., Di Troia, C., Dinca, P., D’Inca, R., Ding, B., Dittmar, T., Doerk, H., Doerner, R. P., Donné, T., Dorling, S. E., Dormido-Canto, S., Doswon, S., Douai, D., Doyle, P. T., Drenik, A., Drewelow, P., Drews, P., Duckworth, Ph., Dumont, R., Dumortier, P., Dunai, D., Dunne, M., Ďuran, I., Durodié, F., Dutta, P., Duval, B. P., Dux, R., Dylst, K., Dzysiuk, N., Edappala, P. V., Edmond, J., Edwards, A. M., Edwards, J., Eich, Th., Ekedahl, A., El-Jorf, R., Elsmore, C. G., Enachescu, M., Ericsson, G., Eriksson, F., Eriksson, J., Eriksson, L. G., Esposito, B., Esquembri, S., Esser, H. G., Esteve, D., Evans, B., Evans, G. E., Evison, G., Ewart, G. D., Fagan, D., Faitsch, M., Falie, D., Fanni, A., Fasoli, A., Faustin, J. M., Fawlk, N., Fazendeiro, L., Fedorczak, N., Felton, R. C., Fenton, K., Fernades, A., Fernandes, H., Ferreira, J., Fessey, J. A., Février, O., Ficker, O., Field, A., Fietz, S., Figueiredo, A., Figueiredo, J., Fil, A., Finburg, P., Firdaouss, M., Fischer, U., Fittill, L., Fitzgerald, M., Flammini, D., Flanagan, J., Fleming, C., Flinders, K., Fonnesu, N., Fontdecaba, J. M., Formisano, A., Forsythe, L., Fortuna, L., Fortuna-Zalesna, E., Fortune, M., Foster, S., Franke, T., Franklin, T., Frasca, M., Frassinetti, L., Freisinger, M., Fresa, R., Frigione, D., Fuchs, V., Fuller, D., Futatani, S., Fyvie, J., Gál, K., Galassi, D., Gałązka, K., Galdon-Quiroga, J., Gallagher, J., Gallart, D., Galvão, R., Gao, X., Gao, Y., Garcia, J., Garcia-Carrasco, A., García-Muñoz, M., Gardarein, J. -L., Garzotti, L., Gaudio, P., Gauthier, E., Gear, D. F., Gee, S. J., Geiger, B., Gelfusa, M., Gerasimov, S., Gervasini, G., Gethins, M., Ghani, Z., Ghate, M., Gherendi, M., Giacalone, J. C., Giacomelli, L., Gibson, C. S., Giegerich, T., Gil, C., Gil, L., Gilligan, S., Gin, D., Giovannozzi, E., Girardo, J. B., Giroud, C., Giruzzi, G., Glöggler, S., Godwin, J., Goff, J., Gohil, P., Goloborod’Ko, V., Gomes, R., Gonçalves, B., Goniche, M., Goodliffe, M., Goodyear, A., Gorini, G., Gosk, M., Goulding, R., Goussarov, A., Gowland, R., Graham, B., Graham, M. E., Graves, J. P., Grazier, N., Grazier, P., Green, N. R., Greuner, H., Grierson, B., Griph, F. S., Grisolia, C., Grist, D., Groth, M., Grove, R., Grundy, C. N., Grzonka, J., Guard, D., Guérard, C., Guillemaut, C., Guirlet, R., Gurl, C., Utoh, H. H., Hackett, L. J., Hacquin, S., Hagar, A., Hager, R., Hakola, A., Halitovs, M., Hall, S. J., Hallworth Cook, S. P., Hamlyn-Harris, C., Hammond, K., Harrington, C., Harrison, J., Harting, D., Hasenbeck, F., Hatano, Y., Hatch, D. R., Haupt, T. D. V., Hawes, J., Hawkes, N. C., Hawkins, J., Hawkins, P., Haydon, P. W., Hayter, N., Hazel, S., Heesterman, P. J. L., Heinola, K., Hellesen, C., Hellsten, T., Helou, W., Hemming, O. N., Hender, T. 
C., Henderson, M., Henderson, S. S., Henriques, R., Hepple, D., Hermon, G., Hertout, P., Hidalgo, C., Highcock, E. G., Hill, M., Hillairet, J., Hillesheim, J., Hillis, D., Hizanidis, K., Hjalmarsson, A., Hobirk, J., Hodille, E., Hogben, C. H. A., Hogeweij, G. M. D., Hollingsworth, A., Hollis, S., Homfray, D. A., Horáček, J., Hornung, G., Horton, A. R., Horton, L. D., Horvath, L., Hotchin, S. P., Hough, M. R., Howarth, P. J., Hubbard, A., Huber, A., Huber, V., Huddleston, T. M., Hughes, M., Huijsmans, G. T. A., Hunter, C. L., Huynh, P., Hynes, A. M., Iglesias, D., Imazawa, N., Imbeaux, F., Imríšek, M., Incelli, M., Innocente, P., Irishkin, M., Ivanova-Stanik, I., Jachmich, S., Jacobsen, A. S., Jacquet, P., Jansons, J., Jardin, A., Järvinen, A., Jaulmes, F., Jednoróg, S., Jenkins, I., Jeong, C., Jepu, I., Joffrin, E., Johnson, R., Johnson, T., Johnston, Jane, Joita, L., Jones, G., Jones, T. T. C., Hoshino, K. K., Kallenbach, A., Kamiya, K., Kaniewski, J., Kantor, A., Kappatou, A., Karhunen, J., Karkinsky, D., Karnowska, I., Kaufman, M., Kaveney, G., Kazakov, Y., Kazantzidis, V., Keeling, D. L., Keenan, T., Keep, J., Kempenaars, M., Kennedy, C., Kenny, D., Kent, J., Kent, O. N., Khilkevich, E., Kim, H. T., Kim, H. S., Kinch, A., King, C., King, D., King, R. F., Kinna, D. J., Kiptily, V., Kirk, A., Kirov, K., Kirschner, A., Kizane, G., Klepper, C., Klix, A., Knight, P., Knipe, S. J., Knott, S., Kobuchi, T., Köchl, F., Kocsis, G., Kodeli, I., Kogan, L., Kogut, D., Koivuranta, S., Kominis, Y., Köppen, M., Kos, B., Koskela, T., Koslowski, H. R., Koubiti, M., Kovari, M., Kowalska-Strzęciwilk, E., Krasilnikov, A., Krasilnikov, V., Krawczyk, N., Kresina, M., Krieger, K., Krivska, A., Kruezi, U., Książek, I., Kukushkin, A., Kundu, A., Kurki-Suonio, T., Kwak, S., Kwiatkowski, R., Kwon, O. J., Laguardia, L., Lahtinen, A., Laing, A., Lam, N., Lambertz, H. T., Lane, C., Lang, P. T., Lanthaler, S., Lapins, J., Lasa, A., Last, J. R., Łaszyńska, E., Lawless, R., Lawson, A., Lawson, K. D., Lazaros, A., Lazzaro, E., Leddy, J., Lee, S., Lefebvre, X., Leggate, H. J., Lehmann, J., Lehnen, M., Leichtle, D., Leichuer, P., Leipold, F., Lengar, I., Lennholm, M., Lerche, E., Lescinskis, A., Lesnoj, S., Letellier, E., Leyland, M., Leysen, W., Li, L., Liang, Y., Likonen, J., Linke, J., Linsmeier, Ch., Lipschultz, B., Liu, G., Liu, Y., Lo Schiavo, V. P., Loarer, T., Loarte, A., Lobel, R. C., Lomanowski, B., Lomas, P. J., Lönnroth, J., López, J. M., López-Razola, J., Lorenzini, R., Losada, U., Lovell, J. J., Loving, A. B., Lowry, C., Luce, T., Lucock, R. M. A., Lukin, A., Luna, C., Lungaroni, M., Lungu, C. P., Lungu, M., Lunniss, A., Lupelli, I., Lyssoivan, A., Macdonald, N., Macheta, P., Maczewa, K., Magesh, B., Maget, P., Maggi, C., Maier, H., Mailloux, J., Makkonen, T., Makwana, R., Malaquias, A., Malizia, A., Manas, P., Manning, A., Manso, M. E., Mantica, P., Mantsinen, M., Manzanares, A., Maquet, Ph., Marandet, Y., Marcenko, N., Marchetto, C., Marchuk, O., Marinelli, M., Marinucci, M., Markovič, T., Marocco, D., Marot, L., Marren, C. A., Marshal, R., Martin, A., Martin, Y., Martín de Aguilera, A., Martínez, F. J., Martín-Solís, J. R., Martynova, Y., Maruyama, S., Masiello, A., Maslov, M., Matejcik, S., Mattei, M., Matthews, G. F., Maviglia, F., Mayer, M., Mayoral, M. L., May-Smith, T., Mazon, D., Mazzotta, C., Mcadams, R., Mccarthy, P. J., Mcclements, K. G., Mccormack, O., Mccullen, P. A., Mcdonald, D., Mcintosh, S., Mckean, R., Mckehon, J., Meadows, R. 
C., Meakins, A., Medina, F., Medland, M., Medley, S., Meigh, S., Meigs, A. G., Meisl, G., Meitner, S., Meneses, L., Menmuir, S., Mergia, K., Merrigan, I. R., Mertens, Ph., Meshchaninov, S., Messiaen, A., Meyer, H., Mianowski, S., Michling, R., Middleton-Gear, D., Miettunen, J., Militello, F., Militello-Asp, E., Miloshevsky, G., Mink, F., Minucci, S., Miyoshi, Y., Mlynář, J., Molina, D., Monakhov, I., Moneti, M., Mooney, R., Moradi, S., Mordijck, S., Moreira, L., Moreno, R., Moro, F., Morris, A. W., Morris, J., Moser, L., Mosher, S., Moulton, D., Murari, A., Muraro, A., Murphy, S., Asakura, N. N., Na, Y. S., Nabais, F., Naish, R., Nakano, T., Nardon, E., Naulin, V., Nave, M. F. F., Nedzelski, I., Nemtsev, G., Nespoli, F., Neto, A., Neu, R., Neverov, V. S., Newman, M., Nicholls, K. J., Nicolas, T., Nielsen, A. H., Nielsen, P., Nilsson, E., Nishijima, D., Noble, C., Nocente, M., Nodwell, D., Nordlund, K., Nordman, H., Nouailletas, R., Nunes, I., Oberkofler, M., Odupitan, T., Ogawa, M. T., O’Gorman, T., Okabayashi, M., Olney, R., Omolayo, O., O’Mullane, M., Ongena, J., Orsitto, F., Orszagh, J., Oswuigwe, B. I., Otin, R., Owen, A., Paccagnella, R., Pace, N., Pacella, D., Packer, L. W., Page, A., Pajuste, E., Palazzo, S., Pamela, S., Panja, S., Papp, P., Paprok, R., Parail, V., Park, M., Parra Diaz, F., Parsons, M., Pasqualotto, R., Patel, A., Pathak, S., Paton, D., Patten, H., Pau, A., Pawelec, E., Paz Soldan, C., Peackoc, A., Pearson, I. J., Pehkonen, S. -P., Peluso, E., Penot, C., Pereira, A., Pereira, R., Pereira Puglia, P. P., Perez von Thun, C., Peruzzo, S., Peschanyi, S., Peterka, M., Petersson, P., Petravich, G., Petre, A., Petrella, N., Petržilka, V., Peysson, Y., Pfefferlé, D., Philipps, V., Pillon, M., Pintsuk, G., Piovesan, P., Pires dos Reis, A., Piron, L., Pironti, A., Pisano, F., Pitts, R., Pizzo, F., Plyusnin, V., Pomaro, N., Pompilian, O. G., Pool, P. J., Popovichev, S., Porfiri, M. T., Porosnicu, C., Porton, M., Possnert, G., Potzel, S., Powell, T., Pozzi, J., Prajapati, V., Prakash, R., Prestopino, G., Price, D., Price, M., Price, R., Prior, P., Proudfoot, R., Pucella, G., Puglia, P., Puiatti, M. E., Pulley, D., Purahoo, K., Pütterich, Th., Rachlew, E., Rack, M., Ragona, R., Rainford, M. S. J., Rakha, A., Ramogida, G., Ranjan, S., Rapson, C. J., Rasmussen, J. J., Rathod, K., Rattá, G., Ratynskaia, S., Ravera, G., Rayner, C., Rebai, M., Reece, D., Reed, A., Réfy, D., Regan, B., Regaña, J., Reich, M., Reid, N., Reimold, F., Reinhart, M., Reinke, M., Reiser, D., Rendell, D., Reux, C., Reyes Cortes, S. D. A., Reynolds, S., Riccardo, V., Richardson, N., Riddle, K., Rigamonti, D., Rimini, F. G., Risner, J., Riva, M., Roach, C., Robins, R. J., Robinson, S. A., Robinson, T., Robson, D. W., Roccella, R., Rodionov, R., Rodrigues, P., Rodriguez, J., Rohde, V., Romanelli, F., Romanelli, M., Romanelli, S., Romazanov, J., Rowe, S., Rubel, M., Rubinacci, G., Rubino, G., Ruchko, L., Ruiz, M., Ruset, C., Rzadkiewicz, J., Saarelma, S., Sabot, R., Safi, E., Sagar, P., Saibene, G., Saint-Laurent, F., Salewski, M., Salmi, A., Salmon, R., Salzedas, F., Samaddar, D., Samm, U., Sandiford, D., Santa, P., Santala, M. I. K., Santos, B., Santucci, A., Sartori, F., Sartori, R., Sauter, O., Scannell, R., Schlummer, T., Schmid, K., Schmidt, V., Schmuck, S., Schneider, M., Schöpf, K., Schwörer, D., Scott, S. D., Sergienko, G., Sertoli, M., Shabbir, A., Sharapov, S. 
E., Shaw, A., Shaw, R., Sheikh, H., Shepherd, A., Shevelev, A., Shumack, A., Sias, G., Sibbald, M., Sieglin, B., Silburn, S., Silva, A., Silva, C., Simmons, P. A., Simpson, J., Simpson-Hutchinson, J., Sinha, A., Sipilä, S. K., Sips, A. C. C., Sirén, P., Sirinelli, A., Sjöstrand, H., Skiba, M., Skilton, R., Slabkowska, K., Slade, B., Smith, N., Smith, P. G., Smith, R., Smith, T. J., Smithies, M., Snoj, L., Soare, S., Solano, E. R., Somers, A., Sommariva, C., Sonato, P., Sopplesa, A., Sousa, J., Sozzi, C., Spagnolo, S., Spelzini, T., Spineanu, F., Stables, G., Stamatelatos, I., Stamp, M. F., Staniec, P., Stankūnas, G., Stan-Sion, C., Stead, M. J., Stefanikova, E., Stepanov, I., Stephen, A. V., Stephen, M., Stevens, A., Stevens, B. D., Strachan, J., Strand, P., Strauss, H. R., Ström, P., Stubbs, G., Studholme, W., Subba, F., Summers, H. P., Svensson, J., Świderski, Ł., Szabolics, T., Szawlowski, M., Szepesi, G., Suzuki, T. T., Tál, B., Tala, T., Talbot, A. R., Talebzadeh, S., Taliercio, C., Tamain, P., Tame, C., Tang, W., Tardocchi, M., Taroni, L., Taylor, D., Taylor, K. A., Tegnered, D., Telesca, G., Teplova, N., Terranova, D., Testa, D., Tholerus, E., Thomas, J., Thomas, J. D., Thomas, P., Thompson, A., Thompson, C. -A., Thompson, V. K., Thorne, L., Thornton, A., Thrysøe, A. S., Tigwell, P. A., Tipton, N., Tiseanu, I., Tojo, H., Tokitani, M., Tolias, P., Tomeš, M., Tonner, P., Towndrow, M., Trimble, P., Tripsky, M., Tsalas, M., Tsavalas, P., Tskhakaya jun, D., Turner, I., Turner, M. M., Turnyanskiy, M., Tvalashvili, G., Tyrrell, S. G. J., Uccello, A., Ul-Abidin, Z., Uljanovs, J., Ulyatt, D., Urano, H., Uytdenhouwen, I., Vadgama, A. P., Valcarcel, D., Valentinuzzi, M., Valisa, M., Vallejos Olivares, P., Valovic, M., Van De Mortel, M., Van Eester, D., Van Renterghem, W., van Rooij, G. J., Varje, J., Varoutis, S., Vartanian, S., Vasava, K., Vasilopoulou, T., Vega, J., Verdoolaege, G., Verhoeven, R., Verona, C., Verona Rinati, G., Veshchev, E., Vianello, N., Vicente, J., Viezzer, E., Villari, S., Villone, F., Vincenzi, P., Vinyar, I., Viola, B., Vitins, A., Vizvary, Z., Vlad, M., Voitsekhovitch, I., Vondráček, P., Vora, N., Vu, T., Pires de Sa, W. W., Wakeling, B., Waldon, C. W. F., Walkden, N., Walker, M., Walker, R., Walsh, M., Wang, E., Wang, N., Warder, S., Warren, R. J., Waterhouse, J., Watkins, N. W., Watts, C., Wauters, T., Weckmann, A., Weiland, J., Weisen, H., Weiszflog, M., Wellstood, C., West, A. T., Wheatley, M. R., Whetham, S., Whitehead, A. M., Whitehead, B. D., Widdowson, A. M., Wiesen, S., Wilkinson, J., Williams, J., Williams, M., Wilson, A. R., Wilson, D. J., Wilson, H. R., Wilson, J., Wischmeier, M., Withenshaw, G., Withycombe, A., Witts, D. M., Wood, D., Wood, R., Woodley, C., Wray, S., Wright, J., Wright, J. C., Wu, J., Wukitch, S., Wynn, A., Xu, T., Yadikin, D., Yanling, W., Yao, L., Yavorskij, V., Yoo, M. G., Young, C., Young, D., Young, I. D., Young, R., Zacks, J., Zagorski, R., Zaitsev, F. S., Zanino, R., Zarins, A., Zastrow, K. D., Zerbini, M., Zhang, W., Zhou, Y., Zilli, E., Zoita, V., Zoletnik, S., Zychor, I., Centre National de la Recherche Scientifique (CNRS)-Université Paris Diderot - Paris 7 (UPD7)-Institut National de Physique Nucléaire et de Physique des Particules du CNRS (IN2P3)-Université Pierre et Marie Curie - Paris 6 (UPMC), Centre National de la Recherche Scientifique (CNRS)-Institut de Chimie du CNRS (INC), Hôpital de Rangueil, CHU Toulouse [Toulouse], Laboratoire de microbiologie et génétique moléculaires (LMGM), Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Université Toulouse III - Paul Sabatier (UT3), Université de Lorraine (UL)-Institut de Chimie du CNRS (INC)-Centre National de la Recherche Scientifique (CNRS), Dipartimento di Energia [Milano] (DENG), Centre National de la Recherche Scientifique (CNRS)-École Centrale de Marseille (ECM)-Aix Marseille Université (AMU), Research Centre Julich (FZJ), Institute for Plasma Research, Instituto Superior Tecnico Lisboa, Queen's University Belfast, University of Helsinki, CEA, Department of Applied Physics, School services, SCI, National Institutes for Quantum and Radiological Science and Technology, VTT, University of Naples Federico II, Universidad Nacional de Educacion a Distancia, CNR, Russian Research Centre Kurchatov Institute, Universita degli Studi di Napoli Parthenope, Ente Per Le Nuove Tecnologie L'energia e l'ambiente, Troitsk Institute for Innovation and Fusion Research, Uppsala University, National Institute for Cryogenics and Isotopic Technology, Max-Planck-Institut fur Plasmaphysik, University of Catania, Fusion for Energy Joint Undertaking, National Institutes of Natural Sciences - National Institute for Fusion Science, Massachusetts Institute of Technology, University of Latvia, Imperial College London, CIEMAT, University of Oxford, EUROfusion Programme Management Unit, Oak Ridge National Laboratory, Karlsruhe Institute of Technology KIT, University of York, Royal Institute of Technology, Maritime University of Szczecin, H.
Niewodniczanski Institute of Nuclear Physics of the Polish Academy of Sciences, Czech Academy of Sciences, University of Trento, Ecole Polytechnique Federale de Lausanne (EPFL), Wigner Research Centre for Physics, Comenius University, University of Milan - Bicocca, National Institute for Optoelectronics, Fourth State Research, University of Texas at Austin, Belgian Nuclear Research Center, National Centre for Nuclear Research (NCBJ), Princeton University, CNRS, University of Cagliari, University of Warwick, Soltan Institute for Nuclear Studies, FOM Institute DIFFER, National Institute for Laser, Plasma and Radiation Physics, Ghent University, J. Stefan Institute, Universite de Lorraine, CAS - Institute of Plasma Physics, University of California at San Diego, Koninklijke Militaire School - Ecole Royale Militaire, Horia Hulubei National Institute of Physics and Nuclear Engineering, Chalmers University of Technology, School services, ELEC, Department of Signal Processing and Acoustics, Automaatio- ja systeemitekniik, Universidad Politecnica de Madrid, Second University of Naples, Warsaw University of Technology, Universita della Basilicata, Barcelona Supercomp. Center, Universidad de Sevilla, Centro Brasileiro de Pesquisas Fisicas, Department of Electrical Engineering and Automation, Sähkötekniikan laitos, University of Rome Tor Vergata, RAS - Ioffe Physico Technical Institute, General Atomics, University of Innsbruck, Fusion and Plasma Physics, University of Toyama, University of Strathclyde, National Technical University of Athens, Universita della Tuscia, Technical University of Denmark, Korea Advanced Institute of Science and Technology, Seoul National University, University College Cork, Vienna University of Technology, University of Opole, Daegu University, National Fusion Research Institute, Dublin City University, Universidad Politécnica de Madrid, PELIN LLC, Arizona State University, Universidad Complutense, University of Basel, Universidad Carlos III de Madrid, Consorzio CREATE, Demokritos National Centre for Scientific Research, Purdue University, Universite Libre de Bruxelles, School Services, ARTS, Department of Design, University of California Office of the President, Universidade de Sao Paulo, School Services, BIZ, Department of Information and Service Management, Lithuanian Energy Institute, HRS Fusion, Politecnico di Torino, University of Cassino, University of Electronic Science and Technology of China, Department of Electronics and Nanoengineering, Aalto-yliopisto, Aalto University, and Faculdade de Engenharia
- Subjects
Technology ,fusion ,Physics [Exact and natural sciences] ,Tokamak ,Nuclear engineering ,Diagnostics ,01 natural sciences ,ILW ,010305 fluids & plasmas ,law.invention ,[SPI.MECA.MEFL]Engineering Sciences [physics]/Mechanics [physics.med-ph]/Fluids mechanics [physics.class-ph] ,Plasma ,H-Mode Plasmas ,law ,ITER ,Disruption Prediction ,Collisionality ,Edge Localized Modes ,Operation ,JET ,Nuclear and High Energy Physics ,Condensed Matter Physics ,Physics ,Jet (fluid) ,Divertor ,Settore FIS/01 - Fisica Sperimentale ,Fusion, Plasma and Space Physics ,Density Peaking ,Carbon Wall ,Neutron transport ,Facing Components ,114 Physical sciences ,Nuclear physics ,Physical sciences [Natural sciences] ,Pedestal ,0103 physical sciences ,Nuclear fusion ,ddc:530 ,Neutron ,010306 general physics ,QC717 ,Fusion reactors ,ddc:600 - Abstract
The 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with the ITER first-wall materials has successfully taken place since their installation in 2011. A new multi-machine scaling of the type-I ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported, with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration and of recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information on the role of the first-wall material in fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high-performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014–2018 under grant agreement No 633053. Peer Reviewed. Article signed by 1,173 authors (the JET Contributors) from 113 participating institutions.
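The dimensionless figures of merit quoted in the abstract follow the usual tokamak conventions; the standard definitions below (in engineering units) are assumed here rather than restated from the article itself:

```latex
% Standard tokamak definitions assumed here (engineering units),
% not restated from the article record itself.
\[
  \beta_N = \beta_t[\%]\,\frac{a\,[\mathrm{m}]\;B_t\,[\mathrm{T}]}{I_p\,[\mathrm{MA}]},
  \qquad
  n_{GW}\,[10^{20}\,\mathrm{m^{-3}}] = \frac{I_p\,[\mathrm{MA}]}{\pi a^{2}\,[\mathrm{m^{2}}]}
\]
% beta_t: toroidal plasma beta; a: minor radius; B_t: toroidal field;
% I_p: plasma current; n/n_GW is the line-averaged density normalised
% to the Greenwald limit.
```

With the quoted values, n/nGW ~ 0.6 corresponds to a line-averaged density of roughly 60% of the Greenwald limit.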
- Published
- 2017
- Full Text
- View/download PDF
46. LifeCLEF 2016: Multimedia Life Species Identification Challenges
- Author
-
WP Willem Pier Vellinga, Simone Palazzo, Concetto Spampinato, Hervé Glotin, Henning Müller, Alexis Joly, Julien Champ, Robert Planqué, Pierre Bonnet, Hervé Goëau, Scientific Data Management (ZENITH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria), Université de Toulon (UTLN), Dipartimento di Ingegneria Informatica e delle Telecomunicazioni [University of Catania] (DIIT), Università degli studi di Catania = University of Catania (Unict), Centre de Coopération Internationale en Recherche Agronomique pour le Développement (Cirad), Xeno-canto foundation, Institut National de la Recherche Agronomique (INRA), Department of Mathematics [Amsterdam], Universiteit van Amsterdam (UvA), Vrije Universiteit Amsterdam [Amsterdam] (VU), Haute Ecole Spécialisée de Suisse Occidentale (HES-SO), Botanique et Modélisation de l'Architecture des Plantes et des Végétations (UMR AMAP), Université de Montpellier (UM)-Institut National de la Recherche Agronomique (INRA)-Centre de Coopération Internationale en Recherche Agronomique pour le Développement (Cirad)-Institut de Recherche pour le Développement (IRD [France-Sud])-Centre National de la Recherche Scientifique (CNRS), and the proceedings editors Norbert Fuhr, Paulo Quaresma, Teresa Gonçalves, Birger Larsen, Krisztian Balog, Craig Macdonald, Linda Cappellato, Nicola Ferro
- Subjects
Multimedia ,Computer science ,020207 software engineering ,02 engineering and technology ,Multimedia information retrieval ,15. Life on land ,computer.software_genre ,Convolutional neural network ,Bridge (nautical) ,Task (project management) ,Query expansion ,Identification (information) ,13. Climate action ,0202 electrical engineering, electronic engineering, information engineering ,Identity (object-oriented programming) ,Species identification ,020201 artificial intelligence & image processing ,[INFO]Computer Science [cs] ,computer - Abstract
Using multimedia identification tools is considered one of the most promising solutions to help bridge the taxonomic gap and build accurate knowledge of the identity, geographic distribution and evolution of living species. Large and structured communities of nature observers (e.g., iSpot, Xeno-canto, Tela Botanica) as well as large-scale monitoring equipment have started to produce outstanding collections of multimedia records. Unfortunately, the performance of state-of-the-art analysis techniques on such data is still not well understood and is far from meeting real-world requirements. The LifeCLEF lab evaluates these challenges through three tasks related to multimedia information retrieval and fine-grained classification problems in three domains. Each task is based on large volumes of real-world data, and the measured challenges are defined in collaboration with biologists and environmental stakeholders to reflect realistic usage scenarios. For each task, we report the methodology, the data sets, the results and the main outcomes.
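To make the fine-grained classification setting concrete, the following is a minimal sketch of the kind of CNN fine-tuning baseline commonly entered in such challenges; it is not any participant's actual system. The data directory, class count and hyperparameters are placeholders, and the ImageNet-pretrained ResNet-18 backbone is an illustrative choice.

```python
# Hypothetical fine-grained species classification baseline (sketch only).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_SPECIES = 10  # placeholder: set to the number of species in the task

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder expects one sub-directory per species ("data/train" is a placeholder).
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune an ImageNet-pretrained backbone, replacing its classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```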
- Published
- 2016
- Full Text
- View/download PDF
47. Fine-Grained Object Recognition in Underwater Visual Data
- Author
-
Frédéric Precioso, Concetto Spampinato, Pierre Joalland, Hervé Glotin, S. Paris, Diane Lingrand, Simone Palazzo, and K. Blanc
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Cognitive neuroscience of visual object recognition ,Fish species ,[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,02 engineering and technology ,010501 environmental sciences ,Object (computer science) ,01 natural sciences ,Set (abstract data type) ,Categorization ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,False positive paradox ,020201 artificial intelligence & image processing ,Computer vision ,14. Life underwater ,Artificial intelligence ,Underwater ,Precision and recall ,business ,Software ,0105 earth and related environmental sciences - Abstract
In this paper we investigate the fine-grained object categorization problem of determining fish species in low-quality visual data (images and videos) recorded in real-life settings. We first describe a new annotated dataset of about 35,000 fish images (the MA-35K dataset), derived from the Fish4Knowledge project and covering 10 fish species from the Eastern Indo-Pacific biogeographic zone. We then resort to a label propagation method able to transfer labels from MA-35K to a set of 20 million fish images, in order to achieve variability in fish appearance. The resulting annotated dataset, containing over one million annotations (AA-1M), was then manually checked, removing false positives as well as images with occlusions between fish or showing fish only partially. Finally, we randomly picked more than 30,000 fish images distributed among ten fish species, extracted from about 400 ten-minute videos, and used this data (both images and videos) for the fish task of the LifeCLEF 2014 contest. Together with the fine-grained visual dataset release, we also present two approaches for fish species classification, in still images and in videos respectively. Both approaches showed high performance in object classification (for some fish species, precision and recall were close to one) and outperformed state-of-the-art methods. In addition, although the dataset is unbalanced in the number of images per species, both methods (especially the one operating on still images) appear to be rather robust against the long-tail curse of data, showing the best performance on the less populated object classes.
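The label transfer step described above can be illustrated by a minimal sketch of similarity-based label propagation, assuming images are already encoded as feature vectors; this shows the general idea only, not the exact procedure used to build AA-1M. The function name, feature dimensionality and distance threshold are all assumptions.

```python
# Sketch of similarity-based label propagation over image features.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def propagate_labels(X_labeled, y_labeled, X_unlabeled, max_dist=0.5):
    """Assign each unlabeled item the label of its nearest labeled
    neighbour, but only when the match is close enough to trust."""
    nn_index = NearestNeighbors(n_neighbors=1).fit(X_labeled)
    dist, idx = nn_index.kneighbors(X_unlabeled)
    labels = y_labeled[idx[:, 0]]
    # Items with no sufficiently similar labeled neighbour stay unlabeled (-1),
    # leaving them for a manual-check stage like the one the paper describes.
    labels[dist[:, 0] > max_dist] = -1
    return labels

# Toy usage: random 128-d vectors standing in for CNN descriptors.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(100, 128)), rng.integers(0, 10, 100)
X_unlab = rng.normal(size=(500, 128))
print(propagate_labels(X_lab, y_lab, X_unlab)[:10])
```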
- Published
- 2016
- Full Text
- View/download PDF
48. A diversity-based search approach to support annotation of a large fish image dataset
- Author
-
Daniela Giordano, Concetto Spampinato, and Simone Palazzo
- Subjects
Similarity (geometry) ,Computer Networks and Communications ,Computer science ,02 engineering and technology ,computer.software_genre ,Set (abstract data type) ,Computer graphics ,k-NN search ,Annotation ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Information retrieval ,Relevance (information retrieval) ,Cluster analysis ,business.industry ,020207 software engineering ,Pattern recognition ,Object (computer science) ,Random forest ,ComputingMethodologies_PATTERNRECOGNITION ,Hardware and Architecture ,020201 artificial intelligence & image processing ,Data mining ,Artificial intelligence ,business ,computer ,Software ,Information Systems - Abstract
Label propagation consists of annotating an unlabeled dataset starting from a set of labeled items. However, most current methods exploit only image similarity between labeled and unlabeled images when searching for propagation candidates, which, especially in very large datasets, may result in retrieving mostly near-duplicate images. While such approaches are technically correct, as they maximize propagation precision, the resulting annotated dataset may not be as useful, since it lacks intra-class variability within the set of images sharing the same label. In this paper, we propose a label propagation approach that favors propagating an object's label to a set of images representing as many different views of that object as possible, while preserving the relevance of the retrieved items to the query. Our method combines a diversity-based clustering technique, built on a random forest framework, with a label propagation approach that effectively and efficiently propagates annotations using a similarity measure operating on clusters. The method was tested on a very large dataset of fish images, achieving good performance in automated label propagation and ensuring diversification of the annotated items while preserving precision.
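A rough sketch of the diversity idea follows: instead of propagating a label to the k nearest (often near-duplicate) images, cluster the candidates and propagate to one representative per cluster. Note that KMeans stands in here for the paper's random-forest-based clustering, which is not reproduced; the function name and cluster count are illustrative assumptions.

```python
# Diversity-aware candidate selection for label propagation (sketch only).
import numpy as np
from sklearn.cluster import KMeans

def diverse_candidates(query, candidates, n_views=5):
    """Return indices of candidates covering distinct 'views' of an object:
    from each cluster, keep the member most relevant to the query."""
    km = KMeans(n_clusters=n_views, n_init=10, random_state=0).fit(candidates)
    picks = []
    for c in range(n_views):
        members = np.where(km.labels_ == c)[0]
        # Within each cluster, relevance is plain Euclidean distance here.
        dists = np.linalg.norm(candidates[members] - query, axis=1)
        picks.append(members[np.argmin(dists)])
    return picks

# Toy usage: the selected images would all receive the query's label.
rng = np.random.default_rng(1)
query = rng.normal(size=128)
candidates = rng.normal(size=(200, 128))
print(diverse_candidates(query, candidates))
```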
- Published
- 2016
49. Fish Tracking
- Author
-
Daniela Giordano, Simone Palazzo, and Concetto Spampinato
- Published
- 2016
- Full Text
- View/download PDF
50. Fish detection
- Author
-
Daniela Giordano, Simone Palazzo, and Concetto Spampinato
- Published
- 2016