145 results for "G. Marbach"
Search Results
2. Meta Pseudo Labels for Anomaly Detection via Partially Observed Anomalies.
- Author
-
Sinong Zhao, Zhaoyang Yu, Xiaofei Wang, Trent G. Marbach, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2023
3. MSDN: A Multi-Subspace Deviation Net for Anomaly Detection.
- Author
-
Sinong Zhao, Zhaoyang Yu, Trent G. Marbach, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2022
4. An Evolving Network Model from Clique Extension.
- Author
-
Anthony Bonato, Ryan Cushman, Trent G. Marbach, and Zhiyuan Zhang
- Published
- 2022
5. Improving Load Balancing for Modern Data Centers Through Resource Equivalence Classes.
- Author
-
Kaiyue Duan, Yusen Li, Trent G. Marbach, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2021
6. The Iterated Local Directed Transitivity Model for Social Networks.
- Author
-
Anthony Bonato, Daniel W. Cranston, Melissa A. Huggan, Trent G. Marbach, and Raja Mutharasan
- Published
- 2020
7. Improving Load Balance via Resource Exchange in Large-Scale Search Engines.
- Author
-
Kaiyue Duan, Yusen Li, Trent G. Marbach, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2020
8. Hybrid Dynamic Pruning for Efficient and Effective Query Processing.
- Author
-
Wenxiu Fang, Trent G. Marbach, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2020
9. Audinet: A Decentralized Auditing System for Cloud Storage.
- Author
-
Meng Yan 0008, Jiajia Xu, Trent G. Marbach, Haitao Li, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2020
10. Themis: Efficient and Adaptive Resource Partitioning for Reducing Response Delay in Cloud Gaming.
- Author
-
Yusen Li, Haoyuan Liu, Xiwei Wang, Lingjun Pu, Trent G. Marbach, Shanjiang Tang, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2019
11. Towards a Latin-Square Search Engine.
- Author
-
Wenxiu Fang, Rebecca J. Stones, Trent G. Marbach, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2019
12. Predicting Hard Drive Failures for Cloud Storage Systems.
- Author
-
Dongshi Liu, Bo Wang, Peng Li 0026, Rebecca J. Stones, Trent G. Marbach, Gang Wang 0001, Xiaoguang Liu 0001, and Zhongwei Li
- Published
- 2019
13. Load Prediction for Data Centers Based on Database Service.
- Author
-
Rui Cao, Zhaoyang Yu, Trent G. Marbach, Jing Li 0036, Gang Wang 0001, and Xiaoguang Liu 0001
- Published
- 2018
14. Performance Analysis of 3D XPoint SSDs in Virtualized and Non-Virtualized Environments.
- Author
-
Jiachen Zhang, Peng Li 0026, Bo Liu, Trent G. Marbach, Xiaoguang Liu 0001, and Gang Wang 0001
- Published
- 2018
15. Gecko: A Resilient Dispersal Scheme for Multi-Cloud Storage
- Author
-
Meng Yan, Jiaqi Feng, Trent G. Marbach, Rebecca J. Stones, Gang Wang, and Xiaoguang Liu
- Subjects
Blockchain ,data recovery ,dispersal scheme ,integrity check ,Latin square ,multi-cloud ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
We have entered an era where copious amounts of sensitive data are being stored in the cloud. To meet the rising privacy, reliability, and verifiability needs, we propose Gecko, a multi-cloud dispersal scheme where: (a) the key used to encrypt the data file is the secret in a Latin-square-autotopism secret-sharing scheme, (b) data files and encryption keys are dispersed separately to multiple clouds, and (c) a blockchain-based integrity-check protocol is devised to pinpoint faulty data. Gecko enables fast and thorough key renewal: when a portion of the key (the secret) is leaked, we replace all shares of the partially-leaked secret without replacing the secret itself; this immediately counters a targeted attack on a particular file without re-encrypting the data file itself. Key renewal is further accelerated by the blockchain-based integrity check. We evaluate Gecko theoretically and experimentally against the traditional AONT-RS dispersal scheme, drawing two conclusions: 1) Gecko admits powerful key renewal and identification of damaged data, with a minor transfer overhead; and 2) Gecko performs key renewal three to five times faster than AONT-RS hybrid-slice renewal (the closest thing AONT-RS has to key renewal).
- Published
- 2019
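A note on the key-renewal idea above: Gecko's scheme lets every share of a partially leaked secret be replaced while the secret (the encryption key) stays fixed. The paper builds this from Latin-square autotopisms; the sketch below uses plain XOR-based additive sharing only to illustrate the share-refresh principle, and every name in it is illustrative rather than taken from Gecko.

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """n-of-n additive sharing: the shares XOR together to the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))
    return shares

def refresh(shares: list[bytes]) -> list[bytes]:
    """Re-randomise every share without changing the reconstructed secret:
    XOR in blinding values that cancel out across the whole share set."""
    blinds = [secrets.token_bytes(len(shares[0])) for _ in range(len(shares) - 1)]
    blinds.append(reduce(xor, blinds))            # the blinds XOR to zero
    return [xor(s, b) for s, b in zip(shares, blinds)]

key = secrets.token_bytes(16)
shares = split(key, 4)
assert reduce(xor, shares) == key
assert reduce(xor, refresh(shares)) == key        # same key, all-new shares
```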
16. Computing Autotopism Groups of Partial Latin Rectangles
- Author
-
Daniel Kotlar, Trent G. Marbach, Raúl M. Falcón, and Rebecca J. Stones
- Subjects
Range (mathematics) ,Software ,Group (mathematics) ,Computer science ,Backtracking ,business.industry ,Computation ,Overhead (computing) ,Graph (abstract data type) ,Arithmetic ,business ,Column (database) ,Theoretical Computer Science - Abstract
Computing the autotopism group of a partial Latin rectangle (PLR) can be performed in multiple ways. This study has two aims: comparing some of these methods experimentally to identify those that are competitive; and identifying design goals for developing practical software. We compare six families of algorithms (two backtracking and four graph-theoretic methods), with and without using entry invariants (EIs), in a range of settings. Two EIs are considered: frequencies of row, column, and symbol representatives; and 2 × 2 submatrices. The best approach to computing autotopism groups varies. When PLRs have many autotopisms (such as having very few entries or being a group table), the McKay, Meynert, and Myrvold (MMM) method computes generators for the autotopism group efficiently. (The MMM method is the standard way to compute autotopisms.) Otherwise, PLRs ordinarily have trivial or small autotopism groups, and the task is to verify this. The so-called PLR graph method is slightly more efficient in this setting than the MMM method (in some circumstances, around twice as fast). With an intermediate number of entries, the quick-to-compute strong EIs are effective at reducing the need for computation without introducing significant overhead. With a full or almost-full PLR, a more sophisticated EI is needed to reduce down-the-line computation. These results suggest a hybrid approach to computing autotopism groups: The software decides on suitable EIs based on the input; and the user chooses between the MMM or the PLR graph methods, depending on their dataset. This article expands the authors’ previous article Computing autotopism groups of PLRs: a pilot study .
- Published
- 2020
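For readers unfamiliar with the object being computed: an autotopism of a partial Latin rectangle is a triple of row, column and symbol permutations that maps the set of filled cells onto itself. The brute-force sketch below (assuming an n x n array over n symbols, entries given as (row, column, symbol) triples) enumerates all (n!)^3 candidate triples; this is only feasible for tiny n, which is precisely why the specialised algorithms compared in the paper matter.

```python
from itertools import permutations

def autotopisms(entries, n):
    """All permutation triples (alpha, beta, gamma) of {0,...,n-1} that map the
    entry set of a partial Latin rectangle onto itself (brute force)."""
    return [
        (a, b, g)
        for a in permutations(range(n))
        for b in permutations(range(n))
        for g in permutations(range(n))
        if {(a[r], b[c], g[s]) for r, c, s in entries} == entries
    ]

# Cayley table of Z_3, a (full) Latin square with a comparatively large group.
Z3 = {(r, c, (r + c) % 3) for r in range(3) for c in range(3)}
group = autotopisms(Z3, 3)
assert len(group) >= 9   # the nine 'translation' autotopisms alone guarantee this
```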
17. Tight Bounds on Probabilistic Zero Forcing on Hypercubes and Grids
- Author
-
Natalie C. Behague, Trent G. Marbach, and Paweł Prałat
- Subjects
Computational Theory and Mathematics ,Applied Mathematics ,Astrophysics::Solar and Stellar Astrophysics ,Discrete Mathematics and Combinatorics ,Astrophysics::Cosmology and Extragalactic Astrophysics ,Geometry and Topology ,Astrophysics::Galaxy Astrophysics ,Theoretical Computer Science - Abstract
Zero forcing is a deterministic iterative graph colouring process in which vertices are coloured either blue or white, and in every round, any blue vertices that have a single white neighbour force these white vertices to become blue. Here we study probabilistic zero forcing, where blue vertices have a non-zero probability of forcing each white neighbour to become blue. We explore the propagation time for probabilistic zero forcing on hypercubes and grids.
- Published
- 2022
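The propagation-time behaviour studied above can be explored empirically with a small Monte Carlo simulation. The sketch below builds the hypercube Q_d and runs probabilistic zero forcing under one common formulation (a white vertex w is forced by each blue neighbour independently with probability equal to the fraction of w's neighbours that are blue); the exact forcing rule analysed in the paper may differ.

```python
import random
from itertools import product

def hypercube(d):
    """Adjacency of the d-dimensional hypercube: vertices are 0/1 tuples,
    edges join tuples differing in exactly one coordinate."""
    return {
        v: [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(d)]
        for v in product((0, 1), repeat=d)
    }

def pzf_rounds(adj, start, rng=random):
    """Rounds until every vertex is blue under probabilistic zero forcing."""
    blue = set(start)
    rounds = 0
    while len(blue) < len(adj):
        newly = set()
        for w in adj:
            if w in blue:
                continue
            b = sum(u in blue for u in adj[w])      # blue neighbours of w
            p = b / len(adj[w])
            if any(rng.random() < p for _ in range(b)):
                newly.add(w)
        blue |= newly
        rounds += 1
    return rounds

adj = hypercube(4)
trials = [pzf_rounds(adj, {(0, 0, 0, 0)}) for _ in range(200)]
print(sum(trials) / len(trials))   # empirical mean propagation time on Q_4
```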
18. Pursuit-evasion games on latin square graphs
- Author
-
Shreya Ahirwar, Anthony Bonato, Leanna Gittins, Alice Huang, Trent G. Marbach, and Tomer Zaidman
- Subjects
FOS: Computer and information sciences ,Discrete Mathematics (cs.DM) ,Mathematics::History and Overview ,FOS: Mathematics ,Mathematics - Combinatorics ,Combinatorics (math.CO) ,Computer Science - Discrete Mathematics - Abstract
We investigate various pursuit-evasion parameters on latin square graphs, including the cop number, metric dimension, and localization number. The cop number of latin square graphs is studied, and for $k$-MOLS$(n)$, bounds for the cop number are given. If $n>(k+1)^2$, then the cop number is shown to be $k+2$. Lower and upper bounds are provided for the metric dimension and localization number of latin square graphs. The metric dimension of back-circulant latin squares shows that the lower bound is close to tight. Recent results on covers and partial transversals of latin squares provide the upper bound of $n+O\left(\frac{\log{n}}{\log{\log{n}}}\right)$ on the localization number of a latin square graph of order $n$.
- Published
- 2021
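The graphs studied above are built directly from a Latin square: one vertex per cell, two cells adjacent exactly when they share a row, a column, or a symbol, giving a 3(n-1)-regular graph. A minimal sketch of that construction:

```python
def latin_square_graph(L):
    """Latin square graph of L (a list of rows over symbols 0..n-1): vertices
    are cells (r, c); two cells are adjacent iff they share a row, a column,
    or a symbol."""
    n = len(L)
    cells = [(r, c) for r in range(n) for c in range(n)]
    return {
        (r1, c1): {
            (r2, c2)
            for (r2, c2) in cells
            if (r2, c2) != (r1, c1)
            and (r1 == r2 or c1 == c2 or L[r1][c1] == L[r2][c2])
        }
        for (r1, c1) in cells
    }

# Cyclic Latin square of order 4: cell (r, c) holds (r + c) mod 4.
L = [[(r + c) % 4 for c in range(4)] for r in range(4)]
G = latin_square_graph(L)
assert all(len(nbrs) == 3 * (4 - 1) for nbrs in G.values())   # 3(n-1)-regular
```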
19. The enumeration of cyclic mutually nearly orthogonal Latin squares
- Author
-
Trent G. Marbach, Fatih Demirkale, Janne I. Kokkala, and Diane Donovan
- Subjects
Conjecture ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,Graeco-Latin square ,01 natural sciences ,Combinatorics ,Set (abstract data type) ,symbols.namesake ,Orthogonality ,010201 computation theory & mathematics ,Latin square ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Enumeration ,Discrete Mathematics and Combinatorics ,Order (group theory) ,Variety (universal algebra) ,Mathematics - Abstract
In this paper, we study collections of mutually nearly orthogonal Latin squares (MNOLS), which come from a modification of the orthogonality condition for mutually orthogonal Latin squares. In particular, we find the maximum μ such that there exists a set of μ cyclic MNOLS of order n for n≤18, as well as providing a full enumeration of sets and lists of μ cyclic MNOLS of order n under a variety of equivalences with n≤18. This resolves in the negative a conjecture that proposed that the maximum μ for which a set of μ cyclic MNOLS of order n exists is [n/4]+1.
- Published
- 2019
20. Gecko: A Resilient Dispersal Scheme for Multi-Cloud Storage
- Author
-
Gang Wang, Feng Jiaqi, Rebecca J. Stones, Meng Yan, Xiaoguang Liu, and Trent G. Marbach
- Subjects
Scheme (programming language) ,General Computer Science ,business.industry ,Computer science ,Reliability (computer networking) ,Latin square ,General Engineering ,Cloud computing ,Encryption ,Identification (information) ,Blockchain ,multi-cloud ,Data file ,Overhead (computing) ,General Materials Science ,dispersal scheme ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,lcsh:TK1-9971 ,computer ,Cloud storage ,data recovery ,integrity check ,computer.programming_language ,Computer network - Abstract
We have entered an era where copious amounts of sensitive data are being stored in the cloud. To meet the rising privacy, reliability, and verifiability needs, we propose Gecko, a multi-cloud dispersal scheme where: (a) the key used to encrypt the data file is the secret in a Latin-square-autotopism secret-sharing scheme, (b) data files and encryption keys are dispersed separately to multiple clouds, and (c) a blockchain-based integrity-check protocol is devised to pinpoint faulty data. Gecko enables fast and thorough key renewal: when a portion of the key (the secret) is leaked, we replace all shares of the partially-leaked secret without replacing the secret itself; this immediately resists targeted attack to certain file without re-encrypting the data file itself. Key renewal is further accelerated by the blockchain-based integrity check. We evaluate Gecko theoretically and experimentally against the traditional AONT-RS dispersal scheme, drawing two conclusions: 1) Gecko admits powerful key renewal and identification of damaged data, with a minor transfer overhead; and 2) Gecko performs key renewal three to five times faster than AONT-RS hybrid-slice renewal (the closest thing AONT-RS has to key renewal).
- Published
- 2019
21. The iterated local transitivity model for hypergraphs
- Author
-
Natalie C. Behague, Anthony Bonato, Melissa A. Huggan, Rehan Malik, and Trent G. Marbach
- Subjects
FOS: Computer and information sciences ,Mathematics::Combinatorics ,Discrete Mathematics (cs.DM) ,Computer Science::Discrete Mathematics ,Applied Mathematics ,Discrete Mathematics and Combinatorics ,Computer Science - Discrete Mathematics - Abstract
Complex networks are pervasive in the real world, capturing dyadic interactions between pairs of vertices, and a large corpus has emerged on their mining and modeling. However, many phenomena consist of polyadic interactions between more than two vertices. Such complex hypergraphs arise from emails among groups of individuals, scholarly collaborations, and joint interactions of proteins in living cells. A key generative principle within social and other complex networks is transitivity, where friends of friends are more likely friends. The previously proposed Iterated Local Transitivity (ILT) model incorporated transitivity as an evolutionary mechanism. The ILT model provably satisfies many observed properties of social networks, such as densification, low average distances, and high clustering coefficients. We propose a new, generative model for complex hypergraphs based on transitivity, called the Iterated Local Transitivity Hypergraph (or ILTH) model. In ILTH, we iteratively apply the principle of transitivity to form new hypergraphs. The resulting model generates hypergraphs simulating properties observed in real-world complex hypergraphs, such as densification and low average distances. We consider properties unique to hypergraphs not captured by their 2-section. We show that certain motifs, which are specified subhypergraphs of small order, have faster growth rates in ILTH hypergraphs than in random hypergraphs with the same order and expected average degree. We show that the graphs admitting a homomorphism into the 2-section of the initial hypergraph appear as induced subgraphs in the 2-section of ILTH hypergraphs. We consider new and existing hypergraph clustering coefficients, and show that these coefficients have larger values in ILTH hypergraphs than in comparable random hypergraphs.
- Published
- 2021
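The ILT mechanism the abstract builds on is easy to state for graphs: in each step every existing vertex x receives a clone joined to x and to all of x's current neighbours; ILTH generalises this cloning step to hypergraphs. A minimal sketch of one graph ILT step, assuming integer vertex labels 0..n-1:

```python
def ilt_step(adj):
    """One Iterated Local Transitivity step on a graph with vertices 0..n-1:
    vertex x gets a clone n + x adjacent to x and to all of x's neighbours."""
    n = len(adj)
    new = {v: set(adj[v]) for v in range(n)}
    for x in range(n):
        clone = n + x
        new[clone] = {x} | adj[x]          # clone sees x and x's old neighbourhood
        for u in new[clone]:
            new[u].add(clone)
    return new

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}    # start from a triangle
for _ in range(2):
    adj = ilt_step(adj)
print(len(adj))   # the order doubles each step: 3 -> 6 -> 12
```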
22. The game of Cops and Eternal Robbers
- Author
-
Anthony Bonato, Fionn Mc Inerney, Melissa A. Huggan, and Trent G. Marbach
- Subjects
FOS: Computer and information sciences ,Computer Science::Computer Science and Game Theory ,General Computer Science ,Discrete Mathematics (cs.DM) ,Domination analysis ,0102 computer and information sciences ,02 engineering and technology ,[INFO.INFO-DM]Computer Science [cs]/Discrete Mathematics [cs.DM] ,01 natural sciences ,Theoretical Computer Science ,law.invention ,Computer Science::Robotics ,Integer ,law ,Computer Science::Discrete Mathematics ,0202 electrical engineering, electronic engineering, information engineering ,FOS: Mathematics ,Mathematics - Combinatorics ,Cartesian coordinate system ,Mathematics ,Discrete mathematics ,16. Peace & justice ,010201 computation theory & mathematics ,020201 artificial intelligence & image processing ,Combinatorics (math.CO) ,Constant (mathematics) ,Computer Science - Discrete Mathematics - Abstract
We introduce the game of Cops and Eternal Robbers played on graphs, where there are infinitely many robbers that appear sequentially over distinct plays of the game. A positive integer $t$ is fixed, and the cops are required to capture the robber in at most $t$ time-steps in each play. The associated optimization parameter is the eternal cop number, denoted by $c_t^{\infty}$, which equals the eternal domination number in the case $t=1$, and the cop number for sufficiently large $t$. We study the complexity of Cops and Eternal Robbers, and show that the game is NP-hard when $t$ is a fixed constant and EXPTIME-complete for large values of $t$. We determine precise values of $c_t^{\infty}$ for paths and cycles. The eternal cop number is studied for retracts, and this approach is applied to give bounds for trees, as well as for strong and Cartesian grids.
- Published
- 2021
23. Improving Load Balancing for Modern Data Centers Through Resource Equivalence Classes
- Author
-
Yusen Li, Trent G. Marbach, Kaiyue Duan, Xiaoguang Liu, and Gang Wang
- Subjects
Resource (project management) ,business.industry ,Computer science ,Distributed computing ,Quality of service ,Server ,Programming paradigm ,Local search (optimization) ,Data center ,Transient (computer programming) ,Load balancing (computing) ,business - Abstract
Load balancing is one of the most significant concerns for data center (DC) management, and the basic method is reassigning applications from overloaded servers to underloaded servers. However, to ensure service availability, during the reassignment of an application some resources (i.e., transient resources) are consumed simultaneously on its initial server and its target server, which poses a challenge for load balancing. Recent research has proposed a concept called the resource equivalence class (REC): a set of resource configurations such that a latency-critical (LC) application running with any one of them can meet the QoS target. In this paper, we use RECs to improve load balancing for a DC where multiple LC applications have already been co-located on servers under service availability and QoS requirements. We formulate the proposed load rebalancing problem as a multi-objective constrained programming model. To solve it, we propose a machine learning-based classification model to construct the RECs for applications, and we develop a local search (LS) algorithm to approximate the optimal solution. We evaluate the proposed algorithm via simulated experiments using real LC applications. To our knowledge, this is the first use of RECs to improve load balancing.
- Published
- 2021
24. The localization capture time of a graph
- Author
-
Natalie C. Behague, Anthony Bonato, Melissa A. Huggan, Trent G. Marbach, and Brittany Pittman
- Subjects
FOS: Computer and information sciences ,Computer Science::Computer Science and Game Theory ,General Computer Science ,Discrete Mathematics (cs.DM) ,Computer Science::Discrete Mathematics ,FOS: Mathematics ,Mathematics - Combinatorics ,Combinatorics (math.CO) ,Theoretical Computer Science ,Computer Science - Discrete Mathematics ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
The localization game is a pursuit-evasion game analogous to Cops and Robbers, where the robber is invisible and the cops send distance probes in an attempt to identify the location of the robber. We present a novel graph parameter called the capture time, which measures how long the localization game lasts assuming optimal play. We conjecture that the capture time is linear in the order of the graph, and show that the conjecture holds for graph families such as trees and interval graphs. We study bounds on the capture time for trees and its monotone property on induced subgraphs of trees and more general graphs. We give upper bounds for the capture time on the incidence graphs of projective planes. We finish with new bounds on the localization number and capture time using treewidth.
- Published
- 2021
25. Hybrid Dynamic Pruning for Efficient and Effective Query Processing
- Author
-
Trent G. Marbach, Wenxiu Fang, Gang Wang, and Xiaoguang Liu
- Subjects
Computer science ,Data mining ,Pruning algorithm ,Pruning (decision trees) ,Latency (engineering) ,computer.software_genre ,computer ,Computer Science::Databases ,Field (computer science) - Abstract
The performance of query processing has always been a concern in the field of information retrieval. Dynamic pruning algorithms have been proposed to improve query processing performance in terms of efficiency and effectiveness. However, a single pruning algorithm generally does not have both advantages. In this work, we investigate the performance of the main dynamic pruning algorithms in terms of average and tail latency as well as the accuracy of query results, and find that they are complementary. Inspired by these findings, we propose two types of hybrid dynamic pruning algorithms that choose different combinations of strategies according to the characteristics of each query. Experimental results demonstrate that our proposed methods yield a good balance between both efficiency and effectiveness.
- Published
- 2020
26. Balanced Equi-$n$-Squares
- Author
-
Trent G. Marbach, Rebecca J. Stones, Zhuanhao Wu, and Saieed Akbari
- Subjects
Combinatorics ,Computational Theory and Mathematics ,Applied Mathematics ,Discrete Mathematics and Combinatorics ,Geometry and Topology ,Theoretical Computer Science ,Mathematics - Abstract
We define a $d$-balanced equi-$n$-square $L=(l_{ij})$, for some divisor $d$ of $n$, as an $n \times n$ matrix containing symbols from $\mathbb{Z}_n$ in which any symbol that occurs in a row or column, occurs exactly $d$ times in that row or column. We show how to construct a $d$-balanced equi-$n$-square from a partition of a Latin square of order $n$ into $d \times (n/d)$ subrectangles. In graph theory, $L$ is equivalent to a decomposition of $K_{n,n}$ into $d$-regular spanning subgraphs of $K_{n/d,n/d}$. We also study when $L$ is diagonally cyclic, defined as when $l_{(i+1)(j+1)}=l_{ij}+1$ for all $i,j \in \mathbb{Z}_n$, which correspond to cyclic such decompositions of $K_{n,n}$ (and thus $\alpha$-labellings). We identify necessary conditions for the existence of (a) $d$-balanced equi-$n$-squares, (b) diagonally cyclic $d$-balanced equi-$n$-squares, and (c) Latin squares of order $n$ which partition into $d \times (n/d)$ subrectangles. We prove the necessary conditions are sufficient for arbitrary fixed $d \geq 1$ when $n$ is sufficiently large, and we resolve the existence problem completely when $d \in \{1,2,3\}$. Along the way, we identify a bijection between $\alpha$-labellings of $d$-regular bipartite graphs and what we call $d$-starters: matrices with exactly one filled cell in each top-left-to-bottom-right unbroken diagonal, and either $d$ or $0$ filled cells in each row and column. We use $d$-starters to construct diagonally cyclic $d$-balanced equi-$n$-squares, but this also gives new constructions of $\alpha$-labellings.
- Published
- 2020
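The defining property above can be checked mechanically. The sketch below verifies it for two small examples: any Latin square is 1-balanced, and a 4 x 4 block pattern (a toy example of my own, not from the paper) is 2-balanced.

```python
from collections import Counter

def is_balanced_equi_square(M, d):
    """M is a d-balanced equi-n-square if d divides n = len(M) and, in every
    row and every column, each symbol that occurs there occurs exactly d times."""
    n = len(M)
    if n % d:
        return False
    lines = [list(row) for row in M] + [list(col) for col in zip(*M)]
    return all(all(cnt == d for cnt in Counter(line).values()) for line in lines)

cyclic = [[(r + c) % 4 for c in range(4)] for r in range(4)]       # a Latin square
blocks = [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]  # 2-balanced
assert is_balanced_equi_square(cyclic, 1)
assert is_balanced_equi_square(blocks, 2)
```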
27. Audinet: A Decentralized Auditing System for Cloud Storage
- Author
-
Meng Yan, Xiaoguang Liu, Jiajia Xu, Gang Wang, Haitao Li, and Trent G. Marbach
- Subjects
021110 strategic, defence & security studies ,Smart contract ,Computer science ,business.industry ,0211 other engineering and technologies ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Cloud computing ,Cryptography ,02 engineering and technology ,Audit ,Computer security ,computer.software_genre ,Proof of retrievability ,Set (abstract data type) ,ComputingMilieux_MANAGEMENTOFCOMPUTINGANDINFORMATIONSYSTEMS ,Incentive ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,business ,computer ,Cloud storage - Abstract
In cloud storage, remote data auditing is designed to verify the integrity of cloud data on behalf of cloud users. The audit is performed by a third-party auditor (TPA) according to auditing protocols such as proof of retrievability and provable data possession. However, the TPA-based auditing framework leads to single-point failures, opaque audit processes and undetected mistakes. In this paper, we propose a decentralized auditing system in which the audit is performed by multiple auditors and the audit result is reached in a collaborative and transparent way. Auditors are selected for each audit randomly from the set of cloud users via modified cryptographic sortition; auditing procedures are implemented using a smart contract, and auditing records are published on a blockchain; an incentive mechanism is provided to regulate the behavior of system participants. We implement a prototype system and demonstrate that the proposed system is reliable and technically feasible.
- Published
- 2020
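One ingredient above is selecting auditors at random from the cloud users via cryptographic sortition. The paper uses a modified sortition protocol whose details are not reproduced here; the sketch below is a deliberately simplified, non-verifiable stand-in that runs a hash-based lottery from a shared seed, and all names and parameters are illustrative only.

```python
import hashlib

def selected(seed: bytes, user_id: str, probability: float) -> bool:
    """Hash-based lottery: map H(seed || user_id) to [0, 1) and compare with the
    target selection probability. Deterministic given the seed, so anyone holding
    the seed can recompute who was chosen for this audit round."""
    digest = hashlib.sha256(seed + user_id.encode()).digest()
    draw = int.from_bytes(digest, "big") / 2 ** 256
    return draw < probability

seed = hashlib.sha256(b"audit-round-42").digest()   # e.g. derived from the blockchain
users = [f"user-{i}" for i in range(1000)]
auditors = [u for u in users if selected(seed, u, probability=0.01)]
print(len(auditors))   # roughly 10 auditors in expectation
```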
28. Improving Load Balance via Resource Exchange in Large-Scale Search Engines
- Author
-
Yusen Li, Xiaoguang Liu, Kaiyue Duan, Trent G. Marbach, and Gang Wang
- Subjects
020203 distributed computing ,Computer science ,business.industry ,Distributed computing ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Load balancing (computing) ,Search engine ,Resource (project management) ,0202 electrical engineering, electronic engineering, information engineering ,Transient (computer programming) ,business ,Integer programming - Abstract
Load balance is one of the major issues in large-scale search engines. A commonly used load balancing approach in search engine datacenters is to reassign index shards among machines. However, reassigning shards within stringent resource environments is challenging due to transient resource constraints (during reassignment, some resources are consumed simultaneously by a shard on the initial machine and its copy on the target machine). In this paper, we study a new shard reassignment problem where each shard has already been placed on a machine and we are allowed to use several exchangeable machines to improve load balancing. The exchangeable machines are initially vacant (i.e., without any shard), and after reassignment we return some vacant machines as compensation for using them. We formulate the new shard reassignment problem as a linearly constrained integer programming (IP) model, and we propose a shard reassignment algorithm (SRA), which is based on large neighborhood search (LNS), to approximate the optimal solution. We conduct extensive experiments to evaluate the proposed algorithm using both synthetic data and real data from actual datacenters, comparing with the state-of-the-art load balancing method. The results show that our solution outperforms the state-of-the-art alternative significantly.
- Published
- 2020
29. The localization number of designs
- Author
-
Melissa A. Huggan, Trent G. Marbach, and Anthony Bonato
- Subjects
FOS: Computer and information sciences ,Discrete Mathematics (cs.DM) ,010102 general mathematics ,Incomplete block ,0102 computer and information sciences ,16. Peace & justice ,01 natural sciences ,Graph ,Combinatorics ,Steiner system ,010201 computation theory & mathematics ,Transversal (combinatorics) ,FOS: Mathematics ,Discrete Mathematics and Combinatorics ,Mathematics - Combinatorics ,Combinatorics (math.CO) ,Affine transformation ,Projective plane ,0101 mathematics ,Incidence (geometry) ,Mathematics ,MathematicsofComputing_DISCRETEMATHEMATICS ,Computer Science - Discrete Mathematics - Abstract
We study the localization number of incidence graphs of designs. In the localization game played on a graph, the cops attempt to determine the location of an invisible robber via distance probes. The localization number of a graph $G$, written $\zeta(G)$, is the minimum number of cops needed to ensure the robber's capture. We present bounds on the localization number of incidence graphs of balanced incomplete block designs. Exact values of the localization number are given for the incidence graphs of projective and affine planes. Bounds are given for Steiner systems and for transversal designs.
- Published
- 2020
30. Predicting Hard Drive Failures for Cloud Storage Systems
- Author
-
Zhongwei Li, Rebecca J. Stones, Trent G. Marbach, Dongshi Liu, Bo Wang, Peng Li, Xiaoguang Liu, and Gang Wang
- Subjects
021103 operations research ,Computer science ,Distributed computing ,020208 electrical & electronic engineering ,0211 other engineering and technologies ,0202 electrical engineering, electronic engineering, information engineering ,02 engineering and technology ,Cloud storage - Abstract
To improve reactive hard-drive fault-tolerance techniques, many statistical and machine learning methods have been proposed for failure prediction based on SMART attributes. However, disparate datasets and metrics have been used to experimentally evaluate these models, so a direct comparison between them cannot readily be made.
- Published
- 2020
31. Towards a Latin-Square Search Engine
- Author
-
Xiaoguang Liu, Trent G. Marbach, Wenxiu Fang, Rebecca J. Stones, and Gang Wang
- Subjects
Matrix (mathematics) ,Search engine ,Theoretical computer science ,Computer science ,Latin square ,Mathematics::History and Overview ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Graph (abstract data type) ,Graph isomorphism ,Data structure ,Tree (graph theory) ,Equivalence class ,Graph - Abstract
Latin squares are combinatorial matrices that are widely used in diverse areas of research such as codes and cryptography, software testing, mathematical research, and experimental designs. All of these fields would benefit from a search engine for Latin squares. One major obstacle to developing a Latin-square search engine is that any Latin square has a large number of equivalent Latin squares, which are contained in multiple equivalence classes, and thus we need an efficient online method for canonically labelling Latin squares. Canonical labelling usually proceeds via the Nauty graph isomorphism software, but this incurs conversion costs. Moreover, the canonical labels are practically random members of their equivalence classes. A second obstacle is how large amounts of searchable Latin-square data may be stored efficiently. In this paper, we design data structures and algorithms suitable for a Latin-square search engine. We use a tree-based data structure for storing large numbers of Latin squares that also enables efficient search capabilities. We design an efficient canonical labelling algorithm (via partial Latin squares, PLSs) which does not require graph conversion, facilitates compression, and produces labels that are more humanly meaningful. We implement and experiment with a skeletal prototype of the Latin-square search engine. Experimental results confirm that the PLS method is faster than Nauty, and has reduced space requirements.
- Published
- 2019
32. Two-line graphs of partial Latin rectangles
- Author
-
Rebecca J. Stones, Raúl M. Falcón, Dani Kotlar, Trent G. Marbach, and Eiran Danan
- Subjects
Mathematics::Combinatorics ,Mathematics::General Mathematics ,Applied Mathematics ,010102 general mathematics ,Latin rectangle ,0102 computer and information sciences ,01 natural sciences ,law.invention ,Combinatorics ,010201 computation theory & mathematics ,law ,Line graph ,Bipartite graph ,Discrete Mathematics and Combinatorics ,0101 mathematics ,Mathematics - Abstract
Two-line graphs of a given partial Latin rectangle are introduced as vertex-and-edge-coloured bipartite graphs that give rise to new autotopism invariants. They reduce the complexity of any currently known method for computing autotopism groups of partial Latin rectangles.
- Published
- 2018
33. Covers and partial transversals of Latin squares
- Author
-
Ian M. Wanless, Trent G. Marbach, Rebecca J. Stones, and Darcy Best
- Subjects
Applied Mathematics ,Order (ring theory) ,0102 computer and information sciences ,01 natural sciences ,Computer Science Applications ,Combinatorics ,Transversal (geometry) ,05B15 ,Cover (topology) ,010201 computation theory & mathematics ,Latin square ,FOS: Mathematics ,Mathematics - Combinatorics ,Maximum size ,Combinatorics (math.CO) ,Nuclear Experiment ,Mathematics - Abstract
We define a cover of a Latin square to be a set of entries that includes at least one representative of each row, column and symbol. A cover is minimal if it does not contain any smaller cover. A partial transversal is a set of entries that includes at most one representative of each row, column and symbol. A partial transversal is maximal if it is not contained in any larger partial transversal. We explore the relationship between covers and partial transversals. We prove the following: (1) The minimum size of a cover in a Latin square of order $n$ is $n+a$ if and only if the maximum size of a partial transversal is either $n-2a$ or $n-2a+1$. (2) A minimal cover in a Latin square of order $n$ has size at most $\mu_n = 3(n + 1/2 - \sqrt{n + 1/4})$. (3) There are infinitely many orders $n$ for which there exists a Latin square having a minimal cover of every size from $n$ to $\mu_n$. (4) Every Latin square of order $n$ has a minimal cover of a size which is asymptotically equal to $\mu_n$. (5) If $1 \leqslant k \leqslant n/2$ and $n \geqslant 5$ then there is a Latin square of order $n$ with a maximal partial transversal of size $n-k$. (6) For any $\varepsilon > 0$, asymptotically almost all Latin squares have no maximal partial transversal of size less than $n - n^{2/3+\varepsilon}$.
- Published
- 2018
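The two central definitions above translate directly into code. The sketch below checks them on the main diagonal of the cyclic square of order 3, which is both a transversal and a smallest possible cover.

```python
def is_cover(entries, n):
    """A cover hits every row, column and symbol at least once; `entries`
    is a set of (row, column, symbol) triples from a Latin square of order n."""
    rows = {r for r, c, s in entries}
    cols = {c for r, c, s in entries}
    syms = {s for r, c, s in entries}
    return rows == cols == syms == set(range(n))

def is_partial_transversal(entries):
    """A partial transversal uses each row, column and symbol at most once."""
    rows = [r for r, c, s in entries]
    cols = [c for r, c, s in entries]
    syms = [s for r, c, s in entries]
    return all(len(x) == len(set(x)) for x in (rows, cols, syms))

# Main diagonal of the cyclic square of order 3.
L = [[(r + c) % 3 for c in range(3)] for r in range(3)]
diag = {(i, i, L[i][i]) for i in range(3)}
assert is_cover(diag, 3) and is_partial_transversal(diag)
```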
34. Computing autotopism groups of partial Latin rectangles: A pilot study
- Author
-
Daniel Kotlar, Trent G. Marbach, Raúl M. Falcón, and Rebecca J. Stones
- Subjects
Autotopism ,Transitive relation ,Backtracking ,Group (mathematics) ,Computational Mechanics ,Latin square ,Block matrix ,Partial Latin rectangle ,Symbol (chemistry) ,Combinatorics ,Computational Mathematics ,Computational Theory and Mathematics ,05B15 ,Homogeneous space ,FOS: Mathematics ,Mathematics - Combinatorics ,Combinatorics (math.CO) ,Invariant (mathematics) ,Graph automorphism ,Mathematics - Abstract
Computing the autotopism group of a partial Latin rectangle can be performed in a variety of ways. This pilot study has two aims: (a) to compare these methods experimentally, and (b) to identify the design goals one should have in mind for developing practical software. To this end, we compare six families of algorithms (two backtracking methods and four graph automorphism methods), with and without the use of entry invariants, on two test suites. We consider two entry invariants: one determined by the frequencies of row, column, and symbol representatives, and one determined by $2 \times 2$ submatrices. We find: (a) with very few entries, many symmetries often exist, and these should be identified mathematically rather than computationally, (b) with an intermediate number of entries, a quick-to-compute entry invariant was effective at reducing the need for computation, (c) with an almost-full partial Latin rectangle, more sophisticated entry invariants are needed, and (d) the performance for (full) Latin squares is significantly poorer than for other partial Latin rectangles of comparable size, obstructed by the existence of Latin squares with large (possibly transitive) autotopism groups.
- Published
- 2019
35. Performance Analysis of 3D XPoint SSDs in Virtualized and Non-Virtualized Environments
- Author
-
Peng Li, Trent G. Marbach, Jiachen Zhang, Xiaoguang Liu, Gang Wang, and Bo Liu
- Subjects
Random access memory ,Hardware_MEMORYSTRUCTURES ,business.industry ,Computer science ,020206 networking & telecommunications ,3D XPoint ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Virtualization ,Enterprise data management ,Non-volatile memory ,0202 electrical engineering, electronic engineering, information engineering ,Operating system ,020201 artificial intelligence & image processing ,business ,computer - Abstract
Intel's Optane SSD recently came to the market as the pioneer of 3D XPoint based commercial devices. They have much lower latency (about 14 μs) and better parallelism properties than traditional SSDs, and as such are set to replace NAND flash SSDs in commercial settings. To best serve cloud and enterprise data centers' demands for higher-performing storage, it is necessary to know the performance characteristics of the new devices in both virtualized cloud environments and traditional non-virtualized environments. In this paper, we present an analysis of Optane SSDs based on a large number of experiments. We use several micro-benchmarks to gain knowledge of Optane's basic performance metrics. We also discuss the impact of state-of-the-art storage stacks on the performance of Optane SSDs. By analyzing the test results, we provide configuration suggestions for storage I/O applications using Optane SSDs. Lastly, we evaluate the real-world performance of Optane SSDs by running MySQL database based experiments. All the experiments are performed in non-virtualized and virtualized environments (Linux and QEMU) with a comparison study between the Optane SSD and a SATA NAND flash-based SSD.
- Published
- 2018
36. Refining invariants for computing autotopism groups of partial Latin rectangles
- Author
-
Rebecca J. Stones, Eiran Danan, Raúl M. Falcón, Trent G. Marbach, and Dani Kotlar
- Subjects
Discrete mathematics ,Combined use ,Latin rectangle ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Theoretical Computer Science ,Combinatorics ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Bipartite graph ,Discrete Mathematics and Combinatorics ,Partition (number theory) ,Invariant (mathematics) ,Row ,Mathematics - Abstract
Prior to using computational tools that find the autotopism group of a partial Latin rectangle (its stabilizer group under row, column and symbol permutations), it is beneficial to find partitions of the rows, columns and symbols that are invariant under autotopisms and are as fine as possible. We look at the lattices formed by these partitions and introduce two invariant refining maps on these lattices. The first map generalizes the strong entry invariant from previous work. The second map utilizes some bipartite graphs, introduced here, whose structure is determined by pairs of rows (or columns, or symbols). Experimental results indicate that in most cases (ordinarily 99%+) the combined use of these invariants gives the theoretically best partition of the rows, columns and symbols, outperforming the strong entry invariant, which gives the theoretically best partitions in only roughly 80% of cases.
- Published
- 2020
37. Load Prediction for Data Centers Based on Database Service
- Author
-
Jing Li, Rui Cao, Zhaoyang Yu, Trent G. Marbach, Gang Wang, and Xiaoguang Liu
- Subjects
Database ,Computer science ,business.industry ,Cloud computing ,Workload ,02 engineering and technology ,computer.software_genre ,Scheduling (computing) ,Data modeling ,020204 information systems ,Server ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data center ,business ,computer - Abstract
In the era of cloud computing, the over-occupancy of data center resources (CPU, memory, disk) and subsequent machine failures have resulted in great losses to users and enterprises, so it makes sense to anticipate server workloads in advance. Previous research on server workloads has focused on trend analysis and time series fitting. We propose an approach to forecast the workloads of servers based on machine learning, using data from a real, large-scale, enterprise-class database-based data center. We use the servers' historical monitoring data for our models to predict future workloads and hence provide the ability to automatically warn of overload and reallocate resources. We calculate the failure detection rate and false alarm rate of our overload detection models, and put forward an evaluation based on the overload processing cost. Experimental results show that machine learning methods, especially Random Forest, can predict server load better than traditional time series analysis methods. We use the forecast results to propose scheduling strategies to prevent server overload, achieve intelligent operation and maintenance, and enable failure prediction. Compared with the traditional time series analysis method, our method uses less data and fewer dimensions, and yields more accurate predictions.
- Published
- 2018
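As a rough illustration of the modelling step (not the authors' pipeline, features or data), a random forest regressor can be trained on lagged monitoring samples to forecast the next interval's load; the synthetic series, window length and overload threshold below are invented for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for server monitoring data: each row holds the last 6
# CPU-load samples; the target is the load one interval ahead.
t = np.arange(5000)
load = 50 + 20 * np.sin(t / 48) + rng.normal(0, 3, t.size)   # fake daily pattern
X = np.stack([load[i:i + 6] for i in range(len(load) - 6)])
y = load[6:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("MAE:", np.mean(np.abs(pred - y_te)))          # forecast error
print("overload warnings:", int(np.sum(pred > 75)))  # warn when forecast exceeds 75%
```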
38. Materials R&D for a timely DEMO: Key findings and recommendations of the EU Roadmap Materials Assessment Group
- Author
-
E. Diegele, Mark R. Gilbert, Steven J. Zinkle, Pietro Agostini, Colin English, Baldev Raj, L. W. Packer, Sergei L. Dudarev, Sehila Gonzalez, David Ward, Gianfranco Federici, Antonella Li Puma, J.-L. Boutard, Christian Linsmeier, Derek Stork, Michael Rieth, Derek Buckthorpe, G Marbach, Angel Ibarra, and Min Quang Tran
- Subjects
Scope (project management) ,business.industry ,Computer science ,Mechanical Engineering ,Divertor ,Blanket ,Nuclear Energy and Engineering ,Software deployment ,Key (cryptography) ,Systems engineering ,General Materials Science ,Fission reactor ,business ,Baseline (configuration management) ,Risk management ,Civil and Structural Engineering - Abstract
The findings of the EU Fusion Programme's 'Materials Assessment Group' (MAG), assessing the readiness of Structural, Plasma Facing (PF) and High Heat Flux (HHF) materials for DEMO, are discussed. These are incorporated into the EU Fusion Power Roadmap [1], with a decision to construct DEMO in the early 2030s. The methodology uses project-based and systems-engineering approaches, the concept of Technology Readiness Levels, and considers lessons learned from fission reactor material development. 'Baseline' materials are identified for each DEMO role, and the DEMO mission risks analysed from the known limitations, or unknown properties, associated with each baseline material. R&D programmes to address these risks are developed. The DEMO assessed has a phase I with a 'starter blanket': the blanket must withstand ≥ 2 MW yr m⁻² fusion neutron flux (equivalent to ~20 dpa front-wall steel damage). The baseline materials all have significant associated risks, so development of 'Risk Mitigation Materials' (RMM) is recommended. The R&D programme has parallel development of the baseline and RMM, up to 'down-selection' points to align with decisions on the DEMO blanket and divertor engineering definition. ITER licensing experience is used to refine the issues for materials nuclear testing, and arguments are developed to optimise the scope of materials tests with fusion neutron ('14 MeV') spectra before DEMO design finalisation. Some 14 MeV testing is still essential, and the Roadmap requires deployment of a ≥ 30 dpa (steels) testing capability by 2026. Programme optimisation by pre-testing with fission neutrons on isotopically- or chemically-doped steels and with ion beams is discussed, along with the minimum 14 MeV testing programme and the key role which fundamental and mission-oriented modelling can play in orienting the research.
- Published
- 2014
39. Waste Management for ITER and for a Fusion Reactor
- Author
-
O. Gastaldi, S. Rosanvallon, L. Di Pace, R. Pampin, and G Marbach
- Subjects
Environmental science ,Forestry - Abstract
Waste production and management must be considered in the development of any nuclear reactor, both for public acceptance and for its eventual deployment. Waste minimization and the optimization of waste classification were taken into account from the outset of the ITER design, for example through the appropriate choice of materials to reduce activation. Since ITER will be built in France, waste management will have to comply with French regulations, which define a zoning scheme delimiting the areas producing conventional or nuclear waste and a classification of nuclear waste according to its activity, half-life and toxicity. Over the lifetime of ITER, conventional waste (produced as in any industrial facility) and radioactive waste will be generated during operation and during decommissioning. As regards nuclear waste, operational waste will come essentially from component replacement and from process-related waste; nevertheless, the bulk of the nuclear waste will come from decommissioning. Nuclear waste has two sources: activation of the structures surrounding the plasma by the 14 MeV neutrons from the deuterium-tritium fusion reaction, and contamination linked to activated materials and to the presence of tritium. An assessment of ITER's nuclear waste has been carried out based on activation calculations and on feedback from other fusion facilities and fission reactors. This waste will amount to: between 1600 and 3800 tonnes of technological waste over the 20 years of ITER operation (20% very-low-level waste, 75% short-lived low- or intermediate-level waste, and 5% long-lived intermediate-level waste); about 750 tonnes from component replacement during ITER operation (long-lived intermediate-level waste); and about 30,000 tonnes from decommissioning (60% very-low-level waste, 30% short-lived low- or intermediate-level waste, and 10% long-lived intermediate-level waste). These figures take into account, where necessary, radioactive decay (less than 30 years), decontamination of the cooling systems to remove corrosion products, and detritiation of waste from the tritium plant. Detritiation may also be considered to reduce tritium outgassing from waste packages so as to ease acceptance constraints at disposal sites. Even though most of ITER's waste will arise more than 30 years from now (during decommissioning), contacts have been established with the French national radioactive waste management agency (ANDRA) to take ITER's future needs into account. Particular attention is given to the acceptance of mixed waste (nuclear waste containing beryllium) and of long-lived intermediate-level waste.
- Published
- 2007
40. The Spectrum for $3$-Way $k$-Homogeneous Latin Trades
- Author
-
Lijun Ji and Trent G. Marbach
- Subjects
Class (set theory) ,Property (philosophy) ,Applied Mathematics ,Spectrum (topology) ,Theoretical Computer Science ,Combinatorics ,Computational Theory and Mathematics ,Latin square ,Homogeneous ,Idempotence ,Discrete Mathematics and Combinatorics ,Geometry and Topology ,3-Way ,Mathematics - Abstract
A $\mu$-way $k$-homogeneous Latin trade was defined by Bagheri Gh, Donovan, Mahmoodian (2012), where the existence of $3$-way $k$-homogeneous Latin trades was specifically investigated. We investigate the existence of a certain class of $\mu$-way $k$-homogeneous Latin trades with an idempotent like property. We present a number of constructions for $\mu$-way $k$-homogeneous Latin trades with this property, and show that these can be used to fill in the spectrum of $3$-way $k$-homogeneous Latin trades for all but $196$ possible exceptions.
- Published
- 2015
41. Influence of heat treatments on residual tritium amount in tritiated stainless steel waste
- Author
-
P. Trabuc, J. Chêne, G. Marbach, A. Lassoued, O. Gastaldi, and Anne-Marie Brass
- Subjects
Materials science ,Annealing (metallurgy) ,Oxide ,02 engineering and technology ,engineering.material ,Residual ,01 natural sciences ,7. Clean energy ,010305 fluids & plasmas ,High-level waste ,chemistry.chemical_compound ,Desorption ,0103 physical sciences ,General Materials Science ,Austenitic stainless steel ,ComputingMilieux_MISCELLANEOUS ,Civil and Structural Engineering ,Mechanical Engineering ,Metallurgy ,Fusion power ,021001 nanoscience & nanotechnology ,Nuclear Energy and Engineering ,chemistry ,engineering ,Tritium ,[PHYS.PHYS.PHYS-CHEM-PH]Physics [physics]/Physics [physics]/Chemical Physics [physics.chem-ph] ,0210 nano-technology - Abstract
In the framework of the waste management strategy for future fusion reactors, it has been demonstrated that one of the major issues is to reduce the amount of high level waste. The tritium activity strongly contributes to the waste classification. A better knowledge of the mechanisms involved in tritium trapping and desorption will make it possible to choose the most appropriate procedure to reduce the tritium constraint. This paper deals with the management of tritiated steels and the means of reducing tritium transfer. This experimental work concerns steel obtained by vacuum remelting of tritiated austenitic stainless steel waste. The goal of this study is to develop solid-phase heat treatments to lower the residual tritium concentration and its desorption kinetics at room temperature. The surface activity, the desorption of residual tritium at room temperature and the residual concentration have been measured for each condition (temperature, time, atmosphere, etc.). For reheating temperatures below 600 °C, the surface activity increases by one or two orders of magnitude; this is the consequence of the formation of a tritium-rich oxide layer. After a 20 h annealing above 300 °C and removal of the tritium-rich oxide layer, the surface activity is strongly reduced (by 80%) compared with the value measured before annealing. For such annealing conditions, the amount of desorbed tritium is reduced by a factor of 25 and the desorption kinetics yield a reduction factor of 40–50 after annealing at 400 °C for 20 h.
- Published
- 2005
42. Progress in licensing ITER in Cadarache
- Author
-
Sandrine Rosanvallon, Joëlle Uzan-Elbez, P. Garin, G Marbach, Jean-Philippe Girard, and L. Rodríguez-Rodrigo
- Subjects
Engineering ,Tokamak ,Process (engineering) ,business.industry ,Mechanical Engineering ,Nuclear engineering ,Iter tokamak ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Fusion power ,law.invention ,Nuclear Energy and Engineering ,law ,General Materials Science ,business ,Civil and Structural Engineering - Abstract
The licensing procedure for ITER in Europe in the framework of the French regulations is a non-prescriptive approach based on a continuous dialogue between the nuclear installation owner (or its representative) and the safety authority. In this paper, the licensing procedure and main safety issues, which are being studied in this process, are presented.
- Published
- 2005
43. Use of the SPIRAL 2 facility for material irradiations with 14 MeV energy neutrons
- Author
-
Y. Huguet, F. Pellemoine, P. Magaud, A.C.C. Villari, D. Ridikas, G. Marbach, R. Anne, A. Mosnier, M. Lipa, X. Ledoux, and M.G. Saint-Laurent
- Subjects
Materials science ,Power station ,Rare isotope beams ,Mechanical Engineering ,Neutron ,[PHYS.NEXP]Physics [physics]/Nuclear Experiment [nucl-ex] ,Fusion power ,Heavy ions ,Nuclear physics ,Nuclear Energy and Engineering ,General Materials Science ,Irradiation ,Material properties ,Hot cell ,Beam (structure) ,Spiral ,Civil and Structural Engineering - Abstract
The primary goal of an irradiation facility for fusion applications will be to generate a material irradiation database for the design, construction, licensing and safe operation of a fusion demonstration power station (e.g., DEMO). This will be achieved through testing and qualifying material performance under neutron irradiation that simulates service up to the full lifetime anticipated in the power plant. Preliminary investigations of 14 MeV neutron effects on different kinds of fusion material could be assessed by the SPIRAL 2 Project at GANIL (Caen, France), aiming at rare isotope beam production for nuclear physics research with first beams expected by 2009. In SPIRAL 2, a deuteron beam of 5 mA and 40 MeV interacts with a rotating carbon disk producing high-energy neutrons (in the range between 1 and 40 MeV) via C(d, xn) reactions. The facility could then be used for 3-4 months per year for material irradiation purposes. This would correspond to damage rates in the order of 1-2 dpa per year (in Fe) in a volume of about 10 cm³. Therefore, the use of miniaturized specimens will be essential in order to effectively utilize the available irradiation volume in SPIRAL 2. Sample package irradiation temperatures would be in the range of 250-1000 °C. The irradiation level of 1-2 dpa per year with 14 MeV neutrons (average energy) may be interesting for micro-structural and metallurgical investigations (e.g., mini-tensile and small punch tests) and possibly for understanding specimen size/geometry effects on critical material properties. Due to the small test cell volume, in situ sample experiments are not foreseen. However, sample packages would be, if required, available each month after transfer to a special hot cell on-site.
- Published
- 2005
44. High heat flux components in fusion devices: from current experience in Tore Supra towards the ITER challenge
- Author
-
Ph. Chappuis, M. Lipa, A. Durocher, J.J. Cordier, J. Schlosser, F. Escourbiac, A. Grosman, G Marbach, P. Bayetti, D. Guilhem, and R. Mitteau
- Subjects
Nuclear and High Energy Physics ,Engineering ,business.industry ,Nuclear engineering ,Flux ,Experience feedback ,Tore Supra ,Nuclear physics ,Nuclear Energy and Engineering ,Heat flux ,Limiter ,General Materials Science ,Current (fluid) ,business ,High heat - Abstract
A pioneering activity has been developed by CEA and European industry in the field of actively cooled high heat flux plasma-facing components through Tore Supra operation, which is today culminating in the routine operation of an actively cooled toroidal pumped limiter (TPL) capable of sustaining up to 10 MW/m² of nominal convected heat flux. This success is the result of a long-lead development and industrialization program (about 10 years), marked by a number of technical and managerial challenges that were taken up, and has allowed us to build up a unique experience-feedback database, which is presented in the paper.
- Published
- 2004
45. ITER at Cadarache: An Example of Licensing a Fusion Facility
- Author
-
J. Jacquinot, G. Marbach, and N. Taylor
- Subjects
Nuclear and High Energy Physics ,Magnetic fusion ,Power station ,Computer science ,Mechanical Engineering ,Iter tokamak ,Fusion power ,Identification (information) ,Documentation ,Nuclear Energy and Engineering ,Systems engineering ,General Materials Science ,Environmental impact assessment ,Civil and Structural Engineering - Abstract
The existing regulatory framework in France provides a full and coherent licensing basis to permit ITER to be built and operated at Cadarache. The specific siting studies, including the submission of the first step of licensing documentation for ITER, offer an early assessment of fusion power plants. The regulatory procedure begins with the release of a Safety Objectives Report, which was sent to the Safety Authorities at the beginning of 2002. This report presents a description of the plant, the radioactive inventory and the identification of the main risks and associated safety functions. The document includes a preliminary evaluation of the environmental impact associated with normal operation and representative accidental events, and the results of this analysis are given; for example, the consequences in terms of additional doses are estimated to be about 1 μSv per year for the closest inhabitants around the Cadarache site during normal operation. The licensing of ITER will be the first experience of licensing a major magnetic fusion device on a scale similar to that of a power plant. Thus there are lessons to be learnt and precedents that will be set.
- Published
- 2003
46. The EU power plant conceptual study
- Author
-
G. Marbach, I. Cook, and David Maisonnier
- Subjects
Thermonuclear fusion ,Power station ,Mechanical Engineering ,Fusion power ,Nuclear reactor ,law.invention ,Nuclear physics ,Nuclear Energy and Engineering ,Work (electrical) ,law ,Credibility ,Nuclear power plant ,Systems engineering ,media_common.cataloged_instance ,General Materials Science ,European union ,Civil and Structural Engineering ,media_common - Abstract
An integrated Power Plant Conceptual Study (PPCS) was launched in 1988 in the frame of the European Fusion Programme. The objective of the PPCS is to demonstrate the credibility of fusion power plant designs and to substantiate the claims for the safety and environmental advantages and for the economic viability of fusion power. In addition, the results of the PPCS will help to define the R&D programme. The strategy of the PPCS is to study a limited number of Plant Models that span the expected range of possibilities. These range from an advanced version of the International Thermonuclear Experimental Reactor (ITER) through to the ultimately achievable values of the plasma physics and technology parameters presently foreseen, together with a range of blanket and high-heat-flux technologies. The primary focus of the detailed studies, and the first work to be undertaken, is on two Plant Models that emphasise limited extrapolation, since credibility within the philosophy of a possible fast-track development of fusion power is a major aim of the study. The remainder of this paper discusses the physics studies and the preliminary technology studies that have formed the launching pad for the detailed studies of the Plant Models within the PPCS. The discussion concentrates on the ‘limited extrapolation’ Models that are being studied first, with brief reference to the Models that are further from ITER.
- Published
- 2002
47. Steel Detritiation by Melting with Gas Bubbling
- Author
-
G Marbach, J. L. Courouau, W. Gulden, S. Rosanvallon, Association EURATOM-CEA (CEA/DSM/DRFC), Commissariat à l'énergie atomique et aux énergies alternatives (CEA), and European Fusion Development Agreement [Garching bei München] ( EFDA-CSU)
- Subjects
Nuclear and High Energy Physics ,[CHIM.GENI]Chemical Sciences/Chemical engineering ,Nuclear Energy and Engineering ,Waste management ,Mechanical Engineering ,Environmental science ,General Materials Science ,Tritium ,[CHIM.MATE]Chemical Sciences/Material chemistry ,Limit (mathematics) ,ComputingMilieux_MISCELLANEOUS ,Civil and Structural Engineering - Abstract
Waste management is a challenge for any future fusion facility, including ITER. Detritiation could allow easier procedures, since the practices in different countries already limit tritium conten...
- Published
- 2002
48. Fusion Reactor Safety
- Author
-
C. Gordon, David A. Petti, T. Dolan, S. Ohira, G. Marbach, I. Cook, K. Moshonas, and W. Gulden
- Subjects
Nuclear and High Energy Physics ,Engineering ,business.industry ,Nuclear engineering ,Technical committee ,Fusion power ,Condensed Matter Physics ,business - Abstract
Report on the 7th IAEA Technical Committee Meeting held at Cannes, France, 13-16 June 2000.
- Published
- 2001
49. Steel detritiation, optimization of a process
- Author
-
A.M Brass, J Chêne, S Rosanvallon, G Marbach, and J.P Daclin
- Subjects
Tokamak ,Hydrogen ,Mechanical Engineering ,Nuclear engineering ,Radioactive waste ,chemistry.chemical_element ,Nuclear reactor ,Fusion power ,law.invention ,Nuclear physics ,Nuclear Energy and Engineering ,chemistry ,law ,Scientific method ,Environmental science ,Nuclear waste storage ,General Materials Science ,Tritium ,Civil and Structural Engineering - Abstract
In the framework of waste management for the ITER reactor, steel detritiation is an important challenge, since nuclear waste storage facilities already specify and plan for limited tritium contents. A study has been initiated to develop a detritiation process drawing on a melting process currently used on a semi-industrial scale at the CEA/DAM (Commissariat à l'Énergie Atomique/Direction des Applications Militaires). The first part of the project consists of characterizing the ingots produced by this melting process. The measurements performed at CNRS Orsay (Centre National de la Recherche Scientifique) make it possible to assess the tritium distribution in two directions (longitudinal and transverse) by autoradiography and to measure tritium desorption using an original β-counting device. The second part of the project should allow us to assess the possibility of improving the detritiation rate by bubbling argon and hydrogen through the melt to improve mixing and thus facilitate the removal of tritium.
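As a point of reference (added for clarity; the symbols and numbers are illustrative, not taken from the record), the performance of such a melting process is commonly expressed as a detritiation factor, the ratio of the specific tritium activity of the steel before and after treatment:
$$DF = \frac{A_{\mathrm{initial}}}{A_{\mathrm{final}}},$$
so that, for example, reducing the activity from 10 kBq g$^{-1}$ to 0.1 kBq g$^{-1}$ corresponds to $DF = 100$.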
- Published
- 2000
50. Results, conclusions, and implications of the SEAFP-2 programme
- Author
-
L. Di Pace, Neill Taylor, I. Cook, P Rocco, G Marbach, and C Girard
- Subjects
Mechanical Engineering ,Nuclear engineering ,Nuclear reactor ,Fusion power ,Nuclear decommissioning ,Conceptual study ,law.invention ,Decay time ,Nuclear Energy and Engineering ,Material selection ,Risk analysis (engineering) ,law ,Environmental science ,General Materials Science ,Decay heat ,Civil and Structural Engineering ,Clearance - Abstract
Fusion power stations will inherently have no actinides or fission products, extremely low levels of nuclear energy, and low decay-heat power. With appropriate design and material selection, these favourable inherent features could give rise to substantial safety and environmental advantages. Analyses performed within the SEAFP-2 project of the European fusion programme have shown that it should be possible to design commercial fusion power stations so that:
• the maximum doses to the public arising from the most severe conceivable accident driven by in-plant energies would be at the millisievert level, well below the level at which evacuation would be considered;
• after a few decades, most, perhaps all, of the activated material arising from the operation and decommissioning of the plant could be cleared or recycled, with little or no need for repository disposal;
• these goals can be achieved by using relatively well-developed, near-term low-activation martensitic steel as the structural material.
The results supporting these conclusions are summarised in this paper. The detailed lessons learnt will be used as input to a future European conceptual study of commercial fusion power stations.
- Published
- 2000