108 results
Search Results
2. An efficient algorithm for solving the constellation-to-ground coverage problem based on latitude strip division.
- Author
- Wu, Huanqin, Song, Zhiming, Wang, Maocai, Chen, Xiaoyu, and Dai, Guangming
- Subjects
- *LATITUDE, *LONGITUDE, *ORBITS of artificial satellites, *ALGORITHMS, *GRIDS (Cartography)
- Abstract
• A computing method for constellation-to-ground coverage is investigated. • The maximum and minimum coverage regions are calculated. • Simulation results demonstrate the validity and effectiveness of the proposed algorithm. The issue of constellation-to-ground coverage is a research focus in Earth observation applications. Traditional calculations often rely on grid point methods or their derivatives, but these can be limited in their application, relatively costly, inefficient, and often do not account for potential errors. This paper proposes a novel, efficient method based on latitude strip division for calculating the ground area coverage of satellite constellations, capable of providing the upper and lower bounds of the coverage ratio for any ground area. Initially, the ground target area is divided into several latitude strips, and the target area range is used to determine the longitude range of each latitude strip. Subsequently, the upper and lower bounds of coverage of each strip are calculated according to the satellite ground coverage range. On this basis, the coverage boundary function is defined and the coverage ratio is derived through comprehensive statistical analysis. Finally, depending on the accuracy of the latitude strip division, the precise coverage area and coverage ratio with upper and lower bounds are determined for instantaneous, continuous, and cumulative coverage problems. Numerical simulation experiments were carried out and compared with the traditional grid point method to validate the effectiveness and computational efficiency of this algorithm in addressing the coverage issue for arbitrarily shaped ground areas. A comparison with the longitude strip method confirmed that, for ground areas whose longitude range exceeds their latitude range, the proposed latitude strip approach offers superior computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. Carbon-zero agility: Enabling carbon-zero organizations through agile management and ambiguous feedback algorithms.
- Author
- Lv, David Diwei and Cho, Erin
- Subjects
- *CARBON offsetting, *SCRUM (Computer software development), *ALGORITHMS, *JUDGMENT (Psychology), *JUST-in-time systems
- Abstract
To enable organizations to achieve carbon neutrality through agile capabilities, this paper establishes an integrative framework of carbon-zero agility consisting of three dimensions: search scope agility, search locus agility, and search pace agility. However, applying common agile methodologies like Scrum, Kanban, and Lean to cultivate these capabilities inevitably introduces feedback ambiguity, which can paralyze decision-making and increase errors due to inherent human cognitive limitations. To address this, tailored carbon-zero feedback algorithms are proposed to complement human judgment in agile workflows. Specifically, prescriptive analytics, federated learning, and probabilistic programming are injected into Scrum, Kanban, and Lean, respectively, to restore clarity amidst ambiguity. The framework is grounded in cases from the textile industry to demonstrate applicability in practical settings. By targeting the roots of distortions with human-algorithm collaborations, it provides an actionable roadmap to implement carbon-zero agility. • Establishes the carbon-zero agility framework: search scope, locus, and pace agility. • Identifies behavioral challenges and feedback ambiguity in agile for carbon-zero. • Proposes tailored carbon-zero feedback algorithms to enhance agility. • Connects agile processes with broader organizational sustainability capabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Algorithm and hardness results in double Roman domination of graphs.
- Author
- Poureidi, Abolfazl
- Subjects
- *DOMINATING set, *CHARTS, diagrams, etc., *ALGORITHMS, *STATISTICAL decision making, *HARDNESS, *ROMANS, *COMPUTATIONAL complexity
- Abstract
Let G = (V, E) be a graph. A double Roman dominating function (DRDF) of G is a function f : V → {0, 1, 2, 3} such that (i) each vertex v with f(v) = 0 is adjacent to either a vertex u with f(u) = 3 or to two vertices u_1 and u_2 with f(u_1) = f(u_2) = 2, and (ii) each vertex v with f(v) = 1 is adjacent to a vertex u with f(u) > 1. The double Roman domination number of G is the minimum weight among all DRDFs on G, where the weight of a DRDF f on G is f(V) = ∑_{v ∈ V} f(v). In this paper, we first propose an algorithm to compute the double Roman domination number of an interval graph G = (V, E) in O(|V| + |E|) time, answering a problem posed in Banerjee et al. (2020) [2]. Next, we show that the decision problem associated with double Roman domination is NP-complete for split graphs. Finally, we show that the computational complexities of the Roman domination problem and the double Roman domination problem are different. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
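The DRDF definition quoted in the abstract above is concrete enough to sketch in code. The following minimal Python sketch (not the paper's linear-time interval-graph algorithm, only a brute-force illustration of the definition on tiny graphs; all function and variable names are my own) checks the two DRDF conditions and computes the double Roman domination number by exhaustive search:

```python
from itertools import product

def is_drdf(adj, f):
    """Check the DRDF conditions for f: V -> {0,1,2,3} on graph adj (dict of neighbour lists)."""
    for v, nbrs in adj.items():
        if f[v] == 0:
            # needs a neighbour labelled 3, or two neighbours labelled 2
            if not (any(f[u] == 3 for u in nbrs)
                    or sum(1 for u in nbrs if f[u] == 2) >= 2):
                return False
        elif f[v] == 1:
            # needs a neighbour labelled more than 1
            if not any(f[u] > 1 for u in nbrs):
                return False
    return True

def gamma_dR(adj):
    """Brute-force the double Roman domination number (exponential; illustration only)."""
    vs = list(adj)
    best = None
    for labels in product(range(4), repeat=len(vs)):
        f = dict(zip(vs, labels))
        if is_drdf(adj, f):
            w = sum(f.values())  # weight f(V)
            if best is None or w < best:
                best = w
    return best
```

On the path a-b-c, labelling the centre 3 and both leaves 0 is a valid DRDF of weight 3, and no labelling of weight 2 satisfies the conditions, so the brute force returns 3.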
5. Hardness results for three kinds of colored connections of graphs.
- Author
- Huang, Zhong and Li, Xueliang
- Subjects
- *GRAPH connectivity, *HARDNESS, *COMPUTATIONAL complexity, *CHROMATIC polynomial, *ALGORITHMS
- Abstract
The concept of the rainbow connection number of a graph was introduced by Chartrand et al. in 2008. Inspired by this concept, other colored versions of connectivity in graphs were introduced, such as the monochromatic connection number by Caro and Yuster in 2011, the proper connection number by Borozan et al. in 2012, and the conflict-free connection number by Czap et al. in 2018, as well as some other variants of connection numbers later on. Chakraborty et al. proved that computing the rainbow connection number of a graph is NP-hard. Determining the computational complexity of the monochromatic connection number, the proper connection number, and the conflict-free connection number of a graph has long been attempted, but had remained unsolved. Only the strong versions of these connection numbers, i.e., the strong proper connection number and the strong conflict-free connection number, were determined to be NP-hard. In this paper, we prove that computing each of the monochromatic connection number, the proper connection number, and the conflict-free connection number of a graph is NP-hard. This solves a long-standing problem in this field, asked in many workshop talks and papers. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
6. A fast algorithm for maximizing a non-monotone DR-submodular integer lattice function.
- Author
- Nong, Qingqin, Fang, Jiazhu, Gong, Suning, Feng, Yan, and Qu, Xiaoying
- Subjects
- *ALGORITHMS, *INTEGERS, *APPROXIMATION algorithms
- Abstract
In this paper we consider the problem of maximizing a non-monotone and non-negative DR-submodular function on a bounded integer lattice [B⃗] = {(x_1, ..., x_n) ∈ Z_+^n : 0 ≤ x_k ≤ B_k, ∀ 1 ≤ k ≤ n} without any constraint, where B⃗ = (B_1, ..., B_n) ∈ Z_+^n. We design an algorithm for the problem and measure its performance by its approximation ratio and the number of value oracle queries it needs, the latter being the dominating term in the running time of an algorithm. It has been shown that, for the problem considered, any algorithm achieving an approximation ratio greater than 1/2 requires an exponential number of value oracle queries. In the literature there are two algorithms that reach a 1/2 approximation guarantee. The first algorithm needs O(n ||B⃗||_∞) oracle queries. The second one reduces its number of oracle queries to O(n max{1, log ||B⃗||_∞}) but needs large storage. In this paper we present a randomized approximation algorithm that has an approximation guarantee of 1/2, calls O(n max{1, log ||B⃗||_∞}) oracle queries and does not need large storage, improving the results in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
7. A tight bound for shortest augmenting paths on trees.
- Author
- Bosek, Bartłomiej, Leniowski, Dariusz, Sankowski, Piotr, and Zych-Pawlewicz, Anna
- Subjects
- *TREES, *BIPARTITE graphs, *LOGICAL prediction, *SAWLOGS, *ALGORITHMS
- Abstract
The shortest augmenting path technique is one of the fundamental ideas used in maximum matching and maximum flow algorithms. Since being introduced by Edmonds and Karp in 1972, it has been widely applied in many different settings. Surprisingly, despite this extensive usage, it is still not well understood even in the simplest case: the online bipartite matching problem on trees. In this problem a bipartite tree T = (W ⊎ B, E) is revealed online, i.e., in each round one vertex from B arrives with its incident edges. It was conjectured by Chaudhuri et al. [6] that the total length of all shortest augmenting paths found is O(n log n). In this paper we prove a tight O(n log n) upper bound for the total length of shortest augmenting paths for trees, improving over the O(n log² n) bound [3]. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. A novel algorithm to model concrete based on geometrical properties of aggregate and its application.
- Author
- Gupta, Pramod Kumar and Singh, Chandrabhan
- Subjects
- *MORTAR, *GEOMETRIC modeling, *CONCRETE, *IMPACT loads, *ALGORITHMS
- Abstract
• A novel algorithm is developed to model aggregates geometrically. • A 3D meso-scale concrete model is developed, considering aggregate parameters. • Inclusion of aggregates in mortar increases the strength of the obtained concrete. • Under impact loading, mortar and the interfacial transition zone fail simultaneously. • The extent of damage depends on the loading rate. In this paper, a novel algorithm is developed to generate a geometrical model of coarse aggregate based on its physical properties. The size, elongation index, and flakiness index are considered while developing the algorithm. The developed geometrical model of aggregate is further used for generating the finite element (FE) meso-model of concrete. Three distinct phases are considered in the FE model, i.e., aggregate, mortar, and the interfacial transition zone (ITZ). Further, numerical simulations have been performed under different loading conditions (i.e., quasi-static and high strain rate). Additional considerations are discussed to develop a suitable FE model for a 3D split Hopkinson pressure bar specimen. The FE simulation results are compared with the available literature to verify the developed meso-model. It is verified that the mortar and ITZ fail during quasi-static loading, whereas the structural effect (lateral inertia confinement) plays a significant role at high strain rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A probabilistic algorithm for vertex cover.
- Author
- Berend, D. and Mamana, S.
- Subjects
- *GREEDY algorithms, *ALGORITHMS, *NP-hard problems, *PROBLEM solving, *RANDOM graphs
- Abstract
The Minimum Vertex Cover Problem is the optimization problem of finding a vertex cover V_c of minimal cardinality in a given graph. It is a classic NP-hard problem, and various algorithms have been suggested for it. In this paper, we start with a basic algorithm for solving the problem. Using a probabilistic idea, we develop an improved algorithm from it. The algorithm is greedy; at each step it adds to the cover a vertex such that the expected cover size, if we continue randomly after this step, is minimal. We study the new algorithm theoretically and empirically, and run simulations to compare its performance to that of some algorithms of a similar nature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
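For context on the greedy template the abstract above describes, here is a minimal Python sketch of a simpler, classical cousin: max-degree greedy, which repeatedly adds the vertex covering the most uncovered edges. This is not the authors' probabilistic rule (which picks the vertex minimizing the expected final cover size under random continuation); it only illustrates the general greedy vertex-cover pattern, and all names are my own:

```python
def greedy_vertex_cover(edges):
    """Max-degree greedy vertex cover: returns a set of vertices hitting every edge."""
    edges = {frozenset(e) for e in edges}
    cover = set()
    while edges:
        # count how many remaining edges each vertex covers
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # add the vertex covering the most remaining edges
        best = max(degree, key=degree.get)
        cover.add(best)
        edges = {e for e in edges if best not in e}
    return cover
```

On a star graph the centre covers all edges at once, so the greedy cover is a single vertex; in general this heuristic gives no fixed approximation guarantee, which is part of why refinements like the paper's are studied.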
10. On the smoothing of the norm objective penalty function for two-cardinality sparse constrained optimization problems.
- Author
- Min, Jiang, Meng, Zhiqing, Zhou, Gengui, and Shen, Rui
- Subjects
- *CONSTRAINED optimization, *ALGORITHMS
- Abstract
This paper studies a smoothing norm objective penalty function for two-cardinality sparse constrained optimization problems. Good properties are proved for the smoothing norm objective penalty function in solving two-cardinality sparse constrained optimization problems, after which an algorithm is presented. Furthermore, its convergence is proved under some conditions. The proposed algorithm finds a satisfactory approximate optimal solution in a numerical example. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. Byzantine-tolerant causal broadcast.
- Author
- Auvolat, Alex, Frey, Davide, Raynal, Michel, and Taïani, François
- Subjects
- *ALGORITHMS, *BROADCASTING industry, *CRYPTOCURRENCIES, *GAUSSIAN channels, *MULTICASTING (Computer networks)
- Abstract
• In this paper, we provide a formal definition of a causal broadcast abstraction in the presence of Byzantine processes. • We present a simple causal broadcast algorithm that implements this specification in the presence of Byzantine processes. • We prove our algorithm to be correct under the specification we have provided. • One difficulty consists in proving that an adversary cannot destroy the causality between broadcasts of correct processes. • We illustrate how our algorithm can help implement a money transfer object, with potential application to cryptocurrencies. Causal broadcast is a communication abstraction built on top of point-to-point send/receive networks that ensures that any two messages whose broadcasts are causally related (as captured by Lamport's "happened before" relation) are delivered in their sending order. Several causal broadcast algorithms have been designed for failure-free and crash-prone asynchronous message-passing systems. This article first gives a formal definition of a causal broadcast abstraction in the presence of Byzantine processes, in the form of two equivalent characterizations, and then presents a simple causal broadcast algorithm that implements it. The main difficulty in the design and the proof of this algorithm comes from the very nature of Byzantine faults: Byzantine processes may have arbitrary behavior, and the algorithm must ensure that correct processes (i) maintain a coherent view of causality and (ii) are never prevented from communicating between themselves. To this end, the algorithm is built modularly, namely it works on top of any Byzantine-tolerant reliable broadcast algorithm. Due to this modularity, the proposed algorithm is easy to understand and inherits the computability assumptions (notably the maximal number of processes that may be Byzantine) and the message/time complexities of the underlying reliable broadcast on top of which it is built. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. On the solution bound of two-sided scaffold filling.
- Author
- Ma, Jingjing, Zhu, Daming, Jiang, Haitao, and Zhu, Binhai
- Subjects
- *GRAPH connectivity, *ALGORITHMS
- Abstract
In this paper, we propose an algorithm that approximates the Two-Sided Scaffold Filling problem to a performance ratio of 1.4 + ε. This is achieved through a deep investigation of the optimal solution structure of Two-Sided Scaffold Filling. We make use of a relevant graph associated with a solution of a Two-Sided Scaffold Filling instance, and evaluate the optimal solution value by the number of connected components in this graph. We show that an arbitrary optimal solution can be transformed into one whose relevant graph admits connected components whose values can be compared with the solution of our algorithm. The performance ratio of 1.4 + ε is obtained by comparing the bound of such an optimal solution with the solution of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. A transformation algorithm to construct a rectangular floorplan.
- Author
- Kumar, Vinod and Shekhawat, Krishnendra
- Subjects
- *VERY large scale circuit integration, *ARCHITECTURAL designs, *ARCHITECTURAL design, *ALGORITHMS, *FLOOR plans, *RECTANGLES
- Abstract
A rectangular floorplan (RFP) is a partition of a rectangle R into rectangles R_1, R_2, ..., R_n such that no four of them meet at a point. Rectangular floorplans find their applications in VLSI circuit floorplanning and architectural floorplanning. Let G represent the graph of a rectangular floorplan RFP_1(n) and H be a subgraph of G, where n denotes the number of rectangles in RFP_1(n). It is well known that not every subgraph of G admits an RFP, i.e., G does not have a hereditary property. Hence, in this paper, we first derive a necessary and sufficient condition for H to admit a rectangular floorplan RFP_2(n). Further, if H admits an RFP, we present a linear-time transformation algorithm for deriving RFP_2(n) from RFP_1(n). • The existence of a rectangular floorplan for a subgraph of the graph representing a rectangular floorplan is shown. • A transformation algorithm is proposed to transform a rectangular floorplan into another rectangular floorplan. • Applications to VLSI circuit design and architectural design have been discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. A system for transformation of sentences from the enriched formalized Node of Knowledge record into relational database.
- Author
- Čandrlić, Sanja, Ašenbrener Katić, Martina, and Pavlić, Mile
- Subjects
- *THEORY of knowledge, *RELATIONAL databases, *ALGORITHMS, *NATURAL language processing, *ARTIFICIAL intelligence
- Abstract
Highlights • A process model for the transformation of formalized sentences into a relational database. • Algorithms of functions and procedures for each step of the transformation. • A meta model for the structure and content of formalized sentences in the relational database. Abstract Verbalized text contains knowledge necessary and sufficient for the transfer of numerous human cognitions. The question is how this knowledge was saved into text. The authors believe that they have found an important idea of how to assemble knowledge into nodes of knowledge. Similar methods exist in the field of knowledge networks, but none of them works in the same way as the Node of Knowledge (NOK) method. Basic terms and concepts in a language are represented by words whose meaning is final and cannot be divided into subterms. More complex meaning can be achieved by combining words into sentences. In the authors' opinion, each sentence includes a connective medium for the words in the sentence (for semantic reasons), which is inwrought in the Node of Knowledge method. Using the Node of Knowledge method, each sentence can be presented as a network of connected words. This network is enriched with links between words so that a computer can interpret the meaning and knowledge of the sentence in the same way an intelligent person does. A formalized and semantically enriched record of sentences (called Formalized Node of Knowledge, FNOK) is developed. The authors find that in this way, even without statistical text analysis, an algorithm can give a correct answer to a question based on the written text. This paper presents a system for the transformation of textual knowledge expressed in natural language sentences into a relational database. The system is part of a larger knowledge-based system based on the Node of Knowledge (NOK) conceptual framework for knowledge-based system development.
This paper starts with sentences written in the formalized and enriched form, for which a logical transformation into the structure of a relational database is proposed. The system and algorithms for the transformation of formalized sentences into n-tuples of the relational database are distributed and presented in several steps. The research has shown that it is possible to store semantically enriched sentences in relational databases. The solution presented in this paper is important for the further development of a system for receiving questions from users and providing answers (i.e., a question-answering system), with the ability to use well-developed relational SQL languages. A relational database of texts enables numerous applications in the field of expert and intelligent systems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
15. A new algorithm of links labelling for the isomorphism detection of various kinematic chains using binary code.
- Author
- Rai, Rajneesh Kumar and Punjabi, Sunil
- Subjects
- *BINARY codes, *KINEMATIC chains, *ALGORITHMS, *LABELS
- Abstract
Highlights • A new algorithm of links labelling, along with the binary code, is used for the isomorphism detection of kinematic chains. • Kinematic chains with simple and multiple joints, as well as epicyclic gear chains, have been tested very effectively. • Decoding of the binary codes for various kinematic chains is presented in a unique way. Abstract Isomorphism of kinematic chains (KCs) has always been a critical issue for researchers dealing with structural synthesis. Consequently, many researchers of repute have presented various methods during the last eight decades for KCs with either simple and/or multiple joints. Binary code is one such method, but the major problem lies in the algorithm of links labelling, which becomes cumbersome, in particular, for large KCs. The paper presents a simple algorithm of links labelling used to find a binary sequence which, in turn, provides a maximum binary code (chain invariant). The algorithm is tested for its efficiency and reliability on KCs with six, seven, eight, nine, ten, eleven, twelve and fifteen links with simple joints, on seven- and eight-link KCs with multiple joints and, finally, on epicyclic gear trains (EGTs) with four, five and six links. The results are in full agreement with the references taken for the purpose. The paper also discusses, in a unique way, the decoding of the binary codes of different KCs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
16. Improved Moore-Penrose continuation algorithm for the computation of problems with critical points.
- Author
- Léger, S., Larocque, P., and LeBlanc, D.
- Subjects
- *CUSP forms (Mathematics), *ALGORITHMS, *CONTINUATION methods, *FINITE element method
- Abstract
• This paper presents an improved algorithm for the Moore-Penrose continuation method. • Its ability to compute solution curves with critical points is greatly improved. • Difficulties observed using the standard approach are not encountered. • Easy implementation. Using typical solution strategies to compute the solution curve of challenging problems often leads to the breakdown of the algorithm. To improve the solution process, numerical continuation methods have proved to be a very efficient tool. However, these methods can still lead to undesired results. In particular, near severe limit points and cusps, the solution process frequently encounters one of the following situations: divergence of the algorithm; a change in direction which makes the algorithm backtrack over a part of the solution curve that has already been obtained; or omitting important regions of the solution curve by converging to a point that is much farther than the one anticipated. Detecting these situations is not an easy task when solving practical problems, since the shape of the solution curve is not known in advance. This paper therefore presents a modified Moore-Penrose continuation method that includes two key aspects for solving challenging problems: detection of problematic regions during the solution process, and additional steps to deal with them. The proposed approach can either be used as a basic continuation method or simply be activated when difficulties occur. Numerical examples are presented to show the efficiency of the new approach. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Remote sensing algorithms for particulate inorganic carbon (PIC) and the global cycle of PIC.
- Author
- Balch, William M. and Mitchell, Catherine
- Subjects
- *COLLOIDAL carbon, *OCEAN color, *ALGORITHMS, *REMOTE sensing, *MACHINE learning, *CALCIUM carbonate
- Abstract
This paper begins with a review of the history of remote sensing algorithms for the determination of particulate inorganic carbon (PIC; a.k.a. calcium carbonate), primarily associated with haptophyte phytoplankton known as coccolithophores. These algae have strong optical particle backscattering (b_bp), which can dominate ocean color properties. In non-bloom conditions, coccolithophore b_bp typically accounts for ∼10-20% of the total b_bp, whereas in turbid coccolithophore blooms, coccolithophore b_bp can account for >90% of the total b_bp. Since total b_bp features heavily in a number of algorithms for the determination of phytoplankton standing stock, disproportionate coccolithophore b_bp can cause significant errors in a wide variety of other ocean-color algorithms. Here we discriminate between qualitative coccolithophore algorithms (coccolith flags), quantitative algorithms to determine the concentration of coccolithophore PIC, and algorithms that focus on coccolithophore biomass. Algorithms from satellite sensors not typically used for phytoplankton remote sensing, such as the AVHRR and MISR, are discussed, as well as an improved method to model the backscattering cross-section of PIC. We also cover remote sensing algorithms for the determination of calcification rates, modeling vertical profiles of PIC for the remote sensing of integrated euphotic PIC, and the effect of coccolithophore species variation on PIC retrievals. The second part of this review covers what we have learned about the cycling of PIC from remotely sensed satellite measurements since the first satellite observations in 1982. The analysis begins from the global perspective, then focuses on five sub-regions which have become notorious for their regular, high-reflectance coccolithophore blooms (Southern Ocean, Atlantic Ocean, Arctic Ocean, Black Sea and Bering Sea).
We end with a discussion of future directions for PIC algorithms using machine-learning approaches and hyperspectral applications during the upcoming PACE era. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Improved algorithm for the locating-chromatic number of trees.
- Author
- Baskoro, Edy Tri and Primaskun, Devi Imulia Dian
- Subjects
- *ALGORITHMS, *TREES, *NUMBER concept, *TREE graphs
- Abstract
The concept of the locating-chromatic number for graphs was introduced by Chartrand et al. (2002). In this paper, we propose an algorithm to determine the upper bound of the locating-chromatic number of any tree. This algorithm works much better than the one given by Furuya and Matsumoto (2019). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Graph orientation with splits.
- Author
- Asahiro, Yuichi, Jansson, Jesper, Miyano, Eiji, Nikpey, Hesam, and Ono, Hirotaka
- Subjects
- *SEQUENCE spaces, *MAXIMA & minima, *COMPUTATIONAL complexity, *ALGORITHMS, *INTEGERS
- Abstract
The Minimum Maximum Outdegree Problem (MMO) is to assign a direction to every edge in an input undirected, edge-weighted graph so that the maximum weighted outdegree taken over all vertices becomes as small as possible. In this paper, we introduce a new variant of MMO called the p-Split Minimum Maximum Outdegree Problem (p-Split-MMO), in which one is allowed to perform a sequence of p split operations on the vertices before orienting the edges, for some specified non-negative integer p, and study its computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
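The base MMO objective described in the abstract above can be evaluated by brute force on toy instances. A minimal Python sketch (exponential in the number of edges, for intuition only; the split operation of p-Split-MMO is not modelled, and all names are my own):

```python
from itertools import product

def mmo(edges):
    """Brute-force the MMO value: try every orientation of the weighted edges
    (u, v, w) and return the smallest achievable maximum weighted outdegree."""
    best = float('inf')
    for dirs in product((0, 1), repeat=len(edges)):
        out = {}  # vertex -> total weight of its outgoing edges
        for (u, v, w), d in zip(edges, dirs):
            tail = u if d == 0 else v  # the chosen orientation's tail vertex
            out[tail] = out.get(tail, 0) + w
        best = min(best, max(out.values()))
    return best
```

For example, orienting a unit-weight triangle as a directed cycle gives every vertex outdegree 1, which is optimal.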
20. On algorithmic complexity of double Roman domination.
- Author
- Poureidi, Abolfazl and Jafari Rad, Nader
- Subjects
- *DOMINATING set, *PLANAR graphs, *BIPARTITE graphs, *STATISTICAL decision making, *ALGORITHMS, *ROMANS
- Abstract
A double Roman dominating function (DRDF) on a graph G = (V, E) is a function f : V → {0, 1, 2, 3} such that every vertex v ∈ V with f(v) = 0 is adjacent either to a vertex u with f(u) = 3 or to two distinct vertices x and y with f(x) = f(y) = 2, and every vertex v ∈ V with f(v) = 1 is adjacent to a vertex u with f(u) ≥ 2. The weight of f is the sum f(V) = ∑_{v ∈ V} f(v). The minimum weight of a DRDF on G is the double Roman domination number of G, denoted by γ_dR(G). A graph G is a double Roman graph if γ_dR(G) = 3γ(G), where γ(G) is the domination number of G. In this paper, we first show that the decision problem associated with double Roman domination is NP-complete even when restricted to planar graphs. Then, we study the complexity of a problem posed in [R.A. Beeler, T.W. Haynes and S.T. Hedetniemi, Double Roman domination, Discrete Appl. Math. 211 (2016), 23–29], and show that the problem of deciding whether a given graph is double Roman is NP-hard even when restricted to bipartite or chordal graphs. Then, we give linear algorithms that compute the domination number and the double Roman domination number of a given unicyclic graph. Finally, we give a linear algorithm that decides whether a given unicyclic graph is double Roman. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
21. Yield constrained automated design algorithm for power optimized pipeline ADC.
- Author
- Sadrafshari, Vala, Sadrafshari, Shamin, and Sharifkhani, Mohammad
- Subjects
- *ANALOG-to-digital converters, *PIPELINES, *DEGREES of freedom, *ALGORITHMS
- Abstract
Pipeline Analog-to-Digital Converter (ADC) design processes include several redesign steps to achieve the optimum solution. Hence, designers prefer to use automated algorithms for this purpose. In this paper, an automated algorithm for CAD tools is presented, considering the trade-off between yield and power consumption for pipeline ADCs. The algorithm benefits from multiple degrees of freedom, from system-level down to transistor-level parameters, which helps CAD tools find the optimized solution. It allows designers to choose an optimum scenario considering the trade-off between yield and power consumption. To evaluate the capabilities of this algorithm, a 10-bit pipeline ADC is designed and analyzed. This ADC achieves 6.3 mW power consumption, 91% yield, 55.3 dB SNDR and 58.8 dB SFDR, all in good agreement with the algorithm results. In comparison with similar designs it offers a competitive Figure of Merit (FOM), which demonstrates the capability of this algorithm in finding the optimum solution. • Optimum pipeline ADC design processes include several redesign steps. • An automated algorithm considers the trade-off between yield and power consumption. • The algorithm benefits from a large design space. • Comparing the algorithm with similar designs proves its capability in optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
22. Multidisciplinary Treatments of True Posterior Inferior Cerebellar Artery Aneurysms: Single-Center Retrospective Study and Treatment Algorithm.
- Author
- Kanemoto, Yukihide, Michiwaki, Yuhei, Maeda, Kazushi, Kawano, Yosuke, Maehara, Naoki, Nagaoka, Shintaro, and Gi, Hidefuku
- Subjects
- *DISSECTING aneurysms, *ALGORITHMS, *ANEURYSMS, *ARTERIES, *MULTIVARIATE analysis, *RETROSPECTIVE studies
- Abstract
True posterior inferior cerebellar artery (PICA) aneurysms outside the vertebral artery-PICA region are rare, with approximately 30 cases reported in just a few papers; no treatment paradigm has been advocated. The objective of this study was to present detailed clinical features and outcomes for several treatments of true PICA aneurysms and to suggest an algorithm for treatment strategies. We retrospectively analyzed the outcomes of patients treated for PICA aneurysms with microsurgical and endovascular treatments. We also investigated the influence of several factors on the modified Rankin Scale score. Cases with PICA aneurysms (n = 36) outside the vertebral artery-PICA region were identified angiographically. Aneurysm locations included the anterior medullary (n = 7), lateral medullary (n = 10), tonsillomedullary (n = 4), telovelotonsillar (n = 12), and cortical (n = 3) segments of the PICA. Aneurysm morphology was as follows: dissecting, 22; fusiform, 6; and saccular, 8. On multivariate analysis, age (P = 0.028) and lack of vermian infarction (P = 0.037) were associated with a significantly better prognosis. Prognosis was not significantly different across the 5 aneurysm locations or among the 4 treatment groups: clipping/coiling, trapping/parent artery occlusion, trapping/parent artery occlusion + bypass, and observation including external ventricular drainage. This study suggests that factors associated with a significantly better prognosis include age, clip/coil treatments, and the absence of vermian infarction complication. A treatment algorithm for true PICA aneurysms was proposed according to pretreatment H and K grade, PICA segments, aneurysm morphology, and 3 types of ischemia linked to the brainstem, cerebellar hemisphere, or vermis. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. An improved algorithm for checking the injectivity of 2D toric surface patches.
- Author
-
Yu, Ying-Ying, Ji, Ye, and Zhu, Chun-Gang
- Subjects
- *
ISOGEOMETRIC analysis , *TENSOR products , *ALGORITHMS , *PARAMETERIZATION - Abstract
Injective parametrizations are widely used both in theory and in applications. The injectivity of parametric curves and surfaces means that there are no self-intersections. A toric surface patch is defined by a set of integer lattice points A and corresponding control points and weights, and includes rational tensor product and triangular Bézier patches as special cases. In 2011, Sottile and Zhu presented a geometric method to check the injectivity of 2D toric surface patches. In this paper, we present an improved algorithm of their method. The complexity of the improved algorithm is reduced from O(n^3) to O(n^2), where n = #(A). Some examples are shown to illustrate the effectiveness of our algorithm. Moreover, the algorithm is also applied to check the injectivity of parameterizations in isogeometric analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
24. Calculating an upper bound of the locating-chromatic number of trees.
- Author
-
Assiyatun, Hilda, Syofyan, Dian Kastika, and Baskoro, Edy Tri
- Subjects
- *
TREES , *TREE graphs , *CATERPILLARS , *COORDINATES , *ALGORITHMS - Abstract
The locating-chromatic number of a graph G(V, E) is the cardinality of a minimum resolving partition of the vertex set V(G) such that all vertices have distinct coordinates with respect to this partition and no two adjacent vertices are contained in the same partition class. Determining the locating-chromatic number of an arbitrary tree is a difficult task. In this paper, we propose an algorithm to compute an upper bound on the locating-chromatic number of any tree. To do so, we decompose a tree into caterpillars and then compute the upper bound on the locating-chromatic number of the tree in terms of the bounds for these caterpillars. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
25. RegularSearch, a fast performance algorithm for typical testors computation.
- Author
-
Lefebre-Lobaina, Jairo A. and Shulcloper, José Ruiz
- Subjects
- *
ALGORITHMS , *FEATURE selection , *SEARCH algorithms - Abstract
In pattern recognition tasks, selecting the appropriate features for a supervised classification problem is a critical task. One theoretical tool used for this purpose is Test Theory, which identifies feature relevance and reduces the complexity of the training and classification processes. However, searching for irreducible subsets of features, called typical testors, that can differentiate objects belonging to different classes is an exponential problem in terms of the number of features. To address this problem, various algorithms have been proposed in the literature that use different strategies to avoid unnecessary comparisons. In this paper, the RegularSearch algorithm is proposed to find all typical testors in a dataset associated with a supervised classification problem. The algorithm incrementally searches for candidate subsets of features that are most likely to be associated with a typical testor, reducing the number of comparisons as new features are added to the candidate subset. Our comparisons with the best-performing algorithms reported in the literature demonstrate that RegularSearch is more efficient for processing synthetic and real problem datasets in certain scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. A polynomial algorithm for computing the weak rupture degree of trees.
- Author
-
Wei, Zongtian, Yue, Chao, Li, Yinkui, Yue, Hongyun, and Liu, Yong
- Subjects
- *
POLYNOMIALS , *ALGORITHMS , *TREES , *GEOMETRIC vertices - Abstract
Let G = (V, E) be a graph. The weak rupture degree of G is defined as r_w(G) = max{ω(G − X) − |X| − m_e(G − X) : ω(G − X) > 1}, where the maximum is taken over all subsets X of V(G), ω(G − X) is the number of components of G − X, and m_e(G − X) is the size (edge number) of a largest component of G − X. This is an important parameter for quantitatively describing the invulnerability of networks. In this paper, based on a study of the relationship between network structure and the weak rupture degree, a polynomial algorithm for computing the weak rupture degree of trees is given. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
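The paper's polynomial-time algorithm for trees is not detailed in the abstract, but the definition it works from is fully stated. The following is a brute-force sketch of that definition only (exponential in |V|, for small graphs), written as a reference against which a tree-specific algorithm could be checked; the adjacency-dict representation and function names are my own.

```python
from itertools import combinations

def components(adj, removed):
    """Vertex sets of the connected components of G - removed (adjacency dict)."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.add(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def weak_rupture_degree(adj):
    """Brute-force r_w(G) = max{ w(G-X) - |X| - m_e(G-X) : w(G-X) > 1 },
    maximizing over all vertex subsets X that disconnect G."""
    vertices = list(adj)
    best = None
    for k in range(len(vertices)):
        for X in combinations(vertices, k):
            comps = components(adj, set(X))
            if len(comps) <= 1:  # definition requires w(G-X) > 1
                continue
            # m_e(G-X): edge count of a largest component (each edge seen twice)
            m_e = max(sum(1 for u in c for v in adj[u] if v in c) // 2
                      for c in comps)
            val = len(comps) - len(X) - m_e
            best = val if best is None else max(best, val)
    return best

# Example: the path 1-2-3-4 and the star with center 0.
p4 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

Removing the star's center isolates three vertices at the cost of one deletion and no surviving edges, so the star scores higher than the path under this measure.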
27. A new data clustering algorithm based on critical distance methodology.
- Author
-
Kuwil, Farag Hamed, Shaar, Fadi, Topcu, Ahmet Ercan, and Murtagh, Fionn
- Subjects
- *
EUCLIDEAN algorithm , *K-means clustering , *EUCLIDEAN distance , *REGRESSION trees , *MATHEMATICAL statistics , *ALGORITHMS , *GRAPH algorithms - Abstract
• A new CDC algorithm. • 6 indicators to evaluate the results. • 26 experiments were conducted. A variety of algorithms have recently emerged in the field of cluster analysis. Consequently, based on the distribution nature of the data, an appropriate algorithm can be chosen for the purpose of clustering. It is difficult for a user to decide a priori which algorithm would be the most appropriate for a given dataset. Graph-based algorithms provide good results for this task. However, these algorithms are vulnerable to outliers and rely on limited information about the edges of the tree used to split a dataset. Thus, in several fields the need for better clustering algorithms is growing, and robust, dynamic algorithms that improve and simplify the whole process of data clustering have become an urgent need. In this paper, we propose a novel distance-based clustering algorithm called the critical distance clustering (CDC) algorithm. The algorithm depends on the Euclidean distance between data points and some basic mathematical statistics operations. It is simple, robust, and flexible; it works with quantitative, real-valued data of different dimensions, not with qualitative or categorical data. In this work, 26 experiments are conducted using different types of real and synthetic datasets taken from different fields. The results show that the new algorithm outperforms some popular clustering algorithms such as MST-based clustering, K-means, and DBSCAN. Moreover, the algorithm can produce reasonable clusters even when the dataset contains outliers, without any parameters specified in advance. It also provides a number of indicators to evaluate the established clusters and confirm the validity of the clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. Algorithms to define diabetes type using data from administrative databases: A systematic review of the evidence.
- Author
-
Sajjadi, Seyedeh Forough, Sacre, Julian W., Chen, Lei, Wild, Sarah H., Shaw, Jonathan E., and Magliano, Dianna J.
- Subjects
- *
TYPE 1 diabetes , *TYPE 2 diabetes , *DIABETES , *ALGORITHMS , *INSULIN therapy - Abstract
To find the best-performing algorithms to distinguish type 1 and type 2 diabetes in administrative data. Embase and MEDLINE databases were searched from January 2000 until January 2023. Papers evaluating the performance of algorithms to define type 1 and type 2 diabetes by reporting diagnostic metrics against a range of reference standards were selected. Study quality was evaluated using the Quality Assessment of Diagnostic Accuracy Studies. Of the 24 studies meeting the eligibility criteria, 19 demonstrated a low risk of bias and low concerns about the applicability of the study population across all domains. Algorithms considering multiple diabetes diagnostic codes alone were sensitive and specific approaches to classify diabetes type (both metrics >92.1% for type 1 diabetes; >86.9% for type 2 diabetes). Among the top 10-performing algorithms to detect type 1 and type 2 diabetes, 70% and 100% featured multiple criteria, respectively. Information on insulin use was more sensitive and specific for detecting diabetes type than were criteria based on use of oral hypoglycaemic agents. Algorithms based on multiple diabetes diagnostic codes and insulin use are the most accurate approaches to distinguish type 1 from type 2 diabetes using administrative data. Approaches with more than one criterion may also increase sensitivity in distinguishing diabetes type. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
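The review's best-performing algorithms combine multiple type-specific diagnostic codes with insulin use. The rule below is a hypothetical illustration only of that multi-criteria shape; the function name, inputs, and tie-breaking logic are my own and do not reproduce any algorithm evaluated in the review.

```python
def classify_diabetes_type(t1_code_count, t2_code_count, uses_insulin):
    """Illustrative multi-criteria rule: majority of type-specific diagnostic
    codes decides the type; insulin use breaks ties. Thresholds hypothetical."""
    if t1_code_count > t2_code_count:
        return "type 1"
    if t2_code_count > t1_code_count:
        return "type 2"
    # Ambiguous code history: fall back on insulin use, which the review
    # found more discriminating than oral hypoglycaemic agent use.
    return "type 1" if uses_insulin else "type 2"
```

In a real administrative-data study the counts would come from coded encounters over a defined lookback window, and any such rule would need validation against a clinical reference standard.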
29. Use of canonical variables to solve state function based flash problems.
- Author
-
Paterson, Duncan, Stenby, Erling H., and Yan, Wei
- Subjects
- *
PHASE equilibrium , *ALGORITHMS - Abstract
This paper presents a new algorithm for solving the general state function based flash problem. The algorithm uses the canonical variables of the state function when solving the equation of state. Doing so moves some of the complexity of the flash problem into the equation of state solver, effectively simplifying the phase-split problem. A two-phase example is described and examined over a wide range of temperature and pressure conditions. A multiphase (up to four-phase) mixture is used to demonstrate the method for solving multiphase flash problems. It is demonstrated that the solution algorithm takes a CPU time similar to that of solving conventional flash problems. The proposed algorithm will help to robustly solve the general, difficult flash problems commonly encountered in modern process and reservoir simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Evaluating the Performance of various Algorithms for Wind Energy Optimization: A Hybrid Decision-Making model.
- Author
-
Ala, Ali, Mahmoudi, Amin, Mirjalili, Seyedali, Simic, Vladimir, and Pamucar, Dragan
- Subjects
- *
WIND power , *METAHEURISTIC algorithms , *OPTIMIZATION algorithms , *PARTICLE swarm optimization , *ALGORITHMS , *EVOLUTIONARY algorithms - Abstract
Wind is one of the most promising renewable energy resources and has become a suitable replacement for fossil fuels. Optimizing the transfer of wind energy from a wind turbine is essential to obtain the maximum power output, as other variables are uncontrollable. This paper presents four different optimization algorithms, namely ant lion optimization (ALO), the whale optimization algorithm (WOA), particle swarm optimization (PSO), and crow search optimization (CSO), within a hybrid decision-making model to compare their performance on wind energy optimization. In the first phase, the evolutionary algorithms are defined based on several factors to meet the need for wind energy based on volumetric and time reliability, reversibility, and vulnerability, and to evaluate the optimized energy delivered to the subscriber from the Gansu region. In the second phase, the ordinal priority approach (OPA) is coupled with the VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method to rank the evolutionary algorithms. The results are then compared with the absolute optimal response obtained from the nonlinear programming method in GAMS software. The results demonstrate that ALO outperforms the other algorithms. The average accuracy of ALO is 92%; CSO is the least accurate, at 55% of the absolute optimal response. ALO is found to be faster, more efficient, and more economical and reliable than the other optimization algorithms for the problem under consideration. It is shown that the applied models are robust, effective, and able to save costs. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. A new algorithmic framework for basic problems on binary images.
- Author
-
Asano, T., Buzer, L., and Bereg, S.
- Subjects
- *
MATHEMATICAL connectedness , *COMPUTATIONAL complexity , *IMAGE processing , *ALGORITHMS , *GRAPH theory , *GRAPH connectivity - Abstract
This paper presents a new algorithmic framework for some basic problems on binary images. Algorithms for binary images, such as extracting the connected component containing a query pixel and labeling connected components, play basic roles in image processing. Such algorithms usually use linear work space for efficient implementation. In this paper we propose algorithms for several basic problems on binary images which are efficient in both time and space, using space-efficient algorithms for grid graphs. More precisely, some of them run in O(n log n) time using O(1) work space, and the others run in O(n) or O(n log n) time using O(n) work space, for a binary image of n pixels stored in a read-only array. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
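The paper's contribution is doing this kind of task in sublinear work space; as a point of reference, here is the standard linear-work-space baseline it improves on: BFS flood fill extracting the 4-connected component containing a query pixel. The representation (list of 0/1 rows) and names are my own.

```python
from collections import deque

def extract_component(image, query):
    """Linear-work-space baseline: BFS flood fill returning the set of
    1-pixels 4-connected to the query pixel in a 0/1 raster image."""
    rows, cols = len(image), len(image[0])
    r0, c0 = query
    if image[r0][c0] != 1:  # query pixel is background: empty component
        return set()
    comp, queue = {(r0, c0)}, deque([(r0, c0)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and image[nr][nc] == 1 and (nr, nc) not in comp):
                comp.add((nr, nc))
                queue.append((nr, nc))
    return comp
```

The visited set `comp` is exactly the O(n) work space the paper's framework avoids by exploiting the structure of grid graphs.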
32. Algorithms for solving the inverse problem associated with [formula omitted].
- Author
-
Lebtahi, Leila, Romero, Óscar, and Thome, Néstor
- Subjects
- *
INVERSE problems , *MATRICES (Mathematics) , *ALGORITHMS , *NATURAL numbers , *ABSTRACT algebra - Abstract
In previous papers, the authors introduced and characterized a class of matrices called {K, s+1}-potent. Also, they established a method to construct these matrices. The purpose of this paper is to solve the associated inverse problem. Several algorithms are developed in order to find all involutory matrices K satisfying K A^{s+1} K = A for a given matrix A ∈ C^{n×n} and a given natural number s. The cases s = 0 and s ≥ 1 are studied separately since they produce different situations. In addition, some examples are presented showing the numerical performance of the methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
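The abstract fully specifies what a solution of the inverse problem must satisfy, even though the construction algorithms themselves are not given. A minimal numerical check of a candidate K (involutory, and K A^{s+1} K = A) can be sketched as follows; the function name and tolerance are my own.

```python
import numpy as np

def is_solution(K, A, s, tol=1e-9):
    """Verify a candidate for the inverse problem: K must be involutory
    (K @ K = I) and satisfy K @ A^(s+1) @ K = A."""
    n = A.shape[0]
    involutory = np.allclose(K @ K, np.eye(n), atol=tol)
    equation = np.allclose(K @ np.linalg.matrix_power(A, s + 1) @ K, A, atol=tol)
    return involutory and equation
```

For instance, with A = 2I the swap matrix K = [[0, 1], [1, 0]] is a solution for s = 0 (K A K = 2 K K = A) but not for s = 1 (K A^2 K = 4I ≠ A), which mirrors the paper's point that the cases s = 0 and s ≥ 1 behave differently.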
33. Full-matrix capture with phased shift migration for flaw detection in layered objects with complex geometry.
- Author
-
Lukomski, Tomasz
- Subjects
- *
ULTRASONIC imaging , *IMAGE reconstruction , *PHASED array antennas , *LAYER structure (Solids) , *ALGORITHMS - Abstract
This paper introduces a method for ultrasonic imaging with a phased array based on a wave migration algorithm. The method allows for imaging layered objects with lateral velocity variations, such as objects with complex geometry or layers that are not perpendicular to the array's axis. Full-matrix capture ensures that there is enough information to reconstruct an image even when the wave incidence angle is large. The method is implemented in the omega-k domain. The proposed algorithm is first tested in a simulation of a concave object with side-drilled holes under the concave surface. To evaluate the algorithm's performance, three experiments are presented: one with a tilted object (surface not perpendicular to the array axis) with side-drilled holes, and two with an object with a concave surface and two artificial defects under it. The results presented in the paper verify that the proposed method reconstructs images from the data gathered with the phased array. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
34. Nonlinear marine predator algorithm: A cost-effective optimizer for fair power allocation in NOMA-VLC-B5G networks.
- Author
-
Sadiq, Ali Safaa, Dehkordi, Amin Abdollahi, Mirjalili, Seyedali, and Pham, Quoc-Viet
- Subjects
- *
BROWNIAN motion , *OPTICAL communications , *ALGORITHMS , *CURIOSITY , *VISIBLE spectra , *BANDWIDTH allocation , *LOTKA-Volterra equations - Abstract
• The Nonlinear Marine Predator Algorithm (NMPA) is introduced in this paper. • NMPA mimics two foraging techniques of marine predators: Lévy and Brownian movements. • Fair power allocation in NOMA-VLC-B5G networks is solved using the NMPA algorithm. This paper is an influential attempt to identify and alleviate some of the issues with the recently proposed optimization technique called the Marine Predator Algorithm (MPA). From a visual investigation of its exploratory and exploitative behavior, it is observed that the transition of the search from global to local can be further improved. As an extremely cost-effective method, a set of nonlinear functions is used to change the search patterns of the MPA algorithm. The proposed algorithm, called the Nonlinear Marine Predator Algorithm (NMPA), is tested on a set of benchmark functions. A comprehensive comparative study shows the superiority of the proposed method over the original MPA and other recent metaheuristics. The paper also solves a real-world case study on power allocation in non-orthogonal multiple access (NOMA) and visible light communications (VLC) for Beyond 5G (B5G) networks to showcase the applicability of the NMPA algorithm. NMPA also shows its superiority in solving a wide range of benchmark functions as well as obtaining fair power allocation for multiple users in NOMA-VLC-B5G systems compared with state-of-the-art algorithms. For the source code and the application, the related files are available on GitHub and MathWorks: https://github.com/alisafaa12/nmpa and https://uk.mathworks.com/matlabcentral/fileexchange/111135-nonlinear-marine-predator-algorithm-nmpa. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Automatic evaluation of rebar spacing using LiDAR data.
- Author
-
Yuan, Xinxing, Smith, Alan, Sarlo, Rodrigo, Lippitt, Christopher D., and Moreu, Fernando
- Subjects
- *
LIDAR , *REINFORCED concrete , *ALGORITHMS , *ADHESIVE tape - Abstract
When constructing Reinforced Concrete (RC) components, rebar spacing must be inspected in the field before pouring concrete. The traditional inspection of rebar spacing: (1) depends on the experience of the individual inspector; (2) is limited to a pass/fail outcome; and (3) is in general conducted with limited time. This paper presents a recognition algorithm that automatically identifies rebar positions using LiDAR. The result is an automatic Rebar Layout Quality Index (RLQI). Researchers collected LiDAR data from a real rebar mat, compared it with the design drawings, and obtained the RLQI for both the top and bottom rebar mats. The authors also quantified the quality of the structural construction with a new flexural relative moment strength index. This methodology has the potential to reduce inspection time and increase the reliability of inspections through an objective assessment of rebar spacing, by creating an as-built record of actual rebar locations in the structure. • New algorithm for automatically measuring rebar spacing prior to concrete pour. • Results compared to a tape measure and existing LiDAR algorithms for rebar spacing. • Field implementation and accuracy validation for one real structure in the field. • Structural quality reported by a relative strength index created from LiDAR data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. A methodology to reach high power factor during multiple EVs charging.
- Author
-
Lamedica, Regina, Maccioni, Marco, Ruvio, Alessandro, Timar, Tudor Gabriel, Carere, Federico, Sammartino, Eleonora, and Ferrazza, Diego
- Subjects
- *
REACTIVE power , *SHOPPING malls , *POPULATION density , *ALGORITHMS - Abstract
• A new methodology to increase the power factor for multiple EVs charging. • A Simulink model validated by experimental measurements. • A real-time algorithm able to mitigate the power factor decay at the point of delivery. The paper proposes a new methodology to increase the power factor (PF) of a hub for EV charging. Smart management of the EV charging sessions makes it possible to impose stand-by operation of a vehicle based on the local PF value at EV charging points. Current harmonics are taken into account to ensure an effective improvement of the hub's power quality. The case study of this paper consists of a hub for EV charging with a dedicated MV/LV substation, enriched by an experimental dataset containing measurements of 13 EVs during charging in terms of active and reactive power profiles, current harmonics, and PF trends. The model is developed in the Simulink environment and the proposed algorithm is implemented in a MATLAB script. The results highlight that, in the case of multiple EV charging sessions, the algorithm is able to maintain the hub PF value above the acceptable PF threshold limit. This scenario is expected in real conditions, e.g. EV hubs near shopping centres or in cities with high population density. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Clump breakage algorithm for DEM simulation of crushable aggregates.
- Author
-
Brzeziński, Karol and Gladky, Anton
- Subjects
- *
STRAINS & stresses (Mechanics) , *ALGORITHMS , *POTENTIAL energy - Abstract
The paper presents a new clump breakage algorithm for DEM simulation of crushable aggregates. We emphasize some issues related to the modeling of rigid clump breakage and abrasion (e.g., no interactions between clump members, the theoretical mass of the clump members being higher than the mass of the clump). The algorithm presented in this paper tackles those issues. A correction to the Love-Weber stress tensor is proposed, which allows estimating the stress state of each clump member separately. We also present a splitting algorithm that conserves the mass in the system without introducing excessive potential energy from overlapping particles. The adapted strength criterion allows for capturing size effects. The algorithm was implemented in the Yade software and is freely available. • The algorithm presented allows for modeling splitting and abrasion of rigid clumps of spheres. • The mass in the system is conserved after particle breakage. • A Mohr-Coulomb-Weibull criterion is implemented to account for scale effects. • Example simulations are presented, including breakage of particles during an oedometric test and wear. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Automatic generation of Markush structures from specific compounds.
- Author
-
Kovács, Péter, Botka, Gábor, and Figyelmesi, Árpád
- Subjects
- *
CHEMINFORMATICS , *PATENTS , *ALGORITHMS , *BIOACTIVE compounds - Abstract
Markush structures play an important role in cheminformatics, especially in chemical patents. This paper presents a novel algorithm for automatically generating Markush structures from series of specific compounds. This method can effectively be used to assist patent drafting or to compose combinatorial libraries based on several molecules of interest. To the authors' knowledge, the presented algorithm is the first solution to this problem. It is available in multiple software products of ChemAxon. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
39. On graphs with the maximum edge metric dimension.
- Author
-
Zhu, Enqiang, Taranenko, Andrej, Shao, Zehui, and Xu, Jin
- Subjects
- *
GRAPH connectivity , *GEOMETRIC vertices , *ALGORITHMS , *MATHEMATICAL connectedness , *GRAPH theory - Abstract
An edge metric generator of a connected graph G is a vertex subset S for which every two distinct edges of G have distinct distances to some vertex of S, where the distance between a vertex v and an edge e is defined as the minimum of the distances between v and the two endpoints of e in G. The smallest cardinality of an edge metric generator of G is the edge metric dimension, denoted by dim_e(G). It follows that 1 ≤ dim_e(G) ≤ n − 1 for any n-vertex graph G. A graph whose edge metric dimension achieves the upper bound is called topful. In this paper, the structure of topful graphs is characterized, and several necessary and sufficient conditions for a graph to be topful are obtained. Using these results we design an O(n^3)-time algorithm which determines whether a graph of order n is topful. Moreover, we describe and address an interesting class of topful graphs whose supergraphs obtained by adding one edge are not topful. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
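The definitions in the abstract are concrete enough to check directly. The sketch below tests whether a vertex subset S is an edge metric generator, using BFS distances and d(v, e) = min over the endpoints of e; it is a naive verifier of the definition, not the paper's O(n^3) topfulness algorithm, and the names are my own.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Single-source shortest-path distances in an unweighted graph."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_edge_metric_generator(adj, S):
    """True iff every pair of distinct edges e, f has some v in S with
    d(v, e) != d(v, f), where d(v, e) = min distance from v to an endpoint."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    dist = {v: bfs_dist(adj, v) for v in S}

    def d(v, e):
        return min(dist[v][x] for x in e)

    return all(any(d(v, e) != d(v, f) for v in S)
               for e, f in combinations(edges, 2))
```

On the path 1-2-3, the endpoint {1} already separates the two edges (distances 0 and 1), while the middle vertex alone does not (distance 0 to both), so dim_e of the path is 1, well below the topful bound n − 1.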
40. The [formula omitted]-labeling problem: An algorithmic tour.
- Author
-
Fertin, Guillaume, Rusu, Irena, and Vialette, Stéphane
- Subjects
- *
GRAPH theory , *ALGORITHMS , *GRAPHIC methods , *MATHEMATICAL optimization , *MATHEMATICAL bounds - Abstract
Given a graph G = (V, E) of order n and maximum degree Δ, the NP-complete S-labeling problem consists in finding a labeling of G, i.e. a bijective mapping ϕ : V → {1, 2, …, n}, such that SL_ϕ(G) = ∑_{uv ∈ E} min{ϕ(u), ϕ(v)} is minimized. In this paper, we study the S-labeling problem, with a particular focus on algorithmic issues. We first give intrinsic properties of optimal labelings, which prove useful for our algorithmic study. We then provide lower bounds on SL_ϕ(G), together with a generic greedy algorithm, which collectively allow us to approximate the problem in several classes of graphs; in particular, we obtain constant approximation ratios for regular graphs and bounded-degree graphs. We also show that deciding whether there exists a labeling ϕ of G such that SL_ϕ(G) ≤ |E| + k is solvable in O*(2^{2k} (2k)!) time, and thus is fixed-parameter tractable in k. We finally show that the S-labeling problem is polynomial-time solvable for two classes of graphs, namely split graphs and (sets of) caterpillars. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
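The objective SL_ϕ(G) is simple enough to evaluate exactly on small graphs by trying every bijection, which is useful for sanity-checking the paper's lower bounds and greedy heuristic. This exhaustive O(n!) sketch is mine, not any of the paper's algorithms.

```python
from itertools import permutations

def min_s_labeling(vertices, edges):
    """Exact minimum of SL_phi(G) = sum over uv in E of min(phi(u), phi(v)),
    taken over all bijections phi: V -> {1, ..., n}. Exponential; small n only."""
    n = len(vertices)
    best = None
    for perm in permutations(range(1, n + 1)):
        phi = dict(zip(vertices, perm))
        val = sum(min(phi[u], phi[v]) for u, v in edges)
        best = val if best is None else min(best, val)
    return best
```

On the star K_{1,3}, labeling the center 1 makes every edge contribute 1, meeting the trivial lower bound SL_ϕ(G) ≥ |E|; on the triangle no labeling can do better than 1 + 1 + 2 = 4.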
41. Zonotope-based recursive estimation of the feasible solution set for linear static systems with additive and multiplicative uncertainties.
- Author
-
Wang, Hao, Kolmanovsky, Ilya V., and Sun, Jing
- Subjects
- *
PARAMETER estimation , *POLYTOPES , *TIME-varying systems , *ALGORITHMS , *UNCERTAINTY (Information theory) , *LINEAR programming , *LINEAR matrix inequalities - Abstract
In this paper, we develop two zonotope-based set-membership estimation algorithms for identification of time-varying parameters in linear static models, where both additive and multiplicative uncertainties are treated explicitly. The two recursive algorithms differ in how they process the data and in the computations they require. The first algorithm, referred to as Cone And Zonotope Intersection (CAZI), requires solving linear programming problems at each iteration. The second algorithm, referred to as Polyhedron And Zonotope Intersection (PAZI), involves linear programming as well as an optimization subject to linear matrix inequalities (LMIs). Both algorithms are capable of providing tight overbounds of the feasible solution set (FSS) in an application to health monitoring of marine engines. Furthermore, the PAZI algorithm applied to mini-batches of measurement data lends itself to further analysis of the relation between the estimation results at different iterations. In addition, an example of identifying time-varying parameters is also reported. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
42. Existence result for differential variational inequality with relaxing the convexity condition.
- Author
-
Wang, Xing, Qi, Ya-Wei, Tao, Chang-Qi, and Wu, Qi
- Subjects
- *
CONVEX domains , *MATHEMATICS theorems , *ALGORITHMS , *PROBLEM solving , *IMPLICIT functions - Abstract
In this paper, a class of differential variational inequalities is studied, and a new approach is introduced to relax the convexity condition. Firstly, an existence theorem of a Carathéodory weak solution for the differential variational inequalities is established. Secondly, an algorithm for solving the problem is developed and a convergence analysis for the algorithm is given. Finally, a numerical example is reported to illustrate the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
43. Estimation of biophysical parameters in a neuron model under random fluctuations.
- Author
-
Upadhyay, Ranjit Kumar, Paul, Chinmoy, Mondal, Argha, and Vishwakarma, Gajendra K.
- Subjects
- *
NOISE (Work environment) , *NEURONS , *RANDOM noise theory , *MEMBRANE potential , *ALGORITHMS - Abstract
In this paper, an attempt has been made to estimate the biophysical parameters in an improved version of the Morris–Lecar (M–L) neuron model in a noisy environment. To observe the influence of noisy stimulation on the estimation procedure, Gaussian white noise is added to the membrane voltage of the model system. Estimation of the parameters is carried out with a proposed algorithm. A denoising technique (the local projection method) is applied to reduce the influence of noisy stimuli, and the effectiveness of the method is reported. The proposed scheme performs well for an excitable neuron model and provides good agreement between the estimated parameters and the actual values. This approach can be used for parameter estimation in other nonlinear dynamical systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
44. Algorithms for bi-objective multiple-choice hardware/software partitioning.
- Author
-
Shi, Wenjun, Wu, Jigang, Lam, Siew-kei, and Srikanthan, Thambipillai
- Subjects
- *
HARDWARE , *COMPUTER software , *ELECTRIC power consumption , *ALGORITHMS , *COMPUTER programming - Abstract
This paper proposes three algorithms for multiple-choice hardware-software partitioning with the objectives of minimizing execution time and power consumption while meeting an area constraint. Firstly, a heuristic algorithm is proposed to rapidly generate an approximate solution. The second algorithm refines the approximate solution using a customized tabu search. Finally, a dynamic programming algorithm is proposed to calculate the exact solution. Simulation results show that the approximate solution is very close to the exact solution, and can be further refined by tabu search to achieve a solution with less than 1.5% error for all cases considered in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
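The abstract's exact dynamic programming step has the flavor of a multiple-choice knapsack. As a hypothetical single-objective sketch of that idea (execution time only, under the area constraint; the paper's actual DP is bi-objective and not specified in the abstract), one implementation choice per task can be selected like this:

```python
def min_time_partition(tasks, area_budget):
    """Multiple-choice knapsack DP: each task offers several (area, time)
    implementation options; pick exactly one per task to minimize total
    execution time while total area stays within area_budget.
    Returns None if no feasible selection exists."""
    INF = float("inf")
    dp = [INF] * (area_budget + 1)  # dp[a] = min time using exactly area a
    dp[0] = 0
    for options in tasks:
        new = [INF] * (area_budget + 1)
        for a in range(area_budget + 1):
            if dp[a] == INF:
                continue
            for area, time in options:  # choose one option for this task
                if a + area <= area_budget and dp[a] + time < new[a + area]:
                    new[a + area] = dp[a] + time
        dp = new
    best = min(dp)
    return None if best == INF else best
```

The table is O(tasks × budget × options) to fill, which is pseudo-polynomial in the area budget; a second objective such as power would add another table dimension.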
45. Finding popular branchings in vertex-weighted directed graphs.
- Author
-
Natsui, Kei and Takazawa, Kenjiro
- Subjects
- *
DIRECTED graphs , *COMBINATORIAL optimization , *ALGORITHMS - Abstract
Popular matchings have been intensively studied recently as a relaxed concept of stable matchings. By applying the concept of popular matchings to branchings in directed graphs, Kavitha et al. introduced popular branchings. Let G = (V_G, E_G) be a directed graph, where each vertex has preferences over its incoming arcs, and let B and B′ be branchings in G. A vertex v ∈ V_G prefers B to B′ if v prefers its incoming arc in B to that in B′, where having an arbitrary incoming arc is preferred to having no incoming arc. We say that B is more popular than B′ if the number of vertices preferring B to B′ is greater than the number of vertices preferring B′ to B. A branching B is called a popular branching if there is no branching more popular than B. Kavitha et al. proposed an algorithm for finding a popular branching when the preferences of each vertex are given by a strict partial order. The correctness of this algorithm is proved by utilizing classical theorems on the duality of weighted arborescences. In this paper, we generalize popular branchings to weighted popular branchings in vertex-weighted directed graphs, in the same manner as the weighted popular matchings of Mestre. We give an algorithm for finding a weighted popular branching for the case where the preferences of each vertex are given by a total preorder and the weights satisfy certain conditions. Our algorithm is based on that of Kavitha et al., while it includes elaborated procedures resulting from the vertex weights. Its correctness also builds upon the duality of weighted arborescences. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. A three-term conjugate gradient algorithm for large-scale unconstrained optimization problems.
- Author
-
Deng, Songhai and Wan, Zhong
- Subjects
- *
MATHEMATICAL optimization , *PROBLEM solving , *APPROXIMATION theory , *ALGORITHMS , *STOCHASTIC convergence , *MATHEMATICAL analysis - Abstract
In this paper, a three-term conjugate gradient algorithm is developed for solving large-scale unconstrained optimization problems. The search direction at each iteration is obtained by rectifying the steepest descent direction with the difference between the current and previous iterates and the difference between their gradients. Such a direction is proved to satisfy the approximate secant condition as well as the conjugacy condition. Acceleration and restart strategies are incorporated into the design of the algorithm to improve its numerical performance. Global convergence of the proposed algorithm is established under two mild assumptions. By implementing the algorithm on 75 benchmark test problems from the literature, the obtained results indicate that the algorithm developed in this paper outperforms existing state-of-the-art algorithms of the same type. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
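The general shape of a three-term conjugate gradient method, as described in the abstract of entry 46, can be sketched as below. The coefficient formulas here (a Hestenes-Stiefel-type beta plus a correction term) and the plain Armijo backtracking are illustrative stand-ins, not the authors' exact rectified direction or their acceleration/restart strategies:

```python
import numpy as np

def three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic three-term CG sketch:
        d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1},
    where y_{k-1} = g_k - g_{k-1} is the gradient difference and
    s_{k-1} = x_k - x_{k-1} the iterate difference, combined with a
    backtracking Armijo line search."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                  # safeguard: restart with steepest descent
            d = -g
        t = 1.0                         # Armijo backtracking
        while t > 1e-12 and f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = d @ y
        if abs(denom) > 1e-12:
            beta = (g_new @ y) / denom  # Hestenes-Stiefel-type term
            theta = (g_new @ s) / denom # third (correction) term
            d = -g_new + beta * d - theta * y
        else:
            d = -g_new                  # fall back to steepest descent
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic the safeguarded iteration converges to the unique minimizer; the third term is what distinguishes the family from classical two-term CG.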
47. Outcome of a three-phase treatment algorithm for inpatients with melancholic depression.
- Author
-
Vermeiden, Marlijn, Kamperman, Astrid M., Hoogendijk, Witte J.G., van den Broek, Walter W., and Birkenhäger, Tom K.
- Subjects
- *
MENTAL depression , *THERAPEUTICS , *ANTIDEPRESSANTS , *MELANCHOLY , *HEALTH outcome assessment , *ALGORITHMS - Abstract
Background In patients suffering from major depressive disorder, non-response to initial antidepressant monotherapy is relatively common. The use of treatment algorithms may optimize and enhance treatment outcome. Methods A single-center three-phase treatment algorithm was evaluated for inpatients with major depressive disorder: phase I (n = 85), 7 weeks of optimal antidepressant monotherapy (imipramine or venlafaxine); phase II (n = 39), 4 weeks of subsequent plasma level-targeted dose lithium addition in case of insufficient improvement on antidepressant monotherapy; and phase III (n = 8), subsequent electroconvulsive therapy in case of insufficient improvement on antidepressant-lithium treatment. Overall feasibility of the three-phase algorithm was determined by the number of dropouts, and overall efficacy was evaluated using weekly scores on the 17-item Hamilton Rating Scale for Depression (HAM-D) during the treatment phases of the algorithm. This paper is based on an RCT comparing the two antidepressants in phase I and adding lithium in phase II. Results Of the 85 patients analyzed, 24 (28%) dropped out during the three-phase treatment algorithm. Analyzed on a modified intention-to-treat basis, 39 (46%) patients achieved complete remission (HAM-D score ≤ 7) by the end of the algorithm, and 60 of the 85 patients (71%) achieved response (HAM-D score reduction ≥ 50%). Conclusion The favorable outcome of the three-phase treatment algorithm emphasizes the importance of pursuing stepwise antidepressant treatment in patients who do not respond to the first antidepressant. Clinical trial registration This study protocol is registered at http://www.controlled-trials.com , "Pharmacological Treatment of Depression" (identifier: ISRCTN73221288). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
48. A polymorphic uncertain equilibrium model and its deterministic equivalent formulation for decentralized supply chain management.
- Author
-
Wan, Zhong, Wu, Hao, and Dai, Lin
- Subjects
- *
STIFFNESS (Engineering) , *PROGRAMMING languages , *STRUCTURAL dynamics , *STRUCTURAL stability , *ALGORITHMS - Abstract
Supply chain management is a multidisciplinary engineering problem. In this paper, a polymorphic uncertain equilibrium model (PUEM) is constructed to capture the joint maximization of the profits of the manufacturers and retailers in a supply chain network. To ensure applicability of the model in practice, consumer demand is regarded as a continuous random variable, while the retailer's holding cost and the transaction cost between manufacturer and retailer are described by fuzzy sets. For the PUEM, a deterministic equivalent formulation (DEF) is first derived by a compromise programming approach, so that existing powerful algorithms from standard smooth optimization can be employed to find an approximate equilibrium point of the uncertain problem. The DEF turns out to be a nonlinear complementarity problem (NCP), a special variational inequality. A modified partially Jacobian smoothing algorithm is therefore developed to solve the corresponding NCP, where the gradient information of the model is used to efficiently generate search directions. Sensitivity analysis offers a number of useful managerial implications based on practical applications of the model. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
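The smoothing idea behind entry 48 can be illustrated with a generic smoothed Fischer-Burmeister Newton scheme for a small NCP. This is a textbook-style sketch under assumed parameters, not the authors' modified partially Jacobian smoothing algorithm:

```python
import numpy as np

def smoothed_ncp_solve(F, x0, mu=1e-1, tol=1e-8, max_iter=100):
    """Solve the NCP  x >= 0, F(x) >= 0, x . F(x) = 0  via the smoothed
    Fischer-Burmeister reformulation
        phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2) = 0  (componentwise),
    taking one Newton step per iteration with a finite-difference Jacobian
    while the smoothing parameter mu is driven to zero."""
    x = np.asarray(x0, float)
    n = x.size

    def phi(x, mu):
        a, b = x, F(x)
        return a + b - np.sqrt(a**2 + b**2 + 2 * mu**2)

    for _ in range(max_iter):
        r = phi(x, mu)
        if np.linalg.norm(r) < tol and mu < tol:
            break
        # finite-difference Jacobian of the smoothed residual
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (phi(x + e, mu) - r) / h
        x = x - np.linalg.solve(J, r)
        mu *= 0.5       # drive the smoothing parameter toward zero
    return x
```

For the linear complementarity instance F(x) = Mx + q with M = [[2, 1], [1, 2]] and q = [-2, -2], the scheme recovers the known solution x = (2/3, 2/3).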
49. An extended nonmonotone line search technique for large-scale unconstrained optimization.
- Author
-
Huang, Shuai, Wan, Zhong, and Zhang, Jing
- Subjects
- *
ALGORITHMS , *DIFFERENTIABLE functions , *APPROXIMATE solutions (Logic) , *MATHEMATICAL optimization , *INDUCTIVE teaching - Abstract
In this paper, an extended nonmonotone line search is proposed to improve the efficiency of existing line searches. This line search is first proved to be an extension of the classical line search rules. Under mild assumptions, global convergence and R-linear convergence are established for the new rule, and numerical experiments show that it integrates the advantages of the existing methods in searching for a suitable step-size. Combined with the spectral step-size, a class of spectral gradient algorithms is developed and employed to solve a large number of benchmark test problems from CUTEst. Numerical results show that the new line search is promising for large-scale optimization problems and outperforms similar line searches when combined with a spectral gradient method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
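As a baseline for the family of rules that entry 49 extends, here is the classical nonmonotone (Grippo-Lampariello-Lucidi) Armijo rule combined with a Barzilai-Borwein spectral step. All parameter choices (memory M, sufficient-decrease constant c, shrink factor) are illustrative assumptions, not the paper's extended rule:

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, x, d, g, history, c=1e-4, shrink=0.5, t0=1.0):
    """GLL nonmonotone backtracking: accept step t once
    f(x + t*d) <= max(recent f-values) + c * t * g.d,
    so occasional increases of f are tolerated."""
    f_ref = max(history)            # reference value over the last M iterates
    t = t0
    while t > 1e-12 and f(x + t * d) > f_ref + c * t * (g @ d):
        t *= shrink
    return t

def spectral_gradient(f, grad, x0, M=10, tol=1e-6, max_iter=1000):
    """Spectral (Barzilai-Borwein) gradient method using the
    nonmonotone rule above to globalize the spectral step."""
    x = np.asarray(x0, float)
    g = grad(x)
    history = deque([f(x)], maxlen=M)
    alpha = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -alpha * g              # spectral-scaled steepest descent direction
        t = nonmonotone_armijo(f, x, d, g, history)
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0   # BB1 spectral step-size
        x, g = x_new, g_new
        history.append(f(x))
    return x
```

The nonmonotone reference value lets the BB step, which is often much longer than a monotone Armijo step, be accepted unchanged on most iterations.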
50. Validation of a new snoring detection device based on a hysteresis extraction algorithm.
- Author
-
Hara, Hirotaka, Tsutsumi, Masakazu, Tarumoto, Syunsuke, Shiga, Toshikazu, and Yamashita, Hiroshi
- Subjects
- *
SNORING , *ALGORITHMS , *MEDICAL equipment , *SLEEP apnea syndromes , *MOUTH breathing , *PREVENTION - Abstract
Objective This paper aims to introduce and validate our newly developed snoring detection device, which automatically identifies the incidence and amplitude of snores using the hysteresis extraction method. Methods Thirty patients (16 males and 14 females) with a history of snoring were included in this study. Each patient underwent conventional polysomnography (PSG). Natural overnight snoring was recorded from each subject using our original snore detection device and an integrated circuit (IC) recorder while the patient slept during PSG. A new algorithm based on hysteresis extraction was used to detect snores and quantify the level of each event at 30-s intervals (one epoch). Concordance between the automated and subjective assessments was evaluated by comparing a total of 27,295 epochs, and sensitivity, specificity, and accuracy were calculated. Results Study population analysis revealed a mean ratio of snoring time to total sleep time of 14.1 ± 7.9%. Validation of the automatic snore detection revealed the following: sensitivity, 71.2%; specificity, 93.1%; positive predictive value, 77.7%; negative predictive value, 94.6%; and accuracy, 90.7%. Conclusions This study revealed the efficacy of our newly developed snoring detection device and indicated that it may serve as a useful method for further snoring analysis via objective medical assessment. However, the sample size of 30 subjects was relatively small; therefore, further research is needed to evaluate this device. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
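The hysteresis principle underlying entry 50 (an event starts when the amplitude rises above a high threshold and ends only when it falls below a lower one, which suppresses re-triggering on noise between the two) can be sketched as follows. The thresholds are assumed values for illustration, not the device's calibrated parameters:

```python
def hysteresis_events(signal, high, low):
    """Hysteresis extraction sketch: return (start, end) index pairs of
    events in an amplitude sequence.  An event begins at the first sample
    >= high and ends at the first subsequent sample < low; samples between
    the two thresholds neither start nor end an event."""
    events, start, active = [], None, False
    for i, v in enumerate(signal):
        if not active and v >= high:
            active, start = True, i
        elif active and v < low:
            events.append((start, i))
            active = False
    if active:                      # event still open at end of recording
        events.append((start, len(signal)))
    return events
```

Note how the trailing samples with amplitude between the thresholds do not spawn a second event, which is exactly the noise immunity a single-threshold detector lacks.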