325 results for "Kaya, Kamer"
Search Results
102. Effective heuristics for matchings in hypergraphs
- Author
-
Dufossé, Fanny (Data Aware Large Scale Computing (DATAMOVE), Inria Grenoble - Rhône-Alpes, Laboratoire d'Informatique de Grenoble (LIG), Grenoble INP, CNRS, Université Grenoble Alpes); Kaya, Kamer (Faculty of Engineering and Natural Sciences, Sabanci University [Istanbul]); Panagiotas, Ioannis; and Uçar, Bora (Optimisation des ressources : modèles, algorithmes et ordonnancement (ROMA), Laboratoire de l'Informatique du Parallélisme (LIP), Inria, ENS Lyon, UCBL, CNRS, Université de Lyon)
- Subjects
Tensor scaling, Karp-Sipser heuristic, Matching in hypergraphs, d-dimensional matching, [INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS], [INFO]Computer Science [cs], [INFO.INFO-DM]Computer Science [cs]/Discrete Mathematics [cs.DM]
- Abstract
The problem of finding a maximum cardinality matching in a d-partite, d-uniform hypergraph is an important problem in combinatorial optimization and has been theoretically analyzed by several researchers. In this work, we first devise heuristics for this problem by generalizing the existing cheap graph matching heuristics. Then, we propose a novel heuristic based on tensor scaling to extend the matching via judicious hyperedge selections. Experiments on random, synthetic, and real-life hypergraphs show that this new heuristic is highly practical and superior to the others at finding a matching with large cardinality.
- Published
- 2018
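The abstract above builds on cheap graph-matching heuristics such as Karp-Sipser. As an illustrative sketch only (the paper's hypergraph generalization and tensor-scaling heuristic are not reproduced here), the graph version of Karp-Sipser can be written as:

```python
import random
from collections import defaultdict

def karp_sipser(edges):
    """Karp-Sipser-style greedy matching heuristic on an undirected graph.

    Degree-1 vertices are matched first (such matches never hurt
    optimality); otherwise a random edge is picked. Returns matched edges.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    matching = set()
    alive = {v for v in adj if adj[v]}
    while alive:
        # prefer a degree-1 vertex if one exists
        u = next((v for v in alive if len(adj[v]) == 1), None)
        if u is None:
            u = random.choice(sorted(alive))  # no degree-1 vertex: random pick
        v = next(iter(adj[u]))
        matching.add((min(u, v), max(u, v)))
        for w in (u, v):                      # remove both matched endpoints
            for x in adj[w]:
                adj[x].discard(w)
            adj[w] = set()
        alive = {v for v in adj if adj[v]}
    return matching
```

On a path 1-2-3-4, the heuristic always finds the maximum matching {(1,2), (3,4)} because a degree-1 vertex is available at every step.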
103. A CRT-based verifiable secret sharing scheme secure against unbounded adversaries
- Author
-
Selçuk, Ali Aydın, Kaya, Kamer, Ersoy, Oğuzhan, Pedersen, Thomas Brochmann, and Anarim, Emin
- Abstract
For commitments on secrets, statistical hiding is a must when we are dealing with a long-term secret or when the secret domain is small enough for a brute-force attack by a powerful adversary. Unfortunately, all the Chinese Remainder Theorem-based verifiable secret sharing schemes in the literature are either insecure or suffer from the vulnerability of computationally hiding commitments. To the best of our knowledge, there exist five such studies, two of which were already proven to be insecure. In this work, we first show that two of the remaining schemes are also insecure; that is, the schemes reveal information on the secret even when the adversary is passive. In addition, the remaining one is only secure against a computationally bounded adversary, which can be a problem for secret sharing schemes requiring long-term secret obscurity or using a small secret domain. We propose a modification for the latter scheme and prove that the modified scheme is a secure verifiable secret sharing scheme against an unbounded adversary. Lastly, as an application, we show how to use the new scheme for joint random secret sharing and analyze the practicality and efficiency of the proposed schemes. Copyright (C) 2016 John Wiley & Sons, Ltd.
- Published
- 2019
104. Sharing DSS by the Chinese Remainder Theorem
- Author
-
Kaya, Kamer and Selçuk, Ali Aydın
- Abstract
In this paper, we propose a new threshold scheme for the Digital Signature Standard (DSS) using Asmuth-Bloom secret sharing based on the Chinese Remainder Theorem (CRT). To achieve the desired result, we first show how to realize certain other threshold primitives using Asmuth-Bloom secret sharing, such as joint random secret sharing, joint exponential random secret sharing, and joint exponential inverse random secret sharing. We prove the security of our scheme against a static adversary. To the best of our knowledge, this is the first provably secure threshold DSS scheme based on CRT. (C) 2013 Elsevier B.V. All rights reserved.
- Published
- 2019
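The Asmuth-Bloom scheme underlying the paper above shares a secret via the Chinese Remainder Theorem. A minimal toy sketch with tiny illustrative moduli (not the paper's threshold-DSS construction, and far too small to be secure) might look like:

```python
from math import prod
import random

def crt(remainders, moduli):
    """Chinese Remainder Theorem: the unique x modulo prod(moduli)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return x % M

def ab_share(secret, t, m0, moduli):
    """Asmuth-Bloom (t, n) sharing. Assumes pairwise coprime, increasing
    moduli satisfying the Asmuth-Bloom condition, and secret < m0."""
    bound = prod(moduli[:t])  # y must stay below the t smallest moduli's product
    y = secret + m0 * random.randrange(bound // m0)
    return [y % m for m in moduli]

def ab_reconstruct(shares, idx, m0, moduli):
    """Recover the secret from any t shares, selected by index list idx."""
    y = crt([shares[i] for i in idx], [moduli[i] for i in idx])
    return y % m0
```

For example, with m0 = 3, moduli [11, 13, 17, 19], and t = 2, any two shares recover the secret via CRT, while one share leaves it hidden modulo m0.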
105. A CRT-based verifiable secret sharing scheme secure against unbounded adversaries
- Author
-
Ersoy, Oğuzhan, Pedersen, Thomas Brochmann, Anarim, Emin, Kaya, Kamer, and Selçuk, Ali Aydın
- Abstract
For commitments on secrets, statistical hiding is a must when we are dealing with a long-term secret or when the secret domain is small enough for a brute-force attack by a powerful adversary. Unfortunately, all the Chinese Remainder Theorem-based verifiable secret sharing schemes in the literature are either insecure or suffer from the vulnerability of computationally hiding commitments. To the best of our knowledge, there exist five such studies, two of which were already proven to be insecure. In this work, we first show that two of the remaining schemes are also insecure; that is, the schemes reveal information on the secret even when the adversary is passive. In addition, the remaining one is only secure against a computationally bounded adversary, which can be a problem for secret sharing schemes requiring long-term secret obscurity or using a small secret domain. We propose a modification for the latter scheme and prove that the modified scheme is a secure verifiable secret sharing scheme against an unbounded adversary. Lastly, as an application, we show how to use the new scheme for joint random secret sharing and analyze the practicality and efficiency of the proposed schemes. Copyright (C) 2016 John Wiley & Sons, Ltd.
- Published
- 2019
106. Scaling matrices and counting the perfect matchings in graphs
- Author
-
Dufossé, Fanny (Data Aware Large Scale Computing (DATAMOVE), Inria Grenoble - Rhône-Alpes, Laboratoire d'Informatique de Grenoble (LIG), Grenoble INP, CNRS, Université Grenoble Alpes); Kaya, Kamer (Faculty of Engineering and Natural Sciences, Sabanci University [Istanbul]); Panagiotas, Ioannis; and Uçar, Bora (Optimisation des ressources : modèles, algorithmes et ordonnancement (ROMA), Laboratoire de l'Informatique du Parallélisme (LIP), Inria, ENS Lyon, UCBL, CNRS, Université de Lyon)
- Subjects
Permanent approximation, Sinkhorn-Knopp scaling, doubly stochastic matrices, permanent, randomized algorithms, ACM: G.: Mathematics of Computing/G.2: DISCRETE MATHEMATICS/G.2.2: Graph Theory, [INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS]
- Abstract
We investigate efficient randomized methods for approximating the number of perfect matchings in bipartite graphs and general graphs. Our approach is based on assigning probabilities to edges.
- Published
- 2018
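The record above mentions Sinkhorn-Knopp scaling, which alternately normalizes rows and columns until the matrix is (nearly) doubly stochastic; the scaled entries can then be read as edge probabilities. A minimal sketch, not the paper's estimator itself:

```python
import numpy as np

def sinkhorn_knopp(A, iters=1000, tol=1e-9):
    """Scale a nonnegative matrix with total support toward doubly
    stochastic form by alternately normalizing rows and columns."""
    S = np.array(A, dtype=float)
    for _ in range(iters):
        S /= S.sum(axis=1, keepdims=True)   # make row sums 1
        S /= S.sum(axis=0, keepdims=True)   # make column sums 1
        if np.allclose(S.sum(axis=1), 1.0, atol=tol):
            break                            # rows stayed normalized: converged
    return S
```

After convergence, entry S[i, j] approximates the probability that edge (i, j) appears in a perfect matching sampled proportionally to the scaled weights.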
107. Normalisation de matrice et dénombrement des couplages parfaits dans les graphes [Matrix scaling and counting the perfect matchings in graphs]
- Author
-
Dufossé, Fanny (Data Aware Large Scale Computing (DATAMOVE), Inria Grenoble - Rhône-Alpes, Laboratoire d'Informatique de Grenoble (LIG), Grenoble INP, CNRS, Université Grenoble Alpes); Kaya, Kamer (Faculty of Engineering and Natural Sciences, Sabanci University [Istanbul]); Panagiotas, Ioannis; and Uçar, Bora (Optimisation des ressources : modèles, algorithmes et ordonnancement (ROMA), Laboratoire de l'Informatique du Parallélisme (LIP), Inria, ENS Lyon, UCBL, CNRS, Université de Lyon)
- Subjects
Permanent approximation, Sinkhorn-Knopp scaling, doubly stochastic matrices, permanent, randomized algorithms, ACM: G.: Mathematics of Computing/G.2: DISCRETE MATHEMATICS/G.2.2: Graph Theory, [INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS]
- Abstract
We investigate efficient randomized methods for approximating the number of perfect matchings in bipartite graphs and general graphs. Our approach is based on assigning probabilities to edges.
- Published
- 2018
108. Acyclic partitioning of large directed acyclic graphs
- Author
-
Herrmann, Julien; Özkaya, M. Yusuf; and Çatalyürek, Ümit (School of Computational Science and Engineering, Georgia Institute of Technology [Atlanta]); Uçar, Bora (Optimisation des ressources : modèles, algorithmes et ordonnancement (ROMA), Laboratoire de l'Informatique du Parallélisme (LIP), Inria, ENS Lyon, UCBL, CNRS, Université de Lyon); Kaya, Kamer (Faculty of Engineering and Natural Sciences, Sabanci University [Istanbul])
- Subjects
acyclic partitioning, multilevel partitioning, directed graph, [INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS], [INFO.INFO-DM]Computer Science [cs]/Discrete Mathematics [cs.DM]
- Abstract
We investigate the problem of partitioning the vertices of a directed acyclic graph into a given number of parts. The objective function is to minimize the number or the total weight of the edges having end points in different parts, which is also known as the edge cut. The standard load balancing constraint of having an equitable partition of the vertices among the parts should be met. Furthermore, the partition is required to be acyclic, i.e., the inter-part edges between the vertices from different parts should preserve an acyclic dependency structure among the parts. In this work, we adopt the multilevel approach with coarsening, initial partitioning, and refinement phases for acyclic partitioning of directed acyclic graphs. We focus on two-way partitioning (sometimes called bisection), as this scheme can be used in a recursive way for multi-way partitioning. To ensure the acyclicity of the partition at all times, we propose novel and efficient coarsening and refinement heuristics. The quality of the computed acyclic partitions is assessed by computing the edge cut. We also propose effective ways to use the standard undirected graph partitioning methods in our multilevel scheme. We perform a large set of experiments on a dataset consisting of (i) graphs coming from an application and (ii) others corresponding to matrices from a public collection. We report improvements, on average, of around 59% compared to the current state of the art.
- Published
- 2018
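The acyclicity constraint described in the abstract of item 108 means the quotient graph over the parts must itself be a DAG. A small illustrative check (a hypothetical helper, not the paper's multilevel partitioner) can verify that property with Kahn's algorithm:

```python
from collections import defaultdict, deque

def partition_is_acyclic(edges, part):
    """Check whether a vertex partition of a DAG is acyclic: the quotient
    graph, whose nodes are parts with an edge p -> q whenever some DAG edge
    goes from part p to part q, must contain no cycle.
    `edges` are directed (u, v) pairs; `part` maps vertex -> part id."""
    q_adj = defaultdict(set)
    for u, v in edges:
        if part[u] != part[v]:
            q_adj[part[u]].add(part[v])
    # Kahn's algorithm: acyclic iff every part can be removed in
    # topological order
    parts = set(part.values())
    indeg = {p: 0 for p in parts}
    for p in q_adj:
        for q in q_adj[p]:
            indeg[q] += 1
    queue = deque(p for p in parts if indeg[p] == 0)
    seen = 0
    while queue:
        p = queue.popleft()
        seen += 1
        for q in q_adj[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                queue.append(q)
    return seen == len(parts)
```

For the path a -> b -> c, putting a and c in one part and b in another creates a two-part cycle and is rejected, while the split {a, b} | {c} is acyclic.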
109. Using Synchronizing Heuristics to Construct Homing Sequences
- Author
-
Çirişci, Berk, Emek, M., Sorguç, Ege, Kaya, Kamer, and Yenigun, Husnu
- Published
- 2019
110. Multilevel Algorithms for Acyclic Partitioning of Directed Acyclic Graphs
- Author
-
Herrmann, Julien, Özkaya, M. Yusuf, Uçar, Bora, Kaya, Kamer, and Çatalyürek, Ümit V.
- Published
- 2019
111. Reordering matrices for optimal sparse matrix bipartitioning
- Author
-
Mumcuyan, Aras, Kaya, Kamer, and Yenigün, Hüsnü
- Subjects
QA075 Electronic computers. Computer science
- Abstract
Sparse matrix-vector multiplication (SpMV) is one of the most widely used and extensively studied kernels in today's scientific computing and high-performance computing domains. The efficiency and scalability of this kernel have been extensively investigated on single-core, multi-core, many-core processors and accelerators, and on distributed memory. In general, a good mapping of an application's tasks to the processing units in a distributed environment is important, since communication among these tasks is the main bottleneck for scalability. A fundamental approach to this problem is modeling the application via a graph/hypergraph and partitioning it. For SpMV, several graph/hypergraph models have been proposed. These approaches treat the problem as a balanced partitioning problem where the vertices (tasks) are partitioned (assigned) to the parts (processors) in a way that the total vertex weight (processor load) is balanced and the total communication incurred among the processors is minimized. The partitioning problem is NP-hard, and all the existing studies and tools use heuristics to solve it. For graphs, the literature on optimal partitioning contains a number of notable studies; for hypergraphs, however, very little work has been done, even though, unlike graphs, hypergraphs can exactly model the total communication for SpMV. Recently, Pelt and Bisseling proposed a novel, purely combinatorial branch-and-bound-based approach for the sparse matrix bipartitioning problem, which can tackle relatively larger hypergraphs that were impossible to optimally partition into two by previous methods. This work can be considered an extension of their approach with two ideas: we propose to use 1) matrix ordering techniques to exploit more information in the earlier branches of the tree, and 2) a machine learning approach to choose the best ordering based on matrix features. As our experiments on various matrices show, these enhancements make the optimal bipartitioning process much faster.
- Published
- 2017
112. Multi-exponentiations on multi-cores
- Author
-
Topçuoğlu, Cem, Kaya, Kamer, and Savaş, Erkay
- Subjects
QA075 Electronic computers. Computer science
- Abstract
Modular exponentiation lies at the core of many cryptographic schemes, and its efficient implementation is a must for reasonable practical performance. For various applications, multiple exponentiations with different bases and exponents need to be performed and multiplied. Although this multi-exponentiation operation can be implemented by individually exponentiating the bases to their corresponding exponents, as discussed in the literature, a significant performance boost can be obtained when the operation is considered as a whole. However, while performing separate exponentiations is pleasingly parallelizable, the holistic approach requires a careful implementation on a multi-core processor. In this work, we propose a parallel algorithm and implementation based on an existing multi-exponentiation algorithm with precomputation. The experimental results show that the proposed implementation is significantly faster than the existing parallel multi-exponentiation schemes in the literature.
- Published
- 2017
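Treating the multi-exponentiation of item 112 "as a whole" is commonly done with a shared square-and-multiply pass over precomputed subset products (Shamir's trick). A sequential sketch under that assumption (not the paper's parallel algorithm):

```python
def multi_exp(bases, exps, m):
    """Compute prod(g_i ** e_i) mod m with one shared square-and-multiply
    pass. All 2^k subset products of the bases are precomputed once, then
    a single squaring per exponent bit serves every base simultaneously."""
    k = len(bases)
    # table[s] = product (mod m) of the bases whose bit is set in s
    table = [1] * (1 << k)
    for s in range(1, 1 << k):
        low = s & -s                          # lowest set bit of s
        table[s] = table[s ^ low] * bases[low.bit_length() - 1] % m
    result = 1
    for bit in range(max(e.bit_length() for e in exps) - 1, -1, -1):
        result = result * result % m          # one squaring for all bases
        s = sum(((e >> bit) & 1) << i for i, e in enumerate(exps))
        result = result * table[s] % m        # multiply in the matching subset
    return result
```

For k bases with b-bit exponents this uses roughly b squarings plus b table multiplications, instead of k·b squarings for separate exponentiations.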
113. HAMSI: A parallel incremental optimization algorithm using quadratic approximations for solving partially separable problems
- Author
-
Kaya, Kamer, Öztoprak, Figen, Birbil, Ş. İlker, Cemgil, A. Taylan, Şimşekli, Umut, Kuru, Nurdan, Koptagel, Hazal, and Öztürk, M. Kaan
- Subjects
FOS: Computer and information sciences, Computer Science - Learning, QA075 Electronic computers. Computer science, Statistics - Machine Learning, Machine Learning (stat.ML), Machine Learning (cs.LG)
- Abstract
We propose HAMSI (Hessian Approximated Multiple Subsets Iteration), a provably convergent, second-order incremental algorithm for solving large-scale partially separable optimization problems. The algorithm is based on a local quadratic approximation and hence allows incorporating curvature information to speed up the convergence. HAMSI is inherently parallel and scales nicely with the number of processors. Combined with techniques for effectively utilizing modern parallel computer architectures, we illustrate that the proposed method converges more rapidly than a parallel stochastic gradient descent when both methods are used to solve large-scale matrix factorization problems. This performance gain comes only at the expense of using memory that scales linearly with the total size of the optimization variables. We conclude that HAMSI may be considered a viable alternative in many large-scale problems where first-order methods based on variants of stochastic gradient descent are applicable. The software is available at https://github.com/spartensor/hamsi-mf
- Published
- 2017
114. Parallelized preconditioned model building algorithm for matrix factorization
- Author
-
Kaya, Kamer, Birbil, İlker, Öztürk, Mehmet Kaan, and Gohari, Amir
- Subjects
QA075 Electronic computers. Computer science
- Abstract
Matrix factorization is a common task underlying several machine learning applications such as recommender systems, topic modeling, or compressed sensing. Given a large and possibly sparse matrix A, we seek two smaller matrices W and H such that their product is as close to A as possible. The objective is minimizing the sum of squared errors in the approximation. Typically such problems involve hundreds of thousands of unknowns, so an optimizer must be exceptionally efficient. In this study, a new algorithm, Preconditioned Model Building (PMB), is adapted to factorize matrices composed of movie ratings in the MovieLens data sets with 1, 10, and 20 million entries. We present experiments that compare the sequential MATLAB implementation of the PMB algorithm with other algorithms in the minFunc package. We also employ a lock-free sparse matrix factorization algorithm and provide a scalable shared-memory parallel implementation. We show that (a) the optimization performance of the PMB algorithm is comparable to the best algorithms in common use, and (b) the computational performance can be significantly increased with parallelization.
- Published
- 2017
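For the factorization objective of item 114 (minimize the sum of squared errors of A ≈ W·H), a plain stochastic-gradient baseline, not the PMB algorithm itself, can be sketched as:

```python
import numpy as np

def factorize(A, rank, steps=20000, lr=0.01, seed=0):
    """Factor A ≈ W @ H by stochastic gradient descent on observed
    (non-NaN) entries, minimizing the sum of squared errors."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.normal(scale=0.1, size=(n, rank))
    H = rng.normal(scale=0.1, size=(rank, m))
    obs = [(i, j) for i in range(n) for j in range(m)
           if not np.isnan(A[i, j])]
    for _ in range(steps):
        i, j = obs[rng.integers(len(obs))]    # sample one observed entry
        err = A[i, j] - W[i] @ H[:, j]        # residual on that entry
        W[i] += lr * err * H[:, j]            # gradient step on row of W
        H[:, j] += lr * err * W[i]            # gradient step on column of H
    return W, H
```

On a small rank-1 matrix this recovers the factors up to scaling; lock-free parallel variants apply such per-entry updates from many threads at once.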
115. A CRT-based verifiable secret sharing scheme secure against unbounded adversaries
- Author
-
Ersoy, Oğuzhan, Pedersen, Thomas Brochmann, Kaya, Kamer, Selçuk, Ali Aydın, and Anarim, Emin (TOBB ETU, Faculty of Engineering, Department of Computer Engineering)
- Subjects
Theory of Computation (Miscellaneous), QA075 Electronic computers. Computer science, Joint Random Secret Sharing, Verifiable Secret Sharing, Statistically Hiding Commitments, Asmuth-Bloom, Chinese Remainder Theorem
- Abstract
For commitments on secrets, statistical hiding is a must when we are dealing with a long-term secret or when the secret domain is small enough for a brute-force attack by a powerful adversary. Unfortunately, all the Chinese Remainder Theorem-based verifiable secret sharing schemes in the literature are either insecure or suffer from the vulnerability of computationally hiding commitments. To the best of our knowledge, there exist five such studies, two of which were already proven to be insecure. In this work, we first show that two of the remaining schemes are also insecure; that is, the schemes reveal information on the secret even when the adversary is passive. In addition, the remaining one is only secure against a computationally bounded adversary, which can be a problem for secret sharing schemes requiring long-term secret obscurity or using a small secret domain. We propose a modification for the latter scheme and prove that the modified scheme is a secure verifiable secret sharing scheme against an unbounded adversary. Lastly, as an application, we show how to use the new scheme for joint random secret sharing and analyze the practicality and efficiency of the proposed schemes. Copyright (C) 2016 John Wiley & Sons, Ltd.
- Published
- 2016
116. Optimally bipartitioning sparse matrices with reordering and parallelization
- Author
-
Mumcuyan, Aras, Usta, Baran, Kaya, Kamer, and Yenigün, Hüsnü
- Published
- 2018
117. A generic Private Information Retrieval scheme with parallel multi-exponentiations on multicore processors
- Author
-
Topcuoğlu, Cem, Kaya, Kamer, and Savaş, Erkay
- Published
- 2018
118. Enumerator: An Efficient Approach for Enumerating all Valid t-tuples
- Author
-
Mercan, Hanefi, Kaya, Kamer, and Yilmaz, Cemal
- Published
- 2018
119. A resource provisioning framework for bioinformatics applications in multi-cloud environments
- Author
-
Senturk, Izzet F., Balakrishnan, P., Abu-Doleh, Anas, Kaya, Kamer, Malluhi, Qutaibah, and Çatalyürek, Ümit V.
- Published
- 2018
120. Using Structure of Automata for Faster Synchronizing Heuristics
- Author
-
Cirisci, Berk, Kahraman, Muhammed Kerem, Yildirimoglu, Cagri Uluc, Kaya, Kamer, and Yenigun, Husnu
- Published
- 2018
121. Greed Is Good: Parallel Algorithms for Bipartite-Graph Partial Coloring on Multicore Architectures
- Author
-
Tas, Mustafa Kemal, Kaya, Kamer, and Saule, Erik
- Published
- 2017
122. An Approach for Choosing the Best Covering Array Constructor to Use
- Author
-
Mercan, Hanefi, Yilmaz, Cemal, and Kaya, Kamer
- Published
- 2017
123. Acyclic Partitioning of Large Directed Acyclic Graphs
- Author
-
Herrmann, Julien, Kho, Jonathan, Ucar, Bora, Kaya, Kamer, and Catalyurek, Umit V.
- Published
- 2017
124. A New Method for Computational Private Information Retrieval
- Author
-
Tillem, Gamze, Savaş, Erkay, and Kaya, Kamer
- Published
- 2017
125. Graph Manipulations for Fast Centrality Computation
- Author
-
Sariyüce, Ahmet Erdem, Kaya, Kamer, Saule, Erik, and Çatalyürek, Ümit V.
- Published
- 2017
126. Hypergraph partitioning for multiple communication cost metrics: model and methods
- Author
-
Deveci, Mehmet, Kaya, Kamer, Uçar, Bora, and Çatalyürek, Ümit V.
- Subjects
QA075 Electronic computers. Computer science
- Abstract
We investigate hypergraph partitioning-based methods for efficient parallelization of communicating tasks. A good partitioning method should divide the load among the processors as evenly as possible and minimize the inter-processor communication overhead. The total communication volume is the most popular communication overhead metric, and it is the one reduced by the existing state-of-the-art hypergraph partitioners. However, other metrics, such as the total number of messages, the maximum amount of data transferred by a processor, or a combination of them, are equally, if not more, important. Existing hypergraph-based solutions use a two-phase approach to minimize such metrics, where in each phase they minimize a different metric, sometimes at the expense of others. We propose a one-phase approach where all the communication cost metrics can be effectively minimized in a multi-objective setting and reductions can be achieved for all metrics together. For an accurate modeling of the maximum volume and the number of messages sent and received by a processor, we propose the use of directed hypergraphs. The directions on hyperedges necessitate revisiting the standard partitioning heuristics. We do so and propose a multi-objective, multilevel hypergraph partitioner called UMPa. The partitioner takes various prioritized communication metrics into account and optimizes all of them together in the same phase. Compared to the state-of-the-art methods, which only minimize the total communication volume, we show on a large number of problem instances that UMPa produces better partitions in terms of several communication metrics.
- Published
- 2014
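The total communication volume minimized by standard hypergraph partitioners, and referenced in item 126, is the connectivity-minus-one metric: each net (hyperedge) contributes one less than the number of parts it touches. A small sketch with hypothetical input conventions (nets as vertex lists, a vertex-to-part dict):

```python
def total_comm_volume(nets, part):
    """Total communication volume of a hypergraph partition: each net
    contributes (lambda - 1), where lambda is the number of distinct
    parts its pins (vertices) are assigned to."""
    return sum(len({part[v] for v in net}) - 1 for net in nets)
```

A net entirely inside one part contributes 0; a net spanning three parts contributes 2, matching the number of extra part copies its data must reach.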
127. Incremental closeness centrality in distributed memory
- Author
-
Sarıyüce, Ahmet Erdem, Saule, Erik, Kaya, Kamer, and Çatalyürek, Ümit V.
- Subjects
QA075 Electronic computers. Computer science
- Abstract
Networks are commonly used to model traffic patterns, social interactions, or web pages. The vertices in a network do not possess the same characteristics: some vertices are naturally more connected, and some vertices can be more important. Closeness centrality (CC) is a global metric that quantifies how important a given vertex is in the network. When the network is dynamic and keeps changing, the relative importance of the vertices also changes. Even with the best known algorithm for computing CC scores, recomputing them from scratch after each modification is impractical. In this paper, we propose Streamer, a distributed-memory framework for incrementally maintaining the closeness centrality scores of a network upon changes. It leverages pipelined, replicated parallelism and SpMM-based BFSs, and it takes NUMA effects into account. It makes maintaining the closeness centrality values of real-life networks with millions of interactions significantly faster, and it obtains almost linear speedups on a cluster of 64 nodes with 8 threads per node.
- Published
- 2014
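Closeness centrality, as used in item 127, is defined from single-source shortest-path distances. A minimal BFS-based sketch for one vertex of a connected unweighted graph (not the incremental Streamer framework):

```python
from collections import deque

def closeness_centrality(adj, v):
    """Closeness centrality of v: (n - 1) divided by the sum of BFS
    distances from v, for a connected unweighted graph given as an
    adjacency dict {vertex: iterable of neighbors}."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0
```

On a star graph the center scores 1.0 (distance 1 to everyone) while each leaf scores 0.6, reflecting its longer paths through the hub.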
128. Multilevel Threshold Secret Sharing using the Chinese Remainder Theorem.
- Author
-
Ersoy, Oğuzhan, Kaya, Kamer, and Kaşkaloğlu, Kerem
- Subjects
Chinese remainder theorem, Cryptosystems, Sharing
- Abstract
In multilevel secret sharing, a secret is shared among a set of hierarchically organized participants in a way that the members of the superior compartments are more powerful and can replace the participants of an inferior one to form an authorized coalition during secret reconstruction. In this work, we first show that the only existing multilevel threshold secret sharing scheme based on the Chinese Remainder Theorem (CRT) is not secure and fails to work with certain natural threshold settings on compartments. As the main contribution, we propose a secure CRT-based scheme that works for all threshold settings. In the proposed scheme, we employ a refined version of Asmuth-Bloom secret sharing with a special and generic Asmuth-Bloom sequence called the anchor sequence.
- Published
- 2019
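The anchor-sequence construction in entry 128 is specific to the multilevel setting; as background, plain Asmuth-Bloom sharing, the CRT-based scheme being refined, can be sketched as follows (toy parameters and illustrative names; the moduli must be pairwise coprime, with the product of the t smallest exceeding m0 times the product of the t-1 largest):

```python
from math import prod

def crt(residues, moduli):
    """Combine residues into the unique value modulo the product of
    the pairwise coprime moduli (Chinese Remainder Theorem)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return x % M

def share(secret, m0, moduli, blind):
    """Asmuth-Bloom: lift the secret to y = secret + blind * m0, with
    y below the product of the t smallest moduli; share i is y mod m_i."""
    assert secret < m0
    y = secret + blind * m0
    return [y % m for m in moduli]

def reconstruct(shares, moduli, m0):
    """A coalition holding t shares recovers y by CRT; the secret
    is then y mod m0."""
    return crt(shares, moduli) % m0
```

With m0 = 3, moduli (11, 13, 17), and t = 2, any two shares recover the secret; the paper's refinement addresses the security of exactly this kind of construction.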
129. Foreword: 1st International Workshop on High Performance Computing for Big Data
- Author
-
Kaya, Kamer, Gedik, Buğra, and Çatalyürek, Ümit V.
- Subjects
International workshops ,Big data ,High performance computing (HPC) ,Parallel processing ,Technical presentations ,High performance computing ,Data problems - Abstract
Date of Conference: 9-12 Sept. 2014. Conference name: 43rd International Conference on Parallel Processing Workshops, 2014. The 1st International Workshop on High Performance Computing for Big Data (HPC4BD) was held on September 10, 2014, in conjunction with the 43rd International Conference on Parallel Processing (ICPP-2014). The workshop aimed to bring together high performance computing (HPC) experts and experts from various application domains to discuss their Big Data problems. Four works were accepted for presentation at this year's workshop; this foreword presents a summary of them. © 2014 IEEE.
- Published
- 2014
130. Inconsistent clones and faults in software
- Author
-
Kaya, Kamer
- Abstract
Software clones in a system demand great care throughout the development cycle of a software project. Many researchers hold the view that clones, and inconsistent clones in particular, are the cause of various faults in software systems that creep in unnoticed and cannot be tracked. The effects of inconsistent clones in particular have been the focus of many research efforts, yet the findings of these studies diverge widely. This thesis analyzes the effects of inconsistent clones in a software system. Furthermore, within a designed study, it empirically examines the relationship between inconsistencies and faults in software systems. The study was conducted on three industrial systems and found that developers are aware of almost all clone locations of a clone class and, when needed, modify 58%-92% of them at the same time. Only 3%-33% of the inconsistent clone classes are faulty, so they pose a low risk to software development. The extensive analysis concluded that inconsistent clones cause a fault at less than half the rate of exact clones. The study further shows that clones copied from libraries exhibit a considerably lower number of faults and, after up to four years of existence, had not caused a single fault in the entire revision history. The results of this work demonstrate that developers clone deliberately and that, given this deliberate cloning, there is no increased association between inconsistent clones and faults.
- Published
- 2014
131. A CRT-based verifiable secret sharing scheme secure against unbounded adversaries
- Author
-
Ersoy, Oğuzhan, primary, Pedersen, Thomas Brochmann, additional, Kaya, Kamer, additional, Selçuk, Ali Aydın, additional, and Anarim, Emin, additional
- Published
- 2016
- Full Text
- View/download PDF
132. On the Relationship of Inconsistent Software Clones and Faults: An Empirical Study
- Author
-
Wagner, Stefan, primary, Abdulkhaleq, Asim, additional, Kaya, Kamer, additional, and Paar, Alexander, additional
- Published
- 2016
- Full Text
- View/download PDF
133. On analysis of partitioning models and metrics in parallel sparse matrix-vector multiplication
- Author
-
Catalyurek, Umit, Kaya, Kamer, Uçar, Bora, Department of Biomedical Informatics [Columbus], Ohio State University [Columbus] (OSU), Laboratoire de l'Informatique du Parallélisme (LIP), École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS), Optimisation des ressources : modèles, algorithmes et ordonnancement (ROMA), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire de l'Informatique du Parallélisme (LIP), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Centre National de la Recherche Scientifique (CNRS), INRIA, École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), and Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL)
- Subjects
hypergraph partitioning ,[INFO.INFO-DC]Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC] ,Parallel sparse-matrix vector multiplication - Abstract
Graph/hypergraph partitioning models and methods have been successfully used to minimize the communication requirements among processors in several parallel computing applications. Parallel sparse matrix-vector multiplication (SpMxV) is one of the representative applications that renders these models and methods indispensable in many scientific computing contexts. We investigate the interplay of several partitioning metrics and the execution times of SpMxV implementations in three libraries: Trilinos, PETSc, and an in-house one. We design and carry out experiments with up to 512 processors and investigate the results with regression analysis. Our experiments show that the partitioning metrics, although not an exact measure of the communication cost, greatly influence the performance in a distributed-memory setting. The regression analyses demonstrate which metric is the most influential for the execution time of each of the three libraries.
- Published
- 2013
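The kernel studied in entry 133 is standard; a minimal sequential sketch of SpMxV on a matrix in CSR form shows what each processor computes locally for its assigned rows, while the partitioning models decide which rows and vector entries each processor owns (names here are illustrative):

```python
def spmv_csr(n, rowptr, colind, vals, x):
    """y = A*x for an n-row sparse matrix A in CSR form: the nonzeros
    of row i sit at positions rowptr[i] .. rowptr[i+1]-1 of
    colind (column indices) and vals (values)."""
    y = [0.0] * n
    for i in range(n):
        for k in range(rowptr[i], rowptr[i + 1]):
            y[i] += vals[k] * x[colind[k]]
    return y
```

In the distributed setting, every `x[colind[k]]` owned by another processor triggers communication; the partitioning metrics in the study count and weigh exactly these accesses.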
134. Investigations on push-relabel based algorithms for the maximum transversal problem
- Author
-
Kaya, Kamer, Langguth, Johannes, Manne, Fredrik, Uçar, Bora, Department of Biomedical Informatics [Columbus], Ohio State University [Columbus] (OSU), Laboratoire de l'Informatique du Parallélisme (LIP), École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS), Department of Informatics [Bergen] (UiB), University of Bergen (UiB), Optimisation des ressources : modèles, algorithmes et ordonnancement (ROMA), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire de l'Informatique du Parallélisme (LIP), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Centre National de la Recherche Scientifique (CNRS), INRIA, École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL), and Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL)
- Subjects
[INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS] ,[MATH.MATH-CO]Mathematics [math]/Combinatorics [math.CO] ,[INFO.INFO-RO]Computer Science [cs]/Operations Research [cs.RO] ,[INFO.INFO-DM]Computer Science [cs]/Discrete Mathematics [cs.DM] ,[INFO.INFO-MS]Computer Science [cs]/Mathematical Software [cs.MS] - Abstract
We investigate the push-relabel algorithm for solving the problem of finding a maximum cardinality matching in a bipartite graph in the context of the maximum transversal problem. We describe in detail an optimized yet easy-to-implement version of the algorithm and fine-tune its parameters. We also introduce new performance-enhancing techniques. On a wide range of real-world instances, we compare the push-relabel algorithm with state-of-the-art augmenting-path-based algorithms and the recently proposed pseudoflow approach. We conclude that a carefully tuned push-relabel algorithm is competitive with all known augmenting-path-based algorithms and superior to the pseudoflow-based ones.
- Published
- 2012
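The tuned push-relabel implementation of entry 134 is not reproduced here; for reference, the augmenting-path family it is compared against can be sketched minimally (illustrative names; left vertices are 0..len(adj)-1, right vertices 0..n_right-1):

```python
def max_bipartite_matching(adj, n_right):
    """Maximum cardinality bipartite matching by repeated DFS search
    for augmenting paths. adj[u] lists the right vertices adjacent to
    left vertex u; returns the matching cardinality."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, visited):
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                # v is free, or its current partner can be re-matched
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```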
135. Shattering and Compressing Networks for Centrality Analysis
- Author
-
Sarıyüce, Ahmet Erdem, Saule, Erik, Kaya, Kamer, and Çatalyürek, Ümit V.
- Subjects
Social and Information Networks (cs.SI) ,FOS: Computer and information sciences ,Physics - Physics and Society ,Computer Science - Data Structures and Algorithms ,FOS: Physical sciences ,Data Structures and Algorithms (cs.DS) ,Computer Science - Social and Information Networks ,Physics and Society (physics.soc-ph) - Abstract
Who is more important in a network? Who controls the flow between the nodes, and whose contribution is significant for connections? Centrality metrics play an important role in answering these questions. The betweenness metric is useful for network analysis and is implemented in various tools. Since it is one of the most computationally expensive kernels in graph mining, several techniques have been proposed for the fast computation of betweenness centrality. In this work, we propose and investigate techniques that compress a network and shatter it into pieces so that the rest of the computation can be handled independently for each piece. Although we designed and tuned the shattering process for betweenness, it can be adapted to other centrality metrics in a straightforward manner. Experimental results show that the proposed techniques can greatly reduce the centrality computation time for various types of networks.
- Published
- 2012
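The shattering and compression techniques of entry 135 are preprocessing steps; the expensive kernel they accelerate is standard betweenness computation, sketched here with Brandes' algorithm for unweighted graphs (illustrative names; scores are unnormalized and count both orderings of each pair):

```python
from collections import deque

def betweenness(adj):
    """Betweenness centrality via Brandes' algorithm: one BFS plus one
    dependency-accumulation pass per source vertex."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        pred = {v: [] for v in adj}   # shortest-path predecessors
        sigma = {v: 0 for v in adj}   # number of shortest paths from s
        sigma[s] = 1
        dist = {s: 0}
        order = []
        q = deque([s])
        while q:
            u = q.popleft()
            order.append(u)
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    pred[w].append(u)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):     # accumulate dependencies
            for u in pred[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```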
136. Diversifying Citation Recommendations
- Author
-
Küçüktunç, Onur, Saule, Erik, Kaya, Kamer, and Çatalyürek, Ümit V.
- Subjects
Social and Information Networks (cs.SI) ,FOS: Computer and information sciences ,Digital Libraries (cs.DL) ,Information Retrieval (cs.IR) - Abstract
Literature search is arguably one of the most important phases of academic and non-academic research. The increase in the number of papers published each year makes manual search inefficient and, moreover, insufficient. Hence, automated methods such as search engines have been of interest for the last thirty years. Unfortunately, traditional engines use keyword-based approaches to solve the search problem, and these approaches are prone to ambiguity and synonymy. On the other hand, bibliographic search techniques based only on citation information are not prone to these problems, since they do not consider textual similarity. For many research areas and topics, the amount of accumulated knowledge is immense, and obtaining the desired information is as hard as looking for a needle in a haystack. Furthermore, sometimes what we are looking for is a set of documents in which each one differs from the others but which, as a whole, covers all the important parts of the literature relevant to our search. This paper targets the problem of result diversification in citation-based bibliographic search. It surveys a set of techniques that aim to find a set of papers with satisfactory quality and diversity. We enhance these algorithms with direction-awareness functionality to allow users to reach either old, well-cited, well-known research papers or recent, less-known ones. We also propose a set of novel techniques for better diversification of the results. All the techniques considered are compared through rigorous experimentation. The results show that some of the proposed techniques are very successful in practice when performing a search in a bibliographic database. 19 pages, manuscript under review
- Published
- 2012
- Full Text
- View/download PDF
137. Recommendation on Academic Networks using Direction Aware Citation Analysis
- Author
-
Küçüktunç, Onur, Saule, Erik, Kaya, Kamer, and Çatalyürek, Ümit V.
- Subjects
FOS: Computer and information sciences ,Digital Libraries (cs.DL) ,Information Retrieval (cs.IR) - Abstract
Literature search has always been an important part of academic research. It greatly helps to improve the quality of the research process and output, and it increases the efficiency of researchers in terms of their novel contribution to science. As the number of published papers increases every year, manual search becomes ever more exhausting, even with the help of today's search engines, since they are not specialized for this task. In academia, two relevant papers do not always share keywords, cite one another, or even belong to the same field. Although a well-known paper is usually easy prey in such a hunt, relevant papers using a different terminology, especially recent ones, are not obvious to the eye. In this work, we propose paper recommendation algorithms that use the citation information among papers. The proposed algorithms are direction aware in the sense that they can be tuned to find either recent or traditional papers. The algorithms take a set of papers as input and recommend a set of related ones. If the user gives negative or positive feedback on the suggested paper set, the recommendation is refined; the search process can thus be easily guided by relevance feedback. We show that this slight guidance helps the user reach a desired paper more efficiently. We also adapt our models and algorithms for the venue and reviewer recommendation tasks. The accuracy of the models and algorithms is thoroughly evaluated by comparison with multiple baselines and algorithms from the literature, in terms of several objectives specific to the citation, venue, and reviewer recommendation tasks. All of these algorithms are implemented within a publicly available web-service framework (http://theadvisor.osu.edu/), which currently uses data from DBLP and CiteSeer to construct the proposed citation graph. 10 pages, 8 figures, 3 tables
- Published
- 2012
- Full Text
- View/download PDF
138. Preconditioners based on Strong Components
- Author
-
Kaya, Kamer, Duff, Iain S., Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), CERFACS, STFC Rutherford Appleton Laboratory (RAL), Science and Technology Facilities Council (STFC), and INRIA
- Subjects
[INFO.INFO-NA]Computer Science [cs]/Numerical Analysis [cs.NA] ,[INFO.INFO-DC]Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC] - Abstract
Session 9; International audience
- Published
- 2011
139. Experiments on push-relabel-based maximum cardinality matching algorithms for bipartite graphs
- Author
-
Kaya, Kamer, Langguth, Johannes, Manne, Fredrik, Uçar, Bora, Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), Laboratoire de l'Informatique du Parallélisme (LIP), École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS), Department of Informatics [Bergen] (UiB), University of Bergen (UiB), Algorithms and Scheduling for Distributed Heterogeneous Platforms (GRAAL), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire de l'Informatique du Parallélisme (LIP), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Centre National de la Recherche Scientifique (CNRS), CERFACS, École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL), and Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL)
- Subjects
[INFO.INFO-DC]Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC] - Abstract
no abstract
- Published
- 2011
140. MICA
- Author
-
Hatem, Ayat, primary, Kaya, Kamer, additional, Parvin, Jeffrey, additional, Huang, Kun, additional, and Çatalyürek, Ümit V., additional
- Published
- 2015
- Full Text
- View/download PDF
141. Fast and High Quality Topology-Aware Task Mapping
- Author
-
Deveci, Mehmet, primary, Kaya, Kamer, additional, Ucar, Bora, additional, and Catalyurek, Umit V., additional
- Published
- 2015
- Full Text
- View/download PDF
142. Phylogenetic visualization of the spread of H7 influenza A viruses
- Author
-
Janies, Daniel A., primary, Pomeroy, Laura W., additional, Krueger, Chris, additional, Zhang, Yuqi, additional, Senturk, Izzet F., additional, Kaya, Kamer, additional, and Çatalyürek, Ümit V., additional
- Published
- 2015
- Full Text
- View/download PDF
143. Diversifying Citation Recommendations
- Author
-
Küçüktunç, Onur, primary, Saule, Erik, additional, Kaya, Kamer, additional, and Çatalyürek, Ümit V., additional
- Published
- 2014
- Full Text
- View/download PDF
144. Threshold cryptography with Chinese remainder theorem
- Author
-
Kaya, Kamer, Selçuk, Ali Aydın, and Bilgisayar Mühendisliği Anabilim Dalı
- Subjects
Cryptography ,Threshold cryptography ,Computers--Access control ,Function sharing ,Provable security ,Data protection ,Computer security ,Computer Engineering and Computer Science and Control ,Chinese Remainder Theorem ,QA76.9.A25 K39 2009 ,Coding theory ,Secret sharing ,AsmuthBloom ,Bilgisayar Mühendisliği Bilimleri-Bilgisayar ve Kontrol - Abstract
Information security has become much more important as electronic communication has entered every area of our daily lives. What the term information security covers varies with the type and requirements of the application area; however, no matter which algorithms are used, security ultimately depends on the secrecy of a key that is supposed to be known only to the intended agents. The requirement that the key remain secret brings several problems. Storing a secret key with only one person, server, or database reduces the security of the system to the security and credibility of that agent. Moreover, keeping no backup of the key introduces the risk of losing it entirely if a software or hardware failure occurs. On the other hand, if the key is held by more than one agent, an adversary seeking the key has more flexibility in choosing a target; hence the security is reduced to that of the least secure or least credible of these agents. Secret sharing schemes were introduced to solve these problems. The main idea is to share the secret among a group of agents such that only predefined coalitions can come together and reveal the secret, while no other coalition can obtain any information about it. Thus, keys used in areas requiring vital secrecy, such as large-scale finance applications and the command-control mechanisms of nuclear systems, can be stored by using secret sharing schemes. Threshold cryptography deals with a particular type of secret sharing scheme: if the size of a coalition exceeds a bound t, the coalition can reveal the key, while smaller coalitions learn nothing about it. The first secret sharing scheme in the literature is the threshold scheme of Shamir, where the secret is the constant term of a polynomial of degree t - 1 and points on the polynomial are distributed to the group of users. A coalition of size t can recover the polynomial and reveal the key, but a smaller coalition cannot. This scheme is widely accepted by researchers and used in several applications. Shamir's scheme is not the only one in the literature: almost concurrently, Blakley proposed a secret sharing scheme based on planar geometry, and Asmuth and Bloom proposed a scheme based on the Chinese Remainder Theorem. Although these schemes satisfy the necessary and sufficient conditions for security, they have not been widely adopted in applications requiring a secret sharing scheme. Secret sharing schemes also constitute a building block in several other applications, which typically contain a standard problem in the literature: the function sharing problem. In a function sharing scheme, each user holds its own secret as an input to a function, and the scheme computes the outcome of the function without revealing the secrets. Examples of functions shared by using a secret sharing scheme include the encryption and signature functions of public-key algorithms such as RSA, ElGamal, and Paillier. New-generation applications such as electronic voting also require function sharing schemes. As mentioned before, Shamir's secret sharing scheme has attracted most of the attention in the literature, and other schemes have received little consideration. However, as this thesis shows, secret sharing schemes based on the Chinese Remainder Theorem can also be practically used in these applications. Since each application has different needs, Shamir's scheme has been used with several extensions; this thesis investigates how to adapt Chinese Remainder Theorem based secret sharing schemes to the same applications. We first propose some modifications to the Asmuth-Bloom secret sharing scheme and then, using this modified scheme, design provably secure function sharing schemes and security extensions.
- Published
- 2009
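As background to the thesis in entry 144, Shamir's construction described in its abstract (the secret is the constant term of a random polynomial of degree t - 1, recovered by interpolation from any t points) can be sketched as follows; the prime and names are illustrative:

```python
import random

def shamir_share(secret, t, n, p):
    """(t, n) threshold sharing over GF(p): pick a random polynomial f
    of degree t-1 with f(0) = secret; share i is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]

    def f(x):
        return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

    return [(i, f(i)) for i in range(1, n + 1)]

def shamir_reconstruct(shares, p):
    """Lagrange interpolation at x = 0 over GF(p): any t shares
    recover the constant term, i.e. the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret
```

Fewer than t shares leave the secret information-theoretically hidden; the thesis adapts the CRT-based Asmuth-Bloom scheme to the same applications.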
145. Querying Co-regulated Genes on Diverse Gene Expression Datasets Via Biclustering.
- Author
-
Deveci, Mehmet, Küçüktunç, Onur, Eren, Kemal, Bozdağ, Doruk, Kaya, Kamer, and Çatalyürek, Ümit V.
- Published
- 2016
- Full Text
- View/download PDF
146. Sabahattin Engin hayatı - sanatı - eserleri
- Author
-
Kaya, Kamer, Kolcu, Hasan, and Türk Dili ve Edebiyatı Anabilim Dalı
- Subjects
Türk Dili ve Edebiyatı ,Turkish Language and Literature - Abstract
In this study, titled "Sabahattin Engin: His Life, Art, and Works," we introduce one of our contemporary playwrights, Sabahattin Engin, and his works; in this respect the study is the first of its kind. It consists of three chapters. In the first chapter, we present the writer's life, then attempt to characterize his understanding of art, and finally give a bibliography of all his works (his plays, his writings published in the press and in literary reviews, and his other works). In the second chapter, we examine twenty of the writer's published works under the headings "Introduction, Summary, Evaluation, Characters, Time, Setting, Language and Style," closing the chapter with "A General Evaluation of Sabahattin Engin's Plays." In the third chapter, we provide the texts of some of his writings published to date, grouped under the headings "Those Related to Turkish Literature" and "Those Related to Western Literature."
- Published
- 2004
147. Iterative-improvement-based heuristics for adaptive scheduling of tasks sharing files on heterogeneous master-slave environments
- Author
-
Kaya, Kamer and Aykanat, Cevdet
- Subjects
Computational grids (Computer systems) ,QA76.9.C58 K39 2004 - Abstract
Cataloged from PDF version of article. Kaya, Kamer M.S.
- Published
- 2004
148. Adapting Iterative-Improvement Heuristics for Scheduling File-Sharing Tasks on Heterogeneous Platforms
- Author
-
Kaya, Kamer, primary, Uçar, Bora, additional, and Aykanat, Cevdet, additional
- Full Text
- View/download PDF
149. Robust Threshold Schemes Based on the Chinese Remainder Theorem
- Author
-
Kaya, Kamer, primary and Selçuk, Ali Aydın, additional
- Full Text
- View/download PDF
150. Hardware/Software Vectorization for Closeness Centrality on Multi-/Many-Core Architectures
- Author
-
Sariyuce, Ahmet Erdem, primary, Saule, Erik, additional, Kaya, Kamer, additional, and Catalyurek, Umit V., additional
- Published
- 2014
- Full Text
- View/download PDF