Search Results
171 results
2. Emerging opportunities of using large language models for translation between drug molecules and indications.
- Author
- Oniani, David; Hilsman, Jordan; Zang, Chengxi; Wang, Junmei; Cai, Lianjin; Zawala, Jan; Wang, Yanshan
- Subjects
- LANGUAGE models; GENERATIVE artificial intelligence; DRUG discovery; MOLECULES; EVIDENCE gaps
- Abstract
A drug molecule is a substance that changes an organism's mental or physical state. Every approved drug has an indication, which refers to the therapeutic use of that drug for treating a particular medical condition. While the Large Language Model (LLM), a generative Artificial Intelligence (AI) technique, has recently demonstrated effectiveness in translating between molecules and their textual descriptions, there remains a gap in research regarding their application in facilitating the translation between drug molecules and indications (which describes the disease, condition or symptoms for which the drug is used), or vice versa. Addressing this challenge could greatly benefit the drug discovery process. The capability of generating a drug from a given indication would allow for the discovery of drugs targeting specific diseases or targets and ultimately provide patients with better treatments. In this paper, we first propose a new task, the translation between drug molecules and corresponding indications, and then test existing LLMs on this new task. Specifically, we consider nine variations of the T5 LLM and evaluate them on two public datasets obtained from ChEMBL and DrugBank. Our experiments show the early results of using LLMs for this task and provide a perspective on the state-of-the-art. We also emphasize the current limitations and discuss future work that has the potential to improve the performance on this task. The creation of molecules from indications, or vice versa, will allow for more efficient targeting of diseases and significantly reduce the cost of drug discovery, with the potential to revolutionize the field of drug discovery in the era of generative AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
3. A survey on IoT trust model frameworks.
- Author
- Ferraris, Davide; Fernandez-Gago, Carmen; Roman, Rodrigo; Lopez, Javier
- Subjects
- TRUST; INTERNET of things; COMPUTER science
- Abstract
Trust can be considered as a multidisciplinary concept, which is strongly related to the context and it falls in different fields such as Philosophy, Psychology or Computer Science. Trust is fundamental in every relationship, because without it, an entity will not interact with other entities. This aspect is very important especially in the Internet of Things (IoT), where many entities produced by different vendors and created for different purposes have to interact among them through the internet often under uncertainty. Trust can overcome this uncertainty, creating a strong basis to ease the process of interaction among these entities. We believe that considering trust in the IoT is fundamental, and in order to implement it in any IoT entity, it is fundamental to consider it through the whole System Development Life Cycle. In this paper, we propose an analysis of different works that consider trust for the IoT. We will focus especially on the analysis of frameworks that have been developed in order to include trust in the IoT. We will make a classification of them providing a set of parameters that we believe are fundamental in order to properly consider trust in the IoT. Thus, we will identify important aspects to be taken into consideration when developing frameworks that implement trust in the IoT, finding gaps and proposing possible solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
4. Attacking cryptosystems by means of virus machines.
- Author
- Pérez-Jiménez, Mario J.; Ramírez-de-Arellano, Antonio; Orellana-Martín, David
- Subjects
- CRYPTOSYSTEMS; PUBLIC key cryptography; POLYNOMIAL time algorithms; COMPUTER science; MACHINERY; MACHINE design
- Abstract
The security of public-key cryptosystems relies on the presumed computational hardness of the mathematical problems behind the systems themselves (e.g. the semiprime factorization problem in the RSA cryptosystem), in the sense that no polynomial-time (classical) algorithm is known for solving them. The paper focuses on the computing paradigm of virus machines within the area of Unconventional Computing and Natural Computing. Virus machines, which incorporate concepts of virology and computer science, are considered as number computing devices that operate together with an environment. The paper designs a virus machine that solves a generalization of the semiprime factorization problem and verifies it formally. [ABSTRACT FROM AUTHOR]
- Published
- 2023
5. Mental time-travel, semantic flexibility, and A.I. ethics.
- Author
- Arvan, Marcus
- Subjects
- ETHICAL problems; COMPUTER performance; ARTIFICIAL intelligence; ETHICS; HUMAN beings
- Abstract
This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed 'general ethical dilemma analyzer,' GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving 'mental time-travel,' whereby we simulate different possible pasts and futures. I demonstrate how mental time-travel psychology leads us to resolve the semantic trilemma through a six-step process of interpersonal negotiation and renegotiation, and then conclude by showing how comparative advantages in processing power would plausibly cause AI to use similar processes to solve the semantic trilemma more reliably than we do, leading AI to make better moral-semantic choices than humans do by our very own lights. [ABSTRACT FROM AUTHOR]
- Published
- 2023
6. Fast algorithm for parallel solving inversion of large scale small matrices based on GPU.
- Author
- Xuebin, Jin; Yewang, Chen; Wentao, Fan; Yong, Zhang; Jixiang, Du
- Subjects
- PARALLEL algorithms; MATRIX inversion; MATRICES (Mathematics); IMAGE processing; COMPUTER science; HIGH performance computing
- Abstract
Inverting a matrix is time-consuming, and many works focus on accelerating the inversion of a single large matrix by GPU. However, the problem of parallelizing the inversion of a large number of small matrices has received little attention. These problems are widely applied in computer science, including accelerating cryptographic algorithms and image processing algorithms. In this paper, we propose a Revised In-Place Inversion algorithm for inverting a large number of small matrices on the CUDA platform, which adopts a more refined parallelization scheme and outperforms other algorithms, achieving a speedup of up to 20.9572 times over the batch matrix inverse kernel in CUBLAS. Additionally, we found that there is an upper bound on the input data size for each GPU device, and the performance will degrade if the input data size is too large. Therefore, we propose the Saturation Size Curve based on this finding to divide matrices into batches and improve the algorithm performance. Experimental results show that this strategy increases the algorithm's performance by 1.75 times and effectively alleviates the problem of performance degradation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
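The batched-inversion idea in entry 6 can be illustrated outside CUDA: NumPy's linalg.inv already inverts a whole stack of small matrices in one vectorized call, which is the CPU analogue of the batched GPU kernels the paper compares against. A minimal sketch (the shapes and batch size below are arbitrary, not taken from the paper):

```python
import numpy as np

# A "large number of small matrices": 100,000 independent 4x4 systems.
rng = np.random.default_rng(0)
batch = rng.standard_normal((100_000, 4, 4))

# np.linalg.inv broadcasts over the leading batch dimension, inverting
# every 4x4 matrix in a single vectorized call -- the CPU counterpart of
# a batched GPU inversion kernel.
inverses = np.linalg.inv(batch)

# Spot-check one matrix: A @ A^-1 should be (numerically) the identity.
assert np.allclose(batch[0] @ inverses[0], np.eye(4), atol=1e-8)
```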
7. Empiricism in the foundations of cognition.
- Author
- Childers, Timothy; Hvorecký, Juraj; Majer, Ondrej
- Subjects
- PHILOSOPHY of science; DEEP learning; EMPIRICISM; BEHAVIORISM (Psychology); COGNITION; COMPUTER science
- Abstract
This paper traces the empiricist program from early debates between nativism and behaviorism within philosophy, through debates about early connectionist approaches within the cognitive sciences, and up to their recent iterations within the domain of deep learning. We demonstrate how current debates on the nature of cognition via deep network architecture echo some of the core issues from the Chomsky/Quine debate and investigate the strength of support offered by these various lines of research to the empiricist standpoint. Referencing literature from both computer science and philosophy, we conclude that the current state of deep learning does not offer strong encouragement to the empiricist side despite some arguments to the contrary. [ABSTRACT FROM AUTHOR]
- Published
- 2023
8. Multi-label classification of research articles using Word2Vec and identification of similarity threshold.
- Author
- Mustafa, Ghulam; Usman, Muhammad; Yu, Lisu; Afzal, Muhammad Tanvir; Sulaiman, Muhammad; Shahid, Abdul
- Subjects
- CLASSIFICATION; COMPUTER science; CITATION indexes; DIGITAL libraries; METADATA; SEARCH engines
- Abstract
Every year, around 28,100 journals publish 2.5 million research publications. Search engines, digital libraries, and citation indexes are used extensively to search these publications. When a user submits a query, it generates a large number of documents among which just a few are relevant. Due to inadequate indexing, the resultant documents are largely unstructured. Publicly known systems mostly index the research papers using keywords rather than using subject hierarchy. Numerous methods reported for performing single-label classification (SLC) or multi-label classification (MLC) are based on content and metadata features. Content-based techniques offer higher outcomes due to the extreme richness of features. But the drawback of content-based techniques is the unavailability of full text in most cases. The use of metadata-based parameters, such as title, keywords, and general terms, acts as an alternative to content. However, existing metadata-based techniques indicate low accuracy due to the use of traditional statistical measures to express textual properties in quantitative form, such as BOW, TF, and TFIDF. These measures may not establish the semantic context of the words. The existing MLC techniques require a specified threshold value to map articles into predetermined categories for which domain knowledge is necessary. The objective of this paper is to get over the limitations of SLC and MLC techniques. To capture the semantic and contextual information of words, the suggested approach leverages the Word2Vec paradigm for textual representation. The suggested model determines threshold values using rigorous data analysis, obviating the necessity for domain expertise. Experimentation is carried out on two datasets from the field of computer science (JUCS and ACM). In comparison to current state-of-the-art methodologies, the proposed model performed well. Experiments yielded average accuracy of 0.86 and 0.84 for JUCS and ACM for SLC, and 0.81 and 0.80 for JUCS and ACM for MLC. On both datasets, the proposed SLC model improved the accuracy up to 4%, while the proposed MLC model increased the accuracy up to 3%. [ABSTRACT FROM AUTHOR]
- Published
- 2021
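As a rough, hypothetical illustration of the representation-plus-threshold idea in entry 8 (not the authors' actual pipeline), one can average Word2Vec vectors to embed a paper's metadata and assign every category whose centroid similarity clears a threshold. The toy corpus, category names, and threshold value are invented for the example:

```python
import numpy as np
from gensim.models import Word2Vec

# Toy metadata "documents" (title/keyword tokens); real data would come from JUCS/ACM.
docs = [["graph", "algorithms", "parallel"], ["neural", "networks", "image"],
        ["databases", "indexing", "query"], ["parallel", "gpu", "matrix"]]
labels = [["Theory"], ["AI"], ["Databases"], ["Theory", "Systems"]]

model = Word2Vec(sentences=docs, vector_size=50, min_count=1, epochs=50, seed=1)

def doc_vector(tokens):
    # Average the word vectors to obtain a document embedding.
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

# One centroid per category, built from the documents carrying that label.
categories = sorted({c for ls in labels for c in ls})
centroids = {c: np.mean([doc_vector(d) for d, ls in zip(docs, labels) if c in ls], axis=0)
             for c in categories}

def predict(tokens, threshold=0.3):
    # Multi-label decision: keep every category whose cosine similarity clears the threshold.
    v = doc_vector(tokens)
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return [c for c in categories if cos(v, centroids[c]) >= threshold]

print(predict(["parallel", "graph", "gpu"]))
```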
9. VGL: a high-performance graph processing framework for the NEC SX-Aurora TSUBASA vector architecture.
- Author
- Afanasyev, Ilya V.; Voevodin, Vladimir V.; Komatsu, Kazuhiko; Kobayashi, Hiroaki
- Subjects
- GRAPHICS processing units; GRAPH algorithms; COMPUTER science; SUPERCOMPUTERS
- Abstract
Developing efficient implementations of graph algorithms is an extremely important problem in modern computer science, since graphs are frequently used in various real-world applications. Graph algorithms typically belong to the data-intensive class, so architectures with high-bandwidth memory can potentially solve many graph problems significantly faster than modern multicore CPUs. Among other supercomputer architectures, vector systems, such as the SX family of NEC vector supercomputers, are equipped with high-bandwidth memory. However, the highly irregular structure of many real-world graphs makes it extremely challenging to implement graph algorithms on vector systems, since these implementations are usually bulky and complicated, and a deep understanding of the hardware features of vector architectures is required. This paper presents the first attempt to develop an efficient and, at the same time, simple graph processing framework for modern vector systems. Our vector graph library (VGL) framework targets NEC SX-Aurora TSUBASA as a primary vector architecture and provides relatively simple computational and data abstractions. These abstractions incorporate many vector-oriented optimization strategies into a high-level programming model, allowing quick implementation of new graph algorithms with a small amount of code and minimal knowledge of vector-system features. In this paper, we evaluate the VGL performance on four widely used graph processing problems: breadth-first search, single-source shortest paths, connected components, and PageRank. The comparative performance analysis demonstrates that the VGL-based implementations achieve significant acceleration over existing high-performance frameworks and libraries: up to 14 times speedup over multicore CPUs (Ligra, Galois, GAPBS) and up to 3 times speedup compared to NVIDIA GPU (Gunrock, NVGRAPH) implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
10. The Paternity of the Modern Computer.
- Author
- Lara, Juan A.; Pazos, Juan; de Sojo, Aurea Anguera; Aljawarneh, Shadi
- Subjects
- PATERNITY; COMPUTERS; SCIENTIFIC community; ARTIFICIAL intelligence; TURING machines
- Abstract
In recent decades, there has been a proliferation among the scientific community of works that focus on Alan Turing's contributions to the design and development of the modern computer. However, there are significant discrepancies among these studies, to such a point that some of them cast serious doubts on Alan Turing's work with respect to today's computer, and there are others that staunchly defend his leading role, as well as other studies that set out more well-balanced opinions. Faced with this situation, the aim of this paper is to analyse the evidence existing today in order to be able to draw a conclusion about whether or not Turing anticipated the trivialisation of the modern computer memory and, likewise, if his universal a-machine is the precursor of the general-purpose computer so omnipresent today. As a result of our research, the authors conclude that Turing did indeed play a leading role in the appearance of the modern computer, although he was not the only one or the first in the field of Computing Science, albeit he was the most influential, both in scope and in depth. [ABSTRACT FROM AUTHOR]
- Published
- 2022
11. A smooth approximation approach for optimization with probabilistic constraints based on sigmoid function.
- Author
- Ren, Yong H.; Xiong, Ying; Yan, Yu H.; Gu, Jian
- Subjects
- CONSTRAINED optimization; FINANCIAL statistics; TELECOMMUNICATION systems; PRODUCT design; COMPUTER science; CONVEX functions
- Abstract
Many practical problems in areas such as computer science, communication networks, product design, system control, statistics, and finance can be formulated as a probabilistic constrained optimization problem (PCOP), which is challenging to solve since it is usually nonconvex and nonsmooth. Effective methods for the probabilistic constrained optimization problem mostly focus on approximation techniques, such as convex approximation, D.C. (difference of two convex functions) approximation, and so on. This paper studies a smooth approximation approach. A smooth approximation to the probabilistic constraint function based on a sigmoid function is analyzed. Equivalence of the PCOP and the corresponding approximation problem is shown under some appropriate assumptions. A sequential convex approximation (SCA) algorithm is implemented to solve the smooth approximation problem. Numerical results suggest that the proposed smooth approximation approach is effective for optimization problems with probabilistic constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2022
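To make the smoothing idea in entry 11 concrete: a chance constraint P(g(x, ξ) ≤ 0) ≥ 1 − α involves the non-smooth indicator 1[g ≤ 0], which a sigmoid 1/(1 + exp(g/τ)) approximates while staying differentiable. The sketch below is only a Monte-Carlo illustration of that substitution with an invented constraint function; it is not the paper's SCA algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(size=10_000)          # samples of the random parameter xi

def g(x, xi):
    # A made-up constraint function: feasible when x + xi <= 1.
    return x + xi - 1.0

def true_prob(x):
    # Empirical P(g(x, xi) <= 0) -- a step-like, non-smooth function of x.
    return np.mean(g(x, xi) <= 0.0)

def smooth_prob(x, tau=0.05):
    # Sigmoid surrogate: 1/(1 + exp(g/tau)) tends to the indicator 1[g <= 0]
    # as tau -> 0, but stays differentiable in x, so gradient-based or
    # convex-approximation methods can handle the constraint.
    return np.mean(1.0 / (1.0 + np.exp(g(x, xi) / tau)))

for x in (0.0, 0.5, 1.0):
    print(f"x={x:.1f}  true={true_prob(x):.3f}  smooth={smooth_prob(x):.3f}")
```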
12. Data objects for knowing.
- Author
- Fonseca, Fred
- Subjects
- DATA science; PHILOSOPHY of science; COMPUTER science; SCIENTIFIC experimentation; SCIENTIFIC knowledge
- Abstract
Although true in some aspects, the suggested characterization of today's science as a dichotomy between traditional science and data-driven science misses some of the nuance, complexity, and possibility that exists between the two positions. Part of the problem is the claim that Data Science works without theories. There are many theories behind the data that are used in science. However, for data science, the only theories that matter are those in mathematics, statistics, and computer science. In this conceptual paper, we add two other philosophy of science tenets, experiments and data, to the discussion to create a more nuanced view of how data science uses theories. Following Ihde's concept of technoscience and the incessant quest for more precision, magnification, and resolution, we argue that technology-driven science created a need for more technology-driven science, culminating in data science. Further, we adapt Hacking and Galison's views on physics to argue that data science is also an experimental science, which uses data objects in experiments. Drawing from Heelan (The Journal of Philosophy 85:515–524, 1988), we called these objects "data-objects-for-knowing". Finally, we conclude that data science is a science to study artificially created phenomena—a science to study the data manipulated by the equations and operations of AI. It disregards the connections between data and the real world that were carefully built by the theories from other sciences. In the experiments of data science, data are the world itself. The knowledge created by data science is purposely disconnected from any theory from other sciences; it is a knowledge for the sake of itself. [ABSTRACT FROM AUTHOR]
- Published
- 2022
13. Intelligent rule-based approach for effective information retrieval and dynamic storage in local repositories.
- Author
- Alagarsamy, Ramachandran; Sahaaya Arul Mary, S. A.
- Subjects
- INFORMATION storage & retrieval systems; INFORMATION organization; TEMPORAL databases; WEB search engines; INFORMATION retrieval; COMPUTER science
- Abstract
Rules in artificial intelligence are able to provide inference facilities for making effective decisions in information retrieval. In such a scenario, keyword-based information retrieval systems use keywords as indexes generated by Web crawlers, without considering semantics. Moreover, many Web search engines perform re-ranking based on relevance feedback. However, the relevant documents in e-learning applications must be stored in local repositories and must be updated dynamically based on the level of the users. Therefore, it is necessary to have an information retrieval system which performs relevant information extraction and stores such relevant information dynamically in local e-learning repositories. To address these issues, this paper proposes a new information retrieval and local storage system that applies rules for making effective decisions in newly proposed storage and retrieval algorithms. For this purpose, two new algorithms, an intelligent rule-based relevant information retrieval algorithm with semantics and a secured information storage algorithm using semantic knowledge representation, are proposed for effectively retrieving e-learning content on the subject of computer science from the Web and storing it in local repositories with semantic indexing. The major advantages of the proposed information retrieval system include an increase in accuracy, a reduction in retrieval time and effective storage in local repositories for further use. [ABSTRACT FROM AUTHOR]
- Published
- 2020
14. Computer conference welcomes gobbledegook paper.
- Author
- Ball, Philip
- Subjects
- CONFERENCES & conventions; COMPUTER science; CYBERNETICS; INFORMATION science; AUTOMATIC control systems
- Abstract
Reports that Massachusetts Institute of Technology computer science graduate Jeremy Stribling's paper "Rooter: A Methodology for the Typical Unification of Access Points and Redundancy," co-authored with Daniel Aguayo and Maxwell Krohn, was accepted for the 9th World Multi-Conference on Systemics, Cybernetics and Informatics to be held in Florida in July. Development of an automatic computer-science paper generator that cobbles together articles adorned with randomly generated graphs; Other information provided by the paper.
- Published
- 2005
15. Diagnostic approach in assessment of a ferroresonant circuit.
- Author
- Majka, Łukasz; Klimas, Maciej
- Subjects
- USB technology; FREEWARE (Computer software); COMPUTER-aided design software; COMPUTER science; SYSTEMS software; SOFTWARE architecture
- Abstract
This paper presents possibilities offered by a diagnostic system called FeD. The system is completely original; it has been developed by the authors on the basis of the Arduino platform. The system has been designed to perform and record measurements and to carry out different numerical operations. A real-time function for several operations is incorporated in this system. The necessary input data for the system consist of the electrical voltage waveforms only. Rescaled voltage quantities can be displayed, measured, recorded or computed in any chosen way. The system has been developed particularly for measurements and computations in ferroresonant circuits. The strongest part of the system is its versatility. It works with a standard PC and supports a universal connection (USB standard). This is undeniably a cost-effective solution. Driving and control of the system functions are carried out using the authors' original software implemented in the SciLab environment. This is free software, similar to and compatible with other existing CAD programs such as Octave and MATLAB. The obtained data, scripts and results can be freely transferred between them. The program is equipped with a transparent GUI. The need to construct a special system to diagnose the ferroresonant circuit emerged during earlier ferroresonance analyses and computations. Every ferroresonant circuit requires a specific kind of diagnostics to estimate and display its base features in order to determine the best scientific approach to the problem. The ferroresonance phenomenon belongs to the domain of nonlinear problems. Its analysis requires excellent skills in mathematics and physics as well as computer science. Moreover, this subject also requires specialized engineering knowledge, particularly in the field of power engineering and power system equipment. Modern mathematical models and analyses used in ferroresonant computations are quite accurate; however, for a common user, they are often difficult to understand or implement. This paper provides a full description of the construction, features and test results of the developed hardware/software system designed for diagnostics of ferroresonant circuits. The test circuit case study has been performed over the entire power supply range. Results of measurements and computations as well as screenshots captured from the authors' original software are shown in different figures. The developed software and recorded data have finally been used in modeling and further simulations. During this, the application of the fractional derivative iron core coil model to ferroresonance analysis has been shown. The waveforms obtained from computer simulations have been compared with those obtained from measurements performed in the test circuit. [ABSTRACT FROM AUTHOR]
- Published
- 2019
16. Special issue: selected papers from the 21st international symposium on temporal representations and reasoning (TIME-2014).
- Author
- Bresolin, Davide; Sciavicco, Guido
- Subjects
- SPECIAL issues of periodicals; REASONING; ARTIFICIAL intelligence; COMPUTER science; COMPUTATIONAL complexity; CONFERENCES & conventions
- Published
- 2016
17. Three-party quantum private computation of cardinalities of set intersection and union based on GHZ states.
- Author
- Zhang, Cai; Long, Yinxiang; Sun, Zhiwei; Li, Qin; Huang, Qiong
- Subjects
- QUANTUM computing; NOISE; CRYPTOGRAPHY; QUANTUM theory; COMPUTER science
- Abstract
Private Set Intersection Cardinality (PSI-CA) and Private Set Union Cardinality (PSU-CA) are two cryptographic primitives whereby two or more parties are able to obtain the cardinalities of the intersection and the union of their respective private sets, and the privacy of their sets is preserved. In this paper, we propose a three-party protocol to finish these tasks by using quantum resources, where every two, as well as three, parties can obtain the cardinalities of the intersection and the union of their private sets with the help of a semi-honest third party (TP). In our protocol, GHZ states play a role in encoding private information that will be used by TP to compute the cardinalities. We show that the presented protocol is secure against well-known quantum attacks. In addition, we analyze the influence of six typical kinds of Markovian noise on our protocol. [ABSTRACT FROM AUTHOR]
- Published
- 2020
18. Single shot multi-oriented text detection based on local and non-local features.
- Author
- Li, XiaoQian; Liu, Jie; Zhang, ShuWu; Zhang, GuiXuan; Zheng, Yang
- Subjects
- CONVOLUTIONAL neural networks; DETECTORS; COMPUTER science; ROBUST statistics; INTERNET
- Abstract
To improve the robustness of text detectors on scene text of various scales, this paper proposes a single-shot text detector that combines local and non-local features. A dilated inception module for local feature extraction and a text self-attention module for non-local feature extraction are presented, and these two kinds of modules are integrated into the single shot detector (SSD) used in generic object detection so as to perform multi-oriented text detection in natural scenes. The proposed modules contribute to richer and wider receptive fields and enhance feature representation, improving the performance of our text detector. In addition, compared with previous SSD-based text detectors that classify positive and negative samples depending on default boxes, we exploit pixels as references for more accurate matching with ground truth, which avoids complex anchor design. To evaluate the effectiveness of the proposed method, we carry out several comparative experiments on public standard benchmarks and analyze the experimental results in detail. The results illustrate that the proposed text detector can compete with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
19. Multi-granularity hybrid parallel network simplex algorithm for minimum-cost flow problems.
- Author
- Jiang, Jincheng; Chen, Jinsong; Wang, Chisheng
- Subjects
- SIMPLEX algorithm; COMPUTING platforms; GRAPH theory; PARALLEL programming; COMPUTER science; BOTTLENECKS (Manufacturing)
- Abstract
Minimum-cost flow problems widely exist in graph theory, computer science, information science, and transportation science. The network simplex algorithm is a fast and frequently used method for solving minimum-cost flow problems. However, conventional sequential algorithms cannot satisfy the requirement of high computational efficiency for large-scale networks. Parallel computing has produced numerous significant advances in science and technology over the past decades and has the potential to provide an effective means of solving the computational bottleneck of large-scale networks. This paper first analyzes the parallelizability of the network simplex algorithm and then presents a multi-granularity parallel network simplex algorithm (MPNSA) with fine- and coarse-granularity parallel strategies, which are suitable for shared- and distributed-memory parallel applications, respectively. MPNSA is implemented with the message-passing interface, open multiprocessing, and the compute unified device architecture, so that it is compatible with different high-performance computing platforms. Experimental results demonstrate that MPNSA achieves substantial acceleration, with a maximum speedup of 18.7. [ABSTRACT FROM AUTHOR]
- Published
- 2020
20. A Bayesian perspective of statistical machine learning for big data.
- Author
- Sambasivan, Rajiv; Das, Sourish; Sahu, Sujit K.
- Subjects
- STATISTICAL learning; MACHINE learning; COMPUTER science; SUPERVISED learning; BIG data; DEEP learning; CONCEPT mapping
- Abstract
Statistical Machine Learning (SML) refers to a body of algorithms and methods by which computers are allowed to discover important features of input data sets which are often very large in size. The very task of feature discovery from data is essentially the meaning of the keyword 'learning' in SML. Theoretical justifications for the effectiveness of the SML algorithms are underpinned by sound principles from different disciplines, such as Computer Science and Statistics. The theoretical underpinnings particularly justified by statistical inference methods are together termed as statistical learning theory. This paper provides a review of SML from a Bayesian decision theoretic point of view—where we argue that many SML techniques are closely connected to making inference by using the so called Bayesian paradigm. We discuss many important SML techniques such as supervised and unsupervised learning, deep learning, online learning and Gaussian processes especially in the context of very large data sets where these are often employed. We present a dictionary which maps the key concepts of SML from Computer Science and Statistics. We illustrate the SML techniques with three moderately large data sets where we also discuss many practical implementation issues. Thus the review is especially targeted at statisticians and computer scientists who are aspiring to understand and apply SML for moderately large to big data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2020
21. A Novel Artificial Bee Colony Optimization Algorithm with SVM for Bio-inspired Software-Defined Networking.
- Author
- Chiang, Hsiu-Sen; Sangaiah, Arun Kumar; Chen, Mu-Yen; Liu, Jia-Yu
- Subjects
- BEES algorithm; BIOLOGICALLY inspired computing; SOFTWARE-defined networking; PARTICLE swarm optimization; COMPUTER science; SUPPORT vector machines
- Abstract
In recent years, artificial intelligence and bio-inspired computing methodologies have risen rapidly and have been successfully applied to many fields. Bio-inspired network systems form a field at the intersection of biology and computer science and are closely related to bio-inspired computing and bio-inspired systems. Their self-organizing and self-healing characteristics help them achieve complex tasks with much ease in a network environment. Software-defined networking provides a breakthrough in network transformation. However, growing network requirements, and the controller's central role in determining network functionality and resource allocation, call for self-management capabilities. More recently, the artificial bee colony (ABC) algorithm has been used to solve parameter optimization problems. In this paper, a discretized food source for an artificial bee colony (DfABC) optimization algorithm is proposed and applied to optimize the kernel parameters of a support vector machine (SVM) model, creating a new hybrid. To further improve prediction accuracy, the proposed DfABC algorithm is applied to six popular UCI datasets. We also compare the DfABC algorithm to particle swarm optimization (PSO), the genetic algorithm (GA), and the original ABC algorithm. The experimental results show that the proposed DfABC-SVM model achieves better classification accuracy with a shorter convergence time, outperforming the other hybrid artificial intelligence models. [ABSTRACT FROM AUTHOR]
- Published
- 2020
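The coupling described in entry 21, a bee-colony search driving SVM kernel parameters, can be sketched generically with scikit-learn. The loop below is a simplified ABC-style neighbourhood search (employed-bee phase only), not the DfABC algorithm itself, and the dataset and parameter ranges are illustrative:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

def fitness(log_C, log_gamma):
    # Cross-validated accuracy of an RBF-SVM for one "food source" (C, gamma).
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

# Initialize a small population of food sources in log10 space.
sources = rng.uniform(low=[-2, -4], high=[3, 1], size=(10, 2))
scores = np.array([fitness(*s) for s in sources])

for _ in range(20):                      # employed-bee phase, repeated
    for i in range(len(sources)):
        k = rng.integers(len(sources))   # random partner source
        phi = rng.uniform(-1, 1, size=2)
        candidate = np.clip(sources[i] + phi * (sources[i] - sources[k]), [-2, -4], [3, 1])
        cand_score = fitness(*candidate)
        if cand_score > scores[i]:       # greedy replacement
            sources[i], scores[i] = candidate, cand_score

best = sources[np.argmax(scores)]
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}, acc={scores.max():.3f}")
```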
22. What do we owe to intelligent robots?
- Author
- Gordon, John-Stewart
- Subjects
- MORAL reasoning; AUTONOMOUS robots; ROBOTS; COMPUTER science; ARTIFICIAL intelligence; ETHICS
- Abstract
Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots "ethical" and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly advanced yet artificially intelligent beings will deserve moral protection (in the form of being granted moral rights) once they become capable of moral reasoning and decision-making. I argue that we are obligated to grant them moral rights once they have become full ethical agents, i.e., subjects of morality. I present four related arguments in support of this claim and thereafter examine four main objections to the idea of ascribing moral rights to artificial intelligent robots. [ABSTRACT FROM AUTHOR]
- Published
- 2020
23. Ensemble residual network-based gender and activity recognition method with signals.
- Author
- Tuncer, Turker; Ertam, Fatih; Dogan, Sengul; Aydemir, Emrah; Pławiak, Paweł
- Subjects
- DEEP learning; HUMAN activity recognition; ARTIFICIAL intelligence; SUPPORT vector machines; COMPUTER science; FEATURE selection; GENDER
- Abstract
Nowadays, deep learning is one of the most popular research areas of computer science, and many deep networks have been proposed to solve artificial intelligence and machine learning problems. Residual networks (ResNets), for instance ResNet18, ResNet50 and ResNet101, are widely used deep networks in the literature. In this paper, a novel ResNet-based signal recognition method is presented. In this study, ResNet18, ResNet50 and ResNet101 are utilized as feature extractors, and each network extracts 1000 features. The extracted features are concatenated, and 3000 features are obtained. In the feature selection phase, the 1000 most discriminative features are selected using ReliefF, and these selected features are used as input for a third-degree polynomial (cubic) activation-based support vector machine. The proposed method achieved 99.96% and 99.61% classification accuracy rates for gender and activity recognition, respectively. These results clearly demonstrate that the proposed pre-trained ensemble ResNet-based method achieves a high success rate on sensor signals. [ABSTRACT FROM AUTHOR]
- Published
- 2020
24. Review of single-point diamond turning process in terms of ultra-precision optical surface roughness.
- Author
- Hatefi, Shahrokh; Abou-El-Hossein, Khaled
- Subjects
- DIAMOND turning; SURFACE roughness; MANUFACTURING processes; COMPUTER science; ELECTRONIC equipment; COMPUTER engineering; TEXTILE machinery
- Abstract
Ultra-precision machining is the field that has emerged beyond conventional precision machining processes. Recently, achieving nanoscale features on products has become important in the manufacturing of critical components. One of the main objectives in the advanced manufacturing of optics is to reach ultimately high precision in the accuracy of optical surface generation. Through the further development of computer numerical controlled machinery technology, single-point diamond turning (SPDT) has evolved rapidly and has become a key step in the process chain of nano-machining. In SPDT, with advanced and competitive technology for optical surface generation combined with ultra-precision fixtures and accurate metrological systems, high-precision surface machining at scales down to 1 nanometer, or even less, is successfully achieved. Different engineering applications including medical, dental, defense, aerospace, computer science, and electronic components demand extreme smoothness and optical quality of the machined surfaces. However, there are limitations and drawbacks in the SPDT process and in surface generation using this technology. Different factors may significantly influence turning conditions, affect surface generation, and limit the outcome of the process. This paper attempts to provide a review of ultra-precision SPDT: technology and characteristics, manufacturing process, applications, machinable materials, and surface generation. Subsequently, factors influencing surface generation are introduced and comprehensively discussed. Studying these influencing factors could enable setting optimized machining factors and providing the best possible machining conditions for generating high-quality optical surfaces. Furthermore, the limitations and drawbacks of the standard-structure SPDT process are discussed. Although a number of published studies have attempted to provide a good perspective of the SPDT process by looking into the effect of influencing factors on surface generation and existing limitations, more investigation needs to be undertaken to discover all destructive effects, origins, and influences in order to further extend the machinability of materials, reduce side effects, and improve the outcome of SPDT. [ABSTRACT FROM AUTHOR]
- Published
- 2020
25. Quantum evolutionary algorithm with rotational gate and Hϵ-gate updating in real and integer domains for optimization.
- Author
- Kamalinejad, M.; Arzani, H.; Kaveh, A.
- Subjects
- EVOLUTIONARY algorithms; PHYSICAL laws; QUANTUM gates; COMPUTER science; INTEGERS; BINARY codes
- Abstract
Attempts to find new advanced methods for optimization and decision-making problems in controlling real tasks have become today's trend in mathematics, mechanics, computer science and many other disciplines. Meta-heuristic approaches provide powerful means of building algorithms based on nature or on physical laws and phenomena, such as Newtonian laws in collision (CBO) or the laws of space and parallel universes (MVO), among many others. The recently developed approach by Han and Kim (IEEE Trans. Evolut. Comput. 6(6):580–593, 2002) is based on the laws of quantum mechanics. A quantum evolutionary algorithm (QEA) uses Q-bit individuals in binary code analogous to genes in the conventional genetic algorithm. In different stages of the iterations, the Q-bit solutions are updated using the prominent quantum gates, the rotational gate and the Hϵ-gate. This paper is devoted to the assessment of the QEA using some well-known optimization problems. QEA has excellent features such as practical exploration and exploitation of the domain space, owing to its binary approach for generating solutions and to rotational-gate updating based on the probability of 0 or 1. Here, the Hϵ-gate and a parallel phase are used to change the path of finding the optimal solution and escape from local optima. [ABSTRACT FROM AUTHOR]
- Published
- 2019
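The Q-bit machinery mentioned in entry 25 can be illustrated compactly: each bit is kept as an amplitude pair (α, β) with α² + β² = 1, solutions are sampled with probability β² of observing a 1, and a rotation gate nudges the amplitudes toward the best solution found so far. The sketch below optimizes a trivial one-max objective with an arbitrary rotation step; it illustrates the generic QEA update, not the paper's Hϵ-gate variant:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop, steps = 16, 20, 50
delta = 0.05 * np.pi                       # rotation-angle step (illustrative choice)

# Represent each Q-bit by an angle t: alpha = cos(t), beta = sin(t), so alpha^2 + beta^2 = 1.
t = np.full(n_bits, np.pi / 4)             # start in a uniform superposition

best_x, best_f = None, -1
for _ in range(steps):
    beta2 = np.sin(t) ** 2
    # "Observe" the register pop times: bit i collapses to 1 with probability beta^2.
    samples = (rng.random((pop, n_bits)) < beta2).astype(int)
    fits = samples.sum(axis=1)             # toy one-max objective: count the ones
    if fits.max() > best_f:
        best_f, best_x = int(fits.max()), samples[fits.argmax()].copy()
    # Rotation gate: nudge each Q-bit toward the corresponding bit of the best solution,
    # clamping so the amplitudes never collapse completely (keeps some exploration).
    t += np.where(best_x == 1, delta, -delta)
    t = np.clip(t, 0.05, np.pi / 2 - 0.05)

print(best_f, best_x)
```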
26. Decentralized iterative approaches for community clustering in the networks.
- Author
- Bhih, Amhmed; Johnson, Princy; Randles, Martin
- Subjects
- COMPUTER science; COMMUNITY organization; COMPUTER architecture; COMMUNITIES; BIOMATHEMATICS; GRAPHICS processing units
- Abstract
In this era of big data, as data sizes scale up, the need for computing power is increasing exponentially. However, most of the community detection algorithms in the literature are classified as global algorithms, which require access to the entire information of the network. These algorithms, designed to work on a single machine, cannot be directly parallelized. Hence, it is impossible for such algorithms running on stand-alone machines to find communities in large-scale networks, and the required processing power far exceeds the processing capabilities of single machines. In this paper, a set of novel Decentralized Iterative Community Clustering Approaches to extract an efficient community structure for large networks are proposed and evaluated using the LFR benchmark model. The approaches are able to identify community clusters from the entire network without global knowledge of the network topology and will work with a range of computer architecture platforms (e.g., clusters of PCs, multi-core distributed memory servers, GPUs). Detecting and characterizing such community structures is one of the fundamental topics in the analysis of network systems, and it has many important applications in different branches of science including computer science, physics, mathematics and biology, ranging from visualization, exploration and data mining to building prediction models. [ABSTRACT FROM AUTHOR]
- Published
- 2019
27. Computer science in support of high-performance applications: Papers from the 2004 lacsi symposium.
- Author
- Oldehoeft, Rod
- Subjects
- CONFERENCES & conventions; SPECIAL events; COMPUTER science; TECHNICAL institutes
- Abstract
Information about the 5th symposium of the Los Alamos Computer Science Institute, held on October 12-14, 2004 in Santa Fe, New Mexico. The event's format comprised specialized workshops on subjects relevant to high-performance computing, keynote addresses, and sessions of contributed, refereed research papers. Panel discussions of contemporary interest were also presented.
- Published
- 2006
28. New approach for ensuring cloud computing security: using data hiding methods.
- Author
- Yesilyurt, Murat; Yalman, Yildiray
- Subjects
- CLOUD computing; INFORMATION theory; COMPUTER architecture; DATA mining; COMPUTER science
- Abstract
Cloud computing is one of the largest developments occurred in the field of information technology during recent years. This model has become more desirable for all institutions, organizations and also for personal use thanks to the storage of 'valuable information' at low costs, access to such information from anywhere in the world as well as its ease of use and low cost. In this paper, the services constituting the cloud architecture and deployment models are examined, and the main factors in the provision of security requirements of all those models as well as points to be taken into consideration are described in detail. In addition, the methods and tools considering how security, confidentiality and integrity of the information or data that forms the basis of modern technology are implemented in cloud computing architecture are examined. Finally, it is proposed in the paper that the use of data hiding methods in terms of access security in cloud computing architecture and the security of the stored data would be very effective in securing information. [ABSTRACT FROM AUTHOR]
- Published
- 2016
29. An improved method in deep packet inspection based on regular expression.
- Author
- Sun, Ruxia; Shi, Lingfeng; Yin, Chunyong; Wang, Jin
- Subjects
- DEEP packet inspection (Computer security); INTERNET security; LINUX operating systems; COMPUTER network monitoring; DATABASE management; COMPUTER science
- Abstract
The continuous development of Internet technology means that network intrusion detection technology is receiving more and more attention. Deep packet inspection, an effective network intrusion detection technology, can play a huge role in network security. Deep packet inspection is a kind of network intrusion detection technology that examines the application layer in detail, rather than only inspecting the port information of a packet. Regular expression matching is a key technology in deep packet inspection because of the rich semantics and flexibility of regular expressions. However, a huge number of transfer edges exist when the matching algorithm is applied, which leads to an increase in the memory usage of the algorithm. In this paper, we propose an improved method of concatenating transfer edges. By using character intervals, several consecutive characters are represented by a single interval, which can reduce the number of transfer edges effectively. In addition, a comparison experiment is given to compare the methods before and after the improvement. It shows that the number of transfer edges can be reduced to 10% of that before the improvement and that the efficiency of deep packet inspection is improved. [ABSTRACT FROM AUTHOR]
- Published
- 2019
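Entry 29's space optimization, replacing many single-character transfer edges with character intervals, can be shown on a toy transition table. The automaton below is invented; the point is only how runs of consecutive characters collapse into (lo, hi) ranges:

```python
# Toy DFA fragment: state 0 moves to state 1 on any of 'a'..'z' plus '_',
# stored naively as one transfer edge per character.
edges = {(0, ch): 1 for ch in "abcdefghijklmnopqrstuvwxyz_"}

def compress(edges):
    """Merge per-character edges into character-interval edges (src, lo, hi, dst)."""
    intervals = []
    by_pair = sorted((src, ord(ch), dst) for (src, ch), dst in edges.items())
    for src, code, dst in by_pair:
        last = intervals[-1] if intervals else None
        if last and last[0] == src and last[3] == dst and code == ord(last[2]) + 1:
            intervals[-1] = (src, last[1], chr(code), dst)   # extend the current run
        else:
            intervals.append((src, chr(code), chr(code), dst))
    return intervals

compressed = compress(edges)
print(len(edges), "edges ->", len(compressed), "interval edges")
print(compressed)   # [(0, '_', '_', 1), (0, 'a', 'z', 1)]
```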
30. Building, composing and experimenting complex spatial models with the GAMA platform.
- Author
- Taillandier, Patrick; Gaudou, Benoit; Grignard, Arnaud; Huynh, Quang-Nghi; Marilleau, Nicolas; Caillou, Philippe; Philippon, Damien; Drogoul, Alexis
- Subjects
- COMPUTER science; SCIENTISTS
- Abstract
The agent-based modeling approach is now used in many domains such as geography, ecology, or economy, and more generally to study (spatially explicit) socio-environmental systems where the heterogeneity of the actors and the numerous feedback loops between them requires a modular and incremental approach to modeling. One major reason of this success, besides this conceptual facility, can be found in the support provided by the development of increasingly powerful software platforms, which now allow modelers without a strong background in computer science to easily and quickly develop their own models. Another trend observed in the latest years is the development of much more descriptive and detailed models able not only to better represent complex systems, but also answer more intricate questions. In that respect, if all agent-based modeling platforms support the design of small to mid-size models, i.e. models with little heterogeneity between agents, simple representation of the environment, simple agent decision-making processes, etc., very few are adapted to the design of large-scale models. GAMA is one of the latter. It has been designed with the aim of supporting the writing (and composing) of fairly complex models, with a strong support of the spatial dimension, while guaranteeing non-computer scientists an easy access to high-level, otherwise complex, operations. This paper presents GAMA 1.8, the latest revision to date of the platform, with a focus on its modeling language and its capabilities to manage the spatial dimension of models. The capabilities of GAMA are illustrated by the presentation of applications that take advantage of its new features. [ABSTRACT FROM AUTHOR]
- Published
- 2019
31. TSO-to-TSO linearizability is undecidable.
- Author
- Wang, Chao; Lv, Yi; Wu, Peng
- Subjects
- COMPUTER science; CRYSTAL structure; LOSSY data compression; DATA compression; DATABASES
- Abstract
TSO-to-TSO linearizability is a variant of linearizability for concurrent libraries on the total store order (TSO) memory model. It is proved in this paper that TSO-to-TSO linearizability for a bounded number of processes is undecidable. We first show that the trace inclusion problem of a classic-lossy single-channel system, which is known undecidable, can be reduced to the history inclusion problem of specific libraries on the TSO memory model. Based on the equivalence between history inclusion and extended history inclusion for these libraries, we then prove that the extended history inclusion problem of libraries is undecidable on the TSO memory model. By means of extended history inclusion as an equivalent characterization of TSO-to-TSO linearizability, we finally prove that TSO-to-TSO linearizability is undecidable for a bounded number of processes. Additionally, we prove that all variants of history inclusion problems are undecidable on TSO for a bounded number of processes. [ABSTRACT FROM AUTHOR]
- Published
- 2018
32. Existence of periodic solution for fourth-order generalized neutral p-Laplacian differential equation with attractive and repulsive singularities.
- Author
- Yun Xin; Hongmin Liu
- Subjects
- DIFFERENTIAL equations; TOPOLOGICAL degree; LAPLACIAN operator; COMPUTER science; INTEGRALS
- Abstract
In this paper, we investigate the existence of a positive periodic solution for the following fourth-order p-Laplacian generalized neutral differential equation with attractive and repulsive singularities: (φ_p((u(t) − c(t)u(t − δ(t)))″))″ + f(u(t))u′(t) + g(t, u(t)) = k(t), where g has a singularity at the origin. The novelty of the present article is that we show that attractive and repulsive singularities enable the achievement of a new existence criterion of a positive periodic solution through an application of coincidence degree theory. Recent results in the literature are generalized and significantly improved. [ABSTRACT FROM AUTHOR]
- Published
- 2018
33. The Porphyrian Tree and Multiple Inheritance: A Rejoinder to Tylman on Computer Science and Philosophy.
- Author
- Demey, Lorenz
- Subjects
- OBJECT-oriented programming; PROGRAMMING languages; PHILOSOPHY of science; METAPHYSICS; IDEA (Philosophy)
- Abstract
Tylman (Found Sci, 2017) has recently pointed out some striking conceptual and methodological analogies between philosophy and computer science. In this paper, I focus on one of Tylman's most convincing cases, viz. the similarity between Plato's theory of Ideas and the object-oriented programming (OOP) paradigm, and analyze it in some more detail. In particular, I argue that the (Neo)platonic doctrine of the Porphyrian tree corresponds to the fact that most object-oriented programming languages do not support multiple inheritance. This analysis further reinforces Tylman's point regarding the conceptual continuity between classical metaphysical theorizing and contemporary computer science. [ABSTRACT FROM AUTHOR]
- Published
- 2018
34. Sparsification and subexponential approximation.
- Author
- Bonnet, Édouard; Paschos, Vangelis Th.
- Subjects
- GEOMETRIC vertices; APPROXIMATION theory; FEEDBACK control systems; MATHEMATICAL optimization; COMPUTER science
- Abstract
Instance sparsification is well-known in the world of exact computation, since it is very closely linked to the Exponential Time Hypothesis. In this paper, we extend the concept of sparsification in order to capture subexponential time approximation. We develop a new tool for inapproximability, called approximation preserving sparsification, and use it in order to get strong inapproximability results in subexponential time for several fundamental optimization problems such as min dominating set, min feedback vertex set, min set cover, min feedback arc set, and others. [ABSTRACT FROM AUTHOR]
- Published
- 2018
35. Snapshot and continuous points-based trajectory search.
- Author
- Qi, Shuyao; Sacharidis, Dimitris; Bouros, Panagiotis; Mamoulis, Nikos
- Subjects
- GLOBAL Positioning System; INFORMATION retrieval; DATA mining; SPATIAL analysis (Statistics); COMPUTER science
- Abstract
Trajectory data capture the traveling history of moving objects such as people or vehicles. With the proliferation of GPS and tracking technologies, huge volumes of trajectories are rapidly generated and collected. Consequently, applications such as route recommendation and traveling behavior mining call for efficient trajectory retrieval. In this paper, we first focus on distance-to-points trajectory search: given a collection of trajectories and a set of query points, the goal is to retrieve the top-k trajectories that pass as close as possible to all query points. We advance the state-of-the-art by combining existing approaches into a hybrid nearest neighbor-based method while also proposing an alternative, more efficient spatial range-based approach. Second, we investigate the continuous counterpart of distance-to-points trajectory search, where the query is long-standing and the set of returned trajectories needs to be maintained whenever updates occur to the query and/or the data. Third, we propose and study two practical variants of distance-to-points trajectory search, which take into account the temporal characteristics of the searched trajectories. Through an extensive experimental analysis with real trajectory data, we show that our range-based approach outperforms previous methods by at least one order of magnitude for the snapshot version of the queries and by up to several times for the continuous version. [ABSTRACT FROM AUTHOR]
- Published
- 2017
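A brute-force version of the snapshot query in entry 35 helps pin down its semantics: for each trajectory, sum the distance from every query point to its nearest trajectory point, and return the k smallest sums. The coordinates below are invented, and the paper's actual contribution, doing this efficiently with spatial indexes, is deliberately omitted:

```python
import heapq
import math

def point_to_traj(p, traj):
    # Distance from query point p to the closest sampled point of the trajectory.
    return min(math.dist(p, q) for q in traj)

def topk_trajectories(trajectories, query_points, k=2):
    # Score = sum over query points of the nearest-point distance (smaller is better).
    scored = ((sum(point_to_traj(p, t) for p in query_points), name)
              for name, t in trajectories.items())
    return heapq.nsmallest(k, scored)

trajectories = {
    "t1": [(0, 0), (1, 0), (2, 0)],
    "t2": [(0, 2), (1, 2), (2, 2)],
    "t3": [(0, 1), (1, 1), (2, 1)],
}
queries = [(0, 0.2), (2, 0.8)]
print(topk_trajectories(trajectories, queries, k=2))
```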
36. SDE: A Novel Selective, Discriminative and Equalizing Feature Representation for Visual Recognition.
- Author
- Xie, Guo-Sen; Zhang, Xu-Yao; Yan, Shuicheng; Liu, Cheng-Lin
- Subjects
- BAG-of-words model (Computer science); NEURAL circuitry; INFORMATION retrieval; IMAGE retrieval; COMPUTER science
- Abstract
Bag of Words (BoW) model and Convolutional Neural Network (CNN) are two milestones in visual recognition. Both BoW and CNN require a feature pooling operation for constructing the frameworks. Particularly, the max-pooling has been validated as an efficient and effective pooling method compared with other methods such as average pooling and stochastic pooling. In this paper, we first evaluate different pooling methods, and then propose a new feature pooling method termed as selective, discriminative and equalizing pooling (SDE). The SDE representation is a feature learning mechanism by jointly optimizing the pooled representations with the target of learning more selective, discriminative and equalizing features. We use bilevel optimization to solve the joint optimization problem. Experiments on seven benchmark datasets (including both single-label and multi-label ones) well validate the effectiveness of our framework. Particularly, we achieve the state-of-the-art fused results (mAP) of 93.21 and 93.97% on the PASCAL VOC2007 and VOC2012 datasets, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2017
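Entry 36 builds on the standard pooling operators; the difference between max- and average-pooling over a set of local descriptors is visible in a few lines. The feature matrix here is random placeholder data, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 local descriptors of dimension 8 for one image (placeholder data).
local_features = rng.random((200, 8))

max_pooled = local_features.max(axis=0)    # keeps the strongest activation per dimension
avg_pooled = local_features.mean(axis=0)   # averages activations per dimension

# Max-pooling is selective (driven by a single strong response), average-pooling
# smooths responses; the SDE representation in the paper instead learns the pooled
# representation jointly rather than fixing one of these operators.
print("max :", np.round(max_pooled, 2))
print("mean:", np.round(avg_pooled, 2))
```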
37. DeepProposals: Hunting Objects and Actions by Cascading Deep Convolutional Layers.
- Author
- Ghodrati, Amir; Diba, Ali; Pedersoli, Marco; Tuytelaars, Tinne; Gool, Luc
- Subjects
- NEURAL circuitry; SIGNAL detection; SIGNAL convolution; COMPUTER science; VIDEOS
- Abstract
In this paper, a new method for generating object and action proposals in images and videos is proposed. It builds on activations of different convolutional layers of a pretrained CNN, combining the localization accuracy of the early layers with the high informativeness (and hence recall) of the later layers. To this end, we build an inverse cascade that, going backward from the later to the earlier convolutional layers of the CNN, selects the most promising locations and refines them in a coarse-to-fine manner. The method is efficient, because (i) it re-uses the same features extracted for detection, (ii) it aggregates features using integral images, and (iii) it avoids a dense evaluation of the proposals thanks to the use of the inverse coarse-to-fine cascade. The method is also accurate. We show that DeepProposals outperform most of the previous object proposal and action proposal approaches and, when plugged into a CNN-based object detector, produce state-of-the-art detection performance. [ABSTRACT FROM AUTHOR]
- Published
- 2017
38. Joint Image Denoising and Disparity Estimation via Stereo Structure PCA and Noise-Tolerant Cost.
- Author
- Jiao, Jianbo; Yang, Qingxiong; He, Shengfeng; Gu, Shuhang; Zhang, Lei; Lau, Rynson
- Subjects
- SIGNAL denoising; MULTIPLE correspondence analysis (Statistics); SIGNAL processing; COMPUTER science; INFORMATION measurement
- Abstract
Stereo cameras are now commonly available on cars and mobile phones. However, the captured images may suffer from low image quality under noisy conditions, producing inaccurate disparity. In this paper, we aim at jointly restoring a clean image pair and estimating the corresponding disparity. To this end, we propose a new joint framework that iteratively optimizes these two different tasks in a multiscale fashion. First, structure information between the stereo pair is utilized to denoise the images using a non-local means strategy. Second, a new noise-tolerant cost function is proposed for noisy stereo matching. These two terms are integrated into a multiscale framework in which cross-scale information is leveraged to further improve both denoising and stereo matching. Extensive experiments on datasets captured from indoor, outdoor, and low-light conditions show that the proposed method outperforms state-of-the-art image denoising and disparity estimation methods: it surpasses multi-image denoising methods by about 2 dB on average and achieves a 50% error reduction over radiometric-change-robust stereo matching on the challenging KITTI dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
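The abstract above builds on a non-local means strategy. As background only, the following is a generic single-image non-local means sketch (not the paper's stereo-aware variant), showing the patch-similarity weighting the technique relies on; the pixel is assumed to lie far enough from the image border that all windows fit.

import numpy as np

def nlm_pixel(image, y, x, patch=1, search=3, h=10.0):
    """Denoise one pixel: average of nearby pixels weighted by patch similarity."""
    ref = image[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    weights, values = [], []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = image[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1].astype(float)
            w = np.exp(-((ref - cand) ** 2).sum() / (h * h))   # similar patches get large weights
            weights.append(w)
            values.append(float(image[cy, cx]))
    weights = np.array(weights)
    return float(np.dot(weights, values) / weights.sum())

img = np.arange(100, dtype=float).reshape(10, 10)
print(nlm_pixel(img, y=5, x=5))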
39. New families of special numbers and polynomials arising from applications of p-adic q-integrals.
- Author
-
Kim, Daeyeoul, Ozden Ayna, Hacer, Simsek, Yilmaz, and Yardimci, Ahmet
- Subjects
- *
POLYNOMIALS , *FROBENIUS manifolds , *COMPUTER science , *MATHEMATICAL models , *FACTORIZATION - Abstract
In this manuscript, generating functions are constructed for the new special families of polynomials and numbers using the p-adic q-integral technique. Partial derivative equations, functional equations and other properties of these generating functions are given. With the help of these equations, many interesting and useful identities, relations, and formulas are derived. We also give p-adic q-integral representations of these numbers and polynomials. The results we have obtained for these special numbers and polynomials are closely related to well-known families of polynomials and numbers including the Bernoulli numbers, the Apostol-type Bernoulli numbers and polynomials and the Frobenius-Euler numbers, the Stirling numbers, and the Daehee numbers. We give some remarks and observations on the results of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
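As a point of reference for the families named in the abstract above, the classical Bernoulli numbers are defined by a generating function of the same flavor. This standard identity is included only as background and is not a result of the paper:

\frac{t}{e^{t}-1}=\sum_{n=0}^{\infty}B_{n}\,\frac{t^{n}}{n!},\qquad |t|<2\pi,
\qquad B_{0}=1,\quad B_{1}=-\tfrac{1}{2},\quad B_{2}=\tfrac{1}{6}.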
40. VREM: An advanced virtual environment for micro assembly.
- Author
-
Cecil, J. and Jones, James
- Subjects
- *
MICROASSEMBLING , *VIRTUAL reality , *MICRON computers , *COMPUTATIONAL complexity , *COMPUTER science - Abstract
Micro assembly involves the assembly of micron-sized devices. Given the complexity of this domain, virtual environments play an important role, as they provide a basis for proposing and comparing assembly alternatives virtually prior to physical assembly. This paper proposes an integrated approach that includes the use of virtual reality-based assembly environments interfacing with physical micro assembly environments. Such an approach can be an intrinsic part of a collaborative manufacturing framework that seeks to support the rapid assembly of micro devices. In this paper, the design of VREM (Virtual Reality based Environment for Micro Assembly), which is based on this integrated approach involving the use of virtual and physical resources, is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
41. Experiments using Semantic Web technologies to connect IUGONET, ESPAS and GFZ ISDC data portals.
- Author
-
Ritschel, Bernd, Borchert, Friederike, Kneitschel, Gregor, Neher, Günther, Schildbach, Susanne, Iyemori, Toshihiko, Koyama, Yukinobu, Yatagai, Akiyo, Hori, Tomoaki, Hapgood, Mike, Belehaki, Anna, Galkin, Ivan, and King, Todd
- Subjects
- *
SEMANTIC computing , *COMPUTER science , *LINKED data (Semantic Web) ,FOREIGN relations of the European Union - Abstract
E-science on the Web plays an important role and offers the most advanced technology for the integration of data systems. It also makes data available for research into increasingly complex aspects of the Earth system and beyond. The great number of e-science projects funded by the European Union (EU), university-driven Japanese efforts in the field of data services, and institutionally anchored developments for the enhancement of sustainable data management in Germany are proof of the relevance and acceptance of e-science or cyberspace-based applications as a significant tool for successful scientific work. The collaboration activities related to near-earth space science data systems, and first results in the field of information science, between the EU-funded project ESPAS, the Japanese IUGONET project and the GFZ ISDC-based research and development activities are the focus of this paper. The main objective of the collaboration is the use of a Semantic Web approach for the mashup of the project-related and so far non-interoperable data systems. Both the development and use of mapped and/or merged geo and space science controlled vocabularies and the connection of entities in ontology-based domain data models are addressed. The developed controlled vocabularies for the description of geo and space science data and related context information, as well as the domain ontologies themselves with their domain and cross-domain relationships, will be published as Linked Open Data. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
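The vocabulary-mapping idea described in the abstract above can be illustrated with a minimal sketch, assuming the rdflib library is available; the term URIs below are invented placeholders, not the actual IUGONET, ESPAS or GFZ ISDC vocabularies.

from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

g = Graph()
term_a = URIRef("http://example.org/vocabA/ionosphere")   # placeholder term from vocabulary A
term_b = URIRef("http://example.org/vocabB/Ionosphere")   # placeholder term from vocabulary B
g.add((term_a, SKOS.exactMatch, term_b))                  # assert that the two terms denote the same concept
print(g.serialize(format="turtle"))                       # emit the mapping as Turtle / Linked Data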
42. Permanence and global attractivity of a nonautonomous modified Leslie-Gower predator-prey model with Holling-type II schemes and a prey refuge.
- Author
-
Xie, Xiangdong, Xue, Yalong, Chen, Jinhuang, and Li, Tingting
- Subjects
- *
PREDATION , *STABILITY theory , *DIFFERENCE equations , *MATHEMATICAL analysis , *COMPUTER science , *MATHEMATICAL models - Abstract
A nonautonomous modified Leslie-Gower predator-prey model with Holling-type II schemes and a prey refuge is studied in this paper. The persistence and stability properties of the system are investigated. Some findings about the influence of the prey refuge are obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
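For orientation only, a commonly studied autonomous form of the modified Leslie-Gower model with Holling-type II functional response and a constant-proportion prey refuge m in [0,1) is sketched below; the abstract does not state the equations, the paper's system is nonautonomous (time-varying coefficients), and the exact form used there may differ.

\frac{dx}{dt}=\Bigl(r_{1}-b_{1}x-\frac{a_{1}(1-m)y}{(1-m)x+k_{1}}\Bigr)x,\qquad
\frac{dy}{dt}=\Bigl(r_{2}-\frac{a_{2}y}{(1-m)x+k_{2}}\Bigr)y.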
43. e-Business systems integration: a systems perspective.
- Author
-
Wang, Song, Li, Ling, Wang, Kanliang, and Jones, James
- Subjects
- *
ELECTRONIC commerce , *SYSTEMS engineering , *SYSTEMS theory , *METHODOLOGY , *COMPUTER science , *META-analysis - Abstract
Systems science has emerged as both a meta-discipline and a meta-language that can be applied to discuss issues in e-business systems and the relevant enterprise architecture and enterprise integration. Much research on enterprise architecture and enterprise integration in e-business systems has had its theoretical findings and effective practices naturally influenced by systems theory and related methodologies. This paper strives to review the contribution of systems theory to enterprise architecture and integration. It also tries to summarize methods and tools applied at the enterprise systems level, and to investigate crucial scopes, concepts and their interrelationships in e-business systems integration activities. Finally, this paper presents new prospects in enterprise architecture and integration for e-business systems. All of these may be useful in dealing with the increasingly complex informatics issues of modern enterprises. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
44. Distinguishing and relating higher-order and first-order processes by expressiveness.
- Author
-
Xu, Xian
- Subjects
- *
DATA encryption , *BISIMULATION , *OPERATOR theory , *CALCULI , *COMPUTER science , *MATHEMATICAL analysis - Abstract
This is a paper on distinguishing and relating two important kinds of calculi through expressiveness, settling some critical but long unanswered questions. The delimitation of higher-order and first-order process calculi is a basic and pivotal topic in the study of process theory. In particular, expressiveness studies mutual encodability, which helps decide whether process-passing or name-passing is more fundamental, and the way they ought to be used in both theory and practice. In this paper, we contribute to such demarcation with three major results. Firstly, $\pi$ (the first-order pi-calculus) can faithfully express $\varPi$ (the basic higher-order pi-calculus). The calculus $\varPi$ has the elementary operators (input, output, composition and restriction). This is actually a corollary of a more general result, that $\pi$ can encode $\varPi^r$ ($\varPi$ enriched with the relabelling operator). Secondly, $\varPi$ cannot interpret $\pi$ reasonably. This is of more significance, since it separates $\varPi$ and $\pi$ by drawing a well-defined boundary. Thirdly, an encoding from $\pi$ to $\varPi^r$ is revisited and discussed, which not only implies how to make $\varPi$ more useful but also stresses the importance of name-passing in $\pi$. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
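To make the contrast in the abstract above concrete, the standard textbook communication rules of the two paradigms can be written as follows (this is generic notation, not taken from the paper): first-order name-passing substitutes a name into the receiver, whereas higher-order process-passing substitutes a whole process.

\text{first-order } (\pi):\quad \bar{a}\langle b\rangle.P \mid a(x).Q \;\longrightarrow\; P \mid Q\{b/x\}
\qquad
\text{higher-order } (\varPi):\quad \bar{a}\langle R\rangle.P \mid a(X).Q \;\longrightarrow\; P \mid Q\{R/X\}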
45. A Case Study on the Use of Blended Learning to Encourage Computer Science Students to Study.
- Author
-
Pérez-Marín, Diana and Pascual-Nieto, Ismael
- Subjects
- *
CASE studies , *COMPUTER science , *COLLEGE students , *BLENDED learning , *TEACHING methods - Abstract
Students tend to procrastinate. In particular, Computer Science students tend to reduce the number of hours devoted to studying concepts after class. In this paper, a case study on the use of Blended Learning to encourage Computer Science students to study is described. Furthermore, an experiment is reported in which the reactions of 131 Computer Science university students to the proposal are analyzed. The material for the preparation of an exam was produced in both electronic and paper formats. 64 students were asked to study using a free-text scoring system, and 67 students were asked to study with printed documentation in the same class. The students' reactions, the results of a pre-/post-test and the answers to a voluntary and anonymous satisfaction questionnaire were recorded. After that, students were given the option to keep studying with the scoring system or with the printed documentation. 99% of the students chose to study with the computer, and a higher frequency of study was recorded during the month preceding the exam. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
46. A dynamic and fast event matching algorithm for a content-based publish/subscribe information dissemination system in Sensor-Grid.
- Author
-
Hassan, Mohammad Mehedi, Biao Song, and Eui-Nam Huh
- Subjects
- *
ALGORITHMS , *COMPUTER algorithms , *HERMENEUTICS , *DATABASE design , *COMPUTER science - Abstract
In this paper, we address one of the most important issues in Sensor-Grid: developing a fast and flexible content-based publish/subscribe information dissemination (CBPSID) system for the automatic fusion, interpretation, sharing and delivery of huge volumes of sensor data to consumers, since the entire Sensor-Grid environment is highly dynamic. Existing work on CBPSID systems in Sensor-Grid mostly focuses on reducing the effort to define and maintain subscriptions and on handling dynamic changes in publisher and consumer data. However, the performance of a CBPSID system in Sensor-Grid is bounded by the expensive matching/evaluation cost of events. Existing event-matching algorithms are not very efficient, especially for interval range predicates or overlapping predicates in subscriptions, which are common in Sensor-Grid as well as other application areas. In this paper we therefore discuss the above challenge and propose a dynamic and fast event-matching algorithm called CGIM for the CBPSID system in Sensor-Grid. The algorithm supports range predicates and overlapping predicates well and provides both single and composite event matching. It uses two approaches, called SGIM and DGIM, to group the subscriptions by their predicates, and it dynamically identifies an appropriate number of groups by considering different statistical distributions of subscriptions at run time. We also present an experimental evaluation of the proposed algorithm in a Sensor-Grid based u-Healthcare scenario using synthetic workloads and compare its performance with existing algorithms. The experimental results show that our algorithm significantly reduces the evaluation cost (on average by 79% with SGIM and 88% with DGIM) compared with the others and guarantees scalability with respect to the number of subscriptions as well as the number of predicates and events. In addition, further experiments were conducted by applying the CGIM algorithm in other application areas, e.g. in a publish/subscribe system for online job sites, to show its broader utility and scalability. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
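The matching problem in the abstract above can be stated with a naive reference matcher: subscriptions are conjunctions of interval predicates over event attributes. The sketch below is not the CGIM/SGIM/DGIM grouping algorithm, only the brute-force semantics those algorithms accelerate; all names are illustrative (the u-Healthcare-style attributes are invented for the example).

def matches(event, subscription):
    """True if every (attribute, low, high) predicate covers the event's value."""
    return all(low <= event.get(attr, float("nan")) <= high
               for attr, low, high in subscription)

def match_event(event, subscriptions):
    """Brute-force scan over all subscriptions; cost grows linearly with their number."""
    return [sid for sid, sub in subscriptions.items() if matches(event, sub)]

subs = {
    "alert-1": [("heart_rate", 100, 180), ("temperature", 38.0, 42.0)],
    "alert-2": [("heart_rate", 40, 60)],
}
print(match_event({"heart_rate": 120, "temperature": 38.5}, subs))   # ['alert-1']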
47. Parallel loop generation and scheduling.
- Author
-
Lotfi, Shahriar and Parsa, Saeed
- Subjects
- *
LOOP tiling (Computer science) , *PARALLEL processing , *COMPUTER memory management , *COMPUTER science , *ELECTRONIC data processing - Abstract
Loop tiling is an efficient loop transformation, mainly applied to detect coarse-grained parallelism in loops. It is a difficult task to apply n-dimensional non-rectangular tiles to generate parallel loops. This paper offers an efficient scheme to apply non-rectangular n-dimensional tiles in non-rectangular iteration spaces, to generate parallel loops. In order to exploit wavefront parallelism efficiently, all the tiles with equal sum of coordinates are assumed to reside on the same wavefront. Also, in order to assign parallelepiped tiles on each wavefront to different processors, an improved block scheduling strategy is offered in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
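The wavefront rule stated in the abstract above — tiles whose coordinates sum to the same value run in the same parallel step — can be sketched as follows. Tile shapes and the paper's improved block scheduling strategy are not modeled here; the sketch only groups tile coordinates by coordinate sum.

from collections import defaultdict
from itertools import product

def wavefronts(tile_grid_shape):
    """Group tile coordinates of an n-dimensional tile grid by coordinate sum."""
    fronts = defaultdict(list)
    for coord in product(*(range(n) for n in tile_grid_shape)):
        fronts[sum(coord)].append(coord)
    return [fronts[s] for s in sorted(fronts)]

for step, tiles in enumerate(wavefronts((3, 3))):
    print(f"step {step}: {tiles}")   # tiles within one step carry no mutual dependences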
48. Computer science and decision theory.
- Author
-
Roberts, Fred
- Subjects
- *
COMPUTER science , *DECISION theory , *SOCIAL sciences , *CONSENSUS (Social sciences) , *DECISION making , *TECHNOLOGY - Abstract
This paper reviews applications in computer science that decision theorists have addressed for years, discusses the requirements posed by these applications that place great strain on decision theory/social science methods, and explores applications in the social and decision sciences of newer decision-theoretic methods developed with computer science applications in mind. The paper deals with the relation between computer science and decision-theoretic methods of consensus, with the relation between computer science and game theory and decisions, and with “algorithmic decision theory.” [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
49. Relational structures model of concurrency.
- Author
-
Janicki, Ryszard
- Subjects
- *
COMPUTER multitasking , *PARTIALLY ordered spaces , *MATHEMATICAL models , *COMMUTATIVE algebra , *ALGEBRA , *COMPUTER science - Abstract
The paper deals with the foundations of concurrency theory. We show how structurally complex concurrent behaviours can be modelled by relational structures $(X, \diamondsuit, \sqsubset)$, where X is a set (of event occurrences), and $\diamondsuit$ (interpreted as commutativity) and $\sqsubset$ (interpreted as weak causality) are binary relations on X. The paper is a continuation of the approach initiated in Gaifman and Pratt (Proceedings of LICS’87, pp 72–85, 1987), Lamport (J ACM 33:313–326, 1986), Abraham et al. (Semantics for concurrency, workshops in computing. Springer, Heidelberg, pp 311–323, 1990) and Janicki and Koutny (Lect Notes Comput Sci 506:59–74, 1991), substantially developed in Janicki and Koutny (Theoretical Computer Science 112:5–52, 1993) and Janicki and Koutny (Acta Informatica 34:367–388, 1997), and recently generalized in Guo and Janicki (Lect Notes Comput Sci 2422:178–191, 2002) and Janicki (Lect Notes Comput Sci 3407:84–98, 2005). For the first time the full model for the most general case is given. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
50. A grid-based node split algorithm for managing current location data of moving objects.
- Author
-
Hong, Dong-Suk, Kang, Hong-Koo, Kim, Dong-Oh, Yun, Jae-Kwan, and Han, Ki-Joon
- Subjects
- *
DISTRIBUTED computing , *GRIDS (Cartography) , *ALGORITHMS , *INFORMATION resources management , *COMPUTER architecture , *COMPUTER science , *COMPUTER engineering , *GEOGRAPHICAL location codes , *GEOGRAPHICAL positions - Abstract
There is rapidly increasing interest in Location Based Service (LBS), which utilizes the location data of moving objects. To efficiently manage the huge amounts of location data in LBS, the GALIS (Gracefully Aging Location Information System) architecture, a cluster-based distributed computing architecture, has been proposed. GALIS uses the non-uniform 2-level grid algorithm to perform load balancing and indexing for nodes. However, the non-uniform 2-level grid algorithm creates unnecessary nodes when moving objects are crowded in a certain region. Therefore, a new node split algorithm, which is more efficient for various distributions of moving objects, is proposed in this paper. Because the proposed algorithm considers the spatial distribution of the current locations of moving objects, it can perform efficient load balancing without creating unnecessary nodes even when moving objects are congested in a certain region. In addition, various data distribution configurations for moving objects were tested by implementing node split simulators, and it has been verified that the proposed algorithm can split nodes more efficiently than the existing algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
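The split-on-demand idea in the abstract above can be illustrated with a simplified sketch: location records fall into grid cells, and a cell is split (here, into four quadrants) only when it holds more objects than a node can handle, so sparse regions never spawn extra nodes. This is an illustration of the general principle under an assumed capacity, not the authors' GALIS node split algorithm.

from collections import defaultdict

CAPACITY = 4   # assumed per-node capacity, chosen for the example

def assign(points, x0, y0, size):
    """Recursively split an overloaded square cell; return (cell, members) leaves."""
    if len(points) <= CAPACITY or size < 1e-6:
        return [((x0, y0, size), points)]
    half, leaves = size / 2.0, []
    quads = defaultdict(list)
    for x, y in points:
        quads[(x >= x0 + half, y >= y0 + half)].append((x, y))   # route to a quadrant
    for (right, top), pts in quads.items():
        leaves += assign(pts, x0 + half * right, y0 + half * top, half)
    return leaves

pts = [(0.1, 0.1), (0.2, 0.2), (0.15, 0.12), (0.3, 0.1), (0.25, 0.2), (0.9, 0.9)]
for cell, members in assign(pts, 0.0, 0.0, 1.0):
    print(cell, len(members))   # crowded region is split; the sparse one is not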