141 results
Search Results
2. Editorial of the special issue on semantic technologies for data and algorithmic governance.
- Author
-
Kirrane, Sabrina, Seneviratne, Oshani, and Dumontier, Michel
- Subjects
DIGITAL technology ,SEMANTICS ,NETWORK governance ,GENERAL semantics ,DATA privacy ,RDF (Document markup language) - Abstract
This document is an editorial for a special issue on semantic technologies for data and algorithmic governance. It highlights the importance of technology in enabling effective governance structures and processes. The editorial poses several questions regarding privacy, transparency, bias, and trust in data and algorithmic governance systems. The special issue covers a wide range of topics, including data management, fake news identification, fairness, accountability, privacy regulations, and user-friendly interface design. The editorial also provides an overview of the papers included in the special issue, which focus on consent mechanisms, ontologies, policy languages, and privacy-preserving technologies. The papers were influenced by the General Data Protection Regulation (GDPR) and have potential applications in the proposed EU Data Governance Act and EU Artificial Intelligence (AI) Act. However, the editorial notes that more research is needed in the field of data and algorithm governance. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
3. CANARD: An approach for generating expressive correspondences based on competency questions for alignment.
- Author
-
Thiéblin, Elodie, Sousa, Guilherme, Haemmerlé, Ollivier, and Trojahn, Cássia
- Subjects
ONTOLOGY - Abstract
Ontology matching aims at making ontologies interoperable. While the field has matured in recent years, most approaches are still limited to the generation of simple correspondences. More expressiveness is, however, required to better address the different kinds of ontology heterogeneities. This paper presents CANARD (Complex Alignment Need and A-box based Relation Discovery), an approach for generating expressive correspondences that relies on the notion of competency questions for alignment (CQA). A CQA expresses the user knowledge needs in terms of alignment and aims at reducing the alignment space. The approach takes as input a set of CQAs as SPARQL queries over the source ontology. The generation of correspondences is performed by matching the subgraph from the source CQA to the similar surroundings of the instances from the target ontology. Evaluation is carried out on both synthetic and real-world datasets. The impact of several approach parameters is discussed. Experiments have shown that CANARD performs, overall, better on CQA coverage than precision, and that using existing sameAs links between the instances of the source and target ontologies gives better results than exact matches of their labels. The use of CQAs also improved both CQA coverage and precision with respect to using automatically generated queries. The reassessment of counter-examples significantly increased precision, to the detriment of runtime. Finally, experiments on large datasets showed that CANARD is one of the few systems that can perform on large knowledge bases, but it depends on regularly populated knowledge bases and on the quality of instance links. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
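As an aside on the CANARD entry above: its core move, following the instances that answer a source CQA through instance links into the target ontology and harvesting the properties shared by their surroundings as candidate correspondences, can be sketched in a few lines. This is a toy illustration under assumed inputs (the instance names, links, and triples are invented), not the CANARD implementation.

```python
# Toy sketch of CQA-driven correspondence discovery: instances answering
# a CQA over the source are followed through sameAs-style links into the
# target graph, and the properties common to all of their surroundings
# become candidate correspondences.

def candidate_properties(cqa_answers, same_as, target_triples):
    """cqa_answers: source instances answering the CQA (e.g. a SPARQL result).
    same_as: dict mapping source instances to linked target instances.
    target_triples: list of (subject, predicate, object) in the target KB.
    Returns predicates shared by the surroundings of all linked instances."""
    shared = None
    for src in cqa_answers:
        tgt = same_as.get(src)
        if tgt is None:
            continue  # unlinked instances are skipped
        props = {p for (s, p, o) in target_triples if s == tgt or o == tgt}
        shared = props if shared is None else shared & props
    return shared or set()

# Hypothetical mini example: a CQA asking "which papers have an author?"
answers = ["src:paper1", "src:paper2"]
links = {"src:paper1": "tgt:doc1", "src:paper2": "tgt:doc2"}
triples = [
    ("tgt:doc1", "tgt:writtenBy", "tgt:alice"),
    ("tgt:doc2", "tgt:writtenBy", "tgt:bob"),
    ("tgt:doc2", "tgt:year", "2020"),
]
print(candidate_properties(answers, links, triples))  # {'tgt:writtenBy'}
```

The intersection step is why the quality of the instance links matters so much in the abstract's findings: one bad link empties the shared set.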
4. The RDF2vec family of knowledge graph embedding methods.
- Author
-
Portisch, Jan and Paulheim, Heiko
- Subjects
KNOWLEDGE graphs ,DESCRIPTION logics ,LANGUAGE models ,MACHINE learning ,RANDOM walks - Abstract
Knowledge graph embeddings represent a group of machine learning techniques which project entities and relations of a knowledge graph to continuous vector spaces. RDF2vec is a scalable embedding approach rooted in the combination of random walks with a language model. It has been successfully used in various applications. Recently, multiple variants of the RDF2vec approach have been proposed, introducing variations both on the walk generation and on the language modeling side. The combination of those different approaches has led to a growing family of RDF2vec variants. In this paper, we evaluate a total of twelve RDF2vec variants on a comprehensive set of benchmarks, and compare them to seven existing knowledge graph embedding methods from the family of link prediction approaches. Besides the established GEval benchmark, which introduces various downstream machine learning tasks on the DBpedia knowledge graph, we also use the new DLCC (Description Logic Class Constructors) benchmark consisting of two gold standards, one based on DBpedia and one based on synthetically generated graphs. The latter allows for analyzing which ontological patterns in a knowledge graph can actually be learned by different embedding methods. With this evaluation, we observe that certain tailored RDF2vec variants can lead to improved performance on different downstream tasks, given the nature of the underlying problem, and that they, in particular, behave differently in modeling similarity and relatedness. The findings can be used to provide guidance in selecting a particular RDF2vec method for a given task. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
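The RDF2vec pipeline summarized above has two stages: random walk extraction and language-model training. The first stage can be sketched as follows; the toy graph and DBpedia-style names are illustrative, and a real setup would feed the resulting walks to a word2vec implementation (e.g. gensim's) rather than just printing them.

```python
import random

# Minimal sketch of RDF2vec's walk-generation stage: random walks over a
# toy knowledge graph produce "sentences" of alternating entities and
# relations, which a word2vec-style language model would then embed.

def random_walks(graph, start, depth, n_walks, seed=0):
    """graph: dict mapping an entity to a list of (relation, neighbor) edges."""
    rng = random.Random(seed)
    walks = []
    for _ in range(n_walks):
        walk, node = [start], start
        for _ in range(depth):
            edges = graph.get(node)
            if not edges:
                break  # dead end: stop this walk early
            rel, node = rng.choice(edges)
            walk += [rel, node]
        walks.append(walk)
    return walks

toy = {
    "dbr:Berlin": [("dbo:country", "dbr:Germany")],
    "dbr:Germany": [("dbo:capital", "dbr:Berlin"), ("dbo:currency", "dbr:Euro")],
}
for w in random_walks(toy, "dbr:Berlin", depth=2, n_walks=2):
    print(" -> ".join(w))
```

The variants evaluated in the paper differ precisely in how such walks are generated (e.g. biased or order-aware walks) and in the language model applied to them.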
5. A tribute to Trevor Bench-Capon (1953–2024).
- Published
- 2024
- Full Text
- View/download PDF
6. Preface.
- Author
-
Bernardinello, Luca, Kleijn, Jetty, and Petrucci, Laure
- Subjects
PETRI nets ,OBJECT manipulation - Published
- 2024
- Full Text
- View/download PDF
7. Special Issue on Semantic Web for Industrial Engineering: Research and Applications.
- Author
-
Aameri, Bahar, Poveda-Villalón, María, Sanfilippo, Emilio M., and Terkaj, Walter
- Subjects
SEMANTIC Web ,INDUSTRIAL engineering ,ONTOLOGIES (Information retrieval) ,INDUSTRIAL research ,NATURAL language processing - Abstract
This document is a special issue of the journal "Semantic Web" that focuses on the application of Semantic Web technologies in the field of Industrial Engineering. It discusses the motivations behind using Semantic Web in industrial domains, such as data management and interoperability. The document also addresses the challenges faced in implementing Semantic Web approaches in industrial settings, including knowledge representation, decision support systems, and human-machine interaction. It mentions various initiatives and projects aimed at promoting the use of Semantic Web in industrial engineering. The special issue covers topics such as Digital Twin applications, ontology development, and data analysis workflows. The document concludes by summarizing the accepted papers for the special issue, which cover topics like data modeling, knowledge graphs, and data quality governance. The authors emphasize the importance of ontologies in managing large datasets and acknowledge the challenges in ontology engineering. They also discuss the potential of ontologies in discerning semantic convergences and divergences in data, although complete interoperability is not guaranteed. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
8. ConSolid: A federated ecosystem for heterogeneous multi-stakeholder projects.
- Author
-
Werbrouck, Jeroen, Pauwels, Pieter, Beetz, Jakob, Verborgh, Ruben, and Mannens, Erik
- Subjects
ARCHITECTURAL design ,CONSTRUCTION industry ,ECOSYSTEMS ,ECOSYSTEM services - Abstract
In many industries, multiple parties collaborate on a larger project. At the same time, each of those stakeholders participates in multiple independent projects simultaneously. A double patchwork can thus be identified, with a many-to-many relationship between actors and collaborative projects. One key example is the construction industry, where every project is unique, involving specialists for many subdomains, ranging from architectural design through technical installations to geospatial information, governmental regulation and sometimes even historical research. A digital representation of this process and its outcomes requires semantic interoperability between these subdomains, which, however, often work with heterogeneous and unstructured data. In this paper, we propose to address this double patchwork via a decentralized ecosystem for multi-stakeholder, multi-industry collaborations dealing with heterogeneous information snippets. At its core, this ecosystem, called ConSolid, builds upon the Solid specifications for Web decentralization, but extends these both at the (meta)data-pattern level and at the microservice level. To increase the robustness of data allocation and filtering, we identify the need to go beyond Solid's current LDP-inspired interfaces to a Solid Pod and introduce the concept of metadata-generated 'virtual views', to be generated using an access-controlled SPARQL interface to a Pod. A recursive, scalable way to discover multi-vault aggregations is proposed, along with data patterns for connecting and aligning heterogeneous (RDF and non-RDF) resources across vaults in a mediatype-agnostic fashion. We demonstrate the use and benefits of the ecosystem using minimal running examples, concluding with the setup of an example use case from the Architecture, Engineering, Construction and Operations (AECO) industry. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. The materials design ontology.
- Author
-
Lambrix, Patrick, Armiento, Rickard, Li, Huanyu, Hartig, Olaf, Abd Nikooie Pour, Mina, and Li, Ying
- Subjects
ONTOLOGIES (Information retrieval) ,ONTOLOGY ,SEMANTIC Web ,DATABASES - Abstract
In the materials design domain, much of the data from materials calculations is stored in different heterogeneous databases with different data and access models. Therefore, accessing and integrating data from different sources is challenging. As ontology-based access and integration alleviates these issues, in this paper we address data access and interoperability for computational materials databases by developing the Materials Design Ontology. This ontology is inspired by and guided by the OPTIMADE effort that aims to make materials databases interoperable and includes many of the data providers in computational materials science. In this paper, first, we describe the development and the content of the Materials Design Ontology. Then, we use a topic model-based approach to propose additional candidate concepts for the ontology. Finally, we show the use of the Materials Design Ontology by a proof-of-concept implementation of a data access and integration system for materials databases based on the ontology.
1 This paper is an extension of (In The Semantic Web – ISWC 2020 – 19th International Semantic Web Conference, Proceedings, Part II (2020) 212–227 Springer) with results from (In ESWC Workshop on Domain Ontologies for Research Data Management in Industry Commons of Materials and Manufacturing (2021) 1–11) and currently unpublished results regarding an application using the ontology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. An ontology for maintenance activities and its application to data quality.
- Author
-
Woods, Caitlin, Selway, Matt, Bikaun, Tyler, Stumptner, Markus, and Hodkiewicz, Melinda
- Subjects
DATA quality ,NATURAL language processing ,ONTOLOGIES (Information retrieval) ,WORKFLOW management systems ,ONTOLOGY ,INFRASTRUCTURE (Economics) ,DRUG infusion pumps ,CENTRIFUGAL pumps ,WORKFLOW management - Abstract
Maintenance of assets is a multi-million-dollar cost each year for asset-intensive organisations in the defence, manufacturing, resource and infrastructure sectors. These costs are tracked through maintenance work order (MWO) records. MWO records contain structured data for dates, costs, and asset identification, and unstructured text describing the work required, for example 'replace leaking pump'. Our focus in this paper is on data quality for maintenance activity terms in MWO records (e.g., replace, repair, adjust and inspect). We present two contributions in this paper. First, we propose a reference ontology for maintenance activity terms. We use natural language processing to identify seven core maintenance activity terms and their synonyms from 800,000 MWOs. We provide elucidations for these seven terms. Second, we demonstrate use of the reference ontology in an application-level ontology using an industrial use case. The end-to-end NLP-ontology pipeline identifies data quality issues with 55% of the MWO records for a centrifugal pump over 8 years. For the 33% of records where a verb was not provided in the unstructured text, the ontology can infer a relevant activity class. The selection of the maintenance activity terms is informed by the ISO 14224 and ISO 15926-4 standards and conforms to ISO/IEC 21838-2 Basic Formal Ontology (BFO). The reference and application ontologies presented here provide an example of how industrial organisations can augment their maintenance work management processes with ontological workflows to improve data quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
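The NLP step described in the abstract above, normalising the free-text verb of an MWO record to a core maintenance activity term, can be caricatured with a synonym lexicon. The lexicon below is invented for illustration; it is not the set of seven elucidated terms mined from the 800,000 MWOs.

```python
# Toy activity-term normaliser: map the verb in unstructured MWO text to
# a canonical maintenance activity term via a (hypothetical) synonym
# lexicon. Records with no recognised verb return None; in the paper's
# pipeline the ontology can still infer an activity class for those.

SYNONYMS = {
    "replace": {"replace", "changeout", "renew"},
    "repair": {"repair", "fix", "mend"},
    "inspect": {"inspect", "check", "investigate"},
}

def activity_class(mwo_text):
    words = mwo_text.lower().split()
    for term, syns in SYNONYMS.items():
        if any(w in syns for w in words):
            return term
    return None

print(activity_class("replace leaking pump"))  # replace
print(activity_class("fix seal on pump"))      # repair
print(activity_class("pump seal leaking"))     # None
```

The last case mirrors the 33% of records noted in the abstract, where no verb is present in the text at all.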
11. Creating occupant-centered digital twins using the Occupant Feedback Ontology implemented in a smartwatch app.
- Author
-
Donkers, Alex, de Vries, Bauke, and Yang, Dujuan
- Subjects
DIGITAL twins ,SEMANTIC Web ,SMARTWATCHES ,ONTOLOGY ,ENVIRONMENTAL quality ,MOBILE apps - Abstract
Occupant feedback enables building managers to improve occupants' health, comfort, and satisfaction. However, acquiring continuous occupant feedback and integrating this feedback with other building information is challenging. This paper presents a scalable method to acquire continuous occupant feedback and directly integrate it with other building information. Semantic web technologies were applied to solve data interoperability issues. The Occupant Feedback Ontology was developed to describe feedback semantically. In addition, a smartwatch app, Mintal, was developed to acquire continuous feedback on indoor environmental quality. The app gathers location, medical information, and answers to short micro surveys. Mintal applied the Occupant Feedback Ontology to directly integrate the feedback with linked building data. A case study was performed to evaluate this method. A semantic digital twin was created by integrating linked building data, sensor data, and occupant feedback. Results from SPARQL queries gave more insight into an occupant's perceived comfort levels in the Open Flat. The case study shows how integrating feedback with building information allows for more occupant-centric decision support tools. The approach presented in this paper can be used in a wide range of use cases, both within and beyond the architecture, building, and construction domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Advancing Parkinson’s Disease Research in Canada: The Canadian Open Parkinson Network (C-OPN) Cohort.
- Author
-
Cressatti, Marisa, Pinilla-Monsalve, Gabriel D., Blais, Mathieu, Normandeau, Catherine P., Degroot, Clotilde, Kathol, Iris, Bogard, Sarah, Bendas, Anna, Camicioli, Richard, Dupré, Nicolas, Gan-Or, Ziv, Grimes, David A., Kalia, Lorraine V., MacDonald, Penny A., McKeown, Martin J., Martino, Davide, Miyasaki, Janis M., Schlossmacher, Michael G., Stoessl, A. Jon, and Strafella, Antonio P.
- Subjects
MONONUCLEAR leukocytes ,MEDICAL personnel ,DISEASE management ,SYMPTOMS ,CITIES & towns - Abstract
Enhancing the interactions between study participants, clinicians, and investigators is imperative for advancing Parkinson’s disease (PD) research. The Canadian Open Parkinson Network (C-OPN) stands as a nationwide endeavor, connecting the PD community with ten accredited universities and movement disorders research centers spanning, at the time of this analysis, British Columbia, Alberta, Ontario, and Quebec. Our aim is to showcase C-OPN as a paradigm for bolstering national collaboration to accelerate PD research and to provide an initial overview of already collected data sets. The C-OPN database comprises de-identified data concerning demographics, symptoms and signs, treatment approaches, and standardized assessments. Additionally, it collects venous blood-derived biomaterials, such as for analyses of DNA, peripheral blood mononuclear cells (PBMC), and serum. Accessible to researchers, C-OPN resources are available through web-based data management systems for multi-center studies, including REDCap. As of November 2023, the C-OPN had enrolled 1,505 PD participants. The male-to-female ratio was 1.77:1, with 83% (n = 1098) residing in urban areas and 82% (n = 1084) having pursued post-secondary education. The average age at diagnosis was 60.2±10.3 years. Herein, our analysis of the C-OPN PD cohort encompasses environmental factors, motor and non-motor symptoms, disease management, and regional differences among provinces. As of April 2024, 32 research projects have utilized C-OPN resources. C-OPN represents a national platform promoting multidisciplinary and multisite research that focuses on PD to promote innovation, exploration of care models, and collaboration among Canadian scientists.
Teamwork and communication between people living with Parkinson’s disease (PD), physicians, health professionals, and research scientists is important for improving the lives of those living with this condition. The Canadian Open Parkinson Network (C-OPN) is a Canada-wide initiative, connecting the PD community with ten accredited universities and movement disorders research centers located in, at the time of this analysis, British Columbia, Alberta, Ontario, and Quebec. The aim of this paper is to showcase C-OPN as a useful resource for physicians and research scientists studying PD in Canada and around the world, and to provide a snapshot of already collected data. The C-OPN database comprises de-identified (meaning removal of any identifying information, such as name or date of birth) data concerning lifestyle, disease symptoms, treatments, and results from standardized tests. It also collects blood samples for further analysis. As of November 2023, C-OPN had enrolled 1,505 PD participants across Canada. Most of the participants were male (64%), living in urban areas (83%), and completed post-secondary education (82%). The average age at diagnosis was 60.2±10.3 years. In this paper, we look at environmental factors, motor and non-motor symptoms, different disease management strategies, and regional differences between provinces. In conclusion, C-OPN represents a national platform that encourages multidisciplinary and multisite research focusing on PD to promote innovation and collaboration among Canadian scientists. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Discovering Process Models with Long-Term Dependencies while Providing Guarantees and Filtering Infrequent Behavior Patterns.
- Author
-
Mannel, Lisa L. and van der Aalst, Wil M. P.
- Subjects
FLEXIBLE manufacturing systems ,PETRI nets ,ALGORITHMS ,NOISE - Abstract
In process discovery, the goal is to find, for a given event log, the model describing the underlying process. While process models can be represented in a variety of ways, Petri nets form a theoretically well-explored description language and are therefore often used. In this paper, we extend the eST-Miner process discovery algorithm. The eST-Miner computes a set of Petri net places which are considered to be fitting with respect to a certain fraction of the behavior described by the given event log, as indicated by a given noise threshold. It evaluates all possible candidate places using token-based replay. The set of replayable traces is determined for each place in isolation, i.e., these sets do not need to be consistent. This allows the algorithm to abstract from infrequent behavioral patterns occurring only in some traces. However, when combining places into a Petri net by connecting them to the corresponding uniquely labeled transitions, the resulting net can replay exactly those traces from the event log that are allowed by the combination of all inserted places. Thus, inserting places one-by-one without considering their combined effect may result in deadlocks and low fitness of the Petri net. In this paper, we explore adaptations of the eST-Miner that aim to select a subset of places such that the resulting Petri net guarantees a definable minimal fitness while maintaining high precision with respect to the input event log. Furthermore, current place evaluation techniques tend to block the execution of infrequent activity labels. Thus, a refined place fitness metric is introduced and thoroughly investigated. In our experiments, we use real and artificial event logs to evaluate and compare the impact of the various place selection strategies and place fitness evaluation metrics on the returned Petri net. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
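The token-based replay referred to in the abstract above can be sketched for a single candidate place: a place with a set of ingoing and outgoing transitions replays a trace if its token count never goes negative and ends at zero. This is a simplified sketch, not the eST-Miner implementation; the log and the candidate place are made up.

```python
# Token-based replay of one candidate place, evaluated in isolation as
# in eST-Miner's place evaluation. The fraction of replayable traces is
# then compared against the noise threshold to decide whether the place
# is considered fitting.

def replays(trace, ingoing, outgoing):
    tokens = 0
    for activity in trace:
        if activity in outgoing:
            if tokens == 0:
                return False  # would consume from an empty place
            tokens -= 1
        if activity in ingoing:
            tokens += 1
    return tokens == 0  # no tokens may remain at the end

def fitting_fraction(log, ingoing, outgoing):
    return sum(replays(t, ingoing, outgoing) for t in log) / len(log)

log = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "b", "c"]]
# Candidate place between transitions 'a' and 'b':
print(fitting_fraction(log, ingoing={"a"}, outgoing={"b"}))  # 2 of 3 traces
```

With a noise threshold of, say, 0.6, this place would be kept even though the third (infrequent) trace cannot replay it, which is exactly the abstraction from infrequent behavior the abstract describes.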
14. Proportional Fuzzy Set Extensions and Imprecise Proportions.
- Author
-
Kahraman, Cengiz
- Subjects
FUZZY sets ,AGGREGATION operators ,ARITHMETIC - Abstract
The extensions of ordinary fuzzy sets are problematic because they require decimal numbers for the membership, non-membership and indecision degrees of an element from the experts, which cannot be easily determined. This becomes more difficult when membership degrees with three or more decimal digits have to be assigned. Instead, proportional relations between the degrees of the parameters of a fuzzy set extension make it easier to determine the membership, non-membership, and indecision degrees. The objective of this paper is to present a simple but effective technique for determining these degrees with several decimal digits and to enable the expert to assign more stable values when asked at different time points. Some proportion-based models for the extensions of fuzzy sets, namely intuitionistic fuzzy sets, Pythagorean fuzzy sets, picture fuzzy sets, and spherical fuzzy sets, are proposed, including their arithmetic operations and aggregation operators. Proportional fuzzy sets require only the proportional relations between the parameters of the extensions of fuzzy sets. Their contribution is that these models will ease the use of fuzzy set extensions with data that better represent expert judgments. The imprecise definition of proportions is also incorporated into the given models. The application and comparative analyses show that proportional fuzzy sets are easily applied to any problem and produce valid outcomes. Furthermore, proportional fuzzy sets clearly show the role of the degree of indecision in the ranking of alternatives in binomial and trinomial fuzzy sets. In the considered car selection problem, it has been observed that there are minor changes in the ordering of intuitionistic and spherical fuzzy sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
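The proportional idea in the abstract above can be illustrated very simply: rather than eliciting decimal degrees directly, the expert states only relative proportions, and the degrees follow by normalisation. The function below is a plain normalisation for a picture-fuzzy-style triple (mu + nu + pi = 1); it only approximates the spirit of the paper's models, not their exact formulas.

```python
# Hedged sketch: recover membership (mu), non-membership (nu) and
# indecision (pi) degrees from expert-stated proportions by normalising
# them to sum to 1. Illustrative only, not the paper's proposed models.

def degrees_from_proportions(k_mu, k_nu, k_pi):
    total = k_mu + k_nu + k_pi
    return (k_mu / total, k_nu / total, k_pi / total)

# Expert judgment: "membership is 3 times non-membership, and indecision
# equals non-membership" -> proportions 3 : 1 : 1.
mu, nu, pi = degrees_from_proportions(3, 1, 1)
print(mu, nu, pi)  # 0.6 0.2 0.2
```

The appeal is stability: an expert asked again next week is far more likely to repeat "3 to 1 to 1" than to reproduce 0.6 / 0.2 / 0.2 to several decimal digits.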
15. An Extended EDAS Approach Based on Cumulative Prospect Theory for Multiple Attributes Group Decision Making with Interval-Valued Intuitionistic Fuzzy Information.
- Author
-
Wang, Jing, Cai, Qiang, Wei, Guiwu, and Liao, Ningna
- Subjects
GROUP decision making ,PROSPECT theory ,FUZZY sets ,PSYCHOLOGICAL factors ,VENTURE capital ,ENTROPY (Information theory) - Abstract
The interval-valued intuitionistic fuzzy sets (IVIFSs), based on the intuitionistic fuzzy sets (IFSs), extend classical decision methods, and their research and application are attracting attention. A comparative analysis makes clear that multiple classical methods have been applied with IVIFS information to many practical issues. In this paper, we extended the classical EDAS method based on the Cumulative Prospect Theory (CPT), considering the decision experts' (DEs') psychological factors under IVIFSs. Taking the fuzzy and uncertain character of the IVIFSs and the psychological preference into consideration, an extended EDAS method based on CPT under IVIFSs (the IVIF-CPT-EDAS method) is created for multiple-attribute group decision making (MAGDM) issues. Meanwhile, the information entropy method is used to evaluate the attribute weights. Finally, a numerical example for Green Technology Venture Capital (GTVC) project selection is given, some comparisons are used to illustrate the advantages of the IVIF-CPT-EDAS method, and a sensitivity analysis is applied to prove the effectiveness and stability of this new method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
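For orientation, the classical (crisp) EDAS procedure that the paper extends works as follows: score each alternative by its weighted positive and negative distances from the per-criterion average solution, normalise both scores, and average them. The sketch below assumes benefit criteria only; the decision matrix and weights are invented, and the interval-valued intuitionistic fuzzy and CPT extensions are not shown.

```python
# Crisp EDAS sketch (benefit criteria only): rank alternatives by their
# positive (PDA) and negative (NDA) distances from the average solution.

def edas(matrix, weights):
    m, n = len(matrix), len(matrix[0])
    av = [sum(row[j] for row in matrix) / m for j in range(n)]
    # Weighted sums of positive / negative distances from average.
    sp = [sum(w * max(0, x - a) / a for x, a, w in zip(row, av, weights))
          for row in matrix]
    sn = [sum(w * max(0, a - x) / a for x, a, w in zip(row, av, weights))
          for row in matrix]
    # Normalise and combine into appraisal scores in [0, 1].
    nsp = [s / max(sp) if max(sp) else 1 for s in sp]
    nsn = [1 - s / max(sn) if max(sn) else 1 for s in sn]
    return [(p + q) / 2 for p, q in zip(nsp, nsn)]

# Three hypothetical alternatives scored on two benefit criteria.
scores = edas([[7, 5], [4, 8], [6, 6]], weights=[0.6, 0.4])
best = max(range(3), key=scores.__getitem__)
print(scores, "best:", best)
```

The IVIF-CPT-EDAS method replaces the crisp entries with interval-valued intuitionistic fuzzy numbers and reweights the distances through CPT value and weighting functions.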
16. Incremental schema integration for data wrangling via knowledge graphs.
- Author
-
Flores, Javier, Rabbani, Kashif, Nadal, Sergi, Gómez, Cristina, Romero, Oscar, Jamin, Emmanuel, and Dasiopoulou, Stamatia
- Subjects
KNOWLEDGE graphs ,DATA integration - Abstract
Virtual data integration is the current approach to data wrangling in data-driven decision-making. In this paper, we focus on automating schema integration, which extracts a homogenised representation of the data source schemata and integrates them into a global schema to enable virtual data integration. Schema integration requires a set of well-known constructs: the data source schemata and wrappers, a global integrated schema, and the mappings between them. Based on them, virtual data integration systems enable fast and on-demand data exploration via query rewriting. Unfortunately, the generation of such constructs is currently performed in a largely manual manner, hindering its feasibility in real scenarios. This becomes aggravated when dealing with heterogeneous and evolving data sources. To overcome these issues, we propose a fully-fledged semi-automatic and incremental approach grounded on knowledge graphs to generate the required schema integration constructs in four main steps: bootstrapping, schema matching, schema integration, and generation of system-specific constructs. We also present Nextia DI, a tool implementing our approach. Finally, a comprehensive evaluation is presented to scrutinize our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
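Of the four steps named in the abstract above, schema matching is the easiest to caricature: a naive name-based matcher over attribute names. The schemata and threshold below are invented, and the paper's approach additionally works over knowledge-graph representations of the sources rather than raw name lists.

```python
from difflib import SequenceMatcher

# Toy name-based schema matcher of the kind used to seed semi-automatic
# schema integration: propose attribute correspondences whose string
# similarity exceeds a threshold, for a human to confirm incrementally.

def match_attributes(schema_a, schema_b, threshold=0.4):
    pairs = []
    for a in schema_a:
        for b in schema_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

print(match_attributes(["customer_name", "birth_date"],
                       ["CustomerName", "dateOfBirth"]))
```

Real matchers also exploit data types, instances, and graph structure; pure string similarity misses reordered compounds like birth_date vs. dateOfBirth unless the threshold is lowered, which is why a semi-automatic, human-in-the-loop design is proposed.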
17. Semantic-enabled architecture for auditable privacy-preserving data analysis.
- Author
-
Ekaputra, Fajar J., Ekelhart, Andreas, Mayer, Rudolf, Miksa, Tomasz, Šarčević, Tanja, Tsepelakis, Sotirios, and Waltersdorfer, Laura
- Subjects
DATA analysis ,SEMANTIC Web ,AUDIT trails ,TASK analysis ,DATA protection ,SEMANTICS ,RIGHT to be forgotten - Abstract
Small and medium-sized organisations face challenges in acquiring, storing and analysing personal data, particularly sensitive data (e.g., data of medical nature), due to data protection regulations, such as the GDPR in the EU, which stipulates high standards in data protection. Consequently, these organisations often refrain from collecting data centrally, which means losing the potential of data analytics and learning from aggregated user data. To enable organisations to leverage the full potential of the collected personal data, two main technical challenges need to be addressed: (i) organisations must preserve the privacy of individual users and honour their consent, while (ii) being able to provide data and algorithmic governance, e.g., in the form of audit trails, to increase trust in the result and support reproducibility of the data analysis tasks performed on the collected data. Such an auditable, privacy-preserving data analysis is currently challenging to achieve, as existing methods and tools only offer partial solutions to this problem, e.g., data representation of audit trails and user consent, automatic checking of usage policies or data anonymisation. To the best of our knowledge, there exists no approach providing an integrated architecture for auditable, privacy-preserving data analysis. To address these gaps, as the main contribution of this paper, we propose the WellFort approach, a semantic-enabled architecture for auditable, privacy-preserving data analysis which provides secure storage for users' sensitive data with explicit consent, and delivers a trusted, auditable analysis environment for executing data analytic processes in a privacy-preserving manner. Additional contributions include the adaptation of Semantic Web technologies as an integral part of the WellFort architecture, and the demonstration of the approach through a feasibility study with a prototype supporting use cases from the medical domain.
Our evaluation shows that WellFort enables privacy-preserving analysis of data, and at the same time collects sufficient information in an automated way to support its auditability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Neural axiom network for knowledge graph reasoning.
- Author
-
Li, Juan, Chen, Xiangnan, Yu, Hongtao, Chen, Jiaoyan, and Zhang, Wen
- Subjects
KNOWLEDGE graphs ,AXIOMS - Abstract
Knowledge graph reasoning (KGR) aims to infer new knowledge or detect noise, which is essential for improving the quality of knowledge graphs. Recently, various KGR techniques, such as symbolic- and embedding-based methods, have been proposed and have shown strong reasoning ability. Symbolic-based reasoning methods infer missing triples according to predefined rules or ontologies. Although rules and axioms have proven effective, they are difficult to obtain. Embedding-based reasoning methods represent entities and relations as vectors, and complete KGs via vector computation. However, they mainly rely on structural information and ignore implicit axiom information that is not predefined in KGs but can be reflected in the data. That is, each correct triple is also a logically consistent triple and satisfies all axioms. In this paper, we propose a novel NeuRal Axiom Network (NeuRAN) framework that combines explicit structural and implicit axiom information without introducing additional ontologies. Specifically, the framework consists of a KG embedding module that preserves the semantics of triples and five axiom modules that encode five kinds of implicit axioms. These axioms correspond to five typical object property expression axioms defined in OWL2, including ObjectPropertyDomain, ObjectPropertyRange, DisjointObjectProperties, IrreflexiveObjectProperty and AsymmetricObjectProperty. The KG embedding module and the axiom modules compute the scores that a triple conforms to the semantics and to the corresponding axioms, respectively. Compared with KG embedding models and CKRL, our method achieves comparable performance on noise detection and triple classification, and significantly better performance on link prediction. Compared with TransE and TransH, our method improves the link prediction performance on the Hits@1 metric by 22.0% and 20.8% on the WN18RR-10% dataset, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
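The symbolic half of the NeuRAN idea, scoring triples against implicit OWL2 object-property axioms, can be caricatured with hard consistency checks for two of the five axiom types mentioned. In the paper these signals are learned soft scores combined with an embedding score; the axiom sets below are hypothetical, not mined from data.

```python
# Toy consistency checks for two OWL2 object-property axiom types named
# in the abstract: IrreflexiveObjectProperty and AsymmetricObjectProperty.
# A noisy triple violating either axiom would receive a low axiom score.

IRREFLEXIVE = {"hasParent"}  # x hasParent x is inconsistent
ASYMMETRIC = {"hasParent"}   # x hasParent y and y hasParent x cannot both hold

def consistent(triple, kg):
    h, r, t = triple
    if r in IRREFLEXIVE and h == t:
        return False
    if r in ASYMMETRIC and (t, r, h) in kg:
        return False
    return True

kg = {("alice", "hasParent", "bob")}
print(consistent(("bob", "hasParent", "bob"), kg))    # False: irreflexivity
print(consistent(("bob", "hasParent", "alice"), kg))  # False: asymmetry
print(consistent(("carol", "hasParent", "bob"), kg))  # True
```

NeuRAN's contribution is precisely that such axioms need not be predefined: the axiom modules learn the corresponding regularities from the data.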
19. A study of concept similarity in Wikidata.
- Author
-
Ilievski, Filip, Shenoy, Kartik, Chalupsky, Hans, Klein, Nicholas, and Szekely, Pedro
- Subjects
LANGUAGE models ,KNOWLEDGE graphs ,RETROFITTING ,CROWDSOURCING ,ARTIFICIAL intelligence ,MULTICASTING (Computer networks) - Abstract
Robust estimation of concept similarity is crucial for applications of AI in the commercial, biomedical, and publishing domains, among others. While the related task of word similarity has been extensively studied, resulting in a wide range of methods, estimating concept similarity between nodes in Wikidata has not been considered so far. In light of the adoption of Wikidata for increasingly complex tasks that rely on similarity, and its unique size, breadth, and crowdsourcing nature, we propose that conceptual similarity should be revisited for the case of Wikidata. In this paper, we study a wide range of representative similarity methods for Wikidata, organized into three categories, and leverage background information for knowledge injection via retrofitting. We measure the impact of retrofitting with different weighted subsets from Wikidata and ProBase. Experiments on three benchmarks show that the best performance is achieved by pairing language models with rich information, whereas the impact of injecting knowledge is most positive on methods that originally do not consider comprehensive information. The performance of retrofitting is conditioned on the selection of high-quality similarity knowledge. A key limitation of this study, shared with prior work, lies in the limited size and scope of the similarity benchmarks. While Wikidata provides an unprecedented possibility for a representative evaluation of concept similarity, effectively doing so remains a key challenge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
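Retrofitting, as used above for knowledge injection, iteratively pulls each concept vector toward the vectors of its neighbours in a similarity graph while keeping it close to its original embedding. A small sketch of one common retrofitting scheme (Faruqui-style update; the vectors and graph below are hypothetical toy data, not the paper's setup):

```python
import numpy as np

def retrofit(vectors, neighbors, alpha=1.0, beta=1.0, iters=10):
    """Retrofitting update: each vector becomes a weighted average of its
    original embedding (weight alpha) and its current graph neighbours
    (weight beta each)."""
    q = {k: v.copy() for k, v in vectors.items()}
    for _ in range(iters):
        for node, nbrs in neighbors.items():
            if not nbrs:
                continue  # isolated nodes keep their original vector
            nbr_sum = sum(q[n] for n in nbrs)
            q[node] = (alpha * vectors[node] + beta * nbr_sum) / (alpha + beta * len(nbrs))
    return q

# Hypothetical 2-d embeddings; "cat" and "feline" are linked in the graph.
vecs = {"cat": np.array([1.0, 0.0]),
        "feline": np.array([0.0, 1.0]),
        "car": np.array([-1.0, 0.0])}
graph = {"cat": ["feline"], "feline": ["cat"], "car": []}
out = retrofit(vecs, graph)
# Linked concepts end up closer than they started; unlinked ones are untouched.
assert np.linalg.norm(out["cat"] - out["feline"]) < np.linalg.norm(vecs["cat"] - vecs["feline"])
```

The quality of the injected neighbour graph drives the result, which matches the abstract's observation that retrofitting is conditioned on high-quality similarity knowledge.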
20. Differential privacy and SPARQL.
- Author
-
Buil-Aranda, Carlos, Lobo, Jorge, and Olmedo, Federico
- Subjects
RDF (Document markup language) ,KNOWLEDGE graphs ,PRIVACY ,NUMERIC databases ,SQL - Abstract
Differential privacy is a framework that provides formal tools to develop algorithms to access databases and answer statistical queries with quantifiable accuracy and privacy guarantees. The notions of differential privacy are defined independently of the data model and the query language at stake. Most differential privacy results have been obtained on aggregation queries such as counting or finding maximum or average values, and on grouping queries over aggregations such as the creation of histograms. So far, the data model used by the framework research has typically been the relational model and the query language SQL. However, effective realizations of differential privacy for SQL queries that require joins have been limited. This has imposed severe restrictions on applying differential privacy to RDF knowledge graphs and SPARQL queries. By the very nature of RDF data, most useful queries accessing RDF graphs require intensive use of joins. Recently, new differential privacy techniques have been developed that can be applied to many types of joins in SQL with reasonable results. This opened the question of whether these new results carry over to RDF and SPARQL. In this paper we provide a positive answer to this question by presenting an algorithm that can answer counting queries over a large class of SPARQL queries with a differential privacy guarantee, provided the RDF graph is accompanied by semantic information about its structure. We have implemented our algorithm and conducted several experiments, showing the feasibility of our approach for large graph databases. Our aim has been to present an approach that can be used as a stepping stone towards extensions and other realizations of differential privacy for SPARQL and RDF. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
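The standard building block behind differentially private counting queries is the Laplace mechanism: the true count is perturbed with noise whose scale is the query's sensitivity divided by the privacy budget epsilon. The sketch below shows only this noise-addition step for a sensitivity-1 count; the hard part the paper addresses, bounding the sensitivity of SPARQL queries with joins using semantic information about the graph, is not reproduced here:

```python
import math
import random

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """Laplace mechanism: return true_count + Laplace(sensitivity/epsilon) noise.
    For a plain counting query, adding or removing one individual changes
    the count by at most `sensitivity` (here 1)."""
    u = rng.random() - 0.5                      # uniform on (-0.5, 0.5)
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of the Laplace distribution.
    return true_count - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

random.seed(0)                                  # fixed seed for a reproducible demo
noisy = laplace_count(128, epsilon=1.0)
# With epsilon = 1 the noise is small relative to a count of 128.
assert abs(noisy - 128) < 10
```

Smaller epsilon (stronger privacy) means larger scale and noisier answers; join queries need a larger effective sensitivity, which is exactly what structural knowledge of the RDF graph helps to bound.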
21. Consent through the lens of semantics: State of the art survey and best practices.
- Author
-
Kurteva, Anelia, Chhetri, Tek Raj, Pandit, Harshvardhan J., and Fensel, Anna
- Subjects
BEST practices ,BLOCKCHAINS ,SEMANTIC Web ,INFORMATION sharing ,ELECTRONIC data processing ,GENERAL semantics - Abstract
The acceptance of the GDPR legislation in 2018 started a new technological shift towards achieving transparency. GDPR put the focus on the concept of informed consent for data processing, which increased the responsibilities regarding data sharing for both end users and companies. This paper presents a literature survey of existing solutions that use semantic technology for implementing consent. The main focus is on ontologies, how they are used for consent representation and for consent management in combination with other technologies such as blockchain. We also focus on visualisation solutions aimed at improving individuals' consent comprehension. Finally, based on the reviewed state of the art, we propose best practices for consent implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Abstract argumentation with conditional preferences.
- Author
-
Bernreiter, Michael, Dvořák, Wolfgang, and Woltran, Stefan
- Abstract
In this paper, we study conditional preferences in abstract argumentation by introducing a new generalization of Dung-style argumentation frameworks (AFs) called Conditional Preference-based AFs (CPAFs). Each subset of arguments in a CPAF can be associated with its own preference relation. This generalizes existing approaches for preference-handling in abstract argumentation, and allows us to reason about conditional preferences in a general way. We conduct a principle-based analysis of CPAFs and compare them to related generalizations of AFs. Specifically, we highlight similarities and differences to Modgil's Extended AFs and show that our formalism can capture Value-based AFs. Moreover, we show that in some cases the introduction of conditional preferences leads to an increase in computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Decidability in argumentation semantics.
- Author
-
Dunne, Paul E.
- Abstract
Much of the formal study of algorithmic concerns with respect to semantics for abstract argumentation frameworks has focused on the issue of computational complexity. In contrast, matters regarding computability have been largely neglected. Recent trends in semantics have, however, started to concentrate not so much on the formulation of novel semantics but more on identifying common properties: for example, from basic ideas such as conflict-freeness through to quite sophisticated ideas such as serializability. The aim of this paper is to look at the implications these more recent studies have for computability rather than computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
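The "basic ideas such as conflict-freeness" mentioned above are easy to state computationally for a finite Dung-style argumentation framework: a set of arguments is conflict-free if no member attacks another member, and admissible if it is additionally able to defend each member against every attacker. A minimal sketch over a toy framework (the framework itself is invented for illustration):

```python
def conflict_free(args, attacks, S):
    """S is conflict-free iff no argument in S attacks another argument in S."""
    return not any((a, b) in attacks for a in S for b in S)

def admissible(args, attacks, S):
    """S is admissible iff it is conflict-free and every attacker of a
    member of S is itself attacked by some member of S."""
    if not conflict_free(args, attacks, S):
        return False
    for a in S:
        for b in args:
            if (b, a) in attacks and not any((c, b) in attacks for c in S):
                return False  # attack on a is left undefended
    return True

# Toy framework: a and b attack each other, b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
assert conflict_free(args, attacks, {"a", "c"})
assert admissible(args, attacks, {"a"})         # a defends itself against b
assert not admissible(args, attacks, {"c"})     # c cannot defend itself against b
```

Both checks are trivially decidable for finite frameworks; the computability questions the paper studies arise once frameworks or semantics move beyond this finite setting.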
24. Argumentation with justified preferences.
- Author
-
Pyon, Sung-Jun
- Abstract
It is often necessary and reasonable to justify preferences before reasoning from them. Moreover, justifying a preference ordering is reduced to justifying the criterion that produces the ordering. This paper builds on the well-known ASPIC+ formalism to develop a model that integrates justifying qualitative preferences with reasoning from the justified preferences. We first introduce a notion of preference criterion in order to model the way in which preferences are justified by an argumentation framework. We also adapt the notion of argumentation theory to build a sequence of argumentation frameworks, in which an argumentation framework justifies preferences that are to underlie the next framework. That is, in our formalism, preferences become not only an input of an argumentation framework, but also an output of it. This kind of input-output process can be applied in the further steps of argumentation. We also explore some interesting properties of our formalism. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. The critical need to accelerate cerebral palsy research with consumer engagement, global networks, and adaptive designs.
- Author
-
Thomas, Sruthi P., Novak, Iona, Ritterband-Rosenbaum, Anina, Lind, Karin, Webb, Annabel, Gross, Paul, and McNamara, Maria
- Subjects
- *
CEREBRAL palsy prevention , *MIDDLE-income countries , *PATIENTS , *MENTAL illness , *CLINICAL medicine research , *HOSPITAL emergency services , *EMERGENCY medical services , *GLOBAL burden of disease , *NEUROLOGICAL disorders , *EXPERIMENTAL design , *MEDICAL research , *PRIORITY (Philosophy) , *QUALITY of life , *HEALTH information systems , *MEDICAL needs assessment , *PATIENT participation , *LOW-income countries ,RESEARCH evaluation - Abstract
The prevalence of cerebral palsy (CP) varies globally, with higher rates and burden of disease in low- and middle-income countries. CP is a lifelong condition with no cure, presenting diverse challenges such as motor impairment, epilepsy, and mental health disorders. Research progress has been made but more is needed, especially given consumer demands for faster advancements and improvements in the scientific evidence base for interventions. This paper explores three strategies to accelerate CP research: consumer engagement, global clinical trial networks, and adaptive designs. Consumer engagement involving individuals with lived experience enhances research outcomes. Global clinical trial networks provide efficiency through larger and more diverse participant pools. Adaptive designs, unlike traditional randomized controlled trials, allow real-time modifications based on interim analyses, potentially answering complex questions more efficiently. The establishment of a CP Global Clinical Trials Network, integrating consumer engagement, global collaboration, and adaptive designs, marks a paradigm shift. The Network aims to address consumer-set research priorities. While challenges like ethical considerations and capacity building exist, the potential benefits for consumers, clinicians, researchers, and funding bodies are substantial. This paper underscores the urgency of transforming CP research methodologies for quicker translation of novel treatments into clinical practice to improve quality of life for those with CP. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Relation-Algebraic Verification of Disjoint-Set Forests.
- Author
-
Guttmann, Walter
- Subjects
- *
RELATION algebras , *SEMANTICS - Abstract
This paper studies how to use relation algebras, which are useful for high-level specification and verification, for proving the correctness of lower-level array-based implementations of algorithms. We give a simple relation-algebraic semantics of read and write operations on associative arrays. The array operations seamlessly integrate with assignments in computation models supporting while-programs. As a result, relation algebras can be used for verifying programs with associative arrays. We verify the correctness of an array-based implementation of disjoint-set forests using the union-by-rank strategy and find operations with path compression, path splitting and path halving. All results are formally proved in Isabelle/HOL. This paper is an extended version of [1]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
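The data structure verified above is the textbook disjoint-set forest. A plain Python sketch of union-by-rank combined with path halving, one of the path-shortening strategies treated in the paper (an illustrative implementation, not the relation-algebraic Isabelle/HOL formalization):

```python
class DisjointSetForest:
    """Array-based disjoint-set forest with union-by-rank and path halving."""

    def __init__(self, n):
        self.parent = list(range(n))   # each element starts as its own root
        self.rank = [0] * n

    def find(self, x):
        # Path halving: make every other node on the path point to its
        # grandparent, shortening the path for future finds.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper root.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

dsf = DisjointSetForest(6)
dsf.union(0, 1); dsf.union(1, 2); dsf.union(3, 4)
assert dsf.find(0) == dsf.find(2)
assert dsf.find(3) != dsf.find(0)
```

Path splitting and full path compression differ only in how `find` rewrites parent pointers; the correctness argument verified in the paper covers all three variants.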
27. A semantic framework for condition monitoring in Industry 4.0 based on evolving knowledge bases.
- Author
-
Giustozzi, Franco, Saunier, Julien, and Zanni-Merk, Cecilia
- Subjects
KNOWLEDGE base ,INDUSTRY 4.0 ,REAL-time computing ,MANUFACTURING processes ,DATA integration ,FACTORIES - Abstract
In Industry 4.0, factory assets and machines are equipped with sensors that collect data for effective condition monitoring. This is a difficult task since it requires the integration and processing of heterogeneous data from different sources, with different temporal resolutions and underlying meanings. Ontologies have emerged as a pertinent method to deal with data integration and to represent manufacturing knowledge in a machine-interpretable way through the construction of semantic models. Ontologies are used to structure knowledge in knowledge bases, which also contain instances and information about these data. Thus, a knowledge base provides a sort of virtual representation of the different elements involved in a manufacturing process. Moreover, the monitoring of industrial processes depends on the dynamic context of their execution. Under these circumstances, the semantic model must provide a way to represent this evolution in order to represent in which situation(s) a resource is in during the execution of its tasks to support decision making. This paper proposes a semantic framework to address the evolution of knowledge bases for condition monitoring in Industry 4.0. To this end, firstly we propose a semantic model (the COInd4 ontology) for the manufacturing domain that represents the resources and processes that are part of a factory, with special emphasis on the context of these resources and processes. Relevant situations that combine sensor observations with domain knowledge are also represented in the model. Secondly, an approach that uses stream reasoning to detect these situations that lead to potential failures is introduced. This approach enriches data collected from sensors with contextual information using the proposed semantic model. The use of stream reasoning facilitates the integration of data from different data sources, different temporal resolutions as well as the processing of these data in real time. 
This makes it possible to derive high-level situations from lower-level context and sensor information. Detecting situations can trigger actions to adapt the process behavior, and in turn, this change in behavior can lead to the generation of new contexts leading to new situations. These situations can have different levels of severity, and can be nested in different ways. Dealing with the rich relations among situations requires an efficient approach to organize them. Therefore, we propose a method to build a lattice, ordering those situations depending on the constraints they rely on. This lattice represents a road-map of all the situations that can be reached from a given one, normal or abnormal. This helps in decision support by allowing the identification of the actions that can be taken to correct an abnormality, thus avoiding the interruption of the manufacturing processes. Finally, an industrial application scenario for the proposed approach is described. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Deriving semantic validation rules from industrial standards: An OPC UA study.
- Author
-
Bareedu, Yashoda Saisree, Frühwirth, Thomas, Niedermeier, Christoph, Sabou, Marta, Steindl, Gernot, Thuluva, Aparna Saisree, Tsaneva, Stefani, and Tufek Ozkaya, Nilay
- Subjects
SEMANTIC Web ,NATURAL language processing - Abstract
Industrial standards provide guidelines for data modeling to ensure interoperability between stakeholders of an industry branch (e.g., robotics). Most frequently, such guidelines are provided in an unstructured format (e.g., pdf documents) which hampers the automated validation of information objects (e.g., data models) that rely on such standards in terms of their compliance with the modeling constraints prescribed by the guidelines. This raises the risk of costly interoperability errors induced by the incorrect use of the standards. There is, therefore, an increased interest in automatic semantic validation of information objects based on industrial standards. In this paper we focus on an approach to semantic validation by formally representing the modeling constraints from unstructured documents as explicit, machine-actionable rules (to be then used for semantic validation) and (semi-)automatically extracting such rules from pdf documents. While our approach aims to be generically applicable, we exemplify an adaptation of the approach in the concrete context of the OPC UA industrial standard, given its large-scale adoption among important industrial stakeholders and the OPC UA internal efforts towards semantic validation. We conclude that (i) it is feasible to represent modeling constraints from the standard specifications as rules, which can be organized in a taxonomy and represented using Semantic Web technologies such as OWL and SPARQL; (ii) we could automatically identify modeling constraints in the specification documents by inspecting the tables (P = 87%) and text of these documents (F1 up to 94%); (iii) the translation of the modeling constraints into formal rules could be fully automated when constraints were extracted from tables and required a Human-in-the-loop approach for constraints extracted from text. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Enhancing awareness of industrial robots in collaborative manufacturing.
- Author
-
Umbrico, Alessandro, Cesta, Amedeo, and Orlandini, Andrea
- Subjects
CONSCIOUSNESS raising ,INDUSTRIAL robots ,KNOWLEDGE representation (Information theory) ,ARTIFICIAL intelligence ,ONTOLOGY ,ROBOTS - Abstract
The diffusion of Human-Robot Collaborative cells is hindered by several barriers. Classical control approaches do not yet seem fully suitable for coping with the variability introduced by the presence of human operators working alongside robots. The capabilities of representing heterogeneous knowledge and performing abstract reasoning are crucial to enhance the flexibility of control solutions. To this aim, the ontology SOHO (Sharework Ontology for Human-Robot Collaboration) has been specifically designed for representing Human-Robot Collaboration scenarios, following a context-based approach. This work brings several contributions. This paper proposes an extension of SOHO to better characterize behavioral constraints of collaborative tasks. Furthermore, this work shows a knowledge extraction procedure designed to automate the synthesis of Artificial Intelligence plan-based controllers for realizing flexible coordination of human and robot behaviors in collaborative tasks. The generality of the ontological model and the developed representation capabilities as well as the validity of the synthesized planning domains are evaluated on a number of realistic industrial scenarios where collaborative robots are actually deployed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. An ontology of 3D environment where a simulated manipulation task takes place (ENVON).
- Author
-
Zhao, Yingshen, Sarkar, Arkopaul, Elmhadhbi, Linda, Karray, Mohamed Hedi, Fillatreau, Philippe, and Archimède, Bernard
- Subjects
THREE-dimensional modeling ,ONTOLOGIES (Information retrieval) ,ROBOTIC path planning ,ONTOLOGY ,RIGID bodies ,CONTROL rooms ,ROBOTICS ,GEOMETRIC modeling ,POTENTIAL field method (Robotics) - Abstract
Thanks to the advent of robotics in shopfloor and warehouse environments, control rooms need to seamlessly exchange information regarding the dynamically changing 3D environment to facilitate task and path planning for the robots. Adding to the complexity, this type of environment is heterogeneous as it includes both free space and various types of rigid bodies (equipment, materials, humans, etc.). At the same time, 3D environment-related information is also required by virtual applications (e.g., VR techniques) for the behavioral study of CAD-based product models or simulation of CNC operations. In past research, information models for such heterogeneous 3D environments have often been built without ensuring connection among the different levels of abstraction required for different applications. To address such multiple points of view and modelling requirements for 3D objects and environments, this paper proposes an ontology model that integrates the contextual, topologic, and geometric information of both the rigid bodies and the free space. The ontology provides an evolvable knowledge model that can support simulated task-related information in general. This ontology aims to greatly improve interoperability, as a path planning system (e.g., a robot) will be able to deal with different applications by simply updating the contextual semantics related to a targeted application while keeping the geometric and topological models intact, by leveraging the semantic links among the models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. A benchmark dataset with Knowledge Graph generation for Industry 4.0 production lines.
- Author
-
Yahya, Muhammad, Ali, Aabid, Mehmood, Qaiser, Yang, Lan, Breslin, John G., and Ali, Muhammad Intizar
- Subjects
KNOWLEDGE graphs ,ONTOLOGIES (Information retrieval) ,VERTICAL integration ,AUTONOMOUS robots ,INDUSTRY 4.0 ,SYSTEM integration ,INDUSTRIAL revolution - Abstract
Industry 4.0 (I4.0) is a new era in the industrial revolution that emphasizes machine connectivity, automation, and data analytics. The I4.0 pillars such as autonomous robots, cloud computing, horizontal and vertical system integration, and the industrial internet of things have increased the performance and efficiency of production lines in the manufacturing industry. Over the past years, efforts have been made to propose semantic models to represent manufacturing domain knowledge; one such model is the Reference Generalized Ontological Model (RGOM, https://w3id.org/rgom). However, its adaptability, like that of other models, is not ensured due to the lack of manufacturing data. In this paper, we aim to develop a benchmark dataset for knowledge graph generation in Industry 4.0 production lines and to show the benefits of using ontologies and semantic annotations of data, showcasing how the I4.0 industry can benefit from KGs and semantic datasets. This work is the result of collaboration with the production line managers, supervisors, and engineers in the football industry to acquire realistic production line data (https://github.com/MuhammadYahta/ManufacturingProductionLineDataSetGeneration-Football, https://zenodo.org/record/7779522). Knowledge Graphs (KGs) have emerged as a significant technology to store the semantics of domain entities. KGs have been used in a variety of industries, including banking, the automobile industry, oil and gas, pharmaceutical and health care, publishing, media, etc. The data is mapped and populated to the RGOM classes and relationships using an automated solution based on the Jena API, producing an I4.0 KG. It contains more than 2.5 million axioms and about 1 million instances. This KG enables us to demonstrate the adaptability and usefulness of the RGOM. Our research helps production line staff to take timely decisions by exploiting the information embedded in the KG. In relation to this, the adaptability of the RGOM is demonstrated with the help of a use case scenario to discover required information such as the current temperature at a particular time, the status of a motor, the tools deployed on a machine, etc. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Towards a formal ontology of engineering functions, behaviours, and capabilities.
- Author
-
Compagno, Francesco and Borgo, Stefano
- Subjects
ONTOLOGY ,ENGINEERING models ,FIRST-order logic ,ENGINEERING ,ENGINEERING systems - Abstract
In both applied ontology and engineering, functionality is a well-researched topic, since it is through teleological causal reasoning that domain experts build mental models of engineering systems, giving birth to functions. These mental models are important throughout the whole lifecycle of any product, being used from the design phase up to diagnosis activities. Though a vast amount of work to model functions has already been carried out, the literature has not settled on a shared and well-defined approach due to the variety of concepts involved and the modeling tasks that functional descriptions should satisfy. The work in this paper lays the groundwork and takes some crucial steps towards a rich ontological description of functions and related concepts, such as behaviour, capability, and capacity. A conceptual analysis of such notions is carried out using the top-level ontology DOLCE as a framework, and the ensuing logical theory is formally described in first-order logic and OWL, showing how ontological concepts can model major aspects of engineering products in applications. In particular, it is shown how functions can be distinguished from the implementation methods to realize them, how one can differentiate between capabilities and capacities of a product, and how these are related to engineering functions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Leakage-Resilient Hybrid Signcryption in Heterogeneous Public-key Systems.
- Author
-
Ho, Ting-Chieh, Tseng, Yuh-Min, and Huang, Sen-Shan
- Subjects
PUBLIC key cryptography ,COMPUTATIONAL complexity ,CRYPTOGRAPHY - Abstract
Signcryption integrates signature and encryption schemes into a single scheme to ensure both content unforgeability (authentication) and message confidentiality while reducing computational complexity. Typically, both signers (senders) and decrypters (receivers) in a signcryption scheme belong to the same public-key system. When signers and decrypters belong to heterogeneous public-key systems, the scheme is called a hybrid signcryption scheme, which provides more elastic usage than typical signcryption schemes. In recent years, a new kind of attack, named the side-channel attack, has emerged, which allows adversaries to learn a portion of the secret keys used in cryptographic algorithms. To resist such attacks, leakage-resilient cryptography has been widely discussed and studied, and a large number of leakage-resilient schemes have been proposed. Also, numerous hybrid signcryption schemes under heterogeneous public-key systems have been proposed, but none of them possesses the leakage-resilient property. In this paper, we propose the first hybrid signcryption scheme with leakage resilience, called the leakage-resilient hybrid signcryption scheme in heterogeneous public-key systems (LR-HSC-HPKS). Security proofs demonstrate that the proposed scheme provides both authentication and confidentiality against two types of adversaries in heterogeneous public-key systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Location Selection of Electric Vehicle Charging Stations Through Employing the Spherical Fuzzy CoCoSo and CRITIC Technique.
- Author
-
Yan, Rong, Han, Yongguang, Zhang, Huiyuan, and Wei, Cun
- Subjects
ELECTRIC vehicle charging stations ,ELECTRIC vehicles ,SUSTAINABLE transportation ,GROUP decision making ,FUZZY sets ,FUZZY numbers - Abstract
Energy conservation and emission reduction are important policies vigorously promoted in China. With the continuous popularization of the concept of green transportation, electric vehicles have become a green transportation tool with good development prospects, greatly reducing the pressure on the environment and resources caused by rapid economic growth. The development status of electric vehicles has a significant impact on urban energy security, environmental protection, and sustainable development in China. With the widespread application of new energy vehicles, charging piles have become an important auxiliary infrastructure necessary for the development of electric vehicles. They have significant social and economic benefits, so it is imperative to build electric vehicle charging piles. There are many factors to consider in the scientific layout of electric vehicle charging stations, and the location selection problem of electric vehicle charging stations is a multiple-attribute group decision-making (MAGDM) problem. Recently, the Combined Compromise Solution (CoCoSo) technique and the CRITIC technique have been utilized to deal with MAGDM issues. Spherical fuzzy sets (SFSs) can uncover the uncertainty and fuzziness in MAGDM more effectively and deeply. In this paper, on the basis of the CoCoSo technique, a novel spherical fuzzy number CoCoSo (SFN-CoCoSo) technique based on the spherical fuzzy number cosine similarity measure (SFNCSM) and the spherical fuzzy number Euclidean distance (SFNED) is developed for dealing with MAGDM. Moreover, when the attribute weights are completely unknown, the CRITIC technique is extended to SFSs to acquire the attribute weights based on the SFNCSM and SFNED. Finally, the SFN-CoCoSo technique is applied to the location selection problem of electric vehicle charging stations to prove the practicability of the developed technique, and it is compared with existing techniques to further demonstrate its superiority. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
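A spherical fuzzy number assigns an alternative a membership, non-membership, and hesitancy degree whose squares sum to at most one. The SFN-CoCoSo technique above builds on a cosine similarity measure (SFNCSM) and a Euclidean distance (SFNED) between such triples; the sketch below uses the generic cosine and Euclidean formulas over the (mu, nu, pi) triples, which may differ in detail from the measures defined in the paper, and the evaluation triples are invented for illustration:

```python
import math

def sfn_cosine_similarity(a, b):
    """Generic cosine similarity between two spherical fuzzy numbers,
    each given as (membership, non-membership, hesitancy) with
    mu^2 + nu^2 + pi^2 <= 1."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sfn_euclidean_distance(a, b):
    """Plain Euclidean distance between the (mu, nu, pi) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

good = (0.9, 0.2, 0.3)       # strong support for an alternative (hypothetical)
also_good = (0.8, 0.3, 0.4)
poor = (0.2, 0.9, 0.3)
assert sfn_cosine_similarity(good, also_good) > sfn_cosine_similarity(good, poor)
assert sfn_euclidean_distance(good, also_good) < sfn_euclidean_distance(good, poor)
```

In the full method these measures feed both the CRITIC weighting step and the CoCoSo aggregation of the alternatives.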
35. Numerical Approximations of the Riemann–Liouville and Riesz Fractional Integrals.
- Author
-
Ciesielski, Mariusz and Grodzki, Grzegorz
- Subjects
FRACTIONAL integrals ,FLOATING-point arithmetic ,FRACTIONAL calculus ,CAUCHY integrals ,SPLINES ,GAUSSIAN quadrature formulas ,SPLINE theory ,NUMERICAL integration - Abstract
In this paper, numerical algorithms for calculating the values of the left- and right-sided Riemann–Liouville fractional integrals and the Riesz fractional integral using spline interpolation techniques are derived. Linear, quadratic and three variants of cubic splines are taken into account. Estimates of the errors are derived using analytical methods. We show four examples of numerical evaluation of the mentioned fractional integrals and determine the experimental rate of convergence for each derived algorithm. The high-precision calculations are executed using 128-bit floating-point numbers and arithmetic routines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
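The left-sided Riemann–Liouville integral of order alpha is I^alpha f(x) = (1/Gamma(alpha)) times the integral from a to x of (x-t)^(alpha-1) f(t) dt. Replacing f with its piecewise-linear (linear-spline) interpolant on a uniform grid gives the classical product-trapezoidal rule, in the spirit of the linear-spline variant studied above (a sketch with the standard weights, not the authors' exact algorithm, and in ordinary double precision rather than 128-bit arithmetic):

```python
import math

def rl_fractional_integral(f, a, x, alpha, n=1000):
    """Left-sided Riemann-Liouville fractional integral I^alpha f(x),
    approximated by integrating the piecewise-linear interpolant of f
    on n uniform subintervals (product-trapezoidal rule)."""
    h = (x - a) / n
    total = 1.0 * f(x)  # endpoint weight a_{n,n} = 1
    total += ((n - 1) ** (alpha + 1) - n ** alpha * (n - alpha - 1)) * f(a)
    for j in range(1, n):
        m = n - j
        # Second-difference weights of the linear-spline quadrature.
        w = (m + 1) ** (alpha + 1) - 2 * m ** (alpha + 1) + (m - 1) ** (alpha + 1)
        total += w * f(a + j * h)
    return h ** alpha / math.gamma(alpha + 2) * total

# Check against a known closed form: I^alpha of f(t) = 1 on [0, x]
# equals x^alpha / Gamma(alpha + 1).
alpha, x = 0.5, 1.0
exact = x ** alpha / math.gamma(alpha + 1)
approx = rl_fractional_integral(lambda t: 1.0, 0.0, x, alpha)
assert abs(approx - exact) < 1e-6
```

With alpha = 1 the weights collapse to the ordinary trapezoidal rule, which is a quick sanity check on the formula.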
36. A Fuzzy MARCOS-Based Analysis of Dragonfly Algorithm Variants in Industrial Optimization Problems.
- Author
-
Kalita, Kanak, Ganesh, Narayanan, Shankar, Rajendran, and Chakraborty, Shankar
- Subjects
BEES algorithm ,ANT algorithms ,FUZZY decision making ,POLLINATORS ,DIFFERENTIAL evolution ,ALGORITHMS ,METAHEURISTIC algorithms ,CHEMICAL processes - Abstract
Metaheuristics are commonly employed as a means of solving many distinct kinds of optimization problems. Several natural-process-inspired metaheuristic optimizers have been introduced in the recent years. The convergence, computational burden and statistical relevance of metaheuristics should be studied and compared for their potential use in future algorithm design and implementation. In this paper, eight different variants of dragonfly algorithm, i.e. classical dragonfly algorithm (DA), hybrid memory-based dragonfly algorithm with differential evolution (DADE), quantum-behaved and Gaussian mutational dragonfly algorithm (QGDA), memory-based hybrid dragonfly algorithm (MHDA), chaotic dragonfly algorithm (CDA), biogeography-based Mexican hat wavelet dragonfly algorithm (BMDA), hybrid Nelder-Mead algorithm and dragonfly algorithm (INMDA), and hybridization of dragonfly algorithm and artificial bee colony (HDA) are applied to solve four industrial chemical process optimization problems. A fuzzy multi-criteria decision making tool in the form of fuzzy-measurement alternatives and ranking according to compromise solution (MARCOS) is adopted to ascertain the relative rankings of the DA variants with respect to computational time, Friedman's rank based on optimal solutions and convergence rate. Based on the comprehensive testing of the algorithms, it is revealed that DADE, QGDA and classical DA are the top three DA variants in solving the industrial chemical process optimization problems under consideration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. A Lance Distance-Based MAIRCA Method for q-Rung Orthopair Fuzzy MCDM with Completely Unknown Weight Information.
- Author
-
Wang, Haolun, Xu, Tingjun, Pamucar, Dragan, Li, Xuxiang, and Feng, Liangqing
- Subjects
MULTIPLE criteria decision making ,DECISION making - Abstract
The purpose of this manuscript is to develop a novel MAIRCA (Multi-Attribute Ideal-Real Comparative Analysis) method to solve the MCDM (Multiple Criteria Decision-Making) problems with completely unknown weights in the q-rung orthopair fuzzy (q-ROF) setting. Firstly, the new concepts of q-ROF Lance distance are defined and some related properties are discussed in this paper, from which we establish the maximizing deviation method (MDM) model for q-ROF numbers to determine the optimal criteria weight. Then, the Lance distance-based MAIRCA (MAIRCA-L) method is designed. In it, the preference, theoretical and real evaluation matrices are calculated considering the interaction relationship in q-ROF numbers, and the q-ROF Lance distance is applied to obtain the gap matrix. Finally, we manifest the effectiveness and advantage of the q-ROF MAIRCA-L method by two numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
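The classical Lance (Lance-Williams) distance, which the paper lifts to q-rung orthopair fuzzy numbers, averages the coordinate-wise relative differences |x_i - y_i| / (x_i + y_i); for positive data each term lies below 1, which makes the distance insensitive to scale. Its crisp form is sketched here (the q-ROF generalization in the paper differs in detail; the vectors are toy data):

```python
def lance_distance(x, y):
    """Classical Lance distance between two positive vectors:
    the mean of |x_i - y_i| / (x_i + y_i) over all coordinates."""
    assert len(x) == len(y)
    return sum(abs(a - b) / (a + b) for a, b in zip(x, y)) / len(x)

u = (3.0, 1.0, 2.0)
v = (1.0, 1.0, 2.0)
# Only the first coordinate differs: |3 - 1| / (3 + 1) = 0.5, averaged over 3 dims.
assert abs(lance_distance(u, v) - 0.5 / 3) < 1e-12
```

In the MAIRCA-L method this distance replaces the usual Euclidean gap measure between the theoretical and real evaluation matrices.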
38. Levenberg-Marquardt Algorithm Applied for Foggy Image Enhancement.
- Author
-
Curila, Sorin, Curila, Mircea, Curila, Diana, and Grava, Cristian
- Subjects
IMAGE intensifiers ,COST functions ,NONLINEAR estimation ,GAUSS-Newton method ,NONLINEAR functions - Abstract
In this paper, we introduce a novel Model-Based Foggy Image Enhancement using Levenberg-Marquardt non-linear estimation (MBFIELM). It presents a solution for enhancing image quality that has been compromised by homogeneous fog. Given an observation set represented by a foggy image, it is desired to estimate an analytical function, dependent on adjustable variables, that best fits the data. A cost function measures how well the estimated function fits the observation set. Here, we use the Levenberg-Marquardt algorithm, a combination of gradient descent and the Gauss-Newton method, to optimize the non-linear cost function. An inverse transformation then yields an enhanced image. Both visual and quantitative assessments, the latter utilizing a defogged-image quality measure introduced by Liu et al. (2020), are highlighted in the experimental results section. The efficacy of MBFIELM is substantiated by metrics comparable to those of recognized algorithms such as Artificial Multiple Exposure Fusion (AMEF), DehazeNet (a trainable end-to-end system), and Dark Channel Prior (DCP). In some instances the performance indices of AMEF exceed those of our model, while in other situations MBFIELM outperforms these established algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
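The Levenberg-Marquardt iteration named in the abstract above blends Gauss-Newton and gradient-descent steps via a damping factor. A generic least-squares sketch follows; the model, data, and damping schedule are illustrative assumptions, not the paper's MBFIELM cost function:

```python
import numpy as np

def levenberg_marquardt(f, jac, beta0, x, y, lam=1e-2, iters=50):
    """Minimal Levenberg-Marquardt least-squares fit.

    f(beta, x)   -> model predictions
    jac(beta, x) -> Jacobian of the model w.r.t. beta (n_points x n_params)
    """
    beta = np.asarray(beta0, dtype=float)
    for _ in range(iters):
        r = y - f(beta, x)                          # residual vector
        J = jac(beta, x)
        A = J.T @ J
        # Damped normal equations: lam -> 0 gives Gauss-Newton,
        # large lam gives a short gradient-descent-like step.
        step = np.linalg.solve(A + lam * np.eye(len(beta)), J.T @ r)
        candidate = beta + step
        if np.sum((y - f(candidate, x)) ** 2) < np.sum(r ** 2):
            beta, lam = candidate, lam * 0.5        # accept, trust model more
        else:
            lam *= 2.0                              # reject, damp harder
    return beta

# Fit y = a * exp(b * x) to noiseless synthetic data (a=2.0, b=-1.5).
model = lambda b, x: b[0] * np.exp(b[1] * x)
model_jac = lambda b, x: np.column_stack(
    [np.exp(b[1] * x), b[0] * x * np.exp(b[1] * x)])
xs = np.linspace(0.0, 1.0, 20)
ys = model(np.array([2.0, -1.5]), xs)
beta = levenberg_marquardt(model, model_jac, [1.0, -1.0], xs, ys)
```

The adaptive damping is what makes the method robust far from the optimum yet fast near it, which is why it is a common choice for non-linear cost functions like the one described above.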
39. Screening approaches for lung cancer by blood-based biomarkers: Challenges and opportunities.
- Author
-
van den Broek, Daniel and Groen, Harry J.M.
- Subjects
TUMOR markers ,MEDICAL screening ,LUNG cancer ,EARLY detection of cancer ,CANCER-related mortality ,PULMONARY nodules - Abstract
Lung cancer (LC) is one of the leading causes of cancer-related deaths in the world, accounting for 28% of all cancer deaths in Europe. Screening for lung cancer can enable earlier detection of LC and reduce lung cancer mortality, as was demonstrated in several large image-based screening studies such as NELSON and the NLST. Based on these studies, screening is recommended in the US, and in the UK a targeted lung health check program was initiated. In Europe lung cancer screening (LCS) has not been implemented due to limited data on cost-effectiveness in the different health care systems and questions regarding, for example, the selection of high-risk individuals, adherence to screening, management of indeterminate nodules, and risk of overdiagnosis. Liquid biomarkers are considered to have high potential to address these questions by supporting pre- and post-low-dose CT (LDCT) risk assessment, thereby improving the overall efficacy of LCS. A wide variety of biomarkers, including cfDNA, miRNA, proteins and inflammatory markers, have been studied in the context of LCS. Despite the available data, biomarkers are currently not implemented or evaluated in screening studies or screening programs. As a result, it remains an open question which biomarkers will actually improve an LCS program at acceptable cost. In this paper we discuss the current status of different promising biomarkers and the challenges and opportunities of blood-based biomarkers in the context of lung cancer screening. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Lung cancer tumor markers in serous effusions and other body fluids.
- Author
-
Trapé, Jaume, Bérgamo, Silvia, González-Garcia, Laura, and González-Fernández, Carolina
- Subjects
BODY fluids ,LUNG cancer ,PLEURAL effusions ,TUMOR markers ,LUNG tumors ,EXUDATES & transudates - Abstract
From its onset and during its progression, lung cancer may affect various extrapulmonary structures. These include the serous membranes, the pleura and pericardium, and less frequently the central nervous system, with leptomeningeal involvement. In these cases, fluid accumulates in the serous membranes which may contain substances secreted by the tumor. Measuring the concentrations of these substances can provide useful information for elucidating the origin of the fluid accumulation, either in pleural and pericardial effusions or in cerebrospinal fluid. This paper describes the histological types of lung cancer that most frequently affect the serosa and leptomeninges. It also reviews the literature on tumor markers in different fluids and makes recommendations for their interpretation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Correctness Notions for Petri Nets with Identifiers.
- Author
-
van der Werf, Jan Martijn E.M., Rivkin, Andrey, Montali, Marco, and Polyvyanyy, Artem
- Subjects
PETRI nets ,INFORMATION storage & retrieval systems ,ELECTRONIC data processing - Abstract
A model of an information system describes its processes and how resources are involved in these processes to manipulate data objects. This paper presents an extension to the Petri nets formalism suitable for describing information systems in which states refer to object instances of predefined types and resources are identified as instances of special object types. Several correctness criteria for resource- and object-aware information systems models are proposed, supplemented with discussions on their decidability for interesting classes of systems. These new correctness criteria can be seen as generalizations of the classical soundness property of workflow models concerned with process control flow correctness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Waiting Nets: State Classes and Taxonomy.
- Author
-
Hélouët, Loïc and Agrawal, Pranay
- Subjects
PETRI nets ,EQUIVALENCE (Linguistics) ,TIME measurements ,TAXONOMY - Abstract
In time Petri nets (TPNs), time and control are tightly connected: time measurement for a transition starts only when all resources needed to fire it are available. Further, upper bounds on duration of enabledness can force transitions to fire (this is called urgency). For many systems, one wants to decouple control and time, i.e. start measuring time as soon as a part of the preset of a transition is filled, and fire it after some delay and when all needed resources are available. This paper considers an extension of TPN called waiting nets that dissociates time measurement and control. Their semantics allows time measurement to start with incomplete presets, and can ignore urgency when upper bounds of intervals are reached but all resources needed to fire are not yet available. Firing of a transition is then allowed as soon as missing resources are available. It is known that extending bounded TPNs with stopwatches leads to undecidability. Our extension is weaker, and we show how to compute a finite state class graph for bounded waiting nets, yielding decidability of reachability and coverability. We then compare expressiveness of waiting nets with that of other models w.r.t. timed language equivalence, and show that they are strictly more expressive than TPNs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
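The decoupling of time measurement and control described above can be illustrated with a toy sketch. This is an illustrative simplification under assumed names and a simplified clock-capping rule, not the paper's state-class construction:

```python
# Toy contrast between TPN and waiting-net clock semantics.

class Transition:
    def __init__(self, preset, interval):
        self.preset = preset            # names of input places
        self.lo, self.hi = interval     # static firing interval [lo, hi]
        self.clock = 0.0

def fully_enabled(t, marking):
    return all(marking.get(p, 0) > 0 for p in t.preset)

def elapse(t, marking, delta, waiting=False):
    """Advance time by delta.

    TPN rule: the clock runs only while the full preset is marked.
    Waiting-net rule: the clock may run as soon as part of the preset is
    marked, and urgency is suspended until all inputs are available.
    """
    partly_enabled = any(marking.get(p, 0) > 0 for p in t.preset)
    if fully_enabled(t, marking) or (waiting and partly_enabled):
        t.clock = min(t.clock + delta, t.hi)

def can_fire(t, marking):
    return fully_enabled(t, marking) and t.lo <= t.clock <= t.hi
```

With only one of two input places marked, elapsing time leaves the TPN-style clock at zero, while the waiting-net-style clock accumulates time, so the transition may fire as soon as the missing token arrives.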
43. MADLINK: Attentive multihop and entity descriptions for link prediction in knowledge graphs.
- Author
-
Biswas, Russa, Sack, Harald, and Alam, Mehwish
- Subjects
KNOWLEDGE graphs ,RANDOM walks ,FORECASTING ,TRAILS - Abstract
Knowledge Graphs (KGs) comprise interlinked information in the form of entities and the relations between them in a particular domain, and provide the backbone for many applications. However, KGs are often incomplete, as links between entities are missing. Link prediction is the task of predicting these missing links in a KG based on the existing links. Recent years have witnessed many studies on link prediction using KG embeddings, one of the mainstream tasks in KG completion. Most existing methods learn the latent representation of the entities and relations, whereas only a few also consider contextual information and the textual descriptions of the entities. This paper introduces an attentive encoder-decoder based link prediction approach that considers both the structural information of the KG and the textual entity descriptions. A random-walk-based path selection method is used to encapsulate the contextual information of an entity in a KG. The model explores a bidirectional Gated Recurrent Unit (GRU) based encoder-decoder to learn the representation of the paths, whereas SBERT is used to generate the representation of the entity descriptions. The proposed approach outperforms most of the state-of-the-art models and achieves comparable results with the rest when evaluated on the FB15K, FB15K-237, WN18, WN18RR, and YAGO3-10 datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
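The random-walk path selection mentioned above samples relation-entity paths around an entity to capture its graph context. A minimal sketch on a hypothetical miniature KG (the graph, entity names, and uniform-sampling policy are illustrative assumptions, not the paper's exact procedure) is:

```python
import random

def random_walks(graph, start, num_walks=5, walk_len=3, seed=42):
    """Sample relation/entity paths from a toy KG by uniform random walks.

    graph maps an entity to a list of (relation, neighbour) pairs; each
    returned path alternates entities and relations: [e0, r1, e1, r2, e2, ...].
    """
    rng = random.Random(seed)
    paths = []
    for _ in range(num_walks):
        path, node = [start], start
        for _ in range(walk_len):
            edges = graph.get(node)
            if not edges:           # dead end: stop this walk early
                break
            rel, node = rng.choice(edges)
            path += [rel, node]
        paths.append(path)
    return paths

# Hypothetical miniature KG for illustration.
kg = {
    "Berlin": [("capitalOf", "Germany"), ("locatedIn", "Europe")],
    "Germany": [("memberOf", "EU")],
    "EU": [],
    "Europe": [],
}
paths = random_walks(kg, "Berlin")
```

In an approach like the one described, such paths would then be fed as token sequences to a sequence encoder (here, a bidirectional GRU) to learn a contextual representation of the starting entity.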
44. Helio: A framework for implementing the life cycle of knowledge graphs.
- Author
-
Cimmino, Andrea and García-Castro, Raúl
- Subjects
KNOWLEDGE graphs ,LIFE cycles (Biology) ,RDF (Document markup language) ,PRIVATE companies - Abstract
Building and publishing knowledge graphs (KGs) as Linked Data, either on the Web or in private companies, has become a relevant and crucial process in many domains. This process requires users to perform a wide number of tasks conforming to the life cycle of a KG, and these tasks usually involve different, unrelated research topics, such as RDF materialisation or link discovery. There is already a large corpus of tools and methods designed to perform these tasks; however, the lack of one tool that gathers them all leads practitioners to develop ad hoc pipelines that are not generic and, thus, not reusable. As a result, building and publishing a KG is becoming a complex and resource-consuming process. In this paper, a generic framework called Helio is presented. The framework covers a set of requirements elicited from the KG life cycle and provides a tool capable of performing the different tasks required to build and publish KGs. Helio thereby aims to reduce the effort this process requires and to prevent the development of ad hoc pipelines. Furthermore, the Helio framework has been applied in many different contexts, from European projects to research work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. An ontological approach for representing declarative mapping languages.
- Author
-
Iglesias-Molina, Ana, Cimmino, Andrea, Ruckhaus, Edna, Chaves-Fraga, David, García-Castro, Raúl, and Corcho, Oscar
- Subjects
KNOWLEDGE graphs ,PROGRAMMING languages ,DATABASES ,CONCEPT mapping ,ONTOLOGIES (Information retrieval) ,LANGUAGE & languages - Abstract
Knowledge Graphs are currently created using an assortment of techniques and tools: ad hoc code in a programming language, database export scripts, OpenRefine transformations, mapping languages, etc. Focusing on the latter, the wide variety of use cases, data peculiarities, and potential uses has had a substantial impact on how mappings have been created, extended, and applied. As a result, a large number of languages and their associated tools have been created. In this paper, we present the Conceptual Mapping ontology, which is designed to represent the features and characteristics of existing declarative mapping languages for constructing Knowledge Graphs. This ontology is built upon requirements extracted from experts' experience; a thorough analysis of the features and capabilities of current mapping languages, presented as a comparative framework; and the languages' limitations discussed by the community and denoted as Mapping Challenges. The ontology is evaluated to ensure that it meets these requirements and has no inconsistencies, pitfalls or modelling errors, and it is publicly available online along with its documentation and related resources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. A systematic overview of data federation systems.
- Author
-
Gu, Zhenzhen, Corcoglioniti, Francesco, Lanti, Davide, Mosca, Alessandro, Xiao, Guohui, Xiong, Jing, and Calvanese, Diego
- Subjects
SEMANTIC Web ,RDF (Document markup language) ,WEB databases ,REFERENCE sources ,DATA security ,APPLICATION program interfaces ,RESEARCH personnel - Abstract
Data federation addresses the problem of uniformly accessing multiple, possibly heterogeneous data sources, by mapping them into a unified schema, such as an RDF(S)/OWL ontology or a relational schema, and by supporting the execution of queries, like SPARQL or SQL queries, over that unified schema. Data explosion in volume and variety has made data federation increasingly popular in many application domains. Hence, many data federation systems have been developed in industry and academia, and it has become challenging for users to select suitable systems to achieve their objectives. In order to systematically analyze and compare these systems, we propose an evaluation framework comprising four dimensions: (i) federation capabilities, i.e., query language, data source, and federation techniques; (ii) data security, i.e., authentication, authorization, auditing, encryption, and data masking; (iii) interface, i.e., graphical interface, command line interface, and application programming interface; and (iv) development, i.e., main development language, deployment, commercial support, open source, and release. Using this framework, we thoroughly studied 51 data federation systems from the Semantic Web and Database communities. This paper shares the results of our investigation and aims to provide reference material and insights for users, developers and researchers selecting or further developing data federation systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. A concurrent language for modelling agents arguing on a shared argumentation space.
- Author
-
Bistarelli, Stefano and Taticchi, Carlo
- Abstract
While agent-based modelling languages naturally implement concurrency, the currently available languages for argumentation do not allow this type of interaction to be modelled explicitly. In this paper we introduce a concurrent language for handling agents that argue and communicate using a shared argumentation space. We also show how to perform high-level operations like persuasion and negotiation through basic belief revision constructs, and present a working implementation of the language and the associated web interface. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Tissue Factor and Its Cerebrospinal Fluid Protein Profiles in Parkinson’s Disease.
- Author
-
Zimmermann, Milan, Fandrich, Madeleine, Jakobi, Meike, Röben, Benjamin, Wurster, Isabel, Lerche, Stefanie, Schulte, Claudia, Zimmermann, Shahrzad, Deuschle, Christian, Schneiderhan-Marra, Nicole, Joos, Thomas O., Gasser, Thomas, and Brockmann, Kathrin
- Subjects
THROMBOPLASTIN ,LEWY body dementia ,AGE factors in disease ,OLDER patients ,PATIENTS - Abstract
Prior investigations have elucidated pathophysiological interactions involving blood coagulation and neurodegenerative diseases. These interactions pertain to age-related effects and a mild platelet antiaggregant function of exogenous α-Synuclein. Our study sought to explore whether cerebrospinal fluid (CSF) levels of tissue factor (TF), the initiator of the extrinsic pathway of hemostasis, differ between controls (CON) and patients with Parkinson's disease (PD) and dementia with Lewy bodies (DLB), considering that these conditions represent a spectrum of α-Synuclein pathology. We further investigated whether TF levels are associated with longitudinal progression in PD. We examined CSF levels of TF in 479 PD patients, 67 patients diagnosed with DLB, and 16 CON in order to evaluate potential continuum patterns among DLB, PD, and CON. Of the 479 PD patients, 96 carried a GBA1 variant (PD-GBA1), while the 383 non-carriers were classified as PD wildtype (PD-WT). We considered both longitudinal clinical data as well as CSF measurements of common neurodegenerative markers (amyloid-β 1-42, h-Tau, p-Tau, NfL, α-Synuclein). Kaplan-Meier survival and Cox regression analysis stratified by TF tertile levels was conducted. Higher CSF levels of TF were associated with an older age at examination in PD and a significantly later onset of postural instability in PD-GBA1. TF levels were lower in male vs. female PD. DLB-GBA1 exhibited the lowest TF levels, followed by PD-GBA1, with CON showing the highest levels. TF, as a representative of blood hemostasis, could be an interesting CSF candidate to further explore in PD and DLB.
Parkinson's disease is a common age-related condition, primarily affecting older individuals. However, it shows a wide range of symptoms and clinical courses, influenced by genetic mutations, neuroinflammatory processes and lifestyle factors. Research into the disease's mechanisms is important for developing new therapies that could potentially slow its progression. Early diagnosis is also essential, as new disease-modifying therapies are most effective when started at an early stage of the disease. In this paper, we focus on a protein called tissue factor, which plays a role in both blood coagulation and neuroinflammation. Proteins involved in blood coagulation also exhibit an increase in blood concentration with higher age. Also, a subtle platelet antiaggregant function of exogenous α-Synuclein was found, a protein which aggregates in the brain of patients with Parkinson's disease. Additionally, higher tissue factor levels were found in plaques of proteins (amyloid-β 1-42) found in Alzheimer's disease. Thus, tissue factor could be a promising biomarker candidate for neurodegenerative diseases. We analyzed the concentration of this protein in the cerebrospinal fluid of 479 patients with Parkinson's disease, 16 control participants, and 67 patients with dementia with Lewy bodies, a sub-type of Parkinson's disease with exceptionally high levels of α-Synuclein in the brain. Our findings showed the lowest levels of tissue factor in patients with dementia with Lewy bodies, followed by those with typical Parkinson's disease, and the highest levels in controls. Additionally, older patients had higher tissue factor levels than younger patients, and levels were lower in male patients compared to female patients. Thus, measuring tissue factor levels could help in diagnosing Parkinson's disease. Further studies, especially with larger control groups, are needed to confirm these results. Additionally, exploring the connections between blood coagulation and Parkinson's disease could improve our understanding of the disease's mechanisms, potentially leading to new pharmaceutical developments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Analysis of a newly developed multidisciplinary program in the Middle East informed by the recently revised spina bifida guidelines.
- Author
-
Collier, Talia, Castillo, Jonathan, Thornton, Lisa, Vallasciani, Santiago, and Castillo, Heidi
- Subjects
SPINA bifida ,ELECTRONIC health records ,CAUDAL regression syndrome ,DATABASES ,MYELOMENINGOCELE - Abstract
This paper describes the development and characteristics of a multidisciplinary spina bifida clinic in Qatar considering the recently revised and globally available Guidelines for the Care of People with Spina Bifida (GCPSB). A retrospective chart review was performed on individuals in Sidra's multidisciplinary spina bifida clinic database from January 2019 to June 2020. Their electronic health records were reviewed for demographics, as well as neurosurgical, urologic, rehabilitation, and orthopedic interventions. There were 127 patients in the database; 117 met inclusion criteria for diagnoses of myelomeningocele, meningocele, sacral agenesis/caudal regression, and/or spinal lipoma. Generally, Qatar is following GCPSB recommendations for multidisciplinary care. Consanguineous relationships, difficulties with access to urological and rehabilitation supplies and equipment, school access, and variable timing of neurosurgical closure were areas that demonstrated differences from GCPSB recommendations due to barriers in implementation. The GCPSB recommendations are applicable in an international setting such as Qatar. Despite a few barriers in implementing some of the recommendations, this new multidisciplinary spina bifida clinic demonstrates alignment with many of the GCPSB guidelines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. The Role of Diet in Parkinson's Disease.
- Author
-
Tosefsky, Kira N., Zhu, Julie, Wang, Yolanda N., Lam, Joyce S.T., Cammalleri, Amanda, and Appel-Cresswell, Silke
- Subjects
DIETARY patterns ,MEDITERRANEAN diet ,DIETARY supplements ,PARKINSON'S disease ,FOOD habits - Abstract
The aim of this review is to examine the intersection of Parkinson's disease (PD) with nutrition, to identify best nutritional practices based on current evidence, and to identify gaps in the evidence and suggest future directions. Epidemiological work has linked various dietary patterns and food groups to changes in PD risk; however, fewer studies have evaluated the role of various diets, dietary components, and supplements in the management of established PD. There is substantial interest in exploring the role of diet-related interventions in both symptomatic management and potential disease modification. In this paper, we evaluate the utility of several dietary patterns, including the Mediterranean (MeDi), Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND), Alternative Healthy Eating Index (AHEI), vegan/vegetarian, and ketogenic diet in persons with PD. Additionally, we provide an overview of the evidence relating several individual food groups and nutritional supplements to PD risk, symptoms and progression. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF