142 results
Search Results
2. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
- Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
- *ALGORITHMS, *SYSTEMS design, *CYBER physical systems, *COMPUTER scheduling, *ARTIFICIAL intelligence, *ARTIFICIAL neural networks, *FIRST in, first out (Queuing theory)
- Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
3. LGA launches election White Paper.
- Subjects
- *BUREAUCRACY, *ARTIFICIAL intelligence
- Published
- 2024
4. AI and Responsible Authorship.
- Author
- Pennock, Robert T.
- Subjects
- *ARTIFICIAL intelligence, *CHATBOTS, *GENERATIVE artificial intelligence, *LANGUAGE models, *SCIENTIFIC knowledge, *ARTIFICIAL neural networks
- Abstract
This article explores the question of whether artificial intelligence (AI) should be considered a coauthor of scientific papers. It provides a historical overview of AI, from its early beginnings to its current capabilities. The author argues that while AI can generate text, it lacks the ability to make new discoveries or truly understand concepts. The article also raises ethical concerns about the accuracy and truthfulness of AI-generated content. It concludes that while AI can assist in research, the responsibility for its use lies with human researchers, as AI tools are not yet capable of taking ethical responsibility for the research. [Extracted from the article]
- Published
- 2024
5. Hostile Spaces.
- Author
- Bickers, Patricia
- Subjects
- *ARTISTS, *DRAWING, *ART exhibitions, *ARTIFICIAL intelligence
- Abstract
An interview with British artist Tim Head is presented. Topics discussed include Head's creation of the pen-and-ink drawings "The Furies" during the pandemic lockdown, the exhibition "How It Is" wherein Head showcased his works including "Siren 1," and the increasing influence of artificial intelligence on artists' lives. Also discussed are Head's "The Tyranny of Reason" exhibition at the Institute of Contemporary Arts in 1985 and his series of tonal collages on paper.
- Published
- 2024
6. A state-of-the-art review on the utilization of machine learning in nanofluids, solar energy generation, and the prognosis of solar power.
- Author
- Singh, Santosh Kumar, Tiwari, Arun Kumar, and Paliwal, H.K.
- Subjects
- *DEEP learning, *MACHINE learning, *SOLAR energy, *HEAT exchanger efficiency, *NANOFLUIDS, *ARTIFICIAL intelligence, *PEROVSKITE
- Abstract
In the contemporary data-driven era, the fields of machine learning, deep learning, big data, statistics, and data science are essential for forecasting outcomes and getting insights from data. This paper looks at how machine learning approaches can be used to anticipate solar power generation, assess heat exchanger heat transfer efficiency, and predict the thermo-physical properties of nanofluids. The review specifically focuses on the potential use of machine learning in solar thermal applications, perovskites, and photovoltaic power forecasting. Predictions of nanofluid characteristics and device performance may be more accurately made with the development of machine learning algorithms. The use of machine learning in the creation of new perovskites and the assessment of their effectiveness and stability is also included in the review. Additionally, the paper explores developments in artificial intelligence, particularly deep learning, in this area and offers insights into techniques for forecasting solar power, including PV production, cloud motion, and weather classification. [ABSTRACT FROM AUTHOR]
- Published
- 2023
7. Use of Open Access AI in teaching classical antiquity. A methodological proposal.
- Author
- Díaz-Sánchez, Carlos and Chapinal-Heras, Diego
- Subjects
- *DIGITAL humanities, *ARTIFICIAL intelligence, *TECHNOLOGICAL innovations, *PHILOLOGY, *METHODOLOGY
- Abstract
The aim of this contribution is to present an innovative approach to the use of Open Access AI in teaching the Classical era at high school and university level. The paper first explains the growing interest in AI technology and its main applications in the subjects of philology, history and other related areas. The following sections show the different steps of the proposal, which uses the Midjourney program, as well as its pros and cons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
8. Building Trust in AI Farming Tools.
- Author
- Joosse, Tess
- Subjects
- *DECISION support systems, *AGRICULTURAL implements, *ARTIFICIAL intelligence, *MACHINE learning, *AGRICULTURE, *AGRICULTURAL technology, *PRECISION farming
- Abstract
Precision agriculture tools like decision support systems increasingly use machine‐learning algorithms and other types of artificial intelligence (AI) to analyze large quantities of agricultural data and provide recommendations to producers and crop advisers. However, several barriers threaten adoption of these tools. Three papers in the recent Agronomy Journal special section, "Machine Learning in Agriculture," explore this phenomenon and offer solutions and opportunities for building trust in these technologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
9. Use of Open Access AI in teaching classical antiquity. A methodological proposal.
- Author
- Díaz-Sánchez, Carlos and Chapinal-Heras, Diego
- Subjects
- *CLASSICAL antiquities, *ARTIFICIAL intelligence, *PHILOLOGY, *DIGITAL humanities, *EDUCATIONAL technology
- Abstract
The aim of this contribution is to present an innovative approach to the use of Open Access AI in teaching the Classical era at high school and university level. The paper first explains the growing interest in AI technology and its main applications in the subjects of philology, history and other related areas. The following sections show the different steps of the proposal, which uses the Midjourney program, as well as its pros and cons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
10. AudioMNIST: Exploring Explainable Artificial Intelligence for audio analysis on a simple benchmark.
- Author
- Becker, Sören, Vielhaben, Johanna, Ackermann, Marcel, Müller, Klaus-Robert, Lapuschkin, Sebastian, and Samek, Wojciech
- Subjects
- *ARTIFICIAL neural networks, *ARTIFICIAL intelligence, *SEX (Biology), *SPOKEN English, *FEATURE selection
- Abstract
Explainable Artificial Intelligence (XAI) is targeted at understanding how models perform feature selection and derive their classification decisions. This paper explores post-hoc explanations for deep neural networks in the audio domain. Notably, we present a novel Open Source audio dataset consisting of 30,000 audio samples of English spoken digits which we use for classification tasks on spoken digits and speakers' biological sex. We use the popular XAI technique Layer-wise Relevance Propagation (LRP) to identify relevant features for two neural network architectures that process either waveform or spectrogram representations of the data. Based on the relevance scores obtained from LRP, hypotheses about the neural networks' feature selection are derived and subsequently tested through systematic manipulations of the input data. Further, we take a step beyond visual explanations and introduce audible heatmaps. We demonstrate the superior interpretability of audible explanations over visual ones in a human user study. • We present a novel audio dataset consisting of 30,000 audio samples of spoken digits. • We use LRP to explain predictions of two different models in the audio domain. • We confirm hypotheses about the neural networks' use of features from explanations. • We present audible explanations and demonstrate their superior interpretability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
11. Applying artificial intelligence for spiral blade pitch distance optimization installed on the tubes containing nanofluid embedded in PCM-filled solar panel: A two-phase simulation model.
- Author
- Liu, Yuanyuan
- Subjects
- *SOLAR panels, *NANOFLUIDS, *ENERGY storage, *ARTIFICIAL intelligence, *PHASE change materials, *HEAT transfer coefficient
- Abstract
The use of phase change materials (PCMs) to store energy in solar systems can be very useful to augment their efficiency. This paper presents a numerical study on the effect of using PCM in a thermal solar panel system. Four tubes containing alumina/water nanofluids (NFs) flow are used in this system, and NFs flow is analyzed. A spiral blade is employed on the tubes. By changing the pitch distance (d-P) of the spiral blade, the variations of the molten PCM (MPCM) and the temperature of different components of the solar system are determined transiently for 25 min. The numerical study is carried out using the FEM. The results demonstrate that the outlet temperature (T-O) is enhanced from 293 K at time 0 s to a temperature of more than 327 K at 1500 s. The value of the heat transfer coefficient (HTC) reaches a value of more than 44 W/m²K up to 25 min. The value of the HTC is lower most of the time for a d-P of 20 mm than for other distances. Changing the d-P from 20 to 30 mm can vary the T-O by 0.5 degrees. The minimum of MPCM corresponds to a d-P of 20 mm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Evaluating magnetic fields using deep learning.
- Author
- Rahman, Mohammad Mushfiqur, Khan, Arbaaz, Lowther, David, and Giannacopoulos, Dennis
- Subjects
- *DEEP learning, *MAGNETIC fields, *RECURRENT neural networks, *FINITE difference method, *SUPERVISED learning, *ARTIFICIAL intelligence, *FINITE element method
- Abstract
Purpose: The purpose of this paper is to develop surrogate models, using deep learning (DL), that can facilitate the application of EM analysis software. In the current status quo, electrical systems can be found in an ever-increasing range of products that are part of everyone's daily lives. With the advances in technology, industries such as the automotive, communications and medical devices have been disrupted with new electrical and electronic systems. The innovation and development of such systems with increasing complexity over time has been supported by the increased use of electromagnetic (EM) analysis software. Such software enables engineers to virtually design, analyze and optimize EM systems without the need for building physical prototypes, thus helping to shorten the development cycles and consequently cut costs. Design/methodology/approach: The industry standard for simulating EM problems is using either the finite difference method or the finite element method (FEM). Optimization of the design process using such methods requires significant computational resources and time. With the emergence of artificial intelligence, along with specialized tools for automatic differentiation, the use of DL has become computationally much more efficient and cheaper. These advances in machine learning have ushered in a new era in EM simulations where engineers can compute results much faster while maintaining a certain level of accuracy. Findings: This paper proposes two different models that can compute the magnetic field distribution in EM systems. The first model is based on a recurrent neural network, which is trained through a data-driven supervised learning method. The second model is an extension of the first with the incorporation of additional physics-based information into the authors' model. Such a DL model, which is constrained by the laws of physics, is known as a physics-informed neural network. The solutions, when compared with the ground truth computed using FEM, show promising accuracy for the authors' DL models while reducing the computation time and resources required, as compared to previous implementations in the literature. Originality/value: The paper proposes a neural network architecture that is trained with two different learning methodologies, namely, supervised and physics-based. The working of the network along with the different learning methodologies is validated over several EM problems with varying levels of complexity. Furthermore, a comparative study is performed regarding performance accuracy and computational cost to establish the efficacy of different architectures and learning methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
13. Data sharing concepts: a viable system model diagnosis.
- Author
- Perko, Igor
- Subjects
- *INFORMATION sharing, *SMART devices, *PROCESS capability, *ARTIFICIAL intelligence, *NETWORK governance, *DATA science, *PROPERTY rights
- Abstract
Purpose: Artificial intelligence (AI) reasoning is fuelled by high-quality, detailed behavioural data. These can usually be obtained by the biometrical sensors embedded in smart devices. The currently used data collecting approach, where data ownership and property rights are taken by the data scientists, designers of a device or a related application, delivers multiple ethical, sociological and governance concerns. In this paper, the author opens a systemic examination of a data sharing concept in which data producers execute their data property rights. Design/methodology/approach: Since the data sharing concept delivers a substantially different alternative, it needs to be thoroughly examined from multiple perspectives, among them the ethical, social and feasibility perspectives. At this stage, theoretical examination modes in the form of literature analysis and mental model development are performed. Findings: Data sharing concepts, framework, mechanisms and swift viability are examined. The author determined that data sharing could lead to virtuous data science by augmenting data producers' capacity to govern their data and regulators' capacity to interact in the process. Truly interdisciplinary research is proposed to follow up on this research. Research limitations/implications: Since the research proposal is theoretical, it may not provide direct applicative value but is largely focussed on fuelling the research directions. Practical implications: For researchers, data sharing concepts will provide an alternative approach and help resolve multiple ethical considerations related to the internet of things (IoT) data collecting approach. For practitioners in data science, they will provide numerous new challenges, such as distributed data storing, distributed data analysis and intelligent data sharing protocols. Social implications: Data sharing may pose significant implications in research and development. Since ethical, legislative, moral and trust-related issues are managed in the negotiation process, data can be shared freely, which in a practical sense expands the data pool for virtuous research in social sciences. Originality/value: The paper opens new research directions of data sharing concepts and space for a new field of research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
14. AI as a Tool to Manage Contracts and Challenges in Applying Legal Tech to Contracts Management.
- Author
- MARTINELLI, Silvia
- Subjects
- *LEGAL professions, *CONTRACTS, *MANAGEMENT contracts, *ARTIFICIAL intelligence, *CONTRACT management, *HIGH technology industries
- Abstract
The article analyses the consequences and challenges of applying new technologies to contracts. The change occurring in the application of AI and, broadly, automation systems to contracts affects the creation of the contract, the professionals involved, the market, regulations, and regulatory systems, as well as contract law. This change has two main consequences. The first is the application to the legal sector and to the contract of a larger phenomenon: the servitization or productification/mercification of professional performance. The second disruptive factor introduced by these technologies, less underlined, is the possibility of using data to manage, conclude and analyse contracts. Contracts are no longer represented by paper documents. The contract is moving from document to data, represented as a set of fluid information, and it can be organized and managed by technology for the efficient (and peaceful) relationship of the parties. The paper describes this contract revolution by analysing: (1) the effects on legal products, the legal market, and legal professionals; (2) the role of the legal tech company as a new intermediary; (3) new possibilities for compliance and regtech; (4) applications to asymmetries and clause evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
15. THE YEAR OF THE AI CONVERSATION.
- Author
- ORNES, STEPHEN
- Subjects
- *ARTIFICIAL intelligence, *CHATGPT, *MILITARY medicine, *RESEARCH personnel, *PROBLEM solving
- Abstract
Generative AI tools, such as ChatGPT, have gained popularity and have been used for various purposes, including writing papers, generating music, and solving math problems. However, these tools can also produce misinformation and amplify biases. The development of AI systems has a long history, with early programs like ELIZA attempting to simulate conversation. While AI systems like ChatGPT show promise in reasoning, they are not equivalent to human thinking. The field of generative AI is evolving, with researchers exploring applications in fields like medicine and the military. However, there are concerns about the risks associated with these tools, such as cybersecurity and privacy issues. While AI can enhance efficiency, human ingenuity is still necessary to address the challenges that arise. [Extracted from the article]
- Published
- 2024
16. AI WRITES ABOUT ITSELF.
- Author
- Thunström, Almira Osmanovic
- Subjects
- *ARTIFICIAL intelligence
- Abstract
But it dawned on me that, although a lot of academic papers had been written about GPT-3, and with the help of GPT-3, none that I could find had GPT-3 as the main author. If GPT-3 is producing the content, the documentation has to be visible without throwing off the flow of the text; it would look strange to add the method section before every single paragraph that was generated by the AI. On a rainy afternoon earlier this year, I logged into my OpenAI account and typed a simple instruction for the research company's artificial-intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text. [Extracted from the article]
- Published
- 2022
17. AI bot generates artificial authors on research papers.
- Subjects
- *ARTIFICIAL intelligence, *AUTHORS
- Abstract
The article reports that the artificial intelligence (AI) bot ChatGPT has been listed as an author on several research papers and publishers mention that ChatGPT does not meet the definition of an "author" as it cannot take responsibility for the content.
- Published
- 2023
18. A supervised data augmentation strategy based on random combinations of key features.
- Author
- Ding, Yongchang, Liu, Chang, Zhu, Haifeng, and Chen, Qianjun
- Subjects
- *DATA augmentation, *CONVOLUTIONAL neural networks, *IMAGE recognition (Computer vision), *ARTIFICIAL intelligence, *FEATURE extraction, *CLASSIFICATION
- Abstract
Data augmentation strategies have always been important in machine learning techniques and play a unique role in model performance optimization processes. Therefore, in recent years, these techniques have become popular in the artificial intelligence field. In this paper, a new data augmentation strategy is proposed based on the interpretation algorithm of deep convolutional neural networks, i.e., constructing new training samples by deeply exploiting key features extracted from interpretable networks to achieve sample augmentation. Thus, a novel supervised data augmentation approach known as Supervised Data Augmentation–Key Feature Extraction (SDA-KFE) was proposed. By introducing the Neural Network Interpreter-Segmentation Recognition and Interpretation (NNI-SRI) algorithm, an augmentation strategy is proposed that can balance the high accuracy and high robustness of the final model while ensuring a large amount of data augmentation. The advantages of the SDA-KFE algorithm are mainly reflected in the following aspects. First, it is easy to implement. This algorithm is implemented based on the lightweight NNI-SRI algorithm, which lays the foundation for the implementation of SDA-KFE so that it can be easily implemented on convolutional neural networks. Second, this model, which is widely applicable, can be applied to almost any deep convolutional network. Through research and experiments on this proposed algorithm, SDA-KFE can be applied in graphical image binary classification and multiclassification models. Third, SDA-KFE can rapidly construct data samples with diverse variations. Under the premise of determining the classification labels of the generated samples, the distribution of the feature unit composition of the samples can be controlled. Compared with traditional data augmentation methods, SDA-KFE can control the direction of the model performance, i.e., the balance between the pursuit of high accuracy and robust performance of the model. Therefore, the novel supervised augmentation approach proposed in this paper is relevant for optimizing deep convolutional neural networks, solving model overfitting, augmenting data types, etc. The data augmentation algorithm proposed in this paper can be regarded as a useful supplement to traditional data augmentation methods, such as horizontal or vertical image flipping, cropping, color transformation, extension and rotation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
19. The contemporary state of big data analytics and artificial intelligence towards intelligent supply chain risk management: a comprehensive review.
- Author
- Shah, Harsh M., Gardas, Bhaskar B., Narwane, Vaibhav S., and Mehta, Hitansh S.
- Subjects
- *SUPPLY chain management, *BIG data, *ARTIFICIAL intelligence, *STRUCTURAL equation modeling, *RISK managers
- Abstract
Purpose: This paper aims to conduct a systematic literature review of the research in the field of Artificial Intelligence (AI) and Big Data Analytics (BDA) in Supply Chain Risk Management (SCRM). Finally, future research directions in this field have been suggested. Design/methodology/approach: The papers were searched using a set of keywords in the SCOPUS database. These papers were filtered using the Title abstract keywords principle. Further, more papers were found using the forward-backward referencing method. The finalized papers were then classified into eight categories. Findings: The previous papers in AI and BDA in SCRM were studied. These papers emphasized various modelling and application techniques for AI and BDA in making the supply chain (SC) more resilient. It was found that more research has been done into conceptual modelling rather than real-life applications. It was seen that the use of AI-based techniques and structural equation modelling was prominent. Practical implications: AI and BDA help build the risk profile, which will guide the decision-makers and risk managers make their decisions quickly and more effectively, reducing the risks on the SC and making it resilient. Other than this, they can predict the risks in disasters, epidemics and any further disruption. They also help select the suppliers and location of the various elements of the SC to reduce the lead times. Originality/value: The paper suggests various future research directions that fellow researchers can explore. None of the previous research examined the role of BDA and AI in SCRM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
20. Three-way approximation of decision granules based on the rough set approach.
- Author
- Stepaniuk, Jaroslaw and Skowron, Andrzej
- Subjects
- *ROUGH sets, *ARTIFICIAL intelligence, *PROCESS optimization, *APPROXIMATE reasoning, *DECISION making, *GRANULAR computing
- Abstract
We discuss the three-way rough set based approach for approximation of decision granules in Intelligent Systems (IS's). The novelty of the approach is in presenting a new concept of approximation space which is based on advanced reasoning tools. Many generalisations of the rough set approaches developed over the years are mainly concentrated around reasoning concerning (partial) inclusion of sets. However, such approximation spaces are not satisfactory for dealing with important aspects of approximate reasoning by IS's aiming to construct high-quality approximations of compound decision granules. We demonstrate a number of examples supporting this claim. In particular, solving the problems considered in the paper involves complex algorithmic optimization processes directed by reasoning tools that support searching for (semi-)optimal approximations of decision granules in huge spaces. This paper is a step toward developing tools for derivation of granules supporting IS's in perceiving situations to a degree satisfactory for making the right decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
21. What perceptron neural networks are (not) good for?
- Author
- Calude, Cristian S., Heidari, Shahrokh, and Sifakis, Joseph
- Subjects
- *QUANTUM annealing, *ARTIFICIAL intelligence, *BOOLEAN functions, *SET functions, *QUANTUM computing, *COMPLEXITY (Philosophy), *SUCCESS
- Abstract
Perceptron Neural Networks (PNNs) are essential components of intelligent systems because they produce efficient solutions to problems of overwhelming complexity for conventional computing methods. Many papers show that PNNs can approximate a wide variety of functions, but comparatively very few discuss their limitations, the scope of this paper. To this aim, we define two classes of Boolean functions – sensitive and robust – and prove that an exponentially large set of sensitive functions is exponentially difficult to compute by multi-layer PNNs (hence incomputable by single-layer PNNs). A comparatively large set of functions in the second class, but not all, are computable by single-layer PNNs. Finally, we used polynomial threshold PNNs to compute all Boolean functions with quantum annealing and present in detail a QUBO computation on the D-Wave Advantage. These results confirm that the successes of PNNs, or lack of them, are in part determined by properties of the learned data sets and suggest that sensitive functions may not be (efficiently) computed by PNNs. [ABSTRACT FROM AUTHOR]
- Published
- 2023
22. Effective human–AI work design for collaborative decision-making.
- Author
- Jain, Ruchika, Garg, Naval, and Khera, Shikha N.
- Subjects
- *WORK design, *TRUST, *DIVISION of labor, *DECISION making, *ARTIFICIAL intelligence
- Abstract
Purpose: With the increase in the adoption of artificial intelligence (AI)-based decision-making, organizations are facilitating human–AI collaboration. This collaboration can occur in a variety of configurations with the division of labor, with differences in the nature of interdependence being parallel or sequential, along with or without the presence of specialization. This study intends to explore the extent to which humans express comfort with different models of human–AI collaboration. Design/methodology/approach: Situational response surveys were adopted to identify configurations where humans experience the greatest trust, role clarity and preferred feedback style. Regression analysis was used to analyze the results. Findings: Some configurations contribute to greater trust and role clarity with AI as a colleague. There is no configuration in which AI as a colleague produces lower trust than humans. At the same time, the human distrust in AI may be less about human vs AI and more about the division of labor within which humans and AI work. Practical implications: The study explores the extent to which humans express comfort with different models of an algorithm as a partner. It focuses on work design and the division of labor between humans and AI. The findings of the study emphasize the role of work design in human–AI collaboration. There are human–AI work designs that should be avoided, as they reduce trust. Organizations need to be cautious in considering the impact of design on building trust and gaining acceptance of technology. Originality/value: The paper's originality lies in focusing on the design of collaboration rather than on the performance of the team. [ABSTRACT FROM AUTHOR]
- Published
- 2023
23. Multi-modal fusion network with complementarity and importance for emotion recognition.
- Author
- Liu, Shuai, Gao, Peng, Li, Yating, Fu, Weina, and Ding, Weiping
- Subjects
- *EMOTION recognition, *ARTIFICIAL intelligence, *MACHINE learning, *DEEP learning
- Abstract
Multimodal emotion recognition, that is, emotion recognition that uses machine learning to generate multi-modal features from videos, has become a research hotspot in the field of artificial intelligence. Traditional multi-modal emotion recognition methods only simply connect multiple modalities, the interactive utilization rate of modal information is low, and they cannot reflect the real emotion well when modal features conflict. This article first proves that effective weighting can improve the discrimination between modalities. Therefore, this paper takes into account the importance differences between multiple modalities and assigns weights to them through an importance attention network. At the same time, considering that there is a certain complementary relationship between the modalities, this paper constructs an attention network with complementary modalities. Finally, the reconstructed features are fused to obtain a multi-modal feature with good interaction. The method proposed in this paper is compared with traditional methods on public datasets. The test results show that our method performs well in both the accuracy rate and confusion matrix metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
24. A New Frontier: AI and Ancient Language Pedagogy.
- Author
- Ross, Edward A. S.
- Subjects
- *ARTIFICIAL intelligence, *CHATGPT, *CLASSICAL languages, *GREEK language, *LATIN language, *SANSKRIT language
- Abstract
In November 2022, ChatGPT 3.5 was released on a public research preview, gaining notoriety for its ability to pull from a vast body of information to create coherent and digestible bodies of text that accurately respond to queries (OpenAI, 2022). It is able to recognise the grammar and vocabulary of ancient languages, translate passages, and compose texts at an alarmingly accurate and rapid rate. For teachers, this AI has had mixed reviews. Some fear its ability to produce well-written work effortlessly, while others are excited by its abilities to push the boundaries of current teaching practices. This paper explores how well ChatGPT explains grammatical concepts, parses inflected forms, and translates Classical Latin, Ancient Greek, and Classical Sanskrit. Overall, ChatGPT is rather good at working with Classical Latin and Sanskrit, but its abilities with Ancient Greek are deeply problematic. Although it is quite flawed at this time, ChatGPT, when used properly, could become a useful tool for ancient language study. With proper guiding phrases, students could use this AI to practise vocabulary, check their translations, and rephrase grammatical concepts. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Investor preference analysis: An online optimization approach with missing information.
- Author
-
Hu, Xiao, Chen, Yiqing, Ren, Long, and Xu, Zeshui
- Subjects
- *
INVESTORS , *RECOMMENDER systems , *INVESTMENT advisors , *ARTIFICIAL intelligence , *MACHINE learning , *MARKET segmentation - Abstract
Deriving an investor's preference is vital for investment advisors and online lending platforms in targeted marketing strategies, e.g., market segmentation and financial product recommendation. However, investor preference analysis usually depends on judgments from human investment experts, which are inherently subjective and costly. Intelligent investment advisors (or Robo-advisors), supported by cutting-edge technologies such as machine learning and artificial intelligence, have been established to address these issues. This paper employs an online optimization framework to obtain investors' preferences for further financial product recommendations. The proposed method allows us to update an investor's preference on newly arriving data sets and to handle records in which many values are missing. Unlike black-box machine learning approaches, our method can provide more managerial insight into why one financial product/service is preferred. A real-world data set from an online financial platform is used to compare against existing approaches; our method shows stronger and more stable performance across different data-missing types and missing degrees, followed by a recommendation system application. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Machine Learning and the Relevance of IP Rights: An Account of Transparency Requirements for AI.
- Author
-
BAYAMLIOĞLU, Emre
- Subjects
- *
MACHINE learning , *PROTECTION of trade secrets , *INTELLECTUAL property , *GENERAL Data Protection Regulation, 2016 , *ARTIFICIAL intelligence - Abstract
As a sub-branch of Artificial Intelligence (AI), Machine Learning (ML) is an inductive method of problem solving which can accomplish tasks that once required human participation and discretion. As governments and other institutions increasingly deploy ML-based systems to predict, rate and act upon individuals' behaviour or personal traits, there is growing political and legal demand for transparency so that the outcomes of these systems can be interpretable, and thus contestable where necessary. Previous research has revealed that transparency in automated decision-making (ADM) entails not only openness and disclosure in the conventional sense but further administrative and technical measures such as the algorithmic audit or black-box testing of these systems. The implementation of such a broadened scope of transparency inevitably involves the reproduction and/or adaptation of the relevant informational elements and components of the ML-based systems. This gives rise to the questions: (1) to what extent reliance on Intellectual Property (IP) rights could excuse automated decision-makers from the obligation of making transparent and contestable decisions, e.g., under Article 22 of the General Data Protection Regulation (GDPR); (2) what are the counter-arguments based on statutory exceptions and limitations restricting IP rights; and (3) what may be the possible solutions either within the IP regime or through regulatory intervention. Overall, the paper aims to obtain a macro-view of the potential areas of conflict between the possible transparency measures/tools and the relevant IP regimes - i.e., copyright, sui generis database right and trade secret protection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
27. The impact of using a turbulator at the nanofluid flow inlet to cool a solar panel in the presence of phase change materials using artificial intelligence.
- Author
-
Hai, Tao, Awad, Omar I., Li, Shaoyi, Zain, Jasni Mohamad, and Bash, Ali A.H. Karah
- Subjects
- *
PHASE change materials , *SOLAR panels , *ARTIFICIAL intelligence , *NANOFLUIDS , *FINITE element method - Abstract
Due to the importance of renewable energies in providing energy for the future of humankind, this paper simulates the nanofluid (NFD) flow inside a U-shaped tube placed under a solar panel (SPL). The tube sits under the panel inside an enclosure containing phase change material (PCM). Alumina-water NFD and an organic PCM are used in this study. A turbulator (TBR) is utilized to improve the heat transfer (HTF) between the NFD flow and the SPL. The study is unsteady and is carried out at different Reynolds numbers (Re) to examine the effect of the TBR. The finite element method (FEM) is employed for the simulations, and temperature-dependent relationships are used for the NFD flow. Finally, the best conditions are evaluated using artificial intelligence. The results demonstrate that using a TBR lowers the panel temperature (TPL) and improves the complete melting time of the PCM. Increasing Re shortened the PCM melting process, lowered the TPL, and raised the HTF coefficient. The use of the TBR likewise enhanced the HTF coefficient. The minimum TPL and pressure drop correspond to the case with Re = 211 and the TBR employed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Assessing bank default determinants via machine learning.
- Author
-
Lagasio, Valentina, Pampurini, Francesca, Pezzola, Annagiulia, and Quaranta, Anna Grazia
- Subjects
- *
MACHINE learning , *BANK failures , *ARTIFICIAL intelligence , *HEURISTIC , *EUROZONE - Abstract
• Many ML algorithms are used to identify the main determinants of a bank default. • We use a graph neural network that has never been used in a financial context. • We obtain a balanced dataset by customizing the heuristic oversampling method. • Like previous literature, we show that the neural network outperforms other approaches. • We include, for the first time, competition among the possible default determinants. The financial sector is very interested in Artificial Intelligence due to the opportunities that it offers, especially those related to machine-learning methods. The aim of this paper is to employ a variety of machine-learning algorithms to identify the main determinants of bank default and to understand the impact of each variable on it. Bank default is one of the most studied topics in financial literature because of the severity of its consequences for the whole economic system. However, little attention has been paid to identifying the major determinants of bank failures via machine-learning approaches. This paper employs several machine-learning algorithms, including a graph neural network that has never been used in a financial context. Another novelty is the implementation of a balanced dataset by customising the heuristic oversampling method based on k-means and the synthetic minority over-sampling technique. This paper also deals with the inclusion of competition among the possible default determinants. The dataset consists of all the banks in the Euro Area in the period 2018–2020. The results obtained are useful from both micro- and macro-economic points of view. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
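The core of the synthetic minority over-sampling technique (SMOTE) mentioned in the abstract above can be sketched in a few lines of numpy: each synthetic point is an interpolation between a minority sample and one of its nearest minority-class neighbours. The `smote_oversample` helper and its parameters are illustrative only; the paper's k-means-based customisation of the heuristic is not reproduced here.

```python
import numpy as np

def smote_oversample(X_minority, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples (basic SMOTE idea).

    Each new point lies on the segment between a random minority sample
    and one of its k nearest minority-class neighbours.
    """
    rng = np.random.default_rng(rng)
    n = len(X_minority)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_minority[:, None] - X_minority[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    new = []
    for _ in range(n_new):
        i = rng.integers(n)                # pick a minority sample
        j = neighbours[i, rng.integers(k)] # pick one of its k neighbours
        lam = rng.random()                 # interpolation factor in [0, 1)
        new.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(new)
```

Because each synthetic point is a convex combination of two real minority samples, the new points stay inside the minority class's convex hull.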
29. Contradiction of modern and social-humanitarian artificial intelligence.
- Author
-
Raikov, Alexander Nikolaevich and Pirani, Massimiliano
- Subjects
- *
ARTIFICIAL intelligence , *PROBLEM solving , *COLLECTIVE behavior , *HUMAN behavior , *INVERSE problems - Abstract
Purpose: The purpose of the paper is to propose an effective approach of artificial intelligence (AI) addressing social-humanitarian reality comprising non-formalizable representation. The new task is to describe processes of integration of AI and humans in the hybrid systems framework. Design/methodology/approach: Social-humanitarian dynamics contradict traditional characteristics of AI. The suggested methodology embraces formalized and non-formalized parts as a whole. Holonic and special convergent approaches are combined to ensure purposefulness and sustainability of collective decision-making. Inverse problem solving on topology spaces, control thermodynamics and non-formalizable (considering quantum and relativistic) semantics include observers of eigenforms of reality. Findings: Collective decision-making cannot be represented only by formal means. Thus, this paper suggests the equation of hybrid reality (HyR), which integrates formalizable and non-formalizable parts, conveying and coalescing holonic approaches, thermodynamic theory, cognitive modeling and inverse problem solving. The special convergent approach makes the solution of this equation purposeful and sustainable. Research limitations/implications: The suggested approach is far-reaching with respect to current state-of-the-art technology; medium-term limitations are expected in the creation of cognitive semantics. Practical implications: Social-humanitarian events embrace all phenomena connected with individual and collective human behavior and decision-making. The paper will impact deeply networked experts, groups of crowds, rescue teams, researchers, professional communities, society and environment. Originality/value: New possibilities for advanced AI to enable purposeful and sustainable social-humanitarian subjects. The special convergent information structuring during collective decision-making creates necessary conditions toward the goals. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Population based training and federated learning frameworks for hyperparameter optimisation and ML unfairness using Ulimisana Optimisation Algorithm.
- Author
-
Maumela, Tshifhiwa, Nelwamondo, Fulufhelo, and Marwala, Tshilidzi
- Subjects
- *
MATHEMATICAL optimization , *MACHINE learning , *SOCIAL networks , *ARTIFICIAL intelligence - Abstract
This paper introduces the Ulimisana Optimisation Algorithm enabled Population Based Training (PBT-UOA) framework, which allows hyperparameters to be fine-tuned using a population-based meta-heuristic algorithm at the same time as the parameters are being optimised. Models are trained until near-convergence on the updated hyperparameters, and the parameters of the best performing model are shared to warm-start the other models in the next hyperparameter tuning iteration. In the PBT-UOA, all models are trained on the same dataset. This framework performed better than the Bayesian Optimisation algorithm. This paper also introduces the Ulimisana Optimisation Algorithm enabled Federated Learning (FL-UOA) framework, an extension of the PBT-UOA. This framework addresses the challenges of scattered datasets and privacy that arise from the increase in connected end-devices. The FL-UOA learns on local data in scattered end-devices without sending datasets to a central server. The training datasets in local end-devices are used to evaluate models trained in other end-devices. The performance metrics are used to update the Social Trust Network (STN) of the FL-UOA framework. The FL-UOA outperformed the classic Federated Learning framework. This STN updating technique was tested in Machine Learning (ML) Unfairness to see how well it functioned as a regularisation term. This was achieved by training different models on subsets that contained datasets representing only specific sensitive groups. Results showed that by updating the hyperparameters while learning the parameters on the dataset scattered across different devices, the FL-UOA takes advantage of diversified learning and reduces the ML Unfairness for models trained on group-specific datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. Relying on Generative AI Has Its Pitfalls.
- Author
-
Hoke, Tara
- Subjects
- *
GENERATIVE artificial intelligence , *ENGINEERS , *CIVIL engineering , *ARTIFICIAL intelligence , *CHATGPT - Abstract
This article discusses the ethical concerns surrounding the use of generative AI in engineering. It presents a hypothetical situation where an engineering professor discovers an extraneous phrase in a paper he is reviewing, suspecting that the author used ChatGPT, an AI system, to generate some of the content. The author admits to using ChatGPT but claims it was only to improve his writing. However, the lack of evidence and the ethical concerns raised by generative AI lead to the rejection of the paper. The article emphasizes the need for understanding the appropriate uses and limitations of generative AI and highlights the potential for incorrect or misleading information to be produced. It also discusses the ethical implications of misattributing authorship and the importance of transparency and accountability in the engineering profession. [Extracted from the article]
- Published
- 2024
32. Interval type-2 possibilistic picture C-means clustering incorporating local information for noisy image segmentation.
- Author
-
Wu, Chengmao and Liu, Tairong
- Subjects
- *
SOFT sets , *FUZZY algorithms , *COMPUTATIONAL intelligence , *ARTIFICIAL intelligence , *IMAGE segmentation , *FUZZY sets - Abstract
Picture fuzzy C-means clustering is a novel computational intelligence method that has some advantages over fuzzy clustering in pattern analysis and machine intelligence. However, picture fuzzy clustering is easily affected by noise and the weighting exponent, which seriously limits its widespread application. To address this issue, this paper proposes a new robust possibilistic clustering method called "interval type-2 possibilistic picture C-means clustering with local information". This method combines interval type-2 fuzzy sets with possibilistic C-means clustering based on picture fuzzy sets, strengthening the noise resistance of picture fuzzy clustering. First, this paper creatively extends an improved possibilistic clustering with double weighting exponents to picture fuzzy sets, solving the problem of consistency clustering in existing possibilistic picture clustering. Second, this paper originally introduces a new picture local information factor in possibilistic picture clustering and further enhances the anti-noise robustness of the method by using spatial possibilistic picture partition information. Finally, this paper skillfully extends this clustering method to interval type-2 fuzzy sets, which can handle high-order uncertainties more flexibly than type-1 clustering methods. Experimental results indicate that the proposed method has better segmentation performance and stronger noise suppression ability compared with existing picture fuzzy clustering and interval type-2 fuzzy clustering. In summary, this work makes significant contributions to the development of picture fuzzy clustering theory and its applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Trainable and explainable simplicial map neural networks.
- Author
-
Paluzo-Hidalgo, Eduardo, Gonzalez-Diaz, Rocio, and Gutiérrez-Naranjo, Miguel A.
- Subjects
- *
MAP projection , *ARTIFICIAL intelligence , *GENERALIZATION - Abstract
Simplicial map neural networks (SMNNs) are topology-based neural networks with interesting properties such as universal approximation ability and robustness to adversarial examples under appropriate conditions. However, SMNNs present some bottlenecks for their possible application to high-dimensional datasets. First, SMNNs have precomputed fixed weights and no SMNN training process has been defined so far, so they lack generalization ability. Second, SMNNs require the construction of a convex polytope surrounding the input dataset. In this paper, we overcome these issues by proposing an SMNN training procedure based on a support subset of the given dataset and by replacing the construction of the convex polytope with a method based on projections to a hypersphere. In addition, the explainability capacity of SMNNs and an effective implementation are also newly introduced in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Registration Open for Symposium on AI in Pharmaceutical Medicine.
- Author
-
Madigan, David and Alemayehu, Demissie
- Subjects
- *
ARTIFICIAL intelligence , *CONFERENCES & conventions , *RECORDING & registration - Abstract
The article discusses the impact of COVID-19 on the educational sector, specifically focusing on teaching statistics and data science during the pandemic. It highlights three papers that describe different approaches to teaching during this time. The article also includes other papers on various topics such as causal inference, qualitative methods for investigating data science skills, and personalized education. Additionally, the article announces the registration for the 2024 Annual Symposium on Risks and Opportunities of AI in Pharmaceutical Medicine, which aims to address the challenges and opportunities of AI in pharmaceutical medicine and promote collaboration among industry, academia, regulatory agencies, and professional associations. The symposium will feature panel discussions and a keynote address by the director of the Center for Drug Evaluation and Research at the US Food and Drug Administration. [Extracted from the article]
- Published
- 2024
35. Explanation leaks: Explanation-guided model extraction attacks.
- Author
-
Yan, Anli, Huang, Teng, Ke, Lishan, Liu, Xiaozhang, Chen, Qi, and Dong, Changyu
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *INFORMATION modeling , *EXPLANATION , *ORGANIZATIONAL transparency - Abstract
Explainable artificial intelligence (XAI) is gradually becoming a key component of many artificial intelligence systems. However, such pursuit of transparency may bring potential privacy threats to the model's confidentiality, as the adversary may obtain more critical information about the model. In this paper, we systematically study how model decision explanations impact model extraction attacks, which aim at stealing the functionalities of a black-box model. Based on the threat models we formulated, an XAI-aware model extraction attack (XaMEA), a novel attack framework that exploits spatial knowledge from decision explanations, is proposed. XaMEA is designed to be model-agnostic: it achieves considerable extraction fidelity on arbitrary machine learning (ML) models. Moreover, we prove that this attack is inexorable even if the target model does not proactively provide model explanations. Various empirical results have also verified the effectiveness of XaMEA and disclosed privacy leakages caused by decision explanations. We hope this work will highlight the need for techniques that better trade off the transparency and privacy of ML models. • We propose XaMEA, three XAI-aware model extraction attack architectures. • We further carry out XAI-aware model extraction attacks against non-explanation target models. • We evaluate the attack effectiveness of XaMEA with an exhaustive set of experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Interval incremental learning of interval data streams and application to vehicle tracking.
- Author
-
Leite, Daniel, Škrjanc, Igor, Blažič, Sašo, Zdešar, Andrej, and Gomide, Fernando
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *ELECTRONIC data processing , *GRANULAR computing , *PARAMETER estimation - Abstract
This paper presents a method called Interval Incremental Learning (IIL) to capture spatial and temporal patterns in uncertain data streams. The patterns are represented by information granules and a granular rule base with the purpose of developing explainable human-centered computational models of virtual and physical systems. Fundamentally, interval data are either included into wider and more meaningful information granules recursively, or used for structural adaptation of the rule base. An Uncertainty-Weighted Recursive-Least-Squares (UW-RLS) method is proposed to update affine local functions associated with the rules. Online recursive procedures that build interval-based models from scratch and guarantee balanced information granularity are described. The procedures assure stable and understandable rule-based modeling. In general, the model can play the role of a predictor, a controller, or a classifier, with online sample-per-sample structural adaptation and parameter estimation done concurrently. The IIL method is aligned with issues and needs of the Internet of Things, Big Data processing, and eXplainable Artificial Intelligence. An application example concerning real-time land-vehicle localization and tracking in an uncertain environment illustrates the usefulness of the method. We also provide the Driving Through Manhattan interval dataset to foster future investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Marking a global industry Interpack gathering.
- Author
-
Barston, Neill
- Subjects
- *
SMALL business , *ARTIFICIAL intelligence , *BAKERIES , *FLEXIBLE packaging - Published
- 2023
38. Meshless methods for American option pricing through Physics-Informed Neural Networks.
- Author
-
Gatta, Federico, Di Cola, Vincenzo Schiano, Giampaolo, Fabio, Piccialli, Francesco, and Cuomo, Salvatore
- Subjects
- *
PRICES , *FINITE difference method , *DEEP learning , *PARTIAL differential equations , *ARTIFICIAL intelligence - Abstract
Nowadays, Deep Learning is drastically revolutionizing financial research as well as industry. Many methods have been discussed in the last few years, mainly related to option pricing. In fact, traditional approaches such as Monte Carlo simulation or finite difference methods are seriously hampered by multi-dimensional underlyings and path dependency, so dealing with particular contracts such as American multi-asset options remains difficult. This paper addresses this problem by pricing such put options with a novel meshless methodology, named Physics-Informed Neural Networks (PINNs), based on Artificial Intelligence. The PINN paradigm has recently been introduced in the Deep Learning literature. It exploits the theoretical background of the universal approximation theorem for neural networks to solve Partial Differential Equations numerically. This Deep Learning meshless method incorporates the equation and its initial and boundary conditions through a specially designed loss function. We develop a suitable PINN for the proposed problem by introducing an algorithmic trick for improving the convergence of the free boundary problem. Furthermore, the worthiness of the proposal is assessed by several experiments concerned with single- and multi-asset options. Finally, a parametric model is built to benefit further studies of option value behaviour under particular market conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Application of fuzzy learning in IoT-enabled remote healthcare monitoring and control of anesthetic depth during surgery.
- Author
-
Farivar, Faezeh, Jolfaei, Alireza, Manthouri, Mohammad, and Haghighi, Mohammad Sayad
- Subjects
- *
ADAPTIVE fuzzy control , *ADAPTIVE control systems , *ARTIFICIAL intelligence , *FUZZY control systems , *DISTANCE education - Abstract
• Providing AI-enabled IoT system in healthcare monitoring and control. • Adjusting the depth of anesthesia in surgery by automatically infusion. • Designing an adaptive control system using a robust control method and fuzzy system. • Employing fuzzy learning to provide an intelligent estimator for patient model uncertainties. • Remote tuning of drug infusion through network channels. Smart remote patient monitoring and early disease diagnosis systems have made huge progresses after the introduction of Internet of Things (IoT) and Artificial Intelligence (AI) concepts. This paper proposes an AI-enabled IoT system to monitor and adjust the depth of anesthesia via network channels. More precisely, fuzzy learning systems are employed to develop a control system for the depth of anesthesia in surgeries. This scheme is composed of variable structure control and adaptive type-II fuzzy systems. Therefore, the controller is adaptive and robust to any perturbations and disturbances that may happen during a patient's surgery. The adaptive type-II fuzzy system is designed as an intelligent online estimator to approximate patient model uncertainties. This estimation helps in boosting the performance of the variable structure control system. An artificial neuron is also designed to reduce chattering for the proposed control system. The designed control system can efficiently adjust the anesthesia drug infusion rate and regulate the Bispectral index. The networked structure of the proposed system makes remote tuning of drug infusion possible. Performance of the designed controller is evaluated on several patient models. Simulation results confirm the validity and effectiveness of the proposed remote drug delivery system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Evaluating semantic similarity and relatedness between concepts by combining taxonomic and non-taxonomic semantic features of WordNet and Wikipedia.
- Author
-
Hussain, Muhammad Jawad, Bai, Heming, Wasti, Shahbaz Hassan, Huang, Guangjian, and Jiang, Yuncheng
- Subjects
- *
HYPERLINKS , *ARTIFICIAL intelligence , *INFORMATION retrieval , *COGNITIVE science - Abstract
Many applications in cognitive science and artificial intelligence utilize semantic similarity and relatedness to solve difficult tasks such as information retrieval, word sense disambiguation, and text classification. Previously, several approaches for evaluating concept similarity and relatedness based on WordNet or Wikipedia have been proposed. WordNet-based methods rely on highly precise knowledge but have limited lexical coverage. In contrast, Wikipedia-based models achieve more coverage but sacrifice knowledge quality. Therefore, in this paper, we focus on developing a comprehensive semantic similarity and relatedness method based on WordNet and Wikipedia. To improve the accuracy of existing measures, we combine various taxonomic and non-taxonomic features of WordNet, including gloss, lemmas, examples, sister-terms, derivations, holonyms/meronyms, and hypernyms/hyponyms, with Wikipedia gloss and hyperlinks, to describe concepts. We present a novel technique for extracting ' is-a ' and ' part-whole ' relationships between concepts using the Wikipedia link structure. The suggested technique identifies taxonomic and non-taxonomic relationships between concepts and offers dense vector representations of concepts. To fully exploit WordNet and Wikipedia's semantic attributes, the proposed method integrates their semantic knowledge at feature-level, combining semantic similarity and relatedness into a single comprehensive measure. The experimental results demonstrate the effectiveness of the proposed method over state-of-the-art measures on various gold standard benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
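One simple way to realise the feature-level integration described in the abstract above is to L2-normalise each named feature vector, concatenate the normalised vectors, and take a single cosine. The `combined_similarity` helper and its feature names are a hypothetical sketch of this idea, not the paper's actual measure.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two 1-D vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def combined_similarity(feats_a, feats_b):
    """Feature-level combination of per-source similarity signals.

    feats_*: dict mapping a feature name (e.g. 'gloss', 'hyperlinks',
    'hypernyms') to a vector representation of the concept under that
    feature. Only features present for both concepts are used.
    """
    parts_a, parts_b = [], []
    for name in sorted(set(feats_a) & set(feats_b)):
        # normalise each block so no single feature dominates by scale
        a = feats_a[name] / (np.linalg.norm(feats_a[name]) or 1.0)
        b = feats_b[name] / (np.linalg.norm(feats_b[name]) or 1.0)
        parts_a.append(a)
        parts_b.append(b)
    return cosine(np.concatenate(parts_a), np.concatenate(parts_b))
```

Normalising per feature block before concatenation is one design choice for merging heterogeneous WordNet- and Wikipedia-derived vectors; weighted combinations of per-feature cosines would be another.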
41. EFFECT: Explainable framework for meta-learning in automatic classification algorithm selection.
- Author
-
Shao, Xinyue, Wang, Hongzhi, Zhu, Xiao, Xiong, Feng, Mu, Tianyu, and Zhang, Yan
- Subjects
- *
CLASSIFICATION algorithms , *MACHINE learning , *ARTIFICIAL intelligence , *AUTOMATIC classification , *COUNTERFACTUALS (Logic) , *ALGORITHMS , *FAIRNESS - Abstract
• Explainable framework for meta-learning. • Efficiency and high causality. • Intervention and counterfactual. With the growing convergence of artificial intelligence and daily life scenarios, the application scenarios for intelligent decision methods are becoming increasingly complex. The development of various machine learning algorithms has benefited all disciplines of study, but choosing which algorithm is most suitable for a certain problem among a large number of algorithms is a challenge that every field must overcome. Another challenge at the practical application level is that machine learning algorithms currently trained with large amounts of data are primarily black-box and uninterpretable. This indicates that these methods pose potential risks and are difficult to rely on, thus hindering their application in sensitive fields such as finance and healthcare. The first challenge can be overcome by using meta-learning to combine data and prior knowledge to efficiently and automatically select the machine learning models. The second challenge remains to be addressed due to the lack of interpretability of traditional meta-learning techniques and deficiencies in transparency and fairness. Achieving the interpretability of meta-learning in autonomous algorithm selection for classification is crucial to balance the need for high accuracy and transparency of machine learning models in practical application scenarios. This paper proposes EFFECT , an interpretable meta-learning framework that can explain the recommendation results of meta-learning algorithm selection and provide a more complete and accurate explanation of the recommendation algorithm's performance on specific datasets combined with business scenarios. Extensive experiments have demonstrated the validity and correctness of this framework. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Exploring AI-healthcare innovation: natural language processing-based patents analysis for technology-driven roadmapping.
- Author
-
Wang, Yu-Hui and Lin, Guan-Yu
- Subjects
- *
SURGICAL robots , *NATURAL language processing , *ROBOTICS , *NATURAL languages , *PATENTS , *HEALTH care industry , *COMPUTED tomography - Abstract
Purpose: The purposes of this paper are (1) to explore the overall development of AI technologies and applications that have been demonstrated to be fundamentally important in the healthcare industry, and their related commercialized products and (2) to identify technologies with promise as the basis of useful applications and profitable products in the AI-healthcare domain. Design/methodology/approach: This study adopts a technology-driven technology roadmap approach, combined with natural language processing (NLP)-based patents analysis, to identify promising and potentially profitable existing AI technologies and products in the domain of AI healthcare. Findings: Robotics technology exhibits huge potential in surgical and diagnostics applications. Intuitive Surgical Inc., manufacturer of the Da Vinci robotic system and Ion robotic lung-biopsy system, dominates the robotics-assisted surgical and diagnostic fields. Diagnostics and medical imaging are particularly active fields for the application of AI, not only for analysis of CT and MRI scans, but also for image archiving and communications. Originality/value: This study is a pioneering attempt to clarify the interrelationships of particular promising technologies for application and related products in the AI-healthcare domain. Its findings provide critical information about the patent activities of key incumbent actors, and thus offer important insights into recent and current technological and product developments in the emergent AI-healthcare sector. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. A fuzzy semantic representation and reasoning model for multiple associative predicates in knowledge graph.
- Author
-
Li, Pu, Wang, Xin, Liang, Hui, Zhang, Suzhi, Zhang, Yazhou, Jiang, Yuncheng, and Tang, Yong
- Subjects
- *
KNOWLEDGE graphs , *FUZZY graphs , *ARTIFICIAL intelligence , *MODEL-based reasoning , *INTUITION , *SCALABILITY - Abstract
• Fuzzy knowledge graph is a more general description of classical knowledge graph. • The fuzzy semantic scalability between multiple associative predicates is analyzed. • The mathematical model of semantic relationships in fuzzy knowledge graph is designed. • Some fuzzy reasoning rules are presented to realize fuzzy semantic extension. • The strategy discovers more implicit valid knowledge with fuzzy semantics. As the latest achievement in the development of semiotics, the knowledge graph has been recognized and widely used by more and more researchers for its rich semantic information and clear logical structure. Discovering deep, relevant knowledge from massive graph-structured data has become a hot spot of artificial intelligence. Considering that some predicates in a knowledge graph express fuzzy relationships whose semantics are uncertain, the basic schema of the classical knowledge graph in the form of RDF triples cannot describe fuzzy semantic information effectively. To address these problems, in this paper we present a new semantic representation and reasoning model for multiple associative predicates by introducing fuzzy theory. Concretely, the presented method defines a new fuzzy annotating strategy to represent the fuzzy semantics between associative predicates in different RDF triples. On this basis, some fuzzy reasoning rules are presented to realize fuzzy semantic extension for the classical knowledge graph. Lastly, the experimental results show that our proposal can discover more implicit valid knowledge with fuzzy semantics and is consistent with the intuition of human judgments. Overall, the methods proposed in this paper constitute effective ways of discovering knowledge in structured semantic data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
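The abstract leaves the annotation strategy itself abstract. As a minimal illustration (not the paper's actual model), a fuzzy knowledge graph can be written as RDF-style tuples carrying a membership degree, with a reasoning rule that chains matching triples and combines degrees via the min t-norm, a common conservative choice:

```python
# Fuzzy-annotated triples: (subject, predicate, object, membership degree in [0, 1]).
triples = [
    ("Alice", "closeFriendOf", "Bob", 0.9),
    ("Bob", "closeFriendOf", "Carol", 0.7),
]

def extend(triples, predicate):
    """One fuzzy reasoning step: chain triples sharing the given predicate
    and combine membership degrees with the min t-norm."""
    derived = []
    for s1, p1, o1, d1 in triples:
        for s2, p2, o2, d2 in triples:
            if p1 == p2 == predicate and o1 == s2 and s1 != o2:
                derived.append((s1, predicate, o2, min(d1, d2)))
    return derived

print(extend(triples, "closeFriendOf"))
# → [('Alice', 'closeFriendOf', 'Carol', 0.7)]
```

The derived triple is the kind of "implicit valid knowledge with fuzzy semantics" the abstract describes; the paper's rules operate over multiple associative predicates rather than a single one.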
45. LSMCore: A 69k-Synapse/mm² Single-Core Digital Neuromorphic Processor for Liquid State Machine.
- Author
-
Wang, Lei, Yang, Zhijie, Guo, Shasha, Qu, Lianhua, Zhang, Xiangyu, Kang, Ziyang, and Xu, Weixia
- Subjects
- *
SPEECH perception , *ARTIFICIAL intelligence , *LIQUIDS , *EDGE computing , *MATHEMATICAL optimization , *SYNAPSES , *PHYSIOLOGICAL effects of acceleration - Abstract
Neuromorphic processors have gained momentum recently due to their high energy efficiency in artificial intelligence applications compared to DNN accelerators. Most neuromorphic processors execute SNNs (Spiking Neural Networks). Liquid State Machine (LSM), the spiking version of reservoir computing, shows advantages and great potential in image classification, speech recognition, language translation, etc. Compared with other SNN models, LSM is easy to train and has low resource utilization, which makes it suitable for low-power, resource-constrained edge computing scenarios. In this paper, we propose a novel design of a neuromorphic processor, LSMCore, aimed at LSM acceleration. LSMCore supports both training and inference of LSM. It consists of 256 input neurons, 1024 liquid neurons, and 1.31M synapses. In addition, multiple optimization techniques are adopted in this processor, including weight quantization to reduce storage, zero-skipping to exploit dynamic sparsity, and mini-batch training. The experimental results show that LSMCore achieves a frequency of 400 MHz, a power of 4.9 W, and an area of 18.49 mm² with a 40 nm library. Compared with the baseline, LSMCore achieves up to 80.7× (49.6×), 91.3× (56.3×), and 83.1× (56.8×) speedup on MNIST, N-MNIST, and the Free Spoken Digit Dataset (FSDD), respectively, for training (inference), while its accuracy on these three datasets is 96.8%, 97.6%, and 90%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
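Of the optimizations listed in the abstract, zero-skipping is easy to sketch: because spike trains are sparse, synaptic weight rows are only fetched and accumulated for neurons that actually fired. This toy reference model is an illustrative assumption about the technique in general, not LSMCore's actual datapath.

```python
def accumulate(spikes, weights, membrane):
    """Zero-skipping accumulation: add the weight row into the membrane
    potentials only for neurons that spiked, skipping zero entries
    (and, in hardware, the corresponding memory accesses)."""
    for i, s in enumerate(spikes):
        if s == 0:          # no spike: no fetch, no add
            continue
        for j, w in enumerate(weights[i]):
            membrane[j] += w
    return membrane

spikes = [1, 0, 0, 1]                        # sparse input spike vector
weights = [[2, -1], [5, 5], [5, 5], [1, 3]]  # per-input weight rows
print(accumulate(spikes, weights, [0, 0]))   # → [3, 2]
```

The two all-5 rows are never touched, which is exactly the work the optimization saves when dynamic sparsity is high.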
46. Proposal of Analog In-Memory Computing With Magnified Tunnel Magnetoresistance Ratio and Universal STT-MRAM Cell.
- Author
-
Cai, Hao, Guo, Yanan, Liu, Bo, Zhou, Mingyang, Chen, Juntong, Liu, Xinning, and Yang, Jun
- Subjects
- *
TUNNEL magnetoresistance , *CONVOLUTIONAL neural networks , *SPIN transfer torque , *ANALOG-to-digital converters , *MAGNETIC torque , *RANDOM access memory , *ARTIFICIAL intelligence - Abstract
In-memory computing (IMC) is an effective solution for energy-efficient artificial intelligence applications. Analog IMC amortizes the power consumption of multiple sense amplifiers over an analog-to-digital converter (ADC) and completes the calculation of multi-line data simultaneously with a high degree of parallelism. Based on a universal one-transistor one-magnetic-tunnel-junction (MTJ) spin-transfer-torque magnetic RAM (STT-MRAM) cell, this paper demonstrates a novel tunneling magnetoresistance (TMR) ratio magnifying method to realize analog IMC. Previous concerns, including the low TMR ratio and analog calculation nonlinearity, are addressed through device-circuit interaction. The TMR is magnified 7500× using a latch structure in combination with the device. Peripheral circuits are minimally modified to enable in-memory matrix-vector multiplication. A current mirror with a feedback structure is implemented to enhance analog computing linearity and calculation accuracy. The proposed design supports up to 1024 simultaneous 2-bit-input, 1-bit-weight multiply-and-accumulate (MAC) computations. The proposal is simulated using a 28-nm CMOS process and an MTJ compact model. The integral nonlinearity is reduced by 57.6% compared with the conventional structure, and 9.47-25.4 TOPS/W is realized for a convolutional neural network (CNN) with 2-bit inputs, 1-bit weights, and 4-bit outputs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
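The arithmetic the analog array performs can be stated as a digital reference: a column sums 2-bit inputs gated by binary weights, and the ADC quantizes that sum to 4 bits. This sketch is an illustration of the bit widths quoted in the abstract; the scaling in `quantize_4bit` is an assumption, not the paper's ADC transfer function.

```python
def mac_column(inputs_2bit, weights_1bit):
    """Digital reference for one analog MAC column: 2-bit inputs (0..3)
    times binary weights (0/1), accumulated down the column. In the
    analog array this sum appears as a bit-line current read by the ADC."""
    assert all(0 <= x <= 3 for x in inputs_2bit)
    assert all(w in (0, 1) for w in weights_1bit)
    return sum(x * w for x, w in zip(inputs_2bit, weights_1bit))

def quantize_4bit(acc, n_rows):
    """Map the accumulated value to a 4-bit code (0..15), mimicking the
    limited ADC resolution; the linear scaling here is illustrative."""
    full_scale = 3 * n_rows          # maximum possible column sum
    return round(15 * acc / full_scale)

acc = mac_column([3, 1, 2, 0], [1, 0, 1, 1])
print(acc, quantize_4bit(acc, 4))  # → 5 6
```

Integral nonlinearity, which the paper reduces by 57.6%, is the deviation of the real analog/ADC chain from the ideal linear mapping this reference computes.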
47. Three-way decision based on confidence level change in rough set.
- Author
-
Guo, Doudou, Jiang, Chunmao, and Wu, Peng
- Subjects
- *
ROUGH sets , *CONFIDENCE , *ARTIFICIAL intelligence , *DECISION making , *THEORY of knowledge - Abstract
Three-way decision (3WD) and rough set are two influential theories for studying knowledge discovery and uncertainty in artificial intelligence. A central notion of 3WD is a tri-level thinking paradigm consisting of trisecting, acting, and outcome (i.e., the TAO model). As is well known, movement-based three-way decision (M-3WD) and the change-based TAO model, both developed mainly from the perspective of effectiveness measures, are two studies of outcome evolution in three-way decision, which can lead to some limitations in application. This paper builds a change-based three-way decision (C-3WD) based on confidence level, and an application to rough set is also discussed. Furthermore, the (α, β)-approximate probability regions of rough set are re-decided by the change model, and a medical decision example is introduced to explain how to make C-3WD in the classification of rough set. By comparing the effectiveness of the traditional three-way decision method with ours, it is verified from the two aspects of cost and gain that the model in this paper is more suitable for decision processes that include trisecting and acting. Experiments on various datasets demonstrate the effectiveness of our methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
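The (α, β) trisection underlying the paper is standard in probabilistic rough sets and can be sketched directly: accept an object when its confidence reaches α, reject when it falls to β, and defer to the boundary region otherwise. The threshold values and object names below are illustrative, and the paper's contribution, re-deciding regions as confidence levels *change*, is not modeled here.

```python
def trisect(objects, confidence, alpha=0.75, beta=0.35):
    """(alpha, beta)-probabilistic three-way trisection: accept when
    confidence >= alpha, reject when confidence <= beta, defer otherwise."""
    assert 0 <= beta < alpha <= 1
    regions = {"positive": [], "boundary": [], "negative": []}
    for x in objects:
        p = confidence[x]
        if p >= alpha:
            regions["positive"].append(x)
        elif p <= beta:
            regions["negative"].append(x)
        else:
            regions["boundary"].append(x)
    return regions

conf = {"p1": 0.9, "p2": 0.5, "p3": 0.2}
print(trisect(conf.keys(), conf))
# → {'positive': ['p1'], 'boundary': ['p2'], 'negative': ['p3']}
```

In a change-based model like C-3WD, an updated confidence (say a new test raising p2 to 0.8) would move the object from the boundary to the positive region, and the cost/gain of such moves is what the paper evaluates.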
48. Garbage In, Garbage Out: Factors That Erode Research Integrity.
- Author
-
Lasda, Elaine M.
- Subjects
- *
PHONOLOGICAL awareness , *ARTIFICIAL intelligence , *RESEARCH ethics , *CITATION analysis , *INFORMATION resources , *MEDICAL research - Abstract
The article offers information on the intersection of AI, research, and its impact on scientific literature. Topics include concerns about the ethical implications of AI-generated content, fraudulent research papers created by paper mills, and the increasing retractions of scientific literature. The author reflects on the potential consequences of the rise in spurious publications on research impact metrics and raises concerns about research integrity.
- Published
- 2023
49. Ontology alignment evaluation for online assessment of e-learners: a new e-learning management system.
- Author
-
B.R., Rajakumar, Yenduri, Gokul, Vyas, Sumit, and D., Binu
- Subjects
- *
SEQUENTIAL pattern mining , *ONTOLOGY , *DIGITAL learning , *LEARNING Management System , *HIERARCHICAL clustering (Cluster analysis) , *MATHEMATICAL optimization - Abstract
Purpose: This paper aims to propose a new assessment system module for handling the comprehensive answers written through the answer interface. Design/methodology/approach: The working principle comprises three major phases. Preliminary semantic processing: in the pre-processing step, keywords are extracted from the answer given by the course instructor; this answer serves as the key against which the answers written by the e-learners are evaluated. Keyword and semantic processing of e-learners for hierarchical-clustering-based ontology construction: for each answer given by each student, the keywords and semantic information are extracted and clustered (hierarchical clustering) using a new improved rider optimization algorithm known as Rider with Randomized Overtaker Update (RR-OU). Ontology matching evaluation: once the ontology structures are complete, a new alignment procedure is used to measure the similarity between two different documents; moreover, the objectives defined in this work focus on how exactly the matching process is performed to evaluate the document. Finally, the e-learners are classified based on their grades. Findings: The proposed model shows a lower relative mean squared error when the weights were (0.5, 0, 0.5): 71.78% and 16.92% better than the error values attained for (0, 0.5, 0.5) and (0.5, 0.5, 0), respectively. Likewise, the error values attained for (1, 0, 0) were lower than those for (0, 0, 1) and (0, 1, 0); the mean absolute error (MAE) for weights (1, 0, 0) was 33.99% and 51.52% better than the MAE for weights (0, 0, 1) and (0, 1, 0), respectively. In the overall error analysis, the mean absolute percentage error of the implemented RR-OU model was 3.74% and 56.53% better than the k-means and collaborative filtering + Onto + sequential pattern mining models, respectively.
Originality/value: This paper adopts the latest optimization algorithm called RR-OU for proposing a new assessment system module for handling the comprehensive answers written through the answer interface. To the best of the authors' knowledge, this is the first work that uses RR-OU-based optimization for developing a new ontology alignment-based online assessment of e-learners. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
50. Sentiment analysis on newspaper article reviews: contribution towards improved rider optimization-based hybrid classifier.
- Author
-
A., Pandiaraj, C., Sundar, and S., Pavalarajan
- Subjects
- *
SENTIMENT analysis , *FEATURE extraction , *PARTICLE swarm optimization , *WOLVES , *NEWSPAPERS , *SUPPORT vector machines , *COMPETITION horses , *SEMANTICS - Abstract
Purpose: Recent developments in sentiment analysis have produced substantial growth in the volume of study, especially on more subjective text types, namely product or movie reviews. The key difference between these texts and news articles is that the target of a review is defined and unique across the text. Hence, reviews of newspaper articles involve three subtasks: correctly spotting the target, separating positive from negative content in the reviews of the concerned target, and evaluating the different opinions provided in a detailed manner. Having defined these tasks, this paper aims to implement a new sentiment analysis model for newspaper article reviews. Design/methodology/approach: Here, tweets about various newspaper articles are taken, and the sentiment analysis process is carried out through pre-processing, semantic word extraction, feature extraction, and classification. Initially, the pre-processing phase is performed, in which steps such as stop-word removal, stemming, and blank-space removal are carried out, producing keywords that indicate positive, negative, or neutral sentiment. Next, semantic (similar) words are extracted from the available dictionary by matching the keywords. Feature extraction is then performed on the extracted keywords and semantic words using holoentropy to obtain information statistics, yielding maximally relevant information. Two categories of holoentropy features are extracted: joint holoentropy and cross holoentropy. The extracted features of all keywords are finally fed to a hybrid classifier that merges the beneficial concepts of a neural network (NN) and a deep belief network (DBN). To improve the performance of sentiment classification, the weight updates of the NN and DBN are modified with a modified rider optimization algorithm (ROA), the so-called new steering updated ROA (NSU-ROA).
Hence, the average of both improved classifiers effectively yields the classified sentiment (positive, negative, or neutral) from the reviews of newspaper articles. Findings: Three data sets were considered for experimentation. The results show that the developed NSU-ROA + DBN + NN attained high accuracy, which was 2.6% superior to particle swarm optimization, 3% superior to FireFly, 3.8% superior to grey wolf optimization, 5.5% superior to the whale optimization algorithm, and 3.2% superior to the ROA-based DBN + NN on data set 1. The classification analysis shows that the accuracy of the proposed NSU-ROA + DBN + NN was 3.4% higher than DBN + NN, 25% higher than DBN, 28.5% higher than NN, and 32.3% higher than a support vector machine on data set 2. Thus, the effective performance of the proposed NSU-ROA + DBN + NN on sentiment analysis of newspaper articles has been demonstrated. Originality/value: This paper adopts the latest optimization algorithm, NSU-ROA, to effectively recognize the sentiments of newspapers with NN and DBN. This is the first work that uses NSU-ROA-based optimization for accurate identification of sentiments from newspaper articles. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
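The front of the pipeline described in the abstract (pre-processing plus dictionary-based keyword matching) can be shown with a toy sketch; the stop-word list, lexicon, and majority-vote classifier here are illustrative stand-ins, whereas the paper uses holoentropy features and an NSU-ROA-tuned NN + DBN hybrid.

```python
STOP_WORDS = {"the", "a", "is", "of", "and", "to"}
# Toy polarity dictionary standing in for the paper's semantic-word dictionary.
LEXICON = {"excellent": "positive", "poor": "negative", "report": "neutral"}

def preprocess(text):
    """Pre-processing sketch: lowercase, strip punctuation, drop stop words
    and blanks, and keep only keywords found in the lexicon."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    return [t for t in tokens if t and t not in STOP_WORDS and t in LEXICON]

def classify(keywords):
    """Majority vote over keyword polarities; neutral when nothing matches."""
    votes = [LEXICON[k] for k in keywords]
    if not votes:
        return "neutral"
    return max(set(votes), key=votes.count)

kws = preprocess("The report is excellent, and the coverage is excellent.")
print(kws, classify(kws))  # → ['report', 'excellent', 'excellent'] positive
```

In the paper's full system, these keywords would instead be expanded with semantic words and converted to joint/cross holoentropy features before classification.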