40 results
Search Results
2. Artificial intelligence can predict the progression of Alzheimer's disease
- Published
- 2024
3. After All, Artificial Intelligence is not Intelligent: in a Search for a Comprehensible Neuroscientific Definition of Intelligence
- Author
-
Sthéfano Divino
- Subjects
artificial intelligence, computer science, machine learning, neuroscience, Law in general. Comparative and uniform law. Jurisprudence, K1-7720 - Abstract
This paper explores a series of thoughts about the meaning of intelligence in neuroscience and computer science. The work aims to present an understandable definition that fits our contemporary artificial intelligence background. The methodology of this essay draws on existing theories of artificial intelligence, focused on computer science and neuroscience. I analyze the relationship between intelligence and neuroscience and Hawkins's Thousand Brains Theory, an approach that shows what an intelligent agent is according to neuroscience. The main result is the verification that intelligence is only possible in the neocortex. Building on this result, the study performs a second, critical analysis to demonstrate why there is no artificial intelligence today.
- Published
- 2022
- Full Text
- View/download PDF
4. Personalized HIV Treatment: Bringing Marginalized Patients to the Forefront With Situational Analysis
- Author
-
Renate Baumgartner
- Subjects
personalized medicine, antiretroviral therapy, health care, artificial intelligence, machine learning, implicated actors, Social sciences (General), H1-99 - Abstract
Since the early 2000s, personalized medicine (PM) has been a much-hyped field of healthcare. HIV treatment optimization tools were one of the first successful examples of PM and have, since their development, been used to find tailored and optimized treatment for HIV-positive people. In this paper, based on a case study of the social arena of personalized HIV therapy, I show how social worlds worked on both shared and distinct goals within the arena. I highlight the simultaneous centering and marginalization to which people seeking HIV therapy were subjected discursively in the social worlds. I also demonstrate that the further patients were from practitioners' daily work, the more they were reduced to their blood samples, rather than being constructed as complex human beings.
- Published
- 2023
- Full Text
- View/download PDF
5. IIoT/IoT and Artificial Intelligence/Machine Learning as a Process Optimization Driver under Industry 4.0 Model/IIoT/IoT e Inteligencia Artificial/Aprendizaje Automático como Motor de Optimización de Procesos en el Modelo de Industria 4.0
- Author
-
Mateo, Federico Walas and Redchuk, Andres
- Published
- 2021
- Full Text
- View/download PDF
6. Digitalization and artificial intelligence in industry
- Author
-
Campos Vigo, Alexandra Solange, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, and Delgado Prieto, Miquel
- Subjects
Artificial intelligence, Informàtica [Àrees temàtiques de la UPC], Intel·ligència artificial, Machine learning, Aprenentatge automàtic, Industrial revolution, Revolució industrial - Abstract
In recent decades, we have witnessed many changes regarding the improvement of physical models and processes with a positive impact on humanity, which we call Industrial Revolutions. This paper describes the fundamental concepts of the most recent of these revolutions, the Fourth Industrial Revolution, known as "Industry 4.0". A description of the state of the art of the enabling technologies of Industry 4.0 is presented; because the topic is very broad, the description is limited to Artificial Intelligence and cyber-physical systems applied to the industrial sector. Likewise, a case study is presented on the design of an application and the analytics associated with a typical Industry 4.0 service, based on a cyber-physical system structure and supported by machine learning procedures. The methodology consists of two stages: first, an exploratory stage based on document search and management, mainly of academic research articles, to identify, organize and describe the concepts and components involved in applying artificial intelligence and cyber-physical systems in an industrial context; second, the design and implementation of the logical part of an industrial maintenance application based on data processing on the Matlab platform. Finally, the results of the case study developed in the second stage are presented using graphs and statistics to summarize the data obtained, followed by a conclusion section with the interpretation of the results and recommendations for future directions of the case study.
- Published
- 2023
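The industrial maintenance application described in the abstract above rests on processing sensor data to decide when equipment needs attention. As an illustration only (the thesis itself works on the Matlab platform, and this toy data is hypothetical), a condition-monitoring step of that kind can be sketched as a rolling z-score anomaly detector:

```python
def anomaly_flags(readings, window=5, threshold=3.0):
    """Flag readings that deviate strongly from the recent past."""
    flags = []
    for i, x in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < window:
            flags.append(False)  # not enough history yet
            continue
        mean = sum(history) / window
        var = sum((h - mean) ** 2 for h in history) / window
        std = var ** 0.5 or 1e-9  # avoid division by zero on flat signals
        flags.append(abs(x - mean) / std > threshold)
    return flags

# A steady vibration signal with one spike: only the spike is flagged.
signal = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 10.0]
print(anomaly_flags(signal))
```

In a real deployment the flags would feed the maintenance-scheduling logic rather than be printed.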
7. Designing Predictive Tools for Personalized Functionalities in Knitted Performance Wear
- Author
-
Martijn ten Bhömer, Hai-Ning Liang, Difeng Yu, Yuanjin Liu, Yifan Zhang, Eva de Laat, and Carola Leegwater
- Subjects
knitwear, Interaction Design, Industry 4.0, Machine learning, Artificial Intelligence, Interface Design, Drawing. Design. Illustration, NC1-1940, Engineering design, TA174 - Abstract
Developments in advanced textile manufacturing techniques, such as 3D body-forming knitwear machinery, allow the production of almost finalized garments that require few or no further production steps. Moreover, advanced knitting technology in combination with new materials enables the integration of localized functionalities within a garment at a stitch-by-stitch level. There is potential in enhancing the design tools for advanced knitting manufacturing through technologies such as data gathering, machine learning, and simulation. This approach reflects the potential of Industry 4.0, as design, product development, and manufacturing move closer together. However, there is still limited knowledge about how these new technologies and tools can affect the creative design process. The case study presented in this paper explores the potential of predictive software design tools for fashion designers who are developing personalized advanced functionalities in textile products. The main research question explored in this article is: "How can designers benefit from intelligent design software for the manufacturing of advanced personalized functionalities in textile products?" Within this larger question, three sub-questions are explored: (1) What kinds of advanced functionalities can be considered for the personalization of knitwear? (2) How can interactions and interfaces be designed that use intelligent predictive algorithms to stimulate creativity during the fashion design process? (3) How will predictive software affect the manufacturing process for other stakeholders and production steps? These questions are investigated through the analysis of a Research Through Design case study, in which several predictive algorithms were compared and implemented in a user interface to aid knitwear designers during the development of high-performance running tights.
- Published
- 2019
- Full Text
- View/download PDF
8. Artificial Neural Network (ANN) in a Small Dataset to determine Neutrality in the Pronunciation of English as a Foreign Language in Filipino Call Center Agents
- Author
-
Rey Benjamin M. Baquirin and Proceso L. Fernandez
- Subjects
Artificial Intelligence, Machine Learning, Speech Processing, Neural Networks, Classification, MFCC, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Artificial Neural Networks (ANNs) have continued to be efficient models for solving classification problems. In this paper, we explore the use of an ANN with a small dataset to accurately classify whether Filipino call center agents' pronunciations are neutral or not based on their employer's standards. Isolated utterances of the ten most commonly used words in the call center were recorded from eleven agents, creating a dataset of 110 utterances. Two learning specialists were consulted to establish ground truths, and Cohen's kappa was computed as 0.82, validating the reliability of the dataset. The first thirteen Mel-Frequency Cepstral Coefficients (MFCCs) were then extracted from each word, and an ANN was trained with ten-fold stratified cross-validation. Experimental results on the model recorded a classification accuracy of 89.60%, supported by an overall F-score of 0.92.
- Published
- 2018
- Full Text
- View/download PDF
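The dataset-reliability step in the abstract above, Cohen's kappa between two raters, is straightforward to reproduce. A minimal sketch, assuming hypothetical binary labels (1 = neutral pronunciation, 0 = not neutral) from two raters over the same utterances:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] / n * freq_b[c] / n for c in freq_a)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two learning specialists.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(round(cohen_kappa(a, b), 3))  # 0.783
```

Values around 0.8, like the paper's 0.82, are conventionally read as strong agreement.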
9. Sistema de trading algorítmico utilizando un modelo de machine learning generado por auto machine learning como regla de filtro
- Author
-
López Benítez, Edwin José, Hernandez Perez, German Jairo, and Algoritmos y Combinatoria (Algos-Un)
- Subjects
Artificial intelligence, Trading algorítmico, Mercados financieros, Financial markets, 004 - Procesamiento de datos. Ciencia de los computadores [000 - Ciencias de la computación, información y obras generales], 629 - Otras ramas de la ingeniería [620 - Ingeniería y operaciones afines], 332 - Economía financiera [330 - Economía], Inteligencia artificial, Aprendizaje automático, Engineering economy, Machine learning, Economía industrial, Algorithmic trading, Auto machine learning - Abstract
This paper improves the performance of an algorithmic trading system, or strategy, based on technical indicators by incorporating a classification model that discriminates the strategy's potential trades between winners and losers. The features used as input to the machine learning model are generated from technical indicators at the moment a trade is opened, obtained through a market simulation with Open, High, Low, Close data for the euro-dollar currency pair. For the search for an appropriate classification model, two mechanisms based on auto machine learning and evolutionary algorithms are proposed: one using the Tree-based Pipeline Optimization Tool (TPOT) library, and one combining the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) with TPOT, in which the multi-objective search is guided by accuracy and by the System Quality Number (SQN), a metric for evaluating trading systems. In the experiments conducted, the classification models chosen by NSGA-II significantly improved the performance of the trading strategy, with 32.5% of the out-of-sample models showing positive returns and similar in-sample and out-of-sample behavior. With TPOT, the classifiers found tended to perform well in-sample but not consistently out-of-sample. The final strategy chosen by NSGA-II had 60-61% profitable trades both in-sample and out-of-sample, while TPOT had 98% and 62%, respectively.
- Published
- 2023
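The NSGA-II component in the abstract above rests on non-dominated sorting over two objectives (here, accuracy and SQN). A minimal sketch of extracting the first Pareto front, with hypothetical candidate scores (real NSGA-II also computes further fronts and crowding distances):

```python
def dominates(p, q):
    """p dominates q: at least as good in every objective, better in one."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def pareto_front(points):
    """Return the non-dominated points (the first front in NSGA-II)."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Hypothetical candidate models scored as (accuracy, SQN), both maximized.
candidates = [(0.90, 1.0), (0.80, 2.0), (0.70, 0.5), (0.95, 0.4)]
print(pareto_front(candidates))  # [(0.9, 1.0), (0.8, 2.0), (0.95, 0.4)]
```

No single candidate on the front is best in both objectives at once, which is exactly why a multi-objective search is used instead of optimizing accuracy alone.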
10. Use of machine learning techniques in the Kansei engineering synthesis phase
- Author
-
Zamora-Polo, Francisco, Heras García de Vinuesa, Ana de las, Lama-Ruiz, Juan Ramón, Luque Sendra, Amalia, Universidad de Sevilla. Departamento de Ingeniería del Diseño, and Universidad de Sevilla. TEP022: Diseño Industrial e Ingeniería del Proyecto y la Innovación
- Subjects
Artificial intelligence, Ingeniería kansei, Engineering project, Kansei engineering, Machine learning, Proyectos de ingeniería, Aprendizaje automático, Inteligencia artificial, Emotional design, Diseño emocional - Abstract
Kansei engineering is one of the main methodologies for the emotional design of products. This technique aims to relate the properties of a product or service to the sensations perceived by users. A classic application of the methodology involves several phases, among them the choice of the design domain, the definition of the semantic and property spaces, the synthesis, the validation, and the construction of the model. The popularization of artificial intelligence techniques, including machine learning, has led many authors to use these tools in the synthesis phase. This paper analyses the main machine learning tools used in the synthesis phase of Kansei engineering, as well as the suitability of their use, based on the previously defined property space.
- Published
- 2022
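The synthesis phase described in the abstract above maps product properties to perceived sensations; in the simplest case that mapping is a regression. A minimal sketch with a single hypothetical property (edge curvature) and an averaged semantic rating (perceived "softness"), neither taken from the paper:

```python
def ols_fit(xs, ys):
    """Least-squares line relating a product property to a Kansei rating."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: curvature (0-1) vs. averaged "softness" rating (1-5).
curvature = [0.0, 0.25, 0.5, 0.75, 1.0]
softness = [1.0, 2.0, 3.0, 4.0, 5.0]
slope, intercept = ols_fit(curvature, softness)
print(slope, intercept)  # 4.0 1.0
```

The tools the paper surveys generalize this idea to many properties and nonlinear models, but the synthesis step is still "fit perceived sensation as a function of the property space".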
11. Emotion identification system through facial recognition using artificial intelligence
- Author
-
Alexandra Paricela Canazas, Johnnathan Jimmy Ramos Blaz, Patricio Dante Torres Martínez, and Xiomara Jaquehua Mamani
- Subjects
Machine Learning, Visión por computadora, Artificial Intelligence, Computer Vision, Emociones, Emotions, Expresiones faciales, Facial expressions, Aprendizaje automático, Inteligencia artificial - Abstract
The main objective of this paper is to develop a system to identify a person's emotions through facial recognition using artificial intelligence. The system was based on the basic Eigenfaces algorithm, i.e. Principal Component Analysis, one of the most widely used face recognition models. The Python language and some of its available libraries, such as NumPy, OpenCV and scikit-learn, were used for the implementation.
- Published
- 2022
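The Eigenfaces algorithm named in the abstract above mean-centers the training images, projects them onto principal components, and compares faces in that reduced space. A minimal sketch of the core step on tiny hypothetical 4-pixel "images", using power iteration for the leading component (the paper itself uses NumPy, OpenCV and scikit-learn rather than hand-rolled code):

```python
def top_principal_component(rows, iters=200):
    """Mean-center the data and find the leading eigenvector of X^T X."""
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - mean[j] for j in range(d)] for r in rows]
    v = [1.0 / (j + 1) for j in range(d)]  # asymmetric start vector
    for _ in range(iters):  # power iteration: v <- X^T X v, normalized
        scores = [sum(x[j] * v[j] for j in range(d)) for x in X]
        w = [sum(scores[i] * X[i][j] for i in range(n)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mean, v

def project(image, mean, v):
    """Coordinate of an image along the leading eigenface."""
    return sum((image[j] - mean[j]) * v[j] for j in range(len(image)))

# Two hypothetical "expression classes" of 4-pixel images.
faces = [[10, 9, 0, 1], [9, 10, 1, 0], [0, 1, 10, 9], [1, 0, 9, 10]]
mean, v = top_principal_component(faces)
probe = [9, 9, 1, 1]
p = project(probe, mean, v)
d_a = abs(p - project(faces[0], mean, v))
d_b = abs(p - project(faces[2], mean, v))
print("class A" if d_a < d_b else "class B")  # class A
```

Real systems keep several components, not just one, but nearest-neighbor matching in the projected "face space" is the same idea.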
12. Framework para análisis de software malicioso en Android
- Author
-
Urcuqui López, Christian Camilo and Navarro Cadavid, Andrés
- Published
- 2016
- Full Text
- View/download PDF
13. El aprendizaje bajo incertidumbre aplicado al mantenimiento de interruptores de potencia
- Author
-
Gondres Torné, Israel, Lajes Choy, Santiago Eduardo, Rodríguez León, Nervelio, and del Castillo Serpa, Alfredo
- Published
- 2014
14. Distributed Supervised Sentiment Analysis of Tweets: Integrating Machine Learning and Streaming Analytics for Big Data Challenges in Communication and Audience Research
- Author
-
Mateo Álvarez, Miguel Vicente Mariño, Félix Ortega Mohedano, and Carlos Arcila Calderón
- Subjects
Big Data, Twitter, Machine Learning, Streaming, Sentiment Analysis, Apache Spark, Distributed Computing Environment, Analytics, Communication and Audience Research, Investigación de comunicación y audiencias, Análisis de sentimiento, Analítica en tiempo real, Social sciences (General), H1-99 - Abstract
The large-scale analysis of tweets in real time using supervised sentiment analysis presents a unique opportunity for communication and audience research. Bringing together machine learning and streaming analytics approaches in a distributed environment can help scholars obtain valuable data from Twitter and immediately classify messages depending on their context, with no restrictions of time or storage, empowering cross-sectional, longitudinal and experimental designs with new inputs. Even as communication and audience researchers begin to use computational methods, most remain unfamiliar with the distributed technologies needed to face big data challenges. This paper describes the implementation of parallelized machine learning methods in Apache Spark to predict sentiment in real-time tweets, and explains how this process can be scaled up using academic or commercial distributed computing when personal computers cannot support the computation and storage. We discuss the limitations of these methods and their implications for communication, audience and media studies.
- Published
- 2019
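The supervised sentiment classification step in the abstract above can be sketched at toy scale with a multinomial naive Bayes classifier over word counts; this is a hypothetical stand-in for the parallelized models the authors train in Apache Spark, not their implementation:

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (label, text). Returns naive Bayes count tables."""
    class_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for label, text in docs:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    """Pick the label with the highest log posterior (Laplace smoothing)."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical labelled tweets.
docs = [("pos", "good great fun"), ("pos", "great acting"),
        ("neg", "bad boring"), ("neg", "terrible bad plot")]
model = train(docs)
print(classify("great fun", *model))  # pos
```

Scaling this up is mostly a matter of distributing the counting and scoring over a cluster, which is what the Spark pipeline in the paper provides.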
15. Run a machine learning model to identify potential customers based on a probabilistic process for the company Dell Technologies
- Author
-
Rincon, Nestor Fabio, Gonzalez, Daniela, and Sanchez, Pedro
- Subjects
Algorithm, ALGORITMOS (COMPUTADORES), Artificial intelligence, Machine learning, Algoritmo, INTELIGENCIA ARTIFICIAL - Abstract
This paper presents the proposal of a probabilistic process using the Amazon Web Services AML tool to identify potential customers of the company Dell Technologies using information from the company's Salesforce CRM. The step-by-step development of the algorithm is described, including the theory needed to understand it. The activities carried out include defining the current state of the sales process; analyzing and selecting variables, taking into account factors such as market growth, the pandemic's impact on the technology sector, and government efforts to strengthen the country's technological infrastructure; loading the data into the algorithm; and presenting the results and the analysis of the historical sales of the customers identified by the proposed algorithm.
- Published
- 2021
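A probabilistic process for ranking potential customers, as described in the abstract above, can be sketched as a logistic model trained by gradient descent on historical conversions. The data and feature here are hypothetical; the thesis itself uses the Amazon Web Services AML tool on Salesforce CRM data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=3000):
    """One-feature logistic regression via batch gradient descent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical leads: engagement score -> did the lead convert?
engagement = [0, 1, 2, 3, 4, 5]
converted = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(engagement, converted)
score = lambda x: sigmoid(w * x + b)  # estimated conversion probability
print(score(5) > score(0))  # True
```

Sorting leads by `score` gives the kind of prioritized prospect list the managed AML service produces from many features at once.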
16. Hacia la democratización del aprendizaje de máquinas usando AutoGOAL
- Author
-
Estevanell-Valladares, Ernesto L., Estévez-Velarde, Suilan, Piad-Morffis, Alejandro, Gutiérrez, Yoan, Montoyo, Andres, Almeida-Cruz, Yudivian, Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos, and Procesamiento del Lenguaje y Sistemas de Información (GPLSI)
- Subjects
Artificial intelligence, Automated learning, Machine learning, Lenguajes y Sistemas Informáticos, Aprendizaje de máquinas, Aprendizaje automatizado, AutoML, Inteligencia artificial - Abstract
Machine learning is a field of artificial intelligence that has gained recent interest across industry, driven primarily by the accelerated growth of computing capacity and data availability. However, one of the main difficulties for its application is the need for experts who know the internal details of the many models that can be used. In this context a new field of study has emerged, AutoML (Automated Machine Learning), which facilitates the use of these techniques by experts from other domains. This paper presents a concrete proposal for a system, AutoGOAL, designed to solve machine learning problems of various kinds, and briefly compares it with relevant existing systems in the field. The proposal is competitive with state-of-the-art tools on classic machine learning problems, and it can be seamlessly deployed in more complex domains, such as natural language processing. AutoGOAL is another step towards the democratization of machine learning for non-expert users.
- Published
- 2021
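The core idea behind AutoML systems such as the one in the abstract above is a search over a space of algorithms and hyperparameters guided by an evaluation score. A deliberately tiny sketch with a hypothetical pipeline space and a toy scoring function standing in for cross-validated accuracy; real systems like AutoGOAL use probabilistic, grammar-guided search rather than exhaustive enumeration:

```python
from itertools import product

def grid_search(space, evaluate):
    """Try every configuration in the space; return the best-scoring one."""
    names = list(space)
    configs = [dict(zip(names, values)) for values in product(*space.values())]
    return max(configs, key=evaluate)

# Hypothetical pipeline space: model family x a hyperparameter.
space = {"model": ["knn", "tree"], "k": [1, 3, 5]}

def evaluate(cfg):
    # Toy score: pretend knn with k=3 cross-validates best.
    bonus = 0.1 if cfg["model"] == "knn" else 0.0
    return bonus - 0.01 * abs(cfg["k"] - 3)

print(grid_search(space, evaluate))  # {'model': 'knn', 'k': 3}
```

The "democratization" the paper argues for amounts to hiding this search behind a single call, so the non-expert user supplies only data and an objective.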
17. Métodos de aprendizaje automático aplicados al desarrollo de la bioinformática
- Author
-
Franco Martín, Pilar and Castellanos Garzón, José Antonio
- Subjects
Algorithm, machine learning, Artificial Intelligence, Bioinformatics, Algoritmo, 1203.04 Inteligencia Artificial, 1209 Estadística, Bioinformática, Inteligencia Artificial, Aprendizaje automático - Abstract
Undergraduate final-year project, Degree in Statistics, academic year 2020-2021. The amount of data available has grown exponentially, which makes processing it with traditional methods difficult. Machine learning techniques, capable of handling such volumes of information, have therefore been developed. In this work, basic concepts from the field of machine learning are explained, such as big data, data mining and artificial intelligence, and the existing types of machine learning (supervised, unsupervised, semi-supervised and reinforcement learning) are distinguished. The work also shows how various machine learning techniques operate, through their algorithms with a mathematical and statistical basis or through code in statistical programming languages (mainly R and Python). Once these concepts are understood, the applications machine learning can have in the field of bioinformatics are examined, especially in the branch of genomics.
- Published
- 2021
18. Análisis y comparación de modelos de clasificación de aprendizaje automático aplicado a riesgo crediticio
- Author
-
Alarcón Flores, Jorge Brian and López Malca, Jiam Carlos, et al.
- Subjects
machine learning ,inteligencia artificial ,credit risk ,riesgo crediticio ,aprendizaje automático ,modelos matemáticos ,artificial intelligence ,gradient boosting ,mathematic models - Abstract
The financial industry has become a very competitive sector worldwide. The credit-granting decision is one of its most important processes, and on its accuracy rests the performance of several critical business KPIs, such as loan volume, credit recoveries and non-performing loan ratios. This key process has historically been based on expert judgement: analysts decided whether or not to grant a loan according to several elements of the customer's credit behaviour. In the last decade, the development of technologies such as AI and machine learning has allowed this process to be automated. The main goal of this paper is to analyse several machine-learning-based mathematical algorithms and to show which of them give the best results in credit-granting prediction, contributing to current knowledge on this issue, giving an objective explanation of the results and suggesting follow-up research aimed at improving existing algorithms. The experiments determined that the best model was Gradient Boosting, with an accuracy of 83.71%.
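As a hedged sketch of the kind of pipeline compared in the paper, the snippet below trains scikit-learn's gradient boosting classifier on synthetic, class-imbalanced data standing in for credit applications; the dataset, features and resulting score are illustrative, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "applicants": label 1 stands in for a defaulted loan (~20%).
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.8, 0.2], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
```

On imbalanced credit data, accuracy alone can be misleading; metrics such as AUC or recall on the default class are usually reported alongside it.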
- Published
- 2018
19. Estudio de la aplicación de Machine Learning a técnicas de posicionamiento en interiores
- Author
-
Gómez Ortiz, Javier, García Gutiérrez, Alberto Eloy, and Universidad de Cantabria
- Subjects
Artificial intelligence ,Aprendizaje autónomo ,Evaluación ,Conjunto de datos ,Entrenamiento ,Machine learning ,Testing ,Training ,Redes neuronales ,Inteligencia artificial ,Neural networks ,Dataset - Abstract
ABSTRACT: In this paper, machine learning techniques are used as a solution to the indoor positioning problem. This problem has been studied in many previous projects following solutions considered traditional, without achieving satisfactory results. Consequently, after a study of the artificial intelligence and machine learning techniques that currently exist, the one providing the best results was selected. The proposed method has been tested using the dataset and results obtained with classical trilateration techniques. Analysis of the results shows an improvement in the position estimates, as well as other advantages from the point of view of deployment and implementation. Master's thesis in Telecommunication Engineering.
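The abstract does not name the winning technique; as one plausible sketch, fingerprint-based indoor positioning can be cast as supervised regression from received signal strengths (RSSI) to coordinates. Everything below (log-distance propagation model, anchor layout, k-NN regressor) is an assumption for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
positions = rng.uniform(0, 20, size=(300, 2))             # true (x, y) in metres
anchors = np.array([[0, 0], [20, 0], [0, 20], [20, 20]])  # 4 fixed beacons

# Log-distance path-loss model with 1 dB of measurement noise (illustrative).
dists = np.linalg.norm(positions[:, None] - anchors[None], axis=2)
rssi = -40 - 20 * np.log10(dists + 1) + rng.normal(0, 1, dists.shape)

# Train on 250 fingerprints, then locate the remaining 50 from RSSI alone.
knn = KNeighborsRegressor(n_neighbors=5).fit(rssi[:250], positions[:250])
mean_err = np.linalg.norm(knn.predict(rssi[250:]) - positions[250:],
                          axis=1).mean()
```

Unlike trilateration, this approach needs no explicit propagation model at inference time, which is one of the deployment advantages the abstract alludes to.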
- Published
- 2018
20. Indicators of ADHD symptoms in virtual learning context using machine learning technics
- Author
-
Silvia Margarita Baldiris Navarro, Laura Patricia Mancera Valetts, and Viviana Betancur Chicué
- Subjects
Règles de Classification ,plataformas virtuales de aprendizaje ,reglas de clasificación ,machine learning technics ,Desordem de Déficit de Atenção com Hiperatividade ,lcsh:Business ,Machine learning ,computer.software_genre ,Regras de Classificação ,Task (project management) ,Attention deficit hyperactivity disorder ,Sistema de Hipermídia Adaptativa ,aprendizaje automático -- modelos de enseñanaza ,aprendizaje automático ,medicine ,Aprendizado de Técnicas de Máquina ,Profiling (information science) ,Troubles et Déficits d'Attention causés par l'Hyperactivité ,Set (psychology) ,user modeling ,lcsh:Commerce ,classification rules ,Plataforma de Aprendizagem virtual ,Modelagem do Usuário ,business.industry ,virtual learning platform ,Système Hypermédias Adaptatifs ,User modeling ,modelo de usuario ,sistemas hipermedia adaptativos ,Déficit de atención e hiperactividad ,General Medicine ,medicine.disease ,Executive functions ,Modélisation de l'Utilisateur ,Statistical classification ,lcsh:HF1-6182 ,Virtual learning environment ,tecnología educativa ,Artificial intelligence ,Psychology ,business ,Plate-forme Virtuelle d'Apprentissage ,lcsh:HF5001-6182 ,computer ,adaptive hypermedia system - Abstract
This paper presents a user model for students in virtual learning processes, used to infer the presence of Attention Deficit Hyperactivity Disorder (ADHD) indicators in a student. The user model is built from three user characteristics, which can also be used as variables in other contexts: behavioural conduct (BC), executive functions performance (EFP), and emotional state (ES). To infer the ADHD symptomatic profile of a student and his/her emotional alterations, these features are used as input to a set of classification rules. From tests of the proposed model, training examples are obtained; these examples are used to train a machine learning classification algorithm that performs, and improves on, the task of profiling a student. The proposed user model can provide a first step towards adapting learning resources in e-learning platforms to people with attention problems, specifically young adult students with ADHD.
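A minimal sketch of the pipeline the paper describes: hand-written classification rules label examples over the three features (BC, EFP, ES), and those labelled examples then train a classifier. The rule, thresholds and tree model below are invented for illustration and are not the paper's.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(400, 3))   # columns: BC, EFP, ES

# Stand-in rule: flag a symptomatic profile when behavioural conduct is
# high and executive-function performance is low.
y = ((X[:, 0] > 0.6) & (X[:, 1] < 0.4)).astype(int)

# The rule-labelled examples become training data for a learner that can
# later be refined with real observations, as the paper proposes.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)
train_acc = tree.score(X, y)
```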
- Published
- 2015
21. A Systematic Review on Object Localisation Methods in Images
- Author
-
Surajit Saikia, Deisy Chaves, Maria Trujillo, Enrique Alegre, and Laura Fernández-Robles
- Subjects
0209 industrial biotechnology ,General Computer Science ,Process (engineering) ,Computer science ,media_common.quotation_subject ,02 engineering and technology ,Digital image ,020901 industrial engineering & automation ,Image processing ,Pattern recognition ,Detection algorithms ,Machine learning ,0202 electrical engineering, electronic engineering, information engineering ,Quality (business) ,Algoritmos de detección ,Procesamiento de imágenes ,Implementation ,media_common ,Information retrieval ,business.industry ,Deep learning ,Object recognition ,Visual inspection ,Reconocimiento de objetos ,Control and Systems Engineering ,Obstacle ,Robot ,Aprendizaje máquina ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Reconocimiento de patrones - Abstract
[EN] Many applications currently require precise localisation of the objects that appear in an image before further processing: visual inspection in industry, computer-aided clinical diagnosis systems, and obstacle detection in vehicles or robots, among others. However, factors such as image quality and the appearance of the objects to be detected make automatic localisation difficult. In this article we carry out a systematic review of the main object localisation methods, from approaches based on sliding windows, such as the detector proposed by Viola and Jones, to current methods that use deep learning networks, such as Faster R-CNN and Mask R-CNN. For each proposal we describe the relevant details, considering advantages and disadvantages, as well as the main applications of these methods in various areas. This paper aims to provide a clear, condensed review of the state of the art of these techniques, their usefulness and their implementations, to facilitate their knowledge and use by any researcher who needs to locate objects in digital images. We conclude by summarising the main ideas presented and discussing future trends of these methods. This work was partially funded by several institutions: Deisy Chaves holds a COLCIENCIAS "Estudios de Doctorado en Colombia 2013" grant; Surajit Saikia holds a Junta de Castilla y León grant (reference EDU/529/2017); the authors also acknowledge the support of INCIBE (Instituto Nacional de Ciberseguridad) through Addendum 22 to its agreement with the Universidad de León.
- Published
- 2018
- Full Text
- View/download PDF
22. A VoIP call classifier for carrier grade based on Support Vector Machines
- Author
-
Jairo Alberto Cardona-Peña, Juan Pablo Tello-Portillo, and Juan Ricardo Wilches-Cortina
- Subjects
lcsh:TN1-997 ,Theoretical computer science ,Audio analysis ,Computer science ,media_common.quotation_subject ,SVM ,02 engineering and technology ,reconocimiento de patrones ,computer.software_genre ,Machine learning ,lcsh:Technology ,020204 information systems ,Carrier grade ,Classifier (linguistics) ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,Quality (business) ,Audio signal processing ,SVM, VoIP ,Análisis de audio ,lcsh:Mining engineering. Metallurgy ,media_common ,Voice over IP ,lcsh:T ,business.industry ,pattern recognition ,General Engineering ,020206 networking & telecommunications ,Support vector machine ,Task (computing) ,ComputingMethodologies_PATTERNRECOGNITION ,VoIP ,62 Ingeniería y operaciones afines / Engineering ,Artificial intelligence ,business ,computer - Abstract
Currently, VoIP company technicians run tests to classify call quality as good or bad. Although there are automatic platforms that make test VoIP calls and classify them, they do not perform the audio processing needed to detect False Answer Supervision (FAS), a common and undesirable feature of VoIP calls. In this paper, a Support Vector Machine (SVM), together with several functions used in voice recognition, was implemented to emulate the human decision procedure (the audio classification and analysis task performed by technicians). The experiments compared the results of the current classification methods with those of the SVM. Ten-fold cross-validation was used to evaluate system performance. The test results show that the proposed methodology achieves a better rate of successful classification than a selected automatic platform called CheckMyRoutes.
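The evaluation setup described (an SVM scored with 10-fold cross-validation) can be sketched as follows; the synthetic features stand in for the paper's audio-derived features, which are not public.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for per-call audio features; label 1 = "bad" call.
X, y = make_classification(n_samples=500, n_features=20, random_state=7)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(svm, X, y, cv=10)   # 10-fold CV, as in the paper
mean_acc = scores.mean()
```

Scaling before the RBF kernel matters because SVMs are sensitive to feature magnitudes; the pipeline ensures the scaler is re-fit inside each fold.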
- Published
- 2017
23. Muestreo adaptativo aplicado a la robótica: Revisión del estado de la técnica
- Author
-
João Valente and Ignacio Pastor
- Subjects
0209 industrial biotechnology ,Adaptive sampling ,General Computer Science ,Remote sensing application ,Computer science ,lcsh:Control engineering systems. Automatic machinery (General) ,02 engineering and technology ,Machine learning ,computer.software_genre ,01 natural sciences ,lcsh:TJ212-225 ,020901 industrial engineering & automation ,Cobertura Óptima ,Field robotics ,Teledetección ,Sampling methodology ,Optimal coverage ,Simulation ,Path planning ,0105 earth and related environmental sciences ,010505 oceanography ,business.industry ,Sampling (statistics) ,Robotics ,Muestreo adaptativo ,Remote sensing ,Planificación de trayectorias ,Control and Systems Engineering ,Artificial intelligence ,business ,computer ,Robots de exteriores - Abstract
[EN] In this paper, a robotics sampling methodology known as Adaptive Sampling (AS) is reviewed. Although the method is not yet widespread in robotics, it plays an important role in remote sensing applications over rapidly changing environments. This article introduces AS and summarises the main AS techniques and algorithms applied to robotics, making use of path planning. Finally, a number of projects currently under development that use AS to solve relevant monitoring or sampling problems are highlighted.
- Published
- 2017
24. Online estimation of rollator user condition using spatiotemporal gait parameters
- Author
-
Cristina Urdiales, Marina Tirado, Antonio B. Martínez, and Joaquin Ballesteros
- Subjects
0209 industrial biotechnology ,Computer science ,medicine.medical_treatment ,Asistencia sanitaria ,02 engineering and technology ,Clinical scales ,Robot asistivo ,Machine learning ,computer.software_genre ,03 medical and health sciences ,020901 industrial engineering & automation ,0302 clinical medicine ,Gait (human) ,medicine ,Assistive robot ,Simulation ,Balance (ability) ,Rehabilitation ,business.industry ,Tinetti test ,Work (physics) ,Rollator ,Gait ,Test (assessment) ,Caminador ,Artificial intelligence ,Automatic gait analysis ,business ,computer ,030217 neurology & neurosurgery ,Parámetros clínicos - Abstract
Assistance to people during rehabilitation has to be adapted to their needs: too little help can lead to frustration and stress in the user, while an excess of help may lead to low participation and loss of residual skills. Robotic rollators can adapt the assistance they provide. The main challenge is to estimate on the fly how much help is needed, because it depends not only on the person's condition but also on the specific situation they are negotiating. Clinical scales provide a global, condition-based estimate, but no local estimator based on momentary needs; condition also changes over time, so clinical scales need to be recalculated again and again. In this paper we propose a novel approach to estimating users' condition continuously via a robotic rollator. Our work focuses on predicting the value of the well-known Tinetti Mobility Test from spatiotemporal gait parameters obtained from our platform while users walk. This prediction provides continuous insight into the condition of the user and could be used to modify the amount of help provided. The proposed method has been validated with 19 volunteers at a local hospital who use a rollator for rehabilitation; all volunteers presented some physical or mental disability. Our results successfully show a high correlation of spatiotemporal gait parameters with the Tinetti Mobility Test gait score (R² = 0.7) and balance score (R² = 0.6). Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
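The paper reports R² between gait parameters and Tinetti scores; a minimal sketch of that kind of fit, on synthetic data with invented parameter names and coefficients (not the study's 19-volunteer dataset), looks like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
# Hypothetical spatiotemporal parameters: speed, cadence, step length, symmetry.
gait = rng.normal(size=(100, 4))
score = gait @ np.array([2.0, 1.0, 1.5, 0.5]) + rng.normal(0, 1.0, 100)

reg = LinearRegression().fit(gait, score)
r2 = r2_score(score, reg.predict(gait))   # analogous to the reported R^2
```

Updating such a model as new walks are recorded is what would let the rollator re-estimate the user's condition continuously rather than at scheduled clinical assessments.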
- Published
- 2016
25. Ordinal evolutionary artificial neural networks for solving an imbalanced liver transplantation problem
- Author
-
María Pérez-Ortiz, César Hervás-Martínez, Manuel Dorado-Moreno, María Dolores Ayllón-Terán, and Pedro Antonio Gutiérrez
- Subjects
Matching (statistics) ,Fitness function ,Artificial neural network ,Computer science ,business.industry ,Existential quantification ,medicine.medical_treatment ,Evolutionary algorithm ,Artificial neural network model ,02 engineering and technology ,Liver transplantation ,Machine learning ,computer.software_genre ,Ordinal regression ,ComputingMethodologies_PATTERNRECOGNITION ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer - Abstract
Ordinal regression considers classification problems in which there is a natural ordering among the categories. In this learning setting, threshold models are among the most used and successful techniques. Liver transplantation, on the other hand, is a widely used treatment for patients with terminal liver disease. This paper considers the survival time of the recipient in order to perform appropriate donor-recipient matching, which is a highly imbalanced classification problem. An artificial neural network model for ordinal classification is used, combining evolutionary and gradient-descent algorithms to optimise its parameters, together with an ordinal over-sampling technique. The evolutionary algorithm applies a modified fitness function able to deal with the ordinal, imbalanced nature of the dataset. The results show that the proposed model achieves competitive performance on this problem.
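The threshold models mentioned share one core mechanism: a single latent score cut at ordered thresholds. A minimal sketch of that mechanism, with hand-fixed cuts (whereas the paper evolves the model's parameters):

```python
import numpy as np

def ordinal_predict(latent_scores, thresholds):
    """Map latent scores to ordinal class indices 0..K via ordered cut points."""
    return np.searchsorted(thresholds, latent_scores)

thresholds = np.array([-1.0, 0.0, 1.5])   # 3 cuts define 4 ordered classes
classes = ordinal_predict(np.array([-2.0, -0.5, 0.3, 2.0]), thresholds)
# classes -> array([0, 1, 2, 3])
```

Because misclassifying by two classes is worse than by one, both the thresholds and the latent-score model are trained with order-aware losses rather than plain multiclass accuracy.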
- Published
- 2016
26. Program Implementation of the Rating Methods of Preference Ranking
- Author
-
Alexey L. Sadovski, Carl Steidley, and Kelly Torres-Knott
- Subjects
preference ranking ,Computer science ,Materials Science (miscellaneous) ,Ecological systems theory ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,Ranking (information retrieval) ,Convergence (routing) ,Selection (linguistics) ,Almost everywhere ,Business and International Management ,Set (psychology) ,business.industry ,lcsh:Mathematics ,Rating methods ,selection problem ,lcsh:QA1-939 ,problema de escogencia ,Preference ,Artificial intelligence ,business ,Realization (systems) ,computer ,Métodos de valoración ,ranking de preferencias - Abstract
Tide charts, based upon harmonic analysis, are the general method of choice for predicting water levels. In the shallow waters of the Gulf of Mexico, however, tide charts are woefully inadequate for predicting water levels. We have developed a number of models for water-level prediction. In this paper we summarise these methods and discuss the development of an axiomatic tool that we use to measure the quality of water-level predictions in the estuaries and shallow waters of the Gulf of Mexico. This quality measure is based upon preference rankings of National Ocean Service criteria by experts in the field.
- Published
- 2012
27. Hacia las pruebas en sistemas de alta variabilidad utilizando opiniones de los usuarios
- Author
-
Jesennia Cardenas, Jorge L. Rodas, David Méndez-Acuña, José A. Galindo, and David Benavides (Universidad Estatal de Milagro (UNEMI); Inria Rennes – Bretagne Atlantique, Diversity-centric Software Engineering (DiverSe) team, Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA); Universidad de Sevilla, Departamento de Lenguajes y Sistemas Informáticos)
- Subjects
Prioritization ,Computer science ,business.industry ,[INFO.INFO-SE]Computer Science [cs]/Software Engineering [cs.SE] ,Recommender system ,Machine learning ,computer.software_genre ,Sistemas de recomendación ,Sistemas de alta variabilidd ,World Wide Web ,Test case ,Android ,Modelos de características ,Artificial intelligence ,Software system ,Android (operating system) ,business ,computer - Abstract
Variability-intensive systems are software systems that describe a large set of diverse configurations sharing some characteristics. This high number of configurations makes testing such systems an expensive and error-prone task. In the Android ecosystem, for example, we can find up to 2^24 valid emulator configurations, making it impossible to test an application on all of them. To alleviate this problem, previous research suggests selecting a subset of test cases that maximises the chances of finding errors while maximising the diversity of configurations. Concretely, these proposals focus on the prioritisation and selection of tests, so that only configurations that are relevant according to some criterion are tested. In this paper, we propose taking users' opinions into account when selecting and prioritising tests. To do so, we explore the use of recommender systems as a possible improvement to test-case selection in variability-intensive systems.
- Published
- 2015
28. An Approach for Solving Goal Programming Problems using Interval Type-2 Fuzzy Goals
- Author
-
Krisna Y. Espinosa-Ayala, Juan S. Patino-Callejas, and Juan Carlos Figueroa-García
- Subjects
Programación lineal difusa ,Fuzzy classification ,Fuzzy programming ,business.industry ,Fuzzy set ,Interval (mathematics) ,Programación por metas ,Machine learning ,computer.software_genre ,Type-2 fuzzy sets and systems ,Goal programming ,Defuzzification ,Fuzzy logic ,fuzzy linear programming ,Interval Type-2 fuzzy sets ,lcsh:TA1-2040 ,Fuzzy set operations ,Artificial intelligence ,Type-2 Fuzzy sets ,business ,lcsh:Engineering (General). Civil engineering (General) ,computer ,Conjuntos difusos Tipo-2 de intervalo ,Mathematics - Abstract
This paper presents a proposal for solving goal programming problems involving multiple experts' opinions and perceptions. In goal programming problems where no statistical data about the goals exist, information coming from experts becomes the last reliable source. We therefore propose an approach to model this kind of goal using Interval Type-2 fuzzy sets, together with a simple method for finding an optimal solution based on methods previously proposed for classical fuzzy sets.
- Published
- 2015
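A small illustrative sketch of the interval type-2 idea mentioned in the abstract: a fuzzy goal whose membership is not a single value but an interval bounded by a lower and an upper triangular membership function (the gap between them reflecting expert disagreement). The specific goal values and triangle parameters below are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, lower, upper):
    """Membership interval [mu_low, mu_up] of an interval type-2
    fuzzy goal, given lower and upper triangular memberships."""
    return tri(x, *lower), tri(x, *upper)

# Goal "around 10"; expert disagreement widens the upper bound.
lo, up = it2_membership(9.0, lower=(8, 10, 12), upper=(6, 10, 14))
```

The interval [lo, up] is the footprint of uncertainty at x = 9: the lower function gives 0.5 and the wider upper function gives 0.75.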
29. FDC-based proactive local search
- Author
-
Alejandro Rosete-Suárez and Mailyn Moreno-Espino
- Subjects
Mathematical optimization ,Engineering ,Computer Science::Neural and Evolutionary Computation ,Evolutionary algorithm ,Metaheuristics ,Machine learning ,computer.software_genre ,Metaheurísticas ,Time windows ,Búsqueda con Vecindad Variable ,Local search (optimization) ,Proactive Behavior ,Metaheuristic ,Proactividad ,business.industry ,Variable Neighborhood Search ,General Engineering ,Agentes ,Agents ,Great Deluge algorithm ,Distance correlation ,Identification (information) ,FDC ,Artificial intelligence ,business ,Hill climbing ,computer - Abstract
This paper introduces ECE-MP-FDC, a proactive variant of Hill Climbing (Local Search). The algorithm identifies the best neighborhood structure through the repeated application of the mutation operator, evaluating each candidate neighborhood with the FDC (Fitness Distance Correlation) metric. The best neighborhood structure is used during a time window, after which the analysis is repeated. An experimental study was conducted on 28 functions over 100-bit binary strings with varying degrees of difficulty. The FDC-based proactive variant of Hill Climbing achieves results similar to or better than other metaheuristics (Evolutionary Algorithms, Great Deluge Algorithm, Threshold Accepting, and RRT).
- Published
- 2014
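The FDC metric used above is, in its standard form, the Pearson correlation between the fitness of sampled points and their distance to the best known point; a sketch, with made-up sample values:

```python
import math

def fdc(fitnesses, distances):
    """Fitness Distance Correlation: Pearson correlation between
    sampled fitness values and distances to the best known point."""
    n = len(fitnesses)
    mf = sum(fitnesses) / n
    md = sum(distances) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitnesses, distances))
    sf = math.sqrt(sum((f - mf) ** 2 for f in fitnesses))
    sd = math.sqrt(sum((d - md) ** 2 for d in distances))
    return cov / (sf * sd)

# Fitness falling linearly with distance to the optimum: FDC = -1,
# the signature of an "easy" landscape for a maximization problem.
value = fdc([5.0, 4.0, 3.0, 2.0], [0.0, 1.0, 2.0, 3.0])
```

A neighborhood whose samples yield an FDC close to -1 (for maximization) is the kind the proactive search would prefer.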
30. Forecasting nonlinear time series using MARS
- Author
-
Paula Andrea Camacho, Juan David Velásquez-Henao, and Carlos Jaime Franco-Cardona
- Subjects
Engineering ,Multivariate adaptive regression splines ,Artificial neural network ,Artificial neural networks ,comparative studies ,business.industry ,métodos no paramétricos ,General Engineering ,estudios comparativos ,Statistical model ,Mars Exploration Program ,nonparametric methods ,Machine learning ,computer.software_genre ,modelos ARIMA ,Specification ,ARIMA models ,Redes neuronales artificiales ,Autoregressive integrated moving average ,Artificial intelligence ,Time series ,business ,Nonlinear regression ,computer - Abstract
One of the most important uses of artificial neural networks is forecasting nonlinear time series, although model-building issues, such as input selection, model complexity, and parameter estimation, remain without a satisfactory solution. Most research efforts are devoted to solving these issues. However, models emerging from statistics may be more appropriate than neural networks for forecasting, in the sense that their specification process is based entirely on statistical criteria. Multivariate adaptive regression splines (MARS) is a statistical model commonly used for solving nonlinear regression problems, and it can also be used for forecasting time series. Nonetheless, there is a lack of studies comparing the results obtained with MARS and with neural network models in order to determine which model is better. In this paper, we forecast four nonlinear time series using MARS and compare the results against those reported in the technical literature for artificial neural networks and the ARIMA approach. The main finding of this research is that, for all the cases considered, the forecasts obtained with MARS are less accurate than those of the other approaches.
- Published
- 2014
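MARS models, as referenced above, are built from hinge basis functions max(0, x - t) and max(0, t - x); a minimal sketch of evaluating such a model, with invented coefficients and knots:

```python
def hinge(x, knot, sign=1):
    """MARS hinge basis function: max(0, x - knot) if sign=+1,
    max(0, knot - x) if sign=-1."""
    return max(0.0, sign * (x - knot))

def mars_predict(x, intercept, terms):
    """Evaluate a (one-dimensional) MARS model: the intercept plus a
    weighted sum of hinge terms given as (coef, knot, sign) tuples."""
    return intercept + sum(c * hinge(x, k, s) for c, k, s in terms)

# Piecewise-linear model y = 1 + 2 * max(0, x - 3).
y_right = mars_predict(5.0, 1.0, [(2.0, 3.0, 1)])
y_left = mars_predict(2.0, 1.0, [(2.0, 3.0, 1)])
```

Real MARS fitting selects knots and prunes terms automatically; this sketch only shows what the fitted model computes.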
31. Analysis of pattern recognition and dimensionality reduction techniques for odor biometrics
- Author
-
Ana Herrero, Guillermo Vidal-de-Miguel, Irene Rodriguez-Lujan, Gonzalo Bailador, and Carmen Sanchez-Avila
- Subjects
Information Systems and Management ,Biometrics ,Computer science ,Robótica e Informática Industrial ,Feature selection ,02 engineering and technology ,Machine learning ,computer.software_genre ,01 natural sciences ,Management Information Systems ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,business.industry ,Dimensionality reduction ,010401 analytical chemistry ,Pattern recognition ,0104 chemical sciences ,Odor ,Spectrogram ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer ,Classifier (UML) ,Software ,psychological phenomena and processes - Abstract
In this paper, we analyze the performance of several well-known pattern recognition and dimensionality reduction techniques when applied to mass-spectrometry data for odor biometric identification. Motivated by the successful results of previous works capturing the odor from other parts of the body, this work attempts to evaluate the feasibility of identifying people by the odor emanating from the hands. By formulating this task as a machine learning problem, it is identified with a small-sample-size supervised classification problem in which the input data consists of mass spectrograms from the hand odor of 13 subjects captured in different sessions. The high dimensionality of the data makes it necessary to apply feature selection and extraction techniques together with a simple classifier in order to improve the generalization capabilities of the model. Our experimental results achieve recognition rates over 85%, which reveals that discriminatory information exists in the hand odor and points to body odor as a promising biometric identifier.
- Published
- 2013
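The "feature selection plus simple classifier" pipeline described above can be sketched with stand-ins: variance-based feature selection for the dimensionality reduction step and a nearest-centroid rule for the simple classifier (the paper evaluates several such techniques; these particular choices and the toy data are illustrative only).

```python
from collections import defaultdict

def top_variance_features(X, k):
    """Indices of the k highest-variance features: a simple
    stand-in for the feature-selection step."""
    n = len(X)
    var = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        m = sum(col) / n
        var.append((sum((v - m) ** 2 for v in col) / n, j))
    return [j for _, j in sorted(var, reverse=True)[:k]]

def nearest_centroid(X, y, x_new, feats):
    """Classify x_new by the closest class centroid over selected features."""
    groups = defaultdict(list)
    for row, label in zip(X, y):
        groups[label].append(row)
    best, best_d = None, float("inf")
    for label, rows in groups.items():
        cen = [sum(r[j] for r in rows) / len(rows) for j in feats]
        d = sum((x_new[j] - c) ** 2 for j, c in zip(feats, cen))
        if d < best_d:
            best, best_d = label, d
    return best

# Toy "spectrograms": feature 0 separates the classes, feature 1 is noise.
X = [[0, 5], [0, 6], [10, 5], [10, 6]]
y = ["subject_a", "subject_a", "subject_b", "subject_b"]
feats = top_variance_features(X, 1)
pred = nearest_centroid(X, y, [9, 5], feats)
```

In the small-sample regime of the paper, reducing dimensionality before classification is what keeps such a simple rule from overfitting.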
32. Stochastic Model of the conceptual interconnections in a classroom learning process
- Author
-
Nicolás Alberto Moreno Reyes, José Alejandro González Campos, Juan Carlos Medina Magdaleno, and Diana Milena Galvis Soto
- Subjects
knowledge ,matriz de adjacência ,adjacency matrix ,Stochastic modelling ,business.industry ,Process (engineering) ,Computer science ,gráficos ,intervalos de confiança e distribuição assintótica ,graph ,Machine learning ,computer.software_genre ,distribuição de Poisson ,distribución Poisson ,grafos ,intervalos de confianza ,confidence interval and asymptotic distribution ,Poisson distribution ,Artificial intelligence ,business ,computer ,matriz de adyacencia ,saberes ,distribución asintótica - Abstract
Inclusion is currently a concept on the table of institutions concerned with education: making all students part of the teaching-learning process as individuals, regardless of their reality [3]. However, quantifying and modeling conceptual interconnections remains a distant process; therefore, our intention is to take a step toward an education for all and to objectify the process of quantifying group consistency around a concept [1], which we call cognitive inclusion. To this end, Conceptual Consistency Graphs (CCG) are defined and the conceptual link is modeled as a Poisson random variable, yielding a methodology for visualizing and quantifying the conceptual consistency of a group. This work is theoretically grounded in the knowledge processing of the Experimental Laboratory of Mathematical Knowledge, Lab[e]saM [1], and becomes a new complement to the study of didactic transpositions.
- Published
- 2012
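Since the abstract's keywords mention Poisson distributions, confidence intervals, and asymptotic distributions together, a plausible minimal sketch is the maximum-likelihood estimate of a Poisson link rate with its asymptotic (Wald) confidence interval; the link counts below are invented:

```python
import math

def poisson_rate_ci(counts, z=1.96):
    """MLE of a Poisson rate from observed counts, with the
    asymptotic Wald interval: lambda_hat +- z * sqrt(lambda_hat / n)."""
    n = len(counts)
    lam = sum(counts) / n          # MLE of the Poisson rate
    half = z * math.sqrt(lam / n)  # asymptotic standard error, scaled
    return lam, (max(0.0, lam - half), lam + half)

# Hypothetical conceptual-link counts taken from an adjacency matrix.
lam, (lo, hi) = poisson_rate_ci([2, 3, 1, 4, 2])
```

The interval quantifies how confidently a group's conceptual consistency around a concept can be stated from a small sample.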
33. Human activity monitoring by local and global finite state machines
- Author
-
Antonio Fernández-Caballero, José María Rodríguez-Sánchez, and José Carlos Castillo
- Subjects
Structure (mathematical logic) ,Finite-state machine ,Computer science ,business.industry ,General Engineering ,Machine learning ,computer.software_genre ,Computer security ,Motion (physics) ,Computer Science Applications ,Task (project management) ,Test case ,Rule-based machine translation ,Artificial Intelligence ,Problem domain ,Ingenierías ,Artificial intelligence ,business ,computer - Abstract
There are a number of solutions to automate the monotonous task of looking at a monitor to find suspicious behaviors in video surveillance scenarios. Detecting strange objects and intruders, or tracking people and objects, is essential for surveillance and safety in crowded environments. The present work deals with the idea of jointly modeling simple and complex behaviors to report local and global human activities in natural scenes. Modeling human activities with state machines is still common today and is the approach offered in this paper. We incorporate knowledge about the problem domain into an expected structure of the activity model. Motion-based image features are linked explicitly to a symbolic notion of hierarchical activity through several layers of more abstract activity descriptions. Atomic actions are detected at a low level and fed to hand-crafted grammars to detect activity patterns of interest. We also work with shape and trajectory to indicate the events related to moving objects. In order to validate our proposal, we have performed several tests on CAVIAR test cases.
- Published
- 2012
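The "atomic actions fed to higher-level activity models" pipeline above can be sketched as a finite state machine whose transitions are driven by detected atomic events; the states, events, and loitering scenario are hypothetical:

```python
class ActivityFSM:
    """Minimal finite state machine: detected atomic actions drive
    transitions between activity states; unknown events are ignored."""

    def __init__(self, transitions, start):
        self.transitions = transitions  # {(state, event): next_state}
        self.state = start

    def feed(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Hypothetical local activity: a person entering and loitering in a zone.
fsm = ActivityFSM(
    transitions={
        ("idle", "enter_zone"): "inside",
        ("inside", "stay_long"): "loitering",  # pattern of interest
        ("inside", "exit_zone"): "idle",
    },
    start="idle",
)
for event in ["enter_zone", "stay_long"]:
    fsm.feed(event)
```

Composing several such local machines, plus a global one over their states, mirrors the local/global split in the title.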
34. A simple method for decision making in robocup soccer simulation 3d environment
- Author
-
Maleki, Khashayar Niki, Valipour, Mohammad Hadi, Ashrafi, Roohollah Yeylaghi, Mokari, Sadegh, Jamali, M. R., and Lucas, Caro
- Subjects
FOS: Computer and information sciences ,I.2.3 ,Artificial intelligence ,I.2.11 ,Computer Science - Artificial Intelligence ,Multi-Agent systems ,I.2.9 ,Fuzzy reinforcement learning ,Computer Science - Robotics ,Artificial Intelligence (cs.AI) ,68T15, 68T40, 68T37 ,Fuzzy Logic ,RoboCup soccer simulation ,Machine learning ,Reinforcement learning ,Robotics (cs.RO) - Abstract
In this paper, new hierarchical hybrid fuzzy-crisp methods for decision making and action selection of an agent in the soccer simulation 3D environment are presented. First, the skills of an agent are introduced, implemented, and classified in two layers: the basic skills and the high-level skills. In the second layer, a two-phase mechanism for decision making is introduced. In phase one, some useful methods are implemented which check the agent's situation before performing required skills. In the next phase, the team strategy, team formation, agent's role, and the agent's positioning system are introduced. A fuzzy logic approach is employed to recognize the team strategy and, furthermore, to tell the player the best position to move to. Finally, we evaluated the implemented algorithm in the RoboCup Soccer Simulation 3D environment, and the results showed the efficiency of the introduced methodology., Comment: 8 pages, 10 figures; Revista Avances en Sistemas e Informatica, Vol. 5, No. 3, December 2008
- Published
- 2008
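A toy sketch of the fuzzy positioning idea mentioned above: candidate positions are scored by fuzzy memberships ("near the ball", "near the goal") combined with min as the fuzzy AND, and the best-scoring one is chosen. The membership shapes, distance scales, and candidate roles are all invented for illustration.

```python
def fuzzy_position_score(dist_ball, dist_goal):
    """Score a candidate position: 'near ball' AND 'near goal',
    with linear memberships and min as the fuzzy conjunction."""
    near_ball = max(0.0, 1.0 - dist_ball / 10.0)
    near_goal = max(0.0, 1.0 - dist_goal / 30.0)
    return min(near_ball, near_goal)

def best_position(candidates):
    """Pick the candidate (name, (dist_ball, dist_goal)) with top score."""
    return max(candidates, key=lambda p: fuzzy_position_score(*p[1]))

choice = best_position([("defend", (8.0, 25.0)), ("attack", (2.0, 12.0))])
```

The crisp part of the hybrid scheme would then execute the skill associated with the chosen position.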
35. Boosting Support Vector Machines
- Author
-
Fernando Lozano Martínez and Elkin Eduardo García Díaz
- Subjects
generalización ,History ,Boosting (machine learning) ,Polymers and Plastics ,Computer science ,business.industry ,generalizacion ,SVM ,SMO ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,Boosting ,Support vector machine ,lcsh:TA1-2040 ,SVM./ Boosting ,Artificial intelligence ,Business and International Management ,lcsh:Engineering (General). Civil engineering (General) ,business ,computer ,generalization - Abstract
In this paper we present a binary classification algorithm based on Support Vector Machines, appropriately combined with a modified Boosting algorithm. It runs faster during training than the original SVM algorithm, with a similar generalization error and a model of equal complexity but more compact representation.
- Published
- 2006
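The boosting side of the combination above follows the standard AdaBoost reweighting loop; a minimal sketch, with simple threshold rules standing in for the SVM weak learners of the paper (the data and learners are invented):

```python
import math

def ada_boost(X, y, weak_learners, rounds=3):
    """AdaBoost reweighting loop over a fixed pool of weak learners
    (each maps x -> +1/-1); labels y are +1/-1."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Pick the weak learner with the smallest weighted error.
        best = min(weak_learners,
                   key=lambda h: sum(wi for wi, xi, yi in zip(w, X, y)
                                     if h(xi) != yi))
        err = sum(wi for wi, xi, yi in zip(w, X, y) if best(xi) != yi)
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, best))
        # Upweight misclassified samples, then renormalize.
        w = [wi * math.exp(-alpha * yi * best(xi))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

X = [-2.0, -1.0, 1.0, 2.0]
y = [-1, -1, 1, 1]
stumps = [lambda x: 1 if x > 0 else -1,
          lambda x: 1 if x > 1.5 else -1]
predict = ada_boost(X, y, stumps)
```

In the paper's setting, each weak learner would itself be a small SVM trained on the reweighted sample rather than a fixed stump.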
36. Apple creates a secret AI lab in Switzerland and recruits Google experts
- Published
- 2024
37. New breakthrough for artificial intelligence to think like a human: begins to generalize from what it has learned
- Published
- 2023
38. Experiment makes a machine relate concepts as humans do
- Published
- 2023
39. Surprising finding: experiment makes a machine relate concepts as humans do
- Published
- 2023
40. Artificial intelligence and stem cells help identify Parkinson's disease subtypes
- Published
- 2023