66 results for "Asdrúbal López Chau"
Search Results
2. Impacto del sistema para la enseñanza y traducción de la lengua de señas mexicana UAEMex en instituciones públicas
- Author
-
Rafael Rojas Hernández, Elvira Ivone González Jaimes, Valentín Trujillo Mora, Asdrúbal López Chau, and Carlos Omar González Morán
- Subjects
General Medicine - Abstract
The objective is to test and measure the impact of the Sistema para la Enseñanza y Traducción de la Lengua de Señas Mexicana UAEMex software, built with artificial intelligence to facilitate communication with people with hearing or speech disabilities. A quantitative, exploratory and descriptive method was used to test and measure the impact on teaching-learning and translation levels in three experimental groups of staff from three public institutions dedicated to serving people with hearing or speech disabilities, with a longitudinal design given the measurements across the three teaching-learning modules and the three-month follow-up with the SWOT (FODA) questionnaire. Learning and translation results across the three modules were 80% at normal speed and 73% at fast speed. In the applicability assessment carried out at three months by the staff, internal strengths scored 100%, opportunities 90%, weaknesses 68.33%, and threats 72.5%. It can therefore be said that the benefits and achievements that users could verify in their working life are very high, while the opportunities for improvement are medium-low; these are the areas to work on to perfect the software.
- Published
- 2023
- Full Text
- View/download PDF
3. Identificación de acoso laboral en docentes de educación superior basada en respuestas de satisfacción en el trabajo
- Author
-
Asdrúbal López Chau and J. Patricia Muñoz-Chávez
- Subjects
Education - Abstract
In this paper, using social exchange theory as the theoretical basis, we propose identifying the presence of workplace mobbing in university professors through support vector machines and the application of an instrument that measures job satisfaction, instead of assessing the level of mobbing explicitly. The sample consisted of 248 professors from four public universities in Mexico. We obtained the following results: devaluation of work is the most frequent type of harassment, while personal mobbing is the least frequent. The RBF kernel is the best option for predicting mobbing in the dimensions of work overload, personal mobbing, and devaluation of work; the polynomial kernel is the best for organizational mobbing. The classification accuracy of the models is above 91%, with an F-score of 0.93, both in the worst case. Given the models' performance, workplace mobbing can be predicted accurately.
- Published
- 2022
- Full Text
- View/download PDF
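As a rough illustration of the approach described in the abstract above (predicting workplace mobbing from job-satisfaction responses with an RBF-kernel support vector machine), the following sketch trains an SVM on synthetic data. The features, labels, and data generation are assumptions for illustration only, not the authors' instrument or data:

```python
# Sketch: RBF-kernel SVM classifying a binary "mobbing present" label
# from job-satisfaction scores. All data here is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 248  # sample size reported in the abstract
X = rng.normal(size=(n, 4))                    # four satisfaction dimensions (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] < 0).astype(int)  # synthetic "harassment" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
f1 = f1_score(y_te, model.predict(X_te))
print(f"F1 on held-out data: {f1:.2f}")
```

In the paper's setup, the kernel choice (RBF versus polynomial) is what varies across harassment dimensions; here that would be the `kernel` argument to `SVC`.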
4. Efficient nucleus segmentation of white blood cells mimicking the human perception of color
- Author
-
Jair Cervantes, Matías Alvarado, Farid García-Lamont, and Asdrúbal López-Chau
- Subjects
Computer science, General Chemical Engineering, General Chemistry, Human Factors and Ergonomics, Color recognition, Image segmentation, Color space, Perception, Segmentation, Computer vision, Artificial intelligence, Chromaticity, Nucleus - Published
- 2021
- Full Text
- View/download PDF
5. Retrieval of Weighted Lexicons Based on Supervised Learning Method
- Author
-
Asdrúbal López-Chau, Rafael Rojas-Hernández, David Valle-Cruz, and Valentin Trujillo-Mora
- Published
- 2023
- Full Text
- View/download PDF
6. Affective Polarization in the U.S.
- Author
-
Asdrúbal López Chau, Rodrigo Sandoval Almazan, and David Valle-Cruz
- Abstract
Affective polarization is a phenomenon that has invaded the political arena, empowered by social networks. In this chapter, the authors analyze the Capitol riot posts on Twitter. To achieve this, they use affective computing, introducing the multi-emotional charge combined with statistical analysis based on Student's t-test and Welch's t-test. The research questions guiding this study are: How do messages on social media platforms contribute to incitement? Do messages with a negative emotional charge help legitimize the Capitol protest? Findings identify the significant influence of Donald Trump on Twitter during the Capitol riot. Moreover, the data analysis identifies positive and negative emotions towards Donald Trump, as well as similarities between the emotions shown by Trump and those shown by the audience.
- Published
- 2022
- Full Text
- View/download PDF
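The statistical side of the chapter above rests on Student's t-test and Welch's t-test. A minimal sketch of how the two tests compare mean emotional charge between two groups, assuming SciPy; the scores below are synthetic, not the study's data:

```python
# Sketch: Student's t-test (equal variances) vs Welch's t-test (unequal
# variances) on two groups of synthetic emotional-charge scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
charge_trump = rng.normal(loc=0.4, scale=0.2, size=200)     # synthetic scores
charge_audience = rng.normal(loc=0.1, scale=0.3, size=500)  # synthetic scores

t_student = stats.ttest_ind(charge_trump, charge_audience, equal_var=True)
t_welch = stats.ttest_ind(charge_trump, charge_audience, equal_var=False)
print(f"Student t p-value: {t_student.pvalue:.3g}")
print(f"Welch   t p-value: {t_welch.pvalue:.3g}")
```

Welch's variant is the safer default when the two groups differ in size and variance, as tweet samples from a single account and from a broad audience typically do.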
7. Impacto de los tutoriales para la prevención y combate en contra el hostigamiento y acoso sexual cibernético
- Author
-
Elvira Ivone González Jaimes, Asdrúbal López Chau, Jorge Bautista López, and Valentín Trujillo Mora
- Abstract
The objective of the research is to measure the impact of the tutorials for preventing and combating cyber sexual harassment and stalking, a crime that has grown exponentially due to the misuse of digital media. The tutorials deliver messages on preventive behavior, safety, identification of the perpetrator, personal handling of the incident, and reporting the crime to the authorities. A quantitative, cross-sectional method with a descriptive analysis of a satisfaction questionnaire was applied to a sample of 4,104 participants. The results were as follows: men prefer to view the message on identifying the perpetrator, to avoid being confused with one (61%), and on reporting the crime through electronic media (51%). Women request personalized, professional and comprehensive attention with prompt and expeditious justice (68%) to end impunity and eradicate the misogynistic culture. The study concludes with a good level of satisfaction with the information provided by the tutorials (88%).
- Published
- 2021
- Full Text
- View/download PDF
8. Plataformas de Financiamiento Fintech
- Author
-
Laura Angélica Décaro Santiago, María Guadalupe Soriano Hernández, Asdrúbal López Chau, and Juan Pedro Benítez Guadarrama
- Abstract
Fintech Financing Platforms (PFFs) have revolutionized the way credit is granted worldwide, generating great interest in their study. In this context, a review article was written whose objective is to describe the current state of scientific research on PFFs, by retrieving high-impact scientific documents and identifying prolific authors on the subject. Among the main findings, a wave of research with empirical evidence has begun, although quantitative evidence is scarce and limited to the analysis of a small number of PFFs from the United States and China. This brings to light differences between these two countries, centered on the type of data exposed to investors, the investors themselves, the homogeneity among PFFs, and the research approach. Likewise, more articles focus on PFFs that finance consumption; however, some empirical studies, mainly in the European region, examine platforms aimed at financing companies or projects. Finally, researchers emphasize that the variables with the greatest impact on the growth of PFFs are technological incorporation and a privileged position regarding regulation.
- Published
- 2021
- Full Text
- View/download PDF
9. A Vertical Fragmentation Method for Multimedia Databases Considering Content-Based Queries
- Author
-
Aldo Osmar Ortiz-Ballona, Lisbeth Rodríguez-Mazahua, Asdrúbal López-Chau, Felipe Castro-Medina, María Antonieta Abud-Figueroa, and Nidia Rodríguez-Mazahua
- Published
- 2022
- Full Text
- View/download PDF
10. Strengthening of Financial Profiles in Latin America
- Author
-
Asdrúbal López-Chau, Laura Angélica Décaro-Santiago, Valentin Trujillo-Mora, Felipe Casto-Medina, Elvira Ivone González Jaimes, Rafael Rojas-Hernández, and Maria Guadalupe Soriano-Hernández
- Abstract
The financial profile of natural and legal persons allows financial institutions and financial services companies to identify the level of default risk of credit applicants. Maintaining a healthy financial profile derives from a positive history and status, the result of good financial decisions. In Latin America, the vast majority of people pay little attention to strengthening this type of profile, and it is a subject rarely addressed on the most popular social media platforms. In this chapter, a search was made on the Twitter social network for the opinions of Latin Americans regarding the financial profile, finding that publications on this topic receive very few responses. Therefore, a web platform is proposed to provide advice on this important issue. The general design of a module for text analysis using sentiment analysis and machine learning techniques is also proposed.
- Published
- 2022
- Full Text
- View/download PDF
11. Analysis of the Moodle Learning Management System and user comfort levels
- Author
-
Asdrúbal López Chau and Elvira Ivone González Jaimes
- Subjects
Virtual education, learning experiences, distance education, meaningful learning, face-to-face education, workflow, learning management, virtual environments, mathematics education, psychology, education - Abstract
This study analyzes the workflow experiences in the teaching-learning process of teachers and students enrolled in blended (face-to-face/distance) and distance education systems. The main objective is to evaluate the Moodle Learning Management System with the systematized Brief Inventory of Experiences. We measure and compare the flow and comfort experiences of a population of 8140 participants, divided into two groups: 320 teachers and 7820 students. We can conclude that there is a significant difference in the p
- Published
- 2021
- Full Text
- View/download PDF
12. Sentiment Analysis in Crisis Situations for Better Connected Government
- Author
-
Rodrigo Sandoval-Almazan, David Valle-Cruz, and Asdrúbal López Chau
- Subjects
Government, Political economy, Sentiment analysis, Information systems, Artificial intelligence & image processing, Business - Abstract
One of the pillars of connected government is citizen centricity: an approach in which citizen participation is essential. In Mexico, social networks are currently one of the most important means by which citizens express their needs and provide opinions to the government. The goal of this chapter is to contribute to citizen centricity by adapting the methodology of sentiment analysis of social media posts to an expanded version for crisis situations. The main difference in this approach from the normally accepted one is that instead of using pre-defined classes (positive and negative) for sentiments, the authors first determined the different data categories and then applied them to the classic process of sentiment analysis. This approach was tested using posts on Mexico's earthquake in 2017. They found that needs, demands, and claims made in the posts reflect sentiments in a better way, and this can help to improve the government-citizen connection.
- Published
- 2022
- Full Text
- View/download PDF
13. Comparative Analysis of Decision Tree Algorithms for Data Warehouse Fragmentation*
- Author
-
Nidia Rodríguez Mazahua, Lisbeth Rodríguez Mazahua, Asdrúbal López Chau, and Giner Alor Hernández
- Subjects
Information systems, Artificial intelligence & image processing, General Medicine - Abstract
One of the main problems faced by Data Warehouse designers is fragmentation. Several studies have proposed data mining-based horizontal fragmentation methods. However, to the best of our knowledge, no horizontal fragmentation technique uses a decision tree. This paper presents an analysis of different decision tree algorithms to select the best one for implementing the fragmentation method. The analysis was performed with version 3.9.4 of Weka, considering four evaluation metrics (Precision, ROC Area, Recall and F-measure) on different data sets selected using the Star Schema Benchmark. The results showed that the two best algorithms were J48 and Random Forest in most cases; nevertheless, J48 was selected because it is more efficient in building the model.
- Published
- 2020
- Full Text
- View/download PDF
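The comparison above was carried out in Weka. A rough scikit-learn analogue of the same exercise, computing the four reported metrics (Precision, ROC Area, Recall, F-measure) for a single decision tree versus Random Forest, might look like the sketch below; the data is synthetic, not the Star Schema Benchmark queries:

```python
# Sketch: compare a decision tree (analogous to Weka's J48/C4.5) against
# Random Forest on the four metrics named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("random forest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)[:, 1]
    print(f"{name}: precision={precision_score(y_te, pred):.2f} "
          f"recall={recall_score(y_te, pred):.2f} "
          f"F1={f1_score(y_te, pred):.2f} "
          f"ROC AUC={roc_auc_score(y_te, proba):.2f}")
```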
14. FRAGMENT: A Web Application for Database Fragmentation, Allocation and Replication over a Cloud Environment
- Author
-
Asdrúbal López-Chau, María Antonieta Abud-Figueroa, Lisbeth Rodríguez-Mazahua, Felipe Castro-Medina, and Giner Alor-Hernández
- Subjects
Distributed computing environment, General Computer Science, Database, Distributed database, Relational database, Fragmentation (computing), Networking & telecommunications, Cloud computing, Metadata, Centralized database, Web application, Electrical and Electronic Engineering - Abstract
Fragmentation, allocation and replication are techniques widely used in relational databases to improve the performance of operations and reduce their cost in distributed environments. This article analyzes different methods for database fragmentation, allocation and replication and presents FRAGMENT, a Web application that adopts the technique selected in the analysis stage because it combines fragmentation and replication, applies to a cloud environment, is easy to implement, focuses on improving the performance of operations executed on the database, documents everything necessary for its implementation, and is based on a cost model. FRAGMENT analyzes the operations performed on any table of a database, proposes fragmentation schemes based on the most expensive attributes, and allocates and replicates a scheme chosen by the user in a distributed cloud environment. This work also addresses a common problem in fragmentation methods, overlapping fragments, and provides an algorithm to handle it; the algorithm yields the predicates that define each fragment in a distributed environment. To validate the implemented technique, a second web application is presented, dedicated to simulating operations on sites and producing a log file for the main application. Experiments with the TPC-E benchmark showed lower response times for queries executed against the distributed database generated by FRAGMENT compared with a centralized database.
- Published
- 2020
- Full Text
- View/download PDF
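The abstract above describes proposing fragmentation schemes based on the most expensive attributes. A toy reconstruction of that idea (our assumption about the mechanics, not the authors' code): accumulate a cost per predicate attribute from a query log and rank the attributes as fragmentation candidates:

```python
# Sketch: rank predicate attributes by accumulated operation cost.
# The query log and attribute names are invented for illustration.
from collections import Counter

# (attribute used in the predicate, cost of the operation) — synthetic log
query_log = [
    ("customer_id", 120), ("trade_date", 340), ("customer_id", 80),
    ("symbol", 50), ("trade_date", 410), ("symbol", 20),
]

cost_per_attribute = Counter()
for attribute, cost in query_log:
    cost_per_attribute[attribute] += cost

# Most expensive attributes first: candidates for defining the fragments.
candidates = [a for a, _ in cost_per_attribute.most_common()]
print(candidates)  # → ['trade_date', 'customer_id', 'symbol']
```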
15. Emotion-Aware Explainable Artificial Intelligence for Personality, Emotion, and Mood Simulation
- Author
-
David Valle-Cruz, Asdrúbal López-Chau, Marco Antonio Ramos-Corchado, and Vianney Muñoz-Jiménez
- Subjects
General Computer Science - Published
- 2022
- Full Text
- View/download PDF
16. Teleworker Experiences in #COVID-19
- Author
-
Rigoberto García-Contreras, J. Patricia Muñoz-Chávez, David Valle-Cruz, and Asdrúbal López-Chau
- Abstract
The COVID-19 pandemic has become a critical and disruptive event that has substantially changed the way people live and work. Although several studies have examined the effects of remote work on organizational outcomes and behaviors, only a few have inquired into how its opportune implementation impacts aggregate emotions over time. This chapter conducts a sentiment analysis of public reactions on Twitter about telework during the pandemic period. The results showed fluctuations in emotional polarity: the higher positive charge of the early pandemic scenarios weakened over time, while the negative polarity of emotions increased. Fear, sadness, and anger were the emotions that increased the most during the pandemic. Knowledge of people's sentiments regarding telework is important to complement organizational research and to inform the framework for developing efficient telework implementation strategies.
- Published
- 2022
- Full Text
- View/download PDF
17. Review on the Application of Lexicon-Based Political Sentiment Analysis in Social Media
- Author
-
David Valle-Cruz, Asdrúbal López-Chau, and Rodrigo Sandoval-Almazán
- Abstract
This chapter presents an analysis of the application of lexicon-based political sentiment analysis in social media. The aim is to identify the lexicons most frequently used in political sentiment analysis, their results, similarities, and differences. For this, the authors conducted a systematic literature review based on the PRISMA methodology. The Afinn, NRC, and SenticNet lexicons are tested and combined for the analysis of data from the 2020 U.S. presidential campaign. Findings show that political sentiment analysis is a new field, studied for only 10 years. Political sentiment analysis could generate benefits in understanding problems such as political polarization, discourse analysis, politician influence, candidate profiling, and improving government-citizen interaction, among other problems in the public sphere, enhanced by the combination of lexicons and multimodal analysis. The authors conclude that polarity was one of the critical dimensions for finding variations in the behavior and polarity of sentiments. Limitations and future work are also presented.
- Published
- 2022
- Full Text
- View/download PDF
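A minimal sketch of the lexicon-based scoring the chapter above reviews; the four-word lexicon is a toy stand-in for AFINN/NRC/SenticNet, which assign valence scores to thousands of terms:

```python
# Sketch: lexicon-based polarity = sum of valence scores of matched words.
TOY_LEXICON = {"great": 3, "win": 2, "sad": -2, "fraud": -3}

def polarity(text: str) -> int:
    """Sum the valence of every lexicon word found in the text."""
    return sum(TOY_LEXICON.get(tok, 0) for tok in text.lower().split())

print(polarity("A great win for the country"))   # → 5 (positive tweet)
print(polarity("This election is a sad fraud"))  # → -5 (negative tweet)
```

Real lexicons differ in scale and coverage (AFINN uses integer valences, NRC maps words to discrete emotions), which is why the reviewed works compare and combine them.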
18. Public Budget Simulations with Machine Learning and Synthetic Data: Some Challenges and Lessons from the Mexican Case
- Author
-
David Valle-Cruz, Vanessa Fernandez-Cortez, Asdrúbal López-Chau, and Rafael Rojas-Hernández
- Published
- 2022
- Full Text
- View/download PDF
19. Convolutional Neural Networks Applied to Emotion Analysis in Texts: Experimentation from the Mexican Context
- Author
-
Juan-Carlos Garduño-Miralrio, David Valle-Cruz, Asdrúbal López-Chau, and Rafael Rojas-Hernández
- Published
- 2022
- Full Text
- View/download PDF
20. A Brief Review of Vertical Fragmentation Methods Considering Multimedia Databases and Content-Based Queries
- Author
-
Felipe Castro-Medina, Lisbeth Rodríguez-Mazahua, Asdrúbal López-Chau, Celia Romero-Torres, Aldo Osmar Ortiz-Ballona, and María Antonieta Abud-Figueroa
- Subjects
Database, Multimedia, Distributed database, Relational database, Multimedia database, Query optimization, Fragmentation, Content-based queries, Web application, Image retrieval - Abstract
Vertical fragmentation is a distributed database design technique widely used in relational databases to reduce query execution costs. This technique has also been applied to multimedia databases to take advantage of its benefits in query optimization. Content-based retrieval is essential in multimedia databases to provide relevant data to the user. This paper presents a review of the literature on vertical fragmentation methods to determine whether they consider multimedia data and content-based queries, are easy to implement, are complete (i.e., the paper includes all the information necessary to implement the method), and provide a cost model for the evaluation of their results. To meet this objective, we analyzed and classified 37 papers on vertical fragmentation and/or content-based image retrieval. Finally, we selected the best method from our comparative analysis and presented a web application architecture for applying vertical fragmentation in multimedia databases to optimize content-based queries.
- Published
- 2021
- Full Text
- View/download PDF
21. An Improvement to FRAGMENT: A Web Application for Database Fragmentation, Allocation, and Replication Over a Cloud Environment
- Author
-
Cuauhtémoc Sánchez-Ramírez, Felipe Castro-Medina, Ulises Juárez-Martínez, Lisbeth Rodríguez-Mazahua, Giner Alor-Hernández, and Asdrúbal López-Chau
- Subjects
Distributed computing environment, Database, Computer science, Cloud computing, Replication (computing), Fragmentation, Benchmarking, Web application - Abstract
Fragmentation plays a very important role in databases, since it achieves an adequate design in distributed environments such as the cloud and yields benefits in terms of the execution cost of read and write operations. This work presents an improvement to the design of the overlapping-fragmentation algorithm shown in previous works, which is part of a method that also performs allocation and replication. The improvement consists of ordering the predicates by cost and obtaining each predicate while accepting that fragments will overlap; fragments with a higher cost are kept more intact than those with a lower cost. Experiments with the TPC-E benchmark adapted to a distributed environment demonstrate that our enhanced approach achieves a significant reduction in query response time.
- Published
- 2021
- Full Text
- View/download PDF
22. How much do Twitter posts affect voters? Analysis of the multi-emotional charge with affective computing in political campaigns
- Author
-
Rodrigo Sandoval-Almazan, Asdrúbal López-Chau, and David Valle-Cruz
- Subjects
Government, Presidential system, Sentiment analysis, Communication & media studies, Predictive analytics, Affect (psychology), Politics, Social media, Affective computing, Psychology - Abstract
Artificial intelligence has been applied to different sectors of government, from data analysis to predictive analytics, as well as policing, combating the COVID-19 pandemic, and political election campaigns. In the academic field, the exploitation and understanding of data generated on social media have mostly focused on unimodal, one-dimensional sentiment analysis. This research proposes an analysis of the emotional charge for the 2020 U.S. presidential elections, based on a hybrid approach that combines affective computing and classic statistical analysis. We analyze the multi-emotional charge of candidates and voters, as well as the potential influence of the candidates' emotions on the voters. Through this analysis it is possible to determine the degree of agreement between candidates and voters. Future research is proposed for the area of affective computing in political campaigns.
- Published
- 2021
- Full Text
- View/download PDF
23. Análisis de la utilidad del Bastón Blanco Inteligente UAEM para personas con discapacidad visual
- Author
-
Valentín Trujillo Mora, Elvira Ivone González Jaimes, Asdrúbal López Chau, and Jorge Bautista López
- Subjects
White cane, Severe visual impairment, Visually impaired, Human–computer interaction, Global Positioning System, Exploratory research, Statistical analysis, Computer science - Abstract
The objective of this research is to test the usefulness of the UAEM Smart White Cane (Bastón Blanco Inteligente UAEM) prototype, equipped with ultrasonic sensors (alarm and vibration triggers) and a GPS (Global Positioning System), for users with visual impairment. The technology included in the UAEM Smart White Cane gives the visually impaired user several advantages for extending their mobility safely, which ultimately improves their quality of life. However, its proper use requires specialized training that helps the user obtain the utility the prototype promises. To test these benefits, an exploratory study analyzed the training and usage experiences of 20 adult participants with severe visual impairment or blindness. The statistical analysis was descriptive and recorded user satisfaction with the physical characteristics and the benefits offered by the prototype. The results showed the association between the vibrations, the sounds, and the different messages emitted (various obstacles at short or long distances). The UAEM Smart White Cane with ultrasonic sensors and GPS is a prototype that supports safe, autonomous mobility and raises the user's quality of life: the device is light and foldable, its material is resistant, and its sound and vibration attachments simulate a physical map at a low price.
- Published
- 2021
- Full Text
- View/download PDF
24. Comparative Analysis of Decision Tree Algorithms for Data Warehouse Fragmentation
- Author
-
Nidia Rodríguez-Mazahua, Lisbeth Rodríguez-Mazahua, Asdrúbal López-Chau, Giner Alor-Hernández, and S. Gustavo Peláez-Camarena
- Subjects
Data set, C4.5 algorithm, Computer science, Star schema, Information gain ratio, Decision tree, Feature selection, Tuple, Algorithm, Data warehouse - Abstract
One of the main problems faced by Data Warehouse (DW) designers is fragmentation. Several studies have proposed data mining-based horizontal fragmentation methods, which focus on optimizing query response time and execution cost to make the DW more efficient. However, to the best of our knowledge, no horizontal fragmentation technique uses a decision tree to carry out fragmentation. Given the importance of decision trees in classification, since they obtain pure partitions (subsets of tuples) of a data set using measures such as Information Gain, Gain Ratio and the Gini Index, the aim of this work is to use decision trees for DW fragmentation. This chapter presents the analysis of different decision tree algorithms to select the best one for implementing the fragmentation method. The analysis was performed with version 3.9.4 of Weka, considering four evaluation metrics (Precision, ROC Area, Recall, and F-measure) on different data sets selected using the SSB (Star Schema Benchmark). Several experiments were carried out using two attribute selection methods, Best First and Greedy Stepwise; the data sets were pre-processed with the Class Conditional Probabilities filter, and two data sets (24 and 50 queries) were analyzed with this filter to observe the behavior of the decision tree algorithms on each. The analysis determined that for the 24-query data set the best algorithm was RandomTree, since it won under two methods. For the 50-query data set, the best decision tree algorithms were LMT and RandomForest, because they obtained the best performance under all the methods tested. Finally, J48 was the selected algorithm when neither an attribute selection method nor the Class Conditional Probabilities filter is used; if only the latter is applied to the data set, the best performance is given by the LMT algorithm.
- Published
- 2021
- Full Text
- View/download PDF
25. Application of Dynamic Fragmentation Methods in Multimedia Databases: A Review
- Author
-
Jair Cervantes, Felipe Castro-Medina, Isaac Machorro-Cano, Asdrúbal López-Chau, Giner Alor-Hernández, and Lisbeth Rodríguez-Mazahua
- Subjects
Computer science, literature review, hybrid fragmentation, cost model, database administration, dynamic fragmentation, multimedia fragmentation, vertical fragmentation, horizontal fragmentation, Multimedia, Database, Fragmentation (computing), Workflow, Review - Abstract
Fragmentation is a design technique widely used in multimedia databases because it produces substantial benefits, reducing response times and lowering the execution cost of each operation performed. Multimedia databases contain data whose main characteristic is their large size; therefore, database administrators face a challenge of great importance, since they must contemplate the different qualities of non-trivial data. These databases undergo changes in their access patterns over time. Different fragmentation techniques in related studies present adequate workflows; however, some do not contemplate changes in access patterns. This paper provides an in-depth review of the literature on dynamic fragmentation of multimedia databases to identify the main challenges, the technologies employed, the types of fragmentation used, and the characteristics of the cost models. The review offers valuable information for database administrators, showing the characteristics essential to perform proper fragmentation and to improve the performance of fragmentation schemes. Reducing the cost of fragmentation methods is one of the most desired properties; to this end, the reviewed works include cost models covering different qualities. This analysis presents the set of characteristics used in the cost model of each work, to facilitate the creation of a new cost model that includes the most used qualities. In addition, the different data sets or benchmarks used in the testing stage of each analyzed work are presented.
- Published
- 2020
26. Impression analysis of trending topics in Twitter with classification algorithms
- Author
-
David Valle-Cruz, Asdrúbal López-Chau, and Rodrigo Sandoval-Almazan
- Subjects
Information retrieval, Computer science, Sentiment analysis, Impression analysis, Statistical classification, Categorization, Mass media - Abstract
In some situations, a simple categorization of emotions, or even the use of universal expressions of emotions, is unsuitable for properly identifying sentiments in posts. The main goal of this paper is to analyze impressions in Twitter messages from the 19S Mexican earthquake of 2017 through machine learning techniques, specifically classification algorithms. To identify impressions, we applied sentiment analysis based on supervised methods, and we identified a customized list of terms that we call impressions, which reflects the nature of the tweets related to the event of study. Our proposed impression analysis is useful for understanding Twitter messages during different events, since impressions adapt to each situation and context, based on emotional frameworks. We found that Twitter is useful for confirming or disproving the information disseminated by the mass media, and mainly for asking for help. Analyzing this kind of data in real time will be useful for decision-making. The contribution of this paper is to fill a gap in the sentiment analysis area with the automatic identification of eleven impressions for disaster events on Twitter using machine learning techniques. This method has been called impression analysis.
- Published
- 2020
- Full Text
- View/download PDF
27. Does Twitter Affect Stock Market Decisions? Financial Sentiment Analysis in Pandemic Seasons: A Comparative Study of H1N1 and COVID-19
- Author
-
David Valle-Cruz, Vanessa Fernandez-Cortez, Rodrigo Sandoval-Almazan, and Asdrúbal López-Chau
- Subjects
Coronavirus disease 2019 (COVID-19) ,Financial economics ,Sentiment analysis ,Pandemic ,Stock market ,Business ,Affect (psychology) - Abstract
Background: Investors are always playing on the fears and desires of buyers and sellers, and stock exchange markets are no exception. Financial sentiment analysis allows us to understand the effect of reactions and emotions on social media on the stock market. In this research, we analyze Twitter data and financial indices to answer the question: how does the polarity generated by posts on Twitter influence the behavior of financial indices in pandemic seasons? Methods: The study is based on sentiment analysis of influential Twitter accounts in this field and its relationship with the behavior of important financial indices. To achieve this, we tested four lexicons for detecting polarity on Twitter. Results: Our findings show that the markets reacted 6 to 13 days after information was shared and disseminated on Twitter during the COVID-19 season, and 1 to 2 days after during the H1N1 season. Furthermore, the lexicons that obtained the best results for sentiment analysis on Twitter were S140 and AFINN. Conclusions: Financial sentiment analysis is an important technique for forecasting the stock market, and polarity is the most widely used measure in the financial area. There is a relationship between polarity on Twitter and the behavior of financial indices. The most influential Twitter accounts during the pandemic seasons were The New York Times, Bloomberg, CNN News, and Investing, showing a very strong relationship between sentiment on Twitter and stock market behavior.
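The lexicon-based polarity detection the study relies on can be sketched with a minimal scorer. The word scores below are a tiny made-up sample for illustration, not the actual S140 or AFINN word lists:

```python
# Minimal lexicon-based polarity scorer (illustrative lexicon; the real
# S140 and AFINN lists contain thousands of scored words).
LEXICON = {"gain": 2, "rally": 3, "recovery": 2,
           "crash": -4, "fear": -2, "loss": -3}

def polarity(text):
    """Sum the scores of known words; >0 is positive, <0 negative, 0 neutral."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

print(polarity("markets rally on recovery hopes"))    # 5
print(polarity("fear of another crash drives loss"))  # -9
```

A real pipeline would aggregate such scores per account and per day before correlating them with index movements.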
- Published
- 2020
- Full Text
- View/download PDF
28. SCM-IoT: An Approach for Internet of Things Services Integration and Coordination
- Author
-
Isaac Machorro-Cano, José Oscar Olmedo-Aguirre, Giner Alor-Hernández, Lisbeth Rodríguez-Mazahua, José Luis Sánchez-Cervantes, and Asdrúbal López-Chau
- Subjects
Fluid Flow and Transfer Processes ,colored Petri nets ,intelligent systems ,internet of things ,services interoperability ,services composition ,Process Chemistry and Technology ,General Engineering ,General Materials Science ,Instrumentation ,Computer Science Applications - Abstract
Today, new applications demand an internet of things (IoT) infrastructure with greater intelligence in our everyday devices. Among the salient features that characterize intelligent IoT systems are interoperability and dynamism. While service-oriented architectures (SOA) offer a well-developed, standardized architecture and protocols for interoperability, the question of whether SOA offers enough dynamism to merge IoT with artificial intelligence (AI) is still in its early stages. This paper proposes an SOA model, called SCM-IoT (service composition model for IoT), for incorporating AI into IoT systems, coordinating them through a mediator that offers services for the storage, production, discovery, and notification of relevant data for client applications. The model allows IoT systems to be developed incrementally from three perspectives: a conceptual model, a platform-independent computational model, and a platform-dependent computational model. Finally, as a case study, a home automation IoT application is developed in SCM-IoT to analyze the characteristics and benefits of the proposed approach.
- Published
- 2022
- Full Text
- View/download PDF
29. A CBIR System for the Recognition of Agricultural Machinery
- Author
-
Silvestre Gustavo Peláez-Camarena, Isaac Machorro-Cano, Lisbeth Rodríguez-Mazahua, Asdrúbal López-Chau, María Antonieta Abud-Figueroa, and Rodolfo Rojas Ruiz
- Subjects
Agricultural machinery ,Computer science ,business.industry ,General Medicine ,Agricultural engineering ,business - Published
- 2018
- Full Text
- View/download PDF
30. Automatic computing of number of clusters for color image segmentation employing fuzzy c-means by extracting chromaticity features of colors
- Author
-
Asdrúbal López-Chau, Arturo Yee-Rendon, Farid García-Lamont, and Jair Cervantes
- Subjects
Pixel ,Artificial neural network ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,Image segmentation ,Color space ,Artificial Intelligence ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Segmentation ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Variation of information ,Chromaticity ,business - Abstract
In this paper we introduce a method for color image segmentation that automatically computes the number of clusters into which the data (pixels) are divided using fuzzy c-means. In several works the number of clusters is defined by the user. In others it is computed by obtaining the number of dominant colors, determined with unsupervised neural networks (NNs) trained on the image's colors; the number of dominant colors is defined by the number of the most activated neurons. The drawbacks of this approach are: (1) the NN must be trained every time a new image is given, and (2) despite employing different color spaces, the intensity data of colors are used, so the undesired effects of non-uniform illumination may affect the computation of the number of dominant colors. Our proposal consists of processing the images with an unsupervised NN previously trained with chromaticity samples of different colors; the number of neurons with the highest activation occurrences defines the number of clusters into which the image is segmented. Because the NN is trained with chromatic color data, it can be employed to process any image without retraining, and our approach is, to some extent, robust to non-uniform illumination. We perform experiments on the images of the Berkeley segmentation database, using competitive NNs and self-organizing maps; we compute and compare the quantitative evaluation of the segmented images against related works using the probabilistic Rand index and the variation of information metrics.
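The fuzzy c-means updates at the core of the segmentation can be sketched on scalar data. This is a minimal sketch of the standard algorithm only: the paper's actual contribution, deriving the number of clusters from a chromaticity-trained network, is not reproduced here, so `c` is fixed by hand:

```python
def fcm_1d(xs, c, m=2.0, iters=100):
    """Standard fuzzy c-means on scalar data: alternate the membership update
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)) and the weighted-mean center
    update. Returns the cluster centers, sorted."""
    xs_sorted = sorted(xs)
    # deterministic init: spread the initial centers across the data range
    centers = [xs_sorted[round(k * (len(xs) - 1) / (c - 1))] for k in range(c)]
    for _ in range(iters):
        U = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centers]  # avoid division by zero
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # center update: mean of all points weighted by membership^m
        centers = [sum(U[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(U[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return sorted(centers)

print(fcm_1d([0.9, 1.0, 1.1, 4.9, 5.0, 5.1], c=2))  # centers near 1.0 and 5.0
```

In the image setting, `xs` would be per-pixel chromaticity values rather than raw scalars.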
- Published
- 2018
- Full Text
- View/download PDF
31. Preliminary Results of an Analysis Using Association Rules to Find Relations between Medical Opinions About the non-Realization of Autopsies in a Mexican Hospital
- Author
-
Giner Alor-Hernández, Elayne Rubio Delgado, Silvestre Gustavo Peláez-Camarena, María Antonieta Abud-Figueroa, Lisbeth Rodríguez-Mazahua, Asdrúbal López-Chau, and José Antonio Palet Guzmán
- Subjects
010504 meteorology & atmospheric sciences ,Association rule learning ,Computer science ,020209 energy ,0202 electrical engineering, electronic engineering, information engineering ,Operations management ,02 engineering and technology ,General Medicine ,Set (psychology) ,01 natural sciences ,Realization (probability) ,0105 earth and related environmental sciences - Abstract
In recent years, a significant reduction in the number of autopsies performed in hospitals around the world has been observed. Since physicians are the people closest to this problem, they can offer information that helps clarify why the practice has declined. In this paper, data mining techniques are applied to analyze medical opinions regarding the performance of autopsies in a hospital in Veracruz, Mexico. The opinions were collected through surveys applied to 85 physicians of the hospital. The result is a model, represented by a set of rules, that suggests some of the factors related to the decrease in the number of autopsies in the hospital, according to the survey respondents.
- Published
- 2017
- Full Text
- View/download PDF
32. Automatic Classification of Lobed Simple and Unlobed Simple Leaves for Plant Identification
- Author
-
Valentin Trujillo-Mora, Rafael Rojas-Hernández, Juan Carlos Flores-Bastida, and Asdrúbal López-Chau
- Subjects
Plant identification ,Simple (abstract algebra) ,General Medicine ,Biological system ,Mathematics - Published
- 2017
- Full Text
- View/download PDF
33. Aplicación web e-learning multiplataforma para recolección de datos de usuarios y retroalimentación automática basada en técnicas estadísticas
- Author
-
Beatriz Alejandra Olivares Zepahua, Celia Romero Torres, Samuel Montiel Santos, Asdrúbal López-Chau, and Luis Ángel Reyes Hernández
- Subjects
General Medicine - Published
- 2017
- Full Text
- View/download PDF
34. Detection of Compound Leaves for Plant Identification
- Author
-
Asdrúbal López Chau, Rafael Rojas Hernandez, Lisbeth Rodríguez Mazahua, Farid García Lamont, Jair Cervantes Canales, and Valentín Trujillo Mora
- Subjects
021110 strategic, defence & security studies ,General Computer Science ,business.industry ,Binary image ,Feature extraction ,0211 other engineering and technologies ,Pattern recognition ,02 engineering and technology ,Data set ,Plant identification ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Mathematics - Abstract
Automatic plant identification has become an important topic in recent years. Most state-of-the-art methods for this purpose use leaf features to predict the species. Although there are many methods to extract different leaf features, just a few of them focus on discriminating between simple and compound leaves. In this work, we introduce a method to detect compound leaves. Our method uses concentric circles to explore the surface of the leaf and count the changes of color in binary images; the changes are then analyzed to detect compound leaves. The method correctly predicts more than 96% of the leaves in the Flavia data set. We also tested the method on some images of leaves available on the Internet, with 100% correctness. The information on whether a leaf is compound or not was used in experiments to observe whether it improves the performance of classifiers.
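A simplified reading of the concentric-circle idea, counting how often a single circle enters a foreground run of a binary leaf mask, can be sketched as follows. The sampling scheme and the synthetic two-leaflet image are illustrative assumptions, not the paper's exact procedure:

```python
import math

def crossings(img, cy, cx, radius, samples=360):
    """Count 0->1 transitions along a circle on a binary image (lists of 0/1).
    A compound leaf crosses more foreground runs on such circles than a
    simple leaf; points sampled outside the image count as background."""
    vals = []
    for s in range(samples):
        a = 2 * math.pi * s / samples
        y = int(round(cy + radius * math.sin(a)))
        x = int(round(cx + radius * math.cos(a)))
        inside = 0 <= y < len(img) and 0 <= x < len(img[0])
        vals.append(img[y][x] if inside else 0)
    # vals[i - 1] with i = 0 wraps to the last sample, so a run spanning
    # the 0-degree mark is counted once
    return sum(1 for i in range(samples) if vals[i - 1] == 0 and vals[i] == 1)

# synthetic 21x21 mask: two vertical "leaflets" at columns 5 and 15
img = [[1 if x in (5, 15) else 0 for x in range(21)] for y in range(21)]
print(crossings(img, cy=10, cx=10, radius=5))  # 2
```

A full detector would repeat the count over several radii and analyze the sequence of counts.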
- Published
- 2017
- Full Text
- View/download PDF
35. Sentiment Analysis of Twitter Data Through Machine Learning Techniques
- Author
-
David Valle-Cruz, Asdrúbal López-Chau, and Rodrigo Sandoval-Almazan
- Subjects
business.industry ,Computer science ,media_common.quotation_subject ,Sentiment analysis ,Supervised learning ,Cloud computing ,Machine learning ,computer.software_genre ,Sadness ,Support vector machine ,Naive Bayes classifier ,Happiness ,Social media ,Artificial intelligence ,business ,computer ,media_common - Abstract
Cloud computing is a revolutionary technology for businesses, governments, and citizens. Some examples of Software-as-a-Service (SaaS) offerings are banking apps, e-mail, blogs, online news, and social networks. In this chapter, we analyze data sets generated by trending topics on Twitter that emerged from Mexican citizens who interacted during the earthquake of September 19, 2017, using sentiment analysis and supervised learning based on Ekman's six-emotion model. We built three classifiers to determine the emotions of tweets that belong to the same topic. The classifiers with the best accuracy for predicting emotions were Naive Bayes and the support vector machine. We found that the most frequently predicted emotions were happiness, anger, and sadness, and that 6.5% of the predicted tweets were irrelevant. We provide some recommendations on the use of machine learning techniques in sentiment analysis. Our contribution is the expansion of the emotion range from three (negative, neutral, positive) to six, in order to provide more elements for understanding how users interact with social media platforms. Future research will include validation of the method with different data sets and emotions, and the addition of new artificial intelligence techniques to improve accuracy.
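One of the classifiers mentioned, Naive Bayes, can be sketched for emotion labeling as a minimal multinomial model with Laplace smoothing. The four training "tweets" below are invented stand-ins for the real corpus:

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """Fit a multinomial Naive Bayes model. samples: list of (text, label)."""
    class_docs = Counter(lbl for _, lbl in samples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, lbl in samples:
        for w in text.lower().split():
            word_counts[lbl][w] += 1
            vocab.add(w)
    return class_docs, word_counts, vocab, len(samples)

def predict(model, text):
    """Return the label maximizing log P(label) + sum log P(word|label),
    with add-one (Laplace) smoothing over the vocabulary."""
    class_docs, word_counts, vocab, n = model
    best, best_lp = None, -math.inf
    for lbl in class_docs:
        lp = math.log(class_docs[lbl] / n)
        total = sum(word_counts[lbl].values())
        for w in text.lower().split():
            lp += math.log((word_counts[lbl][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

data = [("so happy and grateful", "happiness"),
        ("this makes me happy", "happiness"),
        ("angry at the slow response", "anger"),
        ("so angry and upset", "anger")]
model = train_nb(data)
print(predict(model, "happy and grateful today"))  # happiness
```

Extending the label set from two emotions to Ekman's six only requires more labeled examples.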
- Published
- 2020
- Full Text
- View/download PDF
36. Instrumento certificador de tecnologías de la información y comunicación y tecnologías del aprendizaje y el conocimiento para docentes universitarios
- Author
-
Valentín Trujillo Mora, Asdrúbal López Chau, Elvira Ivone González Jaimes, and Jorge Bautista López
- Subjects
Transformative learning ,Knowledge management ,business.industry ,Information and Communications Technology ,Ocean Engineering ,business ,Psychology ,Elaboration ,Agile software development - Abstract
This article presents the results of research whose objective was the development of an innovative, agile, accurate, and reliable instrument to evaluate and certify university academic staff: an instrument capable of detecting four levels of knowledge and use of information and communication technologies (ICT) and learning and knowledge technologies (LKT). It was a quasi-experimental study divided into three phases: (a) development of the test with a sequential transformative mixed design; (b) evaluation of the test with a sequential mixed design; and (c) systematization with the Ionic 3 app framework. Once the qualitative and quantitative analyses were performed, the instrument proved versatile and transferable for: (1) staff selection and hiring; (2) identifying the staff's level of knowledge; (3) creating programs and/or courses that reinforce or build new ICT and LKT skills; (4) categorizing staff according to their level of knowledge; (5) promoting staff so they can obtain employment benefits; (6) demonstrating the academics' level in evaluations carried out by accrediting bodies; and (7) providing curricular evidence of the level of knowledge and use of ICT and LKT in the academic area. The test can be applied individually or collectively, and scores can be obtained either manually, for which a user guide is attached, or electronically, for which the web page must be requested from the authors.
- Published
- 2019
- Full Text
- View/download PDF
37. Design of a Horizontal Data Fragmentation, Allocation and Replication Method in the Cloud
- Author
-
María Antonieta Abud-Figueroa, Asdrúbal López-Chau, Lisbeth Rodríguez-Mazahua, Isaac Machorro-Cano, and Felipe Castro-Medina
- Subjects
Replication method ,Distributed database ,business.industry ,Computer science ,Distributed computing ,Fragmentation (computing) ,Web application ,Cloud computing ,business - Abstract
At present, the demand for information in distributed database systems is large and growing every day, and with it new challenges arise to improve database performance. Data fragmentation and replication methods play a leading role in distributed systems over the cloud, which is why this paper presents the design of a horizontal fragmentation, allocation, and replication method in the cloud. This research proposes an algorithm that solves the problem of overlapping horizontal fragments in a data fragmentation and replication method in the cloud. The design of a Web application using the aforementioned method is also presented; the application allows the fragmentation, allocation, and replication scheme proposed by the method to be applied directly to a distributed database.
- Published
- 2019
- Full Text
- View/download PDF
38. Propagación de malware: propuesta de modelo para simulación y análisis
- Author
-
Pedro Guevara López, Luis Angel García Reyes, Asdrúbal López-Chau, and Rafael Rojas Hernández
- Subjects
General Medicine - Published
- 2016
- Full Text
- View/download PDF
39. Human mimic color perception for segmentation of color images using a three-layered self-organizing map previously trained to classify color chromaticity
- Author
-
Farid García-Lamont, Jair Cervantes, and Asdrúbal López-Chau
- Subjects
0209 industrial biotechnology ,Color histogram ,Computer science ,Color vision ,Map coloring ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Color balance ,02 engineering and technology ,False color ,HSL and HSV ,Color space ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Segmentation ,Computer vision ,Chromaticity ,Hue ,Color image ,business.industry ,Binary image ,Image segmentation ,Color quantization ,RGB color model ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Software - Abstract
Most works addressing the segmentation of color images use clustering-based methods; the drawback of such methods is that they require a priori knowledge of the number of clusters, so the number of clusters is set depending on the nature of the scene so as not to lose its color features. Other works that employ different unsupervised learning-based methods use the colors of the given image, but the classifier employed is retrained whenever a new image is given. Humans have the natural capability to (1) recognize colors by using their previous knowledge, that is, they do not need to learn to identify colors every time they observe a new image, and (2) recognize regions or objects within a scene by their chromaticity features. Hence, in this paper we propose to emulate human color perception for color image segmentation. We train a three-layered self-organizing map with chromaticity samples so that the neural network is able to segment color images by their chromaticity features. When training is finished, we use the same neural network to process several images, without retraining it and without specifying, to some extent, the number of colors the image has. The hue component of colors is extracted by mapping the input image from the RGB space to the HSV space. We test our proposal on the Berkeley segmentation database and compare our results quantitatively with related works; according to this comparison, we claim that our approach is competitive.
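The hue-based classification step can be approximated with fixed hue prototypes standing in for trained neurons. This is an illustrative sketch only; the actual system uses a three-layered self-organizing map, not a nearest-prototype rule, and the prototype hues below are assumptions:

```python
import colorsys

# Hue prototypes (in degrees) standing in for trained network units.
PROTOTYPES = {"red": 0.0, "green": 120.0, "blue": 240.0}

def classify_hue(r, g, b):
    """Map an RGB color (0..255) to its HSV hue and return the nearest
    prototype, using circular distance so hue 350 is close to hue 0."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    deg = h * 360.0
    def circ(u, v):
        d = abs(u - v) % 360.0
        return min(d, 360.0 - d)
    return min(PROTOTYPES, key=lambda name: circ(deg, PROTOTYPES[name]))

print(classify_hue(200, 30, 40))  # red
print(classify_hue(10, 180, 60))  # green
```

Because only the hue is used, darker or brighter versions of the same color map to the same prototype, which is the intuition behind the method's robustness to non-uniform illumination.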
- Published
- 2016
- Full Text
- View/download PDF
40. Active rule base development for dynamic vertical partitioning of multimedia databases
- Author
-
Giner Alor-Hernández, Jair Cervantes, Xiaoou Li, Asdrúbal López-Chau, and Lisbeth Rodríguez-Mazahua
- Subjects
Scheme (programming language) ,Database ,Multimedia ,Computer Networks and Communications ,Computer science ,Response time ,Workload ,02 engineering and technology ,Query optimization ,computer.software_genre ,Database design ,Set (abstract data type) ,Workflow ,Artificial Intelligence ,Hardware and Architecture ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,computer ,Reactive system ,Software ,Information Systems ,computer.programming_language - Abstract
Currently, vertical partitioning is used in multimedia databases in order to take advantage of its potential benefits in query optimization. Nevertheless, most vertical partitioning algorithms are static: they optimize a vertical partitioning scheme (VPS) according to a workload, but if the workload changes, the VPS may degrade, resulting in long query response times. This paper presents a set of active rules to perform dynamic vertical partitioning in multimedia databases. First, these rules collect all the information that a vertical partitioning algorithm needs as input. Then they evaluate this information to determine whether the database has experienced enough changes to trigger a performance evaluator. If so, and if the performance of the database falls below a threshold previously calculated by the rules, the vertical partitioning algorithm is triggered and produces a new VPS. Finally, the rules materialize the new VPS. Our active rule base is implemented in DYMOND, an active rule-based system for dynamic vertical partitioning of multimedia databases. DYMOND's architecture and workflow are presented in this paper, and a case study is used to clarify and evaluate the functionality of the active rule base. Additionally, the authors performed a qualitative evaluation with the aim of comparing and evaluating DYMOND's functionality. The results showed that DYMOND improved query performance in multimedia databases.
- Published
- 2016
- Full Text
- View/download PDF
41. Árbol de decisión C4.5 basado en entropía minoritaria para clasificación de conjuntos de datos no balanceados
- Author
-
Luis Alberto Caballero Cruz, Jorge Bautista López, and Asdrúbal López-Chau
- Subjects
General Medicine - Abstract
Abstract. In machine learning, the class imbalance problem is one of the most challenging. For more than a decade, new methods have been developed to improve the performance of classification methods on this type of problem. This article presents a modification of the C4.5 algorithm using the concept of minority entropy. The proposal is based on the correction of an error we observed in a previous publication. The implementation of the new method is tested on publicly available data sets, and the results obtained show the usefulness of the developed method.
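The entropy computation on which C4.5 splits are based can be sketched as follows. This shows only the standard Shannon entropy and information gain; the minority-entropy modification proposed in the article is not reproduced here:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    if not labels:
        return 0.0
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, split):
    """Information gain of a boolean split: parent entropy minus the
    size-weighted entropies of the two children."""
    left = [l for r, l in zip(rows, labels) if split(r)]
    right = [l for r, l in zip(rows, labels) if not split(r)]
    n = len(labels)
    return (entropy(labels)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

xs = [1, 2, 3, 10, 11, 12]
ys = ["a", "a", "a", "b", "b", "b"]
print(info_gain(xs, ys, lambda x: x < 5))  # 1.0 bit: a perfect split
```

C4.5 picks, at each node, the split maximizing a normalized form of this gain; the article replaces the entropy term to weight the minority class.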
- Published
- 2015
- Full Text
- View/download PDF
42. Captura de datos para análisis de la dinámica del tecleo de números para sistema operativo Android
- Author
-
Carlos A Rojas, Selene Nieto-Ruiz, Asdrúbal López-Chau, and Yonic Antonio Gomez Sanchez
- Subjects
General Medicine - Abstract
Abstract. Keystroke dynamics analysis is a technique used to verify the identity of users of computer systems. Currently, few publicly available data sets from real users exist, and those that do correspond to QWERTY-type keyboards. The numeric keypad, which is used in several important applications, has received little attention from the scientific community. This paper presents the development of an application for the Android operating system whose purpose is to capture the keystroke timings commonly used in behavioral biometrics. The data generated by real users with this system have been published on the Internet for free download. This article presents a summary of the data and outlines future work.
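The timing features such a capture tool records can be sketched as dwell and flight times over key events. The `(key, press_ms, release_ms)` triples below are invented sample data, not values from the published set:

```python
# Hypothetical capture of three keypad presses: (key, press_ms, release_ms).
events = [("1", 0, 95), ("2", 180, 260), ("3", 400, 470)]

def dwell_times(ev):
    """Dwell time: how long each key is held down."""
    return [release - press for _, press, release in ev]

def flight_times(ev):
    """Flight time: gap between releasing one key and pressing the next."""
    return [ev[i + 1][1] - ev[i][2] for i in range(len(ev) - 1)]

print(dwell_times(events))   # [95, 80, 70]
print(flight_times(events))  # [85, 140]
```

Vectors of these times per typed sequence are the usual input to keystroke-dynamics classifiers.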
- Published
- 2015
- Full Text
- View/download PDF
43. Data selection based on decision tree for SVM classification on large data sets
- Author
-
Lisbeth Rodríguez Mazahua, Jair Cervantes, Farid García Lamont, Asdrúbal López-Chau, and J. Sergio Ruíz
- Subjects
Training set ,Small data ,Computer science ,business.industry ,Decision tree ,Pattern recognition ,Machine learning ,computer.software_genre ,Set (abstract data type) ,Support vector machine ,Data set ,Tree (data structure) ,Data point ,Ranking SVM ,Artificial intelligence ,business ,computer ,Software - Abstract
Highlights: (1) this paper describes the development of an algorithm for training on large data sets; (2) the algorithm uses a first SVM stage with a small data set; (3) it uses decision trees to find the best data points in the entire data set; (4) the decision tree is trained using the support vectors and non-support vectors found in the first SVM stage; (5) in the second SVM stage, the training data represent all the data points found by the decision tree. The Support Vector Machine (SVM) has important properties, such as a strong mathematical background and a better generalization capability than other classification methods. On the other hand, the major drawback of SVM lies in its training phase, which is computationally expensive and highly dependent on the size of the input data set. In this study, a new algorithm to speed up the training time of SVM is presented; it selects a small, representative subset of the data to improve the training time of SVM. The novel method uses an induction tree to reduce the training data set for SVM, producing a very fast, high-accuracy algorithm. According to the results, the proposed algorithm produces results with accuracy similar to, and in a faster way than, current SVM implementations.
- Published
- 2015
- Full Text
- View/download PDF
44. Contrast Enhancement of RGB Color Images by Histogram Equalization of Color Vectors’ Intensities
- Author
-
Jair Cervantes, Farid García-Lamont, Sergio Ruiz, and Asdrúbal López-Chau
- Subjects
Channel (digital image) ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,02 engineering and technology ,HSL and HSV ,Color space ,Grayscale ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,RGB color model ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Chromaticity ,business ,Histogram equalization ,ComputingMethodologies_COMPUTERGRAPHICS ,Hue - Abstract
Histogram equalization (HE) is a technique developed for contrast enhancement of grayscale images. For RGB (red, green, blue) color images, HE is usually applied to the color channels separately; due to the correlation between the channels, the chromaticity of colors is modified. To overcome this problem, the colors of the image are often mapped to color spaces where chromaticity and intensity are decoupled, and HE is then applied to the intensity channel. Mapping colors between color spaces may involve a large computational load, because the mathematical operations are not linear. In this paper we present a proposal for contrast enhancement of RGB color images, without mapping the colors to different color spaces, in which HE is applied to the intensities of the color vectors. We show that the images obtained with our proposal are very similar to the images processed in the HSV (hue, saturation, value) and L*a*b* color spaces.
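The proposal, equalizing the intensities of the color vectors and rescaling each RGB triple so its chromaticity (the R:G:B ratios) is preserved, can be sketched as follows. Using max(R, G, B) as the intensity is an assumption of this sketch; the paper defines the intensity of the color vectors in its own terms:

```python
from collections import Counter

def equalize_intensity(pixels):
    """Histogram-equalize the per-pixel intensity max(R, G, B) and rescale
    each RGB vector by new_intensity / old_intensity, which preserves the
    R:G:B ratios. pixels: list of (r, g, b) with values in 0..255."""
    vals = [max(p) for p in pixels]
    hist = Counter(vals)
    n = len(pixels)
    # cumulative histogram -> new intensity level for each old level
    acc, mapping = 0, {}
    for v in range(256):
        acc += hist.get(v, 0)
        mapping[v] = round(255 * acc / n)
    out = []
    for p, v in zip(pixels, vals):
        scale = mapping[v] / v if v else 0.0   # keep pure black unchanged
        out.append(tuple(min(255, round(c * scale)) for c in p))
    return out

# four orange-ish pixels of increasing brightness, same 2:1:0 chromaticity
img = [(10, 5, 0), (60, 30, 0), (120, 60, 0), (240, 120, 0)]
print(equalize_intensity(img))  # intensities spread over the full 0..255 range
```

Because each channel is multiplied by the same factor, hue and saturation survive while contrast is stretched.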
- Published
- 2018
- Full Text
- View/download PDF
45. Association Analysis of Medical Opinions About the Non-realization of Autopsies in a Mexican Hospital
- Author
-
Lisbeth Rodríguez-Mazahua, Asdrúbal López-Chau, José Antonio Palet Guzmán, Silvestre Gustavo Peláez-Camarena, and Elayne Rubio Delgado
- Subjects
0301 basic medicine ,medicine.medical_specialty ,Association rule learning ,business.industry ,Lift (data mining) ,Medical procedure ,02 engineering and technology ,medicine.disease ,Filter (software) ,03 medical and health sciences ,030104 developmental biology ,Family medicine ,0202 electrical engineering, electronic engineering, information engineering ,Medicine ,020201 artificial intelligence & image processing ,Medical emergency ,business ,Set (psychology) ,Realization (probability) ,Statistic - Abstract
Hospital autopsy rates around the world have decreased dramatically in recent years. Since physicians are very close to this practice, their opinions may help clarify the reasons for and characteristics of the decline of this important medical procedure. This chapter explains how, for the purposes of this study, data mining techniques were applied to analyze medical opinions about the practice of autopsies in a hospital in Veracruz, Mexico. The opinions were obtained from a survey applied to 85 doctors of the hospital. The application of data mining techniques allowed the construction of a model, represented by a set of rules, which suggests some factors related to the decrease in the performance of autopsies in the hospital. All this was achieved within a framework in which support and confidence thresholds were applied. The results were then refined by an objective statistical measure, named lift, which helps filter out uninteresting association rules.
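The support, confidence, and lift measures used to filter association rules can be computed as follows. The survey item sets below are invented for illustration, not responses from the actual questionnaire:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent -> consequent
    over a list of item sets (Python sets)."""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    c = sum(1 for t in transactions if consequent <= t)
    both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = both / n
    confidence = both / a
    lift = confidence / (c / n)   # > 1 means positive correlation
    return support, confidence, lift

# hypothetical coded survey responses
surveys = [{"low_budget", "no_autopsy"},
           {"low_budget", "no_autopsy"},
           {"low_budget", "autopsy"},
           {"training", "autopsy"}]
print(rule_metrics(surveys, {"low_budget"}, {"no_autopsy"}))
```

A rule is kept when support and confidence exceed the chosen thresholds and its lift is above 1.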
- Published
- 2017
- Full Text
- View/download PDF
46. Método rápido de preprocesamiento para clasificación en conjuntos de datos no balanceados
- Author
-
William Cruz-Santos, Liliana Puente-Maury, Lourdes López-García, and Asdrúbal López-Chau
- Subjects
General Medicine - Abstract
Abstract. The class imbalance problem arises in data sets that contain a large number of instances of one type (the majority class), while the number of instances of the opposite type (the minority class) is considerably smaller. In this scenario, practically all classification methods perform poorly. This article proposes a new preprocessing method that follows an approach similar to techniques based on Tomek links, but whose execution time is dramatically reduced with respect to the brute-force computation commonly used in such techniques. The experimental results demonstrate the effectiveness of the proposed method in improving the areas under the ROC and PRC curves of classification methods applied to real imbalanced data sets.
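The brute-force Tomek-link computation that the proposed method accelerates can be sketched as follows. This is the O(n^2) baseline only, not the faster method of the article:

```python
def tomek_links(points, labels):
    """Brute-force Tomek links: pairs (i, j) with opposite labels where each
    point is the other's nearest neighbor. Such pairs mark noisy or
    borderline examples that imbalance preprocessing removes."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    def nearest(i):
        return min((j for j in range(len(points)) if j != i),
                   key=lambda j: d2(points[i], points[j]))
    links = []
    for i in range(len(points)):
        j = nearest(i)
        if labels[i] != labels[j] and nearest(j) == i and i < j:
            links.append((i, j))
    return links

pts = [(0, 0), (0, 1), (0.4, 1), (5, 5)]
lab = ["maj", "maj", "min", "maj"]
print(tomek_links(pts, lab))  # [(1, 2)]
```

Every call to `nearest` scans all points, which is what makes the naive version quadratic.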
- Published
- 2014
- Full Text
- View/download PDF
47. Support vector machine classification for large datasets using decision tree and Fisher linear discriminant
- Author
-
Xiaoou Li, Asdrúbal López Chau, and Wen Yu
- Subjects
Structured support vector machine ,Computer Networks and Communications ,business.industry ,Computer science ,Decision tree ,Pattern recognition ,Machine learning ,computer.software_genre ,Linear discriminant analysis ,Standard deviation ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,Hardware and Architecture ,Optimal discriminant analysis ,Entropy (information theory) ,Artificial intelligence ,business ,computer ,Time complexity ,Software
Training a support vector machine (SVM) on n data points has time complexity between O(n^2) and O(n^3), so most SVM training algorithms are not suitable for large data sets. Decision trees can simplify SVM training; however, classification accuracy becomes lower when there are inseparable points. This paper introduces a novel method for SVM classification: a decision tree is used to detect the low-entropy regions of the input space, and Fisher's linear discriminant is applied to detect the data near the support vectors. The experimental results demonstrate that our approach has good classification accuracy and low standard deviation, and that training is significantly faster than with other existing methods.
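The Fisher's linear discriminant step can be sketched for 2-D data. This is a minimal sketch of the direction w = Sw^-1 (m_a - m_b) only; the decision-tree and SVM stages of the method are omitted:

```python
def fisher_direction(class_a, class_b):
    """Fisher's linear discriminant for 2-D points: the projection direction
    w = Sw^-1 (m_a - m_b), where Sw is the pooled within-class scatter."""
    def mean(pts):
        return [sum(c) / len(pts) for c in zip(*pts)]
    ma, mb = mean(class_a), mean(class_b)
    # pooled within-class scatter matrix Sw (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

a = [(1, 1), (2, 1), (1, 2)]   # class A near (1.3, 1.3)
b = [(6, 6), (7, 6), (6, 7)]   # class B near (6.3, 6.3)
w = fisher_direction(a, b)
print(w)  # projecting onto w separates the two classes
```

Projections of the data onto `w` give the one-dimensional view in which points near the class boundary can be identified.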
- Published
- 2014
- Full Text
- View/download PDF
48. Convex and concave hulls for classification with support vector machine
- Author
-
Xiaoou Li, Asdrúbal López Chau, and Wen Yu
- Subjects
Convex hull ,Structured support vector machine ,business.industry ,Cognitive Neuroscience ,MathematicsofComputing_GENERAL ,Regular polygon ,Pattern recognition ,Grid ,Computer Science Applications ,Support vector machine ,Svm classifier ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Hull ,Artificial intelligence ,business ,Time complexity ,Mathematics - Abstract
Training a support vector machine (SVM) has time complexity O(n^3) in the number of data points n, so normal SVM algorithms are not suitable for the classification of large data sets. The convex hull can simplify SVM training; however, classification accuracy becomes lower when there are inseparable points. This paper introduces a novel method for SVM classification, called the convex-concave hull. After grid pre-processing, the convex hull and the concave (non-convex) hull are found with the Jarvis march; the vertices of the convex-concave hull are then used for SVM training. The proposed convex-concave hull SVM classifier has distinctive advantages in dealing with large data sets, with higher accuracy. Experimental results demonstrate that our approach has good classification accuracy while training is significantly faster than with other training methods.
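The Jarvis march (gift wrapping) used to find the convex hull can be sketched as follows; the grid pre-processing and the concave-hull stage of the paper are not reproduced:

```python
def jarvis_march(points):
    """Gift-wrapping convex hull: start at the lowest point and repeatedly
    take the most counter-clockwise remaining point until wrapping around."""
    def cross(o, a, b):
        # > 0: o->a->b turns counter-clockwise; < 0: clockwise
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    start = min(points, key=lambda p: (p[1], p[0]))
    hull, p = [], start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if r == p:
                continue
            c = cross(p, q, r)
            # prefer r if it lies clockwise of p->q, or is collinear but farther
            if c < 0 or (c == 0 and
                         (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2 >
                         (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2):
                q = r
        p = q
        if p == start:
            break
    return hull

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(jarvis_march(pts))  # the four corners; interior points are dropped
```

The march costs O(n h) for h hull vertices, which is why it is paired with grid pre-processing before SVM training.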
- Published
- 2013
- Full Text
- View/download PDF
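The hull-based reduction behind this entry can be illustrated with the convex-hull half of the idea: for separable data, points interior to each class's hull cannot be support vectors, so training on hull vertices alone is enough. The sketch below uses SciPy's `ConvexHull` on synthetic blobs with hand-picked centers; the concave-hull extension (the paper's contribution, which handles inseparable points via Jarvis march on a grid) is not reproduced here.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters (assumed centers, for illustration only).
X, y = make_blobs(n_samples=1000, centers=[[-3, 0], [3, 0]],
                  cluster_std=1.5, random_state=1)

# Keep only the convex-hull vertices of each class; interior points
# cannot be support vectors when the classes are separable.
vertex_idx = []
for c in np.unique(y):
    cls_idx = np.where(y == c)[0]
    hull = ConvexHull(X[cls_idx])
    vertex_idx.extend(cls_idx[hull.vertices])
vertex_idx = np.array(vertex_idx)

svm = SVC(kernel="linear").fit(X[vertex_idx], y[vertex_idx])
print(len(vertex_idx), "hull vertices kept out of", len(X), "points")
print("accuracy on the full set:", round(svm.score(X, y), 3))
```

The number of hull vertices grows far more slowly than n, which is why the reduction pays off; the concave hull is needed precisely because this convex-only shortcut loses accuracy once the classes overlap.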
49. Fisher’s decision tree
- Author
-
Lourdes López-García, Farid García Lamont, Asdrúbal López-Chau, and Jair Cervantes
- Subjects
Incremental decision tree ,business.industry ,Decision tree learning ,General Engineering ,Univariate ,Decision tree ,ID3 algorithm ,Pattern recognition ,Machine learning ,computer.software_genre ,Linear discriminant analysis ,Computer Science Applications ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Alternating decision tree ,Decision stump ,Artificial intelligence ,business ,computer ,Mathematics - Abstract
Univariate decision trees are classifiers currently used in many data mining applications. They partition the input space with hyperplanes orthogonal to the attribute axes, producing a model that can be understood by human experts. One disadvantage of univariate decision trees is that they produce complex and inaccurate models when decision boundaries are not orthogonal to the axes. In this paper we introduce Fisher's tree, a classifier that combines the dimensionality reduction of Fisher's linear discriminant with the recursive decomposition strategy of decision trees to produce an oblique decision tree. Our proposal generates an artificial attribute that is used to split the data recursively. Fisher's decision tree induces oblique trees whose accuracy, size, number of leaves, and training time are competitive with other decision trees reported in the literature. We use more than ten publicly available datasets to demonstrate the effectiveness of our method.
- Published
- 2013
- Full Text
- View/download PDF
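The "artificial attribute" mechanism described in the abstract can be shown in miniature: project the data onto Fisher's discriminant direction, then let an ordinary axis-parallel split act on that projection, which corresponds to an oblique cut in the original space. This sketch performs a single split with scikit-learn rather than the paper's full recursive induction; the diagonal blob centers are an assumption chosen so that no axis-parallel cut is ideal.

```python
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

# Two classes separated along the diagonal, where axis-parallel
# splits are a poor fit (assumed toy layout, for illustration).
X, y = make_blobs(n_samples=500, centers=[[0, 0], [2, 2]],
                  cluster_std=1.0, random_state=2)

# Fisher's discriminant yields one projection direction; the projected
# value is the "artificial attribute" on which an ordinary threshold
# split becomes an oblique split in the original space.
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
z = lda.transform(X)  # artificial attribute, shape (n, 1)

oblique = DecisionTreeClassifier(max_depth=1).fit(z, y)  # one oblique cut
axis = DecisionTreeClassifier(max_depth=1).fit(X, y)     # one axis cut

print("oblique stump accuracy:      ", round(oblique.score(z, y), 3))
print("axis-parallel stump accuracy:", round(axis.score(X, y), 3))
```

In the full method this projection-and-split step is applied recursively to each child node, growing a complete oblique tree.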
50. Leaf Categorization Methods for Plant Identification
- Author
-
Valentin Trujillo-Mora, Rafael Rojas-Hernández, Farid García Lamont, Lisbeth Rodríguez-Mazahua, Jair Cervantes, and Asdrúbal López-Chau
- Subjects
business.industry ,Pattern recognition ,04 agricultural and veterinary sciences ,02 engineering and technology ,Plant identification ,Categorization ,040103 agronomy & agriculture ,0202 electrical engineering, electronic engineering, information engineering ,0401 agriculture, forestry, and fisheries ,Classification methods ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Mathematics - Abstract
Most classic plant identification methods use a dichotomous or multi-access key to compare leaf characteristics. Questions such as whether the analyzed leaves are lobed, unlobed, simple, or compound must be answered to identify plants successfully. However, very little attention has been paid to distinguishing leaves automatically by such features. In this paper we first explore whether incorporating prior knowledge about leaves (categorizing them as lobed simple or unlobed simple) affects the performance of six classification methods. In experiments with more than 1,900 leaf images from the Flavia dataset, we found a statistically significant relationship between this categorization and the improved performance of the tested classifiers. We therefore propose two novel methods to automatically differentiate lobed simple leaves from unlobed simple ones. The proposals are invariant to rotation and achieve correct prediction rates greater than 98%.
- Published
- 2017
- Full Text
- View/download PDF
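A rotation-invariant cue of the kind this entry relies on can be illustrated with a simple shape descriptor. The sketch below computes solidity (outline area divided by convex-hull area) for two synthetic contours: a lobed outline has deep concavities between its lobes and so a solidity well below 1, while an unlobed outline stays close to 1. This descriptor is a common stand-in for such categorization, not the two methods proposed in the paper, and the parametric contours are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def solidity(points):
    """Polygon area divided by convex-hull area (rotation-invariant)."""
    hull = ConvexHull(points)
    # Shoelace formula for the contour's own enclosed area.
    x, y = points[:, 0], points[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return area / hull.volume  # in 2-D, ConvexHull.volume is the area

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
unlobed = np.c_[np.cos(t), 0.6 * np.sin(t)]              # ellipse outline
r = 1 + 0.5 * np.cos(5 * t)                              # 5-lobed radius
lobed = np.c_[r * np.cos(t), r * np.sin(t)]

print("unlobed solidity:", round(solidity(unlobed), 3))
print("lobed solidity:  ", round(solidity(lobed), 3))
```

Because solidity depends only on the contour's geometry, rotating the leaf image leaves the value unchanged, matching the rotation invariance claimed in the abstract.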