6,076 results for "activation function"
Search Results
2. On Complex Neural Networks
- Author
- Sandhu, Momin Jamil, Yang, Xin-She
- Published
- 2025
- Full Text
- View/download PDF
3. ReLU, Sparseness, and the Encoding of Optic Flow in Neural Networks.
- Author
- Layton, Oliver W., Peng, Siyuan, and Steinmetz, Scott T.
- Abstract
Accurate self-motion estimation is critical for various navigational tasks in mobile robotics. Optic flow provides a means to estimate self-motion using a camera sensor and is particularly valuable in GPS- and radio-denied environments. The present study investigates the influence of different activation functions—ReLU, leaky ReLU, GELU, and Mish—on the accuracy, robustness, and encoding properties of convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs) trained to estimate self-motion from optic flow. Our results demonstrate that networks with ReLU and leaky ReLU activation functions not only achieved superior accuracy in self-motion estimation from novel optic flow patterns but also exhibited greater robustness under challenging conditions. The advantages offered by ReLU and leaky ReLU may stem from their ability to induce sparser representations than GELU and Mish do. Our work characterizes the encoding of optic flow in neural networks and highlights how the sparseness induced by ReLU may enhance robust and accurate self-motion estimation from optic flow. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
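The sparseness argument in entry 3 is easy to make concrete: ReLU outputs exact zeros for all negative pre-activations, while GELU and Mish return small but nonzero values. A minimal NumPy sketch (the tolerance and toy input are illustrative assumptions, not the paper's protocol):

```python
import numpy as np

def relu(x): return np.maximum(0.0, x)
def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)
def gelu(x):  # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))
def mish(x): return x * np.tanh(np.log1p(np.exp(x)))  # x * tanh(softplus(x))

def sparseness(act, x, tol=1e-6):
    """Fraction of unit responses with magnitude below tol."""
    return np.mean(np.abs(act(x)) < tol)

x = np.random.randn(100_000)  # stand-in for pre-activation inputs
for f in (relu, leaky_relu, gelu, mish):
    print(f.__name__, sparseness(f, x))
```

On standard-normal inputs, relu reports roughly 0.5 sparseness while the other three report essentially none, which matches the intuition behind the abstract's claim.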
4. A wavelet CNN with appropriate feed-allocation and PSO optimized activations for diabetic retinopathy grading.
- Author
- Raja, Chandrasekaran, B V, Santhosh Krishna, Loganathan, Balaji, Suman, Sanjay Kumar, Bhagyalakshmi, L., Alrashoud, Mubarak, Giri, Jayant, and Sathish, T.
- Abstract
This work modifies the conventional CNN architecture by integrating Multi-resolution Analysis (MRA) into a CNN framework for Diabetic Retinopathy (DR) diagnosis and grading. The High Frequency (HF) sub-bands are subjected to optimized activations and fed directly to the fully connected layers, since they carry edge features. Unlike FD-ReLU, the proposed function preserves significant negative coefficients; compared with S-ReLU, the proposed third-order S-ReLU is optimized so that it sustains activations in the range suited to the wavelet coefficients. The higher-order coefficients of the third-order S-ReLU are optimized with PSO, fitting the maximum energy of the wavelet sub-bands to ensure HF edge preservation. The authors re-architect three different CNNs published in the retinal image analysis field, with spatial and wavelet inputs and optimized activations. The highest accuracy of 96% is attained with the AlexNet re-architecture on 35,126 fundus images from the Kaggle dataset. The proposed wavelet CNN re-architecture outperformed multiscale shallow CNNs, a multiscale attention net, and stacked CNNs, with accuracy increases of 6.6, 0.3, and 0.7 per cent, respectively. The full implementation of the wavelet CNN is released as source code. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Intelligent Classification Technology for Clothing Images Based on an Improved EfficientNet Model.
- Author
- 王佳鑫, 李雪飞, and 张颖
- Published
- 2024
- Full Text
- View/download PDF
6. GDnet-IP: Grouped Dropout-Based Convolutional Neural Network for Insect Pest Recognition.
- Author
- Li, Dongcheng, Xu, Yongqi, Yuan, Zheming, and Dai, Zhijun
- Subjects
- CONVOLUTIONAL neural networks, RECOGNITION (Psychology), INSECT pests, IMAGE recognition (Computer vision), SMART structures
- Abstract
Lightweight convolutional neural network (CNN) models have proven effective in recognizing common pest species, yet challenges remain in enhancing their nonlinear learning capacity and reducing overfitting. This study introduces a grouped dropout strategy and modifies the CNN architecture to improve the accuracy of multi-class insect recognition. Specifically, we optimized the base model by selecting appropriate optimizers, fine-tuning the dropout probability, and adjusting the learning rate decay strategy. Additionally, we replaced ReLU with PReLU and added BatchNorm layers after each Inception layer, enhancing the model's nonlinear expression and training stability. Leveraging the Inception module's branching structure and the adaptive grouping properties of the WeDIV clustering algorithm, we developed two grouped dropout models, the iGDnet-IP and GDnet-IP. Experimental results on a dataset containing 20 insect species (15 pests and five beneficial insects) demonstrated an increase in cross-validation accuracy from 84.68% to 92.12%, with notable improvements in the recognition rates for difficult-to-classify species, such as Parnara guttatus Bremer and Grey (PGBG) and Papilio xuthus Linnaeus (PXLL), increasing from 38% and 47% to 62% and 93%, respectively. Furthermore, these models showed significant accuracy advantages over standard dropout methods on test sets, with faster training times compared to four conventional CNN models, highlighting their suitability for mobile applications. Theoretical analyses of model gradients and Fisher information provide further insight into the grouped dropout strategy's role in improving CNN interpretability for insect recognition tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
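Entry 6's grouped dropout zeroes coordinated groups of units rather than independent ones. The WeDIV-based adaptive grouping is not described in the abstract, so the sketch below simply drops contiguous channel groups; the group count, dropout rate, and tensor shapes are assumptions:

```python
import torch
import torch.nn as nn

class GroupedDropout(nn.Module):
    """Dropout that zeroes whole channel groups rather than single units.

    Hypothetical sketch of the grouped-dropout idea in GDnet-IP; the paper's
    WeDIV-based adaptive grouping is not reproduced here, so channels are
    split into `num_groups` contiguous groups.
    """
    def __init__(self, num_groups=4, p=0.5):
        super().__init__()
        self.num_groups, self.p = num_groups, p

    def forward(self, x):                      # x: (N, C, H, W)
        if not self.training or self.p == 0.0:
            return x
        n, c = x.shape[:2]
        # One keep/drop decision per (sample, group), expanded to channels.
        keep = (torch.rand(n, self.num_groups, device=x.device) > self.p).float()
        keep = keep.repeat_interleave(c // self.num_groups, dim=1)
        return x * keep[:, :, None, None] / (1.0 - self.p)  # inverted-dropout scaling

x = torch.randn(8, 32, 16, 16)
print(GroupedDropout()(x).shape)  # torch.Size([8, 32, 16, 16])
```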
7. Intelligent prediction of methane hydrate phase boundary conditions in ionic liquids using deep learning algorithms.
- Author
- Bavoh, Cornelius Borecho, Sambo, Chico, Quainoo, Ato Kwamena, and Lal, Bhajan
- Subjects
- ARTIFICIAL neural networks, MACHINE learning, METHANE hydrates, OPTIMIZATION algorithms, PHASE equilibrium
- Abstract
The objective of this work is to predict the methane hydrate phase boundary equilibrium temperature in the presence of ionic liquids (ILs) using machine learning techniques, to overcome the limitations of existing empirical models. To this end, five deep neural network (DNN) optimization algorithms (Adadelta, Ftrl, Adagrad, Adam, and RMSProp) coupled with six activation functions (elu, leaky relu, sigmoid, relu, tanh, and selu) were applied to 610 experimental datasets from the literature. The independent variables used to predict the ILs methane hydrate boundary temperature were pressure (2.39–100.43 MPa), concentration (0.10–50 wt.%), and IL molecular weight (91.11–339.50 g mol−1). The study revealed that the Adadelta DNN optimization algorithm and the elu activation function gave the best predictions, with average RMSEs of 0.6727 and 0.6989, respectively. The findings suggest that Adadelta coupled with elu accurately predicts the methane hydrate phase boundary condition in the presence of ionic liquids. The excellent performance of Adadelta and elu resides in their ability to capture exponential data trends, which is the fundamental behavior of hydrate phase boundary conditions. This work pioneered the use of machine learning techniques to predict hydrate behavior conditions in IL systems. Thus, the findings will support the development of simple predictive software for hydrate phase behavior in IL systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
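The winning combination in entry 7 (Adadelta with elu) maps directly onto a few lines of Keras. A hedged sketch with randomly generated stand-in data spanning the stated input ranges; the layer sizes, learning rate, and epoch count are assumptions, not the paper's architecture:

```python
import numpy as np
import tensorflow as tf

# Stand-in data: pressure (MPa), concentration (wt.%), IL molecular weight
# (g/mol) -> equilibrium temperature. The real study used 610 points from
# the literature; the target values here are random placeholders.
X = np.random.uniform([2.39, 0.10, 91.11], [100.43, 50.0, 339.50], size=(610, 3))
y = np.random.uniform(270.0, 300.0, size=(610, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="elu", input_shape=(3,)),
    tf.keras.layers.Dense(64, activation="elu"),
    tf.keras.layers.Dense(1),  # regression output: equilibrium temperature (K)
])
model.compile(optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0),
              loss="mse", metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```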
8. AONet: Attention network with optional activation for unsupervised video anomaly detection.
- Author
- Rakhmonov, Akhrorjon Akhmadjon Ugli, Subramanian, Barathi, Amirian Varnousefaderani, Bahar, and Kim, Jeonghong
- Subjects
- CONVOLUTIONAL neural networks, ANOMALY detection (Computer security), VIDEO surveillance, AMBIGUITY
- Abstract
Anomaly detection in video surveillance is crucial but challenging due to the rarity of irregular events and ambiguity of defining anomalies. We propose a method called AONet that utilizes a spatiotemporal module to extract spatiotemporal features efficiently, as well as a residual autoencoder equipped with an attention network for effective future frame prediction in video anomaly detection. AONet utilizes a novel activation function called OptAF that combines the strengths of the ReLU, leaky ReLU, and sigmoid functions. Furthermore, the proposed method employs a combination of robust loss functions to address various aspects of prediction errors and enhance training effectiveness. The performance of the proposed method is evaluated on three widely used benchmark datasets. The results indicate that the proposed method outperforms existing state‐of‐the‐art methods and demonstrates comparable performance, achieving area under the curve values of 97.0%, 86.9%, and 73.8% on the UCSD Ped2, CUHK Avenue, and ShanghaiTech Campus datasets, respectively. Additionally, the high speed of the proposed method enables its application to real‐time tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
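The abstract of entry 8 names OptAF's ingredients (ReLU, leaky ReLU, sigmoid) but not its formula. Purely as an illustration of how such a blend might look, here is a hypothetical combination, not the authors' definition:

```python
import torch

def opt_af_sketch(x, alpha=0.01):
    """Hypothetical stand-in for AONet's OptAF, whose exact form the
    abstract does not give: a sigmoid-gated positive branch with a leaky
    linear negative branch, echoing the three ingredients named above."""
    return torch.where(x > 0, x * torch.sigmoid(x), alpha * x)
```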
9. Improving Pre-trained CNN-LSTM Models for Image Captioning with Hyper-Parameter Optimization.
- Author
- Khassaf, Nuha M. and Ali, Nada Hussein M.
- Subjects
- IMAGE recognition (Computer vision), SHORT-term memory, LONG short-term memory, CONVOLUTIONAL neural networks, RECOGNITION (Psychology), DEEP learning
- Abstract
The issue of image captioning, which comprises automatic text generation to understand an image's visual information, has become feasible with developments in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach combines pre-trained Convolutional Neural Network (CNN) models with Long Short-Term Memory (LSTM) to generate image captions. The process includes two stages. The first stage entails training the CNN-LSTM models using baseline hyper-parameters, and the second stage encompasses training the CNN-LSTM models while optimizing and adjusting the hyper-parameters of the previous stage. Improvements include the use of a new activation function, regular parameter tuning, and an improved learning rate in the later stages of training. Experimental results on the Flickr8k dataset showed a noticeable and satisfactory improvement in the second stage, with a clear increase in the evaluation metrics BLEU-1 through BLEU-4, METEOR, and ROUGE-L. This increase confirms the effectiveness of the alterations and highlights the importance of hyper-parameter tuning in improving the performance of CNN-LSTM models on image captioning tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. A comparative analysis of activation functions in neural networks: unveiling categories.
- Author
- Bouraya, Sara and Belangour, Abdessamad
- Subjects
- ARTIFICIAL neural networks, DEEP learning, RESEARCH personnel, COMPARATIVE studies
- Abstract
Activation functions (AFs) play a critical role in artificial neural networks, allowing for the modeling of complex, non-linear relationships in data. In this review paper, we provide an overview of the most commonly used AFs in deep learning. In this comparative study, we survey and compare the different AFs in deep learning and artificial neural networks. Our aim is to provide insights into the strengths and weaknesses of each AF and guidance on the appropriate selection of AFs for different types of problems. We evaluate the most commonly used AFs, including sigmoid, tanh, the rectified linear unit (ReLU) and its variants, the exponential linear unit (ELU), and SoftMax. For each activation category, we discuss its properties, mathematical formulation (MF), and its benefits and drawbacks in terms of the ability to model complex, non-linear relationships in data. In conclusion, this comparative study provides a comprehensive overview of the properties and performance of different AFs and serves as a valuable resource for researchers and practitioners in deep learning and artificial neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
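For reference alongside entry 10's survey, the standard definitions of the evaluated activation families, written out in NumPy:

```python
import numpy as np

# Reference definitions of the activation families surveyed in entry 10.
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def tanh(x): return np.tanh(x)
def relu(x): return np.maximum(0.0, x)
def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)      # a ReLU variant
def elu(x, a=1.0): return np.where(x > 0, x, a * (np.exp(x) - 1.0))
def softmax(z):                                                  # vector-valued
    e = np.exp(z - np.max(z))                                    # shift for numerical stability
    return e / e.sum()
```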
11. AONet: Attention network with optional activation for unsupervised video anomaly detection
- Author
- Akhrorjon Akhmadjon Ugli Rakhmonov, Barathi Subramanian, Bahar Amirian Varnousefaderani, and Jeonghong Kim
- Subjects
- activation function, convolutional neural network, loss function, unsupervised learning, video anomaly detection, Telecommunication, TK5101-6720, Electronics, TK7800-8360
- Abstract
Anomaly detection in video surveillance is crucial but challenging due to the rarity of irregular events and ambiguity of defining anomalies. We propose a method called AONet that utilizes a spatiotemporal module to extract spatiotemporal features efficiently, as well as a residual autoencoder equipped with an attention network for effective future frame prediction in video anomaly detection. AONet utilizes a novel activation function called OptAF that combines the strengths of the ReLU, leaky ReLU, and sigmoid functions. Furthermore, the proposed method employs a combination of robust loss functions to address various aspects of prediction errors and enhance training effectiveness. The performance of the proposed method is evaluated on three widely used benchmark datasets. The results indicate that the proposed method outperforms existing state-of-the-art methods and demonstrates comparable performance, achieving area under the curve values of 97.0%, 86.9%, and 73.8% on the UCSD Ped2, CUHK Avenue, and ShanghaiTech Campus datasets, respectively. Additionally, the high speed of the proposed method enables its application to real-time tasks.
- Published
- 2024
- Full Text
- View/download PDF
12. A wavelet CNN with appropriate feed-allocation and PSO optimized activations for diabetic retinopathy grading
- Author
- Chandrasekaran Raja, Santhosh Krishna B V, Balaji Loganathan, Sanjay Kumar Suman, L. Bhagyalakshmi, Mubarak Alrashoud, Jayant Giri, and T. Sathish
- Subjects
- WaveletCNN, activation function, ResNet, AlexNet, wavelet, diabetic retinopathy, Control engineering systems. Automatic machinery (General), TJ212-225, Automation, T59.5
- Abstract
This work modifies the conventional CNN architecture by integrating Multi-resolution Analysis (MRA) into a CNN framework for Diabetic Retinopathy (DR) diagnosis and grading. The High Frequency (HF) sub-bands are subjected to optimized activations and fed directly to the fully connected layers, since they carry edge features. Unlike FD-ReLU, the proposed function preserves significant negative coefficients; compared with S-ReLU, the proposed third-order S-ReLU is optimized so that it sustains activations in the range suited to the wavelet coefficients. The higher-order coefficients of the third-order S-ReLU are optimized with PSO, fitting the maximum energy of the wavelet sub-bands to ensure HF edge preservation. The authors re-architect three different CNNs published in the retinal image analysis field, with spatial and wavelet inputs and optimized activations. The highest accuracy of 96% is attained with the AlexNet re-architecture on 35,126 fundus images from the Kaggle dataset. The proposed wavelet CNN re-architecture outperformed multiscale shallow CNNs, a multiscale attention net, and stacked CNNs, with accuracy increases of 6.6, 0.3, and 0.7 per cent, respectively. The full implementation of the wavelet CNN is released as source code.
- Published
- 2024
- Full Text
- View/download PDF
13. New activation functions and Zhangians in zeroing neural network and applications to time-varying matrix pseudoinversion.
- Author
- Gao, Yuefeng, Tang, Zhichao, Ke, Yuanyuan, and Stanimirović, Predrag S.
- Subjects
- TIME-varying networks, ERROR functions, LYAPUNOV stability, STREAMING video & television, STABILITY theory, ECCENTRIC loads
- Abstract
Zeroing neural networks (ZNN) are a proven tool for the online solution of the time-varying (TV) matrix Moore–Penrose (M–P) inverse. This paper focuses on online computation of the TV full-row-rank or full-column-rank matrix M–P inverse using a novel ZNN model with an optimized activation function (AF) and an improved error function (Zhangian). ZNN dynamical systems accelerated by the optimized class of AFs converge in finite time to the TV theoretical M–P inverse. Upper bounds on the estimated convergence time are obtained analytically using Lyapunov stability theory. Simulation experiments support the theoretical analysis and demonstrate the effectiveness of the proposed ZNN dynamics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
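The ZNN design formula that entry 13 builds on is standard in this literature (the paper's specific optimized AF and improved Zhangian are not reproduced here): pick a matrix-valued error E(t) and drive it to zero through an elementwise activation Φ,

```latex
\dot{E}(t) = -\gamma \, \Phi\bigl(E(t)\bigr), \qquad \gamma > 0 .
```

For the M–P inverse X(t) of a full-row-rank A(t), one common Zhangian is E(t) = X(t)A(t)Aᵀ(t) − Aᵀ(t), whose unique zero is X = A⁺ = Aᵀ(AAᵀ)⁻¹. A linear Φ gives exponential convergence, while sign-power-type AFs are what typically yield the finite-time convergence analyzed in such papers.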
14. Grid multi-scroll attractors in cellular neural network with a new activation function and pulse current stimulation.
- Author
- Jin, Hui and Li, Zhijun
- Abstract
Cellular neural networks (CNNs) have attracted much attention in academia and industry due to their rich dynamic characteristics and potential application value. A tri-cell CNN with a nested sinusoidal activation function (NSAF) under multi-level pulse current stimulation is developed here. The basic features of the CNN system are analyzed from the perspectives of symmetry, dissipativity, and stability of equilibrium points. The complicated dynamical behaviors are thoroughly investigated via phase portraits, Poincaré maps, time series, bifurcation diagrams, Lyapunov exponents, and basins of attraction. It is found that the tri-cell CNN can generate various complex grid multi-scroll attractors (GMSAs). The number of scrolls in GMSAs can be controlled by the logic level of the pulse current and the saturation value of the NSAF. Furthermore, this CNN model can demonstrate intricate initial offset boosting dynamics under appropriate parameters. This may result in an infinite number of self-excited chaotic attractors and hidden period-1 attractors with identical shapes but different positions, leading to the intriguing coexistence of homogeneous and heterogeneous multistability. The existence of GMSAs with different numbers of scrolls is verified by MCU-based hardware experiments. Finally, a pseudo-random number generator (PRNG) based on GMSAs is designed to explore potential applications in the field of information security. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
15. On Systems of Neural ODEs with Generalized Power Activation Functions
- Author
- Vasiliy Ye. Belozyorov and Yevhen V. Koshel
- Subjects
- system of ordinary autonomous differential equations, limit cycle, chaotic attractor, logistic mapping, residual neural network, activation function, time series, Mathematics, QA1-939
- Abstract
When constructing neural network-based models, it is common practice to use time-tested activation functions such as the hyperbolic tangent, the sigmoid, or the ReLU. These choices, however, may be suboptimal. The hyperbolic tangent and the sigmoid are differentiable but bounded, which can lead to the vanishing gradient problem. The ReLU is unbounded but not differentiable at 0, which may lead to suboptimal training with some optimizers. One can attempt to use sigmoid-like functions such as the cube root, but it is also not differentiable at 0. One activation function that is often overlooked is the identity function. Even though it does not by itself induce nonlinear behavior in the model, it can help build more explainable models more quickly due to the negligible cost of its evaluation, while the nonlinearities can be provided by the model's evaluation rule. In this article, we explore the use of a specially designed unbounded, differentiable, generalized power activation function, the identity function, and their combinations for approximating univariate time series data with neural ordinary differential equations. Examples are given.
- Published
- 2024
- Full Text
- View/download PDF
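Entry 15's abstract motivates an unbounded activation that, unlike ReLU or the cube root, is differentiable at 0. One family with exactly these properties is the signed power below; this is a guess at the flavor of the paper's "generalized power" function, whose exact parametrization the abstract does not give:

```python
import torch

def signed_power(x, p=1.5):
    """Unbounded power activation f(x) = sign(x) * |x|**p, differentiable
    everywhere (including 0) for p > 1. Unlike tanh or sigmoid it does not
    saturate, so it avoids vanishing gradients; unlike ReLU or the cube
    root it has a well-defined derivative at the origin."""
    return torch.sign(x) * torch.abs(x).pow(p)
```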
16. Study on Fast Temporal Prediction Method of Flame Propagation Velocity in Methane Gas Deflagration Experiment Based on Neural Network.
- Author
- Wang, Xueqi, Wang, Boqiao, Yu, Kuai, Zhu, Wenbin, Zhang, Jinnan, and Zhang, Bin
- Subjects
- WIRE netting, FLAME, METHANE, VELOCITY, GASES
- Abstract
To address the challenges of high experimental costs, complexity, and time consumption associated with pre-mixed combustible gas deflagration experiments under semi-open-space obstacle conditions, a rapid temporal prediction method for flame propagation velocity based on Ranger-GRU neural networks is proposed. The deflagration experiment data are employed as the training dataset for the neural network, with the coefficient of determination (R2) and mean squared error (MSE) used as evaluation metrics to assess the predictive performance of the network. First, 108 sets of pre-mixed methane gas deflagration experiments were conducted, varying obstacle parameters to investigate methane deflagration mechanisms under different conditions. The experimental results demonstrate that obstacle-to-ignition-source distance, obstacle shape, obstacle length, obstacle quantity, and thick and fine wire mesh obstacles all significantly influence flame propagation velocity. Subsequently, the GRU neural network was trained, and different activation functions (sigmoid, ReLU, PReLU) and optimizers (Lookahead, RAdam, Adam, Ranger) were incorporated into the backpropagation updating process of the network. The training results show that the Ranger-GRU neural network with the PReLU activation function achieves the highest mean R2 value of 0.96 and the lowest mean MSE value of 7.16759. Therefore, the Ranger-GRU neural network with the PReLU activation function is a viable rapid prediction method for flame propagation velocity in pre-mixed methane gas deflagration experiments under semi-open-space obstacle conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
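PReLU, the activation that performed best in entry 16, is standard: f(x) = x for x > 0 and a·x otherwise, with the slope a learned by backpropagation. Minimal PyTorch usage:

```python
import torch
import torch.nn as nn

# PReLU: f(x) = x for x > 0, a * x otherwise, with slope `a` trained.
prelu = nn.PReLU(init=0.25)
x = torch.tensor([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x))   # negative inputs scaled by the learnable slope
```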
17. LF-YOLO for Surface Defect Detection of Strip Steel in Industrial Scenarios.
- Author
- 马肖瑶, 黎睿, 李自力, and 翟文正
- Published
- 2024
- Full Text
- View/download PDF
18. Lightweight Wheat Spike Detection Method Based on Activation and Loss Function Enhancements for YOLOv5s.
- Author
- Li, Jingsong, Dai, Feijie, Qian, Haiming, Huang, Linsheng, and Zhao, Jinling
- Subjects
- OBJECT recognition (Computer vision), NETWORK performance, WHEAT, LIGHTING
- Abstract
Wheat spike count is one of the critical indicators for assessing the growth and yield of wheat. However, illumination variations, mutual occlusion, and background interference greatly affect wheat spike detection. A lightweight detection method was proposed based on YOLOv5s. Initially, the original YOLOv5s was improved by combining an additional small-scale detection layer with the ECA (Efficient Channel Attention) attention mechanism integrated into all C3 modules (YOLOv5s + 4 + ECAC3). After comparing GhostNet, ShuffleNetV2, and MobileNetV3, the GhostNet architecture was selected as the optimal lightweight model framework based on its superior performance in various evaluations. Subsequently, the incorporation of five different activation functions into the network led to the identification of the RReLU (Randomized Leaky ReLU) activation function as the most effective in augmenting the network's performance. Ultimately, the network's CIoU (Complete Intersection over Union) loss function was replaced with the EIoU (Efficient Intersection over Union) loss function. Despite a minor reduction of 2.17% in accuracy for the refined YOLOv5s + 4 + ECAC3 + G + RR + E network when compared to the YOLOv5s + 4 + ECAC3, there was a marginal improvement of 0.77% over the original YOLOv5s. Furthermore, the parameter count was diminished by 32% and 28.2% relative to the YOLOv5s + 4 + ECAC3 and YOLOv5s, respectively. The model size was reduced by 28.0% and 20%, and the Giga Floating-point Operations Per Second (GFLOPs) were lowered by 33.2% and 9.5%, respectively, signifying a substantial improvement in the network's efficiency without significantly compromising accuracy. This study offers a methodological reference for the rapid and accurate detection of agricultural objects through the enhancement of a deep learning network. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
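RReLU, which entry 18 found most effective among the five candidates, randomizes the leaky slope: during training the negative-side slope is drawn uniformly from [lower, upper] per element, and at inference the fixed mean slope is used. PyTorch ships it directly:

```python
import torch
import torch.nn as nn

rrelu = nn.RReLU(lower=1/8, upper=1/3)
x = torch.tensor([-1.0, 0.5, -2.0])
print(rrelu(x))        # training mode: random slopes on negative inputs
rrelu.eval()
print(rrelu(x))        # eval mode: deterministic slope (1/8 + 1/3) / 2
```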
19. Multivariate Perturbed Hyperbolic Tangent-Activated Singular Integral Approximation.
- Author
- Anastassiou, George A.
- Subjects
- SMOOTHNESS of functions, QUANTITATIVE research, DENSITY
- Abstract
Here we study the quantitative multivariate approximation of perturbed hyperbolic tangent-activated singular integral operators to the unit operator. The engaged neural network activation function is both parametrized and deformed, and the related kernel is a density function on R^N. We exhibit uniform and L^p (p ≥ 1) approximations via Jackson-type inequalities involving the first L^p modulus of smoothness, 1 ≤ p ≤ ∞. The differentiability of our multivariate functions is covered extensively in our approximations. We continue by detailing the global smoothness preservation results of our operators. We conclude the paper with the simultaneous approximation and the simultaneous global smoothness preservation by our multivariate perturbed activated singular integrals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. A General Method for Solving Differential Equations of Motion Using Physics-Informed Neural Networks.
- Author
- Zhang, Wenhao, Ni, Pinghe, Zhao, Mi, and Du, Xiuli
- Subjects
- DIFFERENTIAL equations, AUTOMATIC differentiation, EQUATIONS of motion, STRUCTURAL dynamics, DEGREES of freedom
- Abstract
The physics-informed neural network (PINN) is an effective alternative method for solving differential equations that do not require grid partitioning, making it easy to implement. In this study, using automatic differentiation techniques, the PINN method is employed to solve differential equations by embedding prior physical information, such as boundary and initial conditions, into the loss function. The differential equation solution is obtained by minimizing the loss function. The PINN method is trained using the Adam algorithm, taking the differential equations of motion in structural dynamics as an example. The time sample set generated by the Sobol sequence is used as the input, while the displacement is considered the output. The initial conditions are incorporated into the loss function as penalty terms using automatic differentiation techniques. The effectiveness of the proposed method is validated through the numerical analysis of a two-degree-of-freedom system, a four-story frame structure, and a cantilever beam. The study also explores the impact of the input samples, the activation functions, the weight coefficients of the loss function, and the width and depth of the neural network on the PINN predictions. The results demonstrate that the PINN method effectively solves the differential equations of motion of damped systems. It is a general approach for solving differential equations of motion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
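The recipe in entry 20 (differential-equation residual plus initial-condition penalties, with derivatives taken by automatic differentiation and training by Adam) fits in a short PyTorch sketch. Here it is for a damped single-degree-of-freedom oscillator m u'' + c u' + k u = 0 with u(0) = 1, u'(0) = 0; the constants, network width, sampling, and loss weights are illustrative assumptions (the paper draws collocation times from a Sobol sequence):

```python
import torch
import torch.nn as nn

m, c, k = 1.0, 0.1, 4.0   # assumed mass, damping, stiffness
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t = (torch.rand(256, 1) * 10.0).requires_grad_(True)  # collocation times
t0 = torch.zeros(1, 1, requires_grad=True)            # initial time

for step in range(2000):
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    ddu = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    residual = m * ddu + c * du + k * u                # equation-of-motion residual
    u0 = net(t0)
    du0 = torch.autograd.grad(u0, t0, torch.ones_like(u0), create_graph=True)[0]
    # Physics residual plus initial conditions embedded as penalty terms.
    loss = residual.pow(2).mean() + (u0 - 1.0).pow(2).mean() + du0.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```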
21. Enhancement of Neural Network Performance with the Use of Two Novel Activation Functions: modExp and modExpm.
- Author
- Kalim, Heena, Chug, Anuradha, and Singh, Amit Prakash
- Abstract
The paper introduces two novel activation functions known as modExp and modExpm. The activation functions possess several desirable properties, such as being continuously differentiable, bounded, smooth, and non-monotonic. Our studies have shown that modExp and modExpm consistently outperform ReLU and other activation functions across a range of challenging datasets and complex models. Initially, the experiments involve training and classifying using a multi-layer perceptron (MLP) on benchmark data sets like the Diagnostic Wisconsin Breast Cancer and Iris Flower datasets. Both modExp and modExpm demonstrate impressive performance, with modExp achieving 94.15 and 95.56% and modExpm achieving 94.15 and 95.56%, respectively, when compared to ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. In addition, a series of experiments were carried out on five different depths of deeper neural networks, ranging from five to eight layers, using MNIST datasets. The modExpm activation function demonstrated superior performance accuracy on various neural network configurations, achieving 95.56, 95.43, 94.72, 95.14, and 95.61% on wider 5 layers, slimmer 5 layers, 6 layers, 7 layers, and 8 layers, respectively. The modExp activation function also performed well, achieving the second highest accuracy of 95.42, 94.33, 94.76, 95.06, and 95.37% on the same network configurations, outperforming ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. The results of the statistical feature measures show that both activation functions have the highest mean accuracy, the lowest standard deviation, the lowest Root Mean Squared Error, the lowest variance, and the lowest Mean Squared Error. According to the experiments, both functions converge more quickly than ReLU, which is a significant advantage in neural network learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Research on a Fault Diagnosis Method for Brushless DC Motors Based on CNN-A-BiLSTM.
- Author
- 覃仕明 and 马鹏
- Published
- 2024
- Full Text
- View/download PDF
23. Research on Intelligent Monitoring of Petrochemical Fires Based on Computer Vision.
- Author
- 孙雪婷, 傅钰江, 林堂茂, 王涵, and 陈博
- Published
- 2024
- Full Text
- View/download PDF
24. Adaptive Morphing Activation Function for Neural Networks.
- Author
- Herrera-Alcántara, Oscar and Arellano-Balderas, Salvador
- Subjects
- WAVELETS (Mathematics), FRACTIONAL calculus, MACHINE learning, POLYNOMIALS, ALGORITHMS
- Abstract
A novel morphing activation function is proposed, motivated by the wavelet theory and the use of wavelets as activation functions. Morphing refers to the gradual change of shape to mimic several apparently unrelated activation functions. The shape is controlled by the fractional order derivative, which is a trainable parameter to be optimized in the neural network learning process. Given the morphing activation function, and taking only integer-order derivatives, efficient piecewise polynomial versions of several existing activation functions are obtained. Experiments show that the performance of polynomial versions PolySigmoid, PolySoftplus, PolyGeLU, PolySwish, and PolyMish is similar or better than their counterparts Sigmoid, Softplus, GeLU, Swish, and Mish. Furthermore, it is possible to learn the best shape from the data by optimizing the fractional-order derivative with gradient descent algorithms, leading to the study of a more general formula based on fractional calculus to build and adapt activation functions with properties useful in machine learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. A Lightweight YOLOv8 Pedestrian Detection Algorithm Using Dynamic Activation Functions.
- Author
- 王晓军, 陈高宇, and 李晓航
- Published
- 2024
- Full Text
- View/download PDF
26. Rolling Bearing Fault Diagnosis Based on the ACON Activation Function and Convolutional Neural Networks.
- Author
- 常志远, 刘昌奎, 李志农, and 周世健
- Published
- 2024
- Full Text
- View/download PDF
27. modSwish: a new activation function for neural network.
- Author
- Kalim, Heena, Chug, Anuradha, and Singh, Amit Prakash
- Abstract
Activation functions are extremely important to neural networks since they are responsible for learning the abstract characteristics of the data through nonlinear modification. This paper presents a new activation function, referred to as modSwish. It is continuously differentiable, unbounded above, bounded below, and non-monotonic. Our results demonstrate that modSwish outperforms ReLU on a number of challenging datasets and neural network models. At the beginning of the experiment, neural networks are trained and classified using benchmark data like Diagnostic Wisconsin Breast Cancer and Iris, on which modSwish achieved 93.57% and 95.56% accuracy, respectively. Secondly, experiments were conducted on five distinct neural network depths ranging from five to eight layers over MNIST datasets. The modSwish activation function obtained 95.57%, 95.29%, 94.93%, 94.69%, and 95.03% accuracy on 8-layer, 7-layer, 6-layer, thinner 5-layer, and wider 5-layer neural networks, respectively. Finally, experiments were conducted on two distinct convolutional neural networks with two and four convolution layers over the CIFAR-10 dataset. The modSwish activation function obtained 60.04% and 69.22% accuracy on the two-convolution-layer and four-convolution-layer models, respectively. Statistical feature measurements demonstrate that modSwish has the best mean accuracy, lowest Root Mean Squared Error, lowest standard deviation, lowest variance, and lowest Mean Squared Error. The study indicated that modSwish has faster convergence compared to ReLU, making it a valuable factor in deep learning. The results of the experiments suggest that modSwish can be a promising substitute for ReLU, leading to better performance in neural network models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
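Entry 27 does not state modSwish's formula. For orientation, the standard Swish it presumably modifies shares the listed properties (continuously differentiable, unbounded above, bounded below, non-monotonic):

```python
import torch

def swish(x, beta=1.0):
    """Standard Swish, f(x) = x * sigmoid(beta * x). This is the baseline,
    not modSwish itself, whose exact formula the abstract does not give."""
    return x * torch.sigmoid(beta * x)
```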
28. Rogue wave, lump, kink, periodic and breather-like solutions of the (2+1)-dimensional KdV equation.
- Author
- Zheng, Wanguang, Liu, Yaqing, and Chu, Jingyi
- Subjects
- ROGUE waves, ARTIFICIAL neural networks, WATER waves, WATER depth
- Abstract
In this paper, the (2+1)-dimensional KdV equation is investigated by using the bilinear neural network method (BNNM). We construct six neural network models, extending beyond single hidden layer models to create deeper and broader network structures (e.g. [3-3-1], [3-4-1], [3-1-3-1], [3-4-1-1], [3-2-2-1] and [3-2-3-1-1] models). Introducing specific activation functions into the neural network model enables the generation of various test functions, resulting in novel solutions for equations that include rogue wave solutions, lump-kink solutions, periodic soliton solution, breather-like solutions and lump solutions. The physical properties of these novel solutions are vividly depicted through three-dimensional plots, density plots, and curve plots. The findings contribute to a better understanding of the propagation behavior of shallow water waves. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Trish: an efficient activation function for CNN models and analysis of its effectiveness with optimizers in diagnosing glaucoma.
- Author
- Közkurt, Cemil, Diker, Aykut, Elen, Abdullah, Kılıçarslan, Serhat, Dönmez, Emrah, and Demir, Fahrettin Burak
- Subjects
- GLAUCOMA, CONVOLUTIONAL neural networks, SYMPTOMS, VISION disorders, DIAGNOSIS
- Abstract
Glaucoma is an eye disease that progresses over time without showing symptoms at an early age and can result in vision loss at advanced ages. The most critical issue in this disease is to detect its symptoms as early as possible, and various machine learning approaches are being investigated to support experts in this diagnosis. The activation function plays a pivotal role in deep learning models, as it introduces nonlinearity, enabling neural networks to learn complex patterns and relationships within data, thus facilitating accurate predictions and effective feature representations. This study focuses on developing an activation function for CNN architectures, evaluated on glaucoma disease datasets. The developed function (Trish) was compared with the ReLU, LReLU, Mish, Swish, Smish, and Logish activation functions using the SGD, Adam, RMSProp, AdaDelta, AdaGrad, Adamax, and Nadam optimizers in CNN architectures. Datasets consisting of retinal fundus images named ACRIMA and HRF were used in the experiments; these datasets are widely known and currently used in the literature. To strengthen test validity, the proposed function was also tested on the CIFAR-10 dataset. The study obtained a validation accuracy of 97.22%, a performance level that is significant for the detection of glaucoma. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. High-Performance Binocular Disparity Prediction Algorithm for Edge Computing.
- Author
- Cheng, Yuxi, Song, Yang, Liu, Yi, Zhang, Hui, and Liu, Feng
- Subjects
- EDGE computing, DATA compression, DISTRIBUTION costs, COMPUTATIONAL complexity, FORECASTING, PREDICTION algorithms
- Abstract
End-to-end disparity estimation algorithms based on cost volumes must be structurally adapted when deployed on edge neural network accelerators, and accuracy must be maintained under the constraints of the supported operators. This paper therefore proposes a novel disparity calculation algorithm that uses low-rank approximation to replace 3D convolution and transposed 3D convolution, WReLU to reduce the data compression caused by the activation function, and unimodal cost volume filtering together with a confidence estimation network to regularize the cost volume. It alleviates the problem of the disparity-matching cost distribution deviating from the true distribution and greatly reduces the computational complexity and parameter count of the algorithm while improving accuracy. Experimental results show that compared with a typical disparity estimation network, the absolute error of the proposed algorithm is reduced by 38.3%, the three-pixel error is reduced to 1.41%, and the number of parameters is reduced by 67.3%. The calculation accuracy is better than that of other algorithms, it is easier to deploy, and it has strong structural adaptability and better practicability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
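The low-rank replacement of 3D convolutions described in entry 30 can be illustrated by factoring a k×k×k kernel into three axis-wise convolutions; the exact factorization in the paper may differ, and the channel counts here are assumptions:

```python
import torch
import torch.nn as nn

def lowrank_conv3d(cin, cout, k=3):
    """Separable stand-in for a full k x k x k Conv3d: three axis-wise
    convolutions cost O(3k) weights per position instead of O(k^3)."""
    p = k // 2
    return nn.Sequential(
        nn.Conv3d(cin,  cout, kernel_size=(k, 1, 1), padding=(p, 0, 0)),
        nn.Conv3d(cout, cout, kernel_size=(1, k, 1), padding=(0, p, 0)),
        nn.Conv3d(cout, cout, kernel_size=(1, 1, k), padding=(0, 0, p)),
    )

x = torch.randn(1, 8, 16, 32, 32)      # (N, C, D, H, W) cost-volume-like input
print(lowrank_conv3d(8, 16)(x).shape)  # torch.Size([1, 16, 16, 32, 32])
```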
31. Pedestrian Detection in Road Scenes Based on Detection-Enhanced YOLOv3-tiny.
- Author
- 田亮, 金积德, and 郑庆祥
- Subjects
- FEATURE extraction, NETWORK performance, TRAFFIC accidents, CONVOLUTIONAL neural networks, DEEP learning, PEDESTRIANS
- Abstract
To provide drivers with real-time and accurate pedestrian information and reduce traffic accidents, a detection-enhanced YOLOv3-tiny (DOEYT) pedestrian detection algorithm is proposed. A robust feature extraction network was established, in which asymmetric max-pooling is used for downsampling to prevent the loss of lateral pedestrian features caused by the enlarged receptive field. Hardswish is employed as the activation function for the convolutional layers to optimize network performance, and the global context (GC) self-attention mechanism is used to capture holistic feature information. In the classification and regression network, a three-scale detection strategy is adopted to improve the accuracy of small-scale pedestrian detection, and the k-means++ algorithm is used to regenerate the dataset anchor boxes to speed up network convergence. A pedestrian detection dataset was constructed and divided into training and testing sets to evaluate DOEYT's performance. The results show that asymmetric max-pooling, the Hardswish function, and the GC self-attention mechanism increase AP by 14.4%, 7.9%, and 10.8%, respectively. On the testing set, DOEYT achieves an average precision of 91.2% and a detection speed of 103 frames per second, demonstrating that the proposed algorithm can quickly and accurately detect pedestrians and thereby reduce the risk of traffic accidents. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
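Hardswish, adopted in entry 31 for the convolutional layers, is the standard piecewise-linear approximation of Swish:

```python
import torch
import torch.nn.functional as F

def hardswish(x):
    """Hardswish: x * relu6(x + 3) / 6, a cheap piecewise-linear
    approximation of Swish well suited to embedded hardware."""
    return x * F.relu6(x + 3.0) / 6.0

# PyTorch also provides this built in as torch.nn.Hardswish.
```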
32. Computational analysis of electrode structure and configuration for efficient and localized neural stimulation.
- Author
- Choi, Ji Hoon, Moon, Jeongju, Park, Young Hoon, and Eom, Kyungsik
- Abstract
Neuromodulation techniques using electric stimulation are widely applied in neural prostheses, therapy, and neuroscience research. Various stimulation techniques have been developed to enhance stimulation efficiency and to precisely target specific areas of the brain, which involves optimizing the geometry and configuration of the electrode, the stimulation pulse type and shape, and the electrode materials. Although the effects of electrode shape, size, and configuration on the performance of neural stimulation have individually been characterized, to date there is no integrative investigation of how these factors together affect neural stimulation. In this study, we computationally modeled various types of electrodes with varying shapes, sizes, and configurations and simulated the electric field to calculate the activation function. The electrode geometry is then integratively assessed in terms of stimulation efficiency and stimulation focality. We found that stimulation efficiency is enhanced by making the electrode sharper and smaller. A center-to-vertex distance exceeding 100 µm shows enhanced stimulation efficiency in the bipolar configuration. Additionally, a separation distance of less than 1 mm between the reference and stimulation electrodes exhibits higher stimulation efficiency compared to the monopolar configuration. The region of neurons to be stimulated can also be modified: sharper electrodes can activate neurons more locally, and in most cases, except for the rectangular electrode shape with a center-to-vertex distance smaller than 100 µm, the bipolar configuration can stimulate neurons more locally than the monopolar configuration. These findings shed light on the optimal selection of neural electrodes depending on the target application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
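Note that in entry 32 "activation function" carries its neurostimulation meaning rather than the neural-network one. The classical (Rattay) activating function, which such simulations compute from the electric field, is driven by the second spatial derivative of the extracellular potential along the fiber; the paper's exact formulation is not reproduced here:

```latex
% Rattay's activating function for a fiber aligned with the x-axis:
% depolarization is driven by the second spatial derivative (discretely,
% the second difference) of the extracellular potential V_e along the fiber,
%   f(x) \propto \frac{\partial^2 V_e}{\partial x^2}.
```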
33. Intelligent Beta-Based Polynomial Approximation of Activation Functions for a Robust Data Encryption System.
- Author
- Issaoui, Hanen, ElAdel, Asma, and Zaied, Mourad
- Subjects
- CONVOLUTIONAL neural networks, MACHINE learning, POLYNOMIAL approximation, BETA functions, DATA encryption
- Abstract
Deep neural network-based machine learning algorithms are widely used across different sectors and produce excellent results. However, their use requires access to private, often confidential and sensitive, information (financial, medical, etc.), which demands precise measures and particular attention to data security and confidentiality. In this paper, we propose a new solution to this problem by running a proposed Convolutional Neural Network (CNN) model on encrypted data within the constraints of homomorphic encryption techniques. Specifically, we focus on approximating the activation functions ReLU, Sigmoid, and Tanh, which are the key functions of CNNs. We start by developing new low-degree polynomials, which are essential for successful Homomorphic Encryption (HE). The activation functions are then replaced by these polynomials, which are based on the Beta function and its primitive. To ensure that the data remain within a given range, the next step builds a new CNN model using batch normalization. Finally, our methodology and the effectiveness of the proposed strategy are evaluated on MNIST and CIFAR-10. The experimental results support the proposed approach's efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
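Homomorphic encryption can evaluate only additions and multiplications, which is why entry 33 replaces ReLU, Sigmoid, and Tanh with low-degree polynomials. The paper derives its polynomials from the Beta function and its primitive; as a generic illustration of the idea, here is a degree-3 least-squares fit to sigmoid on an interval (the interval and degree are assumptions):

```python
import numpy as np

# Fit a degree-3 polynomial to sigmoid on [-4, 4]; inside an HE scheme,
# only this polynomial (adds and multiplies) would be evaluated.
x = np.linspace(-4.0, 4.0, 1000)
sigmoid = 1.0 / (1.0 + np.exp(-x))
coeffs = np.polyfit(x, sigmoid, deg=3)          # highest degree first
poly = np.polyval(coeffs, x)
print("coefficients:", np.round(coeffs, 4))
print("max abs error on [-4, 4]:", np.abs(poly - sigmoid).max())
```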
34. An Improved Deep Sparse Autoencoder Driven Network Intrusion Detection System (IDSAE-NIDS).
- Author
- Mazadu, Jesse Ismaila and Jegede, Abayomi
- Subjects
- COMPUTER networks, USER experience, COMPUTER users, MALWARE, ALGORITHMS, INTRUSION detection systems (Computer security), SYSTEM downtime
- Abstract
Computer network users experience persistent attacks due to vulnerabilities in systems and networks. An intrusion detection system (IDS) monitors the flow of data in the network and detects or raises alerts on abnormal traffic. No system is perfect, and breakdowns or downtime occur from time to time; an intrusion detection system may likewise make errors when screening incoming traffic for malware. Hence the need for a system that detects network intrusions with minimal error. This paper proposes an improved deep sparse autoencoder-driven network intrusion detection system (IDSAE-NIDS) that addresses the interpretability issues of the L2 regularization technique employed in other works. The proposed IDSAE-NIDS model was trained using a mini-batch gradient descent algorithm, the L1 regularization technique, and the ReLU activation function to achieve better model performance. Experimental results based on the NSL-KDD dataset show that our approach provides significant performance improvements over other deep sparse autoencoder NIDSs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
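The ingredients named in entry 34 (a sparse autoencoder with ReLU units, an L1 activity penalty in place of L2, and mini-batch gradient descent) sketch out as follows in Keras; the layer sizes are assumptions, and the 122-feature input follows the common one-hot encoding of NSL-KDD:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Sparse autoencoder: L1 activity penalties push hidden activations
# toward zero, giving the sparse codes the IDSAE approach relies on.
inp = layers.Input(shape=(122,))
enc = layers.Dense(64, activation="relu",
                   activity_regularizer=regularizers.l1(1e-5))(inp)
enc = layers.Dense(32, activation="relu",
                   activity_regularizer=regularizers.l1(1e-5))(enc)
dec = layers.Dense(64, activation="relu")(enc)
out = layers.Dense(122, activation="sigmoid")(dec)

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                    loss="mse")  # fit(X, X, batch_size=...) gives mini-batch SGD
```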
35. Research on Fabric Defect Detection Algorithm Based on Lightweight YOLOv7-Tiny
- Author
- Tang Li, Mei Shunqi, Shi Yishan, Zhou Shi, Zheng Quan, Hongkai Jiang, Xu Qiao, and Zhang Zhiming
- Subjects
- Neural network, fabric defect detection, activation function, Ghost module, upsampling, clustering algorithm, Science, Textile bleaching, dyeing, printing, etc., TP890-933
- Abstract
Current advanced neural network models are expanding in size and complexity to achieve improved detection accuracy. This study designs a lightweight fabric defect detection algorithm based on YOLOv7-tiny, called YOLOv7-tiny-MGCK. Its objectives are to improve the performance of fabric defect detection against complex backgrounds and to strike a balance between the algorithm's lightweight nature and its accuracy. The algorithm utilizes the Mish activation function, known for its superior nonlinear capability and smoother curve, enabling the neural network to manage more complex challenges. The Ghost convolution module is also incorporated to reduce computation and model parameters. The lightweight upsampling technique CARAFE facilitates the flexible extraction of deep features, coupled with their integration with shallow features. In addition, an improved K-Means clustering algorithm, KMMP, is employed to select appropriate anchor boxes for fabric defects. The experimental results show a reduction in the number of parameters by 45.5% and in computational volume by 41.0%, along with increases in precision by 3.9%, recall by 7.0%, and mAP by 3.0%. These results indicate that the improved algorithm achieves a more effective balance between detection performance and the requirement for a lightweight solution.
- Published
- 2024
- Full Text
- View/download PDF
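Mish, the activation adopted in entry 35's YOLOv7-tiny-MGCK, has the standard closed form x·tanh(softplus(x)):

```python
import torch
import torch.nn.functional as F

def mish(x):
    """Mish: x * tanh(softplus(x)). Smooth and non-monotonic, with a
    small negative regime that keeps gradients flowing where ReLU
    would output exactly zero."""
    return x * torch.tanh(F.softplus(x))

# PyTorch also ships this built in as torch.nn.Mish.
```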
36. Hybrid Activation Functions in Deep Convolutional Neural Networks for Maize and Paddy Leaf Disease Recognition: A Transfer Learning Approach
- Author
- Sunilkumar, H. R., Poornima, K. M., Thirunavukkarasu, I., editor, and Kumar, Roshan, editor
- Published
- 2024
- Full Text
- View/download PDF
37. Enhancing Facial Emotion Level Recognition: A CNN-Based Approach to Balancing Data
- Author
- Kumar, T. A., Aashrith, M., Vineeth, K. S., Subhash, B., Reddy, S. A., Alam, Junaid, Maity, Soumyadev, Goar, Vishal, editor, Kuri, Manoj, editor, Kumar, Rajesh, editor, and Senjyu, Tomonobu, editor
- Published
- 2024
- Full Text
- View/download PDF
38. Adaptive Smooth Activation Function for Improved Organ Segmentation and Disease Diagnosis
- Author
- Biswas, Koushik, Jha, Debesh, Tomar, Nikhil Kumar, Karri, Meghana, Reza, Amit, Durak, Gorkem, Medetalibeyoglu, Alpay, Antalek, Matthew, Velichko, Yury, Ladner, Daniela, Borhani, Amir, Bagci, Ulas, Linguraru, Marius George, editor, Dou, Qi, editor, Feragen, Aasa, editor, Giannarou, Stamatia, editor, Glocker, Ben, editor, Lekadir, Karim, editor, and Schnabel, Julia A., editor
- Published
- 2024
- Full Text
- View/download PDF
39. License Number Plate Recognition Using Convolution Neural Network
- Author
- Arya, Mithlesh, Sharma, Reena, Gaur, Sonam, Agrawal, Jitendra, editor, Shukla, Rajesh K., editor, Sharma, Sanjeev, editor, and Shieh, Chin-Shiuh, editor
- Published
- 2024
- Full Text
- View/download PDF
40. Extreme Learning Machine – A New Machine Learning Paradigm
- Author
- Perfilieva, Irina, Kahraman, Cengiz, editor, Cevik Onar, Sezi, editor, Cebi, Selcuk, editor, Oztaysi, Basar, editor, Tolga, A. Cagrı, editor, and Ucal Sari, Irem, editor
- Published
- 2024
- Full Text
- View/download PDF
41. The Outcomes of Generative AI Are Exactly the Nash Equilibria of a Non-potential Game
- Author
- Djehiche, Boualem, Tembine, Hamidou, Ngoc Thach, Nguyen, editor, Trung, Nguyen Duc, editor, Ha, Doan Thanh, editor, and Kreinovich, Vladik, editor
- Published
- 2024
- Full Text
- View/download PDF
42. Analysis of the Effectiveness of Neural Networks with Different Configurations
- Author
- Degtyareva, Ksenia, Borodulin, Aleksey, Gantimurov, Andrei, Kukartsev, Vladislav, Mikhalev, Anton, Yang, Xin-She, editor, Sherratt, Simon, editor, Dey, Nilanjan, editor, and Joshi, Amit, editor
- Published
- 2024
- Full Text
- View/download PDF
43. An Application of Deep Learning Using Leaky Rectified Linear Unit and Hyperbolic Tangent in Non-destructive Testing
- Author
- Tekwani, Bharti, Gupta, Archana Bohra, Tiwari, Ritu, editor, Saraswat, Mukesh, editor, and Pavone, Mario, editor
- Published
- 2024
- Full Text
- View/download PDF
44. A Modified Hopfield Model with Adjustable Activation Function for Buridan’s Assay
- Author
- Liu, Xingjian, Du, Chuangyi, Tao, Lingyi, Le, Xinyi, editor, and Zhang, Zhijun, editor
- Published
- 2024
- Full Text
- View/download PDF
45. End-to-End Image Compression Through Machine Semantics
- Author
- Liu, Jianran, Zhang, Chang, Ji, Wen, Zhai, Guangtao, editor, Zhou, Jun, editor, Ye, Long, editor, Yang, Hua, editor, An, Ping, editor, and Yang, Xiaokang, editor
- Published
- 2024
- Full Text
- View/download PDF
46. Evaluation Method of College Students’ Education Based on Artificial Neural Networks
- Author
- Sheng, Qiufang, Pichappan, Pit, editor, Rodriguez Jorge, Ricardo, editor, and Chung, Yao-Liang, editor
- Published
- 2024
- Full Text
- View/download PDF
47. Human Behavior Recognition Algorithm Based on HD-C3D Model
- Author
- Xie, Zhihao, Yu, Lei, Wang, Qi, Ma, Ziji, Wu, Celimuge, editor, Chen, Xianfu, editor, Feng, Jie, editor, and Wu, Zhen, editor
- Published
- 2024
- Full Text
- View/download PDF
48. Integrated Tomato Cultivation Using Backpropagation Neural Network on Bipolar Fuzzy Sets
- Author
- Anita Shanthi, S., Preethi, R., Leung, Ho-Hon, editor, Sivaraj, R., editor, and Kamalov, Firuz, editor
- Published
- 2024
- Full Text
- View/download PDF
49. A Comparative Analysis on Various Modified Deep Convolution Neural Networks on Maize Plant Leaf Disease Classification
- Author
- Kumar, H. R. Sunil, Poornima, K. M., Shrivastava, Vivek, editor, and Bansal, Jagdish Chand, editor
- Published
- 2024
- Full Text
- View/download PDF
50. Hardware Implementation of Three-Layered Perceptron Using FPGA
- Author
- Tiwari, Rishabh, Bhingarde, Abhishek, Kulkarni, Atharva, Kulkarni, Rahul, Joshi, Manisha, Charniya, Nadir, Shrivastava, Vivek, editor, and Bansal, Jagdish Chand, editor
- Published
- 2024
- Full Text
- View/download PDF