18 results for "Kalganova, T."
Search Results
2. Operational modal analysis and prediction of remaining useful life for rotating machinery
- Author
-
Sternharz, German, Kalganova, T., and Mares, C.
- Subjects
Condition Monitoring, Structural Dynamics, Computational Mechanics, Machine Learning, Artificial Intelligence
- Abstract
The significance of rotating machinery spans areas from household items to vital industry sectors, such as aerospace, automotive, railway, sea transport, resource extraction, and manufacturing. Hence, our technologised society depends on efficient and reliable operation of rotating machinery. To contribute to this aim, this thesis leverages quantities measurable during machine operation for structural-mechanical evaluation, employing Operational Modal Analysis (OMA) and the prediction of Remaining Useful Life (RUL). Modal parameters determined by OMA are central to the design, testing, and validation of rotating machinery. This thesis introduces the first open parametric simulation dataset of rotating machinery during an acceleration run. As there is a lack of similar open datasets suitable for OMA, it lays a foundation for improved reproducibility and comparability of future research. Based on this, the Averaged Order-Based Modal Analysis (AOBMA) method is developed. The novel addition of scaling and weighted averaging of individual machine orders in AOBMA alleviates the analysis effort of the existing Order-Based Modal Analysis (OBMA) method by providing a unified set of modal parameters with higher accuracy. AOBMA showed a lower mean absolute relative error of 0.03 for damping ratio estimation across the compared modes, while OBMA provided an error of 0.32 depending on the processed order. At excitation with high harmonic contributions, AOBMA also resulted in the highest number of accurately identified modes among the compared methods. At a harmonic ratio of 0.8, for example, AOBMA identified an average of 11.9 modes per estimation, while OBMA and baseline OMA followed with 9.5 and 9 modes, respectively. Moreover, this is the first study that systematically evaluates the impact of excitation conditions on the compared methods, and it finds an advantage of OBMA and AOBMA over traditional OMA regarding mode shape estimation accuracy. While OMA can be used to evaluate significant structural changes, Machine Learning (ML) methods have seen substantially greater success in condition monitoring, including RUL prediction. However, as these methods often require large amounts of time- and cost-intensive training data, a novel data-efficient RUL prediction methodology is introduced, taking advantage of distinct healthy and faulty condition data. When the number of training sequences from an open dataset is reduced to 5%, an average prediction Root Mean Square Error (RMSE) of 24.9 operation cycles is achieved, outperforming the baseline method with an RMSE of 28.1. Motivated by environmental considerations, the impact of data reduction on the training duration of several method variants is quantified. When the full training set is utilised, the most resource-saving variant of the proposed approach achieves an average training duration of 8.9% of that of the baseline method.
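For illustration only, the sketch below shows the general idea of combining per-order modal estimates into one unified set by weighted averaging, in the spirit of AOBMA as summarised above; the weighting by per-order excitation energy and the function names are assumptions, not the thesis implementation.

```python
# Minimal sketch (not the thesis code): weighted averaging of per-order modal
# estimates into one unified set. Weighting by per-order energy is assumed.
import numpy as np

def aggregate_modal_estimates(freqs, dampings, order_energy):
    """freqs, dampings: arrays of shape (n_orders, n_modes) with per-order
    estimates; order_energy: array of shape (n_orders,) used as weights."""
    w = np.asarray(order_energy, dtype=float)
    w = w / w.sum()                                  # normalise weights
    f_avg = np.average(freqs, axis=0, weights=w)     # unified natural frequencies
    d_avg = np.average(dampings, axis=0, weights=w)  # unified damping ratios
    return f_avg, d_avg

# Example: three machine orders, two modes each
f = np.array([[10.1, 25.3], [10.0, 25.1], [9.9, 25.6]])
z = np.array([[0.021, 0.034], [0.019, 0.030], [0.024, 0.037]])
print(aggregate_modal_estimates(f, z, order_energy=[0.5, 0.3, 0.2]))
```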
- Published
- 2023
3. HEDCOS - High Efficiency Dynamic Combinatorial Optimization System - using Ant Colony Optimization algorithm
- Author
-
Skackauskas, Jonas, Kalganova, T., and Dear, I.
- Subjects
Ant Colony Optimization (ACO), Dynamic Combinatorial Optimization, Dynamic Multi-dimensional Knapsack Problem (DMKP), Herder Ants, Event-Triggered
- Abstract
Dynamic combinatorial optimization is gaining popularity among industrial practitioners due to the ever-increasing scale of their optimization problems and their efforts to remain competitive by solving them. Larger optimization problems are not only more computationally intense to optimize but also carry more uncertainty in the problem inputs. If some aspects of the problem are subject to dynamic change, it becomes a Dynamic Optimization Problem (DOP). In this thesis, a High Efficiency Dynamic Combinatorial Optimization System is built to solve challenging DOPs with high-quality solutions. The system is created using the Ant Colony Optimization (ACO) baseline algorithm with three novel developments. First, an extension method for the ACO algorithm, called Dynamic Impact, is introduced. Dynamic Impact is designed to improve convergence and solution quality when solving challenging optimization problems with a non-linear relationship between resource consumption and fitness. This proposed method is tested against the real-world Microchip Manufacturing Plant Production Floor Optimization (MMPPFO) problem and the theoretical benchmark Multidimensional Knapsack Problem (MKP). Second, a non-stochastic dataset generation method is introduced to solve the replicability problem in dynamic optimization research. This method uses a static benchmark dataset as a starting point and source of entropy to generate a sequence of dynamic states. Using this method, 1405 Dynamic Multidimensional Knapsack Problem (DMKP) benchmark datasets were generated and published, using well-known static MKP benchmark instances as the initial state. Third, a nature-inspired discrete dynamic optimization strategy for ACO is introduced by modelling real-world ants' symbiotic relationship with aphids. The ACO with Aphids strategy is designed to solve discrete-domain DOPs with event-triggered discrete dynamism. The strategy improves inter-state convergence by allowing better solution recovery after dynamic environment changes. Aphids mediate information from previous dynamic optimization states to maximize initial solution quality and minimize the impact on convergence speed. This strategy is tested on the DMKP against identical ACO implementations using Full-Restart and Pheromone-Sharing strategies, with all other variables isolated. Overall, the Dynamic Impact and ACO with Aphids developments are compounding. Using Dynamic Impact on single-objective optimization of MMPPFO, the fitness value was improved by 33.2% over the ACO algorithm without Dynamic Impact. MKP benchmark instances of low complexity have been solved to a 100% success rate even when a high degree of solution sparseness is observed, and large-complexity instances have shown the average gap improved by a factor of 4.26. ACO with Aphids has also demonstrated superior performance over the Pheromone-Sharing strategy in every test, with the average gap reduced by 29.2%, for a total compounded dynamic optimization performance improvement of 6.02 times. ACO with Aphids has also outperformed the Full-Restart strategy for the large dataset groups, with the overall average gap reduced by 52.5%, for a total compounded dynamic optimization performance improvement of 8.99 times.
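As background to the ACO developments above, the following sketch shows the standard random-proportional item selection a baseline ACO applies to a knapsack-style problem; it does not reproduce Dynamic Impact or the Aphids strategy, and the pheromone and heuristic values are illustrative.

```python
# Illustrative baseline only: standard ACO random-proportional selection for a
# knapsack-style problem; pheromone (tau) and heuristic (eta) values are toy data.
import random

def select_item(candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    """candidates: feasible item ids; pheromone/heuristic: dicts id -> value."""
    weights = [(pheromone[i] ** alpha) * (heuristic[i] ** beta) for i in candidates]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for item, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return item
    return candidates[-1]

# Example: heuristic = value-to-weight ratio of each item
tau = {0: 1.0, 1: 1.2, 2: 0.8}
eta = {0: 2.5, 1: 1.0, 2: 3.0}
print(select_item([0, 1, 2], tau, eta))
```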
- Published
- 2022
4. High performance disturbance observer based control system design for permanent magnet synchronous AC machine applications
- Author
-
Sarsembayev, Bayandy, Kalganova, T., and Do, T. D.
- Subjects
Disturbance feedforward compensation, Disturbance estimation, Servomotor control, Wind energy conversion system, Maximum power point tracking
- Abstract
Electrical machines are among the main workhorses of industry and serve a wide variety of applications. Machine drive control design involves many technical issues for efficient and robust operation. Over several decades, the Permanent Magnet Synchronous Motor (PMSM) has increasingly been preferred for industrial applications over its counterpart, the Squirrel Cage Induction Motor (SCIM) drive, because of its higher efficiency, power density, and torque-to-inertia ratio. Although PMSM drives are considered the drives of the future, there are still technical challenges and issues related to PMSM control. Many studies have been devoted to PMSM control in the past, but there remain open research areas that keep drawing researchers' interest back to PMSM drive control. One approach that may facilitate better performance, higher efficiency, and robust and reliable operation of the control system is disturbance observer-based control (DOBC) with linear and nonlinear output feedback control for PM synchronous machine applications. DOBC is adopted due to its ability to reject external and internal disturbances while improving tracking performance in the variable-speed wind energy conversion system (WECS) to maximize power extraction. A high-order disturbance observer (HODO) is utilized to estimate the aerodynamic-torque-based wind speed without the use of a traditional anemometer, which reduces the overall cost and improves the reliability of the whole system. This method has also been designed to improve the angular shaft speed tracking of the PMSM system under load torque disturbance and speed variations. Model-based linear and nonlinear feedback control are used in the proposed control systems. Sliding mode control (SMC) with a switching output feedback control law, and integral SMC with linear feedback and state-dependent Riccati equation (SDRE) based approaches, have been designed for the systems. The SDRE control accounts for the nonlinear multivariable structure of the WECS and is approximated with Taylor series expansion terms. The chattering inherited from SMC is eliminated by a continuous approximation technique. The sliding mode is guaranteed by eliminating the reaching mode in the proposed integral SMC. The model-free cascaded linear feedback control system based on proportional-integral (PI) controllers uses a back-calculation anti-windup scheme. The proposed speed controllers are synthesized with the HODO to compensate for external disturbance, model uncertainty, noise, and modelling errors. Moreover, a servomechanism-based SDRE control, a near-optimal control system, is designed to suppress model uncertainty and noise without the use of disturbance observers. The proposed control systems for PMSM speed regulation have demonstrated a significant improvement in angular shaft speed-tracking performance during transients. Their performance has been tested under speed and load torque variations and model uncertainty. For example, the HODO-based SMC with switching output feedback control law (SOFCL) has demonstrated an improvement of more than 78% over the PI-PI control system of the PMSM. The performance of the HODO-based integral SMC with SDRE nonlinear feedback is improved by 80.5% under external disturbance, model uncertainty, and noise compared with integral SMC with linear feedback in the WECS. The HODO-based SDRE control with servomechanism has shown an 80.2% improvement in mean absolute percentage error under disturbances compared with integral SMC with linear feedback in the WECS. The PMSM speed-tracking performance of the proposed HODO-based discrete-time PI-PI control system with the back-calculation anti-windup scheme is improved by 87.29% and 90.2% in the speed-command and load-torque-disturbance variation scenarios, respectively. Simulations for testing the proposed control systems of the PMSM and WECS have been implemented in the Matlab/Simulink environment. The PMSM speed control experimental results have been obtained with a Lucas-Nuelle DSP-based rapid control prototyping kit.
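The sketch below illustrates the basic disturbance-observer-plus-feedforward idea discussed above on an assumed first-order mechanical model (J·dω/dt = kt·i − d); it is not the high-order disturbance observer (HODO) or the controllers of the thesis, and all gains are placeholders.

```python
# Sketch only: PI speed loop with a simple lumped-disturbance observer and
# feedforward compensation. Assumed model: J*dw/dt = kt*i - d.
class DisturbanceObserverPI:
    def __init__(self, J, kt, dt, kp=0.5, ki=5.0, l_gain=50.0):
        self.J, self.kt, self.dt = J, kt, dt
        self.kp, self.ki, self.l = kp, ki, l_gain
        self.int_e, self.d_hat, self.w_prev = 0.0, 0.0, 0.0

    def step(self, w_ref, w, i_meas):
        # observer: drive d_hat towards (kt*i - J*dw/dt), the unmodelled torque
        dw = (w - self.w_prev) / self.dt
        self.d_hat += self.l * (self.kt * i_meas - self.J * dw - self.d_hat) * self.dt
        self.w_prev = w
        # PI control plus feedforward compensation of the estimated disturbance
        e = w_ref - w
        self.int_e += e * self.dt
        return self.kp * e + self.ki * self.int_e + self.d_hat / self.kt
```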
- Published
- 2022
5. Fast embedding for image classification & retrieval and its application to the hostel industry
- Author
-
Ammatmanee, Chanattra, Gan, L., and Kalganova, T.
- Subjects
Machine learning, Deep learning, Unsupervised learning, Convolutional neural network, Artificial intelligence
- Abstract
Content-based image classification and retrieval are automatic processes that take an unseen input image and extract features representing it. For the classification task, this mathematically measured input is then categorized according to criteria established on the server, and the output is returned as a result. For the retrieval task, the extracted features of an unseen query image are sent to the server to search for the images most visually similar to the given image, and these images are retrieved as a result. Although image features can be represented by classical descriptors, artificial intelligence-based features, and Convolutional Neural Networks (CNN) in particular, have become powerful tools in the field. Nonetheless, high-dimensional CNN features have been a challenge, in particular for applications on mobile or Internet of Things devices. Therefore, in this thesis, several fast embeddings are explored and proposed to overcome the constraints of low memory, bandwidth, and power. Furthermore, the first hostel image database is created, comprising three datasets: a hostel image dataset containing 13,908 interior and exterior images of hostels across the world, and the Hostels-900 and Hostels-2K datasets containing 972 images and 2,380 images, respectively, of 20 London hostel buildings. The results demonstrate that the proposed fast embeddings, such as the application of the GHM-Rand operator, the GHM-Fix operator, and binary feature vectors, are able to outperform or give results competitive with state-of-the-art methods using far fewer computational resources. Additionally, the findings from a ten-year literature review of CBIR studies in the tourism industry provide a picture of the relevant research activity in the past decade, which is beneficial not only to the hostel industry and tourism sector but also to the computer science and engineering research communities for potential real-life applications of existing and developing technologies in the field.
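A minimal sketch of the binary-feature retrieval idea mentioned above (not the GHM-Rand or GHM-Fix operators themselves): threshold a real-valued embedding into a binary code and rank gallery images by Hamming distance. The random vectors stand in for CNN features.

```python
# Sketch: compact binary codes for image retrieval under low-memory constraints.
import numpy as np

def binarize(embedding):
    """Turn a real-valued feature vector into a compact binary code."""
    return (embedding > np.median(embedding)).astype(np.uint8)

def hamming_search(query_code, gallery_codes, top_k=5):
    dists = np.count_nonzero(gallery_codes != query_code, axis=1)
    return np.argsort(dists)[:top_k]

# Example with random stand-ins for CNN features of hostel images
rng = np.random.default_rng(0)
gallery = np.stack([binarize(v) for v in rng.normal(size=(100, 512))])
query = binarize(rng.normal(size=512))
print(hamming_search(query, gallery))
```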
- Published
- 2022
6. Homogeneous vector capsules and their application to sufficient and complete data
- Author
-
Byerly, Adam D., Kalganova, T., and Dolins, S.
- Subjects
Convolutional neural networks, Capsule networks, Dimensionality reduction
- Abstract
Capsules (vector-valued neurons) have recently become a more active area of research in neural networks. However, existing formulations have several drawbacks, including the large number of trainable parameters they require and the reliance on routing mechanisms between layers of capsules. The primary aim of this project is to demonstrate the benefits of a new formulation of capsules, called Homogeneous Vector Capsules (HVCs), that overcomes these drawbacks. Using HVCs, new state-of-the-art accuracies for the MNIST dataset are established for multiple individual models as well as multiple ensembles. This work additionally presents a dataset consisting of high-resolution images of 13 micro-PCBs captured in various rotations and perspectives relative to the camera, with each sample labeled for PCB type, rotation category, and perspective category. Experiments performed and elucidated in this work examine classification accuracy for rotations and perspectives that were not trained on, as well as the ability to artificially generate missing rotations and perspectives during training. The results of these experiments show that using HVCs is superior to using fully connected layers. This work also shows that certain training samples are more informative of class membership than others. These samples can be identified prior to training by analyzing their position in a reduced-dimensional space relative to the class centroids in that space. In addition, definitions and calculations for both class density and dataset completeness, based on the distribution of data in the reduced-dimensional space, are put forth. Experimentation using the dataset completeness calculation shows that datasets meeting a certain completeness threshold can be trained on a subset of the total dataset, selected based on each class's density, while improving upon or maintaining validation accuracy.
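The following sketch illustrates, under assumed details, the pre-training sample analysis described above: project the data into a reduced-dimensional space and measure each sample's distance to its own class centroid; larger distances suggest less typical samples. The projection method (PCA) and dimensionality are assumptions, not the thesis procedure.

```python
# Sketch: rank training samples by distance to their class centroid in a
# PCA-reduced space (an assumed stand-in for the thesis' reduced space).
import numpy as np
from sklearn.decomposition import PCA

def centroid_distances(features, labels, n_components=16):
    z = PCA(n_components=n_components).fit_transform(features)
    dists = np.empty(len(z))
    for c in np.unique(labels):
        mask = labels == c
        centroid = z[mask].mean(axis=0)
        dists[mask] = np.linalg.norm(z[mask] - centroid, axis=1)
    return dists  # larger distance ~ further from the class centre

# Example with synthetic features
X = np.random.default_rng(0).normal(size=(60, 64))
y = np.repeat([0, 1, 2], 20)
print(centroid_distances(X, y)[:5])
```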
- Published
- 2022
7. Multi-objective community detection applied to social and COVID-19 constructed networks
- Author
-
Ahmed, Jenan Moosa, Kalganova, T., and Awad, W. S.
- Subjects
Graph mining, Contact tracing of COVID-19, Attribute based community detection, Attributed social networks, Homogeneity in social networks
- Abstract
Community Detection plays an integral part in network analysis, as it facilitates understanding the structure and functional characteristics of a network. Communities organize real-world networks into densely connected groups of nodes. This thesis provides a critical analysis of Community Detection and highlights its main areas, including algorithms, evaluation metrics, applications, and datasets in social networks. After defining the research gap, the thesis proposes two Attribute-Based Label Propagation algorithms that maximize both Modularity and homogeneity. Homogeneity is considered once as an objective function and once as a constraint. To better capture the homogeneity of real-world networks, a new Penalized Homogeneity degree (PHd) is proposed, which can be easily personalized based on the network characteristics. For the first time, COVID-19 tracing data are utilized to form two network datasets: one based on virus transmission between the world's countries, while the second is an attributed network based on virus transmission among contact-tracing records in the Kingdom of Bahrain. Networks of this type, concerned with tracking a disease, had not previously been formed from COVID-19 data and had never been studied as a community detection problem. The proposed datasets are validated and tested in several experiments. The proposed Penalized Homogeneity measure is personalized and used to evaluate the proposed attributed network. Extensive experiments and analysis are carried out to evaluate the proposed methods and benchmark the results against other well-known algorithms. The results are compared in terms of Modularity, the proposed PHd, and accuracy measures. The proposed methods achieved the best performance among the compared methods, with 26.6% better performance in Modularity and 33.96% in PHd on the proposed dataset, as well as noteworthy results on benchmarking datasets, with improvements in the Modularity measure of 7.24% and 4.96%, respectively, and proposed PHd values of 27% and 81.9%.
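As a rough illustration of attribute-aware label propagation of the kind described above (the weighting scheme is an assumption, not the thesis algorithm): each node adopts the most common label among its neighbours, with neighbours that share the node's attribute counted more heavily, so that both structure and homogeneity are rewarded.

```python
# Sketch: label propagation biased towards attribute-homogeneous neighbourhoods.
from collections import Counter
import networkx as nx

def attribute_label_propagation(G, attr="attr", sim_weight=1.0, rounds=10):
    labels = {v: v for v in G}                      # every node starts in its own community
    for _ in range(rounds):
        for v in G:
            scores = Counter()
            for u in G.neighbors(v):
                sim = 1.0 if G.nodes[u][attr] == G.nodes[v][attr] else 0.0
                scores[labels[u]] += 1.0 + sim_weight * sim
            if scores:
                labels[v] = scores.most_common(1)[0][0]
    return labels

# Tiny example graph with node attributes
G = nx.Graph()
G.add_nodes_from([(0, {"attr": "a"}), (1, {"attr": "a"}), (2, {"attr": "b"})])
G.add_edges_from([(0, 1), (1, 2)])
print(attribute_label_propagation(G))
```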
- Published
- 2022
8. OptPlatform : metaheuristic optimisation framework for solving complex real-world problems
- Author
-
Dzalbs, Ivars, Kalganova, T., and Meng, H.
- Subjects
Optimization, Metaheuristics, Supply chain optimisation, Automated tuning
- Abstract
We optimise daily, whether that is planning a round trip that visits the most attractions within a given holiday budget or just taking a train instead of driving a car in rush hour. Many problems just like these are solved by individuals as part of their daily schedule, and they are effortless and straightforward. If we now scale that to many individuals with many different schedules, like a school timetable, we get to a point where it is just not feasible or practical to solve by hand. In such instances, optimisation methods are used to obtain an optimal solution. In this thesis, a practical approach to optimisation has been taken by developing an optimisation platform with all the necessary tools to be used by practitioners who are not necessarily familiar with the subject of optimisation. First, a high-performance metaheuristic optimisation framework (MOF) called OptPlatform is implemented, and its versatility and performance are evaluated across multiple benchmarks and real-world optimisation problems. Results show that OptPlatform outperforms competing MOFs in both solution quality and computation time. Second, the most suitable hardware platform for OptPlatform is determined by an in-depth analysis of Ant Colony Optimisation scaling across CPU, GPU and enterprise Xeon Phi. Unlike the common benchmark problems used in the literature, the supply chain problem solved here could not be scaled efficiently on GPUs. Third, a variety of metaheuristics are implemented in OptPlatform, including a proposed new metaheuristic based on the Imperialist Competitive Algorithm (ICA), called ICA with Independence and Constrained Assimilation (ICAwICA). The ICAwICA was evaluated on two different types of benchmark problems, and the results show the versatile application of the algorithm, matching and in some cases outperforming custom-tuned approaches. Finally, essential MOF features such as automatic algorithm selection and tuning, lacking in existing frameworks, are implemented in OptPlatform. Two novel approaches are proposed and compared to existing methods. Results indicate the superiority of the implemented tuning algorithms within a constrained tuning-budget environment.
- Published
- 2021
9. Design, modelling, and control of an ambidextrous robot arm
- Author
-
Mukhtar, Mashood, Kalganova, T., and Mousavi, A.
- Subjects
629.8, Task and motion planning, Kinematic analysis, ANFIS controller, 3D printed
- Abstract
This thesis presents the novel design of an ambidextrous robot arm that offers double the range of motion compared to dexterous arms. The proposed arm is unique in terms of design (the ambidextrous feature), actuation (simultaneous use of two different actuators: a Pneumatic Artificial Muscle (PAM) and an electric motor) and control (combined use of a Proportional Integral Derivative (PID) controller with a Neural Network (NN) for the hand, and a modified Multiple Adaptive Neuro-Fuzzy Inference System (MANFIS) controller for the arm). The primary challenge of the project was to achieve ambidextrous behaviour of the arm. Thus, a feasibility analysis was carried out to evaluate possible mechanical designs. The secondary aim was to deal with the control issues associated with the ambidextrous design. Due to the ambidextrous nature of the design, stabilising such a device becomes a challenging task. Conventional controllers and artificial intelligence-based controllers were explored to find the most suitable one. The performance of all these controllers has been compared through experiments, and the combined use of PID with NN was found to be the most accurate controller to drive the ambidextrous robot hand. In terms of ambidextrous robot arm control, a solution based on forward and inverse kinematic approaches is presented, and the results are verified using the derived equations in MATLAB. Since solving the inverse kinematics analytically is difficult, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is developed using the ANFIS MATLAB toolbox. When the generic ANFIS failed to produce satisfactory results, the modified MANFIS was proposed. The efficiency of the ambidextrous arm has been tested by comparing its performance with a conventional robot arm. The results obtained from experiments proved the efficiency of the ambidextrous arm compared with a conventional arm in terms of power consumption and stability.
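For orientation, the sketch below shows forward kinematics of a planar two-link arm and a brute-force inverse-kinematics search; the link lengths are placeholders, and in the thesis the inverse mapping is instead learned by ANFIS/MANFIS rather than searched numerically.

```python
# Sketch: planar 2-link forward kinematics plus a naive inverse-kinematics search.
import numpy as np

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.array([x, y])

def inverse_kinematics(target, grid=200):
    """Brute-force search over joint angles; ANFIS would learn this mapping."""
    best, best_err = None, np.inf
    for t1 in np.linspace(-np.pi, np.pi, grid):
        for t2 in np.linspace(-np.pi, np.pi, grid):
            err = np.linalg.norm(forward_kinematics(t1, t2) - target)
            if err < best_err:
                best, best_err = (t1, t2), err
    return best, best_err

print(inverse_kinematics(np.array([0.4, 0.2])))
```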
- Published
- 2020
10. Enhancing performance of conventional computer networks employing selected SDN principles
- Author
-
Hasan, Hasanein, Cosmas, J., and Kalganova, T.
- Subjects
004.6, Conventional computer networks, Routing protocols, Software defined networking, Multi-protocol label switching, Link failure
- Abstract
This research is related to computer networks. In this thesis, three main issues that affect the performance of any computer network are addressed: congestion, efficient resource utilization, and link failure. These issues are related to each other in many situations. Many approaches have been suggested to deal with them, and many solutions have been applied. Despite all the improvements in technology and the proposed solutions, these issues continue to be a burden on system performance. This effect is related to the increase in Quality of Service (QoS) requirements in modern networks. The basic idea of this research is to evolve the intelligence of a conventional computer network when dealing with these issues by adding selected features of Software Defined Networking (SDN). This adoption upgrades the conventional computer network to be more dynamic and more self-organizing when dealing with these issues. The idea is applied to a system represented by a computer network that uses the Open Shortest Path First (OSPF) routing protocol. The first improvement deals with the distribution of Internet Protocol (IP) routed flows. The second improvement deals with tunnel establishment serving Multi-Protocol Label Switching (MPLS) routed flows, and the third improvement deals with bandwidth reservation when applying network restoration, represented by the Fast Re-route (FRR) mechanism, to mitigate the effect of link failure in an OSPF/MPLS routed network. The idea is also applied to another system that uses the Enhanced Interior Gateway Routing Protocol (EIGRP) to improve the performance of its routing algorithm. Adopting the SDN notion is achieved by adding an intelligent controller to the system and creating a dialogue of messages between the controller and the conventional routers. This requires upgrading the routers to respond to the new modified system. The proposed approaches are presented with simulations of different configurations, which produce good results.
- Published
- 2016
11. Global supply chain optimization : a machine learning perspective to improve Caterpillar's logistics operations
- Author
-
Veluscek, Marco, Kalganova, T., and Broomhead, P.
- Subjects
658.7, Combinatorial optimization, Artificial intelligence, Ant colony optimization, Meta-heuristics, Hyper-heuristics
- Abstract
Supply chain optimization is one of the key components for the effective management of a company with a complex manufacturing process and distribution network. Companies with a global presence in particular are motivated to optimize their distribution plans in order to keep their operating costs low and competitive. Changing conditions in the global market and volatile energy prices increase the need for an automatic decision and optimization tool. In recent years, many techniques and applications have been proposed to address the problem of supply chain optimization. However, such techniques are often too problem-specific or too knowledge-intensive to be implemented as an inexpensive and easy-to-use computer system. The effort required to implement an optimization system for a new instance of the problem appears to be quite significant. The development process necessitates the involvement of expert personnel, and the level of automation is low. The aim of this project is to develop a set of strategies capable of increasing the level of automation when developing a new optimization system. An increased level of automation is achieved by focusing on three areas: multi-objective optimization, optimization algorithm usability, and optimization model design. A literature review highlighted the great level of interest in the problem of multi-objective optimization in the research community. However, the review emphasized a lack of standardization in the area and insufficient understanding of the relationship between multi-objective strategies and problems. Experts in the areas of optimization and artificial intelligence are interested in improving the usability of the most recent optimization algorithms. They state the concern that the large number of variants and parameters that characterize such algorithms affects their potential applicability in real-world environments. Such characteristics are seen as the root cause of the low success of the most recent optimization algorithms in industrial applications. A crucial task in the development of an optimization system is the design of the optimization model. Such a task is one of the most complex in the development process; however, it is still performed mostly manually. The importance and complexity of the task strongly suggest the development of tools to aid the design of optimization models. In order to address these challenges, the problem of multi-objective optimization is considered first and the most widely adopted techniques to solve it are identified. Such techniques are analyzed and described in detail to increase the level of standardization in the area. Empirical evidence is highlighted to suggest what type of relationship exists between strategies and problem instances. Regarding the optimization algorithm, a classification method is proposed to improve its usability and computational requirements by automatically tuning one of its key parameters, the termination condition. The algorithm assesses the problem complexity and automatically assigns the best termination condition to minimize runtime. The runtime of the optimization system has been reduced by more than 60%. Arguably, the usability of the algorithm has been improved as well, as one of the key configuration tasks can now be completed automatically. Finally, a system is presented to aid the definition of the optimization model through regression analysis. The purpose of the method is to gather as much knowledge about the problem as possible, so that the task of optimization model definition requires less user involvement. The application of the proposed algorithm is estimated to have saved almost 1,000 man-weeks in completing the project. The developed strategies have been applied to the problem of Caterpillar's global supply chain optimization. This thesis also describes the process of developing an optimization system for Caterpillar and highlights the challenges and research opportunities identified while undertaking this work. The thesis describes the optimization model designed for Caterpillar's supply chain and the implementation details of the Ant Colony System, the algorithm selected to optimize the supply chain. The system is now used to design the distribution plans of more than 7,000 products. The system improved Caterpillar's marginal profit on such products by an average of 4.6%.
- Published
- 2016
12. Intelligent optimisation of analogue circuits using particle swarm optimisation, genetic programming and genetic folding
- Author
-
Ushie, Ogri James, Abbod, M., and Kalganova, T.
- Subjects
006.3, Genetic programming/folding, Modified symbolic circuit analysis in Matlab (MSCAM), Artificial intelligence, Evolutionary computing, Swarm intelligence
- Abstract
This research presents various intelligent optimisation methods, namely: genetic algorithm (GA), particle swarm optimisation (PSO), artificial bee colony algorithm (ABCA), firefly algorithm (FA) and bacterial foraging optimisation (BFO). It attempts to minimise analogue electronic filter and amplifier circuits, taking a cascode amplifier design as a case study, and utilising the above-mentioned intelligent optimisation algorithms with the aim of determining the best among them. Small signal analysis (SSA) conversion of the cascode circuit is performed, while mesh analysis is applied to transform the circuit into matrix form. Computer programs are developed in Matlab using the above-mentioned intelligent optimisation algorithms to minimise the cascode amplifier circuit. The objective function is based on input resistance, output resistance, power consumption, gain, upper frequency band and lower frequency band. For the cascode circuit results presented, the above-mentioned intelligent optimisation algorithms are applied to the same circuit, and the techniques are compared with one using Nelder-Mead and with the original circuit simulated in PSpice. Four circuit element types (resistors, capacitors, transistors and operational amplifiers (op-amps)) are targeted by the optimisation techniques and subsequently compared to the initial circuit. The PSO-based optimised result proved to be the best, followed by the GA-optimised technique, regarding power consumption reduction and frequency response. This work modifies the symbolic circuit analysis in Matlab (MSCAM) tool, which utilises a netlist from PSpice or from simulation to generate matrices. These matrices are used for optimisation or to compute circuit parameters. The tool is modified to handle both active and passive elements such as inductors, resistors, capacitors, transistors and op-amps. The transistors are transformed into their SSA, and the op-amps use an SSA that is easy to implement in programming. Results are presented to illustrate the potential of the algorithm. Results are compared to PSpice simulation, and the approach handled larger matrix dimensions than the existing symbolic circuit analysis in Matlab tool (SCAM). SCAM formed matrices by adding additional rows and columns, due to how the algorithm was developed, which takes more computer resources and limits its performance. Next, this work attempts to reduce the component count in high-pass, low-pass, and all-pass active filters. It also uses a lower-order filter to realise the same results as a higher-order filter regarding the frequency response curve. The optimisers applied are GA and PSO (the best two methods among them) and Nelder-Mead (the worst method), which are subsequently used for the filter optimisation. The filters are converted into their SSA, while nodal analysis is applied to transform the circuit into matrix form. High-pass, low-pass, and all-pass active filter results are presented to demonstrate the effectiveness of the technique. The results show that, with computer code, a lower-order op-amp filter can be applied to realise the same results as a higher-order one. Furthermore, PSO realises the best results regarding frequency response in all three cases, followed by GA, whereas Nelder-Mead has the worst results. Furthermore, this research introduces genetic folding (GF), MSCAM, and automatically simulated netlists into existing genetic programming (GP), which is a new contribution of this work and enhances the development of an independent Matlab toolbox for the evolution of passive and active filter circuits. The active filter circuit evolution, especially when an operational amplifier is involved as a component, is the first of its kind in circuit evolution. In this work, only one software package is used instead of combining PSpice and Matlab in electronic circuit simulation. This saves the elapsed time for moving the simulation between the two platforms and reduces the cost of subscription. The evolving circuit from GP using Matlab simulation is automatically transformed into a symbolic netlist, also by Matlab simulation. The netlist is fed into MSCAM, where MSCAM uses it to generate matrices for the simulation. The matrices facilitate frequency response analysis of low-pass, high-pass, band-pass and band-stop active and passive filter circuits. After the circuit evolution using the developed GP, PSO is then applied to optimise some of the circuits. The algorithm is tested with twelve different circuits (five examples of active filters, four examples of passive filter circuits and three examples of transistor amplifier circuits), and the results presented show that the algorithm is efficient regarding design.
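A generic PSO sketch is given below to illustrate the kind of component-value optimisation described above; the velocity/position update is the standard PSO formulation, and the cost function is a placeholder rather than the thesis objective (input/output resistance, power, gain and band edges).

```python
# Standard PSO sketch (not the thesis toolbox): minimise a placeholder circuit cost
# over component values within bounds.
import numpy as np

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()]
    return gbest, pbest_cost.min()

# Placeholder cost: penalise deviation from a target gain and high power draw
target_gain = 40.0
cost = lambda p: abs(target_gain - 20 * np.log10(p[0] / p[1])) + 1e-3 * p[2]
print(pso(cost, bounds=[(1e3, 1e5), (1e2, 1e4), (0.0, 100.0)]))
```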
- Published
- 2016
13. Remote-controlled ambidextrous robot hand actuated by pneumatic muscles : from feasibility study to design and control algorithms
- Author
-
Akyürek, Emre, Kalganova, T., and Powell, R.
- Subjects
629.8, Robotics, Mechanical architectures, Phasing plane switch control, Sliding-mode control, Backstepping control
- Abstract
This thesis relates to the development of the Ambidextrous Robot Hand engineered at Brunel University. Applied to a robotic hand, the ambidextrous feature means that two different behaviours are accessible from a single robot hand, because its finger architecture permits the fingers to bend both ways. On one hand, the robotic device can therefore behave as a right hand whereas, on the other, it can behave as a left hand. The main contribution of this project is its ambidextrous feature, totally unique in the robotics area. Moreover, the Ambidextrous Robot Hand is actuated by pneumatic artificial muscles (PAMs), which are not commonly used to drive robot hands. The type of actuator consequently adds further originality to the project. The primary challenge is to reach ambidextrous behaviour using PAMs designed to actuate non-ambidextrous robot hands. Thus, a feasibility study is carried out for this purpose. Investigating a number of mechanical possibilities, an ambidextrous design is reached with features almost identical for its right and left sides. A testbench is thereafter designed to investigate this possibility even further, leading to ambidextrous fingers designed using 3D printing and an asymmetrical tendon routing engineered to reduce the number of actuators. The Ambidextrous Robot Hand is connected to a remote control interface accessible from its website, which provides video streaming as feedback, to be eventually used as an online rehabilitation device. The secondary main challenge is to implement control algorithms on a robot hand with a range twice as large as that of others, with an asymmetrical tendon routing and actuated by nonlinear actuators. A number of control algorithms are therefore investigated to interact with the angular displacement of the fingers and the grasping abilities of the hand. Several solutions are found, notably the implementation of a phasing plane switch control and a sliding-mode control, both specific to the architecture of the Ambidextrous Robot Hand. The implementation of these two algorithms on a robotic hand actuated by PAMs is almost as innovative as the ambidextrous design of the mechanical structure itself.
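The following is a minimal sliding-mode control sketch with a continuous (saturation) approximation of the switching term, the chattering-reduction idea mentioned above; the plant is an assumed double integrator and all gains are placeholders, not the hand's actual model or controller.

```python
# Sketch: sliding-mode position control with a boundary-layer (saturation)
# approximation of sign(s) to reduce chattering. Double-integrator plant assumed.
import numpy as np

def smc_torque(theta, dtheta, theta_ref, dtheta_ref,
               lam=20.0, k=5.0, phi=0.05):
    e, de = theta - theta_ref, dtheta - dtheta_ref
    s = de + lam * e                       # sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)      # boundary layer instead of sign(s)
    return -(lam * de) - k * sat           # feedback term + smoothed switching term

print(smc_torque(0.1, 0.0, 0.0, 0.0))
```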
- Published
- 2015
14. Holoscopic 3D imaging and display technology : camera/processing/display
- Author
-
Swash, Mohammad Rafiq, Kalganova, T., Tsekleves, E., Aggoun, A., and Broomhead, P.
- Subjects
006.6, 3D display technology, 3D camera technology, 3D computer graphics, 3D pixel mapping, 3D image conversion
- Abstract
Holoscopic 3D imaging, or "integral imaging", was first proposed by Lippmann in 1908. It has become an attractive technique for creating full-colour 3D scenes that exist in space. It promotes a single camera aperture for recording the spatial information of a real scene and uses a regularly spaced microlens array to simulate the principle of the fly's-eye technique, which creates physical duplicates of the light field (a true 3D-imaging technique). While stereoscopic and multiview 3D imaging systems, which simulate the human-eye technique, are widely available in the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, which includes the holoscopic 3D camera, processing and display. A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal and vertical resolution. In particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer graphics rendering techniques are proposed that simplify the rendering complexity and facilitate holoscopic 3D content generation. A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that makes interactive holoscopic 3D video content searchable and browsable. Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM enhances the 3D pixels per inch from 44 3D-PPI to 176 3D-PPI horizontally and achieves a spatial resolution of 1365 × 384 3D pixels, whereas the traditional spatial resolution is 341 × 1536 3D pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB colour-channel elemental images.
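The toy sketch below illustrates the kind of resolution trade-off quoted above, regrouping vertically adjacent 3D pixels into horizontally adjacent ones; it is not the actual 4D-DSPM mapping, and the array shapes are only indicative.

```python
# Toy illustration (not 4D-DSPM): trade vertical resolution for horizontal
# resolution by regrouping blocks of vertically adjacent 3D pixels.
import numpy as np

def trade_vertical_for_horizontal(pixels_3d, factor=4):
    h, w = pixels_3d.shape[:2]
    assert h % factor == 0
    out = pixels_3d.reshape(h // factor, factor, w, -1)   # group rows in blocks
    out = out.transpose(0, 2, 1, 3)                       # place grouped rows side by side
    return out.reshape(h // factor, w * factor, -1)

grid = np.zeros((1536, 341, 3))            # an assumed 341 x 1536 3D-pixel grid
print(trade_vertical_for_horizontal(grid).shape)          # -> (384, 1364, 3)
```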
- Published
- 2013
15. Automatic design of analogue circuits
- Author
-
Sapargaliyev, Yerbol and Kalganova, T.
- Subjects
621.3815, Evolutionary strategy, Evolutionary algorithms, Evolutionary electronics, Incremental evolution
- Abstract
Evolvable Hardware (EHW) is a promising area in electronics today. Evolutionary Algorithms (EA), together with a circuit simulation tool or real hardware, automatically design a circuit for a given problem. The circuits evolved may have unconventional designs and be less dependent on the personal knowledge of a designer. Nowadays, EA are represented by Genetic Algorithms (GA), Genetic Programming (GP) and Evolutionary Strategy (ES). While GA is definitely the most popular tool, GP has developed rapidly in recent years and is notable for its outstanding results. However, to date, the use of ES for analogue circuit synthesis has been limited to a few applications. This work is devoted to exploring the potential of ES to create novel analogue designs. The narrative of the thesis starts with a framework of an ES-based system generating simple circuits, such as low-pass filters. It then continues with a step-by-step progression to increasingly sophisticated designs that require additional strength from the system. Finally, it describes the modernization of the system using novel techniques that enable the synthesis of complex, newly evolved multi-pin circuits. It has been discovered that ES has strong power to synthesize analogue circuits. The circuits evolved in the first part of the thesis exceed similar results previously obtained using other techniques in component economy, in the better functioning of the evolved circuits and in the computing power spent to reach the results. The target circuits for evolution in the second half are chosen by the author to challenge the capability of the developed system. By their function, they do not belong to the conventional analogue domain but to applications that are usually handled by digital circuits. To solve the design tasks, the system has been gradually developed to support the evolution of increasingly complex circuits. As a final result, a state-of-the-art ES-based system has been developed that possesses a novel mutation paradigm, with the ability to create, store and reuse substructures, to adapt the mutation and selection parameters and population size, to utilize automatic incremental evolution and to use the power of parallel computing. With the ability to synthesize multi-pin analogue circuits more complex than any automatically synthesized before, the system is capable of synthesizing circuits that are problematic for conventional design, with application domains that lie beyond the conventional application domain for analogue circuits.
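For context, the sketch below shows a generic (1+λ) evolutionary-strategy loop with simple step-size adaptation; real circuit evolution as described above would mutate netlist topology and evaluate fitness through a circuit simulator, which is abstracted here as a placeholder fitness function.

```python
# Generic (1+lambda) ES sketch over a real-valued parameter vector; the fitness
# function is a placeholder, standing in for a circuit-simulator evaluation.
import numpy as np

def evolve(fitness, x0, sigma=0.1, lam=8, generations=200):
    rng = np.random.default_rng(42)
    parent, parent_fit = np.array(x0, float), fitness(x0)
    for _ in range(generations):
        children = parent + sigma * rng.standard_normal((lam, len(parent)))
        fits = np.array([fitness(c) for c in children])
        if fits.min() < parent_fit:                 # keep the best child if it improves
            parent, parent_fit = children[fits.argmin()], fits.min()
        else:
            sigma *= 0.9                            # simple step-size adaptation
    return parent, parent_fit

# Placeholder fitness: distance of a simulated response to a target value
print(evolve(lambda x: abs(np.sum(np.square(x)) - 2.0), x0=[1.0, 0.5, 0.2]))
```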
- Published
- 2011
16. An intelligent manufacturing system for heat treatment scheduling
- Author
-
Al-Kanhal, Tawfeeq, Abbod, M. F., and Kalganova, T.
- Subjects
669, Genetic algorithm (GA), Heat treatment operation, Scheduling, Neuro fuzzy system, Particle swarm optimisation (PSO)
- Abstract
This research is focused on the integration of process planning and scheduling in a steel heat treatment operations environment, using artificial intelligence techniques capable of dealing with such problems. This work addresses the issues involved in developing a suitable methodology for scheduling the heat treatment operations of steel. Several intelligent algorithms have been developed for this purpose, namely the Genetic Algorithm (GA), Sexual Genetic Algorithm (SGA), Genetic Algorithm with Chromosome Differentiation (GACD), Age Genetic Algorithm (AGA), and Mimetic Genetic Algorithm (MGA). These algorithms have been employed to develop an efficient intelligent algorithm using the Algorithm Portfolio methodology. After that, all the algorithms have been tested on two types of scheduling benchmarks. To apply these algorithms to heat treatment scheduling, a furnace model is developed for optimisation purposes. Furthermore, a system capable of selecting the optimal heat treatment regime is developed, so that the required metal properties can be achieved with the least energy consumption and in the shortest time, using Neuro-Fuzzy (NF) and Particle Swarm Optimisation (PSO) methodologies. Based on this system, PSO is used to optimise the heat treatment process by selecting different heat treatment conditions. The selected conditions are evaluated so that the best selection can be identified. This work addresses the issues involved in developing a suitable methodology for building an NF system and PSO for the mechanical properties of the steel. Using the optimisers, the furnace model and the heat treatment system model, the intelligent system model is developed and implemented successfully. The results of this system were encouraging, and the optimisers worked correctly.
- Published
- 2010
17. Ant colony optimization based simulation of 3D automatic hose/pipe routing
- Author
-
Thantulage, Gishantha I. F. and Kalganova, T.
- Subjects
005.3, Multi-Objective Hose Routing, Ant System, Tessellated Format, Freeform CAD Geometries, Multi-Objective Ant Colony Optimization, Pareto Strength Ant Colony Algorithms, Domination, Collision Detection, Multi-Hose Routing, Multi-Colony Ant System
- Abstract
This thesis focuses on applying one of the rapidly growing non-deterministic optimization algorithms, the ant colony algorithm, to simulating automatic hose/pipe routing with several conflicting objectives. Within the thesis, methods have been developed and applied to single-objective hose routing, multi-objective hose routing and multi-hose routing. The use of simulation and optimization in engineering design has been widely applied in all fields of engineering as the computational capabilities of computers have increased and improved. As a result, the application of non-deterministic optimization techniques such as genetic algorithms, simulated annealing algorithms and ant colony algorithms has increased dramatically, resulting in vast improvements in the design process. Initially, two versions of ant colony algorithms have been developed, based respectively on a random network and a grid network, for a single objective (minimizing the length of the hoses) while avoiding obstacles in the CAD model. While applying ant colony algorithms to the simulation of hose routing, two modifications have been proposed for reducing the size of the search space and avoiding the stagnation problem. Hose routing problems often consist of several conflicting or trade-off objectives. In classical approaches, multiple objectives are in many cases aggregated into one single objective function, and the optimization is then treated as a single-objective optimization problem. In this thesis, two versions of ant colony algorithms are presented for multi-hose routing with two conflicting objectives: minimizing the total length of the hoses and maximizing the total shared length (bundle length). In this case the two objectives are aggregated into a single objective. The current state-of-the-art approach for handling multi-objective design problems is to employ the concept of Pareto optimality. Within this thesis, a new Pareto-based general-purpose ant colony algorithm (PSACO) is proposed and applied to a multi-objective hose routing problem that consists of the following objectives: the total length of the hoses between the start and the end locations, the number of bends, and the angles of the bends. The proposed method is capable of handling any number of objectives and uses a single pheromone matrix for all the objectives. The domination concept is used for updating the pheromone matrix. Among the currently available multi-objective ant colony optimization (MOACO) algorithms, P-ACO generates very good solutions in the central part of the Pareto front, and hence the proposed algorithm is compared with P-ACO. A new term is added to the random proportional rule of both algorithms (PSACO and P-ACO) to attract ants towards edges that make angles close to the pre-specified angles of bends. A refinement algorithm is also suggested for finding an acceptable solution after the entire search space has been searched. For all of the simulations, the STL (tessellated) format for the obstacles is used in the algorithm instead of the original shapes of the obstacles. This STL format is passed to the C++ library RAPID for collision detection. As a result of using this format, the algorithms can handle freeform obstacles and are not restricted to a particular software package.
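The sketch below shows a random-proportional edge-selection rule with an extra factor rewarding bend angles close to a preferred value, in the spirit of the modification described above; the exact form of the angle term and the parameter names are assumptions, not the thesis formula.

```python
# Sketch: random-proportional rule with an assumed bend-angle attraction term.
import random

def choose_edge(edges, tau, eta, angle, preferred_angle=90.0,
                alpha=1.0, beta=2.0, gamma=1.0):
    """edges: candidate edge ids; tau/eta/angle: dicts edge id -> value (deg)."""
    def weight(e):
        angle_term = 1.0 / (1.0 + abs(angle[e] - preferred_angle))
        return (tau[e] ** alpha) * (eta[e] ** beta) * (angle_term ** gamma)
    weights = [weight(e) for e in edges]
    return random.choices(edges, weights=weights, k=1)[0]

# Toy example with two candidate edges
tau = {("a", "b"): 1.0, ("a", "c"): 0.6}
eta = {("a", "b"): 0.5, ("a", "c"): 0.9}
ang = {("a", "b"): 92.0, ("a", "c"): 45.0}
print(choose_edge([("a", "b"), ("a", "c")], tau, eta, ang))
```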
- Published
- 2009
18. Computational intelligence techniques in asset risk analysis
- Author
-
Serguieva, Antoaneta and Kalganova, T.
- Subjects
519
- Abstract
The problem of asset risk analysis is positioned within the computational intelligence paradigm. We suggest an algorithm for reformulating asset pricing, which involves incorporating imprecise information into the pricing factors through fuzzy variables, as well as a calibration procedure for their possibility distributions. Fuzzy mathematics is then used to process the imprecise factors and obtain an asset evaluation. This evaluation is further automated using neural networks with sign restrictions on their weights. While this type of network has previously been used with only up to two network inputs and hypothetical data, here we apply thirty-six inputs and empirical data. To achieve successful training, we modify the Levenberg-Marquardt backpropagation algorithm. The intermediate result achieved is that the fuzzy asset evaluation inherits features of the factor imprecision and provides the basis for risk analysis. Next, we formulate a risk measure and a risk robustness measure based on the fuzzy asset evaluation under different characteristics of the pricing factors as well as different calibrations. Our database, extracted from DataStream, includes thirty-five companies traded on the London Stock Exchange. For each company, the risk and robustness measures are evaluated, and an asset risk analysis is carried out through these values, indicating the implications they have for company performance. A comparative company risk analysis is also provided. We then employ both risk measures to formulate a two-step asset ranking method. The assets are initially rated according to the investors' risk preference. In addition, an algorithm is suggested to incorporate the asset robustness information and refine the ranking further, benefiting market analysts. The rationale provided by the ranking technique serves as a point of departure in designing an asset risk classifier. We identify the fuzzy neural network structure of the classifier and develop an evolutionary training algorithm. The algorithm starts by suggesting preliminary heuristics for constructing a sufficient training set of assets with various characteristics revealed by the values of the pricing factors and the asset risk values. The training algorithm then works at two levels: the inner level targets weight optimization, while the outer level efficiently guides the exploration of the search space. The latter is achieved by automatically decomposing the training set into subsets of decreasing complexity and then incrementing backward the corresponding subpopulations of partially trained networks. The empirical results prove that the developed algorithm is capable of training the identified fuzzy network structure. This is a problem of such complexity that single-level evolution is prevented from attaining meaningful results. The final outcome is an automatic asset classifier based on the investors' perceptions of acceptable risk. All the steps described above constitute our approach to reformulating asset risk analysis within the approximate reasoning framework through the fusion of various computational intelligence techniques.
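As an illustration of the sign-restriction constraint mentioned above (not the modified Levenberg-Marquardt scheme of the thesis), the sketch below projects weights back onto their allowed signs after a plain gradient step; the example values are arbitrary.

```python
# Sketch: keep network weights sign-restricted by projecting after each update.
# This is only the constraint being enforced, not the thesis training algorithm.
import numpy as np

def project_signs(weights, signs):
    """signs: array of +1/-1 giving the allowed sign of each weight."""
    return np.where(weights * signs < 0, 0.0, weights)

def sgd_step(weights, grad, signs, lr=0.01):
    return project_signs(weights - lr * grad, signs)

w = np.array([0.2, -0.1, 0.05])
signs = np.array([1, -1, 1])
print(sgd_step(w, grad=np.array([30.0, -5.0, 1.0]), signs=signs))
# the first weight would flip sign, so it is clipped to 0 instead
```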
- Published
- 2004