83 results for "Adil, M."
Search Results
2. Stochastic Robustness of Delayed Discrete Noises for Delay Differential Equations
- Author
-
Fawaz E. Alsaadi, Lichao Feng, Madini O. Alassafi, Reem M. Alotaibi, Adil M. Ahmad, and Jinde Cao
- Subjects
robust boundedness, robust stability, delayed discrete noises, delay differential equations, Mathematics
- Abstract
The stochastic robustness of discrete noises has already been proposed and studied in previous work. Nevertheless, that work neglects the significant phenomenon of delays in both the deterministic and the stochastic parts of the considered equation. Stimulated by the above, this paper is devoted to the stochastic robustness of delayed discrete noises for delay differential equations, including the issues of robust stability and robust boundedness.
- Published
- 2022
- Full Text
- View/download PDF
3. An augmented subgradient method for minimizing nonsmooth DC functions
- Author
-
Sona Taheri, Adil M. Bagirov, and N. Hoseini Monjezi
- Subjects
Computational Mathematics, Sequence, Mathematical optimization, Control and Optimization, Line search, Optimization problem, Applied Mathematics, Regular polygon, Point (geometry), Subgradient method, Critical point (mathematics), DC bias, Mathematics
- Abstract
A method, called an augmented subgradient method, is developed to solve unconstrained nonsmooth difference of convex (DC) optimization problems. At each iteration of this method, search directions are found by using several subgradients of the first DC component and one subgradient of the second DC component of the objective function. The method applies an Armijo-type line search procedure to find the next iteration point (a line search of this type is sketched after this entry). It is proved that the sequence of points generated by the method converges to a critical point of the unconstrained DC optimization problem. The performance of the method is demonstrated using academic test problems with nonsmooth DC objective functions and compared with that of two general nonsmooth optimization solvers and five solvers specifically designed for unconstrained DC optimization. Computational results show that the developed method is efficient and robust for solving nonsmooth DC optimization problems.
- Published
- 2021
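For context on entry 3: an Armijo-type line search of the kind the abstract mentions accepts the first trial step along a search direction that achieves sufficient decrease. A minimal illustrative sketch in Python; the function, constants and names here are ours, not the paper's:

```python
import numpy as np

def armijo_step(f, x, d, f_x, slope, sigma=1e-4, beta=0.5, t0=1.0, max_iter=50):
    """Backtracking: return t with f(x + t*d) <= f(x) + sigma * t * slope,
    where slope < 0 estimates the directional derivative of f along d."""
    t = t0
    for _ in range(max_iter):
        if f(x + t * d) <= f_x + sigma * t * slope:  # sufficient decrease test
            return t
        t *= beta  # shrink the step and try again
    return 0.0  # no acceptable step found; caller treats x as (near-)critical

# Toy nonsmooth DC function f(x) = |x| - 0.5*|x - 1| and a descent direction
f = lambda x: abs(x[0]) - 0.5 * abs(x[0] - 1.0)
x, d = np.array([2.0]), np.array([-1.0])
print(armijo_step(f, x, d, f(x), slope=-0.5))  # accepts t = 1.0
```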
4. Robust piecewise linear L1-regression via nonsmooth DC optimization
- Author
-
Napsu Karmitsa, Adil M. Bagirov, Sona Taheri, Nargiz Sultanova, and Soodabeh Asadi
- Subjects
operations research, Control and Optimization, Optimization problem, Applied Mathematics, Regular polygon, Regression analysis, Regression, Piecewise linear function, statistics & probability, Outlier, Regression problems, Software, Mathematics
- Abstract
The piecewise linear L1-regression problem is formulated as an unconstrained difference of convex (DC) optimization problem, and an algorithm for solving this problem is developed. Auxiliary problems ...
- Published
- 2020
5. Aggregate subgradient method for nonsmooth DC optimization
- Author
-
Sona Taheri, Kaisa Joki, Marko M. Mäkelä, Adil M. Bagirov, and Napsu Karmitsa
- Subjects
Mathematical optimization, operations research, Control and Optimization, Current (mathematics), Optimization problem, Null (mathematics), Aggregate (data warehouse), Regular polygon, Computational intelligence, numerical & computational mathematics, Convex combination, Subgradient method, Mathematics
- Abstract
The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and also that the number of null steps between two serious steps is finite. The new method is tested using some academic test problems and compared with several other nonsmooth DC optimization solvers.
- Published
- 2020
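In its simplest two-term form, the aggregate subgradient described in entry 5 is a convex combination of the previous aggregate and a subgradient collected at the latest null step (notation ours, not the paper's):

$$\tilde{\xi}_{k+1} = (1 - \lambda_k)\,\tilde{\xi}_k + \lambda_k\,\xi_k, \qquad \lambda_k \in [0, 1],$$

so only two vectors need to be combined to form a search direction between consecutive serious steps.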
6. Field Effect in Semiconductor-Electrolyte Interfaces: Application to Investigations of Electronic Properties of Semiconductor Surfaces
- Author
-
Pavel P. Konorov, Adil M. Yafyasov, and Vladislav B. Bogevolnov
- Published
- 2021
- Full Text
- View/download PDF
7. A difference of convex optimization algorithm for piecewise linear regression
- Author
-
Soodabeh Asadi, Adil M. Bagirov, and Sona Taheri
- Subjects
industrial biotechnology, operations research, Control and Optimization, Convex optimization algorithm, Optimization problem, Applied Mathematics, Strategy and Management, Regular polygon, Regression analysis, Subderivative, Atomic and Molecular Physics, and Optics, Piecewise linear function, industrial engineering & automation, Business and International Management, Electrical and Electronic Engineering, Segmented regression, Regression algorithm, Mathematics
- Abstract
The problem of finding a continuous piecewise linear function approximating a regression function is considered. This problem is formulated as a nonconvex nonsmooth optimization problem where the objective function is represented as a difference of convex (DC) functions. Subdifferentials of DC components are computed and an algorithm is designed based on these subdifferentials to find piecewise linear functions. The algorithm is tested using some synthetic and real world data sets and compared with other regression algorithms.
- Published
- 2019
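One standard identity behind such DC formulations of piecewise linear models (a well-known decomposition, not necessarily the exact one used in entry 7): for affine functions $\ell_1, \dots, \ell_m$,

$$\min_{1 \le j \le m} \ell_j(x) \;=\; \sum_{j=1}^{m} \ell_j(x) \;-\; \max_{1 \le j \le m} \sum_{k \ne j} \ell_k(x),$$

a difference of an affine (hence convex) function and a maximum of affine functions (convex).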
8. Discrete Gradient Methods
- Author
-
Adil M. Bagirov, Napsu Karmitsa, and Marko M. Mäkelä
- Subjects
Nonlinear conjugate gradient method, Bundle method, Bundle, Applied mathematics, Point (geometry), Descent direction, Gradient descent, Gradient method, Subgradient method, Mathematics
- Abstract
In this chapter, we introduce two discrete gradient methods that can be considered semi-derivative-free in the sense that they do not use subgradient information and approximate the subgradient only at the end of the solution process (i.e., near the optimal point). The methods introduced are the original discrete gradient method for small-scale nonsmooth optimization and its limited memory bundle version, the limited memory discrete gradient bundle method, for medium-size and semi-large problems.
- Published
- 2020
9. Bundle Methods for Nonsmooth DC Optimization
- Author
-
Kaisa Joki and Adil M. Bagirov
- Subjects
Convex optimization, Convergence (routing), Applied mathematics, Bundle methods, Stationary point, Piecewise linear model, Mathematics
- Abstract
This chapter is devoted to algorithms for solving nonsmooth unconstrained difference of convex optimization problems. Different types of stationarity conditions are discussed and the relationship between sets of different stationary points (critical, Clarke stationary and inf-stationary) is established. Bundle methods are developed based on a nonconvex piecewise linear model of the objective function and the convergence of these methods is studied. Numerical results are presented to demonstrate the performance of the methods.
- Published
- 2020
10. A sharp augmented Lagrangian-based method in constrained non-convex optimization
- Author
-
Adil M. Bagirov, Refail Kasimbeyli, and Gurkan Ozturk
- Subjects
Non-Smooth Optimization, operations research, Control and Optimization, Optimization problem, Augmented Lagrangian method, Discrete Gradient Method, Applied Mathematics, Constrained Optimization, Sharp Augmented Lagrangian, numerical & computational mathematics, Function (mathematics), Nonlinear system, Modified Subgradient Algorithm, Non-Convex Optimization, Convergence (routing), Global optimization, Software, Inner loop, Mathematics
- Abstract
In this paper, a novel sharp augmented Lagrangian-based global optimization method is developed for solving constrained non-convex optimization problems. The algorithm consists of outer and inner loops. At each inner iteration, the discrete gradient method is applied to minimize the sharp augmented Lagrangian function. Depending on the solution found, the algorithm either stops, updates the dual variables in the inner loop, or updates the upper or lower bounds by going to the outer loop. Convergence results for the proposed method are presented. The performance of the method is demonstrated using a wide range of nonlinear smooth and non-smooth constrained optimization test problems from the literature.
- Published
- 2018
11. Double Bundle Method for finding Clarke Stationary Points in Nonsmooth DC Programming
- Author
-
Marko M. Mäkelä, Sona Taheri, Napsu Karmitsa, Adil M. Bagirov, and Kaisa Joki
- Subjects
operations research, Regular polygon, DC programming, numerical & computational mathematics, Bundle methods, Stationary point, Theoretical Computer Science, Double bundle, Applied mathematics, Software, Mathematics
- Abstract
The aim of this paper is to introduce a new proximal double bundle method for unconstrained nonsmooth optimization, where the objective function is presented as a difference of two convex (DC) func...
- Published
- 2018
12. Hydrological Analysis and Trans-boundary Water Management of the Blue Nile River Basin
- Author
-
Adil M. Elkider, Abdin M. A. Salih, M. Abbas, and Salih H. Hamid
- Subjects
Trans-boundary, Botany, Mathematics
- Abstract
The geographical scope of this research covers the Blue Nile basin, from the Lake Tana sub-basin in Ethiopia to the confluence of the Blue Nile with the White Nile at Khartoum, Sudan. The research aims to contribute to the sustainable management of the Blue Nile basin by establishing a reliable hydrological database and climate information system that can be used to test and identify the best options for developing and managing the basin, and by setting up an effective system for managing the trans-boundary waters of the Blue Nile basin (which comprises 16 sub-basins). Given the absence of rain gauges in the data-scarce peripheral areas of the basin, rainfall data were obtained from the Climate Forecast System Reanalysis (CFSR) and calibrated against rainfall at the available gauging stations. The calibrated satellite rainfall data and evaporation coefficients were used as the main inputs to the Water Evaluation and Planning (WEAP) model, and the simplified rainfall-runoff option was adopted to determine surface runoff for the basin. WEAP was used to simulate the basin under current conditions, taking into account all water resources projects in Sudan and Ethiopia, and five future scenarios were examined to assess the water situation up to 2041 by testing several operating policies, such as the operating priority of the filling programme, hydropower generation, and required flow volumes. The future projects in Ethiopia include the Grand Ethiopian Renaissance Dam and the Karadobi and Mandaya reservoirs, in addition to irrigation projects in Ethiopia and Sudan. The WEAP model was calibrated over 1980-1995 and validated over 1996-2010; simulated flows were compared with measured monthly data at the El Deim, Gwasi, Hawata and Khartoum stations and gave reasonable results according to the Nash-Sutcliffe efficiency and the coefficient of determination. The research found that some water demands of projects in the basin are not met (by more than 50%), especially when all water resources projects in the basin are given high priority in all proposed scenarios. It recommends using calibrated satellite rainfall data in place of ground rain gauge records, together with adjusted evaporation coefficients, in data-scarce regions such as the Blue Nile basin, rather than waiting for a data-exchange protocol whose signing may take a long time, if it happens at all. To chart sustainable and optimal water management among the Blue Nile basin countries, the research also recommends a comprehensive cooperation agreement among them that fixes the priority ranking of each project and the required monthly water demands, so as to ensure sustainable protection of the water security of downstream countries.
- Published
- 2017
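For reference, the Nash-Sutcliffe efficiency used to judge the calibration in entry 12 is the standard goodness-of-fit measure

$$\mathrm{NSE} = 1 - \frac{\sum_t \big(Q_{\mathrm{obs}}^{t} - Q_{\mathrm{sim}}^{t}\big)^2}{\sum_t \big(Q_{\mathrm{obs}}^{t} - \bar{Q}_{\mathrm{obs}}\big)^2},$$

where $Q_{\mathrm{obs}}^{t}$ and $Q_{\mathrm{sim}}^{t}$ are observed and simulated flows; values close to 1 indicate a good match.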
13. Incremental DC optimization algorithm for large-scale clusterwise linear regression
- Author
-
Emre Cimen, Sona Taheri, and Adil M. Bagirov
- Subjects
Computational Mathematics, Mathematical optimization, Sequence, Data point, Scale (ratio), Optimization algorithm, Applied Mathematics, Linear regression, Regular polygon, Convex function, Regression, Mathematics
- Abstract
The objective function in the nonsmooth optimization model of the clusterwise linear regression (CLR) problem with the squared regression error is represented as a difference of two convex functions. Then, using the difference of convex algorithm (DCA) approach, the CLR problem is replaced by a sequence of smooth unconstrained optimization subproblems (the classical DCA iteration is sketched after this entry). A new algorithm based on the DCA and the incremental approach is designed to solve the CLR problem. We apply the quasi-Newton method to solve the subproblems. The proposed algorithm is evaluated using several synthetic and real-world data sets for regression and compared with other algorithms for CLR. Results demonstrate that the DCA-based algorithm is efficient for solving CLR problems with a large number of data points and, in particular, outperforms other algorithms when the number of input variables is small.
- Published
- 2021
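The DCA approach mentioned in entry 13 linearizes the concave part at each step: for an objective $f = g - h$ with $g, h$ convex, the classical iteration (stated here in its general form) is

$$y^k \in \partial h(x^k), \qquad x^{k+1} \in \operatorname*{arg\,min}_{x} \,\big\{ g(x) - \langle y^k, x \rangle \big\},$$

so each subproblem is convex, and in the paper's setting smooth, which is what allows the quasi-Newton solver.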
14. Nonsmooth DC programming approach to clusterwise linear regression: optimality conditions and algorithms
- Author
-
Adil M. Bagirov and Julien Ugon
- Subjects
Mathematical optimization, operations research, Control and Optimization, Optimization problem, Applied Mathematics, DC programming, Regression analysis, Function (mathematics), Regression error, Linear regression, artificial intelligence & image processing, Convex function, Representation (mathematics), Algorithm, Software, Mathematics
- Abstract
The clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem using the squared regression error function. The objective function in this problem is represented as a difference of convex functions. Optimality conditions are derived, and an algorithm is designed based on such a representation. An incremental approach is proposed to generate starting solutions. The algorithm is tested on small to large data sets.
- Published
- 2017
15. Minimizing nonsmooth DC functions via successive DC piecewise-affine approximations
- Author
-
Manlio Gaudioso, Adil M. Bagirov, Giovanna Miglionico, and Giovanni Giallombardo
- Subjects
Pointwise, Mathematical optimization, operations research, Control and Optimization, Applied Mathematics, Function (mathematics), Management Science and Operations Research, Computer Science Applications, Reduction (complexity), Bundle, Convergence (routing), artificial intelligence & image processing, Quadratic programming, Cutting-plane method, Descent (mathematics), Mathematics
- Abstract
We introduce a proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function. Exploiting some classic ideas coming from cutting-plane approaches for the convex case, we iteratively build two separate piecewise-affine approximations of the component functions, grouping the corresponding information in two separate bundles. In the bundle of the first component, only information related to points close to the current iterate is maintained, while the second bundle only refers to a global model of the corresponding component function. We combine the two convex piecewise-affine approximations and generate a DC piecewise-affine model, which can also be seen as the pointwise maximum of several concave piecewise-affine functions. Such a nonconvex model is locally approximated by means of an auxiliary quadratic program, whose solution is used to certify approximate criticality or to generate a descent search direction, along with a predicted reduction, that is next explored in a line-search setting. To improve the approximation properties at points that are far from the current iterate, a supplementary quadratic program is also introduced to generate an alternative, more promising search direction. We discuss the main convergence issues of the line-search based proximal bundle method and provide computational results on a set of academic benchmark test problems.
- Published
- 2017
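The two separate piecewise-affine approximations described in entry 15 are ordinary cutting-plane models of the DC components $f = f_1 - f_2$ (notation ours):

$$\hat{f}_i(x) = \max_{j \in B_i} \big\{ f_i(x^j) + \langle \xi_i^j,\, x - x^j \rangle \big\}, \qquad \xi_i^j \in \partial f_i(x^j), \quad i = 1, 2,$$

and the nonconvex model is their difference $\hat{f}_1 - \hat{f}_2$, which is exactly a pointwise maximum of concave piecewise-affine functions.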
16. New diagonal bundle method for clustering problems in large data sets
- Author
-
Adil M. Bagirov, Napsu Karmitsa, and Sona Taheri
- Subjects
Clustering high-dimensional data, operations research, Information Systems and Management, Fuzzy clustering, General Computer Science, Correlation clustering, Constrained clustering, Management Science and Operations Research, Industrial and Manufacturing Engineering, Data stream clustering, CURE data clustering algorithm, Modeling and Simulation, Canopy clustering algorithm, artificial intelligence & image processing, Data mining, Cluster analysis, Mathematics
- Abstract
Clustering is one of the most important tasks in data mining. Recent developments in computer hardware allow us to store in random access memory (RAM) and repeatedly read data sets with hundreds of thousands and even millions of data points. This makes it possible to use conventional clustering algorithms on such data sets. However, these algorithms may need prohibitively large computational time and fail to produce accurate solutions. It is therefore important to develop clustering algorithms which are accurate and can provide real-time clustering in large data sets. This paper introduces one such algorithm. Using the nonsmooth optimization formulation of the clustering problem, the objective function is represented as a difference of two convex (DC) functions. Then a new diagonal bundle algorithm that explicitly uses this structure is designed and combined with an incremental approach to solve this problem. The method is evaluated using real-world data sets with both a large number of attributes and a large number of data points, and is compared with two other clustering algorithms using numerical results.
- Published
- 2017
17. A proximal bundle method for nonsmooth DC optimization utilizing nonconvex cutting planes
- Author
-
Adil M. Bagirov, Marko M. Mäkelä, Napsu Karmitsa, and Kaisa Joki
- Subjects
Mathematical optimization, operations research, Control and Optimization, Current (mathematics), Applied Mathematics, Regular polygon, numerical & computational mathematics, Management Science and Operations Research, Computer Science Applications, Component (UML), Convergence (routing), Point (geometry), Representation (mathematics), Subgradient method, Cutting-plane method, Mathematics
- Abstract
In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and is used to build separately an approximation for each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting plane model of the original objective function, which takes into account explicitly both the convex and the concave behavior of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an ε-critical point. The algorithm is tested using some academic test problems, and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.
- Published
- 2016
18. Nonsmooth DC programming approach to the minimum sum-of-squares clustering problems
- Author
-
Julien Ugon, Adil M. Bagirov, and Sona Taheri
- Subjects
DBSCAN, Mathematical optimization, operations research, Correlation clustering, Constrained clustering, Determining the number of clusters in a data set, Data stream clustering, Artificial Intelligence, CURE data clustering algorithm, Signal Processing, Canopy clustering algorithm, artificial intelligence & image processing, Computer Vision and Pattern Recognition, Cluster analysis, Software, Mathematics
- Abstract
This paper introduces an algorithm for solving the minimum sum-of-squares clustering problems using their difference of convex representations. A non-smooth non-convex optimization formulation of the clustering problem is used to design the algorithm. Characterizations of critical points, stationary points in the sense of generalized gradients and inf-stationary points of the clustering problem are given. The proposed algorithm is tested and compared with other clustering algorithms using large real world data sets.
- Published
- 2016
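The difference of convex representation referred to in entry 18 is, for the minimum sum-of-squares clustering objective with centers $c_1, \dots, c_k$ and data points $a_1, \dots, a_m$, the well-known decomposition

$$\frac{1}{m} \sum_{i=1}^{m} \min_{1 \le j \le k} \|c_j - a_i\|^2 \;=\; \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{k} \|c_j - a_i\|^2 \;-\; \frac{1}{m} \sum_{i=1}^{m} \max_{1 \le j \le k} \sum_{l \ne j} \|c_l - a_i\|^2,$$

where both terms on the right are convex in the centers.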
19. Sensitivity of algorithm parameters and objective function scaling in multi-objective optimisation of water distribution systems
- Author
-
Andrew Barton, Helena Mala-Jetmarova, and Adil M. Bagirov
- Subjects
Atmospheric Science, Mathematical optimization, Pareto principle, Geotechnical Engineering and Engineering Geology, Multi-objective optimization, Software, Range (statistics), Calibration, Sensitivity (control systems), Algorithm, Scaling, Civil and Structural Engineering, Water Science and Technology, Mathematics, Network analysis
- Abstract
This paper presents an extensive analysis of the sensitivity of multi-objective algorithm parameters and objective function scaling tested on a large number of parameter setting combinations for a water distribution system optimisation problem. The optimisation model comprises two operational objectives minimised concurrently, the pump energy costs and deviations of constituent concentrations as a water quality measure. This optimisation model is applied to a regional non-drinking water distribution system, and solved using the optimisation software GANetXL incorporating the NSGA-II linked with the network analysis software EPANet. The sensitivity analysis employs a set of performance metrics, which were designed to capture the overall quality of the computed Pareto fronts. The performance and sensitivity of NSGA-II parameters using those metrics is evaluated. The results demonstrate that NSGA-II is sensitive to different parameter settings, and unlike in the single-objective problems, a range of parameter setting combinations appears to be required to reach a Pareto front of optimal solutions. Additionally, inadequately scaled objective functions cause the NSGA-II bias towards the second objective. Lastly, the methodology for performance and sensitivity analysis may be used for calibration of algorithm parameters.
- Published
- 2015
20. An Algorithm for Clustering Using L1-Norm Based on Hyperbolic Smoothing Technique
- Author
-
Ehsan Mohebi and Adil M. Bagirov
- Subjects
operations research, Fuzzy clustering, Single-linkage clustering, Correlation clustering, Constrained clustering, Computational Mathematics, Artificial Intelligence, CURE data clustering algorithm, Canopy clustering algorithm, artificial intelligence & image processing, Cluster analysis, Algorithm, k-medians clustering, Mathematics
- Abstract
Cluster analysis deals with the problem of organization of a collection of objects into clusters based on a similarity measure, which can be defined using various distance functions. The use of different similarity measures allows one to find different cluster structures in a data set. In this article, an algorithm is developed to solve clustering problems where the similarity measure is defined using the L1-norm. The algorithm is designed using the nonsmooth optimization approach to the clustering problem. Smoothing techniques are applied to smooth both the clustering function and the L1-norm. The algorithm computes clusters sequentially and finds global or near global solutions to the clustering problem. Results of numerical experiments using 12 real-world data sets are reported, and the proposed algorithm is compared with two other clustering algorithms.
- Published
- 2015
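The hyperbolic smoothing used in entry 20 replaces the nonsmooth absolute value by a smooth majorant with precision parameter $\tau > 0$:

$$|t| \;\approx\; \sqrt{t^2 + \tau^2},$$

which is infinitely differentiable and deviates from $|t|$ by at most $\tau$; the same device is applied to the min-type cluster function.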
21. An incremental clustering algorithm based on hyperbolic smoothing
- Author
-
Adil M. Bagirov, Adilson Elias Xavier, Gurkan Ozturk, and Burak Ordin
- Subjects
Mathematical optimization, Nonlinear Programming, Control and Optimization, Optimization problem, Applied Mathematics, Correlation clustering, Smoothing Techniques, Constrained clustering, Computational Mathematics, Nonsmooth Optimization, CURE data clustering algorithm, Canopy clustering algorithm, Cluster analysis, Global optimization, Smoothing, Mathematics
- Abstract
Clustering is an important problem in data mining. It can be formulated as a nonsmooth, nonconvex optimization problem. For most global optimization techniques this problem is challenging even in medium-size data sets. In this paper, we propose an approach that allows one to apply local methods of smooth optimization to solve clustering problems. We apply an incremental approach to generate starting points for cluster centers, which enables us to deal with the nonconvexity of the problem. The hyperbolic smoothing technique is applied to handle the nonsmoothness of the clustering problems and to make it possible to apply smooth optimization algorithms to solve them. Results of numerical experiments with eleven real-world data sets and the comparison with state-of-the-art incremental clustering algorithms demonstrate that smooth optimization algorithms in combination with the incremental approach are a powerful alternative to existing clustering algorithms.
- Published
- 2014
22. An algorithm for clusterwise linear regression based on smoothing techniques
- Author
-
Adil M. Bagirov, Hijran G. Mirzayeva, and Julien Ugon
- Subjects
Data set, Mathematical optimization, Control and Optimization, Linear regression, Computational intelligence, Linear regression function, Regression analysis, Incremental algorithm, Algorithm, Smoothing, Global optimization problem, Mathematics
- Abstract
We propose an algorithm based on an incremental approach and smoothing techniques to solve clusterwise linear regression (CLR) problems. This algorithm incrementally divides the whole data set into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate an initial solution for solving global optimization problems at each iteration of the incremental algorithm. Such an approach allows one to find global or approximate global solutions to the CLR problems. The algorithm is tested using several data sets for regression analysis and compared with the multistart and incremental Spath algorithms.
- Published
- 2014
23. Nonsmooth Optimization Algorithm for Solving Clusterwise Linear Regression Problems
- Author
-
Julien Ugon, Hijran G. Mirzayeva, and Adil M. Bagirov
- Subjects
Mathematical optimization, Control and Optimization, Optimization problem, Optimization algorithm, Applied Mathematics, Regression analysis, Linear regression function, Management Science and Operations Research, Discrete gradient method, Theory of computation, Linear regression, Incremental algorithm, Mathematics
- Abstract
Clusterwise linear regression consists of finding a number of linear regression functions each approximating a subset of the data. In this paper, the clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem and an algorithm based on an incremental approach and on the discrete gradient method of nonsmooth optimization is designed to solve it. This algorithm incrementally divides the whole dataset into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate good starting points for solving global optimization problems at each iteration of the incremental algorithm. The algorithm is compared with the multi-start Spath and the incremental algorithms on several publicly available datasets for regression analysis.
- Published
- 2014
24. Aggregate codifferential method for nonsmooth DC optimization
- Author
-
Ali Hakan Tor, Adil M. Bagirov, and Bülent Karasözen
- Subjects
Computational Mathematics, Mathematical optimization, Applied Mathematics, Computation, Aggregate (data warehouse), Convergence (routing), Regular polygon, Bundle methods, Subgradient method, Mathematics
- Abstract
A new algorithm is developed based on the concept of codifferential for minimizing the difference of convex nonsmooth functions. Since the computation of the whole codifferential is not always possible, we use a fixed number of elements from the codifferential to compute the search directions. The convergence of the proposed algorithm is proved. The efficiency of the algorithm is demonstrated by comparing it with the subgradient, the truncated codifferential and the proximal bundle methods using nonsmooth optimization test problems.
- Published
- 2014
25. Solving DC programs using the cutting angle method
- Author
-
Gleb Beliakov, Albert Ferrer, Adil M. Bagirov, Universitat Politècnica de Catalunya. Departament de Matemàtica Aplicada I, and Universitat Politècnica de Catalunya. GNOM - Grup d'Optimització Numèrica i Modelització
- Subjects
Mathematical optimization, Control and Optimization, DC programming, Applied Mathematics, Regular polygon, Derivative, Operations research, Management Science and Operations Research, Computer Science Applications, Cutting Angle method, Minification, Lipschitz programming, Convex function, Global optimization, Mathematics, Subdivision
- Abstract
In this paper, we propose a new algorithm for global minimization of functions represented as a difference of two convex functions. The proposed method is a derivative free method and it is designed by adapting the extended cutting angle method. We present preliminary results of numerical experiments using test problems with difference of convex objective functions and box-constraints. We also compare the proposed algorithm with a classical one that uses prismatical subdivisions.
- Published
- 2014
26. Fractal Image Compression Using Self-Organizing Mapping
- Author
-
Adil M. Ahmed, Rashad A. Al-Jawfi, and Baligh Al-Helali
- Subjects
Lossless compression, Texture compression, Theoretical computer science, Iterated function system, Collage theorem, Fractal compression, Fractal transform, General Medicine, Algorithm, Mathematics, Image compression, Data compression
- Abstract
One of the main disadvantages of fractal image compression is the time lost in compressing (encoding) an image and converting it into an iterated function system (IFS). In this paper, the idea of the inverse problem of a fixed point is introduced. This inverse problem is based on the collage theorem, which is the cornerstone of the mathematical idea of fractal image compression. The idea is then applied through iterated function systems, including grayscale iterated function systems, down to general transformations. A mathematical formulation is also provided on the digital image space handled by the computer. The process is then revised to reduce the time required for image compression by excluding parts of the image that have a specific characteristic. Neural network algorithms are applied to the compression (encoding) process. Experimental results are presented and the performance of the proposed algorithm is discussed. Finally, a comparison between the filtered-ranges method and the self-organizing method is given.
- Published
- 2014
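The collage theorem underlying entry 26 bounds how far the attractor $A_W$ of a contractive transformation $W$ with contraction factor $s < 1$ can be from a target image $T$: if $d(T, W(T)) \le \varepsilon$, then

$$d(T, A_W) \le \frac{\varepsilon}{1 - s},$$

so encoding reduces to searching for maps whose collage $W(T)$ is close to the image itself.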
27. Nonsmooth nonconvex optimization approach to clusterwise linear regression problems
- Author
-
Adil M. Bagirov, Hijran G. Mirzayeva, and Julien Ugon
- Subjects
Mathematical optimization, Information Systems and Management, General Computer Science, Artificial neural network, k-means clustering, Regression analysis, Function (mathematics), Management Science and Operations Research, Industrial and Manufacturing Engineering, Regression, Data set, Modeling and Simulation, Linear regression, Point (geometry), Mathematics
- Abstract
Clusterwise regression consists of finding a number of regression functions, each approximating a subset of the data. In this paper, a new approach for solving clusterwise linear regression problems is proposed based on a nonsmooth nonconvex formulation. We present an algorithm for minimizing this nonsmooth nonconvex function. This algorithm incrementally divides the whole data set into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate a good starting point for solving global optimization problems at each iteration of the incremental algorithm. Such an approach allows one to find a global or near-global solution to the problem when the data sets are sufficiently dense. The algorithm is compared with the multistart Spath algorithm on several publicly available data sets for regression analysis.
- Published
- 2013
28. Hyperbolic smoothing function method for minimax problems
- Author
-
Adil M. Bagirov, N Sultanova, and A. Al Nuaimat
- Subjects
Mathematical optimization, Control and Optimization, Applied Mathematics, Exponential smoothing, Subderivative, Function (mathematics), Management Science and Operations Research, Minimax, Minimax approximation algorithm, Nonlinear programming, Algebraic number, Smoothing, Mathematics
- Abstract
In this article, an approach for solving finite minimax problems is proposed. This approach is based on the use of hyperbolic smoothing functions. In order to apply the hyperbolic smoothing, we reformulate the objective function in the minimax problem and study the relationship between the original minimax and reformulated problems. We also study the main properties of the hyperbolic smoothing function. Based on these results, an algorithm for solving the finite minimax problem is proposed and implemented in the General Algebraic Modeling System (GAMS). We present preliminary results of numerical experiments with well-known nonsmooth optimization test problems. We also compare the proposed algorithm with the algorithm that uses the exponential smoothing function as well as with the algorithm based on a nonlinear programming reformulation of the finite minimax problem.
- Published
- 2013
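A common form of hyperbolic smoothing in this minimax setting (entry 28; the exact reformulation in the paper may differ) replaces the kink of $\max\{0, t\}$ by the smooth majorant

$$\phi_\tau(t) = \frac{t + \sqrt{t^2 + \tau^2}}{2}, \qquad \tau > 0,$$

which satisfies $0 \le \phi_\tau(t) - \max\{0, t\} \le \tau/2$ for all $t$.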
29. Limited memory discrete gradient bundle method for nonsmooth derivative-free optimization
- Author
-
Adil M. Bagirov and Napsu Karmitsa
- Subjects
Continuous optimization, Mathematical optimization, Control and Optimization, Optimization problem, Applied Mathematics, Computation, Derivative-free optimization, Convergence (routing), Random optimization, Management Science and Operations Research, Lipschitz continuity, Subgradient method, Mathematics
- Abstract
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases derivative-free methods are the better (or only) choice since they do not use explicit computation of subgradients. However, these methods require a large number of function evaluations even for moderately large problems. In this article, we propose an efficient derivative-free limited memory discrete gradient bundle method for nonsmooth, possibly nonconvex optimization. The convergence of the proposed method is proved for locally Lipschitz continuous functions and the numerical experiments to be presented confirm the usability of the method especially for medium size and large-scale problems.
- Published
- 2012
30. Subgradient Method for Nonconvex Nonsmooth Optimization
- Author
-
Napsu Karmitsa, N Sultanova, A. Al Nuaimat, Adil M. Bagirov, and L. Jin
- Subjects
Mathematical optimization, Control and Optimization, Line search, Optimization problem, Applied Mathematics, Management Science and Operations Research, Simple (abstract algebra), Theory of computation, Convergence (routing), Point (geometry), Subgradient method, Mathematics, Descent (mathematics)
- Abstract
In this paper, we introduce a new method for solving nonconvex nonsmooth optimization problems. It uses quasisecants, which are subgradients computed in some neighborhood of a point. The proposed method contains simple procedures for finding descent directions and for solving line search subproblems. The convergence of the method is studied and preliminary results of numerical experiments are presented. The comparison of the proposed method with the subgradient and the proximal bundle methods is demonstrated using results of numerical experiments.
- Published
- 2012
31. Comparing different nonsmooth minimization methods and software
- Author
-
Marko M. Mäkelä, Adil M. Bagirov, and Napsu Karmitsa
- Subjects
Mathematical optimization, Control and Optimization, Applied Mathematics, Regular polygon, Piecewise linear function, Software, Quadratic equation, Black box, Test set, Minification, Subgradient method, Mathematics
- Abstract
Most nonsmooth optimization (NSO) methods can be divided into two main groups: subgradient methods and bundle methods. In this paper, we test and compare different methods from both groups as well as some methods which may be considered as hybrids of these two and/or some others. All the solvers tested are so-called general black box methods which, at least in theory, can be applied to solve almost all NSO problems. The test set includes a large number of unconstrained nonsmooth convex and nonconvex problems of different size. In particular, it includes piecewise linear and quadratic problems. The aim of this work is not to foreground some methods over the others but to get some insight on which method to select for certain types of problems.
- Published
- 2012
32. Fast modified global k-means algorithm for incremental cluster construction
- Author
-
Adil M. Bagirov, Julien Ugon, and Dean Webb
- Subjects
Computational complexity theory, k-medoids, Iterative method, k-means clustering, Function (mathematics), Set (abstract data type), Matrix (mathematics), Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Cluster analysis, Algorithm, Software, Mathematics
- Abstract
The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and are inefficient for solving clustering problems in large datasets. Recently, incremental approaches have been developed to resolve difficulties with the choice of starting points. The global k-means and the modified global k-means algorithms are based on such an approach. They iteratively add one cluster center at a time. Numerical experiments show that these algorithms considerably improve the k-means algorithm. However, they require storing the whole affinity matrix or computing this matrix at each iteration. This makes both algorithms time consuming and memory demanding for clustering even moderately large datasets. In this paper, a new version of the modified global k-means algorithm is proposed. We introduce an auxiliary cluster function to generate a set of starting points lying in different parts of the dataset. We exploit information gathered in previous iterations of the incremental algorithm to eliminate the need of computing or storing the whole affinity matrix and thereby to reduce computational effort and memory usage. Results of numerical experiments on six standard datasets demonstrate that the new algorithm is more efficient than the global and the modified global k-means algorithms.
- Published
- 2011
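The auxiliary cluster function idea of entry 32 can be illustrated concretely: with $k-1$ centers fixed, each data point is scored by the decrease in total squared error it would produce if added as the $k$-th center, and the best-scoring points serve as starting points. A small illustrative Python sketch (ours, not the authors' code; the $O(n^2)$ pairwise distance matrix is only reasonable for small data):

```python
import numpy as np

def candidate_scores(X, centers):
    """score[y] = sum over points a of max(0, d_prev(a) - ||X[y] - a||^2),
    the drop in clustering error if X[y] were added as a new center."""
    d_prev = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
    pair = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pair[y, a]
    return np.maximum(d_prev[None, :] - pair, 0.0).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:100] += 4.0                              # two well-separated blobs
c1 = X.mean(axis=0, keepdims=True)          # the best single center
y = X[np.argmax(candidate_scores(X, c1))]   # promising second center
print(y)                                    # lands inside one of the blobs
```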
33. Preface of the special issue OR: connecting sciences supported by global optimization related to the 25th European conference on operational research (EURO XXV 2012)
- Author
-
Gerhard-Wilhelm Weber, Adil M. Bagirov, and Kaisa Miettinen
- Subjects
Control theory (sociology), Continuous optimization, Control and Optimization, Operations research, Applied Mathematics, Related research, Management Science and Operations Research, Global optimization, Implementation, Computer Science Applications, Mathematics, Theme (narrative)
- Abstract
This volume of the Journal of Global Optimization is devoted to papers presented at the 25th European Conference on Operational Research, EURO XXV 2012, which was held on July 8-11, 2012, in Vilnius, Lithuania. The conference attracted 2044 registered participants from 68 countries from all the continents across the OR community. Continuous Optimization was one of the largest main areas in the entire conference, with about 250 participants. They exchanged experiences in solving real-world problems, discussed recent achievements in optimization theory, methods and applications, and reported on developments and implementations of appropriate models and efficient solution procedures for problems of continuous optimization. The EURO XXV 2012 conference provided an excellent forum for researchers and practitioners to promote their recent advances in continuous optimization to the wider scientific community, to identify new research challenges as well as promising research developments in theory, methods and applications, and to promote interactions with colleagues from related research areas of modern OR and its emerging applications. In this spirit, this special issue is devoted to the theme of connecting sciences supported by global optimization. For this special issue, participants of EURO XXV 2012 were invited to submit papers on continuous optimization and related topics. The papers included recent theoretical and applied contributions in various fields including linear, nonlinear, stochastic, parametric and dynamic optimization as well as control theory. Based on rigorous reviewing processes, six papers
- Published
- 2014
34. Classification through incremental max–min separability
- Author
-
Dean Webb, Julien Ugon, Adil M. Bagirov, and Bülent Karasözen
- Subjects
Piecewise linear function, Mathematical optimization, Optimization problem, Hyperplane, Artificial Intelligence, Test set, Supervised learning, Data classification, Linear classifier, Computer Vision and Pattern Recognition, Heuristics, Mathematics
- Abstract
Piecewise linear functions can be used to approximate non-linear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers. However, they require a long training time. Finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm for globally training hyperplanes using an incremental approach. Such an approach allows one to find a near global minimizer of the classification error function and to compute as few hyperplanes as needed for separating sets. We apply this algorithm for solving supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and its test set accuracy is consistently good on most data sets compared with mainstream classifiers.
- Published
- 2010
35. Codifferential method for minimizing nonsmooth DC functions
- Author
-
Adil M. Bagirov and Julien Ugon
- Subjects
Mathematical optimization, Control and Optimization, Applied Mathematics, Convergence (routing), Minimization algorithm, Decomposition (computer science), A priori and a posteriori, Management Science and Operations Research, Convex function, Bundle methods, Computer Science Applications, Mathematics, Descent (mathematics)
- Abstract
In this paper, a new algorithm to locally minimize nonsmooth functions represented as a difference of two convex functions (DC functions) is proposed. The algorithm is based on the concept of codifferential. It is assumed that DC decomposition of the objective function is known a priori. We develop an algorithm to compute descent directions using a few elements from codifferential. The convergence of the minimization algorithm is studied and its comparison with different versions of the bundle methods using results of numerical experiments is given.
- Published
- 2010
36. An L2-Boosting Algorithm for Estimation of a Regression Function
- Author
-
Adil M. Bagirov, Michael Kohler, and C. Clausen
- Subjects
Polynomial regression, Mathematical optimization, Explained sum of squares, Least trimmed squares, Library and Information Sciences, Computer Science Applications, Residual sum of squares, Non-linear least squares, Applied mathematics, Total least squares, Nonlinear regression, Information Systems, Variance function, Mathematics
- Abstract
An L2-boosting algorithm for estimation of a regression function from random design is presented, which consists of repeatedly fitting a function from a fixed nonlinear function space to the residuals of the data by least squares and defining the estimate as a linear combination of the resulting least squares estimates. Splitting of the sample is used to decide after how many iterations of smoothing of the residuals the algorithm terminates. The rate of convergence of the algorithm is analyzed in the case of an unbounded response variable. The method is used to fit a sum of maxima of minima of linear functions to a given data set, and is compared with other nonparametric regression estimates using simulated data.
- Published
- 2010
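The generic L2-boosting loop of entry 36 repeatedly fits a base learner to the current residuals by least squares and returns the sum of the (shrunken) fits. A toy Python sketch with regression stumps as a stand-in base class (the paper instead fits maxima of minima of linear functions and picks the stopping iteration by sample splitting):

```python
import numpy as np

def fit_stump(x, r):
    """Least-squares threshold stump fitted to residuals r (1-D inputs x)."""
    best = None
    for t in np.unique(x)[:-1]:            # last threshold leaves one side empty
        pred = np.where(x <= t, r[x <= t].mean(), r[x > t].mean())
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, r[x <= t].mean(), r[x > t].mean())
    _, t, a, b = best
    return lambda z: np.where(z <= t, a, b)

def l2_boost(x, y, n_iter=50, nu=0.1):
    """Sum of shrunken least-squares fits to successive residuals."""
    fits, r = [], y.astype(float).copy()
    for _ in range(n_iter):
        g = fit_stump(x, r)
        fits.append(g)
        r -= nu * g(x)                     # update the residuals
    return lambda z: nu * sum(g(z) for g in fits)

x = np.linspace(0.0, 1.0, 100)
y = np.sin(4.0 * x) + 0.1 * np.random.default_rng(1).normal(size=100)
f_hat = l2_boost(x, y)
print(float(((f_hat(x) - y) ** 2).mean()))  # small training residual
```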
37. A quasisecant method for minimizing nonsmooth functions
- Author
-
Asef Nazari Ganjehlou and Adil M. Bagirov
- Subjects
Mathematical optimization, Control and Optimization, Optimization problem, Bundle method, Applied Mathematics, Subderivative, Variety (universal algebra), Bundle methods, Stationary point, Software, Descent (mathematics), Mathematics
- Abstract
We present an algorithm to locally minimize nonsmooth, nonconvex functions. In order to find descent directions, the notion of quasisecants, introduced in this paper, is applied. We prove that the algorithm converges to Clarke stationary points. Numerical results are presented demonstrating the applicability of the proposed algorithm to a wide variety of nonsmooth, nonconvex optimization problems. We also compare the proposed algorithm with the bundle method using numerical results.
- Published
- 2010
38. A multidimensional descent method for global optimization
- Author
-
Adil M. Bagirov, Jiapu Zhang, and Alexander Rubinov
- Subjects
Mathematical optimization, Control and Optimization, Iterated local search, Applied Mathematics, Management Science and Operations Research, Local optimum, Simulated annealing, Local search (optimization), Guided Local Search, Gradient descent, Global optimization, Hill climbing, Mathematics
- Abstract
This article presents a new multidimensional descent method for solving global optimization problems with box constraints. This is a hybrid method in which a local search method is used for local descent and a global search is used for further multidimensional search on the subsets of intersection of cones generated by the local search method and the feasible region. The discrete gradient method is used for the local search, and the cutting angle method is used for the global search. Two- and three-dimensional cones are used for the global search. Such an approach allows one, as a rule, to escape local minimizers which are not global ones. The proposed method is a local optimization method with strong global search properties. We present results of numerical experiments using both smooth and nonsmooth global optimization test problems. These results demonstrate that the proposed algorithm allows one to find a global or a near-global minimizer.
- Published
- 2009
39. Estimation of a Regression Function by Maxima of Minima of Linear Functions
- Author
-
Adil M. Bagirov, C. Clausen, and Michael Kohler
- Subjects
Independent and identically distributed random variables, Regression analysis, Library and Information Sciences, Computer Science Applications, Nonparametric regression, Maxima and minima, Rate of convergence, Statistics, Applied mathematics, Maxima, Nonlinear regression, Random variable, Information Systems, Mathematics
- Abstract
In this paper, estimation of a regression function from independent and identically distributed random variables is considered. Estimates are defined by minimization of the empirical L2 risk over a class of functions, which are defined as maxima of minima of linear functions. Results concerning the rate of convergence of the estimates are derived. In particular, it is shown that for smooth regression functions satisfying the assumption of single index models, the estimate is able to achieve (up to some logarithmic factor) the corresponding optimal one-dimensional rate of convergence. Hence, under these assumptions, the estimate is able to circumvent the so-called curse of dimensionality. The small sample behavior of the estimates is illustrated by applying them to simulated data.
- Published
- 2009
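The function class in entry 39 consists of max-min combinations of affine functions: with parameters $a_{i,j} \in \mathbb{R}^d$ and $b_{i,j} \in \mathbb{R}$, the estimates take the form

$$m(x) = \max_{1 \le i \le K} \; \min_{1 \le j \le L} \big( a_{i,j}^{\top} x + b_{i,j} \big),$$

a class rich enough (for suitable K and L) to represent any continuous piecewise linear function; the estimate minimizes the empirical $L_2$ risk over it.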
40. Modified global k-means algorithm for minimum sum-of-squares clustering problems
- Author
-
Adil M. Bagirov
- Subjects
Mathematical optimization, k-medoids, Population-based incremental learning, Parallel algorithm, Artificial Intelligence, CURE data clustering algorithm, Ramer–Douglas–Peucker algorithm, Signal Processing, Canopy clustering algorithm, Computer Vision and Pattern Recognition, Algorithm, Software, k-medians clustering, Mathematics, FSA-Red Algorithm
- Abstract
The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm, has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithm. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm; however, it requires more computational time than the global k-means algorithm.
- Published
- 2008
41. An algorithm for the estimation of a regression function by continuous piecewise linear functions
- Author
-
Adil M. Bagirov, Michael Kohler, and Conny Clausen
- Subjects
Mathematical optimization, Control and Optimization, Optimization problem, Applied Mathematics, Estimator, Nonparametric regression, Linear-fractional programming, Piecewise linear function, Maxima and minima, Computational Mathematics, Step function, Piecewise, Algorithm, Mathematics
- Abstract
The problem of the estimation of a regression function by continuous piecewise linear functions is formulated as a nonconvex, nonsmooth optimization problem. Estimates are defined by minimization of the empirical L2 risk over a class of functions which are defined as maxima of minima of linear functions. An algorithm for finding continuous piecewise linear functions is presented. We observe that the objective function in the optimization problem is semismooth, quasidifferentiable and piecewise partially separable. The use of these properties allows us to design an efficient algorithm for approximation of subgradients of the objective function and to apply the discrete gradient method for its minimization. We present computational results with some simulated data and compare the new estimator with a number of existing ones.
- Published
- 2008
42. Discrete Gradient Method: Derivative-Free Method for Nonsmooth Optimization
- Author
-
Bülent Karasözen, Adil M. Bagirov, and M. Sezer
- Subjects
Mathematical optimization, Control and Optimization, Optimization problem, Applied Mathematics, Subderivative, Management Science and Operations Research, Solver, Discrete optimization, Theory of computation, Derivative-free optimization, Gradient method, Mathematics, Descent (mathematics)
- Abstract
A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can be applied to find descent directions of nonsmooth functions. The preliminary results of numerical experiments with unconstrained nonsmooth optimization problems as well as the comparison of the proposed method with the nonsmooth optimization solver DNLP from CONOPT-GAMS and the derivative-free optimization solver CONDOR are presented.
- Published
- 2007
43. An approximate subgradient algorithm for unconstrained nonsmooth, nonconvex optimization
- Author
-
Adil M. Bagirov and Asef Nazari Ganjehlou
- Subjects
Mathematical optimization ,General Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,Subderivative ,Management Science and Operations Research ,Lipschitz continuity ,Linear inequality ,Convergence (routing) ,Minification ,Subgradient method ,Algorithm ,Software ,Descent (mathematics) ,Mathematics - Abstract
In this paper, a new algorithm for minimizing locally Lipschitz functions is developed. Descent directions in this algorithm are computed by solving a system of linear inequalities. The convergence of the algorithm is proved for quasidifferentiable semismooth functions. We present the results of numerical experiments with both regular and nonregular objective functions, and we compare the proposed algorithm with two different versions of the subgradient method. These results demonstrate the superiority of the proposed algorithm over the subgradient method.
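One schematic reading of the descent-direction step (an assumed illustration, not the paper's exact system): collect approximate subgradients g_i and seek a direction d with g_i . d <= -t for all i, maximizing t under a box bound on d, which is a small linear program.

import numpy as np
from scipy.optimize import linprog

def descent_direction(subgradients):
    G = np.asarray(subgradients)            # (k, n) approximate subgradients
    k, n = G.shape
    # variables: [d (n entries), t]; maximize t, i.e. minimize -t
    c = np.zeros(n + 1); c[-1] = -1.0
    A_ub = np.hstack([G, np.ones((k, 1))])  # rows encode g_i . d + t <= 0
    b_ub = np.zeros(k)
    bounds = [(-1, 1)] * n + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    d, t = res.x[:-1], res.x[-1]
    return (d, t) if t > 1e-9 else (None, 0.0)  # t near 0: near-stationary point

d, t = descent_direction([[1.0, 0.0], [-0.5, 1.0]])
print(d, t)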
- Published
- 2007
44. Fractal Fourier Coefficients with Application to Identification Protocols
- Author
-
Arkan J. Mohammed, Adil M. Ahmed, Nadia M. G. Al-Saidi, and Elisha A. Ogada
- Subjects
Identification (information) ,Fractal ,Biological system ,Fourier series ,Mathematics
- Published
- 2015
45. An incremental piecewise linear classifier based on polyhedral conic separation
- Author
-
Adil M. Bagirov, Refail Kasimbeyli, and Gurkan Ozturk
- Subjects
Mathematical optimization ,Discrete Gradient Method ,Classification ,Piecewise linear function ,Nonlinear system ,Error function ,Discrete gradient method ,Artificial Intelligence ,Conic section ,Nonsmooth Nonconvex Optimization ,A priori and a posteriori ,Polyhedral Conic Separation ,Classifier (UML) ,Software ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
In this paper, a piecewise linear classifier based on polyhedral conic separation is developed. This classifier builds nonlinear boundaries between classes using polyhedral conic functions. Since the number of polyhedral conic functions separating classes is not known a priori, an incremental approach is proposed to build separating functions. These functions are found by minimizing an error function which is nonsmooth and nonconvex. A special procedure, based on the incremental approach, is proposed to generate starting points for minimizing the error function. The discrete gradient method, which is a derivative-free method for nonsmooth optimization, is applied to minimize the error function starting from those points. The proposed classifier is applied to solve classification problems on 12 publicly available data sets and compared with some mainstream and piecewise linear classifiers.
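For concreteness, a polyhedral conic function is commonly written g(x) = w.(x - c) + xi*||x - c||_1 - gamma, and a point is assigned to the target class when some fitted function is non-positive; the sketch below assumes this form and uses illustrative, unfitted parameters.

import numpy as np

def pcf(x, w, c, xi, gamma):
    # polyhedral conic function centered at c with slope w and slack gamma
    return w @ (x - c) + xi * np.abs(x - c).sum() - gamma

def classify(x, pcfs):
    # class A if any separating PCF in the incremental collection is <= 0
    return any(pcf(x, *p) <= 0 for p in pcfs)

pcfs = [(np.array([1.0, -1.0]), np.array([0.0, 0.0]), 0.5, 1.0)]
print(classify(np.array([0.2, 0.1]), pcfs))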
- Published
- 2015
46. A Heuristic Algorithm for Solving the Minimum Sum-of-Squares Clustering Problems
- Author
-
Burak Ordin and Adil M. Bagirov
- Subjects
Mathematical optimization ,Control and Optimization ,Applied Mathematics ,Correlation clustering ,k-means clustering ,Management Science and Operations Research ,Computer Science Applications ,Data stream clustering ,CURE data clustering algorithm ,Canopy clustering algorithm ,Heuristics ,Cluster analysis ,Global optimization ,Mathematics - Abstract
Clustering is an important task in data mining. It can be formulated as a global optimization problem which is challenging for existing global optimization techniques even in medium-sized data sets. Various heuristics have been developed to solve the clustering problem. The global k-means and modified global k-means algorithms are among the most efficient heuristics for solving the minimum sum-of-squares clustering problem. However, these algorithms are not always accurate in finding global or near-global solutions to the clustering problem. In this paper, we introduce a new algorithm to improve the accuracy of the modified global k-means algorithm in finding global solutions. We use an auxiliary cluster problem to generate a set of initial points and apply the k-means algorithm starting from these points to find the global solution to the clustering problems. Numerical results on 16 real-world data sets clearly demonstrate the superiority of the proposed algorithm over the global and modified global k-means algorithms in finding global solutions to clustering problems.
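A minimal sketch of the starting-point idea (the names and the candidate-selection rule are assumptions of this illustration): rank data points by an auxiliary criterion, run k-means from the most promising candidates, and keep the best final solution.

import numpy as np
from scipy.cluster.vq import kmeans2

def initial_candidates(data, centers, n_candidates=5):
    d_nearest = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
    # auxiliary value of making point a a new center: smaller is better
    vals = [np.minimum(d_nearest, ((data - a) ** 2).sum(1)).sum() for a in data]
    return data[np.argsort(vals)[:n_candidates]]

def best_extension(data, centers):
    best, best_sse = None, np.inf
    for y in initial_candidates(data, centers):
        cents, labels = kmeans2(data, np.vstack([centers, y]), minit="matrix")
        sse = ((data - cents[labels]) ** 2).sum()
        if sse < best_sse:
            best, best_sse = cents, sse
    return best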
- Published
- 2015
47. Non-smooth optimization methods for computation of the Conditional Value-at-risk and portfolio optimization
- Author
-
Gleb Beliakov and Adil M. Bagirov
- Subjects
Continuous optimization ,Vector optimization ,Mathematical optimization ,Control and Optimization ,Applied Mathematics ,Discrete optimization ,Derivative-free optimization ,Random optimization ,Management Science and Operations Research ,Portfolio optimization ,Metaheuristic ,Global optimization ,Mathematics - Abstract
We examine the numerical performance of various methods for calculating the Conditional Value-at-Risk (CVaR), and for portfolio optimization with respect to this risk measure. We concentrate on the method proposed by Rockafellar and Uryasev (2000, Optimization of conditional value-at-risk. Journal of Risk, 2, 21–41), which converts this problem to one of convex optimization. We compare the use of linear programming techniques against the discrete gradient method, a non-smooth optimization technique, and establish the supremacy of the latter. We show that non-smooth optimization can be used efficiently for large portfolio optimization, and we also examine parallel execution of this method on computer clusters.
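The Rockafellar and Uryasev construction referred to above minimizes, over alpha, the function F_beta(alpha) = alpha + E[max(loss - alpha, 0)]/(1 - beta); its minimum value is the CVaR and is attained at the beta-quantile of the losses (the VaR). A small scenario-based sketch (the data are illustrative):

import numpy as np

def ru_function(alpha, losses, beta):
    # the Rockafellar-Uryasev auxiliary function on scenario losses
    return alpha + np.maximum(losses - alpha, 0.0).mean() / (1.0 - beta)

def cvar(losses, beta):
    var = np.quantile(losses, beta)  # minimizing alpha is the beta-quantile
    return ru_function(var, losses, beta)

losses = np.random.default_rng(1).normal(size=10_000)
print(cvar(losses, 0.95))  # roughly 2.06 in expectation for standard normal losses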
- Published
- 2006
48. Piecewise Partially Separable Functions and a Derivative-free Algorithm for Large Scale Nonsmooth Optimization
- Author
-
Julien Ugon and Adil M. Bagirov
- Subjects
Mathematical optimization ,Control and Optimization ,Scale (ratio) ,Applied Mathematics ,Structure (category theory) ,Subderivative ,Derivative ,Management Science and Operations Research ,Computer Science Applications ,Separable space ,Piecewise ,Minification ,Algorithm ,Mathematics - Abstract
This paper introduces the notion of piecewise partially separable functions and studies their properties. We also consider some of the many applications of these functions. Finally, we consider the problem of minimizing piecewise partially separable functions and develop an algorithm for its solution. This algorithm exploits the structure of such functions. We present the results of preliminary numerical experiments.
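As one concrete instance of the structure (my reading, not the paper's definition verbatim): a finite maximum of partially separable functions, i.e. of sums of terms each depending on only a few variables.

import numpy as np

def piecewise_partially_separable(x):
    # two partially separable pieces over small, overlapping index sets
    f1 = (x[0] - 1.0) ** 2 + abs(x[1] + x[2])  # terms on {0} and {1, 2}
    f2 = abs(x[0] * x[1]) + (x[2] + 2.0) ** 2  # terms on {0, 1} and {2}
    return max(f1, f2)

print(piecewise_partially_separable(np.array([1.0, -1.0, 1.0])))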
- Published
- 2006
49. A new nonsmooth optimization algorithm for minimum sum-of-squares clustering problems
- Author
-
Adil M. Bagirov and John Yearwood
- Subjects
Mathematical optimization ,Information Systems and Management ,General Computer Science ,Optimization algorithm ,Modeling and Simulation ,Mathematics::Optimization and Control ,Explained sum of squares ,Management Science and Operations Research ,Cluster analysis ,Industrial and Manufacturing Engineering ,Mathematics - Abstract
The minimum sum-of-squares clustering problem is formulated as a problem of nonsmooth, nonconvex optimization, and an algorithm for solving it based on nonsmooth optimization techniques is developed. The issue of applying this algorithm to large data sets is discussed. Results of numerical experiments are presented which demonstrate the effectiveness of the proposed algorithm.
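For reference, the nonsmooth formulation in question takes, for centers x_1, ..., x_k and data points a_1, ..., a_m, the objective f = (1/m) * sum_j min_i ||x_i - a_j||^2; the 1/m normalisation and names are assumptions of this sketch. A direct evaluation:

import numpy as np

def mssc_objective(centers, data):
    # squared distances of every point to every center, shape (m, k)
    dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return dists.min(axis=1).mean()  # each point charged to its nearest center

data = np.random.default_rng(2).normal(size=(200, 2))
centers = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(mssc_objective(centers, data))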
- Published
- 2006
50. A derivative-free method for linearly constrained nonsmooth optimization
- Author
-
Adil M. Bagirov, Moumita Ghosh, and Dean Webb
- Subjects
Mathematical optimization ,Class (set theory) ,Control and Optimization ,Optimization problem ,Applied Mathematics ,Strategy and Management ,Computation ,Constrained optimization ,Derivative ,Function (mathematics) ,Subderivative ,Lipschitz continuity ,Atomic and Molecular Physics, and Optics ,Business and International Management ,Electrical and Electronic Engineering ,Mathematics - Abstract
This paper develops a new derivative-free method for solving linearly constrained nonsmooth optimization problems. The objective functions in these problems are, in general, non-regular locally Lipschitz continuous functions, and the computation of their generalized subgradients is a difficult task. In this paper, we suggest an algorithm for the computation of subgradients of a broad class of non-regular locally Lipschitz continuous functions. This algorithm is based on the notion of a discrete gradient. An algorithm for solving linearly constrained nonsmooth optimization problems based on discrete gradients is developed. We report preliminary results of numerical experiments which demonstrate that the proposed algorithm is efficient for solving linearly constrained nonsmooth optimization problems.
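One small piece of the linearly constrained setting can be sketched: projecting a search direction onto the null space of the linear equality constraints A x = b so that steps remain feasible. The projection step is an assumed illustration; the paper's algorithm is more elaborate.

import numpy as np

def project_direction(d, A):
    # P = I - A^T (A A^T)^{-1} A projects onto {v : A v = 0}
    AAT_inv = np.linalg.inv(A @ A.T)
    return d - A.T @ (AAT_inv @ (A @ d))

A = np.array([[1.0, 1.0, 0.0]])  # single constraint x1 + x2 = const
print(project_direction(np.array([1.0, 0.0, 1.0]), A))  # -> [0.5, -0.5, 1.0]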
- Published
- 2006