Training of the feed forward artificial neural networks using dragonfly algorithm
dc.contributor.author | Gulcu, Saban | |
dc.date.accessioned | 2024-02-23T14:02:10Z | |
dc.date.available | 2024-02-23T14:02:10Z | |
dc.date.issued | 2022 | |
dc.department | NEÜ | en_US |
dc.description.abstract | One of the most important components affecting the performance of an artificial neural network (ANN) is the training algorithm. Training algorithms optimize the weights and biases of the ANN according to the input-output patterns. Two types of training algorithms are widely used: gradient methods and meta-heuristic methods. Gradient methods are effective in training ANNs, but they have several disadvantages. First, gradient methods are prone to premature convergence; second, their performance depends strongly on the initial parameters and positions; third, they can easily get stuck in local optima. To overcome these disadvantages, this article presents a new hybrid algorithm (DA-MLP) that trains feed-forward multilayer perceptron (MLP) networks using the dragonfly algorithm. The dragonfly algorithm optimizes the weights and biases of the MLP. In the experiments, one real-world problem from the civil engineering field and eight classification datasets were used. To verify the success of DA-MLP, its results were compared with those of four algorithms: BAT-MLP (based on the bat optimization algorithm), SMS-MLP (based on the states of matter search optimization algorithm), PSO-MLP (based on the particle swarm optimization algorithm), and backpropagation (BP). The experimental study showed that the DA-MLP algorithm is more efficient than the other algorithms. (c) 2022 Elsevier B.V. All rights reserved. | en_US |
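The abstract only outlines the hybridization at a high level. As a rough, minimal sketch (not the paper's implementation): each dragonfly position vector encodes the flattened weights and biases of a one-hidden-layer MLP, and the mean squared error on the training set serves as the fitness function. The toy dataset, layer sizes, population size, and coefficient schedules below are assumptions for illustration, and the five behaviour weights of the full dragonfly algorithm (separation, alignment, cohesion, food attraction, enemy distraction) are collapsed into a single coefficient.

    # Illustrative DA-MLP-style training sketch (simplified dragonfly algorithm).
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary-classification data (assumed; the paper uses eight datasets).
    X = rng.normal(size=(100, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    n_in, n_hid, n_out = 4, 6, 1
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out  # all weights + biases

    def unpack(v):
        # Split a flat position vector into the MLP's weight matrices and biases.
        i = 0
        W1 = v[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
        b1 = v[i:i + n_hid]; i += n_hid
        W2 = v[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
        b2 = v[i:]
        return W1, b1, W2, b2

    def mse(v):
        # Fitness: mean squared error of the MLP encoded by position vector v.
        W1, b1, W2, b2 = unpack(v)
        h = np.tanh(X @ W1 + b1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
        return np.mean((out.ravel() - y) ** 2)

    pop, iters = 30, 200
    pos = rng.uniform(-1, 1, size=(pop, dim))  # dragonfly positions
    vel = np.zeros_like(pos)                   # step vectors (Delta X)

    for t in range(iters):
        fit = np.array([mse(p) for p in pos])
        food = pos[fit.argmin()]   # best solution attracts the swarm
        enemy = pos[fit.argmax()]  # worst solution repels it
        w = 0.9 - t * (0.5 / iters)   # inertia decays over time (assumed schedule)
        c = 0.1 * (1 - t / iters)     # single behaviour weight (assumed schedule)
        center = pos.mean(axis=0)
        for i in range(pop):
            S = -np.sum(pos - pos[i], axis=0)  # separation
            A = vel.mean(axis=0)               # alignment
            C = center - pos[i]                # cohesion
            F = food - pos[i]                  # attraction to food
            E = enemy + pos[i]                 # distraction from enemy
            vel[i] = np.clip(c * (S + A + C + F + E) + w * vel[i], -1, 1)
            pos[i] = np.clip(pos[i] + vel[i], -5, 5)

    best = pos[np.array([mse(p) for p in pos]).argmin()]
    print("final training MSE:", mse(best))

The key design point carried over from the abstract is that the metaheuristic treats the network purely as a black-box fitness function, so no gradients are needed; only the encoding (unpack) and the fitness (mse) are MLP-specific.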
dc.identifier.doi | 10.1016/j.asoc.2022.109023 | |
dc.identifier.issn | 1568-4946 | |
dc.identifier.issn | 1872-9681 | |
dc.identifier.scopus | 2-s2.0-85131088632 | en_US |
dc.identifier.scopusquality | Q1 | en_US |
dc.identifier.uri | https://doi.org/10.1016/j.asoc.2022.109023 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12452/11620 | |
dc.identifier.volume | 124 | en_US |
dc.identifier.wos | WOS:000808523400003 | en_US |
dc.identifier.wosquality | Q1 | en_US |
dc.indekslendigikaynak | Web of Science | en_US |
dc.indekslendigikaynak | Scopus | en_US |
dc.language.iso | en | en_US |
dc.publisher | Elsevier | en_US |
dc.relation.ispartof | Applied Soft Computing | en_US |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Academic Staff | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Artificial Neural Networks | en_US |
dc.subject | Training of Artificial Neural Networks | en_US |
dc.subject | Dragonfly Algorithm | en_US |
dc.subject | Optimization | en_US |
dc.subject | Multilayer Perceptron | en_US |
dc.title | Training of the feed forward artificial neural networks using dragonfly algorithm | en_US |
dc.type | Article | en_US |