Training of the feed forward artificial neural networks using dragonfly algorithm


Date

2022


Publisher

Elsevier

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

One of the parts of an artificial neural network (ANN) that most strongly affects its performance is the training algorithm, which optimizes the network's weights and biases to fit the input-output patterns. Two families of training algorithms are widely used: gradient-based methods and meta-heuristic methods. Gradient-based methods are effective at training ANNs, but they have several disadvantages: they are prone to premature convergence, their performance depends strongly on the initial parameters and positions, and they can easily get stuck in local optima. To overcome these disadvantages, this article presents a new hybrid algorithm (DA-MLP) that trains feed-forward multilayer perceptron (MLP) networks using the dragonfly algorithm, which optimizes the weights and biases of the MLP. The experiments used one real-world problem from civil engineering and eight classification datasets. To verify the success of DA-MLP, its results were compared with those of four algorithms: BAT-MLP (based on the bat optimization algorithm), SMS-MLP (based on the states of matter search optimization algorithm), PSO-MLP (based on the particle swarm optimization algorithm), and the backpropagation (BP) algorithm. The experimental study showed that DA-MLP is more efficient than the other algorithms. (c) 2022 Elsevier B.V. All rights reserved.
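The abstract's core idea, replacing gradient descent with a population-based search over the MLP's flattened weight vector, can be sketched as follows. This is a minimal illustrative sketch, not the authors' DA-MLP: the update rule here keeps only inertia, attraction to the best-known solution (the "food source" in dragonfly terminology), and a random exploration term, while the full dragonfly algorithm also uses separation, alignment, cohesion, and enemy-avoidance terms. The dataset, network sizes, and coefficients are all made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def unpack_forward(w, X, n_in, n_hid):
    """Unpack flat vector w into a 1-hidden-layer MLP and run it on X."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid].reshape(n_hid, 1); i += n_hid
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def fitness(w, X, y, n_in, n_hid):
    """Mean squared error of the network encoded by w (lower is better)."""
    return np.mean((unpack_forward(w, X, n_in, n_hid).ravel() - y) ** 2)

# Toy XOR dataset, purely for demonstration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
n_in, n_hid = 2, 4
dim = n_in * n_hid + n_hid + n_hid + 1  # total count of weights and biases

# Each "dragonfly" is one candidate weight vector for the whole network.
pop = rng.normal(0.0, 1.0, size=(20, dim))
vel = np.zeros_like(pop)
best = min(pop, key=lambda w: fitness(w, X, y, n_in, n_hid)).copy()
f0 = fitness(best, X, y, n_in, n_hid)  # error of the best initial network

for _ in range(300):
    for k in range(len(pop)):
        # Simplified step: inertia + pull toward the best solution + noise.
        vel[k] = (0.9 * vel[k]
                  + 0.5 * rng.random() * (best - pop[k])
                  + 0.1 * rng.normal(size=dim))
        pop[k] += vel[k]
        if fitness(pop[k], X, y, n_in, n_hid) < fitness(best, X, y, n_in, n_hid):
            best = pop[k].copy()

print("MSE before/after search:", f0, fitness(best, X, y, n_in, n_hid))
```

Because the best solution is only ever replaced by a strictly better one, the final error can never exceed the best initial error; this monotonic-improvement property is what the swarm-based trainers compared in the paper (DA-MLP, BAT-MLP, SMS-MLP, PSO-MLP) all share, in contrast to gradient descent, whose progress depends on step size and initialization.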


Keywords

Artificial Neural Networks, Training of Artificial Neural Networks, Dragonfly Algorithm, Optimization, Multilayer Perceptron

Source

Applied Soft Computing

WoS Q Value

Q1

Scopus Q Value

Q1

Volume

124
