Abstract
Gradient descent algorithms such as backpropagation (BP) and its variants are widely used to train multi-layered feed-forward networks in many applications. However, the most serious problem associated with BP is the local-minima problem. In particular, an excessive number of hidden nodes deepens the local-minima problem of the corresponding network. We propose an algorithm that trains stably despite a large number of hidden nodes. This algorithm, called the separate learning algorithm, trains the hidden-to-output and input-to-hidden weights separately. Simulations on several benchmark problems demonstrate the validity of the proposed method.
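The abstract describes the method only at a high level. The sketch below illustrates one plausible reading of "separate learning", assuming a single-hidden-layer sigmoid network on the XOR benchmark, with hidden-to-output and input-to-hidden weight updates alternated between epochs; the alternating schedule, squared-error loss, and learning rate are assumptions for illustration, not the paper's exact procedure.

```python
# A minimal sketch of separate learning: the two weight layers are
# updated in separate phases rather than jointly. All hyperparameters
# here are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, a standard benchmark in local-minima studies.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 8, 1                      # deliberately many hidden nodes
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))    # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))   # hidden-to-output weights
lr = 0.5

def forward(X):
    H = sigmoid(X @ W1)   # hidden activations
    O = sigmoid(H @ W2)   # network output
    return H, O

for epoch in range(5000):
    H, O = forward(X)
    err = O - y                         # gradient of squared error w.r.t. output
    delta_out = err * O * (1 - O)       # output-layer delta

    if epoch % 2 == 0:
        # Phase 1: update only hidden-to-output weights (W1 frozen).
        W2 -= lr * H.T @ delta_out
    else:
        # Phase 2: update only input-to-hidden weights (W2 frozen).
        delta_hid = (delta_out @ W2.T) * H * (1 - H)
        W1 -= lr * X.T @ delta_hid

_, O = forward(X)
print(np.round(O, 3))                   # should approach [0, 1, 1, 0]
```

In this reading, freezing one layer while training the other keeps each phase's error surface simpler than the joint surface, which is one way a method of this kind could remain stable as the number of hidden nodes grows.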
Original language | English |
---|---|
Pages (from-to) | 3640-3643 |
Number of pages | 4 |
Journal | Neurocomputing |
Volume | 71 |
Issue number | 16-18 |
DOIs | |
State | Published - Oct 2008 |
Keywords
- Backpropagation
- Hidden nodes
- Local minima
- Separate learning
- Target values