This article explains the Min and Max hidden units options in the analysis dialog of the SANN module.
Issue/Introduction
Min and Max hidden units in SANN
Environment
Windows
Resolution
Min. hidden units. Specify the minimum number of hidden units to be tried by the Automated Network Search (ANS).
Max. hidden units. Specify the maximum number of hidden units to be tried by the Automated Network Search (ANS).
These limits can be set separately for MLP and RBF networks.
Note: What effect does the number of hidden units have?
In general, increasing the number of hidden units increases the modeling power of the neural network (it can model a more complex underlying function), but it also makes the network larger, harder to train, slower to operate, and more prone to over-fitting (modeling noise instead of the underlying function). Decreasing the number of hidden units has the opposite effect.
If your data come from a fairly simple function, are very noisy, or include too few cases, a network with relatively few hidden units is preferable. If, when experimenting with different numbers of hidden units, you find that larger networks have better training performance but worse selection performance, you are probably over-fitting and should revert to smaller networks.
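The following is a minimal, illustrative sketch of this kind of experiment, written in Python with scikit-learn's MLPRegressor rather than the SANN module itself. It is not the ANS implementation; the data set, the search range (standing in for Min./Max. hidden units), and the use of a held-out validation split as the "selection" set are all assumptions made for the example.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical search range, analogous to Min. hidden units / Max. hidden units
MIN_HIDDEN, MAX_HIDDEN = 3, 12

# Synthetic regression data, split into training and validation ("selection") subsets
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3, random_state=0)

results = []
for n_hidden in range(MIN_HIDDEN, MAX_HIDDEN + 1):
    # One hidden layer with n_hidden units
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    train_r2 = net.score(X_train, y_train)   # training performance
    valid_r2 = net.score(X_valid, y_valid)   # "selection" performance
    results.append((n_hidden, train_r2, valid_r2))
    print(f"hidden units = {n_hidden:2d}  train R^2 = {train_r2:.3f}  valid R^2 = {valid_r2:.3f}")

# If training R^2 keeps improving while validation R^2 declines as n_hidden grows,
# the larger networks are over-fitting; prefer the smaller network with the best
# selection (validation) score.
best = max(results, key=lambda r: r[2])
print(f"Best by selection performance: {best[0]} hidden units")

Running the sketch prints training and validation scores for each candidate size, which is the comparison described above: the point where validation performance stops improving while training performance keeps rising marks where over-fitting begins.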