Schottky, B., Gerl, F. and Krey, U. (1994) Generalization ability and information gain of clock-model perceptrons. Zeitschrift für Physik B - Condensed Matter, 96 (2), pp. 279-289. ISSN 0722-3277
Full text not available from this repository.

Abstract
We study the generalization ability g(Q) of Q-state clock-model perceptrons for (i) Hebbian learning and for certain non-Hebbian learning procedures, namely (ii) learning with maximal stability, (iii) zero stability, and (iv) optimal generalization, for the case of random training sets. Among other results, we find that g(Q) behaves quite differently in the Hebbian and non-Hebbian cases in the limit Q → ∞. For example, in the Hebbian case g(Q) always vanishes proportionally to 1/Q for finite α, whereas in the non-Hebbian cases considered, g(Q) converges for Q → ∞ to a non-trivial continuous function g_∞(α), which vanishes for α < 2 but increases rapidly for α > 2. This means that for (ii), (iii), and (iv), as a function of α at Q = ∞, there is a second-order phase transition from a non-generalizing phase for α ≤ 2 to a generalizing phase for α > 2. Different behaviour of the Hebbian and non-Hebbian cases is also observed for the information gain obtained through learning. For the particular case of AdaTron learning, which is identical to case (ii), we find a geometrical formulation of g_Q(α) that is applicable to more general models.
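The Hebbian teacher-student setting summarized above can be illustrated numerically. The sketch below is not code from the paper: the Gaussian complex teacher, the phase-binning rule for mapping a local field to one of Q clock states, and all parameter values are assumptions chosen for illustration. It estimates the generalization ability g as the fraction of random test inputs on which a Hebbian student reproduces the teacher's clock state.

```python
import numpy as np

rng = np.random.default_rng(0)

def clock_state(z, Q):
    """Map a complex local field z to the index of the nearest of Q phase bins."""
    return np.floor(np.angle(z) / (2 * np.pi / Q) + 0.5).astype(int) % Q

def gen_ability(N=200, Q=4, alpha=3.0, n_test=2000):
    """Estimate the Hebbian generalization ability g(Q) at loading alpha = P/N.

    Teacher weights, inputs, and outputs are Q-th roots of unity / complex
    Gaussians -- an illustrative choice, not the paper's exact construction.
    """
    P = int(alpha * N)
    omega = np.exp(2j * np.pi / Q)

    # Random complex teacher vector (assumption: Gaussian components).
    B = rng.normal(size=N) + 1j * rng.normal(size=N)

    # Training inputs: components are random Q-th roots of unity.
    xi = omega ** rng.integers(0, Q, size=(P, N))

    # Teacher outputs: clock state of the teacher's local field.
    tau = omega ** clock_state(xi @ B, Q)

    # Hebbian student weights: J_j = (1/N) sum_mu tau^mu * conj(xi_j^mu).
    J = (xi.conj().T @ tau) / N

    # Generalization: agreement of student and teacher on fresh random inputs.
    xt = omega ** rng.integers(0, Q, size=(n_test, N))
    return np.mean(clock_state(xt @ J, Q) == clock_state(xt @ B, Q))
```

For Q = 2 this reduces to the familiar Hebbian ±1 perceptron, where g stays well above the chance level 1/Q at moderate α; increasing Q at fixed α drives g down, consistent with the 1/Q decay stated in the abstract.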
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | neural networks; statistical mechanics; storage capacity; algorithm; AdaTron |
| Depositing User: | Dr. Gernot Deinzer |
| Last Modified: | 19 Oct 2022 08:39 |
| URI: | https://pred.uni-regensburg.de/id/eprint/52966 |