Communications - Scientific Letters of the University of Zilina 2000, 2(2):38-55 | DOI: 10.26552/com.C.2000.2.38-55

Intelligent adaptable systems: First order approach

Peter Geczy1, Shiro Usui1
1 Future Technology Research Center, Toyohashi University of Technology, Hibarigaoka, Tempaku-cho, Toyohashi, Japan

The future of communications inevitably calls for technologies featuring a high degree of flexibility, adaptability, and intelligence. Intelligent adaptable systems are particularly suitable for this mission. The majority of adaptable systems utilize neural networks. Artificial neural networks are systems with massive network-like interconnectivity. They are not programmed; they gain their valuable properties through a process of adaptation called learning. The presented intelligent adaptable system utilizes neural network technology. The system incorporates internal adaptability at several levels: it autonomously adapts its parameters and structure to the presented data, and externally it manages its input-output interfaces appropriately. The system is able to select suitable training exemplars from the available data in order to achieve optimal learning performance. After training, the system provides a logical output format of the task. The introduced intelligent adaptable system consists of several modules. The principle and functionality of each module are described and illustratively demonstrated.
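
To make the idea of error-driven exemplar selection more concrete, the following minimal sketch (a toy Python/NumPy illustration, not the algorithm developed in the paper) trains a small two-layer network and, in each epoch, updates the weights only on the currently worst-fitted samples. The target function, network size, learning rate, and selection size are assumptions chosen purely for the example.

```python
# Illustrative sketch only -- not the authors' dynamic sample selection algorithm.
# Idea shown: in each epoch, select the exemplars on which the current network
# performs worst and perform the gradient step on that subset only.
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, X):
    H = np.tanh(X @ W1)        # hidden-layer activations
    return H, H @ W2           # hidden activations and network outputs

# Toy data (assumed for the example): learn y = sin(x) on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

W1 = rng.normal(scale=0.5, size=(1, 8))   # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden-to-output weights
lr, top_k = 0.05, 40                      # assumed learning rate and selection size

for epoch in range(500):
    H, P = forward(W1, W2, X)
    errs = np.sum((P - Y) ** 2, axis=1)            # per-exemplar squared error
    idx = np.argsort(errs)[-top_k:]                # indices of the worst-fitted exemplars
    Xs, Ys, Hs, Ps = X[idx], Y[idx], H[idx], P[idx]
    dP = 2.0 * (Ps - Ys) / top_k                   # gradient of mean squared error w.r.t. outputs
    gW2 = Hs.T @ dP                                # backpropagate to hidden-to-output weights
    gW1 = Xs.T @ ((dP @ W2.T) * (1.0 - Hs ** 2))   # and to input-to-hidden weights
    W1 -= lr * gW1
    W2 -= lr * gW2

print("final mean squared error:", float(np.mean((forward(W1, W2, X)[1] - Y) ** 2)))
```

In the system described in the paper, the selection criterion, the step-length control, and the structural adaptation are considerably more elaborate; the sketch is meant only to anchor the terminology used in the abstract.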


Published: June 30, 2000

Geczy, P., & Usui, S. (2000). Intelligent adaptable systems: First order approach. Communications - Scientific Letters of the University of Zilina, 2(2), 38-55. doi: 10.26552/com.C.2000.2.38-55

References

  1. SHANNON, C. E., WEAVER, W.: The Mathematical Theory of Communications. University of Illinois Press, University of Illinois, 1949.
  2. ARBIB, M. A. (Editor): The Handbook of Brain Theory and Neural Networks. MIT Press, Cambridge, Massachusetts, 1995.
  3. BAUM, E., HAUSSLER, D.: What size of network gives valid generalization. Neural Computation, 1(1):151-160, 1989.
  4. HWANG, J., CHOI, J. J., OH, S., MARKS II, R. J.: Query learning based on boundary search and gradient computation of trained multilayer perceptrons. In Proceedings of IJCNN'90, pp. 57-62, San Diego, 1990.
  5. BAUM, E. B.: Neural net algorithms that learn in polynomial time from examples and queries. IEEE Trans. on Neural Networks, 2(1):5-19, 1991.
  6. BATTITI, R.: Using mutual information for selecting features in supervised neural net learning. IEEE Trans. on Neural Networks, 5(4):537-550, 1994.
  7. CACHIN, C.: Pedagogical pattern selection strategies. Neural Networks, 7(1):175-181, 1994.
  8. MUNRO, P. W.: Repeat until bored: A pattern selection strategy. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4 (Denver), pp. 1001-1008, San Mateo, 1992. Morgan Kaufmann.
  9. FLETCHER, R.: Practical Methods of Optimization. John Wiley & Sons, Essex, 1987.
  10. WOLFE, P.: Convergence conditions for ascent methods. SIAM Review, 11:226-235, 1969.
  11. POWELL, M. J. D.: A view of unconstrained optimization. In L. C. W. Dixon, editor, Optimization in Action, London, 1976. Academic Press.
  12. AL-BAALI, M., FLETCHER, R.: An efficient line search for nonlinear least squares. Journal of Optimization Theory and Applications, 48(3):359-377, 1986.
  13. JACOBS, R. A.: Increasing rates of convergence through learning rate adaptation. Neural Networks, 1:295-307, 1988.
  14. VOGL, T. P., MANGLIS, J. K., RIGLER, A. K., ZINK, T. W., ALKON, D. L.: Accelerating the convergence of the back-propagation method. Biological Cybernetics, 59:257-263, 1988.
  15. PFLUG, Ch. G.: Non-asymptotic confidence bounds for stochastic approximation algorithms. Mathematic, 110:297-314, 1990.
  16. TOLLENAERE, T.: SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Networks, 3:561-573, 1990.
  17. DARKEN, C., MOODY, J.: Towards faster stochastic gradient search. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Proceedings of the Neural Information Processing Systems 4 (Denver), pp. 1009-1016, San Mateo, 1992. Morgan Kaufmann.
  18. OCHIAI, K., TODA, N., USUI, S.: Kick-Out learning algorithm to reduce the oscillation of weights. Neural Networks, 7(5):797-807, 1994.
  19. PERANTONIS, S. J., KARRAS, D. A.: An efficient constrained learning algorithm with momentum acceleration. Neural Networks, 8(2):237-249, 1995.
  20. BECKER, S., LE CUN, Y.: Improving the convergence of back-propagation learning with second order methods. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proceedings of The 1988 Connectionist Models Summer School (Pittsburgh), pp. 62-72, N.Y., 1989. Wiley.
  21. BISHOP, C.: Exact calculation of the Hessian matrix for the multilayer perceptron. Neural Computation, 4(4):494-501, 1992.
  22. YU, X., LOH, N. K., MILLER, W. C.: A new acceleration technique for the backpropagation algorithm. In Proceedings of The IEEE International Conference on Neural Networks, pp. 1157-1161, San Francisco, 1993.
  23. YU, X., CHEN, G., CHENG, S.: Dynamic learning rate optimization of the backpropagation algorithm. IEEE Transactions on Neural Networks, 6(3):669-677, 1995.
  24. HINTON, G. E.: Connectionist learning procedures. Technical Report CMU-CS-87-115, Carnegie-Mellon University, 1987.
  25. WEIGEND, A. S., RUMELHART, D. E., HUBERMAN, B. A.: Generalization by weight elimination with application to forecasting. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pp. 875-882, San Mateo, 1991. Morgan Kaufmann.
  26. LECUN, Y., DENKER, J. S., SOLLA, S. A.: Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pp. 598-605, San Mateo, 1990. Morgan Kaufmann.
  27. HASSIBI, B., STORK, D. G., WOLFF, G. J.: Optimal brain surgeon and general network pruning. In IEEE International Conference on Neural Networks, pp. 293-299, San Francisco, 1993.
  28. CIBAS, T., SOULIÉ, F. F., GALLINARI, P., RAUDYS, S.: Variable selection with neural networks. Neurocomputing, 12:223-248, 1996.
  29. GÉCZY, P., USUI, S.: Learning performance measures for MLP networks. In Proceedings of ICNN'97, pp. 1845-1850, Houston, 1997.
  30. GÉCZY, P., USUI, S.: Effects of structural adjustments on the estimate of spectral radius of error matrices. In Proceedings of ICNN'97, pp. 1862-1867, Houston, 1997.
  31. GÉCZY, P., USUI, S.: Effects of structural modifications of a network on Jacobian and error matrices. Submitted to Neural Networks, March 1997.
  32. TOWELL, G., SHAVLIK, J. W.: Extracting refined rules from knowledge-based neural networks. Machine Learning, 13:71-101, 1993.
  33. ANDREWS, R., DIEDERICH, J., TICKLE, A. B.: A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8:373-389, 1995.
  34. KASABOV, N.: Learning fuzzy rules and approximate reasoning in fuzzy neural networks and hybrid systems. Fuzzy Sets and Systems, 2:135-149, 1996.
  35. FU, L.: Rule generation from neural networks. IEEE Transactions on SMC, 24:1114-1124, 1994.
  36. GÉCZY, P., USUI, S.: Rule extraction from trained artificial neural networks. In Proceedings of ICONIP'97, pp. 835-838, Dunedin, 1997.
  37. GÉCZY, P., USUI, S.: Fuzzy rule acquisition from trained artificial neural networks. Journal of Advanced Computational Intelligence, accepted March 1998.
  38. GÉCZY, P., USUI, S.: Rule extraction from trained artificial neural networks. BEHAVIORMETRIKA, 26(1):89-106, 1999.
  39. GÉCZY, P., USUI, S.: Knowledge acquisition from networks of abstract bio-neurons. In Proceedings of ICONIP'99, pp. 610-615, Perth, 1999.
  40. HORNIK, K.: Multilayer feedforward networks are universal approximators. Neural Networks, 2:359-366, 1989.
  41. MHASKAR, H. N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation, 8:164-177, 1995.
  42. GÉCZY, P., USUI, S.: A novel dynamic sample selection algorithm for accelerated learning. Technical Report NC97-03, IEICE, pp. 189-196, March 1997.
  43. GÉCZY, P., USUI, S.: Sample selection algorithm utilizing Lipschitz continuity condition. In Proceedings of JNNS'97, pp. 190-191, Kanazawa, 1997.
  44. GÉCZY, P., USUI, S.: Dynamic sample selection: Theory. IEICE Transactions on Fundamentals, E81-A(9):1931-1939, 1998.
  45. GÉCZY, P., USUI, S.: Dynamic sample selection: Implementation. IEICE Transactions on Fundamentals, E81-A(9):1940-1947, 1998.
  46. GÉCZY, P., USUI, S.: Deterministic approach to dynamic sample selection. In Proceedings of ICONIP'98, pp. 1612-1615, Kitakyushu, 1998.
  47. ARMIJO, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1-3, 1966.
  48. CENDROWSKA, J.: PRISM: An algorithm for inducing modular rules. International Journal of Man-Machine Studies, 27:349-370, 1987.
  49. FISHER, R. A.: The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(II):179-188, 1936.
  50. DASARATHY, B. V.: Nosing around the neighborhood: A new system structure and classification rule for recognition in partially exposed environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2(1):67-71, 1980.
  51. GÉCZY, P., USUI, S.: Fast back-propagation with automatically adjustable learning rate. Technical Report NC97-61, IEICE, pp. 47-54, December 1997.
  52. GÉCZY, P., USUI, S.: Superlinear and automatically adaptable conjugate gradient training algorithm. Technical Report NC97-149, IEICE, pp. 71-78, March 1998.
  53. GÉCZY, P., USUI, S.: On design of superlinear first order automatic machine learning techniques. In Proceedings of WCCI'98, pp. 51-56, Anchorage, 1998.
  54. GÉCZY, P., USUI, S.: Novel first order optimization classification framework. IEICE Transactions on Fundamentals, submitted June 1999.
  55. GÉCZY, P., USUI, S.: Superlinear conjugate gradient method with adaptable step length and constant momentum term. IEICE Transactions on Fundamentals, submitted June 1999.
  56. GÉCZY, P., USUI, S.: Universal superlinear learning algorithm design. IEEE Transactions on Neural Networks, submitted February 1999.
  57. FLETCHER, R., REEVES, C. M.: Function minimization by conjugate gradients. Computer Journal, 7:149-154, 1964.
  58. WNEK, J., MICHALSKI, R. S.: Comparing symbolic and subsymbolic learning: Three studies. In R. S. Michalski and G. Tecuci, editors, Machine Learning: A Multistrategy Approach, volume 4, San Mateo, 1993. Morgan Kaufmann.
  59. ALEFELD, G., HERZBERGER, J.: Introduction to Interval Computations. Academic Press, New York, 1983.
  60. DUDA, R. O., HART, P. E.: Pattern Classification and Scene Analysis. John Wiley & Sons, 1973.

This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, distribution, and reproduction in any medium, provided the original publication is properly cited. No use, distribution or reproduction is permitted which does not comply with these terms.