Date of Award

2000

Publication Type

Doctoral Thesis

Degree Name

Ph.D.

Department

Electrical and Computer Engineering

First Advisor

Miller, William C.


Keywords

Engineering, Electronics and Electrical.



Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.


A method for the in-the-loop training of neural networks with low-resolution synaptic weights is developed in this thesis. This research was motivated by the need to train an intelligent sensor that had been designed and fabricated with low-resolution weights in order to meet constraints imposed upon the designers. The training method developed in this thesis can also be used as a fault-tolerant training procedure for networks with high-resolution weights that have been effectively reduced to low-resolution weights by malfunctioning circuits. The proposed training method is conceptually new and employs three distinct but interrelated parts, each of which required the development of new approaches.

In the first part, a model of each neuron activation function in the sensor is determined in terms of a small ideal neural subnetwork by using an in-the-loop system identification strategy based on knowledge of the sensor's architecture.

In the second part, a complete neural network model for the sensor is developed that combines the known sensor architecture with these neural subnetworks in place of the individual sensor neurons. This model can be trained using continuous weights and is required as part of the strategy for training networks with highly quantized weights. The standard backpropagation training algorithm cannot be used directly, since no explicit analytical expression for the neuron activation function is available. A variation of the backpropagation algorithm has been derived in the thesis that uses input/output data from the subnetwork together with an approximated derivative expression in order to reach convergence. The continuous weights determined with this algorithm can be readily quantized to their nearest allowable values, so that the effects of weight quantization can be seen immediately.
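The backpropagation variant described above, which replaces the unavailable analytical activation derivative with a derivative approximated from input/output data, might be sketched as follows. This is a minimal illustration only, not the thesis implementation: the tanh stand-in activation, the 2-4-1 network, the XOR task, the finite-difference step `h`, and the 17-level weight grid are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_activation(x):
    # Stand-in for the identified neuron subnetwork: only input/output
    # behaviour is available, no analytical expression (tanh is assumed
    # here purely for illustration).
    return np.tanh(x)

def approx_deriv(f, x, h=1e-3):
    # Central-difference derivative used in place of the unavailable
    # analytical activation derivative.
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Tiny 2-4-1 network on XOR as a stand-in training task.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)

lr, losses = 0.5, []
for _ in range(5000):
    a1 = X @ W1 + b1; z1 = measured_activation(a1)
    a2 = z1 @ W2 + b2; y = measured_activation(a2)
    err = y - T
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation using the finite-difference derivative.
    d2 = err * approx_deriv(measured_activation, a2)
    d1 = (d2 @ W2.T) * approx_deriv(measured_activation, a1)
    W2 -= lr * z1.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)

# Snap the continuous weights to their nearest allowable low-resolution
# values so the effect of quantization can be inspected immediately
# (a uniform 17-level grid is assumed for illustration).
levels = np.linspace(-4.0, 4.0, 17)

def quantize(w):
    idx = np.argmin(np.abs(levels.reshape(-1, 1) - w.ravel()), axis=0)
    return levels[idx].reshape(w.shape)

W1q, W2q = quantize(W1), quantize(W2)
```

The central difference only requires evaluating the black-box subnetwork at `x + h` and `x - h`, which matches the in-the-loop setting where the activation can be sampled but not differentiated symbolically.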
In the third part of the method, an algorithm for training networks with low-resolution weights has been developed that measures the sensitivity of each weight with respect to the error function and then perturbs the weights with higher sensitivity in an iterative training/retraining procedure until the desired convergence is reached. The iterative procedure employs the neuron model derived in the first part together with the training method developed in the second part. Applying the in-the-loop training procedure based on these three parts leads to a set of convergent synaptic weights that can be implemented exactly using low-resolution digital multipliers.

Source: Dissertation Abstracts International, Volume: 61-09, Section: B, page: 4905. Adviser: William C. Miller. Thesis (Ph.D.)--University of Windsor (Canada), 2000.
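The sensitivity-guided perturbation loop of the third part might be sketched as follows. This is a simplified reconstruction under stated assumptions, not the thesis's actual procedure: the one-neuron tanh model, the 9-level weight grid, and the rule of moving the single most sensitive weight to its best neighbouring level are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Allowable low-resolution weight values (a uniform 9-level grid is assumed).
levels = np.linspace(-2.0, 2.0, 9)
step = levels[1] - levels[0]

def quantize(w):
    # Map each weight to the nearest allowable level.
    return levels[np.argmin(np.abs(levels.reshape(-1, 1) - w), axis=0)]

def error(w, X, T):
    # Illustrative one-neuron model standing in for the in-the-loop sensor.
    y = np.tanh(X @ w)
    return float(np.mean((y - T) ** 2))

X = rng.normal(size=(32, 3))
T = np.tanh(X @ np.array([1.5, -0.5, 1.0]))  # target weights lie on the grid

w = quantize(rng.normal(size=3))
e0 = error(w, X, T)

for _ in range(100):
    base = error(w, X, T)
    if base < 1e-6:
        break
    # Sensitivity of each weight: best error reduction obtainable by
    # perturbing that weight one quantization level up or down.
    gains = np.zeros(len(w))
    for i in range(len(w)):
        for d in (-step, step):
            wp = w.copy()
            wp[i] = np.clip(wp[i] + d, levels[0], levels[-1])
            gains[i] = max(gains[i], base - error(wp, X, T))
    if gains.max() <= 0.0:
        break  # no single-level perturbation helps; stop retraining
    # Perturb the most sensitive weight to its best neighbouring level.
    i = int(np.argmax(gains))
    cands = []
    for d in (-step, step):
        wp = w.copy()
        wp[i] = np.clip(wp[i] + d, levels[0], levels[-1])
        cands.append((error(wp, X, T), tuple(wp)))
    w = np.array(min(cands, key=lambda c: c[0])[1])
```

Because every accepted move strictly reduces the error and the weights always remain on the allowable grid, the final weights can be realized exactly by low-resolution digital multipliers, in the spirit of the abstract's claim.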