Date of Award
2-1-2022
Publication Type
Thesis
Degree Name
M.Sc.
Department
Electrical and Computer Engineering
Keywords
ANNs, Approximation methods (CORDIC, PWL, power-of-two, LUTs & RALUTs), FPGAs, Optimization (hardware resources, latency, error), Tanh activation function
Supervisor
M. Ahmadi
Supervisor
S. Alirezaee
Rights
info:eu-repo/semantics/embargoedAccess
Abstract
Artificial neural networks (ANNs) consist of a layered network of neurons, each of which computes the weighted sum of multiple inputs and passes it through a non-linear activation function (AF). A major difficulty lies in the implementation of the AF, which is usually the hyperbolic tangent (Tanh) function. Tanh consists of exponential and division terms, which make its accurate implementation very difficult. Tanh is well suited to the backpropagation learning algorithm because it is differentiable. Previous studies have shown that the accuracy of the AF impacts the performance and the size of the whole neural network (NN). AFs are important elements of NNs, and a low-complexity, accurate hardware implementation of the AF is required to meet the performance and area targets of NN accelerators. Hardware implementation of NNs plays a major role in many applications, and the implementation of the AF is an important consideration. Because these networks are computationally expensive, customized accelerators are designed to achieve the required performance at lower cost and power. Recently, ANNs have been implemented on Field Programmable Gate Arrays (FPGAs) with low cost and low power dissipation. Due to the hardware limitations of FPGAs, AFs are implemented with approximation methods rather than being computed exactly on the FPGAs. In this thesis, different approaches and methods are presented that achieve comparatively high accuracy with a relatively small logic area, reducing hardware resource consumption when ANNs are deployed on FPGA boards. The proposed designs in each chapter use a fixed-point representation of numbers. Because of the non-linear nature of the AF, multiple methods are used for its implementation, such as look-up tables (LUTs), range-addressable look-up tables (RALUTs), power-of-two approximation, piecewise linear approximation (PWL), and the coordinate rotation digital computer (CORDIC) algorithm, to achieve high precision and speed as well as low hardware resource consumption. This thesis builds a comparative study that evaluates these methods in terms of the speed, cost, accuracy, and resource usage of Tanh implementations on FPGAs.
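For illustration, the following is a minimal software sketch of one of the approximation methods named above: a piecewise linear (PWL) approximation of Tanh in fixed-point arithmetic. The Q4.12 format, the four-segment breakpoint table, and the slope/intercept values are hypothetical choices made for demonstration only; they are not the designs evaluated in the thesis.

    #include <stdint.h>
    #include <stdio.h>

    /* Q4.12 fixed point: 16-bit signed, 12 fractional bits (hypothetical format). */
    #define FRAC_BITS 12
    #define TO_FIX(x)   ((int16_t)((x) * (1 << FRAC_BITS)))
    #define TO_FLOAT(x) ((double)(x) / (1 << FRAC_BITS))

    /* PWL approximation of tanh(x) on [0, 4) with four hypothetical segments.
     * Odd symmetry (tanh(-x) = -tanh(x)) handles negative inputs, and the
     * output saturates near 1.0 beyond the last breakpoint. */
    static int16_t tanh_pwl(int16_t x) {
        int neg = x < 0;
        int32_t ax = neg ? -(int32_t)x : (int32_t)x;
        /* Segment table: {breakpoint, slope, intercept}, all in Q4.12.
         * Each line interpolates tanh between consecutive breakpoints. */
        static const struct { int32_t brk, slope, icpt; } seg[] = {
            { TO_FIX(0.5), TO_FIX(0.9242), TO_FIX(0.0)    },
            { TO_FIX(1.0), TO_FIX(0.5990), TO_FIX(0.1626) },
            { TO_FIX(2.0), TO_FIX(0.2024), TO_FIX(0.5592) },
            { TO_FIX(4.0), TO_FIX(0.0177), TO_FIX(0.9285) },
        };
        int32_t y = TO_FIX(0.9993); /* saturation value for |x| >= 4 */
        for (int i = 0; i < 4; i++) {
            if (ax < seg[i].brk) {
                y = ((seg[i].slope * ax) >> FRAC_BITS) + seg[i].icpt;
                break;
            }
        }
        return (int16_t)(neg ? -y : y);
    }

    int main(void) {
        /* Compare a few sample points against the true tanh by eye. */
        for (double x = -3.0; x <= 3.0; x += 1.0)
            printf("tanh(%5.2f) ~ %f\n", x, TO_FLOAT(tanh_pwl(TO_FIX(x))));
        return 0;
    }

On an FPGA, a table like this would typically live in a small LUT, with the multiply-add mapping onto a DSP slice; the number of segments and the Q-format set the trade-off between error, logic area, and latency that the thesis compares across methods.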
Recommended Citation
Soaryaasa, Samira, "Different Implementation Methods of Tanh on FPGAS for Neural Networks Application" (2022). Electronic Theses and Dissertations. 8795.
https://scholar.uwindsor.ca/etd/8795