PhD forum: Why TanH is a hardware friendly activation function for CNNs

Abstract: Convolutional Neural Networks (CNNs) [1] are the state of the art in image classification, improving the accuracy and robustness of machine vision systems at the price of a very high computational cost. This has motivated multiple research efforts investigating the applicability of approximate computing, and more particularly fixed-point arithmetic, to CNNs. In all these approaches, a recurrent problem is that the learned parameters in deep CNN layers have a significantly lower numerical dynamic range than the feature maps, which prevents the use of a low bit-width representation in deep layers. In this paper, we demonstrate that using the TanH activation function is a way to prevent this issue. To support this demonstration, three benchmark CNN models are trained with the TanH function. These models are then quantized using the same bit-width across all layers. In the case of FPGA-based accelerators, this approach infers the minimal amount of logic elements needed to deploy CNNs. © 2017 Association for Computing Machinery.
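The core argument above can be sketched numerically: because TanH is bounded in (-1, 1), both weights and feature maps fit a single fixed-point format, whereas an unbounded activation would force a wider integer part in deep layers. The sketch below is illustrative only, under assumed names and an assumed 8-bit Q1.7 format; it is not the authors' implementation.

```python
import numpy as np

def quantize_fixed_point(x, bit_width=8, frac_bits=7):
    """Quantize to a signed fixed-point format (here Q1.7: 1 sign bit,
    7 fractional bits) by scaling, rounding, and saturating."""
    scale = 2 ** frac_bits
    lo = -(2 ** (bit_width - 1))          # most negative code, e.g. -128
    hi = 2 ** (bit_width - 1) - 1         # most positive code, e.g. +127
    return np.clip(np.round(x * scale), lo, hi) / scale

# Pre-activation values spanning a wide dynamic range, as found in deep layers.
pre_activation = np.array([-8.0, -0.5, 0.0, 0.5, 8.0])

# TanH squashes everything into (-1, 1), so one low bit-width format
# covers every layer; an unbounded activation (e.g. ReLU) would not.
tanh_out = np.tanh(pre_activation)
q = quantize_fixed_point(tanh_out)

assert np.all(np.abs(tanh_out) < 1.0)          # bounded dynamic range
assert np.max(np.abs(q - tanh_out)) < 0.008    # error within ~1 LSB of Q1.7
```

With a uniform bit-width like this, an FPGA accelerator can instantiate identical arithmetic units in every layer, which is what keeps the logic-element count minimal.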

https://hal-univ-rennes1.archives-ouvertes.fr/hal-01687764
Contributor: Laurent Jonchère
Submitted on: Thursday, January 18, 2018 - 5:09:33 PM
Last modification on: Tuesday, February 5, 2019 - 3:58:22 PM

Citation

K. Abdelouahab, Maxime Pelcat, F. Berry. PhD forum: Why TanH is a hardware friendly activation function for CNNs. 11th International Conference on Distributed Smart Cameras, ICDSC 2017, Sep 2017, Stanford, United States. ⟨10.1145/3131885.3131937⟩. ⟨hal-01687764⟩
