International Journal of Scientific & Technology Research


IJSTR, Volume 9, Issue 11, November 2020 Edition


Website: http://www.ijstr.org

ISSN 2277-8616

Spiking Neural Network For Energy Efficient Learning And Recognition




Wang Ning Lo, Yan Chiew Wong



Keywords: Spiking Neural Network, Neuromorphic, Digit Recognition, FPGA.



Nowadays, people are confronted with an increasingly large amount of data and rapidly changing modes of human-machine interaction. Processing this volume of information is a challenging and time-consuming task for traditional computing systems: such applications consume considerable energy and are hard to realize with standard programmed algorithms. Spiking neural networks have emerged as a favourable alternative, achieving energy and time efficiency by using spikes for both computation and communication while solving problems such as pattern classification and image processing. Therefore, an energy-efficient spiking feedforward computing system is presented and its performance evaluated. Common building blocks and techniques used to implement a spiking neural network are investigated to identify design parameters for hardware-based neuron implementations. An Izhikevich neuron, an Address-Event Representation system and a Spike-Timing-Dependent Plasticity module are developed using the Vivado software. Digit recognition using the SNN hardware implementation on an FPGA has been demonstrated. The energy consumption of the system is only 136 mW, and low hardware resource utilization has been observed. This work presents the essential properties of a spiking feedforward computing system that emulates the behaviour of biological neural networks, showing the potential for learning and classification with significantly reduced energy resources.
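The Izhikevich neuron mentioned in the abstract is governed by two coupled equations, v' = 0.04v² + 5v + 140 − u + I and u' = a(bv − u), with a reset (v ← c, u ← u + d) whenever v reaches the 30 mV spike peak. As an illustration only (the paper implements this model in hardware via Vivado; the Euler step, constant input current, and parameter choice for a regular-spiking neuron below are assumptions, not details from the paper), a software sketch of the dynamics is:

```python
def izhikevich_spikes(I, steps=1000, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Simulate an Izhikevich neuron with constant input current I (Euler method).

    Returns the list of time steps (in ms, with dt = 1 ms) at which the
    neuron fired. Parameters a, b, c, d here are the regular-spiking
    values from Izhikevich's 2003 paper.
    """
    v = c          # membrane potential (mV)
    u = b * c      # recovery variable
    spike_times = []
    for t in range(steps):
        # membrane dynamics: quadratic voltage term drives the spike upswing
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        # slow recovery variable pulls the membrane back down
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike peak reached: emit spike and reset
            spike_times.append(t)
            v = c
            u += d
    return spike_times

# A sufficiently large input current produces regular tonic spiking;
# with no input the neuron stays at rest and never fires.
print(len(izhikevich_spikes(I=10.0)) > 0)   # driven neuron spikes
print(len(izhikevich_spikes(I=0.0)) == 0)   # resting neuron is silent
```

In a hardware implementation such as the one described here, the same update equations are typically evaluated in fixed-point arithmetic each clock cycle, with the reset condition implemented as a comparator on the membrane-potential register.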


