NeuroMimicry Attacks: Adversarial Evasion in Spiking Neuromorphic Systems

Abstract

Neuromorphic computing has gained traction in robotics, edge devices, and IoT because of its energy efficiency and biological inspiration. These systems are built on spiking neural networks (SNNs), which process information as discrete spike events, enabling real-time, low-power operation. Despite these advantages, the security of spiking neuromorphic systems has received far less study than that of traditional deep learning systems. In this paper, we present NeuroMimicry Attacks, a class of adversarial evasion attacks in which adversarial examples are spike-train patterns that closely resemble legitimate activity while achieving malicious objectives. These attacks exploit the temporal and spatiotemporal properties of SNNs and are difficult to identify with current anomaly detection systems. This work makes four contributions: first, a taxonomy of mimicry-based adversarial attacks; second, algorithms for generating realistic spike-train perturbations and synthetic mimicry patterns; third, defense strategies, including spatiotemporal anomaly detection and adversarial training; fourth, experimental validation on benchmark neuromorphic datasets and platforms. Findings indicate that NeuroMimicry Attacks pose a major threat and call for robust defenses tailored to neuromorphic systems.
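To make the notion of a mimicry-style spike-train perturbation concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: it applies bounded temporal jitter and sparse spike deletion to a legitimate spike train, so the perturbed train stays statistically close to the original. The function name `mimicry_perturb` and its parameters (`max_jitter`, `drop_prob`) are hypothetical choices for this example.

```python
import numpy as np

def mimicry_perturb(spike_times, max_jitter=2.0, drop_prob=0.05, rng=None):
    """Hypothetical mimicry-style perturbation of a spike train.

    Each spike time is shifted by a bounded random jitter, and a small
    fraction of spikes is dropped, mimicking natural firing variability
    while remaining close to the legitimate pattern.
    """
    rng = np.random.default_rng(rng)
    times = np.asarray(spike_times, dtype=float)
    # Bounded jitter keeps inter-spike statistics near the original train.
    jitter = rng.uniform(-max_jitter, max_jitter, size=times.shape)
    perturbed = times + jitter
    # Sparse deletion: spikes are only removed, never added.
    keep = rng.random(times.shape) >= drop_prob
    return np.sort(np.clip(perturbed[keep], 0.0, None))

# A regular train of spikes every 5 ms over a 100 ms window.
legit = np.arange(0.0, 100.0, 5.0)
adv = mimicry_perturb(legit, max_jitter=1.5, rng=42)
```

Because the perturbation budget (`max_jitter`, `drop_prob`) is small, simple rate- or interval-based anomaly detectors see a train that looks legitimate, which is the core evasion idea the abstract describes.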

Alex Mathew (1), Frank Valentin (2), Audrey Tobesman (3)

  1. Department of Cybersecurity, Bethany College, USA
  2. Department of Cybersecurity, Bethany College, USA
  3. Department of Cybersecurity, Bethany College, USA

IRJIET, Volume 9, Issue 9, September 2025, pp. 10-14

doi.org/10.47001/IRJIET/2025.909002
