Information
security management systems and frameworks have embraced traditional risk
assessment (RA) methodologies and standards as a cornerstone for secure
environments. However, in today's landscape of constantly evolving threats
and newly discovered vulnerabilities, these approaches face numerous
challenges. To address this, dynamic risk assessment (DRA) models have been
proposed, which continually evaluate risks to an organization's activities in
(near) real time. Connected smart devices, collectively known as the Internet
of Things (IoT), have changed the face of modern technology, presenting new
security challenges as well as new opportunities. Intrusion detection systems
(IDS) are of central importance to cybersecurity, and Deep Learning has
demonstrated encouraging results in protecting IoT devices from cyberattacks.
Although IDS play a critical role in protecting sensitive data by detecting
and preventing malicious actions, traditional IDS solutions face difficulties
when applied to the IoT. This article explores state-of-the-art,
Deep Learning-based intrusion detection approaches for Internet of Things
security. Recent developments in intrusion detection systems (IDS) for the
internet of things (IoT) are reviewed here, with an emphasis on the relevant
deep learning algorithms, datasets, attack types, and assessment metrics. This
work offers a fresh perspective on managing hazards in system-to-system
communication through API calls and helps to tackle this difficulty. Effective
threat identification from huge API call datasets is achieved through the
introduction of an integrated architecture that integrates deep-learning
models, specifically ANN and MLP. In order to improve overall resilience, the
detected threats are analyzed to find appropriate mitigations. To ensure that
AI models are accessible to all user groups, this work also introduces
transparency obligation practices for the whole AI life cycle, beginning with
dataset preprocessing and ending with model performance evaluation. These
practices include data and methodological transparency as well as SHapley
Additive exPlanations (SHAP) analysis. Experimental results, showing an
average detection accuracy of 88% on the Windows PE Malware API dataset,
validate the proposed methodology.
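The integrated ANN/MLP detection idea can be sketched, in heavily simplified form, as a small neural network over API-call count features. Everything below — the API names, the toy traces, the labeling rule, and the network size — is a hypothetical illustration under stated assumptions, not the paper's actual dataset, architecture, or pipeline:

```python
import math
import random

# Hypothetical API calls used as features; real PE-malware traces contain
# many more. Each sample is a vector of normalized call counts.
API_CALLS = ["VirtualAlloc", "WriteProcessMemory",
             "CreateRemoteThread", "ReadFile"]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyMLP:
    """One hidden layer, sigmoid activations, plain per-sample SGD."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        # Hidden activations, then a single sigmoid output in [0, 1].
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        y = sigmoid(sum(w * hi for w, hi in zip(self.w2, h)) + self.b2)
        return h, y

    def train(self, samples, epochs=300, lr=0.5):
        for _ in range(epochs):
            for x, target in samples:
                h, y = self.forward(x)
                d_out = y - target  # gradient of cross-entropy + sigmoid
                for j, hj in enumerate(h):
                    d_hid = d_out * self.w2[j] * hj * (1.0 - hj)
                    for i, xi in enumerate(x):
                        self.w1[j][i] -= lr * d_hid * xi
                    self.b1[j] -= lr * d_hid
                    self.w2[j] -= lr * d_out * hj
                self.b2 -= lr * d_out

# Toy, clearly separable training set: label 1.0 marks a "malicious" trace
# (heavy use of the first three calls), 0.0 a benign one.
train_set = [
    ([1.0, 1.0, 1.0, 0.0], 1.0),
    ([0.8, 1.0, 0.6, 0.2], 1.0),
    ([1.0, 0.7, 0.9, 0.1], 1.0),
    ([0.0, 0.0, 0.0, 1.0], 0.0),
    ([0.1, 0.0, 0.2, 0.9], 0.0),
    ([0.0, 0.2, 0.0, 0.7], 0.0),
]

model = TinyMLP(n_in=len(API_CALLS), n_hidden=3)
model.train(train_set)

malicious_score = model.forward([0.9, 0.9, 1.0, 0.0])[1]
benign_score = model.forward([0.0, 0.1, 0.0, 1.0])[1]
```

In the full pipeline the abstract describes, such a model's predictions would additionally be explained with SHAP analysis for transparency; that step is omitted from this sketch.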
IRJIET, Volume 5, Issue 2, February 2021 pp. 109-115