Test Case Coverage Model with Priority Constraints for Mutation Testing on UI Testing, Mutation Operators, and the DOM

Abstract

The stability of web applications is of paramount importance in an era of heavy reliance on them for seamless digital experiences. Although UI testing is widely acknowledged as crucial to user satisfaction, web application UI test suites built with Selenium-compatible tools are rarely assessed systematically for their fault-detection effectiveness. To address the challenges of mutation testing for web systems, this work introduces a novel Test Case Coverage Model with Priority Constraints (TCCM-PTWA). Whereas most mutation testing methods target source code, our method is browser-specific and operates inside the Document Object Model (DOM). Because no changes to the source code are required, the approach is compatible with a wide variety of web applications. The priority constraints in TCCM-PTWA enhance testing by optimizing resource allocation, reducing testing overhead, and prioritizing test cases by relevance. In addition, drawing on common web application vulnerabilities, we present a set of mutation operators tailored to web applications; these operators simulate realistic faults to increase the effectiveness of mutation testing in practice. The results of our empirical investigation on web-based systems demonstrate that TCCM-PTWA efficiently analyzes test suites and uncovers defects, while priority constraints improve the reliability and resilience of web services. This study thus introduces the Test Case Coverage Model with Priority Constraints with an emphasis on the DOM, UI testing, and MAEWU (Mutation Analysis for Web Applications with Emphasis on UI), a methodology that addresses the unique challenges of web applications and offers a comprehensive way to improve their reliability and longevity in the digital age.

Author: Mohammed Sadhik Shaik, Sr. Software Web Developer Engineer, Computer Science, Germania Insurance, Melissa, Texas, USA

IRJIET, Volume 9, Issue 1, January 2025 pp. 182-188

DOI: https://doi.org/10.47001/IRJIET/2025.901023
