1*Nayema Akter, 2Md Miftahul Bari

1School of Computer Science and Engineering, Sichuan University, China; School of Information Engineering, Mianyang Teachers' College.

nayemaakhter32@gmail.com; mdmiftahulbari@gmail.com

Abstract

Cognitive computing technologies, particularly Artificial Intelligence (AI) algorithms, are playing an increasingly prominent role in decision support and operational functions across diverse fields. With the growing use of such algorithms comes a need to validate them and to establish that their processing is credible and neutral. Existing studies on AI assurance show that the area is highly fragmented, with diverse motivations and assumptions prevailing over the state of the art. This manuscript provides a taxonomic view of AI assurance research conducted between 1985 and 2021, with a focus on structured methodologies. A new definition of AI assurance is discussed, and a comparison of assurance approaches using a recently introduced ten-metric scale is provided. The manuscript concludes with design principles and proposed directions for future research on assurance in the broad field of artificial intelligence.

Keywords

AI Assurance, Artificial Intelligence, Validation, Verification, Explainable AI

Citation

Nayema Akter, & Md Miftahul Bari. (2024). Enhancing Trust in Artificial Intelligence: A Review of Assurance Methods for Broad Application. Journal of Global Knowledge and Innovation, 1(1), 1–15. https://doi.org/10.5281/zenodo.13885892