1. Interpretability and accessibility of machine learning in selected food processing, agriculture and health applications
- Authors
Ranasinghe, N., Ramanan, A., Fernando, S., Hameed, P. N., Herath, D., Malepathirana, T., Suganthan, P., Niranjan, M., and Halgamuge, S.
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence
- Abstract
Artificial Intelligence (AI) and its data-centric branch, machine learning (ML), have evolved greatly over the last few decades. However, as AI is used increasingly in real-world applications, the interpretability of and accessibility to AI systems have become major research areas. The lack of interpretability of ML-based systems is a major hindrance to the widespread adoption of these powerful algorithms. This is due to many reasons, including ethical and regulatory concerns, which have resulted in poorer adoption of ML in some areas. The recent past has seen a surge in research on interpretable ML. Generally, designing an ML system requires good domain understanding combined with expert knowledge. New techniques are emerging to improve ML accessibility through automated model design. This paper reviews work done to improve the interpretability and accessibility of machine learning in the context of global problems while also being relevant to developing countries. We review work under multiple levels of interpretability, including scientific and mathematical interpretation, statistical interpretation, and partial semantic interpretation. The review covers applications in three areas, namely food processing, agriculture and health.
- Comment
Published in the Journal of the National Science Foundation of Sri Lanka, Volume 50.
- Published
2022