
Real sparks of artificial intelligence and the importance of inner interpretability.

Authors:
Grzankowski, Alex
Source:
Inquiry. Jan 2024, p. 1-27. 27 p.
Publication Year:
2024

Abstract

The present paper examines one of the most thorough studies of the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way: the exciting, emerging discipline of ‘Inner Interpretability’ (and specifically Mechanistic Interpretability), which aims to uncover a model’s internal activations and weights in order to understand what they represent and the algorithms they implement. Black-box Interpretability fails to appreciate that how processes are carried out matters when it comes to intelligence and understanding. I cannot pretend to have a full story that provides both necessary and sufficient conditions for being intelligent, but I do think that Inner Interpretability dovetails nicely with plausible philosophical views of what intelligence requires. The conclusion is therefore modest, but the important point, in my view, is seeing how to get the research on the right track. Towards the end of the paper, I show how some of these philosophical concepts can be used to further refine how Inner Interpretability is approached.

Details

Language:
English
ISSN:
0020-174X
Database:
Academic Search Index
Journal:
Inquiry
Publication Type:
Academic Journal
Accession Number:
174738968
Full Text:
https://doi.org/10.1080/0020174x.2023.2296468