
Enhancing ChatGPT’s querying capability with voice-based interaction and a CNN-based impaired vision detection model

Authors:
Ahmad, Awais
Jabbar, Sohail
Akram, Sheraz
Anand, Paul
Raza, Umar
Alshuqayran, Nuha
Publication Year:
2024

Abstract

This paper presents an approach to enhancing the querying capability of ChatGPT, a conversational artificial intelligence (AI) model, by incorporating voice-based interaction and a convolutional neural network (CNN)-based impaired-vision detection model. The proposed system improves user experience and accessibility by allowing users to interact with ChatGPT through voice commands. In addition, a CNN-based model detects visual impairment in users, enabling the system to adapt its responses and provide appropriate assistance. The research addresses the challenges of user experience and inclusivity in AI, making ChatGPT more accessible and valuable to a broader audience. The integration of voice-based interaction and impaired-vision detection is a novel approach to conversational AI, with the potential to meaningfully improve the lives of users, particularly those with visual impairments. A modular system design provides the adaptability and scalability needed for practical deployment. Crucially, the solution places the user at its core: customizing responses for users with visual impairments demonstrates AI's potential not only to understand but also to accommodate individual needs and preferences.
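The abstract's modular pipeline (CNN-based impairment check feeding into response adaptation) could be sketched as follows. This is a minimal illustrative sketch only, not the paper's implementation: the function names, the naive edge-filter "CNN" stand-in, the sigmoid threshold, and the spoken-output adaptation are all assumptions introduced here for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2-D convolution, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def detect_impairment(eye_image, threshold=0.5):
    """Toy stand-in for a trained CNN classifier: one edge-detection
    convolution, global pooling, and a sigmoid score thresholded to an
    impaired / not-impaired flag (hypothetical, not the paper's model)."""
    edge_kernel = np.array([[-1, -1, -1],
                            [-1,  8, -1],
                            [-1, -1, -1]], dtype=float)
    features = conv2d(eye_image, edge_kernel)
    score = 1.0 / (1.0 + np.exp(-features.mean()))  # sigmoid "probability"
    return bool(score >= threshold)

def adapt_response(answer, impaired):
    """Adapt the conversational model's answer for the detected condition,
    e.g. route it to speech output when impairment is flagged."""
    return ("[spoken aloud] " + answer) if impaired else answer
```

In a full system, `detect_impairment` would be replaced by a trained CNN and `adapt_response` by text-to-speech output, but the control flow (detect, then adapt the ChatGPT answer) mirrors the modular design the abstract describes.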

Details

Database:
OAIster
Notes:
text, English
Publication Type:
Electronic Resource
Accession Number:
edsoai.on1428026498
Document Type:
Electronic Resource