Please use this identifier to cite or link to this item: https://hdl.handle.net/10321/4634
DC Field | Value | Language
dc.contributor.author | Sivate, Themba M. | en_US
dc.contributor.author | Pillay, Nelendran | en_US
dc.contributor.author | Moorgas, Kevin | en_US
dc.contributor.author | Singh, Navin | en_US
dc.date.accessioned | 2023-02-15T06:18:58Z | -
dc.date.available | 2023-02-15T06:18:58Z | -
dc.date.issued | 2022-07-20 | -
dc.identifier.citation | Sivate, T.M. et al. Autonomous classification and spatial location of objects from stereoscopic image sequences for the visually impaired. Presented at: 2022 International Conference on Electrical, Computer and Energy Technologies (ICECET). doi:10.1109/icecet55527.2022.9872538 | en_US
dc.identifier.isbn | 9781665470872 | -
dc.identifier.uri | https://hdl.handle.net/10321/4634 | -
dc.description.abstract | One of the main problems faced by visually impaired individuals is difficulty in identifying objects. A visually impaired person usually wears glasses that enlarge or focus on nearby objects, and therefore relies heavily on physical touch to identify an object. Walking on the road or navigating to a specific location is challenging when vision is lost or reduced, increasing the risk of an accident. This paper proposes a simple, portable machine vision system that assists the visually impaired by providing real-time auditory feedback about nearby objects. The proposed system consists of three main hardware components: a single-board computer, a wireless camera, and an earpiece module. The YOLACT object detection library is used to detect objects in each captured image, and the detected object labels are converted to an audio signal using the Festival Speech Synthesis System. Experimental results show that the system is efficient and capable of providing audio feedback of detected objects to a visually impaired person in real time. | en_US
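The abstract describes a capture-detect-speak pipeline: a camera frame is run through YOLACT, and the resulting object labels are voiced via Festival. A minimal sketch of that flow is below. The `detect_objects` stub and `labels_to_sentence` helper are hypothetical stand-ins (the paper's actual model invocation and phrasing are not given); only the `festival --tts` call, which reads text from standard input and synthesizes speech, is a real command-line interface.

```python
import subprocess


def detect_objects(frame):
    """Hypothetical stand-in for YOLACT inference.

    In the described system, each frame from the wireless camera is passed
    through the YOLACT model, which returns class labels (and instance
    masks) for detected objects. The real model call is not shown here.
    """
    raise NotImplementedError("plug in a YOLACT (or other) detector here")


def labels_to_sentence(labels):
    """Turn a list of detected class labels into a short spoken sentence."""
    if not labels:
        return "No objects detected."
    return "Detected: " + ", ".join(labels) + "."


def speak(text):
    """Speak text aloud via the Festival Speech Synthesis System's CLI.

    `festival --tts` reads text from standard input and synthesizes it,
    matching the paper's label-to-audio step.
    """
    subprocess.run(["festival", "--tts"], input=text.encode(), check=True)
```

In use, the single-board computer would loop over camera frames, calling `speak(labels_to_sentence(detect_objects(frame)))` so the earpiece announces nearby objects as they are detected.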
dc.format.extent | 6 p | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartof | 2022 International Conference on Electrical, Computer and Energy Technologies (ICECET) | en_US
dc.subject | Visually impaired | en_US
dc.subject | Single-board computer | en_US
dc.subject | Portable | en_US
dc.subject | Bluetooth | en_US
dc.subject | Computer vision | en_US
dc.subject | Audio feedback | en_US
dc.subject | Object detection | en_US
dc.title | Autonomous classification and spatial location of objects from stereoscopic image sequences for the visually impaired | en_US
dc.type | Conference | en_US
dc.date.updated | 2023-02-07T12:47:27Z | -
dc.identifier.doi | 10.1109/icecet55527.2022.9872538 | -
item.openairetype | Conference | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.grantfulltext | open | -
item.cerifentitytype | Publications | -
item.languageiso639-1 | en | -
item.fulltext | With Fulltext | -
Appears in Collections:Research Publications (Engineering and Built Environment)
Files in This Item:
File | Description | Size | Format
IEEE Copyright clearance.docx | Copyright clearance | 227.39 kB | Microsoft Word XML
Sivate et al_2022.pdf | Article | 570.19 kB | Adobe PDF
Page view(s): 163 (checked on Dec 22, 2024)
Download(s): 54 (checked on Dec 22, 2024)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.