Presentation Type

Poster

Human-Computer Interaction for the WMRA

Abstract

The Wheelchair Mounted Robotic Arm (WMRA) is a device designed to help people with disabilities complete their activities of daily living. The goal of this project is to select hardware and develop software for two new interfaces that work with the existing WMRA control software. An eye gaze system and a voice control system were chosen because they expand the current interface options to benefit the greatest number of people with disabilities and can control all of the joints on both the robotic arm and the wheelchair. The eye gaze interface will consist of a webcam that tracks the user's eye and software written with the open-source image processing library OpenCV. The eye position will drive a mouse cursor for interacting with the existing touch screen command system. The voice control system will use a microphone and the open-source speech recognition engine "Julius" to map the user's voice input to preset commands that control the WMRA. The completed interfaces will be tested by human subjects on three activities of daily living, and data will be collected on their effectiveness and the difficulties encountered during use.
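
As a rough illustration of the eye gaze pipeline described above, the Python sketch below detects an eye with OpenCV's stock Haar cascade and linearly maps its center to screen coordinates. The cascade-based detector, the assumed 1920x1080 screen resolution, and the printout standing in for cursor movement are all illustrative assumptions, not the project's actual implementation, which would calibrate to the user and track the pupil within the detected eye region.

    import cv2

    SCREEN_W, SCREEN_H = 1920, 1080  # assumed screen resolution

    # Stock Haar eye cascade shipped with the opencv-python distribution.
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                            minNeighbors=5)
        if len(eyes):
            x, y, w, h = eyes[0]             # take the first detected eye
            cx, cy = x + w // 2, y + h // 2  # eye center, camera coordinates
            # Simple linear map from camera frame to screen coordinates;
            # a real tracker would apply a per-user calibration here.
            sx = cx * SCREEN_W // frame.shape[1]
            sy = cy * SCREEN_H // frame.shape[0]
            print("cursor target:", sx, sy)  # a cursor API would move the mouse
        cv2.imshow("eye tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()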
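
The voice pathway can be sketched in a similar spirit. Assuming Julius is launched with a grammar-based configuration file and prints its best hypothesis on console lines beginning with "sentence1:", the snippet below maps recognized words to arm commands. The configuration path wmra.jconf, the command vocabulary, and the printed actions are hypothetical placeholders; the real system's command set and controller interface are not specified here.

    import subprocess

    # Hypothetical table: spoken words -> WMRA actions.
    COMMANDS = {
        "up": "shoulder +5deg",
        "down": "shoulder -5deg",
        "open": "gripper open",
        "close": "gripper close",
        "stop": "halt all joints",
    }

    # Launch Julius with a grammar configuration (path is a placeholder)
    # and read its console output line by line.
    proc = subprocess.Popen(["julius", "-C", "wmra.jconf"],
                            stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        # Julius reports its best hypothesis as, e.g.:
        #   sentence1: <s> stop </s>
        if line.startswith("sentence1:"):
            words = [w for w in line.split()[1:] if w not in ("<s>", "</s>")]
            for word in words:
                action = COMMANDS.get(word)
                if action:
                    print("WMRA command:", action)  # sent to the controller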

Categories

Engineering/Physical Science

Research Type

Course Related

Mentor Information

Dr. Redwan Alqasemi
