HOPE

Helping Our People Easily

Overview

We follow a goal-oriented approach to analyzing and implementing user requirements for HOPE. This involves studying the problems and goals of the target users: we describe each goal as something to achieve, and each problem as an event, phenomenon, or situation that negatively affects a goal and should therefore be avoided, eliminated, or alleviated.

Problem: Seniors, owing to age-related symptoms such as unclear speech, hearing loss, weak vision, and/or memory loss, and people with conditions such as Autism and Aphasia, have difficulty communicating and performing day-to-day activities, which affects their quality of life and safety. In addition, existing devices on the market are bulky, expensive, not ubiquitous, and/or provide only partial functionality that is not tailored to the individual user.

Goal: Develop a ubiquitous, inexpensive, and usable product that alleviates the communication and day-to-day difficulties faced by seniors with age-related symptoms and by people with conditions such as Autism and Aphasia, and that enhances their independence, quality of life, and safety.

Solution: Project HOPE, a smartphone-based software system whose features work together to address the above goal. HOPE currently supports a variety of touch-screen smartphones and tablets running Android version 2.1 or higher.

It is believed that we function best when all our senses work in a complementary manner, so we have used various facets of vocabulary to build a comprehensive unit of communication. Some of the core building blocks of this vocabulary are:

Icons: An iconic representation of an activity or object makes recollection fast and less ambiguous, e.g. an icon for drinking water or for pointing at an emergency sign.
Pictures: Actual pictures of family members, places such as restaurants, food items, and so on. A picture is worth a thousand words but also a thousand interpretations, so it works together with the other dimensions to stay unambiguous.
Text: A textual label that puts a word to each icon or picture.
Sound: A text-to-speech engine converts the textual representation into speech (see the sketch after this list).
Speech recognition: The built-in speech recognition accepts a user's speech (used more by an assisting person than by seniors with unclear speech) and converts it into textual form (also shown in the sketch after this list).
Sensor Input: The built-in accelerometer provides the values that help detect a fall, leading to a call for help (see the fall-detection sketch after this list). Work is in progress to incorporate other sensors as well.
Maps: To know the current location or relative distances, we make use of the Google Maps service. The user can also use the zoom controls on the map as preferred.
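
To illustrate the Sound and Speech recognition building blocks, here is a minimal Java sketch using Android's standard TextToSpeech and RecognizerIntent APIs, both available since Android 2.1. The activity name, the spoken phrase, and the request code are hypothetical, and HOPE's actual wiring of these APIs may differ.

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.speech.RecognizerIntent;
    import android.speech.tts.TextToSpeech;

    import java.util.ArrayList;
    import java.util.Locale;

    // Hypothetical activity sketching the Sound and Speech recognition blocks.
    public class CommunicationActivity extends Activity implements TextToSpeech.OnInitListener {

        private static final int SPEECH_REQUEST_CODE = 1;  // arbitrary request code
        private TextToSpeech tts;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            tts = new TextToSpeech(this, this);             // engine initialises asynchronously
        }

        // Sound: speak the text attached to an icon or picture.
        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
                tts.speak("I would like to drink water", TextToSpeech.QUEUE_FLUSH, null);
            }
        }

        // Speech recognition: let an assisting person dictate text.
        private void startListening() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            startActivityForResult(intent, SPEECH_REQUEST_CODE);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == SPEECH_REQUEST_CODE && resultCode == RESULT_OK) {
                ArrayList<String> matches =
                        data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                // matches.get(0) is the most likely transcription of the speech.
            }
            super.onActivityResult(requestCode, resultCode, data);
        }

        @Override
        protected void onDestroy() {
            if (tts != null) {
                tts.shutdown();
            }
            super.onDestroy();
        }
    }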
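
Similarly, a rough Java sketch of the Sensor Input block shows how accelerometer values can be turned into a fall alert using the standard SensorManager API. The 2.5 g threshold and the onPossibleFall() hook are illustrative assumptions, not HOPE's actual detection logic.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Flag a possible fall when total acceleration spikes above a threshold
    // (the 2.5 g value is an assumption for this sketch).
    public class FallDetector implements SensorEventListener {

        private static final float FALL_THRESHOLD = 2.5f * SensorManager.GRAVITY_EARTH;

        @Override
        public void onSensorChanged(SensorEvent event) {
            float x = event.values[0];
            float y = event.values[1];
            float z = event.values[2];
            double magnitude = Math.sqrt(x * x + y * y + z * z);
            if (magnitude > FALL_THRESHOLD) {
                onPossibleFall();
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Not needed for this sketch.
        }

        private void onPossibleFall() {
            // Hypothetical hook: place an emergency call or notify a caregiver.
        }
    }

    // Registration, e.g. from an Activity or Service:
    //   SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
    //   sm.registerListener(new FallDetector(),
    //           sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
    //           SensorManager.SENSOR_DELAY_NORMAL);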
[Screenshot: HOPE Screen]
Check out the features