To lead our lives in today's fast-moving, materialistic world, we express our thoughts by communicating with different people in different languages and in different ways. However, it is difficult for the dumb, the blind and people affected by paralysis to convey what they think. Dumb people and paralysis-affected people find it extremely difficult to express their ideas and blind people face difficulties in writing their examinations. Hence, there is a need to develop a platform for such physically challenged people. An embedded device can address these problems: (1) paralysis-affected people can convey their ideas by wearing a glove fitted with flex sensors; (2) dumb people can express their basic needs by pressing the keypad (buttons) on the device, with a text-to-speech IC producing the voice output; (3) when blind people speak, a speech-to-text module converts the speech and shows it on a display. All of these techniques work in a single embedded device under the control of an ARM7 processor. By using this device, PWD (People With Disabilities) can easily communicate their basic needs to society and lead their lives independently and peacefully.
When blind people go to school to learn, they may face unique problems in their school and university life as well. A human interpreter is normally needed to communicate a blind child's answers during schooling. For such people, this device acts as the interpreter: blind people can write their examinations without any help from a human interpreter, which can enable them to become high achievers.
Many dumb people live in families where they are the only dumb person. When they try to express something, others often lack the patience to listen to them, so they use sign language to express their ideas, as shown in Fig. 1. With this device, they can express their basic thoughts just by pressing the keypad (buttons); the audio for each appropriate word is already stored in the device.
People affected by paralysis face many barriers when trying to perform any physical activity. They use aids such as wheelchairs, guides, support dogs and white canes, yet even with these aids they find communication very difficult.
Fig. 1: Sign language
By using this embedded device, paralysis-affected people can express their basic needs just by wearing gloves fitted with flex sensors.
For blind people, some technologies are already available. Saaid et al. (2009) developed a prototype Radio Frequency Identification Walking Stick (RFIWS) for blind people that acts as a guide for walking along a correct path. Angin et al. (2010) proposed an open and extensible architecture for the blind that uses the Web for location-specific information. Bourbakis et al. (2008) proposed a prototype multimodal mobile device that assists in navigation and reading using Stochastic Petri-Nets. Oliveira et al. (2012) employed sensory replacement and haptic fingertip reading in inclusive classrooms. Brilhault et al. (2011) designed an assistive device based on the fusion of GPS, an adapted GIS and vision-based positioning. Zhang et al. (2008) implemented a Braille Personal Digital Assistant with a Braille keyboard controller and print-to-Braille translation. Ab Aziz and Hendr (2012) discussed machine translation. A solution for dumb people was given by Rajeswari et al. (2010), who proposed an AG-500 Articulograph sensor and an equivalent SAMPA code for PHONWEB. Begum and Hasanuzzaman (2009) implemented Bangladeshi sign language recognition. Krishnaswamy and Srikkanth (2010) sensed the vibrations produced in the vocal cords, encoded them and compared them with a reference threshold to produce sound. Almeshrky and Ab Aziz (2012) proposed a transfer-based approach to translate dialogue between the Arabic and Malay languages. Hamid and Ramli (2012) proposed a method to improve an audio-based system. For people affected by paralysis, some technologies are also available. Shih et al. (2011) proposed a commercial numerical keyboard as the input device. Nazemi et al. (2011) implemented a Digital Talking Book player for the visually impaired to read, navigate, search and bookmark written material. Seki and Kiso (2011) proposed an adaptive driving control scheme based on a fuzzy algorithm for electric wheelchairs. Xu et al. (2012) proposed a solution to monitor the movement of elderly people in a smart home. Melek et al. (2005a) gave a solution for people affected by diaphragm paralysis. Melek et al. (2005b) proposed a solution for people affected by facial paralysis due to Lyme disease. Zahmatkash and Vafaeenasab (2011) discussed people affected by knee osteoarthritis. Dursun (2008) proposed an intelligent speed controller based on fuzzy logic to increase the comfort of disabled people. These are some of the existing assistive systems for PWD.
As shown in Fig. 2, a speech-to-text module is connected to the processor and a display. When a blind person speaks, the voice is sent to the speech-to-text converter, which converts it into text that is shown on the display. This helps them in their examinations.
If dumb persons need to convey their basic ideas, they press the keypad (buttons) on the device as shown in Fig. 2. Each button sends the corresponding signal to the processor according to their requirement (e.g., the 1st button for Water). The processor processes that signal using the text-to-speech IC and the speaker outputs the resulting voice signal. With this embedded device, dumb people can easily convey their basic requirements.
People affected by paralysis wear gloves fitted with flex sensors. Moving a finger up or down generates a signal in the flex sensors (e.g., upward movement of the thumb: rest room). That signal is sent to a Bluetooth transmitter built into the gloves.
Fig. 2: Overview of the system
The device worn on the body is connected to a Bluetooth receiver, which receives the signal from the Bluetooth transmitter. This signal is sent to the processor and a text-to-speech IC then converts it to an audio signal. This helps them to express their basic needs easily.
Device for dumb people: When a dumb person presses a keypad button, the signal is sent to the processor, which converts it into voice by forwarding it to the text-to-speech conversion IC. For example, if a button is assigned as the signal for food and the user presses that button, the device produces the spoken word "Food". An ARM chip is used as the processor and the TTS256 as the text-to-speech IC.
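The button-to-word lookup just described can be sketched as follows. The button numbers and phrases here are illustrative assumptions, not the actual assignments in the device's firmware:

```python
# Illustrative keypad-to-phrase table; in the device, the selected
# phrase would be forwarded to the text-to-speech IC for voice output.
BUTTON_PHRASES = {
    1: "Water",
    2: "Food",
    3: "Rest room",
    4: "Help",
}

def phrase_for_button(button: int) -> str:
    """Return the stored phrase for a pressed button."""
    try:
        return BUTTON_PHRASES[button]
    except KeyError:
        raise ValueError(f"no phrase assigned to button {button}")
```

Keeping the mapping in a single table makes it easy to reassign buttons without touching the dispatch logic.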
ARM processor: ARM is a 32-bit microprocessor with low power consumption. Its architecture is based on RISC principles, so the instruction set and the related decode mechanisms are simpler.
TTS256: The TTS256 is an 8-bit microprocessor that converts text to speech in combination with a speech-synthesizer chip such as the Magnevation SpeakJet. The TTS256 has a 9600-baud N81 serial interface and comes in a narrow 28-pin DIP package. It converts text received over the serial port into SpeakJet codes. Once the SpeakJet starts speaking, the TTS256 lowers its ready line and raises it again after the operation completes, which indicates whether it is ready to accept more data. The command "debug on" followed by a newline enters debug mode and the command "debug off" followed by a newline exits it.
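The ready-line handshake described above can be sketched as follows. The UART and ready-line objects are hypothetical stand-ins; on real hardware they would map to a serial port and a GPIO input, and the busy-wait would be replaced by a proper delay:

```python
class MockUart:
    """Stand-in for a 9600-baud N81 serial link to the TTS IC."""
    def __init__(self):
        self.sent = b""
    def write(self, data: bytes):
        self.sent += data

class MockReadyLine:
    """Stand-in for the chip's ready line (low while speaking)."""
    def is_high(self) -> bool:
        return True

def speak(uart, ready_line, text: str):
    """Send one line of text to the TTS IC, waiting until the
    ready line is high (i.e., the chip can accept more data)."""
    while not ready_line.is_high():
        pass  # busy-wait; real firmware would sleep or use an interrupt
    uart.write(text.encode("ascii") + b"\n")  # newline ends the utterance
```

Checking the ready line before each write prevents the processor from overrunning the synthesizer while it is still speaking.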
The voice signal produced by the text-to-speech IC is sent to the speaker. Using this embedded device, dumb people can convey their ideas to society.
Device for paralysis-affected person: People affected by paralysis wear gloves fitted with flex sensors, where each flex sensor denotes a word. When a finger is moved, the device produces the voice for that word. The signal travels through a Bluetooth transmitter and receiver; after processing, the ARM processor sends the information to the text-to-speech IC, which converts the text into the appropriate audio output through the speaker.
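The glove-side gesture lookup can be sketched as follows. The (finger, direction) table is an illustrative assumption based on the thumb example in the text; the real assignment would live in the device firmware:

```python
from typing import Optional

# Hypothetical gesture table: each (finger, direction) pair maps to a
# stored word that is sent over Bluetooth to the speech unit.
GESTURE_WORDS = {
    ("thumb", "up"): "Rest room",
    ("index", "up"): "Water",
    ("middle", "up"): "Food",
}

def word_for_gesture(finger: str, direction: str) -> Optional[str]:
    """Return the word for a finger movement, or None if unassigned."""
    return GESTURE_WORDS.get((finger, direction))
```

Unassigned movements return None so that stray sensor noise does not trigger an utterance.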
Flex sensors: As shown in Fig. 3, flex sensors are used to sense the finger movements of people affected by paralysis. The sensors are based on resistive carbon-element technology: each is an analog resistor that works as a variable voltage divider. The carbon resistive element sits inside the flex sensor on a thin flexible substrate. The electrical resistance changes with the amount of bend: the greater the bend, the higher the resistance. Flex sensors come as thin strips from 1-5 inches long, in uni-directional or bi-directional versions, with resistance ranges such as 1-20 kΩ or 50-200 kΩ. As shown in Fig. 4, bending the sensor substrate produces a resistance relative to the bend radius: a flex of 0° gives about 10 kΩ and a flex of 90° gives 30-40 kΩ.
The mechanical specifications of the flex sensors are a life of more than 1 million cycles, a height of at most 0.43 mm (0.017 inches) and a temperature range of -35 to +80°C. The electrical specifications are a flat resistance of 25 kΩ with a resistance tolerance of ±30%. The bend resistance range is roughly 45-125 kΩ (depending on the bend radius) and the power rating is about 0.50 W continuous and 1 W peak. The flex sensors offer a high level of reliability, consistency and repeatability.
Because the flex sensor used as a voltage divider has a relatively high source impedance, an op-amp with low bias current reduces the measurement error, as shown in Fig. 5: the sensor is buffered with a single-sided operational amplifier acting as an impedance buffer.
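The divider reading can be worked through numerically. This sketch assumes the Fig. 4 figures (about 10 kΩ flat, 30-40 kΩ at 90°, modeled linearly) together with an assumed 5 V supply and a 10 kΩ fixed resistor; the real circuit values are not given in the text:

```python
def flex_resistance(angle_deg: float,
                    r_flat: float = 10_000.0,
                    r_90deg: float = 35_000.0) -> float:
    """Approximate sensor resistance (ohms), interpolating linearly
    between the flat value and the 90-degree bend value."""
    return r_flat + (r_90deg - r_flat) * angle_deg / 90.0

def divider_voltage(angle_deg: float,
                    v_in: float = 5.0,
                    r_fixed: float = 10_000.0) -> float:
    """Divider output with the flex sensor on the low side:
    Vout = Vin * R_flex / (R_fixed + R_flex)."""
    r = flex_resistance(angle_deg)
    return v_in * r / (r_fixed + r)
```

With these assumed values, a flat sensor gives 2.5 V and a 90° bend raises the output to roughly 3.9 V, which is the swing the buffered op-amp stage would pass to the processor's ADC.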
Fig. 3: Flex sensors
Fig. 4: Flex sensor-variable resistance readings
Fig. 5: Flex sensor basic circuit
Bluetooth: Bluetooth is a wireless technology. The communication range is 0-10 m at a transmit power of 1 mW; the range increases to about 100 m when the power is amplified to 20 dBm.
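The two power figures above are the same quantity in different units, related by dBm = 10·log10(P/1 mW); a quick conversion confirms that 1 mW is 0 dBm and 20 dBm is 100 mW:

```python
import math

def mw_to_dbm(p_mw: float) -> float:
    """Convert power in milliwatts to dBm."""
    return 10.0 * math.log10(p_mw)

def dbm_to_mw(p_dbm: float) -> float:
    """Convert power in dBm to milliwatts."""
    return 10.0 ** (p_dbm / 10.0)
```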
Device for blind people: When blind people speak, the audio input is sent to the speech-to-text module. Three methods are available for speech-to-text conversion: speech recognition, Computer Assisted Note-taking (CAN) and Communication Access Real-time Translation (CART). These methods differ in their ability to generate exact real-time transcripts. The text from the speech-to-text module is sent to the ARM processor and then to the display, where the text output can be viewed.
When blind people speak, their voice is sent as input to the speech-to-text module, which uses an automatic speech recognition technique for the conversion in a sound-shielded environment. The module could also be designed with CAN systems, where the speed and quality of the transfer depend on how abbreviations or shortened words are expanded into the corresponding sentences; linguistic knowledge is necessary in this technique to adapt the wording. CART systems can also be used for highly flexible real-time speech-to-text conversion, but their limitations include the long training period and the cost of the steno system. Here, automatic speech recognition is used. The converted text message is sent to the processor and, after recognition, the processor forwards the output to the display.
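The routing of recognized text to the display can be sketched as follows. The recognizer itself is out of scope here, so this only shows the handoff step; the Display class and the upper-casing (matching the all-capitals example in Fig. 6) are illustrative assumptions:

```python
class Display:
    """Stand-in for the device's text display."""
    def __init__(self):
        self.lines = []
    def show(self, text: str):
        # Shown in capitals, as in the Fig. 6 example output.
        self.lines.append(text.upper())

def handle_utterance(recognized_text: str, display: Display):
    """Forward text from the speech-to-text module to the display,
    as the ARM processor does in the device."""
    display.show(recognized_text)
```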
Fig. 6: Voice to text conversion interface
The output text processed by the speech-to-text module is shown on the display. Figure 6 shows the result when a blind person speaks the message "ARM IS A 32 BIT PROCESSOR".
The embedded device described above is a boon for dumb people, physically handicapped people and blind people. This single 3-in-1 device resolves most of the problems these people face: any of them can use it to convey their basic thoughts to others. By using this device, PWD can lead their lives as enjoyably as everyone else. The device has a long lifetime and is highly stable, reliable and efficient.
- Brilhault, A., S. Kammoun, O. Gutierrez, P. Truillet and C. Jouffrais, 2011. Fusion of artificial vision and GPS to improve blind pedestrian positioning. Proceedings of the International Conference on New Technologies, Mobility and Security (NTMS), February 7-10, 2011, Paris, France, pp: 1-5.
- Nazemi, A., C. Ortega-Sanchez and I. Murray, 2011. Digital talking book player for the visually impaired using FPGAs. Proceedings of the International Conference on Reconfigurable Computing and FPGAs, November 30-December 2, 2011, Cancun, pp: 493-496.
- Krishnaswamy, C. and G. Srikkanth, 2010. Dynamic Acoustic for Dumb using Embedded Interface (DADEI). Proceedings of the International Conference on Computer Modeling and Simulation, Volume 3, January 22-24, 2010, Sanya, Hainan, pp: 525-527.
- Shih, C.T., C.H. Shih and N.R. Guo, 2011. Development of a computer assistive input device using screen-partitioning and mousekeys method for people with disabilities. Proceedings of the International Conference on Computer Science and Service System (CSSS), June 27-29, 2011, Nanjing, pp: 2201-2204.
- Oliveira, F.C.M.B., F. Quek, H. Cowan and B. Fang, 2012. The Haptic Deictic System-HDS: Bringing blind students to mainstream classrooms. IEEE Trans. Haptics, 5: 172-183.
- Almeshrky, H.A. and M.J. Ab Aziz, 2012. Arabic Malay machine translation for a dialogue system. J. Applied Sci., 12: 1371-1377.
- Hamid, L.A. and D.A. Ramli, 2012. Performance of qualitative fusion scheme for multi-biometric speaker verification system in noisy condition. J. Applied Sci., 12: 1282-1289.
- Saaid, M.F., I. Ismail and M.Z.H. Noor, 2009. Radio Frequency Identification Walking Stick (RFIWS): A device for the blind. Proceedings of the International Colloquium on Signal Processing and Its Applications (CSPA), March 6-8, 2009, Kuala Lumpur, pp: 250-253.
- Ab Aziz, M.J. and A.M. Hendr, 2012. Translation of Classical Arabic language to English. J. Applied Sci., 12: 781-786.
- Bourbakis, N., R. Keefer, D. Dakopoulos and A. Esposito, 2008. A multimodal interaction scheme between a blind user and the tyflos assistive prototype. Proceedings of the IEEE International Conference on Tools with Artificial Intelligence, Volume 2, November 3-5, 2008, Dayton, OH, pp: 487-494.
- Angin, P., B. Bhargava and S. Helal, 2010. A mobile-cloud collaborative traffic lights detector for blind navigation. Proceedings of the International Conference on Mobile Data Management, May 23-26, 2010, Kansas City, MO, USA, pp: 396-401.
- Rajeswari, K., E. Jeevitha and V.K.G.K. Selvi, 2010. Virtual voice-the voice for the dumb. Proceedings of the International Conference on Computational Intelligence and Computing Research, December 28-29, 2010, Coimbatore, pp: 1-3.
- Begum, S. and M. Hasanuzzaman, 2009. Computer Vision-based Bangladeshi sign language recognition system. Proceedings of the International Conference on Computer and Information Technology, December 21-23, 2009, Dhaka, pp: 414-419.
- Zhang, X., C. Ortega-Sanchez and I. Murray, 2008. Reconfigurable PDA for the visually impaired using FPGAs. Proceedings of the International Conference on Reconfigurable Computing and FPGAs, December 3-5, 2008, Cancun, pp: 1-6.