In this project, our main purpose is to develop a smart glove that helps sign language users communicate easily in their daily lives.
Learning a sign language is a demanding process, so it is not widely known among hearing individuals. With the help of this smart glove, hearing individuals will be able to understand those who use sign language. The smart glove can also be used as a training tool for hearing individuals who want to learn sign language.
Our project involves simulating letters, which are the cornerstones of a language. We use an FPGA to implement the algorithm that converts hand gestures into visual letters. Flex sensors on the fingers capture the shape of the hand in order to detect hand gestures. A VGA monitor visually displays the letters produced from the hand gestures by the FPGA.
Figures 1, 2, and 3 illustrate the timing signals produced by the VGA controller. The controller contains two counters. One counter increments on each pixel clock and controls the timing of the h_sync (horizontal sync) signal. By setting it up so that the display time starts at counter value 0, the counter value equals the pixel's column coordinate during the display time. The horizontal display time is followed by a blanking time, which consists of a horizontal front porch, the horizontal sync pulse itself, and the horizontal back porch, each of specified duration. At the end of the row, the counter resets to start the next row. The second counter applies the same operations on the vertical axis to produce the v_sync signal.
Using these counters, the VGA controller outputs the horizontal sync, vertical sync, display enable, and pixel coordinate signals. The sync pulses are specified as positive or negative polarity for each VGA mode.
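As an illustration of the counter logic above, the following Python sketch models the signal generation for the standard 640×480@60 Hz mode. The porch and sync widths (16/96/48 horizontal, 10/2/33 vertical) are the standard figures for this mode; the function and variable names are our own, not those of the VHDL implementation.

```python
# Model of the two VGA counters: h_count increments on each pixel clock,
# v_count increments when a row completes. Standard 640x480@60 Hz timings.
H_DISPLAY, H_FP, H_SYNC, H_BP = 640, 16, 96, 48   # pixel clocks
V_DISPLAY, V_FP, V_SYNC, V_BP = 480, 10, 2, 33    # lines
H_TOTAL = H_DISPLAY + H_FP + H_SYNC + H_BP        # 800 pixel clocks per row
V_TOTAL = V_DISPLAY + V_FP + V_SYNC + V_BP        # 525 rows per frame

def vga_signals(h_count, v_count):
    """Return (h_sync, v_sync, display_enable) for the given counter values.
    Sync pulses have negative polarity (active low) in this mode."""
    h_sync = not (H_DISPLAY + H_FP <= h_count < H_DISPLAY + H_FP + H_SYNC)
    v_sync = not (V_DISPLAY + V_FP <= v_count < V_DISPLAY + V_FP + V_SYNC)
    display_enable = h_count < H_DISPLAY and v_count < V_DISPLAY
    return h_sync, v_sync, display_enable

# During display time the counters equal the pixel coordinates:
h_sync, v_sync, de = vga_signals(100, 50)   # column 100, row 50 -> de is True
```

Because display time starts at counter value 0, no subtraction is needed to recover the pixel coordinates, which is exactly why the counters are set up this way.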
We first create random images and change their color using the R, G, and B signals, as in figure 4. To write text or a string on the screen, there is a well-known font ROM map (figure 5) that includes all the ASCII characters, each stored as an 8×16-bit bitmap. We implemented it in our VHDL code. Using the font ROM, our code is now able to write text or letters on the monitor, as shown in figure 6.
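The font ROM lookup can be sketched in Python as follows. The address layout assumed here (character code × 16 plus the row index, one byte per 8-pixel row) is the usual convention for 8×16 fonts, not necessarily the exact layout of our VHDL ROM, and the sample bitmap for 'A' is illustrative rather than copied from figure 5.

```python
# Each character occupies 16 consecutive bytes in the ROM; each byte encodes
# one 8-pixel row, MSB = leftmost pixel. A screen pixel is lit when the
# corresponding bit of the addressed row byte is set.
FONT_ROM = {
    # Illustrative 8x16 bitmap for 'A' (ASCII 65), one byte per row.
    65 * 16 + r: row for r, row in enumerate([
        0x00, 0x00, 0x10, 0x38, 0x6C, 0xC6, 0xC6, 0xFE,
        0xC6, 0xC6, 0xC6, 0xC6, 0x00, 0x00, 0x00, 0x00,
    ])
}

def glyph_pixel(char, x, y):
    """Return True if pixel (x, y) of the 8x16 glyph for `char` is lit."""
    row_byte = FONT_ROM[ord(char) * 16 + y]
    return bool(row_byte & (0x80 >> x))

# Render 'A' as ASCII art for a quick visual check.
for y in range(16):
    print("".join("#" if glyph_pixel("A", x, y) else "." for x in range(8)))
```

In hardware the same lookup is a single ROM read per pixel: the low 3 bits of the column and low 4 bits of the row select the bit and byte within the glyph.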
To convert the analog data coming from the flex sensors, we used the PMOD AD2. It is a powerful converter with 12-bit resolution per channel. Since four fingers are enough to perform most sign language letters, a single 4-channel PMOD is sufficient.
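As a sketch of what 12-bit resolution means in practice, the mapping from a raw code to a voltage looks like the following. The 3.3 V reference is an assumption for illustration, not a verified figure from the board's configuration.

```python
# A 12-bit ADC maps the input range [0, VREF) onto codes 0..4095.
VREF = 3.3          # assumed reference voltage in volts
FULL_SCALE = 4096   # 2**12 codes

def code_to_voltage(code):
    """Convert a raw 12-bit ADC code to the sampled voltage."""
    if not 0 <= code < FULL_SCALE:
        raise ValueError("code out of 12-bit range")
    return code * VREF / FULL_SCALE

# A flex sensor in a voltage divider raises or lowers this voltage as the
# finger bends, so simple thresholds on the raw code are enough to classify
# each finger as straight or bent.
```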
The PMOD AD2 communicates over I2C, so to interface it with the FPGA we wrote a continuous 4-channel I2C reader in VHDL. The logic of this communication is shown in figure 7.
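The continuous-read loop can be modeled in Python as below. The frame layout assumed here, two bytes per sample with the channel ID in bits 13:12 and the 12-bit result in bits 11:0 (as in the AD799x converters the PMOD AD2 is built on), is an assumption for illustration, and `i2c_read_word` is a hypothetical stand-in for the actual bus transaction.

```python
# Model of the continuous 4-channel reader: each two-byte I2C read carries a
# channel identifier and a 12-bit conversion result; the loop collects one
# fresh sample per channel into a list indexed by finger.
def decode_sample(high_byte, low_byte):
    """Split a two-byte ADC frame into (channel, 12-bit value)."""
    word = (high_byte << 8) | low_byte
    channel = (word >> 12) & 0x3    # bits 13:12 select the channel
    value = word & 0xFFF            # bits 11:0 are the conversion result
    return channel, value

def read_all_channels(i2c_read_word):
    """Read frames until every channel has one sample.
    `i2c_read_word` is a stand-in for the real bus read and must return a
    (high_byte, low_byte) pair."""
    samples = [None] * 4
    while any(s is None for s in samples):
        channel, value = decode_sample(*i2c_read_word())
        samples[channel] = value
    return samples
```

Tagging each result with its channel ID means the reader never has to trust the ordering of the frames on the bus, which simplifies the VHDL state machine.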
In summary, the overall scheme of our project is shown in figures 8 and 9. As the reference sign language we used American Sign Language (ASL), whose gestures are shown in figure 10.
Currently, our system can recognize the 16 letters marked in figure 10.