4. What is SST?
Silent Sound Technology (SST) transmits information without using the vocal cords.
It aims to capture silent speech and transform it into text or audio output.
The software can be installed on a wrist tag/display, mobile phone, or PC.
5. Why Needed?
“What happens if we cannot communicate? Suppose we suddenly lose our voice in an accident…”
SST helps those who have lost their voice but still wish to speak.
The output can be routed to communication networks, so people can speak over the phone without disturbing others.
It also works in noisy environments.
6. Origin
The idea was popularized in Stanley Kubrick’s 1968 science-fiction film “2001: A Space Odyssey” (using electronic signals).
The US space agency NASA has investigated the technique for communicating in noisy environments such as the Space Station.
SST was demonstrated in 2010 at CeBIT’s “future park”, one of the largest trade fairs.
The technology is being developed at the Karlsruhe Institute of Technology (KIT), Germany, by Michael Wand and Tanja Schultz.
8. ELECTROMYOGRAPHY (EMG)
A technique for evaluating and recording the electrical activity produced by skeletal muscles.
It detects the electrical potential generated by muscle cells when these cells are electrically or neurologically activated.
Performed using an instrument called an electromyograph, which produces a record called an electromyogram.
The signals can be analyzed to detect medical abnormalities.
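As a rough illustration of how a raw EMG trace is turned into a usable muscle-activity signal, the sketch below rectifies a synthetic trace and smooths it with a moving-average window. The sampling rate, window length, and synthetic burst are illustrative assumptions, not values from the source.

```python
import numpy as np

def emg_envelope(signal, fs, window_ms=50):
    """Rectify an EMG trace and smooth it with a moving-average window.

    A common first step before classifying muscle activity: the raw
    signal oscillates around zero, so we take its absolute value and
    average over a short window to get an activity envelope.
    """
    win = max(1, int(fs * window_ms / 1000))
    rectified = np.abs(signal)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Synthetic example: 1 s of baseline noise with a burst of "muscle activity"
fs = 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
emg = 0.05 * rng.standard_normal(fs)
emg[300:600] += 0.5 * np.sin(2 * np.pi * 80 * t[300:600]) * rng.standard_normal(300)

env = emg_envelope(emg, fs)
# The envelope is clearly higher during the burst than at rest.
print(env[300:600].mean() > 2 * env[:250].mean())  # -> True
```

A real system would classify these envelope features into phonemes or words; here the envelope merely makes the silent articulation visible.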
9. How Can We Speak?
When we speak aloud, air passes through the larynx (vocal cords) and over the tongue.
Words are produced using the articulator muscles in the mouth and jaw region.
11. Process
The system monitors the tiny muscular movements that occur when we speak.
The monitored signals are converted into electrical pulses that can then be turned into speech, without a sound being uttered.
Fig: Electromyography activity
12. DRAWBACKS
The device presently needs nine leads attached to the face, which makes it impractical to use.
It is slightly painful.
Translation to the Chinese language is difficult.
Not portable.
13. Image Processing in SST
A device-oriented package designed and implemented for the purpose of lip reading.
It works on silent speech.
It can recognize words, single sentences, or even continuous sentences from people of different regions.
The device accounts for non-speech accent and pronunciation by observing every movement of the lips and facial expression.
17. Face Detection
Perform lighting compensation on the image.
Extract the skin region and remove all noisy data.
Check the face criteria: skin-colour blocks are identified, the height-to-width ratio is checked against the thresholds (1.5 and 0.8), and a minimum face-dimension constraint is applied.
Crop the current region.
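The ratio and size checks above can be sketched as a simple predicate. Accepting height-to-width ratios between 0.8 and 1.5 matches the thresholds on the slide, but the ordering and the 40-pixel minimum dimension are assumptions for illustration.

```python
def is_face_candidate(width, height, min_dim=40,
                      ratio_low=0.8, ratio_high=1.5):
    """Accept a skin-colour block as a face candidate only if it is
    large enough and roughly face-shaped."""
    if width < min_dim or height < min_dim:
        return False          # fails the minimum face-dimension constraint
    ratio = height / width
    return ratio_low <= ratio <= ratio_high

print(is_face_candidate(100, 120))  # plausible face proportions -> True
print(is_face_candidate(100, 300))  # too elongated -> False
```

Blocks that pass this test are then cropped and handed to the later feature-extraction stages.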
18. Skin Segmentation
One of the important steps in facial feature extraction.
Colour segmentation of the human face depends on the colour space selected.
Skin colours of different people are closely grouped in the normalized RG colour plane (Yang and Waibel).
Search for pixels that fall close enough to this cluster.
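A minimal sketch of this search in the normalized RG plane: each pixel is projected to (r, g) = (R, G) / (R + G + B), and pixels within a fixed distance of a nominal skin-cluster centre are kept. The centre (0.45, 0.31) and the radius are illustrative assumptions, not values from Yang and Waibel.

```python
import numpy as np

def skin_mask(rgb, centre=(0.45, 0.31), radius=0.08):
    """Mark pixels whose normalized (r, g) chromaticity lies close to an
    assumed skin-cluster centre."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1) + 1e-9           # avoid division by zero
    r = rgb[..., 0] / total                   # normalized red
    g = rgb[..., 1] / total                   # normalized green
    dist = np.hypot(r - centre[0], g - centre[1])
    return dist < radius

# A skin-toned pixel vs. a pure-blue pixel
img = np.array([[[200, 140, 110], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask(img))  # -> [[ True False]]
```

Normalizing by the pixel's total intensity is what makes the cluster compact: it discards brightness and keeps only chromaticity, so skin under different lighting maps to nearby points.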
20. Active Shape Models
Fig: (a) original image; (d) active shape of the face
Used to detect the face in the captured video.
The shape model is formed from a set of manually annotated face shapes:
•Align all shapes in the training data to an arbitrary reference by geometric transformation.
•Calculate the average shape.
The model is positioned on the face and iteratively deformed until it fits the face within its bounding box.
The mouth region is then localized.
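The two training steps above (align, then average) can be sketched as follows. The alignment here removes only translation and scale, a simplified stand-in for the full geometric (Procrustes-style) alignment a real active shape model would use; the triangle landmarks are made-up data.

```python
import numpy as np

def mean_shape(shapes):
    """Align annotated landmark shapes and return their average."""
    aligned = []
    for pts in shapes:                      # pts: (n_points, 2) array
        pts = pts - pts.mean(axis=0)        # remove translation
        scale = np.sqrt((pts ** 2).sum())   # Frobenius norm as size
        aligned.append(pts / scale)         # remove scale
    return np.mean(aligned, axis=0)

# Two hand-made "triangle" shapes; s2 is s1 scaled by 2 and shifted
s1 = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
s2 = np.array([[10.0, 10.0], [14.0, 10.0], [12.0, 14.0]])
avg = mean_shape([s1, s2])

# Both inputs are the same shape up to translation/scale, so the mean
# equals either one after alignment.
ref = s1 - s1.mean(axis=0)
ref = ref / np.sqrt((ref ** 2).sum())
print(np.allclose(avg, ref))  # -> True
```

At detection time, this mean shape is placed on the face and iteratively deformed toward the image edges, which is how the mouth region is localized.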
21. Face Detection
VideoFileReader('path') – reads the video frame by frame.
CascadeObjectDetector('FrontalFaceLBP') – creates a detector for the face.
activecontour(A, mask, method) – detects the active contour inside the face region; here the active contour is the lip (i.e., the region of major difference).
centroidColumn (X), centroidRow (Y) – the centroid point.
middleRow, middleColumn – the minor- and major-axis lines of the lip contour.
Contour-fitting point locations.
23. Fig: (1) live video; (2) ROI video; (3) facial features detected in live video; (4) lip in motion with perimeter contour and key points; (5) multi-image montage (28 frames); (6) threshold analysis
24. Applications
People can communicate across different languages by translating the output of SST.
Helps to analyse and understand people who have lost their voice or have a stuttering problem.
Silent Sound Technology can be applied in the military for communicating secret/confidential matters.
Helps people make silent calls during meetings or in crowded places.
Users can say a PIN, credit-card number, password, or other personal details without worrying about eavesdroppers.
The software can be installed on a wrist watch, wrist tag, display, mobile, PC, etc.
25. Conclusion
The software is trained on the lip structure, complexion, and features of the lip area.
It provides an easier mode of communication for people with speech disabilities by converting the identified lip movements directly to speech.
The software can be integrated into mobile or other hand-held devices.
Lip reading for Mandarin Chinese is highly personalized.
The systems are still preliminary and need improvement.
26. REFERENCES
Pradeep B.S. and Zhang Jingang, “Silent Sound Technology for Mandarin”.
Sasikumar Gurumurthy and B.K. Tripathy, “Design and Implementation of Face Recognition System in Matlab Using the Features of Lips”.
Evangelos Skodras and Nikolaos Fakotakis, “An Unconstrained Method for Lip Detection in Color Images”.
Priya Jethani and Bharat Choudhari, “Silent Sound Technology: A Solution to Noisy Communication”.