I want to create a virtual keyboard that catches whatever key you ‘speak’ and sends the keystroke to the active application. The virtual keyboard itself and hooking it up to speech recognition will be easy; the problem I'm struggling with is that the speech recognition is inaccurate!
For example, I say ‘c’ and it hears ‘v’ or something similar. This is extremely irritating. It works better with the microphone on my Logitech headset, but it still misrecognizes what I'm saying sometimes, and it's worse with the built-in microphone on my Lenovo laptop.
What's strange is that Google's speech recognition on the Google search page works perfectly, with or without the headset mike. Why is that?
Is there a way to improve my program?
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Speech.Synthesis;
using System.Speech.Recognition;
using System.Threading;

namespace TextToVoice
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        SpeechSynthesizer sSynth = new SpeechSynthesizer();
        PromptBuilder pBuilder = new PromptBuilder();
        SpeechRecognitionEngine sRecognize = new SpeechRecognitionEngine();

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        // Speak the contents of the text box.
        private void button1_Click(object sender, EventArgs e)
        {
            pBuilder.ClearContent();
            pBuilder.AppendText(textBox1.Text);
            sSynth.Speak(pBuilder);
        }

        // Start listening.
        private void button2_Click(object sender, EventArgs e)
        {
            button2.Enabled = false;
            button3.Enabled = true;

            // Grammar: a few digit words plus the letters a-z.
            Choices sList = new Choices();
            sList.Add(new string[] { "one", "two", "three", "a", "b", "c", "d",
                "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q",
                "r", "s", "t", "u", "v", "w", "x", "y", "z" });
            Grammar gr = new Grammar(new GrammarBuilder(sList));

            try
            {
                sRecognize.RequestRecognizerUpdate();
                sRecognize.LoadGrammar(gr);
                sRecognize.SpeechRecognized += sRecognize_SpeechRecognized;
                sRecognize.SetInputToDefaultAudioDevice();
                sRecognize.RecognizeAsync(RecognizeMode.Multiple);
            }
            catch
            {
                return;
            }
        }

        void sRecognize_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            // Note: "exit", "how are you" and "hey" are not in the loaded
            // grammar, so the first three branches can never fire.
            if (e.Result.Text == "exit")
            {
                Application.Exit();
            }
            else if (e.Result.Text == "how are you")
            {
                sSynth.Speak("I am fine");
                textBox1.Text = "";
                pBuilder.ClearContent();
            }
            else if (e.Result.Text == "hey")
            {
                sSynth.Speak("Hello sir");
                textBox1.Text = "";
            }
            else
            {
                textBox1.Text = textBox1.Text + " " + e.Result.Text;
            }
        }

        // Stop listening and re-enable the start button.
        private void button3_Click(object sender, EventArgs e)
        {
            sRecognize.RecognizeAsyncStop();
            button2.Enabled = true;
            button3.Enabled = false;
        }
    }
}
Okay, an edit:
Basically, is there a way to make the application better understand what I am speaking, the way Windows Speech Recognition does by having you read training text so it learns how you pronounce words? Except that that approach is too tedious 😛
The quality of speech recognition depends on many parameters:
- Microphone: as you noted, a headset microphone is better than the one built into your laptop. A studio microphone would give the best results, I imagine.
- Environment: you'll have a hard time making speech recognition work in a noisy environment compared to a quiet one (ideally a studio).
- Pronunciation: for instance, I'm not a native English speaker and have a strong accent, and when I tried Google's speech recognition, half of the time it understood something else. At the same time, it understands practically everything my girlfriend says.
- Dictionary: if you pronounce words which actually exist, the speech recognition engine can improve its results by checking them against a dictionary. For example, if you say “elephant”, it has a good chance of getting it right. If you say “eglefont”, no engine will be able to write the word.
- Contextual subsets: if the dictionary is bound to a context, it is easier for the engine to understand you. For example, asking the engine to type anything you say is much harder than asking it to distinguish just four commands: “start”, “stop”, “move left” and “move right” (see the sketch after this list).
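To make the last point concrete, here is a minimal sketch of such a tightly constrained grammar with System.Speech (the same library your code uses), assuming a small console app and the four example commands above:

using System;
using System.Speech.Recognition;

class ConstrainedGrammarDemo
{
    static void Main()
    {
        // Only these four phrases can ever be recognized, which makes
        // confusions like "c" vs. "v" impossible by construction.
        var commands = new Choices("start", "stop", "move left", "move right");

        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
            recognizer.SetInputToDefaultAudioDevice();
            recognizer.SpeechRecognized += (s, e) =>
                Console.WriteLine("Heard: {0} (confidence {1:F2})",
                                  e.Result.Text, e.Result.Confidence);
            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine(); // keep listening until Enter is pressed
        }
    }
}

The smaller the set of alternatives, the less room the engine has to pick the wrong one.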
While the first three points may help improve recognition in general, I think you should focus first on the last two.
What is happening, I imagine, is that the recognition engine in your application is trying to understand words, and fails because you are pronouncing isolated letters. The fact that you've specified only letters may not help: the Grammar may still be treating each entry as a word, even though Windows' text-to-speech knows that single letters are letters, not words.
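One workaround for exactly this problem (my suggestion, not something your code or the library mandates) is not to recognize raw letters at all, but the NATO phonetic alphabet, mapping each call sign back to its letter with SemanticResultValue. “charlie” and “victor” are acoustically far harder to confuse than “c” and “v”. A hypothetical replacement for the letter grammar in button2_Click:

// NATO call signs for the letters a-z, in order.
string[] callSigns = { "alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
                       "golf", "hotel", "india", "juliet", "kilo", "lima",
                       "mike", "november", "oscar", "papa", "quebec", "romeo",
                       "sierra", "tango", "uniform", "victor", "whiskey",
                       "xray", "yankee", "zulu" };
Choices letters = new Choices();
for (int i = 0; i < callSigns.Length; i++)
{
    // Map each call sign to the single letter it stands for ("alpha" -> "a").
    letters.Add(new SemanticResultValue(callSigns[i], ((char)('a' + i)).ToString()));
}
sRecognize.LoadGrammar(new Grammar(new GrammarBuilder(letters)));

// In the SpeechRecognized handler, read back the mapped letter and forward it:
// string letter = (string)e.Result.Semantics.Value;   // "a" .. "z"
// SendKeys.SendWait(letter);                          // type into the active window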
Since speech recognition in Windows can also be trained for a specific voice and pronunciation, there might be a way to train it for specific words (in your case, single letters). That said, I haven't used this part of the .NET Framework, so I don't know how customizable it is.
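Independently of training, one thing you can do from code right now is discard results the engine itself is unsure about, using the Confidence score on the recognition result. A sketch of your handler with such a filter; the 0.6 threshold is an arbitrary starting point I picked, tune it for your microphone:

void sRecognize_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    // Too uncertain: ignore rather than type the wrong letter.
    if (e.Result.Confidence < 0.6f)
        return;

    textBox1.Text += " " + e.Result.Text;
}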
Also related: Jeff Atwood, Whatever Happened to Voice Recognition?