I am building a simple Python application that uses continuous speech translation through the Azure Cognitive Services Speech SDK. Translation and language detection work as far as I can tell. My goal is to update the UI according to the detected language; however, I cannot get a detected-language signal. The docs say that continuous language identification is supported for Python. Can I get a signal for the detected language?
My question is about:
speechsdk.languageconfig.AutoDetectSourceLanguageConfig(self.detectable_languages)
I am setting the languages according to the SDK specs, using language code plus locale:
detectable_languages = ["en-US", "es-MX"]
I also enable continuous language identification mode on the speech translation config:

speech_translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
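For context, this is roughly what my init_speech_translation_config helper does; the subscription key, region, and target language shown here are placeholders, not my real values:

def init_speech_translation_config(self):
    # Placeholder key and region; my real values come from app settings.
    speech_translation_config = speechsdk.translation.SpeechTranslationConfig(
        subscription="MY_SPEECH_KEY",
        region="MY_SERVICE_REGION")
    # Translate everything into English, whatever the detected source language is.
    speech_translation_config.add_target_language("en")
    # The LanguageIdMode property shown above is set here.
    speech_translation_config.set_property(
        property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode,
        value='Continuous')
    return speech_translation_config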
This is where I am configuring the translation recognizer:

def configure(self):
    self.speech_translation_config = self.init_speech_translation_config()
    self.audio_config = self.set_audio_source()
    self.auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(self.detectable_languages)
    print("Detectable languages set in AutoDetectSourceLanguageConfig:")
    print(self.auto_detect_source_language_config.languages)
    self.translation_recognizer = self.init_translation_recognizer()
    self.set_event_callbacks()
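For completeness, this is roughly how I wire up the callbacks in set_event_callbacks; the detected-language lookup in the recognized handler is just what I have been experimenting with, not something I know to be the right signal:

def set_event_callbacks(self):
    def recognized_cb(evt):
        if evt.result.reason == speechsdk.ResultReason.TranslatedSpeech:
            # This is where I would like to know which language was detected
            # so I can update the UI; the property below is my current guess.
            detected = evt.result.properties.get(
                speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult)
            print("Detected language:", detected)
            print("Translations:", evt.result.translations)

    self.translation_recognizer.recognized.connect(recognized_cb)
    self.translation_recognizer.canceled.connect(lambda evt: print("Canceled:", evt))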
When I run my app I get this error instead:

'AutoDetectSourceLanguageConfig' object has no attribute 'languages'
It seems to me that I should be able to get a signal telling me whether "en-US" or "es-MX" was detected.
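Once I do have the detected language, the UI update itself would be simple; something like this hypothetical hook, where set_ui_language is a stand-in for my actual UI code:

def on_language_detected(self, language):
    # Hypothetical UI hook: 'language' would be "en-US" or "es-MX".
    if language == "es-MX":
        self.set_ui_language("es")  # stand-in for my real UI update
    else:
        self.set_ui_language("en")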