Bot Framework Speech not working for other languages

I have created a bot that supports multiple languages (English, Kannada, Tamil, Telugu, Hindi), and I am adding the Speech service to it. The bot is not able to recognize any language other than English. Could you please look into my issue and suggest what changes need to be made in Bot Framework Web Chat?

Issue: Based on the language the user is speaking, the bot has to detect that language and set it in the locale parameter. For example, if the user is speaking Kannada, the bot should detect Kannada and the text should appear in Kannada in the Web Chat.
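To make the requirement concrete, the detect-then-set flow can be sketched as a small helper. The function name and the supported-locale list below are illustrative, not part of any SDK:

```javascript
// Hypothetical helper: map the language code detected by the Speech service
// to the locale value handed to Web Chat, falling back to English when the
// detected code is not one of the bot's supported languages.
const SUPPORTED_LOCALES = ['en-US', 'kn-IN', 'ta-IN', 'te-IN', 'hi-IN'];

function pickWebChatLocale(detectedLanguage, fallback = 'en-US') {
  return SUPPORTED_LOCALES.includes(detectedLanguage)
    ? detectedLanguage
    : fallback;
}

console.log(pickWebChatLocale('kn-IN')); // kn-IN
console.log(pickWebChatLocale('fr-FR')); // en-US (unsupported, fall back)
```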

(Web Chat screenshot omitted)



Solution 1:[1]

You can use the method below to make the recognition call; it also prints the detailed output to a file, which should cover your requirement.

I would also suggest running the same code for debugging with your actual Speech resource key and region configuration if the container endpoint is failing.

Note: Hindi (`hi-IN`) is chosen here; you can change it according to your requirement.

```csharp
public static async Task RecognitionWithLanguageAndDetailedOutputAsync()
{
    // Creates an instance of a speech config with the specified subscription key and service region.
    // Replace with your own subscription key and service region (e.g., "westus") if using the Azure service API.
    var config = SpeechConfig.FromSubscription("<your_key>", "<your_region>");

    // Replace the language with yours in BCP-47 format, e.g., en-US.
    config.SpeechRecognitionLanguage = "hi-IN";
    config.OutputFormat = OutputFormat.Detailed;

    // Redirect console output to a file.
    FileStream filestream = new FileStream("out.txt", FileMode.Create);
    var streamwriter = new StreamWriter(filestream) { AutoFlush = true };
    Console.SetOut(streamwriter);
    Console.SetError(streamwriter);

    // Creates a speech recognizer for the configured language, using the microphone as audio input.
    // Requests detailed output format.
    using (var recognizer = new SpeechRecognizer(config))
    {
        Console.WriteLine("Say something ...");

        // Starts speech recognition and returns after a single utterance is recognized. The end of a
        // single utterance is determined by listening for silence at the end, or until a maximum of
        // 15 seconds of audio is processed. The task returns the recognized text as the result.
        // Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for
        // single-shot recognition like a command or query.
        // For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
        var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

        // Checks the result.
        if (result.Reason == ResultReason.RecognizedSpeech)
        {
            Console.WriteLine($"RECOGNIZED: Text={result.Text}");
            Console.WriteLine(" DETAILED RESULTS:");

            var detailedResults = result.Best();
            foreach (var item in detailedResults)
            {
                Console.WriteLine($" Confidence: {item.Confidence}, Text: {item.Text}, LexicalForm: {item.LexicalForm}, NormalizedForm: {item.NormalizedForm}, MaskedNormalizedForm: {item.MaskedNormalizedForm}");
            }
        }
        else if (result.Reason == ResultReason.NoMatch)
        {
            Console.WriteLine("NOMATCH: Speech could not be recognized.");
        }
        else if (result.Reason == ResultReason.Canceled)
        {
            var cancellation = CancellationDetails.FromResult(result);
            Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");

            if (cancellation.Reason == CancellationReason.Error)
            {
                Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
                Console.WriteLine("CANCELED: Did you update the subscription info?");
            }
        }
    }
}
```
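The code above covers recognition on the service side; on the Web Chat side, the chosen language still has to reach the renderer through its `locale` option. A minimal sketch, assuming the standard Web Chat CDN bundle (`window.WebChat`) in the hosting page — only the options object is built here so the wiring is visible, and the token and helper name are placeholders:

```javascript
// Build the options passed to Web Chat's renderer; `locale` controls the
// display language of the chat surface.
function buildWebChatOptions(directLine, locale) {
  return { directLine, locale };
}

const options = buildWebChatOptions({ token: '<your_token>' }, 'hi-IN');
console.log(options.locale); // hi-IN

// In the browser, the options would be used like this (sketch):
// window.WebChat.renderWebChat(
//   {
//     directLine: window.WebChat.createDirectLine({ token: '<your_token>' }),
//     locale: 'hi-IN'
//   },
//   document.getElementById('webchat')
// );
```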

If the issue still persists, it is with the container: the `latest` container image is being used instead of the image for the required locale. The documentation recommends using the container specific to your locale instead of `latest`; all available container version tags can be looked up in the container registry.

Here we used the tag `2.12.0-amd64-hi-in`, i.e. `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:2.12.0-amd64-hi-in`, in the `docker run` command, and it produces the result in the correct language.

Sample output: (screenshot omitted)

Solution 2:[2]

  1. You can use Conditional Formatting on the Home tab, with Highlight Cells Rules -> Duplicate Values.

  2. You can use Remove Duplicates on the Data tab.

  3. You can also find duplicates using a pivot table.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

| Solution | Source |
| --- | --- |
| Solution 1 | RajkumarMamidiChettu-MT |
| Solution 2 | Peter Csala |