I am using .NET 6 to set up live transcription with the Deepgram SDK (version 4.0.2) and WebSockets, but transcription is not working and I keep getting only the following log messages:
2024-07-02 06:50:14.245 [Information] DeepgramWsClientOptions: KeepAlive: False
2024-07-02 06:50:14.270 [Information] DeepgramWsClientOptions: OnPrem: False
2024-07-02 06:50:14.271 [Information] DeepgramWsClientOptions: APIVersion: v1
2024-07-02 06:50:14.273 [Information] DeepgramWsClientOptions: API KEY set from environment variable
2024-07-02 06:50:14.289 [Information] DeepgramWsClientOptions: REST BaseAddress does not contain API version: api.deepgram.com
2024-07-02 06:50:14.292 [Information] DeepgramWsClientOptions: BaseAddress does not contain protocol: api.deepgram.com/v1
2024-07-02 06:50:14.294 [Information] DeepgramWsClientOptions: BaseAddress: wss://api.deepgram.com/v1
2024-07-02 06:50:16.400 [Information] Connect: options:
{
"model": "nova-2",
"punctuate": true,
"smart_format": true
}
2024-07-02 06:50:16.403 [Information] Connect: Using default connect cancellation token
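For reference, the SDK comes from the Deepgram NuGet package; the project reference is essentially:

    <PackageReference Include="Deepgram" Version="4.0.2" />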
Here are my code snippets:
AudioProcessingService.cs
using Deepgram;
using Deepgram.Models.Live.v1;

namespace WebSocketAudioServer.Services;

public class AudioProcessingService
{
    // Processes a single chunk of received audio data.
    public async Task Process(byte[] audioData)
    {
        // Placeholder: for now, just log the size of the incoming chunk.
        Console.WriteLine($"Processed audio data: {audioData.Length} bytes");

        // Create a new instance of the Deepgram live client.
        var liveClient = new LiveClient();

        // Subscribe to transcription result events.
        liveClient.Subscribe(new EventHandler<ResultResponse>((sender, e) =>
        {
            if (e.Channel != null && e.Channel.Alternatives != null && e.Channel.Alternatives.Count > 0)
            {
                if (e.Channel.Alternatives[0].Transcript == "")
                {
                    return;
                }
                Console.WriteLine($"Speaker: {e.Channel.Alternatives[0].Transcript}");
            }
            else
            {
                Console.WriteLine("No speaker detected");
            }
        }));

        // Start the connection with the desired transcription options.
        var liveSchema = new LiveSchema()
        {
            Model = "nova-2",
            Punctuate = true,
            SmartFormat = true,
        };
        await liveClient.Connect(liveSchema);

        // Send the received audio chunk to Deepgram.
        try
        {
            liveClient.Send(audioData);
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
        }

        // Stop the connection.
        await liveClient.Stop();
    }
}
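For context, Process is called once per binary message from a plain ASP.NET Core WebSocket endpoint. The following is a simplified sketch of that wiring (the "/audio" route and buffer size are illustrative, and buffering/error handling are trimmed):

Program.cs (simplified)
using System.Net.WebSockets;
using WebSocketAudioServer.Services;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<AudioProcessingService>();
var app = builder.Build();

app.UseWebSockets();

// Illustrative endpoint: accept the socket and forward each binary chunk to the service.
app.Map("/audio", async (HttpContext context, AudioProcessingService audioService) =>
{
    if (!context.WebSockets.IsWebSocketRequest)
    {
        context.Response.StatusCode = StatusCodes.Status400BadRequest;
        return;
    }

    using var socket = await context.WebSockets.AcceptWebSocketAsync();
    var buffer = new byte[8192];

    while (socket.State == WebSocketState.Open)
    {
        var result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        if (result.MessageType == WebSocketMessageType.Close)
        {
            break;
        }

        // Hand the received chunk to the Deepgram processing service.
        await audioService.Process(buffer[..result.Count]);
    }
});

app.Run();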
launchSettings.json
{
  "profiles": {
    "WebSocketAudioServer": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "applicationUrl": "http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "DEEPGRAM_API_KEY": "abc"
      }
    }
  }
}
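Since the log reports "API KEY set from environment variable", I assume the SDK reads the key from DEEPGRAM_API_KEY. A quick sanity check (hypothetical snippet, not part of the project) to confirm the key is actually visible to the process would be:

    // Hypothetical check: confirm the value launchSettings.json sets is visible at runtime.
    var apiKey = Environment.GetEnvironmentVariable("DEEPGRAM_API_KEY");
    Console.WriteLine($"DEEPGRAM_API_KEY set: {!string.IsNullOrEmpty(apiKey)} (length: {apiKey?.Length ?? 0})");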
The AudioProcessingService code above is essentially the boilerplate from Deepgram's documentation and examples:
https://developers.deepgram.com/docs/dotnet-sdk-streaming-transcription
https://developers.deepgram.com/docs/dotnet-sdk-v3-to-v4-migration-guide
https://github.com/deepgram/deepgram-dotnet-sdk/blob/main/examples/streaming/http/Program.cs
Any help or a nudge in the right direction would be much appreciated; I have been looking through Deepgram's documentation and online sources but have not had any luck so far.
Thanks.