Ensure compatibility across multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Here is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to achieve transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications requiring immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));
```
```csharp
await transcriber.ConnectAsync();

// Pseudocode for receiving audio from a microphone, for example:
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock.
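The `GetAudio` call in the real-time example is pseudocode for a microphone capture loop. As one possible concrete version, the following sketch streams a local raw-PCM file to the transcriber in fixed-size chunks. The file name `./audio.pcm` and the 4096-byte chunk size are illustrative assumptions, not part of the SDK, and the file is assumed to contain 16 kHz, 16-bit, mono PCM to match `SampleRate`.

```csharp
using AssemblyAI.Realtime;

// Sketch (assumptions noted above): feed audio to the transcriber
// chunk by chunk, the same shape a microphone callback would take.
await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});
await transcriber.ConnectAsync();

await using var audioFile = File.OpenRead("./audio.pcm");
var buffer = new byte[4096];
int bytesRead;
while ((bytesRead = await audioFile.ReadAsync(buffer)) > 0)
{
    // Forward only the bytes actually read from the file.
    await transcriber.SendAudioAsync(buffer[..bytesRead]);
}

await transcriber.CloseAsync();
```

Note that a file can be read far faster than real time; a live capture source would instead send each chunk as the audio device produces it.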