Using streamlit-webrtc with STUN/TURN Azure AI avatar service that requires avatar config, SDP exchange etc. #1765
corticalstack
started this conversation in General
Replies: 1 comment
-
Took the test app a bit further. Steps are:
- Instantiating the webrtc client succeeds and the component shows as playing. The logging outputs the SDP offer generated by streamlit-webrtc.
- When the streamlit-webrtc WebRtcMode is RECVONLY, I don't get any streaming content in the video player element. Both the video and audio tracks are set to "inactive" in the answer SDP.
- When the WebRtcMode is SENDRECV, it shows my webcam, not the avatar video stream.
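For anyone debugging the same symptom, the "inactive" directions can be confirmed by reading the answer SDP directly. A minimal, stdlib-only sketch, using a hypothetical abbreviated answer SDP (not the actual SDP from the avatar service):

```python
# Hypothetical, abbreviated SDP answer in which both media sections
# came back "inactive", as described in the comment above.
answer_sdp = """v=0
m=video 9 UDP/TLS/RTP/SAVPF 96
a=inactive
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=inactive
"""

def media_directions(sdp: str) -> dict:
    """Map each m= (media) section to its direction attribute."""
    directions = {}
    current = None
    for line in sdp.splitlines():
        if line.startswith("m="):
            current = line.split()[0][2:]  # e.g. "video", "audio"
        elif current and line.startswith(
            ("a=sendrecv", "a=sendonly", "a=recvonly", "a=inactive")
        ):
            directions[current] = line[2:]
    return directions

print(media_directions(answer_sdp))  # {'video': 'inactive', 'audio': 'inactive'}
```

If RECVONLY is working, you would expect the answer's media sections to be `sendonly` from the remote side rather than `inactive`.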
-
Hi,
I'm just wondering if it's possible to use streamlit-webrtc to stream the video/audio tracks one-way from the Azure AI Avatar service.
The example code provided by Microsoft (for example chat.js, used in an .html page) makes a request to the AI avatar service with a generated SDP offer and additional config. The service responds with an SDP answer, which is then used to create the RTC peer connection. A call to the AI avatar service with the ICE and SDP configuration looks like this:
```python
try:
    speech_config = speechsdk.SpeechConfig(
        subscription=self.speech_key,
        endpoint=f'wss://{self.speech_region}.tts.speech.microsoft.com/cognitiveservices/websocket/v1?enableTalkingAvatar=true')
    # ... (rest of the snippet truncated in the original)
```
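To make the shape of the exchange concrete: the chat.js flow boils down to generating an SDP offer, sending it to the avatar service together with the avatar configuration, and applying the returned SDP answer. A rough sketch of assembling that request body follows; the field names and avatar config values here are illustrative assumptions, not the documented Azure avatar service schema:

```python
import json

def build_avatar_request(sdp_offer: str, ice_servers: list) -> str:
    """Assemble a JSON body pairing the SDP offer with avatar config.

    Hypothetical payload shape: the real field names are defined by the
    Azure avatar service / chat.js sample and are not reproduced here.
    """
    payload = {
        "sdp": sdp_offer,
        "avatarConfig": {"character": "lisa", "style": "casual-sitting"},
        "iceServers": ice_servers,
    }
    return json.dumps(payload)

body = build_avatar_request("v=0\r\n...", [{"urls": ["stun:stun.example.com"]}])
```

The point of the question below is that streamlit-webrtc performs the offer/answer step internally, so there is no obvious place to inject this extra payload.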
streamlit-webrtc seems to abstract away much of the low-level WebRTC setup: the library handles the SDP exchange, ICE candidate gathering, and peer connection setup internally, so we never work with SDP objects directly. I'm therefore wondering whether the component can be used with a third-party service like this one, whose STUN/TURN setup requires additional configuration (i.e. the SDP and the avatar config).
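For what it's worth, streamlit-webrtc's `webrtc_streamer` does accept an `rtc_configuration` argument for supplying STUN/TURN servers; what it does not obviously expose is a hook for the avatar-specific SDP/config exchange. A placeholder configuration (the server URLs and credentials below are made up, not real endpoints) would look like:

```python
# Hypothetical STUN/TURN configuration in the shape expected by
# streamlit-webrtc's rtc_configuration parameter (an RTCConfiguration
# dict). All URLs and credentials here are placeholders.
RTC_CONFIGURATION = {
    "iceServers": [
        {"urls": ["stun:stun.l.google.com:19302"]},
        {
            "urls": ["turn:turn.example.com:3478"],
            "username": "placeholder-user",
            "credential": "placeholder-secret",
        },
    ]
}
```

This covers the ICE side of the question; it says nothing about where the avatar config and custom SDP answer would be injected.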
Thanks for any hints.