Releases: daily-co/daily-python
v0.5.2
Improvements and fixes
- Disabled echo cancellation, noise suppression and auto gain control by default to match the previous library behavior.
v0.5.1
Improvements and fixes
- Fixed a crash when passing audio frames to `VirtualMicrophone.write_frames()` that require padding (i.e., a non-multiple of 10 ms worth of audio frames).
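To illustrate what padding to a 10 ms boundary means here, the following is a minimal sketch. The constants and the `pad_to_10ms` helper are illustrative, not part of the daily-python API:

```python
SAMPLE_RATE = 16000   # samples per second, mono (illustrative)
BYTES_PER_SAMPLE = 2  # 16-bit linear PCM
CHUNK_MS = 10         # audio is processed in 10 ms chunks

def pad_to_10ms(frames: bytes) -> bytes:
    """Zero-pad raw PCM so its duration is a multiple of 10 ms."""
    chunk_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * CHUNK_MS // 1000
    remainder = len(frames) % chunk_bytes
    if remainder:
        frames += b"\x00" * (chunk_bytes - remainder)
    return frames

# 25 ms of audio (800 bytes) gets padded up to 30 ms (960 bytes).
padded = pad_to_10ms(b"\x01" * (SAMPLE_RATE * BYTES_PER_SAMPLE * 25 // 1000))
```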
v0.5.0
New features
- Support for non-blocking virtual audio devices. This allows integration with hardware devices (e.g. via PyAudio).
- Echo cancellation, noise suppression and auto gain control can now be enabled for virtual microphones via custom constraints.
- It is now possible to pass additional Deepgram settings to `start_transcription()` using the new `extra` field.
- New demos showing how to integrate with PyAudio and how to send images, plus other improvements to existing demos.
Improvements and fixes
- Fixed an issue on user leave (manual or initiated by the server) that would prevent the user from rejoining.
API changes
- Transcription defaults have been removed in favor of Deepgram's defaults. This makes it possible to simply specify `{"model": "nova-2"}`.
- Transcription `redact` can now also be a list of strings, as supported by Deepgram (e.g. `["pci"]`).
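A quick sketch of what these two changes allow, with illustrative values (`"ssn"` is an extra example entry, not taken from these notes):

```python
# With defaults removed, a settings dict can be as small as the model name.
minimal = {"model": "nova-2"}

# "redact" may now be a list of strings instead of a single value.
with_redaction = {"model": "nova-2", "redact": ["pci", "ssn"]}
```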
v0.4.0
New features
- Added support for capturing individual participant audio tracks.
- Added support for ejecting participants.
- Support Python >= 3.7 and, on Linux, glibc >= 2.28.
Improvements and fixes
- Improved general video renderer performance.
- Improved media subscriptions stability and performance.
- Fixed a deadlock that would not allow receiving multiple simultaneous video renderers.
- Fixed a deadlock when a screen share was stopped.
- Fixed an issue where setting the user name would not be reflected automatically when requesting the participants list.
- Fixed an issue that could cause joins/reconnects to not complete successfully.
- Fixed a sporadic crash that could occur when handling media streams.
- Added Qt demo (similar to the existing Gtk demo).
- Qt and Gtk demos can now save the selected participant audio into a WAV file and can also render screen share.
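Saving raw participant audio into a WAV file, as the Qt and Gtk demos do, can be sketched with just the standard library. The audio parameters and the PCM buffer below are illustrative stand-ins; the demos obtain the buffer from the selected participant's audio track:

```python
import wave

SAMPLE_RATE = 16000  # illustrative; must match the captured audio
NUM_CHANNELS = 1
SAMPLE_WIDTH = 2     # 16-bit PCM

def save_wav(path: str, pcm: bytes) -> None:
    """Write raw PCM audio frames into a WAV container."""
    with wave.open(path, "wb") as f:
        f.setnchannels(NUM_CHANNELS)
        f.setsampwidth(SAMPLE_WIDTH)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(pcm)

# One second of silence as stand-in audio data.
save_wav("participant.wav", b"\x00" * (SAMPLE_RATE * SAMPLE_WIDTH))
```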
v0.3.1
Improvements and fixes
- Fixed an issue that could cause `daily-python` clients to join a meeting session in a different region.
- Fixed a deadlock that could occur when a `CallClient` is destroyed.
v0.3.0
New features
- Support for sending chat messages to Daily Prebuilt (`CallClient.send_prebuilt_chat_message()`).
- Added Python type hints (helpful for editor completions).
- Support for Python 3.8.
Improvements and fixes
- Improved response time of `CallClient` getter functions.
- Improved low-latency performance of virtual audio devices.
- Fixed a potential crash after `CallClient.leave()`.
- Improved internal safety of participant video renderers.
- Fixed a `VirtualMicrophoneDevice` memory leak.
- Properly trigger a transcription error event if transcription can't start.
- Demos have been updated to show more real-life code.
API changes
- Removed `detect_language` from `TranscriptionSettings`.
- `EventHandler.on_transcription_stopped` can now tell whether transcription was stopped by a user or because an error occurred.
v0.2.0
New features
- Support for starting/stopping recordings.
- Support for starting/stopping transcriptions and receiving transcription messages.
Improvements and fixes
- Fixed an issue that was causing sporadic audio gaps on macOS and in certain OS task scheduling scenarios.
- Network re-connections have been improved.
API changes
- `VirtualSpeakerDevice.read_frames()` has been improved and no longer requires the user to add sleeps. It is therefore now possible to read, for example, 10 seconds of audio in a single call. Since the timings are now controlled internally, this minimizes potential audio issues.
The following old code:
```python
import time

SAMPLE_RATE = 16000
READ_INTERVAL = 0.01
FRAMES_TO_READ = int(SAMPLE_RATE * READ_INTERVAL)
SECONDS_TO_READ = 10.0

for _ in range(int(SECONDS_TO_READ / READ_INTERVAL)):
    buffer = speaker.read_frames(FRAMES_TO_READ)
    time.sleep(READ_INTERVAL)
```
can be replaced with:
```python
SAMPLE_RATE = 16000
SECONDS_TO_READ = 10
FRAMES_TO_READ = SAMPLE_RATE * SECONDS_TO_READ

buffer = speaker.read_frames(FRAMES_TO_READ)
```
v0.1.1
- Fixed an issue where virtual devices could cause other Python threads to be blocked.
v0.1.0
- Fixed an issue where `meeting_token` was not properly handled in `CallClient.join()`.
- `VirtualMicrophoneDevice.write_samples()` has been renamed to `VirtualMicrophoneDevice.write_frames()`.
- `VirtualMicrophoneDevice.read_samples()` has been renamed to `VirtualMicrophoneDevice.read_frames()`.
- `VirtualMicrophoneDevice.write_frames()` now takes a single argument, `frames`. `frames` are audio frames: an interleaved set of audio samples per channel. `write_frames()` is a synchronous call that finishes when all audio frames have been written. `write_frames()` also adds padding when necessary.
- Handling microphone audio samples has been simplified. Previously, the following code was required:
```python
import time

while True:
    buffer = stream.read(960)
    if not buffer:
        break
    if len(buffer) < 960:
        buffer += b'\x00' * (960 - len(buffer))
    microphone.write_samples(buffer, 480)
    time.sleep(0.03)
```
It can now be replaced with:
```python
microphone.write_frames(stream.read())
```
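To make "an interleaved set of audio samples per channel" concrete, here is a small sketch of how stereo 16-bit PCM frames are laid out. The sample values are illustrative:

```python
import struct

left = [100, 200, 300]      # left-channel samples
right = [-100, -200, -300]  # right-channel samples

# Interleave as L0 R0 L1 R1 L2 R2, each sample a little-endian
# signed 16-bit integer. One frame = one sample per channel.
frames = b"".join(struct.pack("<hh", l, r) for l, r in zip(left, right))

# 3 frames * 2 channels * 2 bytes per sample = 12 bytes.
# A call like microphone.write_frames(frames) would then block until
# all frames have been written.
```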
v0.0.15
- Virtual camera support. It is now possible to create a virtual camera and dynamically provide frames to it so they are sent as your client camera to the meeting.
- Color formats have been renamed: `ARGB32` -> `ARGB`, `BGRA32` -> `BGRA`, ..., `RGB24` -> `RGB`. The bytestrings with frame contents are now expected to follow the color format order. So, for example, an `RGBA` image will have `R` in the first byte, `G` in the second byte, `B` in the third byte, `A` in the fourth byte, and so on.
- The `Daily.init()` parameter `virtual_devices` is no longer necessary. Therefore, `Daily.init(virtual_devices = True)` is now simply `Daily.init()`.
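The byte ordering described above can be sketched as follows; the pixel values and the `rgba_pixel` helper are purely illustrative:

```python
def rgba_pixel(r: int, g: int, b: int, a: int) -> bytes:
    """Pack one pixel in RGBA byte order: R, G, B, A."""
    return bytes([r, g, b, a])

# A 2x1 image: one opaque red pixel, then one semi-transparent blue pixel.
image = rgba_pixel(255, 0, 0, 255) + rgba_pixel(0, 0, 255, 128)
```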