Speaker Diarization
This project is a speaker diarization solution built on pyannote.audio 2.0. It automatically identifies who spoke when in an audio file and produces accurate diarization results efficiently.
Quick Start
For installation instructions, please refer to the pyannote.audio documentation.
Basic Usage
from pyannote.audio import Pipeline

# load the pretrained speaker diarization pipeline
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization@2022.07")

# run the pipeline on an audio file
diarization = pipeline("audio.wav")

# dump the diarization output to disk using the RTTM format
with open("audio.rttm", "w") as rttm:
    diarization.write_rttm(rttm)
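The returned diarization object can also be inspected directly in Python. A minimal sketch, using the same itertracks call as the inference handler shown further below:

# print one line per speech turn: speaker label, start and end times
for segment, _, label in diarization.itertracks(yield_label=True):
    print(f"{label} speaks from {segment.start:.1f}s to {segment.end:.1f}s")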
Features
Advanced Usage
If the number of speakers is known in advance, you can include the num_speakers parameter in the parameters dictionary:
handler = EndpointHandler()
# base64_audio: base64-encoded audio payload (see the client-side sketch below)
diarization = handler({"inputs": base64_audio, "parameters": {"num_speakers": 2}})
You can also provide lower and/or upper bounds on the number of speakers using the min_speakers and max_speakers parameters:
handler = EndpointHandler()
diarization = handler({"inputs": base64_audio, "parameters": {"min_speakers": 2, "max_speakers": 5}})
If you're feeling adventurous, you can experiment with various pipeline hyperparameters. For instance, you can make voice activity detection more aggressive by increasing the value of the segmentation_onset threshold:
# retrieve the currently instantiated hyperparameters,
# raise the onset threshold, and re-instantiate the pipeline
hparams = handler.pipeline.parameters(instantiated=True)
hparams["segmentation_onset"] += 0.1
handler.pipeline.instantiate(hparams)
The EndpointHandler used above is a custom handler for API inference. It decodes base64-encoded audio and forwards optional parameters (such as the number of speakers) to the pipeline:
from typing import Any, Dict

import base64

import numpy as np
import torch
from pyannote.audio import Pipeline

SAMPLE_RATE = 16000


class EndpointHandler:
    def __init__(self, path: str = ""):
        self.pipeline = Pipeline.from_pretrained("KIFF/pyannote-speaker-diarization-endpoint")

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        """
        Args:
            data: dict with the base64-encoded audio under "inputs" and an optional
                "parameters" dict (e.g. num_speakers, min_speakers, max_speakers).
        Return:
            A dict with a "diarization" key listing one entry per speech turn
            (speaker label, start time, and end time in seconds).
        """
        inputs = data.pop("inputs", data)
        parameters = data.pop("parameters", None)

        # decode the base64 payload into 16-bit PCM samples
        audio_data = base64.b64decode(inputs)
        audio_nparray = np.frombuffer(audio_data, dtype=np.int16)

        # shape the waveform as (channel, time), as expected by pyannote
        audio_tensor = torch.from_numpy(audio_nparray).float().unsqueeze(0)
        pyannote_input = {"waveform": audio_tensor, "sample_rate": SAMPLE_RATE}

        # forward optional parameters (e.g. num_speakers) to the pipeline
        if parameters is not None:
            diarization = self.pipeline(pyannote_input, **parameters)
        else:
            diarization = self.pipeline(pyannote_input)

        # serialize the annotation into a JSON-friendly list of speech turns
        processed_diarization = [
            {"label": str(label), "start": str(segment.start), "stop": str(segment.end)}
            for segment, _, label in diarization.itertracks(yield_label=True)
        ]
        return {"diarization": processed_diarization}
Documentation
Benchmark
Real-time factor
Real-time factor is around 5% using one Nvidia Tesla V100 SXM2 GPU (for the neural inference part) and one Intel Cascade Lake 6248 CPU (for the clustering part).
In other words, it takes approximately 3 minutes to process a one-hour conversation.
Accuracy
This pipeline is benchmarked on a growing collection of datasets.
Processing is fully automatic:
- no manual voice activity detection (as is sometimes the case in the literature)
- no manual number of speakers (though it is possible to provide it to the pipeline)
- no fine-tuning of the internal models nor tuning of the pipeline hyper-parameters to each dataset
... with the least forgiving diarization error rate (DER) setup (named "Full" in this paper, and sketched in code after this list):
- no forgiveness collar
- evaluation of overlapped speech
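A minimal sketch of how this "Full" setup can be computed with pyannote.metrics, assuming reference and hypothesis are pyannote.core.Annotation objects for the same file:

from pyannote.metrics.diarization import DiarizationErrorRate

# "Full" setup: no forgiveness collar, overlapped speech evaluated
metric = DiarizationErrorRate(collar=0.0, skip_overlap=False)

# reference and hypothesis are assumed to be pyannote.core.Annotation objects
der = metric(reference, hypothesis)
print(f"DER = {der * 100:.1f}%")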
License
This project is licensed under the MIT license.
Support
For commercial enquiries and scientific consulting, please contact me.
For technical questions and bug reports, please check the pyannote.audio GitHub repository.
Citations
@inproceedings{Bredin2021,
  title = {{End-to-end speaker segmentation for overlap-aware resegmentation}},
  author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine},
  booktitle = {Proc. Interspeech 2021},
  address = {Brno, Czech Republic},
  month = {August},
  year = {2021},
}

@inproceedings{Bredin2020,
  title = {{pyannote.audio: neural building blocks for speaker diarization}},
  author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  address = {Barcelona, Spain},
  month = {May},
  year = {2020},
}