
(简体中文|English)


PaddleSpeech is an open-source speech model toolkit based on the PaddlePaddle deep learning framework. It supports the development of a wide range of key speech and audio tasks and contains many cutting-edge and influential deep learning models. Some typical applications are shown below:

PaddleSpeech won the NAACL 2022 Best Demo Award; see the paper on arXiv.

Demos

Speech Recognition
Input Audio | Recognition Result

I knocked at the door on the ancient side of the building.

我认为跑步最重要的就是给我带来了身体健康。
Speech Translation (English-to-Chinese)
Input Audio | Translation Result

我 在 这栋 建筑 的 古老 门上 敲门。
Text-to-Speech
Input Text | Synthesized Audio
Life was like a box of chocolates, you never know what you're gonna get.
早上好,今天是2020/10/29,最低温度是-3°C。
季姬寂,集鸡,鸡即棘鸡。棘鸡饥叽,季姬及箕稷济鸡。鸡既济,跻姬笈,季姬忌,急咭鸡,鸡急,继圾几,季姬急,即籍箕击鸡,箕疾击几伎,伎即齑,鸡叽集几基,季姬急极屐击鸡,鸡既殛,季姬激,即记《季姬击鸡记》。
大家好,我是 parrot 虚拟老师,我们来读一首诗,我与春风皆过客,I and the spring breeze are passing by,你携秋水揽星河,you take the autumn water to take the galaxy。
宜家唔系事必要你讲,但系你所讲嘅说话将会变成呈堂证供。
各个国家有各个国家嘅国歌

For more synthesized audio samples, see the PaddleSpeech TTS audio samples page.

Punctuation Restoration
Input Text | Output Text
今天的天气真不错啊你下午有空吗我想约你一起去吃饭 | 今天的天气真不错啊!你下午有空吗?我想约你一起去吃饭。

Features

This project provides an easy-to-use, efficient, flexible, and scalable implementation that aims to better support both industrial applications and academic research. It covers training, inference, testing, and deployment. The main features include:

  • 📦 Ease of Use: low installation barrier; you can get started quickly with the CLI.
  • 🏆 Align to SoTA: provides fast, lightweight models that incorporate state-of-the-art techniques.
  • 🏆 Streaming ASR and TTS: industrial-grade end-to-end streaming recognition and streaming synthesis systems.
  • 💯 Rule-based Chinese Frontend: our frontend includes text normalization and grapheme-to-phoneme (G2P) conversion. In addition, we use customized linguistic rules to adapt to the Chinese context.
  • Support for a variety of mainstream industrial and academic features:
    • 🛎️ Typical audio tasks: the toolkit provides implementations of audio classification, speech translation, automatic speech recognition, text-to-speech, speaker verification, keyword spotting (KWS), and more.
    • 🔬 Mainstream models and datasets: the toolkit implements modules covering the whole speech-task pipeline and uses mainstream datasets such as LibriSpeech, LJSpeech, AIShell, and CSMSC; see the model list for details.
    • 🧩 Cascaded model applications: as an extension of traditional speech tasks, we combine NLP, computer vision, and other tasks to build industrial applications that are closer to real-world needs.

Recent Updates

🔥 Join the technical discussion group for member benefits:

  • Link to the 3-day live course: an in-depth look at speech interaction technologies, covering one-sentence voice cloning, few-shot speech synthesis, and customized speech recognition.
  • 20 GB learning package: video courses, cutting-edge papers, and learning materials.

Scan the QR code with WeChat to follow the official account, then click "Sign Up" and fill in the questionnaire to join the official discussion group, where you can get faster answers to your questions and exchange ideas with developers from many industries. We look forward to having you join.

Installation

We strongly recommend installing PaddleSpeech on Linux with Python 3.8 or later.

Dependencies

  • gcc >= 4.8.5
  • paddlepaddle >= 2.5.0
  • python >= 3.8
  • OS: Linux (recommended), macOS, Windows

PaddleSpeech depends on paddlepaddle. For installation, refer to the official paddlepaddle website and choose the build that matches your machine. The example below installs the CPU version; install other builds as appropriate for your hardware.

pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple

You can also install a specific version of paddlepaddle, or install the develop build.

# Install version 2.4.1. Note: 2.4.1 is only an example; choose a version that satisfies the minimum paddlepaddle requirement.
pip install paddlepaddle==2.4.1 -i https://mirror.baidu.com/pypi/simple
# Install the develop build
pip install paddlepaddle==0.0.0 -f https://www.paddlepaddle.org.cn/whl/linux/cpu-mkl/develop.html
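
Optionally, you can verify the paddlepaddle installation from a Python shell before installing PaddleSpeech; `paddle.utils.run_check()` runs a short built-in self-check of the framework:

>>> import paddle
>>> print(paddle.__version__)
>>> paddle.utils.run_check()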

There are two quick ways to install PaddleSpeech: with pip, or by building from source (recommended).

Install with pip

pip install pytest-runner
pip install paddlespeech

Build from source

git clone https://github.com/PaddlePaddle/PaddleSpeech.git
cd PaddleSpeech
pip install pytest-runner
pip install .

For more installation issues, such as conda environments, the system libraries required by librosa, gcc problems, and Kaldi installation, see the installation documentation. If you run into problems, you can leave a comment on #2150 and search for related issues there.

Quick Start

After installation, developers can get started quickly from the command line or from Python. In command-line mode, change --input to test with your own audio or text; 16 kHz WAV audio is supported.

You can also try it out quickly on AI Studio 👉🏻 one-click prediction, to get started with speech development tasks.

Download the test audio samples:

wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/en.wav

Speech Recognition

 (Click to expand) Open-source Mandarin speech recognition

Command-line usage

paddlespeech asr --lang zh --input zh.wav

Python API usage

>>> from paddlespeech.cli.asr.infer import ASRExecutor
>>> asr = ASRExecutor()
>>> result = asr(audio_file="zh.wav")
>>> print(result)
我认为跑步最重要的就是给我带来了身体健康
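
The default model targets Mandarin. To try the English sample (en.wav, downloaded above) with the same executor, you can switch the model and language; the model name and keyword arguments below (transformer_librispeech, lang, sample_rate) are assumptions based on the CLI options, so check `paddlespeech asr --help` for the exact values supported by your version.

>>> from paddlespeech.cli.asr.infer import ASRExecutor
>>> asr = ASRExecutor()
>>> # model/lang values below are assumed; verify with: paddlespeech asr --help
>>> result = asr(audio_file="en.wav", model="transformer_librispeech", lang="en", sample_rate=16000)
>>> print(result)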

Text-to-Speech

 Open-source Mandarin text-to-speech

Outputs WAV audio at a 24 kHz sampling rate.

Command-line usage

paddlespeech tts --input "你好,欢迎使用百度飞桨深度学习框架!" --output output.wav

Python API usage

>>> from paddlespeech.cli.tts.infer import TTSExecutor
>>> tts = TTSExecutor()
>>> tts(text="今天天气十分不错。", output="output.wav")
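
Because the executor can be reused, several sentences can be synthesized in one process. Below is a minimal sketch that only uses the TTSExecutor API shown above; the sentences and output file names are placeholders:

>>> from paddlespeech.cli.tts.infer import TTSExecutor
>>> tts = TTSExecutor()
>>> sentences = ["今天天气十分不错。", "欢迎使用飞桨语音合成。"]  # placeholder inputs
>>> for i, sent in enumerate(sentences):
...     tts(text=sent, output=f"output_{i}.wav")  # one WAV file per sentence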

Audio Classification

 An open-domain audio classification tool for a variety of scenarios

An audio classification model covering the 527 categories of the AudioSet dataset

Command-line usage

paddlespeech cls --input zh.wav

Python API usage

>>> from paddlespeech.cli.cls.infer import CLSExecutor
>>> cls = CLSExecutor()
>>> result = cls(audio_file="zh.wav")
>>> print(result)
Speech 0.9027186632156372
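
To see more than the single best label, the CLI offers a --topk option; the sketch below assumes the Python executor accepts a matching topk keyword argument, so treat that name as an assumption and fall back to the CLI if your version differs.

>>> from paddlespeech.cli.cls.infer import CLSExecutor
>>> cls = CLSExecutor()
>>> # topk is assumed to mirror the CLI's --topk option
>>> result = cls(audio_file="zh.wav", topk=5)
>>> print(result)  # top-5 labels with scores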

Speaker Embedding Extraction

 An industrial-grade speaker embedding extraction tool

Command-line usage

paddlespeech vector --task spk --input zh.wav

Python API usage

>>> from paddlespeech.cli.vector import VectorExecutor
>>> vec = VectorExecutor()
>>> result = vec(audio_file="zh.wav")
>>> print(result)  # 187-dimensional vector
[ -0.19083306   9.474295   -14.122263    -2.0916545    0.04848729
   4.9295826    1.4780062    0.3733844   10.695862     3.2697146
  -4.48199     -0.6617882   -9.170393   -11.1568775   -1.2358263 ...]
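
A common follow-up is to compare two embeddings with cosine similarity to judge whether two recordings come from the same speaker. Below is a minimal NumPy sketch; the two file names are placeholders for your own recordings, and the 0.7 threshold is only illustrative, not a calibrated decision boundary.

>>> import numpy as np
>>> from paddlespeech.cli.vector import VectorExecutor
>>> vec = VectorExecutor()
>>> emb1 = np.asarray(vec(audio_file="speaker_a.wav"))  # placeholder file names
>>> emb2 = np.asarray(vec(audio_file="speaker_b.wav"))
>>> score = float(np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2)))
>>> print("same speaker" if score > 0.7 else "different speakers", score)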

Punctuation Restoration

 Restore text punctuation with a single command; can be combined with an ASR model

Command-line usage

paddlespeech text --task punc --input 今天的天气真不错啊你下午有空吗我想约你一起去吃饭

Python API usage

>>> from paddlespeech.cli.text.infer import TextExecutor
>>> text_punc = TextExecutor()
>>> result = text_punc(text="今天的天气真不错啊你下午有空吗我想约你一起去吃饭")
>>> print(result)
今天的天气真不错啊!你下午有空吗?我想约你一起去吃饭。
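
Combining this with the ASR executor shown earlier produces a punctuated transcription in a few lines; here is a minimal sketch using only the APIs demonstrated above:

>>> from paddlespeech.cli.asr.infer import ASRExecutor
>>> from paddlespeech.cli.text.infer import TextExecutor
>>> asr = ASRExecutor()
>>> text_punc = TextExecutor()
>>> transcript = asr(audio_file="zh.wav")   # raw transcription without punctuation
>>> print(text_punc(text=transcript))       # transcription with punctuation restored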

Speech Translation

 An end-to-end English-to-Chinese speech translation tool

Uses precompiled Kaldi tools; currently only supported on Ubuntu.

Command-line usage

paddlespeech st --input en.wav

Python API usage

>>> from paddlespeech.cli.st.infer import STExecutor
>>> st = STExecutor()
>>> result = st(audio_file="en.wav")
['我 在 这栋 建筑 的 古老 门上 敲门 。']

Quick Start with Services

After installation, developers can start services such as speech recognition, text-to-speech, and audio classification with a single command.

You can try it out quickly on AI Studio: one-click SpeechServer deployment.

Start the server

paddlespeech_server start --config_file ./demos/speech_server/conf/application.yaml

Access the speech recognition service

paddlespeech_client asr --server_ip 127.0.0.1 --port 8090 --input input_16k.wav

Access the text-to-speech service

paddlespeech_client tts --server_ip 127.0.0.1 --port 8090 --input "您好,欢迎使用百度飞桨语音合成服务。" --output output.wav

Access the audio classification service

paddlespeech_client cls --server_ip 127.0.0.1 --port 8090 --input input.wav

For more command-line usage of the services, see the demos.

Quick Start with Streaming Services

Developers can try the streaming ASR and streaming TTS services.

Start the streaming ASR server

paddlespeech_server start --config_file ./demos/streaming_asr_server/conf/application.yaml

Access the streaming ASR service

paddlespeech_client asr_online --server_ip 127.0.0.1 --port 8090 --input input_16k.wav

Start the streaming TTS server

paddlespeech_server start --config_file ./demos/streaming_tts_server/conf/tts_online_application.yaml

Access the streaming TTS service

paddlespeech_client tts_online --server_ip 127.0.0.1 --port 8092 --protocol http --input "您好,欢迎使用百度飞桨语音合成服务。" --output output.wav

For more information, see: streaming ASR and streaming TTS.

Model List

PaddleSpeech supports many mainstream models and provides pretrained models; see the model list for details.

The speech-to-text part of PaddleSpeech includes acoustic models for speech recognition, language models for speech recognition, and speech translation, as detailed below:

Speech-to-Text Module Type | Dataset | Model Type | Example Script
Speech Recognition | Aishell | DeepSpeech2 RNN + Conv based Models | deepspeech2-aishell
Speech Recognition | Aishell | Transformer based Attention Models | u2.transformer.conformer-aishell
Speech Recognition | Librispeech | Transformer based Attention Models | deepspeech2-librispeech / transformer.conformer.u2-librispeech / transformer.conformer.u2-kaldi-librispeech
Speech Recognition | TIMIT | Unified Streaming & Non-streaming Two-pass | u2-timit
Alignment | THCHS30 | MFA | mfa-thchs30
Language Model | — | Ngram Language Model | kenlm
Speech Translation (English-to-Chinese) | TED En-Zh | Transformer + ASR MTL | transformer-ted
Speech Translation (English-to-Chinese) | TED En-Zh | FAT + Transformer + ASR MTL | fat-st-ted

The text-to-speech part of PaddleSpeech consists of three main modules: text frontend, acoustic model, and vocoder. The acoustic models and vocoders are listed below:

Text-to-Speech Module Type | Model Type | Dataset | Example Script
Text Frontend | — | — | tn / g2p
Acoustic Model | Tacotron2 | LJSpeech / CSMSC | tacotron2-ljspeech / tacotron2-csmsc
Acoustic Model | Transformer TTS | LJSpeech | transformer-ljspeech
Acoustic Model | SpeedySpeech | CSMSC | speedyspeech-csmsc
Acoustic Model | FastSpeech2 | LJSpeech / VCTK / CSMSC / AISHELL-3 / ZH_EN / finetune | fastspeech2-ljspeech / fastspeech2-vctk / fastspeech2-csmsc / fastspeech2-aishell3 / fastspeech2-zh_en / fastspeech2-finetune
Acoustic Model | ERNIE-SAT | VCTK / AISHELL-3 / ZH_EN | ERNIE-SAT-vctk / ERNIE-SAT-aishell3 / ERNIE-SAT-zh_en
Acoustic Model | DiffSinger | Opencpop | DiffSinger-opencpop
Vocoder | WaveFlow | LJSpeech | waveflow-ljspeech
Vocoder | Parallel WaveGAN | LJSpeech / VCTK / CSMSC / AISHELL-3 / Opencpop | PWGAN-ljspeech / PWGAN-vctk / PWGAN-csmsc / PWGAN-aishell3 / PWGAN-opencpop
Vocoder | Multi Band MelGAN | CSMSC | Multi Band MelGAN-csmsc
Vocoder | Style MelGAN | CSMSC | Style MelGAN-csmsc
Vocoder | HiFiGAN | LJSpeech / VCTK / CSMSC / AISHELL-3 / Opencpop | HiFiGAN-ljspeech / HiFiGAN-vctk / HiFiGAN-csmsc / HiFiGAN-aishell3 / HiFiGAN-opencpop
Vocoder | WaveRNN | CSMSC | WaveRNN-csmsc
Voice Cloning | GE2E | Librispeech, etc. | GE2E
Voice Cloning | SV2TTS (GE2E + Tacotron2) | AISHELL-3 | VC0
Voice Cloning | SV2TTS (GE2E + FastSpeech2) | AISHELL-3 | VC1
Voice Cloning | SV2TTS (ECAPA-TDNN + FastSpeech2) | AISHELL-3 | VC2
Voice Cloning | GE2E + VITS | AISHELL-3 | VITS-VC
End-to-End | VITS | CSMSC / AISHELL-3 | VITS-csmsc / VITS-aishell3

Audio Classification

Task | Dataset | Model Type | Example Script
Audio Classification | ESC-50 | PANN | pann-esc50

Keyword Spotting

Task | Dataset | Model Type | Example Script
Keyword Spotting | hey-snips | MDTC | mdtc-hey-snips

Speaker Verification

Task | Dataset | Model Type | Example Script
Speaker Verification | VoxCeleb1/2 | ECAPA-TDNN | ecapa-tdnn-voxceleb12

Speaker Diarization

Task | Dataset | Model Type | Example Script
Speaker Diarization | AMI | ECAPA-TDNN + AHC / SC | ecapa-tdnn-ami

Punctuation Restoration

Task | Dataset | Model Type | Example Script
Punctuation Restoration | IWSLT2012_zh | Ernie Linear | iwslt2012-punc0

Documentation and Tutorials

For the tasks that PaddleSpeech focuses on, the following guides help developers get started quickly and understand the core ideas of speech processing.

The text-to-speech module was originally named Parakeet and has since been merged into this repository. If you are interested in academic research on this task, see the TTS research overview; the model introduction is also a good guide to the text-to-speech pipeline.

⭐ Application Examples

  • PaddleBoBo: uses the PaddleSpeech TTS module to generate voices for virtual humans.

Citation

To cite PaddleSpeech in your research, please use the following citations.

@InProceedings{pmlr-v162-bai22d,
  title = {{A}$^3${T}: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing},
  author = {Bai, He and Zheng, Renjie and Chen, Junkun and Ma, Mingbo and Li, Xintong and Huang, Liang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages = {1399--1411},
  year = {2022},
  volume = {162},
  series = {Proceedings of Machine Learning Research},
  month = {17--23 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v162/bai22d/bai22d.pdf},
  url = {https://proceedings.mlr.press/v162/bai22d.html},
}

@inproceedings{zhang2022paddlespeech,
    title = {PaddleSpeech: An Easy-to-Use All-in-One Speech Toolkit},
    author = {Hui Zhang and Tian Yuan and Junkun Chen and Xintong Li and Renjie Zheng and Yuxin Huang and Xiaojie Chen and Enlei Gong and Zeyu Chen and Xiaoguang Hu and Dianhai Yu and Yanjun Ma and Liang Huang},
    booktitle = {Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations},
    year = {2022},
    publisher = {Association for Computational Linguistics},
}

@inproceedings{zheng2021fused,
  title={Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation},
  author={Zheng, Renjie and Chen, Junkun and Ma, Mingbo and Huang, Liang},
  booktitle={International Conference on Machine Learning},
  pages={12736--12746},
  year={2021},
  organization={PMLR}
}

Contributing to PaddleSpeech

You are warmly welcome to ask questions in Discussions and report bugs in Issues. We also very much hope you will join in the development of PaddleSpeech!

Contributors

Acknowledgements

In addition, PaddleSpeech depends on many open-source repositories. For more information, see references.

License

PaddleSpeech is provided under the Apache-2.0 license.

Stargazers over time


