The input speech audio for OpenVoice can be in any language. OpenVoice clones the voice in that audio and uses it to speak in multiple languages. For quick use, we recommend trying the already deployed services:
For users who want to try OpenVoice quickly and do not require high quality or stability, click any of the following links:
This section is only for developers and researchers who are familiar with Linux, Python, and PyTorch. Create a conda environment, clone this repo, and install it:
conda create -n openvoice python=3.9
conda activate openvoice
git clone git@github.com:myshell-ai/OpenVoice.git
cd OpenVoice
pip install -e .
The installation above is the same whether you are using V1 or V2.
Download the checkpoint from here and extract it to the checkpoints folder.
1. Flexible Voice Style Control.
Please see demo_part1.ipynb for example usage of how OpenVoice enables flexible style control over the cloned voice.
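As an orientation before opening the notebook, the overall flow can be sketched as below: synthesize with the base speaker TTS, then convert the tone color to the reference voice. This is not a drop-in implementation; the class names, checkpoint paths, and the se_extractor.get_se call follow the V1 demo as published and may differ in your version, so treat demo_part1.ipynb as authoritative.

```python
def clone_with_style(text, reference_wav, output_path,
                     speaker="default", speed=1.0, device="cpu"):
    """Sketch of the V1 voice-cloning flow (paths/names assumed from the
    demo notebook): base TTS -> tone color conversion to the reference."""
    # Imports live inside the function so this sketch loads without OpenVoice.
    import torch
    from openvoice import se_extractor
    from openvoice.api import BaseSpeakerTTS, ToneColorConverter

    ckpt_base = "checkpoints/base_speakers/EN"
    ckpt_converter = "checkpoints/converter"

    tts = BaseSpeakerTTS(f"{ckpt_base}/config.json", device=device)
    tts.load_ckpt(f"{ckpt_base}/checkpoint.pth")
    converter = ToneColorConverter(f"{ckpt_converter}/config.json", device=device)
    converter.load_ckpt(f"{ckpt_converter}/checkpoint.pth")

    # Source style embedding ships with the checkpoints; the target embedding
    # is extracted from the reference recording.
    source_se = torch.load(f"{ckpt_base}/en_default_se.pth").to(device)
    target_se, _ = se_extractor.get_se(reference_wav, converter,
                                       target_dir="processed", vad=True)

    tmp_path = "tmp.wav"
    tts.tts(text, tmp_path, speaker=speaker, language="English", speed=speed)
    converter.convert(audio_src_path=tmp_path, src_se=source_se,
                      tgt_se=target_se, output_path=output_path)
```

Style control (emotion, accent, rhythm) is applied by changing the speaker and speed arguments of the base TTS call; see the notebook for the available presets.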
2. Cross-Lingual Voice Cloning.
Please see demo_part2.ipynb for an example of cloning into languages seen or unseen in the MSML training set.
3. Gradio Demo. We provide a minimalist local gradio demo here. If you run into issues with the gradio demo, we strongly suggest consulting demo_part1.ipynb, demo_part2.ipynb, and the QnA. Launch a local gradio demo with python -m openvoice_app --share.
Download the checkpoint from here and extract it to the checkpoints_v2 folder.
Install MeloTTS:
pip install git+https://github.com/myshell-ai/MeloTTS.git
python -m unidic download
Demo Usage. Please see demo_part3.ipynb for example usage of OpenVoice V2. It natively supports English, Spanish, French, Chinese, Japanese, and Korean.
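The V2 flow differs from V1 in that MeloTTS provides the base speech. The sketch below outlines that flow; the checkpoint paths, the ses embedding filename, and the MeloTTS calls are assumptions based on the published V2 demo and may not match your version, so verify against demo_part3.ipynb.

```python
def clone_v2(text, reference_wav, output_path, language="EN", device="cpu"):
    """Sketch of the V2 flow (paths/names assumed from demo_part3.ipynb):
    MeloTTS synthesizes base speech; the converter transfers tone color."""
    # Imports inside the function so the sketch loads without the packages.
    import torch
    from melo.api import TTS
    from openvoice import se_extractor
    from openvoice.api import ToneColorConverter

    ckpt_converter = "checkpoints_v2/converter"
    converter = ToneColorConverter(f"{ckpt_converter}/config.json", device=device)
    converter.load_ckpt(f"{ckpt_converter}/checkpoint.pth")

    # Target embedding extracted from the reference recording.
    target_se, _ = se_extractor.get_se(reference_wav, converter,
                                       target_dir="processed", vad=True)

    model = TTS(language=language, device=device)
    speaker_id = next(iter(model.hps.data.spk2id.values()))

    tmp_path = "tmp.wav"
    model.tts_to_file(text, speaker_id, tmp_path)
    # Source embeddings for each base speaker ship under base_speakers/ses/;
    # the exact filename per speaker is an assumption here.
    source_se = torch.load("checkpoints_v2/base_speakers/ses/en-default.pth",
                           map_location=device)
    converter.convert(audio_src_path=tmp_path, src_se=source_se,
                      tgt_se=target_se, output_path=output_path)
```

Switching language means passing a different language code to MeloTTS and loading the matching source embedding; the notebook enumerates the supported keys.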
This section collects unofficial installation guides contributed by the open-source community: