Note
Click here to download the full example code
Text-to-Speech with Tacotron2¶
Overview¶
This tutorial shows how to build a text-to-speech pipeline using the pretrained Tacotron2 in torchaudio.
The text-to-speech pipeline goes as follows:
Text preprocessing
First, the input text is encoded into a list of symbols. In this tutorial, we will use English characters and phonemes as the symbols.
Spectrogram generation
From the encoded text, a spectrogram is generated. We use the Tacotron2 model for this.
Time-domain conversion
The last step is converting the spectrogram into a waveform. The process of generating speech from a spectrogram is also called a vocoder. In this tutorial, three different vocoders are used: WaveRNN, GriffinLim, and Nvidia's WaveGlow.
The following figure illustrates the whole process.
All the related components are bundled in torchaudio.pipelines.Tacotron2TTSBundle, but this tutorial will also cover the process under the hood.
Preparation¶
First, we install the necessary dependencies. In addition to torchaudio, DeepPhonemizer is required to perform phoneme-based encoding.
%%bash
pip3 install deep_phonemizer
import torch
import torchaudio
torch.random.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"
print(torch.__version__)
print(torchaudio.__version__)
print(device)
2.1.1
2.1.1
cuda
import IPython
import matplotlib.pyplot as plt
Text Processing¶
Character-based encoding¶
In this section, we will go through how character-based encoding works.
Since the pretrained Tacotron2 model expects a specific set of symbol tables, the same functionality is available in torchaudio; this section serves mainly to explain the basics of the encoding.
First, we define the set of symbols. For example, we can use '_-!\'(),.:;? abcdefghijklmnopqrstuvwxyz'. Then, we map each character of the input text to the index of the corresponding symbol in the table.
The following is an example of such processing. In the example, symbols that are not in the table are ignored.
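The processing described above can be sketched in plain Python. This is only an illustration using the example symbol table from above, not torchaudio's implementation:

```python
# Example symbol table from above; a minimal sketch, not torchaudio's implementation.
symbols = "_-!'(),.:;? abcdefghijklmnopqrstuvwxyz"
look_up = {s: i for i, s in enumerate(symbols)}

def text_to_sequence(text):
    # Map each character to its index; characters not in the table are ignored.
    return [look_up[s] for s in text.lower() if s in look_up]

text = "Hello world! Text to speech!"
print(text_to_sequence(text))
```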
[19, 16, 23, 23, 26, 11, 34, 26, 29, 23, 15, 2, 11, 31, 16, 35, 31, 11, 31, 26, 11, 30, 27, 16, 16, 14, 19, 2]
As mentioned above, the symbol table and indices must match what the pretrained Tacotron2 model expects. torchaudio provides the transform along with the pretrained model. For example, you can instantiate such a transform with torchaudio.pipelines.TACOTRON2_WAVERNN_CHAR_LJSPEECH.get_text_processor() and use it as follows.
tensor([[19, 16, 23, 23, 26, 11, 34, 26, 29, 23, 15, 2, 11, 31, 16, 35, 31, 11,
31, 26, 11, 30, 27, 16, 16, 14, 19, 2]])
tensor([28], dtype=torch.int32)
The processor object takes either a text or a list of texts as input. When a list of texts is provided, the returned lengths variable represents the valid length of each processed token sequence in the output batch.
The intermediate representation can be retrieved as follows.
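For intuition, batching can be sketched as padding every encoded text to the longest one while recording the valid lengths. This is an illustrative plain-Python sketch using the example symbol table from above, not torchaudio's implementation:

```python
# Illustrative sketch: batching pads shorter sequences; `lengths` records valid sizes.
symbols = "_-!'(),.:;? abcdefghijklmnopqrstuvwxyz"
table = {s: i for i, s in enumerate(symbols)}

def encode(text):
    return [table[c] for c in text.lower() if c in table]

seqs = [encode("hi!"), encode("hello!")]
max_len = max(len(s) for s in seqs)
# Pad with 0 so the batch is rectangular; `lengths` marks how much of each row is valid.
batch = [s + [0] * (max_len - len(s)) for s in seqs]
lengths = [len(s) for s in seqs]
print(batch, lengths)
```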
['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd', '!', ' ', 't', 'e', 'x', 't', ' ', 't', 'o', ' ', 's', 'p', 'e', 'e', 'c', 'h', '!']
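Mapping the indices back through the example symbol table from above recovers these tokens. Again, this is a plain-Python sketch, not the processor's actual API:

```python
# Decode indices back to symbols using the example table from above.
symbols = "_-!'(),.:;? abcdefghijklmnopqrstuvwxyz"
ids = [19, 16, 23, 23, 26, 11, 34, 26, 29, 23, 15, 2, 11, 31, 16, 35, 31, 11,
       31, 26, 11, 30, 27, 16, 16, 14, 19, 2]
print([symbols[i] for i in ids])
```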
Phoneme-based encoding¶
Phoneme-based encoding is similar to character-based encoding, but it uses a symbol table based on phonemes and a G2P (grapheme-to-phoneme) model.
The details of G2P models are out of the scope of this tutorial; we will just take a look at what the conversion looks like.
Similar to the case of character-based encoding, the encoding process is expected to match what the pretrained Tacotron2 model was trained on. torchaudio provides an interface for creating the process.
The following illustrates creating and using the process; the processor here is obtained with bundle.get_text_processor() from the phoneme-based bundle used in the later sections. Behind the scenes, a G2P model is created using the DeepPhonemizer package, and the pretrained weights published by the author of DeepPhonemizer are fetched.
tensor([[54, 20, 65, 69, 11, 92, 44, 65, 38, 2, 11, 81, 40, 64, 79, 81, 11, 81,
20, 11, 79, 77, 59, 37, 2]])
tensor([25], dtype=torch.int32)
Notice that the encoded values are different from the example of character-based encoding.
The intermediate representation looks like the following.
['HH', 'AH', 'L', 'OW', ' ', 'W', 'ER', 'L', 'D', '!', ' ', 'T', 'EH', 'K', 'S', 'T', ' ', 'T', 'AH', ' ', 'S', 'P', 'IY', 'CH', '!']
Spectrogram Generation¶
Tacotron2 is the model we use to generate a spectrogram from the encoded text. For the details of the model, please refer to the paper.
It is easy to instantiate a Tacotron2 model with pretrained weights; however, note that the input to the Tacotron2 model needs to be processed by the matching text processor.
torchaudio.pipelines.Tacotron2TTSBundle bundles the matching models and processors together so that it is easy to create the pipeline.
For the available bundles and their usage, please refer to Tacotron2TTSBundle.
bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_PHONE_LJSPEECH
processor = bundle.get_text_processor()
tacotron2 = bundle.get_tacotron2().to(device)

text = "Hello world! Text to speech!"

with torch.inference_mode():
    processed, lengths = processor(text)
    processed = processed.to(device)
    lengths = lengths.to(device)
    spec, _, _ = tacotron2.infer(processed, lengths)

_ = plt.imshow(spec[0].cpu().detach(), origin="lower", aspect="auto")

Note that the Tacotron2.infer method performs multinomial sampling, so the process of generating the spectrogram incurs randomness.
def plot():
    fig, ax = plt.subplots(3, 1)
    for i in range(3):
        with torch.inference_mode():
            spec, spec_lengths, _ = tacotron2.infer(processed, lengths)
        print(spec[0].shape)
        ax[i].imshow(spec[0].cpu().detach(), origin="lower", aspect="auto")


plot()

torch.Size([80, 190])
torch.Size([80, 184])
torch.Size([80, 185])
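The varying frame counts above are a consequence of sampling a stop decision at each decoder step. As a toy illustration of why sampled stopping yields variable-length outputs (hypothetical probabilities, not Tacotron2's actual decoder):

```python
import random

random.seed(0)

def sampled_decode_length(stop_prob=0.02, max_frames=400):
    # Emit frames one at a time until a randomly sampled stop decision fires,
    # mimicking how a sampled stop token ends decoding at a different step each run.
    frames = 0
    while frames < max_frames:
        frames += 1
        if random.random() < stop_prob:
            break
    return frames

print([sampled_decode_length() for _ in range(3)])
```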
Waveform Generation¶
Once the spectrogram is generated, the last process is to recover the waveform from the spectrogram.
torchaudio provides vocoders based on GriffinLim and WaveRNN.
WaveRNN¶
Continuing from the previous section, we can instantiate the matching WaveRNN model from the same bundle.
bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_PHONE_LJSPEECH

processor = bundle.get_text_processor()
tacotron2 = bundle.get_tacotron2().to(device)
vocoder = bundle.get_vocoder().to(device)

text = "Hello world! Text to speech!"

with torch.inference_mode():
    processed, lengths = processor(text)
    processed = processed.to(device)
    lengths = lengths.to(device)
    spec, spec_lengths, _ = tacotron2.infer(processed, lengths)
    waveforms, lengths = vocoder(spec, spec_lengths)
def plot(waveforms, spec, sample_rate):
    waveforms = waveforms.cpu().detach()

    fig, [ax1, ax2] = plt.subplots(2, 1)
    ax1.plot(waveforms[0])
    ax1.set_xlim(0, waveforms.size(-1))
    ax1.grid(True)
    ax2.imshow(spec[0].cpu().detach(), origin="lower", aspect="auto")
    return IPython.display.Audio(waveforms[0:1], rate=sample_rate)


plot(waveforms, spec, vocoder.sample_rate)
Griffin-Lim¶
Using the Griffin-Lim vocoder is the same as with WaveRNN. You can instantiate the vocoder object with the get_vocoder() method and pass the spectrogram.
bundle = torchaudio.pipelines.TACOTRON2_GRIFFINLIM_PHONE_LJSPEECH

processor = bundle.get_text_processor()
tacotron2 = bundle.get_tacotron2().to(device)
vocoder = bundle.get_vocoder().to(device)

with torch.inference_mode():
    processed, lengths = processor(text)
    processed = processed.to(device)
    lengths = lengths.to(device)
    spec, spec_lengths, _ = tacotron2.infer(processed, lengths)
    waveforms, lengths = vocoder(spec, spec_lengths)
Waveglow¶
Waveglow is a vocoder published by Nvidia. The pretrained weights are published on Torch Hub. One can instantiate the model using the torch.hub module.
# Workaround to load model mapped on GPU
# https://stackoverflow.com/a/61840832
waveglow = torch.hub.load(
    "NVIDIA/DeepLearningExamples:torchhub",
    "nvidia_waveglow",
    model_math="fp32",
    pretrained=False,
)
checkpoint = torch.hub.load_state_dict_from_url(
    "https://api.ngc.nvidia.com/v2/models/nvidia/waveglowpyt_fp32/versions/1/files/nvidia_waveglowpyt_fp32_20190306.pth",  # noqa: E501
    progress=False,
    map_location=device,
)
state_dict = {key.replace("module.", ""): value for key, value in checkpoint["state_dict"].items()}

waveglow.load_state_dict(state_dict)
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to(device)
waveglow.eval()

with torch.no_grad():
    waveforms = waveglow.infer(spec)
Total running time of the script: (1 minutes 49.079 seconds)