[HW4] Self-attention: Hung-yi Lee 2021/2022 Spring Machine Learning Course Notes EP14 (P45-P47)

Starting today I will be working through Professor Hung-yi Lee's machine learning videos. Here is the link to the course (highly recommended): 李宏毅2021/2022春机器学习课程_哔哩哔哩_bilibili. There are 155 videos in total, and I aim to get through them all.

First, this course assumes some coding background: a quick pass over basic Python and libraries such as NumPy is enough. On the math side, foundations in calculus, linear algebra, and probability theory are all necessary to follow the lectures.


A small tip for this homework: don't bother running the 2021 or 2022 code, because the source URLs for their datasets now return 404. I did find a copy of the dataset online and tried uploading it to Colab, but the upload kept failing due to network instability. It turns out the dataset links in the latest 2023 assignment code still work, so the easiest route is to run the 2023 notebook directly. The dataset this time is much larger than in previous homeworks; with the code left mostly unchanged, training took me almost 5 hours, and beating the stronger baselines would probably take several times as long.


Download the dataset

!wget https://github.com/googly-mingto/ML2023HW4/releases/download/data/Dataset.tar.gz.partaa
!wget https://github.com/googly-mingto/ML2023HW4/releases/download/data/Dataset.tar.gz.partab
!wget https://github.com/googly-mingto/ML2023HW4/releases/download/data/Dataset.tar.gz.partac
!wget https://github.com/googly-mingto/ML2023HW4/releases/download/data/Dataset.tar.gz.partad

!cat Dataset.tar.gz.part* > Dataset.tar.gz
!rm Dataset.tar.gz.partaa
!rm Dataset.tar.gz.partab
!rm Dataset.tar.gz.partac
!rm Dataset.tar.gz.partad
# unzip the file
!tar zxf Dataset.tar.gz
!rm Dataset.tar.gz

(wget output trimmed) All four parts resolve through GitHub's release CDN and download successfully, about 1.45 GB each. The only notable message appears during extraction and is harmless:

tar: Ignoring unknown extended header keyword 'LIBARCHIVE.xattr.com.apple.macl'

Running the extraction cell a second time fails, as expected: the previous cell already extracted the archive and then deleted it with rm Dataset.tar.gz.

!tar zxf Dataset.tar.gz

tar (child): Dataset.tar.gz: Cannot open: No such file or directory tar (child): Error is not recoverable: exiting now tar: Child returned status 2 tar: Error is not recoverable: exiting now
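
If the extraction succeeded, the metadata files referenced by the code below should be present. A quick sanity check (a sketch; the file names come from the Dataset and inference sections that follow):

import os

# These files are loaded by the Dataset/inference code below, so they
# should exist after a successful extraction.
for name in ["mapping.json", "metadata.json", "testdata.json"]:
	path = os.path.join("./Dataset", name)
	print(name, "found" if os.path.exists(path) else "MISSING")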

Fix the random seed

import numpy as np
import torch
import random

def set_seed(seed):
    np.random.seed(seed)
    random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

set_seed(87)

Dataset

import os
import json
import torch
import random
from pathlib import Path
from torch.utils.data import Dataset
from torch.nn.utils.rnn import pad_sequence


class myDataset(Dataset):
	def __init__(self, data_dir, segment_len=128):
		self.data_dir = data_dir
		self.segment_len = segment_len

		# Load the mapping from speaker name to speaker id.
		mapping_path = Path(data_dir) / "mapping.json"
		mapping = json.load(mapping_path.open())
		self.speaker2id = mapping["speaker2id"]

		# Load metadata of training data.
		metadata_path = Path(data_dir) / "metadata.json"
		metadata = json.load(open(metadata_path))["speakers"]

		# Get the total number of speakers.
		self.speaker_num = len(metadata.keys())
		self.data = []
		for speaker in metadata.keys():
			for utterances in metadata[speaker]:
				self.data.append([utterances["feature_path"], self.speaker2id[speaker]])

	def __len__(self):
		return len(self.data)

	def __getitem__(self, index):
		feat_path, speaker = self.data[index]
		# Load preprocessed mel-spectrogram.
		mel = torch.load(os.path.join(self.data_dir, feat_path))

		# Segment the mel-spectrogram into "segment_len" frames.
		if len(mel) > self.segment_len:
			# Randomly get the starting point of the segment.
			start = random.randint(0, len(mel) - self.segment_len)
			# Get a segment with "segment_len" frames.
			mel = torch.FloatTensor(mel[start:start+self.segment_len])
		else:
			mel = torch.FloatTensor(mel)
		# Turn the speaker id into long for computing loss later.
		speaker = torch.FloatTensor([speaker]).long()
		return mel, speaker

	def get_speaker_number(self):
		return self.speaker_num
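
As a quick check, here is a minimal sketch of how myDataset behaves, assuming the archive was extracted to ./Dataset. Each item is a (mel, speaker) pair: mel is at most segment_len frames of a 40-dimensional mel-spectrogram, and speaker is a long tensor for the loss computation later.

dataset = myDataset("./Dataset", segment_len=128)
mel, speaker = dataset[0]
print(mel.shape)                     # e.g. torch.Size([128, 40]) for a long utterance
print(speaker.shape, speaker.dtype)  # torch.Size([1]) torch.int64
print(dataset.get_speaker_number())  # number of speakers (600 for this task)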

Dataloader

import torch
from torch.utils.data import DataLoader, random_split
from torch.nn.utils.rnn import pad_sequence


def collate_batch(batch):
	"""Collate a batch of data: process the features within a batch."""
	mel, speaker = zip(*batch)
	# Because we train the model batch by batch, we need to pad the features in the same batch to make their lengths the same.
	mel = pad_sequence(mel, batch_first=True, padding_value=-20)    # pad with -20, the log of a very small value (10^-20).
	# mel: (batch size, length, 40)
	return mel, torch.FloatTensor(speaker).long()


def get_dataloader(data_dir, batch_size, n_workers):
	"""Generate dataloader"""
	dataset = myDataset(data_dir)
	speaker_num = dataset.get_speaker_number()
	# Split dataset into training dataset and validation dataset
	trainlen = int(0.9 * len(dataset))
	lengths = [trainlen, len(dataset) - trainlen]
	trainset, validset = random_split(dataset, lengths)

	train_loader = DataLoader(
		trainset,
		batch_size=batch_size,
		shuffle=True,
		drop_last=True,
		num_workers=n_workers,
		pin_memory=True,
		collate_fn=collate_batch,
	)
	valid_loader = DataLoader(
		validset,
		batch_size=batch_size,
		num_workers=n_workers,
		drop_last=True,
		pin_memory=True,
		collate_fn=collate_batch,
	)

	return train_loader, valid_loader, speaker_num
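
A quick way to verify the padding logic is to pull one batch and check its shape (a sketch; n_workers=0 keeps this one-off inspection in the main process):

train_loader, valid_loader, speaker_num = get_dataloader("./Dataset", batch_size=32, n_workers=0)
mels, speakers = next(iter(train_loader))
print(mels.shape)      # (32, max length in batch, 40), short utterances padded with -20
print(speakers.shape)  # (32,)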

Model

import torch
import torch.nn as nn
import torch.nn.functional as F


class Classifier(nn.Module):
	def __init__(self, d_model=80, n_spks=600, dropout=0.1):
		super().__init__()
		# Project the input features (40-dim mel frames) up to d_model.
		self.prenet = nn.Linear(40, d_model)
		# TODO:
		#   Change Transformer to Conformer.
		#   https://arxiv.org/abs/2005.08100
		self.encoder_layer = nn.TransformerEncoderLayer(
			d_model=d_model, dim_feedforward=256, nhead=2
		)
		# self.encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=2)

		# Project the dimension of features from d_model to the number of speakers.
		self.pred_layer = nn.Sequential(
			nn.Linear(d_model, d_model),
			nn.Sigmoid(),
			nn.Linear(d_model, n_spks),
		)

	def forward(self, mels):
		"""
		args:
			mels: (batch size, length, 40)
		return:
			out: (batch size, n_spks)
		"""
		# out: (batch size, length, d_model)
		out = self.prenet(mels)
		# out: (length, batch size, d_model)
		out = out.permute(1, 0, 2)
		# The encoder layer expects features in the shape of (length, batch size, d_model).
		out = self.encoder_layer(out)
		# out: (batch size, length, d_model)
		out = out.transpose(0, 1)
		# mean pooling
		stats = out.mean(dim=1)

		# out: (batch, n_spks)
		out = self.pred_layer(stats)
		return out
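
The commented-out nn.TransformerEncoder line above points to the most direct improvement: stacking several encoder layers instead of calling a single one. Below is a minimal sketch of that variant (the class name DeepClassifier and the num_layers parameter are mine, for illustration); the TODO in the code suggests going further and swapping the Transformer for a Conformer.

class DeepClassifier(Classifier):
	def __init__(self, d_model=80, n_spks=600, dropout=0.1, num_layers=2):
		super().__init__(d_model=d_model, n_spks=n_spks, dropout=dropout)
		# Wrap the single encoder layer in a stack of num_layers copies.
		self.encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=num_layers)

	def forward(self, mels):
		out = self.prenet(mels)        # (batch size, length, d_model)
		out = out.permute(1, 0, 2)     # (length, batch size, d_model)
		out = self.encoder(out)        # run the full stack instead of one layer
		out = out.transpose(0, 1)      # (batch size, length, d_model)
		stats = out.mean(dim=1)        # mean pooling over time
		return self.pred_layer(stats)  # (batch size, n_spks)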

Learning-rate schedule

import math

import torch
from torch.optim import Optimizer
from torch.optim.lr_scheduler import LambdaLR


def get_cosine_schedule_with_warmup(
	optimizer: Optimizer,
	num_warmup_steps: int,
	num_training_steps: int,
	num_cycles: float = 0.5,
	last_epoch: int = -1,
):
	"""
	Create a schedule with a learning rate that decreases following the values of the cosine function between the
	initial lr set in the optimizer to 0, after a warmup period during which it increases linearly between 0 and the
	initial lr set in the optimizer.

	Args:
		optimizer (:class:`~torch.optim.Optimizer`):
		The optimizer for which to schedule the learning rate.
		num_warmup_steps (:obj:`int`):
		The number of steps for the warmup phase.
		num_training_steps (:obj:`int`):
		The total number of training steps.
		num_cycles (:obj:`float`, `optional`, defaults to 0.5):
		The number of waves in the cosine schedule (the defaults is to just decrease from the max value to 0
		following a half-cosine).
		last_epoch (:obj:`int`, `optional`, defaults to -1):
		The index of the last epoch when resuming training.

	Return:
		:obj:`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
	"""
	def lr_lambda(current_step):
		# Warmup
		if current_step < num_warmup_steps:
			return float(current_step) / float(max(1, num_warmup_steps))
		# Cosine decay after warmup
		progress = float(current_step - num_warmup_steps) / float(
			max(1, num_training_steps - num_warmup_steps)
		)
		return max(
			0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress))
		)

	return LambdaLR(optimizer, lr_lambda, last_epoch)
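
To see what this schedule actually produces, here is the multiplier applied to the base learning rate at a few steps (a sketch that mirrors lr_lambda above, using the warmup/total step counts from the training config below):

import math

def lr_multiplier(step, warmup=1000, total=70000, num_cycles=0.5):
	# Same formula as lr_lambda: linear warmup, then half-cosine decay.
	if step < warmup:
		return step / max(1, warmup)
	progress = (step - warmup) / max(1, total - warmup)
	return max(0.0, 0.5 * (1.0 + math.cos(math.pi * num_cycles * 2.0 * progress)))

for step in [0, 500, 1000, 35500, 70000]:
	print(step, round(lr_multiplier(step), 3))
# 0 -> 0.0, 500 -> 0.5, 1000 -> 1.0, 35500 -> 0.5, 70000 -> 0.0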

Model function

import torch


def model_fn(batch, model, criterion, device):
	"""Forward a batch through the model."""

	mels, labels = batch
	mels = mels.to(device)
	labels = labels.to(device)

	outs = model(mels)

	loss = criterion(outs, labels)

	# Get the speaker id with highest probability.
	preds = outs.argmax(1)
	# Compute accuracy.
	accuracy = torch.mean((preds == labels).float())

	return loss, accuracy

Validation

from tqdm import tqdm
import torch


def valid(dataloader, model, criterion, device):
	"""Validate on validation set."""

	model.eval()
	running_loss = 0.0
	running_accuracy = 0.0
	pbar = tqdm(total=len(dataloader.dataset), ncols=0, desc="Valid", unit=" uttr")

	for i, batch in enumerate(dataloader):
		with torch.no_grad():
			loss, accuracy = model_fn(batch, model, criterion, device)
			running_loss += loss.item()
			running_accuracy += accuracy.item()

		pbar.update(dataloader.batch_size)
		pbar.set_postfix(
			loss=f"{running_loss / (i+1):.2f}",
			accuracy=f"{running_accuracy / (i+1):.2f}",
		)

	pbar.close()
	model.train()

	return running_accuracy / len(dataloader)

Main function

from tqdm import tqdm

import torch
import torch.nn as nn
from torch.optim import AdamW
from torch.utils.data import DataLoader, random_split


def parse_args():
	"""arguments"""
	config = {
		"data_dir": "./Dataset",
		"save_path": "model.ckpt",
		"batch_size": 64,
		"n_workers": 8,
		"valid_steps": 2000,
		"warmup_steps": 1000,
		"save_steps": 10000,
		"total_steps": 70000,
	}

	return config


def main(
	data_dir,
	save_path,
	batch_size,
	n_workers,
	valid_steps,
	warmup_steps,
	total_steps,
	save_steps,
):
	"""Main function."""
	device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
	print(f"[Info]: Use {device} now!")

	train_loader, valid_loader, speaker_num = get_dataloader(data_dir, batch_size, n_workers)
	train_iterator = iter(train_loader)
	print(f"[Info]: Finish loading data!",flush = True)

	model = Classifier(n_spks=speaker_num).to(device)
	criterion = nn.CrossEntropyLoss()
	optimizer = AdamW(model.parameters(), lr=1e-3)
	scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, total_steps)
	print(f"[Info]: Finish creating model!",flush = True)

	best_accuracy = -1.0
	best_state_dict = None

	pbar = tqdm(total=valid_steps, ncols=0, desc="Train", unit=" step")

	for step in range(total_steps):
		# Get data
		try:
			batch = next(train_iterator)
		except StopIteration:
			train_iterator = iter(train_loader)
			batch = next(train_iterator)

		loss, accuracy = model_fn(batch, model, criterion, device)
		batch_loss = loss.item()
		batch_accuracy = accuracy.item()

		# Update model
		loss.backward()
		optimizer.step()
		scheduler.step()
		optimizer.zero_grad()

		# Log
		pbar.update()
		pbar.set_postfix(
			loss=f"{batch_loss:.2f}",
			accuracy=f"{batch_accuracy:.2f}",
			step=step + 1,
		)

		# Do validation
		if (step + 1) % valid_steps == 0:
			pbar.close()

			valid_accuracy = valid(valid_loader, model, criterion, device)

			# keep the best model
			if valid_accuracy > best_accuracy:
				best_accuracy = valid_accuracy
				best_state_dict = model.state_dict()

			pbar = tqdm(total=valid_steps, ncols=0, desc="Train", unit=" step")

		# Save the best model so far.
		if (step + 1) % save_steps == 0 and best_state_dict is not None:
			torch.save(best_state_dict, save_path)
			pbar.write(f"Step {step + 1}, best model saved. (accuracy={best_accuracy:.4f})")

	pbar.close()


if __name__ == "__main__":
	main(**parse_args())
[Info]: Use cpu now!
[Info]: Finish loading data!
[Info]: Finish creating model!
Train: 100% 2000/2000 [08:21<00:00,  3.99 step/s, accuracy=0.05, loss=4.89, step=2000]
Valid:  99% 5632/5667 [00:16<00:00, 345.07 uttr/s, accuracy=0.06, loss=4.94]
Train: 100% 2000/2000 [08:18<00:00,  4.02 step/s, accuracy=0.16, loss=4.23, step=4000]
Valid:  99% 5632/5667 [00:13<00:00, 429.97 uttr/s, accuracy=0.15, loss=4.22]
Train: 100% 2000/2000 [08:07<00:00,  4.10 step/s, accuracy=0.20, loss=3.84, step=6000]
Valid:  99% 5632/5667 [00:10<00:00, 523.66 uttr/s, accuracy=0.20, loss=3.82]
Train: 100% 2000/2000 [08:20<00:00,  4.00 step/s, accuracy=0.34, loss=3.35, step=8000]
Valid:  99% 5632/5667 [00:11<00:00, 506.66 uttr/s, accuracy=0.26, loss=3.50]
Train: 100% 2000/2000 [08:16<00:00,  4.03 step/s, accuracy=0.36, loss=3.18, step=1e+4]
Valid:  99% 5632/5667 [00:07<00:00, 732.66 uttr/s, accuracy=0.29, loss=3.26]
Step 10000, best model saved. (accuracy=0.2919)
Train: 100% 2000/2000 [08:14<00:00,  4.05 step/s, accuracy=0.41, loss=3.03, step=12000]
Valid:  99% 5632/5667 [00:10<00:00, 547.19 uttr/s, accuracy=0.32, loss=3.15]
Train: 100% 2000/2000 [08:03<00:00,  4.13 step/s, accuracy=0.41, loss=2.50, step=14000]
Valid:  99% 5632/5667 [00:09<00:00, 576.39 uttr/s, accuracy=0.35, loss=2.97]
Train: 100% 2000/2000 [08:18<00:00,  4.01 step/s, accuracy=0.44, loss=2.55, step=16000]
Valid:  99% 5632/5667 [00:13<00:00, 412.67 uttr/s, accuracy=0.38, loss=2.84]
Train: 100% 2000/2000 [08:11<00:00,  4.07 step/s, accuracy=0.45, loss=2.55, step=18000]
Valid:  99% 5632/5667 [00:14<00:00, 399.51 uttr/s, accuracy=0.40, loss=2.73]
Train: 100% 2000/2000 [08:21<00:00,  3.99 step/s, accuracy=0.36, loss=2.65, step=2e+4]
Valid:  99% 5632/5667 [00:09<00:00, 591.63 uttr/s, accuracy=0.42, loss=2.62]
Step 20000, best model saved. (accuracy=0.4213)
Train: 100% 2000/2000 [08:17<00:00,  4.02 step/s, accuracy=0.45, loss=2.40, step=22000]
Valid:  99% 5632/5667 [00:10<00:00, 546.98 uttr/s, accuracy=0.45, loss=2.52]
Train: 100% 2000/2000 [08:14<00:00,  4.04 step/s, accuracy=0.48, loss=2.24, step=24000]
Valid:  99% 5632/5667 [00:12<00:00, 464.67 uttr/s, accuracy=0.45, loss=2.51]
Train: 100% 2000/2000 [08:10<00:00,  4.08 step/s, accuracy=0.50, loss=2.19, step=26000]
Valid:  99% 5632/5667 [00:09<00:00, 612.79 uttr/s, accuracy=0.46, loss=2.42] 
Train: 100% 2000/2000 [08:19<00:00,  4.00 step/s, accuracy=0.52, loss=2.19, step=28000]
Valid:  99% 5632/5667 [00:10<00:00, 543.84 uttr/s, accuracy=0.47, loss=2.36] 
Train: 100% 2000/2000 [08:24<00:00,  3.96 step/s, accuracy=0.48, loss=2.03, step=3e+4]
Valid:  99% 5632/5667 [00:14<00:00, 380.97 uttr/s, accuracy=0.49, loss=2.33]
Step 30000, best model saved. (accuracy=0.4854)
Train: 100% 2000/2000 [08:21<00:00,  3.99 step/s, accuracy=0.44, loss=2.43, step=32000]
Valid:  99% 5632/5667 [00:07<00:00, 704.44 uttr/s, accuracy=0.50, loss=2.26] 
Train: 100% 2000/2000 [08:17<00:00,  4.02 step/s, accuracy=0.66, loss=1.72, step=34000]
Valid:  99% 5632/5667 [00:09<00:00, 620.62 uttr/s, accuracy=0.50, loss=2.24] 
Train: 100% 2000/2000 [08:20<00:00,  4.00 step/s, accuracy=0.61, loss=1.80, step=36000]
Valid:  99% 5632/5667 [00:09<00:00, 600.96 uttr/s, accuracy=0.51, loss=2.19]
Train: 100% 2000/2000 [08:13<00:00,  4.06 step/s, accuracy=0.64, loss=1.74, step=38000]
Valid:  99% 5632/5667 [00:12<00:00, 444.12 uttr/s, accuracy=0.53, loss=2.13]
Train: 100% 2000/2000 [08:20<00:00,  3.99 step/s, accuracy=0.70, loss=1.49, step=4e+4]
Valid:  99% 5632/5667 [00:10<00:00, 539.81 uttr/s, accuracy=0.53, loss=2.09]
Step 40000, best model saved. (accuracy=0.5300)
Train: 100% 2000/2000 [08:15<00:00,  4.04 step/s, accuracy=0.62, loss=1.76, step=42000]
Valid:  99% 5632/5667 [00:13<00:00, 417.85 uttr/s, accuracy=0.53, loss=2.06]
Train: 100% 2000/2000 [08:18<00:00,  4.02 step/s, accuracy=0.56, loss=2.04, step=44000]
Valid:  99% 5632/5667 [00:11<00:00, 495.57 uttr/s, accuracy=0.55, loss=2.04] 
Train: 100% 2000/2000 [08:13<00:00,  4.05 step/s, accuracy=0.53, loss=1.97, step=46000]
Valid:  99% 5632/5667 [00:08<00:00, 670.28 uttr/s, accuracy=0.55, loss=1.98]
Train: 100% 2000/2000 [08:03<00:00,  4.14 step/s, accuracy=0.56, loss=1.95, step=48000]
Valid:  99% 5632/5667 [00:16<00:00, 346.02 uttr/s, accuracy=0.56, loss=1.99]
Train: 100% 2000/2000 [08:08<00:00,  4.09 step/s, accuracy=0.59, loss=1.92, step=5e+4]
Valid:  99% 5632/5667 [00:10<00:00, 554.63 uttr/s, accuracy=0.56, loss=2.00]
Step 50000, best model saved. (accuracy=0.5584)
Train: 100% 2000/2000 [08:16<00:00,  4.03 step/s, accuracy=0.58, loss=1.55, step=52000]
Valid:  99% 5632/5667 [00:11<00:00, 501.86 uttr/s, accuracy=0.57, loss=1.94]
Train: 100% 2000/2000 [08:17<00:00,  4.02 step/s, accuracy=0.48, loss=1.97, step=54000]
Valid:  99% 5632/5667 [00:17<00:00, 327.04 uttr/s, accuracy=0.57, loss=1.93]
Train: 100% 2000/2000 [08:26<00:00,  3.95 step/s, accuracy=0.81, loss=1.05, step=56000]
Valid:  99% 5632/5667 [00:07<00:00, 712.35 uttr/s, accuracy=0.57, loss=1.93] 
Train: 100% 2000/2000 [08:18<00:00,  4.01 step/s, accuracy=0.61, loss=1.72, step=58000]
Valid:  99% 5632/5667 [00:09<00:00, 567.86 uttr/s, accuracy=0.57, loss=1.94]
Train: 100% 2000/2000 [08:19<00:00,  4.00 step/s, accuracy=0.73, loss=1.19, step=6e+4]
Valid:  99% 5632/5667 [00:17<00:00, 313.22 uttr/s, accuracy=0.58, loss=1.93]
Step 60000, best model saved. (accuracy=0.5785)
Train: 100% 2000/2000 [08:08<00:00,  4.09 step/s, accuracy=0.67, loss=1.34, step=62000]
Valid:  99% 5632/5667 [00:10<00:00, 517.80 uttr/s, accuracy=0.59, loss=1.88] 
Train: 100% 2000/2000 [08:17<00:00,  4.02 step/s, accuracy=0.61, loss=1.61, step=64000]
Valid:  99% 5632/5667 [00:09<00:00, 624.30 uttr/s, accuracy=0.58, loss=1.91] 
Train: 100% 2000/2000 [08:17<00:00,  4.02 step/s, accuracy=0.61, loss=1.64, step=66000]
Valid:  99% 5632/5667 [00:11<00:00, 511.24 uttr/s, accuracy=0.57, loss=1.91]
Train: 100% 2000/2000 [08:21<00:00,  3.99 step/s, accuracy=0.72, loss=1.20, step=68000]
Valid:  99% 5632/5667 [00:08<00:00, 684.44 uttr/s, accuracy=0.58, loss=1.89]
Train: 100% 2000/2000 [08:17<00:00,  4.02 step/s, accuracy=0.64, loss=1.54, step=7e+4]
Valid:  99% 5632/5667 [00:08<00:00, 638.93 uttr/s, accuracy=0.58, loss=1.89]
Step 70000, best model saved. (accuracy=0.5881)

Inference

Inference dataset

import os
import json
import torch
from pathlib import Path
from torch.utils.data import Dataset


class InferenceDataset(Dataset):
	def __init__(self, data_dir):
		testdata_path = Path(data_dir) / "testdata.json"
		metadata = json.load(testdata_path.open())
		self.data_dir = data_dir
		self.data = metadata["utterances"]

	def __len__(self):
		return len(self.data)

	def __getitem__(self, index):
		utterance = self.data[index]
		feat_path = utterance["feature_path"]
		mel = torch.load(os.path.join(self.data_dir, feat_path))

		return feat_path, mel


def inference_collate_batch(batch):
	"""Collate a batch of data."""
	feat_paths, mels = zip(*batch)

	return feat_paths, torch.stack(mels)
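
Note that inference_collate_batch uses torch.stack, which requires every mel-spectrogram in the batch to have the same length. Test utterances are not segmented, so the DataLoader in the next section uses batch_size=1.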

Main function (inference)

import json
import csv
from pathlib import Path
from tqdm.notebook import tqdm

import torch
from torch.utils.data import DataLoader

def parse_args():
	"""arguments"""
	config = {
		"data_dir": "./Dataset",
		"model_path": "./model.ckpt",
		"output_path": "./output.csv",
	}

	return config


def main(
	data_dir,
	model_path,
	output_path,
):
	"""Main function."""
	device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
	print(f"[Info]: Use {device} now!")

	mapping_path = Path(data_dir) / "mapping.json"
	mapping = json.load(mapping_path.open())

	dataset = InferenceDataset(data_dir)
	dataloader = DataLoader(
		dataset,
		batch_size=1,
		shuffle=False,
		drop_last=False,
		num_workers=8,
		collate_fn=inference_collate_batch,
	)
	print(f"[Info]: Finish loading data!",flush = True)

	speaker_num = len(mapping["id2speaker"])
	model = Classifier(n_spks=speaker_num).to(device)
	model.load_state_dict(torch.load(model_path))
	model.eval()
	print(f"[Info]: Finish creating model!",flush = True)

	results = [["Id", "Category"]]
	for feat_paths, mels in tqdm(dataloader):
		with torch.no_grad():
			mels = mels.to(device)
			outs = model(mels)
			preds = outs.argmax(1).cpu().numpy()
			for feat_path, pred in zip(feat_paths, preds):
				results.append([feat_path, mapping["id2speaker"][str(pred)]])

	with open(output_path, 'w', newline='') as csvfile:
		writer = csv.writer(csvfile)
		writer.writerows(results)


if __name__ == "__main__":
	main(**parse_args())
[Info]: Use cpu now!
[Info]: Finish loading data!
[Info]: Finish creating model!

100% 8000/8000 [02:10<00:00, 53.94it/s]
