KalelPark's LAB
[CODE] Implementing MixUp Step by Step

MixUp implementation.
import torch
import numpy as np
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

batch_size = 4
trainset = torchvision.datasets.STL10(root='./temp', split="train",
                                      download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)
tf = transforms.ToPILImage()

def mixup(images, alpha=1.0):
    # shuffle the batch to pick a mixing partner for each sample
    indices = torch.randperm(len(images))
    shuffled_images = images[indices]
    print(shuffled_images.size(), images.size())
    print("indexing : ", indices)
    # clipping with equal bounds pins lambda at 0.45 so the blend is
    # clearly visible; remove the clip to use the raw Beta(alpha, alpha) sample
    lam = np.clip(np.random.beta(alpha, alpha), 0.45, 0.45)
    print(f"lambda : {lam}")
    # convex combination of the original and shuffled batches
    mixedup_images = lam * images + (1 - lam) * shuffled_images
    return mixedup_images, images, shuffled_images

images, labels = next(iter(trainloader))
mixed_images, order_images, shuffling = mixup(images)

tf(mixed_images[0]).show()   # mixed image
tf(order_images[0]).show()   # original image
tf(shuffling[0]).show()      # shuffled partner image
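The script above only mixes images for visualization. In actual training, the labels have to be mixed with the same lambda; the usual trick is to blend the two losses rather than the labels themselves. Here is a minimal sketch of that training-side piece (the function names `mixup_data` and `mixup_criterion` are my own, not from the post):

```python
import torch
import torch.nn as nn
import numpy as np

def mixup_data(x, y, alpha=1.0):
    # sample lambda from Beta(alpha, alpha); alpha=0 disables mixing
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    indices = torch.randperm(x.size(0))
    mixed_x = lam * x + (1 - lam) * x[indices]
    # return both label sets so the loss can be blended with the same lambda
    return mixed_x, y, y[indices], lam

def mixup_criterion(criterion, pred, y_a, y_b, lam):
    # blend the two losses with the lambda used for the inputs
    return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)
```

A training step would then look like `mixed_x, y_a, y_b, lam = mixup_data(images, labels)`, forward `mixed_x` through the model, and compute `mixup_criterion(nn.CrossEntropyLoss(), pred, y_a, y_b, lam)`.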
Result
'Data Science > CODE' 카테고리의 다른 글
[CODE] Gradient Clipping이란? (0) | 2023.03.21 |
---|---|
[CODE] Gradient Accumulate이란? (0) | 2023.03.20 |
[CODE] Multi-GPU 활용하기 (0) | 2023.03.19 |
[CODE] Masked AutoEncoder CODE로 살펴보기 (0) | 2023.03.16 |
[CODE] How to making useful logger? (0) | 2023.03.15 |