Featured

Classifier-Free Diffusion Guidance (NeurIPS 2021)

“Classifier-Free Diffusion Guidance” is a paper that improves conditional generation with Diffusion Models. Before it was published, the standard way to steer a model toward generating images of a particular class was ‘Classifier Guidance’, which relies on a separate image classifier. This approach was effective, but in addition to the Diffusion Model, noisy images...
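
For quick reference, the paper's core sampling rule mixes the conditional and unconditional noise predictions with a guidance weight $w$ (notation lightly simplified from the paper's):

$$ \tilde{\epsilon}_\theta(x_t, c) = (1 + w)\,\epsilon_\theta(x_t, c) - w\,\epsilon_\theta(x_t) $$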

Denoising Diffusion Probabilistic Models (NeurIPS 2020)

Published in 2020, “Denoising Diffusion Probabilistic Models” (DDPM) is a paper that marked an important turning point in generative model research. At the time, Generative Adversarial Networks (GANs) were widely used for high-quality image generation thanks to their strong performance, despite problems such as training instability and mode collapse. Against this backdrop, DDPM showed that Diffusion...

Regularization: A method to avoid overfitting

Explains the principles and characteristics of L1 and L2 regularization, their connection to Maximum a Posteriori (MAP) estimation, and how to apply regularization to prevent overfitting in deep learning models.
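
For quick reference, the two penalties discussed take the standard form below, where $L(w)$ is the unregularized loss and $\lambda$ controls the strength (under MAP estimation, L2 corresponds to a Gaussian prior on the weights and L1 to a Laplace prior):

$$ L_{\text{L1}}(w) = L(w) + \lambda \lVert w \rVert_1, \qquad L_{\text{L2}}(w) = L(w) + \lambda \lVert w \rVert_2^2 $$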

Understanding the T-Test: A simple guide to statistical hypothesis testing

A statistical method used to evaluate whether differences between two sample means are significant or due to chance.
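
As a reminder of the statistic involved, the two-sample t-test compares means via the standard form below (Welch's version, which does not assume equal variances):

$$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2 / n_1 + s_2^2 / n_2}} $$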

Cross-Entropy and KL Divergence: From Probability Distributions

A detailed exploration of Cross-Entropy and KL Divergence, deriving their formulas step-by-step from the principles of probability and information theory.
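
For reference, the two quantities the post derives, for a true distribution $p$ and a model distribution $q$, are:

$$ H(p, q) = -\sum_x p(x) \log q(x), \qquad D_{\mathrm{KL}}(p \,\|\, q) = \sum_x p(x) \log \frac{p(x)}{q(x)} $$

They are linked by the identity $H(p, q) = H(p) + D_{\mathrm{KL}}(p \,\|\, q)$, which is why minimizing cross-entropy with a fixed $p$ also minimizes the KL divergence.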

Teleporting vector spaces using linear transformations

Explore the concept of linear transformations from basics to matrix representation and inverse transformations, providing an intuitive understanding of vector space mappings.
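
The central facts the post builds on can be stated compactly: a linear transformation $T$ preserves addition and scaling, is fully described by a matrix $A$, and is undone by $A^{-1}$ when one exists:

$$ T(a\mathbf{u} + b\mathbf{v}) = a\,T(\mathbf{u}) + b\,T(\mathbf{v}), \qquad T(\mathbf{v}) = A\mathbf{v}, \qquad T^{-1}(\mathbf{w}) = A^{-1}\mathbf{w} \;\; (\det A \neq 0) $$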

Expanding ANOVA: Two-Way and Multi-Way Analyses

Explore the concepts of two-way and multi-way ANOVA, delving into interactions between factors. Conclude with a comprehensive summary of ANOVA techniques.
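
For reference, the two-way model with interaction is conventionally written as:

$$ y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk} $$

where $\alpha_i$ and $\beta_j$ are the main effects of the two factors and $(\alpha\beta)_{ij}$ captures their interaction.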

Understanding ANOVA: Introduction and One-Way Analysis

Discover the fundamentals of Analysis of Variance (ANOVA) and dive into one-way ANOVA to analyze differences between group means.
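
The key quantity in one-way ANOVA is the F statistic, the ratio of between-group to within-group variance for $k$ groups and $N$ total observations:

$$ F = \frac{SS_{\text{between}} / (k - 1)}{SS_{\text{within}} / (N - k)} $$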
