

Exploring Diffusion Time-steps for Unsupervised Representation Learning

Zhongqi Yue, Jiankun Wang, Qianru Sun, Lei Ji, Eric I-Chao Chang, Hanwang Zhang

ICLR 2024 Conference

May 2024

Keywords: unsupervised representation learning, diffusion model, representation disentanglement, counterfactual generation

Abstract:

Representation learning is all about discovering the hidden modular attributes that generate the data faithfully. We explore the potential of Denoising Diffusion Probabilistic Model (DM) in unsupervised learning of the modular attributes. We build a theoretical framework that connects the diffusion time-steps and the hidden attributes, which serves as an effective inductive bias for unsupervised learning. Specifically, the forward diffusion process incrementally adds Gaussian noise to samples at each time-step, which essentially collapses different samples into similar ones by losing attributes, e.g., fine-grained attributes such as texture are lost with less noise added (i.e., early time-steps), while coarse-grained ones such as shape are lost by adding more noise (i.e., late time-steps). To disentangle the modular attributes, at each time-step t, we learn a t-specific feature to compensate for the newly lost attribute, and the set of all {1,...,t}-specific features, corresponding to the cumulative set of lost attributes, is trained to make up for the reconstruction error of a pre-trained DM at time-step t. On CelebA, FFHQ, and Bedroom datasets, the learned feature significantly improves attribute classification and enables faithful counterfactual generation, e.g., interpolating only one specified attribute between two images, validating the disentanglement quality.
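To make the training objective described above more concrete, below is a minimal sketch (not the authors' code) of one reading of the abstract: an encoder produces a t-specific feature for every time-step, and the cumulative features z_{1..t} are trained to compensate for the reconstruction error of a frozen pre-trained DM at time-step t. All names here are assumptions for illustration, including the noise predictor `pretrained_eps(x_t, t)`, the linear `decoder`, the flattened image inputs, and `alphas_cumprod` from the DM's noise schedule.

```python
# Minimal sketch of the abstract's idea, not the authors' implementation.
# Assumptions (hypothetical names): a frozen pre-trained DDPM noise predictor
# `pretrained_eps(x_t, t)`, flattened images x0 of shape (B, D), a linear
# `decoder`, and `alphas_cumprod` of shape (T,) from the DM's noise schedule.
import torch
import torch.nn as nn

T = 1000          # number of diffusion time-steps (assumed)
FEATURE_DIM = 64  # size of each t-specific feature (assumed)


class TimestepEncoder(nn.Module):
    """Maps a clean sample x0 to a stack of T t-specific features z_1, ..., z_T."""

    def __init__(self, image_dim: int, feature_dim: int, num_steps: int):
        super().__init__()
        self.num_steps, self.feature_dim = num_steps, feature_dim
        self.backbone = nn.Sequential(
            nn.Linear(image_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_steps * feature_dim),
        )

    def forward(self, x0: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x0)                                   # (B, T * d)
        return z.view(-1, self.num_steps, self.feature_dim)     # (B, T, d)


def compensation_loss(x0, t, encoder, decoder, pretrained_eps, alphas_cumprod):
    """At time-step t, the cumulative features z_{1..t} are trained to make up
    for the reconstruction error of the frozen DM (one reading of the abstract)."""
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].unsqueeze(1)                       # (B, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise       # forward diffusion

    with torch.no_grad():                                        # the DM stays frozen
        eps_hat = pretrained_eps(x_t, t)
        x0_dm = (x_t - (1.0 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()

    z = encoder(x0)                                              # (B, T, d)
    steps = torch.arange(z.size(1), device=x0.device)
    keep = (steps[None, :] <= t[:, None]).float().unsqueeze(-1)  # keep only z_1..z_t
    correction = decoder((z * keep).flatten(1))                  # predict the DM's error
    return ((x0 - (x0_dm + correction)) ** 2).mean()


# Hypothetical usage with 64x64 RGB images flattened to D = 3*64*64:
# encoder = TimestepEncoder(3 * 64 * 64, FEATURE_DIM, T)
# decoder = nn.Linear(T * FEATURE_DIM, 3 * 64 * 64)
```

Masking the features to z_{1..t} mirrors the cumulative set of attributes lost by time-step t, so later time-steps may draw on more features than earlier ones; this is a sketch under the stated assumptions, not the paper's architecture.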

