As a significant form of art, fine art painting
is becoming a research hotspot in the machine learning community.
With unique aesthetic value, paintings have representations
quite different from those of natural images, making them irreplaceable.
Meanwhile, a lack of training data is common in painting-related
machine learning tasks. Therefore, the synthesis of fine
art paintings is meaningful and challenging work. There are two
main types of generative models for image synthesis: generative
adversarial networks (GANs) and likelihood-based models.
GAN-based models can obtain high-quality samples but usually
sacrifice diversity and training stability. Diffusion models are a
class of likelihood-based models and have recently been shown
to achieve state-of-the-art quality on image synthesis tasks.
In this paper, we explore generating fine art paintings by using
diffusion models. We carried out experiments on a subset of the
Impressionist paintings from the WikiArt dataset. The results
demonstrate that the diffusion model can generate high-quality
samples, is easier to train, and covers more of the target distribution
than GAN-based methods.
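The closed-form forward (noising) process that diffusion models rely on can be sketched as follows. This is a minimal, hypothetical illustration of q(x_t | x_0) with a linear beta schedule; the variable names, schedule, and step count are illustrative assumptions, not the exact configuration used in this paper.

```python
import numpy as np

T = 1000                                    # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)             # cumulative products, one per step t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I) in closed form."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 8, 8))         # a toy "image" standing in for a painting
x_noisy = q_sample(x0, t=T - 1, rng=rng)    # at the final step, nearly pure Gaussian noise
```

Training a diffusion model then amounts to learning to reverse this process, which is why likelihood-based training tends to cover the target distribution more fully than adversarial objectives.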