Abstract: Image colorization, the task of adding colour to a grayscale image, is one of the classical topics of computer vision. Due to the ill-posed nature of the problem, image colorization techniques currently rely heavily on human intuition. The traditional scribble-based approach requires the user to provide local colour information for the image, which is a highly labour-intensive task. More recently, example-based approaches using neural networks, which do not rely on user guidance, have become increasingly popular. Such approaches learn colour information from a large number of images and transfer the colour composition from a full-colour reference image to the grayscale image. In practice, however, it is unrealistic to expect a large number of coloured images that are similar to the target grayscale image. In this talk, I will describe what a Deep Convolutional Generative Adversarial Network (DCGAN) is and how we can use it to build a system that achieves one-to-one image colorization, i.e. using only one source image to colour the target grayscale image.
Details: Open to all.
Contact: Michael Kampouridis