ioankont/Fast-Style-Transfer-ImageTransformationNet

Style-Transfer-with-ImageTransformationNet in Tensorflow 2

Implementation of Perceptual Losses for Real-Time Style Transfer and Super-Resolution (Justin Johnson, Alexandre Alahi, Li Fei-Fei, 2016)

Combine any content image with a specific style in one quick forward pass through the trained network.

We trained a feed-forward convolutional network separately for each style image; here are the results.
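The key property above is that, once trained for a style, stylizing a new image is a single forward pass with no per-image optimization. A minimal sketch of that inference step in TensorFlow 2 (the one-layer `transform_net` here is a stand-in for the trained Image Transformation Network, not this repository's actual model):

```python
import tensorflow as tf

# Stand-in for a trained transformation network (assumed architecture,
# not this repo's actual API): any image-to-image convolutional net works.
transform_net = tf.keras.Sequential([
    tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
])

# A batch with one 256x256 RGB content image (random here for illustration).
content = tf.random.uniform((1, 256, 256, 3))

# Stylization is one quick forward pass -- no optimization loop per image.
stylized = transform_net(content)  # shape (1, 256, 256, 3)
```

Because `padding="same"` is used throughout, the stylized output keeps the spatial size of the input image.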


Description

For training we used 60000 images from the COCO dataset for 2 epochs. Each input image passes through the Image Transformation Network. The output (the generated image) is fed into VGG19, from which we extract feature representations. VGG19 also receives the style image on which we want to train our network, as well as the content images that were input to the Image Transformation Network. We compute the content and style losses as before, but now we update the weights of the Image Transformation Network instead of optimizing a white-noise image as in the earlier, per-image approach.
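The losses mentioned above follow the usual perceptual-loss recipe: content loss compares feature maps directly, while style loss compares Gram matrices of feature maps. A small NumPy sketch of those two computations (the normalization constants and function names are illustrative assumptions, not this repository's exact code):

```python
import numpy as np

def gram_matrix(features):
    # features: one (H, W, C) feature map extracted from a VGG19 layer.
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    # (C, C) matrix of channel-wise correlations, normalized by map size.
    return f.T @ f / (h * w * c)

def content_loss(gen_feat, content_feat):
    # Mean squared difference between feature maps of the generated
    # image and the content image.
    return np.mean((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    # Squared Frobenius distance between Gram matrices of the generated
    # image and the style image.
    return np.sum((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)
```

During training, these losses are summed over the chosen VGG19 layers and backpropagated into the Image Transformation Network's weights, not into the image itself.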


With this algorithm, the network is trained so that it can apply the specific style image to any input image of our choice.
