How did you train your model?
Hi, awesome space! I was quite surprised by the results. If you're willing to share some knowledge or insight, I'm very curious how you trained your model. Which architecture did you use? What was the size of your dataset? Super interesting results.
Hi! I'm glad you liked it. I used the pre-trained "colorization_release_v2.caffemodel" for this application. The model is loaded from colorization_deploy_v2.prototxt, colorization_release_v2.caffemodel, and pts_in_hull.npy, and configured with specific layer blobs for accurate color prediction.

The uploaded video is first split into image frames. The app preprocesses each frame by converting it to the LAB color space, where the L (lightness) channel is extracted, resized to 224x224 pixels, and normalized before being fed into the model to predict the ab (color) channels. The predicted ab channels are resized back to the original frame dimensions and combined with the original L channel to reconstruct the LAB image, which is then converted back to BGR for the final colorized output. The colorized frames are reassembled into a video, and that's what you get back.

Since the app uses a pre-trained model, no training happens in the app itself; the colorization quality comes from the model's prior optimization on a large dataset of natural images. I'm in the process of training one of my own, based on the insights I gained from this one, but I've paused that work while I write my final exams. Thanks again for your support, and I hope you find this helpful :) Feel free to connect for discussions like this; I'll try to reply as soon as possible.
That was pretty insightful. I wasn't expecting a LAB pipeline; really interesting. I look forward to seeing how your trained model turns out. Good luck on your exams.
Thank you so much