Happy to share our recent work! We noticed that image resolution plays an important role, both in improving multi-modal large language model (MLLM) performance and in Sora-style any-resolution encoder-decoders. We hope this work can help lift the 224x224 resolution restriction in ViT.
SegGPT is a vision generalist for image segmentation, quite like GPT for computer vision ✨ It ships with the latest release of 🤗 transformers. Demo and more in this post!

SegGPT is an extension of Painter, where you speak to images with images: the model takes an image prompt, a transformed version of that prompt, and the actual image you want the same transform applied to, and it is expected to output the transformed image.

SegGPT consists of a vanilla ViT with a decoder on top (linear, conv, linear). The model is trained on diverse segmentation examples: the authors provide example image-mask pairs along with the actual input to be segmented, and the decoder head learns to reconstruct the mask output. This generalizes pretty well!

The authors do not claim state-of-the-art results, as the model is mainly used for zero-shot and few-shot inference. They also do prompt tuning, where they freeze the parameters of the model and only optimize the image tensor (the input context).

Thanks to 🤗 transformers, you can use this model easily! See the docs here: https://huggingface.co/docs/transformers/en/model_doc/seggpt and a minimal usage sketch at the end of this post.

I have built an app for you to try it out. I combined SegGPT with the Depth Anything model, so you don't have to upload image-mask prompts in your prompt pair 🤗 Try it here: merve/seggpt-depth-anything

Also check out the collection: merve/seggpt-660466a303bc3cd7559d271b
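For reference, here is a minimal sketch of zero-shot inference with the SegGPT classes in 🤗 transformers, following the in-context pattern described above (prompt image + prompt mask + input image). The checkpoint name `BAAI/seggpt-vit-large` and the image URLs are assumptions for illustration; swap in your own input image and image-mask prompt pair.

```python
import torch
import requests
from PIL import Image
from transformers import SegGptImageProcessor, SegGptForImageSegmentation

# Checkpoint name is an assumption; check the Hub for available SegGPT weights.
checkpoint = "BAAI/seggpt-vit-large"
image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
model = SegGptForImageSegmentation.from_pretrained(checkpoint)

def load(url):
    return Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Placeholder URLs; use your own images.
image_input = load("https://example.com/input.jpg")    # image you want segmented
prompt_image = load("https://example.com/prompt.jpg")  # example image
prompt_mask = load("https://example.com/mask.png")     # segmentation mask of the example

# The processor packs the prompt pair and the input into one model input.
inputs = image_processor(
    images=image_input,
    prompt_images=prompt_image,
    prompt_masks=prompt_mask,
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**inputs)

# Upsample the predicted mask back to the input image size (height, width).
target_sizes = [image_input.size[::-1]]
mask = image_processor.post_process_semantic_segmentation(outputs, target_sizes)[0]
print(mask.shape)  # (H, W) tensor of predicted segment ids
```

Because the mask prompt is just another image tensor, the same call works for few-shot inference too: pass lists of prompt images and masks instead of a single pair.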