
# OpenCLIP ViT-g-14 Model

This is an OpenCLIP model using the ViT-g-14 architecture, pretrained on the LAION-2B dataset.

## Usage

Run inference locally using the following example:

```python
import open_clip
import torch
from PIL import Image

# Load the model and preprocessing pipeline from the Hugging Face Hub.
# create_model_and_transforms returns (model, train_preprocess, eval_preprocess);
# weights are pulled from the hub repo automatically for "hf-hub:" names.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:NikkiZed/openclip-vit-g-14"
)
model.eval()

# Load and preprocess an image
image = Image.open("path_to_image.jpg").convert("RGB")
input_tensor = preprocess(image).unsqueeze(0)

# Generate image embeddings
with torch.no_grad():
    features = model.encode_image(input_tensor)

print("Image features:", features)
```