SentenceTransformer based on CocoRoF/ModernBERT-SimCSE_v02

This is a sentence-transformers model finetuned from CocoRoF/ModernBERT-SimCSE_v02. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: CocoRoF/ModernBERT-SimCSE_v02
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~184M parameters (float32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
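
For illustration, the same three-module pipeline could be assembled by hand with the sentence_transformers.models API. This is only a sketch; the published checkpoint already contains these modules and their trained weights, so in practice you load it by name as shown under Usage below.

from torch import nn
from sentence_transformers import SentenceTransformer, models

# Token encoder: ModernBERT backbone, truncating inputs at 512 tokens
word_embedding_model = models.Transformer("CocoRoF/ModernBERT-SimCSE_v02", max_seq_length=512)

# Mean pooling over the token embeddings (768-dimensional)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

# Final 768 -> 768 projection with Tanh activation
dense_model = models.Dense(
    in_features=768, out_features=768, bias=True, activation_function=nn.Tanh()
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, dense_model])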

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("CocoRoF/ModernBERT-SimCSE-multitask_v03")
# Run inference
sentences = [
    '버스가 바쁜 길을 따라 운전한다.',    # "A bus drives along a busy road."
    '녹색 버스가 도로를 따라 내려간다.',  # "A green bus goes down the road."
    '그 여자는 데이트하러 가는 중이다.',  # "The woman is on her way to a date."
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
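
Beyond pairwise similarity, the same embeddings support simple semantic search and paraphrase mining. A minimal sketch, reusing the sentences above as a toy corpus; the query string is invented for illustration:

import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("CocoRoF/ModernBERT-SimCSE-multitask_v03")

corpus = [
    '버스가 바쁜 길을 따라 운전한다.',    # "A bus drives along a busy road."
    '녹색 버스가 도로를 따라 내려간다.',  # "A green bus goes down the road."
    '그 여자는 데이트하러 가는 중이다.',  # "The woman is on her way to a date."
]
query = '버스가 도로를 달리고 있다.'      # hypothetical query: "A bus is running on the road."

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Cosine similarity between the query and every corpus sentence
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
best = int(torch.argmax(scores))
print(corpus[best], float(scores[0, best]))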

Evaluation

Metrics

Semantic Similarity

Metric              Value
pearson_cosine      0.8224
spearman_cosine     0.8220
pearson_euclidean   0.7786
spearman_euclidean  0.7816
pearson_manhattan   0.7809
spearman_manhattan  0.7847
pearson_dot         0.7544
spearman_dot        0.7435
pearson_max         0.8224
spearman_max        0.8220
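
These scores are the kind produced by sentence_transformers' EmbeddingSimilarityEvaluator (the sts_dev_spearman_max column in the training logs below). A minimal sketch of recomputing such metrics, using the three evaluation samples listed further down as a tiny stand-in for the full dev split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("CocoRoF/ModernBERT-SimCSE-multitask_v03")

# Tiny stand-in for the real dev split: sentence pairs with gold scores in [0, 1]
sentences1 = [
    "안전모를 가진 한 남자가 춤을 추고 있다.",  # "A man with a hard hat is dancing."
    "어린아이가 말을 타고 있다.",              # "A young child is riding a horse."
    "한 남자가 뱀에게 쥐를 먹이고 있다.",      # "A man is feeding a mouse to a snake."
]
sentences2 = [
    "안전모를 쓴 한 남자가 춤을 추고 있다.",
    "아이가 말을 타고 있다.",
    "남자가 뱀에게 쥐를 먹이고 있다.",
]
gold_scores = [1.0, 0.95, 1.0]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="sts_dev")
results = evaluator(model)  # dict of Pearson/Spearman metrics per similarity function
print(results["sts_dev_spearman_cosine"])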

Training Details

Training Dataset

Unnamed Dataset

  • Size: 5,749 training samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, 7 to 36 tokens, mean 13.52 tokens
    • sentence2: string, 7 to 32 tokens, mean 13.41 tokens
    • score: float, 0.0 to 1.0, mean 0.45
  • Samples (sentence1 | sentence2 | score):
    • 비행기가 이륙하고 있다. | 비행기가 이륙하고 있다. | 1.0
      ("A plane is taking off." | "A plane is taking off.")
    • 한 남자가 큰 플루트를 연주하고 있다. | 남자가 플루트를 연주하고 있다. | 0.76
      ("A man is playing a large flute." | "A man is playing a flute.")
    • 한 남자가 피자에 치즈를 뿌려놓고 있다. | 한 남자가 구운 피자에 치즈 조각을 뿌려놓고 있다. | 0.76
      ("A man is sprinkling cheese on a pizza." | "A man is sprinkling pieces of cheese on a baked pizza.")
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
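
In sentence-transformers 3.x, this setup corresponds to a datasets.Dataset with the three columns above plus a CosineSimilarityLoss wrapping the model. A minimal sketch using two of the sample pairs (the real run used all 5,749 pairs):

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("CocoRoF/ModernBERT-SimCSE_v02")

# Hypothetical reconstruction of the training columns shown above
train_dataset = Dataset.from_dict({
    "sentence1": ["비행기가 이륙하고 있다.", "한 남자가 큰 플루트를 연주하고 있다."],
    "sentence2": ["비행기가 이륙하고 있다.", "남자가 플루트를 연주하고 있다."],
    "score": [1.0, 0.76],
})

# CosineSimilarityLoss embeds both sentences, takes their cosine similarity,
# and regresses it onto the gold score with MSE (the loss_fct above)
loss = CosineSimilarityLoss(model)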
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,500 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, 7 to 52 tokens, mean 20.38 tokens
    • sentence2: string, 6 to 54 tokens, mean 20.52 tokens
    • score: float, 0.0 to 1.0, mean 0.42
  • Samples (sentence1 | sentence2 | score):
    • 안전모를 가진 한 남자가 춤을 추고 있다. | 안전모를 쓴 한 남자가 춤을 추고 있다. | 1.0
      ("A man with a hard hat is dancing." | "A man wearing a hard hat is dancing.")
    • 어린아이가 말을 타고 있다. | 아이가 말을 타고 있다. | 0.95
      ("A young child is riding a horse." | "A child is riding a horse.")
    • 한 남자가 뱀에게 쥐를 먹이고 있다. | 남자가 뱀에게 쥐를 먹이고 있다. | 1.0
      ("A man is feeding a mouse to a snake." | "The man is feeding a mouse to a snake.")
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • overwrite_output_dir: True
  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 8
  • learning_rate: 1e-05
  • num_train_epochs: 10.0
  • warmup_ratio: 0.1
  • push_to_hub: True
  • hub_model_id: CocoRoF/ModernBERT-SimCSE-multitask_v03
  • hub_strategy: checkpoint
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: True
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10.0
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: CocoRoF/ModernBERT-SimCSE-multitask_v03
  • hub_strategy: checkpoint
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
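
Taken together, the non-default hyperparameters correspond roughly to the training sketch below. The output_dir and toy datasets are placeholders; the actual run used the 5,749-pair training split and 1,500-pair evaluation split described earlier, and also pushed checkpoints to the Hub.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("CocoRoF/ModernBERT-SimCSE_v02")
loss = CosineSimilarityLoss(model)

# Placeholder datasets; substitute the real sentence1/sentence2/score splits
train_dataset = Dataset.from_dict({
    "sentence1": ["비행기가 이륙하고 있다."],
    "sentence2": ["비행기가 이륙하고 있다."],
    "score": [1.0],
})
eval_dataset = Dataset.from_dict({
    "sentence1": ["어린아이가 말을 타고 있다."],
    "sentence2": ["아이가 말을 타고 있다."],
    "score": [0.95],
})

args = SentenceTransformerTrainingArguments(
    output_dir="ModernBERT-SimCSE-multitask_v03",  # placeholder path
    overwrite_output_dir=True,
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()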

Training Logs

Epoch Step Training Loss Validation Loss sts_dev_spearman_max
0.2228 10 0.0283 - -
0.4457 20 0.0344 - -
0.6685 30 0.0305 0.0310 0.7939
0.8914 40 0.0489 - -
1.1337 50 0.0382 - -
1.3565 60 0.0271 0.0293 0.7994
1.5794 70 0.0344 - -
1.8022 80 0.0382 - -
2.0446 90 0.0419 0.0280 0.8059
2.2674 100 0.0244 - -
2.4903 110 0.0307 - -
2.7131 120 0.0291 0.0269 0.8108
2.9359 130 0.038 - -
3.1783 140 0.0269 - -
3.4011 150 0.0268 0.0262 0.8155
3.6240 160 0.0246 - -
3.8468 170 0.0313 - -
4.0891 180 0.0303 0.0259 0.8185
4.3120 190 0.0198 - -
4.5348 200 0.0257 - -
4.7577 210 0.0242 0.0255 0.8202
4.9805 220 0.0293 - -
5.2228 230 0.0193 - -
5.4457 240 0.0222 0.0254 0.8222
5.6685 250 0.0184 - -
5.8914 260 0.0243 - -
6.1337 270 0.0204 0.0254 0.8235
6.3565 280 0.0147 - -
6.5794 290 0.0196 - -
6.8022 300 0.0176 0.0253 0.8227
7.0446 310 0.0202 - -
7.2674 320 0.0123 - -
7.4903 330 0.0151 0.0254 0.8236
7.7131 340 0.0132 - -
7.9359 350 0.0158 - -
8.1783 360 0.0118 0.0256 0.8240
8.4011 370 0.0115 - -
8.6240 380 0.0105 - -
8.8468 390 0.0111 0.0256 0.8215
9.0891 400 0.011 - -
9.3120 410 0.0076 - -
9.5348 420 0.0091 0.0256 0.8220
9.7577 430 0.0075 - -
9.9805 440 0.0093 - -

Framework Versions

  • Python: 3.11.10
  • Sentence Transformers: 3.3.1
  • Transformers: 4.48.0.dev0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.1.0
  • Datasets: 3.1.0
  • Tokenizers: 0.21.0
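
To approximate this environment (4.48.0.dev0 was a development build of Transformers; ModernBERT support is included in the released 4.48.0 and later, which should work as well):

pip install -U "sentence-transformers==3.3.1" "transformers>=4.48.0" "torch==2.5.1" "accelerate==1.1.0" "datasets==3.1.0" "tokenizers==0.21.0"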

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}