SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2

This is a sentence-transformers model fine-tuned from sentence-transformers/paraphrase-multilingual-mpnet-base-v2 on the allstats-search-pairs-dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: allstats-search-pairs-dataset

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
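
The Pooling module above means a sentence embedding is the attention-mask-aware mean of the transformer's token embeddings. For illustration, here is a minimal sketch of the equivalent computation done directly with the transformers library (this assumes the Hub repo exposes the underlying XLMRobertaModel weights, as Sentence Transformers checkpoints normally do):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yahyaabd/allstats-v1-1")
encoder = AutoModel.from_pretrained("yahyaabd/allstats-v1-1")

batch = tokenizer(
    ["Statistik Upah 2013"],
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # [1, seq_len, 768]

# Mean pooling: average the token embeddings, ignoring padding positions
mask = batch["attention_mask"].unsqueeze(-1).float()       # [1, seq_len, 1]
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])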

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-v1-1")
# Run inference
sentences = [
    'Biaya hidup kelompok perumahan Indonesia 2017',
    'Statistik Upah 2013',
    'Survei Biaya Hidup (SBH) 2018 Bulukumba, Watampone, Makassar, Pare-Pare, dan Palopo',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
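
The same embeddings support semantic search directly: encode a query and a set of candidate documents, then rank the documents by cosine similarity. A minimal sketch (the query and document strings are illustrative, reusing titles from this card):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yahyaabd/allstats-v1-1")

query_embedding = model.encode(["Survei biaya hidup 2018"])
doc_embeddings = model.encode([
    "Survei Biaya Hidup (SBH) 2018 Bulukumba, Watampone, Makassar, Pare-Pare, dan Palopo",
    "Statistik Upah 2013",
    "Indikator Ekonomi Desember 2023",
])

# Cosine similarities between the query and every document: shape [1, 3]
scores = model.similarity(query_embedding, doc_embeddings)

# Document indices ranked from most to least similar to the query
ranking = scores[0].argsort(descending=True)
print(ranking)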

Evaluation

Metrics

Semantic Similarity

| Metric          | allstats-semantic-mpnet-eval | allstats-semantic-mpnet-test |
|:----------------|:-----------------------------|:-----------------------------|
| pearson_cosine  | 0.9833                       | 0.9833                       |
| spearman_cosine | 0.8515                       | 0.8521                       |
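
Metrics with these names (pearson_cosine, spearman_cosine) are what sentence-transformers' EmbeddingSimilarityEvaluator reports: the Pearson and Spearman correlations between the model's cosine similarities and the gold labels. A sketch of running such an evaluation, assuming the dataset path yahyaabd/allstats-search-pairs-dataset and a validation split:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("yahyaabd/allstats-v1-1")

# Assumed dataset path and split; columns are query, doc, and label
pairs = load_dataset("yahyaabd/allstats-search-pairs-dataset", split="validation")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=pairs["query"],
    sentences2=pairs["doc"],
    scores=pairs["label"],
    main_similarity=SimilarityFunction.COSINE,
    name="allstats-semantic-mpnet-eval",
)
print(evaluator(model))  # includes pearson_cosine and spearman_cosine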

Training Details

Training Dataset

allstats-search-pairs-dataset

  • Dataset: allstats-search-pairs-dataset at 6712cb1
  • Size: 79,621 training samples
  • Columns: query, doc, and label
  • Approximate statistics based on the first 1000 samples:
    |         | query                                             | doc                                               | label                           |
    |:--------|:--------------------------------------------------|:--------------------------------------------------|:--------------------------------|
    | type    | string                                            | string                                            | float                           |
    | details | min: 5 tokens, mean: 10.78 tokens, max: 39 tokens | min: 5 tokens, mean: 13.73 tokens, max: 58 tokens | min: 0.0, mean: 0.44, max: 0.99 |
  • Samples:
    | query                                                                                            | doc                                                            | label |
    |:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|:------|
    | Produksi jagung di Indonesia tahun 2009                                                          | Indeks Unit Value Ekspor Menurut Kode SITC Bulan Februari 2024 | 0.1   |
    | Data produksi industri manufaktur 2021                                                           | Perkembangan Indeks Produksi Industri Manufaktur 2021          | 0.96  |
    | direktori perusahaan industri penggilingan padi tahun 2012 provinsi sulawesi utara dan gorontalo | Neraca Pemerintahan Umum Indonesia 2007-2012                   | 0.03  |
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
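
Putting the dataset and loss together, fine-tuning of this kind can be sketched with the SentenceTransformerTrainer API. The dataset path below is an assumption, and the trainer is shown with default arguments rather than the exact hyperparameters listed further down:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# Assumed dataset path; columns are (query, doc, label) with float labels in [0, 1]
train_ds = load_dataset("yahyaabd/allstats-search-pairs-dataset", split="train")

# CosineSimilarityLoss fits cosine(query, doc) to the label via MSE
loss = CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_ds, loss=loss)
trainer.train()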
    

Evaluation Dataset

allstats-search-pairs-dataset

  • Dataset: allstats-search-pairs-dataset at 6712cb1
  • Size: 9,952 evaluation samples
  • Columns: query, doc, and label
  • Approximate statistics based on the first 1000 samples:
    |         | query                                             | doc                                               | label                            |
    |:--------|:--------------------------------------------------|:--------------------------------------------------|:---------------------------------|
    | type    | string                                            | string                                            | float                            |
    | details | min: 5 tokens, mean: 10.75 tokens, max: 40 tokens | min: 4 tokens, mean: 14.09 tokens, max: 49 tokens | min: 0.01, mean: 0.48, max: 0.99 |
  • Samples:
    | query                                                  | doc                                                 | label |
    |:--------------------------------------------------------|:-----------------------------------------------------|:------|
    | Daftar perusahaan industri pengolahan skala kecil 2006 | Statistik Migrasi Nusa Tenggara Barat Hasil SP 2010 | 0.05  |
    | Populasi Indonesia per provinsi 2000-2010              | Indikator Ekonomi Desember 2023                     | 0.08  |
    | Data harga barang desa non-pangan tahun 2022           | Statistik Kunjungan Tamu Asing 2004                 | 0.1   |
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 12
  • warmup_ratio: 0.1
  • fp16: True
  • dataloader_num_workers: 4
  • load_best_model_at_end: True
  • label_smoothing_factor: 0.01
  • eval_on_start: True
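
These values map one-to-one onto SentenceTransformerTrainingArguments fields; a sketch that reproduces them (output_dir is illustrative, not taken from this card):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="allstats-v1-1",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=12,
    warmup_ratio=0.1,
    fp16=True,
    dataloader_num_workers=4,
    load_best_model_at_end=True,
    label_smoothing_factor=0.01,
    eval_on_start=True,
)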

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 12
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 4
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.01
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: True
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch       | Step      | Training Loss | Validation Loss | allstats-semantic-mpnet-eval_spearman_cosine | allstats-semantic-mpnet-test_spearman_cosine |
|:-----------:|:---------:|:-------------:|:---------------:|:--------------------------------------------:|:--------------------------------------------:|
| 0           | 0         | -             | 0.0958          | 0.6404                                       | -                                            |
| 0.2008      | 250       | 0.0464        | 0.0246          | 0.7693                                       | -                                            |
| 0.4016      | 500       | 0.0218        | 0.0179          | 0.7720                                       | -                                            |
| 0.6024      | 750       | 0.0172        | 0.0153          | 0.7790                                       | -                                            |
| 0.8032      | 1000      | 0.0156        | 0.0136          | 0.7809                                       | -                                            |
| 1.0040      | 1250      | 0.0137        | 0.0139          | 0.7769                                       | -                                            |
| 1.2048      | 1500      | 0.0112        | 0.0120          | 0.7825                                       | -                                            |
| 1.4056      | 1750      | 0.0104        | 0.0112          | 0.7869                                       | -                                            |
| 1.6064      | 2000      | 0.01          | 0.0103          | 0.7893                                       | -                                            |
| 1.8072      | 2250      | 0.009         | 0.0097          | 0.7944                                       | -                                            |
| 2.0080      | 2500      | 0.0088        | 0.0097          | 0.7947                                       | -                                            |
| 2.2088      | 2750      | 0.0064        | 0.0086          | 0.7971                                       | -                                            |
| 2.4096      | 3000      | 0.006         | 0.0085          | 0.7991                                       | -                                            |
| 2.6104      | 3250      | 0.006         | 0.0084          | 0.7995                                       | -                                            |
| 2.8112      | 3500      | 0.006         | 0.0081          | 0.8047                                       | -                                            |
| 3.0120      | 3750      | 0.0058        | 0.0082          | 0.8055                                       | -                                            |
| 3.2129      | 4000      | 0.0041        | 0.0077          | 0.8096                                       | -                                            |
| 3.4137      | 4250      | 0.0042        | 0.0078          | 0.8092                                       | -                                            |
| 3.6145      | 4500      | 0.004         | 0.0074          | 0.8107                                       | -                                            |
| 3.8153      | 4750      | 0.0043        | 0.0073          | 0.8132                                       | -                                            |
| 4.0161      | 5000      | 0.0044        | 0.0076          | 0.8090                                       | -                                            |
| 4.2169      | 5250      | 0.0032        | 0.0071          | 0.8173                                       | -                                            |
| 4.4177      | 5500      | 0.0031        | 0.0068          | 0.8218                                       | -                                            |
| 4.6185      | 5750      | 0.0031        | 0.0067          | 0.8200                                       | -                                            |
| 4.8193      | 6000      | 0.0032        | 0.0065          | 0.8233                                       | -                                            |
| 5.0201      | 6250      | 0.0029        | 0.0067          | 0.8227                                       | -                                            |
| 5.2209      | 6500      | 0.0024        | 0.0064          | 0.8249                                       | -                                            |
| 5.4217      | 6750      | 0.0023        | 0.0066          | 0.8298                                       | -                                            |
| 5.6225      | 7000      | 0.0025        | 0.0063          | 0.8271                                       | -                                            |
| 5.8233      | 7250      | 0.0024        | 0.0064          | 0.8299                                       | -                                            |
| 6.0241      | 7500      | 0.0023        | 0.0064          | 0.8312                                       | -                                            |
| 6.2249      | 7750      | 0.0017        | 0.0061          | 0.8319                                       | -                                            |
| 6.4257      | 8000      | 0.0017        | 0.0059          | 0.8330                                       | -                                            |
| 6.6265      | 8250      | 0.0019        | 0.0064          | 0.8309                                       | -                                            |
| 6.8273      | 8500      | 0.002         | 0.0061          | 0.8332                                       | -                                            |
| 7.0281      | 8750      | 0.0018        | 0.0061          | 0.8360                                       | -                                            |
| 7.2289      | 9000      | 0.0014        | 0.0060          | 0.8387                                       | -                                            |
| 7.4297      | 9250      | 0.0014        | 0.0059          | 0.8396                                       | -                                            |
| 7.6305      | 9500      | 0.0014        | 0.0059          | 0.8402                                       | -                                            |
| 7.8313      | 9750      | 0.0014        | 0.0059          | 0.8388                                       | -                                            |
| 8.0321      | 10000     | 0.0014        | 0.0058          | 0.8411                                       | -                                            |
| 8.2329      | 10250     | 0.0011        | 0.0059          | 0.8420                                       | -                                            |
| 8.4337      | 10500     | 0.0011        | 0.0057          | 0.8431                                       | -                                            |
| 8.6345      | 10750     | 0.0011        | 0.0057          | 0.8418                                       | -                                            |
| 8.8353      | 11000     | 0.0011        | 0.0057          | 0.8440                                       | -                                            |
| 9.0361      | 11250     | 0.0011        | 0.0057          | 0.8449                                       | -                                            |
| 9.2369      | 11500     | 0.0008        | 0.0056          | 0.8451                                       | -                                            |
| 9.4378      | 11750     | 0.0009        | 0.0057          | 0.8456                                       | -                                            |
| 9.6386      | 12000     | 0.0009        | 0.0056          | 0.8469                                       | -                                            |
| 9.8394      | 12250     | 0.0009        | 0.0056          | 0.8470                                       | -                                            |
| 10.0402     | 12500     | 0.0009        | 0.0056          | 0.8475                                       | -                                            |
| 10.2410     | 12750     | 0.0007        | 0.0056          | 0.8489                                       | -                                            |
| 10.4418     | 13000     | 0.0007        | 0.0056          | 0.8495                                       | -                                            |
| 10.6426     | 13250     | 0.0007        | 0.0056          | 0.8501                                       | -                                            |
| 10.8434     | 13500     | 0.0007        | 0.0056          | 0.8497                                       | -                                            |
| 11.0442     | 13750     | 0.0006        | 0.0056          | 0.8500                                       | -                                            |
| 11.2450     | 14000     | 0.0006        | 0.0055          | 0.8506                                       | -                                            |
| 11.4458     | 14250     | 0.0006        | 0.0055          | 0.8507                                       | -                                            |
| 11.6466     | 14500     | 0.0006        | 0.0055          | 0.8512                                       | -                                            |
| **11.8474** | **14750** | **0.0006**    | **0.0055**      | **0.8515**                                   | **-**                                        |
| 12.0        | 14940     | -             | -               | -                                            | 0.8521                                       |
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}