SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
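
The modules above mean that inputs longer than 256 tokens are truncated, token embeddings are mean-pooled into a single 384-dimensional vector, and the result is L2-normalized, so dot product and cosine similarity coincide. A quick, illustrative check (assuming the model is installed and loaded as in the Usage section below; the example string is made up):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Nashhz/SBERT_KFOLD_Job_Descriptions_to_Skills")

print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384

# The Normalize() module makes every embedding unit length
emb = model.encode("Skills: Python, Data Entry, Excel")
print(emb.shape)              # (384,)
print(np.linalg.norm(emb))    # ~1.0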

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Nashhz/SBERT_KFOLD_Job_Descriptions_to_Skills")
# Run inference
sentences = [
    "Description: I'm seeking an expert in Google Sheets and data management to create a comprehensive tracking system for student progress. Details - Each student will have their own Google Sheet file. - Each file will contain 6 levels as separate sheets and a checkbox in each sheet for tracking progress. - When the checkbox is ticked, the data needs to be sent to a central database for us to know the student has completed a level and certificates need to be printed. The data to be sent to the database includes - Student's Name - Current Level - Package Details - Date and time Ideal skills for this project include - Advanced knowledge of Google Sheets - Experience with data management and database creation - Attention to detail to ensure accurate tracking of each student's progress. Each student's Google Sheet should include - Their Name - Their Current Level - Details about the Package they are on - A space to track their Progress Please only apply if you have relevant experience and can demonstrate your ability to deliver this project efficiently.",
    'Skills: PHP, Visual Basic, Data Processing, Data Entry, Excel',
    'Skills: Computer Security, Network Administration, Virtual Machines, Web Security, Linux',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
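
Because the model was trained on job-description-to-skills pairs, a typical workflow is ranking candidate skill lines against a job posting. A minimal sketch reusing the strings from the example above (the strings are illustrative; the ranking logic is the point):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Nashhz/SBERT_KFOLD_Job_Descriptions_to_Skills")

job = "Description: I'm seeking an expert in Google Sheets and data management to build a student progress tracking system."
skills = [
    "Skills: PHP, Visual Basic, Data Processing, Data Entry, Excel",
    "Skills: Computer Security, Network Administration, Virtual Machines, Web Security, Linux",
]

# Embed the job description and the candidate skill lines
job_emb = model.encode(job)
skill_embs = model.encode(skills)

# Cosine similarities with shape [1, 2]; higher means a closer match
scores = model.similarity(job_emb, skill_embs)[0]

for skill, score in sorted(zip(skills, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {skill}")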

Training Details

Training Dataset

Unnamed Dataset

  • Size: 8,117 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    sentence_0 (string): min 7 tokens, mean 139.48 tokens, max 256 tokens
    sentence_1 (string): min 4 tokens, mean 16.81 tokens, max 33 tokens
    label (float): min -0.07, mean 0.46, max 0.83
  • Samples:
    Sample 1
      sentence_0: Description: Looking for a Freelance Videographer & Post-Production Editor! We're hosting a charity event near Sandton, Johannesburg, in support of those affected by abuse. The event will run for about 1-2 hours and features a live band performance. Project Scope Event Recording Capture the entire live set approx. 30 minutes including breaks and speakers, totaling around 1 hour. Post-Production Create a dynamic video for social media, similar to a movie trailer, highlighting the live band and key moments from the event. Photography Take a few impactful photos during the event, including post-production edits. Interviews Film one-on-one segments with speakers for inclusion in the post-event video. Sound Design Incorporate music and sound effects, using creative content available online. Delivery Deadline Edited photos and videos need to be submitted by 14 October 2024. This project aims to capture the spirit of the event, supporting the Family Protection Association login to view URL and empowering women while raising funds and awareness for the cause. If you're interested and have a flair for storytelling through video, please reach out! Date 13 October 2024 Time 1300pm to 1400pm 2 hour set
      sentence_1: Skills: Video Editing, Video Production, Video Services, Videography, After Effects
      label: 0.4115103483200073
    Sample 2
      sentence_0: Description: Hi! I am Lradon from Andvids. We are a video production agency from China assisting our clients in finding content creators to produce unboxing videos. General requirements of the videos Video duration 1-3 minutes,without music Format Landscape screen 169 MP4 Content â' show your face to explain product features and demonstrate in English fluently. 30 of the time is used to explain product features and show product details, 70 of the time is used to demonstrate the use of the product and the use process in mutiple sences. â'Don't talking about price, personal privacy information, do not appear two-dimensional code, express bill, license plate, door plate, etc Clarity 1080p. Make sure the environment is clean and bright ,and the lens is stable and does not shake Upload to Amazon Sometimes we need you to upload the videos to Amazon If you are interested in this job,feel free to contact me and please send me an introduction video or anything you have shot login to view URL forward to receiving your login to view URL you!
      sentence_1: Skills: Video Editing, Video Production, Videography, Video Services, After Effects
      label: 0.4927669167518616
    Sample 3
      sentence_0: Description: I'm looking for an expert in electronic circuit board design to create and manufacture a simple electronic board for industrial marine machinery. The ideal candidate should have - Experience in designing circuit boards - Ability to design simple, yet effective electronic boards. - Skills in both design and manufacturing of circuit boards. This project is all about creating a reliable, efficient circuit board that can withstand the rigors of marine use. The board is very straight forward design which will be- dc power supply covering 12 volt or 24 volt dc- but range with charging should cover 08 volts to 32 volts dc- it will have an on off button- when selected to on it will engage a 12 vdc solenoid very small and will activate it for 3 minutes and then when stopped it will do this every 7 days on a timer for 3 minutes- when turned off it will not activate and when turned on again it will start again it will activate the solid for 3 minutes and then when finish it will start a 7 day time to repeat the 3 minute solenoid and will be on constant repeat
      sentence_1: Skills: Electronics, Electrical Engineering, PCB Layout, Circuit Design, Engineering
      label: 0.2869749069213867
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 4
  • multi_dataset_batch_sampler: round_robin
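
Putting the loss and these non-default hyperparameters together, a hedged reconstruction of the training run might look like the sketch below; the two rows are illustrative stand-ins for the real 8,117-pair dataset, and the output directory name is hypothetical:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative rows only; the real dataset has 8,117 (sentence_0, sentence_1, label) pairs
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "Description: I'm seeking an expert in Google Sheets and data management ...",
        "Description: Looking for a Freelance Videographer & Post-Production Editor! ...",
    ],
    "sentence_1": [
        "Skills: PHP, Visual Basic, Data Processing, Data Entry, Excel",
        "Skills: Video Editing, Video Production, Video Services, Videography, After Effects",
    ],
    "label": [0.46, 0.41],
})

# CosineSimilarityLoss regresses the cosine similarity of the two embeddings
# onto the label, using torch.nn.MSELoss by default
loss = CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="sbert-job-descriptions-to-skills",  # hypothetical
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()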

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

The four-epoch schedule below repeats five times; given the model name, each block likely corresponds to a separate K-fold training run.

Epoch Step Training Loss
0.9843 500 0.0012
1.9685 1000 0.0011
2.9528 1500 0.0008
3.9370 2000 0.0006
0.9843 500 0.0009
1.9685 1000 0.0008
2.9528 1500 0.0006
3.9370 2000 0.0005
0.9843 500 0.0007
1.9685 1000 0.0007
2.9528 1500 0.0005
3.9370 2000 0.0004
0.9843 500 0.0006
1.9685 1000 0.0006
2.9528 1500 0.0004
3.9370 2000 0.0003
0.9843 500 0.0005
1.9685 1000 0.0005
2.9528 1500 0.0004
3.9370 2000 0.0003

Framework Versions

  • Python: 3.12.6
  • Sentence Transformers: 3.2.0
  • Transformers: 4.45.2
  • PyTorch: 2.4.1+cpu
  • Accelerate: 1.0.1
  • Datasets: 3.0.1
  • Tokenizers: 0.20.1
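
To reproduce this environment, a pinned install along these lines should work (the card lists the CPU build of PyTorch; adjust the torch install for your platform):

pip install sentence-transformers==3.2.0 transformers==4.45.2 torch==2.4.1 accelerate==1.0.1 datasets==3.0.1 tokenizers==0.20.1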

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}