---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2382
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/nomic-embed-text-v1
widget:
- source_sentence: Collect the details that are associated with product '- Com espessura constante de' '- 0,04 m', with quantity 1900, unit M2
  sentences:
  - 'Item Description: UNKNOWN PRODUCT, priced at 949.00 EUR, Origin: National'
  - 'Product: UNKNOWN PRODUCT, Estimated Value: 514.00 EUR'
  - "Details for 'MacBook Pro 14\" Processador M2/3 16GB/18GB RAM | SSD 512GB | Teclado Es-Es', with quantity 1, unit UN:\n - LOTE 31\n - Price: 656.00 EUR"
- source_sentence: Collect the details that are associated with Lot 14 product '' 'Monitor de Sinais Vitais ', with quantity 2, unit Subcontracting Unit
  sentences:
  - "Details for 'Monitor de Sinais Vitais ', with quantity 2, unit Subcontracting Unit:\n - LOTE 60\n - Price: 564.00 EUR"
  - "Details for UNKNOWN PRODUCT:\n - LOTE 90\n - Price: 658.00 EUR"
  - 'Item Description: UNKNOWN PRODUCT, priced at 90.00 EUR, Origin: National'
- source_sentence: Collect the details that are associated with product '' '2202000270 - FIO SUT. AC. POLIGLIC. ABS. RÁPIDA 4/0 MULTIF AG. CILIND. 17 MM 1/2 C (UNID)', with quantity 288, unit UN
  sentences:
  - 'Item Description: ''2202000270 - FIO SUT. AC. POLIGLIC. ABS. RÁPIDA 4/0 MULTIF AG. CILIND. 17 MM 1/2 C (UNID)'', with quantity 288, unit UN, priced at 66.00 EUR, Origin: National'
  - 'Product: ''2202000285 - FIO SUT. POLIPROPI. NÃO ABS. 4/0 MONOF. AG. LANC. 16 MM 3/8 (UNID)'', with quantity 468, unit UN, Estimated Value: 619.00 EUR'
  - 'Item Description: ''Carro transporte de roupa limpa/roupa suja'', with quantity 1, unit Subcontracting Unit, priced at 574.00 EUR, Origin: National'
- source_sentence: Collect the details that are associated with product '' '2202000006 - FIO SUT. SEDA NÃO ABS. 0 MULTIF. SEM AGULHA (CART.)', with quantity 72, unit UN
  sentences:
  - 'Item Description: ''2202000309 - FIO SUT. ABS. MÉDIO PRAZO 2/0 MONOF. BARBADO, C/ AG. CILIND. 30MM 1/2C, 23CM (CART.)'', with quantity 24, unit UN, priced at 206.00 EUR, Origin: National'
  - "Details for '2202000006 - FIO SUT. SEDA NÃO ABS. 0 MULTIF. SEM AGULHA (CART.)', with quantity 72, unit UN:\n - LOTE 82\n - Price: 854.00 EUR"
  - 'LOTE 10 Description: ''Mesas apoio (anestesia e circulante)'', with quantity 4, unit Subcontracting Unit Price: 117.00 EUR'
- source_sentence: Collect the details that are associated with product '' '2202000251 - FIO SUT. ABS. LONGA 1 MONOF. AG. CILIND. 48 MM 1/2C 90CM (CART.)', with quantity 144, unit UN
  sentences:
  - "Details for UNKNOWN PRODUCT:\n - LOTE 34\n - Price: 477.00 EUR"
  - "Details for '2202000251 - FIO SUT. ABS. LONGA 1 MONOF. AG. CILIND. 48 MM 1/2C 90CM (CART.)', with quantity 144, unit UN:\n - LOTE 73\n - Price: 644.00 EUR"
  - 'Item Description: ''Mesas de Mayo'', with quantity 2, unit Subcontracting Unit, priced at 651.00 EUR, Origin: National'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on nomic-ai/nomic-embed-text-v1
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: pearson_cosine
      value: .nan
      name: Pearson Cosine
    - type: spearman_cosine
      value: .nan
      name: Spearman Cosine
---

# SentenceTransformer based on nomic-ai/nomic-embed-text-v1

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1)
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ptpedroVortal/nomic_vortal_v3.4")
# Run inference
sentences = [
    "Collect the details that are associated with product '' '2202000251 - FIO SUT. ABS. LONGA 1 MONOF. AG. CILIND. 48 MM 1/2C 90CM (CART.)', with quantity 144, unit UN",
    "Details for '2202000251 - FIO SUT. ABS. LONGA 1 MONOF. AG. CILIND. 48 MM 1/2C 90CM (CART.)', with quantity 144, unit UN:\n - LOTE 73\n - Price: 644.00 EUR",
    "Item Description: 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit, priced at 651.00 EUR, Origin: National",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
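Beyond scoring pre-paired sentences, the same embeddings can drive semantic search over a pool of candidate item descriptions. The snippet below is a minimal sketch, not an official recipe: the corpus simply reuses strings from the examples on this card, and `util.semantic_search` ranks them by cosine similarity to the query.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ptpedroVortal/nomic_vortal_v3.4")

# Candidate pool of node descriptions (strings reused from the examples above)
corpus = [
    "Item Description: 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit, priced at 651.00 EUR, Origin: National",
    "Details for UNKNOWN PRODUCT:\n - LOTE 34\n - Price: 477.00 EUR",
    "Product: UNKNOWN PRODUCT, Estimated Value: 514.00 EUR",
]
query = "Collect the details that are associated with Lot 4 product '' 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query and keep the top 2 hits
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]])
```

For larger corpora, the corpus embeddings only need to be computed once and can be cached or stored in a vector index.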
## Evaluation

### Metrics

#### Semantic Similarity

* Evaluated with `__main__.CustomEvaluator`

| Metric              | Value   |
|:--------------------|:--------|
| pearson_cosine      | nan     |
| **spearman_cosine** | **nan** |
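The evaluator referenced above (`__main__.CustomEvaluator`) lives in the training script and is not published with this model, so the exact metric computation cannot be reproduced from the card alone. As a hedged approximation, the sketch below computes Pearson and Spearman correlations between the model's cosine similarities and gold scores with `scipy`; the example pairs are illustrative. Note that both correlations are undefined when the gold scores are constant, which is one plausible reason the values above are `nan`, since every sample shown in the next section carries a score of 1.

```python
from scipy.stats import pearsonr, spearmanr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ptpedroVortal/nomic_vortal_v3.4")

# Hypothetical (query, candidate, gold score) pairs in the card's data format
queries = [
    "Collect the details that are associated with Lot 4 product '' 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit",
    "Collect the details that are associated with Lot 14 product '' 'Monitor de Sinais Vitais ', with quantity 2, unit Subcontracting Unit",
]
candidates = [
    "Item Description: 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit, priced at 651.00 EUR, Origin: National",
    "Details for 'Monitor de Sinais Vitais ', with quantity 2, unit Subcontracting Unit:\n - LOTE 60\n - Price: 564.00 EUR",
]
gold_scores = [1, 1]  # every label in the dataset samples shown below is 1

# Cosine similarity of each query with its own candidate (diagonal of the matrix)
cosine_scores = model.similarity(model.encode(queries), model.encode(candidates)).diagonal().tolist()

# With constant gold scores the correlations are undefined, hence nan metrics
pearson_cosine, _ = pearsonr(gold_scores, cosine_scores)
spearman_cosine, _ = spearmanr(gold_scores, cosine_scores)
print(pearson_cosine, spearman_cosine)
```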
## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 2,382 training samples
* Columns: `query`, `correct_node`, and `score`
* Approximate statistics based on the first 1000 samples:
  |         | query  | correct_node | score |
  |:--------|:-------|:-------------|:------|
  | type    | string | string       | int   |
  | details |        |              |       |
* Samples:
  | query | correct_node | score |
  |:------|:-------------|:------|
  | Collect the details that are associated with product '' '2202000275 - FIO SUT. POLIAMIDA NÃO ABS. 2/0 MONOF AG. CILIND. 30MM 1/2 LOOP (UNID)', with quantity 216, unit UN | LOTE 98<br>Description: '2202000275 - FIO SUT. POLIAMIDA NÃO ABS. 2/0 MONOF AG. CILIND. 30MM 1/2 LOOP (UNID)', with quantity 216, unit UN<br>Price: 940.00 EUR | 1 |
  | Collect the details that are associated with product '' '2202000294 - FIO SUT. AC. POLIGLIC. ABS. 2/0 MULTIF SEM AGULHA PRÉ CORTADO (UNID)', with quantity 324, unit UN | Product: '2202000294 - FIO SUT. AC. POLIGLIC. ABS. 2/0 MULTIF SEM AGULHA PRÉ CORTADO (UNID)', with quantity 324, unit UN, Estimated Value: 696.00 EUR | 1 |
  | Collect the details that are associated with Lot 4 product '' 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit | LOTE 44<br>Description: 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit<br>Price: 542.00 EUR | 1 |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
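The original training script is not included in this repository. The following is a minimal sketch of how the `(query, correct_node)` layout above pairs with `MultipleNegativesRankingLoss`; the two rows are copied from the samples above, and the constant `score` column is omitted on the assumption that the loss only needs positive pairs.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# NomicBERT ships custom modeling code, hence trust_remote_code=True
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

# Two rows in the (query, correct_node) layout shown above
train_dataset = Dataset.from_dict({
    "query": [
        "Collect the details that are associated with Lot 4 product '' 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit",
        "Collect the details that are associated with product '' '2202000294 - FIO SUT. AC. POLIGLIC. ABS. 2/0 MULTIF SEM AGULHA PRÉ CORTADO (UNID)', with quantity 324, unit UN",
    ],
    "correct_node": [
        "LOTE 44\nDescription: 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit\nPrice: 542.00 EUR",
        "Product: '2202000294 - FIO SUT. AC. POLIGLIC. ABS. 2/0 MULTIF SEM AGULHA PRÉ CORTADO (UNID)', with quantity 324, unit UN, Estimated Value: 696.00 EUR",
    ],
})

# Each (query, correct_node) pair is a positive; the other correct_node values in
# the batch act as in-batch negatives. scale=20.0 and cosine similarity match the
# parameters listed in the JSON block above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)
```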
### Evaluation Dataset

#### Unnamed Dataset

* Size: 297 evaluation samples
* Columns: `query`, `correct_node`, and `score`
* Approximate statistics based on the first 297 samples:
  |         | query  | correct_node | score |
  |:--------|:-------|:-------------|:------|
  | type    | string | string       | int   |
  | details |        |              |       |
* Samples:
  | query | correct_node | score |
  |:------|:-------------|:------|
  | Collect the details that are associated with Lot 7 product '' 'Carro transporte de roupa suja', with quantity 1, unit Subcontracting Unit | Item Description: 'Carro transporte de roupa suja', with quantity 1, unit Subcontracting Unit, priced at 628.00 EUR, Origin: National | 1 |
  | Collect the details that are associated with Lot 10 product '' 'Mesas para cirurgia', with quantity 2, unit Subcontracting Unit | Details for 'Mesas para cirurgia', with quantity 2, unit Subcontracting Unit:<br> - LOTE 83<br> - Price: 940.00 EUR | 1 |
  | Collect the details that are associated with Lot 1 product '' 'PAINEL MULTIPLO ALERGENOS RESPIRATORIOS ', with quantity 1152, unit UND | Product: 'PAINEL MULTIPLO ALERGENOS RESPIRATORIOS ', with quantity 1152, unit UND, Estimated Value: 714.00 EUR | 1 |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates

A minimal sketch of passing these values to the trainer is shown after the full hyperparameter list below.

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
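As referenced after the non-default hyperparameter list, here is a minimal sketch (not the original training script; the dataset rows and `output_dir` are placeholders) of passing those values to `SentenceTransformerTrainingArguments` and running `SentenceTransformerTrainer`.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
loss = MultipleNegativesRankingLoss(model, scale=20.0)

# Placeholder datasets in the (query, correct_node) layout; the real splits
# hold 2,382 training and 297 evaluation samples.
train_dataset = Dataset.from_dict({
    "query": ["Collect the details that are associated with Lot 4 product '' 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit"],
    "correct_node": ["LOTE 44\nDescription: 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit\nPrice: 542.00 EUR"],
})
eval_dataset = train_dataset

args = SentenceTransformerTrainingArguments(
    output_dir="nomic_vortal_v3.4",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    warmup_ratio=0.1,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```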
### Training Logs

| Epoch      | Step    | Training Loss | Validation Loss | spearman_cosine |
|:----------:|:-------:|:-------------:|:---------------:|:---------------:|
| 0.6711     | 100     | 0.6485        | 0.4410          | nan             |
| 1.3356     | 200     | 0.5026        | 0.4399          | nan             |
| **2.0067** | **300** | **0.491**     | **0.4175**      | **nan**         |
| 2.6711     | 400     | 0.442         | 0.4409          | nan             |
| 3.3356     | 500     | 0.3999        | 0.4421          | nan             |
| 4.0067     | 600     | 0.367         | 0.6182          | nan             |
| 4.6711     | 700     | 0.3743        | 0.6104          | nan             |
| 5.3356     | 800     | 0.1972        | 0.6115          | nan             |

* The bold row denotes the saved checkpoint.

### Framework Versions

- Python: 3.10.14
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.4

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```