---
base_model: Snowflake/snowflake-arctic-embed-m
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:678
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are some of the content types mentioned in the context?
  sentences:
  - 'and/or use cases that were not evaluated in initial testing. \\ \end{tabular} & \begin{tabular}{l} Value Chain and Component \\ Integration \\ \end{tabular} \\ \hline MG-3.1-004 & \begin{tabular}{l} Take reasonable measures to review training data for CBRN information, and \\ intellectual property, and where appropriate, remove it. Implement reasonable \\ measures to prevent, flag, or take other action in response to outputs that \\ reproduce particular training data (e.g., plagiarized, trademarked, patented, \\ licensed content or trade secret material). \\ \end{tabular} & \begin{tabular}{l} Intellectual Property; CBRN \\ Information or Capabilities \\ \end{tabular} \\ \hline \end{tabular} \end{center}'
  - 'Bias and Homogenization \\ \end{tabular} \\ \hline GV-6.2-004 & \begin{tabular}{l} Establish policies and procedures for continuous monitoring of third-party GAI \\ systems in deployment. \\ \end{tabular} & \begin{tabular}{l} Value Chain and Component \\ Integration \\ \end{tabular} \\ \hline GV-6.2-005 & \begin{tabular}{l} Establish policies and procedures that address GAI data redundancy, including \\ model weights and other system artifacts. \\ \end{tabular} & Harmful Bias and Homogenization \\ \hline GV-6.2-006 & \begin{tabular}{l} Establish policies and procedures to test and manage risks related to rollover and \\ fallback technologies for GAI systems, acknowledging that rollover and fallback \\ may include manual processing. \\ \end{tabular} & Information Integrity \\ \hline GV-6.2-007 & \begin{tabular}{l} Review vendor contracts and avoid arbitrary or capricious termination of critical \\ GAI technologies or vendor services and non-standard terms that may amplify or \\'
  - 'time. \\ \end{tabular} & \begin{tabular}{l} Information Integrity; Obscene, \\ Degrading, and/or Abusive \\ Content; Value Chain and \\ Component Integration; Harmful \\ Bias and Homogenization; \\ Dangerous, Violent, or Hateful \\ Content; CBRN Information or \\ Capabilities \\ \end{tabular} \\ \hline GV-1.3-002 & \begin{tabular}{l} Establish minimum thresholds for performance or assurance criteria and review as \\ part of deployment approval ("go/"no-go") policies, procedures, and processes, \\ with reviewed processes and approval thresholds reflecting measurement of GAI \\ capabilities and risks. \\ \end{tabular} & \begin{tabular}{l} CBRN Information or Capabilities; \\ Confabulation; Dangerous, \\ Violent, or Hateful Content \\ \end{tabular} \\ \hline GV-1.3-003 & \begin{tabular}{l} Establish a test plan and response policy, before developing highly capable models, \\ to periodically evaluate whether the model may misuse CBRN information or \\'
- source_sentence: What are the legal and regulatory requirements involving AI that need to be understood, managed, and documented?
  sentences:
  - 'GOVERN 1.1: Legal and regulatory requirements involving Al are understood, managed, and documented. \begin{center} \begin{tabular}{|l|l|l|} \hline Action ID & Suggested Action & GAI Risks \\ \hline GV-1.1-001 & \begin{tabular}{l} Align GAI development and use with applicable laws and regulations, including \\ those related to data privacy, copyright and intellectual property law. \\ \end{tabular} & \begin{tabular}{l} Data Privacy; Harmful Bias and \\ Homogenization; Intellectual \\ Property \\ \end{tabular} \\ \hline \end{tabular} \end{center} Al Actor Tasks: Governance and Oversight\\ ${ }^{14} \mathrm{AI}$ Actors are defined by the OECD as "those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI." See Appendix A of the AI RMF for additional descriptions of Al Actors and AI Actor Tasks.'
  - '\begin{center} \begin{tabular}{|c|c|c|} \hline Action ID & Suggested Action & GAI Risks \\ \hline GV-1.6-001 & \begin{tabular}{l} Enumerate organizational GAI systems for incorporation into AI system inventory \\ and adjust AI system inventory requirements to account for GAI risks. \\ \end{tabular} & Information Security \\ \hline GV-1.6-002 & \begin{tabular}{l} Define any inventory exemptions in organizational policies for GAI systems \\ embedded into application software. \\ \end{tabular} & \begin{tabular}{l} Value Chain and Component \\ Integration \\ \end{tabular} \\ \hline GV-1.6-003 & \begin{tabular}{l} In addition to general model, governance, and risk information, consider the \\ following items in GAI system inventory entries: Data provenance information \\ (e.g., source, signatures, versioning, watermarks); Known issues reported from \\ internal bug tracking or external information sharing resources (e.g., Al incident \\'
  - 'Wei, J. et al. (2024) Long Form Factuality in Large Language Models. arXiv. \href{https://arxiv.org/pdf/2403.18802}{https://arxiv.org/pdf/2403.18802} Weidinger, L. et al. (2021) Ethical and social risks of harm from Language Models. arXiv. \href{https://arxiv.org/pdf/2112.04359}{https://arxiv.org/pdf/2112.04359} Weidinger, L. et al. (2023) Sociotechnical Safety Evaluation of Generative AI Systems. arXiv. \href{https://arxiv.org/pdf/2310.11986}{https://arxiv.org/pdf/2310.11986} Weidinger, L. et al. (2022) Taxonomy of Risks posed by Language Models. FAccT'' 22. \href{https://dl.acm.org/doi/pdf/10.1145/3531146.3533088}{https://dl.acm.org/doi/pdf/10.1145/3531146.3533088} West, D. (2023) Al poses disproportionate risks to women. Brookings. \href{https://www.brookings.edu/articles/ai-poses-disproportionate-risks-to-women/}{https://www.brookings.edu/articles/ai-poses-disproportionate-risks-to-women/}'
- source_sentence: What are some known issues reported from internal bug tracking or external information sharing resources?
  sentences:
  - 'Kirchenbauer, J. et al. (2023) A Watermark for Large Language Models. OpenReview. \href{https://openreview.net/forum?id=aX8ig9X2a7}{https://openreview.net/forum?id=aX8ig9X2a7} Kleinberg, J. et al. (May 2021) Algorithmic monoculture and social welfare. PNAS.\\ \href{https://www.pnas.org/doi/10.1073/pnas}{https://www.pnas.org/doi/10.1073/pnas}. 2018340118\\ Lakatos, S. (2023) A Revealing Picture. Graphika. \href{https://graphika.com/reports/a-revealing-picture}{https://graphika.com/reports/a-revealing-picture}\\ Lee, H. et al. (2024) Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks. arXiv. \href{https://arxiv.org/pdf/2310.07879}{https://arxiv.org/pdf/2310.07879} Lenaerts-Bergmans, B. (2024) Data Poisoning: The Exploitation of Generative AI. Crowdstrike. \href{https://www.crowdstrike.com/cybersecurity-101/cyberattacks/data-poisoning/}{https://www.crowdstrike.com/cybersecurity-101/cyberattacks/data-poisoning/}'
  - '(e.g., source, signatures, versioning, watermarks); Known issues reported from \\ internal bug tracking or external information sharing resources (e.g., Al incident \\ database, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight roles \\ and responsibilities; Special rights and considerations for intellectual property, \\ licensed works, or personal, privileged, proprietary or sensitive data; Underlying \\ foundation models, versions of underlying models, and access modes. \\ \end{tabular} & \begin{tabular}{l} Data Privacy; Human-AI \\ Configuration; Information \\ Integrity; Intellectual Property; \\ Value Chain and Component \\ Integration \\ \end{tabular} \\ \hline \multicolumn{3}{|l|}{AI Actor Tasks: Governance and Oversight} \\ \hline \end{tabular} \end{center}'
  - 'Trustworthy AI Characteristic: Safe, Explainable and Interpretable \subsection*{2.2. Confabulation} "Confabulation" refers to a phenomenon in which GAI systems generate and confidently present erroneous or false content in response to prompts. Confabulations also include generated outputs that diverge from the prompts or other input or that contradict previously generated statements in the same context. These phenomena are colloquially also referred to as "hallucinations" or "fabrications."'
- source_sentence: Why do image generator models struggle to produce non-stereotyped content even when prompted?
  sentences:
  - Bias exists in many forms and can become ingrained in automated systems. Al systems, including GAI systems, can increase the speed and scale at which harmful biases manifest and are acted upon, potentially perpetuating and amplifying harms to individuals, groups, communities, organizations, and society. For example, when prompted to generate images of CEOs, doctors, lawyers, and judges, current text-to-image models underrepresent women and/or racial minorities, and people with disabilities. Image generator models have also produced biased or stereotyped output for various demographic groups and have difficulty producing non-stereotyped content even when the prompt specifically requests image features that are inconsistent with the stereotypes. Harmful bias in GAI models, which may stem from their training data, can also cause representational harms or perpetuate or exacerbate bias based on race, gender, disability, or other protected classes.
  - 'The White House (2016) Circular No. A-130, Managing Information as a Strategic Resource. \href{https://www.whitehouse.gov/wp-}{https://www.whitehouse.gov/wp-}\\ content/uploads/legacy drupal files/omb/circulars/A130/a130revised.pdf\\ The White House (2023) Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. \href{https://www.whitehouse.gov/briefing-room/presidentialactions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-ofartificial-intelligence/}{https://www.whitehouse.gov/briefing-room/presidentialactions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-ofartificial-intelligence/}'
  - "%Overriding the \\footnotetext command to hide the marker if its value is `0`\n\\let\\svfootnotetext\\footnotetext\n\\renewcommand\\footnotetext[2][?]{%\n \\if\\relax#1\\relax%\n \\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else%\n \\if?#1\\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else\\svfootnotetext[#1]{#2}\\fi%\n \\fi\n}\n\n\\begin{document}\n\\maketitle\n\\section*{Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile}\n\\section*{NIST Trustworthy and Responsible AI NIST AI 600-1}\n\\section*{Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile}\nThis publication is available free of charge from:\\\\\n\\href{https://doi.org/10.6028/NIST.Al.600-1}{https://doi.org/10.6028/NIST.Al.600-1}\n\nJuly 2024\n\n\\includegraphics[max width=\\textwidth, center]{2024_09_22_1b8d52aa873ff5f60066g-02}\\\\\nU.S. Department of Commerce Gina M. Raimondo, Secretary"
- source_sentence: What processes should be updated for GAI acquisition and procurement vendor assessments?
  sentences:
  - 'Inventory all third-party entities with access to organizational content and \\ establish approved GAI technology and service provider lists. \\ \end{tabular} & \begin{tabular}{l} Value Chain and Component \\ Integration \\ \end{tabular} \\ \hline GV-6.1-008 & \begin{tabular}{l} Maintain records of changes to content made by third parties to promote content \\ provenance, including sources, timestamps, metadata. \\ \end{tabular} & \begin{tabular}{l} Information Integrity; Value Chain \\ and Component Integration; \\ Intellectual Property \\ \end{tabular} \\ \hline GV-6.1-009 & \begin{tabular}{l} Update and integrate due diligence processes for GAI acquisition and \\ procurement vendor assessments to include intellectual property, data privacy, \\ security, and other risks. For example, update processes to: Address solutions that \\ may rely on embedded GAI technologies; Address ongoing monitoring, \\ assessments, and alerting, dynamic risk assessments, and real-time reporting \\'
  - "\\item Information Integrity: Lowered barrier to entry to generate and support the exchange and consumption of content which may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large-scale dis- and mis-information campaigns.\n \\item Information Security: Lowered barriers for offensive cyber capabilities, including via automated discovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber\n\\end{enumerate}\n\\footnotetext{${ }^{6}$ Some commenters have noted that the terms \"hallucination\" and \"fabrication\" anthropomorphize GAI, which itself is a risk related to GAI systems as it can inappropriately attribute human characteristics to non-human entities.\\\\"
  - 'Evaluation data; Ethical considerations; Legal and regulatory requirements. \\ \end{tabular} & \begin{tabular}{l} Information Integrity; Harmful Bias \\ and Homogenization \\ \end{tabular} \\ \hline AI Actor Tasks: Al Deployment, Al Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV & & \\ \hline \end{tabular} \end{center}'
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.8850574712643678
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9540229885057471
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8850574712643678
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.31800766283524895
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.19999999999999996
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09999999999999998
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.02458492975734355
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.026500638569604086
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.027777777777777776
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.027777777777777776
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.20817571346541755
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.927969348659004
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.025776926351638994
      name: Cosine Map@100
    - type: dot_accuracy@1
      value: 0.8850574712643678
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.9540229885057471
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 1.0
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 1.0
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.8850574712643678
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.31800766283524895
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.19999999999999996
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.09999999999999998
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.02458492975734355
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.026500638569604086
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.027777777777777776
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.027777777777777776
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.20817571346541755
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.927969348659004
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.025776926351638994
      name: Dot Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mr-Cool/midterm-finetuned-embedding")
# Run inference
sentences = [
    'What processes should be updated for GAI acquisition and procurement vendor assessments?',
    'Inventory all third-party entities with access to organizational content and \\\\\nestablish approved GAI technology and service provider lists. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nValue Chain and Component \\\\\nIntegration \\\\\n\\end{tabular} \\\\\n\\hline\nGV-6.1-008 & \\begin{tabular}{l}\nMaintain records of changes to content made by third parties to promote content \\\\\nprovenance, including sources, timestamps, metadata. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nInformation Integrity; Value Chain \\\\\nand Component Integration; \\\\\nIntellectual Property \\\\\n\\end{tabular} \\\\\n\\hline\nGV-6.1-009 & \\begin{tabular}{l}\nUpdate and integrate due diligence processes for GAI acquisition and \\\\\nprocurement vendor assessments to include intellectual property, data privacy, \\\\\nsecurity, and other risks. For example, update processes to: Address solutions that \\\\\nmay rely on embedded GAI technologies; Address ongoing monitoring, \\\\\nassessments, and alerting, dynamic risk assessments, and real-time reporting \\\\',
    'Evaluation data; Ethical considerations; Legal and regulatory requirements. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nInformation Integrity; Harmful Bias \\\\\nand Homogenization \\\\\n\\end{tabular} \\\\\n\\hline\nAI Actor Tasks: Al Deployment, Al Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8851     |
| cosine_accuracy@3   | 0.954      |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.8851     |
| cosine_precision@3  | 0.318      |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.0246     |
| cosine_recall@3     | 0.0265     |
| cosine_recall@5     | 0.0278     |
| cosine_recall@10    | 0.0278     |
| cosine_ndcg@10      | 0.2082     |
| cosine_mrr@10       | 0.928      |
| **cosine_map@100**  | **0.0258** |
| dot_accuracy@1      | 0.8851     |
| dot_accuracy@3      | 0.954      |
| dot_accuracy@5      | 1.0        |
| dot_accuracy@10     | 1.0        |
| dot_precision@1     | 0.8851     |
| dot_precision@3     | 0.318      |
| dot_precision@5     | 0.2        |
| dot_precision@10    | 0.1        |
| dot_recall@1        | 0.0246     |
| dot_recall@3        | 0.0265     |
| dot_recall@5        | 0.0278     |
| dot_recall@10       | 0.0278     |
| dot_ndcg@10         | 0.2082     |
| dot_mrr@10          | 0.928      |
| dot_map@100         | 0.0258     |

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 678 training samples
* Columns: sentence_0 and sentence_1
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
  | details |            |            |
* Samples:
  | sentence_0 | sentence_1 |
  |:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------|
  | What are the characteristics of trustworthy AI? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
  | How are the characteristics of trustworthy AI integrated into organizational policies? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
  | Why is it important to integrate trustworthy AI characteristics into organizational processes? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
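Because training used MatryoshkaLoss with dimensions [768, 512, 256, 128, 64], the leading components of each embedding form usable sub-embeddings: you can truncate to any of those sizes and re-normalize, trading a little accuracy for much cheaper storage and search. A minimal NumPy sketch of that post-processing (not part of this card's pipeline; the random matrix stands in for real `model.encode` output, and `truncate_embeddings` is an illustrative helper):

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` Matryoshka components and re-normalize to unit length."""
    kept = embeddings[:, :dim]
    return kept / np.linalg.norm(kept, axis=1, keepdims=True)

# Stand-in for `model.encode(sentences)`: 3 unit-norm 768-d vectors.
rng = np.random.default_rng(42)
full = rng.normal(size=(3, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)

small = truncate_embeddings(full, 256)  # 256 is one of the trained matryoshka_dims
print(small.shape)       # (3, 256)

# After re-normalization, cosine similarity is a plain dot product.
similarities = small @ small.T
print(similarities.shape)  # (3, 3)
```

Recent sentence-transformers releases can do the truncation at encode time (a `truncate_dim` argument on the model), which avoids this manual step.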
### Training Logs
| Epoch  | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0    | 34   | 0.0250         |
| 1.4706 | 50   | 0.0258         |
| 2.0    | 68   | 0.0257         |
| 2.9412 | 100  | 0.0258         |
| 3.0    | 102  | 0.0258         |
| 4.0    | 136  | 0.0258         |
| 4.4118 | 150  | 0.0258         |
| 5.0    | 170  | 0.0258         |

### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.6.0.dev20240922+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```