BGE base Financial Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
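As the architecture shows, the CLS token embedding is used as the sentence representation (pooling_mode_cls_token: True) and the output is L2-normalized, so dot product and cosine similarity coincide. For illustration, here is a minimal sketch of the equivalent forward pass written directly against the transformers library (the input string is a made-up example):

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Reproduce the Transformer -> Pooling(cls) -> Normalize() pipeline by hand.
tokenizer = AutoTokenizer.from_pretrained("mogmix/bge-base-financial-matryoshka")
bert = AutoModel.from_pretrained("mogmix/bge-base-financial-matryoshka")

inputs = tokenizer(["How does the company measure employee engagement?"],
                   padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state  # (batch, seq_len, 768)
cls_embedding = token_embeddings[:, 0]                   # CLS-token pooling
embedding = F.normalize(cls_embedding, p=2, dim=1)       # final Normalize() module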
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mogmix/bge-base-financial-matryoshka")
# Run inference
sentences = [
'We use a variety of practices to measure and support progress against these growth behaviors and to ensure that our employees are engaged and fulfilled at work.',
'How does the company measure and support employee engagement and cultural growth?',
"How does the company's membership format affect its profitability?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
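Because the model was trained with MatryoshkaLoss over the dimensions [768, 512, 256, 128, 64], its embeddings can be truncated to a smaller size with only a modest drop in retrieval quality (see the evaluation tables below). A minimal sketch using the truncate_dim argument of SentenceTransformer, available in recent sentence-transformers versions; the query is a made-up example:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings.
model = SentenceTransformer("mogmix/bge-base-financial-matryoshka", truncate_dim=256)
embeddings = model.encode(["How does the company measure employee engagement?"])
print(embeddings.shape)
# (1, 256)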
Evaluation
Metrics
Information Retrieval
- Datasets: dim_768, dim_512, dim_256, dim_128 and dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
---|---|---|---|---|---|
cosine_accuracy@1 | 0.7071 | 0.6971 | 0.6957 | 0.6829 | 0.6371 |
cosine_accuracy@3 | 0.8314 | 0.8329 | 0.83 | 0.8257 | 0.8043 |
cosine_accuracy@5 | 0.8729 | 0.8743 | 0.87 | 0.8529 | 0.8429 |
cosine_accuracy@10 | 0.9229 | 0.9157 | 0.91 | 0.9071 | 0.8814 |
cosine_precision@1 | 0.7071 | 0.6971 | 0.6957 | 0.6829 | 0.6371 |
cosine_precision@3 | 0.2771 | 0.2776 | 0.2767 | 0.2752 | 0.2681 |
cosine_precision@5 | 0.1746 | 0.1749 | 0.174 | 0.1706 | 0.1686 |
cosine_precision@10 | 0.0923 | 0.0916 | 0.091 | 0.0907 | 0.0881 |
cosine_recall@1 | 0.7071 | 0.6971 | 0.6957 | 0.6829 | 0.6371 |
cosine_recall@3 | 0.8314 | 0.8329 | 0.83 | 0.8257 | 0.8043 |
cosine_recall@5 | 0.8729 | 0.8743 | 0.87 | 0.8529 | 0.8429 |
cosine_recall@10 | 0.9229 | 0.9157 | 0.91 | 0.9071 | 0.8814 |
cosine_ndcg@10 | 0.8153 | 0.8089 | 0.8052 | 0.7972 | 0.7646 |
cosine_mrr@10 | 0.7809 | 0.7744 | 0.7714 | 0.7619 | 0.7265 |
cosine_map@100 | 0.7836 | 0.7775 | 0.7749 | 0.7655 | 0.7307 |
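The five dim_* columns correspond to evaluating the same retrieval task at each Matryoshka dimension. A sketch of how such a sweep can be run with InformationRetrievalEvaluator and its truncate_dim argument; the queries, corpus, and relevant_docs dictionaries below are hypothetical placeholders, not the actual held-out split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("mogmix/bge-base-financial-matryoshka")

# Hypothetical placeholder data mapping ids to texts and queries to relevant corpus ids.
queries = {"q1": "What factors contributed to the impairment charges for Depop and Elo7 in 2022?"}
corpus = {"d1": "The impairment charges for Depop and Elo7 were influenced by ..."}
relevant_docs = {"q1": {"d1"}}

for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,  # score retrieval using only the first `dim` dimensions
    )
    results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP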
Training Details
Training Dataset
json
- Dataset: json
- Size: 6,300 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:
Statistic | positive | anchor |
---|---|---|
type | string | string |
details | min: 4 tokens, mean: 45.46 tokens, max: 439 tokens | min: 7 tokens, mean: 20.55 tokens, max: 41 tokens |
- Samples:
positive | anchor |
---|---|
We believe our residential connectivity revenue will increase as a result of growth in average domestic broadband revenue per customer, as well as increases in domestic wireless and international connectivity revenue. | What are the projected trends for Comcast's residential connectivity revenue in 2023? |
The company's Artificial Intelligence Platform (AIP) leverages machine learning technologies and LLMs within the Gotham and Foundry platforms to connect AI with enterprise data, aiding in decision-making processes. | How does the company integrate large language models with its software platforms? |
The impairment charges for Depop and Elo7 were influenced by factors such as macroeconomic conditions including reopening and inflation, as well as management changes and revised projected cash flows affecting their fair values. | What factors contributed to the impairment charges for Depop and Elo7 in 2022? |
- Loss: MatryoshkaLoss with these parameters:
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
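These parameters correspond to wrapping MultipleNegativesRankingLoss, an in-batch-negatives contrastive loss, in MatryoshkaLoss, which re-applies the inner loss at each listed dimensionality with equal weight. A minimal sketch of how such a loss can be constructed:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# The inner loss treats the other in-batch positives as negatives for each anchor.
inner_loss = MultipleNegativesRankingLoss(model)
# MatryoshkaLoss applies it to the embedding truncated to each target dimension.
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])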
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
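For reference, a sketch of how these non-default values map onto SentenceTransformerTrainingArguments; output_dir is a placeholder, and save_strategy="epoch" is an assumption added so that load_best_model_at_end works with epoch-level evaluation:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder output path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)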
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|---|
0.8122 | 10 | 1.5675 | - | - | - | - | - |
1.0 | 13 | - | 0.8000 | 0.7975 | 0.7897 | 0.7811 | 0.7419 |
1.5685 | 20 | 0.6203 | - | - | - | - | - |
2.0 | 26 | - | 0.8114 | 0.8063 | 0.8044 | 0.7928 | 0.7599 |
2.3249 | 30 | 0.4678 | - | - | - | - | - |
3.0 | 39 | - | 0.8152 | 0.8092 | 0.8046 | 0.7967 | 0.7660 |
3.0812 | 40 | 0.4106 | - | - | - | - | - |
3.731 | 48 | - | 0.8153 | 0.8089 | 0.8052 | 0.7972 | 0.7646 |
- The final row (epoch 3.731, step 48), shown in bold in the rendered card, denotes the saved checkpoint; its NDCG@10 values match the Metrics section above.
Framework Versions
- Python: 3.12.7
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}