If you are able to, please help me fund my open research. Thank you for your generosity!
FMMB-BE: The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition)
The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition) is the perfect model for embedding, at the speed of light, texts of up to 8192 tokens written in French, Dutch, German, or English. It produces very similar embeddings across these languages.
For each input text, the FMMB model autodetects the most efficient tokenizer (English, French, Dutch, or German) and routes the text to it. Each tokenizer uses its own language-specific token embeddings, reducing the risk of language interference. Because all the other weights are shared, the FMMB models can mix and match languages in the same batch without loading 4 different models in memory. That said, if you already know which tokenizer you want to use, you can use the monolingual variants for French, Dutch, German, or English for faster tokenization and a lower memory footprint.
This sentence-transformers model was trained on a small parallel corpus containing English-French, English-Dutch, and English-German sentence pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Input texts can be used as-is; no prefixes are needed.
Thanks to the magic of Trans-Tokenization, monolingual English models such as ModernBERT-Embed from Nomic AI can be turned into embedding models for another language, with almost no GPU compute involved!
Each of the 5 FMMB-BE models is actually a copy of the exact same model, paired with a different tokenizer and embedding table. Because all trans-tokenized models operate on embeddings in the same latent space, aligning them cross-lingually is a breeze: after creating a "super" model that can speak in all 4 tokenizers, this model can be finetuned to produce similar embeddings for sentences that are translations of each other.
ModernBERT, recently developed by Answer.AI and LightOn, is about 3x to 6x faster at inference time than regular BERT/RoBERTa models, while providing superior results. This makes it a wonderful choice for many use cases.
Warning: this model is cross-lingually aligned but was trained in an unsupervised manner. It is recommended to finetune it on your own use case before using it; a minimal finetuning sketch follows.
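As a starting point, here is a minimal finetuning sketch using the sentence-transformers trainer with in-batch negatives. The example pairs, the dataset construction, and the output path are purely illustrative and not part of the original training setup.

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True)

# Tiny illustrative dataset of (anchor, positive) pairs; replace with your own data
train_dataset = Dataset.from_dict({
    "sent1": ["How do I reset my password?", "Waar vind ik mijn factuur?"],
    "sent2": ["Follow these steps to reset your password.", "Hier vindt u al uw facturen."],
})

# In-batch negatives, the same loss family used to align the model cross-lingually
loss = losses.MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
model.save("fmmb-be-finetuned")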
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: ModernBERT-Embed-Base
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- parallel-sentences
- Languages: fr,nl,de,en
- License: apache-2.0
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
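In other words, the sentence embedding is the mean of the token embeddings produced by the ModernBERT encoder, ignoring padding. Below is a minimal sketch of that pooling step in plain PyTorch, with random tensors standing in for the encoder output.

import torch

token_embeddings = torch.randn(2, 10, 768)                    # (batch, seq_len, hidden) from the transformer
attention_mask = torch.tensor([[1] * 10, [1] * 6 + [0] * 4])  # 1 = real token, 0 = padding

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True above)
mask = attention_mask.unsqueeze(-1).float()                   # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embeddings.shape)                              # torch.Size([2, 768])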
Usage
IMPORTANT: While waiting for the next stable release of the transformers library, please install the latest git release to use ModernBERT models:
pip install --upgrade git+https://github.com/huggingface/transformers.git
The easiest way to use this model is to install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True)
# Run inference
sentences = [
'These three mysterious men came to our help.',
'Three strange guys helped us then.',
'These three black birds came in our garden.',
'Some people are helpful.',
'One, two, three... Who can guess the next digits?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (5, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([5, 5])
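Because the embeddings are cross-lingually aligned, a single index can serve queries and documents in any of the four languages. Below is a minimal semantic-search sketch; the corpus and query sentences are illustrative translations of the example sentences above, and the top_k value is arbitrary.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True)

# A mixed-language corpus: Dutch, German, and English in the same batch
corpus = [
    "Drie vreemde mannen kwamen ons helpen.",
    "Drei schwarze Vögel landeten in unserem Garten.",
    "Some people are simply very helpful.",
]
query = "Trois hommes mystérieux nous ont aidés."  # French query

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Retrieve the 2 most similar corpus sentences by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])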
Training Details
Training Dataset
parallel-sentences
- Dataset: parallel dataset
- Size: 8,066,634 training samples
- Columns: sent1 and sent2
- Approximate statistics based on the first 1000 samples:
| | sent1 | sent2 |
|---|---|---|
| type | string | string |
| details | min: 6 tokens, mean: 17.86 tokens, max: 46 tokens | min: 6 tokens, mean: 18.87 tokens, max: 52 tokens |
- Samples:
| sent1 | sent2 |
|---|---|
| The faces may change, but the essential views that have characterised Israel's government for decades will remain the same after 9 April | Les visages peuvent changer, mais les opinions fondamentales qui caractérisent le gouvernement israélien depuis des décennies resteront les mêmes après le 9 avril |
| Yeah. My husband never talked about business. | M'n man had het nooit over z'n zaken. |
| Or do they think that We hear not their secrets and their private counsels? | Oder meinen sie, daß Wir ihre Geheimnisse und heimlichen Beratungen nicht hören? |
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
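For reference, this loss configuration corresponds to the following sentence-transformers call. This is a sketch built from the parameters listed above, using the checkpoint of this card as the model.

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True)

# In-batch negatives with cosine similarity scaled by 20, as listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)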
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- bf16: True
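Expressed with the sentence-transformers training API, these non-default hyperparameters look roughly as follows; this is a sketch, and the output directory is a placeholder.

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/fmmb-be",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=2e-05,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,
)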
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Framework Versions
- Python: 3.11.7
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0.dev0
- PyTorch: 2.2.0+cu121
- Accelerate: 1.0.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
If you use or finetune this model, please consider citing this paper and the sentence-transformers library:
BibTeX
This model
@misc{remy-2025-fmmb-be,
title={The Fairly Multilingual ModernBERT Embedding Model -- Belgian Edition},
author={Francois Remy},
year={2025},
eprint={2501.99999},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}