πŸ™ If you are able to, please help me fund my open research. πŸ™ Thank you for your generosity! πŸ€—

FMMB-BE: The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition)

πŸ‡§πŸ‡ͺ The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition) is the perfect model for embedding texts of up to 8192 tokens written in French, Dutch, German, or English at the speed of light. It produces embeddings that are very similar across languages.

πŸ”€ For each input text, the FMMB model automatically detects the most efficient tokenizer (English, French, Dutch, or German) and routes the text to it. Each tokenizer uses its own language-specific token embeddings, reducing the risk of language interference. Because all other weights are shared, the FMMB models can mix and match languages in the same batch without loading 4 different models in memory. That said, if you already know which tokenizer you want to use, you can use the monolingual variants for French, Dutch, German, or English for faster tokenization and a lower memory footprint (a minimal sketch follows below).
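
For instance, if all your texts are in French, a sketch like the one below can be used. The monolingual repository name shown here is an assumption; check the Parallia organization on the Hub for the actual variant IDs.

from sentence_transformers import SentenceTransformer

# Hypothetical monolingual variant name (assumption); see the Parallia
# organization on the Hugging Face Hub for the actual repository IDs.
model_fr = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-FR", trust_remote_code=True)

embeddings = model_fr.encode([
    "Ces trois hommes mystérieux sont venus à notre aide.",
    "Trois types étranges nous ont aidés.",
])
print(embeddings.shape)  # (2, 768)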

πŸ†˜ This sentence-transformers model was trained on a small parallel corpus containing English-French, English-Dutch, and English-German sentence pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Input texts can be used as-is; no prefixes are needed.

πŸͺ„ Thanks to the magic of Trans-Tokenization, monolingual English models such as ModernBERT-Embed from Nomic AI can be turned into embedding models for another language, with almost no GPU compute involved! 🀯

βš–οΈ Each of the 5 FMMB-BE models are actually copies of the exact same model, paired with different tokenizers and embedding tables. Indeed, as all trans-tokenized models operate on embeddings in the same latent space, aligning them cross-lingually is a breeze: after creating a "super" model which can speak in all of the 4 tokenizers, this model can be finetuned to produce similar embeddings for sentences which are translation of each other.

⚑ ModernBERT, developed last month by Answer.AI and LightOn, is about 3x to 6x faster at inference time than regular BERT/RoBERTa models, while providing superior results. This makes it a wonderful choice for many use cases.

⚠️ This model is cross-lingually aligned, but it was trained in an unsupervised manner. It is recommended to finetune this model on your use case before using it (a minimal fine-tuning sketch follows below).
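
As a starting point, here is a minimal fine-tuning sketch using the Sentence Transformers v3 trainer and the same MultipleNegativesRankingLoss used to align this model. The example pairs, column names, and hyperparameters are placeholders to adapt to your own data.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True)

# Placeholder pairs of related texts from your own use case.
train_dataset = Dataset.from_dict({
    "sent1": ["How do I reset my password?", "Where can I find my invoice?"],
    "sent2": ["Follow these steps to reset your password.", "Invoices are listed on the billing page."],
})

loss = MultipleNegativesRankingLoss(model)
args = SentenceTransformerTrainingArguments(
    output_dir="fmmb-be-finetuned",
    num_train_epochs=1,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()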

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: ModernBERT-Embed-Base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • parallel-sentences
  • Languages: fr, nl, de, en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
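
For reference, the two modules above can be assembled by hand with the sentence_transformers models API. This is only a sketch of the equivalent configuration; in practice, simply load the published checkpoint as shown in the Usage section below.

from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer(
    "Parallia/Fairly-Multilingual-ModernBERT-Embed-BE",
    max_seq_length=8192,
    model_args={"trust_remote_code": True},
    tokenizer_args={"trust_remote_code": True},
    config_args={"trust_remote_code": True},
)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode="mean",
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])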

Usage

IMPORTANT: Until the next stable release of the transformers library, please install the latest development version from git to use ModernBERT models:

pip install --upgrade git+https://github.com/huggingface/transformers.git

The easiest way to use this model is to install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the πŸ€— Hub
model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True)
# Run inference
sentences = [
    'These three mysterious men came to our help.',
    'Three strange guys helped us then.',
    'These three black birds came in our garden.',
    'Some people are helpful.',
    'One, two, three... Who can guess the next digits?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [5, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [5, 5]
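
Because the embeddings are aligned across languages, cross-lingual semantic search works out of the box. The corpus and query below are illustrative only.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True)

# English corpus, French query: the aligned embeddings make this possible.
corpus = [
    "Three strange guys helped us then.",
    "These three black birds came in our garden.",
    "Some people are helpful.",
]
query = "Ces trois hommes mystérieux sont venus à notre aide."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")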

Training Details

Training Dataset

parallel-sentences

  • Dataset: parallel dataset
  • Size: 8,066,634 training samples
  • Columns: sent1 and sent2
  • Approximate statistics based on the first 1000 samples:

    |         | sent1                                             | sent2                                             |
    |---------|---------------------------------------------------|---------------------------------------------------|
    | type    | string                                            | string                                            |
    | details | min: 6 tokens, mean: 17.86 tokens, max: 46 tokens | min: 6 tokens, mean: 18.87 tokens, max: 52 tokens |
  • Samples:

    | sent1 | sent2 |
    |-------|-------|
    | The faces may change, but the essential views that have characterised Israel’s government for decades will remain the same after 9 April | Les visages peuvent changer, mais les opinions fondamentales qui caractΓ©risent le gouvernement israΓ©lien depuis des dΓ©cennies resteront les mΓͺmes aprΓ¨s le 9 avril |
    | - Yeah. My husband never talked about business. | M'n man had het nooit over z'n zaken. |
    | Or do they think that We hear not their secrets and their private counsels? | Oder meinen sie, daß Wir ihre Geheimnisse und heimlichen Beratungen nicht hâren? |
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Framework Versions

  • Python: 3.11.7
  • Sentence Transformers: 3.3.1
  • Transformers: 4.48.0.dev0
  • PyTorch: 2.2.0+cu121
  • Accelerate: 1.0.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

If you use or finetune this model, please consider citing this paper and the sentence-transformers library:

BibTeX

This model

@misc{remy-2025-fmmb-be,
    title={The Fairly Multilingual ModernBERT Embedding Model -- Belgian Edition},
    author={Francois Remy},
    year={2025},
    eprint={2501.99999},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}