🙏 If you are able to, please help me fund my open research. 🙏 Thank you for your generosity! 🤗

FMMB-BE-DE: The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition): Monolingual German version.

🇩🇪 This monolingual German version of the Fairly Multilingual ModernBERT Embedding Model (Belgian Edition) is the perfect model for embedding texts of up to 8192 tokens written in German at the speed of light. It uses the exact same weights as the original FMMB-BE model, and therefore produces identical embeddings, but this version ships with only a German-optimized tokenizer and its associated embedding table, thereby improving performance on German-only workloads.
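If you already use the full multilingual FMMB-BE model, you can sanity-check that this German-only variant returns the same vectors for German inputs. A minimal sketch, assuming the multilingual checkpoint lives at Parallia/Fairly-Multilingual-ModernBERT-Embed-BE (adjust the repository id if yours differs):

from sentence_transformers import SentenceTransformer
import numpy as np

# Assumed repository ids; adjust if your copies live elsewhere.
full_model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE")
german_model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-DE")

text = ["Die Sitzung beginnt um neun Uhr."]
emb_full = full_model.encode(text)
emb_de = german_model.encode(text)

# Both variants share the same transformer weights, so the vectors
# should match up to numerical noise.
print(np.allclose(emb_full, emb_de, atol=1e-5))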

🆘 This sentence-transformers model was trained on a small parallel corpus containing English-French, English-Dutch, and English-German sentence pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Input texts can be used as-is; no prefixes are required.
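As a quick illustration of one of these use cases, here is a hedged sketch of paraphrase mining over a tiny, made-up German corpus using the paraphrase_mining helper from sentence-transformers:

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import paraphrase_mining

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-DE")

corpus = [
    "Der Zug hat heute zwanzig Minuten Verspätung.",
    "Heute kommt der Zug zwanzig Minuten zu spät.",
    "Morgen wird es in Brüssel regnen.",
]

# Returns a list of [score, i, j] triples, sorted by decreasing score.
pairs = paraphrase_mining(model, corpus)
for score, i, j in pairs:
    print(f"{score:.3f}  {corpus[i]}  <->  {corpus[j]}")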

🪄 Thanks to the magic of Trans-Tokenization, monolingual English models such as ModernBERT-Embed from Nomic AI can be turned into embedding models for another language, with almost no GPU compute involved! 🤯

⚖️ Each of the 5 FMMB-BE models is actually a copy of the exact same model, paired with a different tokenizer and embedding table. Indeed, since all trans-tokenized models operate on embeddings in the same latent space, aligning them cross-lingually is a breeze: after creating a "super" model which can speak all 4 tokenizers, this model can be finetuned to produce similar embeddings for sentences which are translations of each other.
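Conceptually, that cross-lingual alignment is an in-batch contrastive objective over translation pairs: a sentence and its translation should be closer to each other than to every other sentence in the batch. A minimal sketch of the idea (not the exact training code) in PyTorch:

import torch
import torch.nn.functional as F

def alignment_loss(emb_src, emb_tgt, scale=20.0):
    """In-batch contrastive loss over translation pairs.

    emb_src / emb_tgt: (batch, dim) embeddings of sentences and their
    translations; row i of emb_tgt is the translation of row i of emb_src.
    """
    emb_src = F.normalize(emb_src, dim=-1)
    emb_tgt = F.normalize(emb_tgt, dim=-1)
    # Cosine similarity of every source sentence with every target sentence.
    scores = emb_src @ emb_tgt.T * scale
    # The matching translation sits on the diagonal.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)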

⚡ ModernBERT, developed last month by Answer.AI and LightOn, is about 3x to 6x faster at inference time than regular BERT/RoBERTa models, while delivering superior results. This makes it a wonderful choice for many use cases.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: ModernBERT-Embed-Base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • parallel-sentences
  • Languages: de
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
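The Pooling module above performs plain mean pooling: token embeddings are averaged over non-padding positions to obtain the 768-dimensional sentence embedding. Roughly, as a sketch:

import torch

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings over non-padding tokens.

    token_embeddings: (batch, seq_len, 768) output of the transformer
    attention_mask:   (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts  # (batch, 768) sentence embeddings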

Usage

IMPORTANT: While waiting for the next stable release of the transformers library, please install the latest version from git to use ModernBERT models:

pip install --upgrade git+https://github.com/huggingface/transformers.git

The easiest way to use this model is to install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-DE")
# Run inference
sentences = [
    'Diese drei geheimnisvollen Männer kamen uns dann zu Hilfe.',
    'Drei ziemlich seltsame Typen halfen uns danach.',
    'Diese drei schwarzen Vögel sahen dann in unseren Garten.',
    'Einige Leute sind hilfsbereit.',
    'Un, zwei, drei... Wer kann die nächsten Zahlen erraten?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [5, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [5, 5]
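Beyond pairwise similarity scores, the same embeddings can power a small semantic search over a German corpus, for instance with the semantic_search utility (a sketch with made-up documents):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-DE")

corpus = [
    "Das Museum ist montags geschlossen.",
    "Die Bibliothek öffnet um acht Uhr morgens.",
    "Unser Restaurant serviert ausschließlich vegetarische Gerichte.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("Wann macht die Bibliothek auf?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")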

Training Details

Training Dataset

parallel-sentences

  • Dataset: parallel dataset
  • Size: 8,066,634 training samples
  • Columns: sent1 and sent2
  • Approximate statistics based on the first 1000 samples:
    • sent1: string, min: 6 tokens, mean: 17.86 tokens, max: 46 tokens
    • sent2: string, min: 6 tokens, mean: 18.87 tokens, max: 52 tokens
  • Samples (sent1 / sent2):
    • "The faces may change, but the essential views that have characterised Israel’s government for decades will remain the same after 9 April" / "Les visages peuvent changer, mais les opinions fondamentales qui caractérisent le gouvernement israélien depuis des décennies resteront les mêmes après le 9 avril"
    • "- Yeah. My husband never talked about business." / "M'n man had het nooit over z'n zaken."
    • "Or do they think that We hear not their secrets and their private counsels?" / "Oder meinen sie, daß Wir ihre Geheimnisse und heimlichen Beratungen nicht hören?"
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
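In sentence-transformers, this loss configuration corresponds roughly to the snippet below (a sketch; the starting checkpoint shown here is a placeholder, not the exact one used):

from sentence_transformers import SentenceTransformer, losses, util

# Placeholder: any SentenceTransformer being finetuned on the parallel pairs.
model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# MultipleNegativesRankingLoss with the parameters listed above:
# every other in-batch sentence acts as a negative for a given pair.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)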
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
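Put together, the non-default hyperparameters above translate into roughly the following training setup; this is a hedged sketch with placeholder data and a placeholder starting checkpoint, not the exact script that was used:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Placeholder data: the real run used ~8M "sent1"/"sent2" translation pairs.
pairs = Dataset.from_dict({
    "sent1": ["These three mysterious men came to our aid."],
    "sent2": ["Diese drei geheimnisvollen Männer kamen uns dann zu Hilfe."],
})

# Placeholder starting point: the actual run started from the trans-tokenized model.
model = SentenceTransformer("nomic-ai/modernbert-embed-base")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="fmmb-be-de",
    num_train_epochs=1,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=pairs,
    eval_dataset=pairs,  # placeholder; use a held-out split in practice
    loss=loss,
)
trainer.train()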

Framework Versions

  • Python: 3.11.7
  • Sentence Transformers: 3.3.1
  • Transformers: 4.48.0.dev0
  • PyTorch: 2.2.0+cu121
  • Accelerate: 1.0.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

If you use or finetune this model, please consider citing this paper and the sentence-transformers library:

BibTeX

This model

@misc{remy-2025-fmmb-be,
    title={The Fairly Multilingual ModernBERT Embedding Model -- Belgian Edition},
    author={Francois Remy},
    year={2025},
    eprint={2501.99999},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}