---
license: cc-by-nc-sa-4.0
datasets:
  - wi_locness
  - matejklemen/falko_merlin
  - paws
  - paws-x
  - asset
language:
  - en
  - de
  - es
  - ar
  - ja
  - ko
  - zh
metrics:
  - bleu
  - rouge
  - sari
  - accuracy
library_name: transformers
widget:
  - text: >-
      Umschreiben sie den satz: When I grow up, I start to understand what he
      said is quite right.
    example_title: GEC (de|en)
  - text: >-
      문장의 간단한 버전 작성: Cuando se pueden mantener tasas de flujo comparables, los
      resultados son altos.
    example_title: Simplification (ko|es)
  - text: 'Paraphrase this: いちごは物語を紹介し、読者をイベントに導くと彼は言った。'
    example_title: Paraphrase (en|ja)
pipeline_tag: text2text-generation
base_model: grammarly/medit-xl
tags:
  - llama-cpp
  - gguf-my-lora
---

# aynig/medit-xl-F16-GGUF

This LoRA adapter was converted to GGUF format from [grammarly/medit-xl](https://huggingface.co/grammarly/medit-xl) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space. Refer to the original adapter repository for more details.
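If you prefer to reproduce the conversion locally, llama.cpp ships a `convert_lora_to_gguf.py` script that produces the same kind of GGUF adapter. A minimal sketch (paths are placeholders, and flags may differ slightly between llama.cpp versions, so check `--help` first):

```bash
# convert a PEFT LoRA adapter to GGUF with llama.cpp
# (path/to/medit-xl-adapter and path/to/base-model are placeholders)
python convert_lora_to_gguf.py path/to/medit-xl-adapter \
  --base path/to/base-model \
  --outtype f16 \
  --outfile medit-xl-f16.gguf
```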

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora medit-xl-f16.gguf (...other args)
```

```bash
# with server
llama-server -m base_model.gguf --lora medit-xl-f16.gguf (...other args)
```
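For a concrete run, you can pass one of the widget prompts above directly on the command line. A minimal sketch, where `base_model.gguf` stands in for a GGUF conversion of the adapter's base model:

```bash
# run a single GEC prompt; -n caps the number of generated tokens
llama-cli -m base_model.gguf --lora medit-xl-f16.gguf \
  -p "Umschreiben sie den satz: When I grow up, I start to understand what he said is quite right." \
  -n 64

# to weaken or strengthen the adapter's effect, use --lora-scaled instead of --lora
llama-cli -m base_model.gguf --lora-scaled medit-xl-f16.gguf 0.5 -p "..." -n 64
```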

To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
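As a quick smoke test once the server is running, you can query its `/completion` endpoint (assuming the default `127.0.0.1:8080` bind):

```bash
# start the server with the adapter loaded (defaults to http://127.0.0.1:8080)
llama-server -m base_model.gguf --lora medit-xl-f16.gguf

# in another shell, send one of the widget prompts
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Paraphrase this: いちごは物語を紹介し、読者をイベントに導くと彼は言った。", "n_predict": 64}'
```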