---
language:
  - en
metrics:
  - accuracy: 96%
  - loss: 0.266
base_model:
  - microsoft/deberta-large
data:
  - >-
    The training and validation data are stored in this directory as
    training_set and validation_set, respectively (loading sketches follow
    below).
notes:
  - Howdy! This is Ian Prazak's fine-tuned DeBERTa Large model.
  - >-
    The model was trained for 3 epochs on a dataset of approximately 30,000
    examples in JSONL format.
  - The base DeBERTa Large tokenizer was used.
  - >-
    The metrics listed above represent the final validation loss and accuracy
    before any signs of overfitting were observed.
  - Thank you for checking it out! :)
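---

Below is a minimal usage sketch for loading this fine-tuned checkpoint for inference. The task head (`AutoModelForSequenceClassification`), the placeholder repository path, and the example input are assumptions not confirmed by the card; adjust them to match the actual task and repo id.

```python
# Minimal inference sketch (assumptions: sequence-classification head,
# checkpoint stored in this repository; swap in the real repo id or local path).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_PATH = "./"  # placeholder: replace with the actual repo id or checkpoint directory

# The card states the base DeBERTa Large tokenizer was used.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-large")
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
model.eval()

inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```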
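The card says the training and validation data live in this directory as training_set and validation_set in JSONL. The sketch below loads them with the `datasets` library, assuming the files carry a `.jsonl` extension and that each record has `text` and `label` fields; both the file names and the field names are assumptions.

```python
# Sketch of loading the JSONL splits described in the card.
# Assumptions: the files are named training_set.jsonl / validation_set.jsonl
# and each line holds a JSON object with "text" and "label" fields.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset(
    "json",
    data_files={"train": "training_set.jsonl", "validation": "validation_set.jsonl"},
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-large")

def tokenize(batch):
    # "text" is an assumed field name; change it to match the actual schema.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
print(tokenized)
```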