# bert-large-uncased-whole-word-masking-squad2
This is a bert-large model, fine-tuned on the SQuAD 2.0 dataset for the task of extractive question answering.
## Overview
**Language model:** bert-large
**Language:** English
**Downstream task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "Shobhank-iiitdwd/bert-large-uncased-squad2-QA"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
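For more control than the pipeline offers, the model and tokenizer loaded above can be used directly. The sketch below is a minimal example assuming PyTorch is installed; the answer span is decoded with a simple argmax over the start and end logits, without the score-based filtering the pipeline performs.

```python
import torch

# Tokenize the question/context pair and run a forward pass.
inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# Greedy decoding: take the most likely start and end token positions.
start_idx = outputs.start_logits.argmax(dim=-1).item()
end_idx = outputs.end_logits.argmax(dim=-1).item()
answer_ids = inputs['input_ids'][0][start_idx : end_idx + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```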
## Evaluation results
Self-reported on the SQuAD 2.0 validation set:
- Exact Match: 80.885
- F1: 83.876
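Scores in this format can be computed with the `squad_v2` metric from the Hugging Face `evaluate` library. Below is a minimal sketch of the metric call; the example ID and answer are illustrative placeholders, not drawn from the actual evaluation run, which covers the full validation set.

```python
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Illustrative single example; "example-id" is a hypothetical placeholder.
predictions = [{
    "id": "example-id",
    "prediction_text": "model conversion",
    "no_answer_probability": 0.0,  # SQuAD 2.0 includes unanswerable questions
}]
references = [{
    "id": "example-id",
    "answers": {"text": ["model conversion"], "answer_start": [22]},
}]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])  # scores on a 0-100 scale
```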