---
license: llama3.1
datasets:
- Universal-NER/Pile-NER-type
language:
- en
pipeline_tag: text-generation
tags:
- zero-shot NER
- NER
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# SLIMER-3-PARALLEL: Show Less Instruct More Entity Recognition LLaMA3
This LLaMA-3-based SLIMER implementation scores +17% over the paper's original LLaMA-2 SLIMER, while allowing up to 16 NE types to be extracted in parallel per prompt.
GitHub repository: https://github.com/andrewzamai/SLIMER/tree/LLaMA3
SLIMER is an LLM specifically instructed for zero-shot NER on the English language.
SLIMER for Italian language can be found at: https://huggingface.co/expertai/LLaMAntino-3-SLIMER-IT
Instructed on a reduced number of samples, it is designed to tackle never-seen-before Named Entity tags by leveraging a prompt enriched with a DEFINITION and GUIDELINES for the NE to be extracted.
## Instruction Tuning Prompt
```text
<|start_header_id|>user<|end_header_id|>

You are given a text chunk (delimited by triple quotes) and an instruction.
Read the text and answer to the instruction in the end.
"""
{input text}
"""
Instruction: Extract the entities of type [NEs_list] from the text chunk you have read. Be aware that not all of these entities are necessarily present. Do not extract entities that do not exist in the text, return an empty list for that tag. Ensure each entity is assigned to only one appropriate class.
To help you, here are dedicated Definition and Guidelines for each entity tag.
{
    "{NE_tag}": {
        "Definition": "",
        "Guidelines": ""
    }
}
Return only a JSON object. The JSON should strictly follow this format: {expected_json_format}. DO NOT output anything else, just the JSON itself.
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
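As an illustration, the parallel prompt's placeholders can be filled programmatically. A minimal sketch, where the tag names, definitions, and guidelines are made up for the example (they are not from the SLIMER repository):

```python
import json

# Hypothetical example: per-tag Definition and Guidelines for two NE tags.
ne_definitions = {
    "PERSON": {
        "Definition": "PERSON denotes the name of an individual human being.",
        "Guidelines": "Extract full names; do not include titles such as 'Dr.'.",
    },
    "LOCATION": {
        "Definition": "LOCATION denotes a geographic place.",
        "Guidelines": "Include cities and countries; avoid extracting adjectives like 'French'.",
    },
}

# [NEs_list] in the prompt becomes the list of tags to extract in parallel,
# and {expected_json_format} asks for one JSON list of spans per tag.
nes_list = list(ne_definitions.keys())
expected_json_format = json.dumps({tag: [] for tag in nes_list})

print(nes_list)
print(expected_json_format)
```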
Currently existing approaches fine-tune on an extensive number of entity classes (around 13K) and assess zero-shot NER capabilities on Out-Of-Distribution input domains.
SLIMER performs comparably to these state-of-the-art models on OOD input domains, while being trained on only a reduced number of samples and a set of NE tags that overlap to a lesser degree with the test sets.
We extend the standard zero-shot evaluations (CrossNER and MIT) with BUSTER, which is characterized by financial entities that are rather far from the more traditional tags observed by all models during training.
An inverse trend can be observed, with SLIMER emerging as the most effective in dealing with these unseen labels, thanks to its lighter instruction tuning methodology and the use of definition and guidelines.
| Model | Backbone | #Params | MIT Movie | MIT Restaurant | CrossNER AI | CrossNER Literature | CrossNER Music | CrossNER Politics | CrossNER Science | BUSTER | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ChatGPT | gpt-3.5-turbo | - | 5.3 | 32.8 | 52.4 | 39.8 | 66.6 | 68.5 | 67.0 | - | - |
| InstructUIE | Flan-T5-xxl | 11B | 63.0 | 21.0 | 49.0 | 47.2 | 53.2 | 48.2 | 49.3 | - | - |
| UniNER-type | LLaMA-1 | 7B | 42.4 | 31.7 | 53.5 | 59.4 | 65.0 | 60.8 | 61.1 | 34.8 | 51.1 |
| GoLLIE | Code-LLaMA | 7B | 63.0 | 43.4 | 59.1 | 62.7 | 67.8 | 57.2 | 55.5 | 27.7 | 54.6 |
| GLiNER-L | DeBERTa-v3 | 0.3B | 57.2 | 42.9 | 57.2 | 64.4 | 69.6 | 72.6 | 62.6 | 26.6 | 56.6 |
| GNER-T5 | Flan-T5-xxl | 11B | 62.5 | 51.0 | 68.2 | 68.7 | 81.2 | 75.1 | 76.7 | 27.9 | 63.9 |
| GNER-LLaMA | LLaMA-1 | 7B | 68.6 | 47.5 | 63.1 | 68.2 | 75.7 | 69.4 | 69.9 | 23.6 | 60.8 |
| SLIMER | LLaMA-3.1-Instruct | 8B | 58.4 | 45.3 | 58.0 | 65.0 | 77.0 | 71.2 | 67.3 | 39.32 | 60.2 |
## JSON Template

JSON SLIMER prompt:

```json
{
    "description": "SLIMER prompt",
    "prompt_input": "<|start_header_id|>system<|end_header_id|>\n\nYou are an expert in Named Entity Recognition designed to output JSON only.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nYou are given a text chunk (delimited by triple quotes) and an instruction.\nRead the text and answer to the instruction in the end.\n\"\"\"\n{input}\n\"\"\"\nInstruction: Extract the Named Entities of type {NE_name} from the text chunk you have read. You are given a DEFINITION and some GUIDELINES.\nDEFINITION: {definition}\nGUIDELINES: {guidelines}\nReturn a JSON list of instances of this Named Entity type (for example [\"text_span_1\", \"text_span_2\"]. Return an empty list [] if no instances are present. Return only the JSON list, no further motivations or introduction to the answer.<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n"
}
```
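The template's placeholders (`{input}`, `{NE_name}`, `{definition}`, `{guidelines}`) can be filled with Python's `str.format`. A minimal sketch using an abbreviated version of the template; in practice the full `prompt_input` string would be loaded from the JSON file, and the definition and guidelines text below is illustrative only:

```python
# Abbreviated stand-in for the full "prompt_input" template shown above.
prompt_template = (
    'Instruction: Extract the Named Entities of type {NE_name} '
    'from the text chunk you have read.\n'
    'DEFINITION: {definition}\n'
    'GUIDELINES: {guidelines}\n'
    '"""\n{input}\n"""'
)

# Illustrative values, not from the official repository.
prompt = prompt_template.format(
    NE_name="PERSON",
    definition="PERSON denotes the name of an individual human being.",
    guidelines="Avoid extracting pronouns or titles.",
    input="Marie Curie was born in Warsaw.",
)
print(prompt)
```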
```python
from vllm import LLM, SamplingParams

vllm_model = LLM(model="expertai/SLIMER-LLaMA3")
# greedy decoding; 128 tokens suffice for a JSON list of extracted spans
sampling_params = SamplingParams(temperature=0, max_tokens=128)

# `prompter` wraps the SLIMER prompt template; `instruction_input_pairs`
# holds (instruction, input text) tuples to format into prompts
prompts = [prompter.generate_prompt(instruction, input) for instruction, input in instruction_input_pairs]
responses = vllm_model.generate(prompts, sampling_params)
```
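Since the model is instructed to return only a JSON list of text spans, each completion can be parsed with the standard `json` module. A minimal sketch with a conservative fallback; this helper is illustrative, not part of the official repository:

```python
import json

def parse_slimer_output(text: str) -> list:
    """Parse a SLIMER completion, expected to be a JSON list of text spans.

    Returns [] when the output is not a valid JSON list (conservative fallback).
    """
    try:
        spans = json.loads(text.strip())
    except json.JSONDecodeError:
        return []
    return spans if isinstance(spans, list) else []

print(parse_slimer_output('["Acme Corp", "Rome"]'))  # -> ['Acme Corp', 'Rome']
print(parse_slimer_output("no entities found"))      # -> []
```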
## Citation

If you find SLIMER useful in your research or work, please cite the following paper:

```bibtex
@misc{zamai2024lessinstructmoreenriching,
    title={Show Less, Instruct More: Enriching Prompts with Definitions and Guidelines for Zero-Shot NER},
    author={Andrew Zamai and Andrea Zugarini and Leonardo Rigutini and Marco Ernandes and Marco Maggini},
    year={2024},
    eprint={2407.01272},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2407.01272},
}
```