---
license: apache-2.0
---

Model Card for slim-ner-tool

slim-ner-tool is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.

slim-ner-tool is a 4_K_M quantized GGUF version of slim-ner, providing a small, fast inference implementation.

Load the model in your favorite GGUF inference engine (see config.json in this repository for details on setting up the prompt template), or try it with llmware as follows:

from llmware.models import ModelCatalog

# sample passage for a quick test (illustrative)
text_sample = "Tim Cook, the CEO of Apple, spoke at the conference in San Francisco on Tuesday."

# to load the model and make a basic function-calling inference
ner_tool = ModelCatalog().load_model("slim-ner-tool")
response = ner_tool.function_call(text_sample)

# this one line will download the model and run a series of tests
ModelCatalog().test_run("slim-ner-tool", verbose=True)

SLIM models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:

from llmware.agents import LLMfx

# sample passage (illustrative)
text = "Sarah Chen joined Microsoft in Seattle in 2021."

# create an agent, load the ner tool, and run a named entity extraction
llm_fx = LLMfx()
llm_fx.load_tool("ner")
response = llm_fx.named_entity_extraction(text)

Model Description

  • Developed by: llmware
  • Model type: GGUF
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Quantized from model: llmware/slim-ner (fine-tuned TinyLlama)

Uses

SLIM models provide a fast, flexible, intuitive way to integrate classifiers and structured function calls into RAG and LLM application workflows.
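
As a minimal sketch of this kind of integration (the passages below stand in for chunks returned by a retrieval step, and the "llm_response" key for the structured output is an assumption about the response dictionary), the tool can be run over a batch of retrieved text and the extracted entities aggregated:

from llmware.models import ModelCatalog

# illustrative passages - in a real RAG pipeline these would come from a retrieval step
retrieved_passages = [
    "Apple reported its quarterly results from Cupertino on Thursday.",
    "Tim Cook met with investors in New York last week."
]

ner_tool = ModelCatalog().load_model("slim-ner-tool")

all_entities = {}
for passage in retrieved_passages:
    response = ner_tool.function_call(passage)
    # assumption: the structured output is a dict of entity lists under "llm_response"
    for entity_type, values in response.get("llm_response", {}).items():
        all_entities.setdefault(entity_type, []).extend(values)

print(all_entities)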

Model instructions, details and test samples have been packaged into the config.json file in the repository, along with the GGUF file.
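
To set up the prompt template in another GGUF engine, one way to inspect that packaged configuration is to pull config.json from the repository and look at its contents. A minimal sketch follows, assuming the repo id llmware/slim-ner-tool and using huggingface_hub (not required for the llmware path above):

import json
from huggingface_hub import hf_hub_download

# download config.json from the model repository (repo id assumed)
config_path = hf_hub_download(repo_id="llmware/slim-ner-tool", filename="config.json")

with open(config_path, "r") as f:
    config = json.load(f)

# inspect the available keys (prompt template, instructions, test samples, etc.)
print(list(config.keys()))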

Model Card Contact

Darren Oberst & llmware team