doberst committed (verified)
Commit 05447f6 · 1 parent: 938c112

Update README.md

Files changed (1): README.md (+7 -12)
README.md CHANGED
@@ -6,20 +6,18 @@ license: apache-2.0
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**slim-ner-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
-
-slim-ner-tool is a 4_K_M quantized GGUF version of slim-ner, providing a small, fast inference implementation.
+**bling-qa-tool** is a 4_K_M quantized GGUF version of bling-tiny-llama-1b-v0, providing a small, fast inference implementation.
 
 Load in your favorite GGUF inference engine (see details in config.json to set up the prompt template), or try with llmware as follows:
 
     from llmware.models import ModelCatalog
 
     # to load the model and make a basic inference
-    ner_tool = ModelCatalog().load_model("slim-ner-tool")
-    response = ner_tool.function_call(text_sample)
+    qa_tool = ModelCatalog().load_model("bling-qa-tool")
+    response = qa_tool.function_call(text_sample)
 
     # this one line will download the model and run a series of tests
-    ModelCatalog().test_run("slim-ner-tool", verbose=True)
+    ModelCatalog().test_run("bling-qa-tool", verbose=True)
 
 
 Slim models can also be loaded even more simply as part of a multi-model, multi-step LLMfx calls:
@@ -27,8 +25,8 @@ Slim models can also be loaded even more simply as part of a multi-model, multi-
     from llmware.agents import LLMfx
 
     llm_fx = LLMfx()
-    llm_fx.load_tool("ner")
-    response = llm_fx.named_entity_extraction(text)
+    llm_fx.load_tool("quick_question")
+    response = llm_fx.quick_question(text)
 
 
 ### Model Description
@@ -39,18 +37,15 @@ Slim models can also be loaded even more simply as part of a multi-model, multi-
 - **Model type:** GGUF
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
-- **Quantized from model:** llmware/slim-sentiment (finetuned tiny llama)
+- **Quantized from model:** llmware/bling-tiny-llama-1b-v0
 
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
-SLIM models provide a fast, flexible, intuitive way to integrate classifiers and structured function calls into RAG and LLM application workflows.
-
 Model instructions, details and test samples have been packaged into the config.json file in the repository, along with the GGUF file.
 
 
-
 ## Model Card Contact
 
 Darren Oberst & llmware team
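For readers who want to try the updated snippet outside the diff, a minimal runnable sketch is below, assuming llmware is installed (pip install llmware). The sample passage is a hypothetical placeholder, and the exact shape of the returned response depends on the installed llmware version.

    from llmware.models import ModelCatalog

    # load the model - downloads the GGUF file on first use
    qa_tool = ModelCatalog().load_model("bling-qa-tool")

    # hypothetical sample passage - substitute your own source text
    text_sample = ("The distribution agreement was signed on May 3 and covers "
                   "the European market for a period of three years.")

    # run a basic inference over the passage
    response = qa_tool.function_call(text_sample)
    print(response)

    # download the model and run its packaged series of tests
    ModelCatalog().test_run("bling-qa-tool", verbose=True)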
 
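The LLMfx path added in this commit can be exercised the same way. A minimal sketch based on the lines in the diff; the input text is an illustrative placeholder:

    from llmware.agents import LLMfx

    # hypothetical placeholder passage - substitute your own text
    text = ("The invoice total was $4,500, due within 30 days of receipt, "
            "payable to Acme Supply Co.")

    # create an agent and load the quick_question tool
    llm_fx = LLMfx()
    llm_fx.load_tool("quick_question")

    # run the quick_question tool over the passage
    response = llm_fx.quick_question(text)
    print(response)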
 
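Since the prompt template and test samples ship inside config.json alongside the GGUF file, they can be inspected directly. A small sketch, assuming config.json has been downloaded from this repository; it lists the available fields rather than assuming any specific key names:

    import json

    # path is an assumption - point it at the config.json downloaded
    # from the bling-qa-tool repository
    with open("config.json", "r") as f:
        config = json.load(f)

    # print the top-level fields packaged with the model
    print(sorted(config.keys()))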