---
base_model: Spestly/Ava-1.5-12B
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
library_name: transformers
---
# Triangle104/Ava-1.5-12B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Spestly/Ava-1.5-12B`](https://huggingface.co/Spestly/Ava-1.5-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Ava-1.5-12B) for more details on the model.
---
## Model details

### Ava 1.5

Ava 1.5 is a cutting-edge conversational AI model, fine-tuned from Ava 1.0 to deliver exceptional conversational capabilities. Designed to be your go-to AI for engaging, accurate, and context-aware dialogues, Ava 1.5 incorporates updated knowledge and enhanced natural language understanding to provide an unparalleled user experience.
### Key Features

- Enhanced Conversational Skills: Ava 1.5 demonstrates fluid, human-like dialogue generation with improved contextual understanding.
- Updated Knowledge Base: Trained on the latest datasets, Ava 1.5 ensures responses are relevant and informed.
- Multi-Turn Conversation: Handles complex, multi-turn interactions seamlessly, maintaining coherence and focus.
- Personalized Assistance: Adapts responses based on user preferences and context.
- Multilingual Support: Understands and responds in multiple languages with high accuracy.
### Why Ava 1.5?

Ava 1.5 is built to excel in a wide range of applications:

- Customer Support: Provides intelligent, empathetic, and accurate responses to customer queries.
- Education: Acts as an interactive tutor, offering explanations and personalized guidance.
- Personal Assistance: Supports daily tasks, scheduling, and answering general queries with ease.
- Creative Collaboration: Assists with brainstorming, writing, and other creative processes.
### Usage

Using Ava 1.5 in your project is straightforward. Here's a quick setup guide.

#### Installation

Ensure you have the necessary libraries and dependencies installed:

```bash
pip install transformers
```
#### Implementation

Here's a sample Python script to interact with Ava 1.5:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Spestly/Ava-1.5-12B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Spestly/Ava-1.5-12B")
model = AutoModelForCausalLM.from_pretrained("Spestly/Ava-1.5-12B")
```
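As a quick sanity check, the pipeline can also generate a reply directly. The sketch below is not part of the original card: the prompt, `max_new_tokens`, dtype, and device settings are illustrative assumptions, it expects a recent transformers release that accepts chat-style messages in the pipeline, and a 12B model generally needs a GPU (or quantized weights) to run comfortably.

```python
# Minimal usage sketch (dtype, device_map, prompt, and max_new_tokens are illustrative assumptions).
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Spestly/Ava-1.5-12B",
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",           # requires the `accelerate` package
)

messages = [
    {"role": "user", "content": "Explain what a GGUF file is in two sentences."},
]
output = pipe(messages, max_new_tokens=128)

# With chat-style input, the pipeline returns the conversation with the
# assistant's reply appended as the last message.
print(output[0]["generated_text"][-1]["content"])
```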
### Training Highlights

Ava 1.5 was fine-tuned with the following enhancements:

- Extensive Conversational Dataset: Leveraging a wide array of open-domain and specialized conversational datasets.
- Knowledge Integration: Incorporating recent advancements and updates to provide cutting-edge insights.
- Fine-Tuning on Ava 1.0: Building on the Ava 1.0 model to further refine and expand its capabilities.
### Limitations

- Contextual Challenges: In rare cases, Ava 1.5 may misinterpret ambiguous inputs.
- Hardware Requirements: Optimal performance requires a robust system with GPU acceleration.
### Roadmap

- Ava 2.0: Introducing real-time learning capabilities and broader conversational adaptability.
- Lightweight Model: Developing a lightweight version optimized for edge devices.
- Domain-Specific Fine-Tunes: Specialized versions for industries like healthcare, education, and finance.
### Contributing

We welcome contributions to enhance Ava! Here's how you can get involved:

1. Fork this repository.
2. Create a feature branch.
3. Submit a pull request with detailed explanations of your changes.
### License

Ava 1.5 is released under the Apache 2.0 License. Please review the LICENSE file for more details.
### Contact

For inquiries, feedback, or support, feel free to reach out:

- Email: [email protected]
- GitHub: Spestly
- Website: Ava Project Page
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Ava-1.5-12B-Q5_K_M-GGUF --hf-file ava-1.5-12b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Ava-1.5-12B-Q5_K_M-GGUF --hf-file ava-1.5-12b-q5_k_m.gguf -c 2048
```
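Once `llama-server` is up, it exposes an OpenAI-compatible HTTP API. Below is a minimal sketch, not from the original card, that queries the chat endpoint with Python's `requests`; it assumes the server's default address (127.0.0.1:8080), and the prompt and `max_tokens` values are illustrative.

```python
# Minimal sketch: query a running llama-server over its OpenAI-compatible
# chat endpoint. Assumes the default address 127.0.0.1:8080; the prompt
# and max_tokens below are illustrative.
import requests

response = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "The meaning to life and the universe is"}
        ],
        "max_tokens": 128,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```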
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Ava-1.5-12B-Q5_K_M-GGUF --hf-file ava-1.5-12b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Ava-1.5-12B-Q5_K_M-GGUF --hf-file ava-1.5-12b-q5_k_m.gguf -c 2048
```