---
base_model: Spestly/Athena-2-1.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
library_name: transformers
---
![Header](https://raw.githubusercontent.com/Aayan-Mishra/Images/refs/heads/main/AwA.png)

# AwA - 1.5B

AwA (Answers with Athena) is my portfolio project: a Chain-of-Thought (CoT) reasoning model. I built AwA to provide detailed, step-by-step answers to complex questions across diverse domains, reflecting my focus on advancing AI's capability for comprehension, problem-solving, and knowledge synthesis.

## Key Features

- **Chain-of-Thought Reasoning:** AwA delivers step-by-step breakdowns of solutions, mimicking logical human thought processes.

- **Domain Versatility:** Performs exceptionally across a wide range of domains, including mathematics, science, literature, and more.

- **Adaptive Responses:** Adjusts answer depth and complexity based on input queries, catering to both novices and experts.

- **Interactive Design:** Designed for educational tools, research assistants, and decision-making systems.

## Intended Use Cases

- **Educational Applications:** Supports learning by breaking down complex problems into manageable steps.

- **Research Assistance:** Generates structured insights and explanations in academic or professional research.

- **Decision Support:** Enhances understanding in business, engineering, and scientific contexts.

- **General Inquiry:** Provides coherent, in-depth answers to everyday questions.

## Model Details

- **Type:** Chain-of-Thought (CoT) Reasoning Model

- **Base Architecture:** Qwen2 (via [Spestly/Athena-2-1.5B](https://huggingface.co/Spestly/Athena-2-1.5B))

- **Parameters:** 1.54B

- **Fine-tuning:** Specialized fine-tuning on Chain-of-Thought reasoning datasets to enhance step-by-step explanatory capabilities.



## Ethical Considerations

- **Bias Mitigation:** I have taken steps to minimise biases in the training data. However, users are encouraged to cross-verify outputs in sensitive contexts.

- **Limitations:** May not provide exhaustive answers for niche topics or domains outside its training scope.

- **User Responsibility:** Designed as an assistive tool, not a replacement for expert human judgment.


## Usage

### Option A: Local

Run AwA locally with the Transformers library:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Spestly/AwA-1.5B")

messages = [
    {"role": "user", "content": "Who are you?"},
]

# The pipeline applies the model's chat template and returns the
# conversation with the assistant's reply appended.
result = pipe(messages, max_new_tokens=256)
print(result[0]["generated_text"])
```
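
If you want finer control over generation (device placement, sampling settings, prompt slicing), here is a minimal sketch using `AutoModelForCausalLM` and the tokenizer's chat template. The generation parameters are illustrative defaults, not values tuned for AwA:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Spestly/AwA-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # use float32 on CPU-only machines
    device_map="auto",          # requires the `accelerate` package
)

messages = [
    {"role": "user", "content": "Explain step by step why the sky is blue."},
]

# Format the conversation with the model's chat template, then generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```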

### Option B: API & Space

You can use the AwA Hugging Face Space or the AwA API (coming soon!).


## Roadmap

- More AwA model sizes, e.g. 7B and 14B
- Create an AwA API via the `spestly` package