Update README.md

license: apache-2.0
short_description: Deployment of the skapadia3214/groq-moa repo
---
# Repo deployment

Deployment of the [skapadia3214/groq-moa](https://github.com/skapadia3214/groq-moa) repository.

# Mixture-of-Agents Demo Powered by Groq

This Streamlit application showcases the Mixture-of-Agents (MOA) architecture proposed by Together AI, powered by Groq LLMs. It allows users to interact with a configurable multi-agent system for enhanced AI-driven conversations.

![MOA Architecture](./static/moa_groq.svg)
*Source: Adaptation of [Together AI Blog - Mixture of Agents](https://www.together.ai/blog/together-moa)*
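To make the flow concrete, here is a minimal, hypothetical sketch of the MOA pattern using the `groq` Python client. It is not the implementation in `moa/moa.py`; the model names, prompts, and aggregation step are illustrative assumptions only.

```python
# Minimal MOA sketch (illustrative only; not the code in moa/moa.py).
# Assumes GROQ_API_KEY is set in the environment and the `groq` package is installed.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

LAYER_MODELS = ["llama3-8b-8192", "gemma-7b-it"]  # example layer-agent models
MAIN_MODEL = "llama3-70b-8192"                    # example main model

def ask(model: str, system: str, user: str) -> str:
    """One chat completion call against a Groq-hosted model."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def moa_answer(question: str, cycles: int = 1) -> str:
    helper = ""
    for _ in range(cycles):
        # Each layer agent answers independently, seeing the previous helper responses.
        layer_outputs = [
            ask(m, f"You are a helpful assistant.\nPrevious responses:\n{helper}", question)
            for m in LAYER_MODELS
        ]
        helper = "\n\n".join(layer_outputs)
    # The main model aggregates the layer outputs into the final response.
    return ask(MAIN_MODEL, f"Synthesize the best answer from these drafts:\n{helper}", question)

if __name__ == "__main__":
    print(moa_answer("Explain mixture-of-agents in one paragraph."))
```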
## Features

- Interactive chat interface powered by MOA
- Configurable main model and layer agents
- Real-time streaming of responses
- Visualization of intermediate layer outputs
- Customizable agent parameters through the UI

## Installation

1. Clone the repository:

   ```
   git clone https://github.com/skapadia3214/groq-moa.git
   cd groq-moa
   ```

2. Install the required dependencies:

   ```
   pip install -r requirements.txt
   ```

3. Set up your environment variables:

   Create a `.env` file in the root directory and add your Groq API key:

   ```
   GROQ_API_KEY=your_api_key_here
   ```
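How the key is loaded at runtime is up to the application; one common pattern (an assumption here, not necessarily what `app.py` does) is to read the `.env` file with `python-dotenv` before constructing the Groq client:

```python
# Hypothetical loading step (assumes python-dotenv is installed and listed in requirements.txt).
import os
from dotenv import load_dotenv

load_dotenv()                          # reads .env from the current working directory
api_key = os.environ["GROQ_API_KEY"]   # raises KeyError if the key is missing
```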
## Usage

1. Run the Streamlit app:

   ```
   streamlit run app.py
   ```

2. Open your web browser and navigate to the URL provided by Streamlit (usually `http://localhost:8501`).

3. Use the sidebar to configure the MOA settings:
   - Select the main model
   - Set the number of cycles
   - Customize the layer agent configuration

4. Start chatting with the MOA system using the input box at the bottom of the page.

## Project Structure

- `app.py`: Main Streamlit application file
- `moa/`: Package containing the MOA implementation
  - `__init__.py`: Package initializer
  - `moa.py`: Core MOA agent implementation
  - `prompts.py`: System prompts for the agents
- `main.py`: CLI version of the MOA chat interface
- `requirements.txt`: List of Python dependencies
- `static/`: Directory for static assets (images, etc.)

## Configuration
The MOA system can be configured through the Streamlit UI or by modifying the default configuration in `app.py`. The main configurable parameters are:

- Main model: The primary language model used for generating the final responses
- Number of cycles: How many times the layer agents are invoked before the main agent generates the final response
- Layer agent configuration: A JSON object defining the system prompts, model names, and other parameters for each layer agent (an illustrative example follows this list)
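For illustration, a layer agent configuration might look like the sketch below. The field names, model names, and values are assumptions made for the example; see the default configuration in `app.py` for the exact schema.

```python
# Illustrative layer-agent configuration (field names and models are assumptions,
# not the exact schema used by app.py).
layer_agent_config = {
    "layer_agent_1": {
        "system_prompt": "Think through the problem step by step before answering.",
        "model_name": "llama3-8b-8192",
        "temperature": 0.3,
    },
    "layer_agent_2": {
        "system_prompt": "Answer concisely and accurately.",
        "model_name": "gemma-7b-it",
        "temperature": 0.7,
    },
}
```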
## Contributing

Contributions to this project are welcome! Please follow these steps to contribute:

1. Fork the repository
2. Create a new branch for your feature or bug fix
3. Make your changes and commit them with descriptive commit messages
4. Push your changes to your fork
5. Submit a pull request to the main repository

Please ensure that your code adheres to the project's coding standards and includes appropriate tests and documentation.

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

## Acknowledgements

- [Groq](https://groq.com/) for providing the underlying language models
- [Together AI](https://www.together.ai/) for proposing the Mixture of Agents architecture and providing the conceptual image
- [Streamlit](https://streamlit.io/) for the web application framework

## Citation

This project implements the Mixture-of-Agents architecture proposed in the following paper:

```
@article{wang2024mixture,
  title={Mixture-of-Agents Enhances Large Language Model Capabilities},
  author={Wang, Junlin and Wang, Jue and Athiwaratkun, Ben and Zhang, Ce and Zou, James},
  journal={arXiv preprint arXiv:2406.04692},
  year={2024}
}
```

For more information about the Mixture-of-Agents concept, please refer to the [original research paper](https://arxiv.org/abs/2406.04692) and the [Together AI blog post](https://www.together.ai/blog/together-moa).

## Contact

For questions or support, please open an issue on the GitHub repository or contact [email protected] directly.