---
datasets:
- danielpark/gorani-100k-llama2-13b-instruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
### I am seeking a company that can provide support for a minimum of four A100 GPUs.
I am interested in companies located outside of South Korea that can offer resources I can handle directly and exclusively for a specific period, via SSH tunneling or the cloud.
If your company meets these criteria, please contact me by Gmail ([email protected]). Your company will have partial access to records of my dataset and training code; however, I retain full decision-making authority over the project, and we can discuss further agreements in the future. Thank you for your consideration. (Companies with offices in South Korea, or those targeting AI-related businesses in South Korea, are also excluded.)
# The project is currently in progress. Please refrain from using the weights and datasets.
## Status: 19.7k-checkpoint weights released; awaiting results on the LLM leaderboard.
| Update Schedule | Task Description | Status |
|-----------------|----------------------------|--------|
| 23-10-05 | Completed training - 19.7k 13b weight | Done |
| 23-10-06 | Submitted hf model weights (REV 01) | Done |
| 23-10-20        | Q.C                        | In progress |
| 23-10-13 | Completed training - 50k 13b weight | |
| 23-10-14 | Submitted hf model weights | |
| 23-10-18 | Completed training - 100k 13b weight | |
| 23-10-20 | Q.A | |
| 23-10-21 | Official weight release | |
# GORANI 100k
- Model: [danielpark/gorani-100k-llama2-13b-instruct](https://huggingface.co/danielpark/gorani-100k-llama2-13b-instruct)
- Dataset: [danielpark/gorani-100k](https://huggingface.co/danielpark/gorani-100k)
## Template
I fine-tune llama2-13b with LFM, without a default system message; if a dataset specifies a system message, I use that content instead.
```
### System:
{System}
### User:
{New_User_Input}
### Input:
{New_User_Input}
### Response:
{New_Assistant_Answer}
```
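The template above can be assembled programmatically. Below is a minimal sketch; the `build_prompt` helper, its parameter names, and the exact newline joining are assumptions for illustration, not part of the released training code:

```python
def build_prompt(user_input: str, system: str = "", context: str = "") -> str:
    """Assemble a prompt following the card's template.

    The system section is omitted by default, matching the note above
    that no default system message is used; it is included only when a
    dataset supplies one. `context` fills the optional "### Input:" slot.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    parts.append(f"### User:\n{user_input}")
    if context:
        parts.append(f"### Input:\n{context}")
    # The response section is left open for the model to complete.
    parts.append("### Response:\n")
    return "\n".join(parts)
```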
## Caution
The model weights and dataset have not yet been properly curated, and their use is strictly prohibited under any license. The developers assume no responsibility for such use, implicit or explicit.
## Updates
| Revision | Commit Hash | Updated | Train Process | Status |
| ---------------|------------------------------------------------------------|------------|------------------|---------------|
| Revision 01 | [6d30494fa8da84128499d55075eef57094336d03](https://huggingface.co/danielpark/gorani-100k-llama2-13b-instruct/commit/6d30494fa8da84128499d55075eef57094336d03) | 23.10.04 | 19,740/100,000 | On Training |