crestf411 committed on
Commit 82c7576 · verified · 1 Parent(s): c3fbab0

Update README.md

Files changed (1)
  1. README.md +26 -12
README.md CHANGED
@@ -3,22 +3,36 @@ license: llama3
  license_name: llama3
  license_link: LICENSE
  library_name: transformers
  ---
- # Llama-3-70B-Instruct-abliterated Model Card
-
- This is meta-llama/Llama-3-70B-Instruct with orthogonalized bfloat16 safetensor weights, generated with the methodology described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more.
-
- TL;DR: this model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model, just with the strongest refusal direction orthogonalized out.
-
- ## Quants
- [GGUF Quants available here](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated-GGUF)
-
- ## For the people who like tinkering or looking to save bandwidth
- In the repo, I've included `refusal_dir.pth`.
- If you have the Llama-3-70B-Instruct model downloaded already, you can use the ortho cookbook to apply it to your downloaded model, which will make it the same as what you'd download from here.
-
- ## Quirkiness awareness notice
-
- This model may come with interesting quirks, as I obviously haven't extensively tested it and the methodology is so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that will help us further understand what side effects this orthogonalization has. The code I used to generate it (and my published 'Kappa-3' model, which is just Phi-3 with the same methodology applied) is available in a Python notebook in this repo: specifically, [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).
-
- If you manage to develop further improvements, please share! This is really the most primitive way to use ablation, but there are other possibilities that I believe are as yet unexplored.
 
  license_name: llama3
  license_link: LICENSE
  library_name: transformers
+ tags:
+ - not-for-all-audiences
  ---

+ Daybreak (2024 May 24) v0.4 LoRA on top of https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated
+
+ Beware, depraved. Not suitable for any audience.
+
+ Dataset curation to remove slop-perceived expressions continues. Unfortunately, L3-Instruct (which this is merged on top of) is riddled with "barely audible"s, "couldn't help"s, "shivers down spines", etc.
+
+ The regexes below return 0 matches against the dataset but, as noted above, these expressions still occur frequently in the base instruct merge. **Bold** entries are new since v0.3.
+
+ * 'barely above a whisper',
+ * **'barely audible',**
+ * 'shiver([s]?) down',
+ * ' ministration',
+ * 'audible (["\'"]?)p[l]?op',
+ * 'can\'t help but',
+ * 'buck([s]?) my ',
+ * 'buck([s]?) h[ei][rs] ',
+ * '[Dd]espite h[ie][mr]self',
+ * 'slick slit',
+ * 'whatever it takes',
+ * 'unlike anything (s?)he',
+ * **'a mix([a-z]*) of',**
+ * 'wave after wave',
+ * 'reckless abandon',
+ * '[Mm]aybe, just maybe',
+ * **'eyes gleaming',**
+ * **'mischievously',**
+ * **"couldn't help but",**
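A quick way to run the same check against your own text is a short script. The sketch below (hypothetical helper names, not part of this repo) bundles the patterns from the list above as Python regexes and counts matches per pattern; a curated dataset should score all zeros:

```python
import re

# Slop-phrase patterns transcribed from the v0.4 filter list above.
SLOP_PATTERNS = [
    r"barely above a whisper",
    r"barely audible",
    r"shiver([s]?) down",
    r" ministration",
    r"audible ([\"']?)p[l]?op",
    r"can't help but",
    r"buck([s]?) my ",
    r"buck([s]?) h[ei][rs] ",
    r"[Dd]espite h[ie][mr]self",
    r"slick slit",
    r"whatever it takes",
    r"unlike anything (s?)he",
    r"a mix([a-z]*) of",
    r"wave after wave",
    r"reckless abandon",
    r"[Mm]aybe, just maybe",
    r"eyes gleaming",
    r"mischievously",
    r"couldn't help but",
]

def count_slop(text: str) -> dict:
    """Count occurrences of each slop pattern; a clean dataset scores all zeros."""
    return {pat: len(re.findall(pat, text)) for pat in SLOP_PATTERNS}

if __name__ == "__main__":
    sample = "Her voice was barely above a whisper, sending a shiver down his spine."
    # Print only the patterns that actually matched.
    hits = {pat: n for pat, n in count_slop(sample).items() if n}
    print(hits)  # → {'barely above a whisper': 1, 'shiver([s]?) down': 1}
```

For a whole dataset, apply `count_slop` to each sample and sum the per-pattern counts.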

+ From testing so far, temperature 0.8-0.9 feels like a good starting point. I have mostly tested with all other sampler settings neutralized. Please give feedback on which parameters work well for you.
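One reading of the "neutralized" baseline above, as a sketch: temperature is the only active sampler and every other transform is set to its no-op value. The parameter names below follow common llama.cpp/transformers conventions and are my assumption, not settings the author specifies:

```python
# Hypothetical "neutralized" sampler baseline: temperature is the only
# active sampler; all other transforms are at their no-op values.
neutral_sampler = {
    "temperature": 0.85,       # suggested starting range is 0.8-0.9
    "top_p": 1.0,              # 1.0 = nucleus sampling disabled
    "top_k": 0,                # 0 = top-k filtering disabled
    "min_p": 0.0,              # 0.0 = min-p filtering disabled
    "repetition_penalty": 1.0, # 1.0 = no repetition penalty
}

print(neutral_sampler)
```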