---
base_model: v000000/L3.1-Niitorm-8B-LATCOSx2
tags:
- llama
- merge
- llama-cpp
---

# Llama-3.1-Niitorm-8B-LATCOSx2 

![d48ca23f-9063-4a66-a6b8-0abcbfe26dc5.jpg](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/96he_28zzbVZBoh29QlIm.jpeg)

# Quants, ordered by quality:
* q8_0 imatrix
* q8_0
* q6_k imatrix
* q6_k
* q5_k_m imatrix
* q5_k_s imatrix
* q4_k_m imatrix
* q4_k_s imatrix
* iq4_xs imatrix
* q4_0_4_8 imatrix arm
* q4_0_4_4 imatrix arm
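
These quants are intended for llama.cpp-compatible runtimes. As a quick sketch of usage, here is how one of them could be loaded from Python with <b>llama-cpp-python</b>; the GGUF filename below is a placeholder, substitute whichever quant you downloaded:

```python
# Minimal usage sketch with llama-cpp-python (pip install llama-cpp-python).
# The model_path filename is a placeholder, not a guaranteed file name in this repo.
from llama_cpp import Llama

llm = Llama(model_path="L3.1-Niitorm-8B-LATCOSx2.q6_k.gguf", n_ctx=4096)
out = llm("Write a short scene introduction:", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```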

This is a test RP model: <b>"v000000/L3.1-Niitorm-8B-t0.0001"</b> merged one extra time with <b>"akjindal53244/Llama-3.1-Storm-8B"</b>, using a new merging algorithm I wrote, <b>"LATCOS"</b>. It performs non-linear interpolation weighted by the cosine vector similarity between the tensors of both models, in both magnitude and direction.
This attempts to find the smoothest possible interpolation and make the models work together more seamlessly by taking into account the vector directions where both models agree (a rough sketch of the idea follows below). The merged model seems a lot smarter even though it is only a bit more Storm, but it is also more compliant, which could be a negative since it is less dynamic.
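LATCOS itself is not published, so the following is only a minimal sketch of the idea described above, not the actual algorithm: interpolate two state dicts, blending more strongly where the tensors agree in direction, and treating magnitude and direction separately. The function name `latcos_merge` and the exact blending formula are assumptions for illustration.

```python
# Hypothetical sketch of a cosine-similarity-guided merge (NOT the real LATCOS code).
# Assumes two state dicts with identical keys and shapes, e.g. from two Llama-3.1-8B models.
import torch

def latcos_merge(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Interpolate two state dicts, leaning harder on directions where they agree."""
    merged = {}
    for key in sd_a:
        a, b = sd_a[key].float(), sd_b[key].float()
        # Cosine similarity of the flattened tensors: 1 = same direction, -1 = opposite.
        cos = torch.nn.functional.cosine_similarity(
            a.flatten(), b.flatten(), dim=0
        ).clamp(-1.0, 1.0)
        # Non-linear blend factor: interpolate further when the tensors point the
        # same way (assumption; the real weighting function is unknown).
        alpha = t * 0.5 * (1.0 + cos)
        # Handle magnitude (norms) and direction (unit vectors) separately.
        norm_a, norm_b = a.norm(), b.norm()
        direction = torch.lerp(a / (norm_a + 1e-8), b / (norm_b + 1e-8), alpha)
        magnitude = torch.lerp(norm_a, norm_b, alpha)
        merged[key] = magnitude * direction / (direction.norm() + 1e-8)
    return merged
```

Splitting magnitude from direction here mirrors the claim that similarity is considered "in both magnitude and direction"; the real weighting could differ substantially.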

<i>imatrix data: randomized bartowski and kalomeze datasets, RP snippets, working GPT-4 code, human messaging, stories</i>