---
library_name: transformers
tags: []
---

This was an experiment. I computed the weight delta between [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) and [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and applied it to the layers that [ICTNLP/Llama-3.1-8B-Omni](https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni) shares with the base model. The intention was to see whether the Omni model could inherit the abliterated behavior. The result (this model) is coherent, but it is not fully uncensored; the most likely reason lies in how the Omni model was trained.
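
For reference, a minimal sketch of the merge described above. It assumes the delta is taken tensor-wise (`abliterated - base`) and added to every Omni parameter that matches a base parameter by name and shape; loading the Omni checkpoint with `AutoModelForCausalLM` and the output path are assumptions for illustration (the Omni repo ships a custom architecture and may need its own loading code), so this may not be exactly the procedure used for this upload.

```python
import torch
from transformers import AutoModelForCausalLM

# Load all three checkpoints (needs enough RAM for three 8B models).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16)
abliterated = AutoModelForCausalLM.from_pretrained(
    "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated", torch_dtype=torch.bfloat16)
# Assumption: the Omni weights load through AutoModelForCausalLM with
# LLM parameter names matching the base model's.
omni = AutoModelForCausalLM.from_pretrained(
    "ICTNLP/Llama-3.1-8B-Omni", torch_dtype=torch.bfloat16)

base_sd = base.state_dict()
abl_sd = abliterated.state_dict()

# Add delta = abliterated - base to every Omni tensor that matches a
# base tensor by name and shape; Omni-only modules (e.g. the speech
# components) are left untouched.
with torch.no_grad():
    for name, param in omni.named_parameters():
        if name in base_sd and base_sd[name].shape == param.shape:
            param.add_(abl_sd[name] - base_sd[name])

omni.save_pretrained("Llama-3.1-8B-Omni-abliterated")  # hypothetical output path
```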