repetitive

#9
by Utochi - opened

I must say this has been a very interesting model to work with and get to know. It's smart, but it has an issue of starting to get repetitive a ways into a roleplay, as if it can't handle anything more than 5k context tokens.

It always starts off strong, then gradually loses its mind. If it kept up with its strong start up to 8k context, I'd absolutely love this model. Cranking up the repeat penalty helps a tiny bit, but not much.

I'm using the Mistral Instruct template and keeping the context to 5k tokens, but even at 5k it gets a little repetitive, just not as bad as at 8k.

I want to love this model, so I hope the repetition is addressed eventually.
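For context on what "cranking up the repeat penalty" actually changes: a minimal sketch of a llama.cpp-style repetition penalty, which rescales the logits of recently generated tokens before sampling. The function name and the simple list representation are illustrative assumptions, not the actual llama.cpp API.

```python
def apply_repeat_penalty(logits, recent_token_ids, penalty=1.3):
    """Penalize tokens that appeared recently so repeats are less likely.

    Positive logits are divided by `penalty`, negative ones multiplied,
    so the penalized token always becomes less probable. (Illustrative
    sketch only; real samplers operate on tensors, not Python lists.)
    """
    out = list(logits)
    for t in set(recent_token_ids):
        if out[t] > 0:
            out[t] /= penalty
        else:
            out[t] *= penalty
    return out

# Tokens 0 and 2 were seen recently, so their logits shrink toward
# "less likely"; token 1 is untouched.
print(apply_repeat_penalty([2.0, -1.0, 0.5], [0, 2], penalty=2.0))
```

A higher `penalty` suppresses repeats harder, but past a point it also suppresses legitimately recurring words (names, pronouns), which is one reason raising it only "helps a tiny bit."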


This is more or less an issue with every model. It really helps to write more detailed answers yourself that the model can make something out of, because giving short, repetitive answers will lead to the model doing the same. You can always tweak settings in SillyTavern or whatever you're using, but the most important thing is to give detailed answers yourself in roleplays; the model will usually be more creative as well when it gets more input.


That's not quite what I've found. I've seen other models handle things just fine, moving the story or situation along without becoming too repetitive. For instance, Undi95__Lumimaid-Magnum-v4-12B-GGUF__Lumimaid-Magnum-v4-12B.q8_0.gguf handles the repetition considerably better. I can't say why, I don't really understand it, but that one handles 8k context quite nicely with very little repetition.

Anthracite org

Mind if I ask what quantization you used and what you used for inference? I can try to troubleshoot if this is an actual problem, as my own testing with EXL2 went very well without any issues.
