[MODELS] Discussion

#372
by victor (HF staff), Hugging Chat org • opened, edited Sep 23, 2024

Here we can discuss the models available on HuggingChat.


victor pinned discussion

What are the limits of using these models? How many API calls can I send per month?

How can I know which model I am using?

How can I know which model I am using?

At the bottom of your screen.

Of all these models, the recently released Gemma has the newest information about .NET. However, I don't know which one gives the most accurate answers for coding.

Gemma seems really biased. With web search on, it says it doesn't have access to recent information when asked almost anything about recent events. But when I look up those same recent events on Google, I get results covering them.

Apparently Gemma cannot code?

Gemma is just like Google's Gemini series models: it has very strong moral limits built in. Any operation that may relate to file operations, or access that might go deep into the system, gets censored and the model refuses to reply.
So even if there are solutions for such things in its training data, they will just be filtered out and ignored.
I still haven't tested its coding accuracy on tasks unrelated to these kinds of "dangerous" operations, though.

@acharyaaditya26 it's normal for LLMs to mess up sometimes. There is likely nothing to fix. They also tend to degrade in response-quality over time, so it makes sense to open a new chat when discussing a new topic.

The difference between items 63 and 67 in the list (CohereForAI/c4ai-command-r-plus-08-2024) is subtle 🤷

@acharyaaditya26 it's normal for LLMs to mess up sometimes. There is likely nothing to fix. They also tend to degrade in response-quality over time, so it makes sense to open a new chat when discussing a new topic.

The strange thing is that, for me, when I retry after a certain period of time, it works again without hallucinating.


Getting a lot of bad output full of stray numbers from Llama 3.3 70B. Sometimes it seems fine at first, then degrades at the end of the reply. Other times it starts out looking something like the following, with up to 12 bad retries in a row:

My name, the 2271.1, the 22. **01. 01,, the 1. ** **. ** 222. **. 00. **. 01. 00.
You look at **. 227. **.
You ** **.
You **.
You are **. **.
I have 12.
I have **. 0
I look 09 00
.
You have **.
I

I 07.
I
You

.
I
I

I
I 0
I 12.
I
I

**.

0
I
**.
.
I
**.
I
I
I
.

I
0 **0.
I
.
. 0
I
**

@acharyaaditya26 it's normal for LLMs to mess up sometimes. There is likely nothing to fix. They also tend to degrade in response-quality over time, so it makes sense to open a new chat when discussing a new topic.

I agree, but I get weird output in the second response from the assistant.

Is it super-high temperature plus top-p and top-k?
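For context on why those settings matter: a minimal sketch (in Python, function and variable names are my own, not HuggingChat's actual implementation) of how temperature, top-k, and top-p interact when sampling the next token. A very high temperature flattens the distribution, so without a tight top-k/top-p cutoff, low-probability junk tokens (like stray numbers) can get sampled:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None, top_p=None):
    """Sample one token index from raw logits with common decoding knobs."""
    # Temperature scaling: values > 1 flatten the distribution,
    # values < 1 sharpen it toward the argmax.
    scaled = [l / temperature for l in logits]

    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]

    # Sort tokens by probability, highest first.
    probs.sort(key=lambda pair: pair[1], reverse=True)

    # top-k: keep only the k most probable tokens.
    if top_k is not None:
        probs = probs[:top_k]

    # top-p (nucleus): keep the smallest prefix whose mass reaches p.
    if top_p is not None:
        kept, mass = [], 0.0
        for i, p in probs:
            kept.append((i, p))
            mass += p
            if mass >= top_p:
                break
        probs = kept

    # Renormalize over the surviving tokens and sample.
    total = sum(p for _, p in probs)
    r = random.random() * total
    acc = 0.0
    for i, p in probs:
        acc += p
        if acc >= r:
            return i
    return probs[-1][0]
```

With `top_k=1` (or a tiny `top_p`) this reduces to greedy argmax decoding no matter how high the temperature is, which is one reason aggressive temperature alone usually isn't the whole story behind degenerate output.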
