
Yassine Ennaour

Lyte

AI & ML interests

None yet

Recent Activity

updated a model about 3 hours ago
Lyte/Titans-MAC-test-bad-run-with-bug
liked a model about 10 hours ago
deepseek-ai/DeepSeek-R1
published a model about 10 hours ago
Lyte/Titans-MAC-test-bad-run-with-bug

Organizations

Kandir Research

Lyte's activity

upvoted an article 9 days ago

TerjamaBench: A Cultural Benchmark for English-Darija Machine Translation

By imomayiz • 20
reacted to hexgrad's post with 🚀🔥 12 days ago
📣 Looking for labeled, high-quality synthetic audio/TTS data 📣 Have you been or are you currently calling API endpoints from OpenAI, ElevenLabs, etc.? Do you have labeled audio data sitting around gathering dust? Let's talk! Join https://discord.gg/QuGxSWBfQy or comment down below.

If your data exceeds quantity & quality thresholds and is approved into the next hexgrad/Kokoro-82M training mix, and you permissively DM me the data under an effective Apache license, then I will DM back the corresponding voicepacks for YOUR data if/when the next Apache-licensed Kokoro base model drops.

What does this mean? If you've been calling closed-source TTS or audio API endpoints to:
- Build voice agents
- Make long-form audio, like audiobooks or podcasts
- Handle customer support, etc.
Then YOU can contribute to the training mix and get useful artifacts in return. ❤️

More details at hexgrad/Kokoro-82M#21
reacted to alielfilali01's post with 👍 13 days ago
The 3C3H AraGen Leaderboard today welcomes deepseek-ai/DeepSeek-V3 and 12 other models (including the late gpt-3.5 💀) to the ranking of the best LLMs in Arabic!


Observations:
- DeepSeek-V3 ranked 3rd and is the only open model among the top 5!

- A 14B open model (Qwen/Qwen2.5-14B-Instruct) outperforms gpt-3.5-turbo-0125 (from last year). This shows how far we have come in advancing and supporting the Arabic presence within the LLM ecosystem!

- Contrary to what is observed on likelihood-accuracy leaderboards (like OALL/Open-Arabic-LLM-Leaderboard), further fine-tuned models like maldv/Qwentile2.5-32B-Instruct actually decreased in performance compared to the original model Qwen/Qwen2.5-32B-Instruct.
It's worth noting that the decrease is statistically insignificant, which implies that, at best, out-of-domain fine-tuning does not really hurt the capabilities the model originally acquired during pretraining.
Previous work has addressed this (fine-tuning vs. pretraining), but more investigation is required (any PhDs here? This could be your research question...)


Check out the latest rankings: inceptionai/AraGen-Leaderboard