chansung park (chansung) PRO

AI & ML interests: none yet

Organizations: Notebooks-explorers, various keras sd deployment, LLMs, Gradio-Themes-Party, Hugging Face Fellows, Alpaca LoRA, Webhooks Explorers (BETA), Deploy HF TF ViTs, Blog-explorers, Personal Coding Assistant, ZeroGPU Explorers, Social Post Explorers, Top Contributors: Dataset Downloads, llama-duo, klcsp, ExpanLLM

chansung's activity

reacted to their post with 👍 about 5 hours ago

Simple Summarization of DeepSeek-R1 from DeepSeek AI

The RL stage is very important.
↳ However, it is difficult to create a truly helpful AI through RL alone.
↳ So, they applied a learning pipeline of four stages: providing a good starting point, reasoning RL, SFT, and safety RL, and achieved performance comparable to o1.
↳ Simply fine-tuning other open models on data generated by R1-Zero (distillation) yielded performance comparable to o1-mini.

Of course, this is just a brief overview and may not be of much help. All models are accessible on Hugging Face, and the paper can be read through the GitHub repository.

Models: https://huggingface.co/deepseek-ai
Paper: https://github.com/deepseek-ai/DeepSeek-R1
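The four-stage pipeline summarized above can be sketched as a sequence of training stages applied to a base model. This is only a conceptual outline of the idea, not DeepSeek's actual code; all function and stage names here are illustrative labels.

```python
# Conceptual sketch of a staged training pipeline (illustrative only,
# not DeepSeek's actual implementation or APIs).

def train_pipeline(base_model, stages):
    """Apply each training stage in order, returning the stage-by-stage history."""
    history = [base_model]
    for stage in stages:
        # Each stage takes the previous checkpoint as its starting point.
        base_model = f"{base_model}+{stage}"
        history.append(base_model)
    return history

# The four stages described in the post, as labels:
STAGES = [
    "cold_start_sft",  # providing a good starting point
    "reasoning_rl",    # reasoning-focused reinforcement learning
    "sft",             # supervised fine-tuning
    "safety_rl",       # final RL stage
]

print(train_pipeline("base", STAGES)[-1])
```

The point of the sketch is simply that each stage starts from the checkpoint produced by the previous one, which is why the order (good starting point before reasoning RL, safety RL last) matters.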
posted an update about 5 hours ago
upvoted an article 1 day ago
Introducing multi-backends (TRT-LLM, vLLM) support for Text Generation Inference