ClearerVoice-Studio New Feature: Speech Super-Resolution with MossFormer2! We're excited to announce that ClearerVoice-Studio now supports speech super-resolution, powered by our latest MossFormer2-based model! What's New?
Convert Low-Resolution to High-Resolution Audio: Transform low-resolution audio (effective sampling rate ≥ 16 kHz) into crystal-clear, high-resolution audio at 48 kHz.
Cutting-Edge Technology: Leverages the MossFormer2 model plus HiFi-GAN, optimised for generating high-quality audio with enhanced perceptual clarity.
Enhanced Listening Experience: Perfect for speech enhancement, content restoration, and high-fidelity audio applications.
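For intuition about what "super-resolution" adds, here is a toy sketch of the naive alternative: plain linear-interpolation upsampling from 16 kHz to 48 kHz. This raises the sampling rate but adds no new high-frequency content, which is exactly the gap a generative model like MossFormer2 plus HiFi-GAN is meant to fill. This is an illustrative contrast only, not the model's method, and the function name is hypothetical.

```python
def upsample_linear(samples, factor=3):
    """Naive 16 kHz -> 48 kHz upsampling by linear interpolation.

    Unlike generative super-resolution, this adds no new
    high-frequency detail -- it only raises the sample rate.
    """
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for k in range(factor):
            # Interpolate `factor` evenly spaced points between a and b.
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])  # keep the final sample
    return out

x = [0.0, 0.3, -0.1, 0.5]      # 4 samples on the 16 kHz grid
y = upsample_linear(x)          # 10 samples on the 48 kHz grid
```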
Try It Out! Upgrade to the latest version of ClearerVoice-Studio (https://github.com/modelscope/ClearerVoice-Studio) to experience this powerful feature. Check out the updated documentation and examples in our repository.
Let us know your thoughts, feedback, or feature requests in the Issues section.
What are scaling laws? These are empirical laws stating that every time you increase the compute spent on training 10-fold, your LLM's performance goes up by a predictable tick. Of course, they apply only if you train your model with the right methods.
The images below illustrate this: they're from a paper by Google, "Scaling Autoregressive Models for Content-Rich Text-to-Image Generation", and they show how the quality and instruction-following of models improve as you scale the model up (which is equivalent to scaling up the compute spent in training).
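The "predictable tick" above is usually modeled as a power law, where loss falls by a constant factor for every 10x in compute. A minimal sketch, with entirely hypothetical coefficients `a` and `b` (real values come from fitting training runs):

```python
def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    """Hypothetical power-law scaling curve: loss = a * compute**(-b)."""
    return a * compute ** (-b)

# Every 10x in compute multiplies the predicted loss by the same
# constant factor, 10**(-b), regardless of where you start:
ratio = loss(1e22) / loss(1e21)

for c in [1e21, 1e22, 1e23]:
    print(f"C={c:.0e}  predicted loss={loss(c):.3f}")
```

This constant-ratio property is what makes the laws useful for planning: you can extrapolate the benefit of a larger cluster before building it.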
These scaling laws have immense impact: they triggered the largest gold rush ever, with companies pouring billions into scaling up their training. Microsoft and OpenAI are putting $100B into their "Stargate" mega training cluster, due to start running in 2028.
So, what about these reports of scaling laws slowing down?
If they are true, they would mean a gigantic paradigm shift, as the hundreds of billions poured by AI companies into scaling could be a dead end.
But I doubt it: up to the most recent publications, scaling laws showed no signs of weakness, and the researchers at the higher end of the scale-up seem to imply that the scaling continues.
We just released a paper (NeuZip) that losslessly compresses model weights in memory to run larger models. This should be particularly useful when VRAM is insufficient during training or inference. Specifically, we look inside each floating-point number and find that the exponents are highly compressible (as shown in the figure below).
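You can see why the exponents compress well with a few lines of standard-library Python: extract the 8-bit exponent field of each float32 and measure its Shannon entropy. Because trained weights cluster near zero, only a handful of exponent values actually occur, so the entropy is far below the 8 raw bits. This is a back-of-the-envelope illustration of the observation, using synthetic Gaussian "weights", not the NeuZip algorithm itself.

```python
import math
import random
import struct
from collections import Counter

def exponent_bits(x: float) -> int:
    """Extract the 8-bit exponent field of x encoded as IEEE 754 float32."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return (bits >> 23) & 0xFF

def entropy_bits(symbols) -> float:
    """Shannon entropy of a symbol sequence, in bits per symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy "weights": small values clustered around zero, like a trained layer.
random.seed(0)
weights = [random.gauss(0.0, 0.02) for _ in range(10_000)]

exps = [exponent_bits(w) for w in weights]
print(f"exponent entropy: {entropy_bits(exps):.2f} bits (vs. 8 raw bits)")
```

An entropy-coded exponent stream can therefore be stored in far fewer bits than the raw field, and decoded back bit-exactly, which is what makes the compression lossless.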