Stability AI, the company behind the popular text-to-image generator Stable Diffusion, recently released a new open-source large language model called StableLM. The company announced that the models are available for developers on GitHub.
The company says the models are available in alpha versions with 3 billion and 7 billion parameters, with 15-billion to 65-billion-parameter models to follow. Developers are free to inspect, use, and adapt the StableLM base models for commercial or research purposes under the terms of the CC BY-SA 4.0 license. StableLM is designed to generate both text and code.
StableLM builds on Stability AI's earlier open-source work with EleutherAI, including GPT-J, GPT-NeoX, and the Pythia suite. It is trained on a larger version of The Pile, an open-source dataset that draws from diverse sources such as Wikipedia, Stack Exchange, and PubMed.
StableLM is currently not comparable to ChatGPT and lacks guardrails around sensitive content. It also fails the famous "Don't Praise Hitler" test.
You can try out a demo of StableLM-Tuned-Alpha-7b hosted on Hugging Face. The demo works like a chatbot, albeit a bit slower than ChatGPT.
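If you'd rather poke at the model yourself, the Hugging Face model card for StableLM-Tuned-Alpha describes a chat prompt format built from special `<|SYSTEM|>`, `<|USER|>`, and `<|ASSISTANT|>` tokens. The snippet below is a minimal sketch of assembling such a prompt; the system message text here is illustrative, not the exact wording from the model card.

```python
# Minimal sketch: build a chat prompt in StableLM-Tuned-Alpha's format.
# The <|SYSTEM|>/<|USER|>/<|ASSISTANT|> tokens follow the format described
# on the Hugging Face model card; the system message below is illustrative.

SYSTEM_PROMPT = (
    "<|SYSTEM|># StableLM Tuned (Alpha version)\n"
    "- StableLM is a helpful and harmless open-source AI language model.\n"
)

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the tuned model's chat format."""
    return f"{SYSTEM_PROMPT}<|USER|>{user_message}<|ASSISTANT|>"

print(build_prompt("Write a haiku about open-source AI."))
```

You would then pass the assembled string to the tokenizer and call the model's `generate` method, stopping when the model emits its end-of-text token.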
Rapid developments are happening in the world of large language models every day. Which player do you feel will end up dominating this space? Or will we finally reach a point where all AI models aggregate into a Skynet-like entity? The future is uncertain, yet we are all along for the ride. Do let us know your thoughts on these rapid AI developments in the comments below!
So guys, if you liked this post and wish to receive more tech stuff delivered daily, don't forget to subscribe to the Inspire2Rise newsletter for timely tech news, updates and more!