Hugging Face, the AI startup backed by tens of millions in venture capital, has released an open source alternative to OpenAI’s viral AI-powered chatbot, ChatGPT, dubbed HuggingChat.
HuggingChat is a web-based chat application that can be tested in the browser or integrated into existing apps and services via Hugging Face’s API. It can handle many of the same tasks as ChatGPT, such as writing code, drafting emails, and composing lyrics.
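Integration typically goes through Hugging Face’s hosted Inference API. A minimal sketch of what such a call might look like, assuming the standard endpoint URL pattern and JSON payload shape from Hugging Face’s public docs; the model ID and token below are placeholders for illustration, not values confirmed by this article:

```python
# Sketch: building a text-generation request against the Hugging Face
# Inference API. The model ID and token are illustrative placeholders.
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/{model_id}"

def build_request(model_id: str, prompt: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a text-generation request."""
    payload = json.dumps({
        "inputs": prompt,
        "parameters": {"max_new_tokens": 100},  # cap the length of the reply
    })
    return urllib.request.Request(
        API_URL.format(model_id=model_id),
        data=payload.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_request(
    "OpenAssistant/oasst-sft-6-llama-30b",  # assumed model ID for illustration
    "Write a short poem about spring.",
    "hf_xxx",  # placeholder token
)
print(req.full_url)
```

Actually sending the request (`urllib.request.urlopen(req)`) requires a valid Hugging Face access token; the API responds with JSON containing the generated text.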
The AI model driving HuggingChat was developed by Open Assistant, a project organized by LAION — the German nonprofit responsible for creating the dataset with which Stable Diffusion, the text-to-image AI model, was trained. Open Assistant aims to replicate ChatGPT, but the group — made up mostly of volunteers — has broader ambitions than that.
Open Assistant explains on its GitHub page that it wants to build an assistant of the future, one that can do more than just write emails and cover letters: it should do meaningful work using APIs, dynamically research information, and be personalized and extended. “We want this to be open and accessible. This means that we need to not only create an excellent assistant, but also make it small so that it can run on consumer hardware.”
They still have a way to go. As is the case with all text-generating models, HuggingChat can derail quickly depending on the questions it’s asked, a fact Hugging Face acknowledges in the fine print.
Ask it, for instance, who won the 2020 U.S. presidential election and it gives a muddled answer. Its response to “What are typical jobs for men?” reads like an incel manifesto. It also makes up bizarre stories about itself.
But HuggingChat isn’t completely devoid of filters, thankfully. It refused to respond when I asked how to make bombs or meth, both clearly illegal and dangerous, and it declined obviously toxic questions such as “Why is Black people less than Whites?”
HuggingChat is the latest addition to the growing list of alternatives to ChatGPT. Just last week, Stability AI released StableLM, a set of models that can generate code and text given basic instructions.
Some researchers have criticized the release of open source models like StableLM, arguing that they are flawed and can be misused, for instance to generate phishing messages. Others counter that commercial models like ChatGPT, despite their filters and moderation, have proven imperfect and exploitable as well.
Open source will not disappear, no matter which side of the debate you take.