10/12/2023

GPT-4 arrived not long after Microsoft’s newly AI-powered Bing chatbot - which, it was eventually revealed, was running on GPT-4 before the LLM had been officially released to the public - took New York Times reporter Kevin Roose on a bizarre textual ride through the chatbot’s “shadow self,” introducing itself as “Sydney.” The incident highlighted the model’s propensity for “hallucination” and raised fears among many of an impending AI-ocalypse. Microsoft responded by limiting the number of queries that a user could submit to Bing per session and per day.

At around the same time, Google’s Bard – an AI chatbot launched largely in response to the upgraded Bing – spun off an erroneous fact during a public demo and cost Alphabet (Google’s parent company) many, many billions of dollars.

It would be a stretch to say that GPT-4 arrived during a PR crisis for AI; very few people would have claimed that the Bing and Bard mishaps spelled the doom of AI as such. But the incidents did set some marketing experts back on their heels.

Both OpenAI and Google responded to the Roose and Bard incidents, respectively, with a general attitude of: these are early days, and we’re figuring out and fine-tuning these tools in collaboration with our users; we’re expecting the unexpected, and we appreciate the masses poking and prodding these AI models so that we can continue to make them more trustworthy and safe.

OpenAI, for its part, certainly seems to have taken its LLM’s hallucination problem to heart. To a certain extent, this has required some courage.