The New York Times, citing people familiar with the process, said two employees in Google’s responsible innovation department tried to prevent the launch of the Bard chatbot last month, but failed.
They warned that it was vulnerable to “inaccurate and dangerous statements”.
Product reviewers had already flagged problems with large AI language models like Bard and its archrival ChatGPT by the time Google’s top lawyer met with search and safety executives to tell them the company was prioritizing AI over everything else.
Sources claimed that the pair’s fears that the chatbot gave out false information, harmed users who became emotionally attached to it, or could even unleash “technological violence” through artificial grassroots harassment were later downplayed by responsible innovation lead Jen Gennai. And while the reviewers urged Google to wait before launching Bard, Gennai allegedly edited their report to remove the recommendation entirely.
Gennai defended her actions to the Times, arguing that because Bard was just an experiment, reviewers should not have weighed in on whether it should proceed at all. She claimed to have improved the report by “correcting inaccurate assumptions and adding more risks and harms to be explored.” This, she insisted, made the final product safer.
Google credited Gennai for the decision to release Bard as a “limited trial,” but the chatbot is still set to be fully integrated into the company’s market-dominant search engine “soon,” according to Google’s own website.
Google has faced down employee protests over artificial intelligence before. It fired Blake Lemoine last year after he claimed that LaMDA (Language Model for Dialogue Applications) had become sentient, and researcher El Mahdi El Mhamdi resigned after the company prevented him from publishing a paper warning about cybersecurity vulnerabilities in large language models such as Bard. And in 2020, AI researcher Timnit Gebru was fired after publishing a study criticizing Google’s approach to AI development.
However, a growing faction of AI researchers, technology executives, and other influential futurists has accused Google and its rivals of “moving fast” to keep pace with Microsoft and OpenAI before effective safety measures can be placed on the technology. A recent open letter calling for a six-month pause on “giant AI experiments” attracted thousands of signatures, including those of OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak.
The technology’s potential to upend society by making many human professions (or humans themselves) obsolete is central to many experts’ warnings, though smaller risks are also cited, such as the data breach OpenAI has already suffered.