Yesterday at the Google I/O developer conference, the company outlined ambitious plans for a future built on advanced language AI. These systems, said Google CEO Sundar Pichai, will let users find information and organize their lives by having natural conversations with computers. All you have to do is talk, and the machine will answer.
But for many in the AI community, there was a notable absence from this conversation: Google's response to its own research examining the dangers of such systems.
In December 2020 and February 2021, Google fired first Timnit Gebru and then Margaret Mitchell, co-leads of its ethical AI team. The story of their departure is complex, but it was triggered by a paper the pair co-authored (with researchers outside Google) examining the risks of the very language models Google now presents as central to its future. As that paper and other critiques note, these AI systems are prone to a number of failures, including generating offensive and racist language, encoding racial and gender bias in speech, and a general inability to sort fact from fiction. For many in the AI world, Google's firing of Gebru and Mitchell amounted to censorship of their work.
So although Pichai said that Google's AI models are always designed with "fairness, accuracy, security, and privacy" in mind, for some observers the gap between the company's words and its actions raised questions about its ability to keep this technology safe.
"Google just introduced its new large language model LaMDA at I/O," tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. "A testament to its strategic importance: company teams spend months preparing these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to suppress her + the research criticizing this approach."
Gebru herself tweeted, "This is what's called ethics washing," referring to the tech industry's tendency to trumpet ethical concerns while ignoring findings that get in the way of companies' ability to make a profit.
Speaking to The Verge, Emily Bender, a professor at the University of Washington who co-authored the paper with Gebru and Mitchell, said Google's presentation did nothing to allay her concerns about the company's ability to make such technology safe.
"Based on the blog post [discussing LaMDA] and on the history, I'm not confident that Google is really being careful about all the risks outlined in the paper," Bender said. "For one, they fired the two authors, nominally over that paper. If the issues we raised were ones they were actually confronting, they deliberately deprived themselves of highly relevant expertise for that task."
In its blog post on LaMDA, Google highlights several of these issues and stresses that its work needs further development. "Language might be one of humanity's greatest tools, but like all tools it can be misused," write senior research director Zoubin Ghahramani and product management director Eli Collins. "Models trained on language can propagate that misuse, for instance by internalizing biases, mirroring hateful speech, or replicating misleading information."
But Bender says the company is glossing over these problems and needs to be clearer about how it is tackling them. She notes, for example, that Google refers to reviewing the language used to train models like LaMDA, but gives no details about what this process looks like. "I'd really like to know about the review process (or lack thereof)," Bender says.
The only time Google referred to this controversy at all came after the presentation, in an interview CNET conducted with Jeff Dean, Google's head of AI. Dean acknowledged that Google had taken a real "reputational hit" from the firings, something The Verge has previously reported, but said the company had to "move past" these events. "We're not shy about criticism of our own products," Dean told CNET. "As long as it's done with a lens towards the facts and appropriate treatment of the broad set of work we're doing in this space, but also to address some of these issues."
For the company's critics, however, the conversation needs to be much more open than this.