Report: Microsoft cut a key AI ethics team.

In Microsoft’s most recent round of layoffs, which affected 10,000 employees, the company eliminated an entire team responsible for ensuring that AI products ship with safeguards to reduce societal harms, according to Platformer.

Former employees said the ethics and society team was a key part of Microsoft’s strategy for reducing the risks of integrating OpenAI technology into its products. Before it was disbanded, the team developed a comprehensive “responsible innovation toolbox” to help Microsoft developers anticipate the negative effects of AI and devise strategies to mitigate them.

Platformer’s report was published just before OpenAI unveiled GPT-4, reportedly its most powerful AI model to date, which is already powering Bing search, according to Reuters.

Microsoft reaffirmed its “commitment to building AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that emphasize this,” according to a statement provided to Ars.

Microsoft called the ethics and society team’s work “trailblazing” and said that over the past six years it had increasingly emphasized funding and growing its Office of Responsible AI. That office remains operational alongside Microsoft’s other responsible AI working groups, the Aether Committee and Responsible AI Strategy in Engineering.

Emily Bender, a University of Washington researcher who specializes in computational linguistics and ethical issues in natural language processing, joined other critics on Twitter in denouncing Microsoft’s decision to disband the ethics and society team. Speaking as an outsider, Bender called Microsoft’s choice “short-sighted.” She added, “Any big cuts to the people doing the work are damning given how difficult and vital this work is.”

 

A brief history of the ethics and society team

Microsoft began putting more emphasis on teams investigating ethical AI in 2017, according to a CNBC report. By 2020, the ethics and society team had grown to a peak of 30 members as part of that effort, Platformer reported. But last October, as the AI race with Google intensified, Microsoft began moving most of the ethics and society team’s personnel onto specific product teams. Only seven employees were left to carry out the team’s “ambitious plans,” employees told Platformer.

That was too much work for so small a team, and former team members said Microsoft didn’t always follow their recommendations. For example, the team suggested mitigation strategies for Bing Image Creator that might have prevented it from copying the brands of living artists, but Microsoft didn’t always implement those suggestions. (Microsoft disputes that claim, saying the tool was modified before release to address the team’s concerns.)

According to Platformer, John Montgomery, corporate vice president of AI at Microsoft, said while the team was being downsized last fall that there was a lot of pressure to “take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed.” Staff told Montgomery they had “serious” concerns about the potential harmful effects of this speed-driven strategy, but he replied that “the pressures stay the same.”

But even as the ethics and society team shrank, Microsoft assured the group that it wouldn’t be disbanded. On March 6, however, the remaining members were told in a Zoom meeting that it had been deemed “business critical” to dissolve the team entirely.

The decision is especially upsetting, Bender told Ars, because Microsoft had “managed to collect some really brilliant people working on AI, ethics, and social effect for technology.” For a while, she said, the team appeared to be “really even fairly empowered at Microsoft.” But with this move, Bender said, Microsoft is “essentially saying” that it will follow the ethics and society team’s suggestions only when it chooses to, and that “they must be eliminated if they are incompatible with our short-term financial goals.”

“The worst thing is we’ve exposed the business to risk and human beings to risk,” one former employee told Platformer.
