

The chorus of tech workers demanding American tech companies put ethics before profit is growing louder.

In recent days, employees at Google and Microsoft have been pressuring company executives to drop bids for a $10 billion contract to provide cloud computing services to the Department of Defense.

As part of the contract, known as JEDI, engineers would build cloud storage for military data; there are few public details about what else it would entail. But one thing is clear: the project would involve using artificial intelligence to make the US military a lot deadlier.

“This program is truly about increasing the lethality of our department and providing the best resources to our men and women in uniform,” John Gibson, chief management officer at the Defense Department, said at a March industry event about JEDI.

Thousands of Google employees reportedly pressured the company to drop its bid for the project, and many had said they would refuse to work on it. They pointed out that such work may violate the company’s new ethics policy on the use of artificial intelligence. Google has pledged not to use AI to make “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” a policy company employees had pushed for.

On October 8, Google announced that it was pulling out of the running for the JEDI contract. Now Microsoft employees are pushing executives to do the same.

“With no transparency in these negotiations, and an opaque ethics body that arbitrates moral decisions, accepting this contract would make it impossible for the average Microsoft employee to know whether or not they are writing code that is intended to harm and surveil,” wrote an anonymous group of Microsoft employees in a letter published Friday (and verified) by Medium.

It’s unclear how many employees are part of the group, but it may not matter, as Microsoft has indicated it won’t drop its bid on the cloud computing contract for the Pentagon.

Internal protests at some of America’s most powerful tech companies reflect mounting employee concerns about the ethical implications of the technology they are developing. Some of their protests have had an impact; others have not. But their calls to put ethics and values before profit are forcing Silicon Valley to consider the moral ramifications of what they’re creating, and whether it’s benefiting humanity and “promoting fundamental human rights” - or doing the opposite.

Employees are worried about government contracts

Employees at different tech companies are worried about different types of projects, but they do have one thing in common: a shared concern about government contracts, and the risk that government officials can use their technology to violate basic human rights. As a worst-case scenario, they often cite the example of IBM’s contract with Nazi Germany, in which the American tech company developed a system that helped Nazis classify, organize and murder Jews.

One technology that workers are concerned about is facial recognition software. A group of 450 Amazon employees reportedly signed a letter asking CEO Jeff Bezos to stop selling its facial recognition software, Rekognition, to law enforcement agencies, according to an Amazon employee who published an anonymous opinion piece Tuesday on Medium (the publishing platform verified the author’s employment at Amazon).

According to the Amazon employee, studies show that facial recognition software often misidentifies people with darker skin. The employee cited a recent test of Amazon’s Rekognition software by the American Civil Liberties Union, which ran photos of every member of Congress against a collection of mugshots. “We cannot avert our eyes from the human cost of our business,” the employee wrote, calling the software a “flawed technology that reinforces existing bias.”