Reading notes

Ethics in the Workplace

The internal backlash among employees represents mounting concerns about whether Google has “lost its moral compass” in the corporate pursuit to enrich shareholders. But it also suggests that the people who make Google’s technology have more power in shaping corporate decisions than even shareholders have. In April, thousands of Google employees protested the company’s military contract with the Pentagon — known as Project Maven — which developed technology to analyze drone video footage that could potentially identify human targets.

The push to make employees corporate stakeholders

For the past few decades, rank-and-file workers have had no real influence in how public companies invest profits or make decisions about new revenue streams.

Modern American capitalism has been driven by a singular mission: to bring value to the people who own company stock. Vox’s Matt Yglesias explains how this mentality leads executives to pursue profit above other worthwhile goals.

Google is acting like a traditional company: one that squeezes every dime out of the marketplace, heedless of intangibles like principle and ethical cost, even at the risk of its users’ safety… If technology is a tool, then the people making that tool have a responsibility to curb its misuse by playing a role in decisions about how it gets used. And if the company’s leaders don’t believe this, they should hear it in plainer and clearer terms: namely, you do not become one of the largest companies in the history of capitalism without the assistance of the workers making those tools.

It’s not clear yet whether the employee campaigns at Amazon and Microsoft will have the same impact as the employee activism at Google—but the pushback at Amazon and Microsoft this week shows that tech employees are increasingly willing to speak up against work that they believe to be unethical.

Academics and students in the fields of computer science and artificial intelligence joined Google employees in voicing concerns about Project Maven, arguing that Google was unethically paving the way for the creation of fully autonomous weapons. AI ethicist Peter Asaro praised Google’s ethical principles for their commitment to building socially beneficial AI, avoiding bias, and building in privacy and accountability. However, Google could improve by adding more public transparency and working with the United Nations to reject autonomous weapons, he said.

“Ultimately, how the company enacts these principles is what will matter more than statements such as this,” Asaro said. “In the absence of positive actions, such as publicly supporting an international ban on autonomous weapons, Google will have to offer more public transparency as to the systems they build. Otherwise we will continue to rely on the conscientious employees willing to risk their positions in Google to ensure the company ‘does no evil.’”

“Microsoft’s guidelines on accessibility and security go above and beyond because we care about our customers. We ask for the same approach to a policy on ethics and acceptable use of our technology. Making our products accessible to all audiences has required us to be proactive and unwavering about inclusion. If we don’t make the same commitment to be ethical, we won’t be. We must design against abuse and the potential to cause violence and harm.”

Ethics in Technology

The trolley problem

Automakers and suppliers largely downplay the risks of what in philosophical circles is known as “the trolley problem” — named for a no-win hypothetical situation in which, in the original format, a person witnessing a runaway trolley could allow it to hit several people or, by pulling a lever, divert it, killing someone else.

In the case of the self-driving car, the problem is often boiled down to a hypothetical vehicle with malfunctioning brakes hurtling toward a crowded crosswalk: a certain number of occupants will die if the car swerves; a number of pedestrians will die if it continues. The car must be programmed to do one or the other.
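As a purely illustrative sketch of that forced choice (nothing any automaker actually ships; the function name and the naive harm-minimizing rule are assumptions made for this example), the dilemma can be written as a single decision:

```python
# Toy illustration of the trolley-problem framing for a self-driving car.
# The utilitarian "fewest casualties" rule below is an assumption for the
# sake of the example, not how production vehicles are programmed.

def choose_action(occupants_at_risk: int, pedestrians_at_risk: int) -> str:
    """Return 'swerve' or 'continue' under a naive harm-minimizing rule."""
    if occupants_at_risk < pedestrians_at_risk:
        return "swerve"    # fewer expected casualties inside the car
    return "continue"      # fewer (or equal) expected casualties among pedestrians

print(choose_action(occupants_at_risk=1, pedestrians_at_risk=4))  # -> swerve
```

The point of the thought experiment is precisely that any such rule must be chosen in advance, which is what critics find troubling and automakers find contrived.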

Philosophical considerations aside, automakers argue the scenario is all but bunk because it is so contrived.

“I don’t remember when I took my driver’s license test that this was one of the questions,” said Manuela Papadopol, director of business development and communications for Elektrobit, a leading automotive software maker and a subsidiary of German auto supplier Continental AG.

If anything, self-driving cars could almost eliminate such an occurrence. They will sense such a problem long before it would become apparent to a human driver and slow down or stop. Redundancies — for brakes, for sensors — will detect danger and react more appropriately.
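A minimal sketch of the redundancy idea described above, assuming three hypothetical, independent obstacle sensors and a simple majority vote; real automotive systems are far more elaborate than this:

```python
# Illustrative redundancy check: several independent sensors vote on whether
# an obstacle is ahead, so a single faulty sensor cannot mask real danger.
# The three hypothetical readings below are made up for the example.

def obstacle_ahead(readings: list[bool]) -> bool:
    """Return True when a majority of redundant sensors report an obstacle."""
    return sum(readings) > len(readings) / 2

readings = [True, True, False]  # e.g. camera and radar agree; lidar disagrees
if obstacle_ahead(readings):
    print("slow down or stop well before a human driver would notice")
```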

Self-driving cars will also introduce new forms of human control. Not having to own a vehicle and hiring one only when it’s needed will itself be a major new form – one that will likely give many people more control over their monthly finances. Passengers are also likely to have more choice over vehicle settings, preferences and routes, not to mention what they do while they’re in the car.

Melissa Cefkin, a design anthropologist at Nissan, likens the uncertainty over self-driving cars to the angst that accompanied cellphones when they first arrived.

“People felt like they weren’t as much in control as to when they communicated with people, but we adapted,” she says. “That feeling itself is socially constructed, and there will be some replacement feeling of control in the future; it just won’t look the way it does today.”

Would you feel safe in a self-driving car?

In 2017: Absolutely not if the car is completely autonomous. In 2027: Possibly. I think a lot will depend on how mature the technology will become. Right now, self-driving cars exist, but the human is in the loop to jump in if the car makes a mistake or needs input. Recently, a Tesla driver died in a self-driving car accident because he completely trusted the car to make the right choices. We do not really have self-driving cars yet. Rather, we have semi-automated self-driving cars.

Tying data to permissions can be done through encryption, which is slow, riddled with DRM, burdensome, hard to implement, and bad for innovation. Or it can be done through legislation, which has about as much chance of success as regulating spam: it feels great, but it’s damned hard to enforce.
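As a rough sketch of what “tying data to permissions” through encryption might look like (assuming the third-party `cryptography` package; the envelope layout, `allowed_purposes` field, and purpose check are invented for illustration, not a real standard):

```python
# Sketch: encrypt a record and attach a usage policy, so the data can only be
# read for purposes the policy allows. Requires `pip install cryptography`.
# The envelope format and policy fields are assumptions for this example.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)

record = {"name": "Ada", "zip_code": "02139"}
envelope = {
    "policy": {"allowed_purposes": ["medical-research"]},
    "ciphertext": fernet.encrypt(json.dumps(record).encode()),
}

def read_record(envelope: dict, purpose: str) -> dict:
    """Decrypt the record only if the requested purpose is permitted."""
    if purpose not in envelope["policy"]["allowed_purposes"]:
        raise PermissionError(f"use for {purpose!r} is not permitted")
    return json.loads(fernet.decrypt(envelope["ciphertext"]))

print(read_record(envelope, "medical-research"))   # allowed
# read_record(envelope, "ad-targeting")            # would raise PermissionError
```

Even in this toy form, the weakness the passage points to is visible: nothing technically stops whoever holds the key from ignoring the attached policy, which is why enforcement ends up back with law or contract.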

There are brilliant examples of how a quantified society can improve the way we live, love, work, and play. Big Data helps detect disease outbreaks, improve how students learn, reveal political partisanship, and save hundreds of millions of dollars for commuters—to pick just four examples. These are benefits we simply can’t ignore as we try to survive on a planet bursting with people and shaken by climate and energy crises.

But governments need to temper their reliance on data with checks and balances on how that reliance erodes privacy and creates civil and moral issues we haven’t thought through. It’s something that most of the electorate isn’t thinking about, and yet it affects every purchase they make.

This should be fun.

These technologies are also becoming increasingly popular in the world of politics. Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge”—a modern form of paternalism. The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging”, which is the combination of big data with nudging. To many, this appears to be a sort of digital scepter that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered “wise king”, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.

Pre-Programmed Catastrophes

But one look at the relevant scientific literature shows that attempts to control opinions, in the sense of their “optimization”, are doomed to fail because of the complexity of the problem. The dynamics of the formation of opinions are full of surprises. Nobody knows how the digital magic wand, that is to say the manipulative nudging technique, should best be used. What would have been the right or wrong measure often becomes apparent only afterwards. During the German swine flu epidemic in 2009, for example, everybody was encouraged to go for vaccination. However, we now know that a certain percentage of those who received the immunization were affected by an unusual disease, narcolepsy. In hindsight, it is fortunate that more people did not choose to get vaccinated!

Legal Issues

This raises legal issues that, given the huge fines against tobacco companies, banks, IT and automotive companies over the past few years, should not be ignored. But which laws, if any, might be violated? First of all, it is clear that manipulative technologies restrict the freedom of choice. If the remote control of our behaviour worked perfectly, we would essentially be digital slaves, because we would only execute decisions that others had actually made beforehand. Of course, manipulative technologies are only partly effective. Nevertheless, our freedom is disappearing slowly, but surely—in fact, slowly enough that there has been little resistance from the population, so far.

Tech Company Principles

“When we started Terrafugia in 2006 to make personal aviation safer, more practical, and dramatically more accessible, we were told over and over again that what we were doing was impossible. It turns out that quite the opposite was the case. Our efforts have been a key inspiration in starting a whole new ‘flying car’ and urban air mobility industry. As part of creating that industry, I spend a lot of time thinking about regulations and standards to ensure that these new vehicles are safely and responsibly built and operated. Having a tool like the Ethical OS is incredibly useful in that process as it provides a launch point from which to think about things from a fresh perspective. Aviation is a very established industry, but the implications of widespread, on-demand, hyper-local flights are something we aren’t necessarily equipped to think about using the existing paradigms. The Ethical OS framework is applicable beyond just digital work and is a valuable thought experiment for anyone whose business it is to change the world with technology.”

  1. Be socially beneficial.

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

  2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
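One common diagnostic for the kind of unfair bias mentioned above is the gap in a model's selection rates between groups (the demographic parity difference). The sketch below uses made-up decisions and is an illustration of that diagnostic, not Google's methodology:

```python
# Illustrative fairness check: compare a model's positive-decision rate across
# two groups. The decision lists are fabricated for the example.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

group_a = [1, 0, 1, 1, 0, 1]   # hypothetical decisions for group A
group_b = [0, 0, 1, 0, 0, 1]   # hypothetical decisions for group B

gap = abs(selection_rate(group_a) - selection_rate(group_b))
print(f"demographic parity difference: {gap:.2f}")  # a large gap can flag unfair bias
```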

  3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.
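As a toy example of the "monitor their operation after deployment" point (the baseline rate, threshold, alert, and sample predictions are all assumptions for illustration):

```python
# Sketch of a post-deployment monitor: raise an alert if the live positive-
# prediction rate drifts far from what was observed during constrained testing.
# The baseline, threshold, and sample predictions are invented for the example.

BASELINE_POSITIVE_RATE = 0.10   # rate measured in the constrained test environment
ALERT_THRESHOLD = 0.05          # how much drift we tolerate before alerting

def drift_alert(live_predictions: list[int]) -> bool:
    """Return True when the live positive rate drifts beyond the threshold."""
    live_rate = sum(live_predictions) / len(live_predictions)
    return abs(live_rate - BASELINE_POSITIVE_RATE) > ALERT_THRESHOLD

live = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]   # 50% positives observed in production
if drift_alert(live):
    print("alert: deployed model behaviour differs from tested behaviour")
```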

  4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

  5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

  7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use

Nature and uniqueness: whether we are making available technology that is unique or more generally available

Scale: whether the use of this technology will have significant impact

Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

AI for the long term

While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.