Policing Consciousness: A Model to Create Ethical AI Policy

Deepinder Uppal
Jun 22, 2020

From predictive policing initiatives to hunting terrorists on social media, Artificial Intelligence (AI) is a hot topic in the tech industry right now. It is an innovative development that, through machine learning and big data, has the potential to change the way we live. But the world of AI and machine learning is still developing; while it certainly has its benefits, it has not yet reached its full potential. The potential applications of AI are vast, yet many individuals remain reluctant to adopt it fully. As AI continues to evolve and make its way into everyday life, industry leaders are discussing ethical policies and assessing potential risks.

Understand What AI Is, and What It Is Not… at Least Not Yet

In 1986, the late MIT professor Marvin Minsky, one of the founding fathers of artificial intelligence, defined AI as “the science of making machines do those things that would be considered intelligent if they were done by people.” In the modern context, this definition is not necessarily all-encompassing. Instead, it may help to think of AI as a discipline rather than a specific technology. Keeping with this framing, we can seek to define AI in terms of its goals, arriving at a candidate definition of the form: “AI is the discipline that aims at building…”. This form is similar to the possible answers Russell and Norvig ruminated on in their seminal publication, Artificial Intelligence: A Modern Approach, known in the AI community simply as AIMA.

Using this definitional form, we can reach a number of interesting responses, which can be placed along two dimensions: whether we measure an AI by its thought processes or by its behavior, and whether we judge it against human performance or against an ideal standard of rationality. AIMA summarizes the resulting quartet as the four possible goals of AI: thinking humanly, acting humanly, thinking rationally, and acting rationally.

This quartet provides a seemingly exhaustive summary of modern use cases for AI. As an example, the “Hosts” in the recent Westworld TV series would fall under the thinking-humanly (“Human/Reasoning”) quadrant, whereas the acting-humanly (“Human/Act”) position is occupied most prominently by Alan Turing, whose test is passed only by those systems able to act sufficiently like a human. Using the definitional form described above, we are now in a position to understand what we would be policing when attempting to create ethical AI policy.
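To make the quartet concrete, here is a minimal sketch of the classification laid out as a data structure; the quadrant keys follow the two dimensions above, and the example systems are illustrative, drawn from the discussion in this section and from standard treatments:

```python
# The AIMA "four possible goals of AI" as a simple data structure.
# The (standard, measure) keys pair the two dimensions described above;
# the example systems are illustrative mappings, not an official list.
AIMA_GOALS = {
    ("human", "thought"): "thinking humanly: cognitive modeling; Westworld's Hosts",
    ("human", "behavior"): "acting humanly: systems built to pass the Turing test",
    ("rational", "thought"): "thinking rationally: logic and formal theorem provers",
    ("rational", "behavior"): "acting rationally: agents maximizing expected utility",
}

for (standard, measure), goal in AIMA_GOALS.items():
    print(f"{standard:>8} / {measure:<8} -> {goal}")
```

Framing the goals this way makes explicit which quadrant a given policy proposal is actually targeting.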

For the purposes of this analysis, i.e. understanding how meaningful AI policy can be implemented, let us use John Searle’s segmentation of the types of AI: strong and weak. According to Searle, strong AI is truly sentient, such that an appropriately programmed AI can literally be said to understand and have other cognitive states. Contrast that with weak AI, where an appropriately programmed machine can simulate thought yet is devoid of cognition in and of itself. Finally, we have artificial general intelligence (AGI): human-level, and potentially beyond-human-level, intelligence. Often the antagonist in science fiction dystopias, AGI is best understood as an extension of strong AI. This hypothetical state of AI has the potential to far surpass humanity in terms of intelligence and perhaps even modes of consciousness.

Establish Potential Risks and Pitfalls

There are numerous ongoing efforts that purport to provide frameworks to support the creation of AI legislation. In 2018, a G7-backed international oversight panel, the Global Partnership on AI, was proposed to analyze the socioeconomic and ethical implications of AI and use those insights to steer AI development for the G7 nations. In 2019, the White House issued an executive order on AI, and its Office of Science and Technology Policy subsequently released draft guidance for the regulation of AI applications. In February 2020, the European Union followed suit and published its own draft strategy for regulating AI.

Though they differ in detail, all of these frameworks agree that public policy should focus on the technical, economic, and legal implications of AI. Specifically, the mandate to regulate AI should be broad enough to encompass three distinct areas: the governance of AI itself; accountability, i.e. establishing who is responsible for an AI-driven decision; and the privacy and safety issues often attributed to AI capabilities.

The first and most crucial stage of creating ethical AI policy is establishing the risks and pitfalls associated with these areas. For example, the current dialog on regulating lethal autonomous weapon systems (LAWS), lethal autonomous robots (LAR), and autonomous weapon systems (AWS) hopes to provide distinct criteria for governing the use, and active monitoring, of autonomous weapons. This would likely include institutionalizing consistent standards that effectively act as the technical specifications required for the creation of LAWS. The intent is for these specifications to be governed and updated by a community of experts together with a legal and political verification process.

Maintain Transparency

While the media may portray AI as rogue robotic machines that take over the world, such premises are rooted in fiction. In reality, a driverless car has no intention of taking over the world, but it does come with its own set of risks: for example, the risk of entrusting our safety to a “black box.”

Many AI-driven processes leverage deep learning models that ingest millions of data points in order to isolate specific attributes, sometimes in real time. These complex processes are then used to make crucial decisions, for instance a driverless car deciding whether it can avoid hitting a pedestrian. The complexity of these processes creates a unique challenge for AI regulators: it is simply not acceptable to claim, “the car made a decision, but we cannot tell you why.” Yet these notoriously opaque processes provide exactly that “black box” experience to their users.

Researchers have put forward a number of proposals to combat the black-box risk, from “observational” approaches that let users infer an AI’s behavior from its inputs and outputs, to “surgical” approaches that would allow regulators to audit the neural-net activity associated with a specific behavior. Perhaps, though, we should accept that we may never be able to parse specific behavioral responses out of millions of lines of code. As human beings, we can still understand process-related activities; so perhaps the goal shouldn’t be to understand each and every decision born of petabytes of calculations. Instead, we can mitigate black-box risk by understanding, and auditing, the process an AI goes through in order to make its decisions.
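As a minimal sketch of what an “observational” audit might look like in practice (the model, dataset, and feature names here are illustrative assumptions, not any specific vendor’s system), one can probe a black box from the outside by permuting each input and watching how much performance degrades:

```python
# A minimal sketch of an "observational" audit: rather than opening the
# black box, perturb its inputs and observe how its behavior changes.
# The model, data, and feature names below are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for an opaque, AI-driven decision process.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["speed", "distance", "lighting", "weather", "traffic", "time_of_day"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in accuracy:
# a large drop means the model's decisions lean heavily on that input.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean,
                    result.importances_std), key=lambda r: -r[1])
for name, mean, std in ranked:
    print(f"{name:12s} importance: {mean:.3f} +/- {std:.3f}")
```

An audit like this does not explain any single decision, but it gives a regulator a reproducible, process-level view of which inputs drive the system’s behavior, which is exactly the kind of understanding argued for above.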

Know The Impact of Bias

Can human biases have an impact on AI and the way it interprets data? Absolutely. If humans create AI systems, their own biases can shape how those systems function and collect data. The decisions an AI makes are only as good as the data it has access to, and bad data can carry implicit racial, gender, or ideological biases. Many AI models continue to be trained on such data, allowing ever more bias to filter into the AI’s decision-making processes.

Bias in technology is nothing new. Take a simple Google search: searching for “C.E.O.” would more than likely return an entire front page of white males. The solution to bias isn’t all that sophisticated. There needs to be a standardized approach that increases the level of rigor required of the data provided to an AI. For categories like race and gender, the solution is to provide better data samples so that the data sets offer better representation.
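As a minimal sketch of what that added rigor could look like (the column names, categories, and reference proportions are illustrative assumptions), a pre-training audit might compare group representation in a dataset against a reference population and derive reweighting factors:

```python
# A minimal sketch of a pre-training representation audit.
# Column names, categories, and reference proportions are illustrative.
import pandas as pd

# Toy training set: heavily skewed toward one group.
df = pd.DataFrame({
    "gender": ["male"] * 70 + ["female"] * 30,
    "label":  [1, 0] * 50,
})

# Reference distribution we want the training data to reflect.
reference = {"male": 0.5, "female": 0.5}

observed = df["gender"].value_counts(normalize=True)
print("Observed representation:\n", observed)

# Upweight underrepresented groups so each contributes its reference share.
df["sample_weight"] = df["gender"].map(lambda g: reference[g] / observed[g])
print(df.groupby("gender")["sample_weight"].first())
```

Weights like these can then be passed to most training APIs (for example, the sample_weight argument accepted by scikit-learn estimators’ fit methods) so the model effectively trains on a balanced population, a cheap first step before collecting better data.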

While this approach isn’t necessarily a technological hurdle, it forces researchers and implementers to expand their own worldviews. It forces them to challenge their current conceptions of socioeconomic status, racial demographics, and diversity. As their experience grows beyond its current confines into a more pluralistic, global viewpoint, AI systems themselves will become less biased.


Originally published on deepinderuppal.net


Deepinder Uppal

Based in Detroit, Deepinder Uppal is the VP of Innovation and Technology in the Public Sector for Information Builders. Learn more @ deepinderuppal.org