4 November 2017

Ethiopia: Foible in Too Much Reason

Opinion

Artificial Intelligence (AI) is dangerous, unethical and inhumane.

The only problem is that the massive technology firms trying to make AI a reality, such as Google or Facebook, want us to think that it is so complicated that it is better not to ask one fundamental question: how does an AI "reason"?

Most basic technological devices like laptops or phones function within the parameters of "command" and "execute". We command a computer to do a task and it executes it. However, to find out where intelligence plays a role, we have to look at smart technology.

Devices that are "smart" are more advanced in executing our commands. Their smartness lies in their ability to learn from the user's behaviour. They are programmed to provide multiple "suggestions" for solving the tasks they are commanded to execute. Siri, the virtual assistant in Apple's operating systems, is a good example of smart technology.

When I ask Siri to find me the nearest pizza shop, not only does the virtual assistant do as told, but it can also learn from my interactions to develop suggestions for the next time I need a fast food restaurant (my behavioural trends). If I live in the Bole area of Addis Abeba and always choose Efoy Pizza, the next time I am at Ayat and starving for a pizza, Siri would suggest the Efoy Pizza branch at Ayat.

This is because Siri has "learnt" from my behaviour, spotting continuity in my observed behavioural trends when choosing a pizza place. That does not make Siri intelligent; it merely makes it smart. Siri studied my pattern of behaviour and acted on the next logical step.
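Apple does not publish Siri's internals, so any code here is only a guess at the general technique. Still, a minimal sketch of this kind of pattern-learning, counting past choices and suggesting the most frequent one, might look like the following in Python (the SuggestionEngine name and its methods are invented for illustration):

from collections import Counter

class SuggestionEngine:
    """Purely illustrative: suggest a restaurant by counting past choices.
    A guess at the general technique, not Siri's actual code."""

    def __init__(self):
        self.history = Counter()  # restaurant name -> times chosen

    def record_choice(self, restaurant):
        self.history[restaurant] += 1

    def suggest(self):
        # Suggest the most frequently chosen restaurant, if any history exists.
        if not self.history:
            return None
        return self.history.most_common(1)[0][0]

engine = SuggestionEngine()
for _ in range(5):
    engine.record_choice("Efoy Pizza")    # the repeated choice in Bole
engine.record_choice("Another Pizzeria")  # a one-off choice
print(engine.suggest())                   # -> Efoy Pizza, the behavioural trend

The point is that the "learning" is nothing more than bookkeeping over past commands; the device never steps outside the task it was given.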

Therefore, the definitions one finds for artificial intelligence on Google all describe smart technologies that are incorrectly, and intentionally, labelled AI.

So, when does smart technology cross over and become intelligent?

An AI's algorithm is not based on command and execute, but on the mathematics of logic: it is guided by reason and grounded in probability.

The ability to reason is the foundation of intelligence. Before we make a decision, we always reason, consciously or sub-consciously. We can do so because we possess free will. Plants' and animals' choices are driven solely by responses to their environment (stimuli). A sunflower cannot decide not to follow the movement of the sun, the same way a bird cannot choose not to fly simply because it does not feel like it.

This is also true for technology. Smart technology functions as it is programmed to function, through the algorithm of command and execute. It can learn to adapt and evolve, but it does not have the free will to exercise those abilities outside the parameters of a set task. For example, the latest smart tech (falsely named AI) from Google's DeepMind, known as AlphaGo Zero, was able to play Go, a complicated Chinese board game, and re-program itself to create advanced moves without the help of humans.

However, because it lacks free will, it could not program new moves in areas that have nothing to do with that board game. It cannot develop new code to fly a plane or even play chess, because it is still limited to the parameters of Go.
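AlphaGo Zero's actual method (deep neural networks combined with Monte Carlo tree search) cannot be reproduced in a few lines, but a toy sketch of the same idea, an agent that improves at exactly one game purely by playing against itself, can be shown with a far simpler game. The sketch below, on tic-tac-toe, is an assumption-laden stand-in and not DeepMind's code; note that everything it learns is indexed by tic-tac-toe boards and is meaningless anywhere else:

import random

values = {}  # board state (tuple of 9 cells) -> estimated win chance for 'X'
ALPHA, EPSILON = 0.2, 0.1
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def legal_moves(b):
    return [i for i in range(9) if b[i] is None]

def play_one_game():
    board, player, seen = [None] * 9, 'X', []
    while legal_moves(board) and not winner(board):
        if random.random() < EPSILON:
            move = random.choice(legal_moves(board))  # explore a random move
        else:
            # Exploit: rate each successor state by its learned value for 'X'.
            scored = []
            for m in legal_moves(board):
                nxt = board[:]
                nxt[m] = player
                scored.append((values.get(tuple(nxt), 0.5), m))
            # 'X' seeks high-value states, 'O' seeks low-value ones.
            move = max(scored)[1] if player == 'X' else min(scored)[1]
        board[move] = player
        seen.append(tuple(board))
        player = 'O' if player == 'X' else 'X'
    w = winner(board)
    outcome = 1.0 if w == 'X' else (0.0 if w == 'O' else 0.5)
    for s in seen:  # nudge every visited state towards the final result
        values[s] = values.get(s, 0.5) + ALPHA * (outcome - values.get(s, 0.5))

for _ in range(10000):
    play_one_game()
print(len(values), "tic-tac-toe positions learned; none transfer to chess")

Every entry in the agent's memory is a tic-tac-toe position. However long it trains, nothing in that table says anything about chess, let alone about flying a plane.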

Humans can exercise free will not only because we can see the world objectively but also because we can internalise our environment subjectively. This ability to look at events from within rests on two elements that are the foundations of free will: self-awareness and self-actualisation. The latter means the ability to examine one's role, and those of others, within a set environment.

To self-actualise, one needs to be self-aware, aware that one exists. These two abilities are only evident in humans. The ability to question one's existence (self-awareness) and the ability to examine the role of one's life and of those around us in a set environment (self-actualisation) are what set us apart from every other being on the planet.

However, even though humans have free will, through evolution we have realised that with it comes social responsibility. Social norms guide us to make sure that the reasoning behind our decisions is compatible with our community. Such standards are rules and guidelines set by humanity to regulate individual behaviour so that it can function within a society. To enforce social order, laws, socially acceptable practices and commonly held values make it possible for humans to live in set communities.

Social norms are uniquely human because they help us develop humanism, which makes it possible to feel empathy towards our fellow human beings.

AI, however, is guided only by logic in the reasoning behind its decisions. Its choices are rationalised solely through the prism of logic. It cannot learn social responsibility, because it neither has, nor is able to develop, the human emotions and social norms that are the building blocks of social responsibility.

The drawback of decisions based on logic, minus humanism, is that they are founded entirely on mathematical probabilities for the best outcome of a set problem. At times, the best response to a situation can be illogical yet humane.

Let us say we are riding in a futuristic AI-controlled car on a highway at 100Km an hour. We see a van fully engulfed in fire with a little girl trapped inside. We demand that the AI car stop, but it will not. The AI has correctly calculated that suddenly halting a car moving at such a high speed would most probably crash it, kill us and endanger others on the road. It may also correctly figure that the little girl has little chance of survival; thus, the logical decision not to stop is the "right" one.
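In purely numerical terms, such a controller's reasoning might resemble an expected-value calculation. The sketch below uses probabilities invented entirely for illustration; nothing about any real autonomous-driving system is claimed here:

# All probabilities below are invented for illustration only.
p_girl_dies_if_we_stop     = 0.90  # assumed: a rescue attempt rarely succeeds
p_occupant_dies_if_we_stop = 0.70  # assumed: a sudden stop at 100Km/h crashes the car
p_others_die_if_we_stop    = 0.50  # assumed: the crash endangers other road users
p_girl_dies_if_we_continue = 0.95  # assumed: the girl is unlikely to survive regardless

expected_deaths_if_stop = (p_girl_dies_if_we_stop
                           + p_occupant_dies_if_we_stop
                           + p_others_die_if_we_stop)
expected_deaths_if_continue = p_girl_dies_if_we_continue

# A purely logical controller picks whichever action minimises expected deaths.
action = "stop" if expected_deaths_if_stop < expected_deaths_if_continue else "continue"
print(action)  # -> continue: the "right" answer by the numbers, and only by the numbers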

Yet our decision to stop the car, though not reasonable, is the right, humane decision. Even if the little girl had little chance of survival and rescuing her put us at a high probability of fatality, our decision would be guided by human emotion (empathy for the girl) and social norms (duty) to fulfil a responsibility, regardless of the lack of logic in our decision-making.

When the AI decided based on logic, it saved two lives rather than risking more than three. The danger of AI is that even if it is programmed to help humans, the mere fact that its decision-making lacks the building blocks of social responsibility will deter it from doing so. The question is not whether humans can create AI, but whether we should. Its way of reasoning will automatically endanger the humanism of humans.

Neftalem Fikre has a background in International Relations, ICT, Sociology and Behaviourism.

