The scientific community separates AI into two general categories: Strong AI and Weak AI. A Weak AI is capable of completing a narrow task, while a Strong AI can complete a general one. As a consequence of this definition, a Strong AI may embody human characteristics of intelligence such as creativity, risk assessment, and decision making -- but these attributes are not necessarily intrinsic to the concept of a Strong AI. So long as it can accomplish tasks beyond the specific ones programmed into it, it qualifies as a Strong AI: a program that can print a message written directly into its source code onto a sheet of paper is weak, whereas one that can write original ideas is not. This definition may make it seem like modern conversational AI (e.g., Siri) are Strong AI, but they are not. They are merely thoroughly-tested algorithms that parse semantics and reply based on massive databases of past conversations.

The use of the word "merely" in the prior paragraph somewhat discounts the fact that a computer is a hunk of plastic, silicon, and copper that we arranged so that it could make billions of calculations per second. This process, remarkable as it is, is not the focus of this article. The important thing to note is that computers do not exist in nature and do not innately learn. They are limited by the imagination and programming capacity of their creators; they cannot yet accomplish freeform tasks.

If that is the case -- that AI can only do what we tell it to -- then how have we managed to program an AI that beats the best human chess players? The answer is twofold.

1. Brute Force. Computers may not be smarter than us yet, but they are far faster and more powerful calculators. To continue with the example of chess, modern chess engines examine hundreds of thousands of variations up to 20 moves in advance. An engine designer would not need to be a chess grandmaster to create an AI that defeats a grandmaster; the computer's superior ability to calculate does the work. A minimal sketch of this brute-force search appears after this list.

2. Machine Learning. Machine learning is the ability of a machine to gradually adapt and make improvements to itself without the direct influence of a programmer. Not only can computers make calculations faster than humans, they can make minor adjustments to themselves faster than we can fine-tune them. Because of this, many optimization problems in computer science and engineering are solved via machine learning, from face-tracking software to Google Translate. Machine learning manifests in a variety of ways; however, our focus will be on one particular form with clearly defined steps: the genetic algorithm.
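To make point 1 concrete, here is a minimal sketch of brute-force game search. It is illustrative, not a real chess engine: the game is a toy (players alternately take 1-3 stones; whoever takes the last stone wins), and the game rules, stone counts, and the `best_move` function are all assumptions made for the example.

```python
# Brute-force (minimax) search on a toy game: players alternately remove
# 1-3 stones, and whoever takes the last stone wins. Chess engines apply
# the same exhaustive principle, plus heavy pruning, to far larger trees.

def best_move(stones, maximizing=True):
    """Return (score, move): score is +1 if the maximizing player wins."""
    if stones == 0:
        # The previous player took the last stone and won, so the player
        # now to move has already lost.
        return (-1, None) if maximizing else (1, None)
    best = None
    for take in (1, 2, 3):
        if take > stones:
            break
        score, _ = best_move(stones - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best

score, move = best_move(10)
print(f"With 10 stones left, take {move}")  # take 2, leaving a multiple of 4
```

Run on 10 stones, the search discovers the winning strategy (always leave the opponent a multiple of four) without that strategy ever being written into the program; the designer supplied only the rules.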

A genetic algorithm is so called because it mimics the evolution of an organism. First, a single, trivial algorithm is created with a list of input variables and choices it can make. That algorithm is the first “generation.” The algorithms in the second generation are the “children” of the first algorithm, modeled after their parent. But as in the evolution of species, children have minor mutations in their “DNA,” or code. These mutations are what allow algorithms to “learn,” through a survival-of-the-fittest process. Whichever mutated algorithms achieve the highest level of success, as determined by the creators of the AI, become the parents of the next generation (Bhasin).
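As a concrete, toy illustration of this loop, the sketch below evolves a random string toward a target phrase. Matching the target stands in for whatever measure of success the creators define; the population size and mutation rate are arbitrary choices made for the example.

```python
import random

TARGET = "HELLO WORLD"                      # stands in for "success"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The success test chosen by the creators: characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.1):
    # Each character has a small chance of mutating -- the change in "DNA."
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

# Generation 1: a single, trivial algorithm (here, a random string).
parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 1
while fitness(parent) < len(TARGET):
    # Reproduce with mutation, then cull: the fittest survivor becomes
    # the parent of the next generation.
    children = [mutate(parent) for _ in range(100)]
    parent = max(children + [parent], key=fitness)
    generation += 1

print(f"Reached {parent!r} after {generation} generations")
```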


This process of mutation, culling, and reproduction gradually produces stronger and stronger AI. In many cases, the final AI will act in a way that could never have been predicted by its creators. This is the embodiment of the term “artificial intelligence”: a machine that mimics intelligent life.

The limitations of machine learning are contained within the phrase “highest level of success.” A machine learning process can only be created so long as its creators are capable of testing for success (Bhasin). An algorithm that looks at images of food and determines whether or not they depict pizza has a fairly simple test: compile a list of photos that humans have labeled as pizza or not pizza, and score the algorithm on the percentage of photos it classifies correctly.
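A sketch of that test, with a hypothetical `classify` function and a hand-labeled dataset standing in for real photos and a real classifier:

```python
def accuracy(classify, labeled_photos):
    """Score a classifier on photos humans have labeled as pizza or not."""
    correct = sum(classify(photo) == is_pizza
                  for photo, is_pizza in labeled_photos)
    return correct / len(labeled_photos)

# Hypothetical stand-ins: each "photo" is reduced to one feature, and the
# classifier naively guesses "pizza" for anything round.
labeled = [({"round": True}, True),
           ({"round": False}, False),
           ({"round": True}, False)]        # a round pie that isn't pizza
print(f"Accuracy: {accuracy(lambda p: p['round'], labeled):.0%}")  # -> 67%
```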

Please watch this video from 2:00 to 4:02 for a demonstration of a genetic algorithm.

More complex algorithms may have more intricate tests. For example, something like Google Translate might compare the passages it translates against human-translated reference passages and collections of grammar rules. These tests are often the most time-consuming part of creating a machine-learning algorithm.
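A crude stand-in for such a comparison might score word overlap against a human-made reference translation. Real evaluations use far more sophisticated metrics (BLEU, for instance), but the shape of the test -- machine output checked against a trusted reference -- is the same:

```python
def overlap_score(candidate, reference):
    # Fraction of the reference translation's words that the candidate
    # reproduces. Deliberately naive; for illustration only.
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / len(ref)

print(overlap_score("the cat sits on the mat",
                    "the cat sat on the mat"))  # -> 0.8
```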

We haven’t created an AI that can nontrivially adapt to its environment, an AI that can program a more effective version of itself, or even an AI that can compose beautiful music, because the tests that would assess these goals are extremely open-ended and nigh-impossible to score numerically.

The fact that we have not yet succeeded in creating a strong AI does not mean, however, that one will never be created. One question that arises with regard to a sentient AI is: “Is it alive?”

The seven characteristics of life are as follows:

Responsiveness to stimuli;
Growth and adaptation;
The capability to reproduce;
Having a metabolism and breathing;
Maintaining homeostasis;
Being comprised of cells;
Passing traits onto offspring

Responsiveness.
This characteristic is trivially met by a strong AI. A strong AI, by definition, is responsive not only to its environment, but to any possible environment.

Adaptation.
This characteristic is another essential aspect of a strong AI, though it may exist not within a single generation but only across generations, as the algorithms evolve.

Reproduction.
Though it does not represent a natural, physical birth, a strong AI can write child programs that, generation by generation, improve upon it.

Metabolism.
Under strict definitions, an AI does not have a metabolism.

Homeostasis.
A program by itself doesn’t need to maintain homeostasis. In one sense, that fulfills the characteristic, but in another, the lack of a need for homeostasis signifies that an AI fails this characteristic.

Cells.
Once again, under a rigid biological definition of “cell,” an AI fails this requirement. In a contrived way, however, an AI is comprised of cells -- bits.

Passing on Traits.
This final characteristic is the very embodiment of a machine learning genetic algorithm. The final algorithm is created because every successful parent algorithm passed its successful traits onto its children, and mutation by mutation the end product was born.

Though an AI does not satisfy all seven characteristics of life, humanity will inevitably be responsible for its categorization. Anthropocentrism -- or human-centric philosophy -- will no longer suffice as machines begin to emulate and displace humans on scales not before seen.

Philosophers have long contended that all humans have a certain set of fundamental rights. These rights, articulated in numerous different ways throughout history, have nonetheless become entrenched in our society’s laws and customs and are encoded into documents from the Magna Carta to the U.S. Constitution to the U.N. Declaration of Human Rights. We believe that human rights should not be extended to AI. A term such as “intelligent rights” may function better grammatically here, but we don’t believe that intelligence alone renders a thing deserving of the protections commonly understood as human rights.

AI are definitively not human -- nor even definitively alive. Before discussing whether or not human rights should be extended to AI, it’s important to establish that such rights do not apply by default.

Inability to experience suffering. An AI, unless programmed to, does not experience suffering. In fact, even if programmed to, the AI would not have nerve endings or pain receptors, and as such would require artificial versions to experience harm. Though this is technically possible, it isn’t useful to speculate about, as there seems to be no logical reason to develop these. Assuming an AI doesn’t suffer, preventing harm to it doesn’t rise to the same level of importance as preventing harm to a human being, or even an animal.

Weak definition of AI. What sort of compilation of code would receive these rights? Would a two-line Python program that prints out the phrase “Hello World” receive the right to pursue life, liberty, and property? What about an algorithm from an early generation of a machine learning process? Suppose that algorithm could only say the word “Jazz” over and over, but its descendants would eventually be able to deliver original lectures on the composition of jazz music and field questions from the audience. Clearly, if some AI were to be granted human rights, the great-great-great ... great-grandmother algorithm would not possess intelligent qualities, but the great-great-great ... great-grandchild would. There must then be one generation within the lineage in which the parent would not be granted human rights, but the child would. The distinction between parent and child would be very small, perhaps a single line of code. This extremely thin line makes it difficult to extend human rights to AI at all.
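For concreteness, the two-line Python program in question would be nothing more than this:

```python
# The entire "AI" under discussion.
message = "Hello World"
print(message)
```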

Lack of Uniqueness. At least as realized on our current computers (though it’s difficult to imagine a computer that would work differently), any two programs with identical text executed on the same compiler are functionally the same. The right to life, then, is nigh-impossible to provide to an AI. If a programmer writes a few lines of code and then deletes them, is that murder? What if they remember what they wrote and later re-write an identical program? What if they forget entirely, but another programmer writes the same exact code in a completely different part of the world? A written instance of a computer program is not itself alive -- rather, it is the idea behind its writing that gives it life. Therefore, to give human rights to a computer program is to give human rights to an idea.

The Shred of Doubt. John Searle argues that just because a computer simulates intelligence does not imply that it truly has intelligence. He imagines a man in a room with a rulebook for manipulating Chinese symbols. Questions written in Chinese are passed into the room, and by mechanically following the rules, the man passes back appropriate Chinese replies. To anyone outside of the room unable to see in, the man would appear to understand Chinese. Yet in reality, the man doesn’t understand Chinese at all and is merely following simple instructions. Searle applies this argument to the workings of a computer: a computer doesn’t understand what it does; it merely obeys instructions (Searle). This argument initially seems antithetical to the nature of a Strong AI, which by definition can learn, adapt, and complete tasks that may seem outside the bounds of its original instructions. However, since humanity has never seen the workings of a Strong AI (nor even fully understood our own brains), we do not know whether a finite set of instructions can satisfy the notion of intelligence -- whether an AI can truly understand what it does rather than merely appear to.

Self-Preservation. Extending protections to AI is a dangerous game for us to play. Human civilization is at risk merely by allowing an AI to surpass us at all, let alone without means to monitor and impede its progress. In order to guard against the dangers of a rogue superintelligence, we must maintain control over the AI we develop. See: Risks of Superintelligence.