There is one more issue to be covered:
What is consciousness?
What is it that makes you ‘you’?
When you wake up in the morning, you see a world around you. It is the same world that was there when you went to sleep. Your clothes are where you expect them to be. There is a kind of recorded database of information that the being you call ‘you’ can access. It goes back, day after day, through your past. Some days were not particularly remarkable; you don’t remember much about them because there isn’t much to remember. Your mental database keeps information selectively, storing the most important events as deep memories and the less important events as shallow memories that might later be reinforced. (If you go through an experience very similar to one you don’t fully remember, you will remember it vaguely, a sensation called ‘déjà vu,’ and the memory of the memory will become another memory.) Other events are profound and make a deep impression on your mental database. You will remember death, pain, suffering, and certain events of extreme happiness in great detail.
Through it all runs a grand conductor: there is something we call ‘our consciousness’ that actually makes the decisions about what to think about and how to think about it.
What is this ‘consciousness?’
Where does it come from?
How does it get its power?
Various people have looked at this issue in great depth from several different perspectives. Francis Crick, the co-discoverer of the structure of DNA, examined it from one perspective in his book ‘The Astonishing Hypothesis.’ Crick believed that science could provide information to help us understand just about anything. DNA-based life forms operate in accordance with certain scientific principles: the atoms of DNA join together in certain ways and form a complex coded message; the neurons operate according to certain rules; the electrical pathways of the brain are understandable; we can hook people up to wires, let them think about certain things, and record the location and amount of electrical activity.
Each ‘thought’ is a message. This message is decoded and turned into electricity. In addition to the electrical activity, the neurons change chemically in understandable ways. The neurons are connected together with millions of tiny ‘wires’ that move information from one place to another. If large amounts of information are moving along a specific pathway, the brain will somehow recognize that a larger pathway is required and will either expand the existing pathway’s capacity or build a new path for that same information. Each of these changes can be understood individually with an understanding of the nature of chemical bonds, the energy released or absorbed when these bonds break and form, and the quantum mechanical forces that hold everything together.
After Francis Crick won the Nobel Prize, he had plenty of money to do research any way he wanted. He wanted to study the attributes of the brain and of human beings that set us apart from other animals. Humans clearly think differently than other beings on Earth. He wanted to find out whether some structural difference in our brains, our nervous systems, our DNA, or some other physical structure that scientists could study might account for the difference. Humans have a self-awareness that other beings don’t have. We have a chain of thoughts that we are able to control, allowing us to make complex plans, build incredibly complicated tools, and communicate in ways that no other animals can match. We can formulate thoughts in our heads, decide what we wish to say to other people, edit it in our heads before we say it (to make sure we are clear and don’t offend others), and then say exactly what we want to say. No other animals appear capable of these things.
Some people use the term ‘soul’ to refer to the stream of controllable consciousness that humans have. It is an internal awareness, a kind of homunculus (a Latin word meaning ‘little person’) that makes us consider ourselves special and unique; it is a kind of ‘essence’ of us. Crick wanted to study this ‘soul’ to see if he could correlate the mental activities we associate with our self-awareness with any sort of electrical, chemical, or other changes that scientists might be able to study.
He studied the cellular structure of the brain in great detail. He found that the brain resembles a truly massive computer, with each of the cells called ‘neurons’ connected to large numbers of other cells by electrical pathways. Each neuron is like a computer processor: it takes certain inputs, processes them in some way, and then creates outputs through several of the connected pathways; these outputs go into other neurons that process them further and emit outputs that go to still other neurons, and so on.
Some of the neurons appear to be memory storage devices, able to release their recording given the right combination of stimuli from the input wiring. Others appear to be what computer designers call ‘input/output devices’: they either take in information from the outside world (our eyes, ears, and other senses do this) or send information out to appendages that are remote from the brain, like our legs and arms. Some of the neurons appear to work like what programmers call ‘compilers’: they take inputs from several sources and put them together into a coding sequence that will then process other information. They appear to be computer programs that write new computer programs. Other neurons work in ways that no existing silicon-based computer can match: they build new pathways, allowing us to process new information in unique new ways and to revise the way we think about things.
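The input-process-output behavior described above can be sketched in a few lines of code. This is a deliberately crude illustration, not a model of real biology; every name and number in it is invented for the example:

```python
# A minimal sketch of the "neuron as processor" idea: each unit weighs
# its incoming signals and fires (1) only if they clear a threshold.

def neuron(inputs, weights, threshold):
    """Weigh the incoming signals and fire if they clear the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two "sensory" neurons feed a third; the outputs of one layer become
# the inputs of the next, just as the text describes.
layer1 = [neuron([1, 0], [0.6, 0.4], 0.5), neuron([1, 1], [0.3, 0.3], 0.5)]
output = neuron(layer1, [0.7, 0.7], 1.0)
print(output)  # 1
```

Real neurons are vastly more complicated, but this is the core of the analogy: simple units, wired together, computing something none of them computes alone.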
This makes a lot of sense:
Our minds are constantly learning new things. They wouldn’t be able to process the incredible amounts of information they handle if they didn’t grow and adapt over time. Your infant mind, for example, couldn’t process information about cooking food, cleaning windows, or riding a bike. It needs to be able to create subroutines that can be compiled into larger programs.
Crick did a great deal of research into the physical structures of the brain in an attempt to figure out what the thing we call ‘consciousness’ was, how it worked, and if it had a location, where it was located. He found that the brain is an incredibly complex mechanism. You might understand this if you understand the basics of modern computers.
Each computer does its work in a device called a ‘processor.’ You can find the processor on an older computer (from the era when processors were separate parts) by opening the case and looking for a square about an inch on a side with an immense spider web of wires leading to and away from it. I am writing this in 2018 and, at this time, people are expanding computer capabilities by building computers with multiple processors. The first processor splits the task into smaller tasks. It then passes each task to another processor, or another part of the same processor, for processing. At the end, the processed information is put back together again and used to control the letters on the screen, the sounds you hear, the pictures you see, the settings on the controls of a self-driving car, or whatever the computer was designed to do. So far, the processors are laid out side by side in a flat plane, which you may think of as two-dimensional (2D) processing. Having a series of processors makes computers far more capable. But they don’t have anything close to the processing power of the brain of even the simplest animal (like a fly), because the animal’s brain has processors stacked together in a three-dimensional pattern.
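The split-process-recombine pattern described above can be sketched as follows. This is a hedged illustration using Python threads as stand-ins for separate processors, and `process_chunk` is an invented placeholder for whatever work one processor would do:

```python
# Divide-and-recombine: a coordinator splits a job into chunks,
# hands each chunk to a worker, then merges the partial results.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # stand-in for the work a single processor would perform
    return sum(x * x for x in chunk)

def run(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(process_chunk, chunks)
    return sum(partials)  # put the processed pieces back together

# the parallel answer matches the serial one
print(run(list(range(1000))) == sum(x * x for x in range(1000)))  # True
```

The essential point is the shape of the computation, not the mechanism: split, work in parallel, reassemble.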
Crick explains the way this works in his book ‘Life Itself.’ Each ‘fold’ of the brain is essentially a 2D network of processors. The folds communicate with each other through multiple neural pathways, just as a computer with multiple processors communicates across its 2D network. But there are certain things the fly’s mind can do that the network of processors in the computer can’t do. The fly’s brain can fold the processor network over into three or more folds. It can then make connections to the folds above and below. It can even skip over a fold and make connections to processors that are two folds up or down. This leads to true 3D processing. The differences in computational capability between 2D and 3D processing are vast. The fly can do incredible things that no computer could come close to doing: it can recognize its mates; it can recognize food; it can lay eggs; and, of course, it can fly. The 3D neural network is far more capable than the 2D model.
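A crude back-of-the-envelope calculation shows why stacking 2D sheets multiplies connectivity. The grid sizes here are arbitrary, and real neural wiring is far richer than nearest-neighbour links, but the count makes the point:

```python
# Count the wires in a flat sheet of processors versus a 3D stack
# of such sheets, where each unit also links to the unit above/below.

def links_2d(n):
    # an n-by-n sheet: horizontal plus vertical nearest-neighbour links
    return 2 * n * (n - 1)

def links_3d(n, layers):
    # `layers` sheets stacked, plus vertical links between adjacent sheets
    return layers * links_2d(n) + (layers - 1) * n * n

print(links_2d(10))      # 180 links in one fold
print(links_3d(10, 10))  # 2700 links in a ten-fold stack
```

Ten sheets don't just give ten times the wiring; the between-fold links come on top, and skipping folds (as the text describes) would add still more.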
Artificial Intelligence Or Real Intelligence?
In 1936, the mathematician Alan Turing published a paper called ‘On Computable Numbers, with an Application to the Entscheidungsproblem.’ Years later he wrote a short companion essay on thinking machines, titled ‘Intelligent Machinery, A Heretical Theory,’ that was not published during his lifetime.
Turing believed that the essay was too controversial to publish.
He had ideas that got him into a lot of trouble his entire life; he thought that his ideas about intelligent machinery would give his critics too much ammunition, and he never published the essay at all; it did not appear in print until 1996, 42 years after his death. In this essay, Turing claims that it would be possible to construct a machine that could be taught, could solve problems that were unsolvable by algorithms, and could learn and become more complex over time, eventually becoming so intelligent that it wouldn’t be possible for a human interacting with it remotely to tell that it was a machine, not a real human being. This is a heretical idea because, like many other scientific explanations for the things we see around us, it implies that the religious explanations for reality are incorrect. If it is possible to make a machine that thinks as humans think, how can we tell for sure that the others we interact with are really humans, with God-given souls? How can we even be sure that we, ourselves, have God-given souls that are subject to damnation or salvation by the proper thoughts? His colleagues attacked him for even thinking such things.
Here are some excerpts from the essay:
'You cannot make a machine to think for you.'
This is a commonplace that is usually accepted without question. It will be the purpose of this paper to question it.
Most machinery developed for commercial purposes is intended to carry out some very specific job, and to carry it out with certainty and considerable speed. Very often it does the same series of operations over and over again without any variety. This fact about the actual machinery available is a powerful argument to many in favour of the slogan quoted above. To a mathematical logician this argument is not available, for it has been shown that there are machines theoretically possible which will do something very close to thinking. They will, for instance, test the validity of a formal proof in the system of Principia Mathematica, or even tell of a formula of that system whether it is provable or disprovable. By Gödel's famous theorem, or some similar argument, one can show that however the machine is constructed there are bound to be cases where the machine fails to give an answer, but a mathematician would be able to. I believe that this danger of the mathematician making mistakes is an unavoidable corollary of his power of sometimes hitting upon an entirely new method. This seems to be confirmed by the well known fact that the most reliable people will not usually hit upon really new methods.
My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely. They will make mistakes at times, and at times they may make new and very interesting statements, and on the whole the output of them will be worth attention to the same sort of extent as the output of a human mind.
It is clearly possible to produce a machine which would give a very good account of itself for any range of tests, if the machine were made sufficiently elaborate. Such a machine would give itself away by making the same sort of mistake over and over again, and being quite unable to correct itself, or to be corrected by argument from outside. If the machine were able in some way to 'learn by experience' it would be much more impressive. If this were the case there seems to be no real reason why one should not start from a comparatively simple machine, and, by subjecting it to a suitable range of 'experience' transform it into one which was much more elaborate, and was able to deal with a far greater range of contingencies.
I may now give some indication of the way in which such a machine might be expected to function. The machine would incorporate a memory.
This does not need very much explanation. It would simply be a list of all the statements that had been made to it or by it, and all the moves it had made and the cards it had played in its games. These would be listed in chronological order. Besides this straightforward memory there would be a number of 'indexes of experiences'. To explain this idea I will suggest the form which one such index might possibly take. It might be an alphabetical index of the words that had been used giving the 'times' at which they had been used, so that they could be looked up in the memory. Another such index might contain patterns of men or parts of a GO board that had occurred. At comparatively late stages of education the memory might be extended to include important parts of the configuration of the machine at each moment, or in other words it would begin to remember what its thoughts had been. This would give rise to fruitful new forms of indexing.
Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious toleration from the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do in trying, say, to keep one's intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits.
Turing expected that a machine that could mimic human thought would never be built because of religious opposition. Believers had great influence in the governments of the world. They would fight against any attempt to build this machine and prevent it from becoming reality.
But he doesn’t seem to have considered an important reality of societies that divide the world into ‘nations’ and accept that nations can have and own everything they can take by force. These systems have military needs that go above all other needs. If people have to disregard religion to build better tools of war, they will do so.
Turing thought that the research needed for this kind of machine would never be undertaken, but he was wrong. Even the example he uses in this essay, of Galileo, shows exactly why. It is true that Galileo was tried and sentenced to house arrest for the rest of his life, and that his books were banned in his home country. But his final book, ‘Two New Sciences,’ which had to be published abroad, had an important military application: it contained the analysis of projectile motion that could be used to predict the exact place a cannonball would land if launched at a given angle. Without this analysis, bombardment could never be truly effective, because experience alone can’t account for all the variables. (For example, if you are used to a totally level test range, your aim will be far off if the target is even a few feet below or above the cannon; if you are used to a particular slope, a different slope will throw you off; there are too many variables to find where the cannonball will end up by trial and error. Galileo’s calculations could tell exactly where it would land.) Military planners began to realize that Galileo’s work could help them do their jobs and began to request more research in the field. Researchers were not just allowed to advance Galileo’s work; they were paid to do it. The more work they did, the better the results, and the field advanced into a very complex science, the field that we now call ‘rocket science.’
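Galileo worked geometrically, but the modern closed-form version of his projectile result makes the point concrete: the landing distance depends on the height difference between gun and target, not just the angle. The function below is an illustrative sketch under idealized assumptions (no air resistance; names invented for the example):

```python
import math

def landing_distance(v, angle_deg, height=0.0, g=9.81):
    """Horizontal distance a projectile travels before landing.
    `height` is how far the muzzle sits above the landing point
    (negative means the target is uphill of the gun)."""
    a = math.radians(angle_deg)
    vy, vx = v * math.sin(a), v * math.cos(a)
    # solve height + vy*t - (g/2)*t**2 = 0 for the positive root
    t = (vy + math.sqrt(vy * vy + 2 * g * height)) / g
    return vx * t

# the same shot lands in very different places on different terrain:
print(round(landing_distance(100, 45), 1))              # level ground
print(round(landing_distance(100, 45, height=-20), 1))  # target 20 m uphill
```

Experience on a level range tells you nothing reliable about the uphill shot; the formula handles both at once, which is exactly what made the work militarily valuable.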
We live in societies with incredibly powerful forces pushing toward war. The forces are so strong that religion gets pushed aside or ignored if ‘heretical’ research has the potential to help the military. This has happened in the field of computers and, as a result, we have started down the path that can lead to the kinds of machines Turing described in his essay. It may help to understand these machines if you understand how they came to exist, what they do, and how they work. We will see that these machines actually use processes that are very similar to the processes that Crick and other researchers have worked out to describe the operation of the human brain.
History of Smart Machines
If we use the term ‘computer’ loosely enough, computers have been around for a long time. The abacus is a simple kind of computer, adding numbers up in rows; it has been around for many thousands of years. People have used variations of what we now call the ‘slide rule’ for centuries. These basically have two sets of numbers on different scales on the rulers; line them up properly and you can do calculations like multiplying, taking powers and roots, calculating logarithms, and working with sines, cosines, and tangents. These were computers, but they were really ‘dumb’ computers. They couldn’t learn anything. They could only be operated by hand. Each device performed only the functions it had been built to perform. Nothing could be ‘programmed’ into them.
Alan Turing built his first computing machines for military purposes. In September of 1938, he began working for GC&CS, the British code-breaking organization. (Turing had already described, in his 1936 paper, an abstract device he called the ‘a-machine,’ later renamed the ‘Turing machine’; it was a thought experiment, but it laid out the logic that real computers would follow.) At GC&CS he designed a machine, the ‘Bombe,’ built for a very specific purpose: it was designed to determine the settings on the Enigma machine, a coding machine used by the German military. This electromechanical ancestor of the modern computer had to process a set of information, change its settings depending on the results of the first test, process again, and constantly revise its internal settings, making additional calculations based on the new settings. It had to be able to change its own programming.
The Enigma coding device the German military used had a set of rotors that changed letters to other letters; the first rotor might change a Z, for example, to a G; this ‘G’ would be sent to another rotor that might change it to a Y, and so on through one rotor after another to a final output. The rotors were not fixed, but would adjust. They didn’t just adjust at the end of the day or the end of a message; they could be set to adjust after each letter typed. A rotor might move 2 positions, or 6 positions, or not move at all, with each press of a key. The code operators would be given the day’s settings, which told them how far each rotor would move per keystroke. They would adjust the machine’s settings to match and type in the message.
At the destination, the message would be typed into another machine that was set the reverse way. The second machine would change the coded message back into a readable message.
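The rotor scheme described above can be sketched with a toy cipher. This is only an illustration of the stepping idea; a real Enigma also had a reflector, a plugboard, and far more intricate rotor wiring:

```python
# Toy "stepping rotor" cipher: three rotor positions add up to a shifting
# offset, and the first rotor steps after every keystroke. Running the
# same settings in reverse recovers the message, as with the real machine.
import string

ALPHA = string.ascii_uppercase

def crypt(text, shifts, decode=False):
    out = []
    positions = list(shifts)  # the day's starting settings
    for ch in text:
        offset = sum(positions) % 26
        i = ALPHA.index(ch)
        out.append(ALPHA[(i - offset) % 26] if decode else ALPHA[(i + offset) % 26])
        positions[0] += 1  # first rotor steps on every keystroke
    return "".join(out)

coded = crypt("ATTACKATDAWN", [3, 7, 1])
print(coded)
print(crypt(coded, [3, 7, 1], decode=True))  # ATTACKATDAWN
```

Because the offset changes with every keystroke, the same plaintext letter encodes differently each time it appears; only a receiver with the same starting settings can undo the sequence.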
The hardware of the machines was not secret: these machines had been created for commercial use before the war, and many were available from commercial equipment dealers and even junk shops around the world. It was easy to get the hardware. The problem was the settings: there were an enormous number of possibilities, and if you didn’t know the settings, you couldn’t decode the message. The military Enigma’s plugboard alone could be wired 150,738,274,937,250 different ways. A Polish code-breaking team had invented a machine called the ‘bomba,’ which could test settings one combination at a time. It could go through one set of settings, decode the message, and display it. Humans could then see if it made sense. If it didn’t, they would push a button and it would try the next combination. The Enigma machine had so many possible settings that, even with a giant network of bomba machines and an army of workers on the job, they couldn’t test more than a tiny percentage of them in a day. Since the settings were changed each day, they had no chance of testing them one at a time.
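The settings figure quoted above matches the standard count for the military Enigma’s plugboard, where 10 cables pair up 20 of the 26 letters; it can be checked with a one-line calculation:

```python
# Ways to choose 10 unordered letter-pairs from 26 letters:
# 26! / (6! * 10! * 2**10) -- 6! for the unplugged letters,
# 10! because the cables are interchangeable, 2**10 because
# each cable's two ends are interchangeable.
from math import factorial

pairings = factorial(26) // (factorial(6) * factorial(10) * 2**10)
print(pairings)  # 150738274937250
```

At even a million tests per second, exhausting that space would take nearly five years, which is why brute force alone was hopeless.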
Turing's machine was designed to narrow the search. It would rule out combinations that were impossible or highly unlikely and identify those that were more likely. It would narrow these down over and over, until it had a manageable number (say, a few thousand) of likely combinations. These could then be fed into the testing machinery and tried one at a time. This system worked. Turing had built a machine that could often give the settings of the day within a few hours. The British could then decode all messages sent with that day’s settings and work out the enemy’s secret plans. Many authors have claimed that this information turned the tide of the war. Before the British had the machine, their side was losing; after, it quickly gained the upper hand. Military planners can do a much better job if they know where the enemy is going to attack and where it is going to leave itself vulnerable.
After the war, Turing wanted to continue his work and see if he could make a computer with non-military applications. But he had signed an agreement not to discuss anything about his wartime work, and the British government held him to it. He couldn’t work publicly with others in the field. But he did talk and write letters, and some of the information was helpful, particularly to his American counterpart, John von Neumann.
Von Neumann
On 18 March 1939, Nature magazine published an article by the Austrian physicist Lise Meitner (written with her nephew Otto Frisch) called ‘Products of the Fission of the Uranium Nucleus.’ Dr. Meitner suggested that an atomic nucleus may split ‘like a droplet of water,’ releasing truly staggering amounts of energy. Few people in the United States seemed to realize the significance of this article. Among those who did were the recent immigrant Albert Einstein and the Hungarian physicist Leo Szilard. In August of 1939, Einstein signed a letter, drafted with Szilard, to Franklin Roosevelt, the President of the United States. Here is the critical part of the letter:
In the course of the last four months it has been made probable that it may become possible to set up a nuclear chain reaction in a large mass of uranium, by which vast amounts of power and large quantities of new radium-like elements would be generated. Now it appears almost certain that this could be achieved in the immediate future.
This new phenomenon would also lead to the construction of bombs, and it is conceivable that extremely powerful bombs of a new type may thus be constructed. A single bomb of this type, carried by boat and exploded in a port, might very well destroy the whole port together with some of the surrounding territory.
Roosevelt set up the first United States uranium research committee within weeks of receiving this letter.
Meitner had explained that a self-supporting nuclear fission reaction was possible and that this reaction could release immense amounts of energy. The problem was that such a reaction would require a special kind of uranium, an isotope called ‘U235,’ which is very rare in nature and extremely difficult to separate from the rest of the uranium. The only ways of refining it involved the use of immense amounts of electricity to make even a tiny bit of U235. To be self-supporting, the reaction would require more than a ‘critical mass’ of uranium atoms. (See sidebar for more information.)
The difficulty involves calculating something called the ‘critical mass’ of uranium. Neutrons are flowing into and out of uranium all the time. The larger the mass, the more of these neutrons stay inside the uranium and hit other uranium atoms, leading to a chain reaction. How many atoms do you need to get a chain reaction that is self-sustaining? Once you know this, you can design a bomb; without this information, you have nowhere to start. The answer has to be calculated. It turns out to be on the order of 5x10^24 atoms of U235. Both the team in the United States and the team in Germany were racing to get this number. If the German team had gotten it first, Germany would have had the bomb first.
No one knew how much U235 would be needed to make a critical mass. To figure this out, you need to do some incredibly complex mathematics. As of the beginning of 1940, the mathematical tools needed to solve this problem simply didn’t exist.
Both the Germans and the Americans realized the importance of getting the answer. If they found, as they eventually did, that the ‘critical mass’ is only a few pounds of U235, they could justify diverting enormous amounts of electricity from existing factories and building large numbers of new power plants to generate the electricity to make it, so they could build some of these bombs. If it took a few tons per bomb, or even a few hundred pounds, they wouldn’t be able to make enough U235 to make these weapons and it would not make sense to divert resources to the nuclear bombs. Both the Germans and Americans started a frantic program to solve this mathematical problem.
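The link between an atom count and a usable mass is just Avogadro’s number. The atom count below is illustrative, but it shows why a figure in this range lands in ‘a few pounds’ territory rather than tons:

```python
# Convert a count of U-235 atoms into a mass in kilograms.
AVOGADRO = 6.022e23        # atoms per mole
MOLAR_MASS_U235 = 235.0    # grams per mole

def mass_kg(atom_count):
    return atom_count / AVOGADRO * MOLAR_MASS_U235 / 1000.0

print(round(mass_kg(5e24), 2))  # about 1.95 kg, i.e. a few pounds
```

Had the answer come out in the hundreds of kilograms instead, the refining effort would have been unjustifiable, which is exactly the decision the text describes.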
The famed physicist Werner Heisenberg headed the German team. The United States didn’t have home-grown researchers with the expertise to solve this problem. Many of the leaders in this field had been born and trained in Hungary, and the American program hired the very best Hungarian physicists and mathematicians money could buy. The most important of them was the Hungarian John von Neumann.
The two men took entirely different approaches to solving the problem.
Heisenberg attacked the problem with matrix algebra, the mathematics he had earlier used to found quantum mechanics. Matrix algebra basically allows large numbers of complex, interconnected interactions to be solved simultaneously. It is a systematic approach that will definitely give the right answer, if all of the rules are followed precisely and every single calculation is done correctly. It takes what starts out as many different problems and puts them together into one truly giant problem. You solve it by adding, multiplying, dividing, and performing other calculations on large ‘matrices’ of numbers. When you finish, you end up with something called the ‘determinant’ of the matrix, which is a single number. This number is the answer to the puzzle.
The problem with this approach is that millions of separate calculations have to be made to get this one number. Make a single mistake in any one of them, and the number you get is wrong. Heisenberg had large teams of accountants going over the calculations, checking each other’s work, and trying to find the answer. Unfortunately, certain mistakes in arithmetic are very common. The first person who does a calculation can get a wrong answer; the one who checks the work can then make the exact same mistake, and this can happen over and over again. In the end, Heisenberg came up with the wrong answer. He told the military that the critical mass was more than a ton of U235, far too much to make. Germany decided not to pursue the bomb.
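The fragility described above is easy to demonstrate: in a determinant calculation, a single mis-copied entry silently changes the final number. A minimal 2x2 example (the matrices are invented for illustration):

```python
# One transcription error anywhere in the matrix changes the determinant,
# and nothing in the final number reveals that a mistake was made.

def det2(m):
    """Determinant of a 2x2 matrix: ad - bc."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

correct = [[4, 7], [2, 6]]
one_slip = [[4, 7], [2, 8]]  # a single copying error: 6 became 8

print(det2(correct))   # 10
print(det2(one_slip))  # 18 -- the one mistake poisons the result
```

Scale this up to the enormous matrices Heisenberg’s teams worked with, and the danger of one uncaught slip, repeated by every checker, becomes obvious.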
Von Neumann took a different approach. He realized that no amount of checking and cross-checking could prevent errors. People make mistakes with numbers, and since there is no way to tell exactly where an error was made, there is no way to be sure the final answer is correct. Von Neumann felt that only machines could solve this problem. He turned to automated computation, working on principles very similar to those of Turing’s designs: the machinery would solve one part of the problem, use that solution to set up the next part, and go on until it had the final answer. Von Neumann was right: the American team got the right answer, and the United States quickly realized it could win the war, and have an advantage in all future wars, if it devoted whatever resources were necessary to the bomb. On July 16, 1945, the first atomic bomb was exploded at a test site near the Los Alamos research facility. Less than a month later, the Japanese emperor surrendered, granting the United States total victory.
The H Bomb
After the war, von Neumann turned his attention to an even harder problem:
It would be possible to use a standard fission explosion to trigger a fusion reaction, the same reaction that turns hydrogen into helium in the sun. This reaction has far greater potential than the fission devices built from uranium. The simple uranium bomb, also called an ‘A-bomb,’ could only destroy the hub of a city, an area of a few square miles. A fusion bomb, called an ‘H-bomb,’ could in principle be made any size desired, even large enough to destroy the planet. The calculations needed for the H-bomb were far more complex than those needed for the A-bomb. Turing had explained the basic operating principles of a programmable computer to von Neumann.
Von Neumann rarely faced budget constraints: the United States government had devoted enormous resources to the bomb for several years, and the nuclear program remained a top priority. Von Neumann was their wonder boy: if he wanted money for a computer, he got it.
He immediately began work with a far more advanced computer, the Electronic Numerical Integrator and Computer (ENIAC), which became operational on February 15, 1946. (The machine was designed and built by a team under J. Presper Eckert and John Mauchly; von Neumann joined as a consultant and shaped how it was used.) It was eventually housed at the Army Ballistics Research Laboratory (now the ‘US Army Research Laboratory’). Its first major program was a set of calculations needed for the hydrogen bomb.
All electronic computers built before 1955 used vacuum tubes for calculations. The tubes used enormous amounts of energy: the UNIVAC, the first commercial unit, had 5,200 vacuum tubes and consumed 125 kW of power, about the amount needed to power a city block at the time. These computers were, obviously, very expensive. Because the tubes were fragile, ran very hot, and burnt out often, these computers were also incredibly unreliable.
The first transistor was built in 1947 at the Bell Telephone Laboratories. High-quality transistors could perform the same functions as vacuum tubes in a tiny fraction of the space with a tiny fraction of the energy. But early transistors were not of high enough quality to replace vacuum tubes until 1954. In January of that year, Gordon Teal, working at Texas Instruments, developed a transistor that could be made through a relatively simple process, could be mass-produced, and would perform the same functions as vacuum tubes. The first fully transistorized computer was the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. Although transistor-based computers were far cheaper and more reliable than vacuum tube computers, they were still not reliable enough to use for much of anything other than developing weapons and other military applications. The CADET was far more reliable than any vacuum tube computer, but its average working time between breakdowns was still only about 80 minutes. That was enough time to do a lot of calculations.
One worker at Texas Instruments, Jack Kilby, realized that he could make a transistor on a single slab of semiconductor material and etch a line that would cut it in half, creating two transistors. Those two transistors could then be cut by another line to create four, eight, or more transistors. The transistors could then be connected together with wires that could be soldered onto them or, later, printed with electricity-conducting ink. He had created a new type of device, called an ‘integrated circuit.’ The first integrated circuit was ready for testing on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated." Texas Instruments began making integrated circuits for the United States military. For the next few years, virtually all integrated circuits produced were purchased by the United States military, with the largest use being guidance systems for the missiles that carried nuclear bombs.
In 1964, Frank Wanlass demonstrated a chip he designed containing a then-incredible 120 transistors.
The military had a lot of uses for chips and ordered a lot of them. Many companies opened factories to build them. With (very expensive) precision machines, producers could cut finer and finer lines into the chips and print more elaborate networks of wires onto them, creating chips with more and more transistors.
In April of 1974, the Intel corporation introduced the 8080 chip. This chip had 4,500 transistors on a single die that was 6 mm by 6 mm (about ¼ inch by ¼ inch). The etched lines that separated the transistors were a mere 3 microns wide. To put this into perspective, an atom of hydrogen is about 1 angstrom across. There are 10,000 angstroms in a micron, so the lines etched to separate the transistors in the chip were only 30,000 atoms across.
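The conversion above is easy to check. Here is a minimal sketch in Python, using only the figures quoted in the paragraph:

```python
# Unit check for the 8080's etched lines, using the figures quoted above.
ANGSTROMS_PER_MICRON = 10_000   # 1 micron = 10,000 angstroms
line_width_microns = 3          # width of the etched lines on the 8080
hydrogen_atom_angstroms = 1     # a hydrogen atom is roughly 1 angstrom across

line_width_angstroms = line_width_microns * ANGSTROMS_PER_MICRON
atoms_across = line_width_angstroms // hydrogen_atom_angstroms
print(atoms_across)  # 30000
```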
This was an incredible device. For once, the military was not first on board. The first 8080-based computers were made by hobbyists as kits. In 1976, Steve Jobs and Steve Wozniak built the first fully assembled computer available for purchase in their garage in Los Altos, California. This was the Apple 1.
Chips have gotten faster and more capable each year since they were first created. A computer ‘chip’ is basically a tiny piece of silicon that has been etched to turn it into an electrical circuit. The etching marks create individual transistors that are as small as the etchings can make them. Each transistor can do the same work as one of the vacuum tubes in the original UNIVAC. But modern chips now have transistors so tiny that a single chip can have billions of them. The largest as of 2018 is the Graphcore GC2 IPU with 23.6 billion transistors. The etched lines that separate the transistors are only 160 angstroms (the equivalent of 160 hydrogen atoms) across.
The ‘clock speed’ of a chip is the number of calculations it can make (the number of times it can totally reset any number of its transistors) per second. Most computer chips now have clock speeds of more than 2 GHz, meaning more than 2 billion calculation cycles per second. Multiply this by the billions of letters and numbers that can be processed in each calculation set, and you get chips that have truly incredible capabilities. You don’t have to guess about the capabilities of these devices. Get a smartphone. Many today can produce video at speeds of more than 240 frames per second, with each frame having more than 8 million pixels, each of which may be any of more than 2 million different colors and shades. You can record this and play it back on a machine small enough to put into your pocket. You can then use the computer in the phone to edit it, add titles, change the language, change the colors, and send it anywhere in the world you want in a matter of seconds.
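To get a feel for the scale of those numbers, you can multiply the video figures from the paragraph above. A quick back-of-the-envelope calculation in Python:

```python
# Rough throughput of the smartphone video described above,
# using the figures from the text.
frames_per_second = 240
pixels_per_frame = 8_000_000

pixels_per_second = frames_per_second * pixels_per_frame
print(f"{pixels_per_second:,} pixels processed per second")
# 1,920,000,000 pixels processed per second
```

Nearly two billion pixels a second, just to capture the raw video, before any editing or transmission.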
Machines Made By Machines
These chips are now designed and built almost entirely by machines. The pathways are too complex for the human mind to comprehend. The etchings are far too tiny for humans to hope to even see, even with the best optical microscopes. (A wavelength of visible light is about 5,000 angstroms; it is not possible to optically resolve anything smaller than a wavelength of light, even with the best microscopes.)
Computer programs, the instruction sets that run computers, are also built by machines. Programmers use languages whose instructions are themselves made of complex instruction sets, each of which tells transistors to hold a certain state of charge for a certain length of time, to open a pathway or to close an open one.
As of 2018, computers with the basic capabilities that Turing surmised in his ‘heretical’ paper of 1936 are pretty close to reality. I get fooled sometimes. (I hate it when the phone rings and I have a conversation with what I think is a very concerned person and ultimately find out that it is just a computer that is programmed to respond to the things I say.) Machines will clearly be able to mimic all of these things closely enough that most people won’t be able to tell the difference between a human and a machine in remote exchanges.
The Stream of Activity in a Computer
The basic idea of a computer with what we now call 'artificial intelligence' goes back to Alan Turing’s 1936 paper. It involves a ‘stream’ of activity. In his paper, the activity started with a tape of symbols that loaded the initial operating system onto the computer. The computer would then be ready to receive instructions. The tape would pass a read head that would read it. The reader would send the symbols to the processor, which would decide what to do with them.
Sometimes, the symbols on the tape would tell the processor to read something from a different tape, a tape of what we may call ‘memory.’ There may be several different memories, including short-term memories and long-term memories. The processor will have the address of the tape it needs; it will tell the tape device to advance to the appropriate space and read what it is supposed to read.
Sometimes, the processor would want to remember something for the short run. For example, it may be doing a complex calculation and need some numbers for an intermediate step of the calculation. It will advance to a blank place on the tape and write the data, placing the address of the data on another part of the tape where it stores addresses. By writing, reading, calculating, rewriting, rereading, the computer can do extremely complex calculations.
On the first computers, the term ‘tape’ was literally tape. A long tape would be strung between two reels. The reels would spin to get to the right location. The head would read the information then spin the tape. It would write and spin, read and spin, write and spin, over and over. Because the tape was mechanical, it took a very long time to do complex calculations with these machines. Each time, the tape had to find the right spot for the read head.
Starting with the Apple II computer in the late 1970s, the tape was replaced by something called a ‘floppy disk.’ This disk was basically a tape laid out in concentric circular tracks. The first floppy disks were 8 inches in diameter. The floppy disk system was much faster than the old tape systems because the tape didn’t have to advance back and forth over long distances anymore. The head could simply move in or out across the tracks. In time, people found out how to make the tracks and markings smaller, allowing more data to fit on a disk. But even the best floppy disks had very limited abilities to store data. The first widely used floppy disks stored only 360 KB of data. Each KB was 1,000 ‘bytes,’ each of which was 8 ‘bits’ in size, so such a disk stored about 2.88 million individual ‘bits’ of information, each of which was either a 1 or a 0. (It was either a ‘marked’ space or an ‘empty’ space.) The best floppy disks increased this storage ability by a factor of 10.
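The bit count above follows directly from the definitions of kilobyte and byte. A minimal check in Python:

```python
# Bits on one of the 360 KB floppy disks described above.
kilobytes = 360
bytes_per_kb = 1000   # using the text's convention of 1 KB = 1,000 bytes
bits_per_byte = 8

total_bits = kilobytes * bytes_per_kb * bits_per_byte
print(total_bits)  # 2880000
```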
The first ‘hard drives’ were basically sets of stacked disks, coated like floppy disks. These were placed inside a housing and permanently sealed against dust. With rigid platters and no dust, the disks could spin extremely rapidly, allowing the data to be written and read very quickly. These types of disks are now able to store terabytes, millions of times more data than the first floppy disks. As I write this, scientists are developing memory chips that can store far more information in a smaller area.
The processor is a large network of transistors. Transistors are switches that can be left on or turned off and will hold this on or off position until signaled to change. This network may have millions or even billions of transistors on it. The network is broken down into sets of 8 transistors. (Note: this is for ‘8-bit computers.’ The first Apple and IBM computers were 8-bit computers. More recent computers work in sets of 16, 32, or even 64 transistors, allowing far more complex calculations.) Each set of 8 transistors works together to hold a letter or symbol. Each letter or symbol is represented by a certain setting of these 8 transistors. For example, if you want the computer to remember the capital letter ‘A,’ one particular set of transistors will be set to off-on-off-off-off-off-off-on. In computer language, ‘off’ is represented by the number ‘0’ and ‘on’ by the number ‘1.’ The setting for the letter ‘A’ then becomes 01000001, where ‘0’ represents a transistor that is set to ‘off’ and ‘1’ represents a transistor set to ‘on.’ If an 8-transistor circuit is set to off, on, off, off, off, off, off, on, that circuit is holding the letter ‘A.’ If you start with a chip that has no electricity put on to it, and energize it properly, it will set the transistors to hold the letter ‘A.’
If you want the letter ‘B,’ you would set the transistors to 01000010. Each letter has a different setting.
You may hook up a keyboard to the processor. You can press the letter ‘A’ and the keyboard will send signals that turn on the transistors in the right order to make the letter ‘A.’ Now the processor is remembering something. As long as the electricity is on, the transistors will hold their ‘states.’ These transistors may then be wired to another set of transistors that are hooked up to an output device, like a computer screen. If you have them talk to each other, when you press the ‘A’ on the keyboard, the transistors will go into the state of ‘A’ and stay there. They will then ‘tell’ the transistors connected to the screen to illuminate the letter ‘A’ on the screen.
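The on/off patterns described above are simply the binary forms of the ASCII character codes that computers still use today; 01000001, for instance, is the code for the capital letter ‘A.’ You can see this for yourself in a couple of lines of Python:

```python
# Print the 8-bit ASCII patterns for a couple of capital letters.
for letter in "AB":
    bits = format(ord(letter), "08b")  # character code as 8 binary digits
    print(letter, bits)
# A 01000001
# B 01000010
```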
In early computers, most of the transistors were wired in a way that allows them to communicate with the ‘tape’ (the memory system). The tape tells them when to turn on and when to turn off. They can write to the tape. If they need something stored on the tape for a short time, they can put it into short-term memory; if they need it stored over the long run, they can put it into long-term memory. As computer chips got more and more transistors, short-term memory could go directly to networks of transistors that would store them there. This got rid of the need for mechanical devices to read and write, making the computer much, much faster.
The signals come at a fixed rate. This rate is called the ‘clock rate.’ A set of signals will go through the wires and then there will be a pause while the signals ‘settle into’ their desired state. This basically means that the transistors that are supposed to be on will get to an ‘on’ state while the transistors that are supposed to be off go into an ‘off’ state.
While the pause is happening, the electricity continues to flow through the machine. It is used to ‘hold’ the transistors in their intended state. If the electricity should ever go off, the transistor states will return to their preset unpowered state (some are set to be off without power, some on; those are the only two states possible).
Then another pulse goes through the computer, resetting the transistors, millions of them at a time, in accordance with its programming. With billions of calculations being made each second, written to short-term memory, and read from short-term memory, modern computers can do a great many things very rapidly. When Turing wrote about a machine with artificial intelligence, he talked about an extreme idea: he claimed there would come a time when the machine would be able to provide feedback so ‘human like’ that humans interacting remotely would not be able to tell whether they were interacting with a machine or a real human. Machines get more and more capable as the complexity of the chips (basically, the number of transistors) increases, clock speeds increase, the amount of memory that the computers can access at random (called ‘random access memory’ or ‘RAM’) increases, and the complexity of the programs (the instruction sets) increases over time.
Today, you can push a button on your phone and ask the phone a question. The computer will be able to analyze the tones, pitch, inflection, and volume to determine what you said. It will then use a complex program to determine what sort of information would help you get what you want. It will then look for this information, searching enormous databases that may be on computer servers on the other side of the planet. It will find the best answer and present it, both in written text and by voice.
If you like the way the computer answered your question, you can press a button that basically tells the computer it did a good job; if not, you can press another button that says you didn’t like the answer. The computer goes over the differences in the algorithms that gave acceptable answers and those that gave bad answers. It then adjusts the algorithms to try to make answers better.
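The feedback loop just described can be sketched as a toy program. This is only an illustrative guess at the general idea, not any real assistant’s algorithm; the strategy names and the adjustment rate are invented for the example:

```python
# Toy 'learning machine': keep a score for each answering strategy and
# nudge it up or down after each thumbs-up or thumbs-down from the user.
scores = {"short_answer": 0.5, "detailed_answer": 0.5}

def record_feedback(strategy, liked, rate=0.1):
    """Raise the score of a strategy the user liked; lower it otherwise."""
    scores[strategy] += rate if liked else -rate

record_feedback("detailed_answer", liked=True)   # user pressed the 'good' button
record_feedback("short_answer", liked=False)     # user pressed the 'bad' button

best = max(scores, key=scores.get)               # strategy to prefer next time
print(best)  # detailed_answer
```

Real systems are vastly more complicated, but the shape is the same: answers that draw approval get weighted up, and answers that draw disapproval get weighted down.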
These computers are modern versions of Turing’s ‘learning machines.’ They learn how to provide better answers to questions.
If you interact with these computers regularly, you will get used to their ‘personalities.’ You will learn the way they answer questions and figure out ways to phrase your questions that will make them more likely to give you the answers that provide the help you want. You will be making the same kinds of adjustments that you make when you are dealing with people from other cultures or with a different way of life.
The computers that provide these answers have multiple redundant systems in place to prevent them from ever losing their electricity. If they do lose their electricity, even for a microsecond, they will lose their ability to do anything. They will become nothing but chips of silicon (dirt). They will have to be reloaded with an operating system before they can work again. To prevent this, redundant power systems provide backup power if the main power system goes out. If the backup goes out, another backup kicks in. The power never goes out.
If a processor starts to act up, a computer will catch the error and switch the operations to another processor without any interruption. The memories are backed up many different places. If a memory bank should fail, another has the same information and can take over. If you wake up in the middle of the night and want to know the time, you can ask your computer for the time. It will tell you. You can ask about the weather, the name of the president of France, the name of his wife and all his girlfriends. You can get pictures of his girlfriends and compare them. You can play chess or poker with your computer, you can have it stream a video for you, you can ask it to monitor your heart rate and vital signs and pick a song that is appropriate to these things.
What happens if the power goes out?
If this happens, all of the computer chips become nothing but pieces of rock again. They aren’t capable of doing anything.
They are the machine equivalent of ‘dead.’
The Stream of Consciousness/Activity in a Living Thing
Your brain and nervous system run on electricity. If you go to the doctor with certain ailments, you will be hooked up to machines that read the electrical impulses to determine if there might be some problems in the switches or circuits that send the signals to your muscles, organs, or parts of your brain. If doctors find that these circuits are functioning normally and sending the right signals, but some muscle, organ, or brain component is not functioning correctly, the doctors know that the problem is in the component itself. If the signals are bad, the doctors might try to repair the part of the nervous system or brain that sends the signals; if this is impossible, they may implant a device to send the correct signals to the affected organ. For a simple example, sometimes the signals that go to the heart don’t work right and the heart beats too fast or too slow. Doctors can fix this by implanting a pacemaker.
The electricity is on from the moment of your conception. Throughout the life of all living things, electricity will be produced by adenosine triphosphate (ATP). ATP breaks down into ADP (adenosine diphosphate, with one phosphate group broken off), producing roughly 10 watt-hours of electricity per pound of ATP reduced. Each cell contains mitochondria; these are the factories that make ATP. They use glucose, water, and oxygen to provide the energy that reattaches the phosphate group to the adenosine base. Some cells don’t do a lot of work or don’t live very long, so they don’t need a lot of mitochondria. Sperm are in this category; they have only about 50 to 100 mitochondria per cell. The egg will need a LOT of energy to grow into a person, so it will have a great many more mitochondria, normally between 100,000 and 250,000 per egg. The ATP produces its energy in pulses, just like the electricity in a computer. Each pulse of electricity goes into a neuron. The neuron is a kind of processor, like the processor of a computer. You couldn’t fairly compare a neuron to a transistor, because a transistor has only three connections. A typical neuron has about 100,000 connections to other cells. Most of these connections go to other neurons. Some go to ‘input devices’ like the optic nerve and the nerves that we use to sense touch, sound, and our other senses. Some go to ‘output devices,’ including muscles, telling these devices what to do.
By contrast, the most complex computer processors as of this writing (2018) have only about 2,000 connections. In machines with multiple processors, most of these connections go to other processors; the rest go either to input or output devices. So far, the largest multiple processor machines have 18 processors connected. The human nervous system has about 100 billion neurons.
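The 10-watt-hours-per-pound figure mentioned above can be sanity-checked with a back-of-the-envelope calculation. The constants below are standard textbook values, not taken from this text, so treat the result as an approximation:

```python
# Rough check of ATP's energy yield per pound.
# Assumptions (textbook values): ATP -> ADP releases about 30.5 kJ per mole,
# and ATP's molar mass is about 507 grams per mole.
KJ_PER_MOLE = 30.5
GRAMS_PER_MOLE = 507.0
GRAMS_PER_POUND = 453.6
SECONDS_PER_HOUR = 3600

joules_per_pound = (KJ_PER_MOLE * 1000 / GRAMS_PER_MOLE) * GRAMS_PER_POUND
watt_hours_per_pound = joules_per_pound / SECONDS_PER_HOUR
print(round(watt_hours_per_pound, 1))  # roughly 7.6 watt-hours per pound
```

That lands in the same ballpark as the figure in the text, which is about what one can expect from a rough estimate like this.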
The two types of devices share a very important characteristic: they need electricity to run. If the electricity shuts off, even for a second, both devices lose their memory settings and become nothing but a piece of non-living matter.
Just as computer designers have placed a great emphasis on making sure the computers won’t lose their electricity, whoever or whatever designed living things on Earth went to elaborate lengths to make sure the electricity would never stop.
As noted above, the electricity comes from the breakdown of adenosine triphosphate, or ATP. ATP is a ‘spine’ of adenosine hooked up to three phosphate groups, in a molecule that looks like the capital letter ‘E.’ You could think of this E as having each of its three arms spring loaded: pull off an arm, and the spring does work. In this case, you could think of the spring as having a magnet on it and running the magnet through a coil of wire to generate a spark. The spark is a ‘pulse’ of the nerve cell, one unit of calculation. The body goes to elaborate lengths to make sure that there is always a lot of ATP, by making it constantly. But what if there isn’t enough at a given time? The body absolutely has to have electricity, so it has an emergency way to get more: it can break off another of the phosphate groups, to get about the same amount of electricity it got breaking off the first group. It is very, very rare that this happens, but if necessary, it can happen and your body can get electricity.
What if there is an incredible need for electricity, one so vast that the body even runs out of ADP? If this happens, you are in a dire situation, perhaps fighting a wild animal that will kill you if you lose, perhaps running for your life, or perhaps having a heart attack. Your body needs more electricity than it can get from the existing ATP or ADP. In an absolute emergency it can break off the third arm and get more electricity. This is so rare that it will never happen to most of us. The reason is that the body is making more ATP all the time. Most ATP is made by mitochondria within the cell. The cell starts with glucose (pushed in from the bloodstream after you have eaten), oxygen (from hemoglobin brought in by the blood) and water (which you drink and get from your food and is 55% of your blood). Mitochondria use a super-efficient mechanism called the ‘Krebs cycle’ to convert glucose, water, and oxygen into pure energy; they use this energy to ‘reload’ the phosphate groups back onto the adenosine backbone.
What if there isn’t enough glucose?
This can happen at certain times. Your body always has a ‘blood glucose level.’ (Talk to your doctor to find out yours; it is measured every time you have a physical or any other blood work.) If it falls too low, your body will react very quickly. First, it will send signals to you that will tell you that you are hungry. You will want to eat. If your blood sugar is just a little low, you will just be a little hungry. You may only be interested in eating if you can get specific foods, like a hamburger or ice cream. If the sugar level keeps getting lower, you won’t be as picky. You will be willing to eat pretty much anything you recognize as food. If you don’t eat for many days, you will start to wonder if your body can metabolize things you don’t normally eat, like insect eggs, sour fruit, or rats. In addition to sending signals telling you to eat, your body will start to conserve energy. It will make you feel very, very tired. Every step you take will cause your muscles to ache and require immense effort. You won’t want to move.
If your body tells you to eat but you don’t, your body will start to eat itself. It will start to break down the easy-to-metabolize fats in the liver and inside the muscles. If you have plenty of stored fat, your body can last many days eating these reserves. But eventually you will run out. When you run out of the easy-to-metabolize fats, it will start to work on the proteins of the muscles themselves. The proteins can be converted directly into ATP. The process is highly inefficient and you don’t get nearly as much ATP from a pound of muscle tissue as you would get from a pound of glucose. But your body is desperate. It needs electricity and it makes its electricity from the sugars, proteins, and fats that you eat or that are stored in your body. In many situations, people can live for more than a month with no significant intake of food. But there will come a time when your body has nothing left to eat except vital organs. If it can only get electricity to run the brain by metabolizing these essentials, it will start to turn them into ATP so the electrical processes will not shut down. By the time you get to this stage, your body will be so weak and tired it will only allow you to stay awake for short periods of time; your body allows you this in the hope that, during one of these times, you will find food. But if you don’t find food, there will come a time when you go to sleep and your body doesn’t have enough ATP to power the parts of your brain responsible for consciousness. You will never wake up. A few hours to a few days of this and a key part of your body, the heart perhaps, the lungs, or the spinal cord signals that run the heart or lungs, will stop functioning. The mitochondria will try to the very last to make ATP. But there will be no food for energy. The final molecules of ATP will become ADP and then AMP. Then the electricity will shut down.
When the electricity shuts down in your nervous system, you will no longer be a living thing. You will be mostly calcium (bones) and water, with a few proteins and fats that no longer function to do anything at all. Your body will be just as lifeless as the chunk of rock (silicon) in a computer chip, when its electricity signals stop. When the constant electricity that had been holding the neurons in their desired states ends, your brain will go from ‘living’ to ‘dead.’
The Stream Of Consciousness
If you wanted to make a machine that was as close as possible to a human, so close it would be able to fool most humans into thinking it was a human, what would you have to provide that 21st century computers lack?
Human minds have something that has been called various names, from a ‘homunculus’ (the ‘little man in your head’) to ‘consciousness’ to a ‘soul.’ In a practical sense, this is like a main thread in a story. Many signals come into this thread: you see things, you hear them, you feel them, and you otherwise interact with them through a network of senses that all appear to be in perfect synchrony. There is something outside of you that you may call ‘reality’ that generates the sights, sounds, smells, and other feelings. There are sensory input devices that determine ‘what is happening’ in reality and transmit these signals to a part of your mind that puts them together to create a picture of reality.
Picturing, hearing, and otherwise sensing reality is an important part of what it means to be human, but it is not the critical thing. Other animals seem to have the same or, in some cases, even better abilities to sense and interpret reality. (Dogs have much better senses of smell than humans; eagles have sharper vision, dolphins and bats have much better hearing, for a few examples.) Machines can shoot constant video, determine what is in their field of view, and identify specific individuals by their appearance, sound, or other stimuli. This ability is, by itself, not enough to convince a person that another being or machine might possibly be a human.
For a moment, let’s leave the issue of what a machine might be able to do aside, and consider what would be needed for a person to believe another living thing was a human. Clearly, the ability to speak would be a big help. If the other being could speak in a way that indicated she was able to understand abstract concepts, to visualize things that only existed conceptually, and to engage in complex abstract reasoning, you would probably be strongly inclined to think the being was a human. But the ability to speak in a certain language wouldn’t be necessary. A great many people who don’t speak the same language as you are real people and could easily convince you they are human, without any need to first learn to speak and understand enough of your language to communicate abstract concepts.
Say that you are kidnapped and drugged; while you are unconscious, you are moved to a part of the planet thousands of miles from your home, where no one speaks any language you understand. When you wake, you will easily be able to tell which of the beings around you are human, without having to understand the meaning of the words they say. If you escape, you may look for people who are not likely to be sympathetic to the kidnappers for assistance. If you find people not aligned with the kidnappers and who appear to be empathetic to the plight of another human in need, you will be able to make yourself understood pretty quickly. You can find ways to use signs to indicate that you are hungry, you are thirsty, you are sleepy, or you need hygiene facilities. You will probably not have to try to figure out how to tell them that you are from a place far away—they will probably be able to tell this by the way you look and act—but you can find ways to let them know that you need help and show them how they might be able to help you. You will be able to tell, without any common language, that they are human and they will be able to tell that you are human.
If you are of the proper age and disposition for mating, you may be attracted to someone of the opposite sex and, given time and the right circumstances, even have a loving relationship. You may not be able to describe in scientific terms what it is about the other beings that convinces you they are humans, but you will be able to know the difference.
One of the key features you might look for would be something you might call a ‘stream of consciousness.’ Non-human animals would probably only focus on you for a brief time, perhaps to determine if you might be a predator and they should run, or that you might be food and they should consider attacking, or to determine that you are of no interest to them and they can ignore you. Once their attention was fixed either on running away, attacking, or ignoring you, you would not expect any kind of reason or entreaties to make them treat you differently. A human, however, would do something you might call ‘trying to figure you out.’ She might look at you and then go back to her peers, and chatter something to them. Then come back and look at you in more detail, perhaps trying to touch you or get you to understand some verbal signals. Then she might go back to her peers and verbalize something with them again, then come back to you, and do this over and over. You may find that certain gestures you make lead to reactions you interpret as fear or anger. You can go to great lengths to avoid these gestures. You may find certain gestures seem to invite her to continue her investigation and perhaps even make her smile. You can pattern your behavior to her reactions.
You will see that she is doing the same thing.
You are reacting, she is reacting.
You will be able to determine by her actions that she has a memory.
When she is trying to figure you out, she may repeat the same test a few times until she is satisfied she knows the answer, but she won’t keep trying the same thing over and over expecting you to react differently. Once she knows how you act in one situation, she will expand her tests to get more information about you. You will not only be able to tell that she has a memory, you will be able to tell that she can process this memory over time. If you listen to her vocalizations long enough, you will be able to tell certain patterns. You may hear her peers refer to her with a certain vocalization and you may see her react whenever this specific sound is uttered. You may think that this might be her name and you may try to mimic it, to let her know that you understand the concept of a ‘name.’ If she reacts in a way that indicates that she understands you are trying to call her by name, you may then try to teach her your name.
You will recognize that she is a human-like being, not a sub-human animal, by recognizing a kind of thread of behavior, thought, speech, and communication that conforms to a ‘human like’ pattern. If you watch her long enough, you will soon start to see expressions or patterns of movement that indicate what we humans call ‘emotions.’ You can tell how she is feeling. You will be able to tell that certain things you do make her act in ways that indicate she is feeling something different. You will see that there is some kind of logical connection between the things that are going on around her and her behavior, as an indication of her feelings. You will realize that images of reality are going into her, through her senses; she then processes them in a logical way, almost as if you, an image in her sensory range, are a kind of movie to her, and the movie of ‘you’ is interacting with the movie of ‘her and her peers’ and ‘the environment around her.’ You will recognize this particular way of interacting with reality, as if it is a movie, from your own experience. You will think that a being that interacts with reality the same general way that you do might be the same sort of being as you are.
You may find certain things she does strange, as they aren’t the things you are used to seeing other people do. For example, when Columbus landed on the Caribbean island of Haiti in November of 1493, he clearly recognized the inhabitants of Haiti as people and they clearly recognized the Spaniards as people. Both groups had complex languages; both understood sign language well enough to trade; both felt sexual attraction to members of the opposite sex (for heterosexuals) or of the same sex (for homosexuals). But the natives of Haiti had certain features that Columbus felt were extremely strange.
For example, they generally went around naked and felt no shame about their bodies. Almost certainly, the natives thought it was extremely strange that the newcomers felt they had to hide their bodies under many layers, even though they were clearly uncomfortable in the stifling tropical heat. The natives had certain social features that were extremely strange to the newcomers. For example, Columbus was surprised by the lack of property, the lack of ownership, and the general idea of sharing and equality among the natives. The natives, in turn, were shocked that the majority of the Spaniards were subservient and basically acted as if they were slaves to a tiny minority that were their masters: the majority (which could easily have overwhelmed the minority that controlled them and done as they pleased with them) accepted certain people as their masters, did whatever they were told, and cowered in fear at the slightest hint of disapproval from their masters and rulers.
The two groups had different customs and social behaviors. But neither group appears to have had any doubt that the members of the other group were true human beings.
They could tell.
What if you had a way to make machines that looked like living beings, and you wanted to make them so realistic that they could fool a group of people into believing they were true human-like beings? In other words, what if you wanted the people to think of the machines the same way that Columbus thought of the natives of America, and the same way the natives thought of Columbus and his men: as people with different cultures, but unquestionably true humans? What kind of characteristics would you give these machines? Assume, for this analysis, that you have all the processing power of the servers of Apple, Google, and Facebook, and all the artificial intelligence (AI) software that has been created to date. You are trying to find the key features of speech, behavior, and reactions that will cause the observers to conclude that the human-appearing machines they are interacting with really are humans. Where would you focus your attention?
It seems pretty clear to me that they would need to have something that was, or at least appeared to be, a stream of consciousness. They would have to act in ways that made it appear the ‘movie of existence’ was playing in their minds. Observers would see that these AI machines ‘had lives’ before they came into contact with the observers. The AI machines would have some sort of social structure that was logical and made sense. They would act in such a way that the observers would accept that they had been interacting with each other for a very long time, had found behavior patterns that helped both individuals and groups meet their needs, and had practiced these behaviors.
This, by itself, wouldn’t be enough: groups of dogs clearly have social hierarchies and behave consistently with learning, but they are not humans. You would need more. You would need to get the observers to believe that the AI machines were in control of their own thoughts. Other, non-human, animals may well have something that we would think of as a ‘stream of consciousness.’ Many animals certainly appear to be thinking about things and exhibit behavior that indicates they are thinking about certain things. (You can tell if a dog is thinking about biting you by her behavior. Many people have found out the hard way, by getting bitten, that this was indeed what the dog was thinking about.) But we can tell from their behavior that the other animals have an extremely limited behavioral repertoire. They are clearly not in conscious control of the way they will act over time, not able to manipulate their behavior to cause the ‘movie’ of their existence to play out in a way that benefits them.
This is what you will need. If you can generate this kind of behavior, you will have made an artificial intelligence machine that could fool people into thinking it is a true human.
Are We Androids?
What if we did send a package to another world?
It might contain DNA that was modified so that it would eventually give the atmosphere of the destination planet enough oxygen to support the extremely efficient Krebs cycle, as happened with cyanobacteria here on Earth. This first ‘terraforming’ being would be able to use the light of the planet’s sun to do this work, just as the cyanobacteria use the sun’s energy to remove carbon dioxide from the atmosphere and replace it with oxygen. It might contain DNA for several different complex life forms that would take advantage of the oxygen. We almost certainly couldn’t, and wouldn’t even want to, send fully formed humans to the other world. Even with a bootstrap that would allow activation and birth at a certain point, they almost certainly wouldn’t make it. We would want to send far simpler life forms to the other world and then let evolution select the best ones for survival. Evolution would make sure that the beings were perfectly adapted to the realities of life on their world: those that were not well adapted would be selected for extinction, while those that were would be selected for survival.
We could send an operating system that would be able to accommodate both the primitive starting beings and the more complex ones to follow. The beings would develop physical tools like eyes, ears, and other sensory organs. The operating system would accommodate them, allowing them to use these ‘peripheral devices’ as they were developed. Eventually, one of the species of beings would develop some processing center in their brains that would allow them to direct their thoughts. The members of this species would become self-aware. They would gain the same basic abilities as humans have here on Earth. The operating system would be designed to accommodate this advance. Beings with this ability could use it to help them survive, while lesser beings and those without this ability would be unable to compete and would perish.
The operating system would be designed to accommodate these thought processes. We, here on Earth, would have either put this operating system together from scratch or adapted it from the operating system that works for us. The beings would have their own will and their own ability to direct their thoughts, but this will and this ability would have been designed and worked out by engineers here on Earth.
What kinds of beings would these be? Would they be ‘androids’ or would they be ‘life?’
I don’t have any idea how to tell. If you have a test that would answer this definitively, let me know.
And what of us?
What if the operating system for simple multi-cellular sexually reproducing beings was sent here to this world, on a craft from another world, some 3.58 billion years ago? These beings take advantage of oxygen for their life processes and could not survive until oxygen levels rose to roughly their current 21%, about 541 million years ago. (The pre-industrial oxygen level was about 21%. It is declining as we burn carbon fuels: the oxygen combines with carbon to form carbon dioxide, and this carbon dioxide replaces the existing oxygen. You can find charts showing oxygen’s very clear decline at the Scripps Oxygen study, http://scrippso2.ucsd.edu/.) When the oxygen levels got high enough, some sort of mechanism was triggered to apply the bootstrap to these primitive sexually reproducing multi-cellular beings. The beings gained capabilities. Their operating system was able to accommodate their greater capabilities. About 69 million years ago, they evolved into the family that we call ‘primates,’ the family that includes humans. About 3.4 million years ago, they evolved a primitive frontal cortex and the operating system connected it to the rest of their brains. They could think on a conscious level. They could make complex tools. They could communicate in unique ways, with vocalizations that were themselves tools; each utterance meant something and worked as part of a complex message that could communicate complex ideas. They gained the ability to speak, to think, to talk about abstract issues like the meaning of life.
What if the ones who put this entire thing together had a purpose for us? What if they highlighted the genetic instructions that we would need to modify the DNA—the CRISPR sequence—so that, if we wanted, we could do the same thing they had done, and send life even further into the universe than they sent it? What if they realized that by the time we gained the ability to understand the message in the DNA, we would also already have or be close to developing the ability to send packages to other worlds ourselves?
If these things happened, what would this tell us about the reason we are here?