How do we create artificial intelligence? Perhaps it already exists and is watching us? What is the difference between artificial and natural intelligence? Since there are more questions here than answers, let's take them in order and sort them out.
People have always wanted help with both physical and mental work. Step by step, technology approached the conceived ideal, but expectations were often disappointed, and doubt crept in: is it even possible to create such a thing? So what is artificial intelligence? And if it really is created one day, will we thereby be equated with the Absolute, with God?
Artificial intelligence is often compared with the human mind, as if the goal were to create an artificial person.
Then it is interesting to ask: what should we put into the concept of a natural (real) person?
It can be considered in two dimensions:
in the space of matter, and in the space of mind; it is precisely the second that needs to be created artificially. Our head, the brain, is material, but the mind is something else, something outside of matter.
Since the task of creating artificial intelligence is informational in nature, it is proper to treat both levels of human existence as information systems and to consider the problem in that plane. The first information system is the material world around us; the second is the human mind. Note right away that these systems do not correlate with each other. And why should they? What correspondence could there be between a material medium and the information recorded on it? That's right: none.
In knowing the world around us, a person, his knowing "I", stands outside the knowable, in another information system. One can know only the external, never the internal, don't you find? Try to track the movement of neurons in your own brain while you try to track the movement of neurons in your brain while you try to track the movement of neurons in your brain... and so on to infinity. In someone else's brain, that is, in a region external to your mind, this is quite conceivable, but not in the brain whose own neurons make your thinking possible. Hence my claim that cognition is an external function: with respect to another information system it is possible in principle, but with respect to one's own information system (the one in which the subject's knowing "I" resides) it is not possible at all; beyond that line the concept of reason loses its meaning.
I will try to illustrate this claim with a vivid example.
Suppose little green men landed on our planet, quite peaceful and sociable. Mankind, having come running, asks them various questions, and the green men answer sensibly and in detail, to the joy of the assembled. Are the space guests intelligent beings? Without a doubt they are! How bitter is the crowd's disappointment when it turns out that the sociable green men are not rational creatures at all but biorobots, acting on a program, or even in real time, on direct instructions from their owners. Who, then, is intelligent? What a question: of course, those fellows on Proxima Centauri, from where the control signal comes to the green biorobots! Suppose, though... what if the signal from Proxima Centauri is sent by other, more advanced controlling mechanical devices? Can you imagine that? Why not?! Who, then, are the sentient beings? Those who made not only the green men but also the control transmitters on Proxima Centauri, of course. And so on, if you follow the logic... As you can see, the question of the space guests' intelligence depends directly on our knowledge of the matter, more precisely, on what we take to be the "outermost" information system:
if we take the brain of the green men to be the outermost system, the green men are intelligent creatures;
if we find out that the "brain" of the green men is a device receiving signals from Proxima Centauri, then it is perfectly clear that the real intelligent creatures are on Proxima Centauri;
ah, on Proxima Centauri there are only control transmitters, you say? Well then, the intelligent creatures are somewhere farther off: someone must have made those control transmitters, right?
Find a suitable information system external to the one in question, and the concept of reason immediately moves there. Is that not too slippery a concept to be implemented as a working mechanism?
Do you still think the human mind exists? On the basis of firm conviction and a personal sense of self? Then can you produce evidence that you yourself are not a green man, that is, a biological machine controlled by someone from outside? What grounds, in fact, do you have for considering yourself a rational being? Self-awareness of one's rationality is not proof: it is entirely conceivable that your actual creator programmed this very sense of self into you precisely so that you would never guess anything. The situation has been played out in the movies many times: a robot that considers itself human. One mere assumption about an external information system, and not a trace is left of your rationality!
What is reason, in that case? The trivial ability to respond to signals. A person reacts to signals in a certain way, therefore he is intelligent. Then an iron is just as reasonable: I put the plug into a power outlet and, imagine, the iron begins to heat up. Is that not a highly organized mind? Ah, the iron is designed to heat up when the plug is inserted into the outlet?! But then a person, too, is arranged in a certain way: if he touches a hot iron, he will yank his hand away with a scream; from this side his reaction is as predictable as the iron's reaction to being switched on. And the fact that the design of the iron is known, while the design of the human brain is not yet known very well, does not prove any qualitative difference between the two kinds of reactions, that is, it does not prove the superiority of the person over the iron. An iron is a mechanism produced by a person, but a person may also turn out to be a mechanism produced by someone; how, then, are they so fundamentally different?
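The "reason as mere reaction to signals" view can be caricatured in a few lines of code. The sketch below is purely illustrative (the functions `iron` and `human` and their stimulus strings are my invention, not from the original): both "minds" are nothing but mappings from a stimulus to a reaction.

```python
# A deliberately crude caricature: both "minds" are just
# functions mapping a stimulus to a predictable reaction.

def iron(stimulus: str) -> str:
    # The iron is designed to heat up when plugged in.
    return "heats up" if stimulus == "plug inserted" else "does nothing"

def human(stimulus: str) -> str:
    # Seen from outside, this reaction is just as predictable.
    return "yanks hand away with a scream" if stimulus == "touches hot iron" else "does nothing"

print(iron("plug inserted"))      # heats up
print(human("touches hot iron"))  # yanks hand away with a scream
```

From this behaviorist angle the two functions are indistinguishable in kind, which is exactly the essay's point: known design versus unknown design is the only difference on display.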
Since humanity definitely wants to produce an artificial person, let us analyze how an artificial mind might differ from the natural, human one.
1. The form.
In the movies, robots usually have forms close to human, but this is not necessary for creating an artificial mind. Let it be an iron, so long as the iron thinks: that is enough to solve our problem.
2. The complexity.
A person's design is undoubtedly more complex than an iron's. Though the iron is not the most complex of mankind's inventions: some modern machines are comparable in complexity to the human brain, at least in the sense that both are beyond the understanding of the average person.
3. The capacity for self-development.
A fully programmable ability: it is enough for the algorithms to take into account all previous signals and the system's own reactions, in other words, to build on past experience.
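"Building on past experience" can be sketched in a few lines. This is a toy model of my own, not any real AI technique: the agent keeps every signal it has ever received, and its reaction to a signal depends on how often that signal occurred before.

```python
# A minimal sketch of programmable "self-development": the reaction
# is a function of all previously received signals (past experience).

class Agent:
    def __init__(self):
        self.history = []  # every signal ever received

    def react(self, signal: str) -> str:
        self.history.append(signal)
        # Toy rule: the more often a signal has been seen,
        # the calmer the reaction becomes.
        seen = self.history.count(signal)
        return "startled" if seen == 1 else f"familiar (seen {seen} times)"

a = Agent()
print(a.react("loud noise"))  # startled
print(a.react("loud noise"))  # familiar (seen 2 times)
```

Nothing mysterious happens here, which is the essay's point: adaptation based on accumulated experience is an entirely programmable behavior.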
4. Unpredictability.
A person, owing to the complexity of his design, is more unpredictable than an iron. More, because the iron is not always predictable either: after the plug goes into the outlet, it may fail to turn on (a breakdown), or it may crackle or catch fire (a more serious malfunction). Of course, the list of the iron's possible reactions to the outlet is limited, but so is the list of human reactions: if you step on someone's foot, most likely the culprit will apologize or stay silent; it is extremely unlikely that he will smile at you two hundred and fifty times, wave his ears, and soar into the sky.
However, making an artificial intelligence as humanly unpredictable as this is easy: just insert a random number generator into the algorithm.
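The random-number-generator trick looks like this in practice. The reaction list merely echoes the stepped-on-foot example above; the function name and reactions are illustrative, not taken from any real system.

```python
import random

# Sketch: human-like unpredictability via a random number generator.
# The possible reactions echo the essay's stepped-on-foot example.

REACTIONS = ["apologizes", "remains silent", "mutters something"]

def react_to_stepped_on_foot(rng=random) -> str:
    # The choice is bounded (the repertoire is finite, as with humans)
    # but which reaction fires is unpredictable.
    return rng.choice(REACTIONS)

random.seed()  # seed from system entropy; each run may differ
print(react_to_stepped_on_foot())
```

Note that the repertoire stays finite, exactly as the essay observes about both irons and people: randomness adds unpredictability within a bounded list, not the ear-waving kind.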
5. Simplifying or complicating?
The material from which artificial intelligence is created can be anything, not necessarily biological. But it can be biological. In that case a side question arises: how active should the builder of artificial intelligence be in this essentially natural, biological process? Should a scientist borrow a living cell from nature and grow an artificial mind from it? Or, for the result to count as fully artificial, must the cell taken from nature first be modified? Or may the scientist use ready-made cellular blocks? Suppose the creator of artificial intelligence, following Frankenstein's example, finds two corpses, amputates the left hemisphere from the first and the right hemisphere from the second, then connects the hemispheres and obtains a functioning brain. Does this brain, assembled from the hemispheres of different people, possess artificial or natural intelligence? If natural, where is the line between artificial and natural? And if a scientist took a single cell from a brain and grew a whole brain from it, would the grown mind be artificial or natural?.. If the answer is that a brain joined from someone else's left and right hemispheres is artificial... then the devil knows what follows, you know. It would turn out that removing a single cell from a natural brain is enough to make the brain artificial. That is, it is not a matter of the methods the scientist uses, but of the result:
if the mind has changed as a result of the scientist's actions (in fact, as a result of any external influence, even a minimal one), it is artificial intelligence;
if the mind is in its original state, it is natural.
Why surgery, then? External stimuli that change the mind in the desired direction are quite enough! I am hinting at the media, which manipulate mankind's moral and ethical notions without any scalpel. From this point of view, a zombified viewer glued to the screen possesses something very much like artificial intelligence: a product of high technology. Such a viewer is a biorobot, like the green men mentioned above. Strictly speaking, any communication between people changes the intelligence of the communicating parties, though, unlike the situation with the zombified viewer, the change is bilateral and, in that sense, equal.
What do I want to say? That in trying to create a thinking robot, one should first decide on the method of reaching the goal: do we want to create the complex from the simple, or the simple from the complex? If, to create an artificial mind, it is enough to simplify or arrest the development of a person's natural mind, then the task of creating an artificial mind was solved long ago and reliably.
6. The soul.
In conversations about artificial intelligence, people often appeal to the soul: just you try, they say, to create a thinking machine with a human soul... Well, it depends on what you mean by the soul. If you mean certain psychophysical features (psycho in the sense of reaction to external stimuli, physical in the sense of physical design), then technology has all of these qualities. Every device has its own design, almost always with individual quirks: in this respect mass-produced devices are no different from people, who are cut to a single pattern yet remain purely individual. As for the psycho side, that is, reactions to external stimuli... Just don't tell me that devices have no such features. About twenty years ago I had a home-made computer connected to a TV, and this freak often could not read a floppy disk (they were five-inch ones back then). You would sit down at such a computer never knowing whether you would get to play Tetris or not. In short, this electronic *** worked according to its mood. It took decades of corporate refinement for its descendants, which little resemble their makeshift ancestor, to learn to function stably. Although glitches of technical devices, even the latest ones, still show up from time to time... And you say technology has no psychophysical features!
7. A complete human copy.
Adherents of artificial intelligence may demand a complete copy of a person, that is, a person in the flesh.
We ask them: what, don't people reproduce themselves, that is, aren't they copied by other people? What does "artificially" mean? Usually the word means: created by man. But babies are not made in heaven either; they are made by humans, albeit in the order prescribed by God. In this sense a natural child differs from an artificial one no more than a part manufactured on an assembly line in compliance with safety rules differs from a part made by hand without them. Suppose a locksmith took pains and hand-crafted one no worse. Was it worth the trouble, if the result is indistinguishable from the assembly-line part?! Exactly according to Professor Preobrazhensky, right? "Kindly explain to me why one must artificially fabricate Spinozas, when any woman may give birth to one at any time. After all, Madame Lomonosova gave birth to that famous son of hers in Kholmogory. Doctor, humanity itself takes care of this, and in evolutionary order, every year, stubbornly sifting out of the mass of all sorts of scum, it creates dozens of outstanding geniuses who adorn the globe."
By the way, does a human clone have artificial or natural intelligence?
No, say what you will, the very concept of "artificial intelligence" contains a contradiction. When we say "reason", we mean our own human mind, which, by virtue of our natural essence, is for us the outermost, personal information system. We cannot know the structure of our own brain; we can know the structure of another person's brain, although even that knowledge will not be absolutely convincing to us. I remember how surprised I was as a child when my dislocated wrist was X-rayed and it turned out there were bones under the skin: I had been sure that my hand, unlike everyone else's, was arranged differently... Reason, then, cannot be artificial, because reason is that beyond which there is nothing external: no external information system that defines it. On the other hand, "artificial" implies: created by us. But how can you create something without understanding its structure? That method coincides with the natural childbearing mentioned above. One could, of course, construct artificial intelligence from individual blocks whose internal design is unknown to us. This resembles joining hemispheres amputated from different people into a single brain, an option I have already considered. As for assembling artificial intelligence from the simplest parts while not understanding the order of assembly... I assure you, that is in no way possible. Otherwise you would have to expel the resulting construction from your own information field. For example, a father winds up a mechanical teddy bear with a key, and to his child the bear seems alive. The father himself does not perceive the toy bear as alive: having an adult's knowledge, he simply cannot, unless, of course, he were thirty years younger. No, the option of creating an artificial person and then deliberately dumbing oneself down does not work.
What do we get in the end? This: talk of creating artificial intelligence is, strictly speaking, not correct. But if by a thinking artificial being we mean a human-like machine with complex and not entirely predictable reactions resembling human ones, then of course artificial intelligence can be created. True, such devices are not very practical: a better future belongs to specialized mechanisms designed for ironing clothes, transporting people, transmitting information, and so on. Thinking irons, in short.
Electronic brains

A prerequisite for the onset of the technological singularity is the creation of "strong artificial intelligence" (artificial superintelligence, ASI), able to modify itself independently. It is important to understand: must such an AI work like the human mind, or at least run on a platform constructed similarly to the brain?
The brain of an animal (including a human) and a computer work differently. The brain is a three-dimensional network "sharpened" for parallel processing of huge data arrays, while today's computers process information serially, though millions of times faster than brains. Microprocessors can perform calculations with a speed and efficiency far exceeding the capabilities of the human brain, but they use completely different approaches to information processing. Traditional processors cope poorly with the parallel processing of large amounts of data, which is needed for solving complex multi-factor problems or, for example, for pattern recognition.
The neuromorphic microcircuits now being developed are designed to process information in parallel, similarly to the brains of animals, using, in particular, neural networks. Neuromorphic computers may use optical technologies that would allow trillions of simultaneous calculations, making it possible to simulate the entire human brain more or less accurately.
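The brain-like computation these chips target can be sketched with a toy neuron layer. This is plain serial Python merely standing in for what neuromorphic or optical hardware would compute in genuinely parallel silicon; the weights, inputs and threshold are invented for illustration.

```python
# Toy "neuron layer": each output is a weighted sum of ALL inputs,
# squashed through a threshold. On neuromorphic hardware every such
# sum would be computed simultaneously; here Python simulates it
# one neuron at a time.

def layer(inputs, weights, threshold=0.5):
    outputs = []
    for neuron_weights in weights:  # one "neuron" per row of weights
        s = sum(w * x for w, x in zip(neuron_weights, inputs))
        outputs.append(1 if s > threshold else 0)  # fire or stay silent
    return outputs

inputs = [0.9, 0.1, 0.4]
weights = [
    [0.8, 0.1, 0.1],  # neuron sensitive mostly to the first input
    [0.1, 0.9, 0.1],  # neuron sensitive mostly to the second input
]
print(layer(inputs, weights))  # [1, 0]
```

The point of the parallel-hardware approach is that the inner loop disappears: all weighted sums happen at once, which is what makes pattern recognition over huge inputs tractable.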
The Blue Brain Project and the Human Brain Project, funded by the European Union, the government of Switzerland and IBM, have set the task of building a full-fledged computer model of the functioning human brain using biologically realistic modeling of neurons. The Human Brain Project aims to achieve functional modeling of the human brain by 2016.
On the other hand, neuromorphic chips will allow computers to process data from "sense organs", detect and predict patterns, and learn from their own experience. This is a huge step forward in artificial intelligence, bringing us noticeably closer to a full-fledged strong AI, that is, a computer that could successfully solve any problem a person can theoretically solve.
Imagine such an AI inside a humanoid robot that looks and behaves like a person but learns much faster and can perform almost any task better than Homo sapiens. These robots could have self-awareness and/or feelings, depending on how we decide to program them. Worker robots are one thing, but what about "social" robots living with us and caring for children, the sick and the elderly? Surely it would be better if they could communicate with us fully, if they possessed consciousness and emotions like ours? A bit like the AI in Spike Jonze's film "Her".
In the not too distant future, perhaps even less than twenty years from now, such robots could replace humans in almost any job, creating a society of abundance where people spend their time as they like. In that reality, high-end robots will drive the economy. Food, energy and most consumer goods will be free or very cheap, and people will receive a fixed monthly benefit from the state.
All this sounds very nice. But what about an AI that far surpasses the human mind? An artificial superintelligence (ASI), with the ability to learn and improve itself, potentially able to become millions or billions of times smarter than the most intelligent of people? Creating such a being could, in theory, lead to the technological singularity.
Futurologist Ray Kurzweil believes the singularity will occur around 2045. Among Kurzweil's critics is Microsoft co-founder Paul Allen, who believes the singularity is still a long way off. Allen argues that to build such a computer we would first need a thorough understanding of the principles of the human brain, and that this research would somehow have to accelerate sharply, as digital technologies did in the 1970s-90s, or medicine somewhat earlier. In reality, on the contrary, research into the workings of the brain requires ever more effort while yielding ever fewer real results; he calls this problem the "complexity brake".
Without stepping into the debate between Paul Allen and Ray Kurzweil (see Kurzweil's reply to Allen's criticism), I would like to discuss whether a complete understanding and simulation of the human brain is really necessary to create an ASI.
It is natural for us to regard ourselves, as a species, as the peak of evolution, intellectual evolution included, simply because that is how things turned out in the biological world on Earth. But this does not mean that our brain is perfect, or that other forms of higher mind cannot work differently.
On the contrary, if aliens with superior intellect exist, it is almost unbelievable that their minds would function the same way ours do. The evolutionary process is random and depends on innumerable factors; even if life were recreated on a planet identical to Earth, it would not develop the same way, and after N billion years we would observe completely different biological species. If the Permian mass extinction, or some other global extinction, had not happened, we would not be here. But that does not mean other animals would not have evolved a developed intellect in our place (and, given a head start of millions of years, their intellect might well have been more developed than ours). Perhaps it would be some kind of intelligent octopus with a completely different brain structure.
Human emotions and narrow-mindedness push us toward the idea that everything good and reasonable must be arranged the same way we are. This error of thinking led to religions with anthropomorphic gods. Primitive or simplified religions, such as animism or Buddhism, often have either a non-human deity or no gods at all. Later, more anthropocentric poly- and monotheistic religions usually picture god or gods as superhumans. We should not make the same mistake when creating an artificial supermind. A superhuman mind need not be an "enlarged" copy of the human one, and its computer need not resemble our biological brain.
The human brain is a brilliant result of four billion years of evolution. Or, more accurately, a tiny twig on the great tree of evolution. Birds have much smaller brains than mammals and are often considered very stupid animals. Yet crows, for example, have the cognitive skills of roughly a preschool child. They exhibit conscious, proactive, goal-directed behavior, develop problem-solving skills and can even use tools. And all this with a bean-sized brain. In 2004, a study in the Department of Animal Behaviour and Experimental Psychology at the University of Cambridge showed that crows are almost as intelligent as the great apes.
So there is clearly no need to reproduce the human brain in detail for consciousness and initiative to appear. Intelligence depends not only on brain size, the number of neurons or the complexity of the cortex, but also, for example, on the ratio of brain size to body mass. That is why cows, whose brain is similar in size to a chimpanzee's, are dumber than ravens and mice.
But what about computers? Computers are, in effect, "brains" only; they have no bodies. Indeed, as computers become faster and more efficient, they tend to shrink rather than grow. This is yet another reason not to compare biological brains with computers.
As Kurzweil explains in his reply to Allen, knowledge of how the human brain works can at best suggest ways of solving specific problems in AI development, and most of these problems are being solved gradually without the help of neurophysiologists. We already know that the "specialization" of brain regions arises mainly through learning and the processing of one's own experience rather than through "programming". Modern AI systems can already learn from their own experience; for example, IBM Watson collected most of its "knowledge" by reading books on its own.
So there is no reason to be sure that an artificial supermind cannot be created without first understanding the workings of our own brain. A computer chip is, by definition, designed differently from biochemical neural networks, and a machine will never feel emotions the way we do (although it may experience other emotions beyond human comprehension). Yet despite these differences, computers can already acquire knowledge on their own, and most likely they will handle it better and better, even if they learn differently from people. And if we give them the chance to improve themselves, machines may well launch a non-biological evolution leading to superhuman intelligence and, ultimately, to the singularity.
Futurists are sure that sooner or later scientists will be able to create artificial intelligence similar to the human mind, or even superior to it. Scientists are trying to do this by modeling the human brain, but they still have a long way to go to copy its roughly 100 billion neurons and the trillions of connections between them. There are already first steps in this direction: neuroscientist Henry Markram and his colleagues are working on an ambitious project to create a fully identical virtual human brain, and Barack Obama has allocated $100 million for research into brain function and innovative projects in this area.
However, at a recent international science forum in New York, a group of researchers announced that there are at least four serious obstacles to creating artificial intelligence.
You can probably design a machine that functions like a human brain, but not vice versa. It is customary to compare the work of the brain with the work of a computer, but according to neuroscientist Douglas Fields such parallels are unacceptable: the brain is a biological organ made of living cells and tissues transmitting electrical impulses, not a circuit board with digital code and wires.
Recently, scientists managed to uncover a "secret" of the brain's neural network: they scanned tiny pieces of nerve tissue with an electron microscope and then recreated a computer model of them, but existing technologies are not enough to do the same with the whole brain. As neuroscientist Kristen Harris explained, one brain cell is currently equivalent in computing power to one laptop.
But even if super-modern computers appear that can recreate all those trillions of neural connections, scientists will still have to spend a long time deciphering how each of them manifests itself in human consciousness and behavior. Besides, neurons themselves make up only 15% of nervous tissue; alongside them there are also auxiliary cells, the glia.
The central nervous system
Gregory Wheeler of Carnegie Mellon University has noted that many of the brain's signals would be useless without, for example, the spinal cord. Therefore, to create an artificial intelligence similar to the human one, it is necessary to develop not a single organ but a whole organism.
Thus, the AI system can be represented as the following set of relationships: