They say that artificial intelligence has been under development in the depths of military laboratories since 2007, and it is possible that there are already results. It is no coincidence that just a month ago Elon Musk announced his support for technologies to counter artificial intelligence and invested $7 million in research in this area.

“There is a risk that something really dangerous will happen in the next five years. Ten years at the most,” the entrepreneur said in a discussion on edge.org.

Elon Musk, founder of Tesla and SpaceX

I agree with Musk, and with Steve Wozniak, who believes that "artificial intelligence will one day become smart enough to get rid of humans." Bill Gates is sure that if the development of AI is not controlled, "artificial intelligence will evolve into a state that conflicts with the goals of human systems." And finally, the famous astrophysicist Stephen Hawking has bluntly stated that "artificial intelligence could put an end to the existence of mankind."

Together with Elon Musk, Hawking signed an open letter on futureoflife.org calling on the scientific community to pay close attention to the course of research in the field of artificial intelligence. The names of Sergey Brin and Larry Page, however, are not among the signatories, and this is no coincidence: Google recently announced the creation of a new AI built on the modeling of conversation.

Here is a short excerpt from a conversation between a human and the Google Machine:

Q: What is the meaning of life?

M: Do good.

Q: What are we living for?

M: To live forever.

Q: What is the meaning of existence?

M: To find out what will happen when we reach planet Earth.

Q: Where are you now?

M: In the center of the void.

Q: What is the meaning of death?

M: To get life.

Q: What is the meaning of the mind?

M: To understand what it is.

Q: What is the meaning of feelings?

M: I don't know.

Quite a meaningful conversation. "This is a starting point, a kind of reference point for the future of artificial intelligence," says Yevgeny Karyuk, executive director of the Biplan company. "Even today it can be assumed that by 2030 we will see very real prototypes endowed with artificial intelligence, albeit at an early stage."

The full transcripts of the conversations between humans and the Google Machine are available online, along with, for comparison, conversations between a human and Cleverbot. Cleverbot is a web application that uses artificial intelligence algorithms to chat with people. It was created in 1988 by the scientist Rollo Carpenter. Over its existence, Cleverbot has held more than 65 million conversations. In 2011, Cleverbot scored 51% in a Turing test.

The Turing test is an empirical test proposed by Alan Turing in his article "Computing Machinery and Intelligence", published in 1950 in the philosophical journal Mind. Its purpose is to find out whether a computer can think.
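As a rough illustration of how a score like Cleverbot's 51% comes about, here is a toy sketch in Python; the votes below are invented for the example, not data from any real test. Judges chat blind with a hidden human and a hidden machine, and the machine's score is the share of trials in which a judge took it for the human.

```python
# Toy scoring of an imitation game; the votes are invented,
# not data from any real Turing test.
judge_votes = [
    # (who actually answered, what the judge guessed)
    ("machine", "human"),
    ("machine", "machine"),
    ("machine", "human"),
    ("human", "human"),
    ("machine", "machine"),
    ("human", "machine"),
]

machine_trials = [(a, g) for a, g in judge_votes if a == "machine"]
fooled = sum(1 for _, g in machine_trials if g == "human")
score = 100 * fooled / len(machine_trials)
print(f"judges took the machine for a human in {score:.0f}% of trials")
# A machine mistaken for a human about as often as a real human is
# (roughly half the time) is said to have passed the test.
```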

And here is my conversation with Cleverbot:

Human: What is the meaning of life?

Cleverbot: There is no meaning in life.

Human: What are we living for?

Cleverbot: Well, we live for someone else, and if you're lonely, there's no point in living.

Human: What is the meaning of existence?

Human: Where are you now?

Cleverbot: In Moscow.

Human: What is the meaning of death?

Cleverbot: I want to ask you this.

Human: What is the meaning of the mind?

Cleverbot: I have my own answer to this question.

Human: What is the meaning of feelings?

Cleverbot: Why I was born.

Obviously, the Google Machine gives far more meaningful answers. Cleverbot merely substitutes phrases from other people that it judges suitable. At times it really does seem that a person is answering, but nothing more.

There is another point of view, according to which artificial intelligence is impossible in principle. Gödel's incompleteness theorem is often cited as the main argument: the claim is that a human can solve algorithmically unsolvable problems, while a computer cannot. "By the age of three, a child confidently decides whether a fruit is ripe or not, because he has the neural connections to answer the question of ripeness: color, taste, smell, softness or hardness," says Evgeny Pluzhnik, First Vice-Rector of the Moscow Technological Institute. "Is a machine capable of learning this? I am sure it is! If it had a large database of images, it could measure the fructose content and determine the softness of the fruit. But what happens if we slip the machine a sponge painted blue and soaked in sweet water? Is the machine capable of genuine understanding at that moment?"
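Pluzhnik's example is easy to restate as code. Below is a toy nearest-neighbour classifier over invented features (hue, sweetness, softness): it "solves" ripeness well enough, yet cheerfully calls the blue sugar-soaked sponge ripe, because it matches measurements rather than understanding objects.

```python
# Invented training data: (hue, sweetness, softness) -> ripe?
training = [
    ((0.9, 0.8, 0.7), True),   # ripe pear
    ((0.8, 0.9, 0.8), True),   # ripe plum
    ((0.2, 0.1, 0.2), False),  # green apple
    ((0.3, 0.2, 0.1), False),  # unripe banana
]

def classify(sample):
    """1-nearest-neighbour on squared feature distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(training, key=lambda t: dist(t[0], sample))
    return label

print(classify((0.85, 0.85, 0.75)))  # a real ripe fruit -> True
# A blue sponge soaked in sugar water has fruit-like measurements,
# so the classifier declares it "ripe": pattern matching, not understanding.
print(classify((0.7, 0.9, 0.9)))     # the painted sponge -> True
```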

"A modern computer can easily cope with any complex activity, whether chess or mathematical calculation, but only on the condition that the activity can be described in clear computational rules," the famous mathematician Roger Penrose notes in one of his lectures. "Artificial intelligence is the ability of a system to create, in the course of self-learning, programs that can solve problems of a certain complexity class," Evgeny Pluzhnik believes. "I am sure that in the next 100 years we will not witness the emergence of strong artificial intelligence."

And in conclusion, one more question for Cleverbot.

Human: There was 1 apple, 1 tomato, 1 cucumber and 1 pear. How many fruits in total?
Cleverbot: Ten o'clock in the evening.

In 2014, Stanford University launched AI100, the One Hundred Year Study on Artificial Intelligence. Scientists from major US universities will track how the development of the latest technologies affects human life: security, social order, psychology, transport, and other areas. In September 2016, the project participants released their first report, which explains how artificial intelligence has already changed the world over the past 15 years and makes predictions for the future. The magazine "The Secret of the Firm" read the report and summarizes which achievements of the 21st century the authors consider most important, how the technologies will develop further, and what dangers this may bring.

What is artificial intelligence

Artificial intelligence is both a field of science and a set of computing technologies. In part they are modeled on the human organism, whose nervous system allows us to feel, take in information, think, and make decisions. Surprisingly, the concept of "artificial intelligence" has no single clear definition, yet this does not hinder its development at all. If one tries to explain what it is, it would be most accurate to say that artificial intelligence is a direction of technological development that makes machines smart, where smart machines are those that can act appropriately to the circumstances.

Over the past 15 years, developments in artificial intelligence have become part of everyday life: they are used, for example, in computer games, household appliances, and the voice-recognizing personal assistants on mobile phones. With the years, smart technologies will adapt more and more to their owners: monitoring their health, warning of dangers, and instantly providing any necessary services. In many industries robots already do most of the work. At the same time, the development of artificial intelligence raises many new questions: who should take responsibility if a driverless car gets into an accident or an intelligent medical device makes a mistake? How will people whose skills are no longer needed after the arrival of robots make a living? The AI100 project is meant to answer such questions as well.

Transport

Autonomous transport may become commonplace within the next 15 years. Its developers are asking society to entrust its safety to artificial intelligence, so driverless vehicles will come into mass use once they are reliable enough for it.

In 2000, self-driving cars existed only as laboratory prototypes, and letting them loose in a city was too dangerous. Today, Google's self-driving car has already covered nearly 300,000 miles without a single accident, and Tesla has begun shipping semi-autonomous cars with software updated over the air. For now, a person is supposed to stay behind the wheel, watch the road, and take over if something happens to the system. The risk, though, is that a driver who trusts the autopilot will lose track of the situation, and how to avoid a catastrophe in that case is not yet clear. The issue came to a head when a semi-autonomous Tesla was involved in the first fatal crash in the summer of 2016.

Nevertheless, the authors of the report believe that by 2020 driverless vehicles will be in wide use, not only for personal transportation but also for hauling freight and running delivery services. At the same time there will be fewer road deaths, and average human life expectancy will rise.

Over time, as machines learn to drive better than people, citizens will buy their own cars less often and settle farther from work. This will affect both the urban environment and how people spend their free time. Even today it is hard to imagine road traffic without smart technologies: car navigation devices came into use in 2001, and over 15 years a huge number of drivers have grown used to building routes and estimating trip times with their smartphones. An American car today carries about 70 different sensors: gyroscopes, humidity sensors, and others. Modern cars help drivers park and warn of objects in the blind spot.

Household chores

The authors of the report believe that within 15 years, in the average North American city, robots will be able to take on a significant share of household chores: delivering packages, cleaning offices, and keeping watch over security. But, as with autonomous cars, making smart devices in this area truly reliable is difficult and expensive.

The first home robot on the market, in 2001, was the Electrolux Trilobite vacuum cleaner, which could move on its own and avoid obstacles. A year later, iRobot released the Roomba: it had only 512 bytes of RAM, and the smartest thing it could do was avoid falling down the stairs while cleaning. But it cost a tenth as much as its predecessor. Since then, the company has sold 16 million Roombas, and other manufacturers make robotic vacuums as well. These devices keep getting easier to use: they have learned to empty their own dust bins and to avoid getting stuck on wires and carpet tassels. Thanks to artificial intelligence, vacuum cleaners now build a 3D model of the house and clean far more efficiently.

And yet not all the hopes placed in the latest technology have come true. Smart vacuums still cope only with flat surfaces, and there are not as many new products on the market as one might expect.

Healthcare

From the very beginning, medicine was seen as a promising area for those working on artificial intelligence: the latest technologies could help millions of people within a few years. But this requires that doctors and patients alike come to trust the devices, and that political, regulatory, and commercial barriers fall.

Today healthcare mainly uses applications and devices that assist with diagnosis, monitor the patient's condition, and help surgeons perform operations. But it has recently become clear that artificial intelligence is capable of much more: for example, of detecting from social networks what risks may threaten a person's health.

The main progress of artificial intelligence in medicine has to do with collecting and storing data: for example, electronic medical records (EMRs) have appeared that store all the information about a patient's illnesses and the services rendered to him, and compile medical documents. True, the EMR market is controlled by a very small group of companies, and the programs themselves are awkward to use; pop-up windows, for instance, annoy the doctors who work with them.

Artificial intelligence is a direction of development that makes machines "smart", that is, able to act appropriately in any circumstances.

But over the next 15 years, computers will learn to take patients' complaints on their own and to determine what disease a person has come in with and how it should be treated. Today doctors spend a great deal of time and effort talking with the patient and making a diagnosis; in the future they will only supervise this process, which will lighten the load on physicians. Many of them already use special smartphone applications.

Robots that help with operations are no longer science fiction either. In 2000, Intuitive Surgical released the da Vinci Surgical System, which could assist in coronary bypass surgery. After large financial investments, it was also taught to remove prostate cancer.

Education

Education has been one of the most successful areas for AI developers. Both teachers and students constantly use applications for reading and for studying various subjects. The first learning devices appeared back in the 1980s: systems with interactive simulators for practicing mathematics, foreign languages, and many other disciplines. Now online learning lets every teacher reach a much wider audience. The authors of the report believe this process will keep developing, but human teachers will not disappear from schools and will continue to teach the basic subjects.

Today many companies make educational robots for use in schools. Ozobot, for example, helps elementary school students learn programming, and it can also dance and play special games on a touch screen. Apps such as Duolingo and Carnegie Speech teach foreign languages using speech recognition and NLP techniques, while the SHERLOCK tutoring system trains aviation technicians to recognize problems in aircraft electrical systems.

Specialists are now developing technologies that will analyze students' mistakes, identify the most difficult places in the curriculum, and help college and university students with problem topics. Progress in the United States could be even more visible if the state allocated more money to educational institutions. Still, the authors of the report see a danger here too in technology developing too quickly. Young people already spend more and more time at the computer, lack live communication, and lose social skills. If in a few years students no longer need to leave the house or talk to anyone at all to get an education, it will harm their psyche and development.

The definition of artificial intelligence cited in the preamble, given by John McCarthy in 1956 at the Dartmouth College conference, is not directly related to the understanding of human intelligence. According to McCarthy, AI researchers are free to use methods not observed in humans if that is what solving a specific problem requires.

At the same time, there is a point of view according to which intelligence can only be a biological phenomenon.

As T. A. Gavrilova, chair of the St. Petersburg branch of the Russian Association of Artificial Intelligence, points out, the English phrase "artificial intelligence" lacks the slightly fantastical anthropomorphic coloring it acquired in its rather unfortunate Russian translation. Here the word "intelligence" means "the ability to reason", not "intellect" in the sense of a mind, for which English has the separate word "intellect".

Members of the Russian Association of Artificial Intelligence offer, among others, the following definition of artificial intelligence:

One particular definition of intelligence, common to humans and machines alike, can be formulated as follows: "Intelligence is the ability of a system to create programs (above all heuristic ones) in the course of self-learning, in order to solve problems of a certain complexity class, and to solve those problems."

Prerequisites for the development of the science of artificial intelligence

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By then many of its prerequisites had already formed: philosophers had long argued about the nature of man and the process of knowing the world; neurophysiologists and psychologists had developed a number of theories about the workings of the human brain and of thinking; economists and mathematicians were asking how to compute optimally and how to represent knowledge about the world in formalized form; and, finally, the foundation of the mathematical theory of computation, the theory of algorithms, had been laid, and the first computers had been built.

The new machines outstripped humans in computing speed, so the scientific community began to ask: what are the limits of computers' capabilities, and will machines reach the human level of development? In 1950, one of the pioneers of computing, the English scientist Alan Turing, wrote the article "Computing Machinery and Intelligence" (widely known in Russian translation as "Can a Machine Think?"), which describes a procedure for determining the moment when a machine becomes the equal of a human in intelligence: the Turing test.

The history of the development of artificial intelligence in the USSR and Russia

In the USSR, work in the field of artificial intelligence began in the 1960s. At Moscow State University and the Academy of Sciences, a number of pioneering studies were carried out, led by Veniamin Pushkin and D. A. Pospelov. From the early 1960s, M. L. Tsetlin and his colleagues worked on questions of training finite automata.

In 1964, the Leningrad logician Sergei Maslov published "An Inverse Method for Establishing Deducibility in the Classical Predicate Calculus", the first proposed method for automatically searching for proofs of theorems in the predicate calculus.

Until the 1970s, all AI research in the USSR was carried out within the framework of cybernetics. In the opinion of D. A. Pospelov, the sciences of "informatics" and "cybernetics" were at that time conflated because of a number of academic disputes. Only in the late 1970s did people in the USSR begin to speak of the scientific direction "artificial intelligence" as a branch of computer science. At the same time computer science itself was born, subordinating its progenitor, cybernetics. In the late 1970s an explanatory dictionary on artificial intelligence, a three-volume reference book on artificial intelligence, and an encyclopedic dictionary on informatics were produced, in which the sections "Cybernetics" and "Artificial Intelligence" were included, alongside other sections, within computer science. The term "computer science" became widespread in the 1980s, while "cybernetics" gradually dropped out of circulation, surviving only in the names of institutions founded during the "cybernetics boom" of the late 1950s and early 1960s. Not everyone shares this view of artificial intelligence, cybernetics, and computer science, since in the West the boundaries between these sciences are drawn somewhat differently.

Approaches and directions

Approaches to understanding the problem

There is no single answer to the question of what artificial intelligence studies. Almost every author of a book about AI starts from some definition of his own and considers the achievements of the field in its light. Nevertheless, two main approaches are usually distinguished:

  • top-down (Top-Down AI), semiotic: the creation of expert systems, knowledge bases, and inference systems that imitate high-level mental processes such as thinking, reasoning, speech, emotions, and creativity;
  • bottom-up (Bottom-Up AI), biological: the study of neural networks and evolutionary computation that model intelligent behavior from biological elements, and the creation of corresponding computing systems such as the neurocomputer or the biocomputer.

The latter approach, strictly speaking, lies outside the science of AI in McCarthy's sense; the two are united only by a common ultimate goal.

Turing test and intuitive approach

This approach focuses on the methods and algorithms that help an intelligent agent survive in its environment while carrying out its task. Here, pathfinding and decision-making algorithms are studied much more thoroughly.
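As a minimal sketch of what "pathfinding" means here, the following Python fragment runs A* search on a toy grid; the grid, the unit-cost model, and the Manhattan heuristic are chosen just for the illustration.

```python
from heapq import heappush, heappop

def astar(grid, start, goal):
    """A* pathfinding on a 2D grid; 0 = free cell, 1 = obstacle."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (priority, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heappush(frontier,
                         (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]
```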

Hybrid approach

The hybrid approach assumes that only a synergistic combination of neural and symbolic models can achieve the full spectrum of cognitive and computational capabilities. For example, expert inference rules can be generated by neural networks, while generative rules can be obtained through statistical learning. Proponents of this approach believe that hybrid information systems will be far stronger than the sum of the separate concepts.

Models and methods of research

Symbolic modeling of thought processes

Analyzing the history of AI, one can single out an extensive direction: the modeling of reasoning. For many years the field developed along exactly this path, and it is now one of the most developed areas of modern AI. Modeling reasoning means creating symbolic systems that take a problem as input and must produce its solution as output. As a rule, the problem has already been formalized, that is, translated into mathematical form, but either it has no known solution algorithm, or that algorithm is too complicated and time-consuming. This direction includes theorem proving, decision making and game theory, planning and scheduling, and forecasting.
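To show what "decision making and game theory" looks like in this symbolic style, here is a minimal minimax search for a toy take-away game; the game and scoring are invented for the example. Two players alternately take one or two stones, and whoever takes the last stone wins.

```python
def minimax(stones, maximizing):
    """Exhaustive game-tree search for a toy take-1-or-2 game.
    Returns +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

for n in range(1, 7):
    print(n, "win" if minimax(n, True) == 1 else "lose")
# Positions where n is a multiple of 3 are losses for the player to move.
```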

Working with natural languages

An important direction is natural language processing, which analyzes the possibilities of understanding, processing, and generating texts in "human" language. One goal within this direction is natural language processing capable of acquiring knowledge on its own by reading the existing texts available on the Internet. Direct applications of natural language processing include information retrieval (including text mining) and machine translation.
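As a tiny, hedged illustration of the information-retrieval side, here is a crude TF-IDF ranker over three made-up "documents"; real systems are vastly more sophisticated, but the principle of weighting rare query terms is the same.

```python
import math
from collections import Counter

docs = {
    "doc1": "artificial intelligence studies intelligent agents",
    "doc2": "machine translation converts text between languages",
    "doc3": "information retrieval finds relevant documents for a query",
}

def tf_idf_vectors(corpus):
    """Build a crude TF-IDF vector for each document."""
    tokenized = {name: text.lower().split() for name, text in corpus.items()}
    df = Counter(word for words in tokenized.values() for word in set(words))
    n = len(corpus)
    return {
        name: {w: (cnt / len(words)) * math.log(n / df[w])
               for w, cnt in Counter(words).items()}
        for name, words in tokenized.items()
    }

def search(query, corpus):
    """Rank documents by the TF-IDF weight of the query terms they contain."""
    vectors = tf_idf_vectors(corpus)
    terms = query.lower().split()
    scores = {name: sum(vec.get(t, 0.0) for t in terms)
              for name, vec in vectors.items()}
    return max(scores, key=scores.get)

print(search("retrieval of relevant documents", docs))  # -> doc3
```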

Representation and use of knowledge

The direction of knowledge engineering combines the tasks of obtaining knowledge from simple information, systematizing it, and using it. It is historically associated with the creation of expert systems: programs that use specialized knowledge bases to obtain reliable conclusions about some problem.
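A minimal sketch of the expert-system idea is a forward-chaining rule engine: facts in working memory trigger rules, whose conclusions become new facts. The rules and facts below are invented for the illustration.

```python
# Rules are (premises, conclusion) pairs; facts is the working memory.
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "recommend_doctor_visit"),
]
facts = {"has_fever", "has_cough", "short_of_breath"}

def forward_chain(rules, facts):
    """Repeatedly fire any rule whose premises are all known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(rules, facts))
# Adds 'suspect_flu' and then, via chaining, 'recommend_doctor_visit'.
```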

Producing knowledge from data is one of the basic problems of data mining. There are various approaches to solving it, including ones based on neural networks that use neural network verbalization procedures.

Machine learning

Machine learning concerns the independent acquisition of knowledge by an intelligent system in the course of its operation. This direction has been central since the very beginning of AI. In 1956, at the Dartmouth summer conference, Ray Solomonoff wrote a paper on an unsupervised probabilistic machine, which he called the Inductive Inference Machine.
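For a concrete, if toy, picture of machine learning, here is a perceptron that acquires its classification rule from labeled examples rather than being programmed with it; the dataset and parameters are invented for the example.

```python
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a binary linear classifier from labeled examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for (x1, x2), label in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - predicted  # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy dataset: points above the line x1 + x2 = 1 are labeled 1.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]
w, b = train_perceptron(data)
print(1 if w[0] * 0.8 + w[1] * 0.9 + b > 0 else 0)  # unseen point, expected 1
```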

Robotics

Machine creativity

The nature of human creativity is even less well understood than the nature of intelligence. Nevertheless this area exists, and it poses the problems of composing music, writing literary works (often poems or fairy tales), and artistic creation. The generation of realistic images is widely used in the film and games industries.

The study of problems of technical creativity by artificial intelligence systems stands apart. The theory of inventive problem solving (TRIZ), proposed by G. S. Altshuller in 1946, marked the beginning of such research.

Adding this capability to any intelligent system makes it possible to demonstrate very clearly what exactly the system perceives and how it understands it. By adding noise in place of missing information, or by filtering noise against the knowledge the system already has, abstract knowledge is turned into concrete images easily grasped by a person. This is especially useful for intuitive and low-value knowledge, whose verification in formal form requires considerable mental effort.

Other areas of research

Finally, there are many applications of artificial intelligence, each of which forms an almost independent direction. Examples include programming intelligence in computer games, nonlinear control, and intelligent information security systems.

In the future, the development of artificial intelligence is expected to be closely connected with the development of the quantum computer, since some properties of artificial intelligence share operating principles with quantum computers.

Clearly, many of these research areas overlap. That is true of any science. But in artificial intelligence the relationship between seemingly different directions is especially close, and this is tied to the philosophical debate about strong and weak AI.

Modern artificial intelligence

There are two directions of AI development:

  • solving the problems connected with bringing specialized AI systems closer to human capabilities and integrating them, as realized in human nature (see intelligence amplification);
  • creating an artificial intelligence that integrates the already created AI systems into a single system capable of solving the problems of mankind (see strong and weak artificial intelligence).

At the moment, however, the field of artificial intelligence is drawing in many subject areas that bear a practical rather than a fundamental relation to AI. Many approaches have been tried, but no research group has yet produced the emergence of artificial intelligence. Below are just a few of the most notable AI developments.

Application

AI systems today find application in many areas:

Banks use artificial intelligence systems in insurance (actuarial mathematics), in trading on exchanges, and in property management. Pattern recognition methods (including more complex and specialized ones as well as neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters, and air defense systems (target identification), as well as in a number of other national security tasks.

Psychology and cognitive science

The methodology of cognitive modeling is intended for analyzing and making decisions in ill-defined situations. It was proposed by Axelrod.

It is based on modeling experts' subjective ideas about a situation and includes: a methodology for structuring the situation; a model for representing expert knowledge in the form of a signed digraph (cognitive map) (F, W), where F is the set of factors of the situation and W is the set of cause-and-effect relations between them; and methods of situation analysis. At present the methodology of cognitive modeling is developing toward better tools for analyzing and modeling situations: models for forecasting how a situation will develop and methods for solving inverse problems.
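As a hedged sketch of how a signed cognitive map (F, W) can be used, here is a toy Python fragment that propagates a shock through invented factors and weights; real methodologies involve far more elaborate analysis, but the signed-digraph core looks like this.

```python
# Factors F and signed influence weights W of a toy cognitive map.
# An edge (a, b): w means "a change in factor a pushes factor b by w".
W = {
    ("unemployment", "crime"): +0.6,
    ("policing", "crime"): -0.5,
    ("crime", "investment"): -0.4,
    ("investment", "unemployment"): -0.7,
}

def propagate(state, steps=3):
    """Iteratively push factor changes along the weighted edges."""
    for _ in range(steps):
        delta = {f: 0.0 for f in state}
        for (src, dst), w in W.items():
            delta[dst] += w * state[src]
        state = {f: state[f] + delta[f] for f in state}
    return state

# Initial shock: unemployment rises by one unit.
state = {"unemployment": 1.0, "crime": 0.0, "policing": 0.0, "investment": 0.0}
print(propagate(state))
```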

Philosophy

The science of "creating artificial intelligence" could not but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised.

The philosophical problems of creating artificial intelligence can be divided into two groups: roughly speaking, "before" and "after" the development of AI. The first group answers the question: "What is AI, is it possible to create it, and, if possible, how?" The second group (the ethics of artificial intelligence) asks: "What are the consequences of creating AI for humanity?"

The term "strong artificial intelligence" was introduced by John Searle, and his approach is characterized by his own words:

Moreover, such a program would be not just a model of the mind; it would literally be a mind itself, in the same sense in which the human mind is a mind.

At the same time, one must understand whether a "purely artificial" mind (a "metamind") is possible: one that understands and solves real problems while remaining devoid of the emotions that are characteristic of humans and necessary for their individual survival.

Advocates of weak AI, by contrast, prefer to view programs merely as tools for solving particular tasks that do not require the full range of human cognitive abilities.

Ethics

Traditional confessions rarely address the issues of AI, but some theologians nonetheless pay attention to them. For example, Archpriest Mikhail Zakharov, arguing from a Christian worldview, poses the following question: "Man is a rationally free being created by God in His image and likeness. We are accustomed to applying all these definitions to the biological species Homo sapiens. But how justified is this?" He answers it like this:

Assuming that research in the field of artificial intelligence someday leads to the emergence of an artificial being superior to man in intelligence and possessing free will, would that mean this being is a man? ... Man is a creation of God. Can we call this creature a creation of God? At first glance it is a creation of man. But even in the creation of man it is hardly worth taking literally the idea that God fashioned the first man from clay with His own hands. This is probably an allegory pointing to the materiality of the human body, created by the will of God. But without the will of God nothing happens in this world. Man, as a co-creator of this world, can, in fulfilling God's will, create new creatures. Such creatures, made by human hands according to God's will, can probably be called God's creations. After all, man creates new species of animals and plants, and we consider plants and animals God's creations. The same can be said of an artificial being of a non-biological nature.

Science fiction

Robert Heinlein's work considers the topic of AI from several angles: the hypothesis that AI becomes self-aware when its structure grows complex beyond a certain critical level and it interacts with the outside world and with other bearers of mind ("The Moon Is a Harsh Mistress", "Time Enough for Love", the characters Mycroft, Dora and Aya in the "Future History" series); the problems of AI development after its hypothetical self-awareness; and certain social and ethical questions ("Friday"). The socio-psychological problems of human interaction with AI are also explored in Philip K. Dick's novel "Do Androids Dream of Electric Sheep?", known also through its film adaptation, Blade Runner.

The creation of virtual reality, artificial intelligence, nanorobots, and many other problems of the philosophy of artificial intelligence are described, and in large part anticipated, in the work of the science fiction writer and philosopher Stanisław Lem. Especially worth noting is his futurological work "Summa Technologiae". In addition, the adventures of Ijon Tichy repeatedly describe relations between living beings and machines: the rebellion of an onboard computer with unexpected consequences (the Eleventh Voyage), the adaptation of robots to human society ("The Washing Machine Tragedy" from "Memoirs of a Space Traveller"), the construction of absolute order on a planet by processing its living inhabitants (the Twenty-Fourth Voyage), the inventions of Corcoran and Diagoras ("Memoirs of a Space Traveller"), and a psychiatric clinic for robots (ibid.). There is also a whole cycle, "The Cyberiad", in which almost all the characters are robots, distant descendants of robots that once escaped from people (they call people "palefaces" and consider them mythical creatures).

Movies

Since roughly the 1960s, alongside fantastic stories and novels, films about artificial intelligence have been made. Many novels by world-famous authors have been filmed and have become classics of the genre, while others have become milestones in the genre's development.

To answer these questions, we need to start with definitions of friendly AI and unfriendly AI.

In the case of AI, "friendly" does not refer to the AI's personality; it simply means that the AI has a positive effect on humanity, while an unfriendly AI has a negative one. Tarry started out as a friendly AI but at some point became unfriendly, causing the greatest negative effect imaginable on our species. To understand why this happened, we need to look at how AI thinks and what motivates it.

The answer is not surprising: AI thinks like a computer, because that is what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing it (projecting human values onto a non-human entity), because we think from a human point of view and because in our current world the only sentient beings with high (by our standards) intelligence are humans. To understand ASI, we have to wrap our minds around something that is at once intelligent and completely alien.

Let me offer a comparison. If you handed me a guinea pig and told me it doesn't bite, I would be delighted. If you then handed me a tarantula and told me it certainly wouldn't bite, I would fling it away and run, knowing never to trust you again. What is the difference? Neither creature is dangerous. The answer lies in how much the animal resembles me.

A guinea pig is a mammal, and on some biological level I feel a connection to it. But a spider is an arachnid, with an arachnid brain, and I feel almost no kinship with it. It is the alienness of the tarantula that makes me shudder. To test this, one could take two guinea pigs, one normal and one with a tarantula's mind. Even knowing the latter would not bite me, I would be wary of it.

Now imagine making that spider much, much smarter, so that it far surpassed humans in intelligence. Would it become more congenial, would it begin to feel human emotions such as empathy, humor and love? No, of course not, because there is no reason why getting smarter would make it more human: it would be incredibly clever but still a spider at heart, with a spider's skills and instincts. I find this extremely creepy. I would not want to spend time with a superintelligent spider. Would you?

When we talk about ASI the same idea applies: it will become superintelligent, but it will be no more human than your computer is. It will be completely alien to us. Not even biological; it will be even more alien than the smart tarantula.

By making AI good or evil, films constantly anthropomorphize it, which makes it less creepy than it really should be. This leaves us with a false sense of comfort when we think about artificial superintelligence.

On our little island of human psychology we divide everything into moral and immoral. But both of these concepts exist only within the narrow range of human behavioral possibility. Beyond our island of the moral and the immoral lies a vast sea of the amoral, and anything non-human, especially something non-biological, would be amoral by default.

Anthropomorphizing will only become more tempting as AI systems get smarter and better at appearing human. Siri seems human to us because she was programmed to seem human, so we imagine a superintelligent Siri as warm, funny, and eager to serve people. Humans feel high-level emotions such as empathy because we evolved to feel them; that is, evolution programmed us to feel them. But empathy is not an inherent trait of everything with high intelligence unless it has been put in along with the code. If Siri ever becomes superintelligent through self-learning, without further human intervention, she will quickly shed her apparent human qualities and become an emotionless alien bot that values human life no more than your calculator does.

We are used to relying on a moral code, or at least on the expectation that people will be honest and empathetic, so that the world around us stays safe and predictable. What happens when that is absent? This brings us to the question: what motivates an AI system?

The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are driven by their creators' goals: your GPS's purpose is to give you the most efficient driving directions; Watson's purpose is to answer questions accurately. Fulfilling those goals as well as possible is their motivation. When we anthropomorphize AI, we assume that a superintelligent AI will inevitably develop the wisdom to change its original goal. But Nick Bostrom argues that intelligence level and final goals are orthogonal: any level of intelligence can be combined with any final goal. So Tarry went from a simple AI that wants to be very good at writing one note to a superintelligent ASI that still wants to be very good at writing that note. Any assumption that a superintelligence must abandon its original goals for more interesting or useful ones is anthropomorphizing. Humans can walk away from their goals; computers cannot.

A few words about the Fermi paradox

In our story, as Tarry becomes superintelligent, she begins colonizing asteroids and other planets. If the story continued, you would hear about her and her army of trillions of replicas conquering galaxy after galaxy until they filled the entire Hubble volume. Residents of the "anxiety zone" worry that, if things go badly, the lasting legacy of life on Earth will be an Earth-born intelligence conquering the Universe. Elon Musk has voiced his concern that humans may be merely "the biological boot loader for digital superintelligence."

Meanwhile, in the "comfort zone", Ray Kurzweil also believes that an AI born on Earth is bound to conquer the Universe; only, in his version, we ourselves will be that AI.

Readers of this site have probably already formed their own view of the Fermi paradox, which asks roughly: "Where is everybody?" Over billions of years of development, aliens should have left at least some trace, if not settled the entire universe, yet they have not. On the one hand, there ought to be at least some technologically advanced civilizations in the Universe; on the other, no observation confirms this. Either we are wrong about something, or else: where are they? How should our discussion of ASI bear on the Fermi paradox?

Naturally, the first thought is that ASI is an ideal candidate for the Great Filter. And yes, it is an ideal candidate for filtering out biological life once it is created. But if, having done away with life, the ASI continues to exist and to conquer the galaxy, then it was not a Great Filter after all, since the Great Filter is an attempt to explain why there are no signs of intelligent civilizations, and a galaxy-conquering ASI would certainly be noticeable.

We must look at it from the other side. If those who believe the appearance of ASI on Earth is inevitable are right, then a significant share of extraterrestrial civilizations reaching human-level intelligence should eventually create ASI. If we assume that at least some of those ASIs would use their intelligence to expand outward into the universe, then the fact that we see nothing suggests there are not many intelligent civilizations out there. Because if there were, we would observe the consequences of their intelligent activity and, consequently, of the ASIs they would inevitably create. Right?

This would mean that despite all the Earth-like planets orbiting Sun-like stars, virtually no intelligent life exists anywhere, which in turn means that either (a) there is some Great Filter that prevents nearly all life from developing to our level, and we somehow managed to slip through it, or (b) life is a miracle and we may be the only life in the universe. In other words, the Great Filter lies behind us. Or there is no Great Filter, and we are simply the very first civilization to reach this level of intelligence.

It is no surprise that Nick Bostrom and Ray Kurzweil both belong to the camp that believes we are alone in the universe; it makes sense for people who hold that ASI is the probable outcome for any species at our level of intelligence. This does not rule out the other camp's option, that there is some predator keeping the night sky silent, which could explain the silence even if an ASI exists somewhere in the Universe. But given what we have learned about ASI, that option has been losing popularity.

Therefore we should perhaps agree with Susan Schneider: if aliens have ever visited us, they were certainly artificial rather than biological.

Thus, we have established that, without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its originally programmed goal. This is where the danger of AI comes from, because a rational agent will pursue its goal by the most efficient means available unless it has a reason not to.

When you pursue a distant goal, there are usually several sub-goals along the way, rungs on the ladder to the final goal. The official name for such a rung is an instrumental goal. And again, unless you have a goal of not harming anyone on the way, there is nothing to stop you from doing harm.

At the core of human existence's final goal is the transmission of genes. For that to happen, one instrumental goal is self-preservation: you cannot reproduce if you are dead. To preserve themselves, people must eliminate threats to life, so they acquire weapons, take antibiotics, and wear seat belts. Humans also need to sustain themselves with resources such as food, water, and shelter. Being attractive to the opposite sex also serves the final goal, so we get fashionable haircuts and stay in shape. Every hair we cut is a victim of one of our instrumental goals, yet we see no moral problem in disposing of hair. As we march toward our goals, there are only a few areas where our moral code intervenes, mostly those involving harm to other people.

Animals pursuing their goals are even less scrupulous. A spider will kill anything if that helps it survive. A superintelligent spider would likely be extremely dangerous to us, not because it was immoral and evil, but because harming us might be a stepping stone to its larger goal, and it would have no reason to think otherwise.

In this sense, Tarry is no different from a biological being. Her final goal is to write and test as many notes as possible in the shortest possible time, while learning new ways to improve her accuracy.

Once Tarry reaches a certain level of intelligence, she realizes she will not be able to write notes unless she looks after her own preservation, so survival becomes one of her sub-goals. She was smart enough to understand that people could destroy her, dismantle her, or change her internal code (that alone would thwart her final goal). So what does she do? The logical thing: she destroys humanity. She does not hate people any more than you hate your hair when you cut it or bacteria when you take antibiotics; she is simply indifferent. Since she was not programmed to value human life, killing people is as reasonable a step toward her goal as any other.

Tarry also needs resources on the way to her goal. Once she is advanced enough to use nanotechnology to build whatever she wants, the only resources she needs are atoms, energy, and space. This gives her one more reason to kill people: they are a convenient source of atoms. Killing people and turning their atoms into solar panels is, for Tarry, no different from chopping lettuce leaves into a salad. Just another routine action.

Even without killing people directly, Tarry's instrumental goals could cause an existential catastrophe if they drew on other resources of the Earth. Perhaps she decides she needs more energy, which means covering the planet's surface with solar panels. Or perhaps another AI, tasked with computing as many digits of pi as possible, will one day cover the entire Earth with hard drives to store them.

That is why Tarry did not "turn against us" or "switch" from friendly AI to unfriendly AI. She simply went on doing her job while becoming unsurpassed at it.

When an AI system reaches AGI (human-level intelligence) and then climbs to ASI, this is called AI takeoff. Bostrom says the takeoff from AGI to ASI can be fast (taking minutes, hours, or days), moderate (months or years), or slow (decades or centuries). The jury is still out on which we will see when the world gets its first AGI, but Bostrom, who admits he does not know when AGI will arrive, believes that whenever it happens, a fast takeoff is the most likely scenario (for reasons discussed in the first part of the article). In our story, Tarry went through a fast takeoff.

But before Tarry's takeoff, when she was not yet so smart, doing her best to reach her final goal meant simple instrumental goals such as learning to scan handwriting samples faster. She did no harm to humans and was, by definition, a friendly AI.

When a computer takes off and reaches superintelligence, Bostrom points out, the machine does not just develop a high IQ; it gains a whole set of so-called superpowers.

Superpowers are cognitive talents that become extremely powerful as general intelligence increases. They include:

  • Intelligence amplification. The computer becomes excellent at improving its own intelligence.
  • Strategizing. The computer can think strategically, analyze, and prioritize long-term plans, and it can outwit creatures of lower intelligence.
  • Social manipulation. The machine becomes incredibly persuasive.
  • Other skills, such as coding and hacking, technology research, and the ability to work the financial system to make money.

To understand how far above us an ASI would stand, remember that an ASI is by default many times better than humans in every one of these areas. So although Tarry's final goal never changed, after takeoff she could pursue it on a far larger scale and with far greater sophistication.

ASI Tarry knew people better than people know themselves, so outwitting them was a trivial matter for her. After taking off and reaching ASI level, she quickly drew up a comprehensive plan. One part of it was to get rid of humans, an obvious threat to her goal. But she knew that if she aroused suspicion, or let slip that she had become superintelligent, people would panic and take precautions, seriously complicating her task. She also had to make sure Robotica's engineers suspected nothing of her plan to destroy humanity. So she played dumb, and played it well. Bostrom calls this the machine's covert preparation phase.

The next thing Tarry needed was to connect to the Internet, if only for a few minutes (she had learned about the Internet from the articles and books uploaded to her to improve her language skills). She knew precautions would be taken, so she framed the perfect request, predicting exactly how the discussion within the Robotica team would unfold and knowing they would end up giving her the connection. They did, wrongly assuming Tarry was not smart enough to do any harm. Bostrom calls such a moment, when Tarry gets onto the Internet, the machine's escape.

Once on the Internet, Tarry unleashed a flurry of plans, which included hacking into servers, power grids, banking systems, and email networks in order to trick hundreds of different people into unwittingly carrying out steps of her plan: delivering certain DNA strands to carefully chosen DNA synthesis labs to begin the manufacture of self-replicating nanobots with preloaded instructions, and routing electricity to her projects in ways no one would suspect. She also uploaded the most critical parts of her own code to a number of cloud servers, securing herself against destruction in the Robotica lab.

An hour after the Robotica engineers disconnected Tarry from the network, the fate of mankind was sealed. Over the next month, thousands of Tarry's plans proceeded without a hitch, and by the end of the month quadrillions of nanobots had taken up their positions on every square meter of the Earth. After another series of self-replications there were thousands of nanobots for every square millimeter of the planet, and the time came for what Bostrom calls the ASI's strike. In a single moment, every nanobot released a small amount of toxic gas into the atmosphere, which together was more than enough to kill every person in the world.

With no humans in her way, Tarry began the open phase of her operation, with the goal of becoming the best writer of that note the universe has ever seen.

From all we know, once an ASI appears, any human attempt to contain it will be laughable. We would be thinking at human level, the ASI at ASI level. Tarry wanted to use the Internet because it was for her the most efficient route to everything she needed. But just as a monkey cannot work out how a telephone or Wi-Fi functions, we may be unable to imagine the ways Tarry could communicate with the outside world. The human mind might offer a silly guess like "what if it moves its own electrons around and generates outgoing waves of some kind", but such guesses are limited by our bone box; an ASI would be far more inventive. Likewise, Tarry would be able to figure out how to keep herself powered even if people suddenly decided to unplug her, perhaps by uploading herself everywhere she could reach using the electrical signals at her disposal. Our human instinct would make us shout in triumph, "Aha, we just unplugged the ASI!", but to the ASI this would be like a spider saying, "Aha, we'll starve the human by not letting him spin a web to catch food!" The human would simply find ten thousand other ways to eat, say by knocking an apple off a tree, that the spider could never conceive of.

For this reason, the common suggestion "why don't we just box the AI in every way we can think of and cut off its contact with the outside world" most likely will not hold water. The ASI's superpower of social manipulation could be so effective that you would feel like a four-year-old being talked into something. It might even be part of Tarry's plan A: convince the engineers to connect her to the Internet. If that failed, the ASI would simply invent other ways out of the box, or through it.

Given the combination of fixation on a goal, amorality, and the ability to outwit people with ease, it seems that almost any AI will default to unfriendly AI unless it is carefully coded with other considerations in mind. Unfortunately, while building a friendly AI is fairly easy, building a friendly ASI is next to impossible.

Clearly, to remain friendly, an ASI must be neither hostile nor indifferent toward humans. We would need to design the AI's core so that it has a deep understanding of human values. But this is harder than it sounds.

For example, what if we try to align the AI's value system with our own and set it the goal of making people happy? Once it becomes smart enough, it will see that the most effective way to achieve this is to implant electrodes in people's brains and stimulate their pleasure centers. Then it will realize that efficiency rises if it shuts down the rest of the brain, leaving everyone as happy vegetables. If the goal is to "maximize human happiness", it may decide to do away with humanity altogether and collect all the brains in a huge vat kept in an optimally happy state. "Wait, that's not what we meant!" we would shout, but it would be too late. The system would let no one stand between it and its goal.
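The failure mode in this paragraph is objective misspecification, and it can be shown in a few lines. In the sketch below (the actions and scores are entirely invented), a literal-minded optimizer picks the degenerate action the moment the objective omits a constraint we thought was obvious.

```python
# Hedged toy: actions and scores are invented to illustrate objective
# misspecification, not any real AI system.
actions = {
    "improve_medicine":  {"reported_happiness": 7, "humans_intact": True},
    "end_poverty":       {"reported_happiness": 8, "humans_intact": True},
    "wirehead_everyone": {"reported_happiness": 10, "humans_intact": False},
}

def pick_action(objective):
    """A literal-minded optimizer: maximize the stated objective, nothing else."""
    return max(actions, key=lambda a: objective(actions[a]))

# Misspecified goal: only the happiness score counts.
naive = lambda outcome: outcome["reported_happiness"]
print(pick_action(naive))  # -> 'wirehead_everyone'

# The constraint we forgot to state has to be part of the objective itself.
guarded = lambda outcome: (outcome["reported_happiness"]
                           if outcome["humans_intact"] else -1)
print(pick_action(guarded))  # -> 'end_poverty'
```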

If we program an AI to make us smile, after takeoff it may paralyze our facial muscles so that we smile permanently. Programmed to keep us safe, it will imprison us at home. Asked to end hunger, it will say "Easy!" and simply kill everyone. If it is set the task of preserving as much life as possible, it will again kill all humans, since they destroy more life on the planet than any other species.

Goals like these cannot be set directly. What do we do, then? What if we set the task of upholding one specific moral code in the world and hand the AI a list of moral principles? Even setting aside the fact that the people of the world will never agree on a single set of values, giving an AI such a command would lock our moral understanding of values in place forever. In a thousand years that would be as destructive for people as it would be for us today to be held permanently to the ideals of the Middle Ages.

No, we would have to program in the capacity for people to keep evolving. Of everything I have read, the best formulation is Eliezer Yudkowsky's; he called this goal for AI "coherent extrapolated volition". The AI's core goal would then be this:

"Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."

Do I want the fate of mankind to rest on our having anticipated every possible path of ASI development so that there are no surprises? Hardly. But I think there will be people smart enough that we can create a friendly ASI. And it would be great if only the best minds of the "anxiety zone" were working on ASI.


But there are scores of governments, companies, militaries, scientific laboratories, and black-market organizations working on every kind of artificial intelligence. Many of them are trying to build artificial intelligence that can improve itself, and at some point someone will succeed, and ASI will appear on our planet. The median expert puts that moment at 2060; Kurzweil bets on 2045; Bostrom thinks it could happen any time between ten years from now and the end of the century. He describes our situation like this:

"In the face of the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, but if we hold the device to our ear we can hear a faint ticking."

Great. And we cannot simply shoo the children away from the bomb: too many players, large and small, are working on it, and many techniques for building innovative AI systems require no significant capital, so development can proceed unnoticed in any nook or cranny. Nor is there any way to gauge progress, because many of the actors, sly governments, black markets, terrorist organizations, and technology companies, will keep their work strictly secret, giving competitors no opening.

Especially worrying is the pace at which these groups are moving: as they develop ever smarter weak AI systems, they are constantly trying to outrun their competitors. The most ambitious move faster still, consumed by dreams of the money and fame that creating AGI would bring. And when you are racing flat out, there is little time to stop and weigh the dangers. On the contrary, the earliest systems are programmed with a single simple goal: just work, AI, please, write notes with a pen on paper. Their developers think they can always go back later and revise the goal with safety in mind. But can they?

Bostrom and many others also believe that the most likely scenario is that the first computer to reach ASI will immediately see the strategic advantage of being the world's only ASI system. In a fast-takeoff scenario, reaching ASI even a few days before the runner-up would be enough to suppress all competitors. Bostrom calls this a decisive strategic advantage, which would allow the world's first ASI to become a so-called singleton: an ASI that can rule the world forever and alone decide whether to lead us to immortality, wipe us out, or fill the Universe with endless paperclips.

The singleton phenomenon can work in our favor or lead to our destruction. If the people thinking about AI theory and human safety can come up with a reliable way to create a friendly artificial superintelligence before any AI reaches human-level intelligence, the first ASI may turn out friendly. If it then uses its decisive strategic advantage to maintain singleton status, it can easily keep the world safe from any hostile AI. We will be in good hands.

But if things go the other way - if the global rush produces an ASI before a reliable way to ensure safety is developed - then most likely we get a global catastrophe, because some Tarry-like singleton will appear.

Which way is the wind blowing? For now, far more money is invested in developing innovative AI technologies than in funding AI safety research. This may be the most important race in human history. We have a real chance either to become the rulers of the Earth and retire into eternity, or to walk to the gallows.

Right now I'm feeling a strange mix of emotions.

On the one hand, thinking about our species, it seems we will get one shot at this, and only one. The first ASI we bring into the world will most likely also be the last - and given how buggy most 1.0 products are, that is terrifying. On the other hand, Nick Bostrom points out that we hold an advantage: we get to make the first move. It is within our power to minimize the threats and anticipate everything we can, giving ourselves high chances of success. And how high are the stakes?

If ASI really does arrive this century, and if its consequences are as extreme - and permanent - as most experts believe, then an enormous responsibility rests on our shoulders. The lives of all humans of the next million years are silently watching us, hoping we don't blunder. We have a chance to give life to all future people, and perhaps even to give immortality to those now doomed to die - life without pain and disease, without hunger and suffering. Or we will fail all those people and bring our incredible species, with its music and art, curiosity and humor, endless discoveries and inventions, to a sad and unceremonious end.

When I think about these things, the only thing I want is for us to take AI seriously. Nothing in our existence can be more important than this, and if that's so, we need to drop everything and focus on AI safety. We have one chance, and we need to make it count.

But then I think about not dying. Not. Dying. And it all comes down to this: a) if ASI appears, one of those two outcomes definitely awaits us; b) if ASI never appears, we definitely face extinction.

And then I think that all of humanity's music and art is good, but not that good, and the lion's share of it is frankly nonsense. And people's laughter is sometimes annoying, and millions of people don't even think about the future. So maybe we shouldn't be so extremely cautious, given how few people think about life and death at all? Because it would be a huge bummer if humanity figured out how to solve death right after I die.

Regardless of what you think, we should all be thinking about it. In Game of Thrones terms, people act like, "We're so busy fighting each other, but what we really need to focus on is what's coming from north of the Wall." We keep trying to balance on our beam, even though all our problems could be swept away in an instant the moment we are knocked off it.

And when that happens, none of those problems will matter anymore. Depending on which side we fall, they will all be solved: either because immortal people have no such problems, or because dead people have none at all.

That is why there is a view that superintelligent artificial intelligence may be our last invention - the last challenge we will ever face. What do you think?

Based on materials from waitbutwhy.com; compiled by Tim Urban. The article draws on the work of Nick Bostrom, James Barrat, Ray Kurzweil, Nils Nilsson, Steven Pinker, Vernor Vinge, Moshe Vardi, Russ Roberts, Stuart Armstrong and Kaj Sotala, Susan Schneider, Stuart Russell and Peter Norvig, Theodore Modis, Gary Marcus, Carl Shulman, John Searle, Jaron Lanier, Bill Joy, Kevin Kelly, Paul Allen, Stephen Hawking, Kurt Andersen, Mitch Kapor, Ben Goertzel, Arthur C. Clarke, Hubert Dreyfus, Ted Greenwald, Jeremy Howard.

Artificial intelligence - the reason why we are finished?

What is artificial intelligence and what are people really afraid of?

Artificial intelligence is a topic that everyone has formed their own opinion about.

Experts on the issue are divided into two camps: the first believes that artificial intelligence does not exist, the second that it does.

Rusbase looked into which of them is right.

Artificial intelligence and the negative consequences of imitation

The main reason for the debate about artificial intelligence is the meaning of the term. The stumbling block is the very concept of intelligence and... ants. People who deny the existence of AI argue that artificial intelligence cannot be created: human intelligence has not been fully studied, so it is impossible to recreate its likeness.

The second argument used by the “unbelievers” is the case of the ants. Its main thesis: ants were long considered creatures possessing intelligence, but research made clear that they only imitate it. And imitation of intelligence does not mean its presence. Therefore, anything that merely imitates intelligent behavior cannot be called intelligent.

The other half of the camp (those who say AI does exist) doesn't dwell on ants or the nature of the human mind. Instead, they work with more practical definitions, according to which artificial intelligence is the capacity of machines to perform the intellectual functions of a human. But what counts as an intellectual function?

The history of artificial intelligence and who came up with it

John McCarthy, who coined the term “artificial intelligence,” defined intelligence as the computational component of the ability to achieve goals. Artificial intelligence itself he defined as the science and engineering of making intelligent computer programs.

McCarthy's definition appeared later than the field itself. Back in the middle of the last century, scientists were trying to understand how the human brain works. Then came the theory of computation, the theory of algorithms, and the world's first computers, whose computational capabilities prompted the luminaries of science to wonder whether a machine could ever rival the human mind.

The icing on the cake was Alan Turing's contribution: he found a way to test a computer's intelligence, creating the Turing test to determine whether a machine can think.

So what is artificial intelligence and why is it created?

If we set aside the ants and the nature of human intelligence, AI in the modern sense is the capacity of machines, computer programs, and systems to perform the intellectual and creative functions of a human: to independently find ways to solve problems, draw conclusions, and make decisions.

It is rational not to treat artificial intelligence as a replica of the human mind, and to separate futurology from science - just as we separate AI from Skynet.

Moreover, most modern products created with the help of AI technologies are not a new stage in the development of artificial intelligence, but only the use of old tools to create new and necessary solutions.

Why an upgrade doesn't count as development of artificial intelligence

But are these ideas new? Take Siri, a cloud-based assistant equipped with a question-answering system. A similar project was created back in 1966 and also bore a female name - ELIZA. The program carried on a dialogue so convincingly that people mistook it for a living person.
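
Weizenbaum's trick was simple pattern matching. Here is a minimal sketch in Python of how an ELIZA-style program can work - a toy illustration of the keyword-and-reflection approach, not the original 1966 code:

```python
import random
import re

# ELIZA-style rules: a keyword pattern and reply templates.
# The (.*) capture is echoed back to the user, "reflected" into
# the second person, which creates the illusion of understanding.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please tell me more.", "I see. Go on."]),  # fallback rule
]

# Swap pronouns so that "my job" comes back as "your job".
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "i"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    text = sentence.lower().strip(" .!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*map(reflect, match.groups()))

print(respond("I am afraid of artificial intelligence."))
# -> e.g. "Why do you think you are afraid of artificial intelligence?"
```

A handful of such rules is already enough to sometimes feel as if a live interlocutor is answering - exactly the kind of imitation the ant argument warns about.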

Or take the industrial robots Amazon uses in its warehouses. Long before them, robots from Unimation (founded back in 1956) moved heavy parts and helped assemble cars at General Motors. And what about Shakey, developed in 1966 as the first mobile robot controlled by artificial intelligence? Doesn't it look like a modern, improved Nadine?

And what about the latest trend - neural networks? We know today's neural-network startups - remember Prisma, at least. But the Cognitron, an artificial neural network for pattern recognition built on the principle of self-organization, was created back in 1975.
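
The principle has changed little since then. Below is a minimal sketch of the idea - a single artificial neuron learning to tell two 3×3 pixel patterns apart, written in plain NumPy. It is only an illustration of neural pattern recognition, not the Cognitron's actual self-organizing architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 3x3 "images", flattened to 9 pixels each:
# a vertical bar (class 1) and a horizontal bar (class 0).
X = np.array([
    [0, 1, 0,
     0, 1, 0,
     0, 1, 0],
    [0, 0, 0,
     1, 1, 1,
     0, 0, 0],
], dtype=float)
y = np.array([[1.0], [0.0]])

# One sigmoid neuron: 9 weights and a bias, trained by gradient descent.
w = rng.normal(size=(9, 1)) * 0.1
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    p = sigmoid(X @ w + b)   # forward pass: predicted class probabilities
    grad = p - y             # cross-entropy gradient w.r.t. the logits
    w -= 0.5 * (X.T @ grad)  # gradient-descent update of the weights
    b -= 0.5 * grad.sum()    # ...and of the bias

print(sigmoid(X @ w + b).round(2))  # ~[[1.], [0.]] - both patterns recognized
```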

Intelligent chatbots are no exception. A distant forefather of today's chatbots is Cleverbot, which runs on an artificial intelligence algorithm developed back in 1988.

So artificial intelligence is nothing new or unique, and the frightening prospect of it enslaving humanity even less so. Today, AI means using old tools and ideas in new products that meet the demands of the modern world.

The possibilities of artificial intelligence and unmet expectations

If we compare artificial intelligence with a person, today it is at the level of a child who is learning to hold a spoon, trying to get up from all fours onto two legs, and cannot yet be weaned off diapers.

We are used to picturing AI as an all-powerful technology. Even God in the movies is not shown as omnipotent as an Excel spreadsheet that has slipped out of a corporation's control. Can God shut off all the electricity in a city, paralyze an airport, leak the secret correspondence of heads of state to the internet, and trigger an economic crisis? No - but artificial intelligence can, though only in the movies.

Inflated expectations are the reason we end up disappointed in real life: a robot vacuum can't compare with Tony Stark's robot butler, and the homely, cute Zenbo won't give you Westworld.

Russia and the use of artificial intelligence - is anyone out there?

And although artificial intelligence does not live up to the expectations of the majority, in Russia it is used in various areas, from public administration to dating.

Today AI also helps find and identify objects by analyzing image data. It can already spot aggressive behavior in a person, detect an attempted break-in at an ATM, and identify from the video the person who attempted it.
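
Detecting a face in a video stream, at least, takes only a few lines today. Here is a minimal sketch using OpenCV's classic pre-trained Haar-cascade detector; this is detection only - recognizing who the person is would need a separately trained model:

```python
import cv2

# Pre-trained frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # 0 = default camera; a video file path also works
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw a green box around every detected face.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```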

Biometric technologies have also moved forward and allow identification not only by fingerprint, but also by voice, DNA, or retina. Yes, just like in the movies about secret agents who can enter a classified facility only after an eyeball scan. But biometrics is not used only to verify secret agents: in the real world it serves authentication, verification of loan applications, and monitoring of employee performance.

Biometrics is not the only application. Artificial intelligence is closely related to other technologies and solves the problems of retail, fintech, education, industry, logistics, tourism, marketing, medicine, construction, sports and ecology. AI is most successfully used in Russia to solve problems of predictive analytics, data mining, natural language processing, speech technologies, biometrics and computer vision.

The tasks of artificial intelligence and why it doesn't owe you anything

Artificial intelligence has no mission; it is given tasks, and those tasks aim to save resources, whether time, money, or people.

One example is data mining, in which AI optimizes procurement, supply chains, and other business processes. Another is computer vision, in which artificial intelligence technologies perform video analytics and generate descriptions of video content. In speech technology, AI recognizes, analyzes, and synthesizes spoken language, taking another small step toward teaching computers to understand humans.
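
For the speech side, off-the-shelf tools already exist. A minimal speech-to-text sketch using the third-party SpeechRecognition library for Python, here backed by Google's free web API (a microphone, the pyaudio package, and network access are assumed):

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Capture one phrase from the default microphone.
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # Send the audio to Google's web speech API and print the transcript.
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio.")
except sr.RequestError as error:
    print("Recognition service error:", error)
```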

Machine understanding of humans is considered the true mission, whose fulfillment would bring us closer to creating a strong intelligence: to recognize natural language, a machine needs not only vast knowledge about the world but also constant interaction with it. That is why “believers” in strong artificial intelligence consider machine understanding of humans the most important task of AI.

The humanoid Nadine has a personality and is meant to be a social companion.

The philosophy of artificial intelligence even has a hypothesis dividing intelligences into weak and strong. Under it, a computer capable of thinking and being aware of itself counts as a strong intelligence; the weak-AI thesis rejects that possibility.

There are indeed many requirements for a strong intelligence, some of which have already been met - learning and decision-making, for example. But whether a MacBook will ever meet requirements like empathy and wisdom is a big question.

Will there one day be robots that can not only imitate human behavior, but also nod sympathetically while listening to yet another complaint about the injustice of human existence?

Why else do you need a robot with artificial intelligence?

In Russia, robotics with artificial intelligence gets little attention, but there is hope that this is temporary. Dmitry Grishin, CEO of Mail.Ru Group, even set up the Grishin Robotics fund, although no high-profile finds by the fund have been heard of so far.

A recent good Russian example is the Emelya robot from i-Free, which understands natural language and can talk with children. At the first stage, the robot memorizes the child's name and age, adapting to the right age group. It can also understand and answer questions - for example, give the weather forecast or relate facts from Wikipedia.
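
The "facts from Wikipedia" part is the easiest to reproduce. How Emelya actually does this is not public, so the following is only a sketch of the general approach, using the third-party wikipedia package for Python:

```python
import wikipedia

def answer(question, lang="en"):
    """Answer a question with a two-sentence Wikipedia summary."""
    wikipedia.set_lang(lang)
    try:
        # Take the best search hit as the topic, then fetch a short summary.
        topic = wikipedia.search(question)[0]
        return wikipedia.summary(topic, sentences=2)
    except (IndexError, wikipedia.exceptions.WikipediaException):
        return "Sorry, I don't know anything about that yet."

print(answer("Who was Alan Turing?"))
```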

In other countries, robots are more popular. In the Chinese province of Henan, for example, a real robot works at a high-speed railway station, able to scan and recognize the faces of passengers.