What will be our future in the world of Artificial Intelligence?
From Mary Shelley's 1818 novel Frankenstein, to Ash, the humanoid robot who betrays the crew in the 1979 film "Alien," to Ava, the robot in 2015's Ex Machina, popular culture has long been instrumental in expressing social concerns about scientific and technological advances, particularly the creation of human-like intelligences. With Artificial Intelligence and robotics increasingly appearing in our news feeds, it is a fitting time for the recently established Leverhulme Center for the Future of Intelligence. The Center is based at the University of Cambridge and works in partnership with the University of Oxford's Oxford Martin School, Imperial College London, and the University of California, Berkeley.
The Center, which brings together thinkers from many different fields, aims to research and anticipate the opportunities and challenges that will arise if the development of artificial intelligence accelerates, and to provide a more measured and useful perspective on artificial intelligence, says Dr. Seán Ó hÉigeartaigh, Director of the Centre for the Study of Existential Risk (CSER), who helped develop the proposal for the new Center.
He also underlines a point they are well aware of: "Focusing only on the risk of disaster has limited the scope of this field, considering there is much more to be addressed in artificial intelligence." The Center is envisioned as a hub that will host experts from a range of disciplines concerned with artificial intelligence, examining not only its long-term but also its short- and medium-term impacts, and taking into account opportunities and challenges as well as risks.
A NARROW INTELLIGENCE
Although artificial intelligence has made headlines and been the subject of films that play on some of our concerns, the work of the new Center offers a different style and more applicable perspectives. While scary stories about Artificial Intelligence give us a chill, AI's current use is relatively limited. Dr. Ó hÉigeartaigh explains: “Frankly, the AIs we see around the world are actually of limited intelligence: they are extraordinarily good at performing only specific tasks, such as navigating a city, playing chess, or running a search engine. Right now, we don't have an intelligence that can solve general problems or that has the cognitive skills of a dog, let alone a human.”
This measured and multifaceted approach has enabled the Center to be open and enthusiastic about the opportunities new technology offers, while recognizing the serious problems it raises. Dr. Ó hÉigeartaigh notes that it is still only biological entities that can perform actions in the world such as learning, adapting, and thinking, among many others. Although Dr. Ó hÉigeartaigh's main field of study is computational biology, he has run interdisciplinary programs for many years. As Director of the Centre for the Study of Existential Risk, he draws on multidisciplinary thinking that addresses research questions from a wide variety of perspectives.
AI APPLICATIONS
Indeed, the need for different perspectives is not only part of the intellectual chemistry that produces original thought, but also a partial answer to the question of how new technology builds knowledge and attracts different kinds of expertise. "Many of the challenges we face as scientists involve analyzing huge amounts of data from a wide variety of sources and making sense of incredibly complex, interconnected systems. That is a challenge even for multiple teams. The systems we are currently developing are specifically designed to make sense of big data: for example, helping to analyze millions of genomes to find the origins of cancer, analyzing many aspects of climate change, or optimizing energy use, such as making our energy grid or smart homes more efficient. If we discover how to apply artificial intelligence to the problems we face, it will be able to contribute to their solution."
Many unsuccessful predictions have been made in the past about more general AI, as opposed to the more limited intelligence currently used in many technologies. "Some people argue that the recent enthusiasm is equally misplaced," says Dr. Ó hÉigeartaigh. "But we are also seeing unprecedented amounts of investment in this space, and exciting projects focused on more general approaches to AI. Even if there is only a 50 percent chance of this happening this century, there must be people thinking and working on it." He touches on another equally important point: even if fully general AI never arrives, technological advances in this field will still be very important, as will considering the social, cultural, and political implications of these developments.
DIFFERENT TYPES OF INTELLIGENCE
Another problem with the popular discourse around Artificial Intelligence is that we treat the subject in a very anthropocentric way; we have to take into account that there are many different types of intelligence in the world, from human intelligence to that of ravens in the crow family. Dr. Ó hÉigeartaigh argues against an approach that puts humans at the center: “We should not limit ourselves to anthropocentric intelligence. One of the first projects we defined in the initial phase is 'Types of Intelligence', and we have already started holding preliminary meetings about it.” Among the participants is Murray Shanahan, Professor of Cognitive Robotics at Imperial College London, an expert in bonobo intelligence, mathematical logic, and machine learning. "All of these people concentrate on coming up with relatively new ideas about different types of intelligence. Although it is very difficult to say exactly what intelligence is, it may make our job easier to say what intelligence does and to start from there."
IMPROVING INTELLIGENCE
Another question is how this type of AI will evolve. Biological evolution has proceeded over time by trial and error, and as Dr. Ó hÉigeartaigh explains, some organisms with higher error rates have evolved more rapidly than others with low fault tolerance. "As we design our algorithms and AI, we programmers have a choice about how we want to do it. There is also a class of AI learning we call evolutionary algorithms, which allows some use of trial and error." Just as there are reasons for wanting to be open to such changes, there are also reasons not to want them, he says, "because in the end we may not achieve anything substantial, or we may face undesirable consequences."
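To make the trial-and-error idea concrete, here is a minimal sketch of an evolutionary algorithm. Everything in it (the all-ones target, the population size, the mutation rate) is an illustrative toy, not anything from the Center's research: candidate solutions are randomly mutated, the fitter ones survive, and the "error rate" controls how much exploration happens.

```python
import random

random.seed(0)

TARGET = [1] * 20        # illustrative goal: evolve an all-ones bit string
POP_SIZE = 30
MUTATION_RATE = 0.05     # the "error rate" that drives trial and error

def fitness(genome):
    # Count how many bits already match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability: a random "trial".
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Start from a random population of bit strings.
population = [[random.randint(0, 1) for _ in range(len(TARGET))]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: keep the fitter half, refill by mutating random survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]
    if fitness(population[0]) == len(TARGET):
        break

best = max(population, key=fitness)
print("generations:", generation, "best fitness:", fitness(best))
```

A higher mutation rate explores more aggressively but risks destroying good solutions, while a lower one converges more cautiously, which mirrors the fault-tolerance trade-off described above.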
There are many different evolutionary factors at play here. Breakthroughs in scientific fields are accelerating development explosively, attracting more brainpower, more doctoral funding, and a greater allocation of resources to artificial intelligence. "As an example, we can cite the striking success of Deep Learning in its early days. This resulted in more resources flowing in and many high achievers adopting the method," he says.
Similarly, it is reasonable to expect further conceptual breakthroughs in Artificial Intelligence, but it is not possible to predict how long they will take or how much they will accelerate the field. "Things we can't predict create great uncertainty, so it's absurd to say we're certain to have general AI by 2070 just because we've made so much progress," he comments. But revolutionary breakthroughs will eventually be made, and in that context there is a need for places like the Leverhulme Center that encourage people to think authentically and creatively about things that will have a big social impact.