Question: What makes it possible for Siri, Alexa or Google Assistant to answer questions and solve tasks? Answer: artificial intelligence (AI). Anyone who spends time in the digital world interacts with it. And AI is a hot topic today. Most magazines, newspapers and science programs have covered the subject in recent months. AI is indeed fascinating, but it’s also something many people are afraid of.
Although artificial intelligence is not necessarily anything new, it is most definitely experiencing a renaissance today, many decades after its first mention. People have been exploring AI since the beginning of electronic computing. In the 1950s, British computer scientist Alan Turing, one of the most influential theoreticians in the early phase of computer development, posed the question of whether machines were capable of thought. Since then, the scientific field of AI has experienced a number of ups and downs. In the 1990s, scientists concentrated more on using AI for real-life problems. A milestone in AI research occurred when IBM’s “Deep Blue” computer beat world champion Garry Kasparov at chess in 1997. On a side note, an average chess app on any smartphone these days would be able to beat “Deep Blue,” which shows just how far the technology has come in the meantime. But what is actually behind the recent waves of progress and popularity in AI? On the one hand, the development is being driven by massive amounts of data (global data volume grows at a rate of 50% per year); on the other hand, the rapidly growing computing power and capacity of computers, along with significantly improved algorithms and approaches to machine learning, are also playing a key role.
Scientists have been exploring the subject of AI for decades, but that doesn’t mean there is any clear or universally accepted definition of it. American mathematician Marvin Minsky is considered an AI pioneer, having co-founded the then-new scientific discipline in 1956 with the words: “Artificial Intelligence is the science of making machines do things that would require intelligence if done by men.” Minsky argued that the things a human brain accomplishes are not supernatural, and that it must therefore be possible to teach them to machines.
“There is no universal definition of artificial intelligence,” notes Claudia Pohlink, head of artificial intelligence and machine learning at Deutsche Telekom’s Innovation Laboratories in Berlin. “We define AI as follows: The goal of research into artificial intelligence is to enable intelligent behavior in machines with the help of science. In that process, one of our key priorities is to point out that AI is designed to support people in their everyday lives, not replace them. AI is a very complex concept, and when they talk about AI, many people are actually talking about machine learning, which is a subject area within AI.” Pohlink explains further: “Simply put, machine learning is a method of analyzing large amounts of data from which different types of knowledge are ‘artificially’ generated and learned. Groups are formed, images recognized and patterns identified. For example, after an initial learning process, AI is capable of differentiating cats from dogs in photos. Deep learning is a special form of machine learning, a process that goes even deeper, applies to larger amounts of data and makes use of neural networks.”

Text: Anja Jönsson
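The classification idea Pohlink describes can be sketched in a few lines. The following toy nearest-centroid classifier is purely illustrative: the “cat”/“dog” feature values (body weight, ear length) are invented, and a real system would learn from thousands of labeled photos rather than four hand-written examples.

```python
# A toy nearest-centroid classifier, sketched for illustration only.
# The "cat"/"dog" feature values below are invented; a real system
# would learn from thousands of labeled photos, not four examples.

def train(samples):
    """Average the feature vectors of each label into a centroid."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Pick the label whose centroid is closest (squared distance)."""
    def distance(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=distance)

# Hypothetical features: (body weight in kg, ear length in cm)
training = [((4.0, 6.0), "cat"), ((5.0, 7.0), "cat"),
            ((20.0, 12.0), "dog"), ((30.0, 10.0), "dog")]
model = train(training)
print(predict(model, (4.5, 6.5)))  # → cat
```

The “learning” here is nothing more than averaging examples per label; deep learning replaces those hand-picked features and averages with many layers of learned ones.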
The neural networks Pohlink is talking about are modeled on the human brain and contain artificial neurons. These are built up in layers and linked to one another. The more layers and neurons there are, the larger the number of complex relationships that can be mapped and displayed. “Much like the human brain, AI has to constantly solve new tasks and respond to changing circumstances, and this requires continuously new information (data) so as to be able to work out models more precisely and develop alternative solutions,” notes Pohlink. “In other words, AI is constantly trying to improve itself and increase the accuracy of its answers. A good example of this would be facial recognition; the larger the number of different images a neural network receives from a particular face as a basis for its learning, the higher the likelihood that it will be able to filter that face out of a mass of other faces at a later date.”
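Such a layered network can be sketched minimally as follows. The layer sizes and weights here are hand-picked for illustration; a real network has millions of weights and learns them from data.

```python
import math

# A toy feed-forward network: two inputs, one hidden layer of two
# neurons, one output neuron. The weights are hand-picked for
# illustration; a real network learns them from data.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron: weighted sum of its inputs, plus a bias, squashed."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    hidden = layer(x, [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0])  # layer 1
    return layer(hidden, [[2.0, 2.0]], [-2.0])[0]              # layer 2

print(round(forward([0.5, 0.5]), 3))  # → 0.5
```

Each layer feeds its outputs to the next; adding more layers and neurons is what lets deeper networks map more complex relationships.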
The extent of AI’s abilities is very dependent on the quality and quantity of the data provided to it. This means that AI will become more intelligent and more capable of learning when it receives a larger amount of high-quality data. It also means that incomplete or inaccurate data will lead to poor results. This is the point at which algorithmic prejudices – often referred to as “bias” – emerge. “This is where the danger arises that the data is biased,” notes Pohlink. “We have many examples of this in reality. Amazon, for example, tried to use AI to process CVs and garner recommendations as to which candidates were best for a job. They had to halt the experiment because the AI was suggesting primarily male candidates.”
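The effect Pohlink warns about can be reproduced with a deliberately lopsided toy dataset. This sketch is in no way a reconstruction of Amazon’s system; the “hiring” data and the majority-vote model are invented solely to show how skew in the data becomes skew in the model.

```python
from collections import Counter

# A deliberately lopsided toy dataset: all "hire" examples but one
# share feature value "A". The model below just memorizes the majority
# label per feature value -- and faithfully reproduces the skew in the
# data. Invented data, invented model; for illustration only.

def train_majority(samples):
    """Remember the most common label seen for each feature value."""
    by_feature = {}
    for feature, label in samples:
        by_feature.setdefault(feature, Counter())[label] += 1
    return {f: c.most_common(1)[0][0] for f, c in by_feature.items()}

history = [("A", "hire")] * 9 + [("B", "reject")] * 9 + [("B", "hire")]
model = train_majority(history)
print(model)  # → {'A': 'hire', 'B': 'reject'}
```

The model is not malicious; it simply has no way to distinguish a genuine pattern from a historical prejudice baked into its training data.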
Artificial intelligence is no longer a thing of the future; it’s already part of our everyday lives. This was confirmed in a recent study carried out by the U.S. software company Pega (“What Consumers Really Think About AI: A Global Study,” 2019), which asked 6,000 individuals across the globe whether they used a device containing artificial intelligence. The findings showed the ambivalent relationship of consumers to AI: 84% of respondents used AI (in the form of specific devices or services containing AI components, such as virtual home assistants, intelligent chat bots and predictive product recommendations), but only one in three respondents was aware of the fact that AI was involved. And only one in two respondents knew that AI solutions make it possible for machines to learn new things. Even fewer respondents were aware that AI can also solve problems and understand languages. And yet it is precisely these abilities that represent AI’s fundamental characteristics.
Today, our everyday digital lives would be unthinkable without AI. Many of the apps and functions on our smartphones operate with the help of intelligent computer programs. For example, the latest iPhone now features facial recognition. In order to make it possible for users to unlock the iPhone XS quickly, Apple developed its Face ID technology; while scanning the face, the camera uses a dot projector to project 30,000 points onto the face of the user. These points serve as a type of map for the creation of a digital pattern. Even a new pair of glasses or a beard will not confuse the iPhone, seeing as the facial recognition function is constantly improving itself with the help of machine learning. Facebook and other apps also use AI to adapt as effectively as possible to the interests of their users. If anyone is wondering why they’re always being shown their best friends’ posts, AI is behind it. The abovementioned chat bots represent another important area of application for AI. These computer programs “converse” with users and answer questions. In the process, they rely on large databases that enable them to understand questions and provide appropriate answers.
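The database-lookup idea behind simple chat bots can be sketched as keyword matching against a small question-and-answer table. The questions and answers below are invented; production bots use far larger databases and far more sophisticated language understanding.

```python
import re

# A minimal retrieval-style chat bot: answer by finding the stored
# question that shares the most words with the user's input. The tiny
# Q&A "database" is invented; real bots use far larger ones.

FAQ = {
    "what are your opening hours": "We are open 9am to 5pm on weekdays.",
    "how do i reset my password": "Use the 'forgot password' link.",
    "where is my order": "You can track your order in your account.",
}

def answer(question):
    words = set(re.findall(r"[a-z]+", question.lower()))
    def overlap(stored):
        return len(words & set(stored.split()))
    best = max(FAQ, key=overlap)
    return FAQ[best] if overlap(best) > 0 else "Sorry, I don't know."

print(answer("How can I reset my password?"))  # → Use the 'forgot password' link.
```

Counting shared words is the crudest possible way to “understand” a question, but it illustrates why the size and quality of the underlying database matter so much.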
The Pega survey also shows that 70% of respondents find AI troubling in one way or another. A quarter of respondents even fear that machines will one day be able to take over the world. Companies should reflect on these consumer fears and take advantage of all opportunities to explain the benefits of AI to consumers in an understandable way.
The key starting points for these efforts are education and transparency. According to Pohlink, users should be informed in advance with regard to a number of things, including what happens to their data, what kind of knowledge AI can garner on the basis of collected user data and what this knowledge will ultimately be used for. Today, AI is used solely in a task-related manner, and Pohlink argues that this is why it is easier to overlook what AI is actually capable of achieving.
In fact, these days, the fields in which AI systems can be applied are vast: they include medicine, telecommunications, banking, insurance, financial services, legal services, the automotive industry, public administration – the list goes on. AI has long since arrived in all of these areas. And yet, there is still a palpable sense of apprehension. “Each new technology has its downside,” notes Pohlink, who is aware of the existing fears. “Stephen Hawking thought AI could become a terrible event in human history, and even Tesla head Elon Musk is critical of it.” Of course, she continues, “I see it as our duty to make sure that certain ethical principles are observed in the development of AI.” In this spirit, Deutsche Telekom – and other companies, as well – have developed a code of ethics for the handling of AI, the guiding principle of which is: AI systems should always be subject to the same laws that apply to humans.
No doubt, the further development of AI is unstoppable. But how can people unfamiliar with the rapid development come to a better understanding of it? Pohlink has an answer: “The easiest way to understand it is to compare it to the introduction of the computer into our everyday lives. Some people were overwhelmed by this change. Others actively attended classes to be able to grasp and use the new technology.” Pohlink speaks passionately about her area of expertise, and argues that AI seeks to support human beings – not replace them – in their daily lives. “The great thing about AI is that it can be used by anyone for anyone, because it learns to understand the needs and demands of each user. The best way for inexperienced users to relieve their apprehension is to simply welcome it into their lives in a playful manner. AI is designed to simplify our daily lives, so we can expect that its usability will soon become increasingly intuitive as well.”