Why fully autonomous driving won’t work
A translation by DeepL of this article:
https://zeitung.faz.net/faz/d-economy/2023-11-27/19336ee12dde063c40002ecd78aeab5c?GEPC=s5
AI has been used in robot cars for years, but its intelligence has limits.
In 2015, an event on digitalization and fully autonomous driving was held in Munich. When a speaker said that we would never be able to make road traffic in Germany fully autonomous, several people stood up and left the room in protest. At the time, anything seemed possible with artificial intelligence. Recently, however, the General Motors subsidiary Cruise had its permit to operate fully driverless vehicles in California revoked. This is probably just the beginning.
Artificial intelligence captured the imagination of many people once again with the rise of generative AI in 2023. Many became afraid. Some already saw world domination being handed over to smart computers; others thought they would soon be able to copy their own minds onto an AI machine and finally achieve immortality. AI has long since ceased to lead a shadowy technical existence, which pleases its developers: who wouldn't want their work to be considered important? But AI is and remains a rather sober, technical discipline. It rests on high-performance mathematics (such as deep learning) and clever computing technology. With these ingredients, intelligent behavior can be simulated extremely well.
To better assess the intelligence of today's systems, we can introduce intelligence levels as a first approximation:
Level 1: the intelligence of correct reasoning (deduction)
Level 2: the intelligence of learning (induction)
Level 3: the intelligence of cognition (linking deduction and induction)
Level 4: the intelligence of conscious perception
Level 5: self-perception
Level 6: the intelligence of feeling
Level 7: the intelligence of volition
Level 8: the so-called self-referential intelligence of humans
Based on this classification (many others are possible), today's AI stands at Level 3. The first three levels share a special characteristic: they can be fully mathematized, and they can therefore be described as rational intelligence. From Level 4, perception, onward, this is no longer possible in the author's opinion. That has enormous consequences, not least for road traffic: today's AI is subject to fundamental limits of application.
A "small data" problem
Three problems will now be used to explain why this AI cannot be used for fully autonomous road traffic (SAE Level 5, "driving without a steering wheel"). This is a limitation in principle, not a question of computing technology. The three problems are the "small data" problem, the problem of perception, and a problem of logic itself.
Regarding the "small data" problem: although today's AI applications are very diverse, they are almost always implemented in "digital space". Why? For an AI to perform its services, vast amounts of data have to be collected, processed and fed to it. This works extremely well in many areas; AI has achieved numerous successes using "big data". But our very own human world is not a "big data" phenomenon. When we have to develop solutions in our natural world, these are almost always "small data" tasks. For countless human use cases, we do not have tens of millions of data points in stock. We cannot see 10,000 tigers in the jungle before realizing that they are dangerous, nor should we burn ourselves 1,000 times on a hot stove before we stop touching it. In humans, a few isolated incidents must be enough to train the brain. Show a small child five pictures of dogs and five pictures of cats, and the child will be able to distinguish cats from dogs forever. Show a deep learning system five pictures of dogs and five pictures of cats, and the AI will produce random results. AI systems based on deep learning simply cannot learn from this amount of data, and that is a limitation in principle: this kind of AI has too many free parameters that must be set during the learning process. The human brain can learn from extremely few examples, and this is precisely what ensures our survival in a natural environment. No human has to train for millions of kilometers to get a driver's license and drive through a city afterwards. They could not. But they do not need to either. A common idea, even among experts, is that AI just needs to get bigger and faster, and then it will come closer to the performance of the brain and reach it at some point. That will not happen.
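A rough back-of-the-envelope sketch in Python can make the parameter argument concrete. The layer sizes below are hypothetical and chosen only for illustration, not taken from the article; the point is the order-of-magnitude mismatch between the free parameters of even a small network and a "five cats, five dogs" training set.

```python
# Hypothetical sizes for a small fully connected classifier on 64x64
# grayscale images; any realistic network would be far larger still.

def dense_params(n_in: int, n_out: int) -> int:
    """Free parameters (weights plus biases) of one dense layer."""
    return n_in * n_out + n_out

layers = [(64 * 64, 256), (256, 64), (64, 2)]  # input -> hidden -> cat/dog
total = sum(dense_params(i, o) for i, o in layers)

n_examples = 10  # five cat pictures, five dog pictures
print(f"free parameters:        {total:,}")                 # 1,065,410
print(f"training examples:      {n_examples}")
print(f"parameters per example: {total // n_examples:,}")   # 106,541
```

Over a hundred thousand adjustable parameters per training example leaves the learning problem hopelessly underdetermined, which is one way to read the author's claim that such systems cannot generalize from a handful of cases.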
What is the problem? Today, artificial intelligence is implemented as an algorithm on a piece of hardware, a computer. Humans, in contrast, have a brain, and a brain is not a computer running software. Humans solve their "small data" problems through the interplay between their neuronal brain tissue and their mental consciousness. In image recognition, it is not enough to carry out mathematical operations; far too much training data would be required for that, which is why deep learning works only in conjunction with big data and not in natural environments. AI systems based on deep learning cannot learn from individual cases and, above all, cannot extrapolate from the individual cases presented to them. In road traffic, where often only specific individual cases occur ("Oh, that was close!"), this approach must therefore fail. Not so with humans. Their consciousness solves several tasks at once when detecting a road situation: on the one hand, it stabilizes the neuronal recognition process and guides it to the next plausible "attractor", even for individual cases. On the other hand, consciousness is necessary for visual perception in the first place. In this sense, all of today's (unconscious) AI vehicles drive blindly through our streets.
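The term "attractor" can be made tangible with a classic toy model. The following Hopfield-style network, a standard textbook illustration and not the author's model of the brain, stores two hypothetical patterns as attractors; a corrupted input is then pulled back to the nearest stored pattern.

```python
import numpy as np

# Two hypothetical patterns to be stored as attractors.
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],   # pattern "A"
    [1, -1, 1, -1, 1, -1, 1, -1],   # pattern "B"
])

# Hebbian learning: sum of outer products, with self-connections removed.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

state = np.array([1, 1, -1, 1, -1, -1, -1, -1])  # noisy version of "A"
for _ in range(10):                  # synchronous updates until stable
    new = np.sign(W @ state)
    new[new == 0] = 1
    if np.array_equal(new, state):
        break
    state = new

print(state.astype(int))  # settles into the stored pattern "A"
```

The analogy is loose: in the text, it is consciousness, not network dynamics alone, that steers recognition toward a reasonable attractor after a single encounter.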
German car manufacturers rely on levels 3 and 4
This brings us to the second example problem, the paradox of (visual) perception. By visual perception I mean those processing steps that lead to a person actually seeing the objects to be perceived at the place where they exist, that is, recognizing them directly in their external environment. According to this definition, today's AI systems cannot perceive anything. When they "observe" the street with their video cameras, they generate internal representations of the outside world in their working memories. Every external object is represented by internal tables of zeros and ones. Not so with humans. Of course, they too have an internal representation of the outside world in the neural networks of the visual cortex. But they can do much more. Paradoxically, they look out of their heads, even though all signals from outside are transmitted to the inside of the brain. Humans see the cars exactly where they are; nothing is calculated, they actually see all road users outside their brain. The famous AI critic Hubert Dreyfus would say that humans are in their world; they do not represent it.
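To see how literal the phrase "internal tables of zeros and ones" is, consider what a perception stack actually holds in memory. The sketch below is purely illustrative, not any vendor's real pipeline, and the numbers are invented: a camera frame is an array of pixel values, and a "detected car" is nothing but a short row of coordinates and a confidence score.

```python
import numpy as np

# One hypothetical 640x480 RGB video frame: just pixel values in memory.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# A typical detector output format: [x_min, y_min, x_max, y_max, confidence].
# These values are invented for illustration.
detected_car = np.array([212.0, 140.0, 388.0, 310.0, 0.93])

# Everything the system "knows" about the car is contained in these bits;
# nothing in them points back out to the street itself.
print(frame.nbytes, "bytes of pixel values")
print("car =", detected_car.tolist())
```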
Releasing AI systems that cannot perceive into public road traffic is courageous, or perhaps even negligent; that is for lawyers to judge. In any case, the German car manufacturers are acting much more sensibly than their American competitors. They are focusing on Levels 3 and 4, on something that is technically feasible. Here AI is also in the place where it belongs: as a support for humans, as a driver assistance system, not as a replacement. AI cannot replace humans in road traffic, neither technically, legally nor morally (in the event of traffic accidents). In California, we are currently seeing just how many traffic accidents an AI can cause on the roads: hundreds in the past few months alone.
How can fully autonomous driving work?
Mind you, this article is only about fully autonomous driving on public roads. As soon as warehouses, hospitals, factory buildings or particular streets are virtualized (that is, completely digitally mapped) and equipped with QR codes and sensors, fully autonomous driving can work. Why? Because these areas of application then become "under-complex" and, at least in principle, fully mathematizable. The AI systems will then always know where they are in the virtual world, how the road is routed and what is coming toward them. But that is precisely the difference: humans do not need such information. They are not connected to an energy-intensive cloud, they do not need to know what awaits them around the bend, and they carry no lidar on their heads. With 20 watts of brain power, a human drives fully autonomously through the most complex environments.
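Why does full virtualization make the task tractable? Because on a completely known map, driving collapses into deterministic graph search. The sketch below uses a hypothetical warehouse floor, not a real system; it computes the entire route before the vehicle moves a single meter, which is exactly what open road traffic never permits.

```python
from heapq import heappush, heappop

# A fully mapped floor: "S" start, "G" goal, "#" shelving, "." free cells.
GRID = [
    "S..#....",
    ".#.#.##.",
    ".#......",
    "...##.#G",
]

def astar(grid):
    """A* search on a completely known grid map."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    goal = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "G")
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]))
    return None

print(astar(GRID))  # the complete route exists before the vehicle moves
```

On public roads, no such complete and static map of all actors exists, so the search problem this code solves is never actually available.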
Of course, the limitations described here do not apply only to road traffic. If we wanted to use AI in general to create a thoroughly digital and autonomous machine society, we would have to convert our highly complex society into a standardized factory. Only then could fully autonomous social processes be implemented. Some seem to be in favor of this; others worry that it will happen. Such a society, however, would be inhumane, as it would place technology above humans. Fortunately, such a machine society will never work. Even in our factories, AI has to be supplied with data, started, maintained and, if necessary, switched off by humans. Not only does AI have no intrinsic goals, it also cannot function fully autonomously in a constantly changing environment full of feedback loops. AI is simply not a subject at all; it is realized as mathematical processes that humans use, just like other tools.
A pious wish
This leads to the third example problem: AI systems are formal tools, and this too has important consequences. The restrictions of formal systems have been known for decades, for example through the logician Kurt Gödel's incompleteness theorem, which has gone down in history as a revolutionary achievement, or through Rice's theorem, named after the mathematician Henry Gordon Rice. Both results prove, in a mathematically compelling way, far-reaching limitations on what algorithms can decide.
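Rice's theorem says that every non-trivial question about what a program does is algorithmically undecidable. The standard diagonal argument behind such results, which is textbook material rather than something from the article, can be sketched in a few lines: assume a perfect analyzer existed, then construct a program that defeats it.

```python
def halts(f) -> bool:
    """Hypothetical oracle: True iff calling f() eventually terminates.
    The construction below shows that no total, always-correct
    implementation of this function can exist."""
    raise NotImplementedError  # placeholder: assumed perfect, never real

def paradox():
    if halts(paradox):   # if the oracle predicts "terminates" ...
        while True:      # ... loop forever, making it wrong;
            pass
    # ... and if it predicts "runs forever", return immediately,
    # making it wrong again. Either answer refutes the oracle.
```

An AI that is supposed to verify, formally and in advance, the behavior of every participant in a traffic system runs into exactly this class of limits.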
As a result, controlling all road traffic algorithmically with an AI is a pious wish. Fully autonomous road traffic for a country like Germany cannot be achieved, although there can and will be many special applications for fully autonomous vehicles. Public road traffic is too complex and too unpredictable, even if it could be described with sufficient precision as a formal meta-system. As a very simple example, imagine what happens at an unsignposted intersection when four cars arrive at the same time. In the United States, we can see how difficult it is for AI vehicles to deal with such situations, even though they can be solved trivially and their solutions are even formally regulated. We know the resulting "deadlocks" from our computers, on which algorithms also get stuck from time to time, something we can usually fix by resetting the machine.
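The four-way standoff is literally the circular-wait condition from computer science. The toy model below is an illustration, not any vendor's planner: each car waits on its neighbor, the waits form a closed ring, and the human fix, one driver simply waving another through, amounts to breaking the cycle from outside the rule system.

```python
cars = ["north", "east", "south", "west"]
# Each car yields to the car approaching from its right; with four
# simultaneous arrivals, the waits form a closed ring.
waits_for = {cars[i]: cars[(i + 1) % 4] for i in range(4)}

def has_deadlock(graph: dict) -> bool:
    """A cycle in the wait-for graph is the formal criterion for deadlock."""
    for start in graph:
        seen, node = set(), start
        while node in graph:
            if node in seen:
                return True
            seen.add(node)
            node = graph[node]
    return False

print(has_deadlock(waits_for))  # True: all four cars wait forever

waits_for.pop("north")          # one driver waives their right of way
print(has_deadlock(waits_for))  # False: the ring is broken, traffic flows
```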
Ultimately, this problem is also a philosophical one. The Laplacean demon has long since had its day; the natural world is neither deterministic nor fully mathematizable. And even if it could be fully mathematized, it would be of no use, because mathematics itself has limits, as described above. As soon as the systems to be mathematized reach a certain complexity and the participants within them influence each other mutually and self-referentially, system states will inevitably occur that cannot be resolved algorithmically within the system. Faced with such problems, any formal AI must give up and, for example, bring the car to a standstill. Humans do not have to. Why? Because when problems arise, they can leave any learned formal system, such as the rules of the road, at any time and adopt a new set of rules, or simply try something completely new. Humans can, so to speak, pull themselves out of the quagmire by their own hair, which is next to impossible for a formal AI.
Sobering, but not unpleasant
Of course, an AI will soon emerge that beats humans in IQ tests. But just as an excavator at the side of the road does not scare us merely because it can dig deeper than a human, and just as a car does not scare us merely because it drives faster than a human can ever run, an AI with a higher IQ than a human should not scare us either. The reason is simple: an IQ comparison only ever concerns the rational side of intelligence, the part that can be mathematized. Yet already the level of perception is not a mathematical phenomenon but a physical one.
The bottom line: for at least the next 50 years, fully autonomous road traffic will not be possible in a country like Germany, except in special cases. Technical solutions to the difficulties outlined here may be hoped for from future neuromorphic systems and quantum computers, since these do not compute formally but physically, and may then also be capable of perception.
This may be sobering. But it is actually not unpleasant that our real world cannot be mechanized. That humans in their natural environment cannot be replaced by any AI in the world. Not today – and very probably not tomorrow either.
Dr. Ralf Otte is a professor at the Institute for Automation Systems at Ulm University of Applied Sciences