Can Self-Driving Cars Keep Us Safe?

DLabs.AI

The deadly accident during tests of a self-driving car has revived the discussion around this technology. Can equipment created by fallible humans be infallible? Could this tragic incident halt the development of driverless vehicles? And how would such cars perform on bumpy Polish roads?

These are some of the questions Radio Gdańsk asked our company’s Chief Data Science Officer, Dr. Krzysztof Rykaczewski.

Wiktor Miliszewski: When it comes to new technologies, we expect them to be infallible and safe. Meanwhile, a tragic accident like this happens.

Krzysztof Rykaczewski: From what I’ve read, many factors could have played a role. The woman entered the middle of a four-lane road that was completely dark, and she was not on a pedestrian crossing. In general, self-driving cars should be able to cope with such situations. In this case, it was so dark that the car’s sensors did not react, or the technology failed for some other reason.

So, was it the machine’s fault, or the human’s?

There was fault on both sides. I don’t know how this specific vehicle’s system is designed, but it seems the engineers missed something: they may have overlooked some factor, or the sensors were not sensitive enough. On the other hand, the woman crossed in the middle of a road with no street lighting. Video footage from just before the accident has been published, and her silhouette only becomes visible half a second before the impact; the whole silhouette is visible for just a fraction of a second. From what I’ve read, the police say that, regardless of the circumstances, no one could have avoided a tragedy in that spot. But the investigation is ongoing, so no final judgment has been made. I know the case is also being examined by the US National Transportation Safety Board. Generally speaking, the incident was difficult to avoid.

This tragedy has shocked the general public. Some say that, maybe, we shouldn’t be so fast with implementing such advanced technologies. Is the future of self-driving cars threatened?

It may cause some delays, but generally I believe this technology and this field will keep evolving. When traditional cars were introduced, accidents happened too, yet the technology moved on. So I think the people involved will now pause, work out what the causes were and why the incident occurred, draw their conclusions and, after a while, move on with their research.

How do we set the safety standards that will allow us to say: “Yes, now is the right moment, let’s introduce self-driving cars on the streets, nothing bad will happen”?

I think the statistics are already impressive but swept under the carpet, because it’s better not to celebrate before there’s absolute certainty, all the more so when it could harm your public image. In my opinion, the statistics show that self-driving cars are safe: Uber’s cars, for example, rarely required the assistance of their safety drivers. On average, they drove 10,000 kilometres without a human touching the wheel. I’ve also read that legislation granting some form of approval for self-driving cars was about to pass in the US Congress. That will now probably be delayed, or additional restrictions will appear.

So the new technology will give us greater safety?

From the perspective of today’s technology, I think those cars are much safer than ones driven by a human. A driver can only concentrate on the road and on what’s within their field of vision, while such a car is equipped with a whole array of sensors: it can see something approaching unexpectedly from the side or from behind, which allows it to prevent a serious accident.

And a machine won’t pick up the phone, which can distract a driver at any moment.

Yes, though that’s just one of the many factors behind the improved safety that comes with new technologies.

So it’s official: man is much more fallible than machine?

One can debate what “fallibility” means. After all, how do we check the accuracy of such machines today? If we have neural networks that recognize digits and letters, we compare how many we, as people, can recognize correctly against how many the machine can. There really were cases where I had doubts about a classification; the machine made its decision much faster and, after a while, I realised it was right. From this point of view, machines may have a slightly bigger “imagination”: they can draw on larger sets of data, and they can extrapolate better solutions from the data they have.
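The comparison described above can be sketched in a few lines: score human and machine answers against the same labeled character set and compare the two accuracies. This is a minimal illustration, not any specific benchmark; the labels and answers below are made up.

```python
# Minimal sketch of comparing human vs. machine recognition accuracy
# on the same labeled character set. All data here is made up.

ground_truth    = ["3", "8", "B", "5", "0", "S"]
human_answers   = ["3", "8", "B", "S", "0", "5"]  # human confuses 5 and S
machine_answers = ["3", "8", "B", "5", "0", "S"]

def accuracy(predictions, truth):
    # fraction of items recognized correctly
    correct = sum(p == t for p, t in zip(predictions, truth))
    return correct / len(truth)

print(accuracy(human_answers, ground_truth))    # 4/6 ≈ 0.67
print(accuracy(machine_answers, ground_truth))  # 6/6 = 1.0
```

The same protocol scales to real benchmarks: the model and the human annotators are scored against an identical held-out labeled set, and the two accuracy figures are compared directly.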

There’s much more to this than safety, though. It is still unclear how to approach self-driving cars from a legal point of view. Who should be held responsible for a tragedy caused by such a vehicle?

This tragic accident will, in a way, set a legal precedent, and the judge who rules in the case will surely give it a lot of thought. From what I know, Uber will probably face some sort of penalty for not giving certain functionality enough attention, especially since night vision is not a problem for today’s sensors, so they will probably shoulder part of the blame.

Self-driving cars are a technology associated with the West. Can we expect to see them in Poland, too?

I don’t know if we have the right infrastructure. The majority of those cars are electric, and what Poland needs right now is a large number of charging stations, which is not yet guaranteed. It’s true that there are also hybrid self-driving cars, but as far as I know, no one has ever tested them on Polish roads, which are of bad quality sometimes.

“Polish roads, which are of bad quality sometimes.” People make memes about the potholes in our roads; it’s a serious problem. Can the sensors and all the other technology in self-driving cars cope with such conditions?

The machines are being prepared for incidents related to bad roads, but we need to look at this in a completely different way: those machines learn collectively, not from individual cases the way people do. Data collected by one vehicle is stored in a central system that processes it and redistributes the knowledge across the entire fleet, which then drives more safely as a result. This way, the vehicles pool their data and prepare for situations that, as individual cars, they may never have encountered.
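The collect-aggregate-redistribute loop described above can be sketched as follows. This is a deliberately simplified illustration, not any manufacturer’s actual pipeline: each car keeps a local model parameter, the central system averages the parameters across the fleet, and the merged result is pushed back to every vehicle. All class, method, and parameter names are hypothetical.

```python
# Minimal sketch of fleet-wide (collective) learning: each car adjusts a
# local model parameter from its own road observations; a central system
# averages the parameters across the fleet and redistributes the merged
# model, so every car benefits from any one car's experience.

from statistics import mean

class Car:
    def __init__(self, name):
        self.name = name
        self.params = {"pothole_sensitivity": 0.5}  # local model state

    def learn_locally(self, observed_value):
        # blend a new road observation into the local model
        current = self.params["pothole_sensitivity"]
        self.params["pothole_sensitivity"] = (current + observed_value) / 2

class CentralSystem:
    def aggregate(self, fleet):
        # average each parameter across the whole fleet...
        merged = {
            key: mean(car.params[key] for car in fleet)
            for key in fleet[0].params
        }
        # ...and redistribute the merged knowledge to every car
        for car in fleet:
            car.params = dict(merged)
        return merged

fleet = [Car("A"), Car("B"), Car("C")]
fleet[0].learn_locally(0.9)  # only car A hits a rough stretch of road
merged = CentralSystem().aggregate(fleet)
# after aggregation, cars B and C also carry A's experience
```

Real fleet-learning systems are far more elaborate (they exchange sensor logs or model gradients, not a single number), but the shape of the loop is the same: local observation, central aggregation, fleet-wide redistribution.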

Article originally published on Radio Gdańsk website.


DLabs.AI

DLabs.AI is a team of Data Science experts providing comprehensive solutions and IT systems, along with Machine Learning and Artificial Intelligence algorithms, that maximize business clients' profits and minimize the risks associated with implementation.

Read more on our blog