AI Liability: Something that we did not really think about?

Murphy Choy
Jul 19, 2021

Recently, I read an article on AI liability in HBR, and it raises a concept that is fast becoming a real problem in the real world. The article describes a possible future in which a company or individual could become liable when an AI system makes a decision that negatively affects someone. With AI being used ever more widely across society, liability could arise from any number of decisions, including those made by the machines themselves. The article makes it hard not to think about the potential for things to go wrong when these products are deployed at such a vast scale.

A simple example that arises from this article is a self-driving car. Suppose the system, while working out the best route to travel, has to decide whether to make a manoeuvre that risks killing the driver for the sake of saving others. It could be argued that if it refuses to swerve into another car, it could cause more deaths overall. What happens when ethical decisions like this have to be made under pressure, in a split second? This feels like a very real problem, but there are still many unknowns about how AI systems will handle situations like these in the future.

The issue with this looming liability is that it could make some people avoid using AI altogether, which would not be good for the industry’s growth. Many of the benefits of AI go unnoticed because of a lack of awareness in society. We usually think of AI as automating repetitive or tedious tasks, such as data processing, but there are many more advantages; AI has already proven useful in areas such as physical and mental health. It also means that, as issues like this one arise, some people will avoid using an AI system simply because they do not want to become liable if something goes wrong.

As a first step, it is vital to understand the concept of AI liability. Liability, in most circumstances, refers to the legal obligation that a person or legal entity has to meet specific standards. In this case, it would mean that a company could be held legally responsible for anything that goes wrong because of an AI system, from making a morally grey decision to simply failing in some way. One example is how self-driving cars are still not perfect and require constant attention from the driver to operate correctly and safely, even though they are only partially autonomous.

The potential issues surrounding AI liability may have started as theoretical problems, but they have become real ones because of the extensive use of AI systems worldwide. I believe these issues can be mitigated to some extent, but doing so will be challenging and will require a significant amount of effort.

Perhaps the biggest question regarding AI liability is this: how can we be sure that an AI system is always morally and ethically correct? There are many ambiguities in the system’s decision-making process itself. For example, how do we know that an AI system has not made a wrong choice just because it has made thousands of good ones? With all this in mind, it seems impossible to prove that an AI system always makes the right decision; and even if we could prove it, many people might still not trust the system.

The first step will be to figure out whether there are flaws in AI systems that would make them liable for the decisions they make. We could then develop ways to mitigate that liability, from restricting the system’s choice set to adding more human supervision. This could involve human supervisors being able to override an AI system’s decisions, but that approach has its own problems, such as the human not always knowing why the AI system made its decision in the first place.
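To make the idea of human supervision a little more concrete, here is a minimal Python sketch of what a human-in-the-loop override might look like. The names (Decision, ai_policy, human_review) and the confidence threshold are illustrative assumptions, not part of any real system mentioned in this article.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch: Decision, ai_policy and human_review are illustrative
# names, not part of any real product discussed in the article.

@dataclass
class Decision:
    action: str
    confidence: float   # the model's own confidence estimate (0.0 - 1.0)
    rationale: str      # whatever explanation the system can provide

def supervised_decision(
    ai_policy: Callable[[dict], Decision],
    human_review: Callable[[Decision, dict], Optional[Decision]],
    situation: dict,
    confidence_floor: float = 0.9,
) -> Decision:
    """Run the AI policy, but route low-confidence or high-risk choices
    through a human supervisor who may override the outcome."""
    proposal = ai_policy(situation)

    # Only escalate when the system is unsure or the situation is flagged
    # as high risk; routine decisions pass straight through.
    if proposal.confidence < confidence_floor or situation.get("high_risk"):
        override = human_review(proposal, situation)
        if override is not None:
            return override   # the human's decision takes precedence
    return proposal
```

The design choice worth noting is that the human only reviews the escalated cases, which keeps the supervision workload manageable while still giving someone the power to override.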

Another possible solution to AI liability is to build rules or codes of ethics into AI systems. The system would then only be allowed to make decisions within a set of rules, which would help in situations like the one mentioned earlier in this article, where an AI system might be forced to make a morally grey decision. Something similar already exists in some self-driving cars, such as Tesla’s Autopilot feature, but it seems we could do more than just this kind of code. With the data an AI system collects, it would be possible to approximate the decision-making processes that people would generally use, for instance by giving the AI a completely separate set of instructions to follow when it makes decisions. Even if such a system tells the AI that it is better not to crash into another car, the AI has no way of knowing why that is the better option, so it cannot learn the underlying reason. A hard ‘don’t ever do X’ rule would at least ensure that the AI system makes acceptable decisions for whatever reasons are set out in those rules. This could be part of a solution, but there are many unknowns, such as how it would be implemented and whether it would have any real effect on AI systems.
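As a rough illustration of what a ‘don’t ever do X’ rule layer could look like, here is a small Python sketch. The rule names and the fallback action are made up for the example; the point is that the rule layer vetoes forbidden options rather than reordering the AI’s own preferences.

```python
# Illustrative only: a hard "don't ever do X" rule layer sitting on top of
# whatever ranking the AI system produces. Rule names are invented.

FORBIDDEN_ACTIONS = {
    "cross_into_oncoming_traffic",
    "exceed_speed_limit_by_20_percent",
}

def apply_rules(ranked_actions: list[str]) -> str:
    """Return the highest-ranked action that does not violate a hard rule.

    `ranked_actions` is assumed to be ordered from most to least preferred
    by the underlying AI system; the rule layer never reorders, it only vetoes.
    """
    for action in ranked_actions:
        if action not in FORBIDDEN_ACTIONS:
            return action
    # If every option is forbidden, fall back to a conservative default
    # rather than letting the system pick among forbidden choices.
    return "slow_down_and_hand_control_to_driver"

# Example: the AI prefers a risky manoeuvre, but the rule layer rejects it.
print(apply_rules(["cross_into_oncoming_traffic", "brake_and_stay_in_lane"]))
# -> "brake_and_stay_in_lane"
```

In this sketch the AI’s preferred manoeuvre is rejected and the next acceptable option is taken instead, for whatever reasons the rule-makers wrote into the forbidden list.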

A common misconception is that AI systems only cause harm when the AI itself makes a wrong decision. In practice, they can also cause harm through human misuse, which leads to further liability problems. Many accidents involving self-driving cars have been caused by human error rather than by the AI making a wrong decision. Looking purely at the liability aspect, then, an autonomous car will often be the safer option compared with a human driver.

When it comes to AI liability, the most important thing is how you go about managing it. If an AI system must decide on its own, the best approach is to make sure it has all of the relevant information when making that decision. If that is not possible, any decisions the system makes will be based on incomplete information and will have a greater chance of going wrong.
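One way to picture ‘only decide when you have the relevant information’ is a simple completeness check before the system is allowed to act. This is only a sketch under the assumption that each decision type can declare the inputs it needs; the names used here are hypothetical.

```python
# Sketch under an assumption: each decision type declares the inputs it
# needs, and the system refuses to decide on incomplete information.

REQUIRED_INPUTS = {
    "route_change": {"current_position", "traffic_state", "weather"},
}

def can_decide(decision_type: str, available_inputs: dict) -> bool:
    """True only if every required input for this decision type is present."""
    missing = REQUIRED_INPUTS.get(decision_type, set()) - available_inputs.keys()
    if missing:
        # Incomplete information: defer, escalate, or fall back instead of guessing.
        print(f"Deferring '{decision_type}': missing {sorted(missing)}")
        return False
    return True

# Usage: only "current_position" and "weather" are known, so the system defers.
can_decide("route_change", {"current_position": (1.2, 3.4), "weather": "clear"})
```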

Even so, there will always be ways in which an AI system can make a wrong decision, not least because humans themselves are often unpredictable. In many cases, there will come a moment when an AI needs to make a highly complicated decision that affects the rest of the system and its subsequent choices. It may be that such decisions cannot be left entirely to an AI system, because it will likely fail. This follows the logic of ‘if you do not trust it, do not use it’, which means we should be taking more precautions when using an AI system.

Some argue that there is no reason why an AI system should not be able to make decisions on its own, because humans already do that every day. The difference is that humans have experience of making decisions for themselves, which AI systems do not. Even though AI systems have become more capable over time, they still cannot think like humans, and that will be a problem when dealing with AI liability. If an AI system makes a mistake, there may be no way to know exactly why it made that decision, leading to further problems down the line.

However, even if we could solve this problem, I do not think it will do much to convince those concerned about AI liability. Society needs to appreciate the potential benefits of these systems and value them more if the industry is to grow. No technical solution we develop will replace the impact of a personal connection, a feeling of trust from the human perspective. For industries such as AI to see more growth, people must feel they can trust the systems and use them without fear of anything going wrong. This is where we need to start: to manage the liability of AI systems better, we must understand them better.

The main problem with AI liability is that there are no laws or guidelines that deal with the issue. With new systems coming out every year, it is hard to keep up with the latest developments, and this leaves industries in a state of uncertainty about how to handle different situations. The industry has no concrete rules to fall back on when things go wrong, and that could lead to a massive backlash against AI systems and our trust in them.

To manage AI more effectively, there need to be rules about how these systems can be used. With clearer rules, it will be easier to deal with AI liability, because the law will explicitly cover these systems and industries. There are already laws that treat an AI system as a product, but this does not go far enough. To make better use of these machines, we need to help them learn and grow into our everyday lives.

A good way of managing AI liability is to ensure a high level of transparency when dealing with any issues or problems. The people in charge must explain how they handle an issue and how they plan to fix any problems that occur down the line. The more we know about the solutions and risks in AI systems, the better equipped we will be to prevent problems and to deal with them properly when they arise. Improving this transparency will go a long way towards managing AI liability well.
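As a loose illustration of what transparency could mean at the system level, here is a small Python sketch that records every decision, its inputs and its stated rationale to an audit log. The file name and record fields are assumptions made for the example, not a description of any existing product.

```python
import json
import time

# A minimal audit-log sketch: every decision is recorded with its inputs,
# outcome and stated rationale so it can be explained after the fact.
# The file name and record fields are illustrative assumptions.

AUDIT_LOG = "ai_decision_audit.jsonl"

def log_decision(system: str, inputs: dict, outcome: str, rationale: str) -> None:
    """Append one decision record as a JSON line for later review."""
    record = {
        "timestamp": time.time(),
        "system": system,
        "inputs": inputs,
        "outcome": outcome,
        "rationale": rationale,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage: record why the lane-keeping module chose to brake.
log_decision(
    system="lane_keeping",
    inputs={"distance_to_obstacle_m": 12.5, "speed_kmh": 48},
    outcome="brake",
    rationale="obstacle within braking threshold",
)
```

Records like these are what would let the people in charge explain, after the fact, why a particular decision was made.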

This leads us to the argument that AI is beneficial because it improves our everyday lives. It could be argued that these benefits are not worth the risk of accepting or bearing liability, but that risk is not something that can always be avoided. Society has a responsibility to deal with the issues that come from new technologies as soon as they arise, and it is better to take care of such problems while they are small than to deal with them later, once they have grown much bigger.

--

Murphy Choy

Murphy has over a decade of experience in the area of business consulting, data monetisation and startups.