Liability for Artificial Intelligence will reshape product liability regulation as a whole. Almost 40 years ago, in 1985, the Council of the European Communities adopted the Product Liability Directive.
The Directive has served the interests of consumers and producers alike by requiring producers to compensate consumers for physical damage that a defective product causes to them or their property, irrespective of the producer's negligence or fault. It is not, however, equipped for situations involving new technologies such as Artificial Intelligence (AI). Therefore, in September 2022 the European Commission (EC) adopted two proposals for new AI liability rules intended to protect consumers and foster innovation.
By revising the Product Liability Directive, the first Commission proposal seeks to modernize the existing strict liability rules for manufacturers of defective products. The revised rules are expected to give businesses the legal certainty to invest in new and innovative products, and to ensure that victims receive fair compensation when defective products, including digital and refurbished products, cause harm.
The proposal modernizes liability rules for circular economy business models by ensuring that the rules are clear and fair for companies that substantially modify products. It also modernizes liability rules for products in the digital age, allowing compensation for damage when products such as robots, drones or smart home systems are made unsafe by software updates, AI, digital services necessary for their operation, or a manufacturer's failure to address cybersecurity issues. The proposal will create a more level playing field between EU and non-EU manufacturers by requiring the importer or the manufacturer's EU representative to provide compensation. Moreover, it is expected to put consumers on an equal footing with manufacturers: manufacturers will be required to disclose evidence, claims will be subject to more flexible initiation rules, and the burden of proof will be eased for victims in complex cases.
For the first time, the proposal for an AI Liability Directive introduces targeted harmonization of national liability rules for AI, provides greater protection for victims harmed by AI, reduces the burden of proof and promotes the AI sector.
Individuals and businesses harmed by AI products or services will benefit from the same standards of protection as if they had been harmed under any other circumstances, making it easier to claim compensation for AI-related damage. The proposal also simplifies the legal process by reducing the burden on victims to prove that fault led to damage. It introduces a presumption of causality: when a relevant fault has been established and a causal link to the AI's performance seems reasonably likely, victims need not explain in detail how the harm was caused by a specific fault or omission, which can be quite complicated given the complexity of the technologies and systems involved. It also introduces a right of access to evidence, meaning that companies and suppliers must make such information available when high-risk AI is involved. Additionally, the AI Liability Directive proposal seeks to promote the AI sector by increasing guarantees and harmonizing certain rules outside the scope of the Product Liability Directive, for example where damage is caused by wrongful behavior such as breaches of privacy, by safety failures, or by discrimination involving AI technology.
The Commission’s proposals will now need to be adopted by the Parliament and the Council to become binding.