A new European AI law that would ban certain uses of the technology, such as facial recognition in public places, could affect the industry as a whole. With healthcare and other industries adopting AI, the technology has become increasingly important to society. One current point of contention is that compliance costs for high-risk AI products could reach $452K.
Europe’s AI Law
The European Union outlined its proposed artificial intelligence legislation earlier this year and received hundreds of responses from companies and organizations. The European Commission officially closed the consultation period in August and is now preparing for further debate in the European Parliament.
The new law could ban some uses of the technology while focusing on regulation and review of artificial intelligence systems considered “high risk,” such as those used in education and employment decisions. According to VentureBeat, any company with a software product deemed “high risk” will be required to obtain a Conformité Européenne badge before entering the market.
Compliance Costs Could Go Up to $452K
High-risk products will be required to be designed for human oversight, both to avoid automation bias and to ensure accuracy at a level proportionate to their use. Many companies are concerned about the knock-on effects the new law could have.
They reportedly argue that it could stifle European innovation by luring talent to less strictly regulated regions such as the US. The anticipated compliance cost for a high-risk AI system could reach $452K, according to datainnovation.org.
2016 EU General Data Protection Regulation Laws
In autumn, the UK published its own national AI strategy, which, according to a minister, was designed to keep regulation to a “minimum.” When the EU General Data Protection Regulation was first adopted in 2016, it forced companies with websites on both sides of the Atlantic to react and adapt.
AI is especially important in certain areas like health. According to VentureBeat, the use of AI in healthcare will “inevitably” fall under the “high risk” label, which is why the legislation focuses heavily on reducing AI biases.
Building Public Trust with AI
The new law aims to set a gold standard for building public trust, which remains vital to the industry. If an AI system is trained on data that fails to accurately represent its target population, its results can be skewed.
Faulty results could damage trust, which is especially important in healthcare, where a lack of trust limits effectiveness. AI breakthroughs will not be effective if patients remain suspicious of a therapy or diagnosis produced by an algorithm, or if they cannot understand how its conclusions were reached.