Added: 5th March 2021 by Kubrick Group
AI solutions are optimising the world around us: they power our search engines, animate our customer service chatbots, and predict the length of a car journey or the time of a delivery. As the concept of AI grows in familiarity within businesses and households alike, its relative infancy is easily forgotten. There is as yet no dedicated regulation of AI in UK or EU law; whilst GDPR puts controls on the data that feeds AI algorithms, the implementation of Artificial Intelligence remains largely unexplored territory for businesses to conquer.
Without the motivation of adhering to official regulation, the cost of researching, creating, and embedding ethical AI practices could be mistaken for an investment without return. However, for those organisations searching for tangible benefits beyond fulfilling a moral obligation, there are three fundamental reasons to create their own policies and principles to ensure their AI use is ethical.
We’ve already seen what happens when algorithms go awry; the backlash the UK government faced over its A-level exam results calculations in August 2020 was a timely reminder of the real people behind the data that feeds a model. Any organisation with the technological maturity to implement AI should also understand the full spectrum of possible outcomes and strive to mitigate potential risks by setting standards and protocols for AI use.
Research from Capgemini’s Artificial Intelligence Group demonstrates that consumers prefer AI use they deem ethical. The majority of the survey’s respondents believed such an approach to AI would increase their loyalty to a company and/or their consumption of its products or services. Conversely, over 40% agreed that they would complain after a problematic interaction with AI, whilst 34% would stop interacting with the business entirely as a result.
As AI technologies continue to advance, formal regulation is an inevitability looming on the horizon. In the same vein, guidelines for best practice can be implemented both to strengthen ethical AI use and to prepare for future regulation. Organisations at the fore include pharmaceuticals leader AstraZeneca, which has already published principles echoing the conclusions of research and advisory bodies such as Gartner. Their guidelines are therefore likely aligned with the overarching aims of future regulatory policies and provide a secure footing from which to build internal governance frameworks.
The Fourth Industrial Revolution is well underway, but the full extent of its capabilities and consequences is yet to be determined. For now, the role that AI plays, and will continue to play, is indisputably critical for businesses across all industries - a reality which has only been reinforced and accelerated by the COVID-19 pandemic. Rather than attempt to avoid its implementation, organisations must embrace AI with careful consideration of their responsibilities and utilise the principles and best practices published by institutions and industry leaders. By approaching new technologies with recognition of both risk and reward, leaders can embed Artificial Intelligence within their teams with the same confidence as any other decision for the advancement of their business.