TECH TALK: BILL MAGEE says boards are being encouraged to hire ‘ethicists’ to oversee the dangers and benefits of new tools
Businesses fully expect an uncertain year ahead. Confused by conflicting reports about the benefits, or otherwise, of artificial intelligence, they are being urged to embrace a brand-new role by hiring an “AI ethicist” at board level. Don’t worry: we’re not talking about another automated and unaccountable algorithm, app or bot, but an actual, real person.
The digital age is increasingly manifesting itself as an era jam-packed with misinformation, Photoshop enhancement and deep-fake news. It’s keeping watchdogs, regulatory authorities and lawyers ultra-busy, and it’s not helped by Google admitting, according to Bloomberg and BBC reports, that it edited a viral AI video to “look better”.
Just what can be believed nowadays via online and mobile channels?
A European Union move to regulate AI across the bloc’s 27 member countries has been described as a “world first” in legal oversight of conversational platforms led by ChatGPT. Meanwhile, ITPro and others wonder whether the UK-US data “bridge”, just coming into effect, can pull off a harmonisation of information-protection standards across the pond and back again.
It’s high time for boards to take AI seriously, says Carissa Véliz, associate professor of philosophy and ethics at the University of Oxford and author, who gave a keynote at Scotland’s annual data summit, staged by the Data Lab. She said that to navigate the digital age successfully “the board of every company should have a member who is an AI ethicist…companies have to understand the risks…it is time for boards to be prepared as AI steers them into potential danger.”
An Economist Book of the Year recipient for “Privacy is Power: Why and How You Should Take Back Control of Your Data”, she warns of the danger of getting “swept up in the enthusiasm” of such a commercially game-changing technology. Véliz singles out how easy it is for frontline employees (surely also executives?) to take shortcuts without proper review, using AI tools in relatively unsupervised ways despite their not having been tested at scale.
As boards mull over how to engage more deeply with the new tech, IBM continues to predict confidently that AI will “definitely not displace” but “augment” white-collar jobs. MIT maintains such automated practices will aid productivity and efficiency in the workplace.
Goldman Sachs, by contrast, offers an altogether gloomier prognosis. The global investment banking giant claims AI may well create 69 million jobs within the next five years, but at the cost of 300 million full-time roles over the next 15.
It’s clear that the wise business is the one that establishes an AI governance framework, with clear ethical principles integral to the organisation’s digital roadmap. Failure to define and implement such a policy is likely to cause major headaches with AI operating models.
Lee Ho Alexander, lead software engineer at Edinburgh-based Exception, says: “Though AI trends tend to centre around their apparent limitless potential, the challenge for organisations is tempering the art of the possible, rather than toil with sci-fi moral dilemmas.”
Unless they are global goliaths with teams of the very best data engineers and scientific minds at their disposal, enterprises and SMEs alike must look beyond the technological rhetoric, “making their own judgements on what they see and hear”.
Bottom line? AI adoption and an organisation’s ability to utilise and deploy it in a meaningful, beneficial way depends on technological and organisational readiness.
Catch Lee’s blog, “Unlocking AI’s Power: Assessing Readiness and Applications”, here: www.info.exceptionuk.com/en-gb/ai/is-your-own-ai-revolution-hiding-in-plain-sight
This article is supported by digital transformation company Exception