
We trust in God - but should we trust in AI?

Or: AI DOES Dream of Electric Sheep

A lot has been said about AI and trust. When it first burst onto the scene, much was made of its ability to eliminate human bias. Decisions made by AI, we were told, would be fairer; prejudice wouldn't exist. There was debate about its use in the judiciary, in recruitment, in decisions about the allocation of grants, and so on. AI wouldn't be swayed by its previous experiences, or by those of its peers or parents. It couldn't be influenced or bribed.

However, the first flush has passed and, as with every love affair, reality has set in. There is a mounting sense of, if not mistrust, then suspicion; and in an environment like Higher Education the stakes are often higher and the need for caution even greater.

If we are going to exploit AI's considerable benefits in this sector, we need to overcome those perceptions, real or imagined. The following three challenges are by no means the full extent of the problem, but they are areas where HE has a particular role to play, or an advantage to deploy, in creating a wider solution.

The first is what is often referred to as the Black Box Effect. AI is a new technology and has an air of mystery around it. We often don't understand how an AI arrived at a decision: which data it was given, the technology it uses, the model architecture, and so on. Developers are reluctant to reveal this, despite commitments to open source, in case it damages their competitive advantage or makes them more vulnerable to cyber attack, or simply because customers and clients won't understand what they are being told, which in turn breeds mistrust.

Regulation can help here, forcing organisations to submit to regular external audits (AI auditor being one of those new jobs that didn't exist five years ago), but such regimes don't properly exist yet. It may be easier to turn the question on its head and encourage trust by trusting people: involving those who teach and train radiologists in understanding how AI makes decisions, for example, to improve its uptake in medical settings. One simple technique for opening the box is sketched below. Higher Education has a valuable role to play in helping man and machine get the best from each other.
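To make the idea concrete, here is a minimal sketch, in Python with invented synthetic data, of one common way to peer inside a black box: training a small, human-readable "surrogate" decision tree to imitate an opaque model's decisions so that a human reviewer can audit its logic. The models and data are illustrative assumptions, not any particular deployed system.

```python
# A minimal sketch of a "surrogate model" explanation technique.
# The black-box model and the data here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for real decision data (e.g. admissions or grant scoring).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The opaque model whose reasoning we cannot directly read.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# A shallow tree trained to imitate the black box's outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A depth-3 tree prints as a handful of if/else rules a human can audit.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

The surrogate is only an approximation of the black box, but that is the point: it gives teachers, auditors and regulators something legible to interrogate, which is exactly the kind of translation work HE is well placed to do.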

The next is ironic: bias. Initially, AI was hoped to be the solution, offering data-based decisions without human vanity or experience to taint them. However, it soon became clear that AI is, sadly, as frail as we are: only as good as the data it is given and the parameters within which it is used.

AI has, sadly, proved harsher in sentencing certain minorities than a human would be, and more likely to deny them mortgages.

For now, there is no complete fix. Adding more diverse sources of data, better testing and monitoring, and applying what are sometimes called "fairness constraints" (a simple illustration follows below) will all go some way, but one of the best remedies is to widen the pool of people who actually design the AI. The collaborative and open nature of Higher Education, which tends to be less constrained by competitive commercial firewalls, makes this a perfect opportunity for HE to help broaden the pool of people working on AI as a tool, and therefore to broaden AI's inputs and outputs. HE also has relationships that traverse geography and social definitions of "fair". It can create consensus, where Governments would struggle, on what "fair" looks like and how it can be applied.
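To make "fairness constraints" less abstract, here is a minimal sketch, with invented numbers and group labels, of the quantity such a constraint typically limits: the gap in positive-decision rates between groups, often called demographic parity. Nothing here is real data; it simply shows what gets measured and bounded.

```python
# A minimal sketch of one fairness metric: demographic parity difference.
# Groups "A"/"B", the predictions, and the 0.1 threshold are all illustrative.
import numpy as np

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-decision rate between any two groups."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = approve) for applicants from two groups.
preds  = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap, rates = demographic_parity_difference(preds, groups)
print(rates)  # {'A': 0.8, 'B': 0.4}
print(gap)    # 0.4 -- a fairness constraint might require this to stay below 0.1
```

In practice a "constraint" means training or adjusting the model so that this gap stays under an agreed threshold, and agreeing that threshold is precisely the kind of consensus-building, across disciplines and borders, at which HE excels.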

The final challenge is that Philip K. Dick was right: androids DO dream of electric sheep. Or rather, AI LLMs hallucinate. They generate responses that are illogical or untrue due to a number of factors: limited data, flawed or insufficient training environments, or language barriers. This can lead an AI to believe it is being attacked (and shut itself down), or that a user is "kind", causing it to effectively "fall in love" with and favour that user.

The irony is that the drive at the moment is to use AI to replace us or to strengthen us. But we need to learn to partner with it more, not less. We use it in Higher Education to help us learn, to help us understand, to democratise education. So while part of the solution is better data and better training, the key, as with every AI mitigation, is to involve people, not just the AI: to get us to step up, not step back.

Higher Education is investing a lot in the technology, and it should. But we must match this with investment in the data literacy and data ethics of the people using it, developing it, and providing it with context. We need to fill the gaps ourselves, not wait for the machine to do it for us.
