In 2023, it was hard to escape discussions about Artificial Intelligence (AI), and this year the clamour is only growing louder. A quick search for AI news turns up a dozen stories within 24 hours, the most prominent being the AI-generated deepfake audio of London Mayor Sadiq Khan, in which Mr Khan was made to sound as though he was making disparaging remarks about Remembrance Sunday – a situation which, the politician points out, could have led to large-scale disorder had it been believed without question. Examples like this highlight malicious uses of the technology and have left many people fearful of it.
Despite Prime Minister Rishi Sunak hosting a prominent conference on AI last year, it remains a technology that few fully understand and that governments are still grappling with how to control and regulate. It is, however, a Pandora's box that cannot be closed again. If 2023 fostered a sense of panic around AI, 2024 should be the year in which workplaces and organisations start to have real conversations about where they can use AI constructively to support their workforce and customers.
The possibilities of AI can seem limitless, and this adds to the sense that it is too big to comprehend or collaborate with, but some organisations have already started this journey.
At the end of last year, Share invited a housing organisation to a conference to discuss the AI product it has developed to monitor activity in its housing for older people. The technology can detect a lack of activity and raise the alarm, supporting older people to live independently for longer, giving their families additional peace of mind and, in the absence of constant human oversight, helping to ensure the safety of tenants.
Their journey was not without its difficulties, and a large section of their workshop was dedicated to ethics, with a cautionary tale of the appetite of some firms to buy data for vague purposes, a situation they managed to avoid. But ultimately, they are proud of the product that can support their older tenants to live safely alone for longer.
There is no doubt that to embrace this new technology, organisations need to be open to the possibilities it creates. Much of the apprehension around AI stems from the fear that it will ultimately dispense with the need for human endeavour in some sectors and should therefore be resisted, but this need not be the case. Organisations should look to AI to take over tasks that free up staff for more, not less, human interaction.
Take, for example, AI and employee relations. Trends already show that HR professionals are moving towards AI technologies for monitoring workplace trends, creating induction materials and improving recruitment processes, but in a way that complements human input rather than relinquishes the need for it.
So how do organisations start their AI journey? Much the same as with embedding any other major change: it is important to overcome staff fears about the technology and to have open, collaborative conversations about where AI can have a positive impact on the business. The principles of the Kübler-Ross change curve apply here as they do to any significant change.
Starting the conversation about AI before any investment is made will ensure that colleagues are not unnecessarily suspicious of the move, and can feel included, helping to overcome that fear and reticence.
Once you have embarked on this journey, it is useful to define the parameters for the use of AI, such as when it is and is not acceptable. As a learning and development organisation, this has been a particularly important issue for us when it comes to ChatGPT and plagiarism, and we now have policies and procedures in place to deal with it.
Do you need to update policies and procedures? Will your staff be clear on when it is and is not acceptable to use AI in the course of their tasks? Ensuring that this is clear for staff will help to avoid the pitfalls that come with vague parameters around any new technologies, which leads to the next area that must be considered – risk management.
Risk management is key to any technological implementation within an organisation, and introducing AI technologies should be no different. As with any new programme, risks should be mapped, categorised, mitigated and monitored, with additional attention given to security, ethics and morale within this matrix. The recently highlighted Post Office scandal is a timely reminder that technology requires rigorous human oversight.
Much like the advent of email and the internet, AI will most likely change the workplace forever, and like those earlier technologies it will bring issues and as yet unforeseen consequences. It is, however, here to stay and will only continue to develop, so organisations need to start reflecting now on how they can best use it to complement the work their people are doing, deliver more for their customers and look after their staff.
If you would like to discuss training and development around risk management or managing change, contact the team at Share for an informal chat.