Warnings that workers could be exploited by new AI technologies – what are the dangers?

BCL partner and data protection and surveillance law expert Julian Hayes recently spoke to People Management magazine about the data protection issues surrounding employers’ increasing use of artificial intelligence, following the return of the Data Protection and Digital Information Bill to parliament this week (w/c 17 April).

Here is a short extract from Yoana Cholteeva’s article for People Management*. If you wish to read the full article, please visit the People Management website.

“The Trades Union Congress (TUC) has warned that AI is “transforming” the way people are hired, managed and fired – meaning AI is making “high-risk, life changing” decisions about workers’ lives, it said.

During its AI conference in London earlier this week (Tuesday 18 April), the TUC voiced its concerns that the government was “failing to protect workers from being exploited by new AI technologies” as the technology has recently surged in popularity in workplaces…

The TUC has also warned that in addition to failing to regulate AI effectively, the government could also be “watering down important protections” already in place.

With the Data Protection and Digital Information Bill back before parliament this week, the union body said changes to the bill would further threaten important rights currently guaranteed under GDPR, such as protections for workers against automated decision-making and a say for workers and unions over the introduction of new technologies through an impact assessment process.

Echoing this, Julian Hayes, partner and surveillance law specialist at BCL Solicitors, also warns of “the danger of the algorithm becoming king” in decisions made in work situations if proposed amendments to UK GDPR go through.

He cautions that this will make the algorithm very powerful in “legally fraught and sensitive life events such as hiring, firing and pay decisions, where meaningful human involvement has historically driven decisions, actions and results”.

Hayes says this is “indicative of the government’s essentially laissez-faire approach to AI”, adding that “to strike the right balance between the opportunities and threats the use of AI brings, developers and users of AI need greater regulatory certainty”.”

*This article was first published by People Management on 20 April 2023.

Julian Hayes is a partner specialising in corporate and financial crime, computer misuse offences, surveillance and data protection law. He advises individuals and corporates in relation to fraud and corruption investigations by the SFO, enforcement actions by the FCA (insider dealing and market abuse) and offences under customs and excise legislation prosecuted by HMRC. As well as expertise in relation to cybercrime, Julian specialises in advising data controllers and others on the provisions of the Data Protection Act 2018 and GDPR (including breach reporting), and Communication Service Providers in relation to their obligations under the Investigatory Powers Act 2016 and its associated Codes of Practice.