Emerging Technology and Labour Exploitation: The Role of Artificial Intelligence

Bhumi Pandya, Senior Policy Analyst
Savannah Storm, Research and Indexing Officer

January 2026

Executive Summary

Context

Artificial Intelligence (AI) is transforming the way workers are recruited, managed, and monitored. Although AI can improve efficiency, it also heightens the risk of labour abuse, for example through tracking workers or allocating their responsibilities on the basis of productivity scores. This briefing explores how AI intersects with labour exploitation and how AI may evolve in the future.

1.    AI in Recruitment and Pre-Employment
AI can be used to streamline recruitment processes and pre-employment checks. However, using AI in this way replaces human decision making, which can have negative impacts such as systemic bias and targeted job adverts. Moreover, applicants can use AI to create fraudulent applications and to stand in for them in virtual interviews.

2.    AI in the Workplace
Employers increasingly rely on AI to predict staff turnover, profile workers, and monitor productivity via tools such as facial recognition. This reliance on AI can be misused to overwork staff, reinforce surveillance, and retain workers under coercive conditions. Algorithmic coercion tactics, such as surveillance and the targeting of specific worker profiles, may be used to exploit workers, showing that AI is being used to discriminate against workers and monitor their efficiency.

3.    AI Exploitation Tactics and Negative Impacts
AI can be used as a form of exploitation via wearable technologies, such as smartwatches and trackers, which enable exploitation through micromanagement. Additionally, the use of and reliance on AI can have negative impacts both for workers using AI and for those receiving information through it (such as via translation tools).

4.    AI and Concealing Exploitation
AI is constantly evolving, and it can be used to conceal exploitation: through document forgery, automated communications, manipulation of data such as time sheets and pay records, and the management of a business's reputation.

5.    Emerging and Future Risks
With AI continuously changing, adapting, and evolving, there are various emerging risks that are likely to affect labour exploitation as a whole. Those highlighted in this section are: recruitment scams, coercion tactics, the gig economy, and cross-border exploitation.


 
Context 

AI is transforming the labour market. Although this change presents opportunities for efficiency, it also heightens the risk of exploitation. The UK Government’s AI Playbook sets out a national strategy for integrating AI across the public and private sectors, outlining AI’s role in productivity, data analysis, and decision making. However, the Playbook also outlines potential risks of AI, including algorithmic bias (where AI can produce unfair or discriminatory outcomes), data misuse, and regulatory blind spots that could facilitate exploitation.

Moreover, there has been increased public concern over AI being used in recruitment and as a form of workplace surveillance, both of which can particularly affect low-paid and migrant workers. The Institute for Public Policy Research’s (IPPR) Transformed by AI report echoes these concerns, warning that AI may reshape low-wage industries in ways that increase informal work.

Such developments highlight the need for a better understanding of AI and how it can increase exploitation.
 

1.    AI in Recruitment and Pre-Employment
AI can be used to streamline recruitment processes and pre-employment checks, replacing human decision making in recruiting employees and performing pre-employment checks. However, AI making decisions in recruitment and pre-employment checks can have negative impacts such as systemic bias (a bias that results from rules, processes, or norms that advantage certain groups and disadvantage others) and targeted job adverts. Additionally, applicants can use AI to create fraudulent applications and to stand in for them in virtual interviews.

Recruitment Mechanisms, Bias and Discrimination

Recruitment that is steered by AI is prone to bias. Algorithmic screening has been shown to discriminate against ethnic minorities, women, and migrant workers (all groups susceptible to exploitation). Similarly, worker testimonies have highlighted that automated systems filter out candidates based on postcode, nationality, or incomplete digital footprints. It has also been found that predictive scoring systems disadvantage low-paid or irregularly employed individuals, putting them at risk of exclusion from legitimate work and pushing them toward informal work and exploitative practices.
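
To make the mechanism concrete, the hypothetical Python sketch below shows how an automated screen that scores applicants on proxies such as postcode, employment gaps, and digital footprint can silently exclude the groups described above. Every field name, prefix, and threshold is invented for illustration and does not represent any real vendor's system.

# Hypothetical sketch of a proxy-biased screening filter.
# All field names and thresholds are invented for illustration.

def screen(applicant: dict) -> bool:
    """Return True if the applicant passes the automated screen."""
    score = 0
    # Postcode acts as a proxy for area deprivation (and often ethnicity).
    if applicant["postcode"].startswith(("E1", "B8")):   # arbitrary example prefixes
        score -= 2
    # Gaps in employment history penalise irregularly employed people.
    if applicant["months_unemployed_last_2y"] > 6:
        score -= 2
    # A thin online presence penalises recent migrants.
    if applicant["linkedin_connections"] < 50:
        score -= 1
    return score >= 0

applicants = [
    {"postcode": "E1 6AN", "months_unemployed_last_2y": 8, "linkedin_connections": 12},
    {"postcode": "SW1A 1AA", "months_unemployed_last_2y": 0, "linkedin_connections": 300},
]
print([screen(a) for a in applicants])   # [False, True]: the proxies do the discriminating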

Moreover, as businesses use AI screening (an automated process using algorithms to analyse and filter large numbers of applications in a short time) to speed up their recruitment processes, there is an increased risk of bias creating an unfair hiring environment that may affect vulnerable, migrant, or low-paid workers. A number of cases describe AI being used in recruitment with negative outcomes for applicants: AI body language scoring software rated applicants poorly and cost them job offers despite their skills and experience, and older applicants were screened out in favour of younger ones. Concerns have also been raised about the use of AI in hiring more broadly, with tools such as facial recognition scoring and emotion detection posing a particular risk to underrepresented groups. In addition, discrimination has been found in AI tools provided for sourcing, screening, and selection, which can filter out candidates with certain protected characteristics, or on the basis of assumptions about those characteristics; this information was often processed unlawfully and without candidates' knowledge. These examples highlight that AI in recruitment can be used to sift for vulnerable applicants and entrap them in exploitative working conditions.

The AI systems used for recruitment, whether for sourcing, screening, interviewing, or selection, can all be subject to bias and may produce discriminatory outcomes. As AI models can perform differently depending on the environment in which they are deployed, consistent monitoring by organisations is necessary to ensure inconsistencies are identified.
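
One standard monitoring check organisations can apply is the 'four-fifths' rule of thumb drawn from US employment-selection guidance: if one group's selection rate falls below 80% of the most-favoured group's rate, the tool may be producing adverse impact. A minimal sketch, with invented figures:

# Minimal adverse-impact check (four-fifths rule); all figures invented.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applied)."""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def adverse_impact(outcomes: dict, threshold: float = 0.8) -> list:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Flag any group whose rate falls under `threshold` of the best rate.
    return [g for g, r in rates.items() if r < threshold * best]

outcomes = {"group_a": (90, 200), "group_b": (30, 120)}  # 45% vs 25% selection
print(adverse_impact(outcomes))  # ['group_b'] -> investigate the model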

Job Advertising and Worker Targeting

Furthermore, AI can also be used to target job advertising, giving it the ability to draw workers into an exploitative environment by targeting specific job seekers. Concerns have been emphasised regarding “job adverts that include a gendered job title,” as they are “likely to be discriminatory”; AI tools in recruitment must therefore be used with caution. This suggests that some AI-generated job adverts can be detected via job titles that mention gender. AI used in job advertising “may disadvantage women from receiving job adverts” due to biased targeting methods. Exploitative employers may therefore use AI to capture vulnerable workers through job adverts that look legitimate but have been programmed to accept applications only from, and appear only to, vulnerable individuals. Legal safeguards have been called for to prevent such discriminatory targeting and to ensure transparency in recruitment advertising. However, although this may reduce the ability of AI to target certain individuals, AI is constantly evolving and there are likely to be further loopholes for exploitative employers.

Fraud, Deepfakes and Fake Identification

Moreover, the rise of AI has led to growing concerns and threats regarding Curriculum Vitae (CV) fraud and deepfake interviews. A rise in CV fraud via AI has been detected, with 67% of large companies reporting an increase in job application fraud, a trend attributed to AI tools being used to enhance or fabricate experience or qualifications. There is also concern that qualification fraud is “slipping through the cracks” due to a lack of employer awareness or underdeveloped detection practices. Fraudulent applications can leave workers fearful of challenging poor conditions because they have already falsified their skills and experience, making it difficult for such workers to report exploitation or be identified.

In addition to fraudulent applications, it has been acknowledged that AI can create convincing fake identification, payslips, and visa documents, heightening the risk of identity fraud. Deepfakes have been recognised as difficult to monitor, yet this does not mean they do not exist. Deepfake technologies are now sophisticated enough to convincingly manipulate audio and video, creating realistic digital personas that can be deployed in virtual interviews or at identification stages, which poses growing risks for employers and regulatory bodies alike. For the UK labour market, this presents a threat of fraudulent applicants exploiting deepfakes to impersonate legitimate candidates or obscure their true identity, while exploitative recruiters could use manipulated media to falsify evidence. This underscores the need for stronger verification methods across recruitment processes in order to identify deepfakes.

Overall, AI use in recruitment processes, pre-employment checks, job adverts, job applications, and interviews can have various negative impacts, such as systemic bias, the targeting of vulnerable workers, and the entrapment of vulnerable workers in exploitative working conditions.


2.    AI in the Workplace
In a work environment, AI can be used to predict staff turnover, profile workers, create algorithmic schedules, and monitor productivity via facial recognition and algorithmic coercion tactics.

Profiling Workers

Employers are able to use AI tools to profile workers into two categories: 

1.    An individual who is likely to report exploitation;
2.    An individual who is unlikely to report exploitation.

By using AI to profile workers, employers can identify individuals they may want to replace or who can be pushed into further exploitative conditions. The IPPR and the TUC AI Bill Project highlight ethical issues with this: profiling can penalise workers who raise concerns or take leave, enabling coercive practices. Exploitative employers can use this profiling method to divide job responsibilities and earning opportunities between those who are likely to report and those who are unlikely to report. The IPPR report specifically notes that AI and algorithmic systems are changing labour markets, including how “organisational and strategic tasks” are transformed. This could mean that those judged unlikely to report are given more responsibilities, and more money, than those sifted as ‘likely to report.’ Furthermore, if such employers notice that a certain demographic is less likely to report, this tells them which demographics to hire in future, highlighting that workers are targeted and systemic bias is used.
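
To illustrate how little sophistication such profiling requires, the hypothetical sketch below turns routine HR fields into a crude 'likelihood to report' score. All features and weights are invented; the point is the mechanism, not any real system.

# Hypothetical illustration of the profiling risk described above.
# Features and weights are invented; note how little data is needed.
def report_likelihood(worker: dict) -> float:
    """Crude 0-1 score of how likely a worker is judged to raise concerns."""
    score = 0.5
    score += 0.2 * worker["grievances_filed"]                 # past complaints
    score += 0.1 if worker["union_member"] else 0.0
    score -= 0.2 if worker["visa_tied_to_employer"] else 0.0  # dependency markers
    score -= 0.1 if worker["employer_housing"] else 0.0
    return round(max(0.0, min(1.0, score)), 2)

w = {"grievances_filed": 0, "union_member": False,
     "visa_tied_to_employer": True, "employer_housing": True}
print(report_likelihood(w))  # 0.2 -> profiled as 'unlikely to report'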

Managing Job Responsibilities

Furthermore, once employed, workers' job responsibilities can be organised via roster systems created and managed by AI, raising the risk of exploitation. There have been testimonies from UK workers facing unpredictable hours and penalties for missed shifts due to automatic scheduling, highlighting that employer dependency on AI can negatively affect vulnerable workers. Additionally, it was found that the output quality of AI scheduling tools depends directly on the quality of the information inputted, underlining that AI is objective rather than subjective: it does not take into account any information it is not given. For example, an AI system will not understand that a worker is slower at stocking shelves than another because of a disability unless that is inputted. Although the study which found this result examined shift work within the retail sector, it did find that the major consequences of errors in employee availability information were managers having to manually correct the resulting schedules, schedule instability, and an increased time burden to make corrections, indicating the unreliability of AI. Although further research would be needed to determine whether the use of AI in labour exploitation would produce similar findings, the Trades Union Congress does note that algorithmic management has a significant impact on workers.
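
The 'objective' behaviour described above can be shown in a few lines: a hypothetical scheduler that ranks workers purely on recorded throughput will keep assigning the most demanding shifts to whoever the data says is fastest, and knows nothing about a disability unless someone encodes it (all field names invented):

# Hypothetical scheduler sketch: output quality depends entirely on input data.
def assign_shift(workers: list, demanding_shift: bool) -> str:
    """Pick a worker for a shift by recorded throughput alone."""
    # Sorts on items_per_hour only; any unrecorded context (disability,
    # caring duties, fatigue) simply does not exist for the algorithm.
    ranked = sorted(workers, key=lambda w: w["items_per_hour"], reverse=demanding_shift)
    return ranked[0]["name"]

workers = [
    {"name": "A", "items_per_hour": 130},            # no relevant context recorded
    {"name": "B", "items_per_hour": 90},             # slower due to a disability,
]                                                    # but that field was never inputted
print(assign_shift(workers, demanding_shift=True))   # 'A', every single time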

Surveillance and Monitoring

In relation to impacts on workers, AI can be used for surveillance and productivity monitoring of employees. Concerns have been raised about AI-led surveillance technologies, including systems that continuously track keystrokes, location, task completion speed, eye movements, and micro pauses. These tools can create an environment of constant monitoring in which workers feel unable to take breaks, raise concerns, or work independently without triggering automated performance flags. Such monitoring and surveillance can be a form of coercive exploitation, where workers feel they must perform to the standard expected by the employer to avoid negative consequences. UK warehouse, delivery, and logistics workers report being subject to algorithmic monitoring that tracks walking speed, parcel scanning rates, route efficiency, and idle time in real time. Many workers describe feeling pressured to work at unsafe speeds or skip rest breaks to avoid automated disciplinary action or penalties. Consequently, AI surveillance of worker productivity can enable employers to take advantage of workers and can make exploitation difficult to detect, because the coercion is embedded within AI rather than exercised through human control.
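
A minimal sketch of the automated flagging logic such systems are reported to use (all metric names and thresholds invented): once targets are encoded this way, a rest break and 'idle time' are indistinguishable to the system.

# Hypothetical productivity-flagging sketch; metrics and thresholds invented.
THRESHOLDS = {"scan_rate_per_hour": 180, "idle_seconds": 300, "walk_speed_kmh": 4.0}

def performance_flags(telemetry: dict) -> list:
    flags = []
    if telemetry["scan_rate_per_hour"] < THRESHOLDS["scan_rate_per_hour"]:
        flags.append("below scan-rate target")
    if telemetry["idle_seconds"] > THRESHOLDS["idle_seconds"]:
        # A toilet or rest break looks identical to 'idle time' here.
        flags.append("excess idle time")
    if telemetry["walk_speed_kmh"] < THRESHOLDS["walk_speed_kmh"]:
        flags.append("slow movement")
    return flags

print(performance_flags({"scan_rate_per_hour": 150, "idle_seconds": 420,
                         "walk_speed_kmh": 3.2}))
# ['below scan-rate target', 'excess idle time', 'slow movement']
# -> three automated disciplinary triggers, with no human judgement involved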

Facial Recognition

Additionally, facial recognition can be used to identify workers and monitor them. Lily Turner explains that facial recognition is an identity verification technology that identifies or verifies a person by analysing and comparing patterns in their facial features. This can be done via devices such as cameras, smartphones, and security systems, making it a versatile tool. In banking, government, and law enforcement, facial recognition is used for security and crime prevention; in healthcare, face detection is used to identify patients and access their medical records. Facial recognition therefore has various advantages, such as enhanced security, safety, convenience, and fraud prevention. However, there are also concerns around privacy, the potential for abuse, and security vulnerabilities: because facial recognition collects and stores data, questions arise over how that data is used, stored, and potentially misused. In relation to labour exploitation, there is a fear that facial recognition is misused for unauthorised surveillance, tracking, or profiling of individuals without their consent. Moreover, 74% of US employers use online monitoring and surveillance tools, with 67% using biometric tracking such as facial recognition and fingerprint scans. Such software can be used for purposes including tracking attendance, identifying employees, and tracking workers such as delivery and gig workers. There have been complaints highlighting the harms of facial recognition, such as bias; for example, a woman was denied a promotion based on her race and disability. These examples show that facial recognition can be used for beneficial purposes but can also be used negatively. In relation to labour exploitation, facial recognition could be used as a form of control to track workers, such as those in the gig economy, and to track attendance, with the results then used to justify further exploiting vulnerable workers by holding information, such as attendance records, against them.
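
The matching step Turner describes reduces to comparing numerical templates of faces. The minimal sketch below assumes a face-embedding model has already converted each image into a vector; the tiny vectors shown are invented for illustration (real systems use embeddings of several hundred dimensions).

import math

# Hypothetical sketch of the matching step in facial recognition.
def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def verify(probe: list, enrolled: list, threshold: float = 0.9) -> bool:
    """Clock the worker in if the camera's embedding matches the enrolled one."""
    return cosine_similarity(probe, enrolled) >= threshold

enrolled = [0.1, 0.8, 0.3, 0.5]        # template stored at enrolment
probe    = [0.12, 0.79, 0.31, 0.48]    # template captured at the warehouse door
print(verify(probe, enrolled))          # True -> attendance logged automatically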

Moreover, such excessive monitoring via AI can discourage workers from reporting abuse for fear of retaliation based on their performance data. The TUC AI Bill Project documents how employers may increasingly use AI-led retention tactics that rely on coercive data, creating conditions that increase worker dependence on exploitative labour providers. These systems analyse detailed behavioural data, such as patterns of lateness, fluctuations in productivity, personal circumstances from HR records, or even information obtained from internal communications, to identify workers who are ‘compliant,’ ‘vulnerable,’ or ‘unlikely to leave.’ The AI Bill Project goes on to warn that such profiling can be weaponised in low-paid sectors to selectively offer shifts, overtime, or housing only to those workers who appear least likely to challenge unfair treatment or exploitation.

Algorithmic Coercion

In practice, such profiling can result in forms of algorithmic coercion. Workers who query payslips, raise concerns regarding safety, or show any signs of asserting their rights may find themselves deprioritised, moved to less desirable shifts, or excluded from internal opportunities, all via AI and without any human decision making. Meanwhile, workers who appear dependent, such as migrant workers in debt, requiring accommodation, or flagged as having limited English, may be targeted with retention incentives designed to push them further into exploitative conditions. Such behaviour shows how coercion can be masked behind AI, making it difficult for victims to recognise abuse or identify a person to speak to. New barriers to reporting also arise, as workers may fear that raising concerns will trigger negative algorithmic scoring or job insecurity. Labour providers may use these coercion tactics to exploit workers, deepening labour exploitation while reducing any visible human involvement.
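
A hypothetical sketch of how such deprioritisation could operate with no human in the loop (flags and weights invented): the worker who queried a payslip simply drifts to the bottom of every desirable-shift queue, while the dependent worker is pulled upward.

# Hypothetical shift-allocation sketch illustrating algorithmic deprioritisation.
def shift_priority(worker: dict) -> float:
    priority = worker["performance_score"]
    # Assertive behaviour quietly lowers priority...
    priority -= 5.0 * worker["payslip_queries"]
    priority -= 5.0 * worker["safety_reports"]
    # ...while a marker of dependency raises it (a 'retention incentive').
    if worker["in_employer_debt"]:
        priority += 3.0
    return priority

queue = [
    {"name": "C", "performance_score": 8.0, "payslip_queries": 2,
     "safety_reports": 1, "in_employer_debt": False},
    {"name": "D", "performance_score": 6.0, "payslip_queries": 0,
     "safety_reports": 0, "in_employer_debt": True},
]
queue.sort(key=shift_priority, reverse=True)
print([w["name"] for w in queue])  # ['D', 'C']: the questioner loses the shift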

Overall, the use of AI to profile workers, create algorithmic schedules, and monitor productivity via facial recognition shows signs of algorithmic coercion, a digital tactic of labour exploitation.

 
3.    AI Exploitation Tactics and Negative Impacts

AI can be used as a form of exploitation via wearable technologies, such as smartwatches and trackers, which create risks such as micromanagement. Additionally, the use of and reliance on AI can have various impacts on workers using AI and on those receiving information through it (such as via translation tools).

Tracking Workers and Micromanagement

Wearable technologies can be seen as a form of micromanagement by employers. Case studies from warehouses and the care sector describe wearable trackers being used to monitor movement and speed. Such systems create constant pressure, reduce independence, and can normalise unsafe working conditions. Moreover, AI used in the workplace may pose significant risks of job displacement or changes in job roles, increased surveillance or monitoring of workers, discrimination or unfair treatment, increased workload and pressure, and compromised privacy. This shows that wearable technology can be an exploitation tactic, through the micromanagement and coercion it causes. The use of AI wearable technology to monitor workers carries numerous further risks: dehumanisation of workers, mandatory disclosure of personal body metrics, health-based discrimination, and inaccurate perceptions of productivity based on wrongful analysis of data. Together these suggest that wearable technology is a potent exploitation tactic with various negative impacts on workers.

Translation Tools

In addition to the above, AI has further negative impacts when used, for example, as a translation tool. Several risks arise from using AI for translation. For healthcare professionals these include translating medical consent forms, prescriptions or doses, complex terminology, and health record coding, where errors can have significant detrimental effects on the patient. Similarly, if potential victims of labour exploitation use translation tools when reporting, the translation may not be accurate and can have detrimental effects. Yet workers may have no choice but to rely on AI translation tools due to limited English speaking or understanding.

Reliance on AI

Moreover, it has been found that individuals are increasingly relying on AI: 45% of individuals in the UK rely on AI to complete a task, showing that a large number of people use and depend on it. Yet around 18% of workers report that AI increases workload stress and pressure, and 29% state that AI has increased compliance and privacy risks. This shows mixed views on AI, alongside a growing share of individuals who rely on it. In addition, the IPPR warns that over-reliance on automation in low-paid jobs reduces human oversight, which could allow exploitative behaviour and patterns to grow undetected. The lack of human oversight is also likely to cause confusion over accountability, creating enforcement issues when there is no human on whom to place liability.

Overall, wearable technologies highlight a modern form of exploitation via micromanagement and coercion. Additionally, using AI as a translation tool can have many negative implications and detrimental impacts, yet some vulnerable workers may have no choice but to rely on it due to limited English.
 
4.    AI and Concealing Exploitation

AI is constantly evolving and can be used to conceal exploitation: through document forgery, automated communications, data manipulation, and reputation management.

Automated Communications

Automated communications pose a threat and can be used to conceal exploitation. Automated communications such as AI chatbots and scripted voicebots are used by UK organisations to manage queries and complaints, but their use carries risks when applied to labour environments where exploitation may be hidden. AI systems can make or filter decisions about workers without oversight, creating barriers to human decision makers. AI decision-making systems can also produce inaccurate responses, demonstrating their limited reliability. Moreover, chatbots often fail to manage complex or sensitive queries, close interactions prematurely, or issue templated responses that hide the severity of a complaint. Furthermore, many commercial chatbots are ‘intentionally designed to deflect contacts away from human agents,’ reducing the likelihood that complaints reach someone who is actually able to intervene. There is further evidence of AI assistants mishandling sensitive information, illustrating how automated systems can misinterpret it. Together, this evidence suggests that AI tools used by labour providers could be exploited to delay or divert worker complaints and hide patterns of mistreatment.
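
The deflection pattern can be sketched in a few lines (keywords and replies invented): a serious allegation hits the same templated dead end as a routine query, and escalation to a human is never triggered.

# Hypothetical sketch of a complaint-deflecting chatbot.
TEMPLATES = {
    "pay":        "Thanks! Pay queries are resolved within 28 days.",
    "conditions": "Thanks! Please review the staff handbook.",
}
DEFAULT = "Thanks for contacting us. Your case is now closed."

def respond(message: str) -> str:
    msg = message.lower()
    for keyword, reply in TEMPLATES.items():
        if keyword in msg:
            return reply          # templated reply; no severity assessment
    # Anything unrecognised, however serious, is closed automatically;
    # note there is no route to a human agent at all.
    return DEFAULT

print(respond("My pay is short again this week"))          # templated reply
print(respond("My supervisor confiscated my passport"))    # case closed, unread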

Manipulating Data and Reputation Management

Further to the above, AI is able to manipulate data, which could conceal exploitation via falsified records. For example, AI capable of changing records, such as attendance logs or payroll data, represents an increased risk to efforts to stop worker exploitation, as tampered information will be inaccurate. Additionally, the IPPR report highlights how AI could be used to modify data to hide non-compliance or exploitation. Although no UK case studies currently demonstrate this risk, it remains one to be aware of, as it may become a pattern of concealed exploitation.
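
Conversely, simple integrity techniques could make such manipulation detectable. The sketch below shows one standard approach, offered as illustration rather than as current practice: if each timesheet entry is chained to a hash of the previous one, retroactively editing any record breaks every subsequent hash.

import hashlib, json

# Minimal hash-chain sketch for tamper-evident timesheets (illustrative only).
def chain(records: list) -> list:
    prev = "genesis"
    out = []
    for rec in records:
        digest = hashlib.sha256((prev + json.dumps(rec, sort_keys=True)).encode()).hexdigest()
        out.append({"record": rec, "hash": digest})
        prev = digest
    return out

def verify_chain(chained: list) -> bool:
    prev = "genesis"
    for entry in chained:
        expected = hashlib.sha256(
            (prev + json.dumps(entry["record"], sort_keys=True)).encode()).hexdigest()
        if expected != entry["hash"]:
            return False          # this and all later entries are suspect
        prev = expected
    return True

log = chain([{"worker": "E", "hours": 10}, {"worker": "E", "hours": 12}])
log[0]["record"]["hours"] = 8     # retroactive edit to cut recorded hours
print(verify_chain(log))          # False: the tampering is detectable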

Furthermore, AI can be used as a way to manage reputation, and this method is likely to be used to conceal exploitation. Commentary, albeit relating to consumer businesses, highlights that AI is being used to hide negative online evidence. Some commentators predict that AI-generated reviews are likely to increase, with positive reviews more likely to be created this way than negative ones, leading to potential misrepresentation of what is being sold. This shows that AI can be used to create false narratives and, in the labour context, to conceal exploitation, with detrimental effects. Moreover, businesses are increasingly using AI tools to shape online narratives, monitor brand mentions, and respond to negative comments, enabling them to stay aware of public sentiment. Companies routinely use AI to monitor review platforms, social media, and other online channels to assess sentiment, identify negative posts and reviews, and suppress or counter them. These examples show that AI is being used for reputation management, which can also serve to conceal labour exploitation: the same tools can be used by labour providers to suppress complaints, hide reports of exploitation, or obscure negative reputational signals before they reach regulators.

Overall, AI can be used to conceal exploitation; this is a key risk, as there are various ways AI can be used to hide exploitative practices.
 
5.    Emerging and Future Risks

With AI continuously adapting and evolving, there are various emerging risks associated with labour exploitation.

5.1    AI and Recruitment Scams

AI makes it cheap and quick to produce fake job adverts, applications, and phishing content, increasing the risk of substantial recruitment scams that target vulnerable jobseekers. The IPPR highlights that enforcement bodies, such as Europol, identify AI as a source of convincing scams. AI used in recruitment is therefore an ongoing and future risk, as exploiters target vulnerable workers this way.

5.2    AI and Coercion

Deepfakes are heavily used in sextortion, extortion, and intimidation campaigns, all harms that are reported to be rising. AI-generated imagery can escalate such harms, which have caused growing government concern. These techniques could similarly be used to coerce and silence exploited workers, for example through fabricated images, creating a new form of control.

Moreover, deepfake scams are affecting the UK population: 26% of UK residents have received voice deepfake calls in the last 12 months. Of those targeted, 40% reported being scammed, 35% reported losing money, and 32% had personal information stolen. Additionally, an impersonation of HMRC was the UK’s top detected scam, eliciting panic in victims and resulting in the loss of bank details, personal information, and financial information. This example shows how convincingly such deepfakes are created and how easily vulnerable individuals could fall into exploitative working arrangements. Coercion and scams via deepfakes are therefore an ongoing and increasing risk.

5.3    AI in the Gig Economy

Algorithmic management, where AI allocates tasks, evaluates performance, and determines pay, is becoming the dominant form of control on gig economy platforms. Research also indicates that platforms are increasingly replacing human decision making with AI decision making that dictates work without any other input, highlighting the dominance of AI in the gig economy.

Further evidence suggests that algorithmic control can produce new forms of dominance and dependency for gig workers, predominantly those with insecure migration status or limited digital literacy, as stated in the previous Gig Economy Briefing (July 2025). AI systems can also intensify surveillance and penalise behaviours such as taking rest breaks, declining jobs, or reporting unfair practices (all factors that heighten vulnerability to exploitation). These dynamics present risks: algorithmic processes can conceal excessive working hours, underpayment, and exploitative working conditions. Furthermore, AI sanctions are likely to deter vulnerable workers from raising concerns, reducing the visibility of exploitation within the gig economy.
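
A hypothetical sketch of the penalty dynamics described above (all multipliers invented): declining jobs, resting, or raising a dispute feeds directly back into pay, with no appeal route in the loop.

# Hypothetical gig-platform sketch: acceptance and break behaviour set pay.
def pay_multiplier(worker: dict) -> float:
    m = 1.0
    m -= 0.3 * (1 - worker["acceptance_rate"])   # declining jobs costs money
    if worker["breaks_over_10min_today"] > 1:
        m -= 0.1                                  # resting costs money too
    if worker["raised_dispute"]:
        m -= 0.15                                 # and so does complaining
    return round(max(m, 0.5), 2)

courier = {"acceptance_rate": 0.7, "breaks_over_10min_today": 2,
           "raised_dispute": True}
print(pay_multiplier(courier))  # 0.66 -> pay quietly reduced by a third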


5.4    Cross-Border Exploitation

Scams targeting migrant workers, predominantly in the care sector, are a documented problem in UK industries. Investigations have shown that many workers recruited under visa-sponsorship schemes have ended up exploited via illegal fees, jobs that did not exist when they reached the UK, or withheld work. Further to this, nearly 2,000 sponsorship licences were revoked for rule breaking, confirming the prevalence of these exploitative practices.

Although no specific cases exemplifying the above are publicly available, AI’s increased ability to create convincing fake documentation and impersonate communications indicates that it has the potential to enable such scams. There is a credible risk that exploitative employers could use AI tools to exploit migrant workers.


5.5    Concealment Methods

The AI Playbook warns of “adversarial AI”, where exploitative individuals manipulate AI systems to avoid detection, for example by generating fake documents. Additionally, AI tools significantly lower the technical barriers to phishing, impersonation, and document forgery, making mass fraud more feasible than before.

In regard to labour exploitation, this may mean that typical checks, such as verifying ID and reviewing paperwork, may not be sufficient. AI may also have the potential to fabricate audit trails or generate compliance records.
 
6.    Conclusion

AI is transforming the labour market, offering efficiency while heightening the risk of labour exploitation. Across recruitment, workplace management, and day-to-day operations, AI has the potential to embed systemic bias, enable discriminatory targeting, and facilitate coercion. Various reports and case studies have highlighted how algorithmic screening, automated scheduling, AI surveillance, and wearable AI devices can create environments in which labour exploitation becomes difficult to detect. Exploitative tactics such as document forgery, automated communications, and the manipulation of data further show how AI can be used to conceal labour exploitation, reduce transparency, and obstruct reporting mechanisms.

Looking ahead at the future of AI, various additional risks can be seen, such as recruitment scams, deepfake coercion, algorithmic control in the gig economy, and cross-border exploitation facilitated via concealment methods. Such developments emphasise the importance of remaining aware of AI advancements.

 

 

[Elements of this document have been developed with assistance from Copilot and have been fully reviewed by Bhumi Pandya and Savannah Storm, to ensure accuracy and reliability.]
