Navigating AI In The Workplace
The Rise of AI in Work
AI adoption is accelerating. According to the UK Government’s Public Attitudes to Data & AI report, about six in ten UK adults have used chatbots in the past three months, and over four in ten do so at least monthly for personal or work tasks. EY’s AI Sentiment Index found that 44% of people in the UK report using AI in a professional setting.
This all means that AI is already part of the working landscape, whether you formally permit it or not. And that raises a lot of important questions!
Key Issues & Risks You Should Be Considering
Here are some key areas to keep front of mind:
Job Applications / Candidates Using AI
Candidates are increasingly using AI to draft CVs, cover letters or even responses to assessment tasks. It’s not inherently wrong, but it can blur the line between personal skill and machine assistance.
Think about how much you rely on written submissions when assessing applicants. If it’s vital that a person’s words are their own, consider:
Asking candidates to declare AI use
Using AI-detection tools for written submissions (some tools we’ve come across are Quillbot, Copyleaks and Scribbr)
Shifting focus towards live, practical assessments instead of pre-prepared text
The key is to identify the skills and attributes that matter for your roles, and to design fair, transparent selection methods that assess them without AI interference.
Automated Shortlisting & Decision Systems
Be alert to bias and discrimination risks under the Equality Act 2010. Even major companies have fallen foul of this - Amazon famously had to abandon an AI-based recruitment tool that unintentionally disadvantaged female applicants.
To stay compliant and fair:
Never rely solely on AI for selection or rejection decisions
Carry out human sampling or checks of algorithms for bias
Ensure a real person makes the final call or provides an appeal route
This approach also helps you meet UK GDPR requirements for human oversight in automated decision-making.
Confidential Information
If AI is processing personal data, this must be explained in your privacy notice.
For higher-risk applications (for example, recruitment screening or performance assessment), you’ll also need a Data Protection Impact Assessment (DPIA).
When staff use tools like ChatGPT or other public AI systems, remember that anything they input can become public or be used to train the model. To stay compliant:
Only use approved, secure AI tools
Avoid entering names, personal or client data, or sensitive business information
Anonymise data wherever possible
Adjust AI settings to prevent data from being stored or reused
Also, be mindful of intellectual property - both in terms of what your staff upload and what they create using AI. Ownership can be complex if the AI has contributed to the work.
AI Policies
Every organisation should have a clear AI use policy covering governance, data security, approved tools, and human oversight. This ensures employees know what they can and can’t use AI for, which tools are acceptable, how to use them correctly, and who to ask if they’re unsure.
A good policy also allows for fair disciplinary action if someone misuses AI, since the rules are clearly defined.
Contracts and Employment Terms
Consider whether you need to review your employment contracts and policies to make sure they cover:
Appropriate and prohibited AI use
Data confidentiality and intellectual property ownership
Disciplinary action for AI misuse
Responsibilities for checking AI outputs before use
Accuracy & Quality
AI can be a fantastic tool, and its output is often very convincing, but it can also be inaccurate or biased. Always fact-check and review AI-generated material, and train staff to do the same - especially when it’s client-facing or forms part of a decision-making process.
Grievances, Bullying & Disciplinary Issues
AI can introduce new grounds for complaints - if someone feels an AI system treated them unfairly, or if a colleague misuses AI to create or spread harmful content.
Keep grievance and disciplinary procedures under review, and ensure any AI-linked concerns can be investigated transparently and fairly.
Disciplinary, Dismissal, Redundancy & Automation
If AI or automation affects job roles, follow proper consultation and redundancy procedures. Communicate clearly with your workforce - fear and uncertainty about “AI replacing people” can quickly affect morale.
Misuse of AI may be a disciplinary matter, but only where you’ve set clear expectations through policies, contracts and training.
What You Can Do Now: A Practical Checklist
Audit how AI is already being used across your organisation
Introduce or update a clear AI use policy covering approved tools, data security and human oversight
Review employment contracts, handbooks and privacy notices, and carry out DPIAs for higher-risk uses such as recruitment screening
Ensure a real person makes, or can review, any AI-assisted decision
Train staff to fact-check AI outputs, especially client-facing material
Communicate openly with your workforce about how AI will and won’t be used
How We Can Help
At Paveley HR, we can help you get ahead of the curve. We can support you by:
Drafting or updating AI Use Policies tailored to your sector
Reviewing contracts and employee handbooks to include AI clauses
Providing manager and employee training on responsible AI use
Conducting audits or investigations linked to AI or data concerns
Advising on fair and compliant redundancy or disciplinary processes where AI is involved
Working with you to develop communication plans that help your teams feel informed and confident — not fearful — about the responsible use of AI in your workplace.
Existing clients will be sent further details of this shortly.
If you’re not already a client but would like to have a more in-depth conversation about any of the above, and how we can help your organisation use AI safely, compliantly and with confidence, we’d love to chat.