Is AI Making Workplaces Smarter, or Less Fair?

As tools like ChatGPT, Copilot and Gemini become embedded in everyday work across industries, it’s more important than ever for organisations to use them responsibly, fairly, and ethically.


Just as crucial is the question of trust. As users, we need to know that the information these tools produce comes from reliable and unbiased sources. To understand how that trust is being built in practice, I spoke with Fernando Mourão, SEEK’s Head of Responsible AI. He shared how one of Australia’s leading tech employers is moving beyond the AI hype and putting real focus on fairness, accountability, and transparency in the way it designs and uses AI.


1. Bringing Responsibility into the AI Development Lifecycle

Fernando described his role at SEEK as one deeply embedded in operational culture:


“My role is basically helping to operationalise practices and behaviours that allow SEEK to deliver AI in a safe, reliable, and responsible way.”

SEEK has more than 200 people working on AI projects and has been using AI tools across the organisation for over a decade. As Fernando emphasised, responsible AI isn’t about writing policies and walking away. It’s about embedding responsible practices across people, platforms, and pipelines, from initial design to post-deployment monitoring.


2. Bias Isn’t Just a Data Problem

When I asked whether biased outputs stem purely from the data used, Fernando took a broader view:

“Bias comes from business decisions, algorithms itself, and many other aspects.”


He added: “This kind of impact is particularly important in our domain because AI can often replicate or amplify existing biases, especially in recruitment. And we definitely want to avoid that. We need to anticipate and mitigate those risks wherever possible.”


For SEEK, addressing bias means going beyond clean data: it’s about thoughtful design, governance, and building awareness across teams.


3. Addressing Bias in the Real World

The conversation highlighted some of the proactive steps SEEK is taking:

  • Defining a clear impact framework for its AI tools
  • Implementing systems to detect multilingual and algorithmic bias
  • Creating feedback loops between policy, education, and technology

These efforts reflect the Diversity Council Australia's T.R.E.A.D. Guidelines, which encourage organisations to Team up, Reflect, Educate, Acquire, and Decide when deploying AI in recruitment. The framework is designed to help businesses introduce AI tools with greater awareness and accountability, reducing the risk of reinforcing bias rather than eliminating it.

4. AI Accessibility: A Double-Edged Sword

As AI becomes more accessible to non-technical users, Fernando stressed the risks of rushed adoption:

“AI is now more accessible, so more people are using it. But that also means more risk, because they don’t always understand how to use it properly.”


With ease of use comes greater responsibility. Without the right education and oversight, even well-meaning users can introduce harm or inequity.


“It’s like having a knife in your kitchen. If you don’t know what you’re doing, it can be dangerous.”

This concern is supported by research. Dr Natalie Sheard of the University of Melbourne, in her article “Discrimination by Recruitment Algorithms is a Real Problem,” writes:


“As the use by employers of AI to screen job applicants grows, there are serious risks of discrimination against women, older applicants and minority groups.”


Sheard cautions that until proper regulation is in place, organisations must tread carefully with these technologies, especially in hiring where the impact on people’s lives is direct and long-lasting.


Final Thoughts

The stakes aren’t just operational; they’re human. Whether it’s screening a job applicant, prioritising a customer query, or shaping internal decisions, AI systems are already making choices that affect people’s lives.


If you're working with or implementing AI in your organisation, my advice is: don’t wait for regulation to catch up. Start asking the hard questions now, involve diverse voices early, and treat ethical design as core to the process.


If we’re not intentional in how we build AI, we risk deepening the very biases we hope to eliminate.


As Artificial Intelligence becomes more embedded in the tools we use every day, are we doing enough to ensure it serves people in a meaningful and fair way?


Author: Jonny Church

Principal Recruitment Consultant at PRA
Recruiting all things Data, AI and Architecture



References

Sheard, N. (2025). Discrimination by Recruitment Algorithms is a Real Problem. University of Melbourne – Pursuit. https://pursuit.unimelb.edu.au/articles/discrimination-by-recruitment-algorithms-is-a-real-problem

Diversity Council Australia (2024). Inclusive AI at Work – T.R.E.A.D. Guidelines. https://www.dca.org.au/news/media-releases/dca-releases-guidelines-to-reduce-bias-in-ai-recruitment

Mourão, F. (2025). Interview with the author, Wednesday 2 July 2025. https://www.linkedin.com/in/fernando-mour%C3%A3o-a40a5183/
