Is AI Making Workplaces Smarter, or Less Fair?
As tools like ChatGPT, Copilot and Gemini become embedded in everyday work across industries, it’s more important than ever for organisations to use them responsibly, fairly, and ethically.
Just as crucial is the question of trust. As users, we need to know that the information these tools produce comes from reliable and unbiased sources. To understand how that trust is being built in practice, I spoke with Fernando Mourão, SEEK’s Head of Responsible AI. He shared how one of Australia’s leading tech employers is moving beyond the AI hype and putting real focus on fairness, accountability, and transparency in the way it designs and uses AI.
1. Bringing Responsibility into the AI Development Lifecycle
Fernando described his role at SEEK as one deeply embedded in operational culture:
“My role is basically helping to operationalise practices and behaviours that allow SEEK to deliver AI in a safe, reliable, and responsible way.”
SEEK has more than 200 people working on AI projects and has been using AI tools for over a decade. Fernando highlights that responsible AI isn't about writing policies and walking away. It's about embedding responsible practices across people, platforms, and pipelines, from initial design to post-deployment monitoring.
2. Bias Isn’t Just a Data Problem
When I asked whether biased outputs stem purely from the data used, Fernando took a broader view:
“Bias comes from business decisions, the algorithms themselves, and many other aspects.”
He added: “This kind of impact is particularly important in our domain because AI can often replicate or amplify existing biases, especially in recruitment. And we definitely want to avoid that. We need to anticipate and mitigate those risks wherever possible.”
For SEEK, addressing bias means going beyond clean data: it's about thoughtful design, governance, and building awareness across teams.
3. Addressing Bias in the Real World
The conversation highlighted some of the proactive steps SEEK is taking:
- Defining a clear impact framework for its AI tools
- Implementing systems to detect multilingual and algorithmic bias (a simplified sketch of one such check follows below)
- Creating feedback loops between policy, education, and technology
These efforts reflect the Diversity Council Australia's T.R.E.A.D. Guidelines, which encourage organisations to Team up, Reflect, Educate, Acquire, and Decide when deploying AI in recruitment. The framework is designed to help businesses introduce AI tools with greater awareness and accountability, reducing the risk of reinforcing bias rather than eliminating it.
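To make the bias-detection point more concrete, here is a minimal sketch of what an algorithmic bias check can look like in practice. This is a hypothetical Python illustration, not SEEK's actual tooling: it computes shortlisting rates per demographic group from logged screening outcomes and flags adverse impact using the widely cited "four-fifths rule".

```python
from collections import defaultdict

# Hypothetical screening outcomes: (group, was_shortlisted).
# In a real audit these would come from logged model decisions.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the shortlisting rate for each demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in records:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag adverse impact when a group's rate falls below 80%
    of the highest group's rate (the 'four-fifths rule')."""
    best = max(rates.values())
    return {g: (r / best) >= threshold for g, r in rates.items()}

rates = selection_rates(outcomes)
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(four_fifths_check(rates))   # group_b fails the check here
```

A production system would go much further, slicing outcomes by language, role type, and intersectional attributes, and monitoring these rates continuously after deployment, but the underlying idea of comparing outcome rates across groups is the same.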
4. AI Accessibility: A Double-Edged Sword
As AI becomes more accessible to non-technical users, Fernando stressed the risks of rushed adoption:
“AI is now more accessible, so more people are using it. But that also means more risk, because they don’t always understand how to use it properly.”
With ease of use comes greater responsibility. Without the right education and oversight, even well-meaning users can introduce harm or inequity.
“It’s like having a knife in your kitchen. If you don’t know what you’re doing, it can be dangerous.”
This concern is supported by research. Dr Natalie Sheard of the University of Melbourne, in her article “Discrimination by Recruitment Algorithms is a Real Problem,” writes:
“As the use by employers of AI to screen job applicants grows, there are serious risks of discrimination against women, older applicants and minority groups.”
Sheard cautions that until proper regulation is in place, organisations must tread carefully with these technologies, especially in hiring where the impact on people’s lives is direct and long-lasting.
Final Thoughts
The stakes aren’t just operational; they’re human. Whether it’s screening a job applicant, prioritising a customer query, or shaping internal decisions, AI systems are already making choices that affect people’s lives.
If you're working with or implementing AI in your organisation, my advice is: don’t wait for regulation to catch up. Start asking the hard questions now, involve diverse voices early, and treat ethical design as core to the process.
If we’re not intentional in how we build AI, we risk deepening the very biases we hope to eliminate.
As Artificial Intelligence becomes more embedded in the tools we use every day, are we doing enough to ensure it serves people in a meaningful and fair way?
Author
Principal Recruitment Consultant at PRA
Recruiting all things Data, AI and Architecture
References
Sheard, N. (2025). Discrimination by Recruitment Algorithms is a Real Problem. University of Melbourne – Pursuit. https://pursuit.unimelb.edu.au/articles/discrimination-by-recruitment-algorithms-is-a-real-problem
Diversity Council Australia (2024). Inclusive AI at Work – T.R.E.A.D. Guidelines. https://www.dca.org.au/news/media-releases/dca-releases-guidelines-to-reduce-bias-in-ai-recruitment
Mourão, F. (2025). Conversation with the author, Wednesday 2 July 2025. https://www.linkedin.com/in/fernando-mour%C3%A3o-a40a5183/

