Is AI Making Workplaces Smarter, or Less Fair?

As tools like ChatGPT, Copilot, and Gemini become embedded in everyday work across industries, it’s more important than ever for organisations to use them responsibly, fairly, and ethically.


Just as crucial is the question of trust. As users, we need to know that the information these tools produce comes from reliable and unbiased sources. To understand how that trust is being built in practice, I spoke with Fernando Mourão, SEEK’s Head of Responsible AI. He shared how one of Australia’s leading tech employers is moving beyond the AI hype and putting real focus on fairness, accountability, and transparency in the way it designs and uses AI.


1. Bringing Responsibility into the AI Development Lifecycle

Fernando described his role at SEEK as one deeply embedded in operational culture:


“My role is basically helping to operationalise practices and behaviours that allow SEEK to deliver AI in a safe, reliable, and responsible way.”

SEEK has more than 200 people working on AI projects and has been using AI tools in its products for over a decade. Fernando highlights that responsible AI isn’t about writing policies and walking away; it’s about embedding responsible practices across people, platforms, and pipelines, from initial design to post-deployment monitoring.


2. Bias Isn’t Just a Data Problem

When I asked whether biased outputs stem purely from the data used, Fernando took a broader view:

“Bias comes from business decisions, algorithms itself, and many other aspects.”


He added: “This kind of impact is particularly important in our domain because AI can often replicate or amplify existing biases, especially in recruitment. And we definitely want to avoid that. We need to anticipate and mitigate those risks wherever possible.”


For SEEK, addressing bias means going beyond clean data: it’s about thoughtful design, governance, and building awareness across teams.


3. Addressing Bias in the Real World

The conversation highlighted some of the proactive steps SEEK is taking:

  • Defining a clear impact framework for its AI tools
  • Implementing systems to detect multilingual and algorithmic bias
  • Creating feedback loops between policy, education, and technology

These efforts reflect the Diversity Council Australia's T.R.E.A.D. Guidelines, which encourage organisations to Team up, Reflect, Educate, Acquire, and Decide when deploying AI in recruitment. The framework is designed to help businesses introduce AI tools with greater awareness and accountability, reducing the risk of reinforcing bias rather than eliminating it.
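
To make the idea of detecting algorithmic bias more concrete, here is a minimal sketch of one widely used check, the “four-fifths rule” for disparate impact, which compares selection rates between groups. This is a generic illustration, not SEEK’s actual tooling; the group labels and numbers are hypothetical.

```python
# Illustrative disparate-impact check (the "four-fifths rule").
# A generic sketch, not SEEK's implementation; all figures are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def disparate_impact_ratio(rates: dict) -> float:
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 (the four-fifths rule of thumb) is commonly
    treated as a signal that a screening step warrants closer review.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from an AI screening tool
rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(30, 100),  # 0.30
}

ratio = disparate_impact_ratio(rates)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60
if ratio < 0.8:
    print("Below the four-fifths threshold: review this screening step.")
```

A check like this is deliberately simple; real monitoring would also account for sample size, intersectional groups, and repeated measurement over time.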

4. AI Accessibility: A Double-Edged Sword

As AI becomes more accessible to non-technical users, Fernando stressed the risks of rushed adoption:

“AI is now more accessible, so more people are using it. But that also means more risk, because they don’t always understand how to use it properly.”


With ease of use comes greater responsibility. Without the right education and oversight, even well-meaning users can introduce harm or inequity.


“It’s like having a knife in your kitchen. If you don’t know what you’re doing, it can be dangerous.”

This concern is supported by research. Dr Natalie Sheard of the University of Melbourne, in her article “Discrimination by Recruitment Algorithms is a Real Problem,” writes:


“As the use by employers of AI to screen job applicants grows, there are serious risks of discrimination against women, older applicants and minority groups.”


Sheard cautions that until proper regulation is in place, organisations must tread carefully with these technologies, especially in hiring where the impact on people’s lives is direct and long-lasting.


Final Thoughts

The stakes aren’t just operational; they’re human. Whether it’s screening a job applicant, prioritising a customer query, or shaping internal decisions, AI systems are already making choices that affect people’s lives.


If you're working with or implementing AI in your organisation, my advice is: don’t wait for regulation to catch up. Start asking the hard questions now, involve diverse voices early, and treat ethical design as core to the process.


If we’re not intentional in how we build AI, we risk deepening the very biases we hope to eliminate.


As Artificial Intelligence becomes more embedded in the tools we use every day, are we doing enough to ensure it serves people in a meaningful and fair way?


Author: Jonny Church

Principal Recruitment Consultant at PRA
Recruiting all things Data, AI and Architecture



References

Sheard, N. (2025). Discrimination by Recruitment Algorithms is a Real Problem. University of Melbourne – Pursuit. https://pursuit.unimelb.edu.au/articles/discrimination-by-recruitment-algorithms-is-a-real-problem

Diversity Council Australia (2024). Inclusive AI at Work – T.R.E.A.D. Guidelines. https://www.dca.org.au/news/media-releases/dca-releases-guidelines-to-reduce-bias-in-ai-recruitment

Mourão, F. Conversation with the author, Wednesday 2 July 2025. https://www.linkedin.com/in/fernando-mour%C3%A3o-a40a5183/

