By Emma Weber - AI Transformation Advisor and Author. Emma Weber has spent 23 years in behaviour change and learning transfer, helping organisations globally navigate the human side of AI transformation. Founder of Being Human in the Age of AI.
The Week UK Politicians Finally Agreed on AI - And Why Australia Needs to Catch Up
29 April 2026 · 6 min read · Emma Weber
Fridays have a particular rhythm for me. I've built a rather nifty little agent that goes out each week and checks what my key thought leaders in the AI space have been saying - the press they've had, their latest thinking, their most recent updates. It means that by mid-morning on Friday, I can slow down, make a coffee, and actually read - properly read - the people I most respect in this space. It's one of the parts of my week I most look forward to.
This week, though, I wasn't sure whether to feel thrilled or quietly terrified.
Something Remarkable Happened on Newsnight
For those of you who aren't familiar with British television, Newsnight is about as serious as it gets. It's the BBC programme that politicians actually prepare for. So when a clip surfaced showing politicians from the left and from the right not just appearing in the same room, but genuinely agreeing with each other - I sat up straight.
What they were agreeing on was AI governance. And what was particularly striking was Lord Gove - from the Conservative side - making the case not only for bipartisan agreement within the UK, but for international agreement. The framing was clear: this is not a partisan issue. This is not a national issue. This is a global one that requires grown-up, coordinated governance - and it needs to happen now.
BBC Newsnight, April 2026. Watch the clip on YouTube.
Why This Moment Lands Differently When You've Been in the Research
Over the past several weeks I've been doing a deep dive - into the literature, into conversations with Australian organisations, and into a benchmarking exercise with Trish Uhl, who brings an extraordinary global lens to everything she touches. Together we've been looking at where Australian organisations sit against what's happening worldwide, and what the trajectory actually looks like from the inside.
What we're seeing is both hopeful and sobering. AI is not primarily a technical challenge. The research makes that abundantly clear. What's actually determining success or failure in organisations right now is how people operate within them - and most critically, how leaders choose to lead through this transformation. The technology is almost the easy part.
I've come to believe this deeply, even if it doesn't show up as a clean data point: AI doesn't change an organisation's culture. It amplifies it. If your culture is one of openness, curiosity, and psychological safety - AI accelerates all of that. If your culture has siloes, fear, or a blame dynamic running underneath the surface - AI will surface and amplify those things too.
AI doesn't change an organisation's culture. It amplifies it.
Which, paradoxically, is part of why I'm leaning into excitement rather than anxiety. Because if AI is going to hold up a mirror to the way we operate - that's actually an opportunity. An opportunity to reset. To look honestly at the culture we've built, and decide whether it's the one we want to amplify. Not every organisation will see it that way. But the ones that do? They will move differently through this.
Being Human in the Age of AI
Since the sale of Lever, my focus has been on one central question: what does it mean to be human in the age of AI? It's the lens through which I look at almost everything now - and it turns out, it's a question that doesn't sit neatly in one box.
It's personal - how do we hold on to our sense of self, our creativity, our relationships, in a world where the line between human and machine output is increasingly blurred? It's professional - how do organisations retain the human skills, the judgement, the irreplaceable things that no AI can replicate? And it's societal - and you truly cannot talk about any of this without lifting your eyes to the bigger picture.
That's what made the Newsnight clip so significant for me. Here were politicians - not tech people, not researchers - publicly naming that the societal dimension of AI requires governance, requires coordination, and requires urgency. Some of the most vocal calls for this kind of framework, interestingly, are coming from inside the AI industry itself. The leaders of some of the biggest AI companies have been among the most consistent voices saying: we need guardrails. We need international cooperation. We need a serious conversation about what kind of future we're building.
The Australian Conversation We Aren't Having
Here's my honest frustration. One of the things I've found most difficult over the past few weeks is that in Australia, we aren't really having this conversation - not at the level it needs to happen. Awareness among C-suite leaders and boards is lower than it should be. Politicians aren't yet finding that cross-party consensus. And in many organisations, AI is either being banned quietly, or adopted chaotically, with very little in between.
Here's what makes this genuinely puzzling: it's not that Australia doesn't have the appetite or the courage to lead on technology legislation. Quite the opposite. In November 2024, Australia became the first country in the world to ban social media for children under 16, with the Online Safety Amendment (Social Media Minimum Age) Act coming into force in December 2025. We went first. More than a dozen countries have since enacted or are actively legislating similar bans - Brazil, Indonesia, France, Portugal and Malaysia among the former; Denmark, Norway and Spain among the latter. The UK, Germany and Italy are in the conversation too.
So the question isn't whether Australia is capable of bold, principled action on technology and its impacts on people. We've already shown we are. The question is: why haven't we brought that same energy to AI?
The pace of AI development does not leave a lot of time for a measured wait-and-see approach. The trajectory is clear, and the organisations and nations that are moving deliberately, and moving now, will be the ones that shape what this looks like for everyone else. Australia has already shown it knows how to go first. It's time to do it again.
What I'm Committing to Do About It
Watching that Newsnight clip has galvanised me.
I'm going to be doing more - much more - to raise awareness among C-suite executives and boards here in Australia. Not fear-based awareness. Not doom-scrolling dressed up as a briefing. But honest, grounded, practical awareness of what is happening, what it means for organisations, and what good leadership through this transition actually looks like.
And I'm going to keep building the practical tools and frameworks that help learning professionals and HR leaders bring their people through this - gently, kindly, and with real positivity. Because the research is clear on one thing: the organisations that succeed aren't the ones with the cleverest AI strategy. They're the ones where people feel safe to learn, safe to experiment, and supported by leadership that has taken the time to understand what they're asking of them.
There is so much opportunity in this moment. I genuinely believe that. But seizing it requires that we talk about it - honestly, openly, and with enough urgency to match the pace of what's actually happening out there.
A question I keep returning to: in your organisation, who actually owns the AI conversation? Is it genuinely shared across your C-suite - your CEO speaking openly about the purpose and values driving your strategy, your HR and people leaders thinking through the human implications - or has it quietly been handed off as a technology problem for the CTO to solve alone?
Because AI was never a technology problem. And in my experience, that distinction - whether your CEO is in the room talking about the why, not just the what - is one of the clearest indicators of whether a transformation will land well, or quietly stall.
I'd love to know your thoughts. Are you seeing this conversation happen in your organisation? Is your board talking about AI governance? And who in your C-suite is leading it? Drop me a message or share this post - because the more of us who are talking about it, the more likely we are to get the policy environment, the leadership awareness, and the human-centred approach that this moment actually deserves.
And a final thought - I'll be starting a thoughtful newsletter soon. If you'd like me to keep you posted with my thinking, let me know. I'll always have an unsubscribe link and will handle your data appropriately.
Sources
- Australia's world-first ban: Online Safety Amendment (Social Media Minimum Age) Act 2024, passed 29 November 2024, came into force 10 December 2025. Source: Bloomberg / NPR / Wikipedia
- Countries with enacted or near-enacted bans (as of April 2026): Brazil, Indonesia, France, Portugal, Malaysia
- Countries with active legislation in progress: Denmark, Norway, Spain, Slovenia, Greece
- Countries considering: UK, Germany, Italy (and others). Full tracker: TechCrunch, 8 April 2026
Ready to go further?
Most organisations think they're further ahead than they are. Some are further ahead than they think. The Assessment tells you which.
Take the AI Transformation Readiness Assessment
Around 12 minutes. No login required. We'll send your results.
Frequently asked questions
Is Australia behind other countries on AI governance?
Yes. According to Emma Weber, who has been benchmarking Australian organisations against global peers alongside expert Trish Uhl, awareness of AI governance among Australian C-suite leaders and boards is lower than it should be. Politicians have not yet achieved the cross-party consensus on AI that the UK demonstrated in April 2026. In many Australian organisations, AI is either being banned quietly or adopted chaotically, with very little deliberate governance in between.
What happened on BBC Newsnight regarding AI governance in 2026?
In April 2026, a clip from BBC Newsnight showed politicians from both the left and the right of UK politics not merely appearing together, but genuinely agreeing on AI governance. Notably, Lord Gove - from the Conservative side - made the case for bipartisan agreement within the UK and, further, for international coordination on AI governance. The framing was explicit: AI governance is not a partisan issue, not a national issue, but a global one requiring grown-up, coordinated action - now.
How does AI affect organisational culture?
AI does not change an organisation's culture - it amplifies it. If an organisation's culture is characterised by openness, curiosity, and psychological safety, AI accelerates all of those qualities. If the culture has siloes, fear, or a blame dynamic running beneath the surface, AI will surface and amplify those too. This makes the human and cultural dimensions of AI transformation as strategically significant as the technology itself.
Who should own the AI conversation in an organisation's C-suite?
AI strategy should be genuinely shared across the C-suite - not handed off as a technology problem for the CTO to solve alone. The CEO needs to be in the room speaking openly about the purpose and values driving the organisation's AI direction. HR and people leaders need to be thinking through the human implications from the start. Whether the CEO is present in the AI conversation - talking about the why, not just the what - is one of the clearest early indicators of whether an AI transformation will land well or quietly stall.
Was Australia the first country to ban social media for children?
Yes. In November 2024, Australia became the first country in the world to ban social media for children under 16, through the Online Safety Amendment (Social Media Minimum Age) Act 2024, which came into force in December 2025. More than a dozen countries - including Brazil, Indonesia, France, Portugal and Malaysia - have since enacted or are actively legislating similar bans, with Denmark, Norway, Spain, Slovenia and Greece progressing legislation, and the UK, Germany and Italy considering similar moves.