Australia's AI Productivity Gains Depend on Building Trust, Not Avoiding Rules
The Business Council of Australia calls artificial intelligence "the single greatest opportunity" in a generation to lift productivity. The Productivity Commission frames AI and other digital technologies as "modern engines of economic growth" that could add billions to GDP. These promises are seductive, but the Commission's own interim report reveals a notable blind spot. Its Draft Recommendation 1.3 advises pausing steps to implement mandatory guardrails for high-risk AI until reviews of existing frameworks are complete. On the surface, this seems cautious and pragmatic. In reality, it carries significant risks and feeds a narrative lacking nuance: that we must either regulate or innovate.
Pausing guardrails is not neutral. It means leaving Australians exposed while powerful AI systems are deployed in workplaces, classrooms, healthcare, and government services. It places far too heavy a burden on existing regulatory frameworks in the face of AI-specific risks, from algorithmic bias to deepfakes, which don't fit neatly within the current legislative environment. It sets an unreasonably high bar for action by saying new rules should apply only where "technology-neutral regulation is not possible". And it risks entrenching reliance on embedded systems before Australia defines its own standards, making protections harder and costlier to retrofit later.
This recommendation follows a well-worn path of pitting regulation against innovation, but that framing misses a critical nuance: public confidence and trust are essential ingredients in unlocking the full productivity power of AI. Trust can erode if people see governments waiting to act while harms accumulate. Businesses, too, need certainty: clear rules attract investment, while uncertainty drives capital elsewhere. By pausing guardrails, Australia signals to both citizens and markets that speed takes priority over safety. That is a recipe for an adoption curve as shaky as the trust it is built on, undermining potential productivity gains.
Good governance should not accept an innovation-or-regulation binary. It is about achieving both together: embedding guardrails early and proportionately, and building public confidence without smothering innovation.
Trust and Innovation: Breaking the False Binary
Research consistently shows that trust drives technology adoption more powerfully than technical capability alone. Trust, dubbed by KPMG the "defining currency of adoption", is essential to any ambition for AI-fuelled productivity gains, especially given current public sentiment: only 36% of Australians are "willing to trust AI", and 46% believe the "risks of AI outweigh the benefits". International experience shows how the trust dividend can pay off. The EU Artificial Intelligence Act (2024), the world's first comprehensive law governing AI, is credited with encouraging responsible innovation and nurturing trust among businesses and consumers, both of which are essential for widespread market growth of AI. Concrete data supports this hypothesis: despite initial fears of over-regulation, European AI startups raised about $13 billion in 2024, a 22% increase year-on-year, suggesting that a trusted, well-regulated market can attract capital rather than repel it.
Without trustworthiness, both real and perceived, productivity gains can evaporate into additional oversight costs. This is why the innovation-versus-regulation framing is deeply counterproductive. Businesses hesitate to fully integrate tools they can't rely on. Consumers resist adoption if safety and privacy feel uncertain. Far from being cost-free, the absence of clear guardrails can slow productivity growth by forcing organisations into endless oversight and crisis management. The real challenge isn't whether to choose innovation or regulation; it's how to achieve both together.
Lessons from Success and Failure
Australia's track record provides clear lessons about the relationship between regulation, trust, and innovation outcomes.
Regulation done well enables innovation to flourish. Aviation safety standards are a classic case: far from slowing the industry, rigorous oversight has made air travel one of the safest and most widely used forms of transport, enabling a thriving industry built on public confidence. Similarly, Australia's Therapeutic Goods Administration enforces some of the strictest regulations in the world, helping emerging technologies such as biotech become a high-value and trusted sector.
In contrast, when regulation has lagged, Australians have been left exposed. The avoidance of prescriptive oversight in the finance industry culminated in the 2019 Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, which revealed systemic failures and imposed costs in the billions. Similar patterns have emerged in telecommunications and in vocational education and training, where light-touch approaches created short-term flexibility for providers but ultimately delivered greater long-term costs, both financial and social.
Global Stakes and Choices
Australia’s decisions on AI governance will be noticed internationally, and different countries are already demonstrating how the trust-regulation balance plays out in practice.
The United States' lighter, market-led approach offers a cautionary contrast. The federal government has left most oversight to the market, resulting in fragmented, reactive responses to disinformation, deepfakes, and AI-enabled fraud. Trust has suffered, and businesses face patchwork regulation at the state level instead of predictable national standards.
Japan's "Society 5.0" strategy pursues a human-centric model that embeds ethics while maintaining competitiveness, explicitly rejecting the innovation-versus-regulation binary. This initiative shows how countries can reflect national values while fostering innovation.
If Australia defaults to being a "regulation taker," we risk adopting frameworks built for other economies and contexts. We'll miss the chance to shape global norms and demonstrate that innovation and regulation can be complementary rather than competing forces.
Building Australia's Approach
Australia has already started laying important foundations that recognise the trust-regulation connection. We have policies for responsible AI use in government, a national framework for AI assurance coordinating federal and state approaches, and guidelines for generative AI in schools. These demonstrate that we can embrace innovation while building the trust infrastructure that makes it sustainable.
But leadership requires more than foundations. It means making deliberate choices about which AI uses need urgent safeguards, how we align with or diverge from global standards, and what trust infrastructure (literacy, transparency, and accountability mechanisms) is needed to give both citizens and businesses confidence.
The Commission's call to pause regulatory development until more analysis is complete may sound prudent, but delays aren't cost-free. Australia has the democratic institutions and the incentives to ensure technology serves the public interest. Clear, timely regulation and governance build trust, and trust accelerates adoption. A balanced approach that pursues regulation and innovation together is well within our national interest and our capabilities.
By Taylor Dee Hawkins