Fragile or Future-ready? The Decisions Shaping Australia’s AI Ambitions
Posted: Tuesday, May 12



Introduction

Senior data leaders are being asked to navigate a level of complexity that feels new, even by recent standards. Across Australia, organisations are under pressure to turn AI ambition into real outcomes, often while working with fragile data foundations that were never designed for this level of demand.

Other factors compound that pressure. Economic conditions remain uncertain, budgets are under scrutiny, supply chains continue to shift, and expectations from shareholders, customers and employees keep rising. At the same time, timelines are shrinking, with leaders expected to deliver results faster than ever.

It’s no surprise, then, that Gartner predicts over 40% of agentic AI projects will be cancelled by the end of 2027, citing rising costs, unclear business value and inadequate risk controls.

In this environment, it’s easy to prioritise short-term fixes that reduce cost and complexity. But those decisions often create problems further down the line. As real-time data streaming becomes the critical nervous system of the modern enterprise, leaders need to ask whether their infrastructure can evolve with the business, with resilience, governance and AI readiness at the core, or whether today’s choices are leaving the business exposed over time.

The real question is no longer how quickly organisations can move, but whether what they are building today will hold up tomorrow.

When Short-term Decisions Create Long-term Risk

When budgets are tight, it’s natural to prioritise solutions that promise immediate efficiency. Lower upfront costs or “good enough” functionality can feel like practical choices in a high-pressure environment. But the more important question is what a solution costs to run and scale over time. Can it integrate across environments? Can it handle new use cases without added complexity?

What often gets missed are the ongoing demands, from the people required to manage the system to the effort involved in maintaining performance or troubleshooting when something breaks. These costs don’t show up early, but they accumulate quickly.

Solutions that appear simple at the outset rarely stay that way. As requirements grow, gaps start to appear, whether in monitoring, governance or control. Teams are then forced to compensate, layering in third-party tools, building workarounds, and diverting resources to manage infrastructure rather than delivering value. What initially looked cost-effective eventually becomes unwieldy and difficult to scale.

This is where leaders often get caught out, as the question quickly becomes: “How many people will we need to maintain it?”

Designing Systems That Can Handle Constant Change

In an environment where disruption is constant, from regulatory change to unpredictable demand, systems are expected to adapt in real time. Financial institutions need to meet stricter requirements for availability and data sovereignty. Retailers and logistics providers must absorb sudden spikes in purchases without disrupting fulfilment. Telecoms have to guarantee uptime for millions of customers, even as network demand fluctuates.

In each case, resilience is not a feature that can be bolted on. It needs to be designed into the foundation. That means choosing platforms that can connect with and operate consistently across environments, scale without complexity, and adapt as requirements change.

The risk is choosing tools that cannot keep up with what comes next, setting your business up to fall behind.

Why AI Outcomes Depend on Your Data Foundations

Across Australia, organisations are investing heavily in AI to improve decision-making, automate processes and enhance customer experiences. But the effectiveness of these systems depends entirely on the quality and availability of the data behind them.

AI is only as good as the data it consumes. When data is fragmented or poorly governed, AI cannot deliver on its promise: outcomes are undermined, and reputational and regulatory risk grows. This is why many organisations are discovering that the barrier to AI success is often not the model itself, but the data behind it. Businesses need to build governance, lineage and security into their data foundations from the start.

Real-world use cases make this especially clear. Real-time fraud detection depends on secure, low-latency streams. In retail, personalisation and recommendation engines depend on accurate, unified customer data. In manufacturing, supply chain optimisation hinges on reliable, cross-border data flows.

In each case, the value of AI is directly tied to how quickly and reliably data can move across the organisation. Organisations seeing meaningful returns from AI are not just investing in models; they are investing in data architectures that ensure information is continuously available, trusted and ready to use. Without that foundation, even the most advanced AI capabilities remain limited.

A Leadership Decision, Not Just a Technology One

Decisions about data infrastructure are, at their core, leadership decisions. They determine how teams allocate resources and manage risk. They determine whether systems can scale with demand and deliver on the promise of AI.

In conversation with local technology leaders, the same question comes up again and again: what are you building towards? If the priority is short-term efficiency, it’s easy to build systems that solve today’s problems but struggle to support what you need tomorrow. If the focus is on adaptability and long-term capability, particularly in the age of AI, those decisions tend to look very different.

Ultimately, the pressure to move quickly is unlikely to ease. But in an environment defined by rapid change, “good enough” rarely remains good enough for long. The organisations pulling ahead are not those making the fastest or cheapest decisions. They are the ones making deliberate investments in adaptability, real-time data capabilities and the capacity to innovate: foundations that can support both today’s priorities and tomorrow’s demands.

Andrew Foo
Andrew Foo has 20+ years of experience in information technology, spanning business strategy, services, delivery, education, pre-sales and product management. After spending most of his early career as a practitioner implementing and delivering BI applications and systems, Andrew spent a large part of his career working for global software companies (Rubrik, Cloudera, Hortonworks, IBM, SAS, Cognos), helping clients identify opportunities to use data and technology effectively to create new intelligence. At Rubrik, Andrew led a passionate team of sales engineers helping organisations become cyber-resilient and protect against modern-day threats and ransomware. His professional experience with enterprise software spans Cyber Resiliency, Data Protection, Big Data and Business Analytics, along with Information Management & Governance disciplines across people, process and technology. Consulting and architecture advisory form a key part of Andrew’s expertise, with involvement in major private-sector and government customer accounts, helping them define their data and information security strategies. Andrew is a Certified IBM IT Architect, an Open Group Master Certified IT Architect (Information Architecture specialisation) and has held a number of technical certifications with RedHat, Microsoft and Cognos. Prior to his current role at Confluent, Andrew held APJ leadership roles in Sales Engineering and Professional Services at Rubrik, Cloudera and Hortonworks, and was Chief & Executive Architect at IBM’s Big Data & Analytics Services division.