Public, Profiled, and Unprotected – Event Recap
Posted: Wednesday, Oct 29
  • KBI.Media
Cam retains oversight of the editorial content for KBI.Media and finds time to write every once in a while too. With a keen interest in geo-political goings-on, Cam oversees our new Global News section. And being an ex-techy, he puts his (very) atrophied knowledge to use, giving a helping hand to shape and manage our Technical section on occasion too.


Introduction

When the panel took the stage for “Public, profiled, and unprotected: How OSINT, AI and Geolocation blur the line between privacy and surveillance” at Cyber Con 2025 on Thursday afternoon, the tone was set for a fascinating, multidisciplinary conversation.

The Three Pillars – OSINT, AI, and Legal Privacy

The talk’s core focus was the profound change AI has brought to the intersection of privacy and surveillance, especially through the lens of open-source intelligence (OSINT). The panel represented three expert domains: Carter Smith (OSINT and human security), Jordan Moshcovitis (physics and AI), and Ruarri Fairweather (privacy law and regulation). Moderator Newnham steered the talk, drawing out both technical detail and the wider societal implications.

“OSINT Is Two Parts” Carter Smith on the Discipline’s Evolution

Smith grounded listeners in OSINT’s evolution: “OSINT [the] collection, analysis, and dissemination of freely available information. I like to classify OSINT as two parts… technical OSINT, which is mapping out the physical or the technical attack surface… and the human attack surface: people, their relationships, addresses, accounts.” He pointed out that, while OSINT has long served law enforcement and academia, it’s now “pretty prolific in our lives”, often practiced without people realising it. As Smith wryly observed, “Sometimes call it stalking, but I’ll let you vouch me in your own terms.”

“A Paradigm Shift” Jordan Moshcovitis Breaks Down AI’s Real Impact

Jordan Moshcovitis, the physicist turned AI specialist, said, “When we say AI, the first thing we think about is general… talking to ChatGPT, Gemini… at its core, it’s knowledge, words, meaning, context, understanding, turned into geometry.” He described a “phase transition” in data handling, a leap from simple database searches to the ability for models to correlate complex, disparate bits of information across text, images, voice, and more. “We’re no longer trying to search for big data that are from rows in the table… now looking for correlations in these vast spaces of embedded vectors.”

This networking of previously “atomic” data points means identity can be pieced together faster and more convincingly than ever. His key takeaway was that pattern recognition has become the fundamental technical capability driving deeper meaning: the intersection of creating patterns, observing patterns, and identity.
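Moshcovitis’s point about “correlations in these vast spaces of embedded vectors” can be made concrete with a minimal sketch. Everything here is illustrative: the three-dimensional vectors stand in for real embedding models (which produce hundreds or thousands of dimensions), and the record names are invented.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity: how closely aligned two embedding vectors are."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Toy "embeddings" of disparate data points (hypothetical values)
records = {
    "forum_post": [0.90, 0.10, 0.30],
    "photo_caption": [0.85, 0.15, 0.35],
    "unrelated_review": [0.10, 0.90, 0.20],
}

query = [0.88, 0.12, 0.32]  # embedding of a newly observed data point
ranked = sorted(records, key=lambda k: cosine(records[k], query), reverse=True)
print(ranked)  # most correlated records first
```

A keyword search over database rows would treat these records as unrelated; in embedding space, the forum post and photo caption sit close to the query while the review sits far away, which is the “phase transition” in data handling the panel described.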

Privacy Law in Crisis – Ruarri Fairweather on Regulation

Fairweather had perhaps the most pragmatic take, outlining the regulatory challenges we are seeing, “Privacy law has gone through quite a transformation… Fast forward to where we are today and it absolutely is something that we’re very concerned about.”

From his view, law is lagging behind technology, a black-and-white system faced with endless shades of grey.

“We’re at a point that potentially the technology or the information is outstripping the paper reform,” he warned. Recent reforms (like Australia’s new tort for privacy intrusion) are reactive, not proactive: “What we’re absolutely learning and seeing is that you can recreate that data, in fact go much further. And that’s going to be really hard because… some information that was online… is absolutely being trained into models that can then be abused.”

What’s Changed?

The panel tracked how manual OSINT, along with other privacy-eroding practices, has morphed into a parallelised, automated process now driven by AI.

Smith recalled, “Five years ago, most of it was sort of something specialists… bounded by how fast someone can read, click through links. There was quite a low maturity for automated tooling… and it was really expensive.”

Today, automated agents can, for instance, create “knowledge graphs” and social maps, amalgamating text, image, and metadata to reveal connections even the subject is unaware of.
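As a minimal illustration of such a knowledge graph (the records and attribute values below are invented), linking any two records that share an attribute value is enough to chain separate sources into one profile:

```python
from collections import defaultdict

# Hypothetical data points scraped from three different public sources
records = [
    {"username": "sam_k", "email": "sam@example.com"},  # forum profile
    {"email": "sam@example.com", "city": "Melbourne"},  # photo site
    {"username": "sam_k", "phone": "0400 000 000"},     # review site
]

# Toy knowledge graph: add an edge wherever two records share an attribute
graph = defaultdict(set)
for i, a in enumerate(records):
    for j in range(i + 1, len(records)):
        if set(a.items()) & set(records[j].items()):
            graph[i].add(j)
            graph[j].add(i)

print(dict(graph))  # {0: {1, 2}, 1: {0}, 2: {0}}
```

No single record exposes much on its own, but the connected component ties the city and phone number to one identity: the aggregation effect the panel kept returning to.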

As Moshcovitis put it, “The amount of data we have available and every data point that we add on the Internet ourselves adds more fuel to the dumpster fire… Forward privacy is going to be very difficult.”

New Risks of Identity, Aggregation, and Surveillance

One striking anecdote came from the compere, Newnham, who described investigators unmasking an online predator by tracking peculiar phraseology across forums, work that took months by hand but could now take minutes.
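The technique Newnham described, matching a distinctive turn of phrase across accounts, reduces to a very simple search once the text has been collected (the posts and usernames below are invented):

```python
# Toy corpus: posts gathered from different forums (all invented)
posts = {
    "forum_a/user_x": "to be fair and square about it, the match was dull",
    "forum_b/anon42": "great product, would buy again",
    "forum_c/suspect": "to be fair and square about it, I disagree entirely",
}

marker = "fair and square about it"  # the peculiar phrase being tracked
hits = [author for author, text in posts.items() if marker in text]
print(hits)  # ['forum_a/user_x', 'forum_c/suspect']
```

The hard part was never the matching; it was reading months of forum posts to spot the marker in the first place. Automated pipelines that surface candidate phrases and scan corpora at scale are exactly the speed-up the panel flagged.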

Smith illustrated the loss of “practical obscurity”, the idea that some information is safe simply because it’s hard to find: “No matter how obscure you can be in future… it’s not binary anymore. It’s not true or false. You know, how close are you to being close to this?”

Fairweather warned, “It isn’t as black and white as we would like it to be. It’s no longer simply ‘this is public and this is private’. A lot of the information is now coexisting.”

The democratisation of surveillance tools once reserved for intelligence agencies, now available to anyone with a laptop, means “the barrier to entry, the cost to entry is significantly lower,” Smith said. “Nation states used to have this capability. Now it’s available to lots of people.”

Societal Consequences and Looking Ahead

Asked about societal impact, Fairweather was sobering, “We’re getting back to potentially [a new] ‘script kiddy’ time. It’s going to make it more difficult for professionals… What we’re absolutely learning and seeing is that you can recreate that data, [but] go much further.” Regulation must evolve, he said, “We absolutely need the regulatory focus to start to accommodate and recognise that it’s not individual data points… We need to look at it from much broader perspective…” meaning that otherwise innocuous data, when combined, can culminate in a whole that is very much greater than the sum of its parts in terms of identity.

Smith urged second-order thinking, “It’s a shift from what photos am I posting, where am I posting to what am I actually posting? Do I need to post? Do I need to leave this review?” to essentially limit the amount of data points that can be strung together.

Moshcovitis drove home the importance of hygiene, not just in what we post but also in how we interact with chat models, because “…what you posted in 2011 may end up in the logs on the training,” meaning that forgotten shared data points can, and likely will, come back to haunt you, even in seemingly banal ways.

Can We Dilute Our Data?

An audience member asked about using AI themselves to “dilute” their online identity. Smith mused, “Feeding in false information.” Moshcovitis cautioned, “Make sure that you’re also not leaving a fingerprint in the way you use that… everything leaves a fingerprint.” Fairweather noted that models are “already trained” on vast datasets; dilution may have limited effect unless models are retrained or datasets corrected.

Final Takeaways

  • AI turns “atoms” of data into patterns of identity, almost regardless of intent
  • Regulations and laws lag behind the pace of technological change
  • Practical obscurity, and the private/public divide, are dead
  • Surveillance is democratised
  • Personal security hygiene is critical. Post only what you must; every bit can be aggregated
  • Technical and policy solutions are needed. But for now, vigilance and restraint are your best defence

The panel left no doubt: navigating cyber security and privacy in the era of AI demands new ways of thinking, acting, and regulating at every level of society, and collective action in the form of laws and regulation will be needed to retain even some semblance of protection.
