Exploring Opportunities, Fears, and the Future with AI
Posted: Tuesday, Jul 16
Karissa Breen, crowned a LinkedIn ‘Top Voice in Technology’, is more commonly known as KB. A serial entrepreneur, she co-founded the TMFE Group, a holding company and consortium of several cybersecurity-related businesses, including an industry-leading media platform, a marketing agency, a content production studio, and the executive headhunting firm MercSec. KBI.Media is an independent and agnostic global cybersecurity media company, with KB at the helm of the journalism division. As a cybersecurity investigative journalist, KB hosts her flagship podcast, KBKast, interviewing cybersecurity practitioners around the globe on security and the problems business executives face. It has been downloaded in 65 countries with more than 300K downloads globally, influencing billions in cyber budgets. KB asks hard questions and gets real answers from her guests, providing a unique, uncoloured position on the always evolving landscape of cybersecurity. As producer and host of the streaming show 2Fa.tv, she sits down with experts to demystify the world of cybersecurity and provide genuine insight to business executives on the downstream impacts that cybersecurity advancements and events have on our wider world.


Claudionor Coelho, Chief AI Officer at Zscaler and a member of the World Economic Forum AI Group, chatted with me at Zenith Live about large language models (LLMs) and the potential ethical challenges surrounding their use.

Zscaler recently announced the launch of its Copilot, and our conversation covered the technologies used in its creation, as well as general perceptions of AI among the broader public and the fears associated with it.

Claudionor Coelho commented,

“People are fearful of the unknown. They don't know what's happening is going to happen in the future.”

Companies and people need to be aware that AI, particularly large language models such as GPT-3, is not a standalone solution but rather an aid that requires human direction. Large language models, by themselves, possess limitations and are prone to generating misleading information, or what Coelho refers to as "hallucination."

The need for careful monitoring and human oversight when utilising AI technologies can be overlooked or left unaddressed entirely, though rarely by intention.

Coelho also highlighted the potential applications of AI in various fields, such as healthcare and drug discovery. The use of graph neural networks and deep reinforcement learning for new drug discovery, as demonstrated by companies like Google DeepMind, sheds light on promising developments in the industry.

In addressing concerns about potential job displacement and the new skill requirements introduced by AI's advancements, Coelho stresses the need to embrace the technology rather than fear it. He explains that AI is not intended to replace humans but instead to act as an "exoskeleton," enhancing human capabilities and streamlining complex processes.

“The way that I think about this is that AI, large language models, and deep learning in general, they're going to become your exoskeleton.”

The ethical dilemma continues, especially as mainstream media outlets express concerns over potential misuse of their content when large language models are trained on publicly available written material. The risk of inaccuracies and misinformation spreading when AI is trained on unverified content raises critical questions about the ethical guidelines and the reliability of the training sets used for large language models.

Taking a closer look at the training sets and the frameworks used to determine the quality of content becomes crucial in addressing the potential spread of misinformation.

Coelho went on to say,

“…they need to be trained on very well written documents, and you cannot train them without well written documents. And some people are saying that we're going to run out of training text to train the next generation of larger language models by the end of this year or next year.”

With uncertainties surrounding the parameters and guidelines for qualifying content as high or low quality, a call for increased transparency and regulation in AI training activities is warranted.

Coelho concluded,

“You need a lot of people to actually evaluate and to basically tag the text as being ‘this is high quality’, ‘this is not high quality’.”
