AI Ethics: Developing AI models with intent, transparency and diversity
Posted: Thursday, Nov 28

Introduction

As humans, we all walk through the world with a certain level of unconscious bias. It makes sense, then, that anything we 'invent' is inherently riddled with this bias, whether we realise it or not. It's no surprise that the development of AI has come with a laundry list of ethical concerns.

Recently I spoke with Vini Cardoso, Field CTO of Cloudera, as part of Ticker's 'Tech Edge' series to understand some of the key ethical issues surrounding the development of AI and how organisations can build AI models based on core ethical principles. Vini told me ethical AI means "integrating the core ethical principles into the development of AI systems so we can ensure that it benefits society in a fair and responsible way." As AI becomes more pervasive, Vini advised, we have to practically address risks around privacy, bias and any other unintended consequences we haven't yet anticipated.

We've already seen the creation of a Select Committee on Adopting Artificial Intelligence in Australia amid plenty of concern from everyday Australians figuring out how to navigate this new, game-changing technology. Given that IDC predicts APAC alone will spend $49.2 billion on AI by 2026, creating AI models with ethics front of mind is going to be critical.

Developing Ethical AI Models

According to Vini, there are three ways organisations can build and utilise AI models ethically.

The first is to have ethical guidelines in place that clearly outline who is accountable for the AI systems. This includes creating governance processes in the development of AI models, and specifying what transparency measures need to be taken to ensure organisations can see what's running behind the scenes. These guidelines are critical because AI systems need to be explainable, not black boxes where no one knows how an outcome was generated. Explainability also builds trust in AI outputs and reduces the need for constant human oversight.
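
To make that transparency idea concrete, here is a minimal sketch of one such measure: measuring which features actually drive a model's predictions so reviewers can see what's running behind the scenes. It assumes scikit-learn is available, and the dataset and feature names are hypothetical placeholders, not anything from the interview.

```python
# A minimal sketch of one transparency measure: reporting which features
# drive a model's predictions, so the system is explainable rather than
# a black box. Assumes scikit-learn; the feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["tenure", "income", "usage", "region_code", "age"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} importance: {score:.3f}")
```

A report like this, logged alongside each model release, gives reviewers a starting point for asking why the model leans on a given input.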

The second is knowing your data. When building AI models you need to feed them data that is fit for purpose, so that the quality of the output reflects the quality of the input. This means having strong data governance in place to ensure you're not providing AI models with outdated, poor-quality data or sensitive information that could breach privacy rules.
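
As an illustration, here is a small Python sketch of what a 'know your data' gate might look like before training. The column names, staleness threshold and missing-value limit are my own illustrative assumptions, not rules from the article.

```python
# A minimal sketch of a "know your data" gate: before training, flag
# datasets that are stale, incomplete, or contain sensitive columns.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

SENSITIVE_COLUMNS = {"gender", "country_of_birth", "home_address"}
MAX_AGE_DAYS = 180          # data older than this is treated as outdated
MAX_MISSING_RATIO = 0.05    # more than 5% nulls in a column fails the check

def validate_training_data(df: pd.DataFrame, collected_at: pd.Timestamp) -> list[str]:
    """Return a list of governance issues; an empty list means the data passes."""
    issues = []
    leaked = SENSITIVE_COLUMNS & set(df.columns)
    if leaked:
        issues.append(f"sensitive columns present: {sorted(leaked)}")
    age_days = (pd.Timestamp.now() - collected_at).days
    if age_days > MAX_AGE_DAYS:
        issues.append(f"data is {age_days} days old (limit {MAX_AGE_DAYS})")
    for column in df.columns:
        ratio = df[column].isna().mean()
        if ratio > MAX_MISSING_RATIO:
            issues.append(f"{column}: {ratio:.0%} missing values")
    return issues

# Example: this hypothetical dataset fails all three checks.
df = pd.DataFrame({"income": [50_000, None], "gender": ["F", "M"]})
print(validate_training_data(df, pd.Timestamp("2024-01-01")))
```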

Lastly, organisations need to set a clear intent for AI projects. This means adopting a 'know your data, know your intent' approach. If you understand your data sources, you can clearly define the outcomes you want to achieve. As part of this, it's important to put yourself in your consumer's shoes and ask, "Would you be comfortable if your own data were used in the same way you're using it within the organisation?"

One additional point that is often overlooked is assembling a cross-functional, diverse team to build and manage AI models. Organisations tend to gravitate towards data scientists for these roles, but they should also include domain experts in AI and ethics, as well as subject matter experts, to help drive diversity of thought and improve the quality of the AI models they produce.

Ethical AI In Practice

There are specific industries and use cases where ethical AI is extremely critical. When building AI tools, it's important that we require only inputs that are relevant to the task at hand. Think about using an AI tool to apply for a new job. Asking for someone's country of birth or even address could expose the applicant to unconscious bias within the AI model; it's not relevant to the job and therefore shouldn't be included. Another example is applying for credit: your gender shouldn't matter here because it doesn't have a material impact on your financial situation.
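
One way to enforce this relevance test in code is an allowlist: only fields with a demonstrated bearing on the decision ever reach the model. The Python sketch below illustrates the idea for a credit application; the field names are hypothetical assumptions, not anyone's actual scoring schema.

```python
# A minimal sketch of an input allowlist: attributes like gender or
# country of birth are dropped before they can reach the model.
# Field names are illustrative assumptions.
from dataclasses import dataclass

RELEVANT_FIELDS = {"annual_income", "existing_debt", "repayment_history_score"}

@dataclass
class CreditApplication:
    annual_income: float
    existing_debt: float
    repayment_history_score: float

def to_model_input(raw: dict) -> CreditApplication:
    """Build the model input from raw form data, keeping only fields on
    the relevance allowlist (safer than blocklisting known-bad fields)."""
    filtered = {k: v for k, v in raw.items() if k in RELEVANT_FIELDS}
    return CreditApplication(**filtered)

raw_form = {
    "annual_income": 85_000.0,
    "existing_debt": 12_000.0,
    "repayment_history_score": 0.92,
    "gender": "F",               # never used for scoring
    "country_of_birth": "BR",    # never used for scoring
}
print(to_model_input(raw_form))
```

An allowlist forces the team to justify each input's relevance up front, rather than hoping they remembered to exclude every problematic attribute.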

Organisations should take a cautious and measured approach when rolling out AI across the business, weeding out unethical practices whilst still pushing forward with innovation. Vini recommended starting with AI models for internal systems to create efficiencies and optimise costs; for instance, internal development applications, where the risk of unethical AI practices affecting the organisation's product or service is low. Once the organisation is comfortable with its AI models, it can then expand to more critical or customer-facing activities like contact centres.

Whilst AI brings incredible opportunities, if we're not careful the risks of unethical AI practices could soon outweigh the benefits of the technology itself. When developing and maintaining AI models, intent, transparency and diversity are key to ensuring organisations make better decisions that ultimately lead to better outcomes for those using, benefiting from and impacted by AI tools, which is every single one of us.

Alyssa Blackburn
Alyssa Blackburn is the Director of Information Management at AvePoint, where she helps organisations achieve value from their information and records. With nearly 20 years of experience in the information management industry, Alyssa has worked with both public and private sector organisations to deliver guidance for information management success in the digital age. A passionate information management professional, Alyssa is actively involved in the industry and is an in-demand speaker at conferences and industry events worldwide. She frequently contributes to industry publications and will happily talk for hours about how to modernise a retention and disposal schedule and classification scheme.