Navigating the AI Ethical Landscape

Artificial intelligence (AI) has been at the forefront of research in STEM fields (science, technology, engineering, and mathematics) for decades. With the rapid rise of ChatGPT and other AI models, many are asking: “What will our future look like?” Comprehending the current landscape is complicated, and navigating the intricate web of ethical considerations surrounding artificial intelligence is even more challenging.

What is Artificial Intelligence?

At the most basic level, AI combines computer science with large datasets to build computer programs that can solve problems intelligently. Early AI research focused on problem-solving, and as the field advanced, researchers began training computers to mimic human behaviour. This work led to what we see today in automation and formal reasoning, including decision support systems (e.g. GPS navigation, ERP dashboards) and intelligent assistants (e.g. Apple’s Siri, Amazon’s Alexa) that complement or augment human abilities.

What is ChatGPT?

ChatGPT is the well-known AI model created by OpenAI that has had the world buzzing since its launch on November 30, 2022. It uses generative pre-trained transformer (GPT) language models to respond to users based on the context and interpreted intent of their questions. We asked ChatGPT how it worked (in under 240 words), and this was the response we received:

ChatGPT is a language model powered by transformer neural networks. It undergoes pre-training by learning from a vast amount of internet text, enabling it to understand patterns and relationships in language. The model consists of self-attention and feed-forward neural networks that capture context and generate coherent responses. After pre-training, it goes through fine-tuning where it is trained on specific datasets with human review to align its responses with guidelines. When you interact with ChatGPT, it processes your input, analyzes the context, and generates a response based on the patterns it has learned.

Of course, it has now learned to add a disclaimer to its response:

It’s important to note that while ChatGPT strives to provide helpful and accurate information, it may occasionally produce incorrect or nonsensical responses. Therefore, it’s always a good idea to critically evaluate the information and exercise caution when relying on the model’s responses.
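For readers curious about the “self-attention” the model describes above, here is a minimal sketch of the idea in Python. It is purely illustrative and based on our own assumptions (tiny random matrices, NumPy, a single attention step, no training); it is not OpenAI’s implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each position builds a context-aware
    representation by taking a weighted average of all value vectors,
    with weights based on query/key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # how relevant each token is to every other token
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: attention weights per row sum to 1
    return weights @ V                             # blend of value vectors, weighted by attention

# Toy "sentence" of 3 tokens, each represented by a 4-dimensional embedding.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention: Q, K, V come from the same tokens
print(output.shape)  # (3, 4): one context-aware vector per token
```

In a real GPT model, this step is repeated across many attention heads and layers, interleaved with the feed-forward networks mentioned in the response above.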

ChatGPT is just one example; see McKinsey’s article on AI essentials for a broader introduction.

Concerns Surrounding Artificial Intelligence

As early as 1983, Hollywood foreshadowed AI’s power with the famous film “WarGames”. A teenager accidentally awakens a prototype military AI that runs war-game simulations, nearly triggering a nuclear war. We won’t give away any spoilers if you haven’t seen the movie! But we highly recommend it as a film that remains relevant 40 years later: artificial intelligence is only as good as the training it receives from humans.

Today, AI can be seen across many industries: banking, retail, healthcare, manufacturing and many more. It’s helping businesses improve efficiency and reduce costs. Globally, annual spending on AI is expected to hit $110 billion by 2024, and most large companies now have multiple AI systems as part of their overall strategy: sourcing of materials, processing big data, billing, medical imaging, recruitment… The list goes on; the reality is that automation is here to stay. But many now question whether it will do more societal harm than economic good.

Ethical Considerations

As the leader of an organization, you need to be able to navigate the world of AI. Artificial intelligence must be governed within your business, and that is where IT FX can help. The scope of this topic is vast, and the following considerations offer only a brief glimpse:

  1. Privacy & Data Protection: All personally identifiable and corporate data must be protected. Using AI models can pose significant privacy risks if users input sensitive data, underscoring the need for policies and frameworks governing AI.
  2. Bias & Discrimination: AI decision-making can only be as good as the data it ingests (i.e., garbage in, garbage out) – AI will replicate human biases. Increasing transparency around training data and algorithm usage will help you understand the causes of potential discrimination and bias, giving your organization a stronger footing in addressing these issues. Follow a framework that standardizes production, and continually test models before and after deployment (one simple check is sketched after this list).
  3. Responsibility & Accountability: AI should be auditable and traceable, and there should always be oversight, assessment, audit, and due diligence mechanisms in place. Ultimately, AI cannot replace human responsibility and accountability.
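To make the second point more concrete, here is a minimal, hypothetical sketch of one screening check: comparing a model’s positive-outcome rate across demographic groups and flagging large gaps (sometimes called the “four-fifths rule” heuristic). The data, group labels, and 0.8 threshold are illustrative assumptions only; this is a single screening test, not a complete fairness framework.

```python
from collections import defaultdict

def selection_rates(records):
    """Positive-outcome rate per group, from (group, prediction) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += int(prediction)
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_check(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    best-treated group's rate (a screening heuristic, not a legal standard)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {group: (rate, rate / best >= threshold) for group, rate in rates.items()}

# Hypothetical audit sample: (demographic_group, model_approved?) pairs
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
for group, (rate, ok) in disparate_impact_check(sample).items():
    print(f"group {group}: selection rate {rate:.2f}, within threshold: {ok}")
```

Checks like this belong in both pre-deployment testing and ongoing monitoring, alongside the oversight and audit mechanisms described in the third point.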

The use of artificial intelligence in organizations brings forth a variety of ethical considerations that individuals and organizations must address, and the points above are merely a small window into this complex landscape. Contact us if you would like to discuss how we can assist you in the planning, implementation, enhancement, and governance of AI systems within your organization.

Sources: Definition of AI | OpenAI & ChatGPT | Worldwide Spending on AI | Great Promise but Potential for Peril