Ethical AI Strategies: 5 Proper Principles for Artificial Intelligence Models

February 16, 2024 by Infosys

From being a fixture of fiction that threatened to enslave or eliminate all human life to being a pervasive conversational reality, artificial intelligence has come a long way -- especially with the recent surge in generative AI. That progress has its roots in multiple converging factors, such as computing power and data availability, but the point of import is the rapid expansion of the AI race.

Since ChatGPT was opened to the public, there has been a corporate race to build a better GPT bot and larger language models with more and more capabilities in order to gain the upper hand. Nobody can deny the potential of the technology, but, like any other race, this one needs a set of principles to serve as a guiding light for the currently chaotic scramble large corporations find themselves in.

Why do we need AI guardrails?

Like all technologies, artificial intelligence, including the much-talked-about generative AI, has its flaws as well as potential adversarial use cases. AI is prone to biases, inaccuracies, hallucinations, Type 1 and Type 2 errors, and much more. On the other hand, bad actors eventually find applications for any breakthrough technology. Think of deepfakes combined with generative AI producing extremely convincing videos of leaders spewing hatred, or AI-generated images that seem legitimate but are actually meant for phishing. The negative use cases for the technology are just as endless.

Five Key Principles to Follow

To ensure that the race for better artificial intelligence is ethical and has a lasting positive impact on society, below are the key principles AI models should adhere to:

  1. Correctness – One of the key aspects AI has to account for is being correct, or accurate. Predictive and generative AI are often used for decision-making or for presenting content, and humans, in all their flawed glory, do not cross-check the predictions or content that AI generates. The onus of correctness, in the current state of AI, lies on humans. This onus has to shift to the AI models. After all, what is the use of having a GPT generate a polished presentation if you must then sit and cross-check every graph, every figure, and every table in it? The value of AI becomes a moot point with manual fact-checking in the mix.
  2. Sustainability – With the race for large language models heating up, sustainability is another major variable to consider. Training LLMs takes a massive amount of computing power, which in turn needs massive amounts of energy and water to operate. Even inference with LLMs requires considerably more energy than regular prediction models. This essentially means that corporations have to offset the environmental impact of AI with an equivalent positive contribution to environmental goals on other fronts. Nobody really wants to see the climate clock sped up.
  3. Fairness – Humans are flawed, and one of the flaws they pass on to artificial intelligence is bias. The training data that artificial intelligence is given, or has access to, invariably carries biases. It is easier to monitor and offset biases when providing structured, labelled training data to models, but in the current state of generative AI, where models train on unstructured data from the internet, controlling the introduction of bias has become much more challenging. There need to be defined steps for monitoring and remediating biases in AI models, whether manually or through introspection processes (a minimal monitoring sketch follows this list).
  4. Reliability – The other big hurdle to pervasive adoption of AI is trust. Humans, more often than not, do not trust the outcome of an AI model, even when it is entirely accurate. The reason is the black-boxed nature of AI – humans do not know what is going on inside the neural networks or how the AI arrived at its outcome. This is especially true when the outcomes are counter-intuitive or run contrary to human opinion. The way out is to build explainability into AI models and provide interactive visualizations of it (see the explainability sketch after this list). Once a human knows how a result was derived, the outcome becomes more acceptable.
  5. Human-centricity – A perhaps more philosophical guiding principle is that all artificial intelligence should be aligned with the betterment of humanity; models have to be trained to be human-first. While training and while predicting outcomes, AI models should give greater weight to outcomes that positively impact human society, where applicable. As AI evolves with unsupervised learning, this philosophical guardrail can help steer it toward the right crests in the future.
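
To make the bias monitoring mentioned in principle 3 concrete, here is a minimal sketch in Python of one fairness signal, demographic parity, computed over a model's scored output. The column names, sample data, and alert threshold are illustrative assumptions, not part of any standard or of the author's toolkit.

import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Return the largest difference in positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical scored data: 1 = model approved, 0 = model declined.
scored = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "approved": [1, 0, 1, 1, 1, 0],
})

gap = demographic_parity_gap(scored, "group", "approved")
if gap > 0.2:  # alert threshold chosen for illustration only
    print(f"Possible bias: approval rates differ by {gap:.0%} across groups")

In practice, a check of this kind would run on a schedule over production predictions, with remediation steps triggered whenever the gap persists.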
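
Likewise, the explainability called for in principle 4 does not have to mean exotic tooling. A rough sketch of surfacing feature importance alongside a model's predictions might look like the following; the model choice, synthetic data, and feature names are assumptions made purely for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: three hypothetical features, with the label driven mostly by the first.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "tenure", "region"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")  # "income" (feature 0) should dominate

Showing a reviewer which inputs drove a given outcome is a small step, but it is the kind of transparency that turns a counter-intuitive prediction into one a human is willing to accept.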

Conclusion

The future of the human race is tightly entwined with how artificial intelligence shapes up. We are at the cusp of a new age, one in which AI will augment human life and become an inextricable, intrinsic, pervasive part of it. The difference between AI being a savior and AI being the last nail in the coffin of human civilization is how our generation nips the negatives in the bud. Any guidelines, any guardrails, any frameworks, any legislation on AI have to be brought in now, before all of this spirals out of control and reining it back becomes an exercise in futility.

Author Pratyush Anand is a principal technology architect for Salesforce solutions at Infosys, the global IT consulting company. More: Read additional guest blogs from Infosys here.
