Artificial Intelligence

AI Can Turbocharge Profits. But It Shouldn’t Be At The Expense Of Ethics

  • In the rush to harness AI for profit, business shouldn’t neglect the possible negative impact of the technology.
  • Misinformation, privacy and bias are all areas requiring attention.
  • Technological advancements are key to confronting global challenges – this requires innovation and guardrails.

As the adoption of generative AI rapidly expands across all corners of society, businesses of all kinds are poised to become quicker, more creative and more clever. We’re seeing this everywhere: Casinos use AI to better predict customers’ gambling habits and lure them with irresistible promotions. AI guides product designers as they choose better, more efficient materials. Many firms are even beginning to use the technology to predict payments from scanned invoices. According to Goldman Sachs, the widespread adoption of AI could lift annual productivity growth by 1.5 percentage points over a 10-year period.

However, this rapid expansion should also be met with caution: Businesses must be careful not to expand their adoption of AI purely for profit. They must realize that, like many other fast-emerging technologies before it, unbridled use of AI could have dangerous consequences. Generative AI has the potential to turbocharge the spread of disinformation, worsen social media addiction among teenagers and perpetuate existing social biases. These consequences are not only harmful for society at large but also bad for businesses, which work tirelessly to build trust with customers – only to misstep and suffer reputational damage from which it can be impossible to recover. So, as they adapt to the quickly evolving AI landscape, how can businesses use this groundbreaking technology ethically?

1. Prioritizing data privacy

First, firms must prioritize protecting data – their own, their clients’ and their customers’. To do so, they must understand the risks of using public large language models (LLMs). LLMs are the backbone of generative AI: algorithms that feed on large amounts of data to make predictions and generate content. Public LLMs are trained on generic, publicly available datasets and can be accessed by anyone. If prompted in the right way, they may reveal or leak sensitive data used in their training – or reproduce its biases. And since LLMs lack a delete button, they cannot unlearn data, which makes the risk of leakage permanent. Firms in regulated industries, such as financial services, should be particularly worried about the risks of using public LLMs. A leak that exposes financial information, such as bank account numbers and transaction details, could result in client identity theft and even fraud, not to mention hefty legal fees for banks.
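If a firm does send prompts to a public model, one common safeguard (not described in this article, but widely used) is to scrub sensitive fields before the text ever leaves its systems. Below is a minimal Python sketch of that idea; the regex patterns and the sample prompt are hypothetical and far narrower than a production scrubber would need.

```python
import re

# Hypothetical patterns for illustration; real deployments need far
# broader coverage (names, addresses, national IDs, etc.).
PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,17}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholder tags before the
    text is sent to any third-party model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise the dispute on account 12345678 raised by jane@example.com"
print(redact(prompt))
# -> Summarise the dispute on account [ACCOUNT_NUMBER] raised by [EMAIL]
```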

To mitigate this risk, companies can use private LLMs, which are trained on a company’s specific, private corpus of data and can be accessed only by authorized stakeholders. With private LLMs, firms can reap the benefits of generative AI – for example, via chatbots built on their own customer data – without the risk of sending that data to third parties. And because these LLMs are trained on specific information and allow more control over update cycles, they are less likely to “hallucinate”, or provide “irrelevant, nonsensical, or factually incorrect” responses.
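To show what “keeping the data in-house” can look like in practice, here is a stdlib-only Python sketch of a chatbot that retrieves answers from a private corpus and hands them to a locally hosted model, so nothing is sent to a third party. It uses naive keyword retrieval; `PRIVATE_CORPUS` and `answer_with_local_model` are hypothetical stand-ins, not any specific product’s API.

```python
# Minimal retrieval sketch over a private corpus (standard library only).
PRIVATE_CORPUS = {
    "refund-policy.md": "Refunds are issued within 14 days of a valid claim.",
    "fee-schedule.md": "International transfers carry a flat 0.5% fee.",
}

def retrieve(query: str, corpus: dict[str, str]) -> list[str]:
    """Naive keyword retrieval: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:1]]  # keep the best match only

def answer_with_local_model(query: str) -> str:
    context = "\n".join(retrieve(query, PRIVATE_CORPUS))
    # A real deployment would pass `context` and `query` to the privately
    # hosted LLM here; the data never leaves the company's infrastructure.
    return f"Context used: {context}"

print(answer_with_local_model("How long do refunds take?"))
```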

2. Mitigating AI bias

At the core of AI is data. Without it, AI is useless – but with the wrong data, the technology can be dangerous. Popular generative AI systems like ChatGPT rely on large, publicly available data sources, some of which reflect historical and social biases. AI systems that rely on these datasets end up replicating those biases. Consider our earlier example of a bank: Algorithms trained on historical data generated by discriminatory practices (for example, redlining in 1930s Chicago) can lead banks to deny loans to marginalized communities. Similarly, insurance companies can end up charging those communities higher premiums, and credit bureaus can misrepresent their credit scores.

The best way to combat AI bias is to incorporate humans into LLM training processes. This human-AI relationship can be twofold: Humans can monitor AI systems, providing input, feedback and corrections to enhance their performance, and the trained AI can in turn help humans detect bias in their own behaviour. For example, as humans supply AI with the right data and teach it to phase out bias through corrections, AI can be trained to alert hiring managers to hidden discriminatory practices in their companies’ hiring decisions.
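As one concrete illustration of the kind of check such a system might surface, here is a minimal Python sketch of the “four-fifths rule” used in US employment-selection analysis: a group whose selection rate falls below 80% of the highest group’s rate is flagged for human review. The figures are invented for illustration.

```python
# Disparate-impact check using the "four-fifths rule": a selection rate
# below 80% of the highest group's rate is a red flag for review.
hiring = {
    # group: (applicants, hires) -- made-up numbers
    "group_a": (200, 60),
    "group_b": (180, 27),
}

rates = {g: hires / apps for g, (apps, hires) in hiring.items()}
top = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
# group_a: selection rate 30%, impact ratio 1.00 -> ok
# group_b: selection rate 15%, impact ratio 0.50 -> REVIEW
```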

3. Implementing a framework for transparency

Firms must ensure that their use of AI complies with regulatory frameworks, including data protection, cybersecurity and corporate governance laws. But how can firms comply with oversight mechanisms that have yet to be designed? The answer lies in transparency, which is key to generating trust and to overcoming the fear that AI could manipulate, or even dictate, our lives. The EU’s High-Level Expert Group on Artificial Intelligence (AI HLEG) has developed an assessment list for trustworthy AI that firms can use as a guide. Its transparency requirement rests on three tenets:

  • Traceability: Is the process that developed the AI system accurately and properly documented? For example, can you trace back which data the AI system used to generate its decisions? (See the sketch after this list.)
  • Explainability: Can the reasoning behind the AI system’s solutions or decisions be explained? More importantly, can humans understand the decisions made by an AI system?
  • Communication: Have the system’s potential and limitations been communicated to users? For example, do you tell users that they are interacting with an AI bot and not a human?
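To make the traceability tenet concrete, here is a minimal Python sketch of a decision audit record: just enough provenance to trace a decision back to the model version and data sources that produced it. The field names are assumptions for illustration, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, input_text: str,
                 data_sources: list[str], decision: str) -> dict:
    """Record enough provenance to trace a decision back to the model
    and data that produced it. Field names are illustrative only."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store the raw input, to avoid logging sensitive data.
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "data_sources": data_sources,
        "decision": decision,
    }
    print(json.dumps(record, indent=2))
    return record

log_decision("credit-model-v3.2", "loan application #1042",
             ["transactions-2023", "bureau-feed"], "approved")
```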

AI is one of the most promising technological tools ever developed, not because it can help us boost profits and productivity (though it certainly will), but because of its enormous potential to help us become better humans. We must never lose sight of the fact that humans, including those in the private sector, are the ones behind the wheel of AI’s development. It is our responsibility to develop it responsibly – that’s good for society and good for business.

By: Raj Verma (Chief Executive Officer, SingleStore)
Originally published at: World Economic Forum


