Responsible AI: A key challenge for Banking and Insurance
Anaïs Brossard, Product Marketing Specialist.
Artificial Intelligence is transforming the banking and insurance sectors, creating opportunities for automation, personalization, and operational efficiency. A recent McKinsey study suggests generative AI could contribute $2.6 trillion to $4.4 trillion annually to the global economy. However, its rapid implementation raises significant ethical concerns around algorithmic bias, transparency, and data privacy. Tackling these challenges demands a responsible approach to AI development.
In financial services – particularly banking and insurance, which handle sensitive personal data under strict regulations – integrating new technologies into legacy systems takes time and rigorous processes. At Zelros, we’ve partnered with leading insurers and banks for almost a decade, so we know this challenge well. That’s why we’ve embedded responsible AI principles into our solutions from day one: we believe it is the only viable path to helping financial institutions adopt innovation securely and confidently.
The landscape has evolved dramatically since then, with numerous new regulations emerging to protect both consumers and businesses. Let’s revisit the fundamentals.
What is responsible AI?
Responsible AI ensures that artificial intelligence systems are fair, transparent, and respectful of fundamental rights. In banking and insurance, this means developing reliable models that comply with regulations such as GDPR in Europe or the AI Act, while also meeting customer expectations.
The importance of these challenges continues to grow. AI is now at the heart of the banking and insurance industries, but its use must remain responsible to prevent misuse and maintain user trust. At Zelros, this commitment has been a core principle from the very beginning. We continue to strengthen it by constantly evolving our practices.
The Apple Card case – a clear example of why Responsible AI matters
In 2019, Apple and its banking partner, Goldman Sachs, faced allegations of discrimination when the algorithm behind the Apple Card granted tech entrepreneur David Heinemeier Hansson a credit limit 20 times higher than his wife’s, despite their shared finances and her superior credit score. Apple co-founder Steve Wozniak reported a similar disparity, receiving 10 times the limit of his spouse despite identical assets. This incident exposed critical flaws in opaque AI systems:
- Algorithmic bias risks: Even when gender is excluded from a model’s inputs, bias can persist indirectly through proxies such as spending patterns or historical data that reflect societal inequalities.
- Lack of transparency: Neither Apple nor Goldman Sachs could explain how credit decisions were made, leaving customers frustrated and regulators concerned.
- Regulatory scrutiny: The New York Department of Financial Services launched an investigation, emphasizing that any discriminatory outcome—intentional or not—violates fair lending laws.
The case underscores why Responsible AI requires continuous auditing, bias mitigation frameworks, and human oversight—principles Zelros embeds in every solution. By prioritizing explainable models and proactive compliance with evolving regulations like the EU AI Act, we help financial institutions avoid reputational damage while fostering innovation that earns customer confidence.
The Pillars of Responsible AI
At Zelros, we’ve explored different ways to create fair and responsible AI, focusing on key principles like transparency, fairness, and data protection. Developing responsible AI means following a few essential guidelines:
1. Transparency: Clarity and Explainability
AI systems must operate with clarity – their decision-making processes should be transparent and interpretable for both users and regulators. Five years ago, we outlined actionable strategies in our article 7 Ways to Foster Fair and Responsible AI in Insurance Services, emphasizing that open-source solutions and collaborative frameworks are essential to achieving this transparency. At Zelros, we provide well-documented and transparent recommendations to help advisors better support their clients.
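To make this concrete, here is a minimal sketch (illustrative only, not Zelros’s implementation; the feature names are hypothetical) of how an interpretable model can expose the per-feature contributions behind a decision, so an advisor can explain a recommendation rather than simply deliver it:

```python
# Explainability sketch: a logistic regression whose coefficients yield
# an additive, human-readable contribution for each feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_as_client"]  # hypothetical
X = rng.normal(size=(500, 3))                                # toy data
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(x):
    """Return each feature's additive contribution to the decision score."""
    z = scaler.transform([x])[0]
    return dict(zip(feature_names, model.coef_[0] * z))

applicant = X[0]
for name, contribution in explain(applicant).items():
    print(f"{name}: {contribution:+.2f}")
```

More complex models call for dedicated explanation methods (such as SHAP values), but the principle is the same: every automated decision should come with reasons a human can read and challenge.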
2. Fairness
AI systems must ensure impartial treatment for all individuals, regardless of their background, gender, or social status.
In the insurance industry, fairness is key to maintaining customer trust. AI can help detect fraud while ensuring fair pricing for honest customers. However, human oversight remains essential to identify and correct potential biases.
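As a simple illustration of what such oversight can look like (a sketch under assumed column names, not a description of Zelros’s pipeline), a recurring fairness audit can compare outcome rates across groups using the disparate impact ratio:

```python
# Fairness audit sketch: disparate impact ratio on model decisions.
# Column names ("gender", "approved") are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "approved": [1,   0,   1,   1,   1,   1,   0,   1],
})

rates = decisions.groupby("gender")["approved"].mean()
ratio = rates.min() / rates.max()  # disparate impact ratio

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common "four-fifths" heuristic threshold
    print("Warning: potential bias - flag for human review")
```

A low ratio does not prove discrimination, but it is exactly the kind of signal that should trigger a human investigation.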
3. Data Privacy
AI systems must comply with privacy regulations, such as GDPR in Europe, to protect customer information and maintain data security.
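One basic building block, shown here as a hedged sketch (salted hashing is one possible pseudonymization scheme, and does not by itself guarantee GDPR compliance), is to pseudonymize direct identifiers before data is used for analytics or model training:

```python
# Pseudonymization sketch: replace direct identifiers with salted hashes
# before analytics or model training. Illustrative only; a complete GDPR
# measure also requires storing the salt separately and securely.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # in practice, managed by a secrets vault

def pseudonymize(customer_id: str) -> str:
    """Deterministically map an identifier to an opaque token."""
    return hashlib.sha256(SALT + customer_id.encode("utf-8")).hexdigest()

record = {"customer_id": "FR-123456", "claim_amount": 1800.0}
record["customer_id"] = pseudonymize(record["customer_id"])
print(record)  # the model never sees the raw identifier
```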
4. Accountability
Companies must take responsibility for how AI affects people and put mechanisms in place to correct errors or biases in their algorithms.
Opportunities and Challenges of Responsible AI in Banking and Insurance
AI is transforming these industries by automating repetitive tasks, personalizing services, and enhancing security through fraud detection. Predictive models, for instance, help optimize claims management and assess risks more accurately, improving the customer experience.
However, these advancements raise ethical issues. If AI is trained on biased data, it can lead to unfair insurance rates or loan discrimination. Additionally, many users don’t understand how AI makes decisions, which can reduce trust.
How to ensure Responsible AI?
To make AI responsible, companies should follow key best practices:
- Transparency: Clearly explain how AI systems make decisions.
- Monitor bias: Regularly check AI models to prevent discrimination, especially in banking and insurance, where bias can have serious consequences.
- Protect data: Keep personal data secure and respect customer privacy.
- Educate & raise awareness: Train AI professionals on ethical issues and inform the public about inclusion challenges.
- At Zelros, we host events yearly, including the upcoming “AI Forum for Banking and Insurance“ on February 11, 2025, to promote AI while highlighting key ethical issues.
- Preserve choice: AI should assist, not replace, human decision-making, leaving users free to override its recommendations.
Organizations like Positive AI support businesses in adopting responsible AI through workshops, tools, and practical solutions.
Zelros, ISO 27001 certified: A guarantee of enhanced security
Zelros is proud to be an ISO/IEC 27001-certified company, demonstrating our commitment to implementing a robust information security management system. This certification assures our partners that their data is safeguarded to the highest international standards.
In alignment with our dedication to ethical and transparent AI practices, we actively comply with key regulatory frameworks, including GDPR and DORA. By adhering to these regulations, Zelros not only fosters responsible AI usage but also ensures a secure, trustworthy foundation for our partners and clients.
We look forward to sharing more with you about the upcoming EU AI Act. The first comprehensive regulation of AI, set to be fully applicable by 2026, it aims to ensure ethical standards, enhance transparency in AI operations, protect user privacy, and mitigate biases and discrimination.
Conclusion
Responsible AI is crucial for ensuring that the banking and insurance sectors remain trustworthy, fair, and innovative. By adopting ethical practices, these industries can leverage AI to enhance their services while protecting their customers and building trust. At Zelros, we combine AI innovation with Responsible AI to offer solutions that create a positive impact in banking and insurance.
Explore our other articles on responsible AI.