Artificial Intelligence in Banking: Opportunities, Risks, and BaFin Focus
AI has long since found its way into the European banking sector. While many institutions are experimenting with chatbots, risk models, or fraud detection, one question is becoming increasingly central in 2025:
How can AI be used innovatively without violating regulatory frameworks or governance standards?
This article summarises current expert opinions and outlines actionable recommendations for banks, with a clear focus on the BaFin perspective. At adorsys, we see that combining Security by Design with Compliance by Design is a key lever to answer this question in practice.
Experts see potential, and warn of risks
Experts from the European Central Bank (ECB), the Bank for International Settlements (BIS), and BaFin all emphasise:
AI can automate processes, improve decision-making, and elevate customer interaction to a new level.
At the same time, it introduces model risks, bias issues, and dependencies on third-party providers that institutions must actively manage.
BaFin makes it clear that AI applications are subject to the same high standards as any other critical IT system, now complemented by new obligations arising from the EU AI Act. This aligns with our experience from projects across all our areas of expertise, where regulatory expectations and technical architecture decisions are closely linked.
Where AI is already creating value in banking today
Experts highlight three areas with particularly high potential:
Lending & Risk Models:
AI enables faster credit assessments, early risk detection, and more personalised lending decisions.
Fraud Prevention & Compliance:
Machine learning models support pattern recognition, reduce false positives, and ease the workload of compliance teams.
Customer Experience & Service:
Chatbots and generative AI speed up inquiries, summarise documents, and relieve advisors.
Generative AI is seen as a powerful accelerator, provided governance and data quality are in place. In our AI & Data and Digital Finance & Identity work, we therefore start with data governance, explainability, and integration into secure, compliant backends rather than isolated pilots. At the same time, we use a semantic layer to deliver accurate, relevant, and compliant answers from enterprise GPT, chatbots, and agents.
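As a concrete illustration of the fraud-prevention point above, the sketch below flags transactions that deviate sharply from a customer's historical spending pattern. It is a deliberately minimal statistical baseline, not a production model; the function name, threshold, and figures are hypothetical, and real systems combine many features and model types to keep false positives low.

```python
import numpy as np

def fraud_scores(amounts, history_mean, history_std):
    """Z-score of each transaction amount against the customer's
    spending history. Illustrative baseline only."""
    return np.abs(amounts - history_mean) / history_std

# Hypothetical transaction amounts (EUR) for one customer
amounts = np.array([25.0, 40.0, 31.0, 5000.0])
scores = fraud_scores(amounts, history_mean=35.0, history_std=12.0)

# Alert threshold; in practice tuned to balance detection rate
# against false positives for the compliance team
flags = scores > 4.0  # only the 5,000 EUR outlier is flagged
```

Even this toy example shows the trade-off compliance teams manage daily: a lower threshold catches more fraud but generates more false positives to review.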
Regulatory Frameworks with a BaFin Focus
In addition to the well-known MaRisk and BAIT requirements, the EU AI Act will shape the coming years:
Creditworthiness assessments are classified as high-risk applications.
Banks must demonstrate data quality, transparency, and proper documentation.
BaFin emphasises that management boards bear full responsibility for AI-driven decisions, even when using external models.
This means that Compliance by Design must be considered right from the start as part of a broader Security & Compliance by Design approach in architecture, software engineering, and operations.
Governance Challenges
Experts identify three key risks:
Model Risk & Drift:
AI models often lose accuracy over time as production data drifts away from the training distribution, making continuous monitoring essential.
Bias & Fairness:
Skewed training data can lead to discriminatory outcomes.
Third-Party Dependencies:
When many banks rely on the same models or cloud providers, systemic risks can arise.
Recommendation: Classify the risks of all AI use cases and establish a clear AI governance structure. This is where AI & Data governance, secure cloud architectures and managed services can help to keep control over models, data and providers over the full lifecycle.
Recommendations for Banks
Develop an AI strategy:
Prioritise critical use cases and define long-term goals;
Establish governance:
Create AI oversight committees with clear roles and responsibilities;
Ensure data quality:
Implement data governance frameworks and bias controls;
Manage third-party risks:
Include SLAs, audit rights, and exit strategies in contracts;
Integrate compliance early:
Conduct gap analyses against the EU AI Act and BaFin requirements, and embed Security & Compliance by Design into software engineering, data, and cloud decisions from the outset;
Build culture and skills:
Offer training for departments and introduce new roles such as Model Risk Manager.
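One simple example of the bias controls recommended above is a disparate-impact screen on approval rates across applicant groups. The sketch below applies the "four-fifths" heuristic, which originates in US employment-selection practice and is not an EU AI Act threshold; the figures are hypothetical and a real fairness review would look at many metrics, not one ratio.

```python
def disparate_impact(approved_a, total_a, approved_b, total_b):
    """Ratio of approval rates between two applicant groups.
    The four-fifths heuristic flags ratios below 0.8 for review;
    it is a screening signal, not a legal pass/fail test."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical lending figures: group A 72% approved, group B 51%
ratio = disparate_impact(720, 1000, 510, 1000)
needs_review = ratio < 0.8  # True here: the gap warrants investigation
```

A flagged ratio does not prove discrimination, but it tells the governance committee where to look, which is exactly the kind of documented control the AI Act expects for high-risk credit models.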
Conclusion: Innovation Requires Responsibility
AI will shape the future of banking, but only within clear boundaries. Supervisory authorities expect not only innovation from institutions, but also responsibility, transparency, and governance competence.
Banks that establish data quality, AI governance, and compliance by design today will be able to harness the potential of AI without exposing themselves to regulatory risks.
Technology partners like adorsys, with expertise across Trust & Cybersecurity, AI & Data, Software Engineering, Digital Finance & Identity, Cloud Technologies and Managed Services, can support in turning these principles into concrete architectures, platforms and operating models.
