A company requests a loan. The response comes in seconds: denied. No meeting, no discussion, no explanation beyond a standard email. The decision was made by an algorithm, and the applicant has no idea what determined the outcome or how they could challenge it. This is not a hypothetical scenario. It happens every day in the European financial sector, and the real problem is not that algorithms make decisions, but that those affected can do almost nothing about it.

 

From rights on paper to rights in practice

In its most common form, transparency risks being a mere formality, a problem that Hategan Digital observes constantly in practice. Information has no value unless it comes with the power to understand, to challenge, and to obtain real correction.

Three fundamental rights under European Union law are directly relevant here: data protection, non-discrimination, and the right to an effective remedy. These are not abstract principles. They function as a chain: information only has value if it opens the way to a challenge, and a challenge only matters if it can lead to real redress. If one link is missing, the whole chain breaks.

 

What each of these rights does

Data protection is not just a technical issue of who stores what and where. It is a procedural right that determines how information can be used to produce legal effects on a company or an individual. Without solid guarantees, the relationship between the applicant and the financial institution becomes unequal. The dynamic is simple: the institution sees, classifies, and decides, while the customer cannot verify or correct what defines them in the eyes of the system.

Non-discrimination matters just as much. In lending, classification is inevitable. But it becomes problematic when the criteria are unclear or when they reproduce historical inequalities. Without a justification that the applicant can understand and without the possibility of requesting a review, equality remains a theoretical principle.

And the right to an effective remedy remains the ultimate test. A right is only real and effective if it can be exercised and if its violation can be verified and remedied. The formal possibility of filing a complaint means nothing without the real capacity to obtain a genuine reassessment of the decision.

 

The meaning of Article 22 of the GDPR and the significance of the SCHUFA ruling

Article 22 of the GDPR applies when an individual or company is subject to a decision based solely on automated processing that produces legal effects or a similarly significant impact. This includes credit being denied, costs being changed, and access to certain products being restricted.

The 2023 SCHUFA ruling of the Court of Justice of the European Union (Case C-634/21) clarified one essential point: if an automated score actually determines the final decision, the entire procedure falls under Article 22. It does not matter what label the institution puts on the process. It does not matter whether an employee formally signs the result. What matters is the actual effect. Legal responsibility follows the decision, not its form.

This closes one of the most common ways of avoiding responsibility: presenting a decision as having been reviewed by a human operator when, in fact, the automated score dictated the result from the outset.
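
The logic of this functional test can be sketched in code. The fragment below is our own illustration, not anything drawn from the ruling or from a regulator's methodology; the class and every field name are hypothetical stand-ins for questions an institution might ask itself.

```python
from dataclasses import dataclass

@dataclass
class DecisionProcess:
    """Hypothetical self-assessment record for one credit decision flow."""
    automated_score_used: bool    # an automated score feeds the decision
    produces_legal_effect: bool   # e.g. credit denied, price changed, product restricted
    human_can_override: bool      # the reviewer has authority to change the outcome
    human_overrides_happen: bool  # overrides actually occur in practice, not only on paper

def falls_under_article_22(p: DecisionProcess) -> bool:
    """Functional test in the spirit of SCHUFA: what counts is the actual
    effect of the score, not the label on the process or a formal signature."""
    if not (p.automated_score_used and p.produces_legal_effect):
        return False
    # A human in the loop only takes the process out of Article 22 if the
    # review is meaningful: the reviewer both can and does change outcomes.
    meaningful_review = p.human_can_override and p.human_overrides_happen
    return not meaningful_review
```

The sketch encodes the Court's point: a formal signature changes nothing unless the reviewer both can and does alter outcomes.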

 

Showcase compliance: what it means in practice

With over two decades of experience, Hategan Attorneys has identified several mechanisms through which institutions can formally comply with legal requirements without offering real protection.

Purely symbolic human oversight is probably the most common example: someone reviews the algorithm's decision but has neither the information nor the authority to change it. Then there are vague explanations, which technically meet the transparency requirement but say nothing useful about which factors mattered and why. There are also appeal channels with no real power to change the outcome. And, perhaps most frustratingly, there is fragmentation of responsibility: the applicant company is sent to the bank, the bank points to the scoring provider, and the provider sends the matter back to the bank. No one is responsible, yet everyone is compliant.

And these are by no means isolated cases. They are structural patterns of automated governance. Without an anchor in fundamental rights and without a clear procedural architecture, GDPR guarantees may remain only on paper.

A minimal regulatory framework that actually works

Ioana Chiper Zah, Managing Associate at Hategan Attorneys, together with Cristina Donia Bodnariuc, founder of Lexethica and researcher in AI governance and accountability, put forward a minimal framework of procedural safeguards for automated financial decisions.

 

The proposal was developed in an academic article written in the context of the debates opened by the NAIL 2026 Conference "AI, Finance & Law: Innovation, Integrity, and the Future of Markets" (Hamburg Network for AI & Law). The framework has four elements, each built on the previous one.

  1. Clear and timely notification. The affected company must know that an automated decision has been made, what role automation played, and what options it has to challenge the decision. Not after searching through terms and conditions, but at the moment the decision takes effect.
  2. Intelligible reasons. Not the source code of the algorithm and not trade secrets. But concrete factors that influenced the decision and the personal data that was decisive, presented in language that the recipient can understand.
  3. Authentic human review. A competent person with access to relevant information and the authority to change the outcome. If human review cannot change the decision, it is not review.
  4. Real remedy. A mechanism that corrects both the individual outcome and any systemic errors behind the decision.
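
Because each element builds on the previous one, the four safeguards lend themselves to a cumulative checklist. The Python sketch below is a minimal illustration of how an institution might audit its own decision records against them; the record structure and all field names are assumptions of ours, not part of the published proposal.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical per-decision record kept by a lender."""
    decision_effective_at: datetime
    notified_at: Optional[datetime]  # element 1: when the applicant was told
    decisive_factors: list[str] = field(default_factory=list)  # element 2: plain-language reasons
    reviewer_can_change_outcome: bool = False  # element 3: authentic human review
    individual_remedy_exists: bool = False     # element 4: correction of this outcome
    systemic_remedy_exists: bool = False       # element 4: correction of errors behind it

def missing_safeguards(r: AutomatedDecisionRecord) -> list[str]:
    """Return the safeguards from the four-element framework that are not met."""
    gaps = []
    if r.notified_at is None or r.notified_at > r.decision_effective_at:
        gaps.append("1: clear notification at the moment the decision takes effect")
    if not r.decisive_factors:
        gaps.append("2: intelligible reasons (the concrete factors that mattered)")
    if not r.reviewer_can_change_outcome:
        gaps.append("3: authentic human review with authority to change the outcome")
    if not (r.individual_remedy_exists and r.systemic_remedy_exists):
        gaps.append("4: real remedy, both individual and systemic")
    return gaps
```

An empty list from missing_safeguards would mean all four elements are in place; anything else names the broken link in the chain.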

 

How does this relate to the AI Act and the new Consumer Credit Directive?

The proposed procedural framework is not just an academic exercise. It aligns with the clear direction of European regulation. The AI Act classifies artificial intelligence systems used in loan assessment and credit scoring as high-risk. That classification brings concrete obligations: detailed technical documentation, a risk management system, effective human oversight, transparency towards users, and the maintenance of activity logs. These are not recommendations. They are requirements backed by penalties.
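
What an activity log might minimally capture can be sketched as follows. This is an assumption-laden illustration, not the AI Act's own schema (the Act does not prescribe field names); the function and its parameters are hypothetical.

```python
import json
from datetime import datetime, timezone
from typing import Optional

def log_scoring_event(applicant_ref: str, model_version: str, score: float,
                      outcome: str, reviewer: Optional[str]) -> str:
    """Produce one traceability entry of the kind the AI Act's logging
    obligation points towards; all field names are illustrative."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_ref": applicant_ref,  # pseudonymised reference, not raw personal data
        "model_version": model_version,  # which model produced the score
        "score": score,
        "outcome": outcome,              # e.g. "denied", "approved", "referred"
        "human_reviewer": reviewer,      # None if no human was involved
    }
    return json.dumps(entry)
```

Without a record of which model produced which score and who, if anyone, reviewed it, neither challenge nor remedy has anything to work with.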

At the same time, the new Consumer Credit Directive (Directive 2023/2225) strengthens the rules on creditworthiness assessment. It requires that the assessment be based on relevant and accurate information, that it not be left to automated processing alone, and that a clear explanation be given in case of refusal. This creates a substantive, not merely formal, obligation of transparency.

Together, the GDPR, the AI Act, and the Consumer Credit Directive form a convergent framework. Financial institutions that are now building real transparency, challenge, and remedy procedures are not just complying with the law. They are preparing for a regulatory environment that will become increasingly demanding.

 

What this means for companies

In a financial ecosystem that is increasingly dependent on automation, protection cannot be reduced to ticking a box. The soundness of decisions does not depend on how transparent an algorithm is in itself, but on how contestable and correctable its results are.

Companies operating in financial services, whether banks, fintechs, or insurers, need to move beyond the checklist stage. The regulatory direction is clear. And those that are now building real procedural safeguards will be better positioned than those that try to add them later, under pressure.

Hategan Digital works with companies at the intersection of technology and regulation. We design AI governance frameworks that comply with the GDPR, the AI Act, and other sector regulations, keeping the focus on what really protects the business: systems that are not just technically compliant, but that hold up when challenged.