Ant International has taken first place in the NeurIPS Competition on Fairness in AI Face Detection, a win the company says reflects its commitment to building secure and inclusive financial technology—especially as deepfake-driven fraud rises worldwide.
Why Algorithmic Bias Still Threatens Digital Finance
Facial recognition continues to expand across fintech, banking, travel, and identity verification. Yet studies from NIST show that many commonly used models produce significantly higher error rates for women and people of colour. That bias often stems from imbalanced training data and from homogeneous engineering teams building the systems.
In digital payments, this isn’t just an ethical issue—it’s a security flaw. Biased systems can incorrectly reject legitimate users, lock people out of financial services, and create exploitable weaknesses for attackers.
A Global Challenge With Real Stakes
The NeurIPS competition tasked teams with building fair and high-performing face detection models across gender, age, and skin-tone groups. More than 2,100 models from 162 teams were submitted. The test set included 1.2 million AI-generated faces designed to represent global demographics accurately.
Ant International’s model came out on top.
Inside Ant’s Winning Model
The company’s approach blends a Mixture of Experts (MoE) architecture with a built-in bias-detection system. It uses two opposing neural networks:
• One model focuses on detecting manipulated or AI-generated faces
• A second model challenges it, forcing the system to ignore demographic cues
This adversarial training setup pushes the final model to learn genuine signals of manipulation rather than shortcuts based on race, age, or gender.
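Ant has not published its model's code, but the gradient-reversal idea behind this style of adversarial debiasing can be sketched in a toy form. Everything below—the synthetic data, the linear "encoder", the learning rate, and the reversal strength `lam`—is an illustrative assumption, not a detail of Ant's system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 carries the "manipulation" signal; feature 1 is a
# demographic proxy that the detector must learn to ignore.
n = 400
X = rng.normal(size=(n, 2))
y_task = (X[:, 0] > 0).astype(float)    # genuine (0) vs. manipulated (1)
y_group = (X[:, 1] > 0).astype(float)   # demographic proxy (to be ignored)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Shared linear encoder plus two heads: a detector and an adversary.
w_enc = np.array([1.0, 1.0])  # encoder weights over the two features
w_det, w_adv = 0.5, 0.5       # detector head, adversary head
lr, lam = 0.1, 1.0            # learning rate, gradient-reversal strength

for _ in range(2000):
    h = X @ w_enc                      # shared scalar representation
    p_det = sigmoid(w_det * h)         # detector's P(manipulated)
    p_adv = sigmoid(w_adv * h)         # adversary's P(group = 1)

    # Per-sample cross-entropy gradients with respect to h
    g_det = (p_det - y_task) * w_det / n
    g_adv = (p_adv - y_group) * w_adv / n

    # Each head descends its own loss.
    w_det -= lr * np.mean((p_det - y_task) * h)
    w_adv -= lr * np.mean((p_adv - y_group) * h)

    # Encoder: descend the detection loss but ASCEND the adversary's loss
    # (gradient reversal), so the shared representation hides group info.
    w_enc -= lr * (X.T @ (g_det - lam * g_adv))

# After training, the encoder weights the manipulation signal (dim 0)
# far more heavily than the demographic proxy (dim 1).
print(abs(w_enc[0]) > abs(w_enc[1]))
```

The second network never helps classify faces; its only role is to punish the shared representation whenever demographic information leaks into it, which is what forces the detector onto manipulation cues alone.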
Ant also trained its system using a globally representative dataset and real payment fraud scenarios to ensure performance holds up at production scale.
Dr. Tianyi Zhang, general manager of risk management and cybersecurity at Ant International, notes that fairness isn’t just about inclusion—it’s foundational to security. Any bias introduces vulnerabilities that deepfakes can exploit.
Rolling Out Fairness Tech Across Ant’s Financial Ecosystem
The winning model is already being integrated into Ant's identity systems, supporting eKYC checks across all markets where the company operates. Ant says the system now achieves detection accuracy above 99.8% across demographic groups.
With more than 1.8 billion users and 150 million businesses relying on services like Alipay+, Antom, Bettr, and WorldFirst, the company is positioning AI safety as a core pillar.
Ant’s AI SHIELD framework—designed to prevent data leakage, unauthorised access, and adversarial misuse—underpins the rollout. As a result, features such as Alipay+ EasySafePay 360 have reportedly cut digital wallet account takeover incidents by 90%.
Why Fair AI Is Essential for Financial Inclusion
In emerging markets especially, biased verification systems can prevent people from opening accounts, accessing credit, or receiving government payments. By removing demographic distortions, Ant aims to ensure its identity systems work consistently in all 200 markets it serves.
Fairness becomes more than an academic challenge—it becomes an infrastructure requirement for global commerce.