In 2026, discrimination by AI is no longer a hypothetical concern: it's documented across hiring systems, lending decisions, criminal justice risk assessments, healthcare treatment decisions, and numerous other domains. An AI hiring system trained on historical hiring data replicates the biases of the decisions that generated that data. A lending AI trained on past approval data discriminates against groups that have historically faced discrimination. The recognition that AI can perpetuate and amplify human bias has led to a rapidly growing field of bias mitigation research and practice.
Where Bias Comes From
AI bias has multiple sources: training data bias (your historical data reflects discriminatory practices), label bias (the outcomes you're trying to predict are themselves biased), feature engineering bias (the features you select correlate with protected characteristics), and proxy bias (the model learns stand-ins for protected characteristics like race or gender even when those attributes are excluded).
These sources of bias aren't always obvious. A hiring algorithm trained on 'time to promotion' as an outcome might be discriminatory if certain groups face barriers to promotion in the underlying organization. A lending algorithm trained on 'default' might be biased if certain groups were historically denied credit, so lower-risk applicants from those groups were never offered loans to default on.
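Proxies can often be surfaced empirically: if a seemingly neutral feature predicts the protected attribute well, it is a proxy candidate worth scrutinizing. Here is a minimal sketch of that audit, assuming a pandas DataFrame with a categorical protected-attribute column; the column names, the shallow-tree probe, and the 0.65 accuracy threshold are all illustrative choices, not a standard method:

```python
# Sketch: flag candidate proxy features by checking how well each one
# predicts the protected attribute. All names here are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def find_proxy_candidates(df: pd.DataFrame, protected: str,
                          threshold: float = 0.65) -> dict:
    """Return {feature: accuracy} for features that predict the
    protected attribute better than `threshold` under cross-validation.
    The threshold should sit comfortably above the majority-class rate."""
    candidates = {}
    y = df[protected]
    for col in df.columns.drop(protected):
        X = pd.get_dummies(df[[col]])  # one-hot encode categoricals
        probe = DecisionTreeClassifier(max_depth=3)
        score = cross_val_score(probe, X, y, cv=5).mean()
        if score > threshold:
            candidates[col] = round(float(score), 3)
    return candidates  # e.g. a high-scoring zip_code column leaks race
```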
Technical Mitigation Approaches
By 2026, organizations have several bias mitigation techniques available: removing sensitive attributes from training data (though this doesn't eliminate bias from correlated features); reweighting training data to balance representation across groups; adjusting model thresholds to equalize error rates across groups; or constraining optimization to enforce fairness metrics alongside accuracy.
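A minimal sketch of the reweighting idea, in the spirit of Kamiran and Calders' "reweighing" method: weight each training example so that group membership and the label look statistically independent, then pass the weights to any estimator that accepts them. The array names are illustrative:

```python
# Reweighing sketch: weight = P(group) * P(label) / P(group, label),
# estimated from counts. Assumes 1-D numpy arrays of equal length.
import numpy as np

def reweighing_weights(y: np.ndarray, group: np.ndarray) -> np.ndarray:
    n = len(y)
    weights = np.empty(n)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            # count expected under independence, divided by observed count
            expected = (group == g).sum() * (y == label).sum() / n
            weights[mask] = expected / max(mask.sum(), 1)
    return weights  # pass as sample_weight to most sklearn estimators
```

Threshold adjustment is analogous in spirit: rather than reweighting the training data, you choose group-specific decision cutoffs on a validation set that equalize a chosen error rate.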
More fundamental approaches involve reformulating the problem to be less prone to bias: instead of predicting 'promotion,' predict 'performance on new responsibilities'; instead of predicting 'default,' use alternative data sources for creditworthiness assessment.
The Challenge of Fairness
No single definition of fairness exists, and well-known impossibility results show that several definitions cannot hold simultaneously except in degenerate cases. Demographic parity (equal positive-prediction rates across groups) conflicts with equalized odds (equal true- and false-positive rates across groups) whenever groups have different base rates. Individual fairness (treating similar individuals similarly) can conflict with group fairness (equal treatment of groups in aggregate). Different stakeholders care about different fairness definitions.
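The conflict is easy to make concrete: compute both metrics on the same predictions, and when base rates differ you will generally find that closing one gap widens the other. A minimal sketch, assuming binary labels, binary predictions, and exactly two groups (all array names illustrative):

```python
# Sketch: measure demographic parity and equalized odds side by side.
import numpy as np

def fairness_gaps(y_true, y_pred, group) -> dict:
    rates = {}
    for g in np.unique(group):
        m = group == g
        rates[g] = {
            "selection_rate": y_pred[m].mean(),       # P(pred=1 | group)
            "tpr": y_pred[m & (y_true == 1)].mean(),  # true positive rate
            "fpr": y_pred[m & (y_true == 0)].mean(),  # false positive rate
        }
    a, b = rates.values()  # assumes exactly two groups
    return {
        "demographic_parity_gap": abs(a["selection_rate"] - b["selection_rate"]),
        "equalized_odds_gap": max(abs(a["tpr"] - b["tpr"]),
                                  abs(a["fpr"] - b["fpr"])),
    }
```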
The practical approach in 2026 is transparency: document which fairness metrics you optimize for, measure performance separately across demographic groups, and make trade-offs explicit. A lending algorithm that performs well for most groups but poorly for a minority group is now expected to be documented and explained rather than hidden.
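In practice, that transparency takes the form of a disaggregated evaluation attached to the model's documentation. A sketch of such a report, assuming binary labels and predictions as numpy arrays; the metric selection is illustrative rather than prescribed:

```python
# Sketch: per-group performance report for model documentation.
import numpy as np
import pandas as pd

def disaggregated_report(y_true, y_pred, group) -> pd.DataFrame:
    rows = []
    for g in np.unique(group):
        m = group == g
        rows.append({
            "group": g,
            "n": int(m.sum()),
            "accuracy": (y_true[m] == y_pred[m]).mean(),
            "selection_rate": y_pred[m].mean(),
            "false_positive_rate": y_pred[m & (y_true == 0)].mean(),
            "false_negative_rate": 1 - y_pred[m & (y_true == 1)].mean(),
        })
    return pd.DataFrame(rows)  # attach to the model card / audit record
```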
