Statistical Data-Driven Decision-Making Considering Bias, Fairness, and Transparency in AI
DOI: https://doi.org/10.31763/iota.v5i2.905
Keywords: Addressing AI challenges, Bias in AI, Challenges in AI, Fairness in AI, Transparency in AI
Abstract
Bias, fairness, and transparency are critical issues in Artificial Intelligence (AI). These problems can arise from sources such as biased training data, algorithmic bias, and reinforcement learning bias. Attempts to correct bias can themselves lead to unintended consequences. The use of black-box models, along with proprietary and confidentiality constraints, can further obscure decision-making processes, and regulatory challenges complicate the governance of AI systems. Unfairness can arise when an algorithm uses inappropriate features or a biased training data set to make a decision. Lack of transparency in AI-based computation reduces trust, creates accountability issues, and makes automated decisions difficult to understand or challenge. Addressing bias, fairness, and transparency in AI is crucial to ensuring ethical, responsible, and inclusive technology. Governments, organizations, and researchers must work together to create AI systems that serve humanity without reinforcing discrimination. Without addressing these problems, AI risks deepening inequalities and losing public trust. For example, as India's Prime Minister Narendra Modi pointed out in a speech in Paris, if you ask an AI image tool to create a man writing with his left hand, it will likely produce a man writing with his right hand.