ANIn Kolkata, April 2024 | Ethics of AI by Abhishek Nandy
1. Ethics of AI
Abhishek Nandy, Chief Data Scientist, PrediQt Business Solution Pvt Ltd
Rishabh Gupta, Senior Engineer, Mercedes-Benz Research and Development India
3. Ethics of AI
The ethics of artificial intelligence (AI) is a broad and critical area of study that addresses the moral implications and
societal impacts of AI technologies. Here are some key considerations in the ethics of AI:
1. Bias and Fairness: AI systems can perpetuate or even exacerbate biases present in their training data. Ensuring fairness involves creating algorithms that do not unfairly discriminate against any group of people, especially marginalized communities.
2. Transparency and Explainability: AI systems, particularly those involving machine learning, can be highly complex and opaque. It is important for these systems to be transparent so that users understand how decisions are made. This is particularly crucial in areas such as healthcare, criminal justice, and finance.
4. Ethics of AI
1. Accountability: Determining who is responsible when AI systems cause harm or make errors is a complex issue. There needs to be a clear understanding of how liability is distributed among developers, users, and manufacturers.
2. Privacy: AI systems often rely on vast amounts of data, which can include sensitive personal information. Safeguarding this data and ensuring it is used ethically is paramount to protecting individual privacy.
3. Security: As AI systems become more integral to critical infrastructure and personal applications, ensuring these systems are secure from attacks and manipulation is increasingly important.
5. Fairness in AI
Fairness in AI is crucial in high-stakes decision-making contexts such as criminal justice, healthcare, and finance.
An AI system that is biased or unfair can have severe consequences for individuals and society as a whole.
For example, if an AI-powered credit-scoring system is biased against certain groups, it can perpetuate financial inequality and make it difficult for individuals in those groups to access credit.
6. Strategies for promoting Responsible AI in production for agile operations
1. Bias detection and mitigation
It is important to identify potential sources of bias in AI systems and to develop strategies to mitigate them. This can involve careful data selection, reweighting, or techniques such as adversarial training.
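As a concrete illustration of the reweighting idea mentioned above, the sketch below computes Kamiran–Calders-style reweighing weights so that group membership and the label become statistically independent in the weighted data. The function name, group labels, and toy data are hypothetical, for illustration only:

```python
import numpy as np
import pandas as pd

def reweighing_weights(groups, labels):
    """Per-sample weights that make group membership and the label
    statistically independent in the weighted data
    (Kamiran & Calders-style reweighing)."""
    df = pd.DataFrame({"g": groups, "y": labels})
    n = len(df)
    weights = np.empty(n)
    for (g, y), idx in df.groupby(["g", "y"]).groups.items():
        p_g = (df["g"] == g).mean()   # P(group)
        p_y = (df["y"] == y).mean()   # P(label)
        p_gy = len(idx) / n           # P(group, label)
        weights[list(idx)] = p_g * p_y / p_gy
    return weights

# Hypothetical toy data: group "a" receives the positive label far more often.
groups = np.array(["a"] * 8 + ["b"] * 8)
labels = np.array([1] * 6 + [0] * 2 + [1] * 2 + [0] * 6)
w = reweighing_weights(groups, labels)
# After reweighting, the weighted positive rate is equal across groups,
# so a model trained with these sample weights sees balanced outcomes.
```

Passing `w` as `sample_weight` to an estimator's `fit` method (most scikit-learn classifiers accept it) applies the correction during training.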
2. Fairness testing
Rigorous testing can be used to ensure that AI systems are fair and unbiased, using criteria such as statistical parity, equal opportunity, or equalized odds. For example, when developing an AI-powered credit scoring system, it is important to ensure that the system is not biased against any particular demographic group and that it treats everyone fairly.
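A minimal sketch of two of these fairness tests, assuming a binary classifier and two groups. The metric implementations and toy data below are illustrative, not taken from the talk:

```python
import numpy as np

def statistical_parity_difference(y_pred, groups):
    """Absolute gap in selection rate P(y_hat = 1) between two groups."""
    g1, g2 = np.unique(groups)
    return abs(y_pred[groups == g1].mean() - y_pred[groups == g2].mean())

def equal_opportunity_difference(y_true, y_pred, groups):
    """Absolute gap in true-positive rate between two groups."""
    g1, g2 = np.unique(groups)
    tpr = lambda g: y_pred[(groups == g) & (y_true == 1)].mean()
    return abs(tpr(g1) - tpr(g2))

# Hypothetical predictions from a credit-scoring model for two groups.
groups = np.array(["m", "f", "m", "f", "m", "f", "m", "f"])
y_true = np.array([1, 1, 0, 1, 1, 0, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])

print(statistical_parity_difference(y_pred, groups))         # gap in approval rate
print(equal_opportunity_difference(y_true, y_pred, groups))  # gap in TPR
```

A value of 0 on either metric means the two groups are treated identically on that criterion; in practice one tests that the gap stays below a chosen threshold.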
3. Transparency and explainability
AI systems should be designed to be transparent and explainable, so that stakeholders can understand how decisions are being made. This can help build trust in the system and prevent
unintended consequences.
4. Ethical design principles
Ethical design principles can be used to guide the development of AI systems, ensuring that they are designed to be ethical and responsible from the outset. For example, it is important to
ensure that the data used to train an AI system is diverse and representative, and that the system does not perpetuate any existing inequalities or biases.
7. Demo
1. Imports and Setup: The script imports the necessary libraries for data manipulation, machine-learning modeling, and fairness evaluation, and sets a random seed for reproducibility.
2. Data Generation:
•It creates a synthetic dataset with 1000 samples, consisting of features for age and gender, and a binary target variable 'income_over_50K' which is determined by a simplistic rule based on age.
3. Data Preparation:
•Converts the 'gender' column to a categorical data type.
•Splits the data into training and testing sets (80% training, 20% testing).
4. Preprocessing:
•Sets up a preprocessing step using ColumnTransformer that applies OneHotEncoder to the 'gender' column to convert categorical data into a format suitable for model training, while passing the 'age' column through unmodified.
8. Demo continued
1. Model Setup with Fairness Considerations:
•Defines a logistic regression model and a fairness constraint of demographic parity (requiring equal selection rates across groups).
•Uses Fairlearn’s ExponentiatedGradient class to create a mitigator that aims to learn a fair model by reducing the disparity in outcomes across sensitive features (gender in this case).
2. Training:
•The logistic regression model is trained via ExponentiatedGradient, which adjusts the model to satisfy the fairness constraint with respect to the sensitive feature (gender).
3. Prediction and Fairness Evaluation:
•Makes predictions on the test data using the fair model.
•Sets up metrics for evaluating the model (accuracy and selection rate).
•Uses Fairlearn's MetricFrame to evaluate these metrics across the sensitive groups, allowing you to assess how the fairness interventions affected the model’s performance and decision-making.
4. Output:
•Prints the fairness metrics for each group, showing the performance and selection rate (how often the positive class is predicted) by group.
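Putting the two demo slides together, the script might look roughly like the sketch below. It assumes Fairlearn and scikit-learn are installed; the exact data-generation rule and variable names are reconstructed from the description and may differ from the original demo:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import DemographicParity, ExponentiatedGradient

np.random.seed(42)  # 1. reproducibility

# 2. Synthetic data: age, gender, and a simplistic age-based income rule.
n = 1000
data = pd.DataFrame({
    "age": np.random.randint(18, 70, size=n),
    "gender": np.random.choice(["female", "male"], size=n),
})
data["income_over_50K"] = (data["age"] > 40).astype(int)

# 3. Preparation: categorical dtype and an 80/20 train/test split.
data["gender"] = data["gender"].astype("category")
X = data[["age", "gender"]]
y = data["income_over_50K"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# 4. Preprocessing: one-hot encode gender, pass age through unchanged.
preprocess = ColumnTransformer(
    [("gender", OneHotEncoder(), ["gender"])], remainder="passthrough")
X_train_enc = preprocess.fit_transform(X_train)
X_test_enc = preprocess.transform(X_test)

# 5. Fairness-aware training: demographic parity enforced via
#    Fairlearn's ExponentiatedGradient reduction.
mitigator = ExponentiatedGradient(
    LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X_train_enc, y_train, sensitive_features=X_train["gender"])

# 6. Predict, then evaluate accuracy and selection rate per gender group.
y_pred = mitigator.predict(X_test_enc)
mf_mitigated = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test, y_pred=y_pred,
    sensitive_features=X_test["gender"])
print(mf_mitigated.by_group)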
9. Plots
1. Extract Metrics:
•Retrieves the accuracy and selection rate by group from a MetricFrame object (mf_mitigated). This object holds the results of the fairness assessment, showing how these metrics vary across groups defined by sensitive features (such as gender).
2. Plot Configuration:
•Creates a figure with two subplots (arranged in 1 row and 2 columns) with a figure size of 14 by 5 inches, allowing a side-by-side comparison of the two metrics.
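A sketch of this plotting step, assuming Matplotlib; since mf_mitigated is not shown on the slide, illustrative per-group values stand in for its by_group results:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative per-group metrics; in the demo these would come from
# mf_mitigated.by_group (the Fairlearn MetricFrame result).
by_group = pd.DataFrame(
    {"accuracy": [0.91, 0.89], "selection_rate": [0.48, 0.52]},
    index=["female", "male"])

# 1 row x 2 columns, 14x5 inches: side-by-side comparison of the metrics.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
by_group["accuracy"].plot.bar(ax=ax1, title="Accuracy by group")
by_group["selection_rate"].plot.bar(ax=ax2, title="Selection rate by group")
ax1.set_ylabel("accuracy")
ax2.set_ylabel("selection rate")
fig.tight_layout()
fig.savefig("fairness_metrics.png")
```

Bar charts make per-group gaps easy to spot: a large height difference in the selection-rate panel signals a demographic-parity violation.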
10. Case Study by Rishabh Gupta
Automobile industry case study: Mercedes-Benz.
Creating intelligent dashboards with present-day AI.