The mass adoption of AI in our everyday lives poses real threats to individuals, despite AI's ability to extract solutions from large amounts of data in a fraction of the time it would take a human with our more limited abilities.
The ever-growing use of and demand for AI in health, education, business, transportation, security, agriculture and the like requires that these AI systems be regulated.
Responsible machine learning is an effort to police AI systems so that they meet human social norms (humans' ethical judgments). This ensures that we build trust and confidence in AI models centered around human ethics.
Areas to look at in ensuring responsible ML: explainability, accountability, transparency, privacy & robustness, safety & reliability, and fairness.
Accountability: Everyone involved in the pipeline that creates an AI system should, at any time, be able to be held accountable for the decisions the AI makes and their impact on society. There should be some form of traceability so that issues can be traced back to their source throughout the AI life cycle.
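Traceability can start with something as simple as an audit log that ties every prediction back to a model version and its input. The sketch below is one minimal approach; all the field names and the model version string are invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal audit-trail sketch: every prediction is logged with enough
# context to trace it back to a model version and its input.
# Field names here are hypothetical, not from any particular framework.

audit_log = []

def log_prediction(model_version: str, features: dict, prediction) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input so it can be matched later without storing raw PII.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    audit_log.append(record)
    return record

entry = log_prediction("loan-model-v1.2", {"income": 50000}, "approved")
print(entry["model_version"], entry["prediction"])
```

In a real pipeline this log would be written to durable, append-only storage so that a disputed decision can be reconstructed months later.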
Transparency: This reinforces trust through disclosure. Examples are laws that require that users have the right to access their data and to know the basis on which an AI system made a particular prediction about them. Users should also know whether they are actually chatting/communicating with a bot or a human being.
IBM OpenScale, for example, monitors models so that developers can know and understand why a model made a particular prediction.
Privacy & Robustness: This looks at safeguarding consumers' privacy and data rights. Homomorphic encryption is a promising technology that helps keep users' data from being leaked: it allows computations to be performed on encrypted data without decrypting it, so third parties have access only to the relevant results and nothing more. This is especially suitable for the health and finance sectors.
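As a toy illustration of the homomorphic idea (not production cryptography), textbook unpadded RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a valid ciphertext of the product of the plaintexts. The tiny key below is chosen purely for the demo; real systems would use a vetted fully homomorphic library.

```python
# Toy demonstration of a homomorphic property: textbook (unpadded) RSA
# is multiplicatively homomorphic. NOT secure -- illustration only.
# Tiny hypothetical key: n = 61 * 53 = 3233, e = 17, d = 2753.

n, e, d = 3233, 17, 2753

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 6, 7
# A third party can multiply the ciphertexts without ever decrypting them.
c_product = (encrypt(a) * encrypt(b)) % n

# Decrypting the combined ciphertext recovers a * b.
print(decrypt(c_product))  # 42
```

The key property is that the untrusted party never sees `a` or `b`, only their encryptions, yet still produces a useful encrypted result for the data owner to decrypt.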
Explainability: The AI system should allow its inferences or decisions to be checked. For example, why did it choose to employ people named Winifred, or grant loans to a certain class of men over women even when both parties had similar qualifications?
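For a simple model, an explanation can be as direct as decomposing the score into per-feature contributions. The sketch below uses made-up feature names and weights for a hypothetical linear loan-scoring model; each contribution is just weight times feature value.

```python
# Minimal sketch of explaining a linear model's decision by
# decomposing the score into per-feature contributions.
# Feature names, weights and values are invented for illustration.

weights = {"income": 0.6, "credit_history": 0.3, "existing_debt": -0.5}
applicant = {"income": 0.8, "credit_history": 0.9, "existing_debt": 0.4}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Report features sorted by how strongly they influenced this decision.
for feature, value in sorted(contributions.items(),
                             key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature}: {value:+.2f}")
print(f"total score: {score:.2f}")
```

Real-world models are rarely linear, but the same idea underlies attribution methods such as SHAP: assign each feature a share of the prediction so a reviewer can audit it.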
Safety & Reliability: How safe are the users of your AI system? An example is self-driving cars and the like: can we rely on them to pilot us safely to our destinations?
Fairness: This ensures that models are fair across the whole population, encouraging an inclusive society while eliminating inequalities in employment, lending, disability and the like. A system to moderate or check for biases would be a great help.
Bias in models occurs when they make inferences that discriminate against certain or under-represented groups. This can be seen across age, nationality, sex, gender and race.
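One common fairness check is demographic parity: the rate of positive predictions should be similar across sensitive groups. A minimal pure-Python sketch, with invented predictions and group labels, shows the idea behind what toolkits like Fairlearn compute:

```python
# Minimal demographic-parity check: compare positive-prediction
# rates across sensitive groups. Data below is invented.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def selection_rate(preds, grps, group):
    members = [p for p, g in zip(preds, grps) if g == group]
    return sum(members) / len(members)

rate_a = selection_rate(predictions, groups, "A")
rate_b = selection_rate(predictions, groups, "B")

# Demographic parity difference: 0.0 means both groups are selected
# at the same rate; larger gaps signal potential bias to investigate.
dpd = abs(rate_a - rate_b)
print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}, gap: {dpd:.2f}")
```

A nonzero gap is not proof of unfairness on its own, but it flags where a model's behavior should be examined more closely.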
Intentional Bias: This form of bias arises from the influence of the creators/engineers throughout the AI's pipelines. If the creators are racist or hold stereotypes, the model can be made to behave likewise in the inferences it makes.
Unintentional Bias: This form of bias is usually associated with the type and availability of data: garbage in, garbage out.
Recommended actions to take:
Teams should ensure that they know and understand their institution's guidelines and keep abreast of national and international regulations.
The AI system should be human-centric (aligned with the norms and values of all user groups).
Document and keep very detailed records of the institution's design and decision-making processes.
Clearly spell out the role of each team member in the AI life cycle.
Mitigating tools/solutions:
- Microsoft's Fairlearn toolkit
- IBM's AI Fairness 360 toolkit