AI gender bias and cyberbullying
BY TERESA NAIDOO
{ WIseAI }
The biggest problem we face today is that male developers embed their own biases at the various stages of AI development, and the only way to bridge the gap is to involve women, including women from diverse backgrounds, in the creation of AI tools. There are many examples of biases in AI impeding the success of women and people of colour.
Biases can creep into a system in several ways. AI systems make decisions based on data, which can encode biased human judgements or reflect historical and social inequities. AI biases are a human responsibility. They hurt those who are discriminated against, but also those who are denied participation in the economy and society.
Most alarming is that women and children fall victim to cybercriminals. Cyberbullying takes place online through smartphones and tablets, usually on social media sites, messaging apps, gaming sites and chat rooms. It comes in the form of harassment, denigration, impersonation or cyberstalking.
This problem is highly visible in African countries. In The Gambia, social media platforms are not as widely used as in Western countries; most people communicate through WhatsApp, which is owned by Facebook. This platform is therefore the natural starting point for detecting online harassment, although the problem reaches far beyond any single platform.
Young Gambian women and girls fall victim to cyber harassment and, as they report, they do not know how to address it, because a culture of silence prevails in their country and when they do speak out, the blame falls on them. The role of women in Gambian culture is often underestimated: girls are welcomed into the family, but less is expected of them than of boys. Girls can suffer early discrimination in education and socialisation, while their mothers may be victimised for having daughters.
Cyberbullying affects women in the same way. A majority of Gambian women and girls report that they have been targeted on social media platforms, which use AI to manage messages and track behaviour. The country currently has no policies or regulations addressing online crimes, so there are no legal consequences for such actions. Yet these problems are manageable.
Charitable organisations that help victims of cyberbullying are appealing to the government to come together and hear the stories of those who have been harmed. Messages and behaviour patterns collected from these interactions could be used to train AI systems to recognise, prevent and block inappropriate activity.
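To give a sense of what "training AI on collected messages" could mean in practice, here is a minimal sketch of a word-frequency classifier that learns from labelled examples and flags similar messages. Everything in it, the example messages, the labels and the function names, is invented for illustration; a real moderation system would need large, carefully labelled datasets, modern language models and human review.

```python
from collections import Counter

# Hypothetical toy dataset for illustration only: a real system would be
# trained on thousands of messages labelled with the help of victims' reports.
TRAINING = [
    ("you are worthless and ugly", "abusive"),
    ("nobody likes you go away", "abusive"),
    ("i will expose your photos", "abusive"),
    ("see you at the meeting tomorrow", "benign"),
    ("thanks for your help today", "benign"),
    ("happy birthday hope you have fun", "benign"),
]

def train(examples):
    """Count word occurrences per label (a minimal naive-Bayes-style model)."""
    counts = {"abusive": Counter(), "benign": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text, smoothing=1.0):
    """Return the label whose training vocabulary best matches the message."""
    scores = {}
    for label, counter in model.items():
        total = sum(counter.values()) + smoothing * len(counter)
        score = 1.0
        for word in text.lower().split():
            # Smoothed per-word probability under this label.
            score *= (counter[word] + smoothing) / total
        scores[label] = score
    return max(scores, key=scores.get)

model = train(TRAINING)
print(classify(model, "you are worthless"))         # -> abusive
print(classify(model, "thanks for the fun meeting"))  # -> benign
```

The point of the sketch is the pipeline, not the model: reports gathered from victims become labelled data, and the resulting classifier can flag or block new messages before they reach their target.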
But male AI creators often do not see online harassment as a key problem, which is why women should get the chance to build the products and services of the future and make the AI life cycle more trustworthy. That requires a shift in how society perceives the role of women. Technology can change our lives for the better, but if we are not sensible, it can damage human existence.
A gender-responsive approach to innovation will help to rectify the bias already present in AI systems. Women are often considered more attentive to the ethical, social and political dimensions of their work on AI. The internet must become a safer space for all users.