The COVID-19 lockdown saw a spurt in online harassment and domestic violence against women as people spent more time on the Internet.
A Web Foundation survey highlights that 52% of young women and girls reported experiencing online abuse, including threatening messages, sexual harassment, and the sharing of private images without consent. In an endeavour to make online spaces safer for women, Richi Nayak, a machine learning expert and an alumna of the Indian Institute of Technology (IIT) Roorkee, has developed an algorithm that identifies and reports misogynistic posts on social media platforms.
Her research demonstrates how STEM expertise can be applied to societal issues and reflects her commitment to making life safer for women online.
Richi had been exploring ways to leverage her expertise in machine learning to solve a social issue. She realized that detecting abusive content targeted at women would make online spaces safer for them. Together with her colleague Md Abdul Bashar, she developed an algorithm trained to understand the content, context, and intent behind social media posts.
“I have been interested in mathematics since a young age. I would like to thank my late supervisor Prof JD Sharma for introducing me to the field of machine learning during my post-graduation at IIT Roorkee. I was also fortunate to receive guidance and mentorship from IIT Roorkee’s expert academicians, including the late Prof GC Nayak, Prof C Mohan, and Prof JL Gaindhar. It was instrumental in motivating me to take up a career in research to address societal issues,” said Richi Nayak, a computer science professor at Queensland University of Technology, Australia.
In her approach, the model is first trained on general text from datasets like Wikipedia, then on user-review data containing somewhat abusive language, and finally on a large dataset of tweets. Beyond equipping the model with linguistic capability, the researchers taught it to distinguish between misogynistic and non-misogynistic tweets.
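The end result of such a pipeline is a classifier that flags individual posts. The published system reportedly uses a deep language model trained in the stages described above; as a simplified illustration only, the sketch below uses a basic TF-IDF plus logistic regression stand-in with invented toy data (none of the examples, labels, or names come from the actual research).

```python
# Illustrative sketch: a binary classifier that flags posts as
# misogynistic or not. This is NOT the researchers' model -- it is a
# simple stand-in to show the classification step, using hypothetical
# toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts: 1 = misogynistic, 0 = not.
train_texts = [
    "women should not be allowed to speak",
    "she only got the job because she is a woman",
    "go back to the kitchen",
    "great analysis in this thread",
    "congratulations on the promotion",
    "looking forward to the conference talk",
]
train_labels = [1, 1, 1, 0, 0, 0]

# Word and bigram features feed a logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

def flag_post(text: str) -> bool:
    """Return True if the post is predicted to be misogynistic."""
    return bool(model.predict([text])[0])
```

In practice, the hard part the research addresses is context and intent: the same words can be abusive or benign depending on how they are used, which is why the actual system is trained in stages on progressively more domain-specific text rather than on a small bag-of-words model like this one.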
Her research marks a paradigm shift from users reporting suspected cases of harassment to automatically detecting and reporting abusive content on social media.