It is cool to do interdisciplinary research, but what does it take? Computational Social Science as an example.
During my postdoc at Caltech, I started to use machine learning and artificial intelligence (ML/AI) to study social science problems. I have enjoyed talking to students and senior researchers in social science who share the same interest. Coming from a computer science background, it has been quite an education to align my language with experts in a different field and to look at problems differently. For example, even though people in ML are concerned about issues like echo chambers, confirmation bias, hate speech, and social media trolling, conducting research on mitigating them takes more than our usual routine. Here are some of my thoughts on the challenges we face. The first two concern specific technical difficulties, and the last two are more general, about building research relationships and the research process.
In social media analysis, we typically collect data through platform APIs or by scraping websites. That may be fine if our problem is, say, link prediction for generating better friend recommendations. However, this approach has serious completeness and efficiency limitations when it comes to monitoring fast-evolving discussions and studying how platforms moderate and manage content. This paper describes the challenge in more detail. Surprisingly, there was no good solution for efficiently and reliably collecting real-time social media data before, so we developed such a method ourselves, and it now benefits many of our projects.
Another issue is the large domain shift between different time periods or different discussion spaces. Machine learning usually relies on the assumption that the distribution of the training data matches that of the test data. However, domain or distribution shift is everywhere. In many problems, historical data is not representative enough, whether because of difficulties in data collection or simply the nature of the problem. This prevents us from taking data from one domain and generalizing our conclusions to another. For example, at the beginning of the #MeToo movement, the frequency of the keyword “MeToo” increased so fast that no model trained on historical data could capture the trend. It also shows the importance of having access to real data instead of relying on synthetic or historical data in our research. Domain shift is a fundamental problem in machine learning. Check my related projects here.
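To make the distribution-shift point concrete, here is a minimal sketch (with made-up toy posts, not our actual data or pipeline) that measures how far the word distribution of a current time window has drifted from a historical one, using KL divergence:

```python
from collections import Counter
import math

def token_distribution(posts):
    """Normalized word-frequency distribution over a list of posts."""
    counts = Counter(w.lower() for p in posts for w in p.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q), smoothing words that never appear in q."""
    return sum(pw * math.log(pw / q.get(w, eps)) for w, pw in p.items())

# Toy illustration: a keyword that was absent historically explodes in
# the current window, producing a large divergence between the periods.
historical = ["the weather is nice", "nice movie last night"]
current = ["metoo metoo support survivors", "metoo stories everywhere"]

drift = kl_divergence(token_distribution(current), token_distribution(historical))
print(f"KL divergence between periods: {drift:.2f}")
```

A monitor could recompute this divergence over sliding windows and flag a retraining or recollection step whenever it spikes, which is exactly the situation a model fit only on historical data cannot anticipate.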
Coming from an ML background, I am used to having access to “ground truth,” whether it is the image label in image classification or the expert policy in imitation learning. I am used to having clear evaluation criteria, such as accuracy, F-1 score, or precision@k. In Computational Social Science, even though we also need to evaluate how good a model is, the criterion and the ground truth are not as obvious and must be chosen carefully. For example, if we want to create Trustworthy Social Media, instead of running A/B tests on different policies to maximize user engagement, what criterion should we use to decide whether we have achieved our goal? Is it the diversity of opinions, users’ satisfaction as measured by a survey, or the overall sentiment? When modeling topic evolution on real social media data, how should we do model selection? Should we rely more on common sense or on quantitative measures like perplexity? You may have noticed that some criteria are objective and some are subjective. We may need to translate subjective criteria into objective numbers for quantitative analysis. But taking human perspectives into account is a must if we go back to the core problem we care about here: creating Trustworthy Social Media to ensure a better online environment for everyone.
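To show how mechanical the objective criteria are compared with the subjective questions above, here is a minimal sketch of precision@k on a made-up friend-recommendation example (the usernames and relevance set are hypothetical):

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Toy example: 3 of the top 5 suggestions turn out to be accounts
# the user actually follows later.
recommended = ["ana", "bob", "cho", "dee", "eli", "fay"]
relevant = {"ana", "cho", "eli", "gus"}
print(precision_at_k(recommended, relevant, k=5))  # → 0.6
```

The hard part in Computational Social Science is not computing a number like this but deciding whether the set `relevant` is even the right notion of success.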
Before my collaboration with social scientists, I had many assumptions about the kind of research they do. I assumed they care more about causality than about prediction. I assumed they want to interpret a model more than to use it. I thought they only use “small data” from human-subject experiments. This may be true for some researchers, but it really depends on the actual problems they are working on. For our projects on dynamic keyword search, predicting the trending words for the next period helps us improve our monitors for data collection. We have a huge social media dataset to deal with, even though we will rely on human-subject experiments to test some intervention strategies. There is also a strong need for accurate uncertainty quantification in many predictive problems. Therefore, if you are interested in social science questions but hold assumptions that keep you from talking to domain experts, it’s time to throw away those biased assumptions and start the conversation.
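As a sketch of what uncertainty quantification can look like in its simplest form, here is a percentile bootstrap on a toy sample (the "toxic post" flags are invented for illustration; this is not a method from our projects):

```python
import random

def bootstrap_ci(samples, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(samples) for _ in samples]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2))]
    return lo, hi

# Toy example: uncertainty around the fraction of posts flagged as toxic.
flags = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 1 = flagged
mean = lambda xs: sum(xs) / len(xs)
low, high = bootstrap_ci(flags, mean)
print(f"point estimate {mean(flags):.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

Reporting an interval rather than a single point estimate is often what domain experts actually need before acting on a prediction.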
Making the (social media) world a better place is a very complicated problem. There are researchers from both the social science world and the ML/AI/engineering world who want to commit to this. But we seem to be missing some essential components in our research to really make a change.
From the social science point of view, most research in Computational Social Science analyzes the outcomes of engineered systems directly. For example, we tell the public how social media platforms facilitate polarization and how they may affect our beliefs and preferences in ways we are not aware of. But these platforms are built on algorithms and software infrastructure that optimize objectives unknown to the public. Rethinking the pipeline from the beginning and studying the design of the platforms themselves is necessary, and that may require help from the engineering field.
From a pure engineering perspective, the way we think about our research may need to change. Previously, researchers conducted fundamental research and published the results without worrying about actual downstream applications. However, in a software engineering pipeline, if researchers do not note the potential negative consequences of a method being misused, the engineering and commercial sectors may not either. (This is very much inspired by Charles Isbell’s and Hanna Wallach’s talk at the last NeurIPS conference.) To mitigate the problem from the beginning, we need help from the social science field to adopt useful ways of thinking about our ML/AI research.
Therefore, to develop Trustworthy Social Media, we should join forces!