Title: Combatting Abusive Behavior in Online Communities Using Cross-Community Learning
Eshwar Chandrasekharan
Ph.D. student in Computer Science
School of Interactive Computing
College of Computing
Georgia Institute of Technology
Date: Friday, February 8, 2019
Time: 3:00-5:00 PM EST
Location: TSRB 223
---
Committee:
Dr. Eric Gilbert (Advisor, School of Interactive Computing, Georgia Institute of Technology),
Dr. Amy Bruckman (School of Interactive Computing, Georgia Institute of Technology),
Dr. Munmun De Choudhury (School of Interactive Computing, Georgia Institute of Technology),
Dr. Jacob Eisenstein (School of Interactive Computing, Georgia Institute of Technology),
Dr. Cliff Lampe (School of Information, University of Michigan).
---
Summary:
Since its earliest days, harassment and abuse have plagued the Internet. Recent research has focused on in-domain, machine learning approaches to detecting abusive content; these approaches face several challenges, most notably the need for vast amounts of labeled training data. My work addresses this bottleneck by introducing a new class of machine learning tools, based on cross-community learning, to combat abusive behavior in online communities. First, I built detection tools based on cross-community linguistic similarity. Next, I found that norms overlap widely across distinct online communities, suggesting that automated moderation tools can gain traction by borrowing training data from communities that share similar values.
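To make the idea concrete, here is a minimal sketch of cross-community learning, assuming a standard text-classification pipeline (scikit-learn TF-IDF features with logistic regression; the abstract does not specify which models the thesis uses). The community comments and labels below are hypothetical placeholders.

```python
# Minimal sketch of cross-community learning: train on labels
# borrowed from a "source" community whose norms overlap with the
# target community's, then score the target without target labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled comments from the source community.
source_comments = [
    "you are an idiot and should leave",    # norm-violating
    "thanks for sharing, great write-up",   # acceptable
    "get out of this sub, nobody wants you",
    "interesting point, here is a source",
]
source_labels = [1, 0, 1, 0]  # 1 = norm-violating, 0 = acceptable

# Train once on the source community...
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(source_comments, source_labels)

# ...then score unlabeled comments from a *different* target
# community, sidestepping target-specific labeled data.
target_comments = [
    "nobody wants your garbage opinion here",
    "could you link the original paper?",
]
print(clf.predict(target_comments))  # e.g. [1 0]
```

The key design point is that the classifier never sees labels from the target community; it relies entirely on the overlap in norms between the two communities.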
The abuse models I built will enable interactive machine learning systems that sidestep the need for site-specific classifiers. I propose to build a system prototype that brings these pieces together as open-source software for detecting abusive behavior online through cross-community learning, thereby helping to socio-algorithmically govern speech on large-scale Internet platforms like Reddit.
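Continuing the hypothetical sketch above (reusing its `clf` and `target_comments`), one way such a system could stay interactive rather than fully autonomous is to route high-scoring comments to a human moderation queue. The threshold and the triage function here are illustrative assumptions, not the proposed prototype's actual design.

```python
# Hypothetical triage step: surface likely norm violations for
# human moderator review instead of acting on them automatically.
REVIEW_THRESHOLD = 0.5  # arbitrary cutoff chosen for illustration

def triage(comments, clf, threshold=REVIEW_THRESHOLD):
    """Route likely norm violations to a moderator review queue."""
    queue = []
    for text in comments:
        # Probability of the norm-violating class (label 1).
        p_abusive = clf.predict_proba([text])[0][1]
        if p_abusive >= threshold:
            queue.append((p_abusive, text))
    # Highest-confidence violations first, for moderator attention.
    return sorted(queue, reverse=True)

for score, text in triage(target_comments, clf):
    print(f"{score:.2f}  {text}")
```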