CASE STUDY: Machine Vision for Content Moderation
- Scagility
- Sep 10, 2024
- 1 min read
Client
Media Company
Problem / Opportunity
A small online media company that accepts user-generated content was being overwhelmed by the volume of submissions and the need to screen each one for inappropriate content. It relied on a small army of offshore content moderators who worked 8-10 hour shifts manually reviewing each piece of content before approving it for publication. The work was monotonous, and reviewers became prone to error once fatigue set in. The larger risk to the publisher was that a failure to adequately moderate its content could result in the loss of its merchant account and the inability to accept credit card transactions.

Resolution & Results
Manual processing of each user-generated submission was inherently unscalable. The Scagility team looked for opportunities to reduce manual moderation by using machine vision to automate the review of each submission, flagging exception cases for human review.
Using open-source models, the Scagility team developed a machine vision prototype within two weeks. The prototype reduced human moderation volume by 96% by automatically clearing most submissions and flagging only those that appeared inappropriate for human review. The model's accuracy improved over time as human reviewers validated its decisions.
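The case study does not name the specific models, libraries, or thresholds involved. As a minimal sketch of the triage pattern described above, assuming an off-the-shelf open-source image classifier from the Hugging Face Hub and a hypothetical confidence threshold, the flag-for-review logic might look like this:

```python
from transformers import pipeline

# Illustrative open-source classifier; the case study does not name the
# actual models used. This particular model labels images "nsfw" or "normal".
classifier = pipeline("image-classification",
                      model="Falconsai/nsfw_image_detection")

# Hypothetical threshold: submissions scoring above it go to a human reviewer.
REVIEW_THRESHOLD = 0.2

def triage(image_path: str) -> str:
    """Auto-approve clearly safe images; flag the rest for human review."""
    scores = classifier(image_path)  # e.g. [{"label": "nsfw", "score": 0.03}, ...]
    nsfw_score = next((s["score"] for s in scores if s["label"] == "nsfw"), 0.0)
    return "human_review" if nsfw_score >= REVIEW_THRESHOLD else "auto_approve"
```

In a setup like this, logging each flagged item alongside the human reviewer's eventual decision yields labeled training data, which is one way the model's accuracy could improve over time as described above.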