Machine learning has brought huge benefits in many domains and generated hundreds of billions of dollars in revenue. However, the second-order consequences of machine learning-based approaches can be devastating.
This article by Kashmir Hill in the New York Times is exceptional reporting on a very sensitive topic: the detection of child sexual abuse material (CSAM).
As the parent of two young children in the COVID age, I rely on telehealth services and friends who are medical professionals to help with anxiety-provoking (yet often trivial) medical situations. I often send photos of weird rashes or bug bites to determine whether they are anything to worry about.
In the article, a father took a photo of his child to send to a medical professional. The photo was uploaded to Google Photos, where a machine learning algorithm flagged it as potentially abusive material.
Google ended up suspending his account, permanently deleting his Gmail and his Google Fi phone service, and flagging him to law enforcement.
Just imagine how you would deal with losing your primary email account, your phone number, and your authenticator app all at once.
Finding and reporting abuse is critical. But, as the article illustrates, ML-based approaches often lack context. A photo taken for a medical professional may share many visual features with one depicting abuse, and the model sees only the pixels, not the purpose.
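To make that failure mode concrete, here is a minimal sketch in Python of what a fully automated pipeline can look like. Every name in it (`classify_image`, `handle_upload`, `FLAG_THRESHOLD`, and so on) is hypothetical rather than Google's actual system; the point is structural: the model scores raw pixels, and irreversible actions follow that score with no human review in between.

```python
"""Hypothetical sketch of a fully automated moderation pipeline.
None of these names reflect any real company's systems; the point is
that nothing sits between a model's score and irreversible actions."""

FLAG_THRESHOLD = 0.9  # assumed cutoff above which a photo is auto-flagged


def classify_image(pixels: bytes) -> float:
    """Stand-in for a trained model: returns an abuse-likelihood score.

    Its only input is pixel data. It cannot know who took the photo,
    why it was taken, or that it was destined for a pediatrician.
    """
    return 0.95  # dummy score for illustration; a real model runs inference


def suspend_account(user_id: str) -> None:
    print(f"[auto] suspended {user_id}: email, phone, authenticator gone")


def report_to_law_enforcement(user_id: str) -> None:
    print(f"[auto] report filed for {user_id}")


def handle_upload(user_id: str, pixels: bytes) -> None:
    score = classify_image(pixels)
    if score > FLAG_THRESHOLD:
        # No human review between the model's score and these
        # irreversible, high-stakes actions.
        suspend_account(user_id)
        report_to_law_enforcement(user_id)


handle_upload("parent-123", b"photo of a rash, sent to a doctor")
```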
Before we delegate more and more of our day-to-day lives and decisions to machine learning-based algorithms, we may want to consider the consequences of removing humans from the loop.