Automatic image moderation using Amazon Rekognition

Posted on 2020-02-10 by Karol Bąk
Tags: ruby, aws, rails

AWS provides a huge number of cloud services. Today we will focus on Amazon Rekognition, which can help us automatically moderate uploaded images. By adding a custom validation we will prevent users from uploading inappropriate content like nudity or violence.

Amazon Rekognition

The Amazon Rekognition service uses machine learning to analyze images and videos. It lets you identify objects, recognize text, analyze faces and more. We will use the content moderation feature, which detects unsafe content in images. On the free tier you can analyze up to 5,000 images per month at no cost.

Getting started

In order to use Amazon Rekognition we need the aws-sdk-rekognition gem. It’s included in aws-sdk, but you can install it standalone if you’re not using other services. Next we need to set up our AWS credentials. The easiest way to do it is via the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_REGION. If you haven’t generated your credentials yet or would like to use a different initialization method, take a look at the AWS SDK for Ruby documentation.
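
For reference, here's a minimal setup sketch. The region value is just an example; if the environment variables above are set, Aws::Rekognition::Client.new with no arguments will pick them up automatically:

# Gemfile
gem 'aws-sdk-rekognition'

# or configure the client explicitly instead of relying on environment variables
require 'aws-sdk-rekognition'

client = Aws::Rekognition::Client.new(
  region: 'eu-west-1', # example region
  credentials: Aws::Credentials.new(
    ENV['AWS_ACCESS_KEY_ID'],
    ENV['AWS_SECRET_ACCESS_KEY']
  )
)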

Let’s try some simple code:

require 'aws-sdk-rekognition'
require 'awesome_print' # provides the ap pretty-print helper

# initialize client
client = Aws::Rekognition::Client.new
# pick an image and read it as binary
unsafe_file = File.binread('unsafe.jpg')
# detect unsafe content
ap client.detect_moderation_labels({ image: { bytes: unsafe_file }})

safe_file = File.binread('safe.jpg')
ap client.detect_moderation_labels({ image: { bytes: safe_file }})

As a result we receive moderation labels if the image contains unsafe elements, and an empty list if it’s considered a safe picture.

#<#<Class:0x00007f98930fdce0>:Aws::Rekognition::Types::DetectModerationLabelsResponse:0x7f98952af438
    human_loop_activation_output = nil,
    moderation_labels = [
        [0] #<#<Class:0x00007f989312ca18>:Aws::Rekognition::Types::ModerationLabel:0x7f98952af398
            confidence = 99.58805847167969,
            name = "Suggestive",
            parent_name = ""
        >,
        [1] #<#<Class:0x00007f989312ca18>:Aws::Rekognition::Types::ModerationLabel:0x7f98952aeec0
            confidence = 99.58805847167969,
            name = "Partial Nudity",
            parent_name = "Suggestive"
        >
    ],
    moderation_model_version = "3.0"
>
#<#<Class:0x00007f98930fdce0>:Aws::Rekognition::Types::DetectModerationLabelsResponse:0x7f9895050548
    human_loop_activation_output = nil,
    moderation_labels = [],
    moderation_model_version = "3.0"
>
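
If you only need the label names, you can map over the response - the output below corresponds to the same unsafe image as above:

response = client.detect_moderation_labels({ image: { bytes: unsafe_file }})
response.moderation_labels.map(&:name)
# => ["Suggestive", "Partial Nudity"]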

Currently Amazon Rekognition can detect 4 types of unsafe content, with the following subcategories:

  1. Explicit Nudity: Nudity, Graphic Male Nudity, Graphic Female Nudity, Sexual Activity, Illustrated Nudity Or Sexual Activity, Adult Toys
  2. Suggestive: Female Swimwear Or Underwear, Male Swimwear Or Underwear, Partial Nudity, Revealing Clothes
  3. Violence: Graphic Violence Or Gore, Physical Violence, Weapons Violence, Weapons, Self Injury
  4. Visually Disturbing: Emaciated Bodies, Corpses, Hanging

As you can see, it also returns a confidence level. By default the API uses 50.0 as the threshold - it won’t return any labels with lower confidence. We can change that by passing an additional argument:

client.detect_moderation_labels({ image: { bytes: unsafe_file }, min_confidence: 40.0 })
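
If you want to block only some of these categories, or enforce your own stricter confidence bar on top of the API threshold, you can filter the returned labels yourself. Here's a minimal sketch - the blocked categories and the 80.0 cut-off are example choices, not recommendations:

BLOCKED_CATEGORIES = ['Explicit Nudity', 'Violence', 'Visually Disturbing'].freeze

labels = client.detect_moderation_labels({ image: { bytes: unsafe_file }, min_confidence: 40.0 }).moderation_labels

# top-level labels have an empty parent_name, so fall back to the label's own name
flagged = labels.select do |label|
  category = label.parent_name.to_s.empty? ? label.name : label.parent_name
  BLOCKED_CATEGORIES.include?(category) && label.confidence >= 80.0
end

flagged.any? # => true if the image should be rejected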

ActiveRecord and ActiveStorage integration

Let’s take a look at a real world example. We have an ActiveRecord model with an ActiveStorage attachment called image. We want to validate the content of this image after upload and return errors if it contains unsafe content. We can do it by writing a custom validation method:

NOTE: Calling external services in callbacks/validations generally isn’t a good idea. Consider using a form object or some other solution in your project.

has_one_attached :image

validate :image_moderation

def image_moderation
  # only run when a new image is being attached in this save
  new_image = attachment_changes['image'].try(:attachable)
  return if new_image.blank?

  # initialize the client; you could move it to a singleton, a class variable or any other place - it's just a PoC
  client = Aws::Rekognition::Client.new

  # detect labels using the new ActiveStorage attachable; for a form upload this is
  # an uploaded file object, which the AWS SDK can read as the image bytes
  moderation_labels = client.detect_moderation_labels({ image: { bytes: new_image }}).moderation_labels

  # add a validation error if unsafe content was detected
  errors.add(:image, "contains forbidden content - #{moderation_labels.first.name}") if moderation_labels.present?
end
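
A quick way to see this validation in action is from the Rails console or a test. The Post model and the fixture path below are hypothetical, and the exact label in the error message depends on the image:

require 'rack/test'

post = Post.new
post.image.attach(Rack::Test::UploadedFile.new('spec/fixtures/unsafe.jpg', 'image/jpeg'))

post.valid?
# => false
post.errors.full_messages
# => ["Image contains forbidden content - Partial Nudity"]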

Now if one of our users tries to upload an inappropriate image through a form, they will receive the same kind of error, for example: “Image contains forbidden content - Partial Nudity”.
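
If you'd rather follow the note above and keep the external call out of the model, here's a minimal sketch of a small service object - the class name and interface are hypothetical:

class ImageModerationService
  def initialize(client: Aws::Rekognition::Client.new)
    @client = client
  end

  # accepts anything the SDK can read as image bytes (a String or an IO-like object)
  # and returns the detected moderation labels
  def call(image)
    @client.detect_moderation_labels({ image: { bytes: image }}).moderation_labels
  end
end

# usage, e.g. from a controller or a form object:
# labels = ImageModerationService.new.call(params[:image])
# reject the upload when labels.any?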

Summary

Amazon Rekognition is a really powerful tool. In our case we used a custom validation method to detect unsafe content in uploaded images, but the service opens up tons of other possibilities. I strongly recommend taking a closer look at it.