Cloud Compliance with AreBOT: Designing Complex Compliance Policies

17.10.2017

This is a follow-up to our series of posts on cloud compliance in AWS environments. First, we discussed the pros and cons of using the AWS Config service to ensure that all resources have the desired configurations. Then, we briefly introduced AreBOT, the tool we developed here at kreuzwerker to simplify the design, evaluation, and enforcement of compliance rules in complex AWS clouds.

In this post, we will take a closer look at the configuration language to define compliance policies using AreBOT.

A configuration file to rule them all

All the AreBOT components and tasks are specified in a single, easy-to-read configuration file. Let’s walk through a scenario to show how to write one.

The admins of an AWS cloud infrastructure want all Elastic Compute Cloud (EC2) instances, Security Groups, and Elastic Block Store (EBS) volumes to be tagged with a “ProjectName”. The value of this tag must follow an internal naming convention. Furthermore, security standards require that data on EBS volumes is always encrypted. The cloud infrastructure spans two AWS accounts, and resources may reside in different regions. In the case of a compliance violation, the admins want to be alerted immediately via e-mail.

This scenario can be easily accommodated using the simple yet powerful AreBOT configuration language. First, we formulate the compliance policy for the EC2 resources, which specifies two tag-compliance rules, one data-encryption rule, and the actions to take in case of violations.

ec2_policy "myECpolicy" { // compliance policy on EC2 resources
  api_call "RunInstances" { // monitor the API Calls that create new EC2 instances
    compliant "Tag.ProjectName" { // compliance rule: tagging requirement
      schema = "^Proj-[0-9][0-9][0-9]$" // e.g., "Proj-007" is a compliant project name
      mandatory = true // all instances must have this tag
      action = [ "notify_admins" ] // actions to trigger if not-compliant
    }
  }
  api_call "CreateVolume" { // monitor the API Calls that create new EBS volumes
    compliant "Tag.ProjectName" { // 1st compliance rule: tagging requirement
      schema = "^Proj-[0-9][0-9][0-9]$"
      mandatory = true
      action = [ "notify_admins" ]
    }
    compliant "Encrypted" { // 2nd compliance rule: encryption requirement
      schema = "true"
      action = [ "notify_admins" ]
    }
  }

  action "notify_admins" { // the action associated with the compliance rules
    email { receiver  = [ "admin@email.com" , "ec2.admin@email.com" ] }
  }
}
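Before rolling out such a policy, it can be worth sanity-checking the schema regex against a few candidate tag values. The following standalone snippet is just an illustration (AreBOT evaluates the schema itself; the function name here is ours):

```python
import re

# The same pattern used in the "Tag.ProjectName" compliance rules above.
PROJECT_NAME_SCHEMA = re.compile(r"^Proj-[0-9][0-9][0-9]$")

def is_compliant_project_name(value: str) -> bool:
    """Return True if the tag value matches the internal naming convention."""
    return PROJECT_NAME_SCHEMA.match(value) is not None

# "Proj-007" is compliant; "Proj-7" (too few digits) and
# "proj-007" (lowercase prefix) are not.
```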

Next, we add a second policy to the configuration file to check the tag compliance of Security Groups.

security_group_policy "mySGpolicy" { // compliance policy on Security Groups
  api_call "CreateSecurityGroup" { // monitor the API Calls that create new Security Groups
    compliant "Tag.ProjectName" { // compliance rule: tagging requirement
      schema = "^Proj-[0-9][0-9][0-9]$"
      mandatory = true
      action = [ "notify_admins" ]
    }
  }

  action "notify_admins" { // the action associated with the compliance rule
    email { receiver  = [ "admin@email.com" , "sg.admin@email.com" ] }
  }
}

Please note that action elements can share the same name as long as they belong to different policy scopes. For instance, the two notify_admins actions specify different behaviors depending on their respective policy - in our case, the list of recipients who receive email notifications.

Finally, the configuration file must indicate which AWS accounts AreBOT is allowed to monitor. This can be easily done as follows.

account "test_account" { // first account, for the test environment
  account_id = "000000000001"
  region = "eu-west-1"
  arebot_role_arn = "arn:aws:iam::000000000001:role/AreBot" // the role that enables AreBOT to access the account
  all_events_queue = "AreBotEventQueue" // the queue where all CloudWatch events are delivered
}

account "prod_account" { // second account, for the production environment
  account_id = "000000000002"
  region = "eu-west-2"
  arebot_role_arn = "arn:aws:iam::000000000002:role/AreBot"
  all_events_queue = "AreBotEventQueue"
}

A couple of comments about the above are in order. First, arebot_role_arn indicates the IAM role in the AWS account that supplies AreBOT with the credentials to do its job. Second, all_events_queue specifies the Amazon SQS queue that collects data about resource configuration changes (as Amazon CloudWatch events) and that is polled by the AreBOT compliance checker (for more details, see our previous post).
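To give an idea of what the compliance checker picks up from that queue, here is a small sketch of ours (not AreBOT's actual code) that inspects an SQS message body and decides whether it carries one of the monitored API calls. The message layout follows the standard "AWS API Call via CloudTrail" CloudWatch event format; the helper name is our own:

```python
import json
import re

# The API calls monitored by the two policies defined above.
MONITORED_CALLS = {"RunInstances", "CreateVolume", "CreateSecurityGroup"}

def relevant_event(message_body: str) -> bool:
    """Return True if an SQS message carries one of the monitored API calls."""
    event = json.loads(message_body)
    detail = event.get("detail", {})
    return (event.get("detail-type") == "AWS API Call via CloudTrail"
            and detail.get("eventName") in MONITORED_CALLS)

# A trimmed-down example of the kind of message the queue delivers:
sample = json.dumps({
    "detail-type": "AWS API Call via CloudTrail",
    "detail": {"eventName": "RunInstances", "awsRegion": "eu-west-1"},
})
```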

Conclusion and How to Get More Information

In this post, we showed how easy it is to configure AreBOT to enforce compliance requirements in a rather simple but realistic scenario. All in all, the process took us a few minutes and fewer than 50 lines of configuration code. Without our tool, using only AWS services, the same scenario would have required implementing a Lambda function for each compliance rule, a proper setup of the AWS Config service, and an additional Lambda function to customize email notifications. And, it goes without saying, these operations would have to be repeated for each AWS account to monitor. That’s indeed a lot of work saved, don’t you think?

If we’ve made you curious about AreBOT and cloud compliance, let’s continue this discussion at DevOpsDays Berlin 2017! It would be a great opportunity to meet in person and share our thoughts and ideas with you.

Or, you know, just stay tuned on this blog for the next posts of this series.

Image credits for the cover image go to:
technode