Creating a snapshot with a description via AWS Lambda

This is my code for creating a snapshot via AWS Lambda.

import boto3
import collections
import datetime

ec = boto3.client('ec2')

def lambda_handler(event, context):
    reservations = ec.describe_instances(
        Filters=[
            {'Name': 'tag-key', 'Values': ['Backup', 'backup']},
        ]
    ).get(
        'Reservations', []
    )

    instances = sum(
        [
            [i for i in r['Instances']]
            for r in reservations
        ], [])

    print "Found %d instances that need backing up" % len(instances)

    to_tag = collections.defaultdict(list)

    for instance in instances:
        try:
            retention_days = [
                int(t.get('Value')) for t in instance['Tags']
                if t['Key'] == 'Retention'][0]
        except IndexError:
            retention_days = 14

        for volume in ec.volumes.filter(Filters=[
            {'Name': 'attachment.instance-id', 'Values': [instance.id]}
        ]):
            description = 'scheduled-%s.%s-%s' % (instance_name, volume.volume_id, datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))

            print 'description: %s' % (description)

        for dev in instance['BlockDeviceMappings']:
            if dev.get('Ebs', None) is None:
                continue
            vol_id = dev['Ebs']['VolumeId']
            print "Found EBS volume %s on instance %s" % (
            vol_id, instance['InstanceId'])

            snap = ec.create_snapshot(
                VolumeId=vol_id,
            )

            to_tag[retention_days].append(snap['SnapshotId'])

            print "Retaining snapshot %s of volume %s from instance %s for %d days" % (
                snap['SnapshotId'],
                vol_id,
                instance['InstanceId'],
                retention_days,
            )


    for retention_days in to_tag.keys():
        delete_date = datetime.date.today() + datetime.timedelta(days=retention_days)
        delete_fmt = delete_date.strftime('%Y-%m-%d')
        print "Will delete %d snapshots on %s" % (len(to_tag[retention_days]), delete_fmt)
        ec.create_tags(
            Resources=to_tag[retention_days],
            Tags=[
                {'Key': 'DeleteOn', 'Value': delete_fmt},
            ]
        )

I got the following error:

'EC2' object has no attribute 'volumes': AttributeError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 34, in lambda_handler
    for volume in ec.volumes.filter(Filters=[
AttributeError: 'EC2' object has no attribute 'volumes'

When I use ec = boto3.resource('ec2') instead of ec = boto3.client('ec2'), I get the description, but other calls such as describe_instances don't work.

So, what is the replacement for volumes when using boto3.client('ec2')?

I've found these folks' solution to be the best to date:

https://blog.powerupcloud.com/2016/02/15/automate-ebs-snapshots-using-lambda-function/

There are examples in their code base of adding descriptions to snapshots.
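
In case that link no longer resolves, here is a minimal sketch of the same idea using the client API; the helper name and the description format (borrowed from the question) are illustrative only:

import boto3
import datetime

ec = boto3.client('ec2')

def snapshot_with_description(instance_name, volume_id):
    # Build a description like the question's scheduled-<name>.<volume>-<timestamp>
    description = 'scheduled-%s.%s-%s' % (
        instance_name, volume_id,
        datetime.datetime.now().strftime('%Y%m%d-%H%M%S'))
    # create_snapshot accepts a Description parameter directly
    return ec.create_snapshot(VolumeId=volume_id, Description=description)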

Automated EBS Snapshots using AWS Lambda & CloudWatch, Overview: this post covers how to automate EBS snapshots for your AWS infrastructure using Lambda and CloudWatch. To set up a Lambda function for creating automated snapshots, you need to: set up the Python script with the necessary parameters, create an IAM role with snapshot create, modify, and delete access, and create a Lambda function with the Python script.

boto3.resource is a higher-level abstraction over the low-level boto3.client, and you are mixing the two. If you are using client.describe_instances, then use client.describe_volumes.
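
For example, a minimal client-only sketch of what the failing ec.volumes.filter(...) call would become (a hedged illustration; the helper name is made up for clarity):

import boto3

ec = boto3.client('ec2')

def volumes_for_instance(instance_id):
    # describe_volumes with an attachment.instance-id filter is the
    # client-side counterpart of resource.volumes.filter(...)
    return ec.describe_volumes(
        Filters=[{'Name': 'attachment.instance-id', 'Values': [instance_id]}]
    ).get('Volumes', [])

# e.g. inside the handler loop:
# for volume in volumes_for_instance(instance['InstanceId']):
#     print(volume['VolumeId'])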

If you want to use resource.volumes, then use resource.instances. I prefer resource.instances because of its powerful filter and abstraction. If you use resources and want to access the underlying client for some reason, you can get the low level client using meta.

ec2 = boto3.resource('ec2')
client = ec2.meta.client

Avoid dealing with reservations and the like; use resource.instances instead. There are plenty of examples if you search for them: a few lines of code, and very readable.
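
For illustration, a rough resource-based sketch along those lines (not the answer author's code; it assumes the question's 'Backup'/'backup' tag-key convention):

import boto3

ec2 = boto3.resource('ec2')

# Iterate tagged instances directly; no reservations to unpack.
instances = ec2.instances.filter(
    Filters=[{'Name': 'tag-key', 'Values': ['Backup', 'backup']}]
)

for instance in instances:
    for volume in instance.volumes.all():
        description = 'scheduled-%s.%s' % (instance.id, volume.volume_id)
        snapshot = volume.create_snapshot(Description=description)
        print(snapshot.id)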

Automating Amazon EBS Snapshot Management with AWS Step Functions: CloudWatch Events integrates with AWS Lambda so you can run a function on the schedule that creates the snapshots; choose Configure Details and give the rule a name and description. Alternatively, create a snapshot lifecycle policy: go to the EC2 console, open Lifecycle Manager under Elastic Block Store, click Create snapshot policy, and give the policy a name and description.

I had the same problem, but in my case I needed to copy all the tags from the EC2 instance to the snapshots. Take a look at my code; it might help you or at least guide you:

https://github.com/christianhxc/aws-lambda-automated-snapshots/blob/master/src/schedule-ebs-snapshot-backups.py

Doing it this way, you just need to make sure that the instance has the "Name" tag so it can be copied to the snapshot. I also needed this because of the CostCenter tag.
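
As a hedged sketch of that approach (not the linked script itself; the helper name is illustrative), the instance's tags can be passed straight through to create_snapshot via TagSpecifications:

import boto3

ec = boto3.client('ec2')

def snapshot_with_instance_tags(instance, volume_id):
    # Copy the instance's tags (Name, CostCenter, ...) onto the snapshot,
    # skipping AWS-reserved keys that start with 'aws:'
    tags = [t for t in instance.get('Tags', []) if not t['Key'].startswith('aws:')]
    return ec.create_snapshot(
        VolumeId=volume_id,
        Description='Backup of %s' % instance['InstanceId'],
        TagSpecifications=[{'ResourceType': 'snapshot', 'Tags': tags}],
    )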

Tutorial: Schedule Automated Amazon EBS Snapshots Using CloudWatch Events: in this tutorial, you create an automated snapshot of an existing Amazon Elastic Block Store (Amazon EBS) volume on a schedule. Creating rules with built-in targets is supported only in the AWS Management Console; for the rule definition, type a name and description. In the Lambda console, navigate to Functions > Create a Lambda Function > Configure function, then set the Name, Description, and Runtime parameters. The code uses the Boto library from the AWS SDK for Python.

I found that the solution online at https://serverlesscode.com/post/lambda-schedule-ebs-snapshot-backups/ did not fit my needs, so I've written the following backup script. This is still my first draft, but I find it much easier to read and a better implementation for me because, rather than targeting instances, I target volumes. Tagging is also improved.

import boto3
import collections
import datetime
import os

ec = boto3.client('ec2')
ec2 = boto3.resource('ec2')
TAG = os.environ['TAG']

def lambda_handler(event, context):
    volumes = ec.describe_volumes(
        Filters=[
            {'Name': 'tag-key', 'Values': [ "Backup", TAG ]},
            {'Name': 'status', 'Values': [ "in-use" ] },
        ]
    ).get(
        'Volumes', []
    )

    print "Found {0} volumes that need backing up".format(len(volumes))

    for v in volumes:
      vol_id = v['VolumeId']
      print(vol_id)
      vol_name = None
      snap = None
      vol_tags = v.get('Tags', None)

      try:
          retention_days = [
              int(t.get('Value')) for t in vol_tags
              if t['Key'] == 'Retention'][0]
      except IndexError:
          retention_days = 7

      delete_date = datetime.date.today() + datetime.timedelta(days=retention_days)
      delete_fmt = delete_date.strftime('%Y-%m-%d')
      print "Will delete volume: {0} on {1}".format(vol_id, delete_fmt)

      if vol_tags is not None:
        for tag in vol_tags:
          if tag['Key'] == 'Name':
            vol_name = tag.get('Value')

            print(vol_name)
            snap = ec.create_snapshot(
                VolumeId=vol_id,
            )

            ec2.Snapshot(id=snap['SnapshotId']).create_tags(
                Tags=[
                    {'Key': 'DeleteOn', 'Value': delete_fmt},
                    {'Key': 'Name', 'Value': vol_name},
                ]
            )

            break
        else:
          # for/else: no 'Name' tag was found on this volume, so no snapshot was taken
          print "No tag with key 'Name' found on volume {0}, skipping.".format(vol_id)

      if snap is not None:
        print "Retaining snapshot %s of volume %s aka %s for %d days" % (
            snap['SnapshotId'],
            vol_id,
            vol_name,
            retention_days,
        )

Automating the Amazon EBS Snapshot Lifecycle: combined with the monitoring features of Amazon CloudWatch Events and AWS CloudTrail, Data Lifecycle Manager can also apply custom tags to snapshots on creation (for example, aws dlm create-lifecycle-policy --description "My volume policy" --state ...). You can also use AWS Lambda and Amazon CloudWatch Events to handle it yourself: in the Lambda console, go to Functions > Create a Lambda Function > Configure function; the code uses the Boto library, which is the AWS SDK for Python.

Just copy the function below, find Description in the code, and replace the placeholder with your custom description in single quotes.

Hope it helps!

# Tags to follow:
# Retention    (number of days as the value)
# backup
# backup-monthly

import boto3
import collections
import datetime

ec = boto3.client('ec2')

def lambda_handler(event, context):
    reservations = ec.describe_instances(
        Filters=[
            {'Name': 'tag-key', 'Values': ['backup', 'Backup']},
            # Uncomment this line if you need to take snapshots of running instances only
            # {'Name': 'instance-state-name', 'Values': ['running']},
        ]
    ).get(
        'Reservations', []
    )

    instances = sum(
        [
            [i for i in r['Instances']]
            for r in reservations
        ], [])

    print "Found %d instances that need backing up" % len(instances)

    to_tag = collections.defaultdict(list)

    for instance in instances:
        try:
            retention_days = [
                int(t.get('Value')) for t in instance['Tags']
                if t['Key'] == 'Retention'][0]
        except IndexError:
            retention_days = 7

        for dev in instance['BlockDeviceMappings']:
            if dev.get('Ebs', None) is None:
                continue
            vol_id = dev['Ebs']['VolumeId']
            print "Found EBS volume %s on instance %s" % (
                vol_id, instance['InstanceId'])

            instance_id = instance['InstanceId']

            snapshot_name = 'N/A'
            if 'Tags' in instance:
                for tags in instance['Tags']:
                    if tags["Key"] == 'Name':
                        snapshot_name = tags["Value"]

            print "Tagging snapshot with Name: {} and Instance ID {}".format(snapshot_name, instance_id)

            snap = ec.create_snapshot(
                Description = 'Description goes here',
                VolumeId = vol_id,
                TagSpecifications = [{
                    'ResourceType': 'snapshot',
                    'Tags': [{
                        'Key': 'Name',
                        'Value': snapshot_name
                    }, ]
                }, ]
                # DryRun = False
                )

            to_tag[retention_days].append(snap['SnapshotId'])

            print "Retaining snapshot %s of volume %s from instance %s for %d days" % (
                snap['SnapshotId'],
                vol_id,
                instance['InstanceId'],
                retention_days,
            )


    for retention_days in to_tag.keys():
        delete_date = datetime.date.today() + datetime.timedelta(days=retention_days)
        delete_fmt = delete_date.strftime('%Y-%m-%d')
        print "Will delete %d snapshots on %s" % (len(to_tag[retention_days]), delete_fmt)
        ec.create_tags(
            Resources=to_tag[retention_days],
            Tags=[
                {'Key': 'DeleteOn', 'Value': delete_fmt},
            ]
        )
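
Note that the DeleteOn tag only marks the snapshots; a separate cleanup function is still needed to actually delete them once the date has passed. A minimal, hedged sketch of such a companion function (not part of the answer above, pagination ignored for brevity) could look like this:

import boto3
import datetime

ec = boto3.client('ec2')

def cleanup_handler(event, context):
    today = datetime.date.today().strftime('%Y-%m-%d')
    # Only consider snapshots owned by this account that carry a DeleteOn tag
    snapshots = ec.describe_snapshots(
        OwnerIds=['self'],
        Filters=[{'Name': 'tag-key', 'Values': ['DeleteOn']}],
    ).get('Snapshots', [])

    for snap in snapshots:
        delete_on = next(
            (t['Value'] for t in snap.get('Tags', []) if t['Key'] == 'DeleteOn'), None)
        # ISO-formatted dates compare correctly as strings
        if delete_on is not None and delete_on <= today:
            print('Deleting snapshot %s (DeleteOn=%s)' % (snap['SnapshotId'], delete_on))
            ec.delete_snapshot(SnapshotId=snap['SnapshotId'])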

New – CloudWatch Events for EBS Snapshots: CloudWatch Events fires events for snapshot activity (for example, shareSnapshot fires after a snapshot is shared with your AWS account), which you can handle with an AWS Lambda function. To create the Lambda function for snapshots: in the Lambda console, navigate to Functions > Create Function > Author from scratch, give the function a name (e.g. ebs_backup), and under Role select Choose an existing role, pick the role created earlier (lambda_ebs_snapshot_backup), and click Create Function.

Use AWS Lambda to automatically snapshot your instances: rather than using heavy standalone software packages, an AWS Lambda function can be used to create snapshots for all volumes associated with your tagged instances; this covers the role definition for Lambda functions, Lambda function creation, and benchmarks. In the console, create a new "service role" for AWS Lambda. If you're using the AWS CLI, you have to supply a trust document when creating the IAM role; the trust document allows code running in AWS Lambda to authenticate and use the policies associated with the role.

Automating EBS snapshots with Lambda, Lab Overview: in this lab, you automate the process of creating EBS snapshots using Lambda functions and CloudWatch Events. Steps to create the function: go to Services > Lambda and click Create a Lambda Function, skip the blueprint screen, give it a name (ebs-backup-worker), select Python 2.7 as the runtime, paste the code, select the previously created IAM role (ebs-lambda-worker), then click Next and Create Function.

AMI & Snapshot Management Using AWS Lambda: manage your AWS AMIs and snapshots through AWS Lambda; the Lambda role needs access to describe instances and to create/deregister images and snapshots. Create snapshots of an Amazon EBS volume, or multi-volume snapshots, to use as a baseline for new volumes or for data backup (see Creating Amazon EBS Snapshots in the Amazon EC2 User Guide for Linux Instances).

Comments
  • This satisfied my requirement. Thank you
  • This link is dead
  • Thank you for your valuable suggestions. All I have to do now is add a description to the snapshot. Could you please tell me how to add one without changing the code?