Proper use of AWS Lambda layers

5 min

What are AWS Lambda layers? As we know, AWS Lambda functions let us run code in the cloud according to the serverless paradigm. A serverless cloud application is normally made up of multiple independent Lambda functions, each responding to specific events (REST API calls, scheduled events, triggers). Each Lambda function is defined by its own deployment package, which contains its source code and any requirements, such as additional libraries, dependencies and middleware.

In this type of architecture, AWS Lambda layers introduce code and dependency reusability, making it possible to share modules among different functions: layers are simple packages that can be reused across Lambdas and that effectively extend the base runtime. Let’s see how to use them.

AWS Lambda layers 101

How do you prepare an AWS Lambda layer? To show it, let’s consider this example: a Lambda function built in Python that needs to run a binary application not included in the standard AWS runtime. In our example, the application is a simple bash script.

#!/bin/bash

# This is version.sh script
echo "Hello from layer!"

To create the Layer we need to prepare a ZIP archive with the following structure:

layer.zip
└ bin/version.sh

From the AWS Lambda console we just need to create the layer, providing the source ZIP archive and the names of the compatible runtimes.
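
If you prefer to script this step, the layer can also be published with the AWS SDK; here is a minimal boto3 sketch (the layer name, description and runtime list are examples, not taken from the original post):

import boto3

lambda_client = boto3.client("lambda")

# Publish the ZIP archive prepared above as a new layer version
with open("layer.zip", "rb") as f:
    response = lambda_client.publish_layer_version(
        LayerName="version-script-layer",      # example name
        Description="Provides bin/version.sh under /opt/bin",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.8"],      # example runtime list
    )

print(response["LayerVersionArn"])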

Now suppose we create a Lambda function that uses that layer. The code might look like this:

import json
import os

def lambda_handler(event, context):
    # The layer contents are extracted under /opt, and /opt/bin is already
    # part of the default PATH, so the script can be invoked by name.
    stream = os.popen('version.sh')
    output = stream.read()

    return {
        'statusCode': 200,
        'body': json.dumps('Message from script: {}'.format(output))
    }

Its output will be:

{
  "statusCode": 200,
  "body": "\"Message from script: Hello from layer!\\n\""
}

Our AWS Lambda function correctly executes the bash script included in the layer. This works because the contents of the layer are extracted into the /opt folder: since we used the directory structure expected by AWS when building the ZIP archive, our bash script ends up in a folder that is already in the default PATH (/opt/bin). Great!

Let’s consider a more complete example, a Python project I mentioned in another post: using Chromium and Selenium in an AWS Lambda function.

To use Chromium in a Lambda function, you need to include the binaries and related libraries in the deployment package, as AWS obviously doesn’t include them in the standard Python runtime. My first approach did not use any layers and produced a single ZIP package of more than 80 MB. Whenever I wanted to update my Lambda function code, I was forced to upload the entire package, resulting in a long wait. Considering the number of times I repeated the operation during the development phase of the project, and that the function’s source code was a very small part of the whole package (a few lines of code), I realized how much time I wasted!

The second, much smarter approach was to use an AWS Lambda layer to include the Chromium binaries and all the required Python packages, in a similar way to what we saw above. The structure is this:

layer.zip
├ bin
│ ├ chromium
│ ├ chromedriver
│ ├ fonts.conf
│ └ lib
│   └ ...
└ python
  ├ selenium
  ├ selenium-3.14.0.dist-info
  └ ...

To install the Python packages I used the usual PIP command:

pip3 install -r requirements.txt -t python

Once the layer was created, the time required to deploy the function was significantly reduced, to the benefit of productivity.

Some more information on AWS Lambda layers:

  • Can be used by multiple Lambdas
  • Can be updated; a new version is created each time
  • Versions are automatically numbered from 1 up
  • Can be shared with other AWS accounts and made public
  • Are specific to an AWS Region
  • If a Lambda uses multiple layers, they are “merged” together in the specified order, overwriting any files already present
  • A function can use up to 5 layers at a time
  • Do not allow you to exceed the AWS Lambda deployment package size limit

When to Use AWS Lambda Layers

In my specific case, using a layer brought great benefits by reducing deployment times. So I wondered whether it is always a good idea to use AWS Lambda layers. Spoiler alert: the answer is no!

There are two main reasons for using layers:

  • the reduction of the size of AWS Lambda deployment packages
  • the reusability of code, middleware and binaries

The latter point is the most critical: what happens to Lambda functions during the lifecycle of the layers they depend on?

Layers can be deleted: removing a layer does not cause problems for the functions that already use it. You can still modify the function’s code but, if you need to change the layers it depends on, you must first remove the dependency on the layer that is no longer available.

Layers can be upgraded: creating a new version of a layer does not cause problems for functions that use previous versions. However, the Lambda update process is not automatic: if needed, the new layer version must be specified in the Lambda definition, after removing the previous one. So although layers can be used to distribute fixes and security patches for common Lambda components, keep in mind that this process is not completely automated.

AWS Lambda layers: more complex testing?

In addition to what has already been highlighted in the previous paragraph, the use of layers brings new challenges, especially when it comes to testing.

The first aspect to consider is that a layer introduces dependencies that are only available at runtime, making it more difficult to debug your code locally. The solution is to download the contents of the layers from AWS and include them during the build process. Not very practical, however.
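
For example, the layer contents can be pulled with the AWS SDK; a minimal boto3 sketch (the layer name and version number are placeholders):

import io
import urllib.request
import zipfile

import boto3

lambda_client = boto3.client("lambda")

# Fetch the layer version metadata; Content.Location is a pre-signed,
# short-lived S3 URL pointing to the layer archive
info = lambda_client.get_layer_version(
    LayerName="chromium-layer",   # placeholder
    VersionNumber=1,              # placeholder
)

with urllib.request.urlopen(info["Content"]["Location"]) as resp:
    zipfile.ZipFile(io.BytesIO(resp.read())).extractall("./opt")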

Similarly, unit tests and integration tests become more complex: as with local debugging, the contents of the layers must be available during execution.

The second aspect concerns statically compiled languages such as Java or C#, which require all dependencies to be available in order to build the JAR or DLL. Obviously, even in this case there are more or less elegant solutions, such as loading them at runtime.

Security & Performance

In general, the introduction of AWS Lambda layers does not involve any security disadvantages; on the contrary, it makes it possible to deploy new versions of existing layers to release security patches. As seen above, though, remember that the update process is not automatic.

Particular attention should be paid to third-party layers: there are many layers made publicly available and dedicated to various fields. Although it is convenient to use a layer already configured for a very specific purpose, it is obviously safer to create your own layers so as not to fall victim to malicious code. Alternatively, always check the repository of the layer you intend to use first.

Performance: using layers instead of an all-in-one package has no impact, not even in the case of a cold start.

CloudFormation

Creating AWS Lambda layers in CloudFormation is very simple. Layers are resources of type AWS::Lambda::LayerVersion. In the Lambda function definition, the Layers parameter lets you specify a list of (at most 5) layers.

Here’s an example:
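
A minimal sketch of such a template (not the original snippet): it assumes the layer archive has already been uploaded to S3, omits the function’s IAM role, and all names are placeholders.

Resources:
  SharedLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      LayerName: shared-layer
      Description: Shared binaries and Python packages
      Content:
        S3Bucket: my-artifacts-bucket
        S3Key: layer.zip
      CompatibleRuntimes:
        - python3.8

  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: my-function
      Runtime: python3.8
      Handler: index.lambda_handler
      Role: !GetAtt MyFunctionRole.Arn   # role definition omitted
      Code:
        S3Bucket: my-artifacts-bucket
        S3Key: function.zip
      Layers:
        - !Ref SharedLayer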

Conclusions

My two cents: using AWS Lambda layers certainly brings benefits in the presence of large dependencies that do not need to be updated very frequently. Moving these dependencies into a layer significantly reduces the deployment time of your Lambda function.

What about sharing source code? In this case it is worth weighing the complexity introduced into the application’s debugging and testing processes: the effort required will likely not be justified by the benefits obtained by introducing layers.

Did we have fun? See you next time!


AWS Amplify: code lint & end-to-end testing with Cypress

6 min

Back from the summer break, I picked AWS Amplify back up. In a previous article I talked about my first approach to this AWS framework. In this post I want to cover the implementation of linters for the source code and end-to-end tests with Cypress, obviously automated in the AWS Amplify CI/CD pipeline.

This is the link to the GitHub repository and this is the link to the web application.

Linter

A linter is a tool that analyzes the source code to flag programming errors, bugs, stylistic errors and suspicious constructs.

(wikipedia)

Linter tools allow us to increase the quality of the source code. Using these tools in a CI/CD pipeline also makes it possible to deploy an application only when its source code meets certain quality levels.

The web application I created with AWS Amplify includes a React frontend and a few Python Lambdas in the backend. For this reason I need two different tools, one specific to each of the two technologies used.

ESLint

One of the best linters I’ve had the chance to test for JavaScript is ESLint: it is not limited to detecting problems, it can also automatically fix a large number of them.

ESLint installation is simple: from the main directory of our project, we can install the linter with the following command:

npm install eslint --save-dev

Once the installation is complete, we can run the first configuration wizard:

npx eslint --init

You will be asked for various information: in general you can choose whether to use ESLint for:

  • syntax checking
  • syntax checking and problem finding
  • syntax checking, problem finding and code style checking

This last option is interesting and includes the most popular style guides, such as the Airbnb and Google ones. Once the configuration is complete, the required packages are installed. In my case:

"devDependencies": {
    "eslint": "^7.8.1",
    "eslint-config-google": "^0.14.0",
    "eslint-plugin-react": "^7.20.6"
  }

Now that we’ve set up ESLint, let’s check the code:

npx eslint src/*.js

Depending on the options chosen during configuration, the result may not be the most optimistic, revealing a long series of problems.

But as I said initially, ESLint allows us to automatically correct some of these, using the following command:

npx eslint src/*.js --fix

Great! Of the 87 problems initially detected, only 6 require our intervention to be corrected.

We can also decide to ignore some specific problems. If, for example, you want to suppress the reports about missing JSDoc comments, you must modify the rules section of the ESLint configuration file .eslintrc.js.

  'rules': {
    "require-jsdoc": "off"
  },

Only one problem remains, related to a line that is too long. We need to fix it manually.

The goal is to automate this process in the CI/CD pipeline, in order to stop the deployment if any errors are detected.

Let’s configure the pipeline by editing the amplify.yml file like this:

frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npx eslint src/*.js 
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*

Among the commands of the frontend build phase, we insert the command that runs ESLint. If it detects any problems, it exits with a non-zero code and the pipeline stops automatically.

That’s all as far as the frontend is concerned! For any further information on the use of ESLint, I suggest consulting the excellent documentation.

Pylint

It’s time to check the backend code. Having built the AWS Lambda functions in Python, I chose Pylint for this purpose. Installation is normally done with pip:

pip install pylint

To parse a file with Pylint just run it specifying the filename as an argument.

Unlike the tool seen above, Pylint attributes a score (up to 10) to the analyzed code. The score is obviously based on the problems found.

Some problems can be ignored by editing the configuration file. To generate a configuration file to be customized later with your own settings, you need to run the command:

pylint --generate-rcfile > .pylintrc

The generated file is well documented and understandable.

In order to integrate Pylint into the AWS Amplify pipeline, the operations to be performed are more complicated than previously seen for ESLint.

First you need to install Pylint in the Docker image used for the backend build phase. As explained in my previous article about AWS Amplify, the default Docker image used to build the solution is not correctly configured: there is a known problem with the Python 3.8 setup, also reported in this issue (Amplify can’t find Python3.8 on build phase of CI/CD).

The easiest workaround I identified is to create a Docker image with all the requirements. Below the Dockerfile.

Once Pylint is installed, we need to configure the pipeline by modifying the amplify.yml file like this:

backend:
  phases:
    build:
      commands:
        - '# Evaluate backend Python code quality'
        - find amplify/backend/function -name index.py -type f | xargs pylint --fail-under=5 
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple

We use find to run Pylint on all the index.py files of our backend Lambda functions. Pay attention to the --fail-under parameter: it instructs Pylint to treat a code score lower than 5 as a failure. In that case the Pylint exit code is non-zero and the CI/CD pipeline execution is interrupted. With a score of 5 or higher, the exit code is zero. The default threshold is 10, which corresponds to “perfect” source code and is honestly very difficult to achieve in complex applications.

We’ve completed introducing linters to our pipeline. Let’s see now how to automate the testing of our application.

End-to-end test with Cypress

Cypress is a solution that allows you to run automated tests of a web application, simulating the operations a user performs through the UI. The tests are organized in suites.

Passing the tests ensures proper operation from the user’s point of view.

The Amplify console offers integration with Cypress (an end-to-end test framework) for browser-based testing. For web apps using Cypress, the Amplify console automatically detects test commands on the repo connection, runs the tests, and provides access to videos, screenshots and any other artifacts made available during the build phase.

(AWS)

Installation is very simple:

npm install cypress

With the following command Cypress is started:

npx cypress open

This creates a cypress directory in our project, which includes all configuration files and test suites. Some examples can be found in cypress/integration. I won’t go into the details of how to build a test suite because there is already extensive documentation about it.

Configuring Cypress for AWS Amplify and Cognito

Let’s see how to properly configure our project to integrate with AWS Amplify.

The configuration file cypress.json is now present in the root of the project and we edit it as follows:

{ 
    "baseUrl": "http://localhost:3000/", 
    "experimentalShadowDomSupport": true
}

The first parameter sets the base URL of our web application.

The experimentalShadowDomSupport parameter is very important for React applications that use the AWS Amplify Cognito authentication backend and its UI components. In a nutshell, without Shadow DOM support enabled we would not be able to authenticate in our application during the tests.

The test suite used to verify the login and logout functionality of the web application is the following:

For security reasons, the username and password used for testing are not specified in the script, but must be defined as environment variables directly in the AWS Amplify console. The variables are CYPRESS_username and CYPRESS_password (Cypress strips the CYPRESS_ prefix, so the tests read them with Cypress.env('username') and Cypress.env('password')).

Pipeline Configuration

The last step is configuring the pipeline to run the tests, as explained by Cypress. And here’s the complete amplify.yml file.

Again, as with linters, our deployment pipeline will stop if the tests fail.

It is possible to watch the execution videos of the tests in the AWS Amplify console by downloading the artifacts. Here is an example.

Conclusions

The combined use of linters and end-to-end testing tools allows us to release high-quality source code, tested as an end user would use it. With AWS Amplify we can integrate these tools, simplifying DevOps operations while keeping control of the CI/CD phases.

In conclusion, I confirm the impressions of the first post: the next time I have to create a solution, whether a simple PoC or a more complex web application, I will definitely consider using AWS Amplify, obviously alongside Cypress and one or more linters.

Did we have fun? See you next time!


AWS Amplify 101

9 min

During the last AWS Summit I attended an interesting session about AWS Amplify. The excellent speaker Marcia Villalba (AWS Developer Advocate) quickly convinced me to try this framework. A few days later I had the right opportunity: needing to keep my son practicing the multiplication tables during the summer holidays, I decided to develop a simple web application with AWS Amplify in order to discover its strengths and weaknesses.

This is the link to the GitHub repository and this is the link to the web application of the project that came out of it. Yes, you can also practice with the multiplication tables.

What’s Amplify?

AWS Amplify is a set of tools and services that allow a developer to build modern full-stack applications using AWS cloud services. For example, when creating a React web application, Amplify lets us manage the development and deployment of the frontend, the backend services and the related CI/CD pipeline. The same application can have multiple environments (for example dev, test, stage and production). Amplify also makes it possible to integrate some AWS services into your frontend very quickly, writing very few lines of code: one example above all, authentication with AWS Cognito.

Great! There seems to be everything a one-man-band developer needs to quickly build an application! Let’s try it.

Learn the multiplication tables

As a first step, it is better to clarify what we want to achieve: whether with four sketches on a sheet of paper (my case) or a good mockup in Balsamiq, let’s try to imagine the UI and UX of our application.

The main purpose is to train the knowledge of the multiplication tables by subjecting the user to a test: 10 multiplications relating to the same multiplication table, randomly chosen from 2 to 10 (I do not have to explain why I have excluded the multiplication table of 1, right?).

It would be interesting to keep track of errors and time spent answering questions, in order to have a scoreboard with the best result for each table.

Do we want to show each user their own scoreboard and push them to improve themselves? We will therefore have to memorize it and we need an authentication process!

Having each user’s scoreboard, we can also choose the multiplication table for the next challenge based on previous results, in order to train the user on the tables they struggled with the most.

And by magic, the UI of our application appears.

First steps

Now that we have the clearest ideas on what to do, let’s take the first steps. We have already decided to use React and Amplify. Let’s see the prerequisites and create our project, as explained in the official Amplify Docs Tutorial.

The prerequisites indicated by AWS for Amplify are these:

  • Node.js  v10.x or later
  • npm  v5.x or later
  • git  v2.14.1 or later

Obviously an AWS account is required, and we will find out during the deployment of the backend that we also need Python >= 3.8 and Pipenv.

To install Amplify CLI and configure it, proceed as indicated. The configuration of Amplify CLI requires the creation of an IAM User which will be used for the management of the backend services.

npm install -g @aws-amplify/cli
amplify configure

Now let’s create our React project and its Amplify backend.

# Create React project
npx create-react-app amplify-101
cd amplify-101

# Run React
npm start

# Init amplify
amplify init

# Install Amplify libraries for React
npm install aws-amplify @aws-amplify/ui-react

At this point we will have to edit the src/index.js file by inserting the following lines:

import Amplify from "aws-amplify";
import awsExports from "./aws-exports";
Amplify.configure(awsExports);

Great. At the end of these activities (explained in detail in the official Amplify Docs tutorial ) we have a new React project working and we are ready to create the related backend with Amplify.

Authentication with Cognito

What do we need? Well, since the user is at the center of our application, we can start with the authentication process: this is the point where I said my first “WOW”, realizing how simple it is with Amplify.

Let’s add “auth” to the backend using this command and answer a couple of simple questions.

amplify add auth

? Do you want to use the default authentication and security configuration? Default configuration
? How do you want users to be able to sign in? Username
? Do you want to configure advanced settings?  No, I am done.

We ask Amplify’s CLI to publish our backend on the Cloud. The command we (will often) use is:

amplify push

Amplify takes care of everything, creating what is required on the backend side via a CloudFormation stack.

Now let’s add the UI to our React frontend: the fastest method is to modify two lines of code in the src/App.js file:

// New import for auth
import { withAuthenticator } from '@aws-amplify/ui-react'

// Change default export
export default withAuthenticator(App)

Done! Our app now provides a complete flow of user registration, login and logout. Obviously the UI elements are customizable and we can change the behavior of our application in response to the login event. Let’s see how.

<AmplifySignUp> allows us to customize the registration process of a new user. I chose to request only an email address (verified by sending an OTP code) and a password. The email address is used as the username for logins.

<AmplifySignIn> allows us to customize the login process: again, I specified that the email address is used as the username for access.

<AmplifyAuthenticator> allows us to change the state of our app in response to login and logout, through handleAuthStateChange. We can tell whether a user is authenticated by checking the state {this.state.authState}, and we can display their username with {this.state.user.username}.

Awesome! Now let’s see how to add some functionality to our app.

Storage

Which app has no data to store? With Amplify we have two possibilities: static content on S3 or NoSQL database with DynamoDB. In our application we need to create a scoreboard table to store the best results of each user. With Amplify CLI the creation of the table is very fast:

amplify add storage

By selecting NoSQL Database and answering a few simple questions about the structure of the table you intend to create, you get it deployed on AWS.

A clarification: Amplify supports the creation of GraphQL APIs (AppSync), which we will use in the following steps. By specifying the @model directive in the GraphQL schema, it is possible to delegate to Amplify the deployment of a table and everything needed to manage the related CRUD operations from the frontend. It is a great convenience when we don’t need to manage and customize data with complex application logic.

In the case of our scoreboard , we need to manage data entry exclusively on the backend side. We must also evaluate the results of a test and update the scoreboard accordingly. Frontend access is read-only. For these reasons I preferred not to use the @model directive and manage storage and APIs separately (but still with Amplify).

We need a second table, which I called challenges: as the name suggests, it is used to store the data of an “ongoing” challenge so that we can compare it with the answers of our user and determine the outcome of the test. For the same reasons I preferred to manage deployment and APIs separately for this table as well.

Backend functions

Let’s start writing the code for the backend: I chose to create the necessary Lambda functions in Python. One of the Amplify features that I appreciate is the centralization of our application’s code in a single repository: frontend, backend and infrastructure code (IaC) can be easily and quickly modified to adapt to new requirements and new features.

Let’s work on the main function of our app: the generation of a new test and the verification of its results. We use the command:

amplify add function

We need to provide the name of the function, the runtime we want to use (Python, Node, etc.) and whether the function must have access to some previously defined resources. In our case the newChallenge function has access to both previously created NoSQL tables.

I used the same command to create the scoreboard function which allows the frontend to view the contents of the user’s scoreboard.

The source code of the backend functions can be found in the amplify/backend/function path of our project. Whenever we go to change it, we just need to push the solution to update the backend on the cloud.

In this post we won’t go deep into the code of the Lambda functions, which is available in this GitHub repository. Suffice it to say that both functions respond to GraphQL requests with a JSON document populated with the requested data, as well as implementing the application logic (test generation, test evaluation and storage of results).

The reference to DynamoDB tables created previously is provided through an environment variable: the name of the scoreboard table, for example, is provided in STORAGE_SCOREBOARD_NAME.

dynamodb = boto3.resource('dynamodb')
scoreboard_table = dynamodb.Table(os.environ['STORAGE_SCOREBOARD_NAME'])

The function’s invocation event provides the information about the GraphQL request to be answered: the typeName parameter indicates whether the request is a Query or a Mutation, as in the sketch below.
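
For example, the newChallenge function might route requests roughly like this; a minimal sketch assuming the fields provided by Amplify’s default Lambda resolver template (typeName, fieldName, arguments, identity), with hypothetical helper functions:

def lambda_handler(event, context):
    type_name = event["typeName"]             # "Query" or "Mutation"
    field_name = event["fieldName"]           # e.g. "getChallenge", "sendChallengeResults"
    username = event["identity"]["username"]  # available thanks to the Cognito integration
    arguments = event.get("arguments", {})

    if type_name == "Query" and field_name == "getChallenge":
        return create_new_challenge(username)              # hypothetical helper
    if type_name == "Mutation" and field_name == "sendChallengeResults":
        return evaluate_challenge(username, arguments)     # hypothetical helper

    raise ValueError("Unhandled GraphQL request: {}.{}".format(type_name, field_name))

Let’s now look, in the next paragraph, at the last step required to complete our app: the implementation of the GraphQL API.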

GraphQL API

We define the last needed backend resource: the APIs with which our React web application will interact.

amplify add api

You can choose between a REST API and a GraphQL endpoint; the latter is our preference. Amplify takes care of its implementation and of the integration with the frontend.

We just have to worry about defining our application’s data schema. Let’s see it.

We define two basic types: challenge which represents a test and score which represents the results.

The definition of the Query type is more interesting: we define two calls to the Lambda functions implemented previously. The first one, getScores, returns an array of scores to display the scoreboard; the second one, getChallenge, returns a new challenge. Note that the name of the functions includes a reference to the environment ${env}.

The Mutation type allows the results of a test to be sent to the API for evaluation. The Lambda function called is again newChallenge, to which some parameters are passed: the unique identifier of the test and the answers provided by the user. The API response includes the test results.

How to use these APIs in React? It’s very simple: just specify the required imports (whose code is automatically generated by Amplify) and call them in your frontend.

This is an excerpt from the code used to get your own scoreboard.

import { getScores }  from './graphql/queries';
import { API, graphqlOperation } from "aws-amplify";

......

  componentDidMount() {
    this.fetchData();  
  }

  async fetchData() {
    const data = await API.graphql(graphqlOperation(getScores));
    const scores = data.data.getScores
    console.log(scores);
    this.setState({'scores':scores});
  }

Note: the user is not specified in the getScores call. In fact, thanks to the integration with AWS Cognito, the user’s identity is specified directly in the Lambda function invocation event, in the identity parameter.

In the case of mutation, the code used on the submit of a challenge is the following:

import { API, graphqlOperation } from 'aws-amplify'
import { sendChallengeResults } from './graphql/mutations';

....

  handleClick() {
    this.setState({'loadingResults': true})

    // mutation
    const challenge_result = { id: this.props.challenge.id, results: this.state.results }

    API.graphql(graphqlOperation(sendChallengeResults, challenge_result))
      .then(data => {
        console.log({ data });
        this.setState({'score': data.data.sendChallengeResults});
      })
      .catch(err => console.log('error: ', err));
  }

Deployment

We’re done! All the components of our app have been created.

amplify status

The Amplify CLI allows us to deploy our web application with two simple commands:

amplify add hosting
amplify publish

However, I did not choose this method, since I wanted to test the CI/CD capabilities that Amplify makes available. To do this you need to use the Amplify Console.

First, we push the solution’s source code to a Git repository. Then, using the Amplify console, we connect the branch of our repository to set up the pipeline.

Amplify Console

Done! Our pipeline is operational and our application is now available online, built and deployed. On the first try? Not exactly!

Unfortunately, the default Docker image used for the build process of the solution is not correctly configured: there is a known problem with the Python 3.8 setup, also reported in this issue (Amplify can’t find Python3.8 on build phase of CI/CD).

The easiest workaround I identified is to create a Docker image with all the requirements. Below the Dockerfile.

The image is available on DockerHub and I configured the CI pipeline to use it. Solved! The CI/CD pipeline now runs without errors.

Build settings

Conclusions

We are at the end of this post: the creation of this simple web application has allowed me to take my first steps with Amplify. What conclusions have I reached?

A definitely positive opinion! Amplify allows you to create complex and modern applications very quickly, easily integrating the (serverless) services of the AWS cloud. Authentication with AWS Cognito is a clear example, but it is also possible to integrate many other features, such as Analytics, Elasticsearch or AI/ML.

GraphQL allows us to simply manage the data of our application, in particular for the classic CRUD operations.

The centralization of the source code for frontend, backend and infrastructure (IaC) keeps the entire solution under control and makes it possible to adapt quickly to new requirements and new features.

Is it a product dedicated to a full-stack developer working alone? Not exclusively! Although Amplify allows a single developer to follow every aspect of their application, also simplifying DevOps operations, I believe a team can just as easily collaborate on a solution by taking advantage of Amplify.

Is it suitable for the development of any solution? I would say no! I believe that the added value of Amplify is, as already mentioned, the ability to quickly and centrally manage all aspects of your application. If the backend is complex enough to include elements not directly managed by Amplify, it is preferable to use other tools or a mixed approach.

For these reasons, I believe the ideal Amplify users are full-stack developers, young startups and the more “agile” development companies that need to ship new solutions or new features quickly.

In conclusion, the next time I have to create a solution, a simple PoC or a more complex web application, I will definitely consider using AWS Amplify!

Did we have fun? See you next time!


Chromium and Selenium in AWS Lambda

4 min

Let’s see how it is possible to use Chromium and Selenium in an AWS Lambda function; first, some information for those unfamiliar with these two projects.

Chromium is the open-source browser from which Google Chrome derives. The two browsers share most of their code and functionality, but they differ in terms of license; moreover, Chromium does not support Flash, does not have an automatic update system and does not collect usage and crash statistics. We will see that these differences do not affect the potential of the project in the least and that, thanks to Chromium, we can perform many interesting tasks within our Lambda functions.

Selenium is a well-known framework dedicated to testing web applications. We are particularly interested in Selenium WebDriver, a tool that lets us control all the main browsers available today, including Chrome/Chromium.

Why use Chrome in a Lambda function?

What is the point of using a browser in an environment (AWS Lambda) that has no GUI? There are actually several reasons. Driving a browser allows you to automate a series of tasks: it is possible, for example, to test your web application automatically as part of a CI pipeline. Or, not least, web scraping.

Web scraping is a technique for extracting data from a website, and its possible applications are endless: monitoring product prices, checking the availability of a service, building databases by collecting records from multiple open sources.

In this post we will see how to use Chromium and Selenium in a Python Lambda function to render a URL.

Example lambda function

The Lambda function we are going to create has a specific purpose: given a URL, Chromium is used to render the related web page and a screenshot of the content is captured in PNG format. The image is saved in an S3 bucket. By running the function periodically, we can keep a history of the changes of any site, for example the homepage of an online newspaper.

Obviously our Lambda has many possibilities for improvement: the aim is to show the potential of this approach.

Let’s start with the required Python packages (requirements.txt).

selenium==2.53.0
chromedriver-binary==2.37.0

These packages are not normally available in the AWS Lambda Python environment: we need to create a ZIP file to distribute our function together with all its dependencies.

We also need Chromium, in its headless version, which is able to run in server environments, and the matching Selenium driver. Here are the links I used.

How to use Selenium Webdriver

Here is a simple example of how to use Selenium in Python: the get method directs the browser to the indicated URL, while the save_screenshot method saves a PNG image of the content. The image will have the size of the Chromium window, which is set to 1280x1024 pixels using the window-size argument.
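
The snippet itself is not reproduced in this post; a minimal sketch of the idea (the ./bin paths follow the layout used by this project’s Makefile, and the options are typical of a headless setup rather than the exact original code):

from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-gpu")
chrome_options.add_argument("--single-process")
chrome_options.add_argument("--window-size=1280,1024")
chrome_options.binary_location = "./bin/headless-chromium"   # assumption: where the binary was downloaded

# chrome_options= matches the API of the selenium 2.x version pinned above
driver = webdriver.Chrome("./bin/chromedriver", chrome_options=chrome_options)
driver.get("https://www.repubblica.it")
driver.save_screenshot("/tmp/screenshot.png")
driver.quit()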

I decided to create a wrapper class to render a page at its full height. We need to calculate the window size required to contain all the elements, and we use a trick: a script executed after the page has loaded. Once again, WebDriver exposes a method that is right for us, execute_script.

The solution is not very elegant but it works: the requested URL is loaded twice. The first time, the browser window is set to a fixed size. After the page loads, a JS script is used to determine the required window size. Then the original browser window is closed and a new one, correctly sized, is opened to take the full-height screenshot. The JS script I used was taken from this interesting post.

The wrapper is also “Lambda ready”: it correctly manages the temporary paths and the location of the headless-chromium executable by specifying the necessary chrome_options.

Let’s see what our Lambda function looks like:
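
The complete handler is in the repository linked below; this is a simplified sketch of its structure (the wrapper’s module name, method signature and the bucket environment variable are assumptions):

import os

import boto3

from webdriver_screenshot import WebDriverScreenshot   # hypothetical module name

s3 = boto3.client("s3")

def lambda_handler(event, context):
    url = event.get("url", "https://www.repubblica.it")
    screenshot = WebDriverScreenshot()

    # First screenshot: fixed window size (1280x1024)
    screenshot.save_screenshot(url, "/tmp/fixed.png", height=1024)   # hypothetical signature
    # Second screenshot: height omitted, the wrapper computes the full page height
    screenshot.save_screenshot(url, "/tmp/full.png")

    # Store both images in the bucket configured for the stack
    for name in ("fixed.png", "full.png"):
        s3.upload_file("/tmp/{}".format(name), os.environ["BUCKET_NAME"], name)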

The event handler of the function simply instantiates our WebDriverScreenshot object and uses it to generate two screenshots: the first with a fixed window size (1280x1024 pixels); for the second, the height parameter is omitted and is determined automatically by our wrapper.

Here are the two resulting images, compared with each other, relating to the site www.repubblica.it

Deployment of the function

I collected in this GitHub repository all the files needed to deploy the project on the AWS cloud. In addition to the files already analyzed, there is a CloudFormation template for the stack deployment. The most important section obviously concerns the definition of the ScreenshotFunction: some environment variables such as PATH and PYTHONPATH are fundamental for the correct execution of the function and of Chromium. You also need to pay attention to the memory requirements and the timeout setting: loading a page can take several seconds. The lib path includes some libraries required by Chromium that are not present by default in the Lambda environment.

As usual I prefer to use a Makefile for the execution of the main operations.

## download chromedriver, headless-chrome to `./bin/`
make fetch-dependencies

## prepares build.zip archive for AWS Lambda deploy 
make lambda-build		

## create CloudFormation stack with lambda function and role.
make BUCKET=your_bucket_name create-stack 

The CloudFormation template requires an S3 bucket to be used as a source for the Lambda function deployment. Subsequently the same bucket will be used to store the PNG screenshots.

Conclusions

We used Selenium and Chromium to render a web page. This is one possible application of these two projects in the serverless field. As anticipated, another very interesting application is web scraping: in this case, WebDriver’s page_source attribute gives access to the page source, and a package such as BeautifulSoup can be very useful for extracting the data we intend to collect. The Selenium package for Python offers several methods to automate other operations: please check the excellent documentation.
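
As a small preview, here is a sketch of that approach; it reuses the driver object from the screenshot example above and assumes the beautifulsoup4 package is installed:

from bs4 import BeautifulSoup

# Reusing the driver created in the screenshot example above
driver.get("https://www.repubblica.it")
soup = BeautifulSoup(driver.page_source, "html.parser")

# Collect the text and target of every link on the page
for link in soup.find_all("a", href=True):
    print(link.get_text(strip=True), link["href"])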

We will see a full example of web scraping in one of the next blog posts! In the meantime, have a look at this post’s GitHub repository.

Did we have fun? See you next time!


Build AWS Lambda CI pipeline with Goss and TravisCI

4 min

I recently wrote about how to develop AWS Lambda functions offline using the LambdaCI Docker images. Starting from the Python environment used for the development of a sample Lambda function, let’s see how to create a Continuous Integration pipeline that tests it on every new commit. We use Goss as the test tool and TravisCI for the implementation of the CI pipeline.

AWS Lambda in a Docker container

Let’s quickly see how to run an AWS Lambda function in a Docker container.

We use the following Python function, which processes messages queued on SQS and requires a package (Pillow) not normally installed in the default Lambda environment.

To create the Docker image required to run our Lambda function, we use the following Dockerfile: starting from the lambci/lambda:python3.6 image, it installs any additional Python packages listed in requirements.txt.

Please check this post for more details; now we have our Docker image and our Lambda function is correctly running in a container. We are ready to start building the test suite.

Goss and DGoss

I chose Goss as the test tool mainly for these reasons: thanks to the DGoss wrapper, it is possible to interact directly with a Docker container both when running the tests and while developing (or editing) them. The main purpose of Goss is to validate the configuration of a server, which in our case is precisely the container running the Lambda function. The test suite can even be generated from the current state of the server (or container), making the operation very fast and practical.

The test suite is defined in YAML files.

We quickly install Goss and DGoss and then start the container that will allow us to create the test suite, using a Makefile.

This command starts the Docker container in “STAY OPEN” mode: the API server listens on port 9001 and our Lambda function replies to HTTP requests. The edit parameter tells DGoss that we are going to use the container to create the test suite. The purpose of the test directory is to collect the files needed to run the tests. For example, if we want to verify how the Lambda processes SQS messages, some sample events (in JSON format) are saved in this folder to be used during the tests.

At this point we can use any Goss command to create our test suite: if, for example, we want to verify the presence of the function handler file, we use the command:

goss add file /var/task/src/lambda_function.py

The add option adds a new test to the goss.yaml configuration file, in our case checking for the presence of the specified file. Goss can check many aspects of the environment in which it runs, such as the presence of files, packages, processes, DNS resolution, etc. The reference manual is very thorough.

Now let’s create a script that allows us to verify the response of our Lambda function.

#!/bin/sh
# /var/task/test/test-script.sh

curl --data-binary "@/var/task/test/test-event.json" http://localhost:9001/2015-03-31/functions/myfunction/invocations

We use curl to query the endpoint of our Lambda function, sending a test SQS event (the file test-event.json). So let’s add another Goss test, this time of type command. The goss.yaml file will become something like the following.
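
The resulting file is not shown here; a minimal sketch of what it could contain (the expected output string is an assumption based on the sample function’s return message):

file:
  /var/task/src/lambda_function.py:
    exists: true
    filetype: file
command:
  /var/task/test/test-script.sh:
    exit-status: 0
    stdout:
      - All the best for you
    timeout: 30000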

The command section includes the execution of the script that queries the Lambda function. We expect on stdout a JSON document with specific content, which confirms the correct execution of the function. Of course this test is only an example of what can be done with Goss.

When we have finished defining the test suite and we exit the container (exit), DGoss copies the newly edited goss.yaml file to our local host, so that it persists after the container is deleted. If we start the edit procedure again, the file is copied to the new container and synchronized again at the end.

The time has come to test! Instead of edit, we simply use DGoss’s run command.

DGoss creates a new container and runs the test suite. Here is an example.

Great! Our test suite is ready, let’s proceed to integrate it into our CI pipeline.

AWS Lambda CI with TravisCI

The time has come to configure the CI pipeline. I chose TravisCI for its simplicity and because it allows building Docker images for free. I also used it in this post with Ansible and it’s the CI tool I prefer. To automate the testing process at each commit, we configure TravisCI to monitor the GitHub/Bitbucket repository hosting the sources.

A simple .travis.yml file creates the pipeline.

Let’s analyze it: the services directive instructs TravisCI to use the Docker service. Before starting the tests, Goss/DGoss are installed and our AWS Lambda image is built, as specified in before_install. Finally, script indicates the command that actually runs the test suite we prepared earlier.

Here is the result:

Great! Our pipeline is done!

Conclusions

We developed and automated the tests of a simple AWS Lambda function without it ever actually entering the AWS cloud: I have used this environment on several projects and I believe it significantly speeds up the development and maintenance of the sources, allowing you to deploy tested, quality code to AWS.

This GitHub repository collects what has been treated in this article and can be used as a starting point for building your own CI pipeline with Goss.

Did we have fun? See you next time!


AWS Lambda offline development with Docker

3 min

I work on projects that are increasingly oriented towards the serverless paradigm and increasingly implemented on the AWS Lambda platform. Being able to develop an AWS Lambda function offline, comfortably in your favorite IDE and without having to upload the code in order to test it, significantly speeds up activities and increases efficiency.

AWS Lambda environment in docker

That’s right! The solution that allows us to develop AWS Lambda code offline is to use a Docker image that replicates the live AWS environment almost identically. The Docker images available on DockerHub provide a sandbox in which to run your function, with the same libraries, file structure and related permissions, environment variables and production context. Fantastic!

A Lambda function is rarely “independent” of other resources: it often needs to access objects stored in an S3 bucket, queue messages on SQS or access a DynamoDB table. The interesting aspect of this solution is the ability to develop and test your code offline while still interacting with real AWS services and resources, simply by providing a pair of AWS access keys in the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

The LambdaCI project is frequently updated and well documented: it includes several runtime environments such as Python, which we will use in the next paragraphs. The environment I used for development is available in this repository.

Example function

Suppose we are working on a simple Python function that processes SQS messages and uses a package not normally installed in the AWS Lambda Python environment. The example code is as follows.

First, the Logger object is instantiated: we will use it to trace the SQS event. The message body is expected to contain the two addends, and the result of the sum is reported in the logs. We will also log the version of the Pillow package, which is not installed by default in the AWS Lambda environment, to verify that the installation of additional packages was successful. Finally, a sample text message (“All the best for you”) is returned at the end of the function execution.
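
The exact source is available in the GitHub repository linked at the end of the post; the following is a sketch reconstructed from the description above (the key names of the two addends in the message body are assumptions):

import json
import logging

import PIL

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Trace the incoming SQS event
    logger.info("## EVENT")
    logger.info(json.dumps(event))

    # The message body is expected to contain the two addends
    for record in event["Records"]:
        body = json.loads(record["body"])
        result = body["a"] + body["b"]        # assumption: keys named "a" and "b"
        logger.info("Sum: %s", result)

    # Log the Pillow version to prove the extra package was installed
    logger.info("Pillow version: %s", PIL.__version__)

    return "All the best for you"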

Now let’s see how to execute the Lambda function inside a Docker container.

Dockerfile and Docker-Compose

First we need to take care of installing additional Python packages, in our example Pillow. We create a new Docker image using a Dockerfile based on the lambci/lambda:python3.6 image, installing all the additional packages specified in requirements.txt.

Finally, with a docker-compose.yml file we define a lambda service to be used for offline debugging. The purpose is to map the host directory src for the sources and to set PYTHONPATH so that the additional packages in /var/task/lib are used.

As a first test, just start docker-compose to run our Lambda function, passing the handler.

docker-compose run lambda src.lambda_function.lambda_handler

Events

Our function expects an SQS event to process. How do we send one? First we need to get a test JSON event and save it in a file (for example event.json), then specify it on the docker-compose command line.

docker-compose run lambda src.lambda_function.lambda_handler "$(cat event.json)"

This is the result of execution.

Perfect! Our function runs correctly and the result matches what we expect. Starting the Docker container corresponds to an AWS Lambda cold start. Let’s see how to keep the container active in order to call the function several times.

Keep running

Alternatively, you can start the container of our Lambda function and keep it running: this way you can make several consecutive calls quickly without waiting for the “cold start” each time. In this mode, an API server is started which by default listens on port 9001.

docker-compose run -e DOCKER_LAMBDA_STAY_OPEN=1 -p 9001:9001 lambda src.lambda_function.lambda_handler

We are going to call our function using, for example, curl.

curl --data-binary "@event.json" http://localhost:9001/2015-03-31/functions/myfunction/invocations

The default handler of our Lambda function responds to this endpoint. The --data-binary parameter sends the contents of the JSON file, the sample SQS event.

Conclusions

I collected in this GitHub repository the files needed to recreate the Docker environment that I use for offline development and debugging of AWS Lambda functions in Python. For convenience I have collected the most frequent operations in a Makefile .

The make lambda-build command builds the deployment package of the function, including the additional packages.

Below is an example of deploying our Lambda function with CloudFormation.

Other commands available in the Makefile are:

## create Docker image with requirements
make docker-build

## run "src.lambda_function.lambda_handler" with docker-compose
## mapping "./tmp" and "./src" folders. 
## "event.json" file is loaded and provided to function  
make lambda-run

## run API server on port 9001 with "src.lambda_function.lambda_handler" 	 
make lambda-stay-open

Did we have fun? Would you like to use this environment to build a CI pipeline that tests our function? That’s the topic of the next post!

Your disposable emails on AWS

11 min

90% of online services have a registration process which requires you to provide a valid email address, to which an email with a link to click on is sent. After that you gain access to the service you were interested in but, at the same time, you start receiving newsletters, surveys, promotions and so on in your mailbox.

A disposable address is useful in these situations: it lets you complete the registration process and prevents you from receiving unwanted emails in the future, keeping your personal mailbox clean.

UPDATE!!

Are you looking for a ready-to-use disposable email service, with an awesome API and Slack integration?

try my24h.email

There are several online services that offer disposable email addresses, the most famous probably being Mailinator. However, these services are often limited and not really suitable for professional use in a company. The mail domain used, for example, is sometimes recognized as “disposable” and not considered valid during the registration process.

In this article we are going to create Disposable-mail, a disposable and customizable email service on your own internet domain, using AWS building blocks. The service is totally serverless: no EC2 instances are necessary, only simple Lambda functions.

Disposable-mail acts like this:

  • Go to the website
  • Choose an email address and type it
  • The mailbox you have chosen is immediately activated and you can start receiving emails

Do you want to try it now? Find it here: http://disposable.aws.gotocloud.it

No need to register yourself or choose a password. The mailbox is valid for one hour and after 60 minutes all the messages received will be deleted and the mailbox will no longer be active.

It is not possible to send emails, only to receive them. Attachments are not displayed (maybe in the next version).

To avoid abuse, Google reCAPTCHA is hidden in the login form, which is why you need your own reCAPTCHA keys.

Requirements

Do you want to create your disposable email service on your own internet domain? Here’s what you need:

  • an AWS account
  • an internet domain (a third-level domain is better) on Route53 or anywhere it is possible to configure the DNS zone
  • a pair of Google reCAPTCHA keys (site / private) valid for your domain (reCAPTCHA console)

That’s all.

Solution architecture

As anticipated, Disposable-mail is totally serverless and, given the nature of these AWS building blocks, we will not have to worry about their management: scalability is guaranteed and it’s pay-per-use.

The diagram shows that there are essentially two workflows: the first, totally automated, handles receiving, checking, storing and deleting emails; the second is driven by the user who accesses the React application and reads their mailbox. The common elements are the S3 bucket where the received emails are stored and the DynamoDB tables where the related metadata is kept.

To receive emails, we rely on the Amazon SES (Simple Email Service) service: we will deal with its configuration in the next paragraph. Some lambda functions take care of managing the flow of emails. Route53 is used to host the DNS zone of our internet domain.

The web application is hosted directly in a public S3 bucket and the API endpoint is managed by the Amazon API Gateway service. Some lambda functions take care of processing requests from clients.

We rely on CloudWatch to periodically run a Lambda function whose purpose is to disable and delete expired mailboxes.

First Steps

The first step in the realization of the solution is the correct configuration of the Amazon SES service. The CloudFormation template that we will use for the construction of the entire backend infrastructure assumes that the email domain we have chosen is already correctly configured and verified.

Let’s start the configuration: from the AWS console, access the SES service. This service is not available in all AWS Regions, so you may have to use a Region different from your usual one. For the sake of simplicity, the entire solution will be hosted in the same AWS Region.

Domains displays the list of all internet domains for which the SES service has been configured. We need to add our domain and complete the verification process.

By clicking on Verify a New Domain we add our domain by specifying its name. As mentioned above, it is better to use a third level domain because all mailboxes will be managed by Disposable-mail.

Once the domain has been added, we need to configure the DNS zone. Two records should be added:

  • a TXT record required by AWS verification
  • an MX record to enable emails receiving

If the DNS zone is managed by Route53, all we have to do is click on Use Route 53: the configuration of the two records will be automatic. Otherwise it is necessary to refer to your provider to modify the zone as shown.

Once the domain has been verified, we will be able to receive emails via Amazon SES.

Backend – Receiving emails

The CloudFormation template creates a new “catch-all” type of email reception rule (Rule Set), that is valid for all email addresses of type <anything>@<your_domain>

The specified “Recipient” corresponds to our domain.

The first Action invokes a Lambda function and checks its response. The IncomingMailCheckFunction verifies whether the email destination address is valid and active. The verification result is sent back to the SES service: continue with the next actions (CONTINUE) or stop the execution (STOP_RULE_SET).

The second Action, performed only if the previous one allowed it, stores the email in the S3 bucket called “incoming.disposable.<domain_name>” and notifies the SNS topic “IncomingMailTopic“.

How is the email address verified? A DynamoDB table is used to store the list of active email addresses and their expiration (TTL). The key of the table is the email address.

## part of IncomingMailCheckFunction, see Github repository for
## the complete functions code.

import os
import time

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ['addresses_table_name'])

## Check if the address is active and not expired
def address_exists(address):
    exists = False
    response = table.get_item(Key={'address': address})
    if 'Item' in response:
        item = response['Item']
        if item['TTL'] > int(time.time()):
            exists = True
    return exists

def lambda_handler(event, context):
    for record in event['Records']:
        to_address = record['ses']['mail']['destination'][0]
        if address_exists(to_address):
            return {'disposition': 'CONTINUE'}
        else:
            return {'disposition': 'STOP_RULE_SET'}

To complete the analysis of the email receiving flow, let’s look at the StoreEmailFunction. This function stores the data of the received email in the DynamoDB “emails” table. The table has a partition key, “destination”, which corresponds to the email address, and a sort key which corresponds to the “messageId” of the email itself. The Lambda function receives notifications from the SNS topic.

## part of StoreEmailFunction, see Github repository for
## the complete functions code.

import json
import os

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ['emails_table_name'])

## Store email details in database
def store_email(email, receipt):
    response = table.put_item(
        Item={'destination': email['destination'][0],
              'messageId': email['messageId'],
              'timestamp': email['timestamp'],
              'source': email['source'],
              'commonHeaders': email['commonHeaders'],
              'bucketName': receipt['action']['bucketName'],
              'bucketObjectKey': receipt['action']['objectKey'],
              'isNew': "true"
              }
    )

def lambda_handler(event, context):
    message = json.loads(event['Records'][0]['Sns']['Message'])
    store_email(message['mail'], message['receipt'])

The SNS notification contains the name of the S3 bucket and the object key used to store the message. By saving this information in the DynamoDB table, it will be possible to retrieve the content of the email later.

The CleanUpFunction is invoked periodically by CloudWatch to empty and deactivate expired mailboxes. It also deletes expired session records.

A scan is performed on the addresses and sessions tables, checking the value of the TTL attribute.

## part of CleanUpFunction, see the GitHub repository for
## the complete function code.

import time

from boto3.dynamodb.conditions import Attr
from botocore.exceptions import ClientError

def cleanup():
    ## get expired addresses
    try:
        response = addresses_table.scan(
            FilterExpression = Attr('TTL').lt(int(time.time())),
            ProjectionExpression = "address"
        )
    except ClientError as e:
        logger.error('## DynamoDB Client Exception')
        logger.error(e.response['Error']['Message'])
    else:
        for item in response['Items']:
            find_emails(item['address'])
            delete_address_item(item['address'])

Backend – API

API Gateway is used as a REST (regional) endpoint for the web application.

Lambda integration is used with three Lambda functions, dedicated to creating a new mailbox (CreateEmailFunction), listing the emails in a mailbox (GetEmailsListFunction) and downloading the content of a specific email (GetEmailFileFunction).

The CreateEmailFunction function responds to the GET method of the /create resource and has two parameters:

  • address – the email address to create or log in to
  • captcha – the reCAPTCHA verification code

Both parameters are validated and, if the email address does not already exist, it is created by inserting the corresponding record in the addresses table. The function returns a sessionID in the response body, which the client will use for subsequent API calls. Valid session IDs and their expiration are stored in the sessions table.

## part of CreateEmailFunction, see the GitHub repository for
## the complete function code.

import http.client, urllib.parse
import json
import os
import re

valid_domains = os.environ['valid_domains'].split(',')
recaptcha_key = os.environ['recaptcha_key']

## A very simple function to validate an email address
def validate_email(address):
    valid = False
    # Do some basic regex validation
    match = re.match(r'^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,4})$', address)
    if match is not None:
        domain = address.split("@")[-1]
        if domain in valid_domains:
            valid = True
    return valid

## reCAPTCHA validation function
def validate_recaptcha(token):
    conn = http.client.HTTPSConnection('www.google.com')
    headers = {"Content-type": "application/x-www-form-urlencoded"}
    params = urllib.parse.urlencode({'secret': recaptcha_key, 'response': token})
    conn.request('POST', '/recaptcha/api/siteverify', params, headers)
    response = conn.getresponse()
    r = json.loads(response.read().decode())
    return r['success']
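
The snippet above only covers input validation. A hypothetical sketch of the remaining step, creating the address record with its TTL and opening a session, might look like this (the table names, TTL values and the uuid-based session ID are assumptions, not necessarily the repository’s exact implementation):

import os
import time
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
addresses_table = dynamodb.Table(os.environ['addresses_table_name'])
sessions_table = dynamodb.Table(os.environ['sessions_table_name'])

## Hypothetical helper: activate the mailbox and open a client session
def create_mailbox_and_session(address, mailbox_ttl=3600, session_ttl=600):
    now = int(time.time())
    ## Create (or refresh) the disposable address with its expiration
    addresses_table.put_item(Item={'address': address, 'TTL': now + mailbox_ttl})
    ## Create the session the client will use for subsequent API calls
    session_id = str(uuid.uuid4())
    sessions_table.put_item(Item={'sessionId': session_id, 'TTL': now + session_ttl})
    return session_id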

The GetEmailsListFunction responds to the GET method of the /{destination} resource, where the {destination} parameter is the mailbox address. The function validates the sessionID parameter provided in the query string and returns the list of emails in the mailbox in JSON format.

The list of emails is obtained through a Query on the emails table.

## part of GetEmailsListFunction, see the GitHub repository for
## the complete function code.

import json
import logging
import os
import time

import boto3
from boto3.dynamodb.conditions import Key
from botocore.exceptions import ClientError

logger = logging.getLogger()

dynamodb = boto3.resource("dynamodb")
emails_table = dynamodb.Table(os.environ['emails_table_name'])
sessions_table = dynamodb.Table(os.environ['sessions_table_name'])

## Get the emails list from the DB
def get_emails(destination):
    items = None
    try:
        filtering_exp = Key('destination').eq(destination)
        response = emails_table.query(KeyConditionExpression=filtering_exp)
    except ClientError as e:
        logger.info('## DynamoDB Client Exception')
        logger.info(e.response['Error']['Message'])
    else:
        # Remove private bucket details
        for i in response['Items']:
            i.pop('bucketObjectKey', None)
            i.pop('bucketName', None)
        items = {'Items': response['Items'], 'Count': response['Count']}
    return items

## Verify the session
def session_is_valid(sessionid):
    valid = False
    try:
        response = sessions_table.get_item(
            Key={
            'sessionId': sessionid
            }
        )
    except ClientError as e:
        logger.info('## DynamoDB Client Exception')
        logger.info(e.response['Error']['Message'])
    else:
        if 'Item' in response:
            item = response['Item']
            if item['TTL'] > int(time.time()):
                valid = True
    return valid

def lambda_handler(event, context):

    headers = {
        "access-control-allow-headers":
           "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token",
        "access-control-allow-methods":
           "DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT",
        "access-control-allow-origin": "*"
    }

    ## Default to 400 if the session is invalid or the query fails
    result = {"statusCode": 400, "body": json.dumps({'message': 'Invalid request'}), "headers": headers}

    if session_is_valid(event["queryStringParameters"]["sessionid"]):
        destination = event["pathParameters"]["destination"]
        items = get_emails(destination)
        if items is not None:
           result = {"statusCode": 200, "body": json.dumps(items), "headers": headers}

    return result

The GetEmailFileFunction responds to the GET method of the /{destination}/{messageId} resource, where {destination} is the mailbox address and {messageId} is the unique identifier of the email. This function also checks the sessionID provided in the query string.

It returns the content of the email by reading it from the S3 bucket and marks the email record in the emails table as read, changing the isNew attribute to “false”.

## part of GetEmailFileFunction, see the GitHub repository for
## the complete function code.

import logging
import os

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger()

dynamodb = boto3.resource("dynamodb")
emails_table = dynamodb.Table(os.environ['emails_table_name'])
s3 = boto3.client('s3')

## Get email details, including the S3 bucket and object key.
def get_email_file(destination, messageId):
    result = None
    try:
        response = emails_table.get_item(
            Key={
            'destination': destination,
            'messageId' : messageId
            }
        )
    except ClientError as e:
        logger.info('## DynamoDB Client Exception')
        logger.info(e.response['Error']['Message'])
    else:
        if 'Item' in response:
            result = response['Item']
    return result

## Mark the item as read in the DynamoDB table
def set_as_readed(destination, messageId):
    try:
        emails_table.update_item(
            Key={
            'destination': destination,
            'messageId' : messageId
            },
            UpdateExpression="SET isNew = :updated",
            ExpressionAttributeValues={':updated': 'false'}
        )
    except ClientError as e:
        logger.info('## DynamoDB Client Exception')
        logger.info(e.response['Error']['Message'])

def lambda_handler(event, context):

    headers = {
        "access-control-allow-headers":
           "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token",
        "access-control-allow-methods":
           "DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT",
        "access-control-allow-origin": "*"
    }

    ## Default to 404 if the email cannot be found
    result = {"statusCode": 404, "headers": headers, "body": ""}

    destination = event["pathParameters"]["destination"]
    messageId = event["pathParameters"]["messageId"]

    email_file = get_email_file(destination, messageId)
    if email_file is not None:
        data = s3.get_object(
            Bucket=email_file['bucketName'],
            Key=email_file['bucketObjectKey'])
        contents = data['Body'].read().decode('utf-8')
        headers.update({"content-type": "message/rfc822"})
        result = {
           "statusCode": 200,
            "headers": headers,
            "body": contents
        }
        if email_file['isNew'] == 'true':
           set_as_readed(destination, messageId)

    return result

To enable CORS, an OPTIONS method has been added for each resource managed by API Gateway. In addition, the Lambda functions return specific CORS headers.

The CloudFormation template creates the API deployment and the related stage in order to obtain a valid endpoint. The API endpoint URL is used to configure the client web application.

Backend Installation

Once you have finished configuring the SES service for your domain as explained in the First Steps section, we use a CloudFormation template to build the entire backend infrastructure. The template creates the S3 bucket, the DynamoDB tables, all the Lambda functions and the APIs. It also creates all the IAM roles, permissions and service configurations, including the Amazon SNS topic.

AWSTemplateFormatVersion: 2010-09-09
Description: Disposable Email

Parameters:
    DomainName:
        Description: Domain name to be used
        Type: String
    ReCaptchaPrivateKey:
        Description: ReCaptcha private key to validate mailbox creation requests  
        Type: String
        Default: ""
    AddressesTableName:
        Description: DynamoDB table to store valid addresses
        Type: String
        Default: "disposable_addresses_table"
    EmailsTableName:
        Description: DynamoDB table to store emails 
        Type: String
        Default: "disposable_emails_table"
    SessionsTableName:
        Description: DynamoDB table to store client sessions 
        Type: String
        Default: "disposable_sessions_table"  
    RuleSetName:
        Description: SES RuleSet name to be used 
        Type: String
        Default: "default-rule-set"
    MailboxTTL:
        Description: Life (seconds) of mailbox
        Type: String
        Default: "3600"
    SessionTTL:
        Description: Session expiration in seconds
        Type: String
        Default: "600"
    SourceBucket:
        Description: Lambda functions source S3 Bucket
        Type: String
        Default: "cfvn" 

The Lambda function sources are not embedded in the template but live in external Python files. During stack creation, the ZIP packages of the Lambda functions must be hosted in an S3 bucket. You can specify a different bucket (the default is my “official” public bucket) using the SourceBucket parameter. This is required if you need to customize the Lambda functions before creating the stack.

To build the entire backend infrastructure you can use the CloudFormation console or AWS CLI:

aws cloudformation create-stack --stack-name <name> --template-url <url> --parameters <parameters> --capabilities CAPABILITY_IAM
  • <name> – the stack name (example: disposable_stack)
  • <url> – the template URL (example: https://cfvn.s3-eu-west-1.amazonaws.com/disposable.yml)

<parameters> is the list of parameters to be provided to the template. The two main and mandatory parameters are DomainName and ReCaptchaPrivateKey.

ParameterKey=DomainName,ParameterValue=<your_domain>  
ParameterKey=ReCaptchaPrivateKey,ParameterValue=<your_private_captcha_key>

Once the stack is created, we need to know the newly created endpoint to be used in the web application configuration. This is provided by CloudFormation as an output value. It is possible to get it from the CloudFormation console or directly from the AWS CLI:

aws cloudformation describe-stacks --stack-name <name> --query Stacks[0].Outputs[0].OutputValue

Frontend application

The React / Material-UI application is quite simple and consists of three components: LoginForm, to handle mailbox creation and login; EmailList, to list the messages in the mailbox; and EmailViewer, to view the content of an email.

The main App component stores the email address and the session ID in its state, both initially empty. The LoginForm component manages the login process.

<LoginForm changeAddress={this.changeAddress.bind(this)} 
           apiEndpoint={APIEndpoint}
           recaptcha_key={ReCaptcha_SiteKey}
           email_domain={email_domain}/>

The LoginForm component includes the reCAPTCHA component and calls the /create API in the form’s onSubmit event.

handleSubmit(event) {
        const recaptchaValue = this.recaptchaRef.current.getValue();
        if (recaptchaValue === '') {
           window.location.reload();
        } else { 
            console.log("Captcha value:", recaptchaValue);
        
          fetch(this.props.apiEndpoint + 'create?address=' 
             + encodeURI(this.state.address + this.props.email_domain)
             + '&captcha=' + recaptchaValue)
          .then(r =>  r.json().then(data => ({status: r.status, body: data})))
          .then(r => {
            console.log(r);
            console.log('Response from API: ' + r.body.message);
            if (r.status === 200) {
              this.props.changeAddress(this.state.address + this.props.email_domain, r.body.sessionid);  
            }
        })
        .catch(console.log);
        }
        event.preventDefault();
    }

After login, the App component builds the EmailList component.

<EmailList address={this.state.address} 
           sessionid={this.state.sessionid}
           changeAddress={this.changeAddress.bind(this)} 
           apiEndpoint={APIEndpoint}/>

The EmailList component calls the /{destination} API to get the list of emails stored in the selected mailbox.

getList(force) {
        fetch(this.props.apiEndpoint + this.state.address +
              '?sessionid=' + encodeURI(this.props.sessionid))
        .then(response => {
            const statusCode = response.status;
            const data = response.json();
            return Promise.all([statusCode, data]);
          })
        .then(res => {
            console.log(res);
            if (res[0] === 400) {
                this.logout();
            } else {      
                res[1].Items.sort(function(a,b){
                    if (a.timestamp > b.timestamp) { return -1 } else { return 1 }
                });         
                if ((this.listIsChanged(res[1].Items) || force)) {
                    this.setState({ emails: res[1].Items });
                    if ((this.state.selectedId === '') && (res[1].Items.length > 0)) {
                        this.setState({ selectedId: res[1].Items[0].messageId });   
                    }
                }
            }
        })
        .catch(console.log)
    }

It creates an EmailViewer component to show the selected message contents.

<EmailViewer address={this.state.address} 
             messageId={this.state.selectedId} 
             sessionid={this.state.sessionid} 
             apiEndpoint={this.props.apiEndpoint}/>

Showing the email content is the most complex part of the client application. Once the content has been obtained from the /{destination}/{messageId} API, it has to be parsed.

For this purpose, the emailjs-mime-parser library is used: it gives access to the different parts of the email. The getMailContents function selects the correct body part to show, giving precedence to HTML content.

To view HTML content we use a very dangerous “trick”:

<div className="body" dangerouslySetInnerHTML={{__html: this.state.messageData}} />

Frontend Installation

To build the React web application, once the repository has been cloned, you need to configure a few parameters at the beginning of the App.js file:

//  - your APIEndpoint 
const APIEndpoint = <your_API_endpoint>; 

//  - your ReCaptcha Site Key  
const ReCaptcha_SiteKey = <your_reCAPTCHA_site_key>;  
 
//  - your email domain 
const email_domain = <your_domain>;

Use npm start to test the application and npm run build to prepare the files to be placed on your favorite hosting or, as I did, directly in a public S3 bucket. Remember to enable the “localhost” domain in the reCAPTCHA configuration to be able to test the application locally.

Conclusions

The sources of the React application and the CloudFormation template to create your AWS backend are available in this GitHub repository.

Disposable-mail was born as an example of a cloud-native solution. Many aspects need improvement (security, for example). Feel free to suggest any improvements. However, I hope this is a good starting point to create your own disposable webmail service.

Did we have fun? See you next time!

Add to Slack – Monetize Your API

7 min

API monetization is a growing trend. It is now a consolidated business for digital companies to grant access, for a fee, to APIs exposing company data or services, through different distribution channels.

A possible channel for spreading your APIs and services is integration with existing and well-known platforms, such as Slack. If your service or APIs can be useful and valuable in the context of a meeting, a project chat or during customer care activities, then creating an integration for Slack could prove profitable.

But what’s behind the integration of existing APIs with Slack? How complex is it to publish the classic “Add to Slack” button on your website? Let’s see it in this step-by-step guide.

Create an App

In Slack you can create your own apps or use existing ones. The most complete and professional ones are listed in the App Directory. You will find everything from integration with Google Calendar to Zoom and GitHub. Normally apps allow you to add functionality to your channels, providing integration with existing services. An app can communicate with participants in a channel by sending suggestion messages and notifications or by responding to specific commands. The app can “express itself” in the form of a bot or act as you, sending messages directly. In short: the potential is unlimited!

Creating your own app is simple: just reach the API portal and follow the directions of “Create New App”.

Before going into the next steps of creating an app, it is better to clarify how this can be distributed on Slack.

Installation in your Workspace

By default, when we create an app on Slack, it can be installed in our own workspace. Anyone invited to that workspace can therefore interact with it and take advantage of it. If we intend to create an application dedicated exclusively to our team, no further effort is required.

Distribution

If the app we intend to create must be installable and shareable on other workspaces, for example at company level, it is necessary to activate the “distribution” of the application. This enables the classic “Add to Slack” button and involves implementing an OAuth 2.0 flow to manage the authorization process.

App Directory

The App Directory is a kind of Slack marketplace where all the applications that have passed a verification process are listed by category. The review is conducted directly by a Slack team that ensures the application meets the necessary technical and quality requirements. Obviously, if we intend to monetize our APIs through an integration, publication on the App Directory gives us a much wider audience and a better reputation, because the app has already been verified by Slack itself.

Publishing your app in the official Directory is obviously a more complex process, which involves a long list of requirements to be met.

Interaction

We created our app, which however does not yet perform any function. Depending on the service we want to integrate into Slack, we should first define how our application will interact in Slack channels. There are several possibilities: the app can be invoked with a “slash command” or in response to specific events.

The “slash commands” are the typical Slack commands preceded by a slash, such as /invite. In the case of events, it is possible to invoke the app when it is mentioned (@app_name) during a conversation or when direct messages are sent to it.

Personally I chose the second approach, based on events, but it is not the only (or necessarily optimal) solution: it depends on the service we are integrating and on the UX we want to provide. Furthermore, nothing prevents us from supporting both.

Integration

How do we integrate our service and its existing APIs? We need to create a dedicated layer, able to respond to Slack’s requests and interact with users on one side, and to call our APIs on the other.

AWS serverless Slack integration layer example

What should this integration layer do? Let’s see:

  • Receive events from Slack and respond accordingly, exposing the service provided by our APIs
  • Manage the OAuth 2.0 authorization process
  • Manage subscriptions and billing, or integrate with an e-commerce platform

Subscription to events

We have chosen to receive notifications when an event of interest to our app occurs. This requires specifying our integration layer endpoint.

The integration layer must validate our endpoint by replying to Slack’s verification request with the challenge parameter it contains; once the endpoint is verified, we can specify which notifications we want to receive.
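
As a minimal sketch (assuming an API Gateway Lambda proxy integration), this URL verification step might be handled like this:

import json

def lambda_handler(event, context):
    body = json.loads(event['body'])

    ## Slack sends a one-time url_verification event when the endpoint is configured:
    ## echoing back the challenge value proves that we own the endpoint.
    if body.get('type') == 'url_verification':
        return {'statusCode': 200,
                'headers': {'Content-Type': 'application/json'},
                'body': json.dumps({'challenge': body['challenge']})}

    ## ...handle the real event callbacks here...
    return {'statusCode': 200, 'body': ''}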

The events are well described: message.im occurs, for example, when a direct message is sent to our app, typically in the app’s private channel.

The concept of scope is important to understand: in order to receive certain notifications, our application must have obtained authorization for a particular scope; in the previous example, im:history. We will see that this authorization is requested during the app authorization process.

Is that all? Not yet. We should also make sure that our integration layer verifies the origin of the requests: we certainly do not want to publish a paid service and be defrauded, right?

The request verification algorithm is described in this article from the official documentation. It requires the use of the Signing Secret, a parameter that Slack assigns to each application and which can be found in the App Credentials section of your application.

Personally, I built a totally serverless integration layer, using API Gateway and AWS Lambda. Here is an example of the event subscriber function, including Base64 decoding and Slack signature verification.
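
As a hedged sketch of the verification part only (an HMAC-SHA256 of the version, the timestamp and the raw request body, compared with the X-Slack-Signature header, as described in the documentation):

import hashlib
import hmac
import os

SIGNING_SECRET = os.environ['SLACK_SIGNING_SECRET']  ## from the App Credentials section

def is_valid_slack_request(headers, raw_body):
    timestamp = headers['X-Slack-Request-Timestamp']
    slack_signature = headers['X-Slack-Signature']

    ## Base string as documented: "v0:<timestamp>:<raw request body>"
    basestring = 'v0:{}:{}'.format(timestamp, raw_body).encode('utf-8')
    computed = 'v0=' + hmac.new(SIGNING_SECRET.encode('utf-8'),
                                basestring, hashlib.sha256).hexdigest()

    ## Constant-time comparison to avoid timing attacks
    return hmac.compare_digest(computed, slack_signature)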

Notifications are in JSON format and well documented. An example of a payload for message.im is here.

What is the app_home_opened event? It occurs when the user opens the app’s Home tab and can be used to send a welcome message and explain, for example, how to use the app. It is worth remembering this event because its correct handling is one of the requirements for publication in the App Directory.

The available events are many and allow you to customize the UX of your integration.

Authorization

The OAuth 2.0 authorization process is documented very clearly.

Authorization process consists of the following steps:

  1. The user starts the installation process by clicking on “Add to Slack”. The HTML code of the button can be easily obtained from the app page. The URL called by the button needs to be handled by our integration layer, which will redirect (302) to the Slack authorization URL (https://slack.com/oauth/v2/authorize) specifying the required client_id and scope parameters. The first identifies our app and its value is available in the App Credentials section. The second specifies the various authorizations (scopes) we intend to request for our app.
  2. The user authorizes the installation of the application.
  3. A first authorization code is delivered to our integration layer at the redirect URL specified in the OAuth & Permissions section of our application.
  4. The integration layer requests the authorization token by calling the dedicated URL (https://slack.com/api/oauth.v2.access) and specifying the client_id and client_secret parameters, in addition to the code just obtained.
  5. If the answer is positive, the token is stored by the integration layer so that it can be used to interact with the related workspace.

Here is an example of a Lambda function able to manage the process, storing the OAuth tokens in a DynamoDB table.
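
A hedged, simplified sketch of that token exchange (the DynamoDB table name, its attributes and the environment variables are assumptions):

import json
import os
import urllib.parse
import urllib.request

import boto3

tokens_table = boto3.resource("dynamodb").Table(os.environ['tokens_table_name'])  ## assumed name

def lambda_handler(event, context):
    ## Slack redirects here with a temporary "code" query string parameter
    code = event['queryStringParameters']['code']

    params = urllib.parse.urlencode({
        'client_id': os.environ['SLACK_CLIENT_ID'],
        'client_secret': os.environ['SLACK_CLIENT_SECRET'],
        'code': code
    }).encode('utf-8')

    ## Exchange the temporary code for a workspace access token
    with urllib.request.urlopen('https://slack.com/api/oauth.v2.access', data=params) as resp:
        data = json.loads(resp.read().decode())

    if data.get('ok'):
        ## Store the token keyed by workspace (team) id for later API calls
        tokens_table.put_item(Item={'team_id': data['team']['id'],
                                    'access_token': data['access_token']})
        return {'statusCode': 200, 'body': 'App installed successfully!'}

    return {'statusCode': 400, 'body': 'OAuth error: ' + str(data.get('error'))}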

UX & Billing

Are we done? No, we’re just at the beginning! We have created two functions: the first receives notifications about the events we are interested in managing with our app; the second correctly manages the authorization process.

We must now roll up our sleeves and implement the real integration between Slack and our service or API. The Python package slackclient allows us to interact with the Web Client API, so that we can, for example, post a message in response to a specific direct command. As mentioned at the beginning of this post, the possibilities are endless and depend a lot on the service you want to integrate.
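
As a small example with slackclient v2 (the environment variable and the channel name are placeholders; in a real handler the channel comes from the event payload):

import os

from slack import WebClient  ## the "slackclient" v2 package

client = WebClient(token=os.environ['SLACK_BOT_TOKEN'])  ## token stored during the OAuth flow

## Post a simple reply in a channel
client.chat_postMessage(channel='#general', text='Hello from the integration layer!')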

Slack messages can be enriched with elements to make them more functional and attractive. The structure of each message is made up of blocks. The excellent editor (Block Kit Builder) allows you to experiment with the layout of messages before implementing them in your app.

We will also have to manage the integration with an e-commerce platform to handle subscriptions to our service. I personally used Paddle, an all-in-one SaaS commerce platform.

Publishing on the App Directory

Is our App ready? Have we tested its functionality, the installation process and integration with the E-Commerce platform? Very well! Are we ready for publication in the App Directory? Again, not yet.

The publication process involves verification by the Slack team of a series of requirements that our app must satisfy. There are many of them, divided into sections, and I recommend reading them carefully.

The verification process is very painstaking and it can take a few days to get feedback. If even one of the requirements is not met, it is reported and a new submission must be made, obviously after correcting your app.

So let’s get ready to create, for example, landing pages for customer support and for your privacy policy. Particular attention must be paid to the requested auth scopes, which must be justified in detail.

If we have done everything correctly, our app will finally be present in the Slack App Directory and we will start monetizing our service... or at least we hope so!

Did we have fun? See you next time!



Free SSL certificates with Certbot in AWS Lambda

5 min

Thanks to Certbot and the Electronic Frontier Foundation, it is possible to provide a totally free SSL certificate for your website. Certbot is a command line tool to request a valid SSL certificate for your domain, following a process that verifies domain ownership. The tool can also handle web server certificate installation and many other tasks (plugins are available). This post is a guide on how to automatically request and renew your free SSL certificates with Certbot in AWS Lambda.

Why use Certbot in AWS Lambda?

I manage several web applications that use CloudFront for content distribution, with an S3 bucket as the origin. So I decided to create a simple Lambda function that obtains SSL certificates with Certbot and periodically verifies their expiration date. If necessary, it automatically renews and imports the new certificate into AWS Certificate Manager.

Result? No more expired SSL certificates! The automation of the process is particularly important considering the short life (90 days) of the certificates issued by Let’s Encrypt CA.

Solution Overview

The heart of the solution is obviously the Lambda function, periodically invoked by a CloudWatch event. The function manages a list of domains (specified in the DOMAINS_LIST environment variable as comma-separated values) and, for each of them:

  • checks the presence of the certificate in ACM
  • if the certificate is not present, requests a new one using Certbot; to complete the domain ownership verification, the function copies the validation token into the S3 bucket hosting the domain’s static resources
  • for existing certificates, checks the expiration date and, if necessary, performs the renewal (a sketch of the expiration check follows below)

At the end of the process, the configuration, certificates and private keys are stored in a private S3 bucket, to be reused at the next run.
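
As a hedged illustration of the expiration check mentioned above (not the project’s exact code), the ACM lookup with boto3 might look like this:

import datetime

import boto3

acm = boto3.client('acm', region_name='us-east-1')  ## certificates for CloudFront live in us-east-1

## Return the days left for the ACM certificate of a domain, or None if not found
def days_to_expiration(domain):
    certs = acm.list_certificates()['CertificateSummaryList']
    for cert in certs:
        if cert['DomainName'] == domain:
            detail = acm.describe_certificate(CertificateArn=cert['CertificateArn'])
            not_after = detail['Certificate']['NotAfter']
            return (not_after - datetime.datetime.now(not_after.tzinfo)).days
    return None  ## no certificate yet: a new one will be requested with Certbot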

Using Certbot in AWS Lambda

Certbot is written in Python and can be easily used to automate the certificate request, renewal and revocation processes. However, using it in an AWS Lambda environment requires an additional preparation step, so that all the necessary packages and dependencies are correctly installed. This fantastic guide explains in detail how to use an EC2 instance for this purpose. I have prepared a package containing version 1.3.0 of Certbot, which is available in the repository related to this post.

The Certbot configuration is located in the /tmp directory of the Lambda instance and is removed at the end of the execution for security reasons, because it includes the private keys of the certificates.

To proceed with renewal operations, however, it is necessary to preserve the Certbot configuration. The directory tree containing the configuration is compressed into a ZIP file and copied into the S3 bucket set by the CERTBOT_BUCKET environment variable. At the next execution, the configuration is restored from the ZIP file into the /tmp directory of the instance.

The dark side of the Symlinks

Certbot checks that its configuration tree is valid. It checks, among other things, the presence of symbolic links in the live directory of each domain.

The backup and restore of the configuration tree in ZIP format removes these links (they are replaced by the actual files). This method is used to restore the initial situation.

Pass the verification challenge with AWS S3

To request a certificate, you must pass a challenge, which proves ownership of the relevant domain. For the Lambda function to correctly manage this challenge, the following are required:

  • an S3 bucket named after the domain
  • the S3 bucket already configured as the origin of the domain’s CloudFront distribution or, alternatively, static website hosting (HTTP) of the S3 bucket temporarily enabled
  • correct DNS configuration for the domain (CNAME to CloudFront / S3)
  • an IAM role allowing PutObject and DeleteObject operations on the S3 bucket

The challenge consists of creating a specific file containing a token provided by Certbot. This file must be temporarily published on the website so that it can be verified by Let’s Encrypt’s servers. At the end of the verification process the file is removed. These operations are performed by two Python scripts (auth-hook.py and cleanup-hook.py).
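
As a hedged sketch of what such an auth hook could look like (Certbot exposes the CERTBOT_DOMAIN, CERTBOT_TOKEN and CERTBOT_VALIDATION environment variables to manual auth hooks; the bucket naming follows the convention above, but this is not necessarily the repository’s exact code):

## auth-hook.py (sketch): publish the HTTP-01 challenge file on the domain's S3 bucket
import os

import boto3

s3 = boto3.client('s3')

domain = os.environ['CERTBOT_DOMAIN']          ## domain being validated
token = os.environ['CERTBOT_TOKEN']            ## challenge file name
validation = os.environ['CERTBOT_VALIDATION']  ## challenge file content

## The bucket is named after the domain, so Let's Encrypt can fetch
## http://<domain>/.well-known/acme-challenge/<token> via CloudFront / S3
s3.put_object(Bucket=domain,
              Key='.well-known/acme-challenge/' + token,
              Body=validation.encode('utf-8'))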

Access to the domain S3 bucket is required only to pass the challenge and to obtain the first SSL certificate. It will no longer be required during subsequent renewals.

Import the certificate to ACM

Once the certificate has been obtained, the Lambda function imports it into ACM. If a certificate already exists for the same domain, it is replaced by specifying its ARN during the import.

Important note: to be used in CloudFront, the certificate must be imported into the US East region (N. Virginia). For convenience, I built the entire stack in this AWS Region.
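
A minimal boto3 sketch of the import step (the local paths assume Certbot’s live directory restored under /tmp; re-importing with the same ARN is what replaces the expiring certificate):

import boto3

acm = boto3.client('acm', region_name='us-east-1')  ## must be us-east-1 for CloudFront

def import_certificate(domain, existing_arn=None):
    live_dir = '/tmp/config/live/' + domain  ## assumed location of the restored Certbot config

    def read(name):
        with open(live_dir + '/' + name, 'rb') as f:
            return f.read()

    kwargs = {
        'Certificate': read('cert.pem'),
        'PrivateKey': read('privkey.pem'),
        'CertificateChain': read('chain.pem'),
    }
    if existing_arn:
        ## Re-importing with the same ARN replaces the expiring certificate in place
        kwargs['CertificateArn'] = existing_arn
    return acm.import_certificate(**kwargs)['CertificateArn']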

Scheduling with CloudWatch

CloudWatch is used to run the Lambda function once a day. Among the Lambda configuration parameters, an environment variable (CERTS_RENEW_DAYS_BEFORE_EXPIRATION) determines how many days before the deadline each certificate is renewed. The expiration date is obtained from ACM. No unnecessary renewals are attempted, so the Lambda execution can be scheduled on a daily basis without worries.

The renewal through Certbot is forced, guaranteeing that a new certificate is obtained 30 days (the default value) before the expiration.

Deployment via CloudFormation

The CloudFormation template for creating the stack is available in the repository of this project. It includes the function, its role, the CloudWatch event and its permissions.

To deploy, the first step after cloning the repository is to build the function.

  make lambda-build  

To create the stack it is necessary to indicate the S3 bucket to be used for storing the Certbot configuration.

  make BUCKET=your_bucket_name create-stack

The provided bucket is also used to temporarily store the sources of the Lambda function for deployment.

Once the stack is created, it is necessary to set the environment variables DOMAINS_LIST, with the comma-separated list of domains to be managed, and DOMAINS_EMAIL, with the email address to be used when requesting certificates. For each new domain it is necessary to provide the correct access policy for the S3 bucket that hosts the static resources.

Conclusions

Getting free SSL certificates for your projects is really useful; automating their management through this project has given me the opportunity to forget about it and live happily.

I recommend carrying out your own tests using the staging environment made available by Certbot. The CERTBOT_ENV environment variable allows you to define whether to use the production endpoint or the staging one.

Pay attention to the ACM quota: the number of certificates that can be imported into ACM is limited over time (a quantity per year). During my tests I ran into a very low limit: 20! As reported in this post, you must contact AWS support to have it raised.

 Error: you have reached your limit of 20 certificates in the last year.  

Further developments? Lots! The requirement to have an S3 bucket with the same name as the domain can, for example, be overcome with a more advanced configuration, perhaps stored in a database. Feel free to improve the original source code and let me know!

Did we have fun? See you next time!


TravisCI pipeline to test Ansible roles with Molecule on AWS EC2.

5 min

“Automation is awesome!” – This is one of the slogans I read more and more often on blogs and articles dealing with DevOps and CI/CD. I agree: today’s possibilities for automating integration and deployment processes are incredible.

Suppose, for example, that we have to automate the installation of Apache on Ubuntu. We write an Ansible role to carry out this operation and we can reuse it as a building block of a larger automated process, such as deploying a complete solution.

One wonders: how do you test the role during its development and subsequent updates? Having recently faced a similar challenge, I had the opportunity to create a CI pipeline using very powerful (and free!) tools. Let’s see.

Ansible Role

If Ansible is not already installed in your system, please install it.

We are going to create our my-httpd role in the simplest way possible: we create all the directories that normally make up an Ansible role, as part of an awesome-ci example project.

mkdir -p awesome-ci/roles/my-httpd
cd awesome-ci/roles/my-httpd
mkdir defaults files handlers meta templates tasks vars

In the tasks directory we create the main.yml file with the Ansible code for installing Apache. In the project’s root directory we create a playbook.yml which uses our role. The content of the two files could look like the sketch below.
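
The original snippets live in the example project; a hedged reconstruction (the apt module and the apache2 package are assumptions for an Ubuntu target) could be:

# roles/my-httpd/tasks/main.yml (sketch)
- name: Update apt cache
  apt:
    update_cache: yes

- name: Install Apache
  apt:
    name: apache2
    state: present

- name: Ensure Apache is running and enabled
  service:
    name: apache2
    state: started
    enabled: yes

# playbook.yml (sketch)
- hosts: all
  become: yes
  roles:
    - my-httpd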

Now that we have written all the code needed to install Apache with Ansible, let’s focus on building our pipeline, starting with the first tests.

Code linting

What is code linting? Wikipedia describes a linter as “a tool that analyzes source code to flag programming errors, bugs, stylistic errors, and suspicious constructs“.

ansible-lint allows us to perform this type of verification on our playbook and role. To install it use PIP:

pip install ansible-lint

To check our playbook, move to the project’s root directory and simply run:

ansible-lint playbook.yml

ansible-lint verifies the content of the playbook and the roles used against a set of rules, reporting any errors and violations of best practices, like this:

[201] Trailing whitespace
/home/awesome-ci/roles/my-httpd/tasks/main.yml:8
        name: httpd 

Each violation is identified by a code (in this case 201). The related documentation describes causes and solutions. In my lucky case, the only violation reported concerns a simple trailing space at the end of the indicated line. It is possible to ignore some rules by specifying their identification codes on the command line.

ansible-lint playbook.yml -x 201

Whenever we change our role, we can use ansible-lint to check it again against appropriate best practices. But let’s not stop here! The time has come to check whether our role actually does what we need.

Molecule

Molecule is a tool designed to test your Ansible roles, in different scenarios and with different operating systems, relying on different virtualization and Cloud providers.

To install it, use PIP. However, I recommend checking the official documentation for installation requirements.

pip install molecule

At the end of the installation process, we are ready to recreate our role using Molecule.

# Save old role
mv my-httpd my-httpd-orig

# Init new role using Molecule
molecule init role -r my-httpd -d docker

# Move task file to new role
cp my-httpd-orig/tasks/main.yml my-httpd/tasks/
rm -r my-httpd-orig 

With the role initialization command, we instructed Molecule to use docker as a test environment. Let’s move to the role directory to check it!

cd my-httpd
molecule test

Unfortunately we will immediately see that our test fails during the linting phase. Molecule creates the role using ansible-galaxy, therefore complete with all the standard directories, META included. So we need to correct the META information, replacing the generic values it contains. Alternatively, ansible-lint can be instructed to ignore some rules. Let’s edit the Molecule configuration file.

The molecule directory of our role includes a subdirectory for each test scenario. The scenario we created is default. In this folder, molecule.yml contains the configuration of each phase, including the linting one. We add the option to exclude certain rules.

provisioner:
  name: ansible
  lint:
    name: ansible-lint
    options:
      x: [201, 701, 703]

Let’s run the test again: the code linting phase passes, but we run into another error.

fatal: [instance]: FAILED! => {"changed": false, "cmd": "apt-get update", "msg": "[Errno 2] No such file or directory", "rc": 2}

The reason? Not immediately identifiable, but by analyzing the Molecule configuration file we modified earlier, we realize that the Docker image used to test the role is based on CentOS. We decide to use Ubuntu 18.04 instead, for which the “apt-get update” command makes sense.

platforms:
  - name: instance
    image: ubuntu:18.04

Let’s run the test again and… boom! Passed! However, we are still missing something: the operations indicated in our role are carried out without errors, but we have not actually verified that the Apache2 service is installed and active. We can configure Molecule to check it for us at the end of the playbook execution, regardless of the actions taken in the role.

We need to edit the test_default.py file located in the Molecule tests directory, as sketched below.
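
A hedged example of what these Testinfra checks might look like (the boilerplate at the top is what molecule init generates; the two tests match the “collected 2 items” in the output below):

import os

import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')

def test_apache_is_installed(host):
    # The apache2 package must be present on Ubuntu
    assert host.package("apache2").is_installed

def test_apache_running_and_enabled(host):
    # The service must be running and enabled at boot
    service = host.service("apache2")
    assert service.is_running
    assert service.is_enabled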

Let’s run the test again paying attention to the “verify” action.

--> Scenario: 'default'
--> Action: 'verify'
--> Executing Testinfra tests found in /home/awesome-ci/roles/my-httpd/molecule/default/tests/...
    ============================= test session starts ==============================
    platform darwin -- Python 3.8.1, pytest-4.6.4, py-1.8.1, pluggy-0.13.1
    rootdir: /home/awesome-ci/roles/my-httpd/molecule/default
    plugins: testinfra-3.3.0
collected 2 items                                                              
    
    tests/test_default.py ..                                                 [100%]
    
    =========================== 2 passed in 3.14 seconds ===========================
Verifier completed successfully.

Great! Let’s recap:

  1. We configured Molecule to verify our role through ansible-lint.
  2. The role is used by Molecule in a playbook to install and start Apache in an Ubuntu based docker container.
  3. Upon completion, Molecule verifies that Apache is actually installed and started.

All in an automated way! The time has come to move to the Cloud. Why not try our role in a totally different scenario? For example on AWS EC2?

Molecule & AWS EC2

To test our role on AWS we need to create a new Molecule scenario. From the role directory we initialize the new aws-ec2 scenario with:

molecule init scenario -d ec2 -r my-httpd -s aws-ec2

As you can see, this time we use ec2 as the test environment. The molecule.yml file is obviously different from the previous scenario, because it includes specific settings for the AWS platform.

platforms:
    - name: instance
      image: ami-0047b5df4f5c2a90e     # change according to your EC2
      instance_type: t2.micro
      vpc_subnet_id: subnet-9248d8c8   # change according to your VPC

The AMI image and the subnet specified in the configuration file must be modified according to the Region you intend to use for the test. To find the correct Ubuntu AMI I recommend using this site.

The configurations provided in the docker scenario also apply here: specify the exclusions for the code linting rules and the end-of-deployment tests that verify the Apache service.

We must also specify the AWS Region and credentials: this can be done by setting some environment variables before starting the test with Molecule.

export EC2_REGION=eu-west-2
export AWS_ACCESS_KEY_ID=YOUR_ACCESSKEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRETACCESSKEY 

molecule test

Success! We quickly created a new scenario and we are also able to test our role on AWS. In the EC2 console we can see that Molecule takes care of starting a new virtual machine, runs the test playbook on it and verifies the role’s activities. At the end of the operation, Molecule terminates the EC2 instance.

Now let’s see how to create a CI pipeline that executes these tests with Molecule on AWS at every push to our GitHub source repository.

Travis CI

With TravisCI it is possible to configure a CI pipeline for the GitHub repository which hosts our automation project. The configuration is done by creating the .travis.yml file in the repository root.

In the install section, the test-requirements.txt file lists the packages to be installed. Its content is:

ansible-lint
molecule
boto 
boto3

The script section indicates the operations to be executed: ansible-lint is used first for the playbook and role checks; Molecule is then used to test the role in the aws-ec2 scenario.
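
The original .travis.yml is in the repository; a hedged sketch of what it might contain (the Python version and lint exclusions are assumptions) is:

language: python
python: "3.8"

install:
  - pip install -r test-requirements.txt

script:
  - ansible-lint playbook.yml -x 201
  - cd roles/my-httpd && molecule test -s aws-ec2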

Is something missing? Yes, the AWS credentials. To share them securely we use the Travis CLI. First let’s install it:

gem install travis

We then encrypt the credentials and add them to the Travis configuration file. They will be passed as environment variables to Molecule during the execution of the tests.

travis login --pro

travis encrypt AWS_ACCESS_KEY_ID="<your_key>" --add env.global --pro
travis encrypt AWS_SECRET_ACCESS_KEY="<your_secret>" --add env.global --pro

Let’s try a build with TravisCI and… bingo! We’re done.

Conclusions

We have seen how these tools allow you to develop Ansible roles and playbooks while constantly subjecting them to a pipeline that verifies their correctness, in completely different environments and scenarios.

I have used the same tools in a bigger project, available in this GitHub repository: an Ansible playbook dedicated to the creation of a Docker Swarm cluster.

Did we have fun? See you next time!

This is a real smart switch!

4 min

Have you ever tried asking Alexa to turn on the lights in the living room while surrounded by a swarm of screaming children? Difficult, eh? And what about late in the evening: would you ask Alexa to turn off the bathroom light, which is systematically left on, running the risk of waking up the children? That’s right: you have to get up from the sofa.

In these and many other situations, the most suitable and sustainable smart switch ever is a nice piece of cardboard! Don’t believe it? I’ll show you.

No tricks

A few days ago I asked myself how I could use the security cameras installed inside and outside my home to make it smarter.

Well! A silent Alexa replacement would be handy, I said to myself.

So I tried to find a way to use the camera images to generate events, such as turning a light bulb on or off. The idea that came to me is this: identify a well recognizable object and read the text placed above it. A sign, in short! A sign that can be used to give commands to our IoT devices.

Object detection

I used a Raspberry Pi 4 to analyze the video stream from my camera and recognize objects thanks to the OpenCV (cv2) library, its DNN (Deep Neural Network) module and the MobileNet-SSD V2 COCO model.
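
A minimal sketch of this kind of detection with the cv2.dnn module (the file names and the confidence threshold are assumptions; in the COCO label map used by this model, class 6 is “bus”):

import cv2

## Load the MobileNet-SSD V2 COCO model (frozen graph + generated pbtxt)
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb", "ssd_mobilenet_v2_coco.pbtxt")

frame = cv2.imread("frame.jpg")
h, w = frame.shape[:2]

blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()

## Each detection row: [batch_id, class_id, confidence, left, top, right, bottom]
for detection in detections[0, 0]:
    class_id = int(detection[1])
    confidence = float(detection[2])
    if confidence > 0.5:
        box = detection[3:7] * [w, h, w, h]
        print(class_id, confidence, box.astype(int))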

Going through the classes of objects recognized by the model, a nice “bus” immediately caught my eye. Perfect! An object that is easily recognizable and hardly ever present in my living room, just to avoid false positives. The image I chose to decorate my sign was immediately recognized by the model as a bus. Great!

However, using the image in a different context, the result was not positive. The presence of other recognized objects (a person and a sofa) and the reduced size of my bus compared to the entire image compromised its recognition.

Finding contours

Do I give up? No way! Our sign is a nice rectangle. Why not detect the rectangles in the image, extract their contents and use them for object detection?

Analyzing the contours and looking for those that have approximately 4 sides: boom! Found it!
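
As a hedged sketch, the rectangle search with OpenCV contours might look like this (the preprocessing thresholds and minimum area are assumptions):

import cv2

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    ## Approximate the contour: a sign should reduce to about 4 vertices
    approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > 1000:
        x, y, w, h = cv2.boundingRect(approx)
        candidate = frame[y:y + h, x:x + w]  ## crop to feed to the object detection model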

Now I’m able to submit the image contained in the rectangle to the object detection model and, yes! It is correctly recognized as a bus.

Text recognition

I said to myself: our bus is the “activation image”, just as the word “Alexa” is the “activation word” when pronounced near Echo devices. I just need to submit the image to a text recognition system to identify commands and act accordingly.

I chose a cloud solution, using Amazon Rekognition. When a bus (or truck, or car) is identified, the image is uploaded to an S3 bucket and submitted to the text recognition API. The results? Excellent.
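
A minimal sketch of this call with boto3 (the bucket and object names are assumptions) produces the kind of output shown below:

import boto3

rekognition = boto3.client('rekognition')

response = rekognition.detect_text(
    Image={'S3Object': {'Bucket': 'my-smart-switch-bucket', 'Name': 'sign.jpg'}})

for text in response['TextDetections']:
    if text['Type'] == 'LINE':
        print('Detected text: {}'.format(text['DetectedText']))
        print('Confidence: {:.2f}%'.format(text['Confidence']))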

Detected text:ALL ON
Confidence: 98.95%
Id: 0
Type:LINE

Finished!

We’re done! Now we just have to run the commands based on the text returned by Amazon Rekognition. In my case I limited myself to turning on a couple of LEDs connected directly to the Pi 4, but the only limit is your imagination!

Source code is here.

Behind the scenes

A few tips about building the project: the resolution and the position of the cameras are very important, and some tuning is required to obtain good results. On the Raspberry Pi 4, don’t use a very high image resolution: you need to find a compromise between the quality required by the text recognition system and the resources needed to process it.

To install OpenCV on the Raspberry Pi, please check this link. The bufferless video capture class is from this post. The box recognition is from this page.

Did we have fun? See you next time!