Cloud computing services have revolutionized how software systems are developed and deployed. One growing trend in this area is the rising popularity of serverless architecture. In the past, "serverless" described an application architecture that relied heavily on third-party services to manage server-side logic and state, typically referred to as Backend-as-a-Service (BaaS). Today, however, the term refers to server-side logic that runs in stateless, event-triggered, ephemeral compute containers managed by a third party, commonly called Function-as-a-Service (FaaS).

AWS Lambda is widely seen as the pioneer of the serverless space, but all of the major cloud players now offer competing products. Frameworks such as Serverless, Apex, and Chalice are built on top of these platforms to extend their functionality and make serverless products easier to work with.

The serverless style of architecture comes with a variety of benefits, namely:

  • Easier operational management, as the platform separates the application from the infrastructure it runs on.
  • Faster innovation, because that separation allows developers to focus on application logic rather than on the systems engineering of the underlying infrastructure.
  • Reduced operating costs, as you only pay for the time and resources needed to execute a function.

Compared to a traditional server-side setup, these benefits are best understood in the context of the development life cycle. When deploying a new feature or bug fix, the whole backend or service containing that code must temporarily go down for the update to be applied. Any system downtime can result in data loss and a poor user experience. With redundancy and the right deployment configuration this can be mitigated, but maintaining such a setup incurs costs in server resources, its own development and maintenance, and dedicated personnel time.

With serverless architecture, developers can apply updates piecemeal with no risk of downtime, because each function is an independent resource. This encourages the modular style of code that is recommended as a best practice for development and testing. And since each function runs only when called, there is no cost for sitting idle.

In this article, we will use the Serverless Framework, an open-source application framework, to build a serverless architecture on AWS Lambda and other cloud-based services. We are going to build a secure API for a ToDo application and write the server-side functions to run on Lambda. Many tutorials for front-end tools and frameworks use a ToDo application to teach their basic concepts; here we consider what the backend for such an application could look like, handling server-side logic such as storing and accessing data.


Please make sure that you have Node.js installed on your computer so you can follow along. Following the directions in the Serverless documentation, install the serverless command line tool. Please note that at the time of the writing of this article there is a known issue with Node.js version 8.0.
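Per the Serverless documentation, the CLI is installed globally through npm (the exact command may vary between framework versions):

```shell
npm install -g serverless
```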
To get an idea of the basic structure of a Serverless application, use the command line tool to create an empty project.
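Assuming the Serverless Framework v1 CLI and its Node.js template, creating an empty project looks something like this (the path serverless-demo matches the project name used later in this article):

```shell
serverless create --template aws-nodejs --path serverless-demo
```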


Looking at the directory structure, we can see that the boilerplate includes just two files:
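The two files are the function code and the service configuration, both discussed next:

```shell
$ ls serverless-demo
handler.js  serverless.yml
```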

Inside handler.js we see the code to be managed and executed in Lambda:

A look at the configuration file serverless.yml shows several generated, commented-out lines illustrating the options for various cloud services. Below are only the uncommented lines that configure this demo project:
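A sketch of the uncommented configuration, assuming the service is named serverless-demo and uses the Node.js runtime that was the template default at the time:

```yaml
service: serverless-demo

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    handler: handler.hello
```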

The service section gives the name of the project; provider contains the configuration options for the cloud service provider; and the functions section configures which functions are available: their names, what code they map to, and what events can trigger them.
Our next step is to go inside the project directory where we’ll use the command line tool to deploy this function on AWS.
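The deploy step, run from inside the project directory (the -v flag prints verbose provisioning output):

```shell
cd serverless-demo
serverless deploy -v
```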

Diving into this output, we learn a few things about how a serverless deployment is configured on the AWS infrastructure. Three services are being utilized: CloudFormation, S3, and Lambda. CloudFormation is a service that allows users to create and manage collections of related AWS resources. S3 is short for Simple Storage Service, an object store with a web interface for storing and retrieving data. This is where the code will reside, in a designated bucket named serverless-demo-dev-serverlessdeploymentbucket-zi9rpv2yn3uc.
The Service Information section looks familiar, with some additional information beyond the configuration in the serverless.yml file. The keys stage, region, and api keys are in fact default configurations that can be set in that YAML file. stage defines the staging environment the code will be deployed to, region defines which geographical region of the AWS infrastructure the code will reside in, and api keys lists the names of keys used to securely call our Lambda functions. We will set this up in a later step.
The last bit of information is the ARN (Amazon Resource Name) for the Lambda function which helps to uniquely identify resources in AWS.
To see what a call to this ARN returns, we can use the Serverless command line tool to invoke the function directly:
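Assuming the function kept the template's default name hello, the direct invocation looks like this:

```shell
serverless invoke --function hello
```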

The JSON data is what is returned to the caller of the Lambda function; in our example it is an HTTP response. Next, we'll create an HTTP endpoint configuration for this function so that an application external to AWS can call it.
In our serverless.yml file we will add an event for the function:
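A sketch of the change, assuming the function is named sayhello as referenced later in this article; the http event tells the framework to wire the function to an API Gateway endpoint:

```yaml
functions:
  sayhello:
    handler: handler.sayhello
    events:
      - http:
          path: sayhello
          method: get
```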

After this change we need to deploy again:
eddie:serverless-demo$ serverless deploy -v
In the output you can see a number of provisioning and configuration steps taking place that we won't go into in detail here. You will notice that a new service is now being used: API Gateway. As the name implies, this service allows for the configuration and use of APIs.

Running a request against this endpoint will give you the full response, along with the same message as the direct call to the function.
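The endpoint URL is printed in the deploy output; with a placeholder API Gateway ID and region, the request looks something like:

```shell
curl https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/sayhello
```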

Since it's not good practice to have insecure endpoints, we are going to add configuration to generate an API key and secure our call to sayhello. Here is what the full revised serverless.yml file will look like:
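A sketch of the revised configuration; the key name secret matches the name listed under api keys in the deploy output, and private: true marks the endpoint as requiring an API key:

```yaml
service: serverless-demo

provider:
  name: aws
  runtime: nodejs6.10
  apiKeys:
    - secret

functions:
  sayhello:
    handler: handler.sayhello
    events:
      - http:
          path: sayhello
          method: get
          private: true
```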

Our next deploy will update the configuration and in the Service Information we will see the API key generated by AWS:
api keys:
secret: Dt7CiOXofX3TeRvxZxOfe11RVwRZVeSp7OhNXsIv
If we try running the curl command again, we now get an error message:
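With the endpoint now marked private, API Gateway typically rejects an unauthenticated request with a 403 response like this (placeholder URL):

```shell
curl https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/sayhello
# {"message":"Forbidden"}
```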
At the time of the writing of this article, the team working on Serverless is implementing automation to associate API keys with endpoints. For now, let me walk you through creating a Usage Plan for your endpoint; this configuration defines throttling and quota limits for each API key.
Log into your AWS console and navigate to the API Gateway page. Select Usage Plans in the left-side menu. When you click the Create button, a form will pop up. Below you can see my configuration; feel free to adjust it as needed:
[Screenshot: Create Usage Plan form with throttling and quota settings]
Next we add the associated API stage, which in our case will be serverless-demo-dev:
[Screenshot: adding the serverless-demo-dev API stage to the Usage Plan]
We’ve already generated an API key through the serverless command line tool earlier, but in this step of the wizard we will look it up and associate it with the Usage Plan:
[Screenshot: associating the API key with the Usage Plan]
When you’re done you will see the configuration page for the new Usage Plan:
[Screenshot: configuration page for the new Usage Plan]
To test that our key does in fact work, we can now add it as a parameter to the call:
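API Gateway expects the key in the x-api-key request header; using the key generated earlier (placeholder URL):

```shell
curl -H "x-api-key: Dt7CiOXofX3TeRvxZxOfe11RVwRZVeSp7OhNXsIv" \
  https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/sayhello
```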

You should receive an HTTP response similar to when the endpoint was insecure.
Now we are ready to mock up the endpoints for a ToDo application. We are interested in providing basic CRUD (Create, Read, Update, Delete) functionality, which we can stub out by updating the handler.js file.

Updating the functions section of the YAML file:
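A sketch of the functions section, assuming handler.js exports createTodo, listTodos, updateTodo, and deleteTodo; each endpoint is marked private so it requires the API key:

```yaml
functions:
  createTodo:
    handler: handler.createTodo
    events:
      - http:
          path: todos
          method: post
          private: true
  listTodos:
    handler: handler.listTodos
    events:
      - http:
          path: todos
          method: get
          private: true
  updateTodo:
    handler: handler.updateTodo
    events:
      - http:
          path: todos/{id}
          method: put
          private: true
  deleteTodo:
    handler: handler.deleteTodo
    events:
      - http:
          path: todos/{id}
          method: delete
          private: true
```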

With this deploy we now have a fully mocked-up API:

Congratulations! We’ve successfully gone through the basics of creating an API hosted on AWS using the serverless command line tool. You know a little about the cloud services used to architect this backend. The next steps are to add persistent storage for our ToDo application.
If you would like the full code from this project please visit the GitHub repository.

Eddie Kollar

Eddie is a freelance writer and software developer. He has a variety of experience in developing full-stack systems.
