re:Invent 2017 — The Day of the Serverless

Luca Bianchi
4 min read · Dec 1, 2017

UPDATE: a few Serverless sessions are live on YouTube. Check them out! (https://www.youtube.com/playlist?list=PLhr1KZpdzukc-1lMQ8iugc82jaVUt_Nej)

NOTE: this post is part 2 of a series on AWS re:Invent 2017. Articles are self-contained; however, here is part 1. I assume you already know very well what Serverless programming is. If you need a refresher, refer to .

We always knew it.

Since the AWS Lambda announcement in 2014, we knew we were facing an unprecedented shift in how services are developed. And our expectations have never been more real than they are now. The Serverless ecosystem has grown steadily year after year, yet it is still at an early stage.

During this year's conference, AWS folks did not disappoint, showing how Serverless is more a principle than a specific technology. Serverless computing is based on four main assumptions:

  • no server management
  • flexible scaling
  • high availability
  • never pay for idle

So, every service that matches these requirements can be defined as Serverless and becomes part of the ecosystem. Following this path, some companies other than AWS presented their own solutions. A particular mention goes to MongoDB.

MongoDB Stitch

Do you remember when your app was made of a lot of boilerplate code, just to perform simple, standard CRUD operations? Forget that. If your code relies on MongoDB for persistence, a new service named Stitch is going to do the heavy lifting for you. Stitch is a service layer, available right now to every MongoDB Atlas customer (support for on-premise instances is coming), that adds a set of read/write capabilities together with service connectors and rules. Read/write support means CRUD functionality bundled into the layer, exposing REST endpoints to update your documents and collections. Put an API Gateway proxy over it, because security is important, and... the magic is done! But Stitch's awesomeness doesn't stop there: it offers out-of-the-box connectors to market-standard services such as Twilio, S3, and SES, and it supports authentication. Routing can be performed using Service Rules, JSON documents containing mapping rules for services that direct events according to your business logic.
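To make this concrete, here is a minimal sketch of what talking to a hosted service layer like Stitch could look like from Python. The endpoint URL, payload shape, and authentication header are assumptions for illustration only, not the official Stitch API; in practice you would follow the client SDK or REST documentation for your Atlas app.

```python
import requests

# Hypothetical Stitch-style HTTP endpoint and API key. The exact URL shape,
# payload, and auth header are placeholders, not the official Stitch API.
STITCH_ENDPOINT = "https://stitch.example.com/api/client/v1.0/app/my-app/insertOne"
API_KEY = "<your-api-key>"

def insert_document(doc: dict) -> dict:
    """Insert a single document through the hosted service layer,
    instead of writing CRUD boilerplate against the database driver."""
    response = requests.post(
        STITCH_ENDPOINT,
        json={"database": "shop", "collection": "orders", "document": doc},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(insert_document({"item": "t-shirt", "qty": 2}))
```

The point is the shape of the interaction: CRUD becomes a single authenticated HTTP call, and the service layer enforces rules and security on its side.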

Persistence

AWS itself presented a number of improvements and new services, starting with DynamoDB, which can now be replicated across multiple regions (with Global Tables) and provides support for point-in-time data recovery. Then came two jaw-dropping announcements. The first is an evolution of an existing service: Aurora moves to per-second billing, which is potentially a game changer. Relational databases have been uncool for serverless applications because of their cluster pricing model and the complexity of network configuration; Aurora could make relational cool again by introducing a REST-enabled database with virtually infinite automatic scalability. The other announcement is a long-awaited graph database, Neptune, fully managed by AWS. Even if Neptune's pricing at launch makes you pay for idle, I guess it won't be long before it moves to a pay-per-use model.
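As a quick sketch of the DynamoDB side of these announcements, the snippet below turns two regional tables into a Global Table and enables point-in-time recovery with boto3. The table name and regions are made up, and it assumes the table already exists in both regions with streams enabled.

```python
import boto3

# Illustrative sketch: table name and regions are assumptions.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Join the regional "orders" tables into a single Global Table
# replicated across two regions.
dynamodb.create_global_table(
    GlobalTableName="orders",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)

# Enable continuous backups so the table can be restored to a point in time.
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```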

API

Have you ever built an orchestration layer? Have you ever needed a way to invoke multiple services, collect their data, and respond with an aggregated payload? A couple of years ago, Facebook faced the exact same issues. They decided to solve the problem by developing GraphQL, a service capable of resolving queries posted by clients against your microservices, aggregating the results, and sending the response back to the client. The main issue GraphQL has always presented is infrastructure provisioning, since it requires a layer of scalable servers to avoid becoming a single point of failure in your architecture. To address this, AWS announced AppSync, a fully managed GraphQL service that can leverage existing AWS services as graph resolvers. This means you have an entry point that lets your clients invoke any AWS service (your Lambdas included) without writing a single line of code. If your use case is just reading and writing data to AWS services, the out-of-the-box resolvers will do the work for you, with no need to write even a single Lambda function.
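For a feel of what the client side looks like, here is a minimal sketch of posting a GraphQL query over HTTP with API-key authentication. The endpoint, the key, and the getOrder field are placeholders I made up; your actual schema and auth mode come from your own AppSync (or any GraphQL) setup.

```python
import requests

# Placeholder endpoint, API key, and schema; adjust to your own GraphQL API.
GRAPHQL_URL = "https://example.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "<your-api-key>"

# A single query can fan out to several resolvers behind the scenes and
# come back as one aggregated payload.
QUERY = """
query GetOrder($id: ID!) {
  getOrder(id: $id) {
    id
    item
    qty
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": QUERY, "variables": {"id": "42"}},
    headers={"x-api-key": API_KEY},
    timeout=10,
)
response.raise_for_status()
print(response.json()["data"]["getOrder"])
```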

Lambda

The base computing unit in Serverless is a function, which in AWS jargon is called a Lambda; it was the first service of this kind, introduced back in 2014. This year Lambda received a generous amount of updates, starting with weighted deployments, a feature that enables canary deployments, A/B testing, and automatic rollbacks. It also gained concurrency monitoring (fundamental to understanding the level of parallel execution) and CloudTrail logging. The Lambda runtime itself has been updated as well, raising the RAM limit to 3GB and adding Go support. Finally, AWS published the Serverless App Repo, a repository for discovering serverless applications and configuring them, with no pain.
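As an illustration of weighted deployments, the sketch below shifts a small share of traffic to a new function version through an alias routing configuration in boto3. The function name, alias, and version numbers are hypothetical.

```python
import boto3

# Hypothetical function "checkout" with alias "live" pointing at version 5,
# and a freshly published version 6 we want to canary.
lambda_client = boto3.client("lambda")

# Keep the alias on the stable version, but route 10% of invocations to the
# new version; watch the metrics, then shift more traffic or roll back.
lambda_client.update_alias(
    FunctionName="checkout",
    Name="live",
    FunctionVersion="5",
    RoutingConfig={"AdditionalVersionWeights": {"6": 0.10}},
)
```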

IDE

Today, an unexpected release hit the Serverless ecosystem: AWS Cloud9, a fully-fledged IDE aimed squarely at developers. Can't wait to see it in action.

Finally, during this re:Invent AWS put a lot of focus on democratizing Machine Learning, with a lot of mind-blowing announcements, but that is a story for another day.

--

Luca Bianchi

AWS Serverless Hero. Loves speaking about Serverless, ML, and Blockchain. ServerlessDays Milano co-organizer. Opinions are my own.