JS Party – Episode #49

Serverless? We don’t need no stinkin’ SERVERS

with special guest Jeremy Daly


Disclaimer: no servers were harmed in the taping of this show. We hosted a special discussion with Jeremy Daly, Kevin Ball, Nick Nisi, and Christopher Hiller on the ideas around serverless, managed services, Functions as a Service (FaaS), microservices, nanoservices, all-the-services!


Sponsors

Gauge – Low maintenance test automation! Gauge is a free and open source test automation framework that takes the pain out of acceptance testing.

Rollbar – We catch our errors before our users do because of Rollbar. Resolve errors in minutes, and deploy your code with confidence. Learn more at rollbar.com/changelog.

DigitalOcean – DigitalOcean is simplicity at scale. Whether your business is running one virtual machine or ten thousand, DigitalOcean gets out of your way so your team can build, deploy, and scale faster and more efficiently. New accounts get $100 in credit to use in your first 60 days.

Algolia – Our search partner. Algolia’s full suite search APIs enable teams to develop unique search and discovery experiences across all platforms and devices. We’re using Algolia to power our site search here at Changelog.com. Get started for free and learn more at algolia.com.


Transcript



Hello and welcome to another week of JS Party, where every week we are throwing a party about JavaScript and the web. I am your host for this week, Kball, and I’m joined by our regular panelists, Nick Nisi…

Hello!

And Christopher Hiller, aka b0neskull.

I love that moniker. We also have a special guest with us today - Jeremy Daly is joining us. He is the CTO of AlertMe.news, and a long-time advocate of serverless, which will be our topic for today.

Hey, guys. Thanks for having me.

Thanks for joining us, Jeremy. Let’s kick things off with a question, which is: what the heck is serverless? Coming at this (as a long-time server guy), obviously there’s still a server involved, right?

There is, yes. It’s one of those things where a lot of people – I don’t wanna say get upset, but a lot of people use the semantics of the term to argue against it, which is kind of silly. Think about wireless technology – and I know this comparison has been used many times – there are still wires in wireless technology; it’s just that you as the end user don’t have to deal with those wires. So I like to look at serverless in a similar way: obviously there are servers behind the scenes doing things, but you as a developer don’t have to worry about provisioning them.

Compare that to provisioning something like an EC2 instance, where you have to launch it, you have to pay for it 24 hours a day, you have to install the updates, and you have to worry about all the permissions and everything that’s going on there… With serverless, you just write some code and you tell AWS or Google Cloud Platform or whoever, “Hey, when this particular thing happens, I want you to take it in, run my code, and then spit something back out.” You’re only paying when your code is actually executing, and you don’t have to worry about having all those servers backing that for you.
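For illustration, here’s a minimal sketch of that model as an AWS Lambda handler in Node.js – the event shape and the response format depend on whatever trigger you wire up, so the fields here are illustrative:

```js
// A minimal sketch: the provider runs this function only when an event
// arrives, and you pay only for the time it actually executes.
exports.handler = async (event) => {
  // "take it in" - the event's shape depends on the trigger (HTTP, S3, queue...)
  const name = (event && event.name) || "world";

  // "run my code, and then spit something back out"
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```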

Yeah, that makes a ton of sense. And I’ve heard it described also as “functions as a service.” We’ve gone from all these different layers, but if I just have my functions…

Yeah. Well, something about functions as a service - sometimes people equate those to serverless, but functions as a service is part of serverless. That’s why people sometimes call serverless “servicefull”, because the idea is to say that yes, functions as a service are these little containers that will run for you, they’ll execute your code, you don’t have to worry about it, but then you need to interact with other services in order to make something valuable happen.

[04:12] So whether you’re writing to a database, or you’re writing to some sort of a stream, or you’re reading information in from something, there’s a bunch of other services that are involved there, but again, those are all managed services.

Sometimes people say functions as a service kind of acts as the glue that sticks all that stuff together, but it does go beyond just the function aspect of it.

How does this term differentiate from microservices? Or is this just a way to facilitate microservices…?

That’s actually interesting – where serverless takes us… And without getting maybe too deep – microservices obviously are taking a larger application, finding the seams in it and splitting it up, so that your billing service is separate from your catalog service, or something like that… So serverless is a way in which you can deploy microservices, and you can certainly take a number of functions, or a single function with some additional managed services, and create a microservice there… And of course, it’s much easier to communicate between functions using something like Lambda, for example, because you can call them from each other… But the difference is that microservices are sort of monolithic applications in themselves; they’re not distributed, and they usually have to be replicated horizontally, or you’ve gotta up the server requirements in order to get more performance out of them… Whereas with something like serverless, there’s this new concept of nanoservices, where you’re basically saying “Parts of my microservice might need to scale more than other parts of it.”

Maybe I have an image processing component, or some sort of machine learning component, and that requires more resources in order to process that. If I had all of that packaged into a single microservice in a container, for example, I would have to scale the entire container, so all parts of that application would have to scale, or that service would have to scale in order to handle it.

Now with this idea of nanoservices, you can take that microservice, put it out there in a serverless environment, and then when an individual component of that microservice needs to scale, that’s where we sort of consider those nanoservices, and those can scale just independently, even though they’re part of that larger service.

I think you just blew my mind with this nanoservices thing… Either that, or I’m just horrified. [laughter] Basically, you’re saying that we have these microservices and what they’re doing is they’re calling out to functions as a service?

Well, yeah – I mean, a microservice, if you think about it, is just a small monolithic application, right? It does something specific; it’s your billing service, so it keeps the ledger, it creates invoices, it does all that kind of stuff. So you can build a series of individual functions. Rather than having that all in one big Java app, or PHP, or whatever you use - Node, if you’re writing in Node - rather than having that all in one giant function or one giant app, you can split that up into individual functions… And again, functions is probably the wrong term here, because a function in serverless could run multiple subroutines, if you wanna think about it that way. So it’s like functions within functions… But the idea is a function is this individual unit that can execute any amount of code on its own.

So you take five or six functions or whatever it is, and that can be your entire billing service – that’s what you’d consider your microservice. But you don’t have to launch that microservice into a container or onto a server; you launch all of those components of the microservice independently into the serverless environment (like Lambda, or Azure, or something like that), and now those all act as one service. They can communicate with one another, but they can also scale individually, and then of course you can communicate across other services – whether that’s message buses, or SQS, or SNS, or Kinesis, or any of those things – to actually communicate between not only your individual functions, but also the larger microservices, if that makes sense.

[08:07] So we had this evolution where we had this monolithic application, which was like all the things are in one bundle, and that turned out to be hard to scale from both a technical perspective of “This is a very expensive thing that we need to put more servers on, and if one piece needs to scale, we scale it all”, and also from kind of a management perspective of like teams working on different pieces.

So then we split that and we said “Okay, now we’re gonna go to microservices, where each one of these vertical slices can scale independently, and it can have a different team, but it can also have different servers”, and what I’m hearing from you now, Jeremy, is this idea of serverless is taking that final thing and saying you know what, maybe a microservice is the wrong concept, because that’s still at the level of “Here’s a self-contained thing, it’s just we’ve sliced it apart. What if we just take any piece of functionality and split that out, and let that scale independently, and be worked on potentially independently, and just kind of go all the way down to the bare atoms that are making up our program and have each of those independent?” Is that a fair assessment?

I think that’s actually a great way to look at it. The only thing I would add to that is – and again, this is probably more confusing because of the implementation than it is from actually doing it in practice… But typically, with a microservice you’d have a small team own that microservice all the way, everything from the database, to the code, to the implementation. That’s still possible here, it’s just that there’s no sort of application-level division, or microservice division, when you put functions – I’ll use AWS Lambda, for example… When you upload five functions that you say are part of this microservice into AWS Lambda, they just go into one big, giant list of functions that are available… But you can tag those functions, and AWS actually just launched their Applications tab, which tries to consolidate functions that are part of the same service… But what you would do is your microservice team that’s working on – again, to go back to the example of the billing service… You might be working on five functions, plus you have an RDS Aurora database that backs it.

You would want those functions to be contained, in the sense that one team would manage them - you’d probably put them in one Git repository or something like that - and all of the interaction with that billing database would happen from those five functions. You then upload those five functions, which can scale independently; the idea is that other teams might be uploading other functions, but your microservice team would own those five functions and the database, and anything else that supports it.

So would you build functions that manage your database connections, or talking to this specific database, and then would other functions talk through that? Or would they somehow have – how would you share functionality between that if, say, another set of functions needed to communicate with that database?

That’s a great question. That’s the point I was trying to get across… So if you build a function that is the gateway into your billing service, you would want other services to communicate with that function. Now, you can communicate with it directly, without ever having to leave the environment, or you could put an API Gateway in front of it so that it could actually be accessible using a REST API… But the point is if you have your catalog service, and your catalog service needs to get some sort of billing information, you wouldn’t write any of your functions in the catalog service to access the database that supplies information to the billing service. Instead, you would communicate with a function in the billing service that would then communicate with the database. That way you can keep those separations of concerns, and then you don’t have to worry about two teams trying to share the same database.
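As a sketch of what that can look like in practice – assuming AWS and the Node SDK (v2, current at the time of this episode), with a hypothetical function name and payload – the catalog service invokes the billing service’s gateway function rather than touching its database:

```js
// A sketch, not a prescribed pattern: the catalog service asks the billing
// service's gateway function for data instead of reading billing's database.
const AWS = require("aws-sdk");
const lambda = new AWS.Lambda();

async function getBillingInfo(customerId) {
  const res = await lambda
    .invoke({
      FunctionName: "billing-service-getCustomerBilling", // hypothetical name
      Payload: JSON.stringify({ customerId }),
    })
    .promise();

  // The billing function owns its database; we only see its response contract.
  return JSON.parse(res.Payload);
}
```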

Interesting.

[11:51] Typically, what you would do for a microservice… It just breaks down into these nanoservices now, which can become confusing because now you’ve got individual components, but you still kind of want to have them all part of a larger microservice, so that a team can own them and can own the data that supports them.

Got it. So where previously your team scaling and your technical scaling were along the same lines, this is saying “Let’s break out the technical scaling, but we still wanna group these things for team scaling purposes.”

Yeah, yeah.

Quick question… As you’ve been talking, there’ve been some parts that sound like they’re probably generic to serverless, and some things where you’re talking about something specific, like Lambda… Are there ways that the different implementations differ across these different cloud providers, or have we more or less converged to the same functionality?

There is certainly differentiation between AWS and IBM and Google Cloud Platform… But most of it is the same. The general idea is you write some code and you upload it into a function, and it’s an event-driven architecture - an event comes in, and that could be somebody uploading a file into an S3 bucket, or somebody posting something to an API Gateway, or a message coming in from a message bus, or something like that… Whatever those events are that come in, the basic idea of serverless is it’s a function that receives an event, does something with it, and then returns something back. Pretty much all implementations of it are the same in that regard.

Lambda, for example, was out in 2014 - way ahead of pretty much anybody else - so they’ve got a number of services that really complement it. You have CloudWatch to easily log data; they’ve got their Simple Queue Service, which gives you a message bus for queues. They have SNS, which is the ability to multi-cast events to multiple Lambdas or other locations; they have their Kinesis streams; and then they of course have Aurora Serverless, and DynamoDB, which is their highly scalable serverless NoSQL database… So they have a lot more services that you can use in that regard. But then OpenWhisk, and Google Cloud Functions, and Microsoft Azure Functions – they’re all very similar, with slightly different implementations; some of them run for longer…

Google Cloud Functions automatically has a built-in HTTP REST API, so that’s how you can access those functions, as well as access them through other events… But for the most part it’s pretty much the same. And actually – speaking of Serverless Inc., there’s a committee out there that’s working to standardize events for serverless functions. That’s out there now, so hopefully that will kind of push all the providers to at least standardize the way that events are received, which I think would be a good point in consolidating the market.

That’s what I was gonna ask… I don’t have a ton of experience with serverless functionality, but I have played around with Netlify a little bit, and I think with the JavaScript API to tie into that, you basically create a function that accepts the event that’s happening - I think maybe a context - and then it gives you a callback as an argument to it; that callback is how you respond with something. Is that what they’re working to standardize - how you define a function and how it will be run and receive the inputs from a REST call or from some other event that might be happening, and then how you respond to that?
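The handler shape Nick describes looks roughly like this – a sketch in the Node callback style that Netlify Functions (and Lambda) accept, with illustrative field names:

```js
// A sketch of the signature described above: event in, context,
// and a callback used to send the response.
exports.handler = function (event, context, callback) {
  // event: what happened (an HTTP request, in Netlify's case)
  // context: metadata about this particular invocation
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ youSent: event.queryStringParameters || null }),
  });
};
```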

Yeah, I think that’s the basic idea… It’s to say that when an event comes in that is for X, or for Y, or whatever that event is, that it would be in a similar format… Similar maybe to what they did with RDF standards and things like that, to try to say when you’re representing a product that this is what a product should look like, these should be the fields, this should be the nomenclature that you use to describe these things…

[15:50] Right now, obviously, the event that comes in from SQS (the Simple Queue Service) is different, even within Amazon, than the one you get from Kinesis, or from a DynamoDB stream, or something else coming in. So the idea here, I think, is to say if you’re gonna say “Hey, an image or a file was added here”, or “Here is a REST API call that was made”, then “This is what it should look like, this is the data it should contain”, so that you could then take your function from provider A and move it to provider B without as much pain in changing how it processes those events.

Yeah, that’d be great.

Alright, this is probably a good time to roll into a quick break. After the break, we’ll come back and keep drilling into how this stuff actually works, and maybe start digging into the value proposition. We’ve talked a lot about how this thing works and how it’s different, but let’s look at the value once we get back from the break.

Alright, welcome back… Just before the break we talked about getting into value. Unfortunately, one of our panelists had his internet go out due to construction, but he sent in a question, and I wanna kind of put it out there… So we talked about how this is kind of taking this concept that we had of microservices and breaking it down even further, and he was bringing up the point of “What is the value prop of this, as compared to just continuing to split microservices into more microservices?” I think it’s a bit of a different model, but can we explore what’s the point of serverless?

For me - and I think this is true of a lot of people - the speed of development is really fast. And also to take a step back, this puts developers a lot closer to the operational side of things. If you figure your traditional development firm or development team, they usually have – we’ve invented this thing called DevOps, where you try to get these developers who also do operations, and they try to get you through the CI/CD process, and get things deployed… You still have to deploy a server. Or if you wanna go down the container orchestration route and you wanna do Kubernetes or something like that, now you’ve got labels and pods and all these other things that have to be created and orchestrated and containers built in order to run code… So it gets really complicated, and you can spend months just trying to set up your environment in order to do something as simple as - again, bad example, but process an image or convert an image.

[19:43] With serverless, you can write a function that converts an image or does a simple transformation for an ETL task, for example, and if you use a framework like Serverless, or you use AWS SAM, or Claudia.js or some of these other ones, you type a couple commands in the command line and that deploys that application or that function to Lambda or to OpenWhisk or wherever you want it to go, and then it’s immediately available. So you can build applications, and of course, like we said, the more functions you write or the more complex you make your applications, the more robust they get… but you can go ahead and build these things in minutes, as opposed to potentially waiting quite some time for an operations team or a DevOps team to set up an environment for you to actually launch code. The benefit of that there comes with autoscaling as well.

If I have to write an ETL task – or, I’ll give you an example… I had a startup several years ago, right about the time that AWS was starting to get popular, and they didn’t have any of this stuff… So we actually built an image processing component. It would reach out to Facebook and Instagram and Picasa, download all the images associated with your accounts, and run them through a series of processing scripts. We had two giant image servers that were just chugging; if we had a lot of activity, they would basically choke, so you’d have a bunch of backed-up jobs that needed to run.

The same is actually true now if you think about even autoscaling. If I have something like Elastic Beanstalk, or I’m using OpsWorks, or something where I have horizontally-scaled services, I have to scale those physical or virtual servers – essentially, I have to launch more servers in order to scale up. And that’s not a difficult thing to do, but it takes five minutes to start up a new server or a new virtual machine, and by the time that happens, I’ve already lost the real-time aspect of it. With Kubernetes or with Docker, or if you’re using ECS or the EKS service at Amazon, those will launch very quickly, so it’s a little bit better. But with serverless, I could just write that image processing system now – I could write that in an hour, maybe, and launch it – and I wouldn’t have to worry about any operational stuff, because it will just continue to scale as more concurrent requests come in.

Having spent about a month wrapping my head around Kubernetes and trying to get stuff up and all of that, that sounds pretty darn appealing, I’ve gotta say…

Yeah, and if you look at it from the business case, which is sort of the way I like to look at it… I started as a developer, I had my own development company, I grew that, then I started some startups, so I’ve been in the CTO role, in a number of positions, and when you’re in the CTO role, you’re forced to think about the business value of things… And just thinking about how much money past companies I’ve worked for or have started have invested in operations, it’s kind of crazy. We lost a lot of time just trying to figure out how to get our database to scale correctly, or how to distribute the workload for our ETL tasks, or something like that.

Some people say “Well, serverless is no-ops”, which is not true, but it certainly is less ops. Most of what you need to do, the developer can actually handle; you might want a cloud professional that can come in and say “Alright, we want these IAM roles”, and there’s some tweaking of knobs you can do, but for the most part, the idea is to say “You don’t have to worry about 95% of the infrastructure anymore.” You just upload that code and it goes live, which saves your development teams a ton of money, and it saves you a ton of time to solve business problems, as opposed to technical problems. And then the cost aspect of it is huge. If you have spikes in traffic, you can certainly plan your scaling so that when you know you get heavy traffic - maybe around noon time, or certain times - you pre-warm your servers or your infrastructure so that you scale out a little bit and can handle that load. But you are wasting a ton of money when it isn’t under that heavy load, because you’ve got all this idle time. With serverless, you’re only paying for when it executes, which saves a lot of money.

[24:04] If you factor in a 95% reduction (or whatever it is) in operational costs, plus you’re not paying for any idle time… Serverless, if you run it at scale, might cost you a little bit more than just running a couple of EC2 servers, but if you factor in total cost of ownership and get rid of all of that operational work and all of that planning, the value is huge. So your actual cost savings are gigantic, compared to going that standard route.

By the way, I see that Chris managed to get his internet back, so he’s back with us. Cool, so this sounds exciting, as somebody who does deal with a lot of business management. What are the downsides? Is local development hard? Are there any pain points? What does this cost us?

I think that is actually a really good point in terms of local workflows. It’s easy to write a single function; there’s plenty of frameworks out there – again, Serverless being one of the most popular ones, and AWS has their Serverless Application Model (SAM), which has local development capabilities – and there’s a bunch of other ones out there as well. So you can write a function and you can execute it locally, and everything is great. You can simulate an event, and then it will spit back something for you. But as soon as you say “Well, I need to write to this queue”, or “I need to access information from DynamoDB”, or “I’ve gotta do some other calculation where I’m interacting with–” maybe I’m writing a function that interacts with three other functions, or a couple of other services, whether it’s through API calls or through direct function calls… Now it starts to get a little bit complicated. And again, there’s tools out there, and people are working on better tools to do it.

Sometimes you have to do a lot of mocking and stubbing in order to make the local aspect of this work a little bit better… But there’s also a lot of cloud-based solutions to this as well - Stackery, and AWS has their Cloud9 service, a web-based IDE that you can do some of that stuff with… So it’s getting better, but that aspect of it - local development - is sort of a pain.

But beyond the idea of just working locally, serverless right now does have its limits. AWS just announced that you can run a function for 15 minutes, as opposed to the traditional 5, and I think IBM’s run for 10 minutes. Google Cloud Functions I think is still 5… So there’s some limitations there. There’s limitations on the amount of memory you can use, and on the number of CPU cycles that you get with each function… So there are some limitations, and that means it isn’t necessarily perfect for every workload. But those are also somewhat arbitrary limits. The fact that a function can only run for 15 minutes is probably more of a provisioning or resource-planning constraint that AWS has, because they say “Well, we can’t just run servers with enough capacity that somebody could tie one up for an hour and a half. We need to kind of balance that”, because they’re paying for idle time, you’re not… Which I kind of mentioned in the last point about the cost savings - now the cloud provider is taking the risk on idle time, as opposed to the company that’s buying that time… So there’s a huge win there, obviously. But again, with some of those limitations, serverless isn’t necessarily right for everything.

So to run it locally - sorry to go back to that - you said that you either have to run all of the functions that the one you’re working on may need to hit, or you might need to mock those in some way… Are there any helpers for that? I assume they would be specific to the types of functions, whether they’re Lambda functions, or Google Cloud, or whatever other provider… Are they specific to those?

[27:53] Yeah. Also, to be clear about how these functions work - essentially, there’s a handler function within your code, and when the function gets triggered, the system knows to call that handler. From there, you can call other functions and have other requirements and things like that. But the basic idea is you’re just running whatever code you’re running. Whether you’re running JavaScript (Node), or Python, or Go - those applications will just run locally on your machine, so you obviously have to have that runtime installed so that you can execute the code. When you do that, where you’re gonna host the code really doesn’t matter when you run it locally; it’s when you’re trying to reach out to another service that things need to exist. If you’re using DynamoDB, for example, there’s a local version of DynamoDB that you can download and run… Or let’s say you’re accessing MySQL or Postgres - you can just run a local copy of that, and then point to it locally so that you don’t have to connect remotely.
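Since the handler is just an exported function, local execution can be as simple as requiring it and passing a simulated event – a sketch with hypothetical module and field names, assuming an async handler:

```js
// A sketch of local execution: call the handler yourself with a fake event.
const { handler } = require("./src/getUser"); // hypothetical handler module

const simulatedEvent = {
  pathParameters: { id: "123" }, // shaped like an API Gateway event, for example
};

handler(simulatedEvent)
  .then((res) => console.log("local result:", res))
  .catch((err) => console.error("local error:", err));
```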

One of the things that I do in my development a lot is - especially because I do a lot with microservices - I will write a microservice and test it locally and have it do what it needs to do, and then I’ll publish that, whether it’s in dev, or in staging, or sometimes in production, depending on what we’re doing with it… And then the great thing is that when you run another microservice locally, you can make that remote call to that live microservice. So it does give you the ability – of course, if you lose your internet connection or the function becomes unavailable for some reason, then obviously it’s harder to test, but… I mean, I’m a big proponent of writing a lot of stubbed tests, doing a lot of unit testing and things like that, and then sort of running a full integration test that actually will access live services in order to do it. But there are some things you can do - you can run local APIs, you can run local versions (like I said) of DynamoDB or some other services, but I’d say it’s not any more difficult than trying to test microservices written in a more traditional sense.
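A sketch of what those stubbed unit tests can look like – assuming the business logic takes its data layer as a parameter, which is one way (not the only way) to make it testable offline; the module and function names are hypothetical:

```js
// A stubbed test: fake the data layer so no live service is needed.
const assert = require("assert");
const { findMatches } = require("./lib/matches"); // hypothetical business logic

const stubDb = {
  query: async () => [{ id: 1, score: 0.9 }], // canned rows instead of a real DB
};

(async () => {
  const results = await findMatches(stubDb, { term: "serverless" });
  assert.strictEqual(results.length, 1);
  console.log("stubbed test passed");
})();
```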

One restriction that I noticed when doing some Lambda development - basically, the version of Node that you wanna use is not necessarily the version of Node that Amazon is running. I don’t know if that’s the same for Google Cloud or Azure, but I think with OpenWhisk anyway you have some sort of – at the very least you can run your own instance of it and have a better, more granular control over your environment. But that’s a problem I ran into; it’s like “Why isn’t AWS upgrading Node?” This version of Node that they’re running is about to become unmaintained.

That’s a good question, and actually that was one of the things that frustrated me quite a bit, because I was writing Node functions with Async/Await when I first started using Lambda - you could transpile and polyfill and run the latest features… And at the time, Lambda was at Node 4.3, so there was quite a bit you couldn’t do. Then they upgraded to 6.10, and that still didn’t have Async/Await, which meant that when I switched to writing a lot of things for serverless, I had to switch back to Promises. So I was writing a lot of things with Bluebird and things like that in order to manage the processes there.

Quite a while ago they upgraded Lambda to Node 8, and I know that Google Cloud Functions is now on 8… So most of that new functionality is there. I think part of the reason they lag is that it needs to be highly stable, and they may need to make some adjustments in terms of how it operates and how much memory it uses - and I guess they’re running it through their hypervisors and all kinds of things like that… I think they just need to be smart about it, and that’s why it takes a little bit of time to upgrade. But I will say that with Node 8 – there’s some new things that have come out, and it’s 8.10 that they’re running on Lambda - I’ve found I can do pretty much anything I want with it.

[32:03] It would be nice if they were always up to date, but it’s at a point now where – I know Node’s getting better, but version 8.10 is pretty good; it gives us Async/Await, it gives us classes, it gives us some of those more modern things that make development easier.
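For a concrete sense of what that runtime jump meant, here’s a small sketch (with hypothetical helper functions) of the same handler in the Promise style the Node 6 runtime forced, versus Async/Await on the Node 8.10 runtime:

```js
// Node 6-era Lambda: explicit promise chains (Bluebird or native).
function handlerOld(event, context, callback) {
  getUser(event.id) // getUser and enrich are hypothetical promise-returning helpers
    .then((user) => enrich(user))
    .then((result) => callback(null, result))
    .catch((err) => callback(err));
}

// Node 8.10 runtime: native async/await; the handler can just return a promise.
exports.handler = async (event) => {
  const user = await getUser(event.id);
  return enrich(user);
};
```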

Right. Yeah, I had resorted to actually transpiling my code with Babel, and then just uploading a bundle.

Yeah, yeah.

So I’m glad to hear that things have moved forward.

I think one other benefit - one that you sort of mentioned, but I didn’t really realize until you said it - is that with functions or services oriented to one specific thing, which aren’t reliant on other ones except at the edges where you communicate in and out of them, you can diversify the technology you’re using - whether you want to switch between languages, or switch between frameworks, or start migrating to a new language or framework… That’s a benefit that I hadn’t really considered.

Yeah, actually that’s one of the huge benefits there. Again, you think about your traditional microservice - everything you do in that microservice, you’re usually gonna choose one runtime. You’re gonna say “We’re gonna write everything in Python” or “Everything’s gonna be in Node” or whatever, and you do that because again, you don’t want your containers or your services, or the servers that the services are running on - you don’t want them to have too many runtimes installed, so they can do all these different things.

Right.

With something like serverless, you can say “Look, the function that accesses the database and writes this stuff here - Node is fine for that, that’s okay.” But then maybe we have some sort of number-crunching thing that we need to do in order to compile some reports, and maybe Python would be better to write that in. So now within one microservice you could have multiple languages being used, and those functions can communicate with one another through a simple HTTP call or through the SDKs. So it’s very easy for you to diversify that way. That’s within a single service.

Even more practical probably is to say “Look, we have a team that is writing this particular service, and they think it’s better to write it in Java (or .NET or whatever). Then we’ve got another team that is a JavaScript team, or whatever.” That’s really great, because now you can have a diverse set of technologies; you don’t wanna get too many, but you could have a diverse set of technologies. But what’s really great about this idea of splitting up functions into really small units is to say “Okay, somebody wrote this function in Python a year ago, and we have a new guy that came in and we need to make some changes to it.” You could probably rewrite that entire function in a couple of hours, because it’s so small; it’s a couple hundred lines of code, not even. Maybe a hundred lines of code. So you could rewrite that function in a new language, and then run your unit tests against it, and “Yeah, it does exactly what we need it to do.”

So that’s another great thing about this - you’re really minimizing the code surface. You do less and less in code and more with these managed services that it connects to, and it makes it extremely efficient for developers to go in and make changes, swap things out… And then you’re also not looking through that library file that is 10,000 lines long, with no comments, and things that aren’t even being used anymore - but you’re afraid to remove them because you don’t know for sure they’re not being used anymore… This is just much more obvious when you take this approach.

I feel like you’re calling out my codebase right now.

[laughs] We all have them.

That actually raises an interesting question, which is “How do you manage these codebases?” Is this a bunch of folders in a single repo? Do you have repos for every function? How are you even thinking about these things?

[35:46] Actually, that is one of the downsides to this. What I do, and what a lot of people recommend, is to create a separate Git repository for each microservice that you’re creating. Then, for example, the Serverless framework uses a serverless.yaml file, in which you specify all the functions, and you can also specify CloudFormation templates in there as well… So if you need to generate an SQS queue, or an SNS topic, or any other services you need (a DynamoDB table), you can do that all in one file. So you typically have your service all defined within one serverless.yaml file. It’s very similar when you’re doing a SAM template - you define all your functions in a single SAM template, along with your CloudFormation resources… And I like to split up my functions into separate files, too. Sometimes people will define a function that points to a handler within a larger file that has multiple functions in it… So you have a lot of flexibility there, but I always separate them into smaller files.
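As a rough sketch of the layout Jeremy describes – a few functions plus a CloudFormation resource, all defined in one file per service; every name here is illustrative:

```yaml
# A minimal serverless.yml sketch for a hypothetical billing service.
service: billing-service

provider:
  name: aws
  runtime: nodejs8.10 # the Lambda Node runtime discussed earlier

functions:
  createInvoice:
    handler: src/createInvoice.handler
    events:
      - http: POST /invoices
  getInvoice:
    handler: src/getInvoice.handler
    events:
      - http: GET /invoices/{id}

resources: # raw CloudFormation for the supporting services
  Resources:
    BillingQueue:
      Type: AWS::SQS::Queue
```

With a file like that, running serverless deploy stands up the functions, the API routes, and the queue together – the couple-of-commands workflow mentioned earlier.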

So now you have this folder, this Git repo, that has this set of functionality in it; you tag your functions so you know they’re part of a particular service, and so forth. That’s how I do it, and I’ve found it to be the best way. If you start co-mingling them in a larger monorepo or something like that, then it just gets confusing in terms of which service does which. But if you own that Git repo - and again, this can get difficult to manage, because sometimes you have a hundred microservices, so now you have 100 Git repos, which seems a little bit crazy, but I still found this to be the best - you can go in and document it, you can specify the well-defined interface, how people are supposed to communicate with it, what the events should look like going in, what events will look like coming out… So you can really own that and give that to one team, and then version it separately.

Of course, with microservices you can have 100 of them running, and then I can go ahead and swap services in and out; as long as I’ve kept the contract with every other microservice, I know it’s gonna accept the input and it’s gonna respond in a way that they can understand.

Yeah, I worry a little bit - and I don’t have much experience actually implementing serverless - that we’re gonna have… You know the old joke about microservices, right? It’s like, you have a problem, so you implement microservices, and now you have 100 problems.

Exactly. [laughs]

This might take that even to another level, at least in terms of like conceptual management of the code.

Yeah, I totally agree with you. I had kind of gone back and forth about the best way to organize stuff… Because in some cases, if you just think about a simple REST API - a lot of times there’s this argument for serverless functions to say “Okay, so if I have a REST API that looks up a customer, then I should point that to a serverless function that just looks up the customer. Then if I need to add a new customer, I should have another endpoint there, and that should point to a different function that handles just adding a customer. So the idea is to keep these functions as small as possible. But the problem is that then if you have a complex API, you may have 40 functions as part of a single microservice, and that becomes to me a little bit unwieldy. And there’s a lot of shared code you want in there - the database connection information, or configuration information… So there’s a lot of that that you wanna share, and you can certainly have a shared library between those different functions that get deployed when you deploy the function, but I like to consolidate sometimes.

I like to say, look, if I’ve got an API that handles admin of a user - it can add a new user, it can remove a user, it can update their profile image, or whatever you wanna do there - sometimes I’ll stick all of those routes into a single Lambda function. Because you also have this problem of cold starts, which we haven’t really talked about yet… When a function isn’t warm - it hasn’t been used in a while - and somebody tries to access it, it might take a couple of seconds before that function becomes available. So if you’re using functions as the back-end for an API, you wanna keep those functions warm, and you don’t want them to get cold, because then it could take some time, and there’d be higher latency in getting a response back. So by consolidating the routes into a single function that accesses a library and so forth, I’ve still found the performance to be extremely good, and then the management of it is a heck of a lot easier.

[40:18] Has anyone created – say you have this situation where you have a microservice, for instance, and maybe it does the four CRUD operations, or what have you… Has anyone created some sort of abstraction that says “Okay, you just write your code and pretend it’s a single codebase, and we’ll essentially split this up behind the scenes based on the endpoint”? Basically a tool that allows you as a developer to look at it as a single entity, just so you can reason about it a little better… And then maybe it implements sharing of code between things, so you don’t have to think about it - it would split up your service into multiple functions. Has anyone attempted anything like that, or is there anything out there that does this?

Well, to some degree… If you think about a web framework like Express, for example… The idea is Express is generally – you define all your routes, and then you kind of off-load those to separate files that will actually do the processing of those routes, and you would share your library there.

I actually wrote an open source project… It’s called Lambda API, and you can go to bit.ly/lambdaapi. It’s specifically for AWS Lambda, but it’s essentially a very lightweight version of Express. So when I’m writing APIs, I consolidate all the routes into one Lambda function… I’ll do that and then use Lambda API, which maps the routes just like Express, or Restify, or any of those would… And then I’ll break the actual business logic out into separate library files.

If I wanna be able to access the service from the REST API, I have one file that will do all that routing for me, so it makes it a little bit easier from an initial setup standpoint, and then of course I can communicate with those individual functions… But then I also would potentially launch those as separate functions, so that I can communicate with them directly through the SDK. So it’s not exactly what you’re talking about, but I see what you’re saying, where you just want to write an application and have it split it up automatically for you.
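A sketch of that consolidated-routes pattern using Jeremy’s Lambda API package – the route paths and library files here are hypothetical:

```js
// One Lambda function serving several routes, Express-style.
const api = require("lambda-api")();
const users = require("./lib/users"); // hypothetical business-logic library

api.get("/users/:id", async (req, res) => {
  res.json(await users.get(req.params.id));
});

api.post("/users", async (req, res) => {
  res.json(await users.create(req.body));
});

// API Gateway routes every path here; consolidating also helps keep it warm.
exports.handler = async (event, context) => api.run(event, context);
```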

I think there’s just a lot of code sharing that needs to happen there, so I think that you wanna know where those separations are… But again, part of the – and I don’t wanna get too deep into this, but one of the things that you often find with microservices and teams that are building microservices are shared codebases where there might be some sort of a database connection layer. And whether you’re connecting to a different database or not isn’t the point; it’s just – somebody wrote code that does the database connection. In a monolithic application you just include that, and that’s available for every service that you have running in the monolith. When you break that out, now sometimes you have ten different services that need to share code in order to do this database connection; the problem with doing that is obviously if somebody changes the code because they need to do something there, then all of a sudden you get all this code that’s out of sync. So you can go down the road of versioning and things like that, so that everybody could have their own version of it, but you’re always working on the main repository if you need to update that. But within an individual microservice, you can write your own shared libraries.

If you write a Lambda function that does some process where it finds some matches in a database - if there’s a snippet of code that does that, you can have that snippet of code be triggered when somebody calls that from an API and an API event comes in, and you can have it trigger that bit of code. But then you could also share that code with another Lambda function that’s meant to respond when there’s a Kinesis event that comes in, or some other event that comes in, or it’s called directly from another function.

[44:18] Within the microservice, reusing code is pretty simple, and when you deploy your microservice, you generally redeploy all of your functions so that any new updates are a part of it. But because it’s all owned by the same team or should be owned by the same team, managing that is a lot easier. I don’t know if that answered your question in any way, shape or form.

It’s hard to say… I need to look at this Lambda API thing. I’ll check it out here.

I think we’re at a good spot to take another quick break. When we come back, we’ll dive a little bit deeper into this concept of architecture - what does it look like to implement an application using serverless? Do you build your whole application that way? How does one architect to take advantage of this? We’ll see you after the break!

Okay, welcome back, everyone… Back on JS Party, talking about serverless. I wanna explore with you, Jeremy, the question of how we use this in the broader ecosystem of product development. If we’re starting to fold in serverless, is this something where you’re gonna rearchitect your system entirely to take advantage of it, or is it something where you keep your standard application, but it calls out for little pieces? How does this play into the way that we fully build applications?

First of all, I would highly suggest avoiding version 2 syndrome - saying “Hey, let’s just rewrite our whole application.” Because chances are most of your application is probably running just fine, or it’s at least running.

An important thing to remember with serverless - or any technology you want to integrate in slowly - is that it’s not an all-or-nothing proposition. It’s not like everything has to be serverless, or vice versa. The way that I would suggest, especially if you’re a new team looking at this - whether you’re already running microservices, or you’re running a monolith, or whatever you’re doing - is to look at what parts of your application you wanna improve, pick a small part of it (maybe it’s an ETL task, maybe it’s some sort of processing task), and then build out a small serverless microservice or application that handles that piece of your system. Then use something like the strangler pattern, where you would maybe use API Gateway so that most of your API traffic goes to your old monolith, or your other microservices, and then you take one route and you route that into the serverless application that you built.

[48:24] Again, that’s an important piece of it, because I do think that over time you might look and say “Well, we have a problem scaling this one particular piece of our application. And maybe my monolith works perfectly fine for everything else, but when I have to do X, I get bottlenecks”, so maybe that would be a good candidate to split out and take advantage of that near-limitless scaling that serverless gives us.

Interesting. I had to quickly google the strangler pattern, because that’s a new one to me.

Me too.

Essentially, if I’m understanding it properly, it’s basically giving you a way to migrate pieces at a time via having a routing layer in-between your application and other things, is that right?

That’s correct, yeah.

Cool. Okay, so coming from an existing thing, pick a piece that you want to scale better, and tackle that. What about when you’re thinking about building an application from scratch? Is serverless something where you would, for example, build a whole web app that’s all serverless, or is this something that fits into a broader ecosystem? How do you deal with things like authentication, and all that other kind of nonsense?

Yeah, so it all depends, obviously, on what you’re building. But if I’m working on a new greenfield application, I’m going to ask myself the question “Can this be built in serverless?” If the answer is yes, then you build it in serverless. If the answer is no, then you ask yourself that question again - “Can I build it in serverless?” - because you probably can.

It’s gotten to the point where I can’t see many applications where the majority of them couldn’t be built in serverless. I do think there are some limitations, again, especially with long-running tasks and things like that, but Serverless Inc. is launching v2 of their framework, which is gonna be cloud-agnostic, and one of the features there is you can actually launch your functions either as Lambda functions (which would be the traditional serverless), or into Fargate containers, so that you could run them as long as you wanted to. Basically, it would build the container, launch a little server for you, and scale that. That’s kind of a new thing where serverless might be heading - containers might be part of this.

But anyway - if I’m building a new application, I would pretty much look at it and say “What do I need to actually process? What’s the business logic that I have to write?” Because I think a lot of times when people start planning an application, they say “Okay, well what database should we use? What programming language should we write it in?” With serverless, I think you can just basically say “Okay, what do I wanna solve?”, and then you can find a bunch of managed services and pieces that you can glue together, and you really don’t have to write that much code in order to get a working application.

You’re most likely gonna have a front-end to your application, whether that’s a React app, or Vue, or Angular, or whatever you’re using. Then you start thinking about “Okay, how can I have serverless back my CDNs? How can I put stuff out in an S3 bucket, or on one of the other CDN providers, and say ‘That can be my single-page app’?” And maybe it can go beyond a single-page app… I’m rambling here a bit, but talking about this gets me sort of excited, because I think this is definitely the future… Look at something like Cloudflare Workers, or Lambda@Edge, which is sort of the globally distributed CDN that will call serverless functions as different events happen…

[51:55] So you can call a serverless function when somebody tries to access a cached object somewhere, and that can change the headers, it can detect what region they are and route them differently. It can perform A/B routing, so that it goes different places; it can know that it’s a mobile app, or a mobile device that’s accessing it, so it’ll do something different there.

Not only that, it can actually wait for the response from the origin and then do something with it - say “Okay, I’ve loaded an image, but now I wanna add these five or six headers to it, or I wanna change the caching behavior because it’s being accessed from a mobile device, or from the EU rather than the United States”, or something like that. So you start layering this in now, where you have all of these back-end services that are glued together with serverless, and then you have all of these CDNs out there that can host the front-end of it… And not only can they make API calls and do things like that, but they can also interact as part of the execution of loading something there. So you can handle your SEO… The possibilities are quite limitless when it comes to that stuff. And I’ll talk about authentication, but I’ll stop in case you have any questions in between.
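A sketch of the header-rewriting case Jeremy describes, shaped like a Lambda@Edge function on CloudFront’s origin-response event – the header values are illustrative:

```js
// Runs at the edge after the origin responds, before the response is cached.
exports.handler = async (event) => {
  const response = event.Records[0].cf.response;

  // "I wanna add these headers" / "change the caching behavior"
  response.headers["cache-control"] = [
    { key: "Cache-Control", value: "max-age=86400" }, // cache for a day
  ];
  response.headers["x-served-from"] = [
    { key: "X-Served-From", value: "edge" },
  ];

  return response;
};
```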

Well, you might be about to cover this, but one question I had is how much of your application logic can you actually push out to the edge? Because one of the things this gets me thinking about is that one of the major limitations on performance we’ve run into is literally the speed of light. You can’t speed up the speed of light. So if somebody’s over in the EU, or in Africa, or wherever, accessing your application back in the U.S. somewhere, that builds in a whole bunch of latency. But if you can actually push a lot of that logic out all the way to the CDN… When I first saw stuff about Lambda at the edge, my mind was blown. I was like, “You mean I can actually be running my application where the user is, not where I am?”

Correct… I mean, to a certain extent. You certainly don’t want your Lambda function that’s being accessed in Tokyo, or something like that, to be calling a database that’s hosted in us-east-1, in Virginia, because you’re gonna have latency there. So you’ve got a very limited set of time in which you can execute some sort of code. But the great thing about it - let’s say that you want to globally distribute content; if it’s a blog, or content on your website, or even a response from the API, which might be some sort of JSON that has your formatted blog post in it - even the original call back to the origin to load that the first time, you can then cache. So you could, from the edge, make an API call to your API that runs in Oregon, or in Ohio, or wherever you’re running your data center… Maybe the first time somebody accesses that page and it’s gotta load that JSON file or the API response, it takes a second. But then every other time, until it expires - and you can set that on each individual piece of content; you’ve got a lot of power on the edge there - it will load instantaneously.

So you wanna be careful about how much you’re trying to do in real-time on the edge, because you’ll lose the benefit of the saved latency, but you certainly can cache bits of your data. I think about my own blog - it loads from a MySQL database every time you load a page, which is ridiculously inefficient; it’s just WordPress, so it’s kind of plagued by that… But it would be so much easier for me to just cache that information and have it as a cached, static page, because 99% of that page doesn’t change. And if I did have to change something small, I could make an API call; the page would load, and if it takes 500 milliseconds for some bit of dynamic display to load because it has to make an API call, even from the edge, then let it. But yeah, you can put a lot of that functionality right out there on all those edge servers.

[56:16] Essentially, you could run the equivalent of a service worker with a little proxy there, except instead of it being per-browser, it’s per-location, on the CDN.

Correct, yeah. It’s very exciting. There’s a lot of very cool things that can be done with that.

Would functions typically be authenticated before they’re run, or is that something that the function itself would have to handle?

That’s actually another good question. The way that authentication works - at least in AWS, which is the primary one that I use and the one I’m most familiar with… If you’re calling functions from one another - say, calling your billing service from your catalog service - the IAM roles are all built in; you have to give a function permission to invoke another function through the Lambda SDK. But from the outside, there is no access to your Lambda functions at all. They actually all run behind a control plane, with no direct network access to them, which makes them highly secure. So in order for you to trigger a Lambda function from the outside, you have to route it through something like API Gateway.

API Gateway has a whole bunch of built-in functionality where you can have it call an authentication function. The first time somebody tries to make a call to one of your endpoints, that will actually run a Lambda function that can look at a web token, or do OAuth, or something like that, where it reads whatever types of authentication headers you’re sending in, and then makes the decision as to whether or not that caller has access to specific routes. Then you basically just send back a policy document, and the AWS API Gateway will decide whether or not you can access specific routes.

So that’s a really great way to do it, where your Lambda functions can be pretty dumb. They don’t have to know whether or not somebody has access; they just know that if API Gateway allows an event to be routed to them, then it’s authenticated. Of course, you get access to all the headers and everything that gets sent to you within that Lambda function, including the policy document - so if there’s something in there where they have the ability to read but not the ability to write, then within your function you may wanna add those ACLs. But for the most part, you would handle that at the gateway level.
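For reference, a sketch of what that authorizer function can look like – API Gateway’s Lambda (custom) authorizer shape, with the actual token check stubbed out:

```js
// API Gateway calls this before routing the request; it returns an IAM policy
// document saying whether the caller may invoke the requested route.
exports.handler = async (event) => {
  const token = event.authorizationToken; // the auth header API Gateway passes in
  const allowed = token === "allow-me"; // stand-in for real JWT/OAuth validation

  return {
    principalId: "user-123", // hypothetical caller identity
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        {
          Action: "execute-api:Invoke",
          Effect: allowed ? "Allow" : "Deny",
          Resource: event.methodArn, // the route being requested
        },
      ],
    },
  };
};
```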

Awesome. We’re gonna have to wrap pretty soon. Are there any major things going on in the serverless world, either big advancements that happened recently that people might not have heard about, or stuff that’s in progress about to hit that you wanna share?

A couple things. I wanna mention a few companies that are doing some really interesting work with serverless observability. With our traditional applications, if we’re running servers, or even if we’re running containers, we can install all kinds of daemons and bots and things that run there and listen - they know what our CPU usage is, they know if we’re exceeding memory, or if there’s something happening, and that gives us a whole bunch of reporting.

With serverless, obviously, the functions themselves are ephemeral - they spin up, and when no one’s using them they go to sleep again, or they actually disappear completely. So you can log information to CloudWatch Logs and then kind of go through it, but seeing the whole process from request, to processing, maybe through a couple of different managed services, and then being able to see the results - and if something happens there, tracking the billing… There’s just all kinds of things that you really don’t have good access into, other than poring through the logs yourself, and even that is sort of a pain.

So there’s a bunch of companies… Dashbird is one that has an observability platform. Epsagon just launched their product yesterday, actually, which is a serverless observability and tracing platform, and they do some pretty cool things in this space… There’s a company called Thundra, which was a spin-off of OpsGenie, which just got bought by Atlassian… So there’s a bunch of companies in the space, plus there’s a whole security aspect around this which we didn’t really talk about. A company called PureSec - they’re out of Israel; Ory Segal is the CTO over there - is doing some really great work in terms of building tools that help with things like event injection, remote code execution, and other things that are still possible and are attack vectors against serverless.

[01:00:35.22] There are a lot of companies that are building some really cool stuff, a lot of companies getting funded… PureSec just got funded with another seven million dollars, and then obviously Serverless has raised money, and a couple others. So there’s some interesting things happening, some cool tools being built… AWS Lambda just announced their 15-minute execution times, which is kind of a big thing, as well as that application view. And one of the guys I know at AWS has said “Look, re:Invent is coming up in a couple of weeks here - five weeks, or whatever it is…”, and they haven’t even scratched the surface of what they’re gonna launch. They said they’re basically gonna blow people’s minds with new stuff that’s coming down the pipe for serverless, so it should be some exciting times very soon… Or it already is, but there’ll be more exciting times.

Very cool. And if someone wants to get started with this and just kind of play around, what is the easiest way, in your opinion, to do that? Please don’t say Lambda…

I’m not gonna say Lambda. I’m gonna say the Serverless framework, because I do think that with the Serverless framework version 1 that’s out now, it’s very easy for you to just say “I wanna launch to Lambda, I wanna launch to Azure, I wanna launch to Google Cloud Functions”, or whatever. There are all different levels of functionality there. Obviously, again, Lambda is light years ahead of some of these, and there are a lot more capabilities there, but certainly if you just wanna play around with it and write a couple of functions and see how they all work with one another, any of the cloud platform providers are great. The major ones are doing some great work. Microsoft has got some good stuff with Azure - they’ve got some cool stuff with durable functions - and the IBM OpenWhisk stuff is very good… There’s all kinds of great stuff happening, which is why we do need some standardization, so that it’ll be easier to go between different providers.

I would say download the Serverless framework (Serverless.com) - there’s a bunch of help guides out there, a bunch of Get Started guides and things like that; it’s super-simple to play around with. Don’t be afraid of the frameworks, don’t be afraid of the deployment and stuff like that; it’s just writing code. Write some code that takes an event in, does something with it, and spits something back out, and you’ll be surprised how easy it is to get started with this. And what’s nice about the Serverless framework is that once you’re ready to actually put it up on the web and see it in real-time, you just run sls deploy (or serverless deploy) and it puts it up there for you - it handles all the deployment, all of the configuration - and then you get a URL endpoint back, and you can go ahead and start playing around with it.

One last question that came in from the Slack - if somebody has listened this far, is there anything that we haven’t covered that they should not leave without knowing? Particular resources, talks to go listen to, other types of things?

Yeah, there is a ton of information out there, and we probably just scratched the surface of most of this stuff with serverless… There have been a number of conferences based around serverless, if you wanna watch some videos - Serverlessconf just had their latest one in San Francisco, I think in August… If you search for “Serverlessconf San Francisco 2018” or something like that, you should be able to find all of the videos from it on A Cloud Guru, and there’s a bunch of 30-minute talks and a few 5-minute lightning talks, and they talk about everything. You’ve got everyone from Simon Wardley speaking, and I think Ben Kehoe was there… There’s a whole bunch of people in this space that really know their stuff. That would be a great place to go and watch a number of videos that go deep into some of the challenges, some of the benefits, and all the other things around serverless.

Awesome! Well, thank you, Jeremy, for joining us for this week’s JS Party. Nick and Chris, awesome as always, and we’ll catch you all next week!

Thank you, guys.

