Everybody is talking about Serverless these days. What is it, and what’s all this buzz about? To properly understand what it brings to the table, you have to look back at the history of web development.

History

During the dotcom bubble (1997-2001) a lot of people saw business opportunities on the internet. Suddenly you could reach customers all across the world and enter markets you never thought you could get into without enormous capital and power. The internet was for everybody, and it did not take a lot of capital to start your own business. It was a “get big fast” kind of business that anybody could jump into. As a result, thousands of .com websites (hence the name dotcom bubble) popped up.

Websites in those days were created using plain old HTML, which was created in 1993, and CSS, which came three years later in 1996. Some websites used JavaScript to add a bit of dynamic behaviour to their pages. Even fewer, but fancier, websites made use of Flash to give their sites an interactive feel.

While HTML, CSS & JavaScript have since evolved and stayed relevant, Flash is almost dead. Part of the reason is that you had to create Flash files in a separate application that felt radically different from coding HTML, CSS & JS. It has also been marred by all kinds of vulnerabilities, to the point that most browsers block it by default these days. Microsoft made a half-hearted attempt at Flash’s market with Silverlight, but it did not gain any momentum either.

Coming back to the topic, entrepreneurs were creating (or commissioning) websites of all kinds and putting them on the internet so people could use them. For people across the world to see their website with its fancy products, the HTML, CSS, JS & Flash files needed to be hosted somewhere. That is, they had to be put on a server that was always running and served these files to users’ browsers.

Deployment

Typically, you would log into the server using FTP and copy your files over to make the website live.
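For the curious, here is a minimal sketch of what that deployment amounted to, using Python’s standard-library ftplib. The host, credentials and file names are hypothetical placeholders; in practice most people used a graphical FTP client rather than a script.

```python
# A minimal sketch of old-school FTP deployment using Python's ftplib.
# Host, credentials and file list are placeholders for illustration.
from ftplib import FTP

HOST = "ftp.mysuperawesomewebsite.com"  # hypothetical FTP host
FILES = ["index.html", "style.css", "app.js"]

ftp = FTP(HOST)
ftp.login(user="webmaster", passwd="secret")  # placeholder credentials
ftp.cwd("/public_html")  # a typical web root on shared hosts

for name in FILES:
    with open(name, "rb") as f:
        # STOR uploads the file; the site is "deployed" once the copy finishes
        ftp.storbinary(f"STOR {name}", f)

ftp.quit()
```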

Domains

A server just has an internet connection and a dedicated IP address that its Internet Service Provider has assigned to it. So how do people reach your website? They can’t be expected to remember the IP address of each and every website they visit.

You would register a domain name, something like mysuperawesomewebsite.com, with a domain registrar and configure its DNS to point to the IP address of your server. Now anybody can reach your website by going to http://mysuperawesomewebsite.com in their browser.
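Under the hood, the browser first asks a DNS resolver to translate that name into the server’s IP address. Here is a tiny sketch of that lookup in Python (using example.com, since mysuperawesomewebsite.com is a made-up domain):

```python
# Ask the system's DNS resolver for the IP address behind a domain name.
# This is the translation step a browser performs before connecting.
import socket

ip = socket.gethostbyname("example.com")
print(ip)  # the address the browser then connects to on port 80
```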

You might ask how one gets to know the name of the website in the first place. For that we had search engines (Google, Yahoo, etc.). But that’s a totally different topic and I’ll cover it later.

Scaling

While most of these dotcom websites died out as the funding ran out, some with good quality products or services did survive and started earning enormous profits.

But they faced challenges of their own. As a website became more and more popular, the always-online hosting server was no longer able to handle the volume of users it was getting.

The solution – get a bigger/badder machine with better hardware. So, you increased the CPU and RAM on your server. This is called scaling vertically (scaling up). But there are limits to it: you can only add so many CPUs and so much RAM to a single server. Well, unless you have a factory of your own to build totally custom hardware, in which case, go build your damn supercomputers that do a lot of good for the world (think weather prediction).

Another problem is scaling up input & output. Most problems these websites solved were not compute-intensive; they were mostly limited by I/O. Adding more I/O to a single machine is not as straightforward as adding CPU and RAM. You need multiple network cards (which is easy), but you also need multiple internet connections. And these connections were not the typical dialup lines people used at home back then; they were expensive and hard to get.

Clustering

To solve the scaling problem, you create a cluster of servers. That is, instead of one server running your website, you put the HTML, CSS & JS on two or more servers and distribute the load between them.

Each server would have a different IP address and a dedicated internet line, which handles the I/O problem. With DNS, you specify multiple IP addresses against your domain (mysuperawesomewebsite.com) so that requests are distributed between the servers. This is a very basic way of distributing load. Most DNS servers support round-robin, which shuffles the list of IP addresses that a user’s computer gets when it tries to reach your website. Browsers try the IP addresses in that order to load your website. So users in the US might get the first IP address as the primary for your domain and connect to Server 1, while users in India might get the second one as the primary and connect to Server 2.

This is an oversimplification, and the actual order depends on your DNS server’s configuration (DNS is, after all, hierarchical).

You get some fault tolerance too: if the server with the primary IP address is down, the browser will attempt to connect to the secondary server using its IP and load your website.
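Here is a rough Python sketch of that round-robin-plus-failover behaviour, assuming a hypothetical domain with multiple A records: fetch every address the resolver returns, then try them in order until one accepts a connection.

```python
# Resolve all A records for a domain and try each address in the order
# the resolver returned them until one accepts a TCP connection.
import socket

def connect_with_failover(domain: str, port: int = 80) -> socket.socket:
    # getaddrinfo returns every configured address; with round-robin DNS
    # the order varies between lookups, spreading users across servers
    for family, type_, proto, _, addr in socket.getaddrinfo(
        domain, port, type=socket.SOCK_STREAM
    ):
        try:
            return socket.create_connection(addr[:2], timeout=3)
        except OSError:
            continue  # that server is down; fall through to the next IP
    raise ConnectionError(f"no server for {domain} is reachable")

# Usage (hypothetical domain):
# conn = connect_with_failover("mysuperawesomewebsite.com")
```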

There are much better solutions to these problems of load balancing & fault tolerance, which I’ll discuss below. But this is how websites used to run back then. In fact, a lot of extremely popular websites still run in this fashion and do quite well.

The Problems

The main problem with this approach is that you have to manage your servers manually. If the volume of users suddenly increased, you would have to buy or lease a new server from your hosting provider (which itself takes time). Then you would have to deploy your HTML, CSS & JS files on it, add the new IP address to your DNS and wait for the change to propagate across the world (which could take up to 48 hours). Until then, requests wouldn’t even reach your new server; it would just sit idle, not really helping out with balancing the load.

There is another factor here, and that is the database. Typically your website stores some user information or shows some dynamic data that lives in a database. This database needs to be scaled too, but the principles behind scaling it are largely the same.

The New Way – Serverless

The idea behind Serverless computing is that you worry about what your application does, not about how or where it is deployed or how it is scaled up and down.

The platform takes care of spawning new containers when requests grow and deleting them when requests come down. You pay for the usage your website incurs when others use it. When nobody is using it, you pay nothing (well, you pay a nominal amount, but it is minuscule compared to what you would pay for a normal server).
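To make this concrete, here is a minimal sketch of an AWS Lambda function in Python, assuming an API Gateway trigger (the query parameter name is just an illustration). Notice there is no server code at all: AWS spins up a container to run the function when a request arrives and reclaims it when traffic dies down.

```python
# A minimal AWS Lambda handler shaped for an API Gateway trigger.
# There is no server to manage: AWS runs this function on demand.
import json

def lambda_handler(event, context):
    # 'event' carries the request details (path, headers, query string, ...)
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```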

There is, however, a hidden cost to it. A big part of Serverless is AWS Lambda, which again is extremely cheap at low usage. However, it is a layer that Amazon built on top of EC2 and containers, so the added cost of developing and maintaining that layer is passed on to your Lambda bill. The same goes for EC2, which is costlier than a dedicated instance. From a business point of view this is unavoidable, and besides, their pricing is very fair. Having said all that, it will always be cheaper for a website/application that gets a lot of traffic to be hosted on a dedicated machine. The performance will also be much better than what you get on a shared-tenant instance in EC2. But then you pay for that machine even if nobody is using it.

In my opinion it is best to start a new application on a Serverless platform like AWS Lambda and then migrate to a dedicated instance if the application becomes very popular. But then again, maybe you can spend the time you would have wasted managing those servers on building another cool serverless application. Remember, life is short and we only get one shot at it. The choice is yours.
