NGINX and the Future of the Web Server

Robertson, CEO of NGINX: “Today’s websites aren’t really websites anymore, they’re applications.”


Web server company NGINX bills itself as “the secret heart of the modern web” and claims to power 60% of the world’s busiest websites.

CEO Gus Robertson is an Australian native with big ambitions for the company: while NGINX already has a significant presence in the US, he now plans to expand his public profile worldwide. ZDNet recently spoke to Robertson to find out more.

ZDNet: Tell me about NGINX.


Robertson: There are several different categories in the web server market. Apache is the original web server and it was built 20, 25 years ago as an open source web server.

It was designed for a different type of Internet than we have today. Back then, websites were really brochures. Websites today aren’t really websites anymore, they’re apps. You connect to them, you share, you stream videos, and you use a host of other features.

NGINX started in 2004, as an open source project, written by one of our founders, Igor Sysoev, and he wrote the software himself, 100%.

Where did he come from?

Moscow, and when he started NGINX he was really scratching an itch he had had for some time. At the company where he worked, he was managing concurrent incoming connections to the application he was working on, and Apache really couldn’t scale up to 1,000 or maybe 2,000 concurrent connections.

He tried writing modules for Apache to scale it beyond those limits. There was actually a well-known challenge on the internet at the time (the C10K problem) to see who could break the barrier of 10,000 concurrent connections.

Igor went home, wrote some code, tested it, broke the 10,000 barrier, and open-sourced the code. That was in 2004. He managed the project on his own until 2011. By then, it had grown too big, because at that point around 50 million websites were using the software.

He was just getting too many feature and enhancement requests, so he got together with two of his friends, formed a company, and called it NGINX Inc. The idea was that they would be able to invest in more engineers and support staff around the project and then be able to monetize it somehow.

I joined the company in 2012 when there were seven guys in Moscow and myself in the United States. Since then we have been able to grow the business and now have over 120 employees worldwide.

With this next step in our expansion, we have opened an EMEA office in Cork, Ireland, and expect to grow to over 100 people there over the next three years. The business has grown year on year and we now have over 317 million websites using our software, including 58% of the most trafficked sites in the world.

We are now the default and most popular web server for any website with reasonable traffic. Think sites like Uber, Netflix, BuzzFeed, BBC and SoundCloud.

Is this a simple growth path?

Simple in terms of adoption and growth. It really took off around 2007, 2008. That’s when the way people interacted with websites changed.

That’s when websites changed from brochure websites to sites with real content and real apps.

This is when broadband was fully embraced and cell phones started to appear. There were so many connections and so many people coming to the websites and the sites had to be able to scale.

NGINX became the default standard because of our architecture, which was a very different architecture from Apache.

Apache has a process-driven architecture, rather than an event-driven architecture like ours. This means it handles traffic in a very different way than we do.

What is the difference between how you and Apache handle traffic?

Rather than allocating separate memory and CPU for each connection and keeping it open, we only use memory and CPU when a request arrives on a connection, and pass it on to the upstream server.

We don’t dedicate resources to a connection while it’s idle, so we don’t lock up CPU and memory, and we can handle traffic asynchronously.
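The event-driven model Robertson describes shows up directly in NGINX’s core configuration. As an illustrative sketch (the numbers here are examples, not recommendations): a small pool of single-threaded workers each multiplexes thousands of connections, instead of a process or thread being dedicated to every connection.

```nginx
# Illustrative sketch of the event-driven worker model.
worker_processes auto;        # one worker per CPU core

events {
    # Each worker multiplexes many connections asynchronously,
    # rather than dedicating memory/CPU to each open connection.
    worker_connections 10240;
}
```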

Would you describe your way of working as completely flexible in this sense?

Exactly. A good analogy is a bank teller. You don’t assign a teller to every person who walks into the bank, standing by just in case they want to deposit or withdraw money. A teller only serves you when you actually step up and ask to deposit or withdraw.

So where does the speed come from?

This comes from the lightweight nature of our software. Although we have an incredible amount of capability and functionality in the software, it is fewer than 200,000 lines of code. If you install it, it is less than 3MB.

We’re very fussy about not adding an extra line of code if it doesn’t need to be there. It’s very lightweight, very powerful software, and we don’t want it to become bloatware.

To what do you attribute the company’s success? Is it simply the quality of the software?

We are the world’s leading web server for high-performance websites. But what we have also done is extend the open source product into our commercial offering, adding functionality that takes it from a web server to an Application Delivery Platform (ADP).

Now, an ADP does more than just deliver applications. It does load balancing, it does caching, it has security capabilities, and it acts as an application firewall. It does health checks, monitoring, and so on.

It is the natural bump in the wire to authenticate incoming traffic or to terminate and encrypt connections. It’s the natural place to cache commonly used content, like images, videos, or HTML pages.
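The two “bump in the wire” roles Robertson mentions, terminating encrypted connections and caching common content, can be sketched with standard open-source NGINX directives. The certificate paths and the `backend` upstream name below are placeholders, not anything from the interview:

```nginx
# Sketch: TLS termination plus edge caching of common content.
proxy_cache_path /var/cache/nginx keys_zone=static:10m;

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/example.key;  # placeholder path

    location /static/ {
        proxy_cache static;          # serve cached images/HTML from the edge
        proxy_cache_valid 200 10m;   # keep successful responses for 10 minutes
        proxy_pass http://backend;   # decrypted traffic on to the app server
    }
}
```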

You can dramatically speed up an application by moving the heavy HTTP lifting to the front of the application, so that the application server on the back-end doesn’t have to handle it.

If you think about how apps are delivered today, let’s say Amazon.com for example. Amazon.com is about 178 individual services, which means that each individual app is there to do a very specific thing.

If you search for Nike shoes, for example, you get a lot back. You get reviews, you get recommendations, you get sizes, you get all of that information, and each one is a separate service, or microservice, that focuses on providing that one thing.

As you browse, all of these services need to communicate, and the way they communicate is through HTTP traffic. And how do they do that? They have NGINX.
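That pattern of many services talking over HTTP through NGINX is essentially location-based routing to separate upstreams. A minimal illustrative sketch, with invented service names and addresses:

```nginx
# Sketch: routing HTTP requests to separate microservices.
# Service names and addresses are invented for illustration.
upstream reviews { server 10.0.0.11:8080; }
upstream sizing  { server 10.0.0.12:8080; }

server {
    listen 80;
    # Each path maps to the one service that "does that one thing".
    location /reviews/ { proxy_pass http://reviews; }
    location /sizing/  { proxy_pass http://sizing; }
}
```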

So how do you manage a smaller site or app?

The same problems exist for the little guys as for the Amazons. You look at how you handle the incoming connection, how you handle the encrypted connection: whether I’m a bank or a small site, I still need to encrypt that traffic.

And if I’m using an app, I always expect sub-second response times. The issues affecting a small website are exactly the same as those affecting a large one; it’s just a different order of magnitude.

How do you keep it all safe?

There are many ways. One is SSL. Another is a Web Application Firewall: the ability to inspect different traffic and monitor it. We have a lot of discrete functions configured on the back-end. For example, you can say, “I know all my end users, so as users come in, I can whitelist those I know or blacklist those I don’t know.”

I can rate-limit users to cap the requests a certain user can make, and that’s really important, not only to mitigate incoming DDoS attacks, but because you can also be DDoSed internally by another API.
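Both controls Robertson describes, allowing or denying known clients and rate-limiting requests per user, map onto standard NGINX directives. A hedged sketch, where the CIDR range, rate, and `backend` upstream are all hypothetical:

```nginx
# Sketch: access control plus per-client rate limiting.
# Track request rate per client IP; numbers are illustrative.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location /api/ {
        allow 192.168.0.0/24;           # "whitelist those I know"
        deny  all;                      # "blacklist those I don't"
        limit_req zone=perip burst=20;  # cap requests a client can make
        proxy_pass http://backend;      # placeholder upstream
    }
}
```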

And all this is simple?

We have a configuration file in NGINX, and NGINX runs on Linux, so it’s command-line driven. We don’t have a configuration dashboard per se.

But we have a dashboard that shows you all the monitoring and analysis of all incoming traffic.
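As a point of reference, even the open-source NGINX exposes basic traffic counters that such monitoring can build on, via the `stub_status` module; the commercial dashboard offers much richer metrics. An illustrative fragment:

```nginx
# Sketch: expose basic connection/request counters locally.
server {
    listen 127.0.0.1:8080;   # keep status endpoint private
    location /status {
        stub_status;         # active connections, accepts, requests
    }
}
```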

What are the biggest issues your customers are facing right now?

DDoS is huge: it can bring a site down. But plain traffic load is the most common issue.

If you look at the industry in the United States, Thanksgiving is one of the biggest [days for website traffic], along with Black Friday and Cyber Monday. Every year, big sites go down on these days because they didn’t plan for or anticipate the amount of traffic they were going to get. And that’s good traffic. It’s not bad traffic. It’s not a DDoS attack, but it can also bring down a site.

People describe NGINX as a kind of shock absorber on the front of your website.

But surely there must be occasions when traffic can overload a site?

There are limitations, but since NGINX does not block traffic, we can still handle very large amounts. We’re not saying we can handle it all. If you are overwhelmed by a massive DDoS attack, then that’s what it is. But NGINX is very good at absorbing the shock of a massive amount of internet traffic.

If there is a limitation, it is bandwidth.

What else is new with NGINX?

We’ve extended NGINX Plus with load balancing, caching, SSL, monitoring, and analytics. What all of this does is put us in front of another class of technology: the application delivery controller, made by companies like F5 and Citrix. They created a hardware-based approach to solving application acceleration.
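The load balancing that this extension covers can be sketched with an `upstream` block; the balancing method and server addresses below are illustrative placeholders:

```nginx
# Sketch: software load balancing across application instances.
upstream app {
    least_conn;             # send work to the least-busy backend
    server 10.0.0.21:8080;  # placeholder addresses
    server 10.0.0.22:8080;
}

server {
    listen 80;
    location / { proxy_pass http://app; }
}
```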

We are seeing a transition from hardware to software, and from a network-centric perspective to a software perspective. We see many of our customers migrating from these expensive hardware devices to our commercial NGINX Plus product. It’s because of the cost savings, because it’s software, because it’s app-centric, and because it runs in the cloud and is cloud-native.

What we’re seeing happening is that we’re all moving from a monolithic, all-in-one-package approach to a microservices or distributed application approach.

Learn more about NGINX and web servers