A Serverless Case Study - Netflix's Architecture
Describing one of the major practical use cases of serverless application development
In the last article, we learned about the definition of serverless architectures and how they truly work.
We also learned how they differ from traditional client-server-database architectures.
In this article, let’s delve into a practical use case of this revolutionary technology and see how a major company employs it in their own system.
But first, let’s cover some key concepts 👇
Stored Procedures
When talking about serverless functions, we often mean small pieces of function code that can be tightly integrated with a database, much like stored procedures.
These functions can be triggered by a workflow created by a specific microservice. This added complexity often calls for some extra considerations in your system:
It can force you to be vendor specific. For example, with the AWS Serverless Application Model (SAM), you can choose a certain language or framework and develop your function in it. You can then use the AWS Serverless Application Repository (SAR) to easily manage and distribute your functions.
Vendor lock-in is a tricky subject to tackle because your functions execute in the context of that vendor, and your unit and integration tests will begin to reflect that as well.
If you go with a vendor that doesn’t offer such capabilities, you’ll have to figure out those solutions yourself.
If your application is simple enough, it may not need the overhead described above. However, in a complex architecture like that of Netflix’s video platform, it becomes necessary.
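To make the idea concrete, here is a minimal sketch of what such a serverless function looks like, written in the shape of an AWS Lambda handler. The event fields and the response shape are illustrative assumptions, not any vendor's exact contract:

```python
import json

def handler(event, context):
    # The platform invokes this function when its trigger fires (for example,
    # a workflow step or an HTTP request routed through a gateway).
    # 'event' carries the payload; 'context' carries runtime metadata.
    record = json.loads(event["body"])

    # In a real function, this is where you would write to the vendor's
    # data store -- the tight database integration (and the vendor coupling)
    # discussed above.
    return {
        "statusCode": 200,
        "body": json.dumps({"saved": record["id"]}),
    }
```

Note how the function itself is tiny; the complexity lives in the triggering workflow and the vendor runtime around it, which is exactly why tests end up reflecting the vendor's context.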
Requirements
Some requirements for a Netflix video platform service include:
An audio/video processing pipeline robust enough to handle complex, high-latency workflows with runtimes ranging from seconds to several minutes.
Low-latency, latency-sensitive workflows in which a user is actively waiting for a job to finish.
A codebase modular enough to support loose coupling between infrastructure code and application code.
Better monitoring and telemetry: more moving parts add complexity, but the separation of concerns gives better observability into each component of the pipeline.
How a typical microservice functions
When a client makes a request, the API Gateway acts as a bridge between the client and the business logic (the services), doing the following:
routing the request to the correct service
authentication and authorization of the request
load balancing for requests
monitoring and logging requests
The services themselves maintain their application data with direct connections to databases and contain all the business logic.
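The gateway responsibilities listed above can be sketched as a toy dispatcher. The service paths, the token check, and the response shapes here are all made up for illustration; a real gateway would also balance each request across service replicas:

```python
import logging

logging.basicConfig(level=logging.INFO)

# Routing table: path -> backend service (stubbed as plain functions).
SERVICES = {
    "/subtitles": lambda req: {"status": 200, "body": "subtitle job queued"},
    "/profile":   lambda req: {"status": 200, "body": "profile data"},
}

def gateway(path, token, request):
    logging.info("request for %s", path)       # monitoring and logging
    if token != "valid-token":                 # authentication (stubbed check)
        return {"status": 401, "body": "unauthorized"}
    service = SERVICES.get(path)               # routing to the correct service
    if service is None:
        return {"status": 404, "body": "no such service"}
    return service(request)
```

The point is the separation: the gateway handles the cross-cutting concerns, while the services behind it hold the business logic and their own database connections.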
Netflix microservice — serverless function architecture
Netflix’s platform layers three different services:
the client facing high-level API service that maps requests to internal serverless functions
the rule-based layer for connecting the request to the correct function
the serverless layer for running the functions upon invocation from the workflow layer
The higher-level APIs can either:
be decomposed into lower-level APIs and rules, or
invoke a serverless function directly
A typical high-level API call might be, for instance, a subtitle generation request:
the level 1 API’s workflow rules call a text-based lower-level API
the text API at level 2 then calls a language-based lower-level API to generate subtitles in a specific language
the workflow rules at level 3 then instruct a serverless function to generate the subtitles in that language, for instance French
The pipeline from the top-level service down can take anywhere from seconds to several minutes to complete.
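The three-level decomposition can be sketched as nested rule tables that route a high-level request down to the serverless function that does the work. Every name and rule here is illustrative; Netflix's actual APIs and rule engine are not public:

```python
def generate_subtitles_fn(language, video_id):
    # Level 3: the serverless function that performs the actual work.
    return f"subtitles[{language}] for {video_id}"

LEVEL2_RULES = {
    # Level 2: the text API maps a language request onto the function.
    "text/subtitles": lambda req: generate_subtitles_fn(
        req["language"], req["video_id"]
    ),
}

LEVEL1_RULES = {
    # Level 1: the client-facing API decomposes into a lower-level text API.
    "subtitle-request": lambda req: LEVEL2_RULES["text/subtitles"](req),
}

def high_level_api(api_name, request):
    # Entry point: the rule-based layer resolves the request level by level.
    return LEVEL1_RULES[api_name](request)
```

For example, `high_level_api("subtitle-request", {"language": "fr", "video_id": "tt123"})` walks all three levels before the function runs, which is why end-to-end latency can stretch from seconds to minutes in a real pipeline.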
Advantages of a layered system include:
Functions are only called for specific use case triggers defined explicitly within the rules.
Success and error notifications from functions are recorded, which helps track which process in the pipeline is taking too long to complete, erroring out, retrying, and so on.
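Recording those success and error notifications can be as simple as appending structured events and querying them. The event shape and thresholds below are invented for the sketch; a real system would ship these to a telemetry backend rather than an in-memory list:

```python
import time

# In-memory event log standing in for a real telemetry backend.
events = []

def record(step, status, duration_s):
    # Each pipeline step reports its outcome and how long it took.
    events.append({
        "step": step,
        "status": status,          # e.g. "success", "error", "retry"
        "duration_s": duration_s,
        "at": time.time(),
    })

def slow_steps(threshold_s):
    # Which steps are taking too long to complete?
    return [e["step"] for e in events if e["duration_s"] > threshold_s]

def failing_steps():
    # Which steps are erroring out?
    return {e["step"] for e in events if e["status"] == "error"}
```

With each function reporting independently, the per-component observability mentioned in the requirements falls out naturally from the layered design.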
If you enjoyed this issue, share it with a friend!