Serverless computing is a method of developing and deploying cloud services without managing servers. Compared with IaaS (Infrastructure-as-a-Service), serverless computing provides a higher level of abstraction, promising improved developer productivity and lower operational expenses. Its key features are resource elasticity, zero operations, and pay-as-you-go billing.
In recent years, a new wave of cloud services, led by AWS Lambda, has made it possible for developers to build web services directly from code functions. Developers do not have to set up or manage servers: when requests come in, the cloud provider automatically creates an execution environment and scales it with demand. This approach is called Function-as-a-Service (FaaS). According to industry reports, FaaS is already popular among cloud users and could become the dominant form of cloud computing.
AWS Lambda is built on Firecracker, a lightweight virtualization technology. Firecracker runs so-called microVMs, which are smaller and start faster than typical system VMs, making them well suited to FaaS workloads.
IBM Cloud Functions uses Docker containers to provide isolation.
Microsoft Azure Functions supports Docker, but adds VM-level protection when deployed in the public cloud.
Google Cloud Functions chose a middle ground. Its gVisor engine runs containers in a user-space sandbox, providing stronger isolation than plain Docker. The trade-off is performance: gVisor workloads can be around 2x slower than ordinary Docker containers.
Even with performance-optimized system VMs and application containers, FaaS still has performance and scalability difficulties. First, cold starts are slow: setting up and starting a microVM or container can take seconds. This is a critical issue for FaaS, since each function invocation may require creating a new microVM or container. Because a function runs faster once its execution environment is "warmed up", this flaw leads to inconsistent and unpredictable performance.
Second, for each function call the VM or container must set up a full runtime software stack, including operating-system-specific standard libraries. As a result, the footprint is large.
Third, current VM- or container-based FaaS systems charge customers for coarse-grained resource usage, such as allocated memory and execution time. Fine-grained usage billing, such as CPU cycles per function run, is not supported.
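To make the coarse-grained model concrete, here is a minimal sketch of memory-times-duration billing of the kind described above. The rate constant is an illustrative assumption, not a quoted price from any provider.

```rust
// Sketch of coarse-grained FaaS billing: the customer pays for
// allocated memory and wall-clock duration, regardless of how many
// CPU cycles the function actually consumed.
// The rate below is an illustrative assumption, not a real price.
const RATE_PER_GB_SECOND: f64 = 0.0000166667;

fn billed_cost(memory_mb: u64, duration_ms: u64) -> f64 {
    let gb = memory_mb as f64 / 1024.0;
    let seconds = duration_ms as f64 / 1000.0;
    gb * seconds * RATE_PER_GB_SECOND
}

fn main() {
    // A 128 MB function that ran for 250 ms is billed the same
    // whether it was CPU-bound the whole time or mostly idle on I/O.
    let cost = billed_cost(128, 250);
    println!("cost per invocation: ${:.10}", cost);
}
```

Note that an I/O-bound function pays exactly as much as a CPU-bound one of the same duration, which is the granularity problem fine-grained metering addresses.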
WebAssembly supports a variety of source languages, including C/C++ and Rust. The WASI standard lets WebAssembly runtimes access operating system resources such as files, clocks, and environment variables, making WebAssembly suitable for server-side applications.
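As a minimal sketch, the ordinary Rust program below uses only the standard library; compiled with `--target wasm32-wasi`, the same std calls are routed through WASI by runtimes such as Wasmtime. No Planetr-specific API is assumed here.

```rust
use std::env;
use std::io::{self, Read, Write};

// Pure logic kept separate from I/O so it is easy to test.
fn greeting(name: &str, input_len: usize) -> String {
    format!("Hello, {}! ({} bytes of input)", name, input_len)
}

fn main() -> io::Result<()> {
    // std::env and std::io compile unchanged to wasm32-wasi:
    // the WASI runtime supplies the environment and the stdio streams.
    let name = env::var("NAME").unwrap_or_else(|_| "world".to_string());
    let mut input = String::new();
    io::stdin().read_to_string(&mut input)?;
    writeln!(io::stdout(), "{}", greeting(&name, input.len()))
}
```

The same binary runs natively or under any WASI-capable runtime without code changes.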
Functions written in WebAssembly run in a safe, isolated sandbox. Without any code changes, such functions can be started and cancelled on demand across different underlying systems. Because WebAssembly provides abstractions at the opcode level, a runtime can accurately meter fine-grained resource consumption at runtime.
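To illustrate opcode-level metering, the toy stack-machine interpreter below counts every instruction it executes, which is the same idea behind "fuel" metering in WebAssembly runtimes such as Wasmtime. The instruction set and names here are invented for this sketch.

```rust
// Toy stack machine illustrating opcode-level metering: every executed
// instruction is counted, so resource use can be accounted per opcode.
// The instruction set is invented for illustration.
#[derive(Clone, Copy)]
enum Op {
    Push(i64),
    Add,
    Mul,
}

fn run(program: &[Op]) -> (i64, u64) {
    let mut stack = Vec::new();
    let mut fuel_used: u64 = 0; // opcode-level meter
    for op in program {
        fuel_used += 1;
        match *op {
            Op::Push(v) => stack.push(v),
            Op::Add => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a + b);
            }
            Op::Mul => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a * b);
            }
        }
    }
    (stack.pop().unwrap(), fuel_used)
}

fn main() {
    // (2 + 3) * 4 — five opcodes executed in total.
    let program = [Op::Push(2), Op::Push(3), Op::Add, Op::Push(4), Op::Mul];
    let (result, fuel) = run(&program);
    println!("result = {}, opcodes metered = {}", result, fuel);
}
```

A real runtime meters compiled WebAssembly opcodes rather than interpreting them, but the accounting principle is the same.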
WebAssembly's most significant advantages over system VMs, microVMs, and containers are performance and size. WebAssembly-based FaaS is already available on the edge networks of smaller cloud providers such as Fastly and Cloudflare.
At Planetr, we decided to use WebAssembly to enable FaaS on the Planetr network because of the obvious advantages stated above. Planetr supports the Rust programming language for writing functions and provides a bootstrap project to make getting started easy. Support for other programming languages is under active development. The following links will help you get started with Planetr functions in just a few minutes.
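As a rough sketch of what a Rust function can look like, the snippet below reads a request from stdin and writes a response to stdout. This handler shape is an assumption for illustration, not Planetr's actual interface; the bootstrap project defines the real one.

```rust
use std::io::{self, Read, Write};

// Hypothetical function body: the bytes-in/bytes-out-over-stdio shape
// is an assumption for illustration, not Planetr's actual interface.
// The bootstrap project defines the real handler signature.
fn handle(request: &str) -> String {
    request.trim().to_uppercase()
}

fn main() -> io::Result<()> {
    let mut request = String::new();
    io::stdin().read_to_string(&mut request)?;
    write!(io::stdout(), "{}", handle(&request))
}
```

Keeping the handler a pure function of its input makes it straightforward to test locally before deploying.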