Bee is a cloud-native streaming computation platform built specifically for the low-latency product space. Bee provides core capabilities that reduce the compute and communication costs of your distributed computing layer. Unlike products such as Flink, Bee automatically constructs robust compute topologies that minimize routing and bandwidth overhead, enabling low-latency, bandwidth-sensitive products to run in a multi-zone environment. Bee makes all of that happen with a framework that is simple to learn and simple to configure.
Bee was inspired by Apache Flink's stateful functions and motivated by the fact that existing stateful-function APIs and cloud-native lambda frameworks provide horizontal scalability at the expense of latency and bandwidth costs beyond the threshold that low-latency products can tolerate. As a result, latency- and bandwidth-cost-sensitive systems are forced to be manually configured and/or run in the legacy world with various orchestration difficulties. Bee provides a distributed stateful lambda execution ecosystem with a strong exactly-once execution guarantee that is horizontally scalable and has a very small latency and bandwidth footprint.
Most applications have an intricate web of low-latency as well as high-latency lambda pipelines. Bee helps engineers focus on the business logic and worry less about the communication complexities.
As an engineer, SRE, or ops person, you can think of Bee as a part of your app that needs no special deployment or cluster maintenance. This is unlike systems such as Flink, which need a dedicated team to maintain. You simply deploy your app like any other executable, linked with the Bee libraries.
The Bee topology is created automatically from worker profiles to ensure the distribution of lambdas while minimizing communication load, effectively reducing bandwidth consumption and communication latency. Bee does this while providing the necessary redundancy and failover mechanisms.
Statically typed lambdas allow extremely fast lambda resolution within the framework, and Bee is very efficient at facilitating them. Locally, within a machine, a single thread can route tens of millions of lambda calls with ease on commodity hardware. The per-call routing overhead is therefore very small, which makes Bee useful in the low-latency product space.
Bee offers various robust features to enforce host-level affinity for complex lambda pipelines, reducing the latency footprint of high-volume, high-frequency data pipelines. It does this without sacrificing the ability to scale horizontally and vertically.
C++ and Rust are aimed at low-latency workflows, while C# and Go strike a balance for many apps. C++ lambda definitions offer a very fast lambda-call layer with very low overhead, supporting tens of millions of lambda calls per second per thread. The majority of the components, such as networking, routing, caching, and state management, are written in C++.
Native means that no marshalling is involved. The entire lambda execution environment is natively written in the supported languages. The lambdas, the necessary type definitions, and their runtime-dependent components are coded in the respective languages to maximize performance relative to each language runtime's capabilities.
In a Bee hive cluster, you can run worker bees written in different languages (C++, C#, Rust, Go, and Python). While a single worker bee uses only one of those languages, the cluster can mix them. This allows a large enterprise system with a varied tech stack to operate cohesively with a minimal latency footprint.
Copyright © 2022 Yuga LLC - All Rights Reserved.