Bayware is infrastructure as code for interconnecting application microservices wherever you deploy them. Bayware works with any client-side load balancer to create a simpler, faster, more secure service mesh that works with any workload on any cloud with no complex network configuration.
Bayware works even if you haven't yet adopted client-side load balancers or haven't yet mastered Kubernetes. Learn more about migration on our Solutions page. Or go deeper with our documentation. Or try it yourself from the Azure Marketplace.
Company and Solution
Bayware brings everything a DevOps team needs for cross-cloud, cross-cluster, cross-VPC service meshes, end to end, into a single solution owned and operated by DevOps as part of the CI process.
Bayware implements all of this as code. Other service meshes give you part of the infrastructure as code, but leave a lot - IP address management, security, routing, and more - out of the solution. With Bayware, DevOps needs no bolt-on SDN or other legacy networking to deliver.
Bayware is a programmable network microservices architecture that lets you give every application its own secure, isolated overlay network, all in software. It is totally cloud agnostic but is designed from the ground up to cross trust boundaries securely and automatically.
Bayware is a suite of distributed software components: an orchestrator, agents, and processors installed via an Ansible-based Fabric manager. Get more technical via our documentation.
Bayware's service interconnection fabric, including the service graph intent declaration, is truly Infrastructure as Code, and it is uniquely well suited for distributed cloud use cases. It is the key foundation of a multicloud service mesh.
Infrastructure as Code is the framework for provisioning data centers and clouds via code that computers read and execute, rather than configuring physical hardware or using interactive configuration tools on a physical or virtual device.
For an application team, this code should instantiate the infrastructure services that support their application, and the team should create, test, and iterate on this code as part of the continuous integration and deployment (CI/CD) pipeline.
If it is truly infrastructure as code, development and deployment teams (DevOps) can execute this code on any standard compute platform and receive the same services. To get higher performance, they should simply be able to throw more resources - a bigger machine - at it, and to scale, they should simply deploy more instances of the code.
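As a minimal illustration of the idea - not Bayware's own tooling - a Terraform fragment like this one declares compute as code; the provider, AMI ID, and names below are placeholders:

```hcl
# Hypothetical sketch: three identical web VMs declared as code.
resource "aws_instance" "web" {
  count         = 3                      # to scale, deploy more instances of the same code
  ami           = "ami-0abcdef1234567890"  # placeholder image ID
  instance_type = "t3.micro"             # for higher performance, pick a bigger machine type
}
```

The point is that the same declaration runs anywhere the provider is available, and scaling or resizing is a one-line change executed by the pipeline rather than a manual device configuration.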
This has been achieved with virtual machines and now with container technologies for the compute domain. But this remains elusive for the security and networking domain.
The current network and security industry model is to let experts pass device-specific configuration parameters in as code. But this requires code that stays up to date with whatever capabilities the device maker exposes, and requires users to master how to achieve their intent through that device's configurable options.
Service mesh technologies may be infrastructure as code, but they don't reach into networking and security between clouds, between trust domains, or between VPCs and clusters.
That is where Bayware shines.
Bayware's patented Network Microservices are micro-encoded contracts for communication patterns. You program them easily by adding application-specific labels to standard templates, and they can be designed and approved by your networking and security professionals.
A Bayware deployment requires only IP access to Linux hosts to automatically fulfill in software everything DevOps needs for an application's networking and security:
Service meshes are excellent at responding to the movement and replication of microservices. But the underlying data networking solutions are optimized to accommodate many tenants and high throughput, not to change continuously. So a whole raft of tools, many of them virtual network functions (VNFs), has appeared to connect these two ends of the tradeoff, including:
With Bayware, DevOps gets the benefits of all of those network-as-infrastructure tools, but automatically. Some, such as CNI and eBPF, are included; others are replaced by our simple and powerful Network Microservices architecture.
Most importantly, DevOps does not have to acquire, master, and configure any of them. We explain a little more in the video below.
Or jump into the details in our Documentation.
Building applications as microservices was always a good practice. Development is better with focused teams who provide other teams with clean interfaces.
Docker containers made microservices clearly advantageous by greatly simplifying deployment. All libraries and other immediate software dependencies are packaged as a single named object that relies only on Linux. Containers can be copied easily and scaled independently, and they boot in milliseconds. You can even update a single layer of software within a container.
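A generic Dockerfile shows the packaging idea; the file names and base image here are illustrative placeholders, not anything from Bayware:

```dockerfile
# Hypothetical example: package one microservice and its dependencies.
FROM python:3.12-slim                 # relies only on a Linux base image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # libraries baked into their own layer
COPY service.py .                     # updating the app changes only this layer
CMD ["python", "service.py"]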
Docker invented a method, Swarm, for managing a whole herd of these containers. Unfortunately for Docker, Google had already created its own orchestration for its massive web properties and open sourced it as Kubernetes. Kubernetes now dominates microservices orchestration.
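Kubernetes expresses orchestration declaratively. This generic sketch - the names and image are placeholders - asks the orchestrator to keep three replicas of a container running:

```yaml
# Illustrative Kubernetes Deployment: desired state, not step-by-step commands.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # scale by changing one number
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # placeholder image
```

Kubernetes then works continuously to make the cluster's actual state match this declaration, restarting or rescheduling containers as needed.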
Power users of Kubernetes next turned their attention to managing the communications among microservices. The communication stack - from service registry to health checks and load balancing to access rules and routing - remained outside the container framework. Meeting those needs meant configuring external systems - infrastructure systems - to serve the application and its containers.
For a few years, many thought that the only requirement was a container network interface, or CNI, to keep track of the IP addresses of all the containers in play so that infrastructure systems could serve them. So for some time, Calico and its followers were the answer to container infrastructure as code.
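A CNI configuration is itself a small piece of code. This generic example uses the standard `bridge` plugin from the CNI specification; the network name and subnet are placeholders:

```json
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The runtime hands each new container to this plugin, which wires it to the bridge and assigns it an address from the declared subnet.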
Pioneers at Linkerd, and later Envoy, were not satisfied with this continued dependency on external systems and wanted greater control for the DevOps team. They created a sidecar proxy for each container that manages communications with the proxies paired to other containers. The idea of a service mesh was born.
Teams from Kubernetes at Google introduced Istio as a policy orchestrator for these sidecar proxies, working most closely with Envoy, which has become the leading sidecar proxy. The central idea is that each proxy is a client-side load balancer and a proxy server that forms mutual links - tunnels - with the other sidecars based on service names, not IP addresses. So, within a single trust domain, all of the external infrastructure dependencies have been removed.
The Istio project created modules to define and manage the sidecar configurations as well as the service registry, certificate handling, telemetry gathering, and an interface to Kubernetes.
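For example, an Istio route is declared against service names, never pod IPs. In this sketch - the service and subset names are illustrative, in the style of Istio's own samples - 10% of traffic is shifted to a canary version:

```yaml
# Illustrative Istio VirtualService: routing by service name, not IP address.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews              # a service name from the registry
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2       # canary subset, defined in a DestinationRule
      weight: 10
```

The sidecars enforce this policy wherever the pods land, so the route survives rescheduling and scaling without any IP-level reconfiguration.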
So, in a single cluster, a service mesh enables DevOps to be entirely self-sufficient: to continuously test, deploy, scale, and optimize the entire application deployment.
Service mesh is a great leap forward for focusing resources on applications rather than infrastructure. You can now find application-level service meshes not just from Istio, but also from HashiCorp, NGINX, VMware, and AWS.
There's really just one major problem with all of these: what if you have multiple clusters and need to deploy across multiple trust domains, as in hybrid cloud or edge for IoT?
As described by Istio and VMware, for example, the answer is that you have to reintroduce classic networking - both SDN and configuration of each cloud's networking frameworks - in order to complete the solution.
Enterprises need to run applications in increasingly distributed environments.
Dev teams need to define their application's communication intent and security policy at the application level, not specific to each cloud, data center, or edge data center infrastructure they may need to use.
What DevOps needs is an application-defined architecture that delivers agility and programmability all the way through the infrastructure services in highly distributed use cases, without sacrificing security in any manner.
Bayware is the breakthrough technology that makes application-defined networking and security possible by capturing application communication intent as a simple service graph based entirely on the application topology - in a manner that is entirely infrastructure agnostic.
See it in action here.
Bayware is all you expect in a service mesh. Bayware works with any client-side load balancer to create a service mesh that is unaffected by crossing clusters, or VPCs or cloud trust boundaries.
As with Istio, Bayware is based on service names rather than IP addresses, which gives it some powerfully positive attributes:
When you cross trust boundaries with today's service meshes, you lose the central benefit of self-sufficiency. You become dependent on external infrastructure systems.
You can see in this stack how much DevOps has to rely on traditional infrastructure teams to make a secure system work end to end. You can also see that a CNI, by itself, is not enough either.
You can see how Bayware's all-in-one model keeps DevOps self-sufficient. Bayware delivers on the central benefit of a service mesh when you cross trust domains - that is, across clusters, VPCs, or clouds.
Bayware also provides an application-centric service to service interconnection fabric as code even if:
Go deeper on how we do this in our documentation.
Bayware is simply better for distributed use cases than standard service meshes. Bayware has a unique control-plane-only architecture that enables it to deliver a complete solution for cross-cloud, cross-VPC, cross-cluster networking and security as part of the service mesh, rather than having to bolt on SDNs and additional security products.
Bayware relies only on Linux as a foundation, so it works the same everywhere. Bayware captures your application's communication intent as code, as a simple service graph based on your application topology. Bayware automatically creates cloud-native telemetry end to end.
That means DevOps gets outstanding cross-cloud networking and security without having to master networking.
Bayware is a complete, simple, and powerful service mesh that you can acquire from the AWS or Azure marketplaces and use across any private or public cloud.
Bayware is all code that DevOps maintains as part of its CI process. So Bayware enables DevOps to be self-sufficient while still collaborating with security and networking teams to make sure their requirements are included in the code. Rather than exchanging requirements documents, DevOps, SecOps, and NetOps collaborate on the code. That's how agile works.
Unified and instant connectivity across public and private clouds and different workload types – containers, VMs, Linux servers
A single, all-software, auto-discover/register, multi-cloud, DevOps and AI ready solution that is easily programmable down to the flow level using a library of common patterns
Application isolation, pervasive IP security, and policy programmability and verifiability down to the role and flow levels
Watch the Bayware Product Demo video above to see how seamlessly Bayware works across clouds - enabling an application across three clouds. Bayware radically simplifies service-to-service communications provisioning in three steps.
And watch how easily Bayware adapts to changing conditions and events across the mesh automatically and with full, cloud-native visibility.
Bayware makes it simple.
See it for yourself in a test drive, or by using it with no Bayware license fees from the AWS or Azure Marketplace.
Get the technical details in our Product Documentation.
Create new services at a very high rate of speed—and change them—so your business can keep pace in a continuous development world.