A secure, responsive, multi-cloud
service mesh for every application
 

BAYWARE MULTICLOUD SERVICE MESH

Bayware is infrastructure as code for interconnecting application microservices wherever you deploy them. Bayware works with any client-side load balancer to create a simpler, faster, more secure service mesh for any workload on any cloud, with no complex network configuration.

Migrate to the cloud and microservices faster, simpler, and more securely

Bayware works even if you haven't yet adopted client-side load balancers or haven't yet mastered Kubernetes. Learn more about migration on our Solutions page, go deeper with our documentation, or try it yourself from the Azure Marketplace.

Company and Solution

Additional Documentation on Request

  • Product data sheet
  • PoC Overview including roles and success criteria
  • Deployment guide with FAQs

 

Make DevOps Self-Sufficient

Bayware brings everything a DevOps team needs for cross-cloud, cross-cluster, cross-VPC service meshes, end to end, into a single solution owned and operated by DevOps as part of the CI process.

 

Get the Full Benefits of Infrastructure as Code

Bayware implements all of this as code. Other service meshes give you part of the infrastructure as code, but leave a lot - IP address management, security, routing, and more - out of the solution. With Bayware, DevOps needs no bolt-on SDN or other legacy networking to deliver.

 SERVICE INTERCONNECTION AS A SERVICE MESH FOUNDATION

For Hybrid Cloud or Any Cloud

Bayware is a programmable network microservices architecture that enables you to give every application its own secure, isolated overlay network, all in software. It is totally cloud agnostic, but is designed from the ground up to cross trust boundaries securely and automatically.

Bayware is a suite of distributed software components: an orchestrator, agents, and processors installed via an Ansible-based Fabric manager. Get more technical via our documentation.

Bayware's service interconnection fabric, including the service graph intent declaration, is truly infrastructure as code, uniquely well suited to distributed cloud use cases. And it is the key foundation of a multicloud service mesh.

 

Definition of Infrastructure as Code

Infrastructure as Code is a framework for provisioning data centers and clouds via code that computers read and execute, rather than configuring physical hardware or using the interactive configuration tools of a physical or virtual device.

For an application team, this code should instantiate the infrastructure services to support their application, and they should create, test, and iterate on this code as part of the continuous (CI) development and deployment pipeline.
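As a minimal, hypothetical illustration of that idea (the spec format, resource names, and provision helper below are invented for this sketch, not any particular vendor's API), infrastructure as code boils down to a declarative description that a CI job reads and applies on every commit:

    # Hypothetical illustration of infrastructure as code: a declarative spec
    # that a CI job reads and applies. All names here are invented for the sketch.
    DESIRED_STATE = {
        "network": {"name": "orders-net", "cidr": "10.20.0.0/16"},
        "services": [
            {"name": "orders-api", "replicas": 3, "port": 8080},
            {"name": "orders-db", "replicas": 1, "port": 5432},
        ],
    }

    def provision(spec: dict) -> list:
        """Walk the declarative spec and emit one idempotent action per resource.

        A real provisioner would call a cloud or orchestrator API here; this
        sketch only records what it would do so the pipeline can log or test it.
        """
        net = spec["network"]
        actions = [f"ensure network {net['name']} ({net['cidr']})"]
        for svc in spec["services"]:
            actions.append(f"ensure service {svc['name']} x{svc['replicas']} on port {svc['port']}")
        return actions

    if __name__ == "__main__":
        for action in provision(DESIRED_STATE):
            print(action)

Because the same code converges to the same state wherever it runs, scaling is a matter of running more instances of it.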

 

Infrastructure-as-Code benefits are still lacking for networking and security

If it is really infrastructure as code, development and deployment teams, DevOps, can execute this code on any standard compute platform and receive the same services. To get higher performance, they should simply be able to throw more resources - a bigger machine - at it, and to scale they should simply deploy more instances of the code.

This has been achieved with virtual machines and now with container technologies for the compute domain. But this remains elusive for the security and networking domain.  

The current network and security industry model is to enable experts to pass in device-specific configuration parameters as code. But this requires code that is up to date with whatever capabilities the device maker has made available, and requires users to master how to achieve their intent via the configurables on that device.

Service mesh technologies may be infrastructure as code, but they don't reach into networking and security between clouds, between trust domains, between VPCs and clusters.

That is where Bayware shines.

 

Service Interconnection Fabric is Infrastructure as Code

Bayware's patented Network Microservices are micro-encoded contracts for communication patterns that you program simply by adding application-specific labels to standard templates, and that your networking and security professionals can design and approve. A hypothetical sketch of one such contract follows the list below.

A Bayware deployment requires only IP access to Linux hosts to automatically fulfill in software everything DevOps needs for an application's networking and security:

  • service authorization & name resolution without separate service discovery system
  • address translation without globally-unique IP address management
  • endpoint protocol filtering without iptables management
  • flow-level microsegmentation without subnets, VLANs, or VRFs
  • perimeter security without firewall ACLs
  • policy-based routing without mastering routing protocols
  • link encryption & VPN tunneling without special gateways
  • observability & telemetry without complex correlations

 

Replace Many Infrastructure-as-Code Tools with One 

Service meshes are excellent at responding to the movement and replication of microservices. But the underlying data networking solutions are optimized to accommodate many tenants and high throughput, not to change continuously. So a whole raft of tools, including many virtual network functions (VNFs), has appeared to bridge these two ends of the tradeoff:

  • Container Network Interfaces (CNI)
  • IPTable automation for endpoint filtering, including with eBPF
  • Microsegmentation and other application aware firewall solutions
  • VPN gateways and Ingress/Egress Servers
  • Virtual routers and transit gateways
  • Programmable Network Operating Systems and Network switches
  • Complex labeling solutions within Software Defined Networking management systems.

With Bayware, DevOps gets the benefits of all of those network-as-infrastructure tools, but automatically. Some, such as CNI and eBPF, are included; others are replaced by our simple and powerful Network Microservices architecture.

Most importantly, DevOps does not have to acquire, master, and configure any of them. We explain a little more in the video below.

Or jump into the details in our Documentation.

WHY A MULTICLOUD SERVICE MESH

Service Mesh, Kubernetes and Istio / Envoy 

A brief history

Building applications as microservices was always a good practice. Development is better with focused teams who provide other teams with clean interfaces. 

Docker containers made microservices clearly advantageous by greatly simplifying deployment. All libraries and other immediately dependent software are packaged as a single named object that relies only on Linux. They can be copied easily and scaled independently as they boot up in milliseconds. You can update even a single layer of software within a container. 

Docker invented a method, Swarm, for managing a whole herd of these containers. Unfortunately for Docker, Google had already created its own orchestration for its massive web properties, and open sourced it as Kubernetes. Kubernetes dominates microservices orchestration.

 

Service Mesh Originated with Kubernetes

Power users of Kubernetes next turned their attention to managing the communications among microservices. The communication stack - from service registry to health checks, load balancing, access rules, and routing - remained outside the container framework. These functions require configuring external systems - infrastructure systems - to meet the needs of the application and its containers.

For a few years, many thought that the only requirement was a container network interface, or CNI, to keep track of the IP addresses of all the containers in play so that infrastructure systems could serve them. So, for some time, Calico and its followers were the answer to container infrastructure as code.

Pioneers at Linkerd and later Envoy were not satisfied with this continued dependency on external systems and wanted greater control by the DevOps team. They created a sidecar proxy for each container that manages communications with the proxies paired to the other containers. The idea of a service mesh was born.

 

Istio and Envoy Mainstreamed Service Mesh for DevOps

Teams from Kubernetes at Google introduced Istio as a policy orchestrator for these sidecar proxies. They worked most closely with Envoy, which has become the leading sidecar proxy. The central idea is that each sidecar is a client-side load balancer and a proxy server that forms mutual, tunneled links with other sidecars based on service names, not IP addresses. So, in a single trust domain, all of the external infrastructure dependencies have been removed.
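A conceptual sketch of that name-based, client-side load-balancing idea is below. The registry and the SidecarProxy class are invented for illustration only; this is not the Envoy or Istio API.

    import itertools

    # Conceptual sketch: callers address a *service name*; the sidecar picks a
    # live endpoint from whatever is currently registered. Invented names only.
    REGISTRY = {
        "orders-api": ["10.0.1.5:8080", "10.0.2.9:8080", "10.0.3.3:8080"],
    }

    class SidecarProxy:
        def __init__(self, registry):
            # One round-robin iterator per service name.
            self._cycles = {name: itertools.cycle(eps) for name, eps in registry.items()}

        def route(self, service_name):
            """Return the endpoint the next request for this service should use."""
            return next(self._cycles[service_name])

    proxy = SidecarProxy(REGISTRY)
    for _ in range(4):
        print(proxy.route("orders-api"))   # rotates through the registered endpoints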

The Istio project created modules to define and manage the sidecar configurations, as well as the service registry, certificate handling, telemetry gathering, and an interface to Kubernetes.

So, in a single cluster, a service mesh enables DevOps to be entirely self-sufficient: the team can continuously test, deploy, scale, and optimize the entire application deployment.

 

Limitations of a Standard Service Mesh

Service mesh is a great leap forward for focusing resources on applications rather than infrastructure. You can now find application-level service meshes not just from Istio, but also from HashiCorp, NGINX, VMware, and AWS.

There's really just one major problem with all of these. What if you have multiple clusters and need to deploy across multiple trust domains, as in hybrid cloud or edge for IoT?

As described by Istio and VMware, for example, the answer is that you have to reintroduce classic networking - both SDN and configuration of each cloud's networking frameworks - in order to complete the solution.

VMware-Istio Hybrid Cloud

 

 

Service Mesh for Distributed Applications

Enterprises need to run applications in increasingly distributed environments.

  • Private-public cloud hybrids including shared data center exchanges.
  • Availability zones close to customers with geographic redundancy.
  • Failover from one cloud, one VPC, or one cluster to another.
  • Use of edge data centers for IoT.

Defining an application's communication intent

Dev teams need to define their application's communication intent and security policy at the application level, not specific to each cloud, data center, or edge data center infrastructure they may need to use.

What DevOps needs is an application-defined architecture that delivers agility and programmability all the way through the infrastructure services in highly distributed use cases, without sacrificing security in any manner.

 

Bayware: Communication intent as an application service graph 

Bayware is the breakthrough technology that makes application-defined networking and security possible by capturing application communication intent as a simple service graph based entirely on the application topology - in a manner that is completely infrastructure agnostic.
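As a hypothetical sketch of what such a topology-based service graph could look like (the structure and names are invented for illustration and are not Bayware's actual intent format), note that nothing below mentions an IP address, subnet, or cloud:

    # Hypothetical service graph: nodes are application roles, edges are the only
    # allowed communication patterns. No clouds, subnets, or IPs appear, so the
    # same graph can be deployed anywhere. Structure invented for this sketch.
    SERVICE_GRAPH = {
        "nodes": ["web", "orders-api", "orders-db", "analytics"],
        "edges": [
            {"from": "web", "to": "orders-api", "pattern": "request-response"},
            {"from": "orders-api", "to": "orders-db", "pattern": "request-response"},
            {"from": "orders-api", "to": "analytics", "pattern": "publish-subscribe"},
        ],
    }

    def is_allowed(graph: dict, src: str, dst: str) -> bool:
        """Default-deny: communication is allowed only if the graph declares it."""
        return any(e["from"] == src and e["to"] == dst for e in graph["edges"])

    assert is_allowed(SERVICE_GRAPH, "web", "orders-api")
    assert not is_allowed(SERVICE_GRAPH, "web", "orders-db")   # never declared, so denied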

See it in action here

 

Bayware plus a proxy creates a complete cross-cloud service mesh 

Bayware is all you expect in a service mesh. It works with any client-side load balancer to create a service mesh that is unaffected by crossing clusters, VPCs, or cloud trust boundaries.

As with Istio, Bayware is based on service names rather than IP addresses, which gives it some powerfully positive attributes:

  • Application specific - no coordination or dependencies on other departments.
  • Cloud agnostic - it’s just code, so it should not matter where you deploy it
  • Responsive - it automatically adjusts when you add, move, or delete microservices
  • All in one place - communication and security policies are in one system
  • Observable - the same telemetry you use for your application works to track the mesh

 

Comparison with Istio in a hybrid-cloud use case

When you cross trust boundaries with today's service meshes, you lose the central benefit of self-sufficiency. You become dependent on external infrastructure systems. 

You can see in this stack how much DevOps has to rely on traditional infrastructure teams to make a secure system work end to end. You can also see that a CNI by itself is not enough either.

Standard vs Bayware service mesh 

You can see how Bayware's all-in-one model keeps DevOps self-sufficient. Bayware delivers on the central benefit of a service mesh when you cross trust domains - that is, across clusters, VPCs, or clouds.

 

Even if you're not ready for Kubernetes and Envoy

Bayware also provides an application-centric, service-to-service interconnection fabric as code even if:

  • you're not ready for client-side load balancers
  • you haven't adopted containers or Kubernetes
  • your goal is simply to streamline deployment across VPCs or clusters in the same cloud

Go deeper on how we do this in our documentation.

 

Service Mesh Comparison

Bayware is simply better for distributed use cases than standard service meshes. Its unique control-plane-only architecture enables it to deliver a complete solution for cross-cloud, cross-VPC, cross-cluster networking and security as part of the service mesh, rather than bolting on SDNs and additional security products.

Bayware relies only on Linux as a foundation, so it works the same everywhere. Bayware captures your application's communication intent as code, as a simple service graph based on your application topology. Bayware automatically creates cloud-native telemetry end to end.

Bayware Tech advantages

 

That means DevOps gets outstanding cross-cloud networking and security without having to master networking. 

  • Eliminates complex configuration
  • Works the same across every cloud
  • Works if you’re on Kubernetes or not
  • Natively supports distributed services
  • Is all code in your CI/CD pipeline

Bayware is a complete, simple, and powerful service mesh that you can acquire from the AWS or Azure marketplaces and use across any private or public cloud.

Watch us securely deploy a multicloud application with no network configuration here. Or dive deeper into our Documentation.

 

 

Bayware enables DevOps Agility across teams

Give Security and Networking Teams Visibility into your Service Mesh

Bayware is all code that DevOps maintains as part of its CI process. So Bayware enables DevOps to be self-sufficient while still collaborating with Security and Networking teams to make sure their requirements are included in the code. Rather than exchanging requirements documents, DevOps, SecOps, and NetOps can collaborate on the code. That's how agile works.
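One hedged sketch of what that collaboration could look like in practice: SecOps expresses its requirements as assertions that run against the DevOps-owned service graph on every commit. The names and structure below are invented for illustration.

    # Hypothetical CI check: security requirements encoded as code that gates the
    # pipeline. Names and graph structure are invented for this sketch.
    APPROVED_PATTERNS = {"request-response", "publish-subscribe"}   # agreed with SecOps

    def check_graph(graph: dict) -> list:
        """Return policy violations; an empty list means the build may proceed."""
        violations = []
        for edge in graph["edges"]:
            if edge["pattern"] not in APPROVED_PATTERNS:
                violations.append(f"{edge['from']} -> {edge['to']}: unapproved pattern '{edge['pattern']}'")
        return violations

    example_graph = {
        "edges": [
            {"from": "web", "to": "orders-api", "pattern": "request-response"},
            {"from": "orders-api", "to": "legacy-ftp", "pattern": "plain-file-drop"},
        ],
    }
    print(check_graph(example_graph))   # flags the unapproved 'plain-file-drop' edge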

 


Agility

Unified and instant connectivity across public and private clouds and different workload types – containers, VMs, Linux servers

Programmability

A single, all-software, auto-discover/register, multi-cloud, DevOps and AI ready solution that is easily programmable down to the flow level using a library of common patterns


Security

Application isolation, pervasive IP security, and policy programmability and verifiability down to the role and flow levels

BAYWARE: SERVICE MESH MADE MORE COMPLETE

Why a Service Mesh at Layer 3

Watch the Bayware Product Demo video above to see how seamlessly Bayware works across clouds - enabling an application across three clouds. Bayware radically simplifies service-to-service communications provisioning in three steps.

And watch how easily Bayware adapts to changing conditions and events across the mesh automatically and with full, cloud-native visibility.

 

Modern applications need dynamic microservices communications 

Bayware makes it simple.

  1. Program application intent and network policy into a service graph simply by adding application labels to contract templates
  2. Deploy Bayware software components alongside your application workloads in any Linux infrastructure
  3. Run securely by having Kubernetes or your Ansible orchestration system use tokens to authorize workloads on the mesh (see the sketch after these steps)
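As a conceptual illustration of step 3 (the token format, shared key, and helper functions below are invented for this sketch and are not Bayware's actual authorization flow), the orchestrator hands each workload a signed token naming its role, and the fabric admits the workload only if the signature verifies and the role appears in the service graph:

    import hashlib
    import hmac

    # Conceptual sketch of token-based workload admission. Key handling and token
    # format are invented for illustration only.
    FABRIC_KEY = b"demo-shared-secret"   # a real deployment would use proper key material

    def issue_token(role: str) -> str:
        sig = hmac.new(FABRIC_KEY, role.encode(), hashlib.sha256).hexdigest()
        return f"{role}.{sig}"

    def admit(token: str, graph_roles: set) -> bool:
        """Admit a workload onto the mesh only for a role declared in the graph."""
        role, _, sig = token.partition(".")
        expected = hmac.new(FABRIC_KEY, role.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and role in graph_roles

    token = issue_token("orders-api")                        # handed to the pod or VM at start-up
    print(admit(token, {"web", "orders-api", "orders-db"}))  # True
    print(admit("web.forged-signature", {"web"}))            # False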

See it for yourself in a test drive, or use it with no Bayware license fees from the AWS or Azure Marketplace.

Get the technical details in our Product Documentation.

Test drive Bayware to experience application migration made simple.

BAYWARE FEATURES

Create new services quickly - and change them just as quickly - so your business can keep pace in a continuous-development world.

  • Connectivity and policy programmability via Network Microservices
  • Zero-touch workload attachment with automatic link encryption
  • Crypto-based workload identity, decoupled from network locators
  • Role-based network policy
  • Zero-trust networking
  • Network services entirely abstracted from the underlay providers
  • Near-real-time policy update and enforcement