How to abstract outbound source IP addresses of applications across different VPCs, VNets, regions, or clouds
Many enterprise applications must make API calls to services hosted outside their network by a third party, such as a customer or partner organization. In this model, the receiving network often must whitelist the source IP addresses that are allowed to call its services. Whitelisting IPs is often a slow and painful process. To reduce this friction, network admins need to shrink the set of IP addresses that must be whitelisted to just one or two, and ensure those addresses never change.
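To make the partner's side of this concrete, here is a minimal sketch of what an IP whitelist check looks like for a receiving service. The addresses are illustrative placeholders, not real gateway IPs; the point is that every entry a partner must maintain adds coordination overhead, which is why collapsing egress traffic behind one or two IPs matters:

```python
import ipaddress

# Illustrative whitelist entries -- each one the partner adds is friction.
WHITELIST = [
    ipaddress.ip_network("203.0.113.10/32"),  # primary egress IP (placeholder)
    ipaddress.ip_network("203.0.113.11/32"),  # backup egress IP (placeholder)
]

def is_allowed(source_ip: str) -> bool:
    """Return True if the caller's source IP is on the whitelist."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in WHITELIST)

print(is_allowed("203.0.113.10"))  # True
print(is_allowed("198.51.100.7"))  # False
```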
In the data center, accomplishing this was much easier because public IP addresses were largely static. In the public cloud, the flexibility we have in application deployment makes it nearly impossible to guarantee that traffic always comes from a single IP address. A public IP address is assigned to a single instance, so traffic coming from different instances carries different public IPs.
We can leverage NAT gateways for a certain degree of flexibility. Still, cloud providers' NAT gateways, notably AWS's, are tied to a single VPC and cannot share their IP addresses across regions or VPCs. If the requesting servers are spread across many VPCs or regions, controlling which IP a request comes from is difficult or even impossible. Another critical issue with the built-in NAT services is their lack of visibility and operational features, which makes managing and troubleshooting connections through such constructs complicated, particularly when working with external networks.
Our ideal approach is to have one or two IP addresses that can float across our resources regardless of the region, VPC/VNet, or even the cloud they are in. In effect, we are abstracting all of our public cloud networks behind a single IP address for a given partner, or across different partners. Think of it as placing the entire network behind a giant cloud with one or two IPs in front of it.
The Aviatrix multi-cloud networking platform offers a simple and scalable way to set up this architecture in an optimal fashion. Our application components can be hosted not only across different VPCs but also across different regions or clouds.
In this architecture, all internet-bound traffic is routed through the Aviatrix Transit service and egresses from the centralized gateways that sit inside the Transit FireNet VPC.
The Aviatrix Controller automatically understands the topology and maintains the appropriate routes across the VPCs. With an industry-leading software-defined approach, the Controller programs the routes necessary to make the network operate as defined. When traffic is initiated from any of the apps inside the spoke VPCs, the Controller's end-to-end route management steers it to the centralized Transit FireNet egress gateways in the transit VPC. There the traffic is source NAT'ed and sent over the internet to the partner services.
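The source NAT step can be sketched conceptually as follows. Real gateways perform this in the data path; this toy model only tracks the translation table, to show why flows from apps in different spoke VPCs all leave with the same public source IP. The gateway IP and port range are illustrative placeholders:

```python
from typing import Dict, Tuple

GATEWAY_PUBLIC_IP = "203.0.113.10"  # illustrative egress IP, not a real one

# (private_src_ip, src_port) -> translated source port
nat_table: Dict[Tuple[str, int], int] = {}
next_port = 40000  # illustrative start of the translated port range

def snat(private_ip: str, src_port: int) -> Tuple[str, int]:
    """Rewrite a flow's source to the gateway IP plus a unique port."""
    global next_port
    key = (private_ip, src_port)
    if key not in nat_table:       # allocate a port for a new flow
        nat_table[key] = next_port
        next_port += 1
    return GATEWAY_PUBLIC_IP, nat_table[key]

# Apps in two different spoke VPCs both appear as 203.0.113.10 outside.
print(snat("10.1.0.5", 51000))   # ('203.0.113.10', 40000)
print(snat("10.2.0.9", 51000))   # ('203.0.113.10', 40001)
```

The per-flow port keeps return traffic distinguishable even though every outbound connection shares one public IP.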
High Availability and Load Balancing
To increase availability and performance, we can leverage cross-AZ deployment in the transit layer. This model ensures fast failover if there is a failure in the cloud provider's compute layer where a gateway runs. With that, we can also enable active/active load balancing to increase throughput. However, deploying two different Aviatrix gateways means traffic will be sourced from two different public IPs, in which case we need to pass two IPs to the partner for whitelisting.
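One common way active/active egress is implemented (a general technique, not necessarily Aviatrix's exact mechanism) is to hash each flow's 5-tuple to pick a gateway, so a given connection keeps a stable source IP while different flows spread across both gateways. This sketch, with placeholder gateway IPs, shows why the partner must whitelist both addresses:

```python
import hashlib

# Both egress gateway IPs must be whitelisted by the partner (placeholders).
GATEWAY_IPS = ["203.0.113.10", "203.0.113.11"]

def pick_gateway(src_ip: str, src_port: int,
                 dst_ip: str, dst_port: int, proto: str = "tcp") -> str:
    """Deterministically map a flow's 5-tuple to one egress gateway."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(flow).digest()
    return GATEWAY_IPS[digest[0] % len(GATEWAY_IPS)]

# The same flow always exits through the same gateway IP, so established
# connections survive load balancing; distinct flows may use either IP.
gw = pick_gateway("10.1.0.5", 51000, "198.51.100.20", 443)
assert gw == pick_gateway("10.1.0.5", 51000, "198.51.100.20", 443)
```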
To summarize, here are the advantages of this solution:
· Centralized control plane that simplifies management and operations
· Flexible interfaces: web UI, API, and Terraform
· Scalable and elastic architecture
· Multi-VPC, multi-region, and multi-cloud support
· Day-2 operational capabilities, with traffic-level visibility and troubleshooting
This architecture is supported in Aviatrix version 6.0 and above.
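For teams driving this through Terraform, one of the interfaces mentioned above, a transit gateway definition might look roughly like the following. This is an illustrative sketch only: the resource comes from the public Aviatrix Terraform provider, but the attribute names are from memory and every value is a placeholder, so verify against the provider documentation for your Controller version.

```hcl
# Sketch of a Transit FireNet-enabled transit gateway (all values placeholders).
resource "aviatrix_transit_gateway" "egress_transit" {
  cloud_type             = 1                          # 1 = AWS in the provider's encoding
  account_name           = "aws-account"              # Aviatrix access account name
  gw_name                = "transit-firenet-gw"
  vpc_id                 = "vpc-0123456789abcdef0"
  vpc_reg                = "us-east-1"
  gw_size                = "c5.xlarge"
  subnet                 = "10.0.1.0/24"
  enable_transit_firenet = true                       # egress through FireNet
}
```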