Using Aviatrix as a layer for storage replication between multiple clouds

  • 26 July 2023
  • 5 replies
  • 81 views

Userlevel 1
Badge +4

Out of curiosity, can Aviatrix be used as a layer for storage replication between multiple clouds?


5 replies

Userlevel 2
Badge +1

Hi @MohammedBanabila!

Following your question, do you have any particular scenario in mind? Could you please share a potential example with us?

 

Thanks,

Nico

Userlevel 1
Badge +4

Just for learning purposes, and as a proof of concept. My scenario would be to use Aviatrix as a layer that communicates with multiple cloud providers, using API calls to the storage services to replicate data. Meaning, Aviatrix would have the role of controlling and managing it: it could create an identity associated with each cloud provider, as source and destination, and grant permissions (for example, upload or download) along with replicating data between those storages.

Userlevel 2
Badge +1

Following your clarification, and to check my understanding, we should consider two potential generic scenarios:

  1. For upload/download tasks initiated by an EC2 instance or VM, we could potentially leverage the Aviatrix Cloud Network Backbone to steer those data flows to the various storage private endpoints (e.g. Amazon S3 Access Points or Azure Private Endpoints). This needs to be tailored and tested for each specific scenario; see the first sketch after this list.
  2. For CSP-native storage features such as AWS S3 replication or Azure Blob replication, unfortunately, we cannot force that data to flow through the Aviatrix Cloud Network Backbone, simply because the CSP itself issues the API calls directly against its internal storage endpoints: that traffic traverses the CSP's own backbone.
    Another interesting service is the GCP Storage Transfer Service, capable of syncing data from Amazon, Azure, on-premises, and GCP itself into a Google Cloud Storage bucket. In this particular scenario, GCP spins up a VM (as a managed service) that copies/syncs data from the different CSPs to the selected destination bucket; the API calls are addressed directly to the public IP address of the source CSP service (such as AWS or Azure), and the data is transferred internally to the selected Google bucket over the Google backbone network. The second sketch after this list shows how such a transfer job can be created.
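As a rough illustration of the first scenario (purely a sketch: the access point ARN, region, file name, and object key below are placeholders, not from a real deployment), an EC2 instance or VM can upload through an S3 Access Point with boto3 by passing the access point ARN wherever a bucket name is expected. Note that whether the traffic actually rides the Aviatrix backbone depends on your routing and DNS setup, not on the SDK call itself:

```python
import boto3

# Hypothetical access point ARN, for illustration only.
ACCESS_POINT_ARN = "arn:aws:s3:eu-west-1:123456789012:accesspoint/demo-ap"

# boto3 accepts an S3 Access Point ARN in place of a bucket name.
# If the instance's route to the access point goes through a private
# endpoint, this request stays on the private network path.
s3 = boto3.client("s3", region_name="eu-west-1")
s3.upload_file(
    Filename="backup.tar.gz",
    Bucket=ACCESS_POINT_ARN,
    Key="backups/backup.tar.gz",
)
```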
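And for the second scenario, here is a minimal sketch of creating a one-time Storage Transfer Service job with the google-cloud-storage-transfer Python client. The project ID, bucket names, and AWS credentials are all placeholders; in practice the AWS key needs read access to the source bucket:

```python
from datetime import datetime, timezone

from google.cloud import storage_transfer

# Placeholder identifiers; swap in real project/bucket names.
PROJECT_ID = "my-gcp-project"
SOURCE_S3_BUCKET = "my-aws-source-bucket"
SINK_GCS_BUCKET = "my-gcs-destination-bucket"

client = storage_transfer.StorageTransferServiceClient()

today = datetime.now(timezone.utc)
run_date = {"day": today.day, "month": today.month, "year": today.year}

# A one-time job that copies the S3 bucket into the GCS bucket. The
# managed transfer workers call the AWS public endpoints directly, so
# this traffic rides the CSP backbones, not a transit overlay.
job = client.create_transfer_job(
    storage_transfer.CreateTransferJobRequest(
        transfer_job={
            "project_id": PROJECT_ID,
            "description": "One-time S3 -> GCS sync (demo)",
            "status": storage_transfer.TransferJob.Status.ENABLED,
            "schedule": {
                "schedule_start_date": run_date,
                "schedule_end_date": run_date,
            },
            "transfer_spec": {
                "aws_s3_data_source": {
                    "bucket_name": SOURCE_S3_BUCKET,
                    "aws_access_key": {
                        "access_key_id": "AKIA...",       # placeholder
                        "secret_access_key": "REDACTED",  # placeholder
                    },
                },
                "gcs_data_sink": {"bucket_name": SINK_GCS_BUCKET},
            },
        }
    )
)
print(f"Created transfer job: {job.name}")
```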

I hope the above points satisfy your curiosity; if not, please ping me back!

 

Cheers,

Nico

Userlevel 1
Badge +4

If I understand the second option correctly, I could let the GCP Storage Transfer Service act as a hub, with the other CSPs as spokes to sync from, and define a lifecycle for the storage.


Thanks, Domenico Marino, for your explanation.

Userlevel 2
Badge +1

Following your last comment, and to confirm your understanding: yes, you can define Lifecycle Management rules after importing objects (from Google Cloud, Amazon, Azure, or on-premises) into a GCP bucket.
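As a quick sketch (the bucket name is hypothetical, and this assumes an authenticated google-cloud-storage client), lifecycle rules on the destination bucket could be set like this:

```python
from google.cloud import storage

# Hypothetical bucket name; assumes the client is authenticated
# against the project that owns the bucket.
client = storage.Client()
bucket = client.get_bucket("my-gcs-destination-bucket")

# Move objects to Coldline after 30 days, delete them after 365.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()  # persist the updated lifecycle configuration
```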

 

Cheers,

Nico
