
Hyper-V Amigos Showcast – Episode 20 – Windows Server 2019 as Veeam Backup Target Part II

Hello there! Good to see you are back for more Hyper-V Amigo goodness. In episode 20 of the Hyper-V Amigos ShowCast, we continue our journey through the different ways in which we can use Storage Spaces in backup targets. In our previous "Hyper-V Amigos ShowCast (Episode 19) – Windows Server 2019 as Veeam Backup Target Part I" we looked at stand-alone or member servers with Storage Spaces, with both direct-attached storage and SMB file shares as backup targets. We also played with Multi Resilient Volumes.

For this webcast, we have one 2-node S2D cluster set up for the Hyper-V workload. On a second 2-node S2D cluster, we host two SOFS file shares, each on its own CSV LUN. SOFS on S2D is supported for backup and archival workloads. And as it is SMB3 and we have RDMA-capable NICs, we can leverage RDMA (RoCE, Mellanox ConnectX-5) to benefit from CPU offloading and superb throughput at ultra-low latency.
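
For reference, setting up such a continuously available SOFS share on an S2D CSV boils down to something like the following sketch (the role name, share path and service account are placeholders, so adjust them to your environment):

```powershell
# On the S2D cluster that hosts the backup shares (names and paths are placeholders)
Add-ClusterScaleOutFileServerRole -Name "SOFS-BCK"

# One share per CSV volume; grant the Veeam service account access and keep the share continuously available
New-SmbShare -Name "VeeamRepo1" -Path "C:\ClusterStorage\Volume1\Backups" `
    -FullAccess "DOMAIN\VeeamSvc" -ContinuouslyAvailable $true

# On the SMB client side, verify that SMB Direct (RDMA over the ConnectX-5 NICs) is actually in play
Get-NetAdapterRdma
Get-SmbMultichannelConnection   # ClientRdmaCapable should read True for the backup traffic
```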

The General Purpose File Server (GPFS) role is not supported on S2D for now. You can use GPFS with shared storage and in combination with continuous availability. That also performs well as a highly available backup target. The benefit there is that it is cost-effective (Windows Server Standard licenses will do) and you get to use the shared storage of your choice. But in this ShowCast, we focus on the S2D scenario and we did not build an unsupported one.
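
For completeness, a minimal sketch of that alternative: a clustered general-purpose file server role on top of shared storage with a continuously available share (the role name, cluster disk, IP address and path are placeholders, not our lab configuration):

```powershell
# Classic failover cluster with shared storage (not S2D); names, disk and IP are placeholders
Add-ClusterFileServerRole -Name "FS-BCK" -Storage "Cluster Disk 2" -StaticAddress "10.10.10.50"

# Continuous availability gives transparent failover and write-through for the backup share
New-SmbShare -Name "VeeamRepo2" -Path "F:\Backups" `
    -FullAccess "DOMAIN\VeeamSvc" -ContinuouslyAvailable $true
```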

You would normally expect to notice the performance impact of continuous availability when you compare the speeds with the previous episode, where we used a non-highly available file share (no continuous availability possible). But we have better storage in the lab for this test, the source system is usually the bottleneck, and as such our results were pretty awesome.

The lab has 4 Tarox server nodes with a mix of Intel Optane DC Memory (Persistent Memory or Storage Class Memory), Intel NVMe and Intel SSD disks. For the networking, we leverage Mellanox ConnectX-5 100Gbps NICs and SN2100 100Gbps switches. Hence we both had a grin on our faces just prepping this lab.

As a side note, the performance impact of continuous availability and write-through is expected; I have written about it before here. The reason you might contemplate using it, next to a requirement for high availability, is the small but realistic data corruption risk you run with SMB shares that are not continuously available: they do not provide write-through for guaranteed data persistence.
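
If you want to check this on an existing repository, the continuous availability setting of the share is easy to verify (the share name below is a placeholder):

```powershell
# A continuously available share forces write-through, so writes hit stable storage before they are acknowledged
Get-SmbShare -Name "VeeamRepo1" | Format-List Name, ContinuouslyAvailable, CachingMode
```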

We also demonstrate the “Instant Recovery” capability of Veeam to make workloads available fast and point out the benefits.

We put in this kind of effort to make sure we know what works and what doesn't, and to be better prepared to design solutions that work, even if they require a bit of "out of the box" thinking. We don't make decisions based on vendor marchitecture and infomercials. We contemplate which solutions might work and test those. That is how we deliver better solutions, better results, and happier customers. We hope you enjoy our musings and efforts and wish you all a very good end of 2019 and a very happy new year in 2020!

Your humble Hyper-V Amigos

Hyper-V Amigos Showcast – Episode 19 – Windows Server 2019 as Veeam Backup Target

The Hyper-V Amigos ride again! In this episode (19) we discuss some testing we are doing to create high-performance backup targets with Storage Spaces in Windows Server 2019. We're experimenting with stand-alone Mirror Accelerated Parity with SSDs in the performance tier and HDDs in the capacity tier on a backup target. We compare backups via the Veeam data mover to this repository directly as well as via an SMB 3 file share, and we look at throughput, latency and CPU consumption.
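
As a rough sketch of what such a stand-alone Mirror Accelerated Parity repository volume looks like in PowerShell (pool, tier and volume names as well as the tier sizes are illustrative values, not a sizing recommendation):

```powershell
# Pool all eligible local disks on the stand-alone backup target (names and sizes are illustrative)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "BackupPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# A mirror tier on the SSDs for ingest performance and a parity tier on the HDDs for capacity
New-StorageTier -StoragePoolFriendlyName "BackupPool" -FriendlyName "Performance" `
    -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "BackupPool" -FriendlyName "Capacity" `
    -MediaType HDD -ResiliencySettingName Parity

# Mirror Accelerated Parity: a single ReFS volume spanning both tiers
New-Volume -StoragePoolFriendlyName "BackupPool" -FriendlyName "VeeamRepo" -FileSystem ReFS `
    -StorageTierFriendlyNames "Performance", "Capacity" -StorageTierSizes 2TB, 20TB
```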

One of the questions we have is whether an offload card like SolarFlare would benefit backups, as these offload more than just RDMA-capable workloads. The aim is to find out how much we can throw at a single 2U backup repository that must combine both speed and capacity. We discuss the reasons why we are doing so. For me, it is because rack units come at a premium price in various locations. This means that spending money on repository building blocks that offer performance and capacity in fewer rack units ensures we spend the money where it benefits us. If the number of rack units (likely) and power (less likely) are less of a concern, the economics are different.

We also address the drawbacks of an SMB 3 backup target, and we will show you a solution in a later episode when we leverage a continuously available file share to solve this while also providing failover capabilities. This can be done via S2D or shared storage. The question remains whether Storage Spaces puts too much load on the CPUs compared to a RAID controller approach, but this is debatable today. When it comes to write-back cache, that's why they invented NVMe!

One benefit of Storage Spaces is that ReFS can auto-repair corrupt bits from the redundant copies and as such offers protection against bit rot.
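
For file data, that self-healing relies on ReFS integrity streams; you can check and enable them on the repository along these lines (the path is a placeholder for the backup volume):

```powershell
# Inspect and enable ReFS integrity streams on the backup repository folder (path is a placeholder)
Get-Item "R:\Backups" | Get-FileIntegrity
Get-Item "R:\Backups" | Set-FileIntegrity -Enable $true
```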

While this is just a small lab setup, we are getting some interesting results and pointers to further investigate. The only real showstopper is the ridiculous markup OEMs charge for SSD and NVMe drives. That is way too high and more than any added value of testing and support can justify.

Anyway, enjoy the webcast and we hope to deliver a few follow-ups in the near future. Even PMEM with Intel Optane DC memory is in the works!

Your humble Hyper-V Amigos