
Hyper-V Amigos Showcast – Episode 20 – Windows Server 2019 as Veeam Backup Target Part II

Hello there! Good to see you are back for more Hyper-V Amigo goodness. In episode 20 of the Hyper-V Amigos ShowCast, we continue our journey through the different ways in which we can use Storage Spaces in backup targets. In our previous episode, "Hyper-V Amigos ShowCast (Episode 19) – Windows Server 2019 as Veeam Backup Target Part I", we looked at stand-alone or member servers with Storage Spaces, with both direct-attached storage and SMB file shares as backup targets. We also played with Multi Resilient Volumes.

For this webcast, we have one 2-node S2D cluster set up for the Hyper-V workload. On a second 2-node S2D cluster, we host two SOFS file shares, each on its own CSV LUN. SOFS on S2D is supported for backup and archival workloads. And as this is SMB3 and we have RDMA-capable NICs, we can leverage RDMA (RoCE, Mellanox ConnectX-5) to benefit from CPU offloading and superb throughput at ultra-low latency.
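For readers who want to build something similar, here is a minimal PowerShell sketch of such an SOFS-on-S2D backup target. The volume, role, and account names are hypothetical, the S2D cluster itself is assumed to already exist, and the CSV paths assume the Windows Server 2019 behavior of naming the mount point after the volume.

```powershell
# Hypothetical sketch: two SMB3 backup shares on an existing 2-node S2D cluster,
# each share on its own CSV volume, published through a Scale-Out File Server role.

# Create two CSV volumes in the S2D pool, one per backup share.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "BackupCSV01" -FileSystem CSVFS_ReFS -Size 2TB
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "BackupCSV02" -FileSystem CSVFS_ReFS -Size 2TB

# Add the SOFS role that will own the shares.
Add-ClusterScaleOutFileServerRole -Name "SOFS-BCK"

# One share per CSV; shares on an SOFS are continuously available by default.
New-Item -ItemType Directory -Path "C:\ClusterStorage\BackupCSV01\Backup01"
New-SmbShare -Name "Backup01" -Path "C:\ClusterStorage\BackupCSV01\Backup01" -ScopeName "SOFS-BCK" -FullAccess "DOMAIN\VeeamSvc"

New-Item -ItemType Directory -Path "C:\ClusterStorage\BackupCSV02\Backup02"
New-SmbShare -Name "Backup02" -Path "C:\ClusterStorage\BackupCSV02\Backup02" -ScopeName "SOFS-BCK" -FullAccess "DOMAIN\VeeamSvc"
```

Veeam backup repositories would then point at \\SOFS-BCK\Backup01 and \\SOFS-BCK\Backup02 as SMB shares.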

The General Purpose File Server (GPFS) role is not supported on S2D for now. You can use GPFS with shared storage and in combination with continuous availability, and this performs well as a highly available backup target too. The benefit here is that it is cost-effective (Windows Server Standard licenses will do) and you get to use the shared storage of your choice. But in this ShowCast we focus on the S2D scenario, and we didn't build an unsupported one.
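For contrast, a sketch of that GPFS alternative, assuming a classic failover cluster with shared storage already in place (the disk, role, address, and path names are again hypothetical):

```powershell
# Hypothetical sketch: a General Purpose File Server role on a failover cluster
# backed by shared storage, used as a continuously available backup target.
Add-ClusterFileServerRole -Name "FS-BCK" -Storage "Cluster Disk 1" -StaticAddress 192.168.1.50

# Explicitly enable continuous availability on the share; CA implies write-through.
New-SmbShare -Name "Backup" -Path "G:\Shares\Backup" -ScopeName "FS-BCK" -FullAccess "DOMAIN\VeeamSvc" -ContinuouslyAvailable $true
```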

You would normally expect to notice the performance impact of continuous availability when you compare the speeds with the previous episode, where we used a non-highly available file share (no continuous availability possible). But we have better storage in the lab for this test, the source system is usually the bottleneck, and as such our results were pretty awesome.

The lab has 4 Tarox server nodes with a mix of Intel Optane DC Persistent Memory (Storage Class Memory), Intel NVMe and Intel SSD disks. For the networking, we leverage Mellanox ConnectX-5 100Gbps NICs and SN2100 100Gbps switches. Hence we both had grins on our faces just prepping this lab.
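A few standard SMB and networking cmdlets make it easy to confirm that SMB Direct actually engages on such NICs rather than silently falling back to TCP:

```powershell
# Confirm the NICs expose RDMA and that it is enabled.
Get-NetAdapterRdma | Where-Object Enabled

# Confirm SMB sees the interfaces as RDMA capable (run on client and server side).
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
Get-SmbServerNetworkInterface | Where-Object RdmaCapable

# During a backup run, inspect the multichannel connections to see RDMA in use.
Get-SmbMultichannelConnection
```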

As a side note, the performance impact of continuous availability and write-through is expected; I have written about it before here. The reason you might contemplate using it, next to a requirement for high availability, is the small but realistic data corruption risk you run with SMB shares that are not continuously available: they do not provide write-through for guaranteed data persistence.
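On shares that are not continuously available, recent Windows versions let a client request write-through explicitly per mapping; a small sketch with hypothetical share names:

```powershell
# Check whether a share is continuously available (and thus gets write-through).
Get-SmbShare -Name "Backup" | Select-Object Name, ContinuouslyAvailable

# Windows Server 2019 (and Windows 10 1809+) can request write-through explicitly
# when mapping a share that is not continuously available.
New-SmbMapping -LocalPath "X:" -RemotePath "\\FS-BCK\Backup" -UseWriteThrough $true
```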

We also demonstrate the “Instant Recovery” capability of Veeam to make workloads available again fast, and we point out its benefits.
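For those who prefer scripting it, here is a hedged sketch of the same via Veeam's PowerShell snap-in. The backup, VM, and host names are hypothetical, and you should check the exact cmdlets and parameters against your installed Veeam Backup & Replication version:

```powershell
# Hypothetical sketch: publish the latest restore point of a VM straight from the
# backup target onto a Hyper-V host with Veeam Instant Recovery.
Add-PSSnapin VeeamPSSnapin

$restorePoint = Get-VBRBackup -Name "Hyper-V Lab Backup" |
    Get-VBRRestorePoint -Name "DC01" |
    Sort-Object -Property CreationTime -Descending |
    Select-Object -First 1

$targetHost = Get-VBRServer -Name "HV-NODE1"

# The VM boots directly from the backup file; migrate it to production storage later.
Start-VBRHvInstantRecovery -RestorePoint $restorePoint -Server $targetHost -PowerUp $true
```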

We put in this kind of effort to make sure we know what works and what doesn't, and to be better prepared to design solutions that work, even if they require a bit of “out of the box“ thinking. We don't make decisions based on vendor marchitecture and infomercials. We contemplate which solutions might work and test them. That is how we deliver better solutions, better results, and happier customers. We hope you enjoy our musings and efforts, and we wish you all a very good end of 2019 and a very happy new year in 2020!

Your humble Hyper-V Amigos
