Tag Archives for "Storage Spaces"

Hyper-V Amigos Show Cast – Episode 21 – Bare Metal Recovery of an S2D node with Veeam Agent for Windows Integrated in Veeam Backup & Replication

Hello everyone! Carsten Rachfahl and Didier Van Hoye, aka “The Hyper-V Amigos”, created another Hyper-V Amigos show cast just before X-Mas 2019. In episode 21 we dive into leveraging the Veeam Agent for Windows, integrated with Veeam Backup & Replication (v10 RC1), to protect our physical S2D nodes. For shops that don’t have an automated cluster node build process set up, or that rely on external help to come in and do it, this can be a huge time saver.

We walk through the entire process and end up doing a bare metal recovery of one of the S2D nodes. The steps include:

  • Setting up an Active Directory protection group for our S2D cluster.
  • Creating a backup job for a Windows Server, selecting failover cluster as the type (which offers only “Managed by Backup Server” as the mode).
  • Running a backup.
  • Creating the Veeam Agent Recovery Media (the most finicky part).
  • Restoring one of the S2D hosts completely using the bare metal recovery option.

Now, we had some issues in the lab: one of them was a BSOD on the laptop used to make the recording, and we were a bit too impatient when booting from the ISO over a BMC virtual CD/DVD. Hence we had to glue some parts together and fast-forward through the boring bits. We do appreciate that watching a system boot for 10 minutes doesn’t make for good infotainment. Other than that, it went fine and we were able to demonstrate the process from beginning to end.

As is the case with any process, you should test and experiment to make sure you are familiar with it. That makes it all a little easier and hurts a little less when the day comes that you have to do it for real.

This is probably our last show cast for 2019 and we hope it helps you look into some of the capabilities and options you have with Veeam in regard to protecting any workload. Long gone are the days when Veeam was only about protecting virtual machines. Veeam is about protecting data wherever it lives: in VMs, physical servers, workstations, PCs, laptops, on-premises, in the cloud, and in Office 365. On top of that, you can restore it wherever you want, avoiding lock-in and costly migration projects and tools. Check it out.

The Hyper-V Amigos wish you a very happy New Year in 2020!

Hyper-V Amigos Showcast – Episode 20 – Windows Server 2019 as Veeam Backup Target Part II

Hello there! Good to see you are back for more Hyper-V Amigo goodness. In episode 20 of the Hyper-V Amigo ShowCast, we continue our journey through the different ways in which we can use Storage Spaces in backup targets. In our previous "Hyper-V Amigos ShowCast (Episode 19) – Windows Server 2019 as Veeam Backup Target Part I" we looked at stand-alone or member servers with Storage Spaces, with both direct-attached storage and SMB file shares as backup targets. We also played with Multi-Resilient Volumes.

For this WebCast, we have one 2-node S2D cluster set up for the Hyper-V workload. On a second 2-node S2D cluster, we host 2 SOFS file shares, each on its own CSV LUN. SOFS on S2D is supported for backup and archival workloads. And as it is SMB3 and we have RDMA-capable NICs, we can leverage RDMA (RoCE, Mellanox ConnectX-5) to benefit from CPU offloading and superb throughput at ultra-low latency.
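As a quick sketch of how we sanity-check that the backup traffic actually runs over SMB Direct, here are the kinds of cmdlets we use. This assumes the in-box SMB PowerShell module on Windows Server; it is a minimal example, not a full diagnostic procedure.

```powershell
# On the client (backup source), check which NICs SMB sees as RDMA-capable
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable

# Verify that active SMB connections actually negotiated SMB Direct (RDMA)
Get-SmbMultichannelConnection |
    Select-Object ServerName, ClientRdmaCapable, ServerRdmaCapable

# On the file server side, list the interfaces SMB serves on
Get-SmbServerNetworkInterface
```

If `ClientRdmaCapable` and `ServerRdmaCapable` are both true for the connections to the SOFS, the traffic is offloaded to the RDMA NICs rather than burning CPU cycles.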

The General Purpose File Server (GPFS) role is not supported on S2D for now. You can use GPFS with shared storage and in combination with continuous availability. This performs well as a highly available backup target too. The benefit here is that it is cost-effective (Windows Server Standard licenses will do) and you get to use the shared storage of your choice. But in this ShowCast we focus on the S2D scenario, and we didn’t build an unsupported scenario.
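For reference, creating a continuously available share on a clustered file server role comes down to one cmdlet. A minimal sketch, assuming a clustered disk mounted on the file server role; the share name, path, and service account are illustrative only.

```powershell
# Create a continuously available SMB share as a backup target on a
# clustered file server role. Continuous availability enables transparent
# failover and forces write-through for guaranteed data persistence.
New-SmbShare -Name "VeeamBackups" `
    -Path "E:\Backups" `
    -FullAccess "DOMAIN\VeeamService" `
    -ContinuouslyAvailable $true
```

The `-ContinuouslyAvailable $true` flag is the key part: it is what gives you both the transparent failover and the write-through behavior discussed below.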

You would normally expect to notice the performance impact of continuous availability when comparing the speeds with the previous episode, where we used a non-highly available file share (no continuous availability possible). But with the better storage we have in the lab for this test, the source system is usually the bottleneck, and as such our results were pretty awesome.

The lab has 4 Tarox server nodes with a mix of Intel Optane DC Memory (Persistent Memory or Storage Class Memory), Intel NVMe and Intel SSD disks. For the networking, we leverage Mellanox ConnectX-5 100Gbps NICs and SN2100 100Gbps switches. Hence we both had a grin on our faces just prepping this lab.

As a side note, the performance impact of continuous availability and write-through is expected. I have written about it before here. The reason you might contemplate using it, next to a requirement for high availability, is the small but realistic data corruption risk you have with non-continuously available SMB shares: they do not provide write-through for guaranteed data persistence.

We also demonstrate the “Instant Recovery” capability of Veeam to make workloads available fast and point out the benefits.

We put in this kind of effort to make sure we know what works and what doesn’t, and to be better prepared to design solutions that work, even if they require a bit of “out of the box“ thinking. We don’t make decisions based on vendor marchitecture and infomercials. We contemplate which solutions might work and test those. That is how we deliver better solutions, better results, and happier customers. We hope you enjoy our musings and efforts and wish you all a very happy end of 2019 and a very happy new year in 2020!

Your humble Hyper-V Amigos

Hyper-V Amigos Showcast – Episode 19 – Windows Server 2019 as Veeam Backup Target

The Hyper-V Amigos ride again! In this episode (19) we discuss some testing we are doing to create high-performance backup targets with Storage Spaces in Windows Server 2019. We’re experimenting with stand-alone Mirror-Accelerated Parity with SSDs in the performance tier and HDDs in the capacity tier on a backup target. We compare backups via the Veeam data mover to this repository directly as well as via an SMB 3 file share. We look at throughput, latency and CPU consumption.
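To give an idea of what setting up such a stand-alone mirror-accelerated parity repository looks like, here is a minimal sketch using the in-box Storage Spaces cmdlets. Pool, tier, and volume names as well as the tier sizes are illustrative; size the tiers for your own hardware.

```powershell
# Create a storage pool from all disks eligible for pooling
New-StoragePool -FriendlyName "BackupPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Define a mirror performance tier on SSD and a parity capacity tier on HDD
New-StorageTier -StoragePoolFriendlyName "BackupPool" -FriendlyName "Performance" `
    -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "BackupPool" -FriendlyName "Capacity" `
    -MediaType HDD -ResiliencySettingName Parity

# Create a tiered ReFS volume spanning both tiers: mirror-accelerated parity
New-Volume -StoragePoolFriendlyName "BackupPool" -FriendlyName "BackupRepo" `
    -FileSystem ReFS `
    -StorageTierFriendlyNames "Performance", "Capacity" `
    -StorageTierSizes 800GB, 20TB
```

Writes land in the fast mirror tier and are destaged to the parity tier, which is what makes this combination attractive as a backup target: ingest speed from the mirror, capacity economics from parity.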

One of the questions we have is whether an offload card like SolarFlare would benefit backups, as these cards offload more than just RDMA-capable workloads. The aim is to find out how much we can throw at a single 2U backup repository that must combine both speed and capacity. We discuss the reasons why we are doing so. For me, it is because rack units come at a premium price in various locations. This means that spending money to come up with repository building blocks that offer performance and capacity in fewer rack units ensures we spend the money where it benefits us. If the number of rack units (likely) and power (less likely) are less of a concern, the economics are different.

We also address the drawbacks of an SMB 3 backup target, and we will show you a solution in a later episode, where we leverage a continuously available file share to solve this while also providing failover capabilities. This can be done via S2D or shared storage. The question remains whether Storage Spaces puts too much load on the CPUs compared to a RAID controller approach, but this is debatable today. And when it comes to write-back cache, that’s what NVMe was invented for!

One benefit of Storage Spaces is that ReFS can auto-repair corrupt bits from the redundant copies and as such offers protection against bit rot.

While this is just a small lab setup, we are getting some interesting results and pointers for further investigation. The only real showstopper is the ridiculous markup OEMs charge for SSD and NVMe drives. That is way too high and more than any added value of testing and support can justify.

Anyway, enjoy the webcast and we hope to deliver a few follow-ups in the near future. Even PMEM with Intel Optane DC memory is in the works!

Your humble Hyper-V Amigos

Hyper-V Amigos Showcast Episode 12–ReFS and Backup

In this episode, Didier and I look at a single-host deployment with Storage Spaces on Windows Server 2016. We create a “hybrid” disk just like in Storage Spaces Direct by combining SSD & HDD in a storage tier. We were very happy to discover that ReFSv3.1 does real-time tiering. We’re very excited about this because we want to leverage the benefits Veeam Backup & Replication 9.5 brings by leveraging ReFSv3.1 block cloning for backup transformation actions and Grandfather-Father-Son (GFS) space savings. To do so, we’re looking at our options to get these benefits and capabilities leveraging affordable yet performant storage for our backup targets. S2D is one such option but might be cost-prohibitive or overkill in certain environments.

ReFS v3.1 on non-clustered Windows Server 2016 hosts brings us integrity streams, file corruption repair with instant recovery as protection against bit rot, the performance of tiered storage, and SMB3 as a backup target at a great price point.
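The integrity streams mentioned above can be inspected and toggled per file or folder with the in-box Storage cmdlets. A small sketch with illustrative paths:

```powershell
# Check whether integrity streams (checksums) are enabled on a backup file
Get-FileIntegrity -FileName "D:\Backups\job.vbk"

# Enable integrity streams on the backup folder so new files inherit it;
# with redundant copies (mirror), ReFS can then auto-repair corrupt data
Set-FileIntegrity -FileName "D:\Backups" -Enable $true
```

Note that the auto-repair only works when the volume has redundancy to repair from, such as a mirror space; on a simple space, ReFS can detect corruption but not fix it.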

But please watch the video and see for yourself.

The Hyper-V Amigos

Hyper-V Amigos Showcast Episode 6 Storage Spaces

In this show, Didier and Carsten talk about features in Storage Spaces. Carsten demonstrates how to get statistics from the auto-tiering process, how a Scale-Out File Server reacts to a catastrophe (power loss), and shows what happens when he turns off an actively used JBOD in an enclosure-aware three-JBOD setup.

Have fun watching this Showcast.

P.S.: Here are the links to the mentioned Autotiering.ps1 script and the accompanying blog post (in German).