G+_Adam EL-Idrissi Posted October 25, 2014 I'm a huge fan of Western Digital drives. I've used them since 2008, when I built my "first" PC (first since 1996, lol). Last year around Christmas I built my home server: files, backups, streaming, etc. This year for my birthday, in a couple of weeks, I plan on building a new one, and I'm curious what y'all's experience is with large-capacity drives. I have a WD Red 2TB in my server now; in the new build I was going to keep my current drive and add 3 more. Looking at the WD Red series on Newegg, the 3TB model is $120. Getting three would be 12TB instead of 6TB, or 8TB if I add my current Red to the mix. Price-wise it seems better to add three 3TB drives, but the reviews I've seen are mixed on the 3TB compared to the 2TB model, which I have found has far fewer complaints. Since it's a 24/7/365 box, reliability is obviously key, and from a post on Backblaze's blog, WD drives have a significantly lower failure rate than Seagate. What is your experience with WD Reds, 3TB+? What brand of NAS drive do you like? Also, I'll continue to run Ubuntu Server, and if I use RAID it will be either RAID 5 or RAID 10. I'm looking for the most storage space.
G+_Luke Militello Posted October 26, 2014 My opinion... first off, I am a Seagate fanboy. I've only ever had one Seagate fail in over 15 years of working with them, and even then I had a replacement drive in two days. Exceptional customer service. As for WD Reds, I opted to choose them over Seagate because they were first to market with the "NAS" drive. I purchased sixteen 4TB Red drives, and within a week or so I had one drive that would spin up but was never recognized. I assumed this was a logic board failure (yes, I replaced the cable). I then had a second drive start tossing bad sectors at me. I was at a 12.5% failure rate. I wasn't happy about this. I obviously submitted both for RMA. My experience only got worse from there. In speaking with a customer service representative, I was told they didn't have any in stock to ship at the moment; really?!?! It took a month before I got my first replacement, and when it arrived it turned out to be a Green drive! The second drive arrived a week later and was a Red. I then had to deal with customer support again. They refused to allow cross shipment with the Green and would not let a Red leave without getting the Green back first. So I asked for an overnight label in each direction as a sign of good faith to make things right with me, the customer. They said they would, yet that was not the case; it was ground in both directions. It took me another two weeks to get the other Red drive. By then, two months for an RMA. I should note, I ended up just buying two new Red drives right after I got off the phone with them initially, when I was told they didn't have any in stock, so I was not "without" for the 1-2 months it took to get my RMA drives. All that being said, I kick myself for not sticking with what I know and just buying Seagate NAS drives. When, not if, but when these start failing, I will be replacing them with Seagate.
As for your question on large capacity drives, I would recommend at least RAID-6 if you can. With larger drives, your probability of encountering a bad sector or some other flaw goes up dramatically. With RAID-5, if you have a drive drop out and you go through a rebuild, the probability of encountering a bad sector on another drive increases; if that drive also falls out of the array, the whole array falls apart. [insert Steve Gibson's SpinRite plug here :)] This "can" be avoided if you scrub the array periodically to suss out flaws like this in a controlled environment. Therefore, with drives over 1TB, I wouldn't recommend the use of RAID-5. In the end, it really comes down to how important your data is to you and how long you can live with downtime should a problem happen. But remember this: RAID is not a form of backup! Always have, at the very least, another complete copy of your data elsewhere. Hope this helps you in your decision making.
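The rebuild risk described above can be put into rough numbers. This is a minimal sketch, assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits; real-world rates and drive specs vary, so treat the result as illustrative, not a prediction.

```python
# Rough sketch of why large-drive RAID-5 rebuilds are risky.
# Assumes the commonly quoted consumer spec of one unrecoverable
# read error (URE) per 1e14 bits read; real drives vary.

def rebuild_failure_probability(data_read_tb, ure_rate_bits=1e14):
    """Probability of hitting at least one URE while reading
    data_read_tb terabytes of surviving data during a rebuild."""
    bits_read = data_read_tb * 1e12 * 8          # TB -> bits
    p_clean_bit = 1 - 1 / ure_rate_bits          # chance one bit reads fine
    return 1 - p_clean_bit ** bits_read          # chance at least one fails

# Rebuilding a 4-drive RAID-5 of 3TB disks means reading the
# three surviving drives in full: 9TB.
print(f"{rebuild_failure_probability(9):.0%}")   # roughly a coin flip
```

With a single-parity array that size, the model says the rebuild has about a 50% chance of tripping over a URE, which is exactly why dual parity (RAID-6) or periodic scrubbing matters as drives grow.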
G+_Matthew Bowen Posted October 26, 2014 I switched after I paid ~$200 for a 200GB WD drive and it lasted a month.
G+_Travis Hershberger Posted October 26, 2014 WD Reds are decent drives for what Adam EL-Idrissi would be using them for. I've got 4 in 3 different RAID arrays at work, and they've been running well. The oldest is one of the first Red drives WD released, which annoyed me to no end, but that's a whole other conversation. I would not use them in a parity RAID, ever. They are just so slow already and, frankly, are not meant for use in a parity RAID. RAID 5 would be very, very dangerous in this case. You would likely never recover the array if one of the 3TB drives failed in a RAID 5. If you really want to know why, check out the links I referenced. Lots of information on RAID, failure scenarios, and performance.
Ref: Scott Alan Miller's wall of RAID links:
http://www.smbitjournal.com/2012/12/the-history-of-array-splitting
http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage
http://www.smbitjournal.com/2012/11/choosing-raid-for-hard-drives-in-2013
http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count
http://www.smbitjournal.com/2012/11/hardware-and-software-raid
http://www.smbitjournal.com/2012/08/nearly-as-good-is-not-better
http://www.smbitjournal.com/2012/07/hot-spare-or-a-hot-mess
http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable
http://www.smbitjournal.com/2011/09/spotlight-on-smb-storage
http://www.smbitjournal.com/2013/06/dreaded-array-confusion
http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805
http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
http://queue.acm.org/detail.cfm?id=1670144
G+_Luke Militello Posted October 26, 2014 Share Posted October 26, 2014 Travis Hershberger, I would have to, respectfully, disagree with you. WD Red and Seagate NAS drives are purpose built for parity RAID. In that they don't linger on attempts to recover data, they just tell the controller "I give up, ask another drive." Whereas, standalone drives will try and try and try before they give up. That being said, any drive is fine to use in a RAID-0 or a RAID-1 and probably better so in a RAID-0 as you would want the drive to attempt to recover data. RAID-1 really doesn't matter since it is an exact copy. Therefore, NAS specific drives are purpose built for parity RAID. However, I do agree with you about how RAID is becoming more of a risk as drive size increases. For example, in my setup, I have two RAID-6 arrays consisting of 8 drives each. Each of witch is on it's own server and I use DRBD with GFS2 in an HA cluster running in dual primary mode. What this does, in a sense, is become a "networked" RAID-61. A server can go completely down and the clients don't even notice per the HA cluster. Using GFS2, both systems can simultaneously read and write the file system since DRBD handles replication below the FS layer. This is all monitored via Nagios with custom plugins. In theory, I could take a RAID-6 array completely down. However, the dual parity of RAID-6 keeps things going in the event of drive failure. No need to waste controller ports on hot standbys, with hot swap, I just pull the bad drive and replace it with a spare in my crash kit after Nagios alerts me of the failed drive. If the array was to completely fall apart, I could just build a new one and DRBD would handle the data replication from the other array. There is also a dedicated NIC on each server, back-2-back, for DRBD replication and each server has an additional pair of NIC's bonded in active/standby which then uplink to two different switches. 
I admit this is probably over the top for your average user, but I just wanted to share how I combat the ever-growing probability of data loss with RAID as drive sizes increase. I will say the Advanced Format with 4K sectors is much welcomed in relation to this. For those who are curious: RAID-2, -3, and -4 are close cousins of RAID-5. RAID-2 stripes data at the bit level with a dedicated parity disk, RAID-3 stripes at the byte level with dedicated parity, RAID-4 stripes at the block level with dedicated parity, and RAID-5 introduces distributed parity. What we really need is a RAID-7 with triple distributed parity, much like ZFS is doing. Cost is most likely the prohibitive factor here, because one would need some beefy hardware on a dedicated controller to compute triple parity fast enough that data speeds do not suffer. ZFS is the trade-off, in light of the fact that modern computer hardware has enough resources that offloading the RAID is not really needed. Call me old school, but I just like how hardware RAID presents one "disk" to the OS, so I can manage it independently of the OS. I could go on and on, but I've hijacked this post enough already, lol.
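The single-parity idea shared by RAID-2 through RAID-5 boils down to XOR: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt by XOR-ing everything that survives. A toy sketch (the byte strings stand in for whole drive stripes):

```python
# Toy illustration of single parity as used by RAID-2 through RAID-5:
# parity = XOR of all data blocks, so any one lost block is
# recoverable by XOR-ing the survivors (data blocks + parity).

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three "drives" of data
parity = xor_blocks(data)            # the fourth "drive"

# Simulate losing drive 1 and rebuilding it from the others.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
print("rebuilt:", rebuilt)           # prints: rebuilt: b'BBBB'
```

This also shows why dual parity (RAID-6) needs more than a second XOR: a second independent equation (Reed-Solomon style) is required to solve for two unknowns at once.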
G+_Luke Militello Posted October 26, 2014 Share Posted October 26, 2014 Adam EL-Idrissi, all things considered, it really boils down to whom you have loyalty to and backing up your data. I'm sure you will agree that both WD and Seagate are the deafacto HDD vendors for the most part for home users as they have consumed all others. Companies don't get that way for being bad at what they do (unless you're Comcast). My final thoughts on this, since you already have a WD arsenal, I would stick with them. I am sure most would agree that keeping the same vendor, family and model (if you can help it) drive is the best way to assemble a RAID array.? Link to comment Share on other sites More sharing options...
G+_Travis Hershberger Posted October 26, 2014 Share Posted October 26, 2014 Luke Militello Raid 6 could work, but you're going to sacrifice speed. It is probably just fine for streaming movies from, but it will be very slow to write anything. http://www.ryanfrantz.com/posts/calculating-disk-iops/ This shows a write penalty of 4 for RAID 5, it's even higher for RAID 6. Keep in mind I'm used to living in a world where iops matter much more than sustained read/write speeds. Link to comment Share on other sites More sharing options...
G+_Luke Militello Posted October 26, 2014 Share Posted October 26, 2014 Travis Hershberger, this is going to be true for any parity calculation. If speed is more important that redundancy and reliability, then RAID is probably not for you. In which case, LVM is going to be what you want. I think we can both agree that there is no right way or wrong way, just the way that works best for the intended use and personal preference. Link to comment Share on other sites More sharing options...
G+_Travis Hershberger Posted October 27, 2014 Share Posted October 27, 2014 Luke Militello I'd never use LVM without mirroring, and using LVM for data protection just uses mdadm anyway. Since we know RAID 5 WILL LOOSE arrays, RAID 0 can be tempting. At least with RAID 5 you have an opportunity to backup before everything just goes away. Now I know you're not really recommending RAID 0, just wanted to clarify a bit here. Link to comment Share on other sites More sharing options...
G+_Adam EL-Idrissi Posted October 29, 2014 (Author) Luke Militello, you are right. Seagate and WD are the "main players". I've used WD for about 6 years, and Toshiba and Hitachi on and off for about 4 years. I'm a fan of WD drives but was curious about anyone's real-world experience. I've read reviews, but they seem... slack. I've also seen Backblaze's results, but their use is more intense than anything I could ever throw at a drive.
G+_Adam EL-Idrissi Posted October 29, 2014 (Author) Maybe this is a "noob" question, but what benefit does RAID have for home use with file serving, backups, streaming, etc.?
G+_Luke Militello Posted October 29, 2014 Share Posted October 29, 2014 Adam EL-Idrissi, uptime. Especially when you have a LOT of data, a complete restore can take days. Link to comment Share on other sites More sharing options...
G+_Luke Militello Posted October 29, 2014 Share Posted October 29, 2014 Even with the fastest mechanical drives on the market and assuming there are no bottlenecks, it would take about 24 hours to transfer 16TB at speeds of around 200MB/s. Link to comment Share on other sites More sharing options...