I'm looking at building a new low power server


G+_Adam EL-Idrissi

I'm looking at building a new low power server. I currently have an Intel D510 mini-ITX board, 2 GB of RAM, and a 2 TB hard drive in an Ark mini-ITX case. The main purpose of the server is file sharing, backups, and media streaming: Plex for a couple of desktops and laptops, and XBMC on the Raspberry Pi connected to the TV. The new hardware I'm looking at is the new AMD Athlon 5350, an ASRock AM1H-ITX motherboard, 4 GB or 8 GB of RAM, and four 2 TB WD Red drives. Still debating on a case. I currently use Ubuntu Server and I love it. My question is: should I install the OS on an SSD/USB drive, or just install all four drives and put the OS on one of the 2 TB drives? Would there be any advantage to running the OS from an SSD or USB drive?


If it were me, I'd install the OS on an SSD.  I've found that most USB sticks die from the constant log file writes, so if you go the USB route I'd just keep /var/log on a different device.  Using a drive dedicated to the OS isn't really a necessity anymore, but it makes getting the RAID running in another system much quicker and easier.  I've even had mdadm pick up the array by its UUID and mount it after booting on a new system.
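In practice that can look something like this (a rough sketch, not tested on your hardware; /dev/sdb1, /dev/md0, and the mount point are made-up names for illustration):

    # /etc/fstab: keep the constant log writes off the USB stick
    # (assumes /dev/sdb1 is a small partition on one of the hard drives)
    /dev/sdb1  /var/log  ext4  defaults,noatime  0  2

    # After moving the drives to a new machine, mdadm can usually
    # reassemble the array from the metadata stored on the drives:
    sudo mdadm --assemble --scan
    sudo mount /dev/md0 /mnt/storage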


I would not install the OS on your data drives. If there is a problem and you need to reinstall the OS, you don't want to risk accidentally overwriting your data.

 

I would choose to install the OS on a flash drive. They are low power and will save a SATA port. Travis Hershberger I do agree with you that flash drives used to die due to the abuse of an OS, but I don't believe that to be the case any more. I have had great luck with using flash drives as my main drive.
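One way to cut down write wear on a flash drive (a minimal sketch, assuming a stock Ubuntu Server install; note that logs kept in tmpfs are lost on reboot, which may be acceptable for a home server):

    # /etc/fstab: keep the most write-heavy paths in RAM instead of on flash
    tmpfs  /var/log  tmpfs  defaults,noatime,mode=0755  0  0
    tmpfs  /tmp      tmpfs  defaults,noatime            0  0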

 

One thought for an SSD: it may be possible to use an SSD as your main drive and also as a high-speed cache in front of the spinning disks. Not too sure how this would be done, or if it would be worth it, just an idea.
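For what it's worth, the usual way to do this on Linux is bcache (or LVM's cache feature). Very roughly, and purely as a sketch (the device names are assumptions, and I haven't tried this on AM1 hardware):

    # bcache-tools provides make-bcache
    sudo apt-get install bcache-tools
    # Register the RAID array as the backing device and an SSD partition as the cache
    sudo make-bcache -B /dev/md0
    sudo make-bcache -C /dev/sda2
    # Attach the cache set to the backing device using the UUID printed by the -C step
    echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach
    # Then format and mount /dev/bcache0 instead of /dev/md0
    sudo mkfs.ext4 /dev/bcache0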


RAID 5 with four 2TB Red drives means you'll have roughly a 50% chance of the array not rebuilding due to a URE: consumer drives like these are rated at one unrecoverable read error per 10^14 bits, and a rebuild has to read all ~6 TB (about 4.8 x 10^13 bits) on the three surviving drives, so the odds of hitting at least one error along the way are close to even.  Add to that the hugely increased risk of a second drive failing outright during the rebuild, and the probability of a successful RAID 5 rebuild is quite small.  The only time I'd personally use RAID 5 like this is when I wanted a day or two to take a backup image of the array after a drive dies.

 

According to the second link I reference below, your only real choice is RAID 10, and I'd tend to agree with that.  You could use RAID 6, but it will be much, much, much slower.
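For reference, creating a four-drive RAID 10 with mdadm looks roughly like this (a sketch only; /dev/sdb through /dev/sde and the mount point are assumptions, so verify the device names first, since the create step destroys whatever is on them):

    # Build the array from the four WD Red drives
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Filesystem and mount
    sudo mkfs.ext4 /dev/md0
    sudo mount /dev/md0 /mnt/storage
    # Record the array so it assembles automatically at boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u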

 

References:

http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage/

http://www.smbitjournal.com/2012/11/choosing-raid-for-hard-drives-in-2013/

http://www.smbitjournal.com/2012/11/hardware-and-software-raid/

http://www.smbitjournal.com/2012/08/nearly-as-good-is-not-better/

http://www.smbitjournal.com/2012/07/hot-spare-or-a-hot-mess/

http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable/

http://www.smbitjournal.com/2013/06/dreaded-array-confusion/

http://www.smbitjournal.com/2014/07/comparing-raid-10-and-raid-01/

http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805

http://queue.acm.org/detail.cfm?id=1670144


Personally, I would avoid using an SSD for a cache, as I don't yet believe SSD tech is proven over the long haul. I hate spending money on hardware. I buy the top of the market when I buy, and I use it until it dies. Just before that happens, I start researching replacements. That strategy has served me well over the past 30+ years.

 

My understanding of SSDs is that they are basically more reliable, scaled-up USB sticks. Way over-simplified, I know, and it won't always be true. Even so, prudence seems to dictate that if there's a way to avoid writing to an SSD, avoid it and extend the SSD's time before failure (TBF).


Paul Simard It's true that the first and some second generation SSDs didn't wear very well.  Anything you can buy off the shelf today is going to last as long as the old-style hard drives.  Using them as cache in a business setting is standard practice.  It is generally recommended that an SSD used for cache have a built-in power source so it can finish writing anything in its cache should it lose power.

 

Wish I could find the link to a stress test someone ran.  They were testing 256 GB drives, and they all lasted through 900 TB of writes, averaged over 1 PB of writes, and one was still going when they published.  With warranties covering five years and one or more full drive writes per day, the manufacturers think they're more reliable than hard drives now.
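If you want to check where a drive you already own stands, smartmontools will dump the wear counters (attribute names vary by vendor, so Wear_Leveling_Count and Total_LBAs_Written below are examples, not universal fields):

    sudo apt-get install smartmontools
    # Print all SMART attributes for the drive
    sudo smartctl -A /dev/sda
    # On many SSDs the interesting ones are:
    #   Wear_Leveling_Count / Media_Wearout_Indicator  (remaining endurance)
    #   Total_LBAs_Written  (lifetime writes; multiply by sector size for bytes)
    sudo smartctl -A /dev/sda | grep -Ei 'wear|lba'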


  • 3 weeks later...

Travis Hershberger I'm an old fussbudget. Almost 60, and I don't trust version 1 or even 2 of anything. For example, I recall passing on Windows 2.0 years ago and becoming an ardent adopter of v3.0 later on. I follow the buy once, use until it dies, then replace school. I agree that SSDs have great promise, and performance details and prices are improving almost daily. I expect I'll begin using them soon when appropriate.

My issue is recommending them as a cache solution. For a 'slow' cache that refreshes over a lengthy time period, OK, maybe. But on a server where cache expiration times can be measured in minutes, not so much. I like to think of SSDs as WORF drives, as in Write Once, Read Forever, not Klingons. I'm watching the silicon-based CDs and DVDs as archival media for much the same reason. IMO, and for my purposes, the tech is still immature. I'm slowly coming to believe that 4th generation nuclear plants are the path to true energy independence here in the US. I like to be sure of my path. (Smiling) So sue me.

