redundancy...

  • #16
    Actually, my fear of the blackbox is not so irrational... On my brother's ReadyNAS, something related to the power supply on the mainboard broke, and to regain access to his data he needed to find another device with the same RAID controller.

    So basically, I want something where, no matter what, the data stays accessible (not necessarily directly, just without big recovery procedures). DrivePool on Windows seems perfect for that, but a purpose-built system with all the licenses gets quite expensive, so I was looking for a cheaper option (both license-wise and in terms of the necessary hardware). I suspect mirroring done by either a hardware or software RAID also allows a disk to be read in a different system, and SnapRAID allows it too, as it works at file level.
    But anything beyond that, not really... With a software RAID you can move the array independently of the controller, but then you really need to know what you are doing not to lose your data.
    Last edited by VJ; 24 October 2017, 04:17.
    pixar
    Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



    • #17
      If I had at least a monthly offline backup of the important data (pictures, documents and the rest of the files on my laptops) and no backup of downloaded movies and music, then RAID5 would be OK. Otherwise I'd go with 2x RAID1. It's more expensive in drives, but you don't need a fancy controller and you can upgrade just one segment. I'm running RAID5 at work on a NAS, but I use that NAS to back up servers to, and I also do a weekly backup which I take off-site. For 6 drives, RAID6 is OK. A cold spare (an unopened drive sitting in a box so you can swap it in right away, before ordering a replacement) is also worth considering.

      Generally NASes use Linux LVM, or use one that does, so you can recover the data on a Linux machine. I would only do RAID5 on a server-grade controller. The problem with those is that you need a BBWC module to power the memory for write caching, and its battery goes every 3-4 years. It costs 50€ to replace.

      What I'm more afraid of is silent data corruption. Only ZFS protects against that.

      IMO Linux is best, as you can easily create RAID1 in software and then import that array on any other machine that can run Linux. It's also easy to create Samba shares, and that is all you really need.
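      For what it's worth, the whole thing boils down to a handful of commands. Here is a sketch, wrapped in Python; /dev/sdb1, /dev/sdc1, the mount point and the share user are placeholders, mdadm and Samba are assumed to be installed, and it needs to run as root:

#!/usr/bin/env python3
"""Sketch: two-disk software RAID1 plus a Samba share on a stock Linux box.
Device names, mount point and share user are placeholders."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build the mirror from two empty partitions and put a filesystem on it.
run(["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", "/dev/sdb1", "/dev/sdc1"])
run(["mkfs.ext4", "/dev/md0"])
run(["mkdir", "-p", "/srv/storage"])
run(["mount", "/dev/md0", "/srv/storage"])

# Append a minimal share definition and restart Samba.
with open("/etc/samba/smb.conf", "a") as f:
    f.write("\n[storage]\n   path = /srv/storage\n   valid users = vj\n   read only = no\n")
run(["systemctl", "restart", "smbd"])

      On any other Linux machine the same array should come back with mdadm --assemble --scan, which is the portability described above.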

      Also, CentOS has 10 years of security updates for major releases. Debian and Ubuntu LTS are also better than Windows 10 in this regard, and probably better than Windows Server. Since Windows 10, I no longer consider Windows an enterprise-grade OS.

      With ZFS and FreeNAS I have no practical experience, but I think you could take any desktop (or source one), install FreeNAS on a USB stick and access the drives.

      EDIT: I agree on the blackbox problem. For example, in our first Synology the fan died and it wouldn't run without one, so until I connected some fan to it I couldn't power it on. Also, for upgrades I think DIY is better: a good case can last 10+ years, and then replacing just the motherboard and CPU, or a couple of drives once in a while, only costs something like 200-300 EUR. Much better than building a new 1000 EUR box every 5 years or so - after 5 years I would no longer trust the drives.
      Last edited by UtwigMU; 24 October 2017, 04:28.



      • #18
        VJ, have you decided on Windows or Linux for the storage server? If Linux then I have nothing to add. Wrt hardware, any old hardware will do.
        Join MURCs Distributed Computing effort for Rosetta@Home and help fight Alzheimers, Cancer, Mad Cow disease and rising oil prices.
        [...]the pervading principle and abiding test of good breeding is the requirement of a substantial and patent waste of time. - Veblen



        • #19
          My current server otherwise is a ProLiant ML350 G6 with LFF bays (room for six 3.5" hot-swap drives). It's running a single quad-core Xeon with HT - I can upgrade to dual 6-core Xeons for 12 cores. I have 12GB ECC, but can upgrade to 96GB per socket. I'm running ESXi from an internal USB drive, and I have two 4TB WD Golds in hardware RAID1 (server controller - no cache, but for RAID1 that's OK, and I can see the drives in iLO). Then I have a PCIe USB 3.0 card that I passed through to an SBS 2011 machine running in the hypervisor. I'm using USB 3.0 drives for Windows Server Backup and I rotate them once or twice a month - 100km away.

          I have injected SLIC 2.3 activation in ESXi so all Windows Server up to 2012 and Windows desktop up to 8 can be activated using OEM activation.
          Last edited by UtwigMU; 24 October 2017, 04:50.



          • #20
            Basically, my problem is:
            - commercial NAS: blackbox issue... failure of the device = problems accessing the data
            - commercial server (e.g. that HP): limitations in hardware (e.g. you have to use hardware RAID)
            - DIY server: which disk organization system to use, and which OS?

            Umfriend:
            I have not decided on any OS; I was expecting to find something in Linux, but the more I learn, the less sure I am of that. The whole ZFS thing scares me a bit, particularly the fact that even there raidz1 is not considered enough. If for home use you need at least 4 disks to get 2-disk redundancy, then I would feel safer taking 4 disks and just setting up two mirrors. But once you start mirroring, there are more options.
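            For reference, that two-mirror layout is a single command in ZFS. A sketch with placeholder disk names, assuming ZFS is installed and run as root:

#!/usr/bin/env python3
"""Sketch: 4 disks as a ZFS pool of two mirrored pairs (placeholder disk names)."""
import subprocess

# Equivalent to: zpool create tank mirror sda sdb mirror sdc sdd
subprocess.run(["zpool", "create", "tank",
                "mirror", "/dev/sda", "/dev/sdb",
                "mirror", "/dev/sdc", "/dev/sdd"], check=True)

# Usable capacity is half the raw total; each pair tolerates one failed disk.
subprocess.run(["zpool", "status", "tank"], check=True)

            Expanding later would be zpool add tank mirror with two more disks, which matches the "just add another mirror" idea.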

            I have no suitable old hardware... I either have an 18-year-old Pentium II (I doubt that is suitable) or a 14-year-old dual Xeon whose only SATA controller gives me driver issues on modern OSes. My current HTPC is based on an Asus Z97 and is already a few years old, but if I went with a computer I would aim for a custom build geared towards low power usage, something using an Intel C3000 SoC or so. And it would be nice if it came in a small or desktop format - I don't have the space for a tower.

            While Windows 10 is not really server-grade, at the moment the combination of Windows + DrivePool looks like the solution I would feel most comfortable with.
            If I went with Windows, however, I could get the HP, use hardware RAID1 and be done with it, or spend more on a custom build. The downside is that it would be a headless server, so it would either have to be Windows Professional (for RDP) or I would have to resort to VNC (with which I have had issues connecting to a locked Windows).

            On Linux you have very nice tools for web-based management (e.g. Webmin), so that would be a benefit of Linux. Going with the HP would solve the redundancy issue but leave few OS choices (it comes with ClearOS - a Linux variant based on CentOS - and should have full support for the RAID). Going with a custom build brings the whole redundancy mess into the OS, but if I went with a mirror it would be rather straightforward.

            Maybe I'm trying to do too much: if all I need is safe storage, then perhaps I should just look at a simple mirroring system that gives access to files. And then I would just go with the simplest non-blackbox solution, which now seems to be the HP+ClearOS.

            There are things with Docker which I'm just now learning about and which may be of interest...
            pixar
            Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



            • #21
              RAID is mostly dead. Don't bother.

              The standard accepted backup scheme is called a 1-2-3 backup.

              1 = Original data on the source. The data here is normally mirrored using some form of non-RAID mirroring, to avoid the parity performance hit and to keep production running should a drive fail.
              2 = Local backup of the data. Since this is non-production you put the backups on an inexpensive storage solution. Don't bother with RAID or data mirroring. JBOD works fine here.
              3 = Off-site backup, usually to the Cloud. This is the disaster recovery copy. Many providers now offer geo-located storage, where the data is copied to multiple sites around the globe, or within a defined region (like US or EU) to further decrease the chance of critical data loss.
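              As a rough illustration of tiers 2 and 3 scripted on the source machine (the paths and the "offsite" rclone remote are made-up placeholders, and rsync/rclone are assumed to be installed):

#!/usr/bin/env python3
"""Sketch of the local and off-site backup tiers. Paths and the rclone
remote name are placeholders -- illustration only, not a turnkey tool."""
import subprocess

SOURCE = "/srv/storage/"        # tier 1: the live data
LOCAL_BACKUP = "/mnt/backup/"   # tier 2: cheap JBOD disk in the same house
OFFSITE = "offsite:home-backup" # tier 3: cloud bucket configured in rclone

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Tier 2: plain rsync copy to the local backup disk.
run(["rsync", "-a", "--delete", SOURCE, LOCAL_BACKUP])

# Tier 3: push the same data to cloud cold storage.
run(["rclone", "sync", SOURCE, OFFSITE])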

              The 1-2-3 strategy was developed from Google research which found that the best protection is simply having multiple copies of the data. Things like RAID only help keep production environments running while a disk is replaced; they do not necessarily keep data from getting lost.

              The cloud has helped with the 1-2-3 strategy because you can get super-cheap cold storage on the Internet, and with 1Gb Internet becoming more common, backups can complete in a timely manner now. Beats paying for the Iron Mountain truck to pick up the tape rotation.
              “Inside every sane person there’s a madman struggling to get out”
              –The Light Fantastic, Terry Pratchett



              • #22
                Yes... the problem is finding the balance for home use... My idea was:
                1. storage server with redundancy (initially I thought raidz1, now I'm leaning towards a mirror)
                2. backup to an HDD in a second computer
                3. important data to external HDD / optical disc

                So for personal use, the redundancy may be a bit overkill, but my idea was that when I'm adding a storage server, I want to do it properly, just in case of a forgotten backup or whatever.

                So far it looks like a computer with mirroring is the way to go. Can you confirm that an HDD from a mirror at server level (e.g. using the RAID1 option of a Marvell or Intel host controller) can be read in another system?
                I guess I'll have to decide which computer: home-built or something like the HP. Price-wise the HP will be hard to beat... The question is what else I might want the server to do (media streaming / TV streaming comes to mind).
                pixar
                Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



                • #23
                  If you go RAID1, you might as well use DP, as it allows you to mix & match drives.

                  Oh, and how large will the storage be? DP works well with their Scanner product: it scans the surface of the HDDs regularly and monitors SMART, and if it encounters an issue it can evacuate an HDD before it fails (assuming no sudden death and sufficient space on the other pooled HDDs). They now also have CloudDrive. You would use a cloud storage provider and CloudDrive would simulate an HDD on there. The advantages? Off-site backup (though not against accidental deletions, I think) and *all data is encrypted with a key only you know* - the provider can't read it. I think it is still in beta though, and I don't like cloud (except for Dropbox).
                  Last edited by Umfriend; 24 October 2017, 11:47.
                  Join MURCs Distributed Computing effort for Rosetta@Home and help fight Alzheimers, Cancer, Mad Cow disease and rising oil prices.
                  [...]the pervading principle and abiding test of good breeding is the requirement of a substantial and patent waste of time. - Veblen



                  • #24
                    I would like to be able to add storage as needed. It does not even have to form one pool; just the ability to add e.g. a second mirror is enough. And that brings me back to the problem:

                    DrivePool -> Windows -> custombuilt -> 1. price for licenses, 2. suitable case (this is expensive, or something like this), 3. headless operation risks

                    The case issue is important to me, as the computer would be installed on the top shelf of a cabinet. I would not like to have to move it a lot, so it should either not be too heavy or offer easy access to some of the components (e.g. hard disks). The Fractal Design microATX cases are also an option, as they are quite small (though I'm not sure whether e.g. the 804 is too high - in the 804 you can remove the drives from the side, in the 304 you have to lift them out; then again, the 304 may be small enough to take down easily). But you'd really have to keep track of the hard disks to know which one has issues, if that happens.

                    I could get that HP as the hardware is cheap and run Windows on it, but as it already has controller-based RAID1 (and not JBOD), there is little point in putting DrivePool on top, so why go Windows?
                    And then: if I go custom-built, I can put whatever system on there and mirroring is not an issue, so why get an expensive Windows license when I would not run any other software on it?

                    Perhaps I'm just overthinking it, as any mirrored storage would be fine.

                    Still, can anyone answer: if you have a mirror set up in the SATA host adapter (e.g. Intel or Marvell), can you then take out a single hard disk and read it in a different system?
                    Last edited by VJ; 25 October 2017, 01:11.
                    pixar
                    Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



                    • #25
                      I am not sure about the requirements here. With DP, you can easily expand the Pool (or add another Pool) by adding new disks (of any size). I don't think it is that simple with RAID but I could well be wrong.

                      And yes, if you want a potentially large storage server, then the case may become a pricey issue. There are 4U rackmount cases that offer, I dunno, 16 or 20 hot-swap bays. But in reality you only need one hot-swap bay IMO, and that is for a backup HDD that can rotate off-site (or use some cloud thingy, but I don't like that).

                      You could for instance do:
                      1. Case https://www.alternate.nl/NZXT/Source...265795?lk=9309
                      2. https://www.alternate.nl/Icy-Dock/MB...407688?lk=9308, less than EUR 110 in all.

                      Of course, for the offsite rotation, you do need to be able to access the server.

                      On headless: wouldn't TeamViewer be an alternative to RDP?

                      And it really depends: this solution would offer 7 HDDs for a pool (one is used for the OS, presumably an SSD, or perhaps an M.2, in which case you'd have space for 8 + 1 backup HDD). But do you envisage using/needing this? The upside is, should you have old HW around (MB, CPU, MEM and..... HDDs!), then I guess you'd be all set to go. I'm not even sure you would need Windows 10. I guess all you need to do is share space, which, I assume, W7 can do as well.
                      Join MURCs Distributed Computing effort for Rosetta@Home and help fight Alzheimers, Cancer, Mad Cow disease and rising oil prices.
                      [...]the pervading principle and abiding test of good breeding is the requirement of a substantial and patent waste of time. - Veblen



                      • #26
                        I would just back up via the network to another computer.

                        The biggest problem is the space for the case: it is high up and difficult to reach. So I don't want a heavy case that would be difficult to put there and to work on while standing on a ladder. Taking it down would require disconnecting it, and if the case is too big there is not enough maneuvering space. A light, small case could be taken down more easily if needed. I know this won't happen that often, but changing disks or even just cleaning the fans etc. should be possible.

                        As I don't need to recycle old hardware, and there are interesting mini-ITX mainboards, these may be interesting:

                        [alternate.nl product links, among them the Fractal Design Node 304: a black desktop case that takes an ATX power supply and six internal 3.5"/2.5" drives]

                        or some of the even cheaper cfi models I posted earlier, if I could find them.

                        Edit: I forgot to mention: TeamViewer shows a code on the computer being controlled, and to control it remotely you need that code. The alternative would be VNC, but I have had issues with not being able to connect to a computer after it rebooted. Due to the location of the case, temporarily connecting a monitor is an issue (of course, less of an issue if the case can be taken down easily).

                        I'm still repeating my question, as it is something I cannot seem to find an answer to online... If you have a mirror set up in the SATA host adapter (e.g. Intel or Marvell), can you then take out a single hard disk and read it in a different system?
                        Last edited by VJ; 25 October 2017, 03:56.
                        pixar
                        Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



                        • #27
                          I don't know.

                          I thought it was possible to have a fixed code with TeamViewer: set it once on the server and you're good to go (that was my thinking).

                          WRT mirroring (I assume RAID1), how would you go about it once you need to expand storage? Replace the disks? Add a new pair as a separate mirror?
                          Join MURCs Distributed Computing effort for Rosetta@Home and help fight Alzheimers, Cancer, Mad Cow disease and rising oil prices.
                          [...]the pervading principle and abiding test of good breeding is the requirement of a substantial and patent waste of time. - Veblen



                          • #28
                            The code stays the same, but the password is generated on every launch. The option you mention is called Easy Access; it is only available in the full version of TeamViewer and requires internet access (it uses your TeamViewer ID).

                            At the moment I'm not too bothered about expanding, as my storage needs increase slowly, or rather in jumps. I would not have an issue with the storage being several storage points rather than one pool. Expanding is then a matter of either adding a second mirror or replacing the disks in a mirror. I know it is not as elegant as the DrivePool solution, but it is certainly sufficient for my needs for now. Changing to a pool is a software matter, and I don't want it too low-level in the file system (I want the individual disks to remain readable, as in DrivePool or maybe Greyhole), so that is something I can decide on later if I want it.
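                            If the mirror ends up being Linux software RAID rather than a controller, the "replace the disks" route looks roughly like this sketch (placeholder device names, ext4 assumed; an Intel/Marvell onboard mirror would be handled from the controller's own utility instead):

#!/usr/bin/env python3
"""Sketch: grow an mdadm RAID1 mirror by swapping in bigger disks one at a
time. /dev/md0, the sdX names and the ext4 filesystem are assumptions."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Attach the first new (larger) disk and copy the old member onto it.
run(["mdadm", "/dev/md0", "--add", "/dev/sdc1"])
run(["mdadm", "/dev/md0", "--replace", "/dev/sda1", "--with", "/dev/sdc1"])
# ...wait until /proc/mdstat shows the copy has finished, then repeat for /dev/sdb1.

# Once both members are the larger disks, use the extra space.
run(["mdadm", "--grow", "/dev/md0", "--size=max"])
run(["resize2fs", "/dev/md0"])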

                            Funny how the thread went from redundancy, to cases, ... But very interesting to me...

                            Still repeating the one open question: if you have a mirror set up in the SATA host adapter (e.g. Intel or Marvell), can you then take out a single hard disk and read it in a different system?
                            pixar
                            Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



                            • #29
                              The problem with Windows is security. See the Hillary server, the NSA, backdoors, the WannaCry 0-day which was known to hackers and states for a long time. Why pay for a shit OS when you can get an enterprise-grade OS for free? Also see the TeamViewer hack and the CCleaner malware (not related, but it sums up the state of the ecosystem).

                              I'm actually considering building an air-gapped network on which the storage and an offline workstation would sit.

                              IMO, if you won't go with ZFS, use onboard RAID or install CentOS and create multiple RAID1 arrays. You can use inexpensive 5400RPM drives and skip the fancy LSI controller, cache, battery and enterprise drives. Those only pay off once you go to 6 drives or beyond.

                              Then create simple password-protected Samba shares and use SSH to manage it - SSH is built into Windows 10, macOS and Linux, and you can get phone apps for it. Even with the learning curve, I think you can do it over a weekend. It is very portable, as you can use any Linux (a USB pen drive on another machine) to quickly recover data. Btrfs in RAID1 is safe now; it's built into the kernel and has better features than ext4.
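                              What that recovery looks like from a live USB, as a sketch (placeholder device names, run as root). mdadm also reads Intel onboard (IMSM/RST) metadata, which is relevant to VJ's question; whether a Marvell mirror is recognised depends on its metadata format:

#!/usr/bin/env python3
"""Sketch: inspect and mount one member of a software mirror from a live USB.
/dev/sdb1, /dev/md0 and the mount point are placeholders; run as root."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Show whatever RAID metadata the disk carries. mdadm understands its own
# formats and Intel IMSM containers; a proprietary controller format may not show up.
run(["mdadm", "--examine", "/dev/sdb1"])

# Assemble whatever arrays it finds, then mount the result read-only.
# Check /proc/mdstat for the actual md device name; md0 is just a guess.
run(["mdadm", "--assemble", "--scan"])
run(["mkdir", "-p", "/mnt/rescue"])
run(["mount", "-o", "ro", "/dev/md0", "/mnt/rescue"])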

                              Set it to auto-update and find a script that will email you if a drive fails.
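                              A minimal version of such a script, as a sketch (the addresses and the local mail relay are placeholders); mdadm's own --monitor mode with MAILADDR in mdadm.conf does the same thing out of the box:

#!/usr/bin/env python3
"""Sketch for a cron job: mail me if /proc/mdstat shows a degraded mirror.
Addresses and the localhost SMTP relay are placeholder assumptions."""
import smtplib
from email.message import EmailMessage

with open("/proc/mdstat") as f:
    mdstat = f.read()

# A healthy two-disk RAID1 shows "[UU]"; an underscore marks a missing member.
if "_" in mdstat:
    msg = EmailMessage()
    msg["Subject"] = "RAID degraded on the storage box"
    msg["From"] = "storage@home.lan"
    msg["To"] = "me@example.com"
    msg.set_content(mdstat)
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)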

                              Once the optics in Poland get connected, build another box like this there and use rsync. So even if the Arabs overrun Spain or the Russians overrun Poland, you're still OK.
                              Last edited by UtwigMU; 25 October 2017, 14:38.



                              • #30
                                It is a home situation on a closed network. I agree on security matters, but also feel I should not be too paranoid.

                                At the moment I'm testing Ubuntu Server 17.10. I installed Webmin on top of it, and it looks very nice. Now I'm trying to see what other functionality I can add to it: ideally it would be my media server, which means running Logitech Media Server (for music to my Squeezeboxes), Serviio (video files) and TVHeadend (as TV tuner). The main reason I want to put that functionality on this server is that it would mean just one computer running most of the time, and I plan to make it a low-power one. None of these servers needs many resources unless it has to transcode, but that is something I can avoid (I have control over all the clients). Ubuntu 17.10 is quite recent, so I may have to compile Logitech Media Server myself due to some Perl issue (but I will wait; I would guess the package will get updated soon, and certainly by the time I set up the server).
                                For my home automation system there is a Debian distribution that adds the functionality (intended for the Raspberry Pi), but there is a normal PC version too. It is very undemanding on resources, but it would be nice if I could run it virtualised (KVM, VirtualBox, or even QEMU to emulate the Raspberry Pi - KVM seems the best option, so the first to try), so I still have to check whether I can add that one - it may be better just to get a Raspberry Pi. A personal cloud could be interesting, but then security becomes a big issue.
                                So at the moment, I'm checking how far I can get with Ubuntu Server...
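                                On the KVM-versus-VirtualBox-versus-QEMU question, a quick check (a sketch using standard Linux paths) tells you whether the box supports hardware virtualisation at all:

#!/usr/bin/env python3
"""Sketch: quick check whether this machine can do KVM at all."""
import os

with open("/proc/cpuinfo") as f:
    flags = f.read()

has_vt = ("vmx" in flags) or ("svm" in flags)   # Intel VT-x or AMD-V
kvm_ready = os.path.exists("/dev/kvm")          # kvm module loaded

print("CPU virtualisation extensions:", "yes" if has_vt else "no (check the BIOS)")
print("/dev/kvm present:", "yes" if kvm_ready else "no (modprobe kvm_intel / kvm_amd)")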

                                Optics are connected in Poland (don't tell me you missed the photo?! ), but I don't have a good connection here in Spain.

                                Possibly interesting case:

                                You can add drive cages. Nothing hot-swap, but the general format of the case would make it well suited to my application: it is low and deep, and you can quite easily access it from the side.
                                Last edited by VJ; 26 October 2017, 03:57.
                                pixar
                                Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)

