I can’t say for sure, but there’s a good chance I might have a problem.

The main picture attached to this post is a pair of dual-bifurcation cards, each with a pair of Samsung PM963 1T enterprise NVMes.

They are going into my R730xd, which… is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my R730xd supports bifurcation! LOTS of bifurcation.

As a result, it now has more HDDs and NVMes than I can count.

What’s the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs…

Figured I would share. Seeing a bunch of SSDs is always a pretty sight.

And, as of two hours ago, my particular Lemmy instance was migrated to these new NVMes, completely transparently too.

  • Decronym@lemmy.decronym.xyzB · 46 points · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    NVMe            Non-Volatile Memory Express interface for mass storage
    PCIe            Peripheral Component Interconnect Express
    SATA            Serial AT Attachment interface for mass storage
    SSD             Solid State Drive mass storage

    4 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.

    [Thread #13 for this sub, first seen 8th Aug 2023, 21:55] [FAQ] [Full list] [Contact] [Source code]

    • brygphilomena@lemmy.world · 5 points · 1 year ago

I think I’m at 7x 18TB drives. I’m slowly replacing all the smaller 8TB disks in my server. Only 5 more to go. After that, it’s a new server with more bays and/or a JBOD shelf.

      • iesou@lemm.ee · 1 point · 1 year ago

That’s my next step. I have 8x 8TB drives I need to start swapping, 2x 512 NVMes for system/app cache, and 1x 2TB NVMe for media cache.

    • Millie@lemm.ee · 7 points · 1 year ago

I dream of this kind of storage. I just added a second M.2 with a couple of TB on it, and the space is lovely, but I can already see I’ll fill it sooner than I’d like.

      • HTTP_404_NotFound@lemmyonline.comOP · 5 points · 1 year ago

      I will say, it’s nice not having to nickel and dime my storage.

      But, the way I have things configured, redundancy takes up a huge chunk of the overall storage.

      I have around 10x 1T NVMe and SATA SSDs in a Ceph cluster. 60% storage overhead there.

      Four of those 8T disks are in a ZFS striped mirror / RAID 10. 50% storage overhead.

      The 4x 970 EVO / EVO Plus drives are also in a striped-mirror ZFS pool. 50% overhead.

      But still PLENTY of usable storage, and highly available at that!
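The overhead figures above follow directly from the pool layouts. A minimal sketch of the arithmetic (drive counts and replica settings are approximations taken from this comment, not an exact inventory; a plain 3-replica Ceph pool would actually give ~66.7% overhead, so the quoted 60% likely reflects a mix of pool settings):

```python
def mirror_usable(drives: int, size_tb: float) -> float:
    """ZFS striped mirror (RAID 10): half the raw capacity goes to redundancy."""
    return drives * size_tb / 2


def replicated_usable(raw_tb: float, replicas: int) -> float:
    """Ceph replicated pool: usable capacity is raw divided by replica count."""
    return raw_tb / replicas


# 4x 8T disks in a striped mirror -> 16T usable (50% overhead)
zfs_usable = mirror_usable(4, 8.0)

# ~10x 1T SSDs in Ceph, assuming 3 replicas -> raw/3 usable
ceph_usable = replicated_usable(10 * 1.0, 3)

print(zfs_usable)              # 16.0
print(round(ceph_usable, 2))   # 3.33
```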

    • joel@aussie.zone · 4 points · 1 year ago

    Love this. Apart from hosting an instance, what are you using it for? Self-cloud?

      • HTTP_404_NotFound@lemmyonline.comOP · 4 points · 1 year ago

      I host a few handfuls of websites and some Discord bots.

      I hoard Linux ISOs. I use it for general-purpose learning and experimentation.

      There is also Kubernetes running, source control, and a bit of everything else.

    • webuge@lemmy.dbzer0.com · 2 points · 1 year ago

Well, this seems to be a good problem to have, haha. If you need to get rid of some of those SSDs, count me in.

    • krolden@lemmy.ml · 1 point · 1 year ago

    Having a large flash pool really makes your life so much better.

    Until you fill up all your space and have to buy more :p

      • HTTP_404_NotFound@lemmyonline.comOP · 1 point · 1 year ago

      Hopefully that doesn’t happen soon! I don’t have too much room for more flash, lol.

      But I have quite a bit of available space, so there shouldn’t be any concerns. Also, tomorrow, after a few adapters arrive, I’ll be adding another 2x 1T flash drives to my OptiPlex 5060 SFF.

      • HTTP_404_NotFound@lemmyonline.comOP · 1 point · 1 year ago

      What’s the problem?

      Each NVMe uses 4 lanes. For each of these x8 slots, they have two NVMes, for a total of 8 lanes.

      The x16 slot already has 4x NVMe in it, lol. The other x16 slot has a GPU, which is located in that particular slot due to the lovely 3d-printed fan shroud.

      One of the other full-height x8 slots also has a PLX switch and is loaded with 4 more NVMes.
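A quick back-of-the-envelope lane tally for the layout described above (a sketch; the slot labels are mine, and the PLX figure assumes the switch uplinks at x8 even though it fans out to 16 downstream lanes):

```python
LANES_PER_NVME = 4  # each NVMe drive uses a x4 link

# Hypothetical slot inventory based on the comment above.
slots = {
    "x8 bifurcation card #1": 2 * LANES_PER_NVME,  # 2 NVMes -> 8 lanes
    "x8 bifurcation card #2": 2 * LANES_PER_NVME,  # 2 NVMes -> 8 lanes
    "x16 quad-NVMe card":     4 * LANES_PER_NVME,  # 4 NVMes -> 16 lanes
    # 4 NVMes sit behind the PLX switch, but the switch itself
    # only consumes the slot's 8 upstream lanes.
    "x8 slot with PLX switch": 8,
}

total = sum(slots.values())
print(total)  # 40 host lanes consumed by NVMe storage
```

This is why the PLX card is attractive: it trades peak per-drive bandwidth for drive count without spending more host lanes.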