• Solid-state drives — from bespoke to commodity


    • This topic has 69 replies, 14 voices, and was last updated 1 year ago.
    #2448434

    HARDWARE | By Ben Myers

    Solid-state drives (SSDs) have a surprisingly long history, leading up to the types commonly in use today. It takes some plannin…
    [See the full post at: Solid-state drives — from bespoke to commodity]

    7 users thanked author for this post.
    Viewing 23 reply threads
    • #2448450

      In 2012 I made an investment of over $2500 in SSDs to build a high-end workstation around nearly 2 TB of SSD storage (total system cost was near $6000). This consisted of 4 x 480 GB top-quality OCZ SSDs arranged into a RAID 0 array via a fast PCIe RAID controller. In time I added another 4 of those same drives to that system.

      SSD was the best investment in computing I ever made. Bar none.

      That computer was a Dell Precision 5500 with a pair of the fastest Xeons available and a plethora of top-end parts, including 48 GB of ECC RAM, nVidia Quadro GPU, multiple big monitors, hyper-fast BD-R optical drive, you name it. It was the prime workstation from which I ran my software company for 8 years.

      Between the solid state drive array and the big RAM, the practical I/O throughput of this workstation – the speed at which files could actually be copied soup to nuts – was nearly 2 gigabytes per second, with consistently near-zero latency. The system responded virtually instantaneously to everything I asked of it, and it just didn’t bog down no matter how much work I gave it. Though I always kept great backups, I never needed them: it served me well without ever having even a single I/O glitch. Virtually the only reboots year after year were for Windows Updates or driver update installs.

      The extreme usability of a system like that – awakened through the use of SSD storage – for demanding computing needs was phenomenal.

      I finally shut it off in 2020, still running a highly tuned Windows 8.1, having built a newer system based on the same goals, but to run Windows 10. Being brutally honest, the sheer responsiveness of my older workstation was not noticeably worse than this new one, which is based on a RAID 0 array of NVMe M.2 drives.

      The numbers say my current workstation is some 5 times faster in terms of raw I/O speed, but the OS simply can’t take advantage of it. Windows has gotten less efficient by leaps and bounds, though in the end this newer Dell Precision 7820 can pump files around at about 3 gigabytes per second, and does do several other things (like compiling large C++ projects) somewhat faster.

      Moral of the story:

      If you want to wake any system up so that disk I/O is no longer the bottleneck to high performance, run it from fast solid state storage.

      Once you use a system based on SSD/flash you will never again be able to stand using a system with yesteryear’s spinning platter hard drives.

      -Noel

      2 users thanked author for this post.
    • #2448496

      I am typing this comment using a Mac (described below in my signature panel). It is the first computer with an SSD I have ever had, and in a couple of weeks it is going to be five years old.

      I had dithered quite a while about getting a machine with an SSD, because I was not sure what advantages and drawbacks this might bring (“there is always somethin’”). In particular, having read about it, I was not clear on whether the degradation of the stored data was the same, better, or worse than with a hard disk.

      I have had this laptop for quite some time, and ever since I bought it, I have had no problems traceable to the SSD: the mass storage device has been performing just as well as the HDs in my previous computers, except that it does not need to be defragged: definitely a plus. This machine is faster than its 8-year-old predecessor, an HP Pavilion running Windows 7 and advertised as a fast “gaming computer.” But this greater speed is hard to pin down to any single factor among those that make this newer machine run faster: faster RAM, clock, bus, etc., as well as the SSD.

      So whether this Mac is faster mainly because of the SSD or for some other reason is not clear to me. But the good news is that the SSD, so far, is proving itself to be at least as reliable as the HDs in my three previous machines, each in use for six to eight years without significant hardware problems before being replaced with a new one.

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      1 user thanked author for this post.
    • #2448503

      Ben,
      You will find this interesting

      I recently upgraded a Dell 790 and a Dell 990 to boot from a PCIe M.2 NVMe drive.  Neither Dell originally had BIOS support for NVMe drives.

      I found an excellent tutorial and upgrade files on Paul Murana’s website.  He detailed all the steps needed to modify the Dell BIOS, to add the driver for NVMe drives.  Link below.

      Carefully following the steps, I was able to do the “concerning” upgrade to both Dells without incident.  There are two situations that could cause frustration.
      1st – Use “Cmd” rather than PowerShell, since PowerShell would not run one of the “tools”.
      2nd – Installing a clean Windows 10 on your new NVMe drive is good, but if you wish to clone your OS to the NVMe, the drive must be formatted GPT and boot via UEFI.  An MBR-formatted drive will not work on any NVMe.  My existing drives were MBR.  I tried a few programs (AOMEI for one) that promised to convert MBR to GPT without data loss.  They did create GPT drives with the data, BUT, due to partitioning, they were NOT bootable.  I did a clean install of Win10.  Windows 11 will not work, because of the CPU restrictions from Microsoft.

      I’m quite happy with the performance of both computers.  They both have I7 CPUs and 8 GB ram.  They seem to respond as well as my Dell 7070, NVMe, I7, 16Gb unit.

      For anyone interested, please consider donating to Paul.  He put a lot of work into this.
      Here’s the Link

      https://www.tachytelic.net/2022/02/dell-optiplex-790-990-nvme/

      1 user thanked author for this post.
      • #2448731

        Oh, my!  What Paul Murana did is not for the faint-hearted, is it?  Very innovative BIOS hack indeed!  It makes me wonder about another BIOS issue that has perplexed me, the white list of wifi cards authorized for use in most models of laptops and many name brand desktop/tower systems.  If the card you put in the system is not in the list, the system refuses to boot up.  If wifi card models could be added to the white list, e.g. 802.11ax cards, or the white list bypassed, it would be a breakthrough.  I am not sure I want to sacrifice a working laptop to this kind of effort, though.

        1 user thanked author for this post.
        • #2448737

          Some Highpoint NVMe add-in cards are bootable with a standard motherboard BIOS.

          The sequence we followed to add a SSD7103 to an HP Z220 workstation was straightforward:

          (1)  insert add-in card in Gen3 x16 slot

          (2)  load device driver when prompted at STARTUP

          (3)  format a RAID-0 array, as usual

          (4)  “migrate OS” using Partition Wizard software

          (5)   re-boot into motherboard BIOS and change boot drive

          Highpoint’s web page:

          https://www.highpoint-tech.com/gen3-nvme-m2-bootable

          1 user thanked author for this post.
        • #2449013

          I am not aware of any laptop manufacturers that still put whitelists for wifi cards in the firmware. HP and Lenovo both used to do that, but have supposedly stopped the practice. If any manufacturers are still doing this, I would be very interested in knowing which ones.

          Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon
          XPG Xenia 15, i7-9750H/16GB & GTX1660ti, KDE Neon
          Acer Swift Go 14, i5-1335U/16GB, KDE Neon

          • #2449016

            Lenovo still does it, or maybe they stopped recently?  I attempted to put a Lenovo-branded (with Lenovo FRU on the label) wifi6 card into my wife’s elderly Lenovo Thinkpad W540 and it refused to boot, the BIOS telling me that the card was not compatible.

      • #2448893

        As the TV pitchman says: “That’s not all, folks!”  Scroll down to the bottom of the home page https://www.tachytelic.net/ and you will see a total of 4 blog posts explaining how to modify Dell Optiplexes and a Compaq Elite SFF to do the same booting of a system from an NVMe device mounted on a card like I used.  He started with the Optiplex 9020/7020/3020, then the 9010/7010/3010, the Compaq Elite, and lastly the Optiplex 790/990, working backwards from newest to oldest.  This is great if you want to try this with an older Optiplex.  About 2 years ago, I was satisfied with simply upgrading a number of Optiplex 9020/7020/3020 systems with SATA SSDs.

        1 user thanked author for this post.
        • #2448944

          If you read the Comments it appears the author never really answered this key question:

          “How do you get those speeds its only pcie 2.0, more like 1650mps ?”


          In HP’s workstation documentation, the motherboard diagrams show the number of “mechanical” lanes and the number of “electrical” lanes for each PCIe expansion slot.

          For example “x16(4) Gen2” means x16 mechanical lanes and x4 electrical lanes, PCIe 2.0 .

          But, I don’t see the same details in the Maintenance and Service Guide he cites:

          http://h10032.www1.hp.com/ctg/Manual/c03612798.pdf


          PCIe 2.0 uses a 5G clock and an 8b/10b “legacy frame”;  therefore:

          x4 @ 5G / 10 bits per byte  =  2,000 MB/second MAX HEADROOM
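          For anyone who wants to sanity-check that arithmetic, here is a minimal Python sketch of the same calculation (the variable names are illustrative; the 5G clock and 10-bits-per-byte encoding are the figures above):

          # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding = 10 bits per byte on the wire
          lanes = 4             # electrical lanes assigned to the slot
          gt_per_s = 5          # gigatransfers per second, per lane
          bits_per_byte = 10    # 8b/10b "legacy frame"

          # raw payload ceiling, ignoring protocol and controller overhead
          print(gt_per_s * 1000 / bits_per_byte * lanes)   # 2000.0 MB/second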


          Another consideration worth mentioning is that HP provides their “HP Support Assistant”.  Depending on the options chosen, that software may upgrade an HP BIOS automatically.  If that happened, his custom mod would be overwritten with a standard BIOS.

          I would expect that an expert capable of this BIOS mod would already realize the need to DISABLE that option in “HP Support Assistant”.

        • #2449014

          I’ve given more thought to the measurements reported above:

          A READ speed of 3,421.68 MB/second is very common for that model of Samsung NVMe M.2 SSD.

          HOWEVER, the chipset in that HP PC cannot oscillate PCIe 2.0 lanes fast enough to achieve that result.

          It is possible that the author displayed the wrong screen shot, either intentionally or otherwise.

          Another remote possibility is that his modified BIOS was somehow able to “overclock” 4 PCIe 2.0 lanes — by increasing the clock to 8G AND by changing to jumbo frames.

          Not only could this void any existing warranty(s);

          it would probably stress the chipset sufficiently to hasten burnout.

          One absolute constant in this technology is one binary digit per clock tick.

          A 5,000 MHz clock is capable of transmitting a MAXIMUM of 5,000 Mb/second.

          Thus, each PCIe 2.0 lane oscillates at 5G and transmits 10 bits per byte:

          (5,000 Mb/second per lane / 10 bits per byte) x 4 lanes  =  2,000 MB/second MAX HEADROOM.


      • #2449613

        2nd – Installing a clean Windows 10 on your new NVMe drive is good, but if you wish to clone your OS to the NVMe, the drive must be formatted GPT and boot via UEFI. An MBR-formatted drive will not work on any NVMe. My existing drives were MBR. I tried a few programs (AOMEI for one) that promised to convert MBR to GPT without data loss. They did create GPT drives with the data, BUT, due to partitioning, they were NOT bootable. I did a clean install of Win10. Windows 11 will not work, because of the CPU restrictions from Microsoft.

        Are you saying that on a NVMe SSD you can’t use MBR but must use GPT to partition the disk?

        That is certainly not my own experience at all. I have two computers at home, one based on a X99 motherboard and the other based on a B365 motherboard. On both machines I boot Windows 8.1 from a NVMe SSD formatted using the MBR scheme. No problems in booting at all.

        Hope for the best. Prepare for the worst.

        • #2449620

          There should be no reason you can’t use MBR on an NVMe disk as long as the BIOS supports it. Disk formats are standard.

          cheers, Paul

          1 user thanked author for this post.
    • #2448506

      I remember the first Quantum Technologies flash drives, in the late ’90s. A mere mortal couldn’t afford them, but they held 1.5GB of flash memory and were 5.25″ form factor (I can’t remember if that was half- or full-height, and I can’t recall the interface, though my guess would be SCSI for maximum bandwidth). I haven’t been able to find information or pictures of them in recent years (Maxtor acquired Quantum’s drive business, then some time later Seagate acquired Maxtor), so I doubt many sold.

      The thought was really exciting at the time. Of course, today, SSDs are orders of magnitude larger, faster, and cheaper.

      We are SysAdmins.
      We walk in the wiring closets no others will enter.
      We stand on the bridge, and no malware may pass.
      We engage in support, we do not retreat.
      We live for the LAN.
      We die for the LAN.

      1 user thanked author for this post.
    • #2448523

      Excellent summary, Ben!

      Perhaps in a follow-up supplement you could explore and explain some of the finer points of SSDs installed in PCIe expansion slots e.g. the “4×4” add-in cards are producing FANTASTIC performance numbers.

      We had no trouble upgrading an HP workstation with a Highpoint SSD7103 and 4 x Samsung NVMe M.2 SSDs.  READs are clocked above 11,000 MB/second.

      That SSD7103 performs so well, it eliminated the need for a ramdisk, for our purposes.

      What I like most about the Highpoint models is their decision to add support for booting from these AICs withOUT needing bifurcation support in the motherboard’s BIOS.

      Thus, your readers may also benefit from a good summary of “bifurcation”.

      Also, wiring PCIe slots directly to the CPU socket has eliminated the need for dedicated hardware RAID controllers.  With multi-core CPUs proliferating, there is almost always an idle or semi-idle core that can handle the I/O overhead on the PCIe bus.

      Lastly, it’s worth mentioning that PCIe 3.0 switched from the 8b/10b “legacy frame” to the 128b/130b “jumbo frame”.  This one change removed a lot of transmission overhead that dates back to dial-up modems and such.

      Whether unfortunate or not (depending on one’s experiences and preferences), the SATA standard has been stuck at 6G and the 8b/10b legacy frame for what seems like FOREVER.

      (Imho, that “freeze” may imply an oligopoly among IT hardware vendors.)

      Arguably the single most important feature of NVMe technology is that it effectively SYNCS the chipset with the storage device.

      The latter development opened the door to PCIe 4.0 and PCIe 5.0, which are delivering performance that was mostly theoretical only 10 years ago.


      1 user thanked author for this post.
      • #2448533

        p.s.

        “legacy frame” (8b/10b) encodes each 8-bit byte as a 10-bit symbol on the wire:

        8 data bits encoded into 10 transmitted bits = 10 bits per byte

        “jumbo frame” (128b/130b) adds a 2-bit sync header to each 128-bit (16-byte) block:

        =  130 bits / 16 bytes = 8.125 bits per byte

        Thus,

        modern SATA is legacy frame at 6G x 1 serial channel

        NVMe Gen3 is jumbo frame at 8G x 4 serial channels

        NVMe Gen4 is jumbo frame at 16G x 4 serial channels

        NVMe Gen5 is jumbo frame at 32G x 4 serial channels
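
        To make that table concrete, here is a small Python sketch that derives each ceiling from the clock and the encoding; the helper function is illustrative, the 10 and 8.125 bits-per-byte figures come from the frame math above, and zero controller overhead is assumed:

        # bits per byte on the wire: 8b/10b = 10, 128b/130b = 130/16 = 8.125
        def ceiling_mb_s(gt_per_s, bits_per_byte, lanes):
            # raw payload ceiling in MB/s, assuming zero protocol overhead
            return gt_per_s * 1000 / bits_per_byte * lanes

        print(ceiling_mb_s(6, 10, 1))       # SATA III:     600.0
        print(ceiling_mb_s(8, 8.125, 4))    # NVMe Gen3:  ~3938.5
        print(ceiling_mb_s(16, 8.125, 4))   # NVMe Gen4:  ~7876.9
        print(ceiling_mb_s(32, 8.125, 4))   # NVMe Gen5: ~15753.8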


      • #2448732

        I think that the freeze of SATA at 6G is more likely a pragmatic decision to put relatively scarce engineering resources to work on the newer and inherently faster PCIe interface.  But one has to wonder how difficult the engineering challenge would be to use a jumbo frame with SATA.  The answer is that it would require many, many changes to many, many BIOSes, plus SATA SSDs that fall back to a legacy frame upon finding that the motherboard BIOS cannot handle jumbo frames.  In other words, it is a serious engineering-and-testing effort, and nobody wants to expend the resources to update the SATA standard.

        You lost me there on BIOS bifurcation.  Never heard the term before, keeping my head down and my nose inside computers most days.

        1 user thanked author for this post.
        • #2448739

          re:  bifurcation

          a full x16 PCIe 3.0 expansion slot usually transmits raw data over 16 serial “lanes” in parallel, like the vast majority of video cards

          almost all NVMe devices only require x4 serial “lanes”

          therefore, x16 lanes can be viewed as “4×4” NVMe devices i.e.

          x16 is “bifurcated” into 4 drives each using x4 lanes:  4×4 = 16

          choosing either x16 or “4×4” requires support in the motherboard BIOS; and that support typically comes with more expensive server and workstation motherboards;  the latter allow “passive” 4×4 add-in cards to present 4 discrete NVMe drives to the OS

          Highpoint developed a way around this “bifurcation” requirement by implementing the necessary logic in their add-in cards and device drivers.

          Therefore, there is no need for a motherboard BIOS to support “bifurcation” if the User installs a “bootable” Highpoint add-in card.

          The Highpoint cards that are bootable handle the “bifurcation” from x16 to “4×4” internally.  Our SSD7103 model does this;  and, from reading their website I believe their SSD6202 and SSD6204 models also do this.


          further reading:

          https://shuttletitan.com/miscellaneous/pcie-bifurcation-what-is-it-how-to-enable-optimal-configurations-and-use-cases-for-nvme-sdds-gpus/

          1 user thanked author for this post.
          • #2448743

            p.s.  I should add that Highpoint also makes other add-in cards that are not bootable but that likewise circumvent the “bifurcation” requirement.

            These other cards are designed for data storage only, NOT for hosting a bootable OS.

            From memory I recall that some of these other cards can be installed in multiple PCIe expansion slots, and RAID arrays can span multiples of these add-in cards.

            One model, in particular, has 8 x NVMe sockets.  Thus, 2 such add-in cards support 16 discrete NVMe M.2 drives, and a RAID array can span all 16 drives!

          • #2448747

            here’s the ASUS Gen4 version of their passive 4×4 add-in card:

            this type of “passive” card has no integrated controller; as such,

            it simply divides x16 lanes into four x4 sockets (“4×4”)

            and therefore requires “bifurcation” support in the motherboard BIOS:

            https://www.asus.com/ca-en/Motherboards-Components/Motherboards/Accessories/HYPER-M-2-X16-GEN-4-CARD/
            HYPER M.2 X16 GEN 4 CARD


        • #2448741

          I may be naive about this, but a “SATA-IV” standard could support generalized “auto-detection” of key operating parameters e.g. clock rate, frame size etc.

          Quality SATA drives already “auto-detect” the clock rate of the host:  1.5G, 3G and 6G

          If a standard NVMe device can use 4 lanes @ 8G with jumbo frames,

          then a SATA-IV SSD could use 1 lane @ 8G with jumbo frames.

          The point here is to “sync” the storage device with the chipset.

          8G / 8.125 bits per byte  =  984.6 MB/second  (i.e. x1 lane, PCIe 3.0)

          16G / 8.125 bits per byte  =  1,969.2 MB/second  (i.e. x1 lane, PCIe 4.0)

          32G / 8.125 bits per byte  =  3,938.4 MB/second (i.e. x1 lane, PCIe 5.0)

          The latter are significant improvements over SATA-III’s ceiling of 600 MB/second (i.e. 6G / 10).

          The latest USB standards now utilize a 128b/132b “jumbo frame” @ 10G.

          Those changes weren’t too difficult for USB developers.
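
          Treating those proposals as inputs, a short Python sketch reproduces the numbers (the “SATA-IV” parameters are the hypothetical ones above, not a published standard; the USB figure assumes a 10G link with 128b/132b encoding):

          def ceiling_mb_s(gt_per_s, bits_per_byte, lanes=1):
              # raw payload ceiling in MB/s, zero protocol overhead assumed
              return gt_per_s * 1000 / bits_per_byte * lanes

          # hypothetical single-lane "SATA-IV" with 128b/130b jumbo frames
          for clock in (8, 16, 32):
              print(clock, ceiling_mb_s(clock, 8.125))   # 984.6 / 1969.2 / 3938.5

          # USB at 10G with 128b/132b: 132 bits / 16 bytes = 8.25 bits per byte
          print(ceiling_mb_s(10, 8.25))                  # ~1212.1 MB/second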

    • #2448587

      When using a PCIe adapter, another factor to consider is the upstream bandwidth assigned to any one expansion slot.

      Even though certain empty PCIe slots may be x4, x8 or x16 “mechanical”,
      the motherboard manual should be consulted to confirm if there are
      any special limitations that may be enforced by the motherboard BIOS.

      x1 slots are really out of the question, because NVMe almost always
      requires x4 PCIe 3.0 lanes (or higher PCIe Gen).

      (The only exception I can think of was the early Optane M.2 SSDs
      which only used x2 PCIe 3.0 lanes.)

      Also, of equal importance is the PCIe Generation of any one slot.

      PCIe 2.0 oscillates at 5G and still uses the 8b/10b “legacy frame”.

      Thus, EVEN IF a single NVMe M.2 SSD does function with a PCIe adapter,
      its performance ceiling will be dictated by the 5G clock and
      by the 8b/10b “legacy frame”:

      5G / 10 bits per byte = 500 MB/second MAX HEADROOM per lane!

      The best approach is to ensure that any empty x4, x8 or x16 PCIe expansion slot
      supports at least PCIe 3.0 and is assigned at least x4 PCIe 3.0 lanes.

      8G / 8.125 bits per byte = 984.6 MB/second MAX HEADROOM per lane

      A good motherboard manual should always specify these parameters
      for all PCIe expansion slots; and, it should also specify
      under what conditions the motherboard BIOS will make its own
      decisions about the number of PCIe lanes actually assigned
      to any given PCIe expansion slot after STARTUP.

      Just to illustrate, we have one HP Z220 workstation and
      one HP Z240 workstation:
      the former has both PCIe 2.0 and PCIe 3.0 expansion slots;
      the latter has expansion slots that are all PCIe 3.0.

      Hope this helps.
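
      As a rough aid for vetting an empty slot, here is a hedged Python sketch; the table of clocks and encodings follows the figures above, and the function name is illustrative:

      # PCIe generation -> (GT/s per lane, bits per byte on the wire)
      PCIE = {2: (5, 10), 3: (8, 8.125), 4: (16, 8.125), 5: (32, 8.125)}

      def slot_ceiling_mb_s(gen, electrical_lanes):
          # raw ceiling for a slot, ignoring protocol/controller overhead
          gt, bpb = PCIE[gen]
          return gt * 1000 / bpb * electrical_lanes

      # e.g. a slot that is x16 mechanical but only x4 electrical, PCIe 2.0:
      print(slot_ceiling_mb_s(2, 4))   # 2000.0 -- below a Gen3 x4 NVMe ceiling
      print(slot_ceiling_mb_s(3, 4))   # ~3938.5 -- the usual NVMe baseline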

    • #2448655

      I read this article with interest. Can anyone tell me whether it is safe to use SSDs for a (Postgres) SQL database that is hammered 24/7 by a couple of hundred remote users with constant updates, writes, deletes, and index changes (more writes, deletes and rewrites)? It would be great to speed this up, but at the moment users are satisfied with throughput. I’ve always been told that constant writes, rewrites and deletes will cause SSDs to fail quickly. Is this no longer the case?

      1 user thanked author for this post.
    • #2448660

      “hammered 24/7” sounds extreme

      have you considered a large ramdisk?

      a server with 4-channel ECC DRAM should handle that load with ease

      this is the ramdisk software we use:

      https://www.softperfect.com/products/ramdisk/

      it keeps track of changes within memory clusters,

      and it only writes changed clusters to permanent storage

      e.g. at SHUTDOWN and per schedule

    • #2448714

      Database that is hammered 24/7 by a couple of hundred remote users

      That doesn’t sound like a particularly large demand so a fast HDD array should cope without issue.
      An SSD RAID 1 array would also be fine – it would be RAID10 for HDD.

      SSD does not have the short life that early rumours suggested.

      cheers, Paul

      3 users thanked author for this post.
    • #2449392

      PCIe 5.0 SSDs May Get Wider and Require Wider M.2 Slots

      https://www.tomshardware.com/news/pcie-5-ssds-may-get-fatter-require-wider-m2-slots


      Type 25110-D8-M

      1 user thanked author for this post.
      • #2449517

        The article reads:

        “Conventional 2280 SSDs should still fit the wider M.2 slots without problems.”

        So, what is the purpose of the extra 3mm — from 22mm to 25mm?

        I could see 2-sided connectors, much like dual-ported SAS connectors, to allow x8 lanes in parallel instead of the current x4 lanes.

        It’s unfortunate the author did not elaborate on that question.

        And, we didn’t need a mechanical drawing to appreciate the extra 3mm in overall width.

    • #2449418

      Only yesterday it was hard disk packs. (When first I met them in my ill-spent youth, these were the heart of the fast mass storage of an IBM 360/50: the drives were the size of washing machines, and the packs had six or so platters 20″ across piled on top of each other and supported by a central stem, with a separate plastic case with a lock-on lid and a handle on top, used to pick the disk pack off the drive and carry it to its nice and comfy metal cabinet when it was not in use.) This video shows what these packs looked like and how they were made:

      https://www.youtube.com/watch?v=PQwCMDRajJo

      Then came the smaller and smaller HDD, with more and more storage capacity, eventually small enough to be used inside PCs.

      Now we have SSDs: chips instead of spinning disks and all those moving parts: no moving parts any more! All silicon, with assorted metal, metalloid and non-metal atoms mixed within it, in particular those of certain rare earths, all properly distributed to make things work! Much quieter, faster, and with more room for data.

      But what would come next?

      Well, that is a silly question to those of us who know our science fiction well: what comes next is data crystals!

      A data crystal, read with the appropriate equipment, will produce one of those 3-D sound-and-moving pictures of someone about to be murdered in her spaceship by unseen attackers, desperately leaving, for whoever might ever come across the recording, a dire message like: “they killed us all and now they are coming for youu!”

      I don’t know about “they”, but the crystals look like they are coming:

      https://www.pcgamer.com/data-crystals-may-yet-make-the-leap-from-sci-fi-to-real-world-storage/

      https://www.tomshardware.com/uk/news/5d-storage-optical-data-cube

      Excerpt:

      Researchers with the University of Southampton, UK, published research eerily reminiscent of the sci-fi concept of a “data cube” — promising incommensurate storage in a palm-sized device. However, that concept may be much closer to reality than expected, as the research describes a new, high-speed laser method of writing onto 5D structures. The 5D structure, built out of silica glass, can support long-term writings – and achieve storage densities that are 10,000x higher than current Blu-Ray technology.

      The new laser technology enabled the researchers to write in five dimensions – two optical and three spatial. The new approach can achieve write speeds of 1,000,000 voxels per second, the equivalent of 230 kilobytes of data (more than 100 pages of text) per second. That may sound ridiculously slow by today’s standards – just look at the speeds the best SSDs achieve in comparison, such as 5,000 MB per second writes on the Samsung 980 Pro. However, some particular use-cases could benefit very much from such a technology, such as museums, libraries, and sure, the Ark-paradigm in science fiction. Furthermore, this technology actually could translate into real-world, cold-storage applications.

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      1 user thanked author for this post.
      • #2449908

        HA!  My first exposure to a computer was a Burroughs 220 with no disks and a bank of tape drives that went “thub, thub” reading and writing.  Then a Univac 1107 with hardware large enough to fill my house.  I really cut my teeth on a GE 225, starting as tape-only, but with a disk drive added.  The computer initially had 8K of 20-bit memory, later upgraded to 16K.  You had to write tight code.

        The disk drive had two controllers, one the master controller and the other the drive controller.  The master controller allowed for four drive controllers.  All these controllers looked remarkably like GE refrigerators.  Then there was the disk drive itself, which we called a pizza oven.  You could see the disks spinning inside through glass panels on the side.  Individual disks were about 2 feet in diameter.

        When the disk drive arrived and was installed, my boss asked me to do something with it.  There was no file system in those days, not even an operating system.  So I worked out a scheme to store programs on the disk, rather than reading them through the card reader.

        Here is a link to some info about the old GE 200-series computers.

        https://www.smecc.org/g_e__200_series_computers.htm


        1 user thanked author for this post.
    • #2449515

      I am not sure what 4-D might really be; I am certain I have on idea what 5-D could be.

      🍻

      Just because you don't know where you are going doesn't mean any road will get you there.
      • #2449525

        wavy: What’s 4D supposed to mean here? It’s all explained in the articles I’ve linked. But 4D, I would agree, is not that clear. To understand something like this I would need to read a technical paper myself first, with text and figures and equations, because a few words don’t seem enough. But it has to do with the direction in which the light propagates the slowest, and with a soliton, that is, a solitary traveling wave (of “particles”?).

        But I am confident that at least Tom, in his garage, never intentionally lies. As to those at that gaming magazine? Well, you know gamers …

        Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

        • #2449647

          I guess I am just dense, oh no wait that is just my fifth dimension!

          🍻

          Just because you don't know where you are going doesn't mean any road will get you there.
          1 user thanked author for this post.
      • #2449648

        I really meant to say ‘no’ there, not ‘on’, of course. Solitons are a more widespread notion than I was familiar with. Amazing really, thanks for the heads up Oscar!

        🍻

        Just because you don't know where you are going doesn't mean any road will get you there.
    • #2449571

      I am not sure what 4-D might really be; I am certain I have on idea what 5-D could be.

      Umm…

      https://www.youtube.com/watch?v=UKkNlwpajNk

      😉


      1 user thanked author for this post.
      • #2449585

        Cybertooth: I can understand this YT 5-D explanation you have linked, so it must be from about the years when the IBM 360/50 was still around, along with those disk packs I have mentioned. You know? Before singers started singing with their microphones half-way down their esophagi, making the comprehension of the lyrics somewhat difficult. Besides and regardless of that, a general improvement in enunciation is really something to be desired.

        But I must say, all this is somehow distantly related to mass storage media, solid state drives in particular.

        So, to take cover back again under the SSD roof we have wandered away from, here is all about the future of the SSD, before the data crystal moves on from today’s proof-of-concept to an everyday object that can fit in the palm of one’s hand: a piece of shiny, colorful and translucent stone or glass that, with several petabytes (PB) of data inside, can contain all of the written works, all the good movies and decent TV and streaming shows and music, popular or classic, ever written, filmed or recorded in the last century and a half or more, plus the whole of Wikipedia and several other big encyclopedias, and still have room to spare. But probably as “write once, read many times” storage, and perhaps still slow to do either, so one may have to download all the desired content before starting to use it and, maybe, enjoy it. Or learn it.

        The SSD in the yet to come:

        https://www.techradar.com/news/heres-what-an-ssd-in-2025-could-look-like

        Excerpt:

        V-NAND is the most important SSD innovation of the last decade. With cells stacked vertically, SSDs not only had much higher storage density, but also lowered power consumption, while boosting performance, at the same time.

        NAND flash’s big strength is its flexibility, which is why you can find it in everything from USB flash drives and smartphones to SSDs. Also, unlike traditional HDDs, where a certain configuration of platters and reading heads is required, flash drives come in all kinds of shapes and sizes.

        Thanks to advancements in V-NAND, there are also signs that SSDs may be overtaking hard disks when it comes to how much data you can fit in a certain-sized box. For a long time SSDs had significantly less capacity than hard disks, and made up for this with their increased performance. This rule no longer applies thanks to SSDs like the Samsung 860 QVO that offer storage capacities of up to 4TB.

        And at least some of the above, that was written in 2020, is possibly old news by now.

        Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

        2 users thanked author for this post.
    • #2449655

      The present might be a good time to do a post mortem on Intel’s Optane.  I would welcome a few current links here, which point to objective reviews of Intel’s decision to stop further production of Optane products for PCs.

      I believe Optane DIMMs for servers are still available.

      I do remember at least one review that found Optane DIMMs forced the entire regular DRAM subsystem to down-clock to Optane’s maximum;  that problem affected large servers, however.

      Also, if we relax the common definition of “persistent” and/or “non-volatile”, there are plenty of modern UPS options that are reliable enough to keep DRAM powered ON, even though it is a “volatile” storage medium.

      Last time I looked, flash memory is still no match for modern DRAM as far as raw latencies are concerned.  And, DRAM is byte-addressable;  flash memory is not.

      Several years ago, we filed a Provisional Patent Application for a concept that enabled a motherboard BIOS to “format RAM”, thus allowing a fresh OS install to utilize a ramdisk for the Windows C: partition.

      http://supremelaw.org/patents/bios.enhancements/provisional.application.1.htm

      http://supremelaw.org/patents/bios.enhancements/provisional.application.2.htm

      During subsequent routine STARTUPs, those BIOS enhancements would automatically restore a current drive image of the C: system partition.

      This mode is how our ramdisk software functions:  at SHUTDOWN it writes the entire contents of our ramdisk to non-volatile memory (a RAID-0 array of 4 x SSDs), and at STARTUP it automatically restores those entire contents to the ramdisk.

      Initially we envisioned enabling this concept in a large server with plenty of DRAM to spare e.g. 100GB for the OS and 900GB for application programs etc.  A workstation with only 64GB of DRAM would assign too much of that memory to the OS.

      This guy installed Windows 10 on a ramdisk using VM technology:

      https://www.youtube.com/watch?v=MK1OPc3k_cQ

      0:00 – Intro
      0:22 – Preparation
      1:11 – Speed comparison (HDD, SSD, RAM)
      1:54 – Configuration
      2:24 – Installing Windows 10 on RAM
      4:18 – Disk space problem
      7:02 – Final product
      7:32 – Reboot
      7:48 – Outro

      1 user thanked author for this post.
      • #2449657

        Re:  “(Awaiting moderation)”

        Did I say or do something wrong?

        RSVP

        • #2449663

          You left HTML detritus (which may or may not be the problem).

          Carpe Diem {with backup and coffee}
          offline▸ Win10Pro 2004.19041.572 x64 i3-3220 RAM8GB HDD Firefox83.0b3 WindowsDefender
          offline▸ Acer TravelMate P215-52 RAM8GB Win11Pro 22H2.22621.1265 x64 i5-10210U SSD Firefox106.0 MicrosoftDefender
          online▸ Win11Pro 22H2.22621.1778 x64 i5-9400 RAM16GB HDD Firefox114.0b8 MicrosoftDefender
          • #2449667

            fixed

            p.s.  (it got “moderated” but I don’t know why)

            after fixing, it got “moderated” again

            • #2449671

              I think you will have to wait until a moderator drops by to explain it.

              Carpe Diem {with backup and coffee}
              offline▸ Win10Pro 2004.19041.572 x64 i3-3220 RAM8GB HDD Firefox83.0b3 WindowsDefender
              offline▸ Acer TravelMate P215-52 RAM8GB Win11Pro 22H2.22621.1265 x64 i5-10210U SSD Firefox106.0 MicrosoftDefender
              online▸ Win11Pro 22H2.22621.1778 x64 i5-9400 RAM16GB HDD Firefox114.0b8 MicrosoftDefender
              1 user thanked author for this post.
            • #2449675

              Thanks.

              I tried to discuss Optane DIMMs, and a Provisional Patent Application I submitted several years ago for enhancing a motherboard BIOS to “format RAM” and allow a fresh OS install to that ramdisk.

              This guy did the same thing, using a VM:

              https://www.youtube.com/watch?v=MK1OPc3k_cQ

              That “detritus” came from his “Timestamps” et seq.

            • #2449678

              Blending Optane DIMMs with that “format RAM” feature, I also conceived of a way to exploit the former triple-channel DIMM slot chipsets, like this:

              Channels 1 and 2 are both modern dual-channel interleaved (“quad” overall).

              Channel 3 is populated with Optane DIMMs and dedicated to the OS and all application programs (e.g. Windows C: partition).

              This setup would allow Channels 1 and 2 to operate faster and hence NOT be hampered by slower Optane DIMMs in Channel 3.

              And, down at the level of the DC power circuits, Channel 3 could also be separately powered with a dedicated UPS/PSU combination;  although, given the non-volatile nature of Optane DIMMs, such a separate DC power circuit would not be absolutely necessary, starting out.

              Allyn Malventano and I were bouncing this idea around, briefly, before he accepted a new job at Intel.  He understood how a “heterogeneous” DRAM setup could work, with the proper changes to the chipset’s memory logic.


            • #2449690

              This Comment — below the YT video above — really captured the essence of a memory-resident OS i.e. resident in VOLATILE DRAM:


              “Well , I’m wondering what if you backup the image onto the SSD before every shutdown then later mount the image again on to the memory/virtual disk ?  Shifting should take less than a minute.”


              MOREOVER …

              … if the OS is also resident in an Optane “ring” of a 3-ring circus (triple-channel DIMM slots), there should be no need to “mount the image” (i.e. read the image file from non-volatile storage).

              During routine STARTUP, the OS is booted directly from the NON-VOLATILE Optane “ring” — because it behaves exactly like any other NTFS C: partition.

            • #2449691

              So, using the KISS principle, consider this sequence as one workable objective:

              (1) build very large workstation / server with Optane installed in Channel 3;

              (2) install OS to standard NVMe SSD, and keep for future recovery as needed;

              (3) format Optane as a standard NTFS partition

              (4) migrate OS to Optane partition e.g. with Partition Wizard (or other)

              (5)  re-boot into BIOS and change boot partition to Optane

              (6)  whenever the Optane OS gets corrupted, boot from NVMe SSD and recover

            • #2449719

              SupremeLAW: Are ramdisks, famously fast when built from current/latest DRAM chips, still faster than currently available SSDs?

              Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

              MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
              Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
              macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

              1 user thanked author for this post.
            • #2449720

              Re:  “Are ramdisks … still faster than currently available SSDs?”

              The answers below are based on parameters we used in a Utility Patent Application and in a presentation to the Storage Developer Conference (2012):

              I visited Newegg.com and searched for G.SKILL DDR5:  they’re famous for high-performance DRAM.

              That search found:

              https://www.newegg.com/g-skill-32gb-288-pin-ddr5-sdram/p/N82E16820374359?Description=G.SKILL%20DDR5&cm_re=G.SKILL_DDR5-_-20-374-359-_-Product&quicklink=true


              DDR5-6400 x 8 bytes per cycle =  51,200 MB/second  (aka PC5-51,200)

              PCIe 3.0 = 8G / 8.125 bits per byte  =  984.6 MB/second per x1 lane

              x4 PCIe 3.0 lanes @ 984.6  =  3,938.4 MB/second MAX HEADROOM

              Apples-to-apples comparison:

              PCIe 5.0 = 32G / 8.125 bits per byte  =  3,938.4 MB/second per x1 lane

              x4 PCIe 5.0 lanes @ 3,938.4  =  15,753 MB/second

              that G.SKILL DDR5 is 51,200 / 15,753  =  3.25 TIMES faster than a single Gen5 NVMe SSD (assuming zero controller overhead)

              One would need to configure multiples of the latter NVMe SSDs in a RAID-0 array in order to come close to that raw DDR5-6400 bandwidth:

              4 x 15,753  =  63,012 MB/second MAX HEADROOM (zero controller overhead)
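
              The same arithmetic in a short Python sketch (zero controller overhead assumed on the NVMe side; the variable names are illustrative):

              ddr5_mb_s = 6400 * 8            # DDR5-6400 x 8 bytes/cycle = 51,200 MB/s

              lane_gen5 = 32 * 1000 / 8.125   # PCIe 5.0: ~3938.4 MB/s per lane
              nvme_gen5 = lane_gen5 * 4       # x4 lanes: ~15,753 MB/s

              print(ddr5_mb_s / nvme_gen5)    # 3.25, i.e. the DDR5 is 3.25x faster
              print(nvme_gen5 * 4)            # ~63,000 MB/s headroom for 4-drive RAID-0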

              2 users thanked author for this post.
            • #2449723

              Here’s how we compute SSD controller overhead e.g.:

              Use the measured READ speed of the Samsung NVMe SSD,

              reported above:

              Controller Overhead  =  1.0 – (3,421.68 / 3,938.4)  =  1.0  –  0.8688  =  13.12%
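
              Expressed as a small Python helper (the function name is illustrative; the inputs are the measured READ speed and the raw x4 PCIe 3.0 ceiling above):

              def controller_overhead(measured_mb_s, ceiling_mb_s):
                  # fraction of the raw link ceiling lost to controller overhead
                  return 1.0 - measured_mb_s / ceiling_mb_s

              print(controller_overhead(3421.68, 3938.4))   # ~0.1312, i.e. 13.12%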

              1 user thanked author for this post.
    • #2449732

      For the sake of comparison, the following is the ATTO result using 4 x Samsung 840 Pro in a RAID-0 array controlled by a Highpoint RocketRAID 2720 PCIe 2.0 add-in card.

      At that time, we were comparing performance WITH and WITHOUT aligning those SSDs:

      http://supremelaw.org/systems/ATTO/4xSamsung.840.Pro.SSD.RR2720.P5Q.Premium.Direct.IO.2.bmp

      4 x 6G SSD / 10 bits per byte  =  4 x 600 MB/second  =  2,400 MB/second MAX HEADROOM

      Controller Overhead  ~=  1.0 – (1,879 / 2,400)  =  1.0 – 0.7829  =  21.71%


      Note: that is the overall “aggregate” overhead measurement, which takes into account the overhead in each SSD’s internal controller PLUS the overhead in the Highpoint controller.


    • #2449739

      Here’s a CDM measurement of a “4×4” Highpoint SSD7103 with 4 x Samsung 970 EVO Plus in RAID-0, installed as a bootable add-in card in an HP Z220 Tower workstation.

      x4 PCIe 3.0 lanes @ 8G/8.125 x 4 NVMe SSD  =  15,753 MB/second MAX HEADROOM (zero controller overhead)

      Controller Overhead  =  1.0 – (11,697 / 15,753)  =  1.0 – 0.7425  =  25.75%

      http://supremelaw.org/systems/highpoint/Highpoint.SSD7103/CDM.screen.shot.1.png

      1 user thanked author for this post.
    • #2449918

      fixed

      p.s.  (it got “moderated” but I don’t know why)

      after fixing, it got “moderated” again

      Based on personal experience on this forum, I’m pretty sure that the reason your post got moderated is that it contained multiple hyperlinks. Having more than a certain number of hyperlinks seems to trigger an alert to put a hold on the post until a mod can review its contents. (Not sure what that number might be.)

      Interesting technical discussion, BTW–thanks!


      1 user thanked author for this post.
      • #2449921

        Good explanation: Many thanks!

        In the future, I’ll “paste” into Windows NOTEPAD first:

        that should strip out the “detritus”.

        1 user thanked author for this post.
        • #2450032

          Right click and select paste unformatted / plain text.

          cheers, Paul

          1 user thanked author for this post.
          • #2450093

            THIS IS A TEST (see attached screenshot)

            <span class=”style-scope yt-formatted-string” dir=”auto”>Timestamps: </span>0:00<span class=”style-scope yt-formatted-string” dir=”auto”> – Intro </span>0:22<span class=”style-scope yt-formatted-string” dir=”auto”> – Preparation </span>1:11<span class=”style-scope yt-formatted-string” dir=”auto”> – Speed comparison (HDD, SSD, RAM) </span>1:54<span class=”style-scope yt-formatted-string” dir=”auto”> – Configuration </span>2:24<span class=”style-scope yt-formatted-string” dir=”auto”> – Installing Windows 10 on RAM </span>4:18<span class=”style-scope yt-formatted-string” dir=”auto”> – Disk space problem </span>7:02<span class=”style-scope yt-formatted-string” dir=”auto”> – Final product </span>7:32<span class=”style-scope yt-formatted-string” dir=”auto”> – Reboot </span>7:48<span class=”style-scope yt-formatted-string” dir=”auto”> – Outro</span>


          • #2450095

            Re:  “Right click and select paste unformatted / plain text.”

            Does that sequence work correctly with a different browser?

            I’m using the latest Firefox 64-bit version with Windows 10 Pro 64-bit version.

    • #2449922

      I searched for the most recent technical review of Intel’s Optane DIMMs and found this:

      Big Data Analytics Meets Big Memory with Intel Optane PMem
      written by Tom Fenton January 24, 2022

      https://www.storagereview.com/review/big-data-analytics-meets-big-memory-with-intel-optane-pmem

      Conclusion

      Intel Optane PMem is an exciting and transformative technology that is starting to reshape the datacenter, but as with all other technologies, it fortunately does not exist in a vacuum. Leading, forward-thinking companies such as Dell Technologies, Intel, MemVerge, and Hazelcast are finding synergies and starting to exploit this new technology to find its true potential in the datacenter: Intel Optane PMem modules are offered at around half the cost of DRAM; Dell Technologies has servers that support the massive amounts of low-latency memory capacity that Intel Optane PMem provides; Hazelcast allows applications to take advantage of these technologies on a grand scale; and MemVerge provides the monitoring, management, and data services for Intel Optane PMem, and, by abstracting away the DRAM API, it makes Intel Optane PMem appear as DRAM to existing applications thereby allowing them to run without being modified or re-architected.

      If everything else is equal, businesses would opt for real-time activities versus batched activities. But since everything is not equal, batch processing is often the chosen pattern to avoid the costs associated with real-time processing. However, as customer expectations continue to rise in a world that is increasingly more real-time-oriented, businesses need to find new ways to create a competitive advantage. By leveraging real-time speeds without suffering the traditional costs of in-memory computing, leading businesses can make the leap with technologies like Intel Optane PMem, MemVerge, and Hazelcast to build solutions that help them respond to their demands, and that of their customers, faster than ever before.


    • #2449928

      This is the Optane review (March 2021) that mentioned how Optane DIMMs required the entire main memory subsystem to “down-clock” to Optane’s clock speed:

      https://www.servethehome.com/glorious-complexity-of-intel-optane-dimms-and-micron-exiting-3d-xpoint/

      [begin quote]

      It is slower in two ways compared to DRAM typically found in your DIMM sockets.

      First, it has a higher latency because it is writing to the persistent 3D XPoint instead of DRAM.

      The second is one that not many discuss.

      The first two generations of Intel Optane DCPMM or PMem 100 and PMem 200 operate at DDR4-2666 speeds with Cascade Lake and Cooper Lake.

      That is extremely important.

      Once you add PMem to a server, the memory speed drops to DDR4-2666. So on Cascade Lake or the 2nd generation Intel Xeon Scalable that means we go from 6x DDR4-2933 per socket to 6x DDR4-2666.

      [end quote]

    • #2450104

      This is another TEST: USING “Text” tab instead of “Visual” tab:

      Timestamps:
      0:00 – Intro
      0:22 – Preparation
      1:11 – Speed comparison (HDD, SSD, RAM)
      1:54 – Configuration
      2:24 – Installing Windows 10 on RAM
      4:18 – Disk space problem
      7:02 – Final product
      7:32 – Reboot
      7:48 – Outro

    • #2450110

      Getting back to this Windows 10 install to a ramdisk:

      https://www.youtube.com/watch?v=MK1OPc3k_cQ

      Instead of requiring custom mods to a motherboard BIOS, it might be possible to handle a routine STARTUP using “Boot Mode”.

      The latter “Mode” is available in Partition Wizard, and it also launches automatically whenever Windows is asked to make changes to C:, e.g. in Command Prompt:

      CHKDSK C: /f

      where “/f” tells CHKDSK to fix any file system errors it finds

      I’m not familiar with the necessary low-level programming, however.

      Achieving the same result withOUT custom mods to the motherboard BIOS would be ideal!

      Repeating:  because Windows 10 requires so much storage, this experiment should probably be attempted on a large and available server computer.

      • #2450112

        Maybe what this scientific experiment needs is a generous grant from a funding agency like the National Science Foundation.

        And, a university Computer Science Department would probably be very interested in using those funds to develop a specialized curriculum for Masters Degree students in CS.

        And, if possible, Open Source solutions would benefit more than proprietary solutions.

        Intel might also be interested but they no longer build motherboards.  Their Optane DIMMs would be ideal to resurrect the obsolete triple-channel DRAM chipsets or some variation on that theme, like the “heterogeneous” memory logic suggested by Allyn Malventano.

        So, some kind of Joint Venture with a motherboard manufacturer would also be needed, e.g. SuperMicro builds lots of server motherboards.

        A server with at least 256GB of DRAM should work:

        100GB for Windows, 156GB for standard main memory.

        A server with 1TB of DRAM would be more realistic, given current server trends.

    • #2450122

      Thinking out loud, the following sequence builds on procedures
      with which the average Prosumer is already very familiar:

      (1) install Windows 10 on conventional NVMe SSD
      (2) install third-party ramdisk software and format one ramdisk
      (3) install third-party “migrate OS” software e.g. Partition Wizard
      (4) “migrate OS” to ramdisk formatted at (2) above
      (5) re-boot into the motherboard BIOS

      It’s at this point that all motherboard BIOS subsystems
      will NOT recognize such a ramdisk as a BOOTABLE device.

      Hence, the most elegant solution is to augment that BIOS
      so that it recognizes the ramdisk as a BOOTABLE device,
      allowing the User to choose that device for booting.

      A Prosumer should realize that a full SHUTDOWN may cause
      the ramdisk to lose its contents. Therefore, the Prosumer
      will need to repeat the sequence above, starting at (4).

      One obvious variation of (4) is to restore a working drive image
      to the ramdisk, instead of “migrate OS” in Partition Wizard.

      A “smart” custom BIOS will know if a bootable OS
      is already installed in the ramdisk; and, if NOT,
      it issues a routine error message, e.g. “No ramdisk found”.

      With the above sequences well understood, there are important
      variations that can be considered.

      A “Format RAM” option in the motherboard BIOS will also allow
      a fresh OS install to write directly to the ramdisk,
      just like any other bootable device.

      The above are some of the reasons why I prefer
      a general solution that enhances a motherboard BIOS
      with all functionality required to support that
      “Format RAM” option in the motherboard BIOS.

      Hope this helps.

    • #2450201

      Here’s a first-order approximation that shows
      how a PCIe 4.0 “4×4” add-in card can perform
      better than some Optane DIMMs @ DDR4-2666:

      Start with the measured READ speed reported above: 11,697 MB/second

      The latter is an empirical measurement that reflects controller overheads.

      Double that by upgrading to PCIe 4.0: 11,697 x 2 ~= 23,394 MB/second

      The latter may not be realistic as long as the raw speeds of NAND flash
      components are the limiting factor.

      Scale that for comparison to DDR4: 23,394 / 8 bytes per cycle ~= DDR4-2924

      The Optane DIMMs discussed above were rated at DDR4-2666.

      Therefore, an OS can be freshly installed to a PCIe 4.0 4×4 add-in card without compromising the rated speed of the main memory subsystem.

      Even if a triple-channel architecture were to assign Channel 3
      to Optane DIMMs, and
      even if an OS were installed onto those Optane DIMMs,
      the overall performance of a PCIe 4.0 4×4 add-in card is expected
      to be nearly the same — withOUT all the extra work required
      to customize that triple-channel architecture.

      And, of course, a 4×4 add-in card uses non-volatile memory,
      so there is no need to worry about the loss of data
      that occurs when DRAM is powered OFF.

    • #2450207

      To cross-check my assumptions, I found this video at Highpoint’s website:

      https://filedn.com/lG3WBCwKGHT7yNuTsFCwXy0/HighPoint-Download/Video/SSD/7505/SSD7505_Win10Pro_980PRO.mp4

      4 x Samsung 980 Pro 1TB NVMe M.2 SSD in RAID-0

      23,673 MB/second MEASURED compares very closely to:
      23,394 MB/second PROJECTED above i.e. PCIe 3.0 MEASURED x 2

      The latter numbers mean that aggregate controller overhead hasn’t changed from PCIe 3.0 to PCIe 4.0; and, that Samsung’s in-house NAND flash is still not the limiting factor.

      WAIT FOR IT! 2 x “4×4” add-in cards produced
      40,367 MB/second MEASURED in “Cross-Sync” RAID-0 mode!

      The latter number equates to 40,367 / 8 = DDR5-5045
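
      Reproducing that cross-check in a few lines of Python (the measured numbers come from the video above; only the projection and the DDR equivalence are computed, and the variable names are illustrative):

      projected = 11697 * 2         # PCIe 3.0 measurement doubled for PCIe 4.0
      measured_one_card = 23673     # one SSD7505, 4 x 980 Pro, from the video
      measured_two_cards = 40367    # two cards in "Cross-Sync" RAID-0

      print(projected)                   # 23394 -- within ~1.2% of 23,673
      print(measured_two_cards / 8)      # 5045.875 -> the "DDR5-5045" equivalence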

    Viewing 23 reply threads