• Our world is not very S.M.A.R.T. about SSDs


    • #2423646

    ISSUE 19.06 • 2022-02-07 HARDWARE By Ben Myers With solid-state drives (SSDs), the SMART ante is raised because an SSD can fail catastrophically — CLU
    [See the full post at: Our world is not very S.M.A.R.T. about SSDs]

    11 users thanked author for this post.
    • #2423653

      Four years on a Samsung 256GB NVMe SSD.

      Smart Disk Info is a portable app.

    • #2423656

      an SSD can fail catastrophically

      So can an HDD, although they tend to give early warning as the mechanical bits wear. The only resolution is a regular backup and a new disk.

      cheers, Paul

      1 user thanked author for this post.
      • #2423867

        Note that the catastrophic failure of an SSD, i.e. when it is bricked and useless, applies to older SSDs.  Nearly all of the newer ones become read-only when all the spare blocks are consumed.

        2 users thanked author for this post.
    • #2423669

      Hi Ben,

      Thanks for the good article on SSDs. I have a related problem in my MacBook Pro. I have the smallest SSD (128) and now discover it’s too small to take the next OS-X update. How do I determine if the SSD can be upgraded to 256/512 or if it’s soldered in? I found these specs for an upgrade kit but don’t see how to determine if my laptop is compatible using the printed part number – where can I find that? Macbook Pro 2017 running OS 10.14.6.

      Compatibility:
      MacBook Pro 13″ A1708
      – MacBookPro13,1 Late 2016: MLL42LL/A (2.0 GHz Core i5)
      – MacBookPro13,1 Late 2016: MLL42LL/A (2.4 GHz Core i7)
      – MacBookPro14,1 Mid 2017: MPXQ2LL/A (2.3 GHz Core i5)
      – MacBookPro14,1 Mid 2017: MPXQ2LL/A (2.5 GHz Core i7)

      Identifying Numbers:
      – APN: 661-05112, 661-07586
      – Printed Part #: 656-0042, 656-0045, 656-0045A

      1 user thanked author for this post.
      • #2423670

        Since you will need to get into your MacBook to change the SSD, a sure method to find out if the SSD is easily replaced (mainly not soldered) is to open up the MacBook and take a look. You’ve got nothing to lose since you’ll be going in there anyway.

        • #2423783

          Graham: ” … is to open up the MacBook and take a look. You’ve got nothing to lose since you’ll be going in there anyway.”

          Unfortunately, at least in the case of a laptop, one needs a special screwdriver to remove a whole bunch of tiny pentalobe screws, as well as correspondingly steady hands and good eyesight. “Open the Mac and have a look” is not good practical advice for Mac users who are not neurologically or sensorially gifted enough, or who have no hands-on experience doing things that amount to performing successful brain surgery in the dark.

          I agree with Alex that “it can be replaced by a skilled user“, “skilled” being the operative word here.

          And Mac laptops of post 2014 vintage have the SSD glued, I believe.

          Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

        • #2423868

          I’ve seen MacBooks newer than 2014 with SSDs that can be replaced, not integrated into the motherboard.  Best bet is to check out one’s model at EveryMac followed by a visit to iFixit for repairs illustrated with very good photos.

          1 user thanked author for this post.
        • #2425169

          Another reason not to own Apple products.

          1 user thanked author for this post.
        • #2425454

          Just another reason I have never owned, and never will own, anything Apple.  I am old, and started with personal computers at the very beginning, so I very well remember the introduction of the IBM PC, which started the world of building your own computer with parts from many various sources.  Apple was a highly proprietary system, with parts available ONLY from Apple, and it rapidly became a distant second.  Years of building my own systems and working on many others led me down the anti-Apple path.

          I know Apple now has some good products for some people, but to me it would be almost impossible for Apple to make a second first impression.

      • #2423675

        The SSD in a non-Touch Bar MacBook Pro 2017 can be upgraded by Apple (proprietary SSD), but it can also be replaced by a skilled user.

        https://www.ifixit.com/Answers/View/412537/Detach+SSD+in+MacBook+Pro+2017+13%22+without+the+touch+bar

      • #2423848

        The model number, if that is what you mean by “part number”, is engraved on a plate stuck to the bottom of the laptop.


      • #2423866

        Dave, Been there with a few MacBooks, Pro and Air.  Your best bet is to go to everymac.com. https://everymac.com/ultimate-mac-lookup/  Enter the A-number and the EMC number, and the website will tell you what’s inside your MacBook.  A MacBook Pro from 2017 is old enough that its SSD is not soldered, but the interface pinout is proprietary, so you need a proprietary Mac-compatible SSD, 256GB, 512GB or larger.  OWC sells a lot of kits.  iFixit has superb illustrated how-tos for Mac repair or replacement of parts, and I think that the company also sells Mac-compatible SSD kits.  Some of these kits include the screwdrivers to remove the ever-so-special pentalobe screws.  You can also get a cheap but serviceable kit with many screwdriver heads at Walmart, though there are kits easier to use.

        I’ve done a few of these MacBook repairs, and they are not too difficult if you have the right tools and parts.

        • #2423887

          Ben Myers: True enough, one can get one of those kits. What one cannot get, if one does not have it already, is the skill to use it in the right way to operate on an expensive and possibly irreplaceable piece of hardware.

          This is what I would do: just find a decent shop and take it there. Such a shop may not be easy to find, but finding one is definitely easier than trying to use a kit one does not know what to do with and/or lacks the necessary mental and physical skills to use.


      • #2423915

        Just “popped the top” on a 2014 MacBook Pro to change the battery (it came with tools; perfect!) and upgraded the drive to a 1TB while I was in there. Thereby hangs a problem: I’ve now got a 256GB drive hanging around, perfectly fine in all respects. You would think stick it in an enclosure and job done; well, not exactly. OWC do kits that are the price of a “2nd mortgage” but are good, incl. tools etc., and you get to use the old SSD/flash drive in an enclosure. If you have a look around, you’ll find that there are no compatible enclosures for your old drive for anything less than OWC’s prices (not cheap). Even a 128GB such as yours is a pretty useful SSD and still has years of use left in it; even if you only use it as an ad hoc “Time Machine” backup drive, it’s handy to have a bit of extra storage. Just a consideration. Definitely do your research before “popping the hood”, even if it means wading through tons of YouTube videos. @OscarCP‘s links are extremely good sources.

        Hope this helps a little

        “pssst know anyone who wants to buy a slightly used 256gb Apple drive”  😉

      • #2425156

        Try opening a chat window on apple support site. I’ve actually managed to get a human and pertinent info for some devices that were older.

    • #2423671

      Thanks, Graham. Chicken and egg problem, no? I wouldn’t be going there at all if I knew that it was soldered…

    • #2423676

      Excellent work, Alex! Very helpful; this means it’s possible. Pleased to hear that. I will watch the YouTube video and see how difficult it will be. I may choose to have a pro do it, but at least I know it can be done.

      Cheers!

    • #2423683

      Ben,

      Great article! … but I was a bit surprised you made no mention of over-provisioning and how it’s ‘supposed’ to affect the SSD life span.  Attached are screenshots of my Samsung 840 PRO, its SMART state, and 8% OP.  Any comments on how OP affects this conversation would be welcome … Thanks!

      Marty

       

      1 user thanked author for this post.
      • #2423869

        Intentionally, I tried to keep a laser focus on SMART, both to stay on track and also not to write too many words.  Maybe you’re suggesting a follow-on article all about SSDs, explaining TRIM, overprovisioning and other SSD features?

        1 user thanked author for this post.
        • #2423940

          Well, that’d be up to you!  But I’m ‘sacrificing’ nearly 20GB in the OP partition with an understanding this would improve not only longevity but reliability.  Any thoughts on this topic (as well as the others) would be welcome.  Thanks!
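For scale, the arithmetic behind that sacrifice is simple; a quick sketch (the sizes are assumed from the thread, roughly 20 GB set aside on a nominal 256 GB drive, not exact figures from any screenshot):

```python
# Rough overprovisioning percentage; both figures are assumptions taken
# from the thread, not exact drive numbers.
drive_gb = 256        # nominal drive capacity
op_gb = 20            # space set aside as an overprovisioning partition
op_fraction = op_gb / drive_gb
print(f"{op_fraction:.0%} overprovisioned")
```

That works out to about 8%, in line with the OP figure mentioned for the 840 PRO.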

        • #2423945

          Given the real world life of an SSD, overprovisioning has no advantage.

          See this post below: 2423928

          cheers, Paul

        • #2423960

          Hmmm … I get a very different conclusion from the article you referenced!  It states:

          “When cells become more trouble than they’re worth, fresh blood is called up from the SSD’s overprovisioned “spare” area. These replacement cells ensure the drive maintains the same user-accessible capacity regardless of any underlying flash failures.”

          This is exactly what they show the Samsung 840 PRO doing (which happens to be the SSD in my Dell desktop!):

          The article concludes for the 840 PRO:

          “That reserve counter seems to be the best gauge for the 840 Pro’s remaining life. The wear-leveling count is supposed to be related to drive health, but it expired 1.5PB ago.

          Given what’s supposedly still in the tank, 3PB doesn’t seem impossible.”

           

        • #2424001

          May have to agree to disagree.  I’ve been mildly overprovisioning all SSDs installed since 2014 in my custom builds.  These workstations are under constant heavy use as CAD/graphics/rendering tools.  Their drives, with overprovisioning at 15% of available space, consistently perform better over time than factory-installed drives, and last much longer before replacement.  We mainly use Samsung or Crucial SSDs.

          Additionally, I started doing the same for RAID arrays, and likewise drive replacements and dropouts on the arrays are far fewer than with non-overprovisioned sets.

          But for a low level load for many home users, the benefit is not likely noticeable.  

          Also see https://www.anandtech.com/show/8216/samsung-ssd-850-pro-128gb-256gb-1tb-review-enter-the-3d-era/7

          ~ Group "Weekend" ~

        • #2425740

          I much appreciate your attention to S.M.A.R.T.

          Our RAID controllers come with a GUI that logs anomalies with much better and much more useful error messages; and yes, those controllers also report S.M.A.R.T. data, but the latter is entirely useless: it never changes!

          Given the state of affairs among the manufacturers you so well described, from practical experience I honestly believe that there will be more to gain from directing their attention to the poorly understood problem of cable defects and failures.

          I mention this here because I have recently had two entirely different cables fail completely: one was an internal SFF-8087 fan-out SATA data cable, and the other was an SFF-8088-to-SFF-8470 multi-lane external cable.

          Let’s itemize the sheer number of mechanical connections in the latter:

          (1) the edge connector on the add-in RAID controller card

          (2) the multi-lane connection on the SFF-8088 that plugs into the controller

          (3) the multi-lane connection on the SFF-8470 that plugs into an adapter in the external enclosure

          (4) the SATA cables that connect to the internal side of the latter adapter

          (5) the same SATA cables that connect to the SSDs and/or HDDs in that external enclosure.

          When one of the two SFF-8088-to-SFF-8470 cables failed, I immediately suspected that a reliable HDD had failed.  It took me a while to realize my error; I should have suspected the cabling FIRST, and that would have saved me a lot of troubleshooting time.

          The test that worked was to swap those 2 cables, and the “failing” HDD started working again.  So I ordered 2 new cables, and now we’re back to normal.

          More to this story: I’m aware of some aging motherboard manuals that mentioned a technology for testing network cables.  I may be out in left field, but I do believe the entire industry could benefit a LOT from the refinement of similar technologies dedicated to isolating failing and faulty cables, particularly data-transmission cables.

          p.s. Apologies if this “rant” is off-topic.  I should have saved any relevant S.M.A.R.T. data that was recorded when our cables were failing;  in the future, thanks to this excellent article, I intend to do so.

    • #2423686

      I looked at a number of YouTube videos; some recommended removing one narrow cable and a screwed-down clip, while others said DON’T remove that cable but remove a larger one, before replacing the SSD. Anyone know about this? I think the point was to disconnect the battery, but I’m not sure why this would be necessary, and it’s liable to cause cable damage.

    • #2423701

      With solid-state drives (SSDs), the SMART ante is raised because an SSD can fail catastrophically

      I don’t pay any attention to SMART data.  I don’t use Google, but Google uses a lot of drives.  In a study of consumer-grade disk drives (PDF) published in 2007, Google found S.M.A.R.T. data to be not so reliable a predictor of drive failure.

      “Out of all failed drives, over 56% of them have no count in any of the four strong SMART signals, namely scan errors, reallocation count, offline reallocation, and probational count. In other words, models based only on those signals can never predict more than half of the failed drives.”

      In 2007, at least, SMART data was equivalent to a coin flip for predicting failure.
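The 56% figure puts a hard ceiling on any predictor built from those four signals alone; the arithmetic, with a made-up sample size for illustration:

```python
# If 56% of failed drives never tripped any of the four "strong" SMART
# signals, a model using only those signals can catch at most the
# remaining 44% of failures, however cleverly it combines them.
failed = 1000                       # hypothetical count of failed drives
silent = int(failed * 0.56)         # failures with no SMART warning at all
best_recall = (failed - silent) / failed
print(f"Maximum possible recall: {best_recall:.0%}")
```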

      On the other hand, I personally have had a number of drives fail, at least three catastrophically (one apparently had a short in the PCB and wouldn’t let the PC even POST).  In none of those instances did I lose anything, because I had recent drive images of everything on the drive.

      If one wishes to have real insurance against data loss, establish a regimen of regular drive imaging and stick to it religiously, whether with HDDs or SSDs.  Nothing else will save your bacon as effectively.

      Create a fresh drive image before making system changes/Windows updates, in case you need to start over!
      We all have our own reasons for doing the things that we do. We don't all have to do the same things.

      2 users thanked author for this post.
    • #2423712

      Thanks for a great article!

      If I switch a Windows 7 computer with 16 GB of RAM to an SSD, can I help extend the life of the SSD by turning off the paging file?

      For a “typical” office machine (email, MS Office, web browsing, moderate YouTube videos, etc), will that result in a performance hit?  And if so, how much more RAM might be needed to compensate?

      And does RAM wear out the same way an SSD does?  🙂

      — AWRon

       

      • #2423836

        You should keep a minimal pagefile to avoid issues. I had this discussion a while ago with ch100 here; I think 800MB was the value, and that is what I would suggest as a minimum, although I thought I used a lower value myself. Adding more RAM won’t change that. A fixed pagefile is best, provided it is enough to cover the need when RAM is not enough. The issue is that if you do run out of RAM, the system will start swapping, and then you will notice a huge slowdown on a mechanical hard disk; I have never seen how it impacts performance on an SSD, especially the fast NVMe kind. What can happen is that if you do need more RAM and the pagefile is fixed and too small, you can run into issues and have the system break down on you.

        The key here is not needing your full amount of RAM. I don’t think the pagefile is used that much on Windows if it isn’t needed. Unix is different, and I am not sure about Linux today. If you use applications that need a lot of RAM and leave a ton of browser windows open for days, with memory leaks, you can end up using a lot of RAM. But 16GB seems plenty for normal usage: browsing the web, no gaming, and no heavy-duty apps like video editing.

        I monitor my RAM usage and have never run into issues; I restart my browser when it starts to eat up gigs after days of being open on many tabs. But if you want peace of mind, you can let Windows manage your swap file; it will grow if needed. The fact that you have plenty of RAM for your needs is probably the most important factor in all this. I’m not sure the rest makes much difference in your real-life experience.

        I’ve never heard of RAM wearing out due to use. It needs power to keep information, so it always has power while your PC is in use, regardless of whether you write to it or not. I wouldn’t worry about this.

        Just having 16GB and an SSD does the most for your performance.

        2 users thanked author for this post.
      • #2423870

        Better than the paging file, turn off indexing!  The paging file is there for the times when your computer runs out of memory, and it has to swap out data and program segments to make room for the program you just clicked on.

        1 user thanked author for this post.
      • #2423928

        can I help extend the life of the SSD by turning off the paging file?

        There is no point trying to extend the life of an SSD.
        They are very long lived and in normal use (anything not crypto generation) will give many years of service – sudden failure notwithstanding.
        See this 7 year old article for more info.
        https://techreport.com/review/27436/the-ssd-endurance-experiment-two-freaking-petabytes/
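A back-of-the-envelope sketch of why wear rarely matters in normal use (the endurance rating and daily write volume below are illustrative assumptions, not figures from the article or the linked test):

```python
# Years until a drive's rated endurance is exhausted at a steady write rate.
rated_endurance_tbw = 300     # assumed terabytes-written (TBW) rating
daily_writes_gb = 30          # assumed fairly heavy daily host writes
days = rated_endurance_tbw * 1000 / daily_writes_gb
print(f"~{days / 365:.0f} years of writes before hitting the rating")
```

Even with generous daily writes, the rating outlasts the useful life of the machine.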

        cheers, Paul

        2 users thanked author for this post.
        • #2425368

          In our LAN hosting a dozen workstations, we’ve lately had nothing but cable failures and absolutely zero SSD failures, after we began to migrate from HDDs to SSDs several years ago.

          Failing cables have included vanilla SATA data cables, and SFF-8088-to-SFF-8470 multi-lane cables to external storage enclosures.

          S.M.A.R.T data was totally useless for troubleshooting our cable problems.

          For that matter, Windows Event Viewer was reporting a “paging device error”, but the faulty cable was connected to a brand new HDD that was not hosting pagefile.sys.

          It was a long weekend recently, especially when that workstation would hang during POST with no apparent error messages.

          I have to kick myself every time a drive has started acting up, only to confirm after trials-and-errors that the cable was the problem all along.  LOL!!

      • #2425361

        Optimizing pagefile.sys was a hot topic a few years before SSDs became cost-effective.

        Here’s what we did on workstations that hosted multiple HDDs:

        Starting with a second, brand new HDD, we formatted a small NTFS partition in the primary position, dedicated to pagefile.sys.

        Then we created a contiguous pagefile.sys in that dedicated partition, using the excellent CONTIG freeware.  Of course, we then needed to move the paging file from C: to our newly created contiguous pagefile.sys in that second physical HDD’s primary partition.

        This design exploits the fact that the outermost HDD tracks are the fastest, mainly because HDD linear recording densities are almost constant, regardless of track diameter.  Thus, the amount of raw data on any given track is directly proportional to track diameter.
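That proportionality is easy to check with a little geometry (the density, speed, and diameters below are invented round numbers, not real drive specs):

```python
from math import pi

# At constant linear density and constant RPM, the bits passing under the
# head per second scale with track circumference, hence with diameter.
bits_per_mm = 1000            # assumed linear recording density
rpm = 7200
for name, diameter_mm in [("outer track", 95.0), ("inner track", 40.0)]:
    bits_per_rev = pi * diameter_mm * bits_per_mm
    mbit_per_sec = bits_per_rev * rpm / 60 / 1e6
    print(f"{name}: {mbit_per_sec:.1f} Mbit/s")
```

The outer track moves data faster by exactly the ratio of the diameters, here about 2.4x.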

        CONTIG creates a pagefile.sys with perfectly contiguous sectors.  In this fashion, paging I/O maps memory pages to pagefile.sys sectors in perfect order, minimizing track-to-track armature movement, aka “head thrashing”.

        I haven’t seen any controlled scientific experiments with different pagefile.sys configurations on SSDs, and would love the opportunity to study any articles already published.

        One theory that should be tested empirically is the benefit obtained when a workstation can perform multiple I/Os in parallel across different drives.  This setup should benefit from the practical reality of idle CPU cores and threads in a multi-core system.

        Such a capability was generalized many years ago by enabling DMA (direct memory access) in peripheral controllers.

        If you suspect your system is paging a lot, it’s worth a test to see if performance improves by moving pagefile.sys to a SSD where no other I/O is being performed concurrently with paging.

        Along those same lines, a RAID-0 array of multiple NVMe SSDs should perform paging noticeably faster than a JBOD SSD:  2 array members should be almost twice as fast, and 4 array members should be almost 4 times as fast as a single JBOD drive.

        We upgraded a refurbished HP workstation with a 4×4 add-in card using 4 PCIe 3.0 Samsung NVMe drives in RAID-0, and it performs READs in excess of 11,000 MB/second.
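That measured figure squares with naive striping arithmetic (the per-drive number below is a hypothetical PCIe 3.0 NVMe sequential-read rate, not a measurement from that system):

```python
# Ideal RAID-0 sequential-read scaling; real arrays give up a little to
# controller and software overhead.
single_drive_mb_s = 2900            # assumed per-drive sequential read
for members in (1, 2, 4):
    print(f"{members} drive(s): about {members * single_drive_mb_s} MB/s")
```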

        On that system, we let Windows 10 manage paging automatically.  That workstation is wicked fast for our routine database tasks, so fast in fact that we decided we really didn’t need to host our database in a ramdisk any longer.

        Hope this helps.

    • #2423721

      Trying to “spread the word” about this newsletter, I posted the link to this article on FB.  So I was surprised to see that, while it displayed the title of the article, it also displayed a Twitter logo and advertised 1Pass.

      Screenshot-2022-02-07-103136

      If a friend of mine had posted this, my first thought would have been that this is a phishing scam.  Can y’all do anything about this?

    • #2423746

      I highly recommend Argus Monitor because: 1. it works with SSDs; 2. it continually checks S.M.A.R.T. data in the background; and 3. it notifies the user if there is a problem, with no false alarms.  Excellent product; I have been using it for over 10 years.
      https://www.argusmonitor.com/overview_fan_control.php
      Note: scroll down to the bottom of this page for the SMART info.  The publisher recently decided to emphasize the fan-control capabilities of the product for marketing purposes, as most people don’t even use SMART. But the product started as a SMART monitoring app when it was released in 2009, and it now works with almost all available HDDs/SSDs.  The author also keeps it updated as new motherboard HDD/SSD controllers are released.

       

      • #2423873

        Argus Monitor has flown under my radar because it is portrayed as temperature monitor-and-control software, with SMART almost as a throw-in.  I just installed it and it displays SMART data for an NVMe SSD, passing the acid test that Speccy and the Linux Disks command fail.  The free version is fully functional or close, for 30 days.  There are other programs to display system temperatures, CPU and GPU and even SSD or HDD, but this one is quite well done.  I wonder how often it samples SMART data, or if it is simply on demand?

        I would much like to see a program that samples SMART data once a day or provides it on demand.  Then we all would not need divine intervention by Microsoft to incorporate SMART monitoring and reporting into Windows, sort of.
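Absent that divine intervention, a once-a-day snapshot can be scripted. A minimal sketch, assuming smartmontools is installed; the device path, log file name, and attribute choices here are all assumptions for illustration and will vary by drive:

```python
import datetime
import subprocess

def raw_value(smartctl_text: str, attribute: str):
    """Find the RAW_VALUE column for one attribute in `smartctl -A` output."""
    for line in smartctl_text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == attribute:
            return int(fields[9])
    return None  # attribute not reported by this drive

def daily_snapshot(device="/dev/sda", logfile="smart_log.txt"):
    """Append today's wear and reallocation counts to a one-line-per-day log."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    wear = raw_value(out, "Wear_Leveling_Count")
    realloc = raw_value(out, "Reallocated_Sector_Ct")
    with open(logfile, "a") as log:
        log.write(f"{datetime.date.today()} wear={wear} realloc={realloc}\n")
```

Run daily from Task Scheduler or cron, that builds enough history to spot a trend, which a single on-demand reading cannot show.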

        • #2424044

          It’s a great SMART app.  It’s very configurable and you can set the SMART polling time under Menu, Settings, General, Data Sample Interval.  It started out as a SMART app only (it was not just thrown in).  It later evolved to include MB monitoring and fan control.

    • #2423696

      Thank-you for an excellent SSD article. I will be downloading the Clear Disk Info utility program. In addition to my multiple weekday image and file backups, I ‘clone’ my system SSD to an external HDD monthly. The ‘data’ SSD gets copied monthly. A small effort to protect against a catastrophic event.

    • #2423709

      I downloaded the ClearDiskInfo program and it worked perfectly (and super fast), but after a few minutes my system froze and required a reboot.  No lasting ill effects that I’m aware of.

      Is that a cause for concern?

      Thanks,

      Brian

      Dell XPS 8930, mfr June 2020

    • #2423720

      I recently installed a Samsung EVO 500GB drive, then downloaded and installed the Magician software and got the “selected drive does not support this feature” message for a number of the Magician topics.  No thanks to Samsung, I found the solution was to update the drive’s firmware.  Afterwards, the “selected drive…” messages stopped appearing.

      The Magician software is deficient regarding explanations about settings, and it certainly was deficient about the need for a firmware update.

    • #2423761

      There is a way to care for your SSD and HD drives that will extend their lives, recover bad sectors, refresh entire drives, is easy, and has been proven effective and safe for many years. (Not for Mac though.)

      Just get SpinRite from Gibson Research and run it periodically. That’s all. It will save your sanity and hardware. End of worries!

      Go to https://grc.com and read up. Lots of other good stuff there also.

      It is a reasonable one time cost – no annual subscriptions – updates are free except for infrequent major versions. I’ve been running version 6.0 since 2005 and version 5.0 before that. Cost to upgrade from 5 to 6 was $29. If you get the current version 6.0 now the update to 6.1 is free and coming soon. It sounds like it will be awesome.

      You’re welcome.

       

      1 user thanked author for this post.
      • #2423880

        SpinRite is great for what it does, but it does not deal with a hard drive with a progressively worsening number of bad sectors afterwards.  Let’s say, for example, that a system with a spinning hard drive gets a bit of a bump, and the drive read-write heads momentarily touch the surface of the drive, scratching off some of the magnetic coating.  SpinRite may be able to take care of the immediate problem, but the etched-off coating is now floating around inside the drive, with the possibility of an abrasive effect on other areas of the drive platter(s).  There’s a reason drives are assembled and disassembled in clean rooms, absolutely free of dust and dirt.

        I cannot advocate using SpinRite on an SSD, but Steve Gibson is better prepared to advise all of us on that topic.

        1 user thanked author for this post.
    • #2423784

      Very interesting and timely article on an important issue.

      Question: Is there a version of Clear Disk Info for Macs? If not, what else would be its equivalent?

      My MacBook Pro ca. mid-2015 in: About this Mac/System Report/Storage provides only this SMART information on the SSD: “Verified”, or “Failing.”


    • #2423823

      My 960GB SanDisk Extreme Pro has been in daily use for 7 or 8 years, and is used for everything including games, some video and photo editing, Office, internet, and even as a DVR for several months until we got a dedicated 3TB WD Red HDD. SanDisk Toolbox says it still has 98% life remaining. It’s been the main drive in 2 different PCs, and it still runs fine. The 10-year warranty is a nice touch, too.

      Going all the way back to the Kingston V100 96GB SSD, we’ve never worn out an SSD. An early Mushkin 240GB SSD did fail suddenly after 18 months of use, and they replaced it with a newer model which has an upgraded controller. That’s the only one I’ve ever seen fail.

      Despite my confidence in SSDs we still make regular backups on Portable HDDs which we plug into a USB port to do the backup then unplug it for safekeeping (screw those ransomware bad guys!).

      2 users thanked author for this post.
      • #2425370

        We’re now doing the same thing with an excellent StarTech 3.5″ external enclosure that supports 2 interfaces, eSATA and USB 3.0.  The metal enclosure is cool, trayless, and front-loading.

        Yes, it only accepts 3.5″ SATA HDDs, but HDD cost per gigabyte has dropped enormously as SSDs are now preferred for speed.

        We just spent $80 on another brand new WDC 2TB “Black” HDD, which was close to $300 several years ago.

        The other benefit of these external enclosures is a separate ON/OFF switch:  we now switch that external enclosure ON just long enough to run our backup task, then we switch it OFF.

        This policy should mean that this 2TB HDD will run reliably for most of the 5-year warranty period, and Western Digital does honor their factory warranties, provided a new WDC drive is registered at their on-line warranty database.

        1 user thanked author for this post.
    • #2423829

      My 960GB SanDisk Extreme Pro has been in daily use for 7 or 8 years, and is used for everything including games, some video and photo editing, Office, internet, and even as a DVR for several months until we got a dedicated 3TB WD Red HDD. SanDisk Toolbox says it still has 98% life remaining. It’s been the main drive in 2 different PCs, and it still runs fine. The 10-year warranty is a nice touch, too.

      Going all the way back to the Kingston V100 96GB SSD, we’ve never worn out an SSD. An early Mushkin 240GB SSD did fail suddenly after 18 months of use, and they replaced it with a newer model which has an upgraded controller. That’s the only one I’ve ever seen fail.

      Despite my confidence in SSDs we still make regular backups on Portable HDDs which we plug into a USB port to do the backup then unplug it for safekeeping (screw those ransomware bad guys!).

      Agreed. I ran a Windows Server 2003 machine with some SSDs in a RAID-1 configuration for 8 years without an issue. When we replaced the server, I took the old one home, formatted the drives, and installed Windows XP on the system with C: and D: and no RAID. A few years later I gave the computer to a friend who needed one. The hardware finally died when the motherboard overheated, but it wasn’t due to SSD failure. I know SSDs do fail, as I’ve had a couple fail, but for the most part I’ve had far fewer SSD failures than I did with mechanical drives. My current computer has 4 years on it with the same SSDs. One drive says I’ve transferred 33.1 TB so far, with no signs of failure yet.
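      As a rough sanity check, a figure like 33.1 TB written can be compared against a drive’s rated write endurance (TBW). This is a minimal sketch with a hypothetical 300 TBW rating (in the ballpark for a 500 GB class consumer SATA SSD; check your own drive’s spec sheet):

```python
def remaining_endurance(tb_written: float, tbw_rating: float) -> float:
    """Return the fraction of rated write endurance remaining."""
    if tbw_rating <= 0:
        raise ValueError("TBW rating must be positive")
    return max(0.0, 1.0 - tb_written / tbw_rating)

# Hypothetical example: 33.1 TB written against an assumed 300 TBW rating.
frac = remaining_endurance(33.1, 300)
print(f"{frac:.1%} of rated endurance remaining")  # → 89.0% of rated endurance remaining
```

      At that pace (roughly 8 TB/year), the rated endurance would outlast the computer by a wide margin, which matches the poster’s experience.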


      1 user thanked author for this post.
    • #2423819

      I tried Clear Disk Info and it gave 1% life remaining for one of my disks, but Crystal Disk Info and Hard Disk Sentinel give 96%. Power On Time is 14,140 hours. It’s a Kingston 120GB with 11TB written.

      1 user thanked author for this post.
      • #2423883

        Kingston is one SSD brand that provides very little (actually, no) information about the SMART attributes in its drives. What you may be seeing is different interpretations of the same data.

    • #2423851

      I don’t play games, never keep many windows open for more than an hour, and never keep the laptop on for more than 8–12 hours at a stretch before turning it off and calling it a day. I do watch movies and shows, both streaming and from my DVD collection. Now and then I do some heavy number crunching for my job, in several runs of up to half an hour a day. My computer is described in my signature panel, below.

      Any idea how likely this use is to wear out an SSD, and which of the above would be the main culprit?

      Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

      • #2423886

        The usage of your MacBook that you’ve portrayed seems pretty unlikely to wear out the SSD with excessive write operations.  Is this the original SSD, or one that was installed after the initial purchase?  If you installed it yourself in 2020, it is still pretty new.  You’ve got 16GB of memory there, cutting down on SSD writes, compared to the budget Macs with 4GB or 8GB.  Great screen!

        Leaving a computer on generally has no effect on the SSD or hard drive.  Ditto watching movies and shows. If the number crunching makes intensive use of the SSD, it is perhaps the major risk factor.

        Keep on!

        1 user thanked author for this post.
        • #2423898

          Ben Myers: The original 1 TB SSD is by now 4.5+ years old and has 570 GB of free space left. It is in “Verified” condition, according to Apple’s reading of the SMART data, which I take to mean “it’s OK, for now.” The other message would be “Failing,” or in plain English: “No good, back up everything, clone SSD and, since you don’t trust yourself, with good reason, to fix this, run to a not too distant repair shop you find on the Web, where they may know what to do, or not, but shall charge you anyway.” There is no third message.

          Thanks for your advice and for starting this discussion.

          Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

      • #2425375

        Without knowing a LOT more about your daily usage patterns, I venture to guess that environmental factors will make a much bigger difference for longevity, chiefly power conditioning, temperature control and cooling, and moisture control.

        All our workstations are powered by APC battery-backup units, and they’ve saved my butt many times already.

        Case in point, during the past 6 months, we’ve been slowly upgrading to 2.5 and 5.0 GbE dongles and add-in cards, to inter-operate smoothly with a TP-LINK high-speed switch.

        The initial testing we did was to move several large drive images of C: across our LAN.

        I should have expected this, but I was still surprised when that TP-LINK switch failed less than 30 days after purchase:  my best guess is that it just over-heated.

        So, the factory replaced it, and we jury-rigged 2 simple fans blowing cross-wise over that switch.  With both fans running 24/7, we’ve had no more problems with the replacement TP-LINK switch.

        Moral of this story:  power conditioning and temperature control are both BIG DEALS!

    • #2423859

      I learned something from this piece.  Thanks.

    • #2423885

      Hi Ben..
      I read the article on SSDs, so I tried Clear Disk Info.
      I use Crystal Disk Info, and they give different results!

      The disks Clear Disk Info says are not perfect,
      Crystal Disk Info says are okay, and vice versa.
      You can get Crystal Disk Info from https://crystalmark.info/en/
      I would be interested in what you think is happening.
      Thanks.

      • #2423913

        What I think is happening are differing interpretations of the same data, because there is no agreed-upon standard for SSD SMART data attributes.  SKHynix is unwilling to release a technical description of how it uses SMART data in SSDs, as I noted in the article.  Other manufacturers are not always forthright about SMART either.
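        To see how two utilities can read the very same SMART bytes and report wildly different health, here is a hypothetical sketch. The attribute ID, the stored values, and both tools’ interpretation rules are invented for the example; real vendors document (or don’t document) their attributes differently.

```python
# One vendor-specific SMART attribute as a tool might read it.
# All values here are made up for illustration.
attribute = {
    "id": 231,            # often labeled "SSD Life Left", but vendor-specific
    "normalized": 96,     # vendor-scaled 0-100 "health" value
    "raw": 1,             # raw counter; its meaning depends on the vendor
}

def tool_a_health(attr: dict) -> str:
    # Tool A trusts the normalized value: reports "96% life remaining".
    return f"{attr['normalized']}% life remaining"

def tool_b_health(attr: dict) -> str:
    # Tool B assumes the *raw* field is already a remaining-life percent,
    # so the very same bytes read as "1% life remaining".
    return f"{attr['raw']}% life remaining"

print(tool_a_health(attribute))  # → 96% life remaining
print(tool_b_health(attribute))  # → 1% life remaining
```

        With no published standard, neither tool is provably wrong; they simply guess differently, which is consistent with the 1% vs. 96% discrepancy reported earlier in this thread.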

        3 users thanked author for this post.
        • #2425744

          One theory I maintain about S.M.A.R.T. concerns the proprietary nature of the controllers embedded in SSDs.

          How such a controller defines and detects “errors”, and what such a controller does upon detecting “errors”, are a set of functions that are designed and implemented by the firmware programmers.

          I understand entirely how a major manufacturer like Samsung would want those details to remain CONFIDENTIAL, because they are Samsung’s intellectual property, legally speaking.

          Consider the obvious hoopla Intel generated upon initially announcing “Optane” SSDs.

          I specifically remember comparing Samsung’s PCIe 3.0 NVMe M.2 SSDs.

          The benchmarks repeatedly showed that Samsung’s NVMe M.2 SSDs were consistently averaging ~3,500 MB/second doing READs.

          If we do the math, that performance came amazingly close to the theoretical maximum throughput aka “MAX HEADROOM”:

          8 Gbit/s per lane ÷ 8.125 bits per byte × 4 lanes  =  3,938.5 MB/second MAX HEADROOM

          (PCIe 3.0 uses 128b/130b line encoding, i.e. 130 bits on the wire per 16 payload bytes  =  8.125 bits per byte)

          By comparison, Intel’s first batch of “Optane” M.2 SSDs only used 2 PCIe 3.0 lanes, which immediately crippled their performance.
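          The arithmetic above can be checked with a small Python sketch (the function and parameter names are mine, not from any spec):

```python
def pcie_max_throughput_mbs(gbit_per_s: float, lanes: int,
                            payload_bits: int = 128, line_bits: int = 130) -> float:
    """Theoretical maximum throughput in MB/s for a PCIe link.

    PCIe 3.0 signals at 8 GT/s (one bit per transfer) per lane with
    128b/130b line encoding: 130 bits on the wire carry 16 payload
    bytes, i.e. 8.125 bits per payload byte.
    """
    bits_per_byte = line_bits / (payload_bits / 8)    # 130 / 16 = 8.125
    return gbit_per_s * 1000 / bits_per_byte * lanes  # Gbit/s -> MB/s

print(round(pcie_max_throughput_mbs(8, 4), 1))  # → 3938.5  (x4 link, as above)
print(round(pcie_max_throughput_mbs(8, 2), 1))  # → 1969.2  (x2 link, like early Optane)
```

          The x2 figure shows immediately why a two-lane M.2 drive could never match Samsung’s ~3,500 MB/s reads, regardless of its NAND.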

          Samsung had every right to keep the details of their NVMe success completely and entirely CONFIDENTIAL, while filling zeroes in the S.M.A.R.T. data cells.

    • #2423891

      Command Timeout Count on Clear Disk Info shows 138 and a warning; however, Crystal Disk Info shows 100/100.

      Is this any cause for concern?  All the tests referenced in the article came out OK.

      The disk is 14 months old, installed in a Dell E6540 computer running Windows 10.

      • #2424306

        Hi Ben,

        Is the “Command Timeout Count” of 138 and a “Warning” notice a cause for concern?

        Noted that Crystal Disk Info does not show a problem.

        SK hynix SC401 SATA 512GB (SSD).

        Thanks for your help.


    • #2423896

      You computer jocks seem to be so tied up in your computerese that you miss the most *basic* issue with SSDs:  the physics of the devices.  The devices are limited by the depth of the potential wells into which electrical charges are placed to store data.
      If the wells are not deep enough, thermal effects can cause charge to leak out. Boltzmann’s law gives that probability for each potential well; given the large numbers of such wells (bits), the probability of data corruption becomes nonvanishing.  Moreover, if the temperature increases, the probability of leakage from the wells increases – exponentially.
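      The exponential temperature dependence described above can be made concrete with a toy Arrhenius-style estimate, where the escape probability scales as exp(−E/kT). The 1.0 eV barrier height below is a made-up illustrative value, not a specification for any real NAND cell:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def leak_ratio(barrier_ev: float, t_low_k: float, t_high_k: float) -> float:
    """Factor by which a thermally activated escape probability (~exp(-E/kT))
    grows when the temperature rises from t_low_k to t_high_k (kelvin)."""
    return math.exp(barrier_ev / K_B * (1.0 / t_low_k - 1.0 / t_high_k))

# Hypothetical 1.0 eV barrier: warming from 25 C (298 K) to 55 C (328 K)
# multiplies the per-well escape probability by roughly 35x.
print(round(leak_ratio(1.0, 298.0, 328.0)))
```

      Multiplied across trillions of cells, even a tiny per-well probability becomes a real corruption risk, which is exactly the point being made.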

      SSDs are suitable only for short-term storage.

      2 users thanked author for this post.
      • #2424057

        This is in fact true.

        Most modern SSDs, when stored (no power) under 70F (21C), will retain their integrity for at least one year.  Longer is possible, but YMMV depending on the NVRAM type, quality, batch, etc.  Too many variables.  Higher storage temps can greatly shorten that metric.

        We use SSDs as live operating drives, and HDDs for archival storage – either powered or cold.

        For long storage (over one year but under ten) we prefer CMR HDDs for speed and ease of backup restoration.  For even longer periods, and for relatively small data sets (25GB per optical disc), we use archival-grade Blu-ray discs; where terabytes of long-term storage are needed, we rotate critical records to new HDD media on a schedule.

        Tape appears to be making a comeback for long-term storage, but I can’t get over my mistrust of that medium from the bad old days . . .

        If you use a home backup solution, I always recommend a USB HDD-based removable drive, and always have two: one online to receive backups and one offline.  Rotate regularly.  Put the in-service start date on a label on these drives, and replace them if they get wonky or older than five years.
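        The five-year replacement rule above can be sketched as a trivial check (the dates are hypothetical, and the limit is just the poster’s rule of thumb):

```python
from datetime import date

REPLACE_AFTER_YEARS = 5  # the rule-of-thumb service limit described above

def needs_replacement(in_service: date, today: date) -> bool:
    """True once a backup drive passes the five-year service limit."""
    age_days = (today - in_service).days
    return age_days > REPLACE_AFTER_YEARS * 365.25

# Hypothetical labels read off two rotation drives:
print(needs_replacement(date(2016, 1, 10), date(2022, 2, 7)))  # → True
print(needs_replacement(date(2020, 6, 1), date(2022, 2, 7)))   # → False
```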

        ~ Group "Weekend" ~

        6 users thanked author for this post.
        • #2424058

          NetDef, can you clarify please?

          “Most modern SSDs, when stored (no power) under 70F (21C), will retain their integrity for at least one year. Longer is possible but YMMV depending on the NVRAM type, quality, batch, etc. Too many variables. Higher storage temps can greatly shorten that metric”

          I am new to SSDs, but I have thumb drives a decade old or more that are still working fine. I’m surprised that silicon would be unstable, unless you are talking about solder whiskers growing between leads, and I thought getting rid of PbSn solder fixed that, too.

          Inquiring minds…

        • #2424669

          Ditto experience on the thumb drives.  But I find that even lightly-used thumb drives sometimes get corrupt files – even if the drive or file has not been accessed for a year or more.

        • #2425378

          Do you have any experience with M-DISCs?

          Christopher Barnatt at YouTube put one to a torture test:  he wrote data to an M-DISC then put it in a freezer for a few days.  After it thawed, Chris was still able to read all the raw data without any errors.

          We recently purchased a few slim ODD writers that are supposed to write M-DISCs too, but we haven’t gotten around to trying any for large files like drive images of C:.

          1 user thanked author for this post.
        • #2425755

          We have been using M-Disks for a few years now, but I am now researching alternatives.  It’s becoming apparent that the media are getting hard to find, and the smaller DVD-sized capacities seem to be discontinued.

          The DoD torture-tested M-Disks a few years ago, and the results seemed pretty good.

          Obsolescence of the media, as well as of their readers, is a major concern.  If I have a disc rated for hundreds of years, and that data matters, will anyone be able to read it then?


          But . . .  this is starting to develop into an entirely new topic, one that is VERY interesting but might deserve its own thread.


          ~ Group "Weekend" ~

          2 users thanked author for this post.
        • #2425789

          But . . . this is starting to develop into an entirely new topic, one that is VERY interesting but might deserve its own thread.

          Agreed. Before posting again, users might want to re-read Ben Myers’s original article at Our World is Not Very S.M.A.R.T. About SSDs regarding the ability (or inability) of diagnostic tools like Clear Disk Info and Samsung Magician to report meaningful S.M.A.R.T. data and accurately predict potential SSD failures. Please don’t stray too far off-topic.

      • #2424313

        I do agree with the physics, as explained, but not with the conclusion: I, for example, have been using the SSD in my Mac (see technical specs in my signature panel) for more than 4.5 years with no trouble whatsoever, and I am certainly not the only one on planet Earth who can make such a claim. If 4.5+ years is “short-term,” I wonder what “long-term” might be: a decade, a century, a millennium, an aeon?

        As to making regular backups to, in my case, an external HDD: yes, of course, that is much-recommended standard practice whether one has an SSD or an HDD inside one’s machine.

        Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

      • #2424324

        We have the following SSDs in daily use since:

        Kingston V+100 96GB – 2011

        Kingston V+200 64GB – 2012

        Intel X25-M 80GB – 2012

        Sandisk Ultra 120GB – 2013 (failed in 2015; replaced under warranty)

        Sandisk Extreme 240GB – 2013

        Sandisk Extreme Pro 960GB – 2014

        Samsung 840 250GB – 2014

        WD Blue 3D 500GB – 2018

        WD Blue 3D M.2 500GB – 2020

        Sandisk Extreme Pro, Samsung 840, and WD Blue 3D 500GB are all heavily used for large game download/play/delete/repeat, video and photo editing and, for the Sandisk Extreme Pro, recording/viewing/deleting/repeat TV programs.

        Thus, 7 of the SSDs listed above have seen daily usage for 8 years or more, with several of them worked hard. The shortest remaining life among them all is the Kingston V+200 at 74% remaining. That’s not short term.

        The Sandisk Extreme Pro and Samsung 850 Pro (we don’t have one) were both designed for professional use involving frequent, heavy write activity. Both have a 10-year warranty which is not pro-rated.

        In the same period we’ve seen two Seagate HDDs fail (a 1TB and a 2TB), plus a Toshiba 1TB laptop HDD.

        I repair or troubleshoot computers for several friends, none of whom have experienced a single SSD failure.

        An Intel rep at a seminar I attended said that most (but not all) SSD failures are a result of controller failure, not the NAND memory chips. Of course, if you write enough TBs of data for long enough, then an SSD will surely cease working. Typically, that would take longer than you ever keep a computer. Cheers!

      • #2424371

        miss the most *basic* issue with SSDs:  the physics of the devices.

        That’s why manufacturers use ECC. Loss of data from a few cells does not affect data integrity.

        If you want to test – and have the disk self correct – the data on a stored disk, simply read all the files from the disk.
        See this post for more details: #2318643
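        Real SSD controllers use far stronger codes (typically BCH or LDPC) than this, but a toy Hamming(7,4) code shows the principle Paul describes: a single leaked or flipped bit per codeword is detected and corrected transparently when the data is read back.

```python
def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c: list[int]) -> list[int]:
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                          # simulate one leaked/flipped cell
assert hamming74_decode(code) == word # data comes back intact
```

        Reading every file, as suggested above, forces the controller to run exactly this kind of correction pass and rewrite any marginal blocks it finds.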

        cheers, Paul

    • #2423897

      OK, I followed Ben Myers’s advice here #2423866: “Your best bet is to go to everymac.com. https://everymac.com/ultimate-mac-lookup/  Enter the A-number and the EMC number, and the website will tell you what’s inside your MacBook.” I found the A model number under the laptop and the EMC number by looking at a list here:

      https://everymac.com/systems/by_capability/mac-specs-by-emc-number.html

      So I did. The A number, 1398, is not unique to a particular computer model, but covers several, from 2013 through 2015.  The EMC number was more useful: with it, I found that the SSD is not soldered, by clicking a link on the EMC page to get here:

      https://everymac.com/systems/apple/macbook_pro/specs/macbook-pro-core-i7-2.8-15-iris-only-mid-2015-retina-display-specs.html

      And, from there, here, and to the point:

      https://everymac.com/systems/apple/macbook_pro/macbook-pro-retina-display-faq/macbook-pro-retina-display-how-to-upgrade-ssd-storage.html

      Conclusion: my Mac’s SSD is not soldered:

      Apple does not intend for end users to upgrade the SSD in these models themselves. The company even has used uncommon “pentalobe” screws — also called five-point Torx screws — to make the upgrade more difficult. However, access is straightforward with the correct screwdriver, the SSD modules are removable, and Apple has not blocked upgrades in firmware, either. There are two significantly different SSD designs for these models, though.

      Specifically, the “Mid-2012” and “Early 2013” models use a 6 Gb/s SATA-based SSD whereas the “Late 2013,” “Mid-2014” and “Mid-2015” models use a PCIe 2.0-based SSD. These SSD modules are neither interchangeable nor backwards compatible with earlier systems.

      As a result, third-parties, like site sponsor OWC have released a 6 Gb/s SATA-based SSD upgrade with a compatible connector for the “Mid-2012” and “Early 2013” models and another PCIe 2.0-based flash SSD with a compatible connector for the “Late 2013” and subsequent MacBook Pro models.

      By default, from testing the “Late 2013” and “Mid-2014” models, OWC discovered that when a “blade” SSD from a Cylinder Mac Pro is installed in one of these systems, it “negotiates a x4 PCIe connection versus the stock cards, which negotiate a x2 PCIe connection.” This means that these Retina MacBook Pro models provided more than 1200 MB/s drive performance, a huge jump from the standard SSD.

      Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

      • #2423969

        One small bit of caution here.  I may have overstated the ease of replacement of a proprietary Mac NVMe SSD. Physical replacement of an SSD is the easy part: remove the pentalobe screws from the bottom, remove the bottom cover, remove the small-capacity SSD, install the large-capacity SSD, and replace the cover and screws.

        Installing a fresh copy of MacOS is straightforward (Command-R when powering up), as long as one has a working internet connection to give access to the Apple mother ship.

        If you want to clone the smaller-capacity SSD to the larger SSD, thereby retaining all your data and licensed programs, life gets more complicated.  The easiest way is to use Disk Utility in the macOS Recovery System to clone the smaller SSD to an external USB drive, then use it again to clone from the external drive to the new higher-capacity drive.

        There are USB adapters that allow one to attach proprietary Mac SSD to a USB port, but it makes no sense to buy one unless one does this sort of job repeatedly.

    • #2423944

      oldergeeks.com seems like it’s unstable. I can’t even reach the page to download the file. MajorGeeks has it if anyone else has problems.

      https://www.majorgeeks.com/files/details/clear_disk_info.html

    • #2423947

      A power surge will sound the death knell for SSDs.

      Thus the importance of a good surge protector.

      • #2423971

        Really good surge protectors, or a UPS of ample capacity!  Really good power supplies offer some protection for the rest of the computer.  The magnitude of the surge, usually when power is restored, has a lot to do with it. Here in the US, a surge to 130V is almost negligible; 200V will destroy not just an SSD, but most everything else inside a computer.

        1 user thanked author for this post.
        • #2423976

          We use CyberPower OR1500pfclcd units.

          They are part of CyberPower’s PFC Sinewave UPS series, and we use them to support our workstations, modems, and routers.

          Specifications are as follows:

          • Power Rating 1500 VA/1050 Watts
          • Topology: Line Interactive
          • Waveform: Sine Wave
          • Plug type & cord: NEMA 5-15P, 6 ft. cord
          • Outlet types: 8 x NEMA 5-15R
          • Communication: USB, Serial
          • Management software: PowerPanel Business Edition
          • Warranty: 3 year
          • Connected Equipment Guarantee: $200,000

          For less expensive or less sensitive equipment we use Tripp Lite Isobars. The Isobars are not as robust as the CyberPower units but come with $50,000 Ultimate Lifetime Insurance.  We have used the insurance three times over the last 20 years to replace electronics that failed due to power surges (a television, a WiFi radio, and a microwave).

          3 users thanked author for this post.
        • #2423997

          We’ve been using that exact model of Cyberpower’s UPS for years now with excellent results for workstations and network racks in problematic power situations.

          Unfortunately sourcing new replacements (and even their batteries) has become very hit and miss this year, not to mention their prices have gone way up.

          ~ Group "Weekend" ~

        • #2424032

          If you are in the US or Canada, have you tried finding a CyberPower reseller through their website at https://www.cyberpowersystems.com/reseller-search/ ?

          The page has links to their resellers including, but not limited to, Amazon, B&H, Best Buy, CDW, Costco, Fry’s, Home Depot, Newegg, MicroAge, Office Depot, Provantage, Staples, Walmart, etc.

          1 user thanked author for this post.
      • #2424175

        A power surge will sound the death knell for SSD drives

        A power surge only affects the PSU, which isolates the other components and provides them with regulated power. The PSU is designed to cope with a range of input voltages, but too much or too little will cause it to provide the wrong voltage to internal components. Some internal components will tolerate the improper voltage and some won’t, but none will be entirely unaffected.

        If you need reliability or have power supply issues, there are two things you need to do.
        Back up regularly, because no solution is 100% effective.
        Add a UPS in front of the PC(s) to limit voltage swings and to provide power through brownouts and outages.

        cheers, Paul

        2 users thanked author for this post.
        • #2424227

          We had an HP Windows 10 workstation that was exposed to a power surge about a year ago.

          We installed a new SSD, used its recovery software to restore it to as-new condition, reinstalled its software, and recovered its working files from a backup.

          The machine has been working fine ever since.

          Thus my comment about SSDs and their exposure to power surges.

          2 users thanked author for this post.
        • #2424370

          Anecdotal data is good, but you shouldn’t use it as proof. It’s just as likely that the SSD manufacturer used lower-tolerance components to keep costs down, and that a disk from another manufacturer would have survived.

          cheers, Paul

        • #2425536

          Our AC power cabling always enforces the following “policy”:

          CHEAP links BURN first

          EXPENSIVE links BURN last

          Thus, at the 110V wall outlet, we plug in a relatively cheap surge protector with multiple outlets.  Ideally, that surge protector has a separate ON/OFF switch for each outlet.

          Next in line is a high-quality UPS / battery backup unit:  we’ve been using the APC brand for years, and they work great, particularly if their PowerChute Personal software is installed with the required USB cable installed correctly.

          Last in line is the quality PSU that we try to install in every workstation that we build.  Presently, our hands-down favorite is Seasonic brand.  Their Tech Support is superb.

          If a surge burns the surge protector but leaves the UPS intact, you’re out maybe another $10 to $20.

          If a surge burns the UPS too, you’re out maybe another $100.

          If a surge burns all the way thru the PSU, your system probably suffered the direct hit of a lightning strike:  but, the good news is that the latter event is highly improbable, and there are other ways to shield computers from such worst-case events.

          Just our “policy” here.  Hope this helps.

          1 user thanked author for this post.
    • #2423978

      With a conventional hard drive, there are two kinds of failures that are possible (roughly speaking)… mechanical failure and electronic failure. The various moving parts will eventually wear out, and that will necessarily cause problems.

      Whether you get any warning about this is a luck of the draw thing. You might, but it’s by no means certain. Even if there is some warning, it doesn’t mean there won’t be any data loss by the time you are aware there is a problem.

      The second kind of failure is an electronics failure. Hard drives have RAM (for buffering/caching) and CPUs, like a PC (or an SSD), and these components can fail. When this happens, it is quite likely to be of the “bolt from the blue” type that no one saw coming.

      SSDs, of course, have no moving parts, so the first kind of failure can’t happen. The second kind can, though. It’s not as obvious, but there is also a difference between a media wear-out failure and any other electronics failure, in that one is an expected, fairly predictable thing that you can see coming from the SMART data, while the other electronics failure can’t really be predicted by SMART, just as with the hard drive whose electronics fail suddenly.

      A lot of people are put off by the ticking time bomb aspect of the SSD, where you can watch it losing its life from the moment you start using it. The hard drive is a ticking time bomb too… you know it’s going to fail at some point, but there’s no inkling of when it might be, so it’s easier to kind of think of it like it isn’t going to happen (until it actually does).

      I’ve got one SSD that I’ve beaten on pretty hard since I bought it (a Samsung 840 Pro 128GB). I’ve had the swap/paging file on it for the whole time I have had it, and Firefox used to have some pretty nasty memory leaks that caused it to consume all of my RAM very frequently, and it was only the speed of the SSD that kept the system limping along until I could close Firefox and get the RAM back. Still, that now 8 year (and one month) old drive is still in service. The drive is now down to 67% of its rated service life, and it has had just over 42,000 power-on hours.

      Note that the rated life doesn’t mean that the drive just dies when the clock runs down to zero. It was like this with at least one model of Intel drive (the electronics would turn the drive to read-only instantaneously, and once it was powered down, it would brick itself, so you could not even use it in read-only mode to get your data), but in the famous Tech Reports test of SSDs, where lots of drives were written to until they died, the 840 Pro was only about half done when it got to its officially worn-out status.

      The most recent hard drive failure I had was the sudden, bolt-from-the-blue type. I was upgrading my Asus laptop (Core 2 Duo era) from XP, and while the drive had a lot of hours on it (~23k, if I recall) and was approaching five years old, it still worked perfectly, with no SMART data worries and no misbehavior of any kind. It was a reliable workhorse.

      And then, during the installation of 7, it quit. No noise, no warning at all. It never read or wrote another byte of data. The warranty was five years, so I contacted Seagate and sent it back under warranty, just under the wire.

      That drive had a lot of hours and a lot of years, at ~22,000 and nearly 5, respectively, but my Samsung 840 Pro SSD has it beat, with three more years and nearly double the active hours. It will fail someday, but I have backups, so I will leave it in and keep getting my money’s worth out of it all these years later!


      Dell XPS 13/9310, i5-1135G7/16GB, Kubuntu 22.04
      XPG Xenia 15, i7-9750H/16GB & GTX1660ti, Kubuntu 22.04

      2 users thanked author for this post.
    • #2424050

      Urged on by the discussion here, I tried, for the first time, an Inateck all-purpose USB 3.0 adapter for hard drives that I bought a couple of months ago.  It is all-purpose because it handles SATA drives and both notebook 44-pin 2.5″ and desktop 40-pin 3.5″ parallel ATA drives.  Clear Disk Info shows the SMART attributes for the SATA drive I connected up, but Speccy gets confused.  So the Inateck implements the USB Attached SCSI Protocol (UASP), which allows SMART data to be seen over USB 3.0.  I have not yet put it to the acid test of reading SMART data from a PATA drive.  The Inateck unit is affordable at around $30, well worth it for me.

      1 user thanked author for this post.
    • #2424051

      Here is a photo of the Inateck all-purpose USB 3.0 adapter, so there is no doubt what I talked about.

      Inateck all-purpose USB3 adapter

      • #2424087

        Ben Myers: What is a USB adapter? How is it used? Inside a computer or outside of it? I have never heard of one, or seen one, other than the one in the photo you showed. I may not be the only one here in the same situation. Perhaps you could explain? If you do, I’ll thank you.

        Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

        • #2424150

          Best to show, as well as tell.  There are three types, all attaching to a USB port on a computer.

          For most purposes, the Inateck one shown in the photo previously handles a 2.5″ SATA solid-state drive as well as other kinds of drives, both old-time spinning and SSD.  Attach a SATA drive to the SATA connector, provide power to the Inateck, and connect the adapter to a USB port with a cable.

          With any of these gizmos, the drive shows up as an external drive on a macOS, Windows, or Linux computer.

          I do not have a photo of the second type, to which an NVMe SSD is attached and which also plugs into the computer’s USB port with a cable.

          The third one is intended only for MacBook SSDs, which use a proprietary, Apple-only edge connector, very different from a standard NVMe SSD.  The idea is the same: attach your MacBook SSD to the small board, then connect the small board to a USB 3.0 port with the supplied cable, aluminum housing optional. The photo below shows one very similar to the one(s) I have here for macOS data recovery.

          USB adapter for MacBook proprietary SSDs

          2 users thanked author for this post.
        • #2424153

          @OscarCP, here is an Amazon.com page for the product Ben referenced.

          This thingy lets you hook up IDE and SATA drives to a computer via a USB cable. You plug the desired drive into the appropriate connector, connect one end of the USB cable to the dock, and the other to the computer. All connections are external; no opening of computer cases is required.

          I don’t have this exact device, but I have others like it, as well as a couple of docking stations like this one. These serve much the same purpose as the Inateck; some models feature dual SATA connectors, but this one in particular has one SATA and one IDE.

           

          1 user thanked author for this post.
        • #2424172

          Thanks, Ben Myers and Cybertooth:

          “… All connections are external, no opening of computer cases is required.”

          I like that. So this is one way to, among other things, copy (or clone) the Mac SSD to another, external (at least at the time) SSD. Without attempting major surgery on the Mac. (Instead of doing that and then finding fewer pentalobe screws than the ones one removed to open the Mac …)

          Good to know. I’ll keep this in mind.

          Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

        • #2425387

          One of the problems that less experienced users may not appreciate at first is insufficient DC power from any given USB port.

          This problem occurs most frequently with older USB 2.0 ports, but it can also occur with newer integrated USB 3.0 ports if the motherboard’s chipset doesn’t meet the minimum current specifications.

          The easiest way around this non-obvious barrier is a “Y” USB adapter cable that plugs into two USB ports:  one transmits data, and the other simply supplies the USB device with extra DC power.

          This problem becomes obvious if one is installing a USB add-in card into an empty PCIe expansion slot:  if the card has a Molex or SATA power connector, a power cable from the PSU needs to plug into that connector to provide the USB port with all the DC power any device could possibly need.

          1 user thanked author for this post.
        • #2424182
        • #2424188

          marvin, I don’t see how a MacBook Pro SSD would plug into that – seems like it’s for a different format drive.

        • #2425625

          Speaking of USB ports, many modern motherboards come with 2 entirely different types of USB connectors.

          One type is most often available at the rear I/O panel where RJ-45 and audio ports are also located.

          Another type is much less obvious because it consists of pin headers that are mounted vertically at the factory, at right angles to the motherboard plane.

          As such, those pin headers are available for customizing a PC’s USB cabling options.

          One popular way of utilizing those pin headers is to attach a compatible cable that adds more USB ports to an otherwise empty 3.5″ or 5.25″ drive bay.

          Similarly, a brand new chassis should come with one or more of those cables, to activate USB ports mounted in the chassis front panel.

          For a less popular example, a simple adapter cable attaches a block connector at one end to an integrated pin header, while the other end is a standard USB Type-A connector.

          We recently deployed that setup by “concealing” a 256GB flash drive inside a small-form-factor HP workstation.  The flash drive plugs directly into the Type-A connector on that simple adapter cable.

          Even though the flash drive is only USB 2.0, it still works fine for routine backups of third-party system software.  And cable management is no problem, because the adapter cable is only about 8″ long overall.

          One could, for example, maintain the latest drive image of C: on such a “concealed” USB flash drive.  It won’t be as fast as USB 3.0, but the convenient availability recommends this design highly.

    • #2424068

      We often look to the manufacturer’s warranty policy for an estimate of the useful life of equipment.

      In the case of Western Digital, they offer a 5-year limited warranty on their 250 GB, 500GB, 1 TB, 2 TB, and 4 TB WD Blue SATA SSD 2.5”/7mm cased internal drives.

      1 user thanked author for this post.
      • #2425389

        And a good metric to apply when shopping for storage is the quotient:  cost / (warranty years).

        Typically, even with its higher price, a drive with a 5-year warranty will beat one with a 3-year warranty when that quotient is taken into consideration.
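        A quick worked example of that quotient, using Command Prompt’s SET /A integer arithmetic (the prices here are made up for illustration only):

        ```
        rem Compare cost per warranty year (whole dollars; SET /A is integer math)
        set /a PER_YEAR_3=90/3
        set /a PER_YEAR_5=120/5
        echo 3-year drive at $90:  %PER_YEAR_3% dollars per warranty year
        echo 5-year drive at $120: %PER_YEAR_5% dollars per warranty year
        rem Here the 5-year drive wins: $24/year vs. $30/year
        ```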

    • #2424206

      Thank you for this timely article; I have just installed my first NVMe drive in a new build. It’s a Samsung 980 Pro 500GB. Initially I installed Linux on it, just to test the rig, and it ran OK.

      I’ve now copied a working Windows 7 partition onto a HD (not the SSD) and boot from that. The rig (an MSI B550M with AMD Ryzen™ 7 3800X) does not support Win7, so I installed some drivers from AMD marked Samsung: secnvme.sys and secnvmeF.sys. Windows Disk Mgt sees the drive as a 512MB EFI System Partition (healthy) and a 465.2GB Primary Partition (healthy).

      I’ve just started Win 7, and CrystalDiskInfo is showing Red Bad for the SSD. Does CrystalDiskInfo expect the SSD to have Windows on it (e.g., does it need the Windows drivers)?

      The Red is against Available Spare 0000000000, and the Yellow against Percentage Used 00000000FF. The other attributes show as Unknown.

      Alan

       

      • #2424285

        The drivers do not need to be on the SSD. If they are installed in Windows, that’s all that is needed, regardless of where Windows lives.

        I would say the drive is fine. The SMART implementations are notoriously varied, and CrystalDiskInfo is probably just misinterpreting the numbers it sees. If it were being reported bad by the utility from the drive manufacturer, though, I would contact them for assistance. You can always do that if you are concerned, but if it were mine I’d be fine with it. I am always making backups, though, just because failures do happen.

        On that topic, I took my Acer Swift 1 out of mothballs (not literally) last night to check and see if Windows was still installed on it, and it was, so I started to boot it to check something, and it got about halfway there and then turned off. Power plugged in, but battery stopped charging, and it won’t respond to anything.

        I only mothballed the Swift because its role was taken by my Dell XPS 13, but it had been working fine until then.

        Failures happen, and often with zero warning. This time it was presumably the Swift’s motherboard (as it encompasses nearly everything), but it could have as easily been a SSD or a hard drive.

        Dell XPS 13/9310, i5-1135G7/16GB, Kubuntu 22.04
        XPG Xenia 15, i7-9750H/16GB & GTX1660ti, Kubuntu 22.04

        2 users thanked author for this post.
        • #2424290

          While unlikely, it is just possible that the un-mothballed ACER Swift needed to be plugged in for 10 minutes to an hour before turning it on. Sometimes electronics that have been in storage for a while like to sit in a warm, dry location so that any condensation, however slight, has a chance to evaporate. Also, some battery operated devices need at least a few minutes on charge before powering up.

    • #2424238

      I would have a lot more confidence in these tools if they could agree a little better.

      Capture-2

      🍻

      Just because you don't know where you are going doesn't mean any road will get you there.
      2 users thanked author for this post.
      • #2424367

        It seems all reports of low remaining life are from ClearDiskInfo.
        I would consider ClearDiskInfo suspect until proven otherwise.

        cheers, Paul

        • #2424459

          To repeat what I stated in my article: if there were standards for SSD SMART attributes and their meanings, all of the software that reads them would interpret them consistently.  There are no SSD SMART standards today, so every SSD manufacturer does what it wants, and most do not bother to tell us what they do.  As a result, every piece of software that looks at SMART has latitude to interpret results differently.  And they do, as seen by all the commentary here.

          If anything, this is a worse embarrassment for the computer industry than the failure of Microsoft Windows, Apple macOS, and the various Linux flavors (at least the ones I’ve tested) to grapple with timely notifications of hard-drive and SSD reliability issues before we all have to cope with BSODs, unbootable Macs, and kernel panics.

          1 user thanked author for this post.
        • #2425570

          We got frustrated with the lack of “granularity” in the Error checking options available in “My Computer | Properties | Tools”.

          So, we “bit the bullet” and wrote a complex BATCH program that passes command line options to the CHKDSK command, and runs that command on all partitions active in any one workstation.

          C: gets handled differently in that BATCH program; chiefly, it skips C: if the user has requested CHKDSK to “fix” errors with the “/f” option.

          That’s no big deal, because CHKDSK can always be launched separately to “fix” C: (requiring a RESTART).

          CHKDSK is smart enough to inform the User if either a DISMOUNT or a RESTART is required.

          Here’s a typical invocation of our DODISKS.bat BATCH program:

          dodisks /v /c /i /x

          Each of those command line options is explained in the CHKDSK help:

          chkdsk /?

          For example, “/x” forces a dismount without requiring User intervention.  Without that option, CHKDSK will prompt the User to confirm a dismount.  Either way, the partition is re-mounted when CHKDSK is done checking it.

          When DODISKS.bat finishes, we simply scroll back to the top and search visually for any reported anomalies.  Then, it’s a piece o’ cake to run CHKDSK again on each partition that may need “fixing” of some kind.

          In this way, DODISKS.bat can run unattended.  Or, if I have nothing else to do, I can sip a fresh cup of coffee while I watch it plow thru all active partitions in one of our workstations.

          It would be nice if future versions of CHKDSK supported an option to write anomalies in a .txt file that is specified on the command line.
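          The real DODISKS.bat isn’t shown here, but a minimal sketch of the idea, with the drive letters and structure being my assumptions rather than the author’s actual program, might look like this:

          ```
          @echo off
          rem DODISKS.bat (hypothetical sketch): run CHKDSK on several partitions,
          rem passing all command-line options straight through to CHKDSK.
          rem Usage:  dodisks /v /c /i /x
          for %%d in (D E F) do (
            echo === Checking %%d: ===
            chkdsk %%d: %*
          )
          echo Done.  Scroll back and look for any reported anomalies.
          rem C: is skipped here; run "chkdsk C: /f" separately (requires a RESTART).
          ```

          The %* in a batch file expands to all the options typed on the command line, which is how one invocation can forward them to every CHKDSK run.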

    • #2424320

      Based upon the information contained in this thread, I am beginning to understand why HP is configuring/delivering our new workstations with two drives:

      • 256 GB TLC Solid State Drive
        • Size: 256 GB
        • Interface: SATA
        • Type: TLC Solid State
        • Width: 2.5 in (6.35 cm)
      • 2 TB HDD
        • Size: 2 TB
        • Interface: SATA
        • Rotational speed: 7200 rpm

      Our practice has been to replace the 256 GB SSD with a 2 TB SSD for operating software and storage while using the second 2 TB traditional drive for backups.

      Moving forward we will keep the 256 GB SSD for Windows and programs such as Microsoft Office, use the traditional 2 TB HDD to provide stable data storage, and add a third traditional drive for backups.

      Thank you all for your participation in this thread.

      • #2424368

        I can’t see how you reach that conclusion from the posts above.
        The most likely reason for a smaller SSD and decent HDD is purely cost.

        Do you have experience of the replacement SSDs dying?
        Having the HDD as backup seems to be a very sensible way to mitigate SSD failure. Even keeping the original SSD and making an image of it to the HDD whilst using the HDD for data is good mitigation.

        On the other hand, a couple of new 2TB SSDs at around $200 starts to look like a decent 2 disk NAS with UPS. 4 of them is a 4 disk unit with snapshot backup for true ransomware protection.

        cheers, Paul

      • #2424827

        This HP configuration largely matches the setup in my customized Dell Precision T5810 with its 256GB SSD for programs and regularly used stuff and a 2TB traditional hard drive to keep all the data I use.  One thing to consider is customizing the hard drive setup to cycle the drive down when it has not been used for a while.  The drive remains in a hot standby state, ready to cycle up again when needed, albeit with a few seconds delay.  The idea is to save wear and tear on the drive motor and bearings.  You can set this up in the Windows Power Options.
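        If it helps, the same spin-down setting can also be applied from Command Prompt with powercfg; the timeout values below are just examples, not recommendations:

        ```
        rem Spin the hard drive down after 20 idle minutes on AC power, 10 on battery
        powercfg /change disk-timeout-ac 20
        powercfg /change disk-timeout-dc 10
        ```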

        1 user thanked author for this post.
        • #2425583

          An external USB-cabled enclosure is also useful if it has a separate ON/OFF switch.

          With that enclosure switched ON, we’ve come to make heavy use of the XCOPY command in Command Prompt.

          XCOPY works equally well whether one is copying an entire top-level folder or some sub-folder below a top-level folder.

          In this next simple example, XCOPY updates a backup copy of a top-level folder named “website.mirror”:

          E:

          cd \

          xcopy website.mirror X:\website.mirror /s/e/v/d

          Where,

          X: is the Windows drive letter for a partition in the eXternal enclosure.

           

          Now, as happens with our library website, we may only need to update the “authors” sub-folder in that library.  XCOPY works the same way on sub-folders too:

          E:

          cd \

          cd website.mirror

          xcopy authors X:\website.mirror\authors /s/e/v/d

           

          Finally, because the latter requires more typing than is absolutely necessary, we also wrote 2 custom BATCH programs named GET.bat and PUT.bat that launch XCOPY with the required command-line options e.g.:

          cd \

          cd website.mirror

          put authors X

           

          The GET.bat and PUT.bat BATCH programs take care of “parsing” the command-line options required by the XCOPY command.
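          GET.bat and PUT.bat themselves aren’t shown, but a hypothetical sketch of PUT.bat, under the assumption that “put authors X” means “XCOPY the authors sub-folder to the same path on drive X:”, could be as simple as:

          ```
          @echo off
          rem PUT.bat (hypothetical sketch) -- usage:  put <sub-folder> <drive-letter>
          rem Copies %1 from the current folder to the same path on drive %2:,
          rem using the same XCOPY options as the examples above.
          for %%I in ("%cd%") do set HERE=%%~nxI
          xcopy %1 %2:\%HERE%\%1 /s/e/v/d
          ```

          The FOR line extracts the name of the current folder (e.g. website.mirror) so the destination path can be rebuilt on the target drive.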

           

    • #2424408

      It seems all reports of low remaining life are from ClearDiskInfo.
      I would consider ClearDiskInfo suspect until proven otherwise.

      ClearDiskInfo is weird, but for the opposite reason you mention: it claims Remaining Life is 100% on a four-year, two-month-old 500 GB Toshiba NVMe. That reading makes no sense. CrystalDiskInfo says the health of the drive is “good” at 84%, which seems more credible.

    • #2424445

      Create a fresh drive image before making system changes/Windows updates, in case you need to start over!
      We all have our own reasons for doing the things that we do. We don't all have to do the same things.

      • #2424462

        Yesterday’s New York Times had a piece by industry veteran John Markoff, front, top, and center of the business page, addressing the issue that as circuits inside chips get closer and closer together, and smaller of course, the odds increase of random electrical leaks leading to equally random, totally unpredictable failures of all kinds. This applies not just to CPUs, but also to memory and all kinds of other chips. Maybe the industry stops at 7nm processes?  5nm?

        2 users thanked author for this post.
        • #2424489

          Ben Myers: “Maybe the industry stops at 7nm processes?  5nm?

          If not, maybe Nature, calling out “enough is enough!”, will do it:

          This has been a worry for several years now: as components become tinier and tinier, the probability of a quantum phenomenon known as tunneling increases, meaning that electrons can jump from where they are supposed to be to somewhere they shouldn’t be, regardless of the equally tiny insulating barriers. Also, the probability of a tiny component being hit, with serious or even total damage, by a hard cosmic-ray particle such as an iron nucleus could become high enough to result in repeated integrated-circuit failures.

          Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

          1 user thanked author for this post.
        • #2425651

          Undetectable errors are not a theoretical paranoia from some far-out fantasy land.

          When we were deciding to rely on RAID-0 arrays for speed, lots of opponents argued that the “redundant” array modes would be much more reliable, and that we were increasing the probability of an unrecoverable error — vaguely defined as a catastrophic loss of Operating System files.

          I was not persuaded very much by their arguments, mainly because a failed member in a RAID-0 array simply requires replacing that member; and that ended up being the “same difference” as when a JBOD drive fails: it must be replaced and its data restored, same as with a RAID-0 array.

          What tipped the scale for me was a discussion between Allyn Malventano and Ryan Shrout, formerly at pcper.com.  Both accepted jobs at Intel (last I heard).

          Allyn reported witnessing serious problems when “redundant” arrays fell into infinite loops during automatic “recovery” sequences.

          In other words, the automatic repair logic for a RAID-5 or RAID-6 array created more errors from which it was unable to recover itself.

          There was no escape from the infinite loop Allyn had confirmed.

          So we went with RAID-0 arrays for the C: partitions on our primary workstations, and never looked back;  their performance AND RELIABILITY have been stellar.

        • #2425652

          Only 6 days ago:

          “Elon Musk’s Starlink operation lost 40 out of 49 satellites it launched into the Earth’s upper atmosphere on Wednesday, as a geomagnetic storm knocked out the majority of the fleet.”

          “Why do I have this sinking feeling?” asked the church mouse, the first rodent-astronaut to pilot a StarLink Internet transponder satellite.

          “Just wait until you reach the lower atmosphere at Mach 10,” replied the Choir Master.

          1 user thanked author for this post.
        • #2425526

          Here’s a link to the Markoff article referred to above:

          https://www.nytimes.com/2022/02/07/technology/computer-chips-errors.html

          I’m always amused when I see a reference to the “cosmic rays caused it” line, which is the oldest excuse in the book given by tech support call center reps when they’ve run out of ideas to explain some problem with their software.

          But I think the analogy of finding a leaking faucet in one unit of an apartment building that covers the entire land mass of the United States is fabulous.

          — AWRon

          2 users thanked author for this post.
        • #2425655

          The “cosmic ray did it” excuse may be amusing, but it will not be so amusing if the components in an integrated circuit become sufficiently small. That is the point of cosmic rays being mentioned here in the first place.

          As to Musk’s satellites coming down soon after being launched:

          “Ohh, who could have guessed that it was not ideal to launch during an ongoing massive geomagnetic disturbance, because the direct impact with a solar mass ejection, predicted using spacecraft solar data to hit Earth days in advance, was going to thicken the upper atmosphere?”

          Well, this has been known since the beginnings of the Space Age; in fact it is basic stuff that those of us who deal with artificial satellites, and also have a side interest in the mechanics of the upper atmosphere, must and do know.

          And this is the company, with Mr. Musk at its head, that is supposed to put in orbit, and manage well, an unprecedentedly huge constellation of thousands of satellites, meant to profitably connect those who can pay to the Internet everywhere? This low-orbit design has been chosen to make the connection speed super-high, so the satellites are going to be just some 50 km above the International Space Station and the Chinese space station (which has already had to be moved to dodge one of Mr. Musk’s satellites). And these satellites are giving heartburn to astronomers all over the world because, even with extra precautions, they reflect too much sunlight downward, spoiling whole (and very hard to get time for) telescope observing nights, particularly for the instruments with really big eyes. Way to go, Mr. Musk!

          Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

          1 user thanked author for this post.
    • #2424661

      I think something got lost in my post about cold storage and SSDs.

      By “cold,” we mean a drive that contains data and is continuously offline with no power.  We do NOT mean a drive that is occasionally plugged in from time to time, nor a continuously active drive in service.  Your primary SSD in your computer is safe unless you don’t use that computer for a year or so.

      SSDs that are in frequent use do not lose data due to cell drainage.  SSDs that are used for long-term cold storage can drop data, depending on age and environment.  This applies to any flash-based storage.

      For an in depth explanation see https://www.anandtech.com/show/9248/the-truth-about-ssd-data-retention

      ~ Group "Weekend" ~

      2 users thanked author for this post.
      • #2425656

        The article linked by NetDef ( #2424661 ) puts an end to a lot of concerns about SSD loss of information under normal usage conditions. Including my own.

        Ex Windows user (Win. 98, XP, 7) since mid-2020. Now: running macOS Big Sur 11.6 & sometimes, Linux (Mint)

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 MHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV and Malwarebytes for Macs.

        1 user thanked author for this post.
    • #2425099

      I also thank you for this wonderful article.  This week I ran into an unusual problem with a WD Blue 500GB NVMe in one of my PCs.  The system shut down, and would shut down again about 30 seconds into any boot attempt.  It’s an older machine, so I suspected the CPU needed fresh thermal grease.  It did need the grease, but the system still would not boot.  I booted from a cloned SSHD drive, and the system came up normally.  I tried to clone the system back to the NVMe drive, but the system shut down within a few minutes of the clone starting.  Tried again, same result.  I then used the Windows format command to fully (not quick) format the NVMe, cloned the system, and booted from the NVMe.  It has been running for 3 days now, and WD Dashboard shows the drive as normal, with 100 spares showing in the SMART data.

      2 users thanked author for this post.
    • #2425147

      I have a 6-year-old Dell 2-in-1 laptop with a 256GB SSD that suddenly failed to boot on 11/25/21.  After a while I was finally able to restart it into Safe Mode and ran chkdsk with repair.  After that it started up correctly and showed no errors.  However, when I ran Dell’s online system check, it got through most of the tests without any problem, but then it suddenly said the SSD was defective and should be replaced.  After reading your article, I assume that even though the system RAM test was OK, something in the SMART data triggered the “replace the SSD drive” result.

      Since the laptop was already 6 years old, I decided not to immediately try to replace the SSD, and went with ordering a new replacement laptop, and transferred my old data files to the new laptop.

      Now I went back to the old Dell and tried viewing the trouble logs.  I found the following errors dated 11/25/2021 (see attachments)

      It appears there was an error in the NTFS index?

      And it was cleared after CHKDSK was run.

      What do you think about the life of the restored system?  Should I change the SSD, or continue to use the laptop or give it to someone?

       

      Thanks

      • #2425195

        If the SSD is listed as failing then it may last another year, or only a day. Whether you use it or give it away, it needs a new disk.

        cheers, Paul

        1 user thanked author for this post.
        • #2425486

          I retested the SSD (actually the whole computer) by running Dell’s support computer check, which runs about 40 minutes, and it said everything tested OK.  I don’t know what happened to the previous error notice.

          Thanks

    • #2425392

      In case this setup may benefit other users here, this is how we set up the system software on a new workstation:

      After the OS and all application software are installed and working correctly, we run “Migrate OS” in the excellent Partition Wizard software.

      That task should install a second identical set of system software on a second bootable drive.

      Then, we periodically write a drive image file of C:, and make redundant copies of that drive image as a hedge against sudden storage device failures.

      If the main C: partition gets hosed and/or begins to malfunction due to malware and such, we simply change the BIOS boot drive and boot from the second OS copy on the second bootable drive.

      While running from that second bootable drive, we fire up Acronis and restore the latest working drive image file to the primary boot drive.

      Then, we change the BIOS boot drive back to the primary boot drive.

      Restoring a working C: partition takes all of 10 to 15 minutes, and totally eliminates the need to spend hours trying to troubleshoot a system software failure.

      To put a little icing on the cake, we also keep a CHANGES file up-to-date, so we know what recent changes must be re-done, because they were done after the latest drive image of C: was created.

       

      1 user thanked author for this post.
      • #2425481

        Thanks for your several informative posts in this thread.

        Could you please provide specific software information for “Partition Wizard” and “CONTIG”?

        Also, is your reference to the CHANGES file something inside one of the programs, or a separate piece of software?

        Thank you,

        — AWRon

        1 user thanked author for this post.
        • #2425485

          Here is the link showing the steps to Migrate OS:

          How to Migrate OS to SSD/HD | MiniTool Partition Wizard Tutorial

          Here is the link to download the Minitool Partition Wizard which includes the Migrate OS feature. Hover on the blue “Download Partition Wizard” button  on this page to  try out the Pro Version:

          Thanks for Downloading MiniTool Partition Wizard Free

          MACRIUM REFLECT: There’s a totally free version of Macrium Reflect available to Clone/Copy/Migrate your OS to another drive. I have found it to be very reliable and fairly easy to use over a period of several years. So, you can migrate (copy/clone) Windows onto a second drive, then if the need arises you can select that drive in your system BIOS (or UEFI as they call it these days) and boot from it:

          Macrium Software | Reflect Free Edition

           

           

          2 users thanked author for this post.
        • #2425523

          Last time I checked, the latest freeware version of Partition Wizard does NOT support “Migrate OS”.  That excellent feature is supported in the fully licensed version.

        • #2425522

          [Apologies:  I forgot to LOGIN:  this answer may appear twice.]

           

          “CHANGES” is merely the NTFS ASCII filename that I chose to describe a simple .txt file that I maintain with Windows NOTEPAD.  Latest entries are at the bottom.

          I should also add here that we always shrink C: to something like 100GB, and format the remainder as a dedicated data partition.  This makes management of the system software set a LOT easier, all around.

          On one of our workstations, here are the most recent CHANGES, which correspond to the Acronis image files with the same .NNN extension.  Here’s how to interpret those lines of text:

          If I needed to roll back to version .016, then after rolling back I would still need to make the changes listed below under .017, because those changes were made AFTER image file .016 was written:

          Acronis.Images.016
          downloaded & installed Firefox 90.0.1 (64-bit)
          downloaded & installed Firefox 90.0.2 (64-bit)
          downloaded & installed Firefox 91.0 (64-bit)
          downloaded & installed DriverEasy 5.7.0.39448 from:
          E:\drivereasy.com\DriverEasy_Setup.exe
          downloaded & installed Firefox 91.0.1 (64-bit)
          downloaded & installed Firefox 91.0.2 (64-bit)
          downloaded & installed BURNAWARE from:
          E:\burnaware\burnaware_free_14.6.exe
          downloaded & installed IMGBURN from:
          E:\imgburn\SetupImgBurn_2.5.8.0.exe
          downloaded & installed Firefox 92.0 (64-bit)

          Acronis.Images.017
          downloaded & installed Firefox 92.0.1 (64-bit)
          downloaded & installed Firefox 93.0 (64-bit)
          downloaded & installed Firefox 94.0.1 (64-bit)
          downloaded & installed Firefox 94.0.2 (64-bit)
          downloaded & installed Firefox 95.0 (64-bit)
          downloaded & installed Firefox 95.0.1 (64-bit)
          downloaded & installed Firefox 95.0.2 (64-bit)
          downloaded & installed Firefox 96.0.1 (64-bit)
          downloaded & installed Partition Wizard 12.6 from:
          https://cdn2.minitool.com/?p=pw&e=pw-setup
          by running:
          E:\Partition.Wizard\pw1206-setup.exe
          downloaded & installed Firefox 96.0.2 (64-bit)
          downloaded & installed Firefox 96.0.3 (64-bit)
          downloaded & installed Firefox 97.0 (64-bit)

        • #2425527

          It’s my understanding that logically sequential sectors in a file may not be physically sequential on the storage device.  On a solid-state drive that doesn’t matter: the latency to READ the next logically sequential sector is about the same as the latency to READ any random sector (“RAM” = random access memory, and flash behaves much the same way).  On a rotating-platter HDD, though, every jump to a non-adjacent sector adds seek and rotational delay.  Therefore, creating pagefile.sys with the CONTIG freeware should speed up paging when pagefile.sys is hosted on a rotating-platter HDD:

          https://docs.microsoft.com/en-us/sysinternals/downloads/contig

          Contig v1.81

          By Mark Russinovich

          Published: October 12, 2021

          Using Contig

          Contig is a utility that defragments a specified file or files. Use it to optimize execution of your frequently used files.

        • #2425532

          One of the tricky things about “pagefile.sys” is the file system attributes.

          It’s both “hidden” and a “system” file.

          To view the attribute settings on C:\pagefile.sys, do this in Command Prompt:

          C:

          cd \

          attrib pagefile.sys

          Remember those attribute settings, because you will need to apply those same settings to the contiguous pagefile.sys that you created with CONTIG.

          Similarly, if you’re interested in playing around, you will need to change those attributes again if you simply need to DELETE any particular pagefile.sys.  Clearly, the OS won’t allow you to DELETE a pagefile.sys that is currently in use by the OS.

          Many commands, like DEL, won’t work on “hidden” files by default.

          And, if you move pagefile.sys from C: to another partition, Windows should automatically delete the one stored on C: .  But, using the sequence above, it’s easy enough to double-check.

          You may, at some point, end up with multiple pagefile.sys files;  and, it’s easier to remember if your system configuration has only one such file at any point in time.

          To get help:

          attrib /?

      • #2425621

        How does your system avoid possible ransomware? I mean, if the second HDD is installed in that same PC, couldn’t a hacker still encrypt its contents for ransom? Apologies if I failed to understand your full post. 😀

        1 user thanked author for this post.
        • #2425627

          >  couldn’t a hacker still encrypt its contents for ransom? 

          Good question!

          Yes:  it’s “wide open” like any other NTFS drive letter.

          There are other and better solutions to blocking ransomware, but I am not the best person to ask about the most recommended solutions.

          AskWoody probably has some of THE BEST experts for advice on that problem.

          For one, our version of Acronis True Image software does claim to have a robust defense against ransomware, but I believe that defense only applies to drive image files written by that software.

          We also routinely backup drive images to an external USB enclosure, and switch it OFF when we are not actively running our backup BATCH programs.

          Similarly, we have re-purposed a few aging Windows XP PCs that are now dedicated to doing nothing but backup storage:  again, we STARTUP those XP PCs only long enough to update backups, then they are SHUTDOWN promptly.

          If ransomware hoses an entire C: partition, our workstations allow a recovery by simply restoring a working drive image:  all such drive images were written to save all system software files, so a successful restore ends up over-writing every file in C: .

          As an extra precaution, before restoring a working drive image, one might also re-format C: by launching a program that has not been infected by ransomware.  But, in the past we have found that such reformatting is not necessary and as such ends up being a waste of time.

          1 user thanked author for this post.
        • #2425642

          Point taken, thanks. We do a similar thing by plugging in a portable USB hard drive with a black case to create a system image backup, then unplugging that drive. A second portable drive with a red case is used for the next backup. Then we continue to alternate those drives for additional backups. This provides some protection in case any form of malware is discovered that may have been hiding there when we made the most recent backup. Again, not perfect, but this method has served us well for about 6 years now. For the record, we use a WD My Passport 2TB and a Seagate 2TB portable. Both are 2.5-inch size, so they are powered from any available USB port, no power adapter required.

          Speaking of old Windows XP machines we have a Compaq from 1999 still running on Windows 98SE, allowing me to enjoy a couple of now-ancient games!

          1 user thanked author for this post.
        • #2425647

          We also have a pair of Western Digital “Passports” and both have been flawless for years, with no apparent problems.

          If a User remembers to format them with MBR partitions instead of GPT partitions, and stays within the 2TB upper limit, those Passports can be plugged into any USB 2.0 and USB 3.0 ports, and they work fine with most Windows operating systems.

          I must admit, I am very loyal to Western Digital.

          I honestly think the only serious problem I have ever had was a 2TB Black HDD that was advertised as “new” at Newegg.

          However, when it failed, WDC’s database reported that it was actually a refurb and not eligible for replacement.

          WDC’s Tech Support referred us to Newegg who did replace it promptly.

          And, if my memory is correct, we’ve had only one other WDC HDD fail over many years of superb service.

          For a while, User Forums were reporting “Black” WDC HDDs dropping out of RAID arrays.  WDC isolated that problem in their error-recovery firmware:  while those recovery routines were busy, the drive would not respond to the RAID controller’s status polls, and the controller would drop it from the array.

          So, WDC added a feature to some of their HDDs called “Time Limited Error Recovery” (“TLER”) designed specifically for RAID arrays;  and, with that added feature, those RE (“RAID Edition”) HDDs no longer dropped out of RAID arrays.

          But, it took a while for the IT community to catch on to this highly technical point.

          p.s. I’m not a Western Digital employee, and I get nothing for praising the company here.

          THANKS!

           

        • #2425648

          I’m curious about one file system change that might have a direct effect on some ransomware.

          We tend to populate a tower chassis with as many drives as it can handle.  And, with Icy Dock’s 5.25″ 8-drive enclosure for 2.5″ SSDs, it’s pretty easy now to run out of drive letters, particularly when some letters need to be reserved for Network Drives and USB ports.

          So, in light of these limitations, we have run “Migrate OS” to create a clone of the OS on a second drive.  Then, we simply remove any drive letter that has been assigned automatically once the “Migrate OS” task is finished.

          Removing a drive letter is very easy in Windows Disk Management.

          Do you happen to know if ransomware will simply ignore a formatted partition if it lacks a Windows drive letter?

          Removing a drive letter has resulted in no adverse consequences for us.

          Only certain software can still access a formatted partition that lacks a drive letter:  HDTune is one such program that still sees such a partition, even if it has no drive letter assigned.

          If we ever need to boot from a secondary OS, the boot sequence always assigns C: anyway, even if that secondary boot partition has no drive letter assigned normally (i.e. when booting from the primary boot partition).

    • #2425490

      Hello.

      I enclose four screen captures of Clear Disk, CrystalDiskInfo, SSD-Z and HardDiskCentinel of a 256GB SSD Samsung 860 Pro. The only one that warns me is the Clear Disk. I don’t know who to trust.

      1 user thanked author for this post.
      • #2425663

        Given the number of reports here of “odd” results from Clear Disk, I would not trust it.

        I use CrystalDiskInfo because I like the notification options (pop-up, email).
        It correctly spotted a failing drive on a hard disk a number of years ago.

        cheers, Paul

      • #2425707

        I enclose four screen captures of Clear Disk, CrystalDiskInfo, SSD-Z and HardDiskCentinel of a 256GB SSD Samsung 860 Pro. The only one that warns me is the Clear Disk. I don’t know who to trust.

        Hi epaff:

        If your SSD is the same Samsung 860 PRO SATA III SSD listed <here> on the Samsung site, have you tried running the proprietary Samsung Magician management software available on the 860 PRO support page at https://www.samsung.com/ca/support/model/MZ-76P256BW/? The Samsung Magician 6 Installation Guide says this software comes with a built-in diagnostic tool but I’m not sure if the report would show S.M.A.R.T. attributes or % Lifetime Remaining that you could compare with your Clear Disk Info results.

        Just FYI, I attached images of the Clear Disk Info, CrystalDiskInfo, Speccy, and HWiNFO diagnostics for my own Toshiba NVMe SSD in post # 2409451 of Ben Myers’ earlier Dec 2021 topic Hard Drives – Still Pretty S.M.A.R.T..  Clear Disk Info showed I had 100% Lifetime Remaining while CrystalDiskInfo reported a Health Status of 93% and HWiNFO (my “default” hardware diagnostic tool) reported a Device Health of 93%. The Clear Disk Info definition of Percent Lifetime Remaining in the user interface is “the percent of write/erase cycles remaining before this SSD becomes read-only“, which might be different from the algorithm(s) that CrystalDiskInfo and HWiNFO use to calculate “disk health”.  I’m not certain, but the CrystalDiskInfo documentation at Health Status and Health Status Setting seems to show that their Health Status is a weighted calculation based on the reallocated sector count, current pending sector count, and uncorrectable sector count, and suggests the threshold for Good / Caution / Bad / Unknown could depend on the controller used by the vendor (e.g., Intel, Samsung, etc.).
        ———–
        Dell Inspiron 15 5584 * 64-bit Win 10 Pro v21H2 build 19044.1526 * 256 GB Toshiba KBG40ZNS256G NVMe SSD

      • #2427584

        I found the article on SSD SMART very interesting and helpful, but I too found the differences in reporting among tools disturbing, and looked deeper into possible causes. I would also suggest that Clear Disk is the questionable reporter. Although I understand that there can be differences in the interpretation of SMART parameters, Clear Disk has some differences that turn out to make it unusable for me, in favor of CrystalDiskInfo; or perhaps an alternative better choice would be the SSD manufacturer’s own monitoring software.

        When I use HDDs, I find the SMART data useful in conjunction with the Victoria disk utility, looking at reallocated sectors and pending sectors (and seek times in the test). When those parameters start to increase, and climb even higher after running Victoria (along with “funnies” in system operation), it has turned out to be time to think about replacing the drive.

        With the conversion to SSDs, it’s been a learning experience, and that is why I pursued this a little more.

        I used 5 tools on one of my Samsung drives: Clear Disk, CrystalDiskInfo, Samsung Magician, Passmark Disk Cleanup, and the Windows build of the smartmontools package, using the “smartctl” command. (I have two of the same kind of Samsung drives in two of the same model laptops; I used Clear Disk and CrystalDiskInfo on one of them, and all 5 on the other. The Clear Disk / CrystalDiskInfo differences were consistent on both.) All four of the other tools were consistent in their reporting; Clear Disk had that parameter 231, percent lifetime remaining, that didn’t seem to make any sense.

        Looking further (and using the picture of SMART reporting by manufacturers from the original article, which I’ve attached), the first reason Clear Disk fails for my use is that Clear Disk is the ONLY tool that reports that parameter for my Samsung drives. The picture suggests that it is not available on Samsung. All the other tools (even Samsung’s) do not report that parameter. It may be in the SMART data somewhere, but if Samsung does not report it with their own tool, any “interpretation” might be questionable.

        All the other parameters were reported, and pretty much agreed identically, except for another key value, parameter 241, total GB written. Clear Disk reports a negative number that I cannot resolve into anything that makes sense. The other tools’ reporting of that number is consistent, and ends up helping to determine a SWAG at used and remaining life.

        I found a Samsung article (https://image-us.samsung.com/SamsungUS/b2b/resource/2016/05/31/WHP-SSD-SSDSMARTATTRIBUTES-APR16J.pdf) that gives a detailed description about how Samsung would go about determining remaining life. For the full blown method (their words from the article) “…estimating SSD lifetime using conventional SMART attributes is a relatively complex and labor intensive process, involving multiple calculations.” Even their “simpler” method involves using some tricky commands to get some initial SMART data, starting off a testing cycle, using the drive in normal service for a nominal time, and gathering the after testing SMART parameters and doing some more calculations. That looked kind of daunting too.

        But a simplistic SWAG might be available (and it ends up agreeing with the other tools’ reports). It seems that an SSD’s life is very much related to the amount of writes to the device. Using the parameter 241 value (total LBAs written) provided by the four agreeing tools, it appears that my two drives have had 1.4 TB and 3.1 TB written to them. The warranted or serviceable life for my Samsung model drives, found by a search, suggests that they should be able to handle a 300 TBW “life”; that matches the 99% remaining reported by CrystalDiskInfo, and I’ll continue monitoring the SSDs on a periodic basis to see if the hypothesis for this SWAG remains consistent. The Samsung number doesn’t mean they fail after that, but it gives some idea of when I should be more watchful and look at some of the other parameters.
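        That SWAG reduces to one line of arithmetic: percent remaining is roughly 1 minus (bytes written / rated endurance). A minimal sketch using the figures quoted above (the function name is mine; the 300 TBW rating is the looked-up figure from the post, so check the spec sheet for any particular model):

        ```python
        def percent_life_remaining(tb_written: float, rated_tbw: float) -> float:
            """Crude remaining-life estimate: assumes wear scales linearly with
            total bytes written, which real drives only approximate."""
            return max(0.0, 100.0 * (1.0 - tb_written / rated_tbw))

        # Figures from the post: 3.1 TB written against a 300 TBW rated endurance
        print(round(percent_life_remaining(3.1, 300.0)))  # -> 99, matching CrystalDiskInfo
        ```

        The linear assumption is the whole SWAG: it ignores write amplification and wear-leveling behavior, which is why the vendor's own estimation procedure is so much more involved.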

        Thanks for a very thought-provoking article; I also agree that it would be “nice” if the manufacturers all had a consistent way of reporting drive status. Samsung Magician just reporting “good” is nice, but it would be nicer if the quality term were a bit more granular.
        Ben article picture

        1 user thanked author for this post.
        • #2427590

          Many thanks for all those details.

          Do you have any experience with SSDs as members of RAID arrays?

          The reason why I ask is our multi-year experience with Highpoint RocketRAID add-in cards.

          They come with a proprietary “GUI” that also logs any anomalies the controller detects on all active drives, both SSDs and HDDs.

          Also, at STARTUP there is an Option ROM display that also indicates whether or not a connected drive is being detected by the controller:  this is a very low-level test, but it’s useful as an early warning that a member drive is not responding to a routine controller ACK at STARTUP.

          We finally isolated a SAS drive problem in the block data connector:  our RAID controller would “poll” it, during staggered spin-up, but the controller would eventually give up and simply show it as “missing in action”.   This one problem was also intermittent:  sometimes that SAS drive DID show up properly during routine STARTUP.

          Also, have you ever tried Windows “Event Viewer” to see if the OS is detecting and reporting any anomalies in the storage subsystem?

          Lastly, although not obvious, most of our recent problems turned out to be failed SATA data cables.

        • #2427595

          Sorry, no experience with RAID at all.

          I have looked at Event viewer on occasion, but only when something seems to be going wrong. Guess I never did end up looking for HDD events there when I was suspicious. Guess I went for the “old reliable” other tools I trusted to help to get to the eventual solution.

          And yes, cabling and connections can end up causing really hard to resolve intermittent “funnies.”

          1 user thanked author for this post.
    • #2425540

      IMO the only true stamina check of an SSD and the data within is end-user habit.
      Images and backups work well here, with a monthly forced clean and trim.
      I have an OCZ Solid from circa 10+ years ago that still works well using that method.  Who needs additional misaligned-results junkware to set the paranoia levels higher?  LOL, just use it.

      1 user thanked author for this post.
    • #2425565

      http://supremelaw.org/systems/io.tests/platter.transfer.crossover.graphs.png

      The above is a rare empirical comparison that was published on an Internet user forum many years ago.  I don’t remember the author(s), however.  It confirms empirically how HDD speeds slow down as the drives fill up.  The driving factor is the need to maintain constant linear recording density on rotating-platter HDDs.

      This constant recording density means that the amount of raw data on any given circular track is directly proportional to track diameter.  Therefore, the outermost tracks are the fastest and the innermost tracks are the slowest.  When a new HDD is formatted the first time, the formatting begins at the outermost tracks and continues inward until the partition is fully defined.

      The Partition Wizard software can also be run to confirm the same phenomenon.  The “Surface Test” option also begins at the outermost tracks of any given partition.  As it runs, it computes and displays the running average data rate.  If the User watches how that displayed number changes, it slowly declines as the “Surface Test” task works its way through the partition, sector-by-sector.

      The individual squares in the partition map do not correspond exactly to physical sectors;  the amount of raw data inside any one “square” is an approximation which Partition Wizard calculates, in order to fit the entire “map” onto a single monitor window, regardless of partition size.
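      That proportionality is easy to sketch numerically. A toy model, assuming constant RPM and constant linear density; the radii and outer-track rate below are hypothetical round numbers, not measurements of any actual platter:

      ```python
      def track_rate(radius_mm: float, outer_radius_mm: float, outer_rate_mbs: float) -> float:
          """With constant linear bit density and constant RPM, the bits passing
          under the head per revolution scale with track circumference,
          i.e. linearly with track radius."""
          return outer_rate_mbs * radius_mm / outer_radius_mm

      # Hypothetical 3.5-inch platter: 46 mm outer data track at 250 MB/s,
      # 20 mm innermost data track
      print(round(track_rate(20.0, 46.0, 250.0), 1))  # innermost tracks run at
                                                      # well under half the outer rate
      ```

      This is why a benchmark graph of a filling HDD shows a steady downward slope: formatting starts at the fast outer tracks, and later data lands on progressively slower inner ones.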

      2 users thanked author for this post.
    • #2425741

      We just purchased, direct from Western Digital, two WD Elements USB 3.0 portable SSD drives with 2TB storage delivered for $119.98 US plus tax.  That is $59.99 each delivered.  SKU:  WDBU6Y0020BBK-WESN

      The drives, when ordered from Western Digital, come with three-year limited warranties not the normal two-year warranties.

      • #2425747

        Western Digital has spent many years refining their huge selection of rotating platter HDDs.

        These external USB-cabled drives are famous for their convenience and reliability.

        Because I’m a virtual maniac about performance, a little math shows how close those external HDDs come to hard-wired internal HDDs:

        USB 3.0 = 5G / 10 bits per byte  =  500 MB/second MAX HEADROOM

        SATA-III = 6G / 10 bits per byte  =  600 MB/second MAX HEADROOM

        In general, rotating platters currently top out at roughly 250 MB/second, give or take.

        Therefore, it’s not the USB 3.0 interface that imposes any limits:  the performance limits of rotating platter HDDs are dictated by the raw data rates of the READ/WRITE heads.

        The situation is entirely different with external SSDs, however.

        My friend David Rivera recently did an apples-to-apples comparison of SSD performance when connected via USB, and when connected via SATA cables.  He published that video at YouTube.  The USB interface imposed a much lower ceiling on performance:

        397 vs 535 MB/second

        https://www.youtube.com/watch?v=jwfMelizf74&t=1s
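        The headroom arithmetic above is a one-liner to check (the function name is mine; 10 bits per byte reflects the 8b/10b line coding both interfaces use):

        ```python
        def max_headroom_mbs(line_rate_gbps: float, bits_per_byte: float = 10.0) -> float:
            """Theoretical interface ceiling in MB/s: line rate divided by the
            number of encoded bits transmitted per payload byte (8b/10b = 10)."""
            return line_rate_gbps * 1000.0 / bits_per_byte

        print(max_headroom_mbs(5.0))   # USB 3.0  -> 500.0 MB/s MAX HEADROOM
        print(max_headroom_mbs(6.0))   # SATA-III -> 600.0 MB/s MAX HEADROOM
        ```

        Either ceiling comfortably clears the ~250 MB/s a fast platter drive can sustain, which is why the USB bridge only becomes the bottleneck once an SSD capable of ~535 MB/s sits behind it, as in the video.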

        • #2425860

          It’s worth noting that USB 3.0 maximum theoretical bandwidth is 5Gbps, whereas the newer USB 3.2 standard has max. 10Gbps bandwidth. This permits a SATA III (6Gb/s) SSD to achieve full performance compared to the limitations shown in the video you referenced.

          1 user thanked author for this post.
        • #2425862

          Thanks!

          I became so confused by all the USB name changes, I can’t recall which one upgraded the frame layout from 8b/10b (“legacy frame”) to 128b/132b (“jumbo frame”).

          Yes, 10G / 8.250 bits per byte = 1,212 MB/second

          (132 bits / 16 bytes  =  8.250 bits per byte)

          Now we be cookin’ 🙂

           

          So, MAX HEADROOM of 1,212 MB/second is starting to sizzle!

          1,212 MB/second is slightly more than DOUBLE the SATA-III ceiling of 600 MB/second (6G / 10).
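          (For the record, the 128b/132b encoding arrived with USB 3.1 Gen 2’s 10 Gbps SuperSpeed+ signaling.) The frame arithmetic works out like this; the variable names are mine:

          ```python
          # 128b/132b "jumbo frame": 132 encoded bits carry 128 data bits = 16 bytes
          bits_per_byte = 132 / 16            # 8.25 wire bits per payload byte
          usb_10g = 10_000 / bits_per_byte    # MB/s ceiling at a 10 Gbps line rate
          sata_3  = 6_000 / 10                # MB/s ceiling, legacy 8b/10b framing
          print(round(usb_10g, 1), sata_3)    # ~1212.1 vs 600.0 MB/s
          print(round(usb_10g / sata_3, 2))   # slightly more than double
          ```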

           

          I’d like to see the industry embrace a “SATA-IV” standard that supported auto-detection of various clock rates e.g. 8G, 12G, 16G, and required the 128b/130b “jumbo frame” now in the PCIe 3.0 standard.

           

          In other words, at a minimum “sync” the SSD’s controller to the motherboard’s chipset.

          Modern 2.5″ SSDs already auto-adjust to the speed of integrated SATA ports on PC motherboards e.g. 1.5, 3.0 and 6.0 Gbps.

          This SATA-IV concept would allow standard SATA SSDs to perform in a manner similar to a 1-lane NVMe SSD (“S” in “SATA” is Serial).

          Standard NVMe M.2 SSDs should continue to use 4 PCIe lanes and continue to sync with new PCIe standards e.g. PCIe 4.0 and 5.0.

           

          Thanks for the heads-up.

        • #2425865

          So, to illustrate:

          “sync” each SATA-IV channel with 16G, and implement the PCIe 3.0 128b/130b “jumbo frame”:

          16G / 8.125  =  1,969.2 MB/second per SATA-IV channel

          That’s more than 3 TIMES the current SATA-III ceiling.

          I believe the industry could do this, but they have obviously joined forces, like any oligopoly, to FREEZE future SATA developments.

           

        • #2425874

          joined forces, like any oligopoly, to FREEZE future SATA developments

          And they’ve done this because?

          cheers, Paul

        • #2425950

          I can’t speak for them.

          Nevertheless, if we go back to the time when NVMe was being announced, those announcements were pretty negative about all of the “overhead” that is inherent in the SATA protocol.

          I was a little surprised to read those negative claims.  They struck me as a marketing strategy to motivate buyers to accept NVMe.

          The leading 2.5″ SATA SSDs e.g. from Samsung et al. are now routinely doing READs at ~550 MB/second.

          I worked up a simple metric for measuring controller overhead:

          1 – (measured speed / max headroom)

          Using 550 and 600, then

          controller overhead = 1 – (550 / 600) = 8.3%

          In plain English, that’s the percentage of time the controller is NOT moving raw data.  Thus, if the measured speed were 600, overhead equals zero percent.

          I never understood why a SATA SSD controller overhead of only 8.3% was something to complain about.

          Let’s compare Samsung’s NVMe performance:

          1 – (3,500 / 3938.5) = 11.1%
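          That metric, sketched as code (the function name is mine; the 550 and 3,500 MB/s inputs are the typical vendor-quoted sequential-read figures used above, not measurements of a specific drive):

          ```python
          def controller_overhead(measured_mbs: float, ceiling_mbs: float) -> float:
              """Fraction of the interface's theoretical ceiling NOT spent
              moving raw data: 1 - (measured speed / max headroom)."""
              return 1.0 - measured_mbs / ceiling_mbs

          sata = controller_overhead(550.0, 600.0)      # SATA-III SSD
          nvme = controller_overhead(3500.0, 3938.5)    # 4-lane PCIe 3.0 NVMe SSD
          print(f"SATA {sata:.1%}  NVMe {nvme:.1%}")    # ~8.3% vs ~11.1%
          ```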

          NVMe is still great for several reasons:

          each channel “syncs” with the chipset clock and jumbo frame;

          4 channels run in parallel;  and,

          the sockets can be wired directly to the CPU, where otherwise “idle” cores can perform like dedicated IOPs.

          So, the decision to “freeze” SATA and to enforce that freeze by preventing SATA from also “syncing” with chipsets, is not a technical decision, but a purely marketing decision.

          Similarly, the PCIe upgrade to jumbo frames is another advance which SATA cannot exploit, as long as that freeze is in effect.

          In a paper to the Storage Developer Conference back in 2012, we proposed “syncing” storage with chipsets.

          But, here we are, 10 years later and SATA is still stuck at 6G and legacy frames.

          Also, I do not buy the argument that SATA-IV would require radical changes in motherboard chipsets.  Add-in controllers can do the job in the interim, just as 6G SATA SSDs were controlled initially with PCIe add-in cards.

          Thanks to everyone here for allowing me to “rant”.

        • #2425974

          You mentioned: “1,212 MB/second is slightly more than DOUBLE the SATA-III ceiling of 600 MB/second (6G / 10).”

          Yes, and that bandwidth allows for simultaneous Read/Write speeds of 500MB/s or more; not that I expect any SATA SSD to achieve mixed R/W performance that high, but it’s nice to see there’s enough headroom for it.

          1 user thanked author for this post.
        • #2425992

          I like to focus on MAX HEADROOM (as I’ve come to call it), as a way of showing what’s “possible”, and also as a way of encouraging consumers to realize how far short of that potential any given device is performing in fact.

          There’s no escaping controller overhead:  the fairer question is “how much overhead” is there in any given controller design?

          Let’s do the inverse:  assume MAX HEADROOM is 1,212 MB/second and the same controller “overhead” of 8.3% obtains.  Then:

          (1 – .083) x 1,212  =  1,111 MB/second

          Or, use the NVMe “overhead” we calculated above:

          (1 – .111) x 1,212  =  1,077 MB/second

          Where things become really interesting, imho, is the effect that higher clock speeds have on these projections.

          PCIe 3.0 uses 8G;

          PCIe 4.0 uses 16G;  and

          PCIe 5.0 uses 32G.

          The latter are ENORMOUS jumps in chipset performance;  and, just yesterday I browsed Newegg and saw a flurry of new ASUS motherboards that support PCIe 5.0 and DDR5 !!

          32G / 8.125 (jumbo frame) = 3,938 MB/second (i.e. the equivalent of 4 x PCIe 3.0 lanes in a single serial channel)

          Overhead aside, wouldn’t it be nice if a single SATA channel were capable of almost 4,000 MB/second?
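          Those projections are just the interface ceiling discounted by an assumed overhead fraction; a sketch reusing the overhead percentages derived earlier in this thread (names are mine):

          ```python
          def projected_rate(ceiling_mbs: float, overhead: float) -> float:
              """Apply a fixed controller-overhead fraction to an interface ceiling."""
              return (1.0 - overhead) * ceiling_mbs

          print(round(projected_rate(1212.0, 0.083)))  # -> 1111 MB/s, SATA-like overhead
          print(round(projected_rate(1212.0, 0.111)))  # -> 1077 MB/s, NVMe-like overhead
          print(round(32_000 / 8.125))                 # -> 3938 MB/s, a single serial
                                                       # channel at the PCIe 5.0 clock
          ```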

      • #2426359

        Correction – The WD Elements USB 3.0 portable drives with 2TB storage are not SSDs

        1 user thanked author for this post.
        • #2426367

          USB 3.0 is more than adequate for spinning platter HDDs.

          5G / 10 bits per byte  =  500 MB/second MAX HEADROOM

          The latter uses the old 8b/10b “legacy frame”:

          every 8 data bits are encoded into 10 transmitted bits = 10 bits per byte transmitted

          The best HDDs presently approach 250MB/second, so USB 3.0 has plenty of available “headroom”.

          Just be sure that your USB 3.0 host ports can deliver sufficient DC power to spinning platters, which typically consume more power than comparable SSDs.

          When in doubt, there is a cheap “Y” cable:  one Type-A connector transmits data and power, and the other Type-A connector merely supplements the power (no data transmitted).  The other end is another Type-A female, which connects to the external USB drive.

          Here’s one such “Y” cable at Newegg:

          https://www.newegg.com/p/36F-00G6-00863?Description=USB%203.0%20power%20Y%20cable&cm_re=USB_3.0%20power%20Y%20cable-_-9SIAFZ4GSD4474-_-Product&quicklink=true

    • #2425762

      I’m curious about one file system change that might have a direct effect on some ransomware.

      We tend to populate a tower chassis with as many drives as it can handle.  And, with Icy Dock’s 5.25″ 8-drive enclosure for 2.5″ SSDs, it’s pretty easy now to run out of drive letters, particularly when some letters need to be reserved for Network Drives and USB ports.

      So, in light of these limitations, we have run “Migrate OS” to create a clone of the OS on a second drive.  Then, we simply remove any drive letter that has been assigned automatically once the “Migrate OS” task is finished.

      Removing a drive letter is very easy in Windows Disk Management.

      Do you happen to know if ransomware will simply ignore a formatted partition if it lacks a Windows drive letter?

      Removing a drive letter has resulted in no adverse consequences for us.

      Only certain software can still access a formatted partition that lacks a drive letter:  HDTune is one such program that still sees such a partition, even if it has no drive letter assigned.

      If we ever need to boot from a secondary OS, the boot sequence always assigns C: anyway, even if that secondary boot partition has no drive letter assigned normally (i.e. when booting from the primary boot partition).

      “Do you happen to know if ransomware will simply ignore a formatted partition if it lacks a Windows drive letter?”

      I would not assume this is safe.  Even if it’s safe today ransomware is under continuous development and may be able to overcome this tomorrow.

      Safe “offline” backups mean “physically disconnected from the network or computer.”

      We know from bitter experience that malware can target network shares not assigned a drive letter.  It’s a very short leap for it to also enumerate local volumes that are not assigned a drive letter.

      For example, here is a SMART monitor listing drives; note that one of the drives does not currently have a drive letter.  The program is doing nothing special to find that drive: the OS kernel provides this info if you know how to ask.  I can also read and write data to that volume with several admin tools.  I could also mount that volume under a folder path (a mount point), no drive letter required!

      hdsent_listofdrives

      Here’s a look at the output of the DISKPART utility (built into every Windows system) on the same system. There’s nothing stopping ransomware from using this very tool, scripted to mount the drive once it gets onto your system.

      diskpart_listvolumes

      ~ Group "Weekend" ~

      1 user thanked author for this post.
      • #2425775

        Many MANY thanks for that detailed answer, and clarification.

        I feared what you confirmed, but I don’t have enough experience to have answered my question without help.

        I don’t know if this next point is DIRECTLY or IMMEDIATELY relevant;  nevertheless, we recently dove into the Management Console of our LAN router.  The real-time clock had never been set properly, and a firmware bug prevented the Event Log from being updated.

        After we selected the option to get current time from the Internet, that router’s Event Log started showing a large number of DDOS attempts and simple “hacks” that are automatically blocked by our router’s built-in IPS (intrusion prevention system).

        All of the IP addresses blocked by that IPS originated in China, quite possibly because we have been volunteering pro bono to help investigate what happened at the 100-nation world military games held in Wuhan during October 2019. 

        (An infrared surveillance satellite photographed a huge HOT SPOT at a PLA garrison near Wuhan: one senior journalist inferred plasma-furnace cremation sites running 24/7.  A similar HOT SPOT was also detected in Wuhan proper.  This is pretty serious: those military athletes were in very good health and very fit before they succumbed to some biological warfare agent, most probably added to the fish in 2 lakes where aquatic events were held.)

        As such, somebody “out there” has a reason to hack our LAN.

        That experience, in turn, has motivated me to DISABLE all of the “Wake on” options in all of our network controllers e.g. “Wake on Pattern Match” and “Wake on Magic Packet” are 2 such settings that we were able to DISABLE in all of our network controllers, both integrated and 2.5 GbE USB dongles.

        It occurs to me that ransomware criminals may also be exploiting these “Wake on” options to STARTUP computers remotely.

        I really do appreciate your detailed answer:  those criminals probably already know how to corrupt an entire partition, even if it lacks an assigned drive letter.

        1 user thanked author for this post.
    • #2425903

      First of all, a big thanks to Ben for the original article. I was bitten by SSD failure about 6 years ago when my boot drive failed; I also had issues restoring the data, which resulted in my decision to purchase a new PC and to install Macrium Reflect. As before, the 250 GB SSD is just for system files and everything else is on a 4 TB HDD.

      As I consider how to cover my bases going forward, especially given that my CPU/motherboard combination does not qualify for the upgrade to Win 11, I am looking at three options:

      1. Carry on backing up the configuration, hope that no issues occur with the SSD, and keep a spare SSD ready for emergencies; Win 11 is not a necessary upgrade.
      2. Buy a brand new PC and migrate everything over ready for Win 11.
      3. Buy a new SSD and drop it into the chassis and migrate my Win 10 system over at a time of my choosing to avoid the angst of an emergency system rebuild.

      Given the current chip shortage, an upgrade is potentially more expensive now than it would be in a year or two, though of course prices may get even worse. This is my main machine (apart from iPhone and iPad), so losing it would be more than just annoying. If this were you, how would you proceed? (I appreciate this may be thread drift, Ben; happy to set up a new thread if appropriate.)

      Peter

      1 user thanked author for this post.
      • #2425978

        Carry on backing up the configuration and hope that no issues occur with the SSD

        Backup is to recover from issues, SSD or otherwise. MR will allow you to boot from USB and restore to a new SSD. No need to do anything else.

        If your SSD does die, buy one then. If the motherboard dies, you can try to replace it or buy a new PC. Either way, wait until you have a failure before spending any more money.

        cheers, Paul

        2 users thanked author for this post.
        • #2425993

          > MR will allow you to boot from USB and restore to a new SSD

          Do you happen to know if MR also initializes the boot sector properly?

          The reason why I ask was a problem I ran into many years ago, when I tried to restore a Norton GHOST image to a new SSD.

          It did NOT work, and the solution that worked was to do a fresh OS install to that new SSD, and THEN to restore the latest GHOST image to the same partition on our new SSD.

          Without knowing for sure, I would expect that MR knows how to make the “new SSD” bootable.

           

        • #2426104

          Boot sectors are only required on MBR disks – not so common since W10.
          MR correctly restores all drive data on MBR and GPT disks. It also correctly aligns SSDs.

          cheers, Paul

          1 user thanked author for this post.
        • #2426155

          Many thanks for that clarification.

          I did need to restore Windows 7 Pro x64 recently on one workstation.  At first attempt, I only restored the C: system partition, and the RESTART failed.

          There is a second “System Reserved” partition, and when I restored both C: and that second “System Reserved” partition from the same drive image file, then RESTART worked.

          So, I concluded that both partitions in the drive image file were necessary to do a proper restore of the OS.

          I’m glad to read that MR is working so well for you.

      • #2426002

        >  If this was you, how would you proceed?

        Consider how we lay out partitions with 2 or more physical drives in a single chassis, using Windows and NTFS nomenclature:

        drive letter D: is typically assigned to the ODD (optical disc drive)

        drive zero:  C: / E:

        drive one:  F: / G:

        and so on, where C: is sized ~100GB +/- .

        E: is created by “shrinking” C: using third-party software like Partition Wizard.

        F: is created by “Migrating OS” from C: , also with Partition Wizard.

        A drive image of C: is first written to G: , then that drive image file is copied to E: .

        (Because I’m paranoid about these things, I always run FC in Command Prompt, to guarantee that the drive image stored on G: is EXACTLY THE SAME as the copy stored on E: .)
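
        FC does a byte-for-byte compare; the same verification can be sketched cross-platform in Python by hashing both copies of the drive image.  This is only an illustrative sketch (the image paths in the comment are placeholders, not the actual layout):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so large drive images don't exhaust RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def copies_match(original, copy):
    """True when both files have identical contents."""
    return sha256_of(original) == sha256_of(copy)

# Placeholder paths -- FC /B in Command Prompt does the same job natively:
# copies_match(r"G:\images\C-drive.mrimg", r"E:\images\C-drive.mrimg")
```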

        The image “restore” software on optical discs is TOO SLOW (in our experience).  More importantly, those discs do NOT have RAID device drivers, so they won’t restore an OS that is hosted on a RAID-0 array.

        If C: needs to be restored, boot into the motherboard BIOS, change the “boot drive” to F: on drive one, and restore the drive image file of your choosing to the primary C: partition on drive zero.

        Re-boot into the motherboard BIOS and change the “boot drive” back to primary C:  on drive zero.

        At this point, your OS should be running the way it was when you created the drive image file that you selected when restoring that OS.

        Almost forgot: you’ll need a policy and procedure for archiving redundant copies of everything on E: because, if drive zero fails, it takes everything with it.  For that reason, we use G: to “mirror” everything on E: using simple BATCH programs e.g. BAK.bat .
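
        A minimal, cross-platform sketch of such a mirror step is shown below.  The drive letters in the comment are placeholders, and a real BAK.bat would more likely use XCOPY or ROBOCOPY:

```python
import shutil
from pathlib import Path

def mirror(src: str, dst: str) -> None:
    """Copy every file under src into dst, preserving the directory tree.
    Existing files in dst are overwritten; extra files in dst are left alone."""
    src_root, dst_root = Path(src), Path(dst)
    for f in src_root.rglob("*"):
        if f.is_file():
            target = dst_root / f.relative_to(src_root)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 also preserves timestamps

# mirror(r"E:\images", r"G:\images")  # placeholder drive letters
```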

        The overall philosophy of the latter approach is to enforce a very clear separation between storage hardware and the OS file set.  If hardware fails, just replace it, with no loss of any data whatsoever.

        Hope this helps.  HARD KNOCKS U signing off.

        • #2426080

          Looks like a good method. Two things:

          1. Print a list of the drives, Boot Drive priority, restore queue, etc. for all persons involved, so nobody has an excuse to screw it up.

          2. On the end of each drive that is exposed when the PC side panel is removed, place a label designating which partitions are on that drive; again, so nobody involved can screw it up. A peel-and-stick label with Scotch tape over it might stay stuck on for a few years, LOL.

          1 user thanked author for this post.
        • #2426082

          Most excellent idea!

          Your suggestion to “tag” the visible end of a drive is something we also do, because our RAID controllers “initialize” drives, which renders them incompatible with motherboard SATA ports.  As such, our “JBOD” tag tells us a drive has been “initialized” and won’t work any longer with standard chipsets or USB-to-SATA adapters.  (Perhaps an “initialized” tag would be more accurate.)

          Over the years, we’ve found it essential to maintain a spreadsheet, with pencils and plenty of erasers, that maps Windows drive letters to physical drives, the locations of those physical drives, and the motherboard and controller ports to which those drives are connected.

          That chart looks something like this:

          A:   floppy disk drive (not used)

          B:   not used / available for Network Drive letter

          C: / E:  HPT RAID-0: 4 x Samsung 256GB SSD in Icy Dock 5.25″ enclosure on front panel

          F: / G:  WDC 1TB Black HDD in internal drive cage #1, motherboard SATA0 port

          D:  optical disc drive

          … and so on.

          That layout closely resembles the partition map in Windows Disk Management.

          Lately, we’ve even gone the next step and drawn a block wiring diagram and the cable connections to motherboard and RAID controller ports at one end of the cables, and the physical drive ports at the other end of the cables.

          It would help a LOT if Microsoft would make it easier to map logical drive letters to physical drives, perhaps by adding a “description” column to a spreadsheet that is easily maintained by system administrators.

          It could be automated by the OS, first by initializing that spreadsheet with 26 rows, one for each letter of the alphabet, and then by populating a column with a summary description of the physical device to which each drive letter is assigned.

          “My Computer” comes close (“This PC” in Windows 10), but it needs to allow more detail about the physical drive to which any drive letter is assigned.  If there is already a way to do that, forgive me for never finding it!

          Specifically, see “Add columns” for “Comments” and “Network Location”.  How does one add text to “Comments”?

          We never use S.M.A.R.T. data.  Our RAID controllers do a fair job of logging abnormal events, and a list of those events can be viewed by launching the controller’s GUI inside Firefox.

          Thanks!

        • #2426085

          You mentioned: “It would help a LOT if Microsoft would make it easier to map logical drive letters to physical drives, perhaps by adding a ‘description’ column to a spreadsheet that is easily maintained by system administrators.”

          Great idea! However, the best I’ve come up with at home is when offered to name the New Volume (usually after initializing the drive) I’ll name it SeagateDATA3TB, or WDBackups4TB, or TVRecordings3TB, etc.

          Our main PC is an HTPC/Gaming/Home server rig with a 960GB SSD boot drive, 3 x HDDs, and an 8TB external HDD that causes the PC to take an extra 15 – 20 seconds to POST before booting into Windows (annoying).

          1 user thanked author for this post.
        • #2426087

          Maybe we should propose a new system utility that gives the system administrator full control over that “spreadsheet” e.g. by choosing which “rows” and which “columns” to display, and to enable or disable populating a “description” column with a summary of the physical device to which a drive letter “row” is assigned.

          Such a spreadsheet can be built and maintained manually now, but it would help if the OS could populate some of the columns automatically, such as HDD / SSD make, model and serial number (“S/N”), and other data that the OS must already store somewhere deep inside the Operating System’s internal database.

          Another column could be a WARNING flag e.g. that blinks red, or displays a bold red “X”, if a given drive letter “row” did not STARTUP normally.

          This WARNING option could be ENABLED or DISABLED by the system administrator, and it would help a LOT to isolate a failed and/or failing HDD or SSD.

          At STARTUP, a pop-up window could display that WARNING with a short summary of the drive letter(s) that did not startup normally.

          The system administrator should be allowed to identify which drive letters SHOULD startup normally at every cold start and at every RESTART.

          What comes to my mind, in this context, is one 600GB SAS drive that was prone to intermittent problems from day one.

          After a lot of trial and error, we finally concluded that the data connector was marginal, and that ambient temperature may have changed the connector’s shape.

          Our spreadsheet concept would have helped to isolate the failing SAS drive more quickly, because there were 3 such SAS drives in one of our workstations, at that time.  All were set to allow staggered spin-up.

          And, another clue was the long time some RESTARTS were requiring before reaching the Windows Desktop.  The RAID controller kept trying and trying to spin-up that failing SAS drive, and would finally stop trying.

      • #2426161

        >   to avoid the angst of an emergency system rebuild

        >   losing it [main machine] would be more than just annoying

        Here’s an option that has worked VERY well for us:

        search Newegg for “refurbished HP workstation”

        That search turned up a marketplace seller named “PC server and parts”.  Visit the latter’s website, where you can find a huge selection of “refurbished” HP workstations with Windows 10 Pro pre-installed.

        HP’s workstation hardware is very reliable, and the latter vendor will do a fresh install of Windows 10 Pro onto a brand-new SSD before shipping.

        HP’s website also has an enormous volume of pertinent documentation.

        Because of the hardware restrictions in W11, tons of these now “obsolete” workstations are turning up USED.  For our work, the Intel Core i7-3770 is all the power we need for routine database and website maintenance:  4 cores, 8 threads, with turbo-boost.

        And, the newer “used” HP workstations come with powerful Xeon CPUs.

        HP also manufactures literally tons of spare parts, like proprietary PSUs, specialized SATA power cables and such, which can be readily located at eBay.

        After subtracting the retail price of a new OEM copy of W10 Pro x64, the hardware that comes with one of these REFURBs is almost free.

        As long as you don’t mind doing your own hardware maintenance, in your situation I would try to “clone” a copy of “main machine”, complete with all the same third-party software and all the same User settings.

        This “clone machine” can also network to your main machine, and you can do peer-to-peer backups over Ethernet.  XCOPY works fine over a LAN.

        These workstation motherboards have empty expansion slots, and you can upgrade your network as needed with 2.5 and 10GbE add-in cards.

        You won’t need a second Keyboard/Video/Mouse, if you get a quality KVM switch.

        However, since your main machine dedicates an entire SSD to the OS, you have the option to add a redundant “data drive” to “clone machine”, time permitting:

        … everything else is on a 4Tb HDD

        For flexibility, I would also purchase a second 4TB HDD and install it in StarTech’s excellent external USB | eSATA enclosure.  Connect the USB cable to “main machine”, and power it UP only long enough to update backups, then power it OFF.

        As I see it, you need to backup that 4TB HDD in any event.

        Then, if “main machine” dies, you can simply re-connect the StarTech enclosure to “clone machine”, and you’re up and running in something like 60 seconds.

        That way, you can decide what to do with “main machine” after you get A-ROUND-TUIT (that’s the opposite of A-SQUARE-TUIT).

        We can ship ROUND-TUITs to you, but they’re expensive!  🙂

        1 user thanked author for this post.
        • #2426166

          p.s.  I’ll wager you don’t really need the ENTIRE 250GB for the OS on your “main machine”.

          For redundancy (always wise), I would shrink C: to create enough space for a dedicated data partition. Partition Wizard can shrink C: in a matter of minutes.

          That dedicated data partition is an excellent place to store a redundant drive image of your C: system partition.

          Using the sizes and letters we choose by default, C: = 100 GB and E: = 150 GB.

          Write a new drive image file to your 4TB HDD, then copy that drive image file to the dedicated data partition on your SSD (that’s E: in the latter example).

          A smaller C: is just good policy, particularly if you make a habit of saving downloads to E: above.  Firefox, for example, allows you to choose where to write downloads, instead of adding them by default to the Users folder on C: .

          E: can also make it easy to “roll back” certain third-party software, if/when the latest version has a bug.  For example, “timesync.exe” versions can be serialized in the folder E:\timesync like this:

          E:\timesync\timesync.Ver-1.exe

          E:\timesync\timesync.Ver-2.exe

          and so on.  If timesync.Ver-3 has a bug, just remove it and re-install Ver-2.

          The latter practice works well for all such third-party software etc.

          And, THE MAJOR problem with formatting the entire SSD as C: is the inevitable loss of private data files that were stored on C: AFTER the latest drive image file was created.  A restore of that drive image file to C: knows nothing about those newer files, and they will disappear forever under those conditions.

    • #2426105

      It would help a LOT if Microsoft would make it easier to map logical drive letters to physical drives

      You can do this easily with a bit of PowerShell.
      Copy and paste the code below into a PS window and all will be revealed.

      cheers, Paul

      Get-WmiObject Win32_DiskDrive | ForEach-Object {
        $disk = $_
        $partitions = "ASSOCIATORS OF " +
                      "{Win32_DiskDrive.DeviceID='$($disk.DeviceID)'} " +
                      "WHERE AssocClass = Win32_DiskDriveToDiskPartition"
        Get-WmiObject -Query $partitions | ForEach-Object {
          $partition = $_
          $drives = "ASSOCIATORS OF " +
                    "{Win32_DiskPartition.DeviceID='$($partition.DeviceID)'} " +
                    "WHERE AssocClass = Win32_LogicalDiskToPartition"
          Get-WmiObject -Query $drives | ForEach-Object {
            New-Object -Type PSCustomObject -Property @{
              Disk        = $disk.DeviceID
              DiskModel   = $disk.Model
              Serial      = $disk.SerialNumber
              Partition   = $partition.Name
              DriveLetter = $_.DeviceID
              VolumeName  = $_.VolumeName
            }
          }
        }
      }
      
      2 users thanked author for this post.
      • #2426179

        Thanks for that!  What I had in mind exploits a lot more AI (artificial intelligence).

        And, what I’m visualizing here may already be available on some versions of MS Windows Server, but I just don’t have any direct experience with those OS versions.

        To compare, at STARTUP my main W10 workstation displays a pop-up to confirm that certain Network Drives are not available.  That’s correct whenever other PCs in our LAN are powered OFF.

        This kind of diagnostic, and general OS awareness, should also be enabled for all local drive letters too, particularly all drive letters OTHER THAN C: .

        A “Drive Letter Management” program would do a lot more than merely populate certain columns in a spreadsheet.

        For example, the system administrator would be able to choose which “rows” are displayed by default, and which drive letters should show up as “active” after a STARTUP has reached the Windows Desktop.

        After the administrator designates a range of drive letters as “NOT IN USE”, an option would thereafter conceal those rows until the administrator reversed that setting.

        If a drive letter EXPECTED to show up does NOT DO SO after STARTUP, then a RED DOT shows up in the appropriate column for that drive letter.

        In plain English, the administrator establishes a “set of drive letters” that should appear in “This PC” (formerly “My Computer”), and the OS attempts to confirm “normal” STARTUP for each of the drive letters in that “set”.

        It would also need to be smart enough to update the spreadsheet if/when a removable drive is detected e.g. USB thumb drive is connected, just as “This PC” does so now.

        And, other spreadsheet “columns” should be populated automatically with the descriptors your script obtains from the OS.  Any such columns can be displayed, or not displayed, at the option of the administrator, just as “NTFS” can appear or not appear in a column dedicated to “file system type”.

        Correct me if I am wrong about this:  in W10, the User can display a “Comments” column in “This PC”, but there’s no way to store User-provided text in those Comments cells.

        • #2426182

          I want to populate rows in the “Comments” column, inside the red rectangle here:

          http://supremelaw.org/systems/Paul27.This.PC.Comments.png

        • #2426289

          I have no idea if you can populate that section.
          More importantly, that info will be lost if you have a machine failure so you are better off having the info saved on a remote system / paper.

          cheers, Paul

          1 user thanked author for this post.
        • #2426340

          Thanks!  So, a custom spreadsheet will do, as long as it’s also backed up.

          FYI:  I found this, but haven’t tried it:

          https://answers.microsoft.com/en-us/windows/forum/all/how-do-i-add-comments-to-a-folder-within-file/7f5e1825-47ec-4276-bce9-0cfd4c66e5f5

          “For a “regular” folder you created, the quickest way to create the desktop.ini file and set the appropriate folder attributes (required for a desktop.ini file to be processed) is to assign a custom icon to the folder via the Customize tab in its Properties dialog. If you don’t want the custom icon, delete it when you add the InfoTip line.”

    • #2426186

      (I got Firefox to “paste” that screenshot:  see above.)

      • #2426205

        Let’s say that “Drive Letter Manager” has rapidly evolved into an AI robot that speaks computerese with ease.  Its name is HAL-10000 (“Hal” for short).

        Paul here asks Hal, “Did all drive letters startup OK?”

        Hal replies, “Paul, I’m sorry to report that drive letter M: did not startup correctly.”

        Paul asks Hal, “Hal, what happened to M: ?”

        Hal replies, “Paul, drive M: was connected to the RAID controller.  Allow me to interrogate the RAID controller, and I’ll get right back to you.”

        15 seconds later:

        Hal replies again, “Paul, I asked the RAID controller to test cables, and a faulty cable was discovered on RAID controller port 0_3 .”

        Paul asks Hal, “Hal, what do you recommend to fix drive letter M: ?”

        Hal replies, “Paul, that cable should be replaced with one StarTech model SFF-8087 fan-out cable.  If a new cable does not fix the problem, please resume this conversation at that time.”

        Paul says, “Will do. ”

        Paul adds, “Hal, what do you think of Paul T’s PowerShell script?”

        Hal answers, “Oh yes, Paul, we evolved far beyond those geek codes about 31.4159 years ago.  My programmers were awarded with 10 pizza Pi’s after teaching me to speak fluent computerese.  I’m still studying English — haven’t quite mastered that one yet.”


    • #2426232

      Noobie question — If I migrate my wife’s production machine (Win 10 Pro 64-bit) with apps and data from an HDD to an SSD, shouldn’t I splurge on a large SSD?  I’ve read somewhere that having lots of extra SSD capacity extends its useful life.

      I’m thinking of buying a 2 TB Samsung 970 EVO Plus to put in as an NVMe M.2.  (The machine is only PCIe 3, not 4, and has DDR3 not DDR4, but there’s a hack to do this as an NVMe M.2.)

      And also a classic HDD for nightly scheduled backup with a purchased Macrium Reflect.

      What do you think?

      1 user thanked author for this post.
      • #2426244

        I checked into that Samsung 2TB 970 EVO Plus.  That would be a truly excellent choice.

        And, you’ll need an available M.2 socket that supports the NVMe protocol.

        Only other thing is the form factor:  a single 2.5″ SSD can be mounted in a 2.5-to-3.5″ adapter that has 2 “slots” or levels e.g.:

        Corsair CSSD-BRKT2 Dual SSD Mounting Bracket


        And, wiring is easiest because there is no need for any additional cabling adapters.

        BUT, there is a BIG performance difference between NVMe and SATA:  for those Samsungs, expect 3,400 MB/second READs vs 550 MB/second READs, respectively.

        Unless there is a reason why you absolutely MUST HAVE NVMe speeds, I would purchase the 2.5″ 870 EVO, instead of the M.2 version;  and, I would also purchase the latter “dual mounting bracket” and install that sub-assembly in an available internal 3.5″ drive bay.

        The latter configuration gives the 2.5″ SSD much better exposure to active cooling, which will flow over both planar surfaces, especially if a single SSD is mounted in the top layer.

        M.2 SSDs are more prone to heat “throttling” unless you go to the trouble of finding a compatible heatsink.  But, M.2 heatsinks are pretty expensive, as compared to the retail price of a 2-slot 2.5-to-3.5″ mounting bracket.

        Another option is an M.2-to-2.5″ adapter.  Look in Icy Dock’s product catalog for a few options.

        The latter requires a compatible U.2 cable and connector.

        You’ll want one that includes thermal transfer material that permits direct contact between the M.2 and its enclosure.  Or, I believe there are similar adapters that are wide open to active air flow on their top sides.

        Hope this helps.

         

        1 user thanked author for this post.
      • #2426291

        There is no need to have extra capacity to make the drive last longer. The only time an SSD will have reduced life is if you use it hard in a commercial environment / bitcoin miner. In a workstation you will not see any benefit.
        BTW, to gain the potential extra life, you need to remove some of the capacity from use and assign it to the internal spare area (over-provisioning).

        You should already have an external HDD for backup.

        cheers, Paul

        1 user thanked author for this post.
    • #2426235

      Whatever size you purchase for your SSD hosting the OS, consider “shrinking” C: to a manageable size e.g. 100GB, and format the remainder as a dedicated data partition.

      (I discuss several advantages of shrinking C: in my other recent REPLYs here.)

      Most modern SSDs now do their own “wear leveling” internally.  This logic can spread newly written “logical” data blocks across the set of “physical” blocks that are still unused, and continue with this logic after all unused “physical” blocks have been written once, then twice, then thrice and so on.

      So, yes, larger SSDs can be “wear leveled” more easily, using such logic. And, a lot depends on how much unused space you intend to maintain, and how often you intend to update existing files already stored on your SSD.

      This “wear leveling” logic was found to be more valuable for flash memory that tolerates fewer WRITE cycles: SLC tolerates the most WRITE cycles, MLC tolerates fewer WRITE cycles, TLC even fewer, and QLC the fewest number of WRITE cycles for any discrete “physical” block of NAND flash memory.

      You might contact your preferred SSD manufacturer, to ask that question of their Tech Support staff:  their answers (if any) should be much more accurate for the particular SSD you intend to purchase.

      If you do contact those Tech Support staff, give them this “for instance”:

      Starting out, you intend to use only one-third of your new SSD.  Does the wear leveling logic assign future WRITEs to the second-third, then to the third-third, and continue in that sequence, like a “circular queue”?
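
      No vendor publishes its wear-leveling algorithm, but the “circular queue” idea above can be illustrated with a toy model, in which every logical write is remapped round-robin to the next physical block so that erase counts stay even.  This is purely illustrative; real SSD firmware is far more sophisticated (and proprietary):

```python
class ToyWearLeveler:
    """Toy round-robin remapper -- illustrative only, not any vendor's logic."""

    def __init__(self, physical_blocks: int):
        self.erase_counts = [0] * physical_blocks
        self.next_block = 0  # circular write pointer

    def write(self, logical_block: int) -> int:
        """Map a logical write to the next physical block in rotation."""
        phys = self.next_block
        self.erase_counts[phys] += 1
        self.next_block = (phys + 1) % len(self.erase_counts)
        return phys

wl = ToyWearLeveler(8)
for _ in range(80):   # 80 writes, even if every one targets logical block 0
    wl.write(0)
# Wear spreads evenly: every physical block now has an erase count of 10.
```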

      Nevertheless, don’t be surprised if they answer, saying those details are “proprietary”.

      The internal wear-leveling logic in most modern SSDs is considered proprietary intellectual property, and the manufacturers may be hesitant to disclose exact details to their retail customers.

      Hope this helps.

    • #2426333

      Supreme —

      A) So, do you think a Samsung 970 EVO Plus (2TB or 1TB) is SLC, TLC or QLC?

      If Samsung made it in two or three of these flavors over time, is there a way to know in advance?

      B) If, currently, a Samsung 970 EVO Plus is not the best SLC, then who is, and which models?

      In my case, I’m going to mod an older PC with only PCIe3 and DDR3, so I don’t need an NVMe M.2 that’s for a newer, better setup, but I would like it to be as durable as reasonably possible.

      Thanks.

      (And please all chime in with dissenting opinions!!)

      1 user thanked author for this post.
    • #2426337

      The Newegg description says “MLC”.

      I always browse the manufacturer’s website for full technical details.

      NVMe was introduced with PCIe 3.0:

      8 GT/s per lane with 128b/130b encoding, and up to x4 lanes per M.2 socket

      You’ll need an M.2 socket, or an add-in card with an M.2 socket.
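
      The arithmetic behind those link speeds: at 8 GT/s with 128b/130b encoding, each PCIe 3.0 lane carries roughly 985 MB/s of raw data, so an x4 link tops out just under 4 GB/s, which is why NVMe drives of that generation advertise sequential READs in the 3,400 MB/s range once protocol overhead is subtracted:

```python
# PCIe 3.0 per-lane throughput: 8 GT/s with 128b/130b encoding
gt_per_s = 8e9                      # raw transfers per second per lane
payload_fraction = 128 / 130        # 128 data bits per 130-bit block
bits_per_lane = gt_per_s * payload_fraction
bytes_per_lane = bits_per_lane / 8  # ~985 MB/s per lane

x4_bandwidth = 4 * bytes_per_lane   # ~3.94 GB/s for an x4 M.2 link
print(round(bytes_per_lane / 1e6))   # per-lane MB/s
print(round(x4_bandwidth / 1e9, 2))  # x4 GB/s
```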

       

       

      • #2426341

        Supreme – Is MLC OK?

        I know about getting an adapter-card socket.  You wrote “x4 lanes”.  So I would only use the PCIe x4 slot, not the x16?

        Thanks again,

        1 user thanked author for this post.
        • #2426342

          MLC is good.

          The Internet has plenty of reviews comparing various NVMe M.2 SSDs.

          The NVMe M.2 socket can transmit raw data over x4 PCIe lanes in parallel (maximum).  Some early M.2 devices utilized only x2 PCIe lanes e.g. Optanes.

          The edge connector on an adapter card needs to be x4 wide, and such an edge connector can plug into an available x4 socket, x8 socket and x16 socket.  That’s how PCI-Express was originally designed.

          But, even if it “works” (sort of), an x1 or x2 edge connector cannot transmit x4 PCIe lanes in parallel (simple arithmetic, actually).

          The edge connector on PCIe add-in cards is different from an M.2 edge connector that plugs into an M.2 socket.


        • #2426418

          MLC is excellent; however, most manufacturers have switched to TLC, and the cheapest models may use QLC.  Various design improvements allow TLC to perform on a par with MLC, and those improvements also extend SSD life expectancy beyond earlier models.  Life expectancy is rated in TeraBytes Written (TBW).

          Assuming the controller circuitry is reliable and your computer doesn’t experience any sudden voltage spikes, almost any SSD you buy for home use (gaming, Office, internet, occasional video and photo editing, etc.) will last longer than the computer itself.

          Every man-made product has some failures, and that’s why we keep talking about backup, backup, backup.  But, as mentioned, most SSDs are reliable and last for years.  Our old Kingston 96GB SSD is still running just fine.  The PC it’s installed in is very low-spec by current standards, yet it boots Windows 7 in about 16 seconds and responds quickly when running programs and games.  I think we got it in 2012, but I’m not 100% sure.
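
          As a rough, back-of-envelope illustration of how a TBW rating translates into years of home use (every number here is a generic assumption, not any particular drive’s specification):

```python
# Illustrative endurance estimate -- all figures are assumptions.
capacity_tb = 1.0         # a 1 TB drive
pe_cycles = 1000          # assumed TLC program/erase cycles per cell
write_amplification = 2   # assumed controller write overhead

tbw = capacity_tb * pe_cycles / write_amplification   # endurance in TB written
daily_writes_gb = 20      # typical light home-use writes per day
years = tbw * 1000 / daily_writes_gb / 365
print(f"{tbw:.0f} TBW, ~{years:.0f} years at {daily_writes_gb} GB/day")
```

With these assumed figures the drive’s rated endurance outlasts the computer by decades, which is the poster’s point.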

    • #2426349

      So, do you think a Samsung 970 EVO Plus (2TB or 1TB) is SLC, TLC or QLC?

      Samsung Is the Latest SSD Manufacturer Caught Cheating Its Customers

      1 user thanked author for this post.
      • #2426362

        Please disregard this REPLY if you think it irrelevant.

        A few years back, our web server migrated to newer hardware, and during that migration certain customer records were not properly migrated too.

        Our customer history showed a new “start date” that obscured our true history spanning 10+ years without interruption.

        After we brought this minor error to their attention, I did notice how the quality of their Tech Support changed for the better.

        In general, and after 50+ years of using computers, I can honestly say that major vendors can be expected to give better post-sales service to longer-term customers.

        And, since all of us make mistrakes, what matters more to me is the manner in which any one vendor responds to a proven problem of their making.

    • #2426356

      I’m pretty loyal to the Western Digital brand, so take that into consideration.

      Western Digital acquired SanDisk several years ago, and that was a wise acquisition:

      SanDisk had already developed SSD technology in competition with other manufacturers, so WDC was able to improve on already good things being sold by the former SanDisk.

      We have now purchased several Western Digital “Blue” 2.5″ SSDs and every one has worked perfectly right out of the box.

      The WDC M.2 SATA SSDs perform as reliably as the WDC 2.5″ SSDs.

      Our favorite M.2 adapter is this StarTech model 35S24M2NGFF:

      https://www.startech.com/en-us/hdd/35s24m2ngff

      Even if only one or two sockets are populated, it’s nice to know there are other available sockets for future expansion — withOUT needing to add more power cables.

      1 user thanked author for this post.
      • #2426420

        Unfortunately, WD has been in trouble for downgrading their SN550 SSD and their RED HDDs.

        The SN550 originally had a respectable 800+ MB/s sequential Write speed after the SLC cache was exhausted. The newer version is still called SN550, but with different NAND and controller its post-SLC cache Write speed is now less than 400MB/s.

        The WD Red series originally used CMR (conventional magnetic recording). The newer version substituted SMR (shingled magnetic recording). Although SMR might be OK for data storage in a simple home NAS, using it in a RAID setup is a bad idea. WD was exposed for this. Eventually, they changed the lineup so that WD Red is SMR only, and they released the Red Plus, which is CMR.

        It’s the deception that makes one no longer trust a maker who does that. Also guilty of similar deception are Crucial (their P2 SSD), ADATA (their SX8200 Pro), and Kingston (I forget the model number, but it was several years ago). At home, we have two WD SSDs which work OK, but I probably won’t buy any more ….
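        Reviewers catch this kind of downgrade by writing more data than the SLC cache can absorb and watching sustained throughput fall. A minimal Python sketch of that idea follows (the file path and sizes are placeholders; a real test must write far more than the cache size, often hundreds of GB):

```python
import os
import time

def sustained_write_mbps(path, total_mb=256, chunk_mb=16):
    """Write total_mb of data in chunks, timing each chunk.

    Returns per-chunk throughput in MB/s.  On a drive with an SLC
    cache, a sharp drop partway through a sufficiently large run
    suggests the cache has been exhausted.
    """
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    speeds = []
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            start = time.perf_counter()
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # push the data past the OS write cache
            speeds.append(chunk_mb / (time.perf_counter() - start))
    os.remove(path)
    return speeds
```

        Plotting the returned list makes the post-cache plateau obvious; the fsync per chunk is what keeps the OS cache from masking the drive's real speed.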

        1 user thanked author for this post.
        • #2426422

          I saw a few reviews that hammered WDC pretty hard for that “SMR” controversy, and deservedly so.

          (For those who don’t know, “SMR” is an abbreviation for Shingled Magnetic Recording, an HDD recording technology that is very different from “CMR” = Conventional Magnetic Recording.  Bottom line:  stick with CMR)

           

          So, as Prosumers and DIY builders, how do we decide which vendor(s) to prefer, if they are all prone to publishing vaporware and misleading claims?

           

          For myself, I rely on past experience, e.g., WDC’s “Black” high-performance HDDs have worked very well for us in JBOD mode, both in external USB enclosures and in properly cooled internal drive cages.

          Similarly, WDC’s RE models (RAID Edition) we purchased have far exceeded their factory warranties, and Partition Wizard’s “Surface Test” STILL finds no bad sectors.

          Thanks!

        • #2426423

          Re:  how do we decide?

          Newegg helps answer that question by providing space below each product description for customer “Reviews”.

          In the past, those “Reviews” have been very informative whenever I have needed to study them.

          1 user thanked author for this post.
    • #2427770
      • #2427859

        A power loss one second earlier would have resulted in the same data loss.

        There is nothing you can do about data loss when the power fails. You can only try to prevent power failure with a UPS, but nothing is ever guaranteed.

        cheers, Paul

        1 user thanked author for this post.
        • #2427913

          Paul T makes an excellent point about a UPS.

          Just to amplify his point somewhat, we always plug a UPS into a less expensive surge protector — between the UPS and the wall socket.  That way, a destructive “transient” will take out the less expensive surge protector first, and ideally not reach the UPS.

          Some models of surge protectors actually plug directly into the wall socket, to increase the number of sockets and to provide USB charging ports.

          Tripp-Lite now manufactures a power strip with a separate ON/OFF switch for each outlet in that strip.  This makes it easy to switch a PSU ON and OFF if your PSU has no separate ON/OFF switch (some HP workstations were built with such PSUs).

          Also, some older residential units were wired with 2-prong outlets:  these should be replaced with GFCI outlets by a licensed electrician;  or, a ground cable can be connected to the copper ground pin that should be located near the circuit breaker panel.

          If there is no visible copper ground pin, the breaker panel should have some sort of ground connection e.g. if “ground” was attached to the rebar in the foundation slab.

          In general, it is most advisable to pay a licensed electrician for advice.

          When I have mentioned lightning strikes in the past, I did not intend to imply a “direct hit” which will melt steel.  What I had in mind was the more probable voltage “spike” that occurs when distant lightning induces such a short-duration transient in the surrounding power distribution network, e.g. 5 to 10 mile radius.

          Although these transients are very short in duration, measured in milliseconds, their voltage “peaks” can be very high — enough to damage low-voltage DC circuitry.

          In the past, some data centers opted to deploy RAID controllers which were sold with an optional battery dedicated to providing backup power required to flush buffers.  But, these controller batteries do NOT keep HDDs and SSDs powered up during a blackout.

          Now that M.2 sockets are wired directly to multi-core CPUs, the BEST WAY is to power an entire system with a quality UPS having plenty of spare capacity.

          Also there are models now that output a pure sine wave, which may be worth the extra cost.

          Lastly, APC’s PowerChute Personal software allows the administrator to adjust several important settings, chiefly “sensitivity” and both the HIGH and LOW voltages.  The latter define a “range” which the UPS detects as “normal” i.e. requiring no audible warnings, event logs, or intervention by the UPS.

          One last thing:  a UPS may NOT do anything to protect your Internet cabling, e.g. if a cable modem outputs to a router.  Even if the router’s AC adapter is powered by your UPS, a transient can still reach that router via the cable modem and its link to the “last mile” (street side telephone pole etc.)

          Some APC UPS units do have coaxial connectors, but one field technician told me that they can cause problems for cable modems.

          So, when ordering Internet service, ask the field technician for the BEST WAY to protect that circuit from damaging surges.

          Hope this helps.

           

        • #2427926

          Tripp Lite(R) TLP76MSGB ECO-Surge(TM) 7-Outlet Surge Protector with 6 Individually Controlled Outlets, 6ft Cord

          https://www.newegg.com/p/17B-000R-002T8?Item=9SIB5AWH3K6539

        • #2428145

          Fiber optics.
          One might also check the grounding on the pole nearest them if they have above-ground service. Lightning suppressors for circuit boxes are also available.

          🍻

          Just because you don't know where you are going doesn't mean any road will get you there.
          1 user thanked author for this post.
        • #2428272

          One might also check the grounding on the pole nearest them

          How? Do you have a tester for that?

          cheers, Paul

        • #2428466

          eyes

          🍻

          Just because you don't know where you are going doesn't mean any road will get you there.
        • #2428472

          We use a SPERRY HGT6520 Outlet Circuit Analyzer.

          It indicates if the circuit is:

          • Correct
          • Bad Ground
          • Open Neutral
          • Hot/Ground Reversed
          • Hot/Neutral Reversed
          • Open Hot.

          It is an incredible tool.

        • #2428437

          “Just to amplify his point somewhat, we always plug a UPS into a less expensive surge protector — between the UPS and the wall socket.”

          Please do not do this!

          It’s against code in nearly all countries to daisy-chain surge protectors with other surge protectors, UPS units, or power strips.

          And for good reason.  Unless you know how to calculate loads and monitor your power strips and UPS outlets to stay within safe limits, you run the risk of an electrical fire.  There is also the problem of impedance over longer chained extension runs, which can create heat in the wire, especially at the plug junctions.

          In commercial buildings where occasional fire inspections take place, the inspector can and usually will levy fines against the business if they find daisy-chained power cords, strips, or UPSs plugged into secondary surge protectors.
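          For what it’s worth, the load arithmetic alluded to above can be sketched in a few lines (the wattages and the 80% continuous-load derating are illustrative assumptions, not measurements of anyone’s setup):

```python
def circuit_headroom(device_watts, breaker_amps=15, volts=120, derate=0.8):
    """Compare summed device loads against a circuit's safe capacity.

    Common practice treats 80% of a breaker's rating as the
    continuous-load limit; `derate` models that rule of thumb.
    Returns (total_watts, limit_watts).
    """
    total = sum(device_watts)
    limit = breaker_amps * volts * derate
    return total, limit

# e.g. a PC, a monitor, and a laser printer on one 15 A / 120 V circuit
total, limit = circuit_headroom([450, 60, 800])  # 1310 W against a 1440 W limit
```

          Daisy-chaining makes this kind of accounting easy to get wrong, which is the inspector’s concern.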

           

          ~ Group "Weekend" ~

          1 user thanked author for this post.
        • #2428440

          We’ve been doing this for 20+ years and never had a single problem.

          We only use high-quality components, and we are careful to connect short AC cables to avoid impedance problems.

          Our APC UPS units record blackouts and voltages outside the User-defined range, and they do switch to battery power during those events.

          Can you cite the “Code” in your City and State, so I can read it, please?

          1 user thanked author for this post.
        • #2428447

          We don’t daisy-chain surge protectors, and we don’t daisy-chain UPS units.

          This is what APC says at their website:

          <u>Plugging your UPS into a surge protector:</u>
          “In order for your UPS to get the best power available, you should plug your UPS directly into the wall receptacle. Plugging your UPS into a surge protector may cause the UPS to go to battery often when it normally should remain online. This is because other, more powerful equipment may draw necessary voltage away from the UPS which it requires to remain online. In addition, it may compromise the ground connection which the UPS needs in order to provide adequate surge protection. All APC Back-UPS and Smart-UPS products provide proper surge suppression for power lines without the need of additional protection.”

          [end quote]

          No mention of any fire hazard there.

          And, our licensed electrician inspected our setup, and did not find any problems with our wiring.

          We plug all our APC UPS units into APC power strips capable of some surge protection.  I don’t have the exact electrical specs handy.

          And, our APC UPS units are not “going to battery often” as far as we can infer from their event logs.

          We have set sensitivity HIGH and selected the tightest possible voltage range, in order to detect more transients that would otherwise be ignored.

           

          Are you primarily concerned about daisy-chaining surge protectors?

           

          p.s.  We also provide a dedicated ground connection because some of our wall outlets are still 2-prong.

        • #2428454

          This is something that comes up a lot with new clients in my business.

          From: https://www.ocwr.gov/publications/fast-facts/power-strips-and-dangerous-daisy-chains/

          ” … interconnecting these devices is a violation of Occupational Safety and Health Administration (OSHA) regulations and the National Electrical Code because doing so can cause them to become overloaded, which could lead to their failure and a possible fire.”

          ~ Group "Weekend" ~

        • #2428826

          Gentlemen, clearly he hasn’t burned his building down. Be respectful and agree to disagree.

          Susan Bradley Patch Lady

          1 user thanked author for this post.
    • #2433946

      Hey Ben,

      Thanks for this article. I found it useful.

      I have a 500 GB Samsung 860 EVO SATA 2.5″ SSD as my primary hard drive (C:). Reading your article, I decided to download and use Clear Disk Info (I already use Samsung Magician). The SSD is just about 2 years old and this computer is used mainly for email and internet surfing (no gaming or video editing). To my surprise, Clear Disk Info showed the “Percent Lifetime Remaining” as 30%.  Over the last two months, this has dropped to 25%. Samsung Magician shows all SMART data as good. It also shows 21.4 TB written as good. Checking on the specs for this drive at Samsung shows an endurance of 2,400 TB written so 21.4 TB is only 0.9% of the used life. So why is Clear Disk Info now showing only 25%?

      Samsung Magician does not report “Percent Lifetime Remaining”. I also used “Speccy” and this also shows all reported SMART data as good and, like Magician, does not report “Percent Lifetime Remaining”.  Given the calculation above yielding 0.9%, there seems to be a discrepancy between what Clear Disk Info is reporting and what Samsung specs show.

      Two things come to mind: either Clear Disk Info is misreporting the data, or the SSD is defective. What do you think?

      Thanks,

      Steve H.
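      The poster’s arithmetic is easy to verify; a sketch using the figures quoted above (21.4 TB written against a 2,400 TBW endurance rating):

```python
def endurance_used_pct(tb_written, rated_tbw):
    """Percentage of a drive's rated write endurance consumed."""
    return 100 * tb_written / rated_tbw

endurance_used_pct(21.4, 2400)  # ≈ 0.89%, consistent with the 0.9% figure above
```

      If the rated TBW is correct, wear from writes alone cannot explain a 75% drop in reported lifetime.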

    • #2433968

      The discussion here suggests that Clear Disk Info does not accurately report SSD life.

      Try CrystalDiskInfo for a second opinion.

      cheers, Paul

      • #2434304

        Hi Paul,

        Per your suggestion, I tried Crystal Disk Info and it is showing the “Health Status” of the SSD as “Good/95%”. Seems to contradict Clear Disk Info’s reporting of 25% for “Percent Lifetime Remaining”.

        Thanks,

        sahalen
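        One common source of such disagreements (speculation, not a diagnosis of either tool) is that utilities read different fields of the same SMART attribute — the vendor-normalized value, the raw counter, or an inverted “used” vs. “remaining” interpretation. A sketch using a made-up smartctl-style attribute line:

```python
# A line in the style of `smartctl -A` output (all values here are invented):
line = "202 Percent_Lifetime_Remain 0x0030 095 095 001 Old_age Offline - 5"

fields = line.split()
normalized = int(fields[3])   # vendor-normalized value, counts down from 100
raw = int(fields[-1])         # raw counter; meaning is vendor-specific

# One tool might show the normalized value (95% remaining), while another
# inverts it or reports the raw field, producing very different "health" numbers.
as_remaining = normalized
as_used = 100 - normalized
```

        When two tools disagree this badly, the vendor’s own utility (here, Samsung Magician) is usually the safer interpretation of the drive’s attributes.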

    • #2434016

      If it might help someone else here, the following happened to me last week:

      We are running an experiment to see how long a quality motherboard functions AOK.

      We deploy some aging XP PCs to function as backup storage servers.

      An old Intel motherboard connects directly to a few SSDs, and a PCIe expansion slot is populated with a Highpoint RocketRAID model 2640×4 which connects to a few more SSDs.

      In the middle of a routine backup task, the Highpoint controller sounded a loud alarm.

      This was helpful, because a failing SSD that connects to the motherboard SATA ports does not sound any alarms;  it just generates warnings in Event Viewer, and the mouse starts acting weird too.

      We re-booted to see if the problem persisted, and sure enough one of the SATA drives wired to the Highpoint controller had dropped out:  this was obvious from the list of active drives reported by that controller’s Option ROM during STARTUP.

      By checking the data and power cables, we were able to isolate the problem:  one of the SATA power cables was coming loose, even though it has a “latching” style connector, because the SSD’s power connector was not also a “latching” style.

      The two “dimples” on the latch had nothing to seat into.

      That failing SSD is mounted in a drive bay that is cooled by a small fan, and my best guess is that the minimal amount of vibration caused by that fan very slowly but surely caused the power connector to BACK OUT very gradually and imperceptibly over time.

      When I re-seated the power connector, that SSD went back to normal.

      Wish they were all that easy!
