• Best way to clear paging file or have it reset back to its minimum size?


    #205630

    In short, earlier this evening I noticed a couple of applications opening slightly slower than usual. I checked task manager and Firefox was oddly using almost all of what would be my spare memory (~14GB out of 16GB). So, I closed Firefox to release the memory, and as expected my paging file had increased from 1024MB to 3081MB during that Firefox RAM spike. I have the pagefile set to 1024-4096MB, but it’s pretty much always been 1024MB until tonight when the RAM usage neared 100%.

    After re-opening Firefox my overall RAM usage was lower than usual since other processes were partially offloaded to the paging file during Firefox’s previous memory spike. I’ve noticed that closing and re-opening certain applications (e.g. Dropbox, Steam) causes them to resume to their typical RAM usage, such that they are apparently no longer partially using the pagefile.

    From what I understand the pagefile most likely won’t shrink back down to 1024MB on its own. My main concerns are:

    1. If I rebooted, would the programs all resume to using RAM and not reading whatever was stored in the pagefile during the previous session?

    2. If so, I assume the pagefile will still be at 3081MB and will sit on the partition at that size even if unused.

    I’d like to reduce the pagefile back down to 1024MB, so these appear to be my options:

    1. Turn off the pagefile, reboot, turn it back on to desired settings, reboot again.

    2. Set the max size to 1024MB, reboot, confirm the pagefile size reduced to 1024MB, then set the max back to 4096MB.

    3. Use group policy editor to set Windows to clear the pagefile at shutdown, reboot, confirm it cleared/reset to min size (1024MB), then revert the group policy setting.

    I’m just curious if anyone has an opinion on how to go about this minor task. Thanks.
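
    In case it helps anyone following along, the current numbers can also be checked from a Command Prompt rather than by digging through the dialogs – a rough sketch, assuming the pagefile is the usual C:\pagefile.sys and isn’t system-managed (if it is, the first query may come back empty):

    rem Configured minimum/maximum sizes (Win32_PageFileSetting)
    wmic pagefileset list full

    rem Size currently allocated on disk, plus current and peak usage (Win32_PageFileUsage)
    wmic pagefile list full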

     

    Side note – in case someone wonders, there’s no virus or foul play (confirmed with MSE/MBAM scans). I’ve seen this happen on another PC before – Firefox suddenly goes crazy and eats up almost all available memory, just from a couple of tabs like Google Sheets. It’s not an uncommon issue from what I’ve read.

    [redacted]

    1 user thanked author for this post.
    • #205676

      I can’t answer your question, as I’ve never paid any attention to it.  Why are you concerned with the size of the page file?  You have plenty of RAM, so unless something gets a memory leak like Firefox seems to have had in that case, just use it! Windows really does a good job of managing memory, so if you’re not having any slowdowns, it’s working as intended.  It doesn’t need to be modified and micromanaged to perform well.

      Personally, I’d set a lot bigger pagefile than that, but of course that’s up to you.  When Firefox swallowed up all your RAM, I’m thinking there was not enough pagefile space to let the excess allocated RAM be swapped out, thus freeing physical RAM, which would prevent the slowdown (for a time).  Of course, the real solution is to not have the application run away with RAM consumption (eventually it would consume all resources, no matter how large they may be), but that’s obvious.

      I’ve seen this kind of concern a lot in the Linux world… people have the idea that paging out is to be avoided.  They suggest setting the “vm.swappiness” parameter (yes, that’s really what it is called!) to something very low, like 1 (scale of 1 to 100, default is 60 in Ubuntu), and they suggest this as a good idea for PCs having as little as 4GB of RAM.

      In contrast, I set my 4GB Acer Swift laptop (Pentium N4200 CPU, 4 core, with SATA SSD, running Linux Mint 18.3 at the time) to swappiness of 80 with a swap size of 8GB, in direct opposition to all of the “performance” advice people seem to give about virtual memory.  In that configuration, I had Waterfox open with 250 tabs, most of them fairly heavyweight with lots of images and scripts, and it was still very responsive and usable.

      I used Waterfox a couple of hours with all those tabs loaded (there are addons to unload pages and reclaim the memory after a time, but I intentionally didn’t have one active for this test), with physical RAM showing nearly full and the swap file about half full (equal to physical RAM), and it was still going strong.  Better than I had expected, actually!  There were brief pauses here and there while it was swapping back and forth (I presume), but they were not really bad, and if I had not been actively evaluating the performance, I may not even have noticed them.

      For this setup, I had Waterfox set to use 6 content processes, which seems like the direct opposite of what would be a good idea when RAM is limited.  More content processes means more memory used– it warns the user of this right in the Waterfox UI where you set the process limit.  My thought was that separate processes would possibly better allow the system to swap out inactive tabs (I must admit to not having much insight into the low-level operations of the VM system, hence the need to experiment).  This did seem to be the case, as I repeated the test later with the content process limit set to 2, and the performance was noticeably worse with huge numbers of tabs open.

      One of the points of that test was to ascertain whether Linux was as good as Windows in memory management, especially as it relates to virtual memory.  Windows is very good at this.  I still don’t know if Linux is as good, the same, or maybe better than Windows, but in that one test, I was not disappointed in its performance.

      Using the pagefile is not a bad thing at all; in the case of Windows, the whole OS is built around the idea of virtual memory.  Windows is modeled after DEC’s VMS (Virtual Memory System) operating system, after all.

      There’s no special reason that all the programs that have allocated memory but are idle have to keep everything in RAM, limiting what the program(s) you’re actively using at the time can actually use.  If you’re running a couple of memory-hungry programs at the same time, and if the one in the background isn’t actively doing something at the moment (rendering, etc.), keeping it in memory when all it’s doing is sitting there waiting to be used doesn’t benefit you in any way.  If the system can swap that program’s allocated memory out, you can use that physical RAM in the program that’s actually in use at the time, and the system can use it for caching until it is needed.  You gain performance by paging the idle program’s data out to disk.  Windows tries to keep the physical ram working, not full of stuff that’s just sitting there, and conversely, not sitting there empty most of the time either.

      People often think that thrashing is the same thing as swapping or paging out.  Thrashing, of course, is the “technical” term for the slowdown that happens when physical memory is full and the system is unable to push enough stuff out to the pagefile to get RAM to satisfy the demands of whatever programs the person is running, causing the system to be seriously bottlenecked by the relative lack of speed of the disk compared to physical RAM. That’s the only time that most people even notice that virtual memory is in use, so they conclude that virtual memory must be bad.

      In reality, if you allow the system to page things out before you get to the crisis stage, you decrease the tendency to thrash later.  Trying to keep everything in memory (unless you truly have gargantuan amounts of RAM) is asking for a memory crisis, and that virtually guarantees thrashing.

       

       

      Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
      XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
      Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

      1 user thanked author for this post.
    • #205679

      @amraybt,

      1. Which of the two OSes in your sig does this issue pertain to? W7 x64 🙂

      2. Is your system storage an SSD or HDD?

      3. What third-party extensions (obscure ones, if any) are you using in FF?

      Have you tried Lowering Memory Usage When FF is Minimized:
      Open a new tab and type in about:config (will give warning – accept)

      Search for: config.trim_on_minimize

      If the preference name does not exist, it needs to be created.
      Right-click on the background and select “New Boolean.”
      Enter the name when prompted: config.trim_on_minimize
      Enter the value: True

      Then restart FF

      If you have FF open and then minimize it, this will free up more RAM for the system. Not exactly what you’re after, but this helps big time 😉

      If you find there are problems with this due to your FF set-up, just set ‘config.trim_on_minimize’ back to false to revert.

      Windows - commercial by definition and now function...
      2 users thanked author for this post.
      • #205721

        This was definitely not a normal amount of RAM for Firefox to use, even with a ridiculous number of tabs open.  I’ve had over 1000 tabs open and it didn’t use 14 GB!  I think it was about 8 or 9 GB, having started from a fresh session.  This was a memory leak caused by the program itself or an addon, I think.  Firefox has long had issues with that, and while it is much better for me now than it ever has been before, Waterfox (built on FF code base) still won’t release all the memory it should when tabs are closed.  I could show screenshots of 8 GB being taken by FF with a single tab open.  Some of it is from recently closed tabs being cached and such, but not so much that it should be using 8 GB for one tab.

        Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
        XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
        Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

        • #205748

          Glad I asked, since most assumed you were on an HDD.

          I’d advise you to disable your pagefile and also, within Services, disable Superfetch, then reboot to remove remnants of the existing pagefile. A pagefile is not needed with SSDs, and removing it reduces the write cycles to the SSD that shorten its lifespan.

          Anyone else care to add to this?

          EDIT: My information regarding no pagefile was incorrect 🙂

          Windows - commercial by definition and now function...
          • #205933

            Having an SSD doesn’t mean you don’t need a pagefile.  Disabling prefetch is normal with an SSD, but the pagefile is still as important (or not) as it would be with an HDD.

            The only time the pagefile being on the SSD is going to make any noticeable difference in the life of the SSD would be if you’re really using the pagefile a lot.  That only happens if you really need it, and if you really need it, the SSD is by far better than putting it on a slowpoke HDD or not having one at all.

            SSDs are indeed consumable devices, but for most people and usage regimes, the life of the SSD in TBW (total bytes written, really, but many people use it as “terabytes written”) is sufficiently large that even using it for paging purposes, the service life is going to be in the decades rather than in years.

            My 128GB SSD in my desktop PC is 4.5 years old, and despite me hitting the pagefile pretty hard over the years (you should have seen my RamMap64 reports!), it’s still got 69% of its SMART-reported rated service life left, and even more of its actual life left.  When Tech Report tested the 256-GB version of the same series as mine, the Samsung 840 Pro, to the point that it eventually died, it had gone 2.4 petabytes in total writes, even though its SMART rated service life ended at 500 TB.  Mine has half as many NAND cells as the 256 GB model, so it would nominally be expected to last half as long, but that’s still a very long life.  At the rate I am going, it will outlive me!

             

            Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
            XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
            Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

            1 user thanked author for this post.
          • #206027

            Firstly, apologies to @amraybt, the info I supplied above regarding no pagefile on SSDs seems to be incorrect; old SSD habits die hard, my bad.

            My systems have primary SSDs with secondary HDDs, with the exception of the Linux device, which has an SSD only. All secondary HDDs now have the pagefile on at a minimum (the W8.1 Haswell PC had no pagefile and I never encountered any issues with it). Hey ho, sometimes we get things wrong, but to err is only human.

            Windows - commercial by definition and now function...
      • #206095

        @Microfix:    I am totally “computer illiterate” and would like to ask exactly what this “Page File” is.    I’m Win 7 x64, Group A, and have no sophisticated programs on the computer.  The more I have on the computer the more likely I am to get into trouble, so I try to err on the side of caution.  Any and all explanations would be most appreciated.   Thank you to any who may enlighten me somewhat here.

    • #205681

      There is very little reason to use more than about 200-300 MB of page file with 16 GB of RAM on the system, unless you have unusually memory-intensive applications. Firefox is not one of them at these amounts of RAM, unless you open hundreds of tabs, etc. The suggestion from Microfix may work for you, but the tabs you reopen need to claim back the memory released previously, which causes delays in responsiveness.
      My suggestion is to set your page file to 800 MB min-max and maybe set the debugging file to None.
      If you encounter out-of-memory issues with the above settings, then you would have to monitor and find out exactly what happens on your system.
      Do not set the policy to clear the page file at shutdown, as this would dramatically increase the shutdown time.

      2 users thanked author for this post.
      • #205685

        Assuming the OP has an HDD in the system in question, I’d agree with CH100, but:

        Do not set the policy to clear the page file at shutdown, as this would dramatically increase the shutdown time.

        This has been known as a security issue for decades, and it is better to have a slow shutdown and erase pagefile.sys on HDDs 🙂 YMMV
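
        For reference, the “Shutdown: Clear virtual memory pagefile” policy maps to a registry value, so it can also be checked or flipped from an elevated prompt – roughly like this (set the data back to 0 to revert):

        rem Check the current setting (0 = don't clear, 1 = clear pagefile.sys at shutdown)
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown

        rem Enable clearing at shutdown (use /d 0 to turn it back off)
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f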

        Windows - commercial by definition and now function...
        • #205727

          Self-encrypting SSDs for the win!

          Or, for Win.

          If you use the self-encrypting feature of so-equipped drives, your entire disk is protected, which of course will include the pagefile.sys, the Linux swap, and all of the volumes on the disk, regardless of file system or partition type.  I specifically bought Samsung drives for both of my laptops that are SSD-equipped, since the Samsung drives currently available through normal retail channels (850 series or newer) all have SED functionality.  Encryption that imposes no performance penalty whatsoever, and is completely transparent to the OS… unless there’s a problem with your system’s firmware, it’s as good as the password you give it.  Unfortunately, not all desktops will allow the ATA password to be set (for example, mine), but all the laptops I’ve tried it on (three of them) did it nicely.

           

          Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
          XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
          Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

          1 user thanked author for this post.
        • #205810

          @Microfix What kind of systems are we discussing here that would warrant preventing the contents of a page file/memory previously in use from being read after shutdown?

    • #205700

      Having data stored in a page file is only a security issue if someone nicks your PC / disk and is not content with being able to access all your data.
      If you are paranoid, encrypt the hard disk; then nothing is available to an attacker. SSD disks often have encryption hardware built in, so they operate at full speed even with encryption.

      cheers, Paul

      2 users thanked author for this post.
    • #205702

      This has probably already been explained, but I’ll say it here just in case it hasn’t been said clearly enough:

      RAM is the first thing that is used by programs in doing their normal processing. If all the RAM is in use, and more RAM is needed, the paging file is used as if it was RAM. This gives your programs extra RAM to work with. However, the paging file is a slower version of RAM, because it involves disk reads and writes, rather than electronic reads and writes. But you can reduce or maybe eliminate this slowness by using a solid state drive as your primary Windows drive, because you will be speeding up the disk reads and writes by using a solid state drive rather than a mechanical drive. And if you have one of the new SSDs which plugs right into a PCI-Express socket, you will likely eliminate the delays altogether.

      By having an SSD as your primary Windows drive, your paging file will be on the SSD rather than on a mechanical hard drive, thereby speeding up your paging file by making it much more like true RAM than it would be if it was on a mechanical hard drive.

      Your paging file needs to be at least as big as needed to handle all RAM overflows. It doesn’t hurt a thing for it to be bigger than that, but a bottleneck (“traffic jam”) will result if the paging file is not big enough. I suggest you make it bigger than you think you need, so that there is always enough space available in your paging file for all situations.

      The best way for you to speed things up is to make sure that you have plenty of RAM installed in the computer (16 GB for most people), so that the paging file is never used. The second best way to speed things up is to install the fastest SSD that your system can use, one that has plenty of room for all situations. Do both of those things, and you will always get the best of both worlds.
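
      If you want a quick snapshot of how much of that combined RAM-plus-pagefile space is in use at any moment, something like this at a Command Prompt should do it (systeminfo reports the commit figures under its “Virtual Memory” labels):

      rem Shows Virtual Memory: Max Size / Available / In Use
      systeminfo | findstr /C:"Virtual Memory"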

      Group "L" (Linux Mint)
      with Windows 10 running in a remote session on my file server
      1 user thanked author for this post.
    • #205750

      @amraybt see my note here: #205748

      Windows - commercial by definition and now function...
      1 user thanked author for this post.
    • #205835

      1. A Shutdown abandons all memory. A boot is a fresh start with memory. Hence, the use of virtual memory (page file) has to start over. Whatever WAS there is ignored, and as space is needed (paging out) the disc blocks are freshly written. The page tables (mapping of virtual address to actual physical disc address) have started empty with a bootup. Actually to re-read the (former) paged-memory disc blocks would require low-level disc access programming. (This assumes that the disc is not undergoing forensic analysis.)

      2. War story. Y’ars ago, Windows 95. Computer came with 32M, but I had salvaged another 16M stick, so it was 48M. Looking at memory utilization, I saw that everything would fit in RAM. I turned off Paging, and Shutdown. Would not boot! Sweat. Booted into Safe Mode. Turned Paging on again. All was well again.
      Moral? Windows (95 at least) *requires* that there be defined a Page file. I haven’t experimented with later versions.
      So don’t disable paging.
      (I don’t know if the OS will even permit one to Delete the Page file. If one did so, he is really removing memory which is in-use. Any part of memory that is not marked as non-pageable may happen to be ‘out-there’ somewhere. A self-inflicted lobotomy is not survivable.)

      Windows 7 allows one to set Minimum and Maximum allocations of Paging space. The file contents starts fresh with each boot; whether or not the allocation is to the same physical disc blocks I don’t know. The Page File Table (in non-paged memory, obviously) starts with no entries, and is filled in as (RAM) memory has to be paged out.

      The Hibernation file, on the other hand, is a different critter, and a different subject.

      2 users thanked author for this post.
    • #205936

      I like and use hibernation all the time on my desktop. My SSD is several years old and has written 5.8TB. If I only get 100TB out of it I can run it for another 50 years.  🙂

      cheers, Paul

    • #205775
      Windows - commercial by definition and now function...
    • #205813

      @amraybt No typo in saying 800 MB. It is a minimum value which comes up in a pop-up, based on your amount of RAM, if you keep the kernel debugging on. Otherwise use 400-512 MB if you think a few hundred MB matter for your storage. Less than that may cause your system to crash due to internal Windows memory management requirements. Windows can also create a page file when needed to avoid hard-crashing, but your system would not be healthy in such conditions until the next reboot.
      To get rid of the previously increased page file, always use your first option
      1. Turn off the pagefile, reboot, turn it back on to desired settings (min 1024MB, max 4096MB), reboot again.
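
      For the record, the sizes can also be pinned from an elevated Command Prompt with WMIC instead of the System Properties dialogs – a rough sketch only, assuming the pagefile lives at C:\pagefile.sys, and you still reboot between changes just as with the GUI:

      rem Make sure Windows isn't auto-managing the pagefile, then set the 1024-4096 MB range
      wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
      wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=1024,MaximumSize=4096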

      EDIT: Use perfmon to monitor the page file and memory and you will learn a lot about how your system uses the page file, memory cache and so on.
      Use Process Explorer to monitor Committed Memory over the longer term and make a decision about the Page File.
      Read everything written about the Page File by Mark Russinovich (Microsoft/Sysinternals) and Nick Rintalan (Citrix) for a complete understanding of the issues involved.
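
      If you don’t want to build a perfmon data collector set, typeperf can log the relevant counters straight from the command line – a small sketch, sampling every 5 seconds (Ctrl+C to stop):

      rem Pagefile usage plus committed and available memory
      typeperf "\Paging File(_Total)\% Usage" "\Memory\Committed Bytes" "\Memory\Available MBytes" -si 5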

    • #205955

      @amraybt Made the correction from GB to MB as noted.
      Thank you.

    • #205843

      I thought about this and had discussions with ch100 a while ago.

      Since then, I always use 200-2000 for my machines (many in a corporate network). It has never grown beyond 200MB. If you have a standard HD, you might want to put the pagefile on it. The reason is that if you ever get a memory leak and start using the swap file, you will know it because the machine will slow down a lot, and then you fix the issue! It is not normal behavior for most PCs with a decent amount of RAM to use the swap file in most common scenarios. My PCs have 4-16GB and use the same settings. There is no need for old rules related to how much memory you have and things like that. You shouldn’t have to use the pagefile today; it is not worth it in my opinion. If that happens regularly, you add RAM or change the PC. That is why having just 2GB of very slow memory gives you plenty of time to react and fix the issue before the meltdown of no memory left. On Unix/Linux, things are different and more complex.

      If you want to keep your images tidy, just put 200MB-200MB, restart, then set it to whatever pleases you afterwards and restart again (I suggest 200-2000 or 200-4000). Don’t make it more complicated than it needs to be. Or use 1024-4096 if you are that nervous. The important thing is to have a minimum and the possibility to grow if needed.

      If you can disable hibernation, you will greatly reduce the space needed for your image by removing a huge, unnecessary file tied to your memory size.

      powercfg /hibernate off

      will destroy the huge hibernation file (hiberfil.sys, not the swap file). You won’t be able to use fast boot, but fast boot is often slow boot (slower than booting with hibernation disabled), so I also always disable hibernation on all machines, unless I have a very specific need to keep a laptop in hibernation for a long time. Normal suspend doesn’t consume much power, and there are many things I don’t like about fast boot and hibernation in general.

       

       

      3 users thanked author for this post.
    • #205870

      @alexeiffel
      Only one note here. I think 200MB is too small, and usage will likely exceed that amount on any normal system. Doubling that value as the minimum would be a better option.
      Never, ever run Windows without a page file.

      3 users thanked author for this post.
    • #206123

      Yes, you might think that 200MB is too small. For context, I disable the big kernel dump on issues.

      Still, you might think it is not enough. Let me ask you a question, then. If I set mine to 200-2000, I allow Windows to make it bigger as needed, so if it needs to be bigger, it will just grow at the time it needs it, right? And if the file doesn’t grow, I can assume it wasn’t needed? You seemed to imply earlier that it would need a reboot to get bigger. I thought the reboot was only needed to make it smaller, or if you changed the setting, but once it is set at, let’s say, 200-2000, I didn’t think you needed a reboot to avoid crashing when you need memory. Otherwise it would make no sense to have a lower and an upper limit, no?

      Mark Russinovich says to set it to total peak commit minus RAM, and double that amount for safety. So, if your total peak commit is normally under RAM, that would mean 0 for a swap file. We know we don’t want to run with no swap file at all, but 200 with the ability to grow on the fly as needed seemed to me like the answer, and then I can dump partial kernel info too and maybe avoid some weird issues that come with having no swap file. As a car salesman would say, we’re not going to lose the deal over just a 200 difference, so I am fine with 400, but out of curiosity, do you have any other technical reason to say 200-2000 would create issues?

      I can assure you I have never seen any issue that seemed related to setting 200-2000 since I started doing it. I think a warning of low virtual memory appears if you ever have a problem coming. But using a basic swap file that can grow on the fly seems to work for me. Of course, I don’t use very low-memory machines, or ones that could benefit from paging by suspending unused apps to leave more RAM to active programs and get performance benefits that way, as I try to size machines so the total load fits in RAM except on rare occasions.

      Windows uses the 200 MB, or around that value, for its own internals, as mentioned here in this thread. Your 200-2000 configuration works, but on principle, increasing the page file size when needed has 2 side effects (which may not be visible, but as I said, on principle):
      – The CPU needs to handle additional tasks to manage the page file size
      – Varying the page file size at runtime can create page file fragmentation
      This is the reason why I opted for a safer double value.

      Another reason, a cosmetic one, is that when playing with the settings with the kernel debugging file enabled, you would get a warning if the page file is less than 400 MB for smaller amounts of RAM (I think this means less than 4 GB) or less than 800 MB for anything above that threshold. The values are best estimates by the Microsoft engineers based on their own observations and not mathematically determined. This is how I came up with an 800 MB page file for 16 GB of RAM.
      I prefer to set the debugging file to None, and in the rare instance when I need it, I can replicate the issue with a kernel debugging file and a different page file. If you need or like to have a debugging file all the time, try setting it to Small to avoid all the page-file-related warnings and the risk of not having a debugging file when needed. A small file would do the job in almost all regular instances. This is not advice for large or highly secure enterprises, but for small businesses and end users.
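
      For completeness, the dump type can also be read or changed outside the GUI via the CrashControl registry key – roughly like this, if I remember the values correctly (0 = none, 2 = kernel dump, 3 = small memory dump; a reboot is needed for the change to take effect):

      rem Check the current kernel dump setting
      reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled

      rem Switch to a small memory dump (use /d 0 for none)
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 3 /f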

      Mark Russinovich is obviously right. I said in another reply here that Process Explorer should be left running so you can take note of the Committed Memory over the long run, exactly for the purpose you mentioned. However, we don’t want Committed Memory to be more than about 80% of the installed RAM if we want to avoid paging. The 200 MB of paging is unavoidable, though, with recent Windows OSes.

      If you want a dynamically sized Page File, you can always leave Windows to size it, which is the default. It does a good job of it.

      To clarify, only reducing the size of the page file requires a reboot, not increasing it, or at least in most cases.

      1 user thanked author for this post.
    • #205956

      A thread from February on the pagefile might be worth checking – refer to this from @ch100, as well as the post it is replying to.

      2 users thanked author for this post.
    • #206093

      Yes, you might think that 200MB is too small. For context, I disable the big kernel dump on issues.

      Still, you might think it is not enough. Let me ask you a question, then. If I set mine to 200-2000, I allow Windows to make it bigger as needed, so if it needs to be bigger, it will just grow at the time it needs it, right? And if the file doesn’t grow, I can assume it wasn’t needed? You seemed to imply earlier that it would need a reboot to get bigger. I thought the reboot was only needed to make it smaller, or if you changed the setting, but once it is set at, let’s say, 200-2000, I didn’t think you needed a reboot to avoid crashing when you need memory. Otherwise it would make no sense to have a lower and an upper limit, no?

      Mark Russinovich says to set it to total peak commit minus RAM, and double that amount for safety. So, if your total peak commit is normally under RAM, that would mean 0 for a swap file. We know we don’t want to run with no swap file at all, but 200 with the ability to grow on the fly as needed seemed to me like the answer, and then I can dump partial kernel info too and maybe avoid some weird issues that come with having no swap file. As a car salesman would say, we’re not going to lose the deal over just a 200 difference, so I am fine with 400, but out of curiosity, do you have any other technical reason to say 200-2000 would create issues?

      I can assure you I have never seen any issue that seemed related to setting 200-2000 since I started doing it. I think a warning of low virtual memory appears if you ever have a problem coming. But using a basic swap file that can grow on the fly seems to work for me. Of course, I don’t use very low-memory machines, or ones that could benefit from paging by suspending unused apps to leave more RAM to active programs and get performance benefits that way, as I try to size machines so the total load fits in RAM except on rare occasions.

      1 user thanked author for this post.
    • #206203

      Ok, I get it.

      I would be curious to measure the performance difference on a CPU-intensive app (Noel??) with and without variable paging enabled. I would tend to think it’s only a simple check, when allocating memory, of how much is left, and would not be perceptible, but the theory is sound. Probably nothing that affects most people day to day. Still, that would justify your 800-800 fixed size.

      As for number 2, it should not concern someone like me who doesn’t want to need the pagefile and uses it only as a warning, via the computer slowing down, that something is wrong. I use 2000 because a memory leak can grow quite fast and I would like to have enough time to react before a crash due to no memory could cause more important problems, by having time to see my computer slowing down significantly. Maybe 800 would be enough, maybe not. Then, once the issue was identified and fixed, there would be no fragmentation problem, as I would reset to the lowest value. So then, maybe 2000-2000 would be best if HD space can be spared, and it would fit your CPU argument.

      Funny, Windows recommends about 3000MB on my 16GB RAM Windows 10 at home. I might have set it to 250-2000 instead of 200-2000 there, and it is still actually 250MB in size after all the time I have used this computer, so it never needed more. Of course, I disable the debug file, which is useless for me unless I need it.

      Interesting discussion. All in all, your 800-800 seems a good compromise to very finely tune CPU use while not taking up too much space. And considering I put it on the SSD because I don’t want to spin up my sleeping ReFS data disks all the time for no reason, my slowdown-as-a-warning argument doesn’t work on this particular machine, so I might just switch to 800-800.

       

      1 user thanked author for this post.
    • #206211

      A few years ago I used to do the server builds in a virtual environment for a small-medium organisation. Nothing fancy, just the out-of-the-box build with a few tweaks, and among them was setting the Page File to 512-512 MB to avoid filling the disk with the default page file (Windows 2008 R2, behaving like Windows 7 in that matter). At that time I was not aware of the 400/800 MB concept and pop-up, which I think came around only with Windows 8/Server 2012. The debugging file was set to Small nevertheless, so no issue there. Now and then there was a warning that certain servers were running out of memory, but I used that as a pretext to increase the RAM on those servers rather than have it mitigated with page files. It is not a good idea to have multiple servers paging at the same time on shared storage, but sometimes it is easier to show a warning event saying that the server requires more memory and that Windows increased the page file beyond what was set to avoid crashing than to explain that to management. On SSD shared storage the performance may be different, but this was years ago, when mechanical disks were the norm in business, even if they were fast SCSI/SAS 10,000-15,000 rpm.

      1 user thanked author for this post.
    • #206237

      I use 2000 because a memory leak can grow quite fast and I would like to have enough time to react before a crash due to no memory could cause more important problems, by having time to see my computer slowing down significantly.

      That would suggest a larger pagefile, not a smaller one (if that was what you meant, then never mind!).  It’s one of the reasons I like bigger page files.

      If there is a fast memory leak, it will quickly consume all of the physical memory.  A larger swap file will delay that some, but eventually, all of the easily-swapped low priority memory pages will be swapped, and the system will begin to thrash as it tries to page out higher priority pages, which (by virtue of being higher priority) will only be paged out for a short time before they have to be read back into physical memory, and then the system will have to find some other thing to page out in order to make room for that first thing.  This, of course, makes the system very slow, but at least it doesn’t crash.

      If you have a small pagefile, the system will swap out all of the low priority pages as soon as the pressure on the RAM starts to grow, and the page file will reach its maximum pretty quickly, possibly before all of the low priority pages have been swapped out.  When the physical RAM fills and all of the cached memory reserves are dumped and filled with more leakage data, there’s no more room to try to page out anything.  Linux would start killing processes it thinks are the lowest priority to avoid a kernel panic (BSOD equivalent), but Windows doesn’t do that.  With all of the physical RAM full and the pagefile full, something’s going to crash.  Windows will do the best it can to juggle things, and it may be able to keep things going a while, but what it really needs is more memory right at that moment, virtual or otherwise.

      A large static pagefile (or a dynamic one with a large upper bound) gives you that overflow room in case something starts gobbling memory out of control.  IMO, you want to give it enough to thrash instead of crash.  Once you’ve gotten to the point of thrashing, a larger pagefile won’t help much anymore, as the issue becomes one of not having enough low priority memory pages to be able to swap, not insufficient swap space.  That will give you the warning you need that something is amiss (it would be hard to not notice that your PC has slowed to a crawl) without reaching the point where a given program needs more memory to avoid crashing, but there isn’t any to allocate, virtual or otherwise.

      Under normal circumstances, a PC with a reasonable amount of memory won’t need the pagefile.  It just sits there doing nothing but taking up room on the disk… but not a whole lot of room, compared to the size of disks today, especially if you let Windows set the size (it will be small when it’s not needed).  IMO, it’s better to have it and not need it than to need it and not have it.  It’s not harming performance just sitting there doing nothing.

      On Linux, a swap partition larger than the physical RAM (or at the very least, no smaller) is necessary for hibernation to work. That’s where Linux stores the hibernation data, and if the swap partition is too small, the hibernation will fail, and when you go to start the PC the next time, it will surprise you with a standard boot rather than a resume from hibernation.  Mint 19 and Ubuntu 18.04 have hibernation disabled by default, but I’ve re-enabled it on all three of my PCs that are thus far upgraded to Mint 19 (just the desktop to go).  These newer versions of Linux also use a swap file rather than a partition by default, but AFAIK the partition is still needed for hibernation.  I gave my Swift 8GB of swap, which is barely a drop in the bucket of the 1TB drive.  It’s a big bite out of my Dell’s 32GB eMMC drive, but still worth it for hibernation, IMO.

      Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
      XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
      Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

      1 user thanked author for this post.
    • #206272

      Yes, 2000 was meant as more than 1000 or 800 for me, giving enough time to react on an old mechanical HD, maybe. But reading what you just said and putting it all in perspective, maybe the best thing for reliability (not for the absolute, maybe tiny, performance improvements) would be 200-8000 (or another big number), or, as ch100 prefers, 400 or 800 as the minimum instead of 200. This would give even more time to react, and unless space is tight all the time, it wouldn’t matter if you used more than 4GB of swap while having an issue, only to resize it down once finished. Of course, then you lose the performance advantage of a fixed pagefile size, but it might be worth the peace of mind.

      I tuned the swap file on my Unix server a while ago, and there were a lot of considerations. If I remember correctly, there were settings to start swapping preemptively to avoid possible future performance issues. You had to be careful to have the system perform exactly the way that is best for your workload, depending on the amount of RAM you have, and to use swapping intelligently. Combine that with the use of a RAM filesystem, consider interactions with the RAM cache of a RAID array, and you can get huge performance improvements. This can be very satisfying! And that is how I realized, to my surprise, that my bottleneck was RAM bandwidth on my database server and not the HDs like I thought (or the CPU, which I didn’t think it could be).

      1 user thanked author for this post.
    • #206530

      To complicate things a little bit, think about how the PVS “RAM Cache with overflow on disk” write cache works when there is a shortage of RAM. In essence, the provisioned machine is a read-only streamed copy (or linked clones in a different implementation), while the writes are stored in temporary memory, which can be RAM or the so-called RAM cache. When the RAM cache is used, it is implemented as non-paged pool RAM. See how complicated things become when the RAM cache in this context competes with the regular RAM, and when there is a shortage of physical RAM, it is simulated by the page file storage. In the case presented, there are virtually 2 sets of physical RAM and 2 virtual page files: the Windows one and the write cache.
      https://support.citrix.com/article/CTX119469
      https://www.citrix.com/blogs/2015/01/19/size-matters-pvs-ram-cache-overflow-sizing/
      https://support.citrix.com/article/CTX122141
      It can get even more complicated, but I’ll stop here. 🙂

      2 users thanked author for this post.