Understanding Virtual Memory

by Perris Calderon

May, 2004


First off, let us get a couple of things out of the way:

·        XP is a virtual memory operating system

·        There is nothing you can do to prevent virtual memory in the NT kernel


No matter your configuration, with any given amount of RAM, you cannot reduce the amount of paging by adjusting any setting in the user interface of these virtual memory operating systems. You can redirect operating system paging, and you can circumvent virtual memory strategy, but you cannot reduce the amount of paging in the NT family of kernels.


To elaborate:

We have to realize that paging is how everything gets brought into memory in the first place! It's quite obvious that anything in memory either came from your disk, or will become part of your disk when your work is done. To quote the Microsoft Knowledge Base:


“Windows NT REQUIRES "backing storage" for EVERYTHING it keeps in RAM. If Windows NT requires more space in RAM, it must be able to swap out code and data to either the paging file or the original executable file.”


Here's what actually happens:

Once information is brought into memory (it must be paged in), the operating system chooses a memory reclamation strategy for that process. In one form of this memory reclamation (which is paging, to be clear), the kernel can mark data to be released or unloaded without a hard write. If that information is referenced again, the OS will retrieve it directly from the .exe or the .dll it came from. This is accomplished by simply "unloading" portions of the .dll or .exe, and reloading those portions when they are needed again. Nice!


Note: For the most part, this paging does not involve the page file; this form of paging takes place against the original location of the .exe or the .dll.


The "page file" is another form of paging, and this is what most people are talking about when they refer to system paging. The page file is there to provide space for whatever portion of virtual memory has been modified since it was initially allocated. In a conversation I had with Mark Russinovich, this was stated quite eloquently:


“When a process allocates a piece of private virtual memory (memory not backed by an image or data file on disk, which is considered sharable memory), the system charges the allocation against the commit limit. The commit limit is the sum of most of physical memory and all paging files. In the background the system will write these pages out to the paging file if a paging file exists and there is space in the paging file. This is an optimization only.”
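Mark's commit limit boils down to simple arithmetic. Here's a toy Python sketch of the idea (the numbers and the `can_commit` helper are purely illustrative, not any real Windows API): a private allocation is charged against physical memory plus all paging files combined.

```python
# Hypothetical figures, in megabytes.
physical_ram_mb = 512
paging_files_mb = [768]          # one paging file of 768 MB

# The commit limit is (roughly) physical memory plus all paging files.
commit_limit_mb = physical_ram_mb + sum(paging_files_mb)

def can_commit(commit_charge_mb, request_mb):
    """A private-memory allocation is charged against the commit limit."""
    return commit_charge_mb + request_mb <= commit_limit_mb

print(commit_limit_mb)           # 1280
print(can_commit(1100, 100))     # True: 1200 <= 1280
print(can_commit(1100, 300))     # False: the OS must expand a paging file or refuse
```

Shrink the paging file and the commit limit shrinks with it; that is the whole reason a too-small page file squeezes everything else, as described below.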


See this? Modified information cannot have its backing store in the original data file or .exe, since it has been modified.* This is obvious once told, isn't it?


Different types of things are paged to different files. You can't page "private writable committed" memory to .exe or .dll files, and you don't page code to the paging file.*


With this understanding we realize:



You see now, in such a situation, when memory needs to be reclaimed, you'll be paging and unloading other things in order to take up the slack you've created by having a page file smaller than the memory in use (the private writable pages can no longer be backed if you've taken away their page file area).


The effect? Stacks, heaps, program global storage, etc. will all have to stay in physical memory, NO MATTER HOW LONG AGO ANY OF IT WAS REFERENCED!!! This is very important for any given workload and ANY amount of RAM, since the OS would like to mark memory available when it hasn't been called for a long time. You have impeded this strategy if you have a page file smaller than the amount of RAM in use.


The hits? More paging of executable code, cache data maps and the like, even though they were referenced far more recently than, for argument's sake, the bottom-most pages of a thread's stack. See? Those bottom-most pages are what we want paged out, not .exes or .dlls that were recently referenced.


You thwart this good strategy when the page file is smaller than the amount of memory in use.


**All memory seen under the NT family of OSes is virtual memory (processes access memory through their virtual address space); there is no way to address RAM directly!!


And so we see, if memory is in use, it has either come from the hard drive or it will go to the hard drive...THERE MUST BE HARD DRIVE AREA FOR EVERYTHING YOU HAVE IN MEMORY...(self-evident, isn't it?).


Now, that's out of the way, let's go further:

When the operating system needs to reclaim memory (because all memory is currently in use, and you are launching new apps or loading more info into existing work), the OS obviously has to get the necessary RAM from somewhere. Something in memory will (must) be unloaded to make room for your new work. No one knows what will be unloaded until the time comes, as XP will unload whatever is least likely to come into use again.


Memory reclamation in XP goes even further to make the process as seamless as possible, using more algorithms than most can appreciate. For instance, there is a "first in, first out" (FIFO) policy for page faults, there is a "least recently used" (LRU) policy, and combinations of these with others to determine just what will not be missed when it's released. Remarkable! There is also a "standby list". When information hasn't been used in a while but nothing has claimed the memory yet, it becomes available while remaining both written on disk (possibly in the page file) and still in memory. Oh, did I forget to say? ALL AT THE SAME TIME ('til the memory is claimed)! Sweet!!! If this information is called before the memory is claimed by a new process, it will be brought back without needing anything from the hard drive! This is what's known as a "soft fault": memory available, and loaded at the ready for new use, at the same time!
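The standby list idea can be sketched with a toy model. This Python sketch is heavily simplified and entirely my own illustration of the concept (not XP's actual code): a trimmed page stays resident on the standby list, and touching it there is a soft fault with no disk read.

```python
# Toy model: a working set, a standby list, and pages backed on disk.
working_set = {"A", "B"}
standby = []          # trimmed pages, still resident in RAM, oldest first
on_disk = set()       # pages whose contents also exist in backing store

def trim(page):
    """Reclaim candidate: backed on disk, but kept in RAM for now."""
    working_set.discard(page)
    on_disk.add(page)
    standby.append(page)

def touch(page):
    if page in working_set:
        return "no fault"
    if page in standby:              # still resident: no disk I/O needed
        standby.remove(page)
        working_set.add(page)
        return "soft fault"
    working_set.add(page)            # must be read from the .exe/.dll or pagefile
    return "hard fault"

trim("A")
print(touch("A"))   # soft fault: recovered from the standby list, no disk read
print(touch("C"))   # hard fault: had to be paged in from disk
```

The point of the model: until a physical page is claimed by someone else, its old contents cost nothing to get back.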


Why so much trouble with today's amount of ram?

You have to realize, most programs are written with the 90/10 rule in mind: a given user spends 90% of the time using just 10% of a program's code or data. The rest of the program can (and should) be kept out on disk. This obviously makes more physical memory available for other, more immediate and important needs. You don't keep memory waiting around if it's not likely to be used; you try to have your memory invested in good purpose and function. The unused features of these programs will simply be paged in (usually from the .exe) if they are ever called by the user...HA!!!...no page file is used for this paging (the unloading and reloading of .exes and .dlls).


To sum everything up:

If you are not short of hard drive space, reducing the size of the page file below the default is counterproductive, and will in fact impede the memory strategies of XP if you ever increase your workload and put your memory under pressure.

Here's why:

"Mapped" addresses are ranges for which the backing store is an .exe, a .dll, or some data file explicitly mapped by the programmer (for instance, Photoshop's scratch file).

"Committed" addresses are backed by the paging file.

None, some, or all of the "mapped" and "committed" virtual space might actually still be resident in the process address space. Simply speaking, this means that it's still in RAM and referenceable without raising a page fault.

The remainder (ignoring the in-memory page caches, or soft page faults) has obviously got to be on disk somewhere. If it's "mapped", the place on disk is the .exe, .dll, or whatever the mapped file is. If it's "committed", the place on disk is the paging file.
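The mapped-versus-committed distinction above amounts to a simple lookup. Here's a toy Python sketch (the function and the example path are purely illustrative, not any real Windows API) of where each kind of page lives on disk:

```python
# Sketch: where a page's contents are backed on disk depends on how it was created.
def backing_store(kind, image_path=None):
    """kind: 'mapped' (exe/dll/data file) or 'committed' (private writable)."""
    if kind == "mapped":
        return image_path          # the .exe, .dll, or mapped data file itself
    if kind == "committed":
        return "pagefile.sys"      # private modified memory goes to the paging file
    raise ValueError(kind)

print(backing_store("mapped", r"C:\Windows\notepad.exe"))   # the image file itself
print(backing_store("committed"))                           # pagefile.sys
```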


Why Does The Page File Need To Be Bigger Than The Information Written To It?


**Memory allocation in NT is a two-step process--virtual memory addresses are reserved first, and committed second...The reservation process is simply a way NT tells the Memory Manager to reserve a block of virtual memory pages to satisfy other memory requests by the process...There are many cases in which an application will want to reserve a large block of its address space for a particular purpose (keeping data in a contiguous block makes the data easy to manage) but might not want to use all of the space.
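The two-step reserve-then-commit process can be sketched as bookkeeping. This is a hypothetical Python model (the `AddressSpace` class is my own illustration, not NT's actual memory manager): reserving address space costs nothing against the commit limit; only committing is charged.

```python
# Toy model of NT's two-step allocation: reserve is free, commit is charged.
class AddressSpace:
    def __init__(self, commit_limit):
        self.commit_limit = commit_limit
        self.reserved = 0     # address range set aside; no backing store yet
        self.committed = 0    # charged against the commit limit

    def reserve(self, size):
        self.reserved += size          # cheap: just bookkeeping

    def commit(self, size):
        if self.committed + size > self.commit_limit:
            raise MemoryError("commit limit reached")
        self.committed += size

vm = AddressSpace(commit_limit=1024)
vm.reserve(4096)      # reserve a big contiguous range up front...
vm.commit(256)        # ...but only commit (and charge) what is actually used
print(vm.reserved, vm.committed)   # 4096 256
```

This is why a process can hold a huge contiguous range "for a particular purpose" without consuming a single byte of page file until it actually commits.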


This is simplest to explain using the following analogy:

If you were to look at any 100% occupied apartment building in Manhattan, you would see that at any given time throughout the day, fewer than 25% of the residents are in the building at once!


Does this mean the apartment building can be 75% smaller?

Of course not. You could do it, but man, would that make things tough. For best efficiency, every resident in this building needs their own address. Even those that have never shown up at all need their own address, don't they? We can't assume that they never will show up, and we need to keep space available for everybody.


512 residents will need 512 beds...plus they will need room to toss and turn.

For reasons similar to this analogy, you can't have different pieces of memory sharing the same virtual address, can you?


Now, for users who do not put their memory under pressure: if you are certain you won't be adding additional workload, you are not likely to take a hit if you decide to lower the default setting of the page file. In that case, if you need the hard drive area, you are welcome to save some space on the drive by decreasing the initial minimum. Mark gives me the rule of thumb as follows: "You can see the commit peak in Task Manager or Process Explorer. To be safe, size your paging files to double that amount, (expansion enabled)." He goes on to say that if a user increases physical memory without increasing workload, a smaller page file is an option to save hard drive area. Once again, however, we repeat: it's necessary to have at least as much page file as the amount of memory you have in use.
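Mark's rule of thumb is simple arithmetic. A sketch, with a hypothetical commit peak:

```python
# Hypothetical: the commit peak observed in Task Manager / Process Explorer, in MB.
commit_peak_mb = 400

# "To be safe, size your paging files to double that amount" (expansion enabled).
initial_minimum_mb = 2 * commit_peak_mb
print(initial_minimum_mb)   # 800
```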


Let's move on


!!!!!!!!!!!!!!!!!!!! IMPORTANT!!!!!!!!!!!!!!!!!!!!





Any "expert" who has told you the page file becomes fragmented due to "expansion" has an incomplete understanding of what the page file is, what it does, and how it functions. To make this as simple as possible, here's what actually happens, and exactly how the "fragmented page file" myth got started:


First, we need to point out that the page file is a different type of file than most of the files on your computer. The page file is a "container" file. Most files are like bladders that fill with water: they start small, taking no space on the hard drive at all until information is written; then the boundaries of the file form and change as information is written, growing, shrinking and expanding around and in between the surrounding area and the surrounding files, like a balloon or bladder would.


The page file is different. The page file is not like a bladder. It's like a can or container. Even if nothing is written to the page file, its physical size and location remain constant and fixed. Other files will form around the page file, even when nothing at all is written to it (once the page file is contiguous).


For instance, suppose you have a contiguous page file with an initial minimum of 256MB. Even if there is absolutely nothing written to that page file, the file will still be 256MB. Those 256MB will not move on the hard drive, and nothing but page file activity will enter the page file area. With no information written to it, the page file is like an empty can, which remains the same size whether it's full or empty.


Compare this again to a common file on your hard drive. These files behave more like a bladder than a container. Even if nothing is written to a common file, other information will form in proximity, affecting the final location of that file. Not so with the page file: once you make the page file contiguous, its extent will remain identical on a healthy drive even if expansion is invoked.


Here's how the "fragmented page file due to expansion" myth got started:

Suppose, for argument's sake, your computing episode requires more virtual memory than your settings accommodate. The operating system will try to keep you working by expanding the page file. This is good: if this didn't happen you would freeze, slow down, stall, or crash. Now, it's true, the added portion of the page file in this situation is not going to be near the original extent. You now have a fragmented page file, and this is how the "fragmented page file due to expansion" myth was started. HOWEVER, IT IS INCORRECT...and it's simple to see why: the added portion of the page file is eliminated on reboot. The original page file absolutely has to return to the original condition and the original location it was in when you reboot. If the page file was contiguous before expansion, it is absolutely contiguous after expansion once you reboot.


[Figures omitted. Blue is data, green is page file: what a normal page file looks like; what an expanded page file looks like (the added extent is elsewhere on the disk); what the page file looks like after rebooting (contiguous again).]



What Causes the Expansion of a Page File?

Your operating system will seek more virtual memory when the "commit charge" approaches the "commit limit".


What does that mean? In the simplest terms this is when your work is asking for more virtual memory (commit charge) than what the OS is prepared to deliver (commit limit).


In technical terms, the "commit charge" is the total of the private (non-shared) virtual address space of all of your processes. This excludes, however, all the address space holding code, mapped files, et cetera.


For best performance, you need to make your page file large enough that the operating system never needs to expand it; that is, the commit charge (virtual memory requested) is never larger than the commit limit (virtual memory available). In other words, your virtual memory must be more abundant than the OS will request (so obvious, isn't it?). This will be your initial minimum.


Then, for good measure, leave expansion available to about three times this initial minimum. Thus the OS will be able to keep you working in case your needs grow: you start using some of the very sophisticated programs that are written more and more every day, or you create more user accounts (user accounts invoke the page file for Fast User Switching), or whatever. There is no penalty to leaving expansion enabled.


NOW YOU HAVE THE BEST OF BOTH WORLDS: a page file that is static, because you have made the initial minimum so large the OS will never need to expand it, and expansion enabled just in case you are wrong in your evaluation of what kind of power user you are or become.


USUALLY THE DEFAULT SETTINGS OF XP ACCOMPLISH THIS GOAL. Most users do not need to be concerned or proactive about setting their virtual memory. In other words, leave it alone.


HOWEVER, SOME USERS NEED A HIGHER INITIAL MINIMUM THAN THE DEFAULT. These are the users who have experienced an episode where the OS has expanded the page file, or has claimed to be short of virtual memory.





Different types of things are paged to different files. You can't page "private writable committed" memory to .exe or .dll files, and you don't page code to the paging file.

Jamie Hanrahan of Kernel Mode Systems, the web's "root directory" for Windows NT and Windows 2000 (aka jeh from 2cpu.com), has corrected my statement on this matter with the following caveat:


There's one not-unheard-of occasion where code IS paged to the paging file: If you're debugging, you're likely setting breakpoints in code. That's done by overwriting an opcode with an INT 3. Voilà! Code is modified. Code is normally mapped in sections with the "copy on write" attribute, which means that it's nominally read-only and everyone using it shares just one copy in RAM, and if it's dropped from RAM it's paged back in from the exe or .dll - BUT - if someone writes to it, they instantly get their own process-private copy of the modified page, and that page is thenceforth backed by the paging file.

Copy-on-write actually applies to data regions defined in EXEs and .DLLs also. If I'm writing a program and I define some global locations, those are normally copy-on-write. If multiple instances of the program are running, they share those pages until they write to them - from then on they're process-private.



Credits and Contributions:


Perris Calderon

Concept and Creation


Eric Vaughan



*Jamie Hanrahan

Kernel Mode Systems (...)


**Inside Memory Management, Part 1, Part 2

by Mark Russinovich


All content is copyrighted © and therefore may not be reproduced without explicit permission from the author.





