How to: Size your windows XP page file.

Phoenix86
Think about the page file (PF) as "extra RAM." When you don't have enough RAM, data is sent to the PF. The memory subsystem that handles this is called Virtual Memory Management (VMM). Virtual Memory's (VM) *real* size is limited to RAM+PF. When processes load more data (commit charge) than will fit in RAM, the least recently used data in RAM is paged to disk (pagefile.sys), freeing up RAM for the requesting process. Because it's faster to read data from RAM than from the HDD, VMM keeps the most recently used data in RAM. "Paging to disk" is when data is transferred between RAM and the PF. You can sometimes "see" this happening when you alt-tab between two applications with a lot of data loaded; the HDD will be very active for a short period of time. This is a *basic* description of paging. There is definitely more going on behind the scenes, but it isn't as relevant to sizing the pagefile.

To measure your VM usage, give your system a workout: play some games, open large files, whatever you would do on a normal day. In fact, this is probably better measured after your system has run for a day or so. Then open Task Manager, go to the Performance tab, and look at the Commit Charge (K) box. Total is the amount of VM you're using right now. Peak is the maximum VM usage since you booted the computer. Limit is your VM limit.

To size the PF, take your Peak and subtract your Physical Memory Total (RAM). For example, on my work laptop I have a Peak of ~650MB and 512MB (~500) of RAM, so that's ~150MB. To run everything I have loaded at once, I would need a minimum 150MB PF. However, I want room to breathe: if I loaded just one more thing, I would get an out-of-memory error and would have to close something before opening anything new. So increase your Peak by about 25% first to account for that breathing room. OK, so (650*1.25)-500=~300. I should have a 300MB PF on this system.
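The arithmetic above can be restated as a small sketch (the helper name is mine, not anything from Windows; it just encodes the Peak*1.25 minus RAM rule from this post, clamped at zero for the "negative" case discussed below):

```python
def recommended_pf_mb(peak_mb, ram_mb):
    """Hypothetical helper restating the rule from this post:
    pad the observed peak commit charge by 25% for breathing
    room, then subtract what RAM already covers. A result of 0
    means RAM alone covers the padded peak."""
    return max(0, peak_mb * 1.25 - ram_mb)

# The laptop example from the post: ~650MB peak, ~500MB RAM.
print(recommended_pf_mb(650, 500))  # 312.5 -- the post rounds this to ~300
```

Nothing magic about the 25% figure; it's just the breathing-room margin suggested above, so adjust to taste.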

If you require a PF, you should set the minimum and maximum size to the same value. If you set a range, the PF can grow, but growth causes file fragments in the PF. The built-in defragmenter cannot correct fragments in the PF; only 3rd-party software can. The more fragments in your PF, the slower it will operate. Also, if you sized it properly, your PF will never need to grow.

Now what if (Peak*1.25)-RAM is negative? That means you don't need a PF, and you can disable the PF.
**Note, there is a bit of controversy about this setting, and it's performance benefits. Some people claim faster load times in applications, or overall smoother performance. Others say this is a placebo effect.**

What I'm telling you is: if this calculation is negative, the system does not need the extra memory from the PF for VM. There is enough RAM to supply all the VM needs. If new data is loaded, a system with a PF would be paging memory to disk; one without will not. When the system is paging, the HDD is very active, and this slows down your machine. If this happens, say in the middle of a game, your frames/second will likely take a hit. On slower systems (like the laptop in the example above) you can really feel paging because of the slower HDD. Also note, this doesn't mean you're disabling VMM, just limiting the size of the memory available to VMM. This means paging will still happen, it just won't be transferred to the HDD.

This may be where some performance gains could be measured. If the system doesn't have to page memory from RAM to the PF (because the PF is disabled), then something like loading a game (or a level in a game) might load faster.

It should be noted that while overall system tests (like 3DMark) do not show an increase in performance, they don't show a decrease either. I suspect this has to do with how and when paging happens. These programs aren't doing anything to cause paging while the system is being tested, and you can't test a tweak to the page file when the system isn't paging. As users, however, we do experience paging...

YMMV with disabling the page file. If the calculation of your page file size gives a negative figure and you don't feel comfortable disabling the PF, set it to a small amount like 100MB.

Now, the only thing left is the PF's location. Like many things, this can vary a lot depending on the number of drives and controllers in your system. In general you want it on the fastest drive/controller available; if the speed of two drives is similar, put it on the secondary drive. If the secondary drive is slower, it's better to keep it on the same drive as the OS. In multi-drive/controller setups it can get very complicated.

If I left anything out, or you notice anything that's incorrect, let me know.

EDIT: As noted by Ranma_Sao, you cannot obtain a full memory dump if your page file is less than your total RAM. When the system dumps, it creates a file on the HDD with the entire contents of your RAM. The data in the dump files is very useful when troubleshooting BSODs, with the proper tools. Ranma_Sao has offered many times in the past to diagnose these, so at least be aware of this. Some BSODs prevent you from booting into Windows to make changes to the PF, so this can be important. In other cases, where Windows boots but still generates a BSOD later, you can increase the PF size to get this information.
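To sanity-check a fixed PF size against the full-dump requirement Ranma_Sao mentions, a one-line check is enough (function name is mine, for illustration only):

```python
def can_hold_full_dump(pf_mb, ram_mb):
    """A full memory dump writes the entire contents of RAM out
    through the page file, so the PF must be at least RAM-sized."""
    return pf_mb >= ram_mb

print(can_hold_full_dump(300, 512))   # False: a 300MB PF can't hold a 512MB dump
print(can_hold_full_dump(1024, 512))  # True
```

So a PF sized by the Peak*1.25 rule alone will often fail this check; that's the trade-off discussed below.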

EDIT2&3: Corrected terms for clarity.

 
If you have questions or want to start a discussion about this information, please visit this thread.

The thread was created to keep the sticky as clean as possible, yet provide a place to discuss/debate the information found inside. Thank you.

 
Remember, if your pagefile isn't big enough to fit a memory dump file, a dump file will not get saved. This is important if you come on these forums and want to know why your machine bugchecked... ;)
 
I'm not gonna set a 1GB pagefile just so I can troubleshoot a problem... especially when I never NEED to...

To size the PF, take your Peak and subtract your Physical Memory Total (RAM). For example, on my work laptop I have a Peak of ~650MB and 512MB (~500) of RAM, so that's ~150MB. To run everything I have loaded at once, I would need a minimum 150MB PF. However, I want room to breathe: if I loaded just one more thing, I would get an out-of-memory error and would have to close something before opening anything new. So increase your Peak by about 25% first to account for that breathing room. OK, so (650*1.25)-500=~300. I should have a 300MB PF on this system.

thanks i never knew that...kinda useless now but i really coulda used this a month ago :eek:
 
Isn't the minimum size for a dumpfile RAM+? I seriously doubt people with 1GB RAM will want/need a 1+GB PF.

If that's the case, I would recommend people enable one large enough to get the dump when their machine is BSODing. I just can't recommend one that large for everyday usage, for the few times a dump is needed.

thanks i never knew that...kinda useless now but i really coulda used this a month ago
Well, I had done a write-up before, but it got lost when the forums were upgraded; I just now re-wrote it... :(
 
Yes, you need it at least as big as the amount of memory. The kernel has to dump the contents of memory somewhere. ;)

1GB pagefiles however are nothing, since most people that have 1GB of ram, usually have 200+ GB hard disks. ;)
 
Ranma_Sao,
A quick question. I have a full gig of memory with a 200MB pagefile on C:, and 800MB more on a separate drive. Both are fixed sizes for ease of defragmentation. I believe this meets the requirement, but is my configuration an acceptable way to do it? So far no issues since the install in April of 2003, but I'm curious. I think I read something on Microsoft's site about such an arrangement back when I ran Win2K, but maybe I dreamed it.
 
Ranma_Sao said:
Yes, you need it at least as big as the amount of memory. The kernel has to dump the contents of memory somewhere. ;)

1GB pagefiles however are nothing, since most people that have 1GB of ram, usually have 200+ GB hard disks. ;)
It's not the size... It's the access time. Open any 1GB file on your machine, now close it. Now, how long did that take? :eek:

EX. I have 512MB RAM, and 1GB PF. But if I run my system through the test above, I need a 500MB PF. What's the benefit of having a PF that is 2x my required size (outside getting a full dump)? I can certainly think of negatives, like access times...

Let's go through the logic... Same example as above: 512MB RAM, 1GB required VM.
Here are your PF options: 0, 500MB, 1GB, 2GB, 10GB, 20GB.

0 is too small, 20GB is too big. What do you think is the middle ground and why?
 
I know you're just using a visual example to state that a page file is extra RAM, but that's not an accurate description. Stuff gets paged whether RAM is full or not, or even whether there is a PF present or not.

It should be noted that while overall system tests (like 3DMark) do not show an increase in performance, they don't show a decrease either. I suspect this has to do with how and when paging happens. These programs aren't doing anything to cause paging while the system is being tested, and you can't test a tweak to the page file when the system isn't paging. As users, however, we do experience paging...
It should also be noted that while there are some self-proclaimed "experts" out there who claim to provide proof of performance increase (a la QuackV), testing done by more than one person in a set period over a range of systems has shown that neither increase nor decrease is present.

This means paging will still happen, it just won't be transferred to the HDD.
Then where does it go? ;)

Not a bad write-up, overall.
 
GreNME said:
I know you're just using a visual example to state that a page file is extra RAM, but that's not an accurate description. Stuff gets paged whether RAM is full or not, or even whether there is a PF present or not.
(...)
Then where does it go? ;)

Well, am I missing something?
(Another thing: Is "use as much ram and as little swap as possible" still the ideal?)
 
HHunt said:
Well, am I missing something?
Probably the difference between Virtual Memory Management and Page File. Just because it isn't being put on a page file doesn't mean VMM isn't paging to disk.
 
GreNME said:
Probably the difference between Virtual Memory Management and Page File. Just because it isn't being put on a page file doesn't mean VMM isn't paging to disk.

Assuming we're still talking about windows, and ignoring programs with their own systems (like photoshop and its scratch disk(s)), exactly when is what paged to disk without using a swap file?

Now I'm curious. :)
 
Phoenix86 said:
It's not the size... It's the access time. Open any 1GB file on your machine, now close it. Now, how long did that take? :eek:

EX. I have 512MB RAM, and 1GB PF. But if I run my system through the test above, I need a 500MB PF. What's the benefit of having a PF that is 2x my required size (outside getting a full dump)? I can certainly think of negatives, like access times...

Wouldn't that example only apply if the object in your PF was 1 single 1GB file?
I can't think of any time your system would page a 1GB file to create the access-time problem you mention.
 
HHunt said:
Assuming we're still talking about windows, and ignoring programs with their own systems (like photoshop and its scratch disk(s)), exactly when is what paged to disk without using a swap file?

Now I'm curious. :)
Typically, it's either other programs that are running concurrently, files associated with such programs, and/or system files (DLLs, services, etc.) that are being paged out. Also, it is rare that the whole kernel—or, to put it better, all of the system files associated with the kernel—to be active in memory all at the same time (even with registry tweaks to force more to remain in memory). All of this goes on, page file or no.
 
-freon- said:
Wouldn't that example only apply if the object in your PF was 1 single 1GB file?
I can't think of any time your system would page a 1GB file to create the access-time problem you mention.
I can think of a few, but I would never be in such a situation. Such environments would be those with rendering and compression workstations (audio, video, or photo), or possibly in some print shops dealing with huge, hi-res layouts.

Obviously not a normal situation, but such situations exist.

Oh, and no: it wouldn't only apply with a single large file. If numerous large files linked to a running program were taken out of focus, another brought up, then the first moved back into focus, there is a chance for paging. The dilemma both Phoenix and I have faced with testing this is that there is nothing we can do to force the paging, and only nominal things we can do to gauge it. This makes benchmarking it unreliable by any means we can come up with.
 
GreNME said:
I know you're just using visual example to state that a page file is extra RAM, but that's not an accurate description. Stuff gets paged whether RAM is full or not, or even whether there is a PF present or not.
Yes, stuff still gets paged, to RAM.

Data is stored as pages of memory. Paging is when these pages are handed to VMM to handle. Where VMM stores this data (PF or RAM) is the voodoo that is VMM (this is probably a better description of paging).

Generally you see this as HDD activity; however, when no PF is present, VM can only page to RAM. There are a few other files associated with VMM. Like I said before, VMM is more than just VM.

-freon-, no, what if a program you're launching requires 750MB? VMM empties your RAM to make space for it. This requires Windows to write hundreds of MB of data to the disk while it's also reading data to load the required program. Major HDD access x2 = wait.

Thanks for the feedback everyone, glad you like it. :)
 
Ranma_Sao said:
Remember if your pagefile isn't big enough to fit a memory dump file, a dump file will not get saved. This is important if you come on these forums and want to know why your machine bugchecked... ;)
Where do these files get saved BTW?
 
holy cow! great writeup...and we agree on this subject!

since it's been a few months since the last thread, maybe there are some new guys who know of some obscure program that can keep track of how long a process is operating.

I've tested my rig with several configurations of WinBench (or something like that from WinStone) and that bench found very little differences with and without PFs. However, I know there is a difference when loading up BF42 maps...course that is dependent on proc strength, mem latency, HD speed and access times. And if it is a placebo...mmm, it's a good one. :D
 
S1nF1xx said:
Where do these files get saved BTW?
C:\windows\memory.dmp
C:\windows\minidump\

DLLs can also mark memory to never be loaded and to have the VMM always go to disk for it, since they assume that block of memory will almost never be used. That is also paging.

Mosin:
Your setup is fine. You won't be able to get memory dumps but if you are not having a problem, don't worry about it.

Edit:
And I believe firmly that leaving the pagefile on is the best thing period, but I'm tired of debating it, it's your rigs do what you want. ;)
 
GreNME said:
Typically, it's either other programs that are running concurrently, files associated with such programs, and/or system files (DLLs, services, etc.) that are being paged out. Also, it is rare that the whole kernel—or, to put it better, all of the system files associated with the kernel—to be active in memory all at the same time (even with registry tweaks to force more to remain in memory). All of this goes on, page file or no.

That way. Ok.
 
Well, I really don't want this to get into a PF/no-PF debate, there are other threads going on for that.

I really just want to show people how to size the PF. However, following my logic, it's going to lead to the question about negative PF sizes so I put that in here too with lots of ***'s and YMMVs for a reason. :)

It's my opinion that it's OK, but w/o good proof in testing, it's just that, an educated opinion.
 
Phoenix86 said:
Well, I really don't want this to get into a PF/no-PF debate, there are other threads going on for that.
I agree there. And like I said, it's a really good post. I second the sticky recommendation.
 
Phoenix86 said:
-freon-, no, what if a program you're launching requires 750MB? VMM empties your RAM to make space for it. This requires Windows to write hundreds of MB of data to the disk while it's also reading data to load the required program. Major HDD access x2 = wait.

Thanks for the feedback everyone, glad you like it. :)

Well, using that example, what happens if you dont have a PF?
VMM empties the RAM to where?
 
Without a page file, the memory just takes up space in the VM subsystem. In the event of no PF, the data can only go to the RAM when not in the VM subsystem, which could create paging of other data. I don't know of any way to view what is going in and out of the VM subsystem from the file system.

Basically, this is where you may or may not see reduced performance, depending on the amount of RAM you have, which is why Phoenix (and really just about anyone) recommends ample amounts of RAM for better handling of large files. :)
 
GreNME said:
Without a page file, the memory just takes up space in the VM subsystem. In the event of no PF, the data can only go to the RAM when not in the VM subsystem, which could create paging of other data. I don't know of any way to view what is going in and out of the VM subsystem from the file system.

Basically, this is where you may or may not see reduced performance, depending on the amount of RAM you have, which is why Phoenix (and really just about anyone) recommends ample amounts of RAM for better handling of large files. :)
Exactly.
 
GreNME said:
Without a page file, the memory just takes up space in the VM subsystem. In the event of no PF, the data can only go to the RAM when not in the VM subsystem, which could create paging of other data. I don't know of any way to view what is going in and out of the VM subsystem from the file system.

Basically, this is where you may or may not see reduced performance, depending on the amount of RAM you have, which is why Phoenix (and really just about anyone) recommends ample amounts of RAM for better handling of large files. :)
If you hook up a kernel debugger, you can view very easily what is where. My answer to everything really appears to be, hook up a debugger. I've been doing this way too long... ;)
 
Ranma_Sao said:
If you hook up a kernel debugger, you can view very easily what is where. My answer to everything really appears to be, hook up a debugger. I've been doing this way too long... ;)
I do not know how to use a kernel debugger, could you educate me or point me very strongly where I can quickly see what is going on?
 
Ranma_Sao said:
If you hook up a kernel debugger, you can view very easily what is where. My answer to everything really appears to be, hook up a debugger. I've been doing this way too long... ;)
You're telling me the debugger can tell how much is allocated to what? Well, that just underscores my need to get an MSDN subscription...
 
Ranma_Sao said:
Again, the debugger packages and symbol files are free. (Note, the symbols are public symbols, not private symbols....)

http://www.microsoft.com/whdc/devtools/debugging/default.mspx
Downloading now, but let me point out from your first link:
first link said:
How to Obtain a DDK
  • Order the DDK Suite or the current DDK
    Note: The DDK Suite is available by direct order only.
  • Receive the DDK through MSDN Subscription
    Note: The current DDK is not offered as a download on the WHDC Web site, but it is part of the standard MSDN Subscriber Downloads and is included with MSDN® subscriptions.
Besides, I really do want an MSDN subscription anyway (just can't afford it right now). ;)
 
Sorry, the debugger packages used to be a part of the ddk. I guess someone decided to move them.
 
Well, whoever it was should decide that I'm so special I deserve a free MSDN subscription. :p

* by the way, any update on that info?
 
Well I don't have an msdn subscription either. ;)

No, I am sorry I keep promising and not looking through the information I will let you know when life slows down, and I have the time.
 
great, so I really do need a 2GB+ swapfile :rolleyes: wonderful. (knew this already just being 'miss sunshine' for the day/night/whatever :p)
 
This article has tons of false information. :rolleyes: There is no need to alter the size of the paging file; Windows does a good job managing it. Also, disabling the paging file will not increase performance. When you disable the paging file, it does not stop your PC from paging to disk; it just will not page to the paging file. The page file is far from the only file involved in paging; every exe and dll is, as is every other file NOT opened with the cache bypassed. Also, some applications insist on creating what are called "pagefile-backed sections" and will therefore not work correctly without having a pagefile. So disabling the paging file will not give you any performance benefits. It is best to leave the paging file system-managed.

As noted by Ranma_Sao, you cannot obtain a full memory dump if your page file is less than your total RAM. When the system dumps, it creates a file on the HDD with the entire contents of your RAM. The data in the dump files is very useful when troubleshooting BSODs, with the proper tools. Ranma_Sao has offered many times in the past to diagnose these, so at least be aware of this. Some BSODs prevent you from booting into Windows to make changes to the PF, so this can be important. In other cases, where Windows boots but still generates a BSOD later, you can increase the PF size to get this information.

It is true that for a complete memory dump to be performed you need a paging file larger than the amount of RAM you have, but there is no need to create a complete memory dump. Windows supports three different memory dumps: small, kernel, and complete. Even MS recommends a kernel memory dump. It contains more information than the small memory dump file and is significantly smaller than the complete memory dump file. It omits only those portions of memory that are unlikely to have been involved in the problem. Also, even if you set the paging file to a small size, Windows XP will automatically expand the paging file to store the memory dump BEFORE it is written out to disk on the next reboot.

If you require a PF, you should set the minimum and maximum size to the same value. If you set a range, the PF can grow, but this will cause file fragments in the PF. The built in defragmenter cannot correct fragments in the PF, only 3rd party software can. The more fragments in your PF, the slower it will operate. Also, if you sized it properly, your PF will never need to grow.

This is also a myth. The paging file is not read in contiguous chunks, so even if the paging file were fragmented it would not make a difference in performance.

Virtual Memory (VM) size=RAM+PF

This is also incorrect and just shows you do NOT understand virtual memory at all.
 
KoolDrew said:
This article has tons of false information. :rolleyes:
Please feel free to correct me, I'll update anything that's incorrect. Don't be a jerk about it.

Windows does a good job managing it.
It does a fair job, but it's not perfect. Care to explain more, instead of making one-liners?

Also disabling the paging file will not increase performance.
This article is *NOT* about disabling the PF. I thought I made that clear.

Thi is also a myth. The paging file is not read in a continous chunks so even if the paging file was fragmented it would not make a difference in performance.
Please explain/link. I'd like to know more about this.

This is also incorrect and just shows you do NOT understand virtual memory at all.
Another one-liner... Care to explain what defines VM size, instead of just saying "you're wrong"?
 