How to: Size your Windows XP page file.

Status
Not open for further replies.
Ok. I will cover every part that is incorrect.

There is a memory subsystem called Virtual Memory Management (VMM). Virtual Memory (VM) size=RAM+PF

This is incorrect. Virtual Memory is the virtualization of memory addresses. Each process sees its own set of memory addresses; on a 32-bit system there's 4GB worth. 2GB is reserved by the NT kernel and the other 2GB is available for the process to use.

However, the pagefile is a backing store for data so memory can be freed for other uses. Anything that is altered needs to be paged to the paging file. However, most things can be paged back to their original files, such as executables, shared libraries, etc.

RAM + PF does not equal the amount of Virtual Memory. There is always 4GB worth of virtual memory address space for each process.
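To put numbers on the address-space point, here is a quick sketch in Python (assuming the default 2GB/2GB split; the /3GB boot switch changes it):

```python
# 32-bit virtual address space arithmetic (default NT 2GB/2GB split).
# These numbers are fixed per process, regardless of RAM or pagefile size.
ADDRESS_BITS = 32
GIB = 2 ** 30

total_va = 2 ** ADDRESS_BITS      # 4 GiB of virtual addresses per process
kernel_va = total_va // 2         # upper 2 GiB reserved for the kernel
user_va = total_va - kernel_va    # lower 2 GiB available to the process

print(total_va // GIB, kernel_va // GIB, user_va // GIB)  # 4 2 2
```

The point is simply that these figures are a property of the address width, not of how much RAM or pagefile you have.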

Paging is when data is being transferred between RAM and the PF

You are partly correct, but the pagefile is NOT the only file involved in paging. Executables and shared library data are also involved with paging.

To measure your VM usage, give your system a workout; play some games, open large files, whatever you would do in a normal day. In fact, this is probably better measured after your system has run for a day or so. Then open Task Manager, go to the Performance tab, and look at the Commit Charge (K) box. Total is the total amount of VM you're using. Peak is the maximum VM usage since you booted the computer. Limit is your VM limit.

As I said above, RAM+PF is NOT the amount of Virtual Memory. Each 32-bit process has 4GB of virtual memory.

To size the PF, take your Peak and subtract your Physical Memory Total (RAM). For example, on my work laptop, I have a Peak of ~650MB and 512MB (~500) of RAM, so that's ~150MB. In order to run everything I have loaded at once, I would need a minimum of 150MB of PF. However, I want room to breathe. If I loaded just one more thing, I would receive an out-of-memory error message and would have to close something to open something new. So you should increase your Peak by about 25% first; this accounts for your breathing room. OK, so (650*1.25)-500=~300. I should have a 300MB PF on this system.
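The sizing rule above can be written as a tiny calculator. This is just a sketch of this guide's rule of thumb, not an official formula, and the function name is made up:

```python
def suggested_pagefile_mb(peak_commit_mb, ram_mb, headroom=1.25):
    """This guide's rule of thumb: inflate the observed commit-charge
    peak by ~25% for breathing room, then subtract physical RAM.
    A result <= 0 means RAM alone covered the observed peak."""
    return peak_commit_mb * headroom - ram_mb

# The laptop example from the guide: ~650 MB peak commit, ~500 MB RAM.
print(suggested_pagefile_mb(650, 500))  # 312.5, i.e. roughly a 300 MB PF
```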

The pagefile should be left System managed. Windows manages it very well.

If you require a PF, you should set the minimum and maximum size to the same value. If you set a range, the PF can grow, but this will cause file fragments in the PF. The built-in defragmenter cannot correct fragments in the PF; only 3rd-party software can. The more fragments in your PF, the slower it will operate. Also, if you sized it properly, your PF will never need to grow.

Setting the minimum and max the same is also not recommended, as the paging file then has no room to grow. If you are worried about pagefile fragmentation, just set the minimum high enough that the pagefile will not need to resize itself. This gives you the same benefit you describe for a fixed pagefile, but the paging file still has room to grow if needed.

Also, the only way I see a fragmented pagefile causing degraded performance is that, since the paging file cannot be moved, it may cause fragmentation of other files. Fragmentation of the pagefile itself will not decrease performance when paging to and from it, because the paging file is not read as one whole file (from one end to the other); instead it is read a few tens of KB here, a few tens of KB there, etc.

Now what if (Peak*1.25)-RAM is negative? Well, that means you don't need a PF, and you can disable the PF.

This is incorrect. NT was designed with the assumption that there is a backing store on disk. As I said before, the paging file is NOT the only file involved with paging, so most things will be paged back to whatever they were paged from. However, when it comes to data that has no original file, NT has to have a place to put it if it needs the memory back. That place has to be the paging file. So anything modified in memory needs a backing store, and that has to be the paging file since there is nowhere else to put it.

NT needs the pagefile so much that even if you disable it, NT will create a 20MB paging file without you knowing.

What I'm telling you is, if this calculation is negative, the system does not need the extra memory from the PF for VM. There is enough RAM to supply all the VM needs. If new data is loaded, the system with a PF would be paging memory to disk; one without will not. Now, when the system is paging, the HDD is very active, and this slows down your machine. If this happens, say, during the middle of a game, your frames/second will likely take a hit. On slower systems (like the laptop I gave in the example above) you can really feel paging because of the slower HDD. Also note, this doesn't mean you're disabling VMM, just limiting the memory available to VMM. This means paging will still happen; it just won't be transferred to the HDD.

Incorrect use of the term Virtual Memory, and as I said above, NT was designed with the assumption that everything has a backing store, which is the pagefile. Also, as I already said, a 20MB pagefile will be created, and the paging file is not the only file involved with paging. To minimize PF activity, the best you can do is get more RAM. If you have enough RAM that you think you can disable the pagefile, the pagefile is most likely not used much anyway, but many apps create what are called "pagefile-backed sections" and will therefore fail miserably if you don't have a pagefile.

This may be where some performance gains could be measured. If the system doesn't have to page memory from RAM to the PF (because the PF is disabled), then something like loading a game (or a level in a game) might load faster.

No, because the paging file is not the only file involved in paging, and NT creates a 20MB paging file if one is not already present.

It should be noted that while overall system tests (like 3DMark) do not show an increase in performance, they don't show a decrease either. I suspect this has to do with how and when paging happens. These programs aren't doing anything to cause paging while the system is being tested. Well, you can't test a tweak to the page file when the system isn't paging. As users, however, we do experience paging...

From what I stated above, you should be able to come to the conclusion that there is a reason there was no effect.

YMMV with disabling the page file. If you have a negative figure in the calculation of your page file size, and don't feel comfortable with disabling the PF, set it to a small amount like 100MB.

The PF should never be disabled, but setting it to a small size is not a bad idea. I would recommend leaving it System managed, but the minimum size should be high enough that the amount of PF actually being used does not exceed the minimum.

Now, the only thing left is the PF's location. Like many things, this can vary a lot depending on the number of drives and controllers in your system. In general you want it on the fastest drive/controller available; if the speed of two drives is similar, put it on the secondary drive. If the secondary drive is slower, it's better to keep it on the same drive as the OS. In multi-drive/controller setups it can be very complicated.

If you have enough RAM, pagefile activity will be minimal, so the position of the pagefile matters little, but putting the pagefile on a separate drive and controller is not a bad idea, as theoretically it should help. Many people here, however, do have plenty of RAM, and in their case it is best left on the same HDD.

I have also heard many people recommend putting the pagefile at the front of the drive if you only have one. Well, from the other info I gave you, you know that the pagefile is not the only file involved with paging. These other files most likely are not at the front of the drive, so the head will be jumping all over the place.

If I left anything out, or your notice anything that's incorrect, let me know.

I think I let you know everything that was incorrect ;)

EDIT: As noted by Ranma_Sao, you cannot obtain a full memory dump if your page file is less than your total RAM. When the system dumps, it creates a file on the HDD with the entire contents of your RAM. The data in the dump files is very useful when troubleshooting BSODs, with the proper tools. Ranma_Sao has offered many times in the past to diagnose these, so at least be aware of this. Some BSODs prevent you from booting into Windows to make changes to the PF, so this can be important. In other cases, where Windows is booting but still generating a BSOD at a later time, you could increase the PF size to get this information.

A full memory dump is NOT needed, as already said.

Please feel free to correct me, I'll update anything that's incorrect. Don't be a jerk about it.

Sorry if you thought I was being a jerk about it.

This article is *NOT* for disabling the PF. I thought I made that clear.

Then you should remove the parts about disabling the pagefile, unless you say do NOT disable it.
 
KoolDrew said:
This is incorrect. Virtual Memory is the virtualization of memory addresses. Each process sees its own set of memory addresses; on a 32-bit system there's 4GB worth. 2GB is reserved by the NT kernel and the other 2GB is available for the process to use.

RAM + PF does not equal the amount of Virtual Memory. There is always 4GB worth of virtual memory address space for each process.

As I said above, RAM+PF is NOT the amount of Virtual Memory. Each 32-bit process has 4GB of virtual memory.
OK, I'm going to break this up a bit and try to sort it out; if I miss something, sue me. ;)

Perhaps I'm using the term incorrectly. Here's the meat behind my madness. Open Task Manager and look at the Performance tab. Under Commit Charge (K), the Limit is calculated by adding the PF size + RAM. When you run out, Windows generates an "out of VM" error. So I'm correlating RAM+PF=Limit; my assumption (based on the error) is that the Limit is the VM limit (otherwise it wouldn't be running out of it). If I have 4GB of VM, why am I running out with ~650MB total used memory? Why does Windows call this virtual memory? Where does the OS store this 4GB of VM you're talking about (file name)?

I think you're confusing the maximum possible VM with the total *available* VM (or the commit charge limit, if you like). Windows will not access more than 4GB even if you give the system 4GB RAM + a 4GB PF. It's still a 32-bit OS, and can only address 4GB, no matter how it's stored.

You are partly correct, but the pagefile is NOT the only file involved in paging. Executables and shared library data are also involved with paging.
I'm aware there are other files involved with paging; I don't think I say otherwise... I am interested in more information about it, though. Do you have any links about this?

About PF fragmentation (summing up a few of your points about min/max): I have seen NT4 and W2K machines have both performance issues related to heavily fragmented PFs and instability. The HDD would simply go nuts when the system started paging. The assumption was that it's taking longer to access the PF because it's fragmented. I have also seen systems crash with heavily fragmented PFs. Both were fixed by defragging the PF or re-generating it. The sheer number of times I have seen this indicates Windows doesn't do a very good job, and PF fragments do affect performance. I'll readily admit I haven't seen this on XP, nor have I seen heavily fragmented PFs there, but then again, I try to avoid it. My XP images have static PF sizes.

Furthermore, MS doesn't know how much RAM your system has. They make assumptions based on the "average" user. A perfect example of this is the TCP/IP stack. On W9x, NT and 2K it was set up best for dial-up connections. On XP it's "tweaked" for NICs. Why? More people use NICs to connect to the internet than before.

They make the same assumptions about RAM. Most machines have ~256MB; however, you and I likely have much more. Anyway, that's nothing specific to the PF, but something to consider when you think MS is doing it pretty well. Chances are things are set up pretty well, for *most* people.

NT needs the pagefile so much that even if you disable it, NT will create a 20MB paging file without you knowing.
Then you should remove the parts about disabling the pagefile, unless you say do NOT disable it.

I'll say it *again*: please do not discuss no-PF options in this thread (like the other thread we are currently posting in about no-PF); let's debate that elsewhere. 2/3 of your post addresses no-PF. I bring it up as an option here because it's a relevant question when the calculation is negative, but let people make their own choice. I note that in the OP. There are plenty of other threads about no-PF. I will cut the info in this post on no-PF and reply to it in the other thread.
 
Perhaps I'm using the term incorrectly. Here's the meat behind my madness. Open Task Manager and look at the Performance tab. Under Commit Charge (K), the Limit is calculated by adding the PF size + RAM. When you run out, Windows generates an "out of VM" error. So I'm correlating RAM+PF=Limit; my assumption (based on the error) is that the Limit is the VM limit (otherwise it wouldn't be running out of it). If I have 4GB of VM, why am I running out with ~650MB total used memory? Why does Windows call this virtual memory? Where does the OS store this 4GB of VM you're talking about (file name)?

I cannot blame you for misunderstanding the actual meaning of Virtual Memory, as Microsoft uses the term incorrectly in their UI. So by your reasoning, yes, that would make sense, but that is not what virtual memory is. The only place where MS seems to use the term VM correctly is in MSDN articles.

Also, if you really want to learn more about the internals of the OS, check out the book "Microsoft Windows Internals". I read the book "Inside Windows 2000", but that is just an older version. Also, your whole system does not have 4GB. Each process sees its own set of virtual memory addresses. On a 32-bit system this is 4GB worth. 2GB is used by the NT kernel, while the process uses the other 2GB.

Windows does not store virtual memory in a specific place. For more info, check these links:
http://kerneltrap.org/node/2450?PHPSESSID=1898c27db7b2dd4cc98b107619066927
http://www.csn.ul.ie/~mel/projects/vm/guide/html/understand/
http://www.winntmag.com/Articles/Index.cfm?IssueID=56&ArticleID=3686

The first two cover Linux, but virtual memory works much the same way.

I'm aware there are other files involved with paging; I don't think I say otherwise... I am interested in more information about it, though. Do you have any links about this?

I do not have a link, but you can pick up the book I mentioned at your local bookstore.

About PF fragmentation (summing up a few of your points about min/max): I have seen NT4 and W2K machines have both performance issues related to heavily fragmented PFs and instability. The HDD would simply go nuts when the system started paging. The assumption was that it's taking longer to access the PF because it's fragmented. I have also seen systems crash with heavily fragmented PFs. Both were fixed by defragging the PF or re-generating it. The sheer number of times I have seen this indicates Windows doesn't do a very good job, and PF fragments do affect performance. I'll readily admit I haven't seen this on XP, nor have I seen heavily fragmented PFs there, but then again, I try to avoid it. My XP images have static PF sizes.

The pagefile does NOT get fragmented! When your OS needs more pagefile, it will enlarge the pagefile. Now the pagefile has expanded, and the expanded portion is probably in a different area than the original PF. This is a fragment, and this is where the myth began, but that view is short-sighted, because this fragment is discarded when you reboot. Obviously, the original PF has not changed location, and it is in the exact same state on reboot as it was before it expanded.

Many myths are passed on like this because people misunderstand how it actually works. Then that person just repeats what they heard and the myth is passed on.

So, as I said before, it is best to set the initial size big enough that the PF does NOT need to be resized. You can even set the max to 4GB (the maximum). It will not make a difference, as you will not use all of it; as you already know, the pagefile is only expanded when it needs to be. This way the pagefile stays "static", but can resize when needed, which you do NOT want to stop the OS from doing. If you continue to argue that a static pagefile is beneficial, you have no idea about NT memory management.

Also, if you do set a static size and for some reason Windows needs to page more to the pagefile than it can hold, operation will probably slow down.

Furthermore, MS doesn't know how much RAM your system has. They make assumptions based on the "average" user. A perfect example of this is the TCP/IP stack. On W9x, NT and 2K it was set up best for dial-up connections. On XP it's "tweaked" for NICs. Why? More people use NICs to connect to the internet than before.

They make the same assumptions about RAM. Most machines have ~256MB; however, you and I likely have much more. Anyway, that's nothing specific to the PF, but something to consider when you think MS is doing it pretty well. Chances are things are set up pretty well, for *most* people.

Do you even know how Windows determines what to set the PF size to when it is set to system managed? Windows is very smart and will adjust the pagefile according to your usage patterns. This is why I recommend leaving it system managed. Windows does a great job at it. I am sick of seeing tweak guides saying rubbish like "if you do not tweak the size of the pagefile you could be hurting your performance." That is total rubbish.

I'll say it *again*: please do not discuss no-PF options in this thread (like the other thread we are currently posting in about no-PF); let's debate that elsewhere. 2/3 of your post addresses no-PF. I bring it up as an option here because it's a relevant question when the calculation is negative, but let people make their own choice. I note that in the OP. There are plenty of other threads about no-PF. I will cut the info in this post on no-PF and reply to it in the other thread.

If you and I both agree that you should not disable the pagefile, why do you even mention disabling the pagefile to others? You also say it may result in faster performance, which is totally false.
 
First, let me just say I'm glad we're revisiting this. MS has recently released a VERY good explanation of VM and paging. It'll be my first link. I strongly recommend everyone read it. The more I'm reading, the more I'm verifying what I posted.

KoolDrew said:
I cannot blame you for misunderstanding the actual meaning of Virtual Memory, as Microsoft uses the term incorrectly in their UI. So by your reasoning, yes, that would make sense, but that is not what virtual memory is. The only place where MS seems to use the term VM correctly is in MSDN articles. Also, if you really want to learn more about the internals of the OS, check out the book "Microsoft Windows Internals". I read the book "Inside Windows 2000", but that is just an older version. Also, your whole system does not have 4GB. Each process sees its own set of virtual memory addresses. On a 32-bit system this is 4GB worth. 2GB is used by the NT kernel, while the process uses the other 2GB.
RAM + PF does not = amount of Virtual Memory.

:confused: OK, if you want to counter what MS is saying, I think we are going to need more explanation than "Microsoft uses the term Virtual Memory wrong in their UI." I will believe MS over a "person on the internet." I will consider picking up that book, but I'm sure it's not the only source of info. MSDN has it? Link me. Otherwise, I can show that VM size = RAM+PF. Windows generates an error verifying this with "out of virtual memory" when your memory usage exceeds RAM+PF.

On to 4GB addressable. Yes, that's true; when a process is started, that address space is allocated (2GB per process, 2GB to the kernel).

Here's what MS has to say about it.
http://support.microsoft.com/default.aspx?scid=kb;en-us;555223
"In the default Windows OS configuration, 2 GB of this virtual address space are designated for each process’ private use and the other 2 GB are shared between all processes and the operating system. Normally, applications (e.g. Notepad, Word, Excel, Acrobat Reader) use only a small fraction of the 2GB of private address space. The operating system only assigns RAM page frames to virtual memory pages that are in use."
However, that doesn't mention the PF at all. Why? Because that's further down the VMM subsystem. I'll give you a diagram in a later link, but I don't want to get away from this article yet. It *does* cover what the PF is.

"Pagefile

RAM is a limited resource, whereas virtual memory is, for most practical purposes, unlimited. There can be a large number of processes each with its own 2 GB of private virtual address space. When the memory in use by all the existing processes exceeds the amount of RAM available, the operating system will move pages (4 KB pieces) of one or more virtual address spaces to the computer’s hard disk, thus freeing that RAM frame for other uses. In Windows systems, these “paged out” pages are stored in one or more files called pagefile.sys in the root of a partition. There can be one such file in each disk partition. The location and size of the page file is configured in SystemProperties, Advanced, Performance (click the Settings button)."


In the first bolded statement we see the basic function of the page file: it's a supplement to RAM. It also says paging to the disk is done to pagefile.sys. The only way more files are involved is if you have a pagefile.sys on more than one drive. Again, in another MS document:
http://support.microsoft.com/default.aspx?scid=kb;en-gb;842628
" If you look at the specification for your PC, it will say that you have 96MB or 128MB of memory fitted, possibly more. That’s physical memory chips, which are very fast. But that’s not enough memory to run all the applications you want to use at once, so Windows XP uses empty space on your hard drive as if it was more memory. It’s not as fast as real memory, but this ‘virtual memory’ lets you run more programs and it’s much cheaper. You probably have plenty of free space on your hard drive and the more memory you have (real or virtual), the faster Windows XP and your application will run."
Again, in bold, the pagefile is a supplement to RAM.
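The 4 KB paging granularity mentioned in the quoted KB article can be made concrete with some illustrative arithmetic (`pages_to_free` is a made-up helper, not a Windows API):

```python
PAGE_SIZE = 4 * 1024  # x86 pages are 4 KB, per the MS quote above

def pages_to_free(bytes_needed):
    """Whole 4 KB page frames the VMM must page out to free
    at least `bytes_needed` of RAM (ceiling division)."""
    return -(-bytes_needed // PAGE_SIZE)

# e.g. covering a 150 MB shortfall means moving ~38,400 pages to disk
print(pages_to_free(150 * 1024 * 1024))  # 38400
```

This is why paging shows up as sustained small-transfer HDD activity rather than one big sequential read.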

Windows does not store Virtual memory in a specific place. For more info check these links
http://kerneltrap.org/node/2450?PHPSESSID=1898c27db7b2dd4cc98b107619066927
http://www.csn.ul.ie/~mel/projects/vm/guide/html/understand/
http://www.winntmag.com/Articles/Index.cfm?IssueID=56&ArticleID=3686

The first 2 do cover Linux, but Virtual Memory works the same.

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dngenlib/html/msdn_virtmm.asp
"Reserved Addresses

When reserving addresses in a process, no pages of physical memory are committed, and perhaps more importantly, no space is reserved in the pagefile for backing the memory. Also, reserving a range of addresses is no guarantee that at a later time there will be physical memory available to commit to those addresses. Rather, it is simply saving a specific free address range until needed, protecting the addresses from other allocation requests. Without this type of protection, routine operations such as loading a DLL or resource could occupy specific addresses and jeopardize their availability for later use."


See figure 1 for that diagram I mentioned.

I understand the difference between the 4/2GB that's given in virtual memory; what you don't understand is the relationship between virtual memory and real memory (aka HDD and RAM). When you're talking about the 4/2GB of virtual memory, you're talking about the function higher in the VMM subsystem. Why doesn't the space get assigned? Because it's not being used, simple as that; after all, it's "virtual." When it does get used, it has to be stored somewhere, and that's RAM or HDD space, aka *real* memory.

The fact that the 4/2GB that's assigned isn't an actual file is irrelevant to paging until it's used. Then the only relevance is maximum size. If you need to expand the amount of RAM that's addressable, you need to switch the kernel to PAE mode. BTW, that is an MSDN link, and all the other links are from MS.

The pagefile does NOT get fragmented! When your OS needs more pagefile, it will enlarge the pagefile. Now the pagefile has expanded, and the expanded portion is probably in a different area than the original PF. This is a fragment, and this is where the myth began, but that view is short-sighted, because this fragment is discarded when you reboot. Obviously, the original PF has not changed location, and it is in the exact same state on reboot as it was before it expanded. Many myths are passed on like this because people misunderstand how it actually works. Then that person just repeats what they heard and the myth is passed on.

So, as I said before, it is best to set the initial size big enough that the PF does NOT need to be resized. You can even set the max to 4GB (the maximum). It will not make a difference, as you will not use all of it; as you already know, the pagefile is only expanded when it needs to be. This way the pagefile stays "static", but can resize when needed, which you do NOT want to stop the OS from doing. If you continue to argue that a static pagefile is beneficial, you have no idea about NT memory management. Also, if you do set a static size and for some reason Windows needs to page more to the pagefile than it can hold, operation will probably slow down.

...And you're one of the ones perpetuating that myth, no offense. The PF does indeed get fragmented; as proof, I set up a test system. I created a static 512-512 PF, opened O&O Defrag to see the PF's location, and took screenshots. Then I increased the PF size to 1024-1024, again examining the location on the HDD with O&O. It created an additional ~512MB in another spot on the HDD, just like *any* file. Again, took a screenshot. Then I rebooted, twice, to make sure it didn't get optimized, examined its location with O&O, and it never changed. It stayed a fragmented file. If you *really* want I can post the pics, but I don't think that's necessary, as none of your links validate the claim that the PF doesn't get fragmented. I have my experience with NT4/W2K seeing systems crawl or crash because of fragmented PFs. I see no reason why XP is any different, since it still fragments the PF. It all has to do with access time. Some programs don't like not being able to access the data quickly enough.

If you and me both agree that you should not disable the pagefile why do you even mention disabling the pagefile to others? You also say it may result in faster performance which is totally false.
I don't agree. I just don't think it's appropriate to discuss in detail when this article is for people running a PF. Last time I'm saying this: I provided a formula to calculate PF size. That formula can produce negative results (a negative PF size), indicating why no-PF is viable, and to run it *YMMV* as stated in the original post. It's discussed because of the formula's possible results. I plainly state to set up a small PF if the value is negative and you don't want to disable the PF.

LAST REQUEST: ENOUGH OF THE NO-PF DISCUSSIONS IN THIS THREAD. Each of your posts mentions it, yet I clearly state it's YMMV, and we are already discussing no-PF in the other threads as I type this!!! If anyone needs help finding those threads, PM me and I will provide the links.

Re-Cap of points you have brought up and the status:
Kernel Dump: Previously discussed, you cannot obtain a full dump with page file < RAM.
PF is not read contiguously: Misleading, but true. You don't read the whole PF, but this is irrelevant if the data you need to read is spanned across fragmented areas of the PF. What it does mean is that the entire PF is not read every time it's accessed, unlike most files.
RAM+PF != VM Size: False. This can be proven by adjusting your PF size and monitoring Task Manager. If your memory usage (commit charge total) exceeds the limit (which is PF+RAM), you will generate an "out of virtual memory" error. Adjust your PF size and watch the commit charge limit; it will reflect that change.
PF is not the only paging file: Misleading. You can have multiple PFs (on separate drives), and there are EXEs and DLLs used in paging, but there is no other place where paged data goes. No links have been provided to show otherwise. This is simply a claim at this point. There are MS links stating pagefile.sys is the only page file.
VM=4/2GB: Misleading, but it's true; however, you're still limited to your *real* memory, which again consists of PF+RAM.
PF does not get fragmented: False. The PF does get fragmented. You can repeat the test I showed, or I can show screen shots.
Leave it to "system managed": YMMV. This will create fragments over time if the minimum PF size needs to be expanded. Repeat this over time, and the PF will become very fragmented, leading to poor performance.
NT needs a backing store on disk: Misleading. Maybe NT, but not XP. NT and 2K required some PF; XP doesn't. The MSDN quote verifies this.
NT makes a PF for you anyway: Misleading. NT/2K does; XP doesn't. This article is for XP. What happens in NT/2K is irrelevant to this article.
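The RAM+PF commit-limit claim from the recap above can be expressed as a toy model. This is a simplification (the real figure also reserves some kernel overhead), and the function names are invented for illustration:

```python
def commit_limit_mb(ram_mb, pagefile_mb):
    """Simplified model of the Task Manager commit limit:
    physical RAM plus current pagefile size."""
    return ram_mb + pagefile_mb

def out_of_virtual_memory(commit_total_mb, ram_mb, pagefile_mb):
    """True once commit charge exceeds the limit, which is when
    Windows raises its 'out of virtual memory' warning."""
    return commit_total_mb > commit_limit_mb(ram_mb, pagefile_mb)

print(commit_limit_mb(512, 300))             # 812
print(out_of_virtual_memory(900, 512, 300))  # True
print(out_of_virtual_memory(700, 512, 300))  # False
```

This is the distinction both posters are circling: the 4GB-per-process figure is address space, while this RAM+PF figure is how much of it can actually be committed at once.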

I think that's a majority of the points brought up... Now on to those 2 other threads where we *are* discussing the no-PF stuff. It'll take a while for those replies too. :)
 
First of all, many of the things you are quoting and putting in bold are explaining the pagefile. On this part you are correct, and I am not arguing with what the pagefile is. You seem to understand what the pagefile is used for.

What you do not understand is that Virtual Memory is not PF+RAM. This is easily confused because MS uses the term incorrectly in its UI, but if you really want to learn more, check out the links I posted. Also read this thread:
http://forums.anandtech.com/messageview.aspx?catid=34&threadid=1500375&enterthread=y

The initial poster there thought the pagefile was the same as virtual memory, so maybe some of your questions have already been answered.

Also, anything that has an original file can be paged to that original file. If it does not, it is paged to the pagefile.

...And you're one of the ones perpetuating that myth, no offense. The PF does indeed get fragmented; as proof, I set up a test system. I created a static 512-512 PF, opened O&O Defrag to see the PF's location, and took screenshots. Then I increased the PF size to 1024-1024, again examining the location on the HDD with O&O. It created an additional ~512MB in another spot on the HDD, just like *any* file. Again, took a screenshot. Then I rebooted, twice, to make sure it didn't get optimized, examined its location with O&O, and it never changed. It stayed a fragmented file. If you *really* want I can post the pics, but I don't think that's necessary, as none of your links validate the claim that the PF doesn't get fragmented. I have my experience with NT4/W2K seeing systems crawl or crash because of fragmented PFs. I see no reason why XP is any different, since it still fragments the PF. It all has to do with access time. Some programs don't like not being able to access the data quickly enough.

...and what OS did you use to test this? The fact of the matter is the pagefile only needs to be defragmented once, because if the pagefile needs expanding, it will revert back to its initial size on reboot. So it cannot get fragmented once it is already contiguous, unless you increase the initial size of the pagefile later.

Even then, a fragmented pagefile does not cause degraded performance. The ONLY way it will cause degraded performance is if it fragments other files around it. This is why you set the initial size high enough that it will not expand, but it can still expand if need be.

Kernel Dump: Previously discussed; you cannot obtain a full dump with a page file smaller than RAM.

Full Dump is useless. All a full dump gives you is details on every process running. This is rarely ever needed.

PF is not read contiguously: Misleading, but true. You don't read the whole PF, but that's irrelevant if the data you need to read is spread across fragmented areas of the PF. What it does mean is that the entire PF is not read every time it's accessed, unlike most files.

I have already explained this many times. Refer to the links I posted and you will understand the ONLY way a fragmented pagefile will degrade performance is if it fragments other files around it.

RAM+PF != VM Size: False. This can be proven by adjusting your PF size and monitoring Task Manager. If your memory usage (commit charge total) exceeds the limit (which is PF+RAM), you will generate an "out of virtual memory" error. Adjust your PF size and watch the commit charge limit; it will change by the same amount.
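To put that claim in concrete terms, here is a toy sketch of the behavior described above (all numbers are made up for illustration, and real Windows commit accounting has more moving parts than this):

```python
# Hypothetical sketch: the commit limit Task Manager shows behaves
# roughly like RAM + pagefile, and exceeding it is what triggers the
# "out of virtual memory" error. Illustrative only.

def commit_limit_mb(ram_mb: int, pagefile_mb: int) -> int:
    """Approximate the commit charge limit as RAM plus pagefile."""
    return ram_mb + pagefile_mb

def try_commit(total_commit_mb: int, ram_mb: int, pagefile_mb: int) -> str:
    """Report what would happen for a given total commit charge."""
    if total_commit_mb > commit_limit_mb(ram_mb, pagefile_mb):
        return "out of virtual memory"
    return "ok"

# Growing the pagefile raises the limit by the same amount.
print(commit_limit_mb(512, 768))    # 1280
print(try_commit(1400, 512, 768))   # out of virtual memory
print(try_commit(1400, 512, 1024))  # ok
```

Note this models only the observable Task Manager behavior, not the virtual address space itself, which is the distinction being argued in this thread.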

And this proves you do not even know the basics behind virtual memory. Each process has 4GB of virtual memory, yet RAM+PF equals total VM? That is false. I also already told you that MS uses the term VM wrong in their UI, but you completely ignore me. I know the internals of the OS. I think I know what I am talking about.

PF is not the only paging file: Misleading. You can have multiple PFs (on separate drives), and there are EXEs and DLLs used in paging, but no other place where paged data goes. No links have been provided to show otherwise. This is simply a claim at this point. There are MS links stating pagefile.sys is the only page file.
Again incorrect, as already stated. Anything that has an original file can be paged back to it, but if it does not it must be paged to the pagefile. Also, yes, pagefile.sys is the only pagefile, but in essence every EXE and DLL your system is using acts as a pagefile.

VM=4/2GB: Misleading, but it's true. However, you're still limited by your *real* memory, which again consists of PF+RAM.

Read the links I posted, or read the book, and you will see this is also wrong.

PF does not get fragmented: False. The PF does get fragmented. You can repeat the test I showed, or I can show screen shots.

Read what I said above about that.

Leave it to "system managed": YMMV. This will create fragments over time if the minimum PF size needs to be expanded. Repeat this over time and the PF will become very fragmented, leading to poor performance.

Again, read what I already said about a fragmented pagefile.

NT needs a backing store on disk: Misleading. Maybe NT, but not XP. NT and 2K required some PF; XP doesn't. The MSDN quote verifies this.

Yet XP in most cases creates a small (~20MB) pagefile.

NT makes a PF for you anyways: Misleading. NT/2K does, XP doesn't. This article is for XP; what happens in NT/2K is irrelevant to this article.

Incorrect. Why don't you just stop arguing with me and read the book? You will gain from it, as I can tell you have no idea how virtual memory works. You have a basic idea behind the pagefile, but are wrong in many cases.

If you want to continue to argue with me about this, at least read the sources I have given, such as that book. Once you read that book you will have nothing to argue about, and you will understand how it works.
 
I spent a couple of hours researching and replying with documented links from MS, and you're asking me to "read a book" and "see this 7 page thread," w/o quoting one relevant piece of info. Oh well, at least I'm learning something here.

Here's a quick bit for you to chew on. Since virtual memory is *only* the 2/4GB virtually assigned, what happens when there is *real* data loaded into the VM? Where is it stored?

When you understand that, you will understand the *real* limits to VM, not the *virtual* ones. I'll have a proper reply coming shortly...
 
What you do not understand is that Virtual Memory is not PF+RAM. This is easily confused because MS uses the term incorrectly in its UI, but if you really want to learn more, check out the links I posted. Also read this thread.
http://forums.anandtech.com/message...5&enterthread=y

OK, perhaps you can highlight the relevant info from that thread... I don't mind reading and all, but I prefer trusted sources. Link to those, and let's go from there. Posting that thread is about as useful as saying "Google it."

Look at it this way. Virtual memory is just that: virtual. The 2/4GB that gets assigned isn't usable without real memory. You can't use all 2GB unless it has a place to be stored, namely RAM and the PF. This is why, when your usage exceeds this calculation, you get an "out of virtual memory" error. It's the difference between allocated and addressable. You can allocate all day long, but unless you can address it, it's kinda pointless. I'm reminded of the Seinfeld episode with the car reservation. Making a reservation isn't important; holding it is.

"SEINFELD: I don't understand. I made a reservation. Do you have my reservation?

RENTAL AGENT: Yes we do. Unfortunately we ran out of cars.

SEINFELD: But the reservation keeps the car here. That's why you have the reservation.

RENTAL AGENT: I know why we have reservations.

SEINFELD: I don't think you do. If you did I'd have a car."


Virtual Memory makes the reservation, but RAM + PF holds it. I can't believe a Seinfeld quote is relevant here... LMAO.
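The reservation analogy above can be put into a toy model (class names and numbers are hypothetical, purely for illustration): reserving address space is free bookkeeping, while committing needs real backing store (RAM + PF) to "hold" it.

```python
# Toy model of reserve vs. commit, in the spirit of the car-rental
# analogy. Illustrative only; not how the NT memory manager is coded.

class AddressSpace:
    def __init__(self, reservable_mb: int, backing_mb: int):
        self.reservable_mb = reservable_mb  # e.g. ~2048 for a 32-bit process
        self.backing_mb = backing_mb        # RAM + pagefile
        self.reserved_mb = 0
        self.committed_mb = 0

    def reserve(self, mb: int) -> bool:
        """Make the 'reservation': just bookkeeping in the address space."""
        if self.reserved_mb + mb > self.reservable_mb:
            return False
        self.reserved_mb += mb
        return True

    def commit(self, mb: int) -> bool:
        """'Hold' the reservation: this needs real backing store."""
        if self.committed_mb + mb > self.backing_mb:
            return False  # the "out of virtual memory" case
        self.committed_mb += mb
        return True

proc = AddressSpace(reservable_mb=2048, backing_mb=1024)
print(proc.reserve(2000))  # True: reserving ~2GB costs nothing physical
print(proc.commit(1500))   # False: only 1GB of RAM+PF to hold it
```

Reserving nearly the whole 2GB succeeds, but committing more than RAM+PF fails, which is the allocated-versus-addressable distinction being argued here.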

...and what OS did you use to test this? The fact of the matter is the pagefile only needs to be defragmented once because if the pagefile needs expanding it will revert back to it's initial size on reboot. So it cannot get fragmented once it is already contiguous unless you increase the initial size of the pagefile later.
I tested using WXPSP1. Since you don't believe me I'll post the damned screen shots.


Static 512-512 PF, unfragmented.
PF512-512.JPG



Static 1024-1024 PF, fragmented, pre-reboot. (Not that I thought rebooting would help.)
PF1024-1024pre-reboot.JPG



Static 1024-1024 PF, still fragmented after 2 reboots.
PF1024-1024post-reboot.JPG



Can we put the "PF doesn't get fragmented" argument to bed, or do I need more proof? I saw the same thing when I supported thousands of NT/2K installs at my previous job, so I'm not sure how the OS matters, but anyways. The test was on XP, and the article is for XP, which is why it's titled "...XP page file."

"Even then, a fragmented pagefile does not cause degraded performance. The ONLY way it will cause degraded performance is if it fragments other files around it. This is why you set the initial size high enough that it will not expand, but can expand if need be."
Now you're playing moving target, but that's OK, I'll answer this too. Files will inherently surround the PF when you defragment; you can't reserve this space. When the PF does need to expand, it'll be fragmented just like I showed above in the pics. Do this multiple times over time and it can cause massive performance hits. It doesn't revert back in size; proof is in the above screenies.

Each process has 4GB of virtual memory, yet RAM+PF equals total VM? That is false. I also already told you that MS uses the term VM wrong in their UI, but you completely ignore me. I know the internals of the OS. I think I know what I am talking about.
WTF? When did my system get 4GB of addressable memory? Did the memory fairy bring it with Windows? No. It's allocated, not addressable, unless you have real memory as its backing store. When you exceed the space provided in the backing store (aka RAM+PF) you generate an "out of VM" error. Are you out of the 4GB? No. Are you out of addressable memory? Yes.

Anything that has an original file can be paged back to it, but if it does not it must be paged to the pagefile. Also, yes, pagefile.sys is the only pagefile, but in essence every EXE and DLL your system is using acts as a pagefile.
You can say it until you're blue in the face, but until you show some evidence this argument is weak. Read the MS link talking about Reserved Addresses; I bolded the important part: "no space is reserved in the pagefile for backing the memory."

You're arguing against MS's articles with your words alone. That doesn't carry weight in my book at all.

Quote the book, quote the threads you're linking to, provide some info beyond "I said it already." You need to show some sourcing. Furthermore, your "I know what I'm talking about" comments only show your arrogance, not your experience.
 
I did not read your reply; all I did was look at the screenshots, and your logic makes no sense at all. Of course it would still be fragmented if it is static. I did say the pagefile will revert back to its initial size on reboot. The initial size in this case is the same as the max, so those screenshots proved nothing. Make a pagefile with a really small initial size and a high max (1024MB or so). Now run a game or something and wait for the PF to grow. Go into O&O and you will see it is fragmented. Then reboot, and you will see it is not fragmented and it will be whatever the initial size was.

I am not going to waste my time anymore telling you what virtual memory is. You clearly do not understand one bit of virtual memory, even though I have already explained it and given you threads to look at and a book, because every post you make just shows you understand very little.

Also, you keep referring to MS links, and I already told you Microsoft uses the term incorrectly in their UI anyway. I also hope you know that where it says PF usage in Task Manager, that is not the real amount of PF being used. I bet you are going to argue this too?
 
You ought to tell the fine folks writing the OS that they are wrong; I'm sure they would like to know... :rolleyes:

Next time, maybe you should advertise the fact you won't listen to anyone, including those with obviously more knowledge than yourself like MS, since you know better. I wouldn't have spent so much time actually RESEARCHING your arrogant comments.

That's pure comedy gold... You know more about VM than MS, LMAO. You'll go far with that attitude.

BTW, thanks for mucking up my stickied thread.
 
If you would read instead of acting like a know-it-all, you would know that MS uses the term incorrectly in their UI, as I already told you. I am sick of arguing with someone as stubborn as you. If you actually want to learn how VM works, since you have NO understanding of it at all, go read the book I suggested.

I also can tell you did not check out any of the links I provided. If you actually took the time to read them instead of being a jerk about it, you would know how stupid you sound right now.

READ THE LINKS I PROVIDED BEFORE REPLYING!

If you do that I know you will not argue with me anymore as you will realize I am correct and you are just making an idiot out of yourself.

BTW, thanks for mucking up my stickied thread.

WTF? When I posted in this topic I listed all of the things that were incorrect and should be fixed. It is not my fault you are too stubborn to even listen. I was trying to help you and the others who are reading this topic so they do not have a misunderstanding of how VM works like everyone else.

You'll go far with that attitude.

LMFAO. You are the one too stubborn to listen or go to the links I provided and actually do some reading.

I highly suggest checking out this thread.
http://forums.anandtech.com/messageview.aspx?catid=34&threadid=1500375&enterthread=y

You only really have to read the first 3 pages, as the rest is junk, but that guy was saying basically the same stuff you are right now. He was also referring to MS links, etc.
 
KoolDrew said:
If you would read instead of acting like a know-it-all, you would know that MS uses the term incorrectly in their UI, as I already told you. I am sick of arguing with someone as stubborn as you. If you actually want to learn how VM works, since you have NO understanding of it at all, go read the book I suggested.

I also can tell you did not check out any of the links I provided. If you actually took the time to read them instead of being a jerk about it, you would know how stupid you sound right now.
I'm done with fucking up this stickied thread with a pissing contest. If you want to argue with me, reply to the PM.

Responded to via PM. I have asked KoolDrew for valid, sourcable (books are sourcable if you *really* need to quote a book), linked information. I will post the results if he supplies any.
 
Phoenix86 said:
I'm done with fucking up this stickied thread with a pissing contest. If you want to argue with me, reply to the PM.

Responded to via PM. I have asked KoolDrew for valid, sourcable (books are sourcable if you *really* need to quote a book), linked information. I will post the results if he supplies any.

We had better just ignore him at this point. We've had one thread closed because of his ways; I would hate to see your sticky meet the same fate. He's spouting things he read on other boards that sound good, but no matter how much you ask for proof, he doesn't have it. It's never going to end, because he's never going to find proof of something incorrect, nor will he ever just drop it and admit he has more to learn. No one would think less of him for this, being that he's only 15, but some people like to argue on here even when wrong. Oh well... such is life... I'm just hoping he doesn't cause yet another of your good threads to be closed.
 
The ironic thing is that the truth here is somewhere in-between what both are claiming, but with the long and drawn out semantics of the argument I simply don't have the time currently to point out where each has a basis in fact. Suffice to say that while total virtual memory is not simply RAM and PF, it is quite acceptable to break it down to just that (since addressable space is mostly a theoretical number on the majority of machines). Additionally, while I often refer to the similarities between *nix and Windows as far as internals and "how it works" (because they are very similar), they are not the same.

I'd love to go into detail, but work and a long residence move (1300 miles) have priority right now. Much like Ranma_Sao said earlier, when life slows down a bit more detail can be gotten into.
 
I am not going by what other people on other boards tell me. I quoted a book, and that is where most of the info was from. I have also provided proof. Also, yes, I do have more to learn, even in this subject, but as I can see from both of your posts, you do not even understand the basics of virtual memory, and that fact remains.

Actually, djnes, you have done nothing but talk, just going by what Phoenix86 says. So one wonders what you actually know about this subject.

but some people like to argue on here even when wrong

...and you two are a perfect example. I highly doubt you even read the sources I provided, djnes, yet you are the one talking crap right now. So I actually think you are the one just going by what other people say on boards, and the person you are following happens to be Phoenix86.

I already told you, if you think that Mark Russinovich is wrong, by all means email him. Just about everything I said happens to be in his book, so if you say I am wrong, you are saying he is wrong too. Email him asking if RAM+PF=VM, etc., and see what he has to say about it. I guarantee he will say the same thing I have been saying, but of course I cannot get my hopes up. I bet you will not email him and will still sit there saying I am wrong, which in fact is pretty pathetic.

I'm just hoping he doesn't cause yet another of your good threads to be closed.

If this thread were unstickied, that would be good for the community, as they would not be misinformed by this inaccurate and false information.
 
DO NOT CRAP IN THE STICKIES

Seriously, the stickies are here because they provide information to all forumers quickly before they have to resort to searching. This is no place to have a pissing contest. Any further posts that do not contribute to the subject of the sticky will be deleted. I'll leave what's in place because somewhere between the pissing, a discussion has been sparked, which is great. Keep it clean though, folks.
 
Well... OK, this thread is already cluttered and I want to re-write the article with the new links I have researched from MS. I will try to base all my conclusions off of trusted sources (MS or otherwise, but it'll probably be MS only in my quotes) going step-by-step.

When the new thread comes out I'll try to stem debates in that thread, and have a linked topic where people can debate the finer points. That will keep the sticky clean, and give place to compare e-wangs (within the rules, of course. ;) )

However, today I won't likely be responding much at all, personal stuff to deal with as well... At any rate look for a re-write soon and don't worry about "debating" in this thread. Have at it. Until then I'm going to stay out of it a bit, if you have a direct Q for me I'll try to answer it.

GreNME, shoot me a line when you get settled, we still gotta catch a beer. :)
 
Yes a rewrite is a fine idea (not because of incorrect info). I do agree a cluttered sticky is bad, but my suggestions still stand.

First of all, you and I can both agree there is no benefit to disabling the pagefile. There is also no benefit to a fixed pagefile. It is best to set the minimum high enough that it will not resize, but still can if it needs to. This gives you the same effect you speak of with a fixed pagefile, but leaves you with that "safety net."

Another option that should at least be mentioned is leaving it System Managed.

You should also mention that if you have enough RAM, pagefile placement is pointless. The pagefile most likely will not be accessed often, and the pagefile is not the only file involved with paging, so in essence all of them are pagefiles and you cannot find the best spot for all of them.

I think we can all agree on my suggestions?
 
KoolDrew said:
First of all you and me can both agree there is no benefit to disabling the pagefile.
Ever tried loading a large map in Battlefield 1942 (El Alamein) with and without a pagefile?
Since the game loads the map into memory, it does load quicker with no pagefile. Though I have no proof other than manually timing it, which has been discussed to be non-error-proof.
 
Fark_Maniac said:
Ever tried loading a large map in Battlefield 1942 (El Alamein) with and without a pagefile?
Since the game loads the map into memory, it does load quicker with no pagefile. Though I have no proof other than manually timing it, which has been discussed to be non-error-proof.

That's why I ran without a pagefile for years, until I started playing HL2. I definitely saw a performance difference in terms of loading game maps. I wouldn't say anything about my FPS being higher or anything like that; I'm strictly talking load times. I was part of that infamous thread where we argued and then discussed possible ways of checking this, and neither I, Phoenix86, nor GreNME have found a way to accurately test.
 
djnes said:
I was part of that infamous thread where we argued, and then discussed possible ways of checking this, and neither me, Phoenix86, or GreNME have found a way to accurately test.
I'm not giving up on that one. I think there are ways to measure both load times and monitor i/o usage on pagefile.sys.

I really think pagefile.sys will become a relic if RAM continues to exceed memory requirements. In some cases, sure, it's necessary, but in all?

From the MS links in my previous post:
"RAM is a limited resource((assumes it's not enough)), whereas virtual memory is, for most practical purposes, unlimited."
"If you look at the specification for your PC, it will say that you have 96MB or 128MB of memory fitted, possibly more. That’s physical memory chips, which are very fast. But that’s not enough memory to run all the applications you want to use at once..."

After all, how long will these statements hold true for the "average" consumer? They certainly don't hold true for me and many others today.

Something to consider...
 
Oh, I already know of programs that can monitor usage, just none that record them. I've looked at code to try to implement, including the possibility of some debuggers, but nothing that seems overly promising yet.

The file named pagefile.sys might go the way of the dodo, but the paging that takes place will likely not unless there is a drastic rewrite to memory management for modern operating systems. Not saying that it couldn't happen, but that I don't see it happening in the foreseeable future.
 
GreNME said:
Oh, I already know of programs that can monitor usage, just none that record them. I've looked at code to try to implement, including the possibility of some debuggers, but nothing that seems overly promising yet.

The file named pagefile.sys might go the way of the dodo, but the paging that takes place will likely not unless there is a drastic rewrite to memory management for modern operating systems. Not saying that it couldn't happen, but that I don't see it happening in the foreseeable future.
That simple? If so, we should be able to measure data going between PF <--> RAM. Then the only claim left for no-PF testing is load times.

Yes, pagefile.sys, not paging. Could still be necessary in the future, but with the direction things are going... RAM > usage on many-many machines.

The question is what's better, more system cache with PF, or less system cache and no PF? I think that all depends on how much I/O the PF gets to make room for system cache.
 
I don't have anything constructive to add at this point, but if we can find a way of testing, count me in as a tester. I know we've been battling this one for quite some time.
 
Phoenix, until 64-bit boxen rule the world and people write programs to use those 64-bit pointers... and even then, VMM will still be required.
 
I know. I'm not talking about axing VMM, or paging, just tweaking/removing the PF.

Why until 64bit? Isn't 4GB enough for most people now or is it for another reason?
 
What about creating one small partition for your page file and formatting it with large cluster sizes? Would formatting it with the largest cluster size speed up access time for the page file? Because, after all, disk space utilization isn't important if you have plenty of space on your HDD and create one small partition for the page file.
 
Super Mario said:
What about creating one small partition for your page file and formatting it with large cluster sizes? Would formatting it with the largest cluster size speed up access time for the page file? Because, after all, disk space utilization isn't important if you have plenty of space on your HDD and create one small partition for the page file.

This would not accomplish much because if you're creating a small partition on your only HD which is also running the OS, you could prevent PF fragmentation but each time you accessed your PF for reads or writes the physical location of this partition can still be different than where you're currently reading/writing files for your OS or opened programs like Microsoft Excel or whatever. You could very well end up increasing the amount your HD head has to move by doing this.

If you have a whole separate HD on a different channel dedicated to the PF, this would theoretically speed up your system because you could independently access OS and program files and at the same time access the PF on the separate channel without having to move the HD heads as much (the 2 heads can move independently and simultaneously access their respective drives). Whether the performance gain is enough to warrant a separate drive is doubtful to me.
 
This would not accomplish much because if you're creating a small partition on your only HD which is also running the OS, you could prevent PF fragmentation but each time you accessed your PF for reads or writes the physical location of this partition can still be different than where you're currently reading/writing files for your OS or opened programs like Microsoft Excel or whatever. You could very well end up increasing the amount your HD head has to move by doing this.

If you have a whole separate HD on a different channel dedicated to the PF, this would theoretically speed up your system because you could independently access OS and program files and at the same time access the PF on the separate channel without having to move the HD heads as much (the 2 heads can move independently and simultaneously access their respective drives). Whether the performance gain is enough to warrant a separate drive is doubtful to me.

So are you saying that putting your page file on a separate partition on the same drive as the OS is a bad idea? Or doesn't it matter? What about putting your page file on a separate physical HDD, but also including all your games on the same HDD in a different partition.
 
Putting the page file on a separate partition on the same drive is logically equivalent to a permanent paging file, one that never changes size, on the main partition. In fact, all Windows page files are semi-permanent. If you often go over the minimum page file size you've set, then the page file will get fragmented. If there is a separate partition for it, this will not happen, but you lose the advantage of having a semi-permanent page file: you have to allocate all the disk space beforehand. Thus it is just as effective to set the minimum pagefile size to the intended size of your second partition and then defragment (preferably to the outer cylinders). This has the same effect as putting it on a second partition, but you don't have to worry about it going over the size of the partition; Windows will just increase the file size (making a new fragment) when this happens.
 
GreNME said:
Oh, I already know of programs that can monitor usage, just none that record them.

Why can't you use PerfMon?
 
Phoenix86 said:
When the system is loading data into memory, it will take data from RAM, and save it to the PF.

In the original post, I can't make any sense of the above statement.

When the system is loading data into memory from where? The disk? What data is it taking from RAM and saving to the PF? Why?
 
mikeblas said:
In the original post, I can't make any sense of the above statement.

When the system is loading data into memory from where? The disk? What data is it taking from RAM and saving to the PF? Why?
That could be better stated as: When a process loads more data than is available in RAM, unused data from RAM will page to disk (pagefile.sys) freeing up RAM for the requesting process. I'm just describing paging to disk. I wrote that whole thing out as a response to someone's post. ;)

I edited to OP to make it more clear. Make sense now?


 
Phoenix86 said:
That could be better stated as: When a process loads more data than is available in RAM, unused data from RAM will page to disk (pagefile.sys) freeing up RAM for the requesting process. I'm just describing paging to disk. I wrote that whole thing out as a response to someone's post. ;)

I edited to OP to make it more clear. Make sense now?

That's better; I can understand what you're trying to say now. It could still use some work, though, because there are both clarity and correctness issues.

Paging isn't caused by reading data from disk. The demand for memory is what causes paging.

"When a process loads more data than is available in RAM" doesn't make sense. How much data is available in RAM is irrelevant. The amount of memory that's free is what matters. If you're trying to load more data than you have free space, you'll need to free up some space to land that data.

All this is presuming that the data read from disk is going into a contiguously allocated spot and being read instead of mapped.

"Unused data" doesn't make much sense. Paging happens based on how recently data was used. The data is still used and useful; it just hasn't been touched recently. The least recently used data should be swapped out.

The data isn't always paged to pagefile.sys. If the sections are read-only, then the memory might simply be freed.
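The two points above, least-recently-used eviction and read-only pages being dropped rather than written to pagefile.sys, can be sketched in a toy pager. This is purely illustrative (the class and its policy are invented for this example, not how the NT memory manager is actually implemented):

```python
from collections import OrderedDict

# Toy pager: eviction picks the least recently used page, and only
# dirty pages need to go to pagefile.sys; clean (file-backed or
# read-only) pages are simply discarded and re-read from their file.

class ToyPager:
    def __init__(self, ram_frames: int):
        self.ram_frames = ram_frames
        self.ram = OrderedDict()  # page -> dirty flag, kept in LRU order
        self.pagefile = set()     # pages written out to pagefile.sys

    def touch(self, page: str, dirty: bool = False):
        if page in self.ram:
            # Re-touching moves the page to most-recently-used.
            dirty = dirty or self.ram.pop(page)
        elif len(self.ram) >= self.ram_frames:
            victim, victim_dirty = self.ram.popitem(last=False)  # LRU victim
            if victim_dirty:
                self.pagefile.add(victim)  # modified data needs backing
            # Clean victims are just dropped: their file backs them.
        self.ram[page] = dirty

pager = ToyPager(ram_frames=2)
pager.touch("code", dirty=False)  # clean, e.g. a mapped EXE section
pager.touch("heap", dirty=True)
pager.touch("stack", dirty=True)  # evicts "code": clean, just dropped
print(sorted(pager.pagefile))     # []
pager.touch("data", dirty=True)   # evicts "heap": dirty, paged out
print(sorted(pager.pagefile))     # ['heap']
```

The first eviction writes nothing to the pagefile because the victim was clean; the second one does, because the victim held modified data with nowhere else to go.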
 
Unused doesn't mean unnecessary, though "less recently used" sounds better. Again I wrote that off the cuff. If anything else is unclear lemme know. :)

 
I'm done with fucking up this stickied thread with a pissing contest. If you want to argue with me, reply to the PM.

WTF? That was THE most interesting thread I've ever read. You made a bunch of comments about a lot of things. After being debated you THEN provided links and admitted you had done further research (a good thing), but you did it AFTER being questioned by another.

Am I incorrect, or is this supposed to be info for others? :confused: You're not the FINAL WORD on anything! You got questioned and answered. If YOU are confident in YOUR points, and unless you think WE are all fuckin' morons who can't think for ourselves, you should stop responding (confident in the points you have made).

He (KoolDrew) started as a smart ass and you "became" one later. Pardon me for saying, but you spoke in big fucking circles, not REALLY saying much of anything. Certainly nothing I haven't heard before.

And it takes some nuts acting as if MS is the final word. They made the system, so they know all? Do you know how many lines of code make up XP? Do you? Because most every programmer at MS does NOT, and admits it. Hence the endless updates. XP had (and has, just give the hackers time) such huge holes in it, it is almost inconceivable how a billion dollar company could release such a corrupted load of crap :mad:

There is now a BILLION dollar industry (anti-virus) created solely on the shortcomings of MS.
They would not exist were it not for MS and the crap they sell us.


Thank you for the excellent thread; next time please don't fuck it up by getting your panties bunched up when someone disagrees with you.
 
JL, I think you're shedding more heat than light with the MS quality stuff. At best, I don't see how it's relevant. But I'll second your disappointment. I'm not sure how this thread got stickied when it's somewhere between confusing and inaccurate:

When you don't have enough RAM, the memory is sent to the PF.

Well, memory isn't sent to the PF. Data is. And it isn't always sent to the PF -- pageable data can be backed by files other than the PF.

Virtual Memory's (VM) *real* size is limited to RAM+PF.

Since files other than the page file can back storage, I don't think this is accurate, either.

When a process loads more data than is available in RAM,

A process doesn't need to "load" more data, whatever that actually means. It just needs to commit more memory than is physically available. Confusing the use of "data" and "memory" twice in the first paragraph gives the user a shaky foundation for the rest of the piece.
 
JL_Audio_User,

I'm far from the final source of information.

There are lots of threads where the use of the PF is debated; a sticky thread is not the correct place to argue unless something needs correcting. Even then, if it's going to get involved, it should be in another thread (which I'd be happy to link to from this one).

Why? People coming to the sticky will leave confused, and that's not the point of a sticky. Heck I almost think stickies should be locked after a few days just because of this.

If you have questions, and feel up for lots of reading, post a new thread or jump into one of the many previous ones.

mikeblas, it's discussing finer details like that which confuses people. You are really splitting hairs with the terms here.
Well, memory isn't sent to the PF. Data is. And it isn't always sent to the PF -- pageable data can be backed by files other than the PF.
OK, "memory" isn't sent, "data" is; and where is that data stored? In memory. Did you understand it when you read it? Obviously so. Could it be better worded? Apparently so. Are you helping people by posting stuff like this in the thread? Nope.

Anyways, since it's here I'll respond (again, since this next one is a repeat question).
Since files other than the page file can back storage, I don't think this is accurate, either.
With regard to the PF's size it's VERY correct. Your commit charge cannot exceed PF+RAM without generating an "out of memory" error message. You may be correct that there is more backing store with executable files and what-not, but that's not related to PF sizing, is it? Are you saying that I can modify the limit of VM by changing a different value? From my understanding the exe backing is limited to the actual files and their sizes you have loaded, not a system setting. So again, while you may be correct, it's irrelevant to the PF's size, unless I'm missing something. Again, you're splitting pointless hairs. This is not a "Windows Virtual Memory Explained Top-Down" article.

A process doesn't need to "load" more data, whatever that actually means. It just needs to commit more memory than is physically available.
Found another hair to split, I see... BTW, you don't commit "memory," you commit "data" to memory.

Confusing the use of "data" and "memory" twice in the first paragraph gives the user a shaky foundation for the rest of the piece.
Well, I'd say most people confuse them so much that people still get what you're saying, like above. I understand what you mean when you say "commit more memory" -- I know you mean "commit more data to memory" -- but somehow when I say "memory is sent to the PF" you don't understand "data from memory is sent to the PF"?

Confusing the use of "data" and "memory" is common. Also, confusing the use of the term while blasting another's misuse of them is quite amusing.

Anyways, PM me or start a separate thread if you think it's going to be a long discussion. I think that posts that end in basically discrediting the OP fall into that category... IMO.

odoe said:
DO NOT CRAP IN THE STICKIES

Seriously, the stickies are here because they provide information to all forumers quickly before they have to resort to searching.

edit: Corrected terms in OP. Changed sentences now read as; "When you don't have enough RAM, the data is sent to the PF. There is a memory subsystem called Virtual Memory Management (VMM). Virtual Memory's (VM) *real* size is limited to RAM+PF. When processes load more data (commit charge) than is available in RAM, less recently used data from RAM will page to disk (pagefile.sys) freeing up RAM for the requesting process."

 
Phoenix86 said:
Again, you're splitting pointless hairs. This is not a "Windows Virtual Memory Explained Top-Down" article.

Then perhaps you should remove the attempts at explaining how virtual memory works. Parts of them are between oversimplified and inaccurate.

Phoenix86 said:
Did you understand it when you read it? Obviously so.

Not immediately, no. I had to read past it, then re-read it again, then try to figure out what you were saying.

Phoenix86 said:
BTW, you don't commit "memory" you commit "data" to memory.

You can commit data to memory by storing it there, I guess. But if you read up on Windows memory management, you'll find that memory does get committed. You can do so directly using the [link=http://msdn.microsoft.com/library/default.asp?url=/library/en-us/memory/base/virtualalloc.asp]VirtualAlloc() API[/link]. It's probably most accurate to say that you're committing "pages of memory", since a page is the minimum amount of memory you can commit, reserve, or free. But there's no doubt my usage matches the [link=http://msdn.microsoft.com/library/default.asp?url=/library/en-us/memory/base/memory_management_functions.asp]Windows API docs[/link] themselves.

If you think I mean "commit more data to memory", then you're wrong and you've revealed that you don't understand how Windows memory management actually works. "Commit" in this context means that physical storage has been committed to back a range of virtual address space. It's different from reserved, which means that range of addresses is available for future use but has no physical memory associated with it.

The state of a range of memory has little to do with "committing more data to memory". To me, that phrase means I've stored some data in memory -- either by retrieving it from I/O or by computing it and then storing it. The targeted memory range has to be committed before it can be used -- otherwise, there's no storage there.

In committing memory, I haven't committed any data to the memory; I've only assured that memory is available to hold whatever data I plan to create or retrieve.

Phoenix86 said:
a sticky thread is not the correct place to argue unless something needs correcting. Even then, if it's going to get involved, it should be in another thread (which I'd be happy to link to from this one).

Why? People coming to the sticky will leave confused, and that's not the point of a sticky. Heck I almost think stickies should be locked after a few days just because of this.

Could this be the root of the problem?

I'm not sure who decides to make posts sticky around here, but it seems like they don't do a thorough evaluation of the post before making it sticky. If the post becomes sticky but isn't an appropriate place for getting errors in the post fixed, then it seems like there isn't an effective peer review process, and as a result the information posted in stickies can't be authoritative.

If you don't think there's room to improve your post, then I'll give up on helping you -- but it disappoints me to see low-quality, unreviewed advice, particularly in a prominently featured post.
 
*sigh*

I'll tell you what, when you make a thread I'll respond... If everyone wants to keep posting like this, I'll ask that it get locked. This is exactly how this thread got mucked up in the first place. Please read the quote from odoe (to answer your Q, the mods make stickies; he's a mod/admin), I totally agree with what he says about stickies. Not that I think you're crapping, but posts like yours make it difficult to get the information people need.


I really want to keep it open for a variety of reasons, mostly so I can continue to edit it as needed.

I'm all for having better information, hopefully you can see that by the changes I have already made. Don't take my lack of interest in this thread as a general disinterest in making it better. I just don't think this is the place for the discussion. I can easily link a discussion thread in the OP if needed. OK?

edit: OK, I tried to make a thread with your comments as the OP, but that doesn't work so well since it'd be a PIA to get my quotes too. If you edit your last post and copy and paste the info into this thread, I'll respond.

 