Where to put pagefile?

As you say, drive space is cheap. If a 320 gig drive costs $85, then 4 gigs of space costs $1.06. How much does E_OUTOFMEMORY cost?
Storage is cheap, yeah, but not "throw out the window" cheap -- at least not to me. I bought the very drive you linked to earlier this week, and I intend to fill it to capacity over time. To others, a four gig page file is no problem, but it does seem senseless to set a four gig page file if there's no compelling reason to, and if the OS and programs will behave exactly the same way if the page file is 512MB or 768MB or whatever. It seems more logical to encounter out of memory errors and then increase the page file size, rather than immediately going to an extreme size, unless you're doing mission-critical work.

Besides which, there are other factors, such as a large page file breaking apart the sectors that house the OS and the sectors that house, say, program files, leading to slightly decreased seek efficiency, and yadda, yadda, yadda...

And, technically, the formatted capacity of that drive is around 298GB, which equates to $1.14/4GB :)
 
Kind of. I just mean when the hard drive isn't active and the head is parked near the outer edges of the drive.
Why doesn't it remain where it last was? Why would the head seek twice for every seek? How long does it wait before moving there?

If a lot of the latency is caused by the drive head waiting for the platter to rotate then, surely, due to the higher rotational speeds at the outer edges of the platters, latency will be reduced.
I think you're confusing linear velocity with rotation rate. On average, the drive will have to wait half a revolution before the needed sector is under the head. Half a revolution on a 7200 RPM drive takes 1/240th of a second (about 4.2 ms), and that's true at the innermost track and the outermost track alike.

The linear velocity at the outer edge is higher, and this is why the bits at the outer edge are either laid down less densely or, at the same density, read faster.
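To put a rough number on the rotation rate point (assuming the usual half-revolution average wait):

$$ t_{\text{avg}} = \frac{1}{2} \cdot \frac{60\ \text{s/min}}{7200\ \text{rev/min}} = \frac{1}{240}\ \text{s} \approx 4.17\ \text{ms} $$

and that figure doesn't depend on which track the head happens to be over.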
 
And, technically, the formatted capacity of that drive is around 298GB, which equates to $1.14/4GB :)
I'm not sure how this is relevant, but I'm talking about gigabytes, which are billions of bytes. It sounds like you're talking about gibibytes, which are abbreviated GiB.

The capacity is 320 gigabytes, formatted or not.

and if the OS and programs will behave exactly the same way if the page file is 512MB or 768MB or whatever.
Problem is, it doesn't -- and applications certainly won't.
 
I'm not sure how this is relevant, but I'm talking about gigabytes, which are billions of bytes. It sounds like you're talking about gibibytes, which are abbreviated GiB.
Ah, you're right. It seems that's a common misconception, which I guess makes me a statistic today.

Problem is, it doesn't -- and applications certainly won't.
Elaborate. If you don't run out of swap space, you don't run out of swap space. Is it not a binary result?
 
In general, isn't the best place for the page file on the outer platters of the drive, rather than the inner platters?
 
In general, isn't the best place for the page file on the outer platters of the drive, rather than the inner platters?

Cylinders, man, cylinders!!! and yes.

There used to be this great defragger in DOS/Win 3.1 days that would (speculatively) put the files that were used most closer to the edge... it worked very well. I don't think they do this these days. How come someone doesn't have a defragger that watches for heavy accesses and arranges those specific files in order and contiguously on the disk (and at the outer tracks)?
 
Disabling the pagefile completely is a bad idea. Even if Windows has enough RAM, some data is still paged onto the pagefile area on the hard drive.

Why? I understand that way back in the '90s, when RAM was expensive, that made sense, but it doesn't any more. Why are we still churning the hard drives to shuffle information on and off when a system has 16 gigs of RAM, most of it unused? No one has been able to answer this one for me, other than that there are some archaic applications that were written 20 years ago and no one has fixed them to run in RAM. Or is this still a remnant of ancient Windows code that they can't fix? Periodically, something will start filling up the swap file; I see the hard drive light flickering, and everything on the computer slows down, yet I have plenty of unused memory. It doesn't seem to make any sense at all.
 
@ nightfly: You realize that this is a four year old thread?
I remember reading about creating a RAM disk for the swap file not too long ago, but IIRC that was for 32-bit OSs that exceed 3.5 GB RAM (how to put that extra RAM to good use).
If you suspect that the system spends too much time swapping data while there is unused RAM, simply decrease the swap file size and see if it helps.

-TLB
 
Sure, I know it's an old thread. But like I wrote, page file use doesn't make any sense to me. Lots of people write that the people who design how Windows works must know better than we do, but Windows over the years has been, and still is, a jumbled mess, so I don't see any evidence of a genius at work there (now in their marketing department, that's where the geniuses are). I'm just trying to understand the logic involved in things such as why we are writing onto the page file on the hard drive in preparation for writing the very same file to the hard drive again later, or swapping out files when there's a ton of empty memory available. Seemed a bit odd to me. Trying to disable the swap file, from what I know, only results in Windows creating one of its own anyway, so even though we think we got rid of it, it's still there...somewhere. I even read threads by people who supposedly know 'better', saying that the new memory management only puts a few things in the page file, and that it belongs on the SSD because SSDs have gotten so much better at handling all those reads and writes; but it still doesn't explain why it's being used in the first place. I don't care how carefully Windows picks what to place in the page file; when three quarters of my RAM isn't being used in the first place, there really shouldn't be anything in a page file at all.
 
I don't care how carefully Windows picks what to place in the page file; when three quarters of my RAM isn't being used in the first place, there really shouldn't be anything in a page file at all.
In that case, there typically won't be anything in your page file.

Running without a page file will cause you to use more physical memory than you would had you used one. If you don't have a page file and you do run out of memory, then your machine won't be recoverable; you'll have to reboot. Had a page file been available, you'd be able to swap out the errant process and load in whatever utility you wanted in order to get control of the system again.

You'll also find that applications which make use of memory mapped files end up swapping more if you don't have a page file.
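For anyone unfamiliar with memory-mapped files, here's a minimal Win32 sketch of the pagefile-backed flavor (the section name and size are made up for illustration). Passing INVALID_HANDLE_VALUE instead of a real file handle means the mapping is backed by the page file itself, which is one reason such applications care whether a page file exists.

```cpp
// Minimal sketch: a pagefile-backed file mapping (name and size are illustrative).
#include <windows.h>

int main() {
    // INVALID_HANDLE_VALUE: no real file, so the section is backed by the page file.
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     0, 64 * 1024,               // 64 KB section
                                     L"Local\\ExampleSection");  // hypothetical name
    if (!hMap) return 1;

    // Map a view and touch a byte; the backing store for this page is the page file.
    char* view = (char*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (view) {
        view[0] = 42;
        UnmapViewOfFile(view);
    }
    CloseHandle(hMap);
    return 0;
}
```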

It's frustrating that you say you don't see any evidence of genius when you're also making it apparent that you don't know how Windows works.
 
It's frustrating that you say you don't see any evidence of genius when you're also making it apparent that you don't know how Windows works.

I don't know exactly how the space shuttle works either, as I'm not a physicist; but I know when it blows up, it's not working. Same with windows; I may not know all the details, but I know when it's not working.
 
Holy thread necromancy, Batman!!! :D

Anyway, there is one piece of advice with respect to the page file and Windows that applies nowadays (meaning Vista but primarily Windows 7 tech):

Leave it alone.

The OS can take care of itself. There's nothing you're going to accomplish by disabling or adjusting anything that will make a noticeable difference in your day-to-day computing. The only thing you could do to improve efficiency for the OS overall would be to place a page file on each physical drive (hard drive or SSD), which allows the OS to "hit" any of them at any time when required.

A single page file on the system drive still suffers from the basic fact that a drive cannot read and write at the same time - you're either reading data from a storage device or you're writing to it, never both at exactly the same instant - so by putting a page file on each drive you effectively give the OS a bit more capability: it has a page file it can reach even while the system drive is busy reading or writing something else.

More efficient operation = smoother multitasking = less glitchy/laggy behavior even in spite of fast hard drives and even faster SSDs.

That's about all you or anyone is going to do to make any difference at all. Best advice...

Leave it alone. Always.
 
Same with windows; I may not know all the details, but I know when it's not working.
You admit the page file doesn't make any sense to you, and that's fine -- if you don't want to learn about it, that's fine too. But it also means you haven't identified any flaw in Windows.

Noting a flaw is a different claim than "failing to see any evidence of genius". Certainly, you can see that the people who invented Windows are smarter than you are, even though it might fail sometimes.

Can't you?
 
Lots of people write that the people who design how Windows works must know better than we do, but Windows over the years has been, and still is, a jumbled mess, so I don't see any evidence of a genius at work there (now in their marketing department, that's where the geniuses are). I'm just trying to understand the logic involved in things such as why we are writing onto the page file on the hard drive in preparation for writing the very same file to the hard drive again later, or swapping out files when there's a ton of empty memory available. Seemed a bit odd to me.

Interesting point of view. Of course, all my Linux and Solaris installs (with extremely large amounts of RAM) all have virtual memory provisioned. I guess they just copy Windows?

That, or there is some value to it, wether or not one understands or agrees.
 
Joe Average wrote:
A single page file on the system drive still suffers from the basic fact that a drive cannot read and write at the same time - you're either reading data from a storage device or you're writing to it, never both at exactly the same instant - so by putting a page file on each drive you effectively give the OS a bit more capability: it has a page file it can reach even while the system drive is busy reading or writing something else.
That's one of the nice things here; reading something new.

Mikeblas wrote
<snip> But it also means you haven't identified any flaw in Windows.<snip>
Oh crap. You must work for Microsoft. Sorry. Everyone else I've ever met can name things wrong with windows, but mostly I think it's because they (MS) simply design the product to get us to buy other MS products (ever heard of the 'DOS isn't done until Lotus won't run?' phrase from the 80's?). Flaws? O.K.
One big one that I can name off the top of my head: Windows is designed to allow outside software to run itself all the time, take up memory and processor time, and interfere with other programs, and it intentionally hides those facts from the average user. So the user is made to think his machine is 'too old' and needs to buy a new one (along with another version of Windows, of course), when nothing could be further from the truth. With fresh reinstalls, I still have friends and relatives using ancient >10 year old P3 machines with Windows 2000, which run very quick once all the garbage is taken off of them, and this is 11 years later. Microsoft would have you believe that OS is garbage now, because they want you to buy another one. The problem is exacerbated because as Microsoft creates 'new' OS's to sell (Look, a shiny new OS! Run out and buy it! Because it has a new name! Really now, exactly how was XP so different from Windows 2000 anyway, other than the SCSI bug that was thrown into XP?), they make subtle changes so that drivers won't work from OS to OS, and developers are simply more likely to write drivers and programs for the shiny 'new' OS. Eventually the old one won't be able to run the new hardware or software; not because it's unable to, but because it's been intentionally prevented from doing so by a marketing strategy.

Another is the dumping of outside programs' files into the OS's directory, and the retention of temporary files. It makes the computer run slower and slower.
Any decent programmer who uses Windows could give you a slew of other problems, but those two are glaring examples I guess you're as yet unaware of.

RemoteDev wrote:
Interesting point of view. Of course, all my Linux and Solaris installs (with extremely large amounts of RAM) all have virtual memory provisioned. I guess they just copy Windows?
I'm not sure where or if it's copied from anyone in particular; I don't know anyone who goes back far enough to the days of 1950's mainframe OS's. I suppose it's simply based on a time when simple OS's like DR-DOS or programs like Software Carousel shuffled programs in and out of active memory in a very primitive form of multitasking, because memory was so expensive. I think it just got too annoying when we got to the point of cheap enough RAM, quick processors and large hard drives, and the OS kept writing info in and out of memory onto a physical drive for no particular reason even if more than half of it was perpetually empty. And sure, the programmers at MS know more about the code in Windows than I do, but if so, then why was it a mess for so long? I'm in the process of migrating my new machines to W7, and know that I will still have to sit through a long install, then waste another hour or so tweaking it to get it to work well. And even though I don't have it installed yet, I'm willing to bet that outside apps still dump their files into the OS's directory, programs will run hidden, and that as time goes on, I will find old temp files lying around too. I just thank god that we haven't had another file compression debacle as with DOS 6. I guess MS at least learned that lesson; I sure did, that was the event that taught me (and probably a lot of others, too) to back up our data regularly.

Sure, I understand the concept of having a swap file for use in an emergency situation where you run out of RAM; what I don't understand is why Windows wastes processor time and write time on it unless it has to - why it's written to assume you are always going to run out of RAM, so it intentionally writes to the physical drive to keep stuff out of RAM, slowing down the machine. While that may make sense for an average moron, there should always have been a way for more intelligent users to stop that from happening.
 
That's one of the nice things here; reading something new.
You might want to get yourself a copy of Windows Internals and read the chapters on memory management. It'll clear up some of your misconceptions and the knowledge you gain will help you make better decisions.
 
Obviously, for maximum stability, leave it on. But if you have ample RAM, there is a benefit to be had by turning it off, since even when you have plenty of RAM, Windows still uses the page file. I like to leave all my programs open and minimized in the tray; anything that is idle for a certain amount of time will get paged, and when I go back to open those programs I want instant response.

Even with the handful of big programs that I use, since bumping to 16GB I've had no problems whatsoever with having the pagefile turned off. Everything is blazing fast and instant. And sure, your RAM usage will be higher, but that's what y'all bought it for, right? Ya don't fill your RAM slots with the goal of trying not to use it. Those that have a need for more than 16GB know that they do, and thus have bigger mobos, or will turn the pagefile on. If you have 12-24GB though, most of us here can safely turn it off.

I am definitely looking forward to the 32GB mobos coming up. Large RAMdisks & SuperCache here we come.
 
Obviously, for maximum stability, leave it on. But if you have ample RAM, there is a benefit to be had by turning it off, since even when you have plenty of RAM, Windows still uses the page file.
No, it doesn't. Even if it did, what's the quantitative benefit that you're getting in exchange for stability?
 
No, it doesn't. Even if it did, what's the quantitative benefit that you're getting in exchange for stability?

Can you supply some proof that it does not? Because in my testing it certainly seems that Windows in fact does use the page file even if you have a lot of RAM and even if there is 65%+ available.

If you know what you're doing - which obviously most members of this forum do - then there is no stability trade-off, provided you have enough RAM. The gain is performance. Do I tell the average ma & pa to turn it off? Of course not. Default settings are targeted towards the average illiterate user. It's marketing. Windows isn't made with the enthusiast in mind. While Win7 is their best creation yet, that doesn't mean there aren't many things that can be done to improve it.
 
Imagine your RAM as the room you are sitting in (say, the kitchen). It has a fixed volume and can only hold a certain amount of 'stuff' (data). Imagine your page file is the storage room on the other side of the house; it can hold more data, but it's not as easy or fast to get to, since you have to walk down the hall.

A page file exists so that when the computer runs out of actual RAM (the kitchen fills up), it has somewhere to put more data. It's not nearly as fast and easy as storing it in the room you are in, but at least it has somewhere to go. If it had nowhere to go, it would overflow and crash (i.e. the kitchen would collapse).

Now imagine if you took the storage room down the hall and installed it in the kitchen (i.e. put the page file on a RAM disk). The kitchen is still the same size (same amount of RAM), but there is now much less usable space in it. When you run out of space in the main kitchen, it's now very quick to put the data into the other storage room, as it's not all the way down the hall anymore. However, the main kitchen (available RAM) is much smaller because the other room (page file) takes up a good portion of its volume, so you are more likely to run out of kitchen space faster and are more often going to have to use this secondary storage room (page file). Why not just use your entire kitchen to begin with and not install an additional storage room inside it?

The only explanation why anyone would suggest putting their page file back onto their RAM is a lack of understanding of how the page file is used and what it does, which seems to be widespread. I don't fault those who are simply looking for information, but to those people who have been posting completely incorrect ideas as cold hard fact: click this link now and save us the trouble.

in my testing it certainly seems that Windows in fact does use the page file even if you have a lot of RAM and even if there is 65%+ available.

Provide your testing procedure and results, then. I have never seen evidence to support this, although I have never done actual testing like you have. Please support your theory with evidence or data if you are going to claim that you have tested this.
 
Provide your testing procedure and results, then. I have never seen evidence to support this, although I have never done actual testing like you have. Please support your theory with evidence or data if you are going to claim that you have tested this.

http://support.microsoft.com/kb/2267427

"Virtual Memory is always in use, even when the memory required by all running processes does not exceed the amount of RAM installed on the system."
 
Can you supply some proof that it does not? Because in my testing it certainly seems that Windows in fact does use the page file even if you have a lot of RAM and even if there is 65%+ available.
"Proof" is a strong word; I take it like "mathematical proof". I won't offer such strong proof as it requires detailed and painstaking analysis of the Windows source code. If you want to read about the paging algorithm Windows uses, you can find a ton of information in the memory chapters of Windows Internals.

What test are you doing, on which version of which operating system? How are you measuring page file activity? You might have trouble making correct observations depending on which version of Windows you're using because of naming problems in Task Manager. Windows XP and earlier (and maybe Windows Vista, too) show "Page File Usage" in the Task Manager's "Performance" tab. This quantity isn't usage of the page file in the sense that a page actually has been written to the file. The indicated quantity means that the space is reserved in the page file.

Say your program allocates some memory, and that memory is marked pageable and read-write. Since you have enough physical memory free, Windows allocates the physical page, and commits it. This "commitment" means that Windows is committed to having the page available even if it gets swapped out. It won't be swapped out unless there's memory pressure. Until then, though, Windows must keep space in the page file reserved in case the paging operation does need to happen.

In those versions of Windows, the "page file usage" is just the total commit charge for the system, and not an indication that anything has actually been written to the page file. In Windows 7 and Windows Server 2008 and newer, the quantity is called "physical memory usage", and the layout of the tab has been significantly simplified.

I think a reading of Windows Internals will confirm my summarization of this process. Though I have left out several details, the point remains that Windows doesn't use the page file unless it experiences memory pressure. You can also confirm this by piecing together the relevant parts of MSDN documentation.
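If you want to see the distinction for yourself, here's a rough sketch that reads the system-wide commit charge and physical memory numbers via GetPerformanceInfo (the counters are reported in pages, so they're scaled by the page size):

```cpp
// Rough sketch: system commit charge vs. physical memory, via GetPerformanceInfo.
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

int main() {
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    if (!GetPerformanceInfo(&pi, sizeof(pi))) return 1;

    const double mb = pi.PageSize / (1024.0 * 1024.0);  // counters are in pages

    printf("Commit charge:   %.0f MB (limit %.0f MB)\n",
           pi.CommitTotal * mb, pi.CommitLimit * mb);
    printf("Physical memory: %.0f MB available of %.0f MB\n",
           pi.PhysicalAvailable * mb, pi.PhysicalTotal * mb);
    return 0;
}
```

A large commit charge alongside mostly-free physical memory is exactly the "space reserved in the page file, nothing necessarily written to it yet" situation described above.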

"Virtual Memory is always in use, even when the memory required by all running processes does not exceed the amount of RAM installed on the system."

Virtual memory and the page file are different. People here are asserting that the page file is being used and somehow causing a problem on systems with large amounts of memory. Fact is, the page file is used only when significant memory pressure exists.

Virtual memory is always in use; the OS and the processor work together to map physical memory. In protected parts of the OS (if you're writing a system-mode VxD, for example), you can bypass virtual memory.

That is, the translation between process-local virtual addresses and physical addresses is pretty much always happening. It's a feature of protection modes on the processor. Data may or may not be moved out of physical memory and into some other storage at any time. Page faulting is a feature of virtual memory, so it's still accurate to say "virtual memory is always in use" when only translation is happening and page faults aren't.

Even when hard page faults are happening, it's possible that the page file isn't used to satisfy them. See post #26 in this very thread for an explanation of how paging can happen to files other than the system page file.
 
^^^^^^^^^^^ Interesting post, mike and a good read.

First, let me thank everyone for taking the time to write up all this, I know for you guys it's something you probably get tired of explaining.

And so, does Win 7 create its own swap file if you try to turn it off, like previous versions of Windows did? This answer will save me a whole lot of time searching for it if that is the case, and then I might as well just buy a small SSD to put the page file on to increase performance, and just replace it when and if it stops working from all the writes onto it. I've always had a separate drive with a static swap file to minimize the usage of my OS drive (using old Pentium slot processor cooling sinks on my hard drives has kept them cool, and I haven't had any hard drive failures since using them that way).

Me, I base my 'test' on the simplest observations. We know how long it takes for a program to load initially (other than those that install themselves to run all the time in the background, grrrrrrr), and we see the hard drive light flickering and can hear the hard drive churning to retrieve the data before we are able to use the program. Once it's initially loaded into RAM, none of this goes on while using the program (other than the occasional blip if, say, it auto-saves the file being worked on). However, if you minimize it or don't use it for a while, you can see and hear hard drive activity, and again when you go back to use the file. Even as storage has gotten faster and faster, the size of the programs has also grown geometrically, so we're back to where we were a decade ago. This may not seem a very scientific way of doing things in the computer age, where everything is based on a software measurement, but it provides information nonetheless.

Ghost, I like your analogy of the 'living room RAM', but our living room ram is now basically the size of a barn, so taking oh, a reclining chair out of the living room and into a warehouse across the street when we're going to use it again in half an hour, all to save a tiny bit of unused space in the barn sized living room doesn't make a lot of sense.

EDIT: Actually, I'll probably just put in another SSD anyway. They aren't that expensive, the reads and writes will be much faster, and I can stop trying to figure out why microsoft does what appear to me to be crazy things. After all, if my idiot brother in law did the above with the recliner, I'd no longer argue with him, just let him carry it and shake my head that anyone would do such a thing.
 
And so, does Win 7 create its own swap file if you try to turn it off, like previous versions of Windows did? This answer will save me a whole lot of time searching for it if that is the case, and then I might as well just buy a small SSD to put the page file on to increase performance, and just replace it when and if it stops working from all the writes onto it. I've always had a separate drive with a static swap file to minimize the usage of my OS drive (using old Pentium slot processor cooling sinks on my hard drives has kept them cool, and I haven't had any hard drive failures since using them that way).
It's not clear to me why you're doing these things. It seems a bit daft to put CPU fans on your drives to cool them; if you have a problem getting air through your case, you should address that. Keeping the drives a few degrees cooler certainly won't make them any faster, and probably won't make them last any longer.


Me, I base my 'test' on the simplest observations. We know how long it takes for a program to load initially (other than those that install themselves to run all the time in the background, grrrrrrr), and we see the hard drive light flickering and can hear the hard drive churning to retrieve the data before we are able to use the program. Once it's initially loaded into RAM, none of this goes on while using the program (other than the occasional blip if, say, it auto-saves the file being worked on). However, if you minimize it or don't use it for a while, you can see and hear hard drive activity, and again when you go back to use the file. Even as storage has gotten faster and faster, the size of the programs has also grown geometrically, so we're back to where we were a decade ago. This may not seem a very scientific way of doing things in the computer age, where everything is based on a software measurement, but it provides information nonetheless.
It doesn't provide any useful information. The drive is being accessed, but you have no idea what is being read or written. There are tens of thousands of files on your drive; maybe a couple hundred thousand. Yet you've concluded it's the pagefile that's being accessed, not any of those thousands of data or program files. That's not information -- it's an assumption or a blind guess.

If you want some information, run a program like Process Monitor to see what files are being touched. Or use PerfMon to watch some of the process statistics. At that point, you'll have some information -- but it's just an observation. To interpret it, you'll need information about how Windows works (which you don't appear to have), information about how the program you've loaded works (which you probably don't have), and similar information about everything else on your system, too.

Ghost, I like your analogy of the 'living room RAM', but our living room ram is now basically the size of a barn, so taking oh, a reclining chair out of the living room and into a warehouse across the street when we're going to use it again in half an hour, all to save a tiny bit of unused space in the barn sized living room doesn't make a lot of sense.
I don't think this is what's happening.

But so what if it is? You're not losing any performance.

Let's say that you've got 12 gigs of memory, and you've got a total commit charge in read-write pages of 4 gigs. Another 4 gigs is in use for locked, non-paged, and read-only data, total. So you've got 4 gigs of pages that are potentially dirty, 4 gigs of pages that are read-only, and 4 gigs of free memory.

There's no memory pressure. But maybe you're going to read some data from a file. Windows is using that "free" memory as file cache, so maybe we don't have to read from the drive the next time we read the file. Other structures on disk, like directories and allocation tables, are very frequently read, so they're in that "free" space, too.

What if you start another program that needs 8 gigs of memory? You don't have it available, so to start that program, Windows will have to take the read-write pages and write them to the page file. Then, the physical memory will be unused and the new program can start loading. Writing out the pages has to complete before the loading can start. You wait while it happens.

But what if, before starting that big program, Windows notices your computer isn't doing much and writes the read-write pages to the page file in case they need to be swapped out later? If your computer is idle -- which, most of the time, it is -- pre-emptively writing the pages out doesn't make a difference to the performance of anything. The I/O is asynchronous and non-blocking, and is queued behind any other application-issued direct writes.

The preemptive write doesn't cost anything (because you were idle anyway) and saves lots of time later if that memory does end up being needed for something else.

Say you delete the page file. In this case, you won't do the pre-emptive writes to the page file. And you also won't ever be able to load the program that demands 8 gigs of space for itself, either. What have you really saved in this scenario?

Again, I've probably made a mistake or two in explaining this and I've certainly left out details to keep things simple. If you want to understand what's going on, get a copy of Windows Internals and read the section on memory.

EDIT: Actually, I'll probably just put in another SSD anyway. They aren't that expensive, the reads and writes will be much faster, and I can stop trying to figure out why microsoft does what appear to me to be crazy things. After all, if my idiot brother in law did the above with the recliner, I'd no longer argue with him, just let him carry it and shake my head that anyone would do such a thing.
Again, I don't think you're losing much performance even if Windows is pre-emptively pre-writing pages. But if you think you've got problems, how do you know you aren't better off buying more physical memory than getting faster I/O?

In the end, it sounds like you're determined to spend money on your computer, and that's fine. I don't think you're going to realize any measurable performance benefit either way. If it makes you feel better that you're accessing the disk less -- even if you don't know why -- then that's all about you.
 
how do you know you aren't better off buying more physical memory
O.K.; how much is enough? At what point are we considered to have enough RAM so that we don't have to take things out of memory and write them to the hard disk? Five times usage? Ten? Twenty? A hundred? Surely there is an answer. Oh, and again, if we turn off the swap file, does Windows 7 create its own anyway, and if so, where does it put it?
And of course, if I were doing mission critical work, then I'd worry about having Windows crash. But I'm not; I'll reboot and watch what happens next time to see what goes wrong. After all, it's only recently that we've pretty much eliminated the BSOD from being a common occurrence. In the days of DOS, computers crashed all the time. We rebooted and got on with our lives, and tried not to repeat the same activities that prompted the first crash.
There are a whole lot of 'what ifs' here. Well, what if none of that ever happens? Should we just continue doing the same things we always did just because 'that's the way we've always done it'?
Let's be flexible and try it. Can't tell what's going to happen until we do. After all, what if I'm not using any programs that want more memory than I have installed? Things might run fine. But I'd really like to know if Windows is going to create a swap file anyway even if I try to turn it off; because if it is, I want to be the one to decide where to put it.
 
O.K.; how much is enough? At what point are we considered to have enough RAM so that we don't have to take things out of memory and write them to the hard disk? Five times usage? Ten? Twenty? A hundred? Surely there is an answer.
Asking "how much memory is enough?" is like asking "how much rope should I buy?" Only you know from your usage.

You can watch the "memory usage" counter in Task Manager or PerfMon. How high does it get when you do work? That maximum is the minimum amount of physical memory that you need to avoid swapping. You can add to that number if you think you didn't test correctly (by running your biggest programs at the same time), or if you think that your needs are going to grow (because you're editing longer or bigger files, or because you're going to be using more or bigger programs, or ...)
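If you'd rather poll the numbers than eyeball Task Manager, a minimal sketch using GlobalMemoryStatusEx reports the same headline figures (the labels in the printout are mine, not Microsoft's):

```cpp
// Minimal sketch: headline memory figures from GlobalMemoryStatusEx.
#include <windows.h>
#include <stdio.h>

int main() {
    MEMORYSTATUSEX ms = { sizeof(ms) };
    if (!GlobalMemoryStatusEx(&ms)) return 1;

    printf("Memory load:     %lu%%\n", ms.dwMemoryLoad);
    printf("Physical memory: %llu MB free of %llu MB\n",
           ms.ullAvailPhys >> 20, ms.ullTotalPhys >> 20);
    printf("Commit limit:    %llu MB free of %llu MB (RAM + page files)\n",
           ms.ullAvailPageFile >> 20, ms.ullTotalPageFile >> 20);
    return 0;
}
```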

And of course, if I were doing mission critical work, then I'd worry about having Windows crash. But I'm not; I'll reboot and watch what happens next time to see what goes wrong.
If you're not doing mission critical work, why are you so concerned with performance?

But I'd really like to know if Windows is going to create a swap file anyway even if I try to turn it off; because if it is, I want to be the one to decide where to put it.
If you must control the location of the swap file, then you should configure your machine to create one and place it there.

Thing is, swapping happens outside the swap file all the time, and you can't control that. Why are you so eager to control the swap file?


Writing one of the previous replies sparked my memory about another reason that running without a page file actually means your work requires more memory: reservations.

In Windows, virtual memory isn't simply "allocated" or "free". Each page of memory managed by the OS has a particular state. That state can be managed by programs which decide to use memory. Some programs carefully manage state -- some are more interested in just allocating and freeing memory.

There are three states:

Free. The page isn't committed or reserved. If we're thinking of physical memory, the memory isn't being used. If a program tries to write to a page that is free, an exception is thrown and the program might crash.

Reserved. The page has been reserved for future use. A program has declared its intent to use that memory, but hasn't begun doing so yet.

Committed. The page has been allocated and really is in use. The virtual page in question is backed by a physical page if the page is loaded in memory. The virtual page in question is probably backed by a page-sized block of space in some storage someplace, even if the page has never actually been written to the page file.

"Free" is pretty simple, but there's a subtlety. As far as user programs are concerned, the memory is available for allocation and use. But Windows potentially uses free memory for itself by using it for file cache. Reading from disk brings data from the file into memory, and that data is given to the program requesting it -- or the OS itself, in the case of disk structures like the allocation table or directories. But the assumption is that the data will soon be read again later, so the file cache hangs onto it. The more free memory you've got, the more file cache you've got.

This subtlety is important: it means that you're getting an advantage from free memory. People will say "I never use more than 3 gigs!" and won't buy a machine with more than 4 gigs or so because they think they're wasting memory. "You don't need 12 gigs because today's games don't use that much memory!" Thing is, today's games use more files than ever. Reading those files again and again slows down the experience. The more cache memory you have available, the more chance you have of the game itself or the OS caching the files for you in memory.

"Committed" is pretty simple too, once you understand the page backing store.

It's possible to pin a page in memory. That's bad, because a program that does so is consuming a shared resource -- memory -- that no other program can use until it is released. This inherently limits the usage of the system, even if only in a small way. Done with a large enough block of memory, it's trouble. A pinned page has no backing store, though.

A read-only page, such as code from a program's EXE or DLL files on disk, uses the backing store from the EXE or DLL file itself. If Windows needs to swap a page out of memory and finds that the page is actually read-only code from a program, it doesn't write it to the swap file. It doesn't need to; it can just load it again from the executable, since it knows it has never changed since loading. Most of the hard page faults encountered by systems with adequate memory for their working set actually occur this way, outside of the page file, against executable files. That's because executable files are demand loaded -- even if the EXE is 20 megabytes, only the first few pages are initially loaded. Execution starts in those pages, and as the code execution moves around and touches code (or read-only constant data) on other pages, those pages are faulted in and loaded. That might happen quickly or slowly, depending on the access pattern of the program. It might happen never, if the program ends before those pages are ever needed.

A read-write page, such as data your program has allocated to store your work, is backed by a page in the paging file. At any moment, it might need to be swapped out. It must have a place to go if it is swapped out, so that reservation is made in the swap file. This is what I explained before as "use" in the page file stat in Task Manager. The "use" is just the reservation -- it's not the actual consumption of the resources associated with the reservation in the page file.

"Reserved" is where things get really interesting.

Reserving memory lets a program declare its eventual intent to use memory, but not commit it completely, immediately. Some programs are written to get a reservation, then slowly use parts of it as needed. The program might start up, reserve 500 megs, and then begin using it a few dozen kilobytes at a time. If you have a page file, this works great; reserved memory slowly becomes committed. When committed, the memory becomes backed by the page file, and that's that.

But what if you don't have a page file? At any moment, Windows might need to satisfy your reservation request. What if it can't, because it can't swap data around to make the request fit in memory? At the time the program has made the reservation request, Windows must decide if it can be granted or not. If there's no page file, Windows actually allocates physical memory and commits it. This means that you might run a text editor to edit a 25 kilobyte file, but end up allocating 250 megs of physical memory to do so. That allocation includes initializing the memory, by the way.
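Here's a minimal sketch of that reserve-then-commit pattern using VirtualAlloc (the 500 MB figure is just the illustrative number from above, not something a real text editor would necessarily do):

```cpp
// Sketch of reserve-then-commit: grab address space up front, commit pages as needed.
#include <windows.h>

int main() {
    const SIZE_T reserveSize = 500ull * 1024 * 1024;  // illustrative 500 MB reservation
    const SIZE_T chunkSize   = 64 * 1024;             // commit in 64 KB pieces

    // Reserve: address space only; nothing is committed yet.
    char* base = (char*)VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (!base) return 1;

    // Commit the first chunk only when it's actually needed; this is the point
    // where the commit charge (and page-file reservation, if one exists) grows.
    char* chunk = (char*)VirtualAlloc(base, chunkSize, MEM_COMMIT, PAGE_READWRITE);
    if (!chunk) return 1;
    chunk[0] = 1;  // touching it gives the virtual page a physical page behind it

    VirtualFree(base, 0, MEM_RELEASE);  // release the entire reservation
    return 0;
}
```

The point of MEM_RESERVE is that it claims a contiguous range of address space without committing it, so later MEM_COMMIT calls inside that range always have room to land.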

One special application of reserved memory is the stack that your program uses to pass variables from function to function and to track execution progress. A stack exists for each thread that's running (right now, my system has 1028 executing threads!) and the stack has a default size of one megabyte.

The reserve-then-commit pattern works great for the stack. As the stack grows, a block of memory can be committed from the reservation. If the reservation runs out, the stack is too full and the program has a stack overflow and that's that. If the commit request is satisfied, then that's fine -- you keep running.

On a machine with no page file, though, the stack reservation must come from physical memory immediately at the time the reservation is requested. That means that the stack for a thread is probably taking a megabyte, even if the thread would only ever use less -- probably 64K or 128K -- of that reservation. Further, it means that the thread at start up time initializes the whole megabyte rather than a page at a time as it is consumed.

On my machine, then, with 1028 threads, I'm not using much physical memory -- probably 64 megs or so, if each thread uses about 64K of its stack. On a machine with no paging file configured, the 1028 threads are using physical memory for the whole reservation size; about a gigabyte total! They're not going to actually use that memory, but the reservation is necessary.

That over-commit actually causes more physical pages to be used. Since the system has no page file, if there's memory pressure, the file cache is the first to go. Then, read-only pages backed by the DLL and EXE files loaded into memory get swapped. There's not a lot of those relative to read-write pages, so such a system is out of memory pretty quickly -- especially since user data reservations are much larger than the stack, and causing the same immediate commitment problem.

We all use multi core processors these days, and more and more programs support those cores by creating and killing threads as necessary to spread work out. But without a page file, creating a thread means allocating a whole megabyte of physical memory and initializing it immediately before the thread can start, even if the thread doesn't actually use the memory. Doesn't this mean that creating a thread on a system without a page file is substantially more expensive than allocating a thread on a machine with a page file?
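For what it's worth, the stack reserve/commit split shows up directly in the thread-creation API. This sketch just makes the usual 1 MB reservation explicit; the flag tells Windows to treat the size as a reservation rather than an up-front commit:

```cpp
// Sketch: create a thread with an explicit 1 MB stack reservation.
// STACK_SIZE_PARAM_IS_A_RESERVATION means the size is reserved, not committed,
// so stack pages are committed only as the thread actually uses them.
#include <windows.h>

static DWORD WINAPI Worker(LPVOID) {
    volatile char buf[4096];  // touches roughly one more stack page
    buf[0] = 0;
    return 0;
}

int main() {
    HANDLE h = CreateThread(NULL, 1024 * 1024, Worker, NULL,
                            STACK_SIZE_PARAM_IS_A_RESERVATION, NULL);
    if (!h) return 1;
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
```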

Given these facts, can't we be sure that running without a page file is actually worse for performance than running with a page file configured? Running without a page file causes more memory usage and more unnecessary memory initialization, which we've purposefully exchanged for the perception that we're using the disk less. Isn't that a foolish trade?
 
TL;DR: Running without a page file is dumb, lol.
 
Virtual memory and the page file are different. People here are asserting that the page file is being used and somehow causing a problem on systems with large amounts of memory. Fact is, the page file is used only when significant memory pressure exists.

True, technically. But Windows tries to keep a percentage of your RAM available (same article I linked earlier, bottom of the page), so it will start paging earlier than really needed. And it doesn't hurt to have one, so unless you are REALLY limited on hard drive space, it is a bad idea to disable your page file.
 
True, technically. But Windows tries to keep a percentage of your RAM available (same article I linked earlier, bottom of the page), so it will start paging earlier than really needed. And it doesn't hurt to have one, so unless you are REALLY limited on hard drive space, it is a bad idea to disable your page file.

I don't think the article is correct. (I'm not advocating using no pagefile, though; I've got machines with hundreds of gigabytes of RAM, and they still have pagefiles.)

Memory, Available MBytes: This measures how much RAM is available to satisfy demands for virtual memory (either new allocations, or for restoring a page from the pagefile). When RAM is in short supply (for example, Committed Bytes is greater than installed RAM), the operating system tries to keep a certain fraction of installed RAM available for immediate use by copying virtual memory pages that are not in active use to the pagefile.

I believe this isn't paging to keep memory free -- I believe the writes are done preemptively as I outlined before.

Even if the text is literally true, we can't assume the fraction is large (like, a quarter). The fraction may be very small, like a couple thousandths.

Posters here seem to believe that any unexplained disk activity is bad, and that it must be related to the page file. I think that notion can be pretty easily debunked by watching the "Pages output/sec" counter. I just did a build (of a few million line C++ project) and the counter never left zero. When the build finished, my machine was idle -- not even running a web browser or a mail client -- and the hard drive light still flashes. But the pages output/second counter never leaves zero.
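If anyone wants to repeat that observation themselves, here's a rough sketch that samples the same counter through the PDH API (the counter path and one-second interval are just what I'd try first):

```cpp
// Rough sketch: sample "\Memory\Pages Output/sec" once a second via PDH.
#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#pragma comment(lib, "pdh.lib")

int main() {
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS) return 1;
    if (PdhAddEnglishCounterW(query, L"\\Memory\\Pages Output/sec", 0, &counter)
            != ERROR_SUCCESS) return 1;

    PdhCollectQueryData(query);  // first sample only establishes a baseline
    for (int i = 0; i < 30; ++i) {
        Sleep(1000);
        PdhCollectQueryData(query);
        PDH_FMT_COUNTERVALUE value;
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value)
                == ERROR_SUCCESS) {
            printf("Pages Output/sec: %.1f\n", value.doubleValue);
        }
    }
    PdhCloseQuery(query);
    return 0;
}
```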
 
If you're not doing mission critical work, why are you so concerned with performance?
Duh, Mike. Faster responses, better video, less wear and tear on the hardware.
Any questions about that explanation, nightfly?
Cool write-up, thanks Mike, love learning new stuff, especially when I don't have to wade through a thousand pages to do so! O.K., so leave a page file somewhere; I figure I'll just put it on the second hard drive as always. Still a few questions unanswered though. If you try to turn off the page file, does Win 7 make one of its own anyway, and where does it put it?
 
Duh, Mike. Faster responses, better video, less wear and tear on the hardware.
Page faults don't wear out disk drives (unless you're using SSD).

I don't think hard page faults are between you and your goal of better video, or even faster responses. Try watching the counters as you use your machine; then, you can know for sure.

Cool write-up, thanks Mike, love learning new stuff, especially when I don't have to wade through a thousand pages to do so! O.K., so leave a page file somewhere; I figure I'll just put it on the second hard drive as always. Still a few questions unanswered though. If you try to turn off the page file, does Win 7 make one of its own anyway, and where does it put it?
If it does make one, it would be on the system drive.
 
I don't think the article is correct. (I'm not advocating using no pagefile, though; I've got machines with hundreds of gigabytes of RAM, and they still have pagefiles.)



I believe this isn't paging to keep memory free -- I believe the writes are done preemptively as I outlined before.

Even if the text is literally true, we can't assume the fraction is large (like, a quarter). The fraction may be very small, like a couple thousandths.

Posters here seem to believe that any unexplained disk activity is bad, and that it must be related to the page file. I think that notion can be pretty easily debunked by watching the "Pages output/sec" counter. I just did a build (of a few million line C++ project) and the counter never left zero. When the build finished, my machine was idle -- not even running a web browser or a mail client -- and the hard drive light still flashes. But the pages output/second counter never leaves zero.

Well one way to be sure would be to use Process Monitor (from Microsoft's sysinternals) to see what exactly is writing to disk in that situation: http://technet.microsoft.com/en-us/sysinternals/bb896645

edit: I bet a lot of it is registry activity, system volume information, and maybe some page file.
 
I have 16 GB of RAM so I turned all paging off. Plus I don't want pages written to my SSD wearing it out faster.

I have zero adverse effects with 16GB of RAM and no page file defined. Page files are for systems with 4 GB or less of RAM. I would assume 8 is safe. 12 is good to go and 16 is 100% good to go.
 