With Windows Vista x64 and 6GB of RAM do you need a page file?

Archaea

[H]F Junkie
Joined: Oct 19, 2004
Messages: 11,821
Could you completely disable the page file on a PC with 6GB of RAM and Vista 64 Ultimate, forcing the PC to use its 6GB of RAM? What would happen? I've never tried...

Can you drop the recommended 1-1.5x physical RAM amount to a smaller value, like 2GB, without ill effects?

I'm just curious.
 
I've run a fixed page file of 768MB since 2000. I'm up to 4GB of RAM at this point, with the only effect being an occasional warning from Windows when using Fast User Switching while my wife and I had many things open.

Cut back on the file - no need for gobs of wasted storage space.
 
You could force it to a smaller size (~2GB) if you wanted to, but absolutely DO NOT disable the page file in any version of Windows.
 
You don't need a huge pagefile (the 1-1.5x rule doesn't really apply when you have a lot of RAM), but disabling it isn't the best idea. Pages which are in use aren't written out of RAM anyway, and having a page file means the OS doesn't have to commit potentially hundreds of megabytes of RAM to programs that ask for more memory than they need (it has to commit virtual memory when a program asks, but it can mark off part of the page file for that purpose if the program isn't using it yet). The RAM also isn't forced to hold pages that were referenced once and will never be again, and the memory can be put to more useful things such as cache and SuperFetch. Some programs will also complain if you have no page file.
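If it helps, here's a rough C sketch of what "commit" means, using the Win32 VirtualAlloc API (just an illustration I typed up, not tested code):

/* Reserving address space costs nothing; committing it charges the
   system commit limit (RAM + page file) before a single byte is touched. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 300 * 1024 * 1024; /* 300MB, as in the example */

    /* Reserve only: no RAM and no page file space is charged yet. */
    void *mem = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);

    /* Commit: Windows must now guarantee backing for all 300MB.
       With a page file, that guarantee can be split between RAM and
       disk; with no page file, it must all be promised out of RAM. */
    if (mem == NULL ||
        VirtualAlloc(mem, size, MEM_COMMIT, PAGE_READWRITE) == NULL)
    {
        /* This failure is the commit limit being hit - with no page
           file it can happen even when plenty of RAM looks "free". */
        printf("commit failed\n");
        return 1;
    }

    /* Note: no physical page is actually used until we write to it. */
    printf("300MB committed\n");
    VirtualFree(mem, 0, MEM_RELEASE);
    return 0;
}

The commit step is the point: with no page file, every committed byte has to be promised out of RAM, whether or not the program ever touches it.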
 
Yeah, don't turn off the page file. I tried with 4 gigs, and it only took a modern game like The Witcher a few hours to crash my system.
 
Some people apparently don't understand how a computer works, but insist on fucking with the OS because some dumbass sees fit to advise them to do so.
 
Some people apparently don't understand how a computer works, but insist on fucking with the OS because some dumbass sees fit to advise them to do so.

what are you saying?
 
Unfortunately there's a general idea that the Windows team are pretty stupid and never thought of all the incredibly simple optimisations they could do. Like, if you had lots of RAM, maybe the memory manager could keep lots of data in RAM rather than writing it all to the page file? Erm, yes, it does.

You get all sorts of tweaks people recommend, which usually have little more than a placebo effect (if you put /prefetch:1 after a shortcut, the program will go faster!), or are actually detrimental (delete the Prefetch folder regularly). Well, actually, people don't understand what the Prefetch folder is at all.
 
Running Vista x64, 4GB of RAM. Using this system for gaming (UT3, Crysis, BioShock) and photo/video editing (Photoshop CS3, Premiere Pro CS3). I disabled the page file about 6 months ago.

1.) I've yet to see any errors related to running out of memory. True, I don't typically run 100 different programs at the same time. But still, I've yet to see this happen.

2.) I've yet to run across any of these magical programs that complain "zomg the page files missing!1!!1".
 
I'd be out of memory without a page file on my Vista x64 system; my commit charge is over 5GB, although only 2.37GB of physical memory is in use. But just because you can do it doesn't mean that it's helping performance.
 
Not saying everyone should do it, nor am I saying that it provides any real performance benefits. At most, I notice that large apps load with much less hard drive thrashing. No major perceptible speed boosts, though a few here and there. I'm just saying that almost any time this subject gets brought up, you see a horde of people replying with 1.) the system has to have a pagefile to run properly, and 2.) certain programs won't work without a page file. I haven't seen either of these to be true in my experience.
 
Why do you need to keep the pagefile if you're not exceeding the physical amount of memory in the system? My understanding of memory management suggests that this is unnecessary.
 
Why do you need to keep the pagefile if you're not exceeding the physical amount of memory in the system? My understanding of memory management suggests that this is unnecessary.

Page files serve more of a purpose than just additional, logical RAM.
An example: when an app loads, you use it in RAM. After you're done with it, that stuff in RAM can then be written to the page file. Calling that program back up from the pagefile is much faster than calling it from the multiple folder locations all across the disk.
Vista of course takes that to a new level, and figures out which programs are called most often, keeping in RAM everything that has the highest chance of getting used.


The bottom line: Vista is so efficient at memory management, nobody should screw around with any sort of "optimizations" of any kind. It's smart enough to do it on its own.

And FWIW, try disabling the page file... Windows WILL re-enable it (eventually), as it does need it.
 
Why do you need to keep the pagefile if you're not exceeding the physical amount of memory in the system? My understanding of memory management suggests that this is unnecessary.

I also believe Vista handles crash recovery better. Writing something in-use to the disk means it's able to get that BACK after a crash, versus running 100% of the system in RAM, in which case you'd lose all those temporary files.
However, Vista (again) is even better in this area since most of the system is now stored on disk, which is why it's able to recover from things better than XP (which had more components stored in RAM).
 
One issue, as I mentioned earlier, is that a program might say "Allocate me 300MB of virtual memory" (normal programs have no idea what RAM is; they just see virtual memory, which may be in RAM or the page file). It's not necessarily going to write anything to all of this memory, but Windows has to commit to giving it 300MB from somewhere. If you have no page file, that's 300MB of RAM gone; with a page file, part of the page file can be used for this purpose, leaving the RAM free for cache and other programs unless and until that program actually uses all the memory it was allocated.

If your commit charge (which is not the figure you see in the Vista Task Manager's "Memory" display; it's the "Page File" figure down in the bottom right, and confusingly, it's not the pagefile usage at all) is less than the physical memory, then you can get away with disabling the page file. But even then you're reducing the system's ability to use RAM as cache. This may have no significant effect, or it could slow things down, but it's not really going to speed things up; things get paged because they're not being used and the RAM can be put to better purposes.
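If you'd rather read those numbers programmatically than decipher Task Manager's labels, something like this rough C sketch against the Win32 GetPerformanceInfo call should do it (a sketch only; you'd link against psapi):

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    if (!GetPerformanceInfo(&pi, sizeof(pi)))
        return 1;

    /* These counters are in pages, so scale by the page size. */
    double mb = pi.PageSize / (1024.0 * 1024.0);
    printf("commit charge: %.0f MB\n", pi.CommitTotal * mb);
    printf("commit limit:  %.0f MB\n", pi.CommitLimit * mb); /* RAM + page file */
    printf("physical RAM:  %.0f MB\n", pi.PhysicalTotal * mb);
    return 0;
}

If the commit charge stays below physical RAM, you're in the territory where running without a page file can survive; the commit limit shows how much headroom the page file is adding.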

I'm not sure what this crash recovery thing is about though.
 
Just use XP and you'll never need more than 2GB to run smoothly. People who install Vista, or even worse, 64-bit Vista, are doing nothing but porking their perfectly good hardware into a huge bloated state. As a result, people need to install double to triple the amount of RAM just to get the system working as fast as with XP.

Retarded if you ask me. Especially when there is no benefit whatsoever from doing the above - just increased trouble and a waste of money.
 

Good point.

Crashes happen so rarely that it's not really a huge factor in day-to-day usage.

The bottom line for me: leave it alone. There have been so many articles out there that talk about disabling the page file, or sticking it on another hard drive, or things like that... and any actual study finds it has no benefit whatsoever.

Vista is even better, and personally, I wouldn't touch anything to do with its memory-management settings.
 
But even then you're reducing the system's ability to use RAM as cache.
I'm confused by this notion. If the commit charge is less than the physical memory size (in my case, significantly less), then how does that prevent the system from using RAM as cache? Does it waste memory by allocating too much? Sure. But if that physical memory is available, I don't see why this would cause any ill effects. 300MB of empty space in physical memory is just that. It won't prevent writing anything else to physical RAM (unless you're out). And what type of cache are you talking about? The only thing I know that gets cached in RAM is SuperFetch.

I do see a negative side to the page file. When a program is starting and requests, say, 300MB of virtual memory, if Windows decides to allocate the empty space to the page file, that's a hit on the hard drive plus wasted CPU cycles before the program gets a chance to fully load. Time spent allocating memory, in this case on the hard drive which is much slower, is time wasted not loading the program.

The second negative is that while Windows will allocate empty space to the page file, it will also swap out important information, which again wastes time and resources moving data from a fast memory subsystem (the RAM) to a much slower one (the hard drive).

My commit charge hovers at around 1.3GB; with 5 programs open right now I'm at around 1.8GB. Obviously 2GB of physical RAM would not fare well without a page file. But with 4GB installed, I've never seen it jump to over 3.4GB used. Is the performance hit of enabling the page file negligible? Yeah, for the most part. But why do it if you've got the RAM? It's like taking the time and effort to tie your luggage to the top of the car when you've got plenty of room in the back seat and trunk. Most people I see saying to leave the page file alone are under this impression because that's the way it was back in the days of Windows 3.1/95/98, when RAM was relatively expensive and the danger of running out of physical memory was a reality faced by everyone.

There have been so many articles out there that talk about disabling the page file, or sticking it on another hard drive, or things like that... and any actual study finds it has no benefit whatsoever.
Links? Because when I googled, this is one of the first articles that showed up:
http://www.tomshardware.com/2008/02/15/vista_workshop/index.html

"With 8 GB and no swap file, the system was fine. Even in some memory intensive scenarios such as opening files in Photoshop CS3 with a total file size of 3 GB, the system remained very responsive and even snappy, never writing to disk once."

"Experienced users can give their systems another little performance boost by deactivating the paging file."
 
All I can say I guess is go read through these forums and other places on the internet... Vista handles RAM in a different way than most people are used to thinking.

EDIT - See that word "little" right there? There have been no conclusive studies showing that disabling your page file, or moving it somewhere else, has enough of an impact to worry about.

I don't think anyone is arguing that disabling it would cause the system to actually slow down.

If you have enough RAM, the pagefile isn't used that much. Period. I guess this is why I don't see the fuss about it. If you have plenty of RAM, it simply won't get used much at all. It DOES, however, provide a failover safety net to the system, which is again why, when you get minimal gain, you don't want to take any risks.
 
I have read through the forums and other places on the internet. The people who say that disabling the paging file is harmful have a.) never tried it, or b.) no solid evidence to back up the claims that it hurts performance (barring Mithent, who is making very good points, not just regurgitating the "trust me, don't do it" stance).

Again, I'll ask for links to these "so many articles out there that find there are no benefits whatsoever". The objective ones that I'm finding seem to suggest otherwise, but I'm open to reading ones that are critical of it.
 
I disabled mine for a trial. My 36GB Raptor is nearly silent now instead of "popcorning" all the time. Not sure if I can claim any speed improvement yet, but I'll give it a whirl for a few days. Part of my PC use right now is running four virtual machines: one is assigned 768MB, and three are assigned 512MB.

My 6GB of RAM is feeding everything. I should think if it was going to crash, it would crash when using four VMs. I played an hour of Crysis with no page file and nothing blew up :)

I'll let you guys know.
 
Again, I'll ask for links to these "so many articles out there that find there are no benefits whatsoever".

OK, here you go (in my two minutes of searching).
http://www.windowsitpro.com/article...ove-performance-by-removing-the-pagefile.html

Some brief little snippets:
My advice, therefore, is not to disable the pagefile, because Windows will move pages from RAM to the pagefile only when necessary.
Furthermore, you gain no performance improvement by turning off the pagefile.


And
http://home.comcast.net/~SupportCD/XPMyths.html
You gain no performance improvement by turning off the Paging File.


As when you tried to put words in my mouth earlier: I NEVER said (I even reiterated that fact) that disabling it hurts performance. It simply offers NO performance increase.

The false truths spread around as fact too often are that A) a system with enough RAM doesn't need a page file, and B) breaking up the pagefile across several disks increases performance.

And (as I said earlier), if you have tons of RAM, keep the darn thing enabled as simply a fallback mechanism in the event it will actually be needed!
My car has airbags. I guess I could take them out to lighten the car, seatbelts too; that would give me some performance increase, right?
Same way: leave it alone, never know when it will be needed.

Windows isn't using your pagefile heavily at all unless it needs it. Same with airbags: they aren't being used until they are needed.
 
This is one area where a major update/upgrade in thinking would make a world of difference. First, 1x to 1.5x your physical memory hasn't been the "rule" since 256 MB of system memory was considered more than adequate. Second, there is no reason anymore to set a static page file, or even to consider disabling it completely. Not with the way Vista handles memory. Leave it alone, and use the computer.
 
This is one area where a major update/upgrade in thinking would make a world of difference. First, 1x to 1.5x your physical memory hasn't been the "rule" since 256 MB of system memory was considered more than adequate. Second, there is no reason anymore to set a static page file, or even to consider disabling it completely. Not with the way Vista handles memory. Leave it alone, and use the computer.

Deacon,

I have Vista 64 Ultimate. The default grayed-out settings on a brand new, fully patched install show the 1.0 to 1.5 rule still intact.

It showed 9GB as the grayed-out default maximum if I chose to manually select a value. That means Microsoft still goes by that rule of thumb... since I have 6GB of RAM, and 9GB = 1.5 times 6.

Now, Microsoft lets the OS manage the page file at whatever size it needs unless you tell it otherwise by manually selecting an amount. But my point is that 1.0 to 1.5x has stuck around long after 256MB was the norm; it appears to still be active MS advice today!
 
I'm talking about the thinking of setting a static size, not the ratio so much.

So am I.

I've already changed it, so I can't screen-print it to show you... But the default min and max in Microsoft's newest operating system, Vista 64 Ultimate, are still set to 1x and 1.5x your physical memory amount if you choose to manage the size yourself. They're visible but grayed out until you choose that option.
 
Jesus, people need to finally drop this whole "omg disable the swap file" thing. The dead horse has been beaten for years now. As other people have already mentioned, the page file serves much more purpose than just being an "overflow" for when you run out of RAM. Just because you have lots of RAM doesn't mean that the pagefile is just wasted space. You don't need several gigabytes of swap space, but you do need to have something (maybe 512MB or so). Here's why:

Your RAM + your swap file are seen together as one big memory space, called virtual memory. A lot of people falsely think that the term "virtual memory" refers to just the swap space; it doesn't, it's the entire addressable space in the system. When you launch a program, it allocates a bunch of virtual memory, but it doesn't necessarily USE all of that space. With the normal RAM + swap file setup, the space that's actually used (the resident space) is in RAM, while the unused virtual space can be backed by swap. Without a swap file, all of the virtual space has to be backed by RAM.

Here's an example. All of my systems run Linux, so I know it's not exactly the same, but the idea is the same. Plus, Linux gives a lot more information about process memory usage than Windows does. Take this example from top:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
839 keith 20 0 170m 60m 22m S 3.8 16.0 5:25.09 firefox-bin

In this example, Firefox is taking up 170MB of virtual space (VIRT), but only 60MB of RAM (RES). Without a swap file, all 170MB would have to reside in RAM.
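If you want to reproduce that split, a toy C program like this should show it (the 170/60 numbers are just borrowed from the firefox row above):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    size_t virt = 170 * 1024 * 1024;  /* allocate 170MB of virtual space */
    char *block = malloc(virt);
    if (block == NULL)
        return 1;

    /* Touch only 60MB of it; only these pages become resident. */
    memset(block, 1, 60 * 1024 * 1024);

    /* Now check top: VIRT shows ~170m for this pid, RES only ~60m. */
    printf("pid %d - compare VIRT and RES in top\n", (int)getpid());
    pause();                          /* keep the process alive */
    return 0;
}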

Why make the OS work harder to schedule processes and manage memory? Just set the swap file to a reasonable size and leave the poor thing alone.
 
Now, Microsoft lets the OS manage the page file at whatever size it needs unless you tell it otherwise by manually selecting an amount. But my point is that 1.0 to 1.5x has stuck around long after 256MB was the norm; it appears to still be active MS advice today!
Yes, it is... only because it's their way of doing things, for the simple fact that it has worked in the past and is more than plenty today.

I've already changed it
This is the point DF (and myself, obviously) was making though... Don't screw around with it.
I can't count how many people, after reading how to "optimize" XP, end up totally jacking their systems up by following stupid advice that offers little or no performance boost whatsoever.
Granted, there were ways you *could* optimize XP, but the rule of thumb in Vista is to just leave it the heck alone. Microsoft spent a lot of time and money; Vista manages RAM very well.
 
Yes, it is... only because it's their way of doing things, for the simple fact that it has worked in the past and is more than plenty today.


This is the point DF (and myself, obviously) was making though... Don't screw around with it.
I can't count how many people, after reading how to "optimize" XP, end up totally jacking their systems up by following stupid advice that offers little or no performance boost whatsoever.
Granted, there were ways you *could* optimize XP, but the rule of thumb in Vista is to just leave it the heck alone. Microsoft spent a lot of time and money; Vista manages RAM very well.

I'm just trying it out - so far, no problems. I copied 16GB from one drive to another, and I'm running four VMs and a start bar FULL of apps without any problem. If it fails, I'll add the page file back; no big deal. My thought is: who cares if Firefox uses 170MB of RAM instead of 60MB when I have 6000MB free???

I'll reply to your reply with this analogy I thought of that may be somewhat applicable!

GM spent a lot of time and money making the Ecotec 4-cylinder engine. It comes in 2.0 and 2.2 liter variations. It makes 205HP, or 260HP supercharged, in max GM production trim. It's their most powerful four-cylinder ever. It was all GM wanted it to be, and they were very proud of their engine after spending countless dollars and hours researching and producing it.

Modders/tuners went ahead and made it jump to over 1200HP. GM only intended it to have 200-250 max. By that logic, those modders don't know anything about what they are doing: GM spent so much time and money designing the engine, so they must know everything. Therefore those crazy tuners must not really know anything at all, despite getting 1200HP out of an engine designed for six times less.



Microsoft doesn't expect the average Vista PC to have 6 or 8GB of RAM; in programming for the common denominator, maybe that's the reason things are set up the way they are??? MS created ReadyBoost to allow PCs with a mere 512MB or 1GB of RAM to run Vista, meaning their target demographic was most likely a PC with 1-2GB of RAM. GM created the Ecotec as a tuner engine with quite a bit of horsepower for its size, but that doesn't mean a tuner with extra resources can't throw additional parts at it and get six times its intended performance. Same with an OS? Maybe, maybe not. My point is that Microsoft may have designed the OS differently if they had a different audience, just like GM might have designed that Ecotec with 1000HP if they thought the primary audience wanted to drag race and had the knowledge to handle it.

I'm trying it at any rate. Doing a Google search, I was amazed how many people have gone without swap files, for years even, and not just with Vista. I'm open to trying things and learning about it...
 
I can tell you this without hesitation: my 36GB Raptor OS drive is happy... I don't even hear the thing now that I've disabled my page file. It used to constantly be popcorning under Vista. Now only my fans make noise. I get popping from the HD when I first boot up my PC, but not just tooling around Windows like I used to... I think I'll ask my work's Microsoft Premier Support rep this coming Monday what his take on the subject is...
 
In this example, Firefox is taking up 170MB of virtual space (VIRT), but only 60MB of RAM (RES). Without a swap file, all 170MB would have to reside in RAM.
Thank you!!!
The links I posted go over this too. Even if you have 6GB, if you are loading massive programs, that can still be eaten up. Well explained though!

My thought is: who cares if Firefox uses 170MB of RAM instead of 60MB when I have 6000MB free???
You don't.
Please, for the love of GOD, go read up on RAM, the page file, and Vista's memory management.

Therefore those crazy tuners must not really know anything at all, despite getting 1200HP out of an engine designed for six times less.
There's your problem with that... SIX TIMES less. Folks have a hard enough time coming up with results showing that disabling the page file gives ANY increase in performance at all, much less SIX TIMES.
 
What if you have 8GB? Can you turn off swap memory?

This is one of those opinionated topics, like UAC and RAID. Try it: if you like it, leave it off; if it causes you problems, turn it back on. I don't think you can tell on an individual machine without just trying it. I personally wouldn't advise it, but then many people tell me I shouldn't run RAID 0. If you do try it, let us know how it goes.
 
I'll reply to your reply with this analogy I thought of that may be somewhat applicable!

GM spent a lot of time and money making the Ecotec 4-cylinder engine. It comes in 2.0 and 2.2 liter variations. It makes 205HP, or 260HP supercharged, in max GM production trim. It's their most powerful four-cylinder ever. It was all GM wanted it to be, and they were very proud of their engine after spending countless dollars and hours researching and producing it.

Modders/tuners went ahead and made it jump to over 1200HP. GM only intended it to have 200-250 max. By that logic, those modders don't know anything about what they are doing: GM spent so much time and money designing the engine, so they must know everything. Therefore those crazy tuners must not really know anything at all, despite getting 1200HP out of an engine designed for six times less.

Your analogy fails for two reasons. TechieSooner mentioned one. Here's the other...

If you knew anything about tuning cars, you would know that the further you push an engine over its stock horsepower, the more parts begin to fail and the more unreliable it becomes. Why? Because the engine wasn't designed to handle that much power. My friend used to have a highly tuned 2nd-gen RX-7, and he had to make repairs on it almost weekly.

The same thing happens when you disable the page file in an operating system... it increases the chance of something going wrong and the OS crashing.

So go ahead... try it... it will probably work fine for a long time, but don't go crying "Microsoft sucks" when you're working on an important document/project and your computer crashes and loses all of your work.
 
If you knew anything about tuning cars, you would know that the further you push an engine over its stock horsepower, the more parts begin to fail and the more unreliable it becomes. Why? Because the engine wasn't designed to handle that much power.

FWIW, same way with CPUs... More heat, more demand, shorter life.

So go ahead... try it... it will probably work fine for a long time, but don't go crying "Microsoft sucks" when you're working on an important document/project and your computer crashes and loses all of your work.

I guess this is the bottom line. I (and some others) are trying to save you some headache and give some advice... don't do it.
Take it or leave it; you run a larger risk.
 
FWIW, same way with CPUs... More heat, more demand, shorter life.



I guess this is the bottom line. I (and some others) are trying to save you some headache and give some advice... don't do it.
Take it or leave it; you run a larger risk.

It's almost like CPU overclocking. At least with CPUs, you have clear-cut improvements that you can sometimes even calculate. With this, you have "?".
 
My analogy was more about the intended audience --- yes, a 1200HP Ecotec engine will be incredibly more prone to failure than a 200HP one.

The audience is different.

I do know about car modding - ha... I have a Grand Prix GTP that I've personally upgraded through two cam jobs (Intense Racing, then ZZP) and some supercharger, valvetrain, fuel delivery, intake, exhaust, suspension, and programming modifications. My four-door grocery-getter Grand Prix makes 480 ft/lbs of torque. I have the dynos to prove it if anybody disbelieves.

That being said - the audience is different!

GM could make a 1200HP Ecotec. They've proven it by making several 1000HP Ecotec engines. In fact, they've got instructions in a book published directly through GM that raise HP levels to 600HP. My supervisor, who attends salt flat racing, showed me a GM book that had the exact steps to reach all manner of HP levels with that engine.

So GM could have done it, but they didn't. Why? The typical audience of the 2.2 liter four-cylinder doesn't need or want a 1200HP engine.

Some enthusiast does, and he's willing to sacrifice a bit of reliability to have some fun with his engine. It means more maintenance for him, but his car is faster than intended.

I overclock my PC; always have, since my first Pentium 100. Along with overclocking comes the occasional glitch/heat issue. A little extra performance is worth that to me. I've no clue on this swap file's benefit so far, outside of no HD noise, as I'll need a few days of testing to determine if there is a better "feel" to normal operation. But if it does possibly make things faster, then it might be worth the slight risk of a bit of trouble with the PC here or there.


I don't have any good full-system benchmark tools to run a before-and-after test to see if there is any difference. Recommendations?

--- but so far, all day long, it's been going well. I appreciate everyone's feedback and answers to the question. I do have a question, though: to the people who say don't do it because it'll cause problems, have you ever tried it? Most of the informational tests on this that I've seen are several years old, using 1GB of RAM and such. The Tom's Hardware review with 8GB of RAM said they saw absolutely no problems with it, and that it seemed to speed things up if anything.
 
--- but so far, all day long, it's been going well. I appreciate everyone's feedback and answers to the question. I do have a question, though: to the people who say don't do it because it'll cause problems, have you ever tried it? Most of the informational tests on this that I've seen are several years old, using 1GB of RAM and such. The Tom's Hardware review with 8GB of RAM said they saw absolutely no problems with it, and that it seemed to speed things up if anything.

Yes, I have tried it. I've run swapless Linux systems before (rootless systems either have networked swap or no swap at all, and networked swap = yucky) and sure, it works fine, but I would NEVER put a system like that into a production environment. If you go even 1 byte over the amount of RAM in the system, the OS has no choice but to start killing processes or just crash.
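If you want to see that failure mode for yourself, here's a toy C sketch (obviously don't run it on a box you care about):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t chunk = 64 * 1024 * 1024;  /* grab 64MB at a time */
    size_t total = 0;
    char *p;

    while ((p = malloc(chunk)) != NULL)
    {
        memset(p, 1, chunk);          /* touch it so it's resident */
        total += chunk;
        printf("%zu MB allocated\n", total / (1024 * 1024));
    }

    /* On a swapless Linux box you often never get here: the OOM
       killer shoots the process during the memset instead. Either
       way, RAM alone is the hard ceiling. */
    printf("malloc failed after %zu MB\n", total / (1024 * 1024));
    return 0;
}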

At the end of the day, that's the heart of the issue. Why would anyone want to intentionally put their system in such a perilous state? By disabling the swap file, all you're doing is making the system work harder to manage memory efficiently and schedule processes. Plus, programs are written with the assumption that swap exists. Paged memory is an inherent, fundamental part of how modern computers work; why would you want to disable that?

If you want to, then I guess go for it; it's your system, all the power to you. But I sincerely hope that you don't do mission-critical work on it, because all it would take is some random program with a memory leak to fill up all the physical RAM and leave the system with no option but to crash.
 