It seems to me that it's pretty easy to enable a disabled service. Would folks make the same argument against overclocking? It's just as easy to encounter mystery problems down the line, and just as easy to overlook the fact that the overclock might be the culprit, and just as easy to set the clock back down to stock speeds once you figure it out.
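To make the point concrete: toggling a service's startup type is a one-line operation with the built-in `sc.exe` tool. This is a sketch only; "Themes" is just a placeholder service name, and `start=` values vary by what you want (the space after `start=` is required by `sc`).

```shell
:: Disable a service at next boot (example service name: Themes)
sc config Themes start= disabled

:: Undo it later if problems surface -- just as easy
sc config Themes start= auto
net start Themes
```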
I think the burden of proof lies on the wrong side of the argument in the opinions of those arguing against disabling services. Why would someone have to prove that 38 running processes is a leaner configuration than 40 running processes? In the real (non-computer) world, when you performance tune something, it's a whole lot of little things that add up. Microscopic changes may not be individually measurable, but do you really want the guy who designs the 777 you're flying in to "eyeball" it? Or would you rather he got under the microscope and optimized each and every material for its intended purpose? It's already a given that, under most circumstances, you can't measure the impact of disabling a single service by observation at the macro level.
Conversely, would everyone agree that we've all experienced times when our computers seemed to scream with speed, and other times when things felt a little sluggish? I'm not suggesting services have any impact on that at all. What I am saying is this: how do you quantify what is sluggish and what is not? If we can't even agree on what we're measuring, then how do we weigh the various evidence being presented?
If you want observable performance gains, then the best thing to do is increase your mouse pointer speed. If you want to win this argument, then good luck. It'll never happen here.
Of course everyone does agree that disabling services was the right thing to do prior to XP SP2, right?