Why is my hard drive smaller than the advertised capacity?

This, or if it's such a big deal, charge the roughly 10% more it takes to make up for it and provide a full 1TB, 2TB etc., so the numbers come out round.
 
And I want to buy them for a round 200 marks or 600 francs, but that's not happening either; those currencies are long gone.
Let's stop dreaming: a kilo never meant 1024 in the first place, and it's not going back to 1024 now that Mac OS uses the decimal system for hard drive capacities. The only problem is that Windows still pretends that a kilo is 1024 and that a TiB is a TB; that's why our hard drives seem smaller than advertised.
 
From a text file I keep on my server:

Code:
Random Information that's good to know:

Most hard drives are measured in gigabytes nowadays,
but manufacturers rate their capacity in base 10.

I.e.  1 GB = 1 billion bytes or 1,000,000,000 bytes.

When in reality there are quite a few more bytes in an "actual" gigabyte,
which is what computer programs use.
Usable storage space is measured in base 2.

I.e.  1 GB = 1024*1024*1024 bytes or 1,073,741,824 bytes.

This means you only get roughly 93.13% of the storage space advertised on any given drive.

(1,000,000,000 / 1,073,741,824 = .931322574615478515625)
So in reality, you're not quite getting what you have paid for.

UPDATE: As terabyte drives become more popular, it might be easier to show an updated percentage.
---------------------
Marketing 1 TB = 1 trillion bytes or 1,000,000,000,000 bytes.
Binary    1 TB = 1024*1024*1024*1024 bytes or 1,099,511,627,776 bytes.
This means you only get roughly 90.94% of the storage space advertised in TB.
(1,000,000,000,000 / 1,099,511,627,776 = .909494701772928237915)

So quick rules of thumb:
GB = 93.13% of labeled
TB = 90.94% of labeled
---------------------

Now keep in mind this is only for IDE/ATA drives.  
SCSI drives always list storage space in base 2.

Below is a list of common sizes of drives currently available, 
and their actual storage space:

Label: | Actual:

40GB   |   37.25GB
80GB   |   74.51GB
120GB  |  111.76GB
160GB  |  149.01GB
200GB  |  186.26GB
250GB  |  232.83GB
300GB  |  279.40GB
320GB  |  298.02GB
400GB  |  372.53GB
500GB  |  465.66GB
640GB  |  596.05GB
750GB  |  698.49GB
1000GB |  931.32GB |  .91 TB
1500GB | 1396.98GB | 1.36 TB
2000GB | 1862.65GB | 1.82 TB
3000GB | 2793.97GB | 2.73 TB
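
If anyone wants to sanity-check the table, here's a quick sketch in TypeScript (the helper name is mine, purely for illustration) that converts a labeled decimal capacity into the binary-unit figure the OS will report:

Code:
// Sketch only: convert a manufacturer-labeled (decimal) capacity
// into the binary-unit figure an OS reporting in base 2 will show.
const GIB = 1024 ** 3; // bytes per GiB
const TIB = 1024 ** 4; // bytes per TiB

function reportedCapacity(labeledGB: number): { gib: number; tib: number } {
  const bytes = labeledGB * 1_000_000_000; // the label is base 10
  return { gib: bytes / GIB, tib: bytes / TIB };
}

const r = reportedCapacity(250);      // a "250GB" drive
console.log(r.gib.toFixed(2) + "GB"); // "232.83GB"
console.log(reportedCapacity(3000).tib.toFixed(2) + " TB"); // "2.73 TB"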
 
Even if they decide to adopt either the metric way or the binary way, there will always be less HDD space than advertised due to the space reserved by formatting, although that loss is far smaller than the gap from a metric capacity being reported in binary units.
 
The space lost to formatting is probably insignificant anyway. Isn't it in the MB range, if that?
 
I'd really prefer it if all data storage were sold and reported properly in binary format, even if it means HDDs will cost slightly more. I bet any manufacturer who started selling drives with actual binary capacities would start to see more enthusiast sales of those drives; of course, to the mainstream it wouldn't matter, since round numbers market more easily, I guess.
 
I'd really prefer it if all data storage were sold and reported properly in binary format, even if it means HDDs will cost slightly more.

I prefer that drives are now the same size from manufacturer to manufacturer, so I can replace a Seagate 1TB drive in a RAID with a 1TB WDC drive without worrying about whether one is slightly bigger than the other. To me, increasing the size of the drive slightly to work around a long-standing Windows display bug is not a good idea.
 
When talking about 1/2/3TB drives, the difference is not slight at all! It's more than 9%. So doing that would be a major pain, and the price would also be significantly higher. Personally, I try to back up from one brand of HDD to another, so if one brand went one way and the other went another, I'd be in trouble (already, for some reason, I always end up with the same data taking somewhat more space on one drive than on the other). And since we'll soon have only two HDD manufacturers...
 
heh, I got my warranty OCZ Vertex 2 back, and it's missing 5 gigs.

The original 60 gig SSD I sent them came back as 55 gigs because of their *new* design.
 
Zarathustra[H];1036259585 said:
What you fail to mention is that this was not the case until the IEC, under strong lobbying from drive manufacturers, adopted formal base-ten definitions of gigabyte etc. in 1994.

Overnight, sleazy hardware manufacturers switched their labeling in order to cheat consumers and charge more money for the same drive sizes. No one really complained much in those days, as average drive sizes were pretty small, so it didn't have a huge impact.

Today, with drives expressed in TB, the difference between the two methods is 10%! That's 200GB on a 2TB drive.

It's disgusting and sleazy.

Overnight? HDD/tape/floppy/etc. have always been measured using powers of ten. So have network/bus throughput and clock rates. The only thing that was ever measured using powers of two was storage attached to an address bus. The reason for that was that adding a bit line to an address bus doubled its addressable capacity. To get 1,000 bytes of RAM, you had to be able to address 1024 bytes. Since the difference was negligible, and because it took more memory (generally more than the 24-byte difference) and processor time to calculate an exact value, it was easier to label 1024 bytes as a KB.
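
The address-bus arithmetic is easy to verify; a tiny TypeScript sketch (illustrative only):

Code:
// Each added address line doubles the addressable space, so
// capacities come in powers of two. To address 1,000 bytes:
const lines = Math.ceil(Math.log2(1000)); // 10 address lines needed
const addressable = 2 ** lines;           // which gives 1024 bytes
console.log(lines, addressable);          // 10 1024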


Yeah, I just got a 3TB disk and I'm missing over 300GB!

You're not missing anything. Misrepresented, maybe, but all 3,000,000,000,000 bytes are present and accounted for. In fact, I bet you got a few million extra to play with for your trouble.
 
Someday Microsoft will fix their bug and print KiB, MiB, GiB and TiB as the units and then no one can complain.

And that would be a serious step backwards.

Instead, the IEC needs to be walloped with the bat of reason, and all storage representation reverted to the proper binary format, without the i.
 
And that would be a serious step backwards.

To me it's a step forward. I do not believe KB should ever have been defined as 1024 bytes in any context.

To me, memory manufacturers should start using the proper term.
 
To me it's a step forward. I do not believe KB should ever have been defined as 1024 bytes in any context.

To me, memory manufacturers should start using the proper term.

But I like getting 17 billion bytes of RAM for the price of 16 GB.
 
Zarathustra[H];1039494538 said:
Instead, the IEC needs to be walloped with the bat of reason, and all storage representation reverted to the proper binary format, without the i.

Don't be absurd. The proper definitions of K, M, and G have been fixed since the beginning of the metric system, more than two hundred years ago.

These johnny-come-latelies cannot just change the definitions to suit their purposes.

Microsoft needs to fix their units display bug and show Ki, Mi, Gi, Ti when using binary units.
 
To me it's a step forward. I do not believe KB should ever have been defined as 1024 bytes in any context.

To me, memory manufacturers should start using the proper term.

Right. This is obvious to anyone with any background in science or engineering. Such terms have precise meanings, and it only creates confusion to change them around or use them for different purposes.
 
Don't be absurd. The proper definitions of K, M, and G have been fixed since the beginning of the metric system, more than two hundred years ago.

These johnny-come-latelies cannot just change the definitions to suit their purposes.

Microsoft needs to fix their units display bug and show Ki, Mi, Gi, Ti when using binary units.

Right. This is obvious to anyone with any background in science or engineering. Such terms have precise meanings, and it only creates confusion to change them around or use them for different purposes.


If this were the 1960s (or whenever the scheme was devised) and we were doing it all from scratch, I would agree with you.

But the binary definition of K M G, etc. is one that has been used since the beginning of computers.

A separate definition for a new and different technology.

Changing it to a 10 base system is what reverts decades of precedent.

IMHO, creating the new symbolism for binary systems in KiB, MiB, and GiB created more confusion than having a separate definition of kilo, mega, and giga for computers ever did.

That, and it's a real pain to type in comparison.
 
Zarathustra[H];1039497191 said:
If this were the 1960s (or whenever the scheme was devised) and we were doing it all from scratch, I would agree with you.

There was no "scheme" devised in the 1960s. It was simply a sloppy mistake, using K for 1024 instead of 1000.

The metric system has defined K as 1000 for more than 200 years.

Trying to change that longstanding definition is foolish and only creates confusion.

There is an excellent definition for the binary units: Ki, Mi, Gi, Ti, etc.

It is best to use the decimal units and prefixes whenever possible, since they are easier to work with in a base-10 system. But if you must use binary units, then you need to use the correct Ki prefixes. Otherwise you are just wrong and perpetuating a mistake that creates confusion.
 
Zarathustra[H];1039497191 said:
If this were the 1960s (or whenever the scheme was devised) and we were doing it all from scratch, I would agree with you.

But the binary definition of K M G, etc. is one that has been used since the beginning of computers.

<snip>

Changing it to a 10 base system is what reverts decades of precedent.

IMHO, creating the new symbolism for binary systems in KiB, MiB, and GiB created more confusion than having a separate definition of kilo, mega, and giga for computers ever did.

That, and it's a real pain to type in comparison.

The SI system is about standards, and EVERY SI unit follows the exact same standard for prefixes: each prefix corresponds to a fixed power of ten. That had centuries of precedent until lazy programmers came along and bastardized the system for their own convenience. So then we had 20 years of dealing with two definitions of kilo that depended on context... which is exactly what the SI system was meant to avoid.

A separate definition for a new and different technology.

Were radiation, magnetism, and electricity not new and different when they were discovered? They all use the standard definition for prefixes for their units.

I do think the standardized binary definitions are ridiculous. Just say 1k = 1,000 etc and be done with it. No confusion.
 
Anyways, this is getting worse. I needed 3TB, not 2.7TB. Now I don't have enough disk space. It was negligible when disks were smaller, but now the loss is HUGE. I've been hoping a manufacturer would introduce TrueCapacity™ disks, but seeing as there is now only one manufacturer, the competition to do it isn't there any more.
 
Anyways, this is getting worse. I needed 3TB, not 2.7TB. Now I don't have enough disk space. It was negligible when disks were smaller, but now the loss is HUGE. I've been hoping a manufacturer would introduce TrueCapacity™ disks, but seeing as there is now only one manufacturer, the competition to do it isn't there any more.

You actually needed 3.3TB. :D You bought a 3TB drive, and you got 3TB. There is no "loss" because your bytes aren't disappearing. They're just being represented differently... which is what the IEC was trying to correct with its standard.


This is the same thing as driving down the highway at 50 MPH and getting a ticket for going 80 in a 50 because the country you're in uses km/h on its road signs. The difference is that here the very different quantities 10^12 and 2^40 are expressed using the exact same unit.
 
Anyways, this is getting worse. I needed 3TB, not 2.7TB. Now I don't have enough disk space. It was negligible when disks were smaller, but now the loss is HUGE. I've been hoping a manufacturer would introduce TrueCapacity™ disks, but seeing as there is now only one manufacturer, the competition to do it isn't there any more.

The percentage difference has always been the same. If you had three 1TB drives, you would have 2.7TB of actual hard drive space as well. That's the way it's been, and that's the way it probably will continue to be.

Also, I'm pretty sure it's significantly easier for programmers and hardware manufacturers to build things on the base-2 system. It's only hard drive manufacturers that decided to inflate their numbers by using the base-10 system when counting hard drive capacity.
 
The percentage difference has always been the same. If you had three 1TB drives, you would have 2.7TB of actual hard drive space as well. That's the way it's been, and that's the way it probably will continue to be.

As long as you're using the same prefix, sure. But each larger prefix has a larger percentage difference between the binary and decimal representations:
Kilo = -2.34% or +2.40%
Mega = -4.63% or +4.86%
Giga = -6.87% or +7.37%
Tera = -9.05% or +9.95%
Peta = -11.18% or +12.59%
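
Those figures fall straight out of the ratio between 1000^n and 1024^n; a quick TypeScript sketch to reproduce them (names are mine, illustrative only):

Code:
// Percentage gap between decimal and binary meanings of each prefix.
const prefixes = ["Kilo", "Mega", "Giga", "Tera", "Peta"];
prefixes.forEach((name, i) => {
  const n = i + 1;
  const down = (1000 ** n / 1024 ** n - 1) * 100; // decimal relative to binary
  const up = (1024 ** n / 1000 ** n - 1) * 100;   // binary relative to decimal
  console.log(`${name} = ${down.toFixed(2)}% or +${up.toFixed(2)}%`);
});
// Prints "Kilo = -2.34% or +2.40%" ... "Peta = -11.18% or +12.59%"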

Also, I'm pretty sure it's significantly easier for programmers and hardware manufacturers to build things on the base-2 system. It's only hard drive manufacturers that decided to inflate their numbers by using the base-10 system when counting hard drive capacity.

There's no difference between typing
Code:
if (size > 1000)
  output = (size / 1000).toString() + " KB";
else
  output = size.toString() + " bytes";

vs

Code:
if (size > 1024)
  output = (size / 1024).toString() + " KB";
else
  output = size.toString() + " bytes";
 
Don't be absurd. The proper definitions of K, M, and G have been fixed since the beginning of the metric system, more than two hundred years ago.

That's not going to work in the US, where people are still using medieval measuring systems like feet, pounds, and inches.

Still, the stupid one I have a problem with is the damned kilobits and megabits. They're pretty much only used to trick people into thinking stuff is faster than it is (an ISP will never sell things as "megabit" anything, just "100 meg!" or "1000 k!", to purposely mislead people). :D
 
As long as you're using the same prefix, sure. But each larger prefix has a larger percentage difference between the binary and decimal representations:
Kilo = -2.34% or +2.40%
Mega = -4.63% or +4.86%
Giga = -6.87% or +7.37%
Tera = -9.05% or +9.95%
Peta = -11.18% or +12.59%



There's no difference between typing
Code:
if (size > 1000)
  output = (size / 1000).toString() + " KB";
else
  output = size.toString() + " bytes";

vs

Code:
if (size > 1024)
  output = (size / 1024).toString() + " KB";
else
  output = size.toString() + " bytes";

Coding in C++ (and other languages) is extremely different from writing machine code. For machine code, the binary system is much simpler (0 or 1, aka "on" or "off").
 
alright fine,

Code:
mov eax, [size]
cmp eax, 1024
jg KILOBYTE

mov esi, [bytestring]
jmp DISPLAYSIZE

KILOBYTE:
mov esi, [kbytestring]
mov ecx, 1024       ; div needs a register operand
xor edx, edx        ; clear edx for the unsigned divide
div ecx             ; eax = size / 1024

DISPLAYSIZE:
...  code for displaying size ...

my assembly is incredibly rusty... but the point is that the calculation is trivial to implement in any sort of code. Computers are designed to handle ANY value within a given range, not just 1 and 0. Once upon a time, when hardware was extremely limited and capacities generally followed power-of-two increases (because adding a bit line had that effect), it was better to round and store a number without a decimal... but that's not the case today. It's certainly still easier to design hardware with power-of-two sizes, but it's a trivial matter to calculate the resulting prefixed size for output to the user.
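
And in a high-level language the whole prefix calculation is a few lines either way; a TypeScript sketch (function and unit names are mine, illustrative only), parameterized on the divisor:

Code:
// Walk up the prefixes, dividing by the chosen base at each step.
// Pass 1000 for decimal units or 1024 for binary-style math.
function formatSize(bytes: number, base: 1000 | 1024 = 1024): string {
  const units = ["bytes", "KB", "MB", "GB", "TB"];
  let value = bytes;
  let i = 0;
  while (value >= base && i < units.length - 1) {
    value /= base;
    i++;
  }
  return `${value.toFixed(2)} ${units[i]}`;
}

console.log(formatSize(3_000_000_000_000));       // "2.73 TB" (binary math, decimal-looking label)
console.log(formatSize(3_000_000_000_000, 1000)); // "3.00 TB"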
 
The SI system is about standards, and EVERY SI unit follows the exact same standard for prefixes: each prefix corresponds to a fixed power of ten. That had centuries of precedent until lazy programmers came along and bastardized the system for their own convenience. So then we had 20 years of dealing with two definitions of kilo that depended on context... which is exactly what the SI system was meant to avoid.



Were radiation, magnetism, and electricity not new and different when they were discovered? They all use the standard definition for prefixes for their units.

I do think the standardized binary definitions are ridiculous. Just say 1k = 1,000 etc and be done with it. No confusion.

Byte is not, and never has been, an SI unit. I agree with the poster you quoted. The computing and software industry was using kilobyte to mean 1024 bytes long before the RAM and HDD manufacturers came along and conveniently changed the definition for marketing purposes. The RAM manufacturers learned their lesson after serious blowback from consumers (and a lawsuit); I guess the HDD manufacturers did not. Hey, but at least 4GB of RAM today is 4,294,967,296 bytes and not 4,000,000,000 bytes, like it could have been if the industry had let that transgression slide.

alright fine,

Code:
mov eax, [size]
cmp eax, 1024
jg KILOBYTE

mov esi, [bytestring]
jmp DISPLAYSIZE

KILOBYTE:
mov esi, [kbytestring]
mov ecx, 1024       ; div needs a register operand
xor edx, edx        ; clear edx for the unsigned divide
div ecx             ; eax = size / 1024

DISPLAYSIZE:
...  code for displaying size ...

my assembly is incredibly rusty... but the point is that the calculation is trivial to implement in any sort of code. Computers are designed to handle ANY value within a given range, not just 1 and 0. Once upon a time, when hardware was extremely limited and capacities generally followed power-of-two increases (because adding a bit line had that effect), it was better to round and store a number without a decimal... but that's not the case today. It's certainly still easier to design hardware with power-of-two sizes, but it's a trivial matter to calculate the resulting prefixed size for output to the user.

Base 2 is most useful when addressing memory and calculating offsets. Hell, it's even useful for storage since, you guessed it, hard drive sectors use base-2 measurements despite total capacity being base 10. Hard drive manufacturers can't even be self-consistent: a 4K HDD sector is 4096 bytes.
 
Base 2 is most useful when addressing memory and calculating offsets. Hell, it's even useful for storage since, you guessed it, hard drive sectors use base-2 measurements despite total capacity being base 10. Hard drive manufacturers can't even be self-consistent: a 4K HDD sector is 4096 bytes.

I think they should leave the calculation alone. However, Microsoft needs to fix its unit display so that it is using the correct units.
 
It was the computer/hardware people who misappropriated the SI prefixes for use in base-2 situations, i.e., they screwed it up first. The legacy of a megabyte = 1,048,576 bytes is the result of a convenient "oh, 1024 ≈ 1000, so we'll say kilobyte instead of 1024 bytes" that has been held over for way too long. The binary prefixes exist and are unambiguous, so why not just use them?

That being said, it is not just storage in computing which uses standard SI prefixes: network transfer speeds also use standard prefixes (100 Mbit = 100 000 000 bit/s), as do processor speeds (3 GHz = 3 000 000 000 Hz), interface speeds (6 Gbit SATA III = 6 000 000 000 bit/s), and even RAM transfer speeds (1600 MHz RAM = 1 600 000 000 Hz!). It is only RAM and cache memory size that bucks the trend.

The issue is not the byte. It is the prefix!
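
To make the transfer-speed point concrete: a "100 megabit" line really is 100 000 000 bit/s, which looks smaller once converted to binary units. A TypeScript sketch (illustrative only):

Code:
// "100 megabit" Ethernet uses the decimal prefix: 100,000,000 bit/s.
const bitsPerSecond = 100 * 1_000_000;
const mbPerSecond = bitsPerSecond / 8 / 1_000_000;  // decimal MB/s
const mibPerSecond = bitsPerSecond / 8 / 1024 ** 2; // binary MiB/s
console.log(mbPerSecond.toFixed(2));  // "12.50"
console.log(mibPerSecond.toFixed(2)); // "11.92"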
 
The appropriate way to do this would be to do away with the kibi crap altogether.

Then clearly state that a "byte" is NOT an SI unit, and as such SI definitions for prefixes don't apply; instead, the more rational binary definitions of the prefixes will be used.
 
Zarathustra[H];1039498341 said:
The appropriate way to do this would be to do away with the kibi crap altogether.

Then clearly state that a "byte" is NOT an SI unit, and as such SI definitions for prefixes don't apply; instead, the more rational binary definitions of the prefixes will be used.

No, the appropriate way to deal with this is to not use SI prefixes inconsistently. Use binary prefixes for binary definitions, and do not use a binary definition for a universally understood decimal prefix.
 
Well, I for one refuse to insert any i's in my byte prefixes, and use them in their binary definitions and that's that. No negotiation.
 
Zarathustra[H];1039498617 said:
Well, I for one refuse to insert any i's in my byte prefixes, and use them in their binary definitions and that's that. No negotiation.
Your gigabytes have "i"s in them.;)

Nits aside, that's your prerogative. Just don't complain about people using prefixes as per their SI meanings.
 
Your gigabytes have "i"s in them.;)

Nits aside, that's your prerogative. Just don't complain about people using prefixes as per their SI meanings.

But, as has been mentioned before, the SI prefixes are really quite pointless as applied to a byte, as a byte is NOT an SI unit.

Look at any comprehensive list of primary and derived SI units. The byte is not on it.

Some examples:
http://www.unitarium.com/si-units
http://lamar.colostate.edu/~hillger/basic.htm
http://physics.nist.gov/cuu/Units/units.html
http://en.wikipedia.org/wiki/Category:SI_units

etc. etc.

Why would one conform to SI nomenclature and apply it to something that is not an SI unit?

It makes about as much sense as talking about a kilo-inch, a mega-foot, a giga-ounce or a tera-gallon.

Instead, it makes much more sense to have a non-SI definition for one's prefixes for something that is not an SI unit, a definition particularly suited to the unit itself.

Some might argue that kibi, mebi, and so on are that definition, but not only do they sound preposterous when spoken and are an annoyance to type, they also upend a de facto standard used since the early days of computing, to the benefit of no one apart from sleazy storage-hardware marketers wanting to sell less for more.
 
Zarathustra[H];1039499169 said:
It makes about as much sense as talking about a kilo-inch, a mega-foot, a giga-ounce or a tera-gallon.

Well, you're exactly right. A megafoot doesn't make sense, so a megabyte doesn't make sense either.
 
Zarathustra[H];1039499169 said:
But, as has been mentioned before, the SI prefixes are really quite pointless as applied to a byte, as a byte is NOT an SI unit.

Look at any comprehensive list of primary and derived SI units. The byte is not on it.

Some examples:
http://www.unitarium.com/si-units
http://lamar.colostate.edu/~hillger/basic.htm
http://physics.nist.gov/cuu/Units/units.html
http://en.wikipedia.org/wiki/Category:SI_units

etc. etc.

Why would one conform to SI nomenclature and apply it to something that is not an SI unit?

It makes about as much sense as talking about a kilo-inch, a mega-foot, a giga-ounce or a tera-gallon.

Instead, it makes much more sense to have a non-SI definition for one's prefixes for something that is not an SI unit, a definition particularly suited to the unit itself.

Some might argue that kibi, mebi, and so on are that definition, but not only do they sound preposterous when spoken and are an annoyance to type, they also upend a de facto standard used since the early days of computing, to the benefit of no one apart from sleazy storage-hardware marketers wanting to sell less for more.
There is such a thing as a kip (a kilopound, 1000 pounds-force), and ksi, kips per square inch, a measure of pressure comparable to the megapascal. Used all the time, with intense derision, in engineering schools. I hate kips. Worst unit ever. See Wiki: it's got microinches and, incidentally, kilofeet.

Again, the unit is not the issue. It is the prefix that was incorrectly used by the earliest computer pioneers. Is a bit a standard SI unit? No. Doesn't matter. It still uses a standard SI prefix, but inconsistently with how everyone else uses it.

You argue for the "standard" in computing, except it is no standard at all. Computers use no fewer than 3 different definitions of the mega prefix:

1 000 000x, as per network transfer speeds (100 Mb Ethernet), clock speeds (GHz), RAM speeds (1600 MHz), hard drive capacities (500GB), pixel counts (8 megapixel cameras), pixel clocks (165 MHz), interface speeds (6 Gbps SATA III), CD/DVD speeds, and DVD capacities.
1 024 000x, as per 1.44 "MB" floppy disks.
1 048 576x, as per RAM and certain operating system representations.

See http://en.wikipedia.org/wiki/Metric_prefix for the prefix issue. You are right that a byte isn't an SI unit. Mega-, however, is an SI prefix, and one should not just willy-nilly co-opt a term for a use in which it is not appropriate.
 
There is such a thing as a kip (a kilopound, 1000 pounds-force), and ksi, kips per square inch, a measure of pressure comparable to the megapascal. Used all the time, with intense derision, in engineering schools. I hate kips. Worst unit ever. See Wiki: it's got microinches and, incidentally, kilofeet.

Again, the unit is not the issue. It is the prefix that was incorrectly used by the earliest computer pioneers. Is a bit a standard SI unit? No. Doesn't matter. It still uses a standard SI prefix, but inconsistently with how everyone else uses it.

You argue for the "standard" in computing, except it is no standard at all. Computers use no fewer than 3 different definitions of the mega prefix:

1 000 000x, as per network transfer speeds (100 Mb Ethernet), clock speeds (GHz), RAM speeds (1600 MHz), hard drive capacities (500GB), pixel counts (8 megapixel cameras), pixel clocks (165 MHz), interface speeds (6 Gbps SATA III), CD/DVD speeds, and DVD capacities.
1 024 000x, as per 1.44 "MB" floppy disks.
1 048 576x, as per RAM and certain operating system representations.

See http://en.wikipedia.org/wiki/Metric_prefix for the prefix issue. You are right that a byte isn't an SI unit. Mega-, however, is an SI prefix, and one should not just willy-nilly co-opt a term for a use in which it is not appropriate.

A 3 1/2" High Density floppy had 80 tracks, 18 sectors per track, 512-bytes per sector for a total of 1,474,560 bytes which is exactly 1.44 MB (in base2). Base2 is a standard in computer storage and always was until HDD and RAM manufactures tried there screwy marketing practices. Both faced blowback and lawsuits, and both lost. RAM vendors went back to using binary for computer storage measure as per the de facto standard, HDD vedors did not and instead include a disclaimer on the box or specs that states the size of the drive in bytes or in sectors. For instance the 4TB WD4001FAEX has 7,814,037,168 sectors each 512-bytes for a total of 4,000,787,030,020 bytes. The courts restored order to the madness.

Hz is an SI unit of frequency and not a measure of computer storage and was never base2 (obviously, just stating for completeness). Network transfer is likewise not a measure of computer storage so it was never base2. Base2 for computer storage came out of software/firmware for addressing computer storage. Example being, 4K partition alignment is important with SDDs, you wouldn't want to align 4KB sectors on bytes divisible by 4000, you'd get poor performance, you want to do it on bytes divisible by 4096 (actually you can use a lower common denominator but this is just an example).
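
The sector arithmetic above is easy to check; a TypeScript sketch (illustrative only):

Code:
// 3.5" HD floppy: 80 tracks x 2 sides x 18 sectors x 512 bytes.
const floppyBytes = 80 * 2 * 18 * 512;    // 1,474,560 bytes
console.log(floppyBytes / (1024 * 1000)); // 1.44 -- the mixed "MB" unit

// WD4001FAEX: 7,814,037,168 sectors of 512 bytes each.
const wdBytes = 7_814_037_168 * 512;           // 4,000,787,030,016 bytes
console.log((wdBytes / 1e12).toFixed(2));      // "4.00" decimal TB, as labeled
console.log((wdBytes / 1024 ** 4).toFixed(2)); // "3.64" TiB, as the OS reports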
 
A 3 1/2" High Density floppy had 80 tracks, 18 sectors per track, 512-bytes per sector for a total of 1,474,560 bytes which is exactly 1.44 MB (in base2).
1474560 bytes = 1440 kiB - or 1.40 MiB. Or 1.47 MB. Or 1.44 "MB" (1024 * 1000 * 1.44).

Base 2 is a standard in computer storage and always was, until the HDD and RAM manufacturers tried their screwy marketing practices. Both faced blowback and lawsuits, and both lost. RAM vendors went back to using binary for computer storage measurements as per the de facto standard; HDD vendors did not, and instead include a disclaimer on the box or in the specs stating the size of the drive in bytes or in sectors. For instance, the 4TB WD4001FAEX has 7,814,037,168 sectors of 512 bytes each, for a total of 4,000,787,030,016 bytes. The drive isn't an exactly round 4 trillion bytes because the capacity falls out of the count of 512-byte (or 4KB, i.e. 4096-byte) sectors. The courts restored order to the madness.
I do not recall RAM manufacturers ever going to a base-ten definition of "megabyte/gigabyte" (or their subsequent backpedalling). Perhaps you could enlighten me?

That does not excuse the original storage gurus from misappropriating the kilo/mega/giga prefixes for quantities which bore only a passing resemblance to the actual definitions of those terms. They took an existing standard and bent it; that may have become the de facto standard, but it is not in accordance with what the rest of the world understands to be correct.

Hz is an SI unit of frequency and not a measure of computer storage, and it was never base 2 (obviously; just stating it for completeness). Network transfer is likewise not a measure of computer storage, so it was never base 2. Base 2 for computer storage came out of software/firmware addressing computer storage. For example, 4K partition alignment is important with SSDs: you wouldn't want to align 4KB sectors on byte offsets divisible by 4000, you'd get poor performance; you want to align them on byte offsets divisible by 4096 (actually you can use a smaller divisor, but this is just an example).
Again, it was a prefix of convenience. There are now proper, current, and most importantly non-ambiguous binary prefixes: the kibibyte, the mebibyte, the gibibyte, the tebibyte, etc. I do not dispute that storage is better thought of in base 2. That does not mean it should be OK to use decimal prefixes when the reference is actually binary. I used to think that of course a megabyte = 1024 kilobytes! But I no longer think that way, having encountered way too many situations where the difference between binary and decimal prefixes has caused misunderstandings.
 