I did some basic calculations for my blog using a 90-minute movie and 128 kbps audio; the results were as follows (there's a sketch after the list showing the working):
* 256 kbps = about 256 MB
* 512 kbps = about 420 MB
* 768 kbps = about 580 MB
* 1200 kbps = about 860 MB
* 1500 kbps = about 1055 MB
* 2000 kbps = about 1370 MB
* 2500 kbps = about 1700 MB
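If you want to reproduce or extend that table, here's a minimal Python sketch. It assumes "kbps" means 1000 bits per second and "MB" means 1024 × 1024 bytes, which is what lines up with most of the rounded figures above (give or take some rounding on my part):

```python
# Rough movie-size estimator: video bitrate plus 128 kbps audio.
# Assumption: 1 kbit = 1000 bits and 1 MB = 1024 * 1024 bytes, which
# is what roughly matches the rounded figures in the list above.

MINUTES = 90
AUDIO_KBPS = 128

def size_mb(video_kbps, minutes=MINUTES, audio_kbps=AUDIO_KBPS):
    bits = (video_kbps + audio_kbps) * 1000 * minutes * 60
    return bits / 8 / (1024 * 1024)

for video in (256, 512, 768, 1200, 1500, 2000, 2500):
    print(f"{video:>5} kbps video = about {size_mb(video):.0f} MB")
```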
However, the real reason for my post is that Google is actually remarkably versatile in its interpretation of input. Try asking Google this:
90 minutes * ((1200 kbps) + (128 kbps))

And you will get the reply:

90 minutes * ((1200 kbps) + (128 kbps)) = 875.390625 megabytes

The first figure is the video bitrate (compared to my older PVR, 1200 kbps is acceptable and doesn't burn through DVD-Rs quite so quickly); the second figure is the audio bitrate.
I'd be inclined to factor in an extra megabyte for "overhead", just to be on the safe side.
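Curiously, Google's exact answer only drops out if you treat a kilobit as 1024 bits and a megabyte as 1024 × 1024 bytes; no other combination produces 875.390625. A quick sketch reproducing it, with my speculative extra megabyte of overhead bolted on at the end:

```python
# Reproduce Google's reply to: 90 minutes * ((1200 kbps) + (128 kbps)).
# Google evidently treats 1 kbit as 1024 bits and 1 MB as 1024 * 1024
# bytes; that is the only combination that lands on 875.390625 exactly.

seconds = 90 * 60
bits = (1200 + 128) * 1024 * seconds
megabytes = bits / 8 / (1024 * 1024)
print(megabytes)        # 875.390625

# With the extra megabyte of "overhead" suggested above:
print(megabytes + 1)    # 876.390625
```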
I'm not going to work out how many hours of stuff you can record onto your harddisc, for two reasons:
1. Are you sure you'll be recording everything at the same bitrate? My 'default' is 1200 kbps. I up this to 1500 or 2000 for things I want to look extra-good, and for disposable programming (stuff I'll watch then delete) I often step down to 768 kbps.
2. Is your harddisc really 320 GB? Or is it closer to 298 GB? If it says 320 on the label, it is probably 298-ish, because for lots of lame reasons lost in time, and ill-justified by harddiscs that serve up 512-byte sectors of 8-bits-in-a-byte on a 16/32 bit interface, harddisc capacities use base-10 maths. In other words, 1 MB = 1000 kB and 1 GB = 1000 MB. This allows manufacturers to "big up" their sizes, but it can really annoy end users when they find the sizes don't match up. For instance, a bit of basic maths with my calculation above indicates that the harddisc manufacturers lying to you (I call it lying regardless of excuses, as pretty much every other part of a computer uses powers of two) costs you a discrepancy of around 35 hours of recorded material; see the sketch after this list.
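If you want to put a number on that discrepancy yourself, here's a rough sketch. It assumes my usual 1200 kbps video plus 128 kbps audio, with a kilobit as 1000 bits; with those defaults it comes out just shy of 40 hours, and the figure swings around a fair bit as you change the bitrate, which is why "around 35 hours" is as precise as I'll get:

```python
# How much recording time vanishes when a "320 GB" harddisc turns
# out to use base-10 gigabytes? Assumes 1200 kbps video + 128 kbps
# audio, with 1 kbit = 1000 bits (a stated assumption, not gospel).

ADVERTISED_GB = 320
advertised_bytes = ADVERTISED_GB * 1000**3   # what the label delivers
expected_bytes = ADVERTISED_GB * 1024**3     # what a binary GB would give
missing_bytes = expected_bytes - advertised_bytes

bytes_per_second = (1200 + 128) * 1000 / 8
missing_hours = missing_bytes / bytes_per_second / 3600
print(f"about {missing_hours:.0f} hours of recording lost")  # ~39 hours
```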
Be careful asking your operating system. Windows XP displays binary GB capacities (i.e. ones you can use with bitrate calculations), while others (Mac OS X..., parts of Linux) display base-10 GB capacities.
Note: while "giga" is "
officially" reserved for base-10 maths, with "gibi" for the binary equivalent, this was an "official" IEC declaration from about 1996. You will find innumerable references to things kilo-, mega-, and even giga- that predates this, not by years but by decades. That's not to say base 10 is wrong, it isn't. What is wrong is using the same prefixes for binary maths, however one simple little decision in the world of officialdom isn't going to fix some fifty years of computer history, and the end user will only suffer when purchasing a 2Gb memory module (which is as big as is expected) and a 500Gb harddisc (which is somewhat smaller than is expected).
This has gone on too long. Can you tell there's nothing interesting on TV right now?
