Re: [ARSCLIST] Cassette obsolescence - digitizing standards
Another question...
When you say "16-bit sample," do you mean that the value is
represented by a 16-digit binary number, and can thus take any of
65,536 values (from 0 to 65,535), right? So a stored step value of,
say, 30000 actually represents an analog value that could be
anywhere from 29999.5 up to just under 30000.5, which makes the
maximum inaccuracy 1/(65536/2)... is that right?
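The arithmetic in the question above can be sketched out concretely. This is my own illustration, not from the thread; the stored value 30000 is just the example used in the question:

```python
# 16-bit quantization: how many distinct values a sample can take,
# and the analog range that a single stored step represents.

bits = 16
levels = 2 ** bits          # 65,536 distinct values (0 .. 65,535)

stored = 30000              # example step value from the question
low = stored - 0.5          # smallest analog value that rounds to 30000
high = stored + 0.5         # analog values up to (not including) this
                            # also round to 30000

print(levels)               # 65536
print(low, high)            # 29999.5 30000.5
```

So the worst-case rounding error for any one sample is half a step, exactly as the question suggests.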
OK, assuming your math is accurate (which I have taken as a given
rather than working out myself), yes, a possible error of one point
would give that range of 29,999.5 to 30,000.5. So what would a
14-point error give instead? And again I ask: which is more
acceptable, the difference that one point in value will give, or
the difference that 14 points will give?
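The comparison being argued here can be put in numbers. A small sketch (the 14-count figure is taken from the post above; the fractions are my own arithmetic) showing each error as a fraction of the 65,536-step range:

```python
# Compare a 1-count error against a 14-count error in a 16-bit
# system, each expressed as a fraction of the full 65,536-step range.

full_scale = 2 ** 16

for err in (1, 14):
    frac = err / full_scale
    print(f"{err}-count error: {frac:.6f} of full scale")

# Whatever the absolute scale, the 14-count error is 14 times
# larger than the 1-count error.
```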
I'm not claiming that digital is in any way perfect, but when it's
provable mathematically that you can have such a large error versus
such a small one, why accept the large error simply on the grounds
that "it's digital, which means it's inaccurate to start with"?
Your argument certainly tells me that I'd never hire you to digitize
anything of any archival importance for me, nor would I ever
sub-contract work out to someone willing to take that approach.