[ARSCLIST] The Hope of Audacity...
Hi, Friends,
There are a variety of issues that have been raised in this thread.
First is the disappointment others have mentioned that the tools we're
buying are not doing what we think they're doing. Namely, we feed them
data (the output of the A to D converter) and that data is being
changed between the input and what gets written out to the file.
Given the commercial pressures we all understand, writing stable code
that has a few "quirks no one will find" could be seen as a simpler
path to making payroll than digging deep into the myriad problems
involved and securing the necessary cooperation from other vendors.
Interoperability is a non-trivial issue.
Second, this is more than just an issue of "you get what you pay for."
Sure, Sonic and Pyramix tend to perform better than the desktop
applications. But they control much more of the signal chain. The
lower-priced programs hit their price point by handing off (or
depending upon, or using) the work of others (WaveLab, Samplitude,
SoundForge, etc. let M-Audio, RME, PreSonus, etc. build interfaces, for
instance). But who wants to be working with 20 different companies
to assure interoperability? Ideally, everybody does. The reality
is, given the number of hours in the day, are you going to add the
cool new feature, or chase down a bug in somebody else's code?
Apple tried to make the world a simpler place with CoreAudio. Great
idea. When it works.
Yes, they should begin with integrity of signal. But if that's an
obstacle, they build a workaround (a "trap"). And it suits the needs
of 99% of their users (and appears to have snookered a far larger
percentage!). Who are we to deprive a bezillion garage bands of the
features they need and pay for, just so we can have the needs of our
small numbers met?
Since the tool vendors aren't doing this work, each of us needs to
shake down our own systems. It's a royal pain in the butt. Every change
in the system means re-checking it. OK, so Peak 6 will now preserve
the bext chunk (it used to blow it away). But now it embeds waveform
display info in the INFO chunk. That's perfectly acceptable according
to the .wav spec. But it means there's garbage there that, 25 years
from now, some preservationist is going to sweat over recovering,
because "if it's in the file it must have been important." Every
program, every OS, every piece of hardware, every driver change has the
potential to disrupt a perfectly crystalline signal chain.
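If you want to shake down your own system, a good first step is to look
at what chunks your tools actually leave in the file. Here's a minimal
sketch in Python (standard library only; point it at any .wav file)
that walks the top-level chunks of a RIFF/WAVE file, so you can see
whether a program preserved bext or quietly stuffed something into INFO:

    import struct, sys

    def list_chunks(path):
        # Print the ID and size of each top-level chunk in a WAV file.
        with open(path, "rb") as f:
            riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
            if riff != b"RIFF" or wave != b"WAVE":
                raise ValueError("not a RIFF/WAVE file")
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                cid, csize = struct.unpack("<4sI", header)
                print(cid.decode("ascii", "replace"), csize, "bytes")
                f.seek(csize + (csize & 1), 1)  # chunk bodies pad to even length

    if __name__ == "__main__":
        list_chunks(sys.argv[1])

Run it before and after saving in each editor and compare the output;
it's a quick way to catch a tool blowing away (or adding) chunks behind
your back.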
Third, what does it tell us in the trade that, for all our efforts to
create higher-resolution files, there's been overwhelming satisfaction
with the lesser results? Well, for one thing, it reminds us that
there's more impact from good analog playback than from spending an
extra whatever on 8 extra bits that are 30dB below the noise floor.
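To put rough numbers on that (my own back-of-the-envelope sketch, using
the textbook figure of about 6 dB of dynamic range per bit rather than
any particular converter's spec sheet):

    # Theoretical SNR of an ideal n-bit quantizer: ~6.02*n + 1.76 dB.
    def quantizer_snr_db(bits):
        return 6.02 * bits + 1.76

    for bits in (16, 24):
        print(f"{bits}-bit: ~{quantizer_snr_db(bits):.0f} dB")
    # 16-bit: ~98 dB; 24-bit: ~146 dB. A cassette's noise floor sits
    # maybe 50-60 dB down, so the 8 extra bits mostly describe hiss
    # you already had.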
For another, good A to D design does amazing things when outputting
lower-res files. It reminds us that so many of our source materials
are of such low (or lower) quality that there isn't more information
to be captured at the higher resolutions. Once we argued over 48k vs
44.1. Then 96k. Some want 192k, and others think 384 is worth
doing. Look, folks: if you keep increasing the sample frequency, you
won't start capturing video off the audio cassettes. But you are
building more and more fragile systems. Where you have to break the
files into multiple sub-files (due to the 2GB file size limit of WAV),
you create a chance to lose one of the pieces. Is this really better
preservation than having one file (even 44/16) of the whole thing?
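For what it's worth, here's the arithmetic behind that 2GB ceiling (a
quick sketch; I'm assuming stereo, and using the 2GB figure even though
the RIFF size field can technically address 4GB):

    LIMIT_BYTES = 2 * 1024**3  # the 2GB cap in question

    def minutes_until_split(rate_hz, bits, channels=2):
        # Recording time before an uncompressed WAV hits the size cap.
        bytes_per_second = rate_hz * (bits // 8) * channels
        return LIMIT_BYTES / bytes_per_second / 60

    for rate, bits in ((44100, 16), (96000, 24), (192000, 24)):
        print(f"{rate} Hz / {bits}-bit: ~{minutes_until_split(rate, bits):.0f} min")
    # ~203 min at 44.1/16, ~62 min at 96/24, ~31 min at 192/24.

So a one-hour oral history is one file at 44.1/16 but two files at
192/24: one more piece to mislabel or lose.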
Fourth, while the pursuit of ever higher standards is a worthy goal
(and the reason we have converters that perform so well at lower
sample rates), there will always be circumstances in which an
institution simply will not have the resources to "do it right". We
can all salivate at the enormous volumes of audio out there to be
saved. As much as we dream of driving Lexuses into retirement, your
garden variety historical society with a shoebox full of oral
histories may never be able to justify spending "what it costs" to
"properly" digitize them. If their board can get its act together and
apply for grants, maybe there's a better use for those few thousand
dollars, a use closer to their primary mission. So a nice retired
couple volunteers to bring in the CD-R recorder they bought for their
granddaughter (who really wanted an iPod), plugs in the cassette
player they picked up at a garage sale, and digitizes the shoebox of
cassettes. And lo and behold, that bit of history is preserved for
the future. Did they only get 80% of the information off the tapes?
Yeah. But 80% is a lot more than nothing.
This is a very slippery slope, of course. For every Harvard or IU (and
others) there are plenty of large institutions buying crap equipment,
giving it to sleep-deprived, hung-over work-study students.
Institutions that really do know better, and who could manage the
resources to do a much better job (even if not "do it right").
And so it goes. So it always has. And so it always will. Our job is
to help advance the trade, do as well as we can, at a price that means
more of it gets done. Until we're all using $25,000 workstations and
$15,000 A to D converters (and spending more hours making our systems
pass 24 bits than lingering on ListServs!), we shouldn't be pointing
fingers at the poor blokes trying to eke out a living selling $250 (or free)
software. You get what you pay for. If you paid for that, you got
it. Of your own free will.
Back to your regularly scheduled madness.
G
PS I bought my Prius when gas was $2.00/gallon.
On Aug 18, 2008, at 12:00 AM, ARSCLIST automatic digest system wrote:
I believe George Blood is on this list.