Re: arsclist Transfer of multiple copies, was: Full 3-D mapping of groove?



In a message dated 12/10/2002 2:33:01 PM Eastern Standard Time, 
pomeroyaudio@xxxxxxx writes:

> 
>  Such a processor could sync transfers made at different times and on
>  various turntables, greatly simplifying the process and increasing the
>  potential usefulness of this technique. (Application of CEDAR noise-removal
>  processing to the various source transfers, before synchronization, would no
>  doubt contribute to optimum results.)
>  
>  The theory also states that each doubling of the number of sources improves the
>  s/n by another 3 dB.  So, *four* copies of the same recording could produce
>  a whopping 6 dB improvement!  Finding four different copies of the same
>  record is not impossible to imagine, in some cases. (Pull out all your
>  copies of those King Oliver Gennetts!)  And, if they didn't all have to be
>  transferred simultaneously, this technique could prove to be useful.
>  Extending the theory, we could expect to see a s/n improvement in the range
>  of 9 dB (!), if we could synchronize *eight* different copies of the same
>  recording; but, obviously, this begins to get rather unrealistic.
>  
>  Doug Pomeroy   pomeroyaudio@xxxxxxx
>  Audio Restoration & Remastering Services

This discussion started with mention of the problem of accurately reconstructing 
the data that is lost when pops and clicks are removed.  It seems to me that the 
major advantage of synchronizing and mathematically analyzing the data from 
multiple copies would be the total removal of such defects without affecting 
the original music.

It is unlikely that a pop or click would occur at the same point on three or 
more copies.  By completely rejecting data that deviates by a significant 
amount from the average, the defect would be removed and the "good" data from 
the other copies used.
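
To make that concrete, here is a rough sketch in Python with NumPy (purely 
illustrative, not anyone's actual product, and with made-up function and 
parameter names) of per-sample outlier rejection.  It assumes the copies have 
already been synchronized sample-for-sample; the synchronization itself is the 
hard part and is not shown here.

import numpy as np

def stack_copies(copies, k=5.0):
    """copies: 2-D float array, one synchronized transfer per row.
    Samples that deviate from the per-sample median by more than k times
    the median absolute deviation are treated as pops or clicks and
    discarded; the surviving copies are averaged."""
    x = np.asarray(copies, dtype=float)
    center = np.median(x, axis=0)            # robust per-sample reference
    dev = np.abs(x - center)
    mad = np.median(dev, axis=0) + 1e-9      # robust scale; epsilon avoids /0
    keep = dev <= k * mad                    # False where one copy has a click
    counts = keep.sum(axis=0)
    summed = np.where(keep, x, 0.0).sum(axis=0)
    # Average the surviving samples; fall back to the median where all disagree.
    return np.where(counts > 0, summed / np.maximum(counts, 1), center)

# Toy example: three copies of the same signal, a click added to copy 2 only.
t = np.linspace(0, 50, 48000)
copies = np.sin(t) + 0.01 * np.random.randn(3, t.size)
copies[1, 500] += 4.0                        # simulated click
restored = stack_copies(copies)              # click rejected, music kept

The median is used as the reference rather than the mean so that a single bad 
copy cannot drag the reference toward its own click.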

This same technique might also remove distortion products due to record wear 
to a greater extent than simple averaging.

Beyond this, averaging the random noise of the individual pressings would give 
the 3 dB benefit per doubling described above, but being able to reach through 
the impulse noise and distortion should be of even greater value. CEDAR or other 
processing would then be applied to the noise components from the original 
master, which are common to all available copies and which averaging cannot 
remove.
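
As a quick check on the 3 dB figure, the short simulation below (Python/NumPy 
again, illustrative only, names made up) averages n copies whose noise is 
independent from copy to copy; the signal-to-noise ratio improves by roughly 
10*log10(n) dB, which is about 3 dB per doubling.  Noise that was cut into the 
master is identical on every copy, so averaging cannot reduce it, which is why 
CEDAR-style processing would still be wanted afterwards.

import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 200.0, 100000))

def snr_after_averaging(n_copies, noise_rms=0.1):
    """SNR in dB of the average of n_copies transfers with independent noise."""
    copies = signal + noise_rms * rng.standard_normal((n_copies, signal.size))
    residual = copies.mean(axis=0) - signal   # what is left after averaging
    return 10 * np.log10(signal.var() / residual.var())

for n in (1, 2, 4, 8):
    print(n, "copies:", round(snr_after_averaging(n), 1), "dB")  # ~ +3 dB per doubling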


> It is gratifying to learn that CEDAR is thinking about this subject. It
>  seems to me their existing Azimuth Corrector is actually part of the
>  solution, as it can correct very small timing errors between two sources;
>  what's needed is a much more powerful processor to deal with larger
>  corrections over time.

With processors operating at two billion operations per second soon to reach 
the consumer market, the capability for this kind of analysis should be 
available.
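
As a rough illustration of the synchronization step itself, the sketch below 
(Python/NumPy, illustrative only, and emphatically not CEDAR's method) estimates 
a constant offset between two transfers by FFT cross-correlation.  A real tool 
would have to re-estimate the offset continuously and resample, since turntable 
speed drifts over the course of a side; that time-varying correction is the 
"much more powerful processor" Doug is asking for.

import numpy as np

def estimate_offset(a, b, window=1 << 16):
    """Estimate how many samples transfer b lags behind transfer a,
    using FFT cross-correlation over the first `window` samples."""
    a = np.asarray(a[:window], dtype=float)
    b = np.asarray(b[:window], dtype=float)
    n = 1 << int(np.ceil(np.log2(2 * window)))   # zero-pad against wrap-around
    xcorr = np.fft.irfft(np.conj(np.fft.rfft(a, n)) * np.fft.rfft(b, n), n)
    lag = int(np.argmax(xcorr))
    return lag if lag < n // 2 else lag - n      # map to a signed lag

# Usage: advance b by the estimated lag before stacking it with a.
# lag = estimate_offset(a, b)
# b_aligned = np.roll(b, -lag)   # crude; a real tool would trim and resample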

The practical significance right now is that archivists might well want to 
consider preserving three or more copies of significant early recordings so 
that they are available for future processing.

Mike Csontos

