


Re: [ARSCLIST] revisiting tape bakers



From: Patent Tactics, George Brock-Nannestad

Hello all,

there has just been a very interesting and expansive exchange of views 
concerning the mechanisms of baking on the ARSCLIST (look for the subject line 
'revisiting tape bakers' - the thread had not strayed from it when I started 
writing this), and the latest comment, by an experienced person - Lou Judson - 
says:

> In fact, we are looking for some way to dispose of the tapes after  
> transfer, as they are simply not archivable except as voluminous  
> boxes of possibly sentimental but otherwise unusable materials! 

(sorry, Lou, for quoting partly out of context).

As the first person to express clearly in archival circles the distinction 
between primary (intended) and secondary (ancillary) information in an 
artefact, I need to introduce a bit of philosophy, and I am sorry, because 
many people are put off by that. However, it may serve as a skeleton to work 
from in a given situation. Please also bear in mind that my own research has 
dug deepest into mechanical media (with films coming next). I should note 
that some have believed that when I said "secondary" information it relates 
only to the label, cover, and other written information on a tape box - this 
is obviously included, but it would limit the concept too much if not 
_everything_ were included that contributes to the total information content 
of the artefact. At the end I give two references.

In the tapes we are discussing, the primary information is the sonic content 
that was intended to be retrievable. The primary information may only be 
retrieved if the replay process makes maximum use of the secondary 
information. The secondary information in this connection relates to 

1) the type of tape, its constituents, 

2) the footprint of the recording head, 

3) any non-intended signals co-recorded with the intended signal.

All of this secondary information is present on any piece of tape.

Speed goes without saying.
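
Purely as an illustration - and with field names entirely of my own invention, 
not any archival standard - here is a small Python sketch of how this 
primary/secondary distinction could be recorded alongside a transfer, so that 
what would be discarded with the carrier is at least documented:

from dataclasses import dataclass, field

@dataclass
class TapeArtefactRecord:
    # Illustrative record of primary vs. secondary information for one tape.
    # Field names are my own invention for this sketch, not a standard schema.
    identifier: str
    primary_transfer_file: str | None = None   # primary (intended) information
    # secondary (ancillary) information conditioning correct retrieval:
    tape_base: str = "unknown"                 # e.g. "triacetate", "polyester"
    oxide_formulation: str = "unknown"         # constituents of the coating
    head_footprint: str = "unknown"            # track format left by the recorder
    azimuth_offset_deg: float | None = None
    nominal_speed_ips: float | None = None
    unintended_signals: list[str] = field(default_factory=list)
    pre_treatment: str | None = None           # e.g. "baked before transfer"

# Example: what is documented - and what would be lost with the carrier
rec = TapeArtefactRecord(
    identifier="Box 12, reel 3",
    primary_transfer_file="box12_reel3_96k24.wav",
    tape_base="polyester-terephthalate",
    head_footprint="quarter-track stereo",
    unintended_signals=["HF bias remnant", "print-through"],
    nominal_speed_ips=7.5,
)
print(rec)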

How has the secondary information been used in the history of re-recording?

The Columbia Oral History project in the late 1950s and early 1960s used the 
tapes to record interviews; these were transcribed, and the tapes were reused. 
Edward Tatnall Canby deplored this vociferously in Audio Magazine in the 
1970s. No use of secondary information.

A number of broadcast archives from the 1960s-90s ran cyclical re-recording 
programmes, perhaps on a 7-year cycle. The in-house standards were such that 
there was an inherent trust in correct reproduction of the tapes, and 
secondary information was not used consciously.

In the late 1990s an awareness arose in archive and re-recording circles that 
a more precise knowledge of tape heads than mere "mono or stereo" was needed, 
and we now see professional transfer establishments providing a range of heads 
to cater for the innumerable recording formats and maladjustments. Also 
important is conforming to whatever azimuth there is on the tape. The 
secondary information is being used to some degree. This is also the case 
when a decision is made on how the machines should handle the tape, depending 
on the base type (triacetate, polyester-terephthalate), as well as on any 
treatment before replay.
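
To show that the azimuth actually present on the tape is measurable secondary 
information, and not only something to be adjusted by ear, here is a minimal 
sketch in Python with NumPy (all parameter names and the toy values are my own 
assumptions, not any establishment's method). It estimates the time offset 
between two tracks of one capture by cross-correlation and converts it to an 
apparent azimuth angle from the track spacing and tape speed:

import numpy as np

def estimate_azimuth_error(ch_a, ch_b, sample_rate, track_spacing_mm, tape_speed_mm_s):
    # Estimate the apparent azimuth error between two tracks of one capture.
    # Cross-correlate the channels to find their relative time offset, then
    # convert that offset to an angle from track spacing and tape speed.
    a = ch_a - np.mean(ch_a)
    b = ch_b - np.mean(ch_b)
    corr = np.correlate(a, b, mode="full")
    lag_samples = np.argmax(corr) - (len(b) - 1)    # sign indicates direction only
    delta_t = lag_samples / sample_rate             # seconds between the tracks
    path_difference_mm = delta_t * tape_speed_mm_s  # offset measured along the tape
    return np.degrees(np.arctan2(path_difference_mm, track_spacing_mm))

# Toy demonstration: band-limited noise on two "tracks", one delayed 20 samples
rng = np.random.default_rng(0)
sr = 96_000
noise = np.convolve(rng.standard_normal(8_000), np.ones(8) / 8, mode="same")
left, right = noise[20:], noise[:-20]
print(estimate_azimuth_error(left, right, sr,
                             track_spacing_mm=2.0,         # assumed track-centre spacing
                             tape_speed_mm_s=7.5 * 25.4))  # 7.5 in/s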

We have seen Plangent Processes use the HF bias co-recorded with the intended 
signal to obtain a much improved temporal stability of the intended signal.
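
Plangent's actual processing is proprietary, so the following is only a sketch 
of the general pilot-tone principle, in Python with NumPy/SciPy and with all 
names my own: isolate a nominally constant co-recorded tone from a 
high-bandwidth capture, track its instantaneous frequency, and warp the time 
axis so that the tone - and the intended signal with it - runs at constant 
speed again. It assumes the bias remnant actually survives in the digitized 
band, which is by no means guaranteed:

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def correct_speed_from_pilot(audio, sample_rate, pilot_hz, band_hz=2_000, smooth_s=0.01):
    # Wow/flutter correction from a co-recorded tone of nominally constant
    # frequency (e.g. a bias remnant in an ultrasonic-bandwidth capture).
    # A sketch of the general pilot-tone idea only; names are my own.
    sos = butter(4, [pilot_hz - band_hz, pilot_hz + band_hz],
                 btype="bandpass", fs=sample_rate, output="sos")
    pilot = sosfiltfilt(sos, audio)                  # isolate the pilot tone

    phase = np.unwrap(np.angle(hilbert(pilot)))      # analytic-signal phase
    inst_freq = np.gradient(phase) * sample_rate / (2 * np.pi)

    win = max(1, int(smooth_s * sample_rate))        # smooth the estimate
    speed = np.convolve(inst_freq, np.ones(win) / win, mode="same") / pilot_hz

    # Warp the time axis so the pilot (and the programme with it) runs at
    # constant speed again; plain linear interpolation keeps the sketch short.
    tape_time = np.cumsum(speed) / sample_rate
    uniform_time = np.arange(tape_time[0], tape_time[-1], 1.0 / sample_rate)
    return np.interp(uniform_time, tape_time, audio)

# Toy demonstration: a 1 kHz tone and a 60 kHz "bias" captured at 192 kHz
# through a slowly wobbling tape speed (about 0.3 % wow at 0.8 Hz).
sr, pilot_nominal = 192_000, 60_000
n = 2 * sr
wobble = 1.0 + 0.003 * np.sin(2 * np.pi * 0.8 * np.arange(n) / sr)
warped_phase = 2 * np.pi * np.cumsum(wobble) / sr
captured = np.sin(1_000 * warped_phase) + 0.05 * np.sin(pilot_nominal * warped_phase)
restored = correct_speed_from_pilot(captured, sr, pilot_nominal)

With a real capture the band limits and smoothing window would of course have 
to be chosen from the bias frequency and tape behaviour actually encountered.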

Some of the best investigated tapes from a forensic viewpoint were the Nixon 
Watergate tapes. All tapes can be subjected to such analyses, but we need the 
tapes.

So, using baking and making the last-ever transfer from a tape means that we 
have an intended signal that is as good as our knowledge today allows. If we use the 
original tapes for landfill we are discarding a lot of information. This must 
be a conscious decision, and almost everything is better than what came out 
of the Columbia Oral History project. We shall most likely never be able to 
afford to make use of any more of the ancillary information - sort of 
'diminishing returns'. For this practical reason I believe it is bordering on 
hypocrisy to claim that "although we make a digital transfer, we aim to 
preserve the original" and to have that as a goal in internationally accepted 
recommendations. There are parallels to switching off the life-support system 
of a terminally ill patient - the moral issues are the same, but the subject is not 
a human being. In the "tape world" the re-use of organs would be the re-use 
of hubs and reels to re-house more fortunate tapes.

I have written about the basic considerations when looking at artefacts as 
the most concentrated collection of information; the two main references are:

Brock-Nannestad, George: "Applying the Concept of Operational Conservation 
Theory to Problems of Audio Restoration and Archiving Practice", AES Preprint 
No. 4612, 103rd Convention, 1997 September 26-29, New York.
	(the concept is introduced as unifying approaches in Conservation Theory in 
order to evaluate proposed preservation policies)

This text is well worth the $20 that it costs to non-members of the AES.

The other one is free and available in a PDF version identical in content to 
the original printed publication:

Brock-Nannestad, George: "The Rationale Behind Operational Conservation 
Theory", in 'Conservation without limits - IIC Nordic Group XV Congress', Ed. 
R. Koskivirta, Helsinki 23-26 August, 2000, pp. 21-33.
	(the information content and structure in an object is both of a scientific 
and of a perception nature. The balance between the two types changes over 
the service life of the object. Systematic information analysis provides a 
firm background for responsible decisions on preservation and restoration)

Available from:

http://palimpsest.stanford.edu/byauth/brock-nannestad/operational-conservation-theory.pdf

The idea of preserving the originals would only be viable if we could 
guarantee that the particular treatment used as a precursor to a re-recording 
will at the same time allow us subsequently to store the originals with 
minimal costs in respect of climatic control. I do think that the step of 
re-recording is indispensable, although a case may be made for not doing it 
until the intended signal is requested. But access to the content is so much 
improved by having it transferred to a digital system that no indexing system 
can compete. Based on such realizations I feel that the minimum requirement 
should be: "ANY transfer is better than NO transfer". The 'lemma' to this is: 
"better 100 hours of poor transfers than one hour of a perfect transfer". 
This is heretical, I know! But what is a "poor" transfer? It is the minimum 
quality that will still stand up in court. Worse than that is worthless. And 
it is unethical, because it contains lies about the intended content.

So, on this forensic note,

kind regards,


George

