JAIC 1994, Volume 33, Number 2, Article 10 (pp. 193 to 198)
Journal of the American Institute for Conservation

Nineteen ninety-three was the third year of survey projects funded by CAP and the ninth year of IMS-CP, and both programs have proven very successful. Conservation professionals who are participating in these and other programs have improved our understanding of the process as well as our abilities to provide these services. The benefits to the museum community can be seen in the clarity of long-range planning and informed design reflected in current implementation projects. But this experience has also identified ethical and practical issues for discussion: If conservators agree that the IMS/NIC format that we have helped to develop provides the “standards of practice,” are those projects that do not follow this outline failing to meet an accepted standard? If language is drafted for inclusion in the revised AIC standards that defines assessments according to the CAP model, can it be broad and simple enough to be both inclusive now and flexible in the future? Furthermore, after defining general assessments, is there a need to codify methodology for object condition surveys and environmental studies?

Some conservators and other museum professionals believe that the current format asks the conservator to cover too many issues or to reach beyond his or her expertise. When conservation assessors begin redrafting mission statements, redefining organizational structure, advising on fund raising, or recommending programmatic shifts that draw resources from outreach or educational programs to support collections projects, they may well be operating beyond the limits of their professional training or experience.

Some assessors express frustration with the limited financial support provided for the CAP survey. They are torn between delivering the necessarily shorter report that current funding actually provides for and the longer, more instructive report that they believe these smaller, less sophisticated museums need, although producing the latter may require many hours of uncompensated work.

A second and related area of concern arises in regard to contracts. Again, a de facto standard has been developed as part of simplifying the administration of CAP (Peters 1992). One suggestion to improve on this model would be the provision for follow-up after delivery of the report—one month, six months, or possibly another period—to help smaller museums in implementation, answer questions, provide feedback for assessors, and renew the drive of volunteers to follow through. Should these steps also be contracted obligations? Perhaps practitioners could work with program sponsors on these and other questions.

The issue that draws the most vociferous discussion among conservation practitioners concerns assessor qualifications. Currently, the NIC has identified 177 CAP surveyors. Qualification requires (1) previous general survey experience; (2) evidence of conservation or preservation training; and (3) a minimum of five years in the field (Peters 1992). CAP provides applicants with program information, including guidelines for selecting an assessor and a list of possible assessors. For each project, both applicants and assessors are charged with determining the suitability of their match.

Of necessity, the process of conservation assessment is largely learned through experience. While the academic training provided by formal conservation programs is intensive and focused on fundamentals necessary for success in the field, most assessors agree that a broad range of exposure beyond training is necessary to provide meaningful assessments.

To be effective, an assessment must consider the broadest issues of museum operations that affect collections preservation. The assessor must be able to view collections care in perspective, as one museum program within the context of the whole range of institutional programs and initiatives. Surveyors are called upon to draw on experience developed over years of practice that have included exposure to institutional decision making beyond the bench treatments of front-line conservation. In addition to the sharp observation and analytical abilities necessary for cause-and-effect diagnosis, surveyors require a deeper understanding of the environmental interplay leading to object deterioration as well as knowledge of geographically and architecturally appropriate possibilities for specific improvement. Add to this the political acumen to discern underlying problems when collections care program failures have more to do with office politics than museum policy, as well as the communicative skills needed to field questions while examining objects and their surrounding conditions and to gently teach, and sometimes motivate, staff and volunteers. And, of course, there is a need to document site visits simultaneously with notes and photographs. The museum staff is counting on the assessor to provide guidance that will inform their immediate actions as well as their long-range plans, and reports that will explain, prioritize, and support fund-raising efforts to effect improvements.

The problem is this: program sponsors and assessors have left the final judgment regarding the quality of the reports to the museums seeking guidance, and in cases where these museums are staffed by minimally trained paraprofessionals or community volunteers, they are unfortunately sometimes the least qualified of all those involved to make these judgments. Furthermore, because the findings of the assessment are an essential measure in determining subsequent funding support for implementation projects in a highly competitive process, poorly served institutions find themselves at a disadvantage. The failure of an implementation grant to be funded may be the first occasion for an institution to realize the shortcomings of its assessment documents.

And so it becomes a matter of ethics for each assessor to examine his or her own qualifications: Are you working beyond your expertise? Many assessors plainly assert that one should not do assessments if not qualified. Here at last there is language in the current Code of Ethics (AIC 1993) that applies: “It is the conservator's responsibility to undertake … work only within the limits of his professional competence.” Further, surveyors are well advised to summarize their initial findings and refer clients to other practitioners with specific expertise as needed. “No person … can expect to be expertly informed on all [matters],” and there “should be no hesitation in seeking advice … or in referring [clients to other professionals] more experienced in particular special problems.”

A second area of potential ethical dilemmas involves the goals and purposes that motivate conservators to provide assessments. Assessors who use surveys for self-promotion, to market their own treatment services, may be on ethically thin ice. If the client's real needs for better storage methods or improvements to mechanical systems are bypassed on priority lists in favor of revenue-generating treatment projects, the conflict seems quite clear. This situation seems ironic considering that the identification of objects needing treatment was one of the conservators' and museums' original purposes for surveys. Does the assessment program goal create a conflict of interest for conservators in this regard when considered in light of the expectation that a benefit of assessments will be the development of long-term relationships between institutions and contract conservators? Perhaps the current Code of Ethics discussion draft (AIC Ethics and Standards Committee 1993) should include surveys and assessments on the list of activities that provide considerable potential for conflict of interest.

Overall, CAP reports and general assessments have proven to be extremely valuable documents for planning and action. But the shortcomings of a minority of reports illustrate additional issues regarding ethical and responsible performance. Most institutions, especially the smaller ones targeted by CAP, need clear, concise recommendations to be able to use survey reports in developing large-scale programs with additional outside consultants. Assessors must describe short-term, intermediate, and long-term needs and prioritize recommendations for action. They should support their findings with the evidence collected during fieldwork and defend their opinions with a clear rationale. Disclaimers for ensuing recommendations may undermine client confidence and sidetrack implementation efforts. Timely follow-through by assessors is also needed; reports submitted a year after the site survey fail to capitalize on the program's potential momentum.

There are other potential shortcomings inherent in vague contractual language that sets up expectations for information that the assessor knows cannot be developed within the scope of a specific project. The client may learn at the end of the project that, in fact, an additional contract and additional fees will be required to actually deliver what has been implicitly promised.

Further, the practice of delivering boilerplate reports, with generic text about generic problems, is something most colleagues have condemned. While some of the issues addressed in assessments may need general explanation that instructs the reader, reports must be site specific for their users to really benefit. Similarly, to cite textbook environmental standards for the care of a particular collection or objects and then leave the practical aspects of implementation to others may not fulfill the conservator's responsibilities as a participant in collections care.

Copyright © 1994 American Institute for Conservation of Historic and Artistic Works