The Computer as Darkroom and Camera

James Marchand
Jan 1992

Date: Fri, 31 Jan 1992 16:59:55 EST
Sender: Seminar on the Study of Religions
<RELIGION@HARVARDA.HARVARD.EDU>
From: KRAFT%PENNDRLS.UPENN.EDU@HARVARDA.HARVARD.EDU
Subject: OFFLINE 37
To: Steve Mason <Shlomo@VM1.YORKU.CA>
<<O F F L I N E  3 7>>
coordinated by Robert Kraft
[29 January 1992 Draft, copyright Robert Kraft]
[HUMANIST, IOUDAIOS, RELIGION, etc., 30 January 1992]
[Religious Studies News 7.2 (March 1992)]
[CSSR Bulletin 21.2 (April 1992)]

This issue of OFFLINE presents three different types of material contributed by four guest authors covering rather different aspects of computer-assisted research in our rapidly expanding electronic world. The main article by James Marchand describes new computerized ways of dealing with visual materials, and in one way or another will be relevant to every reader. Jim Marchand is the coordinator of the MEDTEXTL (Medieval Text List) and GERLINGL (Germanic Linguistics List) electronic discussion seminars (both at UIUCVMD.bitnet), and draws on his experience working with Gothic palimpsest manuscripts, among other things. My colleague James O'Donnell, editor of the electronic and hardcopy Bryn Mawr Classical Review and no stranger to OFFLINE readers, then reviews the new Christian Latin CD-ROM from CETEDOC and Brepols in Belgium. Finally, Michael Strangelove (who also coordinates some electronic discussion groups, including CONTEXT-L for cultural analysis of ancient texts) and Alan Groves report more briefly on some new products and opportunities, both of a general informational sort (directories and publication listings) and of specific interest to Hebrew Bible students. And there is a PS from yours truly. Enjoy! (Or at least, be informed.)

Feature Article:

The Computer as Camera and Darkroom

by James Marchand
Center for Advanced Study Professor of German, Linguistics and Comparative Literature at the University of Illinois, Urbana

Those who deal in earlier cultures must depend on representations of some kind of the artifacts those cultures left behind: sketches, lithographs, xylographs, models, etc. Since the advent of photography in the mid-19th century, we have come more and more to depend upon photography as our means of recording and preserving the past. The problem with photographs is that they are often poorly done, particularly given the conditions under which they often must be made, and the process of improving them in the darkroom is long and arduous. The digital computer has changed all that. Note that I have said "the digital computer." A normal photograph is not digital but analog in nature; that is to say, it is continuous rather than discrete in its registration of light values. The notion of digitizing photographs comes from America's space program, particularly LANDSAT; early Russian and American photographs from space were poor in quality and were analog, so that very little could be done to improve, or "enhance," them. Dr. Robert Nathan came up with an idea and some programs which permitted digitizing the photographs from Ranger 7, 8, and 9, and the science of digitizing images, the mainstay of image processing, was born. In fact, one of the best introductions to the field of image processing is still Johannes G. Moik, Digital Processing of Remotely Sensed Images (NASA SP-431; Superintendent of Documents, 1980).

What does it mean to digitize? Everyone knows that the computer we normally use is called a digital computer because all its information is broken down into yes/no particles called bits (originally "binary digits"). In order to put any information into a computer, the information must be digitized (broken down into yes/no configurations). That is why, for example, we have the ASCII system which gives us so much trouble: early on, the best we could do in a printer was two to the seventh (128) characters, since we had at best eight-bit "chips," leaving us with seven yes/no questions to handle all the letters of the alphabet and other symbols. Of course, Hollerith punched cards, which also offered only 128 patterns, helped bring this about. If we look at a photograph or a scene (we will restrict ourselves to black and white for simplicity at the moment), what we see is continuous values of gray, from black to white. A computer screen or a television screen, however, has to render this scene as configurations of dots, so that the computer screen is like a pointilliste painting or the Sunday comics. If you hold the Sunday comics under a strong magnifying glass, you can see that the picture is actually composed of little dots; if you look closely at a TV screen, you will see the same thing. The computer, through a control card, assigns to each of these dots a unique position (x,y) through a raster scheme familiar to us from juke boxes, seat numbers, etc.--a column/row address. It also assigns a gray-scale value, according to how much light is transmitted. Thus, the look-up table (LUT) generated by a scene in the computer assigns to each pixel (picture element) on the screen a value f(x,y), where x and y are the familiar spatial coordinates and f is the radiometric value in terms of gray scale. This means that one can manipulate the pixels of an image on the screen one by one, a group at a time, or a screen (frame) at a time, and that one can also do radiometric operations. Colorizing old films, which assigns color values to gray values, is an example of such a process.
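
For the programmers among my readers, the pixel model can be made concrete in a few lines of C. The type and function names below are illustrative only, not taken from any package mentioned here; the sketch simply assumes an eight-bit grayscale image stored row by row, so that every pixel has a spatial address (x,y) and a radiometric value f between 0 and 255.

    /* A sketch of the pixel model f(x,y): an 8-bit grayscale image
       stored in row-major ("raster") order, just as scan lines
       arrive from a scanner or camera. */

    typedef struct {
        int width, height;       /* spatial extent in pixels   */
        unsigned char *gray;     /* f(x,y), one byte per pixel */
    } Image;

    /* Return the gray value f(x,y). */
    unsigned char get_pixel(const Image *im, int x, int y)
    {
        return im->gray[y * im->width + x];
    }

    /* Set the gray value f(x,y). */
    void set_pixel(Image *im, int x, int y, unsigned char f)
    {
        im->gray[y * im->width + x] = f;
    }

The sketches later in this article operate on such a buffer directly, passing its width and height (or its total pixel count) as parameters.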

Image acquisition

There are several ways to get an image into the computer. An old picture, if it has not already been "digitized" by being printed through a half-tone screen (remember the Sunday comics), can be scanned in, using a device such as the Hewlett Packard ScanJet Plus. Here, the size of the chip on the board which connects the scanner to the computer is crucial. For the present we will assume an eight-bit channel, and I would caution that a smaller channel is not practicable in today's world; if you have an old scanner with a four-bit bus, such as the Hewlett Packard ScanJet, you can modify this easily. Eight bits yield 256 levels of gray (2 to the eighth = 256), enough until one comes to color, where a 32-bit chip is often used. The scanner generates a look-up table (LUT), assigning to each dot (whence the measure DPI, dots per inch, typically 300) or pixel a geometric location (x,y) and a gray level (f). If one thinks of a "normal" watch with hands as analog and a "digital" watch as digital, the distinction becomes clear. The larger the chip (the more levels), the less information is lost.

With 256 levels of gray, my eye does not detect any loss. In a short column like this, I cannot go into other aspects of loss and gain, such as resolution, much of which will depend on your display equipment. Of course, one does not have to scan in a print. The technology is there to work directly with films, such as microfilm, but at present it is mostly in the development stage. If one wishes to input directly from film, the best course at present is to convert the film to slides and use a slide reader, such as the Nikon LS-3510AF--a rather expensive way to go, but one which does eliminate one source of potential error.

Another way of acquiring an image is by direct photography by means of a digitizing camera. All of us are familiar with one such camera, namely the video camera. Since the television screen consists of pixels, the video camera must digitize the scene it is registering. It is for this reason that some of the earliest attempts at using the computer to enhance manuscripts used the video camera. In so doing, however, it is important to use a so-called "frame grabber" to freeze and record one frame. In reality, the video camera is of very little use for our purposes, though it can be relatively cheap. When one adds the fact that raster systems (LUTs) differ in time and in space (try running a European video on your VCR), video becomes a very poor substitute for a camera. If you want to try it out anyway: in the April 1990 issue of his PsL News, Nelson Ford discussed a video card, the VIP-640. He found that he could get a better image by using a Sony black-and-white camera and the VIP-640 than he could with a scanner and Gray F/X. His group, the Houston Area League, absolutely the best when it comes to shareware, offers a bundled package for about $500, including a Sony black-and-white camera, a board, and PicturePublisher. There are more expensive commercial ventures as well; the simplest way of learning about them is through the journal Resolution, which is available free from its publisher (P.O. Box 1347, Camden, Maine 04843).

We are seeing the advent of new digitizing cameras, such as the Canon Color Xapshot. With this "camera" you can take pictures in color, do macrophotographs, use filters, record up to 50 pictures on one disk, etc. The problem is that you have to buy an interface card to make it work with your computer, since the little disk is incompatible with anything else. I have used this for runestones, and it does an excellent job. Another recent arrival is the Dycam, a digital still camera which has no disk and does not require a board. It stores its photos in its own RAM and can hold 32 images in 256 levels of gray at 376-by-240 resolution, according to InfoWorld 13.32 (August 12, 1991), p. 54. It then feeds them into your computer through the serial port.

Neither of these cameras is suitable for finicky work, but they are a start. More expensive and better devices, such as the new Sony SEPS-1000 (see Resolution, Jan/Feb 1992, p. 7), which will permit better "scientific" photography, are coming on the market.

The value of having a camera attached to a computer is enormous. As the size of computers comes down, one can envisage carrying five pounds of equipment and being able to do such things as filter in real time. Normally, if one has the idea that, say, a light blue filter might work (in the case of ferrous-based inks, which occasionally have an orangish cast), one has to take a photograph, go to the darkroom, develop it, perhaps even print it, before finding out whether it worked. With a digitizing camera attached to a computer with a monitor, one can see the results immediately. As more and more such cameras become available, filtration will be easier, we will have wrap-around ultraviolets, etc. With the advent of new graphic formats and reduction techniques, storage will cease to be a problem. It should also be pointed out that the first problem in image acquisition is access, which is often the most difficult part of the whole process. Not many keepers of archives are going to be willing to have a scholar with a back-load of equipment photograph in their archive. Given also the problems one has with local current, etc., it is extremely important, if one wants to do one's own work--and that is the only good way to go--to be self-contained and light.

Image Manipulation--The Computer as Darkroom

Once one has acquired the image, one can (both fortunately and unfortunately) manipulate it in various ways. Remembering our formula for the pixel, f(x,y), one could, for example, write a simple BASIC-style routine, "LET f(x,y) = f(x,y) + 40", and brighten a photograph by 40 units. One could falsify a document just as easily, however, and scholarship is going to have to address this problem. You can take a photograph of a friend and put two noses on him/her. Two excellent books illustrating such techniques are: Composites: Computer Generated Portraits, by Nancy Burson, Richard Carling and David Kramlich (NY: Beech Tree Books, 1986; ISBN 0-688-02601-X), and Gerard J. Holzmann, Beyond Photography: The Digital Darkroom (Englewood Cliffs, NJ: Prentice Hall, 1988; ISBN 0-13-074410-7). Software illustrations of the latter (in C) are also available. For this reason, it is important that the scholar use only algorithms; otherwise his work is just as subjective as that of the lithographer or the xylographer. If the intention is to make a legible facsimile, and if the scholar clearly announces his intent and the fact that he is using such algorithmic methods, I see nothing wrong with such manipulation. I will just mention some of the algorithms which may be used. For a thorough discussion, you cannot do better than Rafael C. Gonzalez and Paul Wintz, Digital Image Processing (Addison-Wesley, 1977; ISBN 0-201-02596-5). It is somewhat out of date (though there is a second edition, which I do not have at hand), but I haven't seen anything better. If you want to see what histogram equalization can accomplish, look at the picture of the dollar on p. 126.
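
In C, the brightening routine just mentioned might look like the sketch below. The clamping at 255 (so that bright pixels do not wrap around to black) and the parameter names are my own choices, not a requirement of any particular package.

    /* Brighten an 8-bit grayscale buffer of n pixels by "amount"
       units, clamping the result to the range 0-255. */
    void brighten(unsigned char *f, long n, int amount)
    {
        long i;
        for (i = 0; i < n; i++) {
            int v = f[i] + amount;
            if (v < 0)   v = 0;
            if (v > 255) v = 255;
            f[i] = (unsigned char)v;
        }
    }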

Geometric operations

Most of these are best ignored, but it is good, for example, to be able to mask off a portion of a picture to work on. For the most part, operations should not be carried out on an entire frame (picture), since you probably will not want to increase the contrast, for example, over the entire picture. This operation of masking, which can be difficult in photography, is easy with the computer.

Note also that masking is not dangerous, since most programs will allow one to return seamlessly to the original image. The same can be said for using various overlays, which can at times be useful. It is quite difficult to do overlays without slippage in normal photography; on the computer they present no problem. It can occasionally be of interest to use geometric operations to correct deformities in the original or in the registration of it, as in the case of very crinkly parchment or tightly rolled scrolls.
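
In code, masking amounts to restricting a per-pixel operation to a region of interest. A sketch in C, with illustrative names; it assumes the rectangle lies within the frame, and a careful program would save the original pixels first so the operation could be undone.

    /* Apply the per-pixel operation "op" only inside the rectangle
       (x0,y0)-(x1,y1) of a w x h grayscale buffer, leaving the rest
       of the frame untouched. */
    void apply_in_mask(unsigned char *f, int w, int h,
                       int x0, int y0, int x1, int y1,
                       unsigned char (*op)(unsigned char))
    {
        int x, y;
        for (y = y0; y <= y1 && y < h; y++)
            for (x = x0; x <= x1 && x < w; x++)
                f[y * w + x] = op(f[y * w + x]);
    }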

Cutting and pasting

These two geometric operations can be of great value. In the work of "lacunology," for example, letters from one part of a manuscript are used to "repair" letters from another part. In my own case, I have cut out Gothic letters and assigned them to keys, using SLEd (from VS Software), so that I then had a typewriter which typed Gothic characters as they were found in our manuscripts, both on the screen and in the printer.
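
The cut-and-paste primitive can be sketched in C as a rectangle copy (sometimes called a "blit"); the names are mine, and the sketch assumes both rectangles lie within their buffers.

    /* Copy the cw x ch rectangle at (sx,sy) in src (of width sw)
       to position (dx,dy) in dst (of width dw). Cutting a letter
       out of one part of a manuscript image and "repairing"
       another part with it reduces to calls like this. */
    void blit(const unsigned char *src, int sw,
              unsigned char *dst, int dw,
              int sx, int sy, int cw, int ch,
              int dx, int dy)
    {
        int x, y;
        for (y = 0; y < ch; y++)
            for (x = 0; x < cw; x++)
                dst[(dy + y) * dw + (dx + x)] =
                    src[(sy + y) * sw + (sx + x)];
    }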

Enlargement and reduction

Other geometric operations include enlargement and reduction. The latter, often overlooked, is good for old macro photos, since stepping them down increases the apparent resolution. Here, too, the results can at times be spectacular.
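
Reduction can be sketched in C as 2x2 averaging: each output pixel is the mean of a 2x2 block of input pixels, which trades size for smoother, less noisy values. The names and the even-dimension assumption are illustrative.

    /* Reduce a w x h grayscale buffer to (w/2) x (h/2) by
       averaging each 2x2 block; "out" must hold (w/2)*(h/2)
       bytes. */
    void reduce_half(const unsigned char *in, int w, int h,
                     unsigned char *out)
    {
        int x, y;
        for (y = 0; y + 1 < h; y += 2)
            for (x = 0; x + 1 < w; x += 2) {
                int sum = in[y*w + x]       + in[y*w + x + 1]
                        + in[(y+1)*w + x]   + in[(y+1)*w + x + 1];
                out[(y/2) * (w/2) + (x/2)] = (unsigned char)(sum / 4);
            }
    }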

Radiometric operations

These are the operations of most interest to those of us who work with difficult-to-register scripts and artifacts. We have already mentioned brightening, an obvious darkroom operation and a simple one for the computer.

Contrast stretching

Those who remember the stir caused by William Bennett's use of high contrast in his study of the Skeireins (e.g., "The Vatican Leaves of the Skeireins in High-Contrast Reproduction," PMLA LXIX [1954] 655-676) will understand the (often wrong) desire of the scholar to add contrast to a picture. In the case of computer images, this is no problem; one simply asks that, e.g., all values from 1 to 50 become 1 (are replaced in the look-up table by 1), whereas all other values become zero. The results can be spectacular. NB: in real use, it is best to have a joystick installed and to change the values continuously until the result one is looking for is obtained. Make sure to mask!
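
A more general form is the linear contrast stretch, sketched below in C: the gray range [lo,hi] is remapped onto the full 0-255 range, and values outside it are clipped. Pushing lo and hi close together gives the hard, high-contrast effect just described. The function and parameter names are mine; lo must be less than hi.

    /* Stretch the gray range [lo,hi] linearly onto 0-255,
       clipping everything outside the range. */
    void stretch_contrast(unsigned char *f, long n, int lo, int hi)
    {
        long i;
        for (i = 0; i < n; i++) {
            int v = f[i];
            if (v <= lo)      f[i] = 0;
            else if (v >= hi) f[i] = 255;
            else              f[i] = (unsigned char)
                                  ((v - lo) * 255L / (hi - lo));
        }
    }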

Histogram operations

One can manipulate the histogram of the distribution of grays in a picture. This can be done to part of the picture or to all of it.

Many of the special effects seen on TV are done by histogram specification. Histogram equalization, which spreads the gray values more evenly between black and white, often reveals things which cannot be seen by the naked eye in a photograph.
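
Histogram equalization is fully algorithmic and fits in a dozen lines of C: count how often each gray level occurs, form the cumulative distribution, and use it as a look-up table. The sketch assumes an eight-bit image with at least one pixel.

    /* Equalize the histogram of an 8-bit grayscale buffer of
       n pixels (n > 0): heavily used gray levels are spread
       apart, sparsely used ones pushed together. */
    void equalize(unsigned char *f, long n)
    {
        long hist[256] = {0};
        unsigned char lut[256];
        long i, cdf = 0;

        for (i = 0; i < n; i++) hist[f[i]]++;      /* count levels  */
        for (i = 0; i < 256; i++) {                /* build the LUT */
            cdf += hist[i];
            lut[i] = (unsigned char)(255.0 * cdf / n);
        }
        for (i = 0; i < n; i++) f[i] = lut[f[i]];  /* apply it      */
    }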

Edge finding

One can set up an algorithm to sense differences in the radiometric values (gray levels) in an area, connect the points where the differentiation takes place, and obtain an edge. The results can at times be of use; an example of what can be done is seen in my article, "The Use of the Computer in the Humanities," Ideal 2 (1987), p. 27. We fed into the computer pictures of Gothic letters which were quite unusable, sensed the edges, contour-rounded, and filled in: the result was a Gothic alphabet remarkably like that obtained from a professional scribe (cf. Sydney Fairbanks and F. P. Magoun, Jr., "On Writing and Printing Gothic," Speculum 15 [1940] 313-330, 16 [1941] 122).
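
One common way to sense such differences is a gradient operator; the Sobel operator sketched below is a standard choice, though not necessarily the one used in the work just cited. Large differences between neighboring gray values yield bright output pixels, small ones dark, leaving the outlines of the letters.

    #include <stdlib.h>   /* for abs() */

    /* Write a gradient-magnitude image into "out" (same size as
       "in"); border pixels are left untouched and should be
       cleared by the caller. */
    void find_edges(const unsigned char *in, int w, int h,
                    unsigned char *out)
    {
        int x, y;
        for (y = 1; y < h - 1; y++)
            for (x = 1; x < w - 1; x++) {
                int gx = -  in[(y-1)*w + x-1] +   in[(y-1)*w + x+1]
                         -2*in[ y   *w + x-1] + 2*in[ y   *w + x+1]
                         -  in[(y+1)*w + x-1] +   in[(y+1)*w + x+1];
                int gy = -  in[(y-1)*w + x-1] - 2*in[(y-1)*w + x]
                         -  in[(y-1)*w + x+1] +   in[(y+1)*w + x-1]
                         +2*in[(y+1)*w + x]   +   in[(y+1)*w + x+1];
                int g = abs(gx) + abs(gy);
                out[y*w + x] = (unsigned char)(g > 255 ? 255 : g);
            }
    }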

Image smoothing

By a somewhat opposite method, one can obtain smoothing of an image, analogous to the use of a soft-focus lens in photography. This can be quite useful in processing photographs of three-dimensional objects where edges are too sharp and interfere with perception.
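
Smoothing can be sketched as a 3x3 mean filter, the computational analogue of the soft-focus lens: each pixel is replaced by the average of itself and its eight neighbors. As before, the names are illustrative and border pixels are left untouched.

    /* Smooth a w x h grayscale buffer into "out" by averaging
       each 3x3 neighborhood. */
    void smooth(const unsigned char *in, int w, int h,
                unsigned char *out)
    {
        int x, y, dx, dy;
        for (y = 1; y < h - 1; y++)
            for (x = 1; x < w - 1; x++) {
                int sum = 0;
                for (dy = -1; dy <= 1; dy++)
                    for (dx = -1; dx <= 1; dx++)
                        sum += in[(y + dy)*w + (x + dx)];
                out[y*w + x] = (unsigned char)(sum / 9);
            }
    }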

Pseudo-color

Since one can address each pixel and also each level of gray, it is possible to tell all values from 50 to 70, for example, to turn green. At times, this, too, can be very useful, mainly for decipherment of the photograph, not for publication. Such software as Paintbrush IV Plus from Z-Soft can be used for this purpose.
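
In a sketch, pseudo-color is a one-line test per pixel; here the band 50-70 from the example above turns green and everything else keeps its gray value. The separate red, green, and blue output buffers are an illustrative convention, not the format of any particular display card.

    /* Map an 8-bit grayscale buffer of n pixels to RGB, turning
       gray values in the band [50,70] pure green. */
    void pseudo_color(const unsigned char *f, long n,
                      unsigned char *r, unsigned char *g,
                      unsigned char *b)
    {
        long i;
        for (i = 0; i < n; i++) {
            if (f[i] >= 50 && f[i] <= 70) {
                r[i] = 0; g[i] = 255; b[i] = 0;   /* turn green */
            } else {
                r[i] = g[i] = b[i] = f[i];        /* keep gray  */
            }
        }
    }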

Density slicing

A kind of pseudo-color operation is density slicing, in which one selects a "slice" of values, say 30-50, and has them turn black, whereas all others are assigned white. In the case of a gray-rich photo of, say, a palimpsest, the results again can be excellent. This is something new in photography, which cannot be duplicated in the darkroom.
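
Density slicing is even simpler to sketch: the chosen slice goes to black, everything else to white. With the values in the example above one would call density_slice(f, n, 30, 50); the name is, again, mine.

    /* Turn gray values in the slice [lo,hi] black (0) and all
       others white (255), isolating one "layer" of the image. */
    void density_slice(unsigned char *f, long n, int lo, int hi)
    {
        long i;
        for (i = 0; i < n; i++)
            f[i] = (f[i] >= lo && f[i] <= hi) ? 0 : 255;
    }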

Deblurring

It has recently been announced that investigators at Rochester have succeeded in developing an algorithm for enhancing out-of-focus images. As any photographer who has worked in macrophotography can tell you, blur is an all-too-common problem. See "Taking the Fuzz out of Photos," Newsweek (Jan. 8, 1990), p. 61.

Many of these operations have been programmed and are available in off-the-shelf software. Two packages which I recommend to those who use the DOS platform are PicturePublisher from Micrografx (works under Windows; often bundled with other programs) and Gray F/X (from Xerox). I have already mentioned Paintbrush (from Z-Soft) as a very useful tool. It should be pointed out that such work is not easy; it is tedious in the extreme and requires hard and careful work. If you want to do careful work, e.g. density slicing, you need to do your own programming, which is nothing like as hard as it seems at first. Mit Sturm ist da nichts einzunehmen ["Nothing comes easy!"--Goethe].

It should also be pointed out that we are just beginning. I have not written about three-dimensional imaging, about color, about holography, or about the possibility of three-dimensional printing, all of which are upon us. More people are becoming involved. The space effort and the exploration of the Titanic are highly visible uses of image processing and remote (non-invasive) sensing. One already sees commercial image-enhancement studios arising all over America. Such establishments refurbish old photos (despeckling, contour rounding, pseudo-color, etc.) much in the manner in which retouchers used to work. This means cheaper and better software and hardware.

At the same time, storage capacity is going up day by day. Kodak has announced a new "darkroom," called Photo CD, which consists of hardware and software to handle slides that are simply dropped into the scanner. The result can be enhanced by Kodak's software, then stored on CD-ROM. This means, for example, that the entire oeuvre of the Swedish painter Albertus Pictor, almost totally unknown outside Sweden, can be made available on three CD-ROMs, with captions and discussion, and can then be displayed using random-access techniques. The possibilities for recording and display of early manuscripts are enormous. The next generation of scholars will have to become not only computer literate, but also image literate.

[Prof. Marchand can be reached electronically as MARCHAND@UX1.CSO.UIUC.EDU, or by regular mail at 3072 FLB, 707 S. Mathews, University of Illinois, Urbana IL 61801.]

