Reality/surreality/hyperreality. Mechanical vs art. Looking back from digital colour and high levels of control. Capturing images is now so easy. And possibilities of control at shooting and processing stages so broad. Often lose the aesthetics and meaning.
Early colour photography processes produce a feeling of nostalgia for a bygone, leisurely time. As in monochrome photography, this ‘elite impressionist aesthetic’ can be enhanced through, for example, the use of chiaroscuro and light, smearing vaseline on the lens, or adding brushstrokes or scratches to the film during development. In colour photography particularly, the aesthetic also stems partly from inherent technical limitations of early equipment and processes:
Lens aberrations and distortions in perspective
Chemicals were unstable, inconsistent and less sensitive, leading to colour shifts, grain, limited tonality and dynamic range, and requiring long exposure times, hence shallow depth of field and blurring. Effect of long exposures while the model tries to be still, so you get selective movement blur? Giving the reflective feel?
Fragile plates and scratches that add to the feeling of human frailty and inevitable passage of time.
Edges of the plates? burning and fade?
Processes like hand-colouring and tinting, coupled with the blurriness of the original black-and-white image, give a desaturated, dreamy look. The leisurely feel is enhanced by the very long exposures needed to produce multiple plates in different colours that are then combined. Photographing any action was not possible, and the long exposures required a shallow depth of field, with much of the image dreamily blurred. Grain, scratches and other imperfections are further exaggerated by the fragility of glass plates and the nature of the pigments and chemicals used.
Colour photography techniques
hand colouring of black and white prints
monochrome tinting through use of dyes and pigments at the development stage: cyanotypes, carbon prints and gum bichromate prints. They use pigments and bichromated colloids (viscous substances like gelatin or albumen made light-sensitive by adding a bichromate) that harden when exposed to light and become insoluble in water. The resulting prints are characterized by broad tones and soft detail, sometimes resembling paintings or drawings.
Autochrome 1907-1935: an additive colour process using a mosaic of dyed potato-starch grains as colour filters. Soft focus, pointillist grain. Low sensitivity meant long exposures.
Heinrich Kuhn pictorialism
Heinrich Kuhn autochrome technique
Kuhn, Stieglitz and Steichen
Paul Strand modernism
Colour film photography: 1970s to contemporary
see also: Stephen Shore
high contrast and use of motion blur through slow shutter speeds leads to abstraction of movement.
Lomography is a genre of photography involving taking spontaneous photographs with minimal attention to technical details. Lomographic images often exploit unpredictable, non-standard optical traits of cheap toy cameras (such as light leaks and irregular lens alignment) and non-standard film processing techniques, for aesthetic effect.
Lomography is named after the Soviet-era 35 mm LOMO LC-A Compact Automat camera, produced by the state-run optics manufacturer Leningradskoye Optiko-Mekhanicheskoye Obyedinenie (LOMO) PLC of Saint Petersburg. The camera was loosely based upon the Cosina CX-1 and introduced in the early 1980s. In 1992 the Lomographic Society International was founded as an art movement by a group of Viennese students who were interested in the LC-A camera and put on exhibitions of photos. The art movement then developed into the Austrian company Lomographische AG, a commercial enterprise that claimed “Lomography” as a commercial trademark.
But lomography is now a genericized trademark referring to the general style that can be produced with any cheap plastic toy camera using film. Similar-looking techniques can be achieved with digital photography. Many camera phone photo editor apps include a “lomo” filter. It is also possible to achieve the effect on any digital photograph through processing in software like Adobe Photoshop, Lightroom or Analog FX Pro. The lomography trend peaked in 2011.
Because of its ease of use, lomography has been adopted in participatory photographic activism, for example by children in the slums of Nairobi.
David Carson (born September 8, 1954) is an American graphic designer, art director and surfer. He is best known for his innovative magazine design, and use of experimental typography.
He worked as a sociology teacher and professional surfer in the late 1970s. From 1982 to 1987, Carson worked as a teacher at Torrey Pines High School in San Diego, California. In 1983, Carson started to experiment with graphic design and found himself immersed in the artistic and bohemian culture of Southern California. He art directed various music, skateboarding, and surfing magazines through the 1980s and 1990s, including twSkateboarding, twSnowboarding, Surfer, Beach Culture and the music magazine Ray Gun. By the late 1980s he had developed his signature style, using “dirty” type and non-mainstream photographic techniques.
As art director of Ray Gun (1992-5) he employed much of the typographic and layout style for which he is known. In particular, his widely imitated aesthetic defined the so-called “grunge typography” era. In one issue he used Dingbat as the font for what he considered a rather dull interview with Bryan Ferry. In a feature story, NEWSWEEK magazine said he “changed the public face of graphic design”.
He takes photography and type and manipulates and twists them together, on some level confusing the message but in reality drawing the viewer’s eye deeper into the composition itself. His layouts feature distortions or mixes of ‘vernacular’ typefaces and fractured imagery, rendering them almost illegible. Indeed, his maxim of the ‘end of print’ questioned the role of type in the emergent age of digital design, following on from the California New Wave and coinciding with experiments at the Cranbrook Academy of Art.
In the later 1990s he added corporate clients to his list, including Microsoft, Armani, Nike, Levi’s, British Airways, Quiksilver, Sony, Pepsi, Citibank, Yale University, Toyota and many others. When Graphic Design USA Magazine (NYC) listed the “most influential graphic designers of the era”, David was named one of the five most influential designers of all time, alongside Milton Glaser, Paul Rand, Saul Bass and Massimo Vignelli.
He named and designed the first issue of the adventure lifestyle magazine Blue, in 1997. David designed the first issue and the first three covers, after which his assistant Christa Smith art directed and designed the magazine until its demise. Carson’s cover design for the first issue was selected as one of the “top 40 magazine covers of all time” by the American Society of Magazine Editors.
In 2000, Carson closed his New York City studio and followed his children, Luke and Luci, to Charleston, South Carolina where their mother had relocated them. In 2004, Carson became the Creative Director of Gibbes Museum of Art in Charleston, designed the special “Exploration” edition of Surfing Magazine, and directed a television commercial for UMPQUA Bank in Seattle, Washington.
Carson claims that his work is “subjective, personal and very self indulgent”.
Carson, David (1995). The End of Print: The Graphic Design of David Carson. Chronicle Books. ISBN 0-8118-1199-9.
Carson, David (1997). David Carson: 2nd Sight: Grafik Design After the End of Print. Universe Publishing. ISBN 0-7893-0128-8.
Meggs, Phillip B.; David Carson (1999). Fotografiks: An Equilibrium Between Photography and Design Through Graphic Expression That Evolves from Content. Laurence King. ISBN 1-85669-171-3.
Stecyk, Craig; David Carson (2002). Surf Culture: The Art History of Surfing. Laguna Art Museum in association with Gingko Press. ISBN 1-58423-113-0.
Mcluhan, Marshall; David Carson, Eric McLuhan, Terrance Gordon (2003). The Book of Probes. Gingko Press. ISBN 1-58423-056-8.
Carson, David (2004). Trek: David Carson, Recent Werk. Gingko Press. ISBN 1-58423-046-0.
Mayne, Thom; David Carson (2005). Ortlos: Architecture of the Networks. Hatje Cantz Publishers. ISBN 3-7757-1652-1.
Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue, each of which is referred to as a “color channel”. Bit depth quantifies how many unique colors are available in an image’s color palette in terms of the number of 0’s and 1’s, or “bits”, used to specify each color channel (bpc) or each pixel (bpp). Images with higher bit depths can encode more shades or colors, since more combinations of 0’s and 1’s are available.
Most color images from digital cameras have 8 bits per channel, so each channel can use a total of eight 0’s and 1’s. This allows for 2^8 or 256 different combinations, translating into 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 2^(8×3) or 16,777,216 different colors, or “true color.” This is referred to as 24 bits per pixel since each pixel is composed of three 8-bit color channels. The number of colors available for any X-bit image is just 2^X if X refers to the bits per pixel, and 2^(3X) if X refers to the bits per channel. The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.
Bits Per Pixel    Number of Colors Available    Common Name(s)
16                65,536                        XGA, High Color
24                16,777,216                    SVGA, True Color
32                16,777,216 + transparency
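The arithmetic behind these figures can be sketched in a few lines of Python (a minimal illustration of the 2^bits relationship described above, nothing camera-specific):

```python
# Distinct values per channel and per pixel for a given bit depth.

def colors_per_channel(bits_per_channel: int) -> int:
    """Distinct intensity values one color channel can encode."""
    return 2 ** bits_per_channel

def colors_per_pixel(bits_per_channel: int, channels: int = 3) -> int:
    """Distinct colors an RGB pixel can encode across all channels."""
    return 2 ** (bits_per_channel * channels)

print(colors_per_channel(8))  # 256 intensity values per channel
print(colors_per_pixel(8))    # 16,777,216 colors ("true color", 24 bpp)
```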
The human eye can only discern about 10 million different colors, so saving an image in any more than 24 bpp is excessive if the only intended purpose is for viewing. On the other hand, images with more than 24 bpp are still quite useful since they hold up better under post-processing (see “Posterization Tutorial“).
Color gradations in images with less than 8-bits per color channel can be clearly seen in the image histogram.
The available bit depth settings depend on the file type. Standard JPEG files are limited to 8 bits per channel, while TIFF files can use up to 16 bits per channel.
BASICS OF DIGITAL CAMERA PIXELS
The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion — particularly for those who are either considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.
OVERVIEW OF COLOR MANAGEMENT
“Color management” is a process whereby the color characteristics of every device in the imaging chain are known precisely and utilized in color reproduction. It often occurs behind the scenes and doesn’t require any intervention, but when color problems arise, understanding this process can be critical.
In digital photography, this imaging chain usually starts with the camera and concludes with the final print, and may include a display device in between:
Many other imaging chains exist, but in general, any device which attempts to reproduce color can benefit from color management. For example, with photography it is often critical that your prints or online gallery appear how they were intended. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur.
THE NEED FOR PROFILES & REFERENCE COLORS
Color reproduction has a fundamental problem: a given “color number” doesn’t necessarily produce the same color in all devices. We use an example of spiciness to convey both why this creates a problem, and how it is managed.
Let’s say that you’re at a restaurant and are about to order a spicy dish. Although you enjoy spiciness, your taste buds are quite sensitive, so you want to be careful that you specify a pleasurable amount. The dilemma is this: simply saying “medium” might convey one level of spice to a cook in Thailand, and a completely different level to someone from England. Restaurants could standardize this based on the number of peppers included in the dish, but this alone wouldn’t be sufficient. Spice also depends on how sensitive the taster is to each pepper:
To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at restaurants which specifies that 3 equals “mild,” 5 equals “medium,” and so on (assuming that all peppers are the same). Next time, when you visit a restaurant and say “medium,” the waiter could look at your personal table and translate this into a standardized concentration of peppers. This waiter could then go to the cook and say to make the dish “extra mild,” knowing all too well what this concentration of peppers would actually mean to the cook.
As a whole, this process involved (1) characterizing each person’s sensitivity to spice, (2) standardizing this spice based on a concentration of peppers, and (3) being able to collectively use this information to translate the “medium” value from one person into an “extra mild” value for another. These same three principles are used to manage color.
A device’s color response is characterized similarly to how the personalized spiciness table was created in the above example. Various numbers are sent to this device, and its output is measured in each instance:
[Table: input numbers sent to the device (green channel) and the measured output color for each]
Real-world color profiles include all three colors, more values, and are usually more sophisticated than the above table — but the same core principles apply. However, just as with the spiciness example, a profile on its own is insufficient. These profiles have to be recorded in relation to standardized reference colors, and you need color-aware software that can use these profiles to translate color between devices.
COLOR MANAGEMENT OVERVIEW
Putting it all together, the following diagram shows how these concepts might apply when converting color between a display device and a printer:
Characterize. Every color-managed device requires a personalized table, or “color profile,” which characterizes the color response of that particular device.
Standardize. Each color profile describes these colors relative to a standardized set of reference colors (the “Profile Connection Space”).
Translate. Color-managed software then uses these standardized profiles to translate color from one device to another. This is usually performed by a color management module (CMM).
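The characterize/standardize/translate chain above can be sketched as a toy model. The profiles and numbers below are invented purely for illustration; real ICC profiles cover all three channels and use far more sophisticated interpolation:

```python
# Toy "color profiles": each maps a device input number to the output
# measured against a standardized reference space (the PCS). All values
# here are made up for illustration.
display_profile = {0: 0.00, 64: 0.18, 128: 0.45, 192: 0.72, 255: 1.00}
printer_profile = {0: 0.00, 64: 0.05, 128: 0.15, 192: 0.35, 255: 0.60}

def translate(value, source, target):
    """A crude stand-in for a CMM: device -> reference -> device."""
    reference = source[value]  # standardized color this input produces
    # pick the target input whose measured output is closest to the reference
    return min(target, key=lambda v: abs(target[v] - reference))

print(translate(128, display_profile, printer_profile))  # → 192
```

Note how the dimmer printer needs a higher input number (192) to reproduce the reference color that the display produces at 128; this is exactly the “medium” to “extra mild” translation from the spiciness analogy.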
The above color management system was standardized by the International Color Consortium (ICC), and is now used in most computers. It involves several key concepts: color profiles (discussed above), color spaces, and translation between color spaces.
Color Space. This is just a way of referring to the collection of colors/shades that are described by a particular color profile. Put another way, it describes the set of all realizable color combinations. Color spaces are therefore useful tools for understanding the color compatibility between two different devices. See the tutorial on color spaces for more on this topic.
Profile Connection Space (PCS). This is a color space that serves as a standardized reference (a “reference space”), since it is independent of any particular device’s characteristics. The PCS is usually the set of all visible colors defined by the Commission Internationale de l’Éclairage (CIE) and used by the ICC.
Note: The thin trapezoidal region drawn within the PCS is what is called a “working space.” The working space is used in image editing programs (such as Adobe Photoshop), and defines the subset of colors available to work with when performing any image editing.
Color Translation. The color management module (CMM) is the workhorse of color management, and is what performs all the calculations needed to translate from one color space into another. Contrary to previous examples, this is rarely a clean and simple process. For example, what if the printer weren’t capable of producing as intense a color as the display device? This is called a “gamut mismatch,” and would mean that accurate reproduction is impossible. In such cases the CMM therefore just has to aim for the best approximation that it can. See the tutorial on color space conversion for more on this topic.
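A gamut mismatch can be caricatured in a couple of lines. The intensity values below are invented for illustration, and real CMMs use rendering intents far more sophisticated than simple clipping:

```python
# Crude sketch of a gamut mismatch: the printer cannot reproduce the
# display's most intense red, so the best approximation is the closest
# value the printer can actually realize (here, plain clipping).

def clip_to_gamut(value: float, gamut_max: float) -> float:
    """Best-approximation rendering when `value` exceeds the target gamut."""
    return min(value, gamut_max)

display_red = 0.95       # intensity the display can show (0..1 reference scale)
printer_red_max = 0.80   # most intense red this printer can produce

print(clip_to_gamut(display_red, printer_red_max))  # → 0.8
```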
UNDERSTANDING GAMMA CORRECTION
Gamma is an important but seldom understood characteristic of virtually all digital imaging systems. It defines the relationship between a pixel’s numerical value and its actual luminance. Without gamma, shades captured by digital cameras wouldn’t appear as they did to our eyes (on a standard monitor). It’s also referred to as gamma correction, gamma encoding or gamma compression, but these all refer to a similar concept. Understanding how gamma works can improve one’s exposure technique, in addition to helping one make the most of image editing.
WHY GAMMA IS USEFUL
1. Our eyes do not perceive light the way cameras do. With a digital camera, when twice the number of photons hit the sensor, it receives twice the signal (a “linear” relationship). Pretty logical, right? That’s not how our eyes work. Instead, we perceive twice the light as being only a fraction brighter — and increasingly so for higher light intensities (a “nonlinear” relationship).
[Diagram: the tone perceived as 50% as bright by our eyes vs. the tone detected as 50% as bright by the camera]
Refer to the tutorial on the photoshop curves tool if you’re having trouble interpreting the graph.
Accuracy of comparison depends on having a well-calibrated monitor set to a display gamma of 2.2.
Actual perception will depend on viewing conditions, and may be affected by other nearby tones.
For extremely dim scenes, such as under starlight, our eyes begin to see linearly like cameras do.
Compared to a camera, we are much more sensitive to changes in dark tones than we are to similar changes in bright tones. There’s a biological reason for this peculiarity: it enables our vision to operate over a broader range of luminance. Otherwise the typical range in brightness we encounter outdoors would be too overwhelming.
But how does all of this relate to gamma? In this case, gamma is what translates between our eye’s light sensitivity and that of the camera. When a digital image is saved, it’s therefore “gamma encoded” — so that twice the value in a file more closely corresponds to what we would perceive as being twice as bright.
Technical Note: Gamma is defined by Vout = Vin^gamma, where Vout is the output luminance value and Vin is the input/actual luminance value. This formula causes the blue line above to curve. When gamma < 1, the line arches upward, whereas the opposite occurs with gamma > 1.
2. Gamma encoded images store tones more efficiently. Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describe the brighter tones (where the camera is relatively more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is relatively less sensitive):
[Gradient comparison: linear vs. gamma encoded (standard value of 1/2.2), each using only 32 levels (5 bits)]
See the tutorial on bit depth for a background on the relationship between levels and bits.
Notice how the linear encoding uses insufficient levels to describe the dark tones, even though it devotes an excess of levels to the bright tones. The gamma encoded gradient, on the other hand, distributes the tones roughly evenly across the entire range (“perceptually uniform”). This also ensures that subsequent image editing, color and histograms are all based on natural, perceptually uniform tones.
However, real-world images typically have at least 256 levels (8 bits), which is enough to make tones appear smooth and continuous in a print. If linear encoding were used instead, 8X as many levels (11 bits) would’ve been required to avoid image posterization.
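The level-allocation argument can be sketched numerically. Assuming an ideal power-law gamma of 1/2.2, the snippet below quantizes the darkest 10% of a linear tonal range to 32 levels (5 bits), with and without gamma encoding:

```python
# Quantize the darkest 10% of a linear tonal range to 32 levels (5 bits),
# directly vs. after gamma encoding. Assumes an ideal power-law gamma.

LEVELS = 32
GAMMA = 1 / 2.2

def quantize(values, encode=lambda v: v):
    """Return the set of distinct 5-bit levels the values map onto."""
    return {round(encode(v) * (LEVELS - 1)) for v in values}

shadows = [i / 1000 for i in range(100)]  # linear tones 0.000 .. 0.099

linear_levels = quantize(shadows)
gamma_levels = quantize(shadows, encode=lambda v: v ** GAMMA)

print(len(linear_levels))  # only a handful of levels for the shadows
print(len(gamma_levels))   # roughly three times as many after encoding
```

Gamma encoding spreads many more of the 32 levels across the shadows, which is exactly where our eyes are most sensitive to changes.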
GAMMA WORKFLOW: ENCODING & CORRECTION
Despite all of these benefits, gamma encoding adds a layer of complexity to the whole process of recording and displaying images. The next step is where most people get confused, so take this part slowly. A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:
[Diagram: a RAW camera image is saved as a JPEG file, then the JPEG is viewed on a computer monitor. (1) The image file gamma depicts an image in the sRGB color space (which encodes using a gamma of approx. 1/2.2); (2) the display gamma is equal to the standard of 2.2; (3) the system gamma is their combined result.]
1. Image Gamma. This is applied either by your camera or RAW development software whenever a captured image is converted into a standard JPEG or TIFF file. It redistributes native camera tonal levels into ones which are more perceptually uniform, thereby making the most efficient use of a given bit depth.
2. Display Gamma. This refers to the net influence of your video card and display device, so it may in fact be comprised of several gammas. The main purpose of the display gamma is to compensate for a file’s gamma — thereby ensuring that the image isn’t unrealistically brightened when displayed on your screen. A higher display gamma results in a darker image with greater contrast.
3. System Gamma. This represents the net effect of all gamma values that have been applied to an image, and is also referred to as the “viewing gamma.” For faithful reproduction of a scene, this should ideally be close to a straight line (gamma = 1.0). A straight line ensures that the input (the original scene) is the same as the output (the light displayed on your screen or in a print). However, the system gamma is sometimes set slightly greater than 1.0 in order to improve contrast. This can help compensate for limitations due to the dynamic range of a display device, or due to non-ideal viewing conditions and image flare.
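The encode-then-display round trip can be verified directly. This is a sketch assuming ideal power-law gammas, with no camera tone curves or viewing-condition adjustments:

```python
# File gamma of 1/2.2 followed by a display gamma of 2.2 yields a system
# gamma of 1.0: the exponents multiply, so the original scene is recovered.

FILE_GAMMA = 1 / 2.2
DISPLAY_GAMMA = 2.2

def encode(v: float) -> float:
    """Gamma encoding applied when the image is saved."""
    return v ** FILE_GAMMA

def display(v: float) -> float:
    """Display gamma applied by the monitor/video card."""
    return v ** DISPLAY_GAMMA

scene = 0.18  # mid-grey scene luminance (0..1)
shown = display(encode(scene))

print(round(shown, 6))  # → 0.18, i.e. an overall system gamma of 1.0
```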
IMAGE FILE GAMMA
The precise image gamma is usually specified by a color profile that is embedded within the file. Most image files use an encoding gamma of 1/2.2 (such as those using sRGB and Adobe RGB 1998 color), but the big exception is with RAW files, which use a linear gamma. However, RAW image viewers typically show these presuming a standard encoding gamma of 1/2.2, since they would otherwise appear too dark:
Linear RAW Image
(image gamma = 1.0)
Gamma Encoded Image
(image gamma = 1/2.2)
If no color profile is embedded, then a standard gamma of 1/2.2 is usually assumed. Files without an embedded color profile typically include many PNG and GIF files, in addition to some JPEG images that were created using a “save for the web” setting.
Technical Note on Camera Gamma. Most digital cameras record light linearly, so their gamma is assumed to be 1.0, but near the extreme shadows and highlights this may not hold true. In that case, the file gamma may represent a combination of the encoding gamma and the camera’s gamma. However, the camera’s gamma is usually negligible by comparison. Camera manufacturers might also apply subtle tonal curves, which can also impact a file’s gamma.
DISPLAY GAMMA
This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros/cons of different values. Older Macintosh computers used a display gamma of 1.8, which made non-Mac images appear brighter relative to a typical PC, but this is no longer the case.
Recall that the display gamma compensates for the image file’s gamma, and that the net result of this compensation is the system/overall gamma. For a standard gamma encoded image file, changing the display gamma will therefore have the following overall impact on an image:
[Four example images shown at display gammas of 1.0, 1.8, 2.2 and 4.0]
Diagrams assume that your display has been calibrated to a standard gamma of 2.2.
Recall from before that applying the image file gamma followed by the display gamma produces the overall system gamma (the gamma exponents multiply). Also note how higher display gamma values cause the system curve to bend downward.
How to interpret the charts. The first picture (far left) gets brightened substantially because the image gamma is uncorrected by the display gamma, resulting in an overall system gamma that curves upward. In the second picture, the display gamma doesn’t fully correct for the image file gamma, so the overall system gamma still curves upward a little (and therefore still brightens the image slightly). In the third picture, the display gamma exactly corrects the image gamma, resulting in an overall linear system gamma. Finally, in the fourth picture the display gamma over-compensates for the image gamma, resulting in an overall system gamma that curves downward (thereby darkening the image).
The overall display gamma is actually comprised of (i) the native monitor/LCD gamma and (ii) any gamma corrections applied within the display itself or by the video card. However, the effect of each is highly dependent on the type of display device.
CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.
LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRTs. LCDs therefore require something called a look-up table (LUT) to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.
Technical Note: The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. However, the values given for each are not always equivalent. Gamma correction is sometimes specified in terms of the encoding gamma that it aims to compensate for — not the actual gamma that is applied. For example, the actual gamma applied with a “gamma correction of 1.5” is often equal to 1/1.5, since a gamma of 1/1.5 cancels a gamma of 1.5 (1.5 * 1/1.5 = 1.0). A higher gamma correction value might therefore brighten the image (the opposite of a higher display gamma).
OTHER NOTES & FURTHER READING
Other important points and clarifications are listed below.
Dynamic Range. In addition to ensuring the efficient use of image data, gamma encoding also increases the recordable dynamic range for a given bit depth. Gamma can sometimes also help a display/printer manage its limited dynamic range (compared to the original scene) by improving image contrast.
Gamma Correction. The term “gamma correction” is really just a catch-all phrase for when gamma is applied to offset some other earlier gamma. One should therefore probably avoid using this term if the specific gamma type can be referred to instead.
Gamma Compression & Expansion. These terms refer to situations where the gamma being applied is less than or greater than one, respectively. A file gamma could therefore be considered gamma compression, whereas a display gamma could be considered gamma expansion.
Applicability. Strictly speaking, gamma refers to a tonal curve which follows a simple power law (where Vout = Vingamma), but it’s often used to describe other tonal curves. For example, the sRGB color space is actually linear at very low luminosity, but then follows a curve at higher luminosity values. Neither the curve nor the linear region follow a standard gamma power law, but the overall gamma is approximated as 2.2.
Is Gamma Required? No, linear gamma (RAW) images would still appear as our eyes saw them — but only if these images were shown on a linear gamma display. However, this would negate gamma’s ability to efficiently record tonal levels.
Color can only exist when three components are present: a viewer, an object, and light. Although pure white light is perceived as colorless, it actually contains all colors in the visible spectrum. When white light hits an object, the object selectively blocks some colors and reflects others; only the reflected colors contribute to the viewer’s perception of color.
The human eye senses this spectrum using a combination of rod and cone cells. Rod cells are better for low-light vision but can only sense the intensity of light, whereas cone cells can also discern color but function best in bright light.
Three types of cone cells exist in your eye, with each being more sensitive to either short (S), medium (M), or long (L) wavelength light. The set of signals possible at all three cone cells describes the range of colors we can see with our eyes. The diagram below illustrates the relative sensitivity of each type of cell for the entire visible spectrum. These curves are often also referred to as the “tristimulus functions.”
Raw data courtesy of the Colour and Vision Research Laboratories (CVRL), UCL.
Note how each type of cell does not just sense one color, but instead has varying degrees of sensitivity across a broad range of wavelengths. Also note how human color perception is most sensitive to light in the yellow-green region of the spectrum; this is utilized by the Bayer array in modern digital cameras.
ADDITIVE & SUBTRACTIVE COLOR MIXING
Virtually all our visible colors can be produced by utilizing some combination of the three primary colors, either by additive or subtractive processes. Additive processes create color by adding light to a dark background, whereas subtractive processes use pigments or dyes to selectively block white light. A proper understanding of each of these processes creates the basis for understanding color reproduction.
The colors in the three outer circles are termed primary colors, and are different in each of the above diagrams. Devices which use these primary colors can produce the maximum range of color. Monitors release light to produce additive colors, whereas printers use pigments or dyes to absorb light and create subtractive colors. This is why nearly all monitors use a combination of red, green and blue (RGB) pixels, whereas most color printers use at least cyan, magenta and yellow (CMY) inks. Many printers also include black ink in addition to cyan, magenta and yellow (CMYK) because CMY alone cannot produce deep enough shadows.
Additive Color Mixing              Subtractive Color Mixing
Red + Green = Yellow               Cyan + Magenta = Blue
Green + Blue = Cyan                Magenta + Yellow = Red
Blue + Red = Magenta               Yellow + Cyan = Green
Red + Green + Blue = White         Cyan + Magenta + Yellow = Black
Subtractive processes are more susceptible to changes in ambient light, because this light is what becomes selectively blocked to produce all their colors. This is why printed color processes require a specific type of ambient lighting in order to accurately depict colors.
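The two mixing processes can be sketched numerically by treating additive primaries as RGB light sources and subtractive primaries as inks that each absorb one RGB component from white. This is an idealized model; real inks and phosphors are messier:

```python
# Additive mixing: light from each source adds per RGB channel.
# Subtractive mixing: each ink's absorption stacks on white paper.

def additive(*lights):
    """Combine light sources: channels add (clamped to 1.0)."""
    return tuple(min(1.0, sum(ch)) for ch in zip(*lights))

def subtractive(*inks):
    """Overlay inks on white paper: absorptions accumulate per channel."""
    return tuple(1.0 - min(1.0, sum(1.0 - c for c in ch)) for ch in zip(*inks))

RED, GREEN, BLUE = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
CYAN, MAGENTA, YELLOW = (0.0, 1.0, 1.0), (1.0, 0.0, 1.0), (1.0, 1.0, 0.0)

print(additive(RED, GREEN))                # (1.0, 1.0, 0.0) → yellow
print(subtractive(CYAN, MAGENTA))          # (0.0, 0.0, 1.0) → blue
print(additive(RED, GREEN, BLUE))          # (1.0, 1.0, 1.0) → white
print(subtractive(CYAN, MAGENTA, YELLOW))  # (0.0, 0.0, 0.0) → black
```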
COLOR PROPERTIES: HUE & SATURATION
Color has two unique components that set it apart from achromatic light: hue and saturation. Visually describing a color based on each of these terms can be highly subjective; however, each can be more objectively illustrated by inspecting the light’s color spectrum.
Naturally occurring colors are not just light at one wavelength, but actually contain a whole range of wavelengths. A color’s “hue” describes which wavelength appears to be most dominant. The object whose spectrum is shown below would likely be perceived as bluish, even though it contains wavelengths throughout the spectrum.
Although this spectrum’s maximum happens to occur in the same region as the object’s hue, it is not a requirement. If this object instead had separate and pronounced peaks in just the red and green regions, then its hue would instead be yellow (see the additive color mixing table).
A color’s saturation is a measure of its purity. A highly saturated color will contain a very narrow set of wavelengths and appear much more pronounced than a similar, but less saturated color. The following example illustrates the spectrum for both a highly saturated and less saturated shade of blue.
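The hue/saturation distinction can also be seen in RGB terms. As a sketch of my own (not from the original article), the standard-library colorsys module reports the same hue for a pure and a diluted blue, but a much lower saturation for the diluted one:

```python
# Two blues: same hue, different saturation. Mixing a color towards grey
# (raising all channels equally) leaves the hue unchanged but lowers purity.
import colorsys

saturated_blue = (0.1, 0.1, 0.9)    # narrow, strongly blue mix
desaturated_blue = (0.4, 0.4, 0.7)  # same hue diluted towards grey

for rgb in (saturated_blue, desaturated_blue):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"hue={h:.3f} saturation={s:.3f}")
# Both report hue 0.667 (blue); saturation drops from ~0.89 to ~0.43.
```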
Scarfolk is a fictional northern English town created by writer and designer Richard Littler, who is sometimes identified as the town mayor. First published as a blog of fake historical documents, parodying British public information posters of the 1970s, a collected book was published in 2014.
Littler has said “I was always scared as a kid, always frightened of what I was faced with. … You’d walk into WHSmith… and see horror books with people’s faces melting. Kids’ TV included things like Children of the Stones, a very odd series you just wouldn’t get today. I remember a public information film made by some train organisation in which a children’s sports day was held on train tracks and, one by one, they were killed. It was insane. … I’m just taking it to the next logical step.”
Scarfolk, which is forever locked in the 1970s, is a satire not only on that decade but also on contemporary events. It touches on themes of totalitarianism, suburban life, occultism and religion, school and childhood, as well as social attitudes such as racism and sexism.
Scarfolk was initially presented as a fake blog which purportedly releases artefacts from the town council’s archive. Artefacts include public information literature, out-of-print books, record and cassette sleeves, advertisements, television programme screenshots, household products, and audio and video, many of which suggest brands and imagery recognisable from the period. Additionally, artefacts are usually accompanied by short fictional vignettes which are also presented as factual and introduce residents of Scarfolk. The public information literature often ends with the strapline: “For more information please reread.”
The aesthetic is utilitarian, inspired by public sector materials in the United Kingdom such as Protect and Survive.
A television series co-written by Will Smith was described as “in the works” in 2018.
Explores the dynamics of text and image, how they combine to create narratives and meanings, the ways in which visual communicators author such content, and the role an audience has in reading it.
Combining text and image brings two different language systems into play: telling and showing. Sometimes they are communicating the same thing but in different ways. At other times, these languages operate on different levels, for example a text providing detailed content with an illustration providing a broad impression, or text set in a particular typeface indicating the tone of voice the text should be read in. Sometimes the messages are contradictory, each working on their own terms and not in relationship to one another.
Design decisions affecting the tone of the communication:
what to prioritise
where to place it
how to handle layout, colour and typography choices
Types of content:
closed/narrow content: targeted to inform, persuade or signpost attention.
open/rich content: operates beyond the immediate need to communicate, addressing broader questions of why and how and providing depth of meaning through the overall narrative.
Medium is the message
Authorship represents both a responsibility and a challenge: a responsibility to understand how design decisions have an impact on viewers’ experiences and a challenge to make work that is interesting, meaningful and engaging, both personally and socially.
Christian Lloyd 2015 OCA VCAP Coursebook p65
Following on from Ken Garland’s First Things First manifesto, visual communicators started to explore their roles as journalists, social commentators, agitators and innovators. Roland Barthes’ ‘The Death of the Author’ (1967) emphasises ‘the birth of the reader’ as the one who ultimately generates meaning, and Michel Foucault’s ‘What is an Author?’ (1969) points to the author’s responsibility for blasphemous or contentious material.
Members of a community continually construct a shared language shaped by collective values, ideas and beliefs. Meaning isn’t an inherent quality of the images, words or typefaces used, but emerges from how they are combined and read within this shared language. In international terms this raises important questions: what is a ‘community’? At what levels, and by whom, are languages ‘shared’ and/or ‘created’? What does this imply for the role of a designer/illustrator who wishes to communicate the voices of marginalised people in ways that can make an impact on those in power? These are questions I am researching in detail in relation to my professional consultancy work in the Visual Research module: see Translation Bricolage. But they are also important for my personal and political work for a Western audience.
Visual storytelling through graphic narratives
Single image narratives may employ alternative strategies for conveying time, for example composite images that combine multiple scenes or viewpoints.
Text and image have been combined in artistic traditions in many cultures, notably Persian, Chinese and Japanese. Given the potential contradictions, juxtapositions and complexities of combining text and image, modernist and contemporary artists have used this as a strategy to create new meanings or to challenge orthodoxies:
the impressionists, inspired by Japanese art traditions, started to document the proliferation of billboards and hoardings
Dadaists (eg El Lissitzky) and Futurists (eg Filippo Marinetti) tested the phonetic possibilities of typography, leading to later developments around concrete poetry
Bob & Roberta Smith uses hand-rendered typography and signwriting as a rallying call for people to engage in his debates and workshops, as open letters to politicians or as a way to give other people a voice through community-based projects.
Within narrative fiction, illustrations work alongside text in a responsive way, helping to visualise characters, moods and locations. They can be more imaginative than many other types of illustration: they’re there to communicate ideas, emotions, moods, drama and contexts as much as characters, actions and plots. Focusing on the overall mood, direction, genre and feel of the book gives an impression of the novel without getting too hung up on the specific content.
Key questions for illustrators when reading the text:
What is the overall mood, genre and feel?
What is the plot?
Who are the important characters and what is their relationship?
Who are the readers?
What is the purpose of the illustration?
Do you create an image that visualises the beginning, middle or end, or try to create a piece that suggests all three?
The answers are likely to differ depending on the type of book and its purpose, the age of the reader and the purpose of the illustration.
There is a physical connection between image and text, defining where the images go on the page and how they interact with the written word. There are lots of different ways of working image and text together, including:
whole page illustrations that sit alongside the text, headers, footers or as vignettes that the text wraps around.
typography continuing over the top of an illustration or the illustration extending over the type (space needs to be allowed for this)
digitally compositing the text as part of the image (see my work on Image and Text for Book Design 1, particularly Jabberwocky)
Book covers need a bold visual statement to draw people in, but also need to present key information such as the author, title or publisher. Book covers are most successful when the illustration and the typography have a sympathetic relationship – they’re both pulling in the same direction.
“In fairy tales, internal processes are translated into visual images. When the hero is confronted by difficult inner problems which seem to defy solution, his psychological state is not described; the fairy story shows him lost in a dense, impenetrable wood, not knowing which way to turn, despairing of finding the way out. To everybody who has heard fairy tales, the image and feeling of being lost in a deep, dark forest are unforgettable…”

“Telling a fairy tale with a particular purpose other than that of enriching the child’s experience turns the fairy story into a cautionary tale, a fable, or some other didactic experience which at best speaks to the child’s conscious mind, while reaching the child’s unconscious directly also is one of the greatest merits of literature.” Bruno Bettelheim 1975 quoted course text pp 83-84
In illustrated children’s books there’s often a more obvious conversation taking place between text and image. The relationship between image and text varies depending on the target age of the children, and their assumed level of reading skill. Illustrations are often there to facilitate reading of the text, but also to stimulate imagination.
How do you visually help tell a story without giving too much away? The illustrations need to support the text without being too dominant, stealing the storytelling away, but at the same time they shouldn’t be too distant from the action.
Where along the course of the narrative should the images be placed? At what point in the action would an image be best suited – just before something has happened, during, or at the end?
What should the images focus on? Should they be character-driven, bringing identities, expressions and gestures to life, or focused on location and landscape?
In some cases illustrations set the blueprint for future interpretations.
In some cases the book’s creator is both author and illustrator:
The Cat in the Hat (1957) created and illustrated by Theodor Geisel writing as Dr Seuss
Der Struwwelpeter / Shockheaded Peter (1845) by Heinrich Hoffmann
Where The Wild Things Are (1963) by Maurice Sendak
Beatrix Potter (1866–1943).
There are also many examples of illustrators who have defined a story visually by being the first or best illustrator to respond to it. In other cases the illustration style becomes inseparable from the reader’s interpretation.
Sir John Tenniel’s illustrations for Lewis Carroll’s Alice’s Adventures in Wonderland, Through the Looking-Glass and Jabberwocky.
Winnie The Pooh (1926) written by A A Milne and illustrated by E H Shepard
The Gruffalo (1999) written by Julia Donaldson and illustrated by Axel Scheffler
Little Red Riding Hood (1812) as defined by the Brothers Grimm and illustrated by Arthur Rackham.
Within comics and graphic novels, the line between what’s written and what’s visual, between the image and the text, becomes increasingly blurred, with written elements taking on the form of illustrations and the whole existing within a carefully constructed visual narrative of frames, bubbles, and drawings. See Sequential Illustration.