Helen Goldberg



‘I often start out using the roller brush and try to create interesting textures, colors as ground for my painting. Sometimes I will set down a gold or other background roughly in ArtRage and then take it to iColorama to change its texture and reimport it to ArtRage. That allows me to create images that remind me of medieval or Japanese paintings on gold leaf. Sometimes I experiment with using the water color brush on an oil painting. It creates a whole new medium that’s not possible in the same way with physical paints.

Often when I paint, I’ll start with a mental impression of an artist I like. I don’t try to copy them, but I will incorporate some aspect that intrigues me. I’ll look at the calligraphic lines in Kline, the use of gold leaf in Klimt, or the depiction of sunlight in Turner and work it into my painting.’



Brushes was one of the very first drawing and painting apps for iPhone and iPad, and has been used to create pieces of art by fine artists including David Hockney (see David Hockney’s description of the app in his iPad and iPhone work).

As an early app it was very simple, using fingers or a very basic stylus, as this was before the days of pressure sensitivity.

Review 2012 by Tim Gee

  1. Can produce a range of textures with the preset brushes
    The preset brushes within the app allow you to easily create different effects and textures. Plus you can create your own brushes and tweak the existing ones to get exactly the kind of effect you want.
  2. It’s easy to select the colour you want
    The app includes a colour wheel so you can easily find the right shade you’re after. Plus you can also set the transparency of your chosen colour to help create blending and give the impression of a lighter touch.
  3. Can undo and redo your actions
    If you make a mistake or decide against your latest stroke you can quickly and easily undo or redo actions without having to paint over and recreate whole sections of your piece.
  4. Watch your painting in action
    Once you’re done you can re-watch how you painted your picture and see the creative process in action. It’s not a particularly useful feature but it is fun.
  5. Easily share your masterpieces
    You can easily tweet, email, save or print your finished work from the app. This means everyone will get to enjoy your latest piece of art.

New Brushes Redux

This is the most recent version after a change of management. But it remains very basic, does not respond well to pressure sensitivity, and is no longer at the cutting edge. Not recommended – unless it is substantially improved in a future update.

iPad Apps Compared

Background links for iPad Explorations: Critical Review

Adobe Illustrator Draw is a vector drawing app designed for quickly sketching out ideas and concepts. Zoom up to 64x to apply fine details and customise your toolbar and brushes. You can draw perfectly straight lines and geometric shapes, rename layers, and use shapes from Adobe Capture CC. An enhanced perspective grid also lets you map shapes to a perspective plane. Has 13 tools, a digital ruler, and graph guides. You can import your own images or stock photos to work on and for tracing and collage. Using the Creative Cloud connection, you can send a file to Photoshop CC or Illustrator CC on your PC or share your art with the Behance creative community.

Adobe Illustrator Draw 

AKSketch is a black & white charcoal drawing app. It’s a simple app designed to make you forget about the tools and just draw, using intuitive multi-touch gestures to help you achieve the desired result. It opens straight to a blank canvas. You can start sketching immediately, using the pinch-to-zoom gesture to get a bigger or smaller line. Tapping at the bottom of the canvas will bring up the tools menu, which is very basic. You can choose between a smooth or rough brush, and you’ll also get an eraser. The undo and redo tool is handy, and you can also access your other sketches from there.


ArtRage: variety of canvas presets and paper options, plus a wide array of brushes, pencils, crayons, rollers, and pastels. You can paint directly onto the screen or apply a glob of paint with one tool and smear it around with another. ArtRage also features a dedicated watercolour brush option, which can produce some striking effects. Experiment with the ArtRage digital canvas by smearing and blending oils and watercolour. The app is smart enough to detect the roughness of your paper so your pencils can be used for soft shading. Add layers to your work without damaging others with a range of Layer Blend Modes, import photos and convert them to oil for smearing or use as reference images, or trace over images. Once you’ve familiarised yourself with the interface, it’s easy to change brush sizes, bring up the colour picker, work with layers and blend/smudge different elements together. The main idea of ArtRage is to make painting as real as possible on the iPad. You can mix paints with one another as though you were manipulating them on a real canvas. This app works with layers, and if you’re already familiar with Photoshop, you’ll feel right at home with the blend modes. ArtRage also allows you to record your drawing for later viewing on the desktop.

ArtRage Tutorials

ArtStudio: over 20 different brushes, various different canvas sizes and options that include layers, layer masks, filters and effects. ArtStudio also includes step-by-step drawing lessons/tutorials plus the handy ability to export your artwork to Photoshop for further fiddling.


Auryn Ink: for watercolour painting. You can pick different tip shapes for the brushes and specify different bristle effects. You can also adjust the texture of the canvas and the amount of water on your brush.

Auryn Ink

Brushes Redux: used by David Hockney. Using a basic toolbar at the bottom of the screen, you can bring up a colour wheel/picker, work with layers and switch between various brushes. Brushes is fast and responsive to the touch so it’s easy to work quickly. Can record brush strokes. You can only create up to 10 layers.

Comic Draw: by plasq, enables you to build an entire comic narrative inside the app – from concept sketches to colour and lettering. It has a digital sketchpad for original ideas, plus inking and colouring with a variety of brushes to finish your concepts. Lay out different panels on your page and use layers to build your drawings. Add as many pages as you want to create comic strips, books or even a graphic novel. To finish off, add words with Comic Draw’s lettering suite made up of different typefaces, balloons and design tools. Comics can be shared on the online community Comic Connect.

Graphic: by Autodesk, used to be called iDraw. It has different brushes and full support for the Apple Pencil, and can create vector-based technical drawings. You can create complex vector-based PDF and SVG files using the powerful pen tool for customised shapes, but also simple diagrams if that’s what you prefer. This app offers layer effects such as shadows, glow, multiple strokes and fills, as well as canvas scale, rulers and units to create dimensions with precision. You can also use classic brush and pencil tools for fluid drawing and sketching.

Inkist: by Tai Shimizue, is a painting app with a range of simple, customisable brushes and support for pressure-sensitive styluses. The interface is simple, with minimal taps required to switch between tools. The app features three layers with blend modes, opacity, and opacity-locking settings. You can then export your work as a PNG, PSD, or the proprietary ISImage file format, and as individual layer files.

Inspire Pro: from Canada-based SnowCanoe, has 60 high-quality brushes to choose from, divided into six sets: oil paint, airbrushes, basic shapes, graphite pencils, wax crayons and markers. These can all be used as a wet or dry brush or eraser to create fast and realistic paintings, drawings and sketches. It has a dynamic colour picker, and adjusting the paint load and customising brushes (by rotating the bristle pattern) becomes second nature. Add a subtle blur, use Canvas Playback to watch your paintings unfold, use dual-textured brushes, and customise your canvas size.

Inspire pro

Medibang Paint

MediBang Paint includes many different creative tools for illustrators and comic book artists, including numerous brushes, screentones and backgrounds, cloud fonts and comic creation tools. Finally, registering on MediBang’s site for free gives users access to cloud storage so they can easily manage, back up and share their work.

Medibang Paint

Paper 53: quick sketches for a selection of virtual journals, with pages to thumb through for easy viewing. Has diagramming and note-taking tools in addition to the standard creative tool suite it’s always had. Tools are a watercolor brush, calligraphy pen, pencil, marker, ballpoint pen, eraser, paint roller, scissors, and a ruler. You can import or take pictures, and mark them up with text or drawings. Upload to FiftyThree’s creative community Mix.

Paper 53

Photoshop Sketch:  features 14 tools, including a graphite pencil, ink pen and watercolour brushes, with adjustable size, colour, opacity and blending settings. You can layer and rearrange your images, use perspective and graph grids to help align your creations. Export your work to Illustrator or Photoshop CC.

Photoshop Sketch

Pixelmator: to enhance or touch up photography, or paint detailed, layered images from scratch. It has more than 90 brushes (including double-texture brushes), watercolours, and the pixel brush. Graphic design features include blending layers, shapes and text, whilst adding features like shadows, outlines and gradient fills and a range of effects including kaleidoscope.


Procreate: professional software from Savage Interactive, winner of an Apple Design Award and an App Store Essential. It has a built-in brush editor for creating custom brushes, which enables you to define brush shape and grain. On iPad Pro it can go up to 16K resolution with 64-bit color, and you can export your artwork as PSD, PNG, JPG, or Procreate files. The app also lets you record videos of your art and helps you build a portfolio and share your work, if you so desire.

YouTube Procreate Tutorials

Procreate experiments

SketchBook (Autodesk) combines raster and vector features. It has a wide range of digital pencils, pens, markers, and airbrushes, with the ability to pin toolbars to the screen for easy access. It has text, distort and shape features, together with image import and video export.


SketchBook Ink’s preset brushes aren’t editable apart from their size, and there are no layers (besides the option to add a photo as a background layer), but it offers very high output resolution. You can export images to iTunes at up to 101.5 megapixels (8727 pixels x 11636 pixels) or to your Photos app at up to 4096 pixels x 3072 pixels. Although exported files are flat PNGs, not editable vector files, the images are still very high quality.

Sketch Club has a community of artists with whom you can share your art. You’ll also be able to comment on everyone’s work and get inspiration. It lets you create 64 layers in total, and canvases that are up to 4K in resolution. With a wide selection of brushes and vector tools, this is a full-fledged drawing app. It has full support for the Apple Pencil and the ability to record in 1080p.

Tayasui Sketches Pro: eight brushes along the left-hand side, with pencil, rotring pen, watercolor brush, felt pen and eraser for free, and more brushes, size, shape and blend controls, and paper types to buy. You can import photos, too, if you want. You can organize your sketches and creations into different notebooks in the app.

Sketches Pro

Zen Brush: traditional Japanese calligraphy brushes. A Gallery feature enables you to save your work in progress, and an ink dispersion effect gives your drawings an added feeling of depth. Uses black and red ink.

Zen Brush


Yang Yong Liang

Yang Yongliang is a Chinese contemporary artist. As a young student, he studied traditional Chinese painting and calligraphy before attending the Shanghai Art & Design Academy, where he specialized in decoration and design beginning in 1996. (Wikipedia)

He produces very atmospheric animated digital paintings that overlay traditional Chinese landscapes with modern-day scenes.

website: http://www.yangyongliang.com/Works.html 

This website has good coverage of his videos and other work.

Relevance to my practice:

Stylistic inspiration for  3.2 The Bamboocutter

I had hoped to animate some of my images from Aldeburgh using similar techniques. This is now postponed to VisCom Level 3.

Colour Management


Cambridge in Colour: Colour Management and Printing series

Underlying concepts and principles: Human Perception; Bit Depth; Basics of digital cameras: pixels

Color Management from camera to display Part 1: Concept and Overview; Part 2: Color Spaces; Part 3: Color Space Conversion; Understanding Gamma Correction

Bit Depth

Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue – each often referred to as a “color channel”. Bit depth quantifies how many unique colors are available in an image’s color palette in terms of the number of 0’s and 1’s, or “bits,” which are used to specify each color channel (bpc) or each pixel (bpp). Images with higher bit depths can encode more shades or colors – or intensity values – since there are more combinations of 0’s and 1’s available.

Most color images from digital cameras have 8 bits per channel and so can use a total of eight 0’s and 1’s. This allows for 2^8 or 256 different combinations – translating into 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 2^(8×3) or 16,777,216 different colors, or “true color.” This is referred to as 24 bits per pixel since each pixel is composed of three 8-bit color channels. The number of colors available for any X-bit image is just 2^X if X refers to the bits per pixel, and 2^(3X) if X refers to the bits per channel. The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.

Bits Per Pixel Number of Colors Available Common Name(s)
1 2 Monochrome
2 4 CGA
4 16 EGA
8 256 VGA
16 65536 XGA, High Color
24 16777216 SVGA, True Color
32 16777216 + Transparency  
48 281 Trillion  
  • The human eye can only discern about 10 million different colors, so saving an image in any more than 24 bpp is excessive if the only intended purpose is for viewing. On the other hand, images with more than 24 bpp are still quite useful since they hold up better under post-processing (see “Posterization Tutorial“).
  • Color gradations in images with less than 8-bits per color channel can be clearly seen in the image histogram.
  • The available bit depth settings depend on the file type. Standard JPEG and TIFF files can only use 8-bits and 16-bits per channel, respectively.
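The per-channel and per-pixel formulas above can be sketched in a few lines of Python (a minimal illustration, not tied to any particular imaging library):

```python
# Number of distinct levels per channel, and colors per pixel, for a given
# bit depth: each extra bit doubles the number of combinations of 0's and 1's.

def levels_per_channel(bits_per_channel):
    """Intensity values available in one color channel: 2^bits."""
    return 2 ** bits_per_channel

def colors_per_pixel(bits_per_channel, channels=3):
    """Total colors when all channels are combined at each pixel: 2^(bits * channels)."""
    return 2 ** (bits_per_channel * channels)

print(levels_per_channel(8))   # 256 intensity values per primary color
print(colors_per_pixel(8))     # 16777216 -- 24 bpp "true color"
```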


The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion — particularly for those who are either considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.



“Color management” is a process where the color characteristics of every device in the imaging chain are known precisely and utilized in color reproduction. It often occurs behind the scenes and doesn’t require any intervention, but when color problems arise, understanding this process can be critical.

In digital photography, this imaging chain usually starts with the camera and concludes with the final print, and may include a display device in between:

digital imaging chain

Many other imaging chains exist, but in general, any device which attempts to reproduce color can benefit from color management. For example, with photography it is often critical that your prints or online gallery appear how they were intended. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur.


Color reproduction has a fundamental problem: a given “color number” doesn’t necessarily produce the same color in all devices. We use an example of spiciness to convey both why this creates a problem, and how it is managed.

Let’s say that you’re at a restaurant and are about to order a spicy dish. Although you enjoy spiciness, your taste buds are quite sensitive, so you want to be careful that you specify a pleasurable amount. The dilemma is this: simply saying “medium” might convey one level of spice to a cook in Thailand, and a completely different level to someone from England. Restaurants could standardize this based on the number of peppers included in the dish, but this alone wouldn’t be sufficient. Spice also depends on how sensitive the taster is to each pepper:

calibration table

To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at restaurants which specifies that 3 equals “mild,” 5 equals “medium,” and so on (assuming that all peppers are the same). Next time, when you visit a restaurant and say “medium,” the waiter could look at your personal table and translate this into a standardized concentration of peppers. This waiter could then go to the cook and say to make the dish “extra mild,” knowing all too well what this concentration of peppers would actually mean to the cook.

As a whole, this process involved (1) characterizing each person’s sensitivity to spice, (2) standardizing this spice based on a concentration of peppers, and (3) being able to collectively use this information to translate the “medium” value from one person into an “extra mild” value for another. These same three principles are used to manage color.


A device’s color response is characterized similarly to how the personalized spiciness table was created in the above example. Various numbers are sent to this device, and its output is measured in each instance:

[Table: a series of input numbers (green channel) sent to Device 1 and Device 2, with the measured output color for each]

Real-world color profiles include all three colors, more values, and are usually more sophisticated than the above table — but the same core principles apply. However, just as with the spiciness example, a profile on its own is insufficient. These profiles have to be recorded in relation to standardized reference colors, and you need color-aware software that can use these profiles to translate color between devices.


Putting it all together, the following diagram shows how these concepts might apply when converting color between a display device and a printer:

[Diagram: input device (display, additive RGB colors, color profile/color space) → CMM translation → Profile Connection Space → CMM translation → output device (printer, subtractive CMYK colors, color profile/color space)]
  1. Characterize. Every color-managed device requires a personalized table, or “color profile,” which characterizes the color response of that particular device.
  2. Standardize. Each color profile describes these colors relative to a standardized set of reference colors (the “Profile Connection Space”).
  3. Translate. Color-managed software then uses these standardized profiles to translate color from one device to another. This is usually performed by a color management module (CMM).

The above color management system was standardized by the International Color Consortium (ICC), and is now used in most computers. It involves several key concepts: color profiles (discussed above), color spaces, and translation between color spaces.

Color Space. This is just a way of referring to the collection of colors/shades that are described by a particular color profile. Put another way, it describes the set of all realizable color combinations. Color spaces are therefore useful tools for understanding the color compatibility between two different devices. See the tutorial on color spaces for more on this topic.

Profile Connection Space (PCS). This is a color space that serves as a standardized reference (a “reference space”), since it is independent of any particular device’s characteristics. The PCS is usually the set of all visible colors defined by the Commission Internationale de l’Éclairage (CIE) and used by the ICC.

Note: The thin trapezoidal region drawn within the PCS is what is called a “working space.” The working space is used in image editing programs (such as Adobe Photoshop), and defines the subset of colors available to work with when performing any image editing.

Color Translation. The color management module (CMM) is the workhorse of color management, and is what performs all the calculations needed to translate from one color space into another. Contrary to previous examples, this is rarely a clean and simple process. For example, what if the printer weren’t capable of producing as intense a color as the display device? This is called a “gamut mismatch,” and would mean that accurate reproduction is impossible. In such cases the CMM therefore just has to aim for the best approximation that it can. See the tutorial on color space conversion for more on this topic.


Gamma is an important but seldom understood characteristic of virtually all digital imaging systems. It defines the relationship between a pixel’s numerical value and its actual luminance. Without gamma, shades captured by digital cameras wouldn’t appear as they did to our eyes (on a standard monitor). It’s also referred to as gamma correction, gamma encoding or gamma compression, but these all refer to a similar concept. Understanding how gamma works can improve one’s exposure technique, in addition to helping one make the most of image editing.


1. Our eyes do not perceive light the way cameras do. With a digital camera, when twice the number of photons hit the sensor, it receives twice the signal (a “linear” relationship). Pretty logical, right? That’s not how our eyes work. Instead, we perceive twice the light as being only a fraction brighter — and increasingly so for higher light intensities (a “nonlinear” relationship).

[Chart: linear vs. nonlinear gamma, cameras vs. human eyes – a reference tone, the tone perceived as 50% as bright by our eyes, and the tone detected as 50% as bright by the camera]

Refer to the tutorial on the photoshop curves tool if you’re having trouble interpreting the graph.
Accuracy of comparison depends on having a well-calibrated monitor set to a display gamma of 2.2.
Actual perception will depend on viewing conditions, and may be affected by other nearby tones.
For extremely dim scenes, such as under starlight, our eyes begin to see linearly like cameras do.

Compared to a camera, we are much more sensitive to changes in dark tones than we are to similar changes in bright tones. There’s a biological reason for this peculiarity: it enables our vision to operate over a broader range of luminance. Otherwise the typical range in brightness we encounter outdoors would be too overwhelming.

But how does all of this relate to gamma? In this case, gamma is what translates between our eye’s light sensitivity and that of the camera. When a digital image is saved, it’s therefore “gamma encoded” — so that twice the value in a file more closely corresponds to what we would perceive as being twice as bright.

Technical Note: Gamma is defined by V_out = V_in^gamma, where V_out is the output luminance value and V_in is the input/actual luminance value. This formula causes the blue line above to curve. When gamma < 1, the line arches upward, whereas the opposite occurs with gamma > 1.
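As a quick check of this power law, here is a minimal Python sketch with luminance values normalised to the 0–1 range:

```python
# V_out = V_in ** gamma, applied to luminance values in [0, 1].
# An encoding gamma < 1 lifts mid-tones; a display gamma > 1 undoes it.

def apply_gamma(v_in, gamma):
    return v_in ** gamma

encoded = apply_gamma(0.5, 1 / 2.2)   # mid-grey is stored as roughly 0.73
decoded = apply_gamma(encoded, 2.2)   # the display gamma restores ~0.5
```

Applying the encoding gamma and then its reciprocal returns the original value, which is exactly the file-gamma/display-gamma pairing described later in this tutorial.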

2. Gamma encoded images store tones more efficiently. Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describe the brighter tones (where the camera is relatively more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is relatively less sensitive):

[Illustration: a smooth 8-bit gradient (256 levels) re-encoded using only 32 levels (5 bits), shown both linearly encoded and gamma encoded]

Note: Above gamma encoded gradient shown using a standard value of 1/2.2
See the tutorial on bit depth for a background on the relationship between levels and bits.

Notice how the linear encoding uses insufficient levels to describe the dark tones – even though this leads to an excess of levels to describe the bright tones. On the other hand, the gamma encoded gradient distributes the tones roughly evenly across the entire range (“perceptually uniform”). This also ensures that subsequent image editing, color and histograms are all based on natural, perceptually uniform tones.

However, real-world images typically have at least 256 levels (8 bits), which is enough to make tones appear smooth and continuous in a print. If linear encoding were used instead, 8X as many levels (11 bits) would’ve been required to avoid image posterization.


Despite all of these benefits, gamma encoding adds a layer of complexity to the whole process of recording and displaying images. The next step is where most people get confused, so take this part slowly. A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:

[Diagram: RAW camera image saved as a JPEG file (1. image file gamma), JPEG viewed on a computer monitor (2. display gamma), and the net effect (3. system gamma): image file gamma + display gamma = system gamma]

1. Depicts an image in the sRGB color space (which encodes using a gamma of approx. 1/2.2).
2. Depicts a display gamma equal to the standard of 2.2

1. Image Gamma. This is applied either by your camera or RAW development software whenever a captured image is converted into a standard JPEG or TIFF file. It redistributes native camera tonal levels into ones which are more perceptually uniform, thereby making the most efficient use of a given bit depth.

2. Display Gamma. This refers to the net influence of your video card and display device, so it may in fact be comprised of several gammas. The main purpose of the display gamma is to compensate for a file’s gamma — thereby ensuring that the image isn’t unrealistically brightened when displayed on your screen. A higher display gamma results in a darker image with greater contrast.

3. System Gamma. This represents the net effect of all gamma values that have been applied to an image, and is also referred to as the “viewing gamma.” For faithful reproduction of a scene, this should ideally be close to a straight line (gamma = 1.0). A straight line ensures that the input (the original scene) is the same as the output (the light displayed on your screen or in a print). However, the system gamma is sometimes set slightly greater than 1.0 in order to improve contrast. This can help compensate for limitations due to the dynamic range of a display device, or due to non-ideal viewing conditions and image flare.
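Because applying two gamma curves in sequence multiplies their exponents, the “file gamma + display gamma = system gamma” relationship in the diagram works out as a simple product. A hedged sketch using the standard 1/2.2 and 2.2 values:

```python
# (v ** g1) ** g2 == v ** (g1 * g2): successive gamma curves multiply,
# so a file gamma of 1/2.2 followed by a display gamma of 2.2
# yields a system gamma of 1.0 -- the straight line described above.

file_gamma = 1 / 2.2      # typical image encoding gamma (sRGB approx.)
display_gamma = 2.2       # standard display gamma

system_gamma = file_gamma * display_gamma   # 1.0: faithful reproduction

v = 0.25                  # any scene luminance in [0, 1]
round_trip = (v ** file_gamma) ** display_gamma   # comes back to ~0.25
```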


The precise image gamma is usually specified by a color profile that is embedded within the file. Most image files use an encoding gamma of 1/2.2 (such as those using sRGB and Adobe RGB 1998 color), but the big exception is with RAW files, which use a linear gamma. However, RAW image viewers typically show these presuming a standard encoding gamma of 1/2.2, since they would otherwise appear too dark:

[Comparison: linear RAW image (image gamma = 1.0) vs. gamma encoded sRGB image (image gamma = 1/2.2)]

If no color profile is embedded, then a standard gamma of 1/2.2 is usually assumed. Files without an embedded color profile typically include many PNG and GIF files, in addition to some JPEG images that were created using a “save for the web” setting.

Technical Note on Camera Gamma. Most digital cameras record light linearly, so their gamma is assumed to be 1.0, but near the extreme shadows and highlights this may not hold true. In that case, the file gamma may represent a combination of the encoding gamma and the camera’s gamma. However, the camera’s gamma is usually negligible by comparison. Camera manufacturers might also apply subtle tonal curves, which can also impact a file’s gamma.


This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros/cons of different values. Older Macintosh computers used a display gamma of 1.8, which made non-Mac images appear brighter relative to a typical PC, but this is no longer the case.

Recall that the display gamma compensates for the image file’s gamma, and that the net result of this compensation is the system/overall gamma. For a standard gamma encoded image file, changing the display gamma will therefore have the following overall impact on an image:

[Charts: gamma curves for display gammas of 1.0, 1.8, 2.2 and 4.0, each with the resulting portrait image]

Diagrams assume that your display has been calibrated to a standard gamma of 2.2.
Recall from before that the image file gamma combined with the display gamma gives the overall system gamma. Also note how higher gamma values cause the red curve to bend downward.

If you’re having trouble following the above charts, don’t despair! It’s a good idea to first have an understanding of how tonal curves impact image brightness and contrast. Otherwise you can just look at the portrait images for a qualitative understanding.

How to interpret the charts. The first picture (far left) gets brightened substantially because the image gamma is uncorrected by the display gamma, resulting in an overall system gamma that curves upward. In the second picture, the display gamma doesn’t fully correct for the image file gamma, resulting in an overall system gamma that still curves upward a little (and therefore still brightens the image slightly). In the third picture, the display gamma exactly corrects the image gamma, resulting in an overall linear system gamma. Finally, in the fourth picture the display gamma over-compensates for the image gamma, resulting in an overall system gamma that curves downward (thereby darkening the image).

The overall display gamma is actually comprised of (i) the native monitor/LCD gamma and (ii) any gamma corrections applied within the display itself or by the video card. However, the effect of each is highly dependent on the type of display device.


CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.

LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRTs. LCDs therefore require something called a look-up table (LUT) in order to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.

Technical Note: The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. However, the values given for each are not always equivalent. Gamma correction is sometimes specified in terms of the encoding gamma that it aims to compensate for — not the actual gamma that is applied. For example, the actual gamma applied with a “gamma correction of 1.5” is often equal to 1/1.5, since a gamma of 1/1.5 cancels a gamma of 1.5 (1.5 * 1/1.5 = 1.0). A higher gamma correction value might therefore brighten the image (the opposite of a higher display gamma).
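The technical note above can be made concrete with a short sketch (an illustration added here, not from the original tutorial): a “gamma correction of 1.5” often means applying an actual exponent of 1/1.5, which exactly cancels an encoding gamma of 1.5.

```python
# "Gamma correction of 1.5" is often stated in terms of the encoding gamma it
# compensates for; the exponent actually applied is the reciprocal, 1/1.5.

def encode(v, gamma=1.5):
    return v ** gamma

def gamma_correct(v, stated_correction=1.5):
    # The applied exponent is the reciprocal of the stated value.
    return v ** (1 / stated_correction)

v = 0.3
restored = gamma_correct(encode(v))
print(round(restored, 6))  # 0.3 -- the exponents cancel: 1.5 * (1/1.5) = 1.0
```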


Other important points and clarifications are listed below.

  • Dynamic Range. In addition to ensuring the efficient use of image data, gamma encoding actually increases the recordable dynamic range for a given bit depth. Gamma can sometimes also help a display/printer manage its limited dynamic range (compared to the original scene) by improving image contrast.
  • Gamma Correction. The term “gamma correction” is really just a catch-all phrase for when gamma is applied to offset some other earlier gamma. One should therefore probably avoid using this term if the specific gamma type can be referred to instead.
  • Gamma Compression & Expansion. These terms refer to situations where the gamma being applied is less than or greater than one, respectively. A file gamma could therefore be considered gamma compression, whereas a display gamma could be considered gamma expansion.
  • Applicability. Strictly speaking, gamma refers to a tonal curve which follows a simple power law (where V_out = V_in^gamma), but it’s often used to describe other tonal curves. For example, the sRGB color space is actually linear at very low luminosity, but then follows a curve at higher luminosity values. Neither the curve nor the linear region follow a standard gamma power law, but the overall gamma is approximated as 2.2.
  • Is Gamma Required? No, linear gamma (RAW) images would still appear as our eyes saw them — but only if these images were shown on a linear gamma display. However, this would negate gamma’s ability to efficiently record tonal levels.



Color can only exist when three components are present: a viewer, an object, and light. Although pure white light is perceived as colorless, it actually contains all colors in the visible spectrum. When white light hits an object, the object selectively absorbs some colors and reflects others; only the reflected colors contribute to the viewer’s perception of color.

Prism: White Light and the Visible Spectrum
Human Vision

The human eye senses this spectrum using a combination of rod and cone cells. Rod cells are better for low-light vision, but can only sense the intensity of light, whereas cone cells can also discern color but function best in bright light.

Three types of cone cells exist in your eye, with each being more sensitive to either short (S), medium (M), or long (L) wavelength light. The set of signals possible at all three cone cells describes the range of colors we can see with our eyes. The diagram below illustrates the relative sensitivity of each type of cell for the entire visible spectrum. These curves are often also referred to as the “tristimulus functions.”


Raw data courtesy of the Colour and Vision Research Laboratories (CVRL), UCL.

Note how each type of cell does not sense just one color, but instead has varying degrees of sensitivity across a broad range of wavelengths. Also note how human color perception is most sensitive to light in the yellow-green region of the spectrum; this is utilized by the Bayer array in modern digital cameras.
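The idea behind the sensitivity curves can be sketched as follows. This is purely illustrative: each cone’s response is its sensitivity curve weighted by the light’s spectrum and summed over wavelength, but the Gaussian curves and peak wavelengths below are hypothetical stand-ins, not the measured CVRL data.

```python
# Illustrative sketch: tristimulus responses as spectrum-weighted sums of
# hypothetical (Gaussian) cone sensitivity curves.
import math

def gaussian(wavelength, peak, width=40.0):
    return math.exp(-((wavelength - peak) / width) ** 2)

# Hypothetical peak sensitivities (nm) for S, M and L cones
PEAKS = {"S": 440, "M": 540, "L": 570}

def cone_responses(spectrum):
    """spectrum: {wavelength_nm: intensity}; returns one response per cone type."""
    return {
        cone: sum(gaussian(wl, peak) * power for wl, power in spectrum.items())
        for cone, peak in PEAKS.items()
    }

# A bluish light: most of its power sits at short wavelengths
bluish = {450: 1.0, 550: 0.2, 650: 0.1}
responses = cone_responses(bluish)
print(max(responses, key=responses.get))  # S cones respond most strongly
```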


Virtually all our visible colors can be produced by utilizing some combination of the three primary colors, either by additive or subtractive processes. Additive processes create color by adding light to a dark background, whereas subtractive processes use pigments or dyes to selectively block white light. A proper understanding of each of these processes creates the basis for understanding color reproduction.

Additive Primary Colors
Subtractive Primary Colors

The colors in the three outer circles are termed primary colors, and are different in each of the above diagrams. Devices which use these primary colors can produce the maximum range of color. Monitors release light to produce additive colors, whereas printers use pigments or dyes to absorb light and create subtractive colors. This is why nearly all monitors use a combination of red, green and blue (RGB) pixels, whereas most color printers use at least cyan, magenta and yellow (CMY) inks. Many printers also include black ink in addition to cyan, magenta and yellow (CMYK) because CMY alone cannot produce deep enough shadows.

Additive Color Mixing (RGB Color)      Subtractive Color Mixing (CMYK Color)
Red + Green = Yellow                   Cyan + Magenta = Blue
Green + Blue = Cyan                    Magenta + Yellow = Red
Blue + Red = Magenta                   Yellow + Cyan = Green
Red + Green + Blue = White             Cyan + Magenta + Yellow = Black
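The mixing table above can be sketched in code. This is an added illustration, not from the original tutorial: additive mixing sums the light from each primary, while subtractive mixing multiplies the fraction of white light that each ink lets through.

```python
# Colors as RGB triples in the 0-1 range. Additive mixing adds light;
# subtractive mixing multiplies transmittances (each CMY ink passes the
# complement of the primary it absorbs).

def additive(a, b):
    return tuple(min(1.0, x + y) for x, y in zip(a, b))

def subtractive(a, b):
    return tuple(x * y for x, y in zip(a, b))

RED, GREEN, BLUE = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
CYAN, MAGENTA, YELLOW = (0.0, 1.0, 1.0), (1.0, 0.0, 1.0), (1.0, 1.0, 0.0)

print(additive(RED, GREEN))        # (1.0, 1.0, 0.0) -> yellow
print(subtractive(CYAN, MAGENTA))  # (0.0, 0.0, 1.0) -> blue
```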

Subtractive processes are more susceptible to changes in ambient light, because this light is what becomes selectively blocked to produce all their colors. This is why printed color processes require a specific type of ambient lighting in order to accurately depict colors.


Color has two unique components that set it apart from achromatic light: hue and saturation. Visually describing a color based on each of these terms can be highly subjective; however, each can be illustrated more objectively by inspecting the light’s color spectrum.

Naturally occurring colors are not just light at one wavelength, but actually contain a whole range of wavelengths. A color’s “hue” describes which wavelength appears to be most dominant. The object whose spectrum is shown below would likely be perceived as bluish, even though it contains wavelengths throughout the spectrum.

Color Hue
Visible Spectrum

Although this spectrum’s maximum happens to occur in the same region as the object’s hue, it is not a requirement. If this object instead had separate and pronounced peaks in just the red and green regions, then its hue would instead be yellow (see the additive color mixing table).

A color’s saturation is a measure of its purity. A highly saturated color will contain a very narrow set of wavelengths and appear much more pronounced than a similar, but less saturated color. The following example illustrates the spectrum for both a highly saturated and less saturated shade of blue.


Spectral Curves for Low and High Saturation Color
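The hue/saturation distinction can also be sketched with the standard library’s colorsys module (an added illustration, not part of the original tutorial): a color strongly dominated by one channel has high HSV saturation, while diluting it toward gray lowers saturation without changing the hue.

```python
# Hue stays fixed while saturation (purity) drops as the other channels rise.
import colorsys

saturated_blue = (0.1, 0.1, 0.9)    # strongly dominated by the blue channel
desaturated_blue = (0.5, 0.5, 0.9)  # same hue, diluted toward gray

for rgb in (saturated_blue, desaturated_blue):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"hue={h:.2f} saturation={s:.2f}")
```

This mirrors the spectral picture above: the purer color corresponds to a narrower band of wavelengths.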


The Brief

All illustration work needs a brief:

  • as a starting point to help establish the nature of the work
  • to provide guidance and boundaries to be creative within
  • to help frame the work in a context, for example its audience.

Even in self-directed work, a brief establishing some limitations helps the creative process by preventing it from rushing off in all directions, though there is more flexibility to redraft the brief as the work progresses.

Content of a brief

  • What – you are being asked to do
  • Why – what the illustrations are trying to achieve, e.g. innovative/creative or functional
  • Who – your target audience, or other relevant contextual information, e.g. how the client hopes to influence them
  • How – the form the illustrations will take: print size, media, colours, etc.
  • When – deadlines and milestones, e.g. thumbnails, rough draft, completion

The Rationale

A rationale is a short statement that helps to articulate your thinking, both to yourself and to the client. It:

  • shows you understand the brief (a client-led brief will usually contain the keywords that identify what will make the client happy)
  • outlines how you propose to answer the brief
  • outlines why you propose to answer it this way, justifying why those ideas are suitable for the brief you have been given.

Answering a brief is rarely a linear process. You may develop a number of rationales while working on a brief, each picking up on different ideas and directions. You will often have to go back to the drawing board, re-read the brief and start again. Writing a rationale is a useful way of documenting this, as you never know if you’ll need to go back to an early idea. In fact, the more twists and turns the better, because it shows that you are really pushing your ideas and illustrations.

Self-directed briefs

Tom Phillips A Humument http://humument.com

Competition Awards

D&AD Student Awards

RSA Student Design Awards

SketchBook Pro

I have used this quite a lot, but have not yet sorted all my images out. I find it best for sketching and portraits.

SketchBook (Autodesk) is one of the earliest iPad apps, but has been kept up to date. From the images available on the web, it is obvious it can produce very professional illustrations. It combines raster and vector features with:

  • a wide range of digital pencils, pens, markers and airbrushes, plus the ability to pin toolbars to the screen for easy access
  • text in all installed fonts that can be manipulated and distorted
  • basic rectangles, circles and vector lines with a variety of mirror functions
  • image import and video export.

YouTube Tutorials

Better for comic inking