“A picture is worth a thousand words.” This common English idiom was originally coined to illustrate how important images are in advertising. Nowadays, it also holds true for every scientist. Pictures prove what we describe: there are hardly any papers without a picture of animals, tissues, cells, western blots, or agarose gels. However, the trend is no longer to show just one picture but to quantify a thousand pictures to confirm that a result is reproducible – science arrives at “a thousand pictures are worth a single picture”.
A new series on “reading” images
Pictures tell stories, and reading them is one of the major tools of scientists. However, only a few of us have ever taken a course on how to read images. So what does “reading” mean, and how do we do it? Reading means quantification, and to this end we employ computers. In this new mini-series on Microscopy And Digital Images of the ImmunoSensation Blog, we want to cover the basics of imaging and extend to specialized imaging techniques such as two-photon imaging or light-sheet microscopy. We want to help you perform better in your next image-based experiment.
An image is nothing but a huge table of numbers
A digital image is nothing but a huge table of numbers, and this is what we want to read. Every pixel in a digital image has a number. When a machine, e.g. a microscope, captures an image, it assigns a number to each pixel position. In most cases, this number corresponds to the intensity detected at that position. However, the numbers that can be assigned are limited, depending on the “type” of the image. There are 8-bit, 16-bit, and RGB images. But what is actually the difference?
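To make the “table of numbers” idea concrete, here is a minimal sketch in Python with NumPy (the array and its values are invented for illustration, not taken from the article): a tiny 3 × 3 grayscale “image” where each cell holds one intensity value.

```python
import numpy as np

# A digital image is just a table of numbers: here a tiny 3 x 3 "image"
# where every cell is one pixel's intensity value (made-up numbers).
image = np.array([
    [0,   60, 120],
    [60, 180, 200],
    [120, 200, 255],
], dtype=np.uint8)

print(image[1, 1])               # intensity at row 1, column 1 -> 180
print(image.min(), image.max())  # darkest and brightest pixel -> 0 255
```

Reading an image, in the quantitative sense, means asking questions about exactly this table: minima, maxima, sums, or intensities within a region.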
The image type defines the size of each cell in the table
The image type describes the architecture of the table in which all the numbers are stored. For every cell of the table, i.e. for every pixel in the image, the computer reserves a fixed amount of memory. And computers count in binary code – they translate every number into a string of 0 and 1 characters (bits). In an 8-bit image, the computer reserves eight bits for every pixel. While the number 0 gets the code 00000000, the number 42 gets the code 00101010. However, with 8 bits, only 256 different numbers can be encoded – by the way, the number of combinations follows from the number of bits as 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 2^8 = 256 possibilities (Editorial note: “Feeling back in math class?”). In conclusion, each pixel of an 8-bit image can only hold a whole number from 0 to 255. In image terms, you can save only 256 different brightness values in an 8-bit image. This is enough to display 50 shades of gray – but what if you would like to distinguish objects across more than 256 different intensity values?
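The binary codes above can be checked directly in Python, which has a built-in formatter for binary strings – a quick sanity check rather than anything image-specific:

```python
# Each pixel of an 8-bit image occupies eight binary digits (one byte).
print(format(0, "08b"))   # -> 00000000
print(format(42, "08b"))  # -> 00101010

# Eight bits give 2 * 2 * ... * 2 (eight times) possible codes:
print(2 ** 8)             # -> 256, i.e. the values 0..255
```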
The 16-bit image is the big brother of the 8-bit image
For this purpose, the 16-bit image was invented. A 16-bit image offers 2^16 possible numbers (0 – 65535 in steps of 1) at each pixel position and thereby allows more subtle differences between objects to be distinguished. Of course, a 16-bit image needs twice as much memory as an 8-bit image with the same number of pixels. Now you may wonder why you would ever save an image as 8-bit if a 16-bit image can hold more information. The answer is quite simple: as long as you have enough memory on your computer, there is no reason to use an 8-bit image type instead of a 16-bit one. However, common programs for displaying images, e.g. slide-presentation software, are not designed to handle and display 16-bit images, because a 16-bit range is uncommon for images that are not meant for quantification – you will find out when you try.
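The value ranges and the doubled memory footprint can be verified with NumPy, whose `uint8` and `uint16` types correspond to 8-bit and 16-bit pixels (the 512 × 512 image size is an arbitrary example):

```python
import numpy as np

# The same 512 x 512 pixel grid stored at two bit depths.
img8 = np.zeros((512, 512), dtype=np.uint8)
img16 = np.zeros((512, 512), dtype=np.uint16)

print(np.iinfo(np.uint8).max)       # -> 255, largest 8-bit value
print(np.iinfo(np.uint16).max)      # -> 65535, largest 16-bit value
print(img16.nbytes // img8.nbytes)  # -> 2: twice the memory, same pixels
```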
RGB images are R + G + B = 24-bit images
But what, then, are RGB images? A pixel in an RGB image has three numbers: one for red (R), one for green (G), and one for blue (B). Each number covers the 8-bit range. Mixing the three colors makes it possible to display 2^24 = 16,777,216 colors. For the sake of memory, there is no common image type offering 16-bit RGB. However, microscopy image formats or the .tif format allow multiple channels to be saved at a 16-bit range in one image – but most standard software cannot display them. So it is always good to keep a raw image with a high bit range and to also save it as a standard RGB image (e.g. in the .png format) for your presentation slides.
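In array terms, the three color numbers simply add a third dimension to the table. A small NumPy sketch (a made-up 2 × 2 image, not from the article) shows the 3-bytes-per-pixel bookkeeping:

```python
import numpy as np

# A 2 x 2 RGB image: three 8-bit numbers per pixel (R, G, B).
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)    # pure red
rgb[0, 1] = (255, 255, 0)  # red + green mix to yellow

print(rgb.shape)   # -> (2, 2, 3): rows, columns, color channels
print(2 ** 24)     # -> 16777216 displayable colors
print(rgb.nbytes)  # -> 12 bytes: 24 bits (3 bytes) for each of 4 pixels
```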
Don’t go for JPEG in science
At this point you might wonder: why not export the image as a JPEG for my presentation? And what is .png anyway? In principle, using JPEG to show an image in a presentation is not a no-go. However, when it comes to storing an image that you want to quantify later, or in which you want to show fine differences in intensity at high resolution, JPEG is not the way to go. The problem: JPEG is a format designed to save memory by maximizing compression while keeping the loss of information barely noticeable. But minimal loss of information is still loss of information: JPEG compression is not lossless. JPEG algorithms compress the image in such a way that you – with your eye – would not directly see a difference between the compressed and the uncompressed image. But when you zoom in, you will see tiny compression artifacts. This means that saving an image as a JPEG is a way of manipulating the raw pixel intensities. Especially when you open and re-save a JPEG image again, again, and again, the compression artifacts can accumulate. The PNG format compresses the image without loss and without artifacts. Thus, it is better to go PNG than JPEG (unless you do not care about memory and stay with the TIFF format).
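The lossy-versus-lossless difference is easy to demonstrate. The sketch below (assuming the Pillow library is installed; the random-noise “micrograph” is an invented stand-in for real data) saves the same 8-bit image as PNG and as JPEG and compares the pixels that come back:

```python
import io
import numpy as np
from PIL import Image  # Pillow; assumed available

rng = np.random.default_rng(0)
# A small synthetic 8-bit grayscale image (random noise as a stand-in).
raw = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img = Image.fromarray(raw)

def roundtrip(img, fmt):
    """Save the image to an in-memory buffer in the given format and reload it."""
    buf = io.BytesIO()
    img.save(buf, format=fmt)
    buf.seek(0)
    return np.asarray(Image.open(buf))

png_back = roundtrip(img, "PNG")
jpg_back = roundtrip(img, "JPEG")

print("PNG identical: ", np.array_equal(raw, png_back))  # lossless roundtrip
print("JPEG identical:", np.array_equal(raw, jpg_back))  # lossy: pixels changed
```

The PNG roundtrip reproduces every pixel exactly, while the JPEG roundtrip returns altered intensities – exactly the kind of silent manipulation that disqualifies JPEG for quantitative work.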
Did you realize? This short article about digital images was actually about 1000 words in length – A picture is worth a thousand words.
Author: Jan Hansen