The word pixel is a contraction of pix (from “pictures”) and el (for “element”). In digital imaging, a pixel is a physical point in a raster image, or the smallest addressable element in a display device.
A pixel is generally thought of as the smallest single component of a digital image.
When one talks about resolution, one might be referring to any of the following:
i. Device resolution
The resolution of an output device such as a monitor or printer is measured by the number of individual dots that can be placed in a line within the span of 1 inch (2.54 cm), or in short, dots per inch (DPI).
ii. Screen resolution
The display resolution of a digital television or display device is the number of distinct pixels in each dimension that can be displayed.
Pixel count of width by height
When pixel counts are referred to as resolution, the convention is to describe the pixel resolution as a pair of positive integers, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height), for example 640 by 480.
For example, we can adjust our PC screen resolution to 1280 × 720.
Total number of pixels in one image
Another popular convention is to cite resolution as the total number of pixels in the image, typically given as number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million.
For example, the resolution of a camera sensor is often quoted in megapixels (MP) rather than as a raw pixel count.
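The megapixel calculation described above is a simple product. A minimal sketch in Python (the function name and example dimensions are illustrative, not from the text):

```python
def megapixels(width: int, height: int) -> float:
    """Total pixel count expressed in megapixels:
    pixel columns times pixel rows, divided by one million."""
    return width * height / 1_000_000

# A hypothetical 3000 x 2000 image contains 6,000,000 pixels, i.e. 6.0 MP.
print(megapixels(3000, 2000))  # → 6.0
```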
iii. Image resolution
For a digital image, pixels per inch (PPI) describes the pixel density per unit of length or area, such as pixels per inch or per square inch.
The higher the PPI, the more detail the image can reproduce at a given physical size.
iv. Scanning resolution
As mentioned, DPI refers to the physical dot density of an image when it is reproduced as a real physical entity, for example when printed onto paper or displayed on a monitor. A digitally stored image has no inherent physical dimensions measured in inches or centimeters, and thus no DPI; its quality is instead described by PPI.
In the case of scanned images, the digital file format records the scan's DPI value, which reflects the size of the original scanned object, as the image's PPI (pixels per inch) value.
The value can be used when printing the image. It lets the printer know the intended size of the image.
For example, a bitmap image may measure 1,000 × 1,000 pixels, a resolution of 1 megapixel. If it is labeled as 250 PPI, that is an instruction to the printer to print it at a size of 4 × 4 inches. Changing the PPI value would not change the size of the image in pixels, which would still be 1,000 × 1,000.
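The relationship between pixel dimensions, a PPI label, and intended print size can be sketched as follows (a hypothetical helper, using the example values from the text):

```python
def print_size_inches(pixels_w: int, pixels_h: int, ppi: float) -> tuple:
    """Intended physical print size (width, height) in inches
    implied by an image's PPI label."""
    return (pixels_w / ppi, pixels_h / ppi)

# A 1,000 x 1,000 pixel image labeled 250 PPI prints at 4 x 4 inches.
print(print_size_inches(1000, 1000, 250))  # → (4.0, 4.0)
```

Note that changing only `ppi` rescales the print size while the pixel dimensions stay fixed, matching the point made above.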
Bit Resolution, Bit Depth & Pixel Type
Color depth or bit depth is the number of bits used to indicate the color of a single pixel in a bitmapped image; the concept is also known as bits per pixel (bpp). It measures how many colors, or shades of grey, an image can contain, and the number depends on how many bits of data are used to store the color of each pixel.
As the number of bits used to store color information increases, the number of colors that can be represented increases exponentially.
A one-bit image uses a single bit of data for each pixel, producing an image made up of only two colors, usually black and white. An eight-bit (one-byte) image has 256 (2⁸) possible colors, which might be 256 shades of gray in a grayscale image or a limited 256-color palette in an image saved in the GIF file format.
A 24-bit (three-byte) image has a total of 16,777,216 possible colors. Most 24-bit images are made up of three eight-bit channels as part of the RGB color model: 256 shades of red, 256 shades of green, and 256 shades of blue, for a total of 16,777,216 color combinations (256 × 256 × 256, or 2²⁴). A bit depth of 24 bits or higher is also known as Truecolor (called Millions on a Macintosh) because it covers a significant portion of the range of colors visible to the human eye.
Higher color depth gives a broader range of distinct colors.
1-bit color (2¹ = 2 colors) – often black and white
2-bit color (2² = 4 colors) – gray-scale
24-bit color (2²⁴ = 16,777,216 colors) – true color; the human eye is capable of discriminating among as many as ten million colors.
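The exponential relationship between bit depth and color count listed above can be sketched directly (the helper name is illustrative):

```python
def color_count(bit_depth: int) -> int:
    """Number of distinct colors representable at a given bit depth:
    2 raised to the number of bits per pixel."""
    return 2 ** bit_depth

# Reproduces the figures from the list: 1-bit, 2-bit, 8-bit, 24-bit.
for bits in (1, 2, 8, 24):
    print(f"{bits}-bit color: {color_count(bits):,} colors")
```

Running this prints 2, 4, 256, and 16,777,216 colors respectively, matching the values in the text.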